17 Oct, 2007

1 commit


22 May, 2007

1 commit

  • The first thing mm.h does is include sched.h, solely for the can_do_mlock()
    inline function, which dereferences "current". By dealing with can_do_mlock(),
    mm.h can be detached from sched.h, which is good. See below for why.

    This patch
    a) removes the unconditional inclusion of sched.h from mm.h
    b) makes can_do_mlock() a normal function in mm/mlock.c (see the sketch below)
    c) exports can_do_mlock() so compilation doesn't break
    d) adds sched.h inclusions back to files that were getting it only indirectly
    e) adds less bloated headers (asm/signal.h, jiffies.h) to files that were
    getting them only indirectly
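
    A minimal sketch of what (b) and (c) amount to, assuming the 2007-era rlimit
    layout (current->signal->rlim[]); the exact body of can_do_mlock() shown here
    is an approximation, the declaration/definition split is the point:

    /* include/linux/mm.h: only a declaration remains, so sched.h is not needed */
    extern int can_do_mlock(void);

    /* mm/mlock.c: the former inline as a normal, exported function.
       This file may include sched.h, so dereferencing "current" is fine here. */
    int can_do_mlock(void)
    {
            if (capable(CAP_IPC_LOCK))
                    return 1;
            if (current->signal->rlim[RLIMIT_MEMLOCK].rlim_cur != 0)
                    return 1;
            return 0;
    }
    EXPORT_SYMBOL(can_do_mlock);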

    The net result is:
    a) mm.h users get less code to open, read, preprocess, parse, ... if they
    don't need sched.h
    b) sched.h stops being a dependency for a significant number of files:
    on an x86_64 allmodconfig, touching sched.h used to trigger a recompile of
    4083 files; after the patch it's only 3744 (-8.3%).

    Cross-compile tested on

    all arm defconfigs, all mips defconfigs, all powerpc defconfigs,
    alpha alpha-up
    arm
    i386 i386-up i386-defconfig i386-allnoconfig
    ia64 ia64-up
    m68k
    mips
    parisc parisc-up
    powerpc powerpc-up
    s390 s390-up
    sparc sparc-up
    sparc64 sparc64-up
    um-x86_64
    x86_64 x86_64-up x86_64-defconfig x86_64-allnoconfig

    as well as my two usual configs.

    Signed-off-by: Alexey Dobriyan
    Signed-off-by: Linus Torvalds

    Alexey Dobriyan
     

14 Dec, 2006

1 commit

  • Remove useless includes of linux/io.h; don't even try to build iomap_copy
    on uml (it doesn't have readb() et al., so...)

    Signed-off-by: Al Viro
    Acked-by: Jeff Dike
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Al Viro
     

01 Oct, 2006

2 commits

  • The existing implementation of ioremap_page_range(), which was taken
    from i386, does this:

    flush_cache_all();
    /* modify page tables */
    flush_tlb_all();

    I think this is a bit too defensive, so this patch changes the generic
    implementation to do:

    /* modify page tables */
    flush_cache_vmap(start, end);

    instead, which is similar to what vmalloc() does. This should still
    be correct because we never modify existing PTEs. According to
    James Bottomley:

    The problem flush_tlb_all() is trying to solve is avoiding stale TLB entries
    in the ioremap area. We're just being conservative by flushing on both map
    and unmap. Technically, what vmalloc/vfree does (flushing the TLB only on
    unmap) is just fine, because it means that the only TLB entries in the remap
    area must belong to in-use mappings.
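
    For reference, this is roughly how the top-level walk in lib/ioremap.c ends
    up looking with the change; the ioremap_pud_range() helper and the loop
    details are an illustrative reconstruction, not a quote of the actual tree:

    int ioremap_page_range(unsigned long addr, unsigned long end,
                           unsigned long phys_addr, pgprot_t prot)
    {
            pgd_t *pgd;
            unsigned long start = addr;
            unsigned long next;
            int err;

            BUG_ON(addr >= end);

            phys_addr -= addr;
            pgd = pgd_offset_k(addr);
            do {
                    next = pgd_addr_end(addr, end);
                    err = ioremap_pud_range(pgd, addr, next, phys_addr + addr, prot);
                    if (err)
                            break;
            } while (pgd++, addr = next, addr != end);

            /* Only brand-new PTEs were installed, so flushing the cache over the
               mapped range is enough; no flush_cache_all()/flush_tlb_all() pair. */
            flush_cache_vmap(start, end);

            return err;
    }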

    Signed-off-by: Haavard Skinnemoen
    Cc: Richard Henderson
    Cc: Ivan Kokshaysky
    Cc: Russell King
    Cc: Mikael Starvik
    Cc: Andi Kleen
    Cc:
    Cc: Ralf Baechle
    Cc: Kyle McMartin
    Cc: Martin Schwidefsky
    Cc: Paul Mundt
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Haavard Skinnemoen
     
  • This patch adds a generic implementation of ioremap_page_range() in
    lib/ioremap.c based on the i386 implementation. It differs from the
    i386 version in the following ways:

    * The PTE flags are passed as a pgprot_t argument and must be
    determined up front by the arch-specific code. No additional
    PTE flags are added.
    * Uses set_pte_at() instead of set_pte() (see the sketch below)
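
    A sketch of the leaf level of that generic walk, assuming the usual
    ioremap_pte_range() helper name and the pte_alloc_kernel()/pfn_pte()
    primitives; the caller-supplied prot is used as-is and every entry is
    installed with set_pte_at():

    static int ioremap_pte_range(pmd_t *pmd, unsigned long addr,
                                 unsigned long end, unsigned long phys_addr,
                                 pgprot_t prot)
    {
            pte_t *pte;
            unsigned long pfn = phys_addr >> PAGE_SHIFT;

            pte = pte_alloc_kernel(pmd, addr);
            if (!pte)
                    return -ENOMEM;
            do {
                    BUG_ON(!pte_none(*pte));
                    /* No extra PTE flags are or'ed in; prot comes straight from
                       the arch-specific caller. */
                    set_pte_at(&init_mm, addr, pte, pfn_pte(pfn, prot));
                    pfn++;
            } while (pte++, addr += PAGE_SIZE, addr != end);
            return 0;
    }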

    [bunk@stusta.de: warning fix]
    [dhowells@redhat.com: nommu build fix]
    Signed-off-by: Haavard Skinnemoen
    Cc: Richard Henderson
    Cc: Ivan Kokshaysky
    Cc: Russell King
    Cc: Mikael Starvik
    Cc: Andi Kleen
    Cc:
    Cc: Ralf Baechle
    Cc: Kyle McMartin
    Cc: Martin Schwidefsky
    Cc: Paul Mundt
    Signed-off-by: Adrian Bunk
    Signed-off-by: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Haavard Skinnemoen