22 Sep, 2009

40 commits

  • Add reading and using the .mailmap file if it exists
    Convert address entries in .mailmap to the first encountered address
    Don't terminate shell commands with \n
    Strip characters found after "sign-off by: name" lines
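    For reference, a .mailmap entry maps an address as it appears in commits to
    one canonical name and address; an illustrative fragment (names and
    addresses here are made up):

```
# canonical identity                 address as found in commits
Jane Developer <jane@example.org>    <jane@oldcorp.example.com>
Jane Developer <jane@example.org>    <jdev@lists.example.net>
```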

    [stripped]

    Signed-off-by: Joe Perches
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     
  • Added format_email and parse_email routines to reduce inline use.

    Added email_address_inuse to eliminate multiple maintainer entries
    for the same email address; the first name encountered is used.

    Used internal Perl equivalents of the shell pipeline grep|cut|sort|uniq

    Signed-off-by: Joe Perches
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     
  • --pattern-depth is used to control how many levels of directory traversal
    should be performed to find maintainers. The default is 0 (all directory
    levels).

    For instance:

    MAINTAINERS currently has multiple M: and F: entries that match
    net/netfilter/ipvs/ip_vs_app.c

    IPVS
    M: Wensong Zhang
    M: Simon Horman
    M: Julian Anastasov
    [...]
    F: net/netfilter/ipvs/

    NETFILTER/IPTABLES/IPCHAINS
    [...]
    M: Patrick McHardy
    [...]
    F: net/netfilter/

    NETWORKING [GENERAL]
    M: "David S. Miller"
    [...]
    F: net/

    THE REST
    M: Linus Torvalds
    [...]
    F: */

    Using this command will return all of those maintainers:
    (except Linus unless --git-chief-maintainers is specified)

    $ ./scripts/get_maintainer.pl --nogit -nol \
    -f net/netfilter/ipvs/ip_vs_app.c
    Julian Anastasov
    Simon Horman
    Wensong Zhang
    Patrick McHardy
    David S. Miller

    Adding --pattern-depth=1 will match at the deepest level
    $ ./scripts/get_maintainer.pl --nogit -nol --pattern-depth=1 \
    -f net/netfilter/ipvs/ip_vs_app.c
    Julian Anastasov
    Simon Horman
    Wensong Zhang

    Adding --pattern-depth=2 will match at the deepest level and 1 higher
    $ ./scripts/get_maintainer.pl --nogit -nol --pattern-depth=2 \
    -f net/netfilter/ipvs/ip_vs_app.c
    Julian Anastasov
    Simon Horman
    Wensong Zhang
    Patrick McHardy

    and so on.

    Signed-off-by: Joe Perches
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     
  • Before this change, matched sections were added in their order of
    appearance in the (normally alphabetical) MAINTAINERS file.

    For instance, finding the maintainer for drivers/scsi/wd7000.c
    would first find "SCSI SUBSYSTEM", then "WD7000 SCSI SUBSYSTEM",
    then "THE REST".

    before patch:

    $ ./scripts/get_maintainer.pl --nogit -f drivers/scsi/wd7000.c
    James E.J. Bottomley
    Miroslav Zagorac
    linux-scsi@vger.kernel.org
    linux-kernel@vger.kernel.org

    get_maintainer.pl now selects matched sections by longest pattern match.
    "Longest" is measured by the number of "/"s in the pattern, plus any
    specific file pattern.

    This changes the example output order of MAINTAINERS to whatever is
    selected in "WD7000 SCSI SUBSYSTEM", then "SCSI SUBSYSTEM", then
    "THE REST".

    after patch:

    $ ./scripts/get_maintainer.pl --nogit -f drivers/scsi/wd7000.c
    Miroslav Zagorac
    James E.J. Bottomley
    linux-scsi@vger.kernel.org
    linux-kernel@vger.kernel.org
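    The selection rule can be sketched as a simple ranking (an illustrative
    Python model; the script itself is Perl and its exact scoring may differ):

```python
def pattern_depth(pattern: str) -> int:
    """Rank a MAINTAINERS F: pattern: more "/"s means a more specific
    match, and a pattern naming a specific file (no trailing "/")
    ranks above a directory pattern of the same depth."""
    depth = pattern.count("/")
    if not pattern.endswith("/"):      # specific file pattern
        depth += 1
    return depth

# Illustrative patterns that all match drivers/scsi/wd7000.c,
# sorted from most to least specific:
patterns = ["*/", "drivers/scsi/", "drivers/scsi/wd7000.c"]
ranked = sorted(patterns, key=pattern_depth, reverse=True)
print(ranked)   # the deepest match comes first
```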

    Signed-off-by: Joe Perches
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     
  • Julia Lawall suggested that get_maintainer.pl should have the
    ability to include signatories of commits that are modified by
    a particular patch.

    Vegard Nossum did something similar once.
    http://lkml.org/lkml/2008/5/29/449

    The modified script looks up the commits for all lines in the
    patch, and includes the "-by:" signatories for those commits.
    It uses the same git-min-percent, git-max-maintainers, and
    git-min-signatures options. git-since is ignored.

    It can be used independently from the --git default, so
    ./scripts/get_maintainer.pl --nogit --git-blame
    or
    ./scripts/get_maintainer.pl --nogit --git-blame -f
    is acceptable.

    If used with -f, all lines/commits for the file are
    checked.

    --git-blame can be slow if used with -f
    --git-blame does not work when -f is given a directory

    Signed-off-by: Joe Perches
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     
  • Signed-off-by: Hannes Eder
    Cc: Joe Perches
    Cc: "David S. Miller"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hannes Eder
     
  • If pmd_alloc() fails we should only free the prior allocated pud, if
    pte_alloc_map() fails, we should free pmd as well.
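    The fix is the usual staged-allocation unwind; a generic sketch in Python
    standing in for the C code (names are illustrative, not the UML sources):

```python
def alloc_page_tables(alloc_pud, alloc_pmd, alloc_pte, free):
    """Allocate pud -> pmd -> pte; on failure, free everything
    allocated so far (mirrors the fixed logic: a pmd_alloc failure
    frees the pud; a pte_alloc_map failure frees the pmd *and* the
    pud)."""
    pud = alloc_pud()
    if pud is None:
        return None
    pmd = alloc_pmd()
    if pmd is None:
        free(pud)                # unwind level 1
        return None
    pte = alloc_pte()
    if pte is None:
        free(pmd)                # unwind level 2 ...
        free(pud)                # ... and level 1
        return None
    return (pud, pmd, pte)

freed = []
result = alloc_page_tables(lambda: "pud", lambda: "pmd",
                           lambda: None, freed.append)
print(result, freed)   # None ['pmd', 'pud']
```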

    Signed-off-by: Roel Kluin
    Cc: Jeff Dike
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Roel Kluin
     
  • Signed-off-by: Christoph Hellwig
    Cc: Jeff Dike
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Hellwig
     
  • Move the state residency accounting and statistics computation off the hot
    exit path.

    On exit, the need to recompute statistics is recorded, and new statistics
    will be computed when menu_select is called again.

    The expected effect is to reduce processor wakeup latency from sleep
    (C-states). We are speaking of few hundreds of cycles reduction out of a
    several microseconds latency (determined by the hardware transition), so
    it is difficult to measure.
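    The deferral pattern can be sketched as follows (a toy model, not the
    kernel's menu governor; the names are illustrative):

```python
class MenuStats:
    """Exit only records that an update is needed (cheap, O(1));
    the expensive statistics recomputation is deferred until the
    next select call, off the wakeup-latency-critical path."""
    def __init__(self):
        self.needs_update = False
        self.residency_sum = 0
        self.exits = 0
        self._pending = []

    def on_exit(self, residency_us):       # hot path: just record
        self._pending.append(residency_us)
        self.needs_update = True

    def select(self):                      # cold path: settle stats
        if self.needs_update:
            for r in self._pending:
                self.residency_sum += r
                self.exits += 1
            self._pending.clear()
            self.needs_update = False
        return self.residency_sum / self.exits if self.exits else 0

s = MenuStats()
s.on_exit(120)
s.on_exit(80)
print(s.select())   # 100.0 -- average residency, computed lazily
```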

    Signed-off-by: Corrado Zoccolo
    Cc: Venkatesh Pallipadi
    Cc: Len Brown
    Cc: Adam Belay
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Corrado Zoccolo
     
  • Fix the menu idle governor which balances power savings, energy efficiency
    and performance impact.

    The reason for a reworked governor is that there have been serious
    performance issues reported with the existing code on Nehalem server
    systems.

    To show this I'm sure Andrew wants to see benchmark results:
    (benchmark is "fio", "no cstates" is using "idle=poll")

                 no cstates   current linux   new algorithm
    1 disk        107 Mb/s         85 Mb/s        105 Mb/s
    2 disks       215 Mb/s        123 Mb/s        209 Mb/s
    12 disks      590 Mb/s        320 Mb/s        585 Mb/s

    In various power benchmark measurements, no degradation was found by our
    measurement & diagnostics team. Obviously a small percentage more power was
    used in the "fio" benchmark, due to the much higher performance.

    While it would be a novel idea to describe the new algorithm in this
    commit message, I cheaped out and described it in comments in the code
    instead.

    [changes since first post: spelling fixes from akpm, review feedback,
    folded menu-tng into menu.c]

    Signed-off-by: Arjan van de Ven
    Cc: Venkatesh Pallipadi
    Cc: Len Brown
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Cc: Yanmin Zhang
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arjan van de Ven
     
  • Signed-off-by: Christoph Hellwig
    Cc: Geert Uytterhoeven
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Hellwig
     
  • Convert m68k to use GENERIC_TIME via the arch_getoffset() infrastructure,
    reducing the amount of arch specific code we need to maintain.

    I've taken my best swing at converting this, but I'm not 100% confident
    I got it right. My cross-compiler is now out of date (gcc4.2) so I
    wasn't able to check if it compiled. Any assistance from arch
    maintainers or testers to get this merged would be great.

    Signed-off-by: John Stultz
    Cc: Geert Uytterhoeven
    Cc: Roman Zippel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    john stultz
     
  • Signed-off-by: Christoph Hellwig
    Cc: Hirokazu Takata
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Hellwig
     
  • Convert m32r to use GENERIC_TIME via the arch_getoffset() infrastructure,
    reducing the amount of arch specific code we need to maintain.

    I also noted that m32r doesn't seem to be taking the xtime write lock
    before calling do_timer()! That looks like a pretty bad bug to me. If
    folks agree, let me know and I can move the lock grab to the correct spot.

    Signed-off-by: John Stultz
    Cc: Hirokazu Takata
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    john stultz
     
  • `off' and `max_cpus' are unsigned. When assigned a negative value they
    wrap around and are caught by the other (upper-bound) test.
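    The wraparound being relied on can be reproduced with a fixed-width
    unsigned type (here via ctypes; the kernel variables are C unsigned ints):

```python
import ctypes

def in_range(value, limit):
    """Mimic the C check: the variable is unsigned, so a negative
    input wraps to a huge positive value and is rejected by the
    upper-bound test, making an explicit '< 0' test dead code."""
    off = ctypes.c_uint(value).value   # C unsigned-int conversion
    return off < limit

print(ctypes.c_uint(-1).value)   # 4294967295: "negative" wraps
print(in_range(-1, 32))          # False: caught by the other test
print(in_range(5, 32))           # True
```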

    Signed-off-by: Roel Kluin
    Cc: Hirokazu Takata
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Roel Kluin
     
  • Signed-off-by: Christoph Hellwig
    Cc: Richard Henderson
    Cc: Ivan Kokshaysky
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Hellwig
     
  • Signed-off-by: Marcin Slusarz
    Cc: Ivan Kokshaysky
    Cc: Richard Henderson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Marcin Slusarz
     
  • The incorrect variable is tested. fd is used for another open()
    and is already tested.

    Signed-off-by: Roel Kluin
    Cc: Ivan Kokshaysky
    Cc: Richard Henderson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Roel Kluin
     
  • Converts alpha to use GENERIC_TIME via the arch_getoffset()
    infrastructure, reducing the amount of arch specific code we need to
    maintain.

    I suspect the alpha arch could even be further improved to provide an
    rpcc()-based clocksource, but not having the hardware, I don't feel
    comfortable attempting the more complicated conversion (but I'd be glad to
    help if anyone else is interested).

    [akpm@linux-foundation.org: fix build]
    Signed-off-by: John Stultz
    Cc: Richard Henderson
    Cc: Ivan Kokshaysky
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    john stultz
     
  • Signed-off-by: Christoph Hellwig
    Cc: Yoshinori Sato
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Hellwig
     
  • Some architectures (like the Blackfin arch) implement some of the
    "simpler" features that one would expect out of a MMU such as memory
    protection.

    In our case, we actually get read/write/exec protection down to the page
    boundary so processes can't stomp on each other let alone the kernel.

    There is a performance decrease (which depends greatly on the workload)
    however as the hardware/software interaction was not optimized at design
    time.

    Signed-off-by: Bernd Schmidt
    Signed-off-by: Bryan Wu
    Signed-off-by: Mike Frysinger
    Acked-by: David Howells
    Acked-by: Greg Ungerer
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Bernd Schmidt
     
  • Clean up the drivers/pcmcia/sa1100_jornada.c file with respect to
    formatting. It also changes a build warning into a code comment (since
    it's a pain to watch every build and I haven't seen any problems with the
    driver in 3.5 years).

    Signed-off-by: Kristoffer Ericson
    Cc: Dominik Brodowski
    Cc: Greg KH
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kristoffer Ericson
     
  • Signed-off-by: Alexey Dobriyan
    Cc: Dominik Brodowski
    Cc: Greg KH
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexey Dobriyan
     
  • If count > 0 and dev->rlen == dev->rpos and dev->proto == 0, then we read
    and write dev->rbuf[-1].
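    The guard needed is easy to state in a sketch (illustrative Python, not
    the driver's C; note Python's rbuf[-1] aliases the *last* element, which
    hides rather than crashes the same underflow):

```python
def last_byte(rbuf, rpos, rlen, count):
    """Guarded access: only look at rbuf[rpos - 1] when rpos > 0.
    With rlen == rpos == 0 the unguarded C code indexed rbuf[-1],
    one byte *before* the buffer."""
    if count > 0 and rlen == rpos and rpos > 0:
        return rbuf[rpos - 1]
    return None

buf = bytearray(b"\x05\x06")
print(last_byte(buf, rpos=2, rlen=2, count=1))   # 6: last valid byte
print(last_byte(buf, rpos=0, rlen=0, count=1))   # None: guarded
```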

    Signed-off-by: Roel Kluin
    Cc: Harald Welte
    Cc: Dominik Brodowski
    Cc: Greg KH
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Roel Kluin
     
  • The remove member of the pci_driver yenta_cardbus_driver uses
    __devexit_p(), so the remove function itself should be marked with
    __devexit. Even more so considering the probe function is marked with
    __devinit.

    Signed-off-by: Mike Frysinger
    Cc: Daniel Ritz
    Cc: Dominik Brodowski
    Cc: Greg KH
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mike Frysinger
     
  • When the mm being switched to matches the active mm, we don't need to
    increment and then drop the mm count. In a simple benchmark this happens
    about 50% of the time. Making that conditional reduces contention on that
    cacheline on SMP systems.
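    The optimization amounts to a conditional reference grab/drop (a toy
    model, not the kernel's switch_mm; `grabs` stands in for atomic ops on
    the contended cacheline):

```python
class MM:
    """Toy address-space object with a reference count."""
    def __init__(self):
        self.count = 0
        self.grabs = 0          # atomic ops touching the cacheline

    def grab(self):
        self.count += 1
        self.grabs += 1

    def drop(self):
        self.count -= 1
        self.grabs += 1

def switch(active, new):
    """Only touch the refcount when the mm actually changes; in the
    common same-mm case (~50% in the cited benchmark) the grab/drop
    pair is skipped entirely."""
    if new is not active:
        new.grab()
        active.drop()
    return new

a, b = MM(), MM()
a.grab()                        # initially active
active = a
for nxt in (a, a, b, b, a):     # same-mm switches cost nothing
    active = switch(active, nxt)
print(a.grabs + b.grabs)        # 5 ops, vs 11 with unconditional grab/drop
```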

    Acked-by: Andrea Arcangeli
    Signed-off-by: Michael S. Tsirkin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michael S. Tsirkin
     
  • Anyone who wants to do copy to/from user from a kernel thread needs
    use_mm() (like what fs/aio has). Move that into mm/, to make reusing and
    exporting easier down the line, and make aio use it. Next intended user,
    besides aio, will be vhost-net.

    Acked-by: Andrea Arcangeli
    Signed-off-by: Michael S. Tsirkin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michael S. Tsirkin
     
  • Fixes the following kmemcheck false positive (the compiler is using
    a 32-bit mov to load the 16-bit sbinfo->mode in shmem_fill_super):

    [ 0.337000] Total of 1 processors activated (3088.38 BogoMIPS).
    [ 0.352000] CPU0 attaching NULL sched-domain.
    [ 0.360000] WARNING: kmemcheck: Caught 32-bit read from uninitialized memory (9f8020fc)
    [ 0.361000] a44240820000000041f6998100000000000000000000000000000000ff030000
    [ 0.368000] i i i i i i i i i i i i i i i i u u u u i i i i i i i i i i u u
    [ 0.375000] ^
    [ 0.376000]
    [ 0.377000] Pid: 9, comm: khelper Not tainted (2.6.31-tip #206) P4DC6
    [ 0.378000] EIP: 0060:[] EFLAGS: 00010246 CPU: 0
    [ 0.379000] EIP is at shmem_fill_super+0xb5/0x120
    [ 0.380000] EAX: 00000000 EBX: 9f845400 ECX: 824042a4 EDX: 8199f641
    [ 0.381000] ESI: 9f8020c0 EDI: 9f845400 EBP: 9f81af68 ESP: 81cd6eec
    [ 0.382000] DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068
    [ 0.383000] CR0: 8005003b CR2: 9f806200 CR3: 01ccd000 CR4: 000006d0
    [ 0.384000] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
    [ 0.385000] DR6: ffff4ff0 DR7: 00000400
    [ 0.386000] [] get_sb_nodev+0x3c/0x80
    [ 0.388000] [] shmem_get_sb+0x14/0x20
    [ 0.390000] [] vfs_kern_mount+0x4f/0x120
    [ 0.392000] [] init_tmpfs+0x7e/0xb0
    [ 0.394000] [] do_basic_setup+0x17/0x30
    [ 0.396000] [] kernel_init+0x57/0xa0
    [ 0.398000] [] kernel_thread_helper+0x7/0x10
    [ 0.400000] [] 0xffffffff
    [ 0.402000] khelper used greatest stack depth: 2820 bytes left
    [ 0.407000] calling init_mmap_min_addr+0x0/0x10 @ 1
    [ 0.408000] initcall init_mmap_min_addr+0x0/0x10 returned 0 after 0 usecs

    Reported-by: Ingo Molnar
    Analysed-by: Vegard Nossum
    Signed-off-by: Pekka Enberg
    Acked-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Pekka Enberg
     
  • A number of architectures have identical asm/mman.h files so they can all
    be merged by using the new generic file.

    The remaining asm/mman.h files are substantially different from each
    other.

    Signed-off-by: Arnd Bergmann
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arnd Bergmann
     
  • Add an example of how to use the MAP_HUGETLB flag to the vm documentation
    directory and a reference to the example in hugetlbpage.txt.
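    A minimal userspace sketch of the flag (not the documentation's example:
    the MAP_HUGETLB value 0x40000 is the x86 definition, the mmap module does
    not export it, and huge pages must be reserved by the admin, so the code
    falls back to small pages when the huge-page mapping fails):

```python
import mmap

MAP_HUGETLB = 0x40000        # assumption: x86 value, not in the mmap module
LENGTH = 2 * 1024 * 1024     # one 2 MiB huge page

def map_anon(length):
    """Try an anonymous huge-page mapping (MAP_HUGETLB is a modifier
    of MAP_ANONYMOUS); fall back to ordinary small pages if no huge
    pages are available."""
    flags = mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS
    try:
        return mmap.mmap(-1, length, flags=flags | MAP_HUGETLB), True
    except (OSError, ValueError):
        return mmap.mmap(-1, length, flags=flags), False

m, huge = map_anon(LENGTH)
m[0] = 0x42                  # behaves like any anonymous mapping
print(len(m), m[0], "huge" if huge else "small-page fallback")
m.close()
```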

    Signed-off-by: Eric B Munson
    Acked-by: David Rientjes
    Cc: Mel Gorman
    Cc: Adam Litke
    Cc: David Gibson
    Cc: Lee Schermerhorn
    Cc: Nick Piggin
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Eric B Munson
     
  • Add a flag for mmap that will be used to request a huge page region that
    will look like anonymous memory to userspace. This is accomplished by
    using a file on the internal vfsmount. MAP_HUGETLB is a modifier of
    MAP_ANONYMOUS and so must be specified with it. The region will behave
    the same as a MAP_ANONYMOUS region using small pages.

    [akpm@linux-foundation.org: fix arch definitions of MAP_HUGETLB]
    Signed-off-by: Eric B Munson
    Acked-by: David Rientjes
    Cc: Mel Gorman
    Cc: Adam Litke
    Cc: David Gibson
    Cc: Lee Schermerhorn
    Cc: Nick Piggin
    Cc: Hugh Dickins
    Cc: Arnd Bergmann
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Eric B Munson
     
  • Add a flag for mmap that will be used to request a huge page region that
    will look like anonymous memory to user space. This is accomplished by
    using a file on the internal vfsmount. MAP_HUGETLB is a modifier of
    MAP_ANONYMOUS and so must be specified with it. The region will behave
    the same as a MAP_ANONYMOUS region using small pages.

    The patch also adds the MAP_STACK flag, which was previously defined only
    on some architectures but not on others. Since MAP_STACK is meant to be a
    hint only, architectures can define it without assigning a specific
    meaning to it.

    Signed-off-by: Arnd Bergmann
    Cc: Eric B Munson
    Cc: Hugh Dickins
    Cc: David Rientjes
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arnd Bergmann
     
  • This patchset adds a flag to mmap that allows the user to request that an
    anonymous mapping be backed with huge pages. This mapping will borrow
    functionality from the huge page shm code to create a file on the kernel
    internal mount and use it to approximate an anonymous mapping. The
    MAP_HUGETLB flag is a modifier to MAP_ANONYMOUS and will not work without
    both flags being present.

    A new flag is necessary because there is no other way to hook into huge
    pages without creating a file on a hugetlbfs mount which wouldn't be
    MAP_ANONYMOUS.

    To userspace, this mapping will behave just like an anonymous mapping
    because the file is not accessible outside of the kernel.

    This patchset is meant to simplify the programming model. Presently there
    is a large chunk of boilerplate code, contained in libhugetlbfs, required
    to create private, hugepage-backed mappings. This patch set would allow
    use of hugepages without linking to libhugetlbfs or having hugetlbfs
    mounted.

    Unification of the VM code would provide these same benefits, but it has
    been resisted each time it has been suggested, for several reasons: it
    would break PAGE_SIZE assumptions across the kernel, it makes page-table
    abstractions really expensive, and it incurs fast-path penalties without
    providing any benefit on architectures that do not support huge pages.

    This patch:

    There are two means of creating mappings backed by huge pages:

    1. mmap() a file created on hugetlbfs
    2. Use shm, which creates a file on an internal mount and essentially
    maps it MAP_SHARED

    The internal mount is only used for shared mappings but there is very
    little that stops it being used for private mappings. This patch extends
    hugetlbfs_file_setup() to deal with the creation of files that will be
    mapped MAP_PRIVATE on the internal hugetlbfs mount. This extended API is
    used in a subsequent patch to implement the MAP_HUGETLB mmap() flag.

    Signed-off-by: Eric Munson
    Acked-by: David Rientjes
    Cc: Mel Gorman
    Cc: Adam Litke
    Cc: David Gibson
    Cc: Lee Schermerhorn
    Cc: Nick Piggin
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Eric B Munson
     
  • shmem_zero_setup() does not change vm_start, pgoff or vm_flags; only some
    drivers change them (such as drivers/video/bfin-t350mcqb-fb.c).

    Move this code to a more appropriate place to save cycles for shared
    anonymous mapping.

    Signed-off-by: Huang Shijie
    Reviewed-by: Minchan Kim
    Acked-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Huang Shijie
     
  • We noticed very erratic behavior [throughput] with the AIM7 shared
    workload running on recent distro [SLES11] and mainline kernels on an
    8-socket, 32-core, 256GB x86_64 platform. On the SLES11 kernel
    [2.6.27.19+] with Barcelona processors, as we increased the load [10s of
    thousands of tasks], the throughput would vary between two "plateaus"--one
    at ~65K jobs per minute and one at ~130K jpm. The simple patch below
    causes the results to smooth out at the ~130k plateau.

    But wait, there's more:

    We do not see this behavior on smaller platforms--e.g., 4 socket/8 core.
    This could be the result of the larger number of cpus on the larger
    platform--a scalability issue--or it could be the result of the larger
    number of interconnect "hops" between some nodes in this platform and how
    the tasks for a given load end up distributed over the nodes' cpus and
    memories--a stochastic NUMA effect.

    The variability in the results is less pronounced [on the same platform]
    with Shanghai processors and with mainline kernels. With 31-rc6 on
    Shanghai processors and 288 file systems on 288 fibre attached storage
    volumes, the curves [jpm vs load] are both quite flat with the patched
    kernel consistently producing ~3.9% better throughput [~80K jpm vs ~77K
    jpm] than the unpatched kernel.

    Profiling indicated that the "slow" runs were incurring high[er]
    contention on an anon_vma lock in vma_adjust(), apparently called from the
    sbrk() system call.

    The patch:

    A comment in mm/mmap.c:vma_adjust() suggests that we don't really need the
    anon_vma lock when we're only adjusting the end of a vma, as is the case
    for brk(). The comment questions whether it's worthwhile to optimize for
    this case. Apparently, on the newer, larger x86_64 platforms, with
    interesting NUMA topologies, it is worthwhile--especially considering
    that the patch [if correct!] is quite simple.

    We can detect this condition--no overlap with next vma--by noting a NULL
    "importer". The anon_vma pointer will also be NULL in this case, so
    simply avoid loading vma->anon_vma to avoid the lock.

    However, we DO need to take the anon_vma lock when we're inserting a vma
    ['insert' non-NULL] even when we have no overlap [NULL "importer"], so we
    need to check for 'insert', as well. And Hugh points out that we should
    also take it when adjusting vm_start (so that rmap.c can rely upon
    vma_address() while it holds the anon_vma lock).

    akpm: Zhang Yanmin reports a 150% throughput improvement with aim7, so it
    might be -stable material even though this isn't a regression: "this
    issue is not clear on dual socket Nehalem machine (2*4*2 cpu), but is
    severe on large machine (4*8*2 cpu)"

    [hugh.dickins@tiscali.co.uk: test vma start too]
    Signed-off-by: Lee Schermerhorn
    Signed-off-by: Hugh Dickins
    Cc: Nick Piggin
    Cc: Eric Whitney
    Tested-by: "Zhang, Yanmin"
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • CONFIG_SHMEM off gives you (ramfs masquerading as) tmpfs, even when
    CONFIG_TMPFS is off: that's a little anomalous, and I'd intended to make
    more sense of it by removing CONFIG_TMPFS altogether, always enabling its
    code when CONFIG_SHMEM; but so many defconfigs have CONFIG_SHMEM on
    CONFIG_TMPFS off that we'd better leave that as is.

    But there is no point in asking for CONFIG_TMPFS if CONFIG_SHMEM is off:
    make TMPFS depend on SHMEM, which also prevents TMPFS_POSIX_ACL's
    shmem_acl.o from being pointlessly built into the kernel when SHMEM is off.

    And a selfish change, to prevent the world from being rebuilt when I
    switch between CONFIG_SHMEM on and off: the only CONFIG_SHMEM in the
    header files is mm.h shmem_lock() - give that a shmem.c stub instead.
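    The resulting dependency reads roughly like this fs/Kconfig fragment
    (paraphrased; the prompt text and help are omitted):

```
config TMPFS
	bool "Virtual memory file system support (former shm fs)"
	depends on SHMEM
```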

    Signed-off-by: Hugh Dickins
    Acked-by: Matt Mackall
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • If (flags & MAP_LOCKED) is true, it means vm_flags already contains
    the bit VM_LOCKED, which is set by calc_vm_flag_bits().

    So there is no need to reset it again, just remove it.

    Signed-off-by: Huang Shijie
    Acked-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Huang Shijie
     
  • Move highest_memmap_pfn __read_mostly from page_alloc.c next to zero_pfn
    __read_mostly in memory.c: to help them share a cacheline, since they're
    very often tested together in vm_normal_page().

    Signed-off-by: Hugh Dickins
    Cc: Rik van Riel
    Cc: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Cc: Nick Piggin
    Cc: Mel Gorman
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • Reinstate anonymous use of ZERO_PAGE to all architectures, not just to
    those which __HAVE_ARCH_PTE_SPECIAL: as suggested by Nick Piggin.

    Contrary to how I'd imagined it, there's nothing ugly about this, just a
    zero_pfn test built into one or another block of vm_normal_page().

    But the MIPS ZERO_PAGE-of-many-colours case demands is_zero_pfn() and
    my_zero_pfn() inlines. Reinstate its mremap move_pte() shuffling of
    ZERO_PAGEs we did from 2.6.17 to 2.6.19? Not unless someone shouts for
    that: it would have to take vm_flags to weed out some cases.
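    The user-visible contract the zero page backs is checkable from userspace:
    untouched anonymous memory reads as zeros (a sketch; whether a shared
    ZERO_PAGE or freshly zeroed pages supply them is invisible to the program):

```python
import mmap

def fresh_anon_is_zero(length=4096):
    """Map anonymous memory and read it without writing first; every
    byte must be zero.  Such reads of untouched pages are what the
    kernel can satisfy from the single shared ZERO_PAGE."""
    m = mmap.mmap(-1, length,
                  flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)
    try:
        return m[:] == b"\x00" * length
    finally:
        m.close()

print(fresh_anon_is_zero())   # True
```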

    Signed-off-by: Hugh Dickins
    Cc: Rik van Riel
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Cc: Nick Piggin
    Cc: Mel Gorman
    Cc: Minchan Kim
    Cc: Ralf Baechle
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • Rename hugetlbfs_backed() to hugetlbfs_pagecache_present()
    and add more comments, as suggested by Mel Gorman.

    Signed-off-by: Hugh Dickins
    Cc: Rik van Riel
    Cc: KAMEZAWA Hiroyuki
    Cc: KOSAKI Motohiro
    Cc: Nick Piggin
    Cc: Mel Gorman
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins