07 Mar, 2010

40 commits

  • Fetch the rlimit values (both hard and soft) only once and work on them.
    This removes many accesses through the sig structure and makes the code
    cleaner.

    Mostly a preparation for writable resource limits support.

    Signed-off-by: Jiri Slaby
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: john stultz
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jiri Slaby
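
    The fetch-once pattern can be sketched in userspace terms. This is an
    illustrative sketch, not the kernel code: the helper name and page size
    are assumptions, and the point is simply that both limits are read into
    a local struct rlimit once and all computation works on the copy.

```c
#include <sys/resource.h>

/* Hypothetical sketch: read the limits once into a local copy and
 * compute from it, instead of dereferencing a shared structure
 * (the kernel's sig->rlim) several times. */
static unsigned long pages_from_limit(const struct rlimit *rlim,
                                      unsigned long page_size)
{
    if (rlim->rlim_cur == RLIM_INFINITY)
        return ~0UL;                              /* unlimited */
    return (unsigned long)rlim->rlim_cur / page_size;
}
```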
     
  • kernel/exit.c:1183:26: warning: symbol 'status' shadows an earlier one
    kernel/exit.c:1173:21: originally declared here

    Signed-off-by: Thiago Farina
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Thiago Farina
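
    The sparse warning above flags the classic shadowing pattern; a minimal
    illustration (variable names chosen to mirror the warning, not the
    actual kernel/exit.c code):

```c
/* The inner 'status' shadows the outer one, so any assignment to it
 * is lost when the block ends - usually not what was intended. */
static int shadowed(void)
{
    int status = 1;
    {
        int status = 2;   /* shadows the outer 'status' */
        (void)status;
    }
    return status;        /* still 1 */
}

static int unshadowed(void)
{
    int status = 1;
    {
        status = 2;       /* reuses the outer variable: value sticks */
    }
    return status;
}
```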
     
  • Fix the following 'make includecheck' warning:
    kernel/params.c: linux/string.h is included more than once.

    Signed-off-by: Jaswinder Singh Rajput
    Cc: André Goddard Rosa
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jaswinder Singh Rajput
     
  • "ret" needs to be signed or the error handling for splice_to_pipe() won't
    work correctly.

    Signed-off-by: Dan Carpenter
    Cc: Tom Zanussi
    Cc: Jens Axboe
    Cc: Lai Jiangshan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dan Carpenter
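
    The reason "ret" must be signed can be shown in a few lines. This is a
    distilled sketch (not the splice code itself): a negative error code
    stored in an unsigned variable can never compare less than zero, so the
    error path is silently skipped.

```c
/* With a signed type the error check works as intended. */
static int error_detected(long ret)
{
    return ret < 0;
}

/* With an unsigned type the same check is always false: -22 becomes a
 * huge positive value, and the compiler typically warns that the
 * comparison can never be true. */
static int error_detected_unsigned(unsigned long ret)
{
    return ret < 0;
}
```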
     
  • Add a debugfs interface and additional failure modes to LKDTM to provide
    similar functionality to the provoke-crash driver submitted here:

    http://lwn.net/Articles/371208/

    Crashes can now be induced either through module parameters (as before)
    or through the debugfs interface as in provoke-crash.

    The patch also provides a new "direct" interface, where KPROBES are not
    used, i.e., the crash is invoked directly upon write to the debugfs
    file. When built without KPROBES configured, only this mode is available.

    Signed-off-by: Simon Kagstrom
    Cc: M. Mohan Kumar
    Cc: Americo Wang
    Cc: David Woodhouse
    Cc: Ingo Molnar
    Cc: "Eric W. Biederman"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Simon Kagstrom
     
  • Signed-off-by: Thadeu Lima de Souza Cascardo
    Cc: Kay Sievers
    Cc: Greg Kroah-Hartman
    Cc: Alan Cox
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Thadeu Lima de Souza Cascardo
     
  • The variable priv is initialized twice to the same (side effect-free)
    expression. Drop one initialization.

    A simplified version of the semantic match that finds this problem is:
    (http://coccinelle.lip6.fr/)

    // <smpl>
    @forall@
    idexpression *x;
    identifier f!=ERR_PTR;
    @@

    x = f(...)
    ... when != x
    (
    x = f(...,<+...x...+>,...)
    |
    * x = f(...)
    )
    // </smpl>

    Signed-off-by: Julia Lawall
    Cc: Tomas Winkler
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Julia Lawall
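
    The C-level pattern the semantic patch flags looks like this. The names
    here are illustrative, not taken from the patched driver: 'priv' is
    written twice with no intervening read, so the first initialization is a
    dead store that can be dropped.

```c
struct ctx { int id; };

static struct ctx *get_ctx(void)
{
    static struct ctx c = { 42 };
    return &c;
}

static int lookup_id(void)
{
    struct ctx *priv = get_ctx();   /* dead store: removed by the fix */
    priv = get_ctx();               /* the only assignment that matters */
    return priv->id;
}
```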
     
  • The only in-tree external users of the symbol setup_max_cpus are in
    arch/x86/. The files ./kernel/alternative.c, ./kernel/visws_quirks.c, and
    ./mm/kmemcheck/kmemcheck.c are all guarded by CONFIG_SMP being defined.
    For this case the symbol is an unsigned int and declared as an extern in
    include/linux/smp.h.

    When CONFIG_SMP is not defined the symbol setup_max_cpus is
    a constant value that is only used in init/main.c. Make the symbol
    static for this case.

    Signed-off-by: H Hartley Sweeten
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    H Hartley Sweeten
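
    The visibility change can be sketched with simplified ifdefs; the
    NR_CPUS value of 1 here is the assumed !CONFIG_SMP configuration, not a
    verbatim copy of the kernel's definitions:

```c
#define NR_CPUS 1   /* assumed value for the !CONFIG_SMP case */

#ifdef CONFIG_SMP
/* extern-visible: declared in include/linux/smp.h */
unsigned int setup_max_cpus = NR_CPUS;
#else
/* only used in init/main.c, so make it file-local */
static const unsigned int setup_max_cpus = NR_CPUS;
#endif
```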
     
  • smp: Fix documentation.

    Fix the documentation of smp_processor_id() in include/linux/smp.h.

    Signed-off-by: Rakib Mullick
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rakib Mullick
     
  • The macro any_online_node() is prone to producing sparse warnings due to
    the local symbol 'node'. Since all the in-tree users really request the
    first online node (the mask argument is either NODE_MASK_ALL or
    node_online_map), just use the first_online_node macro instead, and
    remove the any_online_node macro now that it has no users.

    Signed-off-by: H Hartley Sweeten
    Acked-by: David Rientjes
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Cc: Lee Schermerhorn
    Acked-by: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: Dave Hansen
    Cc: Milton Miller
    Cc: Nathan Fontenot
    Cc: Geoff Levand
    Cc: Grant Likely
    Cc: J. Bruce Fields
    Cc: Neil Brown
    Cc: Trond Myklebust
    Cc: David S. Miller
    Cc: Benny Halevy
    Cc: Chuck Lever
    Cc: Ricardo Labiaga
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    H Hartley Sweeten
     
  • Make sure the compiler won't do weird things with limits, e.g. fetching
    them twice may return two different values after writable limits are
    implemented.

    That is, either use the rlimit helpers added in commit 3e10e716abf3
    ("resource: add helpers for fetching rlimits") or ACCESS_ONCE if they are
    not applicable.

    Signed-off-by: Jiri Slaby
    Cc: Alexander Viro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jiri Slaby
     
  • Depending on CONFIG_SMP, the num_*_cpus() functions return unsigned or
    signed values. Let them always return unsigned values to avoid strange
    casts.

    Fixes at least one warning:

    kernel/kprobes.c: In function 'register_kretprobe':
    kernel/kprobes.c:1038: warning: comparison of distinct pointer types lacks a cast

    Signed-off-by: Heiko Carstens
    Cc: Heiko Carstens
    Cc: Ananth N Mavinakayanahalli
    Cc: Masami Hiramatsu
    Cc: Ingo Molnar
    Cc: Rusty Russell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Heiko Carstens
     
  • The symbol 'count' is a file-scope variable (global within this file).
    The function clean_rootfs() should use a different symbol name to prevent
    "symbol shadows an earlier one" noise.

    Signed-off-by: H Hartley Sweeten
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    H Hartley Sweeten
     
  • - new Documentation/init.txt file describing various forms of failure
    when trying to load the init binary after kernel bootup

    - extend the init/main.c init failure message to direct readers to
    Documentation/init.txt

    Signed-off-by: Andreas Mohr
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andreas Mohr
     
  • additional_cpus is now only supported on IA64; x86_64 should not be
    included.

    Signed-off-by: Chen Gong
    Cc: Ingo Molnar
    Cc: Rusty Russell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Chen Gong
     
  • Move the CS5535 MFGPT hrtimer kconfig option to be with the other MFGPT
    options. This makes it easier to find and also removes it from the main
    "Device Drivers" menu, where it should not have been.

    Signed-off-by: Randy Dunlap
    Acked-by: Andres Salomon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Randy Dunlap
     
  • Tell git to ignore the generated files under um, except:

    include/shared/kern_constants.h
    include/shared/user_constants.h

    which will be moved to include/generated.

    Signed-off-by: WANG Cong
    Cc: Al Viro
    Cc: Jeff Dike
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    WANG Cong
     
  • Assign tty only if line is not NULL.

    [akpm@linux-foundation.org: simplification]
    Signed-off-by: Alexander Beregalov
    Cc: Jeff Dike
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Beregalov
     
  • With id 1, the wrong breakpoint (bp) was unwatched.

    Signed-off-by: Roel Kluin
    Cc: Mikael Starvik
    Cc: Jesper Nilsson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Roel Kluin
     
  • A size_t desc_len cannot be less than 0, so test before the subtraction.

    Signed-off-by: Roel Kluin
    Cc: Mikael Starvik
    Cc: Jesper Nilsson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Roel Kluin
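
    The class of bug can be shown in isolation. This sketch uses
    illustrative names, not the cris driver's code: because size_t is
    unsigned, subtracting first and checking afterwards wraps around instead
    of going negative, so the check must precede the subtraction.

```c
#include <stddef.h>

/* Wraps to a huge value whenever consumed > desc_len. */
static size_t remaining_broken(size_t desc_len, size_t consumed)
{
    return desc_len - consumed;
}

/* Test before the subtraction, as the fix does. */
static size_t remaining_fixed(size_t desc_len, size_t consumed)
{
    if (desc_len < consumed)
        return 0;
    return desc_len - consumed;
}
```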
     
  • Convert cris to use GENERIC_TIME via the arch_getoffset() infrastructure,
    reducing the amount of arch specific code we need to maintain.

    Signed-off-by: John Stultz
    Cc: Mikael Starvik
    Cc: Jesper Nilsson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    john stultz
     
  • Reorder struct menu_device to remove 8 bytes of padding on 64-bit builds.
    Size drops from 136 to 128 bytes, possibly needing one fewer cache line.

    Signed-off-by: Richard Kennedy
    Cc: Arjan van de Ven
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Richard Kennedy
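
    The padding effect is easy to reproduce with a toy struct (these layouts
    are illustrative, not menu_device's actual members). On a 64-bit ABI the
    pointer forces 8-byte alignment, so interleaving int and pointer members
    leaves holes that ordering largest-first removes:

```c
struct holey {
    int a;        /* 4 bytes + 4 bytes padding before the pointer */
    void *p;      /* 8 bytes, 8-byte aligned */
    int b;        /* 4 bytes + 4 bytes tail padding */
};                /* typically 24 bytes on LP64 */

struct dense {
    void *p;      /* 8 bytes */
    int a;        /* 4 bytes */
    int b;        /* 4 bytes */
};                /* typically 16 bytes on LP64 */
```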
     
  • The initial -EINVAL value is overwritten by `retval = PTR_ERR(name)'. If
    this isn't an error pointer and typenr is not 1, 6 or 9, then this retval,
    a pointer cast to a long, is returned.

    Signed-off-by: Roel Kluin
    Acked-by: Richard Henderson
    Cc: Ivan Kokshaysky
    Cc: Matt Turner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Roel Kluin
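
    The bug shape can be sketched outside the kernel. The helpers below
    model (not copy) the kernel's IS_ERR/PTR_ERR convention, and the
    function names are illustrative: because retval is unconditionally
    overwritten with PTR_ERR(name), a valid pointer with an unsupported
    typenr escapes as a long instead of -EINVAL.

```c
#include <stdint.h>

#define MAX_ERRNO 4095
/* Error pointers live in the top 4095 values of the address space. */
static int is_err(const void *p)
{
    return (uintptr_t)p >= (uintptr_t)-MAX_ERRNO;
}
static long ptr_err(const void *p) { return (long)(intptr_t)p; }

static long lookup_broken(const void *name, int typenr)
{
    long retval = -22;            /* -EINVAL */
    retval = ptr_err(name);       /* clobbers the -EINVAL above */
    if (is_err(name))
        return retval;
    if (typenr != 1 && typenr != 6 && typenr != 9)
        return retval;            /* leaks the pointer cast to a long */
    return 0;
}

static long lookup_fixed(const void *name, int typenr)
{
    if (is_err(name))
        return ptr_err(name);
    if (typenr != 1 && typenr != 6 && typenr != 9)
        return -22;               /* -EINVAL, as intended */
    return 0;
}
```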
     
  • No architecture except for frv has pci_dma_sync_single() and
    pci_dma_sync_sg(). The APIs are deprecated.

    Signed-off-by: FUJITA Tomonori
    Acked-by: David S. Miller
    Acked-by: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    FUJITA Tomonori
     
  • swap_duplicate()'s loop appears to miss out on returning the error code
    from __swap_duplicate(), except when that's -ENOMEM. In fact this is
    intentional: prior to -ENOMEM for swap_count_continuation,
    swap_duplicate() was void (and the case only occurs when copy_one_pte()
    hits a corrupt pte). But that's surprising behaviour, which certainly
    deserves a comment.

    Signed-off-by: Hugh Dickins
    Reported-by: Huang Shijie
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • The noMMU version of get_user_pages() fails to pin the last page when the
    start address isn't page-aligned. The patch fixes this in a way that
    makes find_extend_vma() congruent to its MMU cousin.

    Signed-off-by: Steven J. Magnani
    Acked-by: Paul Mundt
    Cc: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Steven J. Magnani
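
    The arithmetic behind the missing last page: with an unaligned start, a
    range spans one more page than len / PAGE_SIZE suggests. A sketch with
    an assumed 4 KiB page size (not the noMMU code itself):

```c
#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)   /* assumed 4 KiB pages */

/* Count pages touched by [start, start + len): the last page is the
 * one a naive len / PAGE_SIZE computation forgets when start is not
 * page-aligned. */
static unsigned long pages_spanned(unsigned long start, unsigned long len)
{
    unsigned long first = start >> PAGE_SHIFT;
    unsigned long last  = (start + len - 1) >> PAGE_SHIFT;
    return last - first + 1;
}
```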
     
  • Use the same log level for the printk calls in show_mem(), so that those
    messages can be shown completely when using log level 6.

    Signed-off-by: WANG Cong
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Amerigo Wang
     
  • __GFP_NOFAIL was deprecated in dab48dab, so add a comment that no new
    users should be added.

    Reviewed-by: KAMEZAWA Hiroyuki
    Signed-off-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • The VM currently assumes that an inactive, mapped and referenced file page
    is in use and promotes it to the active list.

    However, every mapped file page starts out like this and thus a problem
    arises when workloads create a stream of such pages that are used only for
    a short time. By flooding the active list with those pages, the VM
    quickly gets into trouble finding eligible reclaim candidates. The result
    is long allocation latencies and eviction of the wrong pages.

    This patch reuses the PG_referenced page flag (used for unmapped file
    pages) to implement a usage detection that scales with the speed of LRU
    list cycling (i.e. memory pressure).

    If the scanner encounters those pages, the flag is set and the page is
    cycled again on the inactive list. Only if it returns with another page
    table reference is it activated. Otherwise it is reclaimed as 'not
    recently used cache'.

    This effectively changes the minimum lifetime of a used-once mapped file
    page from a full memory cycle to an inactive list cycle, which allows it
    to occur in linear streams without affecting the stable working set of the
    system.

    Signed-off-by: Johannes Weiner
    Reviewed-by: Rik van Riel
    Cc: Minchan Kim
    Cc: KOSAKI Motohiro
    Cc: Lee Schermerhorn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
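
    A hypothetical distillation of the used-once logic (plain C, not the
    kernel's actual types or interfaces): a mapped file page found with page
    table references gets PG_referenced set and one more trip around the
    inactive list; only a second encounter with fresh references activates
    it.

```c
enum page_verdict { RECLAIM, KEEP_INACTIVE, ACTIVATE };

struct page_refs {
    int referenced_pte;   /* young bit seen in page tables this scan */
    int pg_referenced;    /* PG_referenced flag on the page */
};

static enum page_verdict check_references(struct page_refs *p)
{
    int seen_before = p->pg_referenced;

    if (p->referenced_pte) {
        p->pg_referenced = 1;
        p->referenced_pte = 0;   /* young bits consumed by the scan */
        if (seen_before)
            return ACTIVATE;     /* second referenced encounter */
        return KEEP_INACTIVE;    /* first use: one more cycle */
    }
    p->pg_referenced = 0;
    return RECLAIM;              /* 'not recently used cache' */
}
```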
     
  • page_mapping_inuse() is a historic predicate function for pages that are
    about to be reclaimed or deactivated.

    According to it, a page is in use when it is mapped into page tables OR
    part of swap cache OR backing an mmapped file.

    This function is used in combination with page_referenced(), which checks
    for young bits in ptes and the page descriptor itself for the
    PG_referenced bit. Thus, checking for unmapped swap cache pages is
    meaningless as PG_referenced is not set for anonymous pages and unmapped
    pages do not have young ptes. The test makes no difference.

    Protecting file pages that are not by themselves mapped but are part of a
    mapped file is also a historic leftover for short-lived things like the
    exec() code in libc. However, the VM now does reference accounting and
    activation of pages at unmap time and thus the special treatment on
    reclaim is obsolete.

    This patch drops page_mapping_inuse() and switches the two callsites to
    use page_mapped() directly.

    Signed-off-by: Johannes Weiner
    Reviewed-by: Rik van Riel
    Cc: Minchan Kim
    Cc: KOSAKI Motohiro
    Cc: Lee Schermerhorn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • The used-once mapped file page detection patchset.

    It is meant to help workloads with large amounts of shortly used file
    mappings, like rtorrent hashing a file or git when dealing with loose
    objects (git gc on a bigger site?).

    Right now, the VM activates referenced mapped file pages on first
    encounter on the inactive list and it takes a full memory cycle to
    reclaim them again. When those pages dominate memory, the system
    no longer has a meaningful notion of 'working set' and is required
    to give up the active list to make reclaim progress. Obviously,
    this results in rather bad scanning latencies and the wrong pages
    being reclaimed.

    This patch makes the VM be more careful about activating mapped file
    pages in the first place. The minimum granted lifetime without
    another memory access becomes an inactive list cycle instead of the
    full memory cycle, which is more natural given the mentioned loads.

    This test resembles a hashing rtorrent process. Sequentially, 32MB
    chunks of a file are mapped into memory, hashed (sha1) and unmapped
    again. While this happens, every 5 seconds a process is launched and
    its execution time taken:

    python2.4 -c 'import pydoc'
    old: max=2.31s mean=1.26s (0.34)
    new: max=1.25s mean=0.32s (0.32)

    find /etc -type f
    old: max=2.52s mean=1.44s (0.43)
    new: max=1.92s mean=0.12s (0.17)

    vim -c ':quit'
    old: max=6.14s mean=4.03s (0.49)
    new: max=3.48s mean=2.41s (0.25)

    mplayer --help
    old: max=8.08s mean=5.74s (1.02)
    new: max=3.79s mean=1.32s (0.81)

    overall hash time (stdev):
    old: time=1192.30 (12.85) thruput=25.78mb/s (0.27)
    new: time=1060.27 (32.58) thruput=29.02mb/s (0.88) (-11%)

    I also tested kernbench with regular IO streaming in the background to
    see whether the delayed activation of frequently used mapped file
    pages had a negative impact on performance in the presence of pressure
    on the inactive list. The patch made no significant difference in
    timing, neither for kernbench nor for the streaming IO throughput.

    The first patch submission raised concerns about the cost of the extra
    faults for actually activated pages on machines that have no hardware
    support for young page table entries.

    I created an artificial worst case scenario on an ARM machine with
    around 300MHz and 64MB of memory to figure out the dimensions
    involved. The test would mmap a file of 20MB, then

    1. touch all its pages to fault them in
    2. force one full scan cycle on the inactive file LRU
    -- old: mapping pages activated
    -- new: mapping pages inactive
    3. touch the mapping pages again
    -- old and new: fault exceptions to set the young bits
    4. force another full scan cycle on the inactive file LRU
    5. touch the mapping pages one last time
    -- new: fault exceptions to set the young bits

    The test showed an overall increase of 6% in time over 100 iterations
    of the above (old: ~212sec, new: ~225sec). 13 secs total overhead /
    (100 * 5k pages), ignoring the execution time of the test itself,
    makes for about 25us overhead for every page that gets actually
    activated. Note:

    1. File mapping the size of one third of main memory, _completely_
    in active use across memory pressure - i.e., most pages referenced
    within one LRU cycle. This should be rare to non-existent,
    especially on such embedded setups.

    2. Many huge activation batches. Those batches only occur when the
    working set fluctuates. If it changes completely between every full
    LRU cycle, you have problematic reclaim overhead anyway.

    3. Access of activated pages at maximum speed: sequential loads from
    every single page without doing anything in between. In reality,
    the extra faults will get distributed between actual operations on
    the data.

    So even if a workload manages to get the VM into the situation of
    activating a third of memory in one go on such a setup, it will take
    2.2 seconds instead of 2.1 without the patch.

    Comparing the numbers (and my user-experience over several months),
    I think this change is an overall improvement to the VM.

    Patch 1 is only refactoring to break up that ugly compound conditional
    in shrink_page_list() and make it easy to document and add new checks
    in a readable fashion.

    Patch 2 gets rid of the obsolete page_mapping_inuse(). It's not
    strictly related to #3, but it was in the original submission and is a
    net simplification, so I kept it.

    Patch 3 implements used-once detection of mapped file pages.

    This patch:

    Moving the big conditional into its own predicate function makes the code
    a bit easier to read and allows for better commenting on the checks
    one-by-one.

    This is just cleaning up, no semantics should have been changed.

    Signed-off-by: Johannes Weiner
    Reviewed-by: Rik van Riel
    Cc: Minchan Kim
    Cc: KOSAKI Motohiro
    Cc: Lee Schermerhorn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • Add a bare description of what /sys/devices/system/node/nodeX is. Others
    will follow in time but right now, none of that tree is documented. The
    existence of this file might at least encourage people to document new
    entries.

    Signed-off-by: Mel Gorman
    Reviewed-by: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Add documentation for /proc/pagetypeinfo.

    Signed-off-by: Mel Gorman
    Reviewed-by: Christoph Lameter
    Reviewed-by: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • free_area_init_nodes() emits pfn ranges for all zones on the system.
    There may be no pages on a higher zone, however, due to memory limitations
    or the use of the mem= kernel parameter. For example:

    Zone PFN ranges:
    DMA 0x00000001 -> 0x00001000
    DMA32 0x00001000 -> 0x00100000
    Normal 0x00100000 -> 0x00100000

    The implementation copies the previous zone's highest pfn, if any, as the
    next zone's lowest pfn. If its highest pfn is then greater than the
    amount of addressable memory, the upper memory limit is used instead.
    Thus, both the lowest and highest possible pfn for higher zones without
    memory may be the same.

    The pfn range for zones without memory is now shown as "empty" instead.

    Signed-off-by: David Rientjes
    Cc: Mel Gorman
    Reviewed-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
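
    The reporting change amounts to a simple check. A sketch with
    illustrative names (the kernel prints via printk, not snprintf): when a
    zone's lowest and highest possible pfn coincide, emit "empty" instead of
    a zero-width range.

```c
#include <stdio.h>
#include <string.h>

static const char *zone_range(unsigned long lo, unsigned long hi,
                              char *buf, size_t n)
{
    if (lo == hi)
        snprintf(buf, n, "empty");    /* zone has no pages */
    else
        snprintf(buf, n, "0x%08lx -> 0x%08lx", lo, hi);
    return buf;
}
```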
     
  • There are quite a few GFP_KERNEL memory allocations made during
    suspend/hibernation and resume that may cause the system to hang, because
    the I/O operations they depend on cannot be completed due to the
    underlying devices being suspended.

    Avoid this problem by clearing the __GFP_IO and __GFP_FS bits in
    gfp_allowed_mask before suspend/hibernation and restoring the original
    values of these bits in gfp_allowed_mask during the subsequent resume.

    [akpm@linux-foundation.org: fix CONFIG_PM=n linkage]
    Signed-off-by: Rafael J. Wysocki
    Reported-by: Maxim Levitsky
    Cc: Sebastian Ott
    Cc: Benjamin Herrenschmidt
    Cc: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rafael J. Wysocki
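
    The masking mechanism can be sketched with simplified flag values (the
    constants below are illustrative, not the real kernel GFP bits): suspend
    clears __GFP_IO and __GFP_FS from a global allowed mask that every
    allocation is filtered through, and resume restores the saved mask.

```c
#define __GFP_WAIT 0x10u
#define __GFP_IO   0x40u
#define __GFP_FS   0x80u
#define GFP_KERNEL (__GFP_WAIT | __GFP_IO | __GFP_FS)

static unsigned int gfp_allowed_mask = ~0u;
static unsigned int saved_gfp_mask;

static void pm_restrict_gfp_mask(void)
{
    saved_gfp_mask = gfp_allowed_mask;
    gfp_allowed_mask &= ~(__GFP_IO | __GFP_FS);  /* no I/O while suspended */
}

static void pm_restore_gfp_mask(void)
{
    gfp_allowed_mask = saved_gfp_mask;           /* back to normal on resume */
}

/* Applied on every allocation: GFP_KERNEL silently degrades to a
 * non-I/O allocation while devices are suspended. */
static unsigned int effective_gfp(unsigned int flags)
{
    return flags & gfp_allowed_mask;
}
```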
     
  • There's an off-by-one disagreement between mkswap and swapon about the
    meaning of swap_header last_page: mkswap (in all versions I've looked at:
    util-linux-ng and BusyBox and old util-linux; probably as far back as
    1999) consistently means the offset (in page units) of the last page of
    the swap area, whereas kernel sys_swapon (as far back as 2.2 and 2.3)
    strangely takes it to mean the size (in page units) of the swap area.

    This disagreement is the safe way round; but it's worrying people, and
    loses us one page of swap.

    The fix is not just to add one to nr_good_pages: we need to get maxpages
    (the size of the swap_map array) right before that; and though that is an
    unsigned long, be careful not to overflow the unsigned int p->max which
    later holds it (probably why header uses __u32 last_page instead of size).

    Why did we subtract one from the maximum swp_offset to calculate maxpages?
    Though it was probably me who made that change in 2.4.10, I don't get it:
    and now we should be adding one (without risk of overflow in this case).

    Fix the handling of swap_header badpages: it could have overrun the
    swap_map when very large swap area used on a more limited architecture.

    Remove pre-initializations of swap_header, nr_good_pages and maxpages:
    those date from when sys_swapon was supporting other versions of header.

    Reported-by: Nitin Gupta
    Reported-by: Jarkko Lavinen
    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
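
    The off-by-one in numbers: mkswap's last_page is the index (in page
    units) of the last usable page, so the page count is last_page + 1. A
    minimal sketch (the real fix also sizes the swap_map and clamps into the
    unsigned int p->max, which is omitted here):

```c
#include <stdint.h>

/* Compute the count in a wider type first: last_page is a __u32 in the
 * header, and last_page + 1 must not wrap before being range-checked. */
static unsigned long swap_maxpages(uint32_t last_page)
{
    return (unsigned long)last_page + 1;
}
```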
     
  • When a VMA is in an inconsistent state during setup or teardown, the worst
    that can happen is that the rmap code will not be able to find the page.

    The mapping is in the process of being torn down (PTEs just got
    invalidated by munmap), or set up (no PTEs have been instantiated yet).

    It is also impossible for the rmap code to follow a pointer to an already
    freed VMA, because the rmap code holds the anon_vma->lock, which the VMA
    teardown code needs to take before the VMA is removed from the anon_vma
    chain.

    Hence, we should not need the VM_LOCK_RMAP locking at all.

    Signed-off-by: Rik van Riel
    Cc: Nick Piggin
    Cc: KOSAKI Motohiro
    Cc: Larry Woodman
    Cc: Lee Schermerhorn
    Cc: Andrea Arcangeli
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rik van Riel
     
  • When the parent process breaks the COW on a page, both the original page,
    which stays mapped in the child, and the new page, which is mapped in the
    parent, end up in that same anon_vma. Generally this won't be a problem,
    but for some workloads it could preserve the O(N) rmap scanning
    complexity.

    A simple fix is to ensure that, when a page which is mapped in the child
    gets reused in do_wp_page, because we already are the exclusive owner,
    the page gets moved to our own exclusive child's anon_vma.

    Signed-off-by: Rik van Riel
    Cc: KOSAKI Motohiro
    Cc: Larry Woodman
    Cc: Lee Schermerhorn
    Reviewed-by: Minchan Kim
    Cc: Andrea Arcangeli
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rik van Riel
     
  • When an anonymous page is inherited from a parent process, the
    vma->anon_vma can differ from the page's anon_vma. This can trip up
    __page_check_anon_rmap, which is indirectly called from do_swap_page().

    Remove that obsolete check to prevent an oops.

    Signed-off-by: Rik van Riel
    Cc: KOSAKI Motohiro
    Cc: Larry Woodman
    Cc: Lee Schermerhorn
    Reviewed-by: Minchan Kim
    Cc: Andrea Arcangeli
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rik van Riel
     
  • The old anon_vma code can lead to scalability issues with heavily forking
    workloads. Specifically, each anon_vma will be shared between the parent
    process and all its child processes.

    In a workload with 1000 child processes and a VMA with 1000 anonymous
    pages per process that get COWed, this leads to a system with a million
    anonymous pages in the same anon_vma, each of which is mapped in just one
    of the 1000 processes. However, the current rmap code needs to walk them
    all, leading to O(N) scanning complexity for each page.

    This can result in systems where one CPU is walking the page tables of
    1000 processes in page_referenced_one, while all other CPUs are stuck on
    the anon_vma lock. This leads to catastrophic failure for a benchmark
    like AIM7, where the total number of processes can reach into the tens of
    thousands. Real workloads are still a factor of 10 less process-intensive
    than AIM7, but they are catching up.

    This patch changes the way anon_vmas and VMAs are linked, which allows us
    to associate multiple anon_vmas with a VMA. At fork time, each child
    process gets its own anon_vmas, in which its COWed pages will be
    instantiated. The parents' anon_vma is also linked to the VMA, because
    non-COWed pages could be present in any of the children.

    This reduces rmap scanning complexity to O(1) for the pages of the 1000
    child processes, with O(N) complexity for at most 1/N pages in the system.
    This reduces the average scanning cost in heavily forking workloads from
    O(N) to 2.

    The only real complexity in this patch stems from the fact that linking a
    VMA to anon_vmas now involves memory allocations. This means vma_adjust
    can fail, if it needs to attach a VMA to anon_vma structures. This in
    turn means error handling needs to be added to the calling functions.

    A second source of complexity is that, because there can be multiple
    anon_vmas, the anon_vma linking in vma_adjust can no longer be done under
    "the" anon_vma lock. To prevent the rmap code from walking up an
    incomplete VMA, this patch introduces the VM_LOCK_RMAP VMA flag. This bit
    flag uses the same slot as the NOMMU VM_MAPPED_COPY, with an ifdef in mm.h
    to make sure it is impossible to compile a kernel that needs both symbolic
    values for the same bitflag.

    Some test results:

    Without the anon_vma changes, when AIM7 hits around 9.7k users (on a test
    box with 16GB RAM and not quite enough IO), the system ends up running
    >99% in system time, with every CPU on the same anon_vma lock in the
    pageout code.

    With these changes, AIM7 hits the cross-over point around 29.7k users.
    This happens with ~99% IO wait time, there never seems to be any spike in
    system time. The anon_vma lock contention appears to be resolved.

    [akpm@linux-foundation.org: cleanups]
    Signed-off-by: Rik van Riel
    Cc: KOSAKI Motohiro
    Cc: Larry Woodman
    Cc: Lee Schermerhorn
    Cc: Minchan Kim
    Cc: Andrea Arcangeli
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rik van Riel