17 Mar, 2006

3 commits

  • Lee Revell reported 28ms latency when a process with lots of swapped memory
    exits.

    2.6.15 introduced a latency regression when unmapping: in accounting the
    zap_work latency breaker, pte_none counted 1, pte_present PAGE_SIZE, but a
    swap entry counted nothing at all. We think of pages present as the slow
    case, but Lee's trace shows that free_swap_and_cache's radix tree lookup
    can involve a lot of work - and we could have been doing it many thousands of
    times without a latency break.

    Move the zap_work update up to account swap entries like pages present.
    This does account non-linear pte_file entries, and unmap_mapping_range
    skipping over swap entries, by the same amount even though they're quick:
    but neither of those cases deserves complicating the code (and they're
    treated no worse than they were in 2.6.14).
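
    A simplified sketch of the accounting change (not the exact mm/memory.c
    diff; the surrounding unmap work is elided):

    do {
            pte_t ptent = *pte;

            if (pte_none(ptent)) {
                    (*zap_work)--;            /* empty slots stay cheap */
                    continue;
            }
            (*zap_work) -= PAGE_SIZE;         /* moved up: also covers swap entries */

            if (pte_present(ptent)) {
                    /* unmap the present page as before */
                    continue;
            }
            /* swap entry: the free_swap_and_cache() below can do real work */
    } while (pte++, addr += PAGE_SIZE, (addr != end && *zap_work > 0));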

    Signed-off-by: Hugh Dickins
    Acked-by: Nick Piggin
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • We can call try_to_release_page() with PagePrivate off and a valid
    page->mapping. This may cause all sorts of trouble for the filesystem
    *_releasepage() handlers. XFS bombs out in that case.

    Lock the page before checking for page private.
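
    The ordering being enforced, as a minimal sketch (the exact call site is
    elided here):

    lock_page(page);                        /* stabilizes ->private and ->mapping */
    if (PagePrivate(page))
            try_to_release_page(page, GFP_KERNEL);
    unlock_page(page);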

    Signed-off-by: Christoph Lameter
    Cc: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Currently the migration of anonymous pages will silently fail if no swap is
    set up. This patch makes the page migration functions check for available swap
    and fail with -ENODEV if no swap space is available.
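
    In sketch form, the check amounts to something like:

    /* anonymous pages need swap space to be migrated out of place */
    if (PageAnon(page) && !total_swap_pages)
            return -ENODEV;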

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

15 Mar, 2006

2 commits

  • It seems that setting scheduling policy and priorities is the kind of
    thing that might also be performed in apps that use the NUMA API, so it
    seems consistent to use CAP_SYS_NICE for NUMA as well.

    So use CAP_SYS_NICE for controlling migration permissions.
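
    A minimal sketch of the gate (the real check is combined with the usual
    uid comparisons against the target task):

    /* unrelated tasks need CAP_SYS_NICE to migrate another process's pages */
    if (!capable(CAP_SYS_NICE))
            return -EPERM;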

    Signed-off-by: Christoph Lameter
    Cc: Michael Kerrisk
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Page migration currently simply retries a couple of times if try_to_unmap()
    fails, without inspecting the return code.

    However, SWAP_FAIL indicates that the page is in a vma that has the
    VM_LOCKED flag set (if ignore_refs == 1). We can check for that return code
    and avoid retrying the migration.

    migrate_page_remove_references() now needs to return the reason why the
    failure occurred, so switch migrate_page_remove_references() to -Exx
    style return codes.
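
    Sketched, the unmap step then looks something like this (the exact error
    values are illustrative):

    switch (try_to_unmap(page, 1)) {
    case SWAP_FAIL:
            return -EPERM;          /* VM_LOCKED vma: retrying cannot help */
    case SWAP_AGAIN:
            return -EAGAIN;         /* transient failure, caller may retry */
    case SWAP_SUCCESS:
            break;                  /* references gone, migration can proceed */
    }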

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

10 Mar, 2006

4 commits

  • The cache reaper currently tries to free all alien caches and all remote
    per cpu pages in each pass of cache_reap. For machines with a large number
    of nodes (such as Altix) this may lead to sporadic delays of ~10ms.
    Interrupts are disabled while reclaiming, creating unacceptable delays.

    This patch changes that behavior by adding a per cpu reap_node variable.
    Instead of attempting to free all caches, we free only one alien cache and
    the per cpu pages from one remote node. That reduces the time spent in
    cache_reap. However, doing so will lengthen the time it takes to
    completely drain all remote per cpu pagesets and all alien caches. The
    time needed will grow with the number of nodes in the system. All caches
    are drained when they overflow their respective capacity. So the drawback
    here is only that a bit of memory may be wasted for a while longer.

    Details:

    1. Rename drain_remote_pages to drain_node_pages to allow specifying the
    node whose pcp pages are to be drained.

    2. Add additional functions init_reap_node, next_reap_node for NUMA
    that manage a per cpu reap_node counter.

    3. Add a reap_alien function that reaps only from the current reap_node.

    For us this seems to be a critical issue. Holdoffs of an average of ~7ms
    cause some HPC benchmarks to slow down significantly. For example, NAS
    parallel slows down dramatically: a 12-16 second runtime without the rotor
    compared to 5.8 secs with the rotor patches. It gets down to 5.05 secs with
    the additional interrupt holdoff reductions.
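
    The rotor itself is small; a sketch close to the description above (a
    per-cpu counter walking node_online_map):

    static DEFINE_PER_CPU(int, reap_node);

    static void init_reap_node(int cpu)
    {
            int node = next_node(cpu_to_node(cpu), node_online_map);

            if (node == MAX_NUMNODES)
                    node = first_node(node_online_map);
            __get_cpu_var(reap_node) = node;
    }

    static void next_reap_node(void)
    {
            int node = __get_cpu_var(reap_node);

            node = next_node(node, node_online_map);
            if (unlikely(node >= MAX_NUMNODES))
                    node = first_node(node_online_map);
            __get_cpu_var(reap_node) = node;
    }

    /* each cache_reap() pass then only calls reap_alien() and
     * drain_node_pages() for this one remote node */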

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • When pages are onlined, not only zone->present_pages but also
    pgdat->node_present_pages should be refreshed.

    This parameter is used to show information at
    /sys/devices/system/node/nodeX/meminfo via si_meminfo_node().

    Otherwise it shows a strange value for MemUsed, which is calculated as
    (node_present_pages - all zones' free pages).
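
    The fix boils down to keeping both counters in step (sketch from the
    online_pages() path):

    zone->present_pages += onlined_pages;
    zone->zone_pgdat->node_present_pages += onlined_pages;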

    Signed-off-by: Yasunori Goto
    Cc: Dave Hansen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Yasunori Goto
     
  • If the process has already set PF_MEMALLOC and is already using
    current->reclaim_state then do not try to reclaim memory from the zone.
    These are set by kswapd and/or synchronous global reclaim, which will not
    take it lightly if we zap the reclaim_state.
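
    A minimal sketch of the early exit in zone_reclaim():

    /* bail out if this task is already inside reclaim */
    if ((current->flags & PF_MEMALLOC) || current->reclaim_state)
            return 0;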

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Remove two early-development BUG_ONs from page_add_file_rmap.

    The pfn_valid test (originally useful for checking that nobody passed an
    artificial struct page) comes too late, since we already have the struct
    page.

    The PageAnon test (useful when anon was first distinguished from file rmap)
    prevents ->nopage implementations from reusing ->mapping, which would
    otherwise be available.

    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     

09 Mar, 2006

4 commits

  • kmem_cache_init() incorrectly assumes that the cache_cache object will fit
    in an order 0 allocation. On very large systems, this is not true. Change
    the code to try larger order allocations if order 0 fails.
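
    Roughly what the fix looks like in kmem_cache_init() (internal helper and
    field names as in mm/slab.c of that era):

    /* probe increasing orders until at least one object fits */
    for (order = 0; order < MAX_ORDER; order++) {
            cache_estimate(order, cache_cache.buffer_size, cache_line_size(),
                           0, &left_over, &cache_cache.num);
            if (cache_cache.num)
                    break;
    }
    BUG_ON(!cache_cache.num);
    cache_cache.gfporder = order;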

    Signed-off-by: Jack Steiner
    Cc: Manfred Spraul
    Cc: Pekka Enberg
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jack Steiner
     
  • Implement percpu_counter_sum(). This is a more accurate but slower version of
    percpu_counter_read_positive().

    We need this for Alex's speedup-ext3_statfs patch and for the nr_file
    accounting fix. Otherwise these things would be too inaccurate on large CPU
    counts.
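
    A sketch of the slower, exact read (assuming the 2.6.16-era
    percpu_counter layout of a spinlock, an s64 count and per-cpu s32 deltas;
    the iteration helper is written with today's name):

    s64 percpu_counter_sum(struct percpu_counter *fbc)
    {
            s64 ret;
            int cpu;

            spin_lock(&fbc->lock);
            ret = fbc->count;
            for_each_possible_cpu(cpu) {
                    s32 *pcount = per_cpu_ptr(fbc->counters, cpu);
                    ret += *pcount;
            }
            spin_unlock(&fbc->lock);
            return ret < 0 ? 0 : ret;
    }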

    Cc: Ravikiran G Thirumalai
    Cc: Alex Tomas
    Cc: "David S. Miller"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • Fix the mm/mempolicy.c build for !CONFIG_HUGETLB_PAGE.

    Cc: Christoph Lameter
    Cc: Martin Bligh
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • Instead of having a hard-to-read and confusing conditional in the
    caller, just make the slab order calculation handle this special case,
    since it's simple and obvious there.

    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

07 Mar, 2006

3 commits

  • Change the format of numa_maps to be more compact and contain additional
    information that is useful for managing and troubleshooting memory on a
    NUMA system. numa_maps now also supports huge pages.

    Fixes:

    1. More compact format. Only display fields if they contain additional
    information.

    2. Always display information for all vmas. The old numa_maps did not
    display vmas with no mapped entries. This was a bit confusing because page
    migration removes ptes for file backed vmas, so after page migration
    some of the vmas seemed to vanish.

    3. Rename maxref to maxmap. This is the maximum mapcount of all the pages
    in a vma and may be used as an indicator of how many processes
    may be using a certain vma.

    4. Include the ability to scan over huge page vmas.

    New items shown:

    dirty
    Number of pages in a vma that have the dirty bit set either in the
    struct page or in the pte.

    file=
    The file backing the pages if any

    stack
    Stack area

    heap
    Heap area

    huge
    Huge page area. The number of pages shown is the number of huge
    pages, not regular sized pages.

    swapcache
    Number of pages with swap references. Must be >0 in order to
    be shown.

    active
    Number of active pages. Only displayed if different from the number
    of pages mapped.

    writeback
    Number of pages under writeback. Only displayed if >0.

    Sample output of a process using huge pages:

    00000000 default
    2000000000000000 default file=/lib/ld-2.3.90.so mapped=13 mapmax=30 N0=13
    2000000000044000 default file=/lib/ld-2.3.90.so anon=2 dirty=2 swapcache=2 N2=2
    2000000000064000 default file=/lib/librt-2.3.90.so mapped=2 active=1 N1=1 N3=1
    2000000000074000 default file=/lib/librt-2.3.90.so
    2000000000080000 default file=/lib/librt-2.3.90.so anon=1 swapcache=1 N2=1
    2000000000084000 default
    2000000000088000 default file=/lib/libc-2.3.90.so mapped=52 mapmax=32 active=48 N0=52
    20000000002bc000 default file=/lib/libc-2.3.90.so
    20000000002c8000 default file=/lib/libc-2.3.90.so anon=3 dirty=2 swapcache=3 active=2 N1=1 N2=2
    20000000002d4000 default anon=1 swapcache=1 N1=1
    20000000002d8000 default file=/lib/libpthread-2.3.90.so mapped=8 mapmax=3 active=7 N2=2 N3=6
    20000000002fc000 default file=/lib/libpthread-2.3.90.so
    2000000000308000 default file=/lib/libpthread-2.3.90.so anon=1 dirty=1 swapcache=1 N1=1
    200000000030c000 default anon=1 dirty=1 swapcache=1 N1=1
    2000000000320000 default anon=1 dirty=1 N1=1
    200000000071c000 default
    2000000000720000 default anon=2 dirty=2 swapcache=1 N1=1 N2=1
    2000000000f1c000 default
    2000000000f20000 default anon=2 dirty=2 swapcache=1 active=1 N2=1 N3=1
    200000000171c000 default
    2000000001720000 default anon=1 dirty=1 swapcache=1 N1=1
    2000000001b20000 default
    2000000001b38000 default file=/lib/libgcc_s.so.1 mapped=2 N1=2
    2000000001b48000 default file=/lib/libgcc_s.so.1
    2000000001b54000 default file=/lib/libgcc_s.so.1 anon=1 dirty=1 active=0 N1=1
    2000000001b58000 default file=/lib/libunwind.so.7.0.0 mapped=2 active=1 N1=2
    2000000001b74000 default file=/lib/libunwind.so.7.0.0
    2000000001b80000 default file=/lib/libunwind.so.7.0.0
    2000000001b84000 default
    4000000000000000 default file=/media/huge/test9 mapped=1 N1=1
    6000000000000000 default file=/media/huge/test9 anon=1 dirty=1 active=0 N1=1
    6000000000004000 default heap
    607fffff7fffc000 default anon=1 dirty=1 swapcache=1 N2=1
    607fffffff06c000 default stack anon=1 dirty=1 active=0 N1=1
    8000000060000000 default file=/mnt/huge/test0 huge dirty=3 N1=3
    8000000090000000 default file=/mnt/huge/test1 huge dirty=3 N0=1 N2=2
    80000000c0000000 default file=/mnt/huge/test2 huge dirty=3 N1=1 N3=2

    Signed-off-by: Christoph Lameter
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • If we triggered the 'offslab_limit' test, we would return with
    cachep->gfporder incremented once too many times.

    This clarifies the logic somewhat, and fixes that bug.

    Signed-off-by: Linus Torvalds

    Linus Torvalds
     
  • We want to use the "struct slab" size, not the size of the pointer to
    same. As it is, we'd not print out the last few entry pointers in the
    slab (around 10 of them, depending on whether it's a 32-bit or 64-bit
    kernel).

    Gaah, that slab code was written by somebody who likes unreadable crud.

    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

03 Mar, 2006

2 commits

  • numa_maps should not scan over huge vmas in order not to cause problems for
    non-IA64 platforms that may have pte entries pointing to huge pages in a
    variety of ways in their page tables. Add a simple check to ignore vmas
    containing huge pages.
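
    The check is a one-liner in the numa_maps gather path (sketch):

    if (is_vm_hugetlb_page(vma))
            return;         /* huge page ptes are laid out differently per arch */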

    Signed-off-by: Christoph Lameter
    Cc: Hugh Dickins
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • I seem to have lost this read_unlock().

    While we're there, let's turn that interruptible sleep into an uninterruptible one,
    so we don't get a busywait if signal_pending(). (Again. We seem to have a
    habit of doing this).

    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     

01 Mar, 2006

4 commits


25 Feb, 2006

2 commits

  • - PF_SWAPWRITE needs to be set for RECLAIM_SWAP to be able to write
    out pages to swap. Currently RECLAIM_SWAP may not do that.

    - Remove the setting of nr_reclaimed after slab reclaim: the slab shrinking
    code does not use it, and nr_reclaimed is already right for the
    intended follow-up action.
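
    In sketch form, the PF_SWAPWRITE part of the zone_reclaim() change:

    if (zone_reclaim_mode & RECLAIM_SWAP)
            current->flags |= PF_SWAPWRITE;     /* allow writing pages to swap */
    /* ... run the reclaim pass ... */
    current->flags &= ~PF_SWAPWRITE;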

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • migrate_pages_to() allocates a list of new pages on the intended target
    node or with the intended policy and then uses the list of new pages as
    targets for the migration of a list of pages out of place.

    When the pages are allocated it is not clear which of the out of place
    pages will be moved to the new pages. So we cannot specify an address as
    needed by alloc_page_vma(). This causes a problem for MPOL_INTERLEAVE,
    which will currently allocate all the pages on the first node of the set.
    If mbind is used with a vma that has an MPOL_INTERLEAVE policy then the
    interleaving of pages may be destroyed.

    This patch fixes that by generating a fake address for each alloc_page_vma
    call, which results in a distribution of pages as prescribed by
    MPOL_INTERLEAVE.

    Lee also noted that the sequence of nodes for the new pages seems to be
    inverted. So we also invert the way the lists of pages for migration are
    built.
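
    A sketch of the allocation loop with the fake, increasing address
    (variable names are illustrative):

    unsigned long offset = 0;
    struct page *page, *newpage;

    list_for_each_entry(page, pagelist, lru) {
            /* the fake address makes MPOL_INTERLEAVE pick successive nodes */
            newpage = alloc_page_vma(GFP_HIGHUSER, vma,
                                     vma->vm_start + offset);
            if (!newpage)
                    break;
            list_add_tail(&newpage->lru, newlist);
            offset += PAGE_SIZE;
    }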

    Signed-off-by: Christoph Lameter
    Signed-off-by: Lee Schermerhorn
    Looks-ok-to: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

22 Feb, 2006

1 commit

  • I've been dissatisfied with the mpol_nodelist mount option which was
    added to tmpfs earlier in -rc. Replace it by mpol=policy:nodelist.

    And it was broken: a nodelist is a comma-separated list of numbers and
    ranges; the mount options are a comma-separated list of token=values.
    Whoops, blindly strsep'ing on commas doesn't work so well: since we've
    no numeric tokens, and are unlikely to add them, use that to distinguish.

    Move the mpol= parsing to shmem_parse_mpol under CONFIG_NUMA, reject
    all its options as invalid if not NUMA. /proc shows MPOL_PREFERRED
    as "prefer", so use that name for the policy instead of "preferred".

    Enforce that mpol=default has no nodelist; that mpol=prefer has one
    node only; that mpol=bind has a nodelist; but let mpol=interleave use
    node_online_map if no nodelist given. Describe this in tmpfs.txt.

    Signed-off-by: Hugh Dickins
    Acked-by: Robin Holt
    Acked-by: Andi Kleen
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     

21 Feb, 2006

5 commits

  • [akpm; it happens that the code was still correct, only inefficient]

    Signed-off-by: Alexey Dobriyan
    Cc: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexey Dobriyan
     
  • Signed-off-by: Luke Yang
    Acked-by: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Luke Yang
     
  • maxnode is a bit index and can't be directly compared against a byte length
    like PAGE_SIZE.
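
    The principle, sketched (compare bits with bits):

    if (maxnode > PAGE_SIZE * 8)    /* PAGE_SIZE bytes hold PAGE_SIZE*8 bits */
            return -EINVAL;
    nlongs = BITS_TO_LONGS(maxnode);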

    Signed-off-by: Andi Kleen
    Cc: Chris Wright
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andi Kleen
     
  • Some allocations are restricted to a limited set of nodes (due to memory
    policies or cpuset constraints). If the page allocator is not able to find
    enough memory then that does not mean that overall system memory is low.

    In particular, going postal and more or less randomly shooting at processes
    is not likely to help the situation but may just lead to suicide (the
    whole system coming down).

    It is better to signal to the process that no memory exists given the
    constraints that the process (or the configuration of the process) has
    placed on the allocation behavior. The process may be killed but then the
    sysadmin or developer can investigate the situation. The solution is
    similar to what we do when running out of hugepages.

    This patch adds a check before we kill processes. At that point
    performance considerations do not matter much so we just scan the zonelist
    and reconstruct a list of nodes. If the list of nodes does not contain all
    online nodes then this is a constrained allocation and we should kill the
    current process.
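
    A sketch of that check (simplified; run just before the victim scan in
    the OOM killer):

    nodemask_t nodes = NODE_MASK_NONE;
    struct zone **z;

    /* collect the nodes this zonelist can actually reach */
    for (z = zonelist->zones; *z; z++)
            node_set((*z)->zone_pgdat->node_id, nodes);

    if (!nodes_equal(nodes, node_online_map))
            return 1;       /* constrained allocation: kill current instead */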

    Signed-off-by: Christoph Lameter
    Cc: Nick Piggin
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • In the badness() calculation, there's currently this piece of code:

    /*
     * Processes which fork a lot of child processes are likely
     * a good choice. We add the vmsize of the children if they
     * have an own mm. This prevents forking servers to flood the
     * machine with an endless amount of children
     */
    list_for_each(tsk, &p->children) {
            struct task_struct *chld;
            chld = list_entry(tsk, struct task_struct, sibling);
            if (chld->mm != p->mm && chld->mm)
                    points += chld->mm->total_vm;
    }

    The intention is clear: If some server (apache) keeps spawning new children
    and we run OOM, we want to kill the father rather than picking a child.

    This -- to some degree -- also helps a bit with getting fork bombs under
    control, though I'd consider this a desirable side-effect rather than a
    feature.

    There's one problem with this: No matter how many or few children there are,
    if just one of them misbehaves, and all others (including the father) do
    everything right, we still always kill the whole family. This hits in real
    life; whether it's javascript in konqueror resulting in kdeinit (and thus the
    whole KDE session) being hit or just a classical server that spawns children.

    Sidenote: The killer does kill all direct children as well, not only the
    selected father, see oom_kill_process().

    The idea in attached patch is that we do want to account the memory
    consumption of the (direct) children to the father -- however not fully.
    This maintains the property that fathers with too many children will still
    very likely be picked, whereas a single misbehaving child has the chance to
    be picked by the OOM killer.

    In the patch I account only half (rounded up) of the children's vm_size to
    the parent. This means that if one child eats more mem than the rest of
    the family, it will be picked, otherwise it's still the father and thus the
    whole family that gets selected.

    This is a heuristic -- we could debate whether accounting for a fourth would
    be better than for half of it. Or -- if people would consider it worth the
    trouble -- make it a sysctl. For now I stuck to accounting for half,
    which should IMHO be a significant improvement.

    The patch does one more thing: As users tend to be irritated by the choice
    of killed processes (mainly because the children are killed first, despite
    some of them having a very low OOM score), I added some more output: The
    selected (father) process will be reported first and its oom_score printed
    to syslog.

    Description:

    Only account for half of children's vm size in oom score calculation

    This should still give the parent enough points in case of fork bombs. If
    any child, however, has more than 50% of the vm size of all children
    together, it'll get a higher score and be elected.

    This patch also makes the kernel display the oom_score.
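
    The accounting change itself is small; compared with the snippet quoted
    above it becomes (sketch):

    list_for_each(tsk, &p->children) {
            struct task_struct *chld;
            chld = list_entry(tsk, struct task_struct, sibling);
            if (chld->mm != p->mm && chld->mm)
                    points += chld->mm->total_vm/2 + 1;    /* half, rounded up */
    }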

    Signed-off-by: Kurt Garloff
    Cc: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kurt Garloff
     

18 Feb, 2006

4 commits

  • Make sure maxnodes is a safe size before calculating nlongs in
    get_nodes().

    Signed-off-by: Chris Wright
    Signed-off-by: Linus Torvalds

    Chris Wright
     
  • Change the find_next_best_node algorithm to correctly skip
    over holes in the node online mask. Previously it would not handle
    missing nodes correctly and cause crashes at boot.

    [Written by Linus, tested by AK]
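
    In sketch form, find_next_best_node()'s candidate loop now skips node ids
    that are not online (variable names are illustrative):

    for (n = 0; n < MAX_NUMNODES; n++) {
            if (!node_isset(n, node_online_map))    /* hole in the node map */
                    continue;
            if (node_isset(n, *used_node_mask))     /* already in the zonelist */
                    continue;
            val = node_distance(node, n);           /* prefer the closest node */
            if (val < min_val) {
                    min_val = val;
                    best_node = n;
            }
    }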

    Signed-off-by: Andi Kleen
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     
  • The memory allocator doesn't like empty zones (which have an
    uninitialized freelist), so an x86-64 system with a node whose memory is
    entirely in GFP_DMA32 would crash on mbind.

    Fix that up by putting all possible zones as fallback into the zonelist
    and skipping the empty ones.

    In fact the code always allocated enough space for all zones,
    but only used it for the highest. This change just uses all the
    memory that was allocated before.

    This should work fine for now, but whoever implements node hot removal
    needs to fix this somewhere else too (or make sure the zone data structures
    themselves never go away, only their memory).
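
    A sketch of the zonelist construction (close in shape to mempolicy.c's
    bind_zonelist of that era):

    /* put every zone level of the allowed nodes into the fallback list,
     * highest first, skipping zones with no memory */
    for (k = policy_zone; k >= 0; k--) {
            for_each_node_mask(nd, *nodes) {
                    struct zone *z = &NODE_DATA(nd)->node_zones[k];

                    if (z->present_pages > 0)
                            zl->zones[num++] = z;
            }
    }
    zl->zones[num] = NULL;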

    Signed-off-by: Andi Kleen
    Acked-by: Christoph Lameter
    Signed-off-by: Linus Torvalds

    Andi Kleen
     
  • AMD SimNow!'s JIT doesn't like them at all in the guest. For distribution
    installation it's easiest if it's a boot time option.

    Also I moved the variable to a more appropriate place and made
    it independent of sysctl.

    And marked it __read_mostly, which it is.

    Signed-off-by: Andi Kleen
    Signed-off-by: Linus Torvalds

    Andi Kleen
     

15 Feb, 2006

3 commits

  • Currently, copy-on-write may change the physical address of a page even if the
    user requested that the page be pinned in memory (either by mlock or by
    get_user_pages). This happens if the process forks meanwhile, and the parent
    writes to that page. As a result, the page is orphaned: in case of
    get_user_pages, the application will never see any data the hardware DMAs into
    this page after the COW. In case of mlock'd memory, the parent is not getting
    the realtime/security benefits of mlock.

    In particular, this affects the Infiniband modules which do DMA from and into
    user pages all the time.

    This patch adds madvise options to control whether a memory range is inherited
    across fork. Useful e.g. for when hardware is doing DMA from/into these
    pages. Could also be useful to an application wanting to speed up its forks
    by cutting large areas out of consideration.
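
    From userspace the new options (MADV_DONTFORK, with MADV_DOFORK to undo
    it) are used like any other madvise flag; a minimal example:

    #include <sys/mman.h>

    /* keep a registered DMA buffer out of any future children */
    static int pin_for_dma(void *buf, size_t len)
    {
            if (mlock(buf, len))                     /* keep it resident */
                    return -1;
            return madvise(buf, len, MADV_DONTFORK); /* not inherited, so never COWed */
    }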

    Signed-off-by: Michael S. Tsirkin
    Acked-by: Hugh Dickins
    Cc: Michael Kerrisk
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michael S. Tsirkin
     
  • Somehow I imagined that calling a NULL destructor would free a compound page
    rather than oopsing. No, we must supply a default destructor that calls
    __free_pages_ok using the order noted by prep_compound_page. hugetlb can
    still replace this
    as before with its own free_huge_page pointer.

    The case that needs this is not common: rarely does put_compound_page's
    put_page_testzero bring the count down to 0. But if get_user_pages is applied
    to some part of a compound page, without immediate release (e.g. AIO or
    Infiniband), then it's possible for its put_page to come after the containing
    vma has been unmapped and the driver done its free_pages.

    That's just the kind of case compound pages are supposed to be guarding
    against (but Nick points out, nor did PageReserved handle this right).
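
    A sketch of the default destructor (using the page[1].lru layout
    described in the next entry to retrieve the order):

    static void free_compound_page(struct page *page)
    {
            unsigned long order = (unsigned long)page[1].lru.prev;

            __free_pages_ok(page, order);
    }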

    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • If a compound page has its own put_page_testzero destructor (the only current
    example is free_huge_page), that is noted in page[1].mapping of the compound
    page. But that's rather a poor place to keep it: functions which call
    set_page_dirty_lock after get_user_pages (e.g. Infiniband's
    __ib_umem_release) ought to be checking first, otherwise set_page_dirty is
    liable to crash on what's not the address of a struct address_space.

    And now I'm about to make that worse: it turns out that every compound page
    needs a destructor, so we can no longer rely on hugetlb pages going their own
    special way, to avoid further problems of page->mapping reuse. For example,
    not many people know that: on 50% of i386 -Os builds, the first tail page of a
    compound page purports to be PageAnon (when its destructor has an odd
    address), which surprises page_add_file_rmap.

    Keep the compound page destructor in page[1].lru.next instead. And to free up
    the common pairing of mapping and index, also move compound page order from
    index to lru.prev. Slab reuses page->lru too: but if we ever need slab to use
    compound pages, it can easily stack its use above this.

    (akpm: decoded version of the above: the tail pages of a compound page now
    have ->mapping==NULL, so there's no need for the set_page_dirty[_lock]()
    caller to check that they're not compound pages before doing the dirty).
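
    Illustrative helpers for the new layout (the patch does the equivalent
    assignments where compound pages are prepared):

    static inline void set_compound_page_dtor(struct page *page,
                                              void (*dtor)(struct page *))
    {
            page[1].lru.next = (void *)dtor;        /* destructor */
    }

    static inline void set_compound_order(struct page *page, unsigned long order)
    {
            page[1].lru.prev = (void *)order;       /* order, out of ->index */
    }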

    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     

12 Feb, 2006

3 commits