15 Nov, 2014

7 commits

  • commit f55fefd1a5a339b1bd08c120b93312d6eb64a9fb upstream.

    The WARN_ON checking whether i_mutex is held in
    pagecache_isize_extended() was wrong because some filesystems (e.g.
    XFS) use different locks for serialization of truncates / writes. So
    just remove the check.

    Signed-off-by: Jan Kara
    Reviewed-by: Dave Chinner
    Signed-off-by: Dave Chinner
    Signed-off-by: Greg Kroah-Hartman

    Jan Kara
     
  • commit 2f7dd7a4100ad4affcb141605bef178ab98ccb18 upstream.

    The cgroup iterators yield css objects that have not yet gone through
    css_online(), but they are not complete memcgs at this point and so the
    memcg iterators should not return them. Commit d8ad30559715 ("mm/memcg:
    iteration skip memcgs not yet fully initialized") set out to implement
    exactly this, but it uses CSS_ONLINE, a cgroup-internal flag that does
    not meet the ordering requirements for memcg, and so the iterator may
    skip over initialized groups, or return partially initialized memcgs.

    The cgroup core can not reasonably provide a clear answer on whether the
    object around the css has been fully initialized, as that depends on
    controller-specific locking and lifetime rules. Thus, introduce a
    memcg-specific flag that is set after the memcg has been initialized in
    css_online(), and read before mem_cgroup_iter() callers access the memcg
    members.
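
    A rough user-space sketch of the flag-and-ordering pattern described
    above, using C11 atomics in place of the kernel's barriers; the structure
    and names are illustrative, not the actual memcg code:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Illustrative stand-in for a memcg: some fields plus an "initialized" flag. */
    struct fake_memcg {
        int some_state;
        int more_state;
        atomic_bool initialized;   /* set only after all fields are set up */
    };

    /* Writer side: analogous to css_online() finishing memcg setup. */
    void fake_css_online(struct fake_memcg *memcg)
    {
        memcg->some_state = 1;
        memcg->more_state = 2;
        /* Release store: everything written above becomes visible to readers
         * that observe initialized == true with an acquire load. */
        atomic_store_explicit(&memcg->initialized, true, memory_order_release);
    }

    /* Reader side: analogous to mem_cgroup_iter() skipping incomplete groups. */
    struct fake_memcg *fake_iter_filter(struct fake_memcg *candidate)
    {
        if (candidate == NULL)
            return NULL;
        /* Acquire load pairs with the release store above. */
        if (!atomic_load_explicit(&candidate->initialized, memory_order_acquire))
            return NULL;   /* not fully set up yet: skip it */
        return candidate;  /* now safe to touch candidate->some_state etc. */
    }

    int main(void)
    {
        struct fake_memcg m = { 0 };
        fake_css_online(&m);
        return fake_iter_filter(&m) ? 0 : 1;
    }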

    Signed-off-by: Johannes Weiner
    Cc: Tejun Heo
    Acked-by: Michal Hocko
    Cc: Hugh Dickins
    Cc: Peter Zijlstra
    Cc: [3.12+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Johannes Weiner
     
  • commit 401507d67d5c2854f5a88b3f93f64fc6f267bca5 upstream.

    Commit ff7ee93f4715 ("cgroup/kmemleak: Annotate alloc_page() for cgroup
    allocations") introduces kmemleak_alloc() for alloc_page_cgroup(), but
    the corresponding kmemleak_free() is missing, which causes kmemleak to be
    wrongly disabled after memory offlining. The log is pasted at the end of
    this commit message.

    This patch adds kmemleak_free() to free_page_cgroup(). During page
    offlining it removes the corresponding entries from the kmemleak rbtree,
    so the freed memory can be allocated again by other subsystems
    without killing kmemleak.
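
    The essence of the fix is that every tracked allocation needs a matching
    removal from kmemleak's object tree, otherwise a later allocation of the
    same range collides with the stale entry. A toy user-space model of that
    invariant (the tracker here is a stand-in, not the kmemleak API):

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy model of kmemleak's object tracking: one entry per live allocation. */
    struct tracked { void *addr; size_t size; int live; };
    static struct tracked objects[16];

    static int track_alloc(void *addr, size_t size)
    {
        for (int i = 0; i < 16; i++) {
            if (objects[i].live &&
                (char *)addr < (char *)objects[i].addr + objects[i].size &&
                (char *)objects[i].addr < (char *)addr + size) {
                printf("cannot insert %p: overlaps existing entry\n", addr);
                return -1;   /* the point at which real kmemleak gives up */
            }
        }
        for (int i = 0; i < 16; i++) {
            if (!objects[i].live) {
                objects[i] = (struct tracked){ addr, size, 1 };
                return 0;
            }
        }
        return -1;
    }

    static void track_free(void *addr)
    {
        for (int i = 0; i < 16; i++)
            if (objects[i].live && objects[i].addr == addr)
                objects[i].live = 0;
    }

    int main(void)
    {
        void *p = malloc(4096);
        track_alloc(p, 4096);
        free(p);                 /* memory released ... */
        /* ... but without track_free(p) the stale entry remains, and a later
         * allocation that happens to reuse this range fails to insert - the
         * analogue of free_page_cgroup() missing its kmemleak_free() call. */
        void *q = malloc(4096);  /* may reuse the same range */
        track_alloc(q, 4096);
        track_free(q);
        free(q);
        return 0;
    }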

    bash # for x in 1 2 3 4; do echo offline > /sys/devices/system/memory/memory$x/state ; sleep 1; done ; dmesg | grep leak

    Offlined Pages 32768
    kmemleak: Cannot insert 0xffff880016969000 into the object search tree (overlaps existing)
    CPU: 0 PID: 412 Comm: sleep Not tainted 3.17.0-rc5+ #86
    Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
    Call Trace:
    dump_stack+0x46/0x58
    create_object+0x266/0x2c0
    kmemleak_alloc+0x26/0x50
    kmem_cache_alloc+0xd3/0x160
    __sigqueue_alloc+0x49/0xd0
    __send_signal+0xcb/0x410
    send_signal+0x45/0x90
    __group_send_sig_info+0x13/0x20
    do_notify_parent+0x1bb/0x260
    do_exit+0x767/0xa40
    do_group_exit+0x44/0xa0
    SyS_exit_group+0x17/0x20
    system_call_fastpath+0x16/0x1b

    kmemleak: Kernel memory leak detector disabled
    kmemleak: Object 0xffff880016900000 (size 524288):
    kmemleak: comm "swapper/0", pid 0, jiffies 4294667296
    kmemleak: min_count = 0
    kmemleak: count = 0
    kmemleak: flags = 0x1
    kmemleak: checksum = 0
    kmemleak: backtrace:
    log_early+0x63/0x77
    kmemleak_alloc+0x4b/0x50
    init_section_page_cgroup+0x7f/0xf5
    page_cgroup_init+0xc5/0xd0
    start_kernel+0x333/0x408
    x86_64_start_reservations+0x2a/0x2c
    x86_64_start_kernel+0xf5/0xfc

    Fixes: ff7ee93f4715 (cgroup/kmemleak: Annotate alloc_page() for cgroup allocations)
    Signed-off-by: Wang Nan
    Acked-by: Johannes Weiner
    Acked-by: Michal Hocko
    Cc: Steven Rostedt
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Wang Nan
     
  • commit 5ddacbe92b806cd5b4f8f154e8e46ac267fff55c upstream.

    A compound page should be freed by put_page() or free_pages() with the
    correct order; otherwise its tail pages are leaked.

    The compound order can be obtained by compound_order() or use
    HPAGE_PMD_ORDER in our case. Some people would argue the latter is
    faster but I prefer the former which is more general.

    This bug was observed not just on our servers (the worst case we saw is
    11G leaked on a 48G machine) but also on our workstations running an
    Ubuntu-based distro.

    $ cat /proc/vmstat | grep thp_zero_page_alloc
    thp_zero_page_alloc 55
    thp_zero_page_alloc_failed 0

    This means there is (thp_zero_page_alloc - 1) * (2M - 4K) memory leaked.
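
    For the numbers quoted above, the estimated leak works out as below (a
    quick back-of-the-envelope computation assuming 2M huge pages and 4K base
    pages):

    #include <stdio.h>

    int main(void)
    {
        unsigned long thp_zero_page_alloc = 55;   /* from /proc/vmstat */
        unsigned long huge = 2UL * 1024 * 1024;   /* 2M huge zero page */
        unsigned long base = 4UL * 1024;          /* 4K head page actually freed */

        /* Each re-allocation except the first leaks the tail pages (2M - 4K). */
        unsigned long leaked = (thp_zero_page_alloc - 1) * (huge - base);
        printf("leaked: %lu bytes (~%lu MB)\n", leaked, leaked >> 20);
        return 0;
    }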

    Fixes: 97ae17497e99 ("thp: implement refcounting for huge zero page")
    Signed-off-by: Yu Zhao
    Acked-by: Kirill A. Shutemov
    Cc: Andrea Arcangeli
    Cc: Mel Gorman
    Cc: David Rientjes
    Cc: Bob Liu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Yu Zhao
     
  • commit 5695be142e203167e3cb515ef86a88424f3524eb upstream.

    PM freezer relies on having all tasks frozen by the time devices are
    getting frozen so that no task will touch them while they are getting
    frozen. But the OOM killer is allowed to kill an already frozen task in
    order to handle an OOM situation. To protect against late wake-ups, the
    OOM killer is disabled after all tasks are frozen. This, however, still
    keeps a window open when a killed task didn't manage to die by the time
    freeze_processes finishes.

    Reduce the race window by checking all tasks after the OOM killer has been
    disabled. This is still not completely race free, unfortunately, because
    oom_killer_disable cannot stop an already ongoing OOM kill, so a task
    might still wake up from the fridge and get killed without
    freeze_processes noticing. Full synchronization of OOM and freezer is,
    however, too heavyweight for this highly unlikely case.

    Introduce and check an oom_kills counter which gets incremented early when
    the allocator enters the __alloc_pages_may_oom path, and only check all
    the tasks if the counter changes during the freezing attempt. The counter
    is updated this early to reduce the race window, since the allocator has
    already checked oom_killer_disabled, which is set by the PM-freezing code.
    A false positive will push the PM freezer into a slow path but that is not
    a big deal.
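
    A compact user-space sketch of the counter scheme described above, with
    an atomic counter standing in for oom_kills; the names and structure are
    illustrative only:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    static atomic_long oom_kills;             /* bumped when the allocator may OOM */

    static void alloc_pages_may_oom(void)     /* allocator side (sketch) */
    {
        atomic_fetch_add(&oom_kills, 1);      /* incremented early, before killing */
        /* ... possibly invoke the OOM killer ... */
    }

    static bool check_frozen_processes(void)  /* stand-in for re-checking all tasks */
    {
        return true;                          /* pretend everything is still frozen */
    }

    static bool freeze_processes(void)        /* PM freezer side (sketch) */
    {
        long snapshot = atomic_load(&oom_kills);

        /* ... freeze all tasks, disable the OOM killer ... */

        /* Only do the expensive re-check if an OOM kill could have raced in. */
        if (atomic_load(&oom_kills) != snapshot)
            return check_frozen_processes();
        return true;
    }

    int main(void)
    {
        alloc_pages_may_oom();
        printf("freeze %s\n", freeze_processes() ? "ok" : "failed");
        return 0;
    }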

    Changes since v1
    - push the re-check loop out of freeze_processes into
    check_frozen_processes and invert the condition to make the code more
    readable as per Rafael

    Fixes: f660daac474c6f (oom: thaw threads if oom killed thread is frozen before deferring)
    Signed-off-by: Michal Hocko
    Signed-off-by: Rafael J. Wysocki
    Signed-off-by: Greg Kroah-Hartman

    Michal Hocko
     
  • commit 90a8020278c1598fafd071736a0846b38510309c upstream.

    ->page_mkwrite() is used by filesystems to allocate blocks under a page
    which is becoming writeably mmapped in some process' address space. This
    allows a filesystem to fail the page fault if there is not enough space
    available, the user exceeds their quota, or a similar problem happens,
    rather than silently discarding data later when writepage is called.

    However VFS fails to call ->page_mkwrite() in all the cases where
    filesystems need it when blocksize < pagesize. For example when
    blocksize = 1024, pagesize = 4096 the following is problematic:
    ftruncate(fd, 0);
    pwrite(fd, buf, 1024, 0);
    map = mmap(NULL, 1024, PROT_WRITE, MAP_SHARED, fd, 0);
    map[0] = 'a'; ----> page_mkwrite() for index 0 is called
    ftruncate(fd, 10000); /* or even pwrite(fd, buf, 1, 10000) */
    mremap(map, 1024, 10000, 0);
    map[4095] = 'a'; ----> no page_mkwrite() called

    At the moment ->page_mkwrite() is called, the filesystem can allocate only
    one block for the page because i_size == 1024. Otherwise it would create
    blocks beyond i_size, which is generally undesirable. But later, at
    ->writepage() time, we also need to store data at offset 4095, and we
    don't have a block allocated for it.
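
    For experimentation, the sequence above can be wrapped into a complete
    test program along the following lines (an untested sketch that merely
    mirrors the steps from the commit message and assumes a filesystem with
    blocksize < pagesize):

    #define _GNU_SOURCE          /* for mremap() */
    #include <err.h>
    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[1024];
        char *map;
        int fd;

        memset(buf, 'x', sizeof(buf));

        fd = open("testfile", O_RDWR | O_CREAT | O_TRUNC, 0644);
        if (fd == -1)
            err(1, "open");

        if (ftruncate(fd, 0) == -1)            /* redundant after O_TRUNC, kept
                                                  to mirror the sequence above */
            err(1, "ftruncate");
        if (pwrite(fd, buf, 1024, 0) != 1024)
            err(1, "pwrite");

        map = mmap(NULL, 1024, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED)
            err(1, "mmap");
        map[0] = 'a';                          /* page_mkwrite() for index 0 */

        if (ftruncate(fd, 10000) == -1)        /* or pwrite(fd, buf, 1, 10000) */
            err(1, "ftruncate");
        map = mremap(map, 1024, 10000, 0);
        if (map == MAP_FAILED)
            err(1, "mremap");
        map[4095] = 'a';                       /* no page_mkwrite() called */

        return 0;
    }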

    This patch introduces a helper function filesystems can use to have
    ->page_mkwrite() called at all the necessary moments.

    Signed-off-by: Jan Kara
    Signed-off-by: Theodore Ts'o
    Signed-off-by: Greg Kroah-Hartman

    Jan Kara
     
  • commit bb2e226b3bef596dd56be97df655d857b4603923 upstream.

    This reverts commit 3189eddbcafc ("percpu: free percpu allocation info for
    uniprocessor system").

    The commit causes a hang with a crisv32 image. This may be an architecture
    problem, but at least for now the revert is necessary to be able to boot a
    crisv32 image.

    Cc: Tejun Heo
    Cc: Honggang Li
    Signed-off-by: Guenter Roeck
    Signed-off-by: Tejun Heo
    Fixes: 3189eddbcafc ("percpu: free percpu allocation info for uniprocessor system")
    Signed-off-by: Greg Kroah-Hartman

    Guenter Roeck
     

10 Oct, 2014

23 commits

  • commit 50f5aa8a9b248fa4262cf379863ec9a531b49737 upstream.

    BUG_ON() is a big hammer, and should be used _only_ if there is some
    major corruption that you cannot possibly recover from, making it
    imperative that the current process (and possibly the whole machine) be
    terminated with extreme prejudice.

    The trivial sanity check in the vmacache code is *not* such a fatal
    error. Recovering from it is absolutely trivial, and using BUG_ON()
    just makes it harder to debug for no actual advantage.

    To make matters worse, the placement of the BUG_ON() (only if the range
    check matched) actually makes it harder to hit the sanity check to begin
    with, so _if_ there is a bug (and we just got a report from Srivatsa
    Bhat that this can indeed trigger), it is harder to debug not just
    because the machine is possibly dead, but because we don't have better
    coverage.

    BUG_ON() must *die*. Maybe we should add a checkpatch warning for it,
    because it is simply just about the worst thing you can ever do if you
    hit some "this cannot happen" situation.

    Reported-by: Srivatsa S. Bhat
    Cc: Davidlohr Bueso
    Cc: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Mel Gorman
    Signed-off-by: Greg Kroah-Hartman

    Linus Torvalds
     
  • commit 615d6e8756c87149f2d4c1b93d471bca002bd849 upstream.

    This patch is a continuation of efforts trying to optimize find_vma(),
    avoiding potentially expensive rbtree walks to locate a vma upon faults.
    The original approach (https://lkml.org/lkml/2013/11/1/410), where the
    largest vma was also cached, ended up being too specific and random,
    thus further comparison with other approaches was needed. There are
    two things to consider when dealing with this: the cache hit rate and
    the latency of find_vma(). Improving the hit rate does not necessarily
    translate into finding the vma any faster, as the overhead of any fancy
    caching schemes can be too high to consider.

    We currently cache the last used vma for the whole address space, which
    provides a nice optimization, reducing the total cycles in find_vma() by
    up to 250%, for workloads with good locality. On the other hand, this
    simple scheme is pretty much useless for workloads with poor locality.
    Analyzing ebizzy runs shows that, no matter how many threads are
    running, the mmap_cache hit rate is less than 2%, and in many situations
    below 1%.

    The proposed approach is to replace this scheme with a small per-thread
    cache, maximizing hit rates at a very low maintenance cost.
    Invalidations are performed by simply bumping up a 32-bit sequence
    number. The only expensive operation is in the rare case of a seq
    number overflow, where all caches that share the same address space are
    flushed. Upon a miss, the proposed replacement policy is based on the
    page number that contains the virtual address in question. Concretely,
    the following results are seen on an 80 core, 8 socket x86-64 box:

    1) System bootup: Most programs are single threaded, so the per-thread
    scheme does improve ~50% hit rate by just adding a few more slots to
    the cache.

    +----------------+----------+------------------+
    | caching scheme | hit-rate | cycles (billion) |
    +----------------+----------+------------------+
    | baseline | 50.61% | 19.90 |
    | patched | 73.45% | 13.58 |
    +----------------+----------+------------------+

    2) Kernel build: This one is already pretty good with the current
    approach as we're dealing with good locality.

    +----------------+----------+------------------+
    | caching scheme | hit-rate | cycles (billion) |
    +----------------+----------+------------------+
    | baseline | 75.28% | 11.03 |
    | patched | 88.09% | 9.31 |
    +----------------+----------+------------------+

    3) Oracle 11g Data Mining (4k pages): Similar to the kernel build workload.

    +----------------+----------+------------------+
    | caching scheme | hit-rate | cycles (billion) |
    +----------------+----------+------------------+
    | baseline | 70.66% | 17.14 |
    | patched | 91.15% | 12.57 |
    +----------------+----------+------------------+

    4) Ebizzy: There's a fair amount of variation from run to run, but this
    approach always shows nearly perfect hit rates, while baseline is just
    about non-existent. The number of cycles can fluctuate anywhere from ~60
    to ~116 for the baseline scheme, but this approach reduces it
    considerably. For instance, with 80 threads:

    +----------------+----------+------------------+
    | caching scheme | hit-rate | cycles (billion) |
    +----------------+----------+------------------+
    | baseline | 1.06% | 91.54 |
    | patched | 99.97% | 14.18 |
    +----------------+----------+------------------+
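
    A toy user-space model of the per-thread scheme described above: a few
    cache slots indexed by a hash of the faulting address's page number,
    invalidated by bumping a shared sequence number. Names and sizes are
    illustrative, not the kernel's vmacache implementation:

    #include <stdint.h>
    #include <stdio.h>

    #define CACHE_SLOTS  4
    #define PAGE_SHIFT  12

    struct vma { uintptr_t start, end; };            /* toy VMA */

    /* Shared per-"mm" invalidation counter and one "per-thread" cache. */
    static uint32_t mm_seqnum;
    static struct { uint32_t seqnum; struct vma *slot[CACHE_SLOTS]; } cache;

    static unsigned int slot_of(uintptr_t addr)
    {
        /* Replacement policy: hash on the page number of the address. */
        return (unsigned int)((addr >> PAGE_SHIFT) % CACHE_SLOTS);
    }

    static struct vma *cache_find(uintptr_t addr)
    {
        if (cache.seqnum != mm_seqnum) {             /* cheap invalidation check */
            for (int i = 0; i < CACHE_SLOTS; i++)
                cache.slot[i] = NULL;
            cache.seqnum = mm_seqnum;
        }
        struct vma *v = cache.slot[slot_of(addr)];
        return (v && addr >= v->start && addr < v->end) ? v : NULL;
    }

    static void cache_update(uintptr_t addr, struct vma *v)
    {
        cache.slot[slot_of(addr)] = v;               /* fill the slot on a miss */
    }

    static void cache_invalidate_all(void)
    {
        mm_seqnum++;                                 /* e.g. after munmap() changes the tree */
    }

    int main(void)
    {
        static struct vma v = { 0x400000, 0x500000 };
        cache_update(0x401000, &v);
        printf("hit: %d\n", cache_find(0x401fff) != NULL);
        cache_invalidate_all();
        printf("hit after invalidate: %d\n", cache_find(0x401fff) != NULL);
        return 0;
    }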

    [akpm@linux-foundation.org: fix nommu build, per Davidlohr]
    [akpm@linux-foundation.org: document vmacache_valid() logic]
    [akpm@linux-foundation.org: attempt to untangle header files]
    [akpm@linux-foundation.org: add vmacache_find() BUG_ON]
    [hughd@google.com: add vmacache_valid_mm() (from Oleg)]
    [akpm@linux-foundation.org: coding-style fixes]
    [akpm@linux-foundation.org: adjust and enhance comments]
    Signed-off-by: Davidlohr Bueso
    Reviewed-by: Rik van Riel
    Acked-by: Linus Torvalds
    Reviewed-by: Michel Lespinasse
    Cc: Oleg Nesterov
    Tested-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Mel Gorman
    Signed-off-by: Greg Kroah-Hartman

    Davidlohr Bueso
     
  • commit 83da7510058736c09a14b9c17ec7d851940a4332 upstream.

    Seems to be called with preemption enabled. Therefore it must use
    mod_zone_page_state instead.

    Signed-off-by: Christoph Lameter
    Reported-by: Grygorii Strashko
    Tested-by: Grygorii Strashko
    Cc: Tejun Heo
    Cc: Santosh Shilimkar
    Cc: Ingo Molnar
    Cc: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Mel Gorman
    Signed-off-by: Greg Kroah-Hartman

    Christoph Lameter
     
  • commit d5bc5fd3fcb7b8dfb431694a8c8052466504c10c upstream.

    The name `max_pass' is misleading, because this variable actually keeps
    the estimated number of freeable objects, not the maximal number of
    objects we can scan in this pass, which can be twice that. Rename it to
    reflect its actual meaning.

    Signed-off-by: Vladimir Davydov
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Mel Gorman
    Signed-off-by: Greg Kroah-Hartman

    Vladimir Davydov
     
  • commit 99120b772b52853f9a2b829a21dd44d9b20558f1 upstream.

    When direct reclaim is executed by a process bound to a set of NUMA
    nodes, we should scan only those nodes when possible, but currently we
    will scan kmem from all online nodes even if the kmem shrinker is NUMA
    aware. That said, binding a process to a particular NUMA node won't
    prevent it from shrinking inode/dentry caches from other nodes, which is
    not good. Fix this.

    Signed-off-by: Vladimir Davydov
    Cc: Mel Gorman
    Cc: Michal Hocko
    Cc: Johannes Weiner
    Cc: Rik van Riel
    Cc: Dave Chinner
    Cc: Glauber Costa
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Mel Gorman
    Signed-off-by: Greg Kroah-Hartman

    Vladimir Davydov
     
  • commit 7fcbbaf18392f0b17c95e2f033c8ccf87eecde1d upstream.

    In some testing I ran today (some fio jobs that spread over two nodes),
    we end up spending 40% of the time in filemap_check_errors(). That
    smells fishy. Looking further, this is basically what happens:

    blkdev_aio_read()
        generic_file_aio_read()
            filemap_write_and_wait_range()
                if (!mapping->nr_pages)
                    filemap_check_errors()

    and filemap_check_errors() always attempts two test_and_clear_bit() calls
    on the mapping flags, thus dirtying the cache line on every single
    invocation. The patch below tests each of these bits before clearing them,
    avoiding this issue. In my test case (4-socket box), performance went from
    1.7M IOPS to 4.0M IOPS.
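
    The optimization is simply to read the flag first and only perform the
    cache-line-dirtying read-modify-write when the bit is actually set; a
    minimal illustration with C11 atomics (the flag names are illustrative):

    #include <stdatomic.h>
    #include <stdbool.h>

    #define AS_ENOSPC  (1u << 0)     /* illustrative flag bits */
    #define AS_EIO     (1u << 1)

    static atomic_uint mapping_flags;

    /* Old behaviour: unconditional atomic RMW dirties the cache line every call. */
    static bool check_and_clear_always(unsigned int bit)
    {
        return atomic_fetch_and(&mapping_flags, ~bit) & bit;
    }

    /* Patched behaviour: cheap read first, RMW only when the bit is really set. */
    static bool check_and_clear_if_set(unsigned int bit)
    {
        if (!(atomic_load(&mapping_flags) & bit))
            return false;            /* common case: no write traffic at all */
        return atomic_fetch_and(&mapping_flags, ~bit) & bit;
    }

    int main(void)
    {
        atomic_store(&mapping_flags, AS_EIO);
        return !(check_and_clear_if_set(AS_EIO) && !check_and_clear_always(AS_ENOSPC));
    }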

    Signed-off-by: Jens Axboe
    Acked-by: Jeff Moyer
    Cc: Al Viro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Mel Gorman
    Signed-off-by: Greg Kroah-Hartman

    Jens Axboe
     
  • commit d26914d11751b23ca2e8747725f2cae10c2f2c1b upstream.

    Since put_mems_allowed() is strictly optional (it's a seqcount retry), we
    don't need to evaluate the function if the allocation was in fact
    successful, saving an smp_rmb(), some loads and comparisons on some
    relatively fast paths.

    Since the naming of get/put_mems_allowed() does suggest a mandatory
    pairing, rename the interface, as suggested by Mel, to resemble the
    seqcount interface.

    This gives us: read_mems_allowed_begin() and read_mems_allowed_retry(),
    where it is important to note that the return value of the latter call
    is inverted from its previous incarnation.
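
    The resulting usage pattern looks roughly like a seqcount read section
    that is only retried when the allocation actually failed; a simplified
    user-space sketch with a trivial sequence counter standing in for the
    cpuset code:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    static atomic_uint mems_allowed_seq;   /* bumped by writers changing mems_allowed */

    static unsigned int read_mems_allowed_begin(void)
    {
        return atomic_load_explicit(&mems_allowed_seq, memory_order_acquire);
    }

    /* Note the inverted sense: returns true when the caller should retry. */
    static bool read_mems_allowed_retry(unsigned int seq)
    {
        return atomic_load_explicit(&mems_allowed_seq, memory_order_acquire) != seq;
    }

    static void *try_allocate(void) { return NULL; }   /* pretend allocation failed */

    void *allocate_with_mems_allowed(void)
    {
        void *page;
        unsigned int seq;

        do {
            seq = read_mems_allowed_begin();
            page = try_allocate();
            /* On success the retry check is skipped entirely - that is the
             * point of the rename; the "put" side is no longer evaluated
             * unconditionally. */
        } while (!page && read_mems_allowed_retry(seq));

        return page;
    }

    int main(void)
    {
        allocate_with_mems_allowed();
        return 0;
    }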

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Mel Gorman
    Signed-off-by: Greg Kroah-Hartman

    Mel Gorman
     
  • commit 6d2be915e589b58cb11418cbe1f22ff90732b6ac upstream.

    Currently max_sane_readahead() returns zero on a CPU whose NUMA node
    has no local memory, which leads to readahead failure. Fix this
    readahead failure by returning the minimum of (requested pages, 512).
    Users running applications that need readahead, such as streaming
    applications, on a memoryless CPU see a considerable boost in performance.

    Result:

    An fadvise experiment with FADV_WILLNEED on a PPC machine having a
    memoryless CPU, with a 1GB testfile (12 iterations), yielded around a
    46.66% improvement.

    An fadvise experiment with FADV_WILLNEED on an x240 machine (32GB * 4G RAM
    NUMA machine) with a 1GB testfile (12 iterations) showed no impact on the
    normal NUMA cases with the patch.

    Kernel    Avg      Stddev
    base      7.4975   3.92%
    patched   7.4174   3.26%
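
    The fix itself is essentially a clamp, with the 512-page cap expressed as
    2MB worth of pages so it stays page-size independent (cf. the note below);
    a sketch with illustrative constants:

    #include <stdio.h>

    #define PAGE_SIZE      4096UL
    #define MAX_READAHEAD  (2UL * 1024 * 1024 / PAGE_SIZE)   /* ~512 pages with 4K pages */

    /* Previously this could return 0 on a memoryless node; now it just clamps. */
    static unsigned long max_sane_readahead(unsigned long nr)
    {
        return nr < MAX_READAHEAD ? nr : MAX_READAHEAD;
    }

    int main(void)
    {
        printf("%lu\n", max_sane_readahead(10000));   /* clamped to 512 */
        return 0;
    }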

    [Andrew: making return value PAGE_SIZE independent]
    Suggested-by: Linus Torvalds
    Signed-off-by: Raghavendra K T
    Acked-by: Jan Kara
    Cc: Wu Fengguang
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Mel Gorman
    Signed-off-by: Greg Kroah-Hartman

    Raghavendra K T
     
  • commit 91ca9186484809c57303b33778d841cc28f696ed upstream.

    The cached pageblock hint should be ignored when triggering compaction
    through /proc/sys/vm/compact_memory so that all eligible memory is
    isolated. Manually invoking compaction is known to be expensive; there's
    no need to skip pageblocks based on heuristics (mainly for debugging).

    Signed-off-by: David Rientjes
    Acked-by: Mel Gorman
    Cc: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Mel Gorman
    Signed-off-by: Greg Kroah-Hartman

    David Rientjes
     
  • commit da1c67a76f7cf2b3404823d24f9f10fa91aa5dc5 upstream.

    The conditions that control the isolation mode in
    isolate_migratepages_range() do not change during the iteration, so
    extract them out and only define the value once.

    This actually does have an effect: gcc doesn't optimize it out by itself
    because of cc->sync.

    Signed-off-by: David Rientjes
    Cc: Mel Gorman
    Acked-by: Rik van Riel
    Acked-by: Vlastimil Babka
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Mel Gorman
    Signed-off-by: Greg Kroah-Hartman

    David Rientjes
     
  • commit b6c750163c0d138f5041d95fcdbd1094b6928057 upstream.

    This is just a clean-up to reduce code size and improve readability.
    There is no functional change.

    Signed-off-by: Joonsoo Kim
    Acked-by: Vlastimil Babka
    Cc: Mel Gorman
    Cc: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Mel Gorman
    Signed-off-by: Greg Kroah-Hartman

    Joonsoo Kim
     
  • commit c122b2087ab94192f2b937e47b563a9c4e688ece upstream.

    isolation_suitable() and migrate_async_suitable() are used to be sure
    that a pageblock range is fine to migrate. They don't need to be called
    on every page. The current code does well when the pageblock is not
    suitable, but doesn't do well when it is suitable:

    1) It re-checks isolation_suitable() on each page of a pageblock that was
    already established as suitable.
    2) It re-checks migrate_async_suitable() on each page of a pageblock that
    was not entered through the next_pageblock: label, because
    last_pageblock_nr is not otherwise updated.

    This patch fixes the situation by 1) calling isolation_suitable() only
    once per pageblock and 2) always updating last_pageblock_nr to the
    pageblock that was just checked.

    Additionally, move the PageBuddy() check after the pageblock unit check,
    since the pageblock check is the first thing we should do; this also
    makes things simpler.

    [vbabka@suse.cz: rephrase commit description]
    Signed-off-by: Joonsoo Kim
    Acked-by: Vlastimil Babka
    Cc: Mel Gorman
    Cc: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Mel Gorman
    Signed-off-by: Greg Kroah-Hartman

    Joonsoo Kim
     
  • commit be1aa03b973c7dcdc576f3503f7a60429825c35d upstream.

    It is odd to drop the spinlock when we scan the (SWAP_CLUSTER_MAX - 1)th
    pfn page. This may result in the situation below while isolating
    migratepages.

    1. Try to isolate pfn pages 0x0 ~ 0x200.
    2. When low_pfn is 0x1ff, ((low_pfn+1) % SWAP_CLUSTER_MAX) == 0, so drop
    the spinlock.
    3. Then, to complete isolating, retry to acquire the lock.

    I think that it is better to use the SWAP_CLUSTER_MAXth pfn for checking
    the criteria about dropping the lock. This does no harm to the 0x0 pfn,
    because, at this time, the locked variable would be false.
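
    The difference between the two conditions is easiest to see by printing
    which pfns would drop the lock; a trivial demonstration, taking
    SWAP_CLUSTER_MAX as 32 (its usual value):

    #include <stdio.h>

    #define SWAP_CLUSTER_MAX 32UL

    int main(void)
    {
        for (unsigned long low_pfn = 0x1f0; low_pfn < 0x210; low_pfn++) {
            /* Old check: drops the lock one pfn early, at 0x1ff. */
            int old_check = ((low_pfn + 1) % SWAP_CLUSTER_MAX) == 0;
            /* New check: drops it at 0x200 (and at 0x0, where nothing is
             * locked yet, so dropping is harmless). */
            int new_check = (low_pfn % SWAP_CLUSTER_MAX) == 0;

            if (old_check || new_check)
                printf("pfn 0x%lx: old=%d new=%d\n", low_pfn, old_check, new_check);
        }
        return 0;
    }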

    Signed-off-by: Joonsoo Kim
    Acked-by: Vlastimil Babka
    Cc: Mel Gorman
    Cc: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Mel Gorman
    Signed-off-by: Greg Kroah-Hartman

    Joonsoo Kim
     
  • commit 01ead5340bcf5f3a1cd2452c75516d0ef4d908d7 upstream.

    suitable_migration_target() checks whether a pageblock is a suitable
    migration target. In isolate_freepages_block() it is called on every
    page, which is inefficient, so make it be called once per pageblock.

    suitable_migration_target() also checks whether the page is high order or
    not, but its criterion for high order is the pageblock order, so calling
    it once per pageblock range is not a problem.

    Signed-off-by: Joonsoo Kim
    Acked-by: Vlastimil Babka
    Cc: Mel Gorman
    Cc: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Mel Gorman
    Signed-off-by: Greg Kroah-Hartman

    Joonsoo Kim
     
  • commit 7d348b9ea64db0a315d777ce7d4b06697f946503 upstream.

    The purpose of compaction is to get a high-order page. Currently, if we
    find a high-order page while searching for a migration target page, we
    break it into order-0 pages and use them as migration targets. This is
    contrary to the purpose of compaction, so disallow high-order pages from
    being used as migration targets.

    Additionally, clean up the logic in suitable_migration_target() to
    simplify the code. There are no functional changes from this clean-up.

    Signed-off-by: Joonsoo Kim
    Acked-by: Vlastimil Babka
    Cc: Mel Gorman
    Cc: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Mel Gorman
    Signed-off-by: Greg Kroah-Hartman

    Joonsoo Kim
     
  • commit 119d6d59dcc0980dcd581fdadb6b2033b512a473 upstream.

    Page migration will fail for memory that is pinned in memory with, for
    example, get_user_pages(). In this case, it is unnecessary to take
    zone->lru_lock or isolate the page and pass it to page migration, which
    will ultimately fail.

    This is a racy check, the page can still change from under us, but in
    that case we'll just fail later when attempting to move the page.
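
    The heuristic is: if a page has more references than its page-table
    mappings can explain, something (e.g. get_user_pages()) holds an extra pin
    and migration would fail anyway, so skip it. A toy model of that
    comparison (fields and names are illustrative; the exact kernel check
    differs in detail):

    #include <stdbool.h>
    #include <stdio.h>

    struct toy_page {
        int count;      /* total references held on the page */
        int mapcount;   /* references that come from page-table mappings */
    };

    /* Racy but cheap: if there are extra references beyond the mappings,
     * assume the page is pinned and not worth isolating for migration. */
    static bool likely_pinned(const struct toy_page *page)
    {
        return page->count > page->mapcount;
    }

    int main(void)
    {
        struct toy_page mapped_only = { .count = 2, .mapcount = 2 };
        struct toy_page gup_pinned  = { .count = 3, .mapcount = 2 };  /* extra ref from a pin */

        printf("mapped_only pinned? %d\n", likely_pinned(&mapped_only));
        printf("gup_pinned  pinned? %d\n", likely_pinned(&gup_pinned));
        return 0;
    }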

    This avoids very expensive memory compaction when faulting transparent
    hugepages after pinning a lot of memory with a Mellanox driver.

    On a 128GB machine and pinning ~120GB of memory, before this patch we
    see the enormous disparity in the number of page migration failures
    because of the pinning (from /proc/vmstat):

    compact_pages_moved 8450
    compact_pagemigrate_failed 15614415

    0.05% of pages isolated are successfully migrated and explicitly
    triggering memory compaction takes 102 seconds. After the patch:

    compact_pages_moved 9197
    compact_pagemigrate_failed 7

    99.9% of pages isolated are now successfully migrated in this
    configuration and memory compaction takes less than one second.

    Signed-off-by: David Rientjes
    Acked-by: Hugh Dickins
    Acked-by: Mel Gorman
    Cc: Joonsoo Kim
    Cc: Rik van Riel
    Cc: Greg Thelen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Mel Gorman
    Signed-off-by: Greg Kroah-Hartman

    David Rientjes
     
  • commit 18ab4d4ced0817421e6db6940374cc39d28d65da upstream.

    Originally get_swap_page() started iterating through the singly-linked
    list of swap_info_structs using swap_list.next or highest_priority_index,
    both of which were intended to point to the highest priority active swap
    target that was not full. The first patch in this series changed the
    singly-linked list to a doubly-linked list, and removed the logic to start
    at the highest priority non-full entry; it starts scanning at the highest
    priority entry each time, even if the entry is full.

    Replace the manually ordered swap_list_head with a plist, swap_active_head.
    Add a new plist, swap_avail_head. The original swap_active_head plist
    contains all active swap_info_structs, as before, while the new
    swap_avail_head plist contains only swap_info_structs that are active and
    available, i.e. not full. Add a new spinlock, swap_avail_lock, to protect
    the swap_avail_head list.

    Mel Gorman suggested using plists since they internally handle ordering
    the list entries based on priority, which is exactly what swap was doing
    manually. All the ordering code is now removed, and swap_info_struct
    entries are simply added to their corresponding plist and automatically
    ordered correctly.

    Using a new plist for available swap_info_structs simplifies and
    optimizes get_swap_page(), which no longer has to iterate over full
    swap_info_structs. Using a new spinlock for the swap_avail_head plist
    allows each swap_info_struct to add or remove itself from the plist when
    it becomes full or not full; previously it could not do so because
    swap_info_struct->lock is held when it changes from full to not-full, and
    the swap_lock protecting the main swap_active_head must be taken before
    any swap_info_struct->lock.

    Signed-off-by: Dan Streetman
    Acked-by: Mel Gorman
    Cc: Shaohua Li
    Cc: Steven Rostedt
    Cc: Peter Zijlstra
    Cc: Hugh Dickins
    Cc: Dan Streetman
    Cc: Michal Hocko
    Cc: Christian Ehrhardt
    Cc: Weijie Yang
    Cc: Rik van Riel
    Cc: Johannes Weiner
    Cc: Bob Liu
    Cc: Paul Gortmaker
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Mel Gorman
    Signed-off-by: Greg Kroah-Hartman

    Dan Streetman
     
  • commit adfab836f4908deb049a5128082719e689eed964 upstream.

    The logic controlling the singly-linked list of swap_info_struct entries
    for all active, i.e. swapon'ed, swap targets is rather complex, because:

    - it stores the entries in priority order
    - there is a pointer to the highest priority entry
    - there is a pointer to the highest priority not-full entry
    - there is a highest_priority_index variable set outside the swap_lock
    - swap entries of equal priority should be used equally

    this complexity leads to bugs such as: https://lkml.org/lkml/2014/2/13/181
    where different priority swap targets are incorrectly used equally.

    That bug probably could be solved with the existing singly-linked lists,
    but I think it would only add more complexity to the already difficult to
    understand get_swap_page() swap_list iteration logic.

    The first patch changes from a singly-linked list to a doubly-linked list
    using list_heads; the highest_priority_index and related code are removed
    and get_swap_page() starts each iteration at the highest priority
    swap_info entry, even if it's full. While this does introduce unnecessary
    list iteration (i.e. Schlemiel the painter's algorithm) in the case where
    one or more of the highest priority entries are full, the iteration and
    manipulation code is much simpler and behaves correctly re: the above bug;
    and the fourth patch removes the unnecessary iteration.

    The second patch adds some minor plist helper functions; nothing new
    really, just functions to match existing regular list functions. These
    are used by the next two patches.

    The third patch adds plist_requeue(), which is used by get_swap_page() in
    the next patch - it performs the requeueing of same-priority entries
    (which moves the entry to the end of its priority in the plist), so that
    all equal-priority swap_info_structs get used equally.

    The fourth patch converts the main list into a plist, and adds a new plist
    that contains only swap_info entries that are both active and not full.
    As Mel suggested using plists allows removing all the ordering code from
    swap - plists handle ordering automatically. The list naming is also
    clarified now that there are two lists, with the original list changed
    from swap_list_head to swap_active_head and the new list named
    swap_avail_head. A new spinlock is also added for the new list, so
    swap_info entries can be added or removed from the new list immediately as
    they become full or not full.

    This patch (of 4):

    Replace the singly-linked list tracking active, i.e. swapon'ed,
    swap_info_struct entries with a doubly-linked list using struct
    list_heads. Simplify the logic iterating and manipulating the list of
    entries, especially get_swap_page(), by using standard list_head
    functions, and removing the highest priority iteration logic.

    The change fixes the bug:
    https://lkml.org/lkml/2014/2/13/181
    in which different priority swap entries after the highest priority entry
    are incorrectly used equally in pairs. The swap behavior is now as
    advertised, i.e. different priority swap entries are used in order, and
    equal priority swap targets are used concurrently.

    Signed-off-by: Dan Streetman
    Acked-by: Mel Gorman
    Cc: Shaohua Li
    Cc: Hugh Dickins
    Cc: Dan Streetman
    Cc: Michal Hocko
    Cc: Christian Ehrhardt
    Cc: Weijie Yang
    Cc: Rik van Riel
    Cc: Johannes Weiner
    Cc: Bob Liu
    Cc: Steven Rostedt
    Cc: Peter Zijlstra
    Cc: Paul Gortmaker
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Mel Gorman
    Signed-off-by: Greg Kroah-Hartman

    Dan Streetman
     
  • commit 70ef57e6c22c3323dce179b7d0d433c479266612 upstream.

    We had a report about strange OOM killer strikes on a PPC machine
    although there was a lot of free swap and tons of anonymous memory
    which could be swapped out. In the end it turned out that the OOM was a
    side effect of zone reclaim, which wasn't unmapping and swapping out, and
    so the system was pushed to OOM. Although this sounds like a bug
    somewhere in the kswapd vs. zone reclaim vs. direct reclaim interaction,
    numactl output on the said hardware suggests that zone reclaim should not
    have been enabled in the first place:

    node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
    node 0 size: 0 MB
    node 0 free: 0 MB
    node 2 cpus:
    node 2 size: 7168 MB
    node 2 free: 6019 MB
    node distances:
    node   0   2
      0:  10  40
      2:  40  10

    So all the CPUs are associated with Node0, which doesn't have any memory,
    while Node2 contains all the available memory. The node distances cause
    zone_reclaim_mode to be enabled automatically.

    Zone reclaim is intended to keep allocations local, but this doesn't
    make any sense on memoryless nodes. So let's exclude such nodes in
    init_zone_allows_reclaim(), which evaluates zone reclaim behavior and
    suitable reclaim_nodes.

    Signed-off-by: Michal Hocko
    Acked-by: David Rientjes
    Acked-by: Nishanth Aravamudan
    Tested-by: Nishanth Aravamudan
    Acked-by: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Michal Hocko
     
  • commit abc40bd2eeb77eb7c2effcaf63154aad929a1d5f upstream.

    This patch reverts 1ba6e0b50b ("mm: numa: split_huge_page: transfer the
    NUMA type from the pmd to the pte"). If a huge page is being split due to
    a protection change and the tail will be in a PROT_NONE vma, then NUMA
    hinting PTEs are temporarily created in the protected VMA.

    VM_RW|VM_PROTNONE
    |-----------------|
    ^
    split here

    In the specific case above, it should get fixed up by change_pte_range()
    but there is a window of opportunity for weirdness to happen. Similarly,
    if a huge page is shrunk and split during a protection update but before
    pmd_numa is cleared then a pte_numa can be left behind.

    Instead of adding complexity trying to deal with the case, this patch
    will not mark PTEs NUMA when splitting a huge page. NUMA hinting faults
    will not be triggered which is marginal in comparison to the complexity
    in dealing with the corner cases during THP split.

    Signed-off-by: Mel Gorman
    Acked-by: Rik van Riel
    Acked-by: Kirill A. Shutemov
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Mel Gorman
     
  • commit f8303c2582b889351e261ff18c4d8eb197a77db2 upstream.

    In __split_huge_page_map(), the check for page_mapcount(page) is
    invariant within the for loop. Because the macro is implemented using
    atomic_read(), the redundant check cannot be optimized away by the
    compiler, leading to unnecessary reads of the page structure.

    This patch moves the invariant bug check out of the loop so that it is
    done only once. On a 3.16-rc1 based kernel, a microbenchmark that broke
    up 1000 transparent huge pages using munmap() took 38,245us with the
    patch and 38,548us without it. The performance gain is about 1%.

    Signed-off-by: Waiman Long
    Acked-by: Kirill A. Shutemov
    Cc: Andrea Arcangeli
    Cc: Mel Gorman
    Cc: Rik van Riel
    Cc: Scott J Norton
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Waiman Long
     
  • commit 457c1b27ed56ec472d202731b12417bff023594a upstream.

    Currently, I am seeing the following when I `mount -t hugetlbfs /none
    /dev/hugetlbfs`, and then simply do a `ls /dev/hugetlbfs`. I think it's
    related to the fact that hugetlbfs is probably not correctly setting
    itself up in this state:

    Unable to handle kernel paging request for data at address 0x00000031
    Faulting instruction address: 0xc000000000245710
    Oops: Kernel access of bad area, sig: 11 [#1]
    SMP NR_CPUS=2048 NUMA pSeries
    ....

    In KVM guests on Power, in a guest not backed by hugepages, we see the
    following:

    AnonHugePages: 0 kB
    HugePages_Total: 0
    HugePages_Free: 0
    HugePages_Rsvd: 0
    HugePages_Surp: 0
    Hugepagesize: 64 kB

    HPAGE_SHIFT == 0 in this configuration, which indicates that hugepages
    are not supported at boot-time, but this is only checked in
    hugetlb_init(). Extract the check to a helper function, and use it in a
    few relevant places.

    This does make hugetlbfs not supported (not registered at all) in this
    environment. I believe this is fine, as there are no valid hugepages
    and that won't change at runtime.

    [akpm@linux-foundation.org: use pr_info(), per Mel]
    [akpm@linux-foundation.org: fix build when HPAGE_SHIFT is undefined]
    Signed-off-by: Nishanth Aravamudan
    Reviewed-by: Aneesh Kumar K.V
    Acked-by: Mel Gorman
    Cc: Randy Dunlap
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Nishanth Aravamudan
     
  • commit d3cb8bf6081b8b7a2dabb1264fe968fd870fa595 upstream.

    A migration entry is marked as write if pte_write was true at the time the
    entry was created. The VMA protections are not double checked when migration
    entries are being removed as mprotect marks write-migration-entries as
    read. It means that potentially we take a spurious fault to mark PTEs write
    again but it's straight-forward. However, there is a race between write
    migrations being marked read and migrations finishing. This potentially
    allows a PTE to be left writable when it should have been read-only.
    Close this race by double-checking the VMA permissions using
    maybe_mkwrite when migration completes.
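
    Conceptually, when a migration entry is turned back into a PTE, the
    "writable" bit saved in the entry is honoured only if the VMA still
    allows writes. A stripped-down model of that decision (flags and names
    are illustrative):

    #include <stdbool.h>
    #include <stdio.h>

    #define VM_WRITE  0x2u     /* illustrative VMA permission bit */

    struct toy_vma   { unsigned int flags; };
    struct toy_entry { bool was_writable; };   /* what the migration entry recorded */

    /* Analogue of using maybe_mkwrite() when the migration completes: the
     * saved write bit is honoured only if the VMA is still writable now. */
    static bool restore_writable(const struct toy_entry *e, const struct toy_vma *vma)
    {
        return e->was_writable && (vma->flags & VM_WRITE);
    }

    int main(void)
    {
        struct toy_entry e = { .was_writable = true };
        struct toy_vma still_rw = { .flags = VM_WRITE };
        struct toy_vma now_ro   = { .flags = 0 };        /* mprotect() raced in */

        printf("still rw: %d\n", restore_writable(&e, &still_rw));
        printf("now ro:   %d\n", restore_writable(&e, &now_ro));
        return 0;
    }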

    [torvalds@linux-foundation.org: use maybe_mkwrite]
    Signed-off-by: Mel Gorman
    Acked-by: Rik van Riel
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Mel Gorman
     

06 Oct, 2014

7 commits

  • commit dbab31aa2ceec2d201966fa0b552f151310ba5f4 upstream.

    This fixes the same bug as b43790eedd31 ("mm: softdirty: don't forget to
    save file map softdiry bit on unmap") and 9aed8614af5a ("mm/memory.c:
    don't forget to set softdirty on file mapped fault") where the return
    value of pte_*mksoft_dirty was being ignored.

    To be sure that no other pte/pmd "mk" function return values were being
    ignored, I annotated the functions in arch/x86/include/asm/pgtable.h
    with __must_check and rebuilt.

    The userspace effect of this bug is that the softdirty mark might be
    lost if a file-mapped pte gets zapped.
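
    The class of bug is "a pure helper returns the updated value and the
    caller forgets to assign it"; marking such helpers with
    warn_unused_result lets the compiler catch it. A small illustration with
    a toy pte type (not the kernel's):

    #include <stdio.h>

    typedef unsigned long toy_pte;
    #define SOFT_DIRTY 0x4UL                   /* illustrative bit */

    /* Pure function: returns a new value, does not modify its argument. */
    __attribute__((warn_unused_result))
    static toy_pte toy_mksoft_dirty(toy_pte pte)
    {
        return pte | SOFT_DIRTY;
    }

    int main(void)
    {
        toy_pte pte = 0;

        /* Buggy pattern: return value ignored, the bit is silently lost.
         * With warn_unused_result the compiler flags this line:
         *     toy_mksoft_dirty(pte);
         */

        /* Correct pattern: assign the result back. */
        pte = toy_mksoft_dirty(pte);
        printf("soft-dirty set: %d\n", !!(pte & SOFT_DIRTY));
        return 0;
    }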

    Signed-off-by: Peter Feiner
    Acked-by: Cyrill Gorcunov
    Cc: Pavel Emelyanov
    Cc: Jamie Liu
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Peter Feiner
     
  • commit d4a5fca592b9ab52b90bb261a90af3c8f53be011 upstream.

    Since commit 4590685546a3 ("mm/sl[aou]b: Common alignment code"), the
    "ralign" automatic variable in __kmem_cache_create() may be used
    uninitialized.

    The proper alignment defaults to BYTES_PER_WORD and can be overridden by
    SLAB_RED_ZONE or the alignment specified by the caller.

    This fixes https://bugzilla.kernel.org/show_bug.cgi?id=85031

    Signed-off-by: David Rientjes
    Reported-by: Andrei Elovikov
    Acked-by: Christoph Lameter
    Cc: Pekka Enberg
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    David Rientjes
     
  • commit 849f5169097e1ba35b90ac9df76b5bb6f9c0aabd upstream.

    If pcpu_map_pages() fails midway, it unmaps the already mapped pages.
    Currently, it doesn't flush the TLB after the partial unmapping. This may
    be okay in most cases, as the established mapping hasn't been used at
    that point, but it can go wrong, and when it does it would be extremely
    difficult to track down.

    Flush tlb after the partial unmapping.

    Signed-off-by: Tejun Heo
    Signed-off-by: Greg Kroah-Hartman

    Tejun Heo
     
  • commit f0d279654dea22b7a6ad34b9334aee80cda62cde upstream.

    When pcpu_alloc_pages() fails midway, pcpu_free_pages() is invoked to
    free what has already been allocated. The invocation is across the
    whole requested range and pcpu_free_pages() will try to free all
    non-NULL pages; unfortunately, this is incorrect as
    pcpu_get_pages_and_bitmap(), unlike what its comment suggests, doesn't
    clear the pages array and thus the array may have entries from the
    previous invocations making the partial failure path free incorrect
    pages.

    Fix it by open-coding the partial freeing of the already allocated
    pages.

    Signed-off-by: Tejun Heo
    Signed-off-by: Greg Kroah-Hartman

    Tejun Heo
     
  • commit 3189eddbcafcc4d827f7f19facbeddec4424eba8 upstream.

    Currently, only SMP systems free the percpu allocation info; uniprocessor
    systems should free it too. For example, in an x86 UML virtual machine
    with 256MB of memory, the UML kernel wastes one page of memory.

    Signed-off-by: Honggang Li
    Signed-off-by: Tejun Heo
    Signed-off-by: Greg Kroah-Hartman

    Honggang Li
     
  • commit b928095b0a7cff7fb9fcf4c706348ceb8ab2c295 upstream.

    If an empty directory is overwritten by rename, then we need to drop the
    extra nlink.

    Test prog:

    #include <stdio.h>
    #include <fcntl.h>
    #include <err.h>
    #include <sys/stat.h>

    int main(void)
    {
        const char *test_dir1 = "test-dir1";
        const char *test_dir2 = "test-dir2";
        int res;
        int fd;
        struct stat statbuf;

        res = mkdir(test_dir1, 0777);
        if (res == -1)
            err(1, "mkdir(\"%s\")", test_dir1);

        res = mkdir(test_dir2, 0777);
        if (res == -1)
            err(1, "mkdir(\"%s\")", test_dir2);

        fd = open(test_dir2, O_RDONLY);
        if (fd == -1)
            err(1, "open(\"%s\")", test_dir2);

        res = rename(test_dir1, test_dir2);
        if (res == -1)
            err(1, "rename(\"%s\", \"%s\")", test_dir1, test_dir2);

        res = fstat(fd, &statbuf);
        if (res == -1)
            err(1, "fstat(%i)", fd);

        if (statbuf.st_nlink != 0) {
            fprintf(stderr, "nlink is %lu, should be 0\n",
                    (unsigned long) statbuf.st_nlink);
            return 1;
        }

        return 0;
    }

    Signed-off-by: Miklos Szeredi
    Signed-off-by: Al Viro
    Signed-off-by: Greg Kroah-Hartman

    Miklos Szeredi
     
  • commit 0cfb8f0c3e21e36d4a6e472e4c419d58ba848698 upstream.

    In memblock_find_in_range_node(), we defined ret as int, but it should
    be phys_addr_t because it is used to store the return value from
    __memblock_find_range_bottom_up().

    The bug has not been triggered so far because, when allocating low memory
    near the kernel end, the "int ret" won't turn out to be negative. But when
    we start to allocate memory on other nodes, "int ret" can become negative,
    and then the kernel will panic.

    A simple way to reproduce this: comment out the following code in
    numa_init(),

    memblock_set_bottom_up(false);

    and the kernel won't boot.
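
    The failure mode is ordinary integer truncation: a 64-bit physical
    address stored in an int becomes negative, or simply wrong, once it no
    longer fits. A tiny demonstration with an illustrative address:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* An illustrative physical address on a far-away node, above 4GB. */
        uint64_t phys = 0x180000000ULL;      /* 6GB */
        int ret = (int)phys;                 /* the old "int ret" pattern */

        /* The low 32 bits are 0x80000000; on common ABIs that converts to a
         * negative int, so the caller sees an error (or a bogus address). */
        printf("phys = %#llx, truncated ret = %d\n",
               (unsigned long long)phys, ret);
        return 0;
    }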

    Reported-by: Xishi Qiu
    Signed-off-by: Tang Chen
    Tested-by: Xishi Qiu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Tang Chen
     

06 Sep, 2014

1 commit

  • commit 4449a51a7c281602d3a385044ab928322a122a02 upstream.

    Aleksei hit a soft lockup while reading /proc/PID/smaps. David
    investigated the problem and suggested the right fix.

    while_each_thread() is racy and should die, this patch updates
    vm_is_stack().

    Signed-off-by: Oleg Nesterov
    Reported-by: Aleksei Besogonov
    Tested-by: Aleksei Besogonov
    Suggested-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Oleg Nesterov
     

08 Aug, 2014

2 commits

  • commit 2bcf2e92c3918ce62ab4e934256e47e9a16d19c3 upstream.

    Paul Furtado has reported the following GPF:

    general protection fault: 0000 [#1] SMP
    Modules linked in: ipv6 dm_mod xen_netfront coretemp hwmon x86_pkg_temp_thermal crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel ablk_helper cryptd lrw gf128mul glue_helper aes_x86_64 microcode pcspkr ext4 jbd2 mbcache raid0 xen_blkfront
    CPU: 3 PID: 3062 Comm: java Not tainted 3.16.0-rc5 #1
    task: ffff8801cfe8f170 ti: ffff8801d2ec4000 task.ti: ffff8801d2ec4000
    RIP: e030:mem_cgroup_oom_synchronize+0x140/0x240
    RSP: e02b:ffff8801d2ec7d48 EFLAGS: 00010283
    RAX: 0000000000000001 RBX: ffff88009d633800 RCX: 000000000000000e
    RDX: fffffffffffffffe RSI: ffff88009d630200 RDI: ffff88009d630200
    RBP: ffff8801d2ec7da8 R08: 0000000000000012 R09: 00000000fffffffe
    R10: 0000000000000000 R11: 0000000000000000 R12: ffff88009d633800
    R13: ffff8801d2ec7d48 R14: dead000000100100 R15: ffff88009d633a30
    FS: 00007f1748bb4700(0000) GS:ffff8801def80000(0000) knlGS:0000000000000000
    CS: e033 DS: 0000 ES: 0000 CR0: 000000008005003b
    CR2: 00007f4110300308 CR3: 00000000c05f7000 CR4: 0000000000002660
    Call Trace:
    pagefault_out_of_memory+0x18/0x90
    mm_fault_error+0xa9/0x1a0
    __do_page_fault+0x478/0x4c0
    do_page_fault+0x2c/0x40
    page_fault+0x28/0x30
    Code: 44 00 00 48 89 df e8 40 ca ff ff 48 85 c0 49 89 c4 74 35 4c 8b b0 30 02 00 00 4c 8d b8 30 02 00 00 4d 39 fe 74 1b 0f 1f 44 00 00 8b 7e 10 be 01 00 00 00 e8 42 d2 04 00 4d 8b 36 4d 39 fe 75
    RIP mem_cgroup_oom_synchronize+0x140/0x240

    Commit fb2a6fc56be6 ("mm: memcg: rework and document OOM waiting and
    wakeup") has moved mem_cgroup_oom_notify outside of memcg_oom_lock
    assuming it is protected by the hierarchical OOM-lock.

    Although this is true for the notification part, the protection doesn't
    cover unregistration of the event, which can happen in parallel now, so
    mem_cgroup_oom_notify can see an already unlinked and/or freed
    mem_cgroup_eventfd_list.

    Fix this by using memcg_oom_lock also in mem_cgroup_oom_notify.
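
    The shape of the fix is simply to walk the notification list under the
    same lock that unregistration takes; a minimal pthread sketch of that
    discipline (names are illustrative, not the memcg code):

    #include <pthread.h>
    #include <stdio.h>

    struct notifier { struct notifier *next; void (*fn)(void); };

    static pthread_mutex_t oom_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct notifier *oom_notify_list;

    /* Unregistration already held the lock while unlinking an entry ... */
    void oom_unregister(struct notifier *n)
    {
        pthread_mutex_lock(&oom_lock);
        for (struct notifier **p = &oom_notify_list; *p; p = &(*p)->next) {
            if (*p == n) { *p = n->next; break; }
        }
        pthread_mutex_unlock(&oom_lock);
    }

    /* ... so the notify walk must take the same lock, or it can follow a
     * pointer into an entry that was just unlinked and freed. */
    void oom_notify(void)
    {
        pthread_mutex_lock(&oom_lock);
        for (struct notifier *n = oom_notify_list; n; n = n->next)
            n->fn();
        pthread_mutex_unlock(&oom_lock);
    }

    static void hello(void) { puts("oom notification"); }

    int main(void)
    {
        struct notifier n = { .next = NULL, .fn = hello };
        oom_notify_list = &n;
        oom_notify();
        oom_unregister(&n);
        return 0;
    }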

    Addresses https://bugzilla.kernel.org/show_bug.cgi?id=80881

    Fixes: fb2a6fc56be6 (mm: memcg: rework and document OOM waiting and wakeup)
    Signed-off-by: Michal Hocko
    Reported-by: Paul Furtado
    Tested-by: Paul Furtado
    Acked-by: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Michal Hocko
     
  • commit b104a35d32025ca740539db2808aa3385d0f30eb upstream.

    The page allocator relies on __GFP_WAIT to determine if ALLOC_CPUSET
    should be set in allocflags. ALLOC_CPUSET controls if a page allocation
    should be restricted only to the set of allowed cpuset mems.

    Transparent hugepages clears __GFP_WAIT when defrag is disabled to prevent
    the fault path from using memory compaction or direct reclaim. Thus, it
    is unfairly able to allocate outside of its cpuset mems restriction as a
    side-effect.

    This patch ensures that ALLOC_CPUSET is only cleared when the gfp mask is
    truly GFP_ATOMIC by verifying it is also not a thp allocation.

    Signed-off-by: David Rientjes
    Reported-by: Alex Thorlton
    Tested-by: Alex Thorlton
    Cc: Bob Liu
    Cc: Dave Hansen
    Cc: Hedi Berriche
    Cc: Hugh Dickins
    Cc: Johannes Weiner
    Cc: Kirill A. Shutemov
    Cc: Mel Gorman
    Cc: Rik van Riel
    Cc: Srivatsa S. Bhat
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    David Rientjes