30 Apr, 2013

40 commits

  • __remove_pages() is only necessary for CONFIG_MEMORY_HOTREMOVE. PowerPC
    pseries will return -EOPNOTSUPP if unsupported.

    Adding an #ifdef makes several other functions it depends on unnecessary
    as well, which saves .text size when the option is disabled (it is
    disabled in most defconfigs, including x86; powerpc is the notable
    exception). remove_memory_block() becomes static since it is not
    referenced outside of drivers/base/memory.c.

    Build tested on x86 and powerpc with CONFIG_MEMORY_HOTREMOVE both enabled
    and disabled.
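
    A minimal sketch of the pattern, assuming the usual stub-when-disabled
    convention (the exact set of guarded functions is in the patch itself):

    #ifdef CONFIG_MEMORY_HOTREMOVE
    extern int __remove_pages(struct zone *zone, unsigned long start_pfn,
                              unsigned long nr_pages);
    #else
    /* illustrative stub: report lack of support instead of building
     * the whole hot-remove path into .text */
    static inline int __remove_pages(struct zone *zone,
                                     unsigned long start_pfn,
                                     unsigned long nr_pages)
    {
        return -EOPNOTSUPP;
    }
    #endif /* CONFIG_MEMORY_HOTREMOVE */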

    Signed-off-by: David Rientjes
    Acked-by: Toshi Kani
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: Greg Kroah-Hartman
    Cc: Wen Congyang
    Cc: Tang Chen
    Cc: Yasuaki Ishimatsu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • Change __remove_pages() to call release_mem_region_adjustable(). This
    allows a requested memory range to be released from the iomem_resource
    table even if it does not exactly match a resource entry but still fits
    within one. The resource entries initialized at bootup usually cover
    whole contiguous memory ranges and do not necessarily match the size of
    memory hot-delete requests.

    If release_mem_region_adjustable() fails, __remove_pages() emits a
    warning message and proceeds anyway, as was already the case with
    release_mem_region(). release_mem_region(), which is defined as
    __release_region(), emits a warning message but returns no error, since
    it is a void function.

    Signed-off-by: Toshi Kani
    Reviewed-by: Yasuaki Ishimatsu
    Acked-by: David Rientjes
    Cc: Ram Pai
    Cc: T Makphaibulchoke
    Cc: Wen Congyang
    Cc: Tang Chen
    Cc: Jiang Liu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Toshi Kani
     
  • Add release_mem_region_adjustable(), which releases a requested region
    from a currently busy memory resource. This interface adjusts the
    matched memory resource accordingly even if the requested region does
    not exactly match the resource but still fits within it.

    This new interface is intended for memory hot-delete. During bootup,
    memory resources are inserted from the boot descriptor table, such as
    the EFI Memory Table and e820. Each memory resource entry usually
    covers a whole contiguous memory range. A memory hot-delete request, on
    the other hand, may target a particular range of a memory resource, and
    its size can be much smaller than the whole contiguous range. Since the
    existing release interfaces like __release_region() require a requested
    region to be exactly matched to a resource entry, they do not allow a
    partial resource to be released.

    This new interface is restrictive (i.e. release under certain
    conditions), which is consistent with other release interfaces,
    __release_region() and __release_resource(). Additional release
    conditions, such as a region that overlaps a resource entry, can be
    supported once they are confirmed to be valid cases.

    There is no change to the existing interfaces since their restriction is
    valid for I/O resources.
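
    A rough usage sketch (the signature follows this series; the wrapper
    function and warning text are illustrative, not from the patch):

    #include <linux/ioport.h>

    /* hypothetical caller: release a hot-deleted range even when it only
     * covers part of a boot-time entry; the entry is shrunk or split */
    static void example_release(resource_size_t start, resource_size_t size)
    {
        int ret;

        ret = release_mem_region_adjustable(&iomem_resource, start, size);
        if (ret)
            pr_warn("unable to release [%llx-%llx] (%d)\n",
                    (unsigned long long)start,
                    (unsigned long long)(start + size - 1), ret);
    }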

    [akpm@linux-foundation.org: use GFP_ATOMIC under write_lock()]
    [akpm@linux-foundation.org: switch back to GFP_KERNEL, less buggily]
    [akpm@linux-foundation.org: remove unneeded and wrong kfree(), per Toshi]
    Signed-off-by: Toshi Kani
    Reviewed-by: Yasuaki Ishimatsu
    Cc: David Rientjes
    Reviewed-by: Ram Pai
    Cc: T Makphaibulchoke
    Cc: Wen Congyang
    Cc: Tang Chen
    Cc: Jiang Liu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Toshi Kani
     
  • Add __adjust_resource(), which is called by adjust_resource() internally
    with the resource_lock held. There is no interface change to
    adjust_resource(). This change allows other functions to call
    __adjust_resource() while they hold the resource_lock.
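
    The resulting shape, sketched (the bounds checks inside
    __adjust_resource() are elided):

    int adjust_resource(struct resource *res, resource_size_t start,
                        resource_size_t size)
    {
        int result;

        write_lock(&resource_lock);
        result = __adjust_resource(res, start, size);
        write_unlock(&resource_lock);
        return result;
    }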

    Signed-off-by: Toshi Kani
    Reviewed-by: Yasuaki Ishimatsu
    Acked-by: David Rientjes
    Cc: Ram Pai
    Cc: T Makphaibulchoke
    Cc: Wen Congyang
    Cc: Tang Chen
    Cc: Jiang Liu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Toshi Kani
     
  • The comment over migrate_pages() looks quite weird, and makes it hard to
    grasp what it is trying to say. Rewrite it more comprehensibly.

    Signed-off-by: Srivatsa S. Bhat
    Acked-by: Christoph Lameter
    Cc: Mel Gorman
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Srivatsa S. Bhat
     
  • Currently the memory barrier in __do_huge_pmd_anonymous_page() doesn't
    work: lru_cache_add_lru() uses a pagevec, so the page can easily miss
    the spinlock and the intended ordering rule is broken, meaning a user
    might see inconsistent data.

    I was not the first person to point out the problem; Mel and Peter
    pointed it out a few months ago, and Peter noted further that even
    spin_lock/unlock cannot guarantee the ordering:

    http://marc.info/?t=134333512700004

    In particular:

    *A = a;
    LOCK
    UNLOCK
    *B = b;

    may occur as:

    LOCK, STORE *B, STORE *A, UNLOCK

    In the end, Hugh pointed out that we don't actually need a memory
    barrier there, because __SetPageUptodate() already issues one
    explicitly, since Nick's commit 0ed361dec369 ("mm: fix PageUptodate
    data race").

    So this patch fixes the comment on THP and adds the same comment to
    do_anonymous_page() as well, because everybody except Hugh had been
    missing this, which by itself shows that a comment is needed.
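
    For reference, the barrier that makes an explicit one redundant lives
    in __SetPageUptodate() itself; roughly (shape per 0ed361dec369):

    static inline void __SetPageUptodate(struct page *page)
    {
        smp_wmb();  /* make the page's contents visible before anyone
                     * can observe PG_uptodate */
        __set_bit(PG_uptodate, &(page)->flags);
    }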

    Signed-off-by: Minchan Kim
    Acked-by: Andrea Arcangeli
    Acked-by: David Rientjes
    Acked-by: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Cc: Hugh Dickins
    Cc: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Minchan Kim
     
  • CONFIG_HOTPLUG is going away as an option, so clean up the
    CONFIG_HOTPLUG ifdefs in mm files.

    Signed-off-by: Yijing Wang
    Acked-by: Greg Kroah-Hartman
    Acked-by: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Yijing Wang
     
  • Just a trivial issue I stumbled on while doing something else...

    Signed-off-by: Michel Lespinasse
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michel Lespinasse
     
  • Alter the admin and user reserves of the previous patches in this series
    when memory is added or removed.

    If memory is added and the reserves have been eliminated or increased
    above the default max, then we'll trust the admin.

    If memory is removed and there isn't enough free memory, then we need to
    reset the reserves.

    Otherwise keep the reserve set by the admin.

    The reserve reset code is the same as the reserve initialization code.

    I tested hot addition and removal by triggering it via sysfs. The
    reserves shrunk when they were set high and memory was removed. They
    were reset higher when memory was added again.
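
    A hedged sketch of the notifier wiring (the handler name and branch
    bodies are condensed from the description above, not copied from the
    patch):

    static int reserve_mem_notifier(struct notifier_block *nb,
                                    unsigned long action, void *data)
    {
        switch (action) {
        case MEM_ONLINE:
            /* memory added: trust a reserve the admin eliminated or
             * raised above the default max, otherwise leave it */
            break;
        case MEM_OFFLINE:
            /* memory removed: if there isn't enough free memory,
             * re-run the reserve initialization */
            break;
        }
        return NOTIFY_OK;
    }

    static struct notifier_block reserve_mem_nb = {
        .notifier_call = reserve_mem_notifier,
    };

    /* registered via register_hotmemory_notifier(&reserve_mem_nb) */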

    [akpm@linux-foundation.org: use register_hotmemory_notifier()]
    [akpm@linux-foundation.org: init_user_reserve() and init_admin_reserve can no longer be __meminit]
    [fengguang.wu@intel.com: make init_reserve_notifier() static]
    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Andrew Shewmaker
    Signed-off-by: Fengguang Wu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Shewmaker
     
  • Add an admin_reserve_kbytes knob to allow admins to change the hardcoded
    memory reserve to something other than 3%, which may be multiple
    gigabytes on large memory systems. Only about 8MB is necessary to
    enable recovery in the default mode, and only a few hundred MB are
    required even when overcommit is disabled.

    This affects OVERCOMMIT_GUESS and OVERCOMMIT_NEVER.

    admin_reserve_kbytes is initialized to min(3% free pages, 8MB)

    I arrived at 8MB by summing the RSS of sshd or login, bash, and top.

    Please see first patch in this series for full background, motivation,
    testing, and full changelog.
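
    The knob feeds into the overcommit accounting roughly like this (a
    sketch from the description; the shift converts kbytes to pages):

    /* in __vm_enough_memory(): only non-root allocations see the
     * admin reserve subtracted from what is available */
    if (!cap_sys_admin)
        free -= sysctl_admin_reserve_kbytes >> (PAGE_SHIFT - 10);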

    [akpm@linux-foundation.org: coding-style fixes]
    [akpm@linux-foundation.org: make init_admin_reserve() static]
    Signed-off-by: Andrew Shewmaker
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Shewmaker
     
  • Add user_reserve_kbytes knob.

    Limit the growth of the memory reserved for other user processes to
    min(3% current process size, user_reserve_pages). Only about 8MB is
    necessary to enable recovery in the default mode, and only a few hundred
    MB are required even when overcommit is disabled.

    user_reserve_pages defaults to min(3% free pages, 128MB)

    I arrived at 128MB by taking the max VSZ of sshd, login, bash, and top ...
    then adding the RSS of each.

    This only affects OVERCOMMIT_NEVER mode.
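
    In outline, the 'never' path of __vm_enough_memory() caps the old
    3%-of-process-size reserve at the new knob (a sketch; the shift
    converts kbytes to pages, and total_vm/32 is the ~3%):

    if (mm) {
        unsigned long reserve;

        reserve = sysctl_user_reserve_kbytes >> (PAGE_SHIFT - 10);
        allowed -= min(mm->total_vm / 32, reserve);
    }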

    Background

    1. user reserve

    __vm_enough_memory reserves a hardcoded 3% of the current process size for
    other applications when overcommit is disabled. This was done so that a
    user could recover if they launched a memory hogging process. Without the
    reserve, a user would easily run into a message such as:

    bash: fork: Cannot allocate memory

    2. admin reserve

    Additionally, a hardcoded 3% of free memory is reserved for root in both
    overcommit 'guess' and 'never' modes. This was intended to prevent a
    scenario where root-cant-log-in and perform recovery operations.

    Note that this reserve shrinks, and doesn't guarantee a useful reserve.

    Motivation

    The two hardcoded memory reserves should be updated to account for current
    memory sizes.

    Also, the admin reserve would be more useful if it didn't shrink too much.

    When the current code was originally written, 1GB was considered
    "enterprise". Now the 3% reserve can grow to multiple GB on large memory
    systems, and it only needs to be a few hundred MB at most to enable a user
    or admin to recover a system with an unwanted memory hogging process.

    I've found that reducing these reserves is especially beneficial for a
    specific type of application load:

    * single application system
    * one or few processes (e.g. one per core)
    * allocating all available memory
    * not initializing every page immediately
    * long running

    I've run scientific clusters with this sort of load. A long running job
    sometimes failed many hours (weeks of CPU time) into a calculation. They
    weren't initializing all of their memory immediately, and they weren't
    using calloc, so I put systems into overcommit 'never' mode. These
    clusters run diskless and have no swap.

    However, with the current reserves, a user wishing to allocate as much
    memory as possible to one process may be prevented from using, for
    example, almost 2GB out of 32GB.

    The effect is less, but still significant when a user starts a job with
    one process per core. I have repeatedly seen a set of processes
    requesting the same amount of memory fail because one of them could not
    allocate the amount of memory a user would expect to be able to allocate.
    For example, Message Passing Interface (MPI) processes, one per core;
    it is similar for other parallel programming frameworks.

    Changing this reserve code will make the overcommit never mode more useful
    by allowing applications to allocate nearly all of the available memory.

    Also, the new admin_reserve_kbytes will be safer than the current behavior
    since the hardcoded 3% of available memory reserve can shrink to something
    useless in the case where applications have grabbed all available memory.

    Risks

    * "bash: fork: Cannot allocate memory"

    The downside of the first patch-- which creates a tunable user reserve
    that is only used in overcommit 'never' mode--is that an admin can set
    it so low that a user may not be able to kill their process, even if
    they already have a shell prompt.

    Of course, a user can get in the same predicament with the current 3%
    reserve--they just have to launch processes until 3% becomes negligible.

    * root-cant-log-in problem

    The second patch, adding the tunable rootuser_reserve_pages, allows
    the admin to shoot themselves in the foot by setting it too small. They
    can easily get the system into a state where root-can't-log-in.

    However, the new admin_reserve_kbytes will be safer than the current
    behavior since the hardcoded 3% of available memory reserve can shrink
    to something useless in the case where applications have grabbed all
    available memory.

    Alternatives

    * Memory cgroups provide a more flexible way to limit application memory.

    Not everyone wants to set up cgroups or deal with their overhead.

    * We could create a fourth overcommit mode which provides smaller reserves.

    The size of useful reserves may be drastically different depending
    on the whether the system is embedded or enterprise.

    * Force users to initialize all of their memory or use calloc.

    Some users don't want/expect the system to overcommit when they malloc.
    Overcommit 'never' mode is for this scenario, and it should work well.

    The new user and admin reserve tunables are simple to use, with low
    overhead compared to cgroups. The patches preserve current behavior where
    3% of memory is less than 128MB, except that the admin reserve doesn't
    shrink to an unusable size under pressure. The code allows admins to tune
    for embedded and enterprise usage.

    FAQ

    * How is the root-cant-login problem addressed?
    What happens if admin_reserve_pages is set to 0?

    Root is free to shoot themselves in the foot by setting
    admin_reserve_kbytes too low.

    On x86_64, the minimum useful reserve is:
    8MB for overcommit 'guess'
    128MB for overcommit 'never'

    admin_reserve_pages defaults to min(3% free memory, 8MB)

    So, anyone switching to 'never' mode needs to adjust
    admin_reserve_pages.

    * How do you calculate a minimum useful reserve?

    A user or the admin needs enough memory to login and perform
    recovery operations, which includes, at a minimum:

    sshd or login + bash (or some other shell) + top (or ps, kill, etc.)

    For overcommit 'guess', we can sum resident set sizes (RSS)
    because we only need enough memory to handle what the recovery
    programs will typically use. On x86_64 this is about 8MB.

    For overcommit 'never', we can take the max of their virtual sizes (VSZ)
    and add the sum of their RSS. We use VSZ instead of RSS because this
    mode forces us to ensure we can fulfill all of the requested memory
    allocations, even if the programs only use a fraction of what they ask
    for. On x86_64 this is about 128MB.

    When swap is enabled, reserves are useful even when they are as
    small as 10MB, regardless of overcommit mode.

    When both swap and overcommit are disabled, then the admin should
    tune the reserves higher to be absolutely safe. Over 230MB each
    was safest in my testing.

    * What happens if user_reserve_pages is set to 0?

    Note, this only affects overcommit 'never' mode.

    Then a user will be able to allocate all available memory minus
    admin_reserve_kbytes.

    However, they will easily see a message such as:

    "bash: fork: Cannot allocate memory"

    And they won't be able to recover/kill their application.
    The admin should be able to recover the system if
    admin_reserve_kbytes is set appropriately.

    * What's the difference between overcommit 'guess' and 'never'?

    "Guess" allows an allocation if there are enough free + reclaimable
    pages. It has a hardcoded 3% of free pages reserved for root.

    "Never" allows an allocation if there is enough swap + a configurable
    percentage (default is 50) of physical RAM. It has a hardcoded 3% of
    free pages reserved for root, like "Guess" mode. It also has a
    hardcoded 3% of the current process size reserved for additional
    applications.

    * Why is overcommit 'guess' not suitable even when an app eventually
    writes to every page? It takes free pages, file pages, available
    swap pages, and reclaimable slab pages into consideration. In other
    words, if all these pages are available, why isn't overcommit suitable?

    Because it only looks at the present state of the system. It
    does not take into account the memory that other applications have
    malloced, but haven't initialized yet. It overcommits the system.

    Test Summary

    There was little change in behavior in the default overcommit 'guess'
    mode with swap enabled before and after the patch. This was expected.

    Systems run most predictably (i.e. no oom kills) in overcommit 'never'
    mode with swap enabled. This also allowed the most memory to be allocated
    to a user application.

    Overcommit 'guess' mode without swap is a bad idea. It is easy to
    crash the system. None of the other tested combinations crashed.
    This matches my experience on the Roadrunner supercomputer.

    Without the tunable user reserve, a system in overcommit 'never' mode
    and without swap does not allow the user to recover, although the
    admin can.

    With the new tunable reserves, a system in overcommit 'never' mode
    and without swap can be configured to:

    1. maximize user-allocatable memory, running close to the edge of
    recoverability

    2. maximize recoverability, sacrificing allocatable memory to
    ensure that a user cannot take down a system

    Test Description

    Fedora 18 VM - 4 x86_64 cores, 5725MB RAM, 4GB Swap

    System is booted into multiuser console mode, with unnecessary services
    turned off. Caches were dropped before each test.

    Hogs are user memtester processes that attempt to allocate all free memory
    as reported by /proc/meminfo

    In overcommit 'never' mode, memory_ratio=100

    Test Results

    3.9.0-rc1-mm1

    Overcommit | Swap | Hogs | MB Got/Wanted | OOMs  | User Recovery | Admin Recovery
    -----------+------+------+---------------+-------+---------------+---------------
    guess      | yes  | 1    | 5432/5432     | no    | yes           | yes
    guess      | yes  | 4    | 5444/5444     | 1     | yes           | yes
    guess      | no   | 1    | 5302/5449     | no    | yes           | yes
    guess      | no   | 4    | -             | crash | no            | no

    never      | yes  | 1    | 5460/5460     | 1     | yes           | yes
    never      | yes  | 4    | 5460/5460     | 1     | yes           | yes
    never      | no   | 1    | 5218/5432     | no    | no            | yes
    never      | no   | 4    | 5203/5448     | no    | no            | yes

    3.9.0-rc1-mm1-tunablereserves

    User and Admin Recovery show their respective reserves, if applicable.

    Overcommit | Swap | Hogs | MB Got/Wanted | OOMs  | User Recovery | Admin Recovery
    -----------+------+------+---------------+-------+---------------+---------------
    guess      | yes  | 1    | 5419/5419     | no    | -     yes     | 8MB   yes
    guess      | yes  | 4    | 5436/5436     | 1     | -     yes     | 8MB   yes
    guess      | no   | 1    | 5440/5440     | *     | -     yes     | 8MB   yes
    guess      | no   | 4    | -             | crash | -     no      | 8MB   no

    * process would successfully mlock, then the oom killer would pick it

    never      | yes  | 1    | 5446/5446     | no    | 10MB  yes     | 20MB  yes
    never      | yes  | 4    | 5456/5456     | no    | 10MB  yes     | 20MB  yes
    never      | no   | 1    | 5387/5429     | no    | 128MB no      | 8MB   barely
    never      | no   | 1    | 5323/5428     | no    | 226MB barely  | 8MB   barely
    never      | no   | 1    | 5323/5428     | no    | 226MB barely  | 8MB   barely

    never      | no   | 1    | 5359/5448     | no    | 10MB  no      | 10MB  barely

    never      | no   | 1    | 5323/5428     | no    | 0MB   no      | 10MB  barely
    never      | no   | 1    | 5332/5428     | no    | 0MB   no      | 50MB  yes
    never      | no   | 1    | 5293/5429     | no    | 0MB   no      | 90MB  yes

    never      | no   | 1    | 5001/5427     | no    | 230MB yes     | 338MB yes
    never      | no   | 4*   | 4998/5424     | no    | 230MB yes     | 338MB yes

    * more memtesters were launched, able to allocate approximately another 100MB

    Future Work

    - Test larger memory systems.

    - Test an embedded image.

    - Test other architectures.

    - Time malloc microbenchmarks.

    - Would it be useful to be able to set overcommit policy for
    each memory cgroup?

    - Some lines are slightly above 80 chars.
    Perhaps define a macro to convert between pages and kb?
    Other places in the kernel do this.

    [akpm@linux-foundation.org: coding-style fixes]
    [akpm@linux-foundation.org: make init_user_reserve() static]
    Signed-off-by: Andrew Shewmaker
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Shewmaker
     
  • Use the new interface, remove one ifdef. No code size changes.

    We could/should have been using __meminit/__meminitdata here but there's
    now no point in doing that because all this code is elided at compile time.

    Cc: Li Zefan
    Cc: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • Saves an ifdef, no code size changes

    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • Squishes a warning which my change to hotplug_memory_notifier() added.

    I want to keep that warning, because it is punishment for failing to check
    the hotplug_memory_notifier() return value.

    Cc: Greg KH
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • Squishes a statement-with-no-effect warning, removes some ifdefs and
    shrinks .text by 2 bytes.

    Note that this code fails to check for blocking_notifier_chain_register()
    failures.

    Cc: Pekka Enberg
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • Squishes a statement-with-no-effect warning, removes some ifdefs and
    shrinks .text by one byte!

    Note that this code fails to check for blocking_notifier_chain_register()
    failures.

    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • When CONFIG_MEMORY_HOTPLUG=n, we don't want the memory-hotplug notifier
    handlers to be included in the .o files, for space reasons.

    The existing hotplug_memory_notifier() tries to handle this but testing
    with gcc-4.4.4 shows that it doesn't work - the hotplug functions are
    still present in the .o files.

    So implement a new register_hotmemory_notifier() which is a copy of
    register_hotcpu_notifier(), and which actually works as desired.
    hotplug_memory_notifier() and register_memory_notifier() callsites
    should be converted to use this new register_hotmemory_notifier().

    While we're there, let's repair the existing hotplug_memory_notifier():
    it simply stomps on the register_memory_notifier() return value, so
    well-behaved code cannot check for errors. Apparently none of the
    existing callers were well-behaved :(
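
    A sketch of the new helper's shape (the real macros live in the
    memory-hotplug headers; details here are illustrative):

    #ifdef CONFIG_MEMORY_HOTPLUG
    #define hotplug_memory_notifier(fn, pri) ({                        \
        static __meminitdata struct notifier_block fn##_mem_nb = {     \
            .notifier_call = fn,                                       \
            .priority = pri,                                           \
        };                                                             \
        register_memory_notifier(&fn##_mem_nb);  /* return value kept */ \
    })
    #define register_hotmemory_notifier(nb)  register_memory_notifier(nb)
    #else
    /* everything vanishes from .o files when hotplug is off */
    #define hotplug_memory_notifier(fn, pri) (0)
    #define register_hotmemory_notifier(nb)  ({ (void)(nb); 0; })
    #endif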

    Cc: Andrew Shewmaker
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • [sfr@canb.auug.org.au: add missing semicolon]
    Signed-off-by: Cody P Schafer
    Cc: "H. Peter Anvin"
    Cc: Ingo Molnar
    Cc: Benjamin Herrenschmidt
    Cc: Yinghai Lu
    Signed-off-by: Stephen Rothwell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cody P Schafer
     
  • Signed-off-by: Cody P Schafer
    Cc: "H. Peter Anvin"
    Cc: Ingo Molnar
    Cc: Benjamin Herrenschmidt
    Acked-by: Yinghai Lu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Cody P Schafer
     
  • powerpc and x86 were opencoding copies of setup_nr_node_ids(), which
    page_alloc provides but makes static. Make it available to the archs in
    linux/mm.h.
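
    The now-shared helper is small; sketched from the page_alloc version:

    /* set nr_node_ids from the highest possible node id */
    void __init setup_nr_node_ids(void)
    {
        unsigned int node;
        unsigned int highest = 0;

        for_each_node_mask(node, node_possible_map)
            highest = node;
        nr_node_ids = highest + 1;
    }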

    Signed-off-by: Cody P Schafer
    Signed-off-by: Linus Torvalds

    Cody P Schafer
     
  • When booting on a large memory system, the kernel spends considerable
    time in memmap_init_zone() setting up memory zones. Analysis shows
    significant time spent in __early_pfn_to_nid().

    The routine memmap_init_zone() checks each PFN to verify the nid is
    valid. __early_pfn_to_nid() sequentially scans the list of pfn ranges
    to find the right range and returns the nid. This does not scale well.
    On a 4 TB (single rack) system there are 308 memory ranges to scan. The
    higher the PFN the more time spent sequentially spinning through memory
    ranges.

    Since memmap_init_zone() increments pfn, it will almost always be
    looking for the same range as the previous pfn, so check that range
    first. If it is in the same range, return that nid. If not, scan the
    list as before.

    A 4 TB (single rack) UV1 system takes 512 seconds to get through the
    zone code. This performance optimization reduces the time by 189
    seconds, a 36% improvement.

    A 2 TB (single rack) UV2 system goes from 212.7 seconds to 99.8 seconds,
    a 112.9 second (53%) reduction.
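
    The caching itself is only a few lines; a sketch of the fast path
    (the statics are __meminitdata per the note below):

    static unsigned long __meminitdata last_start_pfn, last_end_pfn;
    static int __meminitdata last_nid;

    int __meminit __early_pfn_to_nid(unsigned long pfn)
    {
        unsigned long start_pfn, end_pfn;
        int i, nid;

        /* most lookups hit the same range as the previous pfn */
        if (last_start_pfn <= pfn && pfn < last_end_pfn)
            return last_nid;

        for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid)
            if (start_pfn <= pfn && pfn < end_pfn) {
                last_start_pfn = start_pfn;
                last_end_pfn = end_pfn;
                last_nid = nid;
                return nid;
            }
        return -1;
    }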

    [akpm@linux-foundation.org: make the statics __meminitdata]
    [akpm@linux-foundation.org: fix comment formatting]
    [akpm@linux-foundation.org: fix ia64, per yinghai]
    [akpm@linux-foundation.org: add missing semicolon, per Tony]
    Signed-off-by: Russ Anderson
    Cc: David Rientjes
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Tested-by: "Luck, Tony"
    Cc: Yinghai Lu
    Cc: Lin Feng
    Cc: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Russ Anderson
     
  • Signed-off-by: Jianguo Wu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jianguo Wu
     
  • The following problem was reported against a distribution kernel when
    zone_reclaim was enabled, but the same problem applies to the mainline
    kernel. The reproduction case was as follows:

    1. Run numactl -m +0 dd if=largefile of=/dev/null
       This allocates a large number of clean pages in node 0.

    2. numactl -N +0 memhog 0.5*Mg
       This starts a memory-using application in node 0.

    The expected behaviour is that the clean pages get reclaimed and the
    application uses node 0 for its memory. The observed behaviour was that
    the memory for the memhog application was allocated off-node since
    commits cd38b115d5ad ("mm: page allocator: initialise ZLC for first zone
    eligible for zone_reclaim") and commit 76d3fbf8fbf6 ("mm: page
    allocator: reconsider zones for allocation after direct reclaim").

    The assumption of those patches was that it is always preferable to
    allocate quickly rather than stall for long periods of time; they were
    meant to ensure that the zone was only marked full when necessary, but
    an important case was missed.

    In the allocator fast path, only the low watermarks are checked. If the
    zone's free pages are between the low and min watermarks, then
    allocations from the allocator's slow path will succeed. However,
    zone_reclaim() will only reclaim SWAP_CLUSTER_MAX or 1<<order pages at a
    time, which is not necessarily enough to meet the low watermark, so the
    zone can be marked full prematurely.
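
    The fix, in outline (a sketch using mm's zone_reclaim return codes;
    only a genuine failure marks the zone full):

    ret = zone_reclaim(zone, gfp_mask, order);
    switch (ret) {
    case ZONE_RECLAIM_NOSCAN:
        /* did not scan */
        continue;
    case ZONE_RECLAIM_FULL:
        /* scanned but unreclaimable */
        continue;
    default:
        /* did we reclaim enough? re-check the watermark */
        if (zone_watermark_ok(zone, order, mark,
                              classzone_idx, alloc_flags))
            goto try_this_zone;
        this_zone_full = 1;
    }
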
    Reported-by: Hedi Berriche
    Tested-by: Hedi Berriche
    Reviewed-by: Michal Hocko
    Reviewed-by: Wanpeng Li
    Signed-off-by: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     
  • Memory hotplug can happen on a machine under load, memory shortness
    and fragmentation, so huge page allocations for the vmemmap are not
    guaranteed to succeed.

    Try to fall back to regular pages before failing the hotplug event
    completely.
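
    In outline, for the x86-64 path (a sketch using the generic vmemmap
    helpers; surrounding loop elided):

    /* try a huge page for this vmemmap chunk first ... */
    p = vmemmap_alloc_block_buf(PMD_SIZE, node);
    if (!p) {
        /* ... and fall back to base pages under memory pressure */
        if (vmemmap_populate_basepages(addr, next, node))
            return -ENOMEM;
    }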

    Signed-off-by: Johannes Weiner
    Cc: Ben Hutchings
    Cc: Bernhard Schmidt
    Cc: Johannes Weiner
    Cc: Russell King
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Benjamin Herrenschmidt
    Cc: "Luck, Tony"
    Cc: Heiko Carstens
    Cc: David Miller
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • We already have generic code to allocate vmemmap with regular pages, use
    it.

    Signed-off-by: Johannes Weiner
    Cc: Ben Hutchings
    Cc: Bernhard Schmidt
    Cc: Johannes Weiner
    Cc: Russell King
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Benjamin Herrenschmidt
    Cc: "Luck, Tony"
    Cc: Heiko Carstens
    Cc: David Miller
    Cc: Wu Fengguang
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • No need to maintain addr_end and p_end when they are never actually read
    anywhere on !pse setups. Remove the dead code.

    Signed-off-by: Johannes Weiner
    Cc: Ben Hutchings
    Cc: Bernhard Schmidt
    Cc: Johannes Weiner
    Cc: Russell King
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Benjamin Herrenschmidt
    Cc: "Luck, Tony"
    Cc: Heiko Carstens
    Cc: David Miller
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • The sparse code, when asking the architecture to populate the vmemmap,
    specifies the section range as a starting page and a number of pages.

    This is an awkward interface, because none of the arch-specific code
    actually thinks of the range in terms of 'struct page' units and always
    translates it to bytes first.

    In addition, later patches mix huge page and regular page backing for
    the vmemmap. For this, they need to call vmemmap_populate_basepages()
    on sub-section ranges with PAGE_SIZE and PMD_SIZE in mind. But these
    are not necessarily multiples of the 'struct page' size and so this unit
    is too coarse.

    Just translate the section range into bytes once in the generic sparse
    code, then pass byte ranges down the stack.
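
    Concretely, the arch hook's signature goes from page units to a byte
    range (sketched from this series):

    /* before */
    int vmemmap_populate(struct page *start_page,
                         unsigned long nr_pages, int node);

    /* after */
    int vmemmap_populate(unsigned long start, unsigned long end, int node);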

    Signed-off-by: Johannes Weiner
    Cc: Ben Hutchings
    Cc: Bernhard Schmidt
    Cc: Johannes Weiner
    Cc: Russell King
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Benjamin Herrenschmidt
    Cc: "Luck, Tony"
    Cc: Heiko Carstens
    Acked-by: David S. Miller
    Tested-by: David S. Miller
    Cc: Wu Fengguang
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • Hot-adding memory on x86_64 normally requires huge page allocation.
    When this is done to a VM guest, it's usually because the system is
    already tight on memory, so the request tends to fail. Try to avoid
    this by adding __GFP_REPEAT to the allocation flags.

    Addresses http://bugs.debian.org/699913

    Signed-off-by: Ben Hutchings
    Signed-off-by: Johannes Weiner
    Reported-by: Bernhard Schmidt
    Tested-by: Bernhard Schmidt
    Cc: Russell King
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Benjamin Herrenschmidt
    Cc: "Luck, Tony"
    Cc: Heiko Carstens
    Cc: David Miller
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ben Hutchings
     
  • Particularly in oom conditions, it's troublesome that hugetlb memory is
    not displayed. All other meminfo that is emitted will not add up to
    what is expected, and there is no artifact left in the kernel log to
    show that a potentially significant amount of memory is actually
    allocated as hugepages which are not available to be reclaimed.

    Booting with hugepages=8192 on the command line, this memory is now
    shown in oom conditions. For example, with echo m >
    /proc/sysrq-trigger:

    Node 0 hugepages_total=2048 hugepages_free=2048 hugepages_surp=0 hugepages_size=2048kB
    Node 1 hugepages_total=2048 hugepages_free=2048 hugepages_surp=0 hugepages_size=2048kB
    Node 2 hugepages_total=2048 hugepages_free=2048 hugepages_surp=0 hugepages_size=2048kB
    Node 3 hugepages_total=2048 hugepages_free=2048 hugepages_surp=0 hugepages_size=2048kB

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: David Rientjes
    Acked-by: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • Using mbind to change the mempolicy to MPOL_BIND on several adjacent
    mmapped blocks may result in a reset of the mempolicy to MPOL_DEFAULT in
    vma_adjust.

    Test code. Correct result is three lines containing "OK".

    #include <numaif.h>
    #include <stdio.h>
    #include <errno.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* gcc mbind_test.c -lnuma -o mbind_test -Wall */
    #define MAXNODE 4096

    void allocate()
    {
        int ret;
        int len;
        int policy = -1;
        unsigned char *p;
        unsigned long mask[MAXNODE] = { 0 };
        unsigned long retmask[MAXNODE] = { 0 };

        len = getpagesize() * 0x2fc00;
        p = mmap(NULL, len, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS,
                 -1, 0);
        if (p == MAP_FAILED)
            printf("mmap err: %d\n", errno);

        mask[0] = 1;
        ret = mbind(p, len, MPOL_BIND, mask, MAXNODE, 0);
        if (ret < 0)
            printf("mbind err: %d %d\n", ret, errno);
        ret = get_mempolicy(&policy, retmask, MAXNODE, p, MPOL_F_ADDR);
        if (ret < 0)
            printf("get_mempolicy err: %d %d\n", ret, errno);

        if (policy == MPOL_BIND)
            printf("OK\n");
        else
            printf("ERROR: policy is %d\n", policy);
    }

    int main()
    {
        allocate();
        allocate();
        allocate();
        return 0;
    }

    Signed-off-by: Steven T Hampson
    Cc: Mel Gorman
    Cc: KOSAKI Motohiro
    Cc: Rik van Riel
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hampson, Steven T
     
  • ARM processors with LPAE enabled use 3 levels of page tables, with an
    entry in the top level (pgd) covering 1GB of virtual space. Because of
    the branch relocation limitations on ARM, the loadable modules are
    mapped 16MB below PAGE_OFFSET, making the corresponding 1GB pgd shared
    between kernel modules and user space.

    If free_pgtables() is called with the default ceiling 0,
    free_pgd_range() (and the functions it subsequently calls) also frees
    the page table shared between user space and kernel modules, which is
    normally handled by the ARM-specific pgd_free() function. This patch
    defines the ARM USER_PGTABLES_CEILING as TASK_SIZE when CONFIG_ARM_LPAE
    is enabled.

    Note that the pgd_free() function already checks the presence of the
    shared pmd page allocated by pgd_alloc() and frees it, though with
    ceiling 0 this wasn't necessary.

    Signed-off-by: Catalin Marinas
    Cc: Russell King
    Cc: Hugh Dickins
    Cc: [3.3+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Catalin Marinas
     
  • On architectures where a pgd entry may be shared between user and kernel
    (e.g. ARM+LPAE), freeing page tables needs a ceiling other than 0.
    This patch introduces a generic USER_PGTABLES_CEILING that arch code can
    override. It is the responsibility of the arch code setting the ceiling
    to ensure the complete freeing of the page tables (usually in
    pgd_free()).
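
    The generic default with the arch override point, sketched:

    /* asm-generic: no ceiling by default, free the entire range */
    #ifndef USER_PGTABLES_CEILING
    #define USER_PGTABLES_CEILING 0UL
    #endif

    /* callers then pass it as the ceiling, e.g. in exit_mmap(): */
    free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING);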

    [catalin.marinas@arm.com: commit log; shift_arg_pages(), asm-generic/pgtables.h changes]
    Signed-off-by: Hugh Dickins
    Signed-off-by: Catalin Marinas
    Cc: Russell King
    Cc: [3.3+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     
  • Since commit 2d11085e404f ("memcg: do not create memsw files if swap
    accounting is disabled") memsw files are created only if memcg swap
    accounting is enabled so it doesn't make any sense to check for it
    explicitly in mem_cgroup_read(), mem_cgroup_write() and
    mem_cgroup_reset().

    Signed-off-by: Michal Hocko
    Cc: Kamezawa Hiroyuki
    Cc: Tejun Heo
    Acked-by: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michal Hocko
     
  • Remove the WARN_ON_ONCE(!mm) check as the comment suggested. Kernel
    code calls find_vma only when it is absolutely sure that the mm_struct
    arg to it is non-NULL.

    Signed-off-by: Zhang Yanfei
    Cc: k80c
    Cc: Michel Lespinasse
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Zhang Yanfei
     
  • Now, vmap_area_list is exported as VMCOREINFO for makedumpfile to get
    the start address of the vmalloc region (vmalloc_start). The address
    that contains the vmalloc_start value is computed as:

    vmap_area_list.next - OFFSET(vmap_area.list) + OFFSET(vmap_area.va_start)

    However, neither OFFSET(vmap_area.va_start) nor OFFSET(vmap_area.list)
    is exported as VMCOREINFO.

    So this patch exports them as well, with a small cleanup.
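
    The additions themselves are one-liners using the kexec VMCOREINFO
    macros (sketched):

    VMCOREINFO_OFFSET(vmap_area, va_start);
    VMCOREINFO_OFFSET(vmap_area, list);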

    [akpm@linux-foundation.org: vmalloc.h should include list.h for list_head]
    Signed-off-by: Atsushi Kumagai
    Cc: Joonsoo Kim
    Cc: Joonsoo Kim
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Atsushi Kumagai
    Cc: Chris Metcalf
    Cc: Dave Anderson
    Cc: Eric Biederman
    Cc: Guan Xuetao
    Cc: Ingo Molnar
    Cc: Vivek Goyal
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Atsushi Kumagai
     
  • Now, there is no need to maintain vmlist after initializing vmalloc, so
    remove the related code and data structure.

    Signed-off-by: Joonsoo Kim
    Signed-off-by: Joonsoo Kim
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Atsushi Kumagai
    Cc: Chris Metcalf
    Cc: Dave Anderson
    Cc: Eric Biederman
    Cc: Guan Xuetao
    Cc: Ingo Molnar
    Cc: Vivek Goyal
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • Our intention is to stop exporting this internal structure entirely,
    but there is one exception: kexec. kexec dumps the address of vmlist,
    and makedumpfile uses this information.

    We are about to remove vmlist, so makedumpfile needs another way to
    retrieve information about the vmalloc layer. For this purpose, we
    export vmap_area_list instead of vmlist.

    Signed-off-by: Joonsoo Kim
    Signed-off-by: Joonsoo Kim
    Cc: Eric Biederman
    Cc: Dave Anderson
    Cc: Vivek Goyal
    Cc: Atsushi Kumagai
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Chris Metcalf
    Cc: Guan Xuetao
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • This patch is a preparatory step for removing vmlist entirely. To that
    end, we change the code that iterates vmlist to iterate vmap_area_list
    instead. It is a mostly trivial change, but one thing should be noted.

    Using vmap_area_list in vmallocinfo() introduces an ordering problem on
    SMP systems. In s_show(), we retrieve some values from the vm_struct,
    but the vm_struct's values are not fully set up at the point where
    va->vm is assigned. Full setup is signalled by removing the VM_UNLIST
    flag, without holding a lock, so when we see that VM_UNLIST has been
    removed, there is no guarantee that the vm_struct holds proper values
    as seen from other CPUs. We therefore need smp_[rw]mb to ensure the
    proper values are visible once VM_UNLIST is observed to be clear.

    Therefore, this patch not only changes the iteration list but also adds
    the appropriate smp_[rw]mb in the right places.
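
    The pairing, in outline (writer publishes the vm_struct, reader checks
    the flag before trusting its fields; a sketch from the description):

    /* writer side, once the vm_struct is fully set up: */
    smp_wmb();              /* order the setup before the flag clear */
    vm->flags &= ~VM_UNLIST;

    /* reader side, in s_show(): */
    if (v->flags & VM_UNLIST)
        return 0;           /* not fully set up yet, skip it */
    smp_rmb();              /* pairs with the writer's smp_wmb() */
    /* vm_struct fields are now safe to read */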

    Signed-off-by: Joonsoo Kim
    Signed-off-by: Joonsoo Kim
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Atsushi Kumagai
    Cc: Chris Metcalf
    Cc: Dave Anderson
    Cc: Eric Biederman
    Cc: Guan Xuetao
    Cc: Ingo Molnar
    Cc: Vivek Goyal
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • This patch is a preparatory step for removing vmlist entirely. To that
    end, we change the code that iterates vmlist to iterate vmap_area_list
    instead. It is a mostly trivial change, but one thing should be noted.

    vmlist lacks information about some areas in the vmalloc address space.
    For example, vm_map_ram() allocates an area in the vmalloc address
    space but does not link it into vmlist. Since it is better to provide
    full information about the vmalloc address space, we stop going through
    va->vm and use the vmap_area directly. This makes get_vmalloc_info()
    more precise.

    Signed-off-by: Joonsoo Kim
    Signed-off-by: Joonsoo Kim
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Atsushi Kumagai
    Cc: Chris Metcalf
    Cc: Dave Anderson
    Cc: Eric Biederman
    Cc: Guan Xuetao
    Cc: Ingo Molnar
    Cc: Vivek Goyal
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • Now, while vmap_area_lock is held, va->vm can't be discarded, so we can
    safely access va->vm while iterating vmap_area_list under that lock.
    With this property, change the vmlist-iterating code in vread/vwrite()
    to iterate vmap_area_list instead.

    There is a small difference related to locking: vmlist_lock is a mutex,
    while vmap_area_lock is a spinlock, so this may introduce spinning
    overhead while vread/vwrite() executes. But these are debug-oriented
    functions, so the overhead is not a real problem for the common case.

    Signed-off-by: Joonsoo Kim
    Signed-off-by: Joonsoo Kim
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Atsushi Kumagai
    Cc: Chris Metcalf
    Cc: Dave Anderson
    Cc: Eric Biederman
    Cc: Guan Xuetao
    Cc: Ingo Molnar
    Cc: Vivek Goyal
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim