17 Oct, 2007

40 commits

  • Why do we need r/o bind mounts?

    This feature allows a read-only view into a read-write filesystem. In the
    process of doing that, it also provides infrastructure for keeping track of
    the number of writers to any given mount.

    This has a number of uses. It allows chroots to have parts of filesystems
    writable. It will be useful for containers in the future because users may
    have root inside a container, but should not be allowed to write to
    some filesystems. This also replaces patches that vserver has had out of the
    tree for several years.

    It allows security enhancement by making sure that parts of your filesystem are
    read-only (such as when you don't trust your FTP server), when you don't want
    to have entire new filesystems mounted, or when you want atime selectively
    updated. I've been using the following script to test that the feature is
    working as desired. It takes a directory and makes a regular bind and a r/o
    bind mount of it. It then performs some normal filesystem operations on the
    three directories, including ones that are expected to fail, like creating a
    file on the r/o mount.
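
    The test script itself is not reproduced here; purely as an illustration of
    the same check, a user-space sketch in C might look like this (the /tmp paths
    are made-up placeholders, and the three directories are assumed to already
    exist):

    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mount.h>

    /* try to create a file in dir and report whether it worked */
    static void try_create(const char *dir)
    {
        char path[256];
        int fd;

        snprintf(path, sizeof(path), "%s/newfile", dir);
        fd = open(path, O_CREAT | O_WRONLY, 0644);
        if (fd >= 0) {
            printf("%s: create succeeded\n", dir);
            close(fd);
            unlink(path);
        } else {
            printf("%s: create failed: %s\n", dir, strerror(errno));
        }
    }

    int main(void)
    {
        /* regular bind mount: stays writable */
        if (mount("/tmp/orig", "/tmp/bind", NULL, MS_BIND, NULL))
            perror("bind mount");

        /* r/o bind mount: bind first, then remount that bind read-only */
        if (mount("/tmp/orig", "/tmp/robind", NULL, MS_BIND, NULL) ||
            mount(NULL, "/tmp/robind", NULL,
                  MS_REMOUNT | MS_BIND | MS_RDONLY, NULL))
            perror("r/o bind mount");

        try_create("/tmp/orig");    /* expected to succeed */
        try_create("/tmp/bind");    /* expected to succeed */
        try_create("/tmp/robind");  /* expected to fail with EROFS */

        umount("/tmp/bind");
        umount("/tmp/robind");
        return 0;
    }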

    This patch:

    Some filesystems forego the vfs and may_open() and create their own 'struct
    file's.

    This patch creates a couple of helper functions which can be used by these
    filesystems, and will provide a unified place which the r/o bind mount code
    may patch.

    Also, rename an existing, static-scope init_file() to a less generic name.

    Signed-off-by: Dave Hansen
    Cc: Christoph Hellwig
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dave Hansen
     
  • We don't want to introduce pointless delays in throttle_vm_writeout() when
    the writeback limits are not yet exceeded, do we?

    Cc: Nick Piggin
    Cc: OGAWA Hirofumi
    Cc: Kumar Gala
    Cc: Pete Zaitcev
    Cc: Greg KH
    Reviewed-by: Rik van Riel
    Signed-off-by: Fengguang Wu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fengguang Wu
     
  • I_LOCK was used for several unrelated purposes, which caused deadlock
    situations in certain filesystems as a side effect. One of the purposes
    now uses the new I_SYNC bit.

    Also document the various bits and change their order from historical to
    logical.

    [bunk@stusta.de: make fs/inode.c:wake_up_inode() static]
    Signed-off-by: Joern Engel
    Cc: Dave Kleikamp
    Cc: David Chinner
    Cc: Anton Altaparmakov
    Cc: Al Viro
    Cc: Christoph Hellwig
    Signed-off-by: Adrian Bunk
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joern Engel
     
    After making a 100M file dirty, the normal behavior is to start writeback
    for all data after a 30s delay. But sometimes the following happens instead:

    - after 30s: ~4M
    - after 5s: ~4M
    - after 5s: all remaining 92M

    Some analysis shows that the internal io dispatch queues go like this:

         s_io      s_more_io
         -------------------------
    1)   100M,1K   0
    2)   1K        96M
    3)   0         96M

    1) initial state with a 100M file and a 1K file
    2) 4M written, nr_to_write <= 0, so write more
    3) 1K written, nr_to_write > 0, no more writes (BUG)

    nr_to_write > 0 in (3) fools the upper layer into thinking that all data have
    been written out. The big dirty file is actually still sitting in s_more_io. We
    cannot simply splice s_more_io back to s_io as soon as s_io becomes empty, and
    let the loop in generic_sync_sb_inodes() continue: this may starve newly
    expired inodes in s_dirty. It is also not an option to draw inodes from both
    s_more_io and s_dirty, and let the loop go on: this might lead to livelocks,
    and might also starve other superblocks in sync time (well, kupdate may still
    starve some superblocks; that's another bug).

    We have to return when a full scan of s_io completes. So nr_to_write > 0 does
    not necessarily mean that "all data are written". This patch introduces a
    flag writeback_control.more_io to indicate this situation. With it the big
    dirty file no longer has to wait for the next kupdate invocation 5s later.
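
    As a rough user-space analogue of that idea (invented struct and sizes, not
    the kernel code), the caller keeps writing whenever the sync pass reports
    more queued io, instead of trusting nr_to_write alone:

    #include <stdio.h>

    struct wb_control {
        long nr_to_write;   /* page budget for one sync pass */
        int  more_io;       /* sync pass left work queued (s_more_io) */
    };

    /* pages still dirty per "inode": a big file and a tiny one */
    static long dirty[] = { 25600, 1 };

    /* one pass over s_io: write at most wbc->nr_to_write pages */
    static void sync_pass(struct wb_control *wbc)
    {
        int i;

        wbc->more_io = 0;
        for (i = 0; i < 2; i++) {
            long n = dirty[i] < wbc->nr_to_write ? dirty[i] : wbc->nr_to_write;

            dirty[i] -= n;
            wbc->nr_to_write -= n;
            if (dirty[i] > 0)
                wbc->more_io = 1;   /* inode had to be requeued */
            if (wbc->nr_to_write <= 0)
                return;
        }
    }

    int main(void)
    {
        long total = 0;

        for (;;) {
            struct wb_control wbc = { .nr_to_write = 1024 };

            sync_pass(&wbc);
            total += 1024 - wbc.nr_to_write;
            /* old logic stopped as soon as nr_to_write stayed > 0;
             * with more_io the caller knows work is still queued */
            if (wbc.nr_to_write > 0 && !wbc.more_io)
                break;
        }
        printf("wrote %ld pages in one go\n", total);
        return 0;
    }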

    Cc: David Chinner
    Cc: Ken Chen
    Signed-off-by: Fengguang Wu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fengguang Wu
     
  • Since nothing earlier than gcc-3.2 is supported for kernel
    compilation, that 2.95 hack can be removed.

    Signed-off-by: Robert P. J. Day
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Robert P. J. Day
     
    mm.h doesn't directly use anything from mutex.h or backing-dev.h, so
    remove them and add them back to files which need them.

    Cross-compile tested on many configs and archs.

    Signed-off-by: Alexey Dobriyan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexey Dobriyan
     
  • These aren't modular, so SLAB_PANIC is OK.

    Signed-off-by: Alexey Dobriyan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexey Dobriyan
     
  • This is a writeback-internal marker but we're propagating it all the way back
    to userspace!

    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • zone->lock is quite an "inner" lock and mostly constrained to page alloc as
    well, so like slab locks, it probably isn't something that is critically
    important to document here. However unlike slab locks, zone lock could be
    used more widely in future, and page_alloc.c might possibly have more
    business to do tricky things with pagecache than does slab. So... I don't
    think it hurts to document it.

    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Introduces new zone flag interface for testing and setting flags:

    int zone_test_and_set_flag(struct zone *zone, zone_flags_t flag)

    Instead of setting and clearing ZONE_RECLAIM_LOCKED each time shrink_zone() is
    called, this flag is tested and set before starting zone reclaim. Zone reclaim
    starts in __alloc_pages() when a zone's watermark fails and the system is in
    zone_reclaim_mode. If it's already in reclaim, there's no need to start again
    so it is simply considered full for that allocation attempt.

    There is a change of behavior with regard to concurrent zone shrinking. It is
    now possible for try_to_free_pages() or kswapd to already be shrinking a
    particular zone when __alloc_pages() starts zone reclaim. In this case, it is
    possible for two concurrent threads to invoke shrink_zone() for a single zone.

    This change forbids a zone from being in zone reclaim twice, which was always the
    behavior, but allows for concurrent try_to_free_pages() or kswapd shrinking
    when starting zone reclaim.
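
    A minimal user-space sketch of that test-and-set guard, with C11 atomics
    standing in for the kernel's atomic bitops (illustration only, not the mm
    code):

    #include <stdatomic.h>
    #include <stdio.h>

    typedef int zone_flags_t;
    #define ZONE_RECLAIM_LOCKED 1

    struct zone { atomic_ulong flags; };

    /* returns the previous value of the flag, setting it atomically */
    static int zone_test_and_set_flag(struct zone *zone, zone_flags_t flag)
    {
        return (atomic_fetch_or(&zone->flags, 1UL << flag) >> flag) & 1;
    }

    static void zone_clear_flag(struct zone *zone, zone_flags_t flag)
    {
        atomic_fetch_and(&zone->flags, ~(1UL << flag));
    }

    static int zone_reclaim(struct zone *zone)
    {
        if (zone_test_and_set_flag(zone, ZONE_RECLAIM_LOCKED))
            return 0;   /* already in reclaim: treat the zone as full */
        /* ... shrink_zone()-like work would go here ... */
        zone_clear_flag(zone, ZONE_RECLAIM_LOCKED);
        return 1;
    }

    int main(void)
    {
        struct zone z = { .flags = 0 };

        /* pretend kswapd grabbed the flag first ... */
        zone_test_and_set_flag(&z, ZONE_RECLAIM_LOCKED);
        /* ... so zone reclaim from the allocator simply backs off */
        printf("%d\n", zone_reclaim(&z));   /* prints 0 */
        zone_clear_flag(&z, ZONE_RECLAIM_LOCKED);
        printf("%d\n", zone_reclaim(&z));   /* prints 1 */
        return 0;
    }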

    Cc: Andrea Arcangeli
    Cc: Christoph Lameter
    Signed-off-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • There's no reason to sleep in try_set_zone_oom() or clear_zonelist_oom() if
    the lock can't be acquired; it will be available soon enough once the zonelist
    scanning is done. All other threads waiting for the OOM killer are also
    contingent on the exiting task being able to acquire the lock in
    clear_zonelist_oom() so it doesn't make sense to put it to sleep.

    Cc: Andrea Arcangeli
    Cc: Christoph Lameter
    Signed-off-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • Since no task descriptor's 'cpuset' field is dereferenced in the execution of
    the OOM killer anymore, it is no longer necessary to take callback_mutex.

    [akpm@linux-foundation.org: restore cpuset_lock for other patches]
    Cc: Andrea Arcangeli
    Acked-by: Christoph Lameter
    Signed-off-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
    Instead of testing for overlap in the memory nodes of the nearest
    exclusive ancestor of both current and the candidate task, it is better to
    simply test for intersection between the task's mems_allowed in their task
    descriptors. This does not require taking callback_mutex since it is only
    used as a hint in the badness scoring.

    Tasks that do not have an intersection in their mems_allowed with the current
    task are not explicitly restricted from being OOM killed because it is quite
    possible that the candidate task has allocated memory there before and has
    since changed its mems_allowed.
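
    For illustration only (a real nodemask_t may span several words), the hint
    boils down to a bitwise intersection test:

    #include <stdio.h>

    /* one-word nodemask for the sake of the example */
    static int mems_intersect(unsigned long a, unsigned long b)
    {
        return (a & b) != 0;
    }

    int main(void)
    {
        unsigned long current_mems   = 0x3;  /* nodes 0 and 1 */
        unsigned long candidate_mems = 0x4;  /* node 2 only */

        /* no overlap: the candidate just scores lower in the badness hint,
         * it is not excluded from being killed */
        printf("%d\n", mems_intersect(current_mems, candidate_mems));
        return 0;
    }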

    Cc: Andrea Arcangeli
    Acked-by: Christoph Lameter
    Signed-off-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • Suppresses the extraneous stack and memory dump when a parallel OOM killing
    has been found. There's no need to fill the ring buffer with this information
    if it's already been printed and the condition that triggered the previous OOM
    killer has not yet been alleviated.

    Cc: Andrea Arcangeli
    Acked-by: Christoph Lameter
    Signed-off-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • Adds a new sysctl, 'oom_kill_allocating_task', which will automatically kill
    the OOM-triggering task instead of scanning through the tasklist to find a
    memory-hogging target. This is helpful for systems with an insanely large
    number of tasks where scanning the tasklist significantly degrades
    performance.

    Cc: Andrea Arcangeli
    Acked-by: Christoph Lameter
    Signed-off-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • A final allocation attempt with a very high watermark needs to be attempted
    before invoking out_of_memory(). OOM killer serialization needs to occur
    before this final attempt; otherwise, tasks attempting to OOM-lock all zones in
    their zonelists may spin and acquire the lock unnecessarily after the OOM
    condition has already been alleviated.

    If the final allocation does succeed, the zonelist is simply OOM-unlocked and
    __alloc_pages() returns the page. Otherwise, the OOM killer is invoked.

    If the task cannot acquire OOM-locks on all zones in its zonelist, it is put
    to sleep and the allocation is retried when it gets rescheduled. One of its
    zones is already marked as being in the OOM killer so it'll hopefully be
    getting some free memory soon, at least enough to satisfy a high watermark
    allocation attempt. This prevents needlessly killing a task when the OOM
    condition would have already been alleviated if it had simply been given
    enough time.

    Cc: Andrea Arcangeli
    Acked-by: Christoph Lameter
    Signed-off-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • OOM killer synchronization should be done with zone granularity so that memory
    policy and cpuset allocations may have their corresponding zones locked and
    allow parallel kills for other OOM conditions that may exist elsewhere in the
    system. DMA allocations can be targeted at the zone level, which would not be
    possible if locking were done per node or globally.

    Synchronization shall be done with a variation of "trylocks." The goal is to
    put the current task to sleep and restart the failed allocation attempt later
    if the trylock fails. Otherwise, the OOM killer is invoked.

    Each zone in the zonelist that __alloc_pages() was called with is checked for
    the newly-introduced ZONE_OOM_LOCKED flag. If any zone has this flag present,
    the "trylock" to serialize the OOM killer fails and returns zero. Otherwise,
    all the zones have ZONE_OOM_LOCKED set and the try_set_zone_oom() function
    returns non-zero.
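
    Sketched in user-space C (illustration only; a mutex stands in here for
    whatever serializes the scan in the kernel), the trylock shape is roughly:

    #include <pthread.h>

    #define ZONE_OOM_LOCKED 0
    #define NR_ZONES        4

    struct zone { unsigned long flags; };

    static struct zone zonelist[NR_ZONES];
    static pthread_mutex_t zone_scan_mutex = PTHREAD_MUTEX_INITIALIZER;

    /* returns non-zero and marks every zone if none was already OOM-locked */
    static int try_set_zone_oom(struct zone *zones, int nr)
    {
        int i, ret = 1;

        pthread_mutex_lock(&zone_scan_mutex);
        for (i = 0; i < nr; i++) {
            if (zones[i].flags & (1UL << ZONE_OOM_LOCKED)) {
                ret = 0;            /* someone else is handling this OOM */
                goto out;
            }
        }
        for (i = 0; i < nr; i++)
            zones[i].flags |= 1UL << ZONE_OOM_LOCKED;
    out:
        pthread_mutex_unlock(&zone_scan_mutex);
        return ret;
    }

    static void clear_zonelist_oom(struct zone *zones, int nr)
    {
        int i;

        pthread_mutex_lock(&zone_scan_mutex);
        for (i = 0; i < nr; i++)
            zones[i].flags &= ~(1UL << ZONE_OOM_LOCKED);
        pthread_mutex_unlock(&zone_scan_mutex);
    }

    int main(void)
    {
        if (try_set_zone_oom(zonelist, NR_ZONES)) {
            /* ... invoke the OOM killer here ... */
            clear_zonelist_oom(zonelist, NR_ZONES);
        }
        /* else: back off and retry the allocation later */
        return 0;
    }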

    Cc: Andrea Arcangeli
    Cc: Christoph Lameter
    Signed-off-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • Convert the int all_unreclaimable member of struct zone to unsigned long
    flags. This can now be used to specify several different zone flags such as
    all_unreclaimable and reclaim_in_progress, which can now be removed and
    converted to a per-zone flag.

    Flags are set and cleared as follows:

    zone_set_flag(struct zone *zone, zone_flags_t flag)
    zone_clear_flag(struct zone *zone, zone_flags_t flag)

    Defines the first zone flags, ZONE_ALL_UNRECLAIMABLE and ZONE_RECLAIM_LOCKED,
    which have the same semantics as the old zone->all_unreclaimable and
    zone->reclaim_in_progress, respectively. Also converts all current users that
    set or clear either flag to use the new interface.

    Helper functions are defined to test the flags:

    int zone_is_all_unreclaimable(const struct zone *zone)
    int zone_is_reclaim_locked(const struct zone *zone)

    All flag operators are of the atomic variety because there are currently
    readers that do not take zone->lock.
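
    Roughly, with C11 atomics standing in for the kernel's atomic bitops (an
    illustrative sketch, not the actual helpers):

    #include <stdatomic.h>

    typedef enum {
        ZONE_ALL_UNRECLAIMABLE,
        ZONE_RECLAIM_LOCKED,
    } zone_flags_t;

    struct zone { atomic_ulong flags; };

    static void zone_set_flag(struct zone *zone, zone_flags_t flag)
    {
        atomic_fetch_or(&zone->flags, 1UL << flag);
    }

    static void zone_clear_flag(struct zone *zone, zone_flags_t flag)
    {
        atomic_fetch_and(&zone->flags, ~(1UL << flag));
    }

    static int zone_is_all_unreclaimable(const struct zone *zone)
    {
        return (atomic_load(&zone->flags) >> ZONE_ALL_UNRECLAIMABLE) & 1;
    }

    static int zone_is_reclaim_locked(const struct zone *zone)
    {
        return (atomic_load(&zone->flags) >> ZONE_RECLAIM_LOCKED) & 1;
    }

    int main(void)
    {
        struct zone z = { .flags = 0 };

        zone_set_flag(&z, ZONE_ALL_UNRECLAIMABLE);
        zone_clear_flag(&z, ZONE_RECLAIM_LOCKED);
        return zone_is_all_unreclaimable(&z) && !zone_is_reclaim_locked(&z) ? 0 : 1;
    }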

    [akpm@linux-foundation.org: add needed include]
    Cc: Andrea Arcangeli
    Acked-by: Christoph Lameter
    Signed-off-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • The OOM killer's CONSTRAINT definitions are really more appropriate in an
    enum, so define them in include/linux/oom.h.

    Cc: Andrea Arcangeli
    Acked-by: Christoph Lameter
    Signed-off-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • Move the OOM killer's extern function prototypes to include/linux/oom.h and
    include it where necessary.

    [clg@fr.ibm.com: build fix]
    Cc: Andrea Arcangeli
    Acked-by: Christoph Lameter
    Signed-off-by: David Rientjes
    Signed-off-by: Cedric Le Goater
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • Slab constructors currently have a flags parameter that is never used. And
    the order of the arguments is the opposite of other slab functions: the object
    pointer is placed before the kmem_cache pointer.

    Convert

    ctor(void *object, struct kmem_cache *s, unsigned long flags)

    to

    ctor(struct kmem_cache *s, void *object)

    throughout the kernel.
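
    So a hypothetical constructor (my_obj and my_ctor_* are invented names, not
    from the patch) changes shape like this:

    struct kmem_cache;          /* opaque here */

    struct my_obj { int refcount; };

    /* before: unused flags, object pointer first */
    void my_ctor_old(void *object, struct kmem_cache *s, unsigned long flags)
    {
        struct my_obj *obj = object;
        obj->refcount = 0;
    }

    /* after: kmem_cache pointer first, no flags */
    void my_ctor_new(struct kmem_cache *s, void *object)
    {
        struct my_obj *obj = object;
        obj->refcount = 0;
    }

    int main(void)
    {
        struct my_obj o;

        my_ctor_old(&o, 0, 0);   /* old call order */
        my_ctor_new(0, &o);      /* new call order */
        return o.refcount;
    }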

    [akpm@linux-foundation.org: coupla fixes]
    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
    Move irq handling out of new_slab() into __slab_alloc(). That is useful for
    Mathieu's cmpxchg_local patchset and also allows us to remove the crude
    local_irq_off in early_kmem_cache_alloc().

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Based on ideas of Andrew:
    http://marc.info/?l=linux-kernel&m=102912915020543&w=2

    Scale the bdi dirty limit inversely with the task's dirty rate.
    This makes heavy writers have a lower dirty limit than the occasional writer.

    Andrea proposed something similar:
    http://lwn.net/Articles/152277/

    The main disadvantage to his patch is that he uses an unrelated quantity to
    measure time, which leaves him with a workload-dependent tunable. Other than
    that the two approaches appear quite similar.

    [akpm@linux-foundation.org: fix warning]
    Signed-off-by: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     
  • Scale writeback cache per backing device, proportional to its writeout speed.

    By decoupling the BDI dirty thresholds a number of problems we currently have
    will go away, namely:

    - mutual interference starvation (for any number of BDIs);
    - deadlocks with stacked BDIs (loop, FUSE and local NFS mounts).

    It might be that all dirty pages are for a single BDI while other BDIs are
    idling. By giving each BDI a 'fair' share of the dirty limit, each one can have
    dirty pages outstanding and make progress.

    A global threshold also creates a deadlock for stacked BDIs; when A writes to
    B, and A generates enough dirty pages to get throttled, B will never start
    writeback until the dirty pages go away. Again, by giving each BDI its own
    'independent' dirty limit, this problem is avoided.

    So the problem is to determine how to distribute the total dirty limit across
    the BDIs fairly and efficiently. A BDI that has a large dirty limit but does
    not have any dirty pages outstanding is a waste.

    What is done is to keep a floating proportion between the BDIs based on
    writeback completions. This way faster/more active devices get a larger share
    than slower/idle devices.
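
    As a back-of-the-envelope illustration of that idea (static numbers here,
    whereas the patch keeps a floating, decaying proportion):

    #include <stdio.h>

    /* share of the global dirty limit for one BDI, proportional to the
     * fraction of recent writeback completions it accounted for */
    static unsigned long bdi_dirty_limit(unsigned long total_limit,
                                         unsigned long bdi_completions,
                                         unsigned long all_completions)
    {
        if (!all_completions)
            return total_limit;     /* nothing written yet: no split */
        return total_limit * bdi_completions / all_completions;
    }

    int main(void)
    {
        /* e.g. a 200MB global limit, a fast disk doing 3/4 of the writeback
         * and a slow USB stick doing 1/4 */
        printf("fast disk: %luMB\n", bdi_dirty_limit(200, 75, 100));
        printf("usb stick: %luMB\n", bdi_dirty_limit(200, 25, 100));
        return 0;
    }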

    [akpm@linux-foundation.org: fix warnings]
    [hugh@veritas.com: Fix occasional hang when a task couldn't get out of balance_dirty_pages]
    Signed-off-by: Peter Zijlstra
    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     
  • Count per BDI writeback pages.

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     
  • Count per BDI reclaimable pages; nr_reclaimable = nr_dirty + nr_unstable.

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     
  • Provide scalable per backing_dev_info statistics counters.

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     
  • provide BDI constructor/destructor hooks

    [akpm@linux-foundation.org: compile fix]
    Signed-off-by: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     
  • These patches aim to improve balance_dirty_pages() and directly address three
    issues:
    1) inter device starvation
    2) stacked device deadlocks
    3) inter process starvation

    1 and 2 are a direct result of removing the global dirty limit and using
    per-device dirty limits. By giving each device its own dirty limit, it will
    no longer starve another device, and the cyclic dependency on the dirty limit
    is broken.

    In order to efficiently distribute the dirty limit across the independent
    devices, a floating proportion is used; this allocates a share of the total
    limit proportional to the device's recent activity.

    3 is done by also scaling the dirty limit proportional to the current task's
    recent dirty rate.

    This patch:

    nfs: remove congestion_end(). It's redundant, clear_bdi_congested() already
    wakes the waiters.

    Signed-off-by: Peter Zijlstra
    Cc: Trond Myklebust
    Cc: "J. Bruce Fields"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     
  • * 'for-linus' of git://git.kernel.dk/data/git/linux-2.6-block: (63 commits)
    Fix memory leak in dm-crypt
    SPARC64: sg chaining support
    SPARC: sg chaining support
    PPC: sg chaining support
    PS3: sg chaining support
    IA64: sg chaining support
    x86-64: enable sg chaining
    x86-64: update pci-gart iommu to sg helpers
    x86-64: update nommu to sg helpers
    x86-64: update calgary iommu to sg helpers
    swiotlb: sg chaining support
    i386: enable sg chaining
    i386 dma_map_sg: convert to using sg helpers
    mmc: need to zero sglist on init
    Panic in blk_rq_map_sg() from CCISS driver
    remove sglist_len
    remove blk_queue_max_phys_segments in libata
    revert sg segment size ifdefs
    Fixup u14-34f ENABLE_SG_CHAINING
    qla1280: enable use_sg_chaining option
    ...

    Linus Torvalds
     
  • This patch contains the following cleanups:
    - make the needlessly global setup_vmstat() static
    - remove the unused refresh_vm_stats()

    Signed-off-by: Adrian Bunk
    Acked-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Adrian Bunk
     
  • This patch contains the following cleanups:
    - every file should include the headers containing the prototypes for
    its global functions
    - make the following needlessly global functions static:
    - migrate_to_node()
    - do_mbind()
    - sp_alloc()
    - mpol_rebind_policy()

    [akpm@linux-foundation.org: fix uninitialised var warning]
    Signed-off-by: Adrian Bunk
    Acked-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Adrian Bunk
     
  • This patch makes three needlessly global functions static.

    Signed-off-by: Adrian Bunk
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Adrian Bunk
     
  • When gather_surplus_pages() fails to allocate enough huge pages to satisfy
    the requested reservation, it frees what it did allocate back to the buddy
    allocator. put_page() should be called instead of update_and_free_page()
    to ensure that pool counters are updated as appropriate and the page's
    refcount is decremented.

    Signed-off-by: Adam Litke
    Acked-by: Dave Hansen
    Cc: David Gibson
    Cc: William Lee Irwin III
    Cc: Badari Pulavarty
    Cc: Ken Chen
    Cc: Lee Schermerhorn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Adam Litke
     
  • Anton found a problem with the hugetlb pool allocation when some nodes have
    no memory (http://marc.info/?l=linux-mm&m=118133042025995&w=2). Lee worked
    on versions that tried to fix it, but none were accepted. Christoph has
    created a set of patches which allow for GFP_THISNODE allocations to fail
    if the node has no memory.

    Currently, alloc_fresh_huge_page() returns NULL when it is not able to
    allocate a huge page on the current node, as specified by its custom
    interleave variable. The callers of this function, though, assume that a
    failure in alloc_fresh_huge_page() indicates no hugepages can be allocated
    on the system, period. This might not be the case, for instance, if we have
    an uneven NUMA system, and we happen to try to allocate a hugepage on a
    node with less memory and fail, while there is still plenty of free memory
    on the other nodes.

    To correct this, make alloc_fresh_huge_page() search through all online
    nodes before deciding no hugepages can be allocated. Add a helper function
    for actually allocating the hugepage. Use a new global nid iterator to
    control which nid to allocate on.

    Note: we expect particular semantics for __GFP_THISNODE, which are now
    enforced even for memoryless nodes. That is, there should be no
    fallback to other nodes. Therefore, we rely on the nid passed into
    alloc_pages_node() to be the nid the page comes from. If this is
    incorrect, accounting will break.
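
    A user-space caricature of that search (fake nodes and invented helper
    names; the real code walks the online node map and uses __GFP_THISNODE):

    #include <stdio.h>
    #include <stdlib.h>

    #define NR_NODES 4

    /* pages we pretend each node can still hand out; nodes 2,3 are memoryless */
    static int node_free_pages[NR_NODES] = { 100, 100, 0, 0 };

    /* persistent interleave point, like the global nid iterator */
    static int hugetlb_next_nid;

    static void *alloc_huge_page_on_node(int nid)
    {
        if (node_free_pages[nid] <= 0)
            return NULL;            /* acts like __GFP_THISNODE: no fallback */
        node_free_pages[nid]--;
        return malloc(4096);
    }

    /* try every online node, starting after the last one we used */
    static void *alloc_fresh_huge_page(void)
    {
        int start = hugetlb_next_nid;
        void *page;

        do {
            page = alloc_huge_page_on_node(hugetlb_next_nid);
            hugetlb_next_nid = (hugetlb_next_nid + 1) % NR_NODES;
            if (page)
                return page;
        } while (hugetlb_next_nid != start);

        return NULL;                /* really out of huge pages everywhere */
    }

    int main(void)
    {
        int i, got = 0;

        for (i = 0; i < 250; i++)
            if (alloc_fresh_huge_page())
                got++;
        printf("allocated %d huge pages\n", got); /* 200: memoryless nodes skipped */
        return 0;
    }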

    Tested on x86 !NUMA, x86 NUMA, x86_64 NUMA and ppc64 NUMA (with 2
    memoryless nodes).

    Before on the ppc64 box:
    Trying to clear the hugetlb pool
    Done. 0 free
    Trying to resize the pool to 100
    Node 0 HugePages_Free: 25
    Node 1 HugePages_Free: 75
    Node 2 HugePages_Free: 0
    Node 3 HugePages_Free: 0
    Done. Initially 100 free
    Trying to resize the pool to 200
    Node 0 HugePages_Free: 50
    Node 1 HugePages_Free: 150
    Node 2 HugePages_Free: 0
    Node 3 HugePages_Free: 0
    Done. 200 free

    After:
    Trying to clear the hugetlb pool
    Done. 0 free
    Trying to resize the pool to 100
    Node 0 HugePages_Free: 50
    Node 1 HugePages_Free: 50
    Node 2 HugePages_Free: 0
    Node 3 HugePages_Free: 0
    Done. Initially 100 free
    Trying to resize the pool to 200
    Node 0 HugePages_Free: 100
    Node 1 HugePages_Free: 100
    Node 2 HugePages_Free: 0
    Node 3 HugePages_Free: 0
    Done. 200 free

    Signed-off-by: Nishanth Aravamudan
    Acked-by: Christoph Lameter
    Cc: Adam Litke
    Cc: David Gibson
    Cc: Badari Pulavarty
    Cc: Ken Chen
    Cc: William Lee Irwin III
    Cc: Lee Schermerhorn
    Cc: KAMEZAWA Hiroyuki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nishanth Aravamudan
     
  • When shrinking the size of the hugetlb pool via the nr_hugepages sysctl, we
    are careful to keep enough pages around to satisfy reservations. But the
    calculation is flawed for the following scenario:

    Action                          Pool Counters (Total, Free, Resv)
    ======                          =================================
    Set pool to 1 page              1    1    0
    Map 1 page MAP_PRIVATE          1    1    0
    Touch the page to fault it in   1    0    0
    Set pool to 3 pages             3    2    0
    Map 2 pages MAP_SHARED          3    2    2
    Set pool to 2 pages             2    1    2

    Acked-by: Ken Chen
    Cc: David Gibson
    Cc: Badari Pulavarty
    Cc: William Lee Irwin III
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Adam Litke
     
  • The maximum size of the huge page pool can be controlled using the overall
    size of the hugetlb filesystem (via its 'size' mount option). However in the
    common case this will not be set, as the pool is traditionally fixed in
    size at boot time. In order to maintain the expected semantics, we need to
    prevent the pool from expanding by default.

    This patch introduces a new sysctl controlling dynamic pool resizing. When
    this is enabled the pool will expand beyond its base size up to the size of
    the hugetlb filesystem. It is disabled by default.

    Signed-off-by: Adam Litke
    Acked-by: Andy Whitcroft
    Acked-by: Dave McCracken
    Cc: William Irwin
    Cc: David Gibson
    Cc: Ken Chen
    Cc: Badari Pulavarty
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Adam Litke
     
  • Shared mappings require special handling because the huge pages needed to
    fully populate the VMA must be reserved at mmap time. If not enough pages are
    available when making the reservation, allocate all of the shortfall at once
    from the buddy allocator and add the pages directly to the hugetlb pool. If
    they cannot be allocated, then fail the mapping. The page surplus is
    accounted for in the same way as for private mappings; faulted surplus pages
    will be freed at unmap time. Reserved, surplus pages that have not been used
    must be freed separately when their reservation has been released.
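
    Sketched with toy counters (invented, user-space stand-ins; only the
    all-or-nothing shape of the mmap-time reservation is the point):

    #include <stdio.h>

    /* toy pool counters, in pages */
    static long pool_free  = 2;    /* free huge pages already in the pool */
    static long pool_total = 2;
    static long pool_resv  = 0;
    static long buddy_free = 3;    /* what the buddy allocator can still give */

    /* reserve `needed` pages for a shared mapping at mmap() time:
     * cover the whole shortfall from the buddy allocator or fail cleanly */
    static int hugetlb_reserve_pages(long needed)
    {
        long shortfall = needed - (pool_free - pool_resv);
        long got = 0;

        while (got < shortfall) {
            if (buddy_free == 0) {
                buddy_free += got;          /* undo: give the batch back */
                pool_free  -= got;
                pool_total -= got;
                return -1;                  /* mmap() fails with ENOMEM */
            }
            buddy_free--;                   /* surplus page joins the pool */
            pool_free++;
            pool_total++;
            got++;
        }
        pool_resv += needed;
        return 0;
    }

    int main(void)
    {
        printf("map 4 pages: %s\n", hugetlb_reserve_pages(4) ? "fail" : "ok");
        printf("map 2 more:  %s\n", hugetlb_reserve_pages(2) ? "fail" : "ok");
        printf("total=%ld free=%ld resv=%ld\n", pool_total, pool_free, pool_resv);
        return 0;
    }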

    Signed-off-by: Adam Litke
    Acked-by: Andy Whitcroft
    Acked-by: Dave McCracken
    Cc: William Irwin
    Cc: David Gibson
    Cc: Ken Chen
    Cc: Badari Pulavarty
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Adam Litke
     
  • Because we overcommit hugepages for MAP_PRIVATE mappings, it is possible that
    the hugetlb pool will be exhausted or completely reserved when a hugepage is
    needed to satisfy a page fault. Before killing the process in this situation,
    try to allocate a hugepage directly from the buddy allocator.

    The explicitly configured pool size becomes a low watermark. When dynamically
    grown, the allocated huge pages are accounted as a surplus over the watermark.
    As huge pages are freed on a node, surplus pages are released to the buddy
    allocator so that the pool will shrink back to the watermark.

    Surplus accounting also allows for friendlier explicit pool resizing. When
    shrinking a pool that is fully in-use, increase the surplus so pages will be
    returned to the buddy allocator as soon as they are freed. When growing a
    pool that has a surplus, consume the surplus first and then allocate new
    pages.
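
    A toy version of the free path under this scheme (user-space stand-in with
    invented counters; the real surplus accounting is per node):

    #include <stdio.h>
    #include <stdlib.h>

    static long nr_huge_pages      = 12;   /* total, including surplus */
    static long free_huge_pages    = 0;
    static long surplus_huge_pages = 2;    /* pages above the configured pool */

    /* freeing a huge page: shrink back toward the watermark first */
    static void free_huge_page(void *page)
    {
        if (surplus_huge_pages > 0) {
            surplus_huge_pages--;
            nr_huge_pages--;
            free(page);                    /* give it back to the "buddy" */
        } else {
            free_huge_pages++;             /* keep it in the hugetlb pool */
            (void)page;
        }
    }

    int main(void)
    {
        int i;

        for (i = 0; i < 4; i++)
            free_huge_page(malloc(4096));
        printf("total=%ld free=%ld surplus=%ld\n",
               nr_huge_pages, free_huge_pages, surplus_huge_pages);
        /* expected: total=10 free=2 surplus=0 */
        return 0;
    }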

    Signed-off-by: Adam Litke
    Signed-off-by: Mel Gorman
    Acked-by: Andy Whitcroft
    Acked-by: Dave McCracken
    Cc: William Irwin
    Cc: David Gibson
    Cc: Ken Chen
    Cc: Badari Pulavarty
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Adam Litke
     
  • Dynamic huge page pool resizing.

    In most real-world scenarios, configuring the size of the hugetlb pool
    correctly is a difficult task. If too few pages are allocated to the pool,
    applications using MAP_SHARED may fail to mmap() a hugepage region and
    applications using MAP_PRIVATE may receive SIGBUS. Isolating too much memory
    in the hugetlb pool means it is not available for other uses, especially those
    programs not using huge pages.

    The obvious answer is to let the hugetlb pool grow and shrink in response to
    the runtime demand for huge pages. The work Mel Gorman has been doing to
    establish a memory zone for movable memory allocations makes dynamically
    resizing the hugetlb pool reliable within the limits of that zone. This patch
    series implements dynamic pool resizing for private and shared mappings while
    being careful to maintain existing semantics. Please reply with your comments
    and feedback; even just to say whether it would be a useful feature to you.
    Thanks.

    How it works
    ============

    Upon depletion of the hugetlb pool, rather than reporting an error immediately,
    first try and allocate the needed huge pages directly from the buddy allocator.
    Care must be taken to avoid unbounded growth of the hugetlb pool, so the
    hugetlb filesystem quota is used to limit overall pool size.

    The real work begins when we decide there is a shortage of huge pages. What
    happens next depends on whether the pages are for a private or shared mapping.
    Private mappings are straightforward. At fault time, if alloc_huge_page()
    fails, we allocate a page from the buddy allocator and increment the source
    node's surplus_huge_pages counter. When free_huge_page() is called for a page
    on a node with a surplus, the page is freed directly to the buddy allocator
    instead of the hugetlb pool.

    Because shared mappings require all of the pages to be reserved up front, some
    additional work must be done at mmap() to support them. We determine the
    reservation shortage and allocate the required number of pages all at once.
    These pages are then added to the hugetlb pool and marked reserved. Where that
    is not possible the mmap() will fail. As with private mappings, the
    appropriate surplus counters are updated. Since reserved huge pages won't
    necessarily be used by the process, we can't be sure that free_huge_page() will
    always be called to return surplus pages to the buddy allocator. To prevent
    the huge page pool from bloating, we must free unused surplus pages when their
    reservation has ended.

    Controlling it
    ==============

    With the entire patch series applied, pool resizing is off by default so unless
    specific action is taken, the semantics are unchanged.

    To take advantage of the flexibility afforded by this patch series one must
    tolerate a change in semantics. To control hugetlb pool growth, the following
    techniques can be employed:

    * A sysctl tunable to enable/disable the feature entirely
    * The size= mount option for hugetlbfs filesystems to limit pool size

    Performance
    ===========

    When contiguous memory is readily available, it is expected that the cost of
    dynamically resizing the pool will be small. This series has been performance
    tested with 'stream' to measure this cost.

    Stream (http://www.cs.virginia.edu/stream/) was linked with libhugetlbfs to
    enable remapping of the text and data/bss segments into huge pages.

    Stream with small array
    -----------------------
    Baseline: nr_hugepages = 0, No libhugetlbfs segment remapping
    Preallocated: nr_hugepages = 5, Text and data/bss remapping
    Dynamic: nr_hugepages = 0, Text and data/bss remapping

                   Rate (MB/s)
    Function      Baseline   Preallocated      Dynamic
    Copy:        4695.6266      5942.8371    5982.2287
    Scale:       4451.5776      5017.1419    5658.7843
    Add:         5815.8849      7927.7827    8119.3552
    Triad:       5949.4144      8527.6492    8110.6903

    Stream with large array
    -----------------------
    Baseline: nr_hugepages = 0, No libhugetlbfs segment remapping
    Preallocated: nr_hugepages = 67, Text and data/bss remapping
    Dynamic: nr_hugepages = 0, Text and data/bss remapping

                   Rate (MB/s)
    Function      Baseline   Preallocated      Dynamic
    Copy:        2227.8281      2544.2732    2546.4947
    Scale:       2136.3208      2430.7294    2421.2074
    Add:         2773.1449      4004.0021    3999.4331
    Triad:       2748.4502      3777.0109    3773.4970

    * All numbers are averages taken from 10 consecutive runs with a maximum
    standard deviation of 1.3 percent noted.

    This patch:

    Simply move update_and_free_page() so that it can be reused later in this
    patch series. The implementation is not changed.

    Signed-off-by: Adam Litke
    Acked-by: Andy Whitcroft
    Acked-by: Dave McCracken
    Acked-by: William Irwin
    Cc: David Gibson
    Cc: Ken Chen
    Cc: Badari Pulavarty
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Adam Litke