17 Jun, 2009

4 commits

  • Add page-types, a handy tool for querying page flags.

    It will expand some of the overloaded flags:
    PG_slob_free   = PG_private
    PG_slub_frozen = PG_active
    PG_slub_debug  = PG_error
    PG_readahead   = PG_reclaim

    and mask out obscure flags except in -raw mode:
    PG_reserved
    PG_mlocked
    PG_mappedtodisk
    PG_private
    PG_private_2
    PG_owner_priv_1
    PG_arch_1
    PG_uncached
    PG_compound* for non-hugeTLB pages
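
    As a rough illustration of the interface the tool builds on (a minimal
    userspace sketch, not the actual page-types source), one 64-bit flag
    word per page frame can be read from /proc/kpageflags:

        /* Dump the raw flag word for one PFN; needs root. */
        #include <stdio.h>
        #include <stdint.h>
        #include <stdlib.h>

        int main(int argc, char **argv)
        {
            unsigned long pfn = argc > 1 ? strtoul(argv[1], NULL, 0) : 0;
            uint64_t flags;
            FILE *f = fopen("/proc/kpageflags", "rb");

            if (!f) { perror("kpageflags"); return 1; }
            if (fseek(f, pfn * sizeof(flags), SEEK_SET) == 0 &&
                fread(&flags, sizeof(flags), 1, f) == 1)
                printf("pfn %lu flags 0x%016llx\n", pfn,
                       (unsigned long long)flags);
            fclose(f);
            return 0;
        }

    page-types itself then decodes, expands and masks these bits as
    described above.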

    [akpm@linux-foundation.org: fix warning]
    Signed-off-by: Wu Fengguang
    Cc: KOSAKI Motohiro
    Cc: Andi Kleen
    Cc: Matt Mackall
    Cc: Alexey Dobriyan
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wu Fengguang
     
  • Also add short descriptions for all of the 20 exported page flags.

    Signed-off-by: Wu Fengguang
    Cc: KOSAKI Motohiro
    Cc: Andi Kleen
    Cc: Matt Mackall
    Cc: Alexey Dobriyan
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wu Fengguang
     
  • Some bit ranges were inclusive and some not. Fix them to be consistently
    inclusive.

    Signed-off-by: Wu Fengguang
    Cc: KOSAKI Motohiro
    Cc: Andi Kleen
    Cc: Matt Mackall
    Cc: Alexey Dobriyan
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wu Fengguang
     
    ALLOC_WMARK_MIN, ALLOC_WMARK_LOW and ALLOC_WMARK_HIGH determine whether
    pages_min, pages_low or pages_high is used as the zone watermark when
    allocating the pages. Two branches in the allocator hotpath determine
    which watermark to use.

    This patch uses the flags as an array index into a watermark array that is
    indexed with WMARK_* defines accessed via helpers. All call sites that
    use zone->pages_* are updated to use the helpers for accessing the values
    and the array offsets for setting.
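
    A minimal userspace-compilable sketch of the scheme (names taken from
    the description above; the zone struct is reduced to the one relevant
    field, and the mask value is illustrative):

        #include <stdio.h>

        enum zone_watermarks { WMARK_MIN, WMARK_LOW, WMARK_HIGH, NR_WMARK };

        #define ALLOC_WMARK_MIN   WMARK_MIN
        #define ALLOC_WMARK_LOW   WMARK_LOW
        #define ALLOC_WMARK_HIGH  WMARK_HIGH
        #define ALLOC_WMARK_MASK  0x3   /* low bits select the watermark */

        struct zone { unsigned long watermark[NR_WMARK]; };

        #define low_wmark_pages(z) ((z)->watermark[WMARK_LOW])

        int main(void)
        {
            struct zone z = { .watermark = { 32, 64, 96 } };
            int alloc_flags = ALLOC_WMARK_LOW;  /* set by the caller */

            /* the two hotpath branches collapse into one lookup */
            unsigned long mark = z.watermark[alloc_flags & ALLOC_WMARK_MASK];
            printf("checking against %lu pages (low: %lu)\n",
                   mark, low_wmark_pages(&z));
            return 0;
        }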

    Signed-off-by: Mel Gorman
    Reviewed-by: Christoph Lameter
    Cc: KOSAKI Motohiro
    Cc: Pekka Enberg
    Cc: Peter Zijlstra
    Cc: Nick Piggin
    Cc: Dave Hansen
    Cc: Lee Schermerhorn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

09 Apr, 2009

1 commit

  • Move kmemtrace.txt, tracepoints.txt, ftrace.txt and mmiotrace.txt to
    the new trace/ directory.

    I didn't find any references to these documents in either the source
    files or the documentation, so no extra work needs to be done.

    Signed-off-by: Li Zefan
    Acked-by: Pekka Paalanen
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    Cc: Mathieu Desnoyers
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Li Zefan
     

06 Apr, 2009

1 commit

  • * 'tracing-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (413 commits)
    tracing, net: fix net tree and tracing tree merge interaction
    tracing, powerpc: fix powerpc tree and tracing tree interaction
    ring-buffer: do not remove reader page from list on ring buffer free
    function-graph: allow unregistering twice
    trace: make argument 'mem' of trace_seq_putmem() const
    tracing: add missing 'extern' keywords to trace_output.h
    tracing: provide trace_seq_reserve()
    blktrace: print out BLK_TN_MESSAGE properly
    blktrace: extract duplicate code
    blktrace: fix memory leak when freeing struct blk_io_trace
    blktrace: fix blk_probes_ref chaos
    blktrace: make classic output more classic
    blktrace: fix off-by-one bug
    blktrace: fix the original blktrace
    blktrace: fix a race when creating blk_tree_root in debugfs
    blktrace: fix timestamp in binary output
    tracing, Text Edit Lock: cleanup
    tracing: filter fix for TRACE_EVENT_FORMAT events
    ftrace: Using FTRACE_WARN_ON() to check "freed record" in ftrace_release()
    x86: kretprobe-booster interrupt emulation code fix
    ...

    Fix up trivial conflicts in
    arch/parisc/include/asm/ftrace.h
    include/linux/memory.h
    kernel/extable.c
    kernel/module.c

    Linus Torvalds
     

07 Jan, 2009

1 commit

  • An unfortunate feature of the Unevictable LRU work was that reclaiming an
    anonymous page involved an extra scan through the anon_vma: to check that
    the page is evictable before allocating swap, because the swap could not
    be freed reliably soon afterwards.

    Now that try_to_free_swap() has replaced remove_exclusive_swap_page(),
    that's no longer an issue: remove the try_to_munlock() call from
    shrink_page_list(), leaving it to try_to_unmap() to discover if the
    page is one to be culled to the unevictable list - in which case then
    try_to_free_swap().

    Update unevictable-lru.txt to remove comments on the try_to_munlock() in
    shrink_page_list(), and shorten some lines over 80 columns.

    Signed-off-by: Hugh Dickins
    Cc: Lee Schermerhorn
    Acked-by: Rik van Riel
    Cc: Nick Piggin
    Cc: KAMEZAWA Hiroyuki
    Cc: Robin Holt
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     

15 Aug, 2008

1 commit

  • With man-pages-3.07, the numa_maps documentation home is now proc(5), so
    the reference in Documentation/vm/page_migration needs updating.
    (Cliff/Lee are removing numa_maps.5 from the numactl package.) Also, the
    download location for the numactl package changed a while back. This
    patch fixes both things, as well as a typo (provided-->provides).

    Signed-off-by: Michael Kerrisk
    Cc: Cliff Wickman
    Cc: Lee Schermerhorn
    Cc: Randy Dunlap
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michael Kerrisk
     

13 Aug, 2008

1 commit

  • Currently source files in the Documentation/ sub-dir can easily bit-rot
    since they are not generally buildable, either because they are hidden in
    text files or because there are no Makefile rules for them. This needs to
    be fixed so that the source files remain usable and good examples of code
    instead of bad examples.

    Add the ability to build source files that are in the Documentation/ dir.
    Add to Kconfig as "BUILD_DOCSRC" config symbol.

    Use "CONFIG_BUILD_DOCSRC=1 make ..." to build objects from the
    Documentation/ sources. Or enable BUILD_DOCSRC in the *config system.
    However, this symbol depends on HEADERS_CHECK since the header files need
    to be installed (for userspace builds).

    Built (using cross-tools) for x86-64, i386, alpha, ia64, sparc32,
    sparc64, powerpc, sh, m68k, & mips.

    Signed-off-by: Randy Dunlap
    Reviewed-by: Sam Ravnborg
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Randy Dunlap
     

25 Jul, 2008

1 commit

  • Provide new hugepages user APIs that are more suited to multiple hstates
    in sysfs. There is a new directory, /sys/kernel/hugepages. Underneath
    that directory there will be a directory per-supported hugepage size,
    e.g.:

    /sys/kernel/hugepages/hugepages-64kB
    /sys/kernel/hugepages/hugepages-16384kB
    /sys/kernel/hugepages/hugepages-16777216kB

    corresponding to 64k, 16m and 16g respectively. Within each
    hugepages-size directory there are a number of files, corresponding to the
    tracked counters in the hstate, e.g.:

    /sys/kernel/hugepages/hugepages-64kB/nr_hugepages
    /sys/kernel/hugepages/hugepages-64kB/nr_overcommit_hugepages
    /sys/kernel/hugepages/hugepages-64kB/free_hugepages
    /sys/kernel/hugepages/hugepages-64kB/resv_hugepages
    /sys/kernel/hugepages/hugepages-64kB/surplus_hugepages

    Of these files, the first two are read-write and the latter three are
    read-only. The size of the hugepage being manipulated is trivially
    deducible from the enclosing directory and is always expressed in kB (to
    match meminfo).
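
    Under the later fixup noted below (hanging the directory off
    /sys/kernel/mm rather than /sys/kernel), the counters can be read like
    any other sysfs file. A small sketch, assuming an x86 system with the
    2048kB hstate:

        #include <stdio.h>

        int main(void)
        {
            const char *path =
                "/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages";
            unsigned long nr;
            FILE *f = fopen(path, "r");

            if (f && fscanf(f, "%lu", &nr) == 1)
                printf("nr_hugepages = %lu\n", nr);
            if (f)
                fclose(f);
            return 0;
        }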

    [dave@linux.vnet.ibm.com: fix build]
    [nacc@us.ibm.com: hugetlb: hang off of /sys/kernel/mm rather than /sys/kernel]
    [nacc@us.ibm.com: hugetlb: remove CONFIG_SYSFS dependency]
    Acked-by: Greg Kroah-Hartman
    Signed-off-by: Nishanth Aravamudan
    Signed-off-by: Nick Piggin
    Cc: Dave Hansen
    Signed-off-by: Nishanth Aravamudan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nishanth Aravamudan
     

05 Jul, 2008

1 commit

  • Remove all clameter@sgi.com addresses from the kernel tree since they will
    become invalid on June 27th. Change my maintainer email address for the
    slab allocators to cl@linux-foundation.org (which will be the new email
    address for the future).

    Signed-off-by: Christoph Lameter
    Cc: Pekka Enberg
    Cc: Stephen Rothwell
    Cc: Matt Mackall
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

07 Jun, 2008

1 commit

  • Just a quick explanation of the pagemap interface from a userspace point
    of view, and an example of how to use it (in English, not code).
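
    For reference, the interface being documented looks roughly like this
    from userspace (a minimal sketch based on the pagemap format: bit 63 is
    page-present, bits 0-54 hold the PFN):

        #include <stdio.h>
        #include <stdint.h>
        #include <unistd.h>

        int main(void)
        {
            long psize = sysconf(_SC_PAGESIZE);
            int probe = 42;                 /* any mapped page of ours */
            uintptr_t vaddr = (uintptr_t)&probe;
            uint64_t entry;
            FILE *f = fopen("/proc/self/pagemap", "rb");

            if (!f) { perror("pagemap"); return 1; }
            fseek(f, (long)(vaddr / psize) * sizeof(entry), SEEK_SET);
            if (fread(&entry, sizeof(entry), 1, f) == 1 && (entry >> 63))
                printf("vaddr %#lx -> pfn %#llx\n", (unsigned long)vaddr,
                       (unsigned long long)(entry & ((1ULL << 55) - 1)));
            fclose(f);
            return 0;
        }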

    Signed-off-by: Thomas Tuttle
    Cc: Matt Mackall
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Thomas Tuttle
     

29 Apr, 2008

1 commit

  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6:
    slub: pack objects denser
    slub: Calculate min_objects based on number of processors.
    slub: Drop DEFAULT_MAX_ORDER / DEFAULT_MIN_OBJECTS
    slub: Simplify any_slab_object checks
    slub: Make the order configurable for each slab cache
    slub: Drop fallback to page allocator method
    slub: Fallback to minimal order during slab page allocation
    slub: Update statistics handling for variable order slabs
    slub: Add kmem_cache_order_objects struct
    slub: for_each_object must be passed the number of objects in a slab
    slub: Store max number of objects in the page struct.
    slub: Dump list of objects not freed on kmem_cache_close()
    slub: free_list() cleanup
    slub: improve kmem_cache_destroy() error message
    slob: fix bug - when slob allocates "struct kmem_cache", it does not force alignment.

    Linus Torvalds
     

28 Apr, 2008

7 commits

  • Now that we're using "preferred local" policy for system default, we need to
    make this as fast as possible. Because of the variable size of the mempolicy
    structure [based on size of nodemasks], the preferred_node may be in a
    different cacheline from the mode. This can result in accessing an extra
    cacheline in the normal case of system default policy. Suspect this is the
    cause of an observed 2-3% slowdown in page fault testing relative to kernel
    without this patch series.

    To alleviate this, use an internal mode flag, MPOL_F_LOCAL in the mempolicy
    flags member which is guaranteed [?] to be in the same cacheline as the mode
    itself.
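
    A self-contained sketch of that check (field layout and flag value are
    illustrative, not the kernel's exact definitions):

        #include <stdio.h>

        #define MPOL_F_LOCAL (1 << 1)   /* illustrative bit position */

        struct mempolicy {
            unsigned short mode;
            unsigned short flags;       /* same cacheline as mode */
            /* nodemask etc. may spill into further cachelines */
            int preferred_node;
        };

        static int mpol_local_preferred(const struct mempolicy *pol)
        {
            /* old check touched pol->preferred_node (possibly another
             * cacheline); the new one stays next to pol->mode */
            return pol->flags & MPOL_F_LOCAL;
        }

        int main(void)
        {
            struct mempolicy pol = { .flags = MPOL_F_LOCAL,
                                     .preferred_node = -1 };
            printf("local? %d\n", mpol_local_preferred(&pol) != 0);
            return 0;
        }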

    Verified that reworked mempolicy now performs slightly better on 25-rc8-mm1
    for both anon and shmem segments with system default and vma [preferred local]
    policy.

    Signed-off-by: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Mel Gorman
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • Currently, when one specifies MPOL_DEFAULT via a NUMA memory policy API
    [set_mempolicy(), mbind() and internal versions], the kernel simply installs a
    NULL struct mempolicy pointer in the appropriate context: task policy, vma
    policy, or shared policy. This causes any use of that policy to "fall back"
    to the next most specific policy scope.

    The only use of MPOL_DEFAULT to mean "local allocation" is in the system
    default policy. This requires extra checks/cases for MPOL_DEFAULT in many
    mempolicy.c functions.

    There is another, "preferred" way to specify local allocation via the APIs.
    That is using the MPOL_PREFERRED policy mode with an empty nodemask.
    Internally, the empty nodemask gets converted to a preferred_node id of '-1'.
    All internal usage of MPOL_PREFERRED will convert the '-1' to the id of the
    node local to the cpu where the allocation occurs.
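
    From userspace the two forms look like this (sketch; needs libnuma's
    numaif.h and -lnuma, or a raw syscall):

        #include <numaif.h>
        #include <stdio.h>

        int main(void)
        {
            /* "preferred" way to request local allocation: the
             * preferred node is derived from the allocating CPU */
            if (set_mempolicy(MPOL_PREFERRED, NULL, 0))
                perror("set_mempolicy(MPOL_PREFERRED)");

            /* MPOL_DEFAULT: fall back to the next policy scope */
            if (set_mempolicy(MPOL_DEFAULT, NULL, 0))
                perror("set_mempolicy(MPOL_DEFAULT)");
            return 0;
        }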

    System default policy, except during boot, is hard-coded to "local
    allocation". By using the MPOL_PREFERRED mode with a negative value of
    preferred node for system default policy, MPOL_DEFAULT will never occur in the
    'policy' member of a struct mempolicy. Thus, we can remove all checks for
    MPOL_DEFAULT when converting policy to a node id/zonelist in the allocation
    paths.

    In slab_node() return local node id when policy pointer is NULL. No need to
    set a pol value to take the switch default. Replace switch default with
    BUG()--i.e., shouldn't happen.

    With this patch MPOL_DEFAULT is only used in the APIs, including internal
    calls to do_set_mempolicy() and in the display of policy in
    /proc/<pid>/numa_maps. It always means "fall back" to the next most
    specific policy scope. This simplifies the description of memory policies
    quite a bit, with no visible change in behavior.

    get_mempolicy() continues to return MPOL_DEFAULT and an empty nodemask when
    the requested policy [task or vma/shared] is NULL. These are the values one
    would supply via set_mempolicy() or mbind() to achieve that condition--default
    behavior.

    This patch updates Documentation to reflect this change.

    Signed-off-by: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Mel Gorman
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • After further discussion with Christoph Lameter, it has become clear that my
    earlier attempts to clean up the mempolicy reference counting were a bit of
    overkill in some areas, resulting in superfluous ref/unref in what are usually
    fast paths. In other areas, further inspection reveals that I botched the
    unref for interleave policies.

    A separate patch, suitable for upstream/stable trees, fixes up the known
    errors in the previous attempt to fix reference counting.

    This patch reworks the memory policy referencing counting and, one hopes,
    simplifies the code. Maybe I'll get it right this time.

    See the update to the numa_memory_policy.txt document for a discussion of
    memory policy reference counting that motivates this patch.

    Summary:

    Lookup of mempolicy, based on (vma, address) need only add a reference for
    shared policy, and we need only unref the policy when finished for shared
    policies. So, this patch backs out all of the unneeded extra reference
    counting added by my previous attempt. It then unrefs only shared policies
    when we're finished with them, using the mpol_cond_put() [conditional put]
    helper function introduced by this patch.
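
    A self-contained sketch of the conditional put (simplified from the
    description above; not the kernel source verbatim):

        #include <stdio.h>

        #define MPOL_F_SHARED (1 << 0)      /* illustrative flag bit */

        struct mempolicy {
            int refcnt;
            unsigned short flags;
        };

        static void __mpol_put(struct mempolicy *pol)
        {
            if (--pol->refcnt == 0)
                printf("policy freed\n"); /* stand-in for real freeing */
        }

        /* no-op for vma/task policies, real put for shared policies */
        static void mpol_cond_put(struct mempolicy *pol)
        {
            if (pol && (pol->flags & MPOL_F_SHARED))
                __mpol_put(pol);
        }

        int main(void)
        {
            struct mempolicy shared = { .refcnt = 1,
                                        .flags = MPOL_F_SHARED };
            struct mempolicy vma_pol = { .refcnt = 1, .flags = 0 };

            mpol_cond_put(&vma_pol);  /* fast path: nothing to do */
            mpol_cond_put(&shared);   /* drops the lookup reference */
            return 0;
        }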

    Note that shmem_swapin() calls read_swap_cache_async() with a dummy vma
    containing just the policy. read_swap_cache_async() can call alloc_page_vma()
    multiple times, so we can't let alloc_page_vma() unref the shared policy in
    this case. To avoid this, we make a copy of any non-null shared policy and
    remove the MPOL_F_SHARED flag from the copy. This copy occurs before reading
    a page [or multiple pages] from swap, so the overhead should not be an issue
    here.

    I introduced a new static inline function "mpol_cond_copy()" to copy the
    shared policy to an on-stack policy and remove the flags that would require a
    conditional free. The current implementation of mpol_cond_copy() assumes that
    the struct mempolicy contains no pointers to dynamically allocated structures
    that must be duplicated or reference counted during copy.

    Signed-off-by: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Mel Gorman
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • The terms 'policy' and 'mode' are both used in various places to describe the
    semantics of the value stored in the 'policy' member of struct mempolicy.
    Furthermore, the term 'policy' is used to refer to that member, to the entire
    struct mempolicy and to the more abstract concept of the tuple consisting of a
    "mode" and an optional node or set of nodes. Recently, we have added "mode
    flags" that are passed in the upper bits of the 'mode' [or sometimes,
    'policy'] member of the numa APIs.

    I'd like to resolve this confusion, which perhaps only exists in my mind, by
    renaming the 'policy' member to 'mode' throughout, and fixing up the
    Documentation. Man pages will be updated separately.

    Signed-off-by: Lee Schermerhorn
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Mel Gorman
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Lee Schermerhorn
     
  • MPOL_F_STATIC_NODES and MPOL_F_RELATIVE_NODES don't mean anything for
    MPOL_PREFERRED policies that were created with an empty nodemask (for purely
    local allocations). They'll never be invalidated when a task's allowed
    mems change, and they never need to be rebound relative to a cpuset's
    placement.

    Also fixes a bug identified by Lee Schermerhorn that disallowed empty
    nodemasks to be passed to MPOL_PREFERRED to specify local allocations. [A
    different, somewhat incomplete, patch already existed in 25-rc5-mm1.]

    Cc: Paul Jackson
    Cc: Christoph Lameter
    Cc: Lee Schermerhorn
    Cc: Andi Kleen
    Cc: Randy Dunlap
    Signed-off-by: Lee Schermerhorn
    Signed-off-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • Updates Documentation/vm/numa_memory_policy.txt and
    Documentation/filesystems/tmpfs.txt to describe optional mempolicy mode flags.

    Cc: Christoph Lameter
    Cc: Lee Schermerhorn
    Cc: Andi Kleen
    Cc: Randy Dunlap
    Signed-off-by: David Rientjes
    Signed-off-by: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     
  • The MPOL_BIND policy creates a zonelist that is used for allocations
    controlled by that mempolicy. As the per-node zonelist is already being
    filtered based on a zone id, this patch adds a version of __alloc_pages() that
    takes a nodemask for further filtering. This eliminates the need for
    MPOL_BIND to create a custom zonelist.

    A positive benefit of this is that allocations using MPOL_BIND now use the
    local node's distance-ordered zonelist instead of a custom node-id-ordered
    zonelist. I.e., pages will be allocated from the closest allowed node with
    available memory.
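
    The filtering idea in miniature (types and the bitmask nodemask are
    simplified stand-ins for the kernel's own):

        #include <stdio.h>

        struct zone { int node; int free_pages; };

        /* walk a distance-ordered zonelist, skipping disallowed nodes */
        static struct zone *first_usable_zone(struct zone *zl, int n,
                                              unsigned long nodemask,
                                              int min_free)
        {
            for (int i = 0; i < n; i++) {
                if (!(nodemask & (1UL << zl[i].node)))
                    continue;             /* node not in MPOL_BIND set */
                if (zl[i].free_pages >= min_free)
                    return &zl[i];        /* closest allowed zone wins */
            }
            return NULL;
        }

        int main(void)
        {
            /* distance-ordered from the local node's point of view */
            struct zone zl[] = { {0, 10}, {1, 500}, {2, 800} };
            unsigned long bind = (1UL << 1) | (1UL << 2); /* bind {1,2} */

            struct zone *z = first_usable_zone(zl, 3, bind, 100);
            if (z)
                printf("allocating from node %d\n", z->node);
            return 0;
        }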

    [Lee.Schermerhorn@hp.com: Mempolicy: update stale documentation and comments]
    [Lee.Schermerhorn@hp.com: Mempolicy: make dequeue_huge_page_vma() obey MPOL_BIND nodemask]
    [Lee.Schermerhorn@hp.com: Mempolicy: make dequeue_huge_page_vma() obey MPOL_BIND nodemask rework]
    Signed-off-by: Mel Gorman
    Acked-by: Christoph Lameter
    Signed-off-by: Lee Schermerhorn
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Cc: Hugh Dickins
    Cc: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

27 Apr, 2008

1 commit

  • Change the statistics to consider that slabs of the same slabcache
    can have different numbers of objects in them, since they may be of
    different order.

    Provide a new sysfs field

    total_objects

    which shows the total objects that the allocated slabs of a slabcache
    could hold.

    Add a max field that holds the largest slab order that was ever used
    for a slab cache.
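
    The new fields sit alongside the existing per-cache sysfs files and can
    be read the usual way (sketch; the cache name is just an example):

        #include <stdio.h>

        int main(void)
        {
            unsigned long total;
            FILE *f = fopen("/sys/kernel/slab/dentry/total_objects", "r");

            if (f && fscanf(f, "%lu", &total) == 1)
                printf("total_objects = %lu\n", total);
            if (f)
                fclose(f);
            return 0;
        }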

    Signed-off-by: Christoph Lameter
    Signed-off-by: Pekka Enberg

    Christoph Lameter
     

16 Apr, 2008

1 commit

  • As shown by Gurudas Pai recently, we can put hugepages into the surplus
    state (by echo 0 > /proc/sys/vm/nr_hugepages), even when
    /proc/sys/vm/nr_overcommit_hugepages is 0. This is actually correct, to
    allow the original goal (shrink the static pool to 0) to succeed (we are
    converting hugepages to surplus because they are in use). However, the
    documentation does not accurately reflect this case. Update it.

    Signed-off-by: Nishanth Aravamudan
    Acked-by: Andy Whitcroft
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nishanth Aravamudan
     

22 Feb, 2008

1 commit

  • I keep running upstream and mm kernels and the location of the slab
    directory is different since upstream still uses /sys/slab. This patch
    makes slabinfo check /sys/slab if /sys/kernel/slab is not there. Makes
    slabinfo work on any kernel.
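
    The fallback amounts to something like this (hypothetical helper, not
    slabinfo's exact code):

        #include <stdio.h>
        #include <sys/stat.h>

        static const char *slab_dir(void)
        {
            struct stat st;

            if (stat("/sys/kernel/slab", &st) == 0 && S_ISDIR(st.st_mode))
                return "/sys/kernel/slab";
            return "/sys/slab";   /* older kernels */
        }

        int main(void)
        {
            printf("using %s\n", slab_dir());
            return 0;
        }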

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Greg Kroah-Hartman

    Christoph Lameter
     

08 Feb, 2008

1 commit

    The statistics provided here allow the monitoring of allocator behavior
    but at the cost of some (minimal) loss of performance. Counters are
    placed in SLUB's per-cpu data structure. The statistics may extend the
    per-cpu structure beyond one cacheline, which will increase the cache
    footprint of SLUB.

    There is a compile option to enable/disable the inclusion of the
    runtime statistics, and it is off by default.

    The slabinfo tool is enhanced to support these statistics via two options:

    -D Switches the line of information displayed for a slab from size
    mode to activity mode.

    -A Sorts the slabs displayed by activity. This allows the display of
    the slabs most important to the performance of a certain load.

    -r Report option will report detailed statistics on a single slab
    cache (implied when a cache name is given).

    Example (tbench load):

    slabinfo -AD   -> shows the most active slabs

    Name                 Objects     Alloc      Free %Fast
    skbuff_fclone_cache       33 111953835 111953835 99 99
    :0000192                2666   5283688   5281047 99 99
    :0001024                 849   5247230   5246389 83 83
    vm_area_struct          1349    119642    118355 91 22
    :0004096                  15     66753     66751 98 98
    :0000064                2067     25297     23383 98 78
    dentry                 10259     28635     18464 91 45
    :0000080               11004     18950      8089 98 98
    :0000096                1703     12358     10784 99 98
    :0000128                 762     10582      9875 94 18
    :0000512                 184      9807      9647 95 81
    :0002048                 479      9669      9195 83 65
    anon_vma                 777      9461      9002 99 71
    kmalloc-8               6492      9981      5624 99 97
    :0000768                 258      7174      6931 58 15

    So the skbuff_fclone_cache is of highest importance for the tbench load.
    Pretty high load on the 192-sized slab. Look for the aliases:

    slabinfo -a | grep 000192
    :0000192 -r option implied if cache name is mentioned

    .... Usual output ...

    Slab Perf Counter        Alloc      Free %Al %Fr
    --------------------------------------------------
    Fastpath             111953360 111946981  99  99
    Slowpath                  1044      7423   0   0
    Page Alloc                 272       264   0   0
    Add partial                 25       325   0   0
    Remove partial              86       264   0   0
    RemoteObj/SlabFrozen       350      4832   0   0
    Total                111954404 111954404

    Flushes 49 Refill 0
    Deactivate Full=325(92%) Empty=0(0%) ToHead=24(6%) ToTail=1(0%)

    Looks good because the fastpath is overwhelmingly taken.

    skbuff_head_cache:

    Slab Perf Counter        Alloc      Free %Al %Fr
    --------------------------------------------------
    Fastpath               5297262   5259882  99  99
    Slowpath                  4477     39586   0   0
    Page Alloc                 937       824   0   0
    Add partial                  0      2515   0   0
    Remove partial            1691       824   0   0
    RemoteObj/SlabFrozen      2621      9684   0   0
    Total                  5301739   5299468

    Deactivate Full=2620(100%) Empty=0(0%) ToHead=0(0%) ToTail=0(0%)

    Descriptions of the output:

    Total: The total number of allocation and frees that occurred for a
    slab

    Fastpath: The number of allocations/frees that used the fastpath.

    Slowpath: Other allocations

    Page Alloc: Number of calls to the page allocator as a result of slowpath
    processing

    Add Partial: Number of slabs added to the partial list through free or
    alloc (occurs during cpuslab flushes)

    Remove Partial: Number of slabs removed from the partial list as a result of
    allocations retrieving a partial slab or by a free freeing
    the last object of a slab.

    RemoteObj/Froz: How many times remotely freed objects were encountered
    when a slab was about to be deactivated. Frozen: how many times
    free was able to skip list processing because the slab was in
    use as the cpuslab of another processor.

    Flushes: Number of times the cpuslab was flushed on request
    (kmem_cache_shrink, may result from races in __slab_alloc)

    Refill: Number of times we were able to refill the cpuslab from
    remotely freed objects for the same slab.

    Deactivate: Statistics how slabs were deactivated. Shows how they were
    put onto the partial list.

    In general the fastpath is very good. Slowpath without partial list
    processing is also desirable. Any touching of the partial list uses
    node-specific locks, which may potentially cause list_lock contention.

    Signed-off-by: Christoph Lameter

    Christoph Lameter
     

18 Dec, 2007

1 commit

    The hugetlb documentation has gotten a bit out of sync with the current
    code. Update the sysctl documentation to refer to
    Documentation/vm/hugetlbpage.txt, and update that file to reflect the
    current state of affairs (with the newer-named sysctl in place).

    Signed-off-by: Nishanth Aravamudan
    Acked-by: Adam Litke
    Cc: William Lee Irwin III
    Cc: Dave Hansen
    Cc: David Gibson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nishanth Aravamudan
     

17 Oct, 2007

1 commit

  • This patch does the following cleanups for Documentation/vm/slabinfo.c:

    - Fix two memory leaks;
    - Constify some char pointers;
    - Use snprintf instead of sprintf to guard against buffer overflow;
    - Fix some indentation;
    - Other little improvements.
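
    The sprintf change, in isolation (buffer and contents here are made up
    for illustration):

        #include <stdio.h>

        int main(void)
        {
            char buf[16];
            const char *name = "a-rather-long-slab-cache-name";

            /* sprintf(buf, "%s", name) would overrun buf; snprintf
             * truncates safely within sizeof(buf) */
            snprintf(buf, sizeof(buf), "%s", name);
            printf("%s\n", buf);
            return 0;
        }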

    Acked-by: Christoph Lameter
    Signed-off-by: WANG Cong
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    WANG Cong