22 May, 2007

2 commits

  • * git://git.kernel.org/pub/scm/linux/kernel/git/sam/kbuild-fix:
    mm/slab: fix section mismatch warning
    mm: fix section mismatch warnings
    init/main: use __init_refok to fix section mismatch
    kbuild: introduce __init_refok/__initdata_refok to supress section mismatch warnings
    all-archs: consolidate .data section definition in asm-generic
    all-archs: consolidate .text section definition in asm-generic
    kbuild: add "Section mismatch" warning whitelist for powerpc
    kbuild: make better section mismatch reports on i386, arm and mips
    kbuild: make modpost section warnings clearer
    kconfig: search harder for curses library in check-lxdialog.sh
    kbuild: include limits.h in sumversion.c for PATH_MAX
    powerpc: Fix the MODALIAS generation in modpost for of devices

    Linus Torvalds
     
  • The first thing mm.h does is include sched.h, solely for the can_do_mlock()
    inline function, which dereferences "current" internally. By dealing with
    can_do_mlock(), mm.h can be detached from sched.h, which is good; see
    below for why.

    This patch
    a) removes unconditional inclusion of sched.h from mm.h
    b) makes can_do_mlock() normal function in mm/mlock.c
    c) exports can_do_mlock() to not break compilation
    d) adds sched.h inclusions back to files that were getting it indirectly.
    e) adds less bloated headers to some files (asm/signal.h, jiffies.h) that were
    getting them indirectly

    Net result is:
    a) mm.h users would get less code to open, read, preprocess, parse, ... if
    they don't need sched.h
    b) sched.h stops being a dependency for a significant number of files:
    on x86_64 allmodconfig touching sched.h results in recompile of 4083 files,
    after patch it's only 3744 (-8.3%).
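
    As a rough illustration of steps (b) and (c) above, the out-of-lined helper
    in mm/mlock.c amounts to the usual capability/rlimit check plus an export
    (a sketch; the authoritative body is whatever landed in mm/mlock.c):

        /* previously a static inline in mm.h; now a normal, exported function */
        int can_do_mlock(void)
        {
                if (capable(CAP_IPC_LOCK))
                        return 1;
                if (current->signal->rlim[RLIMIT_MEMLOCK].rlim_cur != 0)
                        return 1;
                return 0;
        }
        EXPORT_SYMBOL(can_do_mlock);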

    Cross-compile tested on

    all arm defconfigs, all mips defconfigs, all powerpc defconfigs,
    alpha alpha-up
    arm
    i386 i386-up i386-defconfig i386-allnoconfig
    ia64 ia64-up
    m68k
    mips
    parisc parisc-up
    powerpc powerpc-up
    s390 s390-up
    sparc sparc-up
    sparc64 sparc64-up
    um-x86_64
    x86_64 x86_64-up x86_64-defconfig x86_64-allnoconfig

    as well as my two usual configs.

    Signed-off-by: Alexey Dobriyan
    Signed-off-by: Linus Torvalds

    Alexey Dobriyan
     

19 May, 2007

2 commits


17 May, 2007

12 commits

  • Re-introduce rmap verification patches that Hugh removed when he removed
    PG_map_lock. PG_map_lock actually isn't needed to synchronise access to
    anonymous pages, because PG_locked and PTL together already do.

    These checks were important in discovering and fixing a rare rmap corruption
    in SLES9.

    Signed-off-by: Nick Piggin
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • __vunmap doesn't seem to be used outside of mm/vmalloc.c, and has
    no prototype in any header so let's make it static

    Signed-off-by: Benjamin Herrenschmidt
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Benjamin Herrenschmidt
     
  • Currently we have a maze of configuration variables that determine the
    maximum slab size. Worst of all it seems to vary between SLAB and SLUB.

    So define a common maximum size for kmalloc. For convenience's sake we use
    the maximum size ever supported, which is 32 MB. We cap it at a lower
    value if MAX_ORDER does not allow such large allocations.

    For many architectures this patch will have the effect of adding large
    kmalloc sizes. x86_64 adds 5 new kmalloc sizes. So a small amount of
    memory will be needed for these caches (contemporary SLAB has dynamically
    sizeable node and cpu structures, so the waste is less than in the past).

    Most architectures will then be able to allocate objects with sizes up to
    MAX_ORDER. We have had repeated breakage (in fact whenever we doubled the
    number of supported processors) on IA64 because one or the other struct
    grew beyond what the slab allocators supported. This will avoid future
    issues and f.e. avoid fixes for 2k and 4k cpu support.

    CONFIG_LARGE_ALLOCS is no longer necessary so drop it.

    It fixes sparc64 with SLAB.
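
    A sketch of how such a common cap can be expressed in a shared header (the
    macro names below follow the description; treat them as illustrative):

        /* 32 MB = 2^25 bytes, clamped to what MAX_ORDER permits on this arch */
        #define KMALLOC_SHIFT_HIGH ((MAX_ORDER + PAGE_SHIFT - 1) <= 25 ? \
                                        (MAX_ORDER + PAGE_SHIFT - 1) : 25)

        #define KMALLOC_MAX_SIZE        (1UL << KMALLOC_SHIFT_HIGH)
        #define KMALLOC_MAX_ORDER       (KMALLOC_SHIFT_HIGH - PAGE_SHIFT)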

    Signed-off-by: Christoph Lameter
    Signed-off-by: "David S. Miller"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Consolidate functionality into the #ifdef section.

    Extract tracing into one subroutine.

    Move object debug processing into the #ifdef section so that the
    code in __slab_alloc and __slab_free becomes minimal.

    Reduce the number of functions we need to provide stubs for in the !SLUB_DEBUG case.
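
    The resulting shape, sketched with illustrative names and the bodies
    reduced to comments: the debug work is gathered into a few helpers inside
    #ifdef CONFIG_SLUB_DEBUG, so the !SLUB_DEBUG build only needs one trivial
    stub per helper.

        #ifdef CONFIG_SLUB_DEBUG
        static int alloc_debug_processing(struct kmem_cache *s, struct page *page,
                                          void *object, void *addr)
        {
                /* object sanity checks, tracing and alloc tracking live here,
                   keeping __slab_alloc() itself minimal */
                return 1;
        }
        #else
        static inline int alloc_debug_processing(struct kmem_cache *s,
                                                 struct page *page,
                                                 void *object, void *addr)
        {
                return 1;       /* compiles away when debugging is not built */
        }
        #endif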

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • SLAB_CTOR_CONSTRUCTOR is always specified. No point in checking it.

    Signed-off-by: Christoph Lameter
    Cc: David Howells
    Cc: Jens Axboe
    Cc: Steven French
    Cc: Michael Halcrow
    Cc: OGAWA Hirofumi
    Cc: Miklos Szeredi
    Cc: Steven Whitehouse
    Cc: Roman Zippel
    Cc: David Woodhouse
    Cc: Dave Kleikamp
    Cc: Trond Myklebust
    Cc: "J. Bruce Fields"
    Cc: Anton Altaparmakov
    Cc: Mark Fasheh
    Cc: Paul Mackerras
    Cc: Christoph Hellwig
    Cc: Jan Kara
    Cc: David Chinner
    Cc: "David S. Miller"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Atomic operations are not necessary when handling the flags in SLUB, since
    neither of the two flags used by SLUB is updated in a racy way. Flag
    updates are done either during slab creation or destruction, or under
    slab_lock. Some of these flags do not have the non-atomic variants that we
    need, so define our own.
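
    The pattern is simply a pair of __set_bit()/__clear_bit() based wrappers
    alongside the atomic ones. Which page flag bit SLUB reuses is an
    implementation detail, so PG_error below is only an assumption:

        /* illustrative non-atomic helpers for a per-slab flag in page->flags */
        static inline void __SetSlabDebug(struct page *page)
        {
                /* safe: only at slab creation/destruction or under slab_lock */
                __set_bit(PG_error, &page->flags);
        }

        static inline void __ClearSlabDebug(struct page *page)
        {
                __clear_bit(PG_error, &page->flags);
        }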

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • slub warns on this, and we're working on making kmalloc(0) return NULL.
    Let's make slab warn as well so our testers detect such callers more
    rapidly.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Use inline functions to access the per cpu bit. Introduce the notion of
    "freezing" a slab to make things more understandable.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • There is no user of destructors left. There is no reason why we should keep
    checking for destructors calls in the slab allocators.

    The RFC for this patch was discussed at
    http://marc.info/?l=linux-kernel&m=117882364330705&w=2

    Destructors were mainly used for list management which required them to take a
    spinlock. Taking a spinlock in a destructor is a bit risky since the slab
    allocators may run the destructors anytime they decide a slab is no longer
    needed.

    This patch drops destructor support. Any attempt to use a destructor will BUG().

    Acked-by: Pekka Enberg
    Acked-by: Paul Mundt
    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • The SLOB allocator should implement SLAB_DESTROY_BY_RCU correctly, because
    even on UP, RCU freeing semantics are not equivalent to simply freeing
    immediately. This also allows SLOB to be used on SMP.
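
    The usual shape of such an implementation, with illustrative names: a small
    rcu_head is tucked into the freed object and the memory is only handed back
    from the RCU callback, after a grace period has elapsed.

        struct slob_rcu {
                struct rcu_head head;
                int size;
        };

        static void slob_rcu_free(struct rcu_head *head)
        {
                struct slob_rcu *slob_rcu = container_of(head, struct slob_rcu, head);

                /* only now is it safe to really release the memory */
                do_real_slob_free(slob_rcu, slob_rcu->size);    /* hypothetical helper */
        }

        /* in kmem_cache_free(), when the cache has SLAB_DESTROY_BY_RCU set: */
        static void free_deferred(void *b, int size)
        {
                struct slob_rcu *slob_rcu = b;

                slob_rcu->size = size;
                call_rcu(&slob_rcu->head, slob_rcu_free);
        }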

    Signed-off-by: Nick Piggin
    Acked-by: Matt Mackall
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • We call alloc_page where we should be calling __page_cache_alloc.

    __page_cache_alloc performs cpuset memory spreading. alloc_page does not.
    There is no reason that pages allocated via find_or_create should be
    exempt.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • kmem_cache_create() was swapping ctor and dtor when calling
    find_mergeable(): it caused no bug, and probably never would, even if
    destructors were retained; but fix it so as not to generate anxiety ;)

    Signed-off-by: Hugh Dickins
    Cc: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hugh Dickins
     

14 May, 2007

1 commit


11 May, 2007

6 commits

  • Clean up massive code duplication between mpage_writepages() and
    generic_writepages().

    The new generic function, write_cache_pages() takes a function pointer
    argument, which will be called for each page to be written.

    Maybe cifs_writepages() too can use this infrastructure, but I'm not
    touching that with a ten-foot pole.

    The upcoming page writeback support in fuse will also want this.
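
    Roughly, the new interface looks like this, and generic_writepages() becomes
    a thin wrapper around it (error propagation details omitted in the sketch):

        typedef int (*writepage_t)(struct page *page,
                                   struct writeback_control *wbc, void *data);

        int write_cache_pages(struct address_space *mapping,
                              struct writeback_control *wbc,
                              writepage_t writepage, void *data);

        /* generic_writepages() just feeds ->writepage through the walker */
        static int __writepage(struct page *page,
                               struct writeback_control *wbc, void *data)
        {
                struct address_space *mapping = data;

                return mapping->a_ops->writepage(page, wbc);
        }

        int generic_writepages(struct address_space *mapping,
                               struct writeback_control *wbc)
        {
                if (!mapping->a_ops->writepage)
                        return 0;
                return write_cache_pages(mapping, wbc, __writepage, mapping);
        }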

    Signed-off-by: Miklos Szeredi
    Acked-by: Christoph Hellwig
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Miklos Szeredi
     
  • VM statistics updates do not matter if the kernel is in idle powersaving
    mode. So allow the timer to be deferred.

    It would be better, though, if we could switch the timer between deferrable
    and nondeferrable based on whether differentials are present. The timer
    would start out nondeferrable, and if we find that there were no updates in
    the last statistics interval then we would switch the timer to deferrable.
    If the timer later finds differentials again, it would go back to being
    nondeferrable.

    And yet another way would be to run the timer shortly before going to idle?

    The solution here means that the VM counters may be slightly off during
    idle since differentials may be still pending while the timer is deferred.
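
    The mechanics boil down to marking the timer deferrable so an idle CPU is
    not woken just to fold statistics. The actual patch goes through vmstat's
    existing work item; the raw-timer sketch below (illustrative names) only
    shows the deferrable part:

        static struct timer_list vmstat_timer;

        static void vmstat_timer_fn(unsigned long data)
        {
                /* fold per-cpu differentials into the global counters here */
                mod_timer(&vmstat_timer, round_jiffies(jiffies + HZ));
        }

        static void __init vmstat_timer_setup(void)
        {
                init_timer_deferrable(&vmstat_timer);   /* won't wake an idle CPU */
                vmstat_timer.function = vmstat_timer_fn;
                mod_timer(&vmstat_timer, round_jiffies(jiffies + HZ));
        }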

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • The following bug was uncovered by compiling with the '-W' flag:

    CC mm/thrash.o
    mm/thrash.c: In function ‘grab_swap_token’:
    mm/thrash.c:52: warning: comparison of unsigned expression < 0 is always false

    The variable token_priority is unsigned, so decrementing first and then
    checking the result does not work; fixed by reversing the test
    (compile tested only).

    I am not sure whether likely() makes much sense in this new situation, but
    I'll let somebody else make a decision on that.
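
    A standalone illustration of the bug class and the fix:

        static unsigned int token_priority;     /* unsigned, as in mm/thrash.c */

        static void lower_token_priority(void)
        {
                /*
                 * Broken:  if (--token_priority < 0) token_priority = 0;
                 * An unsigned value wraps instead of going negative, so the
                 * comparison is always false.  Test before decrementing:
                 */
                if (likely(token_priority > 0))
                        token_priority--;
        }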

    Signed-off-by: Mika Kukkonen
    Cc: Rik van Riel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mika Kukkonen
     
  • This was in SLUB in order to head off trouble while the nr_cpu_ids
    functionality was not merged. It's merged now, so there is no need to
    keep this.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Since it is referenced by memmap_init_zone (which is __meminit) via the
    early_pfn_in_nid macro when CONFIG_NODES_SPAN_OTHER_NODES is set (which
    basically means PowerPC 64).

    This removes a section mismatch warning in those circumstances.

    Signed-off-by: Stephen Rothwell
    Cc: Yasunori Goto
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Stephen Rothwell
     
  • Avoid atomic overhead in slab_alloc and slab_free

    SLUB needs to use the slab_lock for the per cpu slabs to synchronize with
    potential kfree operations. This patch avoids that need by moving all free
    objects onto a lockless_freelist. The regular freelist continues to exist
    and will be used to free objects. So while we consume the
    lockless_freelist the regular freelist may build up objects.

    If we are out of objects on the lockless_freelist then we may check the
    regular freelist. If it has objects then we move those over to the
    lockless_freelist and do this again. This yields significant savings in
    terms of the atomic operations that have to be performed.

    We can even free directly to the lockless_freelist if we know that we are
    running on the same processor. So this speeds up short-lived objects.
    They may be allocated and freed without taking the slab_lock. This is
    particularly good for netperf.

    In order to maximize the effect of the new faster hotpath we extract the
    hottest performance pieces into inlined functions. These are then inlined
    into kmem_cache_alloc and kmem_cache_free. So hotpath allocation and
    freeing no longer requires a subroutine call within SLUB.

    [I am not sure that it is worth doing this because it changes the easy to
    read structure of slub just to reduce atomic ops. However, there is
    someone out there with a benchmark on 4 way and 8 way processor systems
    that seems to show a 5% regression vs. SLAB. Seems that the regression is
    due to the increased use of atomic operations in SLUB vs. SLAB. I wonder if
    this is applicable or discernible at all in a real workload?]
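
    A heavily abridged sketch of the resulting allocation hot path (node and
    debug handling are dropped, and __slab_alloc() stands in for the whole
    slow path):

        static __always_inline void *slab_alloc(struct kmem_cache *s, gfp_t gfpflags)
        {
                struct page *page;
                void **object;
                unsigned long flags;

                local_irq_save(flags);
                page = s->cpu_slab[smp_processor_id()];
                if (likely(page && page->lockless_freelist)) {
                        /* fast path: no slab_lock, no atomic operations */
                        object = page->lockless_freelist;
                        page->lockless_freelist = object[page->offset];
                } else {
                        /* slow path: refill from the regular freelist under slab_lock */
                        object = __slab_alloc(s, gfpflags, page);
                }
                local_irq_restore(flags);
                return object;
        }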

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

10 May, 2007

17 commits

  • * master.kernel.org:/pub/scm/linux/kernel/git/lethal/sh-2.6:
    sh: Fix stacktrace simplification fallout.
    sh: SH7760 DMABRG support.
    sh: clockevent/clocksource/hrtimers/nohz TMU support.
    sh: Truncate MAX_ACTIVE_REGIONS for the common case.
    rtc: rtc-sh: Fix rtc_dev pointer for rtc_update_irq().
    sh: Convert to common die chain.
    sh: Wire up utimensat syscall.
    sh: landisk mv_nr_irqs definition.
    sh: Fixup ndelay() xloops calculation for alternate HZ.
    sh: Add 32-bit opcode feature CPU flag.
    sh: Fix PC adjustments for varying opcode length.
    sh: Support for SH-2A 32-bit opcodes.
    sh: Kill off redundant __div64_32 symbol export.
    sh: Share exception vector table for SH-3/4.
    sh: Always define TRAPA_BUG_OPCODE.
    sh: __GFP_REPEAT for pte allocations, too.
    rtc: rtc-sh: Fix up dev_dbg() warnings.
    sh: generic quicklist support.

    Linus Torvalds
     
  • Commit 6fe6900e1e5b6fa9e5c59aa5061f244fe3f467e2 introduced a nasty bug
    in read_cache_page_async().

    It added a "mark_page_accessed(page)" at the final return path in
    read_cache_page_async(). But in error cases, 'page' holds the error
    code, and you can't mark it accessed.

    [ and Glauber de Oliveira Costa points out that we can use a return
    instead of adding more goto's ]
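
    In other words, the tail of read_cache_page_async() has to bail out before
    touching the page when it is really an ERR_PTR, roughly:

        if (PageError(page)) {
                page_cache_release(page);
                return ERR_PTR(-EIO);   /* plain return, no extra goto needed */
        }
        mark_page_accessed(page);       /* only reached for a real page */
        return page;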

    Signed-off-by: David Howells
    Acked-by: Nick Piggin
    Signed-off-by: Linus Torvalds

    David Howells
     
  • * git://git.kernel.org/pub/scm/linux/kernel/git/bunk/trivial: (25 commits)
    sound: convert "sound" subdirectory to UTF-8
    MAINTAINERS: Add cxacru website/mailing list
    include files: convert "include" subdirectory to UTF-8
    general: convert "kernel" subdirectory to UTF-8
    documentation: convert the Documentation directory to UTF-8
    Convert the toplevel files CREDITS and MAINTAINERS to UTF-8.
    remove broken URLs from net drivers' output
    Magic number prefix consistency change to Documentation/magic-number.txt
    trivial: s/i_sem /i_mutex/
    fix file specification in comments
    drivers/base/platform.c: fix small typo in doc
    misc doc and kconfig typos
    Remove obsolete fat_cvf help text
    Fix occurrences of "the the "
    Fix minor typoes in kernel/module.c
    Kconfig: Remove reference to external mqueue library
    Kconfig: A couple of grammatical fixes in arch/i386/Kconfig
    Correct comments in genrtc.c to refer to correct /proc file.
    Fix more "deprecated" spellos.
    Fix "deprecated" typoes.
    ...

    Fix trivial comment conflict in kernel/relay.c.

    Linus Torvalds
     
  • Currently the slab allocators contain callbacks into the page allocator to
    perform the draining of pagesets on remote nodes. This requires SLUB to have
    a whole subsystem in order to be compatible with SLAB. Moving node draining
    out of the slab allocators avoids a section of code in SLUB.

    Move the node draining so that it is done when the vm statistics are updated.
    At that point we are already touching all the cachelines with the pagesets of
    a processor.

    Add an expire counter there. If we have to update per zone or global vm
    statistics then assume that the pageset will require subsequent draining.

    The expire counter will be decremented on each vm stats update pass until it
    reaches zero. Then we will drain one batch from the pageset. The draining
    will cause vm counter updates which will then cause another expiration until
    the pcp is empty. So we will drain a batch every 3 seconds.

    Note that remote node draining is a somewhat esoteric feature that is required
    on large NUMA systems because otherwise significant portions of system memory
    can become trapped in pcp queues. The number of pcps is determined by the
    number of processors and nodes in a system. A system with 4 processors and 2
    nodes has 8 pcps, which is okay. But a system with 1024 processors and 512
    nodes has 512k pcps, with a high potential for a large amount of memory
    being caught in them.
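
    A sketch of the expire logic (the per_cpu_pageset fields follow the
    description above; drain_zone_pages() is the helper this patch is described
    as adding, so its exact signature is an assumption):

        /* illustrative: called from the per-cpu vm statistics update */
        static void maybe_drain_remote_pageset(struct zone *zone, int cpu,
                                               int stats_changed)
        {
                struct per_cpu_pageset *p = zone_pcp(zone, cpu);

                if (stats_changed) {
                        p->expire = 3;          /* activity seen: defer draining */
                        return;
                }
                if (p->expire && --p->expire == 0 && p->pcp[0].count)
                        drain_zone_pages(zone, &p->pcp[0]);     /* one batch */
        }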

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Make the vm statistics update interval configurable. The code in mm now
    makes the vm statistics interval independent of the cache reaper, so use
    that opportunity to make the interval configurable.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • vmstat currently uses the cache reaper to periodically bring the
    statistics up to date. The cache reaper only exists in SLUB as a way to
    provide compatibility with SLAB. This patch removes the vmstat calls from
    the slab allocators and provides vmstat's own handling.

    The advantage is also that we can use a different frequency for the updates.
    Refreshing vm stats is a pretty fast job so we can run this every second and
    stagger this by only one tick. This will lead to some overlap in large
    systems. For example, a system running at 250 HZ with 1024 processors will have 4 vm
    updates occurring at once.

    However, the vm stats update only accesses per node information. It is only
    necessary to stagger the vm statistics updates per processor in each node. Vm
    counter updates occurring on distant nodes will not cause cacheline
    contention.

    We could implement an alternate approach that runs the first processor on each
    node at the start of the second and then each of the other processors on that
    node on a subsequent tick. That may be useful to keep a large part of the
    second free of timer activity. Maybe the timer folks will have some feedback
    on this one?
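
    The staggering itself can be as simple as offsetting each CPU's delayed
    work by its CPU number (a sketch, assuming the existing
    refresh_cpu_vm_stats() fold routine; the other names are illustrative):

        static DEFINE_PER_CPU(struct delayed_work, vmstat_work);

        static void vmstat_update(struct work_struct *w)
        {
                refresh_cpu_vm_stats(smp_processor_id());
                schedule_delayed_work(&__get_cpu_var(vmstat_work), HZ);
        }

        static void start_cpu_timer(int cpu)
        {
                struct delayed_work *work = &per_cpu(vmstat_work, cpu);

                INIT_DELAYED_WORK(work, vmstat_update);
                /* start each CPU one tick after the previous one */
                schedule_delayed_work_on(cpu, work, HZ + cpu);
        }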

    [jirislaby@gmail.com: add missing break]
    Cc: Arjan van de Ven
    Signed-off-by: Christoph Lameter
    Signed-off-by: Jiri Slaby
    Cc: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Since nonboot CPUs are now disabled after tasks and devices have been
    frozen and the CPU hotplug infrastructure is used for this purpose, we need
    special CPU hotplug notifications that will help the CPU-hotplug-aware
    subsystems distinguish normal CPU hotplug events from CPU hotplug events
    related to a system-wide suspend or resume operation in progress. This
    patch introduces such notifications and causes them to be used during
    suspend and resume transitions. It also changes all of the
    CPU-hotplug-aware subsystems to take these notifications into consideration
    (for now they are handled in the same way as the corresponding "normal"
    ones).
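
    In practice a CPU-hotplug-aware subsystem now sees *_FROZEN variants of the
    usual events during suspend/resume and, for now, treats them identically,
    e.g. (sketch):

        static int my_cpu_callback(struct notifier_block *nb,
                                   unsigned long action, void *hcpu)
        {
                switch (action) {
                case CPU_ONLINE:
                case CPU_ONLINE_FROZEN:         /* CPU coming up during resume */
                        /* (re)create this CPU's state, as for a normal online */
                        break;
                case CPU_DEAD:
                case CPU_DEAD_FROZEN:           /* CPU going down during suspend */
                        /* tear this CPU's state down, as for a normal offline */
                        break;
                }
                return NOTIFY_OK;
        }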

    [oleg@tv-sign.ru: cleanups]
    Signed-off-by: Rafael J. Wysocki
    Cc: Gautham R Shenoy
    Cc: Pavel Machek
    Signed-off-by: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rafael J. Wysocki
     
  • It's very common for file systems to need to zero part or all of a page;
    the simplest way is just to use kmap_atomic() and memset(). There's
    actually a library function in include/linux/highmem.h that does exactly
    that, but it's confusingly named memclear_highpage_flush(), which is
    descriptive of *how* it does the work rather than what the *purpose* is.
    So this patchset renames the function to zero_user_page(), and calls it
    from the various places that currently open code it.

    This first patch introduces the new function call, and converts all the
    core kernel callsites, both the open-coded ones and the old
    memclear_highpage_flush() ones. Following this patch is a series of
    conversions for each file system individually, per AKPM, and finally a
    patch deprecating the old call. The diffstat below shows the entire
    patchset.
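
    The helper itself is essentially the old open-coded sequence under a name
    that states the purpose; a sketch (the in-tree signature may differ in
    detail):

        static inline void zero_user_page(struct page *page, unsigned int offset,
                                          unsigned int size, enum km_type km)
        {
                void *kaddr = kmap_atomic(page, km);

                memset(kaddr + offset, 0, size);
                flush_dcache_page(page);
                kunmap_atomic(kaddr, km);
        }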

    [akpm@linux-foundation.org: fix a few things]
    Signed-off-by: Nate Diller
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nate Diller
     
  • Shut down the cache_reaper if the cpu is brought down and set the
    cache_reap.func to NULL. Otherwise hotplug shuts down the reaper for good.

    Signed-off-by: Christoph Lameter
    Cc: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • Looks like this was forgotten when CPU_LOCK_[ACQUIRE|RELEASE] was
    introduced.

    Cc: Pekka Enberg
    Cc: Srivatsa Vaddagiri
    Cc: Gautham Shenoy
    Signed-off-by: Heiko Carstens
    Cc: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Heiko Carstens
     
  • Export a couple of core functions for AFS write support to use:

    find_get_pages_contig()
    find_get_pages_tag()

    Signed-off-by: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Howells
     
  • When cpuset is configured, it breaks the strict hugetlb page reservation as
    the accounting is done on a global variable. Such reservation is
    completely rubbish in the presence of cpuset because the reservation is not
    checked against page availability for the current cpuset. An application can
    still potentially be OOM'ed by the kernel for lack of free htlb pages in the
    cpuset that the task is in. Attempting to enforce strict accounting with
    cpuset is almost impossible (or too ugly) because cpusets are too fluid:
    tasks or memory nodes can be dynamically moved between cpusets.

    The change of semantics for shared hugetlb mapping with cpuset is
    undesirable. However, in order to preserve some of the semantics, we fall
    back to check against current free page availability as a best attempt and
    hopefully to minimize the impact of changing semantics that cpuset has on
    hugetlb.

    Signed-off-by: Ken Chen
    Cc: Paul Jackson
    Cc: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ken Chen
     
  • The internal hugetlb resv_huge_pages variable can permanently leak a
    nonzero value in the error path of the hugetlb page fault handler when
    hugetlb pages are used in combination with cpusets. The leaked count can
    permanently trap that many hugetlb pages in an unusable "reserved" state.

    Steps to reproduce the bug:

    (1) create two cpuset, user1 and user2
    (2) reserve 50 htlb pages in cpuset user1
    (3) attempt to shmget/shmat 50 htlb page inside cpuset user2
    (4) kernel oom the user process in step 3
    (5) ipcrm the shm segment

    At this point resv_huge_pages will have a count of 49, even though
    there are no active hugetlbfs files nor hugetlb shared memory segments
    in the system. The leak is permanent and there is no recovery method
    other than system reboot. The leaked count will hold up all future use
    of that many htlb pages in all cpusets.

    The culprit is that the error path of alloc_huge_page() did not
    properly undo the change it made to resv_huge_pages, causing an
    inconsistent state.

    Signed-off-by: Ken Chen
    Cc: David Gibson
    Cc: Adam Litke
    Cc: Martin Bligh
    Acked-by: David Gibson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ken Chen
     
  • No "blank" (or "*") line is allowed between the function name and lines for
    it parameter(s).
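
    For reference, the layout kernel-doc expects: the @parameter lines must
    directly follow the line naming the function, with no blank "*" line in
    between, for example:

        /**
         * kmem_cache_alloc - allocate an object from this cache
         * @cachep: the cache to allocate from
         * @flags: GFP mask for the allocation
         *
         * A blank "*" line is fine here, before the longer description,
         * but not between the function name line and @cachep above.
         */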

    Cc: Randy Dunlap
    Signed-off-by: Pekka Enberg
    Cc: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Pekka J Enberg
     
  • In some cases SLUB is uselessly creating slabs that are larger than
    slub_max_order. Also the layout of some of the slabs was not satisfactory.

    Go to an iterative approach.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • We have information about how long an object existed and about the nodes and
    cpus where the allocations and frees took place. Add that information to the
    tracking output in /sys/slab/xx/alloc_calls and /sys/slab/free_calls

    This will then enable slabinfo to output nice reports like this:

    christoph@qirst:~/slub$ ./slabinfo kmalloc-128

    Slabcache: kmalloc-128 Aliases: 0 Order : 0

    Sizes (bytes) Slabs Debug Memory
    ------------------------------------------------------------------------
    Object : 128 Total : 12 Sanity Checks : On Total: 49152
    SlabObj: 200 Full : 7 Redzoning : On Used : 24832
    SlabSiz: 4096 Partial: 4 Poisoning : On Loss : 24320
    Loss : 72 CpuSlab: 1 Tracking : On Lalig: 13968
    Align : 8 Objects: 20 Tracing : Off Lpadd: 1152

    kmalloc-128 has no kmem_cache operations

    kmalloc-128: Kernel object allocation
    -----------------------------------------------------------------------
    6 param_sysfs_setup+0x71/0x130 age=284512/284512/284512 pid=1 nodes=0-1,3
    11 percpu_populate+0x39/0x80 age=283914/284428/284512 pid=1 nodes=0
    21 __register_chrdev_region+0x31/0x170 age=282896/284347/284473 pid=1-1705 nodes=0-2
    1 sys_inotify_init+0x76/0x1c0 age=283423 pid=1004 nodes=0
    19 as_get_io_context+0x32/0xd0 age=6/247567/283988 pid=1-11782 nodes=0,2
    10 ida_pre_get+0x4a/0x80 age=277666/283773/284526 pid=0-2177 nodes=0,2
    24 kobject_kset_add_dir+0x37/0xb0 age=282727/283860/284472 pid=1-1723 nodes=0-2
    1 acpi_ds_build_internal_buffer_obj+0xd3/0x11d age=284508 pid=1 nodes=0
    24 con_insert_unipair+0xd7/0x110 age=284438/284438/284438 pid=1 nodes=0,2
    1 uart_open+0x2d2/0x4b0 age=283896 pid=1 nodes=0
    26 dma_pool_create+0x73/0x1a0 age=282762/282833/282916 pid=1705-1723 nodes=0
    1 neigh_table_init_no_netlink+0xd2/0x210 age=284461 pid=1 nodes=0
    2 neigh_parms_alloc+0x2b/0xe0 age=284410/284411/284412 pid=1 nodes=2
    2 neigh_resolve_output+0x1e1/0x280 age=276289/276291/276293 pid=0-2443 nodes=0
    1 netlink_kernel_create+0x90/0x170 age=284472 pid=1 nodes=0
    4 xt_alloc_table_info+0x39/0xf0 age=283958/283958/283959 pid=1 nodes=1
    3 fn_hash_insert+0x473/0x720 age=277653/277661/277666 pid=2177-2185 nodes=0
    1 get_mtrr_state+0x285/0x2a0 age=284526 pid=0 nodes=0
    1 cacheinfo_cpu_callback+0x26d/0x3e0 age=284458 pid=1 nodes=0
    29 kernel_param_sysfs_setup+0x25/0x90 age=284511/284511/284512 pid=1 nodes=0-1,3
    5 process_zones+0x5e/0x170 age=284546/284546/284546 pid=0 nodes=0
    1 drm_core_init+0x48/0x160 age=284421 pid=1 nodes=2

    kmalloc-128: Kernel object freeing
    ------------------------------------------------------------------------
    163 age=4295176847 pid=0 nodes=0-3
    1 __vunmap+0x6e/0xf0 age=282907 pid=1723 nodes=0
    28 free_as_io_context+0x12/0x90 age=9243/262197/283474 pid=42-11754 nodes=0
    1 acpi_get_object_info+0x1b7/0x1d4 age=284475 pid=1 nodes=0
    1 do_acpi_find_child+0x45/0x4e age=284475 pid=1 nodes=0

    NUMA nodes : 0 1 2 3
    ------------------------------------------
    All slabs 7 2 2 1
    Partial slabs 2 2 0 0

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     
  • CONFIG_SLUB_DEBUG can be used to switch off the debugging and sysfs components
    of SLUB. Thus SLUB will be able to replace SLOB. SLUB can arrange objects in
    a denser way than SLOB and the code size should be minimal without debugging
    and sysfs support.

    Note that CONFIG_SLUB_DEBUG is materially different from CONFIG_SLAB_DEBUG.
    CONFIG_SLAB_DEBUG is used to enable slab debugging in SLAB. SLUB enables
    debugging via a boot parameter. SLUB debug code should always be present.

    CONFIG_SLUB_DEBUG can be modified in the embedded config section.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter