19 Dec, 2012

6 commits

  • SLAB allows us to tune a particular cache behavior with tunables. When
    creating a new memcg cache copy, we'd like to preserve any tunables the
    parent cache already had.

    This could be done by an explicit call to do_tune_cpucache() after the
    cache is created. But this is not very convenient now that the caches are
    created from common code, since this function is SLAB-specific.

    Another method of doing that is taking advantage of the fact that
    do_tune_cpucache() is always called from enable_cpucache(), which is
    called at cache initialization. We can just preset the values, and then
    things work as expected.

    It can also happen that a root cache has its tunables updated during
    normal system operation. In this case, we will propagate the change to
    all caches that are already active.

    This change will require us to move the assignment of root_cache in
    memcg_params a bit earlier. We need this to be already set - which
    memcg_kmem_register_cache will do - by the time we reach
    __kmem_cache_create().
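
    A minimal sketch of the preset step, with illustrative structure and
    field names rather than the exact kernel code: before the usual sizing
    heuristics run at cache initialization, a memcg copy simply inherits
    whatever its root cache already had.

        struct cache_tunables {
                unsigned int limit;      /* per-cpu array size             */
                unsigned int batchcount; /* objects moved per refill/flush */
                unsigned int shared;     /* shared array factor            */
        };

        struct cache_sketch {
                struct cache_tunables tune;
                struct cache_sketch *root_cache; /* NULL for a root cache */
        };

        /* Called before enable_cpucache()-style heuristics pick defaults. */
        static void preset_tunables_from_root(struct cache_sketch *s)
        {
                if (!s->root_cache)
                        return;  /* root caches keep the computed defaults */
                s->tune = s->root_cache->tune;
        }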

    Signed-off-by: Glauber Costa
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Frederic Weisbecker
    Cc: Greg Thelen
    Cc: Johannes Weiner
    Cc: JoonSoo Kim
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Cc: Michal Hocko
    Cc: Pekka Enberg
    Cc: Rik van Riel
    Cc: Suleiman Souhlal
    Cc: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Glauber Costa
     
  • When we create caches in memcgs, we need to display their usage
    information somewhere. We'll adopt a scheme similar to /proc/meminfo,
    with aggregate totals shown in the global file, and per-group information
    stored in the group itself.

    For the time being, only reads are allowed in the per-group cache.
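
    A sketch of the aggregation half of that scheme, with illustrative
    names: when the global file is generated, the counters of every
    per-memcg child cache are folded into its root cache's line.

        struct slab_counters {
                unsigned long active_objs;
                unsigned long num_objs;
                unsigned long active_slabs;
                unsigned long num_slabs;
        };

        /* Fold one child cache's counters into the root cache's totals. */
        static void accumulate_child(struct slab_counters *total,
                                     const struct slab_counters *child)
        {
                total->active_objs  += child->active_objs;
                total->num_objs     += child->num_objs;
                total->active_slabs += child->active_slabs;
                total->num_slabs    += child->num_slabs;
        }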

    Signed-off-by: Glauber Costa
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Frederic Weisbecker
    Cc: Greg Thelen
    Cc: Johannes Weiner
    Cc: JoonSoo Kim
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Cc: Michal Hocko
    Cc: Pekka Enberg
    Cc: Rik van Riel
    Cc: Suleiman Souhlal
    Cc: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Glauber Costa
     
  • Implement destruction of memcg caches. Right now, only caches where our
    reference counter is the last remaining are deleted. If there are any
    other reference counters around, we just leave the caches lying around
    until they go away.

    When that happens, a destruction function is called from the cache code.
    Caches are only destroyed in process context, so we queue them up for
    later processing in the general case.
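
    A sketch of that deferral using the stock workqueue API; the names
    memcg_cache_work and memcg_schedule_cache_destroy, and the GFP_ATOMIC
    allocation, are illustrative rather than the exact code that went in.

        #include <linux/kernel.h>
        #include <linux/slab.h>
        #include <linux/workqueue.h>

        struct memcg_cache_work {
                struct work_struct work;
                struct kmem_cache *cachep;
        };

        static void memcg_cache_destroy_func(struct work_struct *w)
        {
                struct memcg_cache_work *cw =
                        container_of(w, struct memcg_cache_work, work);

                kmem_cache_destroy(cw->cachep); /* safely in process context */
                kfree(cw);
        }

        static void memcg_schedule_cache_destroy(struct kmem_cache *cachep)
        {
                struct memcg_cache_work *cw;

                cw = kmalloc(sizeof(*cw), GFP_ATOMIC);
                if (!cw)
                        return; /* sketch only: a real version must cope */

                cw->cachep = cachep;
                INIT_WORK(&cw->work, memcg_cache_destroy_func);
                schedule_work(&cw->work);
        }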

    Signed-off-by: Glauber Costa
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Frederic Weisbecker
    Cc: Greg Thelen
    Cc: Johannes Weiner
    Cc: JoonSoo Kim
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Cc: Michal Hocko
    Cc: Pekka Enberg
    Cc: Rik van Riel
    Cc: Suleiman Souhlal
    Cc: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Glauber Costa
     
  • struct page already has this information. If we start chaining caches,
    this information will always be more trustworthy than whatever is passed
    into the function.
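
    A hedged sketch of what using the page's information means on the free
    path, assuming the page->slab_cache back pointer and virt_to_head_page();
    the helper name is illustrative.

        #include <linux/mm.h>
        #include <linux/slab.h>

        static inline struct kmem_cache *
        cache_from_obj_sketch(struct kmem_cache *s, void *obj)
        {
                struct page *page = virt_to_head_page(obj);

                /*
                 * With cache chaining, the caller may hold the root cache
                 * while the object really lives in a per-memcg child:
                 * trust the page, not the argument.
                 */
                return page->slab_cache ? page->slab_cache : s;
        }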

    Signed-off-by: Glauber Costa
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Frederic Weisbecker
    Cc: Greg Thelen
    Cc: Johannes Weiner
    Cc: JoonSoo Kim
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Cc: Michal Hocko
    Cc: Pekka Enberg
    Cc: Rik van Riel
    Cc: Suleiman Souhlal
    Cc: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Glauber Costa
     
  • Allow a memcg parameter to be passed during cache creation. When the slub
    allocator is being used, it will only merge caches that belong to the same
    memcg. We do this by scanning the global list and then translating the
    cache to a memcg-specific cache.

    The default function is created as a wrapper that passes NULL to the
    memcg version.

    A helper, memcg_css_id, is provided because slub needs a unique cache name
    for sysfs. Since sysfs is visible but is not the canonical location for
    slab data, the full cache name is not used; the css_id should suffice.
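
    A sketch of how such a unique, sysfs-friendly name could be formed; the
    helper name and the exact format are illustrative, not the final code.

        #include <linux/kernel.h>
        #include <linux/slab.h>

        /* Build "<root name>(<css id>)", e.g. "dentry(42)"; caller frees. */
        static char *memcg_cache_name_sketch(const char *root_name, int css_id)
        {
                return kasprintf(GFP_KERNEL, "%s(%d)", root_name, css_id);
        }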

    Signed-off-by: Glauber Costa
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Frederic Weisbecker
    Cc: Greg Thelen
    Cc: Johannes Weiner
    Cc: JoonSoo Kim
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Cc: Michal Hocko
    Cc: Pekka Enberg
    Cc: Rik van Riel
    Cc: Suleiman Souhlal
    Cc: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Glauber Costa
     
  • For the kmem slab controller, we need to record some extra information in
    the kmem_cache structure.
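
    The shape of that extra information, sketched with illustrative field
    names rather than the exact layout: a root cache needs to find its
    per-memcg copies, and each copy needs to find its memcg and its root.

        #include <linux/types.h>

        struct mem_cgroup;      /* opaque here */
        struct kmem_cache;

        struct memcg_cache_params_sketch {
                bool is_root_cache;
                union {
                        /* root cache: per-memcg copies, indexed by memcg id */
                        struct kmem_cache **memcg_caches;
                        /* memcg copy: back pointers to its owner and origin */
                        struct {
                                struct mem_cgroup *memcg;
                                struct kmem_cache *root_cache;
                        };
                };
        };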

    Signed-off-by: Glauber Costa
    Signed-off-by: Suleiman Souhlal
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Frederic Weisbecker
    Cc: Greg Thelen
    Cc: Johannes Weiner
    Cc: JoonSoo Kim
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Cc: Michal Hocko
    Cc: Pekka Enberg
    Cc: Rik van Riel
    Cc: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Glauber Costa
     

31 Oct, 2012

1 commit

  • Some flags are used internally by the allocators for management
    purposes. One example of that is the CFLGS_OFF_SLAB flag that slab uses
    to mark that the metadata for that cache is stored outside of the slab.

    No cache should ever pass those as creation flags. We can just ignore
    this bit if it happens to be passed (such as when duplicating a cache in
    the kmem memcg patches).

    Because such flags can vary from allocator to allocator, we let each
    allocator make its own decision here, defining SLAB_AVAILABLE_FLAGS with
    all flags that are valid at creation time. Allocators that don't have
    any specific flag requirements should define it to mean all flags.

    Common code will mask out all flags not belonging to that set.
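
    A small sketch of the masking; the flag values here are purely
    illustrative. Each allocator defines SLAB_AVAILABLE_FLAGS and common
    code drops anything outside it.

        #define CFLGS_OFF_SLAB        0x80000000UL   /* internal bookkeeping */
        #define SLAB_AVAILABLE_FLAGS  (~CFLGS_OFF_SLAB)

        static unsigned long sanitize_creation_flags(unsigned long flags)
        {
                /* Internal bits are silently ignored rather than rejected. */
                return flags & SLAB_AVAILABLE_FLAGS;
        }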

    Acked-by: Christoph Lameter
    Acked-by: David Rientjes
    Signed-off-by: Glauber Costa
    Signed-off-by: Pekka Enberg

    Glauber Costa
     

24 Oct, 2012

3 commits

  • With all the infrastructure in place, we can now have slabinfo_show
    done from slab_common.c. A cache-specific function is called to grab
    information about the cache itself, since that is still heavily
    dependent on the implementation. But with the values produced by it, all
    the printing and handling is done from common code.
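
    A sketch of that split, assuming a seq_file consumer and a counters
    struct with illustrative fields: the allocator only fills in numbers,
    and all formatting stays in common code.

        #include <linux/seq_file.h>
        #include <linux/slab.h>

        struct slabinfo_sketch {
                unsigned long active_objs;
                unsigned long num_objs;
                unsigned long active_slabs;
                unsigned long num_slabs;
                unsigned int  objects_per_slab;
        };

        /* Provided by the allocator (slab, slub, slob). */
        void get_slabinfo_sketch(struct kmem_cache *s,
                                 struct slabinfo_sketch *si);

        /* Common code: one /proc/slabinfo line, allocator-independent. */
        static void cache_show_sketch(const char *name, struct kmem_cache *s,
                                      struct seq_file *m)
        {
                struct slabinfo_sketch si;

                get_slabinfo_sketch(s, &si);
                seq_printf(m, "%-17s %6lu %6lu %4u %4lu %4lu\n", name,
                           si.active_objs, si.num_objs, si.objects_per_slab,
                           si.active_slabs, si.num_slabs);
        }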

    Signed-off-by: Glauber Costa
    CC: Christoph Lameter
    CC: David Rientjes
    Signed-off-by: Pekka Enberg

    Glauber Costa
     
  • The header format is highly similar between slab and slub. The main
    difference lies in the fact that slab may optionally have statistics
    added here when CONFIG_DEBUG_SLAB is enabled, while slub keeps them
    somewhere else.

    By making sure that information conditionally lives inside a
    globally-visible CONFIG_DEBUG_SLAB switch, we can move the header
    printing to a common location.
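
    Roughly how the shared header function can look once the statistics
    columns hinge on one globally visible config symbol; the column text
    is abbreviated and illustrative.

        #include <linux/seq_file.h>

        static void print_slabinfo_header_sketch(struct seq_file *m)
        {
        #ifdef CONFIG_DEBUG_SLAB
                seq_puts(m, "slabinfo - version: 2.1 (statistics)\n");
        #else
                seq_puts(m, "slabinfo - version: 2.1\n");
        #endif
                seq_puts(m, "# name <active_objs> <num_objs> <objsize> ...\n");
        #ifdef CONFIG_DEBUG_SLAB
                seq_puts(m, " : globalstat ... : cpustat ...\n");
        #endif
        }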

    Signed-off-by: Glauber Costa
    Acked-by: Christoph Lameter
    CC: David Rientjes
    Signed-off-by: Pekka Enberg

    Glauber Costa
     
  • This patch moves all the common machinery for slabinfo processing
    to slab_common.c. We could do better by noticing that the output is
    largely common and having the allocators just provide the finished
    information about it, but that is easier to do after this first step.
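
    The common machinery amounts to the usual seq_file plumbing over the
    global cache list; a sketch, assuming the shared slab_caches list and
    slab_mutex from the common code.

        #include <linux/list.h>
        #include <linux/mutex.h>
        #include <linux/seq_file.h>

        extern struct mutex slab_mutex;       /* common slab lock        */
        extern struct list_head slab_caches;  /* every registered cache  */

        static void *slab_start_sketch(struct seq_file *m, loff_t *pos)
        {
                mutex_lock(&slab_mutex);
                return seq_list_start(&slab_caches, *pos);
        }

        static void *slab_next_sketch(struct seq_file *m, void *p, loff_t *pos)
        {
                return seq_list_next(p, &slab_caches, pos);
        }

        static void slab_stop_sketch(struct seq_file *m, void *p)
        {
                mutex_unlock(&slab_mutex);
        }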

    Signed-off-by: Glauber Costa
    Acked-by: Christoph Lameter
    CC: David Rientjes
    Signed-off-by: Pekka Enberg

    Glauber Costa
     

05 Sep, 2012

8 commits

  • This reverts commit 96d17b7be0a9849d381442030886211dbb2a7061 which
    caused the following errors at boot:

    [ 1.114885] kobject (ffff88001a802578): tried to init an initialized object, something is seriously wrong.
    [ 1.114885] Pid: 1, comm: swapper/0 Tainted: G W 3.6.0-rc1+ #6
    [ 1.114885] Call Trace:
    [ 1.114885] [] kobject_init+0x87/0xa0
    [ 1.115555] [] kobject_init_and_add+0x2a/0x90
    [ 1.115555] [] ? sprintf+0x40/0x50
    [ 1.115555] [] sysfs_slab_add+0x80/0x210
    [ 1.115555] [] kmem_cache_create+0xa5/0x250
    [ 1.115555] [] ? md_init+0x144/0x144
    [ 1.115555] [] local_init+0xa4/0x11b
    [ 1.115555] [] dm_init+0x14/0x45
    [ 1.115836] [] do_one_initcall+0x3a/0x160
    [ 1.116834] [] kernel_init+0x133/0x1b7
    [ 1.117835] [] ? do_early_param+0x86/0x86
    [ 1.117835] [] kernel_thread_helper+0x4/0x10
    [ 1.118401] [] ? start_kernel+0x33f/0x33f
    [ 1.119832] [] ? gs_change+0xb/0xb
    [ 1.120325] ------------[ cut here ]------------
    [ 1.120835] WARNING: at fs/sysfs/dir.c:536 sysfs_add_one+0xc1/0xf0()
    [ 1.121437] sysfs: cannot create duplicate filename '/kernel/slab/:t-0000016'
    [ 1.121831] Modules linked in:
    [ 1.122138] Pid: 1, comm: swapper/0 Tainted: G W 3.6.0-rc1+ #6
    [ 1.122831] Call Trace:
    [ 1.123074] [] ? sysfs_add_one+0xc1/0xf0
    [ 1.123833] [] warn_slowpath_common+0x7a/0xb0
    [ 1.124405] [] warn_slowpath_fmt+0x41/0x50
    [ 1.124832] [] sysfs_add_one+0xc1/0xf0
    [ 1.125337] [] create_dir+0x73/0xd0
    [ 1.125832] [] sysfs_create_dir+0x81/0xe0
    [ 1.126363] [] kobject_add_internal+0x9d/0x210
    [ 1.126832] [] kobject_init_and_add+0x63/0x90
    [ 1.127406] [] sysfs_slab_add+0x80/0x210
    [ 1.127832] [] kmem_cache_create+0xa5/0x250
    [ 1.128384] [] ? md_init+0x144/0x144
    [ 1.128833] [] local_init+0xa4/0x11b
    [ 1.129831] [] dm_init+0x14/0x45
    [ 1.130305] [] do_one_initcall+0x3a/0x160
    [ 1.130831] [] kernel_init+0x133/0x1b7
    [ 1.131351] [] ? do_early_param+0x86/0x86
    [ 1.131830] [] kernel_thread_helper+0x4/0x10
    [ 1.132392] [] ? start_kernel+0x33f/0x33f
    [ 1.132830] [] ? gs_change+0xb/0xb
    [ 1.133315] ---[ end trace 2703540871c8fab7 ]---
    [ 1.133830] ------------[ cut here ]------------
    [ 1.134274] WARNING: at lib/kobject.c:196 kobject_add_internal+0x1f5/0x210()
    [ 1.134829] kobject_add_internal failed for :t-0000016 with -EEXIST, don't try to register things with the same name in the same directory.
    [ 1.135829] Modules linked in:
    [ 1.136135] Pid: 1, comm: swapper/0 Tainted: G W 3.6.0-rc1+ #6
    [ 1.136828] Call Trace:
    [ 1.137071] [] ? kobject_add_internal+0x1f5/0x210
    [ 1.137830] [] warn_slowpath_common+0x7a/0xb0
    [ 1.138402] [] warn_slowpath_fmt+0x41/0x50
    [ 1.138830] [] ? release_sysfs_dirent+0x73/0xf0
    [ 1.139419] [] kobject_add_internal+0x1f5/0x210
    [ 1.139830] [] kobject_init_and_add+0x63/0x90
    [ 1.140429] [] sysfs_slab_add+0x80/0x210
    [ 1.140830] [] kmem_cache_create+0xa5/0x250
    [ 1.141829] [] ? md_init+0x144/0x144
    [ 1.142307] [] local_init+0xa4/0x11b
    [ 1.142829] [] dm_init+0x14/0x45
    [ 1.143307] [] do_one_initcall+0x3a/0x160
    [ 1.143829] [] kernel_init+0x133/0x1b7
    [ 1.144352] [] ? do_early_param+0x86/0x86
    [ 1.144829] [] kernel_thread_helper+0x4/0x10
    [ 1.145405] [] ? start_kernel+0x33f/0x33f
    [ 1.145828] [] ? gs_change+0xb/0xb
    [ 1.146313] ---[ end trace 2703540871c8fab8 ]---

    Conflicts:

    mm/slub.c

    Signed-off-by: Pekka Enberg

    Pekka Enberg
     
  • Do the initial settings of the fields in common code. This will allow us
    to push more processing into common code later and improve readability.
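
    A sketch of what the initial settings amount to, using a stand-in
    structure so the names stay clearly illustrative: the generic
    kmem_cache_create() path fills in the basics, then the allocator's
    __kmem_cache_create() finishes the job.

        #include <linux/types.h>

        /* Stand-in for the fields common code now initializes. */
        struct cache_fields_sketch {
                const char *name;
                size_t object_size;   /* what the caller asked for     */
                size_t size;          /* allocators may round this up  */
                size_t align;
                void (*ctor)(void *);
        };

        static void set_common_fields(struct cache_fields_sketch *s,
                                      const char *name, size_t size,
                                      size_t align, void (*ctor)(void *))
        {
                s->name        = name;
                s->object_size = size;
                s->size        = size;
                s->align       = align;
                s->ctor        = ctor;
                /* ...allocator-specific setup runs after this point... */
        }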

    Signed-off-by: Christoph Lameter
    Signed-off-by: Pekka Enberg

    Christoph Lameter
     
  • Shift the allocations to common code. That way the allocation and
    freeing of the kmem_cache structures is handled by common code.
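
    A hedged sketch of the ownership change, with error handling
    abbreviated: common code allocates the kmem_cache structure from the
    boot "kmem_cache" cache and also frees it if the allocator-specific
    setup fails.

        #include <linux/slab.h>

        extern struct kmem_cache *kmem_cache;  /* the boot cache of caches */
        int __kmem_cache_create(struct kmem_cache *s, unsigned long flags);

        static struct kmem_cache *alloc_and_create_sketch(unsigned long flags)
        {
                struct kmem_cache *s;

                s = kmem_cache_zalloc(kmem_cache, GFP_KERNEL);
                if (!s)
                        return NULL;

                /* common field setup elided */
                if (__kmem_cache_create(s, flags)) {
                        kmem_cache_free(kmem_cache, s);
                        return NULL;
                }
                return s;
        }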

    Reviewed-by: Glauber Costa
    Signed-off-by: Christoph Lameter
    Signed-off-by: Pekka Enberg

    Christoph Lameter
     
  • Simplify locking by moving the sysfs_slab_add() call to after all locks
    have been dropped. This eases the upcoming move to provide sysfs support
    for all allocators.
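
    The resulting ordering, sketched with an illustrative helper for the
    work done under the lock; slab_mutex and the sysfs registration call
    are the real parts, the rest is assumed for illustration.

        #include <linux/mutex.h>

        struct kmem_cache;
        extern struct mutex slab_mutex;
        int sysfs_slab_add(struct kmem_cache *s);      /* slub sysfs hook */
        struct kmem_cache *create_cache_locked(void);  /* illustrative    */

        static struct kmem_cache *create_then_register_sketch(void)
        {
                struct kmem_cache *s;

                mutex_lock(&slab_mutex);
                s = create_cache_locked();  /* list work under the lock */
                mutex_unlock(&slab_mutex);

                if (s)
                        sysfs_slab_add(s);  /* sleeping sysfs work, unlocked */
                return s;
        }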

    Reviewed-by: Glauber Costa
    Signed-off-by: Christoph Lameter
    Signed-off-by: Pekka Enberg

    Christoph Lameter
     
  • The slab aliasing logic causes some strange contortions in slub. So add
    a call to slab_common.c to deal with aliases, but disable it for the other
    slab allocators by providing stubs that fail to create aliases.

    Full general support for aliases will require additional cleanup passes
    and more standardization of fields in kmem_cache.
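
    The stub side is easy to show; a sketch assuming the common code calls
    an allocator hook and falls back to normal creation when it returns
    NULL (the hook name here is illustrative).

        #include <linux/types.h>

        struct kmem_cache;

        /*
         * slub implements this by searching for a mergeable cache; slab
         * and slob simply decline, so the caller creates a new cache.
         */
        static struct kmem_cache *
        __kmem_cache_alias_sketch(const char *name, size_t size, size_t align,
                                  unsigned long flags, void (*ctor)(void *))
        {
                return NULL;    /* no aliasing support in this allocator */
        }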

    Signed-off-by: Christoph Lameter
    Signed-off-by: Pekka Enberg

    Christoph Lameter
     
  • What is done there can be done in __kmem_cache_shutdown.

    This affects RCU handling somewhat. On RCU free, no slab allocator refers
    to management structures other than the kmem_cache structure itself.
    Therefore those other structures can be freed before the RCU-deferred
    free to the page allocator occurs.

    Reviewed-by: Joonsoo Kim
    Signed-off-by: Christoph Lameter
    Signed-off-by: Pekka Enberg

    Christoph Lameter
     
  • Make all allocators use the "kmem_cache" slabname for the "kmem_cache"
    structure.

    Reviewed-by: Glauber Costa
    Reviewed-by: Joonsoo Kim
    Signed-off-by: Christoph Lameter
    Signed-off-by: Pekka Enberg

    Christoph Lameter
     
  • kmem_cache_destroy does basically the same thing in all allocators.

    Extract the common code, which is easy since we already have common mutex
    handling.
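
    A hedged sketch of the extracted shape; the refcounting, list handling
    and allocator shutdown hook follow this series, while the exact error
    handling and cpu hotplug exclusion are abbreviated.

        #include <linux/list.h>
        #include <linux/mutex.h>
        #include <linux/slab.h>

        extern struct mutex slab_mutex;
        extern struct list_head slab_caches;
        extern struct kmem_cache *kmem_cache;            /* cache of caches */
        int __kmem_cache_shutdown(struct kmem_cache *s); /* per-allocator   */

        void kmem_cache_destroy_sketch(struct kmem_cache *s)
        {
                mutex_lock(&slab_mutex);
                s->refcount--;
                if (s->refcount) {              /* still aliased: keep it */
                        mutex_unlock(&slab_mutex);
                        return;
                }

                list_del(&s->list);
                if (__kmem_cache_shutdown(s)) { /* objects remain: restore */
                        list_add(&s->list, &slab_caches);
                        mutex_unlock(&slab_mutex);
                        return;
                }
                mutex_unlock(&slab_mutex);

                kmem_cache_free(kmem_cache, s); /* struct freed here too */
        }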

    Reviewed-by: Glauber Costa
    Signed-off-by: Christoph Lameter
    Signed-off-by: Pekka Enberg

    Christoph Lameter
     

09 Jul, 2012

2 commits

  • Use the mutex definition from SLAB and make it the common way to take a sleeping lock.

    This has the effect of using a mutex instead of a rw semaphore for SLUB.

    SLOB gains the use of a mutex for kmem_cache_create serialization.
    This is not needed now, but SLOB may acquire more features later (such
    as slabinfo / sysfs support) through the expansion of the common code,
    which will need it.
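
    In sketch form, the shared lock is just one mutex, defined once in the
    common code and declared in the internal header the allocators share;
    the file placement shown in the comments is indicative of the
    common-code arrangement rather than quoted from it.

        #include <linux/mutex.h>

        /* mm/slab_common.c: the single definition */
        DEFINE_MUTEX(slab_mutex);

        /* mm/slab.h: shared declaration for slab, slub and slob */
        extern struct mutex slab_mutex;

        /* All three allocators then serialize kmem_cache_create() work
         * with the same lock: */
        static void serialize_example(void)
        {
                mutex_lock(&slab_mutex);
                /* walk slab_caches, create or merge a cache, ... */
                mutex_unlock(&slab_mutex);
        }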

    Reviewed-by: Glauber Costa
    Reviewed-by: Joonsoo Kim
    Signed-off-by: Christoph Lameter
    Signed-off-by: Pekka Enberg

    Christoph Lameter
     
  • All allocators have some sort of support for the bootstrap status.

    Set up a common definition for the boot states and make all slab
    allocators use that definition.
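
    A sketch of such a common definition; the exact set of intermediate
    states differs per allocator, so the enumerators below are indicative
    rather than exhaustive.

        /*
         * Bootstrap states shared by the allocators; slab additionally
         * needs intermediate states while its internal caches come online.
         */
        enum slab_state_sketch {
                DOWN,           /* no slab functionality yet          */
                PARTIAL,        /* basic kmalloc caches usable        */
                UP,             /* slab caches fully usable           */
                FULL            /* everything, including sysfs, ready */
        };

        extern enum slab_state_sketch slab_state;

        static inline int slab_is_available_sketch(void)
        {
                return slab_state >= UP;
        }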

    Reviewed-by: Glauber Costa
    Reviewed-by: Joonsoo Kim
    Signed-off-by: Christoph Lameter
    Signed-off-by: Pekka Enberg

    Christoph Lameter