24 Feb, 2013

1 commit

  • The function names page_xchg_last_nid(), page_last_nid() and
    reset_page_last_nid() were judged to be inconsistent so rename them to a
    struct_field_op style pattern. As it looked jarring to have
    reset_page_mapcount() and page_nid_reset_last() beside each other in
    memmap_init_zone(), this patch also renames reset_page_mapcount() to
    page_mapcount_reset(). There are others like init_page_count() but as
    it is used throughout the arch code a rename would likely cause more
    conflicts than it is worth.

    [akpm@linux-foundation.org: fix zcache]
    Signed-off-by: Mel Gorman
    Suggested-by: Andrew Morton
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

19 Dec, 2012

1 commit

  • struct page already has this information. If we start chaining caches,
    this information will always be more trustworthy than whatever is passed
    into the function.

    Signed-off-by: Glauber Costa
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Frederic Weisbecker
    Cc: Greg Thelen
    Cc: Johannes Weiner
    Cc: JoonSoo Kim
    Cc: KAMEZAWA Hiroyuki
    Cc: Mel Gorman
    Cc: Michal Hocko
    Cc: Pekka Enberg
    Cc: Rik van Riel
    Cc: Suleiman Souhlal
    Cc: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Glauber Costa
     

11 Dec, 2012

1 commit

  • Extract the code to do object alignment from the allocators.
    Do the alignment calculations in slab_common so that the
    __kmem_cache_create functions of the allocators do not have
    to deal with alignment.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Pekka Enberg

    Christoph Lameter
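
The consolidated calculation above can be sketched in plain C. This is a hedged userspace approximation only: the constants (SKETCH_MINALIGN, SKETCH_CACHE_LINE) and the function name are illustrative assumptions, not the kernel's definitions; the authoritative version lives in mm/slab_common.c.

```c
#include <stddef.h>

/* Illustrative stand-ins for per-architecture constants. */
enum { SKETCH_MINALIGN = 8, SKETCH_CACHE_LINE = 64 };

#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((size_t)(a) - 1))

/* hw_align mimics SLAB_HWCACHE_ALIGN: aim for cache-line alignment,
 * but shrink the target for objects much smaller than a line. */
static size_t calc_alignment(int hw_align, size_t align, size_t size)
{
    if (hw_align) {
        size_t ralign = SKETCH_CACHE_LINE;
        while (size <= ralign / 2)   /* halve while the object still fits */
            ralign /= 2;
        if (ralign > align)
            align = ralign;
    }
    if (align < SKETCH_MINALIGN)
        align = SKETCH_MINALIGN;
    /* keep the result a multiple of the word size */
    return ALIGN_UP(align, sizeof(void *));
}
```

With this in one place, each allocator's __kmem_cache_create() receives an already-settled alignment instead of recomputing it.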
     

31 Oct, 2012

5 commits

  • The definition of ARCH_SLAB_MINALIGN is architecture dependent
    and can be either of type size_t or int. Comparing that value
    with ARCH_KMALLOC_MINALIGN can cause harmless warnings on
    platforms where they are different. Since both are always
    small positive integer numbers, using the size_t type to compare
    them is safe and gets rid of the warning.

    Without this patch, building ARM collie_defconfig results in:

    mm/slob.c: In function '__kmalloc_node':
    mm/slob.c:431:152: warning: comparison of distinct pointer types lacks a cast [enabled by default]
    mm/slob.c: In function 'kfree':
    mm/slob.c:484:153: warning: comparison of distinct pointer types lacks a cast [enabled by default]
    mm/slob.c: In function 'ksize':
    mm/slob.c:503:153: warning: comparison of distinct pointer types lacks a cast [enabled by default]

    Acked-by: Christoph Lameter
    Signed-off-by: Arnd Bergmann
    [ penberg@kernel.org: updates for master ]
    Signed-off-by: Pekka Enberg

    Arnd Bergmann
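
The warnings above come from type-checking min()/max()-style macros, which object when the two operands have different types. A minimal sketch of the fix, with illustrative constant values (the real ones are per-architecture):

```c
#include <stddef.h>

/* Illustrative values only. */
#define SKETCH_KMALLOC_MINALIGN ((size_t)8)
#define SKETCH_SLAB_MINALIGN 4          /* plain int on some platforms */

/* The fix: evaluate both minimums as size_t before comparing, so a
 * type-checking max() macro sees matching types and stays silent. */
static size_t sketch_minalign(void)
{
    size_t a = SKETCH_KMALLOC_MINALIGN;
    size_t b = (size_t)SKETCH_SLAB_MINALIGN;
    return a > b ? a : b;
}
```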
     
  • When freeing objects, the slob allocator currently frees empty pages
    by calling __free_pages(). However, page-size kmallocs are disposed of
    using put_page() instead.

    It makes no sense to call put_page() for kernel pages that are provided
    by the object allocator, so we shouldn't be doing this ourselves.

    This is based on:
    commit d9b7f22623b5fa9cc189581dcdfb2ac605933bf4
    Author: Glauber Costa
    slub: use free_page instead of put_page for freeing kmalloc allocation

    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: Matt Mackall
    Acked-by: Glauber Costa
    Signed-off-by: Ezequiel Garcia
    Signed-off-by: Pekka Enberg

    Ezequiel Garcia
     
  • This function is identically defined in all three allocators,
    and it is trivial to move it to slab.h.

    Since it is now a static, inline, header-defined function,
    this patch also drops the EXPORT_SYMBOL tag.

    Cc: Pekka Enberg
    Cc: Matt Mackall
    Acked-by: Christoph Lameter
    Signed-off-by: Ezequiel Garcia
    Signed-off-by: Pekka Enberg

    Ezequiel Garcia
     
  • Fields object_size and size are not the same: the latter might include
    slab metadata. Return object_size field in kmem_cache_size().
    Also, improve trace accuracy by correctly tracing reported size.

    Cc: Pekka Enberg
    Cc: Matt Mackall
    Acked-by: Christoph Lameter
    Acked-by: David Rientjes
    Signed-off-by: Ezequiel Garcia
    Signed-off-by: Pekka Enberg

    Ezequiel Garcia
     
  • This field was being used to store the allocation size so it could be
    retrieved by ksize(). However, it is bad practice to leave a page
    unmarked as a slab page and then use its fields for special purposes.
    There is no need to store the allocated size:
    ksize() can simply return PAGE_SIZE << compound_order(page).

    Cc: Pekka Enberg
    Cc: Matt Mackall
    Acked-by: Christoph Lameter
    Signed-off-by: Ezequiel Garcia
    Signed-off-by: Pekka Enberg

    Ezequiel Garcia
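
The point of the change above is that the size of a page-sized SLOB kmalloc is fully recoverable from the compound page order, so nothing needs to be stashed in struct page. A tiny userspace sketch (the PAGE_SIZE value is an assumption for illustration):

```c
/* Stand-in for the kernel's PAGE_SIZE. */
#define SKETCH_PAGE_SIZE 4096UL

/* Mirrors ksize()'s PAGE_SIZE << compound_order(page) for
 * page-backed allocations: the order alone determines the size. */
static unsigned long ksize_from_order(unsigned int order)
{
    return SKETCH_PAGE_SIZE << order;
}
```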
     

03 Oct, 2012

2 commits


26 Sep, 2012

1 commit

  • On Sat, 8 Sep 2012, Ezequiel Garcia wrote:

    > @@ -454,15 +455,35 @@ void *__kmalloc_node(size_t size, gfp_t gfp, int node)
    > gfp |= __GFP_COMP;
    > ret = slob_new_pages(gfp, order, node);
    >
    > - trace_kmalloc_node(_RET_IP_, ret,
    > + trace_kmalloc_node(caller, ret,
    > size, PAGE_SIZE << order, gfp, node);
    > }
    >
    > kmemleak_alloc(ret, size, 1, gfp);
    > return ret;
    > }
    > +
    > +void *__kmalloc_node(size_t size, gfp_t gfp, int node)
    > +{
    > + return __do_kmalloc_node(size, gfp, node, _RET_IP_);
    > +}
    > EXPORT_SYMBOL(__kmalloc_node);
    >
    > +#ifdef CONFIG_TRACING
    > +void *__kmalloc_track_caller(size_t size, gfp_t gfp, unsigned long caller)
    > +{
    > + return __do_kmalloc_node(size, gfp, NUMA_NO_NODE, caller);
    > +}
    > +
    > +#ifdef CONFIG_NUMA
    > +void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
    > + int node, unsigned long caller)
    > +{
    > + return __do_kmalloc_node(size, gfp, node, caller);
    > +}
    > +#endif

    This breaks Pekka's slab/next tree with this:

    mm/slob.c: In function '__kmalloc_node_track_caller':
    mm/slob.c:488: error: 'gfp' undeclared (first use in this function)
    mm/slob.c:488: error: (Each undeclared identifier is reported only once
    mm/slob.c:488: error: for each function it appears in.)

    mm, slob: fix build breakage in __kmalloc_node_track_caller

    "mm, slob: Add support for kmalloc_track_caller()" breaks the build
    because gfp is undeclared. Fix it.

    Acked-by: Ezequiel Garcia
    Signed-off-by: David Rientjes
    Signed-off-by: Pekka Enberg

    David Rientjes
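
The breakage above is a simple name mismatch: the wrapper's parameter is gfpflags, but the body passed the undeclared gfp. A hedged userspace sketch of the corrected shape; __do_kmalloc_node here is a malloc-backed stand-in, not the kernel function:

```c
#include <stdlib.h>

/* Stand-in for the real allocation helper. */
static void *__do_kmalloc_node(size_t size, unsigned int gfp, int node,
                               unsigned long caller)
{
    (void)gfp; (void)node; (void)caller;
    return malloc(size);
}

/* The fix: pass gfpflags through, matching the parameter name. */
void *kmalloc_node_track_caller_sketch(size_t size, unsigned int gfpflags,
                                       int node, unsigned long caller)
{
    return __do_kmalloc_node(size, gfpflags, node, caller); /* was: gfp */
}
```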
     

25 Sep, 2012

2 commits


05 Sep, 2012

8 commits


12 Jul, 2012

1 commit

  • Commit fd3142a59af2012a7c5dc72ec97a4935ff1c5fc6 broke
    slob since a piece of a change for a later patch slipped into
    it.

    Fengguang Wu writes:

    The commit crashes the kernel w/o any dmesg output (the attached one is
    created by the script as a summary for that run). This is very
    reproducible in kvm for the attached config.

    Reported-by: Fengguang Wu
    Signed-off-by: Christoph Lameter
    Signed-off-by: Pekka Enberg

    Christoph Lameter
     

09 Jul, 2012

2 commits


14 Jun, 2012

4 commits

  • Define a struct that describes common fields used in all slab allocators.
    A slab allocator either uses the common definition (like SLOB) or is
    required to provide members of kmem_cache with the definition given.

    After that it will be possible to share code that
    only operates on those fields of kmem_cache.

    The patch basically takes the slob definition of kmem_cache and
    uses the field names for the other allocators.

    It also standardizes the names used for basic object lengths in
    allocators:

    object_size: The struct size specified at kmem_cache_create().
    Basically the payload expected to be used by the subsystem.

    size: The size of the memory allocated for each object. This size
    is larger than object_size and includes padding, alignment
    and extra metadata for each object (e.g. for debugging
    and rcu).

    Signed-off-by: Christoph Lameter
    Signed-off-by: Pekka Enberg

    Christoph Lameter
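
The object_size/size distinction above can be sketched as a struct plus the arithmetic that relates the two fields. This is an illustrative subset under assumed names, not the full kernel kmem_cache layout:

```c
#include <stddef.h>

/* Illustrative common fields: object_size is the payload the caller
 * asked for; size is the full per-object footprint. */
struct kmem_cache_sketch {
    unsigned int object_size;  /* size passed to kmem_cache_create() */
    unsigned int size;         /* object_size + metadata, aligned */
    unsigned int align;
    const char *name;
};

/* size = object_size plus metadata, rounded up to the alignment. */
static unsigned int cache_size(unsigned int object_size,
                               unsigned int metadata,
                               unsigned int align)
{
    unsigned int raw = object_size + metadata;
    return (raw + align - 1) / align * align;
}
```

This is also why kmem_cache_size() should report object_size, not size: the caller only ever sees the payload.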
     
  • Those have become so simple that they are no longer needed.

    Reviewed-by: Joonsoo Kim
    Acked-by: David Rientjes
    Signed-off-by: Christoph Lameter

    Signed-off-by: Pekka Enberg

    Christoph Lameter
     
  • Reviewed-by: Joonsoo Kim
    Acked-by: David Rientjes
    Signed-off-by: Christoph Lameter
    Signed-off-by: Pekka Enberg

    Christoph Lameter
     
  • Define the fields used by slob in mm_types.h and use struct page instead
    of struct slob_page in slob. This cleans up numerous typecasts in slob.c and
    makes readers aware of slob's use of page struct fields.

    [Also cleans up some bitrot in slob.c. The page struct field layout
    in slob.c is an old layout and does not match the one in mm_types.h]

    Reviewed-by: Glauber Costa
    Acked-by: David Rientjes
    Reviewed-by: Joonsoo Kim
    Signed-off-by: Christoph Lameter
    Signed-off-by: Pekka Enberg

    Christoph Lameter
     

31 Oct, 2011

1 commit


27 Jul, 2011

1 commit

  • This allows us to move duplicated code in <asm/atomic.h>
    (atomic_inc_not_zero() for now) to <linux/atomic.h>.

    Signed-off-by: Arun Sharma
    Reviewed-by: Eric Dumazet
    Cc: Ingo Molnar
    Cc: David Miller
    Cc: Eric Dumazet
    Acked-by: Mike Frysinger
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arun Sharma
     

08 Jun, 2011

1 commit

  • Doing a ktest.pl randconfig, I stumbled across the following bug
    on boot up:

    ------------[ cut here ]------------
    WARNING: at /home/rostedt/work/autotest/nobackup/linux-test.git/kernel/lockdep.c:2649 lockdep_trace_alloc+0xed/0x100()
    Hardware name:
    Modules linked in:
    Pid: 0, comm: swapper Not tainted 3.0.0-rc1-test-00054-g1d68b67 #1
    Call Trace:
    [] warn_slowpath_common+0xad/0xf0
    [] warn_slowpath_null+0x1a/0x20
    [] lockdep_trace_alloc+0xed/0x100
    [] __kmalloc_node+0x30/0x2f0
    [] pcpu_mem_alloc+0x13a/0x180
    [] percpu_init_late+0x48/0xc2
    [] ? mem_init+0xd8/0xe3
    [] start_kernel+0x1c2/0x449
    [] x86_64_start_reservations+0x163/0x167
    [] x86_64_start_kernel+0x133/0x142
    ---[ end trace a7919e7f17c0a725 ]---

    Then I ran a ktest.pl config_bisect and it came up with this config
    as the problem:

    CONFIG_SLOB

    Looking at what is different between SLOB and SLAB and SLUB, I found
    that the gfp flags are masked against gfp_allowed_mask in
    SLAB and SLUB, but not SLOB.

    On boot up, interrupts are disabled and lockdep will warn if some flags
    are set in gfp while interrupts are disabled. But these flags are masked
    off with the gfp_allowed_mask during boot. Because SLOB does not
    mask the flags against gfp_allowed_mask, it triggers the warning.

    Adding this mask fixes the bug. I also found that kmem_cache_alloc_node()
    was missing both the mask and the lockdep check, and that was added too.

    Acked-by: Matt Mackall
    Cc: Paul Mundt
    Cc: Nick Piggin
    Signed-off-by: Steven Rostedt
    Signed-off-by: Pekka Enberg

    Steven Rostedt
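
The masking that SLAB and SLUB already perform, and that the patch above adds to SLOB, can be sketched in a few lines. The flag values and names below are illustrative assumptions, not the kernel's gfp definitions:

```c
/* Illustrative flag bits. */
#define SKETCH_GFP_WAIT 0x10u   /* "may sleep" */
#define SKETCH_GFP_IO   0x40u

/* During early boot the global mask clears flags that are unsafe
 * while interrupts are disabled. */
static unsigned int sketch_gfp_allowed_mask = ~SKETCH_GFP_WAIT;

/* The fix: filter the caller's flags through the allowed mask
 * before allocating. */
static unsigned int mask_gfp(unsigned int gfp)
{
    return gfp & sketch_gfp_allowed_mask;
}
```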
     

24 Jan, 2011

1 commit


07 Jan, 2011

1 commit


02 Oct, 2010

1 commit

  • kmalloc_node() may allocate higher order slob pages, but the __GFP_COMP
    bit is only passed to the page allocator and not represented in the
    tracepoint event. The bit should be passed to trace_kmalloc_node() as
    well.

    Acked-by: Matt Mackall
    Reviewed-by: Christoph Lameter
    Signed-off-by: David Rientjes
    Signed-off-by: Pekka Enberg

    David Rientjes
     

07 Aug, 2010

2 commits

  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6:
    slub: Allow removal of slab caches during boot
    Revert "slub: Allow removal of slab caches during boot"
    slub numa: Fix rare allocation from unexpected node
    slab: use deferable timers for its periodic housekeeping
    slub: Use kmem_cache flags to detect if slab is in debugging mode.
    slub: Allow removal of slab caches during boot
    slub: Check kasprintf results in kmem_cache_init()
    SLUB: Constants need UL
    slub: Use a constant for a unspecified node.
    SLOB: Free objects to their own list
    slab: fix caller tracking on !CONFIG_DEBUG_SLAB && CONFIG_TRACING

    Linus Torvalds
     
  • …git/tip/linux-2.6-tip

    * 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (162 commits)
    tracing/kprobes: unregister_trace_probe needs to be called under mutex
    perf: expose event__process function
    perf events: Fix mmap offset determination
    perf, powerpc: fsl_emb: Restore setting perf_sample_data.period
    perf, powerpc: Convert the FSL driver to use local64_t
    perf tools: Don't keep unreferenced maps when unmaps are detected
    perf session: Invalidate last_match when removing threads from rb_tree
    perf session: Free the ref_reloc_sym memory at the right place
    x86,mmiotrace: Add support for tracing STOS instruction
    perf, sched migration: Librarize task states and event headers helpers
    perf, sched migration: Librarize the GUI class
    perf, sched migration: Make the GUI class client agnostic
    perf, sched migration: Make it vertically scrollable
    perf, sched migration: Parameterize cpu height and spacing
    perf, sched migration: Fix key bindings
    perf, sched migration: Ignore unhandled task states
    perf, sched migration: Handle ignored migrate out events
    perf: New migration tool overview
    tracing: Drop cpparg() macro
    perf: Use tracepoint_synchronize_unregister() to flush any pending tracepoint call
    ...

    Fix up trivial conflicts in Makefile and drivers/cpufreq/cpufreq.c

    Linus Torvalds
     

16 Jul, 2010

1 commit

  • SLOB allocates smaller objects from their own lists to reduce overall
    external fragmentation and increase repeatability; free them back to
    their own lists as well.

    This is /proc/meminfo result in my test machine:

    without this patch:
    ===
    MemTotal: 1030720 kB
    MemFree: 750012 kB
    Buffers: 15496 kB
    Cached: 160396 kB
    SwapCached: 0 kB
    Active: 105024 kB
    Inactive: 145604 kB
    Active(anon): 74816 kB
    Inactive(anon): 2180 kB
    Active(file): 30208 kB
    Inactive(file): 143424 kB
    Unevictable: 16 kB
    ....

    with this patch:
    ===
    MemTotal: 1030720 kB
    MemFree: 751908 kB
    Buffers: 15492 kB
    Cached: 160280 kB
    SwapCached: 0 kB
    Active: 102720 kB
    Inactive: 146140 kB
    Active(anon): 73168 kB
    Inactive(anon): 2180 kB
    Active(file): 29552 kB
    Inactive(file): 143960 kB
    Unevictable: 16 kB
    ...

    The result shows an improvement of 1 MB!

    And when I tested it on an embedded system with 64 MB, I found this path
    is never called during kernel bootup.

    Acked-by: Matt Mackall
    Signed-off-by: Bob Liu
    Signed-off-by: Pekka Enberg

    Bob Liu
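
SLOB's per-size free lists (mm/slob.c breaks at 256 and 1024 bytes) make the patch above easy to picture: allocation already picks a list by size class, and freeing now returns the object to the same class. A minimal sketch of the selection:

```c
#include <stddef.h>

/* Three size classes, matching SLOB's small/medium/large partial
 * lists with breaks at 256 and 1024 bytes. */
enum slob_list { SLOB_SMALL, SLOB_MEDIUM, SLOB_LARGE };

static enum slob_list slob_list_for(size_t size)
{
    if (size < 256)
        return SLOB_SMALL;
    if (size < 1024)
        return SLOB_MEDIUM;
    return SLOB_LARGE;
}
```

Keeping frees in-class is what prevents small fragments from polluting the lists used for larger requests.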
     

15 Jun, 2010

1 commit

  • Remove all rcu head inits. We don't care about the RCU head state before passing
    it to call_rcu() anyway. Only leave the "on_stack" variants so debugobjects can
    keep track of objects on stack.

    Signed-off-by: Alexey Dobriyan
    Signed-off-by: Mathieu Desnoyers
    Signed-off-by: Paul E. McKenney
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: Matt Mackall
    Cc: Andrew Morton

    Paul E. McKenney
     

09 Jun, 2010

1 commit

  • We have been resisting new ftrace plugins and removing existing
    ones, and kmemtrace has been superseded by kmem trace events
    and perf-kmem, so we remove it.

    Signed-off-by: Li Zefan
    Acked-by: Pekka Enberg
    Acked-by: Eduard - Gabriel Munteanu
    Cc: Ingo Molnar
    Cc: Steven Rostedt
    [ remove kmemtrace from the makefile, handle slob too ]
    Signed-off-by: Frederic Weisbecker

    Li Zefan
     

20 May, 2010

1 commit