24 Feb, 2013
1 commit
-
The function names page_xchg_last_nid(), page_last_nid() and
reset_page_last_nid() were judged to be inconsistent so rename them to a
struct_field_op style pattern. As it looked jarring to have
reset_page_mapcount() and page_nid_reset_last() beside each other in
memmap_init_zone(), this patch also renames reset_page_mapcount() to
page_mapcount_reset(). There are others like init_page_count() but as
it is used throughout the arch code a rename would likely cause more
conflicts than it is worth.
[akpm@linux-foundation.org: fix zcache]
Signed-off-by: Mel Gorman
Suggested-by: Andrew Morton
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
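The struct_field_op naming pattern above can be illustrated with a minimal sketch; the simplified struct and helper bodies are illustrative stand-ins, not the kernel's definitions (the real _mapcount is an atomic_t):

```c
#include <assert.h>

/* Sketch of the struct_field_op pattern: struct (page) + field
 * (mapcount) + operation (reset). Simplified, non-kernel types. */
struct page { int _mapcount; };

static void page_mapcount_reset(struct page *page)
{
        page->_mapcount = -1;   /* -1 encodes "no mappings" */
}

static int page_mapcount(const struct page *page)
{
        return page->_mapcount + 1;
}
```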
19 Dec, 2012
1 commit
-
struct page already has this information. If we start chaining caches,
this information will always be more trustworthy than whatever is passed
into the function.
Signed-off-by: Glauber Costa
Cc: Christoph Lameter
Cc: David Rientjes
Cc: Frederic Weisbecker
Cc: Greg Thelen
Cc: Johannes Weiner
Cc: JoonSoo Kim
Cc: KAMEZAWA Hiroyuki
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Pekka Enberg
Cc: Rik van Riel
Cc: Suleiman Souhlal
Cc: Tejun Heo
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
11 Dec, 2012
1 commit
-
Extract the code to do object alignment from the allocators.
Do the alignment calculations in slab_common so that the
__kmem_cache_create functions of the allocators do not have
to deal with alignment.
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
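The alignment logic moved into slab_common can be sketched roughly as follows; the cache-line constant and function name are assumptions for illustration, not the kernel's exact code:

```c
#include <assert.h>
#include <stddef.h>

#define SLAB_HWCACHE_ALIGN 0x2000UL
#define CACHE_LINE_SIZE 64   /* assumption for this sketch */

/* Honour hardware cache alignment for large objects, halve the
 * alignment while the object still fits twice into it, and never
 * fall below pointer alignment. */
static size_t calc_alignment(unsigned long flags, size_t align, size_t size)
{
        if (flags & SLAB_HWCACHE_ALIGN) {
                size_t ralign = CACHE_LINE_SIZE;
                while (size <= ralign / 2)
                        ralign /= 2;
                if (ralign > align)
                        align = ralign;
        }
        if (align < sizeof(void *))
                align = sizeof(void *);
        return align;
}
```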
31 Oct, 2012
5 commits
-
The definition of ARCH_SLAB_MINALIGN is architecture dependent
and can be either of type size_t or int. Comparing that value
with ARCH_KMALLOC_MINALIGN can cause harmless warnings on
platforms where they are different. Since both are always
small positive integer numbers, using the size_t type to compare
them is safe and gets rid of the warning.
Without this patch, building ARM collie_defconfig results in:
mm/slob.c: In function '__kmalloc_node':
mm/slob.c:431:152: warning: comparison of distinct pointer types lacks a cast [enabled by default]
mm/slob.c: In function 'kfree':
mm/slob.c:484:153: warning: comparison of distinct pointer types lacks a cast [enabled by default]
mm/slob.c: In function 'ksize':
mm/slob.c:503:153: warning: comparison of distinct pointer types lacks a cast [enabled by default]
Acked-by: Christoph Lameter
Signed-off-by: Arnd Bergmann
[ penberg@kernel.org: updates for master ]
Signed-off-by: Pekka Enberg
-
When freeing objects, the slob allocator currently frees empty pages
by calling __free_pages(). However, page-size kmallocs are disposed of
using put_page() instead.
It makes no sense to call put_page() for kernel pages that are provided
by the object allocator, so we shouldn't be doing this ourselves.
This is based on:
commit d9b7f22623b5fa9cc189581dcdfb2ac605933bf4
Author: Glauber Costa
slub: use free_page instead of put_page for freeing kmalloc allocation
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: Matt Mackall
Acked-by: Glauber Costa
Signed-off-by: Ezequiel Garcia
Signed-off-by: Pekka Enberg
-
This function is identically defined in all three allocators
and it's trivial to move it to slab.h.
Since it is now a static, inline, header-defined function,
this patch also drops the EXPORT_SYMBOL tag.
Cc: Pekka Enberg
Cc: Matt Mackall
Acked-by: Christoph Lameter
Signed-off-by: Ezequiel Garcia
Signed-off-by: Pekka Enberg
-
Fields object_size and size are not the same: the latter might include
slab metadata. Return object_size field in kmem_cache_size().
Also, improve trace accuracy by correctly tracing the reported size.
Cc: Pekka Enberg
Cc: Matt Mackall
Acked-by: Christoph Lameter
Acked-by: David Rientjes
Signed-off-by: Ezequiel Garcia
Signed-off-by: Pekka Enberg
-
This field was being used to store the allocation size so it could be
retrieved by ksize(). However, it is bad practice to not mark a page
as a slab page and then use its fields for special purposes.
There is no need to store the allocated size:
ksize() can simply return PAGE_SIZE << compound_order(page).
Cc: Pekka Enberg
Cc: Matt Mackall
Acked-by: Christoph Lameter
Signed-off-by: Ezequiel Garcia
Signed-off-by: Pekka Enberg
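The ksize() idea from the last commit above can be sketched as follows, assuming a 4 KB page size; page_ksize() is a hypothetical stand-alone helper, not the kernel function:

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL   /* assumption for this sketch */

/* For a page-backed allocation the usable size is fully determined
 * by the compound page order, so nothing needs to be stored in the
 * page struct. */
static size_t page_ksize(unsigned int compound_order)
{
        return PAGE_SIZE << compound_order;
}
```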
03 Oct, 2012
2 commits
-
Fix up a trivial conflict with NUMA_NO_NODE cleanups.
Conflicts:
mm/slob.c
Signed-off-by: Pekka Enberg
26 Sep, 2012
1 commit
-
On Sat, 8 Sep 2012, Ezequiel Garcia wrote:
> @@ -454,15 +455,35 @@ void *__kmalloc_node(size_t size, gfp_t gfp, int node)
> gfp |= __GFP_COMP;
> ret = slob_new_pages(gfp, order, node);
>
> - trace_kmalloc_node(_RET_IP_, ret,
> + trace_kmalloc_node(caller, ret,
> size, PAGE_SIZE << order, gfp, node);
> }
>
> kmemleak_alloc(ret, size, 1, gfp);
> return ret;
> }
> +
> +void *__kmalloc_node(size_t size, gfp_t gfp, int node)
> +{
> + return __do_kmalloc_node(size, gfp, node, _RET_IP_);
> +}
> EXPORT_SYMBOL(__kmalloc_node);
>
> +#ifdef CONFIG_TRACING
> +void *__kmalloc_track_caller(size_t size, gfp_t gfp, unsigned long caller)
> +{
> + return __do_kmalloc_node(size, gfp, NUMA_NO_NODE, caller);
> +}
> +
> +#ifdef CONFIG_NUMA
> +void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
> + int node, unsigned long caller)
> +{
> + return __do_kmalloc_node(size, gfp, node, caller);
> +}
> +#endif
This breaks Pekka's slab/next tree with this:
mm/slob.c: In function '__kmalloc_node_track_caller':
mm/slob.c:488: error: 'gfp' undeclared (first use in this function)
mm/slob.c:488: error: (Each undeclared identifier is reported only once
mm/slob.c:488: error: for each function it appears in.)
mm, slob: fix build breakage in __kmalloc_node_track_caller
"mm, slob: Add support for kmalloc_track_caller()" breaks the build
because gfp is undeclared. Fix it.
Acked-by: Ezequiel Garcia
Signed-off-by: David Rientjes
Signed-off-by: Pekka Enberg
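The breakage and its fix can be seen in a reduced sketch: the wrapper's parameter is gfpflags, so the body must forward that name rather than the undeclared gfp. The stand-in __do_kmalloc_node() below only records its arguments and does not match the real kernel signature:

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned gfp_t;

/* Stand-in that records the flags it was handed. */
static gfp_t last_gfp;
static void *__do_kmalloc_node(size_t size, gfp_t gfp, int node,
                               unsigned long caller)
{
        (void)size; (void)node; (void)caller;
        last_gfp = gfp;
        return (void *)&last_gfp;
}

/* The fixed wrapper forwards its own parameter; the build broke
 * because the body referenced `gfp` while the parameter was
 * named `gfpflags`. */
static void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
                                         int node, unsigned long caller)
{
        return __do_kmalloc_node(size, gfpflags, node, caller);
}
```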
25 Sep, 2012
2 commits
-
Currently slob falls back to regular kmalloc for this case.
With this patch kmalloc_track_caller() is correctly implemented,
thus tracing the specified caller.
This is important to accurately trace allocations performed by
krealloc, kstrdup, kmemdup, etc.
Signed-off-by: Ezequiel Garcia
Signed-off-by: Pekka Enberg
-
Acked-by: David Rientjes
Signed-off-by: Ezequiel Garcia
Signed-off-by: Pekka Enberg
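A rough sketch of the caller-tracking pattern the first commit above implements, using the GCC/Clang __builtin_return_address() builtin in place of the kernel's _RET_IP_; function names are illustrative:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* The public entry point records its own return address and passes
 * it down, so tracing attributes the allocation to the real caller
 * (krealloc, kstrdup, ...), not to the kmalloc wrapper itself. */
static unsigned long traced_caller;

static void *do_alloc(size_t size, unsigned long caller)
{
        traced_caller = caller;   /* would feed trace_kmalloc_node() */
        return malloc(size);
}

static void *kmalloc_track_caller_sketch(size_t size)
{
        return do_alloc(size, (unsigned long)__builtin_return_address(0));
}
```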
05 Sep, 2012
8 commits
-
Get rid of the refcount stuff in the allocators and do that part of
kmem_cache management in the common code.
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
Do the initial settings of the fields in common code. This will allow us
to push more processing into common code later and improve readability.
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
Shift the allocations to common code. That way the allocation and
freeing of the kmem_cache structures is handled by common code.
Reviewed-by: Glauber Costa
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
What is done there can be done in __kmem_cache_shutdown.
This affects RCU handling somewhat. On RCU free, none of the slab allocators
refer to management structures other than the kmem_cache structure itself.
Therefore these other structures can be freed before the RCU-deferred
free to the page allocator occurs.
Reviewed-by: Joonsoo Kim
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
The freeing action is basically the same in all slab allocators.
Move it to the common kmem_cache_destroy() function.
Reviewed-by: Glauber Costa
Reviewed-by: Joonsoo Kim
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
Make all allocators use the "kmem_cache" slabname for the "kmem_cache"
structure.
Reviewed-by: Glauber Costa
Reviewed-by: Joonsoo Kim
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
kmem_cache_destroy() does basically the same thing in all allocators.
Extract the common code, which is easy since we already have common mutex
handling.
Reviewed-by: Glauber Costa
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
Move the code to append the new kmem_cache to the list of slab caches to
the kmem_cache_create code in the shared code.
This is possible now since the acquisition of the mutex was moved into
kmem_cache_create().
Acked-by: David Rientjes
Reviewed-by: Glauber Costa
Reviewed-by: Joonsoo Kim
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
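The list-append-under-mutex idea from the last commit above can be sketched with userspace stand-ins for slab_mutex and the slab_caches list (the kernel uses list_head rather than a hand-rolled singly linked list):

```c
#include <assert.h>
#include <pthread.h>

/* Append a new cache to a shared list while holding the mutex that
 * the common kmem_cache_create() path already acquires. Simplified. */
struct cache { const char *name; struct cache *next; };

static pthread_mutex_t slab_mutex = PTHREAD_MUTEX_INITIALIZER;
static struct cache *cache_list;

static void cache_list_add(struct cache *c)
{
        pthread_mutex_lock(&slab_mutex);
        c->next = cache_list;
        cache_list = c;
        pthread_mutex_unlock(&slab_mutex);
}
```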
12 Jul, 2012
1 commit
-
Commit fd3142a59af2012a7c5dc72ec97a4935ff1c5fc6 broke
slob since a piece of a change for a later patch slipped into it.
Fengguang Wu writes:
The commit crashes the kernel w/o any dmesg output (the attached one is
created by the script as a summary for that run). This is very
reproducible in kvm for the attached config.
Reported-by: Fengguang Wu
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
09 Jul, 2012
2 commits
-
All allocators have some sort of support for the bootstrap status.
Setup a common definition for the boot states and make all slab
allocators use that definition.
Reviewed-by: Glauber Costa
Reviewed-by: Joonsoo Kim
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
kmem_cache_create() does a variety of sanity checks, but those
vary depending on the allocator. Use the strictest tests and put them into
a slab_common file. Make the tests conditional on CONFIG_DEBUG_VM.
This patch has the effect of adding sanity checks for SLUB and SLOB
under CONFIG_DEBUG_VM and removes the checks in SLAB for !CONFIG_DEBUG_VM.
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
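A minimal sketch of the kind of debug-only sanity checking described above, with illustrative limits rather than the kernel's exact tests:

```c
#include <assert.h>
#include <stddef.h>

#define KMALLOC_MAX_SIZE (1UL << 25)   /* assumption for this sketch */

/* Reject an empty name or an out-of-range size before creating a
 * cache; returns 0 on success, -1 on a failed check. */
static int cache_create_checks(const char *name, size_t size)
{
        if (!name || !*name)
                return -1;
        if (size < sizeof(void *) || size > KMALLOC_MAX_SIZE)
                return -1;
        return 0;
}
```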
14 Jun, 2012
4 commits
-
Define a struct that describes common fields used in all slab allocators.
A slab allocator either uses the common definition (like SLOB) or is
required to provide members of kmem_cache with the definition given.
After that it will be possible to share code that only operates on those
fields of kmem_cache.
The patch basically takes the slob definition of kmem_cache and uses its
field names for the other allocators.
It also standardizes the names used for basic object lengths in allocators:
object_size	The struct size specified at kmem_cache_create(). Basically
		the payload expected to be used by the subsystem.
size		The size of memory allocated for each object. This size
		is larger than object_size and includes padding, alignment
		and extra metadata for each object (e.g. for debugging
		and RCU).
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
Those have become so simple that they are no longer needed.
Reviewed-by: Joonsoo Kim
Acked-by: David Rientjes
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
Reviewed-by: Joonsoo Kim
Acked-by: David Rientjes
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
Define the fields used by slob in mm_types.h and use struct page instead
of struct slob_page in slob. This cleans up numerous typecasts in slob.c and
makes readers aware of slob's use of page struct fields.
[Also cleans up some bitrot in slob.c. The page struct field layout
in slob.c is an old layout and does not match the one in mm_types.h]
Reviewed-by: Glauber Costa
Acked-by: David Rientjes
Reviewed-by: Joonsoo Kim
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
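The object_size/size convention from the first commit above can be sketched with a simplified cache descriptor; the field set and the rounding rule here are illustrative, not the kernel's struct:

```c
#include <assert.h>
#include <stddef.h>

/* object_size is the payload requested at kmem_cache_create();
 * size is what is actually allocated per object once padding,
 * alignment and metadata are added. */
struct kmem_cache_sketch {
        unsigned int object_size;  /* payload size */
        unsigned int align;
        unsigned int size;         /* object_size + padding/metadata */
        const char *name;
};

static void cache_init(struct kmem_cache_sketch *c, const char *name,
                       unsigned int object_size, unsigned int align)
{
        c->name = name;
        c->object_size = object_size;
        c->align = align;
        /* round the payload up to the alignment */
        c->size = (object_size + align - 1) & ~(align - 1);
}
```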
31 Oct, 2011
1 commit
-
The files changed within are only using the EXPORT_SYMBOL
macro variants. They are not using core modular infrastructure
and hence don't need module.h but only the export.h header.
Signed-off-by: Paul Gortmaker
27 Jul, 2011
1 commit
-
This allows us to move duplicated code in <asm/atomic.h>
(atomic_inc_not_zero() for now) to <linux/atomic.h>.
Signed-off-by: Arun Sharma
Reviewed-by: Eric Dumazet
Cc: Ingo Molnar
Cc: David Miller
Cc: Eric Dumazet
Acked-by: Mike Frysinger
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
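atomic_inc_not_zero() semantics can be sketched with C11 atomics; this mirrors the helper's contract (increment only if nonzero, return nonzero on success), not the kernel source:

```c
#include <assert.h>
#include <stdatomic.h>

/* Increment a refcount only if it is not already zero; a zero count
 * means the object is being torn down and must not be revived. */
static int atomic_inc_not_zero_sketch(atomic_int *v)
{
        int old = atomic_load(v);
        while (old != 0) {
                /* on failure, `old` is reloaded with the current value */
                if (atomic_compare_exchange_weak(v, &old, old + 1))
                        return 1;
        }
        return 0;
}
```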
08 Jun, 2011
1 commit
-
Doing a ktest.pl randconfig, I stumbled across the following bug
on boot up:------------[ cut here ]------------
WARNING: at /home/rostedt/work/autotest/nobackup/linux-test.git/kernel/lockdep.c:2649 lockdep_trace_alloc+0xed/0x100()
Hardware name:
Modules linked in:
Pid: 0, comm: swapper Not tainted 3.0.0-rc1-test-00054-g1d68b67 #1
Call Trace:
[] warn_slowpath_common+0xad/0xf0
[] warn_slowpath_null+0x1a/0x20
[] lockdep_trace_alloc+0xed/0x100
[] __kmalloc_node+0x30/0x2f0
[] pcpu_mem_alloc+0x13a/0x180
[] percpu_init_late+0x48/0xc2
[] ? mem_init+0xd8/0xe3
[] start_kernel+0x1c2/0x449
[] x86_64_start_reservations+0x163/0x167
[] x86_64_start_kernel+0x133/0x142
---[ end trace a7919e7f17c0a725 ]---
Then I ran a ktest.pl config_bisect and it came up with this config
as the problem:
CONFIG_SLOB
Looking at what is different between SLOB and SLAB and SLUB, I found
that the gfp flags are masked against gfp_allowed_mask in
SLAB and SLUB, but not SLOB.
On boot up, interrupts are disabled and lockdep will warn if some flags
are set in gfp and interrupts are disabled. But these flags are masked
off with the gfp_allowed_mask during boot. Because SLOB does not
mask the flags against gfp_allowed_mask, it triggers the warning.
Adding this mask fixes the bug. I also found that kmem_cache_alloc_node()
was missing both the mask and the lockdep check, and that was added too.
Acked-by: Matt Mackall
Cc: Paul Mundt
Cc: Nick Piggin
Signed-off-by: Steven Rostedt
Signed-off-by: Pekka Enberg
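The masking the patch adds can be sketched as a single AND against gfp_allowed_mask; the flag values and boot-time mask below are illustrative, not the kernel's definitions:

```c
#include <assert.h>

typedef unsigned gfp_t;
#define __GFP_WAIT 0x10u   /* illustrative flag values */
#define __GFP_IO   0x40u

/* During early boot, flags that are not yet safe (e.g. ones that
 * allow sleeping) are cleared via gfp_allowed_mask; the fix applies
 * the same masking in SLOB's allocation entry points. */
static gfp_t gfp_allowed_mask = (gfp_t)~(__GFP_WAIT | __GFP_IO);

static gfp_t mask_gfp(gfp_t gfp)
{
        return gfp & gfp_allowed_mask;
}
```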
24 Jan, 2011
1 commit
-
The last user was ext4 and Eric Sandeen removed the call in a recent patch. See
the following URL for the discussion:
http://marc.info/?l=linux-ext4&m=129546975702198&w=2
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
07 Jan, 2011
1 commit
-
This is a nasty and error prone API. It is no longer used, remove it.
Signed-off-by: Nick Piggin
02 Oct, 2010
1 commit
-
kmalloc_node() may allocate higher order slob pages, but the __GFP_COMP
bit is only passed to the page allocator and not represented in the
tracepoint event. The bit should be passed to trace_kmalloc_node() as
well.
Acked-by: Matt Mackall
Reviewed-by: Christoph Lameter
Signed-off-by: David Rientjes
Signed-off-by: Pekka Enberg
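A reduced sketch of the fix: once __GFP_COMP has been ORed in for the page allocator, the same updated flags must go to the trace call. The stub and flag value are stand-ins, not the real tracepoint API:

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned gfp_t;
#define __GFP_COMP 0x4000u   /* illustrative value */

static gfp_t traced_flags;
static void trace_kmalloc_node_stub(gfp_t gfp) { traced_flags = gfp; }

/* Higher-order slob pages need __GFP_COMP; the bug was reporting the
 * pre-update flags to the trace event. */
static void alloc_large(size_t size, gfp_t gfp, int order)
{
        (void)size; (void)order;
        gfp |= __GFP_COMP;              /* passed to the page allocator */
        trace_kmalloc_node_stub(gfp);   /* ...and now to the trace event */
}
```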
07 Aug, 2010
2 commits
-
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6:
slub: Allow removal of slab caches during boot
Revert "slub: Allow removal of slab caches during boot"
slub numa: Fix rare allocation from unexpected node
slab: use deferable timers for its periodic housekeeping
slub: Use kmem_cache flags to detect if slab is in debugging mode.
slub: Allow removal of slab caches during boot
slub: Check kasprintf results in kmem_cache_init()
SLUB: Constants need UL
slub: Use a constant for a unspecified node.
SLOB: Free objects to their own list
slab: fix caller tracking on !CONFIG_DEBUG_SLAB && CONFIG_TRACING
-
…git/tip/linux-2.6-tip
* 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (162 commits)
tracing/kprobes: unregister_trace_probe needs to be called under mutex
perf: expose event__process function
perf events: Fix mmap offset determination
perf, powerpc: fsl_emb: Restore setting perf_sample_data.period
perf, powerpc: Convert the FSL driver to use local64_t
perf tools: Don't keep unreferenced maps when unmaps are detected
perf session: Invalidate last_match when removing threads from rb_tree
perf session: Free the ref_reloc_sym memory at the right place
x86,mmiotrace: Add support for tracing STOS instruction
perf, sched migration: Librarize task states and event headers helpers
perf, sched migration: Librarize the GUI class
perf, sched migration: Make the GUI class client agnostic
perf, sched migration: Make it vertically scrollable
perf, sched migration: Parameterize cpu height and spacing
perf, sched migration: Fix key bindings
perf, sched migration: Ignore unhandled task states
perf, sched migration: Handle ignored migrate out events
perf: New migration tool overview
tracing: Drop cpparg() macro
perf: Use tracepoint_synchronize_unregister() to flush any pending tracepoint call
...
Fix up trivial conflicts in Makefile and drivers/cpufreq/cpufreq.c
16 Jul, 2010
1 commit
-
SLOB already allocates smaller objects from their own list to reduce overall
external fragmentation and increase repeatability; free them to their own
list also.
This is the /proc/meminfo result on my test machine:
without this patch:
===
MemTotal: 1030720 kB
MemFree: 750012 kB
Buffers: 15496 kB
Cached: 160396 kB
SwapCached: 0 kB
Active: 105024 kB
Inactive: 145604 kB
Active(anon): 74816 kB
Inactive(anon): 2180 kB
Active(file): 30208 kB
Inactive(file): 143424 kB
Unevictable: 16 kB
...
with this patch:
===
MemTotal: 1030720 kB
MemFree: 751908 kB
Buffers: 15492 kB
Cached: 160280 kB
SwapCached: 0 kB
Active: 102720 kB
Inactive: 146140 kB
Active(anon): 73168 kB
Inactive(anon): 2180 kB
Active(file): 29552 kB
Inactive(file): 143960 kB
Unevictable: 16 kB
...
The result shows an improvement of 1 MB!
And when I tested it on an embedded system with 64 MB, I found this path is
never called during kernel bootup.
Acked-by: Matt Mackall
Signed-off-by: Bob Liu
Signed-off-by: Pekka Enberg
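SLOB's size-class breakpoints (256 and 1024 bytes) make the list choice easy to sketch; the enum and helper names are illustrative, but the breakpoints match SLOB's SLOB_BREAK1/SLOB_BREAK2 scheme:

```c
#include <assert.h>
#include <stddef.h>

/* Small, medium and large partial pages live on separate lists; with
 * the patch, objects are also freed back to the list matching their
 * size, rather than to whichever list they happened to come from. */
enum slob_list { SLOB_SMALL, SLOB_MEDIUM, SLOB_LARGE };

static enum slob_list slob_list_for(size_t size)
{
        if (size < 256)
                return SLOB_SMALL;
        if (size < 1024)
                return SLOB_MEDIUM;
        return SLOB_LARGE;
}
```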
15 Jun, 2010
1 commit
-
Remove all rcu head inits. We don't care about the RCU head state before passing
it to call_rcu() anyway. Only leave the "on_stack" variants so debugobjects can
keep track of objects on stack.
Signed-off-by: Alexey Dobriyan
Signed-off-by: Mathieu Desnoyers
Signed-off-by: Paul E. McKenney
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: Matt Mackall
Cc: Andrew Morton
09 Jun, 2010
1 commit
-
We have been resisting new ftrace plugins and removing existing
ones, and kmemtrace has been superseded by kmem trace events
and perf-kmem, so we remove it.
Signed-off-by: Li Zefan
Acked-by: Pekka Enberg
Acked-by: Eduard - Gabriel Munteanu
Cc: Ingo Molnar
Cc: Steven Rostedt
[ remove kmemtrace from the makefile, handle slob too ]
Signed-off-by: Frederic Weisbecker
20 May, 2010
1 commit
-
Acked-by: Herbert Xu
Signed-off-by: David Woodhouse
Signed-off-by: Pekka Enberg