13 Jan, 2012
2 commits
-
Move CMPXCHG_DOUBLE and rename it to HAVE_CMPXCHG_DOUBLE so architectures
can simply select the option if it is supported.
Signed-off-by: Heiko Carstens
Acked-by: Christoph Lameter
Cc: Pekka Enberg
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: "H. Peter Anvin"
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
While implementing cmpxchg_double() on s390 I realized that we don't set
CONFIG_CMPXCHG_LOCAL despite the fact that we have support for it.
However, setting that option will increase the size of struct page by
eight bytes on 64 bit, which we certainly do not want. Also, it doesn't
make sense that a present cpu feature should increase the size of struct
page.
Besides that, it looks like the dependency on CMPXCHG_LOCAL is wrong and
that it should depend on CMPXCHG_DOUBLE instead.
This patch:
If an architecture supports CMPXCHG_LOCAL, this shouldn't automatically
result in larger struct pages if the SLUB allocator is used.
Instead, introduce a new config option "HAVE_ALIGNED_STRUCT_PAGE" which
can be selected if a double word aligned struct page is required. Also
update the x86 Kconfig so that it should work as before.
Signed-off-by: Heiko Carstens
Acked-by: Christoph Lameter
Cc: Pekka Enberg
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: "H. Peter Anvin"
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
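As an illustration (a sketch, not the patch's code; struct mini_page and
mini_cmpxchg_double() are invented names), this user-space miniature shows
why SLUB wants a double word aligned struct page: on x86-64, lock cmpxchg16b
faults unless its 16-byte operand is 16-byte aligned, which is what
HAVE_ALIGNED_STRUCT_PAGE arranges for the freelist/counters pair.

#include <stdalign.h>
#include <stdbool.h>
#include <stdint.h>

/* Invented miniature of SLUB's layout: a freelist pointer and packed
 * counters side by side in one 16-byte-aligned unit, so both can be
 * swapped by a single atomic 16-byte compare-and-exchange
 * (compile with gcc -mcx16). */
struct mini_page {
    alignas(16) void *freelist;    /* first free object in the slab */
    unsigned long counters;        /* inuse/objects/frozen, packed */
};

/* Atomically replace the (freelist, counters) pair; little-endian
 * layout assumed, freelist in the low 64 bits. */
static bool mini_cmpxchg_double(struct mini_page *p,
                                void *old_f, unsigned long old_c,
                                void *new_f, unsigned long new_c)
{
    __uint128_t old = ((__uint128_t)old_c << 64) | (uintptr_t)old_f;
    __uint128_t new = ((__uint128_t)new_c << 64) | (uintptr_t)new_f;

    return __atomic_compare_exchange_n((__uint128_t *)p, &old, new,
                                       false, __ATOMIC_SEQ_CST,
                                       __ATOMIC_SEQ_CST);
}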
12 Jan, 2012
2 commits
-
* 'slab/for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/linux:
slub: disallow changing cpu_partial from userspace for debug caches
slub: add missed accounting
slub: Extract get_freelist from __slab_alloc
slub: Switch per cpu partial page support off for debugging
slub: fix a possible memleak in __slab_alloc()
slub: fix slub_max_order Documentation
slub: add missed accounting
slab: add taint flag outputting to debug paths.
slub: add taint flag outputting to debug paths
slab: introduce slab_max_order kernel parameter
slab: rename slab_break_gfp_order to slab_max_order
11 Jan, 2012
2 commits
-
Disable slub debug facilities and allocate slabs at minimal order when
debug_guardpage_minorder > 0, to increase the probability of catching random
memory corruption by cpu exception.
Signed-off-by: Stanislaw Gruszka
Cc: "Rafael J. Wysocki"
Cc: Andrea Arcangeli
Acked-by: Christoph Lameter
Cc: Mel Gorman
Cc: Stanislaw Gruszka
Cc: Pekka Enberg
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
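A rough sketch of the idea (hypothetical condensation, not the patch's
hunks; debug_guardpage_minorder() is the real helper from <linux/mm.h>,
the two variables stand in for slub's module-level knobs):

/* Hypothetical condensation: when guard pages are requested, keep
 * slabs at order 0 and turn slub's own debug checks off, so the
 * page-fault-based detector catches the corruption instead. */
static int slub_max_order_knob = 3;          /* stand-in for slub_max_order */
static unsigned long slub_debug_knob = ~0UL; /* stand-in for slub_debug */

static void guardpage_tweak(void)
{
    if (debug_guardpage_minorder()) {
        slub_max_order_knob = 0;   /* allocate slabs at minimal order */
        slub_debug_knob = 0;       /* disable slub debug facilities */
    }
}
-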
For caches with debugging enabled, "slub: Switch per cpu partial page
support off for debugging" changes cpu_partial to 0. It shouldn't be
tunable from userspace for such caches, otherwise the same accounting
issues arise during validation.
This patch disallows tuning /sys/kernel/slab/cache/cpu_partial to be
non-zero for caches with debugging enabled.
Acked-by: Christoph Lameter
Signed-off-by: David Rientjes
Signed-off-by: Pekka Enberg
10 Jan, 2012
1 commit
-
* 'for-3.3' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu:
percpu: Remove irqsafe_cpu_xxx variants
Fix up conflict in arch/x86/include/asm/percpu.h due to clash with
cebef5beed3d ("x86: Fix and improve percpu_cmpxchg{8,16}b_double()")
which edited the (now removed) irqsafe_cpu_cmpxchg*_double code.
04 Jan, 2012
1 commit
-
Just like the per-CPU ones they had several problems/shortcomings:
Only the first memory operand was mentioned in the asm()
operands, and the 2x64-bit version didn't have a memory clobber
while the 2x32-bit one did. The former allowed the compiler to
not recognize the need to re-load the data in case it had it
cached in some register, while the latter was overly
destructive.
The types of the local copies of the old and new values were
incorrect (the types of the pointed-to variables should be used
here, to make sure the respective old/new variable types are
compatible).
The __dummy/__junk variables were pointless, given that local
copies of the inputs already existed (and can hence be used for
discarded outputs).
The 32-bit variant of cmpxchg_double_local() referenced
cmpxchg16b_local().
At once also:
- change the return value type to what it really is: 'bool'
- unify 32- and 64-bit variants
- abstract out the common part of the 'normal' and 'local' variants
Signed-off-by: Jan Beulich
Cc: Christoph Lameter
Cc: Linus Torvalds
Cc: Andrew Morton
Link: http://lkml.kernel.org/r/4F01F12A020000780006A19B@nat28.tlf.novell.com
Signed-off-by: Ingo Molnar
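A sketch of the reworked x86-64 shape (not the patch verbatim; the "=@ccz"
flag-output constraint is a GCC 6+ spelling of what the original achieved
with sete): the return type is bool, and both quadwords of the target are
named as memory operands, so the compiler must re-load them without needing
a blanket "memory" clobber.

#include <stdbool.h>

/* Sketch of a 2x64-bit compare-and-exchange; ptr must be 16-byte
 * aligned or cmpxchg16b faults. */
static inline bool cmpxchg_double_sketch(unsigned long *ptr,
                                         unsigned long old_lo,
                                         unsigned long old_hi,
                                         unsigned long new_lo,
                                         unsigned long new_hi)
{
    bool ret;

    /* Naming both words as "+m" outputs keeps the compiler from
     * caching stale copies in registers (the problem described
     * above) while avoiding an overly destructive clobber. */
    asm volatile("lock; cmpxchg16b %1"
                 : "=@ccz" (ret), "+m" (ptr[0]), "+m" (ptr[1]),
                   "+a" (old_lo), "+d" (old_hi)
                 : "b" (new_lo), "c" (new_hi));
    return ret;
}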
23 Dec, 2011
1 commit
-
We simply say that regular this_cpu use must be safe regardless of
preemption and interrupt state. That has no material change for x86
and s390 implementations of this_cpu operations. However, arches that
do not provide their own implementation for this_cpu operations will
now get code generated that disables interrupts instead of preemption.
tj: This is part of on-going percpu API cleanup. For detailed
discussion of the subject, please refer to the following thread:
http://thread.gmane.org/gmane.linux.kernel/1222078
Signed-off-by: Christoph Lameter
Signed-off-by: Tejun Heo
LKML-Reference:
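A minimal sketch of what the generic fallback now looks like (illustrative
only; the real macros in include/linux/percpu.h are generated per operation
and size):

/* Illustrative shape of the generic fallback after this change:
 * interrupts, not merely preemption, are disabled around the plain
 * read-modify-write, so arches without their own atomic per-cpu ops
 * are safe against both interrupt and process context. */
#define this_cpu_add_sketch(pcp, val)                   \
do {                                                    \
    unsigned long __flags;                              \
    local_irq_save(__flags);                            \
    __this_cpu_add(pcp, val);   /* non-atomic RMW */    \
    local_irq_restore(__flags);                         \
} while (0)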
14 Dec, 2011
4 commits
-
With per-cpu partial lists, a slab is added to the partial list first and
then moved to the node list. The __slab_free() code path for
add/remove_partial is almost deprecated (except for slub debug). But we
forgot to account add/remove_partial when moving per-cpu partial pages to
the node list, so the statistics for such events are always 0. Add the
corresponding accounting.
This is against the patch "slub: use correct parameter to add a page to
partial list tail".
Acked-by: Christoph Lameter
Signed-off-by: Shaohua Li
Signed-off-by: Pekka Enberg
-
get_freelist retrieves free objects from the page freelist (put there by remote
frees) or deactivates a slab page if no more objects are available.
Acked-by: David Rientjes
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
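The extracted helper looks roughly like this (condensed from the commit's
code; cmpxchg_double_slab() atomically swaps the page's freelist/counters
pair):

/* Condensed sketch: grab the whole freelist (filled by remote frees)
 * in one atomic swap, or unfreeze the page if nothing is left. */
static inline void *get_freelist(struct kmem_cache *s, struct page *page)
{
    struct page new;
    unsigned long counters;
    void *freelist;

    do {
        freelist = page->freelist;
        counters = page->counters;
        new.counters = counters;
        new.inuse = page->objects;
        new.frozen = freelist != NULL; /* stay frozen only if objects remain */
    } while (!cmpxchg_double_slab(s, page,
            freelist, counters,
            NULL, new.counters,
            "get_freelist"));

    return freelist;
}
-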
Eric saw an issue with accounting of slabs during validation. It's not
possible to determine accurately how many per cpu partial slabs exist at
any time, so this switches off per cpu partial pages during debug.
Acked-by: Eric Dumazet
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
Zhihua Che reported a possible memleak in the slub allocator on
CONFIG_PREEMPT=y builds.
It is possible that the current thread migrates right before disabling irqs
in __slab_alloc(). We must check c->freelist again, and perform a normal
allocation instead of scratching c->freelist.
Many thanks to Zhihua Che for spotting this bug, introduced in 2.6.39.
V2: It's also possible an IRQ freed one (or several) object(s) and
populated c->freelist, so it's not a CONFIG_PREEMPT-only problem.
Cc: [2.6.39+]
Reported-by: Zhihua Che
Signed-off-by: Eric Dumazet
Acked-by: Christoph Lameter
Signed-off-by: Pekka Enberg
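The fix is essentially one more load after interrupts are off; a condensed
sketch of the slow path (not the full function; get_freepointer() is slub's
existing helper):

static void *slab_alloc_recheck_sketch(struct kmem_cache *s)
{
    struct kmem_cache_cpu *c;
    unsigned long flags;
    void *object;

    local_irq_save(flags);
    /* Reload the cpu area: we may have migrated, or an IRQ may have
     * refilled c->freelist, before irqs were disabled. */
    c = this_cpu_ptr(s->cpu_slab);
    object = c->freelist;          /* the added re-check */
    if (object) {
        c->freelist = get_freepointer(s, object);
        local_irq_restore(flags);
        return object;             /* instead of scratching the list */
    }
    /* ... otherwise continue into the existing slow path ... */
    local_irq_restore(flags);
    return NULL;
}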
28 Nov, 2011
2 commits
-
With per-cpu partial lists, a slab is added to the partial list first and
then moved to the node list. The __slab_free() code path for
add/remove_partial is almost deprecated (except for slub debug). But we
forgot to account add/remove_partial when moving per-cpu partial pages to
the node list, so the statistics for such events are always 0. Add the
corresponding accounting.
This is against the patch "slub: use correct parameter to add a page to
partial list tail".
Acked-by: Christoph Lameter
Signed-off-by: Shaohua Li
Signed-off-by: Pekka Enberg
24 Nov, 2011
2 commits
-
show_slab_objects() can trigger NULL dereferences or memory corruption.
Another cpu can change its c->page to NULL or c->node to NUMA_NO_NODE
while we use them.
Use ACCESS_ONCE(c->page) and ACCESS_ONCE(c->node) to make sure this
cannot happen.
Acked-by: Christoph Lameter
Acked-by: David Rientjes
Signed-off-by: Eric Dumazet
Signed-off-by: Pekka Enberg
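The pattern is a one-shot snapshot of each racy field (sketch of the loop
in show_slab_objects(); ACCESS_ONCE() forces a single volatile read that
the compiler cannot repeat later):

for_each_possible_cpu(cpu) {
    struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
    /* one volatile read each; another cpu may flip these anytime */
    struct page *page = ACCESS_ONCE(c->page);
    int node = ACCESS_ONCE(c->node);

    if (!page || node < 0)          /* c->node may be NUMA_NO_NODE */
        continue;
    nodes[node] += page->inuse;     /* use only the snapshots */
}
-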
The cmpxchg must be irq safe. The fallback for this_cpu_cmpxchg only
disables preemption, which results in the per cpu partial page operation
potentially failing on non-x86 platforms.
This patch fixes the following problem reported by Christian Kujau:
I seem to hit it when heavy disk & cpu IO is in progress on this
PowerBook G4. Full dmesg & .config: http://nerdbynature.de/bits/3.2.0-rc1/oops/
I've enabled some debug options and now it really points to slub.c:2166
http://nerdbynature.de/bits/3.2.0-rc1/oops/oops4m.jpg
With debug options enabled I'm currently in the xmon debugger, not sure
what to make of it yet, I'll try to get something useful out of it :)
Reported-by: Christian Kujau
Tested-by: Christian Kujau
Acked-by: Eric Dumazet
Acked-by: David Rientjes
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
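The change itself is tiny (sketch of the retry loop in put_cpu_partial();
bookkeeping omitted). Note that irqsafe_cpu_cmpxchg was the then-current
spelling: once plain this_cpu ops were defined to be irq safe (see the
23 Dec entry above), the irqsafe_cpu_xxx variants were removed again by
the percpu merge above.

do {
    oldpage = this_cpu_read(s->cpu_slab->partial);
    page->next = oldpage;
    /* An irq-safe cmpxchg: a fallback that only disables preemption
     * can be torn by an interrupt on non-x86 platforms. */
} while (irqsafe_cpu_cmpxchg(s->cpu_slab->partial,
                             oldpage, page) != oldpage);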
17 Nov, 2011
1 commit
-
When we get corruption reports, it's useful to see if the kernel was
tainted, to rule out problems we can't do anything about.
Signed-off-by: Dave Jones
Signed-off-by: Andrew Morton
Signed-off-by: Pekka Enberg
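The change amounts to threading print_tainted() (the existing kernel helper
that renders the taint flags, e.g. "Tainted: G B") into the debug banners;
roughly:

/* Sketch of the amended report header in the slab/slub debug paths. */
printk(KERN_ERR "=============================================\n");
printk(KERN_ERR "BUG %s (%s): %s\n", s->name, print_tainted(), message);
printk(KERN_ERR "---------------------------------------------\n");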
16 Nov, 2011
2 commits
-
Lockdep reports there is a potential deadlock on the slub node list_lock.
discard_slab() is called with the lock held in unfreeze_partials(),
which could trigger a slab allocation, which could take the lock again.
discard_slab() doesn't actually need to hold the lock if the slab is
already removed from the partial list.
Acked-by: Christoph Lameter
Reported-and-tested-by: Yong Zhang
Reported-and-tested-by: Julie Sullivan
Signed-off-by: Shaohua Li
Signed-off-by: Pekka Enberg
-
unfreeze_partials() needs to add the page to the partial list tail, since
such a page doesn't have many free objects. We now explicitly use
DEACTIVATE_TO_TAIL for this, while DEACTIVATE_TO_TAIL != 1. Without the
fix below, this would cause a performance regression (eg, more lock
contention in node->list_lock).
Signed-off-by: Shaohua Li
Acked-by: Christoph Lameter
Acked-by: David Rientjes
Signed-off-by: Pekka Enberg
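A sketch of the fixed shape (close to slub's actual helper; the point is
that the tail flag is compared against the enum value explicitly rather
than treated as a boolean):

static void add_partial(struct kmem_cache_node *n,
                        struct page *page, int tail)
{
    n->nr_partial++;
    if (tail == DEACTIVATE_TO_TAIL)     /* not: if (tail) */
        list_add_tail(&page->lru, &n->partial);
    else
        list_add(&page->lru, &n->partial);
}

/* unfreeze_partials(): a page with few free objects goes to the tail */
add_partial(n, page, DEACTIVATE_TO_TAIL);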
01 Nov, 2011
1 commit
-
memchr_inv() is mainly used to check whether the whole buffer is filled
with just a specified byte.
The function name and prototype are stolen from logfs and the
implementation is from SLUB.
Signed-off-by: Akinobu Mita
Acked-by: Christoph Lameter
Acked-by: Pekka Enberg
Cc: Matt Mackall
Acked-by: Joern Engel
Cc: Marcin Slusarz
Cc: Eric Dumazet
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
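The resulting helper and a typical use (the prototype is the one added to
<linux/string.h>):

/* Returns NULL if the whole area is filled with the byte 'c',
 * otherwise a pointer to the first mismatching byte. */
void *memchr_inv(const void *start, int c, size_t bytes);

/* Typical use: verify a zero-filled buffer in one call. */
static bool buffer_all_zero(const void *buf, size_t len)
{
    return memchr_inv(buf, 0, len) == NULL;
}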
26 Oct, 2011
1 commit
28 Sep, 2011
3 commits
-
Discarding a slab should be done when node partial > min_partial. Otherwise,
node partial slabs may eat up all memory.
Signed-off-by: Alex Shi
Acked-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
Correct comment errors that mistake the cpu partial objects count for a
page count, which could mislead readers.
Signed-off-by: Alex Shi
Reviewed-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
Historically /proc/slabinfo and files under /sys/kernel/slab/* have
world read permissions and are accessible to the world. slabinfo
contains rather private information related both to the kernel and
userspace tasks. Depending on the situation, it might reveal either
private information per se or information useful to make another
targeted attack. Some examples of what can be learned by
reading/watching for /proc/slabinfo entries:
1) dentry (and different *inode*) numbers might reveal other processes' fs
activity. The number of dentry "active objects" doesn't strictly show
the file count opened/touched by a process, however, there is a good
correlation between them. The patch "proc: force dcache drop on
unauthorized access" relies on the privacy of the dentry count.
2) different inode entries might reveal the same information as (1), but
these are finer grained counters. If a filesystem is mounted in a
private mount point (or even a private namespace) and the fs type differs
from other mounted fs types, fs activity in this mount point/namespace is
revealed. If there is a single ecryptfs mount point, the whole fs
activity of a single user is revealed. The number of files in an ecryptfs
mount point is private information per se.
3) fuse_* reveals the number of files / fs activity of a user in a user
private mount point. It is approx. the same severity as the ecryptfs
infoleak in (2).
4) sysfs_dir_cache, similar to (2), reveals devices' addition/removal,
which can be otherwise hidden by "chmod 0700 /sys/". With 0444 slabinfo
the precise number of sysfs files is known to the world.
5) buffer_head might reveal some kernel activity. With other
information leaks an attacker might identify what specific kernel
routines generate buffer_head activity.
6) *kmalloc* infoleaks are very situational. An attacker should watch for
the specific kmalloc size entry and filter out the noise from unrelated
kernel activity. If an attacker has a relatively silent victim system, he
might get rather precise counters.
Additional information sources might significantly increase the slabinfo
infoleak benefits. E.g. if an attacker knows that the processes
activity on the system is very low (only core daemons like syslog and
cron), he may run setxid binaries / trigger local daemon activity /
trigger network services activity / await sporadic cron jobs activity
/ etc. and get rather precise counters for fs and network activity of
these privileged tasks, which is unknown otherwise.
Also, hiding slabinfo and /sys/kernel/slab/* is one step toward
complicating exploitation of kernel heap overflows (and possibly other
bugs). The related discussion:
http://thread.gmane.org/gmane.linux.kernel/1108378
To keep compatibility with the old permission model, where a non-root
monitoring daemon could watch for kernel memleaks through slabinfo, one
should do:
groupadd slabinfo
usermod -a -G slabinfo $MONITOR_USER
And add the following commands to init scripts (to mountall.conf in
Ubuntu's upstart case):
chmod g+r /proc/slabinfo /sys/kernel/slab/*/*
chgrp slabinfo /proc/slabinfo /sys/kernel/slab/*/*
Signed-off-by: Vasiliy Kulikov
Reviewed-by: Kees Cook
Reviewed-by: Dave Hansen
Acked-by: Christoph Lameter
Acked-by: David Rientjes
CC: Valdis.Kletnieks@vt.edu
CC: Linus Torvalds
CC: Alan Cox
Signed-off-by: Pekka Enberg
19 Sep, 2011
1 commit
14 Sep, 2011
1 commit
-
I found a way to eliminate a variable in get_partial_node(). That also
helps code understanding.
Acked-by: Christoph Lameter
Signed-off-by: Alex Shi
Signed-off-by: Pekka Enberg
27 Aug, 2011
2 commits
-
Adding a slab to the partial list head/tail is sensitive to performance.
So explicitly use DEACTIVATE_TO_TAIL/DEACTIVATE_TO_HEAD to document
it, to avoid getting it wrong.
Acked-by: Christoph Lameter
Signed-off-by: Shaohua Li
Signed-off-by: Shaohua Li
Signed-off-by: Pekka Enberg
-
The slab has just one free object; adding it to the partial list head
doesn't make sense. And it can cause lock contention. For example:
1. CPU takes the slab from partial list
2. fetch an object
3. switch to another slab
4. free an object, then the slab is added to partial list again
In this way n->list_lock will be heavily contended.
In fact, Alex had a hackbench regression: 3.1-rc1 performance drops about
70% against 3.0. This patch fixes it.
Acked-by: Christoph Lameter
Reported-by: Alex Shi
Signed-off-by: Shaohua Li
Signed-off-by: Shaohua Li
Signed-off-by: Pekka Enberg
20 Aug, 2011
6 commits
-
Allow filling out the rest of the kmem_cache_cpu cacheline with pointers to
partial pages. The partial page list is used in slab_free() to avoid
per node lock taking.
In __slab_alloc() we can then take multiple partial pages off the per
node partial list in one go, reducing node lock pressure.
We can also use the per cpu partial list in slab_alloc() to avoid scanning
partial lists for pages with free objects.
The main effect of a per cpu partial list is that the per node list_lock
is taken for batches of partial pages instead of individual ones.
Potential future enhancements:
1. The pickup from the partial list could perhaps be done without disabling
interrupts with some work. The free path already puts the page into the
per cpu partial list without disabling interrupts.
2. __slab_free() may have some code paths that could use optimization.
Performance:
Before After
./hackbench 100 process 200000
Time: 1953.047 1564.614
./hackbench 100 process 20000
Time: 207.176 156.940
./hackbench 100 process 20000
Time: 204.468 156.940
./hackbench 100 process 20000
Time: 204.879 158.772
./hackbench 10 process 20000
Time: 20.153 15.853
./hackbench 10 process 20000
Time: 20.153 15.986
./hackbench 10 process 20000
Time: 19.363 16.111
./hackbench 1 process 20000
Time: 2.518 2.307
./hackbench 1 process 20000
Time: 2.258 2.339
./hackbench 1 process 20000
Time: 2.864 2.163
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
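The structural change is small (abridged sketch; the real struct carries
additional stats fields under config options):

/* kmem_cache_cpu after this change: the spare cacheline space holds
 * a lockless list of partial pages, letting slab_free() and
 * __slab_alloc() often avoid the per node list_lock. */
struct kmem_cache_cpu {
    void **freelist;      /* pointer to next available object */
    unsigned long tid;    /* globally unique transaction id */
    struct page *page;    /* the slab we are allocating from */
    struct page *partial; /* per cpu partial pages, chained via page->next */
};
-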
There is no need anymore to return the pointer to a slab page from
get_partial() since the page reference can be stored in the kmem_cache_cpu
structure's "page" field. Return an object pointer instead.
That in turn allows a simplification of the spaghetti code in __slab_alloc().
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
Pass the kmem_cache_cpu pointer to get_partial(). That way
we can avoid the this_cpu_write() statements.
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
inuse will always be set to page->objects. There is no point in
initializing the field to zero in new_slab() and then overwriting
the value in __slab_alloc().
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
Two statements in __slab_alloc() do not have any effect.
1. c->page is already set to NULL by deactivate_slab(), called right before.
2. gfpflags are masked in new_slab() before being passed to the page
allocator. There is no need to mask gfpflags in __slab_alloc() in
particular, since the most frequent processing in __slab_alloc does not
require the use of a gfpmask.
Cc: torvalds@linux-foundation.org
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
-
There are two situations in which slub holds a lock while releasing pages:
A. During kmem_cache_shrink()
B. During kmem_cache_close()
For A, build a list while holding the lock and then release the pages
later. In case of B we are the last remaining user of the slab, so
there is no need to take the listlock.
After this patch all calls to the page allocator to free pages are
done without holding any spinlocks. kmem_cache_destroy() will still
hold the slub_lock semaphore.
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
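A condensed sketch of strategy A (names follow slub's; empty slabs move to
a private list under the lock and are handed to the page allocator only
afterwards):

LIST_HEAD(discard);
struct page *page, *t;
unsigned long flags;

spin_lock_irqsave(&n->list_lock, flags);
list_for_each_entry_safe(page, t, &n->partial, lru) {
    if (!page->inuse) {
        list_move(&page->lru, &discard);  /* unlink under the lock */
        n->nr_partial--;
    }
}
spin_unlock_irqrestore(&n->list_lock, flags);

/* No spinlock held: discard_slab() may call the page allocator. */
list_for_each_entry_safe(page, t, &discard, lru)
    discard_slab(s, page);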
10 Aug, 2011
1 commit
-
deactivate_slab() has the comparison of whether more than the minimum
number of partial pages are in the partial list the wrong way around. An
effect of this may be that empty pages are not freed from
deactivate_slab(). The result could be an OOM due to growth of the partial
slabs per node. Frees mostly occur from __slab_free, which is okay, so this
would only affect use cases where a lot of switching around of per cpu
slabs occurs.
Switching per cpu slabs occurs with high frequency if debugging options are
enabled.
Reported-and-tested-by: Xiaotian Feng
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
09 Aug, 2011
2 commits
-
The check_bytes() function is used by slub debugging. It returns a pointer
to the first unmatching byte for a character in the given memory area.
If the character to match is greater than 0x80, check_bytes()
doesn't work, because the 64-bit pattern is generated as below:
value64 = value | value << 8 | value << 16 | value << 24;
value64 = value64 | value64 << 32;
The integer promotions are performed and sign-extended as the type of value
is u8. The upper 32 bits of value64 are 0xffffffff in the first line, and
the second line has no effect.
This fixes the 64-bit pattern generation.
Signed-off-by: Akinobu Mita
Cc: Christoph Lameter
Cc: Matt Mackall
Reviewed-by: Marcin Slusarz
Acked-by: Eric Dumazet
Signed-off-by: Pekka Enberg
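The corrected generation widens first and then only shifts the
already-wide value, so no (signed) int intermediate ever appears; this
mirrors the fix:

#include <stdint.h>

/* Fixed pattern generation: assign the u8 to a u64 before shifting.
 * Bytes >= 0x80 no longer smear 0xffffffff into the upper half. */
static uint64_t spread_byte(uint8_t value)
{
    uint64_t value64 = value;

    value64 |= value64 << 8;
    value64 |= value64 << 16;
    value64 |= value64 << 32;
    return value64;    /* e.g. 0xa5 -> 0xa5a5a5a5a5a5a5a5 */
}
-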
When a slab is freed by __slab_free() and the slab can only ever contain a
single object, then it was full (and therefore not on the partial
lists but on the full list in the debug case) before we reached
slab_empty.
This caused the following full list corruption when SLUB debugging was enabled:
[ 5913.233035] ------------[ cut here ]------------
[ 5913.233097] WARNING: at lib/list_debug.c:53 __list_del_entry+0x8d/0x98()
[ 5913.233101] Hardware name: Adamo 13
[ 5913.233105] list_del corruption. prev->next should be ffffea000434fd20, but was ffffea0004199520
[ 5913.233108] Modules linked in: nfs fscache fuse ebtable_nat ebtables ppdev parport_pc lp parport ipt_MASQUERADE iptable_nat nf_nat nfsd lockd nfs_acl auth_rpcgss xt_CHECKSUM sunrpc iptable_mangle bridge stp llc cpufreq_ondemand acpi_cpufreq freq_table mperf ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_filter ip6_tables rfcomm bnep arc4 iwlagn snd_hda_codec_hdmi snd_hda_codec_idt snd_hda_intel btusb mac80211 snd_hda_codec bluetooth snd_hwdep snd_seq snd_seq_device snd_pcm usb_debug dell_wmi sparse_keymap cdc_ether usbnet cdc_acm uvcvideo cdc_wdm mii cfg80211 snd_timer dell_laptop videodev dcdbas snd microcode v4l2_compat_ioctl32 soundcore joydev tg3 pcspkr snd_page_alloc iTCO_wdt i2c_i801 rfkill iTCO_vendor_support wmi virtio_net kvm_intel kvm ipv6 xts gf128mul dm_crypt i915 drm_kms_helper drm i2c_algo_bit i2c_core video [last unloaded: scsi_wait_scan]
[ 5913.233213] Pid: 0, comm: swapper Not tainted 3.0.0+ #127
[ 5913.233213] Call Trace:
[ 5913.233213] [] warn_slowpath_common+0x83/0x9b
[ 5913.233213] [] warn_slowpath_fmt+0x46/0x48
[ 5913.233213] [] __list_del_entry+0x8d/0x98
[ 5913.233213] [] list_del+0xe/0x2d
[ 5913.233213] [] __slab_free+0x1db/0x235
[ 5913.233213] [] ? bvec_free_bs+0x35/0x37
[ 5913.233213] [] ? bvec_free_bs+0x35/0x37
[ 5913.233213] [] ? bvec_free_bs+0x35/0x37
[ 5913.233213] [] kmem_cache_free+0x88/0x102
[ 5913.233213] [] bvec_free_bs+0x35/0x37
[ 5913.233213] [] bio_free+0x34/0x64
[ 5913.233213] [] dm_bio_destructor+0x12/0x14
[ 5913.233213] [] bio_put+0x2b/0x2d
[ 5913.233213] [] clone_endio+0x9e/0xb4
[ 5913.233213] [] bio_endio+0x2d/0x2f
[ 5913.233213] [] crypt_dec_pending+0x5c/0x8b [dm_crypt]
[ 5913.233213] [] crypt_endio+0x78/0x81 [dm_crypt]
[ Full discussion here: https://lkml.org/lkml/2011/8/4/375 ]
Make sure that we also remove such a slab from the full lists.
Reported-and-tested-by: Dave Jones
Reported-and-tested-by: Xiaotian Feng
Signed-off-by: Christoph Lameter
Signed-off-by: Pekka Enberg
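The fix boils down to one more branch on the slab_empty path of
__slab_free() (condensed; remove_full() is slub's existing debug-list
helper):

/* slab_empty: the slab is going away entirely. */
if (prior) {
    /* It was on the partial list. */
    remove_partial(n, page);
    stat(s, FREE_REMOVE_PARTIAL);
} else {
    /* It held a single object, so it was full: in the debug case it
     * sits on the full list and must be unlinked from there too. */
    remove_full(s, page);
}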
01 Aug, 2011
1 commit
-
Less code and same functionality. The output would be:
| Object c7428000: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
| Object c7428010: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
| Object c7428020: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
| Object c7428030: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5 kkkkkkkkkkk.
| Redzone c742803c: bb bb bb bb ....
| Padding c7428064: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a ZZZZZZZZZZZZZZZZ
| Padding c7428074: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a ZZZZZZZZZZZZ
Acked-by: Christoph Lameter
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Pekka Enberg
31 Jul, 2011
1 commit
-
* 'slub/lockless' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6: (21 commits)
slub: When allocating a new slab also prep the first object
slub: disable interrupts in cmpxchg_double_slab when falling back to pagelock
Avoid duplicate _count variables in page_struct
Revert "SLUB: Fix build breakage in linux/mm_types.h"
SLUB: Fix build breakage in linux/mm_types.h
slub: slabinfo update for cmpxchg handling
slub: Not necessary to check for empty slab on load_freelist
slub: fast release on full slab
slub: Add statistics for the case that the current slab does not match the node
slub: Get rid of the another_slab label
slub: Avoid disabling interrupts in free slowpath
slub: Disable interrupts in free_debug processing
slub: Invert locking and avoid slab lock
slub: Rework allocator fastpaths
slub: Pass kmem_cache struct to lock and freeze slab
slub: explicit list_lock taking
slub: Add cmpxchg_double_slab()
mm: Rearrange struct page
slub: Move page->frozen handling near where the page->freelist handling occurs
slub: Do not use frozen page flag but a bit in the page counters
...