Commit 43570fd2f47ba518145e9289f54cde3dba4c8b25

Authored by Heiko Carstens
Committed by Linus Torvalds
1 parent 0d259cf819

mm,slub,x86: decouple size of struct page from CONFIG_CMPXCHG_LOCAL

While implementing cmpxchg_double() on s390 I realized that we don't set
CONFIG_CMPXCHG_LOCAL despite the fact that we have support for it.

However, setting that option would increase the size of struct page by
eight bytes on 64 bit, which we certainly do not want.  It also doesn't
make sense that the mere presence of a cpu feature should increase the
size of struct page.

Besides that, it looks like the dependency on CMPXCHG_LOCAL is wrong and
that the code should depend on CMPXCHG_DOUBLE instead.

This patch:

If an architecture supports CMPXCHG_LOCAL, this shouldn't automatically
result in a larger struct page when the SLUB allocator is used.  Instead,
introduce a new config option "HAVE_ALIGNED_STRUCT_PAGE" which can be
selected if a double word aligned struct page is required.  Also update
the x86 Kconfig so that it behaves as before.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
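
For readers unfamiliar with the trade-off, here is a minimal userspace
sketch (not part of the patch; the struct and the DEMO_ALIGNED macro are
made up) of what the new option controls: forcing a structure to be
double word aligned so that a combined compare-and-swap on its first two
words is possible, at the cost of padding, i.e. the "might increase the
size of a struct page by a word" mentioned in the new Kconfig help text.

#include <stdio.h>

/* Stand-in for struct page; DEMO_ALIGNED plays the role of
 * CONFIG_HAVE_ALIGNED_STRUCT_PAGE. */
struct demo_page {
	void *freelist;		/* first word of the cmpxchg_double pair */
	unsigned long counters;	/* second word of the pair */
	unsigned long flags;	/* some other per-page state */
}
#ifdef DEMO_ALIGNED
	__attribute__((__aligned__(2 * sizeof(unsigned long))))
#endif
;

int main(void)
{
	/* On 64 bit this prints 24/8 without -DDEMO_ALIGNED and 32/16
	 * with it, i.e. the eight byte growth the commit message wants
	 * to avoid paying just because the cpu feature exists. */
	printf("sizeof = %zu alignof = %zu\n",
	       sizeof(struct demo_page), __alignof__(struct demo_page));
	return 0;
}

Selecting HAVE_ALIGNED_STRUCT_PAGE is what enables the equivalent
attribute on the real struct page in the include/linux/mm_types.h hunk
below.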

Showing 4 changed files with 16 additions and 8 deletions

arch/Kconfig
@@ -185,5 +185,13 @@
 config ARCH_HAVE_NMI_SAFE_CMPXCHG
 	bool
 
+config HAVE_ALIGNED_STRUCT_PAGE
+	bool
+	help
+	  This makes sure that struct pages are double word aligned and that
+	  e.g. the SLUB allocator can perform double word atomic operations
+	  on a struct page for better performance. However selecting this
+	  might increase the size of a struct page by a word.
+
 source "kernel/gcov/Kconfig"
arch/x86/Kconfig
@@ -60,6 +60,7 @@
 	select PERF_EVENTS
 	select HAVE_PERF_EVENTS_NMI
 	select ANON_INODES
+	select HAVE_ALIGNED_STRUCT_PAGE if SLUB && !M386
 	select HAVE_ARCH_KMEMCHECK
 	select HAVE_USER_RETURN_NOTIFIER
 	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
include/linux/mm_types.h
@@ -151,12 +151,11 @@
 #endif
 }
 /*
- * If another subsystem starts using the double word pairing for atomic
- * operations on struct page then it must change the #if to ensure
- * proper alignment of the page struct.
+ * The struct page can be forced to be double word aligned so that atomic ops
+ * on double words work. The SLUB allocator can make use of such a feature.
  */
-#if defined(CONFIG_SLUB) && defined(CONFIG_CMPXCHG_LOCAL)
-	__attribute__((__aligned__(2*sizeof(unsigned long))))
+#ifdef CONFIG_HAVE_ALIGNED_STRUCT_PAGE
+	__aligned(2 * sizeof(unsigned long))
 #endif
 ;
 
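For reference (taken from the kernel's compiler headers, not from this
patch), __aligned() is just shorthand for the attribute spelled out in
the removed line, so the replacement only changes which config symbol
gates it:

/* include/linux/compiler-gcc.h */
#define __aligned(x)	__attribute__((aligned(x)))
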
mm/slub.c
@@ -366,7 +366,7 @@
 		const char *n)
 {
 	VM_BUG_ON(!irqs_disabled());
-#ifdef CONFIG_CMPXCHG_DOUBLE
+#if defined(CONFIG_CMPXCHG_DOUBLE) && defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
 	if (s->flags & __CMPXCHG_DOUBLE) {
 		if (cmpxchg_double(&page->freelist, &page->counters,
 			freelist_old, counters_old,
@@ -400,7 +400,7 @@
 		void *freelist_new, unsigned long counters_new,
 		const char *n)
 {
-#ifdef CONFIG_CMPXCHG_DOUBLE
+#if defined(CONFIG_CMPXCHG_DOUBLE) && defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
 	if (s->flags & __CMPXCHG_DOUBLE) {
 		if (cmpxchg_double(&page->freelist, &page->counters,
 			freelist_old, counters_old,
@@ -3014,7 +3014,7 @@
 		}
 	}
 
-#ifdef CONFIG_CMPXCHG_DOUBLE
+#if defined(CONFIG_CMPXCHG_DOUBLE) && defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
 	if (system_has_cmpxchg_double() && (s->flags & SLAB_DEBUG_FLAGS) == 0)
 		/* Enable fast mode */
 		s->flags |= __CMPXCHG_DOUBLE;
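
For illustration, a hedged userspace sketch (not the kernel's
cmpxchg_double() and not part of this patch) of the double word
compare-and-swap pattern that __cmpxchg_double_slab() relies on: the
freelist pointer and the counters word are compared and exchanged as one
unit, which only works when the pair is double word aligned, hence the
added dependency on HAVE_ALIGNED_STRUCT_PAGE in these hunks.

#include <stdbool.h>
#include <stdio.h>

/* Stand-in for the (freelist, counters) pair at the start of struct page. */
struct pair {
	void *freelist;
	unsigned long counters;
} __attribute__((__aligned__(2 * sizeof(unsigned long))));

/* Compare and swap both words in one go; with GCC this becomes a 16 byte
 * atomic on 64 bit (cmpxchg16b on x86-64, inline with -mcx16 or via
 * libatomic). */
static bool pair_cmpxchg(struct pair *p, struct pair old_val,
			 struct pair new_val)
{
	return __atomic_compare_exchange(p, &old_val, &new_val, false,
					 __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}

int main(void)
{
	struct pair page_bits = { .freelist = NULL, .counters = 1 };
	struct pair old_val = page_bits;
	struct pair new_val = { .freelist = &page_bits, .counters = 2 };

	printf("double word cmpxchg %s\n",
	       pair_cmpxchg(&page_bits, old_val, new_val) ? "succeeded"
							  : "failed");
	return 0;
}

On x86 the cmpxchg16b capability this relies on is what
system_has_cmpxchg_double() checks at runtime before the __CMPXCHG_DOUBLE
fast mode is enabled in the last hunk.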