Commit 2ad654bc5e2b211e92f66da1d819e47d79a866f0

Authored by Zefan Li
Committed by Tejun Heo

cpuset: PF_SPREAD_PAGE and PF_SPREAD_SLAB should be atomic flags

When we change cpuset.memory_spread_{page,slab}, cpuset flips the
PF_SPREAD_{PAGE,SLAB} bit of tsk->flags for each task in that cpuset.
These updates should be done with atomic bitops, but currently they are
plain non-atomic read-modify-writes of tsk->flags, which is broken.

Tetsuo reported a hard-to-reproduce kernel crash on RHEL6 that happened
when one thread tried to clear PF_USED_MATH while, at the same time,
another thread tried to flip PF_SPREAD_PAGE/PF_SPREAD_SLAB on the same
task.
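
The breakage is the classic lost update on a shared word: the plain
|= and &= operators expand into separate load, modify and store steps,
so one CPU's store can overwrite the other's bit. An illustrative
interleaving (not the exact trace from the report):

        CPU0 (clearing PF_USED_MATH)    CPU1 (setting PF_SPREAD_PAGE)

        tmp0 = tsk->flags;
                                        tmp1 = tsk->flags;
        tmp0 &= ~PF_USED_MATH;
        tsk->flags = tmp0;
                                        tmp1 |= PF_SPREAD_PAGE;
                                        tsk->flags = tmp1;
                                        /* PF_USED_MATH is set again */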

Here's the full report:
https://lkml.org/lkml/2014/9/19/230

To fix this, we make PF_SPREAD_PAGE and PF_SPREAD_SLAB atomic flags.
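
The new task_{set,clear}_spread_{page,slab}() helpers reduce to the
usual atomic bitops on a word that is only ever accessed atomically; a
minimal sketch of what the generated helpers boil down to:

        /* sketch: what the TASK_PFA_*-generated helpers do */
        set_bit(PFA_SPREAD_PAGE, &tsk->atomic_flags);   /* atomic RMW */
        clear_bit(PFA_SPREAD_PAGE, &tsk->atomic_flags);
        test_bit(PFA_SPREAD_PAGE, &tsk->atomic_flags);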

v4:
- updated mm/slab.c. (Fengguang Wu)
- updated Documentation.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Miao Xie <miaox@cn.fujitsu.com>
Cc: Kees Cook <keescook@chromium.org>
Fixes: 950592f7b991 ("cpusets: update tasks' page/slab spread flags in time")
Cc: <stable@vger.kernel.org> # 2.6.31+
Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Zefan Li <lizefan@huawei.com>
Signed-off-by: Tejun Heo <tj@kernel.org>

Showing 5 changed files with 23 additions and 13 deletions

Documentation/cgroups/cpusets.txt
@@ -345,14 +345,14 @@
 The implementation is simple.
 
 Setting the flag 'cpuset.memory_spread_page' turns on a per-process flag
-PF_SPREAD_PAGE for each task that is in that cpuset or subsequently
+PFA_SPREAD_PAGE for each task that is in that cpuset or subsequently
 joins that cpuset. The page allocation calls for the page cache
-is modified to perform an inline check for this PF_SPREAD_PAGE task
+is modified to perform an inline check for this PFA_SPREAD_PAGE task
 flag, and if set, a call to a new routine cpuset_mem_spread_node()
 returns the node to prefer for the allocation.
 
 Similarly, setting 'cpuset.memory_spread_slab' turns on the flag
-PF_SPREAD_SLAB, and appropriately marked slab caches will allocate
+PFA_SPREAD_SLAB, and appropriately marked slab caches will allocate
 pages from the node returned by cpuset_mem_spread_node().
 
 The cpuset_mem_spread_node() routine is also simple. It uses the
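
For context, the inline check described above follows the pattern of
__page_cache_alloc() in mm/filemap.c; a simplified sketch (details such
as the mems_allowed retry loop vary by kernel version):

        struct page *__page_cache_alloc(gfp_t gfp)
        {
                if (cpuset_do_page_mem_spread()) {      /* PFA_SPREAD_PAGE set? */
                        int n = cpuset_mem_spread_node();  /* node to prefer */
                        return alloc_pages_exact_node(n, gfp, 0);
                }
                return alloc_pages(gfp, 0);
        }
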
include/linux/cpuset.h
@@ -93,12 +93,12 @@
 
 static inline int cpuset_do_page_mem_spread(void)
 {
-	return current->flags & PF_SPREAD_PAGE;
+	return task_spread_page(current);
 }
 
 static inline int cpuset_do_slab_mem_spread(void)
 {
-	return current->flags & PF_SPREAD_SLAB;
+	return task_spread_slab(current);
 }
 
 extern int current_cpuset_is_being_rebound(void);
include/linux/sched.h
@@ -1903,8 +1903,6 @@
 #define PF_KTHREAD 0x00200000 /* I am a kernel thread */
 #define PF_RANDOMIZE 0x00400000 /* randomize virtual address space */
 #define PF_SWAPWRITE 0x00800000 /* Allowed to write to swap */
-#define PF_SPREAD_PAGE 0x01000000 /* Spread page cache over cpuset */
-#define PF_SPREAD_SLAB 0x02000000 /* Spread some slab caches over cpuset */
 #define PF_NO_SETAFFINITY 0x04000000 /* Userland is not allowed to meddle with cpus_allowed */
 #define PF_MCE_EARLY 0x08000000 /* Early kill for mce process policy */
 #define PF_MUTEX_TESTER 0x20000000 /* Thread belongs to the rt mutex tester */
@@ -1958,7 +1956,10 @@
 
 /* Per-process atomic flags. */
 #define PFA_NO_NEW_PRIVS 0 /* May not gain new privileges. */
+#define PFA_SPREAD_PAGE 1 /* Spread page cache over cpuset */
+#define PFA_SPREAD_SLAB 2 /* Spread some slab caches over cpuset */
 
+
 #define TASK_PFA_TEST(name, func) \
 	static inline bool task_##func(struct task_struct *p) \
 	{ return test_bit(PFA_##name, &p->atomic_flags); }
@@ -1971,6 +1972,14 @@
 
 TASK_PFA_TEST(NO_NEW_PRIVS, no_new_privs)
 TASK_PFA_SET(NO_NEW_PRIVS, no_new_privs)
+
+TASK_PFA_TEST(SPREAD_PAGE, spread_page)
+TASK_PFA_SET(SPREAD_PAGE, spread_page)
+TASK_PFA_CLEAR(SPREAD_PAGE, spread_page)
+
+TASK_PFA_TEST(SPREAD_SLAB, spread_slab)
+TASK_PFA_SET(SPREAD_SLAB, spread_slab)
+TASK_PFA_CLEAR(SPREAD_SLAB, spread_slab)
 
 /*
  * task->jobctl flags
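
For reference, TASK_PFA_SET() and TASK_PFA_CLEAR() (already present in
sched.h since the PFA_NO_NEW_PRIVS conversion, not shown in this hunk)
follow the same pattern as TASK_PFA_TEST() above, roughly:

        #define TASK_PFA_SET(name, func) \
                static inline void task_set_##func(struct task_struct *p) \
                { set_bit(PFA_##name, &p->atomic_flags); }

        #define TASK_PFA_CLEAR(name, func) \
                static inline void task_clear_##func(struct task_struct *p) \
                { clear_bit(PFA_##name, &p->atomic_flags); }
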
kernel/cpuset.c
@@ -365,13 +365,14 @@
 					struct task_struct *tsk)
 {
 	if (is_spread_page(cs))
-		tsk->flags |= PF_SPREAD_PAGE;
+		task_set_spread_page(tsk);
 	else
-		tsk->flags &= ~PF_SPREAD_PAGE;
+		task_clear_spread_page(tsk);
+
 	if (is_spread_slab(cs))
-		tsk->flags |= PF_SPREAD_SLAB;
+		task_set_spread_slab(tsk);
 	else
-		tsk->flags &= ~PF_SPREAD_SLAB;
+		task_clear_spread_slab(tsk);
 }
 
 /*
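
For context, cpuset_update_task_spread_flag() above runs for every task
in a cpuset when cpuset.memory_spread_{page,slab} changes; its caller
walks the whole cgroup, roughly (paraphrased from update_tasks_flags()
in kernel/cpuset.c of this era):

        static void update_tasks_flags(struct cpuset *cs)
        {
                struct css_task_iter it;
                struct task_struct *task;

                css_task_iter_start(&cs->css, &it);
                while ((task = css_task_iter_next(&it)))
                        cpuset_update_task_spread_flag(cs, task);
                css_task_iter_end(&it);
        }

This is why the non-atomic version raced: tsk here is usually some
other thread, which may be concurrently updating its own tsk->flags.
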
mm/slab.c
@@ -2994,7 +2994,7 @@
 
 #ifdef CONFIG_NUMA
 /*
- * Try allocating on another node if PF_SPREAD_SLAB is a mempolicy is set.
+ * Try allocating on another node if PFA_SPREAD_SLAB is a mempolicy is set.
  *
  * If we are in_interrupt, then process context, including cpusets and
  * mempolicy, may not apply and should not be used for allocation policy.
@@ -3226,7 +3226,7 @@
 {
 	void *objp;
 
-	if (current->mempolicy || unlikely(current->flags & PF_SPREAD_SLAB)) {
+	if (current->mempolicy || cpuset_do_slab_mem_spread()) {
 		objp = alternate_node_alloc(cache, flags);
 		if (objp)
 			goto out;