Commit b246272ecc5ac68c743b15c9e41a2275f7ce70e2

Authored by David Rientjes
Committed by Linus Torvalds
1 parent 511585a28e

cpusets: stall when updating mems_allowed for mempolicy or disjoint nodemask

On kernels where MAX_NUMNODES > BITS_PER_LONG, a task's mempolicy may
temporarily contain an empty nodemask if its previous nodemask is remapped
onto a new set of allowed cpuset nodes and, as a result of the remap, the two
nodemasks are disjoint.

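To make the failure mode concrete, here is a minimal userspace sketch
(illustration only, not kernel code: the two-word mask, the struct and helper
names, and the word-at-a-time store order are assumptions) of why a nodemask
wider than one long can be observed empty while it is rewritten to a disjoint
value:

#include <stdio.h>

/* toy nodemask spanning two longs, i.e. MAX_NUMNODES > BITS_PER_LONG */
struct toy_nodemask { unsigned long bits[2]; };

static void show(const char *when, const struct toy_nodemask *m)
{
        printf("%-10s { %#lx, %#lx }%s\n", when, m->bits[0], m->bits[1],
               (m->bits[0] | m->bits[1]) ? "" : "  <- observed empty");
}

int main(void)
{
        struct toy_nodemask mask    = { { 0x1UL, 0x0UL } }; /* old: node 0 */
        struct toy_nodemask newmask = { { 0x0UL, 0x1UL } }; /* new: disjoint, one word up */

        show("before", &mask);
        mask.bits[0] = newmask.bits[0];  /* word 0 goes to zero first ... */
        show("mid-update", &mask);       /* ... a concurrent reader sees no nodes here */
        mask.bits[1] = newmask.bits[1];
        show("after", &mask);
        return 0;
}
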
c0ff7453bb5c ("cpuset,mm: fix no node to alloc memory when changing
cpuset's mems") adds get_mems_allowed() to prevent the set of allowed
nodes from changing for a thread.  This causes any update to a set of
allowed nodes to stall until put_mems_allowed() is called.

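The shape of that stall, as a hedged userspace analogue (the counter name
mirrors mems_allowed_change_disable in the diff below, but it is collapsed to
a single global here, and the function bodies are assumptions rather than the
kernel implementation):

#include <sched.h>
#include <stdatomic.h>

/* readers (page allocation paths) pin the nodemask around their use of it */
static atomic_int mems_allowed_change_disable;

static void get_mems_allowed(void)
{
        atomic_fetch_add(&mems_allowed_change_disable, 1);
}

static void put_mems_allowed(void)
{
        atomic_fetch_sub(&mems_allowed_change_disable, 1);
}

/* the cpuset updater stalls until every reader has dropped its pin */
static void install_new_mems_allowed(void)
{
        while (atomic_load(&mems_allowed_change_disable))
                sched_yield();
        /* ... now safe to publish the new nodemask ... */
}
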
This stall is unnecessary, however, if at least one node remains unchanged
in the update to the set of allowed nodes.  This was addressed by
89e8a244b97e ("cpusets: avoid looping when storing to mems_allowed if one
node remains set"), but it's still possible that an empty nodemask may be
read from a mempolicy because the old nodemask may be remapped to the new
nodemask during rebind.  To prevent this, only avoid the stall if there is
no mempolicy for the thread being changed.

This is a temporary solution until all reads from mempolicy nodemasks can
be guaranteed to not be empty without the get_mems_allowed()
synchronization.

Also moves the check for nodemask intersection inside task_lock() so that
tsk->mems_allowed cannot change.  This ensures that nothing can set this
tsk's mems_allowed out from under us and also protects tsk->mempolicy.

Reported-by: Miao Xie <miaox@cn.fujitsu.com>
Signed-off-by: David Rientjes <rientjes@google.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Paul Menage <paul@paulmenage.org>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Showing 1 changed file with 24 additions and 5 deletions

@@ -123,6 +123,19 @@
 			    struct cpuset, css);
 }
 
+#ifdef CONFIG_NUMA
+static inline bool task_has_mempolicy(struct task_struct *task)
+{
+	return task->mempolicy;
+}
+#else
+static inline bool task_has_mempolicy(struct task_struct *task)
+{
+	return false;
+}
+#endif
+
+
 /* bits in struct cpuset flags field */
 typedef enum {
 	CS_CPU_EXCLUSIVE,
@@ -949,7 +962,7 @@
 static void cpuset_change_task_nodemask(struct task_struct *tsk,
 					nodemask_t *newmems)
 {
-	bool masks_disjoint = !nodes_intersects(*newmems, tsk->mems_allowed);
+	bool need_loop;
 
 repeat:
 	/*
@@ -962,6 +975,14 @@
 		return;
 
 	task_lock(tsk);
+	/*
+	 * Determine if a loop is necessary if another thread is doing
+	 * get_mems_allowed().  If at least one node remains unchanged and
+	 * tsk does not have a mempolicy, then an empty nodemask will not be
+	 * possible when mems_allowed is larger than a word.
+	 */
+	need_loop = task_has_mempolicy(tsk) ||
+			!nodes_intersects(*newmems, tsk->mems_allowed);
 	nodes_or(tsk->mems_allowed, tsk->mems_allowed, *newmems);
 	mpol_rebind_task(tsk, newmems, MPOL_REBIND_STEP1);
 
@@ -981,11 +1002,9 @@
 
 	/*
 	 * Allocation of memory is very fast, we needn't sleep when waiting
-	 * for the read-side.  No wait is necessary, however, if at least one
-	 * node remains unchanged.
+	 * for the read-side.
 	 */
-	while (masks_disjoint &&
-			ACCESS_ONCE(tsk->mems_allowed_change_disable)) {
+	while (need_loop && ACCESS_ONCE(tsk->mems_allowed_change_disable)) {
 		task_unlock(tsk);
 		if (!task_curr(tsk))
 			yield();