Commit 7765be2fec0f476fcd61812d5f9406b04c765020

Authored by Paul E. McKenney
Committed by Paul E. McKenney
1 parent 131906b006

rcu: Fix RCU_BOOST race handling current->rcu_read_unlock_special

The RCU_BOOST commits for TREE_PREEMPT_RCU introduced an other-task
write to a new RCU_READ_UNLOCK_BOOSTED bit in the task_struct structure's
->rcu_read_unlock_special field, but, as noted by Steven Rostedt, without
correctly synchronizing all accesses to ->rcu_read_unlock_special.
This could result in bits in ->rcu_read_unlock_special being spuriously
set and cleared due to conflicting accesses, which in turn could result
in deadlocks between the rcu_node structure's ->lock and the scheduler's
rq and pi locks.  These deadlocks would result from RCU incorrectly
believing that the just-ended RCU read-side critical section had been
preempted and/or boosted.  If that RCU read-side critical section was
executed with either rq or pi locks held, RCU's ensuing (incorrect)
calls to the scheduler would cause the scheduler to attempt to once
again acquire the rq and pi locks, resulting in deadlock.  More complex
deadlock cycles are also possible, involving multiple rq and pi locks
as well as locks from multiple rcu_node structures.
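
To make the failure mode concrete, here is a userspace sketch (not the
kernel's code; the flag values and variable names are stand-ins) of the
lost-update race on a shared flags byte: both sides read the same initial
value, compute an update from their stale copy, and the last store
silently discards the other's bit:

/* Illustrative userspace sketch only -- not kernel code.  This
 * simulates one bad interleaving of two unsynchronized
 * read-modify-write sequences on a single flags byte. */
#include <stdio.h>

#define UNLOCK_BLOCKED 0x01	/* stand-in for RCU_READ_UNLOCK_BLOCKED */
#define UNLOCK_BOOSTED 0x02	/* stand-in for RCU_READ_UNLOCK_BOOSTED */

static char special = UNLOCK_BLOCKED;	/* plays ->rcu_read_unlock_special */

int main(void)
{
	char unlocker = special;	/* task ending its critical section */
	char booster = special;		/* other task granting a boost */

	unlocker &= ~UNLOCK_BLOCKED;	/* unlocker wants 0x00 */
	booster |= UNLOCK_BOOSTED;	/* booster wants 0x03 */

	special = unlocker;		/* first store... */
	special = booster;		/* ...second store wins: 0x03 */

	printf("special = %#x: BLOCKED spuriously set again\n", special);
	return 0;
}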

This commit fixes the synchronization by creating a new ->rcu_boosted
field in the task_struct structure that is accessed and modified only
while holding the ->lock of the rcu_node structure on which the task
is queued (on that rcu_node
structure's ->blkd_tasks list).  This results in tasks accessing only
their own current->rcu_read_unlock_special fields, making unsynchronized
access once again legal, and keeping the rcu_read_unlock() fastpath free
of atomic instructions and memory barriers.
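
A minimal sketch of that locking discipline, with pthread_mutex_t
standing in for the rcu_node structure's ->lock and deliberately
simplified structs (illustrative only, not the kernel's code):

/* Both sides touch ->rcu_boosted only while holding the rcu_node
 * lock, so no other synchronization is needed for that field. */
#include <pthread.h>

#define UNLOCK_BOOSTED 0x04	/* stand-in for RCU_READ_UNLOCK_BOOSTED */

struct rcu_node { pthread_mutex_t lock; };
struct task { int rcu_boosted; };

/* Boost side (cf. rcu_boost()): mark the task as boosted. */
static void boost(struct rcu_node *rnp, struct task *t)
{
	pthread_mutex_lock(&rnp->lock);
	t->rcu_boosted = 1;
	pthread_mutex_unlock(&rnp->lock);
}

/* Unlock side (cf. rcu_read_unlock_special()): snapshot and clear
 * ->rcu_boosted, folding it into a task-local copy of the flags. */
static char snapshot_boost(struct rcu_node *rnp, struct task *t, char special)
{
	pthread_mutex_lock(&rnp->lock);
	if (t->rcu_boosted) {
		special |= UNLOCK_BOOSTED;
		t->rcu_boosted = 0;
	}
	pthread_mutex_unlock(&rnp->lock);
	return special;
}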

The reason that the rcu_read_unlock() fastpath does not need to access
the new current->rcu_boosted field is that this new field cannot
be non-zero unless the RCU_READ_UNLOCK_BLOCKED bit is set in the
current->rcu_read_unlock_special field.  Therefore, rcu_read_unlock()
need only test current->rcu_read_unlock_special: if that is zero, then
current->rcu_boosted must also be zero.
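
Schematically, the fastpath can therefore remain a plain load and test,
roughly as in this simplified sketch (stand-in names throughout,
including the hypothetical unlock_slowpath(); the real code is
__rcu_read_unlock()):

/* One ordinary load of ->rcu_read_unlock_special decides everything:
 * if it is zero, ->rcu_boosted must be zero too, so the slowpath
 * (which takes the rcu_node lock and checks ->rcu_boosted) is
 * skipped without atomics or memory barriers. */
struct task {
	int rcu_read_lock_nesting;
	char rcu_read_unlock_special;
	int rcu_boosted;
};

void unlock_slowpath(struct task *t);	/* hypothetical stand-in name */

void read_unlock_sketch(struct task *t)
{
	if (--t->rcu_read_lock_nesting == 0 &&
	    t->rcu_read_unlock_special != 0)
		unlock_slowpath(t);	/* only place ->rcu_boosted matters */
}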

This bug does not affect TINY_PREEMPT_RCU because this implementation
of RCU accesses current->rcu_read_unlock_special with irqs disabled,
thus preventing races on the !SMP systems that TINY_PREEMPT_RCU runs on.
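
The uniprocessor pattern amounts to the following shape (a paraphrase,
not the exact rcutiny code, and clear_blocked_bit() is a hypothetical
helper name): with only one CPU, nothing else can run between
local_irq_save() and local_irq_restore(), so the plain read-modify-write
cannot race:

/* Kernel-style fragment, paraphrased.  On !SMP, disabling irqs
 * excludes every interleaving that could touch the field. */
static void clear_blocked_bit(struct task_struct *t)
{
	unsigned long flags;

	local_irq_save(flags);
	t->rcu_read_unlock_special &= ~RCU_READ_UNLOCK_BLOCKED;
	local_irq_restore(flags);
}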

Maybe-reported-by: Dave Jones <davej@redhat.com>
Maybe-reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Reported-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>

Showing 2 changed files with 9 additions and 2 deletions

include/linux/sched.h
@@ -1254,6 +1254,9 @@
 #ifdef CONFIG_PREEMPT_RCU
 	int rcu_read_lock_nesting;
 	char rcu_read_unlock_special;
+#if defined(CONFIG_RCU_BOOST) && defined(CONFIG_TREE_PREEMPT_RCU)
+	int rcu_boosted;
+#endif /* #if defined(CONFIG_RCU_BOOST) && defined(CONFIG_TREE_PREEMPT_RCU) */
 	struct list_head rcu_node_entry;
 #endif /* #ifdef CONFIG_PREEMPT_RCU */
 #ifdef CONFIG_TREE_PREEMPT_RCU
kernel/rcutree_plugin.h
@@ -342,6 +342,11 @@
 #ifdef CONFIG_RCU_BOOST
 	if (&t->rcu_node_entry == rnp->boost_tasks)
 		rnp->boost_tasks = np;
+	/* Snapshot and clear ->rcu_boosted with rcu_node lock held. */
+	if (t->rcu_boosted) {
+		special |= RCU_READ_UNLOCK_BOOSTED;
+		t->rcu_boosted = 0;
+	}
 #endif /* #ifdef CONFIG_RCU_BOOST */
 	t->rcu_blocked_node = NULL;
 
@@ -358,7 +363,6 @@
 #ifdef CONFIG_RCU_BOOST
 	/* Unboost if we were boosted. */
 	if (special & RCU_READ_UNLOCK_BOOSTED) {
-		t->rcu_read_unlock_special &= ~RCU_READ_UNLOCK_BOOSTED;
 		rt_mutex_unlock(t->rcu_boost_mutex);
 		t->rcu_boost_mutex = NULL;
 	}
@@ -1176,7 +1180,7 @@
 	t = container_of(tb, struct task_struct, rcu_node_entry);
 	rt_mutex_init_proxy_locked(&mtx, t);
 	t->rcu_boost_mutex = &mtx;
-	t->rcu_read_unlock_special |= RCU_READ_UNLOCK_BOOSTED;
+	t->rcu_boosted = 1;
 	raw_spin_unlock_irqrestore(&rnp->lock, flags);
 	rt_mutex_lock(&mtx); /* Side effect: boosts task t's priority. */
 	rt_mutex_unlock(&mtx); /* Keep lockdep happy. */