Commit 6496968e6cc3f01faafa63a5a28549a708539ac0

Authored by Mathieu Desnoyers
Committed by Linus Torvalds
1 parent 629c8b4cdb

markers: use synchronize_sched()

Markers do not mix well with CONFIG_PREEMPT_RCU because the marker code uses
preempt_disable/enable() rather than rcu_read_lock/unlock(), for minimal
intrusiveness.  We would need call_rcu_sched and rcu_barrier_sched primitives.

Currently, modifying (connecting and disconnecting) the probes attached to
markers changes the data structures in RCU style: a new data structure is
created, the pointer is switched atomically, a quiescent state is reached,
and then the old data structure is freed.
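
As a hedged illustration, that update pattern looks roughly like the sketch
below.  All names here (probe_closure, active_closure, free_old_closure_cb,
replace_closure) are hypothetical, for illustration only, not the actual
marker code:

#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct probe_closure {
	void (*probe_func)(void *data);
	struct rcu_head rcu;
};

static struct probe_closure *active_closure;

static void free_old_closure_cb(struct rcu_head *head)
{
	kfree(container_of(head, struct probe_closure, rcu));
}

/* Caller is assumed to hold the mutex serializing updaters. */
static int replace_closure(void (*new_func)(void *data))
{
	struct probe_closure *new, *old;

	new = kmalloc(sizeof(*new), GFP_KERNEL);
	if (!new)
		return -ENOMEM;
	new->probe_func = new_func;

	old = active_closure;			/* updater-side load, under mutex */
	rcu_assign_pointer(active_closure, new);/* atomic pointer switch */
	if (old)				/* free after a quiescent state */
		call_rcu(&old->rcu, free_old_closure_cb);
	return 0;
}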

The quiescent state is reached once all currently running preempt_disable()
regions have completed.  We use the call_rcu mechanism to execute kfree()
after such a quiescent state has been reached.  However, the
CONFIG_PREEMPT_RCU implementation of call_rcu and rcu_barrier does not
guarantee that all preempt_disable() code regions have finished, hence the
race.
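
A hedged sketch of the reader side, continuing the hypothetical example
above (marker_fast_path is also a made-up name), shows the mismatch
concretely:

#include <linux/preempt.h>
#include <linux/rcupdate.h>

/* Reader side: markers bracket the probe call with preemption
 * disabling only, never rcu_read_lock(). */
static void marker_fast_path(void *data)
{
	struct probe_closure *c;

	preempt_disable();
	c = rcu_dereference(active_closure);
	if (c)
		c->probe_func(data);
	/*
	 * Classic RCU treats a preempt_disable() region as a read-side
	 * critical section, so call_rcu() on the update side cannot
	 * free 'c' while this runs.  Under CONFIG_PREEMPT_RCU, the
	 * grace period waits only for rcu_read_lock() sections, so 'c'
	 * may be kfree()d while the probe is still executing.
	 */
	preempt_enable();
}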

The "proper" way to do this is to use rcu_read_lock/unlock, but we don't
want to use it to minimize intrusiveness on the traced system.  (we do
not want the marker code to call into much of the OS code, because it
would quickly restrict what can and cannot be instrumented, such as the
scheduler).

The temporary fix, until we get call_rcu_sched and rcu_barrier_sched in
mainline, is to use synchronize_sched() before each call_rcu() call, so that
we wait for the quiescent state in the system call code path.  It will slow
down batch marker enable/disable, but makes sure the race is gone.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@polymtl.ca>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Showing 1 changed file with 9 additions and 0 deletions

@@ -671,6 +671,9 @@
 	entry->rcu_pending = 1;
 	/* write rcu_pending before calling the RCU callback */
 	smp_wmb();
+#ifdef CONFIG_PREEMPT_RCU
+	synchronize_sched();	/* Until we have the call_rcu_sched() */
+#endif
 	call_rcu(&entry->rcu, free_old_closure);
 end:
 	mutex_unlock(&markers_mutex);
@@ -714,6 +717,9 @@
 	entry->rcu_pending = 1;
 	/* write rcu_pending before calling the RCU callback */
 	smp_wmb();
+#ifdef CONFIG_PREEMPT_RCU
+	synchronize_sched();	/* Until we have the call_rcu_sched() */
+#endif
 	call_rcu(&entry->rcu, free_old_closure);
 	remove_marker(name);	/* Ignore busy error message */
 	ret = 0;
@@ -792,6 +798,9 @@
 	entry->rcu_pending = 1;
 	/* write rcu_pending before calling the RCU callback */
 	smp_wmb();
+#ifdef CONFIG_PREEMPT_RCU
+	synchronize_sched();	/* Until we have the call_rcu_sched() */
+#endif
 	call_rcu(&entry->rcu, free_old_closure);
 	remove_marker(entry->name);	/* Ignore busy error message */
 end: