Commit 6c6c54e1807faf116724451ef2bd14993780470a

Authored by Peter Zijlstra
Committed by Ingo Molnar
1 parent f2513cde93

sched: Fix/clarify set_task_cpu() locking rules

Sergey reported a CONFIG_PROVE_RCU warning in push_rt_task, where
set_task_cpu() was called with both relevant rq->locks held. This
should be sufficient for a running task, since holding its rq->lock
will serialize against sched_move_task().

Update the comments and fix the task_group() lockdep test.

Reported-and-tested-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1307115427.2353.3456.camel@twins
Signed-off-by: Ingo Molnar <mingo@elte.hu>

Showing 1 changed file with 16 additions and 5 deletions

... ... @@ -605,10 +605,10 @@
605 605 /*
606 606 * Return the group to which this task belongs.
607 607 *
608   - * We use task_subsys_state_check() and extend the RCU verification
609   - * with lockdep_is_held(&p->pi_lock) because cpu_cgroup_attach()
610   - * holds that lock for each task it moves into the cgroup. Therefore
611   - * by holding that lock, we pin the task to the current cgroup.
  608 + * We use task_subsys_state_check() and extend the RCU verification with
  609 + * pi->lock and rq->lock because cpu_cgroup_attach() holds those locks for each
  610 + * task it moves into the cgroup. Therefore by holding either of those locks,
  611 + * we pin the task to the current cgroup.
612 612 */
613 613 static inline struct task_group *task_group(struct task_struct *p)
614 614 {
... ... @@ -616,7 +616,8 @@
616 616 struct cgroup_subsys_state *css;
617 617  
618 618 css = task_subsys_state_check(p, cpu_cgroup_subsys_id,
619   - lockdep_is_held(&p->pi_lock));
  619 + lockdep_is_held(&p->pi_lock) ||
  620 + lockdep_is_held(&task_rq(p)->lock));
620 621 tg = container_of(css, struct task_group, css);
621 622  
622 623 return autogroup_task_group(p, tg);
... ... @@ -2200,6 +2201,16 @@
2200 2201 !(task_thread_info(p)->preempt_count & PREEMPT_ACTIVE));
2201 2202  
2202 2203 #ifdef CONFIG_LOCKDEP
  2204 + /*
  2205 + * The caller should hold either p->pi_lock or rq->lock, when changing
  2206 + * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
  2207 + *
  2208 + * sched_move_task() holds both and thus holding either pins the cgroup,
  2209 + * see set_task_rq().
  2210 + *
  2211 + * Furthermore, all task_rq users should acquire both locks, see
  2212 + * task_rq_lock().
  2213 + */
2203 2214 WARN_ON_ONCE(debug_locks && !(lockdep_is_held(&p->pi_lock) ||
2204 2215 lockdep_is_held(&task_rq(p)->lock)));
2205 2216 #endif