Commit f3b577dec1f2ce32d2db6d2ca6badff7002512af
Committed by: Paul E. McKenney
Parent: 7e27d6e778
Exists in: master and 39 other branches
rcu: apply RCU protection to wake_affine()
The task_group() function returns a pointer that must be protected by either RCU, the ->alloc_lock, or the cgroup lock (see the rcu_dereference_check() in task_subsys_state(), which is invoked by task_group()). The wake_affine() function currently does none of these, which means that a concurrent update would be within its rights to free the structure returned by task_group(). Because wake_affine() uses this structure only to compute load-balancing heuristics, there is no reason to acquire either of the two locks.

Therefore, this commit introduces an RCU read-side critical section that starts before the first call to task_group() and ends after the last use of the "tg" pointer returned from task_group(). Thanks to Li Zefan for pointing out the need to extend the RCU read-side critical section from that proposed by the original patch.

Signed-off-by: Daniel J Blueman <daniel.blueman@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Showing 1 changed file with 2 additions and 0 deletions
kernel/sched_fair.c
@@ -1240,6 +1240,7 @@
 	 * effect of the currently running task from the load
 	 * of the current CPU:
 	 */
+	rcu_read_lock();
 	if (sync) {
 		tg = task_group(current);
 		weight = current->se.load.weight;
@@ -1275,6 +1276,7 @@
 		balanced = this_eff_load <= prev_eff_load;
 	} else
 		balanced = true;
+	rcu_read_unlock();
 
 	/*
 	 * If the currently running task will sleep within