Commit 9439aab8dbc33c2c03c3a19dba267360383ba38c

Authored by Suresh Siddha
Committed by Ingo Molnar
1 parent c41917df8a

[PATCH] sched: fix newly idle load balance in case of SMT

In the presence of SMT, newly idle load balancing never happened for the
multi-core and SMP domains (even when both logical siblings are idle).

If thread 0 is already idle and thread 1 is about to go idle, the newly
idle load balance path always concludes that one of the threads is not
idle and skips the newly idle load balance for the multi-core and SMP
domains.

This is because of the idle_cpu() check, which tests whether the current
task on a cpu is the idle task. That is not yet true for the thread
doing load_balance_newidle(): it is still running the task that is about
to sleep, so idle_cpu() reports it as busy.

Fix this by using the runqueue's nr_running field instead of idle_cpu().
Also skip the 'only one idle cpu in the group will do the load
balancing' logic in the newly idle case.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

Showing 1 changed file with 5 additions and 3 deletions

... ... @@ -2235,7 +2235,7 @@
2235 2235  
2236 2236 rq = cpu_rq(i);
2237 2237  
2238   - if (*sd_idle && !idle_cpu(i))
  2238 + if (*sd_idle && rq->nr_running)
2239 2239 *sd_idle = 0;
2240 2240  
2241 2241 /* Bias balancing toward cpus of our domain */
2242 2242  
... ... @@ -2257,9 +2257,11 @@
2257 2257 /*
2258 2258 * First idle cpu or the first cpu(busiest) in this sched group
2259 2259 * is eligible for doing load balancing at this and above
2260   - * domains.
  2260 + * domains. In the newly idle case, we will allow all the cpu's
  2261 + * to do the newly idle load balance.
2261 2262 */
2262   - if (local_group && balance_cpu != this_cpu && balance) {
  2263 + if (idle != CPU_NEWLY_IDLE && local_group &&
  2264 + balance_cpu != this_cpu && balance) {
2263 2265 *balance = 0;
2264 2266 goto ret;
2265 2267 }