Commit 5bfd126e80dca70431aef8fdbc1cf14535f3c338

Authored by Juri Lelli
Committed by Ingo Molnar
1 parent 2d513868e2

sched/deadline: Fix sched_yield() behavior

yield_task_dl() is broken:

 o it forces current to be throttled by setting its runtime to zero;
 o it sets current's dl_se->dl_new to one, expecting that dl_task_timer()
   will queue it back with proper parameters at replenish time.

Unfortunately, dl_task_timer() has this check at the very beginning:

	if (!dl_task(p) || dl_se->dl_new)
		goto unlock;

So it just bails out and the task is never replenished: it effectively
yields forever.

To fix this, introduce a new flag indicating that the task properly yielded
the CPU before its current runtime expired. While this is a bit of overkill
at the moment, the flag will be useful in the future to discriminate between
"good" jobs (whose remaining runtime could be reclaimed, i.e. recycled)
and "bad" jobs (for which dl_throttled has been set) that need to be
stopped.

Reported-by: yjay.kim <yjay.kim@lge.com>
Signed-off-by: Juri Lelli <juri.lelli@gmail.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20140429103953.e68eba1b2ac3309214e3dc5a@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

Showing 3 changed files with 9 additions and 4 deletions

include/linux/sched.h
@@ -1153,9 +1153,12 @@
  *
  * @dl_boosted tells if we are boosted due to DI. If so we are
  * outside bandwidth enforcement mechanism (but only until we
- * exit the critical section).
+ * exit the critical section);
+ *
+ * @dl_yielded tells if task gave up the cpu before consuming
+ * all its available runtime during the last job.
  */
-	int dl_throttled, dl_new, dl_boosted;
+	int dl_throttled, dl_new, dl_boosted, dl_yielded;
 
 	/*
 	 * Bandwidth enforcement timer. Each -deadline task has its
kernel/sched/core.c
@@ -3124,6 +3124,7 @@
 	dl_se->dl_bw = to_ratio(dl_se->dl_period, dl_se->dl_runtime);
 	dl_se->dl_throttled = 0;
 	dl_se->dl_new = 1;
+	dl_se->dl_yielded = 0;
 }
 
 static void __setscheduler_params(struct task_struct *p,
kernel/sched/deadline.c
@@ -528,6 +528,7 @@
 	sched_clock_tick();
 	update_rq_clock(rq);
 	dl_se->dl_throttled = 0;
+	dl_se->dl_yielded = 0;
 	if (p->on_rq) {
 		enqueue_task_dl(rq, p, ENQUEUE_REPLENISH);
 		if (task_has_dl_policy(rq->curr))

@@ -893,10 +894,10 @@
 	 * We make the task go to sleep until its current deadline by
 	 * forcing its runtime to zero. This way, update_curr_dl() stops
 	 * it and the bandwidth timer will wake it up and will give it
-	 * new scheduling parameters (thanks to dl_new=1).
+	 * new scheduling parameters (thanks to dl_yielded=1).
 	 */
 	if (p->dl.runtime > 0) {
-		rq->curr->dl.dl_new = 1;
+		rq->curr->dl.dl_yielded = 1;
 		p->dl.runtime = 0;
 	}
 	update_curr_dl(rq);