Commit 99e5ada9407cc19d7c4c05ce2165f20dc46fc093
Parent: 5811d9963e
Exists in master and in 20 other branches
nohz: Re-evaluate the tick for the new task after a context switch
When a task is scheduled in, it may have some properties of its own that
could make the CPU reconsider the need for the tick: posix cpu timers,
perf events, ...

So notify the full dynticks subsystem when a task gets scheduled in and
re-check the tick dependency at this stage. This is done through a self
IPI to avoid messing up with any current lock scenario.

Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Geoff Levand <geoff@infradead.org>
Cc: Gilad Ben Yossef <gilad@benyossef.com>
Cc: Hakan Akkan <hakanakkan@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kevin Hilman <khilman@linaro.org>
Cc: Li Zhong <zhong@linux.vnet.ibm.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Showing 3 changed files with 24 additions and 0 deletions.
include/linux/tick.h
@@ -163,12 +163,14 @@
 extern void tick_nohz_full_check(void);
 extern void tick_nohz_full_kick(void);
 extern void tick_nohz_full_kick_all(void);
+extern void tick_nohz_task_switch(struct task_struct *tsk);
 #else
 static inline void tick_nohz_init(void) { }
 static inline int tick_nohz_full_cpu(int cpu) { return 0; }
 static inline void tick_nohz_full_check(void) { }
 static inline void tick_nohz_full_kick(void) { }
 static inline void tick_nohz_full_kick_all(void) { }
+static inline void tick_nohz_task_switch(struct task_struct *tsk) { }
 #endif
kernel/sched/core.c
kernel/time/tick-sched.c
@@ -232,6 +232,26 @@
 	preempt_enable();
 }
 
+/*
+ * Re-evaluate the need for the tick as we switch the current task.
+ * It might need the tick due to per task/process properties:
+ * perf events, posix cpu timers, ...
+ */
+void tick_nohz_task_switch(struct task_struct *tsk)
+{
+	unsigned long flags;
+
+	if (!tick_nohz_full_cpu(smp_processor_id()))
+		return;
+
+	local_irq_save(flags);
+
+	if (tick_nohz_tick_stopped() && !can_stop_full_tick())
+		tick_nohz_full_kick();
+
+	local_irq_restore(flags);
+}
+
 int tick_nohz_full_cpu(int cpu)
 {
 	if (!have_nohz_full_mask)