Commit b9f8fcd55bbdb037e5332dbdb7b494f0b70861ac

Authored by David Miller
Committed by Ingo Molnar
1 parent 1a551ae715

sched: Fix cpu_clock() in NMIs, on !CONFIG_HAVE_UNSTABLE_SCHED_CLOCK

Relax stable-sched-clock architectures to not save/disable/restore
hardirqs in cpu_clock().

The background is that I was trying to resolve a sparc64 perf
issue when I discovered this problem.

On sparc64 I implement pseudo-NMIs by simply running the kernel
at IRQ level 14 when local_irq_disable() is called; this allows
performance counter events to still come in at IRQ level 15.

This doesn't work if any code in an NMI handler does
local_irq_save() or local_irq_disable(), since the "disable" will
kick us back down to cpu IRQ level 14, letting NMIs back in, and
we recurse.

The only path that does that in the perf event IRQ handling
code is the code supporting frequency-based events.  It uses
cpu_clock().

cpu_clock() simply invokes sched_clock() with IRQs disabled.

And that's a fundamental bug all on its own, particularly for
the HAVE_UNSTABLE_SCHED_CLOCK case.  NMIs can thus get into the
sched_clock() code interrupting the local IRQ disable code
sections of it.

Furthermore, for the not-HAVE_UNSTABLE_SCHED_CLOCK case, the IRQ
disabling done by cpu_clock() is just pure overhead and
completely unnecessary.

So the core problem is that sched_clock() is not NMI safe, but
we are invoking it from NMI contexts in the perf events code
(via cpu_clock()).

A less important issue is the overhead of IRQ disabling when it
isn't necessary in cpu_clock().

CONFIG_HAVE_UNSTABLE_SCHED_CLOCK architectures are not
affected by this patch.

Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mike Galbraith <efault@gmx.de>
LKML-Reference: <20091213.182502.215092085.davem@davemloft.net>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

Showing 1 changed file with 15 additions and 8 deletions

kernel/sched_clock.c

@@ -236,6 +236,18 @@
 }
 EXPORT_SYMBOL_GPL(sched_clock_idle_wakeup_event);
 
+unsigned long long cpu_clock(int cpu)
+{
+	unsigned long long clock;
+	unsigned long flags;
+
+	local_irq_save(flags);
+	clock = sched_clock_cpu(cpu);
+	local_irq_restore(flags);
+
+	return clock;
+}
+
 #else /* CONFIG_HAVE_UNSTABLE_SCHED_CLOCK */
 
 void sched_clock_init(void)
@@ -251,18 +263,13 @@
 	return sched_clock();
 }
 
-#endif /* CONFIG_HAVE_UNSTABLE_SCHED_CLOCK */
 
 unsigned long long cpu_clock(int cpu)
 {
-	unsigned long long clock;
-	unsigned long flags;
+	return sched_clock_cpu(cpu);
+}
 
-	local_irq_save(flags);
-	clock = sched_clock_cpu(cpu);
-	local_irq_restore(flags);
+#endif /* CONFIG_HAVE_UNSTABLE_SCHED_CLOCK */
 
-	return clock;
-}
 EXPORT_SYMBOL_GPL(cpu_clock);