Commit 4fe757dd48a9e95e1a071291f15dda5421dacb66

Authored by Peter Zijlstra
Committed by Ingo Molnar
1 parent 7d44ec193d

perf: Fix throttle logic

It was possible to call pmu::start() on an already running event. In
particular this led to some wreckage, as the hrtimer events would
re-initialize active timers.

This was due to throttled events being activated again by scheduling.
Scheduling in a context would add and force-start events, resulting in
running events that could still carry a throttle status. The next tick
to hit that task would then try to unthrottle the event and call
->start() on an already running event.

Reported-by: Jeff Moyer <jmoyer@redhat.com>
Cc: <stable@kernel.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
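
For context, the tick-side path that tripped over this looks roughly as
follows. This is a simplified sketch of perf_ctx_adjust_freq() as it stood
in kernel/perf_event.c around this commit, with the frequency-adjustment
logic elided; it is illustrative, not the verbatim source:

static void perf_ctx_adjust_freq(struct perf_event_context *ctx, u64 period)
{
	struct perf_event *event;
	struct hw_perf_event *hwc;
	u64 interrupts;

	raw_spin_lock(&ctx->lock);
	list_for_each_entry_rcu(event, &ctx->event_list, event_entry) {
		if (event->state != PERF_EVENT_STATE_ACTIVE)
			continue;

		hwc = &event->hw;

		interrupts = hwc->interrupts;
		hwc->interrupts = 0;

		/*
		 * A throttled event carries hw.interrupts == MAX_INTERRUPTS.
		 * Before this fix, event_sched_in() could force-start such
		 * an event on a context switch without clearing that mark,
		 * so the ->start() below would then be called on an event
		 * that is already running, re-initializing an active hrtimer.
		 */
		if (interrupts == MAX_INTERRUPTS) {
			perf_log_throttle(event, 1);
			event->pmu->start(event, 0);
		}

		/* ... sample-period/frequency adjustment elided ... */
	}
	raw_spin_unlock(&ctx->lock);
}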

Showing 1 changed file with 15 additions and 4 deletions

... ... @@ -782,6 +782,10 @@
782 782 raw_spin_unlock_irq(&ctx->lock);
783 783 }
784 784  
  785 +#define MAX_INTERRUPTS (~0ULL)
  786 +
  787 +static void perf_log_throttle(struct perf_event *event, int enable);
  788 +
785 789 static int
786 790 event_sched_in(struct perf_event *event,
787 791 struct perf_cpu_context *cpuctx,
788 792  
... ... @@ -794,7 +798,18 @@
794 798  
795 799 event->state = PERF_EVENT_STATE_ACTIVE;
796 800 event->oncpu = smp_processor_id();
  801 +
797 802 /*
  803 + * Unthrottle events, since we scheduled we might have missed several
  804 + * ticks already, also for a heavily scheduling task there is little
  805 + * guarantee it'll get a tick in a timely manner.
  806 + */
  807 + if (unlikely(event->hw.interrupts == MAX_INTERRUPTS)) {
  808 + perf_log_throttle(event, 1);
  809 + event->hw.interrupts = 0;
  810 + }
  811 +
  812 + /*
798 813 * The new state must be visible before we turn it on in the hardware:
799 814 */
800 815 smp_wmb();
... ... @@ -1595,10 +1610,6 @@
1595 1610 perf_event_context_sched_in(ctx);
1596 1611 }
1597 1612 }
1598   -
1599   -#define MAX_INTERRUPTS (~0ULL)
1600   -
1601   -static void perf_log_throttle(struct perf_event *event, int enable);
1602 1613  
1603 1614 static u64 perf_calculate_period(struct perf_event *event, u64 nsec, u64 count)
1604 1615 {
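
The hunk at line 1595 removes MAX_INTERRUPTS and the perf_log_throttle()
forward declaration from their old spot; the first hunk re-adds them above
event_sched_in(), which now uses them. With the check in event_sched_in(),
a throttled event has its hw.interrupts mark cleared (and the unthrottle
logged) as part of being scheduled back in, so the next tick no longer
calls ->start() on an event that is already running.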