15 Mar, 2010

1 commit

  • This should turn on instruction counting on P4s, which was missing in
    the first version of the new PMU driver.

    It's inaccurate for now: we still need a dependent event to tag mops
    before we can count them precisely. The result is that the reported
    number of instructions may be inflated.

    Signed-off-by: Cyrill Gorcunov
    Signed-off-by: Lin Ming
    Cc: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Cyrill Gorcunov
     

13 Mar, 2010

2 commits

  • Ingo reported:

    |
    | There's a build failure on -tip with the P4 driver, on UP 32-bit, if
    | PERF_EVENTS is enabled but UP_APIC is disabled:
    |
    | arch/x86/built-in.o: In function `p4_pmu_handle_irq':
    | perf_event.c:(.text+0xa756): undefined reference to `apic'
    | perf_event.c:(.text+0xa76e): undefined reference to `apic'
    |

    So we have to unmask the LVTPC only if the kernel is configured with a
    local APIC.
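
    A minimal sketch of the kind of guard implied here, assuming the unmask
    sits in the handler's "handled" path (the exact placement is an
    assumption):

    #ifdef CONFIG_X86_LOCAL_APIC
            /* only touch the LVTPC when a local APIC is configured in */
            apic_write(APIC_LVTPC, apic_read(APIC_LVTPC) & ~APIC_LVT_MASKED);
    #endif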

    Reported-by: Ingo Molnar
    Signed-off-by: Cyrill Gorcunov
    CC: Lin Ming
    CC: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Cyrill Gorcunov
     
  • Merge reason: The new P4 driver is stable and ready now for more
    testing.

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     

12 Mar, 2010

3 commits

  • Merge reason: We want to queue up a dependent patch.

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • If x86_pmu is not assigned and only software events are in use, a NULL
    dereference may be hit via the x86_pmu::schedule_events method.

    Fix it by checking whether x86_pmu is initialized at all.
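
    A sketch of such a guard, assuming a helper along the lines of
    x86_pmu_initialized() and a bail-out before the schedule_events
    dereference (the exact call site is an assumption):

    static int x86_pmu_initialized(void)
    {
            /* x86_pmu stays zeroed when no hardware PMU was detected */
            return x86_pmu.handle_irq != NULL;
    }

            /* ... in the group scheduling path ... */
            if (!x86_pmu_initialized())
                    return 0;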

    Signed-off-by: Cyrill Gorcunov
    Cc: Lin Ming
    Cc: Arnaldo Carvalho de Melo
    Cc: Stephane Eranian
    Cc: Robert Richter
    Cc: Frederic Weisbecker
    Cc: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Cyrill Gorcunov
     
  • The netburst PMU is way different from the "architectural
    performance monitoring" specification that current CPUs use.
    P4 uses a tuple of ESCR+CCCR+COUNTER MSR registers to handle
    performance monitoring events.

    A few implementation details:

    1) We need a separate x86_pmu::hw_config helper in struct
    x86_pmu since register bit-fields are quite different from P6,
    Core and later cpu series.

    2) For the same reason an x86_pmu::schedule_events helper is
    introduced.

    3) hw_perf_event::config consists of packed ESCR+CCCR values.
    This is possible since in reality both registers only use half
    of their bits. Of course, before making a real write into a
    particular MSR we need to unpack the value and extend it to
    its proper size (see the packing sketch after this list).

    4) The tuple of packed ESCR+CCCR in hw_perf_event::config
    doesn't describe the memory address of the ESCR MSR register,
    so we need to keep a mapping between the tuples in use and the
    available ESCRs (various P4 events may use the same ESCR, but
    not simultaneously). For this purpose every active event has a
    per-cpu map from hw_perf_event::idx to ESCR addresses.

    5) Since hw_perf_event::idx is an offset into the counter/control
    registers, we need to lift X86_PMC_MAX_GENERIC up, otherwise the
    kernel strips it down to 8 registers and an armed event may never
    be turned off (ie the bit in active_mask is set but the loop never
    reaches this index to check it) -- thanks to Peter Zijlstra.
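
    To illustrate point 3, a small self-contained sketch of the
    packing/unpacking idea (the helper names and test values are made up
    here; they are not the driver's own macros):

    #include <assert.h>
    #include <stdint.h>

    /* ESCR value in the high 32 bits, CCCR value in the low 32 bits */
    static inline uint64_t p4_pack_config(uint32_t escr, uint32_t cccr)
    {
            return ((uint64_t)escr << 32) | cccr;
    }

    static inline uint32_t p4_unpack_escr(uint64_t config)
    {
            return (uint32_t)(config >> 32);
    }

    static inline uint32_t p4_unpack_cccr(uint64_t config)
    {
            return (uint32_t)config;
    }

    int main(void)
    {
            uint64_t config = p4_pack_config(0x0003b000u, 0x00039000u);

            /* before touching the MSRs the two halves are unpacked again */
            assert(p4_unpack_escr(config) == 0x0003b000u);
            assert(p4_unpack_cccr(config) == 0x00039000u);
            return 0;
    }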

    Restrictions:

    - No cascaded counters support (do we ever need them?)
    - No dependent events support (so PERF_COUNT_HW_INSTRUCTIONS
    doesn't work for now)
    - There are events with the same counters which can't work
    simultaneously (we need to use intersected ones due to broken counter 1)
    - No PERF_COUNT_HW_CACHE_ events yet

    Todo:

    - Implement dependent events
    - Need proper hashing for event opcodes (linear search is fine for the
    debugging stage but not under real loads)
    - Some events are counted per clock cycle -- need to set a threshold
    for them and count every clock cycle just to get summary statistics
    (ie to behave the same way as other PMUs do)
    - Need to switch to using event_constraints
    - To support RAW events we need to encode a global list of P4 events
    into p4_templates
    - Cache events need to be added

    Event support status matrix:

    Event               status
    ---------------------------------------------------------------------
    cycles              works
    cache-references    works
    cache-misses        works
    branch-misses       works
    bus-cycles          partially (does not work on 64bit cpu with HT enabled)
    instructions        doesn't work (needs a dependent event [mop tagging])
    branches            doesn't work

    Signed-off-by: Cyrill Gorcunov
    Signed-off-by: Lin Ming
    Cc: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: Stephane Eranian
    Cc: Robert Richter
    Cc: Frederic Weisbecker
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Cyrill Gorcunov
     

11 Mar, 2010

3 commits

  • Export perf_trace_regs and perf_arch_fetch_caller_regs since modules
    will use these.

    Signed-off-by: Xiao Guangrong
    [ use EXPORT_PER_CPU_SYMBOL_GPL() ]
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Xiao Guangrong
     
  • What happens is that we schedule badly like:

    -1987 [019] 280.252808: x86_pmu_start: event-46/1300c0: idx: 0
    -1987 [019] 280.252811: x86_pmu_start: event-47/1300c0: idx: 1
    -1987 [019] 280.252812: x86_pmu_start: event-48/1300c0: idx: 2
    -1987 [019] 280.252813: x86_pmu_start: event-49/1300c0: idx: 3
    -1987 [019] 280.252814: x86_pmu_start: event-50/1300c0: idx: 32
    -1987 [019] 280.252825: x86_pmu_stop: event-46/1300c0: idx: 0
    -1987 [019] 280.252826: x86_pmu_stop: event-47/1300c0: idx: 1
    -1987 [019] 280.252827: x86_pmu_stop: event-48/1300c0: idx: 2
    -1987 [019] 280.252828: x86_pmu_stop: event-49/1300c0: idx: 3
    -1987 [019] 280.252829: x86_pmu_stop: event-50/1300c0: idx: 32
    -1987 [019] 280.252834: x86_pmu_start: event-47/1300c0: idx: 1
    -1987 [019] 280.252834: x86_pmu_start: event-48/1300c0: idx: 2
    -1987 [019] 280.252835: x86_pmu_start: event-49/1300c0: idx: 3
    -1987 [019] 280.252836: x86_pmu_start: event-50/1300c0: idx: 32
    -1987 [019] 280.252837: x86_pmu_start: event-51/1300c0: idx: 32 *FAIL*

    This happens because we only iterate the n_running events in the first
    pass, and reset their index to -1 if they don't match to force a
    re-assignment.

    Now, in our RR example, n_running == 0 because we fully unscheduled, so
    event-50 will retain its idx==32, even though in scheduling it will have
    gotten idx=0, and we don't trigger the re-assign path.

    The easiest way to fix this is the below patch, which simply validates
    the full assignment in the second pass.
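
    A sketch of what validating the full assignment in the second pass could
    look like (helper names follow that era's arch/x86 perf code, but the
    exact diff is paraphrased, not quoted):

    /* second pass: (re)program every event, not just the newly added ones */
    for (i = 0; i < cpuc->n_events; i++) {
            event = cpuc->event_list[i];
            hwc = &event->hw;

            /* reprogram whenever the previous placement no longer matches */
            if (hwc->idx == -1 || !match_prev_assignment(hwc, cpuc, i)) {
                    x86_assign_hw_event(event, cpuc, i);
                    x86_perf_event_set_period(event, hwc, hwc->idx);
            }
    }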

    Reported-by: Stephane Eranian
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Fix:

    arch/powerpc/kernel/perf_event.c:1334: error: 'power_pmu_notifier' undeclared (first use in this function)
    arch/powerpc/kernel/perf_event.c:1334: error: (Each undeclared identifier is reported only once
    arch/powerpc/kernel/perf_event.c:1334: error: for each function it appears in.)
    arch/powerpc/kernel/perf_event.c:1334: error: implicit declaration of function 'power_pmu_notifier'
    arch/powerpc/kernel/perf_event.c:1334: error: implicit declaration of function 'register_cpu_notifier'

    Due to commit 3f6da390 (perf: Rework and fix the arch CPU-hotplug hooks).

    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

10 Mar, 2010

31 commits

  • Events that trigger overflows by interrupting a context can
    use get_irq_regs() or task_pt_regs() to retrieve the state
    when the event triggered. But this is not the case for some
    other classes of events, like trace events, as tracepoints are
    executed in the same context as the code that triggered
    the event.

    This means we need a different API to capture the regs there;
    namely we need a hot snapshot to get the most important
    information for perf: the instruction pointer to get the
    event origin, the frame pointer for the callchain, the code
    segment for user_mode() tests (we always use __KERNEL_CS as
    trace events always occur from the kernel) and the eflags
    for further purposes.

    v2: rename perf_save_regs to perf_fetch_caller_regs as per
    Masami's suggestion.
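
    A sketch of what such a hot snapshot captures on x86 (illustrative only;
    the _sketch suffix is deliberate and the real implementation differs in
    detail):

    static void perf_fetch_caller_regs_sketch(struct pt_regs *regs,
                                              unsigned long ip)
    {
            memset(regs, 0, sizeof(*regs));

            regs->ip = ip;                          /* event origin       */
            /* frame pointer for the callchain (the real code grabs the
             * caller's frame; we take our own here for illustration) */
            regs->bp = (unsigned long)__builtin_frame_address(0);
            regs->cs = __KERNEL_CS;                 /* for user_mode()    */
            local_save_flags(regs->flags);          /* eflags             */
    }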

    Signed-off-by: Frederic Weisbecker
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: H. Peter Anvin
    Cc: Peter Zijlstra
    Cc: Paul Mackerras
    Cc: Steven Rostedt
    Cc: Arnaldo Carvalho de Melo
    Cc: Masami Hiramatsu
    Cc: Jason Baron
    Cc: Archs

    Frederic Weisbecker
     
  • We were using the frame-pointer based stack walker in every
    context on x86-32, but not on x86-64, where we only used the
    seven-league boots on the exception stacks.

    Use it also on the irq and process stacks. This greatly
    accelerates the captures.

    Signed-off-by: Frederic Weisbecker
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Cc: Paul Mackerras
    Cc: Arnaldo Carvalho de Melo

    Frederic Weisbecker
     
  • Fix typo. But the modularization here is ugly and should be improved.

    Cc: Peter Zijlstra
    Cc: Frederic Weisbecker
    Cc: Paul Mackerras
    Cc: Arnaldo Carvalho de Melo
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • The PEBS+LBR decoding magic needs the insn_get_length() infrastructure
    to be able to decode x86 instruction lengths.

    So split it out of the KPROBES dependency and make it enabled when
    either KPROBES or PERF_EVENTS is enabled.

    Cc: Peter Zijlstra
    Cc: Masami Hiramatsu
    Cc: Frederic Weisbecker
    Cc: Paul Mackerras
    Cc: Arnaldo Carvalho de Melo
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • Don't decrement the TOS twice...

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Pull the core handler in line with the nhm one; also make sure we
    always drain the buffer.

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • We don't need the checking_{wr,rd}msr() calls, since we should know
    what CPU we're running on and not blindly poke at MSRs.

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • If we reset the LBR on each first counter, simple counter rotation which
    first deschedules all counters and then reschedules the new ones will
    lead to LBR reset, even though we're still in the same task context.

    Reduce this by not flushing on the first counter but only flushing on
    different task contexts.

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • We need to use the actual cpuc->pebs_enabled value, not a local copy for
    the changes to take effect.

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • It's unclear whether the PEBS state record will have only a single bit
    set; in case it does not and accumulates bits, deal with that by only
    processing each event once.

    Also, robustify some of the code.

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • The documentation says we have to enable PEBS before we enable the PMU
    proper.

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • We should never call ->enable with the pmu enabled, and we _can_ have
    ->disable called with the pmu enabled.

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • We should never call ->enable with the pmu enabled, and we _can_ have
    ->disable called with the pmu enabled.

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • I overlooked the perf_disable()/perf_enable() calls in
    intel_pmu_handle_irq() (pointed out by Markus), so we should not
    explicitly disable_all/enable_all PEBS counters in the drain functions;
    these are already disabled, and enabling them early is confusing.

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Calling ioctl(PERF_EVENT_IOC_DISABLE) on a throttled counter would
    result in a double disable; cure this by using x86_pmu_{start,stop} for
    throttle/unthrottle and teach x86_pmu_stop() to check ->active_mask.
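
    A sketch of the ->active_mask check in x86_pmu_stop() (paraphrased from
    that era's code, not the verbatim patch):

    static void x86_pmu_stop(struct perf_event *event)
    {
            struct cpu_hw_events *cpuc = &__get_cpu_var(cpu_hw_events);
            struct hw_perf_event *hwc = &event->hw;

            /* already stopped (e.g. by the throttle path)? then do nothing */
            if (!__test_and_clear_bit(hwc->idx, cpuc->active_mask))
                    return;

            x86_pmu.disable(event);
            cpuc->events[hwc->idx] = NULL;
    }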

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • It turns out the LBR is massively unreliable on certain CPUs, so code
    the fixup a little more defensively to avoid crashing the kernel.

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Some CPUs have errata where the LBR is not cleared on Power-On. So always
    clear the LBRs before use.

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • This CPU just has too many handicaps to be really useful.

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Fix up the ds allocation error path, where we could free @buffer before
    we used it.

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Since there are now two users for this, place it in a common header.

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: Masami Hiramatsu
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Expose the full PEBS record using PERF_SAMPLE_RAW.

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Saner PERF_CAPABILITIES support, which also exposes pebs_trap. Use the
    latter to make PEBS's use of the LBR conditional, since a fault-like
    PEBS should already report the correct IP.

    ( As of this writing there is no known hardware that implements
    !pebs_trap )

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Use the LBR to fix up the PEBS IP+1 issue.

    As said, PEBS reports the next instruction; here we use the LBR to find
    the last branch and from that reconstruct the actual IP. If the IP
    matches the LBR-TO, we use LBR-FROM, otherwise we use the LBR-TO address
    as the beginning of the last basic block and decode forward.

    Once we find a match to the current IP, we use the previous location.

    This patch introduces a new ABI element: PERF_RECORD_MISC_EXACT, which
    conveys that the reported IP (PERF_SAMPLE_IP) is the exact instruction
    that caused the event (barring CPU errata).

    The fixup can fail due to various reasons:

    1) LBR contains invalid data (quite possible)
    2) part of the basic block got paged out
    3) the reported IP isn't part of the basic block (see 1)
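
    A pseudo-C sketch of the decode-forward idea described above (the real
    fixup uses the kernel's instruction decoder; insn_length_at() below is a
    hypothetical stand-in for it):

    /*
     * Walk forward from the LBR-TO address until we reach the PEBS IP;
     * the previously decoded instruction is the one that caused the event.
     */
    static unsigned long fixup_ip_sketch(unsigned long pebs_ip,
                                         unsigned long lbr_from,
                                         unsigned long lbr_to)
    {
            unsigned long ip = lbr_to, prev_ip = 0;

            if (pebs_ip == lbr_to)          /* IP matches LBR-TO: use LBR-FROM */
                    return lbr_from;

            while (ip < pebs_ip) {
                    prev_ip = ip;
                    ip += insn_length_at(ip);       /* hypothetical helper */
            }

            /* overshot: the LBR didn't cover this basic block, give up */
            return (ip == pebs_ip) ? prev_ip : 0;
    }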

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: Masami Hiramatsu
    Cc: "Zhang, Yanmin"
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Implement simple support for the Intel Last-Branch-Record; it supports
    all hardware that implements FREEZE_LBRS_ON_PMI, but does not (yet)
    implement the LBR config register.

    The Intel LBR is a FIFO of From,To addresses describing the last few
    branches the hardware took.

    This patch does not add a perf interface to the LBR; it merely provides
    an interface for internal use.
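
    A sketch of how such an internal interface might drain the From,To FIFO
    (the MSR base names and the entries[] array are simplifications, not the
    driver's actual identifiers):

    /* read the LBR stack into a local from/to array */
    for (i = 0; i < lbr_nr; i++) {
            u64 from, to;

            rdmsrl(lbr_from_msr_base + i, from);
            rdmsrl(lbr_to_msr_base   + i, to);

            entries[i].from = from;
            entries[i].to   = to;
    }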

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • This patch implements support for Intel Precise Event Based Sampling,
    which is an alternative counter mode in which the counter triggers a
    hardware assist to collect information on events. The hardware assist
    takes a trap-like snapshot of a subset of the machine registers.

    This data is written to the Intel Debug-Store, which can be programmed
    with a data threshold at which to raise a PMI.

    With the PEBS hardware assist being trap-like, the reported IP is always
    one instruction after the actual instruction that triggered the event.

    This implements a simple PEBS model that always takes a single PEBS event
    at a time. This is done so that the interaction with the rest of the
    system is as expected (freq adjust, period randomization, lbr,
    callchains, etc.).

    It adds an ABI element: perf_event_attr::precise, which indicates that we
    wish to use this (constrained, but precise) mode.
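
    From userspace, requesting this mode is just a matter of setting the new
    attribute bit before perf_event_open() (a sketch against the ABI as
    described by this commit; error handling omitted):

    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static int open_precise_cycles(void)
    {
            struct perf_event_attr attr = {
                    .size          = sizeof(attr),
                    .type          = PERF_TYPE_HARDWARE,
                    .config        = PERF_COUNT_HW_CPU_CYCLES,
                    .sample_period = 100000,
                    .sample_type   = PERF_SAMPLE_IP,
                    .precise       = 1,   /* constrained, but precise mode */
            };

            /* measure the calling thread on any CPU */
            return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
    }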

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • hw_perf_enable() would enable already enabled events.

    This causes problems with code that assumes that ->enable/->disable calls
    are balanced (like the LBR code does).

    What happens is that events that were already running and left in place
    would get enabled again.

    Avoid this by only enabling new events that match their previous
    assignment.

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • hw_perf_enable() would disable events that were not yet enabled.

    This causes problems with code that assumes that ->enable/->disable calls
    are balanced (like the LBR code does).

    What happens is that we disable newly added counters that match their
    previous assignment, even though they are not yet programmed on the
    hardware.

    Avoid this by only doing the first pass over the existing events.

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Make sure n_added is properly accounted so that we can rely on the value
    to reflect the number of added counters. This is needed if it's going to
    be used for more than a boolean check.

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Calling ioctl(PERF_EVENT_IOC_DISABLE) on a throttled counter would
    result in a double disable; cure this by using x86_pmu_{start,stop} for
    throttle/unthrottle and teach x86_pmu_stop() to check ->active_mask.

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • pmu::start should undo pmu::stop, make it so.

    Signed-off-by: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • There is no concurrency on these variables, so don't use LOCK'ed ops.

    As to the intel_pmu_handle_irq() status bit cleaning: nobody uses it, so
    remove it altogether.
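
    The distinction is between the atomic (LOCK-prefixed) bit helpers and
    their non-atomic double-underscore variants; a sketch of the direction
    of the change (illustrative, not the verbatim diff):

    /* before: atomic, emits a LOCK-prefixed instruction */
    set_bit(idx, cpuc->active_mask);
    clear_bit(idx, cpuc->active_mask);

    /* after: the non-atomic variants are fine, since there is no
     * concurrency on these per-cpu masks */
    __set_bit(idx, cpuc->active_mask);
    __clear_bit(idx, cpuc->active_mask);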

    Signed-off-by: Peter Zijlstra
    Cc: paulus@samba.org
    Cc: eranian@google.com
    Cc: robert.richter@amd.com
    Cc: fweisbec@gmail.com
    Cc: Arnaldo Carvalho de Melo
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra