14 Nov, 2011

1 commit

  • People (Linus) objected to using -ENOSPC to signal not having enough
    resources on the PMU to satisfy the request. Use -EINVAL.

    Requested-by: Linus Torvalds
    Cc: Stephane Eranian
    Cc: Will Deacon
    Cc: Deng-Cheng Zhu
    Cc: David Daney
    Cc: Ralf Baechle
    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/n/tip-xv8geaz2zpbjhlx0svmpp28n@git.kernel.org
    [ merged to newer kernel, fixed up MIPS impact ]
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

26 Sep, 2011

1 commit

  • The CPU support for perf events on x86 was implemented via included C files
    with #ifdefs. Clean this up by creating a new header file and compiling
    the vendor-specific files as needed.

    Signed-off-by: Kevin Winchester
    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/r/1314747665-2090-1-git-send-email-kjwinchester@gmail.com
    Signed-off-by: Ingo Molnar

    Kevin Winchester
     

22 Jul, 2011

1 commit

  • This patch:

    - fixes typos in comments and clarifies the text
    - renames the obscure p4_event_alias::original and ::alter members
      to ::original and ::alternative, as appropriate
    - drops the parentheses from the return of p4_get_alias_event()

    No functional changes.

    Reported-by: Ingo Molnar
    Signed-off-by: Cyrill Gorcunov
    Link: http://lkml.kernel.org/r/20110721160625.GX7492@sun
    Signed-off-by: Ingo Molnar

    Cyrill Gorcunov
     

15 Jul, 2011

1 commit

  • Instead of the hw_nmi_watchdog_set_attr() weak function
    and the corresponding x86_pmu::hw_watchdog_set_attr() call,
    we introduce an event alias mechanism which allows us
    to drop these routines completely and isolate the quirks
    of the Netburst architecture inside the P4 PMU code only.

    The main idea remains the same though -- to allow the
    nmi-watchdog and perf top to run simultaneously.

    Note that the aliasing mechanism applies to the generic
    PERF_COUNT_HW_CPU_CYCLES event only, because an arbitrary
    event (say, one passed as RAW initially) might have some
    additional bits set inside the ESCR register, changing the
    behaviour of the event, and we can no longer guarantee
    that the alias event will give the same result.

    P.S. Huge thanks to Don and Steven for testing
    and early review.

    Acked-by: Don Zickus
    Tested-by: Steven Rostedt
    Signed-off-by: Cyrill Gorcunov
    CC: Ingo Molnar
    CC: Peter Zijlstra
    CC: Stephane Eranian
    CC: Lin Ming
    CC: Arnaldo Carvalho de Melo
    CC: Frederic Weisbecker
    Link: http://lkml.kernel.org/r/20110708201712.GS23657@sun
    Signed-off-by: Steven Rostedt

    Cyrill Gorcunov
     

01 Jul, 2011

3 commits

  • Add a NODE level to the generic cache events, which is used to measure
    local vs remote memory accesses. Like all other cache events, an
    ACCESS is HIT+MISS; if there is no way to distinguish between reads
    and writes, count reads only, etc.

    The below needs filling out for !x86 (which I filled out with
    unsupported events).

    I'm fairly sure ARM can leave it like that since it doesn't strike me as
    an architecture that even has NUMA support. SH might have something since
    it does appear to have some NUMA bits.

    Sparc64, PowerPC and MIPS certainly want a good look there since they
    clearly are NUMA capable.

    Signed-off-by: Peter Zijlstra
    Cc: David Miller
    Cc: Anton Blanchard
    Cc: David Daney
    Cc: Deng-Cheng Zhu
    Cc: Paul Mundt
    Cc: Will Deacon
    Cc: Robert Richter
    Cc: Stephane Eranian
    Link: http://lkml.kernel.org/r/1303508226.4865.8.camel@laptop
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • The nmi parameter indicated whether we could do wakeups from the
    current context; if not, we would set some state and self-IPI, and
    let the resulting interrupt do the wakeup.

    For the various event classes:

    - hardware: nmi=0; the PMI is in fact an NMI, or we run irq_work_run
      from the PMI-tail (ARM etc.)
    - tracepoint: nmi=0; since a tracepoint could fire from NMI context.
    - software: nmi=[0,1]; some, like the schedule thing, cannot
      perform wakeups and hence need 0.

    As one can see, there is very little nmi=1 usage, and the down-side of
    not using it is that on some platforms some software events can have a
    jiffy delay in wakeup (when arch_irq_work_raise isn't implemented).

    The up-side however is that we can remove the nmi parameter and save a
    bunch of conditionals in fast paths.

    Signed-off-by: Peter Zijlstra
    Cc: Michael Cree
    Cc: Will Deacon
    Cc: Deng-Cheng Zhu
    Cc: Anton Blanchard
    Cc: Eric B Munson
    Cc: Heiko Carstens
    Cc: Paul Mundt
    Cc: David S. Miller
    Cc: Frederic Weisbecker
    Cc: Jason Wessel
    Cc: Don Zickus
    Link: http://lkml.kernel.org/n/tip-agjev8eu666tvknpb3iaj0fg@git.kernel.org
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Due to restrictions and specifics of the Netburst PMU we need a
    separate event for the NMI watchdog. In particular, every Netburst
    event consumes not just a counter and a config register, but also an
    additional ESCR register.

    Since ESCR registers are grouped with counters (i.e. if an ESCR is
    occupied by some event, there is no room for another event to enter
    until it is released), we need to pick the least used ESCR (or the
    most available one) for nmi-watchdog purposes -- so MSR_P4_CRU_ESCR2/3
    was chosen.

    With this patch nmi-watchdog and perf top should be able to run simultaneously.

    Signed-off-by: Cyrill Gorcunov
    CC: Lin Ming
    CC: Arnaldo Carvalho de Melo
    CC: Frederic Weisbecker
    Tested-and-reviewed-by: Don Zickus
    Tested-and-reviewed-by: Stephane Eranian
    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/r/20110623124918.GC13050@sun
    Signed-off-by: Ingo Molnar

    Cyrill Gorcunov
     


27 Apr, 2011

1 commit

  • It was noticed that P4 machines were generating double NMIs for
    each perf event. These extra NMIs lead to 'Dazed and confused'
    messages on the screen.

    I tracked this down to a P4 quirk that said the overflow bit had
    to be cleared before re-enabling the apic LVT mask. My first
    attempt was to move the un-masking inside the perf nmi handler
    from before the chipset NMI handler to after.

    This broke Nehalem boxes that seem to like the unmasking before
    the counters themselves are re-enabled.

    In order to keep this change simple for 2.6.39, I decided to
    just simply move the apic LVT un-masking to the beginning of all
    the chipset NMI handlers, with the exception of Pentium4's to
    fix the double NMI issue.

    Later on we can move the un-masking to later in the handlers to
    save a number of 'extra' NMIs on those particular chipsets.

    I tested this change on a P4 machine, an AMD machine, a Nehalem
    box, and a core2quad box. 'perf top' worked correctly along
    with various other small 'perf record' runs. Anything high
    stress breaks all the machines but that is a different problem.

    Thanks to various people for testing different versions of this
    patch.

    Reported-and-tested-by: Shaun Ruffell
    Signed-off-by: Don Zickus
    Cc: Cyrill Gorcunov
    Link: http://lkml.kernel.org/r/1303900353-10242-1-git-send-email-dzickus@redhat.com
    Signed-off-by: Ingo Molnar

    Don Zickus
     


22 Apr, 2011

2 commits

  • It's not enough to simply disable an event on overflow: the
    cpuc->active_mask bit should be cleared as well, otherwise the
    counter may appear stuck in the "active" state while in reality
    already being disabled (which potentially may lead to a situation
    where the user can no longer use this counter).

    Don pointed out that:

    " I also noticed this patch fixed some unknown NMIs
    on a P4 when I stressed the box".

    Tested-by: Lin Ming
    Signed-off-by: Cyrill Gorcunov
    Acked-by: Don Zickus
    Signed-off-by: Don Zickus
    Cc: Cyrill Gorcunov
    Link: http://lkml.kernel.org/r/1303398203-2918-3-git-send-email-dzickus@redhat.com
    Signed-off-by: Ingo Molnar

    Cyrill Gorcunov
     
  • Instead of open-coded assignments, it is better to use the
    perf_sample_data_init() helper.

    Tested-by: Lin Ming
    Signed-off-by: Cyrill Gorcunov
    Signed-off-by: Don Zickus
    Cc: Cyrill Gorcunov
    Link: http://lkml.kernel.org/r/1303398203-2918-2-git-send-email-dzickus@redhat.com
    Signed-off-by: Ingo Molnar

    Cyrill Gorcunov
     


26 Mar, 2011

1 commit

  • …/git/tip/linux-2.6-tip

    * 'perf-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    perf, x86: Complain louder about BIOSen corrupting CPU/PMU state and continue
    perf, x86: P4 PMU - Read proper MSR register to catch unflagged overflows
    perf symbols: Look at .dynsym again if .symtab not found
    perf build-id: Add quirk to deal with perf.data file format breakage
    perf session: Pass evsel in event_ops->sample()
    perf: Better fit max unprivileged mlock pages for tools needs
    perf_events: Fix stale ->cgrp pointer in update_cgrp_time_from_cpuctx()
    perf top: Fix uninitialized 'counter' variable
    tracing: Fix set_ftrace_filter probe function display
    perf, x86: Fix Intel fixed counters base initialization

    Linus Torvalds
     

25 Mar, 2011

1 commit

  • The read of the proper MSR register was missed: instead of the
    counter, the configuration register was tested (it has
    ARCH_P4_UNFLAGGED_BIT always cleared), leading to unknown NMIs
    hitting the system. As a result the user may obtain a "Dazed and
    confused, but trying to continue" message. Fix it by reading the
    proper MSR register.

    When an NMI happens on a P4, the perf nmi handler checks the
    configuration register to see if the overflow bit is set or not
    before taking appropriate action. Unfortunately, various P4
    machines had a broken overflow bit, so a backup mechanism was
    implemented. This mechanism checked to see if the counter
    rolled over or not.

    A previous commit that implemented this backup mechanism was
    broken. Instead of reading the counter register, it used the
    configuration register to determine if the counter rolled over
    or not. Reading that bit would give incorrect results.

    This would lead to 'Dazed and confused' messages for the end
    user when using the perf tool (or if the nmi watchdog is
    running).

    The fix is to read the counter register before determining if
    the counter rolled over or not.

    Signed-off-by: Don Zickus
    Signed-off-by: Cyrill Gorcunov
    Cc: Lin Ming
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Don Zickus
     


16 Feb, 2011

2 commits

  • Instead of storing the base addresses we can store the counter's msr
    addresses directly in config_base/event_base of struct hw_perf_event.
    This avoids recalculating the address with each msr access. The
    addresses are configured one time. We also need this change to later
    modify the address calculation.

    Signed-off-by: Robert Richter
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Robert Richter
     
  • Several people have reported spurious unknown NMI
    messages on some P4 CPUs.

    This patch fixes it by checking for an overflow (negative
    counter values) directly, instead of relying on the
    P4_CCCR_OVF bit.

    Reported-by: George Spelvin
    Reported-by: Meelis Roos
    Reported-by: Don Zickus
    Reported-by: Dave Airlie
    Signed-off-by: Cyrill Gorcunov
    Cc: Lin Ming
    Cc: Don Zickus
    Cc: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Cyrill Gorcunov
     

28 Jan, 2011

1 commit

  • This patch fixes some issues with raw event validation on
    Pentium 4 (Netburst) based processors.

    As I was testing libpfm4 Netburst support, I ran into two
    problems in the p4_validate_raw_event() function:

    - the shared field must be checked ONLY when HT is on
    - the binding to ESCR register was missing

    The second item was causing raw events to not be encoded
    correctly compared to generic PMU events.

    With this patch, I can now pass Netburst events to libpfm4
    examples and get meaningful results:

    $ task -e global_power_events:running:u noploop 1
    noploop for 1 seconds
    3,206,304,898 global_power_events:running

    Signed-off-by: Stephane Eranian
    Acked-by: Cyrill Gorcunov
    Cc: peterz@infradead.org
    Cc: paulus@samba.org
    Cc: davem@davemloft.net
    Cc: fweisbec@gmail.com
    Cc: perfmon2-devel@lists.sf.net
    Cc: eranian@gmail.com
    Cc: robert.richter@amd.com
    Cc: acme@redhat.com
    Cc: gorcunov@gmail.com
    Cc: ming.m.lin@intel.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Stephane Eranian
     

09 Jan, 2011

1 commit

  • Don found that the P4 PMU reads the CCCR register instead of the
    counter itself (in an attempt to catch an unflagged event); this
    makes the P4 NMI handler consume all NMIs it observes. So the other
    NMI users, such as kgdb, simply have no chance to get an NMI in
    their hands.

    Side note: at the moment there is no way to run the nmi-watchdog
    together with the perf tool. This is because both 'perf top' and
    the nmi-watchdog use the same event, so while the nmi-watchdog
    reserves one event/counter for its own needs there is no room left
    for the perf tool (there is a way to disable the nmi-watchdog on
    boot, of course).

    Ming has tested this patch with the following results:

    | 1. watchdog disabled
    |
    | kgdb tests on boot OK
    | perf works OK
    |
    | 2. watchdog enabled, without patch perf-x86-p4-nmi-4
    |
    | kgdb tests on boot hang
    |
    | 3. watchdog enabled, without patch perf-x86-p4-nmi-4 and do not run kgdb
    | tests on boot
    |
    | "perf top" partially works
    | cpu-cycles no
    | instructions yes
    | cache-references no
    | cache-misses no
    | branch-instructions no
    | branch-misses yes
    | bus-cycles no
    |
    | 4. watchdog enabled, with patch perf-x86-p4-nmi-4 applied
    |
    | kgdb tests on boot OK
    | perf does not work, NMI "Dazed and confused" messages show up
    |

    Which means we still have problems on the p4 box due to 'unknown'
    NMIs happening, but at least it should fix the kgdb test cases.

    Reported-by: Jason Wessel
    Reported-by: Don Zickus
    Signed-off-by: Cyrill Gorcunov
    Acked-by: Don Zickus
    Acked-by: Lin Ming
    Cc: Stephane Eranian
    Cc: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Cyrill Gorcunov
     


30 Sep, 2010

1 commit

  • Stephane reported that we forgot to guard the P4 platform
    against spurious in-flight performance IRQs. Fix it.

    This fixes potential spurious 'dazed and confused' NMI
    messages.

    Reported-by: Stephane Eranian
    Signed-off-by: Cyrill Gorcunov
    Signed-off-by: Don Zickus
    Cc: fweisbec@gmail.com
    Cc: peterz@infradead.org
    Cc: Robert Richter
    Cc: Lin Ming
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Cyrill Gorcunov
     


03 Sep, 2010

1 commit

  • Now that we rely on the number of handled overflows, ensure all
    handle_irq implementations actually return the right number.

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Don Zickus
    Cc: peterz@infradead.org
    Cc: robert.richter@amd.com
    Cc: gorcunov@gmail.com
    Cc: fweisbec@gmail.com
    Cc: ying.huang@intel.com
    Cc: ming.m.lin@intel.com
    Cc: eranian@google.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

01 Sep, 2010

1 commit

  • Implements verification of:

    - Bits of the ESCR EventMask field (meaningful bits in the field are
    hardware predefined and all other bits should be set to zero)

    - The INSTR_COMPLETED event (it is available on predefined cpu models only)

    - Thread shared events (they should be guarded by the "perf_event_paranoid"
    sysctl for security reasons). The side effect of this is that
    PERF_COUNT_HW_BUS_CYCLES becomes a "paranoid" general event.

    Signed-off-by: Cyrill Gorcunov
    Tested-by: Lin Ming
    Cc: Frederic Weisbecker
    Cc: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Cyrill Gorcunov
     

25 Aug, 2010

1 commit

  • If on Pentium4 CPUs the FORCE_OVF flag is set then an NMI happens
    on every event, which can generate a flood of NMIs. Clear it.

    Reported-by: Vince Weaver
    Signed-off-by: Lin Ming
    Signed-off-by: Cyrill Gorcunov
    Cc: Frederic Weisbecker
    Cc: Peter Zijlstra
    Cc:
    Signed-off-by: Ingo Molnar

    Lin Ming
     

09 Aug, 2010

1 commit

  • In case the last active performance counter has not overflowed at
    the moment an NMI is triggered by another counter, the irq
    statistics may miss an update stage. As a more serious
    consequence, the apic quirk may not be triggered, so the apic lvt
    entry stays masked.

    Tested-by: Lin Ming
    Signed-off-by: Cyrill Gorcunov
    Cc: Stephane Eranian
    Cc: Peter Zijlstra
    Cc: Frederic Weisbecker
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Cyrill Gorcunov
     

05 Jul, 2010

1 commit

  • To support cache events we have reserved the low 6 bits in
    hw_perf_event::config (which is actually a part of the CCCR
    register configuration).

    These bits represent the Replay Event metric enumerated in
    enum P4_PEBS_METRIC. The caller should not care which exact bits
    should be set and how -- the caller just chooses one P4_PEBS_METRIC
    entity and puts it into the config. The kernel will track it and
    set the appropriate additional MSR registers (metrics) when needed.

    The reason for this redesign was the PEBS enable bit, which should
    not be set until DS (and PEBS sampling) support is implemented
    properly.

    TODO
    ====

    - PEBS sampling (note it's tricky and works with _one_ counter only,
    so for HT machines it will not be that easy to handle both threads)

    - tracking of PEBS register state: a user might need to turn PEBS
    off completely (i.e. no PEBS enable, no UOP_tag) while some other
    event may need it; such events clash and should not run
    simultaneously -- at the moment we just don't support such events

    - eventually export the user space bits in a separate header, which
    will allow user apps to configure raw events more conveniently.

    Signed-off-by: Cyrill Gorcunov
    Signed-off-by: Lin Ming
    Cc: Stephane Eranian
    Cc: Peter Zijlstra
    Cc: Frederic Weisbecker
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Cyrill Gorcunov
     

09 Jun, 2010

1 commit

  • On the Netburst PMU we need a second write to a performance counter
    due to a cpu erratum.

    A simple flag test instead of alternative instructions was chosen
    because wrmsrl is already a macro, and if virtualization is turned
    on it will need an additional wrapper call, which is more expensive.

    nb: we should probably switch to jump labels once that facility
    reaches the mainline.

    Signed-off-by: Cyrill Gorcunov
    Signed-off-by: Peter Zijlstra
    Cc: Robert Richter
    Cc: Lin Ming
    Cc: Arnaldo Carvalho de Melo
    Cc: Frederic Weisbecker
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Cyrill Gorcunov
     

19 May, 2010

2 commits

  • This snippet somehow escaped the commit:

    | commit 137351e0feeb9f25d99488ee1afc1c79f5499a9a
    | Author: Cyrill Gorcunov
    | Date: Sat May 8 15:25:52 2010 +0400
    |
    | x86, perf: P4 PMU -- protect sensible procedures from preemption

    so bring it back eventually. It helps to catch preemption issues
    (if there are any; rule of thumb -- don't use raw_ if you can).

    Signed-off-by: Cyrill Gorcunov
    Cc: Lin Ming
    Cc: Steven Rostedt
    Cc: Peter Zijlstra
    Cc: Frederic Weisbecker
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Cyrill Gorcunov
     
  • To prevent clashes in future code modifications, do a real check
    for the ESCR address being in the hash. At the moment the callers
    are known to pass sane values, but it is better to be on the safe
    side.

    Also a comment fix.

    Signed-off-by: Cyrill Gorcunov
    CC: Lin Ming
    CC: Peter Zijlstra
    CC: Frederic Weisbecker
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Cyrill Gorcunov
     


15 May, 2010

1 commit

  • Jaswinder reported this #GP:

    |
    | Message from syslogd@ht at May 14 09:39:32 ...
    | kernel:[ 314.908612] EIP: []
    | x86_perf_event_set_period+0x19d/0x1b2 SS:ESP 0068:edac3d70
    |

    Ming has narrowed it down to a comparison issue
    between arguments with different sizes and
    signs. As a result the event index reached a wrong
    value, which in turn led to a GP fault.

    At the same time it was found that p4_next_cntr
    has broken logic and should return the counter
    index only if it has not yet been borrowed for
    another event.

    Reported-by: Jaswinder Singh Rajput
    Reported-by: Lin Ming
    Bisected-by: Lin Ming
    Tested-by: Jaswinder Singh Rajput
    Signed-off-by: Cyrill Gorcunov
    CC: Peter Zijlstra
    CC: Frederic Weisbecker
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Cyrill Gorcunov
     

13 May, 2010

1 commit

  • A linear search over all p4 MSRs would be fine if only we did not
    use it in the event scheduling routine, which is pretty time
    critical. Let's use hashes instead; it should speed scheduling up
    significantly.

    v2: Steven proposed a more gentle approach than issuing a
    BUG on error, so we use WARN_ONCE now.

    Signed-off-by: Cyrill Gorcunov
    Cc: Peter Zijlstra
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    Cc: Lin Ming
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Cyrill Gorcunov
     

08 May, 2010

4 commits

  • RAW events are special and we should be ready for the user passing
    in insane event index values.

    Signed-off-by: Cyrill Gorcunov
    Cc: Peter Zijlstra
    Cc: Frederic Weisbecker
    Cc: Lin Ming
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Cyrill Gorcunov
     
  • The caller has already done such a check.
    And it was wrong anyway; it had to be '>=' rather than '>'.

    Signed-off-by: Cyrill Gorcunov
    Cc: Peter Zijlstra
    Cc: Frederic Weisbecker
    Cc: Lin Ming
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Cyrill Gorcunov
     
  • Steven reported:

    |
    | I'm getting:
    |
    | Pid: 3477, comm: perf Not tainted 2.6.34-rc6 #2727
    | Call Trace:
    | [] debug_smp_processor_id+0xd5/0xf0
    | [] p4_hw_config+0x2b/0x15c
    | [] ? trace_hardirqs_on_caller+0x12b/0x14f
    | [] hw_perf_event_init+0x468/0x7be
    | [] ? debug_mutex_init+0x31/0x3c
    | [] T.850+0x273/0x42e
    | [] sys_perf_event_open+0x23e/0x3f1
    | [] ? sysret_check+0x2e/0x69
    | [] system_call_fastpath+0x16/0x1b
    |
    | When running perf record in latest tip/perf/core
    |

    Due to the fact that p4 counters are shared between HT threads,
    we synthetically divide the whole set of counters into two
    non-intersecting subsets. And while we're "borrowing" counters
    from these subsets we should not be preempted (well, strictly
    speaking, in p4_hw_config we just pre-set a reference to the
    subset, which allows us to save some cycles in the schedule
    routine if it happens on the same cpu). So use the get_cpu/put_cpu
    pair.

    Also p4_pmu_schedule_events should use smp_processor_id rather
    than the raw_ version. This allows us to catch preemption issues
    (if there ever are any).

    Reported-by: Steven Rostedt
    Tested-by: Steven Rostedt
    Signed-off-by: Cyrill Gorcunov
    Cc: Steven Rostedt
    Cc: Peter Zijlstra
    Cc: Frederic Weisbecker
    Cc: Lin Ming
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Cyrill Gorcunov
     
  • If an event is not RAW we should not exit p4_hw_config
    early, but call x86_setup_perfctr as well.

    Signed-off-by: Cyrill Gorcunov
    Cc: Peter Zijlstra
    Cc: Frederic Weisbecker
    Cc: Lin Ming
    Cc: Robert Richter
    Signed-off-by: Ingo Molnar

    Cyrill Gorcunov
     

07 May, 2010

1 commit

  • The perfctr setup calls are in the corresponding .hw_config()
    functions now. This makes it possible to introduce config functions
    for other pmu events that are not perfctr specific.

    Also, all of a sudden the code looks much nicer.

    Signed-off-by: Robert Richter
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Robert Richter