18 Oct, 2010

1 commit


07 Aug, 2010

1 commit

  • …git/tip/linux-2.6-tip

    * 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (162 commits)
    tracing/kprobes: unregister_trace_probe needs to be called under mutex
    perf: expose event__process function
    perf events: Fix mmap offset determination
    perf, powerpc: fsl_emb: Restore setting perf_sample_data.period
    perf, powerpc: Convert the FSL driver to use local64_t
    perf tools: Don't keep unreferenced maps when unmaps are detected
    perf session: Invalidate last_match when removing threads from rb_tree
    perf session: Free the ref_reloc_sym memory at the right place
    x86,mmiotrace: Add support for tracing STOS instruction
    perf, sched migration: Librarize task states and event headers helpers
    perf, sched migration: Librarize the GUI class
    perf, sched migration: Make the GUI class client agnostic
    perf, sched migration: Make it vertically scrollable
    perf, sched migration: Parameterize cpu height and spacing
    perf, sched migration: Fix key bindings
    perf, sched migration: Ignore unhandled task states
    perf, sched migration: Handle ignored migrate out events
    perf: New migration tool overview
    tracing: Drop cpparg() macro
    perf: Use tracepoint_synchronize_unregister() to flush any pending tracepoint call
    ...

    Fix up trivial conflicts in Makefile and drivers/cpufreq/cpufreq.c

    Linus Torvalds
     

05 Aug, 2010

1 commit


23 Jul, 2010

1 commit


21 Jul, 2010

2 commits

  • Documentation/trace/ftrace.txt says

    buffer_size_kb:

    This sets or displays the number of kilobytes each CPU
    buffer can hold. The tracer buffers are the same size
    for each CPU. The displayed number is the size of the
    CPU buffer and not total size of all buffers. The
    trace buffers are allocated in pages (blocks of memory
    that the kernel uses for allocation, usually 4 KB in size).
    If the last page allocated has room for more bytes
    than requested, the rest of the page will be used,
    making the actual allocation bigger than requested.
    ( Note, the size may not be a multiple of the page size
    due to buffer management overhead. )

    This can only be updated when the current_tracer
    is set to "nop".

    But this is inaccurate: the total memory consumption is currently
    'buffer_size_kb x CPUs x 2'.

    Why the factor of two? Because ftrace implicitly allocates a second
    buffer of the same size for the max latency trace.

    This leads to an unpleasant surprise when an administrator wants to
    use a large buffer (e.g. for full logging and detailed analysis).
    For example, on a 24-CPU machine, writing 200MB to buffer_size_kb
    makes the system consume ~10GB of memory (200MB x 24 x 2). Wasting
    ~5GB of memory this way is usually unacceptable.
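    The arithmetic above can be sketched in a few lines of Python (an
    illustration only, not kernel code; the function name is made up for
    this example):

```python
def total_trace_memory_kb(buffer_size_kb, nr_cpus, max_latency_buffer=True):
    # One ring buffer per CPU, doubled when the max-latency snapshot
    # buffer is allocated at the same size (the pre-patch behaviour).
    factor = 2 if max_latency_buffer else 1
    return buffer_size_kb * nr_cpus * factor
```

    For the 24-CPU example, 200*1024 KB per CPU comes to 9830400 KB
    (roughly 10GB) before the patch, and half that once the max latency
    buffer is shrunk.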

    Fortunately, almost all users don't use the max latency feature, and
    the max latency buffer can be disabled easily.

    This patch shrinks the max latency buffer when it is unnecessary.

    Signed-off-by: KOSAKI Motohiro
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    KOSAKI Motohiro
     
    We found that enabling even a single trace event that is rarely
    triggered can add significant overhead to context switching.

    (lmbench context switch test)
    -------------------------------------------------
    2p/0K 2p/16K 2p/64K 8p/16K 8p/64K 16p/16K 16p/64K
    ctxsw ctxsw ctxsw ctxsw ctxsw ctxsw ctxsw
    ------ ------ ------ ------ ------ ------- -------
    2.19 2.3 2.21 2.56 2.13 2.54 2.07
    2.39 2.51 2.35 2.75 2.27 2.81 2.24

    The overhead is 6% ~ 11%.

    This is because when any trace event is enabled, 3 tracepoints
    (sched_switch, sched_wakeup, sched_wakeup_new) are activated to map
    pids to command names.

    We'd like to avoid this overhead, so add a trace option
    '(no)record-cmd' that allows cmdline recording to be disabled.
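    The mechanism can be pictured with a small Python sketch (the names
    here are hypothetical; the kernel's actual implementation is a C
    saved-cmdlines table):

```python
# Hypothetical sketch: the sched tracepoints keep a pid -> comm map so
# reports can show command names; 'norecord-cmd' skips that bookkeeping.
class CmdlineCache:
    def __init__(self, record_cmd=True):
        self.record_cmd = record_cmd
        self._map = {}

    def on_sched_event(self, pid, comm):
        # called from sched_switch / sched_wakeup / sched_wakeup_new
        if self.record_cmd:
            self._map[pid] = comm

    def comm_for(self, pid):
        return self._map.get(pid, "<...>")
```

    With recording disabled, the per-context-switch work disappears and
    unknown pids simply resolve to a placeholder.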

    Signed-off-by: Li Zefan
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Li Zefan
     

20 Jul, 2010

2 commits

    The special trace type was only used by sysprof. Let's remove it now
    that the sysprof ftrace plugin has been dropped.

    Signed-off-by: Frederic Weisbecker
    Acked-by: Soeren Sandmann
    Cc: Peter Zijlstra
    Cc: Ingo Molnar
    Cc: Steven Rostedt
    Cc: Li Zefan

    Frederic Weisbecker
     
    The sysprof ftrace plugin doesn't seem to be seriously used
    anywhere. There is a branch in the sysprof tree that provides
    an interface to it, but the real sysprof tool uses either its
    own module or perf events.

    Drop the sysprof ftrace plugin then, as it's mostly useless.

    Signed-off-by: Frederic Weisbecker
    Acked-by: Soeren Sandmann
    Cc: Peter Zijlstra
    Cc: Ingo Molnar
    Cc: Steven Rostedt
    Cc: Li Zefan

    Frederic Weisbecker
     

16 Jul, 2010

1 commit

    The ksym (breakpoint) ftrace plugin has been superseded by perf
    tools, which are much more powerful for using the CPU breakpoints.
    This tracer doesn't bring any additional features. It has been
    deprecated for a while now, so let's remove it.

    Signed-off-by: Frederic Weisbecker
    Cc: Steven Rostedt
    Cc: Prasad
    Cc: Ingo Molnar

    Frederic Weisbecker
     

29 Jun, 2010

1 commit


10 Jun, 2010

1 commit


09 Jun, 2010

2 commits

  • We have been resisting new ftrace plugins and removing existing
    ones, and kmemtrace has been superseded by kmem trace events
    and perf-kmem, so we remove it.

    Signed-off-by: Li Zefan
    Acked-by: Pekka Enberg
    Acked-by: Eduard - Gabriel Munteanu
    Cc: Ingo Molnar
    Cc: Steven Rostedt
    [ remove kmemtrace from the makefile, handle slob too ]
    Signed-off-by: Frederic Weisbecker

    Li Zefan
     
    The boot tracer is useless. It simply logs the initcalls, but these
    initcalls are also logged through printk when using the
    initcall_debug kernel parameter.

    Nobody seems to be using it so far, so just remove it.

    Signed-off-by: WANG Cong
    Cc: Chase Douglas
    Cc: Steven Rostedt
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Cc: Li Zefan
    LKML-Reference:
    [ remove the hooks in main.c, and the headers ]
    Signed-off-by: Frederic Weisbecker

    Américo Wang
     

04 Jun, 2010

1 commit

    The ftrace_preempt_disable/enable functions were added to address a
    recursive race caused by the function tracer. The function tracer
    traces all functions, which makes it easily susceptible to recursion.
    One problem area was preempt_enable(): it would call the scheduler,
    the scheduler would call the function tracer, and this would loop
    (or so it was thought).

    ftrace_preempt_disable/enable was made to protect against recursion
    inside the scheduler by storing the NEED_RESCHED flag. If the flag
    was set before ftrace_preempt_disable(), ftrace_preempt_enable()
    would not call schedule(), on the assumption that if the flag was
    already set, the task either would have scheduled already or was
    already inside the scheduler.

    This worked fine except on SMP, where another CPU could set the
    NEED_RESCHED flag for a task and then kick off an IPI to trigger it.
    The flag could be saved at ftrace_preempt_disable() while the IPI
    arrived inside the preempt-disabled section. ftrace_preempt_enable()
    would then not call the scheduler, because the flag had already been
    set before entering the section.

    This bug would cause a missed preemption check and thus increased
    latencies.

    Investigating further, I found that the recursion caused by the
    function tracer was not due to schedule(), but to preempt_schedule().
    Now that preempt_schedule() is completely annotated with notrace, the
    recursion is no longer an issue.

    Reported-by: Thomas Gleixner
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

18 May, 2010

1 commit


15 May, 2010

3 commits

    ftrace_trace_stack() and ftrace_trace_userstack() take a
    struct ring_buffer argument, not a struct trace_array. Commit
    e77405ad ("tracing: pass around ring buffer instead of tracer")
    made this change.

    Signed-off-by: Li Zefan
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Li Zefan
     
  • The filter_active and enable both use an int (4 bytes each) to
    set a single flag. We can save 4 bytes per event by combining the
    two into a single integer.

    text data bss dec hex filename
    4913961 1088356 861512 6863829 68bbd5 vmlinux.orig
    4894944 1018052 861512 6774508 675eec vmlinux.id
    4894871 1012292 861512 6768675 674823 vmlinux.flags

    This gives us another 5K in savings.

    Modifications of both the enable and filter fields are done
    under the event_mutex, so it is still safe to combine the two.

    Note: Although Mathieu gave his Acked-by, he would like it documented
    that the reads of flags are not protected by the mutex. The way the
    code works, these reads will not break anything, but will have a
    residual effect. Since this behavior is the same even before this
    patch, describing this situation is left to another patch, as this
    patch does not change the behavior, but just brought it to Mathieu's
    attention.

    v2: Updated the event trace self test for this change.

    Acked-by: Mathieu Desnoyers
    Acked-by: Masami Hiramatsu
    Acked-by: Frederic Weisbecker
    Cc: Tom Zanussi
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • Move the defined fields from the event to the class structure.
    Since the fields of the event are defined by the class they belong
    to, it makes sense to have the class hold the information instead
    of the individual events. The events of the same class would just
    hold duplicate information.

    After this change the size of the kernel dropped another 3K:

    text data bss dec hex filename
    4913961 1088356 861512 6863829 68bbd5 vmlinux.orig
    4900252 1057412 861512 6819176 680d68 vmlinux.regs
    4900375 1053380 861512 6815267 67fe23 vmlinux.fields

    Although the text size increased, this was mainly due to the C files
    having to adapt to the change. This is a one-time constant increase;
    new tracepoints will not grow the text further. But the big drop is
    in the data size (as well as in the allocations needed to hold the
    fields). This will give even more savings as more tracepoints are
    created.

    Note, if just TRACE_EVENT()s are used and not DECLARE_EVENT_CLASS()
    with several DEFINE_EVENT()s, then the savings will be lost. But
    we are pushing developers to consolidate events with DEFINE_EVENT()
    so this should not be an issue.

    The kprobes define a unique class for every new event, but they are
    dynamic, so this should not be an issue.

    The syscalls, however, have a single class, but the fields of the
    individual events differ. The syscalls use metadata to define the
    fields. I moved the fields list from the event to the metadata and
    added a "get_fields()" function to the class. This function is used
    to find the fields. For normal events and kprobes, get_fields() just
    returns a pointer to the fields list_head in the class. For syscall
    events, it returns the fields list_head in the metadata for the event.
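    The dispatch described above can be sketched in Python (hypothetical
    names; the kernel version uses a C function pointer in the class):

```python
# Hypothetical sketch of the class-level field lookup: normal events and
# kprobes share their class's field list; syscall events override
# get_fields() to return the per-event metadata's list instead.
class EventClass:
    def __init__(self, fields=None, get_fields=None):
        self.fields = fields if fields is not None else []
        self._get_fields = get_fields  # optional per-class override

    def get_fields(self, event):
        if self._get_fields is not None:
            return self._get_fields(event)
        return self.fields
```

    Events of the same class thus share one field list by default, which
    is where the data-size saving comes from.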

    v2: Fixed the syscall fields. The syscall metadata needs a list
    of fields for both enter and exit.

    Acked-by: Frederic Weisbecker
    Acked-by: Mathieu Desnoyers
    Acked-by: Masami Hiramatsu
    Cc: Tom Zanussi
    Cc: Peter Zijlstra
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

28 Apr, 2010

1 commit


27 Apr, 2010

1 commit


15 Apr, 2010

1 commit

    Support basic integer types (u8, u16, u32, u64, s8, s16, s32, s64) in
    the kprobe tracer. With this patch, users can specify one of the
    above basic types for each argument after ':'. If omitted, the
    argument type defaults to unsigned long (u32 or u64, arch-dependent).

    e.g.
    echo 'p account_system_time+0 hardirq_offset=%si:s32' > kprobe_events

    adds a probe recording hardirq_offset in signed-32bits value on the
    entry of account_system_time.
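    The effect of the type annotation can be sketched in Python (an
    illustration of the casts, not the tracer's actual fetch code):

```python
def interpret(raw, type_name):
    # Reinterpret a raw register value as one of the basic types, e.g.
    # 's32' in the example above; mirrors a C cast like (s32)regs->si.
    bits = int(type_name[1:])
    value = raw & ((1 << bits) - 1)
    if type_name.startswith("s") and value >= 1 << (bits - 1):
        value -= 1 << bits  # sign-extend the truncated value
    return value
```

    A register holding 0xFFFFFFFF reads as -1 with ':s32' but as
    4294967295 with ':u32', which is exactly the difference the
    annotation controls.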

    Cc: Ingo Molnar
    Cc: Steven Rostedt
    Cc: Paul Mackerras
    Cc: Peter Zijlstra
    Cc: Mike Galbraith
    Cc: Frederic Weisbecker
    LKML-Reference:
    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Arnaldo Carvalho de Melo

    Masami Hiramatsu
     

26 Mar, 2010

1 commit

  • Support for the PMU's BTS features has been upstreamed in
    v2.6.32, but we still have the old and disabled ptrace-BTS,
    as Linus noticed it not so long ago.

    It's buggy: TIF_DEBUGCTLMSR is trampling all over that MSR without
    regard for other uses (perf) and doesn't provide the flexibility
    needed for perf either.

    Its only users are ptrace-block-step and ptrace-bts; ptrace-bts
    was never used, and ptrace-block-step can be implemented using a
    much simpler approach.

    So axe all 3000 lines of it. That includes the *locked_memory*()
    APIs in mm/mlock.c as well.

    Reported-by: Linus Torvalds
    Signed-off-by: Peter Zijlstra
    Cc: Roland McGrath
    Cc: Oleg Nesterov
    Cc: Markus Metzger
    Cc: Steven Rostedt
    Cc: Andrew Morton
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

14 Mar, 2010

1 commit

  • …nel/git/tip/linux-2.6-tip

    * 'tracing-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    tracing: Do not record user stack trace from NMI context
    tracing: Disable buffer switching when starting or stopping trace
    tracing: Use same local variable when resetting the ring buffer
    function-graph: Init curr_ret_stack with ret_stack
    ring-buffer: Move disabled check into preempt disable section
    function-graph: Add tracing_thresh support to function_graph tracer
    tracing: Update the comm field in the right variable in update_max_tr
    function-graph: Use comment notation for func names of dangling '}'
    function-graph: Fix unused reference to ftrace_set_func()
    tracing: Fix warning in s_next of trace file ops
    tracing: Include irqflags headers from trace clock

    Linus Torvalds
     

08 Mar, 2010

1 commit


06 Mar, 2010

1 commit

  • Add support for tracing_thresh to the function_graph tracer. This
    version of this feature isolates the checks into new entry and
    return functions, to avoid adding more conditional code into the
    main function_graph paths.

    When the tracing_thresh is set and the function graph tracer is
    enabled, only the functions that took longer than the time in
    microseconds that was set in tracing_thresh are recorded. To do this
    efficiently, only the function exits are recorded:

    [tracing]# echo 100 > tracing_thresh
    [tracing]# echo function_graph > current_tracer
    [tracing]# cat trace
    # tracer: function_graph
    #
    # CPU DURATION FUNCTION CALLS
    # | | | | | | |
    1) ! 119.214 us | } /* smp_apic_timer_interrupt */
    1)

    Signed-off-by: Tim Bird
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Tim Bird
     

25 Feb, 2010

1 commit

    GCC 4.5 introduces behavior that forces the alignment of structures
    to the largest possible value. The default value is 32 bytes, so if
    some structures are defined with a 4-byte alignment while others
    aren't declared with any alignment constraint at all, it will align
    them at 32 bytes.

    For things like the ftrace events, this results in a non-standard array.
    When initializing the ftrace subsystem, we traverse the _ftrace_events
    section and call the initialization callback for each event. When the
    structures are misaligned, we could be treating another part of the
    structure (or the zeroed out space between them) as a function pointer.

    This patch forces the alignment for all the ftrace_event_call structures
    to 4 bytes.

    Without this patch, the kernel fails to boot very early when built with
    gcc 4.5.

    It's trivial to check the alignment of the members of the array, so it
    might be worthwhile to add something to the build system to do that
    automatically. Unfortunately, that only covers this case. I've asked one
    of the gcc developers about adding a warning when this condition is seen.
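    The gap that breaks the array walk can be illustrated with a little
    offset arithmetic (a sketch of the layout effect, not the linker's
    actual rules):

```python
def element_offsets(struct_size, align, count):
    # Each array element starts at the next multiple of 'align'; with a
    # forced 32-byte alignment a smaller structure leaves zeroed padding
    # between elements, which a naive pointer walk misinterprets as the
    # start of the next structure.
    stride = -(-struct_size // align) * align  # round size up to align
    return [i * stride for i in range(count)]
```

    A 20-byte entry at 4-byte alignment sits at offsets 0, 20, 40; at a
    forced 32-byte alignment it sits at 0, 32, 64, with 12 zeroed bytes
    after each entry.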

    Cc: stable@kernel.org
    Signed-off-by: Jeff Mahoney
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Jeff Mahoney
     

12 Feb, 2010

1 commit

    I don't see why we should only be able to clear all functions from
    the filter at once; allow removing individual functions too.

    After patching:

    # echo sys_open > set_graph_function
    # echo sys_close >> set_graph_function
    # cat set_graph_function
    sys_open
    sys_close
    # echo '!sys_close' >> set_graph_function
    # cat set_graph_function
    sys_open

    Signed-off-by: Li Zefan
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Li Zefan
     

05 Feb, 2010

1 commit


29 Jan, 2010

1 commit

  • In the function graph tracer, a calling function is to be traced
    only when it is enabled through the set_graph_function file,
    or when it is nested in an enabled function.

    The current code uses TSK_TRACE_FL_GRAPH to test whether the call is
    nested or not. Looking at the code, we can see that:

    (trace->depth > 0) <==> (TSK_TRACE_FL_GRAPH is set)

    trace->depth states more explicitly that the call is nested, so use
    trace->depth directly and simplify the code.

    No functionality is changed.
    TSK_TRACE_FL_GRAPH is not removed yet; it is left for future use.

    Signed-off-by: Lai Jiangshan
    Cc: Ingo Molnar
    Cc: Steven Rostedt
    LKML-Reference:
    Signed-off-by: Frederic Weisbecker

    Lai Jiangshan
     

17 Dec, 2009

1 commit

  • …nel/git/tip/linux-2.6-tip

    * 'tracing-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    tracing: Fix return of trace_dump_stack()
    ksym_tracer: Fix bad cast
    tracing/power: Remove two exports
    tracing: Change event->profile_count to be int type
    tracing: Simplify trace_option_write()
    tracing: Remove useless trace option
    tracing: Use seq file for trace_clock
    tracing: Use seq file for trace_options
    function-graph: Allow writing the same val to set_graph_function
    ftrace: Call trace_parser_clear() properly
    ftrace: Return EINVAL when writing invalid val to set_ftrace_filter
    tracing: Move a printk out of ftrace_raw_reg_event_foo()
    tracing: Pull up calls to trace_define_common_fields()
    tracing: Extract duplicate ftrace_raw_init_event_foo()
    ftrace.h: Use common pr_info fmt string
    tracing: Add stack trace to irqsoff tracer
    tracing: Add trace_dump_stack()
    ring-buffer: Move resize integrity check under reader lock
    ring-buffer: Use sync sched protection on ring buffer resizing
    tracing: Fix wrong usage of strstrip in trace_ksyms

    Linus Torvalds
     

15 Dec, 2009

1 commit

  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (34 commits)
    m68k: rename global variable vmalloc_end to m68k_vmalloc_end
    percpu: add missing per_cpu_ptr_to_phys() definition for UP
    percpu: Fix kdump failure if booted with percpu_alloc=page
    percpu: make misc percpu symbols unique
    percpu: make percpu symbols in ia64 unique
    percpu: make percpu symbols in powerpc unique
    percpu: make percpu symbols in x86 unique
    percpu: make percpu symbols in xen unique
    percpu: make percpu symbols in cpufreq unique
    percpu: make percpu symbols in oprofile unique
    percpu: make percpu symbols in tracer unique
    percpu: make percpu symbols under kernel/ and mm/ unique
    percpu: remove some sparse warnings
    percpu: make alloc_percpu() handle array types
    vmalloc: fix use of non-existent percpu variable in put_cpu_var()
    this_cpu: Use this_cpu_xx in trace_functions_graph.c
    this_cpu: Use this_cpu_xx for ftrace
    this_cpu: Use this_cpu_xx in nmi handling
    this_cpu: Use this_cpu operations in RCU
    this_cpu: Use this_cpu ops for VM statistics
    ...

    Fix up trivial (famous last words) global per-cpu naming conflicts in
    arch/x86/kvm/svm.c
    mm/slab.c

    Linus Torvalds
     

14 Dec, 2009

1 commit

  • Since commit 4d9493c90f8e6e1b164aede3814010a290161abb
    ("ftrace: remove add-hoc code"), option "sched-tree"
    has become useless.

    Signed-off-by: Li Zefan
    Acked-by: Steven Rostedt
    LKML-Reference:
    Signed-off-by: Frederic Weisbecker

    Li Zefan
     

08 Dec, 2009

1 commit

    An ftrace plugin can register a pipe_open callback that is invoked
    when the user opens trace_pipe. But if the plugin allocates something
    within pipe_open, it cannot free it, because there is no pipe_close.
    The hook for opening the trace file has a corresponding close;
    closing the trace_pipe file should have one as well.
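    The asymmetry can be pictured with a tiny Python sketch (hypothetical
    names, not the kernel's tracer ops structure):

```python
# Hypothetical sketch: without a pipe_close counterpart, whatever
# pipe_open allocates per reader can never be freed.
class TracerOps:
    def __init__(self):
        self.private = None

    def pipe_open(self):
        self.private = {"iter_state": []}  # per-reader allocation

    def pipe_close(self):
        self.private = None  # the counterpart this patch adds
```

    With both hooks present, every open of trace_pipe has a matching
    point at which its state can be released.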

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

25 Nov, 2009

1 commit

  • Commit ee949a86b3aef15845ea677aa60231008de62672 ("tracing/syscalls:
    Use long for syscall ret format and field definitions") changed the
    syscall exit return type to long, but forgot to change it in the
    struct.

    Signed-off-by: Tom Zanussi
    Cc: Steven Rostedt
    Cc: Peter Zijlstra
    Cc: Mike Galbraith
    Cc: Paul Mackerras
    Cc: Arnaldo Carvalho de Melo
    Cc: Frederic Weisbecker
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Tom Zanussi
     

21 Nov, 2009

1 commit


08 Nov, 2009

2 commits

    The macro used to be used in both trace_selftest.c and
    trace_ksym.c, but no longer is, so remove it from the header file.

    Signed-off-by: Li Zefan
    Cc: Prasad
    Signed-off-by: Frederic Weisbecker

    Li Zefan
     
    This patch rebases the implementation of the breakpoints API on top
    of perf event instances.

    Each breakpoint is now a perf event, which handles the
    register scheduling, thread/cpu attachment, etc.

    The new layering is now made as follows:

           ptrace       kgdb      ftrace   perf syscall
              \          |          /         /
               \         |         /         /
                                             /
                Core breakpoint API         /
                                           /
                         |                /
                         |               /

                  Breakpoints perf events

                         |
                         |

                  Breakpoints PMU ---- Debug Register constraints handling
                                       (Part of core breakpoint API)
                         |
                         |

                 Hardware debug registers

    Reasons for this rewrite:

    - Use the centralized/optimized pmu registers scheduling,
    implying an easier arch integration
    - More powerful register handling: perf attributes (pinned/flexible
    events, exclusive/non-exclusive, tunable period, etc...)

    Impact:

    - New perf ABI: the hardware breakpoints counters
    - Ptrace breakpoints setting remains tricky and still needs some per
    thread breakpoints references.

    Todo (in the order):

    - Support breakpoints perf counter events for perf tools (ie: implement
    perf_bpcounter_event())
    - Support from perf tools

    Changes in v2:

    - Follow the perf "event" rename
    - The ptrace regression has been fixed (ptrace breakpoint perf events
    weren't released when a task ended)
    - Drop the struct hw_breakpoint and store generic fields in
    perf_event_attr.
    - Separate core and arch specific headers, drop
    asm-generic/hw_breakpoint.h and create linux/hw_breakpoint.h
    - Use new generic len/type for breakpoint
    - Handle off case: when breakpoints api is not supported by an arch

    Changes in v3:

    - Fix broken CONFIG_KVM, we need to propagate the breakpoint api
    changes to kvm when we exit the guest and restore the bp registers
    to the host.

    Changes in v4:

    - Drop the hw_breakpoint_restore() stub as it is only used by KVM
    - EXPORT_SYMBOL_GPL hw_breakpoint_restore() as KVM can be built as a
    module
    - Restore the breakpoints unconditionally on kvm guest exit:
    TIF_DEBUG_THREAD no longer covers every case of running
    breakpoints, and vcpu->arch.switch_db_regs might not always be
    set when the guest uses debug registers.
    (Waiting for a reliable optimization)

    Changes in v5:

    - Split-up the asm-generic/hw-breakpoint.h moving to
    linux/hw_breakpoint.h into a separate patch
    - Optimize the breakpoints restoring while switching from kvm guest
    to host. We only want to restore the state if we have active
    breakpoints to the host, otherwise we don't care about messed-up
    address registers.
    - Add asm/hw_breakpoint.h to Kbuild
    - Fix bad breakpoint type in trace_selftest.c

    Changes in v6:

    - Fix wrong header inclusion in trace.h (triggered a build
    error with CONFIG_FTRACE_SELFTEST)

    Signed-off-by: Frederic Weisbecker
    Cc: Prasad
    Cc: Alan Stern
    Cc: Peter Zijlstra
    Cc: Arnaldo Carvalho de Melo
    Cc: Steven Rostedt
    Cc: Ingo Molnar
    Cc: Jan Kiszka
    Cc: Jiri Slaby
    Cc: Li Zefan
    Cc: Avi Kivity
    Cc: Paul Mackerras
    Cc: Mike Galbraith
    Cc: Masami Hiramatsu
    Cc: Paul Mundt

    Frederic Weisbecker
     

23 Oct, 2009

1 commit


18 Oct, 2009

1 commit


15 Oct, 2009

1 commit

  • - Add an ioctl to allocate a filter for a perf event.

    - Free the filter when the associated perf event is to be freed.

    - Do the filtering in perf_swevent_match().

    Signed-off-by: Li Zefan
    Acked-by: Peter Zijlstra
    Acked-by: Frederic Weisbecker
    Cc: Steven Rostedt
    Cc: Tom Zanussi
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Li Zefan