14 Jan, 2020

2 commits

  • As there are two struct ring_buffers in the kernel, it causes some
    confusion; the other one is the perf ring buffer. It was agreed that, as
    neither ring buffer is generic enough to be used globally, they should be
    renamed:

    perf's ring_buffer -> perf_buffer
    ftrace's ring_buffer -> trace_buffer

    This implements the changes to the ring buffer that ftrace uses.

    Link: https://lore.kernel.org/r/20191213140531.116b3200@gandalf.local.home

    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     
  • As we are working to remove the generic "ring_buffer" name that is used by
    both tracing and perf, the ring_buffer name for tracing will be renamed to
    trace_buffer, and perf's ring buffer will be renamed to perf_buffer.

    As there already exists a trace_buffer that is used by the trace_arrays, it
    first needs to be renamed to array_buffer.

    Link: https://lore.kernel.org/r/20191213153553.GE20583@krava

    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     

04 Jan, 2020

1 commit

  • In the registration function, if register_trace_sched_migrate_task()
    returns an error, the already registered sched_switch/sched_wakeup_new/
    sched_wakeup probes are never unregistered. That is why the
    fail_deprobe_sched_switch error label was added, as in the sketch below.
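
    A minimal sketch of the register/unwind pattern this fix completes,
    assuming the kernel-internal names from the commit message (probe
    functions and details simplified; not the exact upstream code):

    static int tracing_sched_register(void)
    {
            int ret;

            ret = register_trace_sched_wakeup(probe_sched_wakeup, NULL);
            if (ret)
                    return ret;

            ret = register_trace_sched_wakeup_new(probe_sched_wakeup, NULL);
            if (ret)
                    goto fail_deprobe;

            ret = register_trace_sched_switch(probe_sched_switch, NULL);
            if (ret)
                    goto fail_deprobe_wake_new;

            ret = register_trace_sched_migrate_task(probe_sched_migrate_task, NULL);
            if (ret)
                    goto fail_deprobe_sched_switch;  /* the label this fix adds */

            return ret;

    fail_deprobe_sched_switch:
            unregister_trace_sched_switch(probe_sched_switch, NULL);
    fail_deprobe_wake_new:
            unregister_trace_sched_wakeup_new(probe_sched_wakeup, NULL);
    fail_deprobe:
            unregister_trace_sched_wakeup(probe_sched_wakeup, NULL);
            return ret;
    }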

    Link: http://lkml.kernel.org/r/20191231133530.2794-1-pilgrimtao@gmail.com

    Cc: stable@vger.kernel.org
    Fixes: 478142c39c8c2 ("tracing: do not grab lock in wakeup latency function tracing")
    Signed-off-by: Kaitao Cheng
    Signed-off-by: Steven Rostedt (VMware)

    Kaitao Cheng
     

25 Jul, 2019

1 commit


21 Feb, 2019

1 commit

  • Currently, tracing snapshots are context-free - they capture the ring
    buffer contents at the time the tracing_snapshot() function was
    invoked, and nothing else. Additionally, they're always taken
    unconditionally - the calling code can decide whether or not to take a
    snapshot, but the data used to make that decision is kept separately
    from the snapshot itself.

    This change adds the ability to associate with each trace instance
    some user data, along with an 'update' function that can use that data
    to determine whether or not to actually take a snapshot. The update
    function can then update that data along with any other state (as part
    of the data presumably), if warranted.

    Because snapshots are 'global' per-instance, only one user can enable
    and use a conditional snapshot for any given trace instance. To
    enable a conditional snapshot (see details in the function and data
    structure comments), the user calls tracing_snapshot_cond_enable().
    Similarly, to disable a conditional snapshot and free it up for other
    users, tracing_snapshot_cond_disable() should be called.

    To actually initiate a conditional snapshot, tracing_snapshot_cond()
    should be called. tracing_snapshot_cond() will invoke the update()
    callback, allowing the user to decide whether or not to actually take
    the snapshot and update the user-defined data associated with the
    snapshot. If the callback returns 'true', tracing_snapshot_cond()
    will then actually take the snapshot and return.

    This scheme allows for flexibility in snapshot implementations - for
    example, by implementing slightly different update() callbacks,
    snapshots can be taken in situations where the user is only interested
    in taking a snapshot when a new maximum is hit versus when a value
    changes in any way at all. Future patches will demonstrate both
    cases.
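
    As a rough illustration (not taken from the patch itself), a kernel-side
    user of this API might look like the following sketch; the my_* names and
    read_current_value() are invented for the example, and the header is an
    assumption about where the declarations live:

    #include <linux/kernel.h>  /* assumed: tracing_snapshot_cond*() declarations */

    struct my_cond_data {
            u64 max_seen;  /* user-defined state carried with the snapshot */
    };

    /* The update() callback: decide whether to take the snapshot, and
     * update the user data as part of that decision. */
    static bool my_update(struct trace_array *tr, void *cond_data)
    {
            struct my_cond_data *d = cond_data;
            u64 val = read_current_value();  /* hypothetical measurement */

            if (val <= d->max_seen)
                    return false;  /* not a new maximum: skip the snapshot */
            d->max_seen = val;     /* new max: record it and snapshot */
            return true;
    }

    static struct my_cond_data my_data;

    static int my_enable(struct trace_array *tr)
    {
            /* Only one user per instance may hold the conditional snapshot. */
            return tracing_snapshot_cond_enable(tr, &my_data, my_update);
    }

    static void my_check(struct trace_array *tr)
    {
            /* Snapshots only if my_update() returns true. */
            tracing_snapshot_cond(tr, &my_data);
    }

    static void my_disable(struct trace_array *tr)
    {
            /* Free the conditional snapshot for other users. */
            tracing_snapshot_cond_disable(tr);
    }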

    Link: http://lkml.kernel.org/r/1bea07828d5fd6864a585f83b1eed47ce097eb45.1550100284.git.tom.zanussi@linux.intel.com

    Signed-off-by: Tom Zanussi
    Signed-off-by: Steven Rostedt (VMware)

    Tom Zanussi
     

07 Feb, 2019

3 commits

  • This aligns the behavior of the wakeup tracers with the irqsoff latency
    tracer: record a stacktrace at the beginning and end of the wakeup. The
    stacktrace shows us what is happening in the kernel.

    Link: http://lkml.kernel.org/r/20190116160249.7554-1-changbin.du@gmail.com

    Signed-off-by: Changbin Du
    Signed-off-by: Steven Rostedt (VMware)

    Changbin Du
     
  • Add these info fields to funcgraph wakeup tracers:
    o Show CPU info since the waker could be on a different CPU.
    o Show function duration and overhead.
    o Show IRQ markers.

    Link: http://lkml.kernel.org/r/20190101154614.8887-3-changbin.du@gmail.com

    Signed-off-by: Changbin Du
    Signed-off-by: Steven Rostedt (VMware)

    Changbin Du
     
  • When function_graph is used by the latency tracers, a relative timestamp
    is more straightforward than the absolute timestamp that function trace
    prints. This change adds relative timestamp support to function_graph
    and applies it to the latency tracers (wakeup and irqsoff).

    Instead of:

    # tracer: irqsoff
    #
    # irqsoff latency trace v1.1.5 on 5.0.0-rc1-test
    # --------------------------------------------------------------------
    # latency: 521 us, #1125/1125, CPU#2 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:8)
    #    -----------------
    #    | task: swapper/2-0 (uid:0 nice:0 policy:0 rt_prio:0)
    #    -----------------
    #  => started at: __schedule
    #  => ended at:   _raw_spin_unlock_irq
    #
    #
    #                  _-----=> irqs-off
    #                 / _----=> need-resched
    #                | / _---=> hardirq/softirq
    #                || / _--=> preempt-depth
    #                ||| /
    #     TIME   CPU  TASK/PID     ||||  DURATION                  FUNCTION CALLS
    #      |      |    |    |      ||||   |   |                     |   |   |   |
      124.974306 |   2)  systemd-693   |  d..1  0.000 us |  __schedule();
      124.974307 |   2)  systemd-693   |  d..1           |  rcu_note_context_switch() {
      124.974308 |   2)  systemd-693   |  d..1  0.487 us |    rcu_preempt_deferred_qs();
      124.974309 |   2)  systemd-693   |  d..1  0.451 us |    rcu_qs();
      124.974310 |   2)  systemd-693   |  d..1  2.301 us |  }
    [..]
      124.974826 |   2)    <idle>-0    |  d..2           |  finish_task_switch() {
      124.974826 |   2)    <idle>-0    |  d..2           |    _raw_spin_unlock_irq() {
      124.974827 |   2)    <idle>-0    |  d..2  0.000 us |      _raw_spin_unlock_irq();
      124.974828 |   2)    <idle>-0    |  d..2  0.000 us |      tracer_hardirqs_on();
       <idle>-0       2d..2  552us : <stack trace>
     => __schedule
     => schedule_idle
     => do_idle
     => cpu_startup_entry
     => start_secondary
     => secondary_startup_64

    Show:

    # tracer: irqsoff
    #
    # irqsoff latency trace v1.1.5 on 5.0.0-rc1-test+
    # --------------------------------------------------------------------
    # latency: 511 us, #1053/1053, CPU#7 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:8)
    #    -----------------
    #    | task: swapper/7-0 (uid:0 nice:0 policy:0 rt_prio:0)
    #    -----------------
    #  => started at: __schedule
    #  => ended at:   _raw_spin_unlock_irq
    #
    #
    #                  _-----=> irqs-off
    #                 / _----=> need-resched
    #                | / _---=> hardirq/softirq
    #                || / _--=> preempt-depth
    #                ||| /
    #  REL TIME  CPU  TASK/PID     ||||  DURATION                  FUNCTION CALLS
    #      |      |    |    |      ||||   |   |                     |   |   |   |
            0 us |   7)   sshd-1704   |  d..1  0.000 us |  __schedule();
            1 us |   7)   sshd-1704   |  d..1           |  rcu_note_context_switch() {
            1 us |   7)   sshd-1704   |  d..1  0.611 us |    rcu_preempt_deferred_qs();
            2 us |   7)   sshd-1704   |  d..1  0.484 us |    rcu_qs();
            3 us |   7)   sshd-1704   |  d..1  2.599 us |  }
    [..]
          509 us |   7)    <idle>-0   |  d..2           |  finish_task_switch() {
          510 us |   7)    <idle>-0   |  d..2           |    _raw_spin_unlock_irq() {
          510 us |   7)    <idle>-0   |  d..2  0.000 us |      _raw_spin_unlock_irq();
          512 us |   7)    <idle>-0   |  d..2  0.000 us |      tracer_hardirqs_on();
       <idle>-0       7d..2  543us : <stack trace>
     => __schedule
     => schedule_idle
     => do_idle
     => cpu_startup_entry
     => start_secondary
     => secondary_startup_64

    Link: http://lkml.kernel.org/r/20190101154614.8887-2-changbin.du@gmail.com

    Signed-off-by: Changbin Du
    Signed-off-by: Steven Rostedt (VMware)

    Changbin Du
     

09 Dec, 2018

2 commits

  • Currently, registering function graph callbacks means passing in an entry
    function and a return function. We need a way to associate those functions
    together, so that the entry hook can determine whether to run the return
    hook. Having a structure that contains both functions will facilitate
    converting the code to do so.

    This is similar to the way function hooks are enabled (it passes in
    ftrace_ops). Instead of passing in the functions to use, a single structure
    is passed in to the registering function.

    The unregister function is now passed the fgraph_ops handle. When we
    allow more than one callback to the function graph hooks, this will let
    the system know which one to remove, as sketched below.
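
    Sketched, the new registration shape looks roughly like this (callback
    bodies elided; the my_* names are illustrative, not from the patch):

    static int my_graph_entry(struct ftrace_graph_ent *trace)
    {
            /* decide whether the matching return hook should run */
            return 1;
    }

    static void my_graph_return(struct ftrace_graph_ret *trace)
    {
            /* handle the function return */
    }

    static struct fgraph_ops my_fgraph_ops = {
            .entryfunc = my_graph_entry,
            .retfunc   = my_graph_return,
    };

    static int my_setup(void)
    {
            /* one structure registers both hooks */
            return register_ftrace_graph(&my_fgraph_ops);
    }

    static void my_teardown(void)
    {
            /* the same handle tells the system which callback pair to remove */
            unregister_ftrace_graph(&my_fgraph_ops);
    }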

    Reviewed-by: Joel Fernandes (Google)
    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     
  • Rearrange the functions in trace_sched_wakeup.c so that there are fewer
    #ifdef CONFIG_FUNCTION_TRACER and #ifdef CONFIG_FUNCTION_GRAPH_TRACER,
    instead of having the #ifdefs spread all over.

    No functional change is made.

    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     

30 Nov, 2018

1 commit

  • The tracefs file set_graph_function is used to restrict function graph
    tracing to the functions listed in that file (or all functions if the file
    is empty). The way this is implemented is that the function graph tracer
    looks at every function, and if the current depth is zero and the function
    matches something in the file then it will trace that function. When other
    functions are called, the depth will be greater than zero (because the
    original function will be at depth zero), and all functions will be traced
    where the depth is greater than zero.
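
    In rough pseudo-C, the decision described above amounts to the following
    (purely illustrative, not the kernel's exact code;
    func_matches_filter() is a hypothetical stand-in for the
    set_graph_function hash lookup):

    static bool should_trace(unsigned long func, int depth)
    {
            if (depth == 0)
                    return func_matches_filter(func);  /* hypothetical helper */
            /* depth > 0: assumed to be inside an already-traced function */
            return true;
    }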

    The issue is that when a function is first entered, and the handler that
    checks this logic is called, the depth is set to zero. If an interrupt
    comes in and a function in the interrupt handler is traced, its depth will
    be greater than zero and it will automatically be traced, even if the
    original function was not. Because the logic only looks at depth, it may
    trace interrupts when it should not.

    The recent design change of the function graph tracer to fix other bugs
    caused the depth to be zero while the function graph callback handler is
    being called for a longer time, widening the window for this race. The
    bug was actually there for much longer, but because the race window was so
    small it seldom happened. The Fixes tag below is for the commit that
    widened the race window, because that commit belongs to a series that will
    also help fix the original bug.

    Cc: stable@kernel.org
    Fixes: 39eb456dacb5 ("function_graph: Use new curr_ret_depth to manage depth instead of curr_ret_stack")
    Reported-by: Joe Lawrence
    Tested-by: Joe Lawrence
    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     

08 Nov, 2017

1 commit


02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boilerplate text.

    This patch is based on work done by Thomas Gleixner and Kate Stewart and
    Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset of
    the use cases:
    - file had no licensing information in it,
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information.

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier to apply to a
    file was done in a spreadsheet of side-by-side results of the output of
    two independent scanners (ScanCode & Windriver) producing SPDX tag:value
    files, created by Philippe Ombredanne. Philippe prepared the base
    worksheet and did an initial spot review of a few thousand files.

    The 4.13 kernel was the starting point of the analysis, with 60,537 files
    assessed. Kate Stewart did a file-by-file comparison of the scanner
    results in the spreadsheet to determine which SPDX license identifier(s)
    should be applied to each file. She confirmed any determination that was
    not immediately clear with lawyers working with the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging were:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
      lines of source.
    - File already had some variant of a license header in it (even if <5
      lines).

    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

10 Oct, 2017

1 commit

  • Steve requested better names for the new task-state helper functions.

    So introduce the concept of task-state index for the printing and
    rename __get_task_state() to task_state_index() and
    __task_state_to_char() to task_index_to_char().

    Requested-by: Steven Rostedt
    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Steven Rostedt
    Cc: Linus Torvalds
    Cc: Mike Galbraith
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/20170929115016.pzlqc7ss3ccystyg@hirez.programming.kicks-ass.net
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

29 Sep, 2017

1 commit


25 Dec, 2016

1 commit


09 Dec, 2016

1 commit

  • Currently both the wakeup and irqsoff tracers do not handle
    set_graph_notrace well. The ftrace infrastructure will ignore the return
    paths of all functions, leaving them hanging without an end:

    # echo '*spin*' > set_graph_notrace
    # cat trace
    [...]
            _raw_spin_lock() {
              preempt_count_add() {
                do_raw_spin_lock() {
                  update_rq_clock();

    Where the '*spin*' functions should have looked like this:

            _raw_spin_lock() {
              preempt_count_add();
              do_raw_spin_lock();
            }
            update_rq_clock();

    Instead, have the wakeup and irqsoff tracers ignore the functions set by
    set_graph_notrace, like the function_graph tracer does. Move the logic in
    the function_graph tracer into a header to allow the wakeup and irqsoff
    tracers to use it as well, roughly as sketched below.
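
    The shape of the change in the tracers' entry callbacks is roughly the
    following sketch (simplified; the real patch also handles suppressing
    the children of a notrace'd function):

    static int wakeup_graph_entry(struct ftrace_graph_ent *trace)
    {
            /* Honor set_graph_notrace, as the function_graph tracer does. */
            if (ftrace_graph_notrace_addr(trace->func))
                    return 0;  /* simplified: skip recording this entry */

            /* ... existing wakeup-tracer entry handling ... */
            return 1;
    }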

    Cc: Namhyung Kim
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

15 Nov, 2016

1 commit


07 Nov, 2015

1 commit

  • Pull tracing updates from Steven Rostedt:
    "Most of the changes are clean ups and small fixes. Some of them have
    stable tags to them. I searched through my INBOX just as the merge
    window opened and found lots of patches to pull. I ran them through
    all my tests and they were in linux-next for a few days.

    Features added this release:
    ----------------------------

    - Module globbing. You can now filter function tracing to several
    modules. # echo '*:mod:*snd*' > set_ftrace_filter (Dmitry Safonov)

    - Tracer specific options are now visible even when the tracer is not
    active. It was rather annoying that you can only see and modify
    tracer options after enabling the tracer. Now they are in the
    options/ directory even when the tracer is not active. Although
    they are still only visible when the tracer is active in the
    trace_options file.

    - Trace options are now per instance (although some of the tracer
    specific options are global)

    - New tracefs file: set_event_pid. If any pid is added to this file,
    then all events in the instance will filter out events that are not
    part of this pid. sched_switch and sched_wakeup events handle next
    and the wakee pids"

    * tag 'trace-v4.4' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (68 commits)
    tracefs: Fix refcount imbalance in start_creating()
    tracing: Put back comma for empty fields in boot string parsing
    tracing: Apply tracer specific options from kernel command line.
    tracing: Add some documentation about set_event_pid
    ring_buffer: Remove unneeded smp_wmb() before wakeup of reader benchmark
    tracing: Allow dumping traces without tracking trace started cpus
    ring_buffer: Fix more races when terminating the producer in the benchmark
    ring_buffer: Do no not complete benchmark reader too early
    tracing: Remove redundant TP_ARGS redefining
    tracing: Rename max_stack_lock to stack_trace_max_lock
    tracing: Allow arch-specific stack tracer
    recordmcount: arm64: Replace the ignored mcount call into nop
    recordmcount: Fix endianness handling bug for nop_mcount
    tracepoints: Fix documentation of RCU lockdep checks
    tracing: ftrace_event_is_function() can return boolean
    tracing: is_legal_op() can return boolean
    ring-buffer: rb_event_is_commit() can return boolean
    ring-buffer: rb_per_cpu_empty() can return boolean
    ring_buffer: ring_buffer_empty{cpu}() can return boolean
    ring-buffer: rb_is_reader_page() can return boolean
    ...

    Linus Torvalds
     

03 Nov, 2015

1 commit


06 Oct, 2015

1 commit

  • __trace_sched_switch_state() is the last remaining PREEMPT_ACTIVE
    user; move trace_sched_switch() from prepare_task_switch() to
    __schedule() and propagate the @preempt argument.

    Signed-off-by: Peter Zijlstra (Intel)
    Reviewed-by: Thomas Gleixner
    Reviewed-by: Steven Rostedt
    Cc: Linus Torvalds
    Cc: Mike Galbraith
    Cc: Peter Zijlstra
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

01 Oct, 2015

1 commit

  • In preparation to make trace options per instance, the global trace_flags
    needs to be moved from being a global variable to a field within the trace
    instance trace_array structure.

    There's still more work to do, as there are some functions that use
    trace_flags without passing in a way to get to the current_trace array. For
    those, the global_trace is used directly (from trace.c). This includes
    setting and clearing the trace_flags. This means that when a new instance is
    created, it just gets the trace_flags of the global_trace and will not be
    able to modify them. Depending on the functions that have access to the
    trace_array, the flags of an instance may not affect parts of its trace,
    where the global_trace is used. These will be fixed in future changes.

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

30 Sep, 2015

3 commits


26 Sep, 2015

1 commit

  • In preparation for having trace options be per instance, the trace_array
    needs to be passed to trace_buffer_unlock_commit(). The
    trace_event_buffer_lock_reserve() call already passes in the
    trace_event_file, from which the trace_array can be derived.
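
    The resulting signature would look roughly like this (a sketch; parameter
    names are assumed):

    void trace_buffer_unlock_commit(struct trace_array *tr,  /* new: the instance */
                                    struct ring_buffer *buffer,
                                    struct ring_buffer_event *event,
                                    unsigned long flags, int pc);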

    Also added a "__init" to the boot up test event plus function tracing
    function function_test_events_call().

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

03 Aug, 2015

1 commit

  • Mathieu reported that since 317f394160e9 ("sched: Move the second half
    of ttwu() to the remote cpu") trace_sched_wakeup() can happen out of
    context of the waker.

    This is a problem when you want to analyse wakeup paths because it is
    now very hard to correlate the wakeup event to whoever issued the
    wakeup.

    OTOH trace_sched_wakeup() is issued at the point where we set
    p->state = TASK_RUNNING, which is right where we hand the task off to
    the scheduler, so this is an important point when looking at scheduling
    behaviour: up to here it has been the wakeup path, and everything
    hereafter is due to scheduler policy.

    To bridge this gap, introduce a second tracepoint: trace_sched_waking.
    It is guaranteed to be called in the waker context.
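
    Schematically, the ordering in the wakeup path becomes the following
    (heavily simplified from the scheduler code; not the exact upstream
    functions):

    static int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
    {
            /* ... */
            trace_sched_waking(p);  /* new: always fires in the waker's context */
            /* the rest of the wakeup may be completed on a remote CPU ... */
            return 1;
    }

    static void ttwu_do_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
    {
            p->state = TASK_RUNNING;
            trace_sched_wakeup(p);  /* may fire out of the waker's context */
    }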

    Reported-by: Mathieu Desnoyers
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Francis Giraldeau
    Cc: Linus Torvalds
    Cc: Mike Galbraith
    Cc: Peter Zijlstra
    Cc: Steven Rostedt
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/20150609091336.GQ3644@twins.programming.kicks-ass.net
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

14 May, 2015

1 commit


23 Jan, 2015

1 commit


12 Nov, 2014

1 commit

  • The only code that references tracing_sched_switch_trace() and
    tracing_sched_wakeup_trace() is the wakeup latency tracer. Those
    two functions used to belong to the sched_switch tracer, which has
    long been removed. These functions were left behind because the
    wakeup latency tracer used them. But since the wakeup latency tracer
    is the only one to use them, they should be static functions inside
    that code.

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

30 Apr, 2014

1 commit

  • The ring buffer now has a built in way to wake up readers when there's
    data, using irq_work so that it is safe to do from any context. But the
    tracers were still using the old "poor man's" wait polling that checks
    every 1/10 of a second to see if it should wake up a waiter. This makes
    the latency for a wake up excruciatingly long. There is no need to do
    that anymore.

    Completely remove the different wait_poll types from the tracers
    and have them all use the default one now.

    Reported-by: Johannes Berg
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

22 Apr, 2014

3 commits


21 Feb, 2014

2 commits


13 Jan, 2014

2 commits

  • Some method to deal with rt-mutexes and make sched_dl interact with
    the current PI code is needed, raising all but trivial issues that need
    (according to us) to be solved with some restructuring of the pi-code
    (i.e., going toward a proxy-execution-ish implementation).

    This is under development. In the meanwhile, as a temporary solution,
    what this commit does is:

    - ensure a pi-lock owner with waiters is never throttled down. Instead,
    when it runs out of runtime, it immediately gets replenished and its
    deadline is postponed;

    - the scheduling parameters (relative deadline and default runtime)
    used for those replenishments --during the whole period it holds the
    pi-lock-- are the ones of the waiting task with earliest deadline.

    Acting this way, we provide some kind of boosting to the lock-owner,
    still by using the existing (actually, slightly modified by the previous
    commit) pi-architecture.

    We would stress the fact that this is only a surely needed, all but
    clean solution to the problem. In the end it's only a way to re-start
    discussion within the community. So, as always, comments, ideas, rants,
    etc.. are welcome! :-)

    Signed-off-by: Dario Faggioli
    Signed-off-by: Juri Lelli
    [ Added !RT_MUTEXES build fix. ]
    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/r/1383831828-15501-11-git-send-email-juri.lelli@gmail.com
    Signed-off-by: Ingo Molnar

    Dario Faggioli
     
  • It is very likely that systems that want/need to use the new
    SCHED_DEADLINE policy also want to have the scheduling latency of
    the -deadline tasks under control.

    For this reason a new version of the scheduling wakeup latency tracer,
    called "wakeup_dl", is introduced.

    As a consequence of applying this patch there will be three wakeup
    latency tracers:

    * "wakeup", which deals with all tasks in the system;
    * "wakeup_rt", which deals with -rt and -deadline tasks only;
    * "wakeup_dl", which deals with -deadline tasks only.

    Signed-off-by: Dario Faggioli
    Signed-off-by: Juri Lelli
    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/r/1383831828-15501-9-git-send-email-juri.lelli@gmail.com
    Signed-off-by: Ingo Molnar

    Dario Faggioli
     

15 Mar, 2013

2 commits

  • Currently, the only way to stop the latency tracers from doing function
    tracing is to fully disable the function tracer from the proc file
    system:

    echo 0 > /proc/sys/kernel/ftrace_enabled

    This is a big hammer approach as it disables function tracing for
    all users. This includes kprobes, perf, stack tracer, etc.

    Instead, create a function-trace option that the latency tracers can
    check to determine if it should enable function tracing or not.
    This option can be set or cleared even while the tracer is active
    and the tracers will disable or enable function tracing depending
    on how the option was set.

    Instead of using the proc file, disable latency function tracing with

    echo 0 > /debug/tracing/options/function-trace

    Cc: Thomas Gleixner
    Cc: Peter Zijlstra
    Cc: Frederic Weisbecker
    Cc: Clark Williams
    Cc: John Kacur
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • Currently, the way the latency tracers and snapshot feature works
    is to have a separate trace_array called "max_tr" that holds the
    snapshot buffer. For latency tracers, this snapshot buffer is used
    to swap the running buffer with this buffer to save the current max
    latency.

    The only items the max_tr really needs are a copy of the buffer itself,
    the per_cpu data pointers, the time_start timestamp that states when the
    max latency was triggered, and the cpu on which the max latency was
    triggered. All other fields in trace_array are unused by the max_tr,
    making the max_tr mostly bloat.

    This change removes the max_tr completely, and adds a new structure
    called trace_buffer, that holds the buffer pointer, the per_cpu data
    pointers, the time_start timestamp, and the cpu where the latency occurred.

    The trace_array now has two trace_buffers, one for the normal trace and
    one for the max trace or snapshot. By doing this, not only do we remove
    the bloat from the max_tr, but trace instances can now use their own
    snapshot feature, rather than only the top level global_trace having the
    snapshot feature and latency tracers for itself.
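
    Sketched from the description above (field types approximated, not the
    exact upstream definition):

    struct trace_buffer {
            struct trace_array              *tr;       /* owning instance */
            struct ring_buffer              *buffer;   /* the buffer itself */
            struct trace_array_cpu __percpu *data;     /* per_cpu data pointers */
            u64                             time_start; /* when the max latency hit */
            int                             cpu;       /* cpu that triggered it */
    };

    struct trace_array {
            /* ... */
            struct trace_buffer     trace_buffer;  /* normal trace */
            struct trace_buffer     max_buffer;    /* snapshot / max latency */
            /* ... */
    };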

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)