12 Jul, 2013

1 commit

  • Pull tracing changes from Steven Rostedt:
    "The majority of the changes here are cleanups for the large changes
    that were added to 3.10, which includes several bug fixes that have
    been marked for stable.

    As for new features, there were a few, but nothing to write to LWN
    about. These include:

    New function triggers called "dump" and "cpudump" that will cause
    ftrace to dump its buffer to the console when the function is called.
    The difference between "dump" and "cpudump" is that "dump" will dump
    the entire contents of the ftrace buffer, whereas "cpudump" will only
    dump the contents of the ftrace buffer for the CPU that called the
    function.

    Another small enhancement is a new sysctl switch called
    "traceoff_on_warning" which, when enabled, will disable tracing if any
    WARN_ON() is triggered. This is useful if you want to debug what
    caused a warning and do not want to risk losing your trace data by the
    ring buffer overwriting the data before you can disable it. There's
    also a kernel command-line option of the same name that enables this
    at boot"
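
    As a rough illustration of the new triggers (a sketch assuming the
    standard debugfs tracing path and the usual set_ftrace_filter trigger
    syntax; do_page_fault is just an example function):

    cd /sys/kernel/debug/tracing
    echo 'do_page_fault:dump' > set_ftrace_filter      # dump the whole ftrace buffer when hit
    # or, to dump only the buffer of the CPU that hit the function:
    echo 'do_page_fault:cpudump' > set_ftrace_filter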

    * tag 'trace-3.11' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (34 commits)
    tracing: Make tracing_open_generic_{tr,tc}() static
    tracing: Remove ftrace() function
    tracing: Remove TRACE_EVENT_TYPE enum definition
    tracing: Make tracer_tracing_{off,on,is_on}() static
    tracing: Fix irqs-off tag display in syscall tracing
    uprobes: Fix return value in error handling path
    tracing: Fix race between deleting buffer and setting events
    tracing: Add trace_array_get/put() to event handling
    tracing: Get trace_array ref counts when accessing trace files
    tracing: Add trace_array_get/put() to handle instance refs better
    tracing: Protect ftrace_trace_arrays list in trace_events.c
    tracing: Make trace_marker use the correct per-instance buffer
    ftrace: Do not run selftest if command line parameter is set
    tracing/kprobes: Don't pass addr=ip to perf_trace_buf_submit()
    tracing: Use flag buffer_disabled for irqsoff tracer
    tracing/kprobes: Turn trace_probe->files into list_head
    tracing: Fix disabling of soft disable
    tracing: Add missing syscall_metadata comment
    tracing: Simplify code for showing of soft disabled flag
    tracing/kprobes: Kill probe_enable_lock
    ...

    Linus Torvalds
     

03 Jul, 2013

3 commits

  • I have patches that will use tracing_open_generic_tr/tc() in other
    files, but as they are not ready to be merged yet, and Fengguang Wu's
    sparse scripts pointed out that these functions were not declared
    anywhere, I'll make them static for now.

    When these functions are required elsewhere, I'll remove the static
    qualifier then.

    Reported-by: kbuild test robot
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • The only caller of function ftrace(...) was removed a long time ago,
    so remove the function body as well.

    Link: http://lkml.kernel.org/r/1365564393-10972-10-git-send-email-jovi.zhangwei@huawei.com

    Signed-off-by: zhangwei(Jovi)
    Signed-off-by: Steven Rostedt

    zhangwei(Jovi)
     
  • I have patches that will use tracer_tracing_on/off/is_on() in other
    files, but as they are not ready to be merged yet, and Fengguang Wu's
    sparse scripts pointed out that these functions were not declared
    anywhere, I'll make them static for now.

    When these functions are required elsewhere, I'll remove the static
    qualifier then.

    Reported-by: kbuild test robot
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

02 Jul, 2013

5 commits

  • When a trace file is opened that may access a trace array, it must
    increment its ref count to prevent it from being deleted.

    Cc: stable@vger.kernel.org # 3.10
    Reported-by: Alexander Lam
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • Commit a695cb58162 "tracing: Prevent deleting instances when they are being read"
    tried to fix a race between deleting a trace instance and reading contents
    of a trace file. But it wasn't good enough. The following could crash the kernel:

    # cd /sys/kernel/debug/tracing/instances
    # ( while :; do mkdir foo; rmdir foo; done ) &
    # ( while :; do cat foo/trace &> /dev/null; done ) &

    Luckily this can only be done by root user, but it should be fixed regardless.

    The problem is that a delete of the file can happen after the reader starts
    to open the file but before it grabs the trace_types_mutex.

    The solution is to validate the trace array before using it. If the trace
    array does not exist in the list of trace arrays, then it returns -ENODEV.

    There's a possibility that a trace_array could be deleted and a new one
    created, and the open would open the new one's file instead. But that is
    very minor: it will just return the data of the new trace array, which
    may confuse the user but will not crash the system. As this can only be
    done by root anyway, the race will only occur if root is deleting what
    it's trying to read at the same time.

    Cc: stable@vger.kernel.org # 3.10
    Reported-by: Alexander Lam
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • There are multiple places where the ftrace_trace_arrays list is accessed in
    trace_events.c without the trace_types_lock held.

    Link: http://lkml.kernel.org/r/1372732674-22726-1-git-send-email-azl@google.com

    Cc: Vaibhav Nagarnaik
    Cc: David Sharp
    Cc: Alexander Z Lam
    Cc: stable@vger.kernel.org # 3.10
    Signed-off-by: Alexander Z Lam
    Signed-off-by: Steven Rostedt

    Alexander Z Lam
     
  • The trace_marker file was present for each new instance created, but it
    added the trace mark to the global trace buffer instead of to
    the instance's buffer.
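
    A quick way to exercise the per-instance behaviour (instance name is
    arbitrary):

    cd /sys/kernel/debug/tracing/instances
    mkdir foo
    echo hello > foo/trace_marker   # the mark should land in foo's buffer...
    cat foo/trace                   # ...and show up here, not in the global trace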

    Link: http://lkml.kernel.org/r/1372717885-4543-2-git-send-email-azl@google.com

    Cc: David Sharp
    Cc: Vaibhav Nagarnaik
    Cc: Alexander Z Lam
    Cc: stable@vger.kernel.org # 3.10
    Signed-off-by: Alexander Z Lam
    Signed-off-by: Steven Rostedt

    Alexander Z Lam
     
    If the ring buffer is disabled and the irqsoff tracer records a trace, it
    will clear out its buffer and lose the data it had previously recorded.

    Currently there's a callback when writing to the tracing_on file, but if
    tracing is disabled via the function tracer trigger, it will not inform
    the irqsoff tracer to stop recording.

    The "mirror" flag (buffer_disabled) in the trace_array, which keeps
    track of the status of the trace_array's buffer, gives the irqsoff
    tracer a fast way to know whether it should record a new trace or not.
    The flag may be a little behind the real state of the buffer, but it
    should not affect the trace too much. It's more important for the irqsoff
    tracer to be fast.

    Reported-by: Dave Jones
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

20 Jun, 2013

1 commit

    Add a traceoff_on_warning option, available both on the kernel command
    line and as a sysctl. When set, any WARN*() function that is hit will cause
    the tracing_on variable to be cleared, which disables writing to the
    ring buffer.

    This is especially useful when tracing a bug with function tracing. When
    a warning is hit, the print caused by the warning can flood the trace with
    the functions producing the output for the warning. This can make the
    resulting trace useless by either hiding where the bug happened, or worse,
    by overflowing the buffer and losing the trace of the bug entirely.
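
    A short usage sketch (the sysctl lives under /proc/sys/kernel, and the
    boot parameter shares the same name):

    echo 1 > /proc/sys/kernel/traceoff_on_warning   # stop tracing on the next WARN*()
    # or equivalently:
    sysctl -w kernel.traceoff_on_warning=1
    # or add "traceoff_on_warning" to the kernel command line to enable it at boot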

    Acked-by: Peter Zijlstra
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

12 Jun, 2013

2 commits

    Commit 4f271a2a60c748599b30bb4dafff30d770439b96
    (tracing: Add a proc file to stop tracing and free buffer)
    implemented a method to free up the ring buffer's kernel memory
    in the release code path of free_buffer's fd.

    So we don't need read/write support for free_buffer: indeed we just
    have a dummy write fop and don't implement a read fop at all.

    That makes 0200 a more reasonable file mode for free_buffer than
    the current 0644.
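
    A rough sketch of the intended use, based on the commit referenced
    above (releasing the descriptor stops tracing and shrinks the ring
    buffer):

    exec 3> /sys/kernel/debug/tracing/free_buffer   # write-only open, hence mode 0200
    # ... run and trace the workload of interest ...
    exec 3>&-                                       # closing the fd stops tracing and frees the buffer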

    Link: http://lkml.kernel.org/r/20130526085201.GA3183@udknight

    Acked-by: Vaibhav Nagarnaik
    Acked-by: David Sharp
    Signed-off-by: Wang YanQing
    Signed-off-by: Steven Rostedt

    Wang YanQing
     
    The output format of the x86-tsc and counter clocks should be raw, but after
    applying commit 2b6080f28c7cc3efc8625ab71495aae89aeb63a0, the format was
    changed to nanoseconds. This is because the global variable trace_clock_id was used.
    When multiple buffers are used, the clock_id of each sub-buffer should be used. So
    this patch uses tr->clock_id instead of the global variable trace_clock_id.

    [ Basically, this fixes a regression where the multibuffer code changed the
    trace_clock file to update tr->clock_id but the traces still use the old
    global trace_clock_id variable, negating the file's effect. The global
    trace_clock_id variable is obsolete and removed. - SR ]
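
    A minimal sketch of the per-buffer behaviour this fixes (the instance
    name is assumed to already exist):

    cd /sys/kernel/debug/tracing
    echo x86-tsc > trace_clock                  # top-level buffer: raw TSC timestamps
    echo counter > instances/foo/trace_clock    # each sub-buffer keeps its own clock_id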

    Link: http://lkml.kernel.org/r/20130423013239.22334.7394.stgit@yunodevel

    Signed-off-by: Yoshihiro YUNOMAE
    Signed-off-by: Steven Rostedt

    Yoshihiro YUNOMAE
     

07 Jun, 2013

1 commit

  • The irqsoff tracer records the max time that interrupts are disabled.
    There are hooks in the assembly code that calls back into the tracer when
    interrupts are disabled or enabled.

    When they are enabled, the tracer checks if the amount of time they
    were disabled is larger than the previous recorded max interrupts off
    time. If it is, it creates a snapshot of the currently running trace
    to store where the last largest interrupts off time was held and how
    it happened.

    During testing, this RCU lockdep dump appeared:

    [ 1257.829021] ===============================
    [ 1257.829021] [ INFO: suspicious RCU usage. ]
    [ 1257.829021] 3.10.0-rc1-test+ #171 Tainted: G W
    [ 1257.829021] -------------------------------
    [ 1257.829021] /home/rostedt/work/git/linux-trace.git/include/linux/rcupdate.h:780 rcu_read_lock() used illegally while idle!
    [ 1257.829021]
    [ 1257.829021] other info that might help us debug this:
    [ 1257.829021]
    [ 1257.829021]
    [ 1257.829021] RCU used illegally from idle CPU!
    [ 1257.829021] rcu_scheduler_active = 1, debug_locks = 0
    [ 1257.829021] RCU used illegally from extended quiescent state!
    [ 1257.829021] 2 locks held by trace-cmd/4831:
    [ 1257.829021] #0: (max_trace_lock){......}, at: [] stop_critical_timing+0x1a3/0x209
    [ 1257.829021] #1: (rcu_read_lock){.+.+..}, at: [] __update_max_tr+0x88/0x1ee
    [ 1257.829021]
    [ 1257.829021] stack backtrace:
    [ 1257.829021] CPU: 3 PID: 4831 Comm: trace-cmd Tainted: G W 3.10.0-rc1-test+ #171
    [ 1257.829021] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./To be filled by O.E.M., BIOS SDBLI944.86P 05/08/2007
    [ 1257.829021] 0000000000000001 ffff880065f49da8 ffffffff8153dd2b ffff880065f49dd8
    [ 1257.829021] ffffffff81092a00 ffff88006bd78680 ffff88007add7500 0000000000000003
    [ 1257.829021] ffff88006bd78680 ffff880065f49e18 ffffffff810daebf ffffffff810dae5a
    [ 1257.829021] Call Trace:
    [ 1257.829021] [] dump_stack+0x19/0x1b
    [ 1257.829021] [] lockdep_rcu_suspicious+0x109/0x112
    [ 1257.829021] [] __update_max_tr+0xed/0x1ee
    [ 1257.829021] [] ? __update_max_tr+0x88/0x1ee
    [ 1257.829021] [] ? user_enter+0xfd/0x107
    [ 1257.829021] [] update_max_tr_single+0x11d/0x12d
    [ 1257.829021] [] ? user_enter+0xfd/0x107
    [ 1257.829021] [] stop_critical_timing+0x141/0x209
    [ 1257.829021] [] ? trace_hardirqs_on+0xd/0xf
    [ 1257.829021] [] ? user_enter+0xfd/0x107
    [ 1257.829021] [] time_hardirqs_on+0x2a/0x2f
    [ 1257.829021] [] ? user_enter+0xfd/0x107
    [ 1257.829021] [] trace_hardirqs_on_caller+0x16/0x197
    [ 1257.829021] [] trace_hardirqs_on+0xd/0xf
    [ 1257.829021] [] user_enter+0xfd/0x107
    [ 1257.829021] [] do_notify_resume+0x92/0x97
    [ 1257.829021] [] int_signal+0x12/0x17

    What happened was that on entering user code, interrupts were enabled
    and a new max interrupts-off time was recorded. The trace buffer was saved along with
    various information about the task: comm, pid, uid, priority, etc.

    The uid is recorded with task_uid(tsk). But this is a macro that uses rcu_read_lock()
    to retrieve the data, and here it happened to run where RCU is blind (user_enter).

    As only the preempt and irqs-off tracers can hit this, and for both of them
    tsk is always current, use current_uid() instead of task_uid() when
    tsk == current. current_uid() does not use RCU, as only current can change its own uid.

    This fixes the RCU suspicious splat.

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

23 May, 2013

1 commit

  • If ftrace= is on the kernel command line, when that tracer is
    registered, it will be initiated by tracing_set_tracer() to execute that
    tracer.

    The nop tracer is just a stub tracer that is used to have no tracer
    enabled. It is assigned at early bootup as it is the default tracer.

    But if ftrace=nop is on the kernel command line, the registering of the
    nop tracer will call tracing_set_tracer(), which will try to execute
    the nop tracer. tracing_set_tracer() expects tr->current_trace to already
    be assigned (normally it is assigned to the nop tracer). As it hasn't been
    assigned to anything yet, the system crashes.

    The simple fix is to move the tr->current_trace = nop before registering
    the nop tracer. The functionality is still the same as the nop tracer
    doesn't do anything anyway.

    Reported-by: Peter Zijlstra
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

30 Apr, 2013

2 commits

  • During the 3.10 merge, a conflict happened and the resolution was
    almost, but not quite, correct. An if statement was reversed.

    Signed-off-by: Steven Rostedt
    [ Duh. That was just silly of me - Linus ]
    Signed-off-by: Linus Torvalds

    Steven Rostedt
     
  • Pull tracing updates from Steven Rostedt:
    "Along with the usual minor fixes and clean ups there are a few major
    changes with this pull request.

    1) Multiple buffers for the ftrace facility

    This feature has been requested by many people over the last few
    years. I even heard that Google was about to implement it themselves.
    I finally had time and cleaned up the code such that you can now
    create multiple instances of the ftrace buffer and have different
    events go to different buffers. This way, a low frequency event will
    not be lost in the noise of a high frequency event.

    Note, currently only events can go to different buffers; the tracers
    (i.e. function, function_graph and the latency tracers) can still only
    write to the main buffer.

    2) The function tracer triggers have now been extended.

    The function tracer had two triggers. One to enable tracing when a
    function is hit, and one to disable tracing. Now you can record a
    stack trace on a single (or many) function(s), take a snapshot of the
    buffer (copy it to the snapshot buffer), and you can enable or disable
    an event to be traced when a function is hit.

    3) A perf clock has been added.

    A "perf" clock can be chosen to be used when tracing. This will cause
    ftrace to use the same clock as perf uses, and hopefully this will
    make it easier to interleave the perf and ftrace data for analysis."
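
    A rough usage sketch of the multi-buffer feature (paths and the
    sched_switch event are purely illustrative):

    cd /sys/kernel/debug/tracing/instances
    mkdir foo                                       # create a second ftrace buffer
    echo 1 > foo/events/sched/sched_switch/enable   # route only this event into it
    cat foo/trace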

    * tag 'trace-3.10' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (82 commits)
    tracepoints: Prevent null probe from being added
    tracing: Compare to 1 instead of zero for is_signed_type()
    tracing: Remove obsolete macro guard _TRACE_PROFILE_INIT
    ftrace: Get rid of ftrace_profile_bits
    tracing: Check return value of tracing_init_dentry()
    tracing: Get rid of unneeded key calculation in ftrace_hash_move()
    tracing: Reset ftrace_graph_filter_enabled if count is zero
    tracing: Fix off-by-one on allocating stat->pages
    kernel: tracing: Use strlcpy instead of strncpy
    tracing: Update debugfs README file
    tracing: Fix ftrace_dump()
    tracing: Rename trace_event_mutex to trace_event_sem
    tracing: Fix comment about prefix in arch_syscall_match_sym_name()
    tracing: Convert trace_destroy_fields() to static
    tracing: Move find_event_field() into trace_events.c
    tracing: Use TRACE_MAX_PRINT instead of constant
    tracing: Use pr_warn_once instead of open coded implementation
    ring-buffer: Add ring buffer startup selftest
    tracing: Bring Documentation/trace/ftrace.txt up to date
    tracing: Add "perf" trace_clock
    ...

    Conflicts:
    kernel/trace/ftrace.c
    kernel/trace/trace.c

    Linus Torvalds
     

15 Apr, 2013

1 commit

  • Pull perf fixes from Ingo Molnar:
    "Misc fixlets"

    * 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
    perf: Fix error return code
    ftrace: Fix strncpy() use, use strlcpy() instead of strncpy()
    perf: Fix strncpy() use, use strlcpy() instead of strncpy()
    perf: Fix strncpy() use, always make sure it's NUL terminated
    perf: Fix ring_buffer perf_output_space() boundary calculation
    perf/x86: Fix uninitialized pt_regs in intel_pmu_drain_bts_buffer()

    Linus Torvalds
     

13 Apr, 2013

1 commit


09 Apr, 2013

2 commits


08 Apr, 2013

1 commit


21 Mar, 2013

1 commit

  • Update the README file in debugfs/tracing to something more useful.
    What's currently in the file is very old and what it shows doesn't
    have much use. Heck, it tells you how to mount debugfs! But to read
    this file you would have already needed to mount it.

    Replace the file with current, up-to-date information. It's rather
    limited, but what do you expect from a pseudo README file?

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

16 Mar, 2013

3 commits

    ftrace_dump() had a lot of issues. What ftrace_dump() does is, when
    ftrace_dump_on_oops is set (via a kernel parameter or sysctl), it
    will dump out the ftrace buffers to the console when either an oops,
    panic, or a sysrq-z occurs.
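
    For reference, the dump is enabled in the usual ways (sysctl path and
    boot parameter as named above):

    echo 1 > /proc/sys/kernel/ftrace_dump_on_oops
    # or boot with the kernel parameter:  ftrace_dump_on_oops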

    This was written a long time ago when ftrace was fragile to recursion.
    But it wasn't written well even for that.

    There's a possible deadlock that can occur if an ftrace_dump() is happening
    and an NMI triggers another dump. This is because it grabs a lock
    before checking if the dump has already run.

    It also totally disables ftrace, and tracing, for no good reason.

    As the ring_buffer now checks if it is read via an oops or NMI, where
    there's a chance that the buffer gets corrupted, it will disable
    itself. No need to have ftrace_dump() do the same.

    ftrace_dump() is now cleaned up so that it uses an atomic counter to
    make sure only one dump happens at a time. A simple atomic_inc_return()
    is all that is needed, for both other CPUs and NMIs. No need for
    a spinlock: if one CPU is running the dump, no other CPU needs
    to do it too.

    The tracing_on variable is turned off and not turned back on. The original
    code did this, but it wasn't pretty. By just disabling this variable
    we get the result of not seeing traces that happen between crashes.

    For sysrq-z, it doesn't get turned on, but the user can always write
    a '1' to the tracing_on file. If they are using sysrq-z, then they should
    know about tracing_on.

    The new code is much easier to read and less error prone. No more
    deadlock possibility when an NMI triggers here.

    Reported-by: zhangwei(Jovi)
    Cc: stable@vger.kernel.org
    Cc: Thomas Gleixner
    Cc: Peter Zijlstra
    Cc: Frederic Weisbecker
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • TRACE_MAX_PRINT macro is defined, but is not used.

    Link: http://lkml.kernel.org/r/513D8421.4070404@huawei.com

    Signed-off-by: zhangwei(Jovi)
    Signed-off-by: Steven Rostedt

    zhangwei(Jovi)
     
    Use pr_warn_once() instead of an open-coded implementation.

    Link: http://lkml.kernel.org/r/513D8419.20400@huawei.com

    Signed-off-by: zhangwei(Jovi)
    Signed-off-by: Steven Rostedt

    zhangwei(Jovi)
     

15 Mar, 2013

15 commits

  • The function trace_clock() calls "local_clock()" which is exactly
    the same clock that perf uses. I'm not sure why perf doesn't call
    trace_clock(), as trace_clock() doesn't have any users.

    But now it does. As trace_clock() calls local_clock() like perf does,
    I added the trace_clock "perf" option that uses trace_clock().

    Now the ftrace buffers can use the same clock as perf uses. This
    will be useful when perf starts reading the ftrace buffers, and will
    be able to interleave them with the same clock data.
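
    A minimal sketch of selecting the new clock (standard debugfs tracing
    path assumed):

    echo perf > /sys/kernel/debug/tracing/trace_clock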

    Cc: Thomas Gleixner
    Cc: Peter Zijlstra
    Cc: Frederic Weisbecker
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
    Add a simple trace clock called "uptime" for those who are
    interested in the uptime of the trace. It uses jiffies, as that's
    the safest method: other uptime clocks grab seqlocks, which could
    cause a deadlock if taken from an event or function tracer.
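
    Likewise, a minimal sketch (the currently selected clock is shown in
    brackets when reading trace_clock):

    cat /sys/kernel/debug/tracing/trace_clock     # lists available clocks, e.g. [local] ... uptime
    echo uptime > /sys/kernel/debug/tracing/trace_clock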

    Requested-by: Mauro Carvalho Chehab
    Cc: Thomas Gleixner
    Cc: Frederic Weisbecker
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • Currently, the only way to stop the latency tracers from doing function
    tracing is to fully disable the function tracer from the proc file
    system:

    echo 0 > /proc/sys/kernel/ftrace_enabled

    This is a big hammer approach as it disables function tracing for
    all users. This includes kprobes, perf, stack tracer, etc.

    Instead, create a function-trace option that the latency tracers can
    check to determine whether they should enable function tracing or not.
    This option can be set or cleared even while the tracer is active
    and the tracers will disable or enable function tracing depending
    on how the option was set.

    Instead of using the proc file, disable latency function tracing with

    echo 0 > /debug/tracing/options/function-trace

    Cc: Thomas Gleixner
    Cc: Peter Zijlstra
    Cc: Frederic Weisbecker
    Cc: Clark Williams
    Cc: John Kacur
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
    Although trace_dump_stack() already skips three functions in
    the call to stack trace, which gets the stack trace to start
    at the caller of the function, the caller may want to skip some
    more too (as it may have helper functions).

    Add a skip argument to the trace_dump_stack() that lets the caller
    skip back tracing functions that it doesn't care about.

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • echo 'schedule:snapshot:1' > /debug/tracing/set_ftrace_filter

    This will cause the scheduler to trigger a snapshot the next time
    it's called (you can use any function that's not called by NMI).

    Even though it triggers only once, you still need to remove it with:

    echo '!schedule:snapshot:0' > /debug/tracing/set_ftrace_filter

    The :1 can be left off for the first command:

    echo 'schedule:snapshot' > /debug/tracing/set_ftrace_filter

    But this will cause all calls to schedule to trigger a snapshot, and
    it must be removed without the ':0':

    echo '!schedule:snapshot' > /debug/tracing/set_ftrace_filter

    as adding a "count" is (internally) a different operation.

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • Add alloc_snapshot() and free_snapshot() to allocate and free the
    snapshot buffer respectively, and use these to remove duplicate
    code.

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • Add EXPORT_SYMBOL_GPL() to let the tracing_snapshot() functions be
    called from modules.

    Also add a test to see if the snapshot was called from NMI context;
    if so, just warn in the tracing buffer and return.

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
    There are a few places where ftrace uses trace_printk() for internal
    use, but this requires per-context (normal, softirq, irq, NMI) buffers
    to keep things lockless. trace_puts() does not, as it can
    write the string directly into the ring buffer. Make an internal helper
    for trace_puts() and have the internal functions use that.

    This way the extra context buffers are not used.

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
    trace_printk() is extremely fast and very handy, as it can be
    used in any context (including NMIs!). But it still requires scanning
    the fmt string to parse the args. Even trace_bprintk() requires
    a scan to know which args will be saved, although it doesn't copy the
    format string itself.

    Several times trace_printk() has no args at all and wastes CPU cycles
    scanning the fmt string.

    Adding trace_puts() allows the developer to use an even faster
    tracing method that only saves the pointer to the string in the
    ring buffer, without doing any format parsing at all. This will
    help remove even more of the "Heisenbug" effect when debugging.

    Also fixed up the F_printk()s for the ftrace internal bprint and print events.

    Cc: Thomas Gleixner
    Cc: Peter Zijlstra
    Cc: Frederic Weisbecker
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • If debugging the kernel, and the developer wants to use
    tracing_snapshot() in places where tracing_snapshot_alloc() may
    be difficult (or more likely, the developer is lazy and doesn't
    want to bother with tracing_snapshot_alloc() at all), then adding

    alloc_snapshot

    to the kernel command line will tell ftrace to allocate
    the snapshot buffer (if configured) when it allocates the main
    tracing buffer.
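
    A small sketch of what that looks like (boot-loader syntax varies; the
    check afterwards just reads the snapshot file):

    # kernel command line:  ... alloc_snapshot
    # after boot, the spare buffer is already allocated:
    cat /sys/kernel/debug/tracing/snapshot   # no longer reports "* Snapshot is freed *"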

    I also noticed that ring_buffer_expanded and tracing_selftest_disabled
    had inconsistent use of boolean "true" and "false" with "0" and "1".
    I cleaned that up too.

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • Move the tracing startup selftest code into its own function and
    when not enabled, always have that function succeed.

    This makes the register_tracer() function much more readable.

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • The new snapshot feature is quite handy. It's a way for the user
    to take advantage of the spare buffer that, until then, only
    the latency tracers used to "snapshot" the buffer when it hit
    a max latency. Now users can trigger a "snapshot" manually when
    some condition is hit in a program. But a snapshot currently can
    not be triggered by a condition inside the kernel.

    With the addition of tracing_snapshot() and tracing_snapshot_alloc(),
    snapshots can now be taken when a condition is hit and the
    developer wants to snapshot the case without stopping the trace.

    Note, any snapshot will overwrite the old one, so take care
    in how this is done.

    These new functions are to be used like tracing_on(), tracing_off()
    and trace_printk() are. That is, they should never be called
    in the mainline Linux kernel. They are solely for the purpose
    of debugging.

    The tracing_snapshot() will not allocate a buffer, but it is
    safe to be called from any context (except NMIs). But if a
    snapshot buffer isn't allocated when it is called, it will write
    to the live buffer, complaining about the lack of a snapshot
    buffer, and then stop tracing (giving you the "permanent snapshot").

    tracing_snapshot_alloc() will allocate the snapshot buffer if
    it was not already allocated and then take the snapshot. This routine
    *may sleep*, and must be called from context that can sleep.
    The allocation is done with GFP_KERNEL and not atomic.

    If you need a snapshot in an atomic context, say in early boot,
    then it is best to call the tracing_snapshot_alloc() before then,
    where it will allocate the buffer, and then you can use the
    tracing_snapshot() anywhere you want and still get snapshots.

    Cc: Hiraku Toyooka
    Cc: Thomas Gleixner
    Cc: Peter Zijlstra
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • Add a ref count to the trace_array structure and prevent removal
    of instances that have open descriptors.
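
    A rough sketch of the behaviour this gives (instance name arbitrary;
    the exact error returned is typically -EBUSY):

    cd /sys/kernel/debug/tracing/instances
    mkdir foo
    exec 3< foo/trace    # holding a descriptor pins the instance
    rmdir foo            # now refused while the descriptor is open
    exec 3<&-            # drop the reference
    rmdir foo            # succeeds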

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • Add the per_cpu directory to the created tracing instances:

    cd /sys/kernel/debug/tracing/instances
    mkdir foo
    ls foo/per_cpu/cpu0
    buffer_size_kb snapshot_raw trace trace_pipe_raw
    snapshot stats trace_pipe

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • Add the "snapshot" file to the the multi-buffer instances.

    cd /sys/kernel/debug/tracing/instances
    mkdir foo
    ls foo
    buffer_size_kb buffer_total_size_kb events free_buffer set_event
    snapshot trace trace_clock trace_marker trace_options trace_pipe
    tracing_on
    cat foo/snapshot
    # tracer: nop
    #
    #
    # * Snapshot is freed *
    #
    # Snapshot commands:
    # echo 0 > snapshot : Clears and frees snapshot buffer
    # echo 1 > snapshot : Allocates snapshot buffer, if not already allocated.
    # Takes a snapshot of the main buffer.
    # echo 2 > snapshot : Clears snapshot buffer (but does not allocate)
    # (Doesn't have to be '2' works with any number that
    # is not a '0' or '1')

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)