15 Dec, 2020

1 commit

  • Disable the ftrace selftests when any tracer (kernel command line options
    like ftrace=, trace_events=, kprobe_events=, and boot-time tracing)
    starts running, because the selftests can disturb it.

    Currently ftrace= and trace_events= are checked, but kprobe_events uses
    a different flag, and boot-time tracing was not checked at all. Unify
    the disabled flag and have all of these boot-time tracing features set
    it.
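    The unified flag described above can be sketched in userspace C; the
    helper name mirrors the kernel's, but this is a simplified stand-in,
    not the real implementation. Each boot-time tracing feature calls the
    helper when it starts, so the selftests only ever consult one flag:

```c
#include <stdio.h>

/* Simplified stand-in for the kernel's unified selftest-disable flag. */
static int tracing_selftest_disabled;

/* Any boot-time tracing feature (ftrace=, trace_events=, kprobe_events=,
 * boot-time tracing) calls this when it starts running. */
static void disable_tracing_selftest(const char *reason)
{
	if (!tracing_selftest_disabled) {
		tracing_selftest_disabled = 1;
		printf("Disabling ftrace selftests due to %s\n", reason);
	}
}
```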

    This also fixes warnings in the kprobe-event selftest
    (CONFIG_FTRACE_STARTUP_TEST=y and CONFIG_KPROBE_EVENTS=y) with boot-time
    tracing (ftrace.event.kprobes.EVENT.probes) like below:

    [ 59.803496] trace_kprobe: Testing kprobe tracing:
    [ 59.804258] ------------[ cut here ]------------
    [ 59.805682] WARNING: CPU: 3 PID: 1 at kernel/trace/trace_kprobe.c:1987 kprobe_trace_self_tests_ib
    [ 59.806944] Modules linked in:
    [ 59.807335] CPU: 3 PID: 1 Comm: swapper/0 Not tainted 5.10.0-rc7+ #172
    [ 59.808029] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.13.0-1ubuntu1 04/01/204
    [ 59.808999] RIP: 0010:kprobe_trace_self_tests_init+0x5f/0x42b
    [ 59.809696] Code: e8 03 00 00 48 c7 c7 30 8e 07 82 e8 6d 3c 46 ff 48 c7 c6 00 b2 1a 81 48 c7 c7 7
    [ 59.812439] RSP: 0018:ffffc90000013e78 EFLAGS: 00010282
    [ 59.813038] RAX: 00000000ffffffef RBX: 0000000000000000 RCX: 0000000000049443
    [ 59.813780] RDX: 0000000000049403 RSI: 0000000000049403 RDI: 000000000002deb0
    [ 59.814589] RBP: ffffc90000013e90 R08: 0000000000000001 R09: 0000000000000001
    [ 59.815349] R10: 0000000000000001 R11: 0000000000000000 R12: 00000000ffffffef
    [ 59.816138] R13: ffff888004613d80 R14: ffffffff82696940 R15: ffff888004429138
    [ 59.816877] FS: 0000000000000000(0000) GS:ffff88807dcc0000(0000) knlGS:0000000000000000
    [ 59.817772] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    [ 59.818395] CR2: 0000000001a8dd38 CR3: 0000000002222000 CR4: 00000000000006a0
    [ 59.819144] Call Trace:
    [ 59.819469] ? init_kprobe_trace+0x6b/0x6b
    [ 59.819948] do_one_initcall+0x5f/0x300
    [ 59.820392] ? rcu_read_lock_sched_held+0x4f/0x80
    [ 59.820916] kernel_init_freeable+0x22a/0x271
    [ 59.821416] ? rest_init+0x241/0x241
    [ 59.821841] kernel_init+0xe/0x10f
    [ 59.822251] ret_from_fork+0x22/0x30
    [ 59.822683] irq event stamp: 16403349
    [ 59.823121] hardirqs last enabled at (16403359): [] console_unlock+0x48e/0x580
    [ 59.824074] hardirqs last disabled at (16403368): [] console_unlock+0x3f6/0x580
    [ 59.825036] softirqs last enabled at (16403200): [] __do_softirq+0x33a/0x484
    [ 59.825982] softirqs last disabled at (16403087): [] asm_call_irq_on_stack+0x10
    [ 59.827034] ---[ end trace 200c544775cdfeb3 ]---
    [ 59.827635] trace_kprobe: error on probing function entry.

    Link: https://lkml.kernel.org/r/160741764955.3448999.3347769358299456915.stgit@devnote2

    Fixes: 4d655281eb1b ("tracing/boot: Add kprobe event support")
    Cc: Ingo Molnar
    Cc: stable@vger.kernel.org
    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt (VMware)

    Masami Hiramatsu
     

14 Nov, 2020

1 commit

  • In preparation for having the arguments of a function passed to callbacks
    attached to functions by default, change the default callback prototype to
    receive a struct ftrace_regs as the fourth parameter instead of a pt_regs.

    Callbacks that set the FL_SAVE_REGS flag in their ftrace_ops flags will
    now need to get the pt_regs via the ftrace_get_regs() helper. If that
    helper is called from a callback whose ftrace_ops did not set
    FL_SAVE_REGS, it will return NULL.

    This will allow the ftrace_regs to hold just enough to get the parameters
    and stack pointer, without the worry that callbacks may receive a pt_regs
    that is not completely filled.
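    The contract can be sketched with mock types (these are illustrative
    stand-ins, not the real kernel definitions): the helper hands back the
    pt_regs only when full registers were actually saved, and NULL otherwise.

```c
#include <stddef.h>

/* Mock types for illustration only -- not the real kernel definitions. */
struct pt_regs { unsigned long ip, sp; };

struct ftrace_regs {
	struct pt_regs regs;    /* fully populated only with FL_SAVE_REGS */
	int full_regs_saved;    /* stand-in for the arch's "complete" marker */
};

/* Sketch of the contract: return the pt_regs only when the callback's
 * ftrace_ops requested FL_SAVE_REGS; otherwise return NULL. */
static struct pt_regs *ftrace_get_regs(struct ftrace_regs *fregs)
{
	if (!fregs || !fregs->full_regs_saved)
		return NULL;
	return &fregs->regs;
}
```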

    Acked-by: Peter Zijlstra (Intel)
    Reviewed-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     

06 Nov, 2020

1 commit

  • Now that all callbacks are recursion safe, reverse the meaning of the
    RECURSION flag and rename it from RECURSION_SAFE to simply RECURSION.
    Now only callbacks that request recursion protection will get the added
    trampoline that provides it.

    Also remove the outdated comment about "PER_CPU" when determining to
    use the ftrace_ops_assist_func.

    Link: https://lkml.kernel.org/r/20201028115613.742454631@goodmis.org
    Link: https://lkml.kernel.org/r/20201106023547.904270143@goodmis.org

    Cc: Peter Zijlstra
    Cc: Ingo Molnar
    Cc: Josh Poimboeuf
    Cc: Jiri Kosina
    Cc: Masami Hiramatsu
    Cc: Andrew Morton
    Cc: Jonathan Corbet
    Cc: Sebastian Andrzej Siewior
    Cc: Miroslav Benes
    Cc: Kamalesh Babulal
    Cc: Petr Mladek
    Cc: linux-doc@vger.kernel.org
    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     

02 Nov, 2020

1 commit

  • When an interrupt or NMI comes in and switches the context, there is a
    delay before preempt_count() reflects the update. Because preempt_count()
    is used to detect recursion, each context has its own bit that gets set
    when tracing starts, and if that bit is already set, the call is
    considered a recursion and the function exits. But if this happens in the
    window where the context has changed but preempt_count() has not yet been
    updated, the call will be incorrectly flagged as a recursion.

    To handle this case, create another bit called TRANSITION and test it if
    the current context bit is already set. Flag the call as a recursion if
    the TRANSITION bit is already set; if not, set it and continue. The
    TRANSITION bit will be cleared normally on the return of the function that
    set it, or, if the current context bit is clear, set that bit and clear
    the TRANSITION bit to allow for another transition between the current
    context and an even higher one.
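    The bit logic can be sketched as plain bit manipulation; the context and
    bit names here are made up for this sketch, not the kernel's actual enum
    values, but the TRANSITION fallback behaves as described above:

```c
/* Illustrative context bits -- names invented for this sketch. */
enum { BIT_NMI, BIT_IRQ, BIT_SIRQ, BIT_NORMAL, BIT_TRANSITION };

/* Sketch of the recursion test: if the bit for the current context is
 * already set, fall back to the TRANSITION bit once before declaring
 * recursion.  Returns the bit taken, or -1 on recursion. */
static int recursion_try_acquire(unsigned long *active, int ctx_bit)
{
	if (!(*active & (1UL << ctx_bit))) {
		*active |= 1UL << ctx_bit;
		/* Entered cleanly: clear TRANSITION so a later
		 * context-switch window can use it again. */
		*active &= ~(1UL << BIT_TRANSITION);
		return ctx_bit;
	}
	if (*active & (1UL << BIT_TRANSITION))
		return -1;		/* true recursion */
	*active |= 1UL << BIT_TRANSITION;
	return BIT_TRANSITION;		/* in a context-switch window */
}
```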

    Cc: stable@vger.kernel.org
    Fixes: edc15cafcbfa3 ("tracing: Avoid unnecessary multiple recursion checks")
    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     

14 Jan, 2020

1 commit

  • As we are working to remove the generic "ring_buffer" name that is used by
    both tracing and perf, the ring_buffer name for tracing will be renamed to
    trace_buffer, and perf's ring buffer will be renamed to perf_buffer.

    As there already exists a trace_buffer that is used by the trace_arrays, it
    needs to be first renamed to array_buffer.

    Link: https://lore.kernel.org/r/20191213153553.GE20583@krava

    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     

22 Apr, 2019

1 commit

  • The function_graph boot up self test emulates the tr->init() function in
    order to add a wrapper around the function graph tracer entry code to test
    for lock ups and such. But it does not emulate tr->reset(); it just calls
    the function_graph tracer's tr->reset() function, which will use its own
    fgraph_ops to unregister function tracing with. As the fgraph_ops is
    becoming more meaningful with the register_ftrace_graph() and
    unregister_ftrace_graph() functions, the two need to be the same. The
    emulated tr->init() uses its own fgraph_ops descriptor, which means that
    unregister_ftrace_graph() must be passed that same fgraph_ops, which the
    selftest currently does not do. By emulating tr->reset() as the selftest
    does tr->init(), it will be able to pass the same fgraph_ops descriptor
    to unregister_ftrace_graph() as it did to register_ftrace_graph().

    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     

09 Dec, 2018

1 commit

  • Currently, registering function graph tracing means passing in an entry
    and a return function. We need a way to associate those functions
    together so that the entry can determine whether to run the return hook.
    Having a structure that contains both functions will facilitate
    converting the code to be able to do so.

    This is similar to the way function hooks are enabled (it passes in
    ftrace_ops). Instead of passing in the functions to use, a single
    structure is passed in to the registering function.

    The unregister function is now passed in the fgraph_ops handle. When we
    allow more than one callback to the function graph hooks, this will let the
    system know which one to remove.
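    The shape of the API can be sketched as follows; the struct and function
    names mirror the kernel's, but the types are simplified stand-ins and the
    single-user bookkeeping is an assumption of this sketch. The key point is
    that both hooks travel in one descriptor, and the same descriptor is
    handed back to unregister:

```c
#include <stddef.h>

/* Simplified stand-in for the kernel's fgraph_ops descriptor. */
struct fgraph_ops {
	int  (*entryfunc)(unsigned long func);
	void (*retfunc)(unsigned long func);
};

static struct fgraph_ops *registered;

static int register_ftrace_graph(struct fgraph_ops *gops)
{
	if (registered)
		return -1;	/* single user in this sketch */
	registered = gops;
	return 0;
}

static int unregister_ftrace_graph(struct fgraph_ops *gops)
{
	if (registered != gops)
		return -1;	/* must pass the same handle back */
	registered = NULL;
	return 0;
}

static int my_entry(unsigned long func) { (void)func; return 1; }
static void my_ret(unsigned long func)  { (void)func; }
```

    The handle-matching check is also why the 22 Apr 2019 selftest fix above
    matters: unregistering with a different descriptor than the one
    registered would fail.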

    Reviewed-by: Joel Fernandes (Google)
    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     

18 Nov, 2017

1 commit

  • Pull tracing updates from

    - allow module init functions to be traced

    - clean up some events that are unused, or not used under certain configs
    (saves space)

    - clean up of trace histogram code

    - add support for preempt and interrupt enabled/disable events

    - other various clean ups

    * tag 'trace-v4.15' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (30 commits)
    tracing, thermal: Hide cpu cooling trace events when not in use
    tracing, thermal: Hide devfreq trace events when not in use
    ftrace: Kill FTRACE_OPS_FL_PER_CPU
    perf/ftrace: Small cleanup
    perf/ftrace: Fix function trace events
    perf/ftrace: Revert ("perf/ftrace: Fix double traces of perf on ftrace:function")
    tracing, dma-buf: Remove unused trace event dma_fence_annotate_wait_on
    tracing, memcg, vmscan: Hide trace events when not in use
    tracing/xen: Hide events that are not used when X86_PAE is not defined
    tracing: mark trace_test_buffer as __maybe_unused
    printk: Remove superfluous memory barriers from printk_safe
    ftrace: Clear hashes of stale ips of init memory
    tracing: Add support for preempt and irq enable/disable events
    tracing: Prepare to add preempt and irq trace events
    ftrace/kallsyms: Have /proc/kallsyms show saved mod init functions
    ftrace: Add freeing algorithm to free ftrace_mod_maps
    ftrace: Save module init functions kallsyms symbols for tracing
    ftrace: Allow module init functions to be traced
    ftrace: Add a ftrace_free_mem() function for modules to use
    tracing: Reimplement log2
    ...

    Linus Torvalds
     

02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boiler plate text.
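    As an illustration (the filename is hypothetical), a file under the
    kernel's default license gains a first line like this instead of the full
    boilerplate comment block:

```c
// SPDX-License-Identifier: GPL-2.0
/* example.c -- hypothetical file; the identifier above replaces the
 * full GPL-2.0 boilerplate text. */
```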

    This patch is based on work done by Thomas Gleixner and Kate Stewart and
    Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset of
    the use cases:
    - file had no licensing information in it,
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information.

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier to be applied to
    a file was done in a spreadsheet of side by side results from of the
    output of two independent scanners (ScanCode & Windriver) producing SPDX
    tag:value files created by Philippe Ombredanne. Philippe prepared the
    base worksheet, and did an initial spot review of a few 1000 files.

    The 4.13 kernel was the starting point of the analysis with 60,537 files
    assessed. Kate Stewart did a file by file comparison of the scanner
    results in the spreadsheet to determine which SPDX license identifier(s)
    to be applied to the file. She confirmed any determination that was not
    immediately clear with lawyers working with the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging were:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
    lines of source.
    - File already had some variant of a license header in it (even if <5
    lines).

    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

13 Oct, 2017

1 commit

  • After trace_selftest_startup_sched_switch is removed, trace_test_buffer()
    is only used sometimes, leading to this warning:

    kernel/trace/trace_selftest.c:62:12: error: 'trace_test_buffer' defined but not used [-Werror=unused-function]

    There is no simple #ifdef condition that captures well whether the
    function is in fact used or not, so marking it as __maybe_unused is
    probably the best way to shut up the warning. The function will then
    be silently dropped when there is no user.
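    The annotation can be sketched like this; in the kernel __maybe_unused
    expands to the compiler attribute shown (the demo function name is made
    up). The attribute suppresses the unused-function warning when no config
    actually references the function:

```c
/* Sketch: __maybe_unused expands to a compiler attribute that keeps
 * -Wunused-function quiet when the function has no callers under the
 * current config. */
#define __maybe_unused __attribute__((unused))

static int __maybe_unused trace_test_buffer_demo(void)
{
	return 0;	/* placeholder body; may have no callers */
}
```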

    Link: http://lkml.kernel.org/r/20171013142227.1273469-1-arnd@arndb.de

    Fixes: d8c4deee6dc6 ("tracing: Remove obsolete sched_switch tracer selftest")
    Signed-off-by: Arnd Bergmann
    Signed-off-by: Steven Rostedt (VMware)

    Arnd Bergmann
     

04 Oct, 2017

1 commit

  • Since commit 87d80de2800d087ea833cb79bc13f85ff34ed49f ("tracing: Remove
    obsolete sched_switch tracer"), the sched_switch tracer selftest is no
    longer used. This patch removes it.

    Link: http://lkml.kernel.org/r/20170909065517.22262-1-joelaf@google.com

    Cc: Ingo Molnar
    Cc: kernel-team@android.com
    Signed-off-by: Joel Fernandes
    Signed-off-by: Steven Rostedt (VMware)

    Joel Fernandes
     

02 Sep, 2017

1 commit

  • In the second iteration of trace_selftest_ops(), the error goto label is
    wrong in the case where trace_selftest_test_global_cnt is off. On error,
    this leaks the dynamically allocated ops.

    Cc: stable@vger.kernel.org
    Fixes: 95950c2e ("ftrace: Add self-tests for multiple function trace users")
    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     

02 Mar, 2017

1 commit


12 Oct, 2014

1 commit

  • Pull tracing fixes from Steven Rostedt:
    "Seems that Peter Zijlstra added a new check that is making old code
    scream nasty warnings:

    WARNING: CPU: 0 PID: 91 at kernel/sched/core.c:7253 __might_sleep+0x9a/0x378()
    do not call blocking ops when !TASK_RUNNING; state=1 set at [] event_test_thread+0x48/0x93
    Call Trace:
    __might_sleep+0x9a/0x378
    down_read+0x26/0x98
    exit_signals+0x27/0x1c2
    do_exit+0x193/0x10bd
    kthread+0x156/0x156
    ret_from_fork+0x7a/0xb0

    These are triggered by some self tests that run at start up when
    configured in. Although the code is technically correct, they are a
    little sloppy and not very robust. They work now because they run at
    boot up and the tests do not call anything that might trigger a
    spurious wake up. But that doesn't mean those tests won't change in
    the future.

    It's best to clean them now to make sure the tests used to test the
    internal workings of the system don't cause breakage themselves.

    This also quiets the warnings made by the new checks"

    * tag 'trace-3.18-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
    tracing: Clean up scheduling in trace_wakeup_test_thread()
    tracing: Robustify wait loop

    Linus Torvalds
     

09 Oct, 2014

1 commit

  • Peter's new debugging tool triggers when tasks exit with !TASK_RUNNING.
    The code in trace_wakeup_test_thread() also has a single schedule() call
    that should be encompassed by a loop.

    This cleans up the code a little to make it a bit more robust and
    also makes the thread exit properly in the TASK_RUNNING state.

    Link: http://lkml.kernel.org/p/20141008135216.76142204@gandalf.local.home

    Reported-by: Peter Zijlstra
    Acked-by: Peter Zijlstra
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

13 Sep, 2014

1 commit

  • The ftrace_enabled variable is set to zero in the self tests to keep
    delayed functions from being traced and messing with the checks. This
    only needs to be done while the checks are being performed; otherwise,
    if ftrace_enabled is off during calls back into the utility being
    tested, errors can occur and the tests can fail with false positives.

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

22 Apr, 2014

4 commits


13 Jan, 2014

1 commit

  • It is very likely that systems that want/need to use the new
    SCHED_DEADLINE policy also want to have the scheduling latency of
    the -deadline tasks under control.

    For this reason a new version of the scheduling wakeup latency,
    called "wakeup_dl", is introduced.

    As a consequence of applying this patch there will be three wakeup
    latency tracers:

    * "wakeup", that deals with all tasks in the system;
    * "wakeup_rt", that deals with -rt and -deadline tasks only;
    * "wakeup_dl", that deals with -deadline tasks only.
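    The task-eligibility rule for the three tracers can be sketched in C;
    the helper name is invented for this sketch, but the policy constants
    match the uapi scheduling policy values:

```c
#include <string.h>

/* Scheduling policy constants, matching the uapi values. */
#define SCHED_NORMAL   0
#define SCHED_FIFO     1
#define SCHED_RR       2
#define SCHED_DEADLINE 6

/* Sketch of which tasks each wakeup latency tracer considers. */
static int traced_by(const char *tracer, int policy)
{
	if (strcmp(tracer, "wakeup") == 0)
		return 1;				/* all tasks */
	if (strcmp(tracer, "wakeup_rt") == 0)
		return policy == SCHED_FIFO || policy == SCHED_RR ||
		       policy == SCHED_DEADLINE;	/* -rt and -deadline */
	if (strcmp(tracer, "wakeup_dl") == 0)
		return policy == SCHED_DEADLINE;	/* -deadline only */
	return 0;
}
```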

    Signed-off-by: Dario Faggioli
    Signed-off-by: Juri Lelli
    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/r/1383831828-15501-9-git-send-email-juri.lelli@gmail.com
    Signed-off-by: Ingo Molnar

    Dario Faggioli
     

02 Jul, 2013

1 commit

  • If the kernel command line ftrace filter parameters are set
    (ftrace_filter or ftrace_notrace), force the function self test to
    pass, with a warning why it was forced.

    If the user adds a filter to the kernel command line, it is assumed
    that they know what they are doing, and the self test should just not
    run instead of failing (which disables function tracing) or clearing
    the filter, as that will probably annoy the user.

    If the user did want the selftest to run, the message will tell them why
    it did not.

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

30 May, 2013

1 commit

  • The branch selftest calls trace_test_buffer(), but with the new code
    it expects the first parameter to be a pointer to a struct trace_buffer.
    All self tests were changed but the branch selftest was missed.

    This caused either a crash or failed test when the branch selftest was
    enabled.

    Link: http://lkml.kernel.org/r/20130529141333.GA24064@localhost

    Reported-by: Fengguang Wu
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

16 Mar, 2013

1 commit

  • ftrace_dump() had a lot of issues. What ftrace_dump() does is, when
    ftrace_dump_on_oops is set (via a kernel parameter or sysctl), dump out
    the ftrace buffers to the console when either an oops, a panic, or a
    sysrq-z occurs.

    This was written a long time ago when ftrace was fragile to recursion.
    But it wasn't written well even for that.

    There's a possible deadlock that can occur if an ftrace_dump() is
    happening and an NMI triggers another dump. This is because it grabs a
    lock before checking if the dump ran.

    It also totally disables ftrace, and tracing, for no good reason.

    As the ring_buffer now checks if it is being read via an oops or NMI,
    where there's a chance that the buffer gets corrupted, it will disable
    itself. There is no need to have ftrace_dump() do the same.

    ftrace_dump() is now cleaned up to use an atomic counter to make sure
    only one dump happens at a time. A simple atomic_inc_return() is all
    that is needed to handle both other CPUs and NMIs. No need for a
    spinlock; if one CPU is running the dump, no other CPU needs to do it
    too.
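    The dump-once pattern can be sketched with C11 atomics (the function
    name is invented for this sketch; the kernel uses its own atomic_t API,
    but the increment-and-test logic is the same): the first caller to bump
    the counter from zero wins, and everyone else backs off.

```c
#include <stdatomic.h>

static atomic_int dump_running;

/* Sketch: returns 1 if this caller performed the dump, 0 if another
 * CPU or an NMI was already dumping. */
static int ftrace_dump_once(void)
{
	if (atomic_fetch_add(&dump_running, 1) != 0) {
		atomic_fetch_sub(&dump_running, 1);
		return 0;	/* someone else is already dumping */
	}
	/* ... dump the buffers to the console here ... */
	atomic_fetch_sub(&dump_running, 1);
	return 1;
}
```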

    The tracing_on variable is turned off and not turned on. The original
    code did this, but it wasn't pretty. By just disabling this variable
    we get the result of not seeing traces that happen between crashes.

    For sysrq-z, it doesn't get turned on, but the user can always write
    a '1' to the tracing_on file. If they are using sysrq-z, then they should
    know about tracing_on.

    The new code is much easier to read and less error prone. No more
    deadlock possibility when an NMI triggers here.

    Reported-by: zhangwei(Jovi)
    Cc: stable@vger.kernel.org
    Cc: Thomas Gleixner
    Cc: Peter Zijlstra
    Cc: Frederic Weisbecker
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

15 Mar, 2013

1 commit

  • Currently, the way the latency tracers and the snapshot feature work
    is to have a separate trace_array called "max_tr" that holds the
    snapshot buffer. For latency tracers, this snapshot buffer is used
    to swap the running buffer with this buffer to save the current max
    latency.

    The only items needed for the max_tr is really just a copy of the buffer
    itself, the per_cpu data pointers, the time_start timestamp that states
    when the max latency was triggered, and the cpu that the max latency
    was triggered on. All other fields in trace_array are unused by the
    max_tr, making the max_tr mostly bloat.

    This change removes the max_tr completely, and adds a new structure
    called trace_buffer, that holds the buffer pointer, the per_cpu data
    pointers, the time_start timestamp, and the cpu where the latency occurred.

    The trace_array, now has two trace_buffers, one for the normal trace and
    one for the max trace or snapshot. By doing this, not only do we remove
    the bloat from the max_trace but the instances of traces can now use
    their own snapshot feature and not have just the top level global_trace have
    the snapshot feature and latency tracers for itself.

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

23 Jan, 2013

2 commits

  • The function tracing recursion self test should not crash the machine
    if the recursion test fails. If it detects that function tracing is
    recursing when it should not be, then bail; don't go into an infinite
    recursive loop.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • The test that checks function recursion does things differently
    if the arch does not support all ftrace features. But that really
    doesn't make a difference with how the test runs, and either way
    the count variable should be 2 at the end.

    Currently the test wrongly fails for archs that don't support all
    the ftrace features.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

22 Jan, 2013

1 commit

  • Move the SAVE_REGS support flag into Kconfig and rename it to
    CONFIG_DYNAMIC_FTRACE_WITH_REGS. This also introduces
    CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS, which indicates that the
    architecture-dependent part of ftrace has code that saves full
    registers. CONFIG_DYNAMIC_FTRACE_WITH_REGS, on the other hand,
    indicates that the code is enabled.

    Link: http://lkml.kernel.org/r/20120928081516.3560.72534.stgit@ltc138.sdl.hitachi.co.jp

    Cc: Ingo Molnar
    Cc: Ananth N Mavinakayanahalli
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Frederic Weisbecker
    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt

    Masami Hiramatsu
     

02 Nov, 2012

2 commits

  • Have the ring buffer commit function use the irq_work infrastructure to
    wake up any waiters waiting on the ring buffer for new data. The irq_work
    was created for such a purpose, where doing the actual wake up at the
    time of adding data is too dangerous, as an event or function trace may
    be in the midst of the work queue locks and cause deadlocks. The irq_work
    will either delay the action to the next timer interrupt, or trigger an IPI
    to itself forcing an interrupt to do the work (in a safe location).

    With irq_work, all ring buffer commits can safely do wakeups, removing
    the need for the ring buffer commit "nowake" variants, which were used
    by events and function tracing. All commits can now safely use the
    normal commit, and the "nowake" variants can be removed.
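    The deferral pattern can be sketched without the kernel's irq_work API
    (the names below are stand-ins): the commit path may be inside locks the
    wakeup path needs, so it only marks work pending, and a safe context
    (the next timer interrupt or a self-IPI in the real code) performs the
    wakeup later.

```c
/* Simplified stand-ins for the irq_work deferral pattern. */
static int wakeup_pending;
static int waiters_woken;

static void rb_commit(void)
{
	/* ... write the event into the ring buffer ... */
	wakeup_pending = 1;	/* irq_work_queue() in the real code */
}

static void irq_work_tick(void)	/* runs in a safe context */
{
	if (wakeup_pending) {
		wakeup_pending = 0;
		waiters_woken++;	/* wake_up_all() in the real code */
	}
}
```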

    Cc: Peter Zijlstra
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • The tracing_enabled file has been deprecated as it never was able
    to serve its purpose well. The tracing_on file has taken over.
    Instead of having code to keep tracing_enabled, have the tracing_enabled
    file just set tracing_on, and remove the tracing_enabled variable.

    This allows us to remove the tracing_enabled file. The reason that
    the remove is in a different change set and not removed here is
    in case we find some lonely userspace tool that requires the file
    to exist. Then the removal patch will get reverted, but this one
    will not.

    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

07 Aug, 2012

1 commit

  • The wakeup_rt self test used msleep() calls to wait for real time
    tasks to wake up and run. On bare-metal hardware, this was enough as
    the scheduler should let the RT task run way before the non-RT task
    wakes up from the msleep(). If it did not, then that would mean the
    scheduler was broken.

    But when dealing with virtual machines, this is a different story.
    If the RT task wakes up on a VCPU, it's up to the host to decide when
    that task gets to schedule, which can be far behind the time that the
    non-RT task wakes up. In this case, the test would fail incorrectly.

    As we are not testing the scheduler, but instead the wake up tracing,
    we can use completions to wait and not depend on scheduler timings
    to see if events happen on time.
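    A toy completion illustrates the shape of the fix (this is not the
    kernel's implementation, just the semantics): the test waits for an
    explicit event instead of assuming the scheduler ran the RT task inside
    an msleep() window.

```c
/* Toy completion -- same shape as the kernel API, trivially simplified. */
struct completion { int done; };

static void init_completion(struct completion *c) { c->done = 0; }
static void complete(struct completion *c)        { c->done++; }

/* Consumes one completion event if available; a real wait would block
 * until complete() is called. */
static int try_wait_for_completion(struct completion *c)
{
	if (c->done) {
		c->done--;
		return 1;
	}
	return 0;
}
```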

    Link: http://lkml.kernel.org/r/1343663105.3847.7.camel@fedora

    Reported-by: Fengguang Wu
    Tested-by: Fengguang Wu
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

31 Jul, 2012

3 commits

  • Add selftests to test the save-regs functionality of ftrace.

    If the arch supports saving regs, then it will make sure that regs is
    at least not NULL in the callback.

    If the arch does not support saving regs, it makes sure that the
    registering of an ftrace_ops that requests saving regs fails. It then
    tests that the registering succeeds if the 'IF_SUPPORTED' flag is set,
    and makes sure that the regs passed to the function are then NULL.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • Add selftests to test that the function tracing recursion protection
    actually works. It also tests the case where an ftrace_ops states it
    will perform its own protection. Even then, the ftrace infrastructure
    may still provide protection if the arch does not support all features
    or another ftrace_ops is registered.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • As more users of the function tracer utility are being added, they do
    not always add the necessary recursion protection. To protect from
    function recursion due to tracing, if the callback ftrace_ops does not
    specifically specify that it protects against recursion (by setting
    the FTRACE_OPS_FL_RECURSION_SAFE flag), the list operation will be
    called by the mcount trampoline which adds recursion protection.

    If the flag is set, then the function will be called directly with no
    extra protection.

    Note, the list operation is called if more than one function callback
    is registered, or if the arch does not support all of the function
    tracer features.
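    The dispatch choice can be sketched as follows; the flag value and the
    function are invented for this sketch (the real flag is
    FTRACE_OPS_FL_RECURSION_SAFE inside a larger enum), but the decision
    logic follows the description above, including the note that the list
    operation is used whenever more than one callback is registered or the
    arch lacks full support:

```c
#include <string.h>

/* Illustrative flag value; the real FTRACE_OPS_FL_* enum differs. */
#define FL_RECURSION_SAFE (1UL << 0)

struct ftrace_ops_sketch {
	unsigned long flags;
};

/* Sketch: callbacks that do not declare themselves recursion-safe go
 * through the protected list operation; otherwise they may be called
 * directly, but only when they are the sole callback and the arch
 * supports all function tracer features. */
static const char *pick_trampoline(const struct ftrace_ops_sketch *ops,
				   int nr_registered_ops,
				   int arch_supports_all)
{
	if (nr_registered_ops > 1 || !arch_supports_all)
		return "list";		/* always via the list op */
	if (!(ops->flags & FL_RECURSION_SAFE))
		return "list";		/* list op adds recursion protection */
	return "direct";
}
```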

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

20 Jul, 2012

2 commits

  • Pass the pt_regs to the function tracer callback as its 4th parameter.

    Later patches that implement regs passing for the architectures will require
    having the ftrace_ops set the SAVE_REGS flag, which will tell the arch
    to take the time to pass a full set of pt_regs to the ftrace_ops callback
    function. If the arch does not support it then it should pass NULL.

    If an arch can pass full regs, then it should define:
    ARCH_SUPPORTS_FTRACE_SAVE_REGS to 1
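    The resulting callback shape can be sketched with simplified stand-in
    types (the struct definitions and the trampoline helper below are
    illustrative, not the kernel's): regs arrives as the 4th parameter and
    is NULL when the arch does not pass full registers.

```c
#include <stddef.h>

/* Simplified stand-ins for the kernel types -- illustration only. */
struct pt_regs { unsigned long ip; };
struct ftrace_ops { unsigned long flags; };

/* Callback shape after this change: ip, parent_ip, the registered ops,
 * and pt_regs as the 4th parameter (NULL without SAVE_REGS support). */
typedef void (*ftrace_func_t)(unsigned long ip, unsigned long parent_ip,
			      struct ftrace_ops *ops, struct pt_regs *regs);

static struct pt_regs *last_regs;

static void my_callback(unsigned long ip, unsigned long parent_ip,
			struct ftrace_ops *ops, struct pt_regs *regs)
{
	(void)ip; (void)parent_ip; (void)ops;
	last_regs = regs;	/* NULL if the arch passed no regs */
}

/* Trampoline sketch: forwards regs only when the arch provides them. */
static void call_traced(ftrace_func_t cb, struct ftrace_ops *ops,
			struct pt_regs *regs_or_null)
{
	cb(0x1000, 0x2000, ops, regs_or_null);
}
```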

    Link: http://lkml.kernel.org/r/20120702201821.019966811@goodmis.org

    Reviewed-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • Currently the function trace callback receives only the ip and parent_ip
    of the function that it traced. It would be more powerful to also return
    the ops that registered the function as well. This allows the same function
    to act differently depending on what ftrace_ops registered it.

    Link: http://lkml.kernel.org/r/20120612225424.267254552@goodmis.org

    Reviewed-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

19 May, 2011

2 commits


07 Jan, 2011

1 commit


23 Oct, 2010

1 commit