02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boiler plate text.

    This patch is based on work done by Thomas Gleixner and Kate Stewart and
    Philippe Ombredanne.
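
    As a sketch of what the tag looks like in practice (the file path in the
    comment is hypothetical, not one from the patch):

```c
// SPDX-License-Identifier: GPL-2.0
/*
 * Hypothetical file (e.g. somewhere under drivers/): the single tag
 * above is the entire license notice. Files with no explicit license
 * text fall under the kernel's default GPL-2.0, and the one-line SPDX
 * identifier replaces the full boilerplate comment block.
 */
```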

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset of
    the use cases:
    - file had no licensing information in it,
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information.

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier should be applied
    to a file was done in a spreadsheet of side-by-side results from the
    output of two independent scanners (ScanCode & Windriver) producing SPDX
    tag:value files created by Philippe Ombredanne. Philippe prepared the
    base worksheet, and did an initial spot review of a few thousand files.

    The 4.13 kernel was the starting point of the analysis with 60,537 files
    assessed. Kate Stewart did a file by file comparison of the scanner
    results in the spreadsheet to determine which SPDX license identifier(s)
    should be applied to the file. She confirmed any determination that was
    not immediately clear with lawyers working with the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging were:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
      lines of source.
    - File already had some variant of a license header in it (even if <5
      lines).
    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

29 Jun, 2017

1 commit

  • When doing the following command:

    # echo ":mod:kvm_intel" > /sys/kernel/tracing/stack_trace_filter

    it triggered a crash.

    This happened with the clean up of probes, which required all callers of
    the regex function (doing ftrace filtering) to have ops->private be a
    pointer to a trace_array. But for the stack tracer, that is not the case.

    Allow for the ops->private to be NULL, and change the function command
    callbacks to handle the trace_array pointer being NULL as well.
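
    A minimal userspace sketch of the shape of the fix, with types heavily
    simplified and the helper name hypothetical (not the kernel's actual API):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch: the function-command callbacks must tolerate ops->private
 * being NULL, which is the case for the stack tracer's ftrace_ops. */
struct trace_array { int dummy; };
struct ftrace_ops { void *private; };

static struct trace_array top_trace_array;  /* stand-in for the global array */

/* Hypothetical helper: resolve the trace_array for an ops, falling
 * back to the top-level array when no instance pointer is attached. */
static struct trace_array *ops_trace_array(struct ftrace_ops *ops)
{
    if (!ops->private)          /* stack tracer: no trace_array attached */
        return &top_trace_array;
    return (struct trace_array *)ops->private;
}
```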

    Fixes: d2afd57a4b96 ("tracing/ftrace: Allow instances to have their own function probes")
    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     

21 Apr, 2017

10 commits

  • Have the stacktrace function trigger probe trigger stack traces within the
    instance that it was added to in set_ftrace_filter.

    ># cd /sys/kernel/debug/tracing
    ># mkdir instances/foo
    ># cd instances/foo
    ># echo schedule:stacktrace:1 > set_ftrace_filter
    ># cat trace
    # tracer: nop
    #
    # entries-in-buffer/entries-written: 1/1 #P:4
    #
    #                              _-----=> irqs-off
    #                             / _----=> need-resched
    #                            | / _---=> hardirq/softirq
    #                            || / _--=> preempt-depth
    #                            ||| /     delay
    #           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
    #              | |       |   ||||       |         |
    <idle>-0 [001] .N.2 202.585010:
    =>
    => schedule
    => schedule_preempt_disabled
    => do_idle
    => cpu_startup_entry
    => start_secondary
    => verify_cpu

    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     
  • Have the traceon/off function probe triggers affect only the instance they
    are set in. This required making the trace_on/off accessible for other files
    in the tracing directory.

    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     
  • With the redesign of the registration and execution of the function probes
    (triggers), data can now be passed from the setup of the probe to the probe
    callers that are specific to the trace_array it is on. Although all probes
    still only affect the toplevel trace array, this change will allow
    instances to have their own probes separated from other instances and the
    top array.

    That is, something like the stacktrace probe can be set to trace only in an
    instance and not the toplevel trace array. This isn't implemented yet, but
    this change sets the ground work for the change.

    When a probe callback is triggered (someone writes the probe format into
    set_ftrace_filter), it calls register_ftrace_function_probe() passing in
    init_data that will be used to initialize the probe. Then for every matching
    function, register_ftrace_function_probe() will call the probe_ops->init()
    function with the init data that was passed to it, as well as the address of
    a placeholder that is associated with the probe and the instance. The first
    occurrence will have NULL in the pointer, and the init() function will then
    initialize it. If other probes are added, or more functions are part of the
    probe, the placeholder will be passed to the init() function with the data
    it was initialized to the last time.

    Then this placeholder is passed to each of the other probe_ops functions,
    where it can be used in the function callback. When the probe_ops free()
    function is called, it can be called either with the rip of the function
    that is being removed from the probe, or zero, indicating that there are no
    more functions attached to the probe and the placeholder is about to be
    freed. This gives the probe_ops a way to free the data it assigned to the
    placeholder if it was allocated during the first init call.
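
    The placeholder lifecycle described above can be sketched in userspace C
    (names and signatures are illustrative, not the kernel's exact probe_ops
    API):

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch of the placeholder ("data") lifecycle. */
struct probe_data { int hits; };

/* init(): the first matching function sees *data == NULL and
 * allocates; later calls receive the already-initialized pointer. */
static int probe_init(void **data, void *init_data)
{
    (void)init_data;            /* would seed the allocation here */
    if (!*data) {
        *data = calloc(1, sizeof(struct probe_data));
        if (!*data)
            return -1;
    }
    return 0;
}

/* free(): called with the ip of a removed function, or 0 meaning
 * "no functions left", at which point the placeholder is released. */
static void probe_free(void **data, unsigned long ip)
{
    if (ip == 0 && *data) {
        free(*data);
        *data = NULL;
    }
}
```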

    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     
  • In order to eventually have each trace_array instance have its own unique
    set of function probes (triggers), the trace array needs to hold the ops and
    the filters for the probes.

    This is the first step to accomplish this. Instead of having the private
    data of the probe ops point to the trace_array, create a separate list that
    the trace_array holds. There's only one private_data for a probe, but we
    need one per trace_array. The probe ftrace_ops will be dynamically created
    for each instance, instead of being static.

    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     
  • Pass the trace_array associated with an ftrace_probe_ops into the probe_ops
    func(), init() and free() functions. The trace_array is the descriptor that
    describes a tracing instance. This will help create the infrastructure that
    will allow having function probes unique to tracing instances.

    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     
  • Add a linked list to the trace_array to hold func probes that are
    registered. Currently, all function probes are the same for all instances
    as before; that is, only the top level trace_array holds the function
    probes. But this lays the ground work to have function probes attached to
    individual instances, and have the event trigger only affect events in the
    given instance. That work is still to be done.

    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     
  • Currently unregister_ftrace_function_probe_func() is a void function. It
    does not give any feedback if an error occurred or no item was found to
    remove and nothing was done.

    Change it to return status and success if it removed something. Also update
    the callers to return that feedback to the user.
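
    A hedged sketch of the idea, with the table layout and function name
    invented purely for illustration:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Sketch: instead of a void return, report -ENOENT when no matching
 * probe was found, so callers can pass the error back to the user
 * (for example, a write() that fails instead of silently succeeding). */
static int unregister_by_name(const char **table, size_t n, const char *name)
{
    for (size_t i = 0; i < n; i++) {
        if (table[i] && strcmp(table[i], name) == 0) {
            table[i] = NULL;    /* "remove" the matching entry */
            return 0;           /* success: something was removed */
        }
    }
    return -ENOENT;             /* nothing matched: tell the caller */
}
```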

    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     
  • No users of the function probes use the data field anymore. Remove it, and
    change the init function to take a void *data parameter instead of a
    void **data, because the init will just get the data that the registering
    function received, and there's no state after it is called.

    The other functions for ftrace_probe_ops still take the data parameter, but
    it will currently only be passed NULL. It will stay as a parameter for
    future data to be passed to these functions.

    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     
  • As the data pointer for individual ips will soon be removed and no longer
    passed to the callback function probe handlers, convert the rest of the function
    trigger counters over to the new ftrace_func_mapper helper functions.

    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     
  • In preparation to cleaning up the probe function registration code, the
    "data" parameter will eventually be removed from the probe->func() call.
    Instead it will receive its own "ops" function, in which it can set up its
    own data that it needs to map.

    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     

09 Mar, 2016

1 commit

  • When I was updating the ftrace_stress test of ltp, I encountered
    a strange phenomenon. Execute the following steps:

    echo nop > /sys/kernel/debug/tracing/current_tracer
    echo 0 > /sys/kernel/debug/tracing/options/funcgraph-cpu
    bash: echo: write error: Invalid argument

    check dmesg:
    [ 1024.903855] nop_test_refuse flag set to 0: we refuse.Now cat trace_options to see the result

    The reason is that the trace option test will randomly set up a trace
    option under tracing/options no matter what the current_tracer is,
    but set_tracer_option always uses the set_flag callback
    from the current_tracer. This patch adds a pointer to tracer_flags
    and makes it point to the tracer it belongs to. When the option is
    set up, the set_flag of the right tracer will be used no matter
    what the current_tracer is.

    The old dummy_tracer_flags was shared by all the tracers which
    don't have a tracer_flags of their own, making it unusable for saving
    a back-pointer to a tracer. So remove it and use a dynamically allocated
    dummy tracer_flags for tracers needing one; as a result, no
    tracers share tracer_flags, so remove the sharing-check code.

    Saving the current tracer in trace_option_dentry is not a good option,
    as it may waste memory when the debug/trace fs is mounted more than once.
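
    The dispatch change can be sketched like this (types pared down to the
    essentials; graph_set_flag is a stand-in, not the kernel's callback):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the fix: tracer_flags carries a back-pointer to its
 * owning tracer, so option writes dispatch to that tracer's
 * set_flag() instead of whatever current_tracer happens to be. */
struct tracer;

struct tracer_flags {
    unsigned long val;
    struct tracer *trace;       /* new: the tracer these flags belong to */
};

struct tracer {
    const char *name;
    struct tracer_flags *flags;
    int (*set_flag)(unsigned int bit, int set);
};

static int graph_set_flag(unsigned int bit, int set)
{
    (void)bit;
    (void)set;
    return 0;                   /* this tracer accepts the option */
}

static int set_tracer_option(struct tracer_flags *tf,
                             unsigned int bit, int set)
{
    /* Dispatch via the owner, not a global current_tracer. */
    return tf->trace->set_flag(bit, set);
}
```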

    Link: http://lkml.kernel.org/r/1457444222-8654-1-git-send-email-chuhu@redhat.com

    Signed-off-by: Chunyu Hu
    [ Fixed up function tracer options to work with the change ]
    Signed-off-by: Steven Rostedt

    Chunyu Hu
     

19 Nov, 2014

1 commit

  • The function probe counting for traceon and traceoff suffered a race
    condition where if the probe was executing on two or more CPUs at the
    same time, it could decrement the counter by more than one when
    disabling (or enabling) the tracer only once.

    The way the traceon and traceoff probes are supposed to work is that
    they disable (or enable) tracing once per count. If a user were to
    echo 'schedule:traceoff:3' into set_ftrace_filter, then when the
    schedule function was called, it would disable tracing. But the count
    should only be decremented once (to 2). Then if the user enabled tracing
    again (via the tracing_on file), the next call to schedule would disable
    tracing again and the count would be decremented to 1.

    But if multiple CPUs called schedule at the same time, it is possible
    that the count would be decremented more than once because of the
    simple "count--" used.

    By reading the count into a local variable and using memory barriers
    we can guarantee that the count would only be decremented once per
    disable (or enable).

    The stack trace probe had a similar race, but here the stack trace will
    decrement for each time it is called. But this had the read-modify-write
    race, where it could stack trace more than the number of times
    that was specified. In this case we use a cmpxchg to stack trace only the
    number of times specified.
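
    A userspace sketch of the cmpxchg loop, using C11 atomics in place of the
    kernel's cmpxchg (function name illustrative):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Read the count, refuse to go below zero, and only commit the
 * decrement if no other CPU changed the value in between; a plain
 * "count--" could lose or double-apply decrements across CPUs. */
static bool update_count(_Atomic long *count)
{
    long old = atomic_load(count);

    do {
        if (old == 0)
            return false;       /* budget exhausted: do nothing */
        /* on failure, 'old' is reloaded with the current value */
    } while (!atomic_compare_exchange_weak(count, &old, old - 1));

    return true;                /* exactly one decrement per success */
}
```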

    The dump probes can still use the old "update_count()" function as
    they only run once, and that is controlled by the dump logic
    itself.

    Link: http://lkml.kernel.org/r/20141118134643.4b550ee4@gandalf.local.home

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

14 Nov, 2014

1 commit

  • Using seq_printf to print a simple string or a single character is a
    lot more expensive than it needs to be, since seq_puts and seq_putc
    exist.

    These patches do

    seq_printf(m, s) -> seq_puts(m, s)
    seq_printf(m, "%s", s) -> seq_puts(m, s)
    seq_printf(m, "%c", c) -> seq_putc(m, c)

    Subsequent patches will simplify further.

    Link: http://lkml.kernel.org/r/1415479332-25944-2-git-send-email-linux@rasmusvillemoes.dk

    Signed-off-by: Rasmus Villemoes
    Signed-off-by: Steven Rostedt

    Rasmus Villemoes
     

30 Apr, 2014

1 commit

  • The ring buffer now has a built-in way to wake up readers
    when there's data, using irq_work such that it is safe to do it
    in any context. But the tracers were still using the old "poor man's"
    wait polling that checks every 1/10 of a second to see if it
    should wake up a waiter. This makes the latency for a wake up
    excruciatingly long. No need to do that anymore.

    Completely remove the different wait_poll types from the tracers
    and have them all use the default one now.

    Reported-by: Johannes Berg
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

22 Apr, 2014

1 commit

  • Instead of having a list of global functions that are called,
    as only one global function is allowed to be enabled at a time, there's
    no reason to have a list.

    Instead, simply have all the users of the global ops, use the global ops
    directly, instead of registering their own ftrace_ops. Just switch what
    function is used before enabling the function tracer.

    This removes a lot of code as well as the complexity involved with it.

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

17 Apr, 2014

1 commit

  • With the restructuring of the function tracer working with instances, the
    "top level" buffer is a bit special, as the function tracing is mapped
    to the same set of filters. This is done by using a "global_ops" descriptor
    and having "set_ftrace_filter" and "set_ftrace_notrace" map to it.

    When an instance is created, it creates the same files, but they are for
    the local instance and not the global_ops.

    The issue is that the local instance creation shares some code with
    the global instance one, and we end up trying to create the top level
    "set_ftrace_*" files twice. On boot up, we get an error like this:

    Could not create debugfs 'set_ftrace_filter' entry
    Could not create debugfs 'set_ftrace_notrace' entry

    The reason they failed to be created was because they were created
    twice, and the second time gives this error as you can not create the
    same file twice.

    Reported-by: Borislav Petkov
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

19 Jul, 2013

1 commit

  • The selftests for the function and function graph tracers are defined as
    __init, as they are only executed at boot up. The "tracer" structs
    that are associated with those tracers are not set up as __init, as they
    are used after boot. To stop mismatch warnings, those structures
    need to be annotated with __ref_data.

    Currently, the tracer structures are defined to __read_mostly, as they
    do not really change. But in the future they should be converted to
    consts, but that will take a little work because they have a "next"
    pointer that gets updated when they are registered. That will have to
    wait till the next major release.

    Link: http://lkml.kernel.org/r/1373596735.17876.84.camel@gandalf.local.home

    Reported-by: kbuild test robot
    Reported-by: Chen Gang
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

12 Jun, 2013

2 commits

  • Add the "cpudump" command to have the current CPU ftrace buffer dumped
    to console if a function is hit. This is useful when debugging a
    triple fault, where you have an idea of a function that is called
    just before the triple fault occurs, and can tell ftrace to dump its
    content out to the console before it continues.

    This differs from the "dump" command as it only dumps the content of
    the ring buffer for the currently executing CPU, and does not show
    the contents of the other CPUs.

    Format is:

    <function>:cpudump

    echo 'bad_address:cpudump' > /debug/tracing/set_ftrace_filter

    To remove this:

    echo '!bad_address:cpudump' > /debug/tracing/set_ftrace_filter

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • Add the "dump" command to have the ftrace buffer dumped to console if
    a function is hit. This is useful when debugging a triple fault,
    where you have an idea of a function that is called just before the
    triple fault occurs, and can tell ftrace to dump its content out
    to the console before it continues.

    Format is:

    <function>:dump

    echo 'bad_address:dump' > /debug/tracing/set_ftrace_filter

    To remove this:

    echo '!bad_address:dump' > /debug/tracing/set_ftrace_filter

    Requested-by: Luis Claudio R. Goncalves
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

15 Mar, 2013

6 commits

  • Add a function probe that will cause a stack trace to be traced in
    the ring buffer when the given function(s) are called.

    format is:

    <function>:stacktrace[:<count>]

    echo 'schedule:stacktrace' > /debug/tracing/set_ftrace_filter
    cat /debug/tracing/trace_pipe
    kworker/2:0-4329 [002] ...2 2933.558007:
    => kthread
    => ret_from_fork
    <idle>-0 [000] .N.2 2933.558019:
    => rest_init
    => start_kernel
    => x86_64_start_reservations
    => x86_64_start_kernel
    kworker/2:0-4329 [002] ...2 2933.558109:
    => kthread
    => ret_from_fork
    [...]

    This can be set to only trace a specific amount of times:

    echo 'schedule:stacktrace:3' > /debug/tracing/set_ftrace_filter
    cat /debug/tracing/trace_pipe
    -58 [003] ...2 841.801694:
    => kthread
    => ret_from_fork
    <idle>-0 [001] .N.2 841.801697:
    => start_secondary
    -2059 [001] ...2 841.801736:
    => wait_for_common
    => wait_for_completion
    => flush_work
    => tty_flush_to_ldisc
    => input_available_p
    => n_tty_poll
    => tty_poll
    => do_select
    => core_sys_select
    => sys_select
    => system_call_fastpath

    To remove these:

    echo '!schedule:stacktrace' > /debug/tracing/set_ftrace_filter
    echo '!schedule:stacktrace:0' > /debug/tracing/set_ftrace_filter

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • The function tracing probes that trigger traceon or traceoff can be
    set to unlimited, or given a count of # of times to execute.

    By separating these two types of probes, we can then use the dynamic
    ftrace function filtering directly, and remove the brute force
    "check if this function called is my probe" routines in ftrace.

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • The only thing ftrace_trace_onoff_unreg() does is a strcmp()
    against the cmd parameter to determine what op to unregister. But
    this compare is also done after the location where this function is
    called (and returns). By moving the check for '!' to unregister after
    the strcmp(), the callback function itself can just do the unregister
    and we can get rid of the helper function.
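
    The resulting callback shape can be sketched like this (enum and function
    names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* With the helper gone, the strcmp() on cmd picks the operation,
 * then the leading '!' on the glob decides register vs unregister. */
enum probe_action { PROBE_REGISTER, PROBE_UNREGISTER, PROBE_NOMATCH };

static enum probe_action onoff_callback(const char *glob, const char *cmd)
{
    if (strcmp(cmd, "traceon") != 0 && strcmp(cmd, "traceoff") != 0)
        return PROBE_NOMATCH;           /* not our command */

    if (glob && glob[0] == '!')
        return PROBE_UNREGISTER;        /* '!' check moved into callback */

    return PROBE_REGISTER;
}
```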

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • Remove some duplicate code and replace it with a helper function.
    This makes the code a bit cleaner.

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • Currently, the way the latency tracers and snapshot feature works
    is to have a separate trace_array called "max_tr" that holds the
    snapshot buffer. For latency tracers, this snapshot buffer is used
    to swap the running buffer with this buffer to save the current max
    latency.

    The only items needed for the max_tr are really just a copy of the buffer
    itself, the per_cpu data pointers, the time_start timestamp that states
    when the max latency was triggered, and the cpu that the max latency
    was triggered on. All other fields in trace_array are unused by the
    max_tr, making the max_tr mostly bloat.

    This change removes the max_tr completely, and adds a new structure
    called trace_buffer, that holds the buffer pointer, the per_cpu data
    pointers, the time_start timestamp, and the cpu where the latency occurred.

    The trace_array now has two trace_buffers, one for the normal trace and
    one for the max trace or snapshot. By doing this, not only do we remove
    the bloat from the max_tr, but instances of traces can now use their own
    snapshot feature, instead of only the top level global_trace having the
    snapshot feature and latency tracers for itself.
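
    The new layout can be sketched with simplified fields (names shortened,
    the swap helper is illustrative):

```c
#include <assert.h>

/* trace_array embeds two trace_buffers rather than dragging around
 * a second full trace_array (the old max_tr). */
struct ring_buffer;             /* opaque here */
struct trace_array_cpu;         /* per-cpu data, opaque here */

struct trace_buffer {
    struct ring_buffer *buffer;
    struct trace_array_cpu *data;       /* per_cpu pointers */
    unsigned long long time_start;      /* when the max latency hit */
    int cpu;                            /* CPU where it hit */
};

struct trace_array {
    struct trace_buffer trace_buffer;   /* the normal trace */
    struct trace_buffer max_buffer;     /* the max/snapshot trace */
};

/* Saving a new max latency is then just swapping buffer pointers. */
static void swap_max_buffer(struct trace_array *tr)
{
    struct ring_buffer *tmp = tr->trace_buffer.buffer;

    tr->trace_buffer.buffer = tr->max_buffer.buffer;
    tr->max_buffer.buffer = tmp;
}
```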

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • The global and max-tr currently use static per_cpu arrays for the CPU data
    descriptors. But in order to get new allocated trace_arrays, they need to
    be allocated per_cpu arrays. Instead of using the static arrays, switch
    the global and max-tr to use allocated data.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

24 Jan, 2013

1 commit

  • Dan's smatch found a compare bug: the result of
    trace_test_and_set_recursion() was compared to less than
    zero. If the function fails, it returns -1, but it was saved in
    an unsigned int, which will never be less than zero, and so the
    result of the test is ignored if a recursion did happen.

    Luckily this is the last of the recursion tests, as the
    infrastructure of ftrace would catch recursions before it
    got here, except for a few exceptions.
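
    The bug class is easy to reproduce in plain C; the stub below stands in
    for trace_test_and_set_recursion() (name and values illustrative):

```c
#include <assert.h>
#include <limits.h>

/* Stand-in: returns -1 on failure, a non-negative
 * recursion bit index on success. */
static int recursion_test_stub(int fail)
{
    return fail ? -1 : 3;
}
```

    Storing the -1 into an unsigned int wraps it to UINT_MAX, so a "< 0"
    check on the unsigned variable can never fire; a signed variable
    catches the failure as intended.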

    Reported-by: Dan Carpenter
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

23 Jan, 2013

1 commit

  • The function tracer had two different versions of function tracing.

    The disabling of irqs version and the preempt disable version.

    As function tracing is very intrusive and can cause nasty recursion
    issues, it has its own recursion protection. But the old method to
    do this was a flat layer. If it detected that a recursion was happening
    then it would just return without recording.

    This made the preempt version (much faster than the irq disabling one)
    not very useful, because if an interrupt were to occur after the
    recursion flag was set, the interrupt would not be traced at all,
    because every function that was traced would think it recursed on
    itself (due to the context it preempted setting the recursive flag).

    Now that we have a recursion flag for every context level, we
    no longer need to worry about that. We can disable preemption,
    set the current context recursion check bit, and go on. If an
    interrupt were to come along, it would check its own context bit
    and happily continue to trace.

    As the preempt version is faster than the irq disable version,
    there's no more reason to keep the preempt version around.
    And the irq disable version still had an issue with missing
    out on tracing NMI code.

    Remove the irq disable function tracer version and have the
    preempt disable version be the default (and only version).

    Before this patch, we had this from running:

    # echo function > /debug/tracing/current_tracer
    # for i in `seq 10`; do ./hackbench 50; done
    Time: 12.028
    Time: 11.945
    Time: 11.925
    Time: 11.964
    Time: 12.002
    Time: 11.910
    Time: 11.944
    Time: 11.929
    Time: 11.941
    Time: 11.924

    (average: 11.9512)

    Now we have:

    # echo function > /debug/tracing/current_tracer
    # for i in `seq 10`; do ./hackbench 50; done
    Time: 10.285
    Time: 10.407
    Time: 10.243
    Time: 10.372
    Time: 10.380
    Time: 10.198
    Time: 10.272
    Time: 10.354
    Time: 10.248
    Time: 10.253

    (average: 10.3012)

    a 13.8% savings!

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

14 Dec, 2012

1 commit

  • Pull trivial branch from Jiri Kosina:
    "Usual stuff -- comment/printk typo fixes, documentation updates, dead
    code elimination."

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial: (39 commits)
    HOWTO: fix double words typo
    x86 mtrr: fix comment typo in mtrr_bp_init
    propagate name change to comments in kernel source
    doc: Update the name of profiling based on sysfs
    treewide: Fix typos in various drivers
    treewide: Fix typos in various Kconfig
    wireless: mwifiex: Fix typo in wireless/mwifiex driver
    messages: i2o: Fix typo in messages/i2o
    scripts/kernel-doc: check that non-void fcts describe their return value
    Kernel-doc: Convention: Use a "Return" section to describe return values
    radeon: Fix typo and copy/paste error in comments
    doc: Remove unnecessary declarations from Documentation/accounting/getdelays.c
    various: Fix spelling of "asynchronous" in comments.
    Fix misspellings of "whether" in comments.
    eisa: Fix spelling of "asynchronous".
    various: Fix spelling of "registered" in comments.
    doc: fix quite a few typos within Documentation
    target: iscsi: fix comment typos in target/iscsi drivers
    treewide: fix typo of "suport" in various comments and Kconfig
    treewide: fix typo of "suppport" in various comments
    ...

    Linus Torvalds
     

01 Nov, 2012

2 commits

  • There are times during debugging that it is helpful to see traces of early
    boot functions. But the tracers are initialized at device_initcall()
    which is quite late during the boot process. Setting the kernel command
    line parameter ftrace=function will not show anything until the function
    tracer is initialized. This prevents being able to trace functions before
    device_initcall().

    There's no reason that the tracers need to be initialized so late in the
    boot process. Move them up to core_initcall() as they still need to come
    after early_initcall() which initializes the tracing buffers.

    Cc: Thomas Gleixner
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • Remove old string conversions and replace them with kstrto*().
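
    A userspace sketch of the kstrtoul() semantics the patch converts to:
    unlike bare strtoul()/simple_strtoul(), the whole string must parse as a
    number or an error is returned (function name is a stand-in, not the
    kernel's implementation):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Strict string-to-unsigned-long: rejects empty input, trailing
 * junk, and overflow, tolerating only a single trailing newline. */
static int kstrtoul_sketch(const char *s, unsigned int base,
                           unsigned long *res)
{
    char *end;
    unsigned long val;

    errno = 0;
    val = strtoul(s, &end, base);
    if (errno == ERANGE)
        return -ERANGE;         /* overflow */
    if (end == s)
        return -EINVAL;         /* no digits at all */
    if (*end == '\n')
        end++;                  /* tolerate a single trailing newline */
    if (*end != '\0')
        return -EINVAL;         /* trailing junk: reject */

    *res = val;
    return 0;
}
```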

    Link: http://lkml.kernel.org/r/20120926200838.GC1244@0x90.at

    Signed-off-by: Daniel Walter
    Signed-off-by: Steven Rostedt

    Daniel Walter