09 Jun, 2011

1 commit

  • The fix for the printk_formats of modules broke the
    printk_formats of trace_printks in the kernel.

    The pointer tracking what to show via the seq_file was only
    updated if the passed-in fmt was NULL, which happens only on the
    first iteration. The result was that the first format was shown
    every time instead of iterating through the available formats.
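
    A minimal sketch of the corrected lookup, assuming the helper in
    kernel/trace/trace_printk.c has roughly this shape (the module-format
    fallback is omitted):

        /* Derive the entry from *pos on every call, instead of only
         * when the passed-in fmt is NULL (the first iteration). */
        static const char **find_next(void *v, loff_t *pos)
        {
                int start_index;

                start_index = __stop___trace_bprintk_fmt -
                              __start___trace_bprintk_fmt;

                if (*pos < start_index)
                        return __start___trace_bprintk_fmt + *pos;

                return NULL; /* module formats are handled separately */
        }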

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

08 Jun, 2011

2 commits

  • Revert the commit that removed the disabling of interrupts around
    the initial modifying of mcount callers to nops, and update the comment.

    The original comment was outdated; it stated that interrupts were
    being disabled to protect against kstop_machine, which was required
    with the old ftrace daemon but is no longer the case.

    What the comment failed to mention was that interrupts needed to be
    disabled to keep an interrupt from preempting the modification of
    the code and then executing code that is only partially modified.

    Revert the commit and update the comment.
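
    A sketch of the restored pattern, assuming the nop conversion is
    driven from a setup path that calls ftrace_update_code():

        unsigned long flags;

        /* Keep interrupts from firing while the mcount call sites are
         * rewritten, so an interrupt cannot execute a half-modified
         * instruction. This is about preemption by interrupts, not
         * the long-dead kstop_machine daemon. */
        local_irq_save(flags);
        ftrace_update_code(mod);
        local_irq_restore(flags);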

    Reported-by: Richard W.M. Jones
    Tested-by: Richard W.M. Jones
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • With gcc 4.6, the self test kprobe function:

    kprobe_trace_selftest_target()

    is optimized such that kallsyms does not list it. The kprobes
    test uses this function to insert a probe and test it. But
    it will fail the test if the function is not listed in kallsyms.

    Adding a __used annotation keeps the symbol in the kallsyms table.
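
    For reference, a sketch of the annotated target, assuming the
    simple arithmetic body used by the selftest:

        /* __used (from <linux/compiler.h>) tells gcc the symbol is
         * referenced even when it sees no callers, so the function
         * survives optimization and stays visible in kallsyms. */
        static __used int kprobe_trace_selftest_target(int a1, int a2,
                                                       int a3, int a4,
                                                       int a5, int a6)
        {
                return a1 + a2 + a3 + a4 + a5 + a6;
        }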

    Suggested-by: David Daney
    Cc: Masami Hiramatsu
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

07 Jun, 2011

1 commit

  • kernel/trace/ftrace.c: In function 'ftrace_regex_write.clone.15':
    kernel/trace/ftrace.c:2743:6: warning: 'ret' may be used uninitialized in this
    function
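
    The exact change is not shown here; a hedged sketch of the usual
    fix for this class of warning is to give 'ret' a defined value on
    every path through the function:

        ssize_t ret = 0;   /* initial value illustrative; ensures every
                              return path yields a defined result */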

    Signed-off-by: GuoWen Li
    Link: http://lkml.kernel.org/r/201106011918.47939.guowen.li.linux@gmail.com
    Signed-off-by: Steven Rostedt

    GuoWen Li
     

26 May, 2011

5 commits

  • Witold reported a reboot caused by the selftests of the dynamic function
    tracer. He sent me a config and I used ktest to do a config_bisect on it
    (as my config did not cause the crash). It pointed out that the problem
    config was CONFIG_PROVE_RCU.

    What happened was that if multiple callbacks are attached to the
    function tracer, we iterate a list of callbacks. Because the list is
    managed by synchronize_sched() and preempt_disable, the access to the
    pointers uses rcu_dereference_raw().

    When PROVE_RCU is enabled, the rcu_dereference_raw() calls some
    debugging functions, which happen to be traced. The tracing of the debug
    function would then call rcu_dereference_raw() which would then call the
    debug function and then... well you get the idea.

    I first wrote two different patches to solve this bug.

    1) add a __rcu_dereference_raw() that would not do any checks.
    2) add notrace to the offending debug functions.

    Both of these patches worked.

    Talking with Paul McKenney on IRC, he suggested adding recursion
    detection instead. This seemed the better solution, so I decided to
    implement it. As the task_struct already has a trace_recursion
    field to detect recursion in the ring buffer, and that field only
    ever holds a very small number, I decided to use the same variable
    to add flags that can detect recursion inside the function tracer
    infrastructure.

    I plan to change it so that the task struct bit can be checked in
    mcount, but as that requires changes to all archs, I will hold that off
    to the next merge window.
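
    A minimal sketch of the guard, assuming a flag bit in the existing
    trace_recursion field (the bit value and function body here are
    illustrative):

        #define TRACE_INTERNAL_BIT  (1 << 11)

        static void ftrace_ops_list_func(unsigned long ip,
                                         unsigned long parent_ip)
        {
                /* Already inside the list func: the debug functions we
                 * traced have recursed back into us, so bail out. */
                if (current->trace_recursion & TRACE_INTERNAL_BIT)
                        return;

                current->trace_recursion |= TRACE_INTERNAL_BIT;
                /* ... walk the registered ops, calling each callback ... */
                current->trace_recursion &= ~TRACE_INTERNAL_BIT;
        }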

    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Frederic Weisbecker
    Cc: Paul E. McKenney
    Link: http://lkml.kernel.org/r/1306348063.1465.116.camel@gandalf.stny.rr.com
    Reported-by: Witold Baryluk
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • Filesystems like Btrfs have some "ULL" macros, and when these macros
    are passed to a tracepoint's __print_symbolic(), there are 64-to-32-bit
    truncation warnings when compiling on a 32-bit box.
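
    A sketch of the 64-bit variant this adds, with a hypothetical flag
    name for illustration:

        /* __print_symbolic() works on unsigned long, which is 32 bits
         * on a 32-bit kernel; the _u64 variant keeps the full width. */
        __print_symbolic_u64(__entry->flags,
                { SOME_BTRFS_FLAG_ULL, "SOME_FLAG" })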

    Signed-off-by: Liu Bo
    Link: http://lkml.kernel.org/r/4DACE6E0.7000507@cn.fujitsu.com
    Signed-off-by: Steven Rostedt

    liubo
     
  • When dynamic ftrace is not configured, the ops->flags still needs
    to have its FTRACE_OPS_FL_ENABLED bit set in ftrace_startup().
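
    A sketch of the shape of the fix, assuming the !CONFIG_DYNAMIC_FTRACE
    stub of ftrace_startup():

        /* Without dynamic ftrace there is no code patching to do, but
         * the bit must still be set so the ops is seen as active. */
        ops->flags |= FTRACE_OPS_FL_ENABLED;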

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • The self tests for the event tracer do not check whether function
    tracing was successfully activated. They need to do so before
    continuing the tests, otherwise the wrong errors may be reported.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • register_ftrace_function() returns an error code on failure, except
    when the call to ftrace_startup() fails. Add an error return to
    ftrace_startup() if it fails to start, allowing register_ftrace_function()
    to return a proper error value.
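
    A sketch of the propagation in register_ftrace_function(), assuming
    the startup result is now simply handed back (locking elided):

        ret = ftrace_startup(ops, 0);   /* now returns an error code */
        /* unlock, then: */
        return ret;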

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

20 May, 2011

1 commit

  • …kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip

    * 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (60 commits)
    sched: Fix and optimise calculation of the weight-inverse
    sched: Avoid going ahead if ->cpus_allowed is not changed
    sched, rt: Update rq clock when unthrottling of an otherwise idle CPU
    sched: Remove unused parameters from sched_fork() and wake_up_new_task()
    sched: Shorten the construction of the span cpu mask of sched domain
    sched: Wrap the 'cfs_rq->nr_spread_over' field with CONFIG_SCHED_DEBUG
    sched: Remove unused 'this_best_prio arg' from balance_tasks()
    sched: Remove noop in alloc_rt_sched_group()
    sched: Get rid of lock_depth
    sched: Remove obsolete comment from scheduler_tick()
    sched: Fix sched_domain iterations vs. RCU
    sched: Next buddy hint on sleep and preempt path
    sched: Make set_*_buddy() work on non-task entities
    sched: Remove need_migrate_task()
    sched: Move the second half of ttwu() to the remote cpu
    sched: Restructure ttwu() some more
    sched: Rename ttwu_post_activation() to ttwu_do_wakeup()
    sched: Remove rq argument from ttwu_stat()
    sched: Remove rq->lock from the first half of ttwu()
    sched: Drop rq->lock from sched_exec()
    ...

    * 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    sched: Fix rt_rq runtime leakage bug

    Linus Torvalds
     

19 May, 2011

13 commits

  • Add some basic sanity tests for multiple users of the function
    tracer at startup.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • Since users of the function tracer can now pick and choose which
    functions they want to trace agnostically from other users of the
    function tracer, we need to pass the ops struct to the ftrace_set_filter()
    functions.

    The functions ftrace_set_global_filter() and ftrace_set_global_notrace()
    are added to keep the old filter functions, which are used to modify
    the generic function tracers.
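
    A sketch of the resulting API split (argument names assumed):

        /* Per-ops filtering: callers pass their own ftrace_ops. */
        ftrace_set_filter(&my_ops, buf, len, reset);
        ftrace_set_notrace(&my_ops, buf, len, reset);

        /* Old behavior, now explicit: modify the global function tracer. */
        ftrace_set_global_filter(buf, len, reset);
        ftrace_set_global_notrace(buf, len, reset);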

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • Now that functions may be selected individually, it only makes sense
    that we should allow dynamically allocated trace structures to
    be traced. This will allow perf to allocate a ftrace_ops structure
    at runtime and use it to pick and choose which functions that
    structure will trace.

    Note, a dynamically allocated ftrace_ops will always be called
    indirectly instead of being called directly from the mcount in
    entry.S. This is because there's no safe way to prevent mcount
    from being preempted before calling the function, unless we
    modify every arch's entry.S to do so (not likely). Thus, dynamically
    allocated ftrace_ops will now be called by ftrace_ops_list_func(),
    which loops through the allocated ops whenever more than one op
    is allocated at a time. This loop is protected with a
    preempt_disable.

    To determine whether an ftrace_ops structure is dynamically allocated,
    a new utility function called core_kernel_data() was added to
    kernel/extable.c; it returns 1 if the address is between _sdata
    and _edata.
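
    A sketch of the helper, matching the description above:

        /* Returns 1 when addr points into the kernel's static data
         * section; anything outside is assumed dynamically allocated. */
        int core_kernel_data(unsigned long addr)
        {
                if (addr >= (unsigned long)_sdata &&
                    addr < (unsigned long)_edata)
                        return 1;
                return 0;
        }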

    Cc: Paul E. McKenney
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • ftrace_ops that are registered to trace functions can now be
    agnostic to each other with respect to which functions they trace.
    Each ops has its own hash of the functions it wants to trace and a
    hash of the functions it does not want to trace. An empty hash of
    functions to trace denotes that all functions should be traced
    except those in the notrace hash.
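
    A sketch of the per-ops decision, assuming helpers along these lines:

        /* Trace ip if the filter hash is empty (trace everything) or
         * contains ip, and the notrace hash does not contain ip. */
        static int ftrace_ops_test(struct ftrace_ops *ops, unsigned long ip)
        {
                return (!ops->filter_hash->count ||
                        ftrace_lookup_ip(ops->filter_hash, ip)) &&
                       !ftrace_lookup_ip(ops->notrace_hash, ip);
        }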

    Cc: Paul E. McKenney
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • When a hash is modified and might be in use, we need to perform a
    synchronize_sched() RCU operation on it, as the hashes will soon be
    used directly in the function tracer callback.
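
    A hedged sketch of the update sequence (assignment details
    simplified):

        /* Publish the new hash, then wait for all preempt-disabled
         * readers (the function tracer callbacks) to finish before
         * freeing the old one. */
        old_hash = ops->filter_hash;
        rcu_assign_pointer(ops->filter_hash, new_hash);
        synchronize_sched();
        free_ftrace_hash(old_hash);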

    Cc: Paul E. McKenney
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • This is a step towards each ops structure defining its own set of
    functions to trace. As the current code with pids and the like is
    specific to the global_ops, it is restructured to be used with the
    global ops.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • In order to allow different ops to enable different functions,
    the ftrace_startup() and ftrace_shutdown() functions need the
    ops parameter passed to them.
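
    A hedged sketch of the resulting prototypes (return types assumed;
    a later commit above adds an error return to ftrace_startup()):

        static void ftrace_startup(struct ftrace_ops *ops, int command);
        static void ftrace_shutdown(struct ftrace_ops *ops, int command);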

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • Add the enabled_functions file that is used to show all the
    functions that have been enabled for tracing as well as their ref
    counts. This helps in seeing whether any function has been
    registered and which functions are being traced.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • Every function has its own record that stores the instruction
    pointer and flags for the function to be traced. There are only
    two flags: enabled and free. The enabled flag states that tracing
    for the function has been enabled (actively traced), and the free
    flag states that the record no longer points to a function and can
    be used by new functions (loaded modules).

    These flags are now moved to the most significant bits of the flags
    field (actually just the top two bits). The rest of the bits
    (30 bits) are now used as a ref counter. Every time a tracer
    registers functions to trace, those functions have their counters
    incremented.

    When tracing is enabled, to determine if a function should be traced,
    the counter is examined, and if it is non-zero it is set to trace.

    When a ftrace_ops is registered to trace functions, its hashes
    are examined. If the ftrace_ops filter_hash count is zero, then
    all functions are set to be traced, otherwise only the functions
    in the hash are to be traced. The exception to this is if a function
    is also in the ftrace_ops notrace_hash. Then that function's counter
    is not incremented for this ftrace_ops.
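
    A sketch of the resulting layout, assuming a 32-bit flags value:

        /* Top two bits stay flags; the low 30 bits count how many
         * registered ops want this function traced. */
        #define FTRACE_FL_ENABLED   (1 << 30)
        #define FTRACE_FL_FREE      (1 << 31)
        #define FTRACE_REF_MAX      ((1 << 30) - 1)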

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • When filtering, allocate a hash to insert the function records.
    After the filtering is complete, assign it to the ftrace_ops structure.

    This allows the ftrace_ops structure to have a much smaller array of
    hash buckets instead of wasting a lot of memory.

    A read only empty_hash is created to be the minimum size that any ftrace_ops
    can point to.

    Creating a new hash involves the following steps:

    o Allocate a default hash.
    o Walk the function records, assigning the filtered records to the hash.
    o Allocate a new hash with the appropriate number of buckets.
    o Move the entries from the default hash to the new hash.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • Combine the filter and notrace hashes to be accessed by a single entity,
    the global_ops. The global_ops is a ftrace_ops structure that is passed
    to different functions that can read or modify the filtering of the
    function tracer.

    The ftrace_ops structure was modified to hold filter and notrace
    hashes so that later patches may allow each ftrace_ops to have its
    own set of rules for which functions may be filtered.
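
    A sketch of the modified structure (other fields elided):

        struct ftrace_ops {
                ftrace_func_t           func;
                struct ftrace_ops       *next;
                unsigned long           flags;
        #ifdef CONFIG_DYNAMIC_FTRACE
                struct ftrace_hash      *notrace_hash;
                struct ftrace_hash      *filter_hash;
        #endif
        };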

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • When multiple users are allowed to have their own set of functions
    to trace, having the FTRACE_FL_FILTER flag will not be enough to
    handle the accounting of those users. Each user will need their own
    set of functions.

    Replace the FTRACE_FL_FILTER with a filter_hash instead. This is
    temporary until the rest of the function filtering accounting
    gets in.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • To prepare for the accounting system that will allow multiple users
    of the function tracer, having FTRACE_FL_NOTRACE as a flag in the
    dyn_ftrace record does not make sense.

    All ftrace_ops will soon have a hash of functions they should trace
    and not trace. Making a global hash of functions not to trace eases
    this transition.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

07 May, 2011

1 commit

  • This partially reverts commit e6e1e2593592a8f6f6380496655d8c6f67431266.

    That commit changed the structure layout of the trace structure, which
    in turn broke PowerTOP (1.9x generation) quite badly.

    I appreciate not wanting to expose the variable in question, and
    PowerTOP was not using it, so I've replaced the variable with just a
    padding field - that way if in the future a new field is needed it can
    just use this padding field.

    Signed-off-by: Arjan van de Ven
    Signed-off-by: Linus Torvalds

    Arjan van de Ven
     

30 Apr, 2011

9 commits

  • The code used for matching functions is almost identical between the
    normal selection of functions and the :mod: feature of
    set_ftrace_notrace.

    Consolidate the two users into one function.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • There are three locations that perform almost identical functions in order
    to update the ftrace_trace_function (the ftrace function variable that gets
    called by mcount).

    Consolidate these into a single function called update_ftrace_function().

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • The updating of a function record is moved to a single function. This will allow
    us to add specific changes in one location for both modules and kernel
    functions.

    Later patches will determine if the function record itself needs to be updated
    (which enables the mcount caller), or just the ftrace_ops needs the update.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • Since we disable all function tracer processing if we detect that a
    modification of an instruction has failed, we do not need to track
    that the record has failed. No more ftrace processing is allowed,
    and the FTRACE_FL_CONVERTED flag is pointless.

    The FTRACE_FL_CONVERTED flag was used to denote records that were
    successfully converted from mcount calls into nops. But if a single
    record fails, all of ftrace is disabled.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • Since we disable all function tracer processing if we detect that a
    modification of an instruction has failed, we do not need to track
    that the record has failed. No more ftrace processing is allowed,
    and the FTRACE_FL_FAILED flag is pointless.

    Removing this flag simplifies some of the code, but some ftrace_disabled
    checks needed to be added or move around a little.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • The failures file in the debugfs tracing directory would list the
    functions that failed to convert when the old, dead ftrace daemon
    tried to update code but failed. Since this code is now dead along
    with the daemon, the failures file is useless. Remove it.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • The disabling of interrupts around ftrace_update_code() was used
    to protect against the evil ftrace daemon from years past. But that
    daemon has long been killed. It is safe to keep interrupts enabled
    while updating the initial mcount into nops.

    The ftrace_mutex is also held which keeps other users at bay.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • Let FTRACE_WARN_ON() be used as a stand-alone statement or inside a
    conditional: if (FTRACE_WARN_ON(x))
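
    A sketch of the macro shape that allows both uses (body
    illustrative):

        /* Evaluate once, disable ftrace on failure, and hand the
         * result back so the macro works inside an if (). */
        #define FTRACE_WARN_ON(cond)                    \
        ({                                              \
                int ___r = !!(cond);                    \
                if (WARN_ON(___r))                      \
                        ftrace_kill();                  \
                ___r;                                   \
        })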

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • If function tracing is enabled, a read of the filter files will
    cause a call to stop_machine() to update the function trace sites.
    stop_machine() should only be called on a write.
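
    A sketch of the shape of the fix in the release path (surrounding
    code assumed):

        /* Only rewrite the trace sites when the file was opened for
         * writing; a plain read must not trigger stop_machine(). */
        if (file->f_mode & FMODE_WRITE)
                ftrace_run_update_code(FTRACE_ENABLE_CALLS);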

    Cc: stable@kernel.org
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

24 Apr, 2011

1 commit

  • Neil Brown pointed out that lock_depth somehow escaped the BKL
    removal work. Let's get rid of it now.

    Note that the perf scripting utilities still have a bunch of
    code for dealing with common_lock_depth in tracepoints; I have
    left that in place in case anybody wants to use that code with
    older kernels.

    Suggested-by: Neil Brown
    Signed-off-by: Jonathan Corbet
    Cc: Arnd Bergmann
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Linus Torvalds
    Cc: Andrew Morton
    Link: http://lkml.kernel.org/r/20110422111910.456c0e84@bike.lwn.net
    Signed-off-by: Ingo Molnar

    Jonathan Corbet