08 Nov, 2011

1 commit

  • In case the graph tracer (CONFIG_FUNCTION_GRAPH_TRACER) or even the
    function tracer (CONFIG_FUNCTION_TRACER) is not set, the latency
    tracers do not display a proper latency header.

    The involved/fixed latency tracers are:
    wakeup_rt
    wakeup
    preemptirqsoff
    preemptoff
    irqsoff

    The patch adds proper handling of the tracer configuration options for
    the latency tracers, and displays the correct header info accordingly.

    * The current output (for wakeup tracer) with both graph and function
    tracers disabled is:

    # tracer: wakeup
    #
    <idle>-0 0d.h5 1us+: 0:120:R + [000] 7: 0:R watchdog/0
    <idle>-0 0d.h5 3us+: ttwu_do_activate.clone.1 CPU#
    # / _-----=> irqs-off
    # | / _----=> need-resched
    # || / _---=> hardirq/softirq
    # ||| / _--=> preempt-depth
    # |||| / delay
    # cmd pid ||||| time | caller
    # \ / ||||| \ | /
    cat-1129 0d..4 1us : 1129:120:R + [000] 6: 0:R migration/0
    cat-1129 0d..4 2us+: ttwu_do_activate.clone.1 CPU#
    # / _-----=> irqs-off
    # | / _----=> need-resched
    # || / _---=> hardirq/softirq
    # ||| / _--=> preempt-depth
    # |||| / delay
    # cmd pid ||||| time | caller
    # \ / ||||| \ | /
    <idle>-0 1d.h5 1us+: 0:120:R + [001] 12: 0:R watchdog/1
    <idle>-0 1d.h5 3us : ttwu_do_activate.clone.1
    Cc: Ingo Molnar
    Signed-off-by: Jiri Olsa
    Signed-off-by: Steven Rostedt

    Jiri Olsa
     

15 Jun, 2011

1 commit

  • The display of absolute time and duration fields is based on the
    latency field. This was added during the irqsoff/wakeup tracers
    graph support changes.

    This caused confusion about which fields would be displayed for the
    function_graph tracer itself. So I'm removing this dependency, and
    adding absolute time and duration fields to the preemptirqsoff,
    preemptoff, irqsoff, and wakeup tracers.

    With the following commands:
    # echo function_graph > ./current_tracer
    # cat trace

    This is what it looked like before:
    # tracer: function_graph
    #
    # TIME CPU DURATION FUNCTION CALLS
    # | | | | | | | |
    0) 0.068 us | } /* page_add_file_rmap */
    0) | _raw_spin_unlock() {
    ...

    This is what it looks like now:
    # tracer: function_graph
    #
    # CPU DURATION FUNCTION CALLS
    # | | | | | | |
    0) 0.068 us | } /* add_preempt_count */
    0) 0.993 us | } /* vfsmount_lock_local_lock */
    ...

    For the preemptirqsoff, preemptoff, irqsoff, and wakeup tracers,
    this is what it looked like before:
    SNIP
    # _-----=> irqs-off
    # / _----=> need-resched
    # | / _---=> hardirq/softirq
    # || / _--=> preempt-depth
    # ||| / _-=> lock-depth
    # |||| /
    # CPU TASK/PID ||||| DURATION FUNCTION CALLS
    # | | | ||||| | | | | | |
    1) <idle>-0 | d..1 0.000 us | acpi_idle_enter_simple();
    ...

    This is what it looks like now:
    SNIP
    #
    # _-----=> irqs-off
    # / _----=> need-resched
    # | / _---=> hardirq/softirq
    # || / _--=> preempt-depth
    # ||| /
    # TIME CPU TASK/PID |||| DURATION FUNCTION CALLS
    # | | | | |||| | | | | | |
    19.847735 | 1) <idle>-0 | d..1 0.000 us | acpi_idle_enter_simple();
    ...

    Signed-off-by: Jiri Olsa
    Link: http://lkml.kernel.org/r/1307113131-10045-2-git-send-email-jolsa@redhat.com
    Signed-off-by: Steven Rostedt

    Jiri Olsa
     

19 May, 2011

1 commit

  • ftrace_ops that are registered to trace functions can now be
    agnostic to each other with respect to which functions they trace.
    Each ops has its own hash of the functions it wants to trace and
    a hash of the functions it does not want to trace. An empty hash
    for the functions to trace denotes that all functions not in the
    notrace hash should be traced.
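
    The rule is easy to state in code. A minimal userspace model of the
    per-ops decision (illustrative only; the real implementation in
    kernel/trace/ftrace.c uses actual hash tables):

    #include <stdbool.h>
    #include <stddef.h>

    struct func_hash {
            const unsigned long *ips;   /* function addresses */
            size_t count;               /* count == 0 => empty hash */
    };

    static bool hash_contains(const struct func_hash *h, unsigned long ip)
    {
            for (size_t i = 0; i < h->count; i++)
                    if (h->ips[i] == ip)
                            return true;
            return false;
    }

    /* Should this ops trace the function at 'ip'? */
    static bool ops_traces_ip(const struct func_hash *filter,
                              const struct func_hash *notrace,
                              unsigned long ip)
    {
            if (hash_contains(notrace, ip))
                    return false;               /* explicitly excluded */
            return filter->count == 0 ||        /* empty: trace everything */
                   hash_contains(filter, ip);   /* or explicitly included */
    }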

    Cc: Paul E. McKenney
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

19 Oct, 2010

1 commit

  • The function start_func_tracer() was incorrectly added in the
    #ifdef CONFIG_FUNCTION_TRACER condition, but is still used even
    when function tracing is not enabled.

    The calls to register_ftrace_function() and register_ftrace_graph()
    become nops (and their arguments are even ignored), thus there is
    no reason to hide start_func_tracer() when function tracing is
    not enabled.
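
    A sketch of the pattern (the real stubs live in include/linux/ftrace.h
    and differ in detail): with function tracing compiled out, registration
    collapses to a successful no-op, so callers like start_func_tracer()
    need no #ifdef of their own.

    #ifdef CONFIG_FUNCTION_TRACER
    int register_ftrace_function(struct ftrace_ops *ops);
    #else
    /* nop: succeeds, argument is ignored */
    #define register_ftrace_function(ops) ({ 0; })
    #endif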

    Reported-by: Ingo Molnar
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

18 Oct, 2010

2 commits

  • The wakeup tracer has three types of function tracers: the normal
    function tracer, function graph entry, and function graph return.
    Each of these uses a complex dance to prevent recursion and to decide
    whether or not to trace the data (depending on the wakeup_task variable).

    This patch moves the duplicate code into a single routine, to
    prevent future mistakes with modifying duplicate complex code.
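
    A userspace model of the shape of the refactor (names illustrative; the
    actual shared routine lives in kernel/trace/trace_sched_wakeup.c):

    #include <stdbool.h>

    static int wakeup_enabled;
    static int wakeup_cpu = -1;   /* CPU the watched task runs on */

    /* One prolog instead of three copies of the same checks. The
     * real code also handles recursion and disables preemption. */
    static bool func_prolog(int this_cpu)
    {
            return wakeup_enabled && this_cpu == wakeup_cpu;
    }

    static void wakeup_tracer_call(int cpu)
    {
            if (!func_prolog(cpu))
                    return;
            /* record a normal function-trace entry ... */
    }

    static void wakeup_graph_entry(int cpu)
    {
            if (!func_prolog(cpu))
                    return;
            /* record a graph entry (graph return looks the same) ... */
    }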

    Cc: Jiri Olsa
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • Add function graph support for wakeup latency tracer.
    The graph output is enabled by setting the 'display-graph'
    trace option.

    Signed-off-by: Jiri Olsa
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Jiri Olsa
     

21 Jul, 2010

1 commit

  • Documentation/trace/ftrace.txt says

    buffer_size_kb:

    This sets or displays the number of kilobytes each CPU
    buffer can hold. The tracer buffers are the same size
    for each CPU. The displayed number is the size of the
    CPU buffer and not total size of all buffers. The
    trace buffers are allocated in pages (blocks of memory
    that the kernel uses for allocation, usually 4 KB in size).
    If the last page allocated has room for more bytes
    than requested, the rest of the page will be used,
    making the actual allocation bigger than requested.
    ( Note, the size may not be a multiple of the page size
    due to buffer management overhead. )

    This can only be updated when the current_tracer
    is set to "nop".

    But this is incorrect: the current total memory consumption is
    'buffer_size_kb x CPUs x 2'.

    Why is there a factor-of-two difference? Because ftrace implicitly
    allocates a buffer for the max latency trace as well.

    That makes for a sad result when an admin wants to use a large buffer
    (for full logging and detailed analysis). For example, if an admin
    has a 24-CPU machine and writes 200MB to buffer_size_kb, the system
    consumes ~10GB of memory (200MB x 24 x 2). A ~5GB memory waste is
    usually unacceptable.
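
    The changelog's arithmetic, worked out as a trivial standalone check
    (nothing kernel-specific here):

    #include <stdio.h>

    int main(void)
    {
            unsigned long long per_cpu_kb = 200ULL * 1024; /* 200MB per CPU */
            unsigned int cpus = 24;
            /* x2: the implicit max-latency mirror buffer */
            unsigned long long total_mb = per_cpu_kb * cpus * 2 / 1024;
            printf("total = %llu MB\n", total_mb);  /* 9600 MB, ~10GB */
            return 0;
    }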

    Fortunately, almost all users don't use the max latency feature,
    so the max latency buffer can easily be disabled.

    This patch shrinks the max latency buffer when it is unnecessary.

    Signed-off-by: KOSAKI Motohiro
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    KOSAKI Motohiro
     

04 Jun, 2010

1 commit

  • The ftrace_preempt_disable/enable functions were added to address a
    recursive race caused by the function tracer. The function tracer
    traces all functions, which makes it easily susceptible to recursion.
    One area was preempt_enable(): this would call the scheduler, and
    the scheduler would call the function tracer, and loop.
    (Or so it was thought.)

    ftrace_preempt_disable/enable was made to protect against recursion
    inside the scheduler by storing the NEED_RESCHED flag. If the flag was
    set before ftrace_preempt_disable(), ftrace_preempt_enable() would not
    call schedule, on the reasoning that if it was set beforehand, a
    schedule would already have happened unless we were already inside
    the scheduler.
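
    For context, a sketch of the helpers being removed, close to what the
    old kernel/trace/trace.h carried:

    static inline int ftrace_preempt_disable(void)
    {
            int resched;

            resched = need_resched();
            preempt_disable_notrace();
            return resched;
    }

    static inline void ftrace_preempt_enable(int resched)
    {
            if (resched)
                    preempt_enable_no_resched_notrace();
            else
                    preempt_enable_notrace();
    }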

    This worked fine except in the case of SMP, where another task would set
    the NEED_RESCHED flag for a task on another CPU, and then kick off an
    IPI to trigger it. This could cause the NEED_RESCHED flag to be saved at
    ftrace_preempt_disable() but the IPI to arrive in the preempt
    disabled section. ftrace_preempt_enable() would then not call the
    scheduler because the flag was already set before entering the section.

    This bug would cause a missed preemption check and thus higher latencies.

    Investigating further, I found that the recursion caused by the function
    tracer was not due to schedule(), but due to preempt_schedule(). Now
    that preempt_schedule() is completely annotated with notrace, the
    recursion is no longer an issue.

    Reported-by: Thomas Gleixner
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

14 May, 2010

1 commit

  • This patch adds data to be passed to tracepoint callbacks.

    The created functions from DECLARE_TRACE() now need a mandatory data
    parameter. For example:

    DECLARE_TRACE(mytracepoint, int value, value)

    Will create the register function:

    int register_trace_mytracepoint((void(*)(void *data, int value))probe,
    void *data);

    As the first argument, all callbacks (probes) must take a (void *data)
    parameter. So a callback for the above tracepoint will look like:

    void myprobe(void *data, int value)
    {
    }

    The callback may choose to ignore the data parameter.

    This change allows callbacks to register a private data pointer along
    with the function probe.

    void mycallback(void *data, int value);

    register_trace_mytracepoint(mycallback, mydata);

    Then the mycallback() will receive the "mydata" as the first parameter
    before the args.

    A more detailed example:

    DECLARE_TRACE(mytracepoint, TP_PROTO(int status), TP_ARGS(status));

    /* In the C file */

    DEFINE_TRACE(mytracepoint, TP_PROTO(int status), TP_ARGS(status));

    [...]

    trace_mytracepoint(status);

    /* In a file registering this tracepoint */

    int my_callback(void *data, int status)
    {
            struct my_struct *my_data = data;
            [...]
    }

    [...]
    my_data = kmalloc(sizeof(*my_data), GFP_KERNEL);
    init_my_data(my_data);
    register_trace_mytracepoint(my_callback, my_data);

    The same callback can also be registered to the same tracepoint as long
    as the data registered is different. Note, the data must also be used
    to unregister the callback:

    unregister_trace_mytracepoint(my_callback, my_data);

    Because of the data parameter, tracepoints declared this way cannot
    be declared with no arguments. That is:

    DECLARE_TRACE(mytracepoint, TP_PROTO(void), TP_ARGS());

    will cause an error.

    If no arguments are needed, a new macro can be used instead:

    DECLARE_TRACE_NOARGS(mytracepoint);

    Since there are no arguments, the proto and args fields are left out.

    This is part of a series to make the tracepoint footprint smaller:

    text      data     bss     dec      hex    filename
    4913961   1088356  861512  6863829  68bbd5 vmlinux.orig
    4914025   1088868  861512  6864405  68be15 vmlinux.class
    4918492   1084612  861512  6864616  68bee8 vmlinux.tracepoint

    Again, this patch also increases the size of the kernel, but
    lays the ground work for decreasing it.

    v5: Fixed net/core/drop_monitor.c to handle these updates.

    v4: Moved the DECLARE_TRACE() DECLARE_TRACE_NOARGS out of the
    #ifdef CONFIG_TRACE_POINTS, since the two are the same in both
    cases. The __DECLARE_TRACE() is what changes.
    Thanks to Frederic Weisbecker for pointing this out.

    v3: Made all register_* functions require data to be passed and
    all callbacks to take a void * parameter as its first argument.
    This makes the calling functions comply with C standards.

    Also added more comments to the modifications of DECLARE_TRACE().

    v2: Made the DECLARE_TRACE() have the ability to pass arguments
    and added a new DECLARE_TRACE_NOARGS() for tracepoints that
    do not need any arguments.

    Acked-by: Mathieu Desnoyers
    Acked-by: Masami Hiramatsu
    Acked-by: Frederic Weisbecker
    Cc: Neil Horman
    Cc: David S. Miller
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

07 May, 2010

1 commit


15 Dec, 2009

3 commits


13 Sep, 2009

2 commits


10 Sep, 2009

1 commit

  • The wakeup tracer, when enabled, has its own function tracer.
    It only traces functions on the CPU where the task it is following
    is running. If a task is woken on one CPU but then migrates to another
    CPU before it runs, the latency tracer will start tracing functions
    on the other CPU.

    To find which CPU the task is on, the wakeup function tracer performs
    a task_cpu(wakeup_task). But to make sure the task does not disappear,
    it grabs the wakeup_lock, which is also taken when the task wakes up.
    By taking this lock, the function tracer does not need to worry about
    the task being freed while it checks its CPU.

    Jan Blunck found a problem with this approach on his 32 CPU box. When
    a task is being traced by the wakeup tracer, all functions take this
    lock. That means that on all 32 CPUs, each function call is taking
    this one lock to see if the task is on that CPU. This lock has just
    serialized all functions on all 32 CPUs. Needless to say, this caused
    major issues on that box. It would even lock up.

    This patch changes the wakeup latency tracer to insert a probe on the
    migrate task tracepoint. When a task changes the CPU it will run on,
    the probe takes note. Now the wakeup function tracer no longer needs
    to take the lock: it only compares the current CPU with a variable that
    holds the CPU the task is currently on. We don't worry about races, since
    it is OK to add or miss a function trace.
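
    A hedged sketch of the probe's shape (names follow the description
    above; the exact signature in the patch may differ):

    static struct task_struct *wakeup_task;
    static int wakeup_current_cpu;

    /* Called from the migrate-task tracepoint. */
    static void probe_wakeup_migrate_task(struct task_struct *p, int cpu)
    {
            if (p != wakeup_task)
                    return;
            /* Plain store: a missed or extra function trace is OK. */
            wakeup_current_cpu = cpu;
    }

    /* The function tracer then checks, lock-free:
     *         if (cpu != wakeup_current_cpu)
     *                 return;
     */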

    Reported-by: Jan Blunck
    Tested-by: Jan Blunck
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

05 Sep, 2009

1 commit

  • Currently the latency tracers reset the ring buffer. Unfortunately
    if a commit is in process (due to a trace event), this can corrupt
    the ring buffer. When this happens, the ring buffer will detect
    the corruption and then permanently disable the ring buffer.

    The bug does not crash the system, but it does prevent further tracing
    after the bug is hit.

    Instead of resetting the trace buffers, the timestamp of the start of
    the trace is used. The buffers will still contain the previous
    data, but the output will not count any data that is before the
    timestamp of the trace.

    Note, this only affects the static trace output (trace) and not the
    runtime trace output (trace_pipe). The runtime trace output does not
    make sense for the latency tracers anyway.
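
    A minimal sketch of the idea (names illustrative; the real state is
    kept per CPU):

    static unsigned long long time_start;

    static void latency_trace_start(unsigned long long now)
    {
            time_start = now;          /* e.g. the value of ftrace_now() */
    }

    static int entry_visible(unsigned long long ts)
    {
            /* Stale ring-buffer data is simply not counted. */
            return ts >= time_start;
    }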

    Reported-by: Arnaldo Carvalho de Melo
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

24 Apr, 2009

1 commit

  • The code had the following outside the lock:

    if (next != wakeup_task)
            return;

    pc = preempt_count();

    /* The task we are waiting for is waking up */
    data = wakeup_trace->data[wakeup_cpu];

    On initialization, wakeup_task is NULL and wakeup_cpu is -1. This code
    is not under a lock. If wakeup_task is set on another CPU as that
    task is waking up, we can see the wakeup_task before wakeup_cpu is
    set. If we read wakeup_cpu while it is still -1, then we will have
    a bad data pointer.

    This patch moves the reading of wakeup_cpu within the protection of
    the spinlock used to protect the writing of wakeup_cpu and wakeup_task.
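
    A hedged sketch of the resulting ordering (lock and label names are
    illustrative):

    __raw_spin_lock(&wakeup_lock);
    if (next != wakeup_task)
            goto out_unlock;

    pc = preempt_count();

    /* wakeup_cpu is now guaranteed to be set */
    data = wakeup_trace->data[wakeup_cpu];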

    [ Impact: remove possible race causing invalid pointer dereference ]

    Reported-by: Maneesh Soni
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

15 Apr, 2009

1 commit

  • Impact: clean up

    Create a subdirectory in include/trace called events to keep the
    tracepoint headers in their own separate directory. Only headers that
    declare tracepoints should be defined in this directory.

    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Neil Horman
    Cc: Zhao Lei
    Cc: Eduard - Gabriel Munteanu
    Cc: Pekka Enberg
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

07 Apr, 2009

1 commit

  • Maneesh Soni was getting a crash when running the wakeup tracer.
    We debugged it down to the recording of the function with the
    CALLER_ADDR2 macro. This is used to get the location of the caller
    to schedule.

    But the problem comes when schedule is called from assembly. In the case
    that Maneesh had, retint_careful would call schedule. But retint_careful
    does not set up a proper frame pointer. CALLER_ADDR2 is defined as
    __builtin_return_address(2). This produces the following assembly in
    the wakeup tracer code.

    mov 0x0(%rbp),%rcx
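
    For reference, the CALLER_ADDRx macros are thin wrappers over a GCC
    builtin that walks the frame-pointer chain (as historically defined
    in the kernel's ftrace headers):

    #define CALLER_ADDR0 ((unsigned long)__builtin_return_address(0))
    #define CALLER_ADDR1 ((unsigned long)__builtin_return_address(1))
    #define CALLER_ADDR2 ((unsigned long)__builtin_return_address(2))

    Walking two frames up dereferences the saved %rbp twice; if an
    assembly caller such as retint_careful never set up a frame, the
    "mov 0x0(%rbp),%rcx" above loads garbage and can fault.
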
    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     

05 Mar, 2009

1 commit

  • The latency tracers (irqsoff, preemptoff, preemptirqsoff, and wakeup)
    are pretty useless with the default output format. This patch makes them
    automatically enable the latency format when they are selected. They
    also record the state of the latency option, and if it was not enabled
    when selected, they disable it on reset.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

18 Feb, 2009

2 commits

  • Impact: api and pipe waiting change

    Currently, the waiting in tracing_read_pipe() is done through a
    100 msec schedule_timeout() loop which periodically checks whether
    there are traces on the buffer.

    This can cause small latencies for programs which are reading the
    incoming events.

    This patch makes the reader wait on the trace_wait waitqueue, except
    for a few tracers such as the sched and function tracers, which might
    already hold the runqueue lock while waking up the reader.

    This is done through a new callback, wait_pipe(), on struct tracer.
    If a tracer does not implement one, the default wait on the
    trace_wait queue is used.
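
    A hedged sketch of the new hook (simplified; the actual default
    follows the usual waitqueue protocol):

    struct tracer {
            /* ... */
            void (*wait_pipe)(struct trace_iterator *iter);
    };

    /* Default: sleep on trace_wait until an event arrives. */
    void default_wait_pipe(struct trace_iterator *iter);

    /* Used by the sched/function tracers, which may be woken while
     * the runqueue lock is held: keep polling every 100 msecs. */
    void poll_wait_pipe(struct trace_iterator *iter);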

    Signed-off-by: Frederic Weisbecker
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker
     
  • Impact: cleanup

    Fix incorrect hint message in code and typos in comments.

    Signed-off-by: Wenji Huang
    Signed-off-by: Steven Rostedt

    Wenji Huang
     

05 Feb, 2009

1 commit


29 Jan, 2009

1 commit


22 Jan, 2009

3 commits

  • Impact: better data for wakeup tracer

    This patch adds the wakeup and schedule calls that are used by
    the scheduler tracer to make the wakeup tracer more readable.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • Impact: add option to trace all tasks or just RT tasks

    The current wakeup tracer only traces RT task wakeups. This is
    fine for those interested in the wakeup timings of RT tasks, but
    it is useless for those interested in the causes
    of long wakeups for non-RT tasks.

    This patch creates a "wakeup_rt" tracer to implement the tracing of just
    RT tasks (as the current "wakeup" does), and makes "wakeup" now
    trace all tasks, as an average developer would expect.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • Impact: fix for erased trace output

    To try not to have the output of a trace interfere with the wakeup
    tracer, tracing would be disabled while the output was printing. But
    if a trace had started when tracing was disabled, a partial trace
    could be shown. To try to solve this, on closing of the tracer, the
    trace buffer would be cleared.

    The latency tracers (wakeup and irqsoff) have two buffers: one for
    recording and one for holding the max trace that is printed. The
    clearing of the trace above should only affect the recording buffer.
    But for some reason it would move the erased trace to the print
    buffer, probably due to a race between the closing of the trace and
    the saving of the max trace.

    The above is all pretty useless, and if the user does not want the
    printing of the trace to be traced itself, then the user can manually
    disable tracing. This patch removes all the code that tries to keep
    the output of the tracer from modifying the trace.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     

16 Jan, 2009

1 commit

  • Impact: trace max latencies on start of latency tracing

    This patch sets the max latency to zero whenever one of the
    irq-variant tracers or the wakeup tracer is set as the current tracer.

    Most developers expect to see output when starting up a latency
    tracer. But since max_latency is already set to max, and it takes
    a latency greater than max_latency to be recorded, there is no
    trace. This is not the expected behavior and has even confused me.
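
    A hedged sketch of where the reset happens (function body
    illustrative; the variable name follows the kernel's global):

    extern unsigned long tracing_max_latency;   /* in trace.c */

    static int irqsoff_tracer_init(void)
    {
            tracing_max_latency = 0;  /* any latency now beats the max */
            /* ... rest of tracer startup ... */
            return 0;
    }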

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     

25 Dec, 2008

1 commit


16 Nov, 2008

1 commit

  • Impact: extend the ->init() method with the ability to fail

    This brings a way to know whether the initialization of a tracer
    succeeded. A tracer must return 0 on success and a traditional error
    code (e.g. -ENOMEM) if it fails.

    If a tracer fails to init, it is free to print a detailed warning.
    The tracing API will not, and switching to the new tracer will just
    return the error from the init callback.

    Note: this will be used for the return tracer.
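
    A hedged sketch of the extended method (struct shown only in part):

    struct tracer {
            const char *name;
            /* returns 0 on success, -ENOMEM etc. on failure */
            int (*init)(struct trace_array *tr);
            /* ... */
    };

    /* Switching tracers simply propagates the callback's error: */
    static int tracer_init(struct tracer *t, struct trace_array *tr)
    {
            return t->init ? t->init(tr) : 0;
    }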

    Signed-off-by: Frederic Weisbecker
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker
     

08 Nov, 2008

2 commits


06 Nov, 2008

1 commit

  • Impact: change where tracing is started up and stopped

    Currently, when a new tracer is selected by echoing a tracer name into
    the current_tracer file, the startup is only done if tracing_enabled is
    set to one. If tracing_enabled is changed to zero (by echoing 0 into
    the tracing_enabled file) a full shutdown is performed.

    The full startup and shutdown of a tracer can be expensive, and the
    user can lose traces when echoing 0 into the tracing_enabled file
    because the process takes too long. There are also cases where
    the user would like to start and stop the tracer several times, and
    doing the full startup and shutdown each time might be too expensive.

    This patch performs the full startup and shutdown when a tracer is
    selected. It also adds a way to do a quick start or stop of a tracer.
    The quick version is just a flag that prevents the tracing from
    taking place, but the overhead of the code is still there.

    For example, the startup of a tracer may enable tracepoints, or enable
    the function tracer. The stop and start will just set a flag to
    have the tracer ignore the calls when the tracepoint or function trace
    is called. The overhead of the tracer may still be present when
    the tracer is stopped, but no tracing will occur. Setting the tracer
    to the 'nop' tracer (or any other tracer) will perform the shutdown
    of the tracer which will disable the tracepoint or disable the
    function tracer.

    The tracing_enabled file will simply start or stop tracing.

    This change is all internal. The end result for the user should be the
    same as before: if tracing_enabled is not set, no trace will happen;
    if tracing_enabled is set, the trace will happen. The tracing_enabled
    variable is static between tracers: enabling tracing_enabled and
    switching to another tracer will keep tracing_enabled enabled, and the
    same is true for disabling it.

    This patch will now provide a fast start/stop method to the users
    for enabling or disabling tracing.

    Note: There were two methods to the struct tracer that were never
    used: The methods start and stop. These were to be used as a hook
    to the reading of the trace output, but ended up not being
    necessary. These two methods are now used to enable the start
    and stop of each tracer, in case the tracer needs to do more than
    just not write into the buffer. For example, the irqsoff tracer
    must stop recording max latencies when tracing is stopped.
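
    A userspace model of the quick start/stop described above (names
    loosely mirror tracing_start()/tracing_stop(); the kernel versions
    nest with a counter and a lock):

    static int trace_stopped;

    void tracing_stop(void)  { trace_stopped = 1; }
    void tracing_start(void) { trace_stopped = 0; }

    static void tracer_hook(void)
    {
            if (trace_stopped)
                    return;   /* overhead remains; nothing is recorded */
            /* write the event into the ring buffer ... */
    }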

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     

04 Nov, 2008

1 commit


21 Oct, 2008

1 commit

  • Due to confusion between the ftrace infrastructure and the gcc profiling
    tracer "ftrace", this patch renames the config options from FTRACE to
    FUNCTION_TRACER. The other two names that are offspring from FTRACE,
    DYNAMIC_FTRACE and FTRACE_MCOUNT_RECORD, will stay the same.

    This patch was generated mostly by script, and partially by hand.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     

14 Oct, 2008

3 commits

  • With the new ring buffer infrastructure in ftrace, I'm trying to make
    ftrace a little more lightweight.

    This patch converts a lot of the local_irq_save/restore calls into
    preempt_disable/enable. The original preempt count in a lot of cases
    has to be passed in as a parameter so that it can be recorded correctly.
    Some places were recording it incorrectly before anyway.

    This is also laying the groundwork to make ftrace a little bit more
    reentrant, and to remove all locking. The function tracers must
    still protect against reentrancy.

    Note: All the function tracers must be careful when using preempt_disable.
    It must do the following:

    resched = need_resched();
    preempt_disable_notrace();
    [...]
    if (resched)
            preempt_enable_no_resched_notrace();
    else
            preempt_enable_notrace();

    The reason is that if this function traces schedule() itself, the
    preempt_enable_notrace() will cause a schedule, which will lead
    us into a recursive failure.

    If we needed to reschedule before calling preempt_disable, we
    should have already scheduled. Since we did not, this most likely
    means that we should not schedule now, and we are probably inside
    a schedule function ourselves.

    If resched was not set, we still need to catch the need resched
    flag being set when preemption was off and the if case at the
    end will catch that for us.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • This patch ports ftrace over to the new ring buffer.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • Porting the trace_mark() used by ftrace to tracepoints. (cleanup)

    Changelog :
    - Change error messages : marker -> tracepoint

    [ mingo@elte.hu: conflict resolutions ]
    Signed-off-by: Mathieu Desnoyers
    Acked-by: 'Peter Zijlstra'
    Signed-off-by: Ingo Molnar

    Mathieu Desnoyers
     

18 Jul, 2008

1 commit

  • On Wed, 16 Jul 2008, Vegard Nossum wrote:

    > When booting 4d3702b6, I got this huge thing:
    >
    > Testing tracer wakeup: ------------[ cut here ]------------
    > WARNING: at kernel/lockdep.c:2731 check_flags+0x123/0x160()
    > Modules linked in:
    > Pid: 1, comm: swapper Not tainted 2.6.26-crashing-02127-g4d3702b6 #30
    > [] warn_on_slowpath+0x59/0xb0
    > [] ? ftrace_call+0x5/0x8
    > [] ? native_read_tsc+0x0/0x20
    > [] ? sub_preempt_count+0x12/0xf0
    > [] ? trace_hardirqs_off+0xb/0x10
    > [] ? __lock_acquire+0x2cc/0x1120
    > [] ? trace_hardirqs_off+0xb/0x10
    > [] ? mcount_call+0x5/0xa
    > [] check_flags+0x123/0x160
    > [] lock_acquire+0x51/0xd0
    > [] ? ftrace_call+0x5/0x8
    > [] _spin_lock_irqsave+0x5f/0xa0
    > [] ? ftrace_record_ip+0xf5/0x220
    > [] ? debug_locks_off+0x3/0x50
    > [] ftrace_record_ip+0xf5/0x220
    > [] mcount_call+0x5/0xa
    > [] ? debug_locks_off+0x8/0x50
    > [] check_flags+0xf7/0x160
    > [] lock_acquire+0x51/0xd0
    > [] ? ftrace_call+0x5/0x8
    > [] _spin_lock_irqsave+0x5f/0xa0
    > [] ? wakeup_tracer_call+0x6d/0xf0
    > [] ? _local_bh_enable+0x62/0xb0
    > [] ? sub_preempt_count+0xd/0xf0
    > [] wakeup_tracer_call+0x6d/0xf0
    > [] ? __do_softirq+0xf4/0x110
    > [] ? wakeup_tracer_call+0x91/0xf0
    > [] ftrace_call+0x5/0x8
    > [] ? __do_softirq+0xf4/0x110
    > [] ? sub_preempt_count+0x12/0xf0
    > [] _local_bh_enable+0x62/0xb0
    > [] __do_softirq+0xf4/0x110
    > [] do_softirq+0xad/0xb0
    > [] irq_exit+0xa5/0xb0
    > [] smp_apic_timer_interrupt+0x66/0xa0
    > [] ? trace_hardirqs_off_thunk+0xc/0x10
    > [] apic_timer_interrupt+0x2d/0x34
    > [] ? find_usage_backwards+0xb/0xf0
    > [] ? _spin_unlock_irqrestore+0x69/0x80
    > [] tg_shares_up+0x132/0x1d0
    > [] walk_tg_tree+0x62/0xa0
    > [] ? tg_shares_up+0x0/0x1d0
    > [] ? tg_nop+0x0/0x10
    > [] update_shares+0x5d/0x80
    > [] try_to_wake_up+0x6f/0x280
    > [] ? __ftrace_modify_code+0x0/0xc0
    > [] ? __ftrace_modify_code+0x0/0xc0
    > [] wake_up_process+0x14/0x20
    > [] kthread_create+0x66/0xb0
    > [] ? do_stop+0x0/0x200
    > [] ? __stop_machine_run+0x30/0xb0
    > [] __stop_machine_run+0x50/0xb0
    > [] ? do_stop+0x0/0x200
    > [] ? __ftrace_modify_code+0x0/0xc0
    > [] ? mutex_unlock+0xd/0x10
    > [] stop_machine_run+0x2c/0x60
    > [] unregister_ftrace_function+0x103/0x180
    > [] stop_wakeup_tracer+0x17/0x60
    > [] wakeup_tracer_ctrl_update+0xf/0x30
    > [] trace_selftest_startup_wakeup+0xb5/0x130
    > [] ? trace_wakeup_test_thread+0x0/0x70
    > [] register_tracer+0x135/0x1b0
    > [] init_wakeup_tracer+0xd/0xf
    > [] kernel_init+0x1a9/0x2ce
    > [] ? _spin_unlock_irq+0x3b/0x60
    > [] ? trace_hardirqs_on_thunk+0xc/0x10
    > [] ? init_wakeup_tracer+0x0/0xf
    > [] ? trace_hardirqs_on_caller+0x126/0x180
    > [] ? trace_hardirqs_on_thunk+0xc/0x10
    > [] ? restore_nocheck_notrace+0x0/0xe
    > [] ? kernel_init+0x0/0x2ce
    > [] ? kernel_init+0x0/0x2ce
    > [] kernel_thread_helper+0x7/0x10
    > =======================
    > ---[ end trace a7919e7f17c0a725 ]---
    > irq event stamp: 579530
    > hardirqs last enabled at (579528): [] trace_hardirqs_on+0xb/0x10
    > hardirqs last disabled at (579529): [] trace_hardirqs_off+0xb/0x10
    > softirqs last enabled at (579530): [] __do_softirq+0xf4/0x110
    > softirqs last disabled at (579517): [] do_softirq+0xad/0xb0
    > irq event stamp: 579530
    > hardirqs last enabled at (579528): [] trace_hardirqs_on+0xb/0x10
    > hardirqs last disabled at (579529): [] trace_hardirqs_off+0xb/0x10
    > softirqs last enabled at (579530): [] __do_softirq+0xf4/0x110
    > softirqs last disabled at (579517): [] do_softirq+0xad/0xb0
    > PASSED
    >
    > Incidentally, the kernel also hung while I was typing in this report.

    Things get weird between lockdep and ftrace because ftrace can be called
    from within lockdep's internal code (via the mcount pointer), and lockdep
    can be called from within ftrace (via spin_locks).

    Signed-off-by: Steven Rostedt
    Tested-by: Vegard Nossum
    Signed-off-by: Ingo Molnar

    Steven Rostedt