10 Sep, 2009

1 commit

  • The wakeup tracer, when enabled, has its own function tracer.
    It only traces functions on the CPU that the task it is following is
    running on. If a task is woken on one CPU but then migrates to another
    CPU before it wakes up, the latency tracer will start tracing functions
    on the other CPU.

    To find which CPU the task is on, the wakeup function tracer performs
    a task_cpu(wakeup_task). But to make sure the task does not disappear,
    it grabs the wakeup_lock, which is also taken when the task wakes up.
    By taking this lock, the function tracer does not need to worry about
    the task being freed while it checks its CPU.

    Jan Blunck found a problem with this approach on his 32-CPU box. When
    a task is being traced by the wakeup tracer, all functions take this
    lock. That means that on all 32 CPUs, each function call takes this
    one lock to see if the task is on that CPU. This effectively
    serializes all functions on all 32 CPUs. Needless to say, this caused
    major issues on that box; it would even lock up.

    This patch changes the wakeup latency tracer to insert a probe on the
    migrate-task tracepoint. When a task changes the CPU it will run on,
    the probe takes note. Now the wakeup function tracer no longer needs
    to take the lock; it only compares the current CPU with a variable
    that holds the CPU the task is currently on. We don't worry about
    races, since it is OK to add or miss a function trace.
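
    A minimal kernel-context sketch of the idea (the probe signature and
    the variable names here are assumptions, not the code from the patch):

    static int wakeup_current_cpu;

    /* Probe attached to the migrate-task tracepoint. */
    static void probe_wakeup_migrate_task(struct task_struct *p, int dest_cpu)
    {
            if (p != wakeup_task)
                    return;
            wakeup_current_cpu = dest_cpu;  /* remember the new CPU */
    }

    /* Per-function callback: a plain compare, no lock taken. */
    static void wakeup_tracer_call(unsigned long ip, unsigned long parent_ip)
    {
            if (smp_processor_id() != wakeup_current_cpu)
                    return;
            /* ... record the function into the trace buffer ... */
    }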

    Reported-by: Jan Blunck
    Tested-by: Jan Blunck
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

05 Sep, 2009

1 commit

  • Currently the latency tracers reset the ring buffer. Unfortunately,
    if a commit is in progress (due to a trace event), this can corrupt
    the ring buffer. When this happens, the ring buffer will detect the
    corruption and then permanently disable itself.

    The bug does not crash the system, but it does prevent further tracing
    after the bug is hit.

    Instead of resetting the trace buffers, the timestamp of the start of
    the trace is now used. The buffers will still contain the previous
    data, but the output will not count any data that is from before the
    timestamp of the trace.

    Note, this only affects the static trace output (trace) and not the
    runtime trace output (trace_pipe). The runtime trace output does not
    make sense for the latency tracers anyway.
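
    As a rough sketch of the approach (the names are illustrative and
    ftrace_now() is the tracer's clock; this is not the actual code):

    static u64 trace_start_ts;

    /* Called where the latency tracer used to reset the buffers. */
    static void latency_trace_start(int cpu)
    {
            trace_start_ts = ftrace_now(cpu);       /* just note the start time */
    }

    /* When generating the static trace output, drop stale entries. */
    static int event_in_current_trace(u64 event_ts)
    {
            return event_ts >= trace_start_ts;
    }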

    Reported-by: Arnaldo Carvalho de Melo
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

24 Apr, 2009

1 commit

  • The code had the following outside the lock:

    if (next != wakeup_task)
            return;

    pc = preempt_count();

    /* The task we are waiting for is waking up */
    data = wakeup_trace->data[wakeup_cpu];

    On initialization, wakeup_task is NULL and wakeup_cpu is -1. This code
    is not under a lock. If wakeup_task is set on another CPU while that
    task is waking up, we can see wakeup_task before wakeup_cpu is set.
    If we read wakeup_cpu while it is still -1, we will have a bad data
    pointer.

    This patch moves the reading of wakeup_cpu within the protection of
    the spinlock used to protect the writing of wakeup_cpu and wakeup_task.

    [ Impact: remove possible race causing invalid pointer dereference ]
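
    A rough sketch of the fixed ordering (illustrative only; the real code
    differs in detail):

    spin_lock_irqsave(&wakeup_lock, flags);

    if (next != wakeup_task)
            goto out_unlock;

    pc = preempt_count();

    /* Safe: wakeup_cpu is read while wakeup_task is pinned by the lock */
    data = wakeup_trace->data[wakeup_cpu];
    /* ... */

    out_unlock:
            spin_unlock_irqrestore(&wakeup_lock, flags);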

    Reported-by: Maneesh Soni
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

15 Apr, 2009

1 commit

  • Impact: clean up

    Create a subdirectory in include/trace called events to keep the
    tracepoint headers in their own separate directory. Only headers that
    declare tracepoints should be defined in this directory.

    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Neil Horman
    Cc: Zhao Lei
    Cc: Eduard - Gabriel Munteanu
    Cc: Pekka Enberg
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

07 Apr, 2009

1 commit

  • Maneesh Soni was getting a crash when running the wakeup tracer.
    We debugged it down to the recording of the function with the
    CALLER_ADDR2 macro. This is used to get the location of the caller
    to schedule.

    But the problem comes when schedule is called from assembly. In the
    case that Maneesh hit, retint_careful would call schedule, but
    retint_careful does not set up a proper frame pointer. CALLER_ADDR2 is
    defined as __builtin_return_address(2). This produces the following
    assembly in the wakeup tracer code.

    mov 0x0(%rbp),%rcx
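
    For reference, the CALLER_ADDRn macros of that era are thin wrappers
    around the GCC builtin, roughly (config guards omitted):

    #define CALLER_ADDR0 ((unsigned long)__builtin_return_address(0))
    #define CALLER_ADDR1 ((unsigned long)__builtin_return_address(1))
    #define CALLER_ADDR2 ((unsigned long)__builtin_return_address(2))

    /*
     * __builtin_return_address(n > 0) walks the saved frame pointer
     * chain; the "mov 0x0(%rbp),%rcx" above is that walk. If a caller
     * written in assembly never set up %rbp, the load dereferences
     * whatever garbage is there.
     */
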
    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     

05 Mar, 2009

1 commit

  • The latency tracers (irqsoff, preemptoff, preemptirqsoff, and wakeup)
    are pretty useless with the default output format. This patch makes them
    automatically enable the latency format when they are selected. They
    also record the state of the latency option, and if it was not enabled
    when selected, they disable it on reset.
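
    A minimal sketch of that save/restore, assuming the latency format is
    a TRACE_ITER_LATENCY_FMT bit in the global trace_flags (treat the
    exact names as assumptions):

    static int save_lat_flag;

    static void lat_format_enable(void)
    {
            /* remember whether the user already had the latency format on */
            save_lat_flag = trace_flags & TRACE_ITER_LATENCY_FMT;
            trace_flags |= TRACE_ITER_LATENCY_FMT;
    }

    static void lat_format_restore(void)
    {
            /* on reset, only turn it off if we were the ones enabling it */
            if (!save_lat_flag)
                    trace_flags &= ~TRACE_ITER_LATENCY_FMT;
    }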

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

18 Feb, 2009

2 commits

  • Impact: api and pipe waiting change

    Currently, the waiting used in tracing_read_pipe() is done through a
    100 msec schedule_timeout() loop which periodically checks if there
    are traces in the buffer.

    This can cause small latencies for programs which are reading the incoming
    events.

    This patch makes the reader wait on the trace_wait waitqueue, except
    for a few tracers such as the sched and function tracers, which might
    already hold the runqueue lock while waking up the reader.

    This is done through a new wait_pipe() callback on struct tracer. If
    a tracer does not implement it, the default wait on the trace_wait
    queue is used.
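
    A simplified sketch of the shape of the change (treat the bodies and
    signatures as assumptions):

    struct tracer {
            /* ... */
            /* how a trace_pipe reader should wait for new data */
            void (*wait_pipe)(struct trace_iterator *iter);
            /* ... */
    };

    /* Default: sleep on the trace_wait waitqueue until a writer wakes us. */
    static void default_wait_pipe(struct trace_iterator *iter)
    {
            wait_event_interruptible(trace_wait, !trace_empty(iter));
    }

    /* Tracers that may hold the runqueue lock keep the old polling wait. */
    static void poll_wait_pipe(struct trace_iterator *iter)
    {
            set_current_state(TASK_INTERRUPTIBLE);
            schedule_timeout(HZ / 10);      /* ~100 msecs, as before */
    }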

    Signed-off-by: Frederic Weisbecker
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker
     
  • Impact: cleanup

    Fix incorrect hint message in code and typos in comments.

    Signed-off-by: Wenji Huang
    Signed-off-by: Steven Rostedt

    Wenji Huang
     

22 Jan, 2009

3 commits

  • Impact: better data for wakeup tracer

    This patch adds the wakeup and schedule calls that are used by
    the scheduler tracer, to make the wakeup tracer's output more readable.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • Impact: add option to trace all tasks or just RT tasks

    The current wakeup tracer only traces RT task wakeups. This is
    fine for those interested in the wake-up timings of RT tasks, but
    it is useless for those interested in the causes of long wakeups
    for non-RT tasks.

    This patch creates a "wakeup_rt" tracer to trace just RT tasks
    (as the current "wakeup" does), and makes "wakeup" trace all
    tasks, as an average developer would expect.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • Impact: fix to erased trace output

    To try not to have the outputting of a trace interfere with the wakeup
    tracer, the code would disable tracing while the output was printing.
    But if a trace had started while tracing was disabled, it could show a
    partial trace. To try to solve this, on closing of the tracer, it
    would clear the trace buffer.

    The latency tracers (wakeup and irqsoff) have two buffers. One for
    recording and one for holding the max trace that is printed. The
    clearing of the trace above should only affect the recording buffer.
    But for some reason it would move the erased trace to the print
    buffer, probably due to a race between the closing of the trace
    and the saving of the max trace.

    The above is all pretty useless, and if the user does not want the
    printing of the trace to be traced itself, then the user can manually
    disable tracing. This patch removes all the code that tries to keep
    the output of the tracer from modifying the trace.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     

16 Jan, 2009

1 commit

  • Impact: trace max latencies on start of latency tracing

    This patch sets the max latency to zero whenever one of the
    irq variant tracers or the wakeup tracer is set as the current tracer.

    Most developers expect to see output when starting up a latency
    tracer. But since the max_latency is already set to max, and
    it takes a latency greater than max_latency to be recorded, there
    is no trace. This is not the expected behavior and has even confused
    me.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     

16 Nov, 2008

1 commit

  • Impact: extend the ->init() method with the ability to fail

    This brings a way to know whether the initialization of a tracer
    succeeded. A tracer must return 0 on success and a traditional error
    code (e.g. -ENOMEM) if it fails.

    If a tracer fails to init, it is free to print a detailed warning.
    The tracing API will not; switching to a new tracer will just return
    the error from the init callback.

    Note: this will be used for the return tracer.
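
    A hypothetical tracer init illustrating the convention (my_tracer_init,
    my_buf and MY_BUF_SIZE are made-up names):

    static int my_tracer_init(struct trace_array *tr)
    {
            my_buf = kmalloc(MY_BUF_SIZE, GFP_KERNEL);
            if (!my_buf) {
                    /* the tracer may warn; the tracing core will not */
                    printk(KERN_WARNING "my_tracer: cannot allocate buffer\n");
                    return -ENOMEM;         /* switching to this tracer fails */
            }
            return 0;                       /* success */
    }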

    Signed-off-by: Frederic Weisbecker
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker
     

06 Nov, 2008

1 commit

  • Impact: change where tracing is started up and stopped

    Currently, when a new tracer is selected via echo'ing a tracer name into
    the current_tracer file, the startup is only done if tracing_enabled is
    set to one. If tracing_enabled is changed to zero (by echo'ing 0 into
    the tracing_enabled file) a full shutdown is performed.

    The full startup and shutdown of a tracer can be expensive, and the
    user can lose traces when echo'ing 0 into the tracing_enabled file
    because the process takes too long. There can also be places where
    the user would like to start and stop the tracer several times, and
    doing the full startup and shutdown each time might be too expensive.

    This patch performs the full startup and shutdown when a tracer is
    selected. It also adds a way to do a quick start or stop of a tracer.
    The quick version is just a flag that prevents the tracing from
    taking place, but the overhead of the code is still there.

    For example, the startup of a tracer may enable tracepoints, or enable
    the function tracer. The stop and start will just set a flag to
    have the tracer ignore the calls when the tracepoint or function trace
    is called. The overhead of the tracer may still be present when
    the tracer is stopped, but no tracing will occur. Setting the tracer
    to the 'nop' tracer (or any other tracer) will perform the shutdown
    of the tracer which will disable the tracepoint or disable the
    function tracer.

    The tracing_enabled file will simply start or stop tracing.

    This change is all internal. The end result for the user should be the
    same as before: if tracing_enabled is not set, no trace will happen;
    if tracing_enabled is set, then the trace will happen. The
    tracing_enabled setting persists across tracers. Enabling
    tracing_enabled and then switching to another tracer keeps
    tracing_enabled enabled, and the same is true when it is disabled.

    This patch will now provide a fast start/stop method to the users
    for enabling or disabling tracing.

    Note: struct tracer had two methods that were never used: start and
    stop. These were intended as hooks for the reading of the trace
    output, but ended up not being necessary. They are now used to enable
    the start and stop of each tracer, in case the tracer needs to do more
    than just stop writing into the buffer. For example, the irqsoff
    tracer must stop recording max latencies when tracing is stopped.
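
    A minimal sketch of the quick start/stop idea (names are illustrative,
    not the in-tree code):

    static int tracer_active;       /* the "quick" flag */

    static void my_tracer_start(struct trace_array *tr)
    {
            tracer_active = 1;
    }

    static void my_tracer_stop(struct trace_array *tr)
    {
            tracer_active = 0;      /* hooks stay registered, just ignored */
    }

    static void my_tracer_call(unsigned long ip, unsigned long parent_ip)
    {
            if (!tracer_active)
                    return;         /* overhead remains, nothing is recorded */
            /* ... write the event into the ring buffer ... */
    }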

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     

21 Oct, 2008

1 commit

  • Due to confusion between the ftrace infrastructure and the gcc profiling
    tracer "ftrace", this patch renames the config options from FTRACE to
    FUNCTION_TRACER. The other two names that are offspring from FTRACE,
    DYNAMIC_FTRACE and FTRACE_MCOUNT_RECORD, will stay the same.

    This patch was generated mostly by script, and partially by hand.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     

14 Oct, 2008

3 commits

  • With the new ring buffer infrastructure in ftrace, I'm trying to make
    ftrace a little more lightweight.

    This patch converts a lot of the local_irq_save/restore into
    preempt_disable/enable. The original preempt count in a lot of cases
    has to be sent in as a parameter so that it can be recorded correctly.
    Some places were recording it incorrectly before anyway.

    This is also laying the ground work to make ftrace a little bit
    more reentrant, and remove all locking. The function tracers must
    still protect from reentrancy.

    Note: All the function tracers must be careful when using preempt_disable.
    They must do the following:

    resched = need_resched();
    preempt_disable_notrace();
    [...]
    if (resched)
            preempt_enable_no_resched_notrace();
    else
            preempt_enable_notrace();

    The reason is that if this function traces schedule() itself, the
    preempt_enable_notrace() will cause a schedule, which will lead
    us into a recursive failure.

    If we needed to reschedule before calling preempt_disable, we
    should have already scheduled. Since we did not, it is most likely
    that we should not, and we are probably inside a schedule function.

    If resched was not set, we still need to catch the need_resched
    flag being set while preemption was off, and the if case at the
    end will catch that for us.
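
    The sequence above is easy to wrap in a pair of notrace helpers;
    a sketch (the helper names are illustrative):

    static inline int ftrace_preempt_disable(void)
    {
            int resched = need_resched();

            preempt_disable_notrace();
            return resched;
    }

    static inline void ftrace_preempt_enable(int resched)
    {
            if (resched)
                    preempt_enable_no_resched_notrace();
            else
                    preempt_enable_notrace();
    }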

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • This patch ports ftrace over to the new ring buffer.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • Porting the trace_mark() used by ftrace to tracepoints. (cleanup)

    Changelog :
    - Change error messages : marker -> tracepoint

    [ mingo@elte.hu: conflict resolutions ]
    Signed-off-by: Mathieu Desnoyers
    Acked-by: 'Peter Zijlstra'
    Signed-off-by: Ingo Molnar

    Mathieu Desnoyers
     

18 Jul, 2008

1 commit

  • On Wed, 16 Jul 2008, Vegard Nossum wrote:

    > When booting 4d3702b6, I got this huge thing:
    >
    > Testing tracer wakeup: ------------[ cut here ]------------
    > WARNING: at kernel/lockdep.c:2731 check_flags+0x123/0x160()
    > Modules linked in:
    > Pid: 1, comm: swapper Not tainted 2.6.26-crashing-02127-g4d3702b6 #30
    > [] warn_on_slowpath+0x59/0xb0
    > [] ? ftrace_call+0x5/0x8
    > [] ? native_read_tsc+0x0/0x20
    > [] ? sub_preempt_count+0x12/0xf0
    > [] ? trace_hardirqs_off+0xb/0x10
    > [] ? __lock_acquire+0x2cc/0x1120
    > [] ? trace_hardirqs_off+0xb/0x10
    > [] ? mcount_call+0x5/0xa
    > [] check_flags+0x123/0x160
    > [] lock_acquire+0x51/0xd0
    > [] ? ftrace_call+0x5/0x8
    > [] _spin_lock_irqsave+0x5f/0xa0
    > [] ? ftrace_record_ip+0xf5/0x220
    > [] ? debug_locks_off+0x3/0x50
    > [] ftrace_record_ip+0xf5/0x220
    > [] mcount_call+0x5/0xa
    > [] ? debug_locks_off+0x8/0x50
    > [] check_flags+0xf7/0x160
    > [] lock_acquire+0x51/0xd0
    > [] ? ftrace_call+0x5/0x8
    > [] _spin_lock_irqsave+0x5f/0xa0
    > [] ? wakeup_tracer_call+0x6d/0xf0
    > [] ? _local_bh_enable+0x62/0xb0
    > [] ? sub_preempt_count+0xd/0xf0
    > [] wakeup_tracer_call+0x6d/0xf0
    > [] ? __do_softirq+0xf4/0x110
    > [] ? wakeup_tracer_call+0x91/0xf0
    > [] ftrace_call+0x5/0x8
    > [] ? __do_softirq+0xf4/0x110
    > [] ? sub_preempt_count+0x12/0xf0
    > [] _local_bh_enable+0x62/0xb0
    > [] __do_softirq+0xf4/0x110
    > [] do_softirq+0xad/0xb0
    > [] irq_exit+0xa5/0xb0
    > [] smp_apic_timer_interrupt+0x66/0xa0
    > [] ? trace_hardirqs_off_thunk+0xc/0x10
    > [] apic_timer_interrupt+0x2d/0x34
    > [] ? find_usage_backwards+0xb/0xf0
    > [] ? _spin_unlock_irqrestore+0x69/0x80
    > [] tg_shares_up+0x132/0x1d0
    > [] walk_tg_tree+0x62/0xa0
    > [] ? tg_shares_up+0x0/0x1d0
    > [] ? tg_nop+0x0/0x10
    > [] update_shares+0x5d/0x80
    > [] try_to_wake_up+0x6f/0x280
    > [] ? __ftrace_modify_code+0x0/0xc0
    > [] ? __ftrace_modify_code+0x0/0xc0
    > [] wake_up_process+0x14/0x20
    > [] kthread_create+0x66/0xb0
    > [] ? do_stop+0x0/0x200
    > [] ? __stop_machine_run+0x30/0xb0
    > [] __stop_machine_run+0x50/0xb0
    > [] ? do_stop+0x0/0x200
    > [] ? __ftrace_modify_code+0x0/0xc0
    > [] ? mutex_unlock+0xd/0x10
    > [] stop_machine_run+0x2c/0x60
    > [] unregister_ftrace_function+0x103/0x180
    > [] stop_wakeup_tracer+0x17/0x60
    > [] wakeup_tracer_ctrl_update+0xf/0x30
    > [] trace_selftest_startup_wakeup+0xb5/0x130
    > [] ? trace_wakeup_test_thread+0x0/0x70
    > [] register_tracer+0x135/0x1b0
    > [] init_wakeup_tracer+0xd/0xf
    > [] kernel_init+0x1a9/0x2ce
    > [] ? _spin_unlock_irq+0x3b/0x60
    > [] ? trace_hardirqs_on_thunk+0xc/0x10
    > [] ? init_wakeup_tracer+0x0/0xf
    > [] ? trace_hardirqs_on_caller+0x126/0x180
    > [] ? trace_hardirqs_on_thunk+0xc/0x10
    > [] ? restore_nocheck_notrace+0x0/0xe
    > [] ? kernel_init+0x0/0x2ce
    > [] ? kernel_init+0x0/0x2ce
    > [] kernel_thread_helper+0x7/0x10
    > =======================
    > ---[ end trace a7919e7f17c0a725 ]---
    > irq event stamp: 579530
    > hardirqs last enabled at (579528): [] trace_hardirqs_on+0xb/0x10
    > hardirqs last disabled at (579529): [] trace_hardirqs_off+0xb/0x10
    > softirqs last enabled at (579530): [] __do_softirq+0xf4/0x110
    > softirqs last disabled at (579517): [] do_softirq+0xad/0xb0
    > PASSED
    >
    > Incidentally, the kernel also hung while I was typing in this report.

    Things get weird between lockdep and ftrace because ftrace can be
    called from within lockdep internal code (via the mcount pointer) and
    lockdep can be called from within ftrace (via spin_locks).

    Signed-off-by: Steven Rostedt
    Tested-by: Vegard Nossum
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     

24 May, 2008

8 commits

  • Porting ftrace to the marker infrastructure.

    Don't need to chain to the wakeup tracer from the sched tracer, because
    markers support connecting multiple probes.

    Signed-off-by: Mathieu Desnoyers
    CC: Steven Rostedt
    Signed-off-by: Ingo Molnar
    Signed-off-by: Thomas Gleixner

    Mathieu Desnoyers
     
  • A printk called from the wakeup and irqsoff critical timings can
    cause deadlocks, since printk might do a wakeup itself. If the
    call to printk happens with the runqueue lock held, it can
    deadlock.

    This patch protects the printk from being called in trace irqs off
    with a test to see if the runqueue for the current CPU is locked.
    If it is locked, the printk is skipped.

    The wakeup path always holds the runqueue lock, so the printk there is
    simply removed.
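
    A sketch of the guard for the irqs-off case, assuming a helper such as
    runqueue_is_locked() that reports whether the current CPU's runqueue
    lock is held (treat the exact helper name as an assumption):

    static void print_max_latency(unsigned long latency)
    {
            /*
             * printk may wake up klogd; if this CPU's runqueue lock is
             * already held, that wakeup would deadlock, so skip it.
             */
            if (runqueue_is_locked())
                    return;
            printk(KERN_INFO "new max latency: %lu us\n", latency);
    }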

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar
    Signed-off-by: Thomas Gleixner

    Steven Rostedt
     
  • Signed-off-by: Ingo Molnar
    Signed-off-by: Thomas Gleixner

    Ingo Molnar
     
  • Signed-off-by: Ingo Molnar
    Signed-off-by: Thomas Gleixner

    Ingo Molnar
     
  • now that we have a kbuild method for notrace, no need to pollute the
    C code with the annotations.

    Signed-off-by: Ingo Molnar
    Signed-off-by: Thomas Gleixner

    Ingo Molnar
     
  • A new check was added in the ftrace function that won't trace if the CPU
    trace buffer is disabled. Unfortunately, other tracers used ftrace() to
    write to the buffer after they disabled it. The new disable check turns
    these calls into a nop.

    This patch changes the __ftrace that is called without the check into a
    new API for the other tracers to use, called "trace_function". The other
    tracers use this interface instead when the trace CPU buffer is already
    disabled.
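
    A simplified sketch of the split (the real signatures differ in
    detail):

    /* Tracers that have already disabled the CPU buffer call this directly. */
    void trace_function(struct trace_array *tr, struct trace_array_cpu *data,
                        unsigned long ip, unsigned long parent_ip,
                        unsigned long flags)
    {
            /* ... reserve an entry and record ip/parent_ip ... */
    }

    /* Checked entry point: becomes a nop when the CPU buffer is disabled. */
    void ftrace(struct trace_array *tr, struct trace_array_cpu *data,
                unsigned long ip, unsigned long parent_ip, unsigned long flags)
    {
            if (likely(!atomic_read(&data->disabled)))
                    trace_function(tr, data, ip, parent_ip, flags);
    }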

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar
    Signed-off-by: Thomas Gleixner

    Steven Rostedt
     
  • rename and uninline now() to ftrace_now().

    Signed-off-by: Ingo Molnar
    Signed-off-by: Thomas Gleixner

    Ingo Molnar
     
  • factor out code and clean it up.

    Signed-off-by: Ingo Molnar
    Signed-off-by: Thomas Gleixner

    Ingo Molnar