09 Feb, 2011

1 commit


14 May, 2010

1 commit

  • This patch adds data to be passed to tracepoint callbacks.

    The functions created by DECLARE_TRACE() now take a mandatory data
    parameter. For example:

    DECLARE_TRACE(mytracepoint, int value, value)

    Will create the register function:

    int register_trace_mytracepoint((void(*)(void *data, int value))probe,
    void *data);

    All callbacks (probes) must take a (void *data) parameter as their
    first argument. So a callback for the above tracepoint will look like:

    void myprobe(void *data, int value)
    {
    }

    The callback may choose to ignore the data parameter.

    This change allows callbacks to register a private data pointer along
    with the function probe.

    void mycallback(void *data, int value);

    register_trace_mytracepoint(mycallback, mydata);

    mycallback() will then receive "mydata" as its first parameter, ahead
    of the tracepoint arguments.

    A more detailed example:

    DECLARE_TRACE(mytracepoint, TP_PROTO(int status), TP_ARGS(status));

    /* In the C file */

    DEFINE_TRACE(mytracepoint, TP_PROTO(int status), TP_ARGS(status));

    [...]

    trace_mytracepoint(status);

    /* In a file registering this tracepoint */

    int my_callback(void *data, int status)
    {
    struct my_struct *my_data = data;
    [...]
    }

    [...]
    my_data = kmalloc(sizeof(*my_data), GFP_KERNEL);
    init_my_data(my_data);
    register_trace_mytracepoint(my_callback, my_data);

    The same callback can also be registered to the same tracepoint as long
    as the data registered is different. Note that the same data must also
    be used to unregister the callback:

    unregister_trace_mytracepoint(my_callback, my_data);

    Because of the data parameter, tracepoints declared this way cannot
    have an empty argument list. That is:

    DECLARE_TRACE(mytracepoint, TP_PROTO(void), TP_ARGS());

    will cause an error.

    If no arguments are needed, a new macro can be used instead:

    DECLARE_TRACE_NOARGS(mytracepoint);

    Since there are no arguments, the proto and args fields are left out.
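    The registration model above can be sketched as a small userspace
    program. This is an illustration, not kernel code: the fixed-size
    probe table and plain -1 error returns are simplifications (the
    kernel resizes its probe array and returns -EEXIST/-ENOENT style
    errors), but the (probe, data) pairing behaves the same way:

```c
#define MAX_PROBES 4

/* Each registration is a (probe function, private data) pair. */
struct probe_entry {
    void (*func)(void *data, int value);
    void *data;
};

static struct probe_entry probes[MAX_PROBES];
static int nr_probes;

/* Registering the same (func, data) pair twice fails; the same func
 * with different data is a distinct registration. */
int register_trace_mytracepoint(void (*func)(void *, int), void *data)
{
    if (nr_probes == MAX_PROBES)
        return -1;
    for (int i = 0; i < nr_probes; i++)
        if (probes[i].func == func && probes[i].data == data)
            return -1;
    probes[nr_probes].func = func;
    probes[nr_probes].data = data;
    nr_probes++;
    return 0;
}

/* Unregistering must name both the function and the data pointer. */
int unregister_trace_mytracepoint(void (*func)(void *, int), void *data)
{
    for (int i = 0; i < nr_probes; i++) {
        if (probes[i].func == func && probes[i].data == data) {
            probes[i] = probes[--nr_probes];
            return 0;
        }
    }
    return -1;
}

/* Firing the tracepoint hands each probe its own data first, then
 * the tracepoint arguments. */
void trace_mytracepoint(int value)
{
    for (int i = 0; i < nr_probes; i++)
        probes[i].func(probes[i].data, value);
}

/* A sample probe that sums its private data with the argument. */
static int seen_total;
static void my_probe(void *data, int value)
{
    seen_total += *(int *)data + value;
}
```

    Firing trace_mytracepoint() passes each probe its own registered data
    ahead of the arguments, matching the callback signature above.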

    This is part of a series to make the tracepoint footprint smaller:

       text    data    bss     dec    hex filename
    4913961 1088356 861512 6863829 68bbd5 vmlinux.orig
    4914025 1088868 861512 6864405 68be15 vmlinux.class
    4918492 1084612 861512 6864616 68bee8 vmlinux.tracepoint

    Again, this patch also increases the size of the kernel, but
    lays the ground work for decreasing it.

    v5: Fixed net/core/drop_monitor.c to handle these updates.

    v4: Moved DECLARE_TRACE() and DECLARE_TRACE_NOARGS() out of the
    #ifdef CONFIG_TRACEPOINTS, since the two are the same in both
    cases. The __DECLARE_TRACE() is what changes.
    Thanks to Frederic Weisbecker for pointing this out.

    v3: Made all register_* functions require data to be passed and
    all callbacks take a void * parameter as their first argument.
    This makes the calling functions comply with the C standard.

    Also added more comments to the modifications of DECLARE_TRACE().

    v2: Made the DECLARE_TRACE() have the ability to pass arguments
    and added a new DECLARE_TRACE_NOARGS() for tracepoints that
    do not need any arguments.

    Acked-by: Mathieu Desnoyers
    Acked-by: Masami Hiramatsu
    Acked-by: Frederic Weisbecker
    Cc: Neil Horman
    Cc: David S. Miller
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

07 May, 2010

1 commit


05 Sep, 2009

1 commit

  • The latency tracers (irqsoff and wakeup) can swap trace buffers
    on the fly. If an event is happening and has reserved data on one of
    the buffers, and the latency tracer swaps the global buffer with the
    max buffer, the result is that the event may commit the data to the
    wrong buffer.

    This patch changes the trace recording API so that the reserve
    operation returns the buffer that was used for the reservation. That
    buffer can then be passed in to the commit.
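    The shape of the fix can be modeled in a few lines of userspace C.
    The names here (trace_buffer_reserve(), trace_buffer_commit(),
    swap_buffers()) are illustrative, not the kernel's; the point is
    that reserve reports which buffer it used, so a swap between reserve
    and commit cannot misdirect the commit:

```c
struct ring_buffer { int committed; };

static struct ring_buffer buf_a, buf_b;
static struct ring_buffer *global_buf = &buf_a; /* live trace buffer */
static struct ring_buffer *max_buf = &buf_b;    /* snapshot buffer   */

/* Reserve now reports the buffer it used, instead of leaving the
 * committer to re-read the (possibly swapped) global pointer. */
struct ring_buffer *trace_buffer_reserve(void)
{
    return global_buf;
}

/* Commit goes to the buffer the caller reserved on, not to whatever
 * global_buf points at by commit time. */
void trace_buffer_commit(struct ring_buffer *buf)
{
    buf->committed++;
}

/* The latency tracers (irqsoff, wakeup) swap the buffers on the fly. */
void swap_buffers(void)
{
    struct ring_buffer *tmp = global_buf;

    global_buf = max_buf;
    max_buf = tmp;
}
```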

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

06 Aug, 2009

1 commit


15 Apr, 2009

1 commit

  • Impact: clean up

    Create a sub directory in include/trace called events to keep the
    trace point headers in their own separate directory. Only headers that
    declare trace points should be defined in this directory.

    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Neil Horman
    Cc: Zhao Lei
    Cc: Eduard - Gabriel Munteanu
    Cc: Pekka Enberg
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

07 Apr, 2009

2 commits


18 Mar, 2009

1 commit


18 Feb, 2009

2 commits

  • Impact: api and pipe waiting change

    Currently, the waiting used in tracing_read_pipe() is done through a
    100 msecs schedule_timeout() loop which periodically check if there
    are traces on the buffer.

    This can cause small latencies for programs which are reading the incoming
    events.

    This patch makes the reader wait on the trace_wait waitqueue, except
    for a few tracers such as the sched and function tracers, which might
    already hold the runqueue lock while waking up the reader.

    This is performed through a new wait_pipe() callback on struct tracer.
    If a tracer does not implement one, the default wait on the
    trace_wait queue is attached.
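    The callback-with-default pattern can be sketched as follows. The
    names are illustrative, and the counters stand in for actually
    sleeping on a waitqueue or in a schedule_timeout() loop:

```c
#include <stddef.h>

struct tracer {
    const char *name;
    void (*wait_pipe)(void); /* optional per-tracer wait method */
};

static int default_waits; /* stands in for waitqueue sleeps */
static int poll_waits;    /* stands in for timeout polling  */

static void default_wait_pipe(void) { default_waits++; }
static void poll_wait_pipe(void)    { poll_waits++; }

/* Tracers that leave wait_pipe NULL get the default waitqueue-based
 * wait attached when they are registered. */
void register_tracer(struct tracer *t)
{
    if (t->wait_pipe == NULL)
        t->wait_pipe = default_wait_pipe;
}
```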

    Signed-off-by: Frederic Weisbecker
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker
     
  • Impact: cleanup

    Fix incorrect hint message in code and typos in comments.

    Signed-off-by: Wenji Huang
    Signed-off-by: Steven Rostedt

    Wenji Huang
     

06 Feb, 2009

1 commit


05 Feb, 2009

1 commit


25 Dec, 2008

1 commit


19 Dec, 2008

2 commits

  • Impact: cleanup

    This patch factors out common code from multiple tracers into a
    tracing_reset_online_cpus() function and converts the tracers to use it.

    Signed-off-by: Pekka Enberg
    Signed-off-by: Ingo Molnar

    Pekka J Enberg
     
  • Fix these warnings:

    kernel/trace/trace_sched_switch.c: In function ‘tracing_sched_register’:
    kernel/trace/trace_sched_switch.c:96: warning: passing argument 1 of ‘register_trace_sched_wakeup_new’ from incompatible pointer type
    kernel/trace/trace_sched_switch.c:112: warning: passing argument 1 of ‘unregister_trace_sched_wakeup_new’ from incompatible pointer type
    kernel/trace/trace_sched_switch.c: In function ‘tracing_sched_unregister’:
    kernel/trace/trace_sched_switch.c:121: warning: passing argument 1 of ‘unregister_trace_sched_wakeup_new’ from incompatible pointer type

    They trigger because the sched_wakeup_new tracepoint needs the same
    trace signature as sched_wakeup - which was changed recently.

    Fix it.

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     

16 Nov, 2008

1 commit

  • Impact: extend the ->init() method with the ability to fail

    This brings a way to know whether the initialization of a tracer
    succeeded. A tracer must return 0 on success and a conventional error
    code (e.g. -ENOMEM) if it fails.

    If a tracer fails to init, it is free to print a detailed warning; the
    tracing API will not, and switching to a new tracer will just return
    the error from the init callback.

    Note: this will be used for the return tracer.
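    A minimal userspace model of the new contract (names illustrative):
    the error from ->init() propagates out of the tracer switch, and the
    current tracer is left unchanged on failure:

```c
#include <errno.h>

struct tracer {
    const char *name;
    int (*init)(void); /* returns 0 on success, -errno on failure */
};

static const char *current_tracer = "nop";

/* Switching tracers: if init fails, keep the old tracer and just
 * return the error from the init callback. */
int tracing_set_tracer(struct tracer *t)
{
    int ret = t->init ? t->init() : 0;

    if (ret)
        return ret;
    current_tracer = t->name;
    return 0;
}

static int ok_init(void)  { return 0; }
static int oom_init(void) { return -ENOMEM; }
```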

    Signed-off-by: Frederic Weisbecker
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker
     

08 Nov, 2008

4 commits

  • Impact: remove obsolete variable in trace_array structure

    With the new start / stop method of ftrace, the ctrl variable
    in the trace_array structure is now obsolete. Remove it.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • Impact: Remove the ctrl_update tracer method

    With the new quick start/stop method of tracing, the ctrl_update
    method is out of date.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • Impact: fix for sched_switch that broke dynamic ftrace startup

    The commit: tracing/fastboot: use sched switch tracer from boot tracer
    broke the API of the sched_switch trace. The use of the
    tracing_start/stop_cmdline record is for only recording the cmdline,
    NOT recording the schedule switches themselves.

    Seeing that the boot tracer broke the API to do something that it
    wanted, this patch adds a new interface for that use while
    putting back the original interface of the old API.

    Signed-off-by: Steven Rostedt
    Acked-by: Frederic Weisbecker
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • Impact: boot tracer startup modified

    The boot tracer calls into some of the schedule tracing private functions
    that should not be exported. This patch cleans it up, and makes
    way for further changes in the ftrace infrastructure.

    This patch adds an API to assign a trace array to the schedule
    context switch tracer.

    Signed-off-by: Steven Rostedt
    Acked-by: Frederic Weisbecker
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     

06 Nov, 2008

1 commit

  • Impact: change where tracing is started up and stopped

    Currently, when a new tracer is selected via echo'ing a tracer name into
    the current_tracer file, the startup is only done if tracing_enabled is
    set to one. If tracing_enabled is changed to zero (by echo'ing 0 into
    the tracing_enabled file) a full shutdown is performed.

    The full startup and shutdown of a tracer can be expensive, and the
    user can lose traces when echo'ing 0 into the tracing_enabled file
    because the process takes too long. There may also be places where
    the user would like to start and stop the tracer several times, and
    doing the full startup and shutdown each time might be too expensive.

    This patch performs the full startup and shutdown when a tracer is
    selected. It also adds a way to do a quick start or stop of a tracer.
    The quick version is just a flag that prevents the tracing from
    taking place, but the overhead of the code is still there.

    For example, the startup of a tracer may enable tracepoints, or enable
    the function tracer. The stop and start will just set a flag to
    have the tracer ignore the calls when the tracepoint or function trace
    is called. The overhead of the tracer may still be present when
    the tracer is stopped, but no tracing will occur. Setting the tracer
    to the 'nop' tracer (or any other tracer) will perform the shutdown
    of the tracer which will disable the tracepoint or disable the
    function tracer.

    The tracing_enabled file will simply start or stop tracing.

    This change is all internal. The end result for the user should be the same
    as before. If tracing_enabled is not set, no trace will happen.
    If tracing_enabled is set, then the trace will happen. The tracing_enabled
    variable is static between tracers. Enabling tracing_enabled and
    going to another tracer will keep tracing_enabled enabled. Same
    is true with disabling tracing_enabled.

    This patch will now provide a fast start/stop method to the users
    for enabling or disabling tracing.

    Note: There were two methods to the struct tracer that were never
    used: The methods start and stop. These were to be used as a hook
    to the reading of the trace output, but ended up not being
    necessary. These two methods are now used to enable the start
    and stop of each tracer, in case the tracer needs to do more than
    just not write into the buffer. For example, the irqsoff tracer
    must stop recording max latencies when tracing is stopped.
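    The quick stop described above amounts to a flag check inside a hook
    that stays installed. A sketch with illustrative names: the hook
    still runs (the overhead remains), but records nothing while the
    flag is clear:

```c
static int tracer_enabled = 1; /* the quick start/stop flag   */
static int hook_calls;         /* hook invocations (overhead) */
static int records;            /* events actually recorded    */

/* The tracepoint/function hook is left installed across quick stops;
 * the flag only makes it drop events instead of recording them. */
void trace_event_hook(void)
{
    hook_calls++;
    if (!tracer_enabled)
        return;
    records++;
}

void tracing_start(void) { tracer_enabled = 1; }
void tracing_stop(void)  { tracer_enabled = 0; }
```

    A full shutdown, by contrast, would remove trace_event_hook()
    entirely, eliminating the overhead as well.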

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     

05 Nov, 2008

5 commits

  • Impact: fix boot tracer + sched tracer coupling bug

    Fix a bug that made the sched_switch tracer unable to run
    if set as the current_tracer after the boot tracer.

    Signed-off-by: Frederic Weisbecker
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker
     
  • Impact: cleanup

    This patch applies some corrections suggested by Steven Rostedt.

    Change the type of sched_ref to int: since it is now protected by
    a mutex, we no longer need it to be an atomic variable in the
    sched_switch tracer.
    Also change the name of the register mutex.

    Signed-off-by: Frederic Weisbecker
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker
     
  • Impact: enhance boot trace output with scheduling events

    Use the sched_switch tracer from the boot tracer.

    We can also trace schedule events inside the initcalls.
    Sched tracing is disabled after each initcall has finished and
    then re-enabled before the next one starts.

    Signed-off-by: Frederic Weisbecker
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker
     
  • Impact: cleanup

    When init_sched_switch_trace() is called, it has no reason to start
    the sched tracer if sched_ref is not zero.

    _ If it is non-zero, the tracer is already in use, but we can still
    register it with the tracing engine: there is already a safeguard
    that prevents the tracer probes from being registered twice.

    _ If it is zero, this block will not be used.

    Signed-off-by: Frederic Weisbecker
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker
     
  • Impact: fix race condition in sched_switch tracer

    This patch fixes a race condition in the sched_switch tracer. If
    several tasks (e.g. concurrent initcalls) are playing with
    tracing_start_cmdline_record() and tracing_stop_cmdline_record(), the
    following situation could happen:

    _ Task A and B are using the same tracepoint probe. Task A holds it.
    Task B is sleeping and doesn't hold it.

    _ Task A frees the sched tracer, then sched_ref is decremented to 0.

    _ Task A is preempted and has not yet unregistered its tracepoint
    probe; then B runs.

    _ B increments sched_ref, sees it is 1, and guesses it has to
    register its probe. But the probe has not yet been unregistered
    by task A.

    _ A lot of bad things can happen after that...
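    One way to close the race, sketched in userspace with pthreads
    (names illustrative): update the reference count and the probe
    (un)registration under a single mutex, so no task can observe
    sched_ref at 1 while another task's probe is still registered:

```c
#include <pthread.h>

static pthread_mutex_t sched_register_mutex = PTHREAD_MUTEX_INITIALIZER;
static int sched_ref;        /* number of users of the probes */
static int probe_registered; /* stands in for the real probes */

/* The first user registers the probes; count and registration are
 * updated atomically with respect to each other. */
void tracing_start_cmdline_record(void)
{
    pthread_mutex_lock(&sched_register_mutex);
    if (++sched_ref == 1)
        probe_registered = 1;
    pthread_mutex_unlock(&sched_register_mutex);
}

/* The last user unregisters the probes under the same lock. */
void tracing_stop_cmdline_record(void)
{
    pthread_mutex_lock(&sched_register_mutex);
    if (--sched_ref == 0)
        probe_registered = 0;
    pthread_mutex_unlock(&sched_register_mutex);
}
```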

    Signed-off-by: Frederic Weisbecker
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker
     

14 Oct, 2008

4 commits

    Now that the ring buffer is reentrant, some of the ftrace tracers
    (sched_switch, debugging traces) can also be reentrant.

    Note: Never make the function tracer reentrant, that can cause
    recursion problems all over the kernel. The function tracer
    must disable reentrancy.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
    With the new ring buffer infrastructure in ftrace, I'm trying to make
    ftrace a little more lightweight.

    This patch converts a lot of the local_irq_save/restore into
    preempt_disable/enable. The original preempt count in a lot of cases
    has to be sent in as a parameter so that it can be recorded correctly.
    Some places were recording it incorrectly before anyway.

    This is also laying the ground work to make ftrace a little bit
    more reentrant, and remove all locking. The function tracers must
    still protect from reentrancy.

    Note: All the function tracers must be careful when using preempt_disable.
    They must do the following:

    resched = need_resched();
    preempt_disable_notrace();
    [...]
    if (resched)
            preempt_enable_no_resched_notrace();
    else
            preempt_enable_notrace();

    The reason is that if this function traces schedule() itself, the
    preempt_enable_notrace() will cause a schedule, which will lead
    us into a recursive failure.

    If we needed to reschedule before calling preempt_disable, we
    should have already scheduled. Since we did not, we most likely
    should not, and are probably inside a schedule function.

    If resched was not set, we still need to catch the need_resched
    flag being set while preemption was off, and the if-case at the
    end will catch that for us.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • This patch ports ftrace over to the new ring buffer.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • Porting the trace_mark() used by ftrace to tracepoints. (cleanup)

    Changelog :
    - Change error messages : marker -> tracepoint

    [ mingo@elte.hu: conflict resolutions ]
    Signed-off-by: Mathieu Desnoyers
    Acked-by: 'Peter Zijlstra'
    Signed-off-by: Ingo Molnar

    Mathieu Desnoyers
     

11 Jul, 2008

1 commit

  • We have two markers now that are enabled on sched_switch. One that records
    the context switching and the other that records task wake ups. Currently
    we enable the tracing first and then set the markers. This causes some
    confusing traces:

    # tracer: sched_switch
    #
    # TASK-PID CPU# TIMESTAMP FUNCTION
    # | | | | |
    trace-cmd-3973 [00] 115.834817: 3973:120:R + 3: 0:S
    trace-cmd-3973 [01] 115.834910: 3973:120:R + 6: 0:S
    trace-cmd-3973 [02] 115.834910: 3973:120:R + 9: 0:S
    trace-cmd-3973 [03] 115.834910: 3973:120:R + 12: 0:S
    trace-cmd-3973 [02] 115.834910: 3973:120:R + 9: 0:S
    -0 [02] 115.834910: 0:140:R ==> 3973:120:R

    Here we see that trace-cmd with PID 3973 wakes up task 9 but the next line
    shows the idle task doing a context switch to task 3973.

    Enabling the tracing _after_ the markers are set creates a much saner
    output:

    # tracer: sched_switch
    #
    # TASK-PID CPU# TIMESTAMP FUNCTION
    # | | | | |
    -0 [02] 7922.634225: 0:140:R ==> 4790:120:R
    trace-cmd-4789 [03] 7922.634225: 0:140:R + 4790:120:R

    Signed-off-by: Steven Rostedt
    Cc: Steven Rostedt
    Cc: Peter Zijlstra
    Cc: Andrew Morton
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     

16 Jun, 2008

1 commit


27 May, 2008

2 commits


24 May, 2008

5 commits