30 Mar, 2010

1 commit

  • …it slab.h inclusion from percpu.h

    percpu.h is included by sched.h and module.h and thus ends up being
    included when building most .c files. percpu.h includes slab.h which
    in turn includes gfp.h making everything defined by the two files
    universally available and complicating inclusion dependencies.

    percpu.h -> slab.h dependency is about to be removed. Prepare for
    this change by updating users of gfp and slab facilities to include
    those headers directly instead of assuming availability. As this
    conversion needs to touch a large number of source files, the
    following script is used as the basis of conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

    The script does the following.

    * Scan files for gfp and slab usages and update includes such that
    only the necessary includes are there, i.e. if only gfp is used,
    gfp.h; if slab is used, slab.h.

    * When the script inserts a new include, it looks at the include
    blocks and tries to place the new include so that its order conforms
    to its surroundings. It is put in the include block which contains
    core kernel includes, in the same order that the rest are ordered -
    alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
    doesn't seem to be any matching order (see the example after this
    list).

    * If the script can't find a place to put a new include (mostly
    because the file doesn't have a fitting include block), it prints
    out an error message indicating which .h file needs to be added to
    the file.
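
    As a minimal illustration of the placement rule (hypothetical file;
    the added line is illustrative, not actual script output), a
    kmalloc() user whose core-kernel include block is alphabetically
    ordered would end up with:

    #include <linux/kernel.h>
    #include <linux/module.h>
    #include <linux/percpu.h>
    #include <linux/slab.h>    /* added: this file calls kmalloc()/kfree() */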

    The conversion was done in the following steps.

    1. The initial automatic conversion of all .c files updated slightly
    over 4000 files, deleting around 700 includes and adding ~480 gfp.h
    and ~3000 slab.h inclusions. The script emitted errors for ~400
    files.

    2. Each error was manually checked. Some didn't need the inclusion,
    some needed manual addition, while for others adding it to an
    implementation .h or embedding .c file was more appropriate. This
    step added inclusions to around 150 files.

    3. The script was run again and the output was compared to the edits
    from #2 to make sure no file was left behind.

    4. Several build tests were done and a couple of problems were fixed,
    e.g. lib/decompress_*.c used malloc/free() wrappers around slab
    APIs, requiring slab.h to be added manually.

    5. The script was run on all .h files but without automatically
    editing them, as sprinkling gfp.h and slab.h inclusions around .h
    files could easily lead to inclusion dependency hell. Most gfp.h
    inclusion directives were ignored, as stuff from gfp.h was usually
    widely available and often used in preprocessor macros. Each
    slab.h inclusion directive was examined and added manually as
    necessary.

    6. percpu.h was updated not to include slab.h.

    7. Build tests were done on the following configurations and failures
    were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
    distributed build env didn't work with gcov compiles) and a few
    more options had to be turned off depending on archs to make things
    build (like ipr on powerpc/64 which failed due to missing writeq).

    * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig

    8. percpu.h modifications were reverted so that it could be applied as
    a separate patch and serve as bisection point.

    Given the fact that I had only a couple of failures from the tests
    in step 7, I'm fairly confident about the coverage of this conversion
    patch. If there is a breakage, it's likely to be something in one of
    the arch headers, which should be easily discoverable on most builds
    of the specific arch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

    Tejun Heo
     

14 Mar, 2010

1 commit

  • …nel/git/tip/linux-2.6-tip

    * 'tracing-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    tracing: Do not record user stack trace from NMI context
    tracing: Disable buffer switching when starting or stopping trace
    tracing: Use same local variable when resetting the ring buffer
    function-graph: Init curr_ret_stack with ret_stack
    ring-buffer: Move disabled check into preempt disable section
    function-graph: Add tracing_thresh support to function_graph tracer
    tracing: Update the comm field in the right variable in update_max_tr
    function-graph: Use comment notation for func names of dangling '}'
    function-graph: Fix unused reference to ftrace_set_func()
    tracing: Fix warning in s_next of trace file ops
    tracing: Include irqflags headers from trace clock

    Linus Torvalds
     

06 Mar, 2010

2 commits

  • Add support for tracing_thresh to the function_graph tracer. This
    version of this feature isolates the checks into new entry and
    return functions, to avoid adding more conditional code into the
    main function_graph paths.

    When tracing_thresh is set and the function graph tracer is
    enabled, only functions that took longer than the number of
    microseconds set in tracing_thresh are recorded. To do this
    efficiently, the check is made at function exit, so only the
    function exits are recorded:

    [tracing]# echo 100 > tracing_thresh
    [tracing]# echo function_graph > current_tracer
    [tracing]# cat trace
    # tracer: function_graph
    #
    # CPU  DURATION                  FUNCTION CALLS
    # |     |   |                     |   |   |   |
     1)  ! 119.214 us  |  } /* smp_apic_timer_interrupt */
     1)
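
    A minimal sketch of that exit-side check, assuming the upstream-style
    names trace_graph_thresh_return()/trace_graph_return() and
    tracing_thresh kept in the same units as the timestamps:

    static void trace_graph_thresh_return(struct ftrace_graph_ret *trace)
    {
            /* Drop the event unless the function ran for at least
             * tracing_thresh; entry events are never written. */
            if (tracing_thresh &&
                (trace->rettime - trace->calltime < tracing_thresh))
                    return;

            trace_graph_return(trace);
    }
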
    Signed-off-by: Tim Bird
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Tim Bird
     
    When a '}' does not have a matching function start, the name is
    printed within parentheses. But this makes it easy to confuse an
    ending '}' with a function start. This patch makes the function name
    appear in C comment notation.

    Old view:
    3) 1.281 us | } (might_fault)
    3) 3.620 us | } (filldir)
    3) 5.251 us | } (call_filldir)
    3) | call_filldir() {
    3) | filldir() {

    New view:
    3) 1.281 us | } /* might_fault */
    3) 3.620 us | } /* filldir */
    3) 5.251 us | } /* call_filldir */
    3) | call_filldir() {
    3) | filldir() {

    Requested-by: Ingo Molnar
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

03 Mar, 2010

1 commit

  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu:
    percpu: add __percpu sparse annotations to what's left
    percpu: add __percpu sparse annotations to fs
    percpu: add __percpu sparse annotations to core kernel subsystems
    local_t: Remove leftover local.h
    this_cpu: Remove pageset_notifier
    this_cpu: Page allocator conversion
    percpu, x86: Generic inc / dec percpu instructions
    local_t: Move local.h include to ringbuffer.c and ring_buffer_benchmark.c
    module: Use this_cpu_xx to dynamically allocate counters
    local_t: Remove cpu_local_xx macros
    percpu: refactor the code in pcpu_[de]populate_chunk()
    percpu: remove compile warnings caused by __verify_pcpu_ptr()
    percpu: make accessors check for percpu pointer in sparse
    percpu: add __percpu for sparse.
    percpu: make access macros universal
    percpu: remove per_cpu__ prefix.

    Linus Torvalds
     

27 Feb, 2010

1 commit

    The function graph tracer is currently the most invasive tracer
    in the ftrace family. It can easily overflow the buffer even with
    10 megs per CPU. This means that events can often be lost.

    On start up, or after events are lost, if the function return is
    recorded but the function enter was lost, all we get to see is the
    exiting '}'.

    Here is how a typical trace output starts:

    [tracing] cat trace
    # tracer: function_graph
    #
    # CPU  DURATION                  FUNCTION CALLS
    # |     |   |                     |   |   |   |
     0)  + 91.897 us   |                  }
     0)  ! 567.961 us  |                  }
     0)

    Steven Rostedt
     

25 Feb, 2010

1 commit


29 Jan, 2010

1 commit

  • In the function graph tracer, a calling function is to be traced
    only when it is enabled through the set_graph_function file,
    or when it is nested in an enabled function.

    Current code uses TSK_TRACE_FL_GRAPH to test whether it is nested
    or not. Looking at the code, we can get this:
    (trace->depth > 0) <==> (TSK_TRACE_FL_GRAPH is set)

    trace->depth is more explicit in telling that it is nested.
    So we use trace->depth directly and simplify the code.

    No functionality is changed.
    TSK_TRACE_FL_GRAPH is not removed yet, it is left for future usage.
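
    A sketch of the simplified entry-side test (context abbreviated;
    ftrace_graph_addr() follows the upstream name):

    static int trace_graph_entry(struct ftrace_graph_ent *trace)
    {
            /* Trace if the function is in set_graph_function, or if
             * we are already nested inside an enabled function. */
            if (!(trace->depth || ftrace_graph_addr(trace->func)))
                    return 0;

            /* ... record the entry ... */
            return 1;
    }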

    Signed-off-by: Lai Jiangshan
    Cc: Ingo Molnar
    Cc: Steven Rostedt
    LKML-Reference:
    Signed-off-by: Frederic Weisbecker

    Lai Jiangshan
     

17 Jan, 2010

1 commit

    Each time we save a function entry from the function graph
    tracer, we check if the trace array is set, which is wasteful
    because it is set anyway before we start the tracer. All we need
    is to ensure we have proper read and write orderings: when we set
    the trace array, we just need to guarantee it is visible before
    tracing starts.
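
    A minimal sketch of the publish side (set_graph_array() and
    graph_array follow the upstream naming; readers simply dereference
    the pointer afterwards):

    static struct trace_array *graph_array;

    void set_graph_array(struct trace_array *tr)
    {
            graph_array = tr;

            /* Make graph_array visible before we start tracing. */
            smp_mb();
    }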

    Signed-off-by: Frederic Weisbecker
    Acked-by: Steven Rostedt
    Cc: Lai Jiangshan
    Cc: Paul E. McKenney
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker
     

05 Jan, 2010

1 commit


15 Dec, 2009

1 commit

  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (34 commits)
    m68k: rename global variable vmalloc_end to m68k_vmalloc_end
    percpu: add missing per_cpu_ptr_to_phys() definition for UP
    percpu: Fix kdump failure if booted with percpu_alloc=page
    percpu: make misc percpu symbols unique
    percpu: make percpu symbols in ia64 unique
    percpu: make percpu symbols in powerpc unique
    percpu: make percpu symbols in x86 unique
    percpu: make percpu symbols in xen unique
    percpu: make percpu symbols in cpufreq unique
    percpu: make percpu symbols in oprofile unique
    percpu: make percpu symbols in tracer unique
    percpu: make percpu symbols under kernel/ and mm/ unique
    percpu: remove some sparse warnings
    percpu: make alloc_percpu() handle array types
    vmalloc: fix use of non-existent percpu variable in put_cpu_var()
    this_cpu: Use this_cpu_xx in trace_functions_graph.c
    this_cpu: Use this_cpu_xx for ftrace
    this_cpu: Use this_cpu_xx in nmi handling
    this_cpu: Use this_cpu operations in RCU
    this_cpu: Use this_cpu ops for VM statistics
    ...

    Fix up trivial (famous last words) global per-cpu naming conflicts in
    arch/x86/kvm/svm.c
    mm/slab.c

    Linus Torvalds
     

10 Dec, 2009

1 commit

    There is a case where the graph tracer might get confused and omit
    a single record. This mostly affects trace_pipe, since the trace_seq
    buffer is unlikely to overflow with the trace file.

    As the function_graph tracer goes through the trace entries keeping a
    pointer to the current record:

    current -> func1 ENTRY
    func2 ENTRY
    func2 RETURN
    func1 RETURN

    When a function ENTRY is encountered, the tracer moves the pointer
    to the next entry to check whether the function is a nested or a
    leaf function.

    func1 ENTRY
    current -> func2 ENTRY
    func2 RETURN
    func1 RETURN

    If writing the rest of the function's output fills the trace_seq
    buffer, the trace_pipe read will ignore this entry. The next read
    will then start at the current location, but the first entry (func1)
    will have been discarded.

    This patch keeps a copy of the current entry in the iterator private
    storage and will keep track of when the trace_seq buffer fills. When
    the trace_seq buffer fills, it will reuse the copy of the entry in the
    next iteration.
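
    A sketch of the iterator-private state this introduces (field set
    abbreviated; names follow the upstream struct):

    struct fgraph_data {
            struct ftrace_graph_ent_entry   ent;    /* copy of the entry    */
            int                             failed; /* trace_seq overflowed */
            int                             cpu;    /* cpu the copy is from */
    };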

    [
    This patch has been largely modified by Steven Rostedt in order to
    clean it up and simplify it. The original idea and concept was from
    Jirka and for that, this patch will go under his name to give him
    the credit he deserves. But because this was modified by Steven
    Rostedt, anything wrong with the patch should be blamed on Steven.
    ]

    Signed-off-by: Jiri Olsa
    Cc: Frederic Weisbecker
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Jiri Olsa
     

29 Oct, 2009

1 commit

    Now that the return from alloc_percpu is compatible with the address
    of per-cpu vars, it makes sense to hand around the address of per-cpu
    variables. To make this sane, we remove the per_cpu__ prefix we
    created to stop people from accidentally using these vars directly.

    Now we have sparse, we can use that (next patch).
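
    A small before/after illustration (hypothetical variable; the
    accessor shown is just one of the percpu read helpers):

    DEFINE_PER_CPU(int, foo);

    static int read_foo(void)
    {
            /* Before the patch the underlying symbol was per_cpu__foo,
             * and accessors pasted the prefix back on via per_cpu_var().
             * Now the symbol is plainly foo; accidental direct use is
             * caught by sparse through the __percpu annotation instead. */
            return __this_cpu_read(foo);
    }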

    tj: * Updated to convert stuff which was missed by or added after the
    original patch.

    * Kill per_cpu_var() macro.

    Signed-off-by: Rusty Russell
    Signed-off-by: Tejun Heo
    Reviewed-by: Christoph Lameter

    Rusty Russell
     

13 Oct, 2009

1 commit


18 Sep, 2009

1 commit


12 Sep, 2009

2 commits

  • Both trace_output.c and trace_function_graph.c do basically the same
    thing to handle the printing of the latency-format. This patch moves
    the code into one function that both can use.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • This patch adds the lock depth of the big kernel lock to the generic
    entry header. This way we can see the depth of the lock and help
    in removing the BKL.

    Example:

    #                   _------=> CPU#
    #                  / _-----=> irqs-off
    #                 | / _----=> need-resched
    #                 || / _---=> hardirq/softirq
    #                 ||| / _--=> preempt-depth
    #                 |||| /_--=> lock-depth
    #                 |||||/     delay
    #  cmd     pid    |||||| time  |   caller
    #     \   /       ||||||   \   |   /
      <idle>-0        2.N..3 5902255250us+: lock_acquire: read rcu_read_lock
      <idle>-0        2.N..3 5902255253us+: lock_release: rcu_read_lock
      <idle>-0        2dN..3 5902255257us+: lock_acquire: xtime_lock
      <idle>-0        2dN..4 5902255259us : lock_acquire: clocksource_lock
      <idle>-0        2dN..4 5902255261us+: lock_release: clocksource_lock

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

11 Sep, 2009

1 commit

  • While debugging something with the function_graph tracer, I found the
    need to see the preempt count of the traces. Unfortunately, since
    the function graph tracer has its own output formatting, it does not
    honor the latency-format option.

    This patch makes the function_graph tracer honor the latency-format
    option, but still keeps control of the output. But now we have the
    same details that the latency-format supplies.

    # tracer: function_graph
    #
    #       _-----=> irqs-off
    #      / _----=> need-resched
    #     | / _---=> hardirq/softirq
    #     || / _--=> preempt-depth
    #     ||| /
    #     ||||
    # CPU |||| DURATION                  FUNCTION CALLS
    # |   ||||  |   |                     |   |   |   |
     3)  d..1  1.333 us   |        idle_cpu();
     3)  d.h1             |        tick_check_idle() {
     3)  d.h1  0.550 us   |          tick_check_oneshot_broadcast();
     3)  d.h1             |          tick_nohz_stop_idle() {
     3)  d.h1             |            ktime_get() {
     3)  d.h1             |              ktime_get_ts() {

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

05 Sep, 2009

1 commit

  • The latency tracers (irqsoff and wakeup) can swap trace buffers
    on the fly. If an event is happening and has reserved data on one of
    the buffers, and the latency tracer swaps the global buffer with the
    max buffer, the result is that the event may commit the data to the
    wrong buffer.

    This patch changes the trace recording API to receive the buffer
    that was used to reserve a commit. That buffer can then be passed
    in to the commit.
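
    A sketch of the reserve/commit pairing after the change (helper
    names follow the upstream style of the time; the surrounding
    function is illustrative):

    static void record_event(int type, unsigned long len,
                             unsigned long flags, int pc)
    {
            struct ring_buffer *buffer;
            struct ring_buffer_event *event;

            /* the reserve hands back the buffer it actually used ... */
            event = trace_current_buffer_lock_reserve(&buffer, type, len,
                                                      flags, pc);
            if (!event)
                    return;

            /* ... so the commit is guaranteed to hit that same buffer,
             * even if the latency tracer swapped buffers in between */
            trace_current_buffer_unlock_commit(buffer, event, flags, pc);
    }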

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

11 Aug, 2009

1 commit


06 Aug, 2009

2 commits

    The function graph event helpers which insert the function entry and
    return events into the ring buffer currently reside in trace.c.
    But this file is quite overloaded and the right place for these
    helpers is in the function graph tracer file.

    So move them to trace_functions_graph.c.

    Signed-off-by: Frederic Weisbecker
    Cc: Steven Rostedt

    Frederic Weisbecker
     
    print_graph_cpu() is a little over-designed.

    And "log10_all" may be wrong when there are holes in cpu_online_mask:
    the max online cpu id can be greater than
    cpumask_weight(cpu_online_mask).

    So change it to use a static column length for the cpu, matching the
    number of decimal digits in nr_cpu_ids.
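
    A sketch of that width computation (the init function name is
    illustrative; snprintf(NULL, 0, ...) just measures the length):

    static int max_bytes_for_cpu;   /* decimal width of the cpu column */

    static void init_graph_cpu_width(void)
    {
            max_bytes_for_cpu = snprintf(NULL, 0, "%d", nr_cpu_ids - 1);
    }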

    Signed-off-by: Lai Jiangshan
    Cc: Steven Rostedt
    LKML-Reference:
    Signed-off-by: Frederic Weisbecker

    Lai Jiangshan
     

29 Jul, 2009

1 commit

  • When print_graph_entry() computes a function call entry event, it needs
    to also check the next entry to guess if it matches the return event of
    the current function entry.
    In order to look at this next event, it needs to consume the current
    entry before going ahead in the ring buffer.

    However, if the current event that gets consumed is the last one in the
    ring buffer head page, the ring_buffer may reuse the page for writers.
    The consumed entry will then become invalid because of possible
    racy overwriting.

    We must then handle this entry by making a copy of it.

    The fix also applies to 2.6.30.

    Signed-off-by: Lai Jiangshan
    Cc: Steven Rostedt
    Cc: stable@kernel.org
    LKML-Reference:
    Signed-off-by: Frederic Weisbecker

    Lai Jiangshan
     

10 Jul, 2009

1 commit


19 Jun, 2009

1 commit

  • In case gcc does something funny with the stack frames, or the return
    from function code, we would like to detect that.

    An arch may implement passing of a variable that is unique to the
    function and can be saved on entering a function and can be tested
    when exiting the function. Usually the frame pointer can be used for
    this purpose.

    This patch also implements this for x86, where it passes in the stack
    frame of the parent function and tests that frame on exit.
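
    A sketch of the exit-side test, assuming the value saved at entry
    lives in ret_stack[].fp (the upstream field name; error handling
    abbreviated):

    if (unlikely(current->ret_stack[index].fp != frame_pointer)) {
            ftrace_graph_stop();
            WARN(1, "Bad frame pointer: expected %lx, received %lx\n",
                 current->ret_stack[index].fp, frame_pointer);
            /* ... bail out of the return hook ... */
    }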

    There was a case in x86_32 with optimize for size (-Os) where, for a
    few functions, gcc would align the stack frame and place a copy of the
    return address into it. The function graph tracer modified the copy
    and not the actual return address. On return from the function, it did
    not go to the tracer hook, but returned to the parent. This broke the
    function graph tracer, because the return of the parent (where gcc did
    not do this funky manipulation) returned to the location that the
    child function was supposed to. This caused strange kernel crashes.

    This test detected the problem and pointed out where the issue was.

    This modifies the parameters of one of the functions that the arch
    specific code calls, so it includes changes to arch code to accommodate
    the new prototype.

    Note, I notice that the parisc arch implements its own
    push_return_trace. This is now a generic function and
    ftrace_push_return_trace should be used instead. This patch does not
    touch that code.

    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: Heiko Carstens
    Cc: Martin Schwidefsky
    Cc: Frederic Weisbecker
    Cc: Helge Deller
    Cc: Kyle McMartin
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

11 Jun, 2009

1 commit

  • …rnel/git/tip/linux-2.6-tip

    * 'tracing-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    function-graph: always initialize task ret_stack
    function-graph: move initialization of new tasks up in fork
    function-graph: add memory barriers for accessing task's ret_stack
    function-graph: enable the stack after initialization of other variables
    function-graph: only allocate init tasks if it was not already done

    Manually fix trivial conflict in kernel/trace/ftrace.c

    Linus Torvalds
     

03 Jun, 2009

1 commit

  • The function graph tracer checks if the task_struct has ret_stack defined
    to know if it is OK or not to use it. The initialization is done for
    all tasks by one process, but the idle tasks use the same initialization
    used by new tasks.

    If an interrupt happens on an idle task that has just had its
    ret_stack created, but before the rest of the initialization has
    taken place, then we can corrupt the return addresses of the
    functions.

    This patch moves the setting of the task_struct's ret_stack to after
    the other variables have been initialized.
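
    A minimal sketch of the fixed ordering (names follow the upstream
    helper; the barrier documents the intent):

    static void graph_init_task(struct task_struct *t,
                                struct ftrace_ret_stack *ret_stack)
    {
            atomic_set(&t->tracing_graph_pause, 0);
            atomic_set(&t->trace_overrun, 0);
            t->ftrace_timestamp = 0;

            /* make everything visible before ret_stack marks it usable */
            smp_wmb();
            t->ret_stack = ret_stack;
    }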

    [ Impact: prevent kernel panic on idle task when starting function graph ]

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

25 Mar, 2009

2 commits

    Graph time is the time that a function is executing another function.
    Thus if function A calls B, and graph-time is set, then the time for
    A includes the time of B. This is the default behavior. But if
    graph-time is off, then the time spent executing B is subtracted
    from A.
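
    A sketch of the subtraction on the profiler's return path when
    graph-time is off (the graph_time flag and the subtime accumulator
    on each ret_stack entry follow the upstream scheme; context
    abbreviated):

    calltime = trace->rettime - trace->calltime;

    if (!graph_time) {
            int index = trace->depth;

            /* fold this call's time into the parent's subtime ... */
            if (index)
                    current->ret_stack[index - 1].subtime += calltime;

            /* ... and strip our children's time from our own */
            if (current->ret_stack[index].subtime < calltime)
                    calltime -= current->ret_stack[index].subtime;
            else
                    calltime = 0;
    }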

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • If the function graph trace is enabled, the function profiler will
    use it to take the timing of the functions.

    cat /debug/tracing/trace_stat/functions

      Function               Hit    Time
      --------               ---    ----
      mwait_idle             127    183028.4 us
      schedule                26    151997.7 us
      __schedule              31    151975.1 us
      sys_wait4                2    74080.53 us
      do_wait                  2    74077.80 us
      sys_newlstat           138    39929.16 us
      do_path_lookup         179    39845.79 us
      vfs_lstat_fd           138    39761.97 us
      user_path_at           153    39469.58 us
      path_walk              179    39435.76 us
      __link_path_walk       189    39143.73 us
      [...]

    Note the times are skewed due to the function graph tracer not taking
    into account schedules.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

24 Mar, 2009

1 commit


20 Mar, 2009

2 commits

  • With the added TRACE_EVENT macro, the events no longer appear in
    the function graph tracer. This was because the function graph
    did not know how to display the entries. The graph tracer was
    only aware of its own entries and the printk entries.

    By using the event callback feature, the graph tracer can now display
    the events.

    # echo irq > /debug/tracing/set_event

    Which can show:

     0)               |    handle_IRQ_event() {
     0)               |    /* irq_handler_entry: irq=48 handler=eth0 */
     0)               |    e1000_intr() {
     0)   0.926 us    |      __napi_schedule();
     0)   3.888 us    |    }
     0)               |    /* irq_handler_exit: irq=48 return=handled */
     0)   0.655 us    |    runqueue_is_locked();
     0)               |    __wake_up() {
     0)   0.831 us    |      _spin_lock_irqsave();

    The irq entry and exit events show up as comments.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
    Currently, the function graph tracer depends on the trace_printk
    to record the depth. All the information is already there in the
    trace to calculate function depth, with the exception of having the
    printk be the first item. But as soon as an entry or exit is reached,
    then we know the depth.

    This patch changes the iter->private data from recording a per cpu
    last_pid, to a structure that holds both the last_pid and the current
    depth. This data is used to determine the function depth for the
    printks.
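
    A sketch of the new per-cpu iterator state (names follow the
    upstream struct at the time):

    struct fgraph_data {
            pid_t   last_pid;       /* detect task switches per cpu   */
            int     depth;          /* current depth, for the printks */
    };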

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

19 Mar, 2009

1 commit

  • Impact: clean up

    The prologues of the function graph entry, return and comment
    output all start out pretty much the same. Each of these duplicates
    code and does so slightly differently.

    This patch consolidates the printing of the pid, absolute time,
    cpu and proc (and for entry, the interrupt).

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

17 Mar, 2009

1 commit

  • Impact: fix to one cause of incorrect comm outputs in trace

    The spinlock only protected the creation of a comm pid pair.
    But it was possible that a reader could look up a pid, and get the
    wrong comm because it had no locking.

    This also required changing trace_find_cmdline to copy the comm cache
    and not just send back a pointer to it.
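
    A hedged sketch of the reader side after the change (lock and array
    names follow the upstream code; the special cases for pid 0 and
    out-of-range pids are omitted):

    void trace_find_cmdline(int pid, char comm[])
    {
            unsigned map;

            /* take the same lock writers use, and copy the comm out so
             * the caller never holds a pointer into the shared cache */
            __raw_spin_lock(&trace_cmdline_lock);
            map = map_pid_to_cmdline[pid];
            if (map != NO_CMDLINE_MAP)
                    strcpy(comm, saved_cmdlines[map]);
            else
                    strcpy(comm, "<...>");
            __raw_spin_unlock(&trace_cmdline_lock);
    }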

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

13 Mar, 2009

1 commit

  • Impact: fix callsites with dynamic format strings

    Since its new binary implementation, trace_printk() internally uses
    static containers for the format strings at each callsite. But the
    value is assigned once at build time, which means that it can't take
    dynamic formats.

    So this patch unearths the raw trace_printk implementation for the
    callers that will need trace_printk to be able to carry these dynamic
    format strings. The trace_printk() macro will use the appropriate
    implementation for each callsite. Most of the time, however, the
    binary implementation will still be used.

    The other impact of this patch is that mmiotrace_printk() will use
    the old implementation, because it calls the low level trace_vprintk
    and we can't guess here whether the format passed to it is dynamic
    or not.
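
    A sketch of how the macro can pick an implementation per callsite
    (keying on __builtin_constant_p() mirrors the upstream approach;
    the body is abbreviated):

    #define trace_printk(fmt, args...)                              \
    do {                                                            \
            if (__builtin_constant_p(fmt))                          \
                    /* binary fast path: static format container */ \
                    __trace_bprintk(_THIS_IP_, fmt, ##args);        \
            else                                                    \
                    /* raw path: format resolved at runtime */      \
                    __trace_printk(_THIS_IP_, fmt, ##args);         \
    } while (0)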

    Some parts of this patch have been written by Steven Rostedt (most
    notably the part that chooses the appropriate implementation for
    each callsite).

    Signed-off-by: Frederic Weisbecker
    Signed-off-by: Steven Rostedt

    Frederic Weisbecker
     

11 Mar, 2009

2 commits


10 Mar, 2009

1 commit


07 Mar, 2009

1 commit

  • Impact: faster and lighter tracing

    Now that we have trace_bprintk(), which is faster and consumes less
    memory than trace_printk() and has the same purpose, we can now drop
    the old implementation in favour of the binary one from
    trace_bprintk(), which means we move all the implementation of
    trace_bprintk() to trace_printk(), so the API doesn't change except
    that we must now use trace_seq_bprintk() to print the TRACE_PRINT
    entries.

    Some changes result from this:

    - Previously, trace_bprintk depended on a single tracer and couldn't
    work without it. This tracer has been dropped and the whole
    implementation of trace_printk() (like the module formats management)
    is now integrated in the tracing core (comes with CONFIG_TRACING),
    though we keep the file trace_printk.c (previously trace_bprintk.c)
    where we can find the module management. Thus we don't overflow
    trace.c.

    - Changes some parts to use trace_seq_bprintk() to print TRACE_PRINT
    entries.

    - Changes the trace_printk/trace_vprintk macros a bit to support
    non-builtin format constants, and fixes 'const' qualifier warnings.
    But this is all transparent for developers.

    - etc...

    V2:

    - Rebase against latest changes
    - Fix misspelling in the changelog

    V3:

    - Rebase against latest changes (moving trace_printk() to kernel.h)

    Signed-off-by: Frederic Weisbecker
    Acked-by: Steven Rostedt
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker
     

05 Mar, 2009

1 commit

    Impact: decrease hang risks with the graph tracer on slow systems

    Since the function graph tracer can spend too much time on timer
    interrupts, it's better now to use the more lightweight local
    clock. In any case, the function graph traces are more reliable on
    a per-cpu basis.
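
    The change at the timestamping site amounts to (sketch;
    trace_clock_local() is the upstream per-cpu clock):

    calltime = trace_clock_local();   /* was: calltime = trace_clock() */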

    Signed-off-by: Frederic Weisbecker
    Cc: Steven Rostedt
    Cc: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker