16 Dec, 2010

1 commit


01 Dec, 2010

1 commit

  • The file_ops struct for the "trace" special file defined llseek as seq_lseek().
    However, if the file was opened for writing only, seq_open() was not called,
    and the seek would dereference a null pointer, file->private_data.

    This patch introduces a new wrapper for seq_lseek() which first checks
    whether the file descriptor was opened for reading. If not, it does nothing.
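
    A minimal sketch of such a wrapper, assuming the handler name used
    here is illustrative rather than the one in the actual patch:

    static loff_t tracing_seek(struct file *file, loff_t offset, int whence)
    {
            /*
             * seq_lseek() dereferences the seq_file in file->private_data,
             * which seq_open() only sets up on a read-capable descriptor.
             */
            if (file->f_mode & FMODE_READ)
                    return seq_lseek(file, offset, whence);

            /* Write-only open: nothing to seek, do nothing. */
            return 0;
    }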

    Cc:
    Signed-off-by: Slava Pestov
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Slava Pestov
     

27 Nov, 2010

1 commit

  • …/git/tip/linux-2.6-tip

    * 'perf-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    perf symbols: Remove incorrect open-coded container_of()
    perf record: Handle restrictive permissions in /proc/{kallsyms,modules}
    x86/kprobes: Prevent kprobes to probe on save_args()
    irq_work: Drop cmpxchg() result
    perf: Fix owner-list vs exit
    x86, hw_nmi: Move backtrace_mask declaration under ARCH_HAS_NMI_WATCHDOG
    tracing: Fix recursive user stack trace
    perf,hw_breakpoint: Initialize hardware api earlier
    x86: Ignore trap bits on single step exceptions
    tracing: Force arch_local_irq_* notrace for paravirt
    tracing: Fix module use of trace_bprintk()

    Linus Torvalds
     

18 Nov, 2010

1 commit


13 Nov, 2010

1 commit

  • The user stack trace can fault when examining the trace, which
    calls the do_page_fault handler, which traces again, which takes
    the user stack trace, which faults and calls do_page_fault
    again ...

    This is a recursive bug; we need a recursion detector here.

    [ Resubmitted by Jiri Olsa ]

    [ Eric Dumazet recommended using __this_cpu_* instead of __get_cpu_* ]
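
    A hedged sketch of such a per-CPU recursion detector, using the
    __this_cpu_* accessors as suggested (variable and label names are
    illustrative):

    static DEFINE_PER_CPU(int, user_stack_count);

            preempt_disable();
            if (__this_cpu_read(user_stack_count))
                    goto out;       /* already inside a user stack trace */

            __this_cpu_inc(user_stack_count);
            /* ... record the user stack trace, which may fault ... */
            __this_cpu_dec(user_stack_count);
    out:
            preempt_enable();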

    Cc: Eric Dumazet
    Signed-off-by: Jiri Olsa
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

21 Oct, 2010

1 commit


18 Oct, 2010

1 commit


16 Aug, 2010

1 commit


14 Aug, 2010

1 commit

  • When userspace code writes a string that is not newline-terminated to the
    trace_marker file, the write handler appends a newline and returns the
    number of bytes written to the trace buffer, so
    write(fd, "abc", 3) will return 4

    That's unexpected and unfortunately it confuses glibc's fprintf function.

    Example:
    #include <stdio.h>

    int main(void)
    {
            fprintf(stderr, "abc");
            return 0;
    }

    $ gcc test.c -o test
    $ echo mmiotrace > /sys/kernel/debug/tracing/current_tracer
    $ ./test 2>/sys/kernel/debug/tracing/trace_marker

    results in an infinite loop:
    write(fd, "abc", 3) = 4
    write(fd, "", 1) = 0
    write(fd, "", 1) = 0
    write(fd, "", 1) = 0
    write(fd, "", 1) = 0
    write(fd, "", 1) = 0
    write(fd, "", 1) = 0
    write(fd, "", 1) = 0
    (...)

    ...and a kernel trace buffer full of empty markers.

    Fix it by sanitizing the write return value.
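
    A minimal sketch of that sanitization, assuming the handler tracks
    the bytes it copied in a local 'written' variable (names
    illustrative):

            /*
             * Never report more bytes than the caller passed in, even
             * though a newline was appended to the trace buffer.
             */
            if (written > cnt)
                    written = cnt;
            return written;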

    Signed-off-by: Marcin Slusarz
    LKML-Reference:
    Cc: Frederic Weisbecker
    Cc: Ingo Molnar
    Signed-off-by: Steven Rostedt

    Marcin Slusarz
     

08 Aug, 2010

1 commit


07 Aug, 2010

1 commit

  • …git/tip/linux-2.6-tip

    * 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (162 commits)
    tracing/kprobes: unregister_trace_probe needs to be called under mutex
    perf: expose event__process function
    perf events: Fix mmap offset determination
    perf, powerpc: fsl_emb: Restore setting perf_sample_data.period
    perf, powerpc: Convert the FSL driver to use local64_t
    perf tools: Don't keep unreferenced maps when unmaps are detected
    perf session: Invalidate last_match when removing threads from rb_tree
    perf session: Free the ref_reloc_sym memory at the right place
    x86,mmiotrace: Add support for tracing STOS instruction
    perf, sched migration: Librarize task states and event headers helpers
    perf, sched migration: Librarize the GUI class
    perf, sched migration: Make the GUI class client agnostic
    perf, sched migration: Make it vertically scrollable
    perf, sched migration: Parameterize cpu height and spacing
    perf, sched migration: Fix key bindings
    perf, sched migration: Ignore unhandled task states
    perf, sched migration: Handle ignored migrate out events
    perf: New migration tool overview
    tracing: Drop cpparg() macro
    perf: Use tracepoint_synchronize_unregister() to flush any pending tracepoint call
    ...

    Fix up trivial conflicts in Makefile and drivers/cpufreq/cpufreq.c

    Linus Torvalds
     

05 Aug, 2010

1 commit


23 Jul, 2010

2 commits


21 Jul, 2010

2 commits

  • Documentation/trace/ftrace.txt says

    buffer_size_kb:

    This sets or displays the number of kilobytes each CPU
    buffer can hold. The tracer buffers are the same size
    for each CPU. The displayed number is the size of the
    CPU buffer and not total size of all buffers. The
    trace buffers are allocated in pages (blocks of memory
    that the kernel uses for allocation, usually 4 KB in size).
    If the last page allocated has room for more bytes
    than requested, the rest of the page will be used,
    making the actual allocation bigger than requested.
    ( Note, the size may not be a multiple of the page size
    due to buffer management overhead. )

    This can only be updated when the current_tracer
    is set to "nop".

    But this is incorrect: currently, total memory consumption is
    'buffer_size_kb x CPUs x 2'.

    Why the factor of two? Because ftrace implicitly allocates an equally
    sized buffer for the max latency trace as well.

    That produces an unhappy result when an admin wants a large buffer
    for full logging and detailed analysis. For example, on a machine
    with 24 CPUs, writing 200MB to buffer_size_kb makes the system
    consume ~10GB of memory (200MB x 24 x 2). Wasting ~5GB of memory
    this way is usually unacceptable.

    Fortunately, almost all users don't use the max latency feature,
    and the max latency buffer can be disabled easily.

    This patch shrinks the buffer size of the max latency buffer when
    it is unnecessary.
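
    A hedged sketch of the idea, assuming a per-tracer flag marking
    whether the max-latency snapshot is actually used (the flag name and
    the one-byte resize are illustrative):

            /*
             * If the current tracer never takes a max-latency snapshot,
             * keep the shadow buffer at a token size instead of
             * mirroring buffer_size_kb on every CPU.
             */
            if (!current_trace->use_max_tr)
                    ring_buffer_resize(max_tr.buffer, 1);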

    Signed-off-by: KOSAKI Motohiro
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    KOSAKI Motohiro
     
  • We found that even enabling a single trace event that will rarely be
    triggered can add significant overhead to context switches.

    (lmbench context switch test; first row with the event disabled,
    second row with it enabled)
    -------------------------------------------------
    2p/0K  2p/16K 2p/64K 8p/16K 8p/64K 16p/16K 16p/64K
    ctxsw  ctxsw  ctxsw  ctxsw  ctxsw  ctxsw   ctxsw
    ------ ------ ------ ------ ------ ------- -------
    2.19   2.3    2.21   2.56   2.13   2.54    2.07
    2.39   2.51   2.35   2.75   2.27   2.81    2.24

    The overhead is 6% ~ 11%.

    This is because when a trace event is enabled, three tracepoints
    (sched_switch, sched_wakeup, sched_wakeup_new) are activated to map
    pids to command names.

    We'd like to avoid this overhead, so add a trace option
    '(no)record-cmd' that allows disabling cmdline recording, as shown
    below.
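
    Illustrative usage, following the existing trace_options convention:

    # stop recording pid->comm mappings on the sched tracepoints
    echo norecord-cmd > /sys/kernel/debug/tracing/trace_options
    # restore the default behaviour
    echo record-cmd > /sys/kernel/debug/tracing/trace_options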

    Signed-off-by: Li Zefan
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Li Zefan
     

20 Jul, 2010

3 commits

  • The default for llseek will change to no_llseek,
    so the tracing debugfs files need to add explicit
    .llseek assignments. Since we're dealing with regular
    files from a VFS perspective, use generic_file_llseek.
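
    A minimal sketch of such an explicit assignment (the open/release
    handlers are illustrative placeholders, not the actual tracing ones):

    static const struct file_operations example_fops = {
            .open           = example_open,
            .read           = seq_read,
            /* explicit, ahead of the no_llseek default change */
            .llseek         = generic_file_llseek,
            .release        = example_release,
    };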

    Signed-off-by: Arnd Bergmann
    Cc: Steven Rostedt
    Cc: Ingo Molnar
    Cc: John Kacur
    Cc: Li Zefan
    LKML-Reference:
    Signed-off-by: Frederic Weisbecker

    Arnd Bergmann
     
  • The special trace type was only used by sysprof. Let's remove it now
    that the sysprof ftrace plugin has been dropped.

    Signed-off-by: Frederic Weisbecker
    Acked-by: Soeren Sandmann
    Cc: Peter Zijlstra
    Cc: Ingo Molnar
    Cc: Steven Rostedt
    Cc: Li Zefan

    Frederic Weisbecker
     
  • The sysprof ftrace plugin doesn't seem to be seriously used
    anywhere. There is a branch in the sysprof tree that makes
    an interface to it, but the real sysprof tool uses either its
    own module or perf events.

    Drop the sysprof ftrace plugin then, as it's mostly useless.

    Signed-off-by: Frederic Weisbecker
    Acked-by: Soeren Sandmann
    Cc: Peter Zijlstra
    Cc: Ingo Molnar
    Cc: Steven Rostedt
    Cc: Li Zefan

    Frederic Weisbecker
     

09 Jul, 2010

1 commit

  • I have shown by code review that no driver takes
    the BKL at init time any more, so whatever the
    init code was locking against is no longer there
    and it is now safe to remove the BKL there.

    Signed-off-by: Arnd Bergmann
    Acked-by: Steven Rostedt
    Signed-off-by: Frederic Weisbecker

    Arnd Bergmann
     

10 Jun, 2010

1 commit


09 Jun, 2010

2 commits


04 Jun, 2010

1 commit

  • The ftrace_preempt_disable/enable functions were added to address a
    recursive race caused by the function tracer. The function tracer
    traces all functions, which makes it easily susceptible to recursion.
    One area was preempt_enable(). This would call the scheduler and
    the scheduler would call the function tracer and loop.
    (Or so it was thought.)

    The ftrace_preempt_disable/enable pair was made to protect against
    recursion inside the scheduler by storing the NEED_RESCHED flag. If
    it was set before ftrace_preempt_disable(), schedule would not be
    called on ftrace_preempt_enable(), on the assumption that if the flag
    was set before then, the task would have already scheduled unless it
    was already in the scheduler.

    This worked fine except in the case of SMP, where another task would set
    the NEED_RESCHED flag for a task on another CPU, and then kick off an
    IPI to trigger it. This could cause the NEED_RESCHED flag to be saved at
    ftrace_preempt_disable() but the IPI to arrive in the preempt-disabled
    section. The ftrace_preempt_enable() would then not call the scheduler
    because the flag was already set before entering the section.

    This bug would cause a missed preemption check and thus higher
    latencies.

    Investigating further, I found that the recursion caused by the function
    tracer was not due to schedule(), but due to preempt_schedule(). Now
    that preempt_schedule() is completely annotated with notrace, the
    recursion is no longer an issue.
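
    For reference, the removed helpers worked roughly like this
    (a simplified sketch of the pattern described above):

    static __always_inline int ftrace_preempt_disable(void)
    {
            int resched = need_resched();   /* remember NEED_RESCHED */

            preempt_disable_notrace();
            return resched;
    }

    static __always_inline void ftrace_preempt_enable(int resched)
    {
            if (resched)
                    /* flag was set before: assume scheduling is handled */
                    preempt_enable_no_resched_notrace();
            else
                    preempt_enable_notrace();
    }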

    Reported-by: Thomas Gleixner
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

31 May, 2010

1 commit


28 May, 2010

1 commit

  • …git/tip/linux-2.6-tip

    * 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (61 commits)
    tracing: Add __used annotation to event variable
    perf, trace: Fix !x86 build bug
    perf report: Support multiple events on the TUI
    perf annotate: Fix up usage of the build id cache
    x86/mmiotrace: Remove redundant instruction prefix checks
    perf annotate: Add TUI interface
    perf tui: Remove annotate from popup menu after failure
    perf report: Don't start the TUI if -D is used
    perf: Fix getline undeclared
    perf: Optimize perf_tp_event_match()
    perf: Remove more code from the fastpath
    perf: Optimize the !vmalloc backed buffer
    perf: Optimize perf_output_copy()
    perf: Fix wakeup storm for RO mmap()s
    perf-record: Share per-cpu buffers
    perf-record: Remove -M
    perf: Ensure that IOC_OUTPUT isn't used to create multi-writer buffers
    perf, trace: Optimize tracepoints by using per-tracepoint-per-cpu hlist to track events
    perf, trace: Optimize tracepoints by removing IRQ-disable from perf/tracepoint interaction
    perf tui: Allow disabling the TUI on a per command basis in ~/.perfconfig
    ...

    Linus Torvalds
     

25 May, 2010

1 commit

  • Currently the trace splice code zeros out the excess bytes in the page before
    sending it off to userspace.

    This is to make sure userspace is not getting anything it should not
    when reading the pages, because the excess data was never initialized
    to zero before writing (for performance reasons).

    But the splice code has no business doing this work; it should be
    done by the ring buffer. With the latest changes for recording lost
    events, the splice code gets it wrong anyway.

    Move the zeroing out of excess bytes into the ring buffer code.
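
    A hedged sketch of where the zeroing lands on the ring-buffer side
    (the variable names are illustrative):

            /* zero only the uninitialized tail beyond the committed data */
            memset(&bpage->data[commit], 0, BUF_PAGE_SIZE - commit);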

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

22 May, 2010

2 commits


15 May, 2010

1 commit

  • Multiple events may use the same method to print their data.
    Instead of having all events hold a pointer to their print functions,
    the trace_event structure now points to a trace_event_functions structure
    that holds the ways to print out the event.

    The event itself is now passed to the print function to let the print
    function know what kind of event it should print.

    This opens the door to consolidating the way several events print
    their output.
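
    A hedged sketch of the resulting indirection (field layout
    illustrative; trace_print_func stands for the print callback type):

    struct trace_event_functions {
            trace_print_func        trace;
            trace_print_func        raw;
            trace_print_func        hex;
            trace_print_func        binary;
    };

    struct trace_event {
            struct hlist_node               node;
            int                             type;
            struct trace_event_functions    *funcs; /* shared print methods */
    };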

       text    data    bss     dec    hex filename
    4913961 1088356 861512 6863829 68bbd5 vmlinux.orig
    4900382 1048964 861512 6810858 67ecea vmlinux.init
    4900446 1049028 861512 6810986 67ed6a vmlinux.preprint

    This change slightly increases the size but is needed for the next change.

    v3: Fix the branch tracer events to handle this change.

    v2: Fix the new function graph tracer event calls to handle this change.

    Acked-by: Mathieu Desnoyers
    Acked-by: Masami Hiramatsu
    Acked-by: Frederic Weisbecker
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

28 Apr, 2010

2 commits

  • When performing a non-consuming read, a synchronize_sched() is
    performed once for every cpu which is actively tracing.

    This is very expensive, and can make it take several seconds to open
    up the 'trace' file with lots of cpus.

    Only one synchronize_sched() call is actually necessary. What is
    desired is for all cpus to see the disabling state change. So we
    transform the existing sequence:

    for_each_cpu() {
            ring_buffer_read_start();
    }

    where each ring_buffer_read_start() call performs a synchronize_sched(),
    into the following:

    for_each_cpu() {
            ring_buffer_read_prepare();
    }
    ring_buffer_read_prepare_sync();
    for_each_cpu() {
            ring_buffer_read_start();
    }

    wherein only the single ring_buffer_read_prepare_sync() call needs to
    do the synchronize_sched().

    The first phase, via ring_buffer_read_prepare(), allocates the 'iter'
    memory and increments ->record_disabled.

    In the second phase, ring_buffer_read_prepare_sync() makes sure this
    ->record_disabled state is visible fully to all cpus.

    And in the final third phase, the ring_buffer_read_start() calls reset
    the 'iter' objects allocated in the first phase since we now know that
    none of the cpus are adding trace entries any more.

    This makes opening the 'trace' file nearly instantaneous on a
    sparc64 Niagara2 box with 128 cpus tracing.

    Signed-off-by: David S. Miller
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    David Miller
     
  • Add function graph output to irqsoff tracer.

    The graph output is enabled by setting new 'display-graph' trace option.
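
    Illustrative usage:

    echo irqsoff > /sys/kernel/debug/tracing/current_tracer
    echo 1 > /sys/kernel/debug/tracing/options/display-graph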

    Signed-off-by: Jiri Olsa
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Jiri Olsa
     

22 Apr, 2010

1 commit

  • The ftrace_dump_on_oops kernel parameter, sysctl and sysrq let one
    dump every cpu's buffer when an oops or panic happens.

    It's nice when you have few cpus, but it may take ages if you have
    many, plus you miss the real origin of the problem in all the cpu
    traces.

    Sometimes, all you need is to dump the buffer of the cpu that
    triggered the oops; most of the time that is our main interest.

    This patch modifies ftrace_dump_on_oops to handle this choice.

    The ftrace_dump_on_oops kernel parameter, when it comes alone, has
    the same behaviour as before. But ftrace_dump_on_oops=orig_cpu
    will only dump the buffer of the cpu that oops'ed.

    Similarly, sysctl kernel.ftrace_dump_on_oops=1 and
    echo 1 > /proc/sys/kernel/ftrace_dump_on_oops keep their previous
    behaviour. But setting 2 jumps into cpu-origin dump mode.
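
    Illustrative usage of the two forms:

    # boot command line: dump only the oops'ing cpu's buffer
    ftrace_dump_on_oops=orig_cpu

    # at runtime: 1 = dump all cpu buffers, 2 = origin cpu only
    echo 2 > /proc/sys/kernel/ftrace_dump_on_oops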

    v2: Fix double setup
    v3: Fix spelling issues reported by Randy Dunlap
    v4: Also update __ftrace_dump in the selftests

    Signed-off-by: Frederic Weisbecker
    Acked-by: David S. Miller
    Acked-by: Steven Rostedt
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: Li Zefan
    Cc: Lai Jiangshan

    Frederic Weisbecker
     

14 Apr, 2010

1 commit


08 Apr, 2010

1 commit

  • Conflicts:
    include/linux/module.h
    kernel/module.c

    Semantic conflict:
    include/trace/events/module.h

    Merge reason: Resolve the conflict with upstream commit 5fbfb18 ("Fix up
    possibly racy module refcounting")

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     

05 Apr, 2010

1 commit

  • Because a local variable is not initialized, I got these when I did
    'cat tracing/trace' (not trace_pipe):

    CPU:0 [LOST 18446744071579453134 EVENTS]
    ps-3099 [000] 560.770221: lock_acquire: ffff880030865010 &(&dentry->d_lock)->rlock
    CPU:0 [LOST 18446744071579453134 EVENTS]
    ps-3099 [000] 560.770221: lock_release: ffff880030865010 &(&dentry->d_lock)->rlock
    CPU:0 [LOST 18446612133255294080 EVENTS]
    ps-3099 [000] 560.770221: lock_acquire: ffff880030865010 &(&dentry->d_lock)->rlock
    CPU:0 [LOST 18446744071579453134 EVENTS]
    ps-3099 [000] 560.770222: lock_release: ffff880030865010 &(&dentry->d_lock)->rlock
    CPU:0 [LOST 18446744071579453134 EVENTS]
    ps-3099 [000] 560.770222: lock_release: ffffffff816cfb98 dcache_lock

    See peek_next_entry(); it does not set *lost_events when we 'cat tracing/trace'.
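
    A minimal sketch of the fix: initialize the local so iterator reads
    that never update it report zero lost events (context illustrative):

            unsigned long lost_events = 0;  /* was used uninitialized */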

    Signed-off-by: Lai Jiangshan
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Lai Jiangshan
     

01 Apr, 2010

2 commits

  • Now that the ring buffer can keep track of where events are lost, use
    this information in the output of trace_pipe:

    hackbench-3588 [001] 1326.701660: lock_acquire: ffffffff816591e0 read rcu_read_lock
    hackbench-3588 [001] 1326.701661: lock_acquire: ffff88003f4091f0 &(&dentry->d_lock)->rlock
    hackbench-3588 [001] 1326.701664: lock_release: ffff88003f4091f0 &(&dentry->d_lock)->rlock
    CPU:1 [LOST 673 EVENTS]
    hackbench-3588 [001] 1326.702711: kmem_cache_free: call_site=ffffffff81102b85 ptr=ffff880026d96738
    hackbench-3588 [001] 1326.702712: lock_release: ffff88003e1480a8 &mm->mmap_sem
    hackbench-3588 [001] 1326.702713: lock_acquire: ffff88003e1480a8 &mm->mmap_sem

    Even works with the function graph tracer:

    2) ! 170.098 us | }
    2) 4.036 us | rcu_irq_exit();
    2) 3.657 us | idle_cpu();
    2) ! 190.301 us | }
    CPU:2 [LOST 2196 EVENTS]
    2) 0.853 us | } /* cancel_dirty_page */
    2) | remove_from_page_cache() {
    2) 1.578 us | _raw_spin_lock_irq();
    2) | __remove_from_page_cache() {

    Note, it does not work with the iterator "trace" file, since it requires
    consuming the page from the ring buffer to determine how many events
    were lost, which the iterator does not do.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • Currently, when the ring buffer drops events, it does not record
    the fact that it did so. It does inform the writer that the event
    was dropped by returning a NULL event, but it does not leave any
    placeholder where the event was dropped.

    This is not a trivial thing to add because the ring buffer mostly
    runs in overwrite (flight recorder) mode. That is, when the ring
    buffer is full, new data will overwrite old data.

    In a producer/consumer mode, where new data is simply dropped when
    the ring buffer is full, it is trivial to add the placeholder
    for dropped events. When there's more room to write new data, then
    a special event can be added to notify the reader about the dropped
    events.

    But in overwrite mode, any new write can overwrite events. A
    placeholder cannot be inserted into the ring buffer since there may
    never be room. A reader could also come in at any time and miss the
    placeholder.

    Luckily, the way the ring buffer works, the read side can find out
    if events were lost or not, and how many. Every time a write takes
    place, if it overwrites the header page (the next read) it updates
    an "overrun" variable that keeps track of the number of lost events.
    When a reader swaps out a page from the ring buffer, it can record
    this number, perform the swap, and then check to see if the number
    changed, taking the diff if it has, which would be the number of
    events dropped. This can be stored by the reader and returned to
    callers of the reader.

    Since the reader-page swap will fail if the writer moved the head
    page after the reader set up the swap, this gives room to record the
    overruns without worrying about races: if the reader sets up the
    pages, records the overrun, then performs the swap, and the swap
    succeeds, then the overrun variable has not been updated since the
    setup before the swap. A sketch of this bookkeeping follows.
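
    A hedged sketch of that read-side bookkeeping (all names
    illustrative):

            unsigned long overruns = cpu_buffer->overrun;

            if (swap_in_reader_page(cpu_buffer) == 0) {
                    /*
                     * The swap fails if the writer moved the head page
                     * meanwhile, so success means 'overrun' was stable:
                     * the delta since the last read is the lost count.
                     */
                    reader->lost_events = overruns - reader->last_overruns;
                    reader->last_overruns = overruns;
            }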

    For binary readers of the ring buffer, a flag is set in the header
    of each sub page (sub buffer) of the ring buffer. This flag is embedded
    in the size field of the data on the sub buffer, in the 31st bit (the size
    can be 32 or 64 bits depending on the architecture), but only 27
    bits need to be used for the actual size (fewer, actually).

    We could add a new field in the sub buffer header to also record the
    number of events dropped since the last read, but this will change the
    format of the binary ring buffer a bit too much. Perhaps this change can
    be made if the information on the number of events dropped is considered
    important enough.

    Note, the notification of dropped events is only used by consuming reads
    or peeking at the ring buffer. Iterating over the ring buffer does not
    keep this information because the necessary data is only available when
    a page swap is made, and the iterator does not swap out pages.

    Cc: Robert Richter
    Cc: Andi Kleen
    Cc: Li Zefan
    Cc: Arnaldo Carvalho de Melo
    Cc: "Luis Claudio R. Goncalves"
    Cc: Frederic Weisbecker
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

30 Mar, 2010

1 commit

  • …it slab.h inclusion from percpu.h

    percpu.h is included by sched.h and module.h and thus ends up being
    included when building most .c files. percpu.h includes slab.h which
    in turn includes gfp.h making everything defined by the two files
    universally available and complicating inclusion dependencies.

    percpu.h -> slab.h dependency is about to be removed. Prepare for
    this change by updating users of gfp and slab facilities to include
    those headers directly instead of assuming availability. As this
    conversion needs to touch a large number of source files, the
    following script is used as the basis of conversion (a typical
    resulting edit is sketched after the list below).

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

    The script does the following:

    * Scan files for gfp and slab usages and update includes such that
    only the necessary includes are there, i.e. if only gfp is used,
    gfp.h; if slab is used, slab.h.

    * When the script inserts a new include, it looks at the include
    blocks and tries to put the new include such that its order conforms
    to its surroundings. It's put in the include block which contains
    core kernel includes, in the same order that the rest are ordered -
    alphabetical, Christmas tree, rev-Xmas-tree - or at the end if there
    doesn't seem to be any matching order.

    * If the script can't find a place to put a new include (mostly
    because the file doesn't have a fitting include block), it prints out
    an error message indicating which .h file needs to be added to the
    file.
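
    A typical edit produced by this pass looks like the following sketch
    (file contents illustrative):

    /*
     * before: kmalloc() compiled only because percpu.h (via sched.h)
     * dragged in slab.h indirectly
     */
    #include <linux/percpu.h>

    /* after: include what is actually used */
    #include <linux/percpu.h>
    #include <linux/slab.h>         /* kmalloc(), kfree() */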

    The conversion was done in the following steps.

    1. The initial automatic conversion of all .c files updated slightly
    over 4000 files, deleting around 700 includes and adding ~480 gfp.h
    and ~3000 slab.h inclusions. The script emitted errors for ~400
    files.

    2. Each error was manually checked. Some didn't need the inclusion,
    some needed manual addition, while for others adding it to an
    implementation .h or embedding .c file was more appropriate. This
    step added inclusions to around 150 files.

    3. The script was run again and the output was compared to the edits
    from #2 to make sure no file was left behind.

    4. Several build tests were done and a couple of problems were fixed.
    e.g. lib/decompress_*.c used malloc/free() wrappers around slab
    APIs requiring slab.h to be added manually.

    5. The script was run on all .h files but without automatically
    editing them as sprinkling gfp.h and slab.h inclusions around .h
    files could easily lead to inclusion dependency hell. Most gfp.h
    inclusion directives were ignored as stuff from gfp.h was usually
    widely available and often used in preprocessor macros. Each
    slab.h inclusion directive was examined and added manually as
    necessary.

    6. percpu.h was updated not to include slab.h.

    7. Build tests were done on the following configurations and failures
    were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
    distributed build env didn't work with gcov compiles) and a few
    more options had to be turned off depending on archs to make things
    build (like ipr on powerpc/64 which failed due to missing writeq).

    * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig

    8. percpu.h modifications were reverted so that it could be applied as
    a separate patch and serve as bisection point.

    Given the fact that I had only a couple of failures from tests on step
    6, I'm fairly confident about the coverage of this conversion patch.
    If there is a breakage, it's likely to be something in one of the arch
    headers, which should be discoverable easily on most builds of the
    specific arch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

    Tejun Heo
     

14 Mar, 2010

1 commit

  • …nel/git/tip/linux-2.6-tip

    * 'tracing-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    tracing: Do not record user stack trace from NMI context
    tracing: Disable buffer switching when starting or stopping trace
    tracing: Use same local variable when resetting the ring buffer
    function-graph: Init curr_ret_stack with ret_stack
    ring-buffer: Move disabled check into preempt disable section
    function-graph: Add tracing_thresh support to function_graph tracer
    tracing: Update the comm field in the right variable in update_max_tr
    function-graph: Use comment notation for func names of dangling '}'
    function-graph: Fix unused reference to ftrace_set_func()
    tracing: Fix warning in s_next of trace file ops
    tracing: Include irqflags headers from trace clock

    Linus Torvalds