03 Mar, 2010
1 commit
-
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu:
percpu: add __percpu sparse annotations to what's left
percpu: add __percpu sparse annotations to fs
percpu: add __percpu sparse annotations to core kernel subsystems
local_t: Remove leftover local.h
this_cpu: Remove pageset_notifier
this_cpu: Page allocator conversion
percpu, x86: Generic inc / dec percpu instructions
local_t: Move local.h include to ringbuffer.c and ring_buffer_benchmark.c
module: Use this_cpu_xx to dynamically allocate counters
local_t: Remove cpu_local_xx macros
percpu: refactor the code in pcpu_[de]populate_chunk()
percpu: remove compile warnings caused by __verify_pcpu_ptr()
percpu: make accessors check for percpu pointer in sparse
percpu: add __percpu for sparse.
percpu: make access macros universal
percpu: remove per_cpu__ prefix.
02 Mar, 2010
1 commit
-
* 'for-2.6.34' of git://git.kernel.dk/linux-2.6-block: (38 commits)
block: don't access jiffies when initialising io_context
cfq: remove 8 bytes of padding from cfq_rb_root on 64 bit builds
block: fix for "Consolidate phys_segment and hw_segment limits"
cfq-iosched: quantum check tweak
blktrace: perform cleanup after setup error
blkdev: fix merge_bvec_fn return value checks
cfq-iosched: requests "in flight" vs "in driver" clarification
cciss: Fix problem with scatter gather elements in the scsi half of the driver
cciss: eliminate unnecessary pointer use in cciss scsi code
cciss: do not use void pointer for scsi hba data
cciss: factor out scatter gather chain block mapping code
cciss: fix scatter gather chain block dma direction kludge
cciss: simplify scatter gather code
cciss: factor out scatter gather chain block allocation and freeing
cciss: detect bad alignment of scsi commands at build time
cciss: clarify command list padding calculation
cfq-iosched: rethink seeky detection for SSDs
cfq-iosched: rework seeky detection
block: remove padding from io_context on 64bit builds
block: Consolidate phys_segment and hw_segment limits
...
01 Mar, 2010
2 commits
-
Currently, even if the BLKTRACESETUP ioctl has failed, the user must call
BLKTRACETEARDOWN to be sure that everything was cleaned up, which
is counter-intuitive.
Let the setup ioctl perform the necessary cleanup by itself.
Signed-off-by: Dmitry Monakhov
Signed-off-by: Jens Axboe -
…git/tip/linux-2.6-tip
* 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (172 commits)
perf_event, amd: Fix spinlock initialization
perf_event: Fix preempt warning in perf_clock()
perf tools: Flush maps on COMM events
perf_events, x86: Split PMU definitions into separate files
perf annotate: Handle samples not at objdump output addr boundaries
perf_events, x86: Remove superflous MSR writes
perf_events: Simplify code by removing cpu argument to hw_perf_group_sched_in()
perf_events, x86: AMD event scheduling
perf_events: Add new start/stop PMU callbacks
perf_events: Report the MMAP pgoff value in bytes
perf annotate: Defer allocating sym_priv->hist array
perf symbols: Improve debugging information about symtab origins
perf top: Use a macro instead of a constant variable
perf symbols: Check the right return variable
perf/scripts: Tag syscall_name helper as not yet available
perf/scripts: Add perf-trace-python Documentation
perf/scripts: Remove unnecessary PyTuple resizes
perf/scripts: Add syscall tracing scripts
perf/scripts: Add Python scripting engine
perf/scripts: Remove check-perf-trace from listed scripts
...
Fix trivial conflict in tools/perf/util/probe-event.c
27 Feb, 2010
3 commits
-
…/rostedt/linux-2.6-trace into tracing/core
-
…deric/random-tracing into tracing/core
-
The function graph tracer is currently the most invasive tracer
in the ftrace family. It can easily overflow the buffer even with
10 megs per CPU. This means that events can often be lost.
On start up, or after events are lost, if the function return is
recorded but the function entry was lost, all we get to see is the
exiting '}'.
Here is how a typical trace output starts:
[tracing] cat trace
# tracer: function_graph
#
# CPU DURATION FUNCTION CALLS
# | | | | | | |
0) + 91.897 us | }
0) ! 567.961 us | }
0)
26 Feb, 2010
2 commits
-
…/rostedt/linux-2.6-trace into tracing/core
-
Conflicts:
scripts/recordmcount.pl
Merge reason: Merge up to v2.6.33.
Signed-off-by: Ingo Molnar
25 Feb, 2010
6 commits
-
Remove the freeing of field->type, since it is not necessary.
Reviewed-by: Li Zefan
Signed-off-by: Wenji Huang
LKML-Reference:
Signed-off-by: Steven Rostedt -
The "cpu" variable is declared at the start of the function and
also within a branch, with the exact same initialization.
Remove the local variable of the same name in the branch.
Signed-off-by: Wenji Huang
LKML-Reference:
Signed-off-by: Steven Rostedt -
Signed-off-by: Wenji Huang
LKML-Reference:
Signed-off-by: Steven Rostedt -
Signed-off-by: Wenji Huang
LKML-Reference:
Signed-off-by: Steven Rostedt -
The power tracer has been converted to power trace events.
Acked-by: Frederic Weisbecker
Signed-off-by: Li Zefan
LKML-Reference:
Signed-off-by: Steven Rostedt -
GCC 4.5 introduces behavior that forces the alignment of structures to
use the largest possible value. The default value is 32 bytes, so if
some structures are defined with a 4-byte alignment and others aren't
declared with an alignment constraint at all, they will be aligned at 32 bytes.
For things like the ftrace events, this results in a non-standard array.
When initializing the ftrace subsystem, we traverse the _ftrace_events
section and call the initialization callback for each event. When the
structures are misaligned, we could be treating another part of the
structure (or the zeroed-out space between them) as a function pointer.
This patch forces the alignment for all the ftrace_event_call structures
to 4 bytes.
Without this patch, the kernel fails to boot very early when built with
gcc 4.5.
It's trivial to check the alignment of the members of the array, so it
might be worthwhile to add something to the build system to do that
automatically. Unfortunately, that only covers this case. I've asked one
of the gcc developers about adding a warning when this condition is seen.
Cc: stable@kernel.org
Signed-off-by: Jeff Mahoney
LKML-Reference:
Signed-off-by: Steven Rostedt
17 Feb, 2010
2 commits
-
KPROBES_EVENT actually depends on the regs and stack access API
(b1cf540f) and not on x86.
So introduce a new config option which architectures can select if
they have the API implemented, and switch x86 over to it.
Signed-off-by: Heiko Carstens
Acked-by: Masami Hiramatsu
Cc: Ingo Molnar
Cc: Martin Schwidefsky
LKML-Reference:
Signed-off-by: Frederic Weisbecker -
Most implementations of arch_syscall_addr() are the same, so create a
default version in common code and move the one piece that differs (the
syscall table) to asm/syscall.h. New arch ports don't have to waste
time copying & pasting this simple function.
The s390/sparc versions need to be different, so document why.
Signed-off-by: Mike Frysinger
Acked-by: David S. Miller
Acked-by: Paul Mundt
Acked-by: Heiko Carstens
Cc: Steven Rostedt
LKML-Reference:
Signed-off-by: Frederic Weisbecker
14 Feb, 2010
1 commit
-
Trying to add a probe like:
echo p:myprobe 0x10000 > /sys/kernel/debug/tracing/kprobe_events
will fail since the wrong pointer is passed to strict_strtoul
when trying to convert the address to an unsigned long.
Signed-off-by: Heiko Carstens
Acked-by: Masami Hiramatsu
Cc: Frederic Weisbecker
Cc: Steven Rostedt
LKML-Reference:
Signed-off-by: Ingo Molnar
12 Feb, 2010
1 commit
-
I don't see why we can only clear all functions from the filter.
After patching:
# echo sys_open > set_graph_function
# echo sys_close >> set_graph_function
# cat set_graph_function
sys_open
sys_close
# echo '!sys_close' >> set_graph_function
# cat set_graph_function
sys_open
Signed-off-by: Li Zefan
LKML-Reference:
Signed-off-by: Steven Rostedt
10 Feb, 2010
1 commit
-
The branch annotation output makes it difficult to see the worst
offenders, because it only sorts by percentage:
correct incorrect % Function File Line
------- --------- - -------- ---- ----
0 163 100 qdisc_restart sch_generic.c 179
0 163 100 pfifo_fast_dequeue sch_generic.c 447
0 4 100 pskb_trim_rcsum skbuff.h 1689
0 4 100 llc_rcv llc_input.c 170
0 18 100 psmouse_interrupt psmouse-base.c 304
0 3 100 atkbd_interrupt atkbd.c 389
0 5 100 usb_alloc_dev usb.c 437
0 11 100 vsscanf vsprintf.c 1897
0 2 100 IS_ERR err.h 34
0 23 100 __rmqueue_fallback page_alloc.c 865
0 4 100 probe_wakeup_sched_switch trace_sched_wakeup.c 142
0 3 100 move_masked_irq migration.c 11
Adding the incorrect and correct values as sort keys makes this file a
bit more informative:
correct incorrect % Function File Line
------- --------- - -------- ---- ----
0 366541 100 audit_syscall_entry auditsc.c 1637
0 366538 100 audit_syscall_exit auditsc.c 1685
0 115839 100 sched_info_switch sched_stats.h 269
0 74567 100 sched_info_queued sched_stats.h 222
0 66578 100 sched_info_dequeued sched_stats.h 177
0 15113 100 trace_workqueue_insertion workqueue.h 38
0 15107 100 trace_workqueue_execution workqueue.h 45
0 3622 100 syscall_trace_leave ptrace.c 1772
0 2750 100 sched_move_task sched.c 10100
0 2750 100 sched_move_task sched.c 10110
0 1815 100 pre_schedule_rt sched_rt.c 1462
0 837 100 audit_alloc auditsc.c 879
0 814 100 tcp_mss_split_point tcp_output.c 1302
Signed-off-by: Steven Rostedt
04 Feb, 2010
2 commits
-
Remove record freezing. Because kprobes no longer puts probes on
ftrace's mcount call, ftrace doesn't need to check
whether kprobes is on it.
Signed-off-by: Masami Hiramatsu
Cc: systemtap
Cc: DLE
Cc: Steven Rostedt
Cc: przemyslaw@pawelczyk.it
Cc: Frederic Weisbecker
LKML-Reference:
Signed-off-by: Ingo Molnar -
Introducing *_text_reserved functions for checking the text
address range is partially reserved or not. This patch provides
checking routines for x86 smp alternatives and dynamic ftrace.
Since both functions modify fixed pieces of kernel text, they
should reserve and protect those from other dynamic text
modifier, like kprobes.This will also be extended when introducing other subsystems
which modify fixed pieces of kernel text. Dynamic text modifiers
should avoid those.Signed-off-by: Masami Hiramatsu
Cc: systemtap
Cc: DLE
Cc: Steven Rostedt
Cc: przemyslaw@pawelczyk.it
Cc: Frederic Weisbecker
Cc: Ananth N Mavinakayanahalli
Cc: Jim Keniston
Cc: Mathieu Desnoyers
Cc: Jason Baron
LKML-Reference:
Signed-off-by: Ingo Molnar
02 Feb, 2010
2 commits
-
When we cat /tracing/stack_trace, we may cause a circular lock:
sys_read()
t_start()
arch_spin_lock(&max_stack_lock);
t_show()
seq_printf(), vsnprintf() .... /* they are all trace-able;
when they are traced, max_stack_lock may be required again. */
The following script can trigger this circular deadlock very easily:
#!/bin/bash
echo 1 > /proc/sys/kernel/stack_tracer_enabled
mount -t debugfs xxx /mnt > /dev/null 2>&1
(
# make check_stack() zealous to require max_stack_lock
for ((; ;))
{
echo 1 > /mnt/tracing/stack_max_size
}
) &
for ((; ;))
{
cat /mnt/tracing/stack_trace > /dev/null
}
To fix this bug, we increase the percpu trace_active before
acquiring the lock.
Reported-by: Li Zefan
Signed-off-by: Lai Jiangshan
LKML-Reference:
Signed-off-by: Steven Rostedt
29 Jan, 2010
4 commits
-
Merge reason: We want to queue up a dependent patch. Also update to
later -rc's.
Signed-off-by: Ingo Molnar
-
The return values of the kprobe tracing functions are meaningless,
so let's remove them.
Signed-off-by: Xiao Guangrong
Acked-by: Masami Hiramatsu
Cc: Steven Rostedt
Cc: Ingo Molnar
Cc: Paul Mackerras
Cc: Jason Baron
Cc: Peter Zijlstra
LKML-Reference:
[fweisbec@gmail: whitespace fixes, drop useless void returns in end
of functions]
Signed-off-by: Frederic Weisbecker -
Introduce ftrace_perf_buf_prepare() and ftrace_perf_buf_submit() to
gather the common code that operates on raw events sampling buffer.
This cleans up redundant code between regular trace events, syscall
events and kprobe events.
Changelog v1->v2:
- Rename function name as per Masami and Frederic's suggestion
- Add __kprobes for ftrace_perf_buf_prepare() and make
ftrace_perf_buf_submit() inline as per Masami's suggestion
- Export ftrace_perf_buf_prepare since modules will use it.
Signed-off-by: Xiao Guangrong
Acked-by: Masami Hiramatsu
Cc: Ingo Molnar
Cc: Steven Rostedt
Cc: Paul Mackerras
Cc: Jason Baron
Cc: Peter Zijlstra
LKML-Reference:
Signed-off-by: Frederic Weisbecker -
In the function graph tracer, a calling function is to be traced
only when it is enabled through the set_graph_function file,
or when it is nested in an enabled function.
Current code uses TSK_TRACE_FL_GRAPH to test whether it is nested
or not. Looking at the code, we can see that:
(trace->depth > 0) <==> (TSK_TRACE_FL_GRAPH is set)
trace->depth is more explicit in telling that it is nested,
so we use trace->depth directly and simplify the code.
No functionality is changed.
TSK_TRACE_FL_GRAPH is not removed yet; it is left for future usage.
Signed-off-by: Lai Jiangshan
Cc: Ingo Molnar
Cc: Steven Rostedt
LKML-Reference:
Signed-off-by: Frederic Weisbecker
27 Jan, 2010
3 commits
-
Update the graph tracer examples to cover the new frame pointer semantics
(in terms of passing it along). Move the HAVE_FUNCTION_GRAPH_FP_TEST docs
out of the Kconfig into the right place, and expand on the details.
Signed-off-by: Mike Frysinger
LKML-Reference:
Signed-off-by: Steven Rostedt -
If the iterator comes to an empty page for some reason, or if
the page is emptied by a consuming read, the iterator code currently
does not check if the iterator is past the contents, and may
return a false entry.
This patch adds a check to the ring buffer iterator to test if the
current page has been completely read, and sets the iterator to the
next page if necessary.
Signed-off-by: Steven Rostedt
-
Usually reads of the ring buffer are performed by a single task.
There are two types of reads from the ring buffer.
One is a consuming read, which will consume the entry that was read,
and the next read will be the entry that follows.
The other is an iterator that will let the user read the contents of
the ring buffer without modifying it. When an iterator is allocated,
writes to the ring buffer are disabled to protect the iterator.
The problem exists when consuming reads happen while an iterator is
allocated. Specifically, the kind of read that swaps out an entire
page (used by splice) and replaces it with a new read. If the iterator
is on the page that is swapped out, then the next read may read
from this swapped-out page and return garbage.
This patch adds a check when reading the iterator to make sure that
the iterator contents are still valid. If a consuming read has taken
place, the iterator is reset.
Signed-off-by: Steven Rostedt
26 Jan, 2010
1 commit
-
If the contents of the ftrace ring buffer get corrupted and the trace
file is read, it could create a kernel oops (usually just killing the user
task thread). This is caused by the checking of the pid in the buffer.
If the pid is negative, it still references the cmdline cache array,
which could point to an invalid address.
The simple fix is to test for negative PIDs.
Signed-off-by: Steven Rostedt
17 Jan, 2010
2 commits
-
Update the kprobe tracing self test for the new syntax (it supports
deleting individual probes, and drops $argN support)
and behavior change (new probes are disabled by default).
This selftest includes the following checks:
- Adding a function-entry probe and a return probe with arguments.
- Enabling these probes.
- Deleting them individually.
Signed-off-by: Masami Hiramatsu
Cc: systemtap
Cc: DLE
Cc: Steven Rostedt
Cc: Frederic Weisbecker
LKML-Reference:
Signed-off-by: Ingo Molnar -
Each time we save a function entry from the function graph
tracer, we check if the trace array is set, which is wasteful
because it is set anyway before we start the tracer. All we need
is to ensure we have good read and write orderings: when we set
the trace array, we just need to guarantee it to be visible
before starting tracing.
Signed-off-by: Frederic Weisbecker
Acked-by: Steven Rostedt
Cc: Lai Jiangshan
Cc: Paul E. McKenney
LKML-Reference:
Signed-off-by: Ingo Molnar
15 Jan, 2010
6 commits
-
We should be clear on 2 things:
- the length parameter of a match callback includes
the trailing '\0'.
- the string to be searched might not be NULL-terminated.
Signed-off-by: Li Zefan
LKML-Reference:
Signed-off-by: Steven Rostedt -
MATCH_FULL matching for PTR_STRING is not working correctly:
# echo 'func == vt' > events/bkl/lock_kernel/filter
# echo 1 > events/bkl/lock_kernel/enable
...
# cat trace
Xorg-1484 [000] 1973.392586: lock_kernel: ... func=vt_ioctl()
gpm-1402 [001] 1974.027740: lock_kernel: ... func=vt_ioctl()
We should pass to regex.match(..., len) the length (including '\0')
of the source string instead of the length of the pattern string.
Signed-off-by: Li Zefan
LKML-Reference:
Acked-by: Frederic Weisbecker
Signed-off-by: Steven Rostedt -
The @str might not be NULL-terminated if it's of type
DYN_STRING or STATIC_STRING, so we should use strnstr()
instead of strstr().
Signed-off-by: Li Zefan
LKML-Reference:
Acked-by: Frederic Weisbecker
Signed-off-by: Steven Rostedt -
For the '*foo' pattern, we should allow any string ending with
'foo', but event filtering incorrectly disallows strings
like bar_foo_foo.
Signed-off-by: Li Zefan
LKML-Reference:
Acked-by: Frederic Weisbecker
Signed-off-by: Steven Rostedt -
MATCH_FRONT_ONLY actually performs a full match:
# ./perf record -R -f -a -e lock:lock_acquire \
--filter 'name ~rcu_*' sleep 1
# ./perf trace
(no output)
We should pass the length of the pattern string to strncmp().
Signed-off-by: Li Zefan
LKML-Reference:
Acked-by: Frederic Weisbecker
Signed-off-by: Steven Rostedt -
For the '*foo' pattern, we should allow any string ending with
'foo', but the ftrace filter incorrectly disallows strings
like bar_foo_foo:
# echo '*io' > set_ftrace_filter
# cat set_ftrace_filter | grep 'req_bio_endio'
# cat available_filter_functions | grep 'req_bio_endio'
req_bio_endio
Signed-off-by: Li Zefan
LKML-Reference:
Acked-by: Frederic Weisbecker
Signed-off-by: Steven Rostedt