04 Feb, 2010
14 commits
-
Remove record freezing. Because kprobes no longer puts probes on
ftrace's mcount call sites, ftrace does not need to check whether
kprobes are attached to them.

Signed-off-by: Masami Hiramatsu
Cc: systemtap
Cc: DLE
Cc: Steven Rostedt
Cc: przemyslaw@pawelczyk.it
Cc: Frederic Weisbecker
LKML-Reference:
Signed-off-by: Ingo Molnar -
Check whether the address of a new probe is already reserved by
ftrace or alternatives (on x86) when registering the probe.
If it is reserved, return an error and do not register the probe.

Signed-off-by: Masami Hiramatsu
Cc: systemtap
Cc: DLE
Cc: Steven Rostedt
Cc: przemyslaw@pawelczyk.it
Cc: Frederic Weisbecker
Cc: Ananth N Mavinakayanahalli
Cc: Jim Keniston
Cc: Mathieu Desnoyers
Cc: Jason Baron
LKML-Reference:
Signed-off-by: Ingo Molnar -
Introduce *_text_reserved functions for checking whether a text
address range is partially reserved. This patch provides
checking routines for x86 SMP alternatives and dynamic ftrace.
Since both subsystems modify fixed pieces of kernel text, they
should reserve those pieces and protect them from other dynamic
text modifiers, such as kprobes.

This will also be extended when other subsystems that modify
fixed pieces of kernel text are introduced. Dynamic text
modifiers should avoid those pieces.

Signed-off-by: Masami Hiramatsu
Cc: systemtap
Cc: DLE
Cc: Steven Rostedt
Cc: przemyslaw@pawelczyk.it
Cc: Frederic Weisbecker
Cc: Ananth N Mavinakayanahalli
Cc: Jim Keniston
Cc: Mathieu Desnoyers
Cc: Jason Baron
LKML-Reference:
Signed-off-by: Ingo Molnar -
Disable the kprobe booster when CONFIG_PREEMPT=y for now,
because we cannot ensure that all kernel threads preempted in a
kprobe's boosted slot have run out of the slot, even when using
freeze_processes().

The booster on preemptible kernels can be re-enabled if
synchronize_tasks() or something similar is introduced.

Signed-off-by: Masami Hiramatsu
Cc: systemtap
Cc: DLE
Cc: Ananth N Mavinakayanahalli
Cc: Frederic Weisbecker
Cc: Jim Keniston
Cc: Mathieu Desnoyers
Cc: Steven Rostedt
LKML-Reference:
Signed-off-by: Ingo Molnar -
Signed-off-by: Mike Galbraith
Cc: Kirill Smelkov
Cc: Arnaldo Carvalho de Melo
Cc: Peter Zijlstra
Cc: Frederic Weisbecker
LKML-Reference:
Signed-off-by: Ingo Molnar -
By relying on logic in dso__load_kernel_sym(), we can
automatically load vmlinux.

The only thing that needs to be adjusted is how the
--sym-annotate option is handled: we can no longer rely on
vmlinux being loaded until a full successful pass of
dso__load_vmlinux(), and that is not the case if we do the
sym_filter_entry setup in symbol_filter().

So move this step right after event__process_sample(), where we
know the whole dso__load_kernel_sym() pass is done.

By the way, though conceptually similar, `perf top` still cannot
annotate userspace - see the next patches for fixes.

Signed-off-by: Kirill Smelkov
Signed-off-by: Arnaldo Carvalho de Melo
Cc: Mike Galbraith
LKML-Reference:
Signed-off-by: Ingo Molnar -
The problem was that we were incorrectly calculating objdump
addresses for sym->start and sym->end. Look:

For a simple ET_DYN type DSO (*.so) with one function, objdump -dS
output is something like this:

    000004ac <my_strlen>:
    int my_strlen(const char *s)
    {
     4ac:	55        	push   %ebp
     4ad:	89 e5     	mov    %esp,%ebp
     4af:	83 ec 10  	sub    $0x10,%esp

i.e. we have relative-to-dso-mapping IPs (=RIP) there.

For ET_EXEC type, and probably for prelinked libs as well (sorry,
can't test - I don't use prelink), objdump outputs absolute IPs,
e.g.

    08048604 <zz_strlen>:
    extern "C"
    int zz_strlen(const char *s)
    {
     8048604:	55        	push   %ebp
     8048605:	89 e5     	mov    %esp,%ebp
     8048607:	83 ec 10  	sub    $0x10,%esp

So, if sym->start is always relative to the dso mapping(*), we
will have to unmap it for ET_EXEC-like cases, and leave it as is
for ET_DYN cases.

(*) and it is - we've explicitly made it relative. Look for the
adjust_symbols handling in dso__load_sym().

Previously we were always unmapping sym->start, so for ET_DYN
dsos the resulting addresses were wrong, and the objdump output
was empty.

The end result was that perf annotate output for symbols from
non-prelinked *.so files always showed only 0.00% percentages,
which is wrong.

To fix it, let's introduce a helper for converting a rip to an
objdump address, and also document what map_ip() and unmap_ip()
do -- I had to study the sources for several hours to
understand it.

Signed-off-by: Kirill Smelkov
Signed-off-by: Arnaldo Carvalho de Melo
Cc: Mike Galbraith
LKML-Reference:
Signed-off-by: Ingo Molnar -
So as not to pollute 'perf annotate' debugging sessions too much.
Signed-off-by: Arnaldo Carvalho de Melo
Cc: Frédéric Weisbecker
Cc: Mike Galbraith
Cc: Peter Zijlstra
Cc: Paul Mackerras
LKML-Reference:
Signed-off-by: Ingo Molnar -
We want to stream events as fast as possible to perf.data, and
in the future we also want to have splice working, when no
interception will be possible.

Using build_id__mark_dso_hit_ops to create the list of DSOs that
back MMAPs, we also optimize disk usage in the build-id cache by
only caching DSOs that had hits.

Suggested-by: Peter Zijlstra
Signed-off-by: Arnaldo Carvalho de Melo
Cc: Xiao Guangrong
Cc: Frédéric Weisbecker
Cc: Mike Galbraith
Cc: Paul Mackerras
LKML-Reference:
Signed-off-by: Ingo Molnar -
Because 'perf record' will have to find the build-ids after we
stop recording, so as to further reduce the impact on the
workload while we do the measurement.

Signed-off-by: Arnaldo Carvalho de Melo
Cc: Frédéric Weisbecker
Cc: Mike Galbraith
Cc: Peter Zijlstra
Cc: Paul Mackerras
LKML-Reference:
Signed-off-by: Ingo Molnar -
With the recent modifications done to untie the session and
symbol layers, 'perf probe' can now use just the symbols layer.

Signed-off-by: Arnaldo Carvalho de Melo
Acked-by: Masami Hiramatsu
Cc: Frédéric Weisbecker
Cc: Masami Hiramatsu
Cc: Mike Galbraith
Cc: Peter Zijlstra
Cc: Paul Mackerras
Signed-off-by: Ingo Molnar -
We can check using strcmp; most DSOs don't start with '[' so the
test is cheap enough, and we had to test it there anyway, since
when reading perf.data files we weren't calling the routine that
created this global variable and thus weren't setting it as
"loaded", which was causing a bogus:

    Failed to open [vdso], continuing without symbols

message as the first line of 'perf report'.
Signed-off-by: Arnaldo Carvalho de Melo
Cc: Frédéric Weisbecker
Cc: Mike Galbraith
Cc: Peter Zijlstra
Cc: Paul Mackerras
LKML-Reference:
Signed-off-by: Ingo Molnar -
While debugging a problem reported by Pekka Enberg, by printing
the IP and all the maps for a thread when we don't find a map
for an IP, I noticed that dso__load_sym() needs to fix up the
extra maps it creates to hold symbols in ELF sections other than
the main kernel one.

Now we're back to showing things like:

    [root@doppio linux-2.6-tip]# perf report | grep vsyscall
        0.02%  mutt             [kernel.kallsyms].vsyscall_fn  [.] vread_hpet
        0.01%  named            [kernel.kallsyms].vsyscall_fn  [.] vread_hpet
        0.01%  NetworkManager   [kernel.kallsyms].vsyscall_fn  [.] vread_hpet
        0.01%  gconfd-2         [kernel.kallsyms].vsyscall_0   [.] vgettimeofday
        0.01%  hald-addon-rfki  [kernel.kallsyms].vsyscall_fn  [.] vread_hpet
        0.00%  dbus-daemon      [kernel.kallsyms].vsyscall_fn  [.] vread_hpet
    [root@doppio linux-2.6-tip]#

Signed-off-by: Arnaldo Carvalho de Melo
Cc: Frédéric Weisbecker
Cc: Mike Galbraith
Cc: Pekka Enberg
Cc: Peter Zijlstra
Cc: Paul Mackerras
LKML-Reference:
Signed-off-by: Ingo Molnar -
I noticed while writing the first test in 'perf regtest' that to
just test the symbol handling routines one needs to create a
perf session, which is a layer centered on a perf.data file,
events, etc, so I untied these layers.

This reduces the complexity for users, as the number of
parameters to most of the symbol and session APIs is now
reduced, while not adding more state to all map instances: only
the data needed to split the kernel maps (kallsyms and ELF
symtab sections) and to do vmlinux relocation on the main kernel
map is kept.

Signed-off-by: Arnaldo Carvalho de Melo
Cc: Frédéric Weisbecker
Cc: Mike Galbraith
Cc: Peter Zijlstra
Cc: Paul Mackerras
LKML-Reference:
Signed-off-by: Ingo Molnar
03 Feb, 2010
1 commit
-
Open the perf data file with the O_LARGEFILE flag, since its
size can easily be larger than 2G. For example:

    # rm -rf perf.data
    # ./perf kmem record sleep 300
    [ perf record: Woken up 0 times to write data ]
    [ perf record: Captured and wrote 3142.147 MB perf.data (~137282513 samples) ]
    # ll -h perf.data
    -rw------- 1 root root 3.1G .....

Signed-off-by: Xiao Guangrong
Cc: Frederic Weisbecker
Cc: Steven Rostedt
Cc: Paul Mackerras
Cc: Peter Zijlstra
LKML-Reference:
Signed-off-by: Ingo Molnar
31 Jan, 2010
6 commits
-
Fix up a few small stylistic details:

- use consistent vertical spacing/alignment
- remove line-80 artifacts
- group some global variables better
- remove dead code

Plus rename 'prof' to 'report' to make it more in line with the
other tools, and remove the line/file keying, as we really want
to use IPs like the other tools do.

Signed-off-by: Ingo Molnar
Cc: Hitoshi Mitake
Cc: Peter Zijlstra
Cc: Paul Mackerras
Cc: Frederic Weisbecker
LKML-Reference:
Signed-off-by: Ingo Molnar -
Add a new subcommand, "perf lock", to perf.

I have a lot of remaining ToDos, but for now perf lock can
already provide minimal functionality for analyzing lock
statistics.

Signed-off-by: Hitoshi Mitake
Cc: Peter Zijlstra
Cc: Paul Mackerras
Cc: Frederic Weisbecker
LKML-Reference:
Signed-off-by: Ingo Molnar -
Add wait time and lock identification details.
Signed-off-by: Hitoshi Mitake
Cc: Peter Zijlstra
Cc: Paul Mackerras
Cc: Frederic Weisbecker
LKML-Reference:
[ removed the file/line bits as we can do that better via IPs ]
Signed-off-by: Ingo Molnar -
linux/hash.h, the hash header of the kernel, is also useful for
perf. util/include/linuxhash.h includes linux/hash.h, so we can
now use hash facilities (e.g. hash_long()) in perf.

Signed-off-by: Hitoshi Mitake
Cc: Peter Zijlstra
Cc: Paul Mackerras
Cc: Frederic Weisbecker
LKML-Reference:
Signed-off-by: Ingo Molnar -
This patch is required to test the next patch for perf lock.

In commit 064739bc4b3d7f424b2f25547e6611bcf0132415, support for
the "__data_loc" format modifier was added. But when I wanted to
parse the format of lock_acquired (or some other event),
raw_field_ptr() did not return the correct pointer.

So I modified raw_field_ptr() as in this patch. Now
raw_field_ptr() works well.

Signed-off-by: Hitoshi Mitake
Acked-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Paul Mackerras
Cc: Tom Zanussi
Cc: Steven Rostedt
LKML-Reference:
[ v3: fixed minor stylistic detail ]
Signed-off-by: Ingo Molnar -
This reverts commit f5a2c3dce03621b55f84496f58adc2d1a87ca16f.

This patch is required for making "perf lock rec" work.
Commit f5a2c3dce0 changes write_event() in builtin-record.c,
and the changed write_event() sometimes doesn't stop with perf
lock rec.

Cc: Peter Zijlstra
Cc: Mike Galbraith
Cc: Paul Mackerras
Cc: Arnaldo Carvalho de Melo
Cc: Frederic Weisbecker
LKML-Reference:
[ that commit also causes perf record to not be Ctrl-C-able,
  and it's conceptually wrong to parse the data at record time
  (unconditionally - even when not needed), as we eventually
  want to be able to do zero-copy recording, at least for
  non-archive recordings. ]
Signed-off-by: Ingo Molnar
29 Jan, 2010
19 commits
-
Tell git to ignore perf-archive.
Signed-off-by: John Kacur
Signed-off-by: Arnaldo Carvalho de Melo
Cc: Frederic Weisbecker
LKML-Reference:
Signed-off-by: Ingo Molnar -
Checked with:

    ./../scripts/checkpatch.pl --terse --file perf.c

    perf.c: 51: ERROR: open brace '{' following function declarations go on the next line
    perf.c: 73: ERROR: "foo*** bar" should be "foo ***bar"
    perf.c:112: ERROR: space prohibited before that close parenthesis ')'
    perf.c:127: ERROR: space prohibited before that close parenthesis ')'
    perf.c:171: ERROR: "foo** bar" should be "foo **bar"
    perf.c:213: ERROR: "(foo*)" should be "(foo *)"
    perf.c:216: ERROR: "(foo*)" should be "(foo *)"
    perf.c:217: ERROR: space required before that '*' (ctx:OxV)
    perf.c:452: ERROR: do not initialise statics to 0 or NULL
    perf.c:453: ERROR: do not initialise statics to 0 or NULL

Signed-off-by: Arnaldo Carvalho de Melo
Cc: Peter Zijlstra
Cc: Paul Mackerras
Cc: Frederic Weisbecker
Cc: Masami Hiramatsu
LKML-Reference:
Signed-off-by: Ingo Molnar -
Merge reason: We want to queue up a dependent patch. Also update
to later -rc's.

Signed-off-by: Ingo Molnar
-
Removing one extra step needed in the tools that need this, and
fixing a bug in 'perf probe' where this was not being done.

Signed-off-by: Arnaldo Carvalho de Melo
Cc: Masami Hiramatsu
Cc: Frédéric Weisbecker
Cc: Mike Galbraith
Cc: Peter Zijlstra
Cc: Paul Mackerras
LKML-Reference:
Signed-off-by: Ingo Molnar -
To make it clear and to allow direct usage by, for instance,
regression test suites.

Signed-off-by: Arnaldo Carvalho de Melo
Cc: Frédéric Weisbecker
Cc: Mike Galbraith
Cc: Peter Zijlstra
Cc: Paul Mackerras
LKML-Reference:
Signed-off-by: Ingo Molnar -
So that we can call it directly from regression tests, and also
to reduce the size of dso__load_kernel_sym(), making it clearer.

Signed-off-by: Arnaldo Carvalho de Melo
Cc: Frédéric Weisbecker
Cc: Mike Galbraith
Cc: Peter Zijlstra
Cc: Paul Mackerras
LKML-Reference:
Signed-off-by: Ingo Molnar -
As we do lazy loading of symtabs, we will only know that the
specified vmlinux file is invalid when we actually get a hit in
kernel space and then try to load it. So if we get kernel hits
and there are _no_ symbols in the DSO backing the kernel map,
bail out.

Reported-by: Mike Galbraith
Signed-off-by: Arnaldo Carvalho de Melo
Cc: Frédéric Weisbecker
Cc: Mike Galbraith
Cc: Peter Zijlstra
Cc: Paul Mackerras
LKML-Reference:
Signed-off-by: Ingo Molnar -
One problem with frequency-driven counters is that we cannot
predict the rate at which they trigger, so we have to start them
at period=1; this causes a ramp-up effect. However, if we fail
to propagate the stable state on fork, each new child will have
to ramp up again. This can lead to significant artifacts in
sample data.

Signed-off-by: Peter Zijlstra
Cc: eranian@google.com
Cc: Mike Galbraith
Cc: Paul Mackerras
Cc: Arnaldo Carvalho de Melo
Cc: Frederic Weisbecker
LKML-Reference:
Signed-off-by: Ingo Molnar -
At enable time the counter might still have a ->idx pointing to
a previously occupied location that might now be taken by
another event. Resetting the counter at that location with data
from this event will destroy the other counter's count.

Signed-off-by: Peter Zijlstra
Cc: Stephane Eranian
LKML-Reference:
Signed-off-by: Ingo Molnar -
The new Intel documentation includes Westmere arch specific
event maps that are significantly different from the Nehalem
ones. Add support for this generation.

Found the CPUID model numbers on Wikipedia.

Also amend some Nehalem constraints; spotted those when looking
for the differences between Nehalem and Westmere.

Signed-off-by: Peter Zijlstra
Cc: Arjan van de Ven
Cc: "H. Peter Anvin"
Cc: Stephane Eranian
LKML-Reference:
Signed-off-by: Ingo Molnar -
Put the recursion avoidance code in the generic hook instead of
replicating it in each implementation.

Signed-off-by: Peter Zijlstra
Cc: Stephane Eranian
LKML-Reference:
Signed-off-by: Ingo Molnar -
Since constraints are specified on the event number, not on the
event number plus unit mask, shorten the constraint masks so
that we'll actually match something.

Signed-off-by: Peter Zijlstra
Cc: Stephane Eranian
LKML-Reference:
Signed-off-by: Ingo Molnar -
Share the meat of the x86_pmu_disable() code with hw_perf_enable().

Also remove the barrier() from that code, since I could not
convince myself we actually need it.

Signed-off-by: Peter Zijlstra
Cc: Stephane Eranian
LKML-Reference:
Signed-off-by: Ingo Molnar -
- Remove stray debug code
- Improve ugly macros a bit
- Remove some whitespace damage
- (Also fix up some accumulated damage in perf_event.h)

Signed-off-by: Ingo Molnar
Cc: Stephane Eranian
Cc: Peter Zijlstra
LKML-Reference: -
x86_pmu_disable() removes the event from cpuc->event_list[];
since an event can only be on that list once, stop looking after
we have found it.

Signed-off-by: Peter Zijlstra
Cc: Stephane Eranian
LKML-Reference:
Signed-off-by: Ingo Molnar -
Remove num from the fast path and save a few ops.
Signed-off-by: Peter Zijlstra
Cc: Stephane Eranian
LKML-Reference:
Signed-off-by: Ingo Molnar -
Add a weight member to the constraint structure and avoid
recomputing the weight at runtime.

Signed-off-by: Peter Zijlstra
Cc: Stephane Eranian
LKML-Reference:
Signed-off-by: Ingo Molnar -
Instead of copying bitmasks around, pass pointers to the
constraint structure.

Signed-off-by: Peter Zijlstra
Cc: Stephane Eranian
LKML-Reference:
Signed-off-by: Ingo Molnar -
Provide compile time versions of hweight.
Signed-off-by: Peter Zijlstra
Cc: Stephane Eranian
Cc: Linus Torvalds
Cc: Andrew Morton
Cc: Thomas Gleixner
LKML-Reference:
[ Remove some whitespace damage while we are at it ]
Signed-off-by: Ingo Molnar