08 Feb, 2011

5 commits

  • Many system calls are unimplemented and mapped to sys_ni_syscall, but at
    boot ftrace would still search through every syscall metadata entry for
    a match which wouldn't be there.

    This patch causes the search to terminate early if the system call
    is not mapped (a sketch follows this entry).

    Signed-off-by: Ian Munsie
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Ian Munsie
     
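    A minimal sketch of the early exit described in the entry above, assuming
    a boot-time lookup in kernel/trace/trace_syscalls.c shaped roughly like
    this; the section symbols and the plain strcmp() matching are
    illustrative, not the kernel's verbatim code:

    #include <linux/kallsyms.h>
    #include <linux/string.h>
    #include <trace/syscall.h>

    /* linker-generated bounds of the metadata section, declared elsewhere */
    static struct syscall_metadata *find_syscall_meta(unsigned long syscall)
    {
        struct syscall_metadata **start = __start_syscalls_metadata;
        struct syscall_metadata **stop  = __stop_syscalls_metadata;
        char str[KSYM_SYMBOL_LEN];

        kallsyms_lookup(syscall, NULL, NULL, NULL, str);

        /* Unimplemented syscalls all resolve to sys_ni_syscall, so there
         * is no metadata to find: bail out instead of scanning the table. */
        if (!strcmp(str, "sys_ni_syscall"))
            return NULL;

        for (; start < stop; start++) {
            if ((*start)->name && !strcmp(str, (*start)->name))
                return *start;
        }
        return NULL;
    }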
  • Some architectures have unusual symbol names and the generic code to
    match the symbol name with the function name for the syscall metadata
    will fail. For example, symbols on PPC64 start with a period and the
    generic code will fail to match them.

    This patch moves the match logic out into a separate function which an
    arch can override by defining ARCH_HAS_SYSCALL_MATCH_SYM_NAME in
    asm/ftrace.h and implementing arch_syscall_match_sym_name.

    Signed-off-by: Ian Munsie
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Ian Munsie
     
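    The hook described above could look roughly like this; the kernel's real
    versions differ in detail (for example in how syscall wrapper prefixes are
    handled), so treat both definitions as illustrative sketches of two
    separate headers:

    #include <linux/string.h>
    #include <linux/types.h>

    /* --- generic code: fallback used unless the arch overrides it --- */
    #ifndef ARCH_HAS_SYSCALL_MATCH_SYM_NAME
    static inline bool
    arch_syscall_match_sym_name(const char *sym, const char *name)
    {
        return !strcmp(sym, name);
    }
    #endif

    /* --- arch's asm/ftrace.h, e.g. PPC64 where symbols carry a
     *     leading period (sketch) --- */
    #define ARCH_HAS_SYSCALL_MATCH_SYM_NAME
    static inline bool
    arch_syscall_match_sym_name(const char *sym, const char *name)
    {
        return !strcmp(sym + 1, name);  /* skip the leading '.' */
    }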
  • Some architectures use non-trivial system call tables and will not work
    with the generic arch_syscall_addr code. For example, PowerPC64 uses a
    table of twin long longs.

    This patch makes the generic arch_syscall_addr weak to allow
    architectures with non-trivial system call tables to override it.

    Signed-off-by: Ian Munsie
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Ian Munsie
     
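    A sketch of what the generic version then looks like (essentially the
    common code described in the 17 Feb, 2010 entry below, now made weak):

    #include <linux/init.h>
    #include <asm/syscall.h>        /* declares sys_call_table */

    /*
     * Default: index straight into sys_call_table.  Marked __weak so an
     * arch whose table entries are not plain function pointers (such as
     * the PPC64 twin-long-long layout mentioned above) can override it.
     */
    unsigned long __init __weak arch_syscall_addr(int nr)
    {
        return (unsigned long)sys_call_table[nr];
    }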
    With the ftrace events now checking whether the syscall_nr is valid upon
    initialisation, it should no longer be possible to register or unregister
    a syscall event with an invalid syscall_nr, since such events should not
    be created in the first place. This adds a WARN_ON_ONCE in the register
    and unregister functions to locate potential regressions in the future.

    Signed-off-by: Ian Munsie
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Ian Munsie
     
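    Roughly what the check looks like on the register side (a sketch; the
    unregister side gets the same treatment, and the struct and field names
    follow the kernels of that era, so treat them as approximate):

    #include <linux/ftrace_event.h>
    #include <trace/syscall.h>

    static int reg_event_syscall_enter(struct ftrace_event_call *call)
    {
        int num = ((struct syscall_metadata *)call->data)->syscall_nr;

        /* An event with an invalid syscall_nr should never have been
         * created, so tripping this warning points at a regression. */
        if (WARN_ON_ONCE(num < 0 || num >= NR_syscalls))
            return -ENOSYS;

        /* ... normal registration continues here ... */
        return 0;
    }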
    FTRACE_SYSCALLS would create events for each and every system call, even
    if it had failed to map the system call's name to its number. This
    resulted in a number of events being created that would not behave as
    expected.

    This could happen, for example, on architectures whose symbol names are
    unusual and will not match the system call name. It could also happen
    with system calls which were mapped to sys_ni_syscall.

    This patch changes the default system call number in the metadata to -1.
    If the system call name from the metadata is not successfully mapped to
    a system call number during boot, then the event initialisation routine
    will now return an error, preventing the event from being created (a
    sketch follows this entry).

    Signed-off-by: Ian Munsie
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Ian Munsie
     
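    A sketch of the initialisation check described above (helper and field
    names are approximations of the kernel code of that era):

    #include <linux/ftrace_event.h>
    #include <trace/syscall.h>

    static int init_syscall_trace(struct ftrace_event_call *call)
    {
        struct syscall_metadata *meta = call->data;

        /* syscall_nr is still -1 if the name never matched a syscall
         * number at boot: refuse to create the event. */
        if (meta->syscall_nr < 0 || meta->syscall_nr >= NR_syscalls) {
            pr_debug("syscall %s metadata not mapped, disabling ftrace event\n",
                     meta->name);
            return -ENOSYS;
        }

        return trace_event_raw_init(call);
    }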

03 Feb, 2011

1 commit

  • Currently the syscall_meta structures for the syscall tracepoints are
    placed in the __syscall_metadata section, and at link time, the linker
    makes one large array of all these syscall metadata structures. On boot
    up, this array is read (much like the initcall sections) and the syscall
    data is processed.

    The problem is that there is no guarantee that gcc will place complex
    structures nicely together in an array format. Two structures in the
    same file may be placed awkwardly, because gcc has no clue that they
    are supposed to be in an array.

    A hack was previously used to force the alignment to 4, to pack the
    structures together. But this caused alignment issues with other
    architectures (sparc).

    Instead of packing the structures into an array, the structures' addresses
    are now put into the __syscall_metadata section. As pointers always have
    natural alignment, gcc should always pack them tightly together
    (otherwise initcall, extable, etc. would also fail). A sketch follows
    this entry.

    By having the pointers to the structures in the section, we can still
    iterate the trace_events without causing unnecessary alignment problems
    with other architectures, or depending on the current behaviour of
    gcc that will likely change in the future just to tick us kernel developers
    off a little more.

    The __syscall_metadata section is also moved into the .init.data section
    as it is now only needed at boot up.

    Suggested-by: David Miller
    Acked-by: David S. Miller
    Cc: Mathieu Desnoyers
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
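    Schematically, per syscall (names illustrative), the metadata structure
    stays wherever gcc places it and only its address lands in the section,
    so the linker ends up building a tightly packed, naturally aligned array
    of pointers:

    #include <linux/compiler.h>
    #include <trace/syscall.h>

    static struct syscall_metadata __syscall_meta_sys_foo = {
        .name = "sys_foo",
        /* .nb_args, .types, .args, ... */
    };

    /* only the pointer goes into the __syscall_metadata section */
    static struct syscall_metadata * __used
        __attribute__((section("__syscall_metadata")))
        __p_syscall_meta_sys_foo = &__syscall_meta_sys_foo;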

15 Jan, 2011

1 commit

  • There is no need for syscall_exit_fields as the syscall
    exit event class can already host the fields in its structure,
    like most other trace events do by default. Use that
    default behavior instead.

    Following this scheme, we no longer need to override the
    get_fields() callback of the syscall exit event class either.

    Hence both syscall_exit_fields and syscall_get_exit_fields() can
    be removed.

    Also changed some indentation to keep the following under 80
    characters:

    ".fields = LIST_HEAD_INIT(event_class_syscall_exit.fields),"

    Acked-by: Frederic Weisbecker
    Signed-off-by: Lai Jiangshan
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Lai Jiangshan
     
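    The exit class then simply initialises its own list head, as the quoted
    line shows (other members elided):

    #include <linux/ftrace_event.h>
    #include <linux/list.h>

    struct ftrace_event_class event_class_syscall_exit = {
        .system = "syscalls",
        .fields = LIST_HEAD_INIT(event_class_syscall_exit.fields),
        /* .reg, .raw_init, ... elided; no get_fields() override needed */
    };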

31 May, 2010

1 commit

  • tracepoint_probe_unregister() does not synchronize against the probe
    callbacks, so do that explicitly. This properly serializes the callbacks
    and the free of the data used therein.

    Also, use this_cpu_ptr() where possible.

    Acked-by: Frederic Weisbecker
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
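    The call sequence being fixed, as a sketch (the tracepoint and probe
    names are illustrative, and kernels of that era used synchronize_sched()
    as the grace-period wait):

    tracepoint_probe_unregister("sched_switch", probe_sched_switch, data);

    /* tracepoint_probe_unregister() does not wait for probes that are
     * already running, so wait for them explicitly before freeing the
     * data they may still be dereferencing. */
    synchronize_sched();
    kfree(data);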

15 May, 2010

6 commits

  • Now that the trace_event structure is embedded in the ftrace_event_call
    structure, there is no need for the ftrace_event_call id field.
    The id field is the same as the trace_event type field.

    Removing the id and re-arranging the structure brings down the tracepoint
    footprint by another 5K.

    text data bss dec hex filename
    4913961 1088356 861512 6863829 68bbd5 vmlinux.orig
    4895024 1023812 861512 6780348 6775bc vmlinux.print
    4894944 1018052 861512 6774508 675eec vmlinux.id

    Acked-by: Mathieu Desnoyers
    Acked-by: Masami Hiramatsu
    Acked-by: Frederic Weisbecker
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • Currently, every event has its own trace_event structure. This is
    fine since the structure is needed anyway. But the print function
    structure (trace_event_functions) is now separate. Since the output
    of the trace event is done by the class (with the exception of events
    defined by DEFINE_EVENT_PRINT), it makes sense to have the class
    define the print functions that all events in the class can use.

    This matters most for the syscall events, since all syscall events
    use the same class. The savings here is another 30K.

    text data bss dec hex filename
    4913961 1088356 861512 6863829 68bbd5 vmlinux.orig
    4900382 1048964 861512 6810858 67ecea vmlinux.init
    4900446 1049028 861512 6810986 67ed6a vmlinux.preprint
    4895024 1023812 861512 6780348 6775bc vmlinux.print

    To accomplish this, and to let the class know what event is being
    printed, the event structure is embedded in the ftrace_event_call
    structure. This should not be an issue since the event structure
    was created for each event anyway.

    Acked-by: Mathieu Desnoyers
    Acked-by: Masami Hiramatsu
    Acked-by: Frederic Weisbecker
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
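    Structurally (members elided, helper name hypothetical), the event is now
    embedded, a shared print function handed the struct trace_event can
    recover its call with container_of(), and the old id field is simply the
    embedded event's type:

    struct ftrace_event_call {
        /* ... list linkage, class pointer, name, ... */
        struct trace_event event;   /* embedded, no longer a separate object */
    };

    static struct ftrace_event_call *event_to_call(struct trace_event *event)
    {
        return container_of(event, struct ftrace_event_call, event);
    }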
    Multiple events may use the same method to print their data.
    Instead of having all events have a pointer to their print functions,
    the trace_event structure now points to a trace_event_functions structure
    that will hold the way to print out the event (sketched after this entry).

    The event itself is now passed to the print function to let the print
    function know what kind of event it should print.

    This opens the door to consolidating the way several events print
    their output.

    text data bss dec hex filename
    4913961 1088356 861512 6863829 68bbd5 vmlinux.orig
    4900382 1048964 861512 6810858 67ecea vmlinux.init
    4900446 1049028 861512 6810986 67ed6a vmlinux.preprint

    This change slightly increases the size but is needed for the next change.

    v3: Fix the branch tracer events to handle this change.

    v2: Fix the new function graph tracer event calls to handle this change.

    Acked-by: Mathieu Desnoyers
    Acked-by: Masami Hiramatsu
    Acked-by: Frederic Weisbecker
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
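    The indirection described above, sketched with approximate member names
    and everything else elided:

    /* print callbacks now also receive the event being printed */
    typedef enum print_line_t (*trace_print_func)(struct trace_iterator *iter,
                                                  int flags,
                                                  struct trace_event *event);

    struct trace_event_functions {
        trace_print_func    trace;
        trace_print_func    raw;
        trace_print_func    hex;
        trace_print_func    binary;
    };

    struct trace_event {
        int                             type;
        struct trace_event_functions    *funcs;  /* shared print methods */
        /* hash/list linkage elided */
    };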
    The raw_init function pointer in the event is used to initialize
    various kinds of events. The type of initialization needed usually
    depends on the kind of event it is.

    Two events with the same class will always have the same initialization
    function, so it makes sense to move this to the class structure.

    Perhaps even making a special system structure would work since
    the initialization is the same for all events within a system.
    But since there's no system structure (yet), this will just move it
    to the class.

    text data bss dec hex filename
    4913961 1088356 861512 6863829 68bbd5 vmlinux.orig
    4900375 1053380 861512 6815267 67fe23 vmlinux.fields
    4900382 1048964 861512 6810858 67ecea vmlinux.init

    The text grew very slightly, but this is a constant cost that comes
    from changing the C files that call the init code.
    The bigger savings is the data which will be saved the more events share
    a class.

    Acked-by: Mathieu Desnoyers
    Acked-by: Masami Hiramatsu
    Acked-by: Frederic Weisbecker
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • Move the defined fields from the event to the class structure.
    Since the fields of the event are defined by the class they belong
    to, it makes sense to have the class hold the information instead
    of the individual events. The events of the same class would just
    hold duplicate information.

    After this change the size of the kernel dropped another 3K:

    text data bss dec hex filename
    4913961 1088356 861512 6863829 68bbd5 vmlinux.orig
    4900252 1057412 861512 6819176 680d68 vmlinux.regs
    4900375 1053380 861512 6815267 67fe23 vmlinux.fields

    Although the text increased, this was mainly due to the C files
    having to adapt to the change. This is a constant increase, where
    new tracepoints will not increase the text. But the big drop is
    in the data size (as well as needed allocations to hold the fields).
    This will give even more savings as more tracepoints are created.

    Note, if just TRACE_EVENT()s are used and not DECLARE_EVENT_CLASS()
    with several DEFINE_EVENT()s, then the savings will be lost. But
    we are pushing developers to consolidate events with DEFINE_EVENT()
    so this should not be an issue.

    The kprobes define a unique class for every new event, but they are
    dynamic so this should not be an issue.

    The syscalls however have a single class but the fields for the individual
    events are different. The syscalls use metadata to define the
    fields. I moved the fields list from the event to the metadata and
    added a "get_fields()" function to the class. This function is used
    to find the fields. For normal events and kprobes, get_fields() just
    returns a pointer to the fields list_head in the class. For syscall
    events, it returns the fields list_head in the metadata for the event.

    v2: Fixed the syscall fields. The syscall metadata needs a list
    of fields for both enter and exit.

    Acked-by: Frederic Weisbecker
    Acked-by: Mathieu Desnoyers
    Acked-by: Masami Hiramatsu
    Cc: Tom Zanussi
    Cc: Peter Zijlstra
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
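    A sketch of how a common helper can then find the right list (shape
    approximate):

    struct list_head *trace_get_fields(struct ftrace_event_call *event_call)
    {
        if (!event_call->class->get_fields)
            return &event_call->class->fields;  /* normal events, kprobes */

        /* syscalls: the lists live in the enter/exit metadata */
        return event_call->class->get_fields(event_call);
    }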
  • This patch removes the register functions of TRACE_EVENT() to enable
    and disable tracepoints. The registering of an event is now done
    directly in the trace_events.c file. The tracepoint_probe_register()
    is now called directly.

    The prototypes are no longer type checked, but this should not be
    an issue since the tracepoints are created automatically by the
    macros. If a prototype is incorrect in the TRACE_EVENT() macro, then
    other macros will catch it.

    The trace_event_class structure now holds the probes to be called
    by the callbacks. This removes needing to have each event have
    a separate pointer for the probe.

    To handle kprobes and syscalls, since they register probes in a
    different manner, a "reg" field is added to the ftrace_event_class
    structure. If the "reg" field is assigned, then it will be called for
    enabling and disabling of the probe for either ftrace or perf. To let
    the reg function know what is happening, a new enum (trace_reg) is
    created that has the type of control that is needed.

    With this new rework, the 82 kernel events and 618 syscall events
    have their footprint dramatically lowered:

    text data bss dec hex filename
    4913961 1088356 861512 6863829 68bbd5 vmlinux.orig
    4914025 1088868 861512 6864405 68be15 vmlinux.class
    4918492 1084612 861512 6864616 68bee8 vmlinux.tracepoint
    4900252 1057412 861512 6819176 680d68 vmlinux.regs

    The size went from 6863829 to 6819176; that's a total of 44K
    in savings. With tracepoints being continuously added, it is
    critical that the footprint be kept minimal. A sketch of the
    resulting class layout follows this entry.

    v5: Added #ifdef CONFIG_PERF_EVENTS around a reference to perf
    specific structure in trace_events.c.

    v4: Fixed trace self tests to check probe because regfunc no longer
    exists.

    v3: Updated to handle void *data in beginning of probe parameters.
    Also added the tracepoint: check_trace_callback_type_##call().

    v2: Changed the callback probes to pass void * and typecast the
    value within the function.

    Acked-by: Mathieu Desnoyers
    Acked-by: Masami Hiramatsu
    Acked-by: Frederic Weisbecker
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
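    A sketch under the assumptions above; member order and the complete
    member list differ in the real header:

    enum trace_reg {
        TRACE_REG_REGISTER,
        TRACE_REG_UNREGISTER,
        TRACE_REG_PERF_REGISTER,
        TRACE_REG_PERF_UNREGISTER,
    };

    struct ftrace_event_class {
        const char       *system;
        void             *probe;  /* probe shared by every event in the class */
        int              (*reg)(struct ftrace_event_call *event,
                                 enum trace_reg type);
        int              (*raw_init)(struct ftrace_event_call *call);
        int              (*define_fields)(struct ftrace_event_call *call);
        struct list_head *(*get_fields)(struct ftrace_event_call *call);
        struct list_head fields;
        /* perf_probe etc. elided */
    };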

14 May, 2010

2 commits

  • This patch adds data to be passed to tracepoint callbacks.

    The created functions from DECLARE_TRACE() now need a mandatory data
    parameter. For example:

    DECLARE_TRACE(mytracepoint, int value, value)

    Will create the register function:

    int register_trace_mytracepoint((void(*)(void *data, int value))probe,
    void *data);

    All callbacks (probes) must take a (void *data) parameter as their
    first argument. So a callback for the above tracepoint will look like:

    void myprobe(void *data, int value)
    {
    }

    The callback may choose to ignore the data parameter.

    This change allows callbacks to register a private data pointer along
    with the function probe.

    void mycallback(void *data, int value);

    register_trace_mytracepoint(mycallback, mydata);

    Then the mycallback() will receive the "mydata" as the first parameter
    before the args.

    A more detailed example:

    DECLARE_TRACE(mytracepoint, TP_PROTO(int status), TP_ARGS(status));

    /* In the C file */

    DEFINE_TRACE(mytracepoint, TP_PROTO(int status), TP_ARGS(status));

    [...]

    trace_mytracepoint(status);

    /* In a file registering this tracepoint */

    int my_callback(void *data, int status)
    {
            struct my_struct *my_data = data;
            [...]
    }

    [...]
    my_data = kmalloc(sizeof(*my_data), GFP_KERNEL);
    init_my_data(my_data);
    register_trace_mytracepoint(my_callback, my_data);

    The same callback can also be registered to the same tracepoint as long
    as the data registered is different. Note, the data must also be used
    to unregister the callback:

    unregister_trace_mytracepoint(my_callback, my_data);

    Because of the data parameter, tracepoints declared this way cannot be
    declared without arguments. That is:

    DECLARE_TRACE(mytracepoint, TP_PROTO(void), TP_ARGS());

    will cause an error.

    If no arguments are needed, a new macro can be used instead:

    DECLARE_TRACE_NOARGS(mytracepoint);

    Since there are no arguments, the proto and args fields are left out.

    This is part of a series to make the tracepoint footprint smaller:

    text data bss dec hex filename
    4913961 1088356 861512 6863829 68bbd5 vmlinux.orig
    4914025 1088868 861512 6864405 68be15 vmlinux.class
    4918492 1084612 861512 6864616 68bee8 vmlinux.tracepoint

    Again, this patch also increases the size of the kernel, but
    lays the groundwork for decreasing it.

    v5: Fixed net/core/drop_monitor.c to handle these updates.

    v4: Moved the DECLARE_TRACE() and DECLARE_TRACE_NOARGS() out of the
    #ifdef CONFIG_TRACEPOINTS, since the two are the same in both
    cases. The __DECLARE_TRACE() is what changes.
    Thanks to Frederic Weisbecker for pointing this out.

    v3: Made all register_* functions require data to be passed and
    all callbacks to take a void * parameter as their first argument.
    This makes the calling functions comply with C standards.

    Also added more comments to the modifications of DECLARE_TRACE().

    v2: Made the DECLARE_TRACE() have the ability to pass arguments
    and added a new DECLARE_TRACE_NOARGS() for tracepoints that
    do not need any arguments.

    Acked-by: Mathieu Desnoyers
    Acked-by: Masami Hiramatsu
    Acked-by: Frederic Weisbecker
    Cc: Neil Horman
    Cc: David S. Miller
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • This patch creates a ftrace_event_class struct that event structs point to.
    This class struct will be made to hold information to modify the
    events. Currently the class struct only holds the events' system name.

    This patch slightly increases the size, but this change lays the groundwork
    for other changes to make the footprint of tracepoints smaller.

    With 82 standard tracepoints, and 618 system call tracepoints
    (two tracepoints per syscall: enter and exit):

    text data bss dec hex filename
    4913961 1088356 861512 6863829 68bbd5 vmlinux.orig
    4914025 1088868 861512 6864405 68be15 vmlinux.class

    This patch also cleans up some stale comments in ftrace.h.

    v2: Fixed missing semi-colon in macro.

    Acked-by: Frederic Weisbecker
    Acked-by: Mathieu Desnoyers
    Acked-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

30 Mar, 2010

1 commit

  • …it slab.h inclusion from percpu.h

    percpu.h is included by sched.h and module.h and thus ends up being
    included when building most .c files. percpu.h includes slab.h which
    in turn includes gfp.h making everything defined by the two files
    universally available and complicating inclusion dependencies.

    The percpu.h -> slab.h dependency is about to be removed. Prepare for
    this change by updating users of gfp and slab facilities to include those
    headers directly instead of assuming availability. As this conversion
    needs to touch a large number of source files, the following script is
    used as the basis of conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

    The script does the following.

    * Scan files for gfp and slab usages and update includes such that
    only the necessary includes are there, i.e. if only gfp is used,
    gfp.h; if slab is used, slab.h.

    * When the script inserts a new include, it looks at the include
    blocks and tries to put the new include such that its order conforms
    to its surroundings. It's put in the include block which contains
    core kernel includes, in the same order that the rest are ordered -
    alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
    doesn't seem to be any matching order.

    * If the script can't find a place to put a new include (mostly
    because the file doesn't have a fitting include block), it prints out
    an error message indicating which .h file needs to be added to the
    file.

    The conversion was done in the following steps.

    1. The initial automatic conversion of all .c files updated slightly
    over 4000 files, deleting around 700 includes and adding ~480 gfp.h
    and ~3000 slab.h inclusions. The script emitted errors for ~400
    files.

    2. Each error was manually checked. Some didn't need the inclusion,
    some needed manual addition while adding it to implementation .h or
    embedding .c file was more appropriate for others. This step added
    inclusions to around 150 files.

    3. The script was run again and the output was compared to the edits
    from #2 to make sure no file was left behind.

    4. Several build tests were done and a couple of problems were fixed.
    e.g. lib/decompress_*.c used malloc/free() wrappers around slab
    APIs requiring slab.h to be added manually.

    5. The script was run on all .h files but without automatically
    editing them as sprinkling gfp.h and slab.h inclusions around .h
    files could easily lead to inclusion dependency hell. Most gfp.h
    inclusion directives were ignored as stuff from gfp.h was usually
    widely available and often used in preprocessor macros. Each
    slab.h inclusion directive was examined and added manually as
    necessary.

    6. percpu.h was updated not to include slab.h.

    7. Build tests were done on the following configurations and failures
    were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
    distributed build env didn't work with gcov compiles) and a few
    more options had to be turned off depending on archs to make things
    build (like ipr on powerpc/64 which failed due to missing writeq).

    * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig

    8. percpu.h modifications were reverted so that they could be applied as
    a separate patch and serve as a bisection point.

    Given the fact that I had only a couple of failures from tests on step
    6, I'm fairly confident about the coverage of this conversion patch.
    If there is a breakage, it's likely to be something in one of the arch
    headers which should be easily discoverable on most builds of
    the specific arch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

    Tejun Heo
     
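    In practice the conversion amounts to additions like this in files that
    use the slab and gfp APIs (file content illustrative):

    /* previously compiled only because sched.h -> percpu.h -> slab.h */
    #include <linux/sched.h>

    /* now stated explicitly by the user of the API */
    #include <linux/gfp.h>
    #include <linux/slab.h>

    static int *alloc_counter(void)
    {
        return kzalloc(sizeof(int), GFP_KERNEL);
    }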

10 Mar, 2010

2 commits

  • Drop the obsolete "profile" naming used by perf for trace events.
    Perf can now do more than simple event counting, so generalize
    the API naming.

    Signed-off-by: Frederic Weisbecker
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Cc: Paul Mackerras
    Cc: Steven Rostedt
    Cc: Masami Hiramatsu
    Cc: Jason Baron

    Frederic Weisbecker
     
  • We are taking a wrong regs snapshot when a trace event triggers.
    Either we use get_irq_regs(), which gives us the interrupted
    registers if we are in an interrupt, or we use task_pt_regs()
    which gives us the state before we entered the kernel, assuming
    we are lucky enough not to be a kernel thread; if we are, then
    task_pt_regs() returns the initial set of regs from when the kernel
    thread was started.

    What we want is different. We need a hot snapshot of the regs,
    so that we can get the instruction pointer to record in the
    sample, the frame pointer for the callchain, and some other
    things.

    Let's use the new perf_fetch_caller_regs() for that.

    Comparison with perf record -e lock: -R -a -f -g
    Before:

    perf  [kernel]  [k] __do_softirq
       |
       --- __do_softirq
          |
          |--55.16%-- __open
          |
           --44.84%-- __write_nocancel

    After:

    perf  [kernel]  [k] perf_tp_event
       |
       --- perf_tp_event
          |
          |--41.07%-- lock_acquire
          |          |
          |          |--39.36%-- _raw_spin_lock
          |          |          |
          |          |          |--7.81%-- hrtimer_interrupt
          |          |          |          smp_apic_timer_interrupt
          |          |          |          apic_timer_interrupt

    The old case was producing unreliable callchains. Now that we have
    the right frame and instruction pointers, we get the trace we
    want.

    Also, syscall and kprobe events already have the right regs;
    let's use them instead of wasting a retrieval.

    v2: Follow the rename perf_save_regs() -> perf_fetch_caller_regs()

    Signed-off-by: Frederic Weisbecker
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: H. Peter Anvin
    Cc: Peter Zijlstra
    Cc: Paul Mackerras
    Cc: Steven Rostedt
    Cc: Arnaldo Carvalho de Melo
    Cc: Masami Hiramatsu
    Cc: Jason Baron
    Cc: Archs

    Frederic Weisbecker
     
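    The new call is as simple as the sketch below (function name illustrative,
    surrounding sampling code elided):

    #include <linux/perf_event.h>

    static void notrace record_sample_regs(void)
    {
        struct pt_regs regs;

        /* Hot snapshot of the current registers, rather than
         * get_irq_regs() or task_pt_regs(): the instruction and frame
         * pointers now describe this very call site, giving reliable
         * sample IPs and callchains. */
        perf_fetch_caller_regs(&regs);

        /* ... hand &regs to the perf sampling code ... */
    }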

01 Mar, 2010

1 commit

  • …git/tip/linux-2.6-tip

    * 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (172 commits)
    perf_event, amd: Fix spinlock initialization
    perf_event: Fix preempt warning in perf_clock()
    perf tools: Flush maps on COMM events
    perf_events, x86: Split PMU definitions into separate files
    perf annotate: Handle samples not at objdump output addr boundaries
    perf_events, x86: Remove superflous MSR writes
    perf_events: Simplify code by removing cpu argument to hw_perf_group_sched_in()
    perf_events, x86: AMD event scheduling
    perf_events: Add new start/stop PMU callbacks
    perf_events: Report the MMAP pgoff value in bytes
    perf annotate: Defer allocating sym_priv->hist array
    perf symbols: Improve debugging information about symtab origins
    perf top: Use a macro instead of a constant variable
    perf symbols: Check the right return variable
    perf/scripts: Tag syscall_name helper as not yet available
    perf/scripts: Add perf-trace-python Documentation
    perf/scripts: Remove unnecessary PyTuple resizes
    perf/scripts: Add syscall tracing scripts
    perf/scripts: Add Python scripting engine
    perf/scripts: Remove check-perf-trace from listed scripts
    ...

    Fix trivial conflict in tools/perf/util/probe-event.c

    Linus Torvalds
     

17 Feb, 2010

1 commit

  • Most implementations of arch_syscall_addr() are the same, so create a
    default version in common code and move the one piece that differs (the
    syscall table) to asm/syscall.h. New arch ports don't have to waste
    time copying & pasting this simple function.

    The s390/sparc versions need to be different, so document why.

    Signed-off-by: Mike Frysinger
    Acked-by: David S. Miller
    Acked-by: Paul Mundt
    Acked-by: Heiko Carstens
    Cc: Steven Rostedt
    LKML-Reference:
    Signed-off-by: Frederic Weisbecker

    Mike Frysinger
     

29 Jan, 2010

1 commit

  • Introduce ftrace_perf_buf_prepare() and ftrace_perf_buf_submit() to
    gather the common code that operates on the raw event sampling buffer.
    This cleans up redundant code between regular trace events, syscall
    events and kprobe events.

    Changelog v1->v2:
    - Rename function name as per Masami and Frederic's suggestion
    - Add __kprobes for ftrace_perf_buf_prepare() and make
    ftrace_perf_buf_submit() inline as per Masami's suggestion
    - Export ftrace_perf_buf_prepare since modules will use it

    Signed-off-by: Xiao Guangrong
    Acked-by: Masami Hiramatsu
    Cc: Ingo Molnar
    Cc: Steven Rostedt
    Cc: Paul Mackerras
    Cc: Jason Baron
    Cc: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Frederic Weisbecker

    Xiao Guangrong
     
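    The resulting usage pattern looks roughly like the sketch below; the
    parameter lists are paraphrased from this changelog, so treat the exact
    signatures as an assumption and check the kernel source of that era:

    static void perf_trace_foo(struct ftrace_event_call *call, int size)
    {
        unsigned long irq_flags;
        void *entry;
        int rctx;

        /* reserve a raw sample buffer and a recursion context */
        entry = ftrace_perf_buf_prepare(size, call->id, &rctx, &irq_flags);
        if (!entry)
            return;

        /* ... fill in the event-specific fields of 'entry' ... */

        /* submit to perf and release the recursion context */
        ftrace_perf_buf_submit(entry, size, rctx, 0, 1, irq_flags);
    }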

07 Jan, 2010

3 commits

    The previous patches added the use of the print_fmt string and changed
    the trace_define_field() function to also create the fields and
    format output for the event format files.

    text data bss dec hex filename
    5857201 1355780 9336808 16549789 fc879d vmlinux
    5884589 1351684 9337896 16574169 fce6d9 vmlinux-orig

    The above shows the size of the vmlinux after this patch set
    compared to the vmlinux-orig which is before the patch set.

    This saves us 27k on text and 1k on bss, and adds just 4k of data.

    The total savings is 24k in size.

    Signed-off-by: Lai Jiangshan
    LKML-Reference:
    Acked-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt

    Lai Jiangshan
     
  • In the clean up of having all events call one specific function,
    the syscall event init was changed to call this helper function.

    With the new print_fmt updates, the syscalls need to do special
    initializations. This patch converts the syscall events to call
    their own init function again.

    Cc: Lai Jiangshan
    Cc: Li Zefan
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • This is part of a patch set that removes the show_format method
    in the ftrace event macros.

    Add the print_fmt initialization to the syscall events.
    The print_fmt is still not used, but will be in the follow up
    patches.

    Signed-off-by: Lai Jiangshan
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Lai Jiangshan
     

28 Dec, 2009

1 commit

  • Quoted from Ingo:

    | This reminds me - i think we should eliminate CONFIG_EVENT_PROFILE -
    | it's an unnecessary Kconfig complication. If both PERF_EVENTS and
    | EVENT_TRACING is enabled we should expose generic tracepoints.
    |
    | Nor is it limited to event 'profiling', so it has become a misnomer as
    | well.

    Signed-off-by: Li Zefan
    Cc: Frederic Weisbecker
    Cc: Steven Rostedt
    Cc: Peter Zijlstra
    Cc: Paul Mackerras
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Li Zefan
     

14 Dec, 2009

2 commits

  • Move the printk from each ftrace_raw_reg_event_foo() to
    its caller ftrace_event_enable_disable(). This avoids having each
    regfunc trace event callback handle the same error report,
    which can be carried by the caller instead.

    See how much space this saves:

    text data bss dec hex filename
    5345151 1961864 7103260 14410275 dbe223 vmlinux.o.old
    5331487 1961864 7103260 14396611 dbacc3 vmlinux.o

    Signed-off-by: Li Zefan
    Acked-by: Steven Rostedt
    Cc: Jason Baron
    LKML-Reference:
    [start cmdline record before calling regfunc to avoid lost
    window of pid to comm resolution]
    Signed-off-by: Frederic Weisbecker

    Li Zefan
     
  • Call trace_define_common_fields() in event_create_dir() only.
    This avoids trace events having to handle it from their define_fields
    callbacks and shrinks the kernel code size:

    text data bss dec hex filename
    5346802 1961864 7103260 14411926 dbe896 vmlinux.o.old
    5345151 1961864 7103260 14410275 dbe223 vmlinux.o

    Signed-off-by: Li Zefan
    Acked-by: Steven Rostedt
    Cc: Ingo Molnar
    Cc: Jason Baron
    Cc: Masami Hiramatsu
    LKML-Reference:
    Signed-off-by: Frederic Weisbecker

    Li Zefan
     
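    That is, the common fields are now defined in one place, right next to
    the per-event callback; a sketch only, with the exact shape of the real
    code treated as an assumption:

    /* inside event_create_dir() */
    ret = trace_define_common_fields(call);
    if (!ret)
        ret = call->define_fields(call);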

02 Dec, 2009

4 commits