02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boilerplate text.
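
    For example, in a C source file the tag is a single comment line at the
    top of the file:

    // SPDX-License-Identifier: GPL-2.0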

    This patch is based on work done by Thomas Gleixner, Kate Stewart, and
    Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset of
    the use cases:
    - file had no licensing information in it,
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information.

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier to be applied to
    a file was done in a spreadsheet of side-by-side results from the
    output of two independent scanners (ScanCode & Windriver) producing SPDX
    tag:value files created by Philippe Ombredanne. Philippe prepared the
    base worksheet, and did an initial spot review of a few thousand files.

    The 4.13 kernel was the starting point of the analysis with 60,537 files
    assessed. Kate Stewart did a file by file comparison of the scanner
    results in the spreadsheet to determine which SPDX license identifier(s)
    should be applied to the file. She confirmed any determination that was not
    immediately clear with lawyers working with the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging were:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
    lines of source.
    - File already had some variant of a license header in it (even if <5
    lines).

    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

20 Sep, 2017

1 commit

  • The mmiotrace tracer cannot be enabled with ftrace=mmiotrace on the
    kernel command line. With this patch, noboot is added to the tracer
    struct, and when the system boots with a tracer that has noboot=true,
    it will print out a warning message and continue booting.
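
    A minimal sketch of the idea (the check location and message here are
    illustrative, not the exact kernel code):

    struct tracer {
            const char      *name;
            /* ... */
            bool            noboot; /* cannot be selected via ftrace= at boot */
    };

    /* when the ftrace= tracer from the command line is set up */
    if (t->noboot) {
            pr_warn("Tracer '%s' is not usable from the kernel command line\n",
                    t->name);
            /* skip it and continue booting */
    }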

    Link: http://lkml.kernel.org/r/1505111195-31942-1-git-send-email-zsun@redhat.com

    Signed-off-by: Ziqian SUN (Zamir)
    Signed-off-by: Steven Rostedt (VMware)

    Ziqian SUN (Zamir)
     

20 Jun, 2016

1 commit

  • Previously, mmio_print_pcidev() put "user" addresses in the trace buffer.
    On most architectures, these are the same as CPU physical addresses, but on
    microblaze, mips, powerpc, and sparc, they may be something else, typically
    a raw BAR value (a bus address as opposed to a CPU address).

    Always expose the CPU physical address to avoid this arch-dependent
    behavior.

    This change should have no user-visible effect because this file currently
    depends on CONFIG_HAVE_MMIOTRACE_SUPPORT, which is only defined for x86,
    and pci_resource_to_user() is a no-op on x86.
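
    Sketched, the change in mmio_print_pcidev() amounts to the following
    (approximate, for illustration):

    /* before: arch-dependent "user" view of the BAR */
    pci_resource_to_user(dev, i, &dev->resource[i], &start, &end);
    trace_seq_printf(s, " %llx", (unsigned long long)start);

    /* after: always the CPU physical address */
    trace_seq_printf(s, " %llx",
                     (unsigned long long)(dev->resource[i].start));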

    Link: http://lkml.kernel.org/r/20160511190657.5898.4248.stgit@bhelgaas-glaptop2.roam.corp.google.com

    Signed-off-by: Bjorn Helgaas
    Signed-off-by: Steven Rostedt

    Bjorn Helgaas
     

23 Mar, 2016

1 commit

  • Use the more common pr_warn() logging call with the eventual goal of
    removing pr_warning() altogether (a representative conversion is sketched
    below).

    Miscellanea:

    - Realign arguments
    - Coalesce formats
    - Add missing space between a few coalesced formats
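
    A representative conversion of the kind described above (the message
    text is made up for illustration):

    /* before */
    pr_warning("example: allocation failed: %d\n", err);
    /* after */
    pr_warn("example: allocation failed: %d\n", err);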

    Signed-off-by: Joe Perches
    Acked-by: Rafael J. Wysocki [kernel/power/suspend.c]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     

26 Sep, 2015

1 commit

  • In preparation for having trace options be per instance, the trace_array
    needs to be passed to trace_buffer_unlock_commit(). The
    trace_event_buffer_lock_reserve() already passes in the trace_event_file,
    from which the trace_array can be derived.
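
    The resulting prototype, roughly (sketch; the exact parameter list may
    differ slightly):

    void trace_buffer_unlock_commit(struct trace_array *tr,
                                    struct ring_buffer *buffer,
                                    struct ring_buffer_event *event,
                                    unsigned long flags, int pc);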

    Also added a "__init" to the boot up test event plus function tracing
    function function_test_events_call().

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

14 May, 2015

1 commit


20 Nov, 2014

1 commit

  • The return values for trace_seq_printf() and friends are going to be
    removed and they will become void functions. The mmio tracer checked
    their return and even did so incorrectly.

    Some of the functions which returned the values were never checked
    themselves. Removing all the checks simplifies the code.

    Use trace_seq_has_overflowed() and trace_handle_return() where
    necessary instead.
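
    The typical output callback then follows this pattern (sketch; the
    function name is made up):

    static enum print_line_t example_trace_output(struct trace_iterator *iter)
    {
            struct trace_seq *s = &iter->seq;

            /* no per-call return value checks any more */
            trace_seq_printf(s, "example event data\n");

            /* PARTIAL_LINE only if the seq buffer actually overflowed */
            return trace_handle_return(s);
    }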

    Reviewed-by: Petr Mladek
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

06 Nov, 2013

1 commit

  • The trace event filters are still tied to event calls rather than
    event files, which means you don't get what you'd expect when using
    filters in the multibuffer case:

    Before:

    # echo 'bytes_alloc > 8192' > /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
    # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
    bytes_alloc > 8192
    # mkdir /sys/kernel/debug/tracing/instances/test1
    # echo 'bytes_alloc > 2048' > /sys/kernel/debug/tracing/instances/test1/events/kmem/kmalloc/filter
    # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
    bytes_alloc > 2048
    # cat /sys/kernel/debug/tracing/instances/test1/events/kmem/kmalloc/filter
    bytes_alloc > 2048

    Setting the filter in tracing/instances/test1/events shouldn't affect
    the same event in tracing/events as it does above.

    After:

    # echo 'bytes_alloc > 8192' > /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
    # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
    bytes_alloc > 8192
    # mkdir /sys/kernel/debug/tracing/instances/test1
    # echo 'bytes_alloc > 2048' > /sys/kernel/debug/tracing/instances/test1/events/kmem/kmalloc/filter
    # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/filter
    bytes_alloc > 8192
    # cat /sys/kernel/debug/tracing/instances/test1/events/kmem/kmalloc/filter
    bytes_alloc > 2048

    We'd like to just move the filter directly from ftrace_event_call to
    ftrace_event_file, but there are a couple cases that don't yet have
    multibuffer support and therefore have to continue using the current
    event_call-based filters. For those cases, a new USE_CALL_FILTER bit
    is added to the event_call flags, whose main purpose is to keep the
    old behavior for those cases until they can be updated with
    multibuffer support; at that point, the USE_CALL_FILTER flag (and the
    new associated call_filter_check_discard() function) can go away.

    The multibuffer support also made filter_current_check_discard()
    redundant, so this change removes that function as well and replaces
    it with filter_check_discard() (or call_filter_check_discard() as
    appropriate).
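
    In the commit path this looks roughly like the following (sketch; exact
    signatures approximate):

    /* multibuffer-aware events filter on the event file */
    if (!filter_check_discard(ftrace_file, entry, buffer, event))
            trace_buffer_unlock_commit(buffer, event, irq_flags, pc);

    /* USE_CALL_FILTER cases keep filtering on the event call */
    if (!call_filter_check_discard(call, entry, buffer, event))
            trace_buffer_unlock_commit(buffer, event, irq_flags, pc);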

    Link: http://lkml.kernel.org/r/f16e9ce4270c62f46b2e966119225e1c3cca7e60.1382620672.git.tom.zanussi@linux.intel.com

    Signed-off-by: Tom Zanussi
    Signed-off-by: Steven Rostedt

    Tom Zanussi
     

19 Jul, 2013

1 commit


15 Mar, 2013

2 commits

  • Currently, the way the latency tracers and snapshot feature works
    is to have a separate trace_array called "max_tr" that holds the
    snapshot buffer. For latency tracers, this snapshot buffer is used
    to swap the running buffer with this buffer to save the current max
    latency.

    The only items needed for the max_tr are really just a copy of the buffer
    itself, the per_cpu data pointers, the time_start timestamp that states
    when the max latency was triggered, and the cpu that the max latency
    was triggered on. All other fields in trace_array are unused by the
    max_tr, making the max_tr mostly bloat.

    This change removes the max_tr completely, and adds a new structure
    called trace_buffer, that holds the buffer pointer, the per_cpu data
    pointers, the time_start timestamp, and the cpu where the latency occurred.

    The trace_array now has two trace_buffers, one for the normal trace and
    one for the max trace or snapshot. By doing this, not only do we remove
    the bloat from the max_tr, but trace instances can now use their own
    snapshot feature, instead of only the top-level global_trace having the
    snapshot feature and latency tracers.
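
    The resulting layout is roughly (field list approximate):

    struct trace_buffer {
            struct trace_array              *tr;
            struct ring_buffer              *buffer;
            struct trace_array_cpu __percpu *data;
            u64                             time_start; /* when the max latency hit */
            int                             cpu;        /* cpu that hit it */
    };

    struct trace_array {
            /* ... */
            struct trace_buffer     trace_buffer;   /* normal trace */
            struct trace_buffer     max_buffer;     /* max trace / snapshot */
    };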

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • The global and max-tr currently use static per_cpu arrays for the CPU data
    descriptors. But in order to support newly allocated trace_arrays, these
    need to be allocated per_cpu arrays as well. Instead of using the static
    arrays, switch the global and max-tr to use allocated data.
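
    In other words, something along these lines (sketch; the field name is
    approximate):

    /* replaces the static per-CPU trace_array_cpu array */
    tr->data = alloc_percpu(struct trace_array_cpu);
    if (!tr->data)
            return -ENOMEM;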

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

27 Jul, 2011

1 commit

  • This allows us to move duplicated code in <asm/atomic.h>
    (atomic_inc_not_zero() for now) to <linux/atomic.h>.

    Signed-off-by: Arun Sharma
    Reviewed-by: Eric Dumazet
    Cc: Ingo Molnar
    Cc: David Miller
    Cc: Eric Dumazet
    Acked-by: Mike Frysinger
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arun Sharma
     

30 Mar, 2010

1 commit

  • include cleanup: update gfp.h and slab.h includes to prepare for breaking
    implicit slab.h inclusion from percpu.h

    percpu.h is included by sched.h and module.h and thus ends up being
    included when building most .c files. percpu.h includes slab.h which
    in turn includes gfp.h making everything defined by the two files
    universally available and complicating inclusion dependencies.

    percpu.h -> slab.h dependency is about to be removed. Prepare for
    this change by updating users of gfp and slab facilities to include those
    headers directly instead of assuming availability. As this conversion
    needs to touch a large number of source files, the following script is
    used as the basis of the conversion (a minimal example of the resulting
    include change follows the script description below).

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

    The script does the following:

    * Scan files for gfp and slab usages and update includes such that
    only the necessary includes are there, i.e. if only gfp is used,
    gfp.h; if slab is used, slab.h.

    * When the script inserts a new include, it looks at the include
    blocks and tries to put the new include such that its order conforms
    to its surroundings. It's put in the include block which contains
    core kernel includes, in the same order that the rest are ordered:
    alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
    doesn't seem to be any matching order.

    * If the script can't find a place to put a new include (mostly
    because the file doesn't have a fitting include block), it prints out
    an error message indicating which .h file needs to be added to the
    file.
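
    For a hypothetical file that used kmalloc() but only pulled in slab.h
    indirectly, the resulting change is simply:

    #include <linux/percpu.h>
    #include <linux/slab.h>   /* now included directly instead of via percpu.h */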

    The conversion was done in the following steps.

    1. The initial automatic conversion of all .c files updated slightly
    over 4000 files, deleting around 700 includes and adding ~480 gfp.h
    and ~3000 slab.h inclusions. The script emitted errors for ~400
    files.

    2. Each error was manually checked. Some didn't need the inclusion,
    some needed manual addition while adding it to implementation .h or
    embedding .c file was more appropriate for others. This step added
    inclusions to around 150 files.

    3. The script was run again and the output was compared to the edits
    from #2 to make sure no file was left behind.

    4. Several build tests were done and a couple of problems were fixed.
    e.g. lib/decompress_*.c used malloc/free() wrappers around slab
    APIs requiring slab.h to be added manually.

    5. The script was run on all .h files but without automatically
    editing them as sprinkling gfp.h and slab.h inclusions around .h
    files could easily lead to inclusion dependency hell. Most gfp.h
    inclusion directives were ignored as stuff from gfp.h was usually
    widely available and often used in preprocessor macros. Each
    slab.h inclusion directive was examined and added manually as
    necessary.

    6. percpu.h was updated not to include slab.h.

    7. Build tests were done on the following configurations and failures
    were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
    distributed build env didn't work with gcov compiles) and a few
    more options had to be turned off depending on archs to make things
    build (like ipr on powerpc/64 which failed due to missing writeq).

    * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig

    8. percpu.h modifications were reverted so that it could be applied as
    a separate patch and serve as bisection point.

    Given the fact that I had only a couple of failures from tests on step
    6, I'm fairly confident about the coverage of this conversion patch.
    If there is a breakage, it's likely to be something in one of the arch
    headers which should be easily discoverable on most builds of the
    specific arch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

    Tejun Heo
     

13 Sep, 2009

1 commit


05 Sep, 2009

1 commit

  • The latency tracers (irqsoff and wakeup) can swap trace buffers
    on the fly. If an event is happening and has reserved data on one of
    the buffers, and the latency tracer swaps the global buffer with the
    max buffer, the result is that the event may commit the data to the
    wrong buffer.

    This patch changes the trace recording API to receive the buffer that
    was used to reserve a commit. This buffer can then be passed in to the
    commit.
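
    The resulting calling convention, sketched (exact signatures vary a bit
    across the tracers):

    struct ring_buffer *buffer = tr->buffer;
    struct ring_buffer_event *event;

    event = trace_buffer_lock_reserve(buffer, type, sizeof(*entry), flags, pc);
    if (!event)
            return;
    /* ... fill in the entry ... */
    trace_buffer_unlock_commit(buffer, event, flags, pc);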

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

07 Apr, 2009

1 commit


20 Mar, 2009

1 commit


13 Mar, 2009

1 commit

  • Impact: fix callsites with dynamic format strings

    Since its new binary implementation, trace_printk() internally uses static
    containers for the format strings at each callsite. But the value is
    assigned once at build time, which means that it can't take dynamic
    formats.

    So this patch unearths the raw trace_printk implementation for the callers
    that will need trace_printk to be able to carry these dynamic format
    strings. The trace_printk() macro will use the appropriate implementation
    for each callsite. Most of the time however, the binary implementation will
    still be used.
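
    The dispatch happens at the macro level, roughly like this (simplified
    sketch, not the exact kernel macro):

    #define trace_printk(fmt, args...)                               \
    do {                                                             \
            if (__builtin_constant_p(fmt))                           \
                    /* constant format: fast binary path */          \
                    __trace_bprintk(_THIS_IP_, fmt, ##args);         \
            else                                                     \
                    /* dynamic format: raw string path */            \
                    __trace_printk(_THIS_IP_, fmt, ##args);          \
    } while (0)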

    The other impact of this patch is that mmiotrace_printk() will use the old
    implementation because it calls the low level trace_vprintk and we can't
    guess here whether the format passed in it is dynamic or not.

    Some parts of this patch have been written by Steven Rostedt (most notably
    the part that chooses the appropriate implementation for each callsite).

    Signed-off-by: Frederic Weisbecker
    Signed-off-by: Steven Rostedt

    Frederic Weisbecker
     

07 Mar, 2009

1 commit

  • Impact: faster and lighter tracing

    Now that we have trace_bprintk(), which is faster, consumes less
    memory than trace_printk(), and has the same purpose, we can now drop
    the old implementation in favour of the binary one from trace_bprintk().
    This means we move all the implementation of trace_bprintk() to
    trace_printk(), so the API doesn't change, except that we must now use
    trace_seq_bprintk() to print the TRACE_PRINT entries.

    Some changes resulting from this:

    - Previously, trace_bprintk depended on a single tracer and couldn't
    work without it. This tracer has been dropped, and the whole implementation
    of trace_printk() (like the module format management) is now integrated
    in the tracing core (comes with CONFIG_TRACING), though we keep the file
    trace_printk.c (previously trace_bprintk.c) where we can find the module
    management. Thus we don't overflow trace.c.

    - changes some parts to use trace_seq_bprintk() to print TRACE_PRINT entries.

    - changes the trace_printk/trace_vprintk macros a bit to support non-builtin
    format constants, and fixes 'const' qualifier warnings. But this is all
    transparent to developers.

    - etc...

    V2:

    - Rebase against last changes
    - Fix misspelling in the changelog

    V3:

    - Rebase against last changes (moving trace_printk() to kernel.h)

    Signed-off-by: Frederic Weisbecker
    Acked-by: Steven Rostedt
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker
     

06 Feb, 2009

2 commits

  • Impact: new API

    These new functions do what was previously being open coded, reducing
    the number of details ftrace plugin writers have to worry about (a usage
    sketch follows the size comparison below).

    It also standardizes the handling of stacktrace, userstacktrace and
    other trace options we may introduce in the future.

    With this patch, for instance, the blk tracer (and some others already
    in the tree) can use the "userstacktrace" /d/tracing/trace_options
    facility.

    $ codiff /tmp/vmlinux.before /tmp/vmlinux.after
    linux-2.6-tip/kernel/trace/trace.c:
    trace_vprintk | -5
    trace_graph_return | -22
    trace_graph_entry | -26
    trace_function | -45
    __ftrace_trace_stack | -27
    ftrace_trace_userstack | -29
    tracing_sched_switch_trace | -66
    tracing_stop | +1
    trace_seq_to_user | -1
    ftrace_trace_special | -63
    ftrace_special | +1
    tracing_sched_wakeup_trace | -70
    tracing_reset_online_cpus | -1
    13 functions changed, 2 bytes added, 355 bytes removed, diff: -353

    linux-2.6-tip/block/blktrace.c:
    __blk_add_trace | -58
    1 function changed, 58 bytes removed, diff: -58

    linux-2.6-tip/kernel/trace/trace.c:
    trace_buffer_lock_reserve | +88
    trace_buffer_unlock_commit | +86
    2 functions changed, 174 bytes added, diff: +174

    /tmp/vmlinux.after:
    16 functions changed, 176 bytes added, 413 bytes removed, diff: -237
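
    For reference, a plugin using the new helpers does roughly this (sketch;
    2009-era signatures and the TRACE_EXAMPLE id are illustrative):

    event = trace_buffer_lock_reserve(tr, TRACE_EXAMPLE, sizeof(*entry),
                                      flags, pc);
    if (!event)
            return;
    entry = ring_buffer_event_data(event);
    /* ... fill in the entry ... */
    trace_buffer_unlock_commit(tr, event, flags, pc);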

    Signed-off-by: Arnaldo Carvalho de Melo
    Acked-by: Frédéric Weisbecker
    Signed-off-by: Ingo Molnar

    Arnaldo Carvalho de Melo
     
  • Impact: API change, cleanup

    The unused flags parameter is removed from ring_buffer_{lock_reserve,unlock_commit}.

    $ codiff /tmp/vmlinux.before /tmp/vmlinux.after
    linux-2.6-tip/kernel/trace/trace.c:
    trace_vprintk | -14
    trace_graph_return | -14
    trace_graph_entry | -10
    trace_function | -8
    __ftrace_trace_stack | -8
    ftrace_trace_userstack | -8
    tracing_sched_switch_trace | -8
    ftrace_trace_special | -12
    tracing_sched_wakeup_trace | -8
    9 functions changed, 90 bytes removed, diff: -90

    linux-2.6-tip/block/blktrace.c:
    __blk_add_trace | -1
    1 function changed, 1 bytes removed, diff: -1

    /tmp/vmlinux.after:
    10 functions changed, 91 bytes removed, diff: -91

    Signed-off-by: Arnaldo Carvalho de Melo
    Acked-by: Frédéric Weisbecker
    Signed-off-by: Ingo Molnar

    Arnaldo Carvalho de Melo
     

16 Jan, 2009

1 commit


11 Jan, 2009

1 commit

  • Impact: enhances lost events counting in mmiotrace

    The tracing framework, or the ring buffer facility it uses, has a switch
    to stop recording data. When recording is off, the trace events will be
    lost. The framework does not count these, so mmiotrace has to count them
    itself.

    Signed-off-by: Pekka Paalanen
    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Pekka Paalanen
     

29 Dec, 2008

2 commits

  • Impact: simplify/generalize/refactor trace.c

    The trace.c file is becoming more difficult to maintain due to the
    growing number of events. There are several formats in which an event may
    be printed. This patch sets up the infrastructure of an event hash to
    allow events to register how they should be printed.
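
    Registration looks roughly like this (sketch; the event name, id, and
    output function are made up):

    static struct trace_event example_trace_event = {
            .type   = TRACE_EXAMPLE,
            .trace  = example_output,       /* how to print this entry */
    };

    /* done once, e.g. at tracer init time */
    register_ftrace_event(&example_trace_event);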

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • Impact: cleanup, remove obsolete code

    Now that the ring buffer used by ftrace allows for variable length
    entries, we do not need the 'cont' feature of the buffer. This code
    makes other parts of ftrace more complex, and removing it simplifies
    the ftrace code.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     

19 Dec, 2008

1 commit


04 Dec, 2008

1 commit

  • Handle the TRACE_PRINT entries from the function graph tracer
    and output them as a C comment just below the function that called
    it, as if it were a comment inside this function.

    Example with an ftrace_printk inside the might_sleep() function:

    void __might_sleep(char *file, int line)
    {
    static unsigned long prev_jiffy; /* ratelimiting */

    ftrace_printk("Hi I'm a comment in might_sleep() :-)");

    A chunk of a resulting trace:

    0) | _reiserfs_free_block() {
    0) | reiserfs_read_bitmap_block() {
    0) | __bread() {
    0) | __getblk() {
    0) | __find_get_block() {
    0) 0.698 us | mark_page_accessed();
    0) 2.267 us | }
    0) | __might_sleep() {
    0) | /* Hi I'm a comment in might_sleep() :-) */
    0) 1.321 us | }
    0) 5.872 us | }
    0) 7.313 us | }
    0) 8.718 us | }

    And this patch brings two minor fixes:

    - The newline after a switch-out task has disappeared
    - The "|" sign just before the cpu number on task-switch has been deleted.

    0) 0.616 us | pick_next_task_rt();
    0) 1.457 us | _spin_trylock();
    0) 0.653 us | _spin_unlock();
    0) 0.728 us | _spin_trylock();
    0) 0.631 us | _spin_unlock();
    0) 0.729 us | native_load_sp0();
    0) 0.593 us | native_load_tls();
    ------------------------------------------
    0) cat-2834 => migrati-3
    ------------------------------------------

    0) | finish_task_switch() {
    0) 0.841 us | _spin_unlock_irq();
    0) 0.616 us | post_schedule_rt();
    0) 3.882 us | }

    Signed-off-by: Frederic Weisbecker
    Acked-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker
     

25 Nov, 2008

1 commit


24 Nov, 2008

1 commit

  • Impact: fix mmiotrace overrun tracing

    When the ftrace framework moved to use the ring buffer facility, the buffer
    overrun detection was broken after 2.6.27 by commit

    | commit 3928a8a2d98081d1bc3c0a84a2d70e29b90ecf1c
    | Author: Steven Rostedt
    | Date: Mon Sep 29 23:02:41 2008 -0400
    |
    | ftrace: make work with new ring buffer
    |
    | This patch ports ftrace over to the new ring buffer.

    The detection is now fixed by using the ring buffer API.

    When mmiotrace detects a buffer overrun, it will report the number of
    lost events. People reading an mmiotrace log must know if something was
    missed, otherwise the data may not make sense.

    Signed-off-by: Pekka Paalanen
    Acked-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Pekka Paalanen
     

16 Nov, 2008

1 commit

  • Impact: extend the ->init() method with the ability to fail

    This brings a way to know if the initialization of a tracer succeeded.
    A tracer must return 0 on success and a traditional error (i.e.
    -ENOMEM) if it fails.

    If a tracer fails to init, it is free to print a detailed warning. The
    tracing API will not, and switching to a new tracer will just return the
    error from the init callback.
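
    An init callback therefore looks like this (illustrative; the helper is
    made up):

    static int example_tracer_init(struct trace_array *tr)
    {
            if (!example_setup_buffers(tr))
                    return -ENOMEM; /* reported to whoever selected the tracer */
            return 0;
    }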

    Note: this will be used for the return tracer.

    Signed-off-by: Frederic Weisbecker
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker
     

08 Nov, 2008

2 commits


14 Oct, 2008

8 commits

  • With the new ring buffer infrastructure in ftrace, I'm trying to make
    ftrace a little more lightweight.

    This patch converts a lot of the local_irq_save/restore into
    preempt_disable/enable. The original preempt count in a lot of cases
    has to be sent in as a parameter so that it can be recorded correctly.
    Some places were recording it incorrectly before anyway.

    This is also laying the ground work to make ftrace a little bit
    more reentrant, and remove all locking. The function tracers must
    still protect from reentrancy.

    Note: All the function tracers must be careful when using preempt_disable.
    It must do the following:

    resched = need_resched();
    preempt_disable_notrace();
    [...]
    if (resched)
            preempt_enable_no_resched_notrace();
    else
            preempt_enable_notrace();

    The reason is that if this function traces schedule() itself, the
    preempt_enable_notrace() will cause a schedule, which will lead
    us into a recursive failure.

    If we needed to reschedule before calling preempt_disable, we
    should have already scheduled. Since we did not, this is most
    likely that we should not and are probably inside a schedule
    function.

    If resched was not set, we still need to catch the need resched
    flag being set when preemption was off and the if case at the
    end will catch that for us.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • The mmiotrace map had a bug that would typecast the entry from
    the trace to the wrong type. That is a known danger of C typecasts;
    there's absolutely zero checking done on them.

    Help that problem a bit by using a GCC extension to implement a
    type filter that restricts the types that a trace record can be
    cast into, and by adding a dynamic check (in debug mode) to verify
    the type of the entry.

    This patch adds a macro to assign all entries of ftrace using the type
    of the variable and checking the entry id. The typecasts are now done
    in the macro for only those types that it knows about, which should
    be all the types that are allowed to be read from the tracer.
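
    The idea, sketched (not the exact kernel macro): only perform the cast
    when the destination type matches, and sanity-check the entry id:

    #define assign_entry(var, entry, etype, id)                          \
            if (__builtin_types_compatible_p(typeof(var), etype *)) {   \
                    var = (typeof(var))(entry);                          \
                    WARN_ON((entry)->type != (id));                      \
            }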

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • Adapt mmiotrace to the new print_line type.
    By default, it ignores (and consumes) types it doesn't support.

    Signed-off-by: Frederic Weisbecker
    Acked-by: Pekka Paalanen
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker
     
  • Now that the underlying ring buffer for ftrace holds variable length
    entries, we can take advantage of this by only storing the size of the
    actual event into the buffer. This happens to increase the number of
    entries in the buffer dramatically.

    We can also get rid of the "trace_cont" operation, but I'm keeping that
    until we have no more users. Some of the ftrace tracers can now change
    their code to adapt to this new feature.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • This patch ports ftrace over to the new ring buffer.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • Also make trace_seq_print_cont() non-static, and add a newline if the
    seq buffer can't hold all data.

    Signed-off-by: Pekka Paalanen
    Acked-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Pekka Paalanen
     
  • Offer mmiotrace users a function to inject markers from inside the kernel.
    This depends on the trace_vprintk() patch.

    Signed-off-by: Pekka Paalanen
    Acked-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Pekka Paalanen
     
  • Moves the mmiotrace specific functions from trace.c to
    trace_mmiotrace.c. Functions trace_wake_up(), tracing_get_trace_entry(),
    and tracing_generic_entry_update() are therefore made available outside
    trace.c.

    Signed-off-by: Pekka Paalanen
    Acked-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Pekka Paalanen