16 Mar, 2019

1 commit

  • Pull tracing fixes and cleanups from Steven Rostedt:
    "This contains a series of last minute clean ups, small fixes and error
    checks"

    * tag 'trace-v5.1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
    tracing/probe: Verify alloc_trace_*probe() result
    tracing/probe: Check event/group naming rule at parsing
    tracing/probe: Check the size of argument name and body
    tracing/probe: Check event name length correctly
    tracing/probe: Check maxactive error cases
    tracing: kdb: Fix ftdump to not sleep
    trace/probes: Remove kernel doc style from non kernel doc comment
    tracing/probes: Make reserved_field_names static

    Linus Torvalds
     

17 Jan, 2019

2 commits

  • When printing multiple uprobe arguments as strings, the output for the
    earlier arguments would also include all later string arguments.

    This is best explained in an example:

    Consider adding a uprobe to a function at offset 0xa0 in strlib.so that
    receives two strings as parameters; we want to print both parameters
    when the uprobe is hit (on x86_64):

    $ echo 'p:func /lib/strlib.so:0xa0 +0(%di):string +0(%si):string' > \
    /sys/kernel/debug/tracing/uprobe_events

    When the function is called as func("foo", "bar") and we hit the probe,
    the trace file shows a line like the following:

    [...] func: (0x7f7e683706a0) arg1="foobar" arg2="bar"

    Note the extra "bar" printed as part of arg1. This behaviour stacks up
    for additional string arguments.

    The strings are stored in a dynamically growing part of the uprobe
    buffer by fetch_store_string() after copying them from userspace via
    strncpy_from_user(). The return value of strncpy_from_user() is then
    directly used as the required size for the string. However, this does
    not take the terminating null byte into account as the documentation
    for strncpy_from_user() clearly states that it "[...] returns the
    length of the string (not including the trailing NUL)" even though the
    null byte will be copied to the destination.

    Therefore, subsequent calls to fetch_store_string() will overwrite
    the terminating null byte of the most recently fetched string with
    the first character of the current string, leading to the
    "accumulation" of strings in earlier arguments in the output.

    Fix this by incrementing the return value of strncpy_from_user() by
    one if we did not hit the maximum buffer size.
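
    A minimal sketch of the fix, following the description above (not
    necessarily the exact kernel diff):

    ret = strncpy_from_user(dst, src, maxlen);
    if (ret == maxlen) {
            dst[ret - 1] = '\0';    /* truncated: force termination */
    } else if (ret >= 0) {
            /*
             * strncpy_from_user() copied the trailing NUL but did not
             * count it, so include it in the consumed size.
             */
            ret++;
    }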

    Link: http://lkml.kernel.org/r/20190116141629.5752-1-andreas.ziegler@fau.de

    Cc: Ingo Molnar
    Cc: stable@vger.kernel.org
    Fixes: 5baaa59ef09e ("tracing/probes: Implement 'memory' fetch method for uprobes")
    Acked-by: Masami Hiramatsu
    Signed-off-by: Andreas Ziegler
    Signed-off-by: Steven Rostedt (VMware)

    Andreas Ziegler
     
  • The subsystem-specific message prefix for uprobes was also
    "trace_kprobe: " instead of "trace_uprobe: " as described in
    the original commit message.

    Link: http://lkml.kernel.org/r/20190117133023.19292-1-andreas.ziegler@fau.de

    Cc: Ingo Molnar
    Cc: stable@vger.kernel.org
    Acked-by: Masami Hiramatsu
    Fixes: 7257634135c24 ("tracing/probe: Show subsystem name in messages")
    Signed-off-by: Andreas Ziegler
    Signed-off-by: Steven Rostedt (VMware)

    Andreas Ziegler
     

11 Dec, 2018

1 commit

    The trace_add/remove_event_call_nolock() functions were added to allow
    the trace_add/remove_event_call() code to be called when the event_mutex
    lock was already taken. Now that all callers are done within the
    event_mutex, there's no reason to have two different interfaces.

    Remove the current trace_add/remove_event_call() wrappers and rename the
    _nolock versions back to the original names.

    Link: http://lkml.kernel.org/r/154140866955.17322.2081425494660638846.stgit@devbox

    Acked-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     

09 Dec, 2018

3 commits

  • Use the dyn_event framework for uprobe events. This shows
    uprobe events in the "dynamic_events" file.
    Users can also define new uprobe events via dynamic_events.
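
    For example (the probed binary and offset below are hypothetical), a new
    uprobe event can be defined through dynamic_events using the same syntax
    as uprobe_events:

    # echo 'u:myprobe /bin/bash:0x4245c0' >> /sys/kernel/debug/tracing/dynamic_events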

    Link: http://lkml.kernel.org/r/154140858481.17322.9091293846515154065.stgit@devbox

    Reviewed-by: Tom Zanussi
    Tested-by: Tom Zanussi
    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt (VMware)

    Masami Hiramatsu
     
  • Integrate similar argument parsers for kprobes and uprobes events
    into traceprobe_parse_probe_arg().

    Link: http://lkml.kernel.org/r/154140850016.17322.9836787731210512176.stgit@devbox

    Reviewed-by: Tom Zanussi
    Tested-by: Tom Zanussi
    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt (VMware)

    Masami Hiramatsu
     
  • Add a busy check loop in cleanup_all_probes() before
    trying to remove all events in uprobe_events, the same way
    that kprobe_events does.

    Without this change, writing null to uprobe_events tries to
    remove events, but if one of them is enabled it stops there,
    leaving some events cleared and others not.

    With this change, writing null to uprobe_events first makes
    sure that none of the events is enabled before removing any.
    So it either clears all events, or returns an error (-EBUSY)
    while keeping all events.
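
    A rough sketch of the check described above (illustrative; helper and
    variable names follow the existing probe code, error handling trimmed):

    mutex_lock(&uprobe_lock);
    /* Ensure no probe is in use before removing anything. */
    list_for_each_entry(tu, &uprobe_list, list) {
            if (trace_probe_is_enabled(&tu->tp)) {
                    ret = -EBUSY;
                    goto end;
            }
    }
    /* ... only now unregister and free all uprobe events ... */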

    Link: http://lkml.kernel.org/r/154140841557.17322.12653952888762532401.stgit@devbox

    Reviewed-by: Tom Zanussi
    Tested-by: Tom Zanussi
    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt (VMware)

    Masami Hiramatsu
     

11 Oct, 2018

10 commits

  • Fix probe_mem_read() to return -EFAULT if copy_from_user()
    fails. copy_from_user() returns the number of remaining bytes
    when it fails, but the probe_mem_read() caller expects an
    error code, as with probe_kernel_read().
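
    A minimal sketch of the fix, following the description above (the
    exact annotations may differ from the actual patch):

    static int probe_mem_read(void *dest, void *src, size_t size)
    {
            void __user *vaddr = (void __force __user *)src;

            /* map "bytes not copied" onto the error code callers expect */
            return copy_from_user(dest, vaddr, size) ? -EFAULT : 0;
    }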

    Link: http://lkml.kernel.org/r/153547306719.26502.8353484532699160223.stgit@devbox

    Reported-by: Dan Carpenter
    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt (VMware)

    Masami Hiramatsu
     
  • Add the $argN special fetch variable for accessing function
    arguments. This allows users to trace the Nth argument easily
    at function entry.

    Note that this returns the most probable assignment of arguments
    to registers and stack slots; in some cases it may not work well.
    If you need to access the correct registers or stack contents,
    you should use perf-probe.
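
    For example, a kprobe event printing the first two arguments at
    function entry could be defined as follows (the event name and target
    function are illustrative):

    # echo 'p:my_open do_sys_open $arg1 $arg2' > /sys/kernel/debug/tracing/kprobe_events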

    Link: http://lkml.kernel.org/r/152465888632.26224.3412465701570253696.stgit@devbox

    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt (VMware)

    Masami Hiramatsu
     
  • Unify the bottom half of fetch_insn processing (from stage 2:
    dereference indirect data) between kprobe and uprobe events, since
    they are mostly the same.

    Link: http://lkml.kernel.org/r/152465879965.26224.8547240824606804815.stgit@devbox

    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt (VMware)

    Masami Hiramatsu
     
  • Add the traceprobe_ prefix to the exported function set_print_fmt(),
    as with the other exported functions.

    Link: http://lkml.kernel.org/r/152465877071.26224.11143125027282999726.stgit@devbox

    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt (VMware)

    Masami Hiramatsu
     
  • Clean up the string fetching routine so that it returns the number
    of consumed bytes in the dynamic area and stores the string
    information in data_loc format instead of data_rloc.
    This simplifies the fetcharg loop.

    Link: http://lkml.kernel.org/r/152465874163.26224.12125143907501289031.stgit@devbox

    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt (VMware)

    Masami Hiramatsu
     
  • Unify {k,u}probe_fetch_type_table into a single probe_fetch_type_table,
    because the main difference between those type tables (the fetcharg
    methods) is gone, so they can now be consolidated.

    Link: http://lkml.kernel.org/r/152465871274.26224.13999436317830479698.stgit@devbox

    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt (VMware)

    Masami Hiramatsu
     
  • Replace the {k,u}probe event argument fetching framework with a
    switch-case based one. Currently it is implemented with structures,
    macros and a chain of function pointers, which is more complicated
    than necessary and may incur a performance penalty due to retpolines.

    This simplifies it to an array of "fetch_insn" (opcode and operands),
    which process_fetch_insn() simply interprets. No function pointers
    are used.
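
    A simplified sketch of the idea (field and opcode names are
    illustrative, not the exact kernel definitions):

    struct fetch_insn {
            enum { FETCH_OP_REG, FETCH_OP_IMM, FETCH_OP_DEREF, FETCH_OP_END } op;
            unsigned int param;     /* e.g. register offset or immediate */
            long offset;            /* e.g. dereference offset */
    };

    static unsigned long process_fetch_insn(struct fetch_insn *code,
                                            struct pt_regs *regs)
    {
            unsigned long val = 0;

            for (; code->op != FETCH_OP_END; code++) {
                    switch (code->op) {     /* plain switch: no indirect calls */
                    case FETCH_OP_REG:
                            val = regs_get_register(regs, code->param);
                            break;
                    case FETCH_OP_IMM:
                            val = code->param;
                            break;
                    case FETCH_OP_DEREF:
                            probe_kernel_read(&val, (void *)val + code->offset,
                                              sizeof(val));
                            break;
                    default:
                            break;
                    }
            }
            return val;
    }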

    Link: http://lkml.kernel.org/r/152465868340.26224.2551120475197839464.stgit@devbox

    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt (VMware)

    Masami Hiramatsu
     
  • Clean up the event argument definition code in one place for
    maintainability.

    Link: http://lkml.kernel.org/r/152465862529.26224.9068605421476018902.stgit@devbox

    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt (VMware)

    Masami Hiramatsu
     
  • Clean up the print-argument function by decoupling it into
    print-name and print-value parts, so that it can support more
    flexible expressions, like array types.

    Link: http://lkml.kernel.org/r/152465859635.26224.13452846788717102315.stgit@devbox

    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt (VMware)

    Masami Hiramatsu
     
  • This patch enables uprobes with a reference counter in fd-based
    uprobes. The highest 32 bits of perf_event_attr.config are used to
    store the offset of the reference count (semaphore).

    Format information in /sys/bus/event_source/devices/uprobe/format/ is
    updated to reflect this new feature.
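
    A fragment sketch of how userspace might request such an uprobe via
    perf_event_open(); the PMU type must be read from
    /sys/bus/event_source/devices/uprobe/type, and the path, offsets and
    use of the config1/config2 aliases (uprobe_path/probe_offset) are
    illustrative assumptions:

    struct perf_event_attr attr = {};

    attr.size    = sizeof(attr);
    attr.type    = uprobe_pmu_type;                        /* from sysfs */
    attr.config1 = (__u64)(unsigned long)"/home/user/app"; /* uprobe_path */
    attr.config2 = 0x6e4;                                  /* probe_offset */
    attr.config  = (__u64)0x10036 << 32;                   /* ref_ctr_offset */

    fd = syscall(__NR_perf_event_open, &attr, pid, -1, -1, 0);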

    Link: http://lkml.kernel.org/r/20181002053636.1896903-1-songliubraving@fb.com

    Cc: Oleg Nesterov
    Acked-by: Peter Zijlstra (Intel)
    Reviewed-and-tested-by: Ravi Bangoria
    Signed-off-by: Song Liu
    Signed-off-by: Steven Rostedt (VMware)

    Song Liu
     

24 Sep, 2018

2 commits

  • We assume there is only one reference counter for one uprobe.
    Don't allow users to add multiple trace_uprobe entries having the
    same inode+offset but a different reference counter offset.

    Ex,
    # echo "p:sdt_tick/loop2 /home/ravi/tick:0x6e4(0x10036)" > uprobe_events
    # echo "p:sdt_tick/loop2_1 /home/ravi/tick:0x6e4(0xfffff)" >> uprobe_events
    bash: echo: write error: Invalid argument

    # dmesg
    trace_kprobe: Reference counter offset mismatch.

    There is one exception though:
    When a user is trying to replace the old entry with a new
    one, we allow this if the new entry does not conflict with
    any other existing entries.

    Link: http://lkml.kernel.org/r/20180820044250.11659-4-ravi.bangoria@linux.ibm.com

    Acked-by: Srikar Dronamraju
    Reviewed-by: Song Liu
    Reviewed-by: Oleg Nesterov
    Tested-by: Song Liu
    Signed-off-by: Ravi Bangoria
    Signed-off-by: Steven Rostedt (VMware)

    Ravi Bangoria
     
  • Userspace Statically Defined Tracepoints[1] are dtrace-style markers
    inside userspace applications. Applications like PostgreSQL, MySQL,
    Pthread, Perl, Python, Java, Ruby, Node.js, libvirt, QEMU, glib etc.
    have these markers embedded in them. These markers are added by
    developers at important places in the code. Each marker source expands
    to a single nop instruction in the compiled code, but there may be
    additional overhead for computing the marker arguments, which expands
    to a couple of instructions. If that overhead matters, its execution
    can be omitted by a runtime if() condition when no one is tracing
    the marker:

    if (reference_counter > 0) {
            Execute marker instructions;
    }

    The default value of the reference counter is 0. A tracer has to
    increment the reference counter before tracing a marker and
    decrement it when done with the tracing.

    Implement the reference counter logic in the core uprobe code. Users
    will be able to use it from trace_uprobe as well as from kernel
    modules. The new trace_uprobe definition with a reference counter is:

    <path>:<offset>[(ref_ctr_offset)]

    where ref_ctr_offset is an optional field. For kernel modules, a new
    variant of uprobe_register() has been introduced:

    uprobe_register_refctr(inode, offset, ref_ctr_offset, consumer)

    There is no new variant of uprobe_unregister() because we assume there
    is only one reference counter per uprobe.

    [1] https://sourceware.org/systemtap/wiki/UserSpaceProbeImplementation

    Note: the 'reference counter' is called a 'semaphore' in the original
    Dtrace (and Systemtap, bcc and even ELF) documentation and code. But
    the term 'semaphore' is misleading in this context. It is just a
    counter used to hold the number of tracers tracing a marker; it is
    not really used for any synchronization. So we are calling it a
    'reference counter' in kernel / perf code.

    Link: http://lkml.kernel.org/r/20180820044250.11659-2-ravi.bangoria@linux.ibm.com

    Reviewed-by: Masami Hiramatsu
    [Only trace_uprobe.c]
    Reviewed-by: Oleg Nesterov
    Reviewed-by: Song Liu
    Tested-by: Song Liu
    Signed-off-by: Ravi Bangoria
    Signed-off-by: Steven Rostedt (VMware)

    Ravi Bangoria
     

11 Aug, 2018

1 commit

  • While debugging another bug, I was looking at all the synchronize*()
    functions being used in kernel/trace, and noticed that trace_uprobes was
    using synchronize_sched(), with a comment to synchronize with
    {u,ret}_probe_trace_func(). When looking at those functions, the data is
    protected with "rcu_read_lock()" and not with "rcu_read_lock_sched()". This
    is using the wrong synchronize_*() function.
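
    In sketch form, the mismatch looks like this (illustrative only):

    /* probe side ({u,ret}_probe_trace_func) uses the normal RCU read side */
    rcu_read_lock();
    /* ... access the per-probe data ... */
    rcu_read_unlock();

    /* so the update side must wait for those readers: */
    synchronize_rcu();      /* correct counterpart */
    /* synchronize_sched() only waits for rcu_read_lock_sched()/preempt-off
     * sections, so it does not guarantee the readers above have finished */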

    Link: http://lkml.kernel.org/r/20180809160553.469e1e32@gandalf.local.home

    Cc: stable@vger.kernel.org
    Fixes: 70ed91c6ec7f8 ("tracing/uprobes: Support ftrace_event_file base multibuffer")
    Acked-by: Oleg Nesterov
    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     

25 May, 2018

1 commit

  • Currently, suppose a userspace application has loaded a bpf program
    and attached it to a tracepoint/kprobe/uprobe, and a bpf
    introspection tool, e.g., bpftool, wants to show which bpf program
    is attached to which tracepoint/kprobe/uprobe. Such attachment
    information will be really useful to understand the overall bpf
    deployment in the system.

    There is a name field (16 bytes) for each program, which could
    be used to encode the attachment point. There are some drawbacks
    for this approach. First, a bpftool user (e.g., an admin) may not
    really understand the association between the name and the
    attachment point. Second, if one program is attached to multiple
    places, encoding a proper name which can imply all these
    attachments becomes difficult.

    This patch introduces a new bpf subcommand BPF_TASK_FD_QUERY.
    Given a pid and fd, if the pair is associated with a
    tracepoint/kprobe/uprobe perf event, BPF_TASK_FD_QUERY will return
    . prog_id
    . tracepoint name, or
    . k[ret]probe funcname + offset or kernel addr, or
    . u[ret]probe filename + offset
    to the userspace.
    The user can use "bpftool prog" to find more information about the
    bpf program itself via the prog_id.
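
    A rough userspace sketch of the query (field names follow the
    BPF_TASK_FD_QUERY uapi; buffer size and error handling are
    illustrative):

    #include <linux/bpf.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static int query_perf_fd(int pid, int fd, char *buf, __u32 buf_len)
    {
            union bpf_attr attr;

            memset(&attr, 0, sizeof(attr));
            attr.task_fd_query.pid     = pid;
            attr.task_fd_query.fd      = fd;
            attr.task_fd_query.buf     = (__u64)(unsigned long)buf;
            attr.task_fd_query.buf_len = buf_len;

            if (syscall(__NR_bpf, BPF_TASK_FD_QUERY, &attr, sizeof(attr)) < 0)
                    return -1;

            /* attr.task_fd_query.prog_id, .fd_type, .probe_offset and the
             * name/filename written into buf describe the attachment */
            return 0;
    }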

    Acked-by: Martin KaFai Lau
    Signed-off-by: Yonghong Song
    Signed-off-by: Alexei Starovoitov

    Yonghong Song
     

27 Apr, 2018

1 commit

  • As Miklos reported and suggested:

    This pattern repeats two times in trace_uprobe.c and in
    kernel/events/core.c as well:

    ret = kern_path(filename, LOOKUP_FOLLOW, &path);
    if (ret)
            goto fail_address_parse;

    inode = igrab(d_inode(path.dentry));
    path_put(&path);

    And it's wrong. You can only hold a reference to the inode if you
    have an active ref to the superblock as well (which is normally
    through path.mnt) or are holding s_umount.

    This way unmounting the containing filesystem while the tracepoint is
    active will give you the "VFS: Busy inodes after unmount..." message
    and a crash when the inode is finally put.

    Solution: store path instead of inode.

    This patch fixes two instances in trace_uprobe.c. struct path is added to
    struct trace_uprobe to keep the inode and containing mount point
    referenced.
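
    A sketch of the fixed pattern, simplified from the description above:

    ret = kern_path(filename, LOOKUP_FOLLOW, &tu->path);
    if (ret)
            goto fail_address_parse;
    /* no igrab()/path_put() here: tu->path keeps both the inode and the
     * containing mount referenced for the lifetime of the probe */

    /* ... and only when the trace_uprobe is freed: */
    path_put(&tu->path);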

    Link: http://lkml.kernel.org/r/20180423172135.4050588-1-songliubraving@fb.com

    Fixes: f3f096cfedf8 ("tracing: Provide trace events interface for uprobes")
    Fixes: 33ea4b24277b ("perf/core: Implement the 'perf_uprobe' PMU")
    Cc: stable@vger.kernel.org
    Cc: Ingo Molnar
    Cc: Howard McLauchlan
    Cc: Josef Bacik
    Cc: Srikar Dronamraju
    Acked-by: Miklos Szeredi
    Reported-by: Miklos Szeredi
    Signed-off-by: Song Liu
    Signed-off-by: Steven Rostedt (VMware)

    Song Liu
     

16 Apr, 2018

1 commit

  • Pull more perf updates from Thomas Gleixner:
    "A rather large set of perf updates:

    Kernel:

    - Fix various initialization issues

    - Prevent creating [ku]probes for not CAP_SYS_ADMIN users

    Tooling:

    - Show only failing syscalls with 'perf trace --failure' (Arnaldo
    Carvalho de Melo)

    e.g.: see what 'openat' syscalls are failing:

    # perf trace --failure -e openat
    762.323 ( 0.007 ms): VideoCapture/4566 openat(dfd: CWD, filename: /dev/video2) = -1 ENOENT No such file or directory

    790.228 ( 0.008 ms): VideoCapture/4566 openat(dfd: CWD, filename: /dev/video63) = -1 ENOENT No such file or directory
    ^C#

    - Show information about the event (freq, nr_samples, total
    period/nr_events) in the annotate --tui and --stdio2 'perf
    annotate' output, similar to the first line in the 'perf report
    --tui', but just for the samples of the annotated symbol
    (Arnaldo Carvalho de Melo)

    - Introduce 'perf version --build-options' to show what features were
    linked, aliased as well as a shorter 'perf -vv' (Jin Yao)

    - Add a "dso_size" sort order (Kim Phillips)

    - Remove redundant ')' in the tracepoint output in 'perf trace'
    (Changbin Du)

    - Synchronize x86's cpufeatures.h, no effect on tools (Arnaldo
    Carvalho de Melo)

    - Show group details on the title line in the annotate browser and
    'perf annotate --stdio2' output, so that the per-event columns can
    have headers (Arnaldo Carvalho de Melo)

    - Fixup vertical line separating metrics from instructions and
    cleaning unused lines at the bottom, both in the annotate TUI
    browser (Arnaldo Carvalho de Melo)

    - Remove duplicated 'samples' in lost samples warning in
    'perf report' (Arnaldo Carvalho de Melo)

    - Synchronize i915_drm.h, silencing the perf build process,
    automagically adding support for the new DRM_I915_QUERY ioctl
    (Arnaldo Carvalho de Melo)

    - Make auxtrace_queues__add_buffer() allocate struct buffer, from a
    patchkit already applied (Adrian Hunter)

    - Fix the --stdio2/TUI annotate output to include group details, be
    it for a recorded '{a,b,f}' explicit event group or when forcing
    group display using 'perf report --group' for a set of events not
    recorded as a group (Arnaldo Carvalho de Melo)

    - Fix display artifacts in the ui browser (base class for the
    annotate and main report/top TUI browser) related to the extra
    title lines work (Arnaldo Carvalho de Melo)

    - perf auxtrace refactorings, leftovers from a previously partially
    processed patchset (Adrian Hunter)

    - Fix the builtin clang build (Sandipan Das, Arnaldo Carvalho de
    Melo)

    - Synchronize i915_drm.h, silencing a perf build warning and in the
    process automagically adding support for a new ioctl command
    (Arnaldo Carvalho de Melo)

    - Fix a strncpy issue in uprobe tracing"

    * 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (36 commits)
    perf/core: Need CAP_SYS_ADMIN to create k/uprobe with perf_event_open()
    tracing/uprobe_event: Fix strncpy corner case
    perf/core: Fix perf_uprobe_init()
    perf/core: Fix perf_kprobe_init()
    perf/core: Fix use-after-free in uprobe_perf_close()
    perf tests clang: Fix function name for clang IR test
    perf clang: Add support for recent clang versions
    perf tools: Fix perf builds with clang support
    perf tools: No need to include namespaces.h in util.h
    perf hists browser: Remove leftover from row returned from refresh
    perf hists browser: Show extra_title_lines in the 'D' debug hotkey
    perf auxtrace: Make auxtrace_queues__add_buffer() do CPU filtering
    tools headers uapi: Synchronize i915_drm.h
    perf report: Remove duplicated 'samples' in lost samples warning
    perf ui browser: Fixup cleaning unused lines at the bottom
    perf annotate browser: Fixup vertical line separating metrics from instructions
    perf annotate: Show group details on the title line
    perf auxtrace: Make auxtrace_queues__add_buffer() allocate struct buffer
    perf/x86/intel: Move regs->flags EXACT bit init
    perf trace: Remove redundant ')'
    ...

    Linus Torvalds
     

13 Apr, 2018

1 commit

  • Pull tracing fixes from Steven Rostedt:
    "A few clean ups and bug fixes:

    - replace open coded "ARRAY_SIZE()" with macro

    - updates to uprobes

    - bug fix for perf event filter on error path"

    * tag 'trace-v4.17-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace:
    tracing: Enforce passing in filter=NULL to create_filter()
    trace_uprobe: Simplify probes_seq_show()
    trace_uprobe: Use %lx to display offset
    tracing/uprobe: Add support for overlayfs
    tracing: Use ARRAY_SIZE() macro instead of open coding it

    Linus Torvalds
     

11 Apr, 2018

4 commits

  • Simplify the probes_seq_show() function. There is no change in output
    before and after the patch.

    Link: http://lkml.kernel.org/r/20180315082756.9050-2-ravi.bangoria@linux.vnet.ibm.com

    Acked-by: Masami Hiramatsu
    Signed-off-by: Ravi Bangoria
    Signed-off-by: Steven Rostedt (VMware)

    Ravi Bangoria
     
  • tu->offset is an unsigned long, not a pointer, thus %lx should
    be used to print it, not %px.

    Link: http://lkml.kernel.org/r/20180315082756.9050-1-ravi.bangoria@linux.vnet.ibm.com

    Cc: stable@vger.kernel.org
    Acked-by: Masami Hiramatsu
    Fixes: 0e4d819d0893 ("trace_uprobe: Display correct offset in uprobe_events")
    Suggested-by: Kees Cook
    Signed-off-by: Ravi Bangoria
    Signed-off-by: Steven Rostedt (VMware)

    Ravi Bangoria
     
  • uprobes cannot successfully attach to binaries located in a directory
    mounted with overlayfs.

    To verify, create directories for mounting overlayfs
    (upper,lower,work,merge), move some binary into merge/ and use readelf
    to obtain some known instruction of the binary. I used /bin/true and
    the entry instruction (0x13b0):

    $ mount -t overlay overlay -o lowerdir=lower,upperdir=upper,workdir=work merge
    $ cd /sys/kernel/debug/tracing
    $ echo 'p:true_entry PATH_TO_MERGE/merge/true:0x13b0' > uprobe_events
    $ echo 1 > events/uprobes/true_entry/enable

    This returns 'bash: echo: write error: Input/output error' and dmesg
    tells us 'event trace: Could not enable event true_entry'

    This change makes create_trace_uprobe() look for the real inode of a
    dentry. In the case of normal filesystems, this simplifies to just
    returning the inode. In the case of overlayfs (and similar fs) we will
    obtain the underlying dentry and corresponding inode, upon which uprobes
    can successfully register.

    Running the example above with the patch applied, we can see that the
    uprobe is enabled and will output to trace as expected.
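
    The crux of the change, in sketch form:

    /* resolve the real underlying inode (on overlayfs the lower/upper
     * file's inode; on normal filesystems simply d_inode(path.dentry)) */
    inode = d_real_inode(path.dentry);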

    Link: http://lkml.kernel.org/r/20180410231030.2720-1-hmclauchlan@fb.com

    Reviewed-by: Josef Bacik
    Reviewed-by: Masami Hiramatsu
    Reviewed-by: Srikar Dronamraju
    Signed-off-by: Howard McLauchlan
    Signed-off-by: Steven Rostedt (VMware)

    Howard McLauchlan
     
  • Fix the string fetch function to terminate the copied string with NUL.
    It is OK to drop the rest of the string.
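
    A minimal sketch of the fix described above (based on the commit text,
    not necessarily the exact diff): force a terminating NUL when the copy
    filled the whole buffer, dropping the rest of the string.

    ret = strncpy_from_user(dst, src, maxlen);
    if (ret == maxlen)
            dst[ret - 1] = '\0';    /* truncate and NUL-terminate */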

    Signed-off-by: Masami Hiramatsu
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Song Liu
    Cc: Thomas Gleixner
    Cc: security@kernel.org
    Cc: 范龙飞
    Fixes: 5baaa59ef09e ("tracing/probes: Implement 'memory' fetch method for uprobes")
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

06 Feb, 2018

1 commit

  • This patch adds perf_uprobe support with a similar pattern as the
    previous patch (for kprobes).

    Two functions, create_local_trace_uprobe() and
    destroy_local_trace_uprobe(), are created so a uprobe can be created
    and attached to the file descriptor created by perf_event_open().

    Signed-off-by: Song Liu
    Signed-off-by: Peter Zijlstra (Intel)
    Reviewed-by: Yonghong Song
    Reviewed-by: Josef Bacik
    Cc: Arnaldo Carvalho de Melo
    Cc: Jiri Olsa
    Cc: Linus Torvalds
    Cc: Namhyung Kim
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/20171206224518.3598254-7-songliubraving@fb.com
    Signed-off-by: Ingo Molnar

    Song Liu
     

24 Jan, 2018

1 commit

  • Recently, the way pointers are printed with %p was changed
    by commit ad67b74d2469 ("printk: hash addresses printed with %p").
    This is causing a regression in how the offset is shown in the
    uprobe_events file. Instead of %p, use %px to display the offset.

    Before patch:

    # perf probe -vv -x /tmp/a.out main
    Opening /sys/kernel/debug/tracing//uprobe_events write=1
    Writing event: p:probe_a/main /tmp/a.out:0x58c

    # cat /sys/kernel/debug/tracing/uprobe_events
    p:probe_a/main /tmp/a.out:0x0000000049a0f352

    After patch:

    # cat /sys/kernel/debug/tracing/uprobe_events
    p:probe_a/main /tmp/a.out:0x000000000000058c

    Link: http://lkml.kernel.org/r/20180106054246.15375-1-ravi.bangoria@linux.vnet.ibm.com

    Cc: stable@vger.kernel.org
    Fixes: ad67b74d2469 ("printk: hash addresses printed with %p")
    Acked-by: Srikar Dronamraju
    Signed-off-by: Ravi Bangoria
    Signed-off-by: Steven Rostedt (VMware)

    Ravi Bangoria
     

18 Nov, 2017

1 commit

  • Pull tracing updates from Steven Rostedt:

    - allow module init functions to be traced

    - clean up some unused or not used by config events (saves space)

    - clean up of trace histogram code

    - add support for preempt and interrupt enabled/disable events

    - other various clean ups

    * tag 'trace-v4.15' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (30 commits)
    tracing, thermal: Hide cpu cooling trace events when not in use
    tracing, thermal: Hide devfreq trace events when not in use
    ftrace: Kill FTRACE_OPS_FL_PER_CPU
    perf/ftrace: Small cleanup
    perf/ftrace: Fix function trace events
    perf/ftrace: Revert ("perf/ftrace: Fix double traces of perf on ftrace:function")
    tracing, dma-buf: Remove unused trace event dma_fence_annotate_wait_on
    tracing, memcg, vmscan: Hide trace events when not in use
    tracing/xen: Hide events that are not used when X86_PAE is not defined
    tracing: mark trace_test_buffer as __maybe_unused
    printk: Remove superfluous memory barriers from printk_safe
    ftrace: Clear hashes of stale ips of init memory
    tracing: Add support for preempt and irq enable/disable events
    tracing: Prepare to add preempt and irq trace events
    ftrace/kallsyms: Have /proc/kallsyms show saved mod init functions
    ftrace: Add freeing algorithm to free ftrace_mod_maps
    ftrace: Save module init functions kallsyms symbols for tracing
    ftrace: Allow module init functions to be traced
    ftrace: Add a ftrace_free_mem() function for modules to use
    tracing: Reimplement log2
    ...

    Linus Torvalds
     

25 Oct, 2017

1 commit

  • This patch enables multiple bpf attachments for a single
    kprobe/uprobe/tracepoint trace event.
    Each trace_event keeps a list of attached perf events.
    When an event happens, all attached bpf programs will
    be executed based on the order of attachment.

    A global bpf_event_mutex lock is introduced to protect
    prog_array attaching and detaching. An alternative would be
    to introduce a mutex lock in every trace_event_call
    structure, but that takes a lot of extra memory.
    So a global bpf_event_mutex lock is a good compromise.

    The bpf prog detachment involves allocation of memory.
    If the allocation fails, a dummy do-nothing program
    will replace the to-be-detached program in place.

    Signed-off-by: Yonghong Song
    Acked-by: Alexei Starovoitov
    Acked-by: Martin KaFai Lau
    Signed-off-by: David S. Miller

    Yonghong Song
     

17 Oct, 2017

1 commit

  • Revert commit:

    75e8387685f6 ("perf/ftrace: Fix double traces of perf on ftrace:function")

    The reason I instantly stumbled on that patch is that it only addresses the
    ftrace situation and doesn't mention the other _5_ places that use this
    interface. It doesn't explain why those don't have the problem and if not, why
    their solution doesn't work for ftrace.

    It doesn't, but this is just putting more duct tape on.

    Link: http://lkml.kernel.org/r/20171011080224.200565770@infradead.org

    Cc: Zhou Chengming
    Cc: Jiri Olsa
    Cc: Ingo Molnar
    Signed-off-by: Peter Zijlstra (Intel)
    Signed-off-by: Steven Rostedt (VMware)

    Peter Zijlstra
     

05 Oct, 2017

1 commit

  • traceprobe_probes_write() and traceprobe_command() actually contain
    nothing that ties them to kprobes - the code is generically useful for
    similar types of parsing elsewhere, so separate it out and move it to
    trace.c/trace.h.

    Other than moving it, the only change is in naming:
    traceprobe_probes_write() becomes trace_parse_run_command() and
    traceprobe_command() becomes trace_run_command().

    Link: http://lkml.kernel.org/r/ae5c26ea40c196a8986854d921eb6e713ede7e3f.1506105045.git.tom.zanussi@linux.intel.com

    Signed-off-by: Tom Zanussi
    Signed-off-by: Steven Rostedt (VMware)

    Tom Zanussi
     

29 Aug, 2017

1 commit

  • When running perf on the ftrace:function tracepoint, there is a bug
    which can be reproduced by:

    perf record -e ftrace:function -a sleep 20 &
    perf record -e ftrace:function ls
    perf script

    ls 10304 [005] 171.853235: ftrace:function: perf_output_begin
    ls 10304 [005] 171.853237: ftrace:function: perf_output_begin
    ls 10304 [005] 171.853239: ftrace:function: task_tgid_nr_ns
    ls 10304 [005] 171.853240: ftrace:function: task_tgid_nr_ns
    ls 10304 [005] 171.853242: ftrace:function: __task_pid_nr_ns
    ls 10304 [005] 171.853244: ftrace:function: __task_pid_nr_ns

    We can see that all the function traces are doubled.

    The problem is caused by an inconsistency between the register
    function perf_ftrace_event_register() and the probe function
    perf_ftrace_function_call(). The former registers one probe
    for every perf_event, while the latter handles all perf_events
    on the current cpu. So when there are two perf_events on the
    current cpu, their traces are doubled.

    So this patch adds an extra parameter "event" to perf_tp_event(),
    which only sends sample data to this event when it is not NULL.
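
    In sketch form (names illustrative, roughly following the existing
    perf code), the delivery logic inside perf_tp_event() becomes:

    if (event) {
            /* called on behalf of a specific perf_event: deliver only to it */
            if (perf_tp_event_match(event, &data, regs))
                    perf_swevent_event(event, count, &data, regs);
    } else {
            /* legacy path: deliver to every event hashed on this cpu */
            hlist_for_each_entry_rcu(event, head, hlist_entry) {
                    if (perf_tp_event_match(event, &data, regs))
                            perf_swevent_event(event, count, &data, regs);
            }
    }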

    Signed-off-by: Zhou Chengming
    Reviewed-by: Jiri Olsa
    Acked-by: Steven Rostedt (VMware)
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: acme@kernel.org
    Cc: alexander.shishkin@linux.intel.com
    Cc: huawei.libin@huawei.com
    Link: http://lkml.kernel.org/r/1503668977-12526-1-git-send-email-zhouchengming1@huawei.com
    Signed-off-by: Ingo Molnar

    Zhou Chengming
     

02 Mar, 2017

1 commit