19 Sep, 2020

1 commit

  • Currently tracing_init_dentry() returns a dentry pointer, which is not
    necessary. The function returns NULL on success or an error pointer on
    failure, so it never returns a valid dentry pointer.

    Let's return 0 on success and negative value for error.
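
    A minimal sketch of the interface change being described; the exact error
    value and caller handling are assumptions:

    /* before: NULL on success, an error pointer on failure */
    struct dentry *tracing_init_dentry(void);

    /* after: 0 on success, a negative errno on failure */
    int tracing_init_dentry(void);

    /* callers can then use the usual kernel idiom */
    if (tracing_init_dentry() < 0)
            return;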

    Link: https://lkml.kernel.org/r/20200712011036.70948-5-richard.weiyang@linux.alibaba.com

    Signed-off-by: Wei Yang
    Signed-off-by: Steven Rostedt (VMware)

    Wei Yang
     

08 Jun, 2020

1 commit


03 Jan, 2020

1 commit

  • On some archs with some configurations, MCOUNT_INSN_SIZE is not defined, and
    this makes the stack tracer fail to compile. Just define it to zero in this
    case.
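
    The fix is presumably a compile-time fallback along these lines in
    trace_stack.c:

    #ifndef MCOUNT_INSN_SIZE
    # define MCOUNT_INSN_SIZE 0
    #endif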

    Link: https://lore.kernel.org/r/202001020219.zvE3vsty%lkp@intel.com

    Cc: stable@vger.kernel.org
    Fixes: 4df297129f622 ("tracing: Remove most or all of stack tracer stack size from stack_max_size")
    Reported-by: kbuild test robot
    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     

13 Oct, 2019

1 commit

  • Added various checks on open tracefs calls to see if tracefs is in lockdown
    mode, and if so, to return -EPERM.

    Note, the event format files (which are basically standard on all machines)
    as well as the enabled_functions file (which shows what is currently being
    traced) are not locked down. Perhaps they should be, but it seems
    counterintuitive to lock down information that helps you know whether the
    system has been modified.
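
    A sketch of the kind of check described; the security_locked_down() hook
    and the LOCKDOWN_TRACEFS reason are assumptions about the exact mechanism:

    static int tracing_open_generic(struct inode *inode, struct file *filp)
    {
            /* refuse to open tracefs files while in lockdown mode */
            if (security_locked_down(LOCKDOWN_TRACEFS))
                    return -EPERM;
            ...
    }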

    Link: http://lkml.kernel.org/r/CAHk-=wj7fGPKUspr579Cii-w_y60PtRaiDgKuxVtBAMK0VNNkA@mail.gmail.com

    Suggested-by: Linus Torvalds
    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     

01 Sep, 2019

2 commits

  • As the max stack tracer algorithm is not that easy to understand from the
    code, add comments that explain the algorithm and mention how
    ARCH_FTRACE_SHIFT_STACK_TRACER affects it.

    Link: http://lkml.kernel.org/r/20190806123455.487ac02b@gandalf.local.home

    Suggested-by: Joel Fernandes
    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     
  • Most archs (well at least x86) store the function call return address on the
    stack before storing the local variables for the function. The max stack
    tracer depends on this in its algorithm to display the stack size of each
    function it finds in the back trace.

    Some archs (arm64) may store the return address (from the link register)
    just before calling a nested function. There's no reason to save the link
    register on leaf functions, as it won't be updated. This breaks the
    algorithm of the max stack tracer.

    Add a new define, ARCH_FTRACE_SHIFT_STACK_TRACER, that an architecture may
    set if it stores the return address (link register) after it stores the
    function's local variables, and have the stack tracer shift the values of
    the mapped stack size to the appropriate functions.
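
    An architecture would then opt in from its ftrace header, e.g. (arm64-style
    sketch, assuming a plain define is all that is required):

    /* arch/arm64/include/asm/ftrace.h */
    #define ARCH_FTRACE_SHIFT_STACK_TRACER 1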

    Link: 20190802094103.163576-1-jiping.ma2@windriver.com

    Reported-by: Jiping Ma
    Acked-by: Will Deacon
    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     

29 Apr, 2019

2 commits

  • Simplify the stack retrieval code by using the storage array based
    interface.
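
    The storage-array interface is presumably stack_trace_save(), which fills
    a caller-supplied array and returns the number of entries; a sketch with
    assumed variable names:

    nr_entries = stack_trace_save(stack_dump_trace,
                                  ARRAY_SIZE(stack_dump_trace), 0);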

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Steven Rostedt (VMware)
    Reviewed-by: Josh Poimboeuf
    Cc: Andy Lutomirski
    Cc: Alexander Potapenko
    Cc: Alexey Dobriyan
    Cc: Andrew Morton
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: linux-mm@kvack.org
    Cc: David Rientjes
    Cc: Catalin Marinas
    Cc: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Cc: kasan-dev@googlegroups.com
    Cc: Mike Rapoport
    Cc: Akinobu Mita
    Cc: Christoph Hellwig
    Cc: iommu@lists.linux-foundation.org
    Cc: Robin Murphy
    Cc: Marek Szyprowski
    Cc: Johannes Thumshirn
    Cc: David Sterba
    Cc: Chris Mason
    Cc: Josef Bacik
    Cc: linux-btrfs@vger.kernel.org
    Cc: dm-devel@redhat.com
    Cc: Mike Snitzer
    Cc: Alasdair Kergon
    Cc: Daniel Vetter
    Cc: intel-gfx@lists.freedesktop.org
    Cc: Joonas Lahtinen
    Cc: Maarten Lankhorst
    Cc: dri-devel@lists.freedesktop.org
    Cc: David Airlie
    Cc: Jani Nikula
    Cc: Rodrigo Vivi
    Cc: Tom Zanussi
    Cc: Miroslav Benes
    Cc: linux-arch@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190425094803.340000461@linutronix.de

    Thomas Gleixner
     
  • - Remove the extra array member of stack_dump_trace[] along with the
    ARRAY_SIZE - 1 initialization for struct stack_trace :: max_entries.

    Both are historical leftovers of no value. The stack tracer never exceeds
    the array and there is no extra storage requirement either.

    - Make variables which are only used in trace_stack.c static.

    - Simplify the enable/disable logic.

    - Rename stack_trace_print() as it's using the stack_trace_ namespace. Free
    the name up for stack trace related functions.

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Steven Rostedt
    Reviewed-by: Josh Poimboeuf
    Cc: Andy Lutomirski
    Cc: Alexander Potapenko
    Cc: Alexey Dobriyan
    Cc: Andrew Morton
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: linux-mm@kvack.org
    Cc: David Rientjes
    Cc: Catalin Marinas
    Cc: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Cc: kasan-dev@googlegroups.com
    Cc: Mike Rapoport
    Cc: Akinobu Mita
    Cc: Christoph Hellwig
    Cc: iommu@lists.linux-foundation.org
    Cc: Robin Murphy
    Cc: Marek Szyprowski
    Cc: Johannes Thumshirn
    Cc: David Sterba
    Cc: Chris Mason
    Cc: Josef Bacik
    Cc: linux-btrfs@vger.kernel.org
    Cc: dm-devel@redhat.com
    Cc: Mike Snitzer
    Cc: Alasdair Kergon
    Cc: Daniel Vetter
    Cc: intel-gfx@lists.freedesktop.org
    Cc: Joonas Lahtinen
    Cc: Maarten Lankhorst
    Cc: dri-devel@lists.freedesktop.org
    Cc: David Airlie
    Cc: Jani Nikula
    Cc: Rodrigo Vivi
    Cc: Tom Zanussi
    Cc: Miroslav Benes
    Cc: linux-arch@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190425094801.230654524@linutronix.de

    Thomas Gleixner
     

15 Apr, 2019

1 commit

  • No architecture terminates the stack trace with ULONG_MAX anymore. As the
    code checks the number of entries stored anyway, there is no point in
    keeping all that ULONG_MAX magic around.

    The histogram code zeroes the storage before saving the stack, so if the
    trace is shorter than the maximum number of entries it can terminate the
    print loop if a zero entry is detected.
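
    A sketch of the zero-terminated print loop this enables (names assumed):

    for (i = 0; i < max_entries; i++) {
            if (!entries[i])        /* zeroed tail marks the end */
                    break;
            seq_printf(m, "%pS\n", (void *)entries[i]);
    }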

    Signed-off-by: Thomas Gleixner
    Acked-by: Peter Zijlstra (Intel)
    Cc: Josh Poimboeuf
    Cc: Andy Lutomirski
    Cc: Steven Rostedt
    Cc: Alexander Potapenko
    Link: https://lkml.kernel.org/r/20190410103645.048761764@linutronix.de

    Thomas Gleixner
     

23 Dec, 2018

2 commits


09 Dec, 2018

1 commit

  • Dan Carpenter reviewed the trace_stack.c code and figured he found an off by
    one bug.

    "From reviewing the code, it seems possible for
    stack_trace_max.nr_entries to be set to .max_entries and in that case we
    would be reading one element beyond the end of the stack_dump_trace[]
    array. If it's not set to .max_entries then the bug doesn't affect
    runtime."

    Although it looks to be the case, it is not. Because we have:

    static unsigned long stack_dump_trace[STACK_TRACE_ENTRIES+1] =
            { [0 ... (STACK_TRACE_ENTRIES)] = ULONG_MAX };

    struct stack_trace stack_trace_max = {
            .max_entries    = STACK_TRACE_ENTRIES - 1,
            .entries        = &stack_dump_trace[0],
    };

    And:

    stack_trace_max.nr_entries = x;
    for (; x < i; x++)
            stack_dump_trace[x] = ULONG_MAX;

    Even if nr_entries equals max_entries, indexing with it into the
    stack_dump_trace[] array will not overflow the array. But when that is the
    case, the second part of the conditional, which tests
    stack_dump_trace[nr_entries] against ULONG_MAX, will always be true.

    Applying Dan's patch removes this subtlety and makes the if conditional
    slightly more efficient.

    Link: http://lkml.kernel.org/r/20180620110758.crunhd5bfep7zuiz@kili.mountain

    Signed-off-by: Dan Carpenter
    Signed-off-by: Steven Rostedt (VMware)

    Dan Carpenter
     

27 Oct, 2018

1 commit

  • The stack tracer traces every function call checking the current stack (in
    non interrupt context), looking for the deepest stack, and saving it when it
    finds a new max depth. The problem is that it calls save_stack_trace(), and
    with the new ORC unwinder, it can skip too much. As it looks at the ip of
    the function call in the backtrace to find where it should start, it doesn't
    need to skip anything.

    The stack trace selftest would fail when the kernel was compiled with the
    ORC unwinder enabled. Without skipping functions when doing the stack
    trace, it now passes again.
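
    A sketch of the search described above, which makes explicit skip counts
    unnecessary (simplified; names assumed):

    /* find the traced function's ip in the saved trace and start there */
    for (i = 0; i < nr_entries; i++) {
            if (stack_dump_trace[i] == ip)
                    break;
    }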

    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     

03 May, 2018

1 commit

  • It looks weird that the stack_trace_filter file can be written by root,
    yet the ll command shows it without write permission.
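
    The fix is presumably just the file mode used at creation time, e.g.
    (the exact call is an assumption):

    /* 0644 instead of 0444, so root's write ability is visible to ll */
    trace_create_file("stack_trace_filter", 0644, ...);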

    Link: http://lkml.kernel.org/r/1518054113-28096-1-git-send-email-liuzhengyuan@kylinos.cn

    Signed-off-by: Zhengyuan Liu
    Signed-off-by: Steven Rostedt (VMware)

    Zhengyuan Liu
     

15 Dec, 2017

1 commit

  • The stack tracer records a stack dump whenever it sees a stack usage that is
    more than what it ever saw before. This can happen at any function that is
    being traced. If it happens when the CPU is going idle (or other strange
    locations), RCU may not be watching, and in this case, the recording of the
    stack trace will trigger a warning. There have been lots of efforts to make
    hacks to allow stack tracing to proceed even if RCU is not watching, but
    this only causes more issues to appear. Simply do not trace a stack if RCU
    is not watching. It probably isn't a bad stack anyway.
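
    A sketch of the guard described; rcu_is_watching() is the real API, its
    exact placement in the stack tracer callback is assumed:

    /* do not record a stack trace if RCU is not watching */
    if (!rcu_is_watching())
            return;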

    Acked-by: "Paul E. McKenney"
    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     

07 Nov, 2017

1 commit


02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boiler plate text.

    This patch is based on work done by Thomas Gleixner and Kate Stewart and
    Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset of
    the use cases:
    - file had no licensing information in it,
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information,

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier to be applied to
    a file was done in a spreadsheet of side by side results from of the
    output of two independent scanners (ScanCode & Windriver) producing SPDX
    tag:value files created by Philippe Ombredanne. Philippe prepared the
    base worksheet, and did an initial spot review of a few 1000 files.

    The 4.13 kernel was the starting point of the analysis with 60,537 files
    assessed. Kate Stewart did a file by file comparison of the scanner
    results in the spreadsheet to determine which SPDX license identifier(s)
    to be applied to the file. She confirmed any determination that was not
    immediately clear with lawyers working with the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging was:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
    lines of source
    - File already had some variant of a license header in it (even if
    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

25 Oct, 2017

1 commit

  • …READ_ONCE()/WRITE_ONCE()

    Please do not apply this to mainline directly, instead please re-run the
    coccinelle script shown below and apply its output.

    For several reasons, it is desirable to use {READ,WRITE}_ONCE() in
    preference to ACCESS_ONCE(), and new code is expected to use one of the
    former. So far, there's been no reason to change most existing uses of
    ACCESS_ONCE(), as these aren't harmful, and changing them results in
    churn.

    However, for some features, the read/write distinction is critical to
    correct operation. To distinguish these cases, separate read/write
    accessors must be used. This patch migrates (most) remaining
    ACCESS_ONCE() instances to {READ,WRITE}_ONCE(), using the following
    coccinelle script:

    ----
    // Convert trivial ACCESS_ONCE() uses to equivalent READ_ONCE() and
    // WRITE_ONCE()

    // $ make coccicheck COCCI=/home/mark/once.cocci SPFLAGS="--include-headers" MODE=patch

    virtual patch

    @ depends on patch @
    expression E1, E2;
    @@

    - ACCESS_ONCE(E1) = E2
    + WRITE_ONCE(E1, E2)

    @ depends on patch @
    expression E;
    @@

    - ACCESS_ONCE(E)
    + READ_ONCE(E)
    ----

    Signed-off-by: Mark Rutland <mark.rutland@arm.com>
    Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: davem@davemloft.net
    Cc: linux-arch@vger.kernel.org
    Cc: mpe@ellerman.id.au
    Cc: shuah@kernel.org
    Cc: snitzer@redhat.com
    Cc: thor.thayer@linux.intel.com
    Cc: tj@kernel.org
    Cc: viro@zeniv.linux.org.uk
    Cc: will.deacon@arm.com
    Link: http://lkml.kernel.org/r/1508792849-3115-19-git-send-email-paulmck@linux.vnet.ibm.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

    Mark Rutland
     

24 Sep, 2017

1 commit

  • Currently the stack tracer calls rcu_irq_enter() to make sure RCU
    is watching when it records a stack trace. But if the stack tracer
    is triggered while tracing inside of a rcu_irq_enter(), calling
    rcu_irq_enter() unconditionally can be problematic.

    The reason for having rcu_irq_enter() in the first place has been
    fixed from within the saving of the stack trace code, and there's no
    reason for doing it in the stack tracer itself. Just remove it.

    Cc: stable@vger.kernel.org
    Fixes: 0be964be0 ("module: Sanitize RCU usage and locking")
    Acked-by: Paul E. McKenney
    Suggested-by: "Paul E. McKenney"
    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     

12 Jul, 2017

1 commit


29 Jun, 2017

1 commit

  • When doing the following command:

    # echo ":mod:kvm_intel" > /sys/kernel/tracing/stack_trace_filter

    it triggered a crash.

    This happened with the clean up of probes. It required all callers of the
    regex function (doing ftrace filtering) to have ops->private be a pointer
    to a trace_array. But for the stack tracer, that is not the case.

    Allow for the ops->private to be NULL, and change the function command
    callbacks to handle the trace_array pointer being NULL as well.
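
    A sketch of the kind of guard described (names assumed):

    struct trace_array *tr = ops->private;  /* NULL for the stack tracer */

    if (!tr)
            return -ENODEV;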

    Fixes: d2afd57a4b96 ("tracing/ftrace: Allow instances to have their own function probes")
    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     

11 Apr, 2017

4 commits

  • Tracing uses rcu_irq_enter() as a way to make sure that RCU is watching when
    it needs to use rcu_read_lock() and friends. This is because tracing can
    happen as RCU is about to enter user space, or about to go idle, and RCU
    does not watch for RCU read side critical sections as it makes the
    transition.

    There is a small location within the RCU infrastructure where
    rcu_irq_enter() itself will not work. If tracing were to occur in that
    section, it would break if it tried to use rcu_irq_enter().

    Originally, this happened with the stack_tracer, because it will call
    save_stack_trace when it encounters stack usage that is greater than any
    stack usage it had encountered previously. There was a case where that
    happened in the RCU section where rcu_irq_enter() did not work, and lockdep
    complained loudly about it. To fix it, stack tracing added a way to be
    disabled, and RCU would disable stack tracing during the critical section
    in which rcu_irq_enter() was inoperable. This solution worked, but there
    are other cases that use rcu_irq_enter(), and it would be a good idea to
    have RCU provide a way to let others know that rcu_irq_enter() will not
    work. For example, in trace events.

    Another helpful aspect of this change is that it also moves the per cpu
    variable checked in the RCU critical section into the same cache locality
    as other RCU per cpu variables used in that same location.

    I'm keeping the stack_trace_disable() code, as that still could be used in
    the future by places that really need to disable it. And since it's only a
    static inline, it won't take up any kernel text if it is not used.

    Link: http://lkml.kernel.org/r/20170405093207.404f8deb@gandalf.local.home

    Acked-by: Paul E. McKenney
    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     
  • In order to eliminate a function call, make "trace_active" into
    "disable_stack_tracer" and convert stack_tracer_disable() and friends into
    static inline functions.
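
    A sketch of the conversion described; the per-CPU counter body is an
    assumption:

    static inline void stack_tracer_disable(void)
    {
            /* no function call: just bump the per-CPU disable counter */
            this_cpu_inc(disable_stack_tracer);
    }

    static inline void stack_tracer_enable(void)
    {
            this_cpu_dec(disable_stack_tracer);
    }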

    Acked-by: Paul E. McKenney
    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     
  • There are certain parts of the kernel that cannot let stack tracing
    proceed (namely in RCU), because the stack tracer uses RCU, and parts of RCU
    internals cannot handle having RCU read side locks taken.

    Add stack_tracer_disable() and stack_tracer_enable() functions to let RCU
    stop stack tracing on the current CPU when it is in those critical sections.

    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     
  • The trace_active per cpu variable can be updated with the __this_cpu_*()
    functions, as it only gets updated on the CPU that the variable is on.

    Thanks to Paul McKenney for suggesting __this_cpu_* instead of this_cpu_*.
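
    The __this_cpu_*() variants skip the extra safety work of this_cpu_*()
    when the caller already guarantees CPU locality; a sketch:

    /* safe: trace_active is only ever touched from its own CPU */
    __this_cpu_inc(trace_active);
    ...
    __this_cpu_dec(trace_active);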

    Acked-by: Paul E. McKenney
    Signed-off-by: Steven Rostedt (VMware)

    Steven Rostedt (VMware)
     

10 Mar, 2017

1 commit

  • Fix typos and add the following to the scripts/spelling.txt:

    overide||override

    While we are here, fix the doubled "address" in the touched line
    Documentation/devicetree/bindings/regulator/ti-abb-regulator.txt.

    Also, fix the comment block style in the touched hunks in
    drivers/media/dvb-frontends/drx39xyj/drx_driver.h.

    Link: http://lkml.kernel.org/r/1481573103-11329-21-git-send-email-yamada.masahiro@socionext.com
    Signed-off-by: Masahiro Yamada
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Masahiro Yamada
     

02 Mar, 2017

1 commit


20 Feb, 2016

1 commit

  • When enabling stack trace via "echo 1 > /proc/sys/kernel/stack_tracer_enabled",
    the below KASAN warning is triggered:

    BUG: KASAN: stack-out-of-bounds in check_stack+0x344/0x848 at addr ffffffc0689ebab8
    Read of size 8 by task ksoftirqd/4/29
    page:ffffffbdc3a27ac0 count:0 mapcount:0 mapping: (null) index:0x0
    flags: 0x0()
    page dumped because: kasan: bad access detected
    CPU: 4 PID: 29 Comm: ksoftirqd/4 Not tainted 4.5.0-rc1 #129
    Hardware name: Freescale Layerscape 2085a RDB Board (DT)
    Call trace:
    [] dump_backtrace+0x0/0x3a0
    [] show_stack+0x24/0x30
    [] dump_stack+0xd8/0x168
    [] kasan_report_error+0x6a0/0x920
    [] kasan_report+0x70/0xb8
    [] __asan_load8+0x60/0x78
    [] check_stack+0x344/0x848
    [] stack_trace_call+0x1c4/0x370
    [] ftrace_ops_no_ops+0x2c0/0x590
    [] ftrace_graph_call+0x0/0x14
    [] fpsimd_thread_switch+0x24/0x1e8
    [] __switch_to+0x34/0x218
    [] __schedule+0x3ac/0x15b8
    [] schedule+0x5c/0x178
    [] smpboot_thread_fn+0x350/0x960
    [] kthread+0x1d8/0x2b0
    [] ret_from_fork+0x10/0x40
    Memory state around the buggy address:
    ffffffc0689eb980: 00 00 00 00 00 00 00 00 f1 f1 f1 f1 00 f4 f4 f4
    ffffffc0689eba00: f3 f3 f3 f3 00 00 00 00 00 00 00 00 00 00 00 00
    >ffffffc0689eba80: 00 00 f1 f1 f1 f1 00 f4 f4 f4 f3 f3 f3 f3 00 00
    ^
    ffffffc0689ebb00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    ffffffc0689ebb80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

    The stack tracer traverses the whole kernel stack when saving the max stack
    trace. It may touch the stack red zones, which causes the warning. So, just
    disable the instrumentation to silence the warning.
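
    One way to do this, sketched here as an assumption about the actual patch,
    is to read the raw stack words through the KASAN-bypassing accessor:

    /* READ_ONCE_NOCHECK() avoids KASAN checks on this one access */
    if (READ_ONCE_NOCHECK(*p) == stack_dump_trace[i])
            ...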

    Link: http://lkml.kernel.org/r/1455309960-18930-1-git-send-email-yang.shi@linaro.org

    Signed-off-by: Yang Shi
    Signed-off-by: Steven Rostedt

    Yang Shi
     

30 Jan, 2016

1 commit

  • When a max stack trace is discovered, the stack dump is saved. In order to
    not record the overhead of the stack tracer, the ip of the traced function
    is looked for within the dump. The trace is started from the location of
    that function. But if for some reason the ip is not found, the entire stack
    trace is then truncated. That's not very useful. Instead, print everything
    if the ip of the traced function is not found within the trace.

    This issue showed up on s390.

    Link: http://lkml.kernel.org/r/20160129102241.1b3c9c04@gandalf.local.home

    Fixes: 72ac426a5bb0 ("tracing: Clean up stack tracing and fix fentry updates")
    Cc: stable@vger.kernel.org # v4.3+
    Reported-by: Heiko Carstens
    Tested-by: Heiko Carstens
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

07 Nov, 2015

1 commit

  • Pull tracing updates from Steven Rostedt:
    "Most of the changes are clean ups and small fixes. Some of them have
    stable tags to them. I searched through my INBOX just as the merge
    window opened and found lots of patches to pull. I ran them through
    all my tests and they were in linux-next for a few days.

    Features added this release:
    ----------------------------

    - Module globbing. You can now filter function tracing to several
    modules. # echo '*:mod:*snd*' > set_ftrace_filter (Dmitry Safonov)

    - Tracer specific options are now visible even when the tracer is not
    active. It was rather annoying that you can only see and modify
    tracer options after enabling the tracer. Now they are in the
    options/ directory even when the tracer is not active. Although
    they are still only visible when the tracer is active in the
    trace_options file.

    - Trace options are now per instance (although some of the tracer
    specific options are global)

    - New tracefs file: set_event_pid. If any pid is added to this file,
    then all events in the instance will filter out events that are not
    part of this pid. sched_switch and sched_wakeup events handle next
    and the wakee pids"

    * tag 'trace-v4.4' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (68 commits)
    tracefs: Fix refcount imbalance in start_creating()
    tracing: Put back comma for empty fields in boot string parsing
    tracing: Apply tracer specific options from kernel command line.
    tracing: Add some documentation about set_event_pid
    ring_buffer: Remove unneeded smp_wmb() before wakeup of reader benchmark
    tracing: Allow dumping traces without tracking trace started cpus
    ring_buffer: Fix more races when terminating the producer in the benchmark
    ring_buffer: Do no not complete benchmark reader too early
    tracing: Remove redundant TP_ARGS redefining
    tracing: Rename max_stack_lock to stack_trace_max_lock
    tracing: Allow arch-specific stack tracer
    recordmcount: arm64: Replace the ignored mcount call into nop
    recordmcount: Fix endianness handling bug for nop_mcount
    tracepoints: Fix documentation of RCU lockdep checks
    tracing: ftrace_event_is_function() can return boolean
    tracing: is_legal_op() can return boolean
    ring-buffer: rb_event_is_commit() can return boolean
    ring-buffer: rb_per_cpu_empty() can return boolean
    ring_buffer: ring_buffer_empty{cpu}() can return boolean
    ring-buffer: rb_is_reader_page() can return boolean
    ...

    Linus Torvalds
     

04 Nov, 2015

2 commits

  • Now that max_stack_lock is a global variable, it requires a naming
    convention that is unlikely to collide. Rename it to the same naming
    convention that the other stack_trace variables have.

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • A stack frame may be used in a different way depending on cpu architecture.
    Thus it is not always appropriate to slurp the stack contents, as the
    current check_stack() does, in order to calculate a stack index (height) at
    a given function call. At least not on arm64.

    In addition, there is a possibility that we will mistakenly detect a stale
    stack frame which has not been overwritten.

    This patch makes check_stack() a weak function so as to later implement an
    arch-specific version.
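
    A sketch of the weak-function mechanism being used; the exact signature is
    an assumption:

    /* generic default; an arch can supply its own non-weak check_stack() */
    void __weak check_stack(unsigned long ip, unsigned long *stack)
    {
            ...
    }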

    Link: http://lkml.kernel.org/r/1446182741-31019-5-git-send-email-takahiro.akashi@linaro.org

    Signed-off-by: AKASHI Takahiro
    Signed-off-by: Steven Rostedt

    AKASHI Takahiro
     

21 Oct, 2015

1 commit

  • The code in the stack tracer should not be executed within an NMI, as it
    grabs spinlocks, and stack tracing in an NMI gives the possibility of
    causing a deadlock. Although this is safe on x86_64, because it does not
    perform stack traces when the task struct stack is not in use (interrupts
    and NMIs), it may be an issue for NMIs on i386 and other archs that use the
    same stack as the NMI.
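
    A sketch of the guard this implies; in_nmi() is the real helper, the exact
    placement is assumed:

    /* never take the stack tracer's spinlocks from NMI context */
    if (unlikely(in_nmi()))
            return;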

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

20 Oct, 2015

1 commit

  • The stack tracer was triggering the WARN_ON() in module.c:

    static void module_assert_mutex_or_preempt(void)
    {
    #ifdef CONFIG_LOCKDEP
            if (unlikely(!debug_locks))
                    return;

            WARN_ON(!rcu_read_lock_sched_held() &&
                    !lockdep_is_held(&module_mutex));
    #endif
    }

    The reason is that the stack tracer traces all function calls, and some of
    those calls happen while exiting or entering user space and idle. Some of
    these functions are called after RCU had already stopped watching, as RCU
    does not watch userspace or idle CPUs.

    If a max stack is hit, then the save_stack_trace() is called, which will
    check module addresses and call module_assert_mutex_or_preempt(), and then
    trigger the warning. The sad part is, the warning itself will also do a
    stack trace and trigger the same warning. That probably should be fixed.

    The warning was added by 0be964be0d45 "module: Sanitize RCU usage and
    locking", but this bug has probably been around longer. It's unlikely to
    cause much harm, but the new warning causes the system to lock up.
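
    A sketch of the fix this implies (rcu_irq_enter()/rcu_irq_exit() are real
    APIs; their placement here is an assumption):

    /* force RCU to watch for the duration of the stack save */
    rcu_irq_enter();
    save_stack_trace(&trace);
    rcu_irq_exit();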

    Cc: stable@vger.kernel.org # 4.2+
    Cc: Peter Zijlstra
    Cc:"Paul E. McKenney"
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

21 Jul, 2015

1 commit

  • Akashi Takahiro was porting the stack tracer to arm64 and found some
    issues with it. One was that it repeats the top function, due to the
    stack frame added by the mcount caller and added by itself. This
    was added when fentry came in, and before fentry created its own stack
    frame. But x86's fentry now creates its own stack frame, and there's
    no need to insert the function again.

    This also cleans up the code a bit, where it doesn't need to do something
    special for fentry, and doesn't include insertion of a duplicate
    entry for the called function being traced.

    Link: http://lkml.kernel.org/r/55A646EE.6030402@linaro.org

    Some-suggestions-by: Jungseok Lee
    Some-suggestions-by: Mark Rutland
    Reported-by: AKASHI Takahiro
    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

16 Apr, 2015

1 commit

  • The seq_printf return value, because it's frequently misused,
    will eventually be converted to void.

    See: commit 1f33c41c03da ("seq_file: Rename seq_overflow() to
    seq_has_overflowed() and make public")

    Miscellanea:

    o Remove unused return value from trace_lookup_stack

    Signed-off-by: Joe Perches
    Acked-by: Steven Rostedt
    Cc: Al Viro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     

23 Jan, 2015

2 commits


19 Sep, 2014

2 commits

  • This facility is used in a few places, so let's introduce a helper
    function to improve code readability.
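
    A sketch of the helper this likely refers to, given the STACK_END_MAGIC
    entry below; the name and definition are assumptions:

    /* true if the magic canary at the stack end has been overwritten */
    #define task_stack_end_corrupted(task) \
            (*(end_of_stack(task)) != STACK_END_MAGIC)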

    Signed-off-by: Aaron Tomlin
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: aneesh.kumar@linux.vnet.ibm.com
    Cc: dzickus@redhat.com
    Cc: bmr@redhat.com
    Cc: jcastillo@redhat.com
    Cc: oleg@redhat.com
    Cc: riel@redhat.com
    Cc: prarit@redhat.com
    Cc: jgh@redhat.com
    Cc: minchan@kernel.org
    Cc: mpe@ellerman.id.au
    Cc: tglx@linutronix.de
    Cc: hannes@cmpxchg.org
    Cc: Andrew Morton
    Cc: Benjamin Herrenschmidt
    Cc: Jiri Olsa
    Cc: Linus Torvalds
    Cc: Masami Hiramatsu
    Cc: Michael Ellerman
    Cc: Paul Mackerras
    Cc: Seiji Aguchi
    Cc: Steven Rostedt
    Cc: Yasuaki Ishimatsu
    Cc: linuxppc-dev@lists.ozlabs.org
    Link: http://lkml.kernel.org/r/1410527779-8133-3-git-send-email-atomlin@redhat.com
    Signed-off-by: Ingo Molnar

    Aaron Tomlin
     
  • Tasks get their end of stack set to STACK_END_MAGIC with the
    aim to catch stack overruns. Currently this feature does not
    apply to init_task. This patch removes this restriction.

    Note that a similar patch was posted by Prarit Bhargava
    some time ago but was never merged:

    http://marc.info/?l=linux-kernel&m=127144305403241&w=2

    Signed-off-by: Aaron Tomlin
    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Oleg Nesterov
    Acked-by: Michael Ellerman
    Cc: aneesh.kumar@linux.vnet.ibm.com
    Cc: dzickus@redhat.com
    Cc: bmr@redhat.com
    Cc: jcastillo@redhat.com
    Cc: jgh@redhat.com
    Cc: minchan@kernel.org
    Cc: tglx@linutronix.de
    Cc: hannes@cmpxchg.org
    Cc: Alex Thorlton
    Cc: Andrew Morton
    Cc: Benjamin Herrenschmidt
    Cc: Daeseok Youn
    Cc: David Rientjes
    Cc: Fabian Frederick
    Cc: Geert Uytterhoeven
    Cc: Jiri Olsa
    Cc: Kees Cook
    Cc: Kirill A. Shutemov
    Cc: Linus Torvalds
    Cc: Masami Hiramatsu
    Cc: Michael Opdenacker
    Cc: Paul Mackerras
    Cc: Prarit Bhargava
    Cc: Rik van Riel
    Cc: Rusty Russell
    Cc: Seiji Aguchi
    Cc: Steven Rostedt
    Cc: Vladimir Davydov
    Cc: Yasuaki Ishimatsu
    Cc: linuxppc-dev@lists.ozlabs.org
    Link: http://lkml.kernel.org/r/1410527779-8133-2-git-send-email-atomlin@redhat.com
    Signed-off-by: Ingo Molnar

    Aaron Tomlin