01 Oct, 2015

1 commit

  • In preparation for making trace options per instance, the global
    trace_flags needs to move from being a global variable to a field within
    the per-instance trace_array structure.

    There's still more work to do, as some functions use trace_flags without
    being passed a way to reach the current trace_array. For those, the
    global_trace is used directly (from trace.c), including when setting and
    clearing the trace_flags. This means that when a new instance is created,
    it simply inherits the trace_flags of the global_trace and cannot modify
    them. Depending on which functions have access to the trace_array, an
    instance's flags may not affect the parts of its trace where the
    global_trace is used. These cases will be fixed in future changes.
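
    A minimal sketch of the move, in simplified form (the real trace_array
    has many more fields, TRACE_ITER_PRINTK is used purely as an example
    option bit, and the helper names here are hypothetical):

        /* before: one global shared by every trace instance */
        unsigned long trace_flags = TRACE_DEFAULT_FLAGS;

        static bool printk_opt_enabled_global(void)
        {
                return trace_flags & TRACE_ITER_PRINTK;
        }

        /* after: each instance carries its own copy of the flags */
        struct trace_array {
                /* ... */
                unsigned long trace_flags;
        };

        static bool printk_opt_enabled(struct trace_array *tr)
        {
                return tr->trace_flags & TRACE_ITER_PRINTK;
        }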

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     

10 Jan, 2015

1 commit

  • Pull kgdb/kdb fixes from Jason Wessel:
    "These have been around since 3.17 and in kgdb-next for the last 9
    weeks and some will go back to -stable.

    Summary of changes:

    Cleanups
    - kdb: Remove unused command flags, repeat flags and KDB_REPEAT_NONE

    Fixes
    - kgdb/kdb: Allow access on a single core, if a CPU round up is
    deemed impossible, which will allow inspection of the now "trashed"
    kernel
    - kdb: Add enable mask for the command groups
    - kdb: access controls to restrict sensitive commands"

    * tag 'for_linus-3.19-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/jwessel/kgdb:
    kernel/debug/debug_core.c: Logging clean-up
    kgdb: timeout if secondary CPUs ignore the roundup
    kdb: Allow access to sensitive commands to be restricted by default
    kdb: Add enable mask for groups of commands
    kdb: Categorize kdb commands (similar to SysRq categorization)
    kdb: Remove KDB_REPEAT_NONE flag
    kdb: Use KDB_REPEAT_* values as flags
    kdb: Rename kdb_register_repeat() to kdb_register_flags()
    kdb: Rename kdb_repeat_t to kdb_cmdflags_t, cmd_repeat to cmd_flags
    kdb: Remove currently unused kdbtab_t->cmd_flags

    Linus Torvalds
     

14 Nov, 2014

2 commits

  • Currently kdb's ftdump command will livelock by constantly printk'ing
    the empty string at KERN_EMERG level if it is run when the ftrace system
    is not in use. This occurs because trace_empty() never returns false when
    the ring buffers are left at the start of a non-consuming read [launched
    by ring_buffer_read_start()].

    This patch changes the loop exit condition to use the result of
    trace_find_next_entry_inc(). Effectively this switches the non-consuming
    kdb dumper to follow the approach of the non-consuming userspace
    interface [s_next()] rather than the consuming ftrace_dump().
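
    A sketch of the resulting loop shape (simplified; the real kdb dumper
    also counts lines and flushes its output between entries):

        /* Exit when the non-consuming iterator finds no further entry,
         * instead of polling trace_empty(), which never flips to false
         * for a freshly started non-consuming read. */
        while (trace_find_next_entry_inc(&iter) != NULL) {
                print_trace_line(&iter);
                /* emit iter.seq to the kdb console, then reset it */
        }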

    Link: http://lkml.kernel.org/r/1415277716-19419-3-git-send-email-daniel.thompson@linaro.org

    Cc: Ingo Molnar
    Cc: Andrew Morton
    Cc: John Stultz
    Cc: Sumit Semwal
    Cc: Jason Wessel
    Signed-off-by: Daniel Thompson
    Signed-off-by: Steven Rostedt

    Daniel Thompson
     
  • Currently kdb's ftdump command unconditionally crashes due to a NULL
    pointer dereference whenever the command is run. This in turn causes the
    kernel to panic.

    The abridged stacktrace (gathered with ARCH=arm) is:

    --- cut here ---
    [] (panic) from [] (die+0x264/0x440)
    [] (die) from [] (__do_kernel_fault.part.11+0x74/0x84)
    [] (__do_kernel_fault.part.11) from [] (do_page_fault+0x1d0/0x3c4)
    [] (do_page_fault) from [] (do_DataAbort+0x48/0xac)
    [] (do_DataAbort) from [] (__dabt_svc+0x38/0x60)
    Exception stack(0xc0deba88 to 0xc0debad0)
    ba80: e8c29180 00000001 e9854304 e9854300 c0f567d8 c0df2580
    baa0: 00000000 00000000 00000000 c0f117b8 c0e3a3c0 c0debb0c 00000000 c0debad0
    bac0: 0000672e c02f4d60 60000193 ffffffff
    [] (__dabt_svc) from [] (kdb_ftdump+0x1e4/0x3d8)
    [] (kdb_ftdump) from [] (kdb_parse+0x2b8/0x698)
    [] (kdb_parse) from [] (kdb_main_loop+0x52c/0x784)
    [] (kdb_main_loop) from [] (kdb_stub+0x238/0x490)
    --- cut here ---

    The NULL deref occurs due to the uninitialized use of struct
    trace_iterator's buffer_iter member.

    This is a regression, albeit a fairly elderly one. It was introduced
    by commit 6d158a813efc ("tracing: Remove NR_CPUS array from
    trace_iterator").

    This patch solves this by providing a static array of ring_buffer_iter
    pointers and using it to initialize buffer_iter. Static allocation is
    used solely because the trace_iterator itself is statically allocated;
    it also means the pointer has to be NULLed during cleanup to avoid
    use-after-free problems.
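
    A sketch of the approach (the array and function names here are
    hypothetical; only the buffer_iter assignments reflect the commit text):

        static struct trace_iterator iter;
        static struct ring_buffer_iter *kdb_buffer_iter[NR_CPUS];

        static void kdb_dump_setup(void)
        {
                /* give the iterator storage for its per-CPU ring buffer
                 * iterators; previously this pointer was left NULL */
                iter.buffer_iter = kdb_buffer_iter;
        }

        static void kdb_dump_cleanup(void)
        {
                /* the backing array is static, so clear the pointer to
                 * avoid stale use on the next invocation */
                iter.buffer_iter = NULL;
        }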

    Link: http://lkml.kernel.org/r/1415277716-19419-2-git-send-email-daniel.thompson@linaro.org

    Cc: Jason Wessel
    Signed-off-by: Daniel Thompson
    Signed-off-by: Steven Rostedt

    Daniel Thompson
     

11 Nov, 2014

3 commits

  • This patch introduces several new flags to collect kdb commands into
    groups (later allowing them to be optionally disabled).

    This follows the same prior art used to enable/disable magic SysRq
    commands.

    The commands have been categorized as follows:

    Always on: go (w/o args), env, set, help, ?, cpu (w/o args), sr,
    dmesg, disable_nmi, defcmd, summary, grephelp
    Mem read: md, mdr, mdp, mds, ef, bt (with args), per_cpu
    Mem write: mm
    Reg read: rd
    Reg write: go (with args), rm
    Inspect: bt (w/o args), btp, bta, btc, btt, ps, pid, lsmod
    Flow ctrl: bp, bl, bph, bc, be, bd, ss
    Signal: kill
    Reboot: reboot
    All: cpu, kgdb, nmi_console (and all of the above)
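
    A sketch of how the group flags line up with the categories above (the
    authoritative definitions live in the kdb headers and may use different
    names or bit positions):

        #define KDB_ENABLE_ALL         (1 << 0) /* special: enables everything */
        #define KDB_ENABLE_MEM_READ    (1 << 1)
        #define KDB_ENABLE_MEM_WRITE   (1 << 2)
        #define KDB_ENABLE_REG_READ    (1 << 3)
        #define KDB_ENABLE_REG_WRITE   (1 << 4)
        #define KDB_ENABLE_INSPECT     (1 << 5)
        #define KDB_ENABLE_FLOW_CTRL   (1 << 6)
        #define KDB_ENABLE_SIGNAL      (1 << 7)
        #define KDB_ENABLE_REBOOT      (1 << 8)
        #define KDB_ENABLE_ALWAYS_SAFE (1 << 9)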

    Signed-off-by: Daniel Thompson
    Cc: Jason Wessel
    Signed-off-by: Jason Wessel

    Daniel Thompson
     
  • Since we now treat KDB_REPEAT_* as flags, there is no need to
    pass KDB_REPEAT_NONE. It's just the default behaviour when no
    flags are specified.
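
    Illustratively (the command, handler and other arguments are
    placeholders, not the exact call sites touched by this patch):

        /* before: the "no repeat" case was spelled out explicitly */
        kdb_register_flags("env", kdb_env, "", "Show environment", 0,
                           KDB_REPEAT_NONE);

        /* after: 0 (no flags) already means "do not repeat" */
        kdb_register_flags("env", kdb_env, "", "Show environment", 0, 0);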

    Signed-off-by: Anton Vorontsov
    Signed-off-by: John Stultz
    Signed-off-by: Daniel Thompson
    Cc: Jason Wessel
    Signed-off-by: Jason Wessel

    Anton Vorontsov
     
  • We're about to add more options for command behaviour, so let's give
    the low-level kdb command registration function a more generic name.

    These are just renames; there are no functional changes.

    Signed-off-by: Anton Vorontsov
    Signed-off-by: John Stultz
    Signed-off-by: Daniel Thompson
    Cc: Jason Wessel
    Signed-off-by: Jason Wessel

    Anton Vorontsov
     

15 Mar, 2013

3 commits

  • Currently, the latency tracers and the snapshot feature work by having
    a separate trace_array called "max_tr" that holds the snapshot buffer.
    For latency tracers, the running buffer is swapped with this snapshot
    buffer to save the current max latency.

    The only items the max_tr really needs are a copy of the buffer itself,
    the per_cpu data pointers, the time_start timestamp that records when
    the max latency was triggered, and the cpu where the max latency was
    triggered. All other fields in trace_array are unused by the max_tr,
    making the max_tr mostly bloat.

    This change removes the max_tr completely and adds a new structure
    called trace_buffer that holds the buffer pointer, the per_cpu data
    pointers, the time_start timestamp, and the cpu where the latency
    occurred.

    The trace_array now has two trace_buffers: one for the normal trace and
    one for the max trace or snapshot. By doing this, not only do we remove
    the bloat from the max_tr, but trace instances can now use their own
    snapshot feature, rather than only the top-level global_trace having the
    snapshot feature and latency tracers to itself.
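
    A sketch of the new layout (simplified; member names beyond those given
    in the commit text are a plausible rendering, and the real structures
    carry additional fields):

        struct trace_buffer {
                struct ring_buffer              *buffer;     /* the ring buffer itself */
                struct trace_array_cpu __percpu *data;       /* per_cpu data pointers */
                u64                              time_start; /* when the max latency hit */
                int                              cpu;        /* where the max latency hit */
        };

        struct trace_array {
                /* ... */
                struct trace_buffer     trace_buffer;   /* the normal trace */
                struct trace_buffer     max_buffer;     /* the max trace / snapshot */
        };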

    Signed-off-by: Steven Rostedt

    Steven Rostedt (Red Hat)
     
  • The global and max-tr currently use static per_cpu arrays for the CPU
    data descriptors. But newly allocated trace_arrays need dynamically
    allocated per_cpu arrays. Instead of using the static arrays, switch
    the global and max-tr to allocated data.
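
    A sketch of the switch (illustrative names; tr->data stands in for the
    CPU data descriptor field):

        /* before: fixed at build time, usable only by the global trace */
        static DEFINE_PER_CPU(struct trace_array_cpu, global_trace_cpu);

        /* after: any trace_array can allocate its own per-CPU data */
        tr->data = alloc_percpu(struct trace_array_cpu);
        if (!tr->data)
                return -ENOMEM;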

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • Both RING_BUFFER_ALL_CPUS and TRACE_PIPE_ALL_CPU are defined as
    -1 and used to say that all the ring buffers are to be modified
    or read (instead of just a single cpu, which would be >= 0).

    There's no reason to keep TRACE_PIPE_ALL_CPU, as it has also started to
    be used for more than it was created for, and now that the ring buffer
    code has added a generic RING_BUFFER_ALL_CPUS define, we can clean up
    the trace code to use that instead and remove the TRACE_PIPE_ALL_CPU
    macro.
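
    The two sentinels side by side (both -1, per the commit; the second is
    the one being removed):

        #define RING_BUFFER_ALL_CPUS -1  /* generic ring buffer define */
        #define TRACE_PIPE_ALL_CPU   -1  /* trace-local duplicate, now dropped */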

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

23 Oct, 2010

1 commit


05 Aug, 2010

2 commits