28 Oct, 2019

5 commits

  • Currently if sequences such as "\ehelp\r" are delivered to the console then
    the h gets eaten by the escape handling code. Since pressing escape
    becomes something of a nervous twitch for vi users (and that escape doesn't
    have much effect at a shell prompt) it is more helpful to emit the 'h' than
    the '\e'.

    We don't simply choose to emit the final character for all escape sequences
    since that will do odd things for unsupported escape sequences (in
    other words we retain the existing behaviour once we see '\e[').
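
    A minimal sketch of the idea (the variable and helper names here are
    invented for illustration, not taken from the patch): once we have seen
    an '\e', a following character that cannot start a supported '[' sequence
    is handed back to the caller instead of being swallowed.

    /* Sketch only: 'ch' is the character received just after '\e'. */
    if (ch != '[')
            return ch;      /* e.g. the 'h' in "\ehelp\r" */
    /* otherwise keep parsing "\e[..." exactly as before */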

    Signed-off-by: Daniel Thompson
    Reviewed-by: Douglas Anderson
    Link: https://lore.kernel.org/r/20191025073328.643-6-daniel.thompson@linaro.org

    Daniel Thompson
     
  • Currently if an escape timer is interrupted by a character from a
    different input source then the new character is discarded and the
    function returns '\e' (which will be discarded by the level above).
    It is hard to see why this would ever be the desired behaviour.
    Fix this to return the new character rather than the '\e'.

    This is a bigger refactor than might be expected because the new
    character needs to go through escape sequence detection.
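
    As a rough illustration of the new behaviour (helper name invented for
    the sketch):

    /* Sketch: 'key' arrived from a different input source while the
     * escape timer was running.  Instead of discarding it and returning
     * '\e', feed it into escape sequence detection like any other
     * character. */
    if (key > 0)
            return kdb_handle_escape_char(key);   /* hypothetical helper */
    return '\e';    /* nothing arrived: the timer genuinely expired */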

    Signed-off-by: Daniel Thompson
    Reviewed-by: Douglas Anderson
    Link: https://lore.kernel.org/r/20191025073328.643-5-daniel.thompson@linaro.org

    Daniel Thompson
     
    kdb_read() contains special case logic to force it to exit after reading
    a single character. We can remove all the special case logic by directly
    calling the function that reads a single character instead. This also
    allows us to tidy up the function prototype; because it now matches
    getchar(), we can also rename the function to make its role clearer.

    This does involve some extra code to handle btaprompt properly but we
    don't mind the new lines of code here because the old code had some
    interesting problems (bad newline handling and odd treatment of
    unexpected characters).

    Signed-off-by: Daniel Thompson
    Reviewed-by: Douglas Anderson
    Link: https://lore.kernel.org/r/20191025073328.643-4-daniel.thompson@linaro.org

    Daniel Thompson
     
  • Currently kdb_read_get_key() contains complex control flow that, on
    close inspection, turns out to be unnecessary. In particular:

    1. It is impossible to enter the branch conditioned on (escape_delay == 1)
    except when the loop enters with (escape_delay == 2) allowing us to
    combine the branches.

    2. Most of the code conditioned on (escape_delay == 2) simply modifies
    local data and then breaks out of the loop causing the function to
    return escape_data[0].

    3. Based on #2 there is not actually any need to ever explicitly set
    escape_delay to 2 because it is much simpler to directly return
    escape_data[0] instead.

    4. escape_data[0] is, for all but one exit path, known to be '\e'.

    Simplify the code based on these observations.

    There is a subtle (and harmless) change of behaviour resulting from this
    simplification: instead of letting the escape time out after ~1998
    milliseconds we now time out after ~2000 milliseconds.
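
    A sketch of the resulting shape (names and constants are illustrative
    only; the real function also feeds characters through escape sequence
    matching):

    /* Sketch: wait up to ~2 seconds for the rest of an escape sequence. */
    for (i = 0; i < 200; i++) {
            key = poll_console_char();              /* hypothetical poll */
            if (key >= 0)
                    return kdb_handle_escape(key);  /* hypothetical */
            mdelay(10);                             /* 200 * 10ms = ~2000ms */
    }
    return '\e';    /* timed out: just report the escape itself */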

    Signed-off-by: Daniel Thompson
    Reviewed-by: Douglas Anderson
    Link: https://lore.kernel.org/r/20191025073328.643-3-daniel.thompson@linaro.org

    Daniel Thompson
     
  • kdb_read_get_key() has extremely complex break/continue control flow
    managed by state variables and is very hard to review or modify. In
    particular the way the escape sequence handling interacts with the
    general control flow is hard to follow. Separate out the escape key
    handling, without changing the control flow. This makes the main body of
    the code easier to review.

    Signed-off-by: Daniel Thompson
    Reviewed-by: Douglas Anderson
    Link: https://lore.kernel.org/r/20191025073328.643-2-daniel.thompson@linaro.org

    Daniel Thompson
     

24 Oct, 2019

1 commit

  • Recent versions of gcc (reported on gcc-7.4) issue array subscript
    warnings for builds where SMP is not enabled.

    kernel/debug/debug_core.c: In function 'kdb_dump_stack_on_cpu':
    kernel/debug/debug_core.c:452:17: warning: array subscript is outside array bounds [-Warray-bounds]
    if (!(kgdb_info[cpu].exception_state & DCPU_IS_SLAVE)) {
    ~~~~~~~~~^~~~~
    kernel/debug/debug_core.c:469:33: warning: array subscript is outside array bounds [-Warray-bounds]
    kgdb_info[cpu].exception_state |= DCPU_WANT_BT;
    kernel/debug/debug_core.c:470:18: warning: array subscript is outside array bounds [-Warray-bounds]
    while (kgdb_info[cpu].exception_state & DCPU_WANT_BT)

    There is no bug here but there is scope to improve the code
    generation for non-SMP systems (whilst also silencing the warning).
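
    One plausible way to get both effects, offered purely as a sketch of
    the idea rather than the exact shape of the fix: let the compiler
    discard the cross-CPU path entirely on !SMP builds.

    /* Sketch: on non-SMP builds there is only one CPU, so the "ask
     * another CPU to dump its stack" path can never run and the
     * kgdb_info[cpu] indexing that triggered the warning is optimised
     * away. */
    if (!IS_ENABLED(CONFIG_SMP) || cpu == raw_smp_processor_id()) {
            dump_stack();           /* dump the local stack directly */
            return;
    }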

    Reported-by: kbuild test robot
    Fixes: 2277b492582d ("kdb: Fix stack crawling on 'running' CPUs that aren't the master")
    Signed-off-by: Daniel Thompson
    Link: https://lore.kernel.org/r/20191021101057.23861-1-daniel.thompson@linaro.org
    Reviewed-by: Douglas Anderson

    Daniel Thompson
     

10 Oct, 2019

4 commits

  • In kdb when you do 'btc' (back trace on CPU) it doesn't necessarily
    give you the right info. Specifically on many architectures
    (including arm64, where I tested) you can't dump the stack of a
    "running" process that isn't the process running on the current CPU.
    This can be seen by this:

    echo SOFTLOCKUP > /sys/kernel/debug/provoke-crash/DIRECT
    # wait 2 seconds
    g

    Here's what I see now on rk3399-gru-kevin. I see the stack crawl for
    the CPU that handled the sysrq but everything else just shows me stuck
    in __switch_to() which is bogus:

    ======

    [0]kdb> btc
    btc: cpu status: Currently on cpu 0
    Available cpus: 0, 1-3(I), 4, 5(I)
    Stack traceback for pid 0
    0xffffff801101a9c0 0 0 1 0 R 0xffffff801101b3b0 *swapper/0
    Call trace:
    dump_backtrace+0x0/0x138
    ...
    kgdb_compiled_brk_fn+0x34/0x44
    ...
    sysrq_handle_dbg+0x34/0x5c
    Stack traceback for pid 0
    0xffffffc0f175a040 0 0 1 1 I 0xffffffc0f175aa30 swapper/1
    Call trace:
    __switch_to+0x1e4/0x240
    0xffffffc0f65616c0
    Stack traceback for pid 0
    0xffffffc0f175d040 0 0 1 2 I 0xffffffc0f175da30 swapper/2
    Call trace:
    __switch_to+0x1e4/0x240
    0xffffffc0f65806c0
    Stack traceback for pid 0
    0xffffffc0f175b040 0 0 1 3 I 0xffffffc0f175ba30 swapper/3
    Call trace:
    __switch_to+0x1e4/0x240
    0xffffffc0f659f6c0
    Stack traceback for pid 1474
    0xffffffc0dde8b040 1474 727 1 4 R 0xffffffc0dde8ba30 bash
    Call trace:
    __switch_to+0x1e4/0x240
    __schedule+0x464/0x618
    0xffffffc0dde8b040
    Stack traceback for pid 0
    0xffffffc0f17b0040 0 0 1 5 I 0xffffffc0f17b0a30 swapper/5
    Call trace:
    __switch_to+0x1e4/0x240
    0xffffffc0f65dd6c0

    ===

    The problem is that 'btc' eventually boils down to
    show_stack(task_struct, NULL);

    ...and show_stack() doesn't work for "running" CPUs because their
    registers haven't been stashed.

    On x86 things might work better (I haven't tested) because kdb has a
    special case for x86 in kdb_show_stack() where it passes the stack
    pointer to show_stack(). This wouldn't work on arm64, where the stack
    crawling function seems to need the "fp" and "pc" rather than the "sp",
    which is presumably why arm64's show_stack() function totally ignores
    the "sp" parameter.

    NOTE: we _can_ get a good stack dump for all the cpus if we manually
    switch each one to the kdb master and do a back trace. AKA:
    cpu 4
    bt
    ...will give the expected trace. That's because arm64's
    dump_backtrace() will now see that "tsk == current" and go through a
    different path.

    In this patch I fix the problems by catching a request to stack crawl
    a task that's running on a CPU and then I ask that CPU to do the stack
    crawl.
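
    A very rough sketch of that idea (the helper names are best guesses for
    illustration; the real patch plumbs the request through the kgdb
    round-up machinery):

    /* Sketch: a task currently running on another CPU has no saved
     * registers we can unwind from, so ask that CPU to print its own
     * backtrace instead. */
    if (task_curr(p)) {
            cpu = kdb_process_cpu(p);       /* which CPU is it running on? */
            kdb_dump_stack_on_cpu(cpu);     /* remote CPU walks its own stack */
    } else {
            show_stack(p, NULL);            /* sleeping task: use saved state */
    }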

    NOTE: this will (presumably) change what stack crawls are printed for
    x86 machines. Now kdb functions will show up in the stack crawl.
    Presumably this is OK but if it's not we can go back and add a special
    case for x86 again.

    Signed-off-by: Douglas Anderson
    Acked-by: Will Deacon
    Signed-off-by: Daniel Thompson

    Douglas Anderson
     
    I noticed that when I did "btc " and the CPU I passed in hadn't
    rounded up, I'd crash. I was going to copy the same fix from
    commit 162bc7f5afd7 ("kdb: Don't back trace on a cpu that didn't round
    up") into the "not all the CPUs" case, but decided it'd be better to
    clean things up a little bit.

    This consolidates the two code paths. It is _slightly_ wasteful in
    that the checks for "cpu" being too small or being offline aren't
    really needed when we're iterating over all online CPUs, but that
    really shouldn't hurt. Better to have the same code path.

    While at it, eliminate at least one slightly ugly (and totally
    needless) recursive use of kdb_parse().
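
    A sketch of the consolidated shape (helper name invented for the
    illustration): both the "all CPUs" and the "single CPU" forms of btc
    funnel through one per-cpu helper that performs the sanity checks.

    /* Sketch: one code path whether the user asked for one CPU or all. */
    for_each_online_cpu(cpu) {
            if (whichcpu != -1 && cpu != whichcpu)
                    continue;       /* user asked for a specific CPU */
            kdb_bt_cpu(cpu);        /* hypothetical helper: checks that the
                                     * CPU rounded up before tracing it */
    }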

    Signed-off-by: Douglas Anderson
    Acked-by: Will Deacon
    Signed-off-by: Daniel Thompson

    Douglas Anderson
     
    kdb_bt1() had a mysterious "argcount" parameter that was passed in
    (always the number 5, by the way) and never used. Presumably this is
    just old cruft. Remove it. While at it, upgrade the btaprompt parameter
    to a full-fledged bool instead of an int.

    Signed-off-by: Douglas Anderson
    Acked-by: Will Deacon
    Signed-off-by: Daniel Thompson

    Douglas Anderson
     
  • From doing a 'git log --patch kernel/debug', it looks as if DCPU_SSTEP
    has never been used. Presumably it used to be used back when kgdb was
    out of tree and nobody thought to delete the definition when the usage
    went away. Delete.

    Signed-off-by: Douglas Anderson
    Acked-by: Jason Wessel
    Acked-by: Will Deacon
    Signed-off-by: Daniel Thompson

    Douglas Anderson
     

26 Sep, 2019

1 commit

  • Right now kgdb/kdb hooks up to debug panics by registering for the panic
    notifier. This works OK except that it means that kgdb/kdb gets called
    _after_ the CPUs in the system are taken offline. That means that if
    anything important was happening on those CPUs (like something that might
    have contributed to the panic) you can't debug them.

    Specifically I ran into a case where I got a panic because a task was
    "blocked for more than 120 seconds" which was detected on CPU 2. I nicely
    got shown stack traces in the kernel log for all CPUs including CPU 0,
    which was running 'PID: 111 Comm: kworker/0:1H' and was in the middle of
    __mmc_switch().

    I then ended up at the kdb prompt where I switched over to kgdb to try
    to look at local variables of the process on CPU 0. I found that I
    couldn't.
    Digging more, I found that I had no info on any tasks running on CPUs
    other than CPU 2 and that asking kdb for help showed me "Error: no saved
    data for this cpu". This was because all the CPUs were offline.

    Let's move the entry of kdb/kgdb to a direct call from panic() and stop
    using the generic notifier. Putting a direct call in allows us to order
    things more properly and it also doesn't seem like we're breaking any
    abstractions by calling into the debugger from the panic function.
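
    Conceptually the new ordering looks something like this heavily
    simplified sketch of panic() (the direct entry point is called
    kgdb_panic() here for the sake of the sketch):

    void panic(const char *fmt, ...)
    {
            static char buf[1024];
            va_list args;

            va_start(args, fmt);
            vsnprintf(buf, sizeof(buf), fmt, args);
            va_end(args);

            /* Let kgdb/kdb in while the other CPUs are still online. */
            kgdb_panic(buf);                /* direct call, new ordering */

            smp_send_stop();                /* CPUs go offline only now */
            atomic_notifier_call_chain(&panic_notifier_list, 0, buf);
    }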

    Daniel said:

    : This patch changes the way kdump and kgdb interact with each other.
    : However it would seem rather odd to have both tools simultaneously armed
    : and, even if they were, the user still has the option to use panic_timeout
    : to force a kdump to happen. Thus I think the change of order is
    : acceptable.

    Link: http://lkml.kernel.org/r/20190703170354.217312-1-dianders@chromium.org
    Signed-off-by: Douglas Anderson
    Reviewed-by: Daniel Thompson
    Cc: Jason Wessel
    Cc: Kees Cook
    Cc: Borislav Petkov
    Cc: Thomas Gleixner
    Cc: Feng Tang
    Cc: YueHaibing
    Cc: Sergey Senozhatsky
    Cc: "Steven Rostedt (VMware)"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Douglas Anderson
     

03 Sep, 2019

2 commits


21 May, 2019

1 commit


14 May, 2019

1 commit

    The strncpy() function may leave the destination string buffer
    unterminated; better to use strscpy() instead.

    This fixes the following warning with gcc 8.2:

    kernel/debug/kdb/kdb_io.c: In function 'kdb_getstr':
    kernel/debug/kdb/kdb_io.c:449:3: warning: 'strncpy' specified bound 256 equals destination size [-Wstringop-truncation]
    strncpy(kdb_prompt_str, prompt, CMD_BUFLEN);
    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
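
    The replacement is essentially a one-liner (sketch, assuming
    kdb_prompt_str is a CMD_BUFLEN-sized array):

    /* strscpy() bounds the copy and always NUL-terminates */
    strscpy(kdb_prompt_str, prompt, sizeof(kdb_prompt_str));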

    Signed-off-by: Wenlin Kang
    Signed-off-by: Daniel Thompson

    Wenlin Kang
     

12 May, 2019

2 commits

  • The "whichcpu" comes from argv[3]. The cpu_online() macro looks up the
    cpu in a bitmap of online cpus, but if the value is too high then it
    could read beyond the end of the bitmap and possibly Oops.
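
    A bounds check along these lines is enough to keep the lookup inside
    the bitmap (sketch; the exact message and error code may differ):

    if (whichcpu >= nr_cpu_ids || !cpu_online(whichcpu)) {
            kdb_printf("cpu %lu is not online\n", whichcpu);
            return KDB_BADCPUNUM;
    }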

    Fixes: 5d5314d6795f ("kdb: core for kgdb back end (1 of 2)")
    Signed-off-by: Dan Carpenter
    Reviewed-by: Douglas Anderson
    Signed-off-by: Daniel Thompson

    Dan Carpenter
     
  • If you drop into kdb and type "summary", it prints out a line that
    says this:

    ccversion CCVERSION

    ...and I don't mean that it actually prints out the version of the C
    compiler. It literally prints out the string "CCVERSION".

    The version of the C Compiler is already printed at boot up and it
    doesn't seem useful to replicate this in kdb. Let's just delete it.
    We can also delete the bit of the Makefile that called the C compiler
    in an attempt to pass this into kdb. This will remove one extra call
    to the C compiler at Makefile parse time and (very slightly) speed up
    builds.

    Signed-off-by: Douglas Anderson
    Reviewed-by: Masahiro Yamada
    Signed-off-by: Daniel Thompson

    Douglas Anderson
     

02 May, 2019

3 commits

  • The strcpy() function is being deprecated. Replace it by the safer
    strscpy() and fix the following Coverity warning:

    "You might overrun the 129-character fixed-size string ks_namebuf
    by copying name without checking the length."

    Addresses-Coverity-ID: 138995 ("Copy into fixed size buffer")
    Signed-off-by: Gustavo A. R. Silva
    Signed-off-by: Daniel Thompson

    Gustavo A. R. Silva
     
  • The strcpy() function is being deprecated. Replace it by the safer
    strscpy() and fix the following Coverity warning:

    "You might overrun the 1024-character fixed-size string remcom_in_buffer
    by copying cmd without checking the length."

    Addresses-Coverity-ID: 138999 ("Copy into fixed size buffer")
    Signed-off-by: Gustavo A. R. Silva
    Signed-off-by: Daniel Thompson

    Gustavo A. R. Silva
     
  • In preparation to enabling -Wimplicit-fallthrough, mark switch
    cases where we are expecting to fall through.

    This patch fixes the following warnings:

    kernel/debug/gdbstub.c: In function ‘gdb_serial_stub’:
    kernel/debug/gdbstub.c:1031:7: warning: this statement may fall through [-Wimplicit-fallthrough=]
    if (remcom_in_buffer[1] == '\0') {
    ^
    kernel/debug/gdbstub.c:1036:3: note: here
    case 'C': /* Exception passing */
    ^~~~
    kernel/debug/gdbstub.c:1040:7: warning: this statement may fall through [-Wimplicit-fallthrough=]
    if (tmp == 0)
    ^
    kernel/debug/gdbstub.c:1043:3: note: here
    case 'c': /* Continue packet */
    ^~~~
    kernel/debug/gdbstub.c:1050:4: warning: this statement may fall through [-Wimplicit-fallthrough=]
    dbg_activate_sw_breakpoints();
    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    kernel/debug/gdbstub.c:1052:3: note: here
    default:
    ^~~~~~~

    Warning level 3 was used: -Wimplicit-fallthrough=3

    Notice that, in this particular case, the code comment is modified
    in accordance with what GCC is expecting to find.
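
    The annotation itself is just a comment placed where GCC's
    -Wimplicit-fallthrough=3 looks for it; a generic, made-up switch for
    illustration:

    switch (cmd) {
    case 'D':
            detach_requested = true;        /* made-up example body */
            /* fall through */              /* intentional: 'D' implies 'k' */
    case 'k':
            kill_requested = true;          /* made-up example body */
            break;
    default:
            break;
    }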

    This patch is part of the ongoing efforts to enable
    -Wimplicit-fallthrough.

    Signed-off-by: Gustavo A. R. Silva
    Acked-by: Jason Wessel
    Signed-off-by: Daniel Thompson

    Gustavo A. R. Silva
     

30 Dec, 2018

5 commits

    defcmd_in_progress is the state trace for command group processing
    (whether we are within a command group or not) and usable indicates
    whether a command set is valid (allocated/non-empty), so use a bool
    for these binary indicators.

    Signed-off-by: Nicholas Mc Guire
    Reviewed-by: Daniel Thompson
    Signed-off-by: Daniel Thompson

    Nicholas Mc Guire
     
    If you have a CPU that fails to round up and then run 'btc' you'll end
    up crashing in kdb because we dereferenced NULL. Let's add a check.
    It's wise to also set the task to NULL when leaving the debugger so
    that if we fail to round up on a later entry into the debugger we
    won't backtrace a stale task.
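
    The guard is conceptually just the following (sketch, with KDB_TSK()
    standing in for however kdb looks up the per-cpu task):

    struct task_struct *task = KDB_TSK(cpu);

    if (!task) {
            kdb_printf("No process for cpu %d\n", cpu);  /* didn't round up */
            return;
    }
    /* safe to back trace 'task' now */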

    Signed-off-by: Douglas Anderson
    Acked-by: Daniel Thompson
    Signed-off-by: Daniel Thompson

    Douglas Anderson
     
  • If we're using the default implementation of kgdb_roundup_cpus() that
    uses smp_call_function_single_async() we can end up hanging
    kgdb_roundup_cpus() if we try to round up a CPU that failed to round
    up before.

    Specifically smp_call_function_single_async() will try to wait on the
    csd lock for the CPU that we're trying to round up. If the previous
    round up never finished then that lock could still be held and we'll
    just sit there hanging.

    There's not a lot of use trying to round up a CPU that failed to round
    up before. Let's keep a flag that indicates whether the CPU started but
    didn't finish rounding up. If we see that flag set then we'll skip the
    next round up.
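
    A sketch of the flag handling described above (the field name follows
    the commit text; the surrounding code is illustrative):

    /* When asking CPUs to round up: */
    if (kgdb_info[cpu].rounding_up)
            continue;       /* previous attempt never finished: skip it,
                             * its csd may still be locked */
    kgdb_info[cpu].rounding_up = true;
    smp_call_function_single_async(cpu, csd);

    /* In kgdb_nmicallback(), which only runs once the csd is unlocked: */
    kgdb_info[cpu].rounding_up = false;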

    In general we have a few goals here:
    - We never want to end up calling smp_call_function_single_async()
    when the csd is still locked. This is accomplished because
    flush_smp_call_function_queue() unlocks the csd _before_ invoking
    the callback. That means that when kgdb_nmicallback() runs we know
    for sure that the csd is no longer locked. Thus when we set
    "rounding_up = false" we know for sure that the csd is unlocked.
    - If there are no timeouts rounding up we should never skip a round
    up.

    NOTE #1: In general trying to continue running after failing to round
    up CPUs doesn't appear to be supported in the debugger. When I
    simulate this I find that kdb reports "Catastrophic error detected"
    when I try to continue. I can overrule and continue anyway, but it
    should be noted that we may be entering the land of dragons here.
    Possibly the "Catastrophic error detected" was added _because_ of the
    future failure to round up, but even so this is an area of the code
    that hasn't been strongly tested.

    NOTE #2: I did a bit of testing before and after this change. I
    introduced a 10 second hang in the kernel while holding a spinlock
    that I could invoke on a certain CPU with "taskset -c 3 cat /sys/...".

    Before this change if I did:
    - Invoke hang
    - Enter debugger
    - g (which warns about Catastrophic error, g again to go anyway)
    - g
    - Enter debugger

    ...I'd hang the rest of the 10 seconds without getting a debugger
    prompt. After this change I end up in the debugger the 2nd time after
    only 1 second with the standard warning about 'Timed out waiting for
    secondary CPUs.'

    I'll also note that once the CPU finished waiting I could actually
    debug it (aka "btc" worked).

    I won't promise that everything works perfectly if the errant CPU
    comes back at just the wrong time (like as we're entering or exiting
    the debugger) but it certainly seems like an improvement.

    NOTE #3: setting 'kgdb_info[cpu].rounding_up = false' is in
    kgdb_nmicallback() instead of kgdb_call_nmi_hook() because some
    implementations override kgdb_call_nmi_hook(). It shouldn't hurt to
    have it in kgdb_nmicallback() in any case.

    NOTE #4: this logic is really only needed because there is no API call
    like "smp_try_call_function_single_async()" or "smp_csd_is_locked()".
    If such an API existed then we'd use it instead, but it seemed a bit
    much to add an API like this just for kgdb.

    Signed-off-by: Douglas Anderson
    Acked-by: Daniel Thompson
    Signed-off-by: Daniel Thompson

    Douglas Anderson
     
  • When I had lockdep turned on and dropped into kgdb I got a nice splat
    on my system. Specifically it hit:
    DEBUG_LOCKS_WARN_ON(current->hardirq_context)

    Specifically it looked like this:
    sysrq: SysRq : DEBUG
    ------------[ cut here ]------------
    DEBUG_LOCKS_WARN_ON(current->hardirq_context)
    WARNING: CPU: 0 PID: 0 at .../kernel/locking/lockdep.c:2875 lockdep_hardirqs_on+0xf0/0x160
    CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.19.0 #27
    pstate: 604003c9 (nZCv DAIF +PAN -UAO)
    pc : lockdep_hardirqs_on+0xf0/0x160
    ...
    Call trace:
    lockdep_hardirqs_on+0xf0/0x160
    trace_hardirqs_on+0x188/0x1ac
    kgdb_roundup_cpus+0x14/0x3c
    kgdb_cpu_enter+0x53c/0x5cc
    kgdb_handle_exception+0x180/0x1d4
    kgdb_compiled_brk_fn+0x30/0x3c
    brk_handler+0x134/0x178
    do_debug_exception+0xfc/0x178
    el1_dbg+0x18/0x78
    kgdb_breakpoint+0x34/0x58
    sysrq_handle_dbg+0x54/0x5c
    __handle_sysrq+0x114/0x21c
    handle_sysrq+0x30/0x3c
    qcom_geni_serial_isr+0x2dc/0x30c
    ...
    ...
    irq event stamp: ...45
    hardirqs last enabled at (...44): [...] __do_softirq+0xd8/0x4e4
    hardirqs last disabled at (...45): [...] el1_irq+0x74/0x130
    softirqs last enabled at (...42): [...] _local_bh_enable+0x2c/0x34
    softirqs last disabled at (...43): [...] irq_exit+0xa8/0x100
    ---[ end trace adf21f830c46e638 ]---

    Looking closely at it, it seems like a really bad idea to be calling
    local_irq_enable() in kgdb_roundup_cpus(). If nothing else that seems
    like it could violate spinlock semantics and cause a deadlock.

    Instead, let's use a private csd alongside
    smp_call_function_single_async() to round up the other CPUs. Using
    smp_call_function_single_async() doesn't require interrupts to be
    enabled so we can remove the offending bit of code.

    In order to avoid duplicating this across all the architectures that
    use the default kgdb_roundup_cpus(), we'll add a "weak" implementation
    to debug_core.c.
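
    The default implementation described here looks roughly like the
    following simplified sketch (the per-cpu variable name is
    illustrative):

    static DEFINE_PER_CPU(call_single_data_t, kgdb_roundup_csd);

    void __weak kgdb_call_nmi_hook(void *ignored)
    {
            kgdb_nmicallback(raw_smp_processor_id(), get_irq_regs());
    }

    void __weak kgdb_roundup_cpus(void)
    {
            call_single_data_t *csd;
            int this_cpu = raw_smp_processor_id();
            int cpu;

            for_each_online_cpu(cpu) {
                    if (cpu == this_cpu)
                            continue;       /* the master is already in */
                    csd = &per_cpu(kgdb_roundup_csd, cpu);
                    csd->func = kgdb_call_nmi_hook;
                    smp_call_function_single_async(cpu, csd);
            }
    }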

    Looking at all the people who previously had copies of this code,
    there were a few variants. I've attempted to keep the variants
    working like they used to. Specifically:
    * For arch/arc we passed NULL to kgdb_nmicallback() instead of
    get_irq_regs().
    * For arch/mips there was a bit of extra code around
    kgdb_nmicallback()

    NOTE: In this patch we will still get into trouble if we try to round
    up a CPU that failed to round up before. We'll try to round it up
    again and potentially hang when we try to grab the csd lock. That's
    not new behavior but we'll still try to do better in a future patch.

    Suggested-by: Daniel Thompson
    Signed-off-by: Douglas Anderson
    Cc: Vineet Gupta
    Cc: Russell King
    Cc: Catalin Marinas
    Cc: Will Deacon
    Cc: Richard Kuo
    Cc: Ralf Baechle
    Cc: Paul Burton
    Cc: James Hogan
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: Michael Ellerman
    Cc: Yoshinori Sato
    Cc: Rich Felker
    Cc: "David S. Miller"
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: Borislav Petkov
    Cc: "H. Peter Anvin"
    Acked-by: Will Deacon
    Signed-off-by: Daniel Thompson

    Douglas Anderson
     
  • The function kgdb_roundup_cpus() was passed a parameter that was
    documented as:

    > the flags that will be used when restoring the interrupts. There is
    > local_irq_save() call before kgdb_roundup_cpus().

    Nobody used those flags. Anyone who wanted to temporarily turn on
    interrupts just did local_irq_enable() and local_irq_disable() without
    looking at them. So we can definitely remove the flags.
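
    The resulting prototype change is simply (sketch):

    /* was: extern void kgdb_roundup_cpus(unsigned long flags); */
    extern void kgdb_roundup_cpus(void);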

    Signed-off-by: Douglas Anderson
    Cc: Vineet Gupta
    Cc: Russell King
    Cc: Catalin Marinas
    Cc: Will Deacon
    Cc: Richard Kuo
    Cc: Ralf Baechle
    Cc: Paul Burton
    Cc: James Hogan
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: Michael Ellerman
    Cc: Yoshinori Sato
    Cc: Rich Felker
    Cc: "David S. Miller"
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: Borislav Petkov
    Cc: "H. Peter Anvin"
    Acked-by: Will Deacon
    Signed-off-by: Daniel Thompson

    Douglas Anderson
     

14 Nov, 2018

6 commits

  • In preparation to enabling -Wimplicit-fallthrough, mark switch cases
    where we are expecting to fall through.

    Notice that in this particular case, I replaced the code comments with
    a proper "fall through" annotation, which is what GCC is expecting
    to find.

    Signed-off-by: Gustavo A. R. Silva
    Reviewed-by: Daniel Thompson
    Signed-off-by: Daniel Thompson

    Gustavo A. R. Silva
     
  • In preparation to enabling -Wimplicit-fallthrough, mark switch cases
    where we are expecting to fall through.

    Notice that in this particular case, I replaced the code comments with
    a proper "fall through" annotation, which is what GCC is expecting
    to find.

    Signed-off-by: Gustavo A. R. Silva
    Reviewed-by: Daniel Thompson
    Signed-off-by: Daniel Thompson

    Gustavo A. R. Silva
     
    Replace the whole switch statement with a for loop. This makes the
    code clearer and easier to read.

    This also addresses the following Coverity warnings:

    Addresses-Coverity-ID: 115090 ("Missing break in switch")
    Addresses-Coverity-ID: 115091 ("Missing break in switch")
    Addresses-Coverity-ID: 114700 ("Missing break in switch")

    Suggested-by: Daniel Thompson
    Signed-off-by: Gustavo A. R. Silva
    Reviewed-by: Daniel Thompson
    [daniel.thompson@linaro.org: Tiny grammar change in description]
    Signed-off-by: Daniel Thompson

    Gustavo A. R. Silva
     
  • gcc 8.1.0 warns with:

    kernel/debug/kdb/kdb_support.c: In function ‘kallsyms_symbol_next’:
    kernel/debug/kdb/kdb_support.c:239:4: warning: ‘strncpy’ specified bound depends on the length of the source argument [-Wstringop-overflow=]
    strncpy(prefix_name, name, strlen(name)+1);
    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    kernel/debug/kdb/kdb_support.c:239:31: note: length computed here

    Use strscpy() with the destination buffer size, and use ellipses when
    displaying truncated symbols.

    v2: Use strscpy()

    Signed-off-by: Prarit Bhargava
    Cc: Jonathan Toppins
    Cc: Jason Wessel
    Cc: Daniel Thompson
    Cc: kgdb-bugreport@lists.sourceforge.net
    Reviewed-by: Daniel Thompson
    Signed-off-by: Daniel Thompson

    Prarit Bhargava
     
    Since commit ad67b74d2469 ("printk: hash addresses printed with %p"),
    all pointers printed with %p are printed with hashed addresses
    instead of real addresses in order to avoid leaking addresses in
    dmesg and syslog. But this applies to kdb too, which is unfortunate:

    Entering kdb (current=0x(ptrval), pid 329) due to Keyboard Entry
    kdb> ps
    15 sleeping system daemon (state M) processes suppressed,
    use 'ps A' to see all.
    Task Addr Pid Parent [*] cpu State Thread Command
    0x(ptrval) 329 328 1 0 R 0x(ptrval) *sh

    0x(ptrval) 1 0 0 0 S 0x(ptrval) init
    0x(ptrval) 3 2 0 0 D 0x(ptrval) rcu_gp
    0x(ptrval) 4 2 0 0 D 0x(ptrval) rcu_par_gp
    0x(ptrval) 5 2 0 0 D 0x(ptrval) kworker/0:0
    0x(ptrval) 6 2 0 0 D 0x(ptrval) kworker/0:0H
    0x(ptrval) 7 2 0 0 D 0x(ptrval) kworker/u2:0
    0x(ptrval) 8 2 0 0 D 0x(ptrval) mm_percpu_wq
    0x(ptrval) 10 2 0 0 D 0x(ptrval) rcu_preempt

    The whole purpose of kdb is to debug, and for debugging real addresses
    need to be known. In addition, data displayed by kdb doesn't go into
    dmesg.

    This patch replaces all %p by %px in kdb in order to display real
    addresses.
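
    The change is mechanical; an illustrative line (not a specific hunk
    from the patch):

    /* %px prints the raw pointer value; %p would print a hashed one */
    kdb_printf("0x%px %8d\n", (void *)task, task->pid);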

    Fixes: ad67b74d2469 ("printk: hash addresses printed with %p")
    Cc:
    Signed-off-by: Christophe Leroy
    Signed-off-by: Daniel Thompson

    Christophe Leroy
     
  • On a powerpc 8xx, 'btc' fails as follows:

    Entering kdb (current=0x(ptrval), pid 282) due to Keyboard Entry
    kdb> btc
    btc: cpu status: Currently on cpu 0
    Available cpus: 0
    kdb_getarea: Bad address 0x0

    When booting the kernel with 'debug_boot_weak_hash', it fails as well:

    Entering kdb (current=0xba99ad80, pid 284) due to Keyboard Entry
    kdb> btc
    btc: cpu status: Currently on cpu 0
    Available cpus: 0
    kdb_getarea: Bad address 0xba99ad80

    On other platforms, Oopses have been observed too, see
    https://github.com/linuxppc/linux/issues/139

    This is due to btc calling 'btt' with a %p-formatted pointer as its
    argument.

    This patch replaces %p by %px to get the real pointer value, as
    expected by 'btt'.

    Fixes: ad67b74d2469 ("printk: hash addresses printed with %p")
    Cc:
    Signed-off-by: Christophe Leroy
    Reviewed-by: Daniel Thompson
    Signed-off-by: Daniel Thompson

    Christophe Leroy
     

27 Oct, 2018

1 commit

  • Several places that mess with fixed-point load averages carry their
    own definitions of these functions/macros. Provide an official version.
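
    For reference, the fixed-point helpers being consolidated are the
    classic ones of this shape (reproduced from memory as a sketch, with
    FSHIFT = 11 bits of fractional precision):

    #define FSHIFT          11                      /* fractional bits */
    #define FIXED_1         (1 << FSHIFT)           /* 1.0 in fixed point */
    #define LOAD_INT(x)     ((x) >> FSHIFT)
    #define LOAD_FRAC(x)    LOAD_INT(((x) & (FIXED_1 - 1)) * 100)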

    [akpm@linux-foundation.org: fix missed conversion in block/blk-iolatency.c]
    Link: http://lkml.kernel.org/r/20180828172258.3185-5-hannes@cmpxchg.org
    Signed-off-by: Johannes Weiner
    Acked-by: Peter Zijlstra (Intel)
    Tested-by: Suren Baghdasaryan
    Tested-by: Daniel Drake
    Cc: Christopher Lameter
    Cc: Ingo Molnar
    Cc: Johannes Weiner
    Cc: Mike Galbraith
    Cc: Peter Enderborg
    Cc: Randy Dunlap
    Cc: Shakeel Butt
    Cc: Tejun Heo
    Cc: Vinayak Menon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     

13 Jun, 2018

2 commits

  • The kzalloc() function has a 2-factor argument form, kcalloc(). This
    patch replaces cases of:

    kzalloc(a * b, gfp)

    with:
    kcalloc(a, b, gfp)

    as well as handling cases of:

    kzalloc(a * b * c, gfp)

    with:

    kzalloc(array3_size(a, b, c), gfp)

    as it's slightly less ugly than:

    kcalloc(array_size(a, b), c, gfp)

    This does, however, attempt to ignore constant size factors like:

    kzalloc(4 * 1024, gfp)

    though any constants defined via macros get caught up in the conversion.

    Any factors with a sizeof() of "unsigned char", "char", and "u8" were
    dropped, since they're redundant.

    The Coccinelle script used for this was:

    // Fix redundant parens around sizeof().
    @@
    type TYPE;
    expression THING, E;
    @@

    (
    kzalloc(
    - (sizeof(TYPE)) * E
    + sizeof(TYPE) * E
    , ...)
    |
    kzalloc(
    - (sizeof(THING)) * E
    + sizeof(THING) * E
    , ...)
    )

    // Drop single-byte sizes and redundant parens.
    @@
    expression COUNT;
    typedef u8;
    typedef __u8;
    @@

    (
    kzalloc(
    - sizeof(u8) * (COUNT)
    + COUNT
    , ...)
    |
    kzalloc(
    - sizeof(__u8) * (COUNT)
    + COUNT
    , ...)
    |
    kzalloc(
    - sizeof(char) * (COUNT)
    + COUNT
    , ...)
    |
    kzalloc(
    - sizeof(unsigned char) * (COUNT)
    + COUNT
    , ...)
    |
    kzalloc(
    - sizeof(u8) * COUNT
    + COUNT
    , ...)
    |
    kzalloc(
    - sizeof(__u8) * COUNT
    + COUNT
    , ...)
    |
    kzalloc(
    - sizeof(char) * COUNT
    + COUNT
    , ...)
    |
    kzalloc(
    - sizeof(unsigned char) * COUNT
    + COUNT
    , ...)
    )

    // 2-factor product with sizeof(type/expression) and identifier or constant.
    @@
    type TYPE;
    expression THING;
    identifier COUNT_ID;
    constant COUNT_CONST;
    @@

    (
    - kzalloc
    + kcalloc
    (
    - sizeof(TYPE) * (COUNT_ID)
    + COUNT_ID, sizeof(TYPE)
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - sizeof(TYPE) * COUNT_ID
    + COUNT_ID, sizeof(TYPE)
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - sizeof(TYPE) * (COUNT_CONST)
    + COUNT_CONST, sizeof(TYPE)
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - sizeof(TYPE) * COUNT_CONST
    + COUNT_CONST, sizeof(TYPE)
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - sizeof(THING) * (COUNT_ID)
    + COUNT_ID, sizeof(THING)
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - sizeof(THING) * COUNT_ID
    + COUNT_ID, sizeof(THING)
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - sizeof(THING) * (COUNT_CONST)
    + COUNT_CONST, sizeof(THING)
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - sizeof(THING) * COUNT_CONST
    + COUNT_CONST, sizeof(THING)
    , ...)
    )

    // 2-factor product, only identifiers.
    @@
    identifier SIZE, COUNT;
    @@

    - kzalloc
    + kcalloc
    (
    - SIZE * COUNT
    + COUNT, SIZE
    , ...)

    // 3-factor product with 1 sizeof(type) or sizeof(expression), with
    // redundant parens removed.
    @@
    expression THING;
    identifier STRIDE, COUNT;
    type TYPE;
    @@

    (
    kzalloc(
    - sizeof(TYPE) * (COUNT) * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    kzalloc(
    - sizeof(TYPE) * (COUNT) * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    kzalloc(
    - sizeof(TYPE) * COUNT * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    kzalloc(
    - sizeof(TYPE) * COUNT * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    kzalloc(
    - sizeof(THING) * (COUNT) * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    |
    kzalloc(
    - sizeof(THING) * (COUNT) * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    |
    kzalloc(
    - sizeof(THING) * COUNT * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    |
    kzalloc(
    - sizeof(THING) * COUNT * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    )

    // 3-factor product with 2 sizeof(variable), with redundant parens removed.
    @@
    expression THING1, THING2;
    identifier COUNT;
    type TYPE1, TYPE2;
    @@

    (
    kzalloc(
    - sizeof(TYPE1) * sizeof(TYPE2) * COUNT
    + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
    , ...)
    |
    kzalloc(
    - sizeof(TYPE1) * sizeof(THING2) * (COUNT)
    + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
    , ...)
    |
    kzalloc(
    - sizeof(THING1) * sizeof(THING2) * COUNT
    + array3_size(COUNT, sizeof(THING1), sizeof(THING2))
    , ...)
    |
    kzalloc(
    - sizeof(THING1) * sizeof(THING2) * (COUNT)
    + array3_size(COUNT, sizeof(THING1), sizeof(THING2))
    , ...)
    |
    kzalloc(
    - sizeof(TYPE1) * sizeof(THING2) * COUNT
    + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
    , ...)
    |
    kzalloc(
    - sizeof(TYPE1) * sizeof(THING2) * (COUNT)
    + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
    , ...)
    )

    // 3-factor product, only identifiers, with redundant parens removed.
    @@
    identifier STRIDE, SIZE, COUNT;
    @@

    (
    kzalloc(
    - (COUNT) * STRIDE * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kzalloc(
    - COUNT * (STRIDE) * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kzalloc(
    - COUNT * STRIDE * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kzalloc(
    - (COUNT) * (STRIDE) * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kzalloc(
    - COUNT * (STRIDE) * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kzalloc(
    - (COUNT) * STRIDE * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kzalloc(
    - (COUNT) * (STRIDE) * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kzalloc(
    - COUNT * STRIDE * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    )

    // Any remaining multi-factor products, first at least 3-factor products,
    // when they're not all constants...
    @@
    expression E1, E2, E3;
    constant C1, C2, C3;
    @@

    (
    kzalloc(C1 * C2 * C3, ...)
    |
    kzalloc(
    - (E1) * E2 * E3
    + array3_size(E1, E2, E3)
    , ...)
    |
    kzalloc(
    - (E1) * (E2) * E3
    + array3_size(E1, E2, E3)
    , ...)
    |
    kzalloc(
    - (E1) * (E2) * (E3)
    + array3_size(E1, E2, E3)
    , ...)
    |
    kzalloc(
    - E1 * E2 * E3
    + array3_size(E1, E2, E3)
    , ...)
    )

    // And then all remaining 2 factors products when they're not all constants,
    // keeping sizeof() as the second factor argument.
    @@
    expression THING, E1, E2;
    type TYPE;
    constant C1, C2, C3;
    @@

    (
    kzalloc(sizeof(THING) * C2, ...)
    |
    kzalloc(sizeof(TYPE) * C2, ...)
    |
    kzalloc(C1 * C2 * C3, ...)
    |
    kzalloc(C1 * C2, ...)
    |
    - kzalloc
    + kcalloc
    (
    - sizeof(TYPE) * (E2)
    + E2, sizeof(TYPE)
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - sizeof(TYPE) * E2
    + E2, sizeof(TYPE)
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - sizeof(THING) * (E2)
    + E2, sizeof(THING)
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - sizeof(THING) * E2
    + E2, sizeof(THING)
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - (E1) * E2
    + E1, E2
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - (E1) * (E2)
    + E1, E2
    , ...)
    |
    - kzalloc
    + kcalloc
    (
    - E1 * E2
    + E1, E2
    , ...)
    )

    Signed-off-by: Kees Cook

    Kees Cook
     
  • The kmalloc() function has a 2-factor argument form, kmalloc_array(). This
    patch replaces cases of:

    kmalloc(a * b, gfp)

    with:
    kmalloc_array(a, b, gfp)

    as well as handling cases of:

    kmalloc(a * b * c, gfp)

    with:

    kmalloc(array3_size(a, b, c), gfp)

    as it's slightly less ugly than:

    kmalloc_array(array_size(a, b), c, gfp)

    This does, however, attempt to ignore constant size factors like:

    kmalloc(4 * 1024, gfp)

    though any constants defined via macros get caught up in the conversion.

    Any factors with a sizeof() of "unsigned char", "char", and "u8" were
    dropped, since they're redundant.

    The tools/ directory was manually excluded, since it has its own
    implementation of kmalloc().

    The Coccinelle script used for this was:

    // Fix redundant parens around sizeof().
    @@
    type TYPE;
    expression THING, E;
    @@

    (
    kmalloc(
    - (sizeof(TYPE)) * E
    + sizeof(TYPE) * E
    , ...)
    |
    kmalloc(
    - (sizeof(THING)) * E
    + sizeof(THING) * E
    , ...)
    )

    // Drop single-byte sizes and redundant parens.
    @@
    expression COUNT;
    typedef u8;
    typedef __u8;
    @@

    (
    kmalloc(
    - sizeof(u8) * (COUNT)
    + COUNT
    , ...)
    |
    kmalloc(
    - sizeof(__u8) * (COUNT)
    + COUNT
    , ...)
    |
    kmalloc(
    - sizeof(char) * (COUNT)
    + COUNT
    , ...)
    |
    kmalloc(
    - sizeof(unsigned char) * (COUNT)
    + COUNT
    , ...)
    |
    kmalloc(
    - sizeof(u8) * COUNT
    + COUNT
    , ...)
    |
    kmalloc(
    - sizeof(__u8) * COUNT
    + COUNT
    , ...)
    |
    kmalloc(
    - sizeof(char) * COUNT
    + COUNT
    , ...)
    |
    kmalloc(
    - sizeof(unsigned char) * COUNT
    + COUNT
    , ...)
    )

    // 2-factor product with sizeof(type/expression) and identifier or constant.
    @@
    type TYPE;
    expression THING;
    identifier COUNT_ID;
    constant COUNT_CONST;
    @@

    (
    - kmalloc
    + kmalloc_array
    (
    - sizeof(TYPE) * (COUNT_ID)
    + COUNT_ID, sizeof(TYPE)
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - sizeof(TYPE) * COUNT_ID
    + COUNT_ID, sizeof(TYPE)
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - sizeof(TYPE) * (COUNT_CONST)
    + COUNT_CONST, sizeof(TYPE)
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - sizeof(TYPE) * COUNT_CONST
    + COUNT_CONST, sizeof(TYPE)
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - sizeof(THING) * (COUNT_ID)
    + COUNT_ID, sizeof(THING)
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - sizeof(THING) * COUNT_ID
    + COUNT_ID, sizeof(THING)
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - sizeof(THING) * (COUNT_CONST)
    + COUNT_CONST, sizeof(THING)
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - sizeof(THING) * COUNT_CONST
    + COUNT_CONST, sizeof(THING)
    , ...)
    )

    // 2-factor product, only identifiers.
    @@
    identifier SIZE, COUNT;
    @@

    - kmalloc
    + kmalloc_array
    (
    - SIZE * COUNT
    + COUNT, SIZE
    , ...)

    // 3-factor product with 1 sizeof(type) or sizeof(expression), with
    // redundant parens removed.
    @@
    expression THING;
    identifier STRIDE, COUNT;
    type TYPE;
    @@

    (
    kmalloc(
    - sizeof(TYPE) * (COUNT) * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    kmalloc(
    - sizeof(TYPE) * (COUNT) * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    kmalloc(
    - sizeof(TYPE) * COUNT * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    kmalloc(
    - sizeof(TYPE) * COUNT * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(TYPE))
    , ...)
    |
    kmalloc(
    - sizeof(THING) * (COUNT) * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    |
    kmalloc(
    - sizeof(THING) * (COUNT) * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    |
    kmalloc(
    - sizeof(THING) * COUNT * (STRIDE)
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    |
    kmalloc(
    - sizeof(THING) * COUNT * STRIDE
    + array3_size(COUNT, STRIDE, sizeof(THING))
    , ...)
    )

    // 3-factor product with 2 sizeof(variable), with redundant parens removed.
    @@
    expression THING1, THING2;
    identifier COUNT;
    type TYPE1, TYPE2;
    @@

    (
    kmalloc(
    - sizeof(TYPE1) * sizeof(TYPE2) * COUNT
    + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
    , ...)
    |
    kmalloc(
    - sizeof(TYPE1) * sizeof(THING2) * (COUNT)
    + array3_size(COUNT, sizeof(TYPE1), sizeof(TYPE2))
    , ...)
    |
    kmalloc(
    - sizeof(THING1) * sizeof(THING2) * COUNT
    + array3_size(COUNT, sizeof(THING1), sizeof(THING2))
    , ...)
    |
    kmalloc(
    - sizeof(THING1) * sizeof(THING2) * (COUNT)
    + array3_size(COUNT, sizeof(THING1), sizeof(THING2))
    , ...)
    |
    kmalloc(
    - sizeof(TYPE1) * sizeof(THING2) * COUNT
    + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
    , ...)
    |
    kmalloc(
    - sizeof(TYPE1) * sizeof(THING2) * (COUNT)
    + array3_size(COUNT, sizeof(TYPE1), sizeof(THING2))
    , ...)
    )

    // 3-factor product, only identifiers, with redundant parens removed.
    @@
    identifier STRIDE, SIZE, COUNT;
    @@

    (
    kmalloc(
    - (COUNT) * STRIDE * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kmalloc(
    - COUNT * (STRIDE) * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kmalloc(
    - COUNT * STRIDE * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kmalloc(
    - (COUNT) * (STRIDE) * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kmalloc(
    - COUNT * (STRIDE) * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kmalloc(
    - (COUNT) * STRIDE * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kmalloc(
    - (COUNT) * (STRIDE) * (SIZE)
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    |
    kmalloc(
    - COUNT * STRIDE * SIZE
    + array3_size(COUNT, STRIDE, SIZE)
    , ...)
    )

    // Any remaining multi-factor products, first at least 3-factor products,
    // when they're not all constants...
    @@
    expression E1, E2, E3;
    constant C1, C2, C3;
    @@

    (
    kmalloc(C1 * C2 * C3, ...)
    |
    kmalloc(
    - (E1) * E2 * E3
    + array3_size(E1, E2, E3)
    , ...)
    |
    kmalloc(
    - (E1) * (E2) * E3
    + array3_size(E1, E2, E3)
    , ...)
    |
    kmalloc(
    - (E1) * (E2) * (E3)
    + array3_size(E1, E2, E3)
    , ...)
    |
    kmalloc(
    - E1 * E2 * E3
    + array3_size(E1, E2, E3)
    , ...)
    )

    // And then all remaining 2 factors products when they're not all constants,
    // keeping sizeof() as the second factor argument.
    @@
    expression THING, E1, E2;
    type TYPE;
    constant C1, C2, C3;
    @@

    (
    kmalloc(sizeof(THING) * C2, ...)
    |
    kmalloc(sizeof(TYPE) * C2, ...)
    |
    kmalloc(C1 * C2 * C3, ...)
    |
    kmalloc(C1 * C2, ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - sizeof(TYPE) * (E2)
    + E2, sizeof(TYPE)
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - sizeof(TYPE) * E2
    + E2, sizeof(TYPE)
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - sizeof(THING) * (E2)
    + E2, sizeof(THING)
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - sizeof(THING) * E2
    + E2, sizeof(THING)
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - (E1) * E2
    + E1, E2
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - (E1) * (E2)
    + E1, E2
    , ...)
    |
    - kmalloc
    + kmalloc_array
    (
    - E1 * E2
    + E1, E2
    , ...)
    )

    Signed-off-by: Kees Cook

    Kees Cook
     

13 Apr, 2018

1 commit

  • Pull kdb updates from Jason Wessel:

    - fix 2038 time access issues and new compiler warnings

    - minor regression test cleanup

    - formatting fixes for end user use of kdb

    * tag 'for_linus-4.16' of git://git.kernel.org/pub/scm/linux/kernel/git/jwessel/kgdb:
    kdb: use memmove instead of overlapping memcpy
    kdb: use ktime_get_mono_fast_ns() instead of ktime_get_ts()
    kdb: bl: don't use tab character in output
    kdb: drop newline in unknown command output
    kdb: make "mdr" command repeat
    kdb: use __ktime_get_real_seconds instead of __current_kernel_time
    misc: kgdbts: Display progress of asynchronous tests

    Linus Torvalds
     

05 Feb, 2018

1 commit

  • gcc discovered that the memcpy() arguments in kdbnearsym() overlap, so
    we should really use memmove(), which is defined to handle that correctly:

    In function 'memcpy',
    inlined from 'kdbnearsym' at /git/arm-soc/kernel/debug/kdb/kdb_support.c:132:4:
    /git/arm-soc/include/linux/string.h:353:9: error: '__builtin_memcpy' accessing 792 bytes at offsets 0 and 8 overlaps 784 bytes at offset 8 [-Werror=restrict]
    return __builtin_memcpy(p, q, size);

    Signed-off-by: Arnd Bergmann
    Signed-off-by: Jason Wessel

    Arnd Bergmann
     

01 Feb, 2018

1 commit

    The kdb code will print the monotonic time by ktime_get_ts(), but
    ktime_get_ts() is protected by a sequence lock, which introduces a
    deadlock risk if the lock was already held in the context from which
    we entered the debugger.

    Thus we can use ktime_get_mono_fast_ns() to get the monotonic time,
    which is NMI-safe. Moreover we can remove the 'struct timespec' usage,
    which is not y2038 safe.
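
    A sketch of the replacement (the surrounding kdb "summary" code is
    omitted and simplified):

    /* NMI-safe, lock-free monotonic time in nanoseconds; no struct
     * timespec needed, so this is also y2038-clean. */
    u64 uptime_s = div_u64(ktime_get_mono_fast_ns(), NSEC_PER_SEC);

    kdb_printf("uptime     %02llu:%02llu\n",
               div_u64(uptime_s, 3600), div_u64(uptime_s, 60) % 60);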

    Signed-off-by: Baolin Wang
    Reviewed-by: Daniel Thompson
    Reviewed-by: Arnd Bergmann
    Signed-off-by: Jason Wessel

    Baolin Wang
     

25 Jan, 2018

3 commits

  • The "bl" (list breakpoints) command prints a '\t' (tab) character
    in its output, but on a console (video device), that just prints
    some odd graphics character. Instead of printing a tab character,
    just align the output with spaces.

    Signed-off-by: Randy Dunlap
    Cc: Daniel Thompson
    Cc: Jason Wessel
    Cc: kgdb-bugreport@lists.sourceforge.net
    Signed-off-by: Jason Wessel

    Randy Dunlap
     
  • When an unknown command is entered, kdb prints "Unknown kdb command:"
    and then the unknown text, including the newline character. This
    causes the ending single-quote mark to be printed on the next line
    by itself, so just change the ending newline character to a null
    character (end of string) so that it won't be "printed."

    Signed-off-by: Randy Dunlap
    Cc: Daniel Thompson
    Cc: Jason Wessel
    Cc: kgdb-bugreport@lists.sourceforge.net
    Signed-off-by: Jason Wessel

    Randy Dunlap
     
  • The "mdr" command should repeat (continue) when only Enter/Return
    is pressed, so make it do so.

    Signed-off-by: Randy Dunlap
    Cc: Daniel Thompson
    Cc: Jason Wessel
    Cc: kgdb-bugreport@lists.sourceforge.net
    Signed-off-by: Jason Wessel

    Randy Dunlap