13 Aug, 2020

1 commit

  • Fix sparse build warnings:

    kernel/kcov.c:99:1: warning:
    symbol '__pcpu_scope_kcov_percpu_data' was not declared. Should it be static?
    kernel/kcov.c:778:6: warning:
    symbol 'kcov_remote_softirq_start' was not declared. Should it be static?
    kernel/kcov.c:795:6: warning:
    symbol 'kcov_remote_softirq_stop' was not declared. Should it be static?

    Reported-by: Hulk Robot
    Signed-off-by: Wei Yongjun
    Signed-off-by: Andrew Morton
    Reviewed-by: Andrey Konovalov
    Link: http://lkml.kernel.org/r/20200702115501.73077-1-weiyongjun1@huawei.com
    Signed-off-by: Linus Torvalds

    Wei Yongjun
     

11 Jun, 2020

1 commit

  • kcov_remote_stop() should check that the corresponding kcov_remote_start()
    actually found the specified remote handle and started collecting
    coverage. This is done by checking the per thread kcov_softirq flag.

    A particular failure scenario where this was observed involved a softirq
    with a remote coverage collection section coming between check_kcov_mode()
    and the access to t->kcov_area in __sanitizer_cov_trace_pc(). In that
    softirq kcov_remote_start() bailed out after kcov_remote_find() check, but
    the matching kcov_remote_stop() didn't check if kcov_remote_start()
    succeeded, and overwrote per thread kcov parameters with invalid (zero)
    values.

    Fixes: 5ff3b30ab57d ("kcov: collect coverage from interrupts")
    Signed-off-by: Andrey Konovalov
    Signed-off-by: Andrew Morton
    Reviewed-by: Dmitry Vyukov
    Cc: Alexander Potapenko
    Cc: Marco Elver
    Cc: Tetsuo Handa
    Link: http://lkml.kernel.org/r/fcd1cd16eac1d2c01a66befd8ea4afc6f8d09833.1591576806.git.andreyknvl@google.com
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     

05 Jun, 2020

6 commits

  • This change extends kcov remote coverage support to allow collecting
    coverage from soft interrupts in addition to kernel background threads.

    To collect coverage from code that is executed in softirq context, a part
    of that code has to be annotated with kcov_remote_start/stop() in a
    similar way as how it is done for global kernel background threads. Then
    the handle used for the annotations has to be passed to the
    KCOV_REMOTE_ENABLE ioctl.
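
    The annotation pattern can be sketched as follows. This is a self-contained
    illustration, not kernel code: the softirq routine and the handle value are
    hypothetical, and the two kcov annotation functions (normally declared in
    <linux/kcov.h>) are reduced to stubs that merely record the active handle.

    ```c
    #include <stdint.h>

    /* Stubs standing in for the kernel's kcov_remote_start()/kcov_remote_stop();
     * here they only track which handle coverage is being attributed to. */
    static uint64_t active_handle;

    static void kcov_remote_start(uint64_t handle) { active_handle = handle; }
    static void kcov_remote_stop(void) { active_handle = 0; }

    /* Illustrative handle: subsystem id 0x01 in the top byte, instance 1.
     * Userspace must pass the same handle to the KCOV_REMOTE_ENABLE ioctl. */
    #define EXAMPLE_HANDLE 0x0100000000000001ULL

    /* Hypothetical softirq-context routine annotated for remote coverage. */
    static void example_softirq_work(void)
    {
        kcov_remote_start(EXAMPLE_HANDLE);
        /* ... coverage executed here is attributed to EXAMPLE_HANDLE ... */
        kcov_remote_stop();
    }

    static uint64_t current_handle(void) { return active_handle; }
    ```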

    Internally this patch adjusts the compiler-inserted
    __sanitizer_cov_trace_pc() callback to not bail out when called from softirq context.
    kcov_remote_start/stop() are updated to save/restore the current per task
    kcov state in a per-cpu area (in case the softirq came when the kernel was
    already collecting coverage in task context). Coverage from softirqs is
    collected into pre-allocated per-cpu areas, whose size is controlled by
    the new CONFIG_KCOV_IRQ_AREA_SIZE.

    [andreyknvl@google.com: turn current->kcov_softirq into unsigned int to fix objtool warning]
    Link: http://lkml.kernel.org/r/841c778aa3849c5cb8c3761f56b87ce653a88671.1585233617.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Signed-off-by: Andrew Morton
    Reviewed-by: Dmitry Vyukov
    Cc: Alan Stern
    Cc: Alexander Potapenko
    Cc: Andrey Konovalov
    Cc: Greg Kroah-Hartman
    Cc: Marco Elver
    Link: http://lkml.kernel.org/r/469bd385c431d050bc38a593296eff4baae50666.1584655448.git.andreyknvl@google.com
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • Currently kcov_remote_start() and kcov_remote_stop() check t->kcov to find
    out whether the coverage is already being collected by the current task.
    Use t->kcov_mode for that instead. This doesn't change the overall
    behavior in any way, but serves as a preparation for the following softirq
    coverage collection support patch.

    Signed-off-by: Andrey Konovalov
    Signed-off-by: Andrew Morton
    Reviewed-by: Dmitry Vyukov
    Cc: Alan Stern
    Cc: Alexander Potapenko
    Cc: Greg Kroah-Hartman
    Cc: Marco Elver
    Cc: Andrey Konovalov
    Link: http://lkml.kernel.org/r/f70377945d1d8e6e4916cbce871a12303d6186b4.1585233617.git.andreyknvl@google.com
    Link: http://lkml.kernel.org/r/ee1a1dec43059da5d7664c85c1addc89c4cd58de.1584655448.git.andreyknvl@google.com
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • Move t->kcov_sequence assignment before assigning t->kcov_mode for
    consistency.

    Signed-off-by: Andrey Konovalov
    Signed-off-by: Andrew Morton
    Reviewed-by: Dmitry Vyukov
    Cc: Alan Stern
    Cc: Alexander Potapenko
    Cc: Greg Kroah-Hartman
    Cc: Marco Elver
    Cc: Andrey Konovalov
    Link: http://lkml.kernel.org/r/5889efe35e0b300e69dba97216b1288d9c2428a8.1585233617.git.andreyknvl@google.com
    Link: http://lkml.kernel.org/r/f0283c676bab3335cb48bfe12d375a3da4719f59.1584655448.git.andreyknvl@google.com
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • Every time kcov_start/stop() is called, t->kcov is also assigned, so move
    the assignment into the functions.

    Signed-off-by: Andrey Konovalov
    Signed-off-by: Andrew Morton
    Reviewed-by: Dmitry Vyukov
    Cc: Alan Stern
    Cc: Alexander Potapenko
    Cc: Greg Kroah-Hartman
    Cc: Marco Elver
    Cc: Andrey Konovalov
    Link: http://lkml.kernel.org/r/6644839d3567df61ade3c4b246a46cacbe4f9e11.1585233617.git.andreyknvl@google.com
    Link: http://lkml.kernel.org/r/82625ef3ff878f0b585763cc31d09d9b08ca37d6.1584655448.git.andreyknvl@google.com
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • If vmalloc() fails in kcov_remote_start() we'll access remote->kcov
    without holding kcov_remote_lock, so remote might potentially be freed at
    that point. Cache kcov pointer in a local variable.

    Signed-off-by: Andrey Konovalov
    Signed-off-by: Andrew Morton
    Reviewed-by: Dmitry Vyukov
    Cc: Alan Stern
    Cc: Alexander Potapenko
    Cc: Greg Kroah-Hartman
    Cc: Marco Elver
    Cc: Andrey Konovalov
    Link: http://lkml.kernel.org/r/9d9134359725a965627b7e8f2652069f86f1d1fa.1585233617.git.andreyknvl@google.com
    Link: http://lkml.kernel.org/r/de0d3d30ff90776a2a509cc34c7c1c7521bda125.1584655448.git.andreyknvl@google.com
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • Patch series "kcov: collect coverage from usb soft interrupts", v4.

    This patchset extends kcov to allow collecting coverage from soft
    interrupts and then uses the new functionality to collect coverage from
    USB code.

    This has already allowed finding at least one new HID bug [1], which was
    recently fixed by Alan [2].

    [1] https://syzkaller.appspot.com/bug?extid=09ef48aa58261464b621
    [2] https://patchwork.kernel.org/patch/11283319/

    Any subsystem that uses softirqs (e.g. timers) can make use of this in
    the future. Looking at the recent syzbot reports, an obvious candidate
    is the networking subsystem [3, 4, 5 and many more].

    [3] https://syzkaller.appspot.com/bug?extid=522ab502c69badc66ab7
    [4] https://syzkaller.appspot.com/bug?extid=57f89d05946c53dbbb31
    [5] https://syzkaller.appspot.com/bug?extid=df358e65d9c1b9d3f5f4

    This patch (of 7):

    A previous commit left a lot of excessive debug messages; clean them up.

    Link: http://lkml.kernel.org/r/cover.1585233617.git.andreyknvl@google.com
    Link: http://lkml.kernel.org/r/ab5e2885ce674ba6e04368551e51eeb6a2c11baf.1585233617.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Signed-off-by: Andrew Morton
    Reviewed-by: Dmitry Vyukov
    Cc: Greg Kroah-Hartman
    Cc: Alan Stern
    Cc: Alexander Potapenko
    Cc: Marco Elver
    Cc: Andrey Konovalov
    Link: http://lkml.kernel.org/r/4a497134b2cf7a9d306d28e3dd2746f5446d1605.1584655448.git.andreyknvl@google.com
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     

08 May, 2020

1 commit


05 Dec, 2019

1 commit

  • Patch series "kcov: collect coverage from usb and vhost", v3.

    This patchset extends kcov to allow collecting coverage from background
    kernel threads. This extension requires custom annotations for each of
    the places where coverage collection is desired. This patchset
    implements this for hub events in the USB subsystem and for vhost
    workers. See the first patch description for details about the kcov
    extension. The other two patches apply this kcov extension to USB and
    vhost.

    Examples of other subsystems that might potentially benefit from this
    when custom annotations are added (the list is based on
    process_one_work() callers for bugs recently reported by syzbot):

    1. fs: writeback wb_workfn() worker,
    2. net: addrconf_dad_work()/addrconf_verify_work() workers,
    3. net: neigh_periodic_work() worker,
    4. net/p9: p9_write_work()/p9_read_work() workers,
    5. block: blk_mq_run_work_fn() worker.

    These patches have been used to enable coverage-guided USB fuzzing with
    syzkaller for the last few years, see the details here:

    https://github.com/google/syzkaller/blob/master/docs/linux/external_fuzzing_usb.md

    This patchset has been pushed to the public Linux kernel Gerrit
    instance:

    https://linux-review.googlesource.com/c/linux/kernel/git/torvalds/linux/+/1524

    This patch (of 3):

    Add background thread coverage collection ability to kcov.

    With KCOV_ENABLE coverage is collected only for syscalls that are issued
    from the current process. With KCOV_REMOTE_ENABLE it's possible to
    collect coverage for arbitrary parts of the kernel code, provided that
    those parts are annotated with kcov_remote_start()/kcov_remote_stop().

    This makes it possible to collect coverage from two types of kernel background
    threads: the global ones, that are spawned during kernel boot in a
    limited number of instances (e.g. one USB hub_event() worker thread is
    spawned per USB HCD); and the local ones, that are spawned when a user
    interacts with some kernel interface (e.g. vhost workers).

    To enable collecting coverage from a global background thread, a unique
    global handle must be assigned and passed to the corresponding
    kcov_remote_start() call. Then a userspace process can pass a list of
    such handles to the KCOV_REMOTE_ENABLE ioctl in the handles array field
    of the kcov_remote_arg struct. This will attach the kcov device in use
    to the code sections that are referenced by those handles.
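
    A sketch of how a userspace process might fill that argument. The struct is
    reproduced from the uapi header (include/uapi/linux/kcov.h) so the snippet
    stands alone, and all handle values are illustrative:

    ```c
    #include <stdint.h>
    #include <stdlib.h>

    /* Mirrors struct kcov_remote_arg as added by this patch. */
    struct kcov_remote_arg {
        uint32_t trace_mode;    /* KCOV_TRACE_PC or KCOV_TRACE_CMP */
        uint32_t area_size;     /* length of coverage buffer in words */
        uint32_t num_handles;   /* number of entries in handles[] */
        uint64_t common_handle; /* handle for locally spawned threads */
        uint64_t handles[];     /* global handles to attach to */
    };

    /* Build an argument enabling remote coverage for two global handles plus
     * a common handle for locally spawned threads (values illustrative).
     * In real usage this is then passed to ioctl(fd, KCOV_REMOTE_ENABLE, arg). */
    static struct kcov_remote_arg *build_arg(void)
    {
        struct kcov_remote_arg *arg =
            calloc(1, sizeof(*arg) + 2 * sizeof(uint64_t));

        arg->trace_mode = 0;           /* KCOV_TRACE_PC */
        arg->area_size = 64 << 10;     /* in 64-bit words */
        arg->num_handles = 2;
        arg->common_handle = 0x0000000000000042ULL;
        arg->handles[0] = 0x0100000000000001ULL;
        arg->handles[1] = 0x0100000000000002ULL;
        return arg;
    }
    ```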

    Since there might be many local background threads spawned from
    different userspace processes, we can't use a single global handle per
    annotation. Instead, the userspace process passes a non-zero handle
    through the common_handle field of the kcov_remote_arg struct. This
    common handle gets saved to the kcov_handle field in the current
    task_struct and needs to be passed to the newly spawned threads via
    custom annotations. Those threads should in turn be annotated with
    kcov_remote_start()/kcov_remote_stop().

    Internally kcov stores handles as u64 integers. The top byte of a
    handle is used to denote the id of a subsystem that this handle belongs
    to, and the lower 4 bytes are used to denote the id of a thread instance
    within that subsystem. A reserved value 0 is used as a subsystem id for
    common handles as they don't belong to a particular subsystem. The
    bytes 4-6 are currently reserved and must be zero. In the future the
    number of bytes used for the subsystem or handle ids might be increased.
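
    That layout maps to a small helper. This plain-C sketch mirrors the
    kcov_remote_handle() helper and masks from the uapi header, reproduced here
    so it stands alone:

    ```c
    #include <stdint.h>

    #define KCOV_SUBSYSTEM_COMMON (0x00ULL << 56)
    #define KCOV_SUBSYSTEM_USB    (0x01ULL << 56)
    #define KCOV_SUBSYSTEM_MASK   (0xffULL << 56)
    #define KCOV_INSTANCE_MASK    0xffffffffULL

    /* Compose a handle from a subsystem id (top byte) and an instance id
     * (lower 4 bytes); returns 0 for out-of-range inputs, leaving the
     * reserved middle bytes zero. */
    static uint64_t kcov_remote_handle(uint64_t subsys, uint64_t inst)
    {
        if ((subsys & ~KCOV_SUBSYSTEM_MASK) || (inst & ~KCOV_INSTANCE_MASK))
            return 0;
        return subsys | inst;
    }
    ```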

    When a particular userspace process collects coverage via a common
    handle, kcov will collect coverage for each code section that is
    annotated to use the common handle obtained as kcov_handle from the
    current task_struct. Non-common handles, in contrast, allow collecting
    coverage selectively from different subsystems.

    Link: http://lkml.kernel.org/r/e90e315426a384207edbec1d6aa89e43008e4caf.1572366574.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Greg Kroah-Hartman
    Cc: Alan Stern
    Cc: "Michael S. Tsirkin"
    Cc: Jason Wang
    Cc: Arnd Bergmann
    Cc: Steven Rostedt
    Cc: David Windsor
    Cc: Elena Reshetova
    Cc: Anders Roxell
    Cc: Alexander Potapenko
    Cc: Marco Elver
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     

08 Mar, 2019

2 commits

  • atomic_t variables are currently used to implement reference
    counters with the following properties:

    - counter is initialized to 1 using atomic_set()

    - a resource is freed upon counter reaching zero

    - once counter reaches zero, its further
    increments aren't allowed

    - counter schema uses basic atomic operations
    (set, inc, inc_not_zero, dec_and_test, etc.)

    Such atomic variables should be converted to a newly provided refcount_t
    type and API that prevents accidental counter overflows and underflows.
    This is important since overflows and underflows can lead to
    use-after-free situation and be exploitable.

    The variable kcov.refcount is used as pure reference counter. Convert
    it to refcount_t and fix up the operations.
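
    The property being gained can be modelled in a few lines. This is only a
    single-threaded toy of refcount_t's saturation behaviour, not the kernel's
    lockless implementation in lib/refcount.c:

    ```c
    #include <limits.h>

    /* Toy model: once the counter would wrap, or is incremented from zero,
     * it sticks at a saturation value instead of wrapping to 0/1, turning a
     * potential use-after-free into a bounded memory leak. */
    #define REFCOUNT_SATURATED UINT_MAX

    typedef struct { unsigned int refs; } refcount_t;

    static void refcount_set(refcount_t *r, unsigned int n) { r->refs = n; }

    static void refcount_inc(refcount_t *r)
    {
        if (r->refs == 0 || r->refs == UINT_MAX)
            r->refs = REFCOUNT_SATURATED; /* refuse to resurrect or wrap */
        else
            r->refs++;
    }

    /* Returns 1 when the caller dropped the last reference and may free. */
    static int refcount_dec_and_test(refcount_t *r)
    {
        if (r->refs == REFCOUNT_SATURATED)
            return 0; /* saturated objects are never freed */
        return --r->refs == 0;
    }
    ```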

    Important note for maintainers:

    Some functions from refcount_t API defined in lib/refcount.c have
    different memory ordering guarantees than their atomic counterparts.

    The full comparison can be seen in https://lkml.org/lkml/2017/11/15/57
    and it is hopefully soon in state to be merged to the documentation
    tree. Normally the differences should not matter since refcount_t
    provides enough guarantees to satisfy the refcounting use cases, but in
    some rare cases it might matter. Please double check that you don't
    have some undocumented memory guarantees for this variable usage.

    For the kcov.refcount it might make a difference in the following
    places:
    - kcov_put(): the decrement in refcount_dec_and_test() only provides
      RELEASE ordering and a control dependency on success, vs. the fully
      ordered atomic counterpart.

    Link: http://lkml.kernel.org/r/1547634429-772-1-git-send-email-elena.reshetova@intel.com
    Signed-off-by: Elena Reshetova
    Suggested-by: Kees Cook
    Reviewed-by: David Windsor
    Reviewed-by: Hans Liljestrand
    Reviewed-by: Dmitry Vyukov
    Reviewed-by: Andrea Parri
    Cc: Mark Rutland
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Elena Reshetova
     
  • When calling debugfs functions, there is no need to ever check the
    return value. The function can work or not, but the code logic should
    never do something different based on this.

    Link: http://lkml.kernel.org/r/20190122152151.16139-46-gregkh@linuxfoundation.org
    Signed-off-by: Greg Kroah-Hartman
    Cc: Andrey Ryabinin
    Cc: Mark Rutland
    Cc: Arnd Bergmann
    Cc: "Steven Rostedt (VMware)"
    Cc: Dmitry Vyukov
    Cc: Anders Roxell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Greg Kroah-Hartman
     

05 Jan, 2019

1 commit

  • Since __sanitizer_cov_trace_const_cmp4 is marked as notrace, the
    functions called from it shouldn't be traceable either. Because
    write_comp_data() isn't marked 'notrace', ftrace_graph_caller() gets
    called every time write_comp_data() is called. This is the backtrace
    from gdb:

    #0 ftrace_graph_caller () at ../arch/arm64/kernel/entry-ftrace.S:179
    #1 0xffffff8010201920 in ftrace_caller () at ../arch/arm64/kernel/entry-ftrace.S:151
    #2 0xffffff8010439714 in write_comp_data (type=5, arg1=0, arg2=0, ip=18446743524224276596) at ../kernel/kcov.c:116
    #3 0xffffff8010439894 in __sanitizer_cov_trace_const_cmp4 (arg1=, arg2=) at ../kernel/kcov.c:188
    #4 0xffffff8010201874 in prepare_ftrace_return (self_addr=18446743524226602768, parent=0xffffff801014b918, frame_pointer=18446743524223531344) at ./include/generated/atomic-instrumented.h:27
    #5 0xffffff801020194c in ftrace_graph_caller () at ../arch/arm64/kernel/entry-ftrace.S:182

    Rework so that write_comp_data() that are called from
    __sanitizer_cov_trace_*_cmp*() are marked as 'notrace'.

    Commit 903e8ff86753 ("kernel/kcov.c: mark funcs in __sanitizer_cov_trace_pc() as notrace")
    missed marking write_comp_data() as 'notrace'. When that patch was
    created, gcc-7 was used. In lib/Kconfig.debug:

    config KCOV_ENABLE_COMPARISONS
        depends on $(cc-option,-fsanitize-coverage=trace-cmp)

    That code path isn't hit with gcc-7, but it is with gcc-8.
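
    In kernel terms, 'notrace' expands to the compiler's
    no_instrument_function attribute. A minimal userspace illustration of the
    fix's shape, with write_comp_data() reduced to a recording stub (the real
    body writes comparison records into the kcov area):

    ```c
    #include <stdint.h>

    /* The kernel defines notrace via the no_instrument_function attribute,
     * which keeps ftrace's entry hooks out of the function. */
    #define notrace __attribute__((no_instrument_function))

    static uint64_t last_arg1, last_arg2;

    /* The fix: write_comp_data() itself carries notrace, so the
     * __sanitizer_cov_trace_*_cmp*() callbacks never re-enter the tracer. */
    static notrace void write_comp_data(uint64_t type, uint64_t arg1,
                                        uint64_t arg2, uint64_t ip)
    {
        (void)type; (void)ip;
        last_arg1 = arg1;
        last_arg2 = arg2;
    }

    notrace void __sanitizer_cov_trace_const_cmp4(uint32_t arg1, uint32_t arg2)
    {
        /* type 5 = KCOV_CMP_CONST | KCOV_CMP_SIZE(2), i.e. 4-byte const cmp */
        write_comp_data(5, arg1, arg2,
                        (uint64_t)(uintptr_t)__builtin_return_address(0));
    }
    ```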

    Link: http://lkml.kernel.org/r/20181206143011.23719-1-anders.roxell@linaro.org
    Signed-off-by: Anders Roxell
    Signed-off-by: Arnd Bergmann
    Co-developed-by: Arnd Bergmann
    Acked-by: Steven Rostedt (VMware)
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Anders Roxell
     

01 Dec, 2018

1 commit

  • Since __sanitizer_cov_trace_pc() is marked as notrace, function calls in
    __sanitizer_cov_trace_pc() shouldn't be traced either.
    ftrace_graph_caller() gets called for each function that isn't marked
    'notrace', like canonicalize_ip(). This is the call trace from a run:

    [ 139.644550] ftrace_graph_caller+0x1c/0x24
    [ 139.648352] canonicalize_ip+0x18/0x28
    [ 139.652313] __sanitizer_cov_trace_pc+0x14/0x58
    [ 139.656184] sched_clock+0x34/0x1e8
    [ 139.659759] trace_clock_local+0x40/0x88
    [ 139.663722] ftrace_push_return_trace+0x8c/0x1f0
    [ 139.667767] prepare_ftrace_return+0xa8/0x100
    [ 139.671709] ftrace_graph_caller+0x1c/0x24

    Rework so that check_kcov_mode() and canonicalize_ip() that are called
    from __sanitizer_cov_trace_pc() are also marked as notrace.

    Link: http://lkml.kernel.org/r/20181128081239.18317-1-anders.roxell@linaro.org
    Signed-off-by: Arnd Bergmann
    Signed-off-by: Anders Roxell
    Co-developed-by: Arnd Bergmann
    Acked-by: Steven Rostedt (VMware)
    Cc: Dmitry Vyukov
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Anders Roxell
     

15 Jun, 2018

3 commits

  • During a context switch, we first switch_mm() to the next task's mm,
    then switch_to() that new task. This means that vmalloc'd regions which
    had previously been faulted in can transiently disappear in the context
    of the prev task.

    Functions instrumented by KCOV may try to access a vmalloc'd kcov_area
    during this window, and as the fault handling code is instrumented, this
    results in a recursive fault.

    We must avoid accessing any kcov_area during this window. We can do so
    with a new flag in kcov_mode, set prior to switching the mm, and cleared
    once the new task is live. Since task_struct::kcov_mode isn't always a
    specific enum kcov_mode value, this is made an unsigned int.

    The manipulation is hidden behind kcov_{prepare,finish}_switch() helpers,
    which are empty for !CONFIG_KCOV kernels.

    The code uses macros because I can't use static inline functions
    without a circular include dependency between <sched.h> and <kcov.h>,
    since the definition of task_struct uses things defined in <kcov.h>.

    Link: http://lkml.kernel.org/r/20180504135535.53744-4-mark.rutland@arm.com
    Signed-off-by: Mark Rutland
    Acked-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mark Rutland
     
  • On many architectures the vmalloc area is lazily faulted in upon first
    access. This is problematic for KCOV, as __sanitizer_cov_trace_pc
    accesses the (vmalloc'd) kcov_area, and fault handling code may be
    instrumented. If an access to kcov_area faults, this will result in
    mutual recursion through the fault handling code and
    __sanitizer_cov_trace_pc(), eventually leading to stack corruption
    and/or overflow.

    We can avoid this by faulting in the kcov_area before
    __sanitizer_cov_trace_pc() is permitted to access it. Once it has been
    faulted in, it will remain present in the process page tables, and will
    not fault again.
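
    A userspace analogue of the fix: read one word per page so the whole
    buffer is populated before instrumented code may touch it. PAGE_SIZE is
    hardcoded to 4096 here purely for illustration; the kernel's
    kcov_fault_in_area() walks the vmalloc'd area the same way.

    ```c
    #include <stddef.h>
    #include <stdlib.h>

    #define PAGE_SIZE 4096UL /* assumption: 4 KiB pages, for illustration */

    /* Touch one unsigned long per page so every page of the area is
     * faulted in up front, before coverage writes are allowed. */
    static void fault_in_area(volatile unsigned long *area, size_t words)
    {
        size_t stride = PAGE_SIZE / sizeof(unsigned long);
        size_t off;

        for (off = 0; off < words; off += stride)
            (void)area[off];
    }
    ```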

    [akpm@linux-foundation.org: code cleanup]
    [akpm@linux-foundation.org: add comment explaining kcov_fault_in_area()]
    [akpm@linux-foundation.org: fancier code comment from Mark]
    Link: http://lkml.kernel.org/r/20180504135535.53744-3-mark.rutland@arm.com
    Signed-off-by: Mark Rutland
    Acked-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mark Rutland
     
  • Patch series "kcov: fix unexpected faults".

    These patches fix a few issues where KCOV code could trigger recursive
    faults, discovered while debugging a patch enabling KCOV for arch/arm:

    * On CONFIG_PREEMPT kernels, there's a small race window where
    __sanitizer_cov_trace_pc() can see a bogus kcov_area.

    * Lazy faulting of the vmalloc area can cause mutual recursion between
    fault handling code and __sanitizer_cov_trace_pc().

    * During the context switch, switching the mm can cause the kcov_area to
    be transiently unmapped.

    These are prerequisites for enabling KCOV on arm, but the issues
    themselves are generic -- we just happen to avoid them by chance
    rather than by design on x86-64 and arm64.

    This patch (of 3):

    For kernels built with CONFIG_PREEMPT, some C code may execute before or
    after the interrupt handler, while the hardirq count is zero. In these
    cases, in_task() can return true.

    A task can be interrupted in the middle of a KCOV_DISABLE ioctl while it
    resets the task's kcov data via kcov_task_init(). Instrumented code
    executed during this period will call __sanitizer_cov_trace_pc(), and as
    in_task() returns true, will inspect t->kcov_mode before trying to write
    to t->kcov_area.

    In kcov_task_init() we update t->kcov_{mode,area,size} with plain
    stores, which may be re-ordered, torn, etc. Thus
    __sanitizer_cov_trace_pc() may see bogus values for any of these
    fields, and may attempt to write to memory which is not mapped.

    Let's avoid this by using WRITE_ONCE() to set t->kcov_mode, with a
    barrier() to ensure this is ordered before we clear t->kcov_{area,size}.
    This ensures that any code executed while kcov_task_init() is preempted
    will either see valid values for t->kcov_{area,size}, or will see that
    t->kcov_mode is KCOV_MODE_DISABLED, and bail out without touching
    t->kcov_area.
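
    The shape of the fix, with WRITE_ONCE() and barrier() reduced to their
    usual volatile-store and compiler-barrier definitions, and the task fields
    abbreviated into a small stand-in struct:

    ```c
    #include <stddef.h>

    /* Minimal stand-ins for the kernel macros. */
    #define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
    #define barrier() __asm__ __volatile__("" ::: "memory")

    enum kcov_mode { KCOV_MODE_DISABLED = 0, KCOV_MODE_TRACE_PC = 2 };

    struct task_kcov {
        unsigned int kcov_mode;
        unsigned int kcov_size;
        void *kcov_area;
    };

    /* Disable the mode first, with ordering, so a preempting
     * __sanitizer_cov_trace_pc() either sees KCOV_MODE_DISABLED or
     * still-valid area/size values -- never a torn mixture. */
    static void kcov_task_init(struct task_kcov *t)
    {
        WRITE_ONCE(t->kcov_mode, KCOV_MODE_DISABLED);
        barrier();
        t->kcov_size = 0;
        t->kcov_area = NULL;
    }
    ```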

    Link: http://lkml.kernel.org/r/20180504135535.53744-2-mark.rutland@arm.com
    Signed-off-by: Mark Rutland
    Acked-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mark Rutland
     

07 Feb, 2018

1 commit

  • Currently KCOV_ENABLE does not check if the current task is already
    associated with another kcov descriptor. As a result it is possible
    to associate a single task with more than one kcov descriptor, which
    later leads to a memory leak of the old descriptor. This relation is
    really meant to be one-to-one (a task has only one back link).

    Extend validation to detect such misuse.
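
    A minimal model of the extended validation. Struct layout and error
    handling are abbreviated, and KCOV_EBUSY stands in for the kernel's
    -EBUSY:

    ```c
    #include <stddef.h>

    #define KCOV_EBUSY 16 /* stand-in for the kernel's EBUSY */

    struct kcov;
    struct task { struct kcov *kcov; };
    struct kcov { struct task *t; };

    static int kcov_enable(struct kcov *kcov, struct task *t)
    {
        /* Reject if this device already has a task, or -- the fix -- if
         * the task is already associated with another kcov descriptor. */
        if (kcov->t != NULL || t->kcov != NULL)
            return -KCOV_EBUSY;
        kcov->t = t;
        t->kcov = kcov;
        return 0;
    }
    ```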

    Link: http://lkml.kernel.org/r/20180122082520.15716-1-dvyukov@google.com
    Fixes: 5c9a8750a640 ("kernel: add kcov code coverage")
    Signed-off-by: Dmitry Vyukov
    Reported-by: Shankara Pailoor
    Cc: Dmitry Vyukov
    Cc: syzbot
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dmitry Vyukov
     

15 Dec, 2017

1 commit

  • Fix a silly copy-paste bug. We truncated u32 args to u16.
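
    The failure mode is easy to demonstrate: routing a u32 operand through a
    u16 parameter silently drops the high half. The function names here are
    illustrative, not the kernel's:

    ```c
    #include <stdint.h>

    /* Buggy prototype: u32 operands squeezed through u16 parameters, so
     * callers' arguments are implicitly truncated at the call boundary. */
    static uint64_t buggy_trace_cmp4(uint16_t arg1, uint16_t arg2)
    {
        return ((uint64_t)arg1 << 32) | arg2; /* packs truncated values */
    }

    /* Fixed prototype keeps the full 32 bits of each operand. */
    static uint64_t fixed_trace_cmp4(uint32_t arg1, uint32_t arg2)
    {
        return ((uint64_t)arg1 << 32) | arg2;
    }
    ```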

    Link: http://lkml.kernel.org/r/20171207101134.107168-1-dvyukov@google.com
    Fixes: ded97d2c2b2c ("kcov: support comparison operands collection")
    Signed-off-by: Dmitry Vyukov
    Cc: syzkaller@googlegroups.com
    Cc: Alexander Potapenko
    Cc: Vegard Nossum
    Cc: Quentin Casasnovas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dmitry Vyukov
     

18 Nov, 2017

2 commits

  • Enables kcov to collect comparison operands from instrumented code.
    This is done by using Clang's -fsanitize=trace-cmp instrumentation
    (currently not available for GCC).

    The comparison operands help a lot in fuzz testing. E.g. they are used
    in Syzkaller to cover the interiors of conditional statements with way
    less attempts and thus make previously unreachable code reachable.

    To allow separate collection of coverage and comparison operands two
    different work modes are implemented. Mode selection is now done via a
    KCOV_ENABLE ioctl call with corresponding argument value.
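
    From the userspace side, mode selection looks roughly like this. The ioctl
    numbers are reproduced from the uapi header so the snippet stands alone,
    and the program degrades gracefully when debugfs or the comparisons mode
    is unavailable:

    ```c
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* From include/uapi/linux/kcov.h */
    #define KCOV_INIT_TRACE _IOR('c', 1, unsigned long)
    #define KCOV_ENABLE     _IO('c', 100)
    #define KCOV_DISABLE    _IO('c', 101)
    #define KCOV_TRACE_PC  0 /* trace program counters (original mode) */
    #define KCOV_TRACE_CMP 1 /* trace comparison operands (this patch) */

    #define COVER_SIZE (64 << 10) /* buffer size in unsigned longs */

    /* Enable comparison-operand collection for the calling thread. */
    static int run_kcov_cmp_demo(void)
    {
        unsigned long *cover;
        int fd = open("/sys/kernel/debug/kcov", O_RDWR);

        if (fd == -1) {
            puts("kcov unavailable");
            return 0;
        }
        if (ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE) ||
            (cover = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
                          PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0))
                == MAP_FAILED ||
            ioctl(fd, KCOV_ENABLE, KCOV_TRACE_CMP)) {
            puts("kcov mode unsupported");
            close(fd);
            return 0;
        }
        /* ... issue the syscalls to be fuzzed here ... */
        printf("records: %lu\n", cover[0]); /* word 0 holds the record count */
        ioctl(fd, KCOV_DISABLE, 0);
        close(fd);
        return 0;
    }
    ```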

    Link: http://lkml.kernel.org/r/20171011095459.70721-1-glider@google.com
    Signed-off-by: Victor Chibotaru
    Signed-off-by: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Andrey Konovalov
    Cc: Mark Rutland
    Cc: Alexander Popov
    Cc: Andrey Ryabinin
    Cc: Kees Cook
    Cc: Vegard Nossum
    Cc: Quentin Casasnovas
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Victor Chibotaru
     
  • __sanitizer_cov_trace_pc() is hot code, so it's worth removing the
    pointless '!current' check: current is never NULL.

    Link: http://lkml.kernel.org/r/20170929162221.32500-1-aryabinin@virtuozzo.com
    Signed-off-by: Andrey Ryabinin
    Acked-by: Dmitry Vyukov
    Acked-by: Mark Rutland
    Cc: Andrey Konovalov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     

02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boiler plate text.

    This patch is based on work done by Thomas Gleixner and Kate Stewart and
    Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset
    of the use cases:
    - file had no licensing information in it,
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information.

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier to be applied
    to a file was done in a spreadsheet of side-by-side results from the
    output of two independent scanners (ScanCode & Windriver) producing
    SPDX tag:value files created by Philippe Ombredanne. Philippe prepared
    the base worksheet, and did an initial spot review of a few 1000 files.

    The 4.13 kernel was the starting point of the analysis with 60,537 files
    assessed. Kate Stewart did a file by file comparison of the scanner
    results in the spreadsheet to determine which SPDX license identifier(s)
    to be applied to the file. She confirmed any determination that was not
    immediately clear with lawyers working with the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging were:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
    lines of source
    - File already had some variant of a license header in it (even if
    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

09 Sep, 2017

1 commit

  • Support compat processes in KCOV by providing a compat_ioctl callback.
    Compat mode uses the same ioctl callback: we have 2 commands that do
    not use the argument and 1 that already checks that the arg does not
    overflow INT_MAX. This makes KCOV-guided fuzzing usable in compat
    processes.

    Link: http://lkml.kernel.org/r/20170823100553.55812-1-dvyukov@google.com
    Signed-off-by: Dmitry Vyukov
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dmitry Vyukov
     

09 May, 2017

1 commit

  • in_interrupt() semantics are confusing and wrong for most users, as it
    also returns true when bh is disabled. Thus we open-coded a proper
    check for interrupts in __sanitizer_cov_trace_pc() with a lengthy
    explanatory comment.

    Use the new in_task() predicate instead.

    Link: http://lkml.kernel.org/r/20170321091026.139655-1-dvyukov@google.com
    Signed-off-by: Dmitry Vyukov
    Cc: Kefeng Wang
    Cc: James Morse
    Cc: Alexander Popov
    Cc: Andrey Konovalov
    Cc: Hillf Danton
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dmitry Vyukov
     

21 Dec, 2016

1 commit

  • Subtract KASLR offset from the kernel addresses reported by kcov.
    Tested on x86_64 and AArch64 (Hikey LeMaker).
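
    The fix amounts to subtracting the randomization offset before a PC is
    reported, so coverage addresses match vmlinux symbols regardless of the
    KASLR slide. Sketched here with the offset passed explicitly, since
    kaslr_offset() is a kernel-internal helper:

    ```c
    #include <stdint.h>

    /* Convert an instruction pointer from the randomized runtime layout
     * back to the link-time layout. The real canonicalize_ip() obtains the
     * slide from kaslr_offset() under CONFIG_RANDOMIZE_BASE. */
    static uint64_t canonicalize_ip(uint64_t ip, uint64_t kaslr_offset)
    {
        return ip - kaslr_offset;
    }
    ```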

    Link: http://lkml.kernel.org/r/1481417456-28826-3-git-send-email-alex.popov@linux.com
    Signed-off-by: Alexander Popov
    Cc: Catalin Marinas
    Cc: Will Deacon
    Cc: Ard Biesheuvel
    Cc: Mark Rutland
    Cc: Rob Herring
    Cc: Kefeng Wang
    Cc: AKASHI Takahiro
    Cc: Jon Masters
    Cc: David Daney
    Cc: Ganapatrao Kulkarni
    Cc: Dmitry Vyukov
    Cc: Nicolai Stange
    Cc: James Morse
    Cc: Andrey Ryabinin
    Cc: Andrey Konovalov
    Cc: Alexander Popov
    Cc: syzkaller
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Popov
     

15 Dec, 2016

1 commit

  • It is fragile that some definitions are acquired via transitive
    dependencies, as shown below:

    atomic_* (<linux/atomic.h>)
    ENOMEM/EN* (<linux/errno.h>)
    EXPORT_SYMBOL (<linux/export.h>)
    device_initcall (<linux/init.h>)
    preempt_* (<linux/preempt.h>)

    Include these headers directly to prevent possible issues.

    Link: http://lkml.kernel.org/r/1481163221-40170-1-git-send-email-wangkefeng.wang@huawei.com
    Signed-off-by: Kefeng Wang
    Suggested-by: Mark Rutland
    Cc: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Cc: Mark Rutland
    Cc: James Morse
    Cc: Kefeng Wang
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kefeng Wang
     

08 Dec, 2016

1 commit

  • In __sanitizer_cov_trace_pc we use task_struct and fields within it,
    but as we haven't included <linux/sched.h>, it is not guaranteed to be
    defined. While we usually happen to acquire the definition through a
    transitive include, this is fragile (and hasn't been true in the past,
    causing issues with backports).

    Include <linux/sched.h> to avoid any fragility.

    [mark.rutland@arm.com: rewrote changelog]
    Link: http://lkml.kernel.org/r/1481007384-27529-1-git-send-email-wangkefeng.wang@huawei.com
    Signed-off-by: Kefeng Wang
    Acked-by: Mark Rutland
    Cc: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Cc: James Morse
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kefeng Wang
     

28 Oct, 2016

1 commit

  • in_interrupt() returns a nonzero value when we are either in an
    interrupt or have bh disabled via local_bh_disable(). Since we are
    interested in only ignoring coverage from actual interrupts, do a proper
    check instead of just calling in_interrupt().

    As a result of this change, kcov will start to collect coverage from
    within local_bh_disable()/local_bh_enable() sections.

    Link: http://lkml.kernel.org/r/1476115803-20712-1-git-send-email-andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Acked-by: Dmitry Vyukov
    Cc: Nicolai Stange
    Cc: Andrey Ryabinin
    Cc: Kees Cook
    Cc: James Morse
    Cc: Vegard Nossum
    Cc: Quentin Casasnovas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     

15 Jun, 2016

1 commit

  • Since commit 49d200deaa68 ("debugfs: prevent access to removed files'
    private data"), a debugfs file's file_operations methods get proxied
    through lifetime aware wrappers.

    However, only a certain subset of the file_operations members is supported
    by debugfs and ->mmap isn't among them -- it appears to be NULL from the
    VFS layer's perspective.

    This behaviour breaks the /sys/kernel/debug/kcov file introduced
    concurrently with commit 5c9a8750a640 ("kernel: add kcov code coverage").

    Since that file never gets removed, there is no file removal race and thus,
    a lifetime checking proxy isn't needed.

    Avoid the proxying for /sys/kernel/debug/kcov by creating it via
    debugfs_create_file_unsafe() rather than debugfs_create_file().
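
    A sketch of the resulting registration (mode and kcov_fops name
    illustrative):

    ```c
    /*
     * debugfs_create_file() would wrap kcov_fops in a lifetime-checking
     * proxy that supports only a subset of file_operations methods --
     * ->mmap is not among them.  The kcov file is never removed, so there
     * is no removal race and the unproxied variant is safe:
     */
    debugfs_create_file_unsafe("kcov", 0600, NULL, NULL, &kcov_fops);
    ```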

    Fixes: 49d200deaa68 ("debugfs: prevent access to removed files' private data")
    Fixes: 5c9a8750a640 ("kernel: add kcov code coverage")
    Reported-by: Sasha Levin
    Signed-off-by: Nicolai Stange
    Signed-off-by: Greg Kroah-Hartman

    Nicolai Stange
     

29 Apr, 2016

2 commits

  • Profiling 'if' statements in __sanitizer_cov_trace_pc() leads to
    unbounded recursion and a crash:

    __sanitizer_cov_trace_pc() ->
    ftrace_likely_update ->
    __sanitizer_cov_trace_pc() ...

    Define DISABLE_BRANCH_PROFILING to disable this tracer.
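
    The define has to appear before any header that pulls in the
    branch-profiling macros; a sketch of the top of kernel/kcov.c:

    ```c
    /* kernel/kcov.c */
    #define DISABLE_BRANCH_PROFILING	/* must precede the includes below,
    					 * so likely()/unlikely() in this
    					 * file do not expand to
    					 * ftrace_likely_update() calls */
    #include <linux/compiler.h>
    #include <linux/types.h>
    ```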

    Signed-off-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
  • Kcov causes the compiler to add a call to __sanitizer_cov_trace_pc() in
    every basic block. Ftrace patches a call to _mcount() into each
    function it has annotated.

    Letting these mechanisms annotate each other is a bad thing. Break the
    loop by adding 'notrace' to __sanitizer_cov_trace_pc() so that ftrace
    won't try to patch this code.

    This patch lets arm64 with KCOV and STACK_TRACER boot.
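
    The annotation itself is minimal; a sketch:

    ```c
    /*
     * 'notrace' keeps this function off ftrace's patch list, so the
     * _mcount() <-> __sanitizer_cov_trace_pc() recursion cannot occur:
     */
    void notrace __sanitizer_cov_trace_pc(void)
    {
    	/* ... record coverage ... */
    }
    ```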

    Signed-off-by: James Morse
    Acked-by: Dmitry Vyukov
    Cc: Alexander Potapenko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    James Morse
     

23 Mar, 2016

1 commit

  • kcov provides code coverage collection for coverage-guided fuzzing
    (randomized testing). Coverage-guided fuzzing is a testing technique
    that uses coverage feedback to determine new interesting inputs to a
    system. A notable user-space example is AFL
    (http://lcamtuf.coredump.cx/afl/). However, this technique is not
    widely used for kernel testing due to missing compiler and kernel
    support.

    kcov does not aim to collect as much coverage as possible. It aims to
    collect more or less stable coverage that is a function of syscall
    inputs. To achieve this goal it does not collect coverage in soft/hard
    interrupts, and instrumentation of some inherently non-deterministic or
    non-interesting parts of the kernel is disabled (e.g. scheduler,
    locking).

    Currently there is a single coverage collection mode (tracing), but the
    API anticipates additional collection modes. Initially I also
    implemented a second mode which exposes coverage in a fixed-size hash
    table of counters (what Quentin used in his original patch). I've
    dropped the second mode for simplicity.
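
    For reference, the tracing mode is driven from user space through the
    debugfs file roughly as follows (a sketch; ioctl numbers as in the kcov
    uapi header, COVER_SIZE chosen arbitrarily; requires a kernel built
    with CONFIG_KCOV):

    ```c
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define KCOV_INIT_TRACE	_IOR('c', 1, unsigned long)
    #define KCOV_ENABLE	_IO('c', 100)
    #define KCOV_DISABLE	_IO('c', 101)
    #define COVER_SIZE	(64 << 10)	/* buffer size, in unsigned longs */

    int main(void)
    {
    	int fd = open("/sys/kernel/debug/kcov", O_RDWR);
    	if (fd == -1)
    		return 1;	/* no CONFIG_KCOV or debugfs not mounted */
    	/* size the trace buffer, then map it shared with the kernel */
    	ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE);
    	unsigned long *cover = mmap(NULL,
    				    COVER_SIZE * sizeof(unsigned long),
    				    PROT_READ | PROT_WRITE, MAP_SHARED,
    				    fd, 0);
    	ioctl(fd, KCOV_ENABLE, 0);	/* start tracing this thread */
    	__atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);	/* reset */

    	read(-1, NULL, 0);	/* the syscall whose coverage we want */

    	unsigned long n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
    	for (unsigned long i = 0; i < n; i++)
    		printf("0x%lx\n", cover[i + 1]);	/* PCs follow count */
    	ioctl(fd, KCOV_DISABLE, 0);
    	close(fd);
    	return 0;
    }
    ```

    Note the reset/collect cycle touches only the n recorded entries, which
    is exactly the cost argument against gcov made below.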

    This patch adds the necessary support on the kernel side. The
    complementary compiler support was added in gcc revision 231296.

    We've used this support to build syzkaller system call fuzzer, which has
    found 90 kernel bugs in just 2 months:

    https://github.com/google/syzkaller/wiki/Found-Bugs

    We've also found 30+ bugs in our internal systems with syzkaller.
    Another (yet unexplored) direction where kcov coverage would greatly
    help is more traditional "blob mutation". For example, mounting a
    random blob as a filesystem, or receiving a random blob over wire.

    Why not gcov? A typical fuzzing loop looks as follows: (1) reset
    coverage, (2) execute a bit of code, (3) collect coverage, repeat. A
    typical coverage trace can be just a dozen basic blocks (e.g. for an
    invalid input). In such a context gcov becomes prohibitively expensive,
    as the reset/collect steps depend on the total number of basic
    blocks/edges in the program (in the case of the kernel it is about 2M).
    The cost of kcov depends only on the number of executed basic
    blocks/edges. On top of that, the kernel requires per-thread coverage
    because there are always background threads and unrelated processes
    that also produce coverage. With inlined gcov instrumentation
    per-thread coverage is not possible.

    kcov exposes kernel PCs and control flow to user-space which is
    insecure. But debugfs should not be mapped as user accessible.

    Based on a patch by Quentin Casasnovas.

    [akpm@linux-foundation.org: make task_struct.kcov_mode have type `enum kcov_mode']
    [akpm@linux-foundation.org: unbreak allmodconfig]
    [akpm@linux-foundation.org: follow x86 Makefile layout standards]
    Signed-off-by: Dmitry Vyukov
    Reviewed-by: Kees Cook
    Cc: syzkaller
    Cc: Vegard Nossum
    Cc: Catalin Marinas
    Cc: Tavis Ormandy
    Cc: Will Deacon
    Cc: Quentin Casasnovas
    Cc: Kostya Serebryany
    Cc: Eric Dumazet
    Cc: Alexander Potapenko
    Cc: Kees Cook
    Cc: Bjorn Helgaas
    Cc: Sasha Levin
    Cc: David Drysdale
    Cc: Ard Biesheuvel
    Cc: Andrey Ryabinin
    Cc: Kirill A. Shutemov
    Cc: Jiri Slaby
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dmitry Vyukov