08 Aug, 2020

1 commit

  • Move the free track from struct kasan_alloc_meta to struct kasan_free_meta
    so that both structs are 16 bytes in size. This is a good size because it
    matches the minimal redzone size and provides good alignment.

    For the free track, we make the following modifications:
    1) Remove the free_track from struct kasan_alloc_meta.
    2) Add the free_track into struct kasan_free_meta.
    3) Add a macro, KASAN_KMALLOC_FREETRACK, used to check whether the
    free stack can be printed in a KASAN report.

    [1]https://bugzilla.kernel.org/show_bug.cgi?id=198437

    [walter-zh.wu@mediatek.com: build fix]
    Link: http://lkml.kernel.org/r/20200710162440.23887-1-walter-zh.wu@mediatek.com

    Suggested-by: Dmitry Vyukov
    Co-developed-by: Dmitry Vyukov
    Signed-off-by: Walter Wu
    Signed-off-by: Andrew Morton
    Tested-by: Dmitry Vyukov
    Reviewed-by: Dmitry Vyukov
    Reviewed-by: Andrey Konovalov
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Joel Fernandes
    Cc: Jonathan Corbet
    Cc: Josh Triplett
    Cc: Lai Jiangshan
    Cc: Mathieu Desnoyers
    Cc: Matthias Brugger
    Cc: "Paul E. McKenney"
    Link: http://lkml.kernel.org/r/20200601051022.1230-1-walter-zh.wu@mediatek.com
    Signed-off-by: Linus Torvalds

    Walter Wu
     

24 May, 2020

1 commit

  • During early boot, while KASAN is not yet initialized, it is possible to
    enter the reporting code path and end up in kasan_report().

    While KASAN is uninitialized, a branch there prevents any reports from
    being generated. However, under certain circumstances when branches are
    being traced (TRACE_BRANCH_PROFILING), we may recurse deeply enough to
    cause kernel reboots without warning.

    To prevent similar issues in the future, we should disable branch tracing
    for the core runtime.

    [elver@google.com: remove duplicate DISABLE_BRANCH_PROFILING, per Qian Cai]
    Link: https://lore.kernel.org/lkml/20200517011732.GE24705@shao2-debian/
    Link: http://lkml.kernel.org/r/20200522075207.157349-1-elver@google.com
    Reported-by: kernel test robot
    Signed-off-by: Marco Elver
    Signed-off-by: Andrew Morton
    Reviewed-by: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Cc: Andrey Ryabinin
    Cc: Qian Cai
    Link: http://lkml.kernel.org/r/20200517011732.GE24705@shao2-debian/
    Link: http://lkml.kernel.org/r/20200519182459.87166-1-elver@google.com
    Signed-off-by: Linus Torvalds

    Marco Elver
     

03 Apr, 2020

1 commit

  • Patch series "fix the missing underflow in memory operation function", v4.

    This patchset makes KASAN produce a report when the size passed to a
    memory operation function is negative, which helps programmers track down
    an undefined-behavior issue. Patch 1 is based on Dmitry's review and
    suggestions; patch 2 adds a test to verify patch 1.

    [1]https://bugzilla.kernel.org/show_bug.cgi?id=199341
    [2]https://lore.kernel.org/linux-arm-kernel/20190927034338.15813-1-walter-zh.wu@mediatek.com/

    This patch (of 2):

    KASAN failed to detect when the size passed to memset(), memcpy(), or
    memmove() is a negative number, which causes an out-of-bounds bug, so
    KASAN needs to detect this case.

    A negative size has good reason to be classified as an out-of-bounds bug
    type: casting a negative number to size_t yields a huge value larger than
    ULONG_MAX/2, so the access qualifies as out-of-bounds.

    KASAN report is shown below:

    BUG: KASAN: out-of-bounds in kmalloc_memmove_invalid_size+0x70/0xa0
    Read of size 18446744073709551608 at addr ffffff8069660904 by task cat/72

    CPU: 2 PID: 72 Comm: cat Not tainted 5.4.0-rc1-next-20191004ajb-00001-gdb8af2f372b2-dirty #1
    Hardware name: linux,dummy-virt (DT)
    Call trace:
    dump_backtrace+0x0/0x288
    show_stack+0x14/0x20
    dump_stack+0x10c/0x164
    print_address_description.isra.9+0x68/0x378
    __kasan_report+0x164/0x1a0
    kasan_report+0xc/0x18
    check_memory_region+0x174/0x1d0
    memmove+0x34/0x88
    kmalloc_memmove_invalid_size+0x70/0xa0


    [cai@lca.pw: fix -Wdeclaration-after-statement warn]
    Link: http://lkml.kernel.org/r/1583509030-27939-1-git-send-email-cai@lca.pw
    [peterz@infradead.org: fix objtool warning]
    Link: http://lkml.kernel.org/r/20200305095436.GV2596@hirez.programming.kicks-ass.net
    Reported-by: kernel test robot
    Reported-by: Dmitry Vyukov
    Suggested-by: Dmitry Vyukov
    Signed-off-by: Walter Wu
    Signed-off-by: Qian Cai
    Signed-off-by: Andrew Morton
    Reviewed-by: Dmitry Vyukov
    Reviewed-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Link: http://lkml.kernel.org/r/20191112065302.7015-1-walter-zh.wu@mediatek.com
    Signed-off-by: Linus Torvalds

    Walter Wu
     

13 Jul, 2019

1 commit

  • This changes the {,__}kasan_check_{read,write} functions to return a
    boolean indicating whether the access was valid.

    [sfr@canb.auug.org.au: include types.h for "bool"]
    Link: http://lkml.kernel.org/r/20190705184949.13cdd021@canb.auug.org.au
    Link: http://lkml.kernel.org/r/20190626142014.141844-3-elver@google.com
    Signed-off-by: Marco Elver
    Signed-off-by: Stephen Rothwell
    Reviewed-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Cc: Andrey Konovalov
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Mark Rutland
    Cc: Kees Cook
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Marco Elver
     

22 Feb, 2019

1 commit

  • There are two issues with assigning random percpu seeds right now:

    1. We use for_each_possible_cpu() to iterate over cpus, but cpumask is
    not set up yet at the moment of kasan_init(), and thus we only set
    the seed for cpu #0.

    2. A call to get_random_u32() always returns the same number and produces
    a message in dmesg, since the random subsystem is not yet initialized.

    Fix 1 by calling kasan_init_tags() after cpumask is set up.

    Fix 2 by using get_cycles() instead of get_random_u32(). This gives us
    lower quality random numbers, but it's good enough, as KASAN is meant to
    be used as a debugging tool and not a mitigation.

    Link: http://lkml.kernel.org/r/1f815cc914b61f3516ed4cc9bfd9eeca9bd5d9de.1550677973.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Cc: Catalin Marinas
    Cc: Will Deacon
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     

29 Dec, 2018

4 commits

  • This patch adds a "SPDX-License-Identifier: GPL-2.0" mark to all source
    files under mm/kasan.

    Link: http://lkml.kernel.org/r/bce2d1e618afa5142e81961ab8fa4b4165337380.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • This commit adds tag-based KASAN specific hooks implementation and
    adjusts common generic and tag-based KASAN ones.

    1. When a new slab cache is created, tag-based KASAN rounds up the size of
    the objects in this cache to KASAN_SHADOW_SCALE_SIZE (== 16).

    2. On each kmalloc, tag-based KASAN generates a random tag, sets the
    shadow memory that corresponds to this object to this tag, and embeds
    this tag value into the top byte of the returned pointer.

    3. On each kfree, tag-based KASAN poisons the shadow memory with a random
    tag to allow detection of use-after-free bugs.

    The rest of the hook implementation logic closely mirrors that of generic
    KASAN. Tag-based KASAN saves allocation and free stack metadata to the
    slab object the same way generic KASAN does.

    Link: http://lkml.kernel.org/r/bda78069e3b8422039794050ddcb2d53d053ed41.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • This commit adds a few helper functions for working with tags embedded in
    the top byte of kernel pointers: setting, getting, and resetting the top
    byte.

    Link: http://lkml.kernel.org/r/f6c6437bb8e143bc44f42c3c259c62e734be7935.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Cc: Andrey Ryabinin
    Cc: Christoph Lameter
    Cc: Dmitry Vyukov
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • This commit splits the current CONFIG_KASAN config option into two:
    1. CONFIG_KASAN_GENERIC, that enables the generic KASAN mode (the one
    that exists now);
    2. CONFIG_KASAN_SW_TAGS, that enables the software tag-based KASAN mode.

    The name CONFIG_KASAN_SW_TAGS is chosen because in the future there will
    be another, hardware tag-based KASAN mode that relies on hardware memory
    tagging support in arm64.

    With CONFIG_KASAN_SW_TAGS enabled, compiler options are changed to
    instrument kernel files with -fsanitize=kernel-hwaddress (except the ones
    for which KASAN_SANITIZE := n is set).

    Both CONFIG_KASAN_GENERIC and CONFIG_KASAN_SW_TAGS support both
    CONFIG_KASAN_INLINE and CONFIG_KASAN_OUTLINE instrumentation modes.

    This commit also adds an empty placeholder implementation (for now) of
    the tag-based KASAN specific hooks inserted by the compiler and adjusts
    the common hooks implementation.

    While this commit adds the CONFIG_KASAN_SW_TAGS config option, this option
    is not selectable, as it depends on HAVE_ARCH_KASAN_SW_TAGS, which we will
    enable once all the infrastructure code has been added.
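The resulting option structure can be sketched as a Kconfig fragment. Only the option names and the HAVE_ARCH_KASAN_SW_TAGS dependency come from the text above; the surrounding structure is a simplified illustration, not the actual lib/Kconfig.kasan.

```kconfig
choice
	prompt "KASAN mode"
	depends on KASAN
	default KASAN_GENERIC

config KASAN_GENERIC
	bool "Generic mode"

config KASAN_SW_TAGS
	bool "Software tag-based mode"
	depends on HAVE_ARCH_KASAN_SW_TAGS

endchoice
```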

    Link: http://lkml.kernel.org/r/b2550106eb8a68b10fefbabce820910b115aa853.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov