22 Jun, 2020

1 commit

  • [ Upstream commit adb72ae1915db28f934e9e02c18bfcea2f3ed3b7 ]

    Patch series "Fix some incompatibilites between KASAN and FORTIFY_SOURCE", v4.

    3 KASAN self-tests fail on a kernel with both KASAN and FORTIFY_SOURCE:
    memchr, memcmp and strlen.

    When FORTIFY_SOURCE is on, a number of functions are replaced with
    fortified versions, which attempt to check the sizes of the operands.
    However, these functions often directly invoke __builtin_foo() once they
    have performed the fortify check. The compiler can detect that the
    results of these functions are not used, and knows that they have no other
    side effects, and so can eliminate them as dead code.
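
    As a minimal illustration of the mechanism (a userspace analogy compiled
    with GCC or Clang, not the kernel's actual fortify wrapper; libc already
    marks strlen() as pure), a call whose result is discarded and which has
    no side effects is fair game for dead-code elimination:

    #include <string.h>

    static char buf[16];

    void discarded_call(void)
    {
            /* Result unused, no side effects: the compiler may remove the
             * whole call, so the memory access a sanitizer would have
             * checked is never emitted. */
            strlen(buf);
    }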

    Why are only memchr, memcmp and strlen affected?
    ================================================

    Of string and string-like functions, kasan_test tests:

    * strchr -> not affected, no fortified version
    * strrchr -> likewise
    * strcmp -> likewise
    * strncmp -> likewise

    * strnlen -> not affected, the fortify source implementation calls the
      underlying strnlen implementation, which is instrumented, not a
      builtin

    * strlen -> affected, the fortify source implementation calls a
      __builtin version which the compiler can determine is dead.

    * memchr -> likewise
    * memcmp -> likewise

    * memset -> not affected, the compiler knows that memset writes to its
      first argument and therefore is not dead.

    Why does this not affect the functions normally?
    ================================================

    In string.h, these functions are not marked as __pure, so the compiler
    cannot know that they do not have side effects. If relevant functions are
    marked as __pure in string.h, we see the following warnings and the
    functions are elided:

    lib/test_kasan.c: In function `kasan_memchr':
    lib/test_kasan.c:606:2: warning: statement with no effect [-Wunused-value]
    memchr(ptr, '1', size + 1);
    ^~~~~~~~~~~~~~~~~~~~~~~~~~
    lib/test_kasan.c: In function `kasan_memcmp':
    lib/test_kasan.c:622:2: warning: statement with no effect [-Wunused-value]
    memcmp(ptr, arr, size+1);
    ^~~~~~~~~~~~~~~~~~~~~~~~
    lib/test_kasan.c: In function `kasan_strings':
    lib/test_kasan.c:645:2: warning: statement with no effect [-Wunused-value]
    strchr(ptr, '1');
    ^~~~~~~~~~~~~~~~
    ...

    This annotation would make sense to add and could be added at any point,
    so test_kasan.c should not rely on these functions lacking it.

    The fix
    =======

    Make the tests write the results of these pure functions to a global,
    which keeps the calls live. The strlen and memchr tests now pass.

    The memcmp test still fails to trigger, which is addressed in the next
    patch.
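
    A hedged sketch of the shape of the fix (the sink name and the specific
    test are illustrative, not necessarily the exact lib/test_kasan.c hunk):
    storing each result in a global with external linkage makes the call
    observable, so the compiler can no longer discard it.

    /* Global sink: externally visible, so the store (and therefore the
     * call producing the value) must be kept. */
    void *kasan_ptr_result;

    static noinline void __init kasan_memchr_sketch(void)
    {
            size_t size = 24;
            char *ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO);

            if (!ptr)
                    return;
            /* The result is now used, so the fortified memchr() and its
             * out-of-bounds read survive optimisation. */
            kasan_ptr_result = memchr(ptr, '1', size + 1);
            kfree(ptr);
    }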

    [dja@axtens.net: drop patch 3]
    Link: http://lkml.kernel.org/r/20200424145521.8203-2-dja@axtens.net
    Fixes: 0c96350a2d2f ("lib/test_kasan.c: add tests for several string/memory API functions")
    Signed-off-by: Daniel Axtens
    Signed-off-by: Andrew Morton
    Tested-by: David Gow
    Reviewed-by: Dmitry Vyukov
    Cc: Daniel Micay
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Link: http://lkml.kernel.org/r/20200423154503.5103-1-dja@axtens.net
    Link: http://lkml.kernel.org/r/20200423154503.5103-2-dja@axtens.net
    Signed-off-by: Linus Torvalds
    Signed-off-by: Sasha Levin

    Daniel Axtens
     

11 Feb, 2020

1 commit

  • commit 3e21d9a501bf99aee2e5835d7f34d8c823f115b5 upstream.

    In case memory resources for _ptr2_ were allocated, release them before
    returning.

    Notice that in case _ptr1_ happens to be NULL, krealloc() behaves
    exactly like kmalloc().
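
    A hedged sketch of the pattern being fixed (sizes and control flow are
    illustrative, not the exact test_kasan.c hunk): because krealloc(NULL, ...)
    acts like kmalloc(), ptr2 can hold a live allocation even when ptr1 is
    NULL, so the error path must release it too.

    static noinline void __init kmalloc_oob_krealloc_more_sketch(void)
    {
            size_t size1 = 17, size2 = 19;
            char *ptr1 = kmalloc(size1, GFP_KERNEL);
            char *ptr2 = krealloc(ptr1, size2, GFP_KERNEL);

            if (!ptr1 || !ptr2) {
                    pr_err("Allocation failed\n");
                    kfree(ptr1);    /* no-op when ptr1 is NULL */
                    kfree(ptr2);    /* releases ptr2 if krealloc() allocated it */
                    return;
            }

            ptr2[size2] = 'x';      /* the out-of-bounds access under test */
            kfree(ptr2);
    }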

    Addresses-Coverity-ID: 1490594 ("Resource leak")
    Link: http://lkml.kernel.org/r/20200123160115.GA4202@embeddedor
    Fixes: 3f15801cdc23 ("lib: add kasan test module")
    Signed-off-by: Gustavo A. R. Silva
    Reviewed-by: Dmitry Vyukov
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Gustavo A. R. Silva
     

25 Sep, 2019

1 commit

  • In several places we need to be able to operate on pointers which have
    gone via a roundtrip:

    virt -> {phys,page} -> virt

    With KASAN_SW_TAGS, we can't preserve the tag for SLUB objects, and the
    {phys,page} -> virt conversion will use KASAN_TAG_KERNEL.

    This patch adds tests to ensure that this works as expected, without
    false positives which have recently been spotted [1,2] in testing.

    [1] https://lore.kernel.org/linux-arm-kernel/20190819114420.2535-1-walter-zh.wu@mediatek.com/
    [2] https://lore.kernel.org/linux-arm-kernel/20190819132347.GB9927@lakrids.cambridge.arm.com/
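
    A minimal sketch of the kind of round trip these tests exercise (the
    function name is illustrative, not necessarily what was added to
    lib/test_kasan.c): with KASAN_SW_TAGS, phys_to_virt() hands back a
    KASAN_TAG_KERNEL pointer, and kfree() must accept it without a report.

    static noinline void __init kfree_via_phys_sketch(void)
    {
            char *ptr = kmalloc(8, GFP_KERNEL);
            phys_addr_t phys;

            if (!ptr) {
                    pr_err("Allocation failed\n");
                    return;
            }
            phys = virt_to_phys(ptr);       /* the pointer tag is lost here */
            kfree(phys_to_virt(phys));      /* must not be a false positive */
    }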

    [akpm@linux-foundation.org: coding-style fixes]
    Link: http://lkml.kernel.org/r/20190821153927.28630-1-mark.rutland@arm.com
    Signed-off-by: Mark Rutland
    Reviewed-by: Andrey Konovalov
    Tested-by: Andrey Konovalov
    Acked-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mark Rutland
     

13 Jul, 2019

2 commits

  • Add a simple test that checks if double-kzfree is being detected
    correctly.
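
    A minimal sketch of such a check (illustrative, not necessarily the exact
    test added): kzfree() zeroes the buffer and frees it, so a second kzfree()
    on the same pointer is a double-free that KASAN should report.

    static noinline void __init kzfree_double_free_sketch(void)
    {
            char *ptr = kmalloc(16, GFP_KERNEL);

            if (!ptr) {
                    pr_err("Allocation failed\n");
                    return;
            }
            kzfree(ptr);
            kzfree(ptr);    /* expected: KASAN double-free report */
    }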

    Link: http://lkml.kernel.org/r/20190626142014.141844-4-elver@google.com
    Signed-off-by: Marco Elver
    Reviewed-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Cc: Andrey Konovalov
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Mark Rutland
    Cc: Kees Cook
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Marco Elver
     
  • Patch series "Bitops instrumentation for KASAN", v5.

    This patch (of 3):

    This adds bitops tests to the test_kasan module. In a follow-up patch,
    support for bitops instrumentation will be added.
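
    A minimal sketch of an out-of-bounds bitop of the kind the new tests
    exercise (illustrative sizes and names, not the full test): operating on
    a bit index just past the allocation should be caught once the bitops
    are instrumented.

    static noinline void __init kasan_bitops_sketch(void)
    {
            unsigned long *bits = kzalloc(sizeof(*bits), GFP_KERNEL);

            if (!bits)
                    return;
            set_bit(BITS_PER_LONG, bits);   /* touches the word after the object */
            kfree(bits);
    }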

    Link: http://lkml.kernel.org/r/20190613125950.197667-2-elver@google.com
    Signed-off-by: Marco Elver
    Acked-by: Mark Rutland
    Reviewed-by: Andrey Ryabinin
    Cc: Peter Zijlstra (Intel)
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Cc: Andrey Konovalov
    Cc: "H. Peter Anvin"
    Cc: Jonathan Corbet
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: Borislav Petkov
    Cc: Arnd Bergmann
    Cc: Josh Poimboeuf
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Marco Elver
     

19 Jun, 2019

1 commit

  • Based on 2 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license version 2 as
    published by the free software foundation

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license version 2 as
    published by the free software foundation #

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-only

    has been chosen to replace the boilerplate/reference in 4122 file(s).
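
    In practice the boilerplate paragraph in each affected file is replaced by
    a single SPDX tag near the top of the file; for a C source file it looks
    like this (the comment style follows the file type):

    // SPDX-License-Identifier: GPL-2.0-only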

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Enrico Weigelt
    Reviewed-by: Kate Stewart
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

06 Mar, 2019

1 commit

  • The use-after-scope bug detector seems to be almost entirely useless for
    the Linux kernel. It has existed for over two years, but I've seen only
    one valid bug so far [1], and that bug was fixed before it was reported.
    There were some other use-after-scope reports, but they were false
    positives caused by various issues such as incompatibility with the
    structleak plugin.

    This feature significantly increases stack usage, especially with GCC
    versions older than 9, and causes a 32K stack overflow. It probably adds
    a performance penalty too.

    Given all that, let's remove the use-after-scope detector entirely.

    While preparing this patch I noticed that we mistakenly enable
    use-after-scope detection for the clang compiler regardless of the
    CONFIG_KASAN_EXTRA setting. This is also fixed now.

    [1] http://lkml.kernel.org/r/

    Link: http://lkml.kernel.org/r/20190111185842.13978-1-aryabinin@virtuozzo.com
    Signed-off-by: Andrey Ryabinin
    Acked-by: Will Deacon [arm64]
    Cc: Qian Cai
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     

27 Oct, 2018

1 commit

  • Arch code may have asm implementations of string/memory API functions
    instead of using the generic ones from lib/string.c. KASAN doesn't see
    memory accesses in asm code and thus can miss many bugs.

    E.g. on ARM64, KASAN doesn't see bugs in memchr(), memcmp(), str[r]chr(),
    str[n]cmp() and str[n]len(). Add tests for these functions to make sure
    that we notice such problems on other architectures.
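
    A minimal sketch of the kind of test added (illustrative, not the exact
    hunk): calling a string helper on a freed buffer must produce a report
    even when the architecture supplies an uninstrumented asm implementation.

    static noinline void __init kasan_strchr_sketch(void)
    {
            char *ptr = kmalloc(24, GFP_KERNEL | __GFP_ZERO);

            if (!ptr)
                    return;
            kfree(ptr);
            strchr(ptr, '1');       /* use-after-free read inside strchr() */
    }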

    Link: http://lkml.kernel.org/r/20180920135631.23833-3-aryabinin@virtuozzo.com
    Signed-off-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Ard Biesheuvel
    Cc: Dmitry Vyukov
    Cc: Kyeongdon Kim
    Cc: Mark Rutland
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     

12 Apr, 2018

1 commit

  • When an invalid-free is triggered by one of the KASAN tests, the object
    doesn't actually get freed. This later leads to a BUG failure in
    kmem_cache_destroy that checks that there are no allocated objects in
    the cache that is being destroyed.

    Fix this by calling kmem_cache_free with the proper object address after
    the call that triggers invalid-free.
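
    A hedged sketch of the flow described above (cache parameters are
    illustrative): after the deliberately bad pointer triggers the
    invalid-free report, the object is freed for real so that
    kmem_cache_destroy() no longer finds live objects.

    struct kmem_cache *cache = kmem_cache_create("test_cache", 200, 0, 0, NULL);
    u8 *p = cache ? kmem_cache_alloc(cache, GFP_KERNEL) : NULL;

    if (p) {
            kmem_cache_free(cache, p + 1);  /* triggers the invalid-free report */
            kmem_cache_free(cache, p);      /* the fix: actually free the object */
    }
    if (cache)
            kmem_cache_destroy(cache);      /* no longer hits the BUG check */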

    Link: http://lkml.kernel.org/r/286eaefc0a6c3fa9b83b87e7d6dc0fbb5b5c9926.1519924383.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Acked-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Geert Uytterhoeven
    Cc: Nick Terrell
    Cc: Chris Mason
    Cc: Yury Norov
    Cc: Al Viro
    Cc: "Luis R . Rodriguez"
    Cc: Palmer Dabbelt
    Cc: "Paul E . McKenney"
    Cc: Jeff Layton
    Cc: "Jason A . Donenfeld"
    Cc: Kostya Serebryany
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     

07 Feb, 2018

4 commits

  • The variable real_size is initialized with a value that is never read; it
    is re-assigned a new value later on, so the initialization is redundant
    and can be removed.

    Cleans up clang warning:

    lib/test_kasan.c:422:21: warning: Value stored to 'real_size' during its initialization is never read

    Link: http://lkml.kernel.org/r/20180206144950.32457-1-colin.king@canonical.com
    Signed-off-by: Colin Ian King
    Acked-by: Andrey Ryabinin
    Reviewed-by: Andrew Morton
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Colin Ian King
     
  • Detect frees of pointers into the middle of heap objects.

    Link: http://lkml.kernel.org/r/cb569193190356beb018a03bb8d6fbae67e7adbc.1514378558.git.dvyukov@google.com
    Signed-off-by: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dmitry Vyukov
     
  • Patch series "kasan: detect invalid frees".

    KASAN detects double-frees, but does not detect invalid-frees (when a
    pointer into the middle of a heap object is passed to free). We recently
    had a very unpleasant case in crypto code which freed an inner object
    inside of a heap allocation. This went unnoticed during the free, but
    totally corrupted the heap and later led to a bunch of random crashes all
    over the kernel code.

    Detect invalid frees.

    This patch (of 5):

    Detect frees of pointers into the middle of large heap objects.

    I dropped const from kasan_kfree_large() because it starts propagating
    through a bunch of functions in kasan_report.c, slab/slub nearest_obj(),
    all of their local variables, fixup_red_left(), etc.
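
    A minimal sketch of the case this patch covers (sizes are illustrative):
    a pointer into the middle of a large, page-allocated object is passed to
    kfree(), which should now be reported as an invalid free instead of
    silently corrupting the heap.

    char *ptr = kmalloc(KMALLOC_MAX_CACHE_SIZE + 10, GFP_KERNEL);

    if (ptr)
            kfree(ptr + 16);        /* not the address kmalloc() returned */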

    Link: http://lkml.kernel.org/r/1b45b4fe1d20fc0de1329aab674c1dd973fee723.1514378558.git.dvyukov@google.com
    Signed-off-by: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dmitry Vyukov
     
  • Link: http://lkml.kernel.org/r/20171204191735.132544-5-paullawrence@google.com
    Signed-off-by: Greg Hackmann
    Signed-off-by: Paul Lawrence
    Acked-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Masahiro Yamada
    Cc: Matthias Kaehlcke
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Lawrence
     

18 Nov, 2017

1 commit


01 Apr, 2017

1 commit

  • Disable KASAN after the first report. There are several reasons for
    this:

    - A single bug quite often causes multiple invalid memory accesses,
    creating a storm in the dmesg.

    - A write OOB access might corrupt metadata, so the next report will
    print bogus alloc/free stacktraces.

    - Reports after the first can easily be mere side effects of the first
    one rather than bugs in their own right.

    Given that multiple reports usually only do harm, it makes sense to
    disable KASAN after the first one. If the user wants to see all the
    reports, the boot-time parameter kasan_multi_shot must be used.

    [aryabinin@virtuozzo.com: wrote changelog and doc, added missing include]
    Link: http://lkml.kernel.org/r/20170323154416.30257-1-aryabinin@virtuozzo.com
    Signed-off-by: Mark Rutland
    Signed-off-by: Andrey Ryabinin
    Cc: Andrey Konovalov
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mark Rutland
     

25 Feb, 2017

1 commit

  • Make a KASAN test which uses a SLAB_ACCOUNT slab cache. If the test is
    run within a non-default memcg, it uncovers the bug fixed by
    "kasan: drain quarantine of memcg slab objects"[1].

    If run without fix [1] it shows "Slab cache still has objects", and the
    kmem_cache structure is leaked.
    Here's an unpatched kernel test:

    $ dmesg -c > /dev/null
    $ mkdir /sys/fs/cgroup/memory/test
    $ echo $$ > /sys/fs/cgroup/memory/test/tasks
    $ modprobe test_kasan 2> /dev/null
    $ dmesg | grep -B1 still
    [ 123.456789] kasan test: memcg_accounted_kmem_cache allocate memcg accounted object
    [ 124.456789] kmem_cache_destroy test_cache: Slab cache still has objects

    Kernels with fix [1] don't have the "Slab cache still has objects"
    warning or the underlying leak.

    The new test runs and passes in the default (root) memcg, though in the
    root memcg it won't uncover the problem fixed by [1].

    Link: http://lkml.kernel.org/r/1482257462-36948-2-git-send-email-gthelen@google.com
    Signed-off-by: Greg Thelen
    Reviewed-by: Vladimir Davydov
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Greg Thelen
     

01 Dec, 2016

1 commit

  • GCC revision 241896 implements use-after-scope detection, which will be
    available in GCC 7. Support it in KASAN.

    GCC emits two new callbacks to poison/unpoison large stack objects when
    they go in/out of scope. Implement the callbacks and add a test.
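
    A minimal sketch of the kind of bug the new test provokes (illustrative,
    not the exact test_kasan.c code): the pointer escapes the block in which
    its target was in scope, so the later write is a use-after-scope access
    that the new callbacks allow KASAN to catch.

    static noinline void __init kasan_use_after_scope_sketch(void)
    {
            volatile char *p;

            {
                    char local[1024];

                    p = &local[0];
            }
            p[0] = 1;       /* 'local' is out of scope: use-after-scope write */
    }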

    [dvyukov@google.com: v3]
    Link: http://lkml.kernel.org/r/1479998292-144502-1-git-send-email-dvyukov@google.com
    Link: http://lkml.kernel.org/r/1479226045-145148-1-git-send-email-dvyukov@google.com
    Signed-off-by: Dmitry Vyukov
    Acked-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: [4.0+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dmitry Vyukov
     

21 May, 2016

2 commits

  • Add some tests for the newly-added user memory access API.
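
    A hedged sketch of the kind of check added (illustrative; 'usermem'
    stands for a valid userspace address obtained elsewhere, e.g. via
    vm_mmap()): the user-copy helpers are asked to move one byte more than
    the kernel buffer holds, which the instrumented API should flag.

    static noinline void __init copy_user_oob_sketch(char __user *usermem)
    {
            size_t size = 10;
            char *kmem = kmalloc(size, GFP_KERNEL);
            unsigned long unused;

            if (!kmem)
                    return;
            unused = copy_from_user(kmem, usermem, size + 1);  /* OOB write */
            unused = copy_to_user(usermem, kmem, size + 1);    /* OOB read  */
            kfree(kmem);
    }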

    Link: http://lkml.kernel.org/r/1462538722-1574-1-git-send-email-aryabinin@virtuozzo.com
    Signed-off-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
  • Add a test that makes sure ksize() unpoisons the whole chunk.
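
    A minimal sketch of what the test checks (illustrative values): kmalloc()
    may return a chunk larger than requested, and after ksize() the whole
    chunk should be accessible without a KASAN report.

    static noinline void __init ksize_unpoison_sketch(void)
    {
            size_t size = 123, real_size;
            char *ptr = kmalloc(size, GFP_KERNEL);

            if (!ptr)
                    return;
            real_size = ksize(ptr);
            ptr[size] = 'x';            /* within the real allocation: no report */
            ptr[real_size - 1] = 'y';   /* last byte of the real chunk: no report */
            kfree(ptr);
    }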

    Signed-off-by: Alexander Potapenko
    Acked-by: Andrey Ryabinin
    Cc: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Konstantin Serebryany
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     

26 Mar, 2016

2 commits

  • Signed-off-by: Alexander Potapenko
    Acked-by: Andrey Ryabinin
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Steven Rostedt
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     
  • This patchset implements SLAB support for KASAN.

    Unlike SLUB, SLAB doesn't store allocation/deallocation stacks for heap
    objects, so we reimplement this feature in mm/kasan/stackdepot.c. The
    intention is to ultimately switch SLUB to this implementation as well,
    which will save a lot of memory (right now SLUB bloats each object by
    256 bytes to store the allocation/deallocation stacks).

    Also, neither SLUB nor SLAB delays the reuse of freed memory chunks, even
    though such a delay is necessary for better detection of use-after-free
    errors. We introduce a memory quarantine (mm/kasan/quarantine.c), which
    allows delayed reuse of deallocated memory.

    This patch (of 7):

    Rename kmalloc_large_oob_right() to kmalloc_pagealloc_oob_right(), as
    the test only checks the page allocator functionality. Also reimplement
    kmalloc_large_oob_right() so that the test allocates a large enough
    chunk of memory that still does not trigger the page allocator fallback.
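
    A hedged sketch of the distinction drawn above (the size expressions are
    illustrative): one test deliberately exceeds the largest slab cache so
    kmalloc() falls back to the page allocator, while the reworked test stays
    just under that limit so the slab path is exercised.

    static noinline void __init kmalloc_pagealloc_oob_right_sketch(void)
    {
            size_t size = KMALLOC_MAX_CACHE_SIZE + 10;      /* page allocator */
            char *ptr = kmalloc(size, GFP_KERNEL);

            if (!ptr)
                    return;
            ptr[size] = 'x';        /* out of bounds to the right */
            kfree(ptr);
    }

    static noinline void __init kmalloc_large_oob_right_sketch(void)
    {
            size_t size = KMALLOC_MAX_CACHE_SIZE - 256;     /* still slab-backed */
            char *ptr = kmalloc(size, GFP_KERNEL);

            if (!ptr)
                    return;
            ptr[size] = 'x';        /* out of bounds to the right */
            kfree(ptr);
    }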

    Signed-off-by: Alexander Potapenko
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Cc: Steven Rostedt
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     

06 Nov, 2015

1 commit


11 Sep, 2015

2 commits

  • In kmalloc_oob_krealloc_less, I think it is better to test the size2
    boundary.

    If we do not call krealloc, accessing position size1 still causes an
    out-of-bounds error while accessing position size2 does not. After
    calling krealloc, accessing position size2 does cause an out-of-bounds
    error. So using size2 is more correct.
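
    A minimal sketch of the reasoning above (illustrative sizes): after
    krealloc() shrinks the buffer from size1 to size2, the first byte that is
    out of bounds is at offset size2, so that is the byte the test should
    touch.

    static noinline void __init kmalloc_oob_krealloc_less_sketch(void)
    {
            size_t size1 = 17, size2 = 15;
            char *ptr1 = kmalloc(size1, GFP_KERNEL);
            char *ptr2 = krealloc(ptr1, size2, GFP_KERNEL);

            if (!ptr1 || !ptr2) {
                    pr_err("Allocation failed\n");
                    kfree(ptr1);    /* no-op when ptr1 is NULL */
                    kfree(ptr2);
                    return;
            }
            ptr2[size2] = 'x';      /* first byte past the shrunk allocation */
            kfree(ptr2);
    }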

    Signed-off-by: Wang Long
    Cc: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wang Long
     
  • Signed-off-by: Wang Long
    Cc: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wang Long
     

14 Feb, 2015

1 commit

  • This is a test module doing various nasty things like out-of-bounds
    accesses and use-after-free. It is useful for testing kernel debugging
    features like the kernel address sanitizer.

    It mostly concentrates on testing the slab allocator, but we might want
    to add more different stuff here in the future (like out-of-bounds
    accesses to stack/global variables and so on).
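
    A minimal sketch of the simplest kind of test in the module (illustrative,
    not necessarily verbatim): write one byte past a kmalloc'ed buffer and let
    KASAN report it.

    static noinline void __init kmalloc_oob_right_sketch(void)
    {
            size_t size = 123;
            char *ptr = kmalloc(size, GFP_KERNEL);

            if (!ptr) {
                    pr_err("Allocation failed\n");
                    return;
            }
            ptr[size] = 'x';        /* one byte out of bounds */
            kfree(ptr);
    }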

    Signed-off-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrey Konovalov
    Cc: Yuri Gribov
    Cc: Konstantin Khlebnikov
    Cc: Sasha Levin
    Cc: Christoph Lameter
    Cc: Joonsoo Kim
    Cc: Dave Hansen
    Cc: Andi Kleen
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin