03 Nov, 2020

1 commit

  • Now that we have KASAN-KUnit tests integration, it's easy to see that
    some KASAN tests are not adapted to the SW_TAGS mode and are failing.

    Adjust the allocation size for kasan_memchr() and kasan_memcmp() by
    rounding it up to OOB_TAG_OFF so the bad access ends up in a separate
    memory granule.

    Add a new kmalloc_uaf_16() test that relies on a use-after-free, and a
    new kasan_bitops_tags() test that is tailored to the tag-based mode, as
    it's hard to adapt the existing kmalloc_oob_16() and
    kasan_bitops_generic() (renamed from kasan_bitops()) without losing
    precision.

    Disable kasan_global_oob() and kasan_alloca_oob_left/right() as SW_TAGS
    mode doesn't instrument globals nor dynamic allocas.
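
    A rough sketch of the kasan_memchr() adjustment described above
    (illustrative only; the exact expression and KUnit plumbing in
    lib/test_kasan.c may differ, and OOB_TAG_OFF is the test macro mentioned
    in the 08 Aug, 2020 entries further down):

    static void kasan_memchr_sketch(struct kunit *test)
    {
            char *ptr;
            size_t size = 24;

            /* Under SW_TAGS, round the size up to the tag granule so that
             * the byte at ptr[size] lands in the next, differently tagged
             * granule; otherwise the tag-based mode can't see the access. */
            if (OOB_TAG_OFF)
                    size = round_up(size, OOB_TAG_OFF);

            ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO);
            KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

            /* Reads one byte past the allocation; must produce a report. */
            KUNIT_EXPECT_KASAN_FAIL(test, memchr(ptr, '1', size + 1));
            kfree(ptr);
    }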

    Signed-off-by: Andrey Konovalov
    Signed-off-by: Andrew Morton
    Tested-by: David Gow
    Link: https://lkml.kernel.org/r/76eee17b6531ca8b3ca92b240cb2fd23204aaff7.1603129942.git.andreyknvl@google.com
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     

14 Oct, 2020

2 commits

  • Transfer all previous tests for KASAN to KUnit so they can be run more
    easily. Using kunit_tool, developers can run these tests with their other
    KUnit tests and see "pass" or "fail" with the appropriate KASAN report
    instead of needing to parse each KASAN report to test KASAN
    functionalities. All KASAN reports are still printed to dmesg.

    Stack tests do not work properly when KASAN_STACK is disabled, so those
    tests check "if IS_ENABLED(CONFIG_KASAN_STACK)" and only run if stack
    instrumentation is enabled. If KASAN_STACK is not enabled, KUnit will
    print a statement to let the user know the test was not run with
    KASAN_STACK enabled.

    copy_user_test and kasan_rcu_uaf cannot be run in KUnit so there is a
    separate test file for those tests, which can be run as before as a
    module.
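
    A sketch of what a test looks like after the port (simplified and
    illustrative; KUNIT_EXPECT_KASAN_FAIL is described in the next entry
    below, and the real tests in lib/test_kasan.c differ in details):

    #include <kunit/test.h>
    #include <linux/slab.h>

    static void kmalloc_oob_right(struct kunit *test)
    {
            char *ptr;
            size_t size = 123;

            ptr = kmalloc(size, GFP_KERNEL);
            KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);

            /* The expression must trigger the expected KASAN report,
             * otherwise this KUnit test case fails. */
            KUNIT_EXPECT_KASAN_FAIL(test, ptr[size] = 'x');
            kfree(ptr);
    }

    static void kasan_stack_oob(struct kunit *test)
    {
            char stack_array[10];
            volatile int i = 10;    /* keep the OOB index opaque to the compiler */

            if (!IS_ENABLED(CONFIG_KASAN_STACK)) {
                    kunit_info(test, "CONFIG_KASAN_STACK is not enabled");
                    return;
            }

            KUNIT_EXPECT_KASAN_FAIL(test, stack_array[i] = 'x');
    }

    static struct kunit_case kasan_kunit_test_cases[] = {
            KUNIT_CASE(kmalloc_oob_right),
            KUNIT_CASE(kasan_stack_oob),
            {}
    };

    static struct kunit_suite kasan_kunit_test_suite = {
            .name = "kasan",
            .test_cases = kasan_kunit_test_cases,
    };

    kunit_test_suite(kasan_kunit_test_suite);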

    [trishalfonso@google.com: v14]
    Link: https://lkml.kernel.org/r/20200915035828.570483-4-davidgow@google.com

    Signed-off-by: Patricia Alfonso
    Signed-off-by: David Gow
    Signed-off-by: Andrew Morton
    Tested-by: Andrey Konovalov
    Reviewed-by: Brendan Higgins
    Reviewed-by: Andrey Konovalov
    Reviewed-by: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Cc: Ingo Molnar
    Cc: Juri Lelli
    Cc: Peter Zijlstra
    Cc: Shuah Khan
    Cc: Vincent Guittot
    Link: https://lkml.kernel.org/r/20200910070331.3358048-4-davidgow@google.com
    Signed-off-by: Linus Torvalds

    Patricia Alfonso
     
  • Integrate KASAN into KUnit testing framework.

    - Fail tests when KASAN reports an error that is not expected
    - Use KUNIT_EXPECT_KASAN_FAIL to expect a KASAN error in KASAN
    tests
    - Expected KASAN reports pass tests and are still printed when run
    without kunit_tool (kunit_tool still bypasses the report due to the
    test passing)
    - KUnit struct in current task used to keep track of the current
    test from KASAN code

    Make use of "[PATCH v3 kunit-next 1/2] kunit: generalize kunit_resource
    API beyond allocated resources" and "[PATCH v3 kunit-next 2/2] kunit: add
    support for named resources" from Alan Maguire [1]

    - A named resource is added to a test when a KASAN report is
    expected
    - This resource contains a kasan_data struct with booleans recording
    whether a KASAN report is expected and whether a KASAN report has
    been found

    [1] (https://lore.kernel.org/linux-kselftest/1583251361-12748-1-git-send-email-alan.maguire@oracle.com/T/#t)
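
    A simplified sketch of the mechanism described in the bullets above (the
    real struct and macro are more involved; fail_data and resource are
    assumed to be locals of the enclosing test, and KASAN finds the resource
    through the KUnit pointer in the current task):

    /* Per-test booleans, exposed to KASAN via the "kasan_data" named
     * resource; the report path sets report_found instead of only
     * printing to dmesg. */
    struct kunit_kasan_expectation {
            bool report_expected;
            bool report_found;
    };

    #define KUNIT_EXPECT_KASAN_FAIL(test, expression) do {                 \
            fail_data.report_expected = true;                              \
            fail_data.report_found = false;                                \
            kunit_add_named_resource(test, NULL, NULL, &resource,          \
                                     "kasan_data", &fail_data);            \
            expression;                                                    \
            KUNIT_EXPECT_EQ(test, fail_data.report_expected,               \
                            fail_data.report_found);                       \
    } while (0)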

    Signed-off-by: Patricia Alfonso
    Signed-off-by: David Gow
    Signed-off-by: Andrew Morton
    Tested-by: Andrey Konovalov
    Reviewed-by: Andrey Konovalov
    Reviewed-by: Dmitry Vyukov
    Acked-by: Brendan Higgins
    Cc: Andrey Ryabinin
    Cc: Ingo Molnar
    Cc: Juri Lelli
    Cc: Peter Zijlstra
    Cc: Shuah Khan
    Cc: Vincent Guittot
    Link: https://lkml.kernel.org/r/20200915035828.570483-3-davidgow@google.com
    Link: https://lkml.kernel.org/r/20200910070331.3358048-3-davidgow@google.com
    Signed-off-by: Linus Torvalds

    Patricia Alfonso
     

08 Aug, 2020

4 commits

  • Use OOB_TAG_OFF as access offset to land the access into the next granule.

    Suggested-by: Walter Wu
    Signed-off-by: Andrey Konovalov
    Signed-off-by: Andrew Morton
    Cc: Alexander Potapenko
    Cc: Andrey Ryabinin
    Cc: Catalin Marinas
    Cc: Dmitry Vyukov
    Cc: Elena Petrova
    Cc: Marco Elver
    Cc: Vincenzo Frascino
    Cc: Ard Biesheuvel
    Link: http://lkml.kernel.org/r/403b259f1de49a7a3694531c851ac28326a586a8.1596199677.git.andreyknvl@google.com
    Link: http://lkml.kernel.org/r/3063ab1411e92bce36061a96e25b651212e70ba6.1596544734.git.andreyknvl@google.com
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • With tag-based KASAN, the KASAN unit tests don't detect the intended
    out-of-bounds memory accesses, so they need to be fixed.

    Under tag-based KASAN, the state of each 16-byte-aligned chunk of memory
    is encoded in one shadow byte, and that shadow byte holds the tag of the
    pointer that owns the memory. To be detected, an out-of-bounds access
    has to reach the next shadow byte, whose tag doesn't match the pointer's
    tag; only then does tag-based KASAN report the access.
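
    For example (a sketch; KASAN_SHADOW_SCALE_SIZE is 16 under tag-based
    KASAN, and the real fix uses the OOB_TAG_OFF macro mentioned in the
    previous entry):

    static noinline void kmalloc_oob_sketch(void)
    {
            size_t size = 24;
            char *ptr = kmalloc(size, GFP_KERNEL);

            if (!ptr)
                    return;

            /* Still inside the 16-byte granule holding the object's last
             * valid bytes: the shadow byte carries the pointer's tag, so
             * tag-based KASAN stays silent (generic KASAN would report it). */
            ptr[size] = 'x';

            /* One granule further: the shadow byte holds a different tag,
             * so tag-based KASAN reports the access. */
            ptr[size + KASAN_SHADOW_SCALE_SIZE] = 'y';

            kfree(ptr);
    }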

    [walter-zh.wu@mediatek.com: use KASAN_SHADOW_SCALE_SIZE instead of 13]
    Link: http://lkml.kernel.org/r/20200708132524.11688-1-walter-zh.wu@mediatek.com

    Suggested-by: Dmitry Vyukov
    Signed-off-by: Walter Wu
    Signed-off-by: Andrew Morton
    Reviewed-by: Andrey Konovalov
    Acked-by: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Matthias Brugger
    Link: http://lkml.kernel.org/r/20200706115039.16750-1-walter-zh.wu@mediatek.com
    Signed-off-by: Linus Torvalds

    Walter Wu
     
  • Test call_rcu() call stack recording and verify that it is correctly
    printed in the KASAN report.

    Signed-off-by: Walter Wu
    Signed-off-by: Andrew Morton
    Tested-by: Dmitry Vyukov
    Reviewed-by: Dmitry Vyukov
    Reviewed-by: Andrey Konovalov
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Matthias Brugger
    Cc: Joel Fernandes
    Cc: Jonathan Corbet
    Cc: Josh Triplett
    Cc: Lai Jiangshan
    Cc: Mathieu Desnoyers
    Cc: "Paul E . McKenney"
    Link: http://lkml.kernel.org/r/20200601051045.1294-1-walter-zh.wu@mediatek.com
    Signed-off-by: Linus Torvalds

    Walter Wu
     
  • As said by Linus:

    A symmetric naming is only helpful if it implies symmetries in use.
    Otherwise it's actively misleading.

    In "kzalloc()", the z is meaningful and an important part of what the
    caller wants.

    In "kzfree()", the z is actively detrimental, because maybe in the
    future we really _might_ want to use that "memfill(0xdeadbeef)" or
    something. The "zero" part of the interface isn't even _relevant_.

    The main reason that kzfree() exists is to clear sensitive information
    that should not be leaked to other future users of the same memory
    objects.

    Rename kzfree() to kfree_sensitive() to follow the example of the recently
    added kvfree_sensitive() and make the intention of the API more explicit.
    In addition, memzero_explicit() is used to clear the memory to make sure
    that it won't get optimized away by the compiler.

    The renaming is done by using the command sequence:

    git grep -w --name-only kzfree |\
    xargs sed -i 's/kzfree/kfree_sensitive/'

    followed by some editing of the kfree_sensitive() kerneldoc and adding
    a kzfree backward compatibility macro in slab.h.

    [akpm@linux-foundation.org: fs/crypto/inline_crypt.c needs linux/slab.h]
    [akpm@linux-foundation.org: fix fs/crypto/inline_crypt.c some more]
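
    Roughly, the renamed helper behaves like this (a sketch of the
    semantics, not the verbatim mm/slab_common.c implementation):

    void kfree_sensitive(const void *p)
    {
            size_t ks;
            void *mem = (void *)p;

            ks = ksize(mem);
            if (ks)
                    memzero_explicit(mem, ks);  /* not optimized away */
            kfree(mem);
    }

    /* Backward-compatibility shim kept in slab.h until callers are converted. */
    #define kzfree(x)       kfree_sensitive(x)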

    Suggested-by: Joe Perches
    Signed-off-by: Waiman Long
    Signed-off-by: Andrew Morton
    Acked-by: David Howells
    Acked-by: Michal Hocko
    Acked-by: Johannes Weiner
    Cc: Jarkko Sakkinen
    Cc: James Morris
    Cc: "Serge E. Hallyn"
    Cc: Joe Perches
    Cc: Matthew Wilcox
    Cc: David Rientjes
    Cc: Dan Carpenter
    Cc: "Jason A . Donenfeld"
    Link: http://lkml.kernel.org/r/20200616154311.12314-3-longman@redhat.com
    Signed-off-by: Linus Torvalds

    Waiman Long
     

04 Jun, 2020

1 commit

  • Patch series "Fix some incompatibilities between KASAN and FORTIFY_SOURCE", v4.

    3 KASAN self-tests fail on a kernel with both KASAN and FORTIFY_SOURCE:
    memchr, memcmp and strlen.

    When FORTIFY_SOURCE is on, a number of functions are replaced with
    fortified versions, which attempt to check the sizes of the operands.
    However, these functions often directly invoke __builtin_foo() once they
    have performed the fortify check. The compiler can detect that the
    results of these functions are not used, and knows that they have no other
    side effects, and so can eliminate them as dead code.

    Why are only memchr, memcmp and strlen affected?
    ================================================

    Of string and string-like functions, kasan_test tests:

    * strchr -> not affected, no fortified version
    * strrchr -> likewise
    * strcmp -> likewise
    * strncmp -> likewise

    * strnlen -> not affected, the fortify source implementation calls the
    underlying strnlen implementation which is instrumented, not
    a builtin

    * strlen -> affected, the fortify source implementation calls a __builtin
    version which the compiler can determine is dead.

    * memchr -> likewise
    * memcmp -> likewise

    * memset -> not affected, the compiler knows that memset writes to its
    first argument and therefore is not dead.

    Why does this not affect the functions normally?
    ================================================

    In string.h, these functions are not marked as __pure, so the compiler
    cannot know that they do not have side effects. If relevant functions are
    marked as __pure in string.h, we see the following warnings and the
    functions are elided:

    lib/test_kasan.c: In function `kasan_memchr':
    lib/test_kasan.c:606:2: warning: statement with no effect [-Wunused-value]
    memchr(ptr, '1', size + 1);
    ^~~~~~~~~~~~~~~~~~~~~~~~~~
    lib/test_kasan.c: In function `kasan_memcmp':
    lib/test_kasan.c:622:2: warning: statement with no effect [-Wunused-value]
    memcmp(ptr, arr, size+1);
    ^~~~~~~~~~~~~~~~~~~~~~~~
    lib/test_kasan.c: In function `kasan_strings':
    lib/test_kasan.c:645:2: warning: statement with no effect [-Wunused-value]
    strchr(ptr, '1');
    ^~~~~~~~~~~~~~~~
    ...

    This annotation would make sense to add and could be added at any point,
    so the behaviour of test_kasan.c should change.

    The fix
    =======

    Make all the functions that are pure write their results to a global,
    which makes them live. The strlen and memchr tests now pass.

    The memcmp test still fails to trigger, which is addressed in the next
    patch.
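
    A sketch of the fix (names illustrative): the otherwise-unused results
    are written to globals with external linkage, so the compiler can no
    longer treat the calls as dead.

    /* Sinks for test results; being non-static, stores to them cannot be
     * proven dead and the fortified calls stay live. */
    void *kasan_ptr_result;
    int kasan_int_result;

    static noinline void kasan_memchr_sketch(void)
    {
            char *ptr;
            size_t size = 24;

            ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO);
            if (!ptr)
                    return;

            kasan_ptr_result = memchr(ptr, '1', size + 1);
            kfree(ptr);
    }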

    [dja@axtens.net: drop patch 3]
    Link: http://lkml.kernel.org/r/20200424145521.8203-2-dja@axtens.net
    Fixes: 0c96350a2d2f ("lib/test_kasan.c: add tests for several string/memory API functions")
    Signed-off-by: Daniel Axtens
    Signed-off-by: Andrew Morton
    Tested-by: David Gow
    Reviewed-by: Dmitry Vyukov
    Cc: Daniel Micay
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Link: http://lkml.kernel.org/r/20200423154503.5103-1-dja@axtens.net
    Link: http://lkml.kernel.org/r/20200423154503.5103-2-dja@axtens.net
    Signed-off-by: Linus Torvalds

    Daniel Axtens
     

03 Apr, 2020

1 commit

  • Test a negative size in memmove() in order to verify that it correctly
    triggers a KASAN report.

    Casting a negative number to size_t turns it into a very large size, so
    the copy runs out of bounds and is detected by KASAN.
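
    A sketch of such a test (function name and sizes illustrative):

    static noinline void memmove_negative_size_sketch(void)
    {
            char *ptr;
            size_t size = 64;
            volatile size_t invalid_size = -2;  /* enormous once converted to size_t */

            ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO);
            if (!ptr)
                    return;

            /* The copy length is far larger than the allocation, so the
             * access runs out of bounds and KASAN reports it. */
            memmove(ptr, ptr + 4, invalid_size);
            kfree(ptr);
    }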

    [walter-zh.wu@mediatek.com: fix -Wstringop-overflow warning]
    Link: http://lkml.kernel.org/r/20200311134244.13016-1-walter-zh.wu@mediatek.com
    Signed-off-by: Walter Wu
    Signed-off-by: Andrew Morton
    Reviewed-by: Dmitry Vyukov
    Reviewed-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: kernel test robot
    Link: http://lkml.kernel.org/r/20191112065313.7060-1-walter-zh.wu@mediatek.com
    Signed-off-by: Linus Torvalds

    Walter Wu
     

01 Feb, 2020

1 commit

  • In case memory resources for _ptr2_ were allocated, release them before
    returning.

    Notice that in case _ptr1_ happens to be NULL, krealloc() behaves
    exactly like kmalloc().
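
    The fixed error path, in sketch form (an excerpt; variable names follow
    the description above):

            ptr1 = kmalloc(size1, GFP_KERNEL);
            ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
            if (!ptr1 || !ptr2) {
                    pr_err("Allocation failed\n");
                    kfree(ptr1);
                    kfree(ptr2);    /* may be a fresh allocation when ptr1 was NULL */
                    return;
            }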

    Addresses-Coverity-ID: 1490594 ("Resource leak")
    Link: http://lkml.kernel.org/r/20200123160115.GA4202@embeddedor
    Fixes: 3f15801cdc23 ("lib: add kasan test module")
    Signed-off-by: Gustavo A. R. Silva
    Reviewed-by: Dmitry Vyukov
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Gustavo A. R. Silva
     

02 Dec, 2019

1 commit

  • Test kasan vmalloc support by adding a new test to the module.

    Link: http://lkml.kernel.org/r/20191031093909.9228-3-dja@axtens.net
    Signed-off-by: Daniel Axtens
    Reviewed-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Christophe Leroy
    Cc: Dmitry Vyukov
    Cc: Mark Rutland
    Cc: Vasily Gorbik
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Daniel Axtens
     

25 Sep, 2019

1 commit

  • In several places we need to be able to operate on pointers which have
    gone via a roundtrip:

    virt -> {phys,page} -> virt

    With KASAN_SW_TAGS, we can't preserve the tag for SLUB objects, and the
    {phys,page} -> virt conversion will use KASAN_TAG_KERNEL.

    This patch adds tests to ensure that this works as expected, without
    false positives which have recently been spotted [1,2] in testing.

    [1] https://lore.kernel.org/linux-arm-kernel/20190819114420.2535-1-walter-zh.wu@mediatek.com/
    [2] https://lore.kernel.org/linux-arm-kernel/20190819132347.GB9927@lakrids.cambridge.arm.com/
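
    Sketches of the two roundtrips being tested (simplified, with error
    handling omitted; freeing through the recovered pointer must not raise a
    false-positive invalid-free report):

    static noinline void kfree_via_page_sketch(void)
    {
            char *ptr = kmalloc(8, GFP_KERNEL);
            struct page *page;
            unsigned long offset;

            if (!ptr)
                    return;

            /* virt -> page -> virt */
            page = virt_to_page(ptr);
            offset = offset_in_page(ptr);
            kfree((char *)page_address(page) + offset);
    }

    static noinline void kfree_via_phys_sketch(void)
    {
            char *ptr = kmalloc(8, GFP_KERNEL);
            phys_addr_t phys;

            if (!ptr)
                    return;

            /* virt -> phys -> virt */
            phys = virt_to_phys(ptr);
            kfree(phys_to_virt(phys));
    }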

    [akpm@linux-foundation.org: coding-style fixes]
    Link: http://lkml.kernel.org/r/20190821153927.28630-1-mark.rutland@arm.com
    Signed-off-by: Mark Rutland
    Reviewed-by: Andrey Konovalov
    Tested-by: Andrey Konovalov
    Acked-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mark Rutland
     

13 Jul, 2019

2 commits

  • Add a simple test that checks if double-kzfree is being detected
    correctly.
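
    The shape of the test, roughly (a sketch):

    static noinline void double_kzfree_sketch(void)
    {
            char *ptr = kmalloc(16, GFP_KERNEL);

            if (!ptr)
                    return;

            kzfree(ptr);
            kzfree(ptr);    /* second free of the same pointer must be reported */
    }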

    Link: http://lkml.kernel.org/r/20190626142014.141844-4-elver@google.com
    Signed-off-by: Marco Elver
    Reviewed-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Cc: Andrey Konovalov
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Mark Rutland
    Cc: Kees Cook
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Marco Elver
     
  • Patch series "Bitops instrumentation for KASAN", v5.

    This patch (of 3):

    This adds bitops tests to the test_kasan module. In a follow-up patch,
    support for bitops instrumentation will be added.
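
    A sketch of the kind of test added (illustrative; the real test
    exercises many more of the bitops variants):

    static noinline void kasan_bitops_sketch(void)
    {
            /* One long plus one extra byte: bit numbers past BITS_PER_LONG
             * land in out-of-bounds memory. */
            long *bits = kzalloc(sizeof(*bits) + 1, GFP_KERNEL);

            if (!bits)
                    return;

            set_bit(BITS_PER_LONG + BITS_PER_BYTE, bits);           /* OOB write */
            if (test_bit(BITS_PER_LONG + BITS_PER_BYTE, bits))      /* OOB read  */
                    pr_info("bit was set\n");

            kfree(bits);
    }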

    Link: http://lkml.kernel.org/r/20190613125950.197667-2-elver@google.com
    Signed-off-by: Marco Elver
    Acked-by: Mark Rutland
    Reviewed-by: Andrey Ryabinin
    Cc: Peter Zijlstra (Intel)
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Cc: Andrey Konovalov
    Cc: "H. Peter Anvin"
    Cc: Jonathan Corbet
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: Borislav Petkov
    Cc: Arnd Bergmann
    Cc: Josh Poimboeuf
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Marco Elver
     

19 Jun, 2019

1 commit

  • Based on 2 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license version 2 as
    published by the free software foundation

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license version 2 as
    published by the free software foundation #

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-only

    has been chosen to replace the boilerplate/reference in 4122 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Enrico Weigelt
    Reviewed-by: Kate Stewart
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

06 Mar, 2019

1 commit

  • The use-after-scope bug detector seems to be almost entirely useless for
    the Linux kernel. It has existed for over two years, but I've seen only
    one valid bug so far [1], and that bug was fixed before it was even
    reported. There were some other use-after-scope reports, but they were
    false positives due to various reasons, like incompatibility with the
    structleak plugin.

    This feature significantly increases stack usage, especially with GCC
    versions before 9, and causes a 32K stack overflow. It probably adds a
    performance penalty too.

    Given all that, let's remove the use-after-scope detector entirely.

    While preparing this patch I noticed that we mistakenly enable
    use-after-scope detection for the clang compiler regardless of the
    CONFIG_KASAN_EXTRA setting. This is also fixed now.

    [1] http://lkml.kernel.org/r/

    Link: http://lkml.kernel.org/r/20190111185842.13978-1-aryabinin@virtuozzo.com
    Signed-off-by: Andrey Ryabinin
    Acked-by: Will Deacon [arm64]
    Cc: Qian Cai
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     

27 Oct, 2018

1 commit

  • Arch code may have asm implementations of string/memory API functions
    instead of using the generic ones from lib/string.c. KASAN doesn't see
    memory accesses in asm code and thus can miss many bugs.

    E.g. on ARM64, KASAN doesn't see bugs in memchr(), memcmp(), str[r]chr(),
    str[n]cmp(), str[n]len(). Add tests for these functions to be sure that
    we notice the problem on other architectures.
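
    One of the added tests, in sketch form (details may differ from the
    actual lib/test_kasan.c code; the call shape matches the warning quoted
    in the 04 Jun, 2020 entry above):

    static noinline void kasan_memcmp(void)
    {
            char *ptr;
            size_t size = 24;
            int arr[9];

            ptr = kmalloc(size, GFP_KERNEL | __GFP_ZERO);
            if (!ptr)
                    return;

            memset(arr, 0, sizeof(arr));
            /* Compares one byte past the allocation; the report must appear
             * even where memcmp() is an arch-specific asm implementation. */
            memcmp(ptr, arr, size + 1);
            kfree(ptr);
    }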

    Link: http://lkml.kernel.org/r/20180920135631.23833-3-aryabinin@virtuozzo.com
    Signed-off-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Ard Biesheuvel
    Cc: Dmitry Vyukov
    Cc: Kyeongdon Kim
    Cc: Mark Rutland
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     

12 Apr, 2018

1 commit

  • When an invalid-free is triggered by one of the KASAN tests, the object
    doesn't actually get freed. This later leads to a BUG failure in
    kmem_cache_destroy that checks that there are no allocated objects in
    the cache that is being destroyed.

    Fix this by calling kmem_cache_free with the proper object address after
    the call that triggers invalid-free.
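
    In sketch form (cache setup and error handling omitted):

            p = kmem_cache_alloc(cache, GFP_KERNEL);
            if (!p)
                    return;

            /* Triggers the invalid-free report; the object is not actually
             * released, so it must still be freed properly afterwards. */
            kmem_cache_free(cache, p + 1);

            /* Free the object for real so that kmem_cache_destroy() doesn't
             * trip over objects remaining in the cache. */
            kmem_cache_free(cache, p);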

    Link: http://lkml.kernel.org/r/286eaefc0a6c3fa9b83b87e7d6dc0fbb5b5c9926.1519924383.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Acked-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Geert Uytterhoeven
    Cc: Nick Terrell
    Cc: Chris Mason
    Cc: Yury Norov
    Cc: Al Viro
    Cc: "Luis R . Rodriguez"
    Cc: Palmer Dabbelt
    Cc: "Paul E . McKenney"
    Cc: Jeff Layton
    Cc: "Jason A . Donenfeld"
    Cc: Kostya Serebryany
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     

07 Feb, 2018

4 commits

  • The variable real_size is initialized with a value that is never read;
    it is re-assigned later on, so the initialization is redundant and can
    be removed.

    Cleans up clang warning:

    lib/test_kasan.c:422:21: warning: Value stored to 'real_size' during its initialization is never read

    Link: http://lkml.kernel.org/r/20180206144950.32457-1-colin.king@canonical.com
    Signed-off-by: Colin Ian King
    Acked-by: Andrey Ryabinin
    Reviewed-by: Andrew Morton
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Colin Ian King
     
  • Detect frees of pointers into middle of heap objects.

    Link: http://lkml.kernel.org/r/cb569193190356beb018a03bb8d6fbae67e7adbc.1514378558.git.dvyukov@google.com
    Signed-off-by: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dmitry Vyukov
     
  • Patch series "kasan: detect invalid frees".

    KASAN detects double-frees, but does not detect invalid frees (when a
    pointer into the middle of a heap object is passed to free). We recently
    had a very unpleasant case in crypto code which freed an inner object
    inside of a heap allocation. This went unnoticed during the free, but
    totally corrupted the heap and later led to a bunch of random crashes
    all over kernel code.

    Detect invalid frees.

    This patch (of 5):

    Detect frees of pointers into middle of large heap objects.

    I dropped const from kasan_kfree_large() because it starts propagating
    through a bunch of functions in kasan_report.c, slab/slub nearest_obj(),
    all of their local variables, fixup_red_left(), etc.
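
    A sketch of the new class of bug being detected (illustrative; tests of
    this shape are added later in the series):

    static noinline void large_invalid_free_sketch(void)
    {
            /* Above KMALLOC_MAX_CACHE_SIZE the allocation is backed by the
             * page allocator rather than a slab cache. */
            char *ptr = kmalloc(KMALLOC_MAX_CACHE_SIZE + 10, GFP_KERNEL);

            if (!ptr)
                    return;

            /* Freeing a pointer into the middle of the object must now be
             * reported instead of silently corrupting the heap. */
            kfree(ptr + 16);
    }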

    Link: http://lkml.kernel.org/r/1b45b4fe1d20fc0de1329aab674c1dd973fee723.1514378558.git.dvyukov@google.com
    Signed-off-by: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dmitry Vyukov
     
  • Link: http://lkml.kernel.org/r/20171204191735.132544-5-paullawrence@google.com
    Signed-off-by: Greg Hackmann
    Signed-off-by: Paul Lawrence
    Acked-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Masahiro Yamada
    Cc: Matthias Kaehlcke
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Lawrence
     

01 Apr, 2017

1 commit

  • Disable kasan after the first report. There are several reasons for
    this:

    - Single bug quite often has multiple invalid memory accesses causing
    storm in the dmesg.

    - Write OOB access might corrupt metadata so the next report will print
    bogus alloc/free stacktraces.

    - Reports after the first could easily be not bugs in themselves but
    just side effects of the first one.

    Given that multiple reports usually only do harm, it makes sense to
    disable kasan after the first one. If the user wants to see all the
    reports, the boot-time parameter kasan_multi_shot must be used.
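
    A sketch of the resulting gate on the reporting path (simplified; the
    flag names here are illustrative):

    static unsigned long kasan_flags;

    #define KASAN_BIT_REPORTED      0
    #define KASAN_BIT_MULTI_SHOT    1

    static bool report_enabled_sketch(void)
    {
            /* Booting with kasan_multi_shot keeps every report. */
            if (test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags))
                    return true;

            /* Otherwise only the very first report gets through. */
            return !test_and_set_bit(KASAN_BIT_REPORTED, &kasan_flags);
    }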

    [aryabinin@virtuozzo.com: wrote changelog and doc, added missing include]
    Link: http://lkml.kernel.org/r/20170323154416.30257-1-aryabinin@virtuozzo.com
    Signed-off-by: Mark Rutland
    Signed-off-by: Andrey Ryabinin
    Cc: Andrey Konovalov
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mark Rutland
     

25 Feb, 2017

1 commit

  • Make a kasan test which uses a SLAB_ACCOUNT slab cache. If the test is
    run within a non-default memcg, then it uncovers the bug fixed by
    "kasan: drain quarantine of memcg slab objects"[1].

    If run without fix [1] it shows "Slab cache still has objects", and the
    kmem_cache structure is leaked.
    Here's an unpatched kernel test:

    $ dmesg -c > /dev/null
    $ mkdir /sys/fs/cgroup/memory/test
    $ echo $$ > /sys/fs/cgroup/memory/test/tasks
    $ modprobe test_kasan 2> /dev/null
    $ dmesg | grep -B1 still
    [ 123.456789] kasan test: memcg_accounted_kmem_cache allocate memcg accounted object
    [ 124.456789] kmem_cache_destroy test_cache: Slab cache still has objects

    Kernels with fix [1] don't have the "Slab cache still has objects"
    warning or the underlying leak.

    The new test runs and passes in the default (root) memcg, though in the
    root memcg it won't uncover the problem fixed by [1].

    Link: http://lkml.kernel.org/r/1482257462-36948-2-git-send-email-gthelen@google.com
    Signed-off-by: Greg Thelen
    Reviewed-by: Vladimir Davydov
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Greg Thelen
     

01 Dec, 2016

1 commit

  • GCC revision 241896 implements use-after-scope detection, which will be
    available in GCC 7. Support it in KASAN.

    GCC emits 2 new callbacks to poison/unpoison large stack objects when
    they go in/out of scope. Implement the callbacks and add a test.
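
    The added test is roughly of this shape (a sketch):

    static noinline void use_after_scope_sketch(void)
    {
            volatile char *volatile p = NULL;

            {
                    char local[1024];

                    p = &local[0];
            }       /* the compiler emits a poison callback here */

            p[0] = 1;       /* use after scope: must be reported */
    }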

    [dvyukov@google.com: v3]
    Link: http://lkml.kernel.org/r/1479998292-144502-1-git-send-email-dvyukov@google.com
    Link: http://lkml.kernel.org/r/1479226045-145148-1-git-send-email-dvyukov@google.com
    Signed-off-by: Dmitry Vyukov
    Acked-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: [4.0+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dmitry Vyukov
     

21 May, 2016

2 commits

  • Add some tests for the newly-added user memory access API.

    Link: http://lkml.kernel.org/r/1462538722-1574-1-git-send-email-aryabinin@virtuozzo.com
    Signed-off-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
  • Add a test that makes sure ksize() unpoisons the whole chunk.
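
    In sketch form:

    static noinline void ksize_unpoisons_memory_sketch(void)
    {
            char *ptr;
            size_t size = 123, real_size;

            ptr = kmalloc(size, GFP_KERNEL);
            if (!ptr)
                    return;

            real_size = ksize(ptr);     /* unpoisons the whole underlying object */

            ptr[size] = 'x';            /* now valid: no report expected */
            ptr[real_size] = 'y';       /* still out of bounds: report expected */

            kfree(ptr);
    }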

    Signed-off-by: Alexander Potapenko
    Acked-by: Andrey Ryabinin
    Cc: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Konstantin Serebryany
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     

26 Mar, 2016

2 commits

  • Signed-off-by: Alexander Potapenko
    Acked-by: Andrey Ryabinin
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Steven Rostedt
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     
  • This patchset implements SLAB support for KASAN

    Unlike SLUB, SLAB doesn't store allocation/deallocation stacks for heap
    objects, therefore we reimplement this feature in mm/kasan/stackdepot.c.
    The intention is to ultimately switch SLUB to use this implementation as
    well, which will save a lot of memory (right now SLUB bloats each object
    by 256 bytes to store the allocation/deallocation stacks).

    Also neither SLUB nor SLAB delay the reuse of freed memory chunks, which
    is necessary for better detection of use-after-free errors. We
    introduce memory quarantine (mm/kasan/quarantine.c), which allows
    delayed reuse of deallocated memory.

    This patch (of 7):

    Rename kmalloc_large_oob_right() to kmalloc_pagealloc_oob_right(), as
    the test only checks the page allocator functionality. Also reimplement
    kmalloc_large_oob_right() so that the test allocates a large enough
    chunk of memory that still does not trigger the page allocator fallback.
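
    After the rename, the reimplemented test looks roughly like this (a
    sketch; the exact size expression is illustrative):

    static noinline void kmalloc_large_oob_right(void)
    {
            char *ptr;
            /* Large, but still below the point where kmalloc() falls back to
             * the page allocator, so the slab path itself is exercised. */
            size_t size = KMALLOC_MAX_CACHE_SIZE - 256;

            ptr = kmalloc(size, GFP_KERNEL);
            if (!ptr)
                    return;

            ptr[size] = 0;  /* out-of-bounds write just past the object */
            kfree(ptr);
    }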

    Signed-off-by: Alexander Potapenko
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Cc: Steven Rostedt
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     

11 Sep, 2015

2 commits

  • In kmalloc_oob_krealloc_less, I think it is better to test the size2
    boundary.

    If we do not call krealloc, an access at position size1 is already out
    of bounds while an access at position size2 is not. After calling
    krealloc, an access at position size2 becomes out of bounds. So using
    size2 is more correct.
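
    In sketch form (an excerpt, with size2 < size1; sizes illustrative):

            ptr1 = kmalloc(size1, GFP_KERNEL);
            ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
            if (!ptr1 || !ptr2)
                    return;

            /* ptr2[size1] would be out of bounds even without the krealloc;
             * ptr2[size2] is out of bounds only because the object was
             * shrunk, which is what this test is meant to verify. */
            ptr2[size2] = 'x';

            kfree(ptr2);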

    Signed-off-by: Wang Long
    Cc: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wang Long
     
  • Signed-off-by: Wang Long
    Cc: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wang Long
     

14 Feb, 2015

1 commit

  • This is a test module doing various nasty things like out-of-bounds
    accesses and use-after-free. It is useful for testing kernel debugging
    features like the kernel address sanitizer.

    It mostly concentrates on testing the slab allocator, but we might want
    to add more different stuff here in the future (like out-of-bounds
    accesses to stack/global variables and so on).

    Signed-off-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrey Konovalov
    Cc: Yuri Gribov
    Cc: Konstantin Khlebnikov
    Cc: Sasha Levin
    Cc: Christoph Lameter
    Cc: Joonsoo Kim
    Cc: Dave Hansen
    Cc: Andi Kleen
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin