12 Apr, 2018

1 commit

  • When an invalid-free is triggered by one of the KASAN tests, the object
    doesn't actually get freed. This later leads to a BUG failure in
    kmem_cache_destroy, which checks that there are no allocated objects in
    the cache being destroyed.

    Fix this by calling kmem_cache_free with the proper object address after
    the call that triggers invalid-free.

    Link: http://lkml.kernel.org/r/286eaefc0a6c3fa9b83b87e7d6dc0fbb5b5c9926.1519924383.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Acked-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Geert Uytterhoeven
    Cc: Nick Terrell
    Cc: Chris Mason
    Cc: Yury Norov
    Cc: Al Viro
    Cc: "Luis R . Rodriguez"
    Cc: Palmer Dabbelt
    Cc: "Paul E . McKenney"
    Cc: Jeff Layton
    Cc: "Jason A . Donenfeld"
    Cc: Kostya Serebryany
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     

07 Feb, 2018

4 commits

  • Variable real_size is initialized with a value that is never read; it is
    re-assigned a new value later on, so the initialization is redundant and
    can be removed.

    Cleans up clang warning:

    lib/test_kasan.c:422:21: warning: Value stored to 'real_size' during its initialization is never read
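
    A sketch of the change (assuming a declaration of the shape the warning
    points at; the exact context in lib/test_kasan.c may differ):

        -       size_t size = 123, real_size = size;
        +       size_t size = 123, real_size;
                ...
                real_size = ksize(ptr); /* the only value of real_size read */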

    Link: http://lkml.kernel.org/r/20180206144950.32457-1-colin.king@canonical.com
    Signed-off-by: Colin Ian King
    Acked-by: Andrey Ryabinin
    Reviewed-by: Andrew Morton
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Colin Ian King
     
  • Detect frees of pointers into the middle of heap objects.
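
    A sketch of the check this adds on the slab free path (modeled on
    mm/kasan; nearest_obj() returns the start of the slab object containing
    the given address):

        bool kasan_slab_free(struct kmem_cache *cache, void *object,
                             unsigned long ip)
        {
                /* The freed pointer must be the start of an object. */
                if (unlikely(nearest_obj(cache, virt_to_head_page(object),
                                         object) != object)) {
                        kasan_report_invalid_free(object, ip);
                        return true;
                }

                /* ... otherwise continue with the normal free path ... */
                return false;
        }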

    Link: http://lkml.kernel.org/r/cb569193190356beb018a03bb8d6fbae67e7adbc.1514378558.git.dvyukov@google.com
    Signed-off-by: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dmitry Vyukov
     
  • Patch series "kasan: detect invalid frees".

    KASAN detects double-frees, but does not detect invalid-frees (when a
    pointer into the middle of a heap object is passed to free). We recently
    had a very unpleasant case in crypto code which freed an inner object
    inside of a heap allocation. This went unnoticed during the free, but
    totally corrupted the heap and later led to a bunch of random crashes
    all over the kernel code.

    Detect invalid frees.

    This patch (of 5):

    Detect frees of pointers into the middle of large heap objects.

    I dropped const from kasan_kfree_large() because it starts propagating
    through a bunch of functions in kasan_report.c, slab/slub nearest_obj(),
    all of their local variables, fixup_red_left(), etc.
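
    For large, page_alloc-backed objects the check is a single comparison
    against the start of the backing page; a sketch modeled on
    kasan_kfree_large() in mm/kasan:

        void kasan_kfree_large(void *ptr, unsigned long ip)
        {
                /* A valid free must pass the start of the allocation. */
                if (ptr != page_address(virt_to_head_page(ptr)))
                        kasan_report_invalid_free(ptr, ip);
                /* The object itself is poisoned/freed by page_alloc. */
        }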

    Link: http://lkml.kernel.org/r/1b45b4fe1d20fc0de1329aab674c1dd973fee723.1514378558.git.dvyukov@google.com
    Signed-off-by: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dmitry Vyukov
     
  • Link: http://lkml.kernel.org/r/20171204191735.132544-5-paullawrence@google.com
    Signed-off-by: Greg Hackmann
    Signed-off-by: Paul Lawrence
    Acked-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Masahiro Yamada
    Cc: Matthias Kaehlcke
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Lawrence
     

01 Apr, 2017

1 commit

  • Disable kasan after the first report. There are several reasons for
    this:

    - A single bug quite often causes multiple invalid memory accesses,
    which produce a storm in the dmesg.

    - A write OOB access might corrupt metadata, so the next report will
    print bogus alloc/free stacktraces.

    - Reports after the first could easily be not bugs in themselves but
    just side effects of the first one.

    Given that multiple reports usually only do harm, it makes sense to
    disable kasan after the first one. If the user wants to see all the
    reports, the kasan_multi_shot boot-time parameter must be used.
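
    A minimal sketch of the gating, modeled on mm/kasan/report.c (flag and
    helper names as in that file, slightly simplified):

        static unsigned long kasan_flags;

        #define KASAN_BIT_REPORTED      0
        #define KASAN_BIT_MULTI_SHOT    1

        static inline bool kasan_report_enabled(void)
        {
                /* kasan_multi_shot: never suppress reports. */
                if (test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags))
                        return true;
                /* One-shot: only the first caller to set the bit reports. */
                return !test_and_set_bit(KASAN_BIT_REPORTED, &kasan_flags);
        }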

    [aryabinin@virtuozzo.com: wrote changelog and doc, added missing include]
    Link: http://lkml.kernel.org/r/20170323154416.30257-1-aryabinin@virtuozzo.com
    Signed-off-by: Mark Rutland
    Signed-off-by: Andrey Ryabinin
    Cc: Andrey Konovalov
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mark Rutland
     

25 Feb, 2017

1 commit

  • Make a kasan test which uses a SLAB_ACCOUNT slab cache. If the test is
    run within a non-default memcg, then it uncovers the bug fixed by
    "kasan: drain quarantine of memcg slab objects" [1].

    If run without fix [1], it shows "Slab cache still has objects", and
    the kmem_cache structure is leaked.
    Here's an unpatched kernel test:

    $ dmesg -c > /dev/null
    $ mkdir /sys/fs/cgroup/memory/test
    $ echo $$ > /sys/fs/cgroup/memory/test/tasks
    $ modprobe test_kasan 2> /dev/null
    $ dmesg | grep -B1 still
    [ 123.456789] kasan test: memcg_accounted_kmem_cache allocate memcg accounted object
    [ 124.456789] kmem_cache_destroy test_cache: Slab cache still has objects

    Kernels with fix [1] don't have the "Slab cache still has objects"
    warning or the underlying leak.

    The new test runs and passes in the default (root) memcg, though in the
    root memcg it won't uncover the problem fixed by [1].
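
    A sketch of the test itself, modeled on memcg_accounted_kmem_cache() in
    lib/test_kasan.c (the loop with a delay gives the lazily-created
    per-memcg kmem cache a chance to come into use):

        static noinline void __init memcg_accounted_kmem_cache(void)
        {
                int i;
                char *p;
                struct kmem_cache *cache;

                cache = kmem_cache_create("test_cache", 200, 0,
                                          SLAB_ACCOUNT, NULL);
                if (!cache)
                        return;

                pr_info("allocate memcg accounted object\n");
                for (i = 0; i < 5; i++) {
                        p = kmem_cache_alloc(cache, GFP_KERNEL);
                        if (!p)
                                break;
                        kmem_cache_free(cache, p);
                        msleep(100);    /* allow per-memcg cache creation */
                }

                kmem_cache_destroy(cache);
        }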

    Link: http://lkml.kernel.org/r/1482257462-36948-2-git-send-email-gthelen@google.com
    Signed-off-by: Greg Thelen
    Reviewed-by: Vladimir Davydov
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Greg Thelen
     

01 Dec, 2016

1 commit

  • Gcc revision 241896 implements use-after-scope detection, which will be
    available in gcc 7. Support it in KASAN.

    Gcc emits two new callbacks to poison/unpoison large stack objects when
    they go in/out of scope. Implement the callbacks and add a test.
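
    A sketch of such a test, modeled on the use_after_scope test added here
    (the local variable type and the access are illustrative):

        static noinline void __init use_after_scope_test(void)
        {
                volatile char *volatile p = NULL;

                pr_info("use-after-scope on int\n");
                {
                        /*
                         * gcc 7 emits an unpoison callback when this scope
                         * is entered and a poison callback when it is left.
                         */
                        int local = 0;

                        p = (char *)&local;
                }
                p[0] = 1;       /* use after scope: KASAN should report it */
        }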

    [dvyukov@google.com: v3]
    Link: http://lkml.kernel.org/r/1479998292-144502-1-git-send-email-dvyukov@google.com
    Link: http://lkml.kernel.org/r/1479226045-145148-1-git-send-email-dvyukov@google.com
    Signed-off-by: Dmitry Vyukov
    Acked-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: stable [4.0+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dmitry Vyukov
     

21 May, 2016

2 commits

  • Add some tests for the newly-added user memory access API.
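
    A sketch of one of these tests, modeled on copy_user_test(): copy one
    byte more than the kernel buffer holds so KASAN flags the kernel-side
    out-of-bounds access:

        static noinline void __init copy_user_test(void)
        {
                char *kmem;
                char __user *usermem;
                size_t size = 10;
                int unused;

                kmem = kmalloc(size, GFP_KERNEL);
                if (!kmem)
                        return;

                usermem = (char __user *)vm_mmap(NULL, 0, PAGE_SIZE,
                                PROT_READ | PROT_WRITE | PROT_EXEC,
                                MAP_ANONYMOUS | MAP_PRIVATE, 0);
                if (IS_ERR(usermem)) {
                        kfree(kmem);
                        return;
                }

                pr_info("out-of-bounds in copy_from_user()\n");
                unused = copy_from_user(kmem, usermem, size + 1);

                vm_munmap((unsigned long)usermem, PAGE_SIZE);
                kfree(kmem);
        }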

    Link: http://lkml.kernel.org/r/1462538722-1574-1-git-send-email-aryabinin@virtuozzo.com
    Signed-off-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
  • Add a test that makes sure ksize() unpoisons the whole chunk.
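
    A sketch of the test, modeled on ksize_unpoisons_memory(): after
    ksize(), every byte up to the returned size must be accessible, and the
    first byte past it must still be reported:

        static noinline void __init ksize_unpoisons_memory(void)
        {
                char *ptr;
                size_t size = 123, real_size;

                ptr = kmalloc(size, GFP_KERNEL);
                if (!ptr)
                        return;

                real_size = ksize(ptr);
                /* ksize() unpoisons the whole underlying chunk, ... */
                ptr[size] = 'x';
                /* ... so only an access past real_size is reported. */
                ptr[real_size] = 'y';
                kfree(ptr);
        }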

    Signed-off-by: Alexander Potapenko
    Acked-by: Andrey Ryabinin
    Cc: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Konstantin Serebryany
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     

26 Mar, 2016

2 commits

  • Signed-off-by: Alexander Potapenko
    Acked-by: Andrey Ryabinin
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Steven Rostedt
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     
  • This patchset implements SLAB support for KASAN.

    Unlike SLUB, SLAB doesn't store allocation/deallocation stacks for heap
    objects, so we reimplement this feature in mm/kasan/stackdepot.c. The
    intention is to ultimately switch SLUB to use this implementation as
    well, which will save a lot of memory (right now SLUB bloats each
    object by 256 bytes to store the allocation/deallocation stacks).

    Also, neither SLUB nor SLAB delays the reuse of freed memory chunks;
    such a delay is necessary for better detection of use-after-free
    errors. We introduce a memory quarantine (mm/kasan/quarantine.c), which
    allows delayed reuse of deallocated memory.

    This patch (of 7):

    Rename kmalloc_large_oob_right() to kmalloc_pagealloc_oob_right(), as
    the test only checks the page allocator functionality. Also reimplement
    kmalloc_large_oob_right() so that the test allocates a large enough
    chunk of memory that still does not trigger the page allocator fallback.
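
    A sketch of the reimplemented test (assuming the KMALLOC_MAX_CACHE_SIZE
    bound used in lib/test_kasan.c): the size stays just under the largest
    slab cache, so the allocation is served by the slab allocator rather
    than the page allocator fallback:

        static noinline void __init kmalloc_large_oob_right(void)
        {
                char *ptr;
                size_t size = KMALLOC_MAX_CACHE_SIZE - 256;

                ptr = kmalloc(size, GFP_KERNEL);
                if (!ptr)
                        return;

                ptr[size] = 0;  /* one byte past the end: slab OOB */
                kfree(ptr);
        }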

    Signed-off-by: Alexander Potapenko
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Cc: Steven Rostedt
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     

11 Sep, 2015

2 commits

  • In kmalloc_oob_krealloc_less, I think it is better to test the size2
    boundary.

    If we do not call krealloc, an access at position size1 would be
    out-of-bounds anyway, while an access at position size2 would not.
    After calling krealloc, an access at position size2 does cause an
    out-of-bounds access. So using size2 is more correct.
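
    A sketch of the resulting test, modeled on kmalloc_oob_krealloc_less():

        static noinline void __init kmalloc_oob_krealloc_less(void)
        {
                char *ptr1, *ptr2;
                size_t size1 = 17;
                size_t size2 = 15;

                ptr1 = kmalloc(size1, GFP_KERNEL);
                ptr2 = krealloc(ptr1, size2, GFP_KERNEL);
                if (!ptr1 || !ptr2) {
                        kfree(ptr1);
                        return;
                }

                /* In bounds before the shrinking krealloc, OOB after. */
                ptr2[size2] = 'x';
                kfree(ptr2);
        }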

    Signed-off-by: Wang Long
    Cc: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wang Long
     
  • Signed-off-by: Wang Long
    Cc: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wang Long
     

14 Feb, 2015

1 commit

  • This is a test module doing various nasty things like out-of-bounds
    accesses and use-after-free. It is useful for testing kernel debugging
    features like the kernel address sanitizer.

    It mostly concentrates on testing the slab allocator, but we might want
    to add more different stuff here in the future (like out-of-bounds
    accesses to stack/global variables and so on).
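
    A sketch of the simplest kind of test in the module, modeled on
    kmalloc_oob_right(): allocate, touch one byte past the end, and let
    KASAN report it:

        static noinline void __init kmalloc_oob_right(void)
        {
                char *ptr;
                size_t size = 123;

                pr_info("out-of-bounds to right\n");
                ptr = kmalloc(size, GFP_KERNEL);
                if (!ptr) {
                        pr_err("Allocation failed\n");
                        return;
                }

                ptr[size] = 'x';        /* one past the end: KASAN reports OOB */
                kfree(ptr);
        }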

    Signed-off-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrey Konovalov
    Cc: Yuri Gribov
    Cc: Konstantin Khlebnikov
    Cc: Sasha Levin
    Cc: Christoph Lameter
    Cc: Joonsoo Kim
    Cc: Dave Hansen
    Cc: Andi Kleen
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin