27 Jul, 2018

1 commit

  • Building with KASAN and SLUB but without sysfs now results in a
    build-time error:

    WARNING: unmet direct dependencies detected for SLUB_DEBUG
    Depends on [n]: SLUB [=y] && SYSFS [=n]
    Selected by [y]:
    - KASAN [=y] && HAVE_ARCH_KASAN [=y] && (SLUB [=y] || SLAB [=n] && !DEBUG_SLAB [=n]) && SLUB [=y]
    mm/slub.c:4565:12: error: 'list_locations' defined but not used [-Werror=unused-function]
    static int list_locations(struct kmem_cache *s, char *buf,
    ^~~~~~~~~~~~~~
    mm/slub.c:4406:13: error: 'validate_slab_cache' defined but not used [-Werror=unused-function]
    static long validate_slab_cache(struct kmem_cache *s)

    This disallows that broken configuration in Kconfig.
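
    A minimal sketch of the resulting constraint in lib/Kconfig.kasan
    (assuming the usual Kconfig form, not the verbatim diff; the point
    is that the SLUB case now also requires SYSFS):

        config KASAN
                bool "KASan: runtime memory debugger"
                depends on (SLUB && SYSFS) || (SLAB && !DEBUG_SLAB)
                select SLUB_DEBUG if SLUB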

    Link: http://lkml.kernel.org/r/20180709154019.1693026-1-arnd@arndb.de
    Fixes: dd275caf4a0d ("kasan: depend on CONFIG_SLUB_DEBUG")
    Signed-off-by: Arnd Bergmann
    Cc: "Jason A. Donenfeld"
    Cc: Arnd Bergmann
    Cc: Shakeel Butt
    Cc: Andrey Ryabinin
    Cc: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arnd Bergmann
     

29 Jun, 2018

1 commit

  • KASAN depends on having access to some of the accounting that SLUB_DEBUG
    does; without it, there are immediate crashes [1]. So, the natural
    thing to do is to make KASAN select SLUB_DEBUG.

    [1] http://lkml.kernel.org/r/CAHmME9rtoPwxUSnktxzKso14iuVCWT7BE_-_8PAC=pGw1iJnQg@mail.gmail.com
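
    In Kconfig terms the change amounts to something like the following
    in lib/Kconfig.kasan (a sketch, not the verbatim diff):

        config KASAN
                bool "KASan: runtime memory debugger"
                select SLUB_DEBUG if SLUB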

    Link: http://lkml.kernel.org/r/20180622154623.25388-1-Jason@zx2c4.com
    Fixes: f9e13c0a5a33 ("slab, slub: skip unnecessary kasan_cache_shutdown()")
    Signed-off-by: Jason A. Donenfeld
    Acked-by: Michal Hocko
    Reviewed-by: Shakeel Butt
    Acked-by: Christoph Lameter
    Cc: Shakeel Butt
    Cc: David Rientjes
    Cc: Pekka Enberg
    Cc: Joonsoo Kim
    Cc: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jason A. Donenfeld
     

07 Feb, 2018

1 commit

  • We get a lot of very large stack frames using gcc-7.0.1 with the default
    -fsanitize-address-use-after-scope --param asan-stack=1 options, which can
    easily cause an overflow of the kernel stack, e.g.

    drivers/gpu/drm/i915/gvt/handlers.c:2434:1: warning: the frame size of 46176 bytes is larger than 3072 bytes
    drivers/net/wireless/ralink/rt2x00/rt2800lib.c:5650:1: warning: the frame size of 23632 bytes is larger than 3072 bytes
    lib/atomic64_test.c:250:1: warning: the frame size of 11200 bytes is larger than 3072 bytes
    drivers/gpu/drm/i915/gvt/handlers.c:2621:1: warning: the frame size of 9208 bytes is larger than 3072 bytes
    drivers/media/dvb-frontends/stv090x.c:3431:1: warning: the frame size of 6816 bytes is larger than 3072 bytes
    fs/fscache/stats.c:287:1: warning: the frame size of 6536 bytes is larger than 3072 bytes

    To reduce this risk, -fsanitize-address-use-after-scope is now split out
    into a separate CONFIG_KASAN_EXTRA Kconfig option, leading to stack
    frames that are smaller than 2 kilobytes most of the time on x86_64. An
    earlier version of this patch also prevented combining KASAN_EXTRA with
    KASAN_INLINE, but that is no longer necessary with gcc-7.0.1.
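
    A sketch of the shape of the split (the option name comes from the
    text above; the help text and exact placement are assumptions):
    lib/Kconfig.kasan gains

        config KASAN_EXTRA
                bool "KASan: extra checks"
                depends on KASAN && DEBUG_KERNEL && !COMPILE_TEST
                help
                  Enables the use-after-scope check, at the cost of much
                  larger kernel stack frames and longer compile times.

    and scripts/Makefile.kasan passes the flag only when it is set:

        ifdef CONFIG_KASAN_EXTRA
        CFLAGS_KASAN += $(call cc-option, -fsanitize-address-use-after-scope)
        endif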

    All patches to get the frame size below 2048 bytes with CONFIG_KASAN=y
    and CONFIG_KASAN_EXTRA=n have now been merged by maintainers, so we
    can bring back that default. KASAN_EXTRA=y still causes lots of
    warnings but now defaults to !COMPILE_TEST to disable it in
    allmodconfig, and it remains disabled in all other defconfigs since it
    is a new option. I arbitrarily raise the warning limit for KASAN_EXTRA
    to 3072 to reduce the noise, but an allmodconfig kernel still has around
    50 warnings on gcc-7.

    I experimented a bit more with smaller stack frames and have another
    follow-up series that reduces the warning limit for 64-bit architectures
    to 1280 bytes (without CONFIG_KASAN).

    With earlier versions of this patch series, I also had patches to address
    the warnings we get with KASAN and/or KASAN_EXTRA, using a
    "noinline_if_stackbloat" annotation.

    That annotation now got replaced with a gcc-8 bugfix (see
    https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715) and a workaround for
    older compilers, which means that KASAN_EXTRA is now just as bad as
    before and will lead to an instant stack overflow in a few extreme
    cases.

    This reverts parts of commit 3f181b4d8652 ("lib/Kconfig.debug: disable
    -Wframe-larger-than warnings with KASAN=y"). Two patches in linux-next
    should be merged first to avoid introducing warnings in an allmodconfig
    build:
    3cd890dbe2a4 ("media: dvb-frontends: fix i2c access helpers for KASAN")
    16c3ada89cff ("media: r820t: fix r820t_write_reg for KASAN")

    Do we really need to backport this?

    I think we do: without this patch, enabling KASAN will lead to
    unavoidable kernel stack overflow in certain device drivers when built
    with gcc-7 or higher on linux-4.10+ or any version that contains a
    backport of commit c5caf21ab0cf8. Most people are probably still on
    older compilers, but it will get worse over time as they upgrade their
    distros.

    The warnings we get on kernels older than this should all be for code
    that uses dangerously large stack frames, though most of them do not
    cause an actual stack overflow by themselves. The asan-stack option was
    added in linux-4.0, and commit 3f181b4d8652 ("lib/Kconfig.debug:
    disable -Wframe-larger-than warnings with KASAN=y") effectively turned
    off the warning for allmodconfig kernels, so I would like to see this
    fix backported to any kernels later than 4.0.

    I have done dozens of fixes for individual functions with stack frames
    larger than 2048 bytes with asan-stack, and I plan to make sure that
    all those fixes make it into the stable kernels as well (most are
    already there).

    Part of the complication here is that asan-stack (from 4.0) was
    originally assumed to always require much larger stacks, but that
    turned out to be a combination of multiple gcc bugs that we have now
    worked around and fixed, but sanitize-address-use-after-scope (from
    v4.10) has a much higher inherent stack usage and also suffers from at
    least three other problems that we have analyzed but not yet fixed
    upstream, each of which makes the stack usage more severe than it
    should be.

    Link: http://lkml.kernel.org/r/20171221134744.2295529-1-arnd@arndb.de
    Signed-off-by: Arnd Bergmann
    Acked-by: Andrey Ryabinin
    Cc: Mauro Carvalho Chehab
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Andrey Konovalov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arnd Bergmann
     

29 Jul, 2016

1 commit

  • For KASAN builds:
    - switch SLUB allocator to using stackdepot instead of storing the
      allocation/deallocation stacks in the objects;
    - change the freelist hook so that parts of the freelist can be put
      into the quarantine.

    [aryabinin@virtuozzo.com: fixes]
    Link: http://lkml.kernel.org/r/1468601423-28676-1-git-send-email-aryabinin@virtuozzo.com
    Link: http://lkml.kernel.org/r/1468347165-41906-3-git-send-email-glider@google.com
    Signed-off-by: Alexander Potapenko
    Cc: Andrey Konovalov
    Cc: Christoph Lameter
    Cc: Dmitry Vyukov
    Cc: Steven Rostedt (Red Hat)
    Cc: Joonsoo Kim
    Cc: Kostya Serebryany
    Cc: Andrey Ryabinin
    Cc: Kuthonuzo Luruo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     

26 Mar, 2016

2 commits

  • Implement the stack depot and provide CONFIG_STACKDEPOT. The stack
    depot will allow KASAN to store allocation/deallocation stack traces
    for memory chunks. The stack traces are stored in a hash table and
    referenced by handles which reside in the kasan_alloc_meta and
    kasan_free_meta structures in the allocated memory chunks.

    IRQ stack traces are cut below the IRQ entry point to avoid unnecessary
    duplication.
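
    A sketch of how a client such as KASAN would save and later fetch a
    trace with this API; depot_save_stack() and depot_fetch_stack() are
    the entry points introduced here, while the demo_* wrappers and the
    depth constant are illustrative only:

        #include <linux/stackdepot.h>
        #include <linux/stacktrace.h>

        #define DEMO_STACK_DEPTH 64

        static depot_stack_handle_t demo_save_stack(gfp_t flags)
        {
                unsigned long entries[DEMO_STACK_DEPTH];
                struct stack_trace trace = {
                        .entries        = entries,
                        .max_entries    = DEMO_STACK_DEPTH,
                };

                save_stack_trace(&trace);
                /* Identical traces hash to the same handle, so storage
                 * for repeated stacks is shared in the depot. */
                return depot_save_stack(&trace, flags);
        }

        static void demo_print_stack(depot_stack_handle_t handle)
        {
                struct stack_trace trace;

                depot_fetch_stack(handle, &trace);
                print_stack_trace(&trace, 0);
        }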

    Right now stackdepot support is only enabled in the SLAB allocator.
    Once KASAN features in SLAB are on par with those in SLUB we can
    switch SLUB to stackdepot as well, thus removing the dependency on
    SLUB stack bookkeeping, which wastes a lot of memory.

    This patch is based on the "mm: kasan: stack depots" patch originally
    prepared by Dmitry Chernenkov.

    Joonsoo has said that he plans to reuse the stackdepot code for the
    mm/page_owner.c debugging facility.

    [akpm@linux-foundation.org: s/depot_stack_handle/depot_stack_handle_t]
    [aryabinin@virtuozzo.com: comment style fixes]
    Signed-off-by: Alexander Potapenko
    Signed-off-by: Andrey Ryabinin
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Steven Rostedt
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     
  • Add KASAN hooks to SLAB allocator.

    This patch is based on the "mm: kasan: unified support for SLUB and SLAB
    allocators" patch originally prepared by Dmitry Chernenkov.

    Signed-off-by: Alexander Potapenko
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Cc: Steven Rostedt
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     

06 Nov, 2015

1 commit

  • It's recommended to have slub's user tracking enabled with CONFIG_KASAN,
    because:

    a) User tracking disables slab merging, which improves
       detecting out-of-bounds accesses.
    b) User tracking metadata acts as a redzone, which also improves
       detecting out-of-bounds accesses.
    c) User tracking provides additional information about the object.
       This information helps in understanding bugs.

    Currently it is not enabled by default. Besides recompiling the
    kernel with KASAN and reinstalling it, the user also has to change
    the boot cmdline, which is not very handy.

    Enable slub user tracking by default with KASAN=y, since there is no good
    reason to not do this.
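
    A sketch of the kind of change this implies in mm/slub.c, where the
    compiled-in default for slub_debug gains the user-tracking flag
    whenever KASAN is enabled (the exact upstream diff is assumed, not
    quoted):

        #if defined(CONFIG_SLUB_DEBUG) && defined(CONFIG_KASAN)
        static int slub_debug = SLAB_STORE_USER;
        #else
        static int slub_debug;
        #endif

    Previously the same effect required booting with something like
    slub_debug=U on the kernel command line.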

    [akpm@linux-foundation.org: little fixes, per David]
    Signed-off-by: Andrey Ryabinin
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     

06 Jul, 2015

1 commit

  • KASAN_SHADOW_OFFSET is a purely arch-specific setting,
    so it should live in the arch's Kconfig file.
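
    On x86-64, for instance, the offset ends up as a hex symbol in the
    arch Kconfig; a sketch of its likely shape:

        config KASAN_SHADOW_OFFSET
                hex
                depends on KASAN
                default 0xdffffc0000000000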

    Signed-off-by: Andrey Ryabinin
    Cc: Alexander Popov
    Cc: Alexander Potapenko
    Cc: Andrey Konovalov
    Cc: Borislav Petkov
    Cc: Dmitry Vyukov
    Cc: Linus Torvalds
    Cc: Paul Bolle
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/1435828178-10975-7-git-send-email-a.ryabinin@samsung.com
    Signed-off-by: Ingo Molnar

    Andrey Ryabinin
     

06 May, 2015

1 commit

  • The documentation shows a need for gcc > 4.9.2, but it's really >=.
    The Kconfig entries don't show required versions, so add them.
    Correct a latter/later typo too. Also mention that gcc 5 is required
    to catch out-of-bounds accesses to global and stack variables.

    Signed-off-by: Joe Perches
    Signed-off-by: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     

14 Feb, 2015

5 commits

  • This feature lets us detect out-of-bounds accesses to global
    variables. It works both for globals in the kernel image and for
    globals in modules. Currently it won't work for symbols in
    user-specified sections (e.g. __init, __read_mostly, ...).

    The idea is simple. The compiler grows each global variable by the
    redzone size and adds constructors invoking the
    __asan_register_globals() function. Information about each global
    variable (address, size, size with redzone, ...) is passed to
    __asan_register_globals() so we can poison the variable's redzone.
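
    A sketch of the compiler/runtime contract just described; the field
    set follows the mm/kasan code of that era, and the exact layout is
    an assumption:

        /* One descriptor per instrumented global, emitted by gcc. */
        struct kasan_global {
                const void *beg;          /* address of the variable */
                size_t size;              /* size of the variable */
                size_t size_with_redzone; /* size including redzone */
                const void *name;
                const void *module_name;
                unsigned long has_dynamic_init;
        };

        /* Called from compiler-generated constructors with an array of
         * descriptors; the runtime poisons each variable's redzone. */
        void __asan_register_globals(struct kasan_global *globals,
                                     size_t size);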

    This patch also forces module_alloc() to return 8*PAGE_SIZE aligned
    addresses, making shadow memory handling
    (kasan_module_alloc()/kasan_module_free()) simpler. Such alignment
    guarantees that each shadow page backing the modules' address space
    corresponds to only one module_alloc() allocation.

    Signed-off-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrey Konovalov
    Cc: Yuri Gribov
    Cc: Konstantin Khlebnikov
    Cc: Sasha Levin
    Cc: Christoph Lameter
    Cc: Joonsoo Kim
    Cc: Dave Hansen
    Cc: Andi Kleen
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
  • This is a test module doing various nasty things like out-of-bounds
    accesses and use-after-free. It is useful for testing kernel
    debugging features like the kernel address sanitizer.

    It mostly concentrates on testing the slab allocator, but we might
    want to add more different stuff here in the future (like
    stack/global variables out-of-bounds accesses and so on).
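
    A sketch in the spirit of those tests, an out-of-bounds write one
    byte past a kmalloc() allocation; names and sizes are illustrative
    rather than a verbatim quote of the module:

        static noinline void __init kmalloc_oob_right(void)
        {
                char *ptr;
                size_t size = 123;

                pr_info("out-of-bounds to right\n");
                ptr = kmalloc(size, GFP_KERNEL);
                if (!ptr) {
                        pr_err("Allocation failed\n");
                        return;
                }

                ptr[size] = 'x';  /* KASAN should report this write */
                kfree(ptr);
        }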

    Signed-off-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrey Konovalov
    Cc: Yuri Gribov
    Cc: Konstantin Khlebnikov
    Cc: Sasha Levin
    Cc: Christoph Lameter
    Cc: Joonsoo Kim
    Cc: Dave Hansen
    Cc: Andi Kleen
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
  • With this patch kasan will be able to catch bugs in memory allocated
    by slub. Initially, all objects in a newly allocated slab page are
    marked as redzone. Later, when a slub object is allocated, the
    number of bytes requested by the caller is marked as accessible, and
    the rest of the object (including slub's metadata) is marked as
    redzone (inaccessible).
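
    A sketch of the alloc-path marking just described: unpoison exactly
    the bytes the caller asked for and leave the tail of the object as
    redzone. Helper names follow the mm/kasan code of that era; exact
    signatures are assumptions:

        void kasan_kmalloc(struct kmem_cache *cache, const void *object,
                           size_t size)
        {
                unsigned long redzone_start, redzone_end;

                /* A redzone can only begin on an 8-byte granule. */
                redzone_start = round_up((unsigned long)object + size,
                                         KASAN_SHADOW_SCALE_SIZE);
                redzone_end = (unsigned long)object + cache->object_size;

                kasan_unpoison_shadow(object, size);
                kasan_poison_shadow((void *)redzone_start,
                                    redzone_end - redzone_start,
                                    KASAN_KMALLOC_REDZONE);
        }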

    We also mark an object as accessible if ksize() was called for it.
    There are some places in the kernel where ksize() is called to
    inquire the size of the actually allocated area. Such callers may
    validly access the whole allocation, so it should be marked as
    accessible.

    Code in the slub.c and slab_common.c files may validly access
    objects' metadata, so instrumentation for these files is disabled.

    Signed-off-by: Andrey Ryabinin
    Signed-off-by: Dmitry Chernenkov
    Cc: Dmitry Vyukov
    Cc: Konstantin Serebryany
    Signed-off-by: Andrey Konovalov
    Cc: Yuri Gribov
    Cc: Konstantin Khlebnikov
    Cc: Sasha Levin
    Cc: Christoph Lameter
    Cc: Joonsoo Kim
    Cc: Dave Hansen
    Cc: Andi Kleen
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
  • This patch adds the arch-specific code for the kernel address
    sanitizer.

    16TB of virtual address space is used for shadow memory. It's
    located in the range [ffffec0000000000 - fffffc0000000000], between
    vmemmap and the %esp fixup stacks.
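
    As a sanity check of that range, assuming the 1/8 scaling described
    later in this log and the x86-64 shadow offset 0xdffffc0000000000:

        (0xffff800000000000 >> 3) + 0xdffffc0000000000 == 0xffffec0000000000
        (0xffffffffffffffff >> 3) + 0xdffffc0000000000 == 0xfffffbffffffffff

    i.e. the shadow of the whole 128TB kernel half of the address space
    fits exactly in the quoted 16TB window.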

    At an early stage we map the whole shadow region with the zero page.
    Later, after pages are mapped into the direct mapping address range,
    we unmap the zero pages from the corresponding shadow (see
    kasan_map_shadow()) and allocate and map real shadow memory, reusing
    the vmemmap_populate() function.

    Also, __pa is replaced with __pa_nodebug before the shadow is
    initialized: with CONFIG_DEBUG_VIRTUAL=y, __pa makes an external
    function call (__phys_addr), and __phys_addr is instrumented, so
    __asan_load could be called before the shadow area is initialized.

    Signed-off-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrey Konovalov
    Cc: Yuri Gribov
    Cc: Konstantin Khlebnikov
    Cc: Sasha Levin
    Cc: Christoph Lameter
    Cc: Joonsoo Kim
    Cc: Dave Hansen
    Cc: Andi Kleen
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Jim Davis
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
  • Kernel Address sanitizer (KASan) is a dynamic memory error detector.
    It provides a fast and comprehensive solution for finding
    use-after-free and out-of-bounds bugs.

    KASAN uses compile-time instrumentation to check every memory
    access; therefore GCC > v4.9.2 is required. v4.9.2 almost works, but
    has issues with putting symbol aliases into the wrong section, which
    breaks kasan instrumentation of globals.

    This patch only adds the infrastructure for the kernel address
    sanitizer; it's not available for use yet. The idea and some code
    were borrowed from [1].

    Basic idea:

    The main idea of KASAN is to use shadow memory to record whether
    each byte of memory is safe to access or not, and to use the
    compiler's instrumentation to check the shadow memory on each memory
    access.

    Address sanitizer uses 1/8 of the memory addressable in kernel for shadow
    memory and uses direct mapping with a scale and offset to translate a
    memory address to its corresponding shadow address.

    Here is the function that translates an address to its corresponding
    shadow address:

    unsigned long kasan_mem_to_shadow(unsigned long addr)
    {
            return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
    }

    where KASAN_SHADOW_SCALE_SHIFT = 3.

    So for every 8 bytes there is one corresponding byte of shadow memory.
    The following encoding is used for each shadow byte: 0 means that
    all 8 bytes of the corresponding memory region are accessible; a
    value k (1 <= k <= 7) means that only the first k bytes are
    accessible, and the other (8 - k) bytes are not; any negative value
    indicates that the entire 8-byte word is inaccessible, with
    different negative values used to distinguish between kinds of
    inaccessible memory (redzones, freed memory; see mm/kasan/kasan.h).
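
    A worked example of the encoding with illustrative numbers: for a
    13-byte object followed by a redzone granule,

        object bytes:  [ 0 .. 7 ][ 8 .. 12 | 13 .. 15 ][  redzone  ]
        shadow bytes:      0           5               neg. marker

    the first shadow byte is 0 (all 8 bytes accessible), the second is 5
    (only the first 5 of the next 8 bytes are accessible), and the
    redzone granule holds one of the negative marker values.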
    Signed-off-by: Andrey Ryabinin
    Acked-by: Michal Marek
    Signed-off-by: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Cc: Yuri Gribov
    Cc: Konstantin Khlebnikov
    Cc: Sasha Levin
    Cc: Christoph Lameter
    Cc: Joonsoo Kim
    Cc: Dave Hansen
    Cc: Andi Kleen
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Stephen Rothwell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin