14 Oct, 2020

2 commits

  • Transfer all previous tests for KASAN to KUnit so they can be run more
    easily. Using kunit_tool, developers can run these tests with their other
    KUnit tests and see "pass" or "fail" with the appropriate KASAN report
    instead of needing to parse each KASAN report to test KASAN
    functionalities. All KASAN reports are still printed to dmesg.

    Stack tests do not work properly when CONFIG_KASAN_STACK is disabled, so
    those tests use a check for "if IS_ENABLED(CONFIG_KASAN_STACK)" so that
    they only run when stack instrumentation is enabled. If KASAN_STACK is not
    enabled, KUnit will print a statement to let the user know the test was
    not run with KASAN_STACK enabled.

    copy_user_test and kasan_rcu_uaf cannot be run in KUnit so there is a
    separate test file for those tests, which can be run as before as a
    module.
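
    A minimal sketch of the guard pattern described above (test and suite
    names here are illustrative, not the ones from the patch):

    #include <kunit/test.h>
    #include <linux/kconfig.h>

    static void stack_oob_sketch(struct kunit *test)
    {
            char stack_array[10];
            volatile int index = 12;

            if (!IS_ENABLED(CONFIG_KASAN_STACK)) {
                    kunit_info(test, "CONFIG_KASAN_STACK is not enabled, skipping\n");
                    return;
            }
            /* Out-of-bounds write; with stack instrumentation KASAN reports it. */
            stack_array[index] = 0;
    }

    static struct kunit_case kasan_sketch_cases[] = {
            KUNIT_CASE(stack_oob_sketch),
            {}
    };

    static struct kunit_suite kasan_sketch_suite = {
            .name = "kasan_stack_sketch",
            .test_cases = kasan_sketch_cases,
    };
    kunit_test_suite(kasan_sketch_suite);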

    [trishalfonso@google.com: v14]
    Link: https://lkml.kernel.org/r/20200915035828.570483-4-davidgow@google.com

    Signed-off-by: Patricia Alfonso
    Signed-off-by: David Gow
    Signed-off-by: Andrew Morton
    Tested-by: Andrey Konovalov
    Reviewed-by: Brendan Higgins
    Reviewed-by: Andrey Konovalov
    Reviewed-by: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Cc: Ingo Molnar
    Cc: Juri Lelli
    Cc: Peter Zijlstra
    Cc: Shuah Khan
    Cc: Vincent Guittot
    Link: https://lkml.kernel.org/r/20200910070331.3358048-4-davidgow@google.com
    Signed-off-by: Linus Torvalds

    Patricia Alfonso
     
  • Since the kernel now requires at least Clang 10.0.1, remove any mention of
    old Clang versions and simplify the documentation.

    Signed-off-by: Marco Elver
    Signed-off-by: Nick Desaulniers
    Signed-off-by: Andrew Morton
    Reviewed-by: Andrey Konovalov
    Reviewed-by: Kees Cook
    Reviewed-by: Nathan Chancellor
    Cc: Fangrui Song
    Cc: Miguel Ojeda
    Cc: Sedat Dilek
    Cc: Alexei Starovoitov
    Cc: Daniel Borkmann
    Cc: Masahiro Yamada
    Cc: Vincenzo Frascino
    Cc: Will Deacon
    Link: https://lkml.kernel.org/r/20200902225911.209899-7-ndesaulniers@google.com
    Signed-off-by: Linus Torvalds

    Marco Elver
     

08 Aug, 2020

2 commits

  • Updates the recently changed compiler requirements for KASAN. In
    particular, we require GCC >= 8.3.0, and add a note that Clang 11 supports
    OOB detection of globals.

    Fixes: 7b861a53e46b ("kasan: Bump required compiler version")
    Fixes: acf7b0bf7dcf ("kasan: Fix required compiler version")
    Signed-off-by: Marco Elver
    Signed-off-by: Andrew Morton
    Reviewed-by: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Cc: Nick Desaulniers
    Cc: Walter Wu
    Cc: Arnd Bergmann
    Cc: Daniel Axtens
    Link: http://lkml.kernel.org/r/20200629104157.3242503-2-elver@google.com
    Signed-off-by: Linus Torvalds

    Marco Elver
     
  • Turn 'KASAN' into a menuconfig, to avoid cluttering its parent menu with
    the suboptions if enabled. Use 'if KASAN ... endif' instead of having to
    add 'depends on KASAN' to each entry.

    Signed-off-by: Marco Elver
    Signed-off-by: Andrew Morton
    Reviewed-by: Andrey Konovalov
    Cc: Andrey Ryabinin
    Cc: Arnd Bergmann
    Cc: Daniel Axtens
    Cc: Dmitry Vyukov
    Cc: Nick Desaulniers
    Cc: Walter Wu
    Link: http://lkml.kernel.org/r/20200629104157.3242503-1-elver@google.com
    Signed-off-by: Linus Torvalds

    Marco Elver
     

25 Jun, 2020

1 commit

  • The first working GCC version to satisfy
    CC_HAS_WORKING_NOSANITIZE_ADDRESS is GCC 8.3.0.

    Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=89124
    Reported-by: Stephen Rothwell
    Signed-off-by: Marco Elver
    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20200623112448.GA208112@elver.google.com

    Marco Elver
     

15 Jun, 2020

1 commit

  • Adds config variable CC_HAS_WORKING_NOSANITIZE_ADDRESS, which will be
    true if we have a compiler that does not fail builds due to
    no_sanitize_address functions. This does not yet mean they work as
    intended, but for automated build-tests, this is the minimum
    requirement.

    For example, we require that __always_inline functions used from
    no_sanitize_address functions do not generate instrumentation. On GCC
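
    As a rough illustration of that requirement (a sketch with made-up helper
    names, not code from the patch), consider an __always_inline helper
    inlined into a function that opts out of instrumentation:

    #include <linux/compiler.h>
    #include <linux/types.h>

    /* Must be inlined into its caller without emitting any ASAN checks. */
    static __always_inline unsigned long read_word_sketch(const unsigned long *p)
    {
            return *p;
    }

    /*
     * Opted out of KASAN. A compiler that refuses to build this pattern
     * fails CC_HAS_WORKING_NOSANITIZE_ADDRESS; whether the inlined body is
     * really left uninstrumented is a separate question, as noted above.
     */
    static __no_sanitize_address unsigned long
    peek_uninstrumented_sketch(const unsigned long *p)
    {
            return read_word_sketch(p);
    }
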
    Signed-off-by: Marco Elver
    Signed-off-by: Peter Zijlstra (Intel)
    Reviewed-by: Nick Desaulniers
    Acked-by: Andrey Konovalov
    Link: https://lkml.kernel.org/r/20200602175859.GC2604@hirez.programming.kicks-ass.net

    Marco Elver
     

02 Dec, 2019

1 commit

  • Patch series "kasan: support backing vmalloc space with real shadow
    memory", v11.

    Currently, vmalloc space is backed by the early shadow page. This means
    that kasan is incompatible with VMAP_STACK.

    This series provides a mechanism to back vmalloc space with real,
    dynamically allocated memory. I have only wired up x86, because that's
    the only currently supported arch I can work with easily, but it's very
    easy to wire up other architectures, and it appears that there is some
    work-in-progress code to do this on arm64 and s390.

    This has been discussed before in the context of VMAP_STACK:
    - https://bugzilla.kernel.org/show_bug.cgi?id=202009
    - https://lkml.org/lkml/2018/7/22/198
    - https://lkml.org/lkml/2019/7/19/822

    In terms of implementation details:

    Most mappings in vmalloc space are small, requiring less than a full
    page of shadow space. Allocating a full shadow page per mapping would
    therefore be wasteful. Furthermore, to ensure that different mappings
    use different shadow pages, mappings would have to be aligned to
    KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.
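
    For a concrete sense of the numbers, here is a stand-alone sketch assuming
    4 KiB pages and the usual 1/8 shadow scale (illustrative, not part of the
    patch):

    #include <stdio.h>

    #define PAGE_SIZE                4096UL
    #define KASAN_SHADOW_SCALE_SHIFT 3

    int main(void)
    {
            unsigned long sizes[] = { PAGE_SIZE, 4 * PAGE_SIZE, 8 * PAGE_SIZE };
            int i;

            for (i = 0; i < 3; i++) {
                    unsigned long shadow = sizes[i] >> KASAN_SHADOW_SCALE_SHIFT;

                    /* e.g. a single-page mapping needs only 512 bytes of shadow */
                    printf("%6lu-byte mapping -> %4lu bytes of shadow (%.3f page)\n",
                           sizes[i], shadow, (double)shadow / PAGE_SIZE);
            }
            return 0;
    }

    Only a mapping of KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE (32 KiB here) or
    larger fills a whole shadow page, hence the sharing scheme below.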

    Instead, share backing space across multiple mappings. Allocate a
    backing page when a mapping in vmalloc space uses a particular page of
    the shadow region. This page can be shared by other vmalloc mappings
    later on.

    We hook in to the vmap infrastructure to lazily clean up unused shadow
    memory.

    Testing with test_vmalloc.sh on an x86 VM with 2 vCPUs shows that:

    - Turning on KASAN, inline instrumentation, without vmalloc, introduces
    a 4.1x-4.2x slowdown in vmalloc operations.

    - Turning this on introduces the following slowdowns over KASAN:
    * ~1.76x slower single-threaded (test_vmalloc.sh performance)
    * ~2.18x slower when both cpus are performing operations
    simultaneously (test_vmalloc.sh sequential_test_order=1)

    This is unfortunate, but given that this is a debug feature only, it is
    not the end of the world. The benchmarks are also a stress-test for the
    vmalloc subsystem: they're not indicative of an overall 2x slowdown!

    This patch (of 4):

    Hook into vmalloc and vmap, and dynamically allocate real shadow memory
    to back the mappings.

    Most mappings in vmalloc space are small, requiring less than a full
    page of shadow space. Allocating a full shadow page per mapping would
    therefore be wasteful. Furthermore, to ensure that different mappings
    use different shadow pages, mappings would have to be aligned to
    KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.

    Instead, share backing space across multiple mappings. Allocate a
    backing page when a mapping in vmalloc space uses a particular page of
    the shadow region. This page can be shared by other vmalloc mappings
    later on.

    We hook in to the vmap infrastructure to lazily clean up unused shadow
    memory.

    To avoid the difficulties around swapping mappings around, this code
    expects that the part of the shadow region that covers the vmalloc space
    will not be covered by the early shadow page, but will be left unmapped.
    This will require changes in arch-specific code.

    This allows KASAN with VMAP_STACK, and may be helpful for architectures
    that do not have a separate module space (e.g. powerpc64, which I am
    currently working on). It also allows relaxing the module alignment
    back to PAGE_SIZE.

    Testing with test_vmalloc.sh on an x86 VM with 2 vCPUs shows that:

    - Turning on KASAN, inline instrumentation, without vmalloc, introduces
    a 4.1x-4.2x slowdown in vmalloc operations.

    - Turning this on introduces the following slowdowns over KASAN:
    * ~1.76x slower single-threaded (test_vmalloc.sh performance)
    * ~2.18x slower when both cpus are performing operations
    simultaneously (test_vmalloc.sh sequential_test_order=1)

    This is unfortunate, but given that this is a debug feature only, it is
    not the end of the world.

    The full benchmark results are:

    Performance

                                    No KASAN KASAN original x baseline  KASAN vmalloc x baseline  x KASAN
    fix_size_alloc_test               662004       11404956      17.23       19144610      28.92     1.68
    full_fit_alloc_test               710950       12029752      16.92       13184651      18.55     1.10
    long_busy_list_alloc_test        9431875       43990172       4.66       82970178       8.80     1.89
    random_size_alloc_test           5033626       23061762       4.58       47158834       9.37     2.04
    fix_align_alloc_test             1252514       15276910      12.20       31266116      24.96     2.05
    random_size_align_alloc_te       1648501       14578321       8.84       25560052      15.51     1.75
    align_shift_alloc_test               147            830       5.65           5692      38.72     6.86
    pcpu_alloc_test                    80732         125520       1.55         140864       1.74     1.12
    Total Cycles                119240774314   763211341128       6.40  1390338696894      11.66     1.82

    Sequential, 2 cpus

                                    No KASAN KASAN original x baseline  KASAN vmalloc x baseline  x KASAN
    fix_size_alloc_test              1423150       14276550      10.03       27733022      19.49     1.94
    full_fit_alloc_test              1754219       14722640       8.39       15030786       8.57     1.02
    long_busy_list_alloc_test       11451858       52154973       4.55      107016027       9.34     2.05
    random_size_alloc_test           5989020       26735276       4.46       68885923      11.50     2.58
    fix_align_alloc_test             2050976       20166900       9.83       50491675      24.62     2.50
    random_size_align_alloc_te       2858229       17971700       6.29       38730225      13.55     2.16
    align_shift_alloc_test               405           6428      15.87          26253      64.82     4.08
    pcpu_alloc_test                   127183         151464       1.19         216263       1.70     1.43
    Total Cycles                 54181269392   308723699764       5.70   650772566394      12.01     2.11
    fix_size_alloc_test              1420404       14289308      10.06       27790035      19.56     1.94
    full_fit_alloc_test              1736145       14806234       8.53       15274301       8.80     1.03
    long_busy_list_alloc_test       11404638       52270785       4.58      107550254       9.43     2.06
    random_size_alloc_test           6017006       26650625       4.43       68696127      11.42     2.58
    fix_align_alloc_test             2045504       20280985       9.91       50414862      24.65     2.49
    random_size_align_alloc_te       2845338       17931018       6.30       38510276      13.53     2.15
    align_shift_alloc_test               472           3760       7.97           9656      20.46     2.57
    pcpu_alloc_test                   118643         132732       1.12         146504       1.23     1.10
    Total Cycles                 54040011688   309102805492       5.72   651325675652      12.05     2.11

    [dja@axtens.net: fixups]
    Link: http://lkml.kernel.org/r/20191120052719.7201-1-dja@axtens.net
    Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
    Link: http://lkml.kernel.org/r/20191031093909.9228-2-dja@axtens.net
    Signed-off-by: Mark Rutland [shadow rework]
    Signed-off-by: Daniel Axtens
    Co-developed-by: Mark Rutland
    Acked-by: Vasily Gorbik
    Reviewed-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Christophe Leroy
    Cc: Qian Cai
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Daniel Axtens
     

25 Sep, 2019

1 commit

  • Add memory corruption identification to the bug report for software
    tag-based mode. The report then shows whether the error is a
    "use-after-free" or "out-of-bounds" access instead of a generic
    "invalid-access" error, which makes it easier for programmers to see the
    nature of the memory corruption problem.

    We extend the slab to store the five previous free pointer tags and free
    backtraces, so we can check whether the tagged address matches one of the
    recorded frees and make a good guess whether the object access looks more
    like a "use-after-free" or an "out-of-bounds" error. This way every slab
    memory corruption can be identified as one or the other.

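    A rough sketch of the kind of record this adds per object (field and
    constant names below are assumptions for illustration, not the exact ones
    from the patch):

    #include <linux/types.h>
    #include <linux/stackdepot.h>

    #define KASAN_NR_FREE_RECORDS 5    /* the "five previous" frees above */

    struct kasan_free_record_sketch {
            u8 tag;                         /* pointer tag at the time of free */
            depot_stack_handle_t stack;     /* free backtrace */
    };

    struct kasan_free_meta_sketch {
            struct kasan_free_record_sketch records[KASAN_NR_FREE_RECORDS];
            unsigned int next;              /* ring-buffer write position */
    };

    Roughly, on a bad access the reported pointer's tag is compared against
    these records: a match suggests "use-after-free", otherwise the report is
    classified as "out-of-bounds".
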
    [aryabinin@virtuozzo.com: simplify & cleanup code]
    Link: https://lkml.kernel.org/r/3318f9d7-a760-3cc8-b700-f06108ae745f@virtuozzo.com
    Link: http://lkml.kernel.org/r/20190821180332.11450-1-aryabinin@virtuozzo.com
    Signed-off-by: Walter Wu
    Signed-off-by: Andrey Ryabinin
    Acked-by: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Walter Wu
     

03 Aug, 2019

1 commit

  • asan-stack mode still uses dangerously large kernel stacks of tens of
    kilobytes in some drivers, and it does not seem that anyone is working
    on the clang bug.

    Turn it off for all clang versions to prevent users from accidentally
    enabling it once they update to clang-9, and to help automated build
    testing with clang-9.

    Link: https://bugs.llvm.org/show_bug.cgi?id=38809
    Link: http://lkml.kernel.org/r/20190719200347.2596375-1-arnd@arndb.de
    Fixes: 6baec880d7a5 ("kasan: turn off asan-stack for clang-8 and earlier")
    Signed-off-by: Arnd Bergmann
    Acked-by: Nick Desaulniers
    Reviewed-by: Mark Brown
    Reviewed-by: Andrey Ryabinin
    Cc: Qian Cai
    Cc: Andrey Konovalov
    Cc: Vasily Gorbik
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arnd Bergmann
     

21 May, 2019

1 commit


06 Mar, 2019

1 commit

  • The use-after-scope bug detector seems to be almost entirely useless for
    the Linux kernel. It has existed for over two years, but I've seen only
    one valid bug so far [1]. And that bug was fixed before it was reported.
    There were some other use-after-scope reports, but they were false
    positives due to different reasons, like incompatibility with the
    structleak plugin.

    This feature significantly increases stack usage, especially with GCC
    versions before 9, and causes a 32K stack overflow. It probably adds a
    performance penalty too.

    Given all that, let's remove the use-after-scope detector entirely.

    While preparing this patch I noticed that we mistakenly enable
    use-after-scope detection for the clang compiler regardless of the
    CONFIG_KASAN_EXTRA setting. This is also fixed now.

    [1] http://lkml.kernel.org/r/

    Link: http://lkml.kernel.org/r/20190111185842.13978-1-aryabinin@virtuozzo.com
    Signed-off-by: Andrey Ryabinin
    Acked-by: Will Deacon [arm64]
    Cc: Qian Cai
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     

02 Mar, 2019

1 commit

  • Building an arm64 allmodconfig kernel with clang results in over 140
    warnings about overly large stack frames, the worst ones being:

    drivers/gpu/drm/panel/panel-sitronix-st7789v.c:196:12: error: stack frame size of 20224 bytes in function 'st7789v_prepare'
    drivers/video/fbdev/omap2/omapfb/displays/panel-tpo-td028ttec1.c:196:12: error: stack frame size of 13120 bytes in function 'td028ttec1_panel_enable'
    drivers/usb/host/max3421-hcd.c:1395:1: error: stack frame size of 10048 bytes in function 'max3421_spi_thread'
    drivers/net/wan/slic_ds26522.c:209:12: error: stack frame size of 9664 bytes in function 'slic_ds26522_probe'
    drivers/crypto/ccp/ccp-ops.c:2434:5: error: stack frame size of 8832 bytes in function 'ccp_run_cmd'
    drivers/media/dvb-frontends/stv0367.c:1005:12: error: stack frame size of 7840 bytes in function 'stv0367ter_algo'

    None of these happen with gcc today, and almost all of these are the
    result of a single known issue in llvm. Hopefully it will eventually
    get fixed with the clang-9 release.

    In the meantime, the best idea I have is to turn off asan-stack for
    clang-8 and earlier, so we can produce a kernel that is safe to run.

    I have posted three patches that address the frame overflow warnings
    that are not addressed by turning off asan-stack, so in combination with
    this change, we get much closer to a clean allmodconfig build, which in
    turn is necessary to do meaningful build regression testing.

    It is still possible to turn on the CONFIG_ASAN_STACK option on all
    versions of clang, and it's always enabled for gcc, but when
    CONFIG_COMPILE_TEST is set, the option remains invisible, so
    allmodconfig and randconfig builds (which are normally done with a
    forced CONFIG_COMPILE_TEST) will still result in a mostly clean build.

    Link: http://lkml.kernel.org/r/20190222222950.3997333-1-arnd@arndb.de
    Link: https://bugs.llvm.org/show_bug.cgi?id=38809
    Signed-off-by: Arnd Bergmann
    Reviewed-by: Qian Cai
    Reviewed-by: Mark Brown
    Acked-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Nick Desaulniers
    Cc: Kostya Serebryany
    Cc: Andrey Konovalov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arnd Bergmann
     

29 Dec, 2018

1 commit

  • This commit splits the current CONFIG_KASAN config option into two:
    1. CONFIG_KASAN_GENERIC, that enables the generic KASAN mode (the one
    that exists now);
    2. CONFIG_KASAN_SW_TAGS, that enables the software tag-based KASAN mode.

    The name CONFIG_KASAN_SW_TAGS is chosen because in the future we will
    also have a hardware tag-based KASAN mode, which will rely on hardware
    memory tagging support in arm64.

    With CONFIG_KASAN_SW_TAGS enabled, compiler options are changed to
    instrument kernel files with -fsanitize=kernel-hwaddress (except the ones
    for which KASAN_SANITIZE := n is set).
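
    As a minimal illustration of what "software tag-based" means here (a
    sketch assuming the arm64 top-byte-ignore pointer layout, not code from
    this patch): the tag travels in the top byte of each pointer, and
    instrumented loads and stores compare it with the tag recorded in shadow
    memory for the accessed object.

    #include <linux/types.h>

    #define KASAN_TAG_SHIFT 56
    #define KASAN_TAG_MASK  0xffUL

    static inline u8 ptr_tag_sketch(const void *ptr)
    {
            return (u8)(((unsigned long)ptr >> KASAN_TAG_SHIFT) & KASAN_TAG_MASK);
    }

    static inline void *set_ptr_tag_sketch(const void *ptr, u8 tag)
    {
            unsigned long addr = (unsigned long)ptr;

            addr &= ~(KASAN_TAG_MASK << KASAN_TAG_SHIFT);
            addr |= (unsigned long)tag << KASAN_TAG_SHIFT;
            return (void *)addr;
    }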

    Both CONFIG_KASAN_GENERIC and CONFIG_KASAN_SW_TAGS support both
    CONFIG_KASAN_INLINE and CONFIG_KASAN_OUTLINE instrumentation modes.

    This commit also adds an empty placeholder (for now) implementation of
    the tag-based KASAN specific hooks inserted by the compiler and adjusts
    the common hooks implementation.

    While this commit adds the CONFIG_KASAN_SW_TAGS config option, this option
    is not selectable, as it depends on HAVE_ARCH_KASAN_SW_TAGS, which we will
    enable once all the infrastructure code has been added.

    Link: http://lkml.kernel.org/r/b2550106eb8a68b10fefbabce820910b115aa853.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     

09 Oct, 2018

1 commit

  • By default, 3-level paging is used when the kernel is compiled with
    kasan support. Add a 4-level paging option to support systems with more
    than 3TB of physical memory and to cover 4-level paging specific code
    with kasan as well.

    Reviewed-by: Martin Schwidefsky
    Signed-off-by: Vasily Gorbik
    Signed-off-by: Martin Schwidefsky

    Vasily Gorbik
     

27 Jul, 2018

1 commit

  • Building with KASAN and SLUB but without sysfs now results in a
    build-time error:

    WARNING: unmet direct dependencies detected for SLUB_DEBUG
    Depends on [n]: SLUB [=y] && SYSFS [=n]
    Selected by [y]:
    - KASAN [=y] && HAVE_ARCH_KASAN [=y] && (SLUB [=y] || SLAB [=n] && !DEBUG_SLAB [=n]) && SLUB [=y]
    mm/slub.c:4565:12: error: 'list_locations' defined but not used [-Werror=unused-function]
    static int list_locations(struct kmem_cache *s, char *buf,
    ^~~~~~~~~~~~~~
    mm/slub.c:4406:13: error: 'validate_slab_cache' defined but not used [-Werror=unused-function]
    static long validate_slab_cache(struct kmem_cache *s)

    This disallows that broken configuration in Kconfig.

    Link: http://lkml.kernel.org/r/20180709154019.1693026-1-arnd@arndb.de
    Fixes: dd275caf4a0d ("kasan: depend on CONFIG_SLUB_DEBUG")
    Signed-off-by: Arnd Bergmann
    Cc: "Jason A. Donenfeld"
    Cc: Arnd Bergmann
    Cc: Shakeel Butt
    Cc: Andrey Ryabinin
    Cc: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arnd Bergmann
     

29 Jun, 2018

1 commit

  • KASAN depends on having access to some of the accounting that SLUB_DEBUG
    does; without it, there are immediate crashes [1]. So, the natural
    thing to do is to make KASAN select SLUB_DEBUG.

    [1] http://lkml.kernel.org/r/CAHmME9rtoPwxUSnktxzKso14iuVCWT7BE_-_8PAC=pGw1iJnQg@mail.gmail.com

    Link: http://lkml.kernel.org/r/20180622154623.25388-1-Jason@zx2c4.com
    Fixes: f9e13c0a5a33 ("slab, slub: skip unnecessary kasan_cache_shutdown()")
    Signed-off-by: Jason A. Donenfeld
    Acked-by: Michal Hocko
    Reviewed-by: Shakeel Butt
    Acked-by: Christoph Lameter
    Cc: Shakeel Butt
    Cc: David Rientjes
    Cc: Pekka Enberg
    Cc: Joonsoo Kim
    Cc: Andrey Ryabinin
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jason A. Donenfeld
     

07 Feb, 2018

1 commit

  • We get a lot of very large stack frames using gcc-7.0.1 with the default
    -fsanitize-address-use-after-scope --param asan-stack=1 options, which can
    easily cause an overflow of the kernel stack, e.g.

    drivers/gpu/drm/i915/gvt/handlers.c:2434:1: warning: the frame size of 46176 bytes is larger than 3072 bytes
    drivers/net/wireless/ralink/rt2x00/rt2800lib.c:5650:1: warning: the frame size of 23632 bytes is larger than 3072 bytes
    lib/atomic64_test.c:250:1: warning: the frame size of 11200 bytes is larger than 3072 bytes
    drivers/gpu/drm/i915/gvt/handlers.c:2621:1: warning: the frame size of 9208 bytes is larger than 3072 bytes
    drivers/media/dvb-frontends/stv090x.c:3431:1: warning: the frame size of 6816 bytes is larger than 3072 bytes
    fs/fscache/stats.c:287:1: warning: the frame size of 6536 bytes is larger than 3072 bytes

    To reduce this risk, -fsanitize-address-use-after-scope is now split out
    into a separate CONFIG_KASAN_EXTRA Kconfig option, leading to stack
    frames that are smaller than 2 kilobytes most of the time on x86_64. An
    earlier version of this patch also prevented combining KASAN_EXTRA with
    KASAN_INLINE, but that is no longer necessary with gcc-7.0.1.

    All patches to get the frame size below 2048 bytes with CONFIG_KASAN=y
    and CONFIG_KASAN_EXTRA=n have been merged by maintainers now, so we can
    bring back that default now. KASAN_EXTRA=y still causes lots of
    warnings but now defaults to !COMPILE_TEST to disable it in
    allmodconfig, and it remains disabled in all other defconfigs since it
    is a new option. I arbitrarily raise the warning limit for KASAN_EXTRA
    to 3072 to reduce the noise, but an allmodconfig kernel still has around
    50 warnings on gcc-7.

    I experimented a bit more with smaller stack frames and have another
    follow-up series that reduces the warning limit for 64-bit architectures
    to 1280 bytes (without CONFIG_KASAN).

    With earlier versions of this patch series, I also had patches to address
    the warnings we get with KASAN and/or KASAN_EXTRA, using a
    "noinline_if_stackbloat" annotation.

    That annotation now got replaced with a gcc-8 bugfix (see
    https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715) and a workaround for
    older compilers, which means that KASAN_EXTRA is now just as bad as
    before and will lead to an instant stack overflow in a few extreme
    cases.

    This reverts parts of commit 3f181b4d8652 ("lib/Kconfig.debug: disable
    -Wframe-larger-than warnings with KASAN=y"). Two patches in linux-next
    should be merged first to avoid introducing warnings in an allmodconfig
    build:
    3cd890dbe2a4 ("media: dvb-frontends: fix i2c access helpers for KASAN")
    16c3ada89cff ("media: r820t: fix r820t_write_reg for KASAN")

    Do we really need to backport this?

    I think we do: without this patch, enabling KASAN will lead to
    unavoidable kernel stack overflow in certain device drivers when built
    with gcc-7 or higher on linux-4.10+ or any version that contains a
    backport of commit c5caf21ab0cf8. Most people are probably still on
    older compilers, but it will get worse over time as they upgrade their
    distros.

    The warnings we get on kernels older than this should all be for code
    that uses dangerously large stack frames, though most of them do not
    cause an actual stack overflow by themselves. The asan-stack option was
    added in linux-4.0, and commit 3f181b4d8652 ("lib/Kconfig.debug:
    disable -Wframe-larger-than warnings with KASAN=y") effectively turned
    off the warning for allmodconfig kernels, so I would like to see this
    fix backported to any kernels later than 4.0.

    I have done dozens of fixes for individual functions with stack frames
    larger than 2048 bytes with asan-stack, and I plan to make sure that
    all those fixes make it into the stable kernels as well (most are
    already there).

    Part of the complication here is that asan-stack (from 4.0) was
    originally assumed to always require much larger stacks, but that
    turned out to be a combination of multiple gcc bugs that we have now
    worked around and fixed, but sanitize-address-use-after-scope (from
    v4.10) has a much higher inherent stack usage and also suffers from at
    least three other problems that we have analyzed but not yet fixed
    upstream, each of them makes the stack usage more severe than it should
    be.

    Link: http://lkml.kernel.org/r/20171221134744.2295529-1-arnd@arndb.de
    Signed-off-by: Arnd Bergmann
    Acked-by: Andrey Ryabinin
    Cc: Mauro Carvalho Chehab
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Andrey Konovalov
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arnd Bergmann
     

29 Jul, 2016

1 commit

  • For KASAN builds:
    - switch SLUB allocator to using stackdepot instead of storing the
    allocation/deallocation stacks in the objects;
    - change the freelist hook so that parts of the freelist can be put
    into the quarantine.

    [aryabinin@virtuozzo.com: fixes]
    Link: http://lkml.kernel.org/r/1468601423-28676-1-git-send-email-aryabinin@virtuozzo.com
    Link: http://lkml.kernel.org/r/1468347165-41906-3-git-send-email-glider@google.com
    Signed-off-by: Alexander Potapenko
    Cc: Andrey Konovalov
    Cc: Christoph Lameter
    Cc: Dmitry Vyukov
    Cc: Steven Rostedt (Red Hat)
    Cc: Joonsoo Kim
    Cc: Kostya Serebryany
    Cc: Andrey Ryabinin
    Cc: Kuthonuzo Luruo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     

26 Mar, 2016

2 commits

  • Implement the stack depot and provide CONFIG_STACKDEPOT. The stack depot
    will allow KASAN to store allocation/deallocation stack traces for memory
    chunks. The stack traces are stored in a hash table and referenced by
    handles which reside in the kasan_alloc_meta and kasan_free_meta
    structures in the allocated memory chunks.
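
    A hedged sketch of how a depot handle replaces a copied stack trace in the
    per-object metadata (modern helper names, stack_trace_save() and
    stack_depot_save(), are used here for illustration and may differ from the
    interface this patch introduced):

    #include <linux/stackdepot.h>
    #include <linux/stacktrace.h>
    #include <linux/kernel.h>
    #include <linux/gfp.h>

    struct kasan_track_sketch {
            depot_stack_handle_t stack;     /* small handle instead of N frames */
    };

    static void record_stack_sketch(struct kasan_track_sketch *track, gfp_t flags)
    {
            unsigned long entries[16];
            unsigned int nr;

            nr = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
            /* Identical traces hash to the same handle, so each is stored once. */
            track->stack = stack_depot_save(entries, nr, flags);
    }

    When a report is printed, the handle is turned back into frames with
    stack_depot_fetch().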

    IRQ stack traces are cut below the IRQ entry point to avoid unnecessary
    duplication.

    Right now stackdepot support is only enabled in SLAB allocator. Once
    KASAN features in SLAB are on par with those in SLUB we can switch SLUB
    to stackdepot as well, thus removing the dependency on SLUB stack
    bookkeeping, which wastes a lot of memory.

    This patch is based on the "mm: kasan: stack depots" patch originally
    prepared by Dmitry Chernenkov.

    Joonsoo has said that he plans to reuse the stackdepot code for the
    mm/page_owner.c debugging facility.

    [akpm@linux-foundation.org: s/depot_stack_handle/depot_stack_handle_t]
    [aryabinin@virtuozzo.com: comment style fixes]
    Signed-off-by: Alexander Potapenko
    Signed-off-by: Andrey Ryabinin
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Steven Rostedt
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     
  • Add KASAN hooks to SLAB allocator.

    This patch is based on the "mm: kasan: unified support for SLUB and SLAB
    allocators" patch originally prepared by Dmitry Chernenkov.

    Signed-off-by: Alexander Potapenko
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Cc: Steven Rostedt
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     

06 Nov, 2015

1 commit

  • It's recommended to have slub's user tracking enabled with CONFIG_KASAN,
    because:

    a) User tracking disables slab merging which improves
    detecting out-of-bounds accesses.
    b) User tracking metadata acts as redzone which also improves
    detecting out-of-bounds accesses.
    c) User tracking provides additional information about the object.
    This information helps to understand bugs.

    Currently it is not enabled by default. Besides recompiling the kernel
    with KASAN and reinstalling it, the user also has to change the boot
    cmdline, which is not very handy.

    Enable slub user tracking by default with KASAN=y, since there is no good
    reason to not do this.

    [akpm@linux-foundation.org: little fixes, per David]
    Signed-off-by: Andrey Ryabinin
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     

06 Jul, 2015

1 commit

  • KASAN_SHADOW_OFFSET is a purely arch-specific setting,
    so it should be in the arch's Kconfig file.

    Signed-off-by: Andrey Ryabinin
    Cc: Alexander Popov
    Cc: Alexander Potapenko
    Cc: Andrey Konovalov
    Cc: Borislav Petkov
    Cc: Dmitry Vyukov
    Cc: Linus Torvalds
    Cc: Paul Bolle
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/1435828178-10975-7-git-send-email-a.ryabinin@samsung.com
    Signed-off-by: Ingo Molnar

    Andrey Ryabinin
     

06 May, 2015

1 commit

  • The documentation shows a need for gcc > 4.9.2, but it's really >=. The
    Kconfig entries don't show required versions, so add them. Correct a
    latter/later typo too. Also mention that gcc 5 is required to catch
    out-of-bounds accesses to global and stack variables.

    Signed-off-by: Joe Perches
    Signed-off-by: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     

14 Feb, 2015

5 commits

  • This feature lets us detect out-of-bounds accesses to global variables.
    It works both for globals in the kernel image and for globals in modules.
    Currently this won't work for symbols in user-specified sections (e.g.
    __init, __read_mostly, ...).

    The idea of this is simple. The compiler grows each global variable by
    the redzone size and adds constructors invoking the
    __asan_register_globals() function. Information about the global variable
    (address, size, size with redzone, ...) is passed to
    __asan_register_globals() so we can poison the variable's redzone.
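
    A rough sketch of that flow (the struct and helper names below are an
    illustrative approximation of the ASAN ABI, not the patch itself):

    #include <linux/types.h>

    struct kasan_global_sketch {
            const void *beg;                /* address of the global */
            size_t size;                    /* bytes actually used by the variable */
            size_t size_with_redzone;       /* size plus the trailing redzone */
            const char *name;
            /* ... further ABI fields omitted ... */
    };

    static void poison_redzone_sketch(const void *start, size_t len)
    {
            /* would mark [start, start + len) as inaccessible in shadow memory */
    }

    /* Called from the compiler-emitted constructor of each object file. */
    void __asan_register_globals_sketch(struct kasan_global_sketch *globals, size_t n)
    {
            size_t i;

            for (i = 0; i < n; i++)
                    poison_redzone_sketch(globals[i].beg + globals[i].size,
                                          globals[i].size_with_redzone - globals[i].size);
    }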

    This patch also forces module_alloc() to return 8*PAGE_SIZE aligned
    addresses, making shadow memory handling
    (kasan_module_alloc()/kasan_module_free()) simpler. Such alignment
    guarantees that each shadow page backing the modules' address space
    corresponds to only one module_alloc() allocation.

    Signed-off-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrey Konovalov
    Cc: Yuri Gribov
    Cc: Konstantin Khlebnikov
    Cc: Sasha Levin
    Cc: Christoph Lameter
    Cc: Joonsoo Kim
    Cc: Dave Hansen
    Cc: Andi Kleen
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
  • This is a test module doing various nasty things like out-of-bounds
    accesses and use-after-free. It is useful for testing kernel debugging
    features like the kernel address sanitizer.

    It mostly concentrates on testing of the slab allocator, but we might want
    to add more different stuff here in the future (like stack/global variable
    out-of-bounds accesses and so on).

    Signed-off-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrey Konovalov
    Cc: Yuri Gribov
    Cc: Konstantin Khlebnikov
    Cc: Sasha Levin
    Cc: Christoph Lameter
    Cc: Joonsoo Kim
    Cc: Dave Hansen
    Cc: Andi Kleen
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
  • With this patch kasan will be able to catch bugs in memory allocated by
    slub. Initially, all objects in a newly allocated slab page are marked as
    redzone. Later, when allocation of a slub object happens, the number of
    bytes requested by the caller is marked as accessible, and the rest of the
    object (including slub's metadata) is marked as redzone (inaccessible).

    We also mark an object as accessible if ksize was called for this object.
    There are some places in the kernel where the ksize function is called to
    inquire the size of the actually allocated area. Such callers could
    validly access the whole allocated memory, so it should be marked as
    accessible.
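
    For example (a self-contained sketch of the rule just described, not code
    from the patch):

    #include <linux/slab.h>
    #include <linux/string.h>

    static void ksize_example_sketch(void)
    {
            char *buf = kmalloc(13, GFP_KERNEL);
            size_t real;

            if (!buf)
                    return;
            /* Only the 13 requested bytes are accessible here; the tail is redzone. */
            real = ksize(buf);      /* e.g. 16 with SLUB's kmalloc-16 cache */
            /* After ksize(), using the whole area is valid, so it is unpoisoned. */
            memset(buf, 0, real);
            kfree(buf);
    }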

    Code in the slub.c and slab_common.c files could validly access the
    object's metadata, so instrumentation for these files is disabled.

    Signed-off-by: Andrey Ryabinin
    Signed-off-by: Dmitry Chernenkov
    Cc: Dmitry Vyukov
    Cc: Konstantin Serebryany
    Signed-off-by: Andrey Konovalov
    Cc: Yuri Gribov
    Cc: Konstantin Khlebnikov
    Cc: Sasha Levin
    Cc: Christoph Lameter
    Cc: Joonsoo Kim
    Cc: Dave Hansen
    Cc: Andi Kleen
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
  • This patch adds arch specific code for kernel address sanitizer.

    16TB of virtual address space is used for shadow memory. It's located in
    the range [ffffec0000000000 - fffffc0000000000] between vmemmap and %esp
    fixup stacks.

    At an early stage we map the whole shadow region with the zero page.
    Later, after pages are mapped into the direct mapping address range, we
    unmap zero pages from the corresponding shadow (see kasan_map_shadow())
    and allocate and map real shadow memory, reusing the vmemmap_populate()
    function.

    Also replace __pa with __pa_nodebug before the shadow is initialized.
    With CONFIG_DEBUG_VIRTUAL=y, __pa makes an external function call
    (__phys_addr); __phys_addr is instrumented, so __asan_load could be
    called before the shadow area is initialized.

    Signed-off-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrey Konovalov
    Cc: Yuri Gribov
    Cc: Konstantin Khlebnikov
    Cc: Sasha Levin
    Cc: Christoph Lameter
    Cc: Joonsoo Kim
    Cc: Dave Hansen
    Cc: Andi Kleen
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Jim Davis
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
  • Kernel Address sanitizer (KASan) is a dynamic memory error detector. It
    provides a fast and comprehensive solution for finding use-after-free and
    out-of-bounds bugs.

    KASAN uses compile-time instrumentation to check every memory access,
    therefore GCC > v4.9.2 is required. v4.9.2 almost works, but has issues
    with putting symbol aliases into the wrong section, which breaks kasan
    instrumentation of globals.

    This patch only adds infrastructure for kernel address sanitizer. It's
    not available for use yet. The idea and some code was borrowed from [1].

    Basic idea:

    The main idea of KASAN is to use shadow memory to record whether each
    byte of memory is safe to access or not, and to use the compiler's
    instrumentation to check the shadow memory on each memory access.

    Address sanitizer uses 1/8 of the memory addressable in the kernel for
    shadow memory and uses a direct mapping with a scale and offset to
    translate a memory address to its corresponding shadow address.

    Here is the function that translates an address to its corresponding
    shadow address:

    unsigned long kasan_mem_to_shadow(unsigned long addr)
    {
    return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
    }

    where KASAN_SHADOW_SCALE_SHIFT = 3.

    So for every 8 bytes of memory there is one corresponding byte of shadow
    memory. The following encoding is used for each shadow byte: 0 means that
    all 8 bytes of the corresponding memory region are valid for access; k
    (1 <= k <= 7) means that the first k bytes are valid for access while the
    other (8 - k) bytes are not; any negative value means the entire 8-byte
    region is inaccessible, with different negative values distinguishing the
    kinds of inaccessible memory (redzones, freed memory, and so on).
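
    A small stand-alone illustration of this encoding for a 13-byte object
    starting at an 8-byte-aligned address (userspace-style sketch):

    #include <stdio.h>

    #define KASAN_SHADOW_SCALE_SIZE 8

    int main(void)
    {
            unsigned long size = 13, g, granules;

            granules = (size + KASAN_SHADOW_SCALE_SIZE - 1) / KASAN_SHADOW_SCALE_SIZE;
            for (g = 0; g < granules; g++) {
                    unsigned long left = size - g * KASAN_SHADOW_SCALE_SIZE;
                    int shadow = left >= KASAN_SHADOW_SCALE_SIZE ? 0 : (int)left;

                    printf("granule %lu: shadow byte = %d\n", g, shadow);
            }
            return 0;       /* prints 0 for bytes 0-7, then 5 for bytes 8-12 */
    }
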
    Acked-by: Michal Marek
    Signed-off-by: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Cc: Yuri Gribov
    Cc: Konstantin Khlebnikov
    Cc: Sasha Levin
    Cc: Christoph Lameter
    Cc: Joonsoo Kim
    Cc: Dave Hansen
    Cc: Andi Kleen
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Stephen Rothwell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin