25 Sep, 2019

1 commit

  • Add memory corruption identification to bug reports for the software
    tag-based mode. The report now shows whether the error is a
    "use-after-free" or an "out-of-bounds" error instead of a generic
    "invalid-access" error, which makes it easier for programmers to see
    the nature of the memory corruption problem.

    We extend the slab to store the five most recent free pointer tags and
    free backtraces, so we can check whether the tagged address is in the
    slab record and make a good guess as to whether the object looks more
    like a "use-after-free" or an "out-of-bounds" access. As a result, every
    slab memory corruption can be identified as either "use-after-free" or
    "out-of-bounds".

    [aryabinin@virtuozzo.com: simplify & cleanup code]
    Link: https://lkml.kernel.org/r/3318f9d7-a760-3cc8-b700-f06108ae745f@virtuozzo.com
    Link: http://lkml.kernel.org/r/20190821180332.11450-1-aryabinin@virtuozzo.com
    Signed-off-by: Walter Wu
    Signed-off-by: Andrey Ryabinin
    Acked-by: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Walter Wu
     

13 Jul, 2019

2 commits

  • This changes the {,__}kasan_check_{read,write} functions to return a
    boolean denoting whether the access was valid.
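
    A minimal sketch of the resulting interface (assuming the declarations
    live in include/linux/kasan-checks.h; treat the exact stubs as an
    illustration rather than the literal patch):

    #include <linux/types.h>

    #ifdef CONFIG_KASAN
    /* Return true if the access at [p, p + size) was valid. */
    bool __kasan_check_read(const volatile void *p, unsigned int size);
    bool __kasan_check_write(const volatile void *p, unsigned int size);
    #else
    static inline bool __kasan_check_read(const volatile void *p,
                                          unsigned int size)
    {
            return true;
    }
    static inline bool __kasan_check_write(const volatile void *p,
                                           unsigned int size)
    {
            return true;
    }
    #endif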

    [sfr@canb.auug.org.au: include types.h for "bool"]
    Link: http://lkml.kernel.org/r/20190705184949.13cdd021@canb.auug.org.au
    Link: http://lkml.kernel.org/r/20190626142014.141844-3-elver@google.com
    Signed-off-by: Marco Elver
    Signed-off-by: Stephen Rothwell
    Reviewed-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Cc: Andrey Konovalov
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Mark Rutland
    Cc: Kees Cook
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Marco Elver
     
  • This adds support for printing a stack frame description on invalid
    stack accesses. The frame description is embedded by the compiler, and
    is parsed and then pretty-printed.

    Currently, we can only print the stack frame info for accesses to the
    task's own stack, but not for accesses to other tasks' stacks.

    Example of what it looks like:

    page dumped because: kasan: bad access detected

    addr ffff8880673ef98a is located in stack of task insmod/2008 at offset 106 in frame:
    kasan_stack_oob+0x0/0xf5 [test_kasan]

    this frame has 2 objects:
    [32, 36) 'i'
    [96, 106) 'stack_array'

    Memory state around the buggy address:

    Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=198435
    Link: http://lkml.kernel.org/r/20190522100048.146841-1-elver@google.com
    Signed-off-by: Marco Elver
    Reviewed-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Marco Elver
     

30 Mar, 2019

1 commit

  • set_tag() compiles away when CONFIG_KASAN_SW_TAGS=n, so make
    arch_kasan_set_tag() a static inline function to fix warnings below.

    mm/kasan/common.c: In function '__kasan_kmalloc':
    mm/kasan/common.c:475:5: warning: variable 'tag' set but not used [-Wunused-but-set-variable]
    u8 tag;
    ^~~
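
    A sketch of the fix (assuming the fallback definition lives in
    mm/kasan/kasan.h): replacing the no-op macro with a static inline keeps
    the tag argument "used" as far as the compiler is concerned, so the
    warning goes away without changing generated code.

    /* Before: a macro that silently drops its tag argument. */
    /* #define arch_kasan_set_tag(addr, tag)   ((void *)(addr)) */

    /* After: a static inline with identical behaviour. */
    #ifndef arch_kasan_set_tag
    static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
    {
            return addr;
    }
    #endif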

    Link: http://lkml.kernel.org/r/20190307185244.54648-1-cai@lca.pw
    Signed-off-by: Qian Cai
    Reviewed-by: Andrey Konovalov
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Qian Cai
     

06 Mar, 2019

1 commit

  • The use-after-scope bug detector seems to be almost entirely useless for
    the Linux kernel. It has existed for over two years, but I've seen only
    one valid bug so far [1], and that bug was fixed before it was reported.
    There were some other use-after-scope reports, but they were false
    positives due to various reasons, such as incompatibility with the
    structleak plugin.

    This feature significantly increases stack usage, especially with GCC
    versions before 9, and causes a 32K stack overflow. It probably adds a
    performance penalty too.

    Given all that, let's remove the use-after-scope detector entirely.

    While preparing this patch I noticed that we mistakenly enable
    use-after-scope detection for the clang compiler regardless of the
    CONFIG_KASAN_EXTRA setting. This is also fixed now.

    [1] http://lkml.kernel.org/r/

    Link: http://lkml.kernel.org/r/20190111185842.13978-1-aryabinin@virtuozzo.com
    Signed-off-by: Andrey Ryabinin
    Acked-by: Will Deacon [arm64]
    Cc: Qian Cai
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     

29 Dec, 2018

6 commits

  • This commit adds tag-based KASAN specific hooks implementation and
    adjusts common generic and tag-based KASAN ones.

    1. When a new slab cache is created, tag-based KASAN rounds up the size of
    the objects in this cache to KASAN_SHADOW_SCALE_SIZE (== 16).

    2. On each kmalloc, tag-based KASAN generates a random tag, sets the
    shadow memory that corresponds to this object to this tag, and embeds the
    tag value into the top byte of the returned pointer.

    3. On each kfree tag-based KASAN poisons the shadow memory with a random
    tag to allow detection of use-after-free bugs.

    The rest of the logic of the hook implementation is very much similar to
    the one provided by generic KASAN. Tag-based KASAN saves allocation and
    free stack metadata to the slab object the same way generic KASAN does.

    Link: http://lkml.kernel.org/r/bda78069e3b8422039794050ddcb2d53d053ed41.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • This commit adds routines that print tag-based KASAN error reports.
    They are quite similar to the generic KASAN ones; the differences are:

    1. The way tag-based KASAN finds the first bad shadow cell (the one with
    a mismatching tag): tag-based KASAN compares memory tags from the shadow
    memory to the pointer tag.

    2. Tag-based KASAN reports all bugs with the "KASAN: invalid-access"
    header.

    Also simplify the generic KASAN find_first_bad_addr.

    Link: http://lkml.kernel.org/r/aee6897b1bd077732a315fd84c6b4f234dbfdfcb.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • Move generic KASAN specific error reporting routines to generic_report.c
    without any functional changes, leaving common error reporting code in
    report.c to be later reused by tag-based KASAN.

    Link: http://lkml.kernel.org/r/ba48c32f8e5aefedee78998ccff0413bee9e0f5b.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • This commit adds a few helper functions that are meant to be used to
    work with tags embedded in the top byte of kernel pointers: to set, get,
    or reset the top byte.
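
    A conceptual sketch of what such helpers do (the helper names and the
    use of bits 63:56 follow the arm64 TBI-style layout; treat the exact
    definitions as an illustration):

    #define KASAN_PTR_TAG_SHIFT     56
    #define KASAN_PTR_TAG_MASK      (0xffUL << KASAN_PTR_TAG_SHIFT)
    #define KASAN_TAG_KERNEL        0xff    /* tag of an untagged kernel pointer */

    /* Store a tag in the top byte of a pointer. */
    static inline void *set_tag(const void *addr, u8 tag)
    {
            return (void *)(((unsigned long)addr & ~KASAN_PTR_TAG_MASK) |
                            ((unsigned long)tag << KASAN_PTR_TAG_SHIFT));
    }

    /* Read the tag back out of the top byte. */
    static inline u8 get_tag(const void *addr)
    {
            return (u8)((unsigned long)addr >> KASAN_PTR_TAG_SHIFT);
    }

    /* Drop the tag, restoring a canonical kernel pointer. */
    static inline void *reset_tag(const void *addr)
    {
            return set_tag(addr, KASAN_TAG_KERNEL);
    }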

    Link: http://lkml.kernel.org/r/f6c6437bb8e143bc44f42c3c259c62e734be7935.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Cc: Andrey Ryabinin
    Cc: Christoph Lameter
    Cc: Dmitry Vyukov
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • This commit splits the current CONFIG_KASAN config option into two:
    1. CONFIG_KASAN_GENERIC, that enables the generic KASAN mode (the one
    that exists now);
    2. CONFIG_KASAN_SW_TAGS, that enables the software tag-based KASAN mode.

    The name CONFIG_KASAN_SW_TAGS is chosen because in the future we will
    have another, hardware tag-based KASAN mode that will rely on hardware
    memory tagging support in arm64.

    With CONFIG_KASAN_SW_TAGS enabled, compiler options are changed to
    instrument kernel files with -fsanitize=kernel-hwaddress (except the ones
    for which KASAN_SANITIZE := n is set).

    Both CONFIG_KASAN_GENERIC and CONFIG_KASAN_SW_TAGS support both
    CONFIG_KASAN_INLINE and CONFIG_KASAN_OUTLINE instrumentation modes.

    This commit also adds an empty placeholder (for now) implementation of
    the tag-based KASAN specific hooks inserted by the compiler, and adjusts
    the common hooks implementation.

    While this commit adds the CONFIG_KASAN_SW_TAGS config option, the option
    is not yet selectable, as it depends on HAVE_ARCH_KASAN_SW_TAGS, which we
    will enable once all the infrastructure code has been added.

    Link: http://lkml.kernel.org/r/b2550106eb8a68b10fefbabce820910b115aa853.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • Tag-based KASAN reuses a significant part of the generic KASAN code, so
    move the common parts to common.c without any functional changes.

    Link: http://lkml.kernel.org/r/114064d002356e03bb8cc91f7835e20dc61b51d9.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     

07 Feb, 2018

4 commits

  • __builtin_return_address(1) is unreliable without frame pointers.
    With defconfig, on the kmalloc_pagealloc_invalid_free test I am getting:

    BUG: KASAN: double-free or invalid-free in (null)

    Pass the caller PC from callers explicitly.

    Link: http://lkml.kernel.org/r/9b01bc2d237a4df74ff8472a3bf6b7635908de01.1514378558.git.dvyukov@google.com
    Signed-off-by: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dmitry Vyukov
     
  • Patch series "kasan: detect invalid frees".

    KASAN detects double-frees, but does not detect invalid-frees (when a
    pointer into the middle of a heap object is passed to free). We recently
    had a very unpleasant case in crypto code which freed an inner object
    inside of a heap allocation. This went unnoticed during the free, but
    totally corrupted the heap and later led to a bunch of random crashes
    all over the kernel code.

    Detect invalid frees.

    This patch (of 5):

    Detect frees of pointers into the middle of large heap objects.

    I dropped const from kasan_kfree_large() because it starts propagating
    through a bunch of functions in kasan_report.c, slab/slub nearest_obj(),
    all of their local variables, fixup_red_left(), etc.
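
    The check itself is tiny; roughly (a sketch, with the report helper name
    assumed):

    void kasan_kfree_large(void *ptr, unsigned long ip)
    {
            /* A large (page_alloc backed) object must be freed via the
             * pointer to its first byte; anything else is an invalid free. */
            if (ptr != page_address(virt_to_head_page(ptr)))
                    kasan_report_invalid_free(ptr, ip);
            /* The shadow itself is poisoned later by the page allocator hook. */
    }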

    Link: http://lkml.kernel.org/r/1b45b4fe1d20fc0de1329aab674c1dd973fee723.1514378558.git.dvyukov@google.com
    Signed-off-by: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dmitry Vyukov
     
  • As a code-size optimization, LLVM builds since r279383 may bulk-manipulate
    the shadow region when (un)poisoning large memory blocks. This requires
    new callbacks that simply do an uninstrumented memset().

    This fixes linking the Clang-built kernel when using KASAN.
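
    Each of the new callbacks is essentially a one-liner; a sketch of one of
    them (the family follows the ASan runtime naming __asan_set_shadow_<NN>,
    where NN is the shadow byte value to write):

    /* Fill a range of shadow memory with 0x00 ("fully accessible") using an
     * uninstrumented memset, so the callback itself is not checked by KASAN. */
    void __asan_set_shadow_00(const void *addr, size_t size)
    {
            __memset((void *)addr, 0, size);
    }
    EXPORT_SYMBOL(__asan_set_shadow_00);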

    [arnd@arndb.de: add declarations for internal functions]
    Link: http://lkml.kernel.org/r/20180105094112.2690475-1-arnd@arndb.de
    [fengguang.wu@intel.com: __asan_set_shadow_00 can be static]
    Link: http://lkml.kernel.org/r/20171223125943.GA74341@lkp-ib03
    [ghackmann@google.com: fix memset() parameters, and tweak commit message to describe new callbacks]
    Link: http://lkml.kernel.org/r/20171204191735.132544-6-paullawrence@google.com
    Signed-off-by: Alexander Potapenko
    Signed-off-by: Greg Hackmann
    Signed-off-by: Paul Lawrence
    Signed-off-by: Fengguang Wu
    Signed-off-by: Arnd Bergmann
    Acked-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Masahiro Yamada
    Cc: Matthias Kaehlcke
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     
  • clang's AddressSanitizer implementation adds redzones on either side of
    alloca()ed buffers. These redzones are 32-byte aligned and at least 32
    bytes long.

    __asan_alloca_poison() is passed the size and address of the allocated
    buffer, *excluding* the redzones on either side. The left redzone will
    always be to the immediate left of this buffer; but AddressSanitizer may
    need to add padding between the end of the buffer and the right redzone.
    If there are any 8-byte chunks inside this padding, we should poison
    those too.

    __asan_allocas_unpoison() is just passed the top and bottom of the dynamic
    stack area, so unpoisoning is simpler.

    Link: http://lkml.kernel.org/r/20171204191735.132544-4-paullawrence@google.com
    Signed-off-by: Greg Hackmann
    Signed-off-by: Paul Lawrence
    Acked-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Masahiro Yamada
    Cc: Matthias Kaehlcke
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Lawrence
     

02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boiler plate text.

    This patch is based on work done by Thomas Gleixner and Kate Stewart and
    Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset of
    the use cases:
    - file had no licensing information in it,
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information.

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier to apply to a
    file was done in a spreadsheet of side-by-side results from the output
    of two independent scanners (ScanCode & Windriver) producing SPDX
    tag:value files, created by Philippe Ombredanne. Philippe prepared the
    base worksheet and did an initial spot review of a few thousand files.

    The 4.13 kernel was the starting point of the analysis with 60,537 files
    assessed. Kate Stewart did a file by file comparison of the scanner
    results in the spreadsheet to determine which SPDX license identifier(s)
    to be applied to the file. She confirmed any determination that was not
    immediately clear with lawyers working with the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging were:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
    lines of source.
    - File already had some variant of a license header in it (even if <5
    lines).
    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

04 May, 2017

1 commit

  • Changes double-free report header from

    BUG: Double free or freeing an invalid pointer
    Unexpected shadow byte: 0xFB

    to

    BUG: KASAN: double-free or invalid-free in kmalloc_oob_left+0xe5/0xef

    This makes a bug uniquely identifiable by the first report line. To
    account for the removal of the unexpected shadow value, print the shadow
    bytes at the end of the report, as is done for other kinds of bugs.

    Link: http://lkml.kernel.org/r/20170302134851.101218-9-andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Acked-by: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     

01 Apr, 2017

1 commit

  • Disable KASAN after the first report. There are several reasons for
    this:

    - A single bug quite often causes multiple invalid memory accesses,
    producing a storm of reports in dmesg.

    - A write OOB access might corrupt metadata, so the next report will
    print bogus alloc/free stack traces.

    - Reports after the first could easily be not bugs themselves but just
    side effects of the first one.

    Given that multiple reports usually only do harm, it makes sense to
    disable KASAN after the first one. If the user wants to see all the
    reports, the boot-time parameter kasan_multi_shot must be used.
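
    The gating logic is roughly the following (a sketch; the flag and symbol
    names are assumptions based on the description above):

    /* Only the first report is printed, unless kasan_multi_shot was given
     * on the kernel command line. */
    static bool kasan_report_enabled(void)
    {
            if (test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags))
                    return true;
            /* Atomically claim the "first report" slot exactly once. */
            return !test_and_set_bit(KASAN_BIT_REPORTED, &kasan_flags);
    }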

    [aryabinin@virtuozzo.com: wrote changelog and doc, added missing include]
    Link: http://lkml.kernel.org/r/20170323154416.30257-1-aryabinin@virtuozzo.com
    Signed-off-by: Mark Rutland
    Signed-off-by: Andrey Ryabinin
    Cc: Andrey Konovalov
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mark Rutland
     

01 Dec, 2016

2 commits

  • GCC revision 241896 implements use-after-scope detection, which will be
    available in GCC 7. Support it in KASAN.

    GCC emits two new callbacks to poison/unpoison large stack objects when
    they go in/out of scope. Implement the callbacks and add a test.
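
    A sketch of the two callbacks (internal helper names as used elsewhere
    in mm/kasan are assumed here):

    /* Called by compiler-generated code when a large stack variable goes
     * out of scope: mark its shadow as use-after-scope poison. */
    void __asan_poison_stack_memory(const void *addr, size_t size)
    {
            kasan_poison_shadow(addr, round_up(size, KASAN_SHADOW_SCALE_SIZE),
                                KASAN_USE_AFTER_SCOPE);
    }
    EXPORT_SYMBOL(__asan_poison_stack_memory);

    /* Called when the variable comes back into scope. */
    void __asan_unpoison_stack_memory(const void *addr, size_t size)
    {
            kasan_unpoison_shadow(addr, size);
    }
    EXPORT_SYMBOL(__asan_unpoison_stack_memory);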

    [dvyukov@google.com: v3]
    Link: http://lkml.kernel.org/r/1479998292-144502-1-git-send-email-dvyukov@google.com
    Link: http://lkml.kernel.org/r/1479226045-145148-1-git-send-email-dvyukov@google.com
    Signed-off-by: Dmitry Vyukov
    Acked-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: [4.0+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dmitry Vyukov
     
  • The kasan_global struct is part of the compiler/runtime ABI. GCC
    revision 241983 has added a new field to the kasan_global struct. Update
    the kernel's definition of the kasan_global struct to include the new
    field.

    Without this patch KASAN is broken with GCC 7.
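
    For reference, a sketch of the updated ABI struct (the odr_indicator
    field is the one appended for GCC 7; the KASAN_ABI_VERSION guards are
    how the kernel keeps older compilers working):

    struct kasan_global {
            const void *beg;                /* Start of the global variable. */
            size_t size;                    /* Size of the variable. */
            size_t size_with_redzone;       /* Size including the redzone. */
            const void *name;
            const void *module_name;        /* Module the variable is declared in. */
            unsigned long has_dynamic_init; /* Needed for C++. */
    #if KASAN_ABI_VERSION >= 4
            struct kasan_source_location *location;
    #endif
    #if KASAN_ABI_VERSION >= 5
            char *odr_indicator;            /* New in the GCC 7 ABI. */
    #endif
    };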

    Link: http://lkml.kernel.org/r/1479219743-28682-1-git-send-email-dvyukov@google.com
    Signed-off-by: Dmitry Vyukov
    Acked-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: [4.0+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dmitry Vyukov
     

03 Aug, 2016

3 commits

  • Currently we just dump the stack in case of a double-free bug.
    Let's dump all the info about the object that we have.

    [aryabinin@virtuozzo.com: change double free message per Alexander]
    Link: http://lkml.kernel.org/r/1470153654-30160-1-git-send-email-aryabinin@virtuozzo.com
    Link: http://lkml.kernel.org/r/1470062715-14077-6-git-send-email-aryabinin@virtuozzo.com
    Signed-off-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
  • The state of an object is currently tracked in two places: the shadow
    memory and the ->state field in struct kasan_alloc_meta. We can get rid
    of the latter, which will save us a little bit of memory. It also allows
    us to move the free stack into struct kasan_alloc_meta without increasing
    memory consumption. So now we should always know when the object was
    last freed, which may be useful for long-delayed use-after-free bugs.

    As a side effect this fixes the following UBSAN warning:
    UBSAN: Undefined behaviour in mm/kasan/quarantine.c:102:13
    member access within misaligned address ffff88000d1efebc for type 'struct qlist_node'
    which requires 8 byte alignment

    Link: http://lkml.kernel.org/r/1470062715-14077-5-git-send-email-aryabinin@virtuozzo.com
    Reported-by: kernel test robot
    Signed-off-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
  • The size of a slab object is already stored in cache->object_size.

    Note that kmalloc() internally rounds up the size of the allocation, so
    object_size may not be equal to alloc_size; but usually we don't need to
    know the exact size of the allocated object. If we do need that
    information, we can still figure it out from the report: the dump of
    shadow memory allows us to identify the end of the allocated memory, and
    thereby the exact allocation size.

    Link: http://lkml.kernel.org/r/1470062715-14077-4-git-send-email-aryabinin@virtuozzo.com
    Signed-off-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     

29 Jul, 2016

1 commit

  • For KASAN builds:
    - switch SLUB allocator to using stackdepot instead of storing the
    allocation/deallocation stacks in the objects;
    - change the freelist hook so that parts of the freelist can be put
    into the quarantine.

    [aryabinin@virtuozzo.com: fixes]
    Link: http://lkml.kernel.org/r/1468601423-28676-1-git-send-email-aryabinin@virtuozzo.com
    Link: http://lkml.kernel.org/r/1468347165-41906-3-git-send-email-glider@google.com
    Signed-off-by: Alexander Potapenko
    Cc: Andrey Konovalov
    Cc: Christoph Lameter
    Cc: Dmitry Vyukov
    Cc: Steven Rostedt (Red Hat)
    Cc: Joonsoo Kim
    Cc: Kostya Serebryany
    Cc: Andrey Ryabinin
    Cc: Kuthonuzo Luruo
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     

27 May, 2016

1 commit


21 May, 2016

1 commit

  • Quarantine isolates freed objects in a separate queue. The objects are
    returned to the allocator later, which helps to detect use-after-free
    errors.

    When the object is freed, its state changes from KASAN_STATE_ALLOC to
    KASAN_STATE_QUARANTINE. The object is poisoned and put into quarantine
    instead of being returned to the allocator, therefore every subsequent
    access to that object triggers a KASAN error, and the error handler is
    able to say where the object has been allocated and deallocated.

    When it's time for the object to leave the quarantine, its state becomes
    KASAN_STATE_FREE and it's returned to the allocator. From then on the
    allocator may reuse it for another allocation. Before that happens,
    it's still possible to detect a use-after-free on that object (it
    retains the allocation/deallocation stacks).

    When the allocator reuses this object, the shadow is unpoisoned and the
    old allocation/deallocation stacks are wiped. Therefore a use of this
    object, even an incorrect one, won't trigger an ASan warning.

    Without the quarantine, it's not guaranteed that the objects aren't
    reused immediately; that's why the probability of catching a
    use-after-free is lower than with the quarantine in place.

    Freed objects are first added to per-cpu quarantine queues. When a
    cache is destroyed or memory shrinking is requested, the objects are
    moved into the global quarantine queue. Whenever a kmalloc call allows
    memory reclaiming, the oldest objects are popped out of the global queue
    until the total size of objects in quarantine is less than 3/4 of the
    maximum quarantine size (which is a fraction of installed physical
    memory).

    As long as an object remains in the quarantine, KASAN is able to report
    accesses to it, so the chance of reporting a use-after-free is
    increased. Once the object leaves quarantine, the allocator may reuse
    it, in which case the object is unpoisoned and KASAN can't detect
    incorrect accesses to it.

    Right now quarantine support is only enabled in SLAB allocator.
    Unification of KASAN features in SLAB and SLUB will be done later.

    This patch is based on the "mm: kasan: quarantine" patch originally
    prepared by Dmitry Chernenkov. A number of improvements have been
    suggested by Andrey Ryabinin.
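
    Conceptually, the free path with quarantine looks like this (a sketch;
    the helper names are placeholders, not the exact kernel symbols):

    /* Called from the slab free hook: instead of letting the allocator
     * reclaim the object, poison it and park it in the quarantine. */
    static bool quarantine_slab_free(struct kmem_cache *cache, void *object)
    {
            poison_object_shadow(cache, object, KASAN_KMALLOC_FREE);
            record_free_stack(cache, object);       /* for later reports */
            quarantine_put(cache, object);          /* per-CPU queue first */
            return true;    /* tell the allocator the free is deferred */
    }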

    [glider@google.com: v9]
    Link: http://lkml.kernel.org/r/1462987130-144092-1-git-send-email-glider@google.com
    Signed-off-by: Alexander Potapenko
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Cc: Steven Rostedt
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     

26 Mar, 2016

2 commits

  • Implement the stack depot and provide CONFIG_STACKDEPOT. The stack depot
    will allow KASAN to store allocation/deallocation stack traces for memory
    chunks. The stack traces are stored in a hash table and referenced by
    handles which reside in the kasan_alloc_meta and kasan_free_meta
    structures in the allocated memory chunks.

    IRQ stack traces are cut below the IRQ entry point to avoid unnecessary
    duplication.

    Right now stackdepot support is only enabled in the SLAB allocator. Once
    KASAN features in SLAB are on par with those in SLUB, we can switch SLUB
    to stackdepot as well, thus removing the dependency on SLUB stack
    bookkeeping, which wastes a lot of memory.

    This patch is based on the "mm: kasan: stack depots" patch originally
    prepared by Dmitry Chernenkov.

    Joonsoo has said that he plans to reuse the stackdepot code for the
    mm/page_owner.c debugging facility.
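
    A sketch of how an allocation stack becomes a handle (this reflects the
    stack_trace based stackdepot API of that time; treat the exact
    signatures as assumptions):

    #include <linux/stackdepot.h>
    #include <linux/stacktrace.h>

    static depot_stack_handle_t save_stack(gfp_t flags)
    {
            unsigned long entries[64];
            struct stack_trace trace = {
                    .entries        = entries,
                    .max_entries    = ARRAY_SIZE(entries),
                    .skip           = 0,
            };

            save_stack_trace(&trace);
            /* Deduplicates identical traces and returns a small handle that
             * can be stored in kasan_alloc_meta / kasan_free_meta. */
            return depot_save_stack(&trace, flags);
    }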

    [akpm@linux-foundation.org: s/depot_stack_handle/depot_stack_handle_t]
    [aryabinin@virtuozzo.com: comment style fixes]
    Signed-off-by: Alexander Potapenko
    Signed-off-by: Andrey Ryabinin
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Steven Rostedt
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     
  • Add KASAN hooks to SLAB allocator.

    This patch is based on the "mm: kasan: unified support for SLUB and SLAB
    allocators" patch originally prepared by Dmitry Chernenkov.

    Signed-off-by: Alexander Potapenko
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Cc: Steven Rostedt
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     

06 Nov, 2015

2 commits

  • Each access with an address lower than
    kasan_shadow_to_mem(KASAN_SHADOW_START) is reported as a
    user-memory-access. This is not always true: the accessed address might
    not be in user space. Fix this by reporting such accesses as
    null-ptr-derefs or wild-memory-accesses.

    There's another reason for this change. For userspace ASan we have a
    bunch of systems that analyze error types for the purpose of
    classification and deduplication. Sooner or later we will write them for
    KASAN as well, and then clearly and explicitly stated error types will
    bring value.
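
    The classification is a simple range check on the faulting address;
    roughly (a sketch of the idea):

    const char *bug_type;

    if (addr < PAGE_SIZE)
            bug_type = "null-ptr-deref";
    else if (addr < TASK_SIZE)
            bug_type = "user-memory-access";
    else
            bug_type = "wild-memory-access";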

    Signed-off-by: Andrey Konovalov
    Cc: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Cc: Konstantin Serebryany
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • The function only disables/enables reporting. In a later patch we will
    be adding a KASAN early enable/disable. Rename kasan_enabled to properly
    reflect its function.

    Signed-off-by: Aneesh Kumar K.V
    Reviewed-by: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Aneesh Kumar K.V
     

26 Jun, 2015

1 commit


14 Feb, 2015

5 commits

  • This feature lets us detect out-of-bounds accesses to global variables.
    It works both for globals in the kernel image and for globals in modules.
    Currently it won't work for symbols in user-specified sections (e.g.
    __init, __read_mostly, ...).

    The idea is simple. The compiler grows each global variable by the
    redzone size and adds constructors that invoke the
    __asan_register_globals() function. Information about the global
    variable (address, size, size with redzone, ...) is passed to
    __asan_register_globals() so we can poison the variable's redzone.
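
    A sketch of the registration path (the helper names follow the rest of
    mm/kasan but are an assumption; the rounding constant is the usual
    8-byte shadow granule):

    static void register_global(struct kasan_global *global)
    {
            size_t aligned_size = round_up(global->size, KASAN_SHADOW_SCALE_SIZE);

            /* The variable itself is accessible... */
            kasan_unpoison_shadow(global->beg, global->size);
            /* ...and everything from its (aligned) end up to the end of the
             * redzone is poisoned as a global redzone. */
            kasan_poison_shadow(global->beg + aligned_size,
                                global->size_with_redzone - aligned_size,
                                KASAN_GLOBAL_REDZONE);
    }

    void __asan_register_globals(struct kasan_global *globals, size_t size)
    {
            size_t i;

            for (i = 0; i < size; i++)
                    register_global(&globals[i]);
    }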

    This patch also forces module_alloc() to return 8*PAGE_SIZE aligned
    addresses, making shadow memory handling
    (kasan_module_alloc()/kasan_module_free()) simpler. Such alignment
    guarantees that each shadow page backing the modules' address space
    corresponds to only one module_alloc() allocation.

    Signed-off-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrey Konovalov
    Cc: Yuri Gribov
    Cc: Konstantin Khlebnikov
    Cc: Sasha Levin
    Cc: Christoph Lameter
    Cc: Joonsoo Kim
    Cc: Dave Hansen
    Cc: Andi Kleen
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
  • Stack instrumentation allows detecting out-of-bounds memory accesses to
    variables allocated on the stack. The compiler adds redzones around every
    variable on the stack and poisons the redzones in the function's
    prologue.

    This approach significantly increases stack usage, so all in-kernel
    stack sizes were doubled.

    Signed-off-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrey Konovalov
    Cc: Yuri Gribov
    Cc: Konstantin Khlebnikov
    Cc: Sasha Levin
    Cc: Christoph Lameter
    Cc: Joonsoo Kim
    Cc: Dave Hansen
    Cc: Andi Kleen
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
  • With this patch KASAN will be able to catch bugs in memory allocated by
    SLUB. Initially all objects in a newly allocated slab page are marked as
    redzone. Later, when a SLUB object is allocated, the number of bytes
    requested by the caller is marked as accessible, and the rest of the
    object (including SLUB's metadata) is marked as redzone (inaccessible).

    We also mark an object as accessible if ksize() was called for it.
    There are some places in the kernel where the ksize() function is called
    to inquire the size of the really allocated area. Such callers may
    validly access the whole allocated memory, so it should be marked as
    accessible.

    Code in the slub.c and slab_common.c files may validly access an
    object's metadata, so instrumentation for these files is disabled.
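
    A sketch of the kmalloc hook that implements this (close in shape to the
    real hook, but treat the names and details as an illustration):

    void kasan_kmalloc(struct kmem_cache *cache, const void *object, size_t size)
    {
            unsigned long redzone_start, redzone_end;

            if (unlikely(object == NULL))
                    return;

            /* Only the bytes the caller asked for become accessible... */
            redzone_start = round_up((unsigned long)(object + size),
                                     KASAN_SHADOW_SCALE_SIZE);
            redzone_end = (unsigned long)object + cache->object_size;

            kasan_unpoison_shadow(object, size);
            /* ...the tail of the object stays poisoned as a redzone. */
            kasan_poison_shadow((void *)redzone_start,
                                redzone_end - redzone_start,
                                KASAN_KMALLOC_REDZONE);
    }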

    Signed-off-by: Andrey Ryabinin
    Signed-off-by: Dmitry Chernenkov
    Cc: Dmitry Vyukov
    Cc: Konstantin Serebryany
    Signed-off-by: Andrey Konovalov
    Cc: Yuri Gribov
    Cc: Konstantin Khlebnikov
    Cc: Sasha Levin
    Cc: Christoph Lameter
    Cc: Joonsoo Kim
    Cc: Dave Hansen
    Cc: Andi Kleen
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
  • Add kernel address sanitizer hooks to mark allocated pages' addresses as
    accessible in the corresponding shadow region, and to mark freed pages as
    inaccessible.
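
    The hooks themselves are small; roughly (a sketch):

    void kasan_alloc_pages(struct page *page, unsigned int order)
    {
            /* Freshly allocated pages become fully accessible. */
            if (likely(!PageHighMem(page)))
                    kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
    }

    void kasan_free_pages(struct page *page, unsigned int order)
    {
            /* Freed pages are poisoned so any later access is reported. */
            if (likely(!PageHighMem(page)))
                    kasan_poison_shadow(page_address(page), PAGE_SIZE << order,
                                        KASAN_FREE_PAGE);
    }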

    Signed-off-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Signed-off-by: Andrey Konovalov
    Cc: Yuri Gribov
    Cc: Konstantin Khlebnikov
    Cc: Sasha Levin
    Cc: Christoph Lameter
    Cc: Joonsoo Kim
    Cc: Dave Hansen
    Cc: Andi Kleen
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     
  • Kernel Address sanitizer (KASan) is a dynamic memory error detector. It
    provides a fast and comprehensive solution for finding use-after-free and
    out-of-bounds bugs.

    KASAN uses compile-time instrumentation to check every memory access,
    therefore GCC > v4.9.2 is required. v4.9.2 almost works, but has issues
    with putting symbol aliases into the wrong section, which breaks KASAN
    instrumentation of globals.

    This patch only adds the infrastructure for the kernel address sanitizer.
    It's not available for use yet. The idea and some code were borrowed from
    [1].

    Basic idea:

    The main idea of KASAN is to use shadow memory to record whether each byte
    of memory is safe to access or not, and use compiler's instrumentation to
    check the shadow memory on each memory access.

    Address sanitizer uses 1/8 of the memory addressable in kernel for shadow
    memory and uses direct mapping with a scale and offset to translate a
    memory address to its corresponding shadow address.

    Here is the function that translates an address to its corresponding
    shadow address:

    unsigned long kasan_mem_to_shadow(unsigned long addr)
    {
            return (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET;
    }

    where KASAN_SHADOW_SCALE_SHIFT = 3.

    So for every 8 bytes of memory there is one corresponding byte of shadow
    memory. The following encoding is used for each shadow byte: 0 means that
    all 8 bytes of the corresponding memory region are valid for access;
    k (1 <= k <= 7) means that the first k bytes are valid for access and the
    other (8 - k) bytes are not; negative values mark the whole 8-byte region
    as inaccessible, with different negative values distinguishing different
    kinds of inaccessible memory (redzones, freed memory).
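
    Using that encoding, checking a single-byte access boils down to the
    following (a sketch; KASAN_SHADOW_MASK is assumed to be
    KASAN_SHADOW_SCALE_SIZE - 1, i.e. 7):

    static bool memory_is_poisoned_1(unsigned long addr)
    {
            s8 shadow_value = *(s8 *)kasan_mem_to_shadow(addr);

            if (unlikely(shadow_value)) {
                    /* Index of the accessed byte within its 8-byte granule;
                     * it is valid only if it is below the shadow value. */
                    s8 last_accessible_byte = addr & KASAN_SHADOW_MASK;
                    return unlikely(last_accessible_byte >= shadow_value);
            }
            return false;
    }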
    Acked-by: Michal Marek
    Signed-off-by: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Konstantin Serebryany
    Cc: Dmitry Chernenkov
    Cc: Yuri Gribov
    Cc: Konstantin Khlebnikov
    Cc: Sasha Levin
    Cc: Christoph Lameter
    Cc: Joonsoo Kim
    Cc: Dave Hansen
    Cc: Andi Kleen
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Stephen Rothwell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin