25 Sep, 2019

3 commits

  • Replace 1 << compound_order(page) with compound_nr(page). Minor
    improvements in readability.

    Link: http://lkml.kernel.org/r/20190721104612.19120-4-willy@infradead.org
    Signed-off-by: Matthew Wilcox (Oracle)
    Reviewed-by: Andrew Morton
    Reviewed-by: Ira Weiny
    Acked-by: Kirill A. Shutemov
    Cc: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matthew Wilcox (Oracle)
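
    A minimal sketch of the new helper, assuming it simply wraps the expression
    it replaces:

    static inline unsigned long compound_nr(struct page *page)
    {
            /* number of base pages spanned by this (possibly compound) page */
            return 1UL << compound_order(page);
    }

    Callers then write compound_nr(page) instead of 1 << compound_order(page).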
     
  • Patch series "Make working with compound pages easier", v2.

    These three patches add three helpers and convert the appropriate
    places to use them.

    This patch (of 3):

    It's unnecessarily hard to find out the size of a potentially huge page.
    Replace 'PAGE_SIZE << compound_order(page)' with page_size(page).

    Link: http://lkml.kernel.org/r/20190721104612.19120-2-willy@infradead.org
    Signed-off-by: Matthew Wilcox (Oracle)
    Acked-by: Michal Hocko
    Reviewed-by: Andrew Morton
    Reviewed-by: Ira Weiny
    Acked-by: Kirill A. Shutemov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matthew Wilcox (Oracle)
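
    A sketch of page_size(), assuming it wraps the expression being replaced:

    /* number of bytes in this page, compound or not */
    static inline unsigned long page_size(struct page *page)
    {
            return PAGE_SIZE << compound_order(page);
    }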
     
  • Add memory corruption identification to bug reports for the software
    tag-based mode. The report now says whether the error is "use-after-free"
    or "out-of-bound" instead of a generic "invalid-access" error, which makes
    it easier for programmers to pinpoint the memory corruption problem.

    We extend the slab to store five old free pointer tags and free
    backtraces. We can then check whether the tagged address is in the slab
    record and make a good guess as to whether the access is more likely a
    "use-after-free" or an "out-of-bound" (sketched below). Therefore every
    slab memory corruption can be identified as either "use-after-free" or
    "out-of-bound".

    [aryabinin@virtuozzo.com: simplify & cleanup code]
    Link: https://lkml.kernel.org/r/3318f9d7-a760-3cc8-b700-f06108ae745f@virtuozzo.com
    Link: http://lkml.kernel.org/r/20190821180332.11450-1-aryabinin@virtuozzo.com
    Signed-off-by: Walter Wu
    Signed-off-by: Andrey Ryabinin
    Acked-by: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Walter Wu
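
    The structure and helper below are illustrative rather than the exact
    mm/kasan code; they only sketch the guess described above: if the faulting
    pointer's tag matches one of the recorded free tags, the bug is most
    likely a use-after-free, otherwise an out-of-bound access is the better
    guess.

    #define KASAN_NR_FREE_RECORDS 5         /* "five old free pointer tags" */

    struct kasan_free_record {
            u8 free_tags[KASAN_NR_FREE_RECORDS];    /* tags of recent frees */
            /* free backtrace handles would be stored alongside */
    };

    static bool looks_like_use_after_free(const struct kasan_free_record *rec,
                                          u8 ptr_tag)
    {
            int i;

            for (i = 0; i < KASAN_NR_FREE_RECORDS; i++)
                    if (rec->free_tags[i] == ptr_tag)
                            return true;    /* was freed with this tag before */
            return false;                   /* more likely out-of-bound */
    }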
     

25 Aug, 2019

1 commit

  • Code like the following:

    ptr = kmalloc(size, GFP_KERNEL);
    page = virt_to_page(ptr);
    offset = offset_in_page(ptr);
    kfree(page_address(page) + offset);

    may produce false-positive invalid-free reports on the kernel with
    CONFIG_KASAN_SW_TAGS=y.

    In the example above we lose the original tag assigned to 'ptr', so
    kfree() gets a pointer with the 0xFF tag. In kfree() we see that the 0xFF
    tag is different from the tag in shadow memory and hence print a false
    report.

    Instead of just comparing tags, do the following:

    1) Check that the shadow doesn't contain KASAN_TAG_INVALID. Otherwise it's
    a double-free and it doesn't matter what tag the pointer has.

    2) If the pointer tag is different from 0xFF, make sure that the tag in
    the shadow is the same as in the pointer.

    Link: http://lkml.kernel.org/r/20190819172540.19581-1-aryabinin@virtuozzo.com
    Fixes: 7f94ffbc4c6a ("kasan: add hooks implementation for tag-based mode")
    Signed-off-by: Andrey Ryabinin
    Reported-by: Walter Wu
    Reported-by: Mark Rutland
    Reviewed-by: Andrey Konovalov
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Catalin Marinas
    Cc: Will Deacon
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
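
    A condensed sketch of the two checks above (the function name is
    illustrative; the tag constants are the ones the tag-based mode uses):

    #define KASAN_TAG_KERNEL        0xFF    /* native pointer tag */
    #define KASAN_TAG_INVALID       0xFE    /* shadow value for freed memory */

    static bool report_invalid_free(u8 shadow_tag, u8 ptr_tag)
    {
            /* 1) freed memory: double-free, whatever tag the pointer has */
            if (shadow_tag == KASAN_TAG_INVALID)
                    return true;

            /* 2) only real (non-0xFF) pointer tags must match the shadow */
            if (ptr_tag != KASAN_TAG_KERNEL && ptr_tag != shadow_tag)
                    return true;

            return false;
    }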
     

13 Jul, 2019

2 commits

  • This changes the {,__}kasan_check_{read,write} functions to return a
    boolean denoting whether the access was valid.

    [sfr@canb.auug.org.au: include types.h for "bool"]
    Link: http://lkml.kernel.org/r/20190705184949.13cdd021@canb.auug.org.au
    Link: http://lkml.kernel.org/r/20190626142014.141844-3-elver@google.com
    Signed-off-by: Marco Elver
    Signed-off-by: Stephen Rothwell
    Reviewed-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Cc: Andrey Konovalov
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Mark Rutland
    Cc: Kees Cook
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Marco Elver
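
    An illustrative caller pattern the boolean return enables (checked_size()
    is a hypothetical helper, not part of the series):

    static size_t checked_size(const void *object, size_t size)
    {
            /* an invalid access is still reported, but the caller can now
             * also bail out instead of touching a bad object */
            if (!kasan_check_read(object, size))
                    return 0;
            return size;
    }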
     
  • Patch series "mm/kasan: Add object validation in ksize()", v3.

    This patch (of 5):

    This introduces __kasan_check_{read,write}. __kasan_check functions may
    be used from anywhere, even compilation units that disable instrumentation
    selectively.

    This change eliminates the need for the __KASAN_INTERNAL definition.

    [elver@google.com: v5]
    Link: http://lkml.kernel.org/r/20190708170706.174189-2-elver@google.com
    Link: http://lkml.kernel.org/r/20190626142014.141844-2-elver@google.com
    Signed-off-by: Marco Elver
    Acked-by: Mark Rutland
    Reviewed-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Cc: Andrey Konovalov
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Kees Cook
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Marco Elver
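
    A sketch of the resulting layering, with simplified prototypes (at this
    point in the series the checks still return void; the real header also
    provides no-op stubs when KASAN is disabled):

    /* always real functions, callable even from compilation units that are
     * built with KASAN instrumentation disabled */
    void __kasan_check_read(const volatile void *p, unsigned int size);
    void __kasan_check_write(const volatile void *p, unsigned int size);

    /* thin wrapper used by instrumented code */
    static inline void kasan_check_read(const volatile void *p, unsigned int size)
    {
            __kasan_check_read(p, size);
    }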
     

02 Jun, 2019

1 commit

  • When building with -Wuninitialized and CONFIG_KASAN_SW_TAGS unset, Clang
    warns:

    mm/kasan/common.c:484:40: warning: variable 'tag' is uninitialized when
    used here [-Wuninitialized]
    kasan_unpoison_shadow(set_tag(object, tag), size);
    ^~~

    set_tag ignores tag in this configuration, but Clang doesn't realize that
    at this point in its pipeline: it points to arch_kasan_set_tag as the
    place where tag is used, even though it will later be expanded to
    (void *)(object) without any use of tag. Initialize tag to 0xff, as this
    removes the warning and doesn't change the meaning of the code.

    Link: https://github.com/ClangBuiltLinux/linux/issues/465
    Link: http://lkml.kernel.org/r/20190502163057.6603-1-natechancellor@gmail.com
    Fixes: 7f94ffbc4c6a ("kasan: add hooks implementation for tag-based mode")
    Signed-off-by: Nathan Chancellor
    Reviewed-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Nick Desaulniers
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nathan Chancellor
     

07 May, 2019

1 commit

  • Pull stack trace updates from Ingo Molnar:
    "So Thomas looked at the stacktrace code recently and noticed a few
    weirdnesses, and we all know how such stories of crummy kernel code
    meeting German engineering perfection end: a 45-patch series to clean
    it all up! :-)

    Here are the changes in Thomas's words:

    'Struct stack_trace is a sinkhole for input and output parameters
    which is largely pointless for most usage sites. In fact if embedded
    into other data structures it creates indirections and extra storage
    overhead for no benefit.

    Looking at all usage sites makes it clear that they just require an
    interface which is based on a storage array. That array is either on
    stack, global or embedded into some other data structure.

    Some of the stack depot usage sites are outright wrong, but fortunately
    the wrongness just causes more stack to be used for nothing and has no
    functional impact.

    Another oddity is the inconsistent termination of the stack trace with
    ULONG_MAX. It's pointless, as the number of entries is what determines
    the length of the stored trace. In fact quite a few call sites remove the
    ULONG_MAX marker afterwards, with or without nasty comments about it. Not
    all architectures do that, and those which do, do it inconsistently,
    either conditional on nr_entries == 0 or unconditionally.

    The following series cleans that up by:

    1) Removing the ULONG_MAX termination in the architecture code

    2) Removing the ULONG_MAX fixups at the call sites

    3) Providing plain storage array based interfaces for stacktrace
    and stackdepot.

    4) Cleaning up the mess at the callsites including some related
    cleanups.

    5) Removing the struct stack_trace based interfaces

    This is not changing the struct stack_trace interfaces at the
    architecture level, but it removes the exposure to the generic
    code'"

    * 'core-stacktrace-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (45 commits)
    x86/stacktrace: Use common infrastructure
    stacktrace: Provide common infrastructure
    lib/stackdepot: Remove obsolete functions
    stacktrace: Remove obsolete functions
    livepatch: Simplify stack trace retrieval
    tracing: Remove the last struct stack_trace usage
    tracing: Simplify stack trace retrieval
    tracing: Make ftrace_trace_userstack() static and conditional
    tracing: Use percpu stack trace buffer more intelligently
    tracing: Simplify stacktrace retrieval in histograms
    lockdep: Simplify stack trace handling
    lockdep: Remove save argument from check_prev_add()
    lockdep: Remove unused trace argument from print_circular_bug()
    drm: Simplify stacktrace handling
    dm persistent data: Simplify stack trace handling
    dm bufio: Simplify stack trace retrieval
    btrfs: ref-verify: Simplify stack trace retrieval
    dma/debug: Simplify stracktrace retrieval
    fault-inject: Simplify stacktrace retrieval
    mm/page_owner: Simplify stack trace handling
    ...

    Linus Torvalds
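
    A sketch of the interface change the series is about, with simplified
    arguments (old struct-based call vs. new storage-array call):

    static void snapshot_stack(void)
    {
            unsigned long entries[8];
            unsigned int nr;
            /* old style: struct stack_trace mixes input and output */
            struct stack_trace trace = {
                    .entries     = entries,
                    .max_entries = ARRAY_SIZE(entries),
                    .skip        = 2,
            };

            save_stack_trace(&trace);   /* result comes back in trace.nr_entries */

            /* new style: pass a plain array, get the entry count back */
            nr = stack_trace_save(entries, ARRAY_SIZE(entries), 2);
    }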
     

29 Apr, 2019

1 commit

  • Replace the indirection through struct stack_trace by using the storage
    array based interfaces.

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Josh Poimboeuf
    Acked-by: Dmitry Vyukov
    Acked-by: Andrey Ryabinin
    Cc: Andy Lutomirski
    Cc: Alexander Potapenko
    Cc: kasan-dev@googlegroups.com
    Cc: linux-mm@kvack.org
    Cc: Steven Rostedt
    Cc: Alexey Dobriyan
    Cc: Andrew Morton
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Catalin Marinas
    Cc: Mike Rapoport
    Cc: Akinobu Mita
    Cc: Christoph Hellwig
    Cc: iommu@lists.linux-foundation.org
    Cc: Robin Murphy
    Cc: Marek Szyprowski
    Cc: Johannes Thumshirn
    Cc: David Sterba
    Cc: Chris Mason
    Cc: Josef Bacik
    Cc: linux-btrfs@vger.kernel.org
    Cc: dm-devel@redhat.com
    Cc: Mike Snitzer
    Cc: Alasdair Kergon
    Cc: Daniel Vetter
    Cc: intel-gfx@lists.freedesktop.org
    Cc: Joonas Lahtinen
    Cc: Maarten Lankhorst
    Cc: dri-devel@lists.freedesktop.org
    Cc: David Airlie
    Cc: Jani Nikula
    Cc: Rodrigo Vivi
    Cc: Tom Zanussi
    Cc: Miroslav Benes
    Cc: linux-arch@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190425094801.963261479@linutronix.de

    Thomas Gleixner
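
    A condensed sketch of what the converted KASAN stack saving looks like
    with the array-based interface (the real code also filters irq stack
    frames):

    static depot_stack_handle_t save_stack(gfp_t flags)
    {
            unsigned long entries[KASAN_STACK_DEPTH];
            unsigned int nr_entries;

            nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
            return stack_depot_save(entries, nr_entries, flags);
    }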
     

15 Apr, 2019

1 commit

  • No architecture terminates the stack trace with ULONG_MAX anymore. Remove
    the cruft.

    Signed-off-by: Thomas Gleixner
    Acked-by: Dmitry Vyukov
    Acked-by: Peter Zijlstra (Intel)
    Cc: Josh Poimboeuf
    Cc: Andy Lutomirski
    Cc: Steven Rostedt
    Cc: Alexander Potapenko
    Cc: Andrey Ryabinin
    Cc: kasan-dev@googlegroups.com
    Cc: linux-mm@kvack.org
    Link: https://lkml.kernel.org/r/20190410103644.750219625@linutronix.de

    Thomas Gleixner
     

03 Apr, 2019

1 commit

  • KASAN inserts extra code for every LOAD/STORE emitted by the compiler.
    Much of this code is simple and safe to run with AC=1, however the
    kasan_report() function, called on error, is most certainly not safe
    to call with AC=1.

    Therefore wrap kasan_report() in user_access_{save,restore}; which for
    x86 SMAP, saves/restores EFLAGS and clears AC before calling the real
    function.

    Also ensure all these functions are built without the __fentry__ hook,
    since the function tracer is not safe either.

    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Borislav Petkov
    Cc: Dmitry Vyukov
    Cc: Josh Poimboeuf
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
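
    A condensed sketch of the wrapping (the split into a separate
    __kasan_report() worker is a simplification here; in the tree the
    save/restore brackets the body of kasan_report() itself):

    void kasan_report(unsigned long addr, size_t size, bool is_write,
                      unsigned long ip)
    {
            /* on x86 with SMAP this saves EFLAGS and clears AC */
            unsigned long ua_flags = user_access_save();

            __kasan_report(addr, size, is_write, ip);   /* the unsafe part */

            user_access_restore(ua_flags);
    }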
     

06 Mar, 2019

1 commit

  • Building little-endian allmodconfig kernels on arm64 started failing
    with the generated atomic.h implementation, since we now try to call
    kasan helpers from the EFI stub:

    aarch64-linux-gnu-ld: drivers/firmware/efi/libstub/arm-stub.stub.o: in function `atomic_set':
    include/generated/atomic-instrumented.h:44: undefined reference to `__efistub_kasan_check_write'

    I suspect that we get similar problems in other files that explicitly
    disable KASAN for some reason but call atomic_t based helper functions.

    We can fix this by checking the predefined __SANITIZE_ADDRESS__ macro
    that the compiler sets instead of checking CONFIG_KASAN, but this in
    turn requires a small hack in mm/kasan/common.c so we do see the extern
    declaration there instead of the inline function.

    Link: http://lkml.kernel.org/r/20181211133453.2835077-1-arnd@arndb.de
    Fixes: b1864b828644 ("locking/atomics: build atomic headers as required")
    Signed-off-by: Arnd Bergmann
    Reported-by: Anders Roxell
    Acked-by: Andrey Ryabinin
    Cc: Ard Biesheuvel
    Cc: Will Deacon
    Cc: Mark Rutland
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Andrey Konovalov
    Cc: Stephen Rothwell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arnd Bergmann
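
    A sketch of the check described above, with simplified declarations: the
    header keys off the compiler-defined macro, so a file built with
    instrumentation disabled (like the EFI stub) gets the empty inlines even
    when CONFIG_KASAN=y:

    #if defined(__SANITIZE_ADDRESS__)
    void kasan_check_read(const volatile void *p, unsigned int size);
    void kasan_check_write(const volatile void *p, unsigned int size);
    #else
    static inline void kasan_check_read(const volatile void *p,
                                        unsigned int size) { }
    static inline void kasan_check_write(const volatile void *p,
                                         unsigned int size) { }
    #endif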
     

22 Feb, 2019

1 commit

  • When an object is kmalloc()'ed, two hooks are called: kasan_slab_alloc()
    and kasan_kmalloc(). Right now we assign a tag twice, once in each of the
    hooks. Fix it by assigning a tag only in the former hook.

    Link: http://lkml.kernel.org/r/ce8c6431da735aa7ec051fd6497153df690eb021.1549921721.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Cc: Alexander Potapenko
    Cc: Andrey Ryabinin
    Cc: Catalin Marinas
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Dmitry Vyukov
    Cc: Evgeniy Stepanov
    Cc: Joonsoo Kim
    Cc: Kostya Serebryany
    Cc: Pekka Enberg
    Cc: Qian Cai
    Cc: Vincenzo Frascino
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
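
    A condensed sketch of the fix, assuming a 'keep_tag' flag on the common
    helper (hook names as in mm/kasan, bodies simplified):

    void *kasan_slab_alloc(struct kmem_cache *cache, void *object, gfp_t flags)
    {
            /* first hook: assign a fresh tag */
            return __kasan_kmalloc(cache, object, cache->object_size, flags, false);
    }

    void *kasan_kmalloc(struct kmem_cache *cache, const void *object,
                        size_t size, gfp_t flags)
    {
            /* second hook: keep whatever tag the object already carries */
            return __kasan_kmalloc(cache, object, size, flags, true);
    }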
     

09 Jan, 2019

2 commits

  • Right now tag-based KASAN can retag the memory that is reallocated via
    krealloc and return a differently tagged pointer even if the same slab
    object gets used and no reallocation technically happens.

    There are a few issues with this approach. One is that krealloc callers
    can't rely on comparing the return value with the passed argument to
    check whether reallocation happened. Another is that a caller that knows
    no reallocation happened may access object memory through the old
    pointer, which leads to false positives. Look at nf_ct_ext_add() for an
    example (a sketch of the pattern follows below).

    Fix this by keeping the same tag if the memory doesn't actually get
    reallocated during krealloc.

    Link: http://lkml.kernel.org/r/bb2a71d17ed072bcc528cbee46fcbd71a6da3be4.1546540962.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Cc: Andrey Ryabinin
    Cc: Christoph Lameter
    Cc: Dmitry Vyukov
    Cc: Mark Rutland
    Cc: Vincenzo Frascino
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
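
    An illustrative caller pattern the fix protects (grow_buffer() is a
    hypothetical helper; nf_ct_ext_add() effectively relies on the same
    compare-and-reuse idiom):

    static void *grow_buffer(void *old, size_t new_size)
    {
            void *new = krealloc(old, new_size, GFP_KERNEL);

            if (!new)
                    return old;     /* allocation failed, old buffer still valid */

            if (new == old) {
                    /* no reallocation happened: with the fix, 'old' still
                     * carries the object's tag, so accessing memory through
                     * it no longer produces a false report */
                    return old;
            }
            return new;
    }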
     
  • Instead of changing cache->align to be aligned to KASAN_SHADOW_SCALE_SIZE
    in kasan_cache_create() we can reuse the ARCH_SLAB_MINALIGN macro.

    Link: http://lkml.kernel.org/r/52ddd881916bcc153a9924c154daacde78522227.1546540962.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Suggested-by: Vincenzo Frascino
    Cc: Andrey Ryabinin
    Cc: Christoph Lameter
    Cc: Dmitry Vyukov
    Cc: Mark Rutland
    Cc: Vincenzo Frascino
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
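
    A sketch of the reuse described above, assuming the arm64 definition ties
    the minimum slab alignment to the shadow granule:

    #ifdef CONFIG_KASAN_SW_TAGS
    #define ARCH_SLAB_MINALIGN      (1ULL << KASAN_SHADOW_SCALE_SHIFT)
    #endif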
     

29 Dec, 2018

6 commits

  • This patch adds a "SPDX-License-Identifier: GPL-2.0" mark to all source
    files under mm/kasan.

    Link: http://lkml.kernel.org/r/bce2d1e618afa5142e81961ab8fa4b4165337380.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • This patch adds __must_check annotations to kasan hooks that return a
    pointer to make sure that a tagged pointer always gets propagated.

    Link: http://lkml.kernel.org/r/03b269c5e453945f724bfca3159d4e1333a8fb1c.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Suggested-by: Andrey Ryabinin
    Cc: Christoph Lameter
    Cc: Dmitry Vyukov
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
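
    A sketch of the annotation pattern (one hook shown, prototype simplified):

    /* the caller must use the returned, possibly re-tagged, pointer */
    void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
                                      size_t size, gfp_t flags);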
     
  • Tag-based KASAN doesn't check memory accesses through pointers tagged
    with 0xff. When page_address is used to get a pointer to memory that
    corresponds to some page, the tag of the resulting pointer gets set to
    0xff, even though the allocated memory might have been tagged differently.

    For slab pages it's impossible to recover the correct tag to return from
    page_address, since the page might contain multiple slab objects tagged
    with different values, and we can't know in advance which one of them is
    going to get accessed. For non-slab pages, however, we can recover the
    tag in page_address, since the whole page was marked with the same tag.

    This patch adds tagging to non-slab memory allocated with pagealloc. To
    set the tag of the pointer returned from page_address, the tag gets
    stored to page->flags when the memory gets allocated.

    Link: http://lkml.kernel.org/r/d758ddcef46a5abc9970182b9137e2fbee202a2c.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Acked-by: Will Deacon
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
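
    A condensed sketch of the allocation side, assuming the
    page_kasan_tag_set() helper that stores the tag in page->flags (the real
    code tags every sub-page of a high-order allocation):

    void kasan_alloc_pages(struct page *page, unsigned int order)
    {
            u8 tag = random_tag();

            /* remember the tag so page_address() can reproduce it later */
            page_kasan_tag_set(page, tag);
            kasan_unpoison_shadow(page_address(page), PAGE_SIZE << order);
    }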
     
  • This commit adds the tag-based KASAN-specific hooks implementation and
    adjusts the common generic and tag-based KASAN ones.

    1. When a new slab cache is created, tag-based KASAN rounds up the size of
    the objects in this cache to KASAN_SHADOW_SCALE_SIZE (== 16).

    2. On each kmalloc, tag-based KASAN generates a random tag, sets the
    shadow memory that corresponds to this object to this tag, and embeds
    this tag value into the top byte of the returned pointer.

    3. On each kfree, tag-based KASAN poisons the shadow memory with a random
    tag to allow detection of use-after-free bugs (see the sketch below).

    The rest of the logic of the hook implementation is very much similar to
    the one provided by generic KASAN. Tag-based KASAN saves allocation and
    free stack metadata to the slab object the same way generic KASAN does.

    Link: http://lkml.kernel.org/r/bda78069e3b8422039794050ddcb2d53d053ed41.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
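
    A condensed sketch of point 3 above (helper names as in mm/kasan, body
    simplified): on free, the object's shadow is repoisoned so that a later
    access through a stale, differently tagged pointer mismatches:

    static void poison_freed_object(void *object, size_t size)
    {
            kasan_poison_shadow(object,
                                round_up(size, KASAN_SHADOW_SCALE_SIZE),
                                random_tag());
    }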
     
  • A tag-based KASAN shadow memory cell contains a memory tag that
    corresponds to the tag in the top byte of the pointer that points to that
    memory. The native top byte value of kernel pointers is 0xff, so with
    tag-based KASAN we need to initialize shadow memory to 0xff.

    [cai@lca.pw: arm64: skip kmemleak for KASAN again]
    Link: http://lkml.kernel.org/r/20181226020550.63712-1-cai@lca.pw
    Link: http://lkml.kernel.org/r/5cc1b789aad7c99cf4f3ec5b328b147ad53edb40.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
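
    A sketch of the per-mode initial shadow value, assuming a define along
    the lines the series uses:

    #ifdef CONFIG_KASAN_SW_TAGS
    #define KASAN_SHADOW_INIT       0xFF    /* match the native pointer tag */
    #else
    #define KASAN_SHADOW_INIT       0       /* generic mode: all accessible */
    #endif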
     
  • Tag-based KASAN reuses a significant part of the generic KASAN code, so
    move the common parts to common.c without any functional changes.

    Link: http://lkml.kernel.org/r/114064d002356e03bb8cc91f7835e20dc61b51d9.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov