27 May, 2020

1 commit

  • commit 33cd65e73abd693c00c4156cf23677c453b41b3b upstream.

    During early boot, while KASAN is not yet initialized, it is possible
    to enter the reporting code path and end up in kasan_report().

    While uninitialized, the branch there prevents generating any reports;
    however, under certain circumstances when branches are being traced
    (TRACE_BRANCH_PROFILING), we may recurse deep enough to cause kernel
    reboots without warning.

    To prevent similar issues in the future, we should disable branch tracing
    for the core runtime.
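
    A minimal sketch of the mechanism (placement illustrative, not the exact
    patch): defining DISABLE_BRANCH_PROFILING before any includes turns the
    traced likely()/unlikely() wrappers back into plain compiler hints for
    that compilation unit, so the KASAN runtime cannot recurse into the
    branch profiler.

    /* mm/kasan/common.c (sketch): must come before any #include */
    #define DISABLE_BRANCH_PROFILING

    #include <linux/compiler.h>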

    [elver@google.com: remove duplicate DISABLE_BRANCH_PROFILING, per Qian Cai]
    Link: https://lore.kernel.org/lkml/20200517011732.GE24705@shao2-debian/
    Link: http://lkml.kernel.org/r/20200522075207.157349-1-elver@google.com
    Reported-by: kernel test robot
    Signed-off-by: Marco Elver
    Signed-off-by: Andrew Morton
    Reviewed-by: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Cc: Andrey Ryabinin
    Cc: Qian Cai
    Link: http://lkml.kernel.org/r/20200517011732.GE24705@shao2-debian/
    Link: http://lkml.kernel.org/r/20200519182459.87166-1-elver@google.com
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Marco Elver
     

25 Sep, 2019

3 commits

  • Replace 1 << compound_order(page) with compound_nr(page). Minor
    improvements in readability.
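
    For reference, the helper amounts to giving the open-coded expression a
    name (a sketch consistent with the replacement described above):

    /* Returns the number of base pages in this potentially compound page. */
    static inline unsigned long compound_nr(struct page *page)
    {
            return 1UL << compound_order(page);
    }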

    Link: http://lkml.kernel.org/r/20190721104612.19120-4-willy@infradead.org
    Signed-off-by: Matthew Wilcox (Oracle)
    Reviewed-by: Andrew Morton
    Reviewed-by: Ira Weiny
    Acked-by: Kirill A. Shutemov
    Cc: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matthew Wilcox (Oracle)
     
  • Patch series "Make working with compound pages easier", v2.

    These three patches add three helpers and convert the appropriate
    places to use them.

    This patch (of 3):

    It's unnecessarily hard to find out the size of a potentially huge page.
    Replace 'PAGE_SIZE << compound_order(page)' with page_size(page).
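
    The helper is essentially (sketch consistent with the description above):

    /* Returns the number of bytes in this potentially huge page. */
    static inline unsigned long page_size(struct page *page)
    {
            return PAGE_SIZE << compound_order(page);
    }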

    Link: http://lkml.kernel.org/r/20190721104612.19120-2-willy@infradead.org
    Signed-off-by: Matthew Wilcox (Oracle)
    Acked-by: Michal Hocko
    Reviewed-by: Andrew Morton
    Reviewed-by: Ira Weiny
    Acked-by: Kirill A. Shutemov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matthew Wilcox (Oracle)
     
  • Add memory corruption identification at bug report for software tag-based
    mode. The report shows whether it is "use-after-free" or "out-of-bound"
    error instead of "invalid-access" error. This will make it easier for
    programmers to see the memory corruption problem.

    We extend the slab to store the five previous free pointer tags and free
    backtraces, so we can check whether the tagged address is in the slab
    record and make a good guess whether the object is more likely
    "use-after-free" or "out-of-bound". Therefore every slab memory
    corruption can be identified as either "use-after-free" or
    "out-of-bound".

    [aryabinin@virtuozzo.com: simplify & cleanup code]
    Link: https://lkml.kernel.org/r/3318f9d7-a760-3cc8-b700-f06108ae745f@virtuozzo.com
    Link: http://lkml.kernel.org/r/20190821180332.11450-1-aryabinin@virtuozzo.com
    Signed-off-by: Walter Wu
    Signed-off-by: Andrey Ryabinin
    Acked-by: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Walter Wu
     

25 Aug, 2019

1 commit

  • Code like this:

    ptr = kmalloc(size, GFP_KERNEL);
    page = virt_to_page(ptr);
    offset = offset_in_page(ptr);
    kfree(page_address(page) + offset);

    may produce false-positive invalid-free reports on the kernel with
    CONFIG_KASAN_SW_TAGS=y.

    In the example above we lose the original tag assigned to 'ptr', so
    kfree() gets the pointer with the 0xFF tag. In kfree() we see that the
    0xFF tag differs from the tag in shadow memory and hence print a false
    report.

    Instead of just comparing tags, do the following:

    1) Check that shadow doesn't contain KASAN_TAG_INVALID. Otherwise it's
    a double-free and it doesn't matter what tag the pointer has.

    2) If pointer tag is different from 0xFF, make sure that tag in the
    shadow is the same as in the pointer.
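
    A condensed sketch of that logic (KASAN_TAG_KERNEL == 0xFF and
    KASAN_TAG_INVALID are the real tag constants; the helper names and exact
    control flow here are illustrative):

    u8 ptr_tag = get_tag(ptr);
    u8 shadow  = *(u8 *)kasan_mem_to_shadow(reset_tag(ptr));

    if (shadow == KASAN_TAG_INVALID)
            report_invalid_free(ptr);  /* double-free: pointer tag irrelevant */
    else if (ptr_tag != KASAN_TAG_KERNEL && ptr_tag != shadow)
            report_invalid_free(ptr);  /* genuine tag mismatch */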

    Link: http://lkml.kernel.org/r/20190819172540.19581-1-aryabinin@virtuozzo.com
    Fixes: 7f94ffbc4c6a ("kasan: add hooks implementation for tag-based mode")
    Signed-off-by: Andrey Ryabinin
    Reported-by: Walter Wu
    Reported-by: Mark Rutland
    Reviewed-by: Andrey Konovalov
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Catalin Marinas
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     

13 Jul, 2019

3 commits

  • This changes {,__}kasan_check_{read,write} functions to return a boolean
    denoting if the access was valid or not.
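
    The resulting declarations look roughly like this (note the
    <linux/types.h> dependency for bool, which is what the fixup below
    addresses):

    #include <linux/types.h>

    bool __kasan_check_read(const volatile void *p, unsigned int size);
    bool __kasan_check_write(const volatile void *p, unsigned int size);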

    [sfr@canb.auug.org.au: include types.h for "bool"]
    Link: http://lkml.kernel.org/r/20190705184949.13cdd021@canb.auug.org.au
    Link: http://lkml.kernel.org/r/20190626142014.141844-3-elver@google.com
    Signed-off-by: Marco Elver
    Signed-off-by: Stephen Rothwell
    Reviewed-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Cc: Andrey Konovalov
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Mark Rutland
    Cc: Kees Cook
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Marco Elver
     
  • Patch series "mm/kasan: Add object validation in ksize()", v3.

    This patch (of 5):

    This introduces __kasan_check_{read,write}. __kasan_check functions may
    be used from anywhere, even compilation units that disable instrumentation
    selectively.

    This change eliminates the need for the __KASAN_INTERNAL definition.
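
    A sketch of the resulting split (shape only, as introduced here; the
    boolean return type arrives in the later patch above): the
    double-underscore variants are always declared, while the plain variants
    only bind to them in instrumented compilation units.

    /* Usable even from files built with KASAN_SANITIZE := n. */
    void __kasan_check_read(const volatile void *p, unsigned int size);
    void __kasan_check_write(const volatile void *p, unsigned int size);

    #ifdef __SANITIZE_ADDRESS__
    #define kasan_check_read  __kasan_check_read
    #define kasan_check_write __kasan_check_write
    #else
    static inline void kasan_check_read(const volatile void *p,
                                        unsigned int size) { }
    static inline void kasan_check_write(const volatile void *p,
                                         unsigned int size) { }
    #endif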

    [elver@google.com: v5]
    Link: http://lkml.kernel.org/r/20190708170706.174189-2-elver@google.com
    Link: http://lkml.kernel.org/r/20190626142014.141844-2-elver@google.com
    Signed-off-by: Marco Elver
    Acked-by: Mark Rutland
    Reviewed-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Cc: Andrey Konovalov
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Kees Cook
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Marco Elver
     
  • This adds support for printing stack frame description on invalid stack
    accesses. The frame description is embedded by the compiler, which is
    parsed and then pretty-printed.

    Currently, we can only print the stack frame info for accesses to the
    task's own stack, but not accesses to other tasks' stacks.

    Example of what it looks like:

    page dumped because: kasan: bad access detected

    addr ffff8880673ef98a is located in stack of task insmod/2008 at offset 106 in frame:
    kasan_stack_oob+0x0/0xf5 [test_kasan]

    this frame has 2 objects:
    [32, 36) 'i'
    [96, 106) 'stack_array'

    Memory state around the buggy address:

    Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=198435
    Link: http://lkml.kernel.org/r/20190522100048.146841-1-elver@google.com
    Signed-off-by: Marco Elver
    Reviewed-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Marco Elver
     

02 Jun, 2019

1 commit

  • When building with -Wuninitialized and CONFIG_KASAN_SW_TAGS unset, Clang
    warns:

    mm/kasan/common.c:484:40: warning: variable 'tag' is uninitialized when
    used here [-Wuninitialized]
    kasan_unpoison_shadow(set_tag(object, tag), size);
    ^~~

    set_tag ignores tag in this configuration, but clang doesn't realize it
    at this point in its pipeline: it points to arch_kasan_set_tag as the
    place where tag is used, even though that will later be expanded to
    (void *)(object) without any use of tag. Initialize tag to 0xff; this
    removes the warning and doesn't change the meaning of the code.

    Link: https://github.com/ClangBuiltLinux/linux/issues/465
    Link: http://lkml.kernel.org/r/20190502163057.6603-1-natechancellor@gmail.com
    Fixes: 7f94ffbc4c6a ("kasan: add hooks implementation for tag-based mode")
    Signed-off-by: Nathan Chancellor
    Reviewed-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Nick Desaulniers
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nathan Chancellor
     

07 May, 2019

2 commits

  • Pull arm64 updates from Will Deacon:
    "Mostly just incremental improvements here:

    - Introduce AT_HWCAP2 for advertising CPU features to userspace

    - Expose SVE2 availability to userspace

    - Support for "data cache clean to point of deep persistence" (DC PODP)

    - Honour "mitigations=off" on the cmdline and advertise status via
    sysfs

    - CPU timer erratum workaround (Neoverse-N1 #1188873)

    - Introduce perf PMU driver for the SMMUv3 performance counters

    - Add config option to disable the kuser helpers page for AArch32 tasks

    - Futex modifications to ensure liveness under contention

    - Rework debug exception handling to separate kernel and user
    handlers

    - Non-critical fixes and cleanup"

    * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (92 commits)
    Documentation: Add ARM64 to kernel-parameters.rst
    arm64/speculation: Support 'mitigations=' cmdline option
    arm64: ssbs: Don't treat CPUs with SSBS as unaffected by SSB
    arm64: enable generic CPU vulnerabilites support
    arm64: add sysfs vulnerability show for speculative store bypass
    arm64: Fix size of __early_cpu_boot_status
    clocksource/arm_arch_timer: Use arch_timer_read_counter to access stable counters
    clocksource/arm_arch_timer: Remove use of workaround static key
    clocksource/arm_arch_timer: Drop use of static key in arch_timer_reg_read_stable
    clocksource/arm_arch_timer: Direcly assign set_next_event workaround
    arm64: Use arch_timer_read_counter instead of arch_counter_get_cntvct
    watchdog/sbsa: Use arch_timer_read_counter instead of arch_counter_get_cntvct
    ARM: vdso: Remove dependency with the arch_timer driver internals
    arm64: Apply ARM64_ERRATUM_1188873 to Neoverse-N1
    arm64: Add part number for Neoverse N1
    arm64: Make ARM64_ERRATUM_1188873 depend on COMPAT
    arm64: Restrict ARM64_ERRATUM_1188873 mitigation to AArch32
    arm64: mm: Remove pte_unmap_nested()
    arm64: Fix compiler warning from pte_unmap() with -Wunused-but-set-variable
    arm64: compat: Reduce address limit for 64K pages
    ...

    Linus Torvalds
     
  • Pull stack trace updates from Ingo Molnar:
    "So Thomas looked at the stacktrace code recently and noticed a few
    weirdnesses, and we all know how such stories of crummy kernel code
    meeting German engineering perfection end: a 45-patch series to clean
    it all up! :-)

    Here's the changes in Thomas's words:

    'Struct stack_trace is a sinkhole for input and output parameters
    which is largely pointless for most usage sites. In fact if embedded
    into other data structures it creates indirections and extra storage
    overhead for no benefit.

    Looking at all usage sites makes it clear that they just require an
    interface which is based on a storage array. That array is either on
    stack, global or embedded into some other data structure.

    Some of the stack depot usage sites are outright wrong, but
    fortunately the wrongness just causes more stack being used for
    nothing and does not have functional impact.

    Another oddity is the inconsistent termination of the stack trace
    with ULONG_MAX. It's pointless as the number of entries is what
    determines the length of the stored trace. In fact quite some call
    sites remove the ULONG_MAX marker afterwards with or without nasty
    comments about it. Not all architectures do that, and those which do,
    do it inconsistently, either conditional on nr_entries == 0 or
    unconditionally.

    The following series cleans that up by:

    1) Removing the ULONG_MAX termination in the architecture code

    2) Removing the ULONG_MAX fixups at the call sites

    3) Providing plain storage array based interfaces for stacktrace
    and stackdepot.

    4) Cleaning up the mess at the callsites including some related
    cleanups.

    5) Removing the struct stack_trace based interfaces

    This is not changing the struct stack_trace interfaces at the
    architecture level, but it removes the exposure to the generic
    code'"

    * 'core-stacktrace-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (45 commits)
    x86/stacktrace: Use common infrastructure
    stacktrace: Provide common infrastructure
    lib/stackdepot: Remove obsolete functions
    stacktrace: Remove obsolete functions
    livepatch: Simplify stack trace retrieval
    tracing: Remove the last struct stack_trace usage
    tracing: Simplify stack trace retrieval
    tracing: Make ftrace_trace_userstack() static and conditional
    tracing: Use percpu stack trace buffer more intelligently
    tracing: Simplify stacktrace retrieval in histograms
    lockdep: Simplify stack trace handling
    lockdep: Remove save argument from check_prev_add()
    lockdep: Remove unused trace argument from print_circular_bug()
    drm: Simplify stacktrace handling
    dm persistent data: Simplify stack trace handling
    dm bufio: Simplify stack trace retrieval
    btrfs: ref-verify: Simplify stack trace retrieval
    dma/debug: Simplify stracktrace retrieval
    fault-inject: Simplify stacktrace retrieval
    mm/page_owner: Simplify stack trace handling
    ...

    Linus Torvalds
     

29 Apr, 2019

1 commit

  • Replace the indirection through struct stack_trace by using the storage
    array based interfaces.

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Josh Poimboeuf
    Acked-by: Dmitry Vyukov
    Acked-by: Andrey Ryabinin
    Cc: Andy Lutomirski
    Cc: Alexander Potapenko
    Cc: kasan-dev@googlegroups.com
    Cc: linux-mm@kvack.org
    Cc: Steven Rostedt
    Cc: Alexey Dobriyan
    Cc: Andrew Morton
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Catalin Marinas
    Cc: Mike Rapoport
    Cc: Akinobu Mita
    Cc: Christoph Hellwig
    Cc: iommu@lists.linux-foundation.org
    Cc: Robin Murphy
    Cc: Marek Szyprowski
    Cc: Johannes Thumshirn
    Cc: David Sterba
    Cc: Chris Mason
    Cc: Josef Bacik
    Cc: linux-btrfs@vger.kernel.org
    Cc: dm-devel@redhat.com
    Cc: Mike Snitzer
    Cc: Alasdair Kergon
    Cc: Daniel Vetter
    Cc: intel-gfx@lists.freedesktop.org
    Cc: Joonas Lahtinen
    Cc: Maarten Lankhorst
    Cc: dri-devel@lists.freedesktop.org
    Cc: David Airlie
    Cc: Jani Nikula
    Cc: Rodrigo Vivi
    Cc: Tom Zanussi
    Cc: Miroslav Benes
    Cc: linux-arch@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190425094801.963261479@linutronix.de

    Thomas Gleixner
     

15 Apr, 2019

1 commit

  • No architecture terminates the stack trace with ULONG_MAX anymore. Remove
    the cruft.

    Signed-off-by: Thomas Gleixner
    Acked-by: Dmitry Vyukov
    Acked-by: Peter Zijlstra (Intel)
    Cc: Josh Poimboeuf
    Cc: Andy Lutomirski
    Cc: Steven Rostedt
    Cc: Alexander Potapenko
    Cc: Andrey Ryabinin
    Cc: kasan-dev@googlegroups.com
    Cc: linux-mm@kvack.org
    Link: https://lkml.kernel.org/r/20190410103644.750219625@linutronix.de

    Thomas Gleixner
     

09 Apr, 2019

1 commit

  • In preparation for arm64 supporting ftrace built on other compiler
    options, let's have Makefiles remove the $(CC_FLAGS_FTRACE) flags,
    whatever these may be, rather than assuming '-pg'.

    There should be no functional change as a result of this patch.

    Reviewed-by: Mark Rutland
    Acked-by: Andrey Ryabinin
    Signed-off-by: Torsten Duwe
    Signed-off-by: Will Deacon

    Torsten Duwe
     

03 Apr, 2019

1 commit

  • KASAN inserts extra code for every LOAD/STORE emitted by the compiler.
    Much of this code is simple and safe to run with AC=1, however the
    kasan_report() function, called on error, is most certainly not safe
    to call with AC=1.

    Therefore wrap kasan_report() in user_access_{save,restore}; which for
    x86 SMAP, saves/restores EFLAGS and clears AC before calling the real
    function.

    Also ensure all the functions are built without the __fentry__ hook;
    the function tracer is not safe to run with AC=1 either.
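
    A sketch of the wrapper shape (user_access_save()/user_access_restore()
    are the real API; the __kasan_report() split below is illustrative):

    void kasan_report(unsigned long addr, size_t size, bool is_write,
                      unsigned long ip)
    {
            unsigned long flags;

            /* On x86 SMAP this saves EFLAGS and clears AC. */
            flags = user_access_save();
            __kasan_report(addr, size, is_write, ip);  /* the real work */
            user_access_restore(flags);
    }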

    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Borislav Petkov
    Cc: Dmitry Vyukov
    Cc: Josh Poimboeuf
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

30 Mar, 2019

1 commit

  • set_tag() compiles away when CONFIG_KASAN_SW_TAGS=n, so make
    arch_kasan_set_tag() a static inline function to fix warnings below.

    mm/kasan/common.c: In function '__kasan_kmalloc':
    mm/kasan/common.c:475:5: warning: variable 'tag' set but not used [-Wunused-but-set-variable]
    u8 tag;
    ^~~
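
    A sketch of the fix: the old macro expanded to ((void *)(addr)) and
    never used 'tag'; a static inline formally consumes both parameters,
    silencing the warning without changing behaviour.

    static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
    {
            return addr;  /* tag intentionally ignored when CONFIG_KASAN_SW_TAGS=n */
    }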

    Link: http://lkml.kernel.org/r/20190307185244.54648-1-cai@lca.pw
    Signed-off-by: Qian Cai
    Reviewed-by: Andrey Konovalov
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Qian Cai
     

13 Mar, 2019

1 commit

  • Add check for the return value of memblock_alloc*() functions and call
    panic() in case of error. The panic message repeats the one used by
    panicking memblock allocators, with parameters adjusted to include
    only the relevant ones.

    The replacement was mostly automated with semantic patches like the one
    below with manual massaging of format strings.

    @@
    expression ptr, size, align;
    @@
    ptr = memblock_alloc(size, align);
    + if (!ptr)
    + panic("%s: Failed to allocate %lu bytes align=0x%lx\n", __func__, size, align);

    [anders.roxell@linaro.org: use '%pa' with 'phys_addr_t' type]
    Link: http://lkml.kernel.org/r/20190131161046.21886-1-anders.roxell@linaro.org
    [rppt@linux.ibm.com: fix format strings for panics after memblock_alloc]
    Link: http://lkml.kernel.org/r/1548950940-15145-1-git-send-email-rppt@linux.ibm.com
    [rppt@linux.ibm.com: don't panic if the allocation in sparse_buffer_init fails]
    Link: http://lkml.kernel.org/r/20190131074018.GD28876@rapoport-lnx
    [akpm@linux-foundation.org: fix xtensa printk warning]
    Link: http://lkml.kernel.org/r/1548057848-15136-20-git-send-email-rppt@linux.ibm.com
    Signed-off-by: Mike Rapoport
    Signed-off-by: Anders Roxell
    Reviewed-by: Guo Ren [c-sky]
    Acked-by: Paul Burton [MIPS]
    Acked-by: Heiko Carstens [s390]
    Reviewed-by: Juergen Gross [Xen]
    Reviewed-by: Geert Uytterhoeven [m68k]
    Acked-by: Max Filippov [xtensa]
    Cc: Catalin Marinas
    Cc: Christophe Leroy
    Cc: Christoph Hellwig
    Cc: "David S. Miller"
    Cc: Dennis Zhou
    Cc: Greentime Hu
    Cc: Greg Kroah-Hartman
    Cc: Guan Xuetao
    Cc: Guo Ren
    Cc: Mark Salter
    Cc: Matt Turner
    Cc: Michael Ellerman
    Cc: Michal Simek
    Cc: Petr Mladek
    Cc: Richard Weinberger
    Cc: Rich Felker
    Cc: Rob Herring
    Cc: Rob Herring
    Cc: Russell King
    Cc: Stafford Horne
    Cc: Tony Luck
    Cc: Vineet Gupta
    Cc: Yoshinori Sato
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mike Rapoport
     

06 Mar, 2019

3 commits

  • kasan_p4d_table(), kasan_pmd_table() and kasan_pud_table() are declared
    as returning bool, but return 0 instead of false, which produces a
    coccinelle warning. Fix it.

    Link: http://lkml.kernel.org/r/1fa6fadf644859e8a6a8ecce258444b49be8c7ee.1551716733.git.andreyknvl@google.com
    Fixes: 0207df4fa1a8 ("kernel/memremap, kasan: make ZONE_DEVICE with work with KASAN")
    Signed-off-by: Andrey Konovalov
    Reported-by: kbuild test robot
    Acked-by: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • Building little-endian allmodconfig kernels on arm64 started failing
    with the generated atomic.h implementation, since we now try to call
    kasan helpers from the EFI stub:

    aarch64-linux-gnu-ld: drivers/firmware/efi/libstub/arm-stub.stub.o: in function `atomic_set':
    include/generated/atomic-instrumented.h:44: undefined reference to `__efistub_kasan_check_write'

    I suspect that we get similar problems in other files that explicitly
    disable KASAN for some reason but call atomic_t based helper functions.

    We can fix this by checking the predefined __SANITIZE_ADDRESS__ macro
    that the compiler sets instead of checking CONFIG_KASAN, but this in
    turn requires a small hack in mm/kasan/common.c so we do see the extern
    declaration there instead of the inline function.
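
    A sketch of the resulting header logic (shape only): the
    __KASAN_INTERNAL escape hatch lets mm/kasan/common.c see the extern
    declarations even though it is compiled without instrumentation.

    /* include/linux/kasan-checks.h (sketch) */
    #if defined(__SANITIZE_ADDRESS__) || defined(__KASAN_INTERNAL)
    void kasan_check_read(const volatile void *p, unsigned int size);
    void kasan_check_write(const volatile void *p, unsigned int size);
    #else
    static inline void kasan_check_read(const volatile void *p,
                                        unsigned int size) { }
    static inline void kasan_check_write(const volatile void *p,
                                         unsigned int size) { }
    #endif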

    Link: http://lkml.kernel.org/r/20181211133453.2835077-1-arnd@arndb.de
    Fixes: b1864b828644 ("locking/atomics: build atomic headers as required")
    Signed-off-by: Arnd Bergmann
    Reported-by: Anders Roxell
    Acked-by: Andrey Ryabinin
    Cc: Ard Biesheuvel
    Cc: Will Deacon
    Cc: Mark Rutland
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Andrey Konovalov
    Cc: Stephen Rothwell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arnd Bergmann
     
  • The use-after-scope bug detector seems to be almost entirely useless
    for the Linux kernel. It has existed for over two years, but I've seen
    only one valid bug so far [1], and that bug was fixed before it was
    reported. There were some other use-after-scope reports, but they were
    false positives due to different reasons, like incompatibility with the
    structleak plugin.

    This feature significantly increases stack usage, especially with GCC
    versions before 9, and causes a 32K stack overflow. It probably adds a
    performance penalty too.

    Given all that, let's remove use-after-scope detector entirely.

    While preparing this patch I noticed that we mistakenly enable
    use-after-scope detection for the clang compiler regardless of the
    CONFIG_KASAN_EXTRA setting. This is also fixed now.

    [1] http://lkml.kernel.org/r/

    Link: http://lkml.kernel.org/r/20190111185842.13978-1-aryabinin@virtuozzo.com
    Signed-off-by: Andrey Ryabinin
    Acked-by: Will Deacon [arm64]
    Cc: Qian Cai
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     

22 Feb, 2019

3 commits

  • Similarly to commit 0d0c8de8788b ("kasan: mark file common so ftrace
    doesn't trace it"), strip the '-pg' flag from the build of
    mm/kasan/tags.c to prevent conflicts with tracing.

    Link: http://lkml.kernel.org/r/9c4c3ce5ccfb894c7fe66d91de7c1da2787b4da4.1550602886.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reported-by: Qian Cai
    Tested-by: Qian Cai
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Catalin Marinas
    Cc: Vincenzo Frascino
    Cc: Kostya Serebryany
    Cc: Evgeniy Stepanov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • There are two issues with assigning random percpu seeds right now:

    1. We use for_each_possible_cpu() to iterate over cpus, but cpumask is
    not set up yet at the moment of kasan_init(), and thus we only set
    the seed for cpu #0.

    2. A call to get_random_u32() always returns the same number and produces
    a message in dmesg, since the random subsystem is not yet initialized.

    Fix 1 by calling kasan_init_tags() after cpumask is set up.

    Fix 2 by using get_cycles() instead of get_random_u32(). This gives us
    lower quality random numbers, but it's good enough, as KASAN is meant to
    be used as a debugging tool and not a mitigation.
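
    With both fixes applied, the seeding routine is essentially (sketch):

    void __init kasan_init_tags(void)
    {
            int cpu;

            /* Runs after cpumask setup, so every CPU gets its own seed. */
            for_each_possible_cpu(cpu)
                    per_cpu(prng_state, cpu) = (u32)get_cycles();
    }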

    Link: http://lkml.kernel.org/r/1f815cc914b61f3516ed4cc9bfd9eeca9bd5d9de.1550677973.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Cc: Catalin Marinas
    Cc: Will Deacon
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • When an object is kmalloc()'ed, two hooks are called: kasan_slab_alloc()
    and kasan_kmalloc(). Right now we assign a tag twice, once in each of the
    hooks. Fix it by assigning a tag only in the former hook.

    Link: http://lkml.kernel.org/r/ce8c6431da735aa7ec051fd6497153df690eb021.1549921721.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Cc: Alexander Potapenko
    Cc: Andrey Ryabinin
    Cc: Catalin Marinas
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Dmitry Vyukov
    Cc: Evgeniy Stepanov
    Cc: Joonsoo Kim
    Cc: Kostya Serebryany
    Cc: Pekka Enberg
    Cc: Qian Cai
    Cc: Vincenzo Frascino
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     

02 Feb, 2019

1 commit

  • When the option CONFIG_KASAN is enabled together with ftrace, the
    function ftrace_graph_caller() gets into a recursion, via the functions
    kasan_check_read() and kasan_check_write().

    Breakpoint 2, ftrace_graph_caller () at ../arch/arm64/kernel/entry-ftrace.S:179
    179 mcount_get_pc x0 // function's pc
    (gdb) bt
    #0 ftrace_graph_caller () at ../arch/arm64/kernel/entry-ftrace.S:179
    #1 0xffffff90101406c8 in ftrace_caller () at ../arch/arm64/kernel/entry-ftrace.S:151
    #2 0xffffff90106fd084 in kasan_check_write (p=0xffffffc06c170878, size=4) at ../mm/kasan/common.c:105
    #3 0xffffff90104a2464 in atomic_add_return (v=, i=) at ./include/generated/atomic-instrumented.h:71
    #4 atomic_inc_return (v=) at ./include/generated/atomic-fallback.h:284
    #5 trace_graph_entry (trace=0xffffffc03f5ff380) at ../kernel/trace/trace_functions_graph.c:441
    #6 0xffffff9010481774 in trace_graph_entry_watchdog (trace=) at ../kernel/trace/trace_selftest.c:741
    #7 0xffffff90104a185c in function_graph_enter (ret=, func=, frame_pointer=18446743799894897728, retp=) at ../kernel/trace/trace_functions_graph.c:196
    #8 0xffffff9010140628 in prepare_ftrace_return (self_addr=18446743592948977792, parent=0xffffffc03f5ff418, frame_pointer=18446743799894897728) at ../arch/arm64/kernel/ftrace.c:231
    #9 0xffffff90101406f4 in ftrace_graph_caller () at ../arch/arm64/kernel/entry-ftrace.S:182
    Backtrace stopped: previous frame identical to this frame (corrupt stack?)
    (gdb)

    Rework so that the kasan implementation isn't traced.

    Link: http://lkml.kernel.org/r/20181212183447.15890-1-anders.roxell@linaro.org
    Signed-off-by: Anders Roxell
    Acked-by: Dmitry Vyukov
    Tested-by: Dmitry Vyukov
    Acked-by: Steven Rostedt (VMware)
    Cc: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Anders Roxell
     

09 Jan, 2019

2 commits

  • Right now tag-based KASAN can retag the memory that is reallocated via
    krealloc and return a differently tagged pointer even if the same slab
    object gets used and no reallocation technically happens.

    There are a few issues with this approach. One is that krealloc callers
    can't rely on comparing the return value with the passed argument to
    check whether reallocation happened. Another is that if a caller knows
    that no reallocation happened, it may access object memory through the
    old pointer, which leads to false positives. Look at nf_ct_ext_add()
    for an example.

    Fix this by keeping the same tag if the memory doesn't actually get
    reallocated during krealloc.
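
    A hypothetical sketch of the rule (names and placement illustrative; the
    actual patch threads this decision through the kmalloc hooks):

    /* Only generate a fresh tag for genuinely new memory. */
    if (!reallocated)
            tag = get_tag(old_ptr);  /* in-place krealloc: keep the old tag */
    else
            tag = random_tag();      /* new object: assign a new random tag */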

    Link: http://lkml.kernel.org/r/bb2a71d17ed072bcc528cbee46fcbd71a6da3be4.1546540962.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Cc: Andrey Ryabinin
    Cc: Christoph Lameter
    Cc: Dmitry Vyukov
    Cc: Mark Rutland
    Cc: Vincenzo Frascino
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • Instead of changing cache->align to be aligned to KASAN_SHADOW_SCALE_SIZE
    in kasan_cache_create() we can reuse the ARCH_SLAB_MINALIGN macro.
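
    On arm64 that amounts to something like (sketch):

    /* arch/arm64/include/asm/cache.h (sketch) */
    #ifdef CONFIG_KASAN_SW_TAGS
    #define ARCH_SLAB_MINALIGN      (1ULL << KASAN_SHADOW_SCALE_SHIFT)
    #endif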

    Link: http://lkml.kernel.org/r/52ddd881916bcc153a9924c154daacde78522227.1546540962.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Suggested-by: Vincenzo Frascino
    Cc: Andrey Ryabinin
    Cc: Christoph Lameter
    Cc: Dmitry Vyukov
    Cc: Mark Rutland
    Cc: Vincenzo Frascino
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     

05 Jan, 2019

1 commit

  • Patch series "Add support for fast mremap".

    This series speeds up the mremap(2) syscall by copying page tables at
    the PMD level even for non-THP systems. There is concern that the extra
    'address' argument that mremap passes to pte_alloc may do something
    subtle and architecture-specific in the future that would make the
    scheme not work. We also find that there is no point in passing the
    'address' to pte_alloc since it's unused. This patch therefore removes
    the argument tree-wide, resulting in a nice negative diff as well,
    while ensuring along the way that the enabled architectures do not do
    anything funky with the 'address' argument that goes unnoticed by the
    optimization.

    Build and boot tested on x86-64. Build tested on arm64. The config
    enablement patch for arm64 will be posted in the future after more
    testing.

    The changes were obtained by applying the following Coccinelle script.
    (thanks Julia for answering all Coccinelle questions!).
    Following fix ups were done manually:
    * Removal of address argument from pte_fragment_alloc
    * Removal of pte_alloc_one_fast definitions from m68k and microblaze.

    // Options: --include-headers --no-includes
    // Note: I split the 'identifier fn' line, so if you are manually
    // running it, please unsplit it so it runs for you.

    virtual patch

    @pte_alloc_func_def depends on patch exists@
    identifier E2;
    identifier fn =~
    "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
    type T2;
    @@

    fn(...
    - , T2 E2
    )
    { ... }

    @pte_alloc_func_proto_noarg depends on patch exists@
    type T1, T2, T3, T4;
    identifier fn =~ "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
    @@

    (
    - T3 fn(T1, T2);
    + T3 fn(T1);
    |
    - T3 fn(T1, T2, T4);
    + T3 fn(T1, T2);
    )

    @pte_alloc_func_proto depends on patch exists@
    identifier E1, E2, E4;
    type T1, T2, T3, T4;
    identifier fn =~
    "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
    @@

    (
    - T3 fn(T1 E1, T2 E2);
    + T3 fn(T1 E1);
    |
    - T3 fn(T1 E1, T2 E2, T4 E4);
    + T3 fn(T1 E1, T2 E2);
    )

    @pte_alloc_func_call depends on patch exists@
    expression E2;
    identifier fn =~
    "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
    @@

    fn(...
    -, E2
    )

    @pte_alloc_macro depends on patch exists@
    identifier fn =~
    "^(__pte_alloc|pte_alloc_one|pte_alloc|__pte_alloc_kernel|pte_alloc_one_kernel)$";
    identifier a, b, c;
    expression e;
    position p;
    @@

    (
    - #define fn(a, b, c) e
    + #define fn(a, b) e
    |
    - #define fn(a, b) e
    + #define fn(a) e
    )

    Link: http://lkml.kernel.org/r/20181108181201.88826-2-joelaf@google.com
    Signed-off-by: Joel Fernandes (Google)
    Suggested-by: Kirill A. Shutemov
    Acked-by: Kirill A. Shutemov
    Cc: Michal Hocko
    Cc: Julia Lawall
    Cc: Kirill A. Shutemov
    Cc: William Kucharski
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joel Fernandes (Google)
     

29 Dec, 2018

13 commits

  • totalram_pages and totalhigh_pages are made static inline functions.

    The main motivation was that managed_page_count_lock handling was
    complicating things. It was discussed at length here:
    https://lore.kernel.org/patchwork/patch/995739/#1181785 So it seemed
    better to remove the lock and convert the variables to atomic, with
    preventing potential store-to-read tearing as a bonus.
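
    The accessor shape after the conversion (sketch):

    extern atomic_long_t _totalram_pages;

    static inline unsigned long totalram_pages(void)
    {
            /* Single atomic read: no lock, no store-to-read tearing. */
            return (unsigned long)atomic_long_read(&_totalram_pages);
    }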

    [akpm@linux-foundation.org: coding style fixes]
    Link: http://lkml.kernel.org/r/1542090790-21750-4-git-send-email-arunks@codeaurora.org
    Signed-off-by: Arun KS
    Suggested-by: Michal Hocko
    Suggested-by: Vlastimil Babka
    Reviewed-by: Konstantin Khlebnikov
    Reviewed-by: Pavel Tatashin
    Acked-by: Michal Hocko
    Acked-by: Vlastimil Babka
    Cc: David Hildenbrand
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arun KS
     
  • This patch adds a "SPDX-License-Identifier: GPL-2.0" mark to all source
    files under mm/kasan.

    Link: http://lkml.kernel.org/r/bce2d1e618afa5142e81961ab8fa4b4165337380.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • This patch adds __must_check annotations to kasan hooks that return a
    pointer to make sure that a tagged pointer always gets propagated.
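
    For example, one of the annotated hooks (sketch):

    /* Discarding the return value would lose the newly assigned tag. */
    void * __must_check kasan_kmalloc(struct kmem_cache *s, const void *object,
                                      size_t size, gfp_t flags);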

    Link: http://lkml.kernel.org/r/03b269c5e453945f724bfca3159d4e1333a8fb1c.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Suggested-by: Andrey Ryabinin
    Cc: Christoph Lameter
    Cc: Dmitry Vyukov
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • Tag-based KASAN doesn't check memory accesses through pointers tagged with
    0xff. When page_address is used to get a pointer to memory that
    corresponds to some page, the tag of the resulting pointer gets set to
    0xff, even though the allocated memory might have been tagged
    differently.

    For slab pages it's impossible to recover the correct tag to return from
    page_address, since the page might contain multiple slab objects tagged
    with different values, and we can't know in advance which one of them is
    going to get accessed. For non-slab pages, however, we can recover the
    tag in page_address, since the whole page was marked with the same tag.

    This patch adds tagging to non-slab memory allocated with pagealloc. To
    set the tag of the pointer returned from page_address, the tag gets
    stored to page->flags when the memory gets allocated.
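
    A sketch of the two halves (page_kasan_tag()/page_kasan_tag_set() store
    the tag in page->flags; exact call sites elided):

    /* At allocation time: remember the tag chosen for the page. */
    page_kasan_tag_set(page, tag);

    /* In page_address(): re-apply it instead of the default 0xff. */
    return set_tag(lowmem_page_address(page), page_kasan_tag(page));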

    Link: http://lkml.kernel.org/r/d758ddcef46a5abc9970182b9137e2fbee202a2c.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Acked-by: Will Deacon
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • This commit adds tag-based KASAN specific hooks implementation and
    adjusts common generic and tag-based KASAN ones.

    1. When a new slab cache is created, tag-based KASAN rounds up the size of
    the objects in this cache to KASAN_SHADOW_SCALE_SIZE (== 16).

    2. On each kmalloc tag-based KASAN generates a random tag, sets the shadow
    memory, that corresponds to this object to this tag, and embeds this
    tag value into the top byte of the returned pointer.

    3. On each kfree tag-based KASAN poisons the shadow memory with a random
    tag to allow detection of use-after-free bugs.

    The rest of the logic of the hook implementation is very much similar to
    the one provided by generic KASAN. Tag-based KASAN saves allocation and
    free stack metadata to the slab object the same way generic KASAN does.

    Link: http://lkml.kernel.org/r/bda78069e3b8422039794050ddcb2d53d053ed41.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • This commit adds routines that print tag-based KASAN error reports.
    Those are quite similar to the generic KASAN ones; the differences are:

    1. The way tag-based KASAN finds the first bad shadow cell (with a
    mismatching tag). Tag-based KASAN compares memory tags from the shadow
    memory to the pointer tag.

    2. Tag-based KASAN reports all bugs with the "KASAN: invalid-access"
    header.

    Also simplify generic KASAN find_first_bad_addr.

    Link: http://lkml.kernel.org/r/aee6897b1bd077732a315fd84c6b4f234dbfdfcb.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • Move generic KASAN specific error reporting routines to generic_report.c
    without any functional changes, leaving common error reporting code in
    report.c to be later reused by tag-based KASAN.

    Link: http://lkml.kernel.org/r/ba48c32f8e5aefedee78998ccff0413bee9e0f5b.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • This commit adds a few helper functions, that are meant to be used to work
    with tags embedded in the top byte of kernel pointers: to set, to get or
    to reset the top byte.
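
    A sketch of those helpers for a 64-bit kernel with the tag in bits
    63:56 (constants illustrative; arm64 carries the real definitions):

    #define KASAN_TAG_SHIFT         56
    #define KASAN_TAG_KERNEL        0xFF  /* native top byte of kernel pointers */
    #define KASAN_TAG_MASK          (0xFFUL << KASAN_TAG_SHIFT)

    static inline void *set_tag(const void *addr, u8 tag)
    {
            return (void *)(((u64)addr & ~KASAN_TAG_MASK) |
                            ((u64)tag << KASAN_TAG_SHIFT));
    }

    static inline u8 get_tag(const void *addr)
    {
            return (u8)((u64)addr >> KASAN_TAG_SHIFT);
    }

    static inline void *reset_tag(const void *addr)
    {
            return set_tag(addr, KASAN_TAG_KERNEL);
    }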

    Link: http://lkml.kernel.org/r/f6c6437bb8e143bc44f42c3c259c62e734be7935.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Cc: Andrey Ryabinin
    Cc: Christoph Lameter
    Cc: Dmitry Vyukov
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • A tag-based KASAN shadow memory cell contains a memory tag that
    corresponds to the tag in the top byte of the pointer that points to
    that memory. The native top byte value of kernel pointers is 0xff, so
    with tag-based KASAN we need to initialize shadow memory to 0xff.

    [cai@lca.pw: arm64: skip kmemleak for KASAN again]
    Link: http://lkml.kernel.org/r/20181226020550.63712-1-cai@lca.pw
    Link: http://lkml.kernel.org/r/5cc1b789aad7c99cf4f3ec5b328b147ad53edb40.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • With the tag-based KASAN mode the early shadow value is 0xff and not 0x00, so
    this patch renames kasan_zero_(page|pte|pmd|pud|p4d) to
    kasan_early_shadow_(page|pte|pmd|pud|p4d) to avoid confusion.

    Link: http://lkml.kernel.org/r/3fed313280ebf4f88645f5b89ccbc066d320e177.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Suggested-by: Mark Rutland
    Cc: Andrey Ryabinin
    Cc: Christoph Lameter
    Cc: Dmitry Vyukov
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • This commit splits the current CONFIG_KASAN config option into two:
    1. CONFIG_KASAN_GENERIC, that enables the generic KASAN mode (the one
    that exists now);
    2. CONFIG_KASAN_SW_TAGS, that enables the software tag-based KASAN mode.

    The name CONFIG_KASAN_SW_TAGS is chosen as in the future we will have
    another hardware tag-based KASAN mode, that will rely on hardware memory
    tagging support in arm64.

    With CONFIG_KASAN_SW_TAGS enabled, compiler options are changed to
    instrument kernel files with -fsanitize=kernel-hwaddress (except the
    ones for which KASAN_SANITIZE := n is set).

    Both CONFIG_KASAN_GENERIC and CONFIG_KASAN_SW_TAGS support both
    CONFIG_KASAN_INLINE and CONFIG_KASAN_OUTLINE instrumentation modes.

    This commit also adds empty placeholder (for now) implementation of
    tag-based KASAN specific hooks inserted by the compiler and adjusts
    common hooks implementation.

    While this commit adds the CONFIG_KASAN_SW_TAGS config option, this
    option is not selectable, as it depends on HAVE_ARCH_KASAN_SW_TAGS,
    which we will enable once all the infrastructure code has been added.

    Link: http://lkml.kernel.org/r/b2550106eb8a68b10fefbabce820910b115aa853.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • We now have two KASAN modes: generic KASAN and tag-based KASAN. Rename
    kasan.c to generic.c to reflect that. Also rename kasan_init.c to init.c
    as it contains initialization code for both KASAN modes.

    Link: http://lkml.kernel.org/r/88c6fd2a883e459e6242030497230e5fb0d44d44.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • Tag-based KASAN reuses a significant part of the generic KASAN code, so
    move the common parts to common.c without any functional changes.

    Link: http://lkml.kernel.org/r/114064d002356e03bb8cc91f7835e20dc61b51d9.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov