12 Dec, 2020

1 commit

  • We hit this issue in our internal test. When generic KASAN is enabled, a
    kfree()'d object is first put into the per-cpu quarantine. If the CPU
    then goes offline, the object remains in that per-cpu quarantine, and a
    subsequent kmem_cache_destroy() makes SLUB report an "Objects remaining"
    error.

    =============================================================================
    BUG test_module_slab (Not tainted): Objects remaining in test_module_slab on __kmem_cache_shutdown()
    -----------------------------------------------------------------------------

    Disabling lock debugging due to kernel taint
    INFO: Slab 0x(____ptrval____) objects=34 used=1 fp=0x(____ptrval____) flags=0x2ffff00000010200
    CPU: 3 PID: 176 Comm: cat Tainted: G B 5.10.0-rc1-00007-g4525c8781ec0-dirty #10
    Hardware name: linux,dummy-virt (DT)
    Call trace:
    dump_backtrace+0x0/0x2b0
    show_stack+0x18/0x68
    dump_stack+0xfc/0x168
    slab_err+0xac/0xd4
    __kmem_cache_shutdown+0x1e4/0x3c8
    kmem_cache_destroy+0x68/0x130
    test_version_show+0x84/0xf0
    module_attr_show+0x40/0x60
    sysfs_kf_seq_show+0x128/0x1c0
    kernfs_seq_show+0xa0/0xb8
    seq_read+0x1f0/0x7e8
    kernfs_fop_read+0x70/0x338
    vfs_read+0xe4/0x250
    ksys_read+0xc8/0x180
    __arm64_sys_read+0x44/0x58
    el0_svc_common.constprop.0+0xac/0x228
    do_el0_svc+0x38/0xa0
    el0_sync_handler+0x170/0x178
    el0_sync+0x174/0x180
    INFO: Object 0x(____ptrval____) @offset=15848
    INFO: Allocated in test_version_show+0x98/0xf0 age=8188 cpu=6 pid=172
    stack_trace_save+0x9c/0xd0
    set_track+0x64/0xf0
    alloc_debug_processing+0x104/0x1a0
    ___slab_alloc+0x628/0x648
    __slab_alloc.isra.0+0x2c/0x58
    kmem_cache_alloc+0x560/0x588
    test_version_show+0x98/0xf0
    module_attr_show+0x40/0x60
    sysfs_kf_seq_show+0x128/0x1c0
    kernfs_seq_show+0xa0/0xb8
    seq_read+0x1f0/0x7e8
    kernfs_fop_read+0x70/0x338
    vfs_read+0xe4/0x250
    ksys_read+0xc8/0x180
    __arm64_sys_read+0x44/0x58
    el0_svc_common.constprop.0+0xac/0x228
    kmem_cache_destroy test_module_slab: Slab cache still has objects

    Register a CPU hotplug callback that removes all objects from the
    per-cpu quarantine when a CPU goes offline, and set a per-cpu variable
    to mark that CPU as offline, as sketched below.
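
    A minimal sketch of the shape of the fix, assuming the helper names of
    mm/kasan/quarantine.c (cpu_quarantine, qlist_free_all()); treat the
    details as illustrative rather than the exact patch:

        static int kasan_cpu_online(unsigned int cpu)
        {
                this_cpu_ptr(&cpu_quarantine)->offline = false;
                return 0;
        }

        static int kasan_cpu_offline(unsigned int cpu)
        {
                struct qlist_head *q = this_cpu_ptr(&cpu_quarantine);

                /* Mark the CPU offline before draining so nothing new is queued. */
                WRITE_ONCE(q->offline, true);
                barrier();
                qlist_free_all(q, NULL);        /* release everything already queued */
                return 0;
        }

        static int __init kasan_cpu_quarantine_init(void)
        {
                return cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "mm/kasan:online",
                                         kasan_cpu_online, kasan_cpu_offline);
        }
        late_initcall(kasan_cpu_quarantine_init);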

    [qiang.zhang@windriver.com: fix slab double free when cpu-hotplug]
    Link: https://lkml.kernel.org/r/20201204102206.20237-1-qiang.zhang@windriver.com

    Link: https://lkml.kernel.org/r/1606895585-17382-2-git-send-email-Kuan-Ying.Lee@mediatek.com
    Signed-off-by: Kuan-Ying Lee
    Signed-off-by: Zqiang
    Suggested-by: Dmitry Vyukov
    Reported-by: Guangye Yang
    Reviewed-by: Dmitry Vyukov
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Matthias Brugger
    Cc: Nicholas Tang
    Cc: Miles Chen
    Cc: Qian Cai
    Cc: Stephen Rothwell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kuan-Ying Lee
     

14 Oct, 2020

2 commits

  • KASAN errors will currently trigger a panic when panic_on_warn is set.
    This renders kasan_multishot useless, as further KASAN errors won't be
    reported if the kernel has already panicked. By making kasan_multishot
    disable this behaviour for KASAN errors, we can still have the benefits
    of panic_on_warn for non-KASAN warnings, yet be able to use kasan_multishot.

    This is particularly important when running KASAN tests, which need to
    trigger multiple KASAN errors: previously these would panic the system if
    panic_on_warn was set, now they can run (and will panic the system should
    non-KASAN warnings show up).
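
    The change amounts to guarding the panic at the end of the KASAN report
    path on the multi-shot flag; a sketch, assuming the kasan_flags and
    KASAN_BIT_MULTI_SHOT names used by mm/kasan/report.c:

        /* at the end of a KASAN report (sketch) */
        if (panic_on_warn && !test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags)) {
                /* Clear first so a WARN() in the panic path cannot re-panic. */
                panic_on_warn = 0;
                panic("panic_on_warn set ...\n");
        }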

    Signed-off-by: David Gow
    Signed-off-by: Andrew Morton
    Tested-by: Andrey Konovalov
    Reviewed-by: Andrey Konovalov
    Reviewed-by: Brendan Higgins
    Cc: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Ingo Molnar
    Cc: Juri Lelli
    Cc: Patricia Alfonso
    Cc: Peter Zijlstra
    Cc: Shuah Khan
    Cc: Vincent Guittot
    Link: https://lkml.kernel.org/r/20200915035828.570483-6-davidgow@google.com
    Link: https://lkml.kernel.org/r/20200910070331.3358048-6-davidgow@google.com
    Signed-off-by: Linus Torvalds

    David Gow
     
  • Integrate KASAN into KUnit testing framework.

    - Fail tests when KASAN reports an error that is not expected
    - Use KUNIT_EXPECT_KASAN_FAIL to expect a KASAN error in KASAN
    tests
    - Expected KASAN reports pass tests and are still printed when run
    without kunit_tool (kunit_tool still bypasses the report due to the
    test passing)
    - KUnit struct in current task used to keep track of the current
    test from KASAN code

    Make use of "[PATCH v3 kunit-next 1/2] kunit: generalize kunit_resource
    API beyond allocated resources" and "[PATCH v3 kunit-next 2/2] kunit: add
    support for named resources" from Alan Maguire [1]

    - A named resource is added to a test when a KASAN report is
    expected
    - This resource contains a struct for kasan_data containing
    booleans representing if a KASAN report is expected and if a
    KASAN report is found

    [1] (https://lore.kernel.org/linux-kselftest/1583251361-12748-1-git-send-email-alan.maguire@oracle.com/T/#t)
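
    A sketch of what a converted test looks like, loosely modeled on the
    converted lib/test_kasan.c (the test name, size, and access are
    illustrative):

        static void kmalloc_oob_right(struct kunit *test)
        {
                char *ptr = kmalloc(123, GFP_KERNEL);

                KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ptr);
                /* The test fails unless this access produces a KASAN report. */
                KUNIT_EXPECT_KASAN_FAIL(test, ptr[123] = 'x');
                kfree(ptr);
        }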

    Signed-off-by: Patricia Alfonso
    Signed-off-by: David Gow
    Signed-off-by: Andrew Morton
    Tested-by: Andrey Konovalov
    Reviewed-by: Andrey Konovalov
    Reviewed-by: Dmitry Vyukov
    Acked-by: Brendan Higgins
    Cc: Andrey Ryabinin
    Cc: Ingo Molnar
    Cc: Juri Lelli
    Cc: Peter Zijlstra
    Cc: Shuah Khan
    Cc: Vincent Guittot
    Link: https://lkml.kernel.org/r/20200915035828.570483-3-davidgow@google.com
    Link: https://lkml.kernel.org/r/20200910070331.3358048-3-davidgow@google.com
    Signed-off-by: Linus Torvalds

    Patricia Alfonso
     

10 Aug, 2020

1 commit

  • Pull Kbuild updates from Masahiro Yamada:

    - run the checker (e.g. sparse) after the compiler

    - remove unneeded cc-option tests for old compiler flags

    - fix tar-pkg to install dtbs

    - introduce ccflags-remove-y and asflags-remove-y syntax

    - allow to trace functions in sub-directories of lib/

    - introduce hostprogs-always-y and userprogs-always-y syntax

    - various Makefile cleanups

    * tag 'kbuild-v5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
    kbuild: stop filtering out $(GCC_PLUGINS_CFLAGS) from cc-option base
    kbuild: include scripts/Makefile.* only when relevant CONFIG is enabled
    kbuild: introduce hostprogs-always-y and userprogs-always-y
    kbuild: sort hostprogs before passing it to ifneq
    kbuild: move host .so build rules to scripts/gcc-plugins/Makefile
    kbuild: Replace HTTP links with HTTPS ones
    kbuild: trace functions in subdirectories of lib/
    kbuild: introduce ccflags-remove-y and asflags-remove-y
    kbuild: do not export LDFLAGS_vmlinux
    kbuild: always create directories of targets
    powerpc/boot: add DTB to 'targets'
    kbuild: buildtar: add dtbs support
    kbuild: remove cc-option test of -ffreestanding
    kbuild: remove cc-option test of -fno-stack-protector
    Revert "kbuild: Create directory for target DTB"
    kbuild: run the checker after the compiler

    Linus Torvalds
     

08 Aug, 2020

3 commits

  • kasan_unpoison_stack_above_sp_to() is defined in kasan code but never
    used. The function was introduced as part of the commit:

    commit 9f7d416c36124667 ("kprobes: Unpoison stack in jprobe_return() for KASAN")

    ... where it was necessary because x86's jprobe_return() would leave
    stale shadow on the stack, and was an oddity in that regard.

    Since then, jprobes were removed entirely, and as of commit:

    commit 80006dbee674f9fa ("kprobes/x86: Remove jprobe implementation")

    ... there have been no callers of this function.

    Remove the declaration and the implementation.

    Signed-off-by: Vincenzo Frascino
    Signed-off-by: Andrew Morton
    Reviewed-by: Mark Rutland
    Reviewed-by: Andrey Konovalov
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Link: http://lkml.kernel.org/r/20200706143505.23299-1-vincenzo.frascino@arm.com
    Signed-off-by: Linus Torvalds

    Vincenzo Frascino
     
  • Move the free track from kasan_alloc_meta to kasan_free_meta so that
    struct kasan_alloc_meta and struct kasan_free_meta are both 16 bytes.
    This is a good size because it matches the minimal redzone size and is
    a natural alignment boundary.

    For the free track, we make the following modifications (see the sketch
    after this list):
    1) Remove the free_track from struct kasan_alloc_meta.
    2) Add the free_track to struct kasan_free_meta.
    3) Add a macro KASAN_KMALLOC_FREETRACK to indicate whether the free
    stack can be printed in the KASAN report.
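
    Roughly, the generic-KASAN layout after the move looks like the sketch
    below (field order and exact members are illustrative; struct kasan_track
    is 8 bytes with a 4-byte stackdepot handle, so each struct comes out to
    16 bytes):

        struct kasan_track {
                u32 pid;
                depot_stack_handle_t stack;     /* 4-byte stackdepot handle */
        };

        struct kasan_alloc_meta {               /* 16 bytes */
                struct kasan_track alloc_track;
                depot_stack_handle_t aux_stack[2];      /* e.g. call_rcu() stacks */
        };

        struct kasan_free_meta {                /* 16 bytes */
                struct qlist_node quarantine_link;      /* 8 bytes */
                struct kasan_track free_track;  /* moved here from the alloc meta */
        };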

    [1]https://bugzilla.kernel.org/show_bug.cgi?id=198437

    [walter-zh.wu@mediatek.com: build fix]
    Link: http://lkml.kernel.org/r/20200710162440.23887-1-walter-zh.wu@mediatek.com

    Suggested-by: Dmitry Vyukov
    Co-developed-by: Dmitry Vyukov
    Signed-off-by: Walter Wu
    Signed-off-by: Andrew Morton
    Tested-by: Dmitry Vyukov
    Reviewed-by: Dmitry Vyukov
    Reviewed-by: Andrey Konovalov
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Joel Fernandes
    Cc: Jonathan Corbet
    Cc: Josh Triplett
    Cc: Lai Jiangshan
    Cc: Mathieu Desnoyers
    Cc: Matthias Brugger
    Cc: "Paul E . McKenney"
    Link: http://lkml.kernel.org/r/20200601051022.1230-1-walter-zh.wu@mediatek.com
    Signed-off-by: Linus Torvalds

    Walter Wu
     
  • Patch series "kasan: memorize and print call_rcu stack", v8.

    This patchset improves KASAN reports by including the call_rcu() call
    stack, which helps programmers debug use-after-free and double-free
    memory issues.

    The KASAN report then looks as follows (cleaned up slightly):

    BUG: KASAN: use-after-free in kasan_rcu_reclaim+0x58/0x60

    Freed by task 0:
    kasan_save_stack+0x24/0x50
    kasan_set_track+0x24/0x38
    kasan_set_free_info+0x18/0x20
    __kasan_slab_free+0x10c/0x170
    kasan_slab_free+0x10/0x18
    kfree+0x98/0x270
    kasan_rcu_reclaim+0x1c/0x60

    Last call_rcu():
    kasan_save_stack+0x24/0x50
    kasan_record_aux_stack+0xbc/0xd0
    call_rcu+0x8c/0x580
    kasan_rcu_uaf+0xf4/0xf8

    Generic KASAN records the last two call_rcu() call stacks and prints up
    to two of them in the KASAN report. This feature is only suitable for
    generic KASAN.

    Because this feature affects the size of struct kasan_alloc_meta and
    struct kasan_free_meta, we optimize their layout and size to keep memory
    consumption down.

    [1]https://bugzilla.kernel.org/show_bug.cgi?id=198437
    [2]https://groups.google.com/forum/#!searchin/kasan-dev/better$20stack$20traces$20for$20rcu%7Csort:date/kasan-dev/KQsjT_88hDE/7rNUZprRBgAJ

    This patch (of 4):

    This patch records the last two call_rcu() call stacks and prints up to
    two of them in the KASAN report.

    When call_rcu() is called, we store its call stack into the slab object's
    alloc metadata, so that the KASAN report can print the RCU stack, as
    sketched below.
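
    A condensed sketch of the recording path, assuming the mm/kasan helper
    names (kasan_save_stack(), get_alloc_info(), kasan_addr_to_page());
    call_rcu() invokes this hook on the callback's head:

        void kasan_record_aux_stack(void *addr)
        {
                struct page *page = kasan_addr_to_page(addr);
                struct kasan_alloc_meta *alloc_info;
                void *object;

                if (!(page && PageSlab(page)))
                        return;

                object = nearest_obj(page->slab_cache, page, addr);
                alloc_info = get_alloc_info(page->slab_cache, object);

                /* Keep the last two call_rcu() stacks, newest first. */
                alloc_info->aux_stack[1] = alloc_info->aux_stack[0];
                alloc_info->aux_stack[0] = kasan_save_stack(GFP_NOWAIT);
        }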

    [1]https://bugzilla.kernel.org/show_bug.cgi?id=198437
    [2]https://groups.google.com/forum/#!searchin/kasan-dev/better$20stack$20traces$20for$20rcu%7Csort:date/kasan-dev/KQsjT_88hDE/7rNUZprRBgAJ

    [walter-zh.wu@mediatek.com: build fix]
    Link: http://lkml.kernel.org/r/20200710162401.23816-1-walter-zh.wu@mediatek.com

    Suggested-by: Dmitry Vyukov
    Signed-off-by: Walter Wu
    Signed-off-by: Andrew Morton
    Tested-by: Dmitry Vyukov
    Reviewed-by: Dmitry Vyukov
    Reviewed-by: Andrey Konovalov
    Acked-by: Paul E. McKenney
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Josh Triplett
    Cc: Mathieu Desnoyers
    Cc: Lai Jiangshan
    Cc: Joel Fernandes
    Cc: Jonathan Corbet
    Cc: Matthias Brugger
    Link: http://lkml.kernel.org/r/20200710162123.23713-1-walter-zh.wu@mediatek.com
    Link: http://lkml.kernel.org/r/20200601050847.1096-1-walter-zh.wu@mediatek.com
    Link: http://lkml.kernel.org/r/20200601050927.1153-1-walter-zh.wu@mediatek.com
    Signed-off-by: Linus Torvalds

    Walter Wu
     

07 Jul, 2020

1 commit

  • Some Makefiles already pass -fno-stack-protector unconditionally.
    For example, arch/arm64/kernel/vdso/Makefile, arch/x86/xen/Makefile.

    There have been no problem reports so far about hard-coding this option,
    so we can assume all supported compilers understand -fno-stack-protector.

    GCC 4.8 and Clang support this option (https://godbolt.org/z/_HDGzN)

    Get rid of cc-option from -fno-stack-protector.

    Remove CONFIG_CC_HAS_STACKPROTECTOR_NONE, which is always 'y'.

    Note:
    arch/mips/vdso/Makefile adds -fno-stack-protector twice, first
    unconditionally, and second conditionally. I removed the second one.

    Signed-off-by: Masahiro Yamada
    Reviewed-by: Kees Cook
    Acked-by: Ard Biesheuvel
    Reviewed-by: Nick Desaulniers

    Masahiro Yamada
     

05 Jun, 2020

1 commit

  • No architectures use include/asm-generic/5level-fixup.h, therefore it
    can be removed along with the __ARCH_HAS_5LEVEL_HACK define and the code
    it surrounds.

    Signed-off-by: Mike Rapoport
    Signed-off-by: Andrew Morton
    Cc: Arnd Bergmann
    Cc: Benjamin Herrenschmidt
    Cc: Brian Cain
    Cc: Catalin Marinas
    Cc: Christophe Leroy
    Cc: Fenghua Yu
    Cc: Geert Uytterhoeven
    Cc: Guan Xuetao
    Cc: James Morse
    Cc: Jonas Bonn
    Cc: Julien Thierry
    Cc: Ley Foon Tan
    Cc: Marc Zyngier
    Cc: Michael Ellerman
    Cc: Paul Mackerras
    Cc: Rich Felker
    Cc: Russell King
    Cc: Stafford Horne
    Cc: Stefan Kristiansson
    Cc: Suzuki K Poulose
    Cc: Tony Luck
    Cc: Will Deacon
    Cc: Yoshinori Sato
    Link: http://lkml.kernel.org/r/20200414153455.21744-15-rppt@kernel.org
    Signed-off-by: Linus Torvalds

    Mike Rapoport
     

03 Jun, 2020

2 commits

  • The kasan_report() function belongs in report.c, as it's a common
    function that does error reporting.

    Reported-by: Leon Romanovsky
    Signed-off-by: Andrey Konovalov
    Signed-off-by: Andrew Morton
    Tested-by: Leon Romanovsky
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Leon Romanovsky
    Link: http://lkml.kernel.org/r/78a81fde6eeda9db72a7fd55fbc33173a515e4b1.1589297433.git.andreyknvl@google.com
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • KASAN uses a single cc-option invocation to disable both conserve-stack
    and stack-protector flags. The former flag is not present in Clang,
    which causes cc-option to fail, and results in stack-protector being
    enabled.

    Fix by using separate cc-option calls for each flag. Also collect all
    flags in a variable to avoid calling cc-option multiple times for
    different files.

    Reported-by: Qian Cai
    Signed-off-by: Andrey Konovalov
    Signed-off-by: Andrew Morton
    Reviewed-by: Marco Elver
    Link: http://lkml.kernel.org/r/c2f0c8e4048852ae014f4a391d96ca42d27e3255.1590779332.git.andreyknvl@google.com
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     

24 May, 2020

1 commit

  • During early boot, while KASAN is not yet initialized, it is possible to
    enter reporting code-path and end up in kasan_report().

    While uninitialized, the branch there prevents generating any reports,
    however, under certain circumstances when branches are being traced
    (TRACE_BRANCH_PROFILING), we may recurse deep enough to cause kernel
    reboots without warning.

    To prevent similar issues in future, we should disable branch tracing
    for the core runtime.

    [elver@google.com: remove duplicate DISABLE_BRANCH_PROFILING, per Qian Cai]
    Link: https://lore.kernel.org/lkml/20200517011732.GE24705@shao2-debian/
    Link: http://lkml.kernel.org/r/20200522075207.157349-1-elver@google.com
    Reported-by: kernel test robot
    Signed-off-by: Marco Elver
    Signed-off-by: Andrew Morton
    Reviewed-by: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Cc: Andrey Ryabinin
    Cc: Qian Cai
    Cc:
    Link: http://lkml.kernel.org/r/20200517011732.GE24705@shao2-debian/
    Link: http://lkml.kernel.org/r/20200519182459.87166-1-elver@google.com
    Signed-off-by: Linus Torvalds

    Marco Elver
     

15 May, 2020

2 commits

  • KASAN is currently missing declarations for __asan_report* and __hwasan*
    functions. This can lead to compiler warnings.

    Reported-by: Leon Romanovsky
    Reported-by: Randy Dunlap
    Signed-off-by: Andrey Konovalov
    Signed-off-by: Andrew Morton
    Tested-by: Leon Romanovsky
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Link: http://lkml.kernel.org/r/45b445a76a79208918f0cc44bfabebaea909b54d.1589297433.git.andreyknvl@google.com
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • KASAN is incompatible with some kernel debugging/tracing features.

    There have been multiple patches that disable those features for some
    KASAN files one by one. Instead of prolonging that, disable these
    features for all KASAN files at once.

    Reported-by: Randy Dunlap
    Signed-off-by: Andrey Konovalov
    Signed-off-by: Andrew Morton
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Leon Romanovsky
    Link: http://lkml.kernel.org/r/29bd753d5ff5596425905b0b07f51153e2345cc1.1589297433.git.andreyknvl@google.com
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     

08 Apr, 2020

2 commits

  • As done in the full WARN() handler, panic_on_warn needs to be cleared
    before calling panic() to avoid recursive panics.
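
    The pattern mirrors the full WARN() handler; a sketch of the KASAN
    report epilogue after this change:

        if (panic_on_warn) {
                /*
                 * This thread may hit another WARN() in the panic path, so
                 * clear the flag first to keep that WARN() from re-panicking.
                 */
                panic_on_warn = 0;
                panic("panic_on_warn set ...\n");
        }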

    Signed-off-by: Kees Cook
    Signed-off-by: Andrew Morton
    Acked-by: Dmitry Vyukov
    Cc: Alexander Potapenko
    Cc: Andrey Konovalov
    Cc: Andrey Ryabinin
    Cc: Ard Biesheuvel
    Cc: Arnd Bergmann
    Cc: Dan Carpenter
    Cc: Elena Petrova
    Cc: "Gustavo A. R. Silva"
    Link: http://lkml.kernel.org/r/20200227193516.32566-6-keescook@chromium.org
    Signed-off-by: Linus Torvalds

    Kees Cook
     
  • filter_irq_stacks() can be used by other tools (e.g. KMSAN), so it needs
    to be moved to a common location. lib/stackdepot.c seems a good place, as
    filter_irq_stacks() is usually applied to the output of
    stack_trace_save().
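
    Typical usage after the move, e.g. when saving a trace into the stack
    depot (a sketch; the 64-entry buffer size is an arbitrary choice):

        unsigned long entries[64];
        unsigned int nr_entries;
        depot_stack_handle_t handle;

        nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
        /* Drop everything below the IRQ entry point before depositing. */
        nr_entries = filter_irq_stacks(entries, nr_entries);
        handle = stack_depot_save(entries, nr_entries, GFP_NOWAIT);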

    This patch has been previously mailed as part of KMSAN RFC patch series.

    [glider@google.com: nds32: linker script: add SOFTIRQENTRY_TEXT]
    Link: http://lkml.kernel.org/r/20200311121002.241430-1-glider@google.com
    [glider@google.com: add IRQENTRY_TEXT and SOFTIRQENTRY_TEXT to linker script]
    Link: http://lkml.kernel.org/r/20200311121124.243352-1-glider@google.com
    Signed-off-by: Alexander Potapenko
    Signed-off-by: Andrew Morton
    Cc: Vegard Nossum
    Cc: Dmitry Vyukov
    Cc: Marco Elver
    Cc: Andrey Konovalov
    Cc: Andrey Ryabinin
    Cc: Arnd Bergmann
    Cc: Sergey Senozhatsky
    Link: http://lkml.kernel.org/r/20200220141916.55455-3-glider@google.com
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     

03 Apr, 2020

1 commit

  • Patch series "fix the missing underflow in memory operation function", v4.

    The patchset produces a KASAN report when the size passed to a memory
    operation function is negative, which helps programmers track down
    undefined-behavior issues. Patch 1 is based on Dmitry's review and
    suggestions; patch 2 is a test that verifies patch 1.

    [1]https://bugzilla.kernel.org/show_bug.cgi?id=199341
    [2]https://lore.kernel.org/linux-arm-kernel/20190927034338.15813-1-walter-zh.wu@mediatek.com/

    This patch (of 2):

    KASAN missed the case where size is a negative number in memset(),
    memcpy(), and memmove(); this causes an out-of-bounds bug, so KASAN
    needs to detect it.

    If size is negative, it is reasonable to classify it as an out-of-bounds
    bug: a negative number cast to size_t becomes a huge value larger than
    ULONG_MAX/2, so the access qualifies as out-of-bounds, as the sketch
    below shows.
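
    The check reduces to a sign test at the top of the common checking path;
    roughly the added hunk (function and parameter names follow
    check_memory_region() in mm/kasan, details illustrative):

        /* A "negative" size is a huge size_t: report it as out-of-bounds. */
        if (unlikely((long)size < 0)) {
                kasan_report(addr, size, write, ret_ip);
                return false;
        }
        /* ... the usual shadow checks follow ... */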

    KASAN report is shown below:

    BUG: KASAN: out-of-bounds in kmalloc_memmove_invalid_size+0x70/0xa0
    Read of size 18446744073709551608 at addr ffffff8069660904 by task cat/72

    CPU: 2 PID: 72 Comm: cat Not tainted 5.4.0-rc1-next-20191004ajb-00001-gdb8af2f372b2-dirty #1
    Hardware name: linux,dummy-virt (DT)
    Call trace:
    dump_backtrace+0x0/0x288
    show_stack+0x14/0x20
    dump_stack+0x10c/0x164
    print_address_description.isra.9+0x68/0x378
    __kasan_report+0x164/0x1a0
    kasan_report+0xc/0x18
    check_memory_region+0x174/0x1d0
    memmove+0x34/0x88
    kmalloc_memmove_invalid_size+0x70/0xa0

    [1] https://bugzilla.kernel.org/show_bug.cgi?id=199341

    [cai@lca.pw: fix -Wdeclaration-after-statement warn]
    Link: http://lkml.kernel.org/r/1583509030-27939-1-git-send-email-cai@lca.pw
    [peterz@infradead.org: fix objtool warning]
    Link: http://lkml.kernel.org/r/20200305095436.GV2596@hirez.programming.kicks-ass.net
    Reported-by: kernel test robot
    Reported-by: Dmitry Vyukov
    Suggested-by: Dmitry Vyukov
    Signed-off-by: Walter Wu
    Signed-off-by: Qian Cai
    Signed-off-by: Andrew Morton
    Reviewed-by: Dmitry Vyukov
    Reviewed-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Link: http://lkml.kernel.org/r/20191112065302.7015-1-walter-zh.wu@mediatek.com
    Signed-off-by: Linus Torvalds

    Walter Wu
     

01 Feb, 2020

1 commit

  • Pull RISC-V updates from Palmer Dabbelt:
    "This contains a handful of patches for this merge window:

    - Support for kasan

    - 32-bit physical addresses on rv32i-based systems

    - Support for CONFIG_DEBUG_VIRTUAL

    - DT entry for the FU540 GPIO controller, which has recently had a
    device driver merged

    These boot a buildroot-based system on QEMU's virt board for me"

    * tag 'riscv-for-linus-5.6-mw0' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux:
    riscv: dts: Add DT support for SiFive FU540 GPIO driver
    riscv: mm: add support for CONFIG_DEBUG_VIRTUAL
    riscv: keep 32-bit kernel to 32-bit phys_addr_t
    kasan: Add riscv to KASAN documentation.
    riscv: Add KASAN support
    kasan: No KASAN's memmove check if archs don't have it.

    Linus Torvalds
     

23 Jan, 2020

1 commit

  • If an architecture doesn't have memmove, the C implementation from
    lib/string.c is used, and that is instrumented by the compiler. So there
    is no need to add KASAN's manual check for memmove.

    Signed-off-by: Nick Hu
    Acked-by: Dmitry Vyukov
    Acked-by: Andrey Ryabinin
    Signed-off-by: Palmer Dabbelt

    Nick Hu
     

31 Dec, 2019

1 commit

  • Make #GP exceptions caused by out-of-bounds KASAN shadow accesses easier
    to understand by computing the address of the original access and
    printing that. More details are in the comments in the patch.
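
    The decode inverts the generic shadow mapping (shadow = (addr >> 3) +
    offset); a condensed sketch of the hook, with constant names as in
    mm/kasan and the messaging simplified:

        void kasan_non_canonical_hook(unsigned long addr)
        {
                unsigned long orig_addr;

                if (addr < KASAN_SHADOW_OFFSET)
                        return;         /* cannot be a shadow access */

                /* Invert mem_to_shadow(): recover the originally accessed range. */
                orig_addr = (addr - KASAN_SHADOW_OFFSET) << KASAN_SHADOW_SCALE_SHIFT;
                pr_alert("KASAN: maybe wild-memory-access in range [0x%016lx-0x%016lx]\n",
                         orig_addr, orig_addr + KASAN_SHADOW_MASK);
        }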

    This turns an error like this:

    kasan: CONFIG_KASAN_INLINE enabled
    kasan: GPF could be caused by NULL-ptr deref or user memory access
    general protection fault, probably for non-canonical address
    0xe017577ddf75b7dd: 0000 [#1] PREEMPT SMP KASAN PTI

    into this:

    general protection fault, probably for non-canonical address
    0xe017577ddf75b7dd: 0000 [#1] PREEMPT SMP KASAN PTI
    KASAN: maybe wild-memory-access in range
    [0x00badbeefbadbee8-0x00badbeefbadbeef]

    The hook is placed in architecture-independent code, but is currently
    only wired up to the X86 exception handler because I'm not sufficiently
    familiar with the address space layout and exception handling mechanisms
    on other architectures.

    Signed-off-by: Jann Horn
    Signed-off-by: Borislav Petkov
    Reviewed-by: Dmitry Vyukov
    Cc: Alexander Potapenko
    Cc: Andrew Morton
    Cc: Andrey Konovalov
    Cc: Andrey Ryabinin
    Cc: Andy Lutomirski
    Cc: Dave Hansen
    Cc: "H. Peter Anvin"
    Cc: Ingo Molnar
    Cc: kasan-dev@googlegroups.com
    Cc: linux-mm
    Cc: Peter Zijlstra
    Cc: Sean Christopherson
    Cc: Thomas Gleixner
    Cc: x86-ml
    Link: https://lkml.kernel.org/r/20191218231150.12139-4-jannh@google.com

    Jann Horn
     

18 Dec, 2019

2 commits

  • kasan_release_vmalloc uses apply_to_page_range to release vmalloc
    shadow. Unfortunately, apply_to_page_range can allocate memory to fill
    in page table entries, which is not what we want.

    Also, kasan_release_vmalloc is called under free_vmap_area_lock, so if
    apply_to_page_range does allocate memory, we get a sleep in atomic bug:

    BUG: sleeping function called from invalid context at mm/page_alloc.c:4681
    in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 15087, name:

    Call Trace:
    __dump_stack lib/dump_stack.c:77 [inline]
    dump_stack+0x199/0x216 lib/dump_stack.c:118
    ___might_sleep.cold.97+0x1f5/0x238 kernel/sched/core.c:6800
    __might_sleep+0x95/0x190 kernel/sched/core.c:6753
    prepare_alloc_pages mm/page_alloc.c:4681 [inline]
    __alloc_pages_nodemask+0x3cd/0x890 mm/page_alloc.c:4730
    alloc_pages_current+0x10c/0x210 mm/mempolicy.c:2211
    alloc_pages include/linux/gfp.h:532 [inline]
    __get_free_pages+0xc/0x40 mm/page_alloc.c:4786
    __pte_alloc_one_kernel include/asm-generic/pgalloc.h:21 [inline]
    pte_alloc_one_kernel include/asm-generic/pgalloc.h:33 [inline]
    __pte_alloc_kernel+0x1d/0x200 mm/memory.c:459
    apply_to_pte_range mm/memory.c:2031 [inline]
    apply_to_pmd_range mm/memory.c:2068 [inline]
    apply_to_pud_range mm/memory.c:2088 [inline]
    apply_to_p4d_range mm/memory.c:2108 [inline]
    apply_to_page_range+0x77d/0xa00 mm/memory.c:2133
    kasan_release_vmalloc+0xa7/0xc0 mm/kasan/common.c:970
    __purge_vmap_area_lazy+0xcbb/0x1f30 mm/vmalloc.c:1313
    try_purge_vmap_area_lazy mm/vmalloc.c:1332 [inline]
    free_vmap_area_noflush+0x2ca/0x390 mm/vmalloc.c:1368
    free_unmap_vmap_area mm/vmalloc.c:1381 [inline]
    remove_vm_area+0x1cc/0x230 mm/vmalloc.c:2209
    vm_remove_mappings mm/vmalloc.c:2236 [inline]
    __vunmap+0x223/0xa20 mm/vmalloc.c:2299
    __vfree+0x3f/0xd0 mm/vmalloc.c:2356
    __vmalloc_area_node mm/vmalloc.c:2507 [inline]
    __vmalloc_node_range+0x5d5/0x810 mm/vmalloc.c:2547
    __vmalloc_node mm/vmalloc.c:2607 [inline]
    __vmalloc_node_flags mm/vmalloc.c:2621 [inline]
    vzalloc+0x6f/0x80 mm/vmalloc.c:2666
    alloc_one_pg_vec_page net/packet/af_packet.c:4233 [inline]
    alloc_pg_vec net/packet/af_packet.c:4258 [inline]
    packet_set_ring+0xbc0/0x1b50 net/packet/af_packet.c:4342
    packet_setsockopt+0xed7/0x2d90 net/packet/af_packet.c:3695
    __sys_setsockopt+0x29b/0x4d0 net/socket.c:2117
    __do_sys_setsockopt net/socket.c:2133 [inline]
    __se_sys_setsockopt net/socket.c:2130 [inline]
    __x64_sys_setsockopt+0xbe/0x150 net/socket.c:2130
    do_syscall_64+0xfa/0x780 arch/x86/entry/common.c:294
    entry_SYSCALL_64_after_hwframe+0x49/0xbe

    Switch to using the apply_to_existing_page_range() helper instead, which
    won't allocate memory.
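
    The call-site change is small; a sketch of kasan_release_vmalloc() after
    the switch, assuming kasan_depopulate_vmalloc_pte is the existing
    per-PTE helper:

        apply_to_existing_page_range(&init_mm,
                                     (unsigned long)shadow_start,
                                     size, kasan_depopulate_vmalloc_pte,
                                     NULL);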

    [akpm@linux-foundation.org: s/apply_to_existing_pages/apply_to_existing_page_range/]
    Link: http://lkml.kernel.org/r/20191205140407.1874-2-dja@axtens.net
    Fixes: 3c5c3cfb9ef4 ("kasan: support backing vmalloc space with real shadow memory")
    Signed-off-by: Daniel Axtens
    Reported-by: Dmitry Vyukov
    Reviewed-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Qian Cai
    Cc: Uladzislau Rezki (Sony)
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Daniel Axtens
     
  • With CONFIG_KASAN_VMALLOC=y any use of memory obtained via vm_map_ram()
    will crash because there is no shadow backing that memory.

    Instead of sprinkling additional kasan_populate_vmalloc() calls all over
    the vmalloc code, move it into alloc_vmap_area(). This will fix
    vm_map_ram() and simplify the code a bit.

    [aryabinin@virtuozzo.com: v2]
    Link: http://lkml.kernel.org/r/20191205095942.1761-1-aryabinin@virtuozzo.com
    Link: http://lkml.kernel.org/r/20191204204534.32202-1-aryabinin@virtuozzo.com
    Fixes: 3c5c3cfb9ef4 ("kasan: support backing vmalloc space with real shadow memory")
    Signed-off-by: Andrey Ryabinin
    Reported-by: Dmitry Vyukov
    Reviewed-by: Uladzislau Rezki (Sony)
    Cc: Daniel Axtens
    Cc: Alexander Potapenko
    Cc: Daniel Axtens
    Cc: Qian Cai
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     

05 Dec, 2019

1 commit

  • I hit the following compile error in arch/x86/

    mm/kasan/common.c: In function kasan_populate_vmalloc:
    mm/kasan/common.c:797:2: error: implicit declaration of function flush_cache_vmap; did you mean flush_rcu_work? [-Werror=implicit-function-declaration]
    flush_cache_vmap(shadow_start, shadow_end);
    ^~~~~~~~~~~~~~~~
    flush_rcu_work
    cc1: some warnings being treated as errors

    Link: http://lkml.kernel.org/r/1575363013-43761-1-git-send-email-zhongjiang@huawei.com
    Fixes: 3c5c3cfb9ef4 ("kasan: support backing vmalloc space with real shadow memory")
    Signed-off-by: zhong jiang
    Reviewed-by: Andrew Morton
    Reviewed-by: Daniel Axtens
    Cc: Mark Rutland
    Cc: Mark Rutland
    Cc: Vasily Gorbik
    Cc: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    zhong jiang
     

02 Dec, 2019

1 commit

  • Patch series "kasan: support backing vmalloc space with real shadow
    memory", v11.

    Currently, vmalloc space is backed by the early shadow page. This means
    that kasan is incompatible with VMAP_STACK.

    This series provides a mechanism to back vmalloc space with real,
    dynamically allocated memory. I have only wired up x86, because that's
    the only currently supported arch I can work with easily, but it's very
    easy to wire up other architectures, and it appears that there is some
    work-in-progress code to do this on arm64 and s390.

    This has been discussed before in the context of VMAP_STACK:
    - https://bugzilla.kernel.org/show_bug.cgi?id=202009
    - https://lkml.org/lkml/2018/7/22/198
    - https://lkml.org/lkml/2019/7/19/822

    In terms of implementation details:

    Most mappings in vmalloc space are small, requiring less than a full
    page of shadow space. Allocating a full shadow page per mapping would
    therefore be wasteful. Furthermore, to ensure that different mappings
    use different shadow pages, mappings would have to be aligned to
    KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.

    Instead, share backing space across multiple mappings. Allocate a
    backing page when a mapping in vmalloc space uses a particular page of
    the shadow region. This page can be shared by other vmalloc mappings
    later on.

    We hook in to the vmap infrastructure to lazily clean up unused shadow
    memory.

    Testing with test_vmalloc.sh on an x86 VM with 2 vCPUs shows that:

    - Turning on KASAN with inline instrumentation, but without vmalloc
    support, introduces a 4.1x-4.2x slowdown in vmalloc operations.

    - Turning this on introduces the following slowdowns over KASAN:
    * ~1.76x slower single-threaded (test_vmalloc.sh performance)
    * ~2.18x slower when both cpus are performing operations
    simultaneously (test_vmalloc.sh sequential_test_order=1)

    This is unfortunate, but given that this is a debug-only feature, it is
    not the end of the world. The benchmarks are also a stress-test for the
    vmalloc subsystem: they're not indicative of an overall 2x slowdown!

    This patch (of 4):

    Hook into vmalloc and vmap, and dynamically allocate real shadow memory
    to back the mappings.

    Most mappings in vmalloc space are small, requiring less than a full
    page of shadow space. Allocating a full shadow page per mapping would
    therefore be wasteful. Furthermore, to ensure that different mappings
    use different shadow pages, mappings would have to be aligned to
    KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.

    Instead, share backing space across multiple mappings. Allocate a
    backing page when a mapping in vmalloc space uses a particular page of
    the shadow region. This page can be shared by other vmalloc mappings
    later on.

    We hook in to the vmap infrastructure to lazily clean up unused shadow
    memory.

    To avoid the difficulties around swapping mappings around, this code
    expects that the part of the shadow region that covers the vmalloc space
    will not be covered by the early shadow page, but will be left unmapped.
    This will require changes in arch-specific code.

    This allows KASAN with VMAP_STACK, and may be helpful for architectures
    that do not have a separate module space (e.g. powerpc64, which I am
    currently working on). It also allows relaxing the module alignment
    back to PAGE_SIZE.

    Testing with test_vmalloc.sh on an x86 VM with 2 vCPUs shows that:

    - Turning on KASAN with inline instrumentation, but without vmalloc
    support, introduces a 4.1x-4.2x slowdown in vmalloc operations.

    - Turning this on introduces the following slowdowns over KASAN:
    * ~1.76x slower single-threaded (test_vmalloc.sh performance)
    * ~2.18x slower when both cpus are performing operations
    simultaneously (test_vmalloc.sh sequential_test_order=1)

    This is unfortunate, but given that this is a debug-only feature, it is
    not the end of the world.

    The full benchmark results are:

    Performance

                                   No KASAN   KASAN original  x baseline   KASAN vmalloc  x baseline  x KASAN
    fix_size_alloc_test              662004         11404956       17.23        19144610       28.92     1.68
    full_fit_alloc_test              710950         12029752       16.92        13184651       18.55     1.10
    long_busy_list_alloc_test       9431875         43990172        4.66        82970178        8.80     1.89
    random_size_alloc_test          5033626         23061762        4.58        47158834        9.37     2.04
    fix_align_alloc_test            1252514         15276910       12.20        31266116       24.96     2.05
    random_size_align_alloc_te      1648501         14578321        8.84        25560052       15.51     1.75
    align_shift_alloc_test              147              830        5.65            5692       38.72     6.86
    pcpu_alloc_test                   80732           125520        1.55          140864        1.74     1.12
    Total Cycles               119240774314     763211341128        6.40   1390338696894       11.66     1.82

    Sequential, 2 cpus (two runs)

                                   No KASAN   KASAN original  x baseline   KASAN vmalloc  x baseline  x KASAN
    fix_size_alloc_test             1423150         14276550       10.03        27733022       19.49     1.94
    full_fit_alloc_test             1754219         14722640        8.39        15030786        8.57     1.02
    long_busy_list_alloc_test      11451858         52154973        4.55       107016027        9.34     2.05
    random_size_alloc_test          5989020         26735276        4.46        68885923       11.50     2.58
    fix_align_alloc_test            2050976         20166900        9.83        50491675       24.62     2.50
    random_size_align_alloc_te      2858229         17971700        6.29        38730225       13.55     2.16
    align_shift_alloc_test              405             6428       15.87           26253       64.82     4.08
    pcpu_alloc_test                  127183           151464        1.19          216263        1.70     1.43
    Total Cycles                54181269392     308723699764        5.70    650772566394       12.01     2.11

    fix_size_alloc_test             1420404         14289308       10.06        27790035       19.56     1.94
    full_fit_alloc_test             1736145         14806234        8.53        15274301        8.80     1.03
    long_busy_list_alloc_test      11404638         52270785        4.58       107550254        9.43     2.06
    random_size_alloc_test          6017006         26650625        4.43        68696127       11.42     2.58
    fix_align_alloc_test            2045504         20280985        9.91        50414862       24.65     2.49
    random_size_align_alloc_te      2845338         17931018        6.30        38510276       13.53     2.15
    align_shift_alloc_test              472             3760        7.97            9656       20.46     2.57
    pcpu_alloc_test                  118643           132732        1.12          146504        1.23     1.10
    Total Cycles                54040011688     309102805492        5.72    651325675652       12.05     2.11

    [dja@axtens.net: fixups]
    Link: http://lkml.kernel.org/r/20191120052719.7201-1-dja@axtens.net
    Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
    Link: http://lkml.kernel.org/r/20191031093909.9228-2-dja@axtens.net
    Signed-off-by: Mark Rutland [shadow rework]
    Signed-off-by: Daniel Axtens
    Co-developed-by: Mark Rutland
    Acked-by: Vasily Gorbik
    Reviewed-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Christophe Leroy
    Cc: Qian Cai
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Daniel Axtens
     

25 Sep, 2019

3 commits

  • Replace 1 << compound_order(page) with compound_nr(page). Minor
    improvements in readability.

    Link: http://lkml.kernel.org/r/20190721104612.19120-4-willy@infradead.org
    Signed-off-by: Matthew Wilcox (Oracle)
    Reviewed-by: Andrew Morton
    Reviewed-by: Ira Weiny
    Acked-by: Kirill A. Shutemov
    Cc: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matthew Wilcox (Oracle)
     
  • Patch series "Make working with compound pages easier", v2.

    These three patches add three helpers and convert the appropriate
    places to use them.

    This patch (of 3):

    It's unnecessarily hard to find out the size of a potentially huge page.
    Replace 'PAGE_SIZE << compound_order(page)' with page_size(page).
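
    The helper itself is a one-liner; a sketch matching the description
    above:

        /* include/linux/mm.h */
        static inline unsigned long page_size(struct page *page)
        {
                /* Number of bytes in this page, huge or not. */
                return PAGE_SIZE << compound_order(page);
        }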

    Link: http://lkml.kernel.org/r/20190721104612.19120-2-willy@infradead.org
    Signed-off-by: Matthew Wilcox (Oracle)
    Acked-by: Michal Hocko
    Reviewed-by: Andrew Morton
    Reviewed-by: Ira Weiny
    Acked-by: Kirill A. Shutemov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matthew Wilcox (Oracle)
     
  • Add memory corruption identification to bug reports for the software
    tag-based mode. The report then shows whether the error is a
    "use-after-free" or "out-of-bound" error instead of a generic
    "invalid-access" error, which makes it easier for programmers to see the
    memory corruption problem.

    We extend the slab to store five old free pointer tags and free
    backtraces, so we can check whether the tagged address is in the slab
    record and make a good guess whether the object is more likely
    "use-after-free" or "out-of-bound". Therefore every slab memory
    corruption can be identified as one or the other.

    [aryabinin@virtuozzo.com: simplify & cleanup code]
    Link: https://lkml.kernel.org/r/3318f9d7-a760-3cc8-b700-f06108ae745f@virtuozzo.com
    Link: http://lkml.kernel.org/r/20190821180332.11450-1-aryabinin@virtuozzo.com
    Signed-off-by: Walter Wu
    Signed-off-by: Andrey Ryabinin
    Acked-by: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Walter Wu
     

25 Aug, 2019

1 commit

  • The code like this:

    ptr = kmalloc(size, GFP_KERNEL);
    page = virt_to_page(ptr);
    offset = offset_in_page(ptr);
    kfree(page_address(page) + offset);

    may produce false-positive invalid-free reports on the kernel with
    CONFIG_KASAN_SW_TAGS=y.

    In the example above we lose the original tag assigned to 'ptr', so
    kfree() gets the pointer with the 0xFF tag. In kfree() we check that the
    0xFF tag is different from the tag in shadow and hence print a false
    report.

    Instead of just comparing tags, do the following (sketched after this
    list):

    1) Check that shadow doesn't contain KASAN_TAG_INVALID. Otherwise it's
    a double-free and it doesn't matter what tag the pointer has.

    2) If pointer tag is different from 0xFF, make sure that tag in the
    shadow is the same as in the pointer.
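
    For tag-based KASAN the two checks combine into something like the
    following (a sketch; KASAN_TAG_KERNEL is the 0xFF tag, KASAN_TAG_INVALID
    marks freed memory):

        static bool shadow_invalid(u8 tag, s8 shadow_byte)
        {
                /* 1) Already freed/invalid: a double-free whatever the tag. */
                if ((u8)shadow_byte == KASAN_TAG_INVALID)
                        return true;
                /* 2) A 0xFF pointer tag (e.g. via page_address()) matches anything. */
                return tag != KASAN_TAG_KERNEL && tag != (u8)shadow_byte;
        }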

    Link: http://lkml.kernel.org/r/20190819172540.19581-1-aryabinin@virtuozzo.com
    Fixes: 7f94ffbc4c6a ("kasan: add hooks implementation for tag-based mode")
    Signed-off-by: Andrey Ryabinin
    Reported-by: Walter Wu
    Reported-by: Mark Rutland
    Reviewed-by: Andrey Konovalov
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Catalin Marinas
    Cc: Will Deacon
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     

13 Jul, 2019

3 commits

  • This changes {,__}kasan_check_{read,write} functions to return a boolean
    denoting if the access was valid or not.
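
    A sketch of the new shape, assuming check_memory_region() already
    reports the validity of the access:

        bool __kasan_check_read(const volatile void *p, unsigned int size)
        {
                /* true if reading p[0..size) is valid, false otherwise */
                return check_memory_region((unsigned long)p, size, false, _RET_IP_);
        }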

    [sfr@canb.auug.org.au: include types.h for "bool"]
    Link: http://lkml.kernel.org/r/20190705184949.13cdd021@canb.auug.org.au
    Link: http://lkml.kernel.org/r/20190626142014.141844-3-elver@google.com
    Signed-off-by: Marco Elver
    Signed-off-by: Stephen Rothwell
    Reviewed-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Cc: Andrey Konovalov
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Mark Rutland
    Cc: Kees Cook
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Marco Elver
     
  • Patch series "mm/kasan: Add object validation in ksize()", v3.

    This patch (of 5):

    This introduces __kasan_check_{read,write}. __kasan_check functions may
    be used from anywhere, even compilation units that disable instrumentation
    selectively.

    This change eliminates the need for the __KASAN_INTERNAL definition.

    [elver@google.com: v5]
    Link: http://lkml.kernel.org/r/20190708170706.174189-2-elver@google.com
    Link: http://lkml.kernel.org/r/20190626142014.141844-2-elver@google.com
    Signed-off-by: Marco Elver
    Acked-by: Mark Rutland
    Reviewed-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Cc: Andrey Konovalov
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Kees Cook
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Marco Elver
     
  • This adds support for printing stack frame description on invalid stack
    accesses. The frame description is embedded by the compiler, which is
    parsed and then pretty-printed.

    Currently, we can only print the stack frame info for accesses to the
    task's own stack, but not accesses to other tasks' stacks.

    Example of what it looks like:

    page dumped because: kasan: bad access detected

    addr ffff8880673ef98a is located in stack of task insmod/2008 at offset 106 in frame:
    kasan_stack_oob+0x0/0xf5 [test_kasan]

    this frame has 2 objects:
    [32, 36) 'i'
    [96, 106) 'stack_array'

    Memory state around the buggy address:

    Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=198435
    Link: http://lkml.kernel.org/r/20190522100048.146841-1-elver@google.com
    Signed-off-by: Marco Elver
    Reviewed-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Marco Elver
     

02 Jun, 2019

1 commit

  • When building with -Wuninitialized and CONFIG_KASAN_SW_TAGS unset, Clang
    warns:

    mm/kasan/common.c:484:40: warning: variable 'tag' is uninitialized when
    used here [-Wuninitialized]
    kasan_unpoison_shadow(set_tag(object, tag), size);
    ^~~

    set_tag ignores tag in this configuration but clang doesn't realize it at
    this point in its pipeline, as it points to arch_kasan_set_tag as being
    the point where it is used, which will later be expanded to (void
    *)(object) without a use of tag. Initialize tag to 0xff, as it removes
    this warning and doesn't change the meaning of the code.

    Link: https://github.com/ClangBuiltLinux/linux/issues/465
    Link: http://lkml.kernel.org/r/20190502163057.6603-1-natechancellor@gmail.com
    Fixes: 7f94ffbc4c6a ("kasan: add hooks implementation for tag-based mode")
    Signed-off-by: Nathan Chancellor
    Reviewed-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Nick Desaulniers
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nathan Chancellor
     

07 May, 2019

2 commits

  • Pull arm64 updates from Will Deacon:
    "Mostly just incremental improvements here:

    - Introduce AT_HWCAP2 for advertising CPU features to userspace

    - Expose SVE2 availability to userspace

    - Support for "data cache clean to point of deep persistence" (DC PODP)

    - Honour "mitigations=off" on the cmdline and advertise status via
    sysfs

    - CPU timer erratum workaround (Neoverse-N1 #1188873)

    - Introduce perf PMU driver for the SMMUv3 performance counters

    - Add config option to disable the kuser helpers page for AArch32 tasks

    - Futex modifications to ensure liveness under contention

    - Rework debug exception handling to separate kernel and user
    handlers

    - Non-critical fixes and cleanup"

    * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (92 commits)
    Documentation: Add ARM64 to kernel-parameters.rst
    arm64/speculation: Support 'mitigations=' cmdline option
    arm64: ssbs: Don't treat CPUs with SSBS as unaffected by SSB
    arm64: enable generic CPU vulnerabilites support
    arm64: add sysfs vulnerability show for speculative store bypass
    arm64: Fix size of __early_cpu_boot_status
    clocksource/arm_arch_timer: Use arch_timer_read_counter to access stable counters
    clocksource/arm_arch_timer: Remove use of workaround static key
    clocksource/arm_arch_timer: Drop use of static key in arch_timer_reg_read_stable
    clocksource/arm_arch_timer: Direcly assign set_next_event workaround
    arm64: Use arch_timer_read_counter instead of arch_counter_get_cntvct
    watchdog/sbsa: Use arch_timer_read_counter instead of arch_counter_get_cntvct
    ARM: vdso: Remove dependency with the arch_timer driver internals
    arm64: Apply ARM64_ERRATUM_1188873 to Neoverse-N1
    arm64: Add part number for Neoverse N1
    arm64: Make ARM64_ERRATUM_1188873 depend on COMPAT
    arm64: Restrict ARM64_ERRATUM_1188873 mitigation to AArch32
    arm64: mm: Remove pte_unmap_nested()
    arm64: Fix compiler warning from pte_unmap() with -Wunused-but-set-variable
    arm64: compat: Reduce address limit for 64K pages
    ...

    Linus Torvalds
     
  • Pull stack trace updates from Ingo Molnar:
    "So Thomas looked at the stacktrace code recently and noticed a few
    weirdnesses, and we all know how such stories of crummy kernel code
    meeting German engineering perfection end: a 45-patch series to clean
    it all up! :-)

    Here's the changes in Thomas's words:

    'Struct stack_trace is a sinkhole for input and output parameters
    which is largely pointless for most usage sites. In fact if embedded
    into other data structures it creates indirections and extra storage
    overhead for no benefit.

    Looking at all usage sites makes it clear that they just require an
    interface which is based on a storage array. That array is either on
    stack, global or embedded into some other data structure.

    Some of the stack depot usage sites are outright wrong, but
    fortunately the wrongness just causes more stack being used for
    nothing and does not have functional impact.

    Another oddity is the inconsistent termination of the stack trace
    with ULONG_MAX. It's pointless as the number of entries is what
    determines the length of the stored trace. In fact quite some call
    sites remove the ULONG_MAX marker afterwards with or without nasty
    comments about it. Not all architectures do that and those which do,
    do it inconsistently, either conditional on nr_entries == 0 or
    unconditionally.

    The following series cleans that up by:

    1) Removing the ULONG_MAX termination in the architecture code

    2) Removing the ULONG_MAX fixups at the call sites

    3) Providing plain storage array based interfaces for stacktrace
    and stackdepot.

    4) Cleaning up the mess at the callsites including some related
    cleanups.

    5) Removing the struct stack_trace based interfaces

    This is not changing the struct stack_trace interfaces at the
    architecture level, but it removes the exposure to the generic
    code'"

    * 'core-stacktrace-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (45 commits)
    x86/stacktrace: Use common infrastructure
    stacktrace: Provide common infrastructure
    lib/stackdepot: Remove obsolete functions
    stacktrace: Remove obsolete functions
    livepatch: Simplify stack trace retrieval
    tracing: Remove the last struct stack_trace usage
    tracing: Simplify stack trace retrieval
    tracing: Make ftrace_trace_userstack() static and conditional
    tracing: Use percpu stack trace buffer more intelligently
    tracing: Simplify stacktrace retrieval in histograms
    lockdep: Simplify stack trace handling
    lockdep: Remove save argument from check_prev_add()
    lockdep: Remove unused trace argument from print_circular_bug()
    drm: Simplify stacktrace handling
    dm persistent data: Simplify stack trace handling
    dm bufio: Simplify stack trace retrieval
    btrfs: ref-verify: Simplify stack trace retrieval
    dma/debug: Simplify stracktrace retrieval
    fault-inject: Simplify stacktrace retrieval
    mm/page_owner: Simplify stack trace handling
    ...

    Linus Torvalds
     

29 Apr, 2019

1 commit

  • Replace the indirection through struct stack_trace by using the storage
    array based interfaces.
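
    Before/after shape of a typical call site (a sketch; KASAN_STACK_DEPTH
    is the existing mm/kasan buffer size):

        /* before: in/out parameters bundled in struct stack_trace */
        unsigned long entries[KASAN_STACK_DEPTH];
        struct stack_trace trace = {
                .entries        = entries,
                .max_entries    = KASAN_STACK_DEPTH,
                .skip           = 0,
        };
        save_stack_trace(&trace);

        /* after: plain storage-array interface */
        unsigned int nr_entries = stack_trace_save(entries, KASAN_STACK_DEPTH, 0);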

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Josh Poimboeuf
    Acked-by: Dmitry Vyukov
    Acked-by: Andrey Ryabinin
    Cc: Andy Lutomirski
    Cc: Alexander Potapenko
    Cc: kasan-dev@googlegroups.com
    Cc: linux-mm@kvack.org
    Cc: Steven Rostedt
    Cc: Alexey Dobriyan
    Cc: Andrew Morton
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Catalin Marinas
    Cc: Mike Rapoport
    Cc: Akinobu Mita
    Cc: Christoph Hellwig
    Cc: iommu@lists.linux-foundation.org
    Cc: Robin Murphy
    Cc: Marek Szyprowski
    Cc: Johannes Thumshirn
    Cc: David Sterba
    Cc: Chris Mason
    Cc: Josef Bacik
    Cc: linux-btrfs@vger.kernel.org
    Cc: dm-devel@redhat.com
    Cc: Mike Snitzer
    Cc: Alasdair Kergon
    Cc: Daniel Vetter
    Cc: intel-gfx@lists.freedesktop.org
    Cc: Joonas Lahtinen
    Cc: Maarten Lankhorst
    Cc: dri-devel@lists.freedesktop.org
    Cc: David Airlie
    Cc: Jani Nikula
    Cc: Rodrigo Vivi
    Cc: Tom Zanussi
    Cc: Miroslav Benes
    Cc: linux-arch@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190425094801.963261479@linutronix.de

    Thomas Gleixner
     

15 Apr, 2019

1 commit

  • No architecture terminates the stack trace with ULONG_MAX anymore. Remove
    the cruft.

    Signed-off-by: Thomas Gleixner
    Acked-by: Dmitry Vyukov
    Acked-by: Peter Zijlstra (Intel)
    Cc: Josh Poimboeuf
    Cc: Andy Lutomirski
    Cc: Steven Rostedt
    Cc: Alexander Potapenko
    Cc: Andrey Ryabinin
    Cc: kasan-dev@googlegroups.com
    Cc: linux-mm@kvack.org
    Link: https://lkml.kernel.org/r/20190410103644.750219625@linutronix.de

    Thomas Gleixner
     

09 Apr, 2019

1 commit

  • In preparation for arm64 supporting ftrace built on other compiler
    options, let's have Makefiles remove the $(CC_FLAGS_FTRACE) flags,
    whatever these may be, rather than assuming '-pg'.

    There should be no functional change as a result of this patch.

    Reviewed-by: Mark Rutland
    Acked-by: Andrey Ryabinin
    Signed-off-by: Torsten Duwe
    Signed-off-by: Will Deacon

    Torsten Duwe
     

03 Apr, 2019

1 commit

  • KASAN inserts extra code for every LOAD/STORE emitted by the compiler.
    Much of this code is simple and safe to run with AC=1; however, the
    kasan_report() function, called on error, is most certainly not safe
    to call with AC=1.

    Therefore wrap kasan_report() in user_access_{save,restore}; which for
    x86 SMAP, saves/restores EFLAGS and clears AC before calling the real
    function.

    Also ensure all the functions are without __fentry__ hook. The
    function tracer is also not safe.
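
    A sketch of the wrapper, using the user_access_save()/user_access_restore()
    helpers from linux/uaccess.h (the report body is elided):

        void kasan_report(unsigned long addr, size_t size, bool is_write,
                          unsigned long ip)
        {
                unsigned long ua_flags = user_access_save();    /* clears AC under SMAP */

                /* ... produce the actual report with AC in a kernel-safe state ... */

                user_access_restore(ua_flags);
        }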

    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Borislav Petkov
    Cc: Dmitry Vyukov
    Cc: Josh Poimboeuf
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

30 Mar, 2019

1 commit

  • set_tag() compiles away when CONFIG_KASAN_SW_TAGS=n, so make
    arch_kasan_set_tag() a static inline function to fix warnings below.

    mm/kasan/common.c: In function '__kasan_kmalloc':
    mm/kasan/common.c:475:5: warning: variable 'tag' set but not used [-Wunused-but-set-variable]
    u8 tag;
    ^~~
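
    The fix turns the arm64 macro into a typed inline so 'tag' is always
    "used"; a sketch (arch/arm64/include/asm/memory.h, where __tag_set() is
    the existing helper):

        /* was: #define arch_kasan_set_tag(addr, tag) __tag_set(addr, tag) */
        static inline const void *arch_kasan_set_tag(const void *addr, u8 tag)
        {
                return __tag_set(addr, tag);
        }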

    Link: http://lkml.kernel.org/r/20190307185244.54648-1-cai@lca.pw
    Signed-off-by: Qian Cai
    Reviewed-by: Andrey Konovalov
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Qian Cai
     

13 Mar, 2019

1 commit

  • Add a check for the return value of the memblock_alloc*() functions and
    call panic() in case of error. The panic message repeats the one used by
    the panicking memblock allocators, with the parameters adjusted to
    include only the relevant ones.

    The replacement was mostly automated with semantic patches like the one
    below with manual massaging of format strings.

    @@
    expression ptr, size, align;
    @@
    ptr = memblock_alloc(size, align);
    + if (!ptr)
    + panic("%s: Failed to allocate %lu bytes align=0x%lx\n", __func__, size, align);

    [anders.roxell@linaro.org: use '%pa' with 'phys_addr_t' type]
    Link: http://lkml.kernel.org/r/20190131161046.21886-1-anders.roxell@linaro.org
    [rppt@linux.ibm.com: fix format strings for panics after memblock_alloc]
    Link: http://lkml.kernel.org/r/1548950940-15145-1-git-send-email-rppt@linux.ibm.com
    [rppt@linux.ibm.com: don't panic if the allocation in sparse_buffer_init fails]
    Link: http://lkml.kernel.org/r/20190131074018.GD28876@rapoport-lnx
    [akpm@linux-foundation.org: fix xtensa printk warning]
    Link: http://lkml.kernel.org/r/1548057848-15136-20-git-send-email-rppt@linux.ibm.com
    Signed-off-by: Mike Rapoport
    Signed-off-by: Anders Roxell
    Reviewed-by: Guo Ren [c-sky]
    Acked-by: Paul Burton [MIPS]
    Acked-by: Heiko Carstens [s390]
    Reviewed-by: Juergen Gross [Xen]
    Reviewed-by: Geert Uytterhoeven [m68k]
    Acked-by: Max Filippov [xtensa]
    Cc: Catalin Marinas
    Cc: Christophe Leroy
    Cc: Christoph Hellwig
    Cc: "David S. Miller"
    Cc: Dennis Zhou
    Cc: Greentime Hu
    Cc: Greg Kroah-Hartman
    Cc: Guan Xuetao
    Cc: Guo Ren
    Cc: Mark Salter
    Cc: Matt Turner
    Cc: Michael Ellerman
    Cc: Michal Simek
    Cc: Petr Mladek
    Cc: Richard Weinberger
    Cc: Rich Felker
    Cc: Rob Herring
    Cc: Rob Herring
    Cc: Russell King
    Cc: Stafford Horne
    Cc: Tony Luck
    Cc: Vineet Gupta
    Cc: Yoshinori Sato
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mike Rapoport