08 Aug, 2020

3 commits

  • kasan_unpoison_stack_above_sp_to() is defined in kasan code but never
    used. The function was introduced as part of the commit:

    commit 9f7d416c36124667 ("kprobes: Unpoison stack in jprobe_return() for KASAN")

    ... where it was necessary because x86's jprobe_return() would leave
    stale shadow on the stack, and was an oddity in that regard.

    Since then, jprobes were removed entirely, and as of commit:

    commit 80006dbee674f9fa ("kprobes/x86: Remove jprobe implementation")

    ... there have been no callers of this function.

    Remove the declaration and the implementation.

    Signed-off-by: Vincenzo Frascino
    Signed-off-by: Andrew Morton
    Reviewed-by: Mark Rutland
    Reviewed-by: Andrey Konovalov
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Link: http://lkml.kernel.org/r/20200706143505.23299-1-vincenzo.frascino@arm.com
    Signed-off-by: Linus Torvalds

    Vincenzo Frascino
     
  • Move the free track from struct kasan_alloc_meta to struct kasan_free_meta
    so that both structures end up 16 bytes in size. That is a good size: it
    matches the minimal redzone size and is a natural alignment boundary.

    For the free track, we make the following modifications:
    1) Remove free_track from struct kasan_alloc_meta.
    2) Add free_track to struct kasan_free_meta.
    3) Add a KASAN_KMALLOC_FREETRACK macro to decide whether the free stack
    can be printed in the KASAN report.

    [1]https://bugzilla.kernel.org/show_bug.cgi?id=198437
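
    For the generic-KASAN configuration, a minimal C sketch of the resulting
    layout (field names approximate mm/kasan/kasan.h of that era; sizes assume
    32-bit stack depot handles, so treat this as an illustration rather than
    the exact kernel definitions):

    struct kasan_track {
            u32 pid;                           /* task that allocated/freed */
            depot_stack_handle_t stack;        /* handle into the stack depot */
    };                                         /* 8 bytes */

    struct kasan_alloc_meta {
            struct kasan_track alloc_track;
            depot_stack_handle_t aux_stack[2]; /* e.g. last two call_rcu() stacks */
    };                                         /* 16 bytes */

    struct kasan_free_meta {
            struct qlist_node quarantine_link; /* quarantine list node */
            struct kasan_track free_track;     /* moved here from kasan_alloc_meta */
    };                                         /* 16 bytes */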

    [walter-zh.wu@mediatek.com: build fix]
    Link: http://lkml.kernel.org/r/20200710162440.23887-1-walter-zh.wu@mediatek.com

    Suggested-by: Dmitry Vyukov
    Co-developed-by: Dmitry Vyukov
    Signed-off-by: Walter Wu
    Signed-off-by: Andrew Morton
    Tested-by: Dmitry Vyukov
    Reviewed-by: Dmitry Vyukov
    Reviewed-by: Andrey Konovalov
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Joel Fernandes
    Cc: Jonathan Corbet
    Cc: Josh Triplett
    Cc: Lai Jiangshan
    Cc: Mathieu Desnoyers
    Cc: Matthias Brugger
    Cc: "Paul E . McKenney"
    Link: http://lkml.kernel.org/r/20200601051022.1230-1-walter-zh.wu@mediatek.com
    Signed-off-by: Linus Torvalds

    Walter Wu
     
  • Patch series "kasan: memorize and print call_rcu stack", v8.

    This patchset improves KASAN reports by including call_rcu() call stack
    information, which helps programmers track down use-after-free and
    double-free issues.

    The KASAN report looks as follows (cleaned up slightly):

    BUG: KASAN: use-after-free in kasan_rcu_reclaim+0x58/0x60

    Freed by task 0:
    kasan_save_stack+0x24/0x50
    kasan_set_track+0x24/0x38
    kasan_set_free_info+0x18/0x20
    __kasan_slab_free+0x10c/0x170
    kasan_slab_free+0x10/0x18
    kfree+0x98/0x270
    kasan_rcu_reclaim+0x1c/0x60

    Last call_rcu():
    kasan_save_stack+0x24/0x50
    kasan_record_aux_stack+0xbc/0xd0
    call_rcu+0x8c/0x580
    kasan_rcu_uaf+0xf4/0xf8

    Generic KASAN will record the last two call_rcu() call stacks and print up
    to two of them in the KASAN report. This feature is only available with
    generic KASAN.

    Because this feature affects the size of struct kasan_alloc_meta and
    struct kasan_free_meta, we optimize the structure layout and size to keep
    memory consumption down.

    [1]https://bugzilla.kernel.org/show_bug.cgi?id=198437
    [2]https://groups.google.com/forum/#!searchin/kasan-dev/better$20stack$20traces$20for$20rcu%7Csort:date/kasan-dev/KQsjT_88hDE/7rNUZprRBgAJ

    This patch (of 4):

    This feature records the last two call_rcu() call stacks and prints up to
    two of them in the KASAN report.

    When call_rcu() is called, we store its call stack in the slub allocation
    metadata, so that the KASAN report can print the rcu stack.
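
    A hedged sketch of the recording side, modeled on kasan_record_aux_stack()
    in mm/kasan/generic.c at the time (simplified, not the verbatim kernel
    code):

    void kasan_record_aux_stack(void *addr)
    {
            struct page *page = kasan_addr_to_page(addr);
            struct kmem_cache *cache;
            struct kasan_alloc_meta *alloc_info;
            void *object;

            if (!(page && PageSlab(page)))
                    return;

            cache = page->slab_cache;
            object = nearest_obj(cache, page, addr);
            alloc_info = get_alloc_info(cache, object);

            /* Keep the two most recent call_rcu() stacks. */
            alloc_info->aux_stack[1] = alloc_info->aux_stack[0];
            alloc_info->aux_stack[0] = kasan_save_stack(GFP_NOWAIT);
    }

    call_rcu() then only needs a single kasan_record_aux_stack(head) call on
    its entry path for the stack to show up in later reports.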

    [1]https://bugzilla.kernel.org/show_bug.cgi?id=198437
    [2]https://groups.google.com/forum/#!searchin/kasan-dev/better$20stack$20traces$20for$20rcu%7Csort:date/kasan-dev/KQsjT_88hDE/7rNUZprRBgAJ

    [walter-zh.wu@mediatek.com: build fix]
    Link: http://lkml.kernel.org/r/20200710162401.23816-1-walter-zh.wu@mediatek.com

    Suggested-by: Dmitry Vyukov
    Signed-off-by: Walter Wu
    Signed-off-by: Andrew Morton
    Tested-by: Dmitry Vyukov
    Reviewed-by: Dmitry Vyukov
    Reviewed-by: Andrey Konovalov
    Acked-by: Paul E. McKenney
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Josh Triplett
    Cc: Mathieu Desnoyers
    Cc: Lai Jiangshan
    Cc: Joel Fernandes
    Cc: Jonathan Corbet
    Cc: Matthias Brugger
    Link: http://lkml.kernel.org/r/20200710162123.23713-1-walter-zh.wu@mediatek.com
    Link: http://lkml.kernel.org/r/20200601050847.1096-1-walter-zh.wu@mediatek.com
    Link: http://lkml.kernel.org/r/20200601050927.1153-1-walter-zh.wu@mediatek.com
    Signed-off-by: Linus Torvalds

    Walter Wu
     

03 Jun, 2020

1 commit

  • The kasan_report() function belongs in report.c, as it is a common
    function that does error reporting.

    Reported-by: Leon Romanovsky
    Signed-off-by: Andrey Konovalov
    Signed-off-by: Andrew Morton
    Tested-by: Leon Romanovsky
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Leon Romanovsky
    Link: http://lkml.kernel.org/r/78a81fde6eeda9db72a7fd55fbc33173a515e4b1.1589297433.git.andreyknvl@google.com
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     

08 Apr, 2020

1 commit

  • filter_irq_stacks() can be used by other tools (e.g. KMSAN), so it needs
    to be moved to a common location. lib/stackdepot.c seems a good place, as
    filter_irq_stacks() is usually applied to the output of
    stack_trace_save().
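
    A minimal sketch of the usual call pattern (mirroring KASAN's stack-saving
    helper; everything except the three library calls is illustrative):

    static depot_stack_handle_t save_stack(gfp_t flags)
    {
            unsigned long entries[64];
            unsigned int nr_entries;

            nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
            nr_entries = filter_irq_stacks(entries, nr_entries); /* drop IRQ-entry frames */
            return stack_depot_save(entries, nr_entries, flags);
    }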

    This patch has been previously mailed as part of KMSAN RFC patch series.

    [glider@google.com: nds32: linker script: add SOFTIRQENTRY_TEXT]
    Link: http://lkml.kernel.org/r/20200311121002.241430-1-glider@google.com
    [glider@google.com: add IRQENTRY_TEXT and SOFTIRQENTRY_TEXT to linker script]
    Link: http://lkml.kernel.org/r/20200311121124.243352-1-glider@google.com
    Signed-off-by: Alexander Potapenko
    Signed-off-by: Andrew Morton
    Cc: Vegard Nossum
    Cc: Dmitry Vyukov
    Cc: Marco Elver
    Cc: Andrey Konovalov
    Cc: Andrey Ryabinin
    Cc: Arnd Bergmann
    Cc: Sergey Senozhatsky
    Link: http://lkml.kernel.org/r/20200220141916.55455-3-glider@google.com
    Signed-off-by: Linus Torvalds

    Alexander Potapenko
     

03 Apr, 2020

1 commit

  • Patch series "fix the missing underflow in memory operation function", v4.

    This patchset makes KASAN produce a report when the size passed to a
    memory operation function is negative, which helps programmers track down
    undefined behaviour. Patch 1 is based on Dmitry's review and suggestions;
    patch 2 is a test that verifies patch 1.

    [1]https://bugzilla.kernel.org/show_bug.cgi?id=199341
    [2]https://lore.kernel.org/linux-arm-kernel/20190927034338.15813-1-walter-zh.wu@mediatek.com/

    This patch (of 2):

    KASAN does not detect when the size passed to memset(), memcpy(), or
    memmove() is a negative number, so the resulting out-of-bounds bug goes
    unreported. Such cases therefore need to be detected by KASAN.

    If size is a negative number, it is reasonable to classify it as an
    out-of-bounds bug: casting a negative number to size_t yields a huge
    value, larger than ULONG_MAX/2, so the access qualifies as out-of-bounds.
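
    A rough sketch of the idea (not the exact patch): bail out and report
    before the normal shadow checks whenever the size, interpreted as a signed
    number, is negative.

    static bool check_memory_region(unsigned long addr, size_t size, bool write,
                                    unsigned long ret_ip)
    {
            if (unlikely(size == 0))
                    return true;

            if (unlikely((long)size < 0)) {
                    /* e.g. memmove(p, q, -8): report as out-of-bounds */
                    kasan_report(addr, size, write, ret_ip);
                    return false;
            }

            /* ... regular shadow memory checks follow ... */
            return true;
    }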

    KASAN report is shown below:

    BUG: KASAN: out-of-bounds in kmalloc_memmove_invalid_size+0x70/0xa0
    Read of size 18446744073709551608 at addr ffffff8069660904 by task cat/72

    CPU: 2 PID: 72 Comm: cat Not tainted 5.4.0-rc1-next-20191004ajb-00001-gdb8af2f372b2-dirty #1
    Hardware name: linux,dummy-virt (DT)
    Call trace:
    dump_backtrace+0x0/0x288
    show_stack+0x14/0x20
    dump_stack+0x10c/0x164
    print_address_description.isra.9+0x68/0x378
    __kasan_report+0x164/0x1a0
    kasan_report+0xc/0x18
    check_memory_region+0x174/0x1d0
    memmove+0x34/0x88
    kmalloc_memmove_invalid_size+0x70/0xa0

    [1] https://bugzilla.kernel.org/show_bug.cgi?id=199341

    [cai@lca.pw: fix -Wdeclaration-after-statement warn]
    Link: http://lkml.kernel.org/r/1583509030-27939-1-git-send-email-cai@lca.pw
    [peterz@infradead.org: fix objtool warning]
    Link: http://lkml.kernel.org/r/20200305095436.GV2596@hirez.programming.kicks-ass.net
    Reported-by: kernel test robot
    Reported-by: Dmitry Vyukov
    Suggested-by: Dmitry Vyukov
    Signed-off-by: Walter Wu
    Signed-off-by: Qian Cai
    Signed-off-by: Andrew Morton
    Reviewed-by: Dmitry Vyukov
    Reviewed-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Link: http://lkml.kernel.org/r/20191112065302.7015-1-walter-zh.wu@mediatek.com
    Signed-off-by: Linus Torvalds

    Walter Wu
     

23 Jan, 2020

1 commit

  • If an arch doesn't provide its own memmove, the C implementation from
    lib/string.c is used, and that one is already instrumented by the
    compiler. So there is no need to add KASAN's memmove to the manual checks.
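
    A hedged sketch of the resulting shape in mm/kasan/common.c: KASAN's
    explicit memmove() wrapper is only built when the architecture provides
    its own memmove (illustrative, not the verbatim diff):

    #ifdef __HAVE_ARCH_MEMMOVE
    #undef memmove
    void *memmove(void *dest, const void *src, size_t len)
    {
            check_memory_region((unsigned long)src, len, false, _RET_IP_);
            check_memory_region((unsigned long)dest, len, true, _RET_IP_);

            return __memmove(dest, src, len);
    }
    #endif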

    Signed-off-by: Nick Hu
    Acked-by: Dmitry Vyukov
    Acked-by: Andrey Ryabinin
    Signed-off-by: Palmer Dabbelt

    Nick Hu
     

18 Dec, 2019

2 commits

  • kasan_release_vmalloc uses apply_to_page_range to release vmalloc
    shadow. Unfortunately, apply_to_page_range can allocate memory to fill
    in page table entries, which is not what we want.

    Also, kasan_release_vmalloc is called under free_vmap_area_lock, so if
    apply_to_page_range does allocate memory, we get a sleep in atomic bug:

    BUG: sleeping function called from invalid context at mm/page_alloc.c:4681
    in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 15087, name:

    Call Trace:
    __dump_stack lib/dump_stack.c:77 [inline]
    dump_stack+0x199/0x216 lib/dump_stack.c:118
    ___might_sleep.cold.97+0x1f5/0x238 kernel/sched/core.c:6800
    __might_sleep+0x95/0x190 kernel/sched/core.c:6753
    prepare_alloc_pages mm/page_alloc.c:4681 [inline]
    __alloc_pages_nodemask+0x3cd/0x890 mm/page_alloc.c:4730
    alloc_pages_current+0x10c/0x210 mm/mempolicy.c:2211
    alloc_pages include/linux/gfp.h:532 [inline]
    __get_free_pages+0xc/0x40 mm/page_alloc.c:4786
    __pte_alloc_one_kernel include/asm-generic/pgalloc.h:21 [inline]
    pte_alloc_one_kernel include/asm-generic/pgalloc.h:33 [inline]
    __pte_alloc_kernel+0x1d/0x200 mm/memory.c:459
    apply_to_pte_range mm/memory.c:2031 [inline]
    apply_to_pmd_range mm/memory.c:2068 [inline]
    apply_to_pud_range mm/memory.c:2088 [inline]
    apply_to_p4d_range mm/memory.c:2108 [inline]
    apply_to_page_range+0x77d/0xa00 mm/memory.c:2133
    kasan_release_vmalloc+0xa7/0xc0 mm/kasan/common.c:970
    __purge_vmap_area_lazy+0xcbb/0x1f30 mm/vmalloc.c:1313
    try_purge_vmap_area_lazy mm/vmalloc.c:1332 [inline]
    free_vmap_area_noflush+0x2ca/0x390 mm/vmalloc.c:1368
    free_unmap_vmap_area mm/vmalloc.c:1381 [inline]
    remove_vm_area+0x1cc/0x230 mm/vmalloc.c:2209
    vm_remove_mappings mm/vmalloc.c:2236 [inline]
    __vunmap+0x223/0xa20 mm/vmalloc.c:2299
    __vfree+0x3f/0xd0 mm/vmalloc.c:2356
    __vmalloc_area_node mm/vmalloc.c:2507 [inline]
    __vmalloc_node_range+0x5d5/0x810 mm/vmalloc.c:2547
    __vmalloc_node mm/vmalloc.c:2607 [inline]
    __vmalloc_node_flags mm/vmalloc.c:2621 [inline]
    vzalloc+0x6f/0x80 mm/vmalloc.c:2666
    alloc_one_pg_vec_page net/packet/af_packet.c:4233 [inline]
    alloc_pg_vec net/packet/af_packet.c:4258 [inline]
    packet_set_ring+0xbc0/0x1b50 net/packet/af_packet.c:4342
    packet_setsockopt+0xed7/0x2d90 net/packet/af_packet.c:3695
    __sys_setsockopt+0x29b/0x4d0 net/socket.c:2117
    __do_sys_setsockopt net/socket.c:2133 [inline]
    __se_sys_setsockopt net/socket.c:2130 [inline]
    __x64_sys_setsockopt+0xbe/0x150 net/socket.c:2130
    do_syscall_64+0xfa/0x780 arch/x86/entry/common.c:294
    entry_SYSCALL_64_after_hwframe+0x49/0xbe

    Switch to using the apply_to_existing_page_range() helper instead, which
    won't allocate memory.
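
    A hedged sketch of the change in kasan_release_vmalloc() (callback name
    taken from the code of that era; treat as illustrative rather than the
    exact diff):

    /* before: may allocate page tables while walking them */
    apply_to_page_range(&init_mm, (unsigned long)shadow_start,
                        size, kasan_depopulate_vmalloc_pte, NULL);

    /* after: only visits page-table levels that already exist, never allocates */
    apply_to_existing_page_range(&init_mm, (unsigned long)shadow_start,
                                 size, kasan_depopulate_vmalloc_pte, NULL);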

    [akpm@linux-foundation.org: s/apply_to_existing_pages/apply_to_existing_page_range/]
    Link: http://lkml.kernel.org/r/20191205140407.1874-2-dja@axtens.net
    Fixes: 3c5c3cfb9ef4 ("kasan: support backing vmalloc space with real shadow memory")
    Signed-off-by: Daniel Axtens
    Reported-by: Dmitry Vyukov
    Reviewed-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Qian Cai
    Cc: Uladzislau Rezki (Sony)
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Daniel Axtens
     
  • With CONFIG_KASAN_VMALLOC=y any use of memory obtained via vm_map_ram()
    will crash because there is no shadow backing that memory.

    Instead of sprinkling additional kasan_populate_vmalloc() calls all over
    the vmalloc code, move it into alloc_vmap_area(). This will fix
    vm_map_ram() and simplify the code a bit.

    [aryabinin@virtuozzo.com: v2]
    Link: http://lkml.kernel.org/r/20191205095942.1761-1-aryabinin@virtuozzo.com
    Link: http://lkml.kernel.org/r/20191204204534.32202-1-aryabinin@virtuozzo.com
    Fixes: 3c5c3cfb9ef4 ("kasan: support backing vmalloc space with real shadow memory")
    Signed-off-by: Andrey Ryabinin
    Reported-by: Dmitry Vyukov
    Reviewed-by: Uladzislau Rezki (Sony)
    Cc: Daniel Axtens
    Cc: Alexander Potapenko
    Cc: Daniel Axtens
    Cc: Qian Cai
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     

05 Dec, 2019

1 commit

  • I hit the following compile error in arch/x86/

    mm/kasan/common.c: In function kasan_populate_vmalloc:
    mm/kasan/common.c:797:2: error: implicit declaration of function flush_cache_vmap; did you mean flush_rcu_work? [-Werror=implicit-function-declaration]
    flush_cache_vmap(shadow_start, shadow_end);
    ^~~~~~~~~~~~~~~~
    flush_rcu_work
    cc1: some warnings being treated as errors

    Link: http://lkml.kernel.org/r/1575363013-43761-1-git-send-email-zhongjiang@huawei.com
    Fixes: 3c5c3cfb9ef4 ("kasan: support backing vmalloc space with real shadow memory")
    Signed-off-by: zhong jiang
    Reviewed-by: Andrew Morton
    Reviewed-by: Daniel Axtens
    Cc: Mark Rutland
    Cc: Mark Rutland
    Cc: Vasily Gorbik
    Cc: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    zhong jiang
     

02 Dec, 2019

1 commit

  • Patch series "kasan: support backing vmalloc space with real shadow
    memory", v11.

    Currently, vmalloc space is backed by the early shadow page. This means
    that kasan is incompatible with VMAP_STACK.

    This series provides a mechanism to back vmalloc space with real,
    dynamically allocated memory. I have only wired up x86, because that's
    the only currently supported arch I can work with easily, but it's very
    easy to wire up other architectures, and it appears that there is some
    work-in-progress code to do this on arm64 and s390.

    This has been discussed before in the context of VMAP_STACK:
    - https://bugzilla.kernel.org/show_bug.cgi?id=202009
    - https://lkml.org/lkml/2018/7/22/198
    - https://lkml.org/lkml/2019/7/19/822

    In terms of implementation details:

    Most mappings in vmalloc space are small, requiring less than a full
    page of shadow space. Allocating a full shadow page per mapping would
    therefore be wasteful. Furthermore, to ensure that different mappings
    use different shadow pages, mappings would have to be aligned to
    KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.

    Instead, share backing space across multiple mappings. Allocate a
    backing page when a mapping in vmalloc space uses a particular page of
    the shadow region. This page can be shared by other vmalloc mappings
    later on.

    We hook in to the vmap infrastructure to lazily clean up unused shadow
    memory.
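
    A hedged sketch of the per-PTE population callback, modeled on
    kasan_populate_vmalloc_pte() from this series (simplified):

    static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
                                          void *unused)
    {
            unsigned long page;
            pte_t pte;

            if (likely(!pte_none(*ptep)))
                    return 0;               /* shadow already backed, can be shared */

            page = __get_free_page(GFP_KERNEL);
            if (!page)
                    return -ENOMEM;

            memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
            pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);

            spin_lock(&init_mm.page_table_lock);
            if (likely(pte_none(*ptep))) {
                    set_pte_at(&init_mm, addr, ptep, pte);
                    page = 0;
            }
            spin_unlock(&init_mm.page_table_lock);
            if (page)
                    free_page(page);        /* lost the race to another mapping */
            return 0;
    }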

    Testing with test_vmalloc.sh on an x86 VM with 2 vCPUs shows that:

    - Turning on KASAN, inline instrumentation, without vmalloc, introduces
    a 4.1x-4.2x slowdown in vmalloc operations.

    - Turning this on introduces the following slowdowns over KASAN:
    * ~1.76x slower single-threaded (test_vmalloc.sh performance)
    * ~2.18x slower when both cpus are performing operations
    simultaneously (test_vmalloc.sh sequential_test_order=1)

    This is unfortunate but given that this is a debug feature only, not the
    end of the world. The benchmarks are also a stress-test for the vmalloc
    subsystem: they're not indicative of an overall 2x slowdown!

    This patch (of 4):

    Hook into vmalloc and vmap, and dynamically allocate real shadow memory
    to back the mappings.

    Most mappings in vmalloc space are small, requiring less than a full
    page of shadow space. Allocating a full shadow page per mapping would
    therefore be wasteful. Furthermore, to ensure that different mappings
    use different shadow pages, mappings would have to be aligned to
    KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.

    Instead, share backing space across multiple mappings. Allocate a
    backing page when a mapping in vmalloc space uses a particular page of
    the shadow region. This page can be shared by other vmalloc mappings
    later on.

    We hook in to the vmap infrastructure to lazily clean up unused shadow
    memory.

    To avoid the difficulties around swapping mappings around, this code
    expects that the part of the shadow region that covers the vmalloc space
    will not be covered by the early shadow page, but will be left unmapped.
    This will require changes in arch-specific code.

    This allows KASAN with VMAP_STACK, and may be helpful for architectures
    that do not have a separate module space (e.g. powerpc64, which I am
    currently working on). It also allows relaxing the module alignment
    back to PAGE_SIZE.

    Testing with test_vmalloc.sh on an x86 VM with 2 vCPUs shows that:

    - Turning on KASAN, inline instrumentation, without vmalloc, introduces
    a 4.1x-4.2x slowdown in vmalloc operations.

    - Turning this on introduces the following slowdowns over KASAN:
    * ~1.76x slower single-threaded (test_vmalloc.sh performance)
    * ~2.18x slower when both cpus are performing operations
    simultaneously (test_vmalloc.sh sequential_test_order=1)

    This is unfortunate but given that this is a debug feature only, not the
    end of the world.

    The full benchmark results are:

    Performance

                                        No KASAN  KASAN original  x baseline   KASAN vmalloc  x baseline  x KASAN

    fix_size_alloc_test                   662004        11404956       17.23        19144610       28.92     1.68
    full_fit_alloc_test                   710950        12029752       16.92        13184651       18.55     1.10
    long_busy_list_alloc_test            9431875        43990172        4.66        82970178        8.80     1.89
    random_size_alloc_test               5033626        23061762        4.58        47158834        9.37     2.04
    fix_align_alloc_test                 1252514        15276910       12.20        31266116       24.96     2.05
    random_size_align_alloc_te           1648501        14578321        8.84        25560052       15.51     1.75
    align_shift_alloc_test                   147             830        5.65            5692       38.72     6.86
    pcpu_alloc_test                        80732          125520        1.55          140864        1.74     1.12
    Total Cycles                    119240774314    763211341128        6.40   1390338696894       11.66     1.82

    Sequential, 2 cpus

                                        No KASAN  KASAN original  x baseline   KASAN vmalloc  x baseline  x KASAN

    fix_size_alloc_test                  1423150        14276550       10.03        27733022       19.49     1.94
    full_fit_alloc_test                  1754219        14722640        8.39        15030786        8.57     1.02
    long_busy_list_alloc_test           11451858        52154973        4.55       107016027        9.34     2.05
    random_size_alloc_test               5989020        26735276        4.46        68885923       11.50     2.58
    fix_align_alloc_test                 2050976        20166900        9.83        50491675       24.62     2.50
    random_size_align_alloc_te           2858229        17971700        6.29        38730225       13.55     2.16
    align_shift_alloc_test                   405            6428       15.87           26253       64.82     4.08
    pcpu_alloc_test                       127183          151464        1.19          216263        1.70     1.43
    Total Cycles                     54181269392    308723699764        5.70    650772566394       12.01     2.11
    fix_size_alloc_test                  1420404        14289308       10.06        27790035       19.56     1.94
    full_fit_alloc_test                  1736145        14806234        8.53        15274301        8.80     1.03
    long_busy_list_alloc_test           11404638        52270785        4.58       107550254        9.43     2.06
    random_size_alloc_test               6017006        26650625        4.43        68696127       11.42     2.58
    fix_align_alloc_test                 2045504        20280985        9.91        50414862       24.65     2.49
    random_size_align_alloc_te           2845338        17931018        6.30        38510276       13.53     2.15
    align_shift_alloc_test                   472            3760        7.97            9656       20.46     2.57
    pcpu_alloc_test                       118643          132732        1.12          146504        1.23     1.10
    Total Cycles                     54040011688    309102805492        5.72    651325675652       12.05     2.11

    [dja@axtens.net: fixups]
    Link: http://lkml.kernel.org/r/20191120052719.7201-1-dja@axtens.net
    Link: https://bugzilla.kernel.org/show_bug.cgi?id=202009
    Link: http://lkml.kernel.org/r/20191031093909.9228-2-dja@axtens.net
    Signed-off-by: Mark Rutland [shadow rework]
    Signed-off-by: Daniel Axtens
    Co-developed-by: Mark Rutland
    Acked-by: Vasily Gorbik
    Reviewed-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Christophe Leroy
    Cc: Qian Cai
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Daniel Axtens
     

25 Sep, 2019

3 commits

  • Replace 1 << compound_order(page) with compound_nr(page). Minor
    improvements in readability.
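
    The helper is essentially the following (a paraphrase of the definition
    added by this series):

    /* Returns the number of pages in this potentially compound page. */
    static inline unsigned long compound_nr(struct page *page)
    {
            return 1UL << compound_order(page);
    }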

    Link: http://lkml.kernel.org/r/20190721104612.19120-4-willy@infradead.org
    Signed-off-by: Matthew Wilcox (Oracle)
    Reviewed-by: Andrew Morton
    Reviewed-by: Ira Weiny
    Acked-by: Kirill A. Shutemov
    Cc: Michal Hocko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matthew Wilcox (Oracle)
     
  • Patch series "Make working with compound pages easier", v2.

    These three patches add three helpers and convert the appropriate
    places to use them.

    This patch (of 3):

    It's unnecessarily hard to find out the size of a potentially huge page.
    Replace 'PAGE_SIZE << compound_order(page)' with page_size(page).
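
    The helper is essentially the following (a paraphrase of the definition
    added by this patch):

    /* Returns the number of bytes in this potentially compound page. */
    static inline unsigned long page_size(struct page *page)
    {
            return PAGE_SIZE << compound_order(page);
    }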

    Link: http://lkml.kernel.org/r/20190721104612.19120-2-willy@infradead.org
    Signed-off-by: Matthew Wilcox (Oracle)
    Acked-by: Michal Hocko
    Reviewed-by: Andrew Morton
    Reviewed-by: Ira Weiny
    Acked-by: Kirill A. Shutemov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matthew Wilcox (Oracle)
     
  • Add memory corruption identification to bug reports for software tag-based
    mode. The report now says whether the error is a "use-after-free" or an
    "out-of-bound" access instead of a generic "invalid-access" error, making
    it easier for programmers to understand the memory corruption.

    We extend the slab metadata to store the last five free pointer tags and
    free backtraces, so we can check whether the tagged address matches one of
    the recorded frees and make a good guess whether the object is more likely
    "use-after-free" or "out-of-bound". Every slab memory corruption can
    therefore be classified as either "use-after-free" or "out-of-bound".

    [aryabinin@virtuozzo.com: simplify & cleanup code]
    Link: https://lkml.kernel.org/r/3318f9d7-a760-3cc8-b700-f06108ae745f@virtuozzo.com
    Link: http://lkml.kernel.org/r/20190821180332.11450-1-aryabinin@virtuozzo.com
    Signed-off-by: Walter Wu
    Signed-off-by: Andrey Ryabinin
    Acked-by: Andrey Konovalov
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Walter Wu
     

25 Aug, 2019

1 commit

  • Code like the following:

    ptr = kmalloc(size, GFP_KERNEL);
    page = virt_to_page(ptr);
    offset = offset_in_page(ptr);
    kfree(page_address(page) + offset);

    may produce false-positive invalid-free reports on the kernel with
    CONFIG_KASAN_SW_TAGS=y.

    In the example above we lose the original tag assigned to 'ptr', so
    kfree() gets the pointer with a 0xFF tag. In kfree() we see that the 0xFF
    tag differs from the tag stored in shadow and hence print a false report.

    Instead of just comparing tags, do the following:

    1) Check that shadow doesn't contain KASAN_TAG_INVALID. Otherwise it's
    double-free and it doesn't matter what tag the pointer have.

    2) If pointer tag is different from 0xFF, make sure that tag in the
    shadow is the same as in the pointer.
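
    A hedged sketch of the resulting check, modeled on the tag-based branch of
    shadow_invalid() in mm/kasan/common.c (simplified):

    static inline bool shadow_invalid(u8 tag, s8 shadow_byte)
    {
            if (IS_ENABLED(CONFIG_KASAN_GENERIC))
                    return shadow_byte < 0 ||
                           shadow_byte >= KASAN_SHADOW_SCALE_SIZE;

            /* Tag-based mode: */
            if ((u8)shadow_byte == KASAN_TAG_INVALID)
                    return true;    /* already freed: double-free whatever the tag */
            if (tag != KASAN_TAG_KERNEL && tag != (u8)shadow_byte)
                    return true;    /* mismatch, unless the pointer lost its tag (0xFF) */
            return false;
    }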

    Link: http://lkml.kernel.org/r/20190819172540.19581-1-aryabinin@virtuozzo.com
    Fixes: 7f94ffbc4c6a ("kasan: add hooks implementation for tag-based mode")
    Signed-off-by: Andrey Ryabinin
    Reported-by: Walter Wu
    Reported-by: Mark Rutland
    Reviewed-by: Andrey Konovalov
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Catalin Marinas
    Cc: Will Deacon
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Ryabinin
     

13 Jul, 2019

2 commits

  • This changes the {,__}kasan_check_{read,write} functions to return a
    boolean denoting whether the access was valid.
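
    After this change the declarations look roughly like this (simplified from
    include/linux/kasan-checks.h):

    bool __kasan_check_read(const volatile void *p, unsigned int size);
    bool __kasan_check_write(const volatile void *p, unsigned int size);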

    [sfr@canb.auug.org.au: include types.h for "bool"]
    Link: http://lkml.kernel.org/r/20190705184949.13cdd021@canb.auug.org.au
    Link: http://lkml.kernel.org/r/20190626142014.141844-3-elver@google.com
    Signed-off-by: Marco Elver
    Signed-off-by: Stephen Rothwell
    Reviewed-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Cc: Andrey Konovalov
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Mark Rutland
    Cc: Kees Cook
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Marco Elver
     
  • Patch series "mm/kasan: Add object validation in ksize()", v3.

    This patch (of 5):

    This introduces __kasan_check_{read,write}. __kasan_check functions may
    be used from anywhere, even compilation units that disable instrumentation
    selectively.

    This change eliminates the need for the __KASAN_INTERNAL definition.

    [elver@google.com: v5]
    Link: http://lkml.kernel.org/r/20190708170706.174189-2-elver@google.com
    Link: http://lkml.kernel.org/r/20190626142014.141844-2-elver@google.com
    Signed-off-by: Marco Elver
    Acked-by: Mark Rutland
    Reviewed-by: Andrey Ryabinin
    Cc: Dmitry Vyukov
    Cc: Alexander Potapenko
    Cc: Andrey Konovalov
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Joonsoo Kim
    Cc: Kees Cook
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Marco Elver
     

02 Jun, 2019

1 commit

  • When building with -Wuninitialized and CONFIG_KASAN_SW_TAGS unset, Clang
    warns:

    mm/kasan/common.c:484:40: warning: variable 'tag' is uninitialized when
    used here [-Wuninitialized]
    kasan_unpoison_shadow(set_tag(object, tag), size);
    ^~~

    set_tag ignores tag in this configuration but clang doesn't realize it at
    this point in its pipeline, as it points to arch_kasan_set_tag as being
    the point where it is used, which will later be expanded to (void
    *)(object) without a use of tag. Initialize tag to 0xff, as it removes
    this warning and doesn't change the meaning of the code.

    Link: https://github.com/ClangBuiltLinux/linux/issues/465
    Link: http://lkml.kernel.org/r/20190502163057.6603-1-natechancellor@gmail.com
    Fixes: 7f94ffbc4c6a ("kasan: add hooks implementation for tag-based mode")
    Signed-off-by: Nathan Chancellor
    Reviewed-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Nick Desaulniers
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nathan Chancellor
     

07 May, 2019

1 commit

  • Pull stack trace updates from Ingo Molnar:
    "So Thomas looked at the stacktrace code recently and noticed a few
    weirdnesses, and we all know how such stories of crummy kernel code
    meeting German engineering perfection end: a 45-patch series to clean
    it all up! :-)

    Here's the changes in Thomas's words:

    'Struct stack_trace is a sinkhole for input and output parameters
    which is largely pointless for most usage sites. In fact if embedded
    into other data structures it creates indirections and extra storage
    overhead for no benefit.

    Looking at all usage sites makes it clear that they just require an
    interface which is based on a storage array. That array is either on
    stack, global or embedded into some other data structure.

    Some of the stack depot usage sites are outright wrong, but
    fortunately the wrongness just causes more stack being used for
    nothing and does not have functional impact.

    Another oddity is the inconsistent termination of the stack trace
    with ULONG_MAX. It's pointless as the number of entries is what
    determines the length of the stored trace. In fact quite some call
    sites remove the ULONG_MAX marker afterwards with or without nasty
    comments about it. Not all architectures do that and those which do,
    do it inconsistently either conditional on nr_entries == 0 or
    unconditionally.

    The following series cleans that up by:

    1) Removing the ULONG_MAX termination in the architecture code

    2) Removing the ULONG_MAX fixups at the call sites

    3) Providing plain storage array based interfaces for stacktrace
    and stackdepot.

    4) Cleaning up the mess at the callsites including some related
    cleanups.

    5) Removing the struct stack_trace based interfaces

    This is not changing the struct stack_trace interfaces at the
    architecture level, but it removes the exposure to the generic
    code'"

    * 'core-stacktrace-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (45 commits)
    x86/stacktrace: Use common infrastructure
    stacktrace: Provide common infrastructure
    lib/stackdepot: Remove obsolete functions
    stacktrace: Remove obsolete functions
    livepatch: Simplify stack trace retrieval
    tracing: Remove the last struct stack_trace usage
    tracing: Simplify stack trace retrieval
    tracing: Make ftrace_trace_userstack() static and conditional
    tracing: Use percpu stack trace buffer more intelligently
    tracing: Simplify stacktrace retrieval in histograms
    lockdep: Simplify stack trace handling
    lockdep: Remove save argument from check_prev_add()
    lockdep: Remove unused trace argument from print_circular_bug()
    drm: Simplify stacktrace handling
    dm persistent data: Simplify stack trace handling
    dm bufio: Simplify stack trace retrieval
    btrfs: ref-verify: Simplify stack trace retrieval
    dma/debug: Simplify stracktrace retrieval
    fault-inject: Simplify stacktrace retrieval
    mm/page_owner: Simplify stack trace handling
    ...

    Linus Torvalds
     

29 Apr, 2019

1 commit

  • Replace the indirection through struct stack_trace by using the storage
    array based interfaces.

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Josh Poimboeuf
    Acked-by: Dmitry Vyukov
    Acked-by: Andrey Ryabinin
    Cc: Andy Lutomirski
    Cc: Alexander Potapenko
    Cc: kasan-dev@googlegroups.com
    Cc: linux-mm@kvack.org
    Cc: Steven Rostedt
    Cc: Alexey Dobriyan
    Cc: Andrew Morton
    Cc: Christoph Lameter
    Cc: Pekka Enberg
    Cc: David Rientjes
    Cc: Catalin Marinas
    Cc: Mike Rapoport
    Cc: Akinobu Mita
    Cc: Christoph Hellwig
    Cc: iommu@lists.linux-foundation.org
    Cc: Robin Murphy
    Cc: Marek Szyprowski
    Cc: Johannes Thumshirn
    Cc: David Sterba
    Cc: Chris Mason
    Cc: Josef Bacik
    Cc: linux-btrfs@vger.kernel.org
    Cc: dm-devel@redhat.com
    Cc: Mike Snitzer
    Cc: Alasdair Kergon
    Cc: Daniel Vetter
    Cc: intel-gfx@lists.freedesktop.org
    Cc: Joonas Lahtinen
    Cc: Maarten Lankhorst
    Cc: dri-devel@lists.freedesktop.org
    Cc: David Airlie
    Cc: Jani Nikula
    Cc: Rodrigo Vivi
    Cc: Tom Zanussi
    Cc: Miroslav Benes
    Cc: linux-arch@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190425094801.963261479@linutronix.de

    Thomas Gleixner
     

15 Apr, 2019

1 commit

  • No architecture terminates the stack trace with ULONG_MAX anymore. Remove
    the cruft.

    Signed-off-by: Thomas Gleixner
    Acked-by: Dmitry Vyukov
    Acked-by: Peter Zijlstra (Intel)
    Cc: Josh Poimboeuf
    Cc: Andy Lutomirski
    Cc: Steven Rostedt
    Cc: Alexander Potapenko
    Cc: Andrey Ryabinin
    Cc: kasan-dev@googlegroups.com
    Cc: linux-mm@kvack.org
    Link: https://lkml.kernel.org/r/20190410103644.750219625@linutronix.de

    Thomas Gleixner
     

03 Apr, 2019

1 commit

  • KASAN inserts extra code for every LOAD/STORE emitted by the compiler.
    Much of this code is simple and safe to run with AC=1, however the
    kasan_report() function, called on error, is most certainly not safe
    to call with AC=1.

    Therefore wrap kasan_report() in user_access_{save,restore}; which for
    x86 SMAP, saves/restores EFLAGS and clears AC before calling the real
    function.

    Also ensure all the functions are without __fentry__ hook. The
    function tracer is also not safe.
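
    A hedged sketch of the resulting wrapper (close to the actual change, with
    the report body assumed to be factored into a __kasan_report() helper):

    void kasan_report(unsigned long addr, size_t size, bool is_write,
                      unsigned long ip)
    {
            unsigned long ua_flags = user_access_save(); /* x86/SMAP: save EFLAGS, clear AC */

            __kasan_report(addr, size, is_write, ip);

            user_access_restore(ua_flags);
    }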

    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Borislav Petkov
    Cc: Dmitry Vyukov
    Cc: Josh Poimboeuf
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

06 Mar, 2019

1 commit

  • Building little-endian allmodconfig kernels on arm64 started failing
    with the generated atomic.h implementation, since we now try to call
    kasan helpers from the EFI stub:

    aarch64-linux-gnu-ld: drivers/firmware/efi/libstub/arm-stub.stub.o: in function `atomic_set':
    include/generated/atomic-instrumented.h:44: undefined reference to `__efistub_kasan_check_write'

    I suspect that we get similar problems in other files that explicitly
    disable KASAN for some reason but call atomic_t based helper functions.

    We can fix this by checking the predefined __SANITIZE_ADDRESS__ macro
    that the compiler sets instead of checking CONFIG_KASAN, but this in
    turn requires a small hack in mm/kasan/common.c so we do see the extern
    declaration there instead of the inline function.

    Link: http://lkml.kernel.org/r/20181211133453.2835077-1-arnd@arndb.de
    Fixes: b1864b828644 ("locking/atomics: build atomic headers as required")
    Signed-off-by: Arnd Bergmann
    Reported-by: Anders Roxell
    Acked-by: Andrey Ryabinin
    Cc: Ard Biesheuvel
    Cc: Will Deacon
    Cc: Mark Rutland
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Andrey Konovalov
    Cc: Stephen Rothwell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arnd Bergmann
     

22 Feb, 2019

1 commit

  • When an object is kmalloc()'ed, two hooks are called: kasan_slab_alloc()
    and kasan_kmalloc(). Right now we assign a tag twice, once in each of the
    hooks. Fix it by assigning a tag only in the former hook.

    Link: http://lkml.kernel.org/r/ce8c6431da735aa7ec051fd6497153df690eb021.1549921721.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Cc: Alexander Potapenko
    Cc: Andrey Ryabinin
    Cc: Catalin Marinas
    Cc: Christoph Lameter
    Cc: David Rientjes
    Cc: Dmitry Vyukov
    Cc: Evgeniy Stepanov
    Cc: Joonsoo Kim
    Cc: Kostya Serebryany
    Cc: Pekka Enberg
    Cc: Qian Cai
    Cc: Vincenzo Frascino
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     

09 Jan, 2019

2 commits

  • Right now tag-based KASAN can retag memory that is reallocated via
    krealloc and return a differently tagged pointer even if the same slab
    object gets reused and no reallocation technically happens.

    There are a few issues with this approach. One is that krealloc callers
    can't rely on comparing the return value with the passed argument to
    check whether reallocation happened. Another is that if a caller knows
    that no reallocation happened, it may access object memory through the
    old pointer, which leads to false positives. Look at nf_ct_ext_add() for
    an example.

    Fix this by keeping the same tag if the memory doesn't actually get
    reallocated during krealloc.

    Link: http://lkml.kernel.org/r/bb2a71d17ed072bcc528cbee46fcbd71a6da3be4.1546540962.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Cc: Andrey Ryabinin
    Cc: Christoph Lameter
    Cc: Dmitry Vyukov
    Cc: Mark Rutland
    Cc: Vincenzo Frascino
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • Instead of changing cache->align to be aligned to KASAN_SHADOW_SCALE_SIZE
    in kasan_cache_create() we can reuse the ARCH_SLAB_MINALIGN macro.

    Link: http://lkml.kernel.org/r/52ddd881916bcc153a9924c154daacde78522227.1546540962.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Suggested-by: Vincenzo Frascino
    Cc: Andrey Ryabinin
    Cc: Christoph Lameter
    Cc: Dmitry Vyukov
    Cc: Mark Rutland
    Cc: Vincenzo Frascino
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     

29 Dec, 2018

6 commits

  • This patch adds a "SPDX-License-Identifier: GPL-2.0" mark to all source
    files under mm/kasan.

    Link: http://lkml.kernel.org/r/bce2d1e618afa5142e81961ab8fa4b4165337380.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • This patch adds __must_check annotations to kasan hooks that return a
    pointer to make sure that a tagged pointer always gets propagated.

    Link: http://lkml.kernel.org/r/03b269c5e453945f724bfca3159d4e1333a8fb1c.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Suggested-by: Andrey Ryabinin
    Cc: Christoph Lameter
    Cc: Dmitry Vyukov
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • Tag-based KASAN doesn't check memory accesses through pointers tagged with
    0xff. When page_address is used to get pointer to memory that corresponds
    to some page, the tag of the resulting pointer gets set to 0xff, even
    though the allocated memory might have been tagged differently.

    For slab pages it's impossible to recover the correct tag to return from
    page_address, since the page might contain multiple slab objects tagged
    with different values, and we can't know in advance which one of them is
    going to get accessed. For non-slab pages, however, we can recover the tag
    in page_address, since the whole page was marked with the same tag.

    This patch adds tagging to non-slab memory allocated with pagealloc. To
    set the tag of the pointer returned from page_address, the tag gets stored
    to page->flags when the memory gets allocated.
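
    A hedged sketch of the page->flags accessors this relies on (modeled on
    the page_kasan_tag() helpers in include/linux/mm.h; bit positions are
    illustrative):

    static inline u8 page_kasan_tag(const struct page *page)
    {
            return (page->flags >> KASAN_TAG_PGSHIFT) & KASAN_TAG_MASK;
    }

    static inline void page_kasan_tag_set(struct page *page, u8 tag)
    {
            page->flags &= ~(KASAN_TAG_MASK << KASAN_TAG_PGSHIFT);
            page->flags |= (tag & KASAN_TAG_MASK) << KASAN_TAG_PGSHIFT;
    }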

    Link: http://lkml.kernel.org/r/d758ddcef46a5abc9970182b9137e2fbee202a2c.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Acked-by: Will Deacon
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • This commit adds tag-based KASAN specific hooks implementation and
    adjusts common generic and tag-based KASAN ones.

    1. When a new slab cache is created, tag-based KASAN rounds up the size of
    the objects in this cache to KASAN_SHADOW_SCALE_SIZE (== 16).

    2. On each kmalloc tag-based KASAN generates a random tag, sets the shadow
    memory, that corresponds to this object to this tag, and embeds this
    tag value into the top byte of the returned pointer.

    3. On each kfree tag-based KASAN poisons the shadow memory with a random
    tag to allow detection of use-after-free bugs.

    The rest of the logic of the hook implementation is very much similar to
    the one provided by generic KASAN. Tag-based KASAN saves allocation and
    free stack metadata to the slab object the same way generic KASAN does.
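
    A hedged sketch of the kmalloc-side hook flow (loosely following the
    common kasan_kmalloc() of that era; helper names and details are
    illustrative, not the verbatim kernel code):

    void *kasan_kmalloc(struct kmem_cache *cache, const void *object,
                        size_t size, gfp_t flags)
    {
            unsigned long redzone_start, redzone_end;
            u8 tag;

            if (unlikely(object == NULL))
                    return NULL;

            redzone_start = round_up((unsigned long)(object + size),
                                     KASAN_SHADOW_SCALE_SIZE);
            redzone_end = round_up((unsigned long)object + cache->object_size,
                                   KASAN_SHADOW_SCALE_SIZE);

            tag = assign_tag(cache, object, false);     /* random tag (SW tags) */

            /* set_tag() is a no-op without CONFIG_KASAN_SW_TAGS */
            kasan_unpoison_shadow(set_tag(object, tag), size);
            kasan_poison_shadow((void *)redzone_start,
                                redzone_end - redzone_start,
                                KASAN_KMALLOC_REDZONE);

            if (cache->flags & SLAB_KASAN)
                    set_track(&get_alloc_info(cache, object)->alloc_track, flags);

            return set_tag(object, tag);                /* tag in the top byte */
    }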

    Link: http://lkml.kernel.org/r/bda78069e3b8422039794050ddcb2d53d053ed41.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • A tag-based KASAN shadow memory cell contains a memory tag, that
    corresponds to the tag in the top byte of the pointer, that points to that
    memory. The native top byte value of kernel pointers is 0xff, so with
    tag-based KASAN we need to initialize shadow memory to 0xff.
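
    A hedged sketch of how the initial shadow value is chosen (the kernel uses
    a KASAN_SHADOW_INIT constant for this; the memset target name below is
    illustrative):

    #ifdef CONFIG_KASAN_SW_TAGS
    #define KASAN_SHADOW_INIT 0xFF  /* matches the native top byte of kernel pointers */
    #else
    #define KASAN_SHADOW_INIT 0     /* generic KASAN: whole granule accessible */
    #endif

    /* early shadow is then filled with it, roughly: */
    memset(early_shadow_page, KASAN_SHADOW_INIT, PAGE_SIZE);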

    [cai@lca.pw: arm64: skip kmemleak for KASAN again]
    Link: http://lkml.kernel.org/r/20181226020550.63712-1-cai@lca.pw
    Link: http://lkml.kernel.org/r/5cc1b789aad7c99cf4f3ec5b328b147ad53edb40.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov
     
  • Tag-based KASAN reuses a significant part of the generic KASAN code, so
    move the common parts to common.c without any functional changes.

    Link: http://lkml.kernel.org/r/114064d002356e03bb8cc91f7835e20dc61b51d9.1544099024.git.andreyknvl@google.com
    Signed-off-by: Andrey Konovalov
    Reviewed-by: Andrey Ryabinin
    Reviewed-by: Dmitry Vyukov
    Cc: Christoph Lameter
    Cc: Mark Rutland
    Cc: Will Deacon
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrey Konovalov