13 Jan, 2019

2 commits

  • commit 10fdf838e5f540beca466e9d1325999c072e5d3f upstream.

    On several arches, virt_to_phys() is in io.h

    Build fails without it:

    CC lib/test_debug_virtual.o
    lib/test_debug_virtual.c: In function 'test_debug_virtual_init':
    lib/test_debug_virtual.c:26:7: error: implicit declaration of function 'virt_to_phys' [-Werror=implicit-function-declaration]
    pa = virt_to_phys(va);
    ^
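
    A minimal sketch of the kind of one-line fix this implies; whether the
    include ends up being <linux/io.h> or <asm/io.h> on a given arch is an
    assumption here, not taken from the patch:

    /* lib/test_debug_virtual.c */
    #include <linux/io.h>        /* pulls in the arch's virt_to_phys() */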

    Fixes: e4dace361552 ("lib: add test module for CONFIG_DEBUG_VIRTUAL")
    CC: stable@vger.kernel.org
    Signed-off-by: Christophe Leroy
    Reviewed-by: Kees Cook
    Signed-off-by: Michael Ellerman
    Signed-off-by: Greg Kroah-Hartman

    Christophe Leroy
     
  • commit e213574a449f7a57d4202c1869bbc7680b6b5521 upstream.

    We cannot build these files with clang as it does not allow altivec
    instructions in assembly when -msoft-float is passed.

    Jinsong Ji wrote:
    > We currently disable Altivec/VSX support when enabling soft-float. So
    > any usage of vector builtins will break.
    >
    > Enable Altivec/VSX with soft-float may need quite some clean up work, so
    > I guess this is currently a limitation.
    >
    > Removing -msoft-float will make it work (and we are lucky that no
    > floating point instructions will be generated as well).

    This is a workaround until the issue is resolved in clang.

    Link: https://bugs.llvm.org/show_bug.cgi?id=31177
    Link: https://github.com/ClangBuiltLinux/linux/issues/239
    Signed-off-by: Joel Stanley
    Reviewed-by: Nick Desaulniers
    Signed-off-by: Michael Ellerman
    [nc: Use 'ifeq ($(cc-name),clang)' instead of 'ifdef CONFIG_CC_IS_CLANG'
    because that config does not exist in 4.14; the Kconfig rewrite
    that added that config happened in 4.18]
    Signed-off-by: Nathan Chancellor
    Signed-off-by: Greg Kroah-Hartman

    Joel Stanley
     

17 Dec, 2018

2 commits

  • commit 0b548e33e6cb2bff240fdaf1783783be15c29080 upstream.

    Fengguang reported soft lockups while running the rbtree and interval
    tree test modules. The logic for these tests all occurs in the init
    phase, and we are currently pounding away with the default values for the
    number of nodes and the number of iterations of each test. Reduce the
    latter by two orders of magnitude. This does not diminish the value of the
    tests, as one thousand iterations by default is enough to get the picture.

    Link: http://lkml.kernel.org/r/20171109161715.xai2dtwqw2frhkcm@linux-n805
    Signed-off-by: Davidlohr Bueso
    Reported-by: Fengguang Wu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Cc: Guenter Roeck
    Signed-off-by: Greg Kroah-Hartman

    Davidlohr Bueso
     
  • [ Upstream commit 8de456cf87ba863e028c4dd01bae44255ce3d835 ]

    CONFIG_DEBUG_OBJECTS_RCU_HEAD does not play well with kmemleak due to
    recursive calls.

    fill_pool
      kmemleak_ignore
        make_black_object
          put_object
            __call_rcu (kernel/rcu/tree.c)
              debug_rcu_head_queue
                debug_object_activate
                  debug_object_init
                    fill_pool
                      kmemleak_ignore
                        make_black_object
                          ...

    So add SLAB_NOLEAKTRACE to kmem_cache_create() to not register newly
    allocated debug objects at all.
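
    A sketch of the change described above; the cache name, object size and
    any pre-existing flags are assumptions about the surrounding code:

    obj_cache = kmem_cache_create("debug_objects_cache",
                                  sizeof(struct debug_obj), 0,
                                  SLAB_DEBUG_OBJECTS | SLAB_NOLEAKTRACE,
                                  NULL);  /* NOLEAKTRACE: skip kmemleak registration */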

    Link: http://lkml.kernel.org/r/20181126165343.2339-1-cai@gmx.us
    Signed-off-by: Qian Cai
    Suggested-by: Catalin Marinas
    Acked-by: Waiman Long
    Acked-by: Catalin Marinas
    Cc: Thomas Gleixner
    Cc: Yang Shi
    Cc: Arnd Bergmann
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Sasha Levin

    Qian Cai
     

13 Dec, 2018

2 commits

  • commit 7d63fb3af87aa67aa7d24466e792f9d7c57d8e79 upstream.

    This removes needless use of '%p', and refactors the printk calls to
    use pr_*() helpers instead.

    Signed-off-by: Kees Cook
    Reviewed-by: Konrad Rzeszutek Wilk
    Signed-off-by: Christoph Hellwig
    [bwh: Backported to 4.14:
    - Adjust filename
    - Remove "swiotlb: " prefix from an additional log message]
    Signed-off-by: Ben Hutchings
    Signed-off-by: Sasha Levin

    Kees Cook
     
  • [ Upstream commit 8bb0a88600f0267cfcc245d34f8c4abe8c282713 ]

    In the case where req->fw->size > PAGE_SIZE, the error return rc is
    set to -EINVAL; however, it is then overwritten with
    rc = req->fw->size because the error exit path via label 'out' is
    not taken. Fix this by adding the jump to the error exit
    path 'out'.
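
    A hedged sketch of the missing jump; the surrounding code is an
    assumption, only the added 'goto out' reflects the description above:

    if (req->fw->size > PAGE_SIZE) {
            rc = -EINVAL;
            goto out;                /* previously missing, so execution fell through */
    }
    rc = req->fw->size;              /* ... and rc was overwritten here */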

    Detected by CoverityScan, CID#1453465 ("Unused value")

    Fixes: c92316bf8e94 ("test_firmware: add batched firmware tests")
    Signed-off-by: Colin Ian King
    Signed-off-by: Greg Kroah-Hartman
    Signed-off-by: Sasha Levin

    Colin Ian King
     

08 Dec, 2018

2 commits

  • commit 77d2a24b6107bd9b3bf2403a65c1428a9da83dd0 upstream.

    gcc 8.1.0 complains:

    lib/kobject.c:128:3: warning:
    'strncpy' output truncated before terminating nul copying as many
    bytes from a string as its length [-Wstringop-truncation]
    lib/kobject.c: In function 'kobject_get_path':
    lib/kobject.c:125:13: note: length computed here

    Using strncpy() is indeed less than perfect since the length of data to
    be copied has already been determined with strlen(). Replace strncpy()
    with memcpy() to address the warning and optimize the code a little.

    Signed-off-by: Guenter Roeck
    Signed-off-by: Greg Kroah-Hartman
    Signed-off-by: Greg Kroah-Hartman

    Guenter Roeck
     
  • commit b1286ed7158e9b62787508066283ab0b8850b518 upstream.

    New versions of gcc reasonably warn about the odd pattern of

    strncpy(p, q, strlen(q));

    which really doesn't make sense: the strncpy() ends up being just a slow
    and odd way to write memcpy() in this case.
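
    A minimal sketch of the rewrite, using p and q as in the pattern above:

    size_t len = strlen(q);
    /* strncpy(p, q, len) copies exactly len bytes and never NUL-terminates
     * here; memcpy() states the intent and silences -Wstringop-truncation */
    memcpy(p, q, len);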

    Apparently there was a patch for this floating around earlier, but it
    got lost.

    Acked-again-by: Andy Shevchenko
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Linus Torvalds
     

06 Dec, 2018

1 commit

  • commit 5618cf031fecda63847cafd1091e7b8bd626cdb1 upstream.

    We free the misc device string twice on rmmod; fix this. Without this
    we cannot remove the module without crashing.

    Link: http://lkml.kernel.org/r/20181124050500.5257-1-mcgrof@kernel.org
    Signed-off-by: Luis Chamberlain
    Reported-by: Randy Dunlap
    Reviewed-by: Andrew Morton
    Cc: [4.12+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Luis Chamberlain
     

27 Nov, 2018

1 commit

  • [ Upstream commit 313a06e636808387822af24c507cba92703568b1 ]

    The lib/raid6/test fails to build the neon objects
    on arm64 because the correct machine type is 'aarch64'.

    Once this is correctly enabled, the neon recovery objects
    need to be added to the build.

    Reviewed-by: Ard Biesheuvel
    Signed-off-by: Jeremy Linton
    Signed-off-by: Catalin Marinas
    Signed-off-by: Sasha Levin

    Jeremy Linton
     

21 Nov, 2018

1 commit

  • commit 1c23b4108d716cc848b38532063a8aca4f86add8 upstream.

    gcc-8 complains about the prototype for this function:

    lib/ubsan.c:432:1: error: ignoring attribute 'noreturn' in declaration of a built-in function '__ubsan_handle_builtin_unreachable' because it conflicts with attribute 'const' [-Werror=attributes]

    This is actually a GCC bug. In GCC internals,
    __ubsan_handle_builtin_unreachable() is declared with both 'noreturn' and
    'const' attributes instead of only 'noreturn':

    https://gcc.gnu.org/bugzilla/show_bug.cgi?id=84210

    Work around this by removing the noreturn attribute.
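
    As a sketch, the workaround amounts to dropping the attribute from the
    handler's definition (parameter type shown as an assumption):

    -void __noreturn __ubsan_handle_builtin_unreachable(struct unreachable_data *data)
    +void __ubsan_handle_builtin_unreachable(struct unreachable_data *data)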

    [aryabinin: add information about GCC bug in changelog]
    Link: http://lkml.kernel.org/r/20181107144516.4587-1-aryabinin@virtuozzo.com
    Signed-off-by: Arnd Bergmann
    Signed-off-by: Andrey Ryabinin
    Acked-by: Olof Johansson
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Arnd Bergmann
     

14 Nov, 2018

1 commit

  • [ Upstream commit 9506a7425b094d2f1d9c877ed5a78f416669269b ]

    It was found that when debug_locks was turned off because of a problem
    found by the lockdep code, the system performance could drop quite
    significantly when the lock_stat code was also configured into the
    kernel. For instance, parallel kernel build time on a 4-socket x86-64
    server nearly doubled.

    Further analysis into the cause of the slowdown traced back to the
    frequent call to debug_locks_off() from the __lock_acquired() function
    probably due to some inconsistent lockdep states with debug_locks
    off. The debug_locks_off() function did an unconditional atomic xchg
    to write a 0 value into debug_locks which had already been set to 0.
    This led to severe cacheline contention in the cacheline that held
    debug_locks. As debug_locks is being referenced in quite a few different
    places in the kernel, this greatly slowed down system performance.

    To prevent this thrashing of the debug_locks cacheline, lock_acquired()
    and lock_contended() now check the state of debug_locks before
    proceeding. The debug_locks_off() function is also modified to check
    debug_locks before calling __debug_locks_off().
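
    A sketch of the idea; everything beyond the added debug_locks check is an
    assumption about the surrounding code:

    int debug_locks_off(void)
    {
            /* read first: the unconditional xchg() in __debug_locks_off()
             * is what kept bouncing the cacheline */
            if (debug_locks && __debug_locks_off()) {
                    if (!debug_locks_silent) {
                            console_verbose();
                            return 1;
                    }
            }
            return 0;
    }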

    Signed-off-by: Waiman Long
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Will Deacon
    Link: http://lkml.kernel.org/r/1539913518-15598-1-git-send-email-longman@redhat.com
    Signed-off-by: Ingo Molnar
    Signed-off-by: Sasha Levin
    Signed-off-by: Greg Kroah-Hartman

    Waiman Long
     

04 Nov, 2018

1 commit

  • [ Upstream commit 52fda36d63bfc8c8e8ae5eda8eb5ac6f52cd67ed ]

    Function bpf_fill_maxinsns11 is designed to not be able to be JITed on
    x86_64. So, it fails when CONFIG_BPF_JIT_ALWAYS_ON=y, and
    commit 09584b406742 ("bpf: fix selftests/bpf test_kmod.sh failure when
    CONFIG_BPF_JIT_ALWAYS_ON=y") makes sure that failure is detected in that
    case.

    However, it does not fail on other architectures, which have a different
    JIT compiler design. So, test_bpf has started to fail to load on those.

    After this fix, test_bpf loads fine on both x86_64 and ppc64el.

    Fixes: 09584b406742 ("bpf: fix selftests/bpf test_kmod.sh failure when CONFIG_BPF_JIT_ALWAYS_ON=y")
    Signed-off-by: Thadeu Lima de Souza Cascardo
    Reviewed-by: Yonghong Song
    Signed-off-by: Daniel Borkmann
    Signed-off-by: Sasha Levin

    Thadeu Lima de Souza Cascardo
     

04 Oct, 2018

1 commit

  • [ Upstream commit 624fa7790f80575a4ec28fbdb2034097dc18d051 ]

    In the scsi_transport_srp implementation it cannot be avoided that a
    klist is iterated from atomic context when using the legacy block
    layer instead of blk-mq. Hence this patch, which makes it safe to use
    klists in atomic context. It avoids lockdep reporting the
    following:

    WARNING: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected
    Possible interrupt unsafe locking scenario:

    CPU0                                CPU1
    ----                                ----
    lock(&(&k->k_lock)->rlock);
                                        local_irq_disable();
                                        lock(&(&q->__queue_lock)->rlock);
                                        lock(&(&k->k_lock)->rlock);

    lock(&(&q->__queue_lock)->rlock);

    stack backtrace:
    Workqueue: kblockd blk_timeout_work
    Call Trace:
    dump_stack+0xa4/0xf5
    check_usage+0x6e6/0x700
    __lock_acquire+0x185d/0x1b50
    lock_acquire+0xd2/0x260
    _raw_spin_lock+0x32/0x50
    klist_next+0x47/0x190
    device_for_each_child+0x8e/0x100
    srp_timed_out+0xaf/0x1d0 [scsi_transport_srp]
    scsi_times_out+0xd4/0x410 [scsi_mod]
    blk_rq_timed_out+0x36/0x70
    blk_timeout_work+0x1b5/0x220
    process_one_work+0x4fe/0xad0
    worker_thread+0x63/0x5a0
    kthread+0x1c1/0x1e0
    ret_from_fork+0x24/0x30

    See also commit c9ddf73476ff ("scsi: scsi_transport_srp: Fix shost to
    rport translation").
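
    As a sketch, the change boils down to taking k_lock with interrupts
    disabled throughout lib/klist.c; add_head() is shown as one example, and
    the exact set of converted helpers is an assumption:

    static void add_head(struct klist *k, struct klist_node *n)
    {
            unsigned long flags;

            spin_lock_irqsave(&k->k_lock, flags);
            list_add(&n->n_node, &k->k_list);
            spin_unlock_irqrestore(&k->k_lock, flags);
    }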

    Signed-off-by: Bart Van Assche
    Cc: Martin K. Petersen
    Cc: James Bottomley
    Acked-by: Greg Kroah-Hartman
    Signed-off-by: Martin K. Petersen
    Signed-off-by: Sasha Levin
    Signed-off-by: Greg Kroah-Hartman

    Bart Van Assche
     

20 Sep, 2018

1 commit

  • Rehashing and destroying a large hash table takes a lot of time,
    and happens in process context. It is safe to add cond_resched()
    in rhashtable_rehash_table() and rhashtable_free_and_destroy().
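
    A sketch of what this looks like in the rehash loop; the loop details are
    assumptions, the point is the added cond_resched() per bucket:

    for (old_hash = 0; old_hash < old_tbl->size; old_hash++) {
            err = rhashtable_rehash_chain(ht, old_hash);
            if (err)
                    return err;
            cond_resched();        /* large tables: yield between buckets */
    }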

    Signed-off-by: Eric Dumazet
    Acked-by: Herbert Xu
    Signed-off-by: David S. Miller
    (cherry picked from commit ae6da1f503abb5a5081f9f6c4a6881de97830f3e)
    Signed-off-by: Greg Kroah-Hartman

    Eric Dumazet
     

15 Sep, 2018

1 commit

  • commit fc91a3c4c27acdca0bc13af6fbb68c35cfd519f2 upstream.

    While debugging an issue, debugobject tracking warned about an annotation
    issue for an object on the stack. It turned out that the object in
    question was on a different stack, which was itself caused by another
    issue.

    Thomas suggested to print the pointers and the location of the stack for
    the currently running task. This helped to figure out that the object was
    on the wrong stack.

    As this is generally useful information for debugging similar issues, make
    the error message more informative by printing the pointers.
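
    A sketch of the more informative message; the exact format string is an
    assumption, the point is printing both the object and the current task's
    stack page:

    pr_warn("object %p is NOT on stack %p, but annotated.\n",
            addr, task_stack_page(current));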

    [ tglx: Massaged changelog ]

    Signed-off-by: Joel Fernandes (Google)
    Signed-off-by: Thomas Gleixner
    Acked-by: Waiman Long
    Acked-by: Yang Shi
    Cc: kernel-team@android.com
    Cc: Arnd Bergmann
    Cc: astrachan@google.com
    Link: https://lkml.kernel.org/r/20180723212531.202328-1-joel@joelfernandes.org
    Signed-off-by: Greg Kroah-Hartman

    Joel Fernandes (Google)
     

05 Sep, 2018

1 commit

  • commit 03fc7f9c99c1e7ae2925d459e8487f1a6f199f79 upstream.

    The commit 719f6a7040f1bdaf96 ("printk: Use the main logbuf in NMI
    when logbuf_lock is available") brought back the possible deadlocks
    in printk() and NMI.

    The check of logbuf_lock is done only in printk_nmi_enter() to prevent
    mixed output. But another CPU might take the lock later, enter NMI, and:

    + Both NMIs might be serialized by yet another lock, for example,
    the one in nmi_cpu_backtrace().

    + The other CPU might get stopped in NMI, see smp_send_stop()
    in panic().

    The only safe solution is to use trylock when storing the message
    into the main log-buffer. It might cause reordering when some lines
    go to the main log buffer directly and others are delayed via
    the per-CPU buffer. It means that it is not useful in general.

    This patch replaces the problematic NMI deferred context with NMI
    direct context. It can be used to mark a code that might produce
    many messages in NMI and the risk of losing them is more critical
    than problems with eventual reordering.

    The context is then used when dumping trace buffers on oops. It was
    the primary motivation for the original fix. Also, the reordering is an
    even smaller issue there because some traces have their own timestamps.
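
    As a sketch, the new context is meant to bracket heavy NMI printing such
    as a trace dump; the enter/exit names follow the "NMI direct" wording
    above and the call site is an assumption:

    printk_nmi_direct_enter();
    ftrace_dump(DUMP_ALL);      /* many lines; losing them is worse than reordering */
    printk_nmi_direct_exit();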

    Finally, nmi_cpu_backtrace() no longer needs to be serialized because
    it will always use the per-CPU buffers again.

    Fixes: 719f6a7040f1bdaf96 ("printk: Use the main logbuf in NMI when logbuf_lock is available")
    Cc: stable@vger.kernel.org
    Link: http://lkml.kernel.org/r/20180627142028.11259-1-pmladek@suse.com
    To: Steven Rostedt
    Cc: Peter Zijlstra
    Cc: Tetsuo Handa
    Cc: Sergey Senozhatsky
    Cc: linux-kernel@vger.kernel.org
    Cc: stable@vger.kernel.org
    Acked-by: Sergey Senozhatsky
    Signed-off-by: Petr Mladek
    Signed-off-by: Greg Kroah-Hartman

    Petr Mladek
     

18 Aug, 2018

1 commit

  • commit 785a19f9d1dd8a4ab2d0633be4656653bd3de1fc upstream.

    The following kernel panic was observed on ARM64 platform due to a stale
    TLB entry.

    1. ioremap with 4K size, a valid pte page table is set.
    2. iounmap it, its pte entry is set to 0.
    3. ioremap the same address with 2M size, update its pmd entry with
    a new value.
    4. CPU may hit an exception because the old pmd entry is still in TLB,
    which leads to a kernel panic.

    Commit b6bdb7517c3d ("mm/vmalloc: add interfaces to free unmapped page
    table") has addressed this panic by falling to pte mappings in the above
    case on ARM64.

    To support pmd mappings in all cases, TLB purge needs to be performed
    in this case on ARM64.

    Add a new arg, 'addr', to pud_free_pmd_page() and pmd_free_pte_page()
    so that a TLB purge can be added later in separate patches.
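
    A sketch of the resulting interface (the int return type mirrors the
    existing helpers and is an assumption here):

    int pud_free_pmd_page(pud_t *pud, unsigned long addr);
    int pmd_free_pte_page(pmd_t *pmd, unsigned long addr);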

    [toshi.kani@hpe.com: merge changes, rewrite patch description]
    Fixes: 28ee90fe6048 ("x86/mm: implement free pmd/pte page interfaces")
    Signed-off-by: Chintan Pandya
    Signed-off-by: Toshi Kani
    Signed-off-by: Thomas Gleixner
    Cc: mhocko@suse.com
    Cc: akpm@linux-foundation.org
    Cc: hpa@zytor.com
    Cc: linux-mm@kvack.org
    Cc: linux-arm-kernel@lists.infradead.org
    Cc: Will Deacon
    Cc: Joerg Roedel
    Cc: stable@vger.kernel.org
    Cc: Andrew Morton
    Cc: Michal Hocko
    Cc: "H. Peter Anvin"
    Cc:
    Link: https://lkml.kernel.org/r/20180627141348.21777-3-toshi.kani@hpe.com
    Signed-off-by: Greg Kroah-Hartman

    Chintan Pandya
     

25 Jul, 2018

1 commit

  • [ Upstream commit 107d01f5ba10f4162c38109496607eb197059064 ]

    rhashtable_init() currently does not take into account the user-passed
    min_size parameter unless param->nelem_hint is set as well. As such,
    the default size (number of buckets) will always be HASH_DEFAULT_SIZE
    even if the smallest allowed size is larger than that. Remediate this
    by unconditionally calling into rounded_hashtable_size() and handling
    things accordingly.
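
    A hedged sketch of "handling things accordingly" in the helper; the
    rounding of nelem_hint is an assumption:

    static size_t rounded_hashtable_size(const struct rhashtable_params *params)
    {
            if (params->nelem_hint)
                    return max(roundup_pow_of_two(params->nelem_hint * 4 / 3),
                               (unsigned long)params->min_size);

            /* honour min_size even without a nelem_hint */
            return max_t(size_t, HASH_DEFAULT_SIZE, params->min_size);
    }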

    Signed-off-by: Davidlohr Bueso
    Acked-by: Herbert Xu
    Signed-off-by: David S. Miller
    Signed-off-by: Greg Kroah-Hartman

    Davidlohr Bueso
     

03 Jul, 2018

1 commit

  • commit 666902e42fd8344b923c02dc5b0f37948ff4f225 upstream.

    "%pCr" formats the current rate of a clock, and calls clk_get_rate().
    The latter obtains a mutex, hence it must not be called from atomic
    context.

    Remove support for this rarely-used format, as vsprintf() (and e.g.
    printk()) must be callable from any context.

    Any remaining out-of-tree users will start seeing the clock's name
    printed instead of its rate.

    Reported-by: Jia-Ju Bai
    Fixes: 900cca2944254edd ("lib/vsprintf: add %pC{,n,r} format specifiers for clocks")
    Link: http://lkml.kernel.org/r/1527845302-12159-5-git-send-email-geert+renesas@glider.be
    To: Jia-Ju Bai
    To: Jonathan Corbet
    To: Michael Turquette
    To: Stephen Boyd
    To: Zhang Rui
    To: Eduardo Valentin
    To: Eric Anholt
    To: Stefan Wahren
    To: Greg Kroah-Hartman
    Cc: Sergey Senozhatsky
    Cc: Petr Mladek
    Cc: Linus Torvalds
    Cc: Steven Rostedt
    Cc: linux-doc@vger.kernel.org
    Cc: linux-clk@vger.kernel.org
    Cc: linux-pm@vger.kernel.org
    Cc: linux-serial@vger.kernel.org
    Cc: linux-arm-kernel@lists.infradead.org
    Cc: linux-renesas-soc@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Cc: Geert Uytterhoeven
    Cc: stable@vger.kernel.org # 4.1+
    Signed-off-by: Geert Uytterhoeven
    Signed-off-by: Petr Mladek
    Signed-off-by: Greg Kroah-Hartman

    Geert Uytterhoeven
     

30 May, 2018

2 commits

  • [ Upstream commit ac68b1b3b9c73e652dc7ce0585672e23c5a2dca4 ]

    As reported by Dan, the parentheses are in the wrong place, and since the
    unlikely() call returns either 0 or 1, it's never less than zero. The
    second issue is that signed integer overflows like "INT_MAX + 1" are
    undefined behavior.

    Since num_test_devs represents the number of devices, we want to stop
    prior to hitting the max, and not rely on the wraparound at all. So
    just cap at num_test_devs + 1, prior to assigning a new device.
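
    A sketch of the capped check; the message text and exact comparison are
    assumptions, the point is stopping at the limit instead of overflowing:

    if (num_test_devs + 1 == INT_MAX) {
            pr_err("Maximum number of test devices created\n");
            return NULL;
    }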

    Link: http://lkml.kernel.org/r/20180224030046.24238-1-mcgrof@kernel.org
    Fixes: d9c6a72d6fa2 ("kmod: add test driver to stress test the module loader")
    Reported-by: Dan Carpenter
    Signed-off-by: Luis R. Rodriguez
    Acked-by: Kees Cook
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Sasha Levin
    Signed-off-by: Greg Kroah-Hartman

    Luis R. Rodriguez
     
  • commit 7a4deea1aa8bddfed4ef1b35fc2b6732563d8ad5 upstream.

    If the radix tree underlying the IDR happens to be full and we attempt
    to remove an id which is larger than any id in the IDR, we will call
    __radix_tree_delete() with an uninitialised 'slot' pointer, at which
    point anything could happen. This was easiest to hit with a single
    entry at id 0 and attempting to remove a non-0 id, but it could have
    happened with 64 entries and attempting to remove an id >= 64.
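
    A sketch of the easiest trigger described above (illustrative only; 'ptr'
    stands for any valid pointer):

    DEFINE_IDR(idr);

    idr_alloc(&idr, ptr, 0, 0, GFP_KERNEL);   /* single entry at id 0 */
    idr_remove(&idr, 1);                      /* id not present: walks into the
                                                 uninitialised 'slot' */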

    Roman said:

    The syzcaller test boils down to opening /dev/kvm, creating an
    eventfd, and calling a couple of KVM ioctls. None of this requires
    superuser. And the result is dereferencing an uninitialized pointer
    which is likely a crash. The specific path caught by syzbot is via
    KVM_HYPERV_EVENTD ioctl which is new in 4.17. But I guess there are
    other user-triggerable paths, so cc:stable is probably justified.

    Matthew added:

    We have around 250 calls to idr_remove() in the kernel today. Many of
    them pass an ID which is embedded in the object they're removing, so
    they're safe. Picking a few likely candidates:

    drivers/firewire/core-cdev.c looks unsafe; the ID comes from an ioctl.
    drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c is similar
    drivers/atm/nicstar.c could be taken down by a handcrafted packet

    Link: http://lkml.kernel.org/r/20180518175025.GD6361@bombadil.infradead.org
    Fixes: 0a835c4f090a ("Reimplement IDR and IDA using the radix tree")
    Reported-by:
    Debugged-by: Roman Kagan
    Signed-off-by: Matthew Wilcox
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Matthew Wilcox
     

23 May, 2018

2 commits

  • commit 9f418224e8114156d995b98fa4e0f4fd21f685fe upstream.

    Fix a race in the multi-order iteration code which causes the kernel to
    hit a GP fault. This was first seen with a production v4.15 based
    kernel (4.15.6-300.fc27.x86_64) utilizing a DAX workload which used
    order 9 PMD DAX entries.

    The race has to do with how we tear down multi-order sibling entries
    when we are removing an item from the tree. Remember for example that
    an order 2 entry looks like this:

    struct radix_tree_node.slots[] = [entry][sibling][sibling][sibling]

    where 'entry' is in some slot in the struct radix_tree_node, and the
    three slots following 'entry' contain sibling pointers which point back
    to 'entry.'

    When we delete 'entry' from the tree, we call :

    radix_tree_delete()
    radix_tree_delete_item()
    __radix_tree_delete()
    replace_slot()

    replace_slot() first removes the siblings in order from the first to the
    last, and then at the end replaces 'entry' with NULL. This means that for a
    brief period of time we end up with one or more of the siblings removed,
    so:

    struct radix_tree_node.slots[] = [entry][NULL][sibling][sibling]

    This causes an issue if you have a reader iterating over the slots in
    the tree via radix_tree_for_each_slot() while only under
    rcu_read_lock()/rcu_read_unlock() protection. This is a common case in
    mm/filemap.c.

    The issue is that when __radix_tree_next_slot() => skip_siblings() tries
    to skip over the sibling entries in the slots, it currently does so with
    an exact match on the slot directly preceding our current slot.
    Normally this works:

    V preceding slot
    struct radix_tree_node.slots[] = [entry][sibling][sibling][sibling]
    ^ current slot

    This lets you find the first sibling, and you skip them all in order.

    But in the case where one of the siblings is NULL, that slot is skipped
    and then our sibling detection is interrupted:

    V preceding slot
    struct radix_tree_node.slots[] = [entry][NULL][sibling][sibling]
    ^ current slot

    This means that the sibling pointers aren't recognized since they point
    all the way back to 'entry', so we think that they are normal internal
    radix tree pointers. This causes us to think we need to walk down to a
    struct radix_tree_node starting at the address of 'entry'.

    In a real running kernel this will crash the thread with a GP fault when
    you try and dereference the slots in your broken node starting at
    'entry'.

    We fix this race by fixing the way that skip_siblings() detects sibling
    nodes. Instead of testing against the preceding slot we instead look
    for siblings via is_sibling_entry() which compares against the position
    of the struct radix_tree_node.slots[] array. This ensures that sibling
    entries are properly identified, even if they are no longer contiguous
    with the 'entry' they point to.

    Link: http://lkml.kernel.org/r/20180503192430.7582-6-ross.zwisler@linux.intel.com
    Fixes: 148deab223b2 ("radix-tree: improve multiorder iterators")
    Signed-off-by: Ross Zwisler
    Reported-by: CR, Sapthagirish
    Reviewed-by: Jan Kara
    Cc: Matthew Wilcox
    Cc: Christoph Hellwig
    Cc: Dan Williams
    Cc: Dave Chinner
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Ross Zwisler
     
  • commit 1e3054b98c5415d5cb5f8824fc33b548ae5644c3 upstream.

    I had neglected to increment the error counter when the tests failed,
    which meant the tests were noisy when they failed, but did not actually
    return an error code.

    Link: http://lkml.kernel.org/r/20180509114328.9887-1-mpe@ellerman.id.au
    Fixes: 3cc78125a081 ("lib/test_bitmap.c: add optimisation tests")
    Signed-off-by: Matthew Wilcox
    Signed-off-by: Michael Ellerman
    Reported-by: Michael Ellerman
    Tested-by: Michael Ellerman
    Reviewed-by: Kees Cook
    Cc: Yury Norov
    Cc: Andy Shevchenko
    Cc: Geert Uytterhoeven
    Cc: [4.13+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Matthew Wilcox
     

09 May, 2018

1 commit

  • commit b4678df184b314a2bd47d2329feca2c2534aa12b upstream.

    The errseq_t infrastructure assumes that errors which occurred before
    the file descriptor was opened are of no interest to the application.
    This turns out to be a regression for some applications, notably Postgres.

    Before errseq_t, a writeback error would be reported exactly once (as
    long as the inode remained in memory), so Postgres could open a file,
    call fsync() and find out whether there had been a writeback error on
    that file from another process.

    This patch changes the errseq infrastructure to report errors to all
    file descriptors which are opened after the error occurred, but before
    it was reported to any file descriptor. This restores the user-visible
    behaviour.
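
    A hedged sketch of one way the sampling side can implement this, assuming
    the change lands in errseq_sample(): a freshly opened descriptor samples 0
    until some descriptor has actually reported the error:

    errseq_t errseq_sample(errseq_t *eseq)
    {
            errseq_t old = READ_ONCE(*eseq);

            /* nobody has seen this error yet: report it to this opener too */
            if (!(old & ERRSEQ_SEEN))
                    old = 0;
            return old;
    }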

    Cc: stable@vger.kernel.org
    Fixes: 5660e13d2fd6 ("fs: new infrastructure for writeback error handling and reporting")
    Signed-off-by: Matthew Wilcox
    Reviewed-by: Jeff Layton
    Signed-off-by: Jeff Layton
    Signed-off-by: Greg Kroah-Hartman

    Matthew Wilcox
     

02 May, 2018

1 commit

  • commit 3e14c6abbfb5c94506edda9d8e2c145d79375798 upstream.

    This WARNING proved to be noisy. The function still returns an error
    and callers should handle it. That's how most kernel code works.
    Downgrade the WARNING to pr_err() and leave WARNINGs for kernel bugs.

    Signed-off-by: Dmitry Vyukov
    Reported-by: syzbot+209c0f67f99fec8eb14b@syzkaller.appspotmail.com
    Reported-by: syzbot+7fb6d9525a4528104e05@syzkaller.appspotmail.com
    Reported-by: syzbot+2e63711063e2d8f9ea27@syzkaller.appspotmail.com
    Reported-by: syzbot+de73361ee4971b6e6f75@syzkaller.appspotmail.com
    Cc: stable
    Signed-off-by: Greg Kroah-Hartman

    Dmitry Vyukov
     

26 Apr, 2018

1 commit

  • [ Upstream commit 09584b406742413ac4c8d7e030374d4daa045b69 ]

    With CONFIG_BPF_JIT_ALWAYS_ON defined in the config file,
    tools/testing/selftests/bpf/test_kmod.sh fails like below:
    [root@localhost bpf]# ./test_kmod.sh
    sysctl: setting key "net.core.bpf_jit_enable": Invalid argument
    [ JIT enabled:0 hardened:0 ]
    [ 132.175681] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
    [ 132.458834] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
    [ JIT enabled:1 hardened:0 ]
    [ 133.456025] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
    [ 133.730935] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
    [ JIT enabled:1 hardened:1 ]
    [ 134.769730] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
    [ 135.050864] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
    [ JIT enabled:1 hardened:2 ]
    [ 136.442882] test_bpf: #297 BPF_MAXINSNS: Jump, gap, jump, ... FAIL to prog_create err=-524 len=4096
    [ 136.821810] test_bpf: Summary: 348 PASSED, 1 FAILED, [340/340 JIT'ed]
    [root@localhost bpf]#

    The test_kmod.sh script loads/removes test_bpf.ko multiple times with
    different settings for sysctl net.core.bpf_jit_{enable,harden}. The
    failing test #297 of test_bpf.ko is designed such that JIT always fails.

    Commit 290af86629b2 (bpf: introduce BPF_JIT_ALWAYS_ON config)
    introduced the following tightening logic:
    ...
    if (!bpf_prog_is_dev_bound(fp->aux)) {
            fp = bpf_int_jit_compile(fp);
    #ifdef CONFIG_BPF_JIT_ALWAYS_ON
            if (!fp->jited) {
                    *err = -ENOTSUPP;
                    return fp;
            }
    #endif
    ...
    With this logic, Test #297 always gets return value -ENOTSUPP
    when CONFIG_BPF_JIT_ALWAYS_ON is defined, causing the test failure.

    This patch fixes the failure by marking Test #297 as an expected failure
    when CONFIG_BPF_JIT_ALWAYS_ON is defined.

    Fixes: 290af86629b2 (bpf: introduce BPF_JIT_ALWAYS_ON config)
    Signed-off-by: Yonghong Song
    Signed-off-by: Daniel Borkmann
    Signed-off-by: Sasha Levin
    Signed-off-by: Greg Kroah-Hartman

    Yonghong Song
     

19 Apr, 2018

1 commit

  • commit 8351760ff5b2042039554b4948ddabaac644a976 upstream.

    syzbot is catching stalls at __bitmap_parselist()
    (https://syzkaller.appspot.com/bug?id=ad7e0351fbc90535558514a71cd3edc11681997a).
    The trigger is

    unsigned long v = 0;
    bitmap_parselist("7:,", &v, BITS_PER_LONG);

    which results in hitting an infinite loop at the 'while (a <= b)' loop
    in __bitmap_parselist().

    Reported-by: Tetsuo Handa
    Reported-by: syzbot
    Cc: Noam Camus
    Cc: Rasmus Villemoes
    Cc: Matthew Wilcox
    Cc: Mauro Carvalho Chehab
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Yury Norov
     

01 Apr, 2018

1 commit

  • [ Upstream commit d3dcf8eb615537526bd42ff27a081d46d337816e ]

    When inserting duplicate objects (those with the same key),
    current rhlist implementation messes up the chain pointers by
    updating the bucket pointer instead of prev next pointer to the
    newly inserted node. This causes missing elements on removal and
    travesal.

    Fix that by properly updating pprev pointer to point to
    the correct rhash_head next pointer.

    Issue: 1241076
    Change-Id: I86b2c140bcb4aeb10b70a72a267ff590bb2b17e7
    Fixes: ca26893f05e8 ('rhashtable: Add rhlist interface')
    Signed-off-by: Paul Blakey
    Acked-by: Herbert Xu
    Signed-off-by: David S. Miller
    Signed-off-by: Greg Kroah-Hartman

    Paul Blakey
     

29 Mar, 2018

1 commit

  • commit b6bdb7517c3d3f41f20e5c2948d6bc3f8897394e upstream.

    On architectures with CONFIG_HAVE_ARCH_HUGE_VMAP set, ioremap() may
    create pud/pmd mappings. A kernel panic was observed on arm64 systems
    with Cortex-A75 in the following steps as described by Hanjun Guo.

    1. ioremap a 4K size, valid page table will build,
    2. iounmap it, pte0 will set to 0;
    3. ioremap the same address with 2M size, pgd/pmd is unchanged,
    then set a new value for the pmd;
    4. pte0 is leaked;
    5. CPU may meet exception because the old pmd is still in TLB,
    which will lead to kernel panic.

    This panic is not reproducible on x86. INVLPG, called from iounmap,
    purges all levels of entries associated with the purged address on x86.
    x86 still has a memory leak.

    The patch changes the ioremap path to free unmapped page table(s) since
    doing so in the unmap path has the following issues:

    - The iounmap() path is shared with vunmap(). Since vmap() only
    supports pte mappings, making vunmap() to free a pte page is an
    overhead for regular vmap users as they do not need a pte page freed
    up.

    - Checking if all entries in a pte page are cleared in the unmap path
    is racy, and serializing this check is expensive.

    - The unmap path calls free_vmap_area_noflush() to do lazy TLB purges.
    Clearing a pud/pmd entry before the lazy TLB purges needs extra TLB
    purge.

    Add two interfaces, pud_free_pmd_page() and pmd_free_pte_page(), which
    clear a given pud/pmd entry and free up a page for the lower level
    entries.
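
    As a sketch, the pmd-level ioremap path can then clear a stale pte page
    before installing a huge mapping; the exact conditions are assumptions:

    if (ioremap_pmd_enabled() &&
        ((next - addr) == PMD_SIZE) &&
        IS_ALIGNED(phys_addr + addr, PMD_SIZE) &&
        pmd_free_pte_page(pmd)) {
            if (pmd_set_huge(pmd, phys_addr + addr, prot))
                    continue;
    }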

    This patch implements their stub functions on x86 and arm64, which work
    as a workaround.

    [akpm@linux-foundation.org: fix typo in pmd_free_pte_page() stub]
    Link: http://lkml.kernel.org/r/20180314180155.19492-2-toshi.kani@hpe.com
    Fixes: e61ce6ade404e ("mm: change ioremap to set up huge I/O mappings")
    Reported-by: Lei Li
    Signed-off-by: Toshi Kani
    Cc: Catalin Marinas
    Cc: Wang Xuefeng
    Cc: Will Deacon
    Cc: Hanjun Guo
    Cc: Michal Hocko
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Borislav Petkov
    Cc: Matthew Wilcox
    Cc: Chintan Pandya
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Toshi Kani
     

19 Mar, 2018

1 commit

  • [ Upstream commit a0e94598e6b6c0d1df6a5fa14eb7c767ca817a20 ]

    In _copy_from_user(), the destination is a kernel pointer and the source
    a userland one; _copy_to_user() is the other way round.

    Fixes: d597580d37377 ("generic ...copy_..._user primitives")
    Signed-off-by: Christophe Leroy
    Signed-off-by: Al Viro
    Signed-off-by: Sasha Levin
    Signed-off-by: Greg Kroah-Hartman

    Christophe Leroy
     

15 Mar, 2018

1 commit

  • commit 1b4cfe3c0a30dde968fb43c577a8d7e262a145ee upstream.

    Commit b8347c219649 ("x86/debug: Handle warnings before the notifier
    chain, to fix KGDB crash") changed the ordering of fixups, and did not
    take into account the case of x86 processing non-WARN() and non-BUG()
    exceptions. This would lead to output of a false BUG line with no other
    information.

    In the case of a refcount exception, it would be immediately followed by
    the refcount WARN(), producing very strange double-"cut here":

    lkdtm: attempting bad refcount_inc() overflow
    ------------[ cut here ]------------
    Kernel BUG at 0000000065f29de5 [verbose debug info unavailable]
    ------------[ cut here ]------------
    refcount_t overflow at lkdtm_REFCOUNT_INC_OVERFLOW+0x6b/0x90 in cat[3065], uid/euid: 0/0
    WARNING: CPU: 0 PID: 3065 at kernel/panic.c:657 refcount_error_report+0x9a/0xa4
    ...

    In the prior ordering, exceptions were searched first:

    do_trap_no_signal(struct task_struct *tsk, int trapnr, char *str,
    ...
            if (fixup_exception(regs, trapnr))
                    return 0;

    -       if (fixup_bug(regs, trapnr))
    -               return 0;
    -

    As a result, fixup_bug()'s is_valid_bugaddr() didn't take into account
    needing to search the exception list first, since that had already
    happened.

    So, instead of searching the exception list twice (once in
    is_valid_bugaddr() and then again in fixup_exception()), just add a
    simple sanity check to report_bug() that will immediately bail out if a
    BUG() (or WARN()) entry is not found.
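
    A sketch of that sanity check at the top of report_bug(); the find_bug()
    helper name is an assumption:

    bug = find_bug(bugaddr);
    if (!bug)
            return BUG_TRAP_TYPE_NONE;   /* not a BUG()/WARN() site, let others handle it */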

    Link: http://lkml.kernel.org/r/20180301225934.GA34350@beast
    Fixes: b8347c219649 ("x86/debug: Handle warnings before the notifier chain, to fix KGDB crash")
    Signed-off-by: Kees Cook
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: Peter Zijlstra
    Cc: Borislav Petkov
    Cc: Richard Weinberger
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Kees Cook
     

03 Mar, 2018

1 commit

  • [ Upstream commit bbc25bee37d2b32cf3a1fab9195b6da3a185614a ]

    Current MIPS64r6 toolchains aren't able to generate efficient
    DMULU/DMUHU based code for the C implementation of umul_ppmm(), which
    performs an unsigned 64 x 64 bit multiply and returns the upper and
    lower 64-bit halves of the 128-bit result. Instead it widens the 64-bit
    inputs to 128-bits and emits a __multi3 intrinsic call to perform a 128
    x 128 multiply. This is both inefficient and results in a link error,
    since we don't include __multi3 in MIPS Linux.

    For example commit 90a53e4432b1 ("cfg80211: implement regdb signature
    checking") merged in v4.15-rc1 recently broke the 64r6_defconfig and
    64r6el_defconfig builds by indirectly selecting MPILIB. The same build
    errors can be reproduced on older kernels by enabling e.g. CRYPTO_RSA:

    lib/mpi/generic_mpih-mul1.o: In function `mpihelp_mul_1':
    lib/mpi/generic_mpih-mul1.c:50: undefined reference to `__multi3'
    lib/mpi/generic_mpih-mul2.o: In function `mpihelp_addmul_1':
    lib/mpi/generic_mpih-mul2.c:49: undefined reference to `__multi3'
    lib/mpi/generic_mpih-mul3.o: In function `mpihelp_submul_1':
    lib/mpi/generic_mpih-mul3.c:49: undefined reference to `__multi3'
    lib/mpi/mpih-div.o In function `mpihelp_divrem':
    lib/mpi/mpih-div.c:205: undefined reference to `__multi3'
    lib/mpi/mpih-div.c:142: undefined reference to `__multi3'

    Therefore add an efficient MIPS64r6 implementation of umul_ppmm() using
    inline assembly and the DMULU/DMUHU instructions, to prevent __multi3
    calls being emitted.
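
    A hedged sketch of such an umul_ppmm() using the R6 instructions; operand
    order and constraints are assumptions:

    #define umul_ppmm(w1, w0, u, v)                                 \
    do {                                                            \
            __asm__ ("dmulu %0, %2, %3\n\t"   /* low 64 bits  */    \
                     "dmuhu %1, %2, %3"       /* high 64 bits */    \
                     : "=&d" (w0), "=d" (w1)                        \
                     : "d" (u), "d" (v));                           \
    } while (0)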

    Fixes: 7fd08ca58ae6 ("MIPS: Add build support for the MIPS R6 ISA")
    Signed-off-by: James Hogan
    Cc: Ralf Baechle
    Cc: Herbert Xu
    Cc: "David S. Miller"
    Cc: linux-mips@linux-mips.org
    Cc: linux-crypto@vger.kernel.org
    Signed-off-by: Herbert Xu
    Signed-off-by: Sasha Levin
    Signed-off-by: Greg Kroah-Hartman

    James Hogan
     

25 Feb, 2018

1 commit

  • [ Upstream commit 8dfd2f22d3bf3ab7714f7495ad5d897b8845e8c1 ]

    Callers of sprint_oid() do not check its return value before printing
    the result. In the case where the OID is zero-length, -EBADMSG was
    being returned without anything being written to the buffer, resulting
    in uninitialized stack memory being printed. Fix this by writing
    "(bad)" to the buffer in the cases where -EBADMSG is returned.

    Fixes: 4f73175d0375 ("X.509: Add utility functions to render OIDs as strings")
    Signed-off-by: Eric Biggers
    Signed-off-by: David Howells
    Signed-off-by: Sasha Levin
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     

22 Feb, 2018

2 commits

  • commit 4675ff05de2d76d167336b368bd07f3fef6ed5a6 upstream.

    Fix up makefiles, remove references, and git rm kmemcheck.

    Link: http://lkml.kernel.org/r/20171007030159.22241-4-alexander.levin@verizon.com
    Signed-off-by: Sasha Levin
    Cc: Steven Rostedt
    Cc: Vegard Nossum
    Cc: Pekka Enberg
    Cc: Michal Hocko
    Cc: Eric W. Biederman
    Cc: Alexander Potapenko
    Cc: Tim Hansen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Levin, Alexander (Sasha Levin)
     
  • commit d0bc0c2a31c95002d37c3cc511ffdcab851b3256 upstream.

    TTM tries to allocate coherent memory in chunks of 2MB first to improve
    TLB efficiency and falls back to allocating 4K pages if that fails.

    Suppress the warning when the 2MB allocations fails since there is a
    valid fall back path.

    Signed-off-by: Christian König
    Reported-by: Mike Galbraith
    Acked-by: Konrad Rzeszutek Wilk
    Bug: https://bugs.freedesktop.org/show_bug.cgi?id=104082
    Signed-off-by: Christoph Hellwig
    Signed-off-by: Greg Kroah-Hartman

    Christian König
     

17 Feb, 2018

3 commits

  • commit 42440c1f9911b4b7b8ba3dc4e90c1197bc561211 upstream.

    UBSAN=y fails to build with new GCC/clang:

    arch/x86/kernel/head64.o: In function `sanitize_boot_params':
    arch/x86/include/asm/bootparam_utils.h:37: undefined reference to `__ubsan_handle_type_mismatch_v1'

    because Clang and GCC 8 slightly changed the ABI for 'type mismatch' errors.
    Compiler now uses new __ubsan_handle_type_mismatch_v1() function with
    slightly modified 'struct type_mismatch_data'.

    Let's add a new 'struct type_mismatch_data_common' which is independent of
    the compiler's layout of 'struct type_mismatch_data', and make the
    __ubsan_handle_type_mismatch[_v1]() functions transform the
    compiler-dependent type mismatch data to our internal representation. This
    way, we can support both old and new compilers with a minimal amount of
    change.
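
    A sketch of the compiler-independent representation described above; the
    exact field set is an assumption based on what the handlers need:

    struct type_mismatch_data_common {
            struct source_location *location;
            struct type_descriptor *type;
            unsigned long alignment;
            unsigned char type_check_kind;
    };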

    Link: http://lkml.kernel.org/r/20180119152853.16806-1-aryabinin@virtuozzo.com
    Signed-off-by: Andrey Ryabinin
    Reported-by: Sodagudi Prasad
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Andrey Ryabinin
     
  • commit b8fe1120b4ba342b4f156d24e952d6e686b20298 upstream.

    A visit from the spelling fairy.

    Cc: David Laight
    Cc: Andrey Ryabinin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Andrew Morton
     
  • commit e7c52b84fb18f08ce49b6067ae6285aca79084a8 upstream.

    We get a lot of very large stack frames using gcc-7.0.1 with the default
    -fsanitize-address-use-after-scope --param asan-stack=1 options, which can
    easily cause an overflow of the kernel stack, e.g.

    drivers/gpu/drm/i915/gvt/handlers.c:2434:1: warning: the frame size of 46176 bytes is larger than 3072 bytes
    drivers/net/wireless/ralink/rt2x00/rt2800lib.c:5650:1: warning: the frame size of 23632 bytes is larger than 3072 bytes
    lib/atomic64_test.c:250:1: warning: the frame size of 11200 bytes is larger than 3072 bytes
    drivers/gpu/drm/i915/gvt/handlers.c:2621:1: warning: the frame size of 9208 bytes is larger than 3072 bytes
    drivers/media/dvb-frontends/stv090x.c:3431:1: warning: the frame size of 6816 bytes is larger than 3072 bytes
    fs/fscache/stats.c:287:1: warning: the frame size of 6536 bytes is larger than 3072 bytes

    To reduce this risk, -fsanitize-address-use-after-scope is now split out
    into a separate CONFIG_KASAN_EXTRA Kconfig option, leading to stack
    frames that are smaller than 2 kilobytes most of the time on x86_64. An
    earlier version of this patch also prevented combining KASAN_EXTRA with
    KASAN_INLINE, but that is no longer necessary with gcc-7.0.1.

    All patches to get the frame size below 2048 bytes with CONFIG_KASAN=y
    and CONFIG_KASAN_EXTRA=n have been merged by maintainers now, so we can
    bring back that default now. KASAN_EXTRA=y still causes lots of
    warnings but now defaults to !COMPILE_TEST to disable it in
    allmodconfig, and it remains disabled in all other defconfigs since it
    is a new option. I arbitrarily raise the warning limit for KASAN_EXTRA
    to 3072 to reduce the noise, but an allmodconfig kernel still has around
    50 warnings on gcc-7.

    I experimented a bit more with smaller stack frames and have another
    follow-up series that reduces the warning limit for 64-bit architectures
    to 1280 bytes (without CONFIG_KASAN).

    With earlier versions of this patch series, I also had patches to address
    the warnings we get with KASAN and/or KASAN_EXTRA, using a
    "noinline_if_stackbloat" annotation.

    That annotation now got replaced with a gcc-8 bugfix (see
    https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715) and a workaround for
    older compilers, which means that KASAN_EXTRA is now just as bad as
    before and will lead to an instant stack overflow in a few extreme
    cases.

    This reverts parts of commit 3f181b4d8652 ("lib/Kconfig.debug: disable
    -Wframe-larger-than warnings with KASAN=y"). Two patches in linux-next
    should be merged first to avoid introducing warnings in an allmodconfig
    build:
    3cd890dbe2a4 ("media: dvb-frontends: fix i2c access helpers for KASAN")
    16c3ada89cff ("media: r820t: fix r820t_write_reg for KASAN")

    Do we really need to backport this?

    I think we do: without this patch, enabling KASAN will lead to
    unavoidable kernel stack overflow in certain device drivers when built
    with gcc-7 or higher on linux-4.10+ or any version that contains a
    backport of commit c5caf21ab0cf8. Most people are probably still on
    older compilers, but it will get worse over time as they upgrade their
    distros.

    The warnings we get on kernels older than this should all be for code
    that uses dangerously large stack frames, though most of them do not
    cause an actual stack overflow by themselves. The asan-stack option was
    added in linux-4.0, and commit 3f181b4d8652 ("lib/Kconfig.debug:
    disable -Wframe-larger-than warnings with KASAN=y") effectively turned
    off the warning for allmodconfig kernels, so I would like to see this
    fix backported to any kernels later than 4.0.

    I have done dozens of fixes for individual functions with stack frames
    larger than 2048 bytes with asan-stack, and I plan to make sure that
    all those fixes make it into the stable kernels as well (most are
    already there).

    Part of the complication here is that asan-stack (from 4.0) was
    originally assumed to always require much larger stacks, but that
    turned out to be a combination of multiple gcc bugs that we have now
    worked around and fixed, but sanitize-address-use-after-scope (from
    v4.10) has a much higher inherent stack usage and also suffers from at
    least three other problems that we have analyzed but not yet fixed
    upstream, each of them makes the stack usage more severe than it should
    be.

    Link: http://lkml.kernel.org/r/20171221134744.2295529-1-arnd@arndb.de
    Signed-off-by: Arnd Bergmann
    Acked-by: Andrey Ryabinin
    Cc: Mauro Carvalho Chehab
    Cc: Andrey Ryabinin
    Cc: Alexander Potapenko
    Cc: Dmitry Vyukov
    Cc: Andrey Konovalov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Arnd Bergmann
     

04 Feb, 2018

1 commit