27 Jan, 2021

1 commit


25 Jan, 2021

1 commit


22 Jan, 2021

1 commit


20 Jan, 2021

10 commits

  • This is the 5.10.9 stable release

    * tag 'v5.10.9': (153 commits)
    Linux 5.10.9
    netfilter: nf_nat: Fix memleak in nf_nat_init
    netfilter: conntrack: fix reading nf_conntrack_buckets
    ...

    Signed-off-by: Jason Liu

    Jason Liu
     
  • This is the 5.10.8 stable release

    * tag 'v5.10.8': (104 commits)
    Linux 5.10.8
    tools headers UAPI: Sync linux/fscrypt.h with the kernel sources
    drm/panfrost: Remove unused variables in panfrost_job_close()
    ...

    Signed-off-by: Jason Liu

    Jason Liu
     
  • This is the 5.10.7 stable release

    * tag 'v5.10.7': (144 commits)
    Linux 5.10.7
    scsi: target: Fix XCOPY NAA identifier lookup
    rtlwifi: rise completion at the last step of firmware callback
    ...

    Signed-off-by: Jason Liu

    Jason Liu
     
  • This is the 5.10.6 stable release

    * tag 'v5.10.6': (21 commits)
    Linux 5.10.6
    mwifiex: Fix possible buffer overflows in mwifiex_cmd_802_11_ad_hoc_start
    exec: Transform exec_update_mutex into a rw_semaphore
    ...

    Signed-off-by: Jason Liu

    Conflicts:
    drivers/rtc/rtc-pcf2127.c

    Jason Liu
     
  • This is the 5.10.5 stable release

    * tag 'v5.10.5': (63 commits)
    Linux 5.10.5
    device-dax: Fix range release
    ext4: avoid s_mb_prefetch to be zero in individual scenarios
    ...

    Signed-off-by: Jason Liu

    Jason Liu
     
  • [ Upstream commit 1b04fa9900263b4e217ca2509fd778b32c2b4eb2 ]

    PowerPC testing encountered boot failures due to RCU Tasks not being
    fully initialized until core_initcall() time. This commit therefore
    initializes RCU Tasks (along with Rude RCU and RCU Tasks Trace) just
    before early_initcall() time, thus allowing waiting on RCU Tasks grace
    periods from early_initcall() handlers.
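
    As a sketch of the pattern this enables (the handler name here is
    illustrative, not from the patch), an early_initcall() may now wait on
    an RCU Tasks grace period:

        #include <linux/init.h>
        #include <linux/rcupdate.h>

        static int __init wait_on_tasks_gp_early(void)
        {
                synchronize_rcu_tasks();  /* safe: RCU Tasks is already up */
                return 0;
        }
        early_initcall(wait_on_tasks_gp_early);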

    Link: https://lore.kernel.org/rcu/87eekfh80a.fsf@dja-thinkpad.axtens.net/
    Fixes: 36dadef23fcc ("kprobes: Init kprobes in early_initcall")
    Tested-by: Daniel Axtens
    Signed-off-by: Uladzislau Rezki (Sony)
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Sasha Levin

    Uladzislau Rezki (Sony)
     
  • [ Upstream commit ee61cfd955a64a58ed35cbcfc54068fcbd486945 ]

    It adds a stub acpi_create_platform_device() for !CONFIG_ACPI builds, so
    that callers don't have to deal with !CONFIG_ACPI build issues.
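
    The usual shape of such a stub, as a sketch (the exact prototype is
    quoted from memory and may differ slightly from the header):

        #ifdef CONFIG_ACPI
        struct platform_device *
        acpi_create_platform_device(struct acpi_device *adev,
                                    const struct property_entry *properties);
        #else
        static inline struct platform_device *
        acpi_create_platform_device(struct acpi_device *adev,
                                    const struct property_entry *properties)
        {
                return NULL;    /* no-op when ACPI is compiled out */
        }
        #endif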

    Reported-by: kernel test robot
    Signed-off-by: Shawn Guo
    Signed-off-by: Rafael J. Wysocki
    Signed-off-by: Sasha Levin

    Shawn Guo
     
  • commit 9b5948267adc9e689da609eb61cf7ed49cae5fa8 upstream.

    With external metadata device, flush requests are not passed down to the
    data device.

    Fix this by submitting the flush request in dm_integrity_flush_buffers. In
    order to not degrade performance, we overlap the data device flush with
    the metadata device flush.
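
    A rough sketch of that overlap; submit_data_dev_flush_async() is a
    hypothetical helper name, only the ordering is the point:

        struct completion data_flush_done;

        init_completion(&data_flush_done);
        submit_data_dev_flush_async(ic, &data_flush_done); /* data flush starts */
        dm_bufio_write_dirty_buffers(ic->bufio);  /* metadata flush overlaps */
        wait_for_completion(&data_flush_done);    /* both done before return */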

    Reported-by: Lukas Straub
    Signed-off-by: Mikulas Patocka
    Cc: stable@vger.kernel.org
    Signed-off-by: Mike Snitzer
    Signed-off-by: Greg Kroah-Hartman

    Mikulas Patocka
     
  • commit dca5244d2f5b94f1809f0c02a549edf41ccd5493 upstream.

    GCC versions >= 4.9 and < 5.1 have been shown to emit memory references
    beyond the stack pointer, resulting in memory corruption if an interrupt
    is taken after the stack pointer has been adjusted but before the
    reference has been executed. This leads to subtle, infrequent data
    corruption such as the EXT4 problems reported by Russell King at the
    link below.

    Life is too short for buggy compilers, so raise the minimum GCC version
    required by arm64 to 5.1.
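
    The enforcement amounts to a compile-time version check along these
    lines (a sketch of the hunk, quoted from memory):

        /* include/linux/compiler-gcc.h */
        #if defined(CONFIG_ARM64) && (GCC_VERSION < 50100)
        #error Sorry, your version of GCC is too old - please use 5.1 or newer.
        #endif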

    Reported-by: Russell King
    Suggested-by: Arnd Bergmann
    Signed-off-by: Will Deacon
    Tested-by: Nathan Chancellor
    Reviewed-by: Nick Desaulniers
    Reviewed-by: Nathan Chancellor
    Acked-by: Linus Torvalds
    Cc:
    Cc: Theodore Ts'o
    Cc: Florian Weimer
    Cc: Peter Zijlstra
    Cc: Nick Desaulniers
    Link: https://lore.kernel.org/r/20210105154726.GD1551@shell.armlinux.org.uk
    Link: https://lore.kernel.org/r/20210112224832.10980-1-will@kernel.org
    Signed-off-by: Catalin Marinas
    Signed-off-by: Greg Kroah-Hartman

    Will Deacon
     
    The implementation was limiting the size of a message which can be
    received to 4 words, but some responses can be bigger. For example, the
    response of the 'sc_seco_secvio_config' API is 6 words.

    This patch removes this limitation by relying on the count of words
    received instead of the index of the chan. It does so by duplicating
    imx_scu_call_rpc as imx_scu_call_big_rpc in order to change the RX
    method, using imx_scu_big_rx_callback instead of imx_scu_rx_callback.
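
    A loose sketch of the count-based receive logic described above; the
    rx_size/expected_words fields and ipc_from_client() lookup are
    assumptions, not the driver's actual names:

        static void imx_scu_big_rx_callback(struct mbox_client *c, void *msg)
        {
                struct imx_sc_ipc *sc_ipc = ipc_from_client(c); /* hypothetical */

                /* Index by words received so far, not by channel index. */
                sc_ipc->msg[sc_ipc->rx_size++] = *(u32 *)msg;
                if (sc_ipc->rx_size == sc_ipc->expected_words)
                        complete(&sc_ipc->done);
        }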

    Signed-off-by: Franck LENORMAND

    Franck LENORMAND
     

17 Jan, 2021

2 commits

  • commit f09ced4053bc0a2094a12b60b646114c966ef4c6 upstream.

    Fix a race when multiple sockets are simultaneously calling sendto()
    when the completion ring is shared in the SKB case. This is the case
    when you share the same netdev and queue id through the
    XDP_SHARED_UMEM bind flag. The problem is that multiple processes can
    be in xsk_generic_xmit() and call the backpressure mechanism in
    xskq_prod_reserve(xs->pool->cq). As this is a shared resource in this
    specific scenario, a race might occur since the rings are
    single-producer single-consumer.

    Fix this by moving the tx_completion_lock from the socket to the pool
    as the pool is shared between the sockets that share the completion
    ring. (The pool is not shared when this is not the case.) And then
    protect the accesses to xskq_prod_reserve() with this lock. The
    tx_completion_lock is renamed cq_lock to better reflect that it
    protects accesses to the potentially shared completion ring.
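
    The resulting pattern in the transmit path looks roughly like this
    sketch, using the renamed lock:

        /* xsk_generic_xmit(): producers serialize on the shared ring */
        spin_lock_irqsave(&xs->pool->cq_lock, flags);
        if (xskq_prod_reserve(xs->pool->cq)) {
                spin_unlock_irqrestore(&xs->pool->cq_lock, flags);
                goto out;       /* ring full: apply backpressure */
        }
        spin_unlock_irqrestore(&xs->pool->cq_lock, flags);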

    Fixes: 35fcde7f8deb ("xsk: support for Tx")
    Reported-by: Xuan Zhuo
    Signed-off-by: Magnus Karlsson
    Signed-off-by: Daniel Borkmann
    Acked-by: Björn Töpel
    Link: https://lore.kernel.org/bpf/20201218134525.13119-2-magnus.karlsson@gmail.com
    Signed-off-by: Greg Kroah-Hartman

    Magnus Karlsson
     
  • commit 2ca408d9c749c32288bc28725f9f12ba30299e8f upstream.

    Commit

    121b32a58a3a ("x86/entry/32: Use IA32-specific wrappers for syscalls taking 64-bit arguments")

    converted native x86-32 syscalls which take 64-bit arguments to use the
    compat handlers to allow conversion to passing args via pt_regs.
    sys_fanotify_mark() was however missed, as it has a general compat
    handler. Add a config option that will use the syscall wrapper that
    takes the split args for native 32-bit.
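
    The wrapper reassembles the 64-bit mask from two 32-bit halves, roughly
    like this (quoted from memory; macro details may differ):

        #if defined(CONFIG_ARCH_SPLIT_ARG64)
        SYSCALL32_DEFINE6(fanotify_mark,
                        int, fanotify_fd, unsigned int, flags,
                        SC_ARG64(mask), int, dfd,
                        const char __user *, pathname)
        {
                return do_fanotify_mark(fanotify_fd, flags,
                                        SC_VAL64(__u64, mask), dfd, pathname);
        }
        #endif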

    [ bp: Fix typo in Kconfig help text. ]

    Fixes: 121b32a58a3a ("x86/entry/32: Use IA32-specific wrappers for syscalls taking 64-bit arguments")
    Reported-by: Paweł Jasiak
    Signed-off-by: Brian Gerst
    Signed-off-by: Borislav Petkov
    Acked-by: Jan Kara
    Acked-by: Andy Lutomirski
    Link: https://lkml.kernel.org/r/20201130223059.101286-1-brgerst@gmail.com
    Signed-off-by: Greg Kroah-Hartman

    Brian Gerst
     

13 Jan, 2021

8 commits

    To unify the driver interface on 8q and 8m, we need to add a new header
    file to define custom interfaces.

    Signed-off-by: Ming Qian
    Reviewed-by: Shijie Qin
    (cherry picked from commit 47c536bc846dfec04cf5eb502b895a65ecd5376d)

    Ming Qian
     
  • commit b16671e8f493e3df40b1fb0dff4078f391c5099a upstream.

    When the large bucket feature was added, BCH_FEATURE_INCOMPAT_LARGE_BUCKET
    was introduced into the incompat feature set. It used bucket_size_hi
    (which was added at the tail of struct cache_sb_disk) to extend the
    current 16bit bucket size to 32bit with the existing bucket_size in
    struct cache_sb_disk.

    This is not a good idea; there are two obvious problems:
    - Bucket size is always a power of 2. If log2(bucket size) is stored in
    the existing bucket_size of struct cache_sb_disk, it is unnecessary to
    add bucket_size_hi.
    - Macro csum_set() assumes d[SB_JOURNAL_BUCKETS] is the last member in
    struct cache_sb_disk; bucket_size_hi was added after d[], which makes
    csum_set() calculate an unexpected super block checksum.

    To fix the above problems, this patch introduces a new incompat feature
    bit BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE. When this bit is set, it
    means bucket_size in struct cache_sb_disk stores the order of the
    power-of-2 bucket size value. When the user specifies a bucket size
    larger than 32768 sectors, BCH_FEATURE_INCOMPAT_LOG_LARGE_BUCKET_SIZE
    will be set in the incompat feature set, and bucket_size stores
    log2(bucket size) rather than the real bucket size value.

    The obsoleted BCH_FEATURE_INCOMPAT_LARGE_BUCKET won't be used anymore;
    it is renamed to BCH_FEATURE_INCOMPAT_OBSO_LARGE_BUCKET and still
    recognized by the kernel driver, only for legacy compatibility purposes.
    The previous bucket_size_hi is renamed to obso_bucket_size_hi in struct
    cache_sb_disk and not used in bcache-tools anymore.

    For cache device created with BCH_FEATURE_INCOMPAT_LARGE_BUCKET feature,
    bcache-tools and kernel driver still recognize the feature string and
    display it as "obso_large_bucket".

    With this change, the unnecessary extra space extension of the bcache
    on-disk super block can be avoided, and csum_set() generates the
    expected checksum as well.
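
    Decoding then becomes conditional on the new feature bit, along these
    lines (a simplified sketch; the real helper also handles the obsolete
    bit):

        static inline unsigned int get_bucket_size(struct cache_sb *sb,
                                                   struct cache_sb_disk *s)
        {
                unsigned int bucket_size = le16_to_cpu(s->bucket_size);

                if (bch_has_feature_large_bucket(sb))
                        bucket_size = 1 << bucket_size; /* stored as log2 */

                return bucket_size;
        }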

    Fixes: ffa470327572 ("bcache: add bucket_size_hi into struct cache_sb_disk for large bucket")
    Signed-off-by: Coly Li
    Cc: stable@vger.kernel.org # 5.9+
    Signed-off-by: Jens Axboe
    Signed-off-by: Greg Kroah-Hartman

    Coly Li
     
  • commit 9ad9f45b3b91162b33abfe175ae75ab65718dbf5 upstream.

    'struct intel_svm' is shared by all devices bound to a given process,
    but records only a single pointer to a 'struct intel_iommu'. Consequently,
    cache invalidations may only be applied to a single DMAR unit, and are
    erroneously skipped for the other devices.

    In preparation for fixing this, rework the structures so that the iommu
    pointer resides in 'struct intel_svm_dev', allowing 'struct intel_svm'
    to track them in its device list.
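
    Schematically, the rework moves the pointer one level down (field
    layout here is illustrative and abridged):

        struct intel_svm_dev {
                struct list_head list;      /* linked into intel_svm->devs */
                struct intel_iommu *iommu;  /* per-device DMAR unit, moved here */
                struct device *dev;
        };

        struct intel_svm {
                /* no single iommu pointer anymore */
                struct list_head devs;      /* walk to invalidate every unit */
        };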

    Fixes: 1c4f88b7f1f9 ("iommu/vt-d: Shared virtual address in scalable mode")
    Cc: Lu Baolu
    Cc: Jacob Pan
    Cc: Raj Ashok
    Cc: David Woodhouse
    Reported-by: Guo Kaijie
    Reported-by: Xin Zeng
    Signed-off-by: Guo Kaijie
    Signed-off-by: Xin Zeng
    Signed-off-by: Liu Yi L
    Tested-by: Guo Kaijie
    Cc: stable@vger.kernel.org # v5.0+
    Acked-by: Lu Baolu
    Link: https://lore.kernel.org/r/1609949037-25291-2-git-send-email-yi.l.liu@intel.com
    Signed-off-by: Will Deacon
    Signed-off-by: Greg Kroah-Hartman

    Liu Yi L
     
  • [ Upstream commit 52abca64fd9410ea6c9a3a74eab25663b403d7da ]

    blk_queue_enter() accepts BLK_MQ_REQ_PM requests independent of the runtime
    power management state. Now that SCSI domain validation no longer depends
    on this behavior, modify the behavior of blk_queue_enter() as follows:

    - Do not accept any requests while suspended.

    - Only process power management requests while suspending or resuming.

    Submitting BLK_MQ_REQ_PM requests to a device that is runtime suspended
    causes runtime-suspended devices not to resume as they should. The request
    which should cause a runtime resume instead gets issued directly, without
    resuming the device first. Of course the device can't handle it properly,
    the I/O fails, and the device remains suspended.

    The problem is fixed by checking that the queue's runtime-PM status isn't
    RPM_SUSPENDED before allowing a request to be issued, and queuing a
    runtime-resume request if it is. In particular, the inline
    blk_pm_request_resume() routine is renamed blk_pm_resume_queue() and the
    code is unified by merging the surrounding checks into the routine. If the
    queue isn't set up for runtime PM, or there currently is no restriction on
    allowed requests, the request is allowed. Likewise if the BLK_MQ_REQ_PM
    flag is set and the status isn't RPM_SUSPENDED. Otherwise a runtime resume
    is queued and the request is blocked until conditions are more suitable.
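
    The renamed helper then encodes the rules above, roughly (a sketch;
    true means the request may be issued now):

        static bool blk_pm_resume_queue(const bool pm, struct request_queue *q)
        {
                if (!q->dev || !blk_queue_pm_only(q))
                        return true;    /* no runtime-PM restriction in force */
                if (pm && q->rpm_status != RPM_SUSPENDED)
                        return true;    /* PM request while suspending/resuming */
                pm_request_resume(q->dev);  /* kick a runtime resume ... */
                return false;               /* ... and block the request */
        }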

    [ bvanassche: modified commit message and removed Cc: stable because
    without the previous patches from this series this patch would break
    parallel SCSI domain validation + introduced queue_rpm_status() ]

    Link: https://lore.kernel.org/r/20201209052951.16136-9-bvanassche@acm.org
    Cc: Jens Axboe
    Cc: Christoph Hellwig
    Cc: Hannes Reinecke
    Cc: Can Guo
    Cc: Stanley Chu
    Cc: Ming Lei
    Cc: Rafael J. Wysocki
    Reported-and-tested-by: Martin Kepplinger
    Reviewed-by: Hannes Reinecke
    Reviewed-by: Can Guo
    Signed-off-by: Alan Stern
    Signed-off-by: Bart Van Assche
    Signed-off-by: Martin K. Petersen
    Signed-off-by: Sasha Levin

    Alan Stern
     
  • [ Upstream commit a4d34da715e3cb7e0741fe603dcd511bed067e00 ]

    Remove flag RQF_PREEMPT and BLK_MQ_REQ_PREEMPT since these are no longer
    used by any kernel code.

    Link: https://lore.kernel.org/r/20201209052951.16136-8-bvanassche@acm.org
    Cc: Can Guo
    Cc: Stanley Chu
    Cc: Alan Stern
    Cc: Ming Lei
    Cc: Rafael J. Wysocki
    Cc: Martin Kepplinger
    Reviewed-by: Christoph Hellwig
    Reviewed-by: Hannes Reinecke
    Reviewed-by: Jens Axboe
    Reviewed-by: Can Guo
    Signed-off-by: Bart Van Assche
    Signed-off-by: Martin K. Petersen
    Signed-off-by: Sasha Levin

    Bart Van Assche
     
  • [ Upstream commit 87dbc209ea04645fd2351981f09eff5d23f8e2e9 ]

    Make <asm/local64.h> mandatory in include/asm-generic/Kbuild and
    remove all arch/*/include/asm/local64.h arch-specific files since they
    only #include <asm-generic/local64.h>.

    This fixes build errors on arch/c6x/ and arch/nios2/ for
    block/blk-iocost.c.

    Build-tested on 21 of 25 arch-es. (tools problems on the others)

    Yes, we could even rename <asm-generic/local64.h> to <linux/local64.h>
    and change all #includes to use <linux/local64.h> instead.

    Link: https://lkml.kernel.org/r/20201227024446.17018-1-rdunlap@infradead.org
    Signed-off-by: Randy Dunlap
    Suggested-by: Christoph Hellwig
    Reviewed-by: Masahiro Yamada
    Cc: Jens Axboe
    Cc: Ley Foon Tan
    Cc: Mark Salter
    Cc: Aurelien Jacquiot
    Cc: Peter Zijlstra
    Cc: Arnd Bergmann
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Sasha Levin

    Randy Dunlap
     
  • [ Upstream commit 0854bcdcdec26aecdc92c303816f349ee1fba2bc ]

    Introduce the BLK_MQ_REQ_PM flag. This flag makes the request allocation
    functions set RQF_PM. This is the first step towards removing
    BLK_MQ_REQ_PREEMPT.
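
    In the request-allocation path this is a one-line translation, roughly
    (sketch of the hunk in blk_mq_rq_ctx_init()):

        if (data->flags & BLK_MQ_REQ_PM)
                rq->rq_flags |= RQF_PM;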

    Link: https://lore.kernel.org/r/20201209052951.16136-3-bvanassche@acm.org
    Cc: Alan Stern
    Cc: Stanley Chu
    Cc: Ming Lei
    Cc: Rafael J. Wysocki
    Cc: Can Guo
    Reviewed-by: Christoph Hellwig
    Reviewed-by: Hannes Reinecke
    Reviewed-by: Jens Axboe
    Reviewed-by: Can Guo
    Signed-off-by: Bart Van Assche
    Signed-off-by: Martin K. Petersen
    Signed-off-by: Sasha Levin

    Bart Van Assche
     
  • [ Upstream commit bd1248f1ddbc48b0c30565fce897a3b6423313b8 ]

    Check Scell_log shift size in red_check_params() and modify all callers
    of red_check_params() to pass Scell_log.

    This prevents a shift out-of-bounds as detected by UBSAN:
    UBSAN: shift-out-of-bounds in ./include/net/red.h:252:22
    shift exponent 72 is too large for 32-bit type 'int'
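
    The added check is a simple bounds test on the shift, along these lines
    (a sketch; surrounding parameter checks elided):

        /* include/net/red.h: reject shifts too large for a 32-bit int */
        static inline bool red_check_params(u32 qth_min, u32 qth_max,
                                            u8 Wlog, u8 Scell_log)
        {
                if (Scell_log >= 32)
                        return false;
                /* ... pre-existing qth_min/qth_max/Wlog validation ... */
                return true;
        }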

    Fixes: 8afa10cbe281 ("net_sched: red: Avoid illegal values")
    Signed-off-by: Randy Dunlap
    Reported-by: syzbot+97c5bd9cc81eca63d36e@syzkaller.appspotmail.com
    Cc: Nogah Frankel
    Cc: Jamal Hadi Salim
    Cc: Cong Wang
    Cc: Jiri Pirko
    Cc: netdev@vger.kernel.org
    Cc: "David S. Miller"
    Cc: Jakub Kicinski
    Signed-off-by: David S. Miller
    Signed-off-by: Greg Kroah-Hartman

    Randy Dunlap
     

09 Jan, 2021

5 commits

  • [ Upstream commit f7cfd871ae0c5008d94b6f66834e7845caa93c15 ]

    Recently syzbot reported[0] that there is a deadlock amongst the users
    of exec_update_mutex. The problematic lock ordering found by lockdep
    was:

    perf_event_open  (exec_update_mutex -> ovl_i_mutex)
    chown            (ovl_i_mutex       -> sb_writers)
    sendfile         (sb_writers        -> p->lock)
      by reading from a proc file and writing to overlayfs
    proc_pid_syscall (p->lock           -> exec_update_mutex)

    While looking at possible solutions it occurred to me that all of the
    users and possible users involved only wanted the state of the given
    process to remain the same. They are all readers. The only writer is
    exec.

    There is no reason for readers to block on each other. So fix
    this deadlock by transforming exec_update_mutex into a rw_semaphore
    named exec_update_lock that only exec takes for writing.
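
    After the conversion the usage splits into many readers and one writer,
    schematically (the lock lives in signal_struct):

        /* Readers (perf_event_open, proc_pid_syscall, ...) run in parallel: */
        down_read(&task->signal->exec_update_lock);
        /* ... inspect the task's exec-time state ... */
        up_read(&task->signal->exec_update_lock);

        /* The single writer is exec itself: */
        down_write(&current->signal->exec_update_lock);
        /* ... install the new mm and credentials ... */
        up_write(&current->signal->exec_update_lock);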

    Cc: Jann Horn
    Cc: Vasiliy Kulikov
    Cc: Al Viro
    Cc: Bernd Edlinger
    Cc: Oleg Nesterov
    Cc: Christopher Yeoh
    Cc: Cyrill Gorcunov
    Cc: Sargun Dhillon
    Cc: Christian Brauner
    Cc: Arnd Bergmann
    Cc: Peter Zijlstra
    Cc: Ingo Molnar
    Cc: Arnaldo Carvalho de Melo
    Fixes: eea9673250db ("exec: Add exec_update_mutex to replace cred_guard_mutex")
    [0] https://lkml.kernel.org/r/00000000000063640c05ade8e3de@google.com
    Reported-by: syzbot+db9cdf3dd1f64252c6ef@syzkaller.appspotmail.com
    Link: https://lkml.kernel.org/r/87ft4mbqen.fsf@x220.int.ebiederm.org
    Signed-off-by: Eric W. Biederman
    Signed-off-by: Sasha Levin

    Eric W. Biederman
     
  • [ Upstream commit 31784cff7ee073b34d6eddabb95e3be2880a425c ]

    In preparation for converting exec_update_mutex to a rwsem so that
    multiple readers can execute in parallel and not deadlock, add
    down_read_interruptible. This is needed for perf_event_open to be
    converted (with no semantic changes) from working on a mutex to
    working on a rwsem.
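
    Callers use it like down_read_killable(), except that any signal (not
    just a fatal one) interrupts the wait; a sketch:

        if (down_read_interruptible(&task->signal->exec_update_lock))
                return -EINTR;  /* a signal arrived before the lock was taken */
        /* ... critical section ... */
        up_read(&task->signal->exec_update_lock);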

    Signed-off-by: Eric W. Biederman
    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/87k0tybqfy.fsf@x220.int.ebiederm.org
    Signed-off-by: Sasha Levin

    Eric W. Biederman
     
  • [ Upstream commit 0f9368b5bf6db0c04afc5454b1be79022a681615 ]

    In preparation for converting exec_update_mutex to a rwsem so that
    multiple readers can execute in parallel and not deadlock, add
    down_read_killable_nested. This is needed so that kcmp_lock
    can be converted from working on mutexes to working on rw_semaphores.
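
    kcmp_lock() needs the nested variant because it takes two locks of the
    same class, in address order; a sketch of the converted function:

        static int kcmp_lock(struct rw_semaphore *l1, struct rw_semaphore *l2)
        {
                int err;

                if (l2 > l1)
                        swap(l1, l2);   /* fixed order prevents ABBA deadlock */

                err = down_read_killable(l1);
                if (!err && likely(l1 != l2)) {
                        err = down_read_killable_nested(l2, SINGLE_DEPTH_NESTING);
                        if (err)
                                up_read(l1);
                }
                return err;
        }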

    Signed-off-by: Eric W. Biederman
    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/87o8jabqh3.fsf@x220.int.ebiederm.org
    Signed-off-by: Sasha Levin

    Eric W. Biederman
     
  • [ Upstream commit 5a7a9e038b032137ae9c45d5429f18a2ffdf7d42 ]

    Use the ib_dma_* helpers to skip the DMA translation instead. This
    removes the last user of dma_virt_ops and keeps the weird layering
    violation inside the RDMA core instead of burdening the DMA mapping
    subsystems with it. This also means the software RDMA drivers now don't
    have to mess with DMA parameters that are not relevant to them at all,
    and that in the future we can use PCI P2P transfers even for software
    RDMA, as there is no first fake layer of DMA mapping that the P2P DMA
    support would have to undo.
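
    The wrappers then special-case software devices instead of installing
    dma_virt_ops; a sketch of the shape of one such helper:

        static inline u64 ib_dma_map_single(struct ib_device *dev,
                                            void *cpu_addr, size_t size,
                                            enum dma_data_direction dir)
        {
                if (ib_uses_virt_dma(dev))
                        return (uintptr_t)cpu_addr;  /* soft RDMA: no translation */
                return dma_map_single(dev->dma_device, cpu_addr, size, dir);
        }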

    Link: https://lore.kernel.org/r/20201106181941.1878556-8-hch@lst.de
    Signed-off-by: Christoph Hellwig
    Tested-by: Mike Marciniszyn
    Signed-off-by: Jason Gunthorpe
    Signed-off-by: Sasha Levin

    Christoph Hellwig
     
  • commit aa8c7db494d0a83ecae583aa193f1134ef25d506 upstream.

    Silly GCC doesn't always inline these trivial functions.

    Fixes the following warning:

    arch/x86/kernel/sys_ia32.o: warning: objtool: cp_stat64()+0xd8: call to new_encode_dev() with UACCESS enabled
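
    The fix forces inlining of the kdev_t helpers so no out-of-line call
    lands in a UACCESS region; a sketch (body quoted from memory):

        /* include/linux/kdev_t.h: __always_inline instead of plain inline */
        static __always_inline u32 new_encode_dev(dev_t dev)
        {
                unsigned major = MAJOR(dev);
                unsigned minor = MINOR(dev);

                return (minor & 0xff) | (major << 8) | ((minor & ~0xff) << 12);
        }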

    Link: https://lkml.kernel.org/r/984353b44a4484d86ba9f73884b7306232e25e30.1608737428.git.jpoimboe@redhat.com
    Signed-off-by: Josh Poimboeuf
    Reported-by: Randy Dunlap
    Acked-by: Randy Dunlap [build-tested]
    Cc: Peter Zijlstra
    Cc: Greg Kroah-Hartman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Josh Poimboeuf
     

08 Jan, 2021

1 commit


06 Jan, 2021

2 commits

  • commit a85cbe6159ffc973e5702f70a3bd5185f8f3c38d upstream.

    and include <linux/const.h> in UAPI headers instead of <linux/kernel.h>.

    The reason is to avoid an indirect <linux/sysinfo.h> include when using
    some network headers: <linux/netlink.h> or others -> <linux/kernel.h>
    -> <linux/sysinfo.h>.

    This indirect include causes, on MUSL, a redefinition of struct sysinfo
    when both <sys/sysinfo.h> and some of the UAPI headers are included:

    In file included from x86_64-buildroot-linux-musl/sysroot/usr/include/linux/kernel.h:5,
    from x86_64-buildroot-linux-musl/sysroot/usr/include/linux/netlink.h:5,
    from ../include/tst_netlink.h:14,
    from tst_crypto.c:13:
    x86_64-buildroot-linux-musl/sysroot/usr/include/linux/sysinfo.h:8:8: error: redefinition of `struct sysinfo'
    struct sysinfo {
    ^~~~~~~
    In file included from ../include/tst_safe_macros.h:15,
    from ../include/tst_test.h:93,
    from tst_crypto.c:11:
    x86_64-buildroot-linux-musl/sysroot/usr/include/sys/sysinfo.h:10:8: note: originally defined here

    Link: https://lkml.kernel.org/r/20201015190013.8901-1-petr.vorel@gmail.com
    Signed-off-by: Petr Vorel
    Suggested-by: Rich Felker
    Acked-by: Rich Felker
    Cc: Peter Korsgaard
    Cc: Baruch Siach
    Cc: Florian Weimer
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Petr Vorel
     
  • commit dc2da7b45ffe954a0090f5d0310ed7b0b37d2bd2 upstream.

    VMware observed a performance regression during memmap init on their
    platform, and bisected to commit 73a6e474cb376 ("mm: memmap_init:
    iterate over memblock regions rather that check each PFN") causing it.

    Before the commit:

    [0.033176] Normal zone: 1445888 pages used for memmap
    [0.033176] Normal zone: 89391104 pages, LIFO batch:63
    [0.035851] ACPI: PM-Timer IO Port: 0x448

    With the commit:

    [0.026874] Normal zone: 1445888 pages used for memmap
    [0.026875] Normal zone: 89391104 pages, LIFO batch:63
    [2.028450] ACPI: PM-Timer IO Port: 0x448

    The root cause is that the current memmap defer init doesn't work as
    expected.

    Before, memmap_init_zone() was used to do memmap init of one whole zone,
    to initialize all low zones of one numa node, but defer memmap init of
    the last zone in that numa node. However, since commit 73a6e474cb376,
    function memmap_init() is adapted to iterate over memblock regions
    inside one zone, then call memmap_init_zone() to do memmap init for each
    region.

    E.g., on VMware's system, the memory layout is as below; there are two
    memory regions in node 2. The current code will mistakenly initialize
    the whole 1st region [mem 0xab00000000-0xfcffffffff], then apply memmap
    defer and initialize only one memory section on the 2nd region [mem
    0x10000000000-0x1033fffffff]. In fact, we only expect a single memory
    section's memmap to be initialized eagerly. That's why so much more
    time is spent here.

    [ 0.008842] ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
    [ 0.008842] ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
    [ 0.008843] ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x55ffffffff]
    [ 0.008844] ACPI: SRAT: Node 1 PXM 1 [mem 0x5600000000-0xaaffffffff]
    [ 0.008844] ACPI: SRAT: Node 2 PXM 2 [mem 0xab00000000-0xfcffffffff]
    [ 0.008845] ACPI: SRAT: Node 2 PXM 2 [mem 0x10000000000-0x1033fffffff]

    Now, let's add a parameter 'zone_end_pfn' to memmap_init_zone() to pass
    down the real zone end pfn so that defer_init() can use it to judge
    whether deferred init should be applied zone-wide.
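
    With the new parameter, defer_init() can compare against the true zone
    end rather than the end of the current region; schematically (threshold
    logic elided):

        static inline bool defer_init(int nid, unsigned long pfn,
                                      unsigned long zone_end_pfn)
        {
                /* Anything ending below the node end is a low zone: never defer. */
                if (zone_end_pfn < pgdat_end_pfn(NODE_DATA(nid)))
                        return false;
                /* ... existing per-section threshold logic ... */
                return true;
        }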

    Link: https://lkml.kernel.org/r/20201223080811.16211-1-bhe@redhat.com
    Link: https://lkml.kernel.org/r/20201223080811.16211-2-bhe@redhat.com
    Fixes: commit 73a6e474cb376 ("mm: memmap_init: iterate over memblock regions rather that check each PFN")
    Signed-off-by: Baoquan He
    Reported-by: Rahul Gopakumar
    Reviewed-by: Mike Rapoport
    Cc: David Hildenbrand
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Baoquan He
     

04 Jan, 2021

3 commits

  • This is the 5.10.4 stable release

    * tag 'v5.10.4': (717 commits)
    Linux 5.10.4
    x86/CPU/AMD: Save AMD NodeId as cpu_die_id
    drm/edid: fix objtool warning in drm_cvt_modes()
    ...

    Signed-off-by: Jason Liu

    Conflicts:
    drivers/gpu/drm/imx/dcss/dcss-plane.c
    drivers/media/i2c/ov5640.c

    Jason Liu
     
  • This is the 5.10.3 stable release

    * tag 'v5.10.3': (41 commits)
    Linux 5.10.3
    md: fix a warning caused by a race between concurrent md_ioctl()s
    nl80211: validate key indexes for cfg80211_registered_device
    ...

    Signed-off-by: Jason Liu

    Jason Liu
     
  • This is the 5.10.2 stable release

    * tag 'v5.10.2': (17 commits)
    Linux 5.10.2
    serial: 8250_omap: Avoid FIFO corruption caused by MDR1 access
    ALSA: pcm: oss: Fix potential out-of-bounds shift
    ...

    Signed-off-by: Jason Liu

    Conflicts:
    drivers/usb/host/xhci-hub.c
    drivers/usb/host/xhci.h

    Jason Liu
     

30 Dec, 2020

6 commits

  • commit 5812b32e01c6d86ba7a84110702b46d8a8531fe9 upstream.

    Specify type alignment when declaring linker-section match-table entries
    to prevent gcc from increasing alignment and corrupting the various
    tables with padding (e.g. timers, irqchips, clocks, reserved memory).

    This is specifically needed on x86 where gcc (typically) aligns larger
    objects like struct of_device_id with static extent on 32-byte
    boundaries which at best prevents matching on anything but the first
    entry. Specifying alignment when declaring variables suppresses this
    optimisation.

    Here's a 64-bit example where all entries are corrupt as 16 bytes of
    padding has been inserted before the first entry:

    ffffffff8266b4b0 D __clk_of_table
    ffffffff8266b4c0 d __of_table_fixed_factor_clk
    ffffffff8266b5a0 d __of_table_fixed_clk
    ffffffff8266b680 d __clk_of_table_sentinel

    And here's a 32-bit example where the 8-byte-aligned table happens to be
    placed on a 32-byte boundary so that all but the first entry are corrupt
    due to the 28 bytes of padding inserted between entries:

    812b3ec0 D __irqchip_of_table
    812b3ec0 d __of_table_irqchip1
    812b3fa0 d __of_table_irqchip2
    812b4080 d __of_table_irqchip3
    812b4160 d irqchip_of_match_end

    Verified on x86 using gcc-9.3 and gcc-4.9 (which uses 64-byte
    alignment), and on arm using gcc-7.2.

    Note that there are no in-tree users of these tables on x86 currently
    (even if they are included in the image).
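
    Concretely, the declaration macros gain an explicit alignment matching
    the type, roughly (sketch; the data initializer is abridged):

        #define _OF_DECLARE(table, name, compat, fn, fn_type)            \
                static const struct of_device_id __of_table_##name       \
                        __used __section("__" #table "_of_table")        \
                        __aligned(__alignof__(struct of_device_id))      \
                        = { .compatible = compat, .data = fn }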

    Fixes: 54196ccbe0ba ("of: consolidate linker section OF match table declarations")
    Fixes: f6e916b82022 ("irqchip: add basic infrastructure")
    Cc: stable # 3.9
    Signed-off-by: Johan Hovold
    Link: https://lore.kernel.org/r/20201123102319.8090-2-johan@kernel.org
    Signed-off-by: Greg Kroah-Hartman

    Johan Hovold
     
  • commit 3dc86ca6b4c8cfcba9da7996189d1b5a358a94fc upstream.

    This commit adds a counter of pending messages for each watch in the
    struct. It is used to skip unnecessary pending messages lookup in
    'unregister_xenbus_watch()'. It could also be used in 'will_handle'
    callback.
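
    Schematically (the field name here is an assumption):

        struct xenbus_watch {
                /* ... */
                unsigned int nr_pending;  /* events queued, not yet handled */
        };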

    This is part of XSA-349

    Cc: stable@vger.kernel.org
    Signed-off-by: SeongJae Park
    Reported-by: Michael Kurth
    Reported-by: Pawel Wieczorkiewicz
    Reviewed-by: Juergen Gross
    Signed-off-by: Juergen Gross
    Signed-off-by: Greg Kroah-Hartman

    SeongJae Park
     
  • commit 2e85d32b1c865bec703ce0c962221a5e955c52c2 upstream.

    Some code does not directly create a 'xenbus_watch' object and call
    'register_xenbus_watch()' but uses 'xenbus_watch_path()' instead. This
    commit adds support for a 'will_handle' callback in
    'xenbus_watch_path()' and its wrapper, 'xenbus_watch_pathfmt()'.
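
    The helper's prototype grows the extra callback, roughly (a sketch;
    passing NULL keeps the old behaviour):

        int xenbus_watch_path(struct xenbus_device *dev, const char *path,
                              struct xenbus_watch *watch,
                              bool (*will_handle)(struct xenbus_watch *,
                                                  const char *, const char *),
                              void (*callback)(struct xenbus_watch *,
                                               const char *, const char *));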

    This is part of XSA-349

    Cc: stable@vger.kernel.org
    Signed-off-by: SeongJae Park
    Reported-by: Michael Kurth
    Reported-by: Pawel Wieczorkiewicz
    Reviewed-by: Juergen Gross
    Signed-off-by: Juergen Gross
    Signed-off-by: Greg Kroah-Hartman

    SeongJae Park
     
  • commit fed1755b118147721f2c87b37b9d66e62c39b668 upstream.

    If the handling logic for watch events is slower than the event enqueue
    logic and the events can be created by the guests, the guests could
    trigger memory pressure by intensively inducing events, because this
    creates a huge number of pending events that exhaust memory.

    Fortunately, some watch events could be ignored, depending on their
    handler callbacks. For example, if a callback has interest in only a
    single path, the watch wouldn't want multiple pending events. Or, some
    watches could ignore events for the same path.

    To let such watches voluntarily help avoid the memory pressure
    situation, this commit introduces a new watch callback, 'will_handle'.
    If it is not NULL, it will be called for each new event just before
    enqueuing it; if the callback returns false, the event will be
    discarded. No watch is using the callback for now, though.
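
    At enqueue time the check amounts to, schematically (variable names
    here are assumptions):

        /* xenbus event enqueue path: ask the watch before queueing */
        if (watch->will_handle &&
            !watch->will_handle(watch, path, token)) {
                kfree(event);   /* declined: drop instead of queueing */
                return;
        }
        list_add_tail(&event->list, &watch_events);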

    This is part of XSA-349

    Cc: stable@vger.kernel.org
    Signed-off-by: SeongJae Park
    Reported-by: Michael Kurth
    Reported-by: Pawel Wieczorkiewicz
    Reviewed-by: Juergen Gross
    Signed-off-by: Juergen Gross
    Signed-off-by: Greg Kroah-Hartman

    SeongJae Park
     
  • commit 0fb6ee8d0b5e90b72f870f76debc8bd31a742014 upstream.

    Use heap-allocated memory for the SPI transfer buffer. Using stack
    memory for DMA can corrupt the stack on some systems.

    This change moves the buffer from the stack of the trigger handler call
    into the heap-allocated state struct. The size increase takes into
    account the alignment for the timestamp, which is 8 bytes.

    The 'data' buffer is split into 'tx_buf' and 'rx_buf', to make a clearer
    separation of which part of the buffer should be used for TX & RX.
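
    The buffers end up in the state struct roughly like this (sizes taken
    from the description above; the struct is kmalloc'd, hence heap-backed):

        struct ad_sigma_delta {
                /* ... */
                /*
                 * Cacheline-aligned so DMA never scribbles over unrelated
                 * stack or struct data.
                 */
                u8 tx_buf[4] ____cacheline_aligned;
                u8 rx_buf[16];
        };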

    Fixes: af3008485ea03 ("iio:adc: Add common code for ADI Sigma Delta devices")
    Signed-off-by: Lars-Peter Clausen
    Signed-off-by: Alexandru Ardelean
    Link: https://lore.kernel.org/r/20201124123807.19717-1-alexandru.ardelean@analog.com
    Cc:
    Signed-off-by: Jonathan Cameron
    Signed-off-by: Greg Kroah-Hartman

    Lars-Peter Clausen
     
  • commit fecc4559780d52d174ea05e3bf543669165389c3 upstream.

    fsnotify_parent() used to send two separate events to backends when a
    parent inode is watching children and the child inode is also watching.
    In an attempt to avoid duplicate events in fanotify, we unified the two
    backend callbacks to a single callback and handled the reporting of the
    two separate events for the relevant backends (inotify and dnotify).
    However the handling is buggy and can result in inotify and dnotify
    listeners receiving events of the type they never asked for or spurious
    events.

    The problem is that the unified event callback with two inode marks
    (parent and child) is called when either of the parent and child inodes
    is watched and interested in the event, but the parent inode's mark
    that is interested in the event on the child is not necessarily the one
    we are currently reporting to (it could belong to a different group).

    So before reporting the parent or child event flavor to backend we need
    to check that the mark is really interested in that event flavor.
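
    In other words, each flavor is gated on the mark's own interest,
    schematically:

        /* Only report the parent flavor if this mark watches its children. */
        if (parent_mark && !(parent_mark->mask & FS_EVENT_ON_CHILD))
                parent_mark = NULL;   /* this mark never asked for child events */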

    The semantics of INODE and CHILD marks were hard to follow and made the
    logic more complicated than it should have been. Replace it with INODE
    and PARENT marks semantics to hopefully make the logic more clear.

    Thanks to Hugh Dickins for spotting a bug in the earlier version of this
    patch.

    Fixes: 497b0c5a7c06 ("fsnotify: send event to parent and child with single callback")
    CC: stable@vger.kernel.org
    Link: https://lore.kernel.org/r/20201202120713.702387-4-amir73il@gmail.com
    Reported-by: Hugh Dickins
    Signed-off-by: Amir Goldstein
    Signed-off-by: Jan Kara
    Signed-off-by: Greg Kroah-Hartman

    Amir Goldstein