13 Jan, 2021

1 commit

  • [ Upstream commit 01341fbd0d8d4e717fc1231cdffe00343088ce0b ]

    In a realtime scenario, we do not want interference on the isolated
    CPU cores, but invoking alloc_workqueue() for a percpu wq on a
    housekeeping CPU kicks a kworker on an isolated CPU.

    alloc_workqueue
      pwq_adjust_max_active
        wake_up_worker

    The comment in pwq_adjust_max_active() said:
    "Need to kick a worker after thawed or an unbound wq's
    max_active is bumped"

    So it is unnecessary to kick a kworker for a percpu wq when invoking
    alloc_workqueue(). This patch kicks a worker only on the actual
    activation of delayed works.

    Signed-off-by: Yunfeng Ye
    Reviewed-by: Lai Jiangshan
    Signed-off-by: Tejun Heo
    Signed-off-by: Sasha Levin

    Yunfeng Ye
     

16 Oct, 2020

1 commit


25 Sep, 2020

1 commit

  • This should make it harder for the kernel to corrupt the debug object
    descriptor, which is used to call functions that fix up state and track
    debug objects, by moving the structure to read-only memory.

    Signed-off-by: Stephen Boyd
    Signed-off-by: Thomas Gleixner
    Reviewed-by: Kees Cook
    Link: https://lore.kernel.org/r/20200815004027.2046113-3-swboyd@chromium.org

    Stephen Boyd
     

18 Jun, 2020

1 commit


01 Jun, 2020

1 commit


29 May, 2020

4 commits

  • There is no point in unlock()ing and then lock()ing the same mutex
    back to back.

    Signed-off-by: Lai Jiangshan
    Signed-off-by: Tejun Heo

    Lai Jiangshan
     
  • 008847f66c3 ("workqueue: allow rescuer thread to do more work.") made
    the rescuer worker requeue the pwq immediately if there may be more
    work items which need rescuing instead of waiting for the next mayday
    timer expiration. Unfortunately, it checks only whether the pool needs
    help from rescuers, but it doesn't check whether the pwq has work items
    in the pool (the real reason that this rescuer can help the pool).

    This patch adds the check and avoids the unneeded requeuing.

    Signed-off-by: Lai Jiangshan
    Signed-off-by: Tejun Heo

    Lai Jiangshan
     
  • The workqueue code has its internal spinlocks (pool::lock), which
    are acquired on most workqueue operations. These spinlocks are
    converted to 'sleeping' spinlocks on an RT kernel.

    Workqueue functions can be invoked from contexts which are truly atomic
    even on a PREEMPT_RT enabled kernel. Taking sleeping locks from such
    contexts is forbidden.

    The pool::lock hold times are bounded and the code sections are
    relatively short, which allows converting pool::lock, and as a
    consequence wq_mayday_lock, to raw spinlocks which remain truly
    spinning locks even on a PREEMPT_RT kernel.

    With the previous conversion of the manager waitqueue to a simple
    waitqueue, workqueues are now fully RT compliant.

    Signed-off-by: Sebastian Andrzej Siewior
    Reviewed-by: Lai Jiangshan
    Signed-off-by: Tejun Heo

    Sebastian Andrzej Siewior
     
  • The workqueue code has its internal spinlock (pool::lock) and also
    implicit spinlock usage in the wq_manager waitqueue. These spinlocks
    are converted to 'sleeping' spinlocks on an RT kernel.

    Workqueue functions can be invoked from contexts which are truly atomic
    even on a PREEMPT_RT enabled kernel. Taking sleeping locks from such
    contexts is forbidden.

    pool::lock can be converted to a raw spinlock as the lock held times
    are short. But the workqueue manager waitqueue is handled inside of
    pool::lock held regions which again violates the lock nesting rules
    of raw and regular spinlocks.

    The manager waitqueue has no special requirements like custom wakeup
    callbacks or mass wakeups. While it does not use exclusive wait mode
    explicitly there is no strict requirement to queue the waiters in a
    particular order as there is only one waiter at a time.

    This allows replacing the waitqueue with rcuwait, which solves the
    locking problem because rcuwait relies on existing locking.

    Signed-off-by: Sebastian Andrzej Siewior
    Signed-off-by: Tejun Heo

    Sebastian Andrzej Siewior
     

27 May, 2020

1 commit

  • The data structure member wq->rescuer was reset to a NULL pointer
    in one if branch, and the pointer was later passed to a kfree() call
    in the callback function rcu_free_wq() (which was eventually executed).
    kfree() performs no meaningful processing of a passed NULL pointer
    (besides immediately returning from the call), so delete this function
    call, which became unnecessary with the referenced software update.

    Fixes: def98c84b6cd ("workqueue: Fix spurious sanity check failures in destroy_workqueue()")

    Suggested-by: Markus Elfring
    Signed-off-by: Zhang Qiang
    Signed-off-by: Tejun Heo

    Zhang Qiang
     

11 May, 2020

1 commit


05 May, 2020

1 commit

  • Replace the inline function PTR_ERR_OR_ZERO with IS_ERR and PTR_ERR to
    remove redundant parameter definitions and checks, reducing code size.
    Before:
    text data bss dec hex filename
    47510 5979 840 54329 d439 kernel/workqueue.o
    After:
    text data bss dec hex filename
    47474 5979 840 54293 d415 kernel/workqueue.o

    Signed-off-by: Sean Fu
    Signed-off-by: Tejun Heo

    Sean Fu
     

08 Apr, 2020

1 commit

  • The kernel test robot triggered a warning with the following race:
    task-ctx A                          interrupt-ctx B
    ==========================================================
    worker
      -> process_one_work()
        -> work_item()
          -> schedule();
            -> sched_submit_work()
              -> wq_worker_sleeping()
                -> ->sleeping = 1
                   atomic_dec_and_test(nr_running)
            __schedule();               *interrupt*
                                        async_page_fault()
                                        -> local_irq_enable();
                                        -> schedule();
                                          -> sched_submit_work()
                                            -> wq_worker_sleeping()
                                              -> if (WARN_ON(->sleeping)) return
                                          -> __schedule()
                                            -> sched_update_worker()
                                              -> wq_worker_running()
                                                -> atomic_inc(nr_running);
                                                -> ->sleeping = 0;

            -> sched_update_worker()
              -> wq_worker_running()
                   if (!->sleeping) return

    In this context the warning is pointless; everything is fine.
    An interrupt before wq_worker_sleeping() will perform the ->sleeping
    assignment (0 -> 1 -> 0) twice.
    An interrupt after wq_worker_sleeping() will trigger the warning and
    nr_running will be decremented (by A) and incremented once (only by B, A
    will skip it). This is the case until the ->sleeping is zeroed again in
    wq_worker_running().

    Remove the WARN statement because this condition may happen. Document
    that preemption around wq_worker_sleeping() needs to be disabled to
    protect ->sleeping and not just as an optimisation.

    Fixes: 6d25be5782e48 ("sched/core, workqueues: Distangle worker accounting from rq lock")
    Reported-by: kernel test robot
    Signed-off-by: Sebastian Andrzej Siewior
    Signed-off-by: Peter Zijlstra (Intel)
    Signed-off-by: Ingo Molnar
    Cc: Tejun Heo
    Link: https://lkml.kernel.org/r/20200327074308.GY11705@shao2-debian

    Sebastian Andrzej Siewior
     

04 Apr, 2020

1 commit


13 Mar, 2020

1 commit

  • [ 7329.671518] BUG: KCSAN: data-race in flush_workqueue / flush_workqueue
    [ 7329.671549]
    [ 7329.671572] write to 0xffff8881f65fb250 of 8 bytes by task 37173 on cpu 2:
    [ 7329.671607] flush_workqueue+0x3bc/0x9b0 (kernel/workqueue.c:2844)
    [ 7329.672527]
    [ 7329.672540] read to 0xffff8881f65fb250 of 8 bytes by task 37175 on cpu 0:
    [ 7329.672571] flush_workqueue+0x28d/0x9b0 (kernel/workqueue.c:2835)

    Signed-off-by: Chris Wilson
    Cc: Tejun Heo
    Cc: Lai Jiangshan
    Signed-off-by: Tejun Heo

    Chris Wilson
     

10 Mar, 2020

1 commit

  • wq_select_unbound_cpu() is designed for unbound workqueues only, but
    it's wrongly called when using a bound workqueue too.

    Fixing this ensures work queued to a bound workqueue with
    cpu=WORK_CPU_UNBOUND always runs on the local CPU.

    Before, that would happen only if wq_unbound_cpumask happened to include
    it (likely almost always the case), or was empty, or we got lucky with
    forced round-robin placement. So restricting
    /sys/devices/virtual/workqueue/cpumask to a small subset of a machine's
    CPUs would cause some bound work items to run unexpectedly there.

    Fixes: ef557180447f ("workqueue: schedule WORK_CPU_UNBOUND work on wq_unbound_cpumask CPUs")
    Cc: stable@vger.kernel.org # v4.5+
    Signed-off-by: Hillf Danton
    [dj: massage changelog]
    Signed-off-by: Daniel Jordan
    Cc: Tejun Heo
    Cc: Lai Jiangshan
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Tejun Heo

    Hillf Danton
     

05 Mar, 2020

1 commit


29 Jan, 2020

1 commit

  • Pull scheduler updates from Ingo Molnar:
    "These were the main changes in this cycle:

    - More -rt motivated separation of CONFIG_PREEMPT and
    CONFIG_PREEMPTION.

    - Add more low level scheduling topology sanity checks and warnings
    to filter out nonsensical topologies that break scheduling.

    - Extend uclamp constraints to influence wakeup CPU placement

    - Make the RT scheduler more aware of asymmetric topologies and CPU
    capacities, via uclamp metrics, if CONFIG_UCLAMP_TASK=y

    - Make idle CPU selection more consistent

    - Various fixes, smaller cleanups, updates and enhancements - please
    see the git log for details"

    * 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (58 commits)
    sched/fair: Define sched_idle_cpu() only for SMP configurations
    sched/topology: Assert non-NUMA topology masks don't (partially) overlap
    idle: fix spelling mistake "iterrupts" -> "interrupts"
    sched/fair: Remove redundant call to cpufreq_update_util()
    sched/psi: create /proc/pressure and /proc/pressure/{io|memory|cpu} only when psi enabled
    sched/fair: Fix sgc->{min,max}_capacity calculation for SD_OVERLAP
    sched/fair: calculate delta runnable load only when it's needed
    sched/cputime: move rq parameter in irqtime_account_process_tick
    stop_machine: Make stop_cpus() static
    sched/debug: Reset watchdog on all CPUs while processing sysrq-t
    sched/core: Fix size of rq::uclamp initialization
    sched/uclamp: Fix a bug in propagating uclamp value in new cgroups
    sched/fair: Load balance aggressively for SCHED_IDLE CPUs
    sched/fair : Improve update_sd_pick_busiest for spare capacity case
    watchdog: Remove soft_lockup_hrtimer_cnt and related code
    sched/rt: Make RT capacity-aware
    sched/fair: Make EAS wakeup placement consider uclamp restrictions
    sched/fair: Make task_fits_capacity() consider uclamp restrictions
    sched/uclamp: Rename uclamp_util_with() into uclamp_rq_util_with()
    sched/uclamp: Make uclamp util helpers use and return UL values
    ...

    Linus Torvalds
     

16 Jan, 2020

1 commit

  • It's surprising that workqueue_execute_end includes only the work when
    its counterpart workqueue_execute_start has both the work and the worker
    function.

    You can't set a tracing filter or trigger based on the function, and
    postprocessing scripts interested in specific functions are harder to
    write since they have to remember the work from _start and match it up
    with the same field in _end.

    Add the function name, taking care to use the copy stashed in the
    worker since the work is no longer safe to touch.

    Signed-off-by: Daniel Jordan
    Cc: Tejun Heo
    Cc: Lai Jiangshan
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Tejun Heo

    Daniel Jordan
     

25 Dec, 2019

1 commit


08 Dec, 2019

1 commit

  • CONFIG_PREEMPTION is selected by CONFIG_PREEMPT and by CONFIG_PREEMPT_RT.
    Both PREEMPT and PREEMPT_RT require the same functionality which today
    depends on CONFIG_PREEMPT.

    Update the comment to use PREEMPTION because it is true for both
    preemption models.

    Signed-off-by: Sebastian Andrzej Siewior
    Signed-off-by: Thomas Gleixner
    Cc: Lai Jiangshan
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Tejun Heo
    Link: https://lore.kernel.org/r/20191015191821.11479-35-bigeasy@linutronix.de
    Signed-off-by: Ingo Molnar

    Sebastian Andrzej Siewior
     

06 Dec, 2019

1 commit

  • Use pr_warn() instead of the remaining pr_warning() calls.

    Link: http://lkml.kernel.org/r/20191128004752.35268-2-wangkefeng.wang@huawei.com
    To: joe@perches.com
    To: linux-kernel@vger.kernel.org
    Cc: gregkh@linuxfoundation.org
    Cc: tj@kernel.org
    Cc: arnd@arndb.de
    Cc: sergey.senozhatsky@gmail.com
    Cc: rostedt@goodmis.org
    Signed-off-by: Kefeng Wang
    Acked-by: Tejun Heo
    Signed-off-by: Petr Mladek

    Kefeng Wang
     

27 Nov, 2019

1 commit

  • Pull RCU updates from Ingo Molnar:
    "The main changes in this cycle were:

    - Dynamic tick (nohz) updates, perhaps most notably changes to force
    the tick on when needed due to lengthy in-kernel execution on CPUs
    on which RCU is waiting.

    - Linux-kernel memory consistency model updates.

    - Replace rcu_swap_protected() with rcu_replace_pointer().

    - Torture-test updates.

    - Documentation updates.

    - Miscellaneous fixes"

    * 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (51 commits)
    security/safesetid: Replace rcu_swap_protected() with rcu_replace_pointer()
    net/sched: Replace rcu_swap_protected() with rcu_replace_pointer()
    net/netfilter: Replace rcu_swap_protected() with rcu_replace_pointer()
    net/core: Replace rcu_swap_protected() with rcu_replace_pointer()
    bpf/cgroup: Replace rcu_swap_protected() with rcu_replace_pointer()
    fs/afs: Replace rcu_swap_protected() with rcu_replace_pointer()
    drivers/scsi: Replace rcu_swap_protected() with rcu_replace_pointer()
    drm/i915: Replace rcu_swap_protected() with rcu_replace_pointer()
    x86/kvm/pmu: Replace rcu_swap_protected() with rcu_replace_pointer()
    rcu: Upgrade rcu_swap_protected() to rcu_replace_pointer()
    rcu: Suppress levelspread uninitialized messages
    rcu: Fix uninitialized variable in nocb_gp_wait()
    rcu: Update descriptions for rcu_future_grace_period tracepoint
    rcu: Update descriptions for rcu_nocb_wake tracepoint
    rcu: Remove obsolete descriptions for rcu_barrier tracepoint
    rcu: Ensure that ->rcu_urgent_qs is set before resched IPI
    workqueue: Convert for_each_wq to use built-in list check
    rcu: Several rcu_segcblist functions can be static
    rcu: Remove unused function hlist_bl_del_init_rcu()
    Documentation: Rename rcu_node_context_switch() to rcu_note_context_switch()
    ...

    Linus Torvalds
     

16 Nov, 2019

1 commit

  • An additional check has recently been added to ensure that an RCU-related
    lock is held while the RCU list is iterated.
    The `pwqs' are sometimes iterated without an RCU lock but with &wq->mutex
    acquired, leading to a warning.

    Teach list_for_each_entry_rcu() that the RCU usage is okay if &wq->mutex
    is acquired during the list traversal.

    Fixes: 28875945ba98d ("rcu: Add support for consolidated-RCU reader checking")
    Signed-off-by: Sebastian Andrzej Siewior
    Signed-off-by: Tejun Heo

    Sebastian Andrzej Siewior
     

30 Oct, 2019

1 commit

  • Because list_for_each_entry_rcu() can now check for holding a
    lock as well as for being in an RCU read-side critical section,
    this commit replaces the workqueue_sysfs_unregister() function's
    use of assert_rcu_or_wq_mutex() and list_for_each_entry_rcu() with
    list_for_each_entry_rcu() augmented with a lockdep_is_held() optional
    argument.

    Acked-by: Tejun Heo
    Signed-off-by: Joel Fernandes (Google)
    Signed-off-by: Paul E. McKenney

    Joel Fernandes (Google)
     

05 Oct, 2019

2 commits

  • 008847f66c3 ("workqueue: allow rescuer thread to do more work.") made
    the rescuer worker requeue the pwq immediately if there may be more
    work items which need rescuing instead of waiting for the next mayday
    timer expiration. Unfortunately, it doesn't check whether the pwq is
    already on the mayday list and unconditionally gets the ref and moves
    it onto the list. This doesn't corrupt the list but creates an
    additional reference to the pwq. It got queued twice but will only be
    removed once.

    This leak can later trigger a pwq refcnt warning on workqueue
    destruction and prevent freeing of the workqueue.

    Signed-off-by: Tejun Heo
    Cc: "Williams, Gerald S"
    Cc: NeilBrown
    Cc: stable@vger.kernel.org # v3.19+

    Tejun Heo
     
  • destroy_workqueue() warnings still trigger spuriously, albeit at a
    lower frequency. The problem seems to be in-flight operations which
    haven't reached put_pwq() yet.

    * Make the sanity check grab all the related locks so that it's
    synchronized against operations which put the pwq at the end.

    * Always print out the offending pwq.

    Signed-off-by: Tejun Heo
    Cc: "Williams, Gerald S"

    Tejun Heo
     

21 Sep, 2019

2 commits


19 Sep, 2019

1 commit

  • Before actually destroying a workqueue, destroy_workqueue() checks
    whether it's actually idle. If it isn't, it prints out a bunch of
    warning messages and leaves the workqueue dangling. It unfortunately
    has a couple issues.

    * Mayday list queueing increments the pwq's refcnt, which gets detected
    as busy and fails the sanity checks. However, because mayday list
    queueing is asynchronous, this condition can happen without any
    actual work items left in the workqueue.

    * Sanity check failure leaves the sysfs interface behind too which can
    lead to init failure of newer instances of the workqueue.

    This patch fixes the above two by

    * If a workqueue has a rescuer, disable and kill the rescuer before
    sanity checks. Disabling and killing is guaranteed to flush the
    existing mayday list.

    * Remove sysfs interface before sanity checks.

    Signed-off-by: Tejun Heo
    Reported-by: Marcin Pawlowski
    Reported-by: "Williams, Gerald S"
    Cc: stable@vger.kernel.org

    Tejun Heo
     

13 Sep, 2019

2 commits

  • Change the calling convention for apply_workqueue_attrs to require CPU
    hotplug read exclusion.

    This avoids lockdep complaints about nested calls to get_online_cpus in a
    future patch where padata calls apply_workqueue_attrs when changing
    other CPU-hotplug-sensitive data structures with the CPU read lock
    already held.

    Signed-off-by: Daniel Jordan
    Acked-by: Tejun Heo
    Acked-by: Steffen Klassert
    Cc: Herbert Xu
    Cc: Lai Jiangshan
    Cc: Peter Zijlstra
    Cc: linux-crypto@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Herbert Xu

    Daniel Jordan
     
  • padata will use these interfaces in a later patch, so unconfine them.

    Signed-off-by: Daniel Jordan
    Acked-by: Tejun Heo
    Acked-by: Steffen Klassert
    Cc: Herbert Xu
    Cc: Lai Jiangshan
    Cc: Peter Zijlstra
    Cc: linux-crypto@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Herbert Xu

    Daniel Jordan
     

28 Jun, 2019

2 commits


21 May, 2019

1 commit

  • Add SPDX license identifiers to all files which:

    - Have no license information of any form

    - Have EXPORT_.*_SYMBOL_GPL inside which was used in the
    initial scan/conversion to ignore the file

    These files fall under the project license, GPL v2 only. The resulting SPDX
    license identifier is:

    GPL-2.0-only

    Signed-off-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

10 May, 2019

1 commit

  • Pull workqueue updates from Tejun Heo:
    "Only three commits, of which two are trivial.

    The non-trivial change is Thomas's patch to switch workqueue from
    sched RCU to regular one. The use of sched RCU is mostly historic and
    doesn't really buy us anything noticeable"

    * 'for-5.2' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq:
    workqueue: Use normal rcu
    kernel/workqueue: Document wq_worker_last_func() argument
    kernel/workqueue: Use __printf markup to silence compiler in function 'alloc_workqueue'

    Linus Torvalds
     

08 May, 2019

1 commit

  • Pull printk updates from Petr Mladek:

    - Allow state reset of printk_once() calls.

    - Prevent crashes when dereferencing invalid pointers in vsprintf().
    Only the first byte is checked for simplicity.

    - Make vsprintf warnings consistent and inlined.

    - Treewide conversion of obsolete %pf, %pF to %ps, %pS printf
    modifiers.

    - Some clean up of vsprintf and test_printf code.

    * tag 'printk-for-5.2' of git://git.kernel.org/pub/scm/linux/kernel/git/pmladek/printk:
    lib/vsprintf: Make function pointer_string static
    vsprintf: Limit the length of inlined error messages
    vsprintf: Avoid confusion between invalid address and value
    vsprintf: Prevent crash when dereferencing invalid pointers
    vsprintf: Consolidate handling of unknown pointer specifiers
    vsprintf: Factor out %pO handler as kobject_string()
    vsprintf: Factor out %pV handler as va_format()
    vsprintf: Factor out %p[iI] handler as ip_addr_string()
    vsprintf: Do not check address of well-known strings
    vsprintf: Consistent %pK handling for kptr_restrict == 0
    vsprintf: Shuffle restricted_pointer()
    printk: Tie printk_once / printk_deferred_once into .data.once for reset
    treewide: Switch printk users from %pf and %pF to %ps and %pS, respectively
    lib/test_printf: Switch to bitmap_zalloc()

    Linus Torvalds
     

16 Apr, 2019

1 commit

  • The worker accounting for CPU bound workers is plugged into the core
    scheduler code and the wakeup code. This is not a hard requirement and
    can be avoided by keeping track of the state in the workqueue code
    itself.

    Keep track of the sleeping state in the worker itself and call the
    notifier before entering the core scheduler. There might be false
    positives when the task is woken between that call and actually
    scheduling, but that's not really different from scheduling and being
    woken immediately after switching away. When nr_running is updated as
    the task is returning from schedule(), it is later compared when the
    update is done from ttwu().

    [ bigeasy: preempt_disable() around wq_worker_sleeping() by Daniel Bristot de Oliveira ]

    Signed-off-by: Thomas Gleixner
    Signed-off-by: Sebastian Andrzej Siewior
    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Tejun Heo
    Cc: Daniel Bristot de Oliveira
    Cc: Lai Jiangshan
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Link: http://lkml.kernel.org/r/ad2b29b5715f970bffc1a7026cabd6ff0b24076a.1532952814.git.bristot@redhat.com
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
     

09 Apr, 2019

2 commits

  • %pF and %pf are functionally equivalent to %pS and %ps conversion
    specifiers. The former are deprecated, therefore switch the current users
    to use the preferred variant.

    The changes have been produced by the following command:

    git grep -l '%p[fF]' | grep -v '^\(tools\|Documentation\)/' | \
    while read i; do perl -i -pe 's/%pf/%ps/g; s/%pF/%pS/g;' $i; done

    And verifying the result.

    Link: http://lkml.kernel.org/r/20190325193229.23390-1-sakari.ailus@linux.intel.com
    Cc: Andy Shevchenko
    Cc: linux-arm-kernel@lists.infradead.org
    Cc: sparclinux@vger.kernel.org
    Cc: linux-um@lists.infradead.org
    Cc: xen-devel@lists.xenproject.org
    Cc: linux-acpi@vger.kernel.org
    Cc: linux-pm@vger.kernel.org
    Cc: drbd-dev@lists.linbit.com
    Cc: linux-block@vger.kernel.org
    Cc: linux-mmc@vger.kernel.org
    Cc: linux-nvdimm@lists.01.org
    Cc: linux-pci@vger.kernel.org
    Cc: linux-scsi@vger.kernel.org
    Cc: linux-btrfs@vger.kernel.org
    Cc: linux-f2fs-devel@lists.sourceforge.net
    Cc: linux-mm@kvack.org
    Cc: ceph-devel@vger.kernel.org
    Cc: netdev@vger.kernel.org
    Signed-off-by: Sakari Ailus
    Acked-by: David Sterba (for btrfs)
    Acked-by: Mike Rapoport (for mm/memblock.c)
    Acked-by: Bjorn Helgaas (for drivers/pci)
    Acked-by: Rafael J. Wysocki
    Signed-off-by: Petr Mladek

    Sakari Ailus
     
  • There is no need for sched_rcu. The undocumented reason why sched_rcu
    is used is to avoid a few explicit rcu_read_lock()/unlock() pairs by
    the fact that sched_rcu reader side critical sections are also protected
    by preempt or irq disabled regions.

    Replace rcu_read_lock_sched with rcu_read_lock and acquire the RCU lock
    where it is not yet explicitly acquired. Replace local_irq_disable() with
    rcu_read_lock(). Update asserts.

    Signed-off-by: Thomas Gleixner
    [bigeasy: mangle changelog a little]
    Signed-off-by: Sebastian Andrzej Siewior
    Signed-off-by: Tejun Heo

    Thomas Gleixner