05 Mar, 2020

1 commit

  • commit 4bc6b745e5cbefed92c48071e28a5f41246d0470 upstream.

    The current expedited RCU grace-period code expects that a task
    requesting an expedited grace period cannot awaken until that grace
    period has reached the wakeup phase. However, it is possible for a long
    preemption to result in the waiting task never sleeping. For example,
    consider the following sequence of events:

    1. Task A starts an expedited grace period by invoking
    synchronize_rcu_expedited(). It proceeds normally up to the
    wait_event() near the end of that function, and is then preempted
    (or interrupted or whatever).

    2. The expedited grace period completes, and a kworker task starts
    the wakeup phase, having incremented the counter and acquired
    the rcu_state structure's .exp_wake_mutex. This kworker task
    is then preempted or interrupted or whatever.

    3. Task A resumes and enters wait_event(), which notes that the
    expedited grace period has completed, and thus doesn't sleep.

    4. Task B starts an expedited grace period exactly as did Task A,
    complete with the preemption (or whatever delay) just before
    the call to wait_event().

    5. The expedited grace period completes, and another kworker
    task starts the wakeup phase, having incremented the counter.
    However, it blocks when attempting to acquire the rcu_state
    structure's .exp_wake_mutex because step 2's kworker task has
    not yet released it.

    6. Steps 4 and 5 repeat, resulting in overflow of the rcu_node
    structure's ->exp_wq[] array.

    In theory, this is harmless. Tasks waiting on the various ->exp_wq[]
    waitqueues will just be spuriously awakened, and they will simply sleep
    again upon noting that the rcu_state structure's ->expedited_sequence
    value has not advanced far enough.

    In practice, this wastes CPU time and is an accident waiting to happen.
    This commit therefore moves the rcu_exp_gp_seq_end() call that officially
    ends the expedited grace period (along with the associated tracing) to
    after the ->exp_wake_mutex has been acquired. This prevents Task A from
    awakening prematurely, which in turn prevents more than one expedited
    grace period from being in flight during a previous expedited grace
    period's wakeup phase.
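
    The ordering change can be pictured with a small user-space model (a
    sketch only: the function and field names mirror the commit text, a
    pthread mutex stands in for ->exp_wake_mutex, and an atomic counter
    stands in for ->expedited_sequence):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static pthread_mutex_t exp_wake_mutex = PTHREAD_MUTEX_INITIALIZER;
    static atomic_ulong expedited_sequence;

    /* Marks the grace period as officially ended; waiters stop sleeping. */
    static void rcu_exp_gp_seq_end_model(void)
    {
            atomic_fetch_add(&expedited_sequence, 1);
    }

    static void rcu_exp_wait_wake_model(void)
    {
            /*
             * The old order called rcu_exp_gp_seq_end_model() here, before
             * taking the mutex, which let a preempted waiter skip its sleep
             * and start the next expedited grace period while this one's
             * wakeup phase still held the mutex.
             */
            pthread_mutex_lock(&exp_wake_mutex);
            rcu_exp_gp_seq_end_model();     /* new order: end the GP only here */
            /* ... wake up the tasks waiting for this grace period ... */
            pthread_mutex_unlock(&exp_wake_mutex);
    }

    int main(void)
    {
            rcu_exp_wait_wake_model();
            printf("sequence is now %lu\n", atomic_load(&expedited_sequence));
            return 0;
    }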

    Fixes: 3b5f668e715b ("rcu: Overlap wakeups with next expedited grace period")
    Signed-off-by: Neeraj Upadhyay
    [ paulmck: Added updated comment. ]
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Greg Kroah-Hartman

    Neeraj Upadhyay
     

24 Feb, 2020

3 commits

  • [ Upstream commit 6cf539a87a61a4fbc43f625267dbcbcf283872ed ]

    This fixes a data-race where `atomic_t dynticks` is copied by value. The
    copy is performed non-atomically, resulting in a data-race if `dynticks`
    is updated concurrently.

    This data-race was found with KCSAN:
    ==================================================================
    BUG: KCSAN: data-race in dyntick_save_progress_counter / rcu_irq_enter

    write to 0xffff989dbdbe98e0 of 4 bytes by task 10 on cpu 3:
    atomic_add_return include/asm-generic/atomic-instrumented.h:78 [inline]
    rcu_dynticks_snap kernel/rcu/tree.c:310 [inline]
    dyntick_save_progress_counter+0x43/0x1b0 kernel/rcu/tree.c:984
    force_qs_rnp+0x183/0x200 kernel/rcu/tree.c:2286
    rcu_gp_fqs kernel/rcu/tree.c:1601 [inline]
    rcu_gp_fqs_loop+0x71/0x880 kernel/rcu/tree.c:1653
    rcu_gp_kthread+0x22c/0x3b0 kernel/rcu/tree.c:1799
    kthread+0x1b5/0x200 kernel/kthread.c:255

    read to 0xffff989dbdbe98e0 of 4 bytes by task 154 on cpu 7:
    rcu_nmi_enter_common kernel/rcu/tree.c:828 [inline]
    rcu_irq_enter+0xda/0x240 kernel/rcu/tree.c:870
    irq_enter+0x5/0x50 kernel/softirq.c:347

    Reported by Kernel Concurrency Sanitizer on:
    CPU: 7 PID: 154 Comm: kworker/7:1H Not tainted 5.3.0+ #5
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
    Workqueue: kblockd blk_mq_run_work_fn
    ==================================================================
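
    The underlying hazard is visible in ordinary C: the kernel's atomic_t is
    a plain structure wrapping an int, so copying it by value performs an
    unmarked load of that int, whereas atomic_read() performs an explicitly
    marked access. A rough user-space stand-in (the type and helper below
    are simplified imitations, not the kernel's definitions):

    #include <stdio.h>

    /* Simplified stand-ins for the kernel's atomic_t and atomic_read(). */
    typedef struct { int counter; } atomic_t;
    #define atomic_read(v)  (*(const volatile int *)&(v)->counter)

    struct rdp_model {
            atomic_t dynticks;      /* updated concurrently by other contexts */
    };

    static struct rdp_model rdp;

    static int snapshot_racy(struct rdp_model *p)
    {
            atomic_t copy = p->dynticks;    /* copy-by-value: plain, racy load */
            return copy.counter;
    }

    static int snapshot_marked(struct rdp_model *p)
    {
            return atomic_read(&p->dynticks);       /* explicitly marked read */
    }

    int main(void)
    {
            rdp.dynticks.counter = 42;
            printf("%d %d\n", snapshot_racy(&rdp), snapshot_marked(&rdp));
            return 0;
    }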

    Signed-off-by: Marco Elver
    Cc: Paul E. McKenney
    Cc: Josh Triplett
    Cc: Steven Rostedt
    Cc: Mathieu Desnoyers
    Cc: Joel Fernandes
    Cc: Ingo Molnar
    Cc: Dmitry Vyukov
    Cc: rcu@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Reviewed-by: Joel Fernandes (Google)
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Sasha Levin

    Marco Elver
     
  • [ Upstream commit fd6bc19d7676a060a171d1cf3dcbf6fd797eb05f ]

    Tasks waiting within exp_funnel_lock() for an expedited grace period to
    elapse can be starved due to the following sequence of events:

    1. Tasks A and B both attempt to start an expedited grace
    period at about the same time. This grace period will have
    completed when the lower four bits of the rcu_state structure's
    ->expedited_sequence field are 0b'0100', for example, when the
    initial value of this counter is zero. Task A wins, and thus
    does the actual work of starting the grace period, including
    acquiring the rcu_state structure's .exp_mutex and setting the
    counter to 0b'0001'.

    2. Because task B lost the race to start the grace period, it
    waits on ->expedited_sequence to reach 0b'0100' inside of
    exp_funnel_lock(). This task therefore blocks on the rcu_node
    structure's ->exp_wq[1] field, keeping in mind that the
    end-of-grace-period value of ->expedited_sequence (0b'0100')
    is shifted down two bits before indexing the ->exp_wq[] field.

    3. Task C attempts to start another expedited grace period,
    but blocks on ->exp_mutex, which is still held by Task A.

    4. The aforementioned expedited grace period completes, so that
    ->expedited_sequence now has the value 0b'0100'. A kworker task
    therefore acquires the rcu_state structure's ->exp_wake_mutex
    and starts awakening any tasks waiting for this grace period.

    5. One of the first tasks awakened happens to be Task A. Task A
    therefore releases the rcu_state structure's ->exp_mutex,
    which allows Task C to start the next expedited grace period,
    which causes the lower four bits of the rcu_state structure's
    ->expedited_sequence field to become 0b'0101'.

    6. Task C's expedited grace period completes, so that the lower four
    bits of the rcu_state structure's ->expedited_sequence field now
    become 0b'1000'.

    7. The kworker task from step 4 above continues its wakeups.
    Unfortunately, the wake_up_all() refetches the rcu_state
    structure's .expedited_sequence field:

    wake_up_all(&rnp->exp_wq[rcu_seq_ctr(rcu_state.expedited_sequence) & 0x3]);

    This results in the wakeup being applied to the rcu_node
    structure's ->exp_wq[2] field, which is unfortunate given that
    Task B is instead waiting on ->exp_wq[1].

    On a busy system, no harm is done (or at least no permanent harm is done).
    Some later expedited grace period will redo the wakeup. But on a quiet
    system, such as many embedded systems, it might be a good long time before
    there is another expedited grace period. On such embedded systems,
    this situation could therefore result in a system hang.

    This issue manifested as a DPM device timeout during suspend (which
    usually qualifies as a quiet time) due to a SCSI device being stuck in
    _synchronize_rcu_expedited(), with the following stack trace:

    schedule()
    synchronize_rcu_expedited()
    synchronize_rcu()
    scsi_device_quiesce()
    scsi_bus_suspend()
    dpm_run_callback()
    __device_suspend()

    This commit therefore prevents such delays, timeouts, and hangs by
    making rcu_exp_wait_wake() use its "s" argument consistently instead of
    refetching from rcu_state.expedited_sequence.
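
    The index arithmetic behind that wake_up_all() line can be checked in a
    few lines of stand-alone C (a sketch assuming the two low-order state
    bits and the four-entry ->exp_wq[] array described in the steps above):

    #include <stdio.h>

    /* The low two bits of the sequence number are state; the rest counts GPs. */
    #define RCU_SEQ_CTR_SHIFT       2

    static unsigned long rcu_seq_ctr(unsigned long s)
    {
            return s >> RCU_SEQ_CTR_SHIFT;
    }

    static unsigned int exp_wq_index(unsigned long s)
    {
            return rcu_seq_ctr(s) & 0x3;    /* four ->exp_wq[] entries per rcu_node */
    }

    int main(void)
    {
            unsigned long s = 0x4;          /* 0b'0100': snapshot at the end of Task A's GP */
            unsigned long refetched = 0x8;  /* 0b'1000': after Task C's GP also ends */

            printf("snapshot s  -> exp_wq[%u]\n", exp_wq_index(s));         /* 1 */
            printf("refetched   -> exp_wq[%u]\n", exp_wq_index(refetched)); /* 2 */
            return 0;
    }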

    Fixes: 3b5f668e715b ("rcu: Overlap wakeups with next expedited grace period")
    Signed-off-by: Neeraj Upadhyay
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Sasha Levin

    Neeraj Upadhyay
     
  • [ Upstream commit 610dea36d3083a977e4f156206cbe1eaa2a532f0 ]

    Commit 18cd8c93e69e ("rcu/nocb: Print gp/cb kthread hierarchy if
    dump_tree") added print statements to rcu_organize_nocb_kthreads for
    debugging, but incorrectly guarded them, causing the function to always
    spew out its message.

    This patch fixes it by guarding both pr_alert statements with dump_tree,
    while also changing the second pr_alert to a pr_cont so that the
    hierarchy is printed on a single line (assuming that is how it was
    supposed to work).
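
    In printk terms, the intended output shape is one header line followed
    by same-line continuations. A rough user-space analogy (dump_tree,
    pr_alert(), and pr_cont() are the real kernel names mentioned above;
    everything else here is illustrative):

    #include <stdbool.h>
    #include <stdio.h>

    static bool dump_tree;  /* stands in for the dump_tree debug option */

    /* Print one GP kthread's group of CPUs on a single line, only if asked to. */
    static void print_nocb_group(const int *cpus, int n)
    {
            if (!dump_tree)
                    return;
            printf("rcu: nocb GP kthread CPUs:");   /* pr_alert() analog */
            for (int i = 0; i < n; i++)
                    printf(" %d", cpus[i]);         /* pr_cont() analog */
            printf("\n");
    }

    int main(void)
    {
            const int cpus[] = { 0, 1, 2, 3 };

            dump_tree = true;
            print_nocb_group(cpus, 4);
            return 0;
    }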

    Fixes: 18cd8c93e69e ("rcu/nocb: Print gp/cb kthread hierarchy if dump_tree")
    Signed-off-by: Stefan Reiter
    [ paulmck: Make single-nocbs-CPU GP kthreads look less erroneous. ]
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Sasha Levin

    Stefan Reiter
     

11 Feb, 2020

4 commits

  • commit c51f83c315c392d9776c33eb16a2fe1349d65c7f upstream.

    The rcu_node structure's ->expmask field is updated only when holding the
    ->lock, but is also accessed locklessly. This means that all ->expmask
    updates must use WRITE_ONCE() and all reads carried out without holding
    ->lock must use READ_ONCE(). This commit therefore changes the lockless
    ->expmask read in rcu_read_unlock_special() to use READ_ONCE().
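
    The pattern applied here (and in the related commits below) is the same
    throughout: updates still happen under ->lock but are marked with
    WRITE_ONCE(), and any read performed without the lock uses READ_ONCE().
    A simplified user-space sketch; the one-line macros are stand-ins for
    the kernel's more elaborate definitions, and the field name merely
    echoes ->expmask:

    #include <pthread.h>

    /* Simplified stand-ins for the kernel's READ_ONCE()/WRITE_ONCE(). */
    #define READ_ONCE(x)            (*(const volatile typeof(x) *)&(x))
    #define WRITE_ONCE(x, val)      (*(volatile typeof(x) *)&(x) = (val))

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static unsigned long expmask;   /* written under lock, also read locklessly */

    static void clear_exp_bit(unsigned long mask)
    {
            pthread_mutex_lock(&lock);
            WRITE_ONCE(expmask, expmask & ~mask);   /* update is still marked */
            pthread_mutex_unlock(&lock);
    }

    static int exp_bit_set_lockless(unsigned long mask)
    {
            return (READ_ONCE(expmask) & mask) != 0;        /* lockless read is marked */
    }

    int main(void)
    {
            clear_exp_bit(0x1);
            return exp_bit_set_lockless(0x1);
    }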

    Reported-by: syzbot+99f4ddade3c22ab0cf23@syzkaller.appspotmail.com
    Signed-off-by: Paul E. McKenney
    Acked-by: Marco Elver
    Signed-off-by: Greg Kroah-Hartman

    Paul E. McKenney
     
  • commit 844a378de3372c923909681706d62336d702531e upstream.

    The ->srcu_last_gp_end field is accessed from any CPU at any time
    by synchronize_srcu(), so non-initialization references need to use
    READ_ONCE() and WRITE_ONCE(). This commit therefore makes that change.

    Reported-by: syzbot+08f3e9d26e5541e1ecf2@syzkaller.appspotmail.com
    Acked-by: Marco Elver
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Greg Kroah-Hartman

    Paul E. McKenney
     
  • commit 6935c3983b246d5fbfebd3b891c825e65c118f2d upstream.

    The rcu_gp_fqs_check_wake() function uses rcu_preempt_blocked_readers_cgp()
    to read ->gp_tasks while other CPUs might overwrite this field.

    We need READ_ONCE()/WRITE_ONCE() pairs to avoid compiler
    tricks and KCSAN splats like the following:

    BUG: KCSAN: data-race in rcu_gp_fqs_check_wake / rcu_preempt_deferred_qs_irqrestore

    write to 0xffffffff85a7f190 of 8 bytes by task 7317 on cpu 0:
    rcu_preempt_deferred_qs_irqrestore+0x43d/0x580 kernel/rcu/tree_plugin.h:507
    rcu_read_unlock_special+0xec/0x370 kernel/rcu/tree_plugin.h:659
    __rcu_read_unlock+0xcf/0xe0 kernel/rcu/tree_plugin.h:394
    rcu_read_unlock include/linux/rcupdate.h:645 [inline]
    __ip_queue_xmit+0x3b0/0xa40 net/ipv4/ip_output.c:533
    ip_queue_xmit+0x45/0x60 include/net/ip.h:236
    __tcp_transmit_skb+0xdeb/0x1cd0 net/ipv4/tcp_output.c:1158
    __tcp_send_ack+0x246/0x300 net/ipv4/tcp_output.c:3685
    tcp_send_ack+0x34/0x40 net/ipv4/tcp_output.c:3691
    tcp_cleanup_rbuf+0x130/0x360 net/ipv4/tcp.c:1575
    tcp_recvmsg+0x633/0x1a30 net/ipv4/tcp.c:2179
    inet_recvmsg+0xbb/0x250 net/ipv4/af_inet.c:838
    sock_recvmsg_nosec net/socket.c:871 [inline]
    sock_recvmsg net/socket.c:889 [inline]
    sock_recvmsg+0x92/0xb0 net/socket.c:885
    sock_read_iter+0x15f/0x1e0 net/socket.c:967
    call_read_iter include/linux/fs.h:1864 [inline]
    new_sync_read+0x389/0x4f0 fs/read_write.c:414

    read to 0xffffffff85a7f190 of 8 bytes by task 10 on cpu 1:
    rcu_gp_fqs_check_wake kernel/rcu/tree.c:1556 [inline]
    rcu_gp_fqs_check_wake+0x93/0xd0 kernel/rcu/tree.c:1546
    rcu_gp_fqs_loop+0x36c/0x580 kernel/rcu/tree.c:1611
    rcu_gp_kthread+0x143/0x220 kernel/rcu/tree.c:1768
    kthread+0x1d4/0x200 drivers/block/aoe/aoecmd.c:1253
    ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:352

    Reported by Kernel Concurrency Sanitizer on:
    CPU: 1 PID: 10 Comm: rcu_preempt Not tainted 5.3.0+ #0
    Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011

    Signed-off-by: Eric Dumazet
    Reported-by: syzbot
    [ paulmck: Added another READ_ONCE() for RCU CPU stall warnings. ]
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Greg Kroah-Hartman

    Eric Dumazet
     
  • commit 15c7c972cd26d89a26788e609c53b5a465324a6c upstream.

    The rcu_node structure's ->expmask field is accessed locklessly when
    starting a new expedited grace period and when reporting an expedited
    RCU CPU stall warning. This commit therefore handles the former by
    taking a snapshot of ->expmask while the lock is held and the latter
    by applying READ_ONCE() to lockless reads and WRITE_ONCE() to the
    corresponding updates.

    Link: https://lore.kernel.org/lkml/CANpmjNNmSOagbTpffHr4=Yedckx9Rm2NuGqC9UqE+AOz5f1-ZQ@mail.gmail.com
    Reported-by: syzbot+134336b86f728d6e55a0@syzkaller.appspotmail.com
    Signed-off-by: Paul E. McKenney
    Acked-by: Marco Elver
    Signed-off-by: Greg Kroah-Hartman

    Paul E. McKenney
     

26 Jan, 2020

1 commit

  • [ Upstream commit b8889c9c89a2655a231dfed93cc9bdca0930ea67 ]

    We never set this to false. This probably doesn't affect most people's
    runtime because GCC will automatically initialize it to false at certain
    common optimization levels. But that behavior is related to a bug in
    GCC and obviously should not be relied on.

    Fixes: 5d6742b37727 ("rcu/nocb: Use rcu_segcblist for no-CBs CPUs")
    Signed-off-by: Dan Carpenter
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Sasha Levin

    Dan Carpenter
     

17 Sep, 2019

1 commit

  • Pull scheduler updates from Ingo Molnar:

    - MAINTAINERS: Add Mark Rutland as perf submaintainer, Juri Lelli and
    Vincent Guittot as scheduler submaintainers. Add Dietmar Eggemann,
    Steven Rostedt, Ben Segall and Mel Gorman as scheduler reviewers.

    As perf and the scheduler are getting bigger and more complex,
    document the status quo of current responsibilities and interests,
    and spread the review pain^H^H^H^H fun via an increase in the Cc:
    linecount generated by scripts/get_maintainer.pl. :-)

    - Add another series of patches that brings the -rt (PREEMPT_RT) tree
    closer to mainline: split the monolithic CONFIG_PREEMPT dependencies
    into a new CONFIG_PREEMPTION category that will allow the eventual
    introduction of CONFIG_PREEMPT_RT. Still a few more hundred patches
    to go though.

    - Extend the CPU cgroup controller with uclamp.min and uclamp.max to
    allow the finer shaping of CPU bandwidth usage.

    - Micro-optimize energy-aware wake-ups from O(CPUS^2) to O(CPUS).

    - Improve the behavior of high CPU count, high thread count
    applications running under cpu.cfs_quota_us constraints.

    - Improve balancing with SCHED_IDLE (SCHED_BATCH) tasks present.

    - Improve CPU isolation housekeeping CPU allocation NUMA locality.

    - Fix deadline scheduler bandwidth calculations and logic when cpusets
    rebuilds the topology, or when it gets deadline-throttled while it's
    being offlined.

    - Convert the cpuset_mutex to percpu_rwsem, to allow it to be used from
    setscheduler() system calls without creating global serialization.
    Add new synchronization between cpuset topology-changing events and
    the deadline acceptance tests in setscheduler(), which were broken
    before.

    - Rework the active_mm state machine to be less confusing and more
    optimal.

    - Rework (simplify) the pick_next_task() slowpath.

    - Improve load-balancing on AMD EPYC systems.

    - ... and misc cleanups, smaller fixes and improvements - please see
    the Git log for more details.

    * 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (53 commits)
    sched/psi: Correct overly pessimistic size calculation
    sched/fair: Speed-up energy-aware wake-ups
    sched/uclamp: Always use 'enum uclamp_id' for clamp_id values
    sched/uclamp: Update CPU's refcount on TG's clamp changes
    sched/uclamp: Use TG's clamps to restrict TASK's clamps
    sched/uclamp: Propagate system defaults to the root group
    sched/uclamp: Propagate parent clamps
    sched/uclamp: Extend CPU's cgroup controller
    sched/topology: Improve load balancing on AMD EPYC systems
    arch, ia64: Make NUMA select SMP
    sched, perf: MAINTAINERS update, add submaintainers and reviewers
    sched/fair: Use rq_lock/unlock in online_fair_sched_group
    cpufreq: schedutil: fix equation in comment
    sched: Rework pick_next_task() slow-path
    sched: Allow put_prev_task() to drop rq->lock
    sched/fair: Expose newidle_balance()
    sched: Add task_struct pointer to sched_class::set_curr_task
    sched: Rework CPU hotplug task selection
    sched/{rt,deadline}: Fix set_next_task vs pick_next_task
    sched: Fix kerneldoc comment for ia64_set_curr_task
    ...

    Linus Torvalds
     

16 Sep, 2019

1 commit


14 Aug, 2019

29 commits

  • The bimodal behavior of rcu_do_batch() is not really suited to Google
    applications like gfe servers.

    When a process with millions of sockets exits, closing all files
    queues two rcu callbacks per socket.

    This eventually reaches the point where RCU enters an emergency
    mode, where rcu_do_batch() does not return until the whole queue is flushed.

    Each rcu callback lasts at least 70 nsec, so with millions of
    elements, we easily spend more than 100 msec without rescheduling.

    The goal of this patch is to avoid the infamous message like the following:
    "need_resched set for > 51999388 ns (52 ticks) without schedule"

    We dynamically adjust the number of elements we process: instead of
    the 10 / INFINITE choices, we use a floor of ~1% of the current entries.

    If that number is above 1000, we switch to a time-based limit of 3 msec
    per batch, adjustable with /sys/module/rcutree/parameters/rcu_resched_ns.
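
    A rough user-space sketch of that limit selection (the ~1% floor, the
    1000-callback threshold, and the 3 msec default come from the text
    above; the helper name and everything else is illustrative):

    #include <stdio.h>
    #include <time.h>

    static long rcu_resched_ns = 3 * 1000 * 1000;   /* 3 msec, sysfs-adjustable */

    /*
     * Pick the per-batch limit: roughly 1% of what is queued; once that
     * exceeds 1000 callbacks, also arm a time budget for the batch.
     */
    static long batch_limit(long qlen, long long *deadline_ns)
    {
            long limit = qlen / 100;        /* floor of ~1% of current entries */

            *deadline_ns = -1;              /* no time limit by default */
            if (limit > 1000) {
                    struct timespec now;

                    clock_gettime(CLOCK_MONOTONIC, &now);
                    *deadline_ns = (long long)now.tv_sec * 1000000000LL +
                                   now.tv_nsec + rcu_resched_ns;
            }
            return limit;
    }

    int main(void)
    {
            long long deadline;

            printf("50000 queued   -> limit %ld\n", batch_limit(50000, &deadline));
            printf("2000000 queued -> limit %ld (plus 3 msec budget)\n",
                   batch_limit(2000000, &deadline));
            return 0;
    }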

    Signed-off-by: Eric Dumazet
    [ paulmck: Forward-port and remove debug statements. ]
    Signed-off-by: Paul E. McKenney

    Eric Dumazet
     
  • When under overload conditions, __call_rcu_nocb_wake() will wake the
    no-CBs GP kthread any time the no-CBs CB kthread is asleep or there
    are no ready-to-invoke callbacks, but only after a timer delay. If the
    no-CBs GP kthread has a ->nocb_bypass_timer pending, the deferred wakeup
    from __call_rcu_nocb_wake() is redundant. This commit therefore makes
    __call_rcu_nocb_wake() avoid posting the redundant deferred wakeup if
    ->nocb_bypass_timer is pending. This requires adding a bit of ordering
    of timer actions.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Currently, __call_rcu_nocb_wake() advances callbacks each time that it
    detects excessive numbers of callbacks, though only if it succeeds in
    conditionally acquiring its leaf rcu_node structure's ->lock. Despite
    the conditional acquisition of ->lock, this does increase contention.
    This commit therefore avoids advancing callbacks unless there are
    callbacks in ->cblist whose grace period has completed and advancing
    has not yet been done during this jiffy.

    Note that this decision does not take the presence of new callbacks
    into account. That is because on this code path, there will always be
    at least one new callback, namely the one we just enqueued.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Currently, nocb_cb_wait() advances callbacks on each pass through its
    loop, though only if it succeeds in conditionally acquiring its leaf
    rcu_node structure's ->lock. Despite the conditional acquisition of
    ->lock, this does increase contention. This commit therefore avoids
    advancing callbacks unless there are callbacks in ->cblist whose grace
    period has completed.

    Note that nocb_cb_wait() doesn't worry about callbacks that have not
    yet been assigned a grace period. The idea is that the only reason for
    nocb_cb_wait() to advance callbacks is to allow it to continue invoking
    callbacks. Time will tell whether this is the correct choice.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • The rcutree_migrate_callbacks() invokes rcu_advance_cbs() on both the
    offlined CPU's ->cblist and that of the surviving CPU, then merges
    them. However, after the merge, any of the offlined CPU's callbacks
    that were not ready to be invoked will no longer be associated with a
    grace-period number. This commit therefore invokes rcu_advance_cbs()
    one more time on the merged ->cblist in order to assign a grace-period
    number to these callbacks.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • When callbacks are in full flow, the common case is waiting for a
    grace period, and this grace period will normally take a few jiffies to
    complete. It therefore isn't all that helpful for __call_rcu_nocb_wake()
    to do a synchronous wakeup in this case. This commit therefore turns this
    into a timer-based deferred wakeup of the no-CBs grace-period kthread.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • This commit causes locking, sleeping, and callback state to be printed
    for no-CBs CPUs when the rcutorture writer is delayed sufficiently for
    rcutorture to complain.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Use of the rcu_data structure's segmented ->cblist for no-CBs CPUs
    takes advantage of unrelated grace periods, thus reducing the memory
    footprint in the face of floods of call_rcu() invocations. However,
    the ->cblist field is a more-complex rcu_segcblist structure which must
    be protected via locking. Even though there are only three entities
    which can acquire this lock (the CPU invoking call_rcu(), the no-CBs
    grace-period kthread, and the no-CBs callbacks kthread), the contention
    on this lock is excessive under heavy stress.

    This commit therefore greatly reduces contention by provisioning
    an rcu_cblist structure field named ->nocb_bypass within the
    rcu_data structure. Each no-CBs CPU is permitted only a limited
    number of enqueues onto the ->cblist per jiffy, controlled by a new
    nocb_nobypass_lim_per_jiffy kernel boot parameter that defaults to
    about 16 enqueues per millisecond (16 * 1000 / HZ). When that limit is
    exceeded, the CPU instead enqueues onto the new ->nocb_bypass.

    The ->nocb_bypass is flushed into the ->cblist every jiffy or when
    the number of callbacks on ->nocb_bypass exceeds qhimark, whichever
    happens first. During call_rcu() floods, this flushing is carried out
    by the CPU during the course of its call_rcu() invocations. However,
    a CPU could simply stop invoking call_rcu() at any time. The no-CBs
    grace-period kthread therefore carries out less-aggressive flushing
    (every few jiffies or when the number of callbacks on ->nocb_bypass
    exceeds (2 * qhimark), whichever comes first). This means that the
    no-CBs grace-period kthread cannot be permitted to do unbounded waits
    while there are callbacks on ->nocb_bypass. A ->nocb_bypass_timer is
    used to provide the needed wakeups.
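
    A compressed user-space model of the enqueue-side policy just described
    (plain counters stand in for the real rcu_cblist machinery;
    nocb_nobypass_lim_per_jiffy, qhimark, and the once-per-jiffy flush are
    the knobs named above, while HZ and the arrival counts in main() are
    arbitrary):

    #include <stdio.h>

    #define HZ      250     /* pick one value for the model */
    static const long nocb_nobypass_lim_per_jiffy = 16 * 1000 / HZ;
    static const long qhimark = 10000;

    struct rdp_model {
            long cblist_len;        /* stands in for the locked ->cblist */
            long bypass_len;        /* stands in for the lockless ->nocb_bypass */
            long jiffy_enqueues;    /* call_rcu() arrivals seen this jiffy */
            unsigned long cur_jiffy;
    };

    static void flush_bypass(struct rdp_model *rdp)
    {
            rdp->cblist_len += rdp->bypass_len;     /* would take ->nocb_lock here */
            rdp->bypass_len = 0;
    }

    static void call_rcu_model(struct rdp_model *rdp, unsigned long now_jiffy)
    {
            if (now_jiffy != rdp->cur_jiffy) {      /* new jiffy: flush and reset */
                    flush_bypass(rdp);
                    rdp->jiffy_enqueues = 0;
                    rdp->cur_jiffy = now_jiffy;
            }
            if (++rdp->jiffy_enqueues <= nocb_nobypass_lim_per_jiffy) {
                    rdp->cblist_len++;              /* normal, locked enqueue */
                    return;
            }
            if (++rdp->bypass_len > qhimark)        /* over the limit: use the bypass */
                    flush_bypass(rdp);
    }

    int main(void)
    {
            struct rdp_model rdp = { 0 };

            for (unsigned long j = 0; j < 3; j++)
                    for (int i = 0; i < 200; i++)
                            call_rcu_model(&rdp, j);
            printf("cblist %ld, bypass %ld\n", rdp.cblist_len, rdp.bypass_len);
            return 0;
    }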

    [ paulmck: Apply Coverity feedback reported by Colin Ian King. ]
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Upcoming ->nocb_lock contention-reduction work requires that the
    rcu_segcblist structure's ->len field be concurrently manipulated,
    but only if there are no-CBs CPUs in the kernel. This commit
    therefore makes this ->len field be an atomic_long_t, but only
    in CONFIG_RCU_NOCB_CPU=y kernels.
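
    The resulting field definition is conditional on that same Kconfig
    option; a sketch of the shape of the change (the surrounding structure
    and helper are models, not the kernel's rcu_segcblist code):

    #include <stdatomic.h>

    #define CONFIG_RCU_NOCB_CPU 1   /* toggle to model =y versus =n builds */

    struct segcblist_model {
    #if CONFIG_RCU_NOCB_CPU
            atomic_long len;        /* may be adjusted concurrently when offloaded */
    #else
            long len;               /* no concurrency without no-CBs CPUs */
    #endif
    };

    static void segcblist_add_len(struct segcblist_model *sclp, long v)
    {
    #if CONFIG_RCU_NOCB_CPU
            atomic_fetch_add(&sclp->len, v);
    #else
            sclp->len += v;
    #endif
    }

    int main(void)
    {
            struct segcblist_model scl;

    #if CONFIG_RCU_NOCB_CPU
            atomic_init(&scl.len, 0);
    #else
            scl.len = 0;
    #endif
            segcblist_add_len(&scl, 1);
            return 0;
    }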

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • When there are excessive numbers of callbacks, and when either the
    corresponding no-CBs callback kthread is asleep or there are no more
    ready-to-invoke callbacks, and when at least one callback is pending,
    __call_rcu_nocb_wake() will advance the callbacks, but refrain from
    awakening the corresponding no-CBs grace-period kthread. However,
    because rcu_advance_cbs_nowake() is used, it is possible (if a bit
    unlikely) that the needed advancement could not happen due to a grace
    period not being in progress. Plus there will always be at least one
    pending callback due to one having just now been enqueued.

    This commit therefore attempts to advance callbacks and awakens the
    no-CBs grace-period kthread when there are excessive numbers of callbacks
    posted and when the no-CBs callback kthread is not in a position to do
    anything helpful.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • The sleep/wakeup of the no-CBs grace-period kthreads is synchronized
    using the ->nocb_lock of the first CPU corresponding to that kthread.
    This commit provides a separate ->nocb_gp_lock for this purpose, thus
    reducing contention on ->nocb_lock.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Currently, nocb_cb_wait() unconditionally acquires the leaf rcu_node
    ->lock to advance callbacks when done invoking the previous batch.
    It does this while holding ->nocb_lock, which means that contention on
    the leaf rcu_node ->lock visits itself on the ->nocb_lock. This commit
    therefore makes this lock acquisition conditional, forgoing callback
    advancement when the leaf rcu_node ->lock is not immediately available.
    (In this case, the no-CBs grace-period kthread will eventually do any
    needed callback advancement.)

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Currently, __call_rcu_nocb_wake() conditionally acquires the leaf rcu_node
    structure's ->lock, and only afterwards does rcu_advance_cbs_nowake()
    check to see if it is possible to advance callbacks without potentially
    needing to awaken the grace-period kthread. Given that the no-awaken
    check can be done locklessly, this commit reverses the order, so that
    rcu_advance_cbs_nowake() is invoked without holding the leaf rcu_node
    structure's ->lock and rcu_advance_cbs_nowake() checks the grace-period
    state before conditionally acquiring that lock, thus reducing the number
    of needless acquisitions of the leaf rcu_node structure's ->lock.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Currently, when the square root of the number of CPUs is rounded down
    by int_sqrt(), this round-down is applied to the number of callback
    kthreads per grace-period kthread. This makes almost no difference
    for large systems, but results in oddities such as three no-CBs
    grace-period kthreads for a five-CPU system, which is a bit excessive.
    This commit therefore causes the round-down to apply to the number of
    no-CBs grace-period kthreads, so that systems with from four to eight
    CPUs have only two no-CBs grace period kthreads.
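
    The arithmetic is easy to check (a sketch under the assumption that the
    new scheme takes int_sqrt() of the CPU count to get the number of no-CBs
    grace-period kthreads and then divides the CPUs evenly among them):

    #include <stdio.h>

    /* Integer square root, rounding down, like the kernel's int_sqrt(). */
    static int int_sqrt_down(int n)
    {
            int r = 0;

            while ((r + 1) * (r + 1) <= n)
                    r++;
            return r;
    }

    int main(void)
    {
            for (int ncpus = 2; ncpus <= 9; ncpus++) {
                    int ngp = int_sqrt_down(ncpus);          /* no-CBs GP kthreads */
                    int per_gp = (ncpus + ngp - 1) / ngp;    /* CB kthreads per GP kthread */

                    printf("%d CPUs -> %d GP kthreads, at most %d CB kthreads each\n",
                           ncpus, ngp, per_gp);
            }
            return 0;
    }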

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • A given rcu_data structure's ->nocb_lock can be acquired very frequently
    by the corresponding CPU and occasionally by the corresponding no-CBs
    grace-period and callbacks kthreads. In particular, these two kthreads
    will have frequent gaps between ->nocb_lock acquisitions that are roughly
    a grace period in duration. This means that any excessive ->nocb_lock
    contention will be due to the CPU's acquisitions, and this in turn
    enables a very naive contention-avoidance strategy to be quite effective.

    This commit therefore modifies rcu_nocb_lock() to first
    attempt a raw_spin_trylock(), and to atomically increment a
    separate ->nocb_lock_contended across a raw_spin_lock(). This new
    ->nocb_lock_contended field is checked in __call_rcu_nocb_wake() when
    interrupts are enabled, with a spin-wait for contending acquisitions
    to complete, thus allowing the kthreads a chance to acquire the lock.
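
    A user-space model of that contention-avoidance strategy (pthread and
    C11 atomic primitives stand in for the kernel's raw spinlock and atomics;
    the names echo ->nocb_lock and ->nocb_lock_contended, but the code below
    is only a sketch):

    #include <pthread.h>
    #include <sched.h>
    #include <stdatomic.h>

    static pthread_mutex_t nocb_lock = PTHREAD_MUTEX_INITIALIZER;
    static atomic_int nocb_lock_contended;

    /* Try the fast path first; on failure, count the contended acquisition. */
    static void nocb_lock_acquire(void)
    {
            if (pthread_mutex_trylock(&nocb_lock) == 0)
                    return;
            atomic_fetch_add(&nocb_lock_contended, 1);
            pthread_mutex_lock(&nocb_lock);
            atomic_fetch_sub(&nocb_lock_contended, 1);
    }

    static void nocb_lock_release(void)
    {
            pthread_mutex_unlock(&nocb_lock);
    }

    /* CPU-side check: back off while a kthread is contending for the lock. */
    static void wait_for_contenders(void)
    {
            while (atomic_load(&nocb_lock_contended))
                    sched_yield();  /* the text describes a spin-wait here */
    }

    int main(void)
    {
            nocb_lock_acquire();
            nocb_lock_release();
            wait_for_contenders();
            return 0;
    }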

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Currently, the code provides an extra wakeup for the no-CBs grace-period
    kthread if one of its CPUs is generating excessive numbers of callbacks.
    But satisfying though it is to wake something up when things are going
    south, unless the thing being awakened can actually help solve the
    problem, that extra wakeup does nothing but consume additional CPU time,
    which is exactly what you don't want during a call_rcu() flood.

    This commit therefore avoids doing anything if the corresponding
    no-CBs callback kthread is going full tilt. Otherwise, if advancing
    callbacks immediately might help and if the leaf rcu_node structure's
    lock is immediately available, this commit invokes a new variant of
    rcu_advance_cbs() that advances callbacks only if doing so won't require
    awakening the grace-period kthread (not to be confused with any of the
    no-CBs grace-period kthreads).

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • It might be hard to imagine having more than two billion callbacks
    queued on a single CPU's ->cblist, but someone will do it sometime.
    This commit therefore makes __call_rcu_nocb_wake() handle this situation
    by upgrading local variable "len" from "int" to "long".

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Currently, wake_nocb_gp_defer() simply stores whatever waketype was
    passed in, which can result in a RCU_NOCB_WAKE_FORCE being downgraded
    to RCU_NOCB_WAKE, which could in turn delay callback processing.
    This commit therefore adds a check so that wake_nocb_gp_defer() only
    updates ->nocb_defer_wakeup when the update increases the forcefulness,
    thus avoiding downgrades.
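
    The check amounts to a monotonic update of the deferred-wakeup state.
    A sketch (RCU_NOCB_WAKE and RCU_NOCB_WAKE_FORCE are the levels named
    above; the NOT level, the numeric ordering, and the field layout are
    assumptions of the model):

    enum { RCU_NOCB_WAKE_NOT, RCU_NOCB_WAKE, RCU_NOCB_WAKE_FORCE };

    static int nocb_defer_wakeup;   /* models the ->nocb_defer_wakeup field */

    /* Only ever increase forcefulness; never downgrade a pending FORCE to WAKE. */
    static void wake_nocb_gp_defer_model(int waketype)
    {
            if (nocb_defer_wakeup < waketype)
                    nocb_defer_wakeup = waketype;
            /* ... arm or re-arm the deferred-wakeup timer ... */
    }

    int main(void)
    {
            wake_nocb_gp_defer_model(RCU_NOCB_WAKE_FORCE);
            wake_nocb_gp_defer_model(RCU_NOCB_WAKE);        /* ignored: would downgrade */
            return nocb_defer_wakeup == RCU_NOCB_WAKE_FORCE ? 0 : 1;
    }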

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • The __call_rcu_nocb_wake() function and its predecessors set
    ->qlen_last_fqs_check to zero for the first callback and to LONG_MAX / 2
    for forced reawakenings. The former can result in a too-quick reawakening
    when there are many callbacks ready to invoke and the latter prevents a
    second reawakening. This commit therefore sets ->qlen_last_fqs_check
    to the current number of callbacks in both cases. While in the area,
    this commit also moves both assignments under ->nocb_lock.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Historically, no-CBs CPUs allowed the scheduler-clock tick to be
    unconditionally disabled on any transition to idle or nohz_full userspace
    execution (see the rcu_needs_cpu() implementations). Unfortunately,
    the checks used by rcu_needs_cpu() are defeated now that no-CBs CPUs
    use ->cblist, which might make users of battery-powered devices rather
    unhappy. This commit therefore adds explicit rcu_segcblist_is_offloaded()
    checks to return to the historical energy-efficient semantics.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Some compilers complain that wait_gp_seq might be used uninitialized
    in nocb_gp_wait(). This cannot actually happen because when wait_gp_seq
    is uninitialized, needwait_gp must be false, which prevents wait_gp_seq
    from being used. But this analysis is apparently beyond some compilers,
    so this commit adds a bogus initialization of wait_gp_seq for the sole
    purpose of suppressing the false-positive warning.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Currently, rcu_pending() invokes rcu_segcblist_is_offloaded() even
    in CONFIG_RCU_NOCB_CPU=n kernels, which cannot possibly be offloaded.
    Given that rcu_pending() is on a fastpath, it makes sense to check for
    CONFIG_RCU_NOCB_CPU=y before invoking rcu_segcblist_is_offloaded().
    This commit therefore makes this change.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Currently, rcu_core() invokes rcu_segcblist_is_offloaded() each time it
    needs to know whether the current CPU is a no-CBs CPU. Given that it is
    not possible to change the no-CBs status of a CPU after boot, and given
    that it is not possible to even have no-CBs CPUs in CONFIG_RCU_NOCB_CPU=n
    kernels, this repeated runtime invocation wastes CPU. This commit
    therefore creates a const on-stack variable to allow this check to be
    done only once per rcu_core() invocation.
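
    The shape of the change is a single const local that captures the
    boot-time-fixed answer once per invocation (a user-space sketch; the
    helper and structure here are stand-ins for rcu_segcblist_is_offloaded()
    and the rcu_data ->cblist):

    #include <stdbool.h>
    #include <stdio.h>

    #define CONFIG_RCU_NOCB_CPU_ENABLED 1   /* models IS_ENABLED(CONFIG_RCU_NOCB_CPU) */

    struct cblist_model { bool offloaded; };

    static bool segcblist_is_offloaded(const struct cblist_model *cbl)
    {
            return cbl->offloaded;
    }

    static void rcu_core_model(struct cblist_model *cbl)
    {
            /* Evaluate the boot-time-fixed no-CBs status once per invocation. */
            const bool offloaded = CONFIG_RCU_NOCB_CPU_ENABLED &&
                                   segcblist_is_offloaded(cbl);

            if (!offloaded)
                    puts("processing callbacks directly");
            /* ... later tests reuse "offloaded" instead of re-calling the helper ... */
    }

    int main(void)
    {
            struct cblist_model cbl = { .offloaded = false };

            rcu_core_model(&cbl);
            return 0;
    }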

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Currently, rcu_do_batch() invokes rcu_segcblist_is_offloaded() each time
    it needs to know whether the current CPU is a no-CBs CPU. Given that it
    is not possible to change the no-CBs status of a CPU after boot, and given
    that it is not possible to even have no-CBs CPUs in CONFIG_RCU_NOCB_CPU=n
    kernels, this per-callback invocation wastes CPU. This commit therefore
    creates a const on-stack variable to allow this check to be done only
    once per rcu_do_batch() invocation.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • This commit removes the obsolete nocb_q_count and nocb_q_count_lazy
    fields, also removing rcu_get_n_cbs_nocb_cpu(), adjusting
    rcu_get_n_cbs_cpu(), and making rcutree_migrate_callbacks() once again
    disable the ->cblist fields of offline CPUs.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Signed-off-by: Paul E. McKenney

    Paul E. McKenney