18 Dec, 2010

1 commit

  • The fix in commit #6a0cc49 requires more than three concurrent instances
    of synchronize_sched_expedited() before batching is possible. This
    patch uses a ticket-counter-like approach that is also not unrelated to
    Lai Jiangshan's Ring RCU to allow sharing of expedited grace periods even
    when there are only two concurrent instances of synchronize_sched_expedited().

    This commit builds on Tejun's original posting, which may be found at
    http://lkml.org/lkml/2010/11/9/204, adding memory barriers, avoiding
    overflow of signed integers (other than via atomic_t), and fixing the
    detection of batching.
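
    For illustration, a much-simplified sketch of the ticket-counter idea;
    run_expedited_grace_period() is a hypothetical stand-in for the
    try_stop_cpus() machinery, and UINT_CMP_GE() is the wraparound-safe
    comparison used to avoid signed-integer overflow:

        #define UINT_CMP_GE(a, b)       (UINT_MAX / 2 >= (a) - (b))

        static atomic_t sync_expedited_started = ATOMIC_INIT(0);
        static atomic_t sync_expedited_done = ATOMIC_INIT(0);

        void synchronize_sched_expedited(void)
        {
                int snap, s;

                /* Take a ticket; atomic_inc_return() implies full barriers. */
                snap = atomic_inc_return(&sync_expedited_started);
                while (!run_expedited_grace_period()) {
                        /* Did a concurrent caller's grace period cover us? */
                        s = atomic_read(&sync_expedited_done);
                        if (UINT_CMP_GE((unsigned)s, (unsigned)snap)) {
                                smp_mb(); /* order check before caller frees */
                                return;
                        }
                }
                /* We ran a grace period: advance "done" past our ticket. */
                do {
                        s = atomic_read(&sync_expedited_done);
                        if (UINT_CMP_GE((unsigned)s, (unsigned)snap))
                                break;
                } while (atomic_cmpxchg(&sync_expedited_done, s, snap) != s);
        }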

    Signed-off-by: Tejun Heo
    Signed-off-by: Paul E. McKenney

    Tejun Heo
     

30 Nov, 2010

4 commits

  • The new (early 2010) implementation of synchronize_sched_expedited() uses
    try_stop_cpus() to force a context switch on every CPU. It also permits
    concurrent calls to synchronize_sched_expedited() to share a single call
    to try_stop_cpus() through use of an atomically incremented
    synchronize_sched_expedited_count variable. Unfortunately, this is
    subject to failure as follows:

    o Task A invokes synchronize_sched_expedited(), try_stop_cpus()
    succeeds, but Task A is preempted before getting to the atomic
    increment of synchronize_sched_expedited_count.

    o Task B also invokes synchronize_sched_expedited(), with exactly
    the same outcome as Task A.

    o Task C also invokes synchronize_sched_expedited(), again with
    exactly the same outcome as Tasks A and B.

    o Task D also invokes synchronize_sched_expedited(), but only
    gets as far as acquiring the mutex within try_stop_cpus()
    before being preempted, interrupted, or otherwise delayed.

    o Task E also invokes synchronize_sched_expedited(), but only
    gets to the snapshotting of synchronize_sched_expedited_count.

    o Tasks A, B, and C all increment synchronize_sched_expedited_count.

    o Task E fails to get the mutex, so checks the new value
    of synchronize_sched_expedited_count. It finds that the
    value has increased, so (wrongly) assumes that its work
    has been done, returning despite there having been no
    expedited grace period since it began.

    The solution is to have the lowest-numbered CPU atomically increment
    the synchronize_sched_expedited_count variable within the
    synchronize_sched_expedited_cpu_stop() function, which is under
    the protection of the mutex acquired by try_stop_cpus(). However, this
    also requires that piggybacking tasks wait for three rather than two
    instances of try_stop_cpus(), because we cannot control the order in
    which the per-CPU callback functions occur.
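
    A sketch of the resulting callback, which try_stop_cpus() runs on
    every online CPU (simplified from the actual patch):

        static int synchronize_sched_expedited_cpu_stop(void *data)
        {
                /* Each CPU's pass through here is the forced context
                 * switch; full barriers order it against the caller. */
                smp_mb();

                /* The stop-machine mutex is held, so electing the
                 * lowest-numbered online CPU makes the count advance
                 * exactly once per expedited grace period. */
                if (cpumask_first(cpu_online_mask) == smp_processor_id())
                        atomic_inc(&synchronize_sched_expedited_count);
                return 0;
        }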

    Cc: Tejun Heo
    Cc: Lai Jiangshan
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Lai's RCU-callback immediate-adoption patch changes the RCU tracing
    output, so update tracing.txt. Also update a few comments to clarify
    the synchronization design.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • When we handle the CPU_DYING notifier, the whole system is stopped except
    for the current CPU. We therefore need no synchronization with the other
    CPUs. This allows us to move any orphaned RCU callbacks directly to the
    list of any online CPU without needing to run them through the global
    orphan lists. These global orphan lists can therefore be dispensed with.
    This commit makes these changes, though currently victimizes CPU 0.

    Signed-off-by: Lai Jiangshan
    Signed-off-by: Paul E. McKenney

    Lai Jiangshan
     
  • The first version of synchronize_sched_expedited() used the migration
    code in the scheduler, and was therefore implemented in kernel/sched.c.
    However, the more recent version of this code no longer uses the
    migration code, so this commit moves it to the main RCU source files.

    Signed-off-by: Lai Jiangshan
    Signed-off-by: Paul E. McKenney

    Lai Jiangshan
     

03 Sep, 2010

1 commit

  • CONFIG_RCU_CPU_STALL_VERBOSE depends on CONFIG_TREE_PREEMPT_RCU, but
    rcu_bootup_announce_oddness() complains if CONFIG_RCU_CPU_STALL_VERBOSE
    is not set even in the case of CONFIG_TREE_RCU. This commit therefore
    fixes rcu_bootup_announce_oddness() to avoid insisting on impossibilities.
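
    A sketch of the corrected guard (the message text is illustrative):

        static void __init rcu_bootup_announce_oddness(void)
        {
                /* ... other oddity reports ... */
        #if defined(CONFIG_TREE_PREEMPT_RCU) && !defined(CONFIG_RCU_CPU_STALL_VERBOSE)
                /* Complain only when the option could have been enabled. */
                printk(KERN_INFO "\tVerbose stalled-CPUs detection is disabled.\n");
        #endif
        }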

    Reported-by: Guy Martin
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     

21 Aug, 2010

3 commits

  • Replace one of the ACCESS_ONCE() calls in each of __rcu_read_lock()
    and __rcu_read_unlock() with barrier() as suggested by Steve Rostedt in
    order to avoid the potential compiler-optimization-induced bug noted by
    Mathieu Desnoyers.
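
    A simplified sketch of the result; barrier() is a pure compiler
    barrier, which suffices here because only the running task touches
    its own nesting count:

        void __rcu_read_lock(void)
        {
                current->rcu_read_lock_nesting++;
                barrier();  /* keep the critical section after the increment */
        }

        void __rcu_read_unlock(void)
        {
                struct task_struct *t = current;

                barrier();  /* keep the critical section before the decrement */
                --t->rcu_read_lock_nesting;
                if (t->rcu_read_lock_nesting == 0 &&
                    unlikely(ACCESS_ONCE(t->rcu_read_unlock_special)))
                        rcu_read_unlock_special(t);
        }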

    Located-by: Mathieu Desnoyers
    Suggested-by: Steven Rostedt
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • The CONFIG_PREEMPT_RCU kernel configuration parameter was recently
    re-introduced, but as an indication of the type of RCU (preemptible
    vs. non-preemptible) instead of as selecting a given implementation.
    This commit uses CONFIG_PREEMPT_RCU to combine duplicate code
    from include/linux/rcutiny.h and include/linux/rcutree.h into
    include/linux/rcupdate.h. This commit also combines a few other pieces
    of duplicate code that have accumulated.
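
    For example, the read-side declarations shared by both implementations
    can now appear once in include/linux/rcupdate.h, roughly as follows
    (a sketch, not the exact diff):

        #ifdef CONFIG_PREEMPT_RCU
        /* Preemptible RCU: real functions track read-side nesting. */
        extern void __rcu_read_lock(void);
        extern void __rcu_read_unlock(void);
        #else /* #ifdef CONFIG_PREEMPT_RCU */
        /* Non-preemptible RCU: disabled preemption marks the reader. */
        static inline void __rcu_read_lock(void)
        {
                preempt_disable();
        }
        static inline void __rcu_read_unlock(void)
        {
                preempt_enable();
        }
        #endif /* #else #ifdef CONFIG_PREEMPT_RCU */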

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • When using a kernel debugger, a long sojourn in the debugger can get
    you lots of RCU CPU stall warnings once you resume. This might not be
    helpful, especially if you are using the system console. This patch
    therefore allows RCU CPU stall warnings to be suppressed, but only for
    the duration of the current set of grace periods.

    This differs from Jason's original patch in that it adds support for
    tiny RCU and preemptible RCU, and uses a slightly different method for
    suppressing the RCU CPU stall warning messages.
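
    A sketch of the suppression hook that the debugger path can invoke on
    resume; the ->jiffies_stall field name follows TREE_RCU usage, and
    rcu_preempt_state exists only in TREE_PREEMPT_RCU builds:

        void rcu_cpu_stall_reset(void)
        {
                /* Push each flavor's stall deadline far enough out that
                 * the current set of grace periods cannot warn; later
                 * grace periods recompute the deadline normally. */
                rcu_sched_state.jiffies_stall = jiffies + ULONG_MAX / 2;
                rcu_bh_state.jiffies_stall = jiffies + ULONG_MAX / 2;
                rcu_preempt_state.jiffies_stall = jiffies + ULONG_MAX / 2;
        }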

    Signed-off-by: Jason Wessel
    Signed-off-by: Paul E. McKenney
    Tested-by: Jason Wessel

    Paul E. McKenney
     

20 Aug, 2010

2 commits

  • Make it explicit that new RCU read-side critical sections that begin
    after call_rcu() or synchronize_rcu() starts might still be running
    after the end of the relevant grace period.

    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     
  • A pointer to statically allocated per-CPU data is compatible with
    dynamically allocated per-CPU data, so the "->rda[NR_CPUS]" array of
    pointers can be replaced by a single per-CPU pointer, saving significant
    storage on systems with large numbers of CPUs. This does add an
    additional level of indirection and thus an additional cache line
    referenced, but because ->rda is not used on the read side, this is OK.
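
    A sketch of the data-structure change (abbreviated):

        struct rcu_state {
                struct rcu_node node[NUM_RCU_NODES];    /* the hierarchy */
                struct rcu_data __percpu *rda;          /* was: *rda[NR_CPUS] */
                /* ... */
        };

        /* Update-side code reaches a given CPU's rcu_data through the
         * per-CPU pointer: one extra indirection, but off the read path. */
        struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu);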

    Signed-off-by: Lai Jiangshan
    Reviewed-by: Tejun Heo
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Lai Jiangshan
     

12 May, 2010

1 commit


11 May, 2010

5 commits

  • Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • The current version of RCU_FAST_NO_HZ reproduces the old CLASSIC_RCU
    dyntick-idle bug, as it fails to detect CPUs that have interrupted
    or NMIed out of dyntick-idle mode. Fix this by making rcu_needs_cpu()
    check the state in the per-CPU rcu_dynticks variables, thus correctly
    detecting the dyntick-idle state from an RCU perspective.
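
    A sketch of the corrected check inside rcu_needs_cpu(); an odd
    ->dynticks value marks a CPU that has interrupted or NMIed out of
    dyntick-idle mode (names follow that era's code):

        for_each_online_cpu(thatcpu) {
                if (thatcpu == cpu)
                        continue;
                snap = per_cpu(rcu_dynticks, thatcpu).dynticks;
                smp_mb();  /* order sampling against grace-period checks */
                if ((snap & 0x1) != 0)
                        /* Another CPU is non-idle from RCU's viewpoint,
                         * so normal grace-period processing will run. */
                        return rcu_needs_cpu_quick_check(cpu);
        }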

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Print boot-time messages if tracing is enabled, if fanout is set
    to non-default values, if exact fanout is specified, if accelerated
    dyntick-idle grace periods have been enabled, if RCU-lockdep is enabled,
    if rcutorture has been boot-time enabled, if the CPU stall detector has
    been disabled, or if four-level hierarchy has been enabled.

    This is all for TREE_RCU and TREE_PREEMPT_RCU. TINY_RCU will be handled
    separately, if at all.

    Suggested-by: Josh Triplett
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • The addition of preemptible RCU to treercu resulted in a bit of
    confusion and inefficiency surrounding the handling of context switches
    for RCU-sched and for RCU-preempt. For RCU-sched, a context switch
    is a quiescent state, pure and simple, just like it always has been.
    For RCU-preempt, a context switch is in no way a quiescent state, but
    special handling is required when a task blocks in an RCU read-side
    critical section.

    However, the callout from the scheduler and the outer loop in ksoftirqd
    still call something named rcu_sched_qs(), whose name is no longer
    accurate. Furthermore, when rcu_check_callbacks() notes an RCU-sched
    quiescent state, it ends up unnecessarily (though harmlessly, aside
    from the performance hit) enqueuing the current task if it happens to
    be running in an RCU-preempt read-side critical section. This not only
    increases the maximum latency of scheduler_tick(), it also needlessly
    increases the overhead of the next outermost rcu_read_unlock() invocation.

    This patch addresses this situation by separating the notion of RCU's
    context-switch handling from that of RCU-sched's quiescent states.
    The context-switch handling is covered by rcu_note_context_switch() in
    general and by rcu_preempt_note_context_switch() for preemptible RCU.
    This permits rcu_sched_qs() to handle quiescent states and only quiescent
    states. It also reduces the maximum latency of scheduler_tick(), though
    probably by much less than a microsecond. Finally, it means that tasks
    within preemptible-RCU read-side critical sections avoid incurring the
    overhead of queuing unless there really is a context switch.
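
    The resulting split is essentially:

        void rcu_note_context_switch(int cpu)
        {
                rcu_sched_qs(cpu);                      /* quiescent state only */
                rcu_preempt_note_context_switch(cpu);   /* blocked-reader handling */
        }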

    Suggested-by: Lai Jiangshan
    Acked-by: Lai Jiangshan
    Signed-off-by: Paul E. McKenney
    Cc: Ingo Molnar
    Cc: Peter Zijlstra

    Paul E. McKenney
     
  • Offline CPUs are not in nohz_cpu_mask, but can be ignored when checking
    for the last non-dyntick-idle CPU. This patch therefore only checks
    online CPUs for not being dyntick idle, allowing fast entry into
    full-system dyntick-idle state even when there are some offline CPUs.

    Signed-off-by: Lai Jiangshan
    Signed-off-by: Paul E. McKenney

    Lai Jiangshan
     

28 Feb, 2010

1 commit

  • Make the holdoff happen only after the full number of attempts has
    been made.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

27 Feb, 2010

2 commits

  • This patch disables irqs across the call to rcu_needs_cpu(). It
    also enforces a hold-off period so that the idle loop doesn't
    softirq itself to death when there are lots of RCU callbacks in
    flight on the last non-dynticked CPU.
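
    A sketch of the hold-off logic (per-CPU variable name assumed):

        /* Entry: still holding off after a recent failed attempt? */
        if (per_cpu(rcu_dyntick_holdoff, cpu) == jiffies)
                return rcu_needs_cpu_quick_check(cpu);

        /* ...and on giving up, start the holdoff: */
        per_cpu(rcu_dyntick_holdoff, cpu) = jiffies;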

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • It is invalid to invoke __rcu_process_callbacks() with irqs
    disabled, so do it indirectly via raise_softirq(). This
    requires a state-machine implementation to cycle through the
    grace-period machinery the required number of times.
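
    One step of that state machine looks roughly like this; the
    grace-period machinery is kicked directly, but callback invocation is
    deferred to softirq context:

        /* Push the remaining RCU-sched callbacks one step forward. */
        if (per_cpu(rcu_sched_data, cpu).nxtlist) {
                rcu_sched_qs(cpu);
                force_quiescent_state(&rcu_sched_state, 0);
        }

        /* Never __rcu_process_callbacks() here with irqs disabled;
         * let the softirq handler run once irqs are enabled again. */
        if (per_cpu(rcu_sched_data, cpu).nxtlist)
                raise_softirq(RCU_SOFTIRQ);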

    Located-by: Ingo Molnar
    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

25 Feb, 2010

4 commits

  • When RCU detects a grace-period stall, it currently just prints
    out the PID of any tasks doing the stalling. This patch adds
    RCU_CPU_STALL_VERBOSE, which enables the more-verbose reporting
    from sched_show_task().
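
    A sketch of the per-rcu_node verbose pass, using the
    rcu_preempted_readers() helper described in the 24 Sep, 2009 entry
    below:

        static void rcu_print_detail_task_stall_rnp(struct rcu_node *rnp)
        {
                struct task_struct *t;

                if (!rcu_preempted_readers(rnp))
                        return;
                list_for_each_entry(t, &rnp->blocked_tasks[rnp->gpnum & 0x1],
                                    rcu_node_entry)
                        sched_show_task(t);  /* full state, not just the PID */
        }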

    Suggested-by: Thomas Gleixner
    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • Under TREE_PREEMPT_RCU, print_other_cpu_stall() invokes
    rcu_print_task_stall() with the root rcu_node structure's ->lock
    held, and rcu_print_task_stall() then acquires that same lock,
    resulting in self-deadlock. Fix this by removing the lock acquisition
    from rcu_print_task_stall(), and making all callers acquire the lock
    instead.

    Tested-by: John Kacur
    Tested-by: Thomas Gleixner
    Located-by: Thomas Gleixner
    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • The spinlocks in rcutree need to be real spinlocks in
    preempt-rt. Convert them to raw_spinlocks.
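
    The conversion pattern, for one field and one call site:

        struct rcu_node {
                raw_spinlock_t lock;    /* was: spinlock_t lock; */
                /* ... */
        };

        /* raw_ variants keep spinning even where preempt-rt turns
         * ordinary spinlocks into sleeping locks: */
        raw_spin_lock_irqsave(&rnp->lock, flags);
        /* ... update rcu_node state ... */
        raw_spin_unlock_irqrestore(&rnp->lock, flags);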

    Signed-off-by: Thomas Gleixner
    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • Currently, rcu_needs_cpu() simply checks whether the current CPU
    has an outstanding RCU callback, which means that the last CPU
    to go into dyntick-idle mode might wait a few ticks for the
    relevant grace periods to complete. However, if all the other
    CPUs are in dyntick-idle mode, and if this CPU is in a quiescent
    state (which it is for RCU-bh and RCU-sched any time that we are
    considering going into dyntick-idle mode), then the grace period
    is instantly complete.

    This patch therefore repeatedly invokes the RCU grace-period
    machinery in order to force any needed grace periods to complete
    quickly. It does so a limited number of times in order to
    prevent starvation by an RCU callback function that might pass
    itself to call_rcu().

    However, if any CPU other than the current one is not in
    dyntick-idle mode, this patch falls back to simply checking for
    callbacks (fixing a bug noted by Lai Jiangshan). It also takes
    advantage of the last grace-period forcing (an opportunity noted by
    Steve Rostedt) and applies a simplified #ifdef condition suggested
    by Frederic Weisbecker.
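
    A sketch of the bounded retry logic (names assumed;
    RCU_NEEDS_CPU_FLUSHES caps the attempts):

        if (per_cpu(rcu_dyntick_drain, cpu) <= 0) {
                /* First pass: initialize the bounded attempt budget. */
                per_cpu(rcu_dyntick_drain, cpu) = RCU_NEEDS_CPU_FLUSHES;
        } else if (--per_cpu(rcu_dyntick_drain, cpu) <= 0) {
                /* Budget exhausted: give up rather than let a callback
                 * that reposts itself via call_rcu() starve this CPU. */
                return rcu_needs_cpu_quick_check(cpu);
        }
        /* Otherwise, do one step of grace-period forcing, as in the
         * 27 Feb sketch above. */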

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

13 Jan, 2010

2 commits

  • TREE_PREEMPT_RCU maintains an rcu_read_lock_nesting counter in
    the task structure, which happens to be a signed int. So this
    patch adds a check for this counter being negative at the end of
    __rcu_read_unlock(). This check is under CONFIG_PROVE_LOCKING,
    so can be thought of as being part of lockdep.
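
    The added check, at the end of __rcu_read_unlock():

        #ifdef CONFIG_PROVE_LOCKING
                WARN_ON_ONCE(ACCESS_ONCE(t->rcu_read_lock_nesting) < 0);
        #endif /* #ifdef CONFIG_PROVE_LOCKING */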

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • Add force_quiescent_state() testing to rcutorture, with a
    separate thread that repeatedly invokes force_quiescent_state()
    in bursts. This can greatly increase the probability of
    encountering certain types of race conditions.
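
    A sketch of the burst loop; fqs_duration and fqs_holdoff are assumed
    module parameters giving burst length and intra-burst spacing in
    microseconds, and cur_ops->fqs() is the flavor's
    force_quiescent_state():

        static int rcu_torture_fqs(void *arg)
        {
                int burst_remaining;

                do {
                        burst_remaining = fqs_duration;
                        while (burst_remaining > 0) {
                                cur_ops->fqs();       /* hammer the GP machinery */
                                udelay(fqs_holdoff);
                                burst_remaining -= fqs_holdoff;
                        }
                        schedule_timeout_interruptible(HZ);  /* between bursts */
                } while (!kthread_should_stop());
                return 0;
        }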

    Suggested-by: Josh Triplett
    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

03 Dec, 2009

2 commits

  • Implement a synchronize_rcu_expedited() for preemptible RCU
    that actually is expedited. This uses
    synchronize_sched_expedited() to force all threads currently
    running in a preemptible-RCU read-side critical section onto the
    appropriate ->blocked_tasks[] list, then takes a snapshot of all
    of these lists and waits for them to drain.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • The number of "quiet" functions has grown recently, and the
    names are no longer very descriptive. The point of all of these
    functions is to do some portion of the task of reporting a
    quiescent state, so rename them accordingly:

    o cpu_quiet() becomes rcu_report_qs_rdp(), which reports a
    quiescent state to the per-CPU rcu_data structure. If this
    turns out to be a new quiescent state for this grace period,
    then rcu_report_qs_rnp() will be invoked to propagate the
    quiescent state up the rcu_node hierarchy.

    o cpu_quiet_msk() becomes rcu_report_qs_rnp(), which reports
    a quiescent state for a given CPU (or possibly a set of CPUs)
    up the rcu_node hierarchy.

    o cpu_quiet_msk_finish() becomes rcu_report_qs_rsp(), which
    reports a full set of quiescent states to the global rcu_state
    structure.

    o task_quiet() becomes rcu_report_unblock_qs_rnp(), which reports
    a quiescent state due to a task exiting an RCU read-side critical
    section that had previously blocked in that same critical section.
    As indicated by the new name, this type of quiescent state is
    reported up the rcu_node hierarchy (using rcu_report_qs_rnp()
    to do so).

    Signed-off-by: Paul E. McKenney
    Acked-by: Josh Triplett
    Acked-by: Lai Jiangshan
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

23 Nov, 2009

2 commits

  • Remove #ifdefs from kernel/rcupdate.c and
    include/linux/rcupdate.h by moving code to
    include/linux/rcutiny.h, include/linux/rcutree.h, and
    kernel/rcutree.c.

    Also remove some definitions that are no longer used.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • When the last CPU of a given leaf rcu_node structure goes
    offline, all of the tasks queued on that leaf rcu_node structure
    (due to having blocked in their current RCU read-side critical
    sections) are requeued onto the root rcu_node structure. This
    requeuing is carried out by rcu_preempt_offline_tasks().
    However, it is possible that these queued tasks are the only
    thing preventing the leaf rcu_node structure from reporting a
    quiescent state up the rcu_node hierarchy. Unfortunately, the
    old code would fail to do this reporting, resulting in a
    grace-period stall given the following sequence of events:

    1. Kernel built for more than 32 CPUs on 32-bit systems or for more
    than 64 CPUs on 64-bit systems, so that there is more than one
    rcu_node structure. (Or CONFIG_RCU_FANOUT is artificially set
    to a number smaller than CONFIG_NR_CPUS.)

    2. The kernel is built with CONFIG_TREE_PREEMPT_RCU.

    3. A task running on a CPU associated with a given leaf rcu_node
    structure blocks while in an RCU read-side critical section
    -and- that CPU has not yet passed through a quiescent state
    for the current RCU grace period. This will cause the task
    to be queued on the leaf rcu_node's blocked_tasks[] array, in
    particular, on the element of this array corresponding to the
    current grace period.

    4. Each of the remaining CPUs corresponding to this same leaf rcu_node
    structure passes through a quiescent state. However, the task is
    still in its RCU read-side critical section, so these quiescent
    states cannot be reported further up the rcu_node hierarchy.
    Nevertheless, all bits in the leaf rcu_node structure's ->qsmask
    field are now zero.

    5. Each of the remaining CPUs goes offline. (The events in steps
    #4 and #5 can happen in any order as long as each CPU passes
    through a quiescent state before going offline.)

    6. When the last CPU goes offline, __rcu_offline_cpu() will invoke
    rcu_preempt_offline_tasks(), which will move the task to the
    root rcu_node structure, but without reporting a quiescent state
    up the rcu_node hierarchy (and this failure to report a quiescent
    state is the bug).

    But because this leaf rcu_node structure's ->qsmask field is
    already zero and its ->blocked_tasks[] entries are all empty,
    force_quiescent_state() will skip this rcu_node structure.

    Therefore, grace periods are now hung.

    This patch abstracts some code out of rcu_read_unlock_special(),
    calling the result task_quiet() by analogy with cpu_quiet(), and
    invokes task_quiet() from both rcu_read_unlock_special() and
    __rcu_offline_cpu(). Invoking task_quiet() from
    __rcu_offline_cpu() reports the quiescent state up the rcu_node
    hierarchy, fixing the bug. This ends up requiring a separate
    lock_class_key per level of the rcu_node hierarchy, which this
    patch also provides.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

12 Nov, 2009

1 commit

  • Because rcu_bootup_announce() is used only at boot time, mark it
    as __init so that its memory can be reclaimed once boot completes.

    Suggested-by: Joe Perches
    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

11 Nov, 2009

2 commits

  • The rdp->passed_quiesc_completed fields are used to properly
    associate the recorded quiescent state with a grace period. It
    is OK to wrongly associate a given quiescent state with a
    preceding grace period, but it is fatal to associate a given
    quiescent state with a grace period that begins after the
    quiescent state occurred. Grace periods are numbered, and the
    following fields track them:

    o ->gpnum is the number of the grace period currently in
    progress, or the number of the last grace period to
    complete if no grace period is currently in progress.

    o ->completed is the number of the last grace period to
    have completed.

    These two fields are equal if there is no grace period in
    progress, otherwise ->gpnum is one greater than ->completed.
    But the rdp->passed_quiesc_completed field is compared against
    ->completed, and if equal, the quiescent state is presumed to
    count against the current grace period.

    The earlier code copied rdp->completed to
    rdp->passed_quiesc_completed, which can be made to work, but is
    error-prone. In contrast, copying one less than rdp->gpnum is
    guaranteed safe, because rdp->gpnum is not incremented until
    after the start of the corresponding grace period. At the end of
    the grace period, when ->completed has been incremented, any
    quiescent states recorded previously will be discarded.
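
    The recording step in rcu_sched_qs() and friends thus becomes:

        rdp->passed_quiesc_completed = rdp->gpnum - 1;  /* never a later GP */
        barrier();  /* order the stores as seen by this CPU's softirq */
        rdp->passed_quiesc = 1;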

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • Some variants of gcc are reputed to dislike forward references
    to functions declared "inline". Remove the "inline" keyword
    from such functions.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

16 Oct, 2009

1 commit

  • If the following sequence of events occurs, then
    TREE_PREEMPT_RCU will hang waiting for a grace period to
    complete, eventually OOMing the system:

    o A TREE_PREEMPT_RCU build of the kernel is booted on a system
    with more than 64 physical CPUs present (32 on a 32-bit system).
    Alternatively, a TREE_PREEMPT_RCU build of the kernel is booted
    with RCU_FANOUT set to a sufficiently small value that the
    physical CPUs populate two or more leaf rcu_node structures.

    o A task is preempted in an RCU read-side critical section
    while running on a CPU corresponding to a given leaf rcu_node
    structure.

    o All CPUs corresponding to this same leaf rcu_node structure
    record quiescent states for the current grace period.

    o All of these same CPUs go offline (hence the need for enough
    physical CPUs to populate more than one leaf rcu_node structure).
    This causes the preempted task to be moved to the root rcu_node
    structure.

    At this point, there is nothing left to cause the quiescent
    state to be propagated up the rcu_node tree, so the current
    grace period never completes.

    The simplest fix, especially after considering the deadlock
    possibilities, is to detect this situation when the last CPU is
    offlined, and to set that CPU's ->qsmask bit in its leaf
    rcu_node structure. This will cause the next invocation of
    force_quiescent_state() to end the grace period.
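
    A sketch of the offline-path fix, assuming rcu_preempt_offline_tasks()
    is made to return nonzero when the tasks it moves were blocking the
    current grace period:

        if (rcu_preempt_offline_tasks(rsp, rnp, rdp))
                /* Leave the outgoing CPU's bit set so that the next
                 * force_quiescent_state() will end the grace period. */
                rnp->qsmask |= mask;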

    Without this fix, this hang can be triggered in an hour or so on
    some machines with rcutorture and random CPU onlining/offlining.
    With this fix, these same machines pass a full 10 hours of this
    sort of abuse.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

15 Oct, 2009

1 commit

  • For the short term, map synchronize_rcu_expedited() to
    synchronize_rcu() for TREE_PREEMPT_RCU and to
    synchronize_sched_expedited() for TREE_RCU.

    Longer term, there needs to be a real expedited grace period for
    TREE_PREEMPT_RCU, but candidate patches to date are considerably
    more complex and intrusive.
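
    A sketch of the stopgap mapping:

        #ifdef CONFIG_TREE_PREEMPT_RCU
        static inline void synchronize_rcu_expedited(void)
        {
                synchronize_rcu();              /* not actually expedited yet */
        }
        #else /* #ifdef CONFIG_TREE_PREEMPT_RCU */
        static inline void synchronize_rcu_expedited(void)
        {
                synchronize_sched_expedited();  /* TREE_RCU: sched GP suffices */
        }
        #endif /* #else #ifdef CONFIG_TREE_PREEMPT_RCU */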

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    Cc: npiggin@suse.de
    Cc: jens.axboe@oracle.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

07 Oct, 2009

1 commit

  • The current interaction between RCU and CPU hotplug requires that
    RCU block in CPU notifiers waiting for callbacks to drain.

    This can be greatly simplified by having each CPU relinquish its
    own callbacks, and for both _rcu_barrier() and CPU_DEAD notifiers
    to adopt all callbacks that were previously relinquished.

    This change also eliminates the possibility of certain types of
    hangs due to the previous practice of waiting for callbacks to be
    invoked from within CPU notifiers. If you don't ever wait, you
    cannot hang.
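
    Sketches of the resulting hand-off pair (bodies elided; the point is
    who calls what, and that neither side waits):

        /* Called from the CPU_DYING notifier on the outgoing CPU. */
        static void rcu_send_cbs_to_orphanage(struct rcu_state *rsp)
        {
                /* Append this CPU's callbacks to rsp's orphan list. */
        }

        /* Called from the CPU_DEAD notifier and from _rcu_barrier(). */
        static void rcu_adopt_orphan_cbs(struct rcu_state *rsp)
        {
                /* Splice the orphan list onto this CPU's callback list. */
        }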

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: akpm@linux-foundation.org
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

06 Oct, 2009

1 commit

  • These issues were identified during an old-fashioned face-to-face code
    review extending over many hours. This group improves an existing
    abstraction and introduces two new ones. It also fixes an RCU
    stall-warning bug found while making the other changes.

    o Make RCU_INIT_FLAVOR() declare its own variables, removing
    the need to declare them at each call site.

    o Create an rcu_for_each_leaf() macro that scans the leaf
    nodes of the rcu_node tree.

    o Create an rcu_for_each_node_breadth_first() macro that does
    a breadth-first traversal of the rcu_node tree, AKA
    stepping through the array in index-number order. (Both new
    macros are sketched after this list.)

    o If all CPUs corresponding to a given leaf rcu_node
    structure go offline, then any tasks queued on that leaf
    will be moved to the root rcu_node structure. Therefore,
    the stall-warning code must dump out tasks queued on the
    root rcu_node structure as well as those queued on the leaf
    rcu_node structures.
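
    Because each level of the rcu_node array is contiguous, with the
    leaves last, both traversals reduce to pointer loops; a sketch
    (bounds names assumed):

        #define rcu_for_each_node_breadth_first(rsp, rnp) \
                for ((rnp) = &(rsp)->node[0]; \
                     (rnp) < &(rsp)->node[NUM_RCU_NODES]; (rnp)++)

        #define rcu_for_each_leaf(rsp, rnp) \
                for ((rnp) = (rsp)->level[NUM_RCU_LVLS - 1]; \
                     (rnp) < &(rsp)->node[NUM_RCU_NODES]; (rnp)++)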

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: akpm@linux-foundation.org
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

24 Sep, 2009

2 commits

  • These issues were identified during an old-fashioned face-to-face code
    review extending over many hours.

    o Add comments for tricky parts of code, and correct comments
    that have passed their sell-by date.

    o Get rid of the vestiges of rcu_init_sched(), which is no
    longer needed now that PREEMPT_RCU is gone.

    o Move the #include of rcutree_plugin.h to the end of
    rcutree.c, which means that, rather than having a random
    collection of forward declarations, the new set of forward
    declarations document the set of plugins. The new home for
    this #include also allows __rcu_init_preempt() to move into
    rcutree_plugin.h.

    o Fix rcu_preempt_check_callbacks() to be static.

    Suggested-by: Josh Triplett
    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: akpm@linux-foundation.org
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • These issues were identified during an old-fashioned face-to-face code
    review extending over many hours.

    o Bury various forms of the "rsp->completed == rsp->gpnum"
    comparison into an rcu_gp_in_progress() function, which has
    the beneficial side-effect of forcing consistent use of
    ACCESS_ONCE().

    o Replace hand-coded arithmetic with DIV_ROUND_UP().

    o Bury several "!list_empty(&rnp->blocked_tasks[rnp->gpnum & 0x01])"
    instances into an rcu_preempted_readers() function, as this
    expression indicates that there are readers blocked within RCU
    read-side critical sections blocking the current grace period.
    (Though there might well be similar readers blocking the next
    grace period.) Both new helpers are sketched after this list.

    o Remove an rcu_restart_cpu() declaration that has been dangling
    for almost 20 minor releases of the kernel.
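
    Sketches of the two helpers:

        static int rcu_gp_in_progress(struct rcu_state *rsp)
        {
                return ACCESS_ONCE(rsp->completed) != ACCESS_ONCE(rsp->gpnum);
        }

        static int rcu_preempted_readers(struct rcu_node *rnp)
        {
                return !list_empty(&rnp->blocked_tasks[rnp->gpnum & 0x01]);
        }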

    Signed-off-by: Paul E. McKenney
    Acked-by: Peter Zijlstra
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: akpm@linux-foundation.org
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

19 Sep, 2009

1 commit

  • Fix a number of whitespace errors in the include/linux/rcu*
    and the kernel/rcu* files.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: akpm@linux-foundation.org
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    LKML-Reference:
    [ did more checkpatch fixlets ]
    Signed-off-by: Ingo Molnar

    Paul E. McKenney