28 Feb, 2010

1 commit

  • Make the holdoff only happen when the full number of attempts
    has been made.
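
    A minimal user-space model of the adjusted behavior (the constants,
    names, and printout are assumed for illustration and are not the
    kernel code): the hold-off window is armed only once the full
    budget of attempts is spent, not on the first failed attempt.

        #include <stdbool.h>
        #include <stdio.h>

        #define MAX_ATTEMPTS  5    /* assumed per-idle-entry budget */
        #define HOLDOFF_TICKS 10   /* assumed hold-off window, in ticks */

        static int attempts;
        static long holdoff_until;

        static bool may_force_gp(long now)
        {
                if (now < holdoff_until)
                        return false;   /* inside the hold-off window */
                if (attempts < MAX_ATTEMPTS) {
                        attempts++;
                        return true;    /* still within the budget */
                }
                attempts = 0;
                holdoff_until = now + HOLDOFF_TICKS; /* arm only now */
                return false;
        }

        int main(void)
        {
                for (long tick = 0; tick < 30; tick++)
                        printf("tick %2ld: %s\n", tick,
                               may_force_gp(tick) ? "force" : "hold off");
                return 0;
        }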

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

27 Feb, 2010

2 commits

  • This patch disables irqs across the call to rcu_needs_cpu(). It
    also enforces a hold-off period so that the idle loop doesn't
    softirq itself to death when there are lots of RCU callbacks in
    flight on the last non-dynticked CPU.
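
    The first of the two changes has roughly this shape, shown as a
    hedged fragment (the surrounding idle-entry code is elided, and
    the exact call site differs in the kernel):

        unsigned long flags;

        local_irq_save(flags);  /* no irq can post a callback now */
        if (!rcu_needs_cpu(smp_processor_id())) {
                /* ... safe to enter dyntick-idle mode ... */
        }
        local_irq_restore(flags);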

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • It is invalid to invoke __rcu_process_callbacks() with irqs
    disabled, so do it indirectly via raise_softirq(). This
    requires a state-machine implementation to cycle through the
    grace-period machinery the required number of times.

    Located-by: Ingo Molnar
    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

25 Feb, 2010

4 commits

  • When RCU detects a grace-period stall, it currently just prints
    out the PID of any tasks doing the stalling. This patch adds
    RCU_CPU_STALL_VERBOSE, which enables the more-verbose reporting
    from sched_show_task().

    Suggested-by: Thomas Gleixner
    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • Under TREE_PREEMPT_RCU, print_other_cpu_stall() invokes
    rcu_print_task_stall() with the root rcu_node structure's ->lock
    held, and rcu_print_task_stall() acquires that same lock,
    resulting in self-deadlock. Fix this by removing the lock
    acquisition from
    rcu_print_task_stall(), and making all callers acquire the lock
    instead.
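
    A user-space sketch of the locking change (a pthread mutex stands
    in for the root rcu_node's ->lock, and the function bodies are
    placeholders): the helper now relies on a caller-holds-lock
    convention instead of acquiring the lock itself.

        #include <pthread.h>
        #include <stdio.h>

        static pthread_mutex_t rnp_lock = PTHREAD_MUTEX_INITIALIZER;

        /* Previously this also took rnp_lock, deadlocking any caller
         * that already held it; now the caller must hold the lock. */
        static void print_task_stall(void)
        {
                printf("  stalled task list...\n");
        }

        static void print_other_cpu_stall(void)
        {
                pthread_mutex_lock(&rnp_lock);  /* acquired once, here */
                print_task_stall();
                pthread_mutex_unlock(&rnp_lock);
        }

        int main(void)
        {
                print_other_cpu_stall();
                return 0;
        }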

    Tested-by: John Kacur
    Tested-by: Thomas Gleixner
    Located-by: Thomas Gleixner
    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • The spinlocks in rcutree need to be real spinlocks in
    preempt-rt. Convert them to raw_spinlocks.
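
    A representative hunk of the mechanical change involved (the
    field and call-site names vary across the files touched):

        -       spinlock_t lock;
        +       raw_spinlock_t lock;

        -       spin_lock_irqsave(&rnp->lock, flags);
        +       raw_spin_lock_irqsave(&rnp->lock, flags);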

    Signed-off-by: Thomas Gleixner
    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • Currently, rcu_needs_cpu() simply checks whether the current CPU
    has an outstanding RCU callback, which means that the last CPU
    to go into dyntick-idle mode might wait a few ticks for the
    relevant grace periods to complete. However, if all the other
    CPUs are in dyntick-idle mode, and if this CPU is in a quiescent
    state (which it is for RCU-bh and RCU-sched any time that we are
    considering going into dyntick-idle mode), then the grace period
    is instantly complete.

    This patch therefore repeatedly invokes the RCU grace-period
    machinery in order to force any needed grace periods to complete
    quickly. It does so a limited number of times in order to
    prevent starvation by an RCU callback function that might pass
    itself to call_rcu().

    However, if any CPU other than the current one is not in
    dyntick-idle mode, fall back to the simple check for outstanding
    callbacks (with a fix for a bug noted by Lai Jiangshan). Also,
    take advantage of the last grace-period forcing, an opportunity
    noted by Steve Rostedt, and apply the simplified #ifdef condition
    suggested by Frederic Weisbecker.
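
    A toy model of the bounded acceleration loop (user-space; the
    retry budget and helper are stand-ins, not the kernel's): keep
    kicking the grace-period machinery while callbacks remain, but
    only a fixed number of times, so a self-reposting callback cannot
    cause starvation.

        #include <stdio.h>

        #define RCU_NEEDS_CPU_FLUSHES 5  /* assumed retry budget */

        static int pending_callbacks = 3;

        /* Stand-in for one pass through the grace-period machinery. */
        static void force_grace_period(void)
        {
                if (pending_callbacks > 0)
                        pending_callbacks--;  /* one batch retires */
        }

        int main(void)
        {
                for (int i = 0; i < RCU_NEEDS_CPU_FLUSHES; i++) {
                        if (pending_callbacks == 0)
                                break;        /* GPs done: idle now */
                        force_grace_period();
                }
                printf("callbacks remaining: %d\n", pending_callbacks);
                return 0;
        }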

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

13 Jan, 2010

2 commits

  • TREE_PREEMPT_RCU maintains an rcu_read_lock_nesting counter in
    the task structure, which happens to be a signed int. So this
    patch adds a check for this counter being negative at the end of
    __rcu_read_unlock(). This check is under CONFIG_PROVE_LOCKING,
    so can be thought of as being part of lockdep.
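
    A standalone model of the check (the kernel's version sits at the
    end of __rcu_read_unlock() and complains via the lockdep machinery
    rather than assert()):

        #include <assert.h>

        static int rcu_read_lock_nesting; /* lives in task_struct */

        static void model_rcu_read_lock(void)
        {
                rcu_read_lock_nesting++;
        }

        static void model_rcu_read_unlock(void)
        {
                --rcu_read_lock_nesting;
                /* Negative count: unbalanced rcu_read_unlock(). */
                assert(rcu_read_lock_nesting >= 0);
        }

        int main(void)
        {
                model_rcu_read_lock();
                model_rcu_read_unlock(); /* balanced: no complaint */
                return 0;
        }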

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • Add force_quiescent_state() testing to rcutorture, with a
    separate thread that repeatedly invokes force_quiescent_state()
    in bursts. This can greatly increase the probability of
    encountering certain types of race conditions.
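
    A hedged user-space sketch of the new thread's rhythm (the burst
    size and holdoff are invented numbers, and a counter stands in
    for force_quiescent_state()): back-to-back invocations in bursts,
    separated by quiet periods.

        #include <stdio.h>
        #include <unistd.h>

        static long fqs_calls;

        static void fake_force_quiescent_state(void)
        {
                fqs_calls++;
        }

        int main(void)
        {
                for (int cycle = 0; cycle < 3; cycle++) {
                        for (int i = 0; i < 100; i++) /* one burst */
                                fake_force_quiescent_state();
                        sleep(1);  /* holdoff between bursts */
                }
                printf("%ld calls in 3 bursts\n", fqs_calls);
                return 0;
        }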

    Suggested-by: Josh Triplett
    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

03 Dec, 2009

2 commits

  • Implement a synchronize_rcu_expedited() for preemptible RCU
    that actually is expedited. This uses
    synchronize_sched_expedited() to force all threads currently
    running in a preemptible-RCU read-side critical section onto the
    appropriate ->blocked_tasks[] list, then takes a snapshot of all
    of these lists and waits for them to drain.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • The number of "quiet" functions has grown recently, and the
    names are no longer very descriptive. The point of all of these
    functions is to do some portion of the task of reporting a
    quiescent state, so rename them accordingly:

    o cpu_quiet() becomes rcu_report_qs_rdp(), which reports a
    quiescent state to the per-CPU rcu_data structure. If this
    turns out to be a new quiescent state for this grace period,
    then rcu_report_qs_rnp() will be invoked to propagate the
    quiescent state up the rcu_node hierarchy.

    o cpu_quiet_msk() becomes rcu_report_qs_rnp(), which reports
    a quiescent state for a given CPU (or possibly a set of CPUs)
    up the rcu_node hierarchy.

    o cpu_quiet_msk_finish() becomes rcu_report_qs_rsp(), which
    reports a full set of quiescent states to the global rcu_state
    structure.

    o task_quiet() becomes rcu_report_unblock_qs_rnp(), which reports
    a quiescent state due to a task exiting an RCU read-side critical
    section that had previously blocked in that same critical section.
    As indicated by the new name, this type of quiescent state is
    reported up the rcu_node hierarchy (using rcu_report_qs_rnp()
    to do so).

    Signed-off-by: Paul E. McKenney
    Acked-by: Josh Triplett
    Acked-by: Lai Jiangshan
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

23 Nov, 2009

2 commits

  • Remove #ifdefs from kernel/rcupdate.c and
    include/linux/rcupdate.h by moving code to
    include/linux/rcutiny.h, include/linux/rcutree.h, and
    kernel/rcutree.c.

    Also remove some definitions that are no longer used.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • When the last CPU of a given leaf rcu_node structure goes
    offline, all of the tasks queued on that leaf rcu_node structure
    (due to having blocked in their current RCU read-side critical
    sections) are requeued onto the root rcu_node structure. This
    requeuing is carried out by rcu_preempt_offline_tasks().
    However, it is possible that these queued tasks are the only
    thing preventing the leaf rcu_node structure from reporting a
    quiescent state up the rcu_node hierarchy. Unfortunately, the
    old code would fail to do this reporting, resulting in a
    grace-period stall given the following sequence of events:

    1. Kernel built for more than 32 CPUs on 32-bit systems or for more
    than 64 CPUs on 64-bit systems, so that there is more than one
    rcu_node structure. (Or CONFIG_RCU_FANOUT is artificially set
    to a number smaller than CONFIG_NR_CPUS.)

    2. The kernel is built with CONFIG_TREE_PREEMPT_RCU.

    3. A task running on a CPU associated with a given leaf rcu_node
    structure blocks while in an RCU read-side critical section
    -and- that CPU has not yet passed through a quiescent state
    for the current RCU grace period. This will cause the task
    to be queued on the leaf rcu_node's blocked_tasks[] array, in
    particular, on the element of this array corresponding to the
    current grace period.

    4. Each of the remaining CPUs corresponding to this same leaf rcu_node
    structure pass through a quiescent state. However, the task is
    still in its RCU read-side critical section, so these quiescent
    states cannot be reported further up the rcu_node hierarchy.
    Nevertheless, all bits in the leaf rcu_node structure's ->qsmask
    field are now zero.

    5. Each of the remaining CPUs go offline. (The events in step
    #4 and #5 can happen in any order as long as each CPU passes
    through a quiescent state before going offline.)

    6. When the last CPU goes offline, __rcu_offline_cpu() will invoke
    rcu_preempt_offline_tasks(), which will move the task to the
    root rcu_node structure, but without reporting a quiescent state
    up the rcu_node hierarchy (and this failure to report a quiescent
    state is the bug).

    But because this leaf rcu_node structure's ->qsmask field is
    already zero and its ->blocked_tasks[] entries are all empty,
    force_quiescent_state() will skip this rcu_node structure.

    Therefore, grace periods are now hung.

    This patch abstracts some code out of rcu_read_unlock_special(),
    calling the result task_quiet() by analogy with cpu_quiet(), and
    invokes task_quiet() from both rcu_read_unlock_special() and
    __rcu_offline_cpu(). Invoking task_quiet() from
    __rcu_offline_cpu() reports the quiescent state up the rcu_node
    hierarchy, fixing the bug. This ends up requiring a separate
    lock_class_key per level of the rcu_node hierarchy, which this
    patch also provides.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

12 Nov, 2009

1 commit

  • Because rcu_bootup_announce() is used only at boot time, mark it
    as __init, presumably so that its memory can be reclaimed.

    Suggested-by: Joe Perches
    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

11 Nov, 2009

2 commits

  • The rdp->passed_quiesc_completed fields are used to properly
    associate the recorded quiescent state with a grace period. It
    is OK to wrongly associate a given quiescent state with a
    preceding grace period, but it is fatal to associate a given
    quiescent state with a grace period that begins after the
    quiescent state occurred. Grace periods are numbered, and the
    following fields track them:

    o ->gpnum is the number of the grace period currently in
    progress, or the number of the last grace period to
    complete if no grace period is currently in progress.

    o ->completed is the number of the last grace period to
    have completed.

    These two fields are equal if there is no grace period in
    progress, otherwise ->gpnum is one greater than ->completed.
    But the rdp->passed_quiesc_completed field is compared against
    ->completed, and if they are equal, the quiescent state is
    presumed to count against the current grace period.

    The earlier code copied rdp->completed to
    rdp->passed_quiesc_completed, which has been made to work, but
    is error-prone. In contrast, copying one less than rdp->gpnum
    is guaranteed safe, because rdp->gpnum is not incremented until
    after the start of the corresponding grace period. At the end of
    the grace period, when ->completed has been incremented, any
    quiescent states recorded previously will be discarded.
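
    A toy of the association rule above (field names borrowed from
    the rcu_data structure; everything else simplified): recording
    gpnum - 1 can never credit a quiescent state to a grace period
    that starts later.

        #include <stdio.h>

        struct rdp { long gpnum, completed, passed_quiesc_completed; };

        static void record_qs(struct rdp *rdp)
        {
                /* Safe: ->gpnum is not incremented until after the
                 * corresponding grace period has started. */
                rdp->passed_quiesc_completed = rdp->gpnum - 1;
        }

        static int qs_counts(const struct rdp *rdp)
        {
                return rdp->passed_quiesc_completed == rdp->completed;
        }

        int main(void)
        {
                struct rdp rdp = { .gpnum = 5, .completed = 4 };
                record_qs(&rdp);  /* QS noted while GP 5 is running */
                printf("counts for current GP: %d\n", qs_counts(&rdp));
                return 0;
        }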

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • Some variants of gcc are reputed to dislike forward references
    to functions declared "inline". Remove the "inline" keyword
    from such functions.
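
    A small illustration of the pattern in question (the function
    name is hypothetical): the forward declaration stays, and only
    the "inline" keyword is dropped from the later definition.

        static void rcu_helper(void);  /* forward reference */

        /* Formerly "static inline void rcu_helper(void)". */
        static void rcu_helper(void)
        {
        }

        int main(void)
        {
                rcu_helper();
                return 0;
        }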

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

16 Oct, 2009

1 commit

  • If the following sequence of events occurs, then
    TREE_PREEMPT_RCU will hang waiting for a grace period to
    complete, eventually OOMing the system:

    o A TREE_PREEMPT_RCU build of the kernel is booted on a system
    with more than 64 physical CPUs present (32 on a 32-bit system).
    Alternatively, a TREE_PREEMPT_RCU build of the kernel is booted
    with RCU_FANOUT set to a sufficiently small value that the
    physical CPUs populate two or more leaf rcu_node structures.

    o A task is preempted in an RCU read-side critical section
    while running on a CPU corresponding to a given leaf rcu_node
    structure.

    o All CPUs corresponding to this same leaf rcu_node structure
    record quiescent states for the current grace period.

    o All of these same CPUs go offline (hence the need for enough
    physical CPUs to populate more than one leaf rcu_node structure).
    This causes the preempted task to be moved to the root rcu_node
    structure.

    At this point, there is nothing left to cause the quiescent
    state to be propagated up the rcu_node tree, so the current
    grace period never completes.

    The simplest fix, especially after considering the deadlock
    possibilities, is to detect this situation when the last CPU is
    offlined, and to set that CPU's ->qsmask bit in its leaf
    rcu_node structure. This will cause the next invocation of
    force_quiescent_state() to end the grace period.
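
    The heart of the fix, sketched from the description above (the
    surrounding offline logic is elided; rdp->grpmask denotes the
    outgoing CPU's bit in its leaf rcu_node's masks):

        /* Last CPU of this leaf is going offline while queued tasks
         * remain, so leave this CPU's bit set in ->qsmask; the next
         * force_quiescent_state() will then revisit this rcu_node
         * and end the grace period. */
        rnp->qsmask |= rdp->grpmask;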

    Without this fix, this hang can be triggered in an hour or so on
    some machines with rcutorture and random CPU onlining/offlining.
    With this fix, these same machines pass a full 10 hours of this
    sort of abuse.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

15 Oct, 2009

1 commit

  • For the short term, map synchronize_rcu_expedited() to
    synchronize_rcu() for TREE_PREEMPT_RCU and to
    synchronize_sched_expedited() for TREE_RCU.

    Longer term, there needs to be a real expedited grace period for
    TREE_PREEMPT_RCU, but candidate patches to date are considerably
    more complex and intrusive.
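
    The short-term mapping, sketched (the real definitions' placement
    and guards in the headers may differ):

        #ifdef CONFIG_TREE_PREEMPT_RCU
        static inline void synchronize_rcu_expedited(void)
        {
                synchronize_rcu();  /* no true expediting yet */
        }
        #else /* CONFIG_TREE_RCU */
        static inline void synchronize_rcu_expedited(void)
        {
                synchronize_sched_expedited();  /* sched GP suffices */
        }
        #endif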

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    Cc: npiggin@suse.de
    Cc: jens.axboe@oracle.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

07 Oct, 2009

1 commit

  • The current interaction between RCU and CPU hotplug requires that
    RCU block in CPU notifiers waiting for callbacks to drain.

    This can be greatly simplified by having each CPU relinquish
    its own callbacks, and by having both _rcu_barrier() and
    CPU_DEAD notifiers adopt all callbacks that were previously
    relinquished.

    This change also eliminates the possibility of certain types of
    hangs due to the previous practice of waiting for callbacks to be
    invoked from within CPU notifiers. If you never wait, you
    cannot hang.
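
    A toy of the relinquish/adopt scheme (plain C lists stand in for
    the rcu_head queues; all names invented): the dying CPU splices
    its pending callbacks onto an orphan list, and a survivor later
    adopts the whole list at once, so nothing ever waits on callback
    invocation.

        #include <stdio.h>
        #include <stddef.h>

        struct cb { struct cb *next; };

        static struct cb *orphan_list;  /* left behind by dead CPUs */

        /* Dying CPU: hand over the whole pending list. */
        static void relinquish(struct cb **cpu_list)
        {
                struct cb *head = *cpu_list, *tail;

                if (!head)
                        return;
                for (tail = head; tail->next; tail = tail->next)
                        ;
                tail->next = orphan_list;  /* splice, don't invoke */
                orphan_list = head;
                *cpu_list = NULL;
        }

        /* Survivor (or _rcu_barrier()): adopt everything at once. */
        static struct cb *adopt(void)
        {
                struct cb *head = orphan_list;

                orphan_list = NULL;
                return head;
        }

        int main(void)
        {
                struct cb a = { NULL }, b = { &a };
                struct cb *cpu0 = &b;  /* two pending callbacks */
                int n = 0;

                relinquish(&cpu0);
                for (struct cb *p = adopt(); p; p = p->next)
                        n++;
                printf("adopted %d callbacks\n", n);
                return 0;
        }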

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: akpm@linux-foundation.org
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

06 Oct, 2009

1 commit

  • These issues were identified during an old-fashioned face-to-face
    code review extending over many hours. This group improves an existing
    abstraction and introduces two new ones. It also fixes an RCU
    stall-warning bug found while making the other changes.

    o Make RCU_INIT_FLAVOR() declare its own variables, removing
    the need to declare them at each call site.

    o Create an rcu_for_each_leaf() macro that scans the leaf
    nodes of the rcu_node tree.

    o Create an rcu_for_each_node_breadth_first() macro that does
    a breadth-first traversal of the rcu_node tree, AKA
    stepping through the array in index-number order (sketches
    of both macros follow this list).

    o If all CPUs corresponding to a given leaf rcu_node
    structure go offline, then any tasks queued on that leaf
    will be moved to the root rcu_node structure. Therefore,
    the stall-warning code must dump out tasks queued on the
    root rcu_node structure as well as those queued on the leaf
    rcu_node structures.
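
    Plausible shapes for the two traversal macros (sketches only; the
    real definitions may differ in detail). Both exploit the rcu_node
    tree's layout in a single flat array:

        /* Breadth-first order is just array order, since the tree
         * is laid out level by level in one array. */
        #define rcu_for_each_node_breadth_first(rsp, rnp) \
                for ((rnp) = &(rsp)->node[0]; \
                     (rnp) < &(rsp)->node[NUM_RCU_NODES]; (rnp)++)

        /* The leaves occupy the tail of that same array. */
        #define rcu_for_each_leaf_node(rsp, rnp) \
                for ((rnp) = (rsp)->level[NUM_RCU_LVLS - 1]; \
                     (rnp) < &(rsp)->node[NUM_RCU_NODES]; (rnp)++)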

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: akpm@linux-foundation.org
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

24 Sep, 2009

2 commits

  • These issues were identified during an old-fashioned face-to-face
    code review extending over many hours.

    o Add comments for tricky parts of code, and correct comments
    that have passed their sell-by date.

    o Get rid of the vestiges of rcu_init_sched(), which is no
    longer needed now that PREEMPT_RCU is gone.

    o Move the #include of rcutree_plugin.h to the end of
    rcutree.c, which means that, rather than having a random
    collection of forward declarations, the new set of forward
    declarations document the set of plugins. The new home for
    this #include also allows __rcu_init_preempt() to move into
    rcutree_plugin.h.

    o Fix rcu_preempt_check_callbacks() to be static.

    Suggested-by: Josh Triplett
    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: akpm@linux-foundation.org
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • These issues were identified during an old-fashioned face-to-face
    code review extending over many hours.

    o Bury various forms of the "rsp->completed == rsp->gpnum"
    comparison into an rcu_gp_in_progress() function, which has
    the beneficial side-effect of forcing consistent use of
    ACCESS_ONCE().

    o Replace hand-coded arithmetic with DIV_ROUND_UP().

    o Bury several "!list_empty(&rnp->blocked_tasks[rnp->gpnum & 0x01])"
    instances into an rcu_preempted_readers() function, as this
    expression indicates that there are readers blocked within
    RCU read-side critical sections that are blocking the current
    grace period. (Though there might well be similar readers
    blocking the next grace period.) Sketches of both helpers
    follow this list.

    o Remove a dangling rcu_restart_cpu() declaration that has
    been dangling for almost 20 minor releases of the kernel.
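
    Sketches of the two helpers, with shapes inferred from the
    descriptions above:

        static int rcu_gp_in_progress(struct rcu_state *rsp)
        {
                /* Burying the comparison here forces consistent
                 * use of ACCESS_ONCE(). */
                return ACCESS_ONCE(rsp->completed) !=
                       ACCESS_ONCE(rsp->gpnum);
        }

        static int rcu_preempted_readers(struct rcu_node *rnp)
        {
                /* Non-empty list: readers block the current GP. */
                return !list_empty(&rnp->blocked_tasks[rnp->gpnum & 0x01]);
        }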

    Signed-off-by: Paul E. McKenney
    Acked-by: Peter Zijlstra
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: akpm@linux-foundation.org
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

19 Sep, 2009

4 commits

  • Fix a number of whitespace ^Ierrors in the include/linux/rcu*
    and the kernel/rcu* files.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: akpm@linux-foundation.org
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    LKML-Reference:
    [ did more checkpatch fixlets ]
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • Commit de078d8 ("rcu: Need to update rnp->gpnum if preemptable RCU
    is to be reliable") repeatedly and incorrectly initializes the root
    rcu_node structure's ->gpnum field rather than initializing the
    ->gpnum field of each node in the tree. Fix this. Also add an
    additional consistency check to catch this in the future.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: akpm@linux-foundation.org
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • o Drop the calls to cpu_quiet() from the online/offline code.
    These are unnecessary, since force_quiescent_state() will
    clean up, and removing them simplifies the code a bit.

    o Add a warning to check that we don't enqueue the same blocked
    task twice onto the ->blocked_tasks[] lists.

    o Rework the phase computation in rcu_preempt_note_context_switch()
    to be more readable, as suggested by Josh Triplett.

    o Disable irqs to close a race between the scheduling clock
    interrupt and rcu_preempt_note_context_switch() WRT the
    ->rcu_read_unlock_special field.

    o Add comments to rnp->lock acquisition and release within
    rcu_read_unlock_special() noting that irqs are already
    disabled.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: akpm@linux-foundation.org
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • o Verify that qsmask bits stay clear through GP
    initialization.

    o Verify that cpu_quiet_msk_finish() is never invoked unless
    there actually is an RCU grace period in progress.

    o Verify that all internal-node rcu_node structures have empty
    blocked_tasks[] lists.

    o Verify that child rcu_node structure's bits remain clear after
    acquiring parent's lock.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: akpm@linux-foundation.org
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

18 Sep, 2009

2 commits

  • The earlier approach required two scheduling-clock ticks to note
    a preemptable-RCU quiescent state in the situation in which the
    scheduling-clock interrupt is unlucky enough to always interrupt an
    RCU read-side critical section.

    With this change, the quiescent state is instead noted by the
    outermost rcu_read_unlock() immediately following the first
    scheduling-clock tick, or, alternatively, by the first subsequent
    context switch. Therefore, this change also speeds up grace
    periods.
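
    A user-space cartoon of the new scheme (the flag and functions
    are invented): the scheduling-clock tick merely requests a
    quiescent state, and the outermost unlock reports it as soon as
    nesting drops to zero.

        #include <stdio.h>

        static int nesting;
        static int qs_requested;  /* set by the tick */
        static int qs_reported;

        static void tick(void)
        {
                if (nesting > 0)
                        qs_requested = 1;  /* reader active: ask */
                else
                        qs_reported = 1;   /* already quiescent */
        }

        static void read_unlock(void)
        {
                if (--nesting == 0 && qs_requested) {
                        qs_requested = 0;
                        qs_reported = 1;   /* outermost unlock */
                }
        }

        int main(void)
        {
                nesting = 2;       /* nested read-side sections */
                tick();            /* first tick hits the reader */
                read_unlock();     /* inner unlock: still nested */
                read_unlock();     /* outermost unlock reports QS */
                printf("qs_reported = %d\n", qs_reported);
                return 0;
        }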

    Suggested-by: Josh Triplett
    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: akpm@linux-foundation.org
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • Check to make sure that there are no blocked tasks for the previous
    grace period while initializing for the next grace period, and
    verify that rcu_preempt_qs() is given the correct CPU number and
    is never called for an offline CPU.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: akpm@linux-foundation.org
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

29 Aug, 2009

2 commits

  • Changes suggested by review comments from Josh Triplett and
    Mathieu Desnoyers.

    Signed-off-by: Paul E. McKenney
    Acked-by: Josh Triplett
    Acked-by: Mathieu Desnoyers
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: akpm@linux-foundation.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • When offlining CPUs from a multi-level tree, there is the
    possibility of offlining the last CPU from a given node when
    there are preempted RCU read-side critical sections that
    started life on one of the CPUs on that node.

    In this case, the corresponding tasks will be enqueued via the
    task_struct's rcu_node_entry list_head onto one of the
    rcu_node's blocked_tasks[] lists. These tasks need to be moved
    somewhere else so that they will continue to prevent the current
    grace period from ending. That somewhere is the root rcu_node.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: akpm@linux-foundation.org
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josht@linux.vnet.ibm.com
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

25 Aug, 2009

1 commit

  • Add preemptable-RCU plugin to handle the CPU-offline
    processing.

    An additional plugin is forthcoming to handle multinode RCU
    trees, but this current plugin works for configurations up to
    32 CPUs (64 CPUs for 64-bit kernels).

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: akpm@linux-foundation.org
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josht@linux.vnet.ibm.com
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

23 Aug, 2009

1 commit

  • Create a kernel/rcutree_plugin.h file that contains definitions
    for preemptable RCU (or, under the #else branch of the #ifdef,
    empty definitions for the classic non-preemptable semantics).
    These definitions fit into plugins defined in kernel/rcutree.c
    for this purpose.

    This variant of preemptable RCU uses a new algorithm whose
    read-side expense is roughly that of classic hierarchical RCU
    under CONFIG_PREEMPT. This new algorithm's update-side expense
    is similar to that of classic hierarchical RCU, and, in absence
    of read-side preemption or blocking, is exactly that of classic
    hierarchical RCU. Perhaps more important, this new algorithm
    has a much simpler implementation, saving well over 1,000 lines
    of code compared to mainline's implementation of preemptable
    RCU, which will hopefully be retired in favor of this new
    algorithm.

    The simplifications are obtained by maintaining per-task
    nesting state for running tasks, and using a simple
    lock-protected algorithm to handle accounting when tasks block
    within RCU read-side critical sections, making use of lessons
    learned while creating numerous user-level RCU implementations
    over the past 18 months.
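
    A greatly simplified user-space model of that accounting (a plain
    counter for nesting plus a mutex-guarded count standing in for
    the blocked-tasks lists; every name here is invented):

        #include <pthread.h>
        #include <stdio.h>

        static __thread int nesting;  /* per-task nesting state */
        static pthread_mutex_t blocked_lock = PTHREAD_MUTEX_INITIALIZER;
        static int blocked_readers;   /* stand-in for the lists */

        static void model_read_lock(void)
        {
                nesting++;  /* the cheap, lock-free common case */
        }

        static void model_read_unlock(void)
        {
                nesting--;
        }

        /* Taken only when a reader blocks inside a section. */
        static void model_block_in_reader(void)
        {
                pthread_mutex_lock(&blocked_lock);
                blocked_readers++;  /* enqueue onto a blocked list */
                pthread_mutex_unlock(&blocked_lock);
        }

        int main(void)
        {
                model_read_lock();
                model_block_in_reader();  /* rare, locked path */
                model_read_unlock();
                printf("blocked_readers = %d\n", blocked_readers);
                return 0;
        }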

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: akpm@linux-foundation.org
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josht@linux.vnet.ibm.com
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney