29 Sep, 2011

14 commits

  • The purpose of rcu_needs_cpu_flush() was to iterate on pushing the
    current grace period in order to help the current CPU enter dyntick-idle
    mode. However, this can result in failures if the CPU starts entering
    dyntick-idle mode, but then backs out. In this case, the call to
    rcu_pending() from rcu_needs_cpu_flush() might end up announcing a
    non-existent quiescent state.

    This commit therefore removes rcu_needs_cpu_flush() in favor of letting
    the dyntick-idle machinery at the end of the softirq handler push the
    loop along via its call to rcu_pending().

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • RCU boost threads start life at RCU_BOOST_PRIO, while others remain
    at RCU_KTHREAD_PRIO. While here, change thread names to match other
    kthreads, and adjust rcu_yield() to not override the priority set by
    the user. This last change sets the stage for runtime changes to
    priority in the -rt tree.
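
    A minimal sketch of the priority setup described above (boost_task and
    other_task are placeholder variables; RCU_BOOST_PRIO and RCU_KTHREAD_PRIO
    are the constants named above):

        struct sched_param sp;

        /* Boost kthreads start life at the boosting priority... */
        sp.sched_priority = RCU_BOOST_PRIO;
        sched_setscheduler_nocheck(boost_task, SCHED_FIFO, &sp);

        /* ...while the remaining RCU kthreads stay at RCU_KTHREAD_PRIO. */
        sp.sched_priority = RCU_KTHREAD_PRIO;
        sched_setscheduler_nocheck(other_task, SCHED_FIFO, &sp);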

    Signed-off-by: Mike Galbraith
    Signed-off-by: Paul E. McKenney

    Mike Galbraith
     
  • Create a separate lockdep class for the rt_mutex used for RCU priority
    boosting and enable use of rt_mutex_lock() with irqs disabled. This
    prevents RCU priority boosting from falling prey to deadlocks when
    someone begins an RCU read-side critical section in preemptible state,
    but releases it with an irq-disabled lock held.

    Unfortunately, the scheduler's runqueue and priority-inheritance locks
    still must either completely enclose or be completely enclosed by any
    overlapping RCU read-side critical section.

    This version removes a redundant local_irq_restore() noted by
    Yong Zhang.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • It is possible for an RCU CPU stall to end just as it is detected, in
    which case the current code will uselessly dump all CPUs' stacks.
    This commit therefore checks for this condition and refrains from
    sending needless NMIs.

    And yes, the stall might also end just after we checked all CPUs and
    tasks, but in that case we would at least have given some clue as
    to which CPU/task was at fault.
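
    A hedged sketch of the check (the ndetected count and the message text are
    illustrative, not the exact kernel code):

        /* ndetected: stalled CPUs/tasks found while scanning the rcu_node tree. */
        if (ndetected == 0)
                printk(KERN_ERR "INFO: Stall ended before state dump start\n");
        else
                trigger_all_cpu_backtrace();  /* NMI the CPUs for stack dumps */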

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Commit 7765be (Fix RCU_BOOST race handling current->rcu_read_unlock_special)
    introduced a new ->rcu_boosted field in the task structure. This is
    redundant because the existing ->rcu_boost_mutex will be non-NULL at
    any time that ->rcu_boosted is nonzero. Therefore, this commit removes
    ->rcu_boosted and tests ->rcu_boost_mutex instead.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • We only need to constrain the compiler if we are actually exiting
    the top-level RCU read-side critical section. This commit therefore
    moves the first barrier() call in __rcu_read_unlock() to inside the
    "if" statement, thus avoiding needless register flushes for inner
    rcu_read_unlock() calls.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • There is often a delay between the time that a CPU passes through a
    quiescent state and the time that this quiescent state is reported to the
    RCU core. It is quite possible that the grace period ended before the
    quiescent state could be reported; for example, some other CPU might have
    deduced that this CPU passed through dyntick-idle mode. It is critically
    important that the quiescent state be counted only against the grace period
    that was in effect at the time that the quiescent state was detected.

    Previously, this was handled by recording the number of the last grace
    period to complete when passing through a quiescent state. The RCU
    core then checks this number against the current value, and rejects
    the quiescent state if there is a mismatch. However, one additional
    possibility must be accounted for, namely that the quiescent state was
    recorded after the prior grace period completed but before the current
    grace period started. In this case, the RCU core must reject the
    quiescent state, but the recorded number will match. This is handled
    when the CPU becomes aware of a new grace period -- at that point,
    it invalidates any prior quiescent state.

    This works, but is a bit indirect. The new approach records the current
    grace period, and the RCU core checks to see (1) that this is still the
    current grace period and (2) that this grace period has not yet ended.
    This approach simplifies reasoning about correctness, and this commit
    changes over to this new approach.
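
    A sketch of the new check (field and variable names are illustrative
    approximations of the rcutree code):

        /* When the quiescent state is detected, record the grace period in effect. */
        rdp->passed_quiesce_gpnum = rdp->gpnum;

        /*
         * Later, when reporting to the RCU core, accept the quiescent state
         * only if (1) it is still the same grace period and (2) that grace
         * period has not already ended.
         */
        if (rdp->passed_quiesce_gpnum != rnp->gpnum ||
            rnp->completed == rnp->gpnum)
                return;  /* Reject: cannot be counted against any grace period. */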

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Add trace events to record grace-period start and end, quiescent states,
    CPUs noticing grace-period start and end, grace-period initialization,
    call_rcu() invocation, tasks blocking in RCU read-side critical sections,
    tasks exiting those same critical sections, force_quiescent_state()
    detection of dyntick-idle and offline CPUs, CPUs entering and leaving
    dyntick-idle mode (except from NMIs), CPUs coming online and going
    offline, and CPUs being kicked for staying in dyntick-idle mode for too
    long (as in many weeks, even on 32-bit systems).

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    rcu: Add the rcu flavor to callback trace events

    The earlier trace events for registering RCU callbacks and for invoking
    them did not include the RCU flavor (rcu_bh, rcu_preempt, or rcu_sched).
    This commit adds the RCU flavor to those trace events.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Add event-trace markers to TREE_RCU kthreads to allow including these
    kthreads' CPU time in the utilization calculations.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • We now have kthreads only for flavors of RCU that support boosting,
    so update the now-misleading comments accordingly.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • In order to allow event tracing to distinguish between flavors of
    RCU, we need those names in the relevant RCU data structures. TINY_RCU
    has avoided them for memory-footprint reasons, so add them only if
    CONFIG_RCU_TRACE=y.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Pull the code that waits for an RCU grace period into a single function,
    which is then called by synchronize_rcu() and friends in the case of
    TREE_RCU and TREE_PREEMPT_RCU, and from rcu_barrier() and friends in
    the case of TINY_RCU and TINY_PREEMPT_RCU.
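
    A sketch of such a helper (simplified from the actual code; debug-objects
    handling omitted):

        void wait_rcu_gp(call_rcu_func_t crf)
        {
                struct rcu_synchronize rcu;

                init_completion(&rcu.completion);
                crf(&rcu.head, wakeme_after_rcu);      /* queue callback for this flavor */
                wait_for_completion(&rcu.completion);  /* wait for the grace period */
        }

        /* synchronize_rcu() and friends then become thin wrappers, for example: */
        void synchronize_rcu(void)
        {
                wait_rcu_gp(call_rcu);
        }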

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • There are a number of cases where RCU can find additional work
    for the per-CPU kthread within the context of that per-CPU kthread.
    In such cases, the per-CPU kthread is already running, so attempting
    to wake itself up does nothing except waste CPU cycles. This commit
    therefore checks to see if it is in the per-CPU kthread context,
    omitting the wakeup in this case.
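
    A sketch of the check (simplified; the per-CPU variable names follow the
    rcutree kthread code of this period but should be treated as illustrative):

        unsigned long flags;
        struct task_struct *t;

        local_irq_save(flags);
        __this_cpu_write(rcu_cpu_has_work, 1);
        t = __this_cpu_read(rcu_cpu_kthread_task);
        if (t != NULL && t != current)
                wake_up_process(t);  /* no self-wakeup from kthread context */
        local_irq_restore(flags);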

    Signed-off-by: Shaohua Li
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Shaohua Li
     
  • Commit a26ac2455ffc (move TREE_RCU from softirq to kthread) added
    per-CPU kthreads. However, kthread creation uses kthread_create(), which
    can put the kthread's stack and task struct on the wrong NUMA node.
    Therefore, use kthread_create_on_node() instead of kthread_create()
    so that the stacks and task structs are placed on the correct NUMA node.

    A similar change was carried out in commit 94dcf29a11b3 (kthread:
    use kthread_create_on_node()).

    Also change rcutorture's priority-boost-test kthread creation.
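
    A sketch of the resulting creation call (the "rcuc/%d" name format is
    illustrative):

        struct task_struct *t;

        t = kthread_create_on_node(rcu_cpu_kthread, (void *)(long)cpu,
                                   cpu_to_node(cpu), "rcuc/%d", cpu);
        if (IS_ERR(t))
                return PTR_ERR(t);
        kthread_bind(t, cpu);  /* the per-CPU kthread stays on its own CPU */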

    Signed-off-by: Eric Dumazet
    CC: Tejun Heo
    CC: Rusty Russell
    CC: Andrew Morton
    CC: Andi Kleen
    CC: Ingo Molnar
    Signed-off-by: Paul E. McKenney

    Eric Dumazet
     

21 Jul, 2011

2 commits

  • The rcu_read_unlock_special() function relies on in_irq() to exclude
    scheduler activity from interrupt level. This fails because irq_exit()
    can invoke the scheduler after clearing the preempt_count() bits that
    in_irq() uses to determine that it is at interrupt level. This situation
    can result in failures as follows:

    $task:
        rcu_read_lock()
        /* do stuff */
        |= UNLOCK_BLOCKED
        rcu_read_unlock()
          --t->rcu_read_lock_nesting

    IRQ:
        irq_enter();
        /* do stuff, don't use RCU */
        irq_exit();
          sub_preempt_count(IRQ_EXIT_OFFSET);
          invoke_softirq()

    SoftIRQ:
        ttwu();
          spin_lock_irq(&pi->lock)
          rcu_read_lock();
          /* do stuff */
          rcu_read_unlock();
            rcu_read_unlock_special()
              rcu_report_exp_rnp()
                ttwu()
                  spin_lock_irq(&pi->lock)   /* deadlock */

    $task:
        rcu_read_unlock_special(t);

    Ed can trigger this easily because invoke_softirq() immediately does a
    ttwu() of ksoftirqd/# instead of doing the in-place softirq work first,
    but even without that the above can happen.

    Cure this by also excluding softirqs from the
    rcu_read_unlock_special() handler and ensuring the force_irqthreads
    ksoftirqd/# wakeup is done from full softirq context.

    [ Alternatively, delaying the ->rcu_read_lock_nesting decrement
    until after the special handling would make the thing more robust
    in the face of interrupts as well. And there is a separate patch
    for that. ]
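
    A sketch of the added exclusion in rcu_read_unlock_special() (heavily
    simplified; only the in_serving_softirq() test is new):

        static noinline void rcu_read_unlock_special(struct task_struct *t)
        {
                unsigned long flags;

                /* NMI handlers cannot block and cannot safely manipulate state. */
                if (in_nmi())
                        return;

                local_irq_save(flags);

                /*
                 * Bail out not only from hard-irq context but also from softirq
                 * context (including the irq_exit() path shown above), leaving
                 * the work to the task-level rcu_read_unlock().
                 */
                if (in_irq() || in_serving_softirq()) {
                        local_irq_restore(flags);
                        return;
                }

                /* ... remainder of the special-case processing ... */
                local_irq_restore(flags);
        }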

    Cc: Thomas Gleixner
    Reported-and-tested-by: Ed Tomlinson
    Signed-off-by: Peter Zijlstra
    Signed-off-by: Paul E. McKenney

    Peter Zijlstra
     
  • The addition of RCU read-side critical sections within runqueue and
    priority-inheritance lock critical sections introduced some deadlock
    cycles, for example, involving interrupts from __rcu_read_unlock()
    where the interrupt handlers call wake_up(). This situation can cause
    the instance of __rcu_read_unlock() invoked from interrupt to do some
    of the processing that would otherwise have been carried out by the
    task-level instance of __rcu_read_unlock(). When the interrupt-level
    instance of __rcu_read_unlock() is called with a scheduler lock held
    from interrupt-entry/exit situations where in_irq() returns false,
    deadlock can result.

    This commit resolves these deadlocks by using negative values of
    the per-task ->rcu_read_lock_nesting counter to indicate that an
    instance of __rcu_read_unlock() is in flight, which in turn prevents
    instances from interrupt handlers from doing any special processing.
    This patch is inspired by Steven Rostedt's earlier patch that similarly
    made __rcu_read_unlock() guard against interrupt-mediated recursion
    (see https://lkml.org/lkml/2011/7/15/326), but this commit refines
    Steven's approach to avoid the need for preemption disabling on the
    __rcu_read_unlock() fastpath and to also avoid the need for manipulating
    a separate per-CPU variable.

    This patch avoids need for preempt_disable() by instead using negative
    values of the per-task ->rcu_read_lock_nesting counter. Note that nested
    rcu_read_lock()/rcu_read_unlock() pairs are still permitted, but they will
    never see ->rcu_read_lock_nesting go to zero, and will therefore never
    invoke rcu_read_unlock_special(), thus preventing them from seeing the
    RCU_READ_UNLOCK_BLOCKED bit should it be set in ->rcu_read_unlock_special.
    This patch also adds a check for ->rcu_read_lock_nesting being negative
    in rcu_check_callbacks(), thus preventing the RCU_READ_UNLOCK_NEED_QS
    bit from being set should a scheduling-clock interrupt occur while
    __rcu_read_unlock() is exiting from an outermost RCU read-side critical
    section.

    Of course, __rcu_read_unlock() can be preempted during the time that
    ->rcu_read_lock_nesting is negative. This could result in the setting
    of the RCU_READ_UNLOCK_BLOCKED bit after __rcu_read_unlock() checks it,
    and would also result in this task being queued on the corresponding
    rcu_node structure's blkd_tasks list. Therefore, some later RCU read-side
    critical section would enter rcu_read_unlock_special() to clean up --
    which could result in deadlock if that critical section happened to be in
    the scheduler where the runqueue or priority-inheritance locks were held.

    This situation is dealt with by making rcu_preempt_note_context_switch()
    check for negative ->rcu_read_lock_nesting, thus refraining from
    queuing the task (and from setting RCU_READ_UNLOCK_BLOCKED) if we are
    already exiting from the outermost RCU read-side critical section (in
    other words, we really are no longer actually in that RCU read-side
    critical section). In addition, rcu_preempt_note_context_switch()
    invokes rcu_read_unlock_special() to carry out the cleanup in this case,
    which clears out the ->rcu_read_unlock_special bits and dequeues the task
    (if necessary), in turn avoiding needless delay of the current RCU grace
    period and needless RCU priority boosting.
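
    A sketch of the resulting __rcu_read_unlock() (simplified; INT_MIN is the
    "negative value" described above):

        void __rcu_read_unlock(void)
        {
                struct task_struct *t = current;

                if (t->rcu_read_lock_nesting != 1) {
                        --t->rcu_read_lock_nesting;  /* nested unlock: nothing special */
                } else {
                        barrier();  /* critical section before exit code */
                        t->rcu_read_lock_nesting = INT_MIN;  /* unlock now in flight */
                        barrier();  /* assignment before ->rcu_read_unlock_special load */
                        if (unlikely(ACCESS_ONCE(t->rcu_read_unlock_special)))
                                rcu_read_unlock_special(t);
                        barrier();  /* ->rcu_read_unlock_special load before assignment */
                        t->rcu_read_lock_nesting = 0;  /* fully out of the critical section */
                }
        }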

    It is still illegal to call rcu_read_unlock() while holding a scheduler
    lock if the prior RCU read-side critical section has ever had either
    preemption or irqs enabled. However, the common use case is legal,
    namely where the entire RCU read-side critical section executes with
    irqs disabled, for example, when the scheduler lock is held across the
    entire lifetime of the RCU read-side critical section.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     

20 Jul, 2011

3 commits

  • Given some common flag combinations, particularly -Os, gcc will inline
    rcu_read_unlock_special() despite its being in an unlikely() clause.
    Use noinline to prohibit this misoptimization.

    In addition, move the second barrier() in __rcu_read_unlock() so that
    it is not on the common-case code path. This will allow the compiler to
    generate better code for the common-case path through __rcu_read_unlock().

    Suggested-by: Linus Torvalds
    Signed-off-by: Paul E. McKenney
    Acked-by: Mathieu Desnoyers

    Paul E. McKenney
     
  • The RCU_BOOST commits for TREE_PREEMPT_RCU introduced an other-task
    write to a new RCU_READ_UNLOCK_BOOSTED bit in the task_struct structure's
    ->rcu_read_unlock_special field, but, as noted by Steven Rostedt, without
    correctly synchronizing all accesses to ->rcu_read_unlock_special.
    This could result in bits in ->rcu_read_unlock_special being spuriously
    set and cleared due to conflicting accesses, which in turn could result
    in deadlocks between the rcu_node structure's ->lock and the scheduler's
    rq and pi locks. These deadlocks would result from RCU incorrectly
    believing that the just-ended RCU read-side critical section had been
    preempted and/or boosted. If that RCU read-side critical section was
    executed with either rq or pi locks held, RCU's ensuing (incorrect)
    calls to the scheduler would cause the scheduler to attempt to once
    again acquire the rq and pi locks, resulting in deadlock. More complex
    deadlock cycles are also possible, involving multiple rq and pi locks
    as well as locks from multiple rcu_node structures.

    This commit fixes synchronization by creating a ->rcu_boosted field in
    task_struct that is accessed and modified only when holding the ->lock
    in the rcu_node structure on which the task is queued (on that rcu_node
    structure's ->blkd_tasks list). This results in tasks accessing only
    their own current->rcu_read_unlock_special fields, making unsynchronized
    access once again legal, and keeping the rcu_read_unlock() fastpath free
    of atomic instructions and memory barriers.

    The reason that the rcu_read_unlock() fastpath does not need to access
    the new current->rcu_boosted field is that this new field cannot
    be non-zero unless the RCU_READ_UNLOCK_BLOCKED bit is set in the
    current->rcu_read_unlock_special field. Therefore, rcu_read_unlock()
    need only test current->rcu_read_unlock_special: if that is zero, then
    current->rcu_boosted must also be zero.

    This bug does not affect TINY_PREEMPT_RCU because this implementation
    of RCU accesses current->rcu_read_unlock_special with irqs disabled,
    thus preventing races on the !SMP systems that TINY_PREEMPT_RCU runs on.

    Maybe-reported-by: Dave Jones
    Maybe-reported-by: Sergey Senozhatsky
    Reported-by: Steven Rostedt
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Steven Rostedt

    Paul E. McKenney
     
  • PREEMPT_RCU read-side critical sections blocking an expedited grace
    period invoke rcu_report_exp_rnp(). When the last such critical section
    has completed, rcu_report_exp_rnp() invokes the scheduler to wake up the
    task that invoked synchronize_rcu_expedited() -- needlessly holding the
    root rcu_node structure's lock while doing so, thus needlessly providing
    a way for RCU and the scheduler to deadlock.

    This commit therefore releases the root rcu_node structure's lock before
    calling wake_up().

    Reported-by: Ed Tomlinson
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     

13 Jul, 2011

1 commit

  • Under some rare but real combinations of configuration parameters, RCU
    callbacks are posted during early boot that use kernel facilities that
    are not yet initialized. Therefore, when these callbacks are invoked,
    hard hangs and crashes ensue. This commit therefore prevents RCU
    callbacks from being invoked until after the scheduler is fully up and
    running, as in after multiple tasks have been spawned.

    It might well turn out that a better approach is to identify the specific
    RCU callbacks that are causing this problem, but that discussion will
    wait until such time as someone really needs an RCU callback to be invoked
    (as opposed to merely registered) during early boot.
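
    A sketch of the gating described above (the flag and helper names are
    assumptions for illustration):

        int rcu_scheduler_fully_active __read_mostly;

        /* Called once the scheduler is able to run multiple tasks. */
        void rcu_scheduler_fully_started(void)  /* illustrative name */
        {
                rcu_scheduler_fully_active = 1;
        }

        static void invoke_rcu_callbacks(void)
        {
                if (unlikely(!rcu_scheduler_fully_active))
                        return;  /* too early in boot: leave callbacks queued */
                /* ... hand the callbacks off for invocation ... */
        }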

    Reported-by: julie Sullivan
    Reported-by: RKK
    Signed-off-by: Paul E. McKenney
    Tested-by: Konrad Rzeszutek Wilk
    Tested-by: julie Sullivan
    Tested-by: RKK

    Paul E. McKenney
     

17 Jun, 2011

1 commit

  • The commit "use softirq instead of kthreads except when RCU_BOOST=y"
    just applied #ifdef in place. This commit is a cleanup that moves
    the newly #ifdef'ed code to the header file kernel/rcutree_plugin.h.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     

16 Jun, 2011

1 commit


15 Jun, 2011

2 commits

  • Commit a26ac2455ffcf3 (rcu: move TREE_RCU from softirq to kthread)
    introduced a performance regression. In an AIM7 test, this commit
    degraded performance by about 40%.

    The commit runs RCU callbacks in a kthread instead of in softirq context.
    We observed a high rate of context switches caused by this. Our test system
    has 64 CPUs and HZ is 1000, so we saw more than 64k context switches per
    second caused by RCU's per-CPU kthreads. A trace showed that most of
    the time the RCU per-CPU kthread doesn't actually handle any callbacks,
    but instead just does a very small amount of work handling grace periods.
    This means that RCU's per-CPU kthreads are making the scheduler do quite
    a bit of work in order to allow a very small amount of RCU-related
    processing to be done.

    Alex Shi's analysis determined that this slowdown is due to lock
    contention within the scheduler. Unfortunately, as Peter Zijlstra points
    out, the scheduler's real-time semantics require global action, which
    means that this contention is inherent in real-time scheduling. (Yes,
    perhaps someone will come up with a workaround -- otherwise, -rt is not
    going to do well on large SMP systems -- but this patch will work around
    this issue in the meantime. And "the meantime" might well be forever.)

    This patch therefore re-introduces softirq processing to RCU, but only
    for core RCU work. RCU callbacks are still executed in kthread context,
    so that only a small amount of RCU work runs in softirq context in the
    common case. This should minimize ksoftirqd execution, allowing us to
    skip boosting of ksoftirqd for CONFIG_RCU_BOOST=y kernels.
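
    A sketch of the resulting split (simplified):

        /* Core grace-period processing is cheap, so push it back into softirq... */
        static void invoke_rcu_core(void)
        {
                raise_softirq(RCU_SOFTIRQ);
        }

        /* ...while callback invocation remains with the per-CPU kthread. */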

    Signed-off-by: Shaohua Li
    Tested-by: "Alex,Shi"
    Signed-off-by: Paul E. McKenney

    Shaohua Li
     
  • Make the functions creating the kthreads wake them up. Leverage the
    fact that the per-node and boost kthreads can run anywhere, thus
    dispensing with the need to wake them up once the incoming CPU has
    gone fully online.

    Signed-off-by: Paul E. McKenney
    Tested-by: Daniel J Blueman

    Paul E. McKenney
     

31 May, 2011

1 commit

  • Commit cc3ce5176d83 (rcu: Start RCU kthreads in TASK_INTERRUPTIBLE
    state) fudges a sleeping task's state, resulting in the scheduler seeing
    a TASK_UNINTERRUPTIBLE task going to sleep, but a TASK_INTERRUPTIBLE
    task waking up. The result is unbalanced load calculation.

    The problem that patch tried to address is that the RCU threads could
    stay in UNINTERRUPTIBLE state for quite a while, triggering the hung
    task detector, because they are only woken on demand.

    Cure the problem differently by always giving the tasks at least one
    wake-up once the CPU is fully up and running; this will kick them out of
    the initial UNINTERRUPTIBLE state and into the regular INTERRUPTIBLE
    wait state.

    [ The alternative would be teaching kthread_create() to start threads as
    INTERRUPTIBLE but that needs a tad more thought. ]

    Reported-by: Damien Wyart
    Signed-off-by: Peter Zijlstra
    Acked-by: Paul E. McKenney
    Link: http://lkml.kernel.org/r/1306755291.1200.2872.camel@twins
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

28 May, 2011

2 commits

  • Upon creation, kthreads are in TASK_UNINTERRUPTIBLE state, which can
    result in softlockup warnings. Because some of RCU's kthreads can
    legitimately be idle indefinitely, start them in TASK_INTERRUPTIBLE
    state in order to avoid those warnings.

    Suggested-by: Peter Zijlstra
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney
    Tested-by: Yinghai Lu
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • It is not necessary to use waitqueues for the RCU kthreads because
    we always know exactly which thread is to be awakened. In addition,
    wake_up() only issues an actual wakeup when there is a thread waiting on
    the queue, which was why there was an extra explicit wake_up_process()
    to get the RCU kthreads started.

    Eliminating the waitqueues (and wake_up()) in favor of wake_up_process()
    eliminates the need for the initial wake_up_process() and also shrinks
    the data structure size a bit. The wakeup logic is placed in a new
    rcu_wait() macro.
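
    A sketch of the macro (close to the actual rcu_wait(); the matching wakeup
    is a plain wake_up_process() on the known kthread):

        #define rcu_wait(cond)                                          \
        do {                                                            \
                for (;;) {                                              \
                        set_current_state(TASK_INTERRUPTIBLE);          \
                        if (cond)                                       \
                                break;                                  \
                        schedule();                                     \
                }                                                       \
                __set_current_state(TASK_RUNNING);                      \
        } while (0)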

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

27 May, 2011

1 commit

  • (Note: this was reverted, and is now being re-applied in pieces, with
    this being the fifth and final piece. See below for the reason that
    it is now felt to be safe to re-apply this.)

    Commit d09b62d fixed grace-period synchronization, but left some smp_mb()
    invocations in rcu_process_callbacks() that are no longer needed, but
    sheer paranoia prevented them from being removed. This commit removes
    them and provides a proof of correctness in their absence. It also adds
    a memory barrier to rcu_report_qs_rsp() immediately before the update to
    rsp->completed in order to handle the theoretical possibility that the
    compiler or CPU might move massive quantities of code into a lock-based
    critical section. This also proves that the sheer paranoia was not
    entirely unjustified, at least from a theoretical point of view.

    In addition, the old dyntick-idle synchronization depended on the fact
    that grace periods were many milliseconds in duration, so that it could
    be assumed that no dyntick-idle CPU could reorder a memory reference
    across an entire grace period. Unfortunately for this design, the
    addition of expedited grace periods breaks this assumption, which has
    the unfortunate side-effect of requiring atomic operations in the
    functions that track dyntick-idle state for RCU. (There is some hope
    that the algorithms used in user-level RCU might be applied here, but
    some work is required to handle the NMIs that user-space applications
    can happily ignore. For the short term, better safe than sorry.)

    This proof assumes that neither compiler nor CPU will allow a lock
    acquisition and release to be reordered, as doing so can result in
    deadlock. The proof is as follows:

    1. A given CPU declares a quiescent state under the protection of
    its leaf rcu_node's lock.

    2. If there is more than one level of rcu_node hierarchy, the
    last CPU to declare a quiescent state will also acquire the
    ->lock of the next rcu_node up in the hierarchy, but only
    after releasing the lower level's lock. The acquisition of this
    lock clearly cannot occur prior to the acquisition of the leaf
    node's lock.

    3. Step 2 repeats until we reach the root rcu_node structure.
    Please note again that only one lock is held at a time through
    this process. The acquisition of the root rcu_node's ->lock
    must occur after the release of that of the leaf rcu_node.

    4. At this point, we set the ->completed field in the rcu_state
    structure in rcu_report_qs_rsp(). However, if the rcu_node
    hierarchy contains only one rcu_node, then in theory the code
    preceding the quiescent state could leak into the critical
    section. We therefore precede the update of ->completed with a
    memory barrier. All CPUs will therefore agree that any updates
    preceding any report of a quiescent state will have happened
    before the update of ->completed.

    5. Regardless of whether a new grace period is needed, rcu_start_gp()
    will propagate the new value of ->completed to all of the leaf
    rcu_node structures, under the protection of each rcu_node's ->lock.
    If a new grace period is needed immediately, this propagation
    will occur in the same critical section that ->completed was
    set in, but courtesy of the memory barrier in #4 above, is still
    seen to follow any pre-quiescent-state activity.

    6. When a given CPU invokes __rcu_process_gp_end(), it becomes
    aware of the end of the old grace period and therefore makes
    any RCU callbacks that were waiting on that grace period eligible
    for invocation.

    If this CPU is the same one that detected the end of the grace
    period, and if there is but a single rcu_node in the hierarchy,
    we will still be in the single critical section. In this case,
    the memory barrier in step #4 guarantees that all callbacks will
    be seen to execute after each CPU's quiescent state.

    On the other hand, if this is a different CPU, it will acquire
    the leaf rcu_node's ->lock, and will again be serialized after
    each CPU's quiescent state for the old grace period.

    On the strength of this proof, this commit therefore removes the memory
    barriers from rcu_process_callbacks() and adds one to rcu_report_qs_rsp().
    The effect is to reduce the number of memory barriers by one and to
    reduce the frequency of execution from about once per scheduling tick
    per CPU to once per grace period.

    This was reverted due to hangs found during testing by Yinghai Lu and
    Ingo Molnar. Frederic Weisbecker supplied Yinghai with tracing that
    located the underlying problem, and Frederic also provided the fix.

    The underlying problem was that the HARDIRQ_ENTER() macro from
    lib/locking-selftest.c invoked irq_enter(), which in turn invokes
    rcu_irq_enter(), but HARDIRQ_EXIT() invoked __irq_exit(), which
    does not invoke rcu_irq_exit(). This situation resulted in calls
    to rcu_irq_enter() that were not balanced by the required calls to
    rcu_irq_exit(). Therefore, after these locking selftests completed,
    RCU's dyntick-idle nesting count was a large number (for example,
    72), which caused RCU to conclude that the affected CPU was not in
    dyntick-idle mode when in fact it was.

    RCU would therefore incorrectly wait for this dyntick-idle CPU, resulting
    in hangs.

    In contrast, with Frederic's patch, which replaces the irq_enter()
    in HARDIRQ_ENTER() with an __irq_enter(), these tests don't ever call
    either rcu_irq_enter() or rcu_irq_exit(), which works because the CPU
    running the test is already marked as not being in dyntick-idle mode.
    This keeps RCU's rcu_irq_enter()/rcu_irq_exit() accounting balanced, so
    RCU has no problem working out which CPUs are in dyntick-idle mode and
    which are not.

    The reason that the imbalance was not noticed before the barrier patch
    was applied is that the old implementation of rcu_enter_nohz() ignored
    the nesting depth. This could still result in delays, but much shorter
    ones. Whenever there was a delay, RCU would IPI the CPU with the
    unbalanced nesting level, which would eventually result in rcu_enter_nohz()
    being called, which in turn would force RCU to see that the CPU was in
    dyntick-idle mode.

    The reason that very few people noticed the problem is that the mismatched
    irq_enter() vs. __irq_exit() occurred only when the kernel was built with
    CONFIG_DEBUG_LOCKING_API_SELFTESTS.

    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     

20 May, 2011

1 commit


08 May, 2011

1 commit

  • Avoid calling into the scheduler while holding core RCU locks. This
    allows rcu_read_unlock() to be called while holding the runqueue locks,
    but only as long as there was no chance of the RCU read-side critical
    section having been preempted. (Otherwise, if RCU priority boosting
    is enabled, rcu_read_unlock() might call into the scheduler in order to
    unboost itself, which might allow self-deadlock on the runqueue locks
    within the scheduler.)

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     

06 May, 2011

10 commits

  • The "preemptible" spelling is preferable. May as well fix it.

    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     
  • Using __rcu_read_lock() in place of rcu_read_lock() leaves any debug
    state as it really should be, namely with the lock still held.

    Signed-off-by: Lai Jiangshan
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Lai Jiangshan
     
  • The rcu_initiate_boost_trace() function mis-attributed refusals to
    initiate RCU priority boosting that were in fact due to its not yet
    being time to boost. This patch fixes the faulty comparison.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Add tracing to help debugging situations when RCU's kthreads are not
    running but are supposed to be.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     
  • Add RCU priority-boosting statistics: the total number of tasks boosted,
    the number boosted on behalf of normal and of expedited grace periods,
    and statistics on attempts to initiate boosting that failed for various
    reasons.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     
  • The scheduler has had some heartburn in the past when too many real-time
    kthreads were affinitied to the outgoing CPU. So, this commit lightens
    the load by forcing the per-rcu_node and the boost kthreads off of the
    outgoing CPU. Note that RCU's per-CPU kthread remains on the outgoing
    CPU until the bitter end, as it must in order to preserve correctness.

    Also avoid disabling hardirqs across calls to set_cpus_allowed_ptr(),
    given that this function can block.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Add priority boosting for TREE_PREEMPT_RCU, similar to that for
    TINY_PREEMPT_RCU. This is enabled by the default-off RCU_BOOST
    kernel parameter. The priority to which to boost preempted
    RCU readers is controlled by the RCU_BOOST_PRIO kernel parameter
    (defaulting to real-time priority 1) and the time to wait before
    boosting the readers who are blocking a given grace period is
    controlled by the RCU_BOOST_DELAY kernel parameter (defaulting to
    500 milliseconds).

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     
  • If RCU priority boosting is to be meaningful, callback invocation must
    be boosted in addition to preempted RCU readers. Otherwise, in the presence
    of CPU-bound real-time threads, the grace period ends, but the callbacks don't
    get invoked. If the callbacks don't get invoked, the associated memory
    doesn't get freed, so the system is still subject to OOM.

    But it is not reasonable to priority-boost RCU_SOFTIRQ, so this commit
    moves the callback invocations to a kthread, which can be boosted easily.

    Also add comments and properly synchronize all accesses to
    rcu_cpu_kthread_task, as suggested by Lai Jiangshan.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     
  • Combine the current TREE_PREEMPT_RCU ->blocked_tasks[] lists in the
    rcu_node structure into a single ->blkd_tasks list with ->gp_tasks
    and ->exp_tasks tail pointers. This is in preparation for RCU priority
    boosting, which will add a third dimension to the combinatorial explosion
    in the ->blocked_tasks[] case, but simply a third pointer in the new
    ->blkd_tasks case.
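
    A sketch of the resulting rcu_node fields (comments paraphrase the roles
    described above):

        struct list_head blkd_tasks;  /* tasks blocked in RCU read-side critical
                                         sections that ran on this rcu_node */
        struct list_head *gp_tasks;   /* first task blocking the current normal
                                         grace period, or NULL if none */
        struct list_head *exp_tasks;  /* first task blocking the current expedited
                                         grace period, or NULL if none */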

    Also update documentation to reflect the ->blocked_tasks[] merge.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     
  • Commit d09b62d fixed grace-period synchronization, but left some smp_mb()
    invocations in rcu_process_callbacks() that are no longer needed, but
    sheer paranoia prevented them from being removed. This commit removes
    them and provides a proof of correctness in their absence. It also adds
    a memory barrier to rcu_report_qs_rsp() immediately before the update to
    rsp->completed in order to handle the theoretical possibility that the
    compiler or CPU might move massive quantities of code into a lock-based
    critical section. This also proves that the sheer paranoia was not
    entirely unjustified, at least from a theoretical point of view.

    In addition, the old dyntick-idle synchronization depended on the fact
    that grace periods were many milliseconds in duration, so that it could
    be assumed that no dyntick-idle CPU could reorder a memory reference
    across an entire grace period. Unfortunately for this design, the
    addition of expedited grace periods breaks this assumption, which has
    the unfortunate side-effect of requiring atomic operations in the
    functions that track dyntick-idle state for RCU. (There is some hope
    that the algorithms used in user-level RCU might be applied here, but
    some work is required to handle the NMIs that user-space applications
    can happily ignore. For the short term, better safe than sorry.)

    This proof assumes that neither compiler nor CPU will allow a lock
    acquisition and release to be reordered, as doing so can result in
    deadlock. The proof is as follows:

    1. A given CPU declares a quiescent state under the protection of
    its leaf rcu_node's lock.

    2. If there is more than one level of rcu_node hierarchy, the
    last CPU to declare a quiescent state will also acquire the
    ->lock of the next rcu_node up in the hierarchy, but only
    after releasing the lower level's lock. The acquisition of this
    lock clearly cannot occur prior to the acquisition of the leaf
    node's lock.

    3. Step 2 repeats until we reach the root rcu_node structure.
    Please note again that only one lock is held at a time through
    this process. The acquisition of the root rcu_node's ->lock
    must occur after the release of that of the leaf rcu_node.

    4. At this point, we set the ->completed field in the rcu_state
    structure in rcu_report_qs_rsp(). However, if the rcu_node
    hierarchy contains only one rcu_node, then in theory the code
    preceding the quiescent state could leak into the critical
    section. We therefore precede the update of ->completed with a
    memory barrier. All CPUs will therefore agree that any updates
    preceding any report of a quiescent state will have happened
    before the update of ->completed.

    5. Regardless of whether a new grace period is needed, rcu_start_gp()
    will propagate the new value of ->completed to all of the leaf
    rcu_node structures, under the protection of each rcu_node's ->lock.
    If a new grace period is needed immediately, this propagation
    will occur in the same critical section that ->completed was
    set in, but courtesy of the memory barrier in #4 above, is still
    seen to follow any pre-quiescent-state activity.

    6. When a given CPU invokes __rcu_process_gp_end(), it becomes
    aware of the end of the old grace period and therefore makes
    any RCU callbacks that were waiting on that grace period eligible
    for invocation.

    If this CPU is the same one that detected the end of the grace
    period, and if there is but a single rcu_node in the hierarchy,
    we will still be in the single critical section. In this case,
    the memory barrier in step #4 guarantees that all callbacks will
    be seen to execute after each CPU's quiescent state.

    On the other hand, if this is a different CPU, it will acquire
    the leaf rcu_node's ->lock, and will again be serialized after
    each CPU's quiescent state for the old grace period.

    On the strength of this proof, this commit therefore removes the memory
    barriers from rcu_process_callbacks() and adds one to rcu_report_qs_rsp().
    The effect is to reduce the number of memory barriers by one and to
    reduce the frequency of execution from about once per scheduling tick
    per CPU to once per grace period.
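
    A sketch of the change to rcu_report_qs_rsp() (simplified; the barrier
    corresponds to step 4 of the proof):

        static void rcu_report_qs_rsp(struct rcu_state *rsp, unsigned long flags)
                __releases(rcu_get_root(rsp)->lock)
        {
                /*
                 * Make everything preceding the quiescent-state reports visible
                 * before any CPU can observe the new ->completed value, even if
                 * the rcu_node hierarchy has only a single level.
                 */
                smp_mb();
                rsp->completed = rsp->gpnum;  /* declare the grace period ended */

                rcu_start_gp(rsp, flags);     /* releases the root rcu_node ->lock */
        }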

    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney