13 Mar, 2015

2 commits

  • Because RCU grace-period initialization need no longer exclude
    CPU-hotplug operations, this commit eliminates the ->onoff_mutex and
    its uses.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Races between CPU hotplug and grace periods can be difficult to resolve,
    so the ->onoff_mutex is used to exclude the two events. Unfortunately,
    this means that it is impossible for an outgoing CPU to perform the
    last bits of its offlining from its last pass through the idle loop,
    because sleeplocks cannot be acquired in that context.

    This commit avoids these problems by buffering online and offline events
    in a new ->qsmaskinitnext field in the leaf rcu_node structures. When a
    grace period starts, the events accumulated in this mask are applied to
    the ->qsmaskinit field, and, if needed, up the rcu_node tree. The special
    case of all CPUs corresponding to a given leaf rcu_node structure being
    offline while there are still elements in that structure's ->blkd_tasks
    list is handled using a new ->wait_blkd_tasks field. In this case,
    propagating the offline bits up the tree is deferred until the beginning
    of the grace period after all of the tasks have exited their RCU read-side
    critical sections and removed themselves from the list, at which point
    the ->wait_blkd_tasks flag is cleared. If one of that leaf rcu_node
    structure's CPUs comes back online before the list empties, then the
    ->wait_blkd_tasks flag is simply cleared.

    This of course means that RCU's notion of which CPUs are offline can be
    out of date. This is OK because RCU need only wait on CPUs that were
    online at the time that the grace period started. In addition, RCU's
    force-quiescent-state actions will handle the case where a CPU goes
    offline after the grace period starts.
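
    As a rough, standalone illustration of the buffering described above (not
    the actual kernel code; the structure layout, helper names, and omission of
    locking are all simplifications), a grace-period start might fold the
    buffered ->qsmaskinitnext bits into ->qsmaskinit roughly as follows:

        #include <stdbool.h>
        #include <stdio.h>

        /* Simplified stand-in for a leaf rcu_node structure. */
        struct leaf_node {
            unsigned long qsmaskinit;     /* CPUs the next grace period must track. */
            unsigned long qsmaskinitnext; /* Online/offline events since last GP start. */
            bool wait_blkd_tasks;         /* Defer offline propagation until readers exit? */
            bool blkd_tasks_nonempty;     /* Stand-in for a non-empty ->blkd_tasks list. */
        };

        /* CPU-hotplug model: buffer the event, leave ->qsmaskinit alone.
         * (An online event would set the corresponding bit the same way.) */
        static void cpu_offline_event(struct leaf_node *lnp, int cpu)
        {
            lnp->qsmaskinitnext &= ~(1UL << cpu);
        }

        /*
         * Grace-period-start model: apply the buffered events.  Returns true
         * when the newly emptied leaf should be propagated up the rcu_node
         * tree right away.
         */
        static bool gp_init_apply(struct leaf_node *lnp)
        {
            bool propagate = false;

            if (!lnp->wait_blkd_tasks && lnp->qsmaskinit && !lnp->qsmaskinitnext) {
                if (lnp->blkd_tasks_nonempty)
                    lnp->wait_blkd_tasks = true;  /* Defer: blocked readers remain. */
                else
                    propagate = true;             /* No readers: propagate now. */
            } else if (lnp->wait_blkd_tasks &&
                       (lnp->qsmaskinitnext || !lnp->blkd_tasks_nonempty)) {
                lnp->wait_blkd_tasks = false;     /* CPU returned or list drained. */
                if (!lnp->qsmaskinitnext)
                    propagate = true;             /* Drained while still all-offline. */
            }
            lnp->qsmaskinit = lnp->qsmaskinitnext; /* Apply the buffered events. */
            return propagate;
        }

        int main(void)
        {
            struct leaf_node lnp = { .qsmaskinit = 0x3, .qsmaskinitnext = 0x3 };

            lnp.blkd_tasks_nonempty = true;
            cpu_offline_event(&lnp, 0);
            cpu_offline_event(&lnp, 1);
            printf("propagate now? %d\n", gp_init_apply(&lnp)); /* 0: deferred */
            lnp.blkd_tasks_nonempty = false;                    /* readers exited */
            printf("propagate now? %d\n", gp_init_apply(&lnp)); /* 1: deferral ends */
            return 0;
        }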

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     

16 Jan, 2015

2 commits

  • …rcu.2015.01.06a', 'stall.2015.01.16a' and 'torture.2015.01.11a' into HEAD

    doc.2015.01.07a: Documentation updates.
    fixes.2015.01.15a: Miscellaneous fixes.
    preempt.2015.01.06a: Changes to handling of lists of preempted tasks.
    srcu.2015.01.06a: SRCU updates.
    stall.2015.01.16a: RCU CPU stall-warning updates and fixes.
    torture.2015.01.11a: RCU torture-test updates and fixes.

    Paul E. McKenney
     
  • Although cond_resched_rcu_qs() only applies to TASKS_RCU, it is used
    in places where it would be useful for it to apply to the normal RCU
    flavors, rcu_preempt, rcu_sched, and rcu_bh. This is especially the
    case for workloads that aggressively overload the system, particularly
    those that generate large numbers of RCU updates on systems running
    NO_HZ_FULL CPUs. This commit therefore communicates quiescent states
    from cond_resched_rcu_qs() to the normal RCU flavors.

    Note that it is unfortunately necessary to leave the old ->passed_quiesce
    mechanism in place to allow quiescent states that apply to only one
    flavor to be recorded. (Yes, we could decrement ->rcu_qs_ctr_snap in
    that case, but that is not so good for debugging of RCU internals.)
    In addition, if one of the RCU flavors' grace periods has stalled, this
    will invoke rcu_momentary_dyntick_idle(), resulting in a heavy-weight
    quiescent state visible from other CPUs.
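
    A minimal standalone sketch of the counter handshake described above (the
    names rcu_qs_ctr and rcu_qs_ctr_snap follow the commit text, but the code
    below is a simplified model, not the kernel implementation):

        #include <stdbool.h>
        #include <stdio.h>

        #define NR_CPUS 4

        /* Per-CPU counter bumped by cond_resched_rcu_qs() (simplified model). */
        static unsigned long rcu_qs_ctr[NR_CPUS];

        /* Per-CPU snapshot taken when the grace period starts. */
        static unsigned long rcu_qs_ctr_snap[NR_CPUS];

        /* Model of cond_resched_rcu_qs(): record a quiescent state cheaply. */
        static void cond_resched_rcu_qs(int cpu)
        {
            rcu_qs_ctr[cpu]++;
        }

        /* Called at grace-period start for each CPU the GP must wait on. */
        static void snapshot_cpu(int cpu)
        {
            rcu_qs_ctr_snap[cpu] = rcu_qs_ctr[cpu];
        }

        /*
         * Model of the force-quiescent-state check: if the counter moved since
         * the snapshot, the CPU passed through a quiescent state and need not
         * be waited on any longer.
         */
        static bool cpu_passed_qs(int cpu)
        {
            return rcu_qs_ctr[cpu] != rcu_qs_ctr_snap[cpu];
        }

        int main(void)
        {
            snapshot_cpu(1);
            printf("CPU 1 QS yet? %d\n", cpu_passed_qs(1)); /* 0 */
            cond_resched_rcu_qs(1);  /* e.g., from a long in-kernel loop */
            printf("CPU 1 QS yet? %d\n", cpu_passed_qs(1)); /* 1 */
            return 0;
        }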

    Reported-by: Sasha Levin
    Reported-by: Dave Jones
    Signed-off-by: Paul E. McKenney
    [ paulmck: Merge commit from Sasha Levin fixing a bug where __this_cpu()
    was used in preemptible code. ]

    Paul E. McKenney
     

11 Jan, 2015

2 commits


07 Jan, 2015

8 commits

  • Subtle race conditions can result if a CPU stays in dyntick-idle mode
    long enough for the ->gpnum and ->completed fields to wrap. For
    example, consider the following sequence of events:

    o CPU 1 encounters a quiescent state while waiting for grace period
    5 to complete, but then enters dyntick-idle mode.

    o While CPU 1 is in dyntick-idle mode, the grace-period counters
    wrap around so that the grace period number is now 4.

    o Just as CPU 1 exits dyntick-idle mode, grace period 4 completes
    and grace period 5 begins.

    o The quiescent state that CPU 1 passed through during the old
    grace period 5 looks like it applies to the new grace period
    5. Therefore, the new grace period 5 completes without CPU 1
    having passed through a quiescent state.

    This could clearly be a fatal surprise to any long-running RCU read-side
    critical section that happened to be running on CPU 1 at the time. At one
    time, this was not a problem, given that it takes significant time for
    the grace-period counters to overflow even on 32-bit systems. However,
    with the advent of NO_HZ_FULL and SMP embedded systems, arbitrarily long
    idle periods are now becoming quite feasible. It is therefore time to
    close this race.

    This commit therefore avoids this race condition by having the
    quiescent-state forcing code detect when a CPU is falling too far
    behind, and setting a new rcu_data field ->gpwrap when this happens.
    Whenever this new ->gpwrap field is set, the CPU's ->gpnum and ->completed
    fields are known to be untrustworthy, and can be ignored, along with
    any associated quiescent states.
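
    The following standalone sketch models that detection step (the
    ULONG_CMP_LT() idiom mirrors the kernel's wraparound-safe comparison, but
    the threshold and structure below are illustrative simplifications):

        #include <stdbool.h>
        #include <stdio.h>

        /* Wraparound-safe "a < b" for unsigned counters. */
        #define ULONG_CMP_LT(a, b) ((unsigned long)((a) - (b)) > (~0UL / 2))

        struct cpu_state {
            unsigned long gpnum;  /* Last grace-period number this CPU noticed. */
            bool gpwrap;          /* Counters may have wrapped past this CPU. */
        };

        /*
         * Model of the quiescent-state forcing check: if the CPU has not noticed
         * a new grace period for a large fraction of the counter space, its
         * ->gpnum and ->completed values can no longer be trusted, so flag it.
         */
        static void check_gp_wrap(struct cpu_state *cs, unsigned long cur_gpnum)
        {
            if (ULONG_CMP_LT(cs->gpnum + (~0UL / 4), cur_gpnum))
                cs->gpwrap = true;
        }

        int main(void)
        {
            struct cpu_state cs = { .gpnum = 5 };

            check_gp_wrap(&cs, 6);                  /* Slightly behind: fine. */
            printf("gpwrap = %d\n", cs.gpwrap);     /* 0 */
            check_gp_wrap(&cs, 5 + (~0UL / 4) + 2); /* Way behind: untrustworthy. */
            printf("gpwrap = %d\n", cs.gpwrap);     /* 1 */
            return 0;
        }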

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • The current RCU CPU stall warning code will print "Stall ended before
    state dump start" any time that the stall-warning code is triggered on
    a CPU that has already reported a quiescent state for the current grace
    period and if all quiescent states have been reported for the current
    grace period. However, a true stall can result in these symptoms, for
    example, by preventing RCU's grace-period kthreads from ever running.

    This commit therefore checks for this condition, reporting the end of
    the stall only if one of the grace-period counters has actually advanced.
    Otherwise, it reports the last time that the grace-period kthread made
    meaningful progress. (In normal situations, the grace-period kthread
    should make meaningful progress at least every jiffies_till_next_fqs
    jiffies.)

    Reported-by: Miroslav Benes
    Signed-off-by: Paul E. McKenney
    Tested-by: Miroslav Benes

    Paul E. McKenney
     
  • One way that an RCU CPU stall warning can happen is if the grace-period
    kthread is not allowed to execute. One proxy for this kthread's
    forward progress is the number of force-quiescent-state (fqs) scans.
    This commit therefore adds the number of fqs scans to the RCU CPU stall
    warning printouts when CONFIG_RCU_CPU_STALL_INFO=y.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • The patch dfeb9765ce3c ("Allow post-unlock reference for rt_mutex")
    ensured that rcu-boost remained safe even when the rt_mutex had a
    post-unlock reference.

    But an rt_mutex allowing post-unlock references is definitely a bug, and
    it was fixed by commit 27e35715df54 ("rtmutex: Plug slow unlock race").
    That fix made the previous patch (dfeb9765ce3c) useless.

    Even worse, the priority inversion introduced by the previous patch
    still exists:

    rcu_read_unlock_special() {
        rt_mutex_unlock(&rnp->boost_mtx);
        /* Priority inversion: the current task has just been deboosted
         * and may immediately be preempted as a low-priority task.  It
         * could then wait a long time before being rescheduled, and the
         * rcu-booster also waits on this low-priority task and sleeps.
         * This priority inversion keeps the rcu-booster from working
         * as expected.
         */
        complete(&rnp->boost_completion);
    }

    This commit therefore simply reverts that patch to avoid the problem.

    Cc: Thomas Gleixner
    Cc: Steven Rostedt
    Cc: Peter Zijlstra
    Signed-off-by: Lai Jiangshan
    Signed-off-by: Paul E. McKenney

    Lai Jiangshan
     
  • When the last CPU associated with a given leaf rcu_node structure
    goes offline, something must be done about the tasks queued on that
    rcu_node structure. Each of these tasks has been preempted on one of
    the leaf rcu_node structure's CPUs while in an RCU read-side critical
    section that it has not yet exited. Handling these tasks is the job of
    rcu_preempt_offline_tasks(), which migrates them from the leaf rcu_node
    structure to the root rcu_node structure.

    Unfortunately, this migration has to be done one task at a time because
    each task's allegiance must be shifted from the original leaf rcu_node to
    the root, so that future attempts to deal with these tasks will acquire
    the root rcu_node structure's ->lock rather than that of the leaf.
    Worse yet, this migration must be done with interrupts disabled, which
    is not so good for realtime response, especially given that there is
    no bound on the number of tasks on a given rcu_node structure's list.
    (OK, OK, there is a bound, it is just that it is unreasonably large,
    especially on 64-bit systems.) This was not considered a problem back
    when rcu_preempt_offline_tasks() was first written because realtime
    systems were assumed not to do CPU-hotplug operations while real-time
    applications were running. This assumption has proved of dubious validity
    given that people are starting to run multiple realtime applications
    on a single SMP system and that it is common practice to offline then
    online a CPU before starting its real-time application in order to clear
    extraneous processing off of that CPU. So we now need CPU hotplug
    operations to avoid undue latencies.

    This commit therefore avoids migrating these tasks, instead letting
    them be dequeued one by one from the original leaf rcu_node structure
    by rcu_read_unlock_special(). This means that the clearing of bits
    from the upper-level rcu_node structures must be deferred until the
    last such task has been dequeued, because otherwise subsequent grace
    periods won't wait on them. This commit has the beneficial side effect
    of simplifying the CPU-hotplug code for TREE_PREEMPT_RCU, especially in
    CONFIG_RCU_BOOST builds.
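
    A standalone sketch of the deferred cleanup described above (list handling
    and locking are omitted; the names are simplified stand-ins for the kernel's
    rcu_read_unlock_special() and rcu_cleanup_dead_rnp()):

        #include <stdio.h>

        /* Simplified leaf rcu_node: online mask plus count of blocked readers. */
        struct leaf_node {
            unsigned long qsmaskinit;  /* Bits for CPUs still counted as online. */
            int nr_blkd_tasks;         /* Stand-in for the ->blkd_tasks list. */
        };

        /* Stand-in for rcu_cleanup_dead_rnp(): clear bits in the ancestors. */
        static void cleanup_dead_rnp(struct leaf_node *lnp)
        {
            printf("last blocked reader gone, propagating offline leaf upward\n");
        }

        /*
         * Model of the relevant part of rcu_read_unlock_special(): the task
         * removes itself from the leaf's list, and if it was the last such task
         * on a leaf with no online CPUs, the deferred cleanup runs here rather
         * than in the CPU-offline path.
         */
        static void read_unlock_special(struct leaf_node *lnp)
        {
            lnp->nr_blkd_tasks--;                       /* Dequeue this task. */
            if (!lnp->nr_blkd_tasks && !lnp->qsmaskinit)
                cleanup_dead_rnp(lnp);                  /* Deferred propagation. */
        }

        int main(void)
        {
            struct leaf_node lnp = { .qsmaskinit = 0, .nr_blkd_tasks = 2 };

            read_unlock_special(&lnp);  /* One reader left: nothing happens. */
            read_unlock_special(&lnp);  /* Last reader: cleanup fires. */
            return 0;
        }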

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • This commit abstracts rcu_cleanup_dead_rnp() from rcu_cleanup_dead_cpu()
    in preparation for the rework of RCU priority boosting. This new function
    will be invoked from rcu_read_unlock_special() in the reworked scheme,
    which is why rcu_cleanup_dead_rnp() assumes that the leaf rcu_node
    structure's ->qsmaskinit field has already been updated.
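
    As a rough model of what such a function has to do (a simplified standalone
    sketch, not the kernel code; locking and quiescent-state reporting are
    omitted), the upward propagation might look like this:

        #include <stdio.h>

        /* Simplified rcu_node: a mask of children/CPUs plus a parent pointer. */
        struct rnode {
            unsigned long qsmaskinit;  /* Which children this node must track. */
            unsigned long grpmask;     /* This node's bit in its parent's mask. */
            struct rnode *parent;
        };

        /*
         * Model of rcu_cleanup_dead_rnp(): the leaf's ->qsmaskinit is assumed to
         * have already been zeroed; walk up the tree clearing this subtree's bit
         * in each ancestor, stopping at the first ancestor that still has other
         * non-empty children.
         */
        static void cleanup_dead_rnp(struct rnode *leaf)
        {
            struct rnode *rnp = leaf;
            unsigned long mask = leaf->grpmask;

            while (rnp->parent) {
                rnp = rnp->parent;
                rnp->qsmaskinit &= ~mask;
                if (rnp->qsmaskinit)
                    break;               /* Other children remain: stop here. */
                mask = rnp->grpmask;     /* This node is empty too: keep climbing. */
            }
        }

        int main(void)
        {
            struct rnode root = { .qsmaskinit = 0x3 };
            struct rnode leaf0 = { .qsmaskinit = 0, .grpmask = 0x1, .parent = &root };

            cleanup_dead_rnp(&leaf0);
            printf("root qsmaskinit = %#lx\n", root.qsmaskinit); /* 0x2 remains */
            return 0;
        }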

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Commit 48a7639ce80c ("rcu: Make callers awaken grace-period kthread")
    removed the irq_work_queue() call, so TREE_RCU no longer needs irq
    work. This commit therefore updates RCU's Kconfig and code accordingly.

    Signed-off-by: Lai Jiangshan
    Signed-off-by: Paul E. McKenney

    Lai Jiangshan
     
  • The rcu_barrier() no-callbacks check for no-CBs CPUs has race conditions.
    It checks a given CPU's lists of callbacks, and if all three no-CBs lists
    are empty, ignores that CPU. However, these three lists could potentially
    be empty even when callbacks are present if the check executed just as
    the callbacks were being moved from one list to another. It turns out
    that recent versions of rcutorture can spot this race.

    This commit plugs this hole by consolidating the per-list counts of
    no-CBs callbacks into a single count, which is incremented before
    the corresponding callback is posted and decremented after it is
    invoked. Then
    rcu_barrier() checks this single count to reliably determine whether
    the corresponding CPU has no-CBs callbacks.
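
    A standalone sketch of the consolidated count (in the kernel this count is
    manipulated atomically and per CPU; the plain long below is a simplification):

        #include <stdio.h>

        /* Simplified per-CPU no-CBs state: one count covering all callback lists. */
        struct nocb_state {
            long nocb_count;  /* Incremented at enqueue, decremented after invocation. */
        };

        /* Model of posting a no-CBs callback. */
        static void nocb_enqueue(struct nocb_state *s)
        {
            s->nocb_count++;   /* Count first, then the callback becomes visible. */
        }

        /* Model of the rcuo kthread invoking one callback. */
        static void nocb_invoke(struct nocb_state *s)
        {
            s->nocb_count--;   /* Only after the callback has actually run. */
        }

        /*
         * Model of the rcu_barrier() check: a single count cannot be caught
         * "between lists", so zero here reliably means no pending callbacks.
         */
        static int nocb_cpu_needs_barrier(struct nocb_state *s)
        {
            return s->nocb_count != 0;
        }

        int main(void)
        {
            struct nocb_state s = { 0 };

            nocb_enqueue(&s);
            printf("needs barrier callback? %d\n", nocb_cpu_needs_barrier(&s)); /* 1 */
            nocb_invoke(&s);
            printf("needs barrier callback? %d\n", nocb_cpu_needs_barrier(&s)); /* 0 */
            return 0;
        }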

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     

14 Nov, 2014

1 commit


04 Nov, 2014

5 commits

  • The "cpu" argument to rcu_cleanup_after_idle() is always the current
    CPU, so drop it. This moves the smp_processor_id() from the caller to
    rcu_cleanup_after_idle(), saving argument-passing overhead. Again,
    the anticipated cross-CPU uses of these functions have been replaced
    by NO_HZ_FULL.

    Signed-off-by: Paul E. McKenney
    Reviewed-by: Pranith Kumar

    Paul E. McKenney
     
  • The "cpu" argument to rcu_prepare_for_idle() is always the current
    CPU, so drop it. This in turn allows two of the uses of "cpu" in
    this function to be replaced with a this_cpu_ptr() and the third by
    smp_processor_id(), replacing that of the call to rcu_prepare_for_idle().
    Again, the anticipated cross-CPU uses of these functions have been replaced
    by NO_HZ_FULL.

    Signed-off-by: Paul E. McKenney
    Reviewed-by: Pranith Kumar

    Paul E. McKenney
     
  • The "cpu" argument to rcu_note_context_switch() is always the current
    CPU, so drop it. This in turn allows the "cpu" argument to
    rcu_preempt_note_context_switch() to be removed, which allows the sole
    use of "cpu" in both functions to be replaced with a this_cpu_ptr().
    Again, the anticipated cross-CPU uses of these functions have been
    replaced by NO_HZ_FULL.

    Signed-off-by: Paul E. McKenney
    Reviewed-by: Pranith Kumar

    Paul E. McKenney
     
  • Because rcu_preempt_check_callbacks()'s argument is guaranteed to
    always be the current CPU, drop the argument and replace per_cpu()
    with __this_cpu_read().

    Signed-off-by: Paul E. McKenney
    Reviewed-by: Pranith Kumar

    Paul E. McKenney
     
  • For some functions in kernel/rcu/tree*, the rdtp parameter is always
    this_cpu_ptr(rdtp). Remove the parameter in those cases and calculate
    the pointer within the function.

    This has the advantage of making it obvious that the addresses are all
    per-CPU offsets, and it will thus enable the use of this_cpu operations
    in the future.
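
    The shape of the change, sketched as standalone C (fake_this_cpu_ptr() and
    fake_smp_processor_id() are hypothetical stand-ins for the kernel's per-CPU
    accessors, and the dynticks structure is reduced to a single field):

        #include <stdio.h>

        #define NR_CPUS 4

        struct dynticks { long nesting; };

        static struct dynticks per_cpu_dynticks[NR_CPUS];

        /* Stand-ins for smp_processor_id() and this_cpu_ptr() in this model. */
        static int fake_smp_processor_id(void) { return 0; }
        #define fake_this_cpu_ptr(arr) (&(arr)[fake_smp_processor_id()])

        /* Old shape: caller passes a pointer that is always this CPU's. */
        static void enter_idle_old(struct dynticks *rdtp)
        {
            rdtp->nesting--;
        }

        /* New shape: compute the per-CPU pointer inside the function itself. */
        static void enter_idle_new(void)
        {
            struct dynticks *rdtp = fake_this_cpu_ptr(per_cpu_dynticks);

            rdtp->nesting--;
        }

        int main(void)
        {
            enter_idle_old(fake_this_cpu_ptr(per_cpu_dynticks)); /* old call site */
            enter_idle_new();                                    /* new call site */
            printf("nesting = %ld\n", per_cpu_dynticks[0].nesting);
            return 0;
        }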

    Signed-off-by: Christoph Lameter
    [ paulmck: Forward-ported to rcu/dev, whitespace adjustment. ]
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Pranith Kumar

    Christoph Lameter
     

30 Oct, 2014

1 commit


29 Oct, 2014

1 commit

  • Commit 35ce7f29a44a (rcu: Create rcuo kthreads only for onlined CPUs)
    avoids creating rcuo kthreads for CPUs that never come online. This
    fixes a bug in many instances of firmware: Instead of lying about their
    age, these systems lie about the number of CPUs that they have.
    Before commit 35ce7f29a44a, this could result in huge numbers of useless
    rcuo kthreads being created.

    It appears that experience indicates that I should have told the
    people suffering from this problem to fix their broken firmware, but
    I instead produced what turned out to be a partial fix. The missing
    piece supplied by this commit makes sure that rcu_barrier() knows not to
    post callbacks for no-CBs CPUs that have not yet come online, because
    otherwise rcu_barrier() will hang on systems having firmware that lies
    about the number of CPUs.

    It is tempting to simply have rcu_barrier() refuse to post a callback on
    any no-CBs CPU that does not have an rcuo kthread. This unfortunately
    does not work because rcu_barrier() is required to wait for all pending
    callbacks. It is therefore required to wait even for those callbacks
    that cannot possibly be invoked. Even if doing so hangs the system.

    Given that posting a callback to a no-CBs CPU that does not yet have an
    rcuo kthread can hang rcu_barrier(), it is tempting to report an error
    in this case. Unfortunately, this will result in false positives at
    boot time, when it is perfectly legal to post callbacks to the boot CPU
    before the scheduler has started, in other words, before it is legal
    to invoke rcu_barrier().

    So this commit instead has rcu_barrier() avoid posting callbacks to
    CPUs having neither rcuo kthread nor pending callbacks, and has it
    complain bitterly if it finds CPUs having no rcuo kthread but some
    pending callbacks. And when rcu_barrier() does find CPUs having no rcuo
    kthread but pending callbacks, as noted earlier, it has no choice but
    to hang indefinitely.
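
    A standalone sketch of the resulting rcu_barrier() decision for each no-CBs
    CPU (the field names and printouts below are illustrative, not the kernel's):

        #include <stdbool.h>
        #include <stdio.h>

        #define NR_CPUS 4

        /* Simplified per-CPU view of what rcu_barrier() needs to know. */
        struct cpu_nocb {
            bool has_rcuo_kthread;  /* Was an rcuo kthread ever spawned? */
            long nocb_count;        /* Pending no-CBs callbacks. */
        };

        static struct cpu_nocb cpus[NR_CPUS];

        /* Model of the rcu_barrier() decision for one no-CBs CPU. */
        static void barrier_check_cpu(int cpu)
        {
            struct cpu_nocb *c = &cpus[cpu];

            if (!c->has_rcuo_kthread && !c->nocb_count) {
                printf("cpu %d: never online, no callbacks: skip\n", cpu);
            } else if (!c->has_rcuo_kthread && c->nocb_count) {
                /* Nothing can ever invoke these, so waiting will hang. */
                printf("cpu %d: callbacks but no rcuo kthread: complain, then wait\n",
                       cpu);
            } else {
                printf("cpu %d: post a barrier callback and wait for it\n", cpu);
            }
        }

        int main(void)
        {
            cpus[0].has_rcuo_kthread = true;
            cpus[0].nocb_count = 2;
            cpus[3].nocb_count = 1;     /* CPU advertised by firmware but never online. */
            for (int cpu = 0; cpu < NR_CPUS; cpu++)
                barrier_check_cpu(cpu);
            return 0;
        }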

    Reported-by: Yanko Kaneti
    Reported-by: Jay Vosburgh
    Reported-by: Meelis Roos
    Reported-by: Eric B Munson
    Signed-off-by: Paul E. McKenney
    Tested-by: Eric B Munson
    Tested-by: Jay Vosburgh
    Tested-by: Yanko Kaneti
    Tested-by: Kevin Fenzi
    Tested-by: Meelis Roos

    Paul E. McKenney
     

17 Sep, 2014

4 commits


08 Sep, 2014

2 commits

  • Currently TASKS_RCU would ignore a CPU running a task in nohz_full=
    usermode execution. There would be neither a context switch nor a
    scheduling-clock interrupt to tell TASKS_RCU that the task in question
    had passed through a quiescent state. The grace period would therefore
    extend indefinitely. This commit therefore makes RCU's dyntick-idle
    subsystem record the task_struct structure of the task that is running
    in dyntick-idle mode on each CPU. The TASKS_RCU grace period can
    then access this information and record a quiescent state on
    behalf of any CPU running in dyntick-idle usermode.
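
    A standalone model of the bookkeeping described above (the per-CPU pointer
    and function names are simplified stand-ins for the kernel's dyntick-idle
    entry/exit hooks and the TASKS_RCU scan):

        #include <stdio.h>

        #define NR_CPUS 4

        struct task_struct { const char *comm; };

        /* Per-CPU record of the task running in dyntick-idle usermode, if any. */
        static struct task_struct *dyntick_idle_task[NR_CPUS];

        /* Model of entering nohz_full usermode execution on a CPU. */
        static void enter_user_dynticks(int cpu, struct task_struct *t)
        {
            dyntick_idle_task[cpu] = t;
        }

        static void exit_user_dynticks(int cpu)
        {
            dyntick_idle_task[cpu] = NULL;
        }

        /*
         * Model of the TASKS_RCU grace-period scan: a CPU running a task in
         * dyntick-idle usermode counts as a quiescent state for that task,
         * even though it will never context-switch or take a tick.
         */
        static void tasks_rcu_scan(void)
        {
            for (int cpu = 0; cpu < NR_CPUS; cpu++) {
                if (dyntick_idle_task[cpu])
                    printf("cpu %d: QS on behalf of task %s\n",
                           cpu, dyntick_idle_task[cpu]->comm);
            }
        }

        int main(void)
        {
            struct task_struct t = { .comm = "user-loop" };

            enter_user_dynticks(2, &t);
            tasks_rcu_scan();
            exit_user_dynticks(2);
            return 0;
        }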

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Commit 96d3fd0d315a9 (rcu: Break call_rcu() deadlock involving scheduler
    and perf) covered the case where __call_rcu_nocb_enqueue() needs to wake
    the rcuo kthread due to the queue being initially empty, but did not
    do anything for the case where the queue was overflowing. This commit
    therefore also defers wakeup for the overflow case.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     

28 Aug, 2014

1 commit

  • The nocb callbacks generated before the nocb kthreads are spawned are
    enqueued in the nocb queue for later processing. Commit fbce7497ee5af ("rcu:
    Parallelize and economize NOCB kthread wakeups") introduced nocb leader kthreads
    which checked the nocb_leader_wake flag to see if there were any such pending
    callbacks. A case was reported in which newly spawned leader kthreads were not
    processing the pending callbacks as this flag was not set, which led to a boot
    hang.

    This commit ensures that the newly spawned nocb kthreads process the
    pending callbacks by allowing the kthreads to run immediately after spawning
    instead of waiting. This is done by inverting the logic of nocb_leader_wake
    tests to nocb_leader_sleep which allows us to use the default initialization of
    this flag to 0 to let the kthreads run.
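
    A standalone sketch of why the inverted flag's zero default matters (the
    structure and loop below are a simplified model of the leader kthread, not
    the kernel code):

        #include <stdbool.h>
        #include <stdio.h>

        /*
         * Simplified leader state: with the inverted flag, zero-initialization
         * means "do not sleep", so a freshly spawned leader immediately checks
         * for callbacks that were enqueued before it existed.
         */
        struct nocb_leader {
            bool nocb_leader_sleep;  /* Defaults to 0 == run, not sleep. */
            int pending_callbacks;
        };

        /* Model of one pass of the leader kthread's wait-then-work loop. */
        static void leader_kthread_pass(struct nocb_leader *l)
        {
            if (l->nocb_leader_sleep) {
                printf("leader: sleeping until awakened\n");
                return;
            }
            printf("leader: processing %d pending callbacks\n", l->pending_callbacks);
            l->pending_callbacks = 0;
            l->nocb_leader_sleep = true;   /* Now it is safe to sleep. */
        }

        int main(void)
        {
            /* Callbacks posted before the kthread was spawned. */
            struct nocb_leader l = { .pending_callbacks = 3 };

            leader_kthread_pass(&l);  /* Runs immediately: no boot hang. */
            leader_kthread_pass(&l);  /* Nothing pending: sleeps. */
            return 0;
        }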

    Reported-by: Amit Shah
    Signed-off-by: Pranith Kumar
    Link: http://www.spinics.net/lists/kernel/msg1802899.html
    [ paulmck: Backported to v3.17-rc2. ]
    Signed-off-by: Paul E. McKenney
    Tested-by: Amit Shah

    Pranith Kumar
     

10 Jul, 2014

3 commits

  • …, 'nocbs.2014.07.07a' and 'torture.2014.07.07a' into HEAD

    doc.2014.07.08a: Documentation updates.
    fixes.2014.07.09a: Miscellaneous fixes.
    maintainers.2014.07.08b: Maintainership updates.
    nocbs.2014.07.07a: Callback-offloading fixes.
    torture.2014.07.07a: Torture-test updates.

    Paul E. McKenney
     
  • RCU priority boosting currently checks for boosting via a pointer in
    task_struct. However, this is not needed: As Oleg noted, if the
    rt_mutex is placed in the rcu_node instead of on the booster's stack,
    the boostee can simply check it to see if it owns the lock. This commit
    makes this change, shrinking task_struct by one pointer and the kernel
    by thirteen lines.
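
    A minimal standalone model of the ownership check (the kernel consults the
    rt_mutex's owner itself; the explicit owner field below is a simplification):

        #include <stdio.h>

        struct task { const char *comm; };

        /* Simplified rcu_node holding the boost rt_mutex's owner directly. */
        struct rnode {
            struct task *boost_mtx_owner;  /* NULL when nobody is being boosted. */
        };

        /*
         * Model of the end-of-critical-section check: instead of a flag in
         * task_struct, the task just asks whether it owns the node's mutex.
         */
        static int needs_unboost(struct rnode *rnp, struct task *current_task)
        {
            return rnp->boost_mtx_owner == current_task;
        }

        int main(void)
        {
            struct task a = { "reader-a" }, b = { "reader-b" };
            struct rnode rnp = { .boost_mtx_owner = &a };  /* Proxy-locked for a. */

            printf("a unboosts? %d\n", needs_unboost(&rnp, &a)); /* 1 */
            printf("b unboosts? %d\n", needs_unboost(&rnp, &b)); /* 0 */
            return 0;
        }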

    Suggested-by: Oleg Nesterov
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • The current approach to RCU priority boosting uses an rt_mutex strictly
    for its priority-boosting side effects. The rt_mutex_init_proxy_locked()
    function is used by the booster to initialize the lock as held by the
    boostee. The booster then uses rt_mutex_lock() to acquire this rt_mutex,
    which priority-boosts the boostee. When the boostee reaches the end
    of its outermost RCU read-side critical section, it checks a field in
    its task structure to see whether it has been boosted, and, if so, uses
    rt_mutex_unlock() to release the rt_mutex. The booster can then go on
    to boost the next task that is blocking the current RCU grace period.

    But reasonable implementations of rt_mutex_unlock() might result in the
    boostee referencing the rt_mutex's data after releasing it. But the
    booster might have re-initialized the rt_mutex between the time that the
    boostee released it and the time that it later referenced it. This is
    clearly asking for trouble, so this commit introduces a completion that
    forces the booster to wait until the boostee has completely finished with
    the rt_mutex, thus avoiding the case where the booster is re-initializing
    the rt_mutex before the last boostee's last reference to that rt_mutex.

    This of course does introduce some overhead, but the priority-boosting
    code paths are miles from any possible fastpath, and the overhead of
    executing the completion will normally be quite small compared to the
    overhead of priority boosting and deboosting, so this should be OK.
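
    A threaded standalone sketch of the handshake (built on pthreads rather than
    the kernel's completion API, with a plain variable standing in for the
    rt_mutex; compile with -pthread):

        #include <pthread.h>
        #include <stdio.h>

        /* Minimal userspace stand-in for the kernel's completion. */
        struct completion {
            pthread_mutex_t lock;
            pthread_cond_t cond;
            int done;
        };

        static void complete(struct completion *c)
        {
            pthread_mutex_lock(&c->lock);
            c->done = 1;
            pthread_cond_signal(&c->cond);
            pthread_mutex_unlock(&c->lock);
        }

        static void wait_for_completion(struct completion *c)
        {
            pthread_mutex_lock(&c->lock);
            while (!c->done)
                pthread_cond_wait(&c->cond, &c->lock);
            pthread_mutex_unlock(&c->lock);
        }

        /* Stand-ins for the rcu_node's boost rt_mutex and completion. */
        static int boost_mtx_state;  /* 1 == "held on the boostee's behalf". */
        static struct completion boost_completion = {
            PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0
        };

        /* Boostee: release the mutex, then announce it is done touching it. */
        static void *boostee(void *arg)
        {
            (void)arg;
            boost_mtx_state = 0;          /* Stand-in for rt_mutex_unlock(). */
            complete(&boost_completion);  /* No further references to the mutex. */
            return NULL;
        }

        int main(void)
        {
            pthread_t t;

            boost_mtx_state = 1;          /* Stand-in for proxy-locking for the boostee. */
            pthread_create(&t, NULL, boostee, NULL);

            /* The booster must not reuse the mutex until the boostee is done. */
            wait_for_completion(&boost_completion);
            boost_mtx_state = 1;          /* Now safe to set up for the next boostee. */
            pthread_join(t, NULL);
            printf("mutex reused only after the boostee's last reference\n");
            return 0;
        }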

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     

08 Jul, 2014

1 commit

  • An 80-CPU system with a context-switch-heavy workload can require so
    many NOCB kthread wakeups that the RCU grace-period kthreads spend several
    tens of percent of a CPU just awakening things. This clearly will not
    scale well: If you add enough CPUs, the RCU grace-period kthreads would
    get behind, increasing grace-period latency.

    To avoid this problem, this commit divides the NOCB kthreads into leaders
    and followers, where the grace-period kthreads awaken the leaders, each of
    whom in turn awakens its followers. By default, the number of groups of
    kthreads is the square root of the number of CPUs, but this default may
    be overridden using the rcutree.rcu_nocb_leader_stride boot parameter.
    This reduces the number of wakeups done per grace period by the RCU
    grace-period kthread by the square root of the number of CPUs, but of
    course by shifting those wakeups to the leaders. In addition, because
    the leaders do grace periods on behalf of their respective followers,
    the number of wakeups of the followers decreases by up to a factor of two.
    Instead of being awakened once when new callbacks arrive and again
    at the end of the grace period, the followers are awakened only at
    the end of the grace period.

    For a numerical example, in a 4096-CPU system, the grace-period kthread
    would awaken 64 leaders, each of which would awaken its 63 followers
    at the end of the grace period. This compares favorably with the 79
    wakeups for the grace-period kthread on an 80-CPU system.
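
    The grouping arithmetic, as a standalone sketch (link with -lm; the
    modulo-based assignment below is an illustrative simplification of how the
    kernel carves CPUs into leader/follower groups):

        #include <math.h>
        #include <stdio.h>

        /*
         * Model of carving the no-CBs CPUs into leader/follower groups: by
         * default the group size is the square root of the number of CPUs, so
         * a grace period wakes only nr_cpus/stride leaders directly.
         */
        int main(void)
        {
            int nr_cpus = 4096;
            int stride = (int)sqrt(nr_cpus);   /* Default; a boot parameter can override. */
            int leaders = 0;

            for (int cpu = 0; cpu < nr_cpus; cpu++)
                if (cpu % stride == 0)
                    leaders++;                 /* First CPU of each group leads it. */

            printf("%d CPUs -> %d leaders, each with up to %d followers\n",
                   nr_cpus, leaders, stride - 1);
            return 0;
        }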

    Reported-by: Rik van Riel
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     

24 Jun, 2014

1 commit

  • Commit ac1bea85781e (Make cond_resched() report RCU quiescent states)
    fixed a problem where a CPU looping in the kernel with but one runnable
    task would give RCU CPU stall warnings, even if the in-kernel loop
    contained cond_resched() calls. Unfortunately, in so doing, it introduced
    performance regressions in Anton Blanchard's will-it-scale "open1" test.
    The problem appears to be not so much the increased cond_resched() path
    length as an increase in the rate at which grace periods complete, which
    increased per-update grace-period overhead.

    This commit takes a different approach to fixing this bug, mainly by
    moving the RCU-visible quiescent state from cond_resched() to
    rcu_note_context_switch(), and by further reducing the check to a
    simple non-zero test of a single per-CPU variable. However, this
    approach requires that the force-quiescent-state processing send
    resched IPIs to the offending CPUs. These will be sent only once
    the grace period has reached an age specified by the boot/sysfs
    parameter rcutree.jiffies_till_sched_qs, or once the grace period
    reaches an age halfway to the point at which RCU CPU stall warnings
    will be emitted, whichever comes first.
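
    A standalone sketch of the age check (the numbers and helper name are
    illustrative; stall_timeout stands in for the RCU CPU stall-warning timeout):

        #include <stdbool.h>
        #include <stdio.h>

        /* Tunables standing in for rcutree.jiffies_till_sched_qs and the stall timeout. */
        static unsigned long jiffies_till_sched_qs = 100;
        static unsigned long stall_timeout = 2100;

        /*
         * Model of the decision added by this commit: once the grace period is
         * old enough (the explicit threshold, or halfway to a stall warning,
         * whichever comes first), force-quiescent-state processing starts
         * sending resched IPIs to holdout CPUs.
         */
        static bool should_send_resched_ipi(unsigned long gp_age)
        {
            unsigned long threshold = jiffies_till_sched_qs;

            if (stall_timeout / 2 < threshold)
                threshold = stall_timeout / 2;
            return gp_age >= threshold;
        }

        int main(void)
        {
            printf("age  50: send IPI? %d\n", should_send_resched_ipi(50));  /* 0 */
            printf("age 150: send IPI? %d\n", should_send_resched_ipi(150)); /* 1 */
            return 0;
        }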

    Reported-by: Dave Hansen
    Signed-off-by: Paul E. McKenney
    Cc: Andi Kleen
    Cc: Christoph Lameter
    Cc: Mike Galbraith
    Cc: Eric Dumazet
    Reviewed-by: Josh Triplett
    [ paulmck: Made rcu_momentary_dyntick_idle() as suggested by the
    ktest build robot. Also fixed smp_mb() comment as noted by
    Oleg Nesterov. ]

    Merge with e552592e (Reduce overhead of cond_resched() checks for RCU)

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     

15 May, 2014

2 commits


29 Apr, 2014

3 commits

  • The rcu_start_gp_advanced() function currently uses irq_work_queue()
    to defer wakeups of the RCU grace-period kthread. This deferring
    is necessary to avoid RCU-scheduler deadlocks involving the rcu_node
    structure's lock, meaning that RCU cannot call any of the scheduler's
    wake-up functions while holding one of these locks.

    Unfortunately, the second and subsequent calls to irq_work_queue() are
    ignored, and the first call will be ignored (aside from queuing the work
    item) if the scheduler-clock tick is turned off. This is OK for many
    uses, especially those where irq_work_queue() is called from an interrupt
    or softirq handler, because in those cases the scheduler-clock-tick state
    will be re-evaluated, which will turn the scheduler-clock tick back on.
    On the next tick, any deferred work will then be processed.

    However, this strategy does not always work for RCU, which can be invoked
    at process level from idle CPUs. In this case, the tick might never
    be turned back on, indefinitely deferring a grace-period start request.
    Note that the RCU CPU stall detector cannot see this condition, because
    there is no RCU grace period in progress. Therefore, we can (and do!)
    see long tens-of-seconds stalls in grace-period handling. In theory,
    we could see a full grace-period hang, but rcutorture testing to date
    has seen only the tens-of-seconds stalls. Event tracing demonstrates
    that irq_work_queue() is being called repeatedly to no effect during
    these stalls: The "newreq" event appears repeatedly from a task that is
    not one of the grace-period kthreads.

    In theory, irq_work_queue() might be fixed to avoid this sort of issue,
    but RCU's requirements are unusual and it is quite straightforward to pass
    wake-up responsibility up through RCU's call chain, so that the wakeup
    happens when the offending locks are released.

    This commit therefore makes this change. The rcu_start_gp_advanced(),
    rcu_start_future_gp(), rcu_accelerate_cbs(), rcu_advance_cbs(),
    __note_gp_changes(), and rcu_start_gp() functions now return a boolean
    which indicates when a wake-up is needed. A new rcu_gp_kthread_wake()
    does the wakeup when it is necessary and safe to do so: No self-wakes,
    no wake-ups if the ->gp_flags field indicates there is no need (as in
    someone else did the wake-up before we got around to it), and no wake-ups
    before the grace-period kthread has been created.
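
    The resulting caller-side pattern, sketched as standalone C (the lock and
    wakeup helpers below are printf stand-ins, not the kernel functions):

        #include <stdbool.h>
        #include <stdio.h>

        /* Stand-ins for the rcu_node lock and the grace-period kthread wakeup. */
        static void fake_lock(void)       { printf("lock rnp->lock\n"); }
        static void fake_unlock(void)     { printf("unlock rnp->lock\n"); }
        static void gp_kthread_wake(void) { printf("wake grace-period kthread\n"); }

        /*
         * Model of the reworked callee: instead of calling a wakeup function
         * while the lock is held (deadlock-prone) or deferring via irq_work
         * (unreliable from idle), it just reports whether a wakeup is needed.
         */
        static bool start_future_gp(bool gp_needed)
        {
            return gp_needed;   /* true: caller should wake the GP kthread. */
        }

        int main(void)
        {
            bool needwake;

            fake_lock();
            needwake = start_future_gp(true);
            fake_unlock();          /* Drop the offending lock first... */
            if (needwake)
                gp_kthread_wake();  /* ...then it is safe to do the wakeup. */
            return 0;
        }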

    Signed-off-by: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     
  • The ->preemptible field in rcu_data is only initialized in the function
    rcu_init_percpu_data(), and never used. This commit therefore removes
    this field.

    Signed-off-by: Iulia Manda
    Reviewed-by: Josh Triplett
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Iulia Manda
     
  • In the old days, the only source of requests for future grace periods
    was NOCB CPUs. This has changed: CPUs routinely post requests for
    future grace periods in order to promote power efficiency and reduce
    OS jitter with minimal impact on grace-period latency. This commit
    therefore updates cpu_needs_another_gp() to invoke rcu_future_needs_gp()
    instead of rcu_nocb_needs_gp(). The latter is no longer used, so is
    now removed. This commit also adds tracing for the irq_work_queue()
    wakeup case.

    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     

18 Feb, 2014

1 commit

  • All of the RCU source files have the usual GPL header, which contains a
    long-obsolete postal address for FSF. To avoid the need to track the
    FSF office's movements, this commit substitutes the URL where GPL may
    be found.

    Reported-by: Greg KH
    Reported-by: Steven Rostedt
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney