13 Jul, 2011

1 commit

  • Under some rare but real combinations of configuration parameters, RCU
    callbacks are posted during early boot that use kernel facilities that
    are not yet initialized, so that when these callbacks are invoked, hard
    hangs and crashes ensue. This commit therefore prevents RCU callbacks
    from being invoked until after the scheduler is fully up and running,
    that is, until after multiple tasks have been spawned.

    It might well turn out that a better approach is to identify the specific
    RCU callbacks that are causing this problem, but that discussion will
    wait until such time as someone really needs an RCU callback to be invoked
    (as opposed to merely registered) during early boot.
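
    The fix described above can be sketched as follows (a minimal sketch:
    the flag name and the function gated here are assumptions based on the
    description, not quotes from the patch):

        /* Set once the scheduler can run multiple tasks. */
        static int rcu_scheduler_fully_active;

        static void rcu_invoke_callbacks(void)    /* hypothetical name */
        {
                if (unlikely(!rcu_scheduler_fully_active))
                        return; /* Too early in boot: leave callbacks queued. */
                /* ... invoke whatever callbacks are ready ... */
        }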

    Reported-by: julie Sullivan
    Reported-by: RKK
    Signed-off-by: Paul E. McKenney
    Tested-by: Konrad Rzeszutek Wilk
    Tested-by: julie Sullivan
    Tested-by: RKK

    Paul E. McKenney
     

17 Jun, 2011

1 commit

  • The commit "use softirq instead of kthreads except when RCU_BOOST=y"
    simply applied #ifdefs in place. This commit is a cleanup that moves
    the newly #ifdef'ed code to the header file kernel/rcutree_plugin.h.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     

15 Jun, 2011

2 commits

  • Commit a26ac2455ffcf3 (rcu: move TREE_RCU from softirq to kthread)
    introduced a performance regression. In an AIM7 test, it degraded
    performance by about 40%.

    That commit runs RCU callbacks in a kthread instead of in softirq
    context, and we observed a high rate of context switching as a result.
    Our test system has 64 CPUs and HZ is 1000, so we saw more than 64k
    context switches per second caused by RCU's per-CPU kthreads. A trace
    showed that most of the time the RCU per-CPU kthread doesn't actually
    handle any callbacks, but instead just does a very small amount of
    work handling grace periods. This means that RCU's per-CPU kthreads
    are making the scheduler do quite a bit of work in order to allow a
    very small amount of RCU-related processing to be done.

    Alex Shi's analysis determined that this slowdown is due to lock
    contention within the scheduler. Unfortunately, as Peter Zijlstra points
    out, the scheduler's real-time semantics require global action, which
    means that this contention is inherent in real-time scheduling. (Yes,
    perhaps someone will come up with a workaround -- otherwise, -rt is not
    going to do well on large SMP systems -- but this patch will work around
    this issue in the meantime. And "the meantime" might well be forever.)

    This patch therefore re-introduces softirq processing to RCU, but only
    for core RCU work. RCU callbacks are still executed in kthread context,
    so that only a small amount of RCU work runs in softirq context in the
    common case. This should minimize ksoftirqd execution, allowing us to
    skip boosting of ksoftirqd for CONFIG_RCU_BOOST=y kernels.
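
    The shape of the change can be sketched as follows (hedged: the helper
    name matches the commit's softirq-for-core-work description, but the
    in-tree code may differ):

        /*
         * Kick core RCU (grace-period) processing into softirq context.
         * Callback invocation stays in the per-CPU kthread, so only a
         * small amount of work runs in softirq in the common case.
         */
        static void invoke_rcu_core(void)
        {
                raise_softirq(RCU_SOFTIRQ);
        }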

    Signed-off-by: Shaohua Li
    Tested-by: "Alex,Shi"
    Signed-off-by: Paul E. McKenney

    Shaohua Li
     
  • Make the functions creating the kthreads wake them up. Leverage the
    fact that the per-node and boost kthreads can run anywhere, thus
    dispensing with the need to wake them up once the incoming CPU has
    gone fully online.

    Signed-off-by: Paul E. McKenney
    Tested-by: Daniel J Blueman

    Paul E. McKenney
     

31 May, 2011

1 commit

  • Commit cc3ce5176d83 (rcu: Start RCU kthreads in TASK_INTERRUPTIBLE
    state) fudges a sleeping task's state, resulting in the scheduler
    seeing a TASK_UNINTERRUPTIBLE task going to sleep, but a
    TASK_INTERRUPTIBLE task waking up. The result is an unbalanced load
    calculation.

    The problem that patch tried to address is that the RCU kthreads could
    stay in UNINTERRUPTIBLE state for quite a while, triggering the
    hung-task detector because they are woken only on demand.

    Cure the problem differently by always giving the tasks at least one
    wake-up once the CPU is fully up and running; this will kick them out
    of the initial UNINTERRUPTIBLE state and into the regular
    INTERRUPTIBLE wait state.

    [ The alternative would be teaching kthread_create() to start threads as
    INTERRUPTIBLE but that needs a tad more thought. ]
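
    In code, the cure might look like this (a sketch only; the helper name
    is hypothetical, and the real patch applies the wake-up in the CPU
    online path):

        /* Hypothetical helper, called once the incoming CPU is fully up. */
        static void rcu_kick_cpu_kthread(int cpu)
        {
                struct task_struct *t = per_cpu(rcu_cpu_kthread_task, cpu);

                if (t)
                        wake_up_process(t); /* leave TASK_UNINTERRUPTIBLE */
        }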

    Reported-by: Damien Wyart
    Signed-off-by: Peter Zijlstra
    Acked-by: Paul E. McKenney
    Link: http://lkml.kernel.org/r/1306755291.1200.2872.camel@twins
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

28 May, 2011

4 commits

  • Upon creation, kthreads are in TASK_UNINTERRUPTIBLE state, which can
    result in softlockup warnings. Because some of RCU's kthreads can
    legitimately be idle indefinitely, start them in TASK_INTERRUPTIBLE
    state in order to avoid those warnings.

    Suggested-by: Peter Zijlstra
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney
    Tested-by: Yinghai Lu
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • It is not necessary to use waitqueues for the RCU kthreads because
    we always know exactly which thread is to be awakened. In addition,
    wake_up() only issues an actual wakeup when there is a thread waiting on
    the queue, which was why there was an extra explicit wake_up_process()
    to get the RCU kthreads started.

    Eliminating the waitqueues (and wake_up()) in favor of wake_up_process()
    eliminates the need for the initial wake_up_process() and also shrinks
    the data structure size a bit. The wakeup logic is placed in a new
    rcu_wait() macro.
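
    A sketch of the macro, matching its description above (the in-tree
    version may differ in detail):

        #define rcu_wait(cond)                                          \
        do {                                                            \
                for (;;) {                                              \
                        set_current_state(TASK_INTERRUPTIBLE);          \
                        if (cond)                                       \
                                break;                                  \
                        schedule();                                     \
                }                                                       \
                __set_current_state(TASK_RUNNING);                      \
        } while (0)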

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • This commit switches manipulations of the rcu_node ->wakemask field
    to atomic operations, which allows rcu_cpu_kthread_timer() to avoid
    acquiring the rcu_node lock. This should avoid the following lockdep
    splat reported by Valdis Kletnieks:

    [ 12.872150] usb 1-4: new high speed USB device number 3 using ehci_hcd
    [ 12.986667] usb 1-4: New USB device found, idVendor=413c, idProduct=2513
    [ 12.986679] usb 1-4: New USB device strings: Mfr=0, Product=0, SerialNumber=0
    [ 12.987691] hub 1-4:1.0: USB hub found
    [ 12.987877] hub 1-4:1.0: 3 ports detected
    [ 12.996372] input: PS/2 Generic Mouse as /devices/platform/i8042/serio1/input/input10
    [ 13.071471] udevadm used greatest stack depth: 3984 bytes left
    [ 13.172129]
    [ 13.172130] =======================================================
    [ 13.172425] [ INFO: possible circular locking dependency detected ]
    [ 13.172650] 2.6.39-rc6-mmotm0506 #1
    [ 13.172773] -------------------------------------------------------
    [ 13.172997] blkid/267 is trying to acquire lock:
    [ 13.173009] (&p->pi_lock){-.-.-.}, at: [] try_to_wake_up+0x29/0x1aa
    [ 13.173009]
    [ 13.173009] but task is already holding lock:
    [ 13.173009] (rcu_node_level_0){..-...}, at: [] rcu_cpu_kthread_timer+0x27/0x58
    [ 13.173009]
    [ 13.173009] which lock already depends on the new lock.
    [ 13.173009]
    [ 13.173009]
    [ 13.173009] the existing dependency chain (in reverse order) is:
    [ 13.173009]
    [ 13.173009] -> #2 (rcu_node_level_0){..-...}:
    [ 13.173009] [] check_prevs_add+0x8b/0x104
    [ 13.173009] [] validate_chain+0x36f/0x3ab
    [ 13.173009] [] __lock_acquire+0x369/0x3e2
    [ 13.173009] [] lock_acquire+0xfc/0x14c
    [ 13.173009] [] _raw_spin_lock+0x36/0x45
    [ 13.173009] [] rcu_read_unlock_special+0x8c/0x1d5
    [ 13.173009] [] __rcu_read_unlock+0x4f/0xd7
    [ 13.173009] [] rcu_read_unlock+0x21/0x23
    [ 13.173009] [] cpuacct_charge+0x6c/0x75
    [ 13.173009] [] update_curr+0x101/0x12e
    [ 13.173009] [] check_preempt_wakeup+0xf7/0x23b
    [ 13.173009] [] check_preempt_curr+0x2b/0x68
    [ 13.173009] [] ttwu_do_wakeup+0x76/0x128
    [ 13.173009] [] ttwu_do_activate.constprop.63+0x57/0x5c
    [ 13.173009] [] scheduler_ipi+0x48/0x5d
    [ 13.173009] [] smp_reschedule_interrupt+0x16/0x18
    [ 13.173009] [] reschedule_interrupt+0x13/0x20
    [ 13.173009] [] rcu_read_unlock+0x21/0x23
    [ 13.173009] [] find_get_page+0xa9/0xb9
    [ 13.173009] [] filemap_fault+0x6a/0x34d
    [ 13.173009] [] __do_fault+0x54/0x3e6
    [ 13.173009] [] handle_pte_fault+0x12c/0x1ed
    [ 13.173009] [] handle_mm_fault+0x1cd/0x1e0
    [ 13.173009] [] do_page_fault+0x42d/0x5de
    [ 13.173009] [] page_fault+0x1f/0x30
    [ 13.173009]
    [ 13.173009] -> #1 (&rq->lock){-.-.-.}:
    [ 13.173009] [] check_prevs_add+0x8b/0x104
    [ 13.173009] [] validate_chain+0x36f/0x3ab
    [ 13.173009] [] __lock_acquire+0x369/0x3e2
    [ 13.173009] [] lock_acquire+0xfc/0x14c
    [ 13.173009] [] _raw_spin_lock+0x36/0x45
    [ 13.173009] [] __task_rq_lock+0x8b/0xd3
    [ 13.173009] [] wake_up_new_task+0x41/0x108
    [ 13.173009] [] do_fork+0x265/0x33f
    [ 13.173009] [] kernel_thread+0x6b/0x6d
    [ 13.173009] [] rest_init+0x21/0xd2
    [ 13.173009] [] start_kernel+0x3bb/0x3c6
    [ 13.173009] [] x86_64_start_reservations+0xaf/0xb3
    [ 13.173009] [] x86_64_start_kernel+0xf0/0xf7
    [ 13.173009]
    [ 13.173009] -> #0 (&p->pi_lock){-.-.-.}:
    [ 13.173009] [] check_prev_add+0x68/0x20e
    [ 13.173009] [] check_prevs_add+0x8b/0x104
    [ 13.173009] [] validate_chain+0x36f/0x3ab
    [ 13.173009] [] __lock_acquire+0x369/0x3e2
    [ 13.173009] [] lock_acquire+0xfc/0x14c
    [ 13.173009] [] _raw_spin_lock_irqsave+0x44/0x57
    [ 13.173009] [] try_to_wake_up+0x29/0x1aa
    [ 13.173009] [] wake_up_process+0x10/0x12
    [ 13.173009] [] rcu_cpu_kthread_timer+0x44/0x58
    [ 13.173009] [] call_timer_fn+0xac/0x1e9
    [ 13.173009] [] run_timer_softirq+0x1aa/0x1f2
    [ 13.173009] [] __do_softirq+0x109/0x26a
    [ 13.173009] [] call_softirq+0x1c/0x30
    [ 13.173009] [] do_softirq+0x44/0xf1
    [ 13.173009] [] irq_exit+0x58/0xc8
    [ 13.173009] [] smp_apic_timer_interrupt+0x79/0x87
    [ 13.173009] [] apic_timer_interrupt+0x13/0x20
    [ 13.173009] [] get_page_from_freelist+0x2aa/0x310
    [ 13.173009] [] __alloc_pages_nodemask+0x178/0x243
    [ 13.173009] [] pte_alloc_one+0x1e/0x3a
    [ 13.173009] [] __pte_alloc+0x22/0x14b
    [ 13.173009] [] handle_mm_fault+0x17e/0x1e0
    [ 13.173009] [] do_page_fault+0x42d/0x5de
    [ 13.173009] [] page_fault+0x1f/0x30
    [ 13.173009]
    [ 13.173009] other info that might help us debug this:
    [ 13.173009]
    [ 13.173009] Chain exists of:
    [ 13.173009] &p->pi_lock --> &rq->lock --> rcu_node_level_0
    [ 13.173009]
    [ 13.173009] Possible unsafe locking scenario:
    [ 13.173009]
    [ 13.173009]  CPU0                    CPU1
    [ 13.173009]  ----                    ----
    [ 13.173009]  lock(rcu_node_level_0);
    [ 13.173009]                          lock(&rq->lock);
    [ 13.173009]                          lock(rcu_node_level_0);
    [ 13.173009]  lock(&p->pi_lock);
    [ 13.173009]
    [ 13.173009] *** DEADLOCK ***
    [ 13.173009]
    [ 13.173009] 3 locks held by blkid/267:
    [ 13.173009] #0: (&mm->mmap_sem){++++++}, at: [] do_page_fault+0x1f3/0x5de
    [ 13.173009] #1: (&yield_timer){+.-...}, at: [] call_timer_fn+0x0/0x1e9
    [ 13.173009] #2: (rcu_node_level_0){..-...}, at: [] rcu_cpu_kthread_timer+0x27/0x58
    [ 13.173009]
    [ 13.173009] stack backtrace:
    [ 13.173009] Pid: 267, comm: blkid Not tainted 2.6.39-rc6-mmotm0506 #1
    [ 13.173009] Call Trace:
    [ 13.173009] [] print_circular_bug+0xc8/0xd9
    [ 13.173009] [] check_prev_add+0x68/0x20e
    [ 13.173009] [] ? save_stack_trace+0x28/0x46
    [ 13.173009] [] check_prevs_add+0x8b/0x104
    [ 13.173009] [] validate_chain+0x36f/0x3ab
    [ 13.173009] [] __lock_acquire+0x369/0x3e2
    [ 13.173009] [] ? try_to_wake_up+0x29/0x1aa
    [ 13.173009] [] lock_acquire+0xfc/0x14c
    [ 13.173009] [] ? try_to_wake_up+0x29/0x1aa
    [ 13.173009] [] ? rcu_check_quiescent_state+0x82/0x82
    [ 13.173009] [] _raw_spin_lock_irqsave+0x44/0x57
    [ 13.173009] [] ? try_to_wake_up+0x29/0x1aa
    [ 13.173009] [] try_to_wake_up+0x29/0x1aa
    [ 13.173009] [] ? rcu_check_quiescent_state+0x82/0x82
    [ 13.173009] [] wake_up_process+0x10/0x12
    [ 13.173009] [] rcu_cpu_kthread_timer+0x44/0x58
    [ 13.173009] [] ? rcu_check_quiescent_state+0x82/0x82
    [ 13.173009] [] call_timer_fn+0xac/0x1e9
    [ 13.173009] [] ? del_timer+0x75/0x75
    [ 13.173009] [] ? rcu_check_quiescent_state+0x82/0x82
    [ 13.173009] [] run_timer_softirq+0x1aa/0x1f2
    [ 13.173009] [] __do_softirq+0x109/0x26a
    [ 13.173009] [] ? tick_dev_program_event+0x37/0xf6
    [ 13.173009] [] ? time_hardirqs_off+0x1b/0x2f
    [ 13.173009] [] call_softirq+0x1c/0x30
    [ 13.173009] [] do_softirq+0x44/0xf1
    [ 13.173009] [] irq_exit+0x58/0xc8
    [ 13.173009] [] smp_apic_timer_interrupt+0x79/0x87
    [ 13.173009] [] apic_timer_interrupt+0x13/0x20
    [ 13.173009] [] ? get_page_from_freelist+0x114/0x310
    [ 13.173009] [] ? get_page_from_freelist+0x2aa/0x310
    [ 13.173009] [] ? clear_page_c+0x7/0x10
    [ 13.173009] [] ? prep_new_page+0x14c/0x1cd
    [ 13.173009] [] get_page_from_freelist+0x2aa/0x310
    [ 13.173009] [] __alloc_pages_nodemask+0x178/0x243
    [ 13.173009] [] ? __pmd_alloc+0x87/0x99
    [ 13.173009] [] pte_alloc_one+0x1e/0x3a
    [ 13.173009] [] ? __pmd_alloc+0x87/0x99
    [ 13.173009] [] __pte_alloc+0x22/0x14b
    [ 13.173009] [] handle_mm_fault+0x17e/0x1e0
    [ 13.173009] [] do_page_fault+0x42d/0x5de
    [ 13.173009] [] ? sys_brk+0x32/0x10c
    [ 13.173009] [] ? time_hardirqs_off+0x1b/0x2f
    [ 13.173009] [] ? trace_hardirqs_off_caller+0x3f/0x9c
    [ 13.173009] [] ? trace_hardirqs_off_thunk+0x3a/0x3c
    [ 13.173009] [] page_fault+0x1f/0x30
    [ 14.010075] usb 5-1: new full speed USB device number 2 using uhci_hcd
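
    A hedged reconstruction of the resulting timer handler (names recalled
    from the code of that era; details may differ from the tree):

        static void rcu_cpu_kthread_timer(unsigned long arg)
        {
                struct rcu_data *rdp = per_cpu_ptr(rcu_state->rda, arg);
                struct rcu_node *rnp = rdp->mynode;

                /* An atomic OR replaces the lock-protected read-modify-
                 * write, so the rcu_node ->lock need not be acquired. */
                atomic_or(rdp->grpmask, &rnp->wakemask);
                invoke_rcu_node_kthread(rnp);
        }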

    Reported-by: Valdis Kletnieks
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • …ck/linux-2.6-rcu into core/urgent

    Ingo Molnar
     

27 May, 2011

5 commits

  • (Note: this was reverted, and is now being re-applied in pieces, with
    this being the fifth and final piece. See below for the reason that
    it is now felt to be safe to re-apply this.)

    Commit d09b62d fixed grace-period synchronization, but left in place
    some smp_mb() invocations in rcu_process_callbacks() that are no longer
    needed; sheer paranoia had prevented them from being removed. This
    commit removes
    them and provides a proof of correctness in their absence. It also adds
    a memory barrier to rcu_report_qs_rsp() immediately before the update to
    rsp->completed in order to handle the theoretical possibility that the
    compiler or CPU might move massive quantities of code into a lock-based
    critical section. This also proves that the sheer paranoia was not
    entirely unjustified, at least from a theoretical point of view.

    In addition, the old dyntick-idle synchronization depended on the fact
    that grace periods were many milliseconds in duration, so that it could
    be assumed that no dyntick-idle CPU could reorder a memory reference
    across an entire grace period. Unfortunately for this design, the
    addition of expedited grace periods breaks this assumption, which has
    the unfortunate side-effect of requiring atomic operations in the
    functions that track dyntick-idle state for RCU. (There is some hope
    that the algorithms used in user-level RCU might be applied here, but
    some work is required to handle the NMIs that user-space applications
    can happily ignore. For the short term, better safe than sorry.)

    This proof assumes that neither compiler nor CPU will allow a lock
    acquisition and release to be reordered, as doing so can result in
    deadlock. The proof is as follows:

    1. A given CPU declares a quiescent state under the protection of
    its leaf rcu_node's lock.

    2. If there is more than one level of rcu_node hierarchy, the
    last CPU to declare a quiescent state will also acquire the
    ->lock of the next rcu_node up in the hierarchy, but only
    after releasing the lower level's lock. The acquisition of this
    lock clearly cannot occur prior to the acquisition of the leaf
    node's lock.

    3. Step 2 repeats until we reach the root rcu_node structure.
    Please note again that only one lock is held at a time through
    this process. The acquisition of the root rcu_node's ->lock
    must occur after the release of that of the leaf rcu_node.

    4. At this point, we set the ->completed field in the rcu_state
    structure in rcu_report_qs_rsp(). However, if the rcu_node
    hierarchy contains only one rcu_node, then in theory the code
    preceding the quiescent state could leak into the critical
    section. We therefore precede the update of ->completed with a
    memory barrier. All CPUs will therefore agree that any updates
    preceding any report of a quiescent state will have happened
    before the update of ->completed.

    5. Regardless of whether a new grace period is needed, rcu_start_gp()
    will propagate the new value of ->completed to all of the leaf
    rcu_node structures, under the protection of each rcu_node's ->lock.
    If a new grace period is needed immediately, this propagation
    will occur in the same critical section that ->completed was
    set in, but courtesy of the memory barrier in #4 above, is still
    seen to follow any pre-quiescent-state activity.

    6. When a given CPU invokes __rcu_process_gp_end(), it becomes
    aware of the end of the old grace period and therefore makes
    any RCU callbacks that were waiting on that grace period eligible
    for invocation.

    If this CPU is the same one that detected the end of the grace
    period, and if there is but a single rcu_node in the hierarchy,
    we will still be in the single critical section. In this case,
    the memory barrier in step #4 guarantees that all callbacks will
    be seen to execute after each CPU's quiescent state.

    On the other hand, if this is a different CPU, it will acquire
    the leaf rcu_node's ->lock, and will again be serialized after
    each CPU's quiescent state for the old grace period.

    On the strength of this proof, this commit therefore removes the memory
    barriers from rcu_process_callbacks() and adds one to rcu_report_qs_rsp().
    The effect is to reduce the number of memory barriers by one and to
    reduce the frequency of execution from about once per scheduling tick
    per CPU to once per grace period.

    This was reverted due to hangs found during testing by Yinghai Lu and
    Ingo Molnar. Frederic Weisbecker supplied Yinghai with tracing that
    located the underlying problem, and Frederic also provided the fix.

    The underlying problem was that the HARDIRQ_ENTER() macro from
    lib/locking-selftest.c invoked irq_enter(), which in turn invokes
    rcu_irq_enter(), but HARDIRQ_EXIT() invoked __irq_exit(), which
    does not invoke rcu_irq_exit(). This situation resulted in calls
    to rcu_irq_enter() that were not balanced by the required calls to
    rcu_irq_exit(). Therefore, after these locking selftests completed,
    RCU's dyntick-idle nesting count was a large number (for example,
    72), which caused RCU to conclude that the affected CPU was not in
    dyntick-idle mode when in fact it was.

    RCU would therefore incorrectly wait for this dyntick-idle CPU, resulting
    in hangs.

    In contrast, with Frederic's patch, which replaces the irq_enter()
    in HARDIRQ_ENTER() with an __irq_enter(), these tests don't ever call
    either rcu_irq_enter() or rcu_irq_exit(), which works because the CPU
    running the test is already marked as not being in dyntick-idle mode.
    This means that the rcu_irq_enter() and rcu_irq_exit() calls remain
    balanced, and RCU then has no problem working out which CPUs are in
    dyntick-idle mode and which are not.

    The reason that the imbalance was not noticed before the barrier patch
    was applied is that the old implementation of rcu_enter_nohz() ignored
    the nesting depth. This could still result in delays, but much shorter
    ones. Whenever there was a delay, RCU would IPI the CPU with the
    unbalanced nesting level, which would eventually result in rcu_enter_nohz()
    being called, which in turn would force RCU to see that the CPU was in
    dyntick-idle mode.

    The reason that very few people noticed the problem is that the mismatched
    irq_enter() vs. __irq_exit() occurred only when the kernel was built with
    CONFIG_DEBUG_LOCKING_API_SELFTESTS.

    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     
  • The old version of rcu_enter_nohz() forced RCU into nohz mode even if
    the nesting count was non-zero. This change causes rcu_enter_nohz()
    to hold off for non-zero nesting counts.
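
    A sketch of the new behavior (reconstructed from the description; the
    in-tree function may differ):

        void rcu_enter_nohz(void)
        {
                unsigned long flags;
                struct rcu_dynticks *rdtp;

                local_irq_save(flags);
                rdtp = &__get_cpu_var(rcu_dynticks);
                if (--rdtp->dynticks_nesting) {
                        /* Still nested: hold off on dyntick-idle entry. */
                        local_irq_restore(flags);
                        return;
                }
                /* ... actually enter dyntick-idle mode ... */
                local_irq_restore(flags);
        }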

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Condition the set_need_resched() in rcu_irq_exit() on in_irq(). This
    should be a no-op, because rcu_irq_exit() should only be called from irq.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Second step of partitioning of commit e59fb3120b.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Add the memory barriers added by e59fb3120b.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     

21 May, 2011

1 commit

  • Commit e66eed651fd1 ("list: remove prefetching from regular list
    iterators") removed the include of prefetch.h from list.h, which
    uncovered several cases that had apparently relied on that rather
    obscure header file dependency.

    So this fixes things up a bit, using

        grep -L linux/prefetch.h $(git grep -l '[^a-z_]prefetch\w*(' -- '*.[ch]')
        grep -L 'prefetch\w*(' $(git grep -l 'linux/prefetch.h' -- '*.[ch]')

    to guide us in finding files that either need <linux/prefetch.h>
    inclusion, or have it despite not needing it.

    There are more of them around (mostly network drivers), but this gets
    many core ones.

    Reported-by: Stephen Rothwell
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

08 May, 2011

1 commit

  • Avoid calling into the scheduler while holding core RCU locks. This
    allows rcu_read_unlock() to be called while holding the runqueue locks,
    but only as long as there was no chance of the RCU read-side critical
    section having been preempted. (Otherwise, if RCU priority boosting
    is enabled, rcu_read_unlock() might call into the scheduler in order to
    unboost itself, which might allow self-deadlock on the runqueue locks
    within the scheduler.)

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     

06 May, 2011

18 commits

  • Provide rcu_virt_note_context_switch() for virtualization use, to note
    a quiescent state during guest entry.

    Signed-off-by: Gleb Natapov
    Signed-off-by: Paul E. McKenney

    Gleb Natapov
     
  • Signed integer overflow is undefined by the C standard, so move
    calculations to unsigned.
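
    RCU's wrap-safe comparison helpers in include/linux/rcupdate.h show the
    pattern: do the subtraction in unsigned arithmetic, where wraparound is
    well defined, and inspect only the wrapped difference:

        #define ULONG_CMP_GE(a, b)      (ULONG_MAX / 2 >= (a) - (b))
        #define ULONG_CMP_LT(a, b)      (ULONG_MAX / 2 < (a) - (b))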

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • This commit marks a first step towards making call_rcu() have
    real-time behavior. If irqs are disabled, don't dive into the
    RCU core. Later on, this new early exit will wake up the
    per-CPU kthread, which first must be modified to handle the
    cases involving callback storms.
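
    A minimal sketch of the early exit (the surrounding __call_rcu()
    structure is assumed, not quoted):

        local_irq_save(flags);
        /* ... enqueue the callback on this CPU's list ... */
        if (irqs_disabled_flags(flags)) {
                /* Don't dive into the RCU core with irqs disabled;
                 * the per-CPU kthread will pick up the work later. */
                local_irq_restore(flags);
                return;
        }
        /* ... otherwise, check grace periods as before ... */
        local_irq_restore(flags);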

    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     
  • Although rcu_yield() dropped from real-time to normal priority, there
    is always the possibility that the competing tasks have been niced.
    So nice to 19 in rcu_yield() to help ensure that other tasks have a
    better chance of running.
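
    A sketch of the corresponding priority drop in rcu_yield() (hedged;
    the exact placement within the function is assumed):

        struct sched_param sp = { .sched_priority = 0 };

        sched_setscheduler_nocheck(current, SCHED_NORMAL, &sp);
        set_user_nice(current, 19); /* give niced competitors a chance */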

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     
  • Many RCU callback functions just call kfree() on the base structure.
    These functions are trivial, but their size adds up, and furthermore
    when they are used in a kernel module, that module must invoke the
    high-latency rcu_barrier() function at module-unload time.

    The kfree_rcu() function introduced by this commit addresses this issue.
    Rather than encoding a function address in the embedded rcu_head
    structure, kfree_rcu() instead encodes the offset of the rcu_head
    structure within the base structure. Because the functions are not
    allowed in the low-order 4096 bytes of kernel virtual memory, offsets
    up to 4095 bytes can be accommodated. If the offset is larger than
    4095 bytes, a compile-time error will be generated in __kfree_rcu().
    If this error is triggered, you can either fall back to use of call_rcu()
    or rearrange the structure to position the rcu_head structure into the
    first 4096 bytes.

    Note that the allowable offset might decrease in the future, for example,
    to allow something like kmem_cache_free_rcu().

    The new kfree_rcu() function can replace code as follows:

        call_rcu(&p->rcu, simple_kfree_callback);

    where "simple_kfree_callback()" might be defined as follows:

        void simple_kfree_callback(struct rcu_head *p)
        {
                struct foo *q = container_of(p, struct foo, rcu);

                kfree(q);
        }

    with the following:

        kfree_rcu(p, rcu);

    Note that "rcu" is the name of the rcu_head field within the structure
    being freed, and that p points to the enclosing structure. The reason
    for passing the structure pointer and field name, rather than a pointer
    to the rcu_head structure itself, is that this approach allows better
    type checking.
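
    The offset-encoding trick can be sketched as follows (close to the
    macros this commit introduces, though details may differ):

        /* Offsets below 4096 cannot be valid function addresses. */
        #define __is_kfree_rcu_offset(offset) ((offset) < 4096)

        #define __kfree_rcu(head, offset)                               \
                do {                                                    \
                        BUILD_BUG_ON(!__is_kfree_rcu_offset(offset));   \
                        call_rcu(head, (void (*)(struct rcu_head *))    \
                                       (unsigned long)(offset));        \
                } while (0)

        #define kfree_rcu(ptr, rcu_head)                                \
                __kfree_rcu(&((ptr)->rcu_head),                         \
                            offsetof(typeof(*(ptr)), rcu_head))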

    This commit is based on earlier work by Lai Jiangshan and Manfred Spraul:

    Lai's V1 patch: http://lkml.org/lkml/2008/9/18/1
    Manfred's patch: http://lkml.org/lkml/2009/1/2/115

    Signed-off-by: Lai Jiangshan
    Signed-off-by: Manfred Spraul
    Signed-off-by: Paul E. McKenney
    Reviewed-by: David Howells
    Reviewed-by: Josh Triplett

    Lai Jiangshan
     
  • The "preemptible" spelling is preferable. May as well fix it.

    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     
  • This removes a couple of lines from invoke_rcu_cpu_kthread(), improving
    readability.

    Reported-by: Christoph Lameter
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     
  • Increment a per-CPU counter on each pass through rcu_cpu_kthread()'s
    service loop, and add it to the rcudata trace output.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     
  • This commit adds the age in jiffies of the current grace period along
    with the duration in jiffies of the longest grace period since boot
    to the rcu/rcugp debugfs file. It also adds an additional "O" state
    to kthread tracing to differentiate between the kthread waiting due to
    having nothing to do on the one hand and waiting due to being on the
    wrong CPU on the other hand.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • It is not possible to accurately correlate rcutorture output with that
    of debugfs. This patch therefore adds a debugfs file that prints out
    the rcutorture version number, permitting easy correlation.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     
  • Add tracing to help debugging situations when RCU's kthreads are not
    running but are supposed to be.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     
  • If you are doing CPU hotplug operations, it is best not to have
    real-time tasks running CPU-bound on the outgoing CPU. So this commit
    makes the per-CPU kthreads run at non-realtime priority during that
    time.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     
  • The scheduler has had some heartburn in the past when too many real-time
    kthreads were affinitied to the outgoing CPU. So, this commit lightens
    the load by forcing the per-rcu_node and the boost kthreads off of the
    outgoing CPU. Note that RCU's per-CPU kthread remains on the outgoing
    CPU until the bitter end, as it must in order to preserve correctness.

    Also avoid disabling hardirqs across calls to set_cpus_allowed_ptr(),
    given that this function can block.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Add priority boosting for TREE_PREEMPT_RCU, similar to that for
    TINY_PREEMPT_RCU. This is enabled by the default-off RCU_BOOST
    kernel configuration parameter. The priority to which preempted
    RCU readers are boosted is controlled by the RCU_BOOST_PRIO config
    parameter (defaulting to real-time priority 1), and the time to wait
    before boosting the readers blocking a given grace period is
    controlled by the RCU_BOOST_DELAY config parameter (defaulting to
    500 milliseconds).

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     
  • If RCU priority boosting is to be meaningful, callback invocation must
    be boosted in addition to preempted RCU readers. Otherwise, in the
    presence of CPU-bound real-time threads, the grace period ends, but
    the callbacks don't
    get invoked. If the callbacks don't get invoked, the associated memory
    doesn't get freed, so the system is still subject to OOM.

    But it is not reasonable to priority-boost RCU_SOFTIRQ, so this commit
    moves the callback invocations to a kthread, which can be boosted easily.

    Also add comments and properly synchronize all accesses to
    rcu_cpu_kthread_task, as suggested by Lai Jiangshan.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     
  • Combine the current TREE_PREEMPT_RCU ->blocked_tasks[] lists in the
    rcu_node structure into a single ->blkd_tasks list with ->gp_tasks
    and ->exp_tasks tail pointers. This is in preparation for RCU priority
    boosting, which will add a third dimension to the combinatorial explosion
    in the ->blocked_tasks[] case, but simply a third pointer in the new
    ->blkd_tasks case.

    Also update the documentation to reflect the ->blocked_tasks[] merge.
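
    A sketch of the resulting rcu_node fields (field names per the
    description above; the comments are editorial):

        struct list_head blkd_tasks;  /* tasks blocked in an RCU read-side
                                         critical section */
        struct list_head *gp_tasks;   /* first task blocking the current
                                         grace period */
        struct list_head *exp_tasks;  /* first task blocking the current
                                         expedited grace period */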

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     
    Commit d09b62d fixed grace-period synchronization, but left in place
    some smp_mb() invocations in rcu_process_callbacks() that are no longer
    needed; sheer paranoia had prevented them from being removed. This
    commit removes
    them and provides a proof of correctness in their absence. It also adds
    a memory barrier to rcu_report_qs_rsp() immediately before the update to
    rsp->completed in order to handle the theoretical possibility that the
    compiler or CPU might move massive quantities of code into a lock-based
    critical section. This also proves that the sheer paranoia was not
    entirely unjustified, at least from a theoretical point of view.

    In addition, the old dyntick-idle synchronization depended on the fact
    that grace periods were many milliseconds in duration, so that it could
    be assumed that no dyntick-idle CPU could reorder a memory reference
    across an entire grace period. Unfortunately for this design, the
    addition of expedited grace periods breaks this assumption, which has
    the unfortunate side-effect of requiring atomic operations in the
    functions that track dyntick-idle state for RCU. (There is some hope
    that the algorithms used in user-level RCU might be applied here, but
    some work is required to handle the NMIs that user-space applications
    can happily ignore. For the short term, better safe than sorry.)

    This proof assumes that neither compiler nor CPU will allow a lock
    acquisition and release to be reordered, as doing so can result in
    deadlock. The proof is as follows:

    1. A given CPU declares a quiescent state under the protection of
    its leaf rcu_node's lock.

    2. If there is more than one level of rcu_node hierarchy, the
    last CPU to declare a quiescent state will also acquire the
    ->lock of the next rcu_node up in the hierarchy, but only
    after releasing the lower level's lock. The acquisition of this
    lock clearly cannot occur prior to the acquisition of the leaf
    node's lock.

    3. Step 2 repeats until we reach the root rcu_node structure.
    Please note again that only one lock is held at a time through
    this process. The acquisition of the root rcu_node's ->lock
    must occur after the release of that of the leaf rcu_node.

    4. At this point, we set the ->completed field in the rcu_state
    structure in rcu_report_qs_rsp(). However, if the rcu_node
    hierarchy contains only one rcu_node, then in theory the code
    preceding the quiescent state could leak into the critical
    section. We therefore precede the update of ->completed with a
    memory barrier. All CPUs will therefore agree that any updates
    preceding any report of a quiescent state will have happened
    before the update of ->completed.

    5. Regardless of whether a new grace period is needed, rcu_start_gp()
    will propagate the new value of ->completed to all of the leaf
    rcu_node structures, under the protection of each rcu_node's ->lock.
    If a new grace period is needed immediately, this propagation
    will occur in the same critical section that ->completed was
    set in, but courtesy of the memory barrier in #4 above, is still
    seen to follow any pre-quiescent-state activity.

    6. When a given CPU invokes __rcu_process_gp_end(), it becomes
    aware of the end of the old grace period and therefore makes
    any RCU callbacks that were waiting on that grace period eligible
    for invocation.

    If this CPU is the same one that detected the end of the grace
    period, and if there is but a single rcu_node in the hierarchy,
    we will still be in the single critical section. In this case,
    the memory barrier in step #4 guarantees that all callbacks will
    be seen to execute after each CPU's quiescent state.

    On the other hand, if this is a different CPU, it will acquire
    the leaf rcu_node's ->lock, and will again be serialized after
    each CPU's quiescent state for the old grace period.

    On the strength of this proof, this commit therefore removes the memory
    barriers from rcu_process_callbacks() and adds one to rcu_report_qs_rsp().
    The effect is to reduce the number of memory barriers by one and to
    reduce the frequency of execution from about once per scheduling tick
    per CPU to once per grace period.

    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     
  • The RCU CPU stall warnings can now be controlled using the
    rcu_cpu_stall_suppress boot-time parameter or via the same parameter
    from sysfs. There is therefore no longer any reason to have
    kernel config parameters for this feature. This commit therefore
    removes the RCU_CPU_STALL_DETECTOR and RCU_CPU_STALL_DETECTOR_RUNNABLE
    kernel config parameters. The RCU_CPU_STALL_TIMEOUT parameter remains
    to allow the timeout to be tuned and the RCU_CPU_STALL_VERBOSE parameter
    remains to allow task-stall information to be suppressed if desired.

    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     

08 Jan, 2011

1 commit

  • * 'for-2.6.38' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (30 commits)
    gameport: use this_cpu_read instead of lookup
    x86: udelay: Use this_cpu_read to avoid address calculation
    x86: Use this_cpu_inc_return for nmi counter
    x86: Replace uses of current_cpu_data with this_cpu ops
    x86: Use this_cpu_ops to optimize code
    vmstat: User per cpu atomics to avoid interrupt disable / enable
    irq_work: Use per cpu atomics instead of regular atomics
    cpuops: Use cmpxchg for xchg to avoid lock semantics
    x86: this_cpu_cmpxchg and this_cpu_xchg operations
    percpu: Generic this_cpu_cmpxchg() and this_cpu_xchg support
    percpu,x86: relocate this_cpu_add_return() and friends
    connector: Use this_cpu operations
    xen: Use this_cpu_inc_return
    taskstats: Use this_cpu_ops
    random: Use this_cpu_inc_return
    fs: Use this_cpu_inc_return in buffer.c
    highmem: Use this_cpu_xx_return() operations
    vmstat: Use this_cpu_inc_return for vm statistics
    x86: Support for this_cpu_add, sub, dec, inc_return
    percpu: Generic support for this_cpu_add, sub, dec, inc_return
    ...

    Fixed up conflicts: in arch/x86/kernel/{apic/nmi.c, apic/x2apic_uv_x.c, process.c}
    as per Tejun.

    Linus Torvalds
     

18 Dec, 2010

3 commits

  • When the current __call_rcu() function was written, the expedited
    APIs did not exist. The __call_rcu() implementation therefore went
    to great lengths to detect the end of old grace periods and to start
    new ones, all in the name of reducing grace-period latency. Now the
    expedited APIs do exist, and the usage of __call_rcu() has increased
    considerably. This commit therefore causes __call_rcu() to avoid
    worrying about grace periods unless there are a large number of
    RCU callbacks stacked up on the current CPU.
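
    A hedged sketch of the resulting fast path (the qhimark-based condition
    follows the kernel's convention; the exact test in the commit may
    differ):

        /* Enqueue the callback unconditionally... */
        *rdp->nxttail[RCU_NEXT_TAIL] = head;
        rdp->nxttail[RCU_NEXT_TAIL] = &head->next;

        /* ...but worry about grace periods only on callback overload. */
        if (unlikely(++rdp->qlen > rdp->qlen_last_fqs_check + qhimark)) {
                /* Check for the end of old grace periods, start a new
                 * one, or force quiescent states, as appropriate. */
                rdp->qlen_last_fqs_check = rdp->qlen;
        }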

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Some recent benchmarks have indicated possible lock contention on the
    leaf-level rcu_node locks. This commit therefore limits the number of
    CPUs per leaf-level rcu_node structure to 16, in other words, there
    can be at most 16 rcu_data structures fanning into a given rcu_node
    structure. Prior to this, the limit was 32 on 32-bit systems and 64 on
    64-bit systems.

    Note that the fanout of non-leaf rcu_node structures is unchanged. The
    organization of accesses to the rcu_node tree is such that references
    to non-leaf rcu_node structures are much less frequent than to the
    leaf structures.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Use the CPU's bit in rnp->qsmask to determine whether or not the CPU
    should try to report a quiescent state. Handle overflow in the check
    for rdp->gpnum having fallen behind.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney