14 Jan, 2009

1 commit


04 Jan, 2009

2 commits


01 Jan, 2009

1 commit

  • Impact: use new cpumask API.

    rcu_ctrlblk contains a cpumask, and it's highly optimized, so I don't want
    a cpumask_var_t (i.e., a pointer) for the CONFIG_CPUMASK_OFFSTACK case. It
    could use a dangling bitmap, allocated in __rcu_init to save memory, but
    for the moment we use a bitmap.

    (Eventually 'struct cpumask' will be undefined for CONFIG_CPUMASK_OFFSTACK,
    so we use a bitmap here to show we really mean it).

    We remove the on-stack cpumasks, using cpumask_var_t for
    rcu_torture_shuffle_tasks() and for_each_cpu_and() in
    force_quiescent_state(); the bitmap approach is sketched after this entry.

    Signed-off-by: Rusty Russell

    Rusty Russell
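
    A minimal sketch of the bitmap approach described above; field and
    function names are illustrative, not the actual patch:

        /* Keep the mask inline as a bitmap rather than behind a
         * cpumask_var_t pointer: rcu_ctrlblk is a single, hot,
         * statically allocated structure. */
        struct rcu_ctrlblk {
            long cur;                          /* current batch number */
            long completed;                    /* last completed batch */
            DECLARE_BITMAP(cpumask, NR_CPUS);  /* CPUs needing to pass
                                                * a quiescent state */
        };

        /* Callers go through to_cpumask() to use the cpumask API. */
        static void cpu_quiet(int cpu, struct rcu_ctrlblk *rcp)
        {
            cpumask_clear_cpu(cpu, to_cpumask(rcp->cpumask));
        }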
     

25 Nov, 2008

1 commit

  • Impact: (future) size reduction for large NR_CPUS.

    Dynamically allocating cpumasks (when CONFIG_CPUMASK_OFFSTACK) saves
    space when nr_cpu_ids is small but CONFIG_NR_CPUS is big. cpumask_var_t
    is just a struct cpumask for !CONFIG_CPUMASK_OFFSTACK; the allocation
    pattern is sketched after this entry.

    Signed-off-by: Rusty Russell
    Signed-off-by: Ingo Molnar

    Rusty Russell
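
    A minimal sketch of the cpumask_var_t allocation pattern described above
    (hypothetical caller; some_mask is illustrative). When
    CONFIG_CPUMASK_OFFSTACK=y the mask is heap-allocated; otherwise
    cpumask_var_t is a plain struct cpumask and the alloc cannot fail:

        cpumask_var_t tmp;

        if (!alloc_cpumask_var(&tmp, GFP_KERNEL))
            return -ENOMEM;                 /* only possible off-stack */
        cpumask_and(tmp, cpu_online_mask, some_mask);
        /* ... use tmp instead of an on-stack cpumask ... */
        free_cpumask_var(tmp);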
     

04 Nov, 2008

1 commit


03 Oct, 2008

2 commits

  • fix the !CONFIG_RCU_CPU_STALL_DETECTOR path, where the check_cpu_stall()
    stub no longer matched the real function's signature (see the sketch
    after this entry):

    kernel/rcuclassic.c: In function '__rcu_pending':
    kernel/rcuclassic.c:609: error: too few arguments to function 'check_cpu_stall'

    Signed-off-by: Ingo Molnar

    Ingo Molnar
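
    The underlying rule: when a function has a real implementation under an
    #ifdef and an empty stub otherwise, both must keep the same signature.
    A generic sketch (the argument lists are illustrative):

        #ifdef CONFIG_RCU_CPU_STALL_DETECTOR
        static void check_cpu_stall(struct rcu_ctrlblk *rcp,
                                    struct rcu_data *rdp)
        {
            /* real stall detection */
        }
        #else
        static inline void check_cpu_stall(struct rcu_ctrlblk *rcp,
                                           struct rcu_data *rdp)
        {
            /* stub: must accept the same arguments as the real version,
             * or every !CONFIG build breaks at the call site */
        }
        #endif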
     
  • This patch adds stalled-CPU detection to Classic RCU. This capability
    is enabled by a new config variable CONFIG_RCU_CPU_STALL_DETECTOR, which
    defaults to disabled.

    This is a debugging feature to detect infinite loops in kernel code, not
    something that non-kernel-hackers would be expected to care about.

    This feature can detect looping CPUs in !PREEMPT builds and looping CPUs
    with preemption disabled in PREEMPT builds. This is essentially a port of
    this functionality from the treercu patch, replacing the stall debug patch
    that is already in tip/core/rcu (commit 67182ae1c4).

    The changes from the patch in tip/core/rcu include making the config
    variable name match that in treercu, changing from seconds to jiffies to
    avoid spurious warnings, and printing a boot message when this feature
    is enabled.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

23 Sep, 2008

1 commit


27 Aug, 2008

1 commit

  • CC kernel/rcuclassic.o
    kernel/rcuclassic.c: In function 'rcu_init_percpu_data':
    kernel/rcuclassic.c:705: warning: comparison of distinct pointer types lacks a cast
    kernel/rcuclassic.c:713: warning: comparison of distinct pointer types lacks a cast

    flags should be unsigned long (the reason is sketched after this entry).

    Signed-off-by: Hiroshi Shimamoto
    Signed-off-by: Ingo Molnar

    Hiroshi Shimamoto
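
    The warning most likely comes from the type check inside
    spin_lock_irqsave(): the macro stores the saved irq state in an
    unsigned long, and typecheck() reports a mismatch as a bogus-looking
    pointer comparison. A minimal sketch:

        unsigned long flags;    /* must be unsigned long, not int/long */

        spin_lock_irqsave(&rcp->lock, flags);
        /* ... critical section ... */
        spin_unlock_irqrestore(&rcp->lock, flags);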
     

21 Aug, 2008

1 commit

  • Some earlier tip/core/rcu patches caused RCU to incorrectly enable irqs
    too early in boot. This caused Yinghai's repeated-kexec testing to
    hit oopses, presumably due to device interrupts left over from
    the prior kernel instance (which would oops the newly booting kernel
    before it got a chance to reset said devices). This patch therefore
    converts all the local_irq_disable()s in rcuclassic.c to local_irq_save()
    (the pattern is sketched after this entry).

    Besides, I never did like local_irq_disable() anyway. ;-)

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Yinghai Lu
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
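
    A sketch of the conversion pattern (surrounding context illustrative):

        /* Before: unconditionally re-enables irqs on the way out, which
         * is wrong if the caller still needs them off -- e.g. early in
         * boot, before irqs are enabled for the first time. */
        local_irq_disable();
        /* ... update RCU state ... */
        local_irq_enable();

        /* After: restore whatever state the caller had. */
        unsigned long flags;

        local_irq_save(flags);
        /* ... update RCU state ... */
        local_irq_restore(flags);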
     

19 Aug, 2008

2 commits

  • fix:

    CC kernel/rcuclassic.o
    kernel/rcuclassic.c: In function '__rcu_process_callbacks':
    kernel/rcuclassic.c:561: error: 'flags' undeclared (first use in this function)
    kernel/rcuclassic.c:561: error: (Each undeclared identifier is reported only once
    kernel/rcuclassic.c:561: error: for each function it appears in.)

    Declare missing variable flags.

    Signed-off-by: Hiroshi Shimamoto
    Signed-off-by: Ingo Molnar

    Hiroshi Shimamoto
     
  • Given that the rcp->lock is now acquired from call_rcu(), which can be
    invoked from irq-disabled regions, all acquisitions need to disable irqs.
    The following patch fixes this; the failure mode is sketched after this
    entry.

    Although I don't have any reason to believe that this is the cause of
    Yinghai's oops, it does need to be fixed.

    Signed-off-by: Paul E. McKenney
    Cc: Yinghai Lu
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
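
    A sketch of the self-deadlock this rules out (rcp->lock as above):

        /* Process context, irqs enabled -- broken: */
        spin_lock(&rcp->lock);
        /* An interrupt can arrive here; if its handler calls
         * call_rcu(), which now also takes rcp->lock, the CPU spins
         * forever on a lock it already holds. */
        spin_unlock(&rcp->lock);

        /* Correct: keep irqs off for the duration. */
        unsigned long flags;

        spin_lock_irqsave(&rcp->lock, flags);
        /* ... */
        spin_unlock_irqrestore(&rcp->lock, flags);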
     

17 Aug, 2008

1 commit

  • On Fri, Aug 15, 2008 at 04:24:30PM +0200, Ingo Molnar wrote:
    >
    > Paul,
    >
    > one of your two recent RCU patches caused this lockdep splat in -tip
    > testing:
    >
    > ------------------->
    > Brought up 2 CPUs
    > Total of 2 processors activated (6850.87 BogoMIPS).
    > PM: Adding info for No Bus:platform
    > khelper used greatest stack depth: 3124 bytes left
    >
    > =================================
    > [ INFO: inconsistent lock state ]
    > 2.6.27-rc3-tip #1
    > ---------------------------------
    > inconsistent {softirq-on-W} -> {in-softirq-W} usage.
    > ksoftirqd/0/4 [HC0[0]:SC1[1]:HE1:SE0] takes:
    > (&rcu_ctrlblk.lock){-+..}, at: [] __rcu_process_callbacks+0x1ac/0x1f0
    > {softirq-on-W} state was registered at:
    > [] __lock_acquire+0x3f4/0x5b0
    > [] lock_acquire+0x89/0xc0
    > [] _spin_lock+0x3b/0x70
    > [] rcu_init_percpu_data+0x29/0x80
    > [] rcu_cpu_notify+0xaf/0xd0
    > [] notifier_call_chain+0x2d/0x60
    > [] __raw_notifier_call_chain+0x1e/0x30
    > [] _cpu_up+0x79/0x110
    > [] cpu_up+0x4d/0x70
    > [] kernel_init+0xb1/0x200
    > [] kernel_thread_helper+0x7/0x10
    > [] 0xffffffff
    > irq event stamp: 14
    > hardirqs last enabled at (14): [] trace_hardirqs_on+0xb/0x10
    > hardirqs last disabled at (13): [] trace_hardirqs_off+0xb/0x10
    > softirqs last enabled at (0): [] copy_process+0x276/0x1190
    > softirqs last disabled at (11): [] call_on_stack+0x1a/0x30
    >
    > other info that might help us debug this:
    > no locks held by ksoftirqd/0/4.
    >
    > stack backtrace:
    > Pid: 4, comm: ksoftirqd/0 Not tainted 2.6.27-rc3-tip #1
    > [] print_usage_bug+0x16c/0x1b0
    > [] mark_lock+0xa75/0xb10
    > [] ? sched_clock+0x15/0x30
    > [] __lock_acquire+0x3ad/0x5b0
    > [] lock_acquire+0x89/0xc0
    > [] ? __rcu_process_callbacks+0x1ac/0x1f0
    > [] _spin_lock+0x3b/0x70
    > [] ? __rcu_process_callbacks+0x1ac/0x1f0
    > [] __rcu_process_callbacks+0x1ac/0x1f0
    > [] rcu_process_callbacks+0x26/0x50
    > [] __do_softirq+0x95/0x120
    > [] ? __do_softirq+0x0/0x120
    > [] call_on_stack+0x1a/0x30
    > [] ? ksoftirqd+0x96/0x110
    > [] ? ksoftirqd+0x0/0x110
    > [] ? kthread+0x47/0x80
    > [] ? kthread+0x0/0x80
    > [] ? kernel_thread_helper+0x7/0x10
    > =======================
    > calling init_cpufreq_transition_notifier_list+0x0/0x20
    > initcall init_cpufreq_transition_notifier_list+0x0/0x20 returned 0 after 0 msecs
    > calling net_ns_init+0x0/0x190
    > net_namespace: 676 bytes
    > initcall net_ns_init+0x0/0x190 returned 0 after 0 msecs
    > calling cpufreq_tsc+0x0/0x20
    > initcall cpufreq_tsc+0x0/0x20 returned 0 after 0 msecs
    > calling reboot_init+0x0/0x20
    > initcall reboot_init+0x0/0x20 returned 0 after 0 msecs
    > calling print_banner+0x0/0x10
    > Booting paravirtualized kernel on bare hardware
    >
    >
    > my guess is on:
    >
    > commit 1f7b94cd3d564901f9e04a8bc5832ae7bfd690a0
    > Author: Paul E. McKenney
    > Date: Tue Aug 5 09:21:44 2008 -0700
    >
    > rcu: classic RCU locking and memory-barrier cleanups
    >
    > Ingo

    Fixes a problem detected by lockdep in which rcu->lock was acquired
    both in irq context and in process context, but without disabling irqs
    in the process-context acquisitions.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

15 Aug, 2008

2 commits

  • This patch simplifies the locking and memory-barrier usage in the Classic
    RCU grace-period-detection mechanism, incorporating Lai Jiangshan's
    feedback from the earlier version (http://lkml.org/lkml/2008/8/1/400
    and http://lkml.org/lkml/2008/8/3/43). Passed 10 hours of
    rcutorture concurrent with CPUs being put online and taken offline on
    a 128-hardware-thread Power machine. My apologies to whoever in the
    Eastern Hemisphere was planning to use this machine over the Western
    Hemisphere night, but it was sitting idle and...

    So this is ready for tip/core/rcu.

    This patch is in preparation for moving to a hierarchical
    algorithm to support very large SMP machines -- requested by some
    people at OLS, and there seem to have been a few recent patches in the
    4096-CPU direction as well. The general idea is to move to a much more
    conservative concurrency design, then apply a hierarchy to reduce
    contention on the global lock by a few orders of magnitude (larger
    machines would see greater reductions). The reason for taking a
    conservative approach is that this code isn't on any fast path.

    Prototype in progress.

    This patch is against the linux-tip git tree (tip/core/rcu). If you
    wish to test this against 2.6.26, use the following set of patches:

    http://www.rdrop.com/users/paulmck/patches/2.6.26-ljsimp-1.patch
    http://www.rdrop.com/users/paulmck/patches/2.6.26-ljsimpfix-3.patch

    The first patch combines commits 5127bed588a2f8f3a1f732de2a8a190b7df5dce3
    and 3cac97cbb14aed00d83eb33d4613b0fe3aaea863 from Lai Jiangshan,
    and the second patch contains my changes.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • One small change needed to keep from flooding the console when one
    CPU notices that another is AWOL. Unless I am missing something subtle.
    Otherwise the cleanups look good!

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

11 Aug, 2008

3 commits

  • small cleanups.

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • this is a diagnostic patch for Classic RCU.

    The approach is to record a timestamp at the beginning
    of the grace period (in rcu_start_batch()), then have
    rcu_check_callbacks() complain if:

    1. it is running on a CPU that has been holding up grace periods for
    a long time (say one second). This will identify the culprit,
    assuming that the culprit has not disabled hardware irqs,
    instruction execution, or some such.

    2. it is running on a CPU that is not holding up grace periods,
    but grace periods have been held up for an even longer time
    (say two seconds).

    It is enabled via the default-off CONFIG_DEBUG_RCU_STALL config option;
    a condensed sketch of the check follows this entry.

    Rather than exponential backoff, it backs off to once per 30 seconds.
    My feeling upon thinking on it was that if you have stalled RCU grace
    periods for that long, a few extra printk() messages are probably the
    least of your worries...

    Signed-off-by: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Yinghai Lu
    Cc: David Witbrodt
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
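
    A condensed, hypothetical sketch of the check described above (field
    names and thresholds illustrative, not the actual patch):

        /* rcp->gp_check is set to jiffies + HZ in rcu_start_batch().
         * Called from rcu_check_callbacks(). */
        static void check_rcu_stall(struct rcu_ctrlblk *rcp)
        {
            if (time_before(jiffies, rcp->gp_check))
                return;         /* grace period not yet suspicious */

            if (cpu_isset(smp_processor_id(), rcp->cpumask)) {
                /* Case 1: this CPU is holding up the grace period. */
                printk(KERN_ERR "RCU: CPU %d stalling grace period\n",
                       smp_processor_id());
                rcp->gp_check = jiffies + 30 * HZ;  /* not exponential:
                                                     * one report / 30s */
            } else if (time_after(jiffies, rcp->gp_check + HZ)) {
                /* Case 2: some other CPU has been AWOL even longer. */
                printk(KERN_ERR "RCU: grace period stalled\n");
                rcp->gp_check = jiffies + 30 * HZ;
            }
        }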
     
  • Ingo Molnar
     

28 Jul, 2008

1 commit


18 Jul, 2008

2 commits

  • This is v2; it differs a little from the v1 that I
    had sent to lkml:
    use ACCESS_ONCE
    use rcu_batch_after()/rcu_batch_before() for batch # comparison
    (these helpers are sketched after this entry).

    rcutorture test result:
    (hotplugs: do cpu-online/offline once per second)

    No CONFIG_NO_HZ: OK, 12 hours
    No CONFIG_NO_HZ, hotplugs: OK, 12 hours
    CONFIG_NO_HZ=y: OK, 24 hours
    CONFIG_NO_HZ=y, hotplugs: Failed.
    (This also failed without my patch applied; exactly the same bug occurred:
    http://lkml.org/lkml/2008/7/3/24)

    v1's email thread:
    http://lkml.org/lkml/2008/6/2/539

    v1's description:

    The current callbacks-processing code/algorithm is very efficient
    and technical, but when I studied it I found a disadvantage:

    In multi-CPU systems, when a new RCU callback is queued
    (call_rcu[_bh]), the current implementation will very likely invoke
    that callback only after the grace period for the batch with batch
    number = rcp->cur+2 has completed. Actually, the callback could be
    invoked once the grace period for the batch with batch number
    = rcp->cur+1 has completed. The delayed invocation means that the
    latency of synchronize_rcu() is extended. More importantly, the
    callbacks usually free memory, so that work is delayed too; it's
    necessary for the reclaimer to free memory as soon as possible when
    free memory is low.

    A very simple approach can solve this problem: add a field
    (struct rcu_head::batch) to record the batch number for each RCU
    callback. When a new RCU callback is queued, we determine its batch
    number (head->batch = rcp->cur+1), and when we process callbacks we
    move the callback to rdp->donelist if we find that head->batch has
    completed. This simple approach reduces the wait time before
    invocation a lot (from about 2.5 grace periods to about 1.5 grace
    periods on average in multi-CPU systems).

    That is my algorithm, but my implementation does not actually add a
    field to struct rcu_head. We only need to remember the last two
    batches and their batch numbers, because those two batches contain
    all the entries whose grace periods have not yet completed. So we
    use a special linked list rather than adding a field; please see the
    comment of struct rcu_data.

    Signed-off-by: Lai Jiangshan
    Cc: "Paul E. McKenney"
    Cc: Dipankar Sarma
    Cc: Gautham Shenoy
    Cc: Dhaval Giani
    Cc: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Lai Jiangshan
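
    The batch comparisons mentioned above handle counter wraparound by
    signed subtraction, the same trick as the kernel's
    time_before()/time_after(); the helpers look like this:

        /* Return non-zero if batch a is before (after) batch b,
         * correct even when the batch counter wraps. */
        static inline int rcu_batch_before(long a, long b)
        {
            return (a - b) < 0;
        }

        static inline int rcu_batch_after(long a, long b)
        {
            return (a - b) > 0;
        }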
     
  • use a batch number (rcp->pending) instead of a flag (rcp->next_pending)

    rcu_start_batch() needed to change this flag, so mb()s were needed
    for memory-access safety.

    But (after this patch is applied) rcu_start_batch() does not change
    the batch number (rcp->pending); rcp->pending is managed by
    __rcu_process_callbacks() only, and the troublesome mb()s are
    eliminated (the resulting invariant is sketched after this entry).

    And the code looks simpler and clearer.

    Signed-off-by: Lai Jiangshan
    Cc: "Paul E. McKenney"
    Cc: Dipankar Sarma
    Cc: Gautham Shenoy
    Cc: Dhaval Giani
    Cc: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Lai Jiangshan
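
    A sketch of the single-writer invariant this creates (fields as in
    Classic RCU; comments illustrative):

        struct rcu_ctrlblk {
            long cur;        /* current batch number */
            long completed;  /* last completed batch */
            long pending;    /* highest batch requested; written only by
                              * __rcu_process_callbacks() under ->lock,
                              * so no mb() pairing is required */
        };

        /* In __rcu_process_callbacks(), with rcp->lock held: */
        if (rcu_batch_after(rdp->batch, rcp->pending))
            rcp->pending = rdp->batch;   /* request a further batch */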
     

16 Jul, 2008

2 commits


15 Jul, 2008

1 commit


14 Jul, 2008

1 commit

  • When callbacks are moved from an offline cpu to this cpu,
    the qlen field of this rdp should be updated (see the sketch after
    this entry).

    [ Paul E. McKenney: ]

    The effect of this bug would be for force_quiescent_state() to be invoked
    when it should not and vice versa -- wasting cycles in the first case
    and letting RCU callbacks remain piled up in the second case. The bug
    is thus "benign" in that it does not result in premature grace-period
    termination, but should of course be fixed nonetheless.

    Preemption is disabled by the caller's get_cpu_var(), so we are guaranteed
    to remain on the same CPU, as required. The local_irq_disable() is indeed
    needed, otherwise, an interrupt might invoke call_rcu() or call_rcu_bh(),
    which could cause that interrupt's increment of ->qlen to be lost.

    Signed-off-by: Lai Jiangshan
    Cc: Andrew Morton
    Reviewed-by: Paul E. McKenney
    Signed-off-by: Ingo Molnar

    Lai Jiangshan
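
    A sketch of the fixed move (helper and variable names illustrative):

        unsigned long flags;

        /* Preemption is already off via the caller's get_cpu_var();
         * irqs must also be off so an interrupt's call_rcu() cannot
         * race with the ->qlen update. */
        local_irq_save(flags);

        *this_rdp->nxttail = list;       /* splice the dead CPU's list */
        if (list)
            this_rdp->nxttail = tail;
        this_rdp->qlen += qlen;          /* the missing update: keeps the
                                          * force_quiescent_state()
                                          * heuristic accurate */

        local_irq_restore(flags);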
     

11 Jul, 2008

1 commit


06 Jul, 2008

1 commit


01 Jul, 2008

1 commit

  • Dhaval Giani reported this warning during cpu hotplug stress-tests:

    | On running kernel compiles in parallel with cpu hotplug:
    |
    | WARNING: at arch/x86/kernel/smp.c:118
    | native_smp_send_reschedule+0x21/0x36()
    | Modules linked in:
    | Pid: 27483, comm: cc1 Not tainted 2.6.26-rc7 #1
    | [...]
    | [] native_smp_send_reschedule+0x21/0x36
    | [] force_quiescent_state+0x47/0x57
    | [] call_rcu+0x51/0x6d
    | [] __fput+0x130/0x158
    | [] fput+0x17/0x19
    | [] filp_close+0x4d/0x57
    | [] sys_close+0x5c/0x97

    IMHO the warning is a spurious one.

    cpu_online_map is updated by the _cpu_down() using stop_machine_run().
    Since force_quiescent_state is invoked from irqs disabled section,
    stop_machine_run() won't be executing while a cpu is executing
    force_quiescent_state(). Hence the cpu_online_map is stable while we're
    in the irq disabled section.

    However, a cpu might have been offlined _just_ before we disabled irqs
    while entering force_quiescent_state(). And rcu subsystem might not yet
    have handled the CPU_DEAD notification, leading to the offlined cpu's
    bit being set in the rcp->cpumask.

    Hence we compute cpumask = (rcp->cpumask & cpu_online_map) to prevent
    sending smp_send_reschedule() to an offlined CPU; the resulting logic
    is sketched after this entry.

    Here's the timeline:

    CPU_A                                     CPU_B
    --------------------------------------------------------------
    cpu_down():                               .
    .                                         .
    .                                         .
    stop_machine(): /* disables preemption,   .
                     * and irqs */            .
    .                                         .
    .                                         .
    take_cpu_down();                          .
    .                                         .
    .                                         .
    .                                         .
    cpu_disable(); /* this removes cpu        .
                    * from cpu_online_map     .
                    */                        .
    .                                         .
    .                                         .
    restart_machine(); /* enables irqs */     .
    ------WINDOW DURING WHICH rcp->cpumask is stale ---------------
    .                                         call_rcu();
    .                                         /* disables irqs here */
    .                                         .force_quiescent_state();
    .CPU_DEAD:                                .for_each_cpu(rcp->cpumask)
    .                                         .   smp_send_reschedule();
    .                                         .
    .                                         . WARN_ON() for offlined CPU!
    .
    .
    .
    rcu_cpu_notify:
    .
    -------- WINDOW ENDS ------------------------------------------
    rcu_offline_cpu() /* Which calls cpu_quiet()
                       * which removes
                       * cpu from rcp->cpumask.
                       */

    If a new batch was started just before calling stop_machine_run(), the
    to-be-offlined cpu is still present in rcp->cpumask.

    During a cpu-offline, from take_cpu_down(), we queue an rt-prio idle
    task as the next task to be picked by the scheduler. We also call
    cpu_disable() which will disable any further interrupts and remove the
    cpu's bit from the cpu_online_map.

    Once the stop_machine_run() successfully calls take_cpu_down(), it calls
    schedule(). That's the last time a schedule is called on the offlined
    cpu, and hence the last time when rdp->passed_quiesc will be set to 1
    through rcu_qsctr_inc().

    But cpu_quiet() on this cpu will be called only when the next
    RCU_SOFTIRQ occurs on this CPU. So at that point, the offlined CPU
    is still set in rcp->cpumask.

    Now coming back to the idle_task which truly offlines the CPU: it does
    check for a pending RCU and raises the softirq, since it will find
    rdp->passed_quiesc to be 0 in this case. However, since the cpu is
    offline I am not sure if the softirq will trigger on the CPU.

    Even if it doesn't the rcu_offline_cpu() will find that rcp->completed
    is not the same as rcp->cur, which means that our cpu could be holding
    up the grace period progression. Hence we call cpu_quiet() and move
    ahead.

    But because of the window explained in the timeline, we could still have
    a call_rcu() before the RCU subsystem executes its CPU_DEAD
    notification, and we send smp_send_reschedule() to the offlined cpu
    while trying to force the quiescent states. The appended patch adds
    comments and prevents checking for an offlined cpu every time.

    Reported-by: Dhaval Giani
    Signed-off-by: Gautham R Shenoy
    Acked-by: Dhaval Giani
    Cc: Dipankar Sarma
    Cc: laijs@cn.fujitsu.com
    Cc: Peter Zijlstra
    Cc: Rusty Russell
    Cc: "Paul E. McKenney"
    Signed-off-by: Ingo Molnar

    Gautham R Shenoy
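
    A sketch of the resulting force_quiescent_state() logic, using the
    cpumask API of that era (details illustrative):

        cpumask_t cpumask;
        int cpu;

        /* Mask out CPUs that went offline before our irq-disabled
         * section began but whose CPU_DEAD notification has not yet
         * cleared them from rcp->cpumask. */
        cpus_and(cpumask, rcp->cpumask, cpu_online_map);
        cpu_clear(rdp->cpu, cpumask);        /* never prod ourselves */
        for_each_cpu_mask(cpu, cpumask)
            smp_send_reschedule(cpu);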
     

25 May, 2008

1 commit

  • As git-grep shows, open_softirq() is always called with the last argument
    being NULL

    block/blk-core.c: open_softirq(BLOCK_SOFTIRQ, blk_done_softirq, NULL);
    kernel/hrtimer.c: open_softirq(HRTIMER_SOFTIRQ, run_hrtimer_softirq, NULL);
    kernel/rcuclassic.c: open_softirq(RCU_SOFTIRQ, rcu_process_callbacks, NULL);
    kernel/rcupreempt.c: open_softirq(RCU_SOFTIRQ, rcu_process_callbacks, NULL);
    kernel/sched.c: open_softirq(SCHED_SOFTIRQ, run_rebalance_domains, NULL);
    kernel/softirq.c: open_softirq(TASKLET_SOFTIRQ, tasklet_action, NULL);
    kernel/softirq.c: open_softirq(HI_SOFTIRQ, tasklet_hi_action, NULL);
    kernel/timer.c: open_softirq(TIMER_SOFTIRQ, run_timer_softirq, NULL);
    net/core/dev.c: open_softirq(NET_TX_SOFTIRQ, net_tx_action, NULL);
    net/core/dev.c: open_softirq(NET_RX_SOFTIRQ, net_rx_action, NULL);

    This observation has already been made by Matthew Wilcox in June 2002
    (http://www.cs.helsinki.fi/linux/linux-kernel/2002-25/0687.html)

    "I notice that none of the current softirq routines use the data element
    passed to them."

    and the situation hasn't changed since then. So it appears we can safely
    remove that extra argument to save 128 (54) bytes of kernel data (text);
    the signature change is sketched after this entry.

    Signed-off-by: Carlos R. Mafra
    Signed-off-by: Ingo Molnar
    Signed-off-by: Thomas Gleixner

    Carlos R. Mafra
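
    The signature change, roughly, with the RCU caller as an example:

        /* Before: every caller passed NULL for the unused data pointer. */
        void open_softirq(int nr,
                          void (*action)(struct softirq_action *),
                          void *data);

        /* After: the dead parameter is gone ... */
        void open_softirq(int nr,
                          void (*action)(struct softirq_action *));

        /* ... and call sites shrink accordingly: */
        open_softirq(RCU_SOFTIRQ, rcu_process_callbacks);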
     

24 May, 2008

1 commit


19 May, 2008

1 commit

  • Add comments to the logic that infers quiescent states when interrupting
    from either user mode or the idle loop. Also add a memory barrier: it
    appears that James Huang was in fact onto something, as the scheduler
    is much less synchronization-happy than it once was, so we can no longer
    rely on its memory barriers in all cases.

    Signed-off-by: Paul E. McKenney
    Reported-by: James Huang
    Signed-off-by: Ingo Molnar
    Signed-off-by: Thomas Gleixner

    Paul E. McKenney
     

26 Jan, 2008

3 commits

  • This patch implements a new version of RCU which allows its read-side
    critical sections to be preempted. It uses a set of counter pairs
    to keep track of the read-side critical sections and flips them
    when all tasks exit their read-side critical sections (a simplified
    sketch follows this entry). The details
    of this implementation can be found in this paper -

    http://www.rdrop.com/users/paulmck/RCU/OLSrtRCU.2006.08.11a.pdf

    and the article-

    http://lwn.net/Articles/253651/

    This patch was developed as part of the -rt kernel effort and is
    meant to provide better latencies when read-side critical sections of
    RCU don't disable preemption. As a consequence of keeping track of RCU
    readers, the readers have a slight overhead (optimizations in the paper).
    This implementation co-exists with the "classic" RCU implementation
    and can be selected at compile time.

    Also includes RCU tracing summarized in debugfs.

    [ akpm@linux-foundation.org: build fixes on non-preempt architectures ]

    Signed-off-by: Gautham R Shenoy
    Signed-off-by: Dipankar Sarma
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
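
    A greatly simplified, hypothetical sketch of the counter-pair idea
    (the real rcupreempt.c uses per-CPU counter pairs, stores the index
    in the task, and needs memory barriers; none of that is shown here):

        static atomic_t ctr[2];
        static int idx;                 /* which counter new readers use */

        static int sketch_read_lock(void)
        {
            int i = ACCESS_ONCE(idx);

            atomic_inc(&ctr[i]);        /* announce this reader */
            return i;                   /* caller remembers its counter */
        }

        static void sketch_read_unlock(int i)
        {
            atomic_dec(&ctr[i]);
        }

        static void sketch_synchronize(void)
        {
            int old = ACCESS_ONCE(idx);

            ACCESS_ONCE(idx) = !old;            /* flip: new readers go
                                                 * to the other counter */
            while (atomic_read(&ctr[old]) != 0) /* wait for pre-existing
                                                 * readers to drain */
                cpu_relax();
        }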
     
  • Fix rcu_barrier() to work properly in a preemptive kernel environment.
    Also, the ordering of callbacks must be preserved while moving
    callbacks to another CPU during CPU hotplug.

    Signed-off-by: Gautham R Shenoy
    Signed-off-by: Dipankar Sarma
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • This patch re-organizes the RCU code to enable multiple implementations
    of RCU. Users of RCU continue to include rcupdate.h, and the
    RCU interfaces remain the same. This is in preparation for
    subsequently merging the preemptible RCU implementation (the header
    split is sketched after this entry).

    Signed-off-by: Gautham R Shenoy
    Signed-off-by: Dipankar Sarma
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
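
    Roughly how the header split looks once both flavors exist (sketch;
    the exact config symbol came with the preemptible-RCU merge):

        /* include/linux/rcupdate.h -- still the only header users see */
        #ifdef CONFIG_CLASSIC_RCU
        #include <linux/rcuclassic.h>
        #else
        #include <linux/rcupreempt.h>
        #endif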