12 Dec, 2011

6 commits

  • This commit adds simple rcutorture tests for srcu_read_lock_raw() and
    srcu_read_unlock_raw(). It does not test doing srcu_read_lock_raw()
    in an exception handler and releasing it in the corresponding process
    context.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
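
    A minimal sketch of the usage pattern being tested here, assuming the
    srcu_read_lock_raw()/srcu_read_unlock_raw() signatures of that era (the
    index returned by the lock primitive is handed explicitly to the matching
    unlock, which is what lets the pair span contexts):

        #include <linux/srcu.h>

        static struct srcu_struct raw_srcu;     /* init_srcu_struct() at init time */

        static void raw_reader_sketch(void)
        {
                int idx;

                idx = srcu_read_lock_raw(&raw_srcu);   /* enter read-side section */
                /* ... access SRCU-protected data here ... */
                srcu_read_unlock_raw(&raw_srcu, idx);  /* exit with the same index */
        }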
     
  • The rcutorture test now can automatically exercise CPU hotplug and
    collect success statistics, which can be correlated with other rcutorture
    activity. This permits rcutorture to completely exercise RCU regardless
    of what sort of userspace and filesystem layout is in use. Unfortunately,
    rcutorture is happy to attempt to offline CPUs that cannot be offlined,
    for example, CPU 0 in both the x86 and ARM architectures. Although this
    allows rcutorture testing to proceed normally, it confounds attempts at
    error analysis due to the resulting flood of spurious CPU-hotplug errors.

    Therefore, this commit uses the new cpu_is_hotpluggable() function to
    avoid attempting to offline CPUs that are not hotpluggable, which in
    turn avoids spurious CPU-hotplug errors.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
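
    A hedged sketch of the check described above; cpu_is_hotpluggable() and
    cpu_down() are the kernel interfaces of that era, while the wrapper name
    here is purely illustrative:

        #include <linux/cpu.h>
        #include <linux/kernel.h>

        /* Try to offline one CPU, skipping CPUs that cannot be offlined. */
        static void torture_offline_sketch(unsigned int cpu)
        {
                if (!cpu_online(cpu) || !cpu_is_hotpluggable(cpu))
                        return;                 /* e.g. CPU 0 on x86 and ARM */
                if (cpu_down(cpu))
                        pr_alert("offline of CPU %u failed\n", cpu);
        }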
     
  • Running CPU-hotplug operations concurrently with rcutorture has
    historically been a good way to find bugs in both RCU and CPU hotplug.
    This commit therefore adds an rcutorture module parameter called
    "onoff_interval" that causes a randomly selected CPU-hotplug operation to
    be executed at the specified interval, in seconds. The default value of
    "onoff_interval" is zero, which disables rcutorture-instigated CPU-hotplug
    operations.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
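
    A sketch of how such a knob is typically wired up (the names mirror the
    text above; the actual declaration in rcutorture.c may differ in detail):

        #include <linux/moduleparam.h>

        static int onoff_interval;      /* 0 = no rcutorture-instigated hotplug */
        module_param(onoff_interval, int, 0444);
        MODULE_PARM_DESC(onoff_interval,
                         "Seconds between CPU-hotplug operations, 0 to disable");

    With rcutorture built into the kernel, the value would be supplied on the
    boot command line as rcutorture.onoff_interval=<seconds>.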
     
  • Currently, if rcutorture is built into the kernel, it must be manually
    started or started from an init script. This is inconvenient for
    automated KVM testing, where it is good to be able to fully control
    rcutorture execution from the kernel parameters. This patch therefore
    adds a module parameter named "rcutorture_runnable" that defaults
    to zero ("don't start automatically"), but which can be set to one
    to cause rcutorture to start up immediately during boot.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
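
    A sketch of the boot-time control described above, assuming a plain integer
    module parameter that the init code consults before starting its kthreads:

        #include <linux/moduleparam.h>

        static int rcutorture_runnable;         /* default 0: do not start at boot */
        module_param(rcutorture_runnable, int, 0444);
        MODULE_PARM_DESC(rcutorture_runnable, "Start rcutorture at boot");

    Booting a built-in rcutorture with rcutorture.rcutorture_runnable=1 would
    then start the test immediately.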
     
  • Although it is easy to run rcutorture tests under KVM, there is currently
    no nice way to run such a test for a fixed time period, collect all of
    the rcutorture data, and then shut the system down cleanly. This commit
    therefore adds an rcutorture module parameter named "shutdown_secs" that
    specifies the run duration in seconds, after which rcutorture terminates
    the test and powers the system down. The default value for "shutdown_secs"
    is zero, which disables shutdown.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
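
    A rough sketch of the mechanism described, assuming a dedicated kthread and
    the kernel_power_off() interface (the names here are illustrative):

        #include <linux/delay.h>
        #include <linux/reboot.h>

        static int shutdown_secs;               /* 0 = never shut down (default) */

        static int torture_shutdown_sketch(void *unused)
        {
                ssleep(shutdown_secs);          /* let the test run its course */
                /* ... stop torture kthreads and print final statistics ... */
                kernel_power_off();             /* clean power-down for KVM runs */
                return 0;                       /* not reached */
        }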
     
  • Trace the rcutorture RCU accesses and dump the trace buffer when the
    first failure is detected.

    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
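
    A hedged sketch of the shape of this change, using the stock tracing
    interfaces: trace_printk() records each access in the trace buffer, and
    ftrace_dump() emits that buffer once, on the first detected failure:

        #include <linux/kernel.h>
        #include <linux/ftrace.h>
        #include <linux/atomic.h>

        static atomic_t n_torture_errors;       /* illustrative failure counter */

        static void torture_read_failed_sketch(int pipe_count)
        {
                trace_printk("rcutorture read failure: pipe_count %d\n", pipe_count);
                if (atomic_inc_return(&n_torture_errors) == 1)
                        ftrace_dump(DUMP_ALL);  /* dump the trace buffer once */
        }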
     

29 Sep, 2011

6 commits

  • One of the loops in rcu_torture_boost() fails to check kthread_should_stop(),
    and thus might be slowing or even stopping completion of rcutorture tests
    at rmmod time. This commit adds the kthread_should_stop() check to the
    offending loop.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
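
    The general pattern, as a sketch: every wait loop inside a torture kthread
    must poll kthread_should_stop() so that kthread_stop(), called at rmmod
    time, can terminate it promptly (the flag and function names here are
    illustrative):

        #include <linux/kthread.h>
        #include <linux/sched.h>

        static bool boost_test_done;    /* set elsewhere when the test finishes */

        static int boost_waiter_sketch(void *unused)
        {
                while (!boost_test_done && !kthread_should_stop())
                        schedule_timeout_uninterruptible(1);
                while (!kthread_should_stop())
                        schedule_timeout_uninterruptible(1);    /* park until rmmod */
                return 0;
        }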
     
  • The rcu_torture_fqs() function can prevent the rcutorture tests from
    completing, resulting in a hang. This commit therefore ensures that
    rcu_torture_fqs() will exit its inner loops at the end of the test,
    and also applies the newish ULONG_CMP_LT() macro to time comparisons.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
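
    For reference, ULONG_CMP_LT() is the wraparound-safe unsigned comparison
    from rcupdate.h; a sketch of the kind of deadline test the commit applies
    it to:

        #include <linux/jiffies.h>
        #include <linux/rcupdate.h>     /* ULONG_CMP_LT() */

        /*
         * True while the current time is still before the deadline, even if
         * the jiffies counter has wrapped since the deadline was computed.
         */
        static bool before_deadline_sketch(unsigned long deadline)
        {
                return ULONG_CMP_LT(jiffies, deadline);
        }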
     
  • When rcutorture is compiled directly into the kernel
    (instead of separately as a module), it is necessary to specify
    rcutorture.stat_interval as a kernel command-line parameter; otherwise,
    the rcu_torture_stats kthread is never started. However, when working
    with the system after it has booted, it is convenient to be able to
    change the time between statistic printing, particularly when logged
    into the console.

    This commit therefore allows the stat_interval parameter to be changed
    at runtime.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
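
    The usual way to make a module parameter adjustable after boot is to widen
    its sysfs permissions; a sketch (the default value here is illustrative):

        #include <linux/moduleparam.h>

        static int stat_interval = 60;          /* seconds between stats output */
        module_param(stat_interval, int, 0644); /* 0644: root may change it at runtime */
        MODULE_PARM_DESC(stat_interval, "Number of seconds between stats printk()s");

    After boot, writing to /sys/module/rcutorture/parameters/stat_interval then
    changes the reporting interval on the fly.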
     
  • The rcu_torture_boost() cleanup code destroyed debug-objects state before
    waiting for the last RCU callback to be invoked, resulting in rare but
    very real debug-objects warnings. Move the destruction to after the
    waiting to fix this problem.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Now that the RCU API contains synchronize_rcu_bh(), synchronize_sched(),
    call_rcu_sched(), and rcu_bh_expedited()...

    Make rcutorture test synchronize_rcu_bh(), getting rid of the old
    rcu_bh_torture_synchronize() workaround. Similarly, make rcutorture test
    synchronize_sched(), getting rid of the old sched_torture_synchronize()
    workaround. Make rcutorture test call_rcu_sched() instead of wrappering
    synchronize_sched(). Also add testing of rcu_bh_expedited().

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Commit a26ac2455ffc (move TREE_RCU from softirq to kthread) added
    per-CPU kthreads. However, kthread creation uses kthread_create(), which
    can put the kthread's stack and task struct on the wrong NUMA node.
    Therefore, use kthread_create_on_node() instead of kthread_create()
    so that the stacks and task structs are placed on the correct NUMA node.

    A similar change was carried out in commit 94dcf29a11b3 (kthread:
    use kthread_create_on_node()).

    Also change rcutorture's priority-boost-test kthread creation.

    Signed-off-by: Eric Dumazet
    CC: Tejun Heo
    CC: Rusty Russell
    CC: Andrew Morton
    CC: Andi Kleen
    CC: Ingo Molnar
    Signed-off-by: Paul E. McKenney

    Eric Dumazet
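
    The substitution in question, sketched; kthread_create_on_node() takes the
    NUMA node on which to allocate the kthread's task_struct and stack (the
    thread function and name format below are illustrative):

        #include <linux/kthread.h>
        #include <linux/sched.h>
        #include <linux/topology.h>
        #include <linux/err.h>

        static int percpu_thread_sketch(void *unused)
        {
                while (!kthread_should_stop())
                        schedule_timeout_interruptible(HZ);
                return 0;
        }

        static struct task_struct *spawn_on_cpu_sketch(unsigned int cpu)
        {
                struct task_struct *t;

                /* Allocate stack and task_struct on the CPU's own node. */
                t = kthread_create_on_node(percpu_thread_sketch, NULL,
                                           cpu_to_node(cpu), "rcu_sketch/%u", cpu);
                if (!IS_ERR(t)) {
                        kthread_bind(t, cpu);
                        wake_up_process(t);
                }
                return t;
        }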
     

27 Jul, 2011

1 commit

  • This allows us to move duplicated code in <asm/atomic.h>
    (atomic_inc_not_zero() for now) to <linux/atomic.h>.

    Signed-off-by: Arun Sharma
    Reviewed-by: Eric Dumazet
    Cc: Ingo Molnar
    Cc: David Miller
    Cc: Eric Dumazet
    Acked-by: Mike Frysinger
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arun Sharma
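
    For context, atomic_inc_not_zero() is the conditional-increment helper whose
    duplicated definitions this series consolidates; a typical use, sketched:

        #include <linux/atomic.h>
        #include <linux/types.h>

        static atomic_t obj_refcount = ATOMIC_INIT(1);

        /* Take a reference only if the object is still live (count != 0). */
        static bool get_ref_sketch(void)
        {
                return atomic_inc_not_zero(&obj_refcount);
        }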
     

09 Jul, 2011

1 commit


06 May, 2011

4 commits


05 Mar, 2011

1 commit


08 Oct, 2010

1 commit


24 Sep, 2010

1 commit


20 Aug, 2010

1 commit

  • Add random preemption to help torture preemptible RCU.

    srcu_read_delay() also calls rcu_read_delay() for shorter delays.

    Added comment to preempt_schedule() call indicating that no quiescent
    states happen if preemption is disabled.

    Signed-off-by: Lai Jiangshan
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Lai Jiangshan
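
    A sketch of the reader-delay change being described; the direct
    preempt_schedule() call only yields when preemption is enabled, hence the
    comment about quiescent states (the probabilities here are illustrative):

        #include <linux/preempt.h>
        #include <linux/delay.h>
        #include <linux/random.h>

        /* Occasionally preempt the reader to stress preemptible RCU. */
        static void rcu_read_delay_sketch(void)
        {
                udelay(random32() % 10);        /* short in-section delay */
        #ifdef CONFIG_PREEMPT
                if (!(random32() % 1000))
                        preempt_schedule();     /* No QS if preempt_disable() is in effect. */
        #endif
        }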
     

09 Jun, 2010

1 commit

  • For people who would otherwise write cpu_clock(smp_processor_id()),
    there is now local_clock().

    Also, as per suggestion from Andrew, provide some documentation on
    the various clock interfaces, and minimize the unsigned long long vs
    u64 mess.

    Signed-off-by: Peter Zijlstra
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Jens Axboe
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
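
    The interface change, sketched as a before/after pair:

        #include <linux/kernel.h>
        #include <linux/sched.h>        /* cpu_clock(), local_clock() */
        #include <linux/smp.h>

        static void timestamp_sketch(void)
        {
                u64 t_old = cpu_clock(smp_processor_id());      /* old spelling */
                u64 t_new = local_clock();                      /* new shorthand */

                pr_info("cpu_clock %llu ns, local_clock %llu ns\n",
                        (unsigned long long)t_old, (unsigned long long)t_new);
        }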
     

18 May, 2010

1 commit

  • …/git/tip/linux-2.6-tip

    * 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (49 commits)
    stop_machine: Move local variable closer to the usage site in cpu_stop_cpu_callback()
    sched, wait: Use wrapper functions
    sched: Remove a stale comment
    ondemand: Make the iowait-is-busy time a sysfs tunable
    ondemand: Solve a big performance issue by counting IOWAIT time as busy
    sched: Intoduce get_cpu_iowait_time_us()
    sched: Eliminate the ts->idle_lastupdate field
    sched: Fold updating of the last_update_time_info into update_ts_time_stats()
    sched: Update the idle statistics in get_cpu_idle_time_us()
    sched: Introduce a function to update the idle statistics
    sched: Add a comment to get_cpu_idle_time_us()
    cpu_stop: add dummy implementation for UP
    sched: Remove rq argument to the tracepoints
    rcu: need barrier() in UP synchronize_sched_expedited()
    sched: correctly place paranioa memory barriers in synchronize_sched_expedited()
    sched: kill paranoia check in synchronize_sched_expedited()
    sched: replace migration_thread with cpu_stop
    stop_machine: reimplement using cpu_stop
    cpu_stop: implement stop_cpu[s]()
    sched: Fix select_idle_sibling() logic in select_task_rq_fair()
    ...

    Linus Torvalds
     

12 May, 2010

1 commit


07 May, 2010

1 commit

  • Currently migration_thread is serving three purposes - migration
    pusher, context to execute active_load_balance() and forced context
    switcher for expedited RCU synchronize_sched. All three roles are
    hardcoded into migration_thread() and determining which job is
    scheduled is slightly messy.

    This patch kills migration_thread and replaces all three uses with
    cpu_stop. The three different roles of migration_thread() are
    split into three separate cpu_stop callbacks -
    migration_cpu_stop(), active_load_balance_cpu_stop() and
    synchronize_sched_expedited_cpu_stop() - and each use case now simply
    asks cpu_stop to execute the callback as necessary.

    synchronize_sched_expedited() was implemented with private
    preallocated resources and custom multi-cpu queueing and waiting
    logic, both of which are provided by cpu_stop.
    synchronize_sched_expedited_count is made atomic and all other shared
    resources along with the mutex are dropped.

    synchronize_sched_expedited() also implemented a check to detect cases
    where not all the callbacks got executed on their assigned CPUs, falling
    back to synchronize_sched() in that case. If called with CPU hotplug
    blocked,
    cpu_stop already guarantees that and the condition cannot happen;
    otherwise, stop_machine() would break. However, this patch preserves
    the paranoid check using a cpumask to record on which cpus the stopper
    ran so that it can serve as a bisection point if something actually
    goes wrong there.

    Because the internal execution state is no longer visible,
    rcu_expedited_torture_stats() is removed.

    This patch also renames cpu_stop threads from "stopper/%d" to
    "migration/%d". The names of these threads ultimately don't matter
    and there's no reason to make unnecessary userland visible changes.

    With this patch applied, stop_machine() and sched now share the same
    resources. stop_machine() is faster without wasting any resources and
    sched migration users are much cleaner.

    Signed-off-by: Tejun Heo
    Acked-by: Peter Zijlstra
    Cc: Ingo Molnar
    Cc: Dipankar Sarma
    Cc: Josh Triplett
    Cc: Paul E. McKenney
    Cc: Oleg Nesterov
    Cc: Dimitri Sivanich

    Tejun Heo
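
    The cpu_stop interface everything is converted to, in rough outline;
    stop_one_cpu() runs a callback on the target CPU from its stopper thread
    and waits for it to finish (the callback and wrapper names below are
    illustrative):

        #include <linux/stop_machine.h>         /* cpu_stop interface */

        /* Runs on the target CPU in its "migration/%d" stopper thread. */
        static int force_ctxt_switch_sketch(void *unused)
        {
                /*
                 * Merely running here forces a context switch on this CPU,
                 * which is all synchronize_sched_expedited() needs from it.
                 */
                return 0;
        }

        static void expedite_one_cpu_sketch(unsigned int cpu)
        {
                stop_one_cpu(cpu, force_ctxt_switch_sketch, NULL);
        }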
     

03 Mar, 2010

1 commit

  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu:
    percpu: add __percpu sparse annotations to what's left
    percpu: add __percpu sparse annotations to fs
    percpu: add __percpu sparse annotations to core kernel subsystems
    local_t: Remove leftover local.h
    this_cpu: Remove pageset_notifier
    this_cpu: Page allocator conversion
    percpu, x86: Generic inc / dec percpu instructions
    local_t: Move local.h include to ringbuffer.c and ring_buffer_benchmark.c
    module: Use this_cpu_xx to dynamically allocate counters
    local_t: Remove cpu_local_xx macros
    percpu: refactor the code in pcpu_[de]populate_chunk()
    percpu: remove compile warnings caused by __verify_pcpu_ptr()
    percpu: make accessors check for percpu pointer in sparse
    percpu: add __percpu for sparse.
    percpu: make access macros universal
    percpu: remove per_cpu__ prefix.

    Linus Torvalds
     

25 Feb, 2010

2 commits

  • The current "mod_timer(&t, 1)" potentially makes the timer fire
    immediately; change this to wait one jiffy.

    Signed-off-by: Dan Carpenter
    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
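
    The change itself, sketched: mod_timer() takes an absolute expiry time in
    jiffies, so a literal 1 is almost always already in the past, whereas
    jiffies + 1 waits one tick:

        #include <linux/timer.h>
        #include <linux/jiffies.h>

        static struct timer_list t;     /* assumed set up with setup_timer() */

        static void rearm_sketch(void)
        {
                /* Was: mod_timer(&t, 1);  -- expires (essentially) immediately. */
                mod_timer(&t, jiffies + 1);     /* fire one jiffy from now */
        }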
     
  • Inspection is proving insufficient to catch all RCU misuses,
    which is understandable given that rcu_dereference() might be
    protected by any of four different flavors of RCU (RCU, RCU-bh,
    RCU-sched, and SRCU), and might also/instead be protected by any
    of a number of locking primitives. It is therefore time to
    enlist the aid of lockdep.

    This set of patches is inspired by earlier work by Peter
    Zijlstra and Thomas Gleixner, and takes the following approach:

    o Set up separate lockdep classes for RCU, RCU-bh, and RCU-sched.

    o Set up separate lockdep classes for each instance of SRCU.

    o Create primitives that check for being in an RCU read-side
    critical section. These return exact answers if lockdep is
    fully enabled, but if unsure, report being in an RCU read-side
    critical section. (We want to avoid false positives!)
    The primitives are:

    For RCU: rcu_read_lock_held(void)

    For RCU-bh: rcu_read_lock_bh_held(void)

    For RCU-sched: rcu_read_lock_sched_held(void)

    For SRCU: srcu_read_lock_held(struct srcu_struct *sp)

    o Add rcu_dereference_check(), which takes a second argument
    in which one places a boolean expression based on the above
    primitives and/or lockdep_is_held().

    o A new kernel configuration parameter, CONFIG_PROVE_RCU, enables
    rcu_dereference_check(). This depends on CONFIG_PROVE_LOCKING,
    and should be quite helpful during the transition period while
    CONFIG_PROVE_RCU-unaware patches are in flight.

    The existing rcu_dereference() primitive does no checking, but
    upcoming patches will change that.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
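
    The flagship primitive of the series, sketched; the second argument is a
    lockdep-evaluated condition under which the dereference is legal (the data
    structure and lock below are illustrative):

        #include <linux/rcupdate.h>
        #include <linux/lockdep.h>
        #include <linux/spinlock.h>

        struct foo_sketch { int val; };

        static struct foo_sketch *gp_sketch;    /* RCU-protected pointer */
        static DEFINE_SPINLOCK(gp_lock_sketch); /* also guards updates to it */

        static int read_val_sketch(void)
        {
                struct foo_sketch *p;
                int ret = -1;

                rcu_read_lock();
                /* Legal under rcu_read_lock() or while holding gp_lock_sketch. */
                p = rcu_dereference_check(gp_sketch,
                                          rcu_read_lock_held() ||
                                          lockdep_is_held(&gp_lock_sketch));
                if (p)
                        ret = p->val;
                rcu_read_unlock();
                return ret;
        }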
     

13 Jan, 2010

1 commit

  • Add force_quiescent_state() testing to rcutorture, with a
    separate thread that repeatedly invokes force_quiescent_state()
    in bursts. This can greatly increase the probability of
    encountering certain types of race conditions.

    Suggested-by: Josh Triplett
    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

05 Jan, 2010

1 commit


15 Dec, 2009

1 commit

  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (34 commits)
    m68k: rename global variable vmalloc_end to m68k_vmalloc_end
    percpu: add missing per_cpu_ptr_to_phys() definition for UP
    percpu: Fix kdump failure if booted with percpu_alloc=page
    percpu: make misc percpu symbols unique
    percpu: make percpu symbols in ia64 unique
    percpu: make percpu symbols in powerpc unique
    percpu: make percpu symbols in x86 unique
    percpu: make percpu symbols in xen unique
    percpu: make percpu symbols in cpufreq unique
    percpu: make percpu symbols in oprofile unique
    percpu: make percpu symbols in tracer unique
    percpu: make percpu symbols under kernel/ and mm/ unique
    percpu: remove some sparse warnings
    percpu: make alloc_percpu() handle array types
    vmalloc: fix use of non-existent percpu variable in put_cpu_var()
    this_cpu: Use this_cpu_xx in trace_functions_graph.c
    this_cpu: Use this_cpu_xx for ftrace
    this_cpu: Use this_cpu_xx in nmi handling
    this_cpu: Use this_cpu operations in RCU
    this_cpu: Use this_cpu ops for VM statistics
    ...

    Fix up trivial (famous last words) global per-cpu naming conflicts in
    arch/x86/kvm/svm.c
    mm/slab.c

    Linus Torvalds
     

03 Dec, 2009

1 commit

  • Implement a synchronize_rcu_expedited() for preemptible RCU
    that actually is expedited. This uses
    synchronize_sched_expedited() to force all threads currently
    running in a preemptible-RCU read-side critical section onto the
    appropriate ->blocked_tasks[] list, then takes a snapshot of all
    of these lists and waits for them to drain.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

29 Oct, 2009

1 commit

  • Now that the return from alloc_percpu is compatible with the address
    of per-cpu vars, it makes sense to hand around the address of per-cpu
    variables. To make this sane, we remove the per_cpu__ prefix we
    created to stop people accidentally using these vars directly.

    Now that we have sparse, we can use that (next patch).

    tj: * Updated to convert stuff that was missed by or added after the
    original patch.

    * Kill per_cpu_var() macro.

    Signed-off-by: Rusty Russell
    Signed-off-by: Tejun Heo
    Reviewed-by: Christoph Lameter

    Rusty Russell
     

26 Oct, 2009

2 commits

  • Make rcutorture list the available torture_type values when it
    doesn't like the one specified.

    Signed-off-by: Paul E. McKenney
    Acked-by: Josh Triplett
    Reviewed-by: Lai Jiangshan
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    Cc: avi@redhat.com
    Cc: mtosatti@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     
  • Adds the "srcu_expedited" torture type, and also renames
    sched_ops_sync to sched_sync_ops for consistency while we are in
    this file.

    Signed-off-by: Paul E. McKenney
    Acked-by: Josh Triplett
    Reviewed-by: Lai Jiangshan
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    Cc: avi@redhat.com
    Cc: mtosatti@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

03 Oct, 2009

1 commit

  • RCU does not do dynamic allocations, but it increments per-CPU variables
    a lot. These operations result in a move to a register and then back
    to memory. This patch makes RCU use the inc/dec instructions on x86
    that do not need a register.

    Acked-by: Tejun Heo
    Acked-by: Paul E. McKenney
    Signed-off-by: Christoph Lameter
    Signed-off-by: Tejun Heo

    Christoph Lameter
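
    The kind of conversion described, sketched: on x86 the this_cpu operation
    compiles to a single memory-operand increment rather than a load, an add,
    and a store back to memory:

        #include <linux/percpu.h>

        static DEFINE_PER_CPU(unsigned long, qs_count_sketch);

        static void note_event_sketch(void)
        {
                /* Was: per_cpu(qs_count_sketch, smp_processor_id())++; */
                __this_cpu_inc(qs_count_sketch);        /* single inc on x86 */
        }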
     

24 Sep, 2009

1 commit

  • Move declarations and update storage classes to make checkpatch happy.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: akpm@linux-foundation.org
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

19 Sep, 2009

1 commit

  • Fix a number of whitespace ^Ierrors in the include/linux/rcu*
    and the kernel/rcu* files.

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: akpm@linux-foundation.org
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    LKML-Reference:
    [ did more checkpatch fixlets ]
    Signed-off-by: Ingo Molnar

    Paul E. McKenney
     

18 Sep, 2009

1 commit

  • rcutorture already delays readers, but never for long enough to
    make RCU force a quiescent state. Add an occasional delay of
    50ms.

    Signed-off-by: Josh Triplett
    Signed-off-by: Paul E. McKenney
    Acked-by: Steven Rostedt
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: akpm@linux-foundation.org
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josht@linux.vnet.ibm.com
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Josh Triplett