22 Feb, 2012

35 commits

  • synchronize_sched_expedited() is spamming CONFIG_DEBUG_PREEMPT=y
    users with an unintended warning from the cpu_is_offline() check: use
    raw_smp_processor_id() instead of smp_processor_id() there.

    Because the check runs under get_online_cpus(), it is not possible
    for any CPUs to go offline, though it is quite possible that the
    task might migrate between the raw_smp_processor_id() call and the
    cpu_is_offline() check. This is not a problem because the task cannot
    migrate from an offline CPU to an online one or vice versa. The point
    of the check is to verify that synchronize_sched_expedited() is not
    called from an offline CPU, for example, from a CPU_DYING notifier, or,
    more important, from an outgoing CPU making its way from its CPU_DYING
    notifiers to the idle loop.
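
    As a rough sketch (not the verbatim patch), the corrected check looks
    something like the following:

        /* In synchronize_sched_expedited(), with CPU hotplug excluded: */
        get_online_cpus();
        /*
         * raw_smp_processor_id() avoids the CONFIG_DEBUG_PREEMPT splat;
         * any migration between the two calls is harmless because a task
         * cannot move between offline and online CPUs.
         */
        WARN_ON_ONCE(cpu_is_offline(raw_smp_processor_id()));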

    Signed-off-by: Hugh Dickins
    Signed-off-by: Paul E. McKenney

    Hugh Dickins
     
  • This commit handles workloads that transition quickly between idle and
    non-idle, and where the CPU's callbacks cannot be invoked, but where
    RCU does not have anything immediate for the CPU to do. Without this
    patch, the RCU_FAST_NO_HZ code can be invoked repeatedly on each entry
    to idle. The commit sets the per-CPU rcu_dyntick_holdoff variable to
    hold off further attempts for a tick.
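
    The holdoff itself is simple; a sketch of its general shape (simplified
    from the actual rcu_prepare_for_idle() logic):

        static DEFINE_PER_CPU(unsigned long, rcu_dyntick_holdoff);

        static void rcu_prepare_for_idle(int cpu)
        {
                /* If we already gave up during this jiffy, don't retry
                 * until the next tick. */
                if (per_cpu(rcu_dyntick_holdoff, cpu) == jiffies)
                        return;
                /* ... attempt to push this CPU into dyntick-idle mode;
                 * on giving up, record the holdoff: */
                per_cpu(rcu_dyntick_holdoff, cpu) = jiffies;
        }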

    Reported-by: "Abou Gazala, Neven M"
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • If a softirq is pending, the current CPU has RCU callbacks pending,
    and RCU does not immediately need anything from this CPU, then the
    current code resets the RCU_FAST_NO_HZ state machine. This means that
    upon exit from the subsequent softirq handler, RCU_FAST_NO_HZ will
    try really hard to force RCU into dyntick-idle mode. And if the same
    conditions hold after a few tries (determined by RCU_IDLE_OPT_FLUSHES),
    the same situation can repeat, possibly endlessly. This scenario is
    not particularly good for battery lifetime.

    This commit therefore suppresses the early exit from the RCU_FAST_NO_HZ
    state machine in the case where there is a softirq pending. This change
    forces the state machine to retain its memory, and to enter holdoff if
    this condition persists.

    Reported-by: "Abou Gazala, Neven M"
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • RCU, RCU-bh, and RCU-sched read-side critical sections are forbidden
    in the inner idle loop, that is, between the rcu_idle_enter() and the
    rcu_idle_exit() -- RCU will happily ignore any such read-side critical
    sections. However, things like powertop need tracepoints in the inner
    idle loop.

    This commit therefore provides an RCU_NONIDLE() macro that can be used to
    wrap code in the idle loop that requires RCU read-side critical sections.
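
    Usage is simply to wrap the offending statement; the tracepoint named
    here is only illustrative:

        /* Inside the inner idle loop, where RCU would otherwise ignore
         * any read-side critical sections: */
        RCU_NONIDLE(trace_cpu_idle(1, smp_processor_id()));

    Under the hood, the macro momentarily tells RCU that the CPU is
    non-idle around the wrapped statement, roughly as follows:

        #define RCU_NONIDLE(a) \
                do { \
                        rcu_idle_exit(); \
                        do { a; } while (0); \
                        rcu_idle_enter(); \
                } while (0)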

    Suggested-by: Steven Rostedt
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett
    Acked-by: Deepthi Dharwar

    Paul E. McKenney
     
  • Use of RCU in the idle loop is incorrect, yet quite a few instances of
    just that have made their way into mainline, primarily in event tracing.
    The problem with RCU read-side critical sections on CPUs that RCU believes
    to be idle is that RCU completely ignores such a CPU, along with any
    RCU read-side critical sections it might attempt.

    The approaches of eliminating the offending uses and of pushing the
    definition of idle down beyond the offending uses have both proved
    impractical. The new approach is to encapsulate offending uses of RCU
    with rcu_idle_exit() and rcu_idle_enter(), but this requires nesting
    for code that is invoked both during idle and during normal execution.
    Therefore, this commit modifies rcu_idle_enter() and rcu_idle_exit() to
    permit nesting.
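
    The need for nesting arises because an rcu_idle_exit()/rcu_idle_enter()
    pair wrapping, say, a tracepoint may itself run inside the idle loop's
    own rcu_idle_enter()/rcu_idle_exit() pair. A schematic sketch (the
    tracepoint name is hypothetical):

        rcu_idle_enter();        /* idle loop: RCU stops watching this CPU */
            ...
            rcu_idle_exit();     /* nested exit: RCU watches again         */
            trace_something();   /* RCU read-side use is now legal         */
            rcu_idle_enter();    /* back to the (outer) idle state         */
            ...
        rcu_idle_exit();         /* leaving the idle loop                  */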

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett
    Acked-by: Deepthi Dharwar

    Paul E. McKenney
     
  • There is now an unconditional check for rcu_head misalignment in
    __call_rcu(), so remove the old conditional one in debug_rcu_head_queue().

    Reported-by: Josh Triplett
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • The semantic match that finds this problem is as follows:
    (http://coccinelle.lip6.fr/)

    //
    @@
    expression e,e1;
    constant c;
    @@

    *e = c
    ... when != e = e1
        when != &e
        when != true IS_ERR(e)
    *PTR_ERR(e)
    //

    Signed-off-by: Julia Lawall
    Reported-by: Josh Triplett
    Signed-off-by: Paul E. McKenney

    Julia Lawall
     
  • Fix a bonehead error introduced when adding event tracing to rcutorture.
    Move the traces to follow the NULL-pointer checks.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • The expedited RCU primitives can be quite useful, but they have some
    high costs as well. This commit updates and creates docbook comments
    calling out the costs, and updates the RCU documentation as well.

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Because newly offlined CPUs continue executing after completing the
    CPU_DYING notifiers, they legitimately enter the scheduler and use
    RCU while appearing to be offline. This calls for a more sophisticated
    approach as follows:

    1. RCU marks the CPU online during the CPU_UP_PREPARE phase.

    2. RCU marks the CPU offline during the CPU_DEAD phase.

    3. Diagnostics regarding use of read-side RCU by offline CPUs use
       RCU's accounting rather than the cpu_online_map. (Note that
       __call_rcu() still uses cpu_online_map to detect illegal
       invocations within CPU_DYING notifiers.)

    4. Offline CPUs are prevented from hanging the system by
       force_quiescent_state(), which pays attention to cpu_online_map.
       Some additional work (in a later commit) will be needed to
       guarantee that force_quiescent_state() waits a full jiffy before
       assuming that a CPU is offline, for example, when called from
       idle entry. (This commit also makes the one-jiffy wait
       explicit, since the old-style implicit wait can now be defeated
       by RCU_FAST_NO_HZ and by rcutorture.)

    This approach avoids the false positives encountered when attempting to
    use more exact classification of CPU online/offline state.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • It is illegal to use RCU from a CPU that has reported idleness or
    offlinedness to RCU. However, it can be quite difficult to determine
    from a stack trace whether or not a given CPU is idle or offline.
    Therefore, this commit adds idle/offline diagnostics to the lockdep-RCU
    error message.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • The rcu_prepare_for_idle() function is always called with interrupts
    disabled, so there is no reason to disable interrupts again within
    rcu_prepare_for_idle(). Therefore, this commit removes all of the
    interrupt disabling, also removing a latent disabling-unbalance bug.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Now that TREE_RCU and TREE_PREEMPT_RCU no longer do anything different
    for the single-CPU case, there is no need for multiple definitions of
    synchronize_sched_expedited(). It is no longer in any sense a plug-in,
    so move it from kernel/rcutree_plugin.h to kernel/rcutree.c.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Although it is legal to use RCU during early boot, it is anything
    but legal to use RCU at runtime from an offlined CPU. After all, RCU
    explicitly ignores offlined CPUs. This commit therefore adds checks
    for runtime use of RCU from offlined CPUs.

    These checks are not perfect; in particular, they can be subverted
    through use of things like rcu_dereference_raw(). Note that it is not
    possible to put checks in rcu_read_lock() and friends because these
    primitives are used in code that might be protected either by RCU or
    by locking, which means that checking rcu_read_lock() gets you fat
    piles of false positives.
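
    The checks instead live in RCU's lockdep helpers. A sketch, assuming
    the helper introduced for this purpose is named
    rcu_lockdep_current_cpu_online():

        int rcu_read_lock_held(void)
        {
                if (!debug_lockdep_rcu_enabled())
                        return 1;
                /* An offline CPU cannot legally be in an RCU read-side
                 * critical section, so report "not held" and let lockdep
                 * complain about the caller. */
                if (!rcu_lockdep_current_cpu_online())
                        return 0;
                return lock_is_held(&rcu_lock_map);
        }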

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Add module parameters to rcutorture that induce a CPU stall.
    The stall_cpu parameter specifies how long to stall in seconds,
    defaulting to zero, which indicates no stalling is to be undertaken.
    The stall_cpu_holdoff parameter specifies how many seconds after
    insmod (or boot, if rcutorture is built into the kernel) that this
    stall is to start. The default value for stall_cpu_holdoff is ten
    seconds.
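
    A condensed sketch of what the resulting stall kthread does (function
    names are from memory and may differ in detail):

        static int rcu_torture_stall(void *args)
        {
                unsigned long stop_at;

                if (stall_cpu_holdoff > 0)      /* delay after insmod/boot */
                        schedule_timeout_interruptible(stall_cpu_holdoff * HZ);
                stop_at = get_seconds() + stall_cpu;
                preempt_disable();              /* induce the stall */
                while (ULONG_CMP_LT(get_seconds(), stop_at) &&
                       !kthread_should_stop())
                        continue;               /* spin without yielding */
                preempt_enable();
                while (!kthread_should_stop())
                        schedule_timeout_interruptible(10 * HZ);
                return 0;
        }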

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • When rcutorture is started automatically at boot time, it might well
    also start CPU-hotplug operations at that time, which might not be
    desirable. This commit therefore adds an rcutorture parameter that
    allows CPU-hotplug operations to be held off for the specified number
    of seconds after the start of boot.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • There have been situations where RCU CPU stall warnings were caused by
    issues in scheduling-clock timer initialization. To make it easier to
    track these down, this commit causes the RCU CPU stall-warning messages
    to print out the number of scheduling-clock interrupts taken in the
    current grace period for each stalled CPU.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • The default CONFIG_RCU_CPU_STALL_TIMEOUT value of 60 seconds has served
    Linux users well for production use for quite some time. However, for
    debugging, there will be more than three minutes between subsequent
    stall-warning messages. This can be an annoyingly long wait if you
    are trying to work out where the offending infinite loop is hiding.

    Therefore, this commit provides an rcu_cpu_stall_timeout sysfs
    parameter that may be adjusted at boot time and at runtime to speed
    up debugging.
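
    A sketch of the knob's definition (the default macro and permissions
    shown here are assumptions):

        static int rcu_cpu_stall_timeout __read_mostly =
                                        CONFIG_RCU_CPU_STALL_TIMEOUT;
        module_param(rcu_cpu_stall_timeout, int, 0644);

    With such a definition, the value can be set on the kernel command line
    or written at runtime via the corresponding file under
    /sys/module/.../parameters/.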

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Now that both TINY_RCU and TINY_PREEMPT_RCU have been in place for a while,
    it is time to remove UP support from TREE_RCU, which is what this commit
    does.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • The inner idle loop is an extended quiescent state for all flavors
    of RCU, but there have been recent bugs involving use of RCU read-side
    primitives from within the idle loop. Therefore, this commit enlists
    lockdep-RCU to detect attempts to enter the inner idle loop while in
    an RCU read-side critical section, emitting a lockdep-RCU splat if so.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • The recent updates to RCU_FAST_NO_HZ give rcu_needs_cpu() more to do
    than just checking for callbacks, so this commit renames
    rcu_preempt_needs_cpu() to rcu_preempt_cpu_has_callbacks() to keep the
    naming consistent with that change.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • This is a port of commit #82e78d80 from TREE_PREEMPT_RCU to
    TINY_PREEMPT_RCU.

    This commit uses the fact that current->rcu_boost_mutex is set
    any time that the RCU_READ_UNLOCK_BOOSTED flag is set in the
    current->rcu_read_unlock_special bitmask. This allows tests of
    the bit to be changed to tests of the pointer, which in turn allows
    the RCU_READ_UNLOCK_BOOSTED flag to be eliminated.
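
    A sketch of the resulting pointer-based test at rcu_read_unlock() time
    (simplified; the local variable name is illustrative):

        struct rt_mutex *rbmp = NULL;

        /* Was this task priority-boosted during its critical section?
         * Test the pointer itself rather than a separate flag bit. */
        if (t->rcu_boost_mutex != NULL) {
                rbmp = t->rcu_boost_mutex;
                t->rcu_boost_mutex = NULL;
                rt_mutex_unlock(rbmp);          /* deboost */
        }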

    Please note that the check of current->rcu_read_unlock_special need not
    change because any time that RCU_READ_UNLOCK_BOOSTED was set, so was
    RCU_READ_UNLOCK_BLOCKED. Therefore, __rcu_read_unlock() can continue
    testing current->rcu_read_unlock_special for non-zero, as before.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • This is a port to TINY_RCU of Peter Zijlstra's commit #ec433f0c5

    The rcu_read_unlock_special() function relies on in_irq() to exclude
    scheduler activity from interrupt level. This fails because irq_exit()
    can invoke the scheduler after clearing the preempt_count() bits that
    in_irq() uses to determine that it is at interrupt level. This situation
    can result in failures as follows:

    $task                       IRQ                     SoftIRQ

    rcu_read_lock()

    /* do stuff */

    |= UNLOCK_BLOCKED

    rcu_read_unlock()
      --t->rcu_read_lock_nesting

                                irq_enter();
                                /* do stuff, don't use RCU */
                                irq_exit();
                                  sub_preempt_count(IRQ_EXIT_OFFSET);
                                  invoke_softirq()

                                                        ttwu();
                                                          spin_lock_irq(&pi->lock)
                                                          rcu_read_lock();
                                                          /* do stuff */
                                                          rcu_read_unlock();
                                                            rcu_read_unlock_special()
                                                              rcu_report_exp_rnp()
                                                                ttwu()
                                                                  spin_lock_irq(&pi->lock) /* deadlock */

    rcu_read_unlock_special(t);

    This can be triggered 'easily' because invoke_softirq() immediately does
    a ttwu() of ksoftirqd/# instead of doing the in-place softirq stuff first,
    but even without that the above happens.

    Cure this by also excluding softirqs from the rcu_read_unlock_special()
    handler and ensuring the force_irqthreads ksoftirqd/# wakeup is done
    from full softirq context.
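
    A minimal sketch of the key test, matching the cure described above:

        /* In rcu_read_unlock_special(): hardware-irq and softirq handlers
         * cannot block, and calling into the scheduler from here could
         * deadlock on the runqueue or priority-inheritance locks, so
         * simply defer the special processing. */
        if (in_irq() || in_serving_softirq()) {
                local_irq_restore(flags);
                return;
        }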

    It is also necessary to delay the ->rcu_read_lock_nesting decrement until
    after rcu_read_unlock_special(). This delay is handled by the commit
    "Protect __rcu_read_unlock() against scheduler-using irq handlers".

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • This is a port of commit #b0d3041 from TREE_RCU to TREE_PREEMPT_RCU.

    Under some rare but real combinations of configuration parameters, RCU
    callbacks are posted during early boot that use kernel facilities that are
    not yet initialized. Therefore, when these callbacks are invoked, hard
    hangs and crashes ensue. This commit therefore prevents RCU callbacks
    from being invoked until after the scheduler is fully up and running,
    as in after multiple tasks have been spawned.

    It might well turn out that a better approach is to identify the specific
    RCU callbacks that are causing this problem, but that discussion will
    wait until such time as someone really needs an RCU callback to be invoked
    (as opposed to merely registered) during early boot.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • This is a port of commit #be0e1e21 to TINY_PREEMPT_RCU. This uses
    noinline to prevent rcu_read_unlock_special() from being inlined into
    __rcu_read_unlock().

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • This commit ports commit #10f39bb1b2 (rcu: protect __rcu_read_unlock()
    against scheduler-using irq handlers) from TREE_PREEMPT_RCU to
    TINY_PREEMPT_RCU. The following is a corresponding port of that
    commit message.

    The addition of RCU read-side critical sections within runqueue and
    priority-inheritance critical sections introduced some deadlocks,
    for example, involving interrupts from __rcu_read_unlock() where the
    interrupt handlers call wake_up(). This situation can cause the
    instance of __rcu_read_unlock() invoked from interrupt to do some
    of the processing that would otherwise have been carried out by the
    task-level instance of __rcu_read_unlock(). When the interrupt-level
    instance of __rcu_read_unlock() is called with a scheduler lock held from
    interrupt-entry/exit situations where in_irq() returns false, deadlock can
    result. Of course, in a UP kernel, there are not really any deadlocks,
    but the upper-level critical section can still be fatally confused
    by the lower-level critical section changing things out from under it.

    This commit resolves these deadlocks by using negative values of the
    per-task ->rcu_read_lock_nesting counter to indicate that an instance of
    __rcu_read_unlock() is in flight, which in turn prevents instances from
    interrupt handlers from doing any special processing. Note that nested
    rcu_read_lock()/rcu_read_unlock() pairs are still permitted, but they will
    never see ->rcu_read_lock_nesting go to zero, and will therefore never
    invoke rcu_read_unlock_special(), thus preventing them from seeing the
    RCU_READ_UNLOCK_BLOCKED bit should it be set in ->rcu_read_unlock_special.
    This patch also adds a check for ->rcu_read_lock_nesting being negative
    in rcu_check_callbacks(), thus preventing the RCU_READ_UNLOCK_NEED_QS
    bit from being set should a scheduling-clock interrupt occur while
    __rcu_read_unlock() is exiting from an outermost RCU read-side critical
    section.
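
    A sketch of the resulting __rcu_read_unlock(), with INT_MIN marking the
    "in flight" state (simplified from the actual function):

        void __rcu_read_unlock(void)
        {
                struct task_struct *t = current;

                if (t->rcu_read_lock_nesting != 1) {
                        --t->rcu_read_lock_nesting;     /* nested unlock */
                } else {
                        barrier();  /* critical section before nesting update */
                        t->rcu_read_lock_nesting = INT_MIN; /* now "in flight" */
                        barrier();  /* assign nesting before special check */
                        if (unlikely(ACCESS_ONCE(t->rcu_read_unlock_special)))
                                rcu_read_unlock_special(t);
                        barrier();  /* special handling before zeroing nesting */
                        t->rcu_read_lock_nesting = 0;
                }
        }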

    Of course, __rcu_read_unlock() can be preempted during the time that
    ->rcu_read_lock_nesting is negative. This could result in the setting
    of the RCU_READ_UNLOCK_BLOCKED bit after __rcu_read_unlock() checks it,
    and would also result in this task being queued on the corresponding
    rcu_node structure's blkd_tasks list. Therefore, some later RCU read-side
    critical section would enter rcu_read_unlock_special() to clean up --
    which could result in deadlock (OK, OK, fatal confusion) if that RCU
    read-side critical section happened to be in the scheduler where the
    runqueue or priority-inheritance locks were held.

    To prevent the possibility of fatal confusion that might result from
    preemption during the time that ->rcu_read_lock_nesting is negative,
    this commit also makes rcu_preempt_note_context_switch() check for
    negative ->rcu_read_lock_nesting, thus refraining from queuing the task
    (and from setting RCU_READ_UNLOCK_BLOCKED) if we are already exiting
    from the outermost RCU read-side critical section (in other words,
    we really are no longer actually in that RCU read-side critical
    section). In addition, rcu_preempt_note_context_switch() invokes
    rcu_read_unlock_special() to carry out the cleanup in this case, which
    clears out the ->rcu_read_unlock_special bits and dequeues the task
    (if necessary), in turn avoiding needless delay of the current RCU grace
    period and needless RCU priority boosting.

    It is still illegal to call rcu_read_unlock() while holding a scheduler
    lock if the prior RCU read-side critical section has ever had both
    preemption and irqs enabled. However, the common use case is legal,
    namely where the entire RCU read-side critical section executes with
    irqs disabled, for example, when the scheduler lock is held across the
    entire lifetime of the RCU read-side critical section.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • The grace-period initialization sequence in rcu_start_gp() has a special
    case for systems where the rcu_node tree is a single rcu_node structure.
    This made sense some years ago when systems were smaller and up to 64
    CPUs could share a single rcu_node structure, but now that large systems
    are common and a given leaf rcu_node structure can support only 16 CPUs
    (due to lock contention on the rcu_node's ->lock field), this optimization
    is almost never taken. And even the small mobile platforms that might
    make use of it might rather have the kernel text reduction.

    Therefore, this commit removes the check for single-rcu_node trees.

    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Paul E. McKenney
     
  • RCU's current CPU-offline code path dumps all of the outgoing CPU's
    callbacks onto the RCU_NEXT_TAIL portion of the surviving CPU's
    callback list. This means that all the ready-to-invoke callbacks from
    the outgoing CPU must wait for another full RCU grace period. This was
    just fine when CPU-hotplug events were rare, but there is mounting
    evidence that users are planning to make increasing use of CPU hotplug.

    Therefore, this commit changes the callback-dumping procedure so that
    callbacks that are ready to invoke are moved to the RCU_DONE_TAIL
    portion of the surviving CPU's callback list. This avoids running
    these callbacks through a second unnecessary grace period.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Because quiescent states are now reported from offline CPUs in
    CPU_DYING state, there is some possibility that such a CPU might
    note the end of a grace period and attempt to start invoking
    callbacks. This would be a very bad thing, and is supposed to
    be prevented by the fact that the CPU_DYING CPU gets rid of all
    its callbacks before reporting the quiescent state. However,
    there is other CPU-offline code in the kernel, and it is quite
    possible that someone will invoke RCU core processing from that
    code. Therefore, this commit adds a warning for this case.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Currently, a given CPU is permitted to remain in dyntick-idle mode
    indefinitely if it has only lazy RCU callbacks queued. This is vulnerable
    to corner cases in NUMA systems, so limit the time to six seconds by
    default. (Currently controlled by a cpp macro.)

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Make rcutorture check for CPU-hotplug failures and complain if there
    were any.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • Move ->qsmaskinit and blkd_tasks[] manipulation to the CPU_DYING
    notifier. This simplifies the code by eliminating a potential
    deadlock and by reducing the responsibilities of force_quiescent_state().
    Also rename functions to make their connection to the CPU-hotplug
    stages explicit.

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • When CONFIG_RCU_FAST_NO_HZ is enabled, RCU will allow a given CPU to
    enter dyntick-idle mode even if it still has RCU callbacks queued.
    RCU avoids system hangs in this case by scheduling a timer for several
    jiffies in the future. However, if all of the callbacks on that CPU
    are from kfree_rcu(), there is no reason to wake the CPU up, as it is
    not a problem to defer freeing of memory.

    This commit therefore tracks the number of callbacks on a given CPU
    that are from kfree_rcu(), and avoids scheduling the timer if all of
    a given CPU's callbacks are from kfree_rcu().
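
    A conceptual sketch of the bookkeeping (field, helper, and timer names
    here are assumptions rather than the exact identifiers):

        /* When queuing a callback, note whether it merely frees memory: */
        if (lazy)
                rdp->qlen_lazy++;       /* kfree_rcu()-style callback */
        rdp->qlen++;

        /* When entering dyntick-idle, arm the wakeup timer only if some
         * callback actually needs timely invocation: */
        if (rcu_cpu_has_nonlazy_callbacks(cpu))
                mod_timer(&per_cpu(rcu_idle_gp_timer, cpu),
                          jiffies + RCU_IDLE_GP_DELAY);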

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • The push for energy efficiency will require that RCU tag rcu_head
    structures to indicate whether or not their invocation is time critical.
    This tagging is best carried out in the bottom bits of the ->next
    pointers in the rcu_head structures. This tagging requires that the
    rcu_head structures be properly aligned, so this commit adds the required
    diagnostics.
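
    The diagnostic itself can be as simple as the following sketch of the
    check added to __call_rcu():

        /* rcu_head structures must be at least four-byte aligned so that
         * the low-order bits of ->next are free to carry a tag: */
        WARN_ON_ONCE((unsigned long)head & 0x3);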

    Signed-off-by: Paul E. McKenney
    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     
  • It is illegal to have a grace period within a same-flavor RCU read-side
    critical section, so this commit adds lockdep-RCU checks to splat when
    such abuse is encountered. This commit does not detect more elaborate
    RCU deadlock situations. These situations might be a job for lockdep
    enhancements.
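
    A sketch of the check as it might appear at the start of
    synchronize_rcu() and friends (the message text is illustrative):

        rcu_lockdep_assert(!lock_is_held(&rcu_lock_map) &&
                           !lock_is_held(&rcu_bh_lock_map) &&
                           !lock_is_held(&rcu_sched_lock_map),
                           "Illegal synchronize_rcu() in RCU read-side critical section");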

    Signed-off-by: Paul E. McKenney

    Paul E. McKenney
     

14 Feb, 2012

2 commits


12 Feb, 2012

1 commit

  • Says Jens:

    "Time to push off some of the pending items. I really wanted to wait
    until we had the regression nailed, but alas it's not quite there yet.
    But I'm very confident that it's "just" a missing expire on exit, so
    fix from Tejun should be fairly trivial. I'm headed out for a week on
    the slopes.

    - Killing the barrier part of mtip32xx. It doesn't really support
    barriers, and it doesn't need them (writes are fully ordered).

    - A few fixes from Dan Carpenter, preventing overflows of integer
    multiplication.

    - A fixup for loop, fixing a previous commit that didn't quite solve
    the partial read problem from Dave Young.

    - A bio integer overflow fix from Kent Overstreet.

    - Improvement/fix of the door "keep locked" part of the cdrom shared
    code from Paolo Bonzini.

    - A few cfq fixes from Shaohua Li.

    - A fix for bsg sysfs warning when removing a file it did not create
    from Stanislaw Gruszka.

    - Two fixes for floppy from Vivek, preventing a crash.

    - A few block core fixes from Tejun. One killing the over-optimized
    ioc exit path, cleaning that up nicely. Two others fixing an oops
    on elevator switch, due to calling into the scheduler merge check
    code without holding the queue lock."

    * 'for-linus' of git://git.kernel.dk/linux-block:
    block: fix lockdep warning on io_context release put_io_context()
    relay: prevent integer overflow in relay_open()
    loop: zero fill bio instead of return -EIO for partial read
    bio: don't overflow in bio_get_nr_vecs()
    floppy: Fix a crash during rmmod
    floppy: Cleanup disk->queue before caling put_disk() if add_disk() was never called
    cdrom: move shared static to cdrom_device_info
    bsg: fix sysfs link remove warning
    block: don't call elevator callbacks for plug merges
    block: separate out blk_rq_merge_ok() and blk_try_merge() from elevator functions
    mtip32xx: removed the irrelevant argument of mtip_hw_submit_io() and the unused member of struct driver_data
    block: strip out locking optimization in put_io_context()
    cdrom: use copy_to_user() without the underscores
    block: fix ioc locking warning
    block: fix NULL icq_cache reference
    block,cfq: change code order

    Linus Torvalds
     

11 Feb, 2012

1 commit


10 Feb, 2012

1 commit