12 Dec, 2011

3 commits

  • Those two APIs were provided to optimize calls to
    tick_nohz_idle_enter() and rcu_idle_enter() by combining them into a
    single irq-disabled section, so that no interrupt arriving in between
    would needlessly process any RCU work.

    But this is an optimization whose benefits have yet to be measured.
    Let's start simple and completely decouple the idle RCU and dyntick
    idle logics to simplify things.
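
    A minimal sketch of the decoupled form (illustrative only; cpu_sleep()
    stands in for whatever low-power wait instruction the arch uses):

        /* arch idle loop with nohz and RCU handled separately (sketch) */
        tick_nohz_idle_enter();        /* dyntick: stop the periodic tick */
        rcu_idle_enter();              /* RCU: enter extended quiescent state */

        while (!need_resched())
                cpu_sleep();           /* arch low-power wait */

        rcu_idle_exit();               /* RCU: resume normal operation */
        tick_nohz_idle_exit();         /* dyntick: restart the tick */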

    Signed-off-by: Frederic Weisbecker
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: Peter Zijlstra
    Reviewed-by: Josh Triplett
    Signed-off-by: Paul E. McKenney

    Frederic Weisbecker
     
  • It is assumed that RCU won't be used once we switch to tickless
    mode and until we restart the tick. However this is not always
    true, as on x86-64, where we dereference the idle notifiers after
    the tick is stopped.

    To prepare for fixing this, add two new APIs:
    tick_nohz_idle_enter_norcu() and tick_nohz_idle_exit_norcu().

    If no use of RCU is made in the idle loop between the
    tick_nohz_idle_enter() and tick_nohz_idle_exit() calls, the arch
    must instead call the new *_norcu() versions, so that it doesn't
    need to call rcu_idle_enter() and rcu_idle_exit() itself.

    Otherwise the arch must call tick_nohz_idle_enter() and
    tick_nohz_idle_exit() and also explicitly call:

    - rcu_idle_enter() after its last use of RCU before the CPU is put
    to sleep.
    - rcu_idle_exit() before the first use of RCU after the CPU is woken
    up.
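
    Both patterns, sketched (cpu_sleep() and run_idle_notifiers() are
    illustrative stand-ins, not names taken from this patch):

        /* Pattern 1: no RCU use while the tick is stopped */
        tick_nohz_idle_enter_norcu();    /* also enters RCU extended QS */
        while (!need_resched())
                cpu_sleep();
        tick_nohz_idle_exit_norcu();     /* also exits RCU extended QS */

        /* Pattern 2: RCU is still used after the tick is stopped */
        tick_nohz_idle_enter();
        run_idle_notifiers();            /* last RCU user before sleep */
        rcu_idle_enter();
        while (!need_resched())
                cpu_sleep();
        rcu_idle_exit();                 /* before any further RCU use */
        tick_nohz_idle_exit();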

    Signed-off-by: Frederic Weisbecker
    Cc: Mike Frysinger
    Cc: Guan Xuetao
    Cc: David Miller
    Cc: Chris Metcalf
    Cc: Hans-Christian Egtvedt
    Cc: Ralf Baechle
    Cc: Paul E. McKenney
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: H. Peter Anvin
    Cc: Russell King
    Cc: Paul Mackerras
    Cc: Heiko Carstens
    Cc: Paul Mundt
    Signed-off-by: Paul E. McKenney

    Frederic Weisbecker
     
  • The tick_nohz_stop_sched_tick() function, which tries to delay
    the next timer tick as long as possible, can be called from two
    places:

    - From the idle loop to start the dyntick idle mode
    - From interrupt exit if we have interrupted the dyntick
    idle mode, so that we reprogram the next tick event in
    case the irq changed some internal state that requires this
    action.

    There are only a few minor differences between the two cases,
    handled by that function and driven by the per-cpu ts->inidle
    variable and the inidle parameter. Together these guarantee
    that we only update the dyntick mode on irq exit if we actually
    interrupted the dyntick idle mode, and that we enter the RCU
    extended quiescent state from idle loop entry only.

    Split this function into:

    - tick_nohz_idle_enter(), which sets ts->inidle to 1, enters
    dynticks idle mode unconditionally if it can, and enters the RCU
    extended quiescent state.

    - tick_nohz_irq_exit(), which only updates the dynticks idle mode
    when ts->inidle is set (i.e. if tick_nohz_idle_enter() has been
    called).

    To maintain symmetry, tick_nohz_restart_sched_tick() has been
    renamed to tick_nohz_idle_exit().

    This simplifies the code and micro-optimizes the irq exit path (no
    need for local_irq_save() there). It also prepares for the split
    between the dyntick and RCU extended quiescent state logics, which
    we'll need in order to fix illegal uses of RCU in extended quiescent
    states from the idle loop.
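
    The resulting call sites, roughly (the surrounding loop is
    illustrative, not the literal kernel code):

        /* idle loop */
        tick_nohz_idle_enter();   /* sets ts->inidle, stops the tick,
                                     enters RCU extended quiescent state */
        while (!need_resched())
                cpu_sleep();
        tick_nohz_idle_exit();    /* was tick_nohz_restart_sched_tick() */

        /* interrupt exit path */
        tick_nohz_irq_exit();     /* no-op unless ts->inidle is set, i.e.
                                     unless dyntick idle was interrupted */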

    Signed-off-by: Frederic Weisbecker
    Cc: Mike Frysinger
    Cc: Guan Xuetao
    Cc: David Miller
    Cc: Chris Metcalf
    Cc: Hans-Christian Egtvedt
    Cc: Ralf Baechle
    Cc: Paul E. McKenney
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: H. Peter Anvin
    Cc: Russell King
    Cc: Paul Mackerras
    Cc: Heiko Carstens
    Cc: Paul Mundt
    Signed-off-by: Paul E. McKenney
    Reviewed-by: Josh Triplett

    Frederic Weisbecker
     

05 Aug, 2011

1 commit

  • Commit a0bfa1373859e9d11dc92561a8667588803e42d8 misspells
    cpuidle_idle_call() in ARM and SH code. Fix this to be consistent.

    Cc: Kevin Hilman
    Cc: Paul Mundt
    Cc: x86@kernel.org
    Cc: Len Brown
    Signed-off-by: David Brown
    [ Also done by Mark Brown - the bug has been around forever, and was
    noticed in -next, but the idle tree never picked it up. Bad bad bad ]
    Signed-off-by: Linus Torvalds

    David Brown
     

04 Aug, 2011

2 commits

  • * 'idle-release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-idle-2.6:
    cpuidle: stop depending on pm_idle
    x86 idle: move mwait_idle_with_hints() to where it is used
    cpuidle: replace xen access to x86 pm_idle and default_idle
    cpuidle: create bootparam "cpuidle.off=1"
    mrst_pmu: driver for Intel Moorestown Power Management Unit

    Linus Torvalds
     
  • cpuidle users should call cpuidle_idle_call() directly
    rather than via the (pm_idle)() function pointer.

    An architecture may choose to continue using (pm_idle)(),
    but cpuidle need not depend on it:

        my_arch_cpu_idle()
                ...
                if (cpuidle_idle_call())
                        pm_idle();

    cc: Kevin Hilman
    cc: Paul Mundt
    cc: x86@kernel.org
    Acked-by: H. Peter Anvin
    Signed-off-by: Len Brown

    Len Brown
     

27 Jul, 2011

1 commit

  • This allows us to move duplicated code in <asm/atomic.h>
    (atomic_inc_not_zero() for now) to <linux/atomic.h>.
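
    As a small illustration (a sketch, not code from this commit),
    atomic_inc_not_zero() implements "take a reference only if the
    object is still live":

        #include <linux/atomic.h>

        static atomic_t refcount = ATOMIC_INIT(1);

        /* Returns true and takes a reference iff refcount was nonzero. */
        static bool get_ref(void)
        {
                return atomic_inc_not_zero(&refcount);
        }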

    Signed-off-by: Arun Sharma
    Reviewed-by: Eric Dumazet
    Cc: Ingo Molnar
    Cc: David Miller
    Cc: Eric Dumazet
    Acked-by: Mike Frysinger
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arun Sharma
     

20 Jan, 2010

1 commit

  • This provides a machine_ops-based reboot interface loosely cloned from
    x86, and converts the native sh32 and sh64 cases over to it.

    This is necessary both for tying in SMP support and for enabling
    platforms like SDK7786 to add support for their microcontroller-based
    power managers.
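
    A rough sketch of the shape of such an interface (the field names
    below are modeled on the x86 machine_ops and are illustrative, not
    necessarily the exact sh definition):

        struct machine_ops {
                void (*restart)(char *cmd);
                void (*halt)(void);
                void (*power_off)(void);
                void (*shutdown)(void);
        };

        extern struct machine_ops machine_ops;

        /* A platform like SDK7786 can then override, e.g., .power_off
         * with a hook into its microcontroller-based power manager. */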

    Signed-off-by: Paul Mundt

    Paul Mundt
     

16 Oct, 2009

3 commits

  • In the case where need_resched() is set between the cpu_idle() and
    pm_idle() calls, we were missing an else case that simply re-enables
    local IRQs and bails out. This was caught via the irqs_disabled()
    warning, even though IRQs were being re-enabled elsewhere.
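
    A sketch of the fixed logic (simplified from the actual idle loop):

        local_irq_disable();
        if (!need_resched())
                pm_idle();              /* re-enables IRQs on wakeup */
        else
                local_irq_enable();     /* the previously missing path */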

    Signed-off-by: Paul Mundt

    Paul Mundt
     
  • This follows the x86 change and moves check_pgt_cache() up under the
    !need_resched() tight loop, rather than simply calling in to it when
    exiting idle.
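
    Roughly (a sketch of the loop structure, not the exact code):

        while (!need_resched()) {
                check_pgt_cache();      /* was called only on idle exit */
                pm_idle();
        }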

    Signed-off-by: Paul Mundt

    Paul Mundt
     
  • This does a bit of chainsawing of the idle loop code to get light
    sleep working on SMP. Previously this was forcing secondary CPUs
    into sleep mode, with them not coming back if they didn't have their
    own local timers. Given that we use clockevents broadcasting by
    default, the CPU managing the clockevents can't have IRQs disabled
    before entering its sleep state.

    This unfortunately leaves us with the age-old need_resched() race
    between local_irq_enable() and cpu_sleep(), but at present this is
    unavoidable. After some more experimentation it may be possible to
    layer SR.BL bit manipulation on top of this scheme to inhibit the
    race condition, but given the current potential for missed wakeups,
    this is left as a future exercise.
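
    The race in question, roughly:

        while (!need_resched()) {
                local_irq_enable();
                /* an IRQ landing here can set need_resched(); the
                 * wakeup is missed and we enter sleep anyway */
                cpu_sleep();
        }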

    Signed-off-by: Paul Mundt

    Paul Mundt
     
