19 Jun, 2015

2 commits

  • If nohz is disabled on the kernel command line the [hr]timer code
    still calls wake_up_nohz_cpu() and tick_nohz_full_cpu(), a pretty
    pointless exercise. Cache nohz_active in [hr]timer per cpu bases and
    avoid the overhead.

    Before:
    48.10% hog [.] main
    15.25% [kernel] [k] _raw_spin_lock_irqsave
    9.76% [kernel] [k] _raw_spin_unlock_irqrestore
    6.50% [kernel] [k] mod_timer
    6.44% [kernel] [k] lock_timer_base.isra.38
    3.87% [kernel] [k] detach_if_pending
    3.80% [kernel] [k] del_timer
    2.67% [kernel] [k] internal_add_timer
    1.33% [kernel] [k] __internal_add_timer
    0.73% [kernel] [k] timerfn
    0.54% [kernel] [k] wake_up_nohz_cpu

    After:
    48.73% hog [.] main
    15.36% [kernel] [k] _raw_spin_lock_irqsave
    9.77% [kernel] [k] _raw_spin_unlock_irqrestore
    6.61% [kernel] [k] lock_timer_base.isra.38
    6.42% [kernel] [k] mod_timer
    3.90% [kernel] [k] detach_if_pending
    3.76% [kernel] [k] del_timer
    2.41% [kernel] [k] internal_add_timer
    1.39% [kernel] [k] __internal_add_timer
    0.76% [kernel] [k] timerfn

    We probably should have a cached value for nohz full in the per cpu
    bases as well to avoid the cpumask check. The base cache line is hot
    already, the cpumask not necessarily.
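
    A minimal sketch of the idea, with illustrative structure and function
    names (the real per cpu bases and helpers differ in detail):

      struct timer_base {
              spinlock_t      lock;
              bool            nohz_active;    /* cached: is nohz active? */
              /* ... timer wheel ... */
      };

      static void wake_target_cpu(struct timer_base *base, int cpu)
      {
              /*
               * The base cache line is hot anyway; with nohz=off we never
               * call into the nohz/sched helpers at all.
               */
              if (base->nohz_active)
                      wake_up_nohz_cpu(cpu);
      }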

    Signed-off-by: Thomas Gleixner
    Cc: Peter Zijlstra
    Cc: Paul McKenney
    Cc: Frederic Weisbecker
    Cc: Eric Dumazet
    Cc: Viresh Kumar
    Cc: John Stultz
    Cc: Joonwoo Park
    Cc: Wenbo Wang
    Link: http://lkml.kernel.org/r/20150526224512.207378134@linutronix.de
    Signed-off-by: Thomas Gleixner

    Thomas Gleixner
     
  • Eric reported that the timer_migration sysctl is not really nice
    performance-wise, as it needs to check at every timer insertion whether
    the feature is enabled or not. Further, the check does not live in the
    timer code, so we have an extra function call which checks an extra
    cache line to figure out that it is disabled.

    We can do better and store that information in the per cpu (hr)timer
    bases. I pondered to use a static key, but that's a nightmare to
    update from the nohz code and the timer base cache line is hot anyway
    when we select a timer base.

    The old logic enabled the timer migration unconditionally if
    CONFIG_NO_HZ was set even if nohz was disabled on the kernel command
    line.

    With this modification, we start off with migration disabled. The
    user-visible sysctl is still set to enabled. If the kernel switches to
    NOHZ, migration is enabled unless the user disabled it via the sysctl
    prior to the switch. If nohz=off is on the kernel command line,
    migration stays disabled no matter what.

    Before:
    47.76% hog [.] main
    14.84% [kernel] [k] _raw_spin_lock_irqsave
    9.55% [kernel] [k] _raw_spin_unlock_irqrestore
    6.71% [kernel] [k] mod_timer
    6.24% [kernel] [k] lock_timer_base.isra.38
    3.76% [kernel] [k] detach_if_pending
    3.71% [kernel] [k] del_timer
    2.50% [kernel] [k] internal_add_timer
    1.51% [kernel] [k] get_nohz_timer_target
    1.28% [kernel] [k] __internal_add_timer
    0.78% [kernel] [k] timerfn
    0.48% [kernel] [k] wake_up_nohz_cpu

    After:
    48.10% hog [.] main
    15.25% [kernel] [k] _raw_spin_lock_irqsave
    9.76% [kernel] [k] _raw_spin_unlock_irqrestore
    6.50% [kernel] [k] mod_timer
    6.44% [kernel] [k] lock_timer_base.isra.38
    3.87% [kernel] [k] detach_if_pending
    3.80% [kernel] [k] del_timer
    2.67% [kernel] [k] internal_add_timer
    1.33% [kernel] [k] __internal_add_timer
    0.73% [kernel] [k] timerfn
    0.54% [kernel] [k] wake_up_nohz_cpu
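
    A rough sketch of the resulting fast path (structure and variable names
    are approximate):

      struct timer_base {
              bool    migration_enabled;      /* cached migration state */
              /* ... lock, timer wheel ... */
      };
      static DEFINE_PER_CPU(struct timer_base, timer_bases);

      static struct timer_base *get_target_base(struct timer_base *base,
                                                bool pinned)
      {
              /*
               * migration_enabled starts off false and is only set once the
               * kernel switches to NOHZ and the sysctl still allows it.
               */
              if (pinned || !base->migration_enabled)
                      return base;
              return per_cpu_ptr(&timer_bases, get_nohz_timer_target());
      }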

    Reported-by: Eric Dumazet
    Signed-off-by: Thomas Gleixner
    Cc: Peter Zijlstra
    Cc: Paul McKenney
    Cc: Frederic Weisbecker
    Cc: Viresh Kumar
    Cc: John Stultz
    Cc: Joonwoo Park
    Cc: Wenbo Wang
    Link: http://lkml.kernel.org/r/20150526224512.127050787@linutronix.de
    Signed-off-by: Thomas Gleixner

    Thomas Gleixner
     

22 Apr, 2015

1 commit

  • The evaluation of the next timer in the nohz code is based on jiffies
    while all the tick internals are nanoseconds based. We also have to
    convert hrtimer nanoseconds to jiffies in the !highres case. That's
    just wrong and introduces interesting corner cases.

    Turn it around and convert the next timer wheel timer expiry and the
    rcu event to clock monotonic and base all calculations on
    nanoseconds. That clearly identifies the case where no timer is
    pending with an absolute expiry value of KTIME_MAX.

    Makes the code more readable and gets rid of the jiffies magic in the
    nohz code.
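
    Roughly, the wheel now reports its next expiry like this (a simplified
    sketch, not the exact kernel function):

      /*
       * Absolute CLOCK_MONOTONIC nanoseconds of the next wheel timer;
       * KTIME_MAX means "nothing pending".
       */
      static u64 next_wheel_event_ns(bool pending, unsigned long next_jif,
                                     unsigned long base_jif, u64 base_mono_ns)
      {
              if (!pending)
                      return KTIME_MAX;
              if (time_before_eq(next_jif, base_jif))
                      return base_mono_ns;    /* already expired */
              return base_mono_ns + (u64)(next_jif - base_jif) * TICK_NSEC;
      }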

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Paul E. McKenney
    Acked-by: Peter Zijlstra
    Cc: Preeti U Murthy
    Cc: Viresh Kumar
    Cc: Marcelo Tosatti
    Cc: Frederic Weisbecker
    Cc: Josh Triplett
    Cc: Lai Jiangshan
    Cc: John Stultz
    Cc: Marcelo Tosatti
    Link: http://lkml.kernel.org/r/20150414203502.184198593@linutronix.de
    Signed-off-by: Thomas Gleixner

    Thomas Gleixner
     

03 Apr, 2015

4 commits

  • clockevents_notify() is a leftover from the early design of the
    clockevents facility. It's really not a notification mechanism,
    it's a multiplex call. We are way better off to have explicit
    calls instead of this monstrosity.

    Split out the cleanup function for a dead cpu and invoke it
    directly from the cpu down code. Make it conditional on
    CPU_HOTPLUG as well.

    Temporary change, will be refined in the future.
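
    The shape of the conversion, sketched (the surrounding hotplug helper
    here is hypothetical):

      static void cpu_down_cleanup_timers(int cpu)
      {
      #ifdef CONFIG_HOTPLUG_CPU
              /* was: clockevents_notify(CLOCK_EVT_NOTIFY_CPU_DEAD, &cpu); */
              tick_cleanup_dead_cpu(cpu);
      #endif
      }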

    Signed-off-by: Thomas Gleixner
    [ Rebased, added clockevents_notify() removal ]
    Signed-off-by: Rafael J. Wysocki
    Cc: Peter Zijlstra
    Link: http://lkml.kernel.org/r/1735025.raBZdQHM3m@vostro.rjw.lan
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
     
  • clockevents_notify() is a leftover from the early design of the
    clockevents facility. It's really not a notification mechanism,
    it's a multiplex call. We are way better off to have explicit
    calls instead of this monstrosity.

    Split out the tick_handover call and invoke it explicitly from
    the hotplug code. Temporary solution will be cleaned up in later
    patches.

    Signed-off-by: Thomas Gleixner
    [ Rebase ]
    Signed-off-by: Rafael J. Wysocki
    Cc: Peter Zijlstra
    Cc: John Stultz
    Cc: Linus Torvalds
    Cc: Andrew Morton
    Link: http://lkml.kernel.org/r/1658173.RkEEILFiQZ@vostro.rjw.lan
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
     
  • clockevents_notify() is a leftover from the early design of the
    clockevents facility. It's really not a notification mechanism,
    it's a multiplex call. We are way better off to have explicit
    calls instead of this monstrosity.

    Split out the broadcast oneshot control into a separate function
    and provide inline helpers. Switch clockevents_notify() over.
    This will go away once all callers are converted.

    This also gets rid of the nested locking of clockevents_lock and
    broadcast_lock. The broadcast oneshot control functions do not
    require clockevents_lock. Only the managing functions
    (setup/shutdown/suspend/resume of the broadcast device) require
    clockevents_lock.
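
    The explicit interface looks roughly like this (sketch of the control
    function and the inline helpers mentioned above):

      enum tick_broadcast_state {
              TICK_BROADCAST_EXIT,
              TICK_BROADCAST_ENTER,
      };

      int tick_broadcast_oneshot_control(enum tick_broadcast_state state);

      static inline int tick_broadcast_enter(void)
      {
              return tick_broadcast_oneshot_control(TICK_BROADCAST_ENTER);
      }

      static inline void tick_broadcast_exit(void)
      {
              tick_broadcast_oneshot_control(TICK_BROADCAST_EXIT);
      }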

    Signed-off-by: Thomas Gleixner
    Signed-off-by: Rafael J. Wysocki
    Cc: Alexandre Courbot
    Cc: Daniel Lezcano
    Cc: Len Brown
    Cc: Peter Zijlstra
    Cc: Stephen Warren
    Cc: Thierry Reding
    Cc: Tony Lindgren
    Link: http://lkml.kernel.org/r/13000649.8qZuEDV0OA@vostro.rjw.lan
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
     
  • clockevents_notify() is a leftover from the early design of the
    clockevents facility. It's really not a notification mechanism,
    it's a multiplex call. We are way better off to have explicit
    calls instead of this monstrosity.

    Split out the broadcast control into a separate function and
    provide inline helpers. Switch clockevents_notify() over. This
    will go away once all callers are converted.

    This also gets rid of the nested locking of clockevents_lock and
    broadcast_lock. The broadcast control functions do not require
    clockevents_lock. Only the managing functions
    (setup/shutdown/suspend/resume of the broadcast device) require
    clockevents_lock.

    Signed-off-by: Thomas Gleixner
    Signed-off-by: Rafael J. Wysocki
    Cc: Daniel Lezcano
    Cc: Len Brown
    Cc: Peter Zijlstra
    Cc: Tony Lindgren
    Link: http://lkml.kernel.org/r/8086559.ttsuS0n1Xr@vostro.rjw.lan
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
     

02 Apr, 2015

1 commit

  • Move the broadcasting related section to the GENERIC_CLOCKEVENTS=y
    section - this also solves build failures on architectures that
    don't use generic clockevents yet.

    Also standardize include file style to make it easier to read, and
    use nesting depth aware preprocessor directives to make future merges
    easier.

    Cc: Peter Zijlstra
    Cc: Rafael J. Wysocki
    Cc: Thomas Gleixner
    Signed-off-by: Ingo Molnar

    Ingo Molnar
     

01 Apr, 2015

8 commits

  • Use the new tick_suspend/resume_local() and get rid of the
    home-brewed implementation of these in the ARM bL switcher. The
    check for the cpumask is completely pointless. There is no harm
    to suspend a per cpu tick device unconditionally. If that's a
    real issue then we fix it properly at the core level and not with
    some completely undocumented hacks in some random core code.

    Move the tick internals to the core code, now that this nuisance
    is gone.

    Signed-off-by: Thomas Gleixner
    [ rjw: Rebase, changelog ]
    Signed-off-by: Rafael J. Wysocki
    Cc: Nicolas Pitre
    Cc: Peter Zijlstra
    Cc: Russell King
    Link: http://lkml.kernel.org/r/1655112.Ws17YsMfN7@vostro.rjw.lan
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
     
  • Xen calls into tick_resume() on every cpu, which is just wrong.
    tick_resume() is for the syscore global suspend/resume
    invocation. What XEN really wants is a per cpu local resume
    function.

    Provide a tick_resume_local() function and use it in XEN.

    Also provide a complementary tick_suspend_local() and modify
    tick_unfreeze() and tick_freeze(), respectively, to use the
    new local tick resume/suspend functions.
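
    A simplified sketch of how the freeze/unfreeze paths use the new local
    helpers (the locking and bookkeeping shown are illustrative):

      static DEFINE_RAW_SPINLOCK(tick_freeze_lock);
      static unsigned int tick_freeze_depth;

      void tick_freeze(void)
      {
              raw_spin_lock(&tick_freeze_lock);
              if (++tick_freeze_depth == num_online_cpus())
                      timekeeping_suspend();  /* last cpu stops timekeeping */
              else
                      tick_suspend_local();   /* others stop their tick only */
              raw_spin_unlock(&tick_freeze_lock);
      }

      void tick_unfreeze(void)
      {
              raw_spin_lock(&tick_freeze_lock);
              if (tick_freeze_depth == num_online_cpus())
                      timekeeping_resume();   /* first cpu back restores it */
              else
                      tick_resume_local();
              tick_freeze_depth--;
              raw_spin_unlock(&tick_freeze_lock);
      }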

    Signed-off-by: Thomas Gleixner
    [ Combined two patches, rebased, modified subject/changelog. ]
    Signed-off-by: Rafael J. Wysocki
    Cc: Boris Ostrovsky
    Cc: David Vrabel
    Cc: Konrad Rzeszutek Wilk
    Cc: Peter Zijlstra
    Link: http://lkml.kernel.org/r/1698741.eezk9tnXtG@vostro.rjw.lan
    [ Merged to latest timers/core. ]
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
     
  • Solely used in tick-broadcast.c and the return value is
    hardcoded 0. Make it static and void.

    Signed-off-by: Thomas Gleixner
    Signed-off-by: Rafael J. Wysocki
    Cc: Peter Zijlstra
    Link: http://lkml.kernel.org/r/1689058.QkHYDJSRKu@vostro.rjw.lan
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
     
  • clockevents_notify() is a leftover from the early design of the
    clockevents facility. It's really not a notification mechanism,
    it's a multiplex call.

    We are way better off to have explicit calls instead of this
    monstrosity. Split out the suspend/resume() calls and invoke
    them directly from the call sites.

    No locking required at this point because these calls happen
    with interrupts disabled and a single cpu online.

    Signed-off-by: Thomas Gleixner
    [ Rebased on top of 4.0-rc5. ]
    Signed-off-by: Rafael J. Wysocki
    Cc: Peter Zijlstra
    Link: http://lkml.kernel.org/r/713674030.jVm1qaHuPf@vostro.rjw.lan
    [ Rebased on top of latest timers/core. ]
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
     
  • No point to expose everything to the world. People just believe
    such functions can be abused for whatever purposes. Sigh.

    Signed-off-by: Thomas Gleixner
    [ Rebased on top of 4.0-rc5 ]
    Signed-off-by: Rafael J. Wysocki
    Cc: Nicolas Pitre
    Cc: Peter Zijlstra
    Link: http://lkml.kernel.org/r/28017337.VbCUc39Gme@vostro.rjw.lan
    [ Merged to latest timers/core ]
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
     
  • tick-internal.h is pretty confusing as a lot of the stub inlines
    are there several times.

    Disentangle the maze and make clear functional sections.

    Signed-off-by: Thomas Gleixner
    [ rjw: Subject ]
    Signed-off-by: Rafael J. Wysocki
    Cc: Peter Zijlstra
    Link: http://lkml.kernel.org/r/16068264.vcNp79HLaT@vostro.rjw.lan
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
     
  • Move clocksource related stuff to timekeeping.h and remove the
    pointless include from ntp.c

    Signed-off-by: Thomas Gleixner
    [ rjw: Subject ]
    Signed-off-by: Rafael J. Wysocki
    Cc: Peter Zijlstra
    Link: http://lkml.kernel.org/r/2714218.nM5AEfAHj0@vostro.rjw.lan
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
     
  • This option was for simpler migration to the clock events code.
    Most architectures have been converted and the option has been
    dysfunctional as a standalone option for quite some time. Remove
    it.

    Signed-off-by: Thomas Gleixner
    Signed-off-by: Rafael J. Wysocki
    Cc: Peter Zijlstra
    Link: http://lkml.kernel.org/r/5021859.jl9OC1medj@vostro.rjw.lan
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
     

27 Mar, 2015

1 commit

  • An upcoming patch will redefine the possible states of a clockevent
    device. The RESUME mode is a special case only for the tick's
    clockevent devices. In the future it can be replaced by the ->resume()
    callback already available for clockevent devices.

    Let's handle it separately so that clockevents_set_mode() only
    handles states valid across all devices. This also renames
    set_mode_resume() to tick_resume() to make it more explicit.

    Signed-off-by: Viresh Kumar
    Acked-by: Peter Zijlstra
    Cc: Daniel Lezcano
    Cc: Frederic Weisbecker
    Cc: Kevin Hilman
    Cc: Peter Zijlstra
    Cc: Preeti U Murthy
    Cc: linaro-kernel@lists.linaro.org
    Cc: linaro-networking@linaro.org
    Cc: linux-arm-kernel@lists.infradead.org
    Link: http://lkml.kernel.org/r/c1b0112410870f49e7bf06958e1483eac6c15e20.1425037853.git.viresh.kumar@linaro.org
    Signed-off-by: Ingo Molnar

    Viresh Kumar
     

14 Sep, 2014

1 commit

  • This way we unbloat main.c a bit and, more importantly, we initialize
    nohz full after init_IRQ(). This dependency will be needed in further
    patches because nohz full needs irq work to raise its own IRQ.
    Information about the support for this ability on ARM64 is obtained on
    init_IRQ(), which initializes the pointer to __smp_call_function.

    Since tick_init() is called right after init_IRQ(), this is a good place
    to call tick_nohz_init() and prepare for that dependency.
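
    In code, the resulting ordering is simply (sketch):

      /*
       * init/main.c: init_IRQ() runs right before tick_init(), so the nohz
       * full setup done from here can rely on irq work being functional.
       */
      void __init tick_init(void)
      {
              tick_broadcast_init();
              tick_nohz_init();       /* moved here from start_kernel() */
      }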

    Acked-by: Peter Zijlstra (Intel)
    Cc: Ingo Molnar
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Signed-off-by: Frederic Weisbecker

    Frederic Weisbecker
     

07 Feb, 2014

3 commits

  • Make the stub function static inline instead of static and move the
    clockevents related function into the proper ifdeffed section.

    Reported-by: Fengguang Wu
    Signed-off-by: Thomas Gleixner
    Cc: Soren Brinkmann
    Cc: Preeti U Murthy

    Thomas Gleixner
     
  • The broadcast framework can potentially be used by archs which do not
    have an external clock device as well. In that case one of the CPUs
    needs to handle the broadcasting of wakeup IPIs to the CPUs in deep
    idle. As a result its local timers should remain functional all the
    time. For such a CPU, the BROADCAST_ENTER notification has to fail,
    indicating that its clock device cannot be shut down. To make way for
    this support, change the return type of tick_broadcast_oneshot_control()
    and hence clockevents_notify() to indicate such scenarios.
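
    A sketch of what a caller (e.g. a cpuidle backend) has to do once the
    notification can fail (the deep idle entry below is hypothetical):

      static int enter_deep_idle_state(int cpu)
      {
              if (clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER, &cpu))
                      return -EBUSY;  /* must keep the local timer running */

              arch_enter_deep_idle();         /* hypothetical */
              clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_EXIT, &cpu);
              return 0;
      }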

    Signed-off-by: Preeti U Murthy
    Cc: deepthi@linux.vnet.ibm.com
    Cc: paulmck@linux.vnet.ibm.com
    Cc: fweisbec@gmail.com
    Cc: paulus@samba.org
    Cc: srivatsa.bhat@linux.vnet.ibm.com
    Cc: svaidy@linux.vnet.ibm.com
    Cc: peterz@infradead.org
    Cc: benh@kernel.crashing.org
    Cc: rafael.j.wysocki@intel.com
    Cc: linuxppc-dev@lists.ozlabs.org
    Link: http://lkml.kernel.org/r/20140207080606.17187.78306.stgit@preeti.in.ibm.com
    Signed-off-by: Thomas Gleixner

    Preeti U Murthy
     
  • We can identify the broadcast device in the core and serialize all
    callers including interrupts on a different CPU against the update.
    Also, disabling interrupts is moved into the core allowing callers to
    leave interrupts enabled when calling clockevents_update_freq().
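
    The resulting shape of the function, roughly (sketch):

      int clockevents_update_freq(struct clock_event_device *dev, u32 freq)
      {
              unsigned long flags;
              int ret;

              /* callers no longer have to disable interrupts themselves */
              raw_spin_lock_irqsave(&clockevents_lock, flags);
              ret = __clockevents_update_freq(dev, freq);
              raw_spin_unlock_irqrestore(&clockevents_lock, flags);
              return ret;
      }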

    Signed-off-by: Soren Brinkmann
    Cc: linux-arm-kernel@lists.infradead.org
    Cc: Soeren Brinkmann
    Cc: Daniel Lezcano
    Cc: Michal Simek
    Link: http://lkml.kernel.org/r/1391466877-28908-2-git-send-email-soren.brinkmann@xilinx.com
    Signed-off-by: Thomas Gleixner

    Thomas Gleixner
     

24 Dec, 2013

1 commit

  • Since the xtime lock was split into the timekeeping lock and
    the jiffies lock, we no longer need to call update_wall_time()
    while holding the jiffies lock.

    Thus, this patch splits update_wall_time() out from do_timer().

    This allows us to get away from calling clock_was_set_delayed()
    in update_wall_time() and instead use the standard clock_was_set()
    call that previously would deadlock, as it causes the jiffies lock
    to be acquired.
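
    A sketch of the resulting split (simplified):

      /* still called under jiffies_lock from the tick */
      void do_timer(unsigned long ticks)
      {
              jiffies_64 += ticks;
              calc_global_load(ticks);
      }

      /* now called from the tick path without jiffies_lock held */
      void update_wall_time(void);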

    Cc: Sasha Levin
    Cc: Thomas Gleixner
    Cc: Prarit Bhargava
    Cc: Richard Cochran
    Cc: Ingo Molnar
    Signed-off-by: John Stultz

    John Stultz
     

03 Dec, 2013

1 commit

  • A few functions use remote per CPU access APIs when they
    deal with local values.

    Just do the right conversion to improve performance, code
    readability and debug checks.

    While at it, let's extend some of these function names with a *_this_cpu()
    suffix in order to make their purpose clearer.
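
    The kind of conversion this refers to, in miniature (the per cpu
    variable here is made up):

      static DEFINE_PER_CPU(int, tick_stopped_flag);  /* illustrative */

      static int tick_stopped_this_cpu(void)
      {
              /* was: per_cpu(tick_stopped_flag, smp_processor_id()) */
              return __this_cpu_read(tick_stopped_flag);
      }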

    Signed-off-by: Frederic Weisbecker
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Cc: Oleg Nesterov
    Cc: Steven Rostedt

    Frederic Weisbecker
     

19 Oct, 2013

1 commit

  • sysfs_get_uname() is erroneously declared as returning size_t even
    though it may return a negative value, specifically -EINVAL. Its
    callers then check whether its return value is less than zero, which
    can never be the case for a size_t.

    This patch changes sysfs_get_uname() to return ssize_t and makes sure
    its callers use ssize_t accordingly.
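
    The fix in a nutshell (sketch of the prototype plus a typical caller
    check; the caller below is illustrative):

      /* length of the parsed name on success, -EINVAL on error */
      ssize_t sysfs_get_uname(const char *buf, char *dst, size_t cnt);

      static ssize_t store_name(const char *buf, size_t count, char *dst)
      {
              ssize_t ret = sysfs_get_uname(buf, dst, count);

              if (ret < 0)            /* now actually reachable */
                      return ret;
              return count;
      }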

    Signed-off-by: Patrick Palka
    [jstultz: Didn't apply cleanly, as a similar partial fix was also applied
    so had to resolve the collisions]
    Signed-off-by: John Stultz

    Patrick Palka
     

16 May, 2013

3 commits

  • Provide a sysfs interface to allow unbinding of clockevent
    devices. The device is unbound if it is unused or if there is a
    replacement device available. Unbinding of broadcast devices is not
    supported as we don't want to foster that nonsense. If no replacement
    device is available the unbind returns -EBUSY. Unbind is available
    from the kernel and through sysfs, which is necessary to drop the
    module refcount.

    Signed-off-by: Thomas Gleixner
    Cc: John Stultz
    Cc: Magnus Damm
    Link: http://lkml.kernel.org/r/20130425143436.499216659@linutronix.de
    Signed-off-by: Thomas Gleixner

    Thomas Gleixner
     
  • No need to call another function and have duplicated cases.

    Signed-off-by: Thomas Gleixner
    Cc: John Stultz
    Cc: Magnus Damm
    Link: http://lkml.kernel.org/r/20130425143436.235746557@linutronix.de
    Signed-off-by: Thomas Gleixner

    Thomas Gleixner
     
  • 7+ years and still a single user. Kill it.

    Signed-off-by: Thomas Gleixner
    Cc: John Stultz
    Cc: Magnus Damm
    Link: http://lkml.kernel.org/r/20130425143436.098520211@linutronix.de
    Signed-off-by: Thomas Gleixner

    Thomas Gleixner
     

08 Sep, 2011

1 commit

  • The automatic increase of the min_delta_ns of a clockevents device
    should be done in the clockevents code as the minimum delay is an
    attribute of the clockevents device.

    In addition, not all architectures want the automatic adjustment: on a
    massively virtualized system it can happen that the programming of a
    clock event fails several times in a row because the virtual cpu has
    been rescheduled quickly enough. In that case the minimum delay will
    erroneously be increased with no way back. The new config symbol
    GENERIC_CLOCKEVENTS_MIN_ADJUST is used to enable the automatic
    adjustment. The config option is selected only for x86.
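
    An illustrative sketch of the opt-in adjustment (the growth policy and
    helper name are made up; the real logic lives in the clockevents core):

      #ifdef CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST
      static void increase_min_delta(struct clock_event_device *dev)
      {
              /* grow the minimum after a failed programming attempt */
              dev->min_delta_ns += dev->min_delta_ns >> 1;
              if (dev->min_delta_ns > 200000ULL)      /* arbitrary cap */
                      dev->min_delta_ns = 200000ULL;
      }
      #endif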

    Signed-off-by: Martin Schwidefsky
    Cc: john stultz
    Link: http://lkml.kernel.org/r/20110823133142.494157493@de.ibm.com
    Signed-off-by: Thomas Gleixner

    Martin Schwidefsky
     

16 Mar, 2011

1 commit

  • …l/git/tip/linux-2.6-tip

    * 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (62 commits)
    posix-clocks: Check write permissions in posix syscalls
    hrtimer: Remove empty hrtimer_init_hres_timer()
    hrtimer: Update hrtimer->state documentation
    hrtimer: Update base[CLOCK_BOOTTIME].offset correctly
    timers: Export CLOCK_BOOTTIME via the posix timers interface
    timers: Add CLOCK_BOOTTIME hrtimer base
    time: Extend get_xtime_and_monotonic_offset() to also return sleep
    time: Introduce get_monotonic_boottime and ktime_get_boottime
    hrtimers: extend hrtimer base code to handle more then 2 clockids
    ntp: Remove redundant and incorrect parameter check
    mn10300: Switch do_timer() to xtimer_update()
    posix clocks: Introduce dynamic clocks
    posix-timers: Cleanup namespace
    posix-timers: Add support for fd based clocks
    x86: Add clock_adjtime for x86
    posix-timers: Introduce a syscall for clock tuning.
    time: Splitout compat timex accessors
    ntp: Add ADJ_SETOFFSET mode bit
    time: Introduce timekeeping_inject_offset
    posix-timer: Update comment
    ...

    Fix up new system-call-related conflicts in
    arch/x86/ia32/ia32entry.S
    arch/x86/include/asm/unistd_32.h
    arch/x86/include/asm/unistd_64.h
    arch/x86/kernel/syscall_table_32.S
    (name_to_handle_at()/open_by_handle_at() vs clock_adjtime()), and some
    due to movement of get_jiffies_64() in:
    kernel/time.c

    Linus Torvalds
     

26 Feb, 2011

1 commit

  • When the per cpu timer is marked CLOCK_EVT_FEAT_C3STOP, then we only
    can switch into oneshot mode, when the backup broadcast device
    supports oneshot mode as well. Otherwise we would try to switch the
    broadcast device into an unsupported mode unconditionally. This went
    unnoticed so far as the currently available broadcast devices support
    oneshot mode. Seth unearthed this problem while debugging and working
    around an hpet related BIOS wreckage.

    Add the necessary check to tick_is_oneshot_available().
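
    A sketch of the added check (close to, but not exactly, the resulting
    function):

      int tick_is_oneshot_available(void)
      {
              struct clock_event_device *dev =
                      __this_cpu_read(tick_cpu_device.evtdev);

              if (!dev || !(dev->features & CLOCK_EVT_FEAT_ONESHOT))
                      return 0;
              if (!(dev->features & CLOCK_EVT_FEAT_C3STOP))
                      return 1;
              /* C3STOP: usable only if the broadcast device can do oneshot */
              return tick_broadcast_oneshot_available();
      }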

    Reported-and-tested-by: Seth Forshee
    Signed-off-by: Thomas Gleixner
    LKML-Reference:
    Cc: stable@kernel.org # .21 ->

    Thomas Gleixner
     

01 Feb, 2011

2 commits

  • The xtime/do_timer cleanup broke architectures which do not implement
    clockevents. Time to send out another __do_IRQ threat.

    Signed-off-by: Thomas Gleixner
    Reported-by: Ingo Molnar
    Cc: Torben Hohn
    Cc: Peter Zijlstra
    Cc: johnstul@us.ibm.com
    Cc: yong.zhang0@gmail.com
    Cc: hch@infradead.org
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
     
  • All callers of do_timer() are converted to xtime_update(). The only
    users of xtime_lock are in kernel/time/. Make both local to
    kernel/time/ and remove them from the global header files.

    [ tglx: Reuse tick-internal.h instead of creating another local header
    file. Massaged changelog ]
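
    The wrapper that callers switch to looks roughly like this (sketch):

      /* kernel/time/ keeps xtime_lock private and wraps do_timer() */
      void xtime_update(unsigned long ticks)
      {
              write_seqlock(&xtime_lock);
              do_timer(ticks);
              write_sequnlock(&xtime_lock);
      }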

    Signed-off-by: Torben Hohn
    Cc: Peter Zijlstra
    Cc: johnstul@us.ibm.com
    Cc: yong.zhang0@gmail.com
    Cc: hch@infradead.org
    Signed-off-by: Thomas Gleixner

    Torben Hohn