15 Oct, 2014

1 commit

  • Pull percpu consistent-ops changes from Tejun Heo:
    "Way back, before the current percpu allocator was implemented, static
    and dynamic percpu memory areas were allocated and handled separately
    and had their own accessors. The distinction has been gone for many
    years now; however, the two now-duplicate sets of accessors remained,
    with the pointer based ones - this_cpu_*() - evolving various other
    operations over time. In the process, we also accumulated other
    inconsistent operations.

    This pull request contains Christoph's patches to clean up the
    duplicate accessor situation. __get_cpu_var() uses are replaced with
    this_cpu_ptr(), and __this_cpu_ptr() with raw_cpu_ptr().

    Unfortunately, the former sometimes is tricky thanks to C being a bit
    messy with the distinction between lvalues and pointers, which led to
    a rather ugly solution for cpumask_var_t involving the introduction of
    this_cpu_cpumask_var_ptr().

    This converts most of the uses but not all. Christoph will follow up
    with the remaining conversions in this merge window and hopefully
    remove the obsolete accessors"

    * 'for-3.18-consistent-ops' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (38 commits)
    irqchip: Properly fetch the per cpu offset
    percpu: Resolve ambiguities in __get_cpu_var/cpumask_var_t -fix
    ia64: sn_nodepda cannot be assigned to after this_cpu conversion. Use __this_cpu_write.
    percpu: Resolve ambiguities in __get_cpu_var/cpumask_var_t
    Revert "powerpc: Replace __get_cpu_var uses"
    percpu: Remove __this_cpu_ptr
    clocksource: Replace __this_cpu_ptr with raw_cpu_ptr
    sparc: Replace __get_cpu_var uses
    avr32: Replace __get_cpu_var with __this_cpu_write
    blackfin: Replace __get_cpu_var uses
    tile: Use this_cpu_ptr() for hardware counters
    tile: Replace __get_cpu_var uses
    powerpc: Replace __get_cpu_var uses
    alpha: Replace __get_cpu_var
    ia64: Replace __get_cpu_var uses
    s390: cio driver &__get_cpu_var replacements
    s390: Replace __get_cpu_var uses
    mips: Replace __get_cpu_var uses
    MIPS: Replace __get_cpu_var uses in FPU emulator.
    arm: Replace __this_cpu_ptr with raw_cpu_ptr
    ...

    Linus Torvalds
     

14 Oct, 2014

1 commit

  • In some cases we don't want hard lockup detection enabled by default.
    An example is when running as a guest. Introduce

    watchdog_enable_hardlockup_detector(bool)

    allowing those cases to disable hard lockup detection. This must be
    executed early by the boot processor from e.g. smp_prepare_boot_cpu, in
    order to allow kernel command line arguments to override it, as well as
    to avoid hard lockup detection being enabled before we've had a chance
    to indicate that it's unwanted. In summary,

    initial boot: default=enabled
    smp_prepare_boot_cpu
    watchdog_enable_hardlockup_detector(false): default=disabled
    cmdline has 'nmi_watchdog=1': default=enabled

    The running kernel still has the ability to enable/disable at any time
    with /proc/sys/kernel/nmi_watchdog as usual. However, even when the
    default has been overridden, /proc/sys/kernel/nmi_watchdog will initially
    show '1'. To truly turn it on one must disable then re-enable it, i.e.

    echo 0 > /proc/sys/kernel/nmi_watchdog
    echo 1 > /proc/sys/kernel/nmi_watchdog

    This patch will be immediately useful for KVM with the next patch of this
    series. Other hypervisor guest types may find it useful as well.

    [akpm@linux-foundation.org: fix build]
    [dzickus@redhat.com: fix compile issues on sparc]
    Signed-off-by: Ulrich Obergfell
    Signed-off-by: Andrew Jones
    Signed-off-by: Don Zickus
    Cc: Stephen Rothwell
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ulrich Obergfell
     


10 Oct, 2014

1 commit

  • Currently the soft lockup detector warns once for each softlockup case.
    But the 'watchdog/n' thread may not always get the cpu in the window
    between the task switch of two processes hogging that cpu, and so
    never gets to reset soft_watchdog_warn.

    An example would be two processes hogging the cpu. Process A causes the
    softlockup warning and is killed manually by a user. Process B
    immediately becomes the new process hogging the cpu preventing the
    softlockup code from resetting the soft_watchdog_warn variable.

    This case is a false negative of "warn only once per process", as a
    different process may go on to hog the cpu. Resolve this by
    saving/checking the task pointer of the hogging process and using that
    to reset soft_watchdog_warn too.

    [dzickus@redhat.com: update comment]
    Signed-off-by: chai wen
    Signed-off-by: Don Zickus
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    chai wen
     

27 Aug, 2014

1 commit

  • Most of these are the uses of &__raw_get_cpu_var for address calculation.

    touch_softlockup_watchdog_sync() uses __raw_get_cpu_var to write to
    per-cpu variables. Use __this_cpu_write instead.

    Cc: Wim Van Sebroeck
    Cc: linux-watchdog@vger.kernel.org
    Signed-off-by: Christoph Lameter
    Signed-off-by: Tejun Heo

    Christoph Lameter
     

18 Aug, 2014

2 commits

  • This patch avoids printing the message 'enabled on all CPUs,
    ...' multiple times. For example, the issue can occur in the
    following scenario:

    1) watchdog_nmi_enable() fails to enable PMU counters and sets
    cpu0_err.

    2) 'echo [0|1] > /proc/sys/kernel/nmi_watchdog' is executed to
    disable and re-enable the watchdog mechanism 'on the fly'.

    3) If watchdog_nmi_enable() then succeeds in enabling PMU counters,
    every CPU will print the message, because step 1 left behind a
    non-zero cpu0_err.

    if (!IS_ERR(event)) {
            if (cpu == 0 || cpu0_err)
                    pr_info("enabled on all CPUs, ...")

    The patch avoids this by clearing cpu0_err in watchdog_nmi_disable().

    Signed-off-by: Ulrich Obergfell
    Signed-off-by: Andrew Jones
    Signed-off-by: Don Zickus
    Cc: pbonzini@redhat.com
    Link: http://lkml.kernel.org/r/1407768567-171794-4-git-send-email-dzickus@redhat.com
    [ Applied small cleanups. ]
    Signed-off-by: Ingo Molnar

    Ulrich Obergfell
     
  • Signed-off-by: chai wen
    Signed-off-by: Don Zickus
    Cc: pbonzini@redhat.com
    Link: http://lkml.kernel.org/r/1407768567-171794-2-git-send-email-dzickus@redhat.com
    Signed-off-by: Ingo Molnar

    chai wen
     

09 Aug, 2014

1 commit

  • This taint flag will be set if the system has ever entered a softlockup
    state. Similar to TAINT_WARN it is useful to know whether or not the
    system has been in a softlockup state when debugging.

    [akpm@linux-foundation.org: apply the taint before calling panic()]
    Signed-off-by: Josh Hunt
    Cc: Jason Baron
    Cc: Randy Dunlap
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Josh Hunt
     


24 Jun, 2014

2 commits

  • A 'softlockup' is defined as a bug that causes the kernel to loop in
    kernel mode for more than a predefined period of time, without giving
    other tasks a chance to run.

    Currently, upon detection of this condition by the per-cpu watchdog
    task, debug information (including a stack trace) is sent to the system
    log.

    On some occasions, we have observed that the "victim" rather than the
    actual "culprit" (i.e. the owner/holder of the contended resource) is
    reported to the user. Often this information has proven to be
    insufficient to assist debugging efforts.

    To avoid loss of useful debug information, for architectures which
    support NMI, this patch makes it possible to improve soft lockup
    reporting. This is accomplished by issuing an NMI to each cpu to obtain
    a stack trace.

    If NMI is not supported we just fall back to the old method. A sysctl
    and a boot-time parameter are available to toggle this feature.

    [dzickus@redhat.com: add CONFIG_SMP in certain areas]
    [akpm@linux-foundation.org: additional CONFIG_SMP=n optimisations]
    [mq@suse.cz: fix warning]
    Signed-off-by: Aaron Tomlin
    Signed-off-by: Don Zickus
    Cc: David S. Miller
    Cc: Mateusz Guzik
    Cc: Oleg Nesterov
    Signed-off-by: Jan Moskyto Matejka
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Aaron Tomlin
     
  • Peter Wu noticed the following splat on his machine when updating
    /proc/sys/kernel/watchdog_thresh:

    BUG: sleeping function called from invalid context at mm/slub.c:965
    in_atomic(): 1, irqs_disabled(): 0, pid: 1, name: init
    3 locks held by init/1:
    #0: (sb_writers#3){.+.+.+}, at: [] vfs_write+0x143/0x180
    #1: (watchdog_proc_mutex){+.+.+.}, at: [] proc_dowatchdog+0x33/0x110
    #2: (cpu_hotplug.lock){.+.+.+}, at: [] get_online_cpus+0x32/0x80
    Preemption disabled at:[] proc_dowatchdog+0xe4/0x110

    CPU: 0 PID: 1 Comm: init Not tainted 3.16.0-rc1-testing #34
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
    Call Trace:
    dump_stack+0x4e/0x7a
    __might_sleep+0x11d/0x190
    kmem_cache_alloc_trace+0x4e/0x1e0
    perf_event_alloc+0x55/0x440
    perf_event_create_kernel_counter+0x26/0xe0
    watchdog_nmi_enable+0x75/0x140
    update_timers_all_cpus+0x53/0xa0
    proc_dowatchdog+0xe4/0x110
    proc_sys_call_handler+0xb3/0xc0
    proc_sys_write+0x14/0x20
    vfs_write+0xad/0x180
    SyS_write+0x49/0xb0
    system_call_fastpath+0x16/0x1b
    NMI watchdog: disabled (cpu0): hardware events not enabled

    What happened is after updating the watchdog_thresh, the lockup detector
    is restarted to utilize the new value. Part of this process involved
    disabling preemption. Once preemption was disabled, perf tried to
    allocate a new event (as part of the restart). This caused the above
    BUG_ON as you can't sleep with preemption disabled.

    The preemption restriction seemed aggressive, as we are not doing
    anything on that particular cpu but rather on all the online cpus
    (which are protected by the get_online_cpus lock). Remove the
    restriction and the BUG_ON goes away.

    Signed-off-by: Don Zickus
    Acked-by: Michal Hocko
    Reported-by: Peter Wu
    Tested-by: Peter Wu
    Acked-by: David Rientjes
    Cc: [3.13+]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Don Zickus
     

19 Apr, 2014

1 commit

  • Fix:

    BUG: using __this_cpu_write() in preemptible [00000000] code: systemd-udevd/497
    caller is __this_cpu_preempt_check+0x13/0x20
    CPU: 3 PID: 497 Comm: systemd-udevd Tainted: G W 3.15.0-rc1 #9
    Hardware name: Hewlett-Packard HP EliteBook 8470p/179B, BIOS 68ICF Ver. F.02 04/27/2012
    Call Trace:
    check_preemption_disabled+0xe1/0xf0
    __this_cpu_preempt_check+0x13/0x20
    touch_nmi_watchdog+0x28/0x40

    Reported-by: Luis Henriques
    Tested-by: Luis Henriques
    Cc: Eric Piel
    Cc: Robert Moore
    Cc: Lv Zheng
    Cc: "Rafael J. Wysocki"
    Cc: Len Brown
    Cc: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     

04 Apr, 2014

1 commit

  • I ran into a scenario where while one cpu was stuck and should have
    panic'd because of the NMI watchdog, it didn't. The reason was another
    cpu was spewing stack dumps on to the console. Upon investigation, I
    noticed that when writing to the console and also when dumping the
    stack, the watchdog is touched.

    This causes all the cpus to reset their NMI watchdog flags and the
    'stuck' cpu just spins forever.

    This change causes the semantics of touch_nmi_watchdog to be changed
    slightly. Previously, I accidentally changed the semantics and we
    noticed there was a codepath in which touch_nmi_watchdog could be
    touched from a preemptible area. That caused a BUG() to happen when
    CONFIG_DEBUG_PREEMPT was enabled. I believe it was the acpi code.

    My attempt here re-introduces the change to have the
    touch_nmi_watchdog() code only touch the local cpu instead of all of the
    cpus. But instead of using __get_cpu_var(), I use the
    __raw_get_cpu_var() version.

    This avoids the preemption problem. However, my reasoning wasn't that
    I was trying to be lazy. Instead I rationalized it as: well, if
    preemption is enabled then interrupts should be enabled too, and the
    NMI watchdog will have no reason to trigger. So it won't matter if the
    wrong cpu is touched, because the percpu interrupt counters the NMI
    watchdog uses should still be incrementing.

    Don said:

    : I'm ok with this patch, though it does alter the behaviour of how
    : touch_nmi_watchdog works. For the most part I don't think most callers
    : need to touch all of the watchdogs (on each cpu). Perhaps a corner case
    : will pop up (the scheduler?? to mimic touch_all_softlockup_watchdogs() ).
    :
    : But this does address an issue where if a system is locked up and one cpu
    : is spewing out useful debug messages (or error messages), the hard lockup
    : will fail to go off. We have seen this on RHEL also.

    Signed-off-by: Don Zickus
    Signed-off-by: Ben Zhang
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ben Zhang
     

25 Feb, 2014

1 commit

  • In order to remotely restart the watchdog hrtimer, update_timers()
    allocates a csd on the stack and passes it to __smp_call_function_single().

    There is no particular need, however, for a specific csd here. Let's
    simplify that a little by calling smp_call_function_single(),
    which already takes care of the csd allocation by itself.

    Acked-by: Don Zickus
    Reviewed-by: Michal Hocko
    Cc: Andrew Morton
    Cc: Christoph Hellwig
    Cc: Don Zickus
    Cc: Ingo Molnar
    Cc: Jan Kara
    Cc: Jens Axboe
    Cc: Michal Hocko
    Cc: Srivatsa S. Bhat
    Signed-off-by: Frederic Weisbecker
    Signed-off-by: Jens Axboe

    Frederic Weisbecker
     

25 Sep, 2013

2 commits

  • watchdog_thresh controls how often the nmi perf event counter checks the
    per-cpu hrtimer_interrupts counter and blows up if the counter hasn't
    changed since the last check. The counter is updated by the per-cpu
    watchdog_hrtimer hrtimer, which is scheduled with a 2/5 watchdog_thresh
    period; this guarantees that the hrtimer fires at least twice per main
    period. Both the hrtimer and the perf event are started together when
    the watchdog is enabled.

    So far so good. But...

    But what happens when watchdog_thresh is updated from sysctl handler?

    proc_dowatchdog will set a new sampling period and hrtimer callback
    (watchdog_timer_fn) will use the new value in the next round. The
    problem, however, is that nobody tells the perf event that the sampling
    period has changed so it is ticking with the period configured when it
    has been set up.

    This might result in an ear ripping dissonance between perf and hrtimer
    parts if the watchdog_thresh is increased. And even worse it might lead
    to KABOOM if the watchdog is configured to panic on such a spurious
    lockup.

    This patch fixes the issue by updating both the nmi perf event counter
    and the hrtimer if the threshold value has changed.

    The nmi one is disabled and then reinitialized from scratch. This has
    an unpleasant side effect that the allocation of the new event might
    fail theoretically so the hard lockup detector would be disabled for
    such cpus. On the other hand such a memory allocation failure is very
    unlikely because the original event is deallocated right before.

    It would be much nicer if we just changed perf event period but there
    doesn't seem to be any API to do that right now. It is also unfortunate
    that perf_event_alloc uses GFP_KERNEL allocation unconditionally so we
    cannot use on_each_cpu() and do the same thing from the per-cpu context.
    The update from the current CPU should be safe because
    perf_event_disable removes the event atomically before it clears the
    per-cpu watchdog_ev so it cannot change anything under running handler
    feet.

    The hrtimer is simply restarted (thanks to Don Zickus who pointed this
    out) if it is queued, because we cannot rely on it firing and adapting
    to the new sampling period before a new nmi event triggers (when the
    threshold is decreased).

    [akpm@linux-foundation.org: the UP version of __smp_call_function_single ended up in the wrong place]
    Signed-off-by: Michal Hocko
    Acked-by: Don Zickus
    Cc: Frederic Weisbecker
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: Fabio Estevam
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michal Hocko
     
  • proc_dowatchdog doesn't synchronize multiple callers, which might lead
    to confusion when two parallel callers race between
    watchdog_enable_all_cpus and watchdog_disable_all_cpus (e.g. the
    watchdog gets enabled even though watchdog_thresh was already set to 0).

    This patch adds a local mutex which synchronizes callers to the sysctl
    handler.

    Signed-off-by: Michal Hocko
    Cc: Frederic Weisbecker
    Acked-by: Don Zickus
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michal Hocko
     

31 Jul, 2013

1 commit

  • A perf event can be used without forcing the tick to
    stay alive if it uses a sample period rather than a
    frequency, and if it doesn't throttle (raise a storm of events).

    Since the lockup detector neither uses a perf event frequency
    nor should ever throttle, due to its high period, it can now
    run concurrently with the full dynticks feature.

    So remove the hack that disabled the watchdog.

    Signed-off-by: Frederic Weisbecker
    Cc: Jiri Olsa
    Cc: Namhyung Kim
    Cc: Arnaldo Carvalho de Melo
    Cc: Stephane Eranian
    Cc: Don Zickus
    Cc: Srivatsa S. Bhat
    Cc: Anish Singh
    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/r/1374539466-4799-9-git-send-email-fweisbec@gmail.com
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker
     

20 Jun, 2013

3 commits

  • When the watchdog runs, it prevents the full dynticks
    CPUs from stopping their tick because the hard lockup
    detector uses perf events internally, which in turn
    rely on the periodic tick.

    Since this is a rather confusing behaviour that is not
    easy to track down and identify for those who want to
    test CONFIG_NO_HZ_FULL, let's disable the watchdog by
    default at boot time when full dynticks is enabled.

    The user can still enable it later at runtime using
    proc or sysctl.

    Reported-by: Steven Rostedt
    Suggested-by: Peter Zijlstra
    Signed-off-by: Frederic Weisbecker
    Cc: Steven Rostedt
    Cc: Paul E. McKenney
    Cc: Ingo Molnar
    Cc: Andrew Morton
    Cc: Thomas Gleixner
    Cc: Peter Zijlstra
    Cc: Li Zhong
    Cc: Don Zickus
    Cc: Srivatsa S. Bhat
    Cc: Anish Singh

    Frederic Weisbecker
     
  • We have two very conflicting state variable names in the
    watchdog:

    * watchdog_enabled: This one reflects the user interface. It's
    set to 1 by default and can be overridden with boot options
    or the sysctl/procfs interface.

    * watchdog_disabled: This is the internal toggle state that
    tells if watchdog threads, timers and NMI events are currently
    running or not. This state mostly depends on the user settings.
    It's a convenient state latch.

    Now we really need to find clearer names because those
    are just too confusing to encourage deep review.

    watchdog_enabled now becomes watchdog_user_enabled to reflect
    its purpose as an interface.

    watchdog_disabled becomes watchdog_running to suggest its
    role as a pure internal state.

    Signed-off-by: Frederic Weisbecker
    Cc: Srivatsa S. Bhat
    Cc: Anish Singh
    Cc: Steven Rostedt
    Cc: Paul E. McKenney
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: Peter Zijlstra
    Cc: Borislav Petkov
    Cc: Li Zhong
    Cc: Don Zickus

    Frederic Weisbecker
     
  • The user activation/deactivation of the watchdog through boot parameters
    or sysctl is currently implemented with a dance involving kthread
    parking and unparking methods: the threads are unconditionally registered
    on boot and they park as soon as the user wants the watchdog disabled.

    This method involves a few noisy details to handle though: the watchdog
    kthreads may be unparked anytime due to hotplug operations, after which
    the watchdog internals have to decide to park again if it is user-disabled.

    As a result the setup() and unpark() methods need to be able to request a
    reparking. This is not currently supported in the kthread infrastructure
    so this piece of the watchdog code only works halfway.

    Besides, unparking/reparking the watchdog kthreads consumes unnecessary
    cputime on hotplug operations, when those could simply be ignored in
    the first place.

    As suggested by Srivatsa, let's instead only register the watchdog
    threads when they are needed. This way we don't need to think about
    hotplug operations and we don't burden the CPU onlining when the watchdog
    is simply disabled.

    Suggested-by: Srivatsa S. Bhat
    Signed-off-by: Frederic Weisbecker
    Cc: Srivatsa S. Bhat
    Cc: Anish Singh
    Cc: Steven Rostedt
    Cc: Paul E. McKenney
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: Peter Zijlstra
    Cc: Borislav Petkov
    Cc: Li Zhong
    Cc: Don Zickus

    Frederic Weisbecker
     

14 Mar, 2013

1 commit

  • The watchdog_disabled flag is a bit cryptic. However, its usefulness
    is multifold. Its uses are:

    1. Check whether smpboot_register_percpu_thread succeeded.

    2. Make sure the user enables and disables the watchdog in sequence,
    i.e. enable watchdog -> disable watchdog -> enable watchdog, as
    opposed to enable watchdog -> enable watchdog, which is wrong.

    Signed-off-by: anish kumar
    [small text cleanups]
    Signed-off-by: Don Zickus
    Cc: chuansheng.liu@intel.com
    Cc: paulmck@linux.vnet.ibm.com
    Link: http://lkml.kernel.org/r/1363113848-18344-1-git-send-email-dzickus@redhat.com
    Signed-off-by: Ingo Molnar

    anish kumar
     

23 Feb, 2013

1 commit

  • Pull core locking changes from Ingo Molnar:
    "The biggest change is the rwsem lock-steal improvements, both to the
    assembly optimized and the spinlock based variants.

    The other notable change is the clean up of the seqlock implementation
    to be based on the seqcount infrastructure.

    The rest is assorted smaller debuggability, cleanup and continued -rt
    locking changes."

    * 'core-locking-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
    rwsem-spinlock: Implement writer lock-stealing for better scalability
    futex: Revert "futex: Mark get_robust_list as deprecated"
    generic: Use raw local irq variant for generic cmpxchg
    lockdep: Selftest: convert spinlock to raw spinlock
    seqlock: Use seqcount infrastructure
    seqlock: Remove unused functions
    ntp: Make ntp_lock raw
    intel_idle: Convert i7300_idle_lock to raw_spinlock
    locking: Various static lock initializer fixes
    lockdep: Print more info when MAX_LOCK_DEPTH is exceeded
    rwsem: Implement writer lock-stealing for better scalability
    lockdep: Silence warning if CONFIG_LOCKDEP isn't set
    watchdog: Use local_clock for get_timestamp()
    lockdep: Rename print_unlock_inbalance_bug() to print_unlock_imbalance_bug()
    locking/stat: Fix a typo

    Linus Torvalds
     

19 Feb, 2013

1 commit

  • The get_timestamp() function is always called for the current cpu,
    thus using local_clock() is more appropriate, and it makes
    the code shorter and cleaner IMHO.

    Signed-off-by: Namhyung Kim
    Acked-by: Don Zickus
    Cc: Steven Rostedt
    Link: http://lkml.kernel.org/r/1356576585-28782-1-git-send-email-namhyung@kernel.org
    Signed-off-by: Ingo Molnar

    Namhyung Kim
     


20 Dec, 2012

1 commit

  • Commit 8d4516904b39 ("watchdog: Fix CPU hotplug regression") causes an
    oops or hard lockup when doing

    echo 0 > /proc/sys/kernel/nmi_watchdog
    echo 1 > /proc/sys/kernel/nmi_watchdog

    and the kernel is booted with nmi_watchdog=1 (default)

    Running laptop-mode-tools and disconnecting/connecting AC power will
    cause this to trigger, making it a common failure scenario on laptops.

    Instead of bailing out of watchdog_disable() when !watchdog_enabled we
    can initialize the hrtimer regardless of watchdog_enabled status. This
    makes it safe to call watchdog_disable() in the nmi_watchdog=0 case,
    without the negative effect on the enabled => disabled => enabled case.

    All these tests pass with this patch:
    - nmi_watchdog=1
    echo 0 > /proc/sys/kernel/nmi_watchdog
    echo 1 > /proc/sys/kernel/nmi_watchdog

    - nmi_watchdog=0
    echo 0 > /sys/devices/system/cpu/cpu1/online

    - nmi_watchdog=0
    echo mem > /sys/power/state

    Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=51661

    Cc: # v3.7
    Cc: Norbert Warmuth
    Cc: Joseph Salisbury
    Cc: Thomas Gleixner
    Signed-off-by: Bjørn Mork
    Signed-off-by: Linus Torvalds

    Bjørn Mork
     


05 Dec, 2012

1 commit

  • Norbert reported:
    "3.7-rc6 booted with nmi_watchdog=0 fails to suspend to RAM or
    offline CPUs. It's reproducable with a KVM guest and physical
    system."

    The reason is that commit bcd951cf ("watchdog: Use hotplug thread
    infrastructure") failed to take this into account. So the cpu offline
    code gets stuck in the teardown function because it accesses
    uninitialized data structures.

    Add a check for watchdog_enabled into that path to cure the issue.

    Reported-and-tested-by: Norbert Warmuth
    Tested-by: Joseph Salisbury
    Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1211231033230.2701@ionos
    Link: http://bugs.launchpad.net/bugs/1079534
    Signed-off-by: Thomas Gleixner

    Thomas Gleixner
     

27 Nov, 2012

1 commit

  • In get_sample_period(), unsigned long is not enough:

    watchdog_thresh * 2 * (NSEC_PER_SEC / 5)

    case 1:
    watchdog_thresh is 10 by default; the sample value will be 0xEE6B2800

    case 2:
    if watchdog_thresh is set to 20, the sample value will be 0x1DCD65000

    In case 2 we need a u64 to express the sample period. Otherwise,
    changing the threshold through proc often cannot succeed.

    Signed-off-by: liu chuansheng
    Acked-by: Don Zickus
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Chuansheng Liu
     


09 Aug, 2012

1 commit

  • Revert commit 45226e9 (NMI watchdog: fix for lockup detector breakage
    on resume) which breaks resume from system suspend on my SH7372
    Mackerel board (by causing a NULL pointer dereference to happen) and
    is generally wrong, because it abuses the CPU hotplug functionality
    in a shamelessly blatant way.

    The original issue should be addressed through appropriate syscore
    resume callback instead.

    Signed-off-by: Rafael J. Wysocki

    Rafael J. Wysocki
     

31 Jul, 2012

1 commit

  • On the suspend/resume path the boot CPU does not go though an
    offline->online transition. This breaks the NMI detector post-resume
    since it depends on PMU state that is lost when the system gets
    suspended.

    Fix this by forcing a CPU offline->online transition for the lockup
    detector on the boot CPU during resume.

    To provide more context, we enable NMI watchdog on Chrome OS. We have
    seen several reports of systems freezing up completely which indicated
    that the NMI watchdog was not firing for some reason.

    Debugging further, we found a simple way of repro'ing system freezes --
    issuing the command 'taskset 1 sh -c "echo nmilockup > /proc/breakme"'
    after the system has been suspended/resumed one or more times.

    With this patch in place, the system freezes result in panics, as
    expected.

    These panics provide a nice stack trace for us to debug the actual issue
    causing the freeze.

    [akpm@linux-foundation.org: fiddle with code comment]
    [akpm@linux-foundation.org: make lockup_detector_bootcpu_resume() conditional on CONFIG_SUSPEND]
    [akpm@linux-foundation.org: fix section errors]
    Signed-off-by: Sameer Nanda
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Cc: "Rafael J. Wysocki"
    Cc: Don Zickus
    Cc: Mandeep Singh Baines
    Cc: Srivatsa S. Bhat
    Cc: Anshuman Khandual
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sameer Nanda
     

14 Jun, 2012

1 commit

  • A bunch of bugzillas have complained about how noisy the nmi_watchdog
    is during boot-up, especially in its expected failure cases
    (like virt and bios resource contention).

    This is my attempt to quiet them down and keep it less confusing
    for the end user. What I did is print the message for cpu0 and
    save it for future comparisons. If future cpus have an
    identical message as cpu0, then don't print the redundant info.
    However, if a future cpu has a different message, happily print
    that loudly.

    Before the change, you would see something like:

    ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
    CPU0: Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83GHz stepping 0a
    Performance Events: PEBS fmt0+, Core2 events, Intel PMU driver.
    ... version: 2
    ... bit width: 40
    ... generic registers: 2
    ... value mask: 000000ffffffffff
    ... max period: 000000007fffffff
    ... fixed-purpose events: 3
    ... event mask: 0000000700000003
    NMI watchdog enabled, takes one hw-pmu counter.
    Booting Node 0, Processors #1
    NMI watchdog enabled, takes one hw-pmu counter.
    #2
    NMI watchdog enabled, takes one hw-pmu counter.
    #3 Ok.
    NMI watchdog enabled, takes one hw-pmu counter.
    Brought up 4 CPUs
    Total of 4 processors activated (22607.24 BogoMIPS).

    After the change, it is simplified to:

    ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
    CPU0: Intel(R) Core(TM)2 Quad CPU Q9550 @ 2.83GHz stepping 0a
    Performance Events: PEBS fmt0+, Core2 events, Intel PMU driver.
    ... version: 2
    ... bit width: 40
    ... generic registers: 2
    ... value mask: 000000ffffffffff
    ... max period: 000000007fffffff
    ... fixed-purpose events: 3
    ... event mask: 0000000700000003
    NMI watchdog: enabled on all CPUs, permanently consumes one hw-PMU counter.
    Booting Node 0, Processors #1 #2 #3 Ok.
    Brought up 4 CPUs

    V2: little changes based on Joe Perches' feedback
    V3: printk cleanup based on Ingo's feedback; checkpatch fix
    V4: keep printk as one long line
    V5: Ingo fix ups

    Reported-and-tested-by: Nathan Zimmer
    Signed-off-by: Don Zickus
    Cc: nzimmer@sgi.com
    Cc: joe@perches.com
    Link: http://lkml.kernel.org/r/1339594548-17227-1-git-send-email-dzickus@redhat.com
    Signed-off-by: Ingo Molnar

    Don Zickus
     

08 Apr, 2012

1 commit

  • A suspended VM can cause spurious soft lockup warnings. To avoid these, the
    watchdog now checks if the kernel knows it was stopped by the host and skips
    the warning if so. When the watchdog is reset successfully, clear the guest
    paused flag.

    Signed-off-by: Eric B Munson
    Signed-off-by: Marcelo Tosatti
    Signed-off-by: Avi Kivity

    Eric B Munson
     

24 Mar, 2012

3 commits

  • Revelation from Peter.

    Cc: Peter Zijlstra
    Cc: Don Zickus
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • It fixes some 80-col wordwrappings and adds some consistency.

    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • If the system is loaded while hotplugging a CPU we might end up with a
    bogus hardlockup detection. This has been seen during LTP pounder test
    executed in parallel with hotplug test.

    The main problem is that enable_watchdog (called when a CPU is brought
    up) registers a perf event which periodically checks a per-cpu counter
    (hrtimer_interrupts) that is updated from an hrtimer callback - but the
    hrtimer itself is started from the watchdog kernel thread.

    This means that by the time we check for a hard lockup, the kernel
    thread may still be sitting on the runqueue behind zillions of other
    tasks, so nobody has updated the value we rely on and we report a
    bogus hard lockup.

    Let's fix this by boosting the watchdog thread's priority before we
    wake it up rather than once it is already running. This still doesn't
    handle the case where the system is loaded with equally high-priority
    FIFO tasks, but that doesn't seem to be common, and the current
    implementation doesn't handle it either, so this is no worse.

    Unfortunately, we cannot start the perf counter from the watchdog
    thread because we could miss a real lockup, and we also cannot start
    the hrtimer from watchdog_enable because there is no way (at least
    none I know of) to start an hrtimer on a different CPU.
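    The detection logic at the heart of this race can be sketched in a
    self-contained simulation (variable and function names follow
    kernel/watchdog.c, but the code around them is a userspace model, not
    the kernel implementation):

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* hrtimer_interrupts is bumped by the watchdog hrtimer; the perf NMI
     * callback compares it with the value seen at the previous NMI. If
     * the counter has not moved, timer interrupts appear stuck and a hard
     * lockup is flagged. */
    static unsigned long hrtimer_interrupts;
    static unsigned long hrtimer_interrupts_saved;

    static void watchdog_hrtimer_fired(void)
    {
        hrtimer_interrupts++;
    }

    static bool is_hardlockup(void)
    {
        if (hrtimer_interrupts != hrtimer_interrupts_saved) {
            hrtimer_interrupts_saved = hrtimer_interrupts;
            return false;       /* the timer is still ticking */
        }
        return true;            /* no progress since the last NMI */
    }

    int main(void)
    {
        /* The hrtimer has not been started yet - e.g. because the thread
         * that should start it is stuck on the runqueue, as in the bug
         * above - so the very first check reports a (bogus) hard lockup. */
        assert(is_hardlockup());

        /* Once the hrtimer runs between NMIs, no lockup is reported. */
        watchdog_hrtimer_fired();
        assert(!is_hardlockup());
        watchdog_hrtimer_fired();
        assert(!is_hardlockup());

        /* The timer stops ticking again: a lockup is detected. */
        assert(is_hardlockup());
        puts("ok");
        return 0;
    }
    ```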

    [dzickus@redhat.com: fix compile issue with param]
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Reviewed-by: Mandeep Singh Baines
    Signed-off-by: Michal Hocko
    Signed-off-by: Don Zickus
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michal Hocko
     

11 Feb, 2012

1 commit

  • Reflect the change in the soft and hard lockup thresholds and
    their relation to the frequency of the hrtimer and NMI events in
    the code comments. While at it, remove references to files that
    do not exist anymore.

    Signed-off-by: Fernando Luis Vazquez Cao
    Signed-off-by: Don Zickus
    Link: http://lkml.kernel.org/r/1328827342-6253-3-git-send-email-dzickus@redhat.com
    Signed-off-by: Ingo Molnar

    Fernando Luis Vázquez Cao
     

27 Jan, 2012

1 commit

  • rsyslog will display KERN_EMERG messages on a connected
    terminal. However, these messages are useless/undecipherable
    for a general user.

    For example, after a softlockup we get:

    Message from syslogd@intel-s3e37-04 at Jan 25 14:18:06 ...
    kernel:Stack:

    Message from syslogd@intel-s3e37-04 at Jan 25 14:18:06 ...
    kernel:Call Trace:

    Message from syslogd@intel-s3e37-04 at Jan 25 14:18:06 ...
    kernel:Code: ff ff a8 08 75 25 31 d2 48 8d 86 38 e0 ff ff 48 89
    d1 0f 01 c8 0f ae f0 48 8b 86 38 e0 ff ff a8 08 75 08 b1 01 4c 89 e0 0f 01 c9 ea 69 dd ff 4c 29 e8 48 89 c7 e8 0f bc da ff 49 89 c4 49 89

    This happens because the printk levels for these messages are
    incorrect. Only an informational message should be displayed on
    a terminal.

    I modified the printk levels for various messages in the kernel
    and tested the output by using the drivers/misc/lkdtm.c kernel
    modules (ie, softlockups, panics, hard lockups, etc.) and
    confirmed that the console output was still the same and that
    the output to the terminals was correct.

    For example, in the case of a softlockup we now see the much
    more informative:

    Message from syslogd@intel-s3e37-04 at Jan 25 10:18:06 ...
    BUG: soft lockup - CPU4 stuck for 60s!

    instead of the above confusing messages.

    AFAICT, the messages no longer have to be KERN_EMERG. In the most
    important case, a panic, we set console_verbose(). For the other,
    less severe cases, the correct data is still output to the console
    and /var/log/messages.

    Successfully tested by me using the drivers/misc/lkdtm.c module.
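    Why the level matters can be illustrated with a small model (this is
    not the rsyslog source; reaches_terminals() is a hypothetical stand-in
    for the daemon's emergency-broadcast rule, which typically pushes only
    level-0 messages to every logged-in terminal):

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* printk levels are encoded as a "<N>" prefix on the message;
     * KERN_EMERG is the highest severity and the only one a typical
     * syslog emergency rule broadcasts to user terminals. */
    #define KERN_EMERG "<0>"
    #define KERN_INFO  "<6>"

    static bool reaches_terminals(const char *msg)
    {
        return strncmp(msg, KERN_EMERG, 3) == 0;
    }

    int main(void)
    {
        const char *bug   = KERN_EMERG "BUG: soft lockup - CPU#4 stuck for 60s!";
        const char *trace = KERN_INFO  "Call Trace:";

        /* The headline BUG line is still broadcast; the demoted
         * Stack:/Call Trace:/Code: dumps go only to the console and log. */
        assert(reaches_terminals(bug));
        assert(!reaches_terminals(trace));
        puts("ok");
        return 0;
    }
    ```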

    Signed-off-by: Prarit Bhargava
    Cc: dzickus@redhat.com
    Cc: Linus Torvalds
    Cc: Andrew Morton
    Link: http://lkml.kernel.org/r/1327586134-11926-1-git-send-email-prarit@redhat.com
    Signed-off-by: Ingo Molnar

    Prarit Bhargava
     

01 Nov, 2011

1 commit

  • Fix compilation warnings for CONFIG_SYSCTL=n:

    fixed compilation warnings in case of disabled CONFIG_SYSCTL
    kernel/watchdog.c:483:13: warning: `watchdog_enable_all_cpus' defined but not used
    kernel/watchdog.c:500:13: warning: `watchdog_disable_all_cpus' defined but not used

    These functions are static and are used only in the sysctl handler,
    so move them inside #ifdef CONFIG_SYSCTL as well.
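    The pattern can be sketched in a standalone file (the macro name and
    function bodies are placeholders standing in for CONFIG_SYSCTL and the
    real watchdog helpers): with the feature disabled, static helpers whose
    only caller is feature code would trigger "defined but not used"
    warnings unless they live inside the same #ifdef as that caller.

    ```c
    #include <stdio.h>

    /* Stand-in for CONFIG_SYSCTL; comment this out to simulate a
     * CONFIG_SYSCTL=n build: the file still compiles with no
     * -Wunused-function warnings, because the helpers disappear
     * together with their only caller. */
    #define CONFIG_WATCHDOG_SYSCTL 1

    #ifdef CONFIG_WATCHDOG_SYSCTL
    /* placeholder bodies; the real functions start/stop the watchdog */
    static int watchdog_enable_all_cpus(void)  { return 1; }
    static int watchdog_disable_all_cpus(void) { return 0; }

    /* the sysctl handler is the only caller of the helpers above */
    static int proc_dowatchdog(int enable)
    {
        return enable ? watchdog_enable_all_cpus()
                      : watchdog_disable_all_cpus();
    }
    #endif

    int main(void)
    {
    #ifdef CONFIG_WATCHDOG_SYSCTL
        printf("%d %d\n", proc_dowatchdog(1), proc_dowatchdog(0));
    #endif
        return 0;
    }
    ```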

    Signed-off-by: Vasily Averin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vasily Averin
     

18 Sep, 2011

1 commit

  • When the watchdog thread exits it runs through the exit path with FIFO
    priority. There is no point in doing so. Switch back to SCHED_NORMAL
    before exiting.
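    A userspace analogue of the fix, assuming a Linux system (in the
    kernel the call is sched_setscheduler(current, SCHED_NORMAL, &param);
    SCHED_OTHER is the POSIX name for the normal class, and demoting to it
    needs no privileges, so this runs as any user):

    ```c
    #include <assert.h>
    #include <sched.h>
    #include <stdio.h>

    /* Drop the calling thread back to the normal scheduling class, as a
     * real-time (SCHED_FIFO) thread should do before running its exit
     * path. pid 0 means "the caller". */
    static int drop_to_normal(void)
    {
        struct sched_param param = { .sched_priority = 0 };
        return sched_setscheduler(0, SCHED_OTHER, &param);
    }

    int main(void)
    {
        assert(drop_to_normal() == 0);
        assert(sched_getscheduler(0) == SCHED_OTHER);
        puts("ok");
        return 0;
    }
    ```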

    Cc: Don Zickus
    Signed-off-by: Thomas Gleixner
    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1109121337461.2723@ionos
    Signed-off-by: Ingo Molnar

    Thomas Gleixner