19 May, 2020

1 commit

  • Force inlining and prevent instrumentation of all sorts by marking the
    functions which are invoked from low-level entry code with 'noinstr'.

    Split the irqflags tracking into two parts: one that does the heavy
    lifting while RCU is watching, and a final one that can be invoked after
    RCU is turned off.
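
    As a hedged illustration (roughly the shape the common entry code took
    with this series; exact signatures have varied across releases), note
    the forced inlining and the prepare/finish split around the point where
    RCU stops watching:

        static __always_inline void exit_to_user_mode(void)
        {
                instrumentation_begin();
                trace_hardirqs_on_prepare();
                /* heavy lifting: RCU is still watching here */
                lockdep_hardirqs_on_prepare(CALLER_ADDR0);
                instrumentation_end();

                user_enter_irqoff();    /* RCU stops watching */
                /* final, instrumentation-free part */
                lockdep_hardirqs_on(CALLER_ADDR0);
        }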

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Thomas Gleixner
    Reviewed-by: Alexandre Chartre
    Link: https://lkml.kernel.org/r/20200505134100.484532537@linutronix.de

    Peter Zijlstra

21 May, 2019

1 commit

  • Add SPDX license identifiers to all files which:

    - Have no license information of any form

    - Have EXPORT_.*_SYMBOL_GPL inside which was used in the
    initial scan/conversion to ignore the file

    These files fall under the project license, GPL v2 only. The resulting SPDX
    license identifier is:

    GPL-2.0-only
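
    In practice the identifier becomes the first line of each affected
    file, using the comment style the kernel settled on for each file type:

        // SPDX-License-Identifier: GPL-2.0-only        (.c files)

        /* SPDX-License-Identifier: GPL-2.0-only */     (headers)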

    Signed-off-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner

19 Oct, 2018

2 commits

  • Make the frequently used lockdep global variable debug_locks read-mostly.
    As debug_locks_silent is sometimes used together with debug_locks,
    it is also made read-mostly so that the two can be placed close together.

    With false cacheline sharing, a cacheline contention problem can occur
    depending on what gets put into the same cacheline as debug_locks.
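
    A minimal sketch of the resulting declarations in lib/debug_locks.c
    (modulo the comments around them):

        /* Tagged __read_mostly so both flags land in the read-mostly
         * data section, next to each other: */
        int debug_locks __read_mostly = 1;
        EXPORT_SYMBOL_GPL(debug_locks);

        int debug_locks_silent __read_mostly;
        EXPORT_SYMBOL_GPL(debug_locks_silent);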

    Signed-off-by: Waiman Long
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Will Deacon
    Link: http://lkml.kernel.org/r/1539913518-15598-2-git-send-email-longman@redhat.com
    Signed-off-by: Ingo Molnar

    Waiman Long
  • It was found that when debug_locks was turned off because of a problem
    found by the lockdep code, the system performance could drop quite
    significantly when the lock_stat code was also configured into the
    kernel. For instance, parallel kernel build time on a 4-socket x86-64
    server nearly doubled.

    Further analysis traced the slowdown back to the frequent calls to
    debug_locks_off() from the __lock_acquired() function, probably due to
    some inconsistent lockdep state with debug_locks off. The
    debug_locks_off() function did an unconditional atomic xchg to write a
    0 value into debug_locks, which had already been set to 0. This led to
    severe cacheline contention in the cacheline that held debug_locks. As
    debug_locks is referenced in quite a few different places in the
    kernel, this greatly slowed down system performance.

    To prevent that thrashing of the debug_locks cacheline, lock_acquired()
    and lock_contended() now check the state of debug_locks before
    proceeding. The debug_locks_off() function is also modified to check
    debug_locks before calling __debug_locks_off().
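
    A sketch of the reworked paths (simplified from lib/debug_locks.c and
    the lock_stat code; surrounding details trimmed):

        static inline int __debug_locks_off(void)
        {
                return xchg(&debug_locks, 0);   /* the expensive atomic */
        }

        int debug_locks_off(void)
        {
                /* Plain read first: skip the xchg when already off. */
                if (debug_locks && __debug_locks_off()) {
                        if (!debug_locks_silent) {
                                console_verbose();
                                return 1;
                        }
                }
                return 0;
        }

        void lock_acquired(struct lockdep_map *lock, unsigned long ip)
        {
                /* Bail out early instead of touching debug_locks' line. */
                if (unlikely(!lock_stat || !debug_locks))
                        return;
                /* ... */
        }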

    Signed-off-by: Waiman Long
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Will Deacon
    Link: http://lkml.kernel.org/r/1539913518-15598-1-git-send-email-longman@redhat.com
    Signed-off-by: Ingo Molnar

    Waiman Long

26 Jun, 2013

1 commit

  • Wound/wait mutexes are used when multiple lock acquisitions of a
    similar type can be done in an arbitrary order. The deadlock
    handling used here is called wait/wound in the RDBMS literature:
    the older task waits until it can acquire the contended lock; the
    younger task needs to back off and drop all the locks it is
    currently holding, i.e. the younger task is wounded.

    For full documentation please read Documentation/ww-mutex-design.txt.
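
    A hedged usage sketch (demo_ww_class and lock_pair are made up for
    illustration; the documentation above covers the full protocol):

        static DEFINE_WW_CLASS(demo_ww_class);

        static void lock_pair(struct ww_mutex *a, struct ww_mutex *b)
        {
                struct ww_acquire_ctx ctx;
                struct ww_mutex *held = NULL;   /* taken via the slow path */

                ww_acquire_init(&ctx, &demo_ww_class);
        retry:
                if (a != held && ww_mutex_lock(a, &ctx) == -EDEADLK) {
                        /* Wounded: drop what we hold, then sleep on the
                         * contended lock before restarting the sequence. */
                        if (held)
                                ww_mutex_unlock(held);
                        ww_mutex_lock_slow(a, &ctx);
                        held = a;
                        goto retry;
                }
                if (b != held && ww_mutex_lock(b, &ctx) == -EDEADLK) {
                        ww_mutex_unlock(a);
                        ww_mutex_lock_slow(b, &ctx);
                        held = b;
                        goto retry;
                }
                ww_acquire_done(&ctx);

                /* ... touch the data protected by a and b ... */

                ww_mutex_unlock(a);
                ww_mutex_unlock(b);
                ww_acquire_fini(&ctx);
        }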

    References: https://lwn.net/Articles/548909/
    Signed-off-by: Maarten Lankhorst
    Acked-by: Daniel Vetter
    Acked-by: Rob Clark
    Acked-by: Peter Zijlstra
    Cc: dri-devel@lists.freedesktop.org
    Cc: linaro-mm-sig@lists.linaro.org
    Cc: rostedt@goodmis.org
    Cc: daniel@ffwll.ch
    Cc: Linus Torvalds
    Cc: Andrew Morton
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/51C8038C.9000106@canonical.com
    Signed-off-by: Ingo Molnar

    Maarten Lankhorst

30 Nov, 2010

1 commit

  • This reverts commit e0fdace10e75dac67d906213b780ff1b1a4cc360.

    On-list discussion seems to suggest that the robustness fixes for printk
    make this unnecessary and DaveM has also agreed in person at Kernel Summit
    and on list.

    The main problem with this code is that once we hit a lockdep splat we
    keep oops_in_progress set forever. The console layer uses
    oops_in_progress with KMS to decide when it should show the oops
    instead of X, so it causes problems around suspend/resume time: a
    userspace resume can cause a console switch away from X, but only if
    oops_in_progress is set (which is what we want if an oops actually is
    in progress, but not because we had a lockdep splat two days prior).

    Cc: David S Miller
    Cc: Ingo Molnar
    Signed-off-by: Dave Airlie
    Signed-off-by: Linus Torvalds

    Dave Airlie

25 Feb, 2010

1 commit

  • Inspection is proving insufficient to catch all RCU misuses,
    which is understandable given that rcu_dereference() might be
    protected by any of four different flavors of RCU (RCU, RCU-bh,
    RCU-sched, and SRCU), and might also/instead be protected by any
    of a number of locking primitives. It is therefore time to
    enlist the aid of lockdep.

    This set of patches is inspired by earlier work by Peter
    Zijlstra and Thomas Gleixner, and takes the following approach:

    o Set up separate lockdep classes for RCU, RCU-bh, and RCU-sched.

    o Set up separate lockdep classes for each instance of SRCU.

    o Create primitives that check for being in an RCU read-side
    critical section. These return exact answers if lockdep is
    fully enabled, but if unsure, report being in an RCU read-side
    critical section. (We want to avoid false positives!)
    The primitives are:

    For RCU: rcu_read_lock_held(void)

    For RCU-bh: rcu_read_lock_bh_held(void)

    For RCU-sched: rcu_read_lock_sched_held(void)

    For SRCU: srcu_read_lock_held(struct srcu_struct *sp)

    o Add rcu_dereference_check(), which takes a second argument
    in which one places a boolean expression based on the above
    primitives and/or lockdep_is_held().

    o A new kernel configuration parameter, CONFIG_PROVE_RCU, enables
    rcu_dereference_check(). This depends on CONFIG_PROVE_LOCKING,
    and should be quite helpful during the transition period while
    CONFIG_PROVE_RCU-unaware patches are in flight.

    The existing rcu_dereference() primitive does no checking, but
    upcoming patches will change that.
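
    For illustration, a hedged read-side sketch (struct foo, gp and
    my_lock are hypothetical names):

        struct foo { int data; };
        static struct foo __rcu *gp;
        static DEFINE_SPINLOCK(my_lock);

        static struct foo *get_foo(void)
        {
                /* With CONFIG_PROVE_RCU this complains unless the caller
                 * is in an RCU read-side section or holds my_lock: */
                return rcu_dereference_check(gp,
                                rcu_read_lock_held() ||
                                lockdep_is_held(&my_lock));
        }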

    Signed-off-by: Paul E. McKenney
    Cc: laijs@cn.fujitsu.com
    Cc: dipankar@in.ibm.com
    Cc: mathieu.desnoyers@polymtl.ca
    Cc: josh@joshtriplett.org
    Cc: dvhltc@us.ibm.com
    Cc: niv@us.ibm.com
    Cc: peterz@infradead.org
    Cc: rostedt@goodmis.org
    Cc: Valdis.Kletnieks@vt.edu
    Cc: dhowells@redhat.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Paul E. McKenney

12 Apr, 2009

1 commit

  • Impact: provide useful missing info for developers

    Kernel taint can occur in several situations, such as warnings,
    loading of proprietary or staging modules, bad pages, etc...

    But when such a taint happens, a developer might still be working on
    the kernel, expecting that lockdep is still enabled. Yet a taint
    disables lockdep without ever warning about it.
    Such kernel behaviour doesn't really help kernel development.

    This patch adds this missing warning.

    Since the taint is set most of the time after the main message that
    explains the real source of the issue, it seems safe to warn about it
    inside add_taint() so that it appears last, without obscuring the main
    information.

    v2: Use a generic helper to disable lockdep instead of an
    open coded xchg().
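
    A sketch of the resulting shape of add_taint() (simplified; the flag
    handling and exact message may differ):

        void add_taint(unsigned flag)
        {
                /*
                 * __debug_locks_off() is the generic helper from v2: it
                 * returns the old value of debug_locks, so the warning
                 * prints only on the on -> off transition.
                 */
                if (__debug_locks_off())
                        printk(KERN_WARNING
                               "Disabling lock debugging due to kernel taint\n");

                set_bit(flag, &tainted_mask);
        }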

    Signed-off-by: Frederic Weisbecker
    Cc: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker

04 Jul, 2006

1 commit

  • Generic lock debugging:

    - generalized lock debugging framework. For example, a bug in one lock
    subsystem turns off debugging in all lock subsystems.

    - got rid of the caller address passing (__IP__/__IP_DECL__/etc.) from
    the mutex/rtmutex debugging code: it caused way too much prototype
    hackery, and lockdep will give the same information anyway.

    - ability to do silent tests

    - check lock freeing in vfree too.

    - more fine-grained debugging options, to allow distributions to
    turn off more expensive debugging features.

    There's no separate 'held mutexes' list anymore - but there's a 'held locks'
    stack within lockdep, which unifies deadlock detection across all lock
    classes. (this is independent of the lockdep validation stuff - lockdep first
    checks whether we are holding a lock already)

    Here are the current debugging options:

    CONFIG_DEBUG_MUTEXES=y
    CONFIG_DEBUG_LOCK_ALLOC=y

    which do:

    config DEBUG_MUTEXES
    bool "Mutex debugging, basic checks"

    config DEBUG_LOCK_ALLOC
    bool "Detect incorrect freeing of live mutexes"
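
    The shared entry point that all lock subsystems report bugs through
    looks roughly like this (a simplified sketch of the macro in
    include/linux/debug_locks.h):

        /* The first bug anywhere turns lock debugging off globally;
         * debug_locks_silent suppresses the WARN for the self-tests. */
        #define DEBUG_LOCKS_WARN_ON(c)                                \
        ({                                                            \
                int __ret = 0;                                        \
                                                                      \
                if (unlikely(c)) {                                    \
                        if (debug_locks_off() && !debug_locks_silent) \
                                WARN_ON(1);                           \
                        __ret = 1;                                    \
                }                                                     \
                __ret;                                                \
        })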

    Signed-off-by: Ingo Molnar
    Signed-off-by: Arjan van de Ven
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar