04 Jan, 2015

1 commit


13 Jan, 2014

1 commit

  • Currently all _bh_ lock functions do two preempt_count operations:

    local_bh_disable();
    preempt_disable();

    and for the unlock:

    preempt_enable_no_resched();
    local_bh_enable();

    Since it's a waste of perfectly good cycles to modify the same variable
    twice when you can do it in one go, use the new
    __local_bh_{dis,en}able_ip() functions, which allow us to provide a
    preempt_count value to add/sub.

    So define SOFTIRQ_LOCK_OFFSET as the offset a _bh_ lock needs to
    add/sub so that both operations are done in one go.

    As a bonus it gets rid of the preempt_enable_no_resched() usage.

    This reduces 1000 loops of:

    spin_lock_bh(&bh_lock);
    spin_unlock_bh(&bh_lock);

    from 53596 cycles to 51995 cycles. I didn't do enough measurements to
    say for absolutely sure that the result is significant, but the few
    runs I did for each suggest it is.

    Reviewed-by: Thomas Gleixner
    Signed-off-by: Peter Zijlstra
    Cc: jacob.jun.pan@linux.intel.com
    Cc: Mike Galbraith
    Cc: hpa@zytor.com
    Cc: Arjan van de Ven
    Cc: lenb@kernel.org
    Cc: rjw@rjwysocki.net
    Cc: rui.zhang@intel.com
    Cc: Linus Torvalds
    Cc: Andrew Morton
    Link: http://lkml.kernel.org/r/20131119151338.GF3694@twins.programming.kicks-ass.net
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
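
    A minimal sketch of what this looks like for the lock/unlock bodies,
    with the lockdep annotations omitted (illustrative only; names follow
    the kernel tree of that time):

    /* Before: two separate preempt_count updates per lock/unlock. */
    static inline void __raw_spin_lock_bh(raw_spinlock_t *lock)
    {
            local_bh_disable();   /* preempt_count += SOFTIRQ_DISABLE_OFFSET */
            preempt_disable();    /* preempt_count += 1 */
            do_raw_spin_lock(lock);
    }

    /* After: one combined update through the new helper. */
    static inline void __raw_spin_lock_bh(raw_spinlock_t *lock)
    {
            __local_bh_disable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
            do_raw_spin_lock(lock);
    }

    static inline void __raw_spin_unlock_bh(raw_spinlock_t *lock)
    {
            do_raw_spin_unlock(lock);
            __local_bh_enable_ip(_RET_IP_, SOFTIRQ_LOCK_OFFSET);
    }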
     

20 Jun, 2013

1 commit


23 Mar, 2012

1 commit

  • Get rid of INLINE_SPIN_UNLOCK entirely, replacing it with
    UNINLINE_SPIN_UNLOCK, which has the inverse meaning.

    Whoever wants to change the default spinlock inlining behavior and
    uninline the spinlocks for some weird reason, such as spinlock
    debugging, paravirt etc., can now simply select UNINLINE_SPIN_UNLOCK.

    Original discussion at: https://lkml.org/lkml/2012/3/21/357

    Suggested-by: Linus Torvalds
    Signed-off-by: Raghavendra K T
    Cc: Linus Torvalds
    Cc: Ralf Baechle
    Cc: Chris Metcalf
    Cc: Chris Zankel
    Cc: linux-mips@linux-mips.org
    Link: http://lkml.kernel.org/r/20120322095502.30866.75756.sendpatchset@codeblue
    [ tidied up the changelog a bit ]
    Signed-off-by: Ingo Molnar

    Raghavendra K T
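
    Roughly, the effect is that the out-of-line unlock code is only built
    when UNINLINE_SPIN_UNLOCK is selected; a simplified sketch of the
    guards (the real ones live in linux/spinlock_api_smp.h and
    kernel/spinlock.c):

    /* Default: the unlock collapses to the inline body. */
    #ifndef CONFIG_UNINLINE_SPIN_UNLOCK
    #define _raw_spin_unlock(lock) __raw_spin_unlock(lock)
    #endif

    /* Configs that select UNINLINE_SPIN_UNLOCK (debugging, paravirt, ...)
     * get the out-of-line variant instead: */
    #ifdef CONFIG_UNINLINE_SPIN_UNLOCK
    void __lockfunc _raw_spin_unlock(raw_spinlock_t *lock)
    {
            __raw_spin_unlock(lock);
    }
    EXPORT_SYMBOL(_raw_spin_unlock);
    #endif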
     

15 Dec, 2009

4 commits

  • Make the name space hierarchy of locking functions consistent:
    raw_spin* -> _raw_spin* -> __raw_spin*

    No functional change.

    Signed-off-by: Thomas Gleixner
    Acked-by: Peter Zijlstra
    Acked-by: Ingo Molnar

    Thomas Gleixner
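
    A sketch of the resulting layering, with lockdep and irq details
    omitted (illustrative only):

    /* __raw_spin_lock(): static inline body in spinlock_api_smp.h */
    static inline void __raw_spin_lock(raw_spinlock_t *lock)
    {
            preempt_disable();
            /* lockdep acquire annotation goes here */
            do_raw_spin_lock(lock);
    }

    /* _raw_spin_lock(): entry point, out of line or inlined by config */
    void __lockfunc _raw_spin_lock(raw_spinlock_t *lock)
    {
            __raw_spin_lock(lock);
    }

    /* raw_spin_lock(): the public API on raw_spinlock_t */
    #define raw_spin_lock(lock)     _raw_spin_lock(lock)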
     
  • The name space hierarchy for the internal lock functions is now a bit
    backwards: raw_spin* functions map to _spin*, which use __spin*, while
    we would like to have _raw_spin* and __raw_spin*.

    _raw_spin* is already used by lock debugging, so rename those functions
    to do_raw_spin* to free up the _raw_spin* name space.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Acked-by: Peter Zijlstra
    Acked-by: Ingo Molnar

    Thomas Gleixner
     
  • Now that the raw_spin name space is freed up, we can implement
    raw_spinlock and the related functions, which are used to annotate the
    locks that are not converted to sleeping spinlocks in preempt-rt.

    A side effect is that only such locks can be used with the low-level
    lock functions which circumvent lockdep.

    For !rt, spin_* functions are mapped to the raw_spin* implementations.

    Signed-off-by: Thomas Gleixner
    Acked-by: Peter Zijlstra
    Acked-by: Ingo Molnar

    Thomas Gleixner
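
    For illustration (the lock names here are hypothetical), a lock that
    must stay a true spinning lock even on preempt-rt is declared and used
    via the raw_* API, while ordinary spinlock_t locks may be converted to
    sleeping locks:

    static DEFINE_RAW_SPINLOCK(hw_clock_lock);  /* never becomes a sleeping lock */
    static DEFINE_SPINLOCK(list_lock);          /* may sleep on preempt-rt */

    static void update_hw_clock(void)
    {
            unsigned long flags;

            raw_spin_lock_irqsave(&hw_clock_lock, flags);
            /* program the hardware */
            raw_spin_unlock_irqrestore(&hw_clock_lock, flags);
    }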
     
  • Move the rwlock smp api defines and functions into a separate header
    file. Makes the -rt selection simpler and less intrusive.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Acked-by: Peter Zijlstra
    Acked-by: Ingo Molnar

    Thomas Gleixner
     

14 Nov, 2009

1 commit

  • commit 892a7c67 (locking: Allow arch-inlined spinlocks) implements the
    selection of which lock functions are inlined based on defines in
    arch/.../spinlock.h: #define __always_inline__LOCK_FUNCTION

    Despite the name __always_inline__*, the lock functions can be built
    out of line depending on config options. Also, if the arch does not set
    some inline defines, the generic code might set them, again depending
    on config options.

    This makes it unnecessarily hard to figure out when and which lock
    functions are inlined. Aside from that, it makes it much harder and
    messier for -rt to manipulate the lock functions.

    Convert the inlining decision to CONFIG switches. Each lock function
    is inlined depending on CONFIG_INLINE_*. The configs implement the
    existing dependencies. The architecture code can select ARCH_INLINE_*
    to signal that it wants the corresponding lock function inlined.
    ARCH_INLINE_* is necessary as Kconfig ignores "depends on"
    restrictions when a config element is selected.

    No functional change.

    Signed-off-by: Thomas Gleixner
    LKML-Reference:
    Acked-by: Heiko Carstens
    Reviewed-by: Ingo Molnar
    Acked-by: Peter Zijlstra

    Thomas Gleixner
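
    Schematically, the decision now looks like this (simplified sketch;
    the CONFIG_INLINE_* symbols are derived in Kconfig from ARCH_INLINE_*
    plus the usual debug/preempt dependencies, and the function names are
    those of that time):

    /* linux/spinlock_api_smp.h: collapse to the inline body if allowed */
    #ifdef CONFIG_INLINE_SPIN_LOCK
    #define _spin_lock(lock) __spin_lock(lock)
    #endif

    /* kernel/spinlock.c: otherwise build the out-of-line version */
    #ifndef CONFIG_INLINE_SPIN_LOCK
    void __lockfunc _spin_lock(spinlock_t *lock)
    {
            __spin_lock(lock);
    }
    EXPORT_SYMBOL(_spin_lock);
    #endif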
     

01 Sep, 2009

3 commits

  • For !DEBUG_SPINLOCK && !PREEMPT && SMP the spin_unlock()
    functions were always inlined by using special defines which
    would call the __raw* functions.

    The out-of-line variants for these functions would be generated
    anyway.

    Use the new per unlock/locking variant mechanism to force
    inlining of the unlock functions like before. This is not a
    functional change; we just get rid of one additional way to
    force inlining.

    Signed-off-by: Heiko Carstens
    Acked-by: Peter Zijlstra
    Cc: Arnd Bergmann
    Cc: Nick Piggin
    Cc: Martin Schwidefsky
    Cc: Horst Hartmann
    Cc: Christian Ehrhardt
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: David Miller
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: Geert Uytterhoeven
    Cc: Roman Zippel
    Cc:
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Heiko Carstens
     
  • This allows an architecture to specify, per lock variant, whether the
    locking code should be kept out of line or inlined.

    If an architecture wants out-of-line locking code, no change is
    needed. To force inlining of e.g. spin_lock(), the line:

    #define __always_inline__spin_lock

    needs to be added to arch/<arch>/include/asm/spinlock.h

    If CONFIG_DEBUG_SPINLOCK or CONFIG_GENERIC_LOCKBREAK is
    defined, the per-architecture defines are (partly) ignored and
    out-of-line spinlock code will still be generated.

    Signed-off-by: Heiko Carstens
    Acked-by: Peter Zijlstra
    Cc: Arnd Bergmann
    Cc: Nick Piggin
    Cc: Martin Schwidefsky
    Cc: Horst Hartmann
    Cc: Christian Ehrhardt
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: David Miller
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: Geert Uytterhoeven
    Cc: Roman Zippel
    Cc:
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Heiko Carstens
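
    As an illustration of the mechanism, an architecture that wants
    spin_lock() and spin_unlock_irq() inlined would add the defines to its
    spinlock.h, and the generic header picks the inline body accordingly
    (simplified sketch; the real checks also honour CONFIG_DEBUG_SPINLOCK
    and CONFIG_GENERIC_LOCKBREAK):

    /* arch/<arch>/include/asm/spinlock.h */
    #define __always_inline__spin_lock
    #define __always_inline__spin_unlock_irq

    /* generic header (sketch): pick the inline body when requested */
    #ifdef __always_inline__spin_lock
    #define _spin_lock(lock) __spin_lock(lock)
    #endif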
     
  • Move the spinlock function bodies to a header file by creating a
    static inline version of each variant. Use the inline version
    in the out-of-line code.

    This shouldn't make any difference besides that the spinlock
    code can now be used to generate inlined spinlock code.

    Signed-off-by: Heiko Carstens
    Acked-by: Arnd Bergmann
    Acked-by: Peter Zijlstra
    Cc: Nick Piggin
    Cc: Martin Schwidefsky
    Cc: Horst Hartmann
    Cc: Christian Ehrhardt
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: David Miller
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: Geert Uytterhoeven
    Cc: Roman Zippel
    Cc:
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Heiko Carstens
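
    A sketch of the pattern, with the lockdep calls as they looked at the
    time:

    /* include/linux/spinlock_api_smp.h: the body, available for inlining */
    static inline void __spin_lock(spinlock_t *lock)
    {
            preempt_disable();
            spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
            LOCK_CONTENDED(lock, _raw_spin_trylock, _raw_spin_lock);
    }

    /* kernel/spinlock.c: the out-of-line variant just wraps it */
    void __lockfunc _spin_lock(spinlock_t *lock)
    {
            __spin_lock(lock);
    }
    EXPORT_SYMBOL(_spin_lock);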
     

11 Aug, 2008

1 commit

  • Expose the new lock protection lock.

    This can be used to annotate places where we take multiple locks of the
    same class and avoid deadlocks by always taking another (top-level) lock
    first.

    NOTE: we're still bound to the MAX_LOCK_DEPTH (48) limit.

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
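
    Example usage (hypothetical structure and lock names): take the
    top-level protection lock first, then any number of locks of the same
    class under it without lockdep flagging recursive locking:

    spin_lock(&dev->channels_lock);         /* the protection lock */

    for (i = 0; i < dev->nr_channels; i++)
            spin_lock_nest_lock(&dev->channel[i].lock, &dev->channels_lock);

    /* ... operate on all channels atomically ... */

    for (i = dev->nr_channels - 1; i >= 0; i--)
            spin_unlock(&dev->channel[i].lock);
    spin_unlock(&dev->channels_lock);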
     

26 Nov, 2006

1 commit

  • Introduce spin_lock_irqsave_nested(); implementation from:
    http://lkml.org/lkml/2006/6/1/122
    Patch from:
    http://lkml.org/lkml/2006/9/13/258

    [akpm@osdl.org: two compile fixes]
    Signed-off-by: Arjan van de Ven
    Signed-off-by: Jiri Kosina
    Signed-off-by: Peter Zijlstra
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arjan van de Ven
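
    Example usage (hypothetical locks): tell lockdep that taking a second
    lock of the same class is intentional nesting, while still saving and
    restoring the interrupt flags:

    unsigned long pflags, cflags;

    spin_lock_irqsave(&parent->lock, pflags);
    spin_lock_irqsave_nested(&child->lock, cflags, SINGLE_DEPTH_NESTING);

    /* ... both objects are now stable ... */

    spin_unlock_irqrestore(&child->lock, cflags);
    spin_unlock_irqrestore(&parent->lock, pflags);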
     

30 Sep, 2006

1 commit

  • The lock annotations used on spinlocks and rwlocks currently use
    __{acquires,releases}(spinlock_t) and __{acquires,releases}(rwlock_t),
    respectively. This loses the information of which lock actually got
    acquired or released, and assumes a different type for the parameter of
    __acquires and __releases than the rest of the kernel. While the current
    implementations of __acquires and __releases throw away their argument,
    this will not always remain the case. Change this to use the lock
    parameter instead, to preserve this information and increase consistency in
    usage of __acquires and __releases.

    Signed-off-by: Josh Triplett
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Josh Triplett
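
    For illustration (struct my_dev and the functions are hypothetical),
    the annotation now names the lock instance rather than its type:

    /* was: __acquires(spinlock_t) / __releases(spinlock_t) */
    void my_dev_lock(struct my_dev *dev) __acquires(&dev->lock)
    {
            spin_lock(&dev->lock);
    }

    void my_dev_unlock(struct my_dev *dev) __releases(&dev->lock)
    {
            spin_unlock(&dev->lock);
    }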
     

04 Jul, 2006

1 commit


11 Sep, 2005

1 commit

  • This patch (written by me and also containing many suggestions from
    Arjan van de Ven) does a major cleanup of the spinlock code. It does
    the following things:

    - consolidates and enhances the spinlock/rwlock debugging code

    - simplifies the asm/spinlock.h files

    - encapsulates the raw spinlock type and moves generic spinlock
    features (such as ->break_lock) into the generic code.

    - cleans up the spinlock code hierarchy to get rid of the spaghetti.

    Most notably there's now only a single variant of the debugging code,
    located in lib/spinlock_debug.c. (previously we had one SMP debugging
    variant per architecture, plus a separate generic one for UP builds)

    Also, I've enhanced the rwlock debugging facility: it will now track
    write-owners. There is new spinlock-owner/CPU-tracking on SMP builds too.
    All locks have lockup detection now, which will work for both soft and hard
    spin/rwlock lockups.

    The arch-level include files now only contain the minimally necessary
    subset of the spinlock code - all the rest that can be generalized now
    lives in the generic headers:

    include/asm-i386/spinlock_types.h | 16
    include/asm-x86_64/spinlock_types.h | 16

    I have also split up the various spinlock variants into separate files,
    making it easier to see which does what. The new layout is:

    SMP                          |  UP
    -----------------------------|-----------------------------------
    asm/spinlock_types_smp.h     |  linux/spinlock_types_up.h
    linux/spinlock_types.h       |  linux/spinlock_types.h
    asm/spinlock_smp.h           |  linux/spinlock_up.h
    linux/spinlock_api_smp.h     |  linux/spinlock_api_up.h
    linux/spinlock.h             |  linux/spinlock.h

    /*
    * here's the role of the various spinlock/rwlock related include files:
    *
    * on SMP builds:
    *
    * asm/spinlock_types.h: contains the raw_spinlock_t/raw_rwlock_t and the
    * initializers
    *
    * linux/spinlock_types.h:
    * defines the generic type and initializers
    *
    * asm/spinlock.h: contains the __raw_spin_*()/etc. lowlevel
    * implementations, mostly inline assembly code
    *
    * (also included on UP-debug builds:)
    *
    * linux/spinlock_api_smp.h:
    * contains the prototypes for the _spin_*() APIs.
    *
    * linux/spinlock.h: builds the final spin_*() APIs.
    *
    * on UP builds:
    *
    * linux/spinlock_types_up.h:
    * contains the generic, simplified UP spinlock type.
    * (which is an empty structure on non-debug builds)
    *
    * linux/spinlock_types.h:
    * defines the generic type and initializers
    *
    * linux/spinlock_up.h:
    * contains the __raw_spin_*()/etc. version of UP
    * builds. (which are NOPs on non-debug, non-preempt
    * builds)
    *
    * (included on UP-non-debug builds:)
    *
    * linux/spinlock_api_up.h:
    * builds the _spin_*() APIs.
    *
    * linux/spinlock.h: builds the final spin_*() APIs.
    */

    All SMP and UP architectures are converted by this patch.

    arm, i386, ia64, ppc, ppc64, s390/s390x and x64 were build-tested via
    cross-compilers. m32r, mips, sh and sparc have not been tested yet, but
    should be mostly fine.

    From: Grant Grundler

    Booted and lightly tested on a500-44 (64-bit, SMP kernel, dual CPU).
    Builds 32-bit SMP kernel (not booted or tested). I did not try to build
    non-SMP kernels. That should be trivial to fix up later if necessary.

    I converted bit ops atomic_hash lock to raw_spinlock_t. Doing so avoids
    some ugly nesting of linux/*.h and asm/*.h files. Those particular locks
    are well tested and contained entirely inside arch specific code. I do NOT
    expect any new issues to arise with them.

    If someone does ever need to use debug/metrics with them, then they will
    need to unravel this hairball between spinlocks, atomic ops, and bit ops
    that exist only because parisc has exactly one atomic instruction: LDCW
    (load and clear word).

    From: "Luck, Tony"

    ia64 fix

    Signed-off-by: Ingo Molnar
    Signed-off-by: Arjan van de Ven
    Signed-off-by: Grant Grundler
    Cc: Matthew Wilcox
    Signed-off-by: Hirokazu Takata
    Signed-off-by: Mikael Pettersson
    Signed-off-by: Benoit Boissinot
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar