15 Dec, 2009

5 commits

  • Make the name space hierarchy of locking functions consistent:
    raw_spin* -> _raw_spin* -> __raw_spin*

    No functional change.

    Signed-off-by: Thomas Gleixner
    Acked-by: Peter Zijlstra
    Acked-by: Ingo Molnar

    Thomas Gleixner
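
    A minimal sketch of the layering this series establishes (simplified
    from include/linux/spinlock.h and spinlock_api_smp.h; the real code
    varies with config options):

        /* API layer (include/linux/spinlock.h) */
        #define raw_spin_lock(lock)     _raw_spin_lock(lock)

        /* middle layer; out of line in kernel/spinlock.c unless inlined */
        void __lockfunc _raw_spin_lock(raw_spinlock_t *lock)
        {
                __raw_spin_lock(lock);
        }

        /* lowest generic layer (spinlock_api_smp.h) */
        static inline void __raw_spin_lock(raw_spinlock_t *lock)
        {
                preempt_disable();
                spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
                LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock);
        }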
     
  • The name space hierarchy for the internal lock functions is now a bit
    backwards. raw_spin* functions map to _spin*, which use __spin*, while
    we would like to have _raw_spin* and __raw_spin*.

    _raw_spin* is already used by lock debugging, so rename those functions
    to do_raw_spin* to free up the _raw_spin* name space.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Acked-by: Peter Zijlstra
    Acked-by: Ingo Molnar

    Thomas Gleixner
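
    A sketch of the renamed debugging entry point (simplified from
    lib/spinlock_debug.c, using the names this series ends up with):

        void do_raw_spin_lock(raw_spinlock_t *lock)  /* was _raw_spin_lock() */
        {
                debug_spin_lock_before(lock);
                if (unlikely(!arch_spin_trylock(&lock->raw_lock)))
                        __spin_lock_debug(lock);     /* contention slow path */
                debug_spin_lock_after(lock);
        }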
     
  • Now that the raw_spin name space is freed up, we can implement
    raw_spinlock and the related functions, which are used to annotate the
    locks that are not converted to sleeping spinlocks in preempt-rt.

    A side effect is that only such locks can be used with the low-level
    lock functions which circumvent lockdep.

    For !rt, spin_* functions are mapped to the raw_spin* implementations.

    Signed-off-by: Thomas Gleixner
    Acked-by: Peter Zijlstra
    Acked-by: Ingo Molnar

    Thomas Gleixner
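
    A minimal sketch of the resulting mapping (simplified from
    include/linux/spinlock_types.h and spinlock.h): spinlock_t wraps a
    raw_spinlock_t, and for !rt the spin_* operations just forward:

        typedef struct spinlock {
                struct raw_spinlock rlock;  /* stays a spinning lock on -rt */
        } spinlock_t;

        static inline void spin_lock(spinlock_t *lock)
        {
                raw_spin_lock(&lock->rlock);
        }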
     
  • Name space cleanup. No functional change.

    Signed-off-by: Thomas Gleixner
    Acked-by: Peter Zijlstra
    Acked-by: David S. Miller
    Acked-by: Ingo Molnar
    Cc: linux-arch@vger.kernel.org

    Thomas Gleixner
     
  • Separate spin_lock and rw_lock functions. Preempt-RT needs to exclude
    the rw_lock functions from being compiled. The reordering makes it
    possible to do that with a single #ifdef.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Acked-by: Peter Zijlstra
    Acked-by: Ingo Molnar

    Thomas Gleixner
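
    The resulting file layout, as a rough sketch (the CONFIG_PREEMPT_RT
    guard is illustrative; it is applied in the -rt tree, not here):

        /* kernel/spinlock.c after the reordering */

        /* ... all spin_lock function variants grouped first ... */

        #ifndef CONFIG_PREEMPT_RT       /* single cut point for -rt */
        /* ... all rw_lock function variants grouped here ... */
        #endif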
     

14 Nov, 2009

2 commits

  • With the Kconfig-based inline decisions we can remove extra #ifdefs in
    kernel/spinlock.c by creating the complex lockbreak functions as
    inlines which are inserted into the non-inlined lock functions.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Acked-by: Heiko Carstens
    Reviewed-by: Ingo Molnar
    Acked-by: Peter Zijlstra

    Thomas Gleixner
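
    A sketch of the pattern, with a hypothetical helper name (the real
    code is generated by the BUILD_LOCK_OPS() macro in kernel/spinlock.c):

        /* the complex lockbreak loop becomes a static inline... */
        static inline void __spin_lock_lockbreak(spinlock_t *lock)
        {
                for (;;) {
                        preempt_disable();
                        if (likely(_raw_spin_trylock(lock)))
                                break;
                        preempt_enable();
                        while (spin_is_locked(lock))
                                cpu_relax();
                }
        }

        /* ...which is inserted into the non-inlined lock function */
        void __lockfunc _spin_lock(spinlock_t *lock)
        {
                __spin_lock_lockbreak(lock);
        }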
     
  • commit 892a7c67 (locking: Allow arch-inlined spinlocks) implements the
    selection of which lock functions are inlined based on defines in
    arch/.../spinlock.h:

    #define __always_inline__LOCK_FUNCTION

    Despite the name __always_inline__*, the lock functions can be built
    out of line depending on config options. Also, if the arch does not
    set some inline defines, the generic code might set them, again
    depending on config options.

    This makes it unnecessarily hard to figure out when and which lock
    functions are inlined. Aside from that, it makes it much harder and
    messier for -rt to manipulate the lock functions.

    Convert the inlining decision to CONFIG switches. Each lock function
    is inlined depending on CONFIG_INLINE_*. The configs implement the
    existing dependencies. The architecture code can select ARCH_INLINE_*
    to signal that it wants the corresponding lock function inlined.
    ARCH_INLINE_* is necessary as Kconfig ignores "depends on"
    restrictions when a config element is selected.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Acked-by: Heiko Carstens
    Reviewed-by: Ingo Molnar
    Acked-by: Peter Zijlstra

    Thomas Gleixner
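
    A sketch of the resulting mechanism (simplified from
    include/linux/spinlock_api_smp.h; the CONFIG_INLINE_* options live in
    kernel/Kconfig.locks and depend on the ARCH_INLINE_* selections):

        #ifdef CONFIG_INLINE_SPIN_LOCK
        #define _spin_lock(lock) __spin_lock(lock)  /* use the inline body */
        #endif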
     

01 Sep, 2009

2 commits

  • This allows an architecture to specify, per lock variant, whether the
    locking code should be kept out-of-line or inlined.

    If an architecture wants out-of-line locking code, no change is
    needed. To force inlining of e.g. spin_lock() the line:

    #define __always_inline__spin_lock

    needs to be added to arch/<arch>/include/asm/spinlock.h

    If CONFIG_DEBUG_SPINLOCK or CONFIG_GENERIC_LOCKBREAK is defined, the
    per-architecture defines are (partly) ignored and out-of-line spinlock
    code is still generated.

    Signed-off-by: Heiko Carstens
    Acked-by: Peter Zijlstra
    Cc: Arnd Bergmann
    Cc: Nick Piggin
    Cc: Martin Schwidefsky
    Cc: Horst Hartmann
    Cc: Christian Ehrhardt
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: David Miller
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: Geert Uytterhoeven
    Cc: Roman Zippel
    Signed-off-by: Ingo Molnar

    Heiko Carstens
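
    The mechanism, roughly (simplified from
    include/linux/spinlock_api_smp.h as of this commit):

        /* arch/<arch>/include/asm/spinlock.h opts in per lock variant: */
        #define __always_inline__spin_lock

        /* ...which makes the generic header pick the inline body: */
        #ifdef __always_inline__spin_lock
        #define _spin_lock(lock) __spin_lock(lock)
        #endif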
     
  • Move the spinlock function bodies to a header file by creating a
    static inline version of each variant. Use the inline version in the
    out-of-line code.

    This shouldn't make any difference, beyond the fact that the spinlock
    code can now be used to generate inlined spinlock code.

    Signed-off-by: Heiko Carstens
    Acked-by: Arnd Bergmann
    Acked-by: Peter Zijlstra
    Cc: Nick Piggin
    Cc: Martin Schwidefsky
    Cc: Horst Hartmann
    Cc: Christian Ehrhardt
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: David Miller
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: Geert Uytterhoeven
    Cc: Roman Zippel
    Signed-off-by: Ingo Molnar

    Heiko Carstens
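
    A sketch of the split (simplified from include/linux/spinlock_api_smp.h
    and kernel/spinlock.c):

        /* header: the function body becomes a static inline */
        static inline void __spin_lock(spinlock_t *lock)
        {
                preempt_disable();
                spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
                LOCK_CONTENDED(lock, _raw_spin_trylock, _raw_spin_lock);
        }

        /* kernel/spinlock.c: the out-of-line version just wraps it */
        void __lockfunc _spin_lock(spinlock_t *lock)
        {
                __spin_lock(lock);
        }
        EXPORT_SYMBOL(_spin_lock);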
     

03 Apr, 2009

2 commits

  • Pass the original flags to rwlock arch-code, so that it can re-enable
    interrupts if implemented for that architecture.

    Initially, make __raw_read_lock_flags and __raw_write_lock_flags
    stubs that just do the same thing as the non-flags variants.

    Signed-off-by: Petr Tesarik
    Signed-off-by: Robin Holt
    Acked-by: Peter Zijlstra
    Acked-by: Ingo Molnar
    Cc: "Luck, Tony"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Robin Holt
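
    The stubs are one-line fallbacks of this shape (added per
    architecture):

        /* same as the non-flags variants until an arch provides a
         * version that re-enables interrupts while spinning */
        #define __raw_read_lock_flags(lock, flags)  __raw_read_lock(lock)
        #define __raw_write_lock_flags(lock, flags) __raw_write_lock(lock)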
     
  • SGI has observed that on large systems, interrupts are not serviced
    for a long period of time while waiting for an rwlock. The following
    patch series re-enables IRQs while waiting for the lock, resembling
    the code which already exists for spinlocks.

    I implemented only the ia64 version, because the patch adds some
    overhead to the fast path. I assume there is currently no demand for
    this on other architectures, because those systems are not as large.
    Of course, the possibility of implementing raw_{read|write}_lock_flags
    for any architecture is still there.

    This patch:

    The new macro LOCK_CONTENDED_FLAGS expands to the correct
    implementation depending on the config options, so that IRQs are
    re-enabled when possible, but remain disabled if CONFIG_LOCKDEP is
    set.

    Signed-off-by: Petr Tesarik
    Signed-off-by: Robin Holt
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Cc: "Luck, Tony"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Robin Holt
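
    A sketch of the macro's shape (simplified from
    include/linux/lockdep.h; the exact config guards differ):

        #ifdef CONFIG_LOCKDEP
        /* lockdep needs IRQs to stay off: use the plain contended path */
        #define LOCK_CONTENDED_FLAGS(lock, try, lockop, lockfl, flags) \
                LOCK_CONTENDED(lock, try, lockop)
        #else
        /* no lockdep: the flags-aware slow path may re-enable IRQs */
        #define LOCK_CONTENDED_FLAGS(lock, try, lockop, lockfl, flags) \
                lockfl(lock, flags)
        #endif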
     

13 Aug, 2008

1 commit

  • fix:

    WARNING: EXPORT_SYMBOL(foo); should immediately follow its function/variable
    #46: FILE: kernel/spinlock.c:326:
    +EXPORT_SYMBOL(_spin_lock_nest_lock);

    total: 0 errors, 1 warnings, 26 lines checked

    Signed-off-by: Andrew Morton
    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Andrew Morton
     

11 Aug, 2008

1 commit

  • Expose the new lock protection lock.

    This can be used to annotate places where we take multiple locks of the
    same class and avoid deadlocks by always taking another (top-level) lock
    first.

    NOTE: we're still bound to the MAX_LOCK_DEPTH (48) limit.

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
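
    Typical usage, sketched with hypothetical structures: take the
    top-level lock first, then annotate each same-class child lock
    against it so lockdep accepts the nesting:

        spin_lock(&parent->lock);       /* the protection lock */
        list_for_each_entry(child, &parent->children, list)
                spin_lock_nest_lock(&child->lock, &parent->lock);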
     

30 Jan, 2008

1 commit

  • The break_lock data structure and code for spinlocks is quite nasty.
    Not only does it double the size of a spinlock, but it also changes
    locking to a potentially less optimal trylock.

    Put all of that under CONFIG_GENERIC_LOCKBREAK, and introduce a
    __raw_spin_is_contended that uses the lock data itself to determine whether
    there are waiters on the lock, to be used if CONFIG_GENERIC_LOCKBREAK is
    not set.

    Rename need_lockbreak to spin_needbreak, make it use spin_is_contended to
    decouple it from the spinlock implementation, and make it typesafe (rwlocks
    do not have any need_lockbreak sites -- why do they even get bloated up
    with that break_lock then?).

    Signed-off-by: Nick Piggin
    Signed-off-by: Ingo Molnar
    Signed-off-by: Thomas Gleixner

    Nick Piggin
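
    A sketch of the result (simplified from include/linux/spinlock.h as
    of this commit):

        #ifdef CONFIG_GENERIC_LOCKBREAK
        #define spin_is_contended(lock) ((lock)->break_lock)
        #else
        #define spin_is_contended(lock) \
                __raw_spin_is_contended(&(lock)->raw_lock)
        #endif

        static inline int spin_needbreak(spinlock_t *lock)
        {
        #ifdef CONFIG_PREEMPT
                return spin_is_contended(lock);
        #else
                return 0;
        #endif
        }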
     

26 Nov, 2006

1 commit

  • Introduce spin_lock_irqsave_nested(); implementation from:
    http://lkml.org/lkml/2006/6/1/122
    Patch from:
    http://lkml.org/lkml/2006/9/13/258

    [akpm@osdl.org: two compile fixes]
    Signed-off-by: Arjan van de Ven
    Signed-off-by: Jiri Kosina
    Signed-off-by: Peter Zijlstra
    Acked-by: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arjan van de Ven
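
    Typical usage, sketched with hypothetical lock names: when a second
    lock of the same lock class is taken with interrupts saved, the
    nesting level tells lockdep this is not a recursive deadlock:

        unsigned long flags;

        spin_lock_irqsave_nested(&child->lock, flags, SINGLE_DEPTH_NESTING);
        /* ... critical section ... */
        spin_unlock_irqrestore(&child->lock, flags);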