19 May, 2010
1 commit
-
Currently, we can hit a nasty case with optimistic
spinning on mutexes:

CPU A tries to take a mutex, while holding the BKL
CPU B tries to take the BKL while holding the mutex

This looks like an AB-BA scenario but, in practice, it is
allowed and happens due to the auto-release-on-schedule()
nature of the BKL.

In that case, the optimistic spinning code can get us
into a situation where instead of going to sleep, A
will spin waiting for B, who is spinning waiting for
A, and the only way out of that loop is the
need_resched() test in mutex_spin_on_owner().

This patch fixes it by completely disabling spinning
if we own the BKL. This adds one more detail to the
extensive list of reasons why it's a bad idea for
kernel code to be holding the BKL.

Signed-off-by: Tony Breeds
Acked-by: Linus Torvalds
Acked-by: Peter Zijlstra
Cc: Benjamin Herrenschmidt
Cc:
LKML-Reference:
[ added an unlikely() attribute to the branch ]
Signed-off-by: Ingo Molnar
03 Dec, 2009
1 commit
-
Introduce CONFIG_MUTEX_SPIN_ON_OWNER so that we can centralize
in a single place the conditions that determine its definition
and use.

Signed-off-by: Frederic Weisbecker
Acked-by: Peter Zijlstra
LKML-Reference:
Signed-off-by: Ingo Molnar
Cc: Peter Zijlstra
11 Jun, 2009
2 commits
-
Conflicts:
arch/x86/kernel/irqinit.c
arch/x86/kernel/irqinit_64.c
arch/x86/kernel/traps.c
arch/x86/mm/fault.c
include/linux/sched.h
kernel/exit.c
-
* 'locking-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
spinlock: Add missing __raw_spin_lock_flags() stub for UP
mutex: add atomic_dec_and_mutex_lock(), fix
locking, rtmutex.c: Documentation cleanup
mutex: add atomic_dec_and_mutex_lock()
11 May, 2009
1 commit
-
Merge reason: sched/core was on .30-rc1 before, update to latest fixes
Signed-off-by: Ingo Molnar
06 May, 2009
1 commit
-
Merge reason: we moved a mutex.h commit that originated from the
perfcounters tree into core/locking - but now merge
back that branch to solve a merge artifact and to
pick up cleanups of this commit that happened in
core/locking.

Signed-off-by: Ingo Molnar
30 Apr, 2009
1 commit
-
include/linux/mutex.h:136: warning: 'mutex_lock' declared inline after being called
include/linux/mutex.h:136: warning: previous declaration of 'mutex_lock' was here

Uninline it.
[ Impact: clean up and uninline, address compiler warning ]
Signed-off-by: Andrew Morton
Cc: Al Viro
Cc: Christoph Hellwig
Cc: Eric Paris
Cc: Paul Mackerras
Cc: Peter Zijlstra
LKML-Reference:
Signed-off-by: Ingo Molnar
29 Apr, 2009
1 commit
-
Merge reason: This branch was on -rc1, refresh it to almost-rc4 to pick up
the latest upstream fixes.

Signed-off-by: Ingo Molnar
21 Apr, 2009
1 commit
-
Lai Jiangshan's patch reminded me that I promised Nick to remove
that extra call overhead in schedule().

Signed-off-by: Peter Zijlstra
LKML-Reference:
Signed-off-by: Ingo Molnar
10 Apr, 2009
1 commit
-
Impact: performance regression fix for s390
The adaptive spinning mutexes will not always do what one would expect on
virtualized architectures like s390. Especially the cpu_relax() loop in
mutex_spin_on_owner might hurt if the mutex holding cpu has been scheduled
away by the hypervisor.

We would end up in a cpu_relax() loop when there is no chance that the
state of the mutex changes until the target cpu has been scheduled again by
the hypervisor.

For that reason we should change the default behaviour to no-spin on s390.
We do have an instruction which allows us to yield the current cpu in favour of
a different target cpu. Also we have an instruction which allows us to figure
out if the target cpu is physically backed.

However we need to do some performance tests until we can come up with
a solution that will do the right thing on s390.

Signed-off-by: Heiko Carstens
Acked-by: Peter Zijlstra
Cc: Martin Schwidefsky
Cc: Christian Borntraeger
LKML-Reference:
Signed-off-by: Ingo Molnar
06 Apr, 2009
1 commit
-
Impact: build fix
mutex_lock() was defined inline in kernel/mutex.c, but wasn't
declared that way in the header. This didn't cause a problem until
checkin 3a2d367d9aabac486ac4444c6c7ec7a1dab16267 added the
atomic_dec_and_mutex_lock() inline in between declaration and
definition.

This broke building with CONFIG_ALLOW_WARNINGS=n, e.g. make
allnoconfig.

Neither in the source code nor in the allnoconfig binary output can I
find any internal references to mutex_lock() in kernel/mutex.c, so
presumably this "inline" is now-useless legacy.

Cc: Eric Paris
Cc: Peter Zijlstra
Cc: Paul Mackerras
Orig-LKML-Reference:
Signed-off-by: H. Peter Anvin
15 Jan, 2009
4 commits
-
Spin more aggressively. This is less fair but also markedly faster.

The numbers:

* dbench 50 (higher is better):
  spin        1282MB/s
  v10          548MB/s
  v10 no wait 1868MB/s

* 4k creates (numbers in files/second, higher is better):
  spin        avg 200.60 median 193.20 std 19.71 high 305.93 low 186.82
  v10         avg 180.94 median 175.28 std 13.91 high 229.31 low 168.73
  v10 no wait avg 232.18 median 222.38 std 22.91 high 314.66 low 209.12

* File stats (numbers in seconds, lower is better):
  spin        2.27s
  v10         5.1s
  v10 no wait 1.6s

( The source changes are smaller than they look, I just moved the
need_resched checks in __mutex_lock_common after the cmpxchg. )

Signed-off-by: Chris Mason
Signed-off-by: Peter Zijlstra
Signed-off-by: Ingo Molnar
-
Change mutex contention behaviour such that it will sometimes busy wait on
acquisition - moving its behaviour closer to that of spinlocks.

This concept got ported to mainline from the -rt tree, where it was originally
implemented for rtmutexes by Steven Rostedt, based on work by Gregory Haskins.

Testing with Ingo's test-mutex application (http://lkml.org/lkml/2006/1/8/50)
gave a 345% boost for VFS scalability on my testbox:

# ./test-mutex-shm V 16 10 | grep "^avg ops"
avg ops/sec: 296604

# ./test-mutex-shm V 16 10 | grep "^avg ops"
avg ops/sec: 85870

The key criterion for the busy wait is that the lock owner has to be running on
a (different) cpu. The idea is that as long as the owner is running, there is a
fair chance it'll release the lock soon, and thus we'll be better off spinning
instead of blocking/scheduling.

Since regular mutexes (as opposed to rtmutexes) do not atomically track the
owner, we add the owner in a non-atomic fashion and deal with the races in
the slowpath.

Furthermore, to ease the testing of the performance impact of this new code,
there is a means to disable this behaviour at runtime (without having to reboot
the system), when scheduler debugging is enabled (CONFIG_SCHED_DEBUG=y),
by issuing the following command:

# echo NO_OWNER_SPIN > /debug/sched_features

This command re-enables spinning again (this is also the default):

# echo OWNER_SPIN > /debug/sched_features

Signed-off-by: Peter Zijlstra
Signed-off-by: Ingo Molnar
-
The problem is that dropping the spinlock right before schedule is a voluntary
preemption point and can cause a schedule, right after which we schedule again.

Fix this inefficiency by keeping preemption disabled until we schedule; do this
by explicitly disabling preemption and providing a schedule() variant that
assumes preemption is already disabled.

Signed-off-by: Peter Zijlstra
Signed-off-by: Ingo Molnar
-
Remove a local variable by combining an assignment and test in one.
Signed-off-by: Peter Zijlstra
Signed-off-by: Ingo Molnar
24 Nov, 2008
1 commit
-
Impact: fix build failure on llvm-gcc-4.2
According to the gcc manual, the 'used' attribute should be applied to
functions referenced only from inline assembly.
This fixes a build failure with llvm-gcc-4.2, which deleted
__mutex_lock_slowpath and __mutex_unlock_slowpath.

Signed-off-by: Török Edwin
Signed-off-by: Ingo Molnar
20 Oct, 2008
1 commit
-
We currently only provide the points that have to wait on contention; also
list the points we have to wait for.

Signed-off-by: Peter Zijlstra
Signed-off-by: Ingo Molnar
29 Jul, 2008
1 commit
-
Fix @key parameter to mutex_init() and one of its callers.
Warning(linux-2.6.26-git11//drivers/base/class.c:210): No description found for parameter 'key'
Signed-off-by: Randy Dunlap
Acked-by: Greg Kroah-Hartman
Signed-off-by: Ingo Molnar
10 Jun, 2008
1 commit
-
Change __mutex_lock_common() to use signal_pending_state() for the sake of
code re-use.

This adds 7 bytes to kernel/mutex.o, but afaics only because gcc isn't smart
enough.

(btw, uninlining of __mutex_lock_common() shrinks .text from 2722 to 1542,
perhaps it is worth doing).

Signed-off-by: Oleg Nesterov
Signed-off-by: Ingo Molnar
09 Feb, 2008
1 commit
-
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Harvey Harrison
Acked-by: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
07 Dec, 2007
1 commit
-
Similar to mutex_lock_interruptible(), but it can be interrupted by a fatal
signal only.

Signed-off-by: Liam R. Howlett
Acked-by: Ingo Molnar
Signed-off-by: Matthew Wilcox
12 Oct, 2007
1 commit
-
The fancy mutex_lock fastpath has too many indirections to track the caller,
hence all contentions are perceived to come from mutex_lock().

Avoid this by explicitly not using the fastpath code (it was disabled already
anyway).

Signed-off-by: Peter Zijlstra
Signed-off-by: Ingo Molnar
20 Jul, 2007
2 commits
-
__acquire
|
lock _____
| \
| __contended
| |
| wait
| _______/
|/
|
__acquired
|
__release
|
unlock

We measure acquisition and contention bouncing.

This is done by recording a cpu stamp in each lock instance.

Contention bouncing requires the cpu stamp to be set on acquisition. Hence we
move __acquired into the generic path.

__acquired is then used to measure acquisition bouncing by comparing the
current cpu with the old stamp before replacing it.

__contended is used to measure contention bouncing (only useful for preemptable
locks)

[akpm@linux-foundation.org: cleanups]
Signed-off-by: Peter Zijlstra
Acked-by: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Call the new lockstat tracking functions from the various lock primitives.
Signed-off-by: Peter Zijlstra
Acked-by: Ingo Molnar
Acked-by: Jason Baron
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
10 May, 2007
1 commit
-
Recently a few direct accesses to the thread_info in the task structure snuck
back, so this wraps them with the appropriate wrapper.

Signed-off-by: Roman Zippel
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
09 Dec, 2006
1 commit
-
md_open takes ->reconfig_mutex which causes lockdep to complain. This
(normally) doesn't have deadlock potential as the possible conflict is with a
reconfig_mutex in a different device.

I say "normally" because if a loop were created in the array->member hierarchy
a deadlock could happen. However that causes bigger problems than a deadlock
and should be fixed independently.

So we flag the lock in md_open as a nested lock. This requires defining
mutex_lock_interruptible_nested().

Cc: Ingo Molnar
Acked-by: Peter Zijlstra
Acked-by: Ingo Molnar
Signed-off-by: Neil Brown
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
04 Jul, 2006
4 commits
-
Use the lock validator framework to prove mutex locking correctness.
Signed-off-by: Ingo Molnar
Signed-off-by: Arjan van de Ven
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Work around a weird section nesting build bug causing smp-alternatives failures
under certain circumstances.

Signed-off-by: Ingo Molnar
Signed-off-by: Arjan van de Ven
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Generic lock debugging:

- generalized lock debugging framework. For example, a bug in one lock
  subsystem turns off debugging in all lock subsystems.

- got rid of the caller address passing (__IP__/__IP_DECL__/etc.) from
  the mutex/rtmutex debugging code: it caused way too much prototype
  hackery, and lockdep will give the same information anyway.

- ability to do silent tests

- check lock freeing in vfree too.

- more finegrained debugging options, to allow distributions to
  turn off more expensive debugging features.

There's no separate 'held mutexes' list anymore - but there's a 'held locks'
stack within lockdep, which unifies deadlock detection across all lock
classes. (this is independent of the lockdep validation stuff - lockdep first
checks whether we are holding a lock already)

Here are the current debugging options:

CONFIG_DEBUG_MUTEXES=y
CONFIG_DEBUG_LOCK_ALLOC=y

which do:

config DEBUG_MUTEXES
        bool "Mutex debugging, basic checks"

config DEBUG_LOCK_ALLOC
        bool "Detect incorrect freeing of live mutexes"

Signed-off-by: Ingo Molnar
Signed-off-by: Arjan van de Ven
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Rename DEBUG_WARN_ON() to the less generic DEBUG_LOCKS_WARN_ON() name, so that
it's clear that this is a lock-debugging internal mechanism.

Signed-off-by: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
27 Jun, 2006
1 commit
-
It seems ppc64 wants to lock mutexes in early bootup code, with interrupts
disabled, and they expect interrupts to stay disabled, else they crash.

Work around this bug by making the mutex debugging variants save/restore irq
flags.

Signed-off-by: Ingo Molnar
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
11 Jan, 2006
3 commits
-
Signed-off-by: Ingo Molnar
Signed-off-by: Linus Torvalds
-
Mark mutex_lock() and mutex_lock_interruptible() as might_sleep()
functions.

Signed-off-by: Ingo Molnar
Signed-off-by: Linus Torvalds
-
Call the mutex slowpath more conservatively - e.g. FRAME_POINTERS can
change the calling convention, in which case a direct branch to the
slowpath becomes illegal. Bug found by Hugh Dickins.

Signed-off-by: Ingo Molnar
Signed-off-by: Linus Torvalds
10 Jan, 2006
1 commit
-
mutex implementation, core files: just the basic subsystem, no users of it.
Signed-off-by: Ingo Molnar
Signed-off-by: Arjan van de Ven