Commit 335d7afbfb71faac833734a94240c1e07cf0ead8

Authored by Gerald Schaefer
Committed by Ingo Molnar
1 parent 22a867d817

mutexes, sched: Introduce arch_mutex_cpu_relax()

The spinning mutex implementation uses cpu_relax() in busy loops as a
compiler barrier. Depending on the architecture, cpu_relax() may do more
than is needed in these specific mutex spin loops. On System z we also
give up the time slice of the virtual CPU in cpu_relax(), which prevents
effective spinning on the mutex.
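
To illustrate why this hurts, here is a minimal sketch (not the actual
System z code; hypervisor_yield() is a hypothetical helper) of a
cpu_relax() that yields the virtual CPU in addition to acting as a
compiler barrier:

	/*
	 * Sketch only: in a tight mutex spin loop the compiler barrier is
	 * all that is needed, but an implementation like this also gives
	 * up the time slice on every iteration.
	 */
	static inline void cpu_relax(void)
	{
		hypervisor_yield();	/* hypothetical: yield the virtual CPU */
		barrier();		/* compiler barrier so the loop re-reads memory */
	}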

This patch replaces cpu_relax() in the spinning mutex code with
arch_mutex_cpu_relax(), which can be defined by each architecture that
selects HAVE_ARCH_MUTEX_CPU_RELAX. The default is still cpu_relax(), so
for now this patch does not affect any architecture other than System z.
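
For a hypothetical architecture "foo", the opt-in would look like the
sketch below, mirroring the s390 hunks in this patch; architectures that
do not select the symbol keep the cpu_relax() default from
include/linux/mutex.h:

	# arch/foo/Kconfig (hypothetical)
	config FOO
		select HAVE_ARCH_MUTEX_CPU_RELAX

	/* arch/foo/include/asm/mutex.h (hypothetical) */
	#define arch_mutex_cpu_relax()	barrier()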

Signed-off-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1290437256.7455.4.camel@thinkpad>
Signed-off-by: Ingo Molnar <mingo@elte.hu>

Showing 6 changed files with 13 additions and 2 deletions

arch/Kconfig
... ... @@ -175,5 +175,8 @@
175 175 config HAVE_ARCH_JUMP_LABEL
176 176 bool
177 177  
  178 +config HAVE_ARCH_MUTEX_CPU_RELAX
  179 + bool
  180 +
178 181 source "kernel/gcov/Kconfig"
arch/s390/Kconfig
... ... @@ -99,6 +99,7 @@
99 99 select HAVE_KERNEL_LZMA
100 100 select HAVE_KERNEL_LZO
101 101 select HAVE_GET_USER_PAGES_FAST
  102 + select HAVE_ARCH_MUTEX_CPU_RELAX
102 103 select ARCH_INLINE_SPIN_TRYLOCK
103 104 select ARCH_INLINE_SPIN_TRYLOCK_BH
104 105 select ARCH_INLINE_SPIN_LOCK
arch/s390/include/asm/mutex.h
... ... @@ -7,4 +7,6 @@
7 7 */
8 8  
9 9 #include <asm-generic/mutex-dec.h>
  10 +
  11 +#define arch_mutex_cpu_relax() barrier()
include/linux/mutex.h
... ... @@ -160,5 +160,9 @@
160 160 extern void mutex_unlock(struct mutex *lock);
161 161 extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock);
162 162  
  163 +#ifndef CONFIG_HAVE_ARCH_MUTEX_CPU_RELAX
  164 +#define arch_mutex_cpu_relax() cpu_relax()
  165 +#endif
  166 +
163 167 #endif
kernel/mutex.c
... ... @@ -199,7 +199,7 @@
199 199 * memory barriers as we'll eventually observe the right
200 200 * values at the cost of a few extra spins.
201 201 */
202   - cpu_relax();
  202 + arch_mutex_cpu_relax();
203 203 }
204 204 #endif
205 205 spin_lock_mutex(&lock->wait_lock, flags);
kernel/sched.c
... ... @@ -75,6 +75,7 @@
75 75  
76 76 #include <asm/tlb.h>
77 77 #include <asm/irq_regs.h>
  78 +#include <asm/mutex.h>
78 79  
79 80 #include "sched_cpupri.h"
80 81 #include "workqueue_sched.h"
... ... @@ -3888,7 +3889,7 @@
3888 3889 if (task_thread_info(rq->curr) != owner || need_resched())
3889 3890 return 0;
3890 3891  
3891   - cpu_relax();
  3892 + arch_mutex_cpu_relax();
3892 3893 }
3893 3894  
3894 3895 return 1;