Commit f68e14805085972b4e0b0ab684af37f713b9c262
Committed by Linus Torvalds
1 parent: 3d2d827f5c
mm: reduce atomic use on use_mm fast path
When the mm being switched to matches the active mm, we don't need to
increment and then drop the mm count.  In a simple benchmark this happens
about 50% of the time.  Making that conditional reduces contention on
that cacheline on SMP systems.

Acked-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1 file changed, 6 insertions(+), 3 deletions(-)
mm/mmu_context.c
@@ -26,13 +26,16 @@
 
 	task_lock(tsk);
 	active_mm = tsk->active_mm;
-	atomic_inc(&mm->mm_count);
+	if (active_mm != mm) {
+		atomic_inc(&mm->mm_count);
+		tsk->active_mm = mm;
+	}
 	tsk->mm = mm;
-	tsk->active_mm = mm;
 	switch_mm(active_mm, mm, tsk);
 	task_unlock(tsk);
 
-	mmdrop(active_mm);
+	if (active_mm != mm)
+		mmdrop(active_mm);
 }
 
 /*
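The pattern above can be sketched outside the kernel.  The following is a minimal userspace illustration, not the real kernel API: `struct mm`, `struct task`, and `use_mm_sketch()` are hypothetical stand-ins, with C11 atomics playing the role of `atomic_inc()`/`mmdrop()`.  The point is that when the task's active mm already equals the target mm, the reference count (a potentially contended shared cacheline) is never touched.

```c
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical stand-in for the kernel's mm_struct: only the
 * reference count that use_mm() manipulates. */
struct mm {
	atomic_int mm_count;
};

/* Hypothetical stand-in for task_struct: the two mm pointers
 * involved in the switch. */
struct task {
	struct mm *mm;         /* address space for userspace accesses */
	struct mm *active_mm;  /* address space currently in use */
};

/* Sketch of the patched use_mm() logic: take a reference on the new
 * mm and drop the old one only when they actually differ.  On the
 * fast path (active_mm == mm) no atomic read-modify-write happens
 * at all. */
static void use_mm_sketch(struct task *tsk, struct mm *mm)
{
	struct mm *active_mm = tsk->active_mm;

	if (active_mm != mm) {
		atomic_fetch_add(&mm->mm_count, 1);
		tsk->active_mm = mm;
	}
	tsk->mm = mm;
	/* the real code calls switch_mm(active_mm, mm, tsk) here */

	if (active_mm != mm)
		atomic_fetch_sub(&active_mm->mm_count, 1); /* mmdrop() analogue */
}
```

With two mm objects, switching a task to the mm it already has leaves both counts untouched, while switching to a different mm bumps the new count and drops the old one, mirroring the diff above.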