Commit 663b97f7efd001b0c56bd5fce059c5272725b86f

Authored by Hugh Dickins
Committed by Linus Torvalds
1 parent 705e87c0c3

[PATCH] mm: flush_tlb_range outside ptlock

There was one small but very significant change in the previous patch:
mprotect's flush_tlb_range fell outside the page_table_lock.  That is how it
is in 2.4, but 2.4 behaviour doesn't prove it safe in 2.6.

On some architectures flush_tlb_range comes to the same as flush_tlb_mm, which
has always been called from outside page_table_lock in dup_mmap, and so is
proved safe.  Others required a deeper audit: I could find no reliance on
page_table_lock in any; but in ia64 and parisc I found some code which looks a
bit as if it might want preemption disabled.  Disabling it won't do any actual
harm, so pending a decision from the maintainers, disable preemption there.
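The bracketing added in ia64 and parisc follows the usual pattern for loops
issuing purely local TLB purges (a minimal sketch; the real loops appear in
the diffs below):

	preempt_disable();		/* keep the purge loop on one CPU */
	do {
		/* purge one local translation, e.g. ia64_ptcl() on ia64 */
		start += (1UL << nbits);
	} while (start < end);
	preempt_enable();

Presumably the concern is that a task preempted and migrated mid-loop would
finish the purge on a different CPU, leaving stale entries behind in the
original CPU's TLB.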

Remove comments on page_table_lock from flush_tlb_mm, flush_tlb_range and
flush_tlb_page entries in cachetlb.txt: they were rather misleading (what
generic code does is different from what usually happens), the rules are now
changing, and it's not yet clear where we'll end up (will the generic
tlb_flush_mmu happen always under lock?  never under lock?  or sometimes under
and sometimes not?).

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>

Showing 3 changed files with 4 additions and 10 deletions

Documentation/cachetlb.txt
... ... @@ -49,9 +49,6 @@
49 49 page table operations such as what happens during
50 50 fork, and exec.
51 51  
52   - Platform developers note that generic code will always
53   - invoke this interface without mm->page_table_lock held.
54   -
55 52 3) void flush_tlb_range(struct vm_area_struct *vma,
56 53 unsigned long start, unsigned long end)
57 54  
... ... @@ -72,9 +69,6 @@
72 69 call flush_tlb_page (see below) for each entry which may be
73 70 modified.
74 71  
75   - Platform developers note that generic code will always
76   - invoke this interface with mm->page_table_lock held.
77   -
78 72 4) void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
79 73  
80 74 This time we need to remove the PAGE_SIZE sized translation
... ... @@ -92,9 +86,6 @@
92 86 'vma->vm_mm' for virtual address 'addr'.
93 87  
94 88 This is used primarily during fault processing.
95   -
96   - Platform developers note that generic code will always
97   - invoke this interface with mm->page_table_lock held.
98 89  
99 90 5) void flush_tlb_pgtables(struct mm_struct *mm,
100 91 unsigned long start, unsigned long end)
arch/ia64/mm/tlb.c
... ... @@ -158,10 +158,12 @@
158 158 # ifdef CONFIG_SMP
159 159 platform_global_tlb_purge(mm, start, end, nbits);
160 160 # else
  161 + preempt_disable();
161 162 do {
162 163 ia64_ptcl(start, (nbits<<2));
163 164 start += (1UL << nbits);
164 165 } while (start < end);
  166 + preempt_enable();
165 167 # endif
166 168  
167 169 ia64_srlz_i(); /* srlz.i implies srlz.d */
include/asm-parisc/tlbflush.h
... ... @@ -88,7 +88,7 @@
88 88 if (npages >= 512) /* 2MB of space: arbitrary, should be tuned */
89 89 flush_tlb_all();
90 90 else {
91   -
  91 + preempt_disable();
92 92 mtsp(vma->vm_mm->context,1);
93 93 purge_tlb_start();
94 94 if (split_tlb) {
... ... @@ -102,6 +102,7 @@
102 102 pdtlb(start);
103 103 start += PAGE_SIZE;
104 104 }
  105 + preempt_enable();
105 106 }
106 107 purge_tlb_end();
107 108 }