Commit ec8c0446b6e2b67b5c8813eb517f4bf00efa99a9

Authored by Ralf Baechle
Committed by Linus Torvalds
1 parent bcd022801e

[PATCH] Optimize D-cache alias handling on fork

Virtually indexed, physically tagged cache architectures can get away
without cache flushing when forking.  This patch adds a new cache
flushing function flush_cache_dup_mm(struct mm_struct *) which, for the
moment, I've implemented to do the same thing as flush_cache_mm() on
all architectures except MIPS, where it is a no-op.

Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
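
As a rough sketch of the pattern the commit message describes (not the
literal patch): each architecture's asm/cacheflush.h gains a
flush_cache_dup_mm() hook, which by default does exactly what
flush_cache_mm() did, while a port with a physically tagged D-cache can
make it a no-op.  In the sketch below, CONFIG_CPU_HAS_PHYS_TAGGED_DCACHE
and dup_mmap_cache_step() are made-up names used only for illustration;
the real call site is assumed from the description to be the fork path
(dup_mmap() copying the parent's mm).

struct mm_struct;				/* opaque for this sketch */

void flush_cache_mm(struct mm_struct *mm);	/* existing per-arch hook */

#ifdef CONFIG_CPU_HAS_PHYS_TAGGED_DCACHE	/* made-up config symbol */
/* A physically tagged D-cache cannot alias, so fork needs no flush. */
#define flush_cache_dup_mm(mm)	do { } while (0)
#else
/* Default: behave exactly like flush_cache_mm(), as before the patch. */
#define flush_cache_dup_mm(mm)	flush_cache_mm(mm)
#endif

/*
 * The fork path then flushes the parent's mm through the new hook
 * before its page tables are copied into the child.
 */
static void dup_mmap_cache_step(struct mm_struct *mm, struct mm_struct *oldmm)
{
	flush_cache_dup_mm(oldmm);	/* was: flush_cache_mm(oldmm) */
	/* ... copy page tables of 'oldmm' into 'mm' here ... */
	(void)mm;
}

On an architecture like Alpha (shown further below), where all of these
hooks are already no-ops, the new macro simply joins the existing list.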

Showing 27 changed files with 54 additions and 7 deletions

Documentation/cachetlb.txt
			Cache and TLB Flushing
			     Under Linux

		  David S. Miller <davem@redhat.com>

This document describes the cache/TLB flushing interfaces called
by the Linux VM subsystem.  It enumerates each interface,
describes its intended purpose, and what side effect is expected
after the interface is invoked.

The side effects described below are stated for a uniprocessor
implementation, and what is to happen on that single processor.  The
SMP cases are a simple extension, in that you just extend the
definition such that the side effect for a particular interface occurs
on all processors in the system.  Don't let this scare you into
thinking SMP cache/TLB flushing must be so inefficient; this is in
fact an area where many optimizations are possible.  For example,
if it can be proven that a user address space has never executed
on a cpu (see vma->cpu_vm_mask), one need not perform a flush
for this address space on that cpu.

First, the TLB flushing interfaces, since they are the simplest.  The
"TLB" is abstracted under Linux as something the cpu uses to cache
virtual-->physical address translations obtained from the software
page tables.  Meaning that if the software page tables change, it is
possible for stale translations to exist in this "TLB" cache.
Therefore when software page table changes occur, the kernel will
invoke one of the following flush methods _after_ the page table
changes occur:

1) void flush_tlb_all(void)

	The most severe flush of all.  After this interface runs,
	any previous page table modification whatsoever will be
	visible to the cpu.

	This is usually invoked when the kernel page tables are
	changed, since such translations are "global" in nature.

2) void flush_tlb_mm(struct mm_struct *mm)

	This interface flushes an entire user address space from
	the TLB.  After running, this interface must make sure that
	any previous page table modifications for the address space
	'mm' will be visible to the cpu.  That is, after running,
	there will be no entries in the TLB for 'mm'.

	This interface is used to handle whole address space
	page table operations such as what happens during
	fork and exec.

3) void flush_tlb_range(struct vm_area_struct *vma,
			unsigned long start, unsigned long end)

	Here we are flushing a specific range of (user) virtual
	address translations from the TLB.  After running, this
	interface must make sure that any previous page table
	modifications for the address space 'vma->vm_mm' in the range
	'start' to 'end-1' will be visible to the cpu.  That is, after
	running, there will be no entries in the TLB for 'mm' for
	virtual addresses in the range 'start' to 'end-1'.

	The "vma" is the backing store being used for the region.
	Primarily, this is used for munmap() type operations.

	The interface is provided in hopes that the port can find
	a suitably efficient method for removing multiple page
	sized translations from the TLB, instead of having the kernel
	call flush_tlb_page (see below) for each entry which may be
	modified.

4) void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)

	This time we need to remove the PAGE_SIZE sized translation
	from the TLB.  The 'vma' is the backing structure used by
	Linux to keep track of mmap'd regions for a process; the
	address space is available via vma->vm_mm.  Also, one may
	test (vma->vm_flags & VM_EXEC) to see if this region is
	executable (and thus could be in the 'instruction TLB' in
	split-tlb type setups).

	After running, this interface must make sure that any previous
	page table modification for address space 'vma->vm_mm' for
	user virtual address 'addr' will be visible to the cpu.  That
	is, after running, there will be no entries in the TLB for
	'vma->vm_mm' for virtual address 'addr'.

	This is used primarily during fault processing.

5) void flush_tlb_pgtables(struct mm_struct *mm,
			   unsigned long start, unsigned long end)

	The software page tables for address space 'mm' for virtual
	addresses in the range 'start' to 'end-1' are being torn down.

	Some platforms cache the lowest level of the software page tables
	in a linear virtually mapped array, to make TLB miss processing
	more efficient.  On such platforms, since the TLB is caching the
	software page table structure, it needs to be flushed when parts
	of the software page table tree are unlinked/freed.

	Sparc64 is one example of a platform which does this.

	Usually, when munmap()'ing an area of user virtual address
	space, the kernel leaves the page table parts around and just
	marks the individual pte's as invalid.  However, if very large
	portions of the address space are unmapped, the kernel frees up
	those portions of the software page tables to prevent potential
	excessive kernel memory usage caused by erratic mmap/munmap
	sequences.  It is at these times that flush_tlb_pgtables will
	be invoked.

6) void update_mmu_cache(struct vm_area_struct *vma,
			 unsigned long address, pte_t pte)

	At the end of every page fault, this routine is invoked to
	tell the architecture specific code that a translation
	described by "pte" now exists at virtual address "address"
	for address space "vma->vm_mm", in the software page tables.

	A port may use this information in any way it so chooses.
	For example, it could use this event to pre-load TLB
	translations for software managed TLB configurations.
	The sparc64 port currently does this.

7) void tlb_migrate_finish(struct mm_struct *mm)

	This interface is called at the end of an explicit
	process migration.  This interface provides a hook
	to allow a platform to update TLB or context-specific
	information for the address space.

	The ia64 sn2 platform is one example of a platform
	that uses this interface.

8) void lazy_mmu_prot_update(pte_t pte)

	This interface is called whenever the protection on
	any user PTE changes.  This interface provides a notification
	to architecture specific code to take appropriate action.

Next, we have the cache flushing interfaces.  In general, when Linux
is changing an existing virtual-->physical mapping to a new value,
the sequence will be in one of the following forms:

	1) flush_cache_mm(mm);
	   change_all_page_tables_of(mm);
	   flush_tlb_mm(mm);

	2) flush_cache_range(vma, start, end);
	   change_range_of_page_tables(mm, start, end);
	   flush_tlb_range(vma, start, end);

	3) flush_cache_page(vma, addr, pfn);
	   set_pte(pte_pointer, new_pte_val);
	   flush_tlb_page(vma, addr);

The cache level flush will always be first, because this allows
us to properly handle systems whose caches are strict and require
a virtual-->physical translation to exist for a virtual address
when that virtual address is flushed from the cache.  The HyperSparc
cpu is one such cpu with this attribute.

The cache flushing routines below need only deal with cache flushing
to the extent that it is necessary for a particular cpu.  Mostly,
these routines must be implemented for cpus which have virtually
indexed caches which must be flushed when virtual-->physical
translations are changed or removed.  So, for example, the physically
indexed physically tagged caches of IA32 processors have no need to
implement these interfaces since the caches are fully synchronized
and have no dependency on translation information.

Here are the routines, one by one:

1) void flush_cache_mm(struct mm_struct *mm)

	This interface flushes an entire user address space from
	the caches.  That is, after running, there will be no cache
	lines associated with 'mm'.

	This interface is used to handle whole address space
-	page table operations such as what happens during
-	fork, exit, and exec.
+	page table operations such as what happens during exit and exec.

-2) void flush_cache_range(struct vm_area_struct *vma,
+2) void flush_cache_dup_mm(struct mm_struct *mm)
+
+	This interface flushes an entire user address space from
+	the caches.  That is, after running, there will be no cache
+	lines associated with 'mm'.
+
+	This interface is used to handle whole address space
+	page table operations such as what happens during fork.
+
+	This option is separate from flush_cache_mm to allow some
+	optimizations for VIPT caches.
+
+3) void flush_cache_range(struct vm_area_struct *vma,
			   unsigned long start, unsigned long end)

	Here we are flushing a specific range of (user) virtual
	addresses from the cache.  After running, there will be no
	entries in the cache for 'vma->vm_mm' for virtual addresses in
	the range 'start' to 'end-1'.

	The "vma" is the backing store being used for the region.
	Primarily, this is used for munmap() type operations.

	The interface is provided in hopes that the port can find
	a suitably efficient method for removing multiple page
	sized regions from the cache, instead of having the kernel
	call flush_cache_page (see below) for each entry which may be
	modified.

-3) void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)
+4) void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)

	This time we need to remove a PAGE_SIZE sized range
	from the cache.  The 'vma' is the backing structure used by
	Linux to keep track of mmap'd regions for a process; the
	address space is available via vma->vm_mm.  Also, one may
	test (vma->vm_flags & VM_EXEC) to see if this region is
	executable (and thus could be in the 'instruction cache' in
	"Harvard" type cache layouts).

	The 'pfn' indicates the physical page frame (shift this value
	left by PAGE_SHIFT to get the physical address) that 'addr'
	translates to.  It is this mapping which should be removed from
	the cache.

	After running, there will be no entries in the cache for
	'vma->vm_mm' for virtual address 'addr' which translates
	to 'pfn'.

	This is used primarily during fault processing.

-4) void flush_cache_kmaps(void)
+5) void flush_cache_kmaps(void)

	This routine need only be implemented if the platform utilizes
	highmem.  It will be called right before all of the kmaps
	are invalidated.

	After running, there will be no entries in the cache for
	the kernel virtual address range PKMAP_ADDR(0) to
	PKMAP_ADDR(LAST_PKMAP).

	This routine should be implemented in asm/highmem.h

-5) void flush_cache_vmap(unsigned long start, unsigned long end)
+6) void flush_cache_vmap(unsigned long start, unsigned long end)
   void flush_cache_vunmap(unsigned long start, unsigned long end)

	Here in these two interfaces we are flushing a specific range
	of (kernel) virtual addresses from the cache.  After running,
	there will be no entries in the cache for the kernel address
	space for virtual addresses in the range 'start' to 'end-1'.

	The first of these two routines is invoked after map_vm_area()
	has installed the page table entries.  The second is invoked
	before unmap_vm_area() deletes the page table entries.

There exists another whole class of cpu cache issues which currently
require a whole different set of interfaces to handle properly.
The biggest problem is that of virtual aliasing in the data cache
of a processor.

Is your port susceptible to virtual aliasing in its D-cache?
Well, if your D-cache is virtually indexed, is larger in size than
PAGE_SIZE, and does not prevent multiple cache lines for the same
physical address from existing at once, you have this problem.

If your D-cache has this problem, first define asm/shmparam.h SHMLBA
properly; it should essentially be the size of your virtually
addressed D-cache (or if the size is variable, the largest possible
size).  This setting will force the SYSv IPC layer to only allow user
processes to mmap shared memory at addresses which are a multiple of
this value.

NOTE: This does not fix shared mmaps, check out the sparc64 port for
one way to solve this (in particular SPARC_FLAG_MMAPSHARED).

Next, you have to solve the D-cache aliasing issue for all
other cases.  Please keep in mind the fact that, for a given page
mapped into some user address space, there is always at least one more
mapping, that of the kernel in its linear mapping starting at
PAGE_OFFSET.  So immediately, once the first user maps a given
physical page into its address space, by implication the D-cache
aliasing problem has the potential to exist since the kernel already
maps this page at its virtual address.

  void copy_user_page(void *to, void *from, unsigned long addr, struct page *page)
  void clear_user_page(void *to, unsigned long addr, struct page *page)

	These two routines store data in user anonymous or COW
	pages.  They allow a port to efficiently avoid D-cache alias
	issues between userspace and the kernel.

	For example, a port may temporarily map 'from' and 'to' to
	kernel virtual addresses during the copy.  The virtual address
	for these two pages is chosen in such a way that the kernel
	load/store instructions happen to virtual addresses which are
	of the same "color" as the user mapping of the page.  Sparc64,
	for example, uses this technique.

	The 'addr' parameter tells the virtual address where the
	user will ultimately have this page mapped, and the 'page'
	parameter gives a pointer to the struct page of the target.

	If D-cache aliasing is not an issue, these two routines may
	simply call memcpy/memset directly and do nothing more.

  void flush_dcache_page(struct page *page)

	Any time the kernel writes to a page cache page, _OR_
	the kernel is about to read from a page cache page and
	user space shared/writable mappings of this page potentially
	exist, this routine is called.

	NOTE: This routine need only be called for page cache pages
	      which can potentially ever be mapped into the address
	      space of a user process.  So for example, VFS layer code
	      handling vfs symlinks in the page cache need not call
	      this interface at all.

	The phrase "kernel writes to a page cache page" means,
	specifically, that the kernel executes store instructions
	that dirty data in that page at the page->virtual mapping
	of that page.  It is important to flush here to handle
	D-cache aliasing, to make sure these kernel stores are
	visible to user space mappings of that page.

	The corollary case is just as important, if there are users
	which have shared+writable mappings of this file, we must make
	sure that kernel reads of these pages will see the most recent
	stores done by the user.

	If D-cache aliasing is not an issue, this routine may
	simply be defined as a nop on that architecture.

	There is a bit set aside in page->flags (PG_arch_1) as
	"architecture private".  The kernel guarantees that,
	for pagecache pages, it will clear this bit when such
	a page first enters the pagecache.

	This allows these interfaces to be implemented much more
	efficiently.  It allows one to "defer" (perhaps indefinitely)
	the actual flush if there are currently no user processes
	mapping this page.  See sparc64's flush_dcache_page and
	update_mmu_cache implementations for an example of how to go
	about doing this.

	The idea is, first at flush_dcache_page() time, if
	page->mapping->i_mmap is an empty tree and ->i_mmap_nonlinear
	an empty list, just mark the architecture private page flag bit.
	Later, in update_mmu_cache(), a check is made of this flag bit,
	and if set the flush is done and the flag bit is cleared.

	IMPORTANT NOTE: It is often important, if you defer the flush,
			that the actual flush occurs on the same CPU
			as did the cpu stores into the page to make it
			dirty.  Again, see sparc64 for examples of how
			to deal with this.

  void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
			 unsigned long user_vaddr,
			 void *dst, void *src, int len)
  void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
			   unsigned long user_vaddr,
			   void *dst, void *src, int len)

	When the kernel needs to copy arbitrary data in and out
	of arbitrary user pages (e.g. for ptrace()) it will use
	these two routines.

	Any necessary cache flushing or other coherency operations
	that need to occur should happen here.  If the processor's
	instruction cache does not snoop cpu stores, it is very
	likely that you will need to flush the instruction cache
	for copy_to_user_page().

  void flush_anon_page(struct page *page, unsigned long vmaddr)

	When the kernel needs to access the contents of an anonymous
	page, it calls this function (currently only
	get_user_pages()).  Note: flush_dcache_page() deliberately
	doesn't work for an anonymous page.  The default
	implementation is a nop (and should remain so for all coherent
	architectures).  For incoherent architectures, it should flush
	the cache of the page at vmaddr in the current user process.

  void flush_kernel_dcache_page(struct page *page)

	When the kernel needs to modify a user page it has obtained
	with kmap, it calls this function after all modifications are
	complete (but before kunmapping it) to bring the underlying
	page up to date.  It is assumed here that the user has no
	incoherent cached copies (i.e. the original page was obtained
	from a mechanism like get_user_pages()).  The default
	implementation is a nop and should remain so on all coherent
	architectures.  On incoherent architectures, this should flush
	the kernel cache for page (using page_address(page)).

  void flush_icache_range(unsigned long start, unsigned long end)

	When the kernel stores into addresses that it will execute
	out of (e.g. when loading modules), this function is called.

	If the icache does not snoop stores then this routine will need
	to flush it.

  void flush_icache_page(struct vm_area_struct *vma, struct page *page)

	All the functionality of flush_icache_page can be implemented in
	flush_dcache_page and update_mmu_cache.  In 2.7 the hope is to
	remove this interface completely.

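The deferred-flush scheme described above for flush_dcache_page() and
update_mmu_cache() can be sketched as follows.  This is only an
illustration under stated assumptions: mapping_mapped_now() and
__flush_dcache_phys_page() are placeholder helpers invented for the
sketch, not real kernel APIs; the real ports (e.g. sparc64) track the
state in PG_arch_1 exactly as the text describes.

#include <stdbool.h>

#define PG_arch_1	(1UL << 0)	/* "architecture private" flag bit */

struct page {
	unsigned long flags;
};

/* Placeholder: does the page currently have any user mappings? */
static bool mapping_mapped_now(struct page *page)
{
	(void)page;
	return false;
}

/* Placeholder: flush every D-cache alias of this physical page. */
static void __flush_dcache_phys_page(struct page *page)
{
	(void)page;
}

void flush_dcache_page(struct page *page)
{
	if (!mapping_mapped_now(page)) {
		/* Nobody maps it yet: defer, just remember it is dirty. */
		page->flags |= PG_arch_1;
		return;
	}
	__flush_dcache_phys_page(page);
}

void update_mmu_cache_deferred_flush(struct page *page)
{
	/* A user mapping is being installed: do the deferred flush now. */
	if (page->flags & PG_arch_1) {
		__flush_dcache_phys_page(page);
		page->flags &= ~PG_arch_1;
	}
}
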
include/asm-alpha/cacheflush.h
#ifndef _ALPHA_CACHEFLUSH_H
#define _ALPHA_CACHEFLUSH_H

#include <linux/mm.h>

/* Caches aren't brain-dead on the Alpha. */
#define flush_cache_all()			do { } while (0)
#define flush_cache_mm(mm)			do { } while (0)
+#define flush_cache_dup_mm(mm)			do { } while (0)
#define flush_cache_range(vma, start, end)	do { } while (0)
#define flush_cache_page(vma, vmaddr, pfn)	do { } while (0)
#define flush_dcache_page(page)			do { } while (0)
#define flush_dcache_mmap_lock(mapping)		do { } while (0)
#define flush_dcache_mmap_unlock(mapping)	do { } while (0)
#define flush_cache_vmap(start, end)		do { } while (0)
#define flush_cache_vunmap(start, end)		do { } while (0)

/* Note that the following two definitions are _highly_ dependent
   on the contexts in which they are used in the kernel.  I personally
   think it is criminal how loosely defined these macros are. */

/* We need to flush the kernel's icache after loading modules.  The
   only other use of this macro is in load_aout_interp which is not
   used on Alpha.

   Note that this definition should *not* be used for userspace
   icache flushing.  While functional, it is _way_ overkill.  The
   icache is tagged with ASNs and it suffices to allocate a new ASN
   for the process. */
#ifndef CONFIG_SMP
#define flush_icache_range(start, end)		imb()
#else
#define flush_icache_range(start, end)		smp_imb()
extern void smp_imb(void);
#endif

/* We need to flush the userspace icache after setting breakpoints in
   ptrace.

   Instead of indiscriminately using imb, take advantage of the fact
   that icache entries are tagged with the ASN and load a new mm context. */
/* ??? Ought to use this in arch/alpha/kernel/signal.c too. */

#ifndef CONFIG_SMP
extern void __load_new_mm_context(struct mm_struct *);
static inline void
flush_icache_user_range(struct vm_area_struct *vma, struct page *page,
			unsigned long addr, int len)
{
	if (vma->vm_flags & VM_EXEC) {
		struct mm_struct *mm = vma->vm_mm;
		if (current->active_mm == mm)
			__load_new_mm_context(mm);
		else
			mm->context[smp_processor_id()] = 0;
	}
}
#else
extern void flush_icache_user_range(struct vm_area_struct *vma,
		struct page *page, unsigned long addr, int len);
#endif

/* This is used only in do_no_page and do_swap_page. */
#define flush_icache_page(vma, page)	\
	flush_icache_user_range((vma), (page), 0, 0)

#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
do {	memcpy(dst, src, len); \
	flush_icache_user_range(vma, page, vaddr, len); \
} while (0)

#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
	memcpy(dst, src, len)

#endif /* _ALPHA_CACHEFLUSH_H */

include/asm-arm/cacheflush.h
/*
 *  linux/include/asm-arm/cacheflush.h
 *
 *  Copyright (C) 1999-2002 Russell King
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */
#ifndef _ASMARM_CACHEFLUSH_H
#define _ASMARM_CACHEFLUSH_H

#include <linux/sched.h>
#include <linux/mm.h>

#include <asm/glue.h>
#include <asm/shmparam.h>

#define CACHE_COLOUR(vaddr)	((vaddr & (SHMLBA - 1)) >> PAGE_SHIFT)

/*
 *	Cache Model
 *	===========
 */
#undef _CACHE
#undef MULTI_CACHE

#if defined(CONFIG_CPU_CACHE_V3)
# ifdef _CACHE
#  define MULTI_CACHE 1
# else
#  define _CACHE v3
# endif
#endif

#if defined(CONFIG_CPU_CACHE_V4)
# ifdef _CACHE
#  define MULTI_CACHE 1
# else
#  define _CACHE v4
# endif
#endif

#if defined(CONFIG_CPU_ARM920T) || defined(CONFIG_CPU_ARM922T) || \
    defined(CONFIG_CPU_ARM925T) || defined(CONFIG_CPU_ARM1020)
# define MULTI_CACHE 1
#endif

#if defined(CONFIG_CPU_ARM926T)
# ifdef _CACHE
#  define MULTI_CACHE 1
# else
#  define _CACHE arm926
# endif
#endif

#if defined(CONFIG_CPU_ARM940T)
# ifdef _CACHE
#  define MULTI_CACHE 1
# else
#  define _CACHE arm940
# endif
#endif

#if defined(CONFIG_CPU_ARM946E)
# ifdef _CACHE
#  define MULTI_CACHE 1
# else
#  define _CACHE arm946
# endif
#endif

#if defined(CONFIG_CPU_CACHE_V4WB)
# ifdef _CACHE
#  define MULTI_CACHE 1
# else
#  define _CACHE v4wb
# endif
#endif

#if defined(CONFIG_CPU_XSCALE)
# ifdef _CACHE
#  define MULTI_CACHE 1
# else
#  define _CACHE xscale
# endif
#endif

#if defined(CONFIG_CPU_XSC3)
# ifdef _CACHE
#  define MULTI_CACHE 1
# else
#  define _CACHE xsc3
# endif
#endif

#if defined(CONFIG_CPU_V6)
//# ifdef _CACHE
# define MULTI_CACHE 1
//# else
//#  define _CACHE v6
//# endif
#endif

#if !defined(_CACHE) && !defined(MULTI_CACHE)
#error Unknown cache maintainence model
#endif

/*
 * This flag is used to indicate that the page pointed to by a pte
 * is dirty and requires cleaning before returning it to the user.
 */
#define PG_dcache_dirty PG_arch_1

/*
 *	MM Cache Management
 *	===================
 *
 *	The arch/arm/mm/cache-*.S and arch/arm/mm/proc-*.S files
 *	implement these methods.
 *
 *	Start addresses are inclusive and end addresses are exclusive;
 *	start addresses should be rounded down, end addresses up.
 *
 *	See Documentation/cachetlb.txt for more information.
 *	Please note that the implementation of these, and the required
 *	effects are cache-type (VIVT/VIPT/PIPT) specific.
 *
 *	flush_cache_kern_all()
 *
 *		Unconditionally clean and invalidate the entire cache.
 *
 *	flush_cache_user_mm(mm)
 *
 *		Clean and invalidate all user space cache entries
 *		before a change of page tables.
 *
 *	flush_cache_user_range(start, end, flags)
 *
 *		Clean and invalidate a range of cache entries in the
 *		specified address space before a change of page tables.
 *		- start - user start address (inclusive, page aligned)
 *		- end   - user end address (exclusive, page aligned)
 *		- flags - vma->vm_flags field
 *
 *	coherent_kern_range(start, end)
 *
 *		Ensure coherency between the Icache and the Dcache in the
 *		region described by start, end.  If you have non-snooping
 *		Harvard caches, you need to implement this function.
 *		- start - virtual start address
 *		- end   - virtual end address
 *
 *	DMA Cache Coherency
 *	===================
 *
 *	dma_inv_range(start, end)
 *
 *		Invalidate (discard) the specified virtual address range.
 *		May not write back any entries.  If 'start' or 'end'
 *		are not cache line aligned, those lines must be written
 *		back.
 *		- start - virtual start address
 *		- end   - virtual end address
 *
 *	dma_clean_range(start, end)
 *
 *		Clean (write back) the specified virtual address range.
 *		- start - virtual start address
 *		- end   - virtual end address
 *
 *	dma_flush_range(start, end)
 *
 *		Clean and invalidate the specified virtual address range.
 *		- start - virtual start address
 *		- end   - virtual end address
 */

struct cpu_cache_fns {
	void (*flush_kern_all)(void);
	void (*flush_user_all)(void);
	void (*flush_user_range)(unsigned long, unsigned long, unsigned int);

	void (*coherent_kern_range)(unsigned long, unsigned long);
	void (*coherent_user_range)(unsigned long, unsigned long);
	void (*flush_kern_dcache_page)(void *);

	void (*dma_inv_range)(unsigned long, unsigned long);
	void (*dma_clean_range)(unsigned long, unsigned long);
	void (*dma_flush_range)(unsigned long, unsigned long);
};

/*
 * Select the calling method
 */
#ifdef MULTI_CACHE

extern struct cpu_cache_fns cpu_cache;

#define __cpuc_flush_kern_all		cpu_cache.flush_kern_all
#define __cpuc_flush_user_all		cpu_cache.flush_user_all
#define __cpuc_flush_user_range		cpu_cache.flush_user_range
#define __cpuc_coherent_kern_range	cpu_cache.coherent_kern_range
#define __cpuc_coherent_user_range	cpu_cache.coherent_user_range
#define __cpuc_flush_dcache_page	cpu_cache.flush_kern_dcache_page

/*
 * These are private to the dma-mapping API.  Do not use directly.
 * Their sole purpose is to ensure that data held in the cache
 * is visible to DMA, or data written by DMA to system memory is
 * visible to the CPU.
 */
#define dmac_inv_range			cpu_cache.dma_inv_range
#define dmac_clean_range		cpu_cache.dma_clean_range
#define dmac_flush_range		cpu_cache.dma_flush_range

#else

#define __cpuc_flush_kern_all		__glue(_CACHE,_flush_kern_cache_all)
#define __cpuc_flush_user_all		__glue(_CACHE,_flush_user_cache_all)
#define __cpuc_flush_user_range		__glue(_CACHE,_flush_user_cache_range)
#define __cpuc_coherent_kern_range	__glue(_CACHE,_coherent_kern_range)
#define __cpuc_coherent_user_range	__glue(_CACHE,_coherent_user_range)
#define __cpuc_flush_dcache_page	__glue(_CACHE,_flush_kern_dcache_page)

extern void __cpuc_flush_kern_all(void);
extern void __cpuc_flush_user_all(void);
extern void __cpuc_flush_user_range(unsigned long, unsigned long, unsigned int);
extern void __cpuc_coherent_kern_range(unsigned long, unsigned long);
extern void __cpuc_coherent_user_range(unsigned long, unsigned long);
extern void __cpuc_flush_dcache_page(void *);

/*
 * These are private to the dma-mapping API.  Do not use directly.
 * Their sole purpose is to ensure that data held in the cache
 * is visible to DMA, or data written by DMA to system memory is
 * visible to the CPU.
 */
#define dmac_inv_range			__glue(_CACHE,_dma_inv_range)
#define dmac_clean_range		__glue(_CACHE,_dma_clean_range)
#define dmac_flush_range		__glue(_CACHE,_dma_flush_range)

extern void dmac_inv_range(unsigned long, unsigned long);
extern void dmac_clean_range(unsigned long, unsigned long);
extern void dmac_flush_range(unsigned long, unsigned long);

#endif

/*
 * flush_cache_vmap() is used when creating mappings (eg, via vmap,
 * vmalloc, ioremap etc) in kernel space for pages.  Since the
 * direct-mappings of these pages may contain cached data, we need
 * to do a full cache flush to ensure that writebacks don't corrupt
 * data placed into these pages via the new mappings.
 */
#define flush_cache_vmap(start, end)	flush_cache_all()
#define flush_cache_vunmap(start, end)	flush_cache_all()

/*
 * Copy user data from/to a page which is mapped into a different
 * processes address space.  Really, we want to allow our "user
 * space" model to handle this.
 */
#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
	do {							\
		memcpy(dst, src, len);				\
		flush_ptrace_access(vma, page, vaddr, dst, len, 1);\
	} while (0)

#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
	do {							\
		memcpy(dst, src, len);				\
	} while (0)

/*
 * Convert calls to our calling convention.
 */
#define flush_cache_all()		__cpuc_flush_kern_all()
#ifndef CONFIG_CPU_CACHE_VIPT
static inline void flush_cache_mm(struct mm_struct *mm)
{
	if (cpu_isset(smp_processor_id(), mm->cpu_vm_mask))
		__cpuc_flush_user_all();
}

static inline void
flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end)
{
	if (cpu_isset(smp_processor_id(), vma->vm_mm->cpu_vm_mask))
		__cpuc_flush_user_range(start & PAGE_MASK, PAGE_ALIGN(end),
291 vma->vm_flags); 291 vma->vm_flags);
292 } 292 }
293 293
294 static inline void 294 static inline void
295 flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn) 295 flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn)
296 { 296 {
297 if (cpu_isset(smp_processor_id(), vma->vm_mm->cpu_vm_mask)) { 297 if (cpu_isset(smp_processor_id(), vma->vm_mm->cpu_vm_mask)) {
298 unsigned long addr = user_addr & PAGE_MASK; 298 unsigned long addr = user_addr & PAGE_MASK;
299 __cpuc_flush_user_range(addr, addr + PAGE_SIZE, vma->vm_flags); 299 __cpuc_flush_user_range(addr, addr + PAGE_SIZE, vma->vm_flags);
300 } 300 }
301 } 301 }
302 302
303 static inline void 303 static inline void
304 flush_ptrace_access(struct vm_area_struct *vma, struct page *page, 304 flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
305 unsigned long uaddr, void *kaddr, 305 unsigned long uaddr, void *kaddr,
306 unsigned long len, int write) 306 unsigned long len, int write)
307 { 307 {
308 if (cpu_isset(smp_processor_id(), vma->vm_mm->cpu_vm_mask)) { 308 if (cpu_isset(smp_processor_id(), vma->vm_mm->cpu_vm_mask)) {
309 unsigned long addr = (unsigned long)kaddr; 309 unsigned long addr = (unsigned long)kaddr;
310 __cpuc_coherent_kern_range(addr, addr + len); 310 __cpuc_coherent_kern_range(addr, addr + len);
311 } 311 }
312 } 312 }
313 #else 313 #else
314 extern void flush_cache_mm(struct mm_struct *mm); 314 extern void flush_cache_mm(struct mm_struct *mm);
315 extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end); 315 extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
316 extern void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn); 316 extern void flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsigned long pfn);
317 extern void flush_ptrace_access(struct vm_area_struct *vma, struct page *page, 317 extern void flush_ptrace_access(struct vm_area_struct *vma, struct page *page,
318 unsigned long uaddr, void *kaddr, 318 unsigned long uaddr, void *kaddr,
319 unsigned long len, int write); 319 unsigned long len, int write);
320 #endif 320 #endif
321 321
322 #define flush_cache_dup_mm(mm) flush_cache_mm(mm)
323
322 /* 324 /*
323 * flush_cache_user_range is used when we want to ensure that the 325 * flush_cache_user_range is used when we want to ensure that the
324 * Harvard caches are synchronised for the user space address range. 326 * Harvard caches are synchronised for the user space address range.
325 * This is used for the ARM private sys_cacheflush system call. 327 * This is used for the ARM private sys_cacheflush system call.
326 */ 328 */
327 #define flush_cache_user_range(vma,start,end) \ 329 #define flush_cache_user_range(vma,start,end) \
328 __cpuc_coherent_user_range((start) & PAGE_MASK, PAGE_ALIGN(end)) 330 __cpuc_coherent_user_range((start) & PAGE_MASK, PAGE_ALIGN(end))
329 331
330 /* 332 /*
331 * Perform necessary cache operations to ensure that data previously 333 * Perform necessary cache operations to ensure that data previously
332 * stored within this range of addresses can be executed by the CPU. 334 * stored within this range of addresses can be executed by the CPU.
333 */ 335 */
334 #define flush_icache_range(s,e) __cpuc_coherent_kern_range(s,e) 336 #define flush_icache_range(s,e) __cpuc_coherent_kern_range(s,e)
335 337
336 /* 338 /*
337 * Perform necessary cache operations to ensure that the TLB will 339 * Perform necessary cache operations to ensure that the TLB will
338 * see data written in the specified area. 340 * see data written in the specified area.
339 */ 341 */
340 #define clean_dcache_area(start,size) cpu_dcache_clean_area(start, size) 342 #define clean_dcache_area(start,size) cpu_dcache_clean_area(start, size)
341 343
342 /* 344 /*
343 * flush_dcache_page is used when the kernel has written to the page 345 * flush_dcache_page is used when the kernel has written to the page
344 * cache page at virtual address page->virtual. 346 * cache page at virtual address page->virtual.
345 * 347 *
346 * If this page isn't mapped (ie, page_mapping == NULL), or it might 348 * If this page isn't mapped (ie, page_mapping == NULL), or it might
347 * have userspace mappings, then we _must_ always clean + invalidate 349 * have userspace mappings, then we _must_ always clean + invalidate
348 * the dcache entries associated with the kernel mapping. 350 * the dcache entries associated with the kernel mapping.
349 * 351 *
350 * Otherwise we can defer the operation, and clean the cache when we are 352 * Otherwise we can defer the operation, and clean the cache when we are
351 * about to change to user space. This is the same method as used on SPARC64. 353 * about to change to user space. This is the same method as used on SPARC64.
352 * See update_mmu_cache for the user space part. 354 * See update_mmu_cache for the user space part.
353 */ 355 */
354 extern void flush_dcache_page(struct page *); 356 extern void flush_dcache_page(struct page *);
355 357
356 #define flush_dcache_mmap_lock(mapping) \ 358 #define flush_dcache_mmap_lock(mapping) \
357 write_lock_irq(&(mapping)->tree_lock) 359 write_lock_irq(&(mapping)->tree_lock)
358 #define flush_dcache_mmap_unlock(mapping) \ 360 #define flush_dcache_mmap_unlock(mapping) \
359 write_unlock_irq(&(mapping)->tree_lock) 361 write_unlock_irq(&(mapping)->tree_lock)
360 362
361 #define flush_icache_user_range(vma,page,addr,len) \ 363 #define flush_icache_user_range(vma,page,addr,len) \
362 flush_dcache_page(page) 364 flush_dcache_page(page)
363 365
364 /* 366 /*
365 * We don't appear to need to do anything here. In fact, if we did, we'd 367 * We don't appear to need to do anything here. In fact, if we did, we'd
366 * duplicate cache flushing elsewhere performed by flush_dcache_page(). 368 * duplicate cache flushing elsewhere performed by flush_dcache_page().
367 */ 369 */
368 #define flush_icache_page(vma,page) do { } while (0) 370 #define flush_icache_page(vma,page) do { } while (0)
369 371
370 #define __cacheid_present(val) (val != read_cpuid(CPUID_ID)) 372 #define __cacheid_present(val) (val != read_cpuid(CPUID_ID))
371 #define __cacheid_vivt(val) ((val & (15 << 25)) != (14 << 25)) 373 #define __cacheid_vivt(val) ((val & (15 << 25)) != (14 << 25))
372 #define __cacheid_vipt(val) ((val & (15 << 25)) == (14 << 25)) 374 #define __cacheid_vipt(val) ((val & (15 << 25)) == (14 << 25))
373 #define __cacheid_vipt_nonaliasing(val) ((val & (15 << 25 | 1 << 23)) == (14 << 25)) 375 #define __cacheid_vipt_nonaliasing(val) ((val & (15 << 25 | 1 << 23)) == (14 << 25))
374 #define __cacheid_vipt_aliasing(val) ((val & (15 << 25 | 1 << 23)) == (14 << 25 | 1 << 23)) 376 #define __cacheid_vipt_aliasing(val) ((val & (15 << 25 | 1 << 23)) == (14 << 25 | 1 << 23))
375 377
376 #if defined(CONFIG_CPU_CACHE_VIVT) && !defined(CONFIG_CPU_CACHE_VIPT) 378 #if defined(CONFIG_CPU_CACHE_VIVT) && !defined(CONFIG_CPU_CACHE_VIPT)
377 379
378 #define cache_is_vivt() 1 380 #define cache_is_vivt() 1
379 #define cache_is_vipt() 0 381 #define cache_is_vipt() 0
380 #define cache_is_vipt_nonaliasing() 0 382 #define cache_is_vipt_nonaliasing() 0
381 #define cache_is_vipt_aliasing() 0 383 #define cache_is_vipt_aliasing() 0
382 384
383 #elif defined(CONFIG_CPU_CACHE_VIPT) 385 #elif defined(CONFIG_CPU_CACHE_VIPT)
384 386
385 #define cache_is_vivt() 0 387 #define cache_is_vivt() 0
386 #define cache_is_vipt() 1 388 #define cache_is_vipt() 1
387 #define cache_is_vipt_nonaliasing() \ 389 #define cache_is_vipt_nonaliasing() \
388 ({ \ 390 ({ \
389 unsigned int __val = read_cpuid(CPUID_CACHETYPE); \ 391 unsigned int __val = read_cpuid(CPUID_CACHETYPE); \
390 __cacheid_vipt_nonaliasing(__val); \ 392 __cacheid_vipt_nonaliasing(__val); \
391 }) 393 })
392 394
393 #define cache_is_vipt_aliasing() \ 395 #define cache_is_vipt_aliasing() \
394 ({ \ 396 ({ \
395 unsigned int __val = read_cpuid(CPUID_CACHETYPE); \ 397 unsigned int __val = read_cpuid(CPUID_CACHETYPE); \
396 __cacheid_vipt_aliasing(__val); \ 398 __cacheid_vipt_aliasing(__val); \
397 }) 399 })
398 400
399 #else 401 #else
400 402
401 #define cache_is_vivt() \ 403 #define cache_is_vivt() \
402 ({ \ 404 ({ \
403 unsigned int __val = read_cpuid(CPUID_CACHETYPE); \ 405 unsigned int __val = read_cpuid(CPUID_CACHETYPE); \
404 (!__cacheid_present(__val)) || __cacheid_vivt(__val); \ 406 (!__cacheid_present(__val)) || __cacheid_vivt(__val); \
405 }) 407 })
406 408
407 #define cache_is_vipt() \ 409 #define cache_is_vipt() \
408 ({ \ 410 ({ \
409 unsigned int __val = read_cpuid(CPUID_CACHETYPE); \ 411 unsigned int __val = read_cpuid(CPUID_CACHETYPE); \
410 __cacheid_present(__val) && __cacheid_vipt(__val); \ 412 __cacheid_present(__val) && __cacheid_vipt(__val); \
411 }) 413 })
412 414
413 #define cache_is_vipt_nonaliasing() \ 415 #define cache_is_vipt_nonaliasing() \
414 ({ \ 416 ({ \
415 unsigned int __val = read_cpuid(CPUID_CACHETYPE); \ 417 unsigned int __val = read_cpuid(CPUID_CACHETYPE); \
416 __cacheid_present(__val) && \ 418 __cacheid_present(__val) && \
417 __cacheid_vipt_nonaliasing(__val); \ 419 __cacheid_vipt_nonaliasing(__val); \
418 }) 420 })
419 421
420 #define cache_is_vipt_aliasing() \ 422 #define cache_is_vipt_aliasing() \
421 ({ \ 423 ({ \
422 unsigned int __val = read_cpuid(CPUID_CACHETYPE); \ 424 unsigned int __val = read_cpuid(CPUID_CACHETYPE); \
423 __cacheid_present(__val) && \ 425 __cacheid_present(__val) && \
424 __cacheid_vipt_aliasing(__val); \ 426 __cacheid_vipt_aliasing(__val); \
425 }) 427 })
426 428
427 #endif 429 #endif
428 430
429 #endif 431 #endif
430 432
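
[Editor's note] The only intended user of the new flush_cache_dup_mm() hook added above is the fork path. A minimal sketch of that call site, loosely modeled on dup_mmap() in kernel/fork.c (which this diff does not show), is given below; the body is illustrative only and presumably stands where fork previously called flush_cache_mm(oldmm).

    /*
     * Hedged sketch of the expected caller (kernel/fork.c style, not part
     * of this hunk): the parent's cache is written back once, before the
     * child's page tables are copied, so the child can only ever observe
     * clean lines through its new mappings.
     */
    static int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm)
    {
            int retval = 0;

            flush_cache_dup_mm(oldmm);      /* new hook; a no-op on physically tagged caches */

            /* ... copy vmas and page tables from oldmm into mm ... */

            return retval;
    }
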
include/asm-arm26/cacheflush.h
1 /* 1 /*
2 * linux/include/asm-arm/cacheflush.h 2 * linux/include/asm-arm/cacheflush.h
3 * 3 *
4 * Copyright (C) 2000-2002 Russell King 4 * Copyright (C) 2000-2002 Russell King
5 * Copyright (C) 2003 Ian Molton 5 * Copyright (C) 2003 Ian Molton
6 * 6 *
7 * This program is free software; you can redistribute it and/or modify 7 * This program is free software; you can redistribute it and/or modify
8 * it under the terms of the GNU General Public License version 2 as 8 * it under the terms of the GNU General Public License version 2 as
9 * published by the Free Software Foundation. 9 * published by the Free Software Foundation.
10 * 10 *
11 * ARM26 cache 'functions' 11 * ARM26 cache 'functions'
12 * 12 *
13 */ 13 */
14 14
15 #ifndef _ASMARM_CACHEFLUSH_H 15 #ifndef _ASMARM_CACHEFLUSH_H
16 #define _ASMARM_CACHEFLUSH_H 16 #define _ASMARM_CACHEFLUSH_H
17 17
18 #if 1 //FIXME - BAD INCLUDES!!! 18 #if 1 //FIXME - BAD INCLUDES!!!
19 #include <linux/sched.h> 19 #include <linux/sched.h>
20 #include <linux/mm.h> 20 #include <linux/mm.h>
21 #endif 21 #endif
22 22
23 #define flush_cache_all() do { } while (0) 23 #define flush_cache_all() do { } while (0)
24 #define flush_cache_mm(mm) do { } while (0) 24 #define flush_cache_mm(mm) do { } while (0)
25 #define flush_cache_dup_mm(mm) do { } while (0)
25 #define flush_cache_range(vma,start,end) do { } while (0) 26 #define flush_cache_range(vma,start,end) do { } while (0)
26 #define flush_cache_page(vma,vmaddr,pfn) do { } while (0) 27 #define flush_cache_page(vma,vmaddr,pfn) do { } while (0)
27 #define flush_cache_vmap(start, end) do { } while (0) 28 #define flush_cache_vmap(start, end) do { } while (0)
28 #define flush_cache_vunmap(start, end) do { } while (0) 29 #define flush_cache_vunmap(start, end) do { } while (0)
29 30
30 #define invalidate_dcache_range(start,end) do { } while (0) 31 #define invalidate_dcache_range(start,end) do { } while (0)
31 #define clean_dcache_range(start,end) do { } while (0) 32 #define clean_dcache_range(start,end) do { } while (0)
32 #define flush_dcache_range(start,end) do { } while (0) 33 #define flush_dcache_range(start,end) do { } while (0)
33 #define flush_dcache_page(page) do { } while (0) 34 #define flush_dcache_page(page) do { } while (0)
34 #define flush_dcache_mmap_lock(mapping) do { } while (0) 35 #define flush_dcache_mmap_lock(mapping) do { } while (0)
35 #define flush_dcache_mmap_unlock(mapping) do { } while (0) 36 #define flush_dcache_mmap_unlock(mapping) do { } while (0)
36 #define clean_dcache_entry(_s) do { } while (0) 37 #define clean_dcache_entry(_s) do { } while (0)
37 #define clean_cache_entry(_start) do { } while (0) 38 #define clean_cache_entry(_start) do { } while (0)
38 39
39 #define flush_icache_user_range(start,end, bob, fred) do { } while (0) 40 #define flush_icache_user_range(start,end, bob, fred) do { } while (0)
40 #define flush_icache_range(start,end) do { } while (0) 41 #define flush_icache_range(start,end) do { } while (0)
41 #define flush_icache_page(vma,page) do { } while (0) 42 #define flush_icache_page(vma,page) do { } while (0)
42 43
43 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ 44 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
44 memcpy(dst, src, len) 45 memcpy(dst, src, len)
45 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \ 46 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
46 memcpy(dst, src, len) 47 memcpy(dst, src, len)
47 48
48 /* DAG: ARM3 will flush cache on MEMC updates anyway? so don't bother */ 49 /* DAG: ARM3 will flush cache on MEMC updates anyway? so don't bother */
49 /* IM : Yes, it will, but only if setup to do so (we do this). */ 50 /* IM : Yes, it will, but only if setup to do so (we do this). */
50 #define clean_cache_area(_start,_size) do { } while (0) 51 #define clean_cache_area(_start,_size) do { } while (0)
51 52
52 #endif 53 #endif
53 54
include/asm-avr32/cacheflush.h
1 /* 1 /*
2 * Copyright (C) 2004-2006 Atmel Corporation 2 * Copyright (C) 2004-2006 Atmel Corporation
3 * 3 *
4 * This program is free software; you can redistribute it and/or modify 4 * This program is free software; you can redistribute it and/or modify
5 * it under the terms of the GNU General Public License version 2 as 5 * it under the terms of the GNU General Public License version 2 as
6 * published by the Free Software Foundation. 6 * published by the Free Software Foundation.
7 */ 7 */
8 #ifndef __ASM_AVR32_CACHEFLUSH_H 8 #ifndef __ASM_AVR32_CACHEFLUSH_H
9 #define __ASM_AVR32_CACHEFLUSH_H 9 #define __ASM_AVR32_CACHEFLUSH_H
10 10
11 /* Keep includes the same across arches. */ 11 /* Keep includes the same across arches. */
12 #include <linux/mm.h> 12 #include <linux/mm.h>
13 13
14 #define CACHE_OP_ICACHE_INVALIDATE 0x01 14 #define CACHE_OP_ICACHE_INVALIDATE 0x01
15 #define CACHE_OP_DCACHE_INVALIDATE 0x0b 15 #define CACHE_OP_DCACHE_INVALIDATE 0x0b
16 #define CACHE_OP_DCACHE_CLEAN 0x0c 16 #define CACHE_OP_DCACHE_CLEAN 0x0c
17 #define CACHE_OP_DCACHE_CLEAN_INVAL 0x0d 17 #define CACHE_OP_DCACHE_CLEAN_INVAL 0x0d
18 18
19 /* 19 /*
20 * Invalidate any cacheline containing virtual address vaddr without 20 * Invalidate any cacheline containing virtual address vaddr without
21 * writing anything back to memory. 21 * writing anything back to memory.
22 * 22 *
23 * Note that this function may corrupt unrelated data structures when 23 * Note that this function may corrupt unrelated data structures when
24 * applied on buffers that are not cacheline aligned in both ends. 24 * applied on buffers that are not cacheline aligned in both ends.
25 */ 25 */
26 static inline void invalidate_dcache_line(void *vaddr) 26 static inline void invalidate_dcache_line(void *vaddr)
27 { 27 {
28 asm volatile("cache %0[0], %1" 28 asm volatile("cache %0[0], %1"
29 : 29 :
30 : "r"(vaddr), "n"(CACHE_OP_DCACHE_INVALIDATE) 30 : "r"(vaddr), "n"(CACHE_OP_DCACHE_INVALIDATE)
31 : "memory"); 31 : "memory");
32 } 32 }
33 33
34 /* 34 /*
35 * Make sure any cacheline containing virtual address vaddr is written 35 * Make sure any cacheline containing virtual address vaddr is written
36 * to memory. 36 * to memory.
37 */ 37 */
38 static inline void clean_dcache_line(void *vaddr) 38 static inline void clean_dcache_line(void *vaddr)
39 { 39 {
40 asm volatile("cache %0[0], %1" 40 asm volatile("cache %0[0], %1"
41 : 41 :
42 : "r"(vaddr), "n"(CACHE_OP_DCACHE_CLEAN) 42 : "r"(vaddr), "n"(CACHE_OP_DCACHE_CLEAN)
43 : "memory"); 43 : "memory");
44 } 44 }
45 45
46 /* 46 /*
47 * Make sure any cacheline containing virtual address vaddr is written 47 * Make sure any cacheline containing virtual address vaddr is written
48 * to memory and then invalidate it. 48 * to memory and then invalidate it.
49 */ 49 */
50 static inline void flush_dcache_line(void *vaddr) 50 static inline void flush_dcache_line(void *vaddr)
51 { 51 {
52 asm volatile("cache %0[0], %1" 52 asm volatile("cache %0[0], %1"
53 : 53 :
54 : "r"(vaddr), "n"(CACHE_OP_DCACHE_CLEAN_INVAL) 54 : "r"(vaddr), "n"(CACHE_OP_DCACHE_CLEAN_INVAL)
55 : "memory"); 55 : "memory");
56 } 56 }
57 57
58 /* 58 /*
59 * Invalidate any instruction cacheline containing virtual address 59 * Invalidate any instruction cacheline containing virtual address
60 * vaddr. 60 * vaddr.
61 */ 61 */
62 static inline void invalidate_icache_line(void *vaddr) 62 static inline void invalidate_icache_line(void *vaddr)
63 { 63 {
64 asm volatile("cache %0[0], %1" 64 asm volatile("cache %0[0], %1"
65 : 65 :
66 : "r"(vaddr), "n"(CACHE_OP_ICACHE_INVALIDATE) 66 : "r"(vaddr), "n"(CACHE_OP_ICACHE_INVALIDATE)
67 : "memory"); 67 : "memory");
68 } 68 }
69 69
70 /* 70 /*
71 * Applies the above functions on all lines that are touched by the 71 * Applies the above functions on all lines that are touched by the
72 * specified virtual address range. 72 * specified virtual address range.
73 */ 73 */
74 void invalidate_dcache_region(void *start, size_t len); 74 void invalidate_dcache_region(void *start, size_t len);
75 void clean_dcache_region(void *start, size_t len); 75 void clean_dcache_region(void *start, size_t len);
76 void flush_dcache_region(void *start, size_t len); 76 void flush_dcache_region(void *start, size_t len);
77 void invalidate_icache_region(void *start, size_t len); 77 void invalidate_icache_region(void *start, size_t len);
78 78
79 /* 79 /*
80 * Make sure any pending writes are completed before continuing. 80 * Make sure any pending writes are completed before continuing.
81 */ 81 */
82 #define flush_write_buffer() asm volatile("sync 0" : : : "memory") 82 #define flush_write_buffer() asm volatile("sync 0" : : : "memory")
83 83
84 /* 84 /*
85 * The following functions are called when a virtual mapping changes. 85 * The following functions are called when a virtual mapping changes.
86 * We do not need to flush anything in this case. 86 * We do not need to flush anything in this case.
87 */ 87 */
88 #define flush_cache_all() do { } while (0) 88 #define flush_cache_all() do { } while (0)
89 #define flush_cache_mm(mm) do { } while (0) 89 #define flush_cache_mm(mm) do { } while (0)
90 #define flush_cache_dup_mm(mm) do { } while (0)
90 #define flush_cache_range(vma, start, end) do { } while (0) 91 #define flush_cache_range(vma, start, end) do { } while (0)
91 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0) 92 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
92 #define flush_cache_vmap(start, end) do { } while (0) 93 #define flush_cache_vmap(start, end) do { } while (0)
93 #define flush_cache_vunmap(start, end) do { } while (0) 94 #define flush_cache_vunmap(start, end) do { } while (0)
94 95
95 /* 96 /*
96 * I think we need to implement this one to be able to reliably 97 * I think we need to implement this one to be able to reliably
97 * execute pages from RAMDISK. However, if we implement the 98 * execute pages from RAMDISK. However, if we implement the
98 * flush_dcache_*() functions, it might not be needed anymore. 99 * flush_dcache_*() functions, it might not be needed anymore.
99 * 100 *
100 * #define flush_icache_page(vma, page) do { } while (0) 101 * #define flush_icache_page(vma, page) do { } while (0)
101 */ 102 */
102 extern void flush_icache_page(struct vm_area_struct *vma, struct page *page); 103 extern void flush_icache_page(struct vm_area_struct *vma, struct page *page);
103 104
104 /* 105 /*
105 * These are (I think) related to D-cache aliasing. We might need to 106 * These are (I think) related to D-cache aliasing. We might need to
106 * do something here, but only for certain configurations. No such 107 * do something here, but only for certain configurations. No such
107 * configurations exist at this time. 108 * configurations exist at this time.
108 */ 109 */
109 #define flush_dcache_page(page) do { } while (0) 110 #define flush_dcache_page(page) do { } while (0)
110 #define flush_dcache_mmap_lock(page) do { } while (0) 111 #define flush_dcache_mmap_lock(page) do { } while (0)
111 #define flush_dcache_mmap_unlock(page) do { } while (0) 112 #define flush_dcache_mmap_unlock(page) do { } while (0)
112 113
113 /* 114 /*
114 * These are for I/D cache coherency. In this case, we do need to 115 * These are for I/D cache coherency. In this case, we do need to
115 * flush with all configurations. 116 * flush with all configurations.
116 */ 117 */
117 extern void flush_icache_range(unsigned long start, unsigned long end); 118 extern void flush_icache_range(unsigned long start, unsigned long end);
118 extern void flush_icache_user_range(struct vm_area_struct *vma, 119 extern void flush_icache_user_range(struct vm_area_struct *vma,
119 struct page *page, 120 struct page *page,
120 unsigned long addr, int len); 121 unsigned long addr, int len);
121 122
122 #define copy_to_user_page(vma, page, vaddr, dst, src, len) do { \ 123 #define copy_to_user_page(vma, page, vaddr, dst, src, len) do { \
123 memcpy(dst, src, len); \ 124 memcpy(dst, src, len); \
124 flush_icache_user_range(vma, page, vaddr, len); \ 125 flush_icache_user_range(vma, page, vaddr, len); \
125 } while(0) 126 } while(0)
126 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \ 127 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
127 memcpy(dst, src, len) 128 memcpy(dst, src, len)
128 129
129 #endif /* __ASM_AVR32_CACHEFLUSH_H */ 130 #endif /* __ASM_AVR32_CACHEFLUSH_H */
130 131
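
[Editor's note] On AVR32 the per-line helpers above are the primitives; the *_region() variants declared here are implemented out of line. A hedged sketch of how such a routine could be built from them follows, assuming L1_CACHE_BYTES is the D-cache line size; this is not the in-tree implementation.

    /*
     * Hedged sketch, not the in-tree code: walk the range one cache line
     * at a time, clean each line, then drain the write buffer so the data
     * is visible to memory before returning.
     */
    void clean_dcache_region(void *start, size_t len)
    {
            unsigned long begin = (unsigned long)start & ~(L1_CACHE_BYTES - 1);
            unsigned long end = (unsigned long)start + len;
            unsigned long p;

            for (p = begin; p < end; p += L1_CACHE_BYTES)
                    clean_dcache_line((void *)p);

            flush_write_buffer();
    }
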
include/asm-cris/cacheflush.h
1 #ifndef _CRIS_CACHEFLUSH_H 1 #ifndef _CRIS_CACHEFLUSH_H
2 #define _CRIS_CACHEFLUSH_H 2 #define _CRIS_CACHEFLUSH_H
3 3
4 /* Keep includes the same across arches. */ 4 /* Keep includes the same across arches. */
5 #include <linux/mm.h> 5 #include <linux/mm.h>
6 6
7 /* The cache doesn't need to be flushed when TLB entries change because 7 /* The cache doesn't need to be flushed when TLB entries change because
8 * the cache is mapped to physical memory, not virtual memory 8 * the cache is mapped to physical memory, not virtual memory
9 */ 9 */
10 #define flush_cache_all() do { } while (0) 10 #define flush_cache_all() do { } while (0)
11 #define flush_cache_mm(mm) do { } while (0) 11 #define flush_cache_mm(mm) do { } while (0)
12 #define flush_cache_dup_mm(mm) do { } while (0)
12 #define flush_cache_range(vma, start, end) do { } while (0) 13 #define flush_cache_range(vma, start, end) do { } while (0)
13 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0) 14 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
14 #define flush_dcache_page(page) do { } while (0) 15 #define flush_dcache_page(page) do { } while (0)
15 #define flush_dcache_mmap_lock(mapping) do { } while (0) 16 #define flush_dcache_mmap_lock(mapping) do { } while (0)
16 #define flush_dcache_mmap_unlock(mapping) do { } while (0) 17 #define flush_dcache_mmap_unlock(mapping) do { } while (0)
17 #define flush_icache_range(start, end) do { } while (0) 18 #define flush_icache_range(start, end) do { } while (0)
18 #define flush_icache_page(vma,pg) do { } while (0) 19 #define flush_icache_page(vma,pg) do { } while (0)
19 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0) 20 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
20 #define flush_cache_vmap(start, end) do { } while (0) 21 #define flush_cache_vmap(start, end) do { } while (0)
21 #define flush_cache_vunmap(start, end) do { } while (0) 22 #define flush_cache_vunmap(start, end) do { } while (0)
22 23
23 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ 24 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
24 memcpy(dst, src, len) 25 memcpy(dst, src, len)
25 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \ 26 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
26 memcpy(dst, src, len) 27 memcpy(dst, src, len)
27 28
28 void global_flush_tlb(void); 29 void global_flush_tlb(void);
29 int change_page_attr(struct page *page, int numpages, pgprot_t prot); 30 int change_page_attr(struct page *page, int numpages, pgprot_t prot);
30 31
31 #endif /* _CRIS_CACHEFLUSH_H */ 32 #endif /* _CRIS_CACHEFLUSH_H */
32 33
include/asm-frv/cacheflush.h
1 /* cacheflush.h: FRV cache flushing routines 1 /* cacheflush.h: FRV cache flushing routines
2 * 2 *
3 * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved. 3 * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
4 * Written by David Howells (dhowells@redhat.com) 4 * Written by David Howells (dhowells@redhat.com)
5 * 5 *
6 * This program is free software; you can redistribute it and/or 6 * This program is free software; you can redistribute it and/or
7 * modify it under the terms of the GNU General Public License 7 * modify it under the terms of the GNU General Public License
8 * as published by the Free Software Foundation; either version 8 * as published by the Free Software Foundation; either version
9 * 2 of the License, or (at your option) any later version. 9 * 2 of the License, or (at your option) any later version.
10 */ 10 */
11 11
12 #ifndef _ASM_CACHEFLUSH_H 12 #ifndef _ASM_CACHEFLUSH_H
13 #define _ASM_CACHEFLUSH_H 13 #define _ASM_CACHEFLUSH_H
14 14
15 /* Keep includes the same across arches. */ 15 /* Keep includes the same across arches. */
16 #include <linux/mm.h> 16 #include <linux/mm.h>
17 17
18 /* 18 /*
19 * virtually-indexed cache management (our cache is physically indexed) 19 * virtually-indexed cache management (our cache is physically indexed)
20 */ 20 */
21 #define flush_cache_all() do {} while(0) 21 #define flush_cache_all() do {} while(0)
22 #define flush_cache_mm(mm) do {} while(0) 22 #define flush_cache_mm(mm) do {} while(0)
23 #define flush_cache_dup_mm(mm) do {} while(0)
23 #define flush_cache_range(mm, start, end) do {} while(0) 24 #define flush_cache_range(mm, start, end) do {} while(0)
24 #define flush_cache_page(vma, vmaddr, pfn) do {} while(0) 25 #define flush_cache_page(vma, vmaddr, pfn) do {} while(0)
25 #define flush_cache_vmap(start, end) do {} while(0) 26 #define flush_cache_vmap(start, end) do {} while(0)
26 #define flush_cache_vunmap(start, end) do {} while(0) 27 #define flush_cache_vunmap(start, end) do {} while(0)
27 #define flush_dcache_mmap_lock(mapping) do {} while(0) 28 #define flush_dcache_mmap_lock(mapping) do {} while(0)
28 #define flush_dcache_mmap_unlock(mapping) do {} while(0) 29 #define flush_dcache_mmap_unlock(mapping) do {} while(0)
29 30
30 /* 31 /*
31 * physically-indexed cache managment 32 * physically-indexed cache managment
32 * - see arch/frv/lib/cache.S 33 * - see arch/frv/lib/cache.S
33 */ 34 */
34 extern void frv_dcache_writeback(unsigned long start, unsigned long size); 35 extern void frv_dcache_writeback(unsigned long start, unsigned long size);
35 extern void frv_cache_invalidate(unsigned long start, unsigned long size); 36 extern void frv_cache_invalidate(unsigned long start, unsigned long size);
36 extern void frv_icache_invalidate(unsigned long start, unsigned long size); 37 extern void frv_icache_invalidate(unsigned long start, unsigned long size);
37 extern void frv_cache_wback_inv(unsigned long start, unsigned long size); 38 extern void frv_cache_wback_inv(unsigned long start, unsigned long size);
38 39
39 static inline void __flush_cache_all(void) 40 static inline void __flush_cache_all(void)
40 { 41 {
41 asm volatile(" dcef @(gr0,gr0),#1 \n" 42 asm volatile(" dcef @(gr0,gr0),#1 \n"
42 " icei @(gr0,gr0),#1 \n" 43 " icei @(gr0,gr0),#1 \n"
43 " membar \n" 44 " membar \n"
44 : : : "memory" 45 : : : "memory"
45 ); 46 );
46 } 47 }
47 48
48 /* dcache/icache coherency... */ 49 /* dcache/icache coherency... */
49 #ifdef CONFIG_MMU 50 #ifdef CONFIG_MMU
50 extern void flush_dcache_page(struct page *page); 51 extern void flush_dcache_page(struct page *page);
51 #else 52 #else
52 static inline void flush_dcache_page(struct page *page) 53 static inline void flush_dcache_page(struct page *page)
53 { 54 {
54 unsigned long addr = page_to_phys(page); 55 unsigned long addr = page_to_phys(page);
55 frv_dcache_writeback(addr, addr + PAGE_SIZE); 56 frv_dcache_writeback(addr, addr + PAGE_SIZE);
56 } 57 }
57 #endif 58 #endif
58 59
59 static inline void flush_page_to_ram(struct page *page) 60 static inline void flush_page_to_ram(struct page *page)
60 { 61 {
61 flush_dcache_page(page); 62 flush_dcache_page(page);
62 } 63 }
63 64
64 static inline void flush_icache(void) 65 static inline void flush_icache(void)
65 { 66 {
66 __flush_cache_all(); 67 __flush_cache_all();
67 } 68 }
68 69
69 static inline void flush_icache_range(unsigned long start, unsigned long end) 70 static inline void flush_icache_range(unsigned long start, unsigned long end)
70 { 71 {
71 frv_cache_wback_inv(start, end); 72 frv_cache_wback_inv(start, end);
72 } 73 }
73 74
74 #ifdef CONFIG_MMU 75 #ifdef CONFIG_MMU
75 extern void flush_icache_user_range(struct vm_area_struct *vma, struct page *page, 76 extern void flush_icache_user_range(struct vm_area_struct *vma, struct page *page,
76 unsigned long start, unsigned long len); 77 unsigned long start, unsigned long len);
77 #else 78 #else
78 static inline void flush_icache_user_range(struct vm_area_struct *vma, struct page *page, 79 static inline void flush_icache_user_range(struct vm_area_struct *vma, struct page *page,
79 unsigned long start, unsigned long len) 80 unsigned long start, unsigned long len)
80 { 81 {
81 frv_cache_wback_inv(start, start + len); 82 frv_cache_wback_inv(start, start + len);
82 } 83 }
83 #endif 84 #endif
84 85
85 static inline void flush_icache_page(struct vm_area_struct *vma, struct page *page) 86 static inline void flush_icache_page(struct vm_area_struct *vma, struct page *page)
86 { 87 {
87 flush_icache_user_range(vma, page, page_to_phys(page), PAGE_SIZE); 88 flush_icache_user_range(vma, page, page_to_phys(page), PAGE_SIZE);
88 } 89 }
89 90
90 /* 91 /*
91 * permit ptrace to access another process's address space through the icache 92 * permit ptrace to access another process's address space through the icache
92 * and the dcache 93 * and the dcache
93 */ 94 */
94 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ 95 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
95 do { \ 96 do { \
96 memcpy((dst), (src), (len)); \ 97 memcpy((dst), (src), (len)); \
97 flush_icache_user_range((vma), (page), (vaddr), (len)); \ 98 flush_icache_user_range((vma), (page), (vaddr), (len)); \
98 } while(0) 99 } while(0)
99 100
100 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \ 101 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
101 memcpy((dst), (src), (len)) 102 memcpy((dst), (src), (len))
102 103
103 #endif /* _ASM_CACHEFLUSH_H */ 104 #endif /* _ASM_CACHEFLUSH_H */
104 105
include/asm-h8300/cacheflush.h
1 /* 1 /*
2 * (C) Copyright 2002, Yoshinori Sato <ysato@users.sourceforge.jp> 2 * (C) Copyright 2002, Yoshinori Sato <ysato@users.sourceforge.jp>
3 */ 3 */
4 4
5 #ifndef _ASM_H8300_CACHEFLUSH_H 5 #ifndef _ASM_H8300_CACHEFLUSH_H
6 #define _AMS_H8300_CACHEFLUSH_H 6 #define _AMS_H8300_CACHEFLUSH_H
7 7
8 /* 8 /*
9 * Cache handling functions 9 * Cache handling functions
10 * No Cache memory all dummy functions 10 * No Cache memory all dummy functions
11 */ 11 */
12 12
13 #define flush_cache_all() 13 #define flush_cache_all()
14 #define flush_cache_mm(mm) 14 #define flush_cache_mm(mm)
15 #define flush_cache_dup_mm(mm) do { } while (0)
15 #define flush_cache_range(vma,a,b) 16 #define flush_cache_range(vma,a,b)
16 #define flush_cache_page(vma,p,pfn) 17 #define flush_cache_page(vma,p,pfn)
17 #define flush_dcache_page(page) 18 #define flush_dcache_page(page)
18 #define flush_dcache_mmap_lock(mapping) 19 #define flush_dcache_mmap_lock(mapping)
19 #define flush_dcache_mmap_unlock(mapping) 20 #define flush_dcache_mmap_unlock(mapping)
20 #define flush_icache() 21 #define flush_icache()
21 #define flush_icache_page(vma,page) 22 #define flush_icache_page(vma,page)
22 #define flush_icache_range(start,len) 23 #define flush_icache_range(start,len)
23 #define flush_cache_vmap(start, end) 24 #define flush_cache_vmap(start, end)
24 #define flush_cache_vunmap(start, end) 25 #define flush_cache_vunmap(start, end)
25 #define cache_push_v(vaddr,len) 26 #define cache_push_v(vaddr,len)
26 #define cache_push(paddr,len) 27 #define cache_push(paddr,len)
27 #define cache_clear(paddr,len) 28 #define cache_clear(paddr,len)
28 29
29 #define flush_dcache_range(a,b) 30 #define flush_dcache_range(a,b)
30 31
31 #define flush_icache_user_range(vma,page,addr,len) 32 #define flush_icache_user_range(vma,page,addr,len)
32 33
33 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ 34 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
34 memcpy(dst, src, len) 35 memcpy(dst, src, len)
35 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \ 36 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
36 memcpy(dst, src, len) 37 memcpy(dst, src, len)
37 38
38 #endif /* _ASM_H8300_CACHEFLUSH_H */ 39 #endif /* _ASM_H8300_CACHEFLUSH_H */
39 40
include/asm-i386/cacheflush.h
1 #ifndef _I386_CACHEFLUSH_H 1 #ifndef _I386_CACHEFLUSH_H
2 #define _I386_CACHEFLUSH_H 2 #define _I386_CACHEFLUSH_H
3 3
4 /* Keep includes the same across arches. */ 4 /* Keep includes the same across arches. */
5 #include <linux/mm.h> 5 #include <linux/mm.h>
6 6
7 /* Caches aren't brain-dead on the intel. */ 7 /* Caches aren't brain-dead on the intel. */
8 #define flush_cache_all() do { } while (0) 8 #define flush_cache_all() do { } while (0)
9 #define flush_cache_mm(mm) do { } while (0) 9 #define flush_cache_mm(mm) do { } while (0)
10 #define flush_cache_dup_mm(mm) do { } while (0)
10 #define flush_cache_range(vma, start, end) do { } while (0) 11 #define flush_cache_range(vma, start, end) do { } while (0)
11 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0) 12 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
12 #define flush_dcache_page(page) do { } while (0) 13 #define flush_dcache_page(page) do { } while (0)
13 #define flush_dcache_mmap_lock(mapping) do { } while (0) 14 #define flush_dcache_mmap_lock(mapping) do { } while (0)
14 #define flush_dcache_mmap_unlock(mapping) do { } while (0) 15 #define flush_dcache_mmap_unlock(mapping) do { } while (0)
15 #define flush_icache_range(start, end) do { } while (0) 16 #define flush_icache_range(start, end) do { } while (0)
16 #define flush_icache_page(vma,pg) do { } while (0) 17 #define flush_icache_page(vma,pg) do { } while (0)
17 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0) 18 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
18 #define flush_cache_vmap(start, end) do { } while (0) 19 #define flush_cache_vmap(start, end) do { } while (0)
19 #define flush_cache_vunmap(start, end) do { } while (0) 20 #define flush_cache_vunmap(start, end) do { } while (0)
20 21
21 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ 22 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
22 memcpy(dst, src, len) 23 memcpy(dst, src, len)
23 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \ 24 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
24 memcpy(dst, src, len) 25 memcpy(dst, src, len)
25 26
26 void global_flush_tlb(void); 27 void global_flush_tlb(void);
27 int change_page_attr(struct page *page, int numpages, pgprot_t prot); 28 int change_page_attr(struct page *page, int numpages, pgprot_t prot);
28 29
29 #ifdef CONFIG_DEBUG_PAGEALLOC 30 #ifdef CONFIG_DEBUG_PAGEALLOC
30 /* internal debugging function */ 31 /* internal debugging function */
31 void kernel_map_pages(struct page *page, int numpages, int enable); 32 void kernel_map_pages(struct page *page, int numpages, int enable);
32 #endif 33 #endif
33 34
34 #ifdef CONFIG_DEBUG_RODATA 35 #ifdef CONFIG_DEBUG_RODATA
35 void mark_rodata_ro(void); 36 void mark_rodata_ro(void);
36 #endif 37 #endif
37 38
38 #endif /* _I386_CACHEFLUSH_H */ 39 #endif /* _I386_CACHEFLUSH_H */
39 40
include/asm-ia64/cacheflush.h
1 #ifndef _ASM_IA64_CACHEFLUSH_H 1 #ifndef _ASM_IA64_CACHEFLUSH_H
2 #define _ASM_IA64_CACHEFLUSH_H 2 #define _ASM_IA64_CACHEFLUSH_H
3 3
4 /* 4 /*
5 * Copyright (C) 2002 Hewlett-Packard Co 5 * Copyright (C) 2002 Hewlett-Packard Co
6 * David Mosberger-Tang <davidm@hpl.hp.com> 6 * David Mosberger-Tang <davidm@hpl.hp.com>
7 */ 7 */
8 8
9 #include <linux/page-flags.h> 9 #include <linux/page-flags.h>
10 10
11 #include <asm/bitops.h> 11 #include <asm/bitops.h>
12 #include <asm/page.h> 12 #include <asm/page.h>
13 13
14 /* 14 /*
15 * Cache flushing routines. This is the kind of stuff that can be very expensive, so try 15 * Cache flushing routines. This is the kind of stuff that can be very expensive, so try
16 * to avoid them whenever possible. 16 * to avoid them whenever possible.
17 */ 17 */
18 18
19 #define flush_cache_all() do { } while (0) 19 #define flush_cache_all() do { } while (0)
20 #define flush_cache_mm(mm) do { } while (0) 20 #define flush_cache_mm(mm) do { } while (0)
21 #define flush_cache_dup_mm(mm) do { } while (0)
21 #define flush_cache_range(vma, start, end) do { } while (0) 22 #define flush_cache_range(vma, start, end) do { } while (0)
22 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0) 23 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
23 #define flush_icache_page(vma,page) do { } while (0) 24 #define flush_icache_page(vma,page) do { } while (0)
24 #define flush_cache_vmap(start, end) do { } while (0) 25 #define flush_cache_vmap(start, end) do { } while (0)
25 #define flush_cache_vunmap(start, end) do { } while (0) 26 #define flush_cache_vunmap(start, end) do { } while (0)
26 27
27 #define flush_dcache_page(page) \ 28 #define flush_dcache_page(page) \
28 do { \ 29 do { \
29 clear_bit(PG_arch_1, &(page)->flags); \ 30 clear_bit(PG_arch_1, &(page)->flags); \
30 } while (0) 31 } while (0)
31 32
32 #define flush_dcache_mmap_lock(mapping) do { } while (0) 33 #define flush_dcache_mmap_lock(mapping) do { } while (0)
33 #define flush_dcache_mmap_unlock(mapping) do { } while (0) 34 #define flush_dcache_mmap_unlock(mapping) do { } while (0)
34 35
35 extern void flush_icache_range (unsigned long start, unsigned long end); 36 extern void flush_icache_range (unsigned long start, unsigned long end);
36 37
37 #define flush_icache_user_range(vma, page, user_addr, len) \ 38 #define flush_icache_user_range(vma, page, user_addr, len) \
38 do { \ 39 do { \
39 unsigned long _addr = (unsigned long) page_address(page) + ((user_addr) & ~PAGE_MASK); \ 40 unsigned long _addr = (unsigned long) page_address(page) + ((user_addr) & ~PAGE_MASK); \
40 flush_icache_range(_addr, _addr + (len)); \ 41 flush_icache_range(_addr, _addr + (len)); \
41 } while (0) 42 } while (0)
42 43
43 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ 44 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
44 do { memcpy(dst, src, len); \ 45 do { memcpy(dst, src, len); \
45 flush_icache_user_range(vma, page, vaddr, len); \ 46 flush_icache_user_range(vma, page, vaddr, len); \
46 } while (0) 47 } while (0)
47 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \ 48 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
48 memcpy(dst, src, len) 49 memcpy(dst, src, len)
49 50
50 #endif /* _ASM_IA64_CACHEFLUSH_H */ 51 #endif /* _ASM_IA64_CACHEFLUSH_H */
51 52
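
[Editor's note] flush_dcache_page() on ia64 only clears PG_arch_1, marking the page as possibly having a stale I-cache image; the actual flush is deferred until the page is about to be mapped with execute permission. The sketch below shows the shape of that deferred check; the function name and the executable flag are illustrative, not the exact ia64 code.

    /*
     * Hedged sketch of the deferred side of the PG_arch_1 protocol: when a
     * page is about to be mapped executable, synchronise the I-cache only
     * if flush_dcache_page() has cleared the bit since the last sync.
     */
    static void lazy_icache_sync(struct page *page, int executable)
    {
            if (!executable)
                    return;

            if (!test_and_set_bit(PG_arch_1, &page->flags)) {
                    unsigned long addr = (unsigned long)page_address(page);

                    flush_icache_range(addr, addr + PAGE_SIZE);
            }
    }
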
include/asm-m32r/cacheflush.h
1 #ifndef _ASM_M32R_CACHEFLUSH_H 1 #ifndef _ASM_M32R_CACHEFLUSH_H
2 #define _ASM_M32R_CACHEFLUSH_H 2 #define _ASM_M32R_CACHEFLUSH_H
3 3
4 #include <linux/mm.h> 4 #include <linux/mm.h>
5 5
6 extern void _flush_cache_all(void); 6 extern void _flush_cache_all(void);
7 extern void _flush_cache_copyback_all(void); 7 extern void _flush_cache_copyback_all(void);
8 8
9 #if defined(CONFIG_CHIP_M32700) || defined(CONFIG_CHIP_OPSP) || defined(CONFIG_CHIP_M32104) 9 #if defined(CONFIG_CHIP_M32700) || defined(CONFIG_CHIP_OPSP) || defined(CONFIG_CHIP_M32104)
10 #define flush_cache_all() do { } while (0) 10 #define flush_cache_all() do { } while (0)
11 #define flush_cache_mm(mm) do { } while (0) 11 #define flush_cache_mm(mm) do { } while (0)
12 #define flush_cache_dup_mm(mm) do { } while (0)
12 #define flush_cache_range(vma, start, end) do { } while (0) 13 #define flush_cache_range(vma, start, end) do { } while (0)
13 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0) 14 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
14 #define flush_dcache_page(page) do { } while (0) 15 #define flush_dcache_page(page) do { } while (0)
15 #define flush_dcache_mmap_lock(mapping) do { } while (0) 16 #define flush_dcache_mmap_lock(mapping) do { } while (0)
16 #define flush_dcache_mmap_unlock(mapping) do { } while (0) 17 #define flush_dcache_mmap_unlock(mapping) do { } while (0)
17 #ifndef CONFIG_SMP 18 #ifndef CONFIG_SMP
18 #define flush_icache_range(start, end) _flush_cache_copyback_all() 19 #define flush_icache_range(start, end) _flush_cache_copyback_all()
19 #define flush_icache_page(vma,pg) _flush_cache_copyback_all() 20 #define flush_icache_page(vma,pg) _flush_cache_copyback_all()
20 #define flush_icache_user_range(vma,pg,adr,len) _flush_cache_copyback_all() 21 #define flush_icache_user_range(vma,pg,adr,len) _flush_cache_copyback_all()
21 #define flush_cache_sigtramp(addr) _flush_cache_copyback_all() 22 #define flush_cache_sigtramp(addr) _flush_cache_copyback_all()
22 #else /* CONFIG_SMP */ 23 #else /* CONFIG_SMP */
23 extern void smp_flush_cache_all(void); 24 extern void smp_flush_cache_all(void);
24 #define flush_icache_range(start, end) smp_flush_cache_all() 25 #define flush_icache_range(start, end) smp_flush_cache_all()
25 #define flush_icache_page(vma,pg) smp_flush_cache_all() 26 #define flush_icache_page(vma,pg) smp_flush_cache_all()
26 #define flush_icache_user_range(vma,pg,adr,len) smp_flush_cache_all() 27 #define flush_icache_user_range(vma,pg,adr,len) smp_flush_cache_all()
27 #define flush_cache_sigtramp(addr) _flush_cache_copyback_all() 28 #define flush_cache_sigtramp(addr) _flush_cache_copyback_all()
28 #endif /* CONFIG_SMP */ 29 #endif /* CONFIG_SMP */
29 #elif defined(CONFIG_CHIP_M32102) 30 #elif defined(CONFIG_CHIP_M32102)
30 #define flush_cache_all() do { } while (0) 31 #define flush_cache_all() do { } while (0)
31 #define flush_cache_mm(mm) do { } while (0) 32 #define flush_cache_mm(mm) do { } while (0)
33 #define flush_cache_dup_mm(mm) do { } while (0)
32 #define flush_cache_range(vma, start, end) do { } while (0) 34 #define flush_cache_range(vma, start, end) do { } while (0)
33 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0) 35 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
34 #define flush_dcache_page(page) do { } while (0) 36 #define flush_dcache_page(page) do { } while (0)
35 #define flush_dcache_mmap_lock(mapping) do { } while (0) 37 #define flush_dcache_mmap_lock(mapping) do { } while (0)
36 #define flush_dcache_mmap_unlock(mapping) do { } while (0) 38 #define flush_dcache_mmap_unlock(mapping) do { } while (0)
37 #define flush_icache_range(start, end) _flush_cache_all() 39 #define flush_icache_range(start, end) _flush_cache_all()
38 #define flush_icache_page(vma,pg) _flush_cache_all() 40 #define flush_icache_page(vma,pg) _flush_cache_all()
39 #define flush_icache_user_range(vma,pg,adr,len) _flush_cache_all() 41 #define flush_icache_user_range(vma,pg,adr,len) _flush_cache_all()
40 #define flush_cache_sigtramp(addr) _flush_cache_all() 42 #define flush_cache_sigtramp(addr) _flush_cache_all()
41 #else 43 #else
42 #define flush_cache_all() do { } while (0) 44 #define flush_cache_all() do { } while (0)
43 #define flush_cache_mm(mm) do { } while (0) 45 #define flush_cache_mm(mm) do { } while (0)
46 #define flush_cache_dup_mm(mm) do { } while (0)
44 #define flush_cache_range(vma, start, end) do { } while (0) 47 #define flush_cache_range(vma, start, end) do { } while (0)
45 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0) 48 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
46 #define flush_dcache_page(page) do { } while (0) 49 #define flush_dcache_page(page) do { } while (0)
47 #define flush_dcache_mmap_lock(mapping) do { } while (0) 50 #define flush_dcache_mmap_lock(mapping) do { } while (0)
48 #define flush_dcache_mmap_unlock(mapping) do { } while (0) 51 #define flush_dcache_mmap_unlock(mapping) do { } while (0)
49 #define flush_icache_range(start, end) do { } while (0) 52 #define flush_icache_range(start, end) do { } while (0)
50 #define flush_icache_page(vma,pg) do { } while (0) 53 #define flush_icache_page(vma,pg) do { } while (0)
51 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0) 54 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
52 #define flush_cache_sigtramp(addr) do { } while (0) 55 #define flush_cache_sigtramp(addr) do { } while (0)
53 #endif /* CONFIG_CHIP_* */ 56 #endif /* CONFIG_CHIP_* */
54 57
55 #define flush_cache_vmap(start, end) do { } while (0) 58 #define flush_cache_vmap(start, end) do { } while (0)
56 #define flush_cache_vunmap(start, end) do { } while (0) 59 #define flush_cache_vunmap(start, end) do { } while (0)
57 60
58 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ 61 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
59 do { \ 62 do { \
60 memcpy(dst, src, len); \ 63 memcpy(dst, src, len); \
61 flush_icache_user_range(vma, page, vaddr, len); \ 64 flush_icache_user_range(vma, page, vaddr, len); \
62 } while (0) 65 } while (0)
63 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \ 66 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
64 memcpy(dst, src, len) 67 memcpy(dst, src, len)
65 68
66 #endif /* _ASM_M32R_CACHEFLUSH_H */ 69 #endif /* _ASM_M32R_CACHEFLUSH_H */
67 70
68 71
include/asm-m68k/cacheflush.h
1 #ifndef _M68K_CACHEFLUSH_H 1 #ifndef _M68K_CACHEFLUSH_H
2 #define _M68K_CACHEFLUSH_H 2 #define _M68K_CACHEFLUSH_H
3 3
4 #include <linux/mm.h> 4 #include <linux/mm.h>
5 5
6 /* cache code */ 6 /* cache code */
7 #define FLUSH_I_AND_D (0x00000808) 7 #define FLUSH_I_AND_D (0x00000808)
8 #define FLUSH_I (0x00000008) 8 #define FLUSH_I (0x00000008)
9 9
10 /* 10 /*
11 * Cache handling functions 11 * Cache handling functions
12 */ 12 */
13 13
14 static inline void flush_icache(void) 14 static inline void flush_icache(void)
15 { 15 {
16 if (CPU_IS_040_OR_060) 16 if (CPU_IS_040_OR_060)
17 asm volatile ( "nop\n" 17 asm volatile ( "nop\n"
18 " .chip 68040\n" 18 " .chip 68040\n"
19 " cpusha %bc\n" 19 " cpusha %bc\n"
20 " .chip 68k"); 20 " .chip 68k");
21 else { 21 else {
22 unsigned long tmp; 22 unsigned long tmp;
23 asm volatile ( "movec %%cacr,%0\n" 23 asm volatile ( "movec %%cacr,%0\n"
24 " or.w %1,%0\n" 24 " or.w %1,%0\n"
25 " movec %0,%%cacr" 25 " movec %0,%%cacr"
26 : "=&d" (tmp) 26 : "=&d" (tmp)
27 : "id" (FLUSH_I)); 27 : "id" (FLUSH_I));
28 } 28 }
29 } 29 }
30 30
31 /* 31 /*
32 * invalidate the cache for the specified memory range. 32 * invalidate the cache for the specified memory range.
33 * It starts at the physical address specified for 33 * It starts at the physical address specified for
34 * the given number of bytes. 34 * the given number of bytes.
35 */ 35 */
36 extern void cache_clear(unsigned long paddr, int len); 36 extern void cache_clear(unsigned long paddr, int len);
37 /* 37 /*
38 * push any dirty cache in the specified memory range. 38 * push any dirty cache in the specified memory range.
39 * It starts at the physical address specified for 39 * It starts at the physical address specified for
40 * the given number of bytes. 40 * the given number of bytes.
41 */ 41 */
42 extern void cache_push(unsigned long paddr, int len); 42 extern void cache_push(unsigned long paddr, int len);
43 43
44 /* 44 /*
45 * push and invalidate pages in the specified user virtual 45 * push and invalidate pages in the specified user virtual
46 * memory range. 46 * memory range.
47 */ 47 */
48 extern void cache_push_v(unsigned long vaddr, int len); 48 extern void cache_push_v(unsigned long vaddr, int len);
49 49
50 /* This is needed whenever the virtual mapping of the current 50 /* This is needed whenever the virtual mapping of the current
51 process changes. */ 51 process changes. */
52 #define __flush_cache_all() \ 52 #define __flush_cache_all() \
53 ({ \ 53 ({ \
54 if (CPU_IS_040_OR_060) \ 54 if (CPU_IS_040_OR_060) \
55 __asm__ __volatile__("nop\n\t" \ 55 __asm__ __volatile__("nop\n\t" \
56 ".chip 68040\n\t" \ 56 ".chip 68040\n\t" \
57 "cpusha %dc\n\t" \ 57 "cpusha %dc\n\t" \
58 ".chip 68k"); \ 58 ".chip 68k"); \
59 else { \ 59 else { \
60 unsigned long _tmp; \ 60 unsigned long _tmp; \
61 __asm__ __volatile__("movec %%cacr,%0\n\t" \ 61 __asm__ __volatile__("movec %%cacr,%0\n\t" \
62 "orw %1,%0\n\t" \ 62 "orw %1,%0\n\t" \
63 "movec %0,%%cacr" \ 63 "movec %0,%%cacr" \
64 : "=&d" (_tmp) \ 64 : "=&d" (_tmp) \
65 : "di" (FLUSH_I_AND_D)); \ 65 : "di" (FLUSH_I_AND_D)); \
66 } \ 66 } \
67 }) 67 })
68 68
69 #define __flush_cache_030() \ 69 #define __flush_cache_030() \
70 ({ \ 70 ({ \
71 if (CPU_IS_020_OR_030) { \ 71 if (CPU_IS_020_OR_030) { \
72 unsigned long _tmp; \ 72 unsigned long _tmp; \
73 __asm__ __volatile__("movec %%cacr,%0\n\t" \ 73 __asm__ __volatile__("movec %%cacr,%0\n\t" \
74 "orw %1,%0\n\t" \ 74 "orw %1,%0\n\t" \
75 "movec %0,%%cacr" \ 75 "movec %0,%%cacr" \
76 : "=&d" (_tmp) \ 76 : "=&d" (_tmp) \
77 : "di" (FLUSH_I_AND_D)); \ 77 : "di" (FLUSH_I_AND_D)); \
78 } \ 78 } \
79 }) 79 })
80 80
81 #define flush_cache_all() __flush_cache_all() 81 #define flush_cache_all() __flush_cache_all()
82 82
83 #define flush_cache_vmap(start, end) flush_cache_all() 83 #define flush_cache_vmap(start, end) flush_cache_all()
84 #define flush_cache_vunmap(start, end) flush_cache_all() 84 #define flush_cache_vunmap(start, end) flush_cache_all()
85 85
86 static inline void flush_cache_mm(struct mm_struct *mm) 86 static inline void flush_cache_mm(struct mm_struct *mm)
87 { 87 {
88 if (mm == current->mm) 88 if (mm == current->mm)
89 __flush_cache_030(); 89 __flush_cache_030();
90 } 90 }
91 91
92 #define flush_cache_dup_mm(mm) flush_cache_mm(mm)
93
92 /* flush_cache_range/flush_cache_page must be macros to avoid 94 /* flush_cache_range/flush_cache_page must be macros to avoid
93 a dependency on linux/mm.h, which includes this file... */ 95 a dependency on linux/mm.h, which includes this file... */
94 static inline void flush_cache_range(struct vm_area_struct *vma, 96 static inline void flush_cache_range(struct vm_area_struct *vma,
95 unsigned long start, 97 unsigned long start,
96 unsigned long end) 98 unsigned long end)
97 { 99 {
98 if (vma->vm_mm == current->mm) 100 if (vma->vm_mm == current->mm)
99 __flush_cache_030(); 101 __flush_cache_030();
100 } 102 }
101 103
102 static inline void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long pfn) 104 static inline void flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long pfn)
103 { 105 {
104 if (vma->vm_mm == current->mm) 106 if (vma->vm_mm == current->mm)
105 __flush_cache_030(); 107 __flush_cache_030();
106 } 108 }
107 109
108 110
109 /* Push the page at kernel virtual address and clear the icache */ 111 /* Push the page at kernel virtual address and clear the icache */
110 /* RZ: use cpush %bc instead of cpush %dc, cinv %ic */ 112 /* RZ: use cpush %bc instead of cpush %dc, cinv %ic */
111 static inline void __flush_page_to_ram(void *vaddr) 113 static inline void __flush_page_to_ram(void *vaddr)
112 { 114 {
113 if (CPU_IS_040_OR_060) { 115 if (CPU_IS_040_OR_060) {
114 __asm__ __volatile__("nop\n\t" 116 __asm__ __volatile__("nop\n\t"
115 ".chip 68040\n\t" 117 ".chip 68040\n\t"
116 "cpushp %%bc,(%0)\n\t" 118 "cpushp %%bc,(%0)\n\t"
117 ".chip 68k" 119 ".chip 68k"
118 : : "a" (__pa(vaddr))); 120 : : "a" (__pa(vaddr)));
119 } else { 121 } else {
120 unsigned long _tmp; 122 unsigned long _tmp;
121 __asm__ __volatile__("movec %%cacr,%0\n\t" 123 __asm__ __volatile__("movec %%cacr,%0\n\t"
122 "orw %1,%0\n\t" 124 "orw %1,%0\n\t"
123 "movec %0,%%cacr" 125 "movec %0,%%cacr"
124 : "=&d" (_tmp) 126 : "=&d" (_tmp)
125 : "di" (FLUSH_I)); 127 : "di" (FLUSH_I));
126 } 128 }
127 } 129 }
128 130
129 #define flush_dcache_page(page) __flush_page_to_ram(page_address(page)) 131 #define flush_dcache_page(page) __flush_page_to_ram(page_address(page))
130 #define flush_dcache_mmap_lock(mapping) do { } while (0) 132 #define flush_dcache_mmap_lock(mapping) do { } while (0)
131 #define flush_dcache_mmap_unlock(mapping) do { } while (0) 133 #define flush_dcache_mmap_unlock(mapping) do { } while (0)
132 #define flush_icache_page(vma, page) __flush_page_to_ram(page_address(page)) 134 #define flush_icache_page(vma, page) __flush_page_to_ram(page_address(page))
133 135
134 extern void flush_icache_user_range(struct vm_area_struct *vma, struct page *page, 136 extern void flush_icache_user_range(struct vm_area_struct *vma, struct page *page,
135 unsigned long addr, int len); 137 unsigned long addr, int len);
136 extern void flush_icache_range(unsigned long address, unsigned long endaddr); 138 extern void flush_icache_range(unsigned long address, unsigned long endaddr);
137 139
138 static inline void copy_to_user_page(struct vm_area_struct *vma, 140 static inline void copy_to_user_page(struct vm_area_struct *vma,
139 struct page *page, unsigned long vaddr, 141 struct page *page, unsigned long vaddr,
140 void *dst, void *src, int len) 142 void *dst, void *src, int len)
141 { 143 {
142 flush_cache_page(vma, vaddr, page_to_pfn(page)); 144 flush_cache_page(vma, vaddr, page_to_pfn(page));
143 memcpy(dst, src, len); 145 memcpy(dst, src, len);
144 flush_icache_user_range(vma, page, vaddr, len); 146 flush_icache_user_range(vma, page, vaddr, len);
145 } 147 }
146 static inline void copy_from_user_page(struct vm_area_struct *vma, 148 static inline void copy_from_user_page(struct vm_area_struct *vma,
147 struct page *page, unsigned long vaddr, 149 struct page *page, unsigned long vaddr,
148 void *dst, void *src, int len) 150 void *dst, void *src, int len)
149 { 151 {
150 flush_cache_page(vma, vaddr, page_to_pfn(page)); 152 flush_cache_page(vma, vaddr, page_to_pfn(page));
151 memcpy(dst, src, len); 153 memcpy(dst, src, len);
152 } 154 }
153 155
154 #endif /* _M68K_CACHEFLUSH_H */ 156 #endif /* _M68K_CACHEFLUSH_H */
155 157
include/asm-m68knommu/cacheflush.h
1 #ifndef _M68KNOMMU_CACHEFLUSH_H 1 #ifndef _M68KNOMMU_CACHEFLUSH_H
2 #define _M68KNOMMU_CACHEFLUSH_H 2 #define _M68KNOMMU_CACHEFLUSH_H
3 3
4 /* 4 /*
5 * (C) Copyright 2000-2004, Greg Ungerer <gerg@snapgear.com> 5 * (C) Copyright 2000-2004, Greg Ungerer <gerg@snapgear.com>
6 */ 6 */
7 #include <linux/mm.h> 7 #include <linux/mm.h>
8 8
9 #define flush_cache_all() __flush_cache_all() 9 #define flush_cache_all() __flush_cache_all()
10 #define flush_cache_mm(mm) do { } while (0) 10 #define flush_cache_mm(mm) do { } while (0)
11 #define flush_cache_dup_mm(mm) do { } while (0)
11 #define flush_cache_range(vma, start, end) __flush_cache_all() 12 #define flush_cache_range(vma, start, end) __flush_cache_all()
12 #define flush_cache_page(vma, vmaddr) do { } while (0) 13 #define flush_cache_page(vma, vmaddr) do { } while (0)
13 #define flush_dcache_range(start,len) __flush_cache_all() 14 #define flush_dcache_range(start,len) __flush_cache_all()
14 #define flush_dcache_page(page) do { } while (0) 15 #define flush_dcache_page(page) do { } while (0)
15 #define flush_dcache_mmap_lock(mapping) do { } while (0) 16 #define flush_dcache_mmap_lock(mapping) do { } while (0)
16 #define flush_dcache_mmap_unlock(mapping) do { } while (0) 17 #define flush_dcache_mmap_unlock(mapping) do { } while (0)
17 #define flush_icache_range(start,len) __flush_cache_all() 18 #define flush_icache_range(start,len) __flush_cache_all()
18 #define flush_icache_page(vma,pg) do { } while (0) 19 #define flush_icache_page(vma,pg) do { } while (0)
19 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0) 20 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
20 #define flush_cache_vmap(start, end) do { } while (0) 21 #define flush_cache_vmap(start, end) do { } while (0)
21 #define flush_cache_vunmap(start, end) do { } while (0) 22 #define flush_cache_vunmap(start, end) do { } while (0)
22 23
23 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ 24 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
24 memcpy(dst, src, len) 25 memcpy(dst, src, len)
25 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \ 26 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
26 memcpy(dst, src, len) 27 memcpy(dst, src, len)
27 28
28 static inline void __flush_cache_all(void) 29 static inline void __flush_cache_all(void)
29 { 30 {
30 #ifdef CONFIG_M5407 31 #ifdef CONFIG_M5407
31 /* 32 /*
32 * Use cpushl to push and invalidate all cache lines. 33 * Use cpushl to push and invalidate all cache lines.
33 * Gas doesn't seem to know how to generate the ColdFire 34 * Gas doesn't seem to know how to generate the ColdFire
34 * cpushl instruction... Oh well, bit stuff it for now. 35 * cpushl instruction... Oh well, bit stuff it for now.
35 */ 36 */
36 __asm__ __volatile__ ( 37 __asm__ __volatile__ (
37 "nop\n\t" 38 "nop\n\t"
38 "clrl %%d0\n\t" 39 "clrl %%d0\n\t"
39 "1:\n\t" 40 "1:\n\t"
40 "movel %%d0,%%a0\n\t" 41 "movel %%d0,%%a0\n\t"
41 "2:\n\t" 42 "2:\n\t"
42 ".word 0xf468\n\t" 43 ".word 0xf468\n\t"
43 "addl #0x10,%%a0\n\t" 44 "addl #0x10,%%a0\n\t"
44 "cmpl #0x00000800,%%a0\n\t" 45 "cmpl #0x00000800,%%a0\n\t"
45 "blt 2b\n\t" 46 "blt 2b\n\t"
46 "addql #1,%%d0\n\t" 47 "addql #1,%%d0\n\t"
47 "cmpil #4,%%d0\n\t" 48 "cmpil #4,%%d0\n\t"
48 "bne 1b\n\t" 49 "bne 1b\n\t"
49 "movel #0xb6088500,%%d0\n\t" 50 "movel #0xb6088500,%%d0\n\t"
50 "movec %%d0,%%CACR\n\t" 51 "movec %%d0,%%CACR\n\t"
51 : : : "d0", "a0" ); 52 : : : "d0", "a0" );
52 #endif /* CONFIG_M5407 */ 53 #endif /* CONFIG_M5407 */
53 #if defined(CONFIG_M527x) || defined(CONFIG_M528x) 54 #if defined(CONFIG_M527x) || defined(CONFIG_M528x)
54 __asm__ __volatile__ ( 55 __asm__ __volatile__ (
55 "movel #0x81400100, %%d0\n\t" 56 "movel #0x81400100, %%d0\n\t"
56 "movec %%d0, %%CACR\n\t" 57 "movec %%d0, %%CACR\n\t"
57 "nop\n\t" 58 "nop\n\t"
58 : : : "d0" ); 59 : : : "d0" );
59 #endif /* CONFIG_M527x || CONFIG_M528x */ 60 #endif /* CONFIG_M527x || CONFIG_M528x */
60 #if defined(CONFIG_M5206) || defined(CONFIG_M5206e) || defined(CONFIG_M5272) 61 #if defined(CONFIG_M5206) || defined(CONFIG_M5206e) || defined(CONFIG_M5272)
61 __asm__ __volatile__ ( 62 __asm__ __volatile__ (
62 "movel #0x81000100, %%d0\n\t" 63 "movel #0x81000100, %%d0\n\t"
63 "movec %%d0, %%CACR\n\t" 64 "movec %%d0, %%CACR\n\t"
64 "nop\n\t" 65 "nop\n\t"
65 : : : "d0" ); 66 : : : "d0" );
66 #endif /* CONFIG_M5206 || CONFIG_M5206e || CONFIG_M5272 */ 67 #endif /* CONFIG_M5206 || CONFIG_M5206e || CONFIG_M5272 */
67 #ifdef CONFIG_M5249 68 #ifdef CONFIG_M5249
68 __asm__ __volatile__ ( 69 __asm__ __volatile__ (
69 "movel #0xa1000200, %%d0\n\t" 70 "movel #0xa1000200, %%d0\n\t"
70 "movec %%d0, %%CACR\n\t" 71 "movec %%d0, %%CACR\n\t"
71 "nop\n\t" 72 "nop\n\t"
72 : : : "d0" ); 73 : : : "d0" );
73 #endif /* CONFIG_M5249 */ 74 #endif /* CONFIG_M5249 */
74 #ifdef CONFIG_M532x 75 #ifdef CONFIG_M532x
75 __asm__ __volatile__ ( 76 __asm__ __volatile__ (
76 "movel #0x81000200, %%d0\n\t" 77 "movel #0x81000200, %%d0\n\t"
77 "movec %%d0, %%CACR\n\t" 78 "movec %%d0, %%CACR\n\t"
78 "nop\n\t" 79 "nop\n\t"
79 : : : "d0" ); 80 : : : "d0" );
80 #endif /* CONFIG_M532x */ 81 #endif /* CONFIG_M532x */
81 } 82 }
82 83
83 #endif /* _M68KNOMMU_CACHEFLUSH_H */ 84 #endif /* _M68KNOMMU_CACHEFLUSH_H */
84 85
include/asm-mips/cacheflush.h
1 /* 1 /*
2 * This file is subject to the terms and conditions of the GNU General Public 2 * This file is subject to the terms and conditions of the GNU General Public
3 * License. See the file "COPYING" in the main directory of this archive 3 * License. See the file "COPYING" in the main directory of this archive
4 * for more details. 4 * for more details.
5 * 5 *
6 * Copyright (C) 1994, 95, 96, 97, 98, 99, 2000, 01, 02, 03 by Ralf Baechle 6 * Copyright (C) 1994, 95, 96, 97, 98, 99, 2000, 01, 02, 03 by Ralf Baechle
7 * Copyright (C) 1999, 2000, 2001 Silicon Graphics, Inc. 7 * Copyright (C) 1999, 2000, 2001 Silicon Graphics, Inc.
8 */ 8 */
9 #ifndef _ASM_CACHEFLUSH_H 9 #ifndef _ASM_CACHEFLUSH_H
10 #define _ASM_CACHEFLUSH_H 10 #define _ASM_CACHEFLUSH_H
11 11
12 /* Keep includes the same across arches. */ 12 /* Keep includes the same across arches. */
13 #include <linux/mm.h> 13 #include <linux/mm.h>
14 #include <asm/cpu-features.h> 14 #include <asm/cpu-features.h>
15 15
16 /* Cache flushing: 16 /* Cache flushing:
17 * 17 *
18 * - flush_cache_all() flushes entire cache 18 * - flush_cache_all() flushes entire cache
19 * - flush_cache_mm(mm) flushes the specified mm context's cache lines 19 * - flush_cache_mm(mm) flushes the specified mm context's cache lines
20 * - flush_cache_dup_mm(mm) handles cache flushing when forking
20 * - flush_cache_page(mm, vmaddr, pfn) flushes a single page 21 * - flush_cache_page(mm, vmaddr, pfn) flushes a single page
21 * - flush_cache_range(vma, start, end) flushes a range of pages 22 * - flush_cache_range(vma, start, end) flushes a range of pages
22 * - flush_icache_range(start, end) flush a range of instructions 23 * - flush_icache_range(start, end) flush a range of instructions
23 * - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache 24 * - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache
24 * 25 *
25 * MIPS specific flush operations: 26 * MIPS specific flush operations:
26 * 27 *
27 * - flush_cache_sigtramp() flush signal trampoline 28 * - flush_cache_sigtramp() flush signal trampoline
28 * - flush_icache_all() flush the entire instruction cache 29 * - flush_icache_all() flush the entire instruction cache
29 * - flush_data_cache_page() flushes a page from the data cache 30 * - flush_data_cache_page() flushes a page from the data cache
30 */ 31 */
31 extern void (*flush_cache_all)(void); 32 extern void (*flush_cache_all)(void);
32 extern void (*__flush_cache_all)(void); 33 extern void (*__flush_cache_all)(void);
33 extern void (*flush_cache_mm)(struct mm_struct *mm); 34 extern void (*flush_cache_mm)(struct mm_struct *mm);
35 #define flush_cache_dup_mm(mm) do { (void) (mm); } while (0)
34 extern void (*flush_cache_range)(struct vm_area_struct *vma, 36 extern void (*flush_cache_range)(struct vm_area_struct *vma,
35 unsigned long start, unsigned long end); 37 unsigned long start, unsigned long end);
36 extern void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page, unsigned long pfn); 38 extern void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page, unsigned long pfn);
37 extern void __flush_dcache_page(struct page *page); 39 extern void __flush_dcache_page(struct page *page);
38 40
39 static inline void flush_dcache_page(struct page *page) 41 static inline void flush_dcache_page(struct page *page)
40 { 42 {
41 if (cpu_has_dc_aliases || !cpu_has_ic_fills_f_dc) 43 if (cpu_has_dc_aliases || !cpu_has_ic_fills_f_dc)
42 __flush_dcache_page(page); 44 __flush_dcache_page(page);
43 45
44 } 46 }
45 47
46 #define flush_dcache_mmap_lock(mapping) do { } while (0) 48 #define flush_dcache_mmap_lock(mapping) do { } while (0)
47 #define flush_dcache_mmap_unlock(mapping) do { } while (0) 49 #define flush_dcache_mmap_unlock(mapping) do { } while (0)
48 50
49 static inline void flush_icache_page(struct vm_area_struct *vma, 51 static inline void flush_icache_page(struct vm_area_struct *vma,
50 struct page *page) 52 struct page *page)
51 { 53 {
52 } 54 }
53 55
54 extern void (*flush_icache_range)(unsigned long start, unsigned long end); 56 extern void (*flush_icache_range)(unsigned long start, unsigned long end);
55 #define flush_cache_vmap(start, end) flush_cache_all() 57 #define flush_cache_vmap(start, end) flush_cache_all()
56 #define flush_cache_vunmap(start, end) flush_cache_all() 58 #define flush_cache_vunmap(start, end) flush_cache_all()
57 59
58 extern void copy_to_user_page(struct vm_area_struct *vma, 60 extern void copy_to_user_page(struct vm_area_struct *vma,
59 struct page *page, unsigned long vaddr, void *dst, const void *src, 61 struct page *page, unsigned long vaddr, void *dst, const void *src,
60 unsigned long len); 62 unsigned long len);
61 63
62 extern void copy_from_user_page(struct vm_area_struct *vma, 64 extern void copy_from_user_page(struct vm_area_struct *vma,
63 struct page *page, unsigned long vaddr, void *dst, const void *src, 65 struct page *page, unsigned long vaddr, void *dst, const void *src,
64 unsigned long len); 66 unsigned long len);
65 67
66 extern void (*flush_cache_sigtramp)(unsigned long addr); 68 extern void (*flush_cache_sigtramp)(unsigned long addr);
67 extern void (*flush_icache_all)(void); 69 extern void (*flush_icache_all)(void);
68 extern void (*local_flush_data_cache_page)(void * addr); 70 extern void (*local_flush_data_cache_page)(void * addr);
69 extern void (*flush_data_cache_page)(unsigned long addr); 71 extern void (*flush_data_cache_page)(unsigned long addr);
70 72
71 /* 73 /*
72 * This flag is used to indicate that the page pointed to by a pte 74 * This flag is used to indicate that the page pointed to by a pte
73 * is dirty and requires cleaning before returning it to the user. 75 * is dirty and requires cleaning before returning it to the user.
74 */ 76 */
75 #define PG_dcache_dirty PG_arch_1 77 #define PG_dcache_dirty PG_arch_1
76 78
77 #define Page_dcache_dirty(page) \ 79 #define Page_dcache_dirty(page) \
78 test_bit(PG_dcache_dirty, &(page)->flags) 80 test_bit(PG_dcache_dirty, &(page)->flags)
79 #define SetPageDcacheDirty(page) \ 81 #define SetPageDcacheDirty(page) \
80 set_bit(PG_dcache_dirty, &(page)->flags) 82 set_bit(PG_dcache_dirty, &(page)->flags)
81 #define ClearPageDcacheDirty(page) \ 83 #define ClearPageDcacheDirty(page) \
82 clear_bit(PG_dcache_dirty, &(page)->flags) 84 clear_bit(PG_dcache_dirty, &(page)->flags)
83 85
84 /* Run kernel code uncached, useful for cache probing functions. */ 86 /* Run kernel code uncached, useful for cache probing functions. */
85 unsigned long __init run_uncached(void *func); 87 unsigned long __init run_uncached(void *func);
86 88
87 #endif /* _ASM_CACHEFLUSH_H */ 89 #endif /* _ASM_CACHEFLUSH_H */
88 90
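The MIPS header above is the one place where the new hook actually pays off: flush_cache_dup_mm() is defined away to a no-op, while flush_cache_mm() keeps its full-flush meaning for address-space teardown, so unaudited architectures can simply map the new hook onto the old one and see no behavioural change. Below is a minimal user-space sketch of the intended calling convention. It assumes the generic fork path (dup_mmap() in kernel/fork.c, whose hunk is not shown here) is switched from flush_cache_mm() to the new hook; everything other than the two hook names is an illustrative stub, not kernel code.

	/*
	 * Sketch only: models how a fork-time caller uses flush_cache_dup_mm()
	 * instead of flush_cache_mm().  struct mm_struct and dup_mmap_sketch()
	 * are stand-ins for the real kernel objects.
	 */
	#include <stdio.h>

	struct mm_struct { int id; };

	static void flush_cache_mm(struct mm_struct *mm)
	{
		printf("full cache flush of mm %d\n", mm->id);
	}

	/* MIPS-style definition: forking needs no flush on physically tagged caches */
	#define flush_cache_dup_mm(mm) do { (void)(mm); } while (0)

	static void dup_mmap_sketch(struct mm_struct *oldmm)
	{
		/* the old code would have called flush_cache_mm(oldmm) here */
		flush_cache_dup_mm(oldmm);	/* no-op on MIPS, full flush elsewhere */
	}

	int main(void)
	{
		struct mm_struct parent = { .id = 1 };

		dup_mmap_sketch(&parent);	/* nothing printed: the flush is skipped */
		flush_cache_mm(&parent);	/* the teardown path still flushes */
		return 0;
	}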
include/asm-parisc/cacheflush.h
1 #ifndef _PARISC_CACHEFLUSH_H 1 #ifndef _PARISC_CACHEFLUSH_H
2 #define _PARISC_CACHEFLUSH_H 2 #define _PARISC_CACHEFLUSH_H
3 3
4 #include <linux/mm.h> 4 #include <linux/mm.h>
5 #include <asm/cache.h> /* for flush_user_dcache_range_asm() proto */ 5 #include <asm/cache.h> /* for flush_user_dcache_range_asm() proto */
6 6
7 /* The usual comment is "Caches aren't brain-dead on the <architecture>". 7 /* The usual comment is "Caches aren't brain-dead on the <architecture>".
8 * Unfortunately, that doesn't apply to PA-RISC. */ 8 * Unfortunately, that doesn't apply to PA-RISC. */
9 9
10 /* Cache flush operations */ 10 /* Cache flush operations */
11 11
12 #ifdef CONFIG_SMP 12 #ifdef CONFIG_SMP
13 #define flush_cache_mm(mm) flush_cache_all() 13 #define flush_cache_mm(mm) flush_cache_all()
14 #else 14 #else
15 #define flush_cache_mm(mm) flush_cache_all_local() 15 #define flush_cache_mm(mm) flush_cache_all_local()
16 #endif 16 #endif
17 17
18 #define flush_cache_dup_mm(mm) flush_cache_mm(mm)
19
18 #define flush_kernel_dcache_range(start,size) \ 20 #define flush_kernel_dcache_range(start,size) \
19 flush_kernel_dcache_range_asm((start), (start)+(size)); 21 flush_kernel_dcache_range_asm((start), (start)+(size));
20 22
21 extern void flush_cache_all_local(void); 23 extern void flush_cache_all_local(void);
22 24
23 static inline void cacheflush_h_tmp_function(void *dummy) 25 static inline void cacheflush_h_tmp_function(void *dummy)
24 { 26 {
25 flush_cache_all_local(); 27 flush_cache_all_local();
26 } 28 }
27 29
28 static inline void flush_cache_all(void) 30 static inline void flush_cache_all(void)
29 { 31 {
30 on_each_cpu(cacheflush_h_tmp_function, NULL, 1, 1); 32 on_each_cpu(cacheflush_h_tmp_function, NULL, 1, 1);
31 } 33 }
32 34
33 #define flush_cache_vmap(start, end) flush_cache_all() 35 #define flush_cache_vmap(start, end) flush_cache_all()
34 #define flush_cache_vunmap(start, end) flush_cache_all() 36 #define flush_cache_vunmap(start, end) flush_cache_all()
35 37
36 extern int parisc_cache_flush_threshold; 38 extern int parisc_cache_flush_threshold;
37 void parisc_setup_cache_timing(void); 39 void parisc_setup_cache_timing(void);
38 40
39 static inline void 41 static inline void
40 flush_user_dcache_range(unsigned long start, unsigned long end) 42 flush_user_dcache_range(unsigned long start, unsigned long end)
41 { 43 {
42 if ((end - start) < parisc_cache_flush_threshold) 44 if ((end - start) < parisc_cache_flush_threshold)
43 flush_user_dcache_range_asm(start,end); 45 flush_user_dcache_range_asm(start,end);
44 else 46 else
45 flush_data_cache(); 47 flush_data_cache();
46 } 48 }
47 49
48 static inline void 50 static inline void
49 flush_user_icache_range(unsigned long start, unsigned long end) 51 flush_user_icache_range(unsigned long start, unsigned long end)
50 { 52 {
51 if ((end - start) < parisc_cache_flush_threshold) 53 if ((end - start) < parisc_cache_flush_threshold)
52 flush_user_icache_range_asm(start,end); 54 flush_user_icache_range_asm(start,end);
53 else 55 else
54 flush_instruction_cache(); 56 flush_instruction_cache();
55 } 57 }
56 58
57 extern void flush_dcache_page(struct page *page); 59 extern void flush_dcache_page(struct page *page);
58 60
59 #define flush_dcache_mmap_lock(mapping) \ 61 #define flush_dcache_mmap_lock(mapping) \
60 write_lock_irq(&(mapping)->tree_lock) 62 write_lock_irq(&(mapping)->tree_lock)
61 #define flush_dcache_mmap_unlock(mapping) \ 63 #define flush_dcache_mmap_unlock(mapping) \
62 write_unlock_irq(&(mapping)->tree_lock) 64 write_unlock_irq(&(mapping)->tree_lock)
63 65
64 #define flush_icache_page(vma,page) do { flush_kernel_dcache_page(page); flush_kernel_icache_page(page_address(page)); } while (0) 66 #define flush_icache_page(vma,page) do { flush_kernel_dcache_page(page); flush_kernel_icache_page(page_address(page)); } while (0)
65 67
66 #define flush_icache_range(s,e) do { flush_kernel_dcache_range_asm(s,e); flush_kernel_icache_range_asm(s,e); } while (0) 68 #define flush_icache_range(s,e) do { flush_kernel_dcache_range_asm(s,e); flush_kernel_icache_range_asm(s,e); } while (0)
67 69
68 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ 70 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
69 do { \ 71 do { \
70 flush_cache_page(vma, vaddr, page_to_pfn(page)); \ 72 flush_cache_page(vma, vaddr, page_to_pfn(page)); \
71 memcpy(dst, src, len); \ 73 memcpy(dst, src, len); \
72 flush_kernel_dcache_range_asm((unsigned long)dst, (unsigned long)dst + len); \ 74 flush_kernel_dcache_range_asm((unsigned long)dst, (unsigned long)dst + len); \
73 } while (0) 75 } while (0)
74 76
75 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \ 77 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
76 do { \ 78 do { \
77 flush_cache_page(vma, vaddr, page_to_pfn(page)); \ 79 flush_cache_page(vma, vaddr, page_to_pfn(page)); \
78 memcpy(dst, src, len); \ 80 memcpy(dst, src, len); \
79 } while (0) 81 } while (0)
80 82
81 static inline void flush_cache_range(struct vm_area_struct *vma, 83 static inline void flush_cache_range(struct vm_area_struct *vma,
82 unsigned long start, unsigned long end) 84 unsigned long start, unsigned long end)
83 { 85 {
84 int sr3; 86 int sr3;
85 87
86 if (!vma->vm_mm->context) { 88 if (!vma->vm_mm->context) {
87 BUG(); 89 BUG();
88 return; 90 return;
89 } 91 }
90 92
91 sr3 = mfsp(3); 93 sr3 = mfsp(3);
92 if (vma->vm_mm->context == sr3) { 94 if (vma->vm_mm->context == sr3) {
93 flush_user_dcache_range(start,end); 95 flush_user_dcache_range(start,end);
94 flush_user_icache_range(start,end); 96 flush_user_icache_range(start,end);
95 } else { 97 } else {
96 flush_cache_all(); 98 flush_cache_all();
97 } 99 }
98 } 100 }
99 101
100 /* Simple function to work out if we have an existing address translation 102 /* Simple function to work out if we have an existing address translation
101 * for a user space vma. */ 103 * for a user space vma. */
102 static inline int translation_exists(struct vm_area_struct *vma, 104 static inline int translation_exists(struct vm_area_struct *vma,
103 unsigned long addr, unsigned long pfn) 105 unsigned long addr, unsigned long pfn)
104 { 106 {
105 pgd_t *pgd = pgd_offset(vma->vm_mm, addr); 107 pgd_t *pgd = pgd_offset(vma->vm_mm, addr);
106 pmd_t *pmd; 108 pmd_t *pmd;
107 pte_t pte; 109 pte_t pte;
108 110
109 if(pgd_none(*pgd)) 111 if(pgd_none(*pgd))
110 return 0; 112 return 0;
111 113
112 pmd = pmd_offset(pgd, addr); 114 pmd = pmd_offset(pgd, addr);
113 if(pmd_none(*pmd) || pmd_bad(*pmd)) 115 if(pmd_none(*pmd) || pmd_bad(*pmd))
114 return 0; 116 return 0;
115 117
116 /* We cannot take the pte lock here: flush_cache_page is usually 118 /* We cannot take the pte lock here: flush_cache_page is usually
117 * called with pte lock already held. Whereas flush_dcache_page 119 * called with pte lock already held. Whereas flush_dcache_page
118 * takes flush_dcache_mmap_lock, which is lower in the hierarchy: 120 * takes flush_dcache_mmap_lock, which is lower in the hierarchy:
119 * the vma itself is secure, but the pte might come or go racily. 121 * the vma itself is secure, but the pte might come or go racily.
120 */ 122 */
121 pte = *pte_offset_map(pmd, addr); 123 pte = *pte_offset_map(pmd, addr);
122 /* But pte_unmap() does nothing on this architecture */ 124 /* But pte_unmap() does nothing on this architecture */
123 125
124 /* Filter out coincidental file entries and swap entries */ 126 /* Filter out coincidental file entries and swap entries */
125 if (!(pte_val(pte) & (_PAGE_FLUSH|_PAGE_PRESENT))) 127 if (!(pte_val(pte) & (_PAGE_FLUSH|_PAGE_PRESENT)))
126 return 0; 128 return 0;
127 129
128 return pte_pfn(pte) == pfn; 130 return pte_pfn(pte) == pfn;
129 } 131 }
130 132
131 /* Private function to flush a page from the cache of a non-current 133 /* Private function to flush a page from the cache of a non-current
132 * process. cr25 contains the Page Directory of the current user 134 * process. cr25 contains the Page Directory of the current user
133 * process; we're going to hijack both it and the user space %sr3 to 135 * process; we're going to hijack both it and the user space %sr3 to
134 * temporarily make the non-current process current. We have to do 136 * temporarily make the non-current process current. We have to do
135 * this because cache flushing may cause a non-access tlb miss which 137 * this because cache flushing may cause a non-access tlb miss which
136 * the handlers have to fill in from the pgd of the non-current 138 * the handlers have to fill in from the pgd of the non-current
137 * process. */ 139 * process. */
138 static inline void 140 static inline void
139 flush_user_cache_page_non_current(struct vm_area_struct *vma, 141 flush_user_cache_page_non_current(struct vm_area_struct *vma,
140 unsigned long vmaddr) 142 unsigned long vmaddr)
141 { 143 {
142 /* save the current process space and pgd */ 144 /* save the current process space and pgd */
143 unsigned long space = mfsp(3), pgd = mfctl(25); 145 unsigned long space = mfsp(3), pgd = mfctl(25);
144 146
145 /* we don't mind taking interrupts since they may not 147 /* we don't mind taking interrupts since they may not
146 * do anything with user space, but we can't 148 * do anything with user space, but we can't
147 * be preempted here */ 149 * be preempted here */
148 preempt_disable(); 150 preempt_disable();
149 151
150 /* make us current */ 152 /* make us current */
151 mtctl(__pa(vma->vm_mm->pgd), 25); 153 mtctl(__pa(vma->vm_mm->pgd), 25);
152 mtsp(vma->vm_mm->context, 3); 154 mtsp(vma->vm_mm->context, 3);
153 155
154 flush_user_dcache_page(vmaddr); 156 flush_user_dcache_page(vmaddr);
155 if(vma->vm_flags & VM_EXEC) 157 if(vma->vm_flags & VM_EXEC)
156 flush_user_icache_page(vmaddr); 158 flush_user_icache_page(vmaddr);
157 159
158 /* put the old current process back */ 160 /* put the old current process back */
159 mtsp(space, 3); 161 mtsp(space, 3);
160 mtctl(pgd, 25); 162 mtctl(pgd, 25);
161 preempt_enable(); 163 preempt_enable();
162 } 164 }
163 165
164 static inline void 166 static inline void
165 __flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr) 167 __flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr)
166 { 168 {
167 if (likely(vma->vm_mm->context == mfsp(3))) { 169 if (likely(vma->vm_mm->context == mfsp(3))) {
168 flush_user_dcache_page(vmaddr); 170 flush_user_dcache_page(vmaddr);
169 if (vma->vm_flags & VM_EXEC) 171 if (vma->vm_flags & VM_EXEC)
170 flush_user_icache_page(vmaddr); 172 flush_user_icache_page(vmaddr);
171 } else { 173 } else {
172 flush_user_cache_page_non_current(vma, vmaddr); 174 flush_user_cache_page_non_current(vma, vmaddr);
173 } 175 }
174 } 176 }
175 177
176 static inline void 178 static inline void
177 flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long pfn) 179 flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long pfn)
178 { 180 {
179 BUG_ON(!vma->vm_mm->context); 181 BUG_ON(!vma->vm_mm->context);
180 182
181 if (likely(translation_exists(vma, vmaddr, pfn))) 183 if (likely(translation_exists(vma, vmaddr, pfn)))
182 __flush_cache_page(vma, vmaddr); 184 __flush_cache_page(vma, vmaddr);
183 185
184 } 186 }
185 187
186 static inline void 188 static inline void
187 flush_anon_page(struct page *page, unsigned long vmaddr) 189 flush_anon_page(struct page *page, unsigned long vmaddr)
188 { 190 {
189 if (PageAnon(page)) 191 if (PageAnon(page))
190 flush_user_dcache_page(vmaddr); 192 flush_user_dcache_page(vmaddr);
191 } 193 }
192 #define ARCH_HAS_FLUSH_ANON_PAGE 194 #define ARCH_HAS_FLUSH_ANON_PAGE
193 195
194 #define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE 196 #define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
195 void flush_kernel_dcache_page_addr(void *addr); 197 void flush_kernel_dcache_page_addr(void *addr);
196 static inline void flush_kernel_dcache_page(struct page *page) 198 static inline void flush_kernel_dcache_page(struct page *page)
197 { 199 {
198 flush_kernel_dcache_page_addr(page_address(page)); 200 flush_kernel_dcache_page_addr(page_address(page));
199 } 201 }
200 202
201 #ifdef CONFIG_DEBUG_RODATA 203 #ifdef CONFIG_DEBUG_RODATA
202 void mark_rodata_ro(void); 204 void mark_rodata_ro(void);
203 #endif 205 #endif
204 206
205 #ifdef CONFIG_PA8X00 207 #ifdef CONFIG_PA8X00
206 /* Only pa8800, pa8900 needs this */ 208 /* Only pa8800, pa8900 needs this */
207 #define ARCH_HAS_KMAP 209 #define ARCH_HAS_KMAP
208 210
209 void kunmap_parisc(void *addr); 211 void kunmap_parisc(void *addr);
210 212
211 static inline void *kmap(struct page *page) 213 static inline void *kmap(struct page *page)
212 { 214 {
213 might_sleep(); 215 might_sleep();
214 return page_address(page); 216 return page_address(page);
215 } 217 }
216 218
217 #define kunmap(page) kunmap_parisc(page_address(page)) 219 #define kunmap(page) kunmap_parisc(page_address(page))
218 220
219 #define kmap_atomic(page, idx) page_address(page) 221 #define kmap_atomic(page, idx) page_address(page)
220 222
221 #define kunmap_atomic(addr, idx) kunmap_parisc(addr) 223 #define kunmap_atomic(addr, idx) kunmap_parisc(addr)
222 224
223 #define kmap_atomic_pfn(pfn, idx) page_address(pfn_to_page(pfn)) 225 #define kmap_atomic_pfn(pfn, idx) page_address(pfn_to_page(pfn))
224 #define kmap_atomic_to_page(ptr) virt_to_page(ptr) 226 #define kmap_atomic_to_page(ptr) virt_to_page(ptr)
225 #endif 227 #endif
226 228
227 #endif /* _PARISC_CACHEFLUSH_H */ 229 #endif /* _PARISC_CACHEFLUSH_H */
228 230
229 231
include/asm-powerpc/cacheflush.h
1 /* 1 /*
2 * This program is free software; you can redistribute it and/or 2 * This program is free software; you can redistribute it and/or
3 * modify it under the terms of the GNU General Public License 3 * modify it under the terms of the GNU General Public License
4 * as published by the Free Software Foundation; either version 4 * as published by the Free Software Foundation; either version
5 * 2 of the License, or (at your option) any later version. 5 * 2 of the License, or (at your option) any later version.
6 */ 6 */
7 #ifndef _ASM_POWERPC_CACHEFLUSH_H 7 #ifndef _ASM_POWERPC_CACHEFLUSH_H
8 #define _ASM_POWERPC_CACHEFLUSH_H 8 #define _ASM_POWERPC_CACHEFLUSH_H
9 9
10 #ifdef __KERNEL__ 10 #ifdef __KERNEL__
11 11
12 #include <linux/mm.h> 12 #include <linux/mm.h>
13 #include <asm/cputable.h> 13 #include <asm/cputable.h>
14 14
15 /* 15 /*
16 * No cache flushing is required when address mappings are changed, 16 * No cache flushing is required when address mappings are changed,
17 * because the caches on PowerPCs are physically addressed. 17 * because the caches on PowerPCs are physically addressed.
18 */ 18 */
19 #define flush_cache_all() do { } while (0) 19 #define flush_cache_all() do { } while (0)
20 #define flush_cache_mm(mm) do { } while (0) 20 #define flush_cache_mm(mm) do { } while (0)
21 #define flush_cache_dup_mm(mm) do { } while (0)
21 #define flush_cache_range(vma, start, end) do { } while (0) 22 #define flush_cache_range(vma, start, end) do { } while (0)
22 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0) 23 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
23 #define flush_icache_page(vma, page) do { } while (0) 24 #define flush_icache_page(vma, page) do { } while (0)
24 #define flush_cache_vmap(start, end) do { } while (0) 25 #define flush_cache_vmap(start, end) do { } while (0)
25 #define flush_cache_vunmap(start, end) do { } while (0) 26 #define flush_cache_vunmap(start, end) do { } while (0)
26 27
27 extern void flush_dcache_page(struct page *page); 28 extern void flush_dcache_page(struct page *page);
28 #define flush_dcache_mmap_lock(mapping) do { } while (0) 29 #define flush_dcache_mmap_lock(mapping) do { } while (0)
29 #define flush_dcache_mmap_unlock(mapping) do { } while (0) 30 #define flush_dcache_mmap_unlock(mapping) do { } while (0)
30 31
31 extern void __flush_icache_range(unsigned long, unsigned long); 32 extern void __flush_icache_range(unsigned long, unsigned long);
32 static inline void flush_icache_range(unsigned long start, unsigned long stop) 33 static inline void flush_icache_range(unsigned long start, unsigned long stop)
33 { 34 {
34 if (!cpu_has_feature(CPU_FTR_COHERENT_ICACHE)) 35 if (!cpu_has_feature(CPU_FTR_COHERENT_ICACHE))
35 __flush_icache_range(start, stop); 36 __flush_icache_range(start, stop);
36 } 37 }
37 38
38 extern void flush_icache_user_range(struct vm_area_struct *vma, 39 extern void flush_icache_user_range(struct vm_area_struct *vma,
39 struct page *page, unsigned long addr, 40 struct page *page, unsigned long addr,
40 int len); 41 int len);
41 extern void __flush_dcache_icache(void *page_va); 42 extern void __flush_dcache_icache(void *page_va);
42 extern void flush_dcache_icache_page(struct page *page); 43 extern void flush_dcache_icache_page(struct page *page);
43 #if defined(CONFIG_PPC32) && !defined(CONFIG_BOOKE) 44 #if defined(CONFIG_PPC32) && !defined(CONFIG_BOOKE)
44 extern void __flush_dcache_icache_phys(unsigned long physaddr); 45 extern void __flush_dcache_icache_phys(unsigned long physaddr);
45 #endif /* CONFIG_PPC32 && !CONFIG_BOOKE */ 46 #endif /* CONFIG_PPC32 && !CONFIG_BOOKE */
46 47
47 extern void flush_dcache_range(unsigned long start, unsigned long stop); 48 extern void flush_dcache_range(unsigned long start, unsigned long stop);
48 #ifdef CONFIG_PPC32 49 #ifdef CONFIG_PPC32
49 extern void clean_dcache_range(unsigned long start, unsigned long stop); 50 extern void clean_dcache_range(unsigned long start, unsigned long stop);
50 extern void invalidate_dcache_range(unsigned long start, unsigned long stop); 51 extern void invalidate_dcache_range(unsigned long start, unsigned long stop);
51 #endif /* CONFIG_PPC32 */ 52 #endif /* CONFIG_PPC32 */
52 #ifdef CONFIG_PPC64 53 #ifdef CONFIG_PPC64
53 extern void flush_inval_dcache_range(unsigned long start, unsigned long stop); 54 extern void flush_inval_dcache_range(unsigned long start, unsigned long stop);
54 extern void flush_dcache_phys_range(unsigned long start, unsigned long stop); 55 extern void flush_dcache_phys_range(unsigned long start, unsigned long stop);
55 #endif 56 #endif
56 57
57 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ 58 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
58 do { \ 59 do { \
59 memcpy(dst, src, len); \ 60 memcpy(dst, src, len); \
60 flush_icache_user_range(vma, page, vaddr, len); \ 61 flush_icache_user_range(vma, page, vaddr, len); \
61 } while (0) 62 } while (0)
62 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \ 63 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
63 memcpy(dst, src, len) 64 memcpy(dst, src, len)
64 65
65 66
66 #endif /* __KERNEL__ */ 67 #endif /* __KERNEL__ */
67 68
68 #endif /* _ASM_POWERPC_CACHEFLUSH_H */ 69 #endif /* _ASM_POWERPC_CACHEFLUSH_H */
69 70
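As the comment in the PowerPC header says, the physically addressed caches make every flush_cache_*() hook, the new flush_cache_dup_mm() included, a no-op; the only coherency work left is flushing the instruction cache after code has been written, and flush_icache_range() skips even that on CPUs with a coherent icache. The sketch below models that feature-gated pattern in plain C; the flag and the low-level flush are stand-ins, not the real cputable API.

	/*
	 * Sketch only: mirrors the cpu_has_feature(CPU_FTR_COHERENT_ICACHE)
	 * test in flush_icache_range() above with an ordinary boolean.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	static bool coherent_icache;	/* stand-in for CPU_FTR_COHERENT_ICACHE */

	static void __flush_icache_range_stub(unsigned long start, unsigned long stop)
	{
		printf("icache flush %#lx-%#lx\n", start, stop);
	}

	static void flush_icache_range_sketch(unsigned long start, unsigned long stop)
	{
		if (!coherent_icache)		/* flush only when the icache cannot snoop */
			__flush_icache_range_stub(start, stop);
	}

	int main(void)
	{
		coherent_icache = false;
		flush_icache_range_sketch(0x1000, 0x2000);	/* flush happens */

		coherent_icache = true;
		flush_icache_range_sketch(0x1000, 0x2000);	/* skipped entirely */
		return 0;
	}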
include/asm-s390/cacheflush.h
1 #ifndef _S390_CACHEFLUSH_H 1 #ifndef _S390_CACHEFLUSH_H
2 #define _S390_CACHEFLUSH_H 2 #define _S390_CACHEFLUSH_H
3 3
4 /* Keep includes the same across arches. */ 4 /* Keep includes the same across arches. */
5 #include <linux/mm.h> 5 #include <linux/mm.h>
6 6
7 /* Caches aren't brain-dead on the s390. */ 7 /* Caches aren't brain-dead on the s390. */
8 #define flush_cache_all() do { } while (0) 8 #define flush_cache_all() do { } while (0)
9 #define flush_cache_mm(mm) do { } while (0) 9 #define flush_cache_mm(mm) do { } while (0)
10 #define flush_cache_dup_mm(mm) do { } while (0)
10 #define flush_cache_range(vma, start, end) do { } while (0) 11 #define flush_cache_range(vma, start, end) do { } while (0)
11 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0) 12 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
12 #define flush_dcache_page(page) do { } while (0) 13 #define flush_dcache_page(page) do { } while (0)
13 #define flush_dcache_mmap_lock(mapping) do { } while (0) 14 #define flush_dcache_mmap_lock(mapping) do { } while (0)
14 #define flush_dcache_mmap_unlock(mapping) do { } while (0) 15 #define flush_dcache_mmap_unlock(mapping) do { } while (0)
15 #define flush_icache_range(start, end) do { } while (0) 16 #define flush_icache_range(start, end) do { } while (0)
16 #define flush_icache_page(vma,pg) do { } while (0) 17 #define flush_icache_page(vma,pg) do { } while (0)
17 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0) 18 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
18 #define flush_cache_vmap(start, end) do { } while (0) 19 #define flush_cache_vmap(start, end) do { } while (0)
19 #define flush_cache_vunmap(start, end) do { } while (0) 20 #define flush_cache_vunmap(start, end) do { } while (0)
20 21
21 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ 22 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
22 memcpy(dst, src, len) 23 memcpy(dst, src, len)
23 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \ 24 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
24 memcpy(dst, src, len) 25 memcpy(dst, src, len)
25 26
26 #endif /* _S390_CACHEFLUSH_H */ 27 #endif /* _S390_CACHEFLUSH_H */
27 28
include/asm-sh/cpu-sh2/cacheflush.h
1 /* 1 /*
2 * include/asm-sh/cpu-sh2/cacheflush.h 2 * include/asm-sh/cpu-sh2/cacheflush.h
3 * 3 *
4 * Copyright (C) 2003 Paul Mundt 4 * Copyright (C) 2003 Paul Mundt
5 * 5 *
6 * This file is subject to the terms and conditions of the GNU General Public 6 * This file is subject to the terms and conditions of the GNU General Public
7 * License. See the file "COPYING" in the main directory of this archive 7 * License. See the file "COPYING" in the main directory of this archive
8 * for more details. 8 * for more details.
9 */ 9 */
10 #ifndef __ASM_CPU_SH2_CACHEFLUSH_H 10 #ifndef __ASM_CPU_SH2_CACHEFLUSH_H
11 #define __ASM_CPU_SH2_CACHEFLUSH_H 11 #define __ASM_CPU_SH2_CACHEFLUSH_H
12 12
13 /* 13 /*
14 * Cache flushing: 14 * Cache flushing:
15 * 15 *
16 * - flush_cache_all() flushes entire cache 16 * - flush_cache_all() flushes entire cache
17 * - flush_cache_mm(mm) flushes the specified mm context's cache lines 17 * - flush_cache_mm(mm) flushes the specified mm context's cache lines
18 * - flush_cache_dup_mm(mm) handles cache flushing when forking
18 * - flush_cache_page(mm, vmaddr, pfn) flushes a single page 19 * - flush_cache_page(mm, vmaddr, pfn) flushes a single page
19 * - flush_cache_range(vma, start, end) flushes a range of pages 20 * - flush_cache_range(vma, start, end) flushes a range of pages
20 * 21 *
21 * - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache 22 * - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache
22 * - flush_icache_range(start, end) flushes(invalidates) a range for icache 23 * - flush_icache_range(start, end) flushes(invalidates) a range for icache
23 * - flush_icache_page(vma, pg) flushes(invalidates) a page for icache 24 * - flush_icache_page(vma, pg) flushes(invalidates) a page for icache
24 * 25 *
25 * Caches are indexed (effectively) by physical address on SH-2, so 26 * Caches are indexed (effectively) by physical address on SH-2, so
26 * we don't need them. 27 * we don't need them.
27 */ 28 */
28 #define flush_cache_all() do { } while (0) 29 #define flush_cache_all() do { } while (0)
29 #define flush_cache_mm(mm) do { } while (0) 30 #define flush_cache_mm(mm) do { } while (0)
31 #define flush_cache_dup_mm(mm) do { } while (0)
30 #define flush_cache_range(vma, start, end) do { } while (0) 32 #define flush_cache_range(vma, start, end) do { } while (0)
31 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0) 33 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
32 #define flush_dcache_page(page) do { } while (0) 34 #define flush_dcache_page(page) do { } while (0)
33 #define flush_dcache_mmap_lock(mapping) do { } while (0) 35 #define flush_dcache_mmap_lock(mapping) do { } while (0)
34 #define flush_dcache_mmap_unlock(mapping) do { } while (0) 36 #define flush_dcache_mmap_unlock(mapping) do { } while (0)
35 #define flush_icache_range(start, end) do { } while (0) 37 #define flush_icache_range(start, end) do { } while (0)
36 #define flush_icache_page(vma,pg) do { } while (0) 38 #define flush_icache_page(vma,pg) do { } while (0)
37 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0) 39 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
38 #define flush_cache_sigtramp(vaddr) do { } while (0) 40 #define flush_cache_sigtramp(vaddr) do { } while (0)
39 41
40 #define p3_cache_init() do { } while (0) 42 #define p3_cache_init() do { } while (0)
41 #endif /* __ASM_CPU_SH2_CACHEFLUSH_H */ 43 #endif /* __ASM_CPU_SH2_CACHEFLUSH_H */
42 44
43 45
include/asm-sh/cpu-sh3/cacheflush.h
1 /* 1 /*
2 * include/asm-sh/cpu-sh3/cacheflush.h 2 * include/asm-sh/cpu-sh3/cacheflush.h
3 * 3 *
4 * Copyright (C) 1999 Niibe Yutaka 4 * Copyright (C) 1999 Niibe Yutaka
5 * 5 *
6 * This file is subject to the terms and conditions of the GNU General Public 6 * This file is subject to the terms and conditions of the GNU General Public
7 * License. See the file "COPYING" in the main directory of this archive 7 * License. See the file "COPYING" in the main directory of this archive
8 * for more details. 8 * for more details.
9 */ 9 */
10 #ifndef __ASM_CPU_SH3_CACHEFLUSH_H 10 #ifndef __ASM_CPU_SH3_CACHEFLUSH_H
11 #define __ASM_CPU_SH3_CACHEFLUSH_H 11 #define __ASM_CPU_SH3_CACHEFLUSH_H
12 12
13 /* 13 /*
14 * Cache flushing: 14 * Cache flushing:
15 * 15 *
16 * - flush_cache_all() flushes entire cache 16 * - flush_cache_all() flushes entire cache
17 * - flush_cache_mm(mm) flushes the specified mm context's cache lines 17 * - flush_cache_mm(mm) flushes the specified mm context's cache lines
18 * - flush_cache_dup_mm(mm) handles cache flushing when forking
18 * - flush_cache_page(mm, vmaddr, pfn) flushes a single page 19 * - flush_cache_page(mm, vmaddr, pfn) flushes a single page
19 * - flush_cache_range(vma, start, end) flushes a range of pages 20 * - flush_cache_range(vma, start, end) flushes a range of pages
20 * 21 *
21 * - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache 22 * - flush_dcache_page(pg) flushes(wback&invalidates) a page for dcache
22 * - flush_icache_range(start, end) flushes(invalidates) a range for icache 23 * - flush_icache_range(start, end) flushes(invalidates) a range for icache
23 * - flush_icache_page(vma, pg) flushes(invalidates) a page for icache 24 * - flush_icache_page(vma, pg) flushes(invalidates) a page for icache
24 * 25 *
25 * Caches are indexed (effectively) by physical address on SH-3, so 26 * Caches are indexed (effectively) by physical address on SH-3, so
26 * we don't need them. 27 * we don't need them.
27 */ 28 */
28 29
29 #if defined(CONFIG_SH7705_CACHE_32KB) 30 #if defined(CONFIG_SH7705_CACHE_32KB)
30 31
31 /* SH7705 is an SH3 processor with 32KB cache. This has alias issues like the 32 /* SH7705 is an SH3 processor with 32KB cache. This has alias issues like the
32 * SH4. Unlike the SH4 this is a unified cache so we need to do some work 33 * SH4. Unlike the SH4 this is a unified cache so we need to do some work
33 * in mmap when 'exec'ing a new binary 34 * in mmap when 'exec'ing a new binary
34 */ 35 */
35 /* 32KB cache, 4kb PAGE sizes need to check bit 12 */ 36 /* 32KB cache, 4kb PAGE sizes need to check bit 12 */
36 #define CACHE_ALIAS 0x00001000 37 #define CACHE_ALIAS 0x00001000
37 38
38 #define PG_mapped PG_arch_1 39 #define PG_mapped PG_arch_1
39 40
40 void flush_cache_all(void); 41 void flush_cache_all(void);
41 void flush_cache_mm(struct mm_struct *mm); 42 void flush_cache_mm(struct mm_struct *mm);
43 #define flush_cache_dup_mm(mm) flush_cache_mm(mm)
42 void flush_cache_range(struct vm_area_struct *vma, unsigned long start, 44 void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
43 unsigned long end); 45 unsigned long end);
44 void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn); 46 void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn);
45 void flush_dcache_page(struct page *pg); 47 void flush_dcache_page(struct page *pg);
46 void flush_icache_range(unsigned long start, unsigned long end); 48 void flush_icache_range(unsigned long start, unsigned long end);
47 void flush_icache_page(struct vm_area_struct *vma, struct page *page); 49 void flush_icache_page(struct vm_area_struct *vma, struct page *page);
48 #else 50 #else
49 #define flush_cache_all() do { } while (0) 51 #define flush_cache_all() do { } while (0)
50 #define flush_cache_mm(mm) do { } while (0) 52 #define flush_cache_mm(mm) do { } while (0)
53 #define flush_cache_dup_mm(mm) do { } while (0)
51 #define flush_cache_range(vma, start, end) do { } while (0) 54 #define flush_cache_range(vma, start, end) do { } while (0)
52 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0) 55 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
53 #define flush_dcache_page(page) do { } while (0) 56 #define flush_dcache_page(page) do { } while (0)
54 #define flush_icache_range(start, end) do { } while (0) 57 #define flush_icache_range(start, end) do { } while (0)
55 #define flush_icache_page(vma,pg) do { } while (0) 58 #define flush_icache_page(vma,pg) do { } while (0)
56 #endif 59 #endif
57 60
58 #define flush_dcache_mmap_lock(mapping) do { } while (0) 61 #define flush_dcache_mmap_lock(mapping) do { } while (0)
59 #define flush_dcache_mmap_unlock(mapping) do { } while (0) 62 #define flush_dcache_mmap_unlock(mapping) do { } while (0)
60 63
61 /* SH3 has unified cache so no special action needed here */ 64 /* SH3 has unified cache so no special action needed here */
62 #define flush_cache_sigtramp(vaddr) do { } while (0) 65 #define flush_cache_sigtramp(vaddr) do { } while (0)
63 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0) 66 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
64 67
65 #define p3_cache_init() do { } while (0) 68 #define p3_cache_init() do { } while (0)
66 69
67 #endif /* __ASM_CPU_SH3_CACHEFLUSH_H */ 70 #endif /* __ASM_CPU_SH3_CACHEFLUSH_H */
68 71
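The SH7705 note above is the classic aliasing arithmetic: with a 32KB unified cache and 4KB pages, two virtual mappings of the same physical page land on different cache lines whenever they differ in bit 12, which is exactly what CACHE_ALIAS (0x00001000) encodes. A small stand-alone sketch of that check follows, using only the value defined in the header; the helper name is illustrative and not part of the SH code.

	/*
	 * Sketch only: two user mappings of one physical page conflict in the
	 * cache iff they differ in the CACHE_ALIAS bit (bit 12 on SH7705).
	 */
	#include <stdbool.h>
	#include <stdio.h>

	#define CACHE_ALIAS 0x00001000UL	/* bit 12, as defined for SH7705 */

	static bool aliases_conflict(unsigned long vaddr1, unsigned long vaddr2)
	{
		/* a flush is needed only when the aliasing bit differs */
		return ((vaddr1 ^ vaddr2) & CACHE_ALIAS) != 0;
	}

	int main(void)
	{
		printf("%d\n", aliases_conflict(0x10000000, 0x10001000)); /* 1: bit 12 differs */
		printf("%d\n", aliases_conflict(0x10000000, 0x10002000)); /* 0: same cache colour */
		return 0;
	}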
include/asm-sh/cpu-sh4/cacheflush.h
1 /* 1 /*
2 * include/asm-sh/cpu-sh4/cacheflush.h 2 * include/asm-sh/cpu-sh4/cacheflush.h
3 * 3 *
4 * Copyright (C) 1999 Niibe Yutaka 4 * Copyright (C) 1999 Niibe Yutaka
5 * Copyright (C) 2003 Paul Mundt 5 * Copyright (C) 2003 Paul Mundt
6 * 6 *
7 * This file is subject to the terms and conditions of the GNU General Public 7 * This file is subject to the terms and conditions of the GNU General Public
8 * License. See the file "COPYING" in the main directory of this archive 8 * License. See the file "COPYING" in the main directory of this archive
9 * for more details. 9 * for more details.
10 */ 10 */
11 #ifndef __ASM_CPU_SH4_CACHEFLUSH_H 11 #ifndef __ASM_CPU_SH4_CACHEFLUSH_H
12 #define __ASM_CPU_SH4_CACHEFLUSH_H 12 #define __ASM_CPU_SH4_CACHEFLUSH_H
13 13
14 /* 14 /*
15 * Caches are broken on SH-4 (unless we use write-through 15 * Caches are broken on SH-4 (unless we use write-through
16 * caching; in which case they're only semi-broken), 16 * caching; in which case they're only semi-broken),
17 * so we need them. 17 * so we need them.
18 */ 18 */
19 void flush_cache_all(void); 19 void flush_cache_all(void);
20 void flush_cache_mm(struct mm_struct *mm); 20 void flush_cache_mm(struct mm_struct *mm);
21 #define flush_cache_dup_mm(mm) flush_cache_mm(mm)
21 void flush_cache_range(struct vm_area_struct *vma, unsigned long start, 22 void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
22 unsigned long end); 23 unsigned long end);
23 void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, 24 void flush_cache_page(struct vm_area_struct *vma, unsigned long addr,
24 unsigned long pfn); 25 unsigned long pfn);
25 void flush_dcache_page(struct page *pg); 26 void flush_dcache_page(struct page *pg);
26 27
27 #define flush_dcache_mmap_lock(mapping) do { } while (0) 28 #define flush_dcache_mmap_lock(mapping) do { } while (0)
28 #define flush_dcache_mmap_unlock(mapping) do { } while (0) 29 #define flush_dcache_mmap_unlock(mapping) do { } while (0)
29 30
30 void flush_icache_range(unsigned long start, unsigned long end); 31 void flush_icache_range(unsigned long start, unsigned long end);
31 void flush_cache_sigtramp(unsigned long addr); 32 void flush_cache_sigtramp(unsigned long addr);
32 void flush_icache_user_range(struct vm_area_struct *vma, struct page *page, 33 void flush_icache_user_range(struct vm_area_struct *vma, struct page *page,
33 unsigned long addr, int len); 34 unsigned long addr, int len);
34 35
35 #define flush_icache_page(vma,pg) do { } while (0) 36 #define flush_icache_page(vma,pg) do { } while (0)
36 37
37 /* Initialization of P3 area for copy_user_page */ 38 /* Initialization of P3 area for copy_user_page */
38 void p3_cache_init(void); 39 void p3_cache_init(void);
39 40
40 #define PG_mapped PG_arch_1 41 #define PG_mapped PG_arch_1
41 42
42 #ifdef CONFIG_MMU 43 #ifdef CONFIG_MMU
43 extern int remap_area_pages(unsigned long addr, unsigned long phys_addr, 44 extern int remap_area_pages(unsigned long addr, unsigned long phys_addr,
44 unsigned long size, unsigned long flags); 45 unsigned long size, unsigned long flags);
45 #else /* CONFIG_MMU */ 46 #else /* CONFIG_MMU */
46 static inline int remap_area_pages(unsigned long addr, unsigned long phys_addr, 47 static inline int remap_area_pages(unsigned long addr, unsigned long phys_addr,
47 unsigned long size, unsigned long flags) 48 unsigned long size, unsigned long flags)
48 { 49 {
49 return 0; 50 return 0;
50 } 51 }
51 #endif /* CONFIG_MMU */ 52 #endif /* CONFIG_MMU */
52 #endif /* __ASM_CPU_SH4_CACHEFLUSH_H */ 53 #endif /* __ASM_CPU_SH4_CACHEFLUSH_H */
53 54
include/asm-sh64/cacheflush.h
1 #ifndef __ASM_SH64_CACHEFLUSH_H 1 #ifndef __ASM_SH64_CACHEFLUSH_H
2 #define __ASM_SH64_CACHEFLUSH_H 2 #define __ASM_SH64_CACHEFLUSH_H
3 3
4 #ifndef __ASSEMBLY__ 4 #ifndef __ASSEMBLY__
5 5
6 #include <asm/page.h> 6 #include <asm/page.h>
7 7
8 struct vm_area_struct; 8 struct vm_area_struct;
9 struct page; 9 struct page;
10 struct mm_struct; 10 struct mm_struct;
11 11
12 extern void flush_cache_all(void); 12 extern void flush_cache_all(void);
13 extern void flush_cache_mm(struct mm_struct *mm); 13 extern void flush_cache_mm(struct mm_struct *mm);
14 extern void flush_cache_sigtramp(unsigned long start, unsigned long end); 14 extern void flush_cache_sigtramp(unsigned long start, unsigned long end);
15 extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, 15 extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
16 unsigned long end); 16 unsigned long end);
17 extern void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn); 17 extern void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn);
18 extern void flush_dcache_page(struct page *pg); 18 extern void flush_dcache_page(struct page *pg);
19 extern void flush_icache_range(unsigned long start, unsigned long end); 19 extern void flush_icache_range(unsigned long start, unsigned long end);
20 extern void flush_icache_user_range(struct vm_area_struct *vma, 20 extern void flush_icache_user_range(struct vm_area_struct *vma,
21 struct page *page, unsigned long addr, 21 struct page *page, unsigned long addr,
22 int len); 22 int len);
23 23
24 #define flush_cache_dup_mm(mm) flush_cache_mm(mm)
25
24 #define flush_dcache_mmap_lock(mapping) do { } while (0) 26 #define flush_dcache_mmap_lock(mapping) do { } while (0)
25 #define flush_dcache_mmap_unlock(mapping) do { } while (0) 27 #define flush_dcache_mmap_unlock(mapping) do { } while (0)
26 28
27 #define flush_cache_vmap(start, end) flush_cache_all() 29 #define flush_cache_vmap(start, end) flush_cache_all()
28 #define flush_cache_vunmap(start, end) flush_cache_all() 30 #define flush_cache_vunmap(start, end) flush_cache_all()
29 31
30 #define flush_icache_page(vma, page) do { } while (0) 32 #define flush_icache_page(vma, page) do { } while (0)
31 33
32 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ 34 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
33 do { \ 35 do { \
34 flush_cache_page(vma, vaddr, page_to_pfn(page));\ 36 flush_cache_page(vma, vaddr, page_to_pfn(page));\
35 memcpy(dst, src, len); \ 37 memcpy(dst, src, len); \
36 flush_icache_user_range(vma, page, vaddr, len); \ 38 flush_icache_user_range(vma, page, vaddr, len); \
37 } while (0) 39 } while (0)
38 40
39 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \ 41 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
40 do { \ 42 do { \
41 flush_cache_page(vma, vaddr, page_to_pfn(page));\ 43 flush_cache_page(vma, vaddr, page_to_pfn(page));\
42 memcpy(dst, src, len); \ 44 memcpy(dst, src, len); \
43 } while (0) 45 } while (0)
44 46
45 #endif /* __ASSEMBLY__ */ 47 #endif /* __ASSEMBLY__ */
46 48
47 #endif /* __ASM_SH64_CACHEFLUSH_H */ 49 #endif /* __ASM_SH64_CACHEFLUSH_H */
48 50
49 51
include/asm-sparc/cacheflush.h
1 #ifndef _SPARC_CACHEFLUSH_H 1 #ifndef _SPARC_CACHEFLUSH_H
2 #define _SPARC_CACHEFLUSH_H 2 #define _SPARC_CACHEFLUSH_H
3 3
4 #include <linux/mm.h> /* Common for other includes */ 4 #include <linux/mm.h> /* Common for other includes */
5 // #include <linux/kernel.h> from pgalloc.h 5 // #include <linux/kernel.h> from pgalloc.h
6 // #include <linux/sched.h> from pgalloc.h 6 // #include <linux/sched.h> from pgalloc.h
7 7
8 // #include <asm/page.h> 8 // #include <asm/page.h>
9 #include <asm/btfixup.h> 9 #include <asm/btfixup.h>
10 10
11 /* 11 /*
12 * Fine grained cache flushing. 12 * Fine grained cache flushing.
13 */ 13 */
14 #ifdef CONFIG_SMP 14 #ifdef CONFIG_SMP
15 15
16 BTFIXUPDEF_CALL(void, local_flush_cache_all, void) 16 BTFIXUPDEF_CALL(void, local_flush_cache_all, void)
17 BTFIXUPDEF_CALL(void, local_flush_cache_mm, struct mm_struct *) 17 BTFIXUPDEF_CALL(void, local_flush_cache_mm, struct mm_struct *)
18 BTFIXUPDEF_CALL(void, local_flush_cache_range, struct vm_area_struct *, unsigned long, unsigned long) 18 BTFIXUPDEF_CALL(void, local_flush_cache_range, struct vm_area_struct *, unsigned long, unsigned long)
19 BTFIXUPDEF_CALL(void, local_flush_cache_page, struct vm_area_struct *, unsigned long) 19 BTFIXUPDEF_CALL(void, local_flush_cache_page, struct vm_area_struct *, unsigned long)
20 20
21 #define local_flush_cache_all() BTFIXUP_CALL(local_flush_cache_all)() 21 #define local_flush_cache_all() BTFIXUP_CALL(local_flush_cache_all)()
22 #define local_flush_cache_mm(mm) BTFIXUP_CALL(local_flush_cache_mm)(mm) 22 #define local_flush_cache_mm(mm) BTFIXUP_CALL(local_flush_cache_mm)(mm)
23 #define local_flush_cache_range(vma,start,end) BTFIXUP_CALL(local_flush_cache_range)(vma,start,end) 23 #define local_flush_cache_range(vma,start,end) BTFIXUP_CALL(local_flush_cache_range)(vma,start,end)
24 #define local_flush_cache_page(vma,addr) BTFIXUP_CALL(local_flush_cache_page)(vma,addr) 24 #define local_flush_cache_page(vma,addr) BTFIXUP_CALL(local_flush_cache_page)(vma,addr)
25 25
26 BTFIXUPDEF_CALL(void, local_flush_page_to_ram, unsigned long) 26 BTFIXUPDEF_CALL(void, local_flush_page_to_ram, unsigned long)
27 BTFIXUPDEF_CALL(void, local_flush_sig_insns, struct mm_struct *, unsigned long) 27 BTFIXUPDEF_CALL(void, local_flush_sig_insns, struct mm_struct *, unsigned long)
28 28
29 #define local_flush_page_to_ram(addr) BTFIXUP_CALL(local_flush_page_to_ram)(addr) 29 #define local_flush_page_to_ram(addr) BTFIXUP_CALL(local_flush_page_to_ram)(addr)
30 #define local_flush_sig_insns(mm,insn_addr) BTFIXUP_CALL(local_flush_sig_insns)(mm,insn_addr) 30 #define local_flush_sig_insns(mm,insn_addr) BTFIXUP_CALL(local_flush_sig_insns)(mm,insn_addr)
31 31
32 extern void smp_flush_cache_all(void); 32 extern void smp_flush_cache_all(void);
33 extern void smp_flush_cache_mm(struct mm_struct *mm); 33 extern void smp_flush_cache_mm(struct mm_struct *mm);
34 extern void smp_flush_cache_range(struct vm_area_struct *vma, 34 extern void smp_flush_cache_range(struct vm_area_struct *vma,
35 unsigned long start, 35 unsigned long start,
36 unsigned long end); 36 unsigned long end);
37 extern void smp_flush_cache_page(struct vm_area_struct *vma, unsigned long page); 37 extern void smp_flush_cache_page(struct vm_area_struct *vma, unsigned long page);
38 38
39 extern void smp_flush_page_to_ram(unsigned long page); 39 extern void smp_flush_page_to_ram(unsigned long page);
40 extern void smp_flush_sig_insns(struct mm_struct *mm, unsigned long insn_addr); 40 extern void smp_flush_sig_insns(struct mm_struct *mm, unsigned long insn_addr);
41 41
42 #endif /* CONFIG_SMP */ 42 #endif /* CONFIG_SMP */
43 43
44 BTFIXUPDEF_CALL(void, flush_cache_all, void) 44 BTFIXUPDEF_CALL(void, flush_cache_all, void)
45 BTFIXUPDEF_CALL(void, flush_cache_mm, struct mm_struct *) 45 BTFIXUPDEF_CALL(void, flush_cache_mm, struct mm_struct *)
46 BTFIXUPDEF_CALL(void, flush_cache_range, struct vm_area_struct *, unsigned long, unsigned long) 46 BTFIXUPDEF_CALL(void, flush_cache_range, struct vm_area_struct *, unsigned long, unsigned long)
47 BTFIXUPDEF_CALL(void, flush_cache_page, struct vm_area_struct *, unsigned long) 47 BTFIXUPDEF_CALL(void, flush_cache_page, struct vm_area_struct *, unsigned long)
48 48
49 #define flush_cache_all() BTFIXUP_CALL(flush_cache_all)() 49 #define flush_cache_all() BTFIXUP_CALL(flush_cache_all)()
50 #define flush_cache_mm(mm) BTFIXUP_CALL(flush_cache_mm)(mm) 50 #define flush_cache_mm(mm) BTFIXUP_CALL(flush_cache_mm)(mm)
51 #define flush_cache_dup_mm(mm) BTFIXUP_CALL(flush_cache_mm)(mm)
51 #define flush_cache_range(vma,start,end) BTFIXUP_CALL(flush_cache_range)(vma,start,end) 52 #define flush_cache_range(vma,start,end) BTFIXUP_CALL(flush_cache_range)(vma,start,end)
52 #define flush_cache_page(vma,addr,pfn) BTFIXUP_CALL(flush_cache_page)(vma,addr) 53 #define flush_cache_page(vma,addr,pfn) BTFIXUP_CALL(flush_cache_page)(vma,addr)
53 #define flush_icache_range(start, end) do { } while (0) 54 #define flush_icache_range(start, end) do { } while (0)
54 #define flush_icache_page(vma, pg) do { } while (0) 55 #define flush_icache_page(vma, pg) do { } while (0)
55 56
56 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0) 57 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
57 58
58 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ 59 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
59 do { \ 60 do { \
60 flush_cache_page(vma, vaddr, page_to_pfn(page));\ 61 flush_cache_page(vma, vaddr, page_to_pfn(page));\
61 memcpy(dst, src, len); \ 62 memcpy(dst, src, len); \
62 } while (0) 63 } while (0)
63 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \ 64 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
64 do { \ 65 do { \
65 flush_cache_page(vma, vaddr, page_to_pfn(page));\ 66 flush_cache_page(vma, vaddr, page_to_pfn(page));\
66 memcpy(dst, src, len); \ 67 memcpy(dst, src, len); \
67 } while (0) 68 } while (0)
68 69
69 BTFIXUPDEF_CALL(void, __flush_page_to_ram, unsigned long) 70 BTFIXUPDEF_CALL(void, __flush_page_to_ram, unsigned long)
70 BTFIXUPDEF_CALL(void, flush_sig_insns, struct mm_struct *, unsigned long) 71 BTFIXUPDEF_CALL(void, flush_sig_insns, struct mm_struct *, unsigned long)
71 72
72 #define __flush_page_to_ram(addr) BTFIXUP_CALL(__flush_page_to_ram)(addr) 73 #define __flush_page_to_ram(addr) BTFIXUP_CALL(__flush_page_to_ram)(addr)
73 #define flush_sig_insns(mm,insn_addr) BTFIXUP_CALL(flush_sig_insns)(mm,insn_addr) 74 #define flush_sig_insns(mm,insn_addr) BTFIXUP_CALL(flush_sig_insns)(mm,insn_addr)
74 75
75 extern void sparc_flush_page_to_ram(struct page *page); 76 extern void sparc_flush_page_to_ram(struct page *page);
76 77
77 #define flush_dcache_page(page) sparc_flush_page_to_ram(page) 78 #define flush_dcache_page(page) sparc_flush_page_to_ram(page)
78 #define flush_dcache_mmap_lock(mapping) do { } while (0) 79 #define flush_dcache_mmap_lock(mapping) do { } while (0)
79 #define flush_dcache_mmap_unlock(mapping) do { } while (0) 80 #define flush_dcache_mmap_unlock(mapping) do { } while (0)
80 81
81 #define flush_cache_vmap(start, end) flush_cache_all() 82 #define flush_cache_vmap(start, end) flush_cache_all()
82 #define flush_cache_vunmap(start, end) flush_cache_all() 83 #define flush_cache_vunmap(start, end) flush_cache_all()
83 84
84 #endif /* _SPARC_CACHEFLUSH_H */ 85 #endif /* _SPARC_CACHEFLUSH_H */
85 86
include/asm-sparc64/cacheflush.h
1 #ifndef _SPARC64_CACHEFLUSH_H 1 #ifndef _SPARC64_CACHEFLUSH_H
2 #define _SPARC64_CACHEFLUSH_H 2 #define _SPARC64_CACHEFLUSH_H
3 3
4 #include <asm/page.h> 4 #include <asm/page.h>
5 5
6 #ifndef __ASSEMBLY__ 6 #ifndef __ASSEMBLY__
7 7
8 #include <linux/mm.h> 8 #include <linux/mm.h>
9 9
10 /* Cache flush operations. */ 10 /* Cache flush operations. */
11 11
12 /* These are the same regardless of whether this is an SMP kernel or not. */ 12 /* These are the same regardless of whether this is an SMP kernel or not. */
13 #define flush_cache_mm(__mm) \ 13 #define flush_cache_mm(__mm) \
14 do { if ((__mm) == current->mm) flushw_user(); } while(0) 14 do { if ((__mm) == current->mm) flushw_user(); } while(0)
15 #define flush_cache_dup_mm(mm) flush_cache_mm(mm)
15 #define flush_cache_range(vma, start, end) \ 16 #define flush_cache_range(vma, start, end) \
16 flush_cache_mm((vma)->vm_mm) 17 flush_cache_mm((vma)->vm_mm)
17 #define flush_cache_page(vma, page, pfn) \ 18 #define flush_cache_page(vma, page, pfn) \
18 flush_cache_mm((vma)->vm_mm) 19 flush_cache_mm((vma)->vm_mm)
19 20
20 /* 21 /*
21 * On spitfire, the icache doesn't snoop local stores and we don't 22 * On spitfire, the icache doesn't snoop local stores and we don't
22 * use block commit stores (which invalidate icache lines) during 23 * use block commit stores (which invalidate icache lines) during
23 * module load, so we need this. 24 * module load, so we need this.
24 */ 25 */
25 extern void flush_icache_range(unsigned long start, unsigned long end); 26 extern void flush_icache_range(unsigned long start, unsigned long end);
26 extern void __flush_icache_page(unsigned long); 27 extern void __flush_icache_page(unsigned long);
27 28
28 extern void __flush_dcache_page(void *addr, int flush_icache); 29 extern void __flush_dcache_page(void *addr, int flush_icache);
29 extern void flush_dcache_page_impl(struct page *page); 30 extern void flush_dcache_page_impl(struct page *page);
30 #ifdef CONFIG_SMP 31 #ifdef CONFIG_SMP
31 extern void smp_flush_dcache_page_impl(struct page *page, int cpu); 32 extern void smp_flush_dcache_page_impl(struct page *page, int cpu);
32 extern void flush_dcache_page_all(struct mm_struct *mm, struct page *page); 33 extern void flush_dcache_page_all(struct mm_struct *mm, struct page *page);
33 #else 34 #else
34 #define smp_flush_dcache_page_impl(page,cpu) flush_dcache_page_impl(page) 35 #define smp_flush_dcache_page_impl(page,cpu) flush_dcache_page_impl(page)
35 #define flush_dcache_page_all(mm,page) flush_dcache_page_impl(page) 36 #define flush_dcache_page_all(mm,page) flush_dcache_page_impl(page)
36 #endif 37 #endif
37 38
38 extern void __flush_dcache_range(unsigned long start, unsigned long end); 39 extern void __flush_dcache_range(unsigned long start, unsigned long end);
39 extern void flush_dcache_page(struct page *page); 40 extern void flush_dcache_page(struct page *page);
40 41
41 #define flush_icache_page(vma, pg) do { } while(0) 42 #define flush_icache_page(vma, pg) do { } while(0)
42 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0) 43 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
43 44
44 extern void flush_ptrace_access(struct vm_area_struct *, struct page *, 45 extern void flush_ptrace_access(struct vm_area_struct *, struct page *,
45 unsigned long uaddr, void *kaddr, 46 unsigned long uaddr, void *kaddr,
46 unsigned long len, int write); 47 unsigned long len, int write);
47 48
48 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ 49 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
49 do { \ 50 do { \
50 flush_cache_page(vma, vaddr, page_to_pfn(page)); \ 51 flush_cache_page(vma, vaddr, page_to_pfn(page)); \
51 memcpy(dst, src, len); \ 52 memcpy(dst, src, len); \
52 flush_ptrace_access(vma, page, vaddr, src, len, 0); \ 53 flush_ptrace_access(vma, page, vaddr, src, len, 0); \
53 } while (0) 54 } while (0)
54 55
55 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \ 56 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
56 do { \ 57 do { \
57 flush_cache_page(vma, vaddr, page_to_pfn(page)); \ 58 flush_cache_page(vma, vaddr, page_to_pfn(page)); \
58 memcpy(dst, src, len); \ 59 memcpy(dst, src, len); \
59 flush_ptrace_access(vma, page, vaddr, dst, len, 1); \ 60 flush_ptrace_access(vma, page, vaddr, dst, len, 1); \
60 } while (0) 61 } while (0)
61 62
62 #define flush_dcache_mmap_lock(mapping) do { } while (0) 63 #define flush_dcache_mmap_lock(mapping) do { } while (0)
63 #define flush_dcache_mmap_unlock(mapping) do { } while (0) 64 #define flush_dcache_mmap_unlock(mapping) do { } while (0)
64 65
65 #define flush_cache_vmap(start, end) do { } while (0) 66 #define flush_cache_vmap(start, end) do { } while (0)
66 #define flush_cache_vunmap(start, end) do { } while (0) 67 #define flush_cache_vunmap(start, end) do { } while (0)
67 68
68 #ifdef CONFIG_DEBUG_PAGEALLOC 69 #ifdef CONFIG_DEBUG_PAGEALLOC
69 /* internal debugging function */ 70 /* internal debugging function */
70 void kernel_map_pages(struct page *page, int numpages, int enable); 71 void kernel_map_pages(struct page *page, int numpages, int enable);
71 #endif 72 #endif
72 73
73 #endif /* !__ASSEMBLY__ */ 74 #endif /* !__ASSEMBLY__ */
74 75
75 #endif /* _SPARC64_CACHEFLUSH_H */ 76 #endif /* _SPARC64_CACHEFLUSH_H */
76 77
include/asm-v850/cacheflush.h
1 /* 1 /*
2 * include/asm-v850/cacheflush.h 2 * include/asm-v850/cacheflush.h
3 * 3 *
4 * Copyright (C) 2001,02,03 NEC Electronics Corporation 4 * Copyright (C) 2001,02,03 NEC Electronics Corporation
5 * Copyright (C) 2001,02,03 Miles Bader <miles@gnu.org> 5 * Copyright (C) 2001,02,03 Miles Bader <miles@gnu.org>
6 * 6 *
7 * This file is subject to the terms and conditions of the GNU General 7 * This file is subject to the terms and conditions of the GNU General
8 * Public License. See the file COPYING in the main directory of this 8 * Public License. See the file COPYING in the main directory of this
9 * archive for more details. 9 * archive for more details.
10 * 10 *
11 * Written by Miles Bader <miles@gnu.org> 11 * Written by Miles Bader <miles@gnu.org>
12 */ 12 */
13 13
14 #ifndef __V850_CACHEFLUSH_H__ 14 #ifndef __V850_CACHEFLUSH_H__
15 #define __V850_CACHEFLUSH_H__ 15 #define __V850_CACHEFLUSH_H__
16 16
17 /* Somebody depends on this; sigh... */ 17 /* Somebody depends on this; sigh... */
18 #include <linux/mm.h> 18 #include <linux/mm.h>
19 19
20 #include <asm/machdep.h> 20 #include <asm/machdep.h>
21 21
22 22
23 /* The following are all used by the kernel in ways that only affect 23 /* The following are all used by the kernel in ways that only affect
24 systems with MMUs, so we don't need them. */ 24 systems with MMUs, so we don't need them. */
25 #define flush_cache_all() ((void)0) 25 #define flush_cache_all() ((void)0)
26 #define flush_cache_mm(mm) ((void)0) 26 #define flush_cache_mm(mm) ((void)0)
27 #define flush_cache_dup_mm(mm) ((void)0)
27 #define flush_cache_range(vma, start, end) ((void)0) 28 #define flush_cache_range(vma, start, end) ((void)0)
28 #define flush_cache_page(vma, vmaddr, pfn) ((void)0) 29 #define flush_cache_page(vma, vmaddr, pfn) ((void)0)
29 #define flush_dcache_page(page) ((void)0) 30 #define flush_dcache_page(page) ((void)0)
30 #define flush_dcache_mmap_lock(mapping) ((void)0) 31 #define flush_dcache_mmap_lock(mapping) ((void)0)
31 #define flush_dcache_mmap_unlock(mapping) ((void)0) 32 #define flush_dcache_mmap_unlock(mapping) ((void)0)
32 #define flush_cache_vmap(start, end) ((void)0) 33 #define flush_cache_vmap(start, end) ((void)0)
33 #define flush_cache_vunmap(start, end) ((void)0) 34 #define flush_cache_vunmap(start, end) ((void)0)
34 35
35 #ifdef CONFIG_NO_CACHE 36 #ifdef CONFIG_NO_CACHE
36 37
37 /* Some systems have no cache at all, in which case we don't need these 38 /* Some systems have no cache at all, in which case we don't need these
38 either. */ 39 either. */
39 #define flush_icache() ((void)0) 40 #define flush_icache() ((void)0)
40 #define flush_icache_range(start, end) ((void)0) 41 #define flush_icache_range(start, end) ((void)0)
41 #define flush_icache_page(vma,pg) ((void)0) 42 #define flush_icache_page(vma,pg) ((void)0)
42 #define flush_icache_user_range(vma,pg,adr,len) ((void)0) 43 #define flush_icache_user_range(vma,pg,adr,len) ((void)0)
43 #define flush_cache_sigtramp(vaddr) ((void)0) 44 #define flush_cache_sigtramp(vaddr) ((void)0)
44 45
45 #else /* !CONFIG_NO_CACHE */ 46 #else /* !CONFIG_NO_CACHE */
46 47
47 struct page; 48 struct page;
48 struct mm_struct; 49 struct mm_struct;
49 struct vm_area_struct; 50 struct vm_area_struct;
50 51
51 /* Otherwise, somebody had better define them. */ 52 /* Otherwise, somebody had better define them. */
52 extern void flush_icache (void); 53 extern void flush_icache (void);
53 extern void flush_icache_range (unsigned long start, unsigned long end); 54 extern void flush_icache_range (unsigned long start, unsigned long end);
54 extern void flush_icache_page (struct vm_area_struct *vma, struct page *page); 55 extern void flush_icache_page (struct vm_area_struct *vma, struct page *page);
55 extern void flush_icache_user_range (struct vm_area_struct *vma, 56 extern void flush_icache_user_range (struct vm_area_struct *vma,
56 struct page *page, 57 struct page *page,
57 unsigned long adr, int len); 58 unsigned long adr, int len);
58 extern void flush_cache_sigtramp (unsigned long addr); 59 extern void flush_cache_sigtramp (unsigned long addr);
59 60
60 #endif /* CONFIG_NO_CACHE */ 61 #endif /* CONFIG_NO_CACHE */
61 62
62 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ 63 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
63 do { memcpy(dst, src, len); \ 64 do { memcpy(dst, src, len); \
64 flush_icache_user_range(vma, page, vaddr, len); \ 65 flush_icache_user_range(vma, page, vaddr, len); \
65 } while (0) 66 } while (0)
66 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \ 67 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
67 memcpy(dst, src, len) 68 memcpy(dst, src, len)
68 69
69 #endif /* __V850_CACHEFLUSH_H__ */ 70 #endif /* __V850_CACHEFLUSH_H__ */
70 71
include/asm-x86_64/cacheflush.h
1 #ifndef _X8664_CACHEFLUSH_H 1 #ifndef _X8664_CACHEFLUSH_H
2 #define _X8664_CACHEFLUSH_H 2 #define _X8664_CACHEFLUSH_H
3 3
4 /* Keep includes the same across arches. */ 4 /* Keep includes the same across arches. */
5 #include <linux/mm.h> 5 #include <linux/mm.h>
6 6
7 /* Caches aren't brain-dead on the intel. */ 7 /* Caches aren't brain-dead on the intel. */
8 #define flush_cache_all() do { } while (0) 8 #define flush_cache_all() do { } while (0)
9 #define flush_cache_mm(mm) do { } while (0) 9 #define flush_cache_mm(mm) do { } while (0)
10 #define flush_cache_dup_mm(mm) do { } while (0)
10 #define flush_cache_range(vma, start, end) do { } while (0) 11 #define flush_cache_range(vma, start, end) do { } while (0)
11 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0) 12 #define flush_cache_page(vma, vmaddr, pfn) do { } while (0)
12 #define flush_dcache_page(page) do { } while (0) 13 #define flush_dcache_page(page) do { } while (0)
13 #define flush_dcache_mmap_lock(mapping) do { } while (0) 14 #define flush_dcache_mmap_lock(mapping) do { } while (0)
14 #define flush_dcache_mmap_unlock(mapping) do { } while (0) 15 #define flush_dcache_mmap_unlock(mapping) do { } while (0)
15 #define flush_icache_range(start, end) do { } while (0) 16 #define flush_icache_range(start, end) do { } while (0)
16 #define flush_icache_page(vma,pg) do { } while (0) 17 #define flush_icache_page(vma,pg) do { } while (0)
17 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0) 18 #define flush_icache_user_range(vma,pg,adr,len) do { } while (0)
18 #define flush_cache_vmap(start, end) do { } while (0) 19 #define flush_cache_vmap(start, end) do { } while (0)
19 #define flush_cache_vunmap(start, end) do { } while (0) 20 #define flush_cache_vunmap(start, end) do { } while (0)
20 21
21 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ 22 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
22 memcpy(dst, src, len) 23 memcpy(dst, src, len)
23 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \ 24 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
24 memcpy(dst, src, len) 25 memcpy(dst, src, len)
25 26
26 void global_flush_tlb(void); 27 void global_flush_tlb(void);
27 int change_page_attr(struct page *page, int numpages, pgprot_t prot); 28 int change_page_attr(struct page *page, int numpages, pgprot_t prot);
28 int change_page_attr_addr(unsigned long addr, int numpages, pgprot_t prot); 29 int change_page_attr_addr(unsigned long addr, int numpages, pgprot_t prot);
29 30
30 #ifdef CONFIG_DEBUG_RODATA 31 #ifdef CONFIG_DEBUG_RODATA
31 void mark_rodata_ro(void); 32 void mark_rodata_ro(void);
32 #endif 33 #endif
33 34
34 #endif /* _X8664_CACHEFLUSH_H */ 35 #endif /* _X8664_CACHEFLUSH_H */
35 36
include/asm-xtensa/cacheflush.h
1 /* 1 /*
2 * include/asm-xtensa/cacheflush.h 2 * include/asm-xtensa/cacheflush.h
3 * 3 *
4 * This file is subject to the terms and conditions of the GNU General Public 4 * This file is subject to the terms and conditions of the GNU General Public
5 * License. See the file "COPYING" in the main directory of this archive 5 * License. See the file "COPYING" in the main directory of this archive
6 * for more details. 6 * for more details.
7 * 7 *
8 * (C) 2001 - 2006 Tensilica Inc. 8 * (C) 2001 - 2006 Tensilica Inc.
9 */ 9 */
10 10
11 #ifndef _XTENSA_CACHEFLUSH_H 11 #ifndef _XTENSA_CACHEFLUSH_H
12 #define _XTENSA_CACHEFLUSH_H 12 #define _XTENSA_CACHEFLUSH_H
13 13
14 #ifdef __KERNEL__ 14 #ifdef __KERNEL__
15 15
16 #include <linux/mm.h> 16 #include <linux/mm.h>
17 #include <asm/processor.h> 17 #include <asm/processor.h>
18 #include <asm/page.h> 18 #include <asm/page.h>
19 19
20 /* 20 /*
21 * flush and invalidate data cache, invalidate instruction cache: 21 * flush and invalidate data cache, invalidate instruction cache:
22 * 22 *
23 * __flush_invalidate_cache_all() 23 * __flush_invalidate_cache_all()
24 * __flush_invalidate_cache_range(from,size) 24 * __flush_invalidate_cache_range(from,size)
25 * 25 *
26 * invalidate data or instruction cache: 26 * invalidate data or instruction cache:
27 * 27 *
28 * __invalidate_icache_all() 28 * __invalidate_icache_all()
29 * __invalidate_icache_page(adr) 29 * __invalidate_icache_page(adr)
30 * __invalidate_dcache_page(adr) 30 * __invalidate_dcache_page(adr)
31 * __invalidate_icache_range(from,size) 31 * __invalidate_icache_range(from,size)
32 * __invalidate_dcache_range(from,size) 32 * __invalidate_dcache_range(from,size)
33 * 33 *
34 * flush data cache: 34 * flush data cache:
35 * 35 *
36 * __flush_dcache_page(adr) 36 * __flush_dcache_page(adr)
37 * 37 *
38 * flush and invalidate data cache: 38 * flush and invalidate data cache:
39 * 39 *
40 * __flush_invalidate_dcache_all() 40 * __flush_invalidate_dcache_all()
41 * __flush_invalidate_dcache_page(adr) 41 * __flush_invalidate_dcache_page(adr)
42 * __flush_invalidate_dcache_range(from,size) 42 * __flush_invalidate_dcache_range(from,size)
43 */ 43 */
44 44
45 extern void __flush_invalidate_cache_all(void); 45 extern void __flush_invalidate_cache_all(void);
46 extern void __flush_invalidate_cache_range(unsigned long, unsigned long); 46 extern void __flush_invalidate_cache_range(unsigned long, unsigned long);
47 extern void __flush_invalidate_dcache_all(void); 47 extern void __flush_invalidate_dcache_all(void);
48 extern void __invalidate_icache_all(void); 48 extern void __invalidate_icache_all(void);
49 49
50 extern void __invalidate_dcache_page(unsigned long); 50 extern void __invalidate_dcache_page(unsigned long);
51 extern void __invalidate_icache_page(unsigned long); 51 extern void __invalidate_icache_page(unsigned long);
52 extern void __invalidate_icache_range(unsigned long, unsigned long); 52 extern void __invalidate_icache_range(unsigned long, unsigned long);
53 extern void __invalidate_dcache_range(unsigned long, unsigned long); 53 extern void __invalidate_dcache_range(unsigned long, unsigned long);
54 54
55 #if XCHAL_DCACHE_IS_WRITEBACK 55 #if XCHAL_DCACHE_IS_WRITEBACK
56 extern void __flush_dcache_page(unsigned long); 56 extern void __flush_dcache_page(unsigned long);
57 extern void __flush_invalidate_dcache_page(unsigned long); 57 extern void __flush_invalidate_dcache_page(unsigned long);
58 extern void __flush_invalidate_dcache_range(unsigned long, unsigned long); 58 extern void __flush_invalidate_dcache_range(unsigned long, unsigned long);
59 #else 59 #else
60 # define __flush_dcache_page(p) do { } while(0) 60 # define __flush_dcache_page(p) do { } while(0)
61 # define __flush_invalidate_dcache_page(p) do { } while(0) 61 # define __flush_invalidate_dcache_page(p) do { } while(0)
62 # define __flush_invalidate_dcache_range(p,s) do { } while(0) 62 # define __flush_invalidate_dcache_range(p,s) do { } while(0)
63 #endif 63 #endif
64 64
65 /* 65 /*
66 * We have physically tagged caches - nothing to do here - 66 * We have physically tagged caches - nothing to do here -
67 * unless we have cache aliasing. 67 * unless we have cache aliasing.
68 * 68 *
69 * Pages can get remapped. Because this might change the 'color' of that page, 69 * Pages can get remapped. Because this might change the 'color' of that page,
70 * we have to flush the cache before the PTE is changed. 70 * we have to flush the cache before the PTE is changed.
71 * (see also Documentation/cachetlb.txt) 71 * (see also Documentation/cachetlb.txt)
72 */ 72 */
73 73
74 #if (DCACHE_WAY_SIZE > PAGE_SIZE) && XCHAL_DCACHE_IS_WRITEBACK 74 #if (DCACHE_WAY_SIZE > PAGE_SIZE) && XCHAL_DCACHE_IS_WRITEBACK
75 75
76 #define flush_cache_all() __flush_invalidate_cache_all(); 76 #define flush_cache_all() __flush_invalidate_cache_all();
77 #define flush_cache_mm(mm) __flush_invalidate_cache_all(); 77 #define flush_cache_mm(mm) __flush_invalidate_cache_all();
78 #define flush_cache_dup_mm(mm) __flush_invalidate_cache_all();
78 79
79 #define flush_cache_vmap(start,end) __flush_invalidate_cache_all(); 80 #define flush_cache_vmap(start,end) __flush_invalidate_cache_all();
80 #define flush_cache_vunmap(start,end) __flush_invalidate_cache_all(); 81 #define flush_cache_vunmap(start,end) __flush_invalidate_cache_all();
81 82
82 extern void flush_dcache_page(struct page*); 83 extern void flush_dcache_page(struct page*);
83 84
84 extern void flush_cache_range(struct vm_area_struct*, ulong, ulong); 85 extern void flush_cache_range(struct vm_area_struct*, ulong, ulong);
85 extern void flush_cache_page(struct vm_area_struct*, unsigned long, unsigned long); 86 extern void flush_cache_page(struct vm_area_struct*, unsigned long, unsigned long);
86 87
87 #else 88 #else
88 89
89 #define flush_cache_all() do { } while (0) 90 #define flush_cache_all() do { } while (0)
90 #define flush_cache_mm(mm) do { } while (0) 91 #define flush_cache_mm(mm) do { } while (0)
92 #define flush_cache_dup_mm(mm) do { } while (0)
91 93
92 #define flush_cache_vmap(start,end) do { } while (0) 94 #define flush_cache_vmap(start,end) do { } while (0)
93 #define flush_cache_vunmap(start,end) do { } while (0) 95 #define flush_cache_vunmap(start,end) do { } while (0)
94 96
95 #define flush_dcache_page(page) do { } while (0) 97 #define flush_dcache_page(page) do { } while (0)
96 98
97 #define flush_cache_page(vma,addr,pfn) do { } while (0) 99 #define flush_cache_page(vma,addr,pfn) do { } while (0)
98 #define flush_cache_range(vma,start,end) do { } while (0) 100 #define flush_cache_range(vma,start,end) do { } while (0)
99 101
100 #endif 102 #endif
101 103
102 #define flush_icache_range(start,end) \ 104 #define flush_icache_range(start,end) \
103 __invalidate_icache_range(start,(end)-(start)) 105 __invalidate_icache_range(start,(end)-(start))
104 106
105 /* This is not required, see Documentation/cachetlb.txt */ 107 /* This is not required, see Documentation/cachetlb.txt */
106 108
107 #define flush_icache_page(vma,page) do { } while(0) 109 #define flush_icache_page(vma,page) do { } while(0)
108 110
109 #define flush_dcache_mmap_lock(mapping) do { } while (0) 111 #define flush_dcache_mmap_lock(mapping) do { } while (0)
110 #define flush_dcache_mmap_unlock(mapping) do { } while (0) 112 #define flush_dcache_mmap_unlock(mapping) do { } while (0)
111 113
112 114
113 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ 115 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
114 memcpy(dst, src, len) 116 memcpy(dst, src, len)
115 117
116 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \ 118 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
117 memcpy(dst, src, len) 119 memcpy(dst, src, len)
118 120
119 #endif /* __KERNEL__ */ 121 #endif /* __KERNEL__ */
120 122
121 #endif /* _XTENSA_CACHEFLUSH_H */ 123 #endif /* _XTENSA_CACHEFLUSH_H */
122 124
123 125
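Each of the header hunks above wires the new flush_cache_dup_mm() hook up in one of two ways: as an alias for the existing flush_cache_mm() (sparc, sparc64, and the aliasing xtensa configuration) or as a no-op (v850, x86_64, and the non-aliasing xtensa configuration). The fragment below is a minimal sketch of the same choice in a hypothetical architecture header; the header guard and the CONFIG_EXAMPLE_CACHE_ALIASING symbol are invented for illustration and are not part of this commit.

/*
 * Hypothetical arch stub, only to illustrate the two wiring styles
 * used by the headers changed in this commit.
 */
#ifndef _EXAMPLE_CACHEFLUSH_H
#define _EXAMPLE_CACHEFLUSH_H

#ifdef CONFIG_EXAMPLE_CACHE_ALIASING
/* Aliasing caches: duplicating an mm still needs the mm-wide flush. */
#define flush_cache_dup_mm(mm)	flush_cache_mm(mm)
#else
/* Non-aliasing, physically tagged caches: nothing to do on fork. */
#define flush_cache_dup_mm(mm)	do { } while (0)
#endif

#endif /* _EXAMPLE_CACHEFLUSH_H */

Because the hook is a macro, the no-op variant compiles away entirely, so callers such as dup_mmap() below pay nothing on architectures that need no flush when an address space is duplicated.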
kernel/fork.c
1 /* 1 /*
2 * linux/kernel/fork.c 2 * linux/kernel/fork.c
3 * 3 *
4 * Copyright (C) 1991, 1992 Linus Torvalds 4 * Copyright (C) 1991, 1992 Linus Torvalds
5 */ 5 */
6 6
7 /* 7 /*
8 * 'fork.c' contains the help-routines for the 'fork' system call 8 * 'fork.c' contains the help-routines for the 'fork' system call
9 * (see also entry.S and others). 9 * (see also entry.S and others).
10 * Fork is rather simple, once you get the hang of it, but the memory 10 * Fork is rather simple, once you get the hang of it, but the memory
11 * management can be a bitch. See 'mm/memory.c': 'copy_page_range()' 11 * management can be a bitch. See 'mm/memory.c': 'copy_page_range()'
12 */ 12 */
13 13
14 #include <linux/slab.h> 14 #include <linux/slab.h>
15 #include <linux/init.h> 15 #include <linux/init.h>
16 #include <linux/unistd.h> 16 #include <linux/unistd.h>
17 #include <linux/smp_lock.h> 17 #include <linux/smp_lock.h>
18 #include <linux/module.h> 18 #include <linux/module.h>
19 #include <linux/vmalloc.h> 19 #include <linux/vmalloc.h>
20 #include <linux/completion.h> 20 #include <linux/completion.h>
21 #include <linux/mnt_namespace.h> 21 #include <linux/mnt_namespace.h>
22 #include <linux/personality.h> 22 #include <linux/personality.h>
23 #include <linux/mempolicy.h> 23 #include <linux/mempolicy.h>
24 #include <linux/sem.h> 24 #include <linux/sem.h>
25 #include <linux/file.h> 25 #include <linux/file.h>
26 #include <linux/key.h> 26 #include <linux/key.h>
27 #include <linux/binfmts.h> 27 #include <linux/binfmts.h>
28 #include <linux/mman.h> 28 #include <linux/mman.h>
29 #include <linux/fs.h> 29 #include <linux/fs.h>
30 #include <linux/nsproxy.h> 30 #include <linux/nsproxy.h>
31 #include <linux/capability.h> 31 #include <linux/capability.h>
32 #include <linux/cpu.h> 32 #include <linux/cpu.h>
33 #include <linux/cpuset.h> 33 #include <linux/cpuset.h>
34 #include <linux/security.h> 34 #include <linux/security.h>
35 #include <linux/swap.h> 35 #include <linux/swap.h>
36 #include <linux/syscalls.h> 36 #include <linux/syscalls.h>
37 #include <linux/jiffies.h> 37 #include <linux/jiffies.h>
38 #include <linux/futex.h> 38 #include <linux/futex.h>
39 #include <linux/task_io_accounting_ops.h> 39 #include <linux/task_io_accounting_ops.h>
40 #include <linux/rcupdate.h> 40 #include <linux/rcupdate.h>
41 #include <linux/ptrace.h> 41 #include <linux/ptrace.h>
42 #include <linux/mount.h> 42 #include <linux/mount.h>
43 #include <linux/audit.h> 43 #include <linux/audit.h>
44 #include <linux/profile.h> 44 #include <linux/profile.h>
45 #include <linux/rmap.h> 45 #include <linux/rmap.h>
46 #include <linux/acct.h> 46 #include <linux/acct.h>
47 #include <linux/tsacct_kern.h> 47 #include <linux/tsacct_kern.h>
48 #include <linux/cn_proc.h> 48 #include <linux/cn_proc.h>
49 #include <linux/delayacct.h> 49 #include <linux/delayacct.h>
50 #include <linux/taskstats_kern.h> 50 #include <linux/taskstats_kern.h>
51 #include <linux/random.h> 51 #include <linux/random.h>
52 52
53 #include <asm/pgtable.h> 53 #include <asm/pgtable.h>
54 #include <asm/pgalloc.h> 54 #include <asm/pgalloc.h>
55 #include <asm/uaccess.h> 55 #include <asm/uaccess.h>
56 #include <asm/mmu_context.h> 56 #include <asm/mmu_context.h>
57 #include <asm/cacheflush.h> 57 #include <asm/cacheflush.h>
58 #include <asm/tlbflush.h> 58 #include <asm/tlbflush.h>
59 59
60 /* 60 /*
61 * Protected counters by write_lock_irq(&tasklist_lock) 61 * Protected counters by write_lock_irq(&tasklist_lock)
62 */ 62 */
63 unsigned long total_forks; /* Handle normal Linux uptimes. */ 63 unsigned long total_forks; /* Handle normal Linux uptimes. */
64 int nr_threads; /* The idle threads do not count.. */ 64 int nr_threads; /* The idle threads do not count.. */
65 65
66 int max_threads; /* tunable limit on nr_threads */ 66 int max_threads; /* tunable limit on nr_threads */
67 67
68 DEFINE_PER_CPU(unsigned long, process_counts) = 0; 68 DEFINE_PER_CPU(unsigned long, process_counts) = 0;
69 69
70 __cacheline_aligned DEFINE_RWLOCK(tasklist_lock); /* outer */ 70 __cacheline_aligned DEFINE_RWLOCK(tasklist_lock); /* outer */
71 71
72 int nr_processes(void) 72 int nr_processes(void)
73 { 73 {
74 int cpu; 74 int cpu;
75 int total = 0; 75 int total = 0;
76 76
77 for_each_online_cpu(cpu) 77 for_each_online_cpu(cpu)
78 total += per_cpu(process_counts, cpu); 78 total += per_cpu(process_counts, cpu);
79 79
80 return total; 80 return total;
81 } 81 }
82 82
83 #ifndef __HAVE_ARCH_TASK_STRUCT_ALLOCATOR 83 #ifndef __HAVE_ARCH_TASK_STRUCT_ALLOCATOR
84 # define alloc_task_struct() kmem_cache_alloc(task_struct_cachep, GFP_KERNEL) 84 # define alloc_task_struct() kmem_cache_alloc(task_struct_cachep, GFP_KERNEL)
85 # define free_task_struct(tsk) kmem_cache_free(task_struct_cachep, (tsk)) 85 # define free_task_struct(tsk) kmem_cache_free(task_struct_cachep, (tsk))
86 static struct kmem_cache *task_struct_cachep; 86 static struct kmem_cache *task_struct_cachep;
87 #endif 87 #endif
88 88
89 /* SLAB cache for signal_struct structures (tsk->signal) */ 89 /* SLAB cache for signal_struct structures (tsk->signal) */
90 static struct kmem_cache *signal_cachep; 90 static struct kmem_cache *signal_cachep;
91 91
92 /* SLAB cache for sighand_struct structures (tsk->sighand) */ 92 /* SLAB cache for sighand_struct structures (tsk->sighand) */
93 struct kmem_cache *sighand_cachep; 93 struct kmem_cache *sighand_cachep;
94 94
95 /* SLAB cache for files_struct structures (tsk->files) */ 95 /* SLAB cache for files_struct structures (tsk->files) */
96 struct kmem_cache *files_cachep; 96 struct kmem_cache *files_cachep;
97 97
98 /* SLAB cache for fs_struct structures (tsk->fs) */ 98 /* SLAB cache for fs_struct structures (tsk->fs) */
99 struct kmem_cache *fs_cachep; 99 struct kmem_cache *fs_cachep;
100 100
101 /* SLAB cache for vm_area_struct structures */ 101 /* SLAB cache for vm_area_struct structures */
102 struct kmem_cache *vm_area_cachep; 102 struct kmem_cache *vm_area_cachep;
103 103
104 /* SLAB cache for mm_struct structures (tsk->mm) */ 104 /* SLAB cache for mm_struct structures (tsk->mm) */
105 static struct kmem_cache *mm_cachep; 105 static struct kmem_cache *mm_cachep;
106 106
107 void free_task(struct task_struct *tsk) 107 void free_task(struct task_struct *tsk)
108 { 108 {
109 free_thread_info(tsk->thread_info); 109 free_thread_info(tsk->thread_info);
110 rt_mutex_debug_task_free(tsk); 110 rt_mutex_debug_task_free(tsk);
111 free_task_struct(tsk); 111 free_task_struct(tsk);
112 } 112 }
113 EXPORT_SYMBOL(free_task); 113 EXPORT_SYMBOL(free_task);
114 114
115 void __put_task_struct(struct task_struct *tsk) 115 void __put_task_struct(struct task_struct *tsk)
116 { 116 {
117 WARN_ON(!(tsk->exit_state & (EXIT_DEAD | EXIT_ZOMBIE))); 117 WARN_ON(!(tsk->exit_state & (EXIT_DEAD | EXIT_ZOMBIE)));
118 WARN_ON(atomic_read(&tsk->usage)); 118 WARN_ON(atomic_read(&tsk->usage));
119 WARN_ON(tsk == current); 119 WARN_ON(tsk == current);
120 120
121 security_task_free(tsk); 121 security_task_free(tsk);
122 free_uid(tsk->user); 122 free_uid(tsk->user);
123 put_group_info(tsk->group_info); 123 put_group_info(tsk->group_info);
124 delayacct_tsk_free(tsk); 124 delayacct_tsk_free(tsk);
125 125
126 if (!profile_handoff_task(tsk)) 126 if (!profile_handoff_task(tsk))
127 free_task(tsk); 127 free_task(tsk);
128 } 128 }
129 129
130 void __init fork_init(unsigned long mempages) 130 void __init fork_init(unsigned long mempages)
131 { 131 {
132 #ifndef __HAVE_ARCH_TASK_STRUCT_ALLOCATOR 132 #ifndef __HAVE_ARCH_TASK_STRUCT_ALLOCATOR
133 #ifndef ARCH_MIN_TASKALIGN 133 #ifndef ARCH_MIN_TASKALIGN
134 #define ARCH_MIN_TASKALIGN L1_CACHE_BYTES 134 #define ARCH_MIN_TASKALIGN L1_CACHE_BYTES
135 #endif 135 #endif
136 /* create a slab on which task_structs can be allocated */ 136 /* create a slab on which task_structs can be allocated */
137 task_struct_cachep = 137 task_struct_cachep =
138 kmem_cache_create("task_struct", sizeof(struct task_struct), 138 kmem_cache_create("task_struct", sizeof(struct task_struct),
139 ARCH_MIN_TASKALIGN, SLAB_PANIC, NULL, NULL); 139 ARCH_MIN_TASKALIGN, SLAB_PANIC, NULL, NULL);
140 #endif 140 #endif
141 141
142 /* 142 /*
143 * The default maximum number of threads is set to a safe 143 * The default maximum number of threads is set to a safe
144 * value: the thread structures can take up at most half 144 * value: the thread structures can take up at most half
145 * of memory. 145 * of memory.
146 */ 146 */
147 max_threads = mempages / (8 * THREAD_SIZE / PAGE_SIZE); 147 max_threads = mempages / (8 * THREAD_SIZE / PAGE_SIZE);
148 148
149 /* 149 /*
150 * we need to allow at least 20 threads to boot a system 150 * we need to allow at least 20 threads to boot a system
151 */ 151 */
152 if(max_threads < 20) 152 if(max_threads < 20)
153 max_threads = 20; 153 max_threads = 20;
154 154
155 init_task.signal->rlim[RLIMIT_NPROC].rlim_cur = max_threads/2; 155 init_task.signal->rlim[RLIMIT_NPROC].rlim_cur = max_threads/2;
156 init_task.signal->rlim[RLIMIT_NPROC].rlim_max = max_threads/2; 156 init_task.signal->rlim[RLIMIT_NPROC].rlim_max = max_threads/2;
157 init_task.signal->rlim[RLIMIT_SIGPENDING] = 157 init_task.signal->rlim[RLIMIT_SIGPENDING] =
158 init_task.signal->rlim[RLIMIT_NPROC]; 158 init_task.signal->rlim[RLIMIT_NPROC];
159 } 159 }
160 160
161 static struct task_struct *dup_task_struct(struct task_struct *orig) 161 static struct task_struct *dup_task_struct(struct task_struct *orig)
162 { 162 {
163 struct task_struct *tsk; 163 struct task_struct *tsk;
164 struct thread_info *ti; 164 struct thread_info *ti;
165 165
166 prepare_to_copy(orig); 166 prepare_to_copy(orig);
167 167
168 tsk = alloc_task_struct(); 168 tsk = alloc_task_struct();
169 if (!tsk) 169 if (!tsk)
170 return NULL; 170 return NULL;
171 171
172 ti = alloc_thread_info(tsk); 172 ti = alloc_thread_info(tsk);
173 if (!ti) { 173 if (!ti) {
174 free_task_struct(tsk); 174 free_task_struct(tsk);
175 return NULL; 175 return NULL;
176 } 176 }
177 177
178 *tsk = *orig; 178 *tsk = *orig;
179 tsk->thread_info = ti; 179 tsk->thread_info = ti;
180 setup_thread_stack(tsk, orig); 180 setup_thread_stack(tsk, orig);
181 181
182 #ifdef CONFIG_CC_STACKPROTECTOR 182 #ifdef CONFIG_CC_STACKPROTECTOR
183 tsk->stack_canary = get_random_int(); 183 tsk->stack_canary = get_random_int();
184 #endif 184 #endif
185 185
186 /* One for us, one for whoever does the "release_task()" (usually parent) */ 186 /* One for us, one for whoever does the "release_task()" (usually parent) */
187 atomic_set(&tsk->usage,2); 187 atomic_set(&tsk->usage,2);
188 atomic_set(&tsk->fs_excl, 0); 188 atomic_set(&tsk->fs_excl, 0);
189 #ifdef CONFIG_BLK_DEV_IO_TRACE 189 #ifdef CONFIG_BLK_DEV_IO_TRACE
190 tsk->btrace_seq = 0; 190 tsk->btrace_seq = 0;
191 #endif 191 #endif
192 tsk->splice_pipe = NULL; 192 tsk->splice_pipe = NULL;
193 return tsk; 193 return tsk;
194 } 194 }
195 195
196 #ifdef CONFIG_MMU 196 #ifdef CONFIG_MMU
197 static inline int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm) 197 static inline int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm)
198 { 198 {
199 struct vm_area_struct *mpnt, *tmp, **pprev; 199 struct vm_area_struct *mpnt, *tmp, **pprev;
200 struct rb_node **rb_link, *rb_parent; 200 struct rb_node **rb_link, *rb_parent;
201 int retval; 201 int retval;
202 unsigned long charge; 202 unsigned long charge;
203 struct mempolicy *pol; 203 struct mempolicy *pol;
204 204
205 down_write(&oldmm->mmap_sem); 205 down_write(&oldmm->mmap_sem);
206 flush_cache_mm(oldmm); 206 flush_cache_dup_mm(oldmm);
207 /* 207 /*
208 * Not linked in yet - no deadlock potential: 208 * Not linked in yet - no deadlock potential:
209 */ 209 */
210 down_write_nested(&mm->mmap_sem, SINGLE_DEPTH_NESTING); 210 down_write_nested(&mm->mmap_sem, SINGLE_DEPTH_NESTING);
211 211
212 mm->locked_vm = 0; 212 mm->locked_vm = 0;
213 mm->mmap = NULL; 213 mm->mmap = NULL;
214 mm->mmap_cache = NULL; 214 mm->mmap_cache = NULL;
215 mm->free_area_cache = oldmm->mmap_base; 215 mm->free_area_cache = oldmm->mmap_base;
216 mm->cached_hole_size = ~0UL; 216 mm->cached_hole_size = ~0UL;
217 mm->map_count = 0; 217 mm->map_count = 0;
218 cpus_clear(mm->cpu_vm_mask); 218 cpus_clear(mm->cpu_vm_mask);
219 mm->mm_rb = RB_ROOT; 219 mm->mm_rb = RB_ROOT;
220 rb_link = &mm->mm_rb.rb_node; 220 rb_link = &mm->mm_rb.rb_node;
221 rb_parent = NULL; 221 rb_parent = NULL;
222 pprev = &mm->mmap; 222 pprev = &mm->mmap;
223 223
224 for (mpnt = oldmm->mmap; mpnt; mpnt = mpnt->vm_next) { 224 for (mpnt = oldmm->mmap; mpnt; mpnt = mpnt->vm_next) {
225 struct file *file; 225 struct file *file;
226 226
227 if (mpnt->vm_flags & VM_DONTCOPY) { 227 if (mpnt->vm_flags & VM_DONTCOPY) {
228 long pages = vma_pages(mpnt); 228 long pages = vma_pages(mpnt);
229 mm->total_vm -= pages; 229 mm->total_vm -= pages;
230 vm_stat_account(mm, mpnt->vm_flags, mpnt->vm_file, 230 vm_stat_account(mm, mpnt->vm_flags, mpnt->vm_file,
231 -pages); 231 -pages);
232 continue; 232 continue;
233 } 233 }
234 charge = 0; 234 charge = 0;
235 if (mpnt->vm_flags & VM_ACCOUNT) { 235 if (mpnt->vm_flags & VM_ACCOUNT) {
236 unsigned int len = (mpnt->vm_end - mpnt->vm_start) >> PAGE_SHIFT; 236 unsigned int len = (mpnt->vm_end - mpnt->vm_start) >> PAGE_SHIFT;
237 if (security_vm_enough_memory(len)) 237 if (security_vm_enough_memory(len))
238 goto fail_nomem; 238 goto fail_nomem;
239 charge = len; 239 charge = len;
240 } 240 }
241 tmp = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL); 241 tmp = kmem_cache_alloc(vm_area_cachep, GFP_KERNEL);
242 if (!tmp) 242 if (!tmp)
243 goto fail_nomem; 243 goto fail_nomem;
244 *tmp = *mpnt; 244 *tmp = *mpnt;
245 pol = mpol_copy(vma_policy(mpnt)); 245 pol = mpol_copy(vma_policy(mpnt));
246 retval = PTR_ERR(pol); 246 retval = PTR_ERR(pol);
247 if (IS_ERR(pol)) 247 if (IS_ERR(pol))
248 goto fail_nomem_policy; 248 goto fail_nomem_policy;
249 vma_set_policy(tmp, pol); 249 vma_set_policy(tmp, pol);
250 tmp->vm_flags &= ~VM_LOCKED; 250 tmp->vm_flags &= ~VM_LOCKED;
251 tmp->vm_mm = mm; 251 tmp->vm_mm = mm;
252 tmp->vm_next = NULL; 252 tmp->vm_next = NULL;
253 anon_vma_link(tmp); 253 anon_vma_link(tmp);
254 file = tmp->vm_file; 254 file = tmp->vm_file;
255 if (file) { 255 if (file) {
256 struct inode *inode = file->f_path.dentry->d_inode; 256 struct inode *inode = file->f_path.dentry->d_inode;
257 get_file(file); 257 get_file(file);
258 if (tmp->vm_flags & VM_DENYWRITE) 258 if (tmp->vm_flags & VM_DENYWRITE)
259 atomic_dec(&inode->i_writecount); 259 atomic_dec(&inode->i_writecount);
260 260
261 /* insert tmp into the share list, just after mpnt */ 261 /* insert tmp into the share list, just after mpnt */
262 spin_lock(&file->f_mapping->i_mmap_lock); 262 spin_lock(&file->f_mapping->i_mmap_lock);
263 tmp->vm_truncate_count = mpnt->vm_truncate_count; 263 tmp->vm_truncate_count = mpnt->vm_truncate_count;
264 flush_dcache_mmap_lock(file->f_mapping); 264 flush_dcache_mmap_lock(file->f_mapping);
265 vma_prio_tree_add(tmp, mpnt); 265 vma_prio_tree_add(tmp, mpnt);
266 flush_dcache_mmap_unlock(file->f_mapping); 266 flush_dcache_mmap_unlock(file->f_mapping);
267 spin_unlock(&file->f_mapping->i_mmap_lock); 267 spin_unlock(&file->f_mapping->i_mmap_lock);
268 } 268 }
269 269
270 /* 270 /*
271 * Link in the new vma and copy the page table entries. 271 * Link in the new vma and copy the page table entries.
272 */ 272 */
273 *pprev = tmp; 273 *pprev = tmp;
274 pprev = &tmp->vm_next; 274 pprev = &tmp->vm_next;
275 275
276 __vma_link_rb(mm, tmp, rb_link, rb_parent); 276 __vma_link_rb(mm, tmp, rb_link, rb_parent);
277 rb_link = &tmp->vm_rb.rb_right; 277 rb_link = &tmp->vm_rb.rb_right;
278 rb_parent = &tmp->vm_rb; 278 rb_parent = &tmp->vm_rb;
279 279
280 mm->map_count++; 280 mm->map_count++;
281 retval = copy_page_range(mm, oldmm, mpnt); 281 retval = copy_page_range(mm, oldmm, mpnt);
282 282
283 if (tmp->vm_ops && tmp->vm_ops->open) 283 if (tmp->vm_ops && tmp->vm_ops->open)
284 tmp->vm_ops->open(tmp); 284 tmp->vm_ops->open(tmp);
285 285
286 if (retval) 286 if (retval)
287 goto out; 287 goto out;
288 } 288 }
289 retval = 0; 289 retval = 0;
290 out: 290 out:
291 up_write(&mm->mmap_sem); 291 up_write(&mm->mmap_sem);
292 flush_tlb_mm(oldmm); 292 flush_tlb_mm(oldmm);
293 up_write(&oldmm->mmap_sem); 293 up_write(&oldmm->mmap_sem);
294 return retval; 294 return retval;
295 fail_nomem_policy: 295 fail_nomem_policy:
296 kmem_cache_free(vm_area_cachep, tmp); 296 kmem_cache_free(vm_area_cachep, tmp);
297 fail_nomem: 297 fail_nomem:
298 retval = -ENOMEM; 298 retval = -ENOMEM;
299 vm_unacct_memory(charge); 299 vm_unacct_memory(charge);
300 goto out; 300 goto out;
301 } 301 }
302 302
303 static inline int mm_alloc_pgd(struct mm_struct * mm) 303 static inline int mm_alloc_pgd(struct mm_struct * mm)
304 { 304 {
305 mm->pgd = pgd_alloc(mm); 305 mm->pgd = pgd_alloc(mm);
306 if (unlikely(!mm->pgd)) 306 if (unlikely(!mm->pgd))
307 return -ENOMEM; 307 return -ENOMEM;
308 return 0; 308 return 0;
309 } 309 }
310 310
311 static inline void mm_free_pgd(struct mm_struct * mm) 311 static inline void mm_free_pgd(struct mm_struct * mm)
312 { 312 {
313 pgd_free(mm->pgd); 313 pgd_free(mm->pgd);
314 } 314 }
315 #else 315 #else
316 #define dup_mmap(mm, oldmm) (0) 316 #define dup_mmap(mm, oldmm) (0)
317 #define mm_alloc_pgd(mm) (0) 317 #define mm_alloc_pgd(mm) (0)
318 #define mm_free_pgd(mm) 318 #define mm_free_pgd(mm)
319 #endif /* CONFIG_MMU */ 319 #endif /* CONFIG_MMU */
320 320
321 __cacheline_aligned_in_smp DEFINE_SPINLOCK(mmlist_lock); 321 __cacheline_aligned_in_smp DEFINE_SPINLOCK(mmlist_lock);
322 322
323 #define allocate_mm() (kmem_cache_alloc(mm_cachep, GFP_KERNEL)) 323 #define allocate_mm() (kmem_cache_alloc(mm_cachep, GFP_KERNEL))
324 #define free_mm(mm) (kmem_cache_free(mm_cachep, (mm))) 324 #define free_mm(mm) (kmem_cache_free(mm_cachep, (mm)))
325 325
326 #include <linux/init_task.h> 326 #include <linux/init_task.h>
327 327
328 static struct mm_struct * mm_init(struct mm_struct * mm) 328 static struct mm_struct * mm_init(struct mm_struct * mm)
329 { 329 {
330 atomic_set(&mm->mm_users, 1); 330 atomic_set(&mm->mm_users, 1);
331 atomic_set(&mm->mm_count, 1); 331 atomic_set(&mm->mm_count, 1);
332 init_rwsem(&mm->mmap_sem); 332 init_rwsem(&mm->mmap_sem);
333 INIT_LIST_HEAD(&mm->mmlist); 333 INIT_LIST_HEAD(&mm->mmlist);
334 mm->core_waiters = 0; 334 mm->core_waiters = 0;
335 mm->nr_ptes = 0; 335 mm->nr_ptes = 0;
336 set_mm_counter(mm, file_rss, 0); 336 set_mm_counter(mm, file_rss, 0);
337 set_mm_counter(mm, anon_rss, 0); 337 set_mm_counter(mm, anon_rss, 0);
338 spin_lock_init(&mm->page_table_lock); 338 spin_lock_init(&mm->page_table_lock);
339 rwlock_init(&mm->ioctx_list_lock); 339 rwlock_init(&mm->ioctx_list_lock);
340 mm->ioctx_list = NULL; 340 mm->ioctx_list = NULL;
341 mm->free_area_cache = TASK_UNMAPPED_BASE; 341 mm->free_area_cache = TASK_UNMAPPED_BASE;
342 mm->cached_hole_size = ~0UL; 342 mm->cached_hole_size = ~0UL;
343 343
344 if (likely(!mm_alloc_pgd(mm))) { 344 if (likely(!mm_alloc_pgd(mm))) {
345 mm->def_flags = 0; 345 mm->def_flags = 0;
346 return mm; 346 return mm;
347 } 347 }
348 free_mm(mm); 348 free_mm(mm);
349 return NULL; 349 return NULL;
350 } 350 }
351 351
352 /* 352 /*
353 * Allocate and initialize an mm_struct. 353 * Allocate and initialize an mm_struct.
354 */ 354 */
355 struct mm_struct * mm_alloc(void) 355 struct mm_struct * mm_alloc(void)
356 { 356 {
357 struct mm_struct * mm; 357 struct mm_struct * mm;
358 358
359 mm = allocate_mm(); 359 mm = allocate_mm();
360 if (mm) { 360 if (mm) {
361 memset(mm, 0, sizeof(*mm)); 361 memset(mm, 0, sizeof(*mm));
362 mm = mm_init(mm); 362 mm = mm_init(mm);
363 } 363 }
364 return mm; 364 return mm;
365 } 365 }
366 366
367 /* 367 /*
368 * Called when the last reference to the mm 368 * Called when the last reference to the mm
369 * is dropped: either by a lazy thread or by 369 * is dropped: either by a lazy thread or by
370 * mmput. Free the page directory and the mm. 370 * mmput. Free the page directory and the mm.
371 */ 371 */
372 void fastcall __mmdrop(struct mm_struct *mm) 372 void fastcall __mmdrop(struct mm_struct *mm)
373 { 373 {
374 BUG_ON(mm == &init_mm); 374 BUG_ON(mm == &init_mm);
375 mm_free_pgd(mm); 375 mm_free_pgd(mm);
376 destroy_context(mm); 376 destroy_context(mm);
377 free_mm(mm); 377 free_mm(mm);
378 } 378 }
379 379
380 /* 380 /*
381 * Decrement the use count and release all resources for an mm. 381 * Decrement the use count and release all resources for an mm.
382 */ 382 */
383 void mmput(struct mm_struct *mm) 383 void mmput(struct mm_struct *mm)
384 { 384 {
385 might_sleep(); 385 might_sleep();
386 386
387 if (atomic_dec_and_test(&mm->mm_users)) { 387 if (atomic_dec_and_test(&mm->mm_users)) {
388 exit_aio(mm); 388 exit_aio(mm);
389 exit_mmap(mm); 389 exit_mmap(mm);
390 if (!list_empty(&mm->mmlist)) { 390 if (!list_empty(&mm->mmlist)) {
391 spin_lock(&mmlist_lock); 391 spin_lock(&mmlist_lock);
392 list_del(&mm->mmlist); 392 list_del(&mm->mmlist);
393 spin_unlock(&mmlist_lock); 393 spin_unlock(&mmlist_lock);
394 } 394 }
395 put_swap_token(mm); 395 put_swap_token(mm);
396 mmdrop(mm); 396 mmdrop(mm);
397 } 397 }
398 } 398 }
399 EXPORT_SYMBOL_GPL(mmput); 399 EXPORT_SYMBOL_GPL(mmput);
400 400
401 /** 401 /**
402 * get_task_mm - acquire a reference to the task's mm 402 * get_task_mm - acquire a reference to the task's mm
403 * 403 *
404 * Returns %NULL if the task has no mm. Checks PF_BORROWED_MM (meaning 404 * Returns %NULL if the task has no mm. Checks PF_BORROWED_MM (meaning
405 * this kernel workthread has transiently adopted a user mm with use_mm, 405 * this kernel workthread has transiently adopted a user mm with use_mm,
406 * to do its AIO) is not set and if so returns a reference to it, after 406 * to do its AIO) is not set and if so returns a reference to it, after
407 * bumping up the use count. User must release the mm via mmput() 407 * bumping up the use count. User must release the mm via mmput()
408 * after use. Typically used by /proc and ptrace. 408 * after use. Typically used by /proc and ptrace.
409 */ 409 */
410 struct mm_struct *get_task_mm(struct task_struct *task) 410 struct mm_struct *get_task_mm(struct task_struct *task)
411 { 411 {
412 struct mm_struct *mm; 412 struct mm_struct *mm;
413 413
414 task_lock(task); 414 task_lock(task);
415 mm = task->mm; 415 mm = task->mm;
416 if (mm) { 416 if (mm) {
417 if (task->flags & PF_BORROWED_MM) 417 if (task->flags & PF_BORROWED_MM)
418 mm = NULL; 418 mm = NULL;
419 else 419 else
420 atomic_inc(&mm->mm_users); 420 atomic_inc(&mm->mm_users);
421 } 421 }
422 task_unlock(task); 422 task_unlock(task);
423 return mm; 423 return mm;
424 } 424 }
425 EXPORT_SYMBOL_GPL(get_task_mm); 425 EXPORT_SYMBOL_GPL(get_task_mm);
426 426
427 /* Please note the differences between mmput and mm_release. 427 /* Please note the differences between mmput and mm_release.
428 * mmput is called whenever we stop holding onto a mm_struct, 428 * mmput is called whenever we stop holding onto a mm_struct,
429 * error success whatever. 429 * error success whatever.
430 * 430 *
431 * mm_release is called after a mm_struct has been removed 431 * mm_release is called after a mm_struct has been removed
432 * from the current process. 432 * from the current process.
433 * 433 *
434 * This difference is important for error handling, when we 434 * This difference is important for error handling, when we
435 * only half set up a mm_struct for a new process and need to restore 435 * only half set up a mm_struct for a new process and need to restore
436 * the old one. Because we mmput the new mm_struct before 436 * the old one. Because we mmput the new mm_struct before
437 * restoring the old one. . . 437 * restoring the old one. . .
438 * Eric Biederman 10 January 1998 438 * Eric Biederman 10 January 1998
439 */ 439 */
440 void mm_release(struct task_struct *tsk, struct mm_struct *mm) 440 void mm_release(struct task_struct *tsk, struct mm_struct *mm)
441 { 441 {
442 struct completion *vfork_done = tsk->vfork_done; 442 struct completion *vfork_done = tsk->vfork_done;
443 443
444 /* Get rid of any cached register state */ 444 /* Get rid of any cached register state */
445 deactivate_mm(tsk, mm); 445 deactivate_mm(tsk, mm);
446 446
447 /* notify parent sleeping on vfork() */ 447 /* notify parent sleeping on vfork() */
448 if (vfork_done) { 448 if (vfork_done) {
449 tsk->vfork_done = NULL; 449 tsk->vfork_done = NULL;
450 complete(vfork_done); 450 complete(vfork_done);
451 } 451 }
452 452
453 /* 453 /*
454 * If we're exiting normally, clear a user-space tid field if 454 * If we're exiting normally, clear a user-space tid field if
455 * requested. We leave this alone when dying by signal, to leave 455 * requested. We leave this alone when dying by signal, to leave
456 * the value intact in a core dump, and to save the unnecessary 456 * the value intact in a core dump, and to save the unnecessary
457 * trouble otherwise. Userland only wants this done for a sys_exit. 457 * trouble otherwise. Userland only wants this done for a sys_exit.
458 */ 458 */
459 if (tsk->clear_child_tid 459 if (tsk->clear_child_tid
460 && !(tsk->flags & PF_SIGNALED) 460 && !(tsk->flags & PF_SIGNALED)
461 && atomic_read(&mm->mm_users) > 1) { 461 && atomic_read(&mm->mm_users) > 1) {
462 u32 __user * tidptr = tsk->clear_child_tid; 462 u32 __user * tidptr = tsk->clear_child_tid;
463 tsk->clear_child_tid = NULL; 463 tsk->clear_child_tid = NULL;
464 464
465 /* 465 /*
466 * We don't check the error code - if userspace has 466 * We don't check the error code - if userspace has
467 * not set up a proper pointer then tough luck. 467 * not set up a proper pointer then tough luck.
468 */ 468 */
469 put_user(0, tidptr); 469 put_user(0, tidptr);
470 sys_futex(tidptr, FUTEX_WAKE, 1, NULL, NULL, 0); 470 sys_futex(tidptr, FUTEX_WAKE, 1, NULL, NULL, 0);
471 } 471 }
472 } 472 }
473 473
474 /* 474 /*
475 * Allocate a new mm structure and copy contents from the 475 * Allocate a new mm structure and copy contents from the
476 * mm structure of the passed in task structure. 476 * mm structure of the passed in task structure.
477 */ 477 */
478 static struct mm_struct *dup_mm(struct task_struct *tsk) 478 static struct mm_struct *dup_mm(struct task_struct *tsk)
479 { 479 {
480 struct mm_struct *mm, *oldmm = current->mm; 480 struct mm_struct *mm, *oldmm = current->mm;
481 int err; 481 int err;
482 482
483 if (!oldmm) 483 if (!oldmm)
484 return NULL; 484 return NULL;
485 485
486 mm = allocate_mm(); 486 mm = allocate_mm();
487 if (!mm) 487 if (!mm)
488 goto fail_nomem; 488 goto fail_nomem;
489 489
490 memcpy(mm, oldmm, sizeof(*mm)); 490 memcpy(mm, oldmm, sizeof(*mm));
491 491
492 /* Initializing for Swap token stuff */ 492 /* Initializing for Swap token stuff */
493 mm->token_priority = 0; 493 mm->token_priority = 0;
494 mm->last_interval = 0; 494 mm->last_interval = 0;
495 495
496 if (!mm_init(mm)) 496 if (!mm_init(mm))
497 goto fail_nomem; 497 goto fail_nomem;
498 498
499 if (init_new_context(tsk, mm)) 499 if (init_new_context(tsk, mm))
500 goto fail_nocontext; 500 goto fail_nocontext;
501 501
502 err = dup_mmap(mm, oldmm); 502 err = dup_mmap(mm, oldmm);
503 if (err) 503 if (err)
504 goto free_pt; 504 goto free_pt;
505 505
506 mm->hiwater_rss = get_mm_rss(mm); 506 mm->hiwater_rss = get_mm_rss(mm);
507 mm->hiwater_vm = mm->total_vm; 507 mm->hiwater_vm = mm->total_vm;
508 508
509 return mm; 509 return mm;
510 510
511 free_pt: 511 free_pt:
512 mmput(mm); 512 mmput(mm);
513 513
514 fail_nomem: 514 fail_nomem:
515 return NULL; 515 return NULL;
516 516
517 fail_nocontext: 517 fail_nocontext:
518 /* 518 /*
519 * If init_new_context() failed, we cannot use mmput() to free the mm 519 * If init_new_context() failed, we cannot use mmput() to free the mm
520 * because it calls destroy_context() 520 * because it calls destroy_context()
521 */ 521 */
522 mm_free_pgd(mm); 522 mm_free_pgd(mm);
523 free_mm(mm); 523 free_mm(mm);
524 return NULL; 524 return NULL;
525 } 525 }
526 526
527 static int copy_mm(unsigned long clone_flags, struct task_struct * tsk) 527 static int copy_mm(unsigned long clone_flags, struct task_struct * tsk)
528 { 528 {
529 struct mm_struct * mm, *oldmm; 529 struct mm_struct * mm, *oldmm;
530 int retval; 530 int retval;
531 531
532 tsk->min_flt = tsk->maj_flt = 0; 532 tsk->min_flt = tsk->maj_flt = 0;
533 tsk->nvcsw = tsk->nivcsw = 0; 533 tsk->nvcsw = tsk->nivcsw = 0;
534 534
535 tsk->mm = NULL; 535 tsk->mm = NULL;
536 tsk->active_mm = NULL; 536 tsk->active_mm = NULL;
537 537
538 /* 538 /*
539 * Are we cloning a kernel thread? 539 * Are we cloning a kernel thread?
540 * 540 *
541 * We need to steal a active VM for that.. 541 * We need to steal a active VM for that..
542 */ 542 */
543 oldmm = current->mm; 543 oldmm = current->mm;
544 if (!oldmm) 544 if (!oldmm)
545 return 0; 545 return 0;
546 546
547 if (clone_flags & CLONE_VM) { 547 if (clone_flags & CLONE_VM) {
548 atomic_inc(&oldmm->mm_users); 548 atomic_inc(&oldmm->mm_users);
549 mm = oldmm; 549 mm = oldmm;
550 goto good_mm; 550 goto good_mm;
551 } 551 }
552 552
553 retval = -ENOMEM; 553 retval = -ENOMEM;
554 mm = dup_mm(tsk); 554 mm = dup_mm(tsk);
555 if (!mm) 555 if (!mm)
556 goto fail_nomem; 556 goto fail_nomem;
557 557
558 good_mm: 558 good_mm:
559 /* Initializing for Swap token stuff */ 559 /* Initializing for Swap token stuff */
560 mm->token_priority = 0; 560 mm->token_priority = 0;
561 mm->last_interval = 0; 561 mm->last_interval = 0;
562 562
563 tsk->mm = mm; 563 tsk->mm = mm;
564 tsk->active_mm = mm; 564 tsk->active_mm = mm;
565 return 0; 565 return 0;
566 566
567 fail_nomem: 567 fail_nomem:
568 return retval; 568 return retval;
569 } 569 }
570 570
571 static inline struct fs_struct *__copy_fs_struct(struct fs_struct *old) 571 static inline struct fs_struct *__copy_fs_struct(struct fs_struct *old)
572 { 572 {
573 struct fs_struct *fs = kmem_cache_alloc(fs_cachep, GFP_KERNEL); 573 struct fs_struct *fs = kmem_cache_alloc(fs_cachep, GFP_KERNEL);
574 /* We don't need to lock fs - think why ;-) */ 574 /* We don't need to lock fs - think why ;-) */
575 if (fs) { 575 if (fs) {
576 atomic_set(&fs->count, 1); 576 atomic_set(&fs->count, 1);
577 rwlock_init(&fs->lock); 577 rwlock_init(&fs->lock);
578 fs->umask = old->umask; 578 fs->umask = old->umask;
579 read_lock(&old->lock); 579 read_lock(&old->lock);
580 fs->rootmnt = mntget(old->rootmnt); 580 fs->rootmnt = mntget(old->rootmnt);
581 fs->root = dget(old->root); 581 fs->root = dget(old->root);
582 fs->pwdmnt = mntget(old->pwdmnt); 582 fs->pwdmnt = mntget(old->pwdmnt);
583 fs->pwd = dget(old->pwd); 583 fs->pwd = dget(old->pwd);
584 if (old->altroot) { 584 if (old->altroot) {
585 fs->altrootmnt = mntget(old->altrootmnt); 585 fs->altrootmnt = mntget(old->altrootmnt);
586 fs->altroot = dget(old->altroot); 586 fs->altroot = dget(old->altroot);
587 } else { 587 } else {
588 fs->altrootmnt = NULL; 588 fs->altrootmnt = NULL;
589 fs->altroot = NULL; 589 fs->altroot = NULL;
590 } 590 }
591 read_unlock(&old->lock); 591 read_unlock(&old->lock);
592 } 592 }
593 return fs; 593 return fs;
594 } 594 }
595 595
596 struct fs_struct *copy_fs_struct(struct fs_struct *old) 596 struct fs_struct *copy_fs_struct(struct fs_struct *old)
597 { 597 {
598 return __copy_fs_struct(old); 598 return __copy_fs_struct(old);
599 } 599 }
600 600
601 EXPORT_SYMBOL_GPL(copy_fs_struct); 601 EXPORT_SYMBOL_GPL(copy_fs_struct);
602 602
603 static inline int copy_fs(unsigned long clone_flags, struct task_struct * tsk) 603 static inline int copy_fs(unsigned long clone_flags, struct task_struct * tsk)
604 { 604 {
605 if (clone_flags & CLONE_FS) { 605 if (clone_flags & CLONE_FS) {
606 atomic_inc(&current->fs->count); 606 atomic_inc(&current->fs->count);
607 return 0; 607 return 0;
608 } 608 }
609 tsk->fs = __copy_fs_struct(current->fs); 609 tsk->fs = __copy_fs_struct(current->fs);
610 if (!tsk->fs) 610 if (!tsk->fs)
611 return -ENOMEM; 611 return -ENOMEM;
612 return 0; 612 return 0;
613 } 613 }
614 614
615 static int count_open_files(struct fdtable *fdt) 615 static int count_open_files(struct fdtable *fdt)
616 { 616 {
617 int size = fdt->max_fds; 617 int size = fdt->max_fds;
618 int i; 618 int i;
619 619
620 /* Find the last open fd */ 620 /* Find the last open fd */
621 for (i = size/(8*sizeof(long)); i > 0; ) { 621 for (i = size/(8*sizeof(long)); i > 0; ) {
622 if (fdt->open_fds->fds_bits[--i]) 622 if (fdt->open_fds->fds_bits[--i])
623 break; 623 break;
624 } 624 }
625 i = (i+1) * 8 * sizeof(long); 625 i = (i+1) * 8 * sizeof(long);
626 return i; 626 return i;
627 } 627 }
628 628
629 static struct files_struct *alloc_files(void) 629 static struct files_struct *alloc_files(void)
630 { 630 {
631 struct files_struct *newf; 631 struct files_struct *newf;
632 struct fdtable *fdt; 632 struct fdtable *fdt;
633 633
634 newf = kmem_cache_alloc(files_cachep, GFP_KERNEL); 634 newf = kmem_cache_alloc(files_cachep, GFP_KERNEL);
635 if (!newf) 635 if (!newf)
636 goto out; 636 goto out;
637 637
638 atomic_set(&newf->count, 1); 638 atomic_set(&newf->count, 1);
639 639
640 spin_lock_init(&newf->file_lock); 640 spin_lock_init(&newf->file_lock);
641 newf->next_fd = 0; 641 newf->next_fd = 0;
642 fdt = &newf->fdtab; 642 fdt = &newf->fdtab;
643 fdt->max_fds = NR_OPEN_DEFAULT; 643 fdt->max_fds = NR_OPEN_DEFAULT;
644 fdt->close_on_exec = (fd_set *)&newf->close_on_exec_init; 644 fdt->close_on_exec = (fd_set *)&newf->close_on_exec_init;
645 fdt->open_fds = (fd_set *)&newf->open_fds_init; 645 fdt->open_fds = (fd_set *)&newf->open_fds_init;
646 fdt->fd = &newf->fd_array[0]; 646 fdt->fd = &newf->fd_array[0];
647 INIT_RCU_HEAD(&fdt->rcu); 647 INIT_RCU_HEAD(&fdt->rcu);
648 fdt->next = NULL; 648 fdt->next = NULL;
649 rcu_assign_pointer(newf->fdt, fdt); 649 rcu_assign_pointer(newf->fdt, fdt);
650 out: 650 out:
651 return newf; 651 return newf;
652 } 652 }
653 653
654 /* 654 /*
655 * Allocate a new files structure and copy contents from the 655 * Allocate a new files structure and copy contents from the
656 * passed in files structure. 656 * passed in files structure.
657 * errorp will be valid only when the returned files_struct is NULL. 657 * errorp will be valid only when the returned files_struct is NULL.
658 */ 658 */
659 static struct files_struct *dup_fd(struct files_struct *oldf, int *errorp) 659 static struct files_struct *dup_fd(struct files_struct *oldf, int *errorp)
660 { 660 {
661 struct files_struct *newf; 661 struct files_struct *newf;
662 struct file **old_fds, **new_fds; 662 struct file **old_fds, **new_fds;
663 int open_files, size, i; 663 int open_files, size, i;
664 struct fdtable *old_fdt, *new_fdt; 664 struct fdtable *old_fdt, *new_fdt;
665 665
666 *errorp = -ENOMEM; 666 *errorp = -ENOMEM;
667 newf = alloc_files(); 667 newf = alloc_files();
668 if (!newf) 668 if (!newf)
669 goto out; 669 goto out;
670 670
671 spin_lock(&oldf->file_lock); 671 spin_lock(&oldf->file_lock);
672 old_fdt = files_fdtable(oldf); 672 old_fdt = files_fdtable(oldf);
673 new_fdt = files_fdtable(newf); 673 new_fdt = files_fdtable(newf);
674 open_files = count_open_files(old_fdt); 674 open_files = count_open_files(old_fdt);
675 675
676 /* 676 /*
677 * Check whether we need to allocate a larger fd array and fd set. 677 * Check whether we need to allocate a larger fd array and fd set.
678 * Note: we're not a clone task, so the open count won't change. 678 * Note: we're not a clone task, so the open count won't change.
679 */ 679 */
680 if (open_files > new_fdt->max_fds) { 680 if (open_files > new_fdt->max_fds) {
681 new_fdt->max_fds = 0; 681 new_fdt->max_fds = 0;
682 spin_unlock(&oldf->file_lock); 682 spin_unlock(&oldf->file_lock);
683 spin_lock(&newf->file_lock); 683 spin_lock(&newf->file_lock);
684 *errorp = expand_files(newf, open_files-1); 684 *errorp = expand_files(newf, open_files-1);
685 spin_unlock(&newf->file_lock); 685 spin_unlock(&newf->file_lock);
686 if (*errorp < 0) 686 if (*errorp < 0)
687 goto out_release; 687 goto out_release;
688 new_fdt = files_fdtable(newf); 688 new_fdt = files_fdtable(newf);
689 /* 689 /*
690 * Reacquire the oldf lock and a pointer to its fd table; 690 * Reacquire the oldf lock and a pointer to its fd table;
691 * who knows, it may have a new, bigger fd table by now. 691 * who knows, it may have a new, bigger fd table by now.
692 * We need the latest pointer. 692 * We need the latest pointer.
693 */ 693 */
694 spin_lock(&oldf->file_lock); 694 spin_lock(&oldf->file_lock);
695 old_fdt = files_fdtable(oldf); 695 old_fdt = files_fdtable(oldf);
696 } 696 }
697 697
698 old_fds = old_fdt->fd; 698 old_fds = old_fdt->fd;
699 new_fds = new_fdt->fd; 699 new_fds = new_fdt->fd;
700 700
701 memcpy(new_fdt->open_fds->fds_bits, 701 memcpy(new_fdt->open_fds->fds_bits,
702 old_fdt->open_fds->fds_bits, open_files/8); 702 old_fdt->open_fds->fds_bits, open_files/8);
703 memcpy(new_fdt->close_on_exec->fds_bits, 703 memcpy(new_fdt->close_on_exec->fds_bits,
704 old_fdt->close_on_exec->fds_bits, open_files/8); 704 old_fdt->close_on_exec->fds_bits, open_files/8);
705 705
706 for (i = open_files; i != 0; i--) { 706 for (i = open_files; i != 0; i--) {
707 struct file *f = *old_fds++; 707 struct file *f = *old_fds++;
708 if (f) { 708 if (f) {
709 get_file(f); 709 get_file(f);
710 } else { 710 } else {
711 /* 711 /*
712 * The fd may be claimed in the fd bitmap but not yet 712 * The fd may be claimed in the fd bitmap but not yet
713 * instantiated in the files array if a sibling thread 713 * instantiated in the files array if a sibling thread
714 * is partway through open(). So make sure that this 714 * is partway through open(). So make sure that this
715 * fd is available to the new process. 715 * fd is available to the new process.
716 */ 716 */
717 FD_CLR(open_files - i, new_fdt->open_fds); 717 FD_CLR(open_files - i, new_fdt->open_fds);
718 } 718 }
719 rcu_assign_pointer(*new_fds++, f); 719 rcu_assign_pointer(*new_fds++, f);
720 } 720 }
721 spin_unlock(&oldf->file_lock); 721 spin_unlock(&oldf->file_lock);
722 722
723 /* compute the remainder to be cleared */ 723 /* compute the remainder to be cleared */
724 size = (new_fdt->max_fds - open_files) * sizeof(struct file *); 724 size = (new_fdt->max_fds - open_files) * sizeof(struct file *);
725 725
726 /* This is long-word aligned, thus could use an optimized version */ 726 /* This is long-word aligned, thus could use an optimized version */
727 memset(new_fds, 0, size); 727 memset(new_fds, 0, size);
728 728
729 if (new_fdt->max_fds > open_files) { 729 if (new_fdt->max_fds > open_files) {
730 int left = (new_fdt->max_fds-open_files)/8; 730 int left = (new_fdt->max_fds-open_files)/8;
731 int start = open_files / (8 * sizeof(unsigned long)); 731 int start = open_files / (8 * sizeof(unsigned long));
732 732
733 memset(&new_fdt->open_fds->fds_bits[start], 0, left); 733 memset(&new_fdt->open_fds->fds_bits[start], 0, left);
734 memset(&new_fdt->close_on_exec->fds_bits[start], 0, left); 734 memset(&new_fdt->close_on_exec->fds_bits[start], 0, left);
735 } 735 }
736 736
737 return newf; 737 return newf;
738 738
739 out_release: 739 out_release:
740 kmem_cache_free(files_cachep, newf); 740 kmem_cache_free(files_cachep, newf);
741 out: 741 out:
742 return NULL; 742 return NULL;
743 } 743 }
744 744
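dup_fd() copies only the first open_files bits of each bitmap (open_files/8 bytes) and zeroes the rest of the new, possibly larger table. The following userspace sketch shows just that byte arithmetic; it assumes, as count_open_files() guarantees, that open_files is a multiple of 8*sizeof(long), and the names are hypothetical, not kernel code:

    #include <string.h>
    #include <stdio.h>

    /* Illustrative sketch of the copy-then-clear done on the fd bitmaps
     * in dup_fd(): copy the first open_files bits, zero the remainder. */
    static void copy_fd_bitmap(unsigned long *dst, const unsigned long *src,
                               int open_files, int max_fds)
    {
            int start = open_files / (8 * sizeof(unsigned long));
            int left  = (max_fds - open_files) / 8;

            memcpy(dst, src, open_files / 8);
            memset(&dst[start], 0, left);
    }

    int main(void)
    {
            unsigned long src[4] = { ~0UL, ~0UL, ~0UL, ~0UL };
            unsigned long dst[4];

            copy_fd_bitmap(dst, src, 2 * 8 * sizeof(unsigned long),
                           4 * 8 * sizeof(unsigned long));
            /* First two words copied, last two cleared. */
            printf("%lx %lx %lx %lx\n", dst[0], dst[1], dst[2], dst[3]);
            return 0;
    }
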
745 static int copy_files(unsigned long clone_flags, struct task_struct * tsk) 745 static int copy_files(unsigned long clone_flags, struct task_struct * tsk)
746 { 746 {
747 struct files_struct *oldf, *newf; 747 struct files_struct *oldf, *newf;
748 int error = 0; 748 int error = 0;
749 749
750 /* 750 /*
751 * A background process may not have any files ... 751 * A background process may not have any files ...
752 */ 752 */
753 oldf = current->files; 753 oldf = current->files;
754 if (!oldf) 754 if (!oldf)
755 goto out; 755 goto out;
756 756
757 if (clone_flags & CLONE_FILES) { 757 if (clone_flags & CLONE_FILES) {
758 atomic_inc(&oldf->count); 758 atomic_inc(&oldf->count);
759 goto out; 759 goto out;
760 } 760 }
761 761
762 /* 762 /*
763 * Note: we may be using current for both targets (See exec.c) 763 * Note: we may be using current for both targets (See exec.c)
764 * This works because we cache current->files (old) as oldf. Don't 764 * This works because we cache current->files (old) as oldf. Don't
765 * break this. 765 * break this.
766 */ 766 */
767 tsk->files = NULL; 767 tsk->files = NULL;
768 newf = dup_fd(oldf, &error); 768 newf = dup_fd(oldf, &error);
769 if (!newf) 769 if (!newf)
770 goto out; 770 goto out;
771 771
772 tsk->files = newf; 772 tsk->files = newf;
773 error = 0; 773 error = 0;
774 out: 774 out:
775 return error; 775 return error;
776 } 776 }
777 777
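copy_files() either bumps the reference count on a shared table (CLONE_FILES) or duplicates it with dup_fd(). A small userspace illustration of the duplicated case -- plain fork(), so no CLONE_FILES -- where the child's close() does not disturb the parent's table (illustrative only, not part of this patch):

    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/wait.h>

    /* A plain fork() does not pass CLONE_FILES, so copy_files() duplicates
     * the descriptor table via dup_fd(); closing the fd in the child
     * leaves the parent's table entry intact. */
    int main(void)
    {
            int fd = open("/dev/null", O_WRONLY);
            pid_t pid;

            if (fd < 0)
                    return 1;

            pid = fork();
            if (pid == 0) {
                    close(fd);      /* only the child's table entry goes away */
                    _exit(0);
            }
            waitpid(pid, NULL, 0);

            if (write(fd, "x", 1) == 1)
                    printf("parent fd still valid after the child closed its copy\n");
            close(fd);
            return 0;
    }
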
778 /* 778 /*
779 * Helper to unshare the files of the current task. 779 * Helper to unshare the files of the current task.
780 * We don't want to expose copy_files internals to 780 * We don't want to expose copy_files internals to
781 * the exec layer of the kernel. 781 * the exec layer of the kernel.
782 */ 782 */
783 783
784 int unshare_files(void) 784 int unshare_files(void)
785 { 785 {
786 struct files_struct *files = current->files; 786 struct files_struct *files = current->files;
787 int rc; 787 int rc;
788 788
789 BUG_ON(!files); 789 BUG_ON(!files);
790 790
791 /* This can race, but the race only causes us to copy when we 791 /* This can race, but the race only causes us to copy when we
792 don't need to and then drop the copy */ 792 don't need to and then drop the copy */
793 if(atomic_read(&files->count) == 1) 793 if(atomic_read(&files->count) == 1)
794 { 794 {
795 atomic_inc(&files->count); 795 atomic_inc(&files->count);
796 return 0; 796 return 0;
797 } 797 }
798 rc = copy_files(0, current); 798 rc = copy_files(0, current);
799 if(rc) 799 if(rc)
800 current->files = files; 800 current->files = files;
801 return rc; 801 return rc;
802 } 802 }
803 803
804 EXPORT_SYMBOL(unshare_files); 804 EXPORT_SYMBOL(unshare_files);
805 805
806 static inline int copy_sighand(unsigned long clone_flags, struct task_struct * tsk) 806 static inline int copy_sighand(unsigned long clone_flags, struct task_struct * tsk)
807 { 807 {
808 struct sighand_struct *sig; 808 struct sighand_struct *sig;
809 809
810 if (clone_flags & (CLONE_SIGHAND | CLONE_THREAD)) { 810 if (clone_flags & (CLONE_SIGHAND | CLONE_THREAD)) {
811 atomic_inc(&current->sighand->count); 811 atomic_inc(&current->sighand->count);
812 return 0; 812 return 0;
813 } 813 }
814 sig = kmem_cache_alloc(sighand_cachep, GFP_KERNEL); 814 sig = kmem_cache_alloc(sighand_cachep, GFP_KERNEL);
815 rcu_assign_pointer(tsk->sighand, sig); 815 rcu_assign_pointer(tsk->sighand, sig);
816 if (!sig) 816 if (!sig)
817 return -ENOMEM; 817 return -ENOMEM;
818 atomic_set(&sig->count, 1); 818 atomic_set(&sig->count, 1);
819 memcpy(sig->action, current->sighand->action, sizeof(sig->action)); 819 memcpy(sig->action, current->sighand->action, sizeof(sig->action));
820 return 0; 820 return 0;
821 } 821 }
822 822
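Since a plain fork() passes neither CLONE_SIGHAND nor CLONE_THREAD, copy_sighand() above allocates a fresh sighand_struct and memcpy()s the handler table, so the child starts with the parent's handlers but later changes in either process stay private. A userspace illustration (not part of this patch):

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    /* The copied handler table means the child still has the parent's
     * SIGUSR1 handler right after fork(). */
    static void on_usr1(int sig)
    {
            (void)sig;
            write(1, "handler inherited across fork\n", 30);
    }

    int main(void)
    {
            pid_t pid;

            signal(SIGUSR1, on_usr1);

            pid = fork();
            if (pid == 0) {
                    raise(SIGUSR1);         /* the copied handler runs here */
                    _exit(0);
            }
            waitpid(pid, NULL, 0);
            return 0;
    }
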
823 void __cleanup_sighand(struct sighand_struct *sighand) 823 void __cleanup_sighand(struct sighand_struct *sighand)
824 { 824 {
825 if (atomic_dec_and_test(&sighand->count)) 825 if (atomic_dec_and_test(&sighand->count))
826 kmem_cache_free(sighand_cachep, sighand); 826 kmem_cache_free(sighand_cachep, sighand);
827 } 827 }
828 828
829 static inline int copy_signal(unsigned long clone_flags, struct task_struct * tsk) 829 static inline int copy_signal(unsigned long clone_flags, struct task_struct * tsk)
830 { 830 {
831 struct signal_struct *sig; 831 struct signal_struct *sig;
832 int ret; 832 int ret;
833 833
834 if (clone_flags & CLONE_THREAD) { 834 if (clone_flags & CLONE_THREAD) {
835 atomic_inc(&current->signal->count); 835 atomic_inc(&current->signal->count);
836 atomic_inc(&current->signal->live); 836 atomic_inc(&current->signal->live);
837 return 0; 837 return 0;
838 } 838 }
839 sig = kmem_cache_alloc(signal_cachep, GFP_KERNEL); 839 sig = kmem_cache_alloc(signal_cachep, GFP_KERNEL);
840 tsk->signal = sig; 840 tsk->signal = sig;
841 if (!sig) 841 if (!sig)
842 return -ENOMEM; 842 return -ENOMEM;
843 843
844 ret = copy_thread_group_keys(tsk); 844 ret = copy_thread_group_keys(tsk);
845 if (ret < 0) { 845 if (ret < 0) {
846 kmem_cache_free(signal_cachep, sig); 846 kmem_cache_free(signal_cachep, sig);
847 return ret; 847 return ret;
848 } 848 }
849 849
850 atomic_set(&sig->count, 1); 850 atomic_set(&sig->count, 1);
851 atomic_set(&sig->live, 1); 851 atomic_set(&sig->live, 1);
852 init_waitqueue_head(&sig->wait_chldexit); 852 init_waitqueue_head(&sig->wait_chldexit);
853 sig->flags = 0; 853 sig->flags = 0;
854 sig->group_exit_code = 0; 854 sig->group_exit_code = 0;
855 sig->group_exit_task = NULL; 855 sig->group_exit_task = NULL;
856 sig->group_stop_count = 0; 856 sig->group_stop_count = 0;
857 sig->curr_target = NULL; 857 sig->curr_target = NULL;
858 init_sigpending(&sig->shared_pending); 858 init_sigpending(&sig->shared_pending);
859 INIT_LIST_HEAD(&sig->posix_timers); 859 INIT_LIST_HEAD(&sig->posix_timers);
860 860
861 hrtimer_init(&sig->real_timer, CLOCK_MONOTONIC, HRTIMER_REL); 861 hrtimer_init(&sig->real_timer, CLOCK_MONOTONIC, HRTIMER_REL);
862 sig->it_real_incr.tv64 = 0; 862 sig->it_real_incr.tv64 = 0;
863 sig->real_timer.function = it_real_fn; 863 sig->real_timer.function = it_real_fn;
864 sig->tsk = tsk; 864 sig->tsk = tsk;
865 865
866 sig->it_virt_expires = cputime_zero; 866 sig->it_virt_expires = cputime_zero;
867 sig->it_virt_incr = cputime_zero; 867 sig->it_virt_incr = cputime_zero;
868 sig->it_prof_expires = cputime_zero; 868 sig->it_prof_expires = cputime_zero;
869 sig->it_prof_incr = cputime_zero; 869 sig->it_prof_incr = cputime_zero;
870 870
871 sig->leader = 0; /* session leadership doesn't inherit */ 871 sig->leader = 0; /* session leadership doesn't inherit */
872 sig->tty_old_pgrp = 0; 872 sig->tty_old_pgrp = 0;
873 873
874 sig->utime = sig->stime = sig->cutime = sig->cstime = cputime_zero; 874 sig->utime = sig->stime = sig->cutime = sig->cstime = cputime_zero;
875 sig->nvcsw = sig->nivcsw = sig->cnvcsw = sig->cnivcsw = 0; 875 sig->nvcsw = sig->nivcsw = sig->cnvcsw = sig->cnivcsw = 0;
876 sig->min_flt = sig->maj_flt = sig->cmin_flt = sig->cmaj_flt = 0; 876 sig->min_flt = sig->maj_flt = sig->cmin_flt = sig->cmaj_flt = 0;
877 sig->sched_time = 0; 877 sig->sched_time = 0;
878 INIT_LIST_HEAD(&sig->cpu_timers[0]); 878 INIT_LIST_HEAD(&sig->cpu_timers[0]);
879 INIT_LIST_HEAD(&sig->cpu_timers[1]); 879 INIT_LIST_HEAD(&sig->cpu_timers[1]);
880 INIT_LIST_HEAD(&sig->cpu_timers[2]); 880 INIT_LIST_HEAD(&sig->cpu_timers[2]);
881 taskstats_tgid_init(sig); 881 taskstats_tgid_init(sig);
882 882
883 task_lock(current->group_leader); 883 task_lock(current->group_leader);
884 memcpy(sig->rlim, current->signal->rlim, sizeof sig->rlim); 884 memcpy(sig->rlim, current->signal->rlim, sizeof sig->rlim);
885 task_unlock(current->group_leader); 885 task_unlock(current->group_leader);
886 886
887 if (sig->rlim[RLIMIT_CPU].rlim_cur != RLIM_INFINITY) { 887 if (sig->rlim[RLIMIT_CPU].rlim_cur != RLIM_INFINITY) {
888 /* 888 /*
889 * New sole thread in the process gets an expiry time 889 * New sole thread in the process gets an expiry time
890 * of the whole CPU time limit. 890 * of the whole CPU time limit.
891 */ 891 */
892 tsk->it_prof_expires = 892 tsk->it_prof_expires =
893 secs_to_cputime(sig->rlim[RLIMIT_CPU].rlim_cur); 893 secs_to_cputime(sig->rlim[RLIMIT_CPU].rlim_cur);
894 } 894 }
895 acct_init_pacct(&sig->pacct); 895 acct_init_pacct(&sig->pacct);
896 896
897 return 0; 897 return 0;
898 } 898 }
899 899
900 void __cleanup_signal(struct signal_struct *sig) 900 void __cleanup_signal(struct signal_struct *sig)
901 { 901 {
902 exit_thread_group_keys(sig); 902 exit_thread_group_keys(sig);
903 kmem_cache_free(signal_cachep, sig); 903 kmem_cache_free(signal_cachep, sig);
904 } 904 }
905 905
906 static inline void cleanup_signal(struct task_struct *tsk) 906 static inline void cleanup_signal(struct task_struct *tsk)
907 { 907 {
908 struct signal_struct *sig = tsk->signal; 908 struct signal_struct *sig = tsk->signal;
909 909
910 atomic_dec(&sig->live); 910 atomic_dec(&sig->live);
911 911
912 if (atomic_dec_and_test(&sig->count)) 912 if (atomic_dec_and_test(&sig->count))
913 __cleanup_signal(sig); 913 __cleanup_signal(sig);
914 } 914 }
915 915
916 static inline void copy_flags(unsigned long clone_flags, struct task_struct *p) 916 static inline void copy_flags(unsigned long clone_flags, struct task_struct *p)
917 { 917 {
918 unsigned long new_flags = p->flags; 918 unsigned long new_flags = p->flags;
919 919
920 new_flags &= ~(PF_SUPERPRIV | PF_NOFREEZE); 920 new_flags &= ~(PF_SUPERPRIV | PF_NOFREEZE);
921 new_flags |= PF_FORKNOEXEC; 921 new_flags |= PF_FORKNOEXEC;
922 if (!(clone_flags & CLONE_PTRACE)) 922 if (!(clone_flags & CLONE_PTRACE))
923 p->ptrace = 0; 923 p->ptrace = 0;
924 p->flags = new_flags; 924 p->flags = new_flags;
925 } 925 }
926 926
927 asmlinkage long sys_set_tid_address(int __user *tidptr) 927 asmlinkage long sys_set_tid_address(int __user *tidptr)
928 { 928 {
929 current->clear_child_tid = tidptr; 929 current->clear_child_tid = tidptr;
930 930
931 return current->pid; 931 return current->pid;
932 } 932 }
933 933
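sys_set_tid_address() only records the pointer and returns current->pid; the kernel later clears the int at that address (and wakes any futex waiter on it) when the thread exits, which is what the threading library builds join on. A minimal userspace sketch of invoking it directly (illustrative only):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    /* Call set_tid_address via the raw syscall; the return value is the
     * caller's current->pid, which for the main thread equals getpid(). */
    int main(void)
    {
            int child_tid = 0;
            long ret = syscall(SYS_set_tid_address, &child_tid);

            printf("set_tid_address returned %ld, getpid() is %d\n",
                   ret, (int)getpid());
            return 0;
    }
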
934 static inline void rt_mutex_init_task(struct task_struct *p) 934 static inline void rt_mutex_init_task(struct task_struct *p)
935 { 935 {
936 #ifdef CONFIG_RT_MUTEXES 936 #ifdef CONFIG_RT_MUTEXES
937 spin_lock_init(&p->pi_lock); 937 spin_lock_init(&p->pi_lock);
938 plist_head_init(&p->pi_waiters, &p->pi_lock); 938 plist_head_init(&p->pi_waiters, &p->pi_lock);
939 p->pi_blocked_on = NULL; 939 p->pi_blocked_on = NULL;
940 #endif 940 #endif
941 } 941 }
942 942
943 /* 943 /*
944 * This creates a new process as a copy of the old one, 944 * This creates a new process as a copy of the old one,
945 * but does not actually start it yet. 945 * but does not actually start it yet.
946 * 946 *
947 * It copies the registers, and all the appropriate 947 * It copies the registers, and all the appropriate
948 * parts of the process environment (as per the clone 948 * parts of the process environment (as per the clone
949 * flags). The actual kick-off is left to the caller. 949 * flags). The actual kick-off is left to the caller.
950 */ 950 */
951 static struct task_struct *copy_process(unsigned long clone_flags, 951 static struct task_struct *copy_process(unsigned long clone_flags,
952 unsigned long stack_start, 952 unsigned long stack_start,
953 struct pt_regs *regs, 953 struct pt_regs *regs,
954 unsigned long stack_size, 954 unsigned long stack_size,
955 int __user *parent_tidptr, 955 int __user *parent_tidptr,
956 int __user *child_tidptr, 956 int __user *child_tidptr,
957 int pid) 957 int pid)
958 { 958 {
959 int retval; 959 int retval;
960 struct task_struct *p = NULL; 960 struct task_struct *p = NULL;
961 961
962 if ((clone_flags & (CLONE_NEWNS|CLONE_FS)) == (CLONE_NEWNS|CLONE_FS)) 962 if ((clone_flags & (CLONE_NEWNS|CLONE_FS)) == (CLONE_NEWNS|CLONE_FS))
963 return ERR_PTR(-EINVAL); 963 return ERR_PTR(-EINVAL);
964 964
965 /* 965 /*
966 * Thread groups must share signals as well, and detached threads 966 * Thread groups must share signals as well, and detached threads
967 * can only be started up within the thread group. 967 * can only be started up within the thread group.
968 */ 968 */
969 if ((clone_flags & CLONE_THREAD) && !(clone_flags & CLONE_SIGHAND)) 969 if ((clone_flags & CLONE_THREAD) && !(clone_flags & CLONE_SIGHAND))
970 return ERR_PTR(-EINVAL); 970 return ERR_PTR(-EINVAL);
971 971
972 /* 972 /*
973 * Shared signal handlers imply shared VM. By way of the above, 973 * Shared signal handlers imply shared VM. By way of the above,
974 * thread groups also imply shared VM. Blocking this case allows 974 * thread groups also imply shared VM. Blocking this case allows
975 * for various simplifications in other code. 975 * for various simplifications in other code.
976 */ 976 */
977 if ((clone_flags & CLONE_SIGHAND) && !(clone_flags & CLONE_VM)) 977 if ((clone_flags & CLONE_SIGHAND) && !(clone_flags & CLONE_VM))
978 return ERR_PTR(-EINVAL); 978 return ERR_PTR(-EINVAL);
979 979
980 retval = security_task_create(clone_flags); 980 retval = security_task_create(clone_flags);
981 if (retval) 981 if (retval)
982 goto fork_out; 982 goto fork_out;
983 983
984 retval = -ENOMEM; 984 retval = -ENOMEM;
985 p = dup_task_struct(current); 985 p = dup_task_struct(current);
986 if (!p) 986 if (!p)
987 goto fork_out; 987 goto fork_out;
988 988
989 rt_mutex_init_task(p); 989 rt_mutex_init_task(p);
990 990
991 #ifdef CONFIG_TRACE_IRQFLAGS 991 #ifdef CONFIG_TRACE_IRQFLAGS
992 DEBUG_LOCKS_WARN_ON(!p->hardirqs_enabled); 992 DEBUG_LOCKS_WARN_ON(!p->hardirqs_enabled);
993 DEBUG_LOCKS_WARN_ON(!p->softirqs_enabled); 993 DEBUG_LOCKS_WARN_ON(!p->softirqs_enabled);
994 #endif 994 #endif
995 retval = -EAGAIN; 995 retval = -EAGAIN;
996 if (atomic_read(&p->user->processes) >= 996 if (atomic_read(&p->user->processes) >=
997 p->signal->rlim[RLIMIT_NPROC].rlim_cur) { 997 p->signal->rlim[RLIMIT_NPROC].rlim_cur) {
998 if (!capable(CAP_SYS_ADMIN) && !capable(CAP_SYS_RESOURCE) && 998 if (!capable(CAP_SYS_ADMIN) && !capable(CAP_SYS_RESOURCE) &&
999 p->user != &root_user) 999 p->user != &root_user)
1000 goto bad_fork_free; 1000 goto bad_fork_free;
1001 } 1001 }
1002 1002
1003 atomic_inc(&p->user->__count); 1003 atomic_inc(&p->user->__count);
1004 atomic_inc(&p->user->processes); 1004 atomic_inc(&p->user->processes);
1005 get_group_info(p->group_info); 1005 get_group_info(p->group_info);
1006 1006
1007 /* 1007 /*
1008 * If multiple threads are within copy_process(), then this check 1008 * If multiple threads are within copy_process(), then this check
1009 * triggers too late. This doesn't hurt; the check is only there 1009 * triggers too late. This doesn't hurt; the check is only there
1010 * to stop root fork bombs. 1010 * to stop root fork bombs.
1011 */ 1011 */
1012 if (nr_threads >= max_threads) 1012 if (nr_threads >= max_threads)
1013 goto bad_fork_cleanup_count; 1013 goto bad_fork_cleanup_count;
1014 1014
1015 if (!try_module_get(task_thread_info(p)->exec_domain->module)) 1015 if (!try_module_get(task_thread_info(p)->exec_domain->module))
1016 goto bad_fork_cleanup_count; 1016 goto bad_fork_cleanup_count;
1017 1017
1018 if (p->binfmt && !try_module_get(p->binfmt->module)) 1018 if (p->binfmt && !try_module_get(p->binfmt->module))
1019 goto bad_fork_cleanup_put_domain; 1019 goto bad_fork_cleanup_put_domain;
1020 1020
1021 p->did_exec = 0; 1021 p->did_exec = 0;
1022 delayacct_tsk_init(p); /* Must remain after dup_task_struct() */ 1022 delayacct_tsk_init(p); /* Must remain after dup_task_struct() */
1023 copy_flags(clone_flags, p); 1023 copy_flags(clone_flags, p);
1024 p->pid = pid; 1024 p->pid = pid;
1025 retval = -EFAULT; 1025 retval = -EFAULT;
1026 if (clone_flags & CLONE_PARENT_SETTID) 1026 if (clone_flags & CLONE_PARENT_SETTID)
1027 if (put_user(p->pid, parent_tidptr)) 1027 if (put_user(p->pid, parent_tidptr))
1028 goto bad_fork_cleanup_delays_binfmt; 1028 goto bad_fork_cleanup_delays_binfmt;
1029 1029
1030 INIT_LIST_HEAD(&p->children); 1030 INIT_LIST_HEAD(&p->children);
1031 INIT_LIST_HEAD(&p->sibling); 1031 INIT_LIST_HEAD(&p->sibling);
1032 p->vfork_done = NULL; 1032 p->vfork_done = NULL;
1033 spin_lock_init(&p->alloc_lock); 1033 spin_lock_init(&p->alloc_lock);
1034 1034
1035 clear_tsk_thread_flag(p, TIF_SIGPENDING); 1035 clear_tsk_thread_flag(p, TIF_SIGPENDING);
1036 init_sigpending(&p->pending); 1036 init_sigpending(&p->pending);
1037 1037
1038 p->utime = cputime_zero; 1038 p->utime = cputime_zero;
1039 p->stime = cputime_zero; 1039 p->stime = cputime_zero;
1040 p->sched_time = 0; 1040 p->sched_time = 0;
1041 p->rchar = 0; /* I/O counter: bytes read */ 1041 p->rchar = 0; /* I/O counter: bytes read */
1042 p->wchar = 0; /* I/O counter: bytes written */ 1042 p->wchar = 0; /* I/O counter: bytes written */
1043 p->syscr = 0; /* I/O counter: read syscalls */ 1043 p->syscr = 0; /* I/O counter: read syscalls */
1044 p->syscw = 0; /* I/O counter: write syscalls */ 1044 p->syscw = 0; /* I/O counter: write syscalls */
1045 task_io_accounting_init(p); 1045 task_io_accounting_init(p);
1046 acct_clear_integrals(p); 1046 acct_clear_integrals(p);
1047 1047
1048 p->it_virt_expires = cputime_zero; 1048 p->it_virt_expires = cputime_zero;
1049 p->it_prof_expires = cputime_zero; 1049 p->it_prof_expires = cputime_zero;
1050 p->it_sched_expires = 0; 1050 p->it_sched_expires = 0;
1051 INIT_LIST_HEAD(&p->cpu_timers[0]); 1051 INIT_LIST_HEAD(&p->cpu_timers[0]);
1052 INIT_LIST_HEAD(&p->cpu_timers[1]); 1052 INIT_LIST_HEAD(&p->cpu_timers[1]);
1053 INIT_LIST_HEAD(&p->cpu_timers[2]); 1053 INIT_LIST_HEAD(&p->cpu_timers[2]);
1054 1054
1055 p->lock_depth = -1; /* -1 = no lock */ 1055 p->lock_depth = -1; /* -1 = no lock */
1056 do_posix_clock_monotonic_gettime(&p->start_time); 1056 do_posix_clock_monotonic_gettime(&p->start_time);
1057 p->security = NULL; 1057 p->security = NULL;
1058 p->io_context = NULL; 1058 p->io_context = NULL;
1059 p->io_wait = NULL; 1059 p->io_wait = NULL;
1060 p->audit_context = NULL; 1060 p->audit_context = NULL;
1061 cpuset_fork(p); 1061 cpuset_fork(p);
1062 #ifdef CONFIG_NUMA 1062 #ifdef CONFIG_NUMA
1063 p->mempolicy = mpol_copy(p->mempolicy); 1063 p->mempolicy = mpol_copy(p->mempolicy);
1064 if (IS_ERR(p->mempolicy)) { 1064 if (IS_ERR(p->mempolicy)) {
1065 retval = PTR_ERR(p->mempolicy); 1065 retval = PTR_ERR(p->mempolicy);
1066 p->mempolicy = NULL; 1066 p->mempolicy = NULL;
1067 goto bad_fork_cleanup_cpuset; 1067 goto bad_fork_cleanup_cpuset;
1068 } 1068 }
1069 mpol_fix_fork_child_flag(p); 1069 mpol_fix_fork_child_flag(p);
1070 #endif 1070 #endif
1071 #ifdef CONFIG_TRACE_IRQFLAGS 1071 #ifdef CONFIG_TRACE_IRQFLAGS
1072 p->irq_events = 0; 1072 p->irq_events = 0;
1073 #ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW 1073 #ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
1074 p->hardirqs_enabled = 1; 1074 p->hardirqs_enabled = 1;
1075 #else 1075 #else
1076 p->hardirqs_enabled = 0; 1076 p->hardirqs_enabled = 0;
1077 #endif 1077 #endif
1078 p->hardirq_enable_ip = 0; 1078 p->hardirq_enable_ip = 0;
1079 p->hardirq_enable_event = 0; 1079 p->hardirq_enable_event = 0;
1080 p->hardirq_disable_ip = _THIS_IP_; 1080 p->hardirq_disable_ip = _THIS_IP_;
1081 p->hardirq_disable_event = 0; 1081 p->hardirq_disable_event = 0;
1082 p->softirqs_enabled = 1; 1082 p->softirqs_enabled = 1;
1083 p->softirq_enable_ip = _THIS_IP_; 1083 p->softirq_enable_ip = _THIS_IP_;
1084 p->softirq_enable_event = 0; 1084 p->softirq_enable_event = 0;
1085 p->softirq_disable_ip = 0; 1085 p->softirq_disable_ip = 0;
1086 p->softirq_disable_event = 0; 1086 p->softirq_disable_event = 0;
1087 p->hardirq_context = 0; 1087 p->hardirq_context = 0;
1088 p->softirq_context = 0; 1088 p->softirq_context = 0;
1089 #endif 1089 #endif
1090 #ifdef CONFIG_LOCKDEP 1090 #ifdef CONFIG_LOCKDEP
1091 p->lockdep_depth = 0; /* no locks held yet */ 1091 p->lockdep_depth = 0; /* no locks held yet */
1092 p->curr_chain_key = 0; 1092 p->curr_chain_key = 0;
1093 p->lockdep_recursion = 0; 1093 p->lockdep_recursion = 0;
1094 #endif 1094 #endif
1095 1095
1096 #ifdef CONFIG_DEBUG_MUTEXES 1096 #ifdef CONFIG_DEBUG_MUTEXES
1097 p->blocked_on = NULL; /* not blocked yet */ 1097 p->blocked_on = NULL; /* not blocked yet */
1098 #endif 1098 #endif
1099 1099
1100 p->tgid = p->pid; 1100 p->tgid = p->pid;
1101 if (clone_flags & CLONE_THREAD) 1101 if (clone_flags & CLONE_THREAD)
1102 p->tgid = current->tgid; 1102 p->tgid = current->tgid;
1103 1103
1104 if ((retval = security_task_alloc(p))) 1104 if ((retval = security_task_alloc(p)))
1105 goto bad_fork_cleanup_policy; 1105 goto bad_fork_cleanup_policy;
1106 if ((retval = audit_alloc(p))) 1106 if ((retval = audit_alloc(p)))
1107 goto bad_fork_cleanup_security; 1107 goto bad_fork_cleanup_security;
1108 /* copy all the process information */ 1108 /* copy all the process information */
1109 if ((retval = copy_semundo(clone_flags, p))) 1109 if ((retval = copy_semundo(clone_flags, p)))
1110 goto bad_fork_cleanup_audit; 1110 goto bad_fork_cleanup_audit;
1111 if ((retval = copy_files(clone_flags, p))) 1111 if ((retval = copy_files(clone_flags, p)))
1112 goto bad_fork_cleanup_semundo; 1112 goto bad_fork_cleanup_semundo;
1113 if ((retval = copy_fs(clone_flags, p))) 1113 if ((retval = copy_fs(clone_flags, p)))
1114 goto bad_fork_cleanup_files; 1114 goto bad_fork_cleanup_files;
1115 if ((retval = copy_sighand(clone_flags, p))) 1115 if ((retval = copy_sighand(clone_flags, p)))
1116 goto bad_fork_cleanup_fs; 1116 goto bad_fork_cleanup_fs;
1117 if ((retval = copy_signal(clone_flags, p))) 1117 if ((retval = copy_signal(clone_flags, p)))
1118 goto bad_fork_cleanup_sighand; 1118 goto bad_fork_cleanup_sighand;
1119 if ((retval = copy_mm(clone_flags, p))) 1119 if ((retval = copy_mm(clone_flags, p)))
1120 goto bad_fork_cleanup_signal; 1120 goto bad_fork_cleanup_signal;
1121 if ((retval = copy_keys(clone_flags, p))) 1121 if ((retval = copy_keys(clone_flags, p)))
1122 goto bad_fork_cleanup_mm; 1122 goto bad_fork_cleanup_mm;
1123 if ((retval = copy_namespaces(clone_flags, p))) 1123 if ((retval = copy_namespaces(clone_flags, p)))
1124 goto bad_fork_cleanup_keys; 1124 goto bad_fork_cleanup_keys;
1125 retval = copy_thread(0, clone_flags, stack_start, stack_size, p, regs); 1125 retval = copy_thread(0, clone_flags, stack_start, stack_size, p, regs);
1126 if (retval) 1126 if (retval)
1127 goto bad_fork_cleanup_namespaces; 1127 goto bad_fork_cleanup_namespaces;
1128 1128
1129 p->set_child_tid = (clone_flags & CLONE_CHILD_SETTID) ? child_tidptr : NULL; 1129 p->set_child_tid = (clone_flags & CLONE_CHILD_SETTID) ? child_tidptr : NULL;
1130 /* 1130 /*
1131 * Clear TID on mm_release()? 1131 * Clear TID on mm_release()?
1132 */ 1132 */
1133 p->clear_child_tid = (clone_flags & CLONE_CHILD_CLEARTID) ? child_tidptr: NULL; 1133 p->clear_child_tid = (clone_flags & CLONE_CHILD_CLEARTID) ? child_tidptr: NULL;
1134 p->robust_list = NULL; 1134 p->robust_list = NULL;
1135 #ifdef CONFIG_COMPAT 1135 #ifdef CONFIG_COMPAT
1136 p->compat_robust_list = NULL; 1136 p->compat_robust_list = NULL;
1137 #endif 1137 #endif
1138 INIT_LIST_HEAD(&p->pi_state_list); 1138 INIT_LIST_HEAD(&p->pi_state_list);
1139 p->pi_state_cache = NULL; 1139 p->pi_state_cache = NULL;
1140 1140
1141 /* 1141 /*
1142 * sigaltstack should be cleared when sharing the same VM 1142 * sigaltstack should be cleared when sharing the same VM
1143 */ 1143 */
1144 if ((clone_flags & (CLONE_VM|CLONE_VFORK)) == CLONE_VM) 1144 if ((clone_flags & (CLONE_VM|CLONE_VFORK)) == CLONE_VM)
1145 p->sas_ss_sp = p->sas_ss_size = 0; 1145 p->sas_ss_sp = p->sas_ss_size = 0;
1146 1146
1147 /* 1147 /*
1148 * Syscall tracing should be turned off in the child regardless 1148 * Syscall tracing should be turned off in the child regardless
1149 * of CLONE_PTRACE. 1149 * of CLONE_PTRACE.
1150 */ 1150 */
1151 clear_tsk_thread_flag(p, TIF_SYSCALL_TRACE); 1151 clear_tsk_thread_flag(p, TIF_SYSCALL_TRACE);
1152 #ifdef TIF_SYSCALL_EMU 1152 #ifdef TIF_SYSCALL_EMU
1153 clear_tsk_thread_flag(p, TIF_SYSCALL_EMU); 1153 clear_tsk_thread_flag(p, TIF_SYSCALL_EMU);
1154 #endif 1154 #endif
1155 1155
1156 /* Our parent execution domain becomes the current domain. 1156 /* Our parent execution domain becomes the current domain.
1157 These must match for thread signalling to apply. */ 1157 These must match for thread signalling to apply. */
1158 p->parent_exec_id = p->self_exec_id; 1158 p->parent_exec_id = p->self_exec_id;
1159 1159
1160 /* ok, now we should be set up.. */ 1160 /* ok, now we should be set up.. */
1161 p->exit_signal = (clone_flags & CLONE_THREAD) ? -1 : (clone_flags & CSIGNAL); 1161 p->exit_signal = (clone_flags & CLONE_THREAD) ? -1 : (clone_flags & CSIGNAL);
1162 p->pdeath_signal = 0; 1162 p->pdeath_signal = 0;
1163 p->exit_state = 0; 1163 p->exit_state = 0;
1164 1164
1165 /* 1165 /*
1166 * Ok, make it visible to the rest of the system. 1166 * Ok, make it visible to the rest of the system.
1167 * We don't wake it up yet. 1167 * We don't wake it up yet.
1168 */ 1168 */
1169 p->group_leader = p; 1169 p->group_leader = p;
1170 INIT_LIST_HEAD(&p->thread_group); 1170 INIT_LIST_HEAD(&p->thread_group);
1171 INIT_LIST_HEAD(&p->ptrace_children); 1171 INIT_LIST_HEAD(&p->ptrace_children);
1172 INIT_LIST_HEAD(&p->ptrace_list); 1172 INIT_LIST_HEAD(&p->ptrace_list);
1173 1173
1174 /* Perform scheduler related setup. Assign this task to a CPU. */ 1174 /* Perform scheduler related setup. Assign this task to a CPU. */
1175 sched_fork(p, clone_flags); 1175 sched_fork(p, clone_flags);
1176 1176
1177 /* Need tasklist lock for parent etc handling! */ 1177 /* Need tasklist lock for parent etc handling! */
1178 write_lock_irq(&tasklist_lock); 1178 write_lock_irq(&tasklist_lock);
1179 1179
1180 /* for sys_ioprio_set(IOPRIO_WHO_PGRP) */ 1180 /* for sys_ioprio_set(IOPRIO_WHO_PGRP) */
1181 p->ioprio = current->ioprio; 1181 p->ioprio = current->ioprio;
1182 1182
1183 /* 1183 /*
1184 * The task hasn't been attached yet, so its cpus_allowed mask will 1184 * The task hasn't been attached yet, so its cpus_allowed mask will
1185 * not be changed, nor will its assigned CPU. 1185 * not be changed, nor will its assigned CPU.
1186 * 1186 *
1187 * The cpus_allowed mask of the parent may have changed after it was 1187 * The cpus_allowed mask of the parent may have changed after it was
1188 * copied the first time - so re-copy it here, then check the child's CPU 1188 * copied the first time - so re-copy it here, then check the child's CPU
1189 * to ensure it is on a valid CPU (and if not, just force it back to 1189 * to ensure it is on a valid CPU (and if not, just force it back to
1190 * parent's CPU). This avoids a lot of nasty races. 1190 * parent's CPU). This avoids a lot of nasty races.
1191 */ 1191 */
1192 p->cpus_allowed = current->cpus_allowed; 1192 p->cpus_allowed = current->cpus_allowed;
1193 if (unlikely(!cpu_isset(task_cpu(p), p->cpus_allowed) || 1193 if (unlikely(!cpu_isset(task_cpu(p), p->cpus_allowed) ||
1194 !cpu_online(task_cpu(p)))) 1194 !cpu_online(task_cpu(p))))
1195 set_task_cpu(p, smp_processor_id()); 1195 set_task_cpu(p, smp_processor_id());
1196 1196
1197 /* CLONE_PARENT re-uses the old parent */ 1197 /* CLONE_PARENT re-uses the old parent */
1198 if (clone_flags & (CLONE_PARENT|CLONE_THREAD)) 1198 if (clone_flags & (CLONE_PARENT|CLONE_THREAD))
1199 p->real_parent = current->real_parent; 1199 p->real_parent = current->real_parent;
1200 else 1200 else
1201 p->real_parent = current; 1201 p->real_parent = current;
1202 p->parent = p->real_parent; 1202 p->parent = p->real_parent;
1203 1203
1204 spin_lock(&current->sighand->siglock); 1204 spin_lock(&current->sighand->siglock);
1205 1205
1206 /* 1206 /*
1207 * Process group and session signals need to be delivered to just the 1207 * Process group and session signals need to be delivered to just the
1208 * parent before the fork or both the parent and the child after the 1208 * parent before the fork or both the parent and the child after the
1209 * fork. Restart if a signal comes in before we add the new process to 1209 * fork. Restart if a signal comes in before we add the new process to
1210 * its process group. 1210 * its process group.
1211 * A fatal signal pending means that current will exit, so the new 1211 * A fatal signal pending means that current will exit, so the new
1212 * thread can't slip out of an OOM kill (or normal SIGKILL). 1212 * thread can't slip out of an OOM kill (or normal SIGKILL).
1213 */ 1213 */
1214 recalc_sigpending(); 1214 recalc_sigpending();
1215 if (signal_pending(current)) { 1215 if (signal_pending(current)) {
1216 spin_unlock(&current->sighand->siglock); 1216 spin_unlock(&current->sighand->siglock);
1217 write_unlock_irq(&tasklist_lock); 1217 write_unlock_irq(&tasklist_lock);
1218 retval = -ERESTARTNOINTR; 1218 retval = -ERESTARTNOINTR;
1219 goto bad_fork_cleanup_namespaces; 1219 goto bad_fork_cleanup_namespaces;
1220 } 1220 }
1221 1221
1222 if (clone_flags & CLONE_THREAD) { 1222 if (clone_flags & CLONE_THREAD) {
1223 p->group_leader = current->group_leader; 1223 p->group_leader = current->group_leader;
1224 list_add_tail_rcu(&p->thread_group, &p->group_leader->thread_group); 1224 list_add_tail_rcu(&p->thread_group, &p->group_leader->thread_group);
1225 1225
1226 if (!cputime_eq(current->signal->it_virt_expires, 1226 if (!cputime_eq(current->signal->it_virt_expires,
1227 cputime_zero) || 1227 cputime_zero) ||
1228 !cputime_eq(current->signal->it_prof_expires, 1228 !cputime_eq(current->signal->it_prof_expires,
1229 cputime_zero) || 1229 cputime_zero) ||
1230 current->signal->rlim[RLIMIT_CPU].rlim_cur != RLIM_INFINITY || 1230 current->signal->rlim[RLIMIT_CPU].rlim_cur != RLIM_INFINITY ||
1231 !list_empty(&current->signal->cpu_timers[0]) || 1231 !list_empty(&current->signal->cpu_timers[0]) ||
1232 !list_empty(&current->signal->cpu_timers[1]) || 1232 !list_empty(&current->signal->cpu_timers[1]) ||
1233 !list_empty(&current->signal->cpu_timers[2])) { 1233 !list_empty(&current->signal->cpu_timers[2])) {
1234 /* 1234 /*
1235 * Have child wake up on its first tick to check 1235 * Have child wake up on its first tick to check
1236 * for process CPU timers. 1236 * for process CPU timers.
1237 */ 1237 */
1238 p->it_prof_expires = jiffies_to_cputime(1); 1238 p->it_prof_expires = jiffies_to_cputime(1);
1239 } 1239 }
1240 } 1240 }
1241 1241
1242 if (likely(p->pid)) { 1242 if (likely(p->pid)) {
1243 add_parent(p); 1243 add_parent(p);
1244 if (unlikely(p->ptrace & PT_PTRACED)) 1244 if (unlikely(p->ptrace & PT_PTRACED))
1245 __ptrace_link(p, current->parent); 1245 __ptrace_link(p, current->parent);
1246 1246
1247 if (thread_group_leader(p)) { 1247 if (thread_group_leader(p)) {
1248 p->signal->tty = current->signal->tty; 1248 p->signal->tty = current->signal->tty;
1249 p->signal->pgrp = process_group(current); 1249 p->signal->pgrp = process_group(current);
1250 set_signal_session(p->signal, process_session(current)); 1250 set_signal_session(p->signal, process_session(current));
1251 attach_pid(p, PIDTYPE_PGID, process_group(p)); 1251 attach_pid(p, PIDTYPE_PGID, process_group(p));
1252 attach_pid(p, PIDTYPE_SID, process_session(p)); 1252 attach_pid(p, PIDTYPE_SID, process_session(p));
1253 1253
1254 list_add_tail_rcu(&p->tasks, &init_task.tasks); 1254 list_add_tail_rcu(&p->tasks, &init_task.tasks);
1255 __get_cpu_var(process_counts)++; 1255 __get_cpu_var(process_counts)++;
1256 } 1256 }
1257 attach_pid(p, PIDTYPE_PID, p->pid); 1257 attach_pid(p, PIDTYPE_PID, p->pid);
1258 nr_threads++; 1258 nr_threads++;
1259 } 1259 }
1260 1260
1261 total_forks++; 1261 total_forks++;
1262 spin_unlock(&current->sighand->siglock); 1262 spin_unlock(&current->sighand->siglock);
1263 write_unlock_irq(&tasklist_lock); 1263 write_unlock_irq(&tasklist_lock);
1264 proc_fork_connector(p); 1264 proc_fork_connector(p);
1265 return p; 1265 return p;
1266 1266
1267 bad_fork_cleanup_namespaces: 1267 bad_fork_cleanup_namespaces:
1268 exit_task_namespaces(p); 1268 exit_task_namespaces(p);
1269 bad_fork_cleanup_keys: 1269 bad_fork_cleanup_keys:
1270 exit_keys(p); 1270 exit_keys(p);
1271 bad_fork_cleanup_mm: 1271 bad_fork_cleanup_mm:
1272 if (p->mm) 1272 if (p->mm)
1273 mmput(p->mm); 1273 mmput(p->mm);
1274 bad_fork_cleanup_signal: 1274 bad_fork_cleanup_signal:
1275 cleanup_signal(p); 1275 cleanup_signal(p);
1276 bad_fork_cleanup_sighand: 1276 bad_fork_cleanup_sighand:
1277 __cleanup_sighand(p->sighand); 1277 __cleanup_sighand(p->sighand);
1278 bad_fork_cleanup_fs: 1278 bad_fork_cleanup_fs:
1279 exit_fs(p); /* blocking */ 1279 exit_fs(p); /* blocking */
1280 bad_fork_cleanup_files: 1280 bad_fork_cleanup_files:
1281 exit_files(p); /* blocking */ 1281 exit_files(p); /* blocking */
1282 bad_fork_cleanup_semundo: 1282 bad_fork_cleanup_semundo:
1283 exit_sem(p); 1283 exit_sem(p);
1284 bad_fork_cleanup_audit: 1284 bad_fork_cleanup_audit:
1285 audit_free(p); 1285 audit_free(p);
1286 bad_fork_cleanup_security: 1286 bad_fork_cleanup_security:
1287 security_task_free(p); 1287 security_task_free(p);
1288 bad_fork_cleanup_policy: 1288 bad_fork_cleanup_policy:
1289 #ifdef CONFIG_NUMA 1289 #ifdef CONFIG_NUMA
1290 mpol_free(p->mempolicy); 1290 mpol_free(p->mempolicy);
1291 bad_fork_cleanup_cpuset: 1291 bad_fork_cleanup_cpuset:
1292 #endif 1292 #endif
1293 cpuset_exit(p); 1293 cpuset_exit(p);
1294 bad_fork_cleanup_delays_binfmt: 1294 bad_fork_cleanup_delays_binfmt:
1295 delayacct_tsk_free(p); 1295 delayacct_tsk_free(p);
1296 if (p->binfmt) 1296 if (p->binfmt)
1297 module_put(p->binfmt->module); 1297 module_put(p->binfmt->module);
1298 bad_fork_cleanup_put_domain: 1298 bad_fork_cleanup_put_domain:
1299 module_put(task_thread_info(p)->exec_domain->module); 1299 module_put(task_thread_info(p)->exec_domain->module);
1300 bad_fork_cleanup_count: 1300 bad_fork_cleanup_count:
1301 put_group_info(p->group_info); 1301 put_group_info(p->group_info);
1302 atomic_dec(&p->user->processes); 1302 atomic_dec(&p->user->processes);
1303 free_uid(p->user); 1303 free_uid(p->user);
1304 bad_fork_free: 1304 bad_fork_free:
1305 free_task(p); 1305 free_task(p);
1306 fork_out: 1306 fork_out:
1307 return ERR_PTR(retval); 1307 return ERR_PTR(retval);
1308 } 1308 }
1309 1309
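The sanity checks at the top of copy_process() reject inconsistent flag combinations with -EINVAL before any resources are allocated: CLONE_THREAD requires CLONE_SIGHAND, and CLONE_SIGHAND requires CLONE_VM. A hypothetical userspace demonstration (illustrative only; it assumes a downward-growing stack, as on the common architectures):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <errno.h>

    /* Asking for CLONE_THREAD without CLONE_SIGHAND fails the first check
     * in copy_process(), so the clone() call returns -1 with EINVAL. */
    static int child_fn(void *arg)
    {
            (void)arg;
            return 0;
    }

    int main(void)
    {
            char *stack = malloc(64 * 1024);

            if (!stack)
                    return 1;

            if (clone(child_fn, stack + 64 * 1024, CLONE_THREAD, NULL) == -1)
                    printf("clone(CLONE_THREAD): %s (EINVAL expected)\n",
                           strerror(errno));

            free(stack);
            return 0;
    }
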
1310 noinline struct pt_regs * __devinit __attribute__((weak)) idle_regs(struct pt_regs *regs) 1310 noinline struct pt_regs * __devinit __attribute__((weak)) idle_regs(struct pt_regs *regs)
1311 { 1311 {
1312 memset(regs, 0, sizeof(struct pt_regs)); 1312 memset(regs, 0, sizeof(struct pt_regs));
1313 return regs; 1313 return regs;
1314 } 1314 }
1315 1315
1316 struct task_struct * __devinit fork_idle(int cpu) 1316 struct task_struct * __devinit fork_idle(int cpu)
1317 { 1317 {
1318 struct task_struct *task; 1318 struct task_struct *task;
1319 struct pt_regs regs; 1319 struct pt_regs regs;
1320 1320
1321 task = copy_process(CLONE_VM, 0, idle_regs(&regs), 0, NULL, NULL, 0); 1321 task = copy_process(CLONE_VM, 0, idle_regs(&regs), 0, NULL, NULL, 0);
1322 if (!IS_ERR(task)) 1322 if (!IS_ERR(task))
1323 init_idle(task, cpu); 1323 init_idle(task, cpu);
1324 1324
1325 return task; 1325 return task;
1326 } 1326 }
1327 1327
1328 static inline int fork_traceflag (unsigned clone_flags) 1328 static inline int fork_traceflag (unsigned clone_flags)
1329 { 1329 {
1330 if (clone_flags & CLONE_UNTRACED) 1330 if (clone_flags & CLONE_UNTRACED)
1331 return 0; 1331 return 0;
1332 else if (clone_flags & CLONE_VFORK) { 1332 else if (clone_flags & CLONE_VFORK) {
1333 if (current->ptrace & PT_TRACE_VFORK) 1333 if (current->ptrace & PT_TRACE_VFORK)
1334 return PTRACE_EVENT_VFORK; 1334 return PTRACE_EVENT_VFORK;
1335 } else if ((clone_flags & CSIGNAL) != SIGCHLD) { 1335 } else if ((clone_flags & CSIGNAL) != SIGCHLD) {
1336 if (current->ptrace & PT_TRACE_CLONE) 1336 if (current->ptrace & PT_TRACE_CLONE)
1337 return PTRACE_EVENT_CLONE; 1337 return PTRACE_EVENT_CLONE;
1338 } else if (current->ptrace & PT_TRACE_FORK) 1338 } else if (current->ptrace & PT_TRACE_FORK)
1339 return PTRACE_EVENT_FORK; 1339 return PTRACE_EVENT_FORK;
1340 1340
1341 return 0; 1341 return 0;
1342 } 1342 }
1343 1343
1344 /* 1344 /*
1345 * Ok, this is the main fork-routine. 1345 * Ok, this is the main fork-routine.
1346 * 1346 *
1347 * It copies the process and, if successful, kick-starts 1347 * It copies the process and, if successful, kick-starts
1348 * it and waits for it to finish using the VM if required. 1348 * it and waits for it to finish using the VM if required.
1349 */ 1349 */
1350 long do_fork(unsigned long clone_flags, 1350 long do_fork(unsigned long clone_flags,
1351 unsigned long stack_start, 1351 unsigned long stack_start,
1352 struct pt_regs *regs, 1352 struct pt_regs *regs,
1353 unsigned long stack_size, 1353 unsigned long stack_size,
1354 int __user *parent_tidptr, 1354 int __user *parent_tidptr,
1355 int __user *child_tidptr) 1355 int __user *child_tidptr)
1356 { 1356 {
1357 struct task_struct *p; 1357 struct task_struct *p;
1358 int trace = 0; 1358 int trace = 0;
1359 struct pid *pid = alloc_pid(); 1359 struct pid *pid = alloc_pid();
1360 long nr; 1360 long nr;
1361 1361
1362 if (!pid) 1362 if (!pid)
1363 return -EAGAIN; 1363 return -EAGAIN;
1364 nr = pid->nr; 1364 nr = pid->nr;
1365 if (unlikely(current->ptrace)) { 1365 if (unlikely(current->ptrace)) {
1366 trace = fork_traceflag (clone_flags); 1366 trace = fork_traceflag (clone_flags);
1367 if (trace) 1367 if (trace)
1368 clone_flags |= CLONE_PTRACE; 1368 clone_flags |= CLONE_PTRACE;
1369 } 1369 }
1370 1370
1371 p = copy_process(clone_flags, stack_start, regs, stack_size, parent_tidptr, child_tidptr, nr); 1371 p = copy_process(clone_flags, stack_start, regs, stack_size, parent_tidptr, child_tidptr, nr);
1372 /* 1372 /*
1373 * Do this prior to waking up the new thread - the thread pointer 1373 * Do this prior to waking up the new thread - the thread pointer
1374 * might get invalid after that point, if the thread exits quickly. 1374 * might get invalid after that point, if the thread exits quickly.
1375 */ 1375 */
1376 if (!IS_ERR(p)) { 1376 if (!IS_ERR(p)) {
1377 struct completion vfork; 1377 struct completion vfork;
1378 1378
1379 if (clone_flags & CLONE_VFORK) { 1379 if (clone_flags & CLONE_VFORK) {
1380 p->vfork_done = &vfork; 1380 p->vfork_done = &vfork;
1381 init_completion(&vfork); 1381 init_completion(&vfork);
1382 } 1382 }
1383 1383
1384 if ((p->ptrace & PT_PTRACED) || (clone_flags & CLONE_STOPPED)) { 1384 if ((p->ptrace & PT_PTRACED) || (clone_flags & CLONE_STOPPED)) {
1385 /* 1385 /*
1386 * We'll start up with an immediate SIGSTOP. 1386 * We'll start up with an immediate SIGSTOP.
1387 */ 1387 */
1388 sigaddset(&p->pending.signal, SIGSTOP); 1388 sigaddset(&p->pending.signal, SIGSTOP);
1389 set_tsk_thread_flag(p, TIF_SIGPENDING); 1389 set_tsk_thread_flag(p, TIF_SIGPENDING);
1390 } 1390 }
1391 1391
1392 if (!(clone_flags & CLONE_STOPPED)) 1392 if (!(clone_flags & CLONE_STOPPED))
1393 wake_up_new_task(p, clone_flags); 1393 wake_up_new_task(p, clone_flags);
1394 else 1394 else
1395 p->state = TASK_STOPPED; 1395 p->state = TASK_STOPPED;
1396 1396
1397 if (unlikely (trace)) { 1397 if (unlikely (trace)) {
1398 current->ptrace_message = nr; 1398 current->ptrace_message = nr;
1399 ptrace_notify ((trace << 8) | SIGTRAP); 1399 ptrace_notify ((trace << 8) | SIGTRAP);
1400 } 1400 }
1401 1401
1402 if (clone_flags & CLONE_VFORK) { 1402 if (clone_flags & CLONE_VFORK) {
1403 wait_for_completion(&vfork); 1403 wait_for_completion(&vfork);
1404 if (unlikely (current->ptrace & PT_TRACE_VFORK_DONE)) { 1404 if (unlikely (current->ptrace & PT_TRACE_VFORK_DONE)) {
1405 current->ptrace_message = nr; 1405 current->ptrace_message = nr;
1406 ptrace_notify ((PTRACE_EVENT_VFORK_DONE << 8) | SIGTRAP); 1406 ptrace_notify ((PTRACE_EVENT_VFORK_DONE << 8) | SIGTRAP);
1407 } 1407 }
1408 } 1408 }
1409 } else { 1409 } else {
1410 free_pid(pid); 1410 free_pid(pid);
1411 nr = PTR_ERR(p); 1411 nr = PTR_ERR(p);
1412 } 1412 }
1413 return nr; 1413 return nr;
1414 } 1414 }
1415 1415
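The CLONE_VFORK path above is what vfork() exercises: the architecture entry point calls do_fork() with CLONE_VFORK | CLONE_VM | SIGCHLD, so the parent sleeps on the vfork completion until the child releases the VM. A userspace illustration (not part of this patch):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    /* The parent blocks inside vfork() until the child calls _exit() or
     * execve(), exactly the wait_for_completion(&vfork) shown above. */
    int main(void)
    {
            pid_t pid = vfork();

            if (pid == 0) {
                    _exit(0);   /* only _exit()/execve() are safe after vfork() */
            } else if (pid > 0) {
                    waitpid(pid, NULL, 0);
                    printf("parent resumed after child %d released the VM\n",
                           (int)pid);
            } else {
                    perror("vfork");
                    return 1;
            }
            return 0;
    }
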
1416 #ifndef ARCH_MIN_MMSTRUCT_ALIGN 1416 #ifndef ARCH_MIN_MMSTRUCT_ALIGN
1417 #define ARCH_MIN_MMSTRUCT_ALIGN 0 1417 #define ARCH_MIN_MMSTRUCT_ALIGN 0
1418 #endif 1418 #endif
1419 1419
1420 static void sighand_ctor(void *data, struct kmem_cache *cachep, unsigned long flags) 1420 static void sighand_ctor(void *data, struct kmem_cache *cachep, unsigned long flags)
1421 { 1421 {
1422 struct sighand_struct *sighand = data; 1422 struct sighand_struct *sighand = data;
1423 1423
1424 if ((flags & (SLAB_CTOR_VERIFY | SLAB_CTOR_CONSTRUCTOR)) == 1424 if ((flags & (SLAB_CTOR_VERIFY | SLAB_CTOR_CONSTRUCTOR)) ==
1425 SLAB_CTOR_CONSTRUCTOR) 1425 SLAB_CTOR_CONSTRUCTOR)
1426 spin_lock_init(&sighand->siglock); 1426 spin_lock_init(&sighand->siglock);
1427 } 1427 }
1428 1428
1429 void __init proc_caches_init(void) 1429 void __init proc_caches_init(void)
1430 { 1430 {
1431 sighand_cachep = kmem_cache_create("sighand_cache", 1431 sighand_cachep = kmem_cache_create("sighand_cache",
1432 sizeof(struct sighand_struct), 0, 1432 sizeof(struct sighand_struct), 0,
1433 SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_DESTROY_BY_RCU, 1433 SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_DESTROY_BY_RCU,
1434 sighand_ctor, NULL); 1434 sighand_ctor, NULL);
1435 signal_cachep = kmem_cache_create("signal_cache", 1435 signal_cachep = kmem_cache_create("signal_cache",
1436 sizeof(struct signal_struct), 0, 1436 sizeof(struct signal_struct), 0,
1437 SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL, NULL); 1437 SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL, NULL);
1438 files_cachep = kmem_cache_create("files_cache", 1438 files_cachep = kmem_cache_create("files_cache",
1439 sizeof(struct files_struct), 0, 1439 sizeof(struct files_struct), 0,
1440 SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL, NULL); 1440 SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL, NULL);
1441 fs_cachep = kmem_cache_create("fs_cache", 1441 fs_cachep = kmem_cache_create("fs_cache",
1442 sizeof(struct fs_struct), 0, 1442 sizeof(struct fs_struct), 0,
1443 SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL, NULL); 1443 SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL, NULL);
1444 vm_area_cachep = kmem_cache_create("vm_area_struct", 1444 vm_area_cachep = kmem_cache_create("vm_area_struct",
1445 sizeof(struct vm_area_struct), 0, 1445 sizeof(struct vm_area_struct), 0,
1446 SLAB_PANIC, NULL, NULL); 1446 SLAB_PANIC, NULL, NULL);
1447 mm_cachep = kmem_cache_create("mm_struct", 1447 mm_cachep = kmem_cache_create("mm_struct",
1448 sizeof(struct mm_struct), ARCH_MIN_MMSTRUCT_ALIGN, 1448 sizeof(struct mm_struct), ARCH_MIN_MMSTRUCT_ALIGN,
1449 SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL, NULL); 1449 SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL, NULL);
1450 } 1450 }
1451 1451
1452 1452
1453 /* 1453 /*
1454 * Check constraints on flags passed to the unshare system call and 1454 * Check constraints on flags passed to the unshare system call and
1455 * force unsharing of additional process context as appropriate. 1455 * force unsharing of additional process context as appropriate.
1456 */ 1456 */
1457 static inline void check_unshare_flags(unsigned long *flags_ptr) 1457 static inline void check_unshare_flags(unsigned long *flags_ptr)
1458 { 1458 {
1459 /* 1459 /*
1460 * If unsharing a thread from a thread group, must also 1460 * If unsharing a thread from a thread group, must also
1461 * unshare vm. 1461 * unshare vm.
1462 */ 1462 */
1463 if (*flags_ptr & CLONE_THREAD) 1463 if (*flags_ptr & CLONE_THREAD)
1464 *flags_ptr |= CLONE_VM; 1464 *flags_ptr |= CLONE_VM;
1465 1465
1466 /* 1466 /*
1467 * If unsharing vm, must also unshare signal handlers. 1467 * If unsharing vm, must also unshare signal handlers.
1468 */ 1468 */
1469 if (*flags_ptr & CLONE_VM) 1469 if (*flags_ptr & CLONE_VM)
1470 *flags_ptr |= CLONE_SIGHAND; 1470 *flags_ptr |= CLONE_SIGHAND;
1471 1471
1472 /* 1472 /*
1473 * If unsharing signal handlers and the task was created 1473 * If unsharing signal handlers and the task was created
1474 * using CLONE_THREAD, then we must unshare the thread too. 1474 * using CLONE_THREAD, then we must unshare the thread too.
1475 */ 1475 */
1476 if ((*flags_ptr & CLONE_SIGHAND) && 1476 if ((*flags_ptr & CLONE_SIGHAND) &&
1477 (atomic_read(&current->signal->count) > 1)) 1477 (atomic_read(&current->signal->count) > 1))
1478 *flags_ptr |= CLONE_THREAD; 1478 *flags_ptr |= CLONE_THREAD;
1479 1479
1480 /* 1480 /*
1481 * If unsharing namespace, must also unshare filesystem information. 1481 * If unsharing namespace, must also unshare filesystem information.
1482 */ 1482 */
1483 if (*flags_ptr & CLONE_NEWNS) 1483 if (*flags_ptr & CLONE_NEWNS)
1484 *flags_ptr |= CLONE_FS; 1484 *flags_ptr |= CLONE_FS;
1485 } 1485 }
1486 1486
1487 /* 1487 /*
1488 * Unsharing of tasks created with CLONE_THREAD is not supported yet 1488 * Unsharing of tasks created with CLONE_THREAD is not supported yet
1489 */ 1489 */
1490 static int unshare_thread(unsigned long unshare_flags) 1490 static int unshare_thread(unsigned long unshare_flags)
1491 { 1491 {
1492 if (unshare_flags & CLONE_THREAD) 1492 if (unshare_flags & CLONE_THREAD)
1493 return -EINVAL; 1493 return -EINVAL;
1494 1494
1495 return 0; 1495 return 0;
1496 } 1496 }
1497 1497
1498 /* 1498 /*
1499 * Unshare the filesystem structure if it is being shared 1499 * Unshare the filesystem structure if it is being shared
1500 */ 1500 */
1501 static int unshare_fs(unsigned long unshare_flags, struct fs_struct **new_fsp) 1501 static int unshare_fs(unsigned long unshare_flags, struct fs_struct **new_fsp)
1502 { 1502 {
1503 struct fs_struct *fs = current->fs; 1503 struct fs_struct *fs = current->fs;
1504 1504
1505 if ((unshare_flags & CLONE_FS) && 1505 if ((unshare_flags & CLONE_FS) &&
1506 (fs && atomic_read(&fs->count) > 1)) { 1506 (fs && atomic_read(&fs->count) > 1)) {
1507 *new_fsp = __copy_fs_struct(current->fs); 1507 *new_fsp = __copy_fs_struct(current->fs);
1508 if (!*new_fsp) 1508 if (!*new_fsp)
1509 return -ENOMEM; 1509 return -ENOMEM;
1510 } 1510 }
1511 1511
1512 return 0; 1512 return 0;
1513 } 1513 }
1514 1514
1515 /* 1515 /*
1516 * Unshare the mnt_namespace structure if it is being shared 1516 * Unshare the mnt_namespace structure if it is being shared
1517 */ 1517 */
1518 static int unshare_mnt_namespace(unsigned long unshare_flags, 1518 static int unshare_mnt_namespace(unsigned long unshare_flags,
1519 struct mnt_namespace **new_nsp, struct fs_struct *new_fs) 1519 struct mnt_namespace **new_nsp, struct fs_struct *new_fs)
1520 { 1520 {
1521 struct mnt_namespace *ns = current->nsproxy->mnt_ns; 1521 struct mnt_namespace *ns = current->nsproxy->mnt_ns;
1522 1522
1523 if ((unshare_flags & CLONE_NEWNS) && ns) { 1523 if ((unshare_flags & CLONE_NEWNS) && ns) {
1524 if (!capable(CAP_SYS_ADMIN)) 1524 if (!capable(CAP_SYS_ADMIN))
1525 return -EPERM; 1525 return -EPERM;
1526 1526
1527 *new_nsp = dup_mnt_ns(current, new_fs ? new_fs : current->fs); 1527 *new_nsp = dup_mnt_ns(current, new_fs ? new_fs : current->fs);
1528 if (!*new_nsp) 1528 if (!*new_nsp)
1529 return -ENOMEM; 1529 return -ENOMEM;
1530 } 1530 }
1531 1531
1532 return 0; 1532 return 0;
1533 } 1533 }
1534 1534
1535 /* 1535 /*
1536 * Unsharing of sighand is not supported yet 1536 * Unsharing of sighand is not supported yet
1537 */ 1537 */
1538 static int unshare_sighand(unsigned long unshare_flags, struct sighand_struct **new_sighp) 1538 static int unshare_sighand(unsigned long unshare_flags, struct sighand_struct **new_sighp)
1539 { 1539 {
1540 struct sighand_struct *sigh = current->sighand; 1540 struct sighand_struct *sigh = current->sighand;
1541 1541
1542 if ((unshare_flags & CLONE_SIGHAND) && atomic_read(&sigh->count) > 1) 1542 if ((unshare_flags & CLONE_SIGHAND) && atomic_read(&sigh->count) > 1)
1543 return -EINVAL; 1543 return -EINVAL;
1544 else 1544 else
1545 return 0; 1545 return 0;
1546 } 1546 }
1547 1547
1548 /* 1548 /*
1549 * Unshare vm if it is being shared 1549 * Unshare vm if it is being shared
1550 */ 1550 */
1551 static int unshare_vm(unsigned long unshare_flags, struct mm_struct **new_mmp) 1551 static int unshare_vm(unsigned long unshare_flags, struct mm_struct **new_mmp)
1552 { 1552 {
1553 struct mm_struct *mm = current->mm; 1553 struct mm_struct *mm = current->mm;
1554 1554
1555 if ((unshare_flags & CLONE_VM) && 1555 if ((unshare_flags & CLONE_VM) &&
1556 (mm && atomic_read(&mm->mm_users) > 1)) { 1556 (mm && atomic_read(&mm->mm_users) > 1)) {
1557 return -EINVAL; 1557 return -EINVAL;
1558 } 1558 }
1559 1559
1560 return 0; 1560 return 0;
1561 } 1561 }
1562 1562
1563 /* 1563 /*
1564 * Unshare file descriptor table if it is being shared 1564 * Unshare file descriptor table if it is being shared
1565 */ 1565 */
1566 static int unshare_fd(unsigned long unshare_flags, struct files_struct **new_fdp) 1566 static int unshare_fd(unsigned long unshare_flags, struct files_struct **new_fdp)
1567 { 1567 {
1568 struct files_struct *fd = current->files; 1568 struct files_struct *fd = current->files;
1569 int error = 0; 1569 int error = 0;
1570 1570
1571 if ((unshare_flags & CLONE_FILES) && 1571 if ((unshare_flags & CLONE_FILES) &&
1572 (fd && atomic_read(&fd->count) > 1)) { 1572 (fd && atomic_read(&fd->count) > 1)) {
1573 *new_fdp = dup_fd(fd, &error); 1573 *new_fdp = dup_fd(fd, &error);
1574 if (!*new_fdp) 1574 if (!*new_fdp)
1575 return error; 1575 return error;
1576 } 1576 }
1577 1577
1578 return 0; 1578 return 0;
1579 } 1579 }
1580 1580
1581 /* 1581 /*
1582 * Unsharing of semundo for tasks created with CLONE_SYSVSEM is not 1582 * Unsharing of semundo for tasks created with CLONE_SYSVSEM is not
1583 * supported yet 1583 * supported yet
1584 */ 1584 */
1585 static int unshare_semundo(unsigned long unshare_flags, struct sem_undo_list **new_ulistp) 1585 static int unshare_semundo(unsigned long unshare_flags, struct sem_undo_list **new_ulistp)
1586 { 1586 {
1587 if (unshare_flags & CLONE_SYSVSEM) 1587 if (unshare_flags & CLONE_SYSVSEM)
1588 return -EINVAL; 1588 return -EINVAL;
1589 1589
1590 return 0; 1590 return 0;
1591 } 1591 }
1592 1592
1593 #ifndef CONFIG_IPC_NS 1593 #ifndef CONFIG_IPC_NS
1594 static inline int unshare_ipcs(unsigned long flags, struct ipc_namespace **ns) 1594 static inline int unshare_ipcs(unsigned long flags, struct ipc_namespace **ns)
1595 { 1595 {
1596 if (flags & CLONE_NEWIPC) 1596 if (flags & CLONE_NEWIPC)
1597 return -EINVAL; 1597 return -EINVAL;
1598 1598
1599 return 0; 1599 return 0;
1600 } 1600 }
1601 #endif 1601 #endif
1602 1602
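The unshare_* helpers above feed sys_unshare() below, the kernel side of the unshare(2) system call. A minimal userspace view (illustrative only; as unshare_mnt_namespace() enforces, CLONE_NEWNS needs CAP_SYS_ADMIN):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>

    /* unshare(CLONE_NEWNS) requests a private mount namespace;
     * check_unshare_flags() forces CLONE_FS along with it, and the call
     * fails with EPERM for an unprivileged caller. */
    int main(void)
    {
            if (unshare(CLONE_NEWNS) == -1) {
                    fprintf(stderr, "unshare(CLONE_NEWNS): %s\n",
                            strerror(errno));
                    return 1;
            }
            printf("now running with a private mount namespace\n");
            return 0;
    }
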
1603 /* 1603 /*
1604 * unshare allows a process to 'unshare' part of the process 1604 * unshare allows a process to 'unshare' part of the process
1605 * context which was originally shared using clone. copy_* 1605 * context which was originally shared using clone. copy_*
1606 * functions used by do_fork() cannot be used here directly 1606 * functions used by do_fork() cannot be used here directly
1607 * because they modify an inactive task_struct that is being 1607 * because they modify an inactive task_struct that is being
1608 * constructed. Here we are modifying the current, active, 1608 * constructed. Here we are modifying the current, active,
1609 * task_struct. 1609 * task_struct.
1610 */ 1610 */
1611 asmlinkage long sys_unshare(unsigned long unshare_flags) 1611 asmlinkage long sys_unshare(unsigned long unshare_flags)
1612 { 1612 {
1613 int err = 0; 1613 int err = 0;
1614 struct fs_struct *fs, *new_fs = NULL; 1614 struct fs_struct *fs, *new_fs = NULL;
1615 struct mnt_namespace *ns, *new_ns = NULL; 1615 struct mnt_namespace *ns, *new_ns = NULL;
1616 struct sighand_struct *new_sigh = NULL; 1616 struct sighand_struct *new_sigh = NULL;
1617 struct mm_struct *mm, *new_mm = NULL, *active_mm = NULL; 1617 struct mm_struct *mm, *new_mm = NULL, *active_mm = NULL;
1618 struct files_struct *fd, *new_fd = NULL; 1618 struct files_struct *fd, *new_fd = NULL;
1619 struct sem_undo_list *new_ulist = NULL; 1619 struct sem_undo_list *new_ulist = NULL;
1620 struct nsproxy *new_nsproxy = NULL, *old_nsproxy = NULL; 1620 struct nsproxy *new_nsproxy = NULL, *old_nsproxy = NULL;
1621 struct uts_namespace *uts, *new_uts = NULL; 1621 struct uts_namespace *uts, *new_uts = NULL;
1622 struct ipc_namespace *ipc, *new_ipc = NULL; 1622 struct ipc_namespace *ipc, *new_ipc = NULL;
1623 1623
1624 check_unshare_flags(&unshare_flags); 1624 check_unshare_flags(&unshare_flags);
1625 1625
1626 /* Return -EINVAL for all unsupported flags */ 1626 /* Return -EINVAL for all unsupported flags */
1627 err = -EINVAL; 1627 err = -EINVAL;
1628 if (unshare_flags & ~(CLONE_THREAD|CLONE_FS|CLONE_NEWNS|CLONE_SIGHAND| 1628 if (unshare_flags & ~(CLONE_THREAD|CLONE_FS|CLONE_NEWNS|CLONE_SIGHAND|
1629 CLONE_VM|CLONE_FILES|CLONE_SYSVSEM| 1629 CLONE_VM|CLONE_FILES|CLONE_SYSVSEM|
1630 CLONE_NEWUTS|CLONE_NEWIPC)) 1630 CLONE_NEWUTS|CLONE_NEWIPC))
1631 goto bad_unshare_out; 1631 goto bad_unshare_out;
1632 1632
1633 if ((err = unshare_thread(unshare_flags))) 1633 if ((err = unshare_thread(unshare_flags)))
1634 goto bad_unshare_out; 1634 goto bad_unshare_out;
1635 if ((err = unshare_fs(unshare_flags, &new_fs))) 1635 if ((err = unshare_fs(unshare_flags, &new_fs)))
1636 goto bad_unshare_cleanup_thread; 1636 goto bad_unshare_cleanup_thread;
1637 if ((err = unshare_mnt_namespace(unshare_flags, &new_ns, new_fs))) 1637 if ((err = unshare_mnt_namespace(unshare_flags, &new_ns, new_fs)))
1638 goto bad_unshare_cleanup_fs; 1638 goto bad_unshare_cleanup_fs;
1639 if ((err = unshare_sighand(unshare_flags, &new_sigh))) 1639 if ((err = unshare_sighand(unshare_flags, &new_sigh)))
1640 goto bad_unshare_cleanup_ns; 1640 goto bad_unshare_cleanup_ns;
1641 if ((err = unshare_vm(unshare_flags, &new_mm))) 1641 if ((err = unshare_vm(unshare_flags, &new_mm)))
1642 goto bad_unshare_cleanup_sigh; 1642 goto bad_unshare_cleanup_sigh;
1643 if ((err = unshare_fd(unshare_flags, &new_fd))) 1643 if ((err = unshare_fd(unshare_flags, &new_fd)))
1644 goto bad_unshare_cleanup_vm; 1644 goto bad_unshare_cleanup_vm;
1645 if ((err = unshare_semundo(unshare_flags, &new_ulist))) 1645 if ((err = unshare_semundo(unshare_flags, &new_ulist)))
1646 goto bad_unshare_cleanup_fd; 1646 goto bad_unshare_cleanup_fd;
1647 if ((err = unshare_utsname(unshare_flags, &new_uts))) 1647 if ((err = unshare_utsname(unshare_flags, &new_uts)))
1648 goto bad_unshare_cleanup_semundo; 1648 goto bad_unshare_cleanup_semundo;
1649 if ((err = unshare_ipcs(unshare_flags, &new_ipc))) 1649 if ((err = unshare_ipcs(unshare_flags, &new_ipc)))
1650 goto bad_unshare_cleanup_uts; 1650 goto bad_unshare_cleanup_uts;
1651 1651
1652 if (new_ns || new_uts || new_ipc) { 1652 if (new_ns || new_uts || new_ipc) {
1653 old_nsproxy = current->nsproxy; 1653 old_nsproxy = current->nsproxy;
1654 new_nsproxy = dup_namespaces(old_nsproxy); 1654 new_nsproxy = dup_namespaces(old_nsproxy);
1655 if (!new_nsproxy) { 1655 if (!new_nsproxy) {
1656 err = -ENOMEM; 1656 err = -ENOMEM;
1657 goto bad_unshare_cleanup_ipc; 1657 goto bad_unshare_cleanup_ipc;
1658 } 1658 }
1659 } 1659 }
1660 1660
1661 if (new_fs || new_ns || new_mm || new_fd || new_ulist || 1661 if (new_fs || new_ns || new_mm || new_fd || new_ulist ||
1662 new_uts || new_ipc) { 1662 new_uts || new_ipc) {
1663 1663
1664 task_lock(current); 1664 task_lock(current);
1665 1665
1666 if (new_nsproxy) { 1666 if (new_nsproxy) {
1667 current->nsproxy = new_nsproxy; 1667 current->nsproxy = new_nsproxy;
1668 new_nsproxy = old_nsproxy; 1668 new_nsproxy = old_nsproxy;
1669 } 1669 }
1670 1670
1671 if (new_fs) { 1671 if (new_fs) {
1672 fs = current->fs; 1672 fs = current->fs;
1673 current->fs = new_fs; 1673 current->fs = new_fs;
1674 new_fs = fs; 1674 new_fs = fs;
1675 } 1675 }
1676 1676
1677 if (new_ns) { 1677 if (new_ns) {
1678 ns = current->nsproxy->mnt_ns; 1678 ns = current->nsproxy->mnt_ns;
1679 current->nsproxy->mnt_ns = new_ns; 1679 current->nsproxy->mnt_ns = new_ns;
1680 new_ns = ns; 1680 new_ns = ns;
1681 } 1681 }
1682 1682
1683 if (new_mm) { 1683 if (new_mm) {
1684 mm = current->mm; 1684 mm = current->mm;
1685 active_mm = current->active_mm; 1685 active_mm = current->active_mm;
1686 current->mm = new_mm; 1686 current->mm = new_mm;
1687 current->active_mm = new_mm; 1687 current->active_mm = new_mm;
1688 activate_mm(active_mm, new_mm); 1688 activate_mm(active_mm, new_mm);
1689 new_mm = mm; 1689 new_mm = mm;
1690 } 1690 }
1691 1691
1692 if (new_fd) { 1692 if (new_fd) {
1693 fd = current->files; 1693 fd = current->files;
1694 current->files = new_fd; 1694 current->files = new_fd;
1695 new_fd = fd; 1695 new_fd = fd;
1696 } 1696 }
1697 1697
1698 if (new_uts) { 1698 if (new_uts) {
1699 uts = current->nsproxy->uts_ns; 1699 uts = current->nsproxy->uts_ns;
1700 current->nsproxy->uts_ns = new_uts; 1700 current->nsproxy->uts_ns = new_uts;
1701 new_uts = uts; 1701 new_uts = uts;
1702 } 1702 }
1703 1703
1704 if (new_ipc) { 1704 if (new_ipc) {
1705 ipc = current->nsproxy->ipc_ns; 1705 ipc = current->nsproxy->ipc_ns;
1706 current->nsproxy->ipc_ns = new_ipc; 1706 current->nsproxy->ipc_ns = new_ipc;
1707 new_ipc = ipc; 1707 new_ipc = ipc;
1708 } 1708 }
1709 1709
1710 task_unlock(current); 1710 task_unlock(current);
1711 } 1711 }
1712 1712
1713 if (new_nsproxy) 1713 if (new_nsproxy)
1714 put_nsproxy(new_nsproxy); 1714 put_nsproxy(new_nsproxy);
1715 1715
1716 bad_unshare_cleanup_ipc: 1716 bad_unshare_cleanup_ipc:
1717 if (new_ipc) 1717 if (new_ipc)
1718 put_ipc_ns(new_ipc); 1718 put_ipc_ns(new_ipc);
1719 1719
1720 bad_unshare_cleanup_uts: 1720 bad_unshare_cleanup_uts:
1721 if (new_uts) 1721 if (new_uts)
1722 put_uts_ns(new_uts); 1722 put_uts_ns(new_uts);
1723 1723
1724 bad_unshare_cleanup_semundo: 1724 bad_unshare_cleanup_semundo:
1725 bad_unshare_cleanup_fd: 1725 bad_unshare_cleanup_fd:
1726 if (new_fd) 1726 if (new_fd)
1727 put_files_struct(new_fd); 1727 put_files_struct(new_fd);
1728 1728
1729 bad_unshare_cleanup_vm: 1729 bad_unshare_cleanup_vm:
1730 if (new_mm) 1730 if (new_mm)
1731 mmput(new_mm); 1731 mmput(new_mm);
1732 1732
1733 bad_unshare_cleanup_sigh: 1733 bad_unshare_cleanup_sigh:
1734 if (new_sigh) 1734 if (new_sigh)
1735 if (atomic_dec_and_test(&new_sigh->count)) 1735 if (atomic_dec_and_test(&new_sigh->count))
1736 kmem_cache_free(sighand_cachep, new_sigh); 1736 kmem_cache_free(sighand_cachep, new_sigh);
1737 1737
1738 bad_unshare_cleanup_ns: 1738 bad_unshare_cleanup_ns:
1739 if (new_ns) 1739 if (new_ns)
1740 put_mnt_ns(new_ns); 1740 put_mnt_ns(new_ns);
1741 1741
1742 bad_unshare_cleanup_fs: 1742 bad_unshare_cleanup_fs:
1743 if (new_fs) 1743 if (new_fs)
1744 put_fs_struct(new_fs); 1744 put_fs_struct(new_fs);
1745 1745
1746 bad_unshare_cleanup_thread: 1746 bad_unshare_cleanup_thread:
1747 bad_unshare_out: 1747 bad_unshare_out:
1748 return err; 1748 return err;
1749 } 1749 }
1750 1750
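As the comment above sys_unshare() says, the call gives the current task private copies of context that clone() originally let it share; the new copies are swapped into the live task_struct under task_lock(). For illustration (not part of this patch), a minimal user-space sketch, assuming a libc unshare() wrapper and noting that creating a new UTS namespace normally requires CAP_SYS_ADMIN:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char buf[65];

	/* Ask for a private fs_struct and a new UTS namespace. */
	if (unshare(CLONE_FS | CLONE_NEWUTS) == -1) {
		perror("unshare");	/* e.g. EPERM without CAP_SYS_ADMIN */
		return 1;
	}

	/* The hostname change stays inside the new UTS namespace. */
	if (sethostname("sandbox", strlen("sandbox")) == -1) {
		perror("sethostname");
		return 1;
	}

	if (gethostname(buf, sizeof(buf)) == 0)
		printf("hostname here: %s\n", buf);
	return 0;
}

Note the swap pattern inside task_lock(): once a new structure is installed, the corresponding new_* pointer is re-pointed at the displaced old one, so falling through the bad_unshare_cleanup_* labels releases either the unused new copies (on an error path) or the no-longer-referenced old ones (on success) with the same chain of puts.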