Commit 10e267234cc0133bc9ed26bc34eb09de90c248c0

Authored by David S. Miller
Committed by David S. Miller
1 parent af1713e0f1

[SPARC64]: Add irqtrace/stacktrace/lockdep support.

Signed-off-by: David S. Miller <davem@davemloft.net>

Showing 13 changed files with 289 additions and 66 deletions

arch/sparc64/Kconfig
# $Id: config.in,v 1.158 2002/01/24 22:14:44 davem Exp $
# For a description of the syntax of this configuration file,
# see the Configure script.
#

mainmenu "Linux/UltraSPARC Kernel Configuration"

config SPARC
	bool
	default y

config SPARC64
	bool
	default y
	help
	  SPARC is a family of RISC microprocessors designed and marketed by
	  Sun Microsystems, incorporated.  This port covers the newer 64-bit
	  UltraSPARC.  The UltraLinux project maintains both the SPARC32 and
	  SPARC64 ports; its web page is available at
	  <http://www.ultralinux.org/>.

config 64BIT
	def_bool y

config MMU
	bool
	default y

+config STACKTRACE_SUPPORT
+	bool
+	default y
+
+config LOCKDEP_SUPPORT
+	bool
+	default y
+
config TIME_INTERPOLATION
	bool
	default y

config ARCH_MAY_HAVE_PC_FDC
	bool
	default y

config ARCH_HAS_ILOG2_U32
	bool
	default n

config ARCH_HAS_ILOG2_U64
	bool
	default n

config AUDIT_ARCH
	bool
	default y

choice
	prompt "Kernel page size"
	default SPARC64_PAGE_SIZE_8KB

config SPARC64_PAGE_SIZE_8KB
	bool "8KB"
	help
	  This lets you select the page size of the kernel.

	  8KB and 64KB work quite well, since Sparc ELF sections
	  provide for up to 64KB alignment.

	  Therefore, 512KB and 4MB are for expert hackers only.

	  If you don't know what to do, choose 8KB.

config SPARC64_PAGE_SIZE_64KB
	bool "64KB"

config SPARC64_PAGE_SIZE_512KB
	bool "512KB"

config SPARC64_PAGE_SIZE_4MB
	bool "4MB"

endchoice

config SECCOMP
	bool "Enable seccomp to safely compute untrusted bytecode"
	depends on PROC_FS
	default y
	help
	  This kernel feature is useful for number crunching applications
	  that may need to compute untrusted bytecode during their
	  execution. By using pipes or other transports made available to
	  the process as file descriptors supporting the read/write
	  syscalls, it's possible to isolate those applications in
	  their own address space using seccomp. Once seccomp is
	  enabled via /proc/<pid>/seccomp, it cannot be disabled
	  and the task is only allowed to execute a few safe syscalls
	  defined by each seccomp mode.

	  If unsure, say Y. Only embedded should say N here.

source kernel/Kconfig.hz

source "init/Kconfig"

config SYSVIPC_COMPAT
	bool
	depends on COMPAT && SYSVIPC
	default y

config GENERIC_HARDIRQS
	bool
	default y

menu "General machine setup"

config SMP
	bool "Symmetric multi-processing support"
	---help---
	  This enables support for systems with more than one CPU. If you have
	  a system with only one CPU, say N. If you have a system with more than
	  one CPU, say Y.

	  If you say N here, the kernel will run on single and multiprocessor
	  machines, but will use only one CPU of a multiprocessor machine. If
	  you say Y here, the kernel will run on many, but not all,
	  singleprocessor machines. On a singleprocessor machine, the kernel
	  will run faster if you say N here.

	  People using multiprocessor machines who say Y here should also say
	  Y to "Enhanced Real Time Clock Support", below. The "Advanced Power
	  Management" code will be disabled if you say Y here.

	  See also the <file:Documentation/smp.txt>,
	  <file:Documentation/nmi_watchdog.txt> and the SMP-HOWTO available at
	  <http://www.tldp.org/docs.html#howto>.

	  If you don't know what to do here, say N.

config PREEMPT
	bool "Preemptible Kernel"
	help
	  This option reduces the latency of the kernel when reacting to
	  real-time or interactive events by allowing a low priority process to
	  be preempted even if it is in kernel mode executing a system call.
	  This allows applications to run more reliably even when the system is
	  under load.

	  Say Y here if you are building a kernel for a desktop, embedded
	  or real-time system.  Say N if you are unsure.

config NR_CPUS
	int "Maximum number of CPUs (2-64)"
	range 2 64
	depends on SMP
	default "32"

source "drivers/cpufreq/Kconfig"

config US3_FREQ
	tristate "UltraSPARC-III CPU Frequency driver"
	depends on CPU_FREQ
	select CPU_FREQ_TABLE
	help
	  This adds the CPUFreq driver for UltraSPARC-III processors.

	  For details, take a look at <file:Documentation/cpu-freq>.

	  If in doubt, say N.

config US2E_FREQ
	tristate "UltraSPARC-IIe CPU Frequency driver"
	depends on CPU_FREQ
	select CPU_FREQ_TABLE
	help
	  This adds the CPUFreq driver for UltraSPARC-IIe processors.

	  For details, take a look at <file:Documentation/cpu-freq>.

	  If in doubt, say N.

# Global things across all Sun machines.
config RWSEM_GENERIC_SPINLOCK
	bool

config RWSEM_XCHGADD_ALGORITHM
	bool
	default y

config GENERIC_FIND_NEXT_BIT
	bool
	default y

config GENERIC_HWEIGHT
	bool
	default y if !ULTRA_HAS_POPULATION_COUNT

config GENERIC_CALIBRATE_DELAY
	bool
	default y

choice
	prompt "SPARC64 Huge TLB Page Size"
	depends on HUGETLB_PAGE
	default HUGETLB_PAGE_SIZE_4MB

config HUGETLB_PAGE_SIZE_4MB
	bool "4MB"

config HUGETLB_PAGE_SIZE_512K
	depends on !SPARC64_PAGE_SIZE_4MB && !SPARC64_PAGE_SIZE_512KB
	bool "512K"

config HUGETLB_PAGE_SIZE_64K
	depends on !SPARC64_PAGE_SIZE_4MB && !SPARC64_PAGE_SIZE_512KB && !SPARC64_PAGE_SIZE_64KB
	bool "64K"

endchoice

endmenu

config ARCH_SELECT_MEMORY_MODEL
	def_bool y

config ARCH_SPARSEMEM_ENABLE
	def_bool y

config ARCH_SPARSEMEM_DEFAULT
	def_bool y

config LARGE_ALLOCS
	def_bool y

source "mm/Kconfig"

config GENERIC_ISA_DMA
	bool
	default y

config ISA
	bool
	help
	  Find out whether you have ISA slots on your motherboard.  ISA is the
	  name of a bus system, i.e. the way the CPU talks to the other stuff
	  inside your box.  Other bus systems are PCI, EISA, MicroChannel
	  (MCA) or VESA.  ISA is an older system, now being displaced by PCI;
	  newer boards don't support it.  If you have ISA, say Y, otherwise N.

config ISAPNP
	bool
	help
	  Say Y here if you would like support for ISA Plug and Play devices.
	  Some information is in <file:Documentation/isapnp.txt>.

	  To compile this driver as a module, choose M here: the
	  module will be called isapnp.

	  If unsure, say Y.

config EISA
	bool
	---help---
	  The Extended Industry Standard Architecture (EISA) bus was
	  developed as an open alternative to the IBM MicroChannel bus.

	  The EISA bus provided some of the features of the IBM MicroChannel
	  bus while maintaining backward compatibility with cards made for
	  the older ISA bus.  The EISA bus saw limited use between 1988 and
	  1995 when it was made obsolete by the PCI bus.

	  Say Y here if you are building a kernel for an EISA-based machine.

	  Otherwise, say N.

config MCA
	bool
	help
	  MicroChannel Architecture is found in some IBM PS/2 machines and
	  laptops.  It is a bus system similar to PCI or ISA.  See
	  <file:Documentation/mca.txt> (and especially the web page given
	  there) before attempting to build an MCA bus kernel.

config PCMCIA
	tristate
	---help---
	  Say Y here if you want to attach PCMCIA- or PC-cards to your Linux
	  computer.  These are credit-card size devices such as network cards,
	  modems or hard drives often used with laptops computers.  There are
	  actually two varieties of these cards: the older 16 bit PCMCIA cards
	  and the newer 32 bit CardBus cards.  If you want to use CardBus
	  cards, you need to say Y here and also to "CardBus support" below.

	  To use your PC-cards, you will need supporting software from David
	  Hinds' pcmcia-cs package (see the file <file:Documentation/Changes>
	  for location).  Please also read the PCMCIA-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>.

	  To compile this driver as modules, choose M here: the
	  modules will be called pcmcia_core and ds.

config SBUS
	bool
	default y

config SBUSCHAR
	bool
	default y

config SUN_AUXIO
	bool
	default y

config SUN_IO
	bool
	default y

config PCI
	bool "PCI support"
	help
	  Find out whether you have a PCI motherboard.  PCI is the name of a
	  bus system, i.e. the way the CPU talks to the other stuff inside
	  your box.  Other bus systems are ISA, EISA, MicroChannel (MCA) or
	  VESA.  If you have PCI, say Y, otherwise N.

	  The PCI-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>, contains valuable
	  information about which PCI hardware does work under Linux and which
	  doesn't.

config PCI_DOMAINS
	bool
	default PCI

source "drivers/pci/Kconfig"

config SUN_OPENPROMFS
	tristate "Openprom tree appears in /proc/openprom"
	help
	  If you say Y, the OpenPROM device tree will be available as a
	  virtual file system, which you can mount to /proc/openprom by "mount
	  -t openpromfs none /proc/openprom".

	  To compile the /proc/openprom support as a module, choose M here: the
	  module will be called openpromfs.  If unsure, choose M.

config SPARC32_COMPAT
	bool "Kernel support for Linux/Sparc 32bit binary compatibility"
	help
	  This allows you to run 32-bit binaries on your Ultra.
	  Everybody wants this; say Y.

config COMPAT
	bool
	depends on SPARC32_COMPAT
	default y

config BINFMT_ELF32
	bool "Kernel support for 32-bit ELF binaries"
	depends on SPARC32_COMPAT
	help
	  This allows you to run 32-bit Linux/ELF binaries on your Ultra.
	  Everybody wants this; say Y.

config BINFMT_AOUT32
	bool "Kernel support for 32-bit (ie. SunOS) a.out binaries"
	depends on SPARC32_COMPAT
	help
	  This allows you to run 32-bit a.out format binaries on your Ultra.
	  If you want to run SunOS binaries (see SunOS binary emulation below)
	  or other a.out binaries, say Y.  If unsure, say N.

menu "Executable file formats"

source "fs/Kconfig.binfmt"

config SUNOS_EMUL
	bool "SunOS binary emulation"
	depends on BINFMT_AOUT32
	help
	  This allows you to run most SunOS binaries.  If you want to do this,
	  say Y here and place appropriate files in /usr/gnemul/sunos.  See
	  <http://www.ultralinux.org/faq.html> for more information.  If you
	  want to run SunOS binaries on an Ultra you must also say Y to
	  "Kernel support for 32-bit a.out binaries" above.

config SOLARIS_EMUL
	tristate "Solaris binary emulation (EXPERIMENTAL)"
	depends on SPARC32_COMPAT && EXPERIMENTAL
	help
	  This is experimental code which will enable you to run (many)
	  Solaris binaries on your SPARC Linux machine.

	  To compile this code as a module, choose M here: the
	  module will be called solaris.

endmenu

config SCHED_SMT
	bool "SMT (Hyperthreading) scheduler support"
	depends on SMP
	default y
	help
	  SMT scheduler support improves the CPU scheduler's decision making
	  when dealing with UltraSPARC cpus at a cost of slightly increased
	  overhead in some places.  If unsure say N here.

config CMDLINE_BOOL
	bool "Default bootloader kernel arguments"

config CMDLINE
	string "Initial kernel command string"
	depends on CMDLINE_BOOL
	default "console=ttyS0,9600 root=/dev/sda1"
	help
	  Say Y here if you want to be able to pass default arguments to
	  the kernel.  This will be overridden by the bootloader, if you
	  use one (such as SILO).  This is most useful if you want to boot
	  a kernel from TFTP, and want default options to be available
	  with having them passed on the command line.

	  NOTE: This option WILL override the PROM bootargs setting!

source "net/Kconfig"

source "drivers/Kconfig"

source "drivers/sbus/char/Kconfig"

source "drivers/fc4/Kconfig"

source "fs/Kconfig"

menu "Instrumentation Support"
	depends on EXPERIMENTAL

source "arch/sparc64/oprofile/Kconfig"

config KPROBES
	bool "Kprobes (EXPERIMENTAL)"
	depends on KALLSYMS && EXPERIMENTAL && MODULES
	help
	  Kprobes allows you to trap at almost any kernel address and
	  execute a callback function.  register_kprobe() establishes
	  a probepoint and specifies the callback.  Kprobes is useful
	  for kernel debugging, non-intrusive instrumentation and testing.
	  If in doubt, say "N".
endmenu

source "arch/sparc64/Kconfig.debug"

source "security/Kconfig"

source "crypto/Kconfig"

source "lib/Kconfig"
arch/sparc64/Kconfig.debug
menu "Kernel hacking"

+config TRACE_IRQFLAGS_SUPPORT
+	bool
+	default y
+
source "lib/Kconfig.debug"

config DEBUG_STACK_USAGE
	bool "Enable stack utilization instrumentation"
	depends on DEBUG_KERNEL
	help
	  Enables the display of the minimum amount of free stack which each
	  task has ever had available in the sysrq-T and sysrq-P debug output.

	  This option will slow down process creation somewhat.

config DEBUG_DCFLUSH
	bool "D-cache flush debugging"
	depends on DEBUG_KERNEL

config STACK_DEBUG
	depends on DEBUG_KERNEL
	bool "Stack Overflow Detection Support"

config DEBUG_BOOTMEM
	depends on DEBUG_KERNEL
	bool "Debug BOOTMEM initialization"

config DEBUG_PAGEALLOC
	bool "Debug page memory allocations"
	depends on DEBUG_KERNEL && !SOFTWARE_SUSPEND
	help
	  Unmap pages from the kernel linear mapping after free_pages().
	  This results in a large slowdown, but helps to find certain types
	  of memory corruptions.

config MCOUNT
	bool
	depends on STACK_DEBUG
	default y

config FRAME_POINTER
	bool
	depends on MCOUNT
	default y

endmenu
arch/sparc64/kernel/Makefile
# $Id: Makefile,v 1.70 2002/02/09 19:49:30 davem Exp $
# Makefile for the linux kernel.
#

EXTRA_AFLAGS := -ansi
EXTRA_CFLAGS := -Werror

extra-y := head.o init_task.o vmlinux.lds

obj-y := process.o setup.o cpu.o idprom.o \
	 traps.o devices.o auxio.o una_asm.o \
	 irq.o ptrace.o time.o sys_sparc.o signal.o \
	 unaligned.o central.o pci.o starfire.o semaphore.o \
	 power.o sbus.o iommu_common.o sparc64_ksyms.o chmc.o \
	 visemul.o prom.o of_device.o

+obj-$(CONFIG_STACKTRACE) += stacktrace.o
obj-$(CONFIG_PCI) += ebus.o isa.o pci_common.o pci_iommu.o \
	 pci_psycho.o pci_sabre.o pci_schizo.o \
	 pci_sun4v.o pci_sun4v_asm.o
obj-$(CONFIG_SMP) += smp.o trampoline.o
obj-$(CONFIG_SPARC32_COMPAT) += sys32.o sys_sparc32.o signal32.o
obj-$(CONFIG_BINFMT_ELF32) += binfmt_elf32.o
obj-$(CONFIG_BINFMT_AOUT32) += binfmt_aout32.o
obj-$(CONFIG_MODULES) += module.o
obj-$(CONFIG_US3_FREQ) += us3_cpufreq.o
obj-$(CONFIG_US2E_FREQ) += us2e_cpufreq.o
obj-$(CONFIG_KPROBES) += kprobes.o
obj-$(CONFIG_AUDIT) += audit.o
obj-$(CONFIG_AUDIT)$(CONFIG_SPARC32_COMPAT) += compat_audit.o
obj-y += $(obj-yy)

ifdef CONFIG_SUNOS_EMUL
obj-y += sys_sunos32.o sunos_ioctl32.o
else
ifdef CONFIG_SOLARIS_EMUL
obj-y += sys_sunos32.o sunos_ioctl32.o
endif
endif

ifneq ($(NEW_GCC),y)
CMODEL_CFLAG := -mmedlow
else
CMODEL_CFLAG := -m64 -mcmodel=medlow
endif

head.o: head.S ttable.S itlb_miss.S dtlb_miss.S ktlb.S tsb.S \
	etrap.S rtrap.S winfixup.S entry.S
arch/sparc64/kernel/entry.S
/* $Id: entry.S,v 1.144 2002/02/09 19:49:30 davem Exp $
 * arch/sparc64/kernel/entry.S: Sparc64 trap low-level entry points.
 *
 * Copyright (C) 1995,1997 David S. Miller (davem@caip.rutgers.edu)
 * Copyright (C) 1996 Eddie C. Dost (ecd@skynet.be)
 * Copyright (C) 1996 Miguel de Icaza (miguel@nuclecu.unam.mx)
 * Copyright (C) 1996,98,99 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
 */

#include <linux/errno.h>

#include <asm/head.h>
#include <asm/asi.h>
#include <asm/smp.h>
#include <asm/ptrace.h>
#include <asm/page.h>
#include <asm/signal.h>
#include <asm/pgtable.h>
#include <asm/processor.h>
#include <asm/visasm.h>
#include <asm/estate.h>
#include <asm/auxio.h>
#include <asm/sfafsr.h>
#include <asm/pil.h>
#include <asm/unistd.h>

#define curptr	g6

	.text
	.align	32

	/* This is trivial with the new code... */
	.globl	do_fpdis
do_fpdis:
	sethi	%hi(TSTATE_PEF), %g4
	rdpr	%tstate, %g5
	andcc	%g5, %g4, %g0
	be,pt	%xcc, 1f
	 nop
	rd	%fprs, %g5
	andcc	%g5, FPRS_FEF, %g0
	be,pt	%xcc, 1f
	 nop

	/* Legal state when DCR_IFPOE is set in Cheetah %dcr. */
	sethi	%hi(109f), %g7
	ba,pt	%xcc, etrap
109:	 or	%g7, %lo(109b), %g7
	add	%g0, %g0, %g0
	ba,a,pt	%xcc, rtrap_clr_l6

1:	TRAP_LOAD_THREAD_REG(%g6, %g1)
	ldub	[%g6 + TI_FPSAVED], %g5
	wr	%g0, FPRS_FEF, %fprs
	andcc	%g5, FPRS_FEF, %g0
56 be,a,pt %icc, 1f 56 be,a,pt %icc, 1f
57 clr %g7 57 clr %g7
58 ldx [%g6 + TI_GSR], %g7 58 ldx [%g6 + TI_GSR], %g7
59 1: andcc %g5, FPRS_DL, %g0 59 1: andcc %g5, FPRS_DL, %g0
60 bne,pn %icc, 2f 60 bne,pn %icc, 2f
61 fzero %f0 61 fzero %f0
62 andcc %g5, FPRS_DU, %g0 62 andcc %g5, FPRS_DU, %g0
63 bne,pn %icc, 1f 63 bne,pn %icc, 1f
64 fzero %f2 64 fzero %f2
65 faddd %f0, %f2, %f4 65 faddd %f0, %f2, %f4
66 fmuld %f0, %f2, %f6 66 fmuld %f0, %f2, %f6
67 faddd %f0, %f2, %f8 67 faddd %f0, %f2, %f8
68 fmuld %f0, %f2, %f10 68 fmuld %f0, %f2, %f10
69 faddd %f0, %f2, %f12 69 faddd %f0, %f2, %f12
70 fmuld %f0, %f2, %f14 70 fmuld %f0, %f2, %f14
71 faddd %f0, %f2, %f16 71 faddd %f0, %f2, %f16
72 fmuld %f0, %f2, %f18 72 fmuld %f0, %f2, %f18
73 faddd %f0, %f2, %f20 73 faddd %f0, %f2, %f20
74 fmuld %f0, %f2, %f22 74 fmuld %f0, %f2, %f22
75 faddd %f0, %f2, %f24 75 faddd %f0, %f2, %f24
76 fmuld %f0, %f2, %f26 76 fmuld %f0, %f2, %f26
77 faddd %f0, %f2, %f28 77 faddd %f0, %f2, %f28
78 fmuld %f0, %f2, %f30 78 fmuld %f0, %f2, %f30
79 faddd %f0, %f2, %f32 79 faddd %f0, %f2, %f32
80 fmuld %f0, %f2, %f34 80 fmuld %f0, %f2, %f34
81 faddd %f0, %f2, %f36 81 faddd %f0, %f2, %f36
82 fmuld %f0, %f2, %f38 82 fmuld %f0, %f2, %f38
83 faddd %f0, %f2, %f40 83 faddd %f0, %f2, %f40
84 fmuld %f0, %f2, %f42 84 fmuld %f0, %f2, %f42
85 faddd %f0, %f2, %f44 85 faddd %f0, %f2, %f44
86 fmuld %f0, %f2, %f46 86 fmuld %f0, %f2, %f46
87 faddd %f0, %f2, %f48 87 faddd %f0, %f2, %f48
88 fmuld %f0, %f2, %f50 88 fmuld %f0, %f2, %f50
89 faddd %f0, %f2, %f52 89 faddd %f0, %f2, %f52
90 fmuld %f0, %f2, %f54 90 fmuld %f0, %f2, %f54
91 faddd %f0, %f2, %f56 91 faddd %f0, %f2, %f56
92 fmuld %f0, %f2, %f58 92 fmuld %f0, %f2, %f58
93 b,pt %xcc, fpdis_exit2 93 b,pt %xcc, fpdis_exit2
94 faddd %f0, %f2, %f60 94 faddd %f0, %f2, %f60
95 1: mov SECONDARY_CONTEXT, %g3 95 1: mov SECONDARY_CONTEXT, %g3
96 add %g6, TI_FPREGS + 0x80, %g1 96 add %g6, TI_FPREGS + 0x80, %g1
97 faddd %f0, %f2, %f4 97 faddd %f0, %f2, %f4
98 fmuld %f0, %f2, %f6 98 fmuld %f0, %f2, %f6
99 99
100 661: ldxa [%g3] ASI_DMMU, %g5 100 661: ldxa [%g3] ASI_DMMU, %g5
101 .section .sun4v_1insn_patch, "ax" 101 .section .sun4v_1insn_patch, "ax"
102 .word 661b 102 .word 661b
103 ldxa [%g3] ASI_MMU, %g5 103 ldxa [%g3] ASI_MMU, %g5
104 .previous 104 .previous
105 105
106 sethi %hi(sparc64_kern_sec_context), %g2 106 sethi %hi(sparc64_kern_sec_context), %g2
107 ldx [%g2 + %lo(sparc64_kern_sec_context)], %g2 107 ldx [%g2 + %lo(sparc64_kern_sec_context)], %g2
108 108
109 661: stxa %g2, [%g3] ASI_DMMU 109 661: stxa %g2, [%g3] ASI_DMMU
110 .section .sun4v_1insn_patch, "ax" 110 .section .sun4v_1insn_patch, "ax"
111 .word 661b 111 .word 661b
112 stxa %g2, [%g3] ASI_MMU 112 stxa %g2, [%g3] ASI_MMU
113 .previous 113 .previous
114 114
115 membar #Sync 115 membar #Sync
116 add %g6, TI_FPREGS + 0xc0, %g2 116 add %g6, TI_FPREGS + 0xc0, %g2
117 faddd %f0, %f2, %f8 117 faddd %f0, %f2, %f8
118 fmuld %f0, %f2, %f10 118 fmuld %f0, %f2, %f10
119 membar #Sync 119 membar #Sync
120 ldda [%g1] ASI_BLK_S, %f32 120 ldda [%g1] ASI_BLK_S, %f32
121 ldda [%g2] ASI_BLK_S, %f48 121 ldda [%g2] ASI_BLK_S, %f48
122 membar #Sync 122 membar #Sync
123 faddd %f0, %f2, %f12 123 faddd %f0, %f2, %f12
124 fmuld %f0, %f2, %f14 124 fmuld %f0, %f2, %f14
125 faddd %f0, %f2, %f16 125 faddd %f0, %f2, %f16
126 fmuld %f0, %f2, %f18 126 fmuld %f0, %f2, %f18
127 faddd %f0, %f2, %f20 127 faddd %f0, %f2, %f20
128 fmuld %f0, %f2, %f22 128 fmuld %f0, %f2, %f22
129 faddd %f0, %f2, %f24 129 faddd %f0, %f2, %f24
130 fmuld %f0, %f2, %f26 130 fmuld %f0, %f2, %f26
131 faddd %f0, %f2, %f28 131 faddd %f0, %f2, %f28
132 fmuld %f0, %f2, %f30 132 fmuld %f0, %f2, %f30
133 b,pt %xcc, fpdis_exit 133 b,pt %xcc, fpdis_exit
134 nop 134 nop
135 2: andcc %g5, FPRS_DU, %g0 135 2: andcc %g5, FPRS_DU, %g0
136 bne,pt %icc, 3f 136 bne,pt %icc, 3f
137 fzero %f32 137 fzero %f32
138 mov SECONDARY_CONTEXT, %g3 138 mov SECONDARY_CONTEXT, %g3
139 fzero %f34 139 fzero %f34
140 140
141 661: ldxa [%g3] ASI_DMMU, %g5 141 661: ldxa [%g3] ASI_DMMU, %g5
142 .section .sun4v_1insn_patch, "ax" 142 .section .sun4v_1insn_patch, "ax"
143 .word 661b 143 .word 661b
144 ldxa [%g3] ASI_MMU, %g5 144 ldxa [%g3] ASI_MMU, %g5
145 .previous 145 .previous
146 146
147 add %g6, TI_FPREGS, %g1 147 add %g6, TI_FPREGS, %g1
148 sethi %hi(sparc64_kern_sec_context), %g2 148 sethi %hi(sparc64_kern_sec_context), %g2
149 ldx [%g2 + %lo(sparc64_kern_sec_context)], %g2 149 ldx [%g2 + %lo(sparc64_kern_sec_context)], %g2
150 150
151 661: stxa %g2, [%g3] ASI_DMMU 151 661: stxa %g2, [%g3] ASI_DMMU
152 .section .sun4v_1insn_patch, "ax" 152 .section .sun4v_1insn_patch, "ax"
153 .word 661b 153 .word 661b
154 stxa %g2, [%g3] ASI_MMU 154 stxa %g2, [%g3] ASI_MMU
155 .previous 155 .previous
156 156
157 membar #Sync 157 membar #Sync
158 add %g6, TI_FPREGS + 0x40, %g2 158 add %g6, TI_FPREGS + 0x40, %g2
159 faddd %f32, %f34, %f36 159 faddd %f32, %f34, %f36
160 fmuld %f32, %f34, %f38 160 fmuld %f32, %f34, %f38
161 membar #Sync 161 membar #Sync
162 ldda [%g1] ASI_BLK_S, %f0 162 ldda [%g1] ASI_BLK_S, %f0
163 ldda [%g2] ASI_BLK_S, %f16 163 ldda [%g2] ASI_BLK_S, %f16
164 membar #Sync 164 membar #Sync
165 faddd %f32, %f34, %f40 165 faddd %f32, %f34, %f40
166 fmuld %f32, %f34, %f42 166 fmuld %f32, %f34, %f42
167 faddd %f32, %f34, %f44 167 faddd %f32, %f34, %f44
168 fmuld %f32, %f34, %f46 168 fmuld %f32, %f34, %f46
169 faddd %f32, %f34, %f48 169 faddd %f32, %f34, %f48
170 fmuld %f32, %f34, %f50 170 fmuld %f32, %f34, %f50
171 faddd %f32, %f34, %f52 171 faddd %f32, %f34, %f52
172 fmuld %f32, %f34, %f54 172 fmuld %f32, %f34, %f54
173 faddd %f32, %f34, %f56 173 faddd %f32, %f34, %f56
174 fmuld %f32, %f34, %f58 174 fmuld %f32, %f34, %f58
175 faddd %f32, %f34, %f60 175 faddd %f32, %f34, %f60
176 fmuld %f32, %f34, %f62 176 fmuld %f32, %f34, %f62
177 ba,pt %xcc, fpdis_exit 177 ba,pt %xcc, fpdis_exit
178 nop 178 nop
179 3: mov SECONDARY_CONTEXT, %g3 179 3: mov SECONDARY_CONTEXT, %g3
180 add %g6, TI_FPREGS, %g1 180 add %g6, TI_FPREGS, %g1
181 181
182 661: ldxa [%g3] ASI_DMMU, %g5 182 661: ldxa [%g3] ASI_DMMU, %g5
183 .section .sun4v_1insn_patch, "ax" 183 .section .sun4v_1insn_patch, "ax"
184 .word 661b 184 .word 661b
185 ldxa [%g3] ASI_MMU, %g5 185 ldxa [%g3] ASI_MMU, %g5
186 .previous 186 .previous
187 187
188 sethi %hi(sparc64_kern_sec_context), %g2 188 sethi %hi(sparc64_kern_sec_context), %g2
189 ldx [%g2 + %lo(sparc64_kern_sec_context)], %g2 189 ldx [%g2 + %lo(sparc64_kern_sec_context)], %g2
190 190
191 661: stxa %g2, [%g3] ASI_DMMU 191 661: stxa %g2, [%g3] ASI_DMMU
192 .section .sun4v_1insn_patch, "ax" 192 .section .sun4v_1insn_patch, "ax"
193 .word 661b 193 .word 661b
194 stxa %g2, [%g3] ASI_MMU 194 stxa %g2, [%g3] ASI_MMU
195 .previous 195 .previous
196 196
197 membar #Sync 197 membar #Sync
198 mov 0x40, %g2 198 mov 0x40, %g2
199 membar #Sync 199 membar #Sync
200 ldda [%g1] ASI_BLK_S, %f0 200 ldda [%g1] ASI_BLK_S, %f0
201 ldda [%g1 + %g2] ASI_BLK_S, %f16 201 ldda [%g1 + %g2] ASI_BLK_S, %f16
202 add %g1, 0x80, %g1 202 add %g1, 0x80, %g1
203 ldda [%g1] ASI_BLK_S, %f32 203 ldda [%g1] ASI_BLK_S, %f32
204 ldda [%g1 + %g2] ASI_BLK_S, %f48 204 ldda [%g1 + %g2] ASI_BLK_S, %f48
205 membar #Sync 205 membar #Sync
206 fpdis_exit: 206 fpdis_exit:
207 207
208 661: stxa %g5, [%g3] ASI_DMMU 208 661: stxa %g5, [%g3] ASI_DMMU
209 .section .sun4v_1insn_patch, "ax" 209 .section .sun4v_1insn_patch, "ax"
210 .word 661b 210 .word 661b
211 stxa %g5, [%g3] ASI_MMU 211 stxa %g5, [%g3] ASI_MMU
212 .previous 212 .previous
213 213
214 membar #Sync 214 membar #Sync
215 fpdis_exit2: 215 fpdis_exit2:
216 wr %g7, 0, %gsr 216 wr %g7, 0, %gsr
217 ldx [%g6 + TI_XFSR], %fsr 217 ldx [%g6 + TI_XFSR], %fsr
218 rdpr %tstate, %g3 218 rdpr %tstate, %g3
219 or %g3, %g4, %g3 ! anal... 219 or %g3, %g4, %g3 ! anal...
220 wrpr %g3, %tstate 220 wrpr %g3, %tstate
221 wr %g0, FPRS_FEF, %fprs ! clean DU/DL bits 221 wr %g0, FPRS_FEF, %fprs ! clean DU/DL bits
222 retry 222 retry
223 223
224 .align 32 224 .align 32
225 fp_other_bounce: 225 fp_other_bounce:
226 call do_fpother 226 call do_fpother
227 add %sp, PTREGS_OFF, %o0 227 add %sp, PTREGS_OFF, %o0
228 ba,pt %xcc, rtrap 228 ba,pt %xcc, rtrap
229 clr %l6 229 clr %l6
230 230
231 .globl do_fpother_check_fitos 231 .globl do_fpother_check_fitos
232 .align 32 232 .align 32
233 do_fpother_check_fitos: 233 do_fpother_check_fitos:
234 TRAP_LOAD_THREAD_REG(%g6, %g1) 234 TRAP_LOAD_THREAD_REG(%g6, %g1)
235 sethi %hi(fp_other_bounce - 4), %g7 235 sethi %hi(fp_other_bounce - 4), %g7
236 or %g7, %lo(fp_other_bounce - 4), %g7 236 or %g7, %lo(fp_other_bounce - 4), %g7
237 237
238 /* NOTE: Need to preserve %g7 until we fully commit 238 /* NOTE: Need to preserve %g7 until we fully commit
239 * to the fitos fixup. 239 * to the fitos fixup.
240 */ 240 */
241 stx %fsr, [%g6 + TI_XFSR] 241 stx %fsr, [%g6 + TI_XFSR]
242 rdpr %tstate, %g3 242 rdpr %tstate, %g3
243 andcc %g3, TSTATE_PRIV, %g0 243 andcc %g3, TSTATE_PRIV, %g0
244 bne,pn %xcc, do_fptrap_after_fsr 244 bne,pn %xcc, do_fptrap_after_fsr
245 nop 245 nop
246 ldx [%g6 + TI_XFSR], %g3 246 ldx [%g6 + TI_XFSR], %g3
247 srlx %g3, 14, %g1 247 srlx %g3, 14, %g1
248 and %g1, 7, %g1 248 and %g1, 7, %g1
249 cmp %g1, 2 ! Unfinished FP-OP 249 cmp %g1, 2 ! Unfinished FP-OP
250 bne,pn %xcc, do_fptrap_after_fsr 250 bne,pn %xcc, do_fptrap_after_fsr
251 sethi %hi(1 << 23), %g1 ! Inexact 251 sethi %hi(1 << 23), %g1 ! Inexact
252 andcc %g3, %g1, %g0 252 andcc %g3, %g1, %g0
253 bne,pn %xcc, do_fptrap_after_fsr 253 bne,pn %xcc, do_fptrap_after_fsr
254 rdpr %tpc, %g1 254 rdpr %tpc, %g1
255 lduwa [%g1] ASI_AIUP, %g3 ! This cannot ever fail 255 lduwa [%g1] ASI_AIUP, %g3 ! This cannot ever fail
256 #define FITOS_MASK 0xc1f83fe0 256 #define FITOS_MASK 0xc1f83fe0
257 #define FITOS_COMPARE 0x81a01880 257 #define FITOS_COMPARE 0x81a01880
258 sethi %hi(FITOS_MASK), %g1 258 sethi %hi(FITOS_MASK), %g1
259 or %g1, %lo(FITOS_MASK), %g1 259 or %g1, %lo(FITOS_MASK), %g1
260 and %g3, %g1, %g1 260 and %g3, %g1, %g1
261 sethi %hi(FITOS_COMPARE), %g2 261 sethi %hi(FITOS_COMPARE), %g2
262 or %g2, %lo(FITOS_COMPARE), %g2 262 or %g2, %lo(FITOS_COMPARE), %g2
263 cmp %g1, %g2 263 cmp %g1, %g2
264 bne,pn %xcc, do_fptrap_after_fsr 264 bne,pn %xcc, do_fptrap_after_fsr
265 nop 265 nop
266 std %f62, [%g6 + TI_FPREGS + (62 * 4)] 266 std %f62, [%g6 + TI_FPREGS + (62 * 4)]
267 sethi %hi(fitos_table_1), %g1 267 sethi %hi(fitos_table_1), %g1
268 and %g3, 0x1f, %g2 268 and %g3, 0x1f, %g2
269 or %g1, %lo(fitos_table_1), %g1 269 or %g1, %lo(fitos_table_1), %g1
270 sllx %g2, 2, %g2 270 sllx %g2, 2, %g2
271 jmpl %g1 + %g2, %g0 271 jmpl %g1 + %g2, %g0
272 ba,pt %xcc, fitos_emul_continue 272 ba,pt %xcc, fitos_emul_continue
273 273
274 fitos_table_1: 274 fitos_table_1:
275 fitod %f0, %f62 275 fitod %f0, %f62
276 fitod %f1, %f62 276 fitod %f1, %f62
277 fitod %f2, %f62 277 fitod %f2, %f62
278 fitod %f3, %f62 278 fitod %f3, %f62
279 fitod %f4, %f62 279 fitod %f4, %f62
280 fitod %f5, %f62 280 fitod %f5, %f62
281 fitod %f6, %f62 281 fitod %f6, %f62
282 fitod %f7, %f62 282 fitod %f7, %f62
283 fitod %f8, %f62 283 fitod %f8, %f62
284 fitod %f9, %f62 284 fitod %f9, %f62
285 fitod %f10, %f62 285 fitod %f10, %f62
286 fitod %f11, %f62 286 fitod %f11, %f62
287 fitod %f12, %f62 287 fitod %f12, %f62
288 fitod %f13, %f62 288 fitod %f13, %f62
289 fitod %f14, %f62 289 fitod %f14, %f62
290 fitod %f15, %f62 290 fitod %f15, %f62
291 fitod %f16, %f62 291 fitod %f16, %f62
292 fitod %f17, %f62 292 fitod %f17, %f62
293 fitod %f18, %f62 293 fitod %f18, %f62
294 fitod %f19, %f62 294 fitod %f19, %f62
295 fitod %f20, %f62 295 fitod %f20, %f62
296 fitod %f21, %f62 296 fitod %f21, %f62
297 fitod %f22, %f62 297 fitod %f22, %f62
298 fitod %f23, %f62 298 fitod %f23, %f62
299 fitod %f24, %f62 299 fitod %f24, %f62
300 fitod %f25, %f62 300 fitod %f25, %f62
301 fitod %f26, %f62 301 fitod %f26, %f62
302 fitod %f27, %f62 302 fitod %f27, %f62
303 fitod %f28, %f62 303 fitod %f28, %f62
304 fitod %f29, %f62 304 fitod %f29, %f62
305 fitod %f30, %f62 305 fitod %f30, %f62
306 fitod %f31, %f62 306 fitod %f31, %f62
307 307
308 fitos_emul_continue: 308 fitos_emul_continue:
309 sethi %hi(fitos_table_2), %g1 309 sethi %hi(fitos_table_2), %g1
310 srl %g3, 25, %g2 310 srl %g3, 25, %g2
311 or %g1, %lo(fitos_table_2), %g1 311 or %g1, %lo(fitos_table_2), %g1
312 and %g2, 0x1f, %g2 312 and %g2, 0x1f, %g2
313 sllx %g2, 2, %g2 313 sllx %g2, 2, %g2
314 jmpl %g1 + %g2, %g0 314 jmpl %g1 + %g2, %g0
315 ba,pt %xcc, fitos_emul_fini 315 ba,pt %xcc, fitos_emul_fini
316 316
317 fitos_table_2: 317 fitos_table_2:
318 fdtos %f62, %f0 318 fdtos %f62, %f0
319 fdtos %f62, %f1 319 fdtos %f62, %f1
320 fdtos %f62, %f2 320 fdtos %f62, %f2
321 fdtos %f62, %f3 321 fdtos %f62, %f3
322 fdtos %f62, %f4 322 fdtos %f62, %f4
323 fdtos %f62, %f5 323 fdtos %f62, %f5
324 fdtos %f62, %f6 324 fdtos %f62, %f6
325 fdtos %f62, %f7 325 fdtos %f62, %f7
326 fdtos %f62, %f8 326 fdtos %f62, %f8
327 fdtos %f62, %f9 327 fdtos %f62, %f9
328 fdtos %f62, %f10 328 fdtos %f62, %f10
329 fdtos %f62, %f11 329 fdtos %f62, %f11
330 fdtos %f62, %f12 330 fdtos %f62, %f12
331 fdtos %f62, %f13 331 fdtos %f62, %f13
332 fdtos %f62, %f14 332 fdtos %f62, %f14
333 fdtos %f62, %f15 333 fdtos %f62, %f15
334 fdtos %f62, %f16 334 fdtos %f62, %f16
335 fdtos %f62, %f17 335 fdtos %f62, %f17
336 fdtos %f62, %f18 336 fdtos %f62, %f18
337 fdtos %f62, %f19 337 fdtos %f62, %f19
338 fdtos %f62, %f20 338 fdtos %f62, %f20
339 fdtos %f62, %f21 339 fdtos %f62, %f21
340 fdtos %f62, %f22 340 fdtos %f62, %f22
341 fdtos %f62, %f23 341 fdtos %f62, %f23
342 fdtos %f62, %f24 342 fdtos %f62, %f24
343 fdtos %f62, %f25 343 fdtos %f62, %f25
344 fdtos %f62, %f26 344 fdtos %f62, %f26
345 fdtos %f62, %f27 345 fdtos %f62, %f27
346 fdtos %f62, %f28 346 fdtos %f62, %f28
347 fdtos %f62, %f29 347 fdtos %f62, %f29
348 fdtos %f62, %f30 348 fdtos %f62, %f30
349 fdtos %f62, %f31 349 fdtos %f62, %f31
350 350
351 fitos_emul_fini: 351 fitos_emul_fini:
352 ldd [%g6 + TI_FPREGS + (62 * 4)], %f62 352 ldd [%g6 + TI_FPREGS + (62 * 4)], %f62
353 done 353 done
354 354
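The fitos recognizer above masks the faulting instruction word with FITOS_MASK and compares against FITOS_COMPARE, then uses the rs2 field (bits 4:0) to index fitos_table_1 and the rd field (bits 29:25) to index fitos_table_2. A small C sketch of the same decode (hypothetical helper names; the constants are the ones defined in the assembly):

```c
#include <stdint.h>

/* Sketch of the fitos-recognition test: an instruction word is treated
 * as fitos when the fixed opcode bits match FITOS_COMPARE under
 * FITOS_MASK; the rd and rs2 fields are left unmasked so they can be
 * extracted afterwards, exactly as the trap handler does. */
#define FITOS_MASK    0xc1f83fe0u
#define FITOS_COMPARE 0x81a01880u

static int is_fitos(uint32_t insn)
{
	return (insn & FITOS_MASK) == FITOS_COMPARE;
}

static unsigned fitos_rs2(uint32_t insn) { return insn & 0x1f; }        /* table 1 index */
static unsigned fitos_rd(uint32_t insn)  { return (insn >> 25) & 0x1f; } /* table 2 index */
```

The assembly's `sllx %g2, 2, %g2` then scales each index by 4 (one instruction slot) before the `jmpl` into the table.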
355 .globl do_fptrap 355 .globl do_fptrap
356 .align 32 356 .align 32
357 do_fptrap: 357 do_fptrap:
358 TRAP_LOAD_THREAD_REG(%g6, %g1) 358 TRAP_LOAD_THREAD_REG(%g6, %g1)
359 stx %fsr, [%g6 + TI_XFSR] 359 stx %fsr, [%g6 + TI_XFSR]
360 do_fptrap_after_fsr: 360 do_fptrap_after_fsr:
361 ldub [%g6 + TI_FPSAVED], %g3 361 ldub [%g6 + TI_FPSAVED], %g3
362 rd %fprs, %g1 362 rd %fprs, %g1
363 or %g3, %g1, %g3 363 or %g3, %g1, %g3
364 stb %g3, [%g6 + TI_FPSAVED] 364 stb %g3, [%g6 + TI_FPSAVED]
365 rd %gsr, %g3 365 rd %gsr, %g3
366 stx %g3, [%g6 + TI_GSR] 366 stx %g3, [%g6 + TI_GSR]
367 mov SECONDARY_CONTEXT, %g3 367 mov SECONDARY_CONTEXT, %g3
368 368
369 661: ldxa [%g3] ASI_DMMU, %g5 369 661: ldxa [%g3] ASI_DMMU, %g5
370 .section .sun4v_1insn_patch, "ax" 370 .section .sun4v_1insn_patch, "ax"
371 .word 661b 371 .word 661b
372 ldxa [%g3] ASI_MMU, %g5 372 ldxa [%g3] ASI_MMU, %g5
373 .previous 373 .previous
374 374
375 sethi %hi(sparc64_kern_sec_context), %g2 375 sethi %hi(sparc64_kern_sec_context), %g2
376 ldx [%g2 + %lo(sparc64_kern_sec_context)], %g2 376 ldx [%g2 + %lo(sparc64_kern_sec_context)], %g2
377 377
378 661: stxa %g2, [%g3] ASI_DMMU 378 661: stxa %g2, [%g3] ASI_DMMU
379 .section .sun4v_1insn_patch, "ax" 379 .section .sun4v_1insn_patch, "ax"
380 .word 661b 380 .word 661b
381 stxa %g2, [%g3] ASI_MMU 381 stxa %g2, [%g3] ASI_MMU
382 .previous 382 .previous
383 383
384 membar #Sync 384 membar #Sync
385 add %g6, TI_FPREGS, %g2 385 add %g6, TI_FPREGS, %g2
386 andcc %g1, FPRS_DL, %g0 386 andcc %g1, FPRS_DL, %g0
387 be,pn %icc, 4f 387 be,pn %icc, 4f
388 mov 0x40, %g3 388 mov 0x40, %g3
389 stda %f0, [%g2] ASI_BLK_S 389 stda %f0, [%g2] ASI_BLK_S
390 stda %f16, [%g2 + %g3] ASI_BLK_S 390 stda %f16, [%g2 + %g3] ASI_BLK_S
391 andcc %g1, FPRS_DU, %g0 391 andcc %g1, FPRS_DU, %g0
392 be,pn %icc, 5f 392 be,pn %icc, 5f
393 4: add %g2, 128, %g2 393 4: add %g2, 128, %g2
394 stda %f32, [%g2] ASI_BLK_S 394 stda %f32, [%g2] ASI_BLK_S
395 stda %f48, [%g2 + %g3] ASI_BLK_S 395 stda %f48, [%g2 + %g3] ASI_BLK_S
396 5: mov SECONDARY_CONTEXT, %g1 396 5: mov SECONDARY_CONTEXT, %g1
397 membar #Sync 397 membar #Sync
398 398
399 661: stxa %g5, [%g1] ASI_DMMU 399 661: stxa %g5, [%g1] ASI_DMMU
400 .section .sun4v_1insn_patch, "ax" 400 .section .sun4v_1insn_patch, "ax"
401 .word 661b 401 .word 661b
402 stxa %g5, [%g1] ASI_MMU 402 stxa %g5, [%g1] ASI_MMU
403 .previous 403 .previous
404 404
405 membar #Sync 405 membar #Sync
406 ba,pt %xcc, etrap 406 ba,pt %xcc, etrap
407 wr %g0, 0, %fprs 407 wr %g0, 0, %fprs
408 408
409 /* The registers for cross calls will be: 409 /* The registers for cross calls will be:
410 * 410 *
411 * DATA 0: [low 32-bits] Address of function to call, jmp to this 411 * DATA 0: [low 32-bits] Address of function to call, jmp to this
412 * [high 32-bits] MMU Context Argument 0, place in %g5 412 * [high 32-bits] MMU Context Argument 0, place in %g5
413 * DATA 1: Address Argument 1, place in %g1 413 * DATA 1: Address Argument 1, place in %g1
414 * DATA 2: Address Argument 2, place in %g7 414 * DATA 2: Address Argument 2, place in %g7
415 * 415 *
416 * With this method we can do most of the cross-call tlb/cache 416 * With this method we can do most of the cross-call tlb/cache
417 * flushing very quickly. 417 * flushing very quickly.
418 */ 418 */
419 .text 419 .text
420 .align 32 420 .align 32
421 .globl do_ivec 421 .globl do_ivec
422 do_ivec: 422 do_ivec:
423 mov 0x40, %g3 423 mov 0x40, %g3
424 ldxa [%g3 + %g0] ASI_INTR_R, %g3 424 ldxa [%g3 + %g0] ASI_INTR_R, %g3
425 sethi %hi(KERNBASE), %g4 425 sethi %hi(KERNBASE), %g4
426 cmp %g3, %g4 426 cmp %g3, %g4
427 bgeu,pn %xcc, do_ivec_xcall 427 bgeu,pn %xcc, do_ivec_xcall
428 srlx %g3, 32, %g5 428 srlx %g3, 32, %g5
429 stxa %g0, [%g0] ASI_INTR_RECEIVE 429 stxa %g0, [%g0] ASI_INTR_RECEIVE
430 membar #Sync 430 membar #Sync
431 431
432 sethi %hi(ivector_table), %g2 432 sethi %hi(ivector_table), %g2
433 sllx %g3, 3, %g3 433 sllx %g3, 3, %g3
434 or %g2, %lo(ivector_table), %g2 434 or %g2, %lo(ivector_table), %g2
435 add %g2, %g3, %g3 435 add %g2, %g3, %g3
436 436
437 TRAP_LOAD_IRQ_WORK(%g6, %g1) 437 TRAP_LOAD_IRQ_WORK(%g6, %g1)
438 438
439 lduw [%g6], %g5 /* g5 = irq_work(cpu) */ 439 lduw [%g6], %g5 /* g5 = irq_work(cpu) */
440 stw %g5, [%g3 + 0x00] /* bucket->irq_chain = g5 */ 440 stw %g5, [%g3 + 0x00] /* bucket->irq_chain = g5 */
441 stw %g3, [%g6] /* irq_work(cpu) = bucket */ 441 stw %g3, [%g6] /* irq_work(cpu) = bucket */
442 wr %g0, 1 << PIL_DEVICE_IRQ, %set_softint 442 wr %g0, 1 << PIL_DEVICE_IRQ, %set_softint
443 retry 443 retry
444 do_ivec_xcall: 444 do_ivec_xcall:
445 mov 0x50, %g1 445 mov 0x50, %g1
446 ldxa [%g1 + %g0] ASI_INTR_R, %g1 446 ldxa [%g1 + %g0] ASI_INTR_R, %g1
447 srl %g3, 0, %g3 447 srl %g3, 0, %g3
448 448
449 mov 0x60, %g7 449 mov 0x60, %g7
450 ldxa [%g7 + %g0] ASI_INTR_R, %g7 450 ldxa [%g7 + %g0] ASI_INTR_R, %g7
451 stxa %g0, [%g0] ASI_INTR_RECEIVE 451 stxa %g0, [%g0] ASI_INTR_RECEIVE
452 membar #Sync 452 membar #Sync
453 ba,pt %xcc, 1f 453 ba,pt %xcc, 1f
454 nop 454 nop
455 455
456 .align 32 456 .align 32
457 1: jmpl %g3, %g0 457 1: jmpl %g3, %g0
458 nop 458 nop
459 459
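The cross-call DATA 0 format described in the comment block above packs two values into one 64-bit interrupt-dispatch word: handler address in the low 32 bits (a kernel-text address, hence the comparison against KERNBASE in do_ivec), MMU context argument 0 in the high 32 bits. A C sketch of the packing (illustrative values and helper names only, not kernel code):

```c
#include <stdint.h>

/* Sketch of the cross-call DATA 0 word: low 32 bits = function address
 * to jmpl to, high 32 bits = MMU context argument 0 (placed in %g5 by
 * the srlx in do_ivec). */
static uint64_t xcall_pack(uint32_t func, uint32_t ctx)
{
	return ((uint64_t)ctx << 32) | func;
}

static uint32_t xcall_func(uint64_t data0) { return (uint32_t)data0; }         /* srl %g3, 0 */
static uint32_t xcall_ctx(uint64_t data0)  { return (uint32_t)(data0 >> 32); } /* srlx %g3, 32 */
```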
460 .globl getcc, setcc 460 .globl getcc, setcc
461 getcc: 461 getcc:
462 ldx [%o0 + PT_V9_TSTATE], %o1 462 ldx [%o0 + PT_V9_TSTATE], %o1
463 srlx %o1, 32, %o1 463 srlx %o1, 32, %o1
464 and %o1, 0xf, %o1 464 and %o1, 0xf, %o1
465 retl 465 retl
466 stx %o1, [%o0 + PT_V9_G1] 466 stx %o1, [%o0 + PT_V9_G1]
467 setcc: 467 setcc:
468 ldx [%o0 + PT_V9_TSTATE], %o1 468 ldx [%o0 + PT_V9_TSTATE], %o1
469 ldx [%o0 + PT_V9_G1], %o2 469 ldx [%o0 + PT_V9_G1], %o2
470 or %g0, %ulo(TSTATE_ICC), %o3 470 or %g0, %ulo(TSTATE_ICC), %o3
471 sllx %o3, 32, %o3 471 sllx %o3, 32, %o3
472 andn %o1, %o3, %o1 472 andn %o1, %o3, %o1
473 sllx %o2, 32, %o2 473 sllx %o2, 32, %o2
474 and %o2, %o3, %o2 474 and %o2, %o3, %o2
475 or %o1, %o2, %o1 475 or %o1, %o2, %o1
476 retl 476 retl
477 stx %o1, [%o0 + PT_V9_TSTATE] 477 stx %o1, [%o0 + PT_V9_TSTATE]
478 478
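getcc and setcc above manipulate the integer condition codes, which live in TSTATE bits 35:32 (the icc nibble of the CCR field). A C sketch of the same shift-and-mask logic (hypothetical function names; the 0xf-at-bit-32 mask mirrors the `%ulo(TSTATE_ICC)` / `sllx %o3, 32` sequence in setcc):

```c
#include <stdint.h>

/* Sketch of getcc/setcc: icc occupies TSTATE bits 35:32, so reading it
 * is a shift-and-mask and writing it rewrites only that nibble. */
#define TSTATE_ICC (0xfull << 32)

static uint64_t get_icc(uint64_t tstate)
{
	return (tstate >> 32) & 0xf;
}

static uint64_t set_icc(uint64_t tstate, uint64_t icc)
{
	return (tstate & ~TSTATE_ICC) | ((icc << 32) & TSTATE_ICC);
}
```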
479 .globl utrap_trap 479 .globl utrap_trap
480 utrap_trap: /* %g3=handler,%g4=level */ 480 utrap_trap: /* %g3=handler,%g4=level */
481 TRAP_LOAD_THREAD_REG(%g6, %g1) 481 TRAP_LOAD_THREAD_REG(%g6, %g1)
482 ldx [%g6 + TI_UTRAPS], %g1 482 ldx [%g6 + TI_UTRAPS], %g1
483 brnz,pt %g1, invoke_utrap 483 brnz,pt %g1, invoke_utrap
484 nop 484 nop
485 485
486 ba,pt %xcc, etrap 486 ba,pt %xcc, etrap
487 rd %pc, %g7 487 rd %pc, %g7
488 mov %l4, %o1 488 mov %l4, %o1
489 call bad_trap 489 call bad_trap
490 add %sp, PTREGS_OFF, %o0 490 add %sp, PTREGS_OFF, %o0
491 ba,pt %xcc, rtrap 491 ba,pt %xcc, rtrap
492 clr %l6 492 clr %l6
493 493
494 invoke_utrap: 494 invoke_utrap:
495 sllx %g3, 3, %g3 495 sllx %g3, 3, %g3
496 ldx [%g1 + %g3], %g1 496 ldx [%g1 + %g3], %g1
497 save %sp, -128, %sp 497 save %sp, -128, %sp
498 rdpr %tstate, %l6 498 rdpr %tstate, %l6
499 rdpr %cwp, %l7 499 rdpr %cwp, %l7
500 andn %l6, TSTATE_CWP, %l6 500 andn %l6, TSTATE_CWP, %l6
501 wrpr %l6, %l7, %tstate 501 wrpr %l6, %l7, %tstate
502 rdpr %tpc, %l6 502 rdpr %tpc, %l6
503 rdpr %tnpc, %l7 503 rdpr %tnpc, %l7
504 wrpr %g1, 0, %tnpc 504 wrpr %g1, 0, %tnpc
505 done 505 done
506 506
507 /* We need to carefully read the error status, ACK 507 /* We need to carefully read the error status, ACK
508 * the errors, prevent recursive traps, and pass the 508 * the errors, prevent recursive traps, and pass the
509 * information on to C code for logging. 509 * information on to C code for logging.
510 * 510 *
511 * We pass the AFAR in as-is, and we encode the status 511 * We pass the AFAR in as-is, and we encode the status
512 * information as described in asm-sparc64/sfafsr.h 512 * information as described in asm-sparc64/sfafsr.h
513 */ 513 */
514 .globl __spitfire_access_error 514 .globl __spitfire_access_error
515 __spitfire_access_error: 515 __spitfire_access_error:
516 /* Disable ESTATE error reporting so that we do not 516 /* Disable ESTATE error reporting so that we do not
517 * take recursive traps and RED state the processor. 517 * take recursive traps and RED state the processor.
518 */ 518 */
519 stxa %g0, [%g0] ASI_ESTATE_ERROR_EN 519 stxa %g0, [%g0] ASI_ESTATE_ERROR_EN
520 membar #Sync 520 membar #Sync
521 521
522 mov UDBE_UE, %g1 522 mov UDBE_UE, %g1
523 ldxa [%g0] ASI_AFSR, %g4 ! Get AFSR 523 ldxa [%g0] ASI_AFSR, %g4 ! Get AFSR
524 524
525 /* __spitfire_cee_trap branches here with AFSR in %g4 and 525 /* __spitfire_cee_trap branches here with AFSR in %g4 and
526 * UDBE_CE in %g1. It only clears ESTATE_ERR_CE in the 526 * UDBE_CE in %g1. It only clears ESTATE_ERR_CE in the
527 * ESTATE Error Enable register. 527 * ESTATE Error Enable register.
528 */ 528 */
529 __spitfire_cee_trap_continue: 529 __spitfire_cee_trap_continue:
530 ldxa [%g0] ASI_AFAR, %g5 ! Get AFAR 530 ldxa [%g0] ASI_AFAR, %g5 ! Get AFAR
531 531
532 rdpr %tt, %g3 532 rdpr %tt, %g3
533 and %g3, 0x1ff, %g3 ! Paranoia 533 and %g3, 0x1ff, %g3 ! Paranoia
534 sllx %g3, SFSTAT_TRAP_TYPE_SHIFT, %g3 534 sllx %g3, SFSTAT_TRAP_TYPE_SHIFT, %g3
535 or %g4, %g3, %g4 535 or %g4, %g3, %g4
536 rdpr %tl, %g3 536 rdpr %tl, %g3
537 cmp %g3, 1 537 cmp %g3, 1
538 mov 1, %g3 538 mov 1, %g3
539 bleu %xcc, 1f 539 bleu %xcc, 1f
540 sllx %g3, SFSTAT_TL_GT_ONE_SHIFT, %g3 540 sllx %g3, SFSTAT_TL_GT_ONE_SHIFT, %g3
541 541
542 or %g4, %g3, %g4 542 or %g4, %g3, %g4
543 543
544 /* Read in the UDB error register state, clearing the 544 /* Read in the UDB error register state, clearing the
545 * sticky error bits as-needed. We only clear them if 545 * sticky error bits as-needed. We only clear them if
546 * the UE bit is set. Likewise, __spitfire_cee_trap 546 * the UE bit is set. Likewise, __spitfire_cee_trap
547 * below will only do so if the CE bit is set. 547 * below will only do so if the CE bit is set.
548 * 548 *
549 * NOTE: UltraSparc-I/II have high and low UDB error 549 * NOTE: UltraSparc-I/II have high and low UDB error
550 * registers, corresponding to the two UDB units 550 * registers, corresponding to the two UDB units
551 * present on those chips. UltraSparc-IIi only 551 * present on those chips. UltraSparc-IIi only
552 * has a single UDB, called "SDB" in the manual. 552 * has a single UDB, called "SDB" in the manual.
553 * For IIi the upper UDB register always reads 553 * For IIi the upper UDB register always reads
554 * as zero so for our purposes things will just 554 * as zero so for our purposes things will just
555 * work with the checks below. 555 * work with the checks below.
556 */ 556 */
557 1: ldxa [%g0] ASI_UDBH_ERROR_R, %g3 557 1: ldxa [%g0] ASI_UDBH_ERROR_R, %g3
558 and %g3, 0x3ff, %g7 ! Paranoia 558 and %g3, 0x3ff, %g7 ! Paranoia
559 sllx %g7, SFSTAT_UDBH_SHIFT, %g7 559 sllx %g7, SFSTAT_UDBH_SHIFT, %g7
560 or %g4, %g7, %g4 560 or %g4, %g7, %g4
561 andcc %g3, %g1, %g3 ! UDBE_UE or UDBE_CE 561 andcc %g3, %g1, %g3 ! UDBE_UE or UDBE_CE
562 be,pn %xcc, 1f 562 be,pn %xcc, 1f
563 nop 563 nop
564 stxa %g3, [%g0] ASI_UDB_ERROR_W 564 stxa %g3, [%g0] ASI_UDB_ERROR_W
565 membar #Sync 565 membar #Sync
566 566
567 1: mov 0x18, %g3 567 1: mov 0x18, %g3
568 ldxa [%g3] ASI_UDBL_ERROR_R, %g3 568 ldxa [%g3] ASI_UDBL_ERROR_R, %g3
569 and %g3, 0x3ff, %g7 ! Paranoia 569 and %g3, 0x3ff, %g7 ! Paranoia
570 sllx %g7, SFSTAT_UDBL_SHIFT, %g7 570 sllx %g7, SFSTAT_UDBL_SHIFT, %g7
571 or %g4, %g7, %g4 571 or %g4, %g7, %g4
572 andcc %g3, %g1, %g3 ! UDBE_UE or UDBE_CE 572 andcc %g3, %g1, %g3 ! UDBE_UE or UDBE_CE
573 be,pn %xcc, 1f 573 be,pn %xcc, 1f
574 nop 574 nop
575 mov 0x18, %g7 575 mov 0x18, %g7
576 stxa %g3, [%g7] ASI_UDB_ERROR_W 576 stxa %g3, [%g7] ASI_UDB_ERROR_W
577 membar #Sync 577 membar #Sync
578 578
579 1: /* Ok, now that we've latched the error state, 579 1: /* Ok, now that we've latched the error state,
580 * clear the sticky bits in the AFSR. 580 * clear the sticky bits in the AFSR.
581 */ 581 */
582 stxa %g4, [%g0] ASI_AFSR 582 stxa %g4, [%g0] ASI_AFSR
583 membar #Sync 583 membar #Sync
584 584
585 rdpr %tl, %g2 585 rdpr %tl, %g2
586 cmp %g2, 1 586 cmp %g2, 1
587 rdpr %pil, %g2 587 rdpr %pil, %g2
588 bleu,pt %xcc, 1f 588 bleu,pt %xcc, 1f
589 wrpr %g0, 15, %pil 589 wrpr %g0, 15, %pil
590 590
591 ba,pt %xcc, etraptl1 591 ba,pt %xcc, etraptl1
592 rd %pc, %g7 592 rd %pc, %g7
593 593
594 ba,pt %xcc, 2f 594 ba,pt %xcc, 2f
595 nop 595 nop
596 596
597 1: ba,pt %xcc, etrap_irq 597 1: ba,pt %xcc, etrap_irq
598 rd %pc, %g7 598 rd %pc, %g7
599 599
600 2: mov %l4, %o1 600 2:
601 #ifdef CONFIG_TRACE_IRQFLAGS
602 call trace_hardirqs_off
603 nop
604 #endif
605 mov %l4, %o1
601 mov %l5, %o2 606 mov %l5, %o2
602 call spitfire_access_error 607 call spitfire_access_error
603 add %sp, PTREGS_OFF, %o0 608 add %sp, PTREGS_OFF, %o0
604 ba,pt %xcc, rtrap 609 ba,pt %xcc, rtrap
605 clr %l6 610 clr %l6
606 611
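__spitfire_access_error above latches the AFSR and then ORs extra state into its high bits before calling into C: the trap type (9 bits), a TL&gt;1 flag, and the two 10-bit UDB error fields. A C sketch of that packing follows; the shift values here are illustrative placeholders, not the real asm-sparc64/sfafsr.h constants, and `sfsr_pack` is a hypothetical name:

```c
#include <stdint.h>

/* Sketch of the status word handed to spitfire_access_error(): extra
 * state is packed into high bits of the latched AFSR.  Shift values are
 * placeholders chosen so the fields do not overlap. */
#define SFSTAT_TRAP_TYPE_SHIFT 55 /* placeholder: 9-bit %tt, bits 63:55 */
#define SFSTAT_TL_GT_ONE_SHIFT 54 /* placeholder: single flag bit */
#define SFSTAT_UDBH_SHIFT      44 /* placeholder: 10-bit UDB-high state */
#define SFSTAT_UDBL_SHIFT      34 /* placeholder: 10-bit UDB-low state */

static uint64_t sfsr_pack(uint64_t afsr, unsigned tt, int tl_gt_one,
			  unsigned udbh, unsigned udbl)
{
	afsr |= (uint64_t)(tt & 0x1ff) << SFSTAT_TRAP_TYPE_SHIFT;
	if (tl_gt_one)
		afsr |= 1ull << SFSTAT_TL_GT_ONE_SHIFT;
	afsr |= (uint64_t)(udbh & 0x3ff) << SFSTAT_UDBH_SHIFT;
	afsr |= (uint64_t)(udbl & 0x3ff) << SFSTAT_UDBL_SHIFT;
	return afsr;
}
```

The `& 0x1ff` and `& 0x3ff` masks correspond to the "Paranoia" ands in the assembly before each sllx/or.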
607 /* This is the trap handler entry point for ECC correctable 612 /* This is the trap handler entry point for ECC correctable
608 * errors. They are corrected, but we listen for the trap 613 * errors. They are corrected, but we listen for the trap
609 * so that the event can be logged. 614 * so that the event can be logged.
610 * 615 *
611 * Disrupting errors are either: 616 * Disrupting errors are either:
612 * 1) single-bit ECC errors during UDB reads to system 617 * 1) single-bit ECC errors during UDB reads to system
613 * memory 618 * memory
614 * 2) data parity errors during write-back events 619 * 2) data parity errors during write-back events
615 * 620 *
616 * As far as I can make out from the manual, the CEE trap 621 * As far as I can make out from the manual, the CEE trap
617 * is only for correctable errors during memory read 622 * is only for correctable errors during memory read
618 * accesses by the front-end of the processor. 623 * accesses by the front-end of the processor.
619 * 624 *
620 * The code below is only for trap level 1 CEE events, 625 * The code below is only for trap level 1 CEE events,
621 * as it is the only situation where we can safely record 626 * as it is the only situation where we can safely record
622 * and log. For trap level >1 we just clear the CE bit 627 * and log. For trap level >1 we just clear the CE bit
623 * in the AFSR and return. 628 * in the AFSR and return.
624 * 629 *
625 * This is just like __spitfire_access_error above, but it 630 * This is just like __spitfire_access_error above, but it
626 * specifically handles correctable errors. If an 631 * specifically handles correctable errors. If an
627 * uncorrectable error is indicated in the AFSR we 632 * uncorrectable error is indicated in the AFSR we
628 * will branch directly above to __spitfire_access_error 633 * will branch directly above to __spitfire_access_error
629 * to handle it instead. Uncorrectable therefore takes 634 * to handle it instead. Uncorrectable therefore takes
630 * priority over correctable, and the error logging 635 * priority over correctable, and the error logging
631 * C code will notice this case by inspecting the 636 * C code will notice this case by inspecting the
632 * trap type. 637 * trap type.
633 */ 638 */
634 .globl __spitfire_cee_trap 639 .globl __spitfire_cee_trap
635 __spitfire_cee_trap: 640 __spitfire_cee_trap:
636 ldxa [%g0] ASI_AFSR, %g4 ! Get AFSR 641 ldxa [%g0] ASI_AFSR, %g4 ! Get AFSR
637 mov 1, %g3 642 mov 1, %g3
638 sllx %g3, SFAFSR_UE_SHIFT, %g3 643 sllx %g3, SFAFSR_UE_SHIFT, %g3
639 andcc %g4, %g3, %g0 ! Check for UE 644 andcc %g4, %g3, %g0 ! Check for UE
640 bne,pn %xcc, __spitfire_access_error 645 bne,pn %xcc, __spitfire_access_error
641 nop 646 nop
642 647
643 /* Ok, in this case we only have a correctable error. 648 /* Ok, in this case we only have a correctable error.
644 * Indicate we only wish to capture that state in register 649 * Indicate we only wish to capture that state in register
645 * %g1, and we only disable CE error reporting unlike UE 650 * %g1, and we only disable CE error reporting unlike UE
646 * handling which disables all errors. 651 * handling which disables all errors.
647 */ 652 */
648 ldxa [%g0] ASI_ESTATE_ERROR_EN, %g3 653 ldxa [%g0] ASI_ESTATE_ERROR_EN, %g3
649 andn %g3, ESTATE_ERR_CE, %g3 654 andn %g3, ESTATE_ERR_CE, %g3
650 stxa %g3, [%g0] ASI_ESTATE_ERROR_EN 655 stxa %g3, [%g0] ASI_ESTATE_ERROR_EN
651 membar #Sync 656 membar #Sync
652 657
653 /* Preserve AFSR in %g4, indicate UDB state to capture in %g1 */ 658 /* Preserve AFSR in %g4, indicate UDB state to capture in %g1 */
654 ba,pt %xcc, __spitfire_cee_trap_continue 659 ba,pt %xcc, __spitfire_cee_trap_continue
655 mov UDBE_CE, %g1 660 mov UDBE_CE, %g1
656 661
657 .globl __spitfire_data_access_exception 662 .globl __spitfire_data_access_exception
658 .globl __spitfire_data_access_exception_tl1 663 .globl __spitfire_data_access_exception_tl1
659 __spitfire_data_access_exception_tl1: 664 __spitfire_data_access_exception_tl1:
660 rdpr %pstate, %g4 665 rdpr %pstate, %g4
661 wrpr %g4, PSTATE_MG|PSTATE_AG, %pstate 666 wrpr %g4, PSTATE_MG|PSTATE_AG, %pstate
662 mov TLB_SFSR, %g3 667 mov TLB_SFSR, %g3
663 mov DMMU_SFAR, %g5 668 mov DMMU_SFAR, %g5
664 ldxa [%g3] ASI_DMMU, %g4 ! Get SFSR 669 ldxa [%g3] ASI_DMMU, %g4 ! Get SFSR
665 ldxa [%g5] ASI_DMMU, %g5 ! Get SFAR 670 ldxa [%g5] ASI_DMMU, %g5 ! Get SFAR
666 stxa %g0, [%g3] ASI_DMMU ! Clear SFSR.FaultValid bit 671 stxa %g0, [%g3] ASI_DMMU ! Clear SFSR.FaultValid bit
667 membar #Sync 672 membar #Sync
668 rdpr %tt, %g3 673 rdpr %tt, %g3
669 cmp %g3, 0x80 ! first win spill/fill trap 674 cmp %g3, 0x80 ! first win spill/fill trap
670 blu,pn %xcc, 1f 675 blu,pn %xcc, 1f
671 cmp %g3, 0xff ! last win spill/fill trap 676 cmp %g3, 0xff ! last win spill/fill trap
672 bgu,pn %xcc, 1f 677 bgu,pn %xcc, 1f
673 nop 678 nop
674 ba,pt %xcc, winfix_dax 679 ba,pt %xcc, winfix_dax
675 rdpr %tpc, %g3 680 rdpr %tpc, %g3
676 1: sethi %hi(109f), %g7 681 1: sethi %hi(109f), %g7
677 ba,pt %xcc, etraptl1 682 ba,pt %xcc, etraptl1
678 109: or %g7, %lo(109b), %g7 683 109: or %g7, %lo(109b), %g7
679 mov %l4, %o1 684 mov %l4, %o1
680 mov %l5, %o2 685 mov %l5, %o2
681 call spitfire_data_access_exception_tl1 686 call spitfire_data_access_exception_tl1
682 add %sp, PTREGS_OFF, %o0 687 add %sp, PTREGS_OFF, %o0
683 ba,pt %xcc, rtrap 688 ba,pt %xcc, rtrap
684 clr %l6 689 clr %l6
685 690
686 __spitfire_data_access_exception: 691 __spitfire_data_access_exception:
687 rdpr %pstate, %g4 692 rdpr %pstate, %g4
688 wrpr %g4, PSTATE_MG|PSTATE_AG, %pstate 693 wrpr %g4, PSTATE_MG|PSTATE_AG, %pstate
689 mov TLB_SFSR, %g3 694 mov TLB_SFSR, %g3
690 mov DMMU_SFAR, %g5 695 mov DMMU_SFAR, %g5
691 ldxa [%g3] ASI_DMMU, %g4 ! Get SFSR 696 ldxa [%g3] ASI_DMMU, %g4 ! Get SFSR
692 ldxa [%g5] ASI_DMMU, %g5 ! Get SFAR 697 ldxa [%g5] ASI_DMMU, %g5 ! Get SFAR
693 stxa %g0, [%g3] ASI_DMMU ! Clear SFSR.FaultValid bit 698 stxa %g0, [%g3] ASI_DMMU ! Clear SFSR.FaultValid bit
694 membar #Sync 699 membar #Sync
695 sethi %hi(109f), %g7 700 sethi %hi(109f), %g7
696 ba,pt %xcc, etrap 701 ba,pt %xcc, etrap
697 109: or %g7, %lo(109b), %g7 702 109: or %g7, %lo(109b), %g7
698 mov %l4, %o1 703 mov %l4, %o1
699 mov %l5, %o2 704 mov %l5, %o2
700 call spitfire_data_access_exception 705 call spitfire_data_access_exception
701 add %sp, PTREGS_OFF, %o0 706 add %sp, PTREGS_OFF, %o0
702 ba,pt %xcc, rtrap 707 ba,pt %xcc, rtrap
703 clr %l6 708 clr %l6
704 709
705 .globl __spitfire_insn_access_exception 710 .globl __spitfire_insn_access_exception
706 .globl __spitfire_insn_access_exception_tl1 711 .globl __spitfire_insn_access_exception_tl1
707 __spitfire_insn_access_exception_tl1: 712 __spitfire_insn_access_exception_tl1:
708 rdpr %pstate, %g4 713 rdpr %pstate, %g4
709 wrpr %g4, PSTATE_MG|PSTATE_AG, %pstate 714 wrpr %g4, PSTATE_MG|PSTATE_AG, %pstate
710 mov TLB_SFSR, %g3 715 mov TLB_SFSR, %g3
711 ldxa [%g3] ASI_IMMU, %g4 ! Get SFSR 716 ldxa [%g3] ASI_IMMU, %g4 ! Get SFSR
712 rdpr %tpc, %g5 ! IMMU has no SFAR, use TPC 717 rdpr %tpc, %g5 ! IMMU has no SFAR, use TPC
713 stxa %g0, [%g3] ASI_IMMU ! Clear FaultValid bit 718 stxa %g0, [%g3] ASI_IMMU ! Clear FaultValid bit
714 membar #Sync 719 membar #Sync
715 sethi %hi(109f), %g7 720 sethi %hi(109f), %g7
716 ba,pt %xcc, etraptl1 721 ba,pt %xcc, etraptl1
717 109: or %g7, %lo(109b), %g7 722 109: or %g7, %lo(109b), %g7
718 mov %l4, %o1 723 mov %l4, %o1
719 mov %l5, %o2 724 mov %l5, %o2
720 call spitfire_insn_access_exception_tl1 725 call spitfire_insn_access_exception_tl1
721 add %sp, PTREGS_OFF, %o0 726 add %sp, PTREGS_OFF, %o0
722 ba,pt %xcc, rtrap 727 ba,pt %xcc, rtrap
723 clr %l6 728 clr %l6
724 729
725 __spitfire_insn_access_exception: 730 __spitfire_insn_access_exception:
726 rdpr %pstate, %g4 731 rdpr %pstate, %g4
727 wrpr %g4, PSTATE_MG|PSTATE_AG, %pstate 732 wrpr %g4, PSTATE_MG|PSTATE_AG, %pstate
728 mov TLB_SFSR, %g3 733 mov TLB_SFSR, %g3
729 ldxa [%g3] ASI_IMMU, %g4 ! Get SFSR 734 ldxa [%g3] ASI_IMMU, %g4 ! Get SFSR
730 rdpr %tpc, %g5 ! IMMU has no SFAR, use TPC 735 rdpr %tpc, %g5 ! IMMU has no SFAR, use TPC
731 stxa %g0, [%g3] ASI_IMMU ! Clear FaultValid bit 736 stxa %g0, [%g3] ASI_IMMU ! Clear FaultValid bit
732 membar #Sync 737 membar #Sync
733 sethi %hi(109f), %g7 738 sethi %hi(109f), %g7
734 ba,pt %xcc, etrap 739 ba,pt %xcc, etrap
735 109: or %g7, %lo(109b), %g7 740 109: or %g7, %lo(109b), %g7
736 mov %l4, %o1 741 mov %l4, %o1
737 mov %l5, %o2 742 mov %l5, %o2
738 call spitfire_insn_access_exception 743 call spitfire_insn_access_exception
739 add %sp, PTREGS_OFF, %o0 744 add %sp, PTREGS_OFF, %o0
740 ba,pt %xcc, rtrap 745 ba,pt %xcc, rtrap
741 clr %l6 746 clr %l6
742 747
743 /* These get patched into the trap table at boot time 748 /* These get patched into the trap table at boot time
744 * once we know we have a cheetah processor. 749 * once we know we have a cheetah processor.
745 */ 750 */
746 .globl cheetah_fecc_trap_vector, cheetah_fecc_trap_vector_tl1 751 .globl cheetah_fecc_trap_vector, cheetah_fecc_trap_vector_tl1
747 cheetah_fecc_trap_vector: 752 cheetah_fecc_trap_vector:
748 membar #Sync 753 membar #Sync
749 ldxa [%g0] ASI_DCU_CONTROL_REG, %g1 754 ldxa [%g0] ASI_DCU_CONTROL_REG, %g1
750 andn %g1, DCU_DC | DCU_IC, %g1 755 andn %g1, DCU_DC | DCU_IC, %g1
751 stxa %g1, [%g0] ASI_DCU_CONTROL_REG 756 stxa %g1, [%g0] ASI_DCU_CONTROL_REG
752 membar #Sync 757 membar #Sync
753 sethi %hi(cheetah_fast_ecc), %g2 758 sethi %hi(cheetah_fast_ecc), %g2
754 jmpl %g2 + %lo(cheetah_fast_ecc), %g0 759 jmpl %g2 + %lo(cheetah_fast_ecc), %g0
755 mov 0, %g1 760 mov 0, %g1
756 cheetah_fecc_trap_vector_tl1: 761 cheetah_fecc_trap_vector_tl1:
757 membar #Sync 762 membar #Sync
758 ldxa [%g0] ASI_DCU_CONTROL_REG, %g1 763 ldxa [%g0] ASI_DCU_CONTROL_REG, %g1
759 andn %g1, DCU_DC | DCU_IC, %g1 764 andn %g1, DCU_DC | DCU_IC, %g1
760 stxa %g1, [%g0] ASI_DCU_CONTROL_REG 765 stxa %g1, [%g0] ASI_DCU_CONTROL_REG
761 membar #Sync 766 membar #Sync
762 sethi %hi(cheetah_fast_ecc), %g2 767 sethi %hi(cheetah_fast_ecc), %g2
763 jmpl %g2 + %lo(cheetah_fast_ecc), %g0 768 jmpl %g2 + %lo(cheetah_fast_ecc), %g0
764 mov 1, %g1 769 mov 1, %g1
765 .globl cheetah_cee_trap_vector, cheetah_cee_trap_vector_tl1 770 .globl cheetah_cee_trap_vector, cheetah_cee_trap_vector_tl1
766 cheetah_cee_trap_vector: 771 cheetah_cee_trap_vector:
767 membar #Sync 772 membar #Sync
768 ldxa [%g0] ASI_DCU_CONTROL_REG, %g1 773 ldxa [%g0] ASI_DCU_CONTROL_REG, %g1
769 andn %g1, DCU_IC, %g1 774 andn %g1, DCU_IC, %g1
770 stxa %g1, [%g0] ASI_DCU_CONTROL_REG 775 stxa %g1, [%g0] ASI_DCU_CONTROL_REG
771 membar #Sync 776 membar #Sync
772 sethi %hi(cheetah_cee), %g2 777 sethi %hi(cheetah_cee), %g2
773 jmpl %g2 + %lo(cheetah_cee), %g0 778 jmpl %g2 + %lo(cheetah_cee), %g0
774 mov 0, %g1 779 mov 0, %g1
775 cheetah_cee_trap_vector_tl1: 780 cheetah_cee_trap_vector_tl1:
776 membar #Sync 781 membar #Sync
777 ldxa [%g0] ASI_DCU_CONTROL_REG, %g1 782 ldxa [%g0] ASI_DCU_CONTROL_REG, %g1
778 andn %g1, DCU_IC, %g1 783 andn %g1, DCU_IC, %g1
779 stxa %g1, [%g0] ASI_DCU_CONTROL_REG 784 stxa %g1, [%g0] ASI_DCU_CONTROL_REG
780 membar #Sync 785 membar #Sync
781 sethi %hi(cheetah_cee), %g2 786 sethi %hi(cheetah_cee), %g2
782 jmpl %g2 + %lo(cheetah_cee), %g0 787 jmpl %g2 + %lo(cheetah_cee), %g0
783 mov 1, %g1 788 mov 1, %g1
784 .globl cheetah_deferred_trap_vector, cheetah_deferred_trap_vector_tl1 789 .globl cheetah_deferred_trap_vector, cheetah_deferred_trap_vector_tl1
785 cheetah_deferred_trap_vector: 790 cheetah_deferred_trap_vector:
786 membar #Sync 791 membar #Sync
787 ldxa [%g0] ASI_DCU_CONTROL_REG, %g1; 792 ldxa [%g0] ASI_DCU_CONTROL_REG, %g1;
788 andn %g1, DCU_DC | DCU_IC, %g1; 793 andn %g1, DCU_DC | DCU_IC, %g1;
789 stxa %g1, [%g0] ASI_DCU_CONTROL_REG; 794 stxa %g1, [%g0] ASI_DCU_CONTROL_REG;
790 membar #Sync; 795 membar #Sync;
791 sethi %hi(cheetah_deferred_trap), %g2 796 sethi %hi(cheetah_deferred_trap), %g2
792 jmpl %g2 + %lo(cheetah_deferred_trap), %g0 797 jmpl %g2 + %lo(cheetah_deferred_trap), %g0
793 mov 0, %g1 798 mov 0, %g1
794 cheetah_deferred_trap_vector_tl1: 799 cheetah_deferred_trap_vector_tl1:
795 membar #Sync; 800 membar #Sync;
796 ldxa [%g0] ASI_DCU_CONTROL_REG, %g1; 801 ldxa [%g0] ASI_DCU_CONTROL_REG, %g1;
797 andn %g1, DCU_DC | DCU_IC, %g1; 802 andn %g1, DCU_DC | DCU_IC, %g1;
798 stxa %g1, [%g0] ASI_DCU_CONTROL_REG; 803 stxa %g1, [%g0] ASI_DCU_CONTROL_REG;
799 membar #Sync; 804 membar #Sync;
800 sethi %hi(cheetah_deferred_trap), %g2 805 sethi %hi(cheetah_deferred_trap), %g2
801 jmpl %g2 + %lo(cheetah_deferred_trap), %g0 806 jmpl %g2 + %lo(cheetah_deferred_trap), %g0
802 mov 1, %g1 807 mov 1, %g1
803 808
804 /* Cheetah+ specific traps. These are for the new I/D cache parity 809 /* Cheetah+ specific traps. These are for the new I/D cache parity
805 * error traps. The first argument to cheetah_plus_parity_handler 810 * error traps. The first argument to cheetah_plus_parity_handler
806 * is encoded as follows: 811 * is encoded as follows:
807 * 812 *
808 * Bit0: 0=dcache,1=icache 813 * Bit0: 0=dcache,1=icache
809 * Bit1: 0=recoverable,1=unrecoverable 814 * Bit1: 0=recoverable,1=unrecoverable
810 */ 815 */
811 .globl cheetah_plus_dcpe_trap_vector, cheetah_plus_dcpe_trap_vector_tl1 816 .globl cheetah_plus_dcpe_trap_vector, cheetah_plus_dcpe_trap_vector_tl1
812 cheetah_plus_dcpe_trap_vector: 817 cheetah_plus_dcpe_trap_vector:
813 membar #Sync 818 membar #Sync
814 sethi %hi(do_cheetah_plus_data_parity), %g7 819 sethi %hi(do_cheetah_plus_data_parity), %g7
815 jmpl %g7 + %lo(do_cheetah_plus_data_parity), %g0 820 jmpl %g7 + %lo(do_cheetah_plus_data_parity), %g0
816 nop 821 nop
817 nop 822 nop
818 nop 823 nop
819 nop 824 nop
820 nop 825 nop
821 826
822 do_cheetah_plus_data_parity: 827 do_cheetah_plus_data_parity:
823 rdpr %pil, %g2 828 rdpr %pil, %g2
824 wrpr %g0, 15, %pil 829 wrpr %g0, 15, %pil
825 ba,pt %xcc, etrap_irq 830 ba,pt %xcc, etrap_irq
826 rd %pc, %g7 831 rd %pc, %g7
832 #ifdef CONFIG_TRACE_IRQFLAGS
833 call trace_hardirqs_off
834 nop
835 #endif
827 mov 0x0, %o0 836 mov 0x0, %o0
828 call cheetah_plus_parity_error 837 call cheetah_plus_parity_error
829 add %sp, PTREGS_OFF, %o1 838 add %sp, PTREGS_OFF, %o1
830 ba,a,pt %xcc, rtrap_irq 839 ba,a,pt %xcc, rtrap_irq
831 840
832 cheetah_plus_dcpe_trap_vector_tl1: 841 cheetah_plus_dcpe_trap_vector_tl1:
833 membar #Sync 842 membar #Sync
834 wrpr PSTATE_IG | PSTATE_PEF | PSTATE_PRIV, %pstate 843 wrpr PSTATE_IG | PSTATE_PEF | PSTATE_PRIV, %pstate
835 sethi %hi(do_dcpe_tl1), %g3 844 sethi %hi(do_dcpe_tl1), %g3
836 jmpl %g3 + %lo(do_dcpe_tl1), %g0 845 jmpl %g3 + %lo(do_dcpe_tl1), %g0
837 nop 846 nop
838 nop 847 nop
839 nop 848 nop
840 nop 849 nop
841 850
842 .globl cheetah_plus_icpe_trap_vector, cheetah_plus_icpe_trap_vector_tl1 851 .globl cheetah_plus_icpe_trap_vector, cheetah_plus_icpe_trap_vector_tl1
843 cheetah_plus_icpe_trap_vector: 852 cheetah_plus_icpe_trap_vector:
844 membar #Sync 853 membar #Sync
845 sethi %hi(do_cheetah_plus_insn_parity), %g7 854 sethi %hi(do_cheetah_plus_insn_parity), %g7
846 jmpl %g7 + %lo(do_cheetah_plus_insn_parity), %g0 855 jmpl %g7 + %lo(do_cheetah_plus_insn_parity), %g0
847 nop 856 nop
848 nop 857 nop
849 nop 858 nop
850 nop 859 nop
851 nop 860 nop
852 861
853 do_cheetah_plus_insn_parity: 862 do_cheetah_plus_insn_parity:
854 rdpr %pil, %g2 863 rdpr %pil, %g2
855 wrpr %g0, 15, %pil 864 wrpr %g0, 15, %pil
856 ba,pt %xcc, etrap_irq 865 ba,pt %xcc, etrap_irq
857 rd %pc, %g7 866 rd %pc, %g7
867 #ifdef CONFIG_TRACE_IRQFLAGS
868 call trace_hardirqs_off
869 nop
870 #endif
858 mov 0x1, %o0 871 mov 0x1, %o0
859 call cheetah_plus_parity_error 872 call cheetah_plus_parity_error
860 add %sp, PTREGS_OFF, %o1 873 add %sp, PTREGS_OFF, %o1
861 ba,a,pt %xcc, rtrap_irq 874 ba,a,pt %xcc, rtrap_irq
862 875
863 cheetah_plus_icpe_trap_vector_tl1: 876 cheetah_plus_icpe_trap_vector_tl1:
864 membar #Sync 877 membar #Sync
865 wrpr PSTATE_IG | PSTATE_PEF | PSTATE_PRIV, %pstate 878 wrpr PSTATE_IG | PSTATE_PEF | PSTATE_PRIV, %pstate
866 sethi %hi(do_icpe_tl1), %g3 879 sethi %hi(do_icpe_tl1), %g3
867 jmpl %g3 + %lo(do_icpe_tl1), %g0 880 jmpl %g3 + %lo(do_icpe_tl1), %g0
868 nop 881 nop
869 nop 882 nop
870 nop 883 nop
871 nop 884 nop
872 885
873 /* If we take one of these traps when tl >= 1, then we 886 /* If we take one of these traps when tl >= 1, then we
874 * jump to interrupt globals. If some trap level above us 887 * jump to interrupt globals. If some trap level above us
875 * was also using interrupt globals, we cannot recover. 888 * was also using interrupt globals, we cannot recover.
876 * We may use all interrupt global registers except %g6. 889 * We may use all interrupt global registers except %g6.
877 */ 890 */
878 .globl do_dcpe_tl1, do_icpe_tl1 891 .globl do_dcpe_tl1, do_icpe_tl1
879 do_dcpe_tl1: 892 do_dcpe_tl1:
880 rdpr %tl, %g1 ! Save original trap level 893 rdpr %tl, %g1 ! Save original trap level
881 mov 1, %g2 ! Setup TSTATE checking loop 894 mov 1, %g2 ! Setup TSTATE checking loop
882 sethi %hi(TSTATE_IG), %g3 ! TSTATE mask bit 895 sethi %hi(TSTATE_IG), %g3 ! TSTATE mask bit
883 1: wrpr %g2, %tl ! Set trap level to check 896 1: wrpr %g2, %tl ! Set trap level to check
884 rdpr %tstate, %g4 ! Read TSTATE for this level 897 rdpr %tstate, %g4 ! Read TSTATE for this level
885 andcc %g4, %g3, %g0 ! Interrupt globals in use? 898 andcc %g4, %g3, %g0 ! Interrupt globals in use?
886 bne,a,pn %xcc, do_dcpe_tl1_fatal ! Yep, irrecoverable 899 bne,a,pn %xcc, do_dcpe_tl1_fatal ! Yep, irrecoverable
887 wrpr %g1, %tl ! Restore original trap level 900 wrpr %g1, %tl ! Restore original trap level
888 add %g2, 1, %g2 ! Next trap level 901 add %g2, 1, %g2 ! Next trap level
889 cmp %g2, %g1 ! Hit them all yet? 902 cmp %g2, %g1 ! Hit them all yet?
890 ble,pt %icc, 1b ! Not yet 903 ble,pt %icc, 1b ! Not yet
891 nop 904 nop
892 wrpr %g1, %tl ! Restore original trap level 905 wrpr %g1, %tl ! Restore original trap level
893 do_dcpe_tl1_nonfatal: /* Ok we may use interrupt globals safely. */ 906 do_dcpe_tl1_nonfatal: /* Ok we may use interrupt globals safely. */
894 sethi %hi(dcache_parity_tl1_occurred), %g2 907 sethi %hi(dcache_parity_tl1_occurred), %g2
895 lduw [%g2 + %lo(dcache_parity_tl1_occurred)], %g1 908 lduw [%g2 + %lo(dcache_parity_tl1_occurred)], %g1
896 add %g1, 1, %g1 909 add %g1, 1, %g1
897 stw %g1, [%g2 + %lo(dcache_parity_tl1_occurred)] 910 stw %g1, [%g2 + %lo(dcache_parity_tl1_occurred)]
898 /* Reset D-cache parity */ 911 /* Reset D-cache parity */
899 sethi %hi(1 << 16), %g1 ! D-cache size 912 sethi %hi(1 << 16), %g1 ! D-cache size
900 mov (1 << 5), %g2 ! D-cache line size 913 mov (1 << 5), %g2 ! D-cache line size
901 sub %g1, %g2, %g1 ! Move down 1 cacheline 914 sub %g1, %g2, %g1 ! Move down 1 cacheline
902 1: srl %g1, 14, %g3 ! Compute UTAG 915 1: srl %g1, 14, %g3 ! Compute UTAG
903 membar #Sync 916 membar #Sync
904 stxa %g3, [%g1] ASI_DCACHE_UTAG 917 stxa %g3, [%g1] ASI_DCACHE_UTAG
905 membar #Sync 918 membar #Sync
906 sub %g2, 8, %g3 ! 64-bit data word within line 919 sub %g2, 8, %g3 ! 64-bit data word within line
907 2: membar #Sync 920 2: membar #Sync
908 stxa %g0, [%g1 + %g3] ASI_DCACHE_DATA 921 stxa %g0, [%g1 + %g3] ASI_DCACHE_DATA
909 membar #Sync 922 membar #Sync
910 subcc %g3, 8, %g3 ! Next 64-bit data word 923 subcc %g3, 8, %g3 ! Next 64-bit data word
911 bge,pt %icc, 2b 924 bge,pt %icc, 2b
912 nop 925 nop
913 subcc %g1, %g2, %g1 ! Next cacheline 926 subcc %g1, %g2, %g1 ! Next cacheline
914 bge,pt %icc, 1b 927 bge,pt %icc, 1b
915 nop 928 nop
916 ba,pt %xcc, dcpe_icpe_tl1_common 929 ba,pt %xcc, dcpe_icpe_tl1_common
917 nop 930 nop
918 931
919 do_dcpe_tl1_fatal: 932 do_dcpe_tl1_fatal:
920 sethi %hi(1f), %g7 933 sethi %hi(1f), %g7
921 ba,pt %xcc, etraptl1 934 ba,pt %xcc, etraptl1
922 1: or %g7, %lo(1b), %g7 935 1: or %g7, %lo(1b), %g7
923 mov 0x2, %o0 936 mov 0x2, %o0
924 call cheetah_plus_parity_error 937 call cheetah_plus_parity_error
925 add %sp, PTREGS_OFF, %o1 938 add %sp, PTREGS_OFF, %o1
926 ba,pt %xcc, rtrap 939 ba,pt %xcc, rtrap
927 clr %l6 940 clr %l6
928 941
929 do_icpe_tl1: 942 do_icpe_tl1:
930 rdpr %tl, %g1 ! Save original trap level 943 rdpr %tl, %g1 ! Save original trap level
931 mov 1, %g2 ! Setup TSTATE checking loop 944 mov 1, %g2 ! Setup TSTATE checking loop
932 sethi %hi(TSTATE_IG), %g3 ! TSTATE mask bit 945 sethi %hi(TSTATE_IG), %g3 ! TSTATE mask bit
933 1: wrpr %g2, %tl ! Set trap level to check 946 1: wrpr %g2, %tl ! Set trap level to check
934 rdpr %tstate, %g4 ! Read TSTATE for this level 947 rdpr %tstate, %g4 ! Read TSTATE for this level
935 andcc %g4, %g3, %g0 ! Interrupt globals in use? 948 andcc %g4, %g3, %g0 ! Interrupt globals in use?
936 bne,a,pn %xcc, do_icpe_tl1_fatal ! Yep, irrecoverable 949 bne,a,pn %xcc, do_icpe_tl1_fatal ! Yep, irrecoverable
937 wrpr %g1, %tl ! Restore original trap level 950 wrpr %g1, %tl ! Restore original trap level
938 add %g2, 1, %g2 ! Next trap level 951 add %g2, 1, %g2 ! Next trap level
939 cmp %g2, %g1 ! Hit them all yet? 952 cmp %g2, %g1 ! Hit them all yet?
940 ble,pt %icc, 1b ! Not yet 953 ble,pt %icc, 1b ! Not yet
941 nop 954 nop
942 wrpr %g1, %tl ! Restore original trap level 955 wrpr %g1, %tl ! Restore original trap level
943 do_icpe_tl1_nonfatal: /* Ok we may use interrupt globals safely. */ 956 do_icpe_tl1_nonfatal: /* Ok we may use interrupt globals safely. */
944 sethi %hi(icache_parity_tl1_occurred), %g2 957 sethi %hi(icache_parity_tl1_occurred), %g2
945 lduw [%g2 + %lo(icache_parity_tl1_occurred)], %g1 958 lduw [%g2 + %lo(icache_parity_tl1_occurred)], %g1
946 add %g1, 1, %g1 959 add %g1, 1, %g1
947 stw %g1, [%g2 + %lo(icache_parity_tl1_occurred)] 960 stw %g1, [%g2 + %lo(icache_parity_tl1_occurred)]
948 /* Flush I-cache */ 961 /* Flush I-cache */
949 sethi %hi(1 << 15), %g1 ! I-cache size 962 sethi %hi(1 << 15), %g1 ! I-cache size
950 mov (1 << 5), %g2 ! I-cache line size 963 mov (1 << 5), %g2 ! I-cache line size
951 sub %g1, %g2, %g1 964 sub %g1, %g2, %g1
952 1: or %g1, (2 << 3), %g3 965 1: or %g1, (2 << 3), %g3
953 stxa %g0, [%g3] ASI_IC_TAG 966 stxa %g0, [%g3] ASI_IC_TAG
954 membar #Sync 967 membar #Sync
955 subcc %g1, %g2, %g1 968 subcc %g1, %g2, %g1
956 bge,pt %icc, 1b 969 bge,pt %icc, 1b
957 nop 970 nop
958 ba,pt %xcc, dcpe_icpe_tl1_common 971 ba,pt %xcc, dcpe_icpe_tl1_common
959 nop 972 nop
960 973
961 do_icpe_tl1_fatal: 974 do_icpe_tl1_fatal:
962 sethi %hi(1f), %g7 975 sethi %hi(1f), %g7
963 ba,pt %xcc, etraptl1 976 ba,pt %xcc, etraptl1
964 1: or %g7, %lo(1b), %g7 977 1: or %g7, %lo(1b), %g7
965 mov 0x3, %o0 978 mov 0x3, %o0
966 call cheetah_plus_parity_error 979 call cheetah_plus_parity_error
967 add %sp, PTREGS_OFF, %o1 980 add %sp, PTREGS_OFF, %o1
968 ba,pt %xcc, rtrap 981 ba,pt %xcc, rtrap
969 clr %l6 982 clr %l6
970 983
971 dcpe_icpe_tl1_common: 984 dcpe_icpe_tl1_common:
972 /* Flush D-cache, re-enable D/I caches in DCU and finally 985 /* Flush D-cache, re-enable D/I caches in DCU and finally
973 * retry the trapping instruction. 986 * retry the trapping instruction.
974 */ 987 */
975 sethi %hi(1 << 16), %g1 ! D-cache size 988 sethi %hi(1 << 16), %g1 ! D-cache size
976 mov (1 << 5), %g2 ! D-cache line size 989 mov (1 << 5), %g2 ! D-cache line size
977 sub %g1, %g2, %g1 990 sub %g1, %g2, %g1
978 1: stxa %g0, [%g1] ASI_DCACHE_TAG 991 1: stxa %g0, [%g1] ASI_DCACHE_TAG
979 membar #Sync 992 membar #Sync
980 subcc %g1, %g2, %g1 993 subcc %g1, %g2, %g1
981 bge,pt %icc, 1b 994 bge,pt %icc, 1b
982 nop 995 nop
983 ldxa [%g0] ASI_DCU_CONTROL_REG, %g1 996 ldxa [%g0] ASI_DCU_CONTROL_REG, %g1
984 or %g1, (DCU_DC | DCU_IC), %g1 997 or %g1, (DCU_DC | DCU_IC), %g1
985 stxa %g1, [%g0] ASI_DCU_CONTROL_REG 998 stxa %g1, [%g0] ASI_DCU_CONTROL_REG
986 membar #Sync 999 membar #Sync
987 retry 1000 retry
988 1001
989 /* Capture I/D/E-cache state into per-cpu error scoreboard. 1002 /* Capture I/D/E-cache state into per-cpu error scoreboard.
990 * 1003 *
991 * %g1: (TL>=0) ? 1 : 0 1004 * %g1: (TL>=0) ? 1 : 0
992 * %g2: scratch 1005 * %g2: scratch
993 * %g3: scratch 1006 * %g3: scratch
994 * %g4: AFSR 1007 * %g4: AFSR
995 * %g5: AFAR 1008 * %g5: AFAR
996 * %g6: unused, will have current thread ptr after etrap 1009 * %g6: unused, will have current thread ptr after etrap
997 * %g7: scratch 1010 * %g7: scratch
998 */ 1011 */
999 __cheetah_log_error: 1012 __cheetah_log_error:
1000 /* Put "TL1" software bit into AFSR. */ 1013 /* Put "TL1" software bit into AFSR. */
1001 and %g1, 0x1, %g1 1014 and %g1, 0x1, %g1
1002 sllx %g1, 63, %g2 1015 sllx %g1, 63, %g2
1003 or %g4, %g2, %g4 1016 or %g4, %g2, %g4
1004 1017
1005 /* Get log entry pointer for this cpu at this trap level. */ 1018 /* Get log entry pointer for this cpu at this trap level. */
1006 BRANCH_IF_JALAPENO(g2,g3,50f) 1019 BRANCH_IF_JALAPENO(g2,g3,50f)
1007 ldxa [%g0] ASI_SAFARI_CONFIG, %g2 1020 ldxa [%g0] ASI_SAFARI_CONFIG, %g2
1008 srlx %g2, 17, %g2 1021 srlx %g2, 17, %g2
1009 ba,pt %xcc, 60f 1022 ba,pt %xcc, 60f
1010 and %g2, 0x3ff, %g2 1023 and %g2, 0x3ff, %g2
1011 1024
1012 50: ldxa [%g0] ASI_JBUS_CONFIG, %g2 1025 50: ldxa [%g0] ASI_JBUS_CONFIG, %g2
1013 srlx %g2, 17, %g2 1026 srlx %g2, 17, %g2
1014 and %g2, 0x1f, %g2 1027 and %g2, 0x1f, %g2
1015 1028
1016 60: sllx %g2, 9, %g2 1029 60: sllx %g2, 9, %g2
1017 sethi %hi(cheetah_error_log), %g3 1030 sethi %hi(cheetah_error_log), %g3
1018 ldx [%g3 + %lo(cheetah_error_log)], %g3 1031 ldx [%g3 + %lo(cheetah_error_log)], %g3
1019 brz,pn %g3, 80f 1032 brz,pn %g3, 80f
1020 nop 1033 nop
1021 1034
1022 add %g3, %g2, %g3 1035 add %g3, %g2, %g3
1023 sllx %g1, 8, %g1 1036 sllx %g1, 8, %g1
1024 add %g3, %g1, %g1 1037 add %g3, %g1, %g1
1025 1038
1026 /* %g1 holds pointer to the top of the logging scoreboard */ 1039 /* %g1 holds pointer to the top of the logging scoreboard */
1027 ldx [%g1 + 0x0], %g7 1040 ldx [%g1 + 0x0], %g7
1028 cmp %g7, -1 1041 cmp %g7, -1
1029 bne,pn %xcc, 80f 1042 bne,pn %xcc, 80f
1030 nop 1043 nop
1031 1044
1032 stx %g4, [%g1 + 0x0] 1045 stx %g4, [%g1 + 0x0]
1033 stx %g5, [%g1 + 0x8] 1046 stx %g5, [%g1 + 0x8]
1034 add %g1, 0x10, %g1 1047 add %g1, 0x10, %g1
1035 1048
1036 /* %g1 now points to D-cache logging area */ 1049 /* %g1 now points to D-cache logging area */
1037 set 0x3ff8, %g2 /* DC_addr mask */ 1050 set 0x3ff8, %g2 /* DC_addr mask */
1038 and %g5, %g2, %g2 /* DC_addr bits of AFAR */ 1051 and %g5, %g2, %g2 /* DC_addr bits of AFAR */
1039 srlx %g5, 12, %g3 1052 srlx %g5, 12, %g3
1040 or %g3, 1, %g3 /* PHYS tag + valid */ 1053 or %g3, 1, %g3 /* PHYS tag + valid */
1041 1054
1042 10: ldxa [%g2] ASI_DCACHE_TAG, %g7 1055 10: ldxa [%g2] ASI_DCACHE_TAG, %g7
1043 cmp %g3, %g7 /* TAG match? */ 1056 cmp %g3, %g7 /* TAG match? */
1044 bne,pt %xcc, 13f 1057 bne,pt %xcc, 13f
1045 nop 1058 nop
1046 1059
1047 /* Yep, what we want, capture state. */ 1060 /* Yep, what we want, capture state. */
1048 stx %g2, [%g1 + 0x20] 1061 stx %g2, [%g1 + 0x20]
1049 stx %g7, [%g1 + 0x28] 1062 stx %g7, [%g1 + 0x28]
1050 1063
1051 /* A membar Sync is required before and after utag access. */ 1064 /* A membar Sync is required before and after utag access. */
1052 membar #Sync 1065 membar #Sync
1053 ldxa [%g2] ASI_DCACHE_UTAG, %g7 1066 ldxa [%g2] ASI_DCACHE_UTAG, %g7
1054 membar #Sync 1067 membar #Sync
1055 stx %g7, [%g1 + 0x30] 1068 stx %g7, [%g1 + 0x30]
1056 ldxa [%g2] ASI_DCACHE_SNOOP_TAG, %g7 1069 ldxa [%g2] ASI_DCACHE_SNOOP_TAG, %g7
1057 stx %g7, [%g1 + 0x38] 1070 stx %g7, [%g1 + 0x38]
1058 clr %g3 1071 clr %g3
1059 1072
1060 12: ldxa [%g2 + %g3] ASI_DCACHE_DATA, %g7 1073 12: ldxa [%g2 + %g3] ASI_DCACHE_DATA, %g7
1061 stx %g7, [%g1] 1074 stx %g7, [%g1]
1062 add %g3, (1 << 5), %g3 1075 add %g3, (1 << 5), %g3
1063 cmp %g3, (4 << 5) 1076 cmp %g3, (4 << 5)
1064 bl,pt %xcc, 12b 1077 bl,pt %xcc, 12b
1065 add %g1, 0x8, %g1 1078 add %g1, 0x8, %g1
1066 1079
1067 ba,pt %xcc, 20f 1080 ba,pt %xcc, 20f
1068 add %g1, 0x20, %g1 1081 add %g1, 0x20, %g1
1069 1082
1070 13: sethi %hi(1 << 14), %g7 1083 13: sethi %hi(1 << 14), %g7
1071 add %g2, %g7, %g2 1084 add %g2, %g7, %g2
1072 srlx %g2, 14, %g7 1085 srlx %g2, 14, %g7
1073 cmp %g7, 4 1086 cmp %g7, 4
1074 bl,pt %xcc, 10b 1087 bl,pt %xcc, 10b
1075 nop 1088 nop
1076 1089
1077 add %g1, 0x40, %g1 1090 add %g1, 0x40, %g1
1078 1091
1079 /* %g1 now points to I-cache logging area */ 1092 /* %g1 now points to I-cache logging area */
1080 20: set 0x1fe0, %g2 /* IC_addr mask */ 1093 20: set 0x1fe0, %g2 /* IC_addr mask */
1081 and %g5, %g2, %g2 /* IC_addr bits of AFAR */ 1094 and %g5, %g2, %g2 /* IC_addr bits of AFAR */
1082 sllx %g2, 1, %g2 /* IC_addr[13:6]==VA[12:5] */ 1095 sllx %g2, 1, %g2 /* IC_addr[13:6]==VA[12:5] */
1083 srlx %g5, (13 - 8), %g3 /* Make PTAG */ 1096 srlx %g5, (13 - 8), %g3 /* Make PTAG */
1084 andn %g3, 0xff, %g3 /* Mask off undefined bits */ 1097 andn %g3, 0xff, %g3 /* Mask off undefined bits */
1085 1098
1086 21: ldxa [%g2] ASI_IC_TAG, %g7 1099 21: ldxa [%g2] ASI_IC_TAG, %g7
1087 andn %g7, 0xff, %g7 1100 andn %g7, 0xff, %g7
1088 cmp %g3, %g7 1101 cmp %g3, %g7
1089 bne,pt %xcc, 23f 1102 bne,pt %xcc, 23f
1090 nop 1103 nop
1091 1104
1092 /* Yep, what we want, capture state. */ 1105 /* Yep, what we want, capture state. */
1093 stx %g2, [%g1 + 0x40] 1106 stx %g2, [%g1 + 0x40]
1094 stx %g7, [%g1 + 0x48] 1107 stx %g7, [%g1 + 0x48]
1095 add %g2, (1 << 3), %g2 1108 add %g2, (1 << 3), %g2
1096 ldxa [%g2] ASI_IC_TAG, %g7 1109 ldxa [%g2] ASI_IC_TAG, %g7
1097 add %g2, (1 << 3), %g2 1110 add %g2, (1 << 3), %g2
1098 stx %g7, [%g1 + 0x50] 1111 stx %g7, [%g1 + 0x50]
1099 ldxa [%g2] ASI_IC_TAG, %g7 1112 ldxa [%g2] ASI_IC_TAG, %g7
1100 add %g2, (1 << 3), %g2 1113 add %g2, (1 << 3), %g2
1101 stx %g7, [%g1 + 0x60] 1114 stx %g7, [%g1 + 0x60]
1102 ldxa [%g2] ASI_IC_TAG, %g7 1115 ldxa [%g2] ASI_IC_TAG, %g7
1103 stx %g7, [%g1 + 0x68] 1116 stx %g7, [%g1 + 0x68]
1104 sub %g2, (3 << 3), %g2 1117 sub %g2, (3 << 3), %g2
1105 ldxa [%g2] ASI_IC_STAG, %g7 1118 ldxa [%g2] ASI_IC_STAG, %g7
1106 stx %g7, [%g1 + 0x58] 1119 stx %g7, [%g1 + 0x58]
1107 clr %g3 1120 clr %g3
1108 srlx %g2, 2, %g2 1121 srlx %g2, 2, %g2
1109 1122
1110 22: ldxa [%g2 + %g3] ASI_IC_INSTR, %g7 1123 22: ldxa [%g2 + %g3] ASI_IC_INSTR, %g7
1111 stx %g7, [%g1] 1124 stx %g7, [%g1]
1112 add %g3, (1 << 3), %g3 1125 add %g3, (1 << 3), %g3
1113 cmp %g3, (8 << 3) 1126 cmp %g3, (8 << 3)
1114 bl,pt %xcc, 22b 1127 bl,pt %xcc, 22b
1115 add %g1, 0x8, %g1 1128 add %g1, 0x8, %g1
1116 1129
1117 ba,pt %xcc, 30f 1130 ba,pt %xcc, 30f
1118 add %g1, 0x30, %g1 1131 add %g1, 0x30, %g1
1119 1132
1120 23: sethi %hi(1 << 14), %g7 1133 23: sethi %hi(1 << 14), %g7
1121 add %g2, %g7, %g2 1134 add %g2, %g7, %g2
1122 srlx %g2, 14, %g7 1135 srlx %g2, 14, %g7
1123 cmp %g7, 4 1136 cmp %g7, 4
1124 bl,pt %xcc, 21b 1137 bl,pt %xcc, 21b
1125 nop 1138 nop
1126 1139
1127 add %g1, 0x70, %g1 1140 add %g1, 0x70, %g1
1128 1141
1129 /* %g1 now points to E-cache logging area */ 1142 /* %g1 now points to E-cache logging area */
1130 30: andn %g5, (32 - 1), %g2 1143 30: andn %g5, (32 - 1), %g2
1131 stx %g2, [%g1 + 0x20] 1144 stx %g2, [%g1 + 0x20]
1132 ldxa [%g2] ASI_EC_TAG_DATA, %g7 1145 ldxa [%g2] ASI_EC_TAG_DATA, %g7
1133 stx %g7, [%g1 + 0x28] 1146 stx %g7, [%g1 + 0x28]
1134 ldxa [%g2] ASI_EC_R, %g0 1147 ldxa [%g2] ASI_EC_R, %g0
1135 clr %g3 1148 clr %g3
1136 1149
1137 31: ldxa [%g3] ASI_EC_DATA, %g7 1150 31: ldxa [%g3] ASI_EC_DATA, %g7
1138 stx %g7, [%g1 + %g3] 1151 stx %g7, [%g1 + %g3]
1139 add %g3, 0x8, %g3 1152 add %g3, 0x8, %g3
1140 cmp %g3, 0x20 1153 cmp %g3, 0x20
1141 1154
1142 bl,pt %xcc, 31b 1155 bl,pt %xcc, 31b
1143 nop 1156 nop
1144 80: 1157 80:
1145 rdpr %tt, %g2 1158 rdpr %tt, %g2
1146 cmp %g2, 0x70 1159 cmp %g2, 0x70
1147 be c_fast_ecc 1160 be c_fast_ecc
1148 cmp %g2, 0x63 1161 cmp %g2, 0x63
1149 be c_cee 1162 be c_cee
1150 nop 1163 nop
1151 ba,pt %xcc, c_deferred 1164 ba,pt %xcc, c_deferred
1152 1165
1153 /* Cheetah FECC trap handling, we get here from tl{0,1}_fecc 1166 /* Cheetah FECC trap handling, we get here from tl{0,1}_fecc
1154 * in the trap table. That code has done a memory barrier 1167 * in the trap table. That code has done a memory barrier
1155 * and has disabled both the I-cache and D-cache in the DCU 1168 * and has disabled both the I-cache and D-cache in the DCU
1156 * control register. The I-cache is disabled so that we may 1169 * control register. The I-cache is disabled so that we may
1157 * capture the corrupted cache line, and the D-cache is disabled 1170 * capture the corrupted cache line, and the D-cache is disabled
1158 * because corrupt data may have been placed there and we don't 1171 * because corrupt data may have been placed there and we don't
1159 * want to reference it. 1172 * want to reference it.
1160 * 1173 *
1161 * %g1 is one if this trap occurred at %tl >= 1. 1174 * %g1 is one if this trap occurred at %tl >= 1.
1162 * 1175 *
1163 * Next, we turn off error reporting so that we don't recurse. 1176 * Next, we turn off error reporting so that we don't recurse.
1164 */ 1177 */
1165 .globl cheetah_fast_ecc 1178 .globl cheetah_fast_ecc
1166 cheetah_fast_ecc: 1179 cheetah_fast_ecc:
1167 ldxa [%g0] ASI_ESTATE_ERROR_EN, %g2 1180 ldxa [%g0] ASI_ESTATE_ERROR_EN, %g2
1168 andn %g2, ESTATE_ERROR_NCEEN | ESTATE_ERROR_CEEN, %g2 1181 andn %g2, ESTATE_ERROR_NCEEN | ESTATE_ERROR_CEEN, %g2
1169 stxa %g2, [%g0] ASI_ESTATE_ERROR_EN 1182 stxa %g2, [%g0] ASI_ESTATE_ERROR_EN
1170 membar #Sync 1183 membar #Sync
1171 1184
1172 /* Fetch and clear AFSR/AFAR */ 1185 /* Fetch and clear AFSR/AFAR */
1173 ldxa [%g0] ASI_AFSR, %g4 1186 ldxa [%g0] ASI_AFSR, %g4
1174 ldxa [%g0] ASI_AFAR, %g5 1187 ldxa [%g0] ASI_AFAR, %g5
1175 stxa %g4, [%g0] ASI_AFSR 1188 stxa %g4, [%g0] ASI_AFSR
1176 membar #Sync 1189 membar #Sync
1177 1190
1178 ba,pt %xcc, __cheetah_log_error 1191 ba,pt %xcc, __cheetah_log_error
1179 nop 1192 nop
1180 1193
1181 c_fast_ecc: 1194 c_fast_ecc:
1182 rdpr %pil, %g2 1195 rdpr %pil, %g2
1183 wrpr %g0, 15, %pil 1196 wrpr %g0, 15, %pil
1184 ba,pt %xcc, etrap_irq 1197 ba,pt %xcc, etrap_irq
1185 rd %pc, %g7 1198 rd %pc, %g7
1199 #ifdef CONFIG_TRACE_IRQFLAGS
1200 call trace_hardirqs_off
1201 nop
1202 #endif
1186 mov %l4, %o1 1203 mov %l4, %o1
1187 mov %l5, %o2 1204 mov %l5, %o2
1188 call cheetah_fecc_handler 1205 call cheetah_fecc_handler
1189 add %sp, PTREGS_OFF, %o0 1206 add %sp, PTREGS_OFF, %o0
1190 ba,a,pt %xcc, rtrap_irq 1207 ba,a,pt %xcc, rtrap_irq
1191 1208
1192 /* Our caller has disabled I-cache and performed membar Sync. */ 1209 /* Our caller has disabled I-cache and performed membar Sync. */
1193 .globl cheetah_cee 1210 .globl cheetah_cee
1194 cheetah_cee: 1211 cheetah_cee:
1195 ldxa [%g0] ASI_ESTATE_ERROR_EN, %g2 1212 ldxa [%g0] ASI_ESTATE_ERROR_EN, %g2
1196 andn %g2, ESTATE_ERROR_CEEN, %g2 1213 andn %g2, ESTATE_ERROR_CEEN, %g2
1197 stxa %g2, [%g0] ASI_ESTATE_ERROR_EN 1214 stxa %g2, [%g0] ASI_ESTATE_ERROR_EN
1198 membar #Sync 1215 membar #Sync
1199 1216
1200 /* Fetch and clear AFSR/AFAR */ 1217 /* Fetch and clear AFSR/AFAR */
1201 ldxa [%g0] ASI_AFSR, %g4 1218 ldxa [%g0] ASI_AFSR, %g4
1202 ldxa [%g0] ASI_AFAR, %g5 1219 ldxa [%g0] ASI_AFAR, %g5
1203 stxa %g4, [%g0] ASI_AFSR 1220 stxa %g4, [%g0] ASI_AFSR
1204 membar #Sync 1221 membar #Sync
1205 1222
1206 ba,pt %xcc, __cheetah_log_error 1223 ba,pt %xcc, __cheetah_log_error
1207 nop 1224 nop
1208 1225
1209 c_cee: 1226 c_cee:
1210 rdpr %pil, %g2 1227 rdpr %pil, %g2
1211 wrpr %g0, 15, %pil 1228 wrpr %g0, 15, %pil
1212 ba,pt %xcc, etrap_irq 1229 ba,pt %xcc, etrap_irq
1213 rd %pc, %g7 1230 rd %pc, %g7
1231 #ifdef CONFIG_TRACE_IRQFLAGS
1232 call trace_hardirqs_off
1233 nop
1234 #endif
1214 mov %l4, %o1 1235 mov %l4, %o1
1215 mov %l5, %o2 1236 mov %l5, %o2
1216 call cheetah_cee_handler 1237 call cheetah_cee_handler
1217 add %sp, PTREGS_OFF, %o0 1238 add %sp, PTREGS_OFF, %o0
1218 ba,a,pt %xcc, rtrap_irq 1239 ba,a,pt %xcc, rtrap_irq
1219 1240
1220 /* Our caller has disabled I-cache+D-cache and performed membar Sync. */ 1241 /* Our caller has disabled I-cache+D-cache and performed membar Sync. */
1221 .globl cheetah_deferred_trap 1242 .globl cheetah_deferred_trap
1222 cheetah_deferred_trap: 1243 cheetah_deferred_trap:
1223 ldxa [%g0] ASI_ESTATE_ERROR_EN, %g2 1244 ldxa [%g0] ASI_ESTATE_ERROR_EN, %g2
1224 andn %g2, ESTATE_ERROR_NCEEN | ESTATE_ERROR_CEEN, %g2 1245 andn %g2, ESTATE_ERROR_NCEEN | ESTATE_ERROR_CEEN, %g2
1225 stxa %g2, [%g0] ASI_ESTATE_ERROR_EN 1246 stxa %g2, [%g0] ASI_ESTATE_ERROR_EN
1226 membar #Sync 1247 membar #Sync
1227 1248
1228 /* Fetch and clear AFSR/AFAR */ 1249 /* Fetch and clear AFSR/AFAR */
1229 ldxa [%g0] ASI_AFSR, %g4 1250 ldxa [%g0] ASI_AFSR, %g4
1230 ldxa [%g0] ASI_AFAR, %g5 1251 ldxa [%g0] ASI_AFAR, %g5
1231 stxa %g4, [%g0] ASI_AFSR 1252 stxa %g4, [%g0] ASI_AFSR
1232 membar #Sync 1253 membar #Sync
1233 1254
1234 ba,pt %xcc, __cheetah_log_error 1255 ba,pt %xcc, __cheetah_log_error
1235 nop 1256 nop
1236 1257
1237 c_deferred: 1258 c_deferred:
1238 rdpr %pil, %g2 1259 rdpr %pil, %g2
1239 wrpr %g0, 15, %pil 1260 wrpr %g0, 15, %pil
1240 ba,pt %xcc, etrap_irq 1261 ba,pt %xcc, etrap_irq
1241 rd %pc, %g7 1262 rd %pc, %g7
1263 #ifdef CONFIG_TRACE_IRQFLAGS
1264 call trace_hardirqs_off
1265 nop
1266 #endif
1242 mov %l4, %o1 1267 mov %l4, %o1
1243 mov %l5, %o2 1268 mov %l5, %o2
1244 call cheetah_deferred_handler 1269 call cheetah_deferred_handler
1245 add %sp, PTREGS_OFF, %o0 1270 add %sp, PTREGS_OFF, %o0
1246 ba,a,pt %xcc, rtrap_irq 1271 ba,a,pt %xcc, rtrap_irq
1247 1272
1248 .globl __do_privact 1273 .globl __do_privact
1249 __do_privact: 1274 __do_privact:
1250 mov TLB_SFSR, %g3 1275 mov TLB_SFSR, %g3
1251 stxa %g0, [%g3] ASI_DMMU ! Clear FaultValid bit 1276 stxa %g0, [%g3] ASI_DMMU ! Clear FaultValid bit
1252 membar #Sync 1277 membar #Sync
1253 sethi %hi(109f), %g7 1278 sethi %hi(109f), %g7
1254 ba,pt %xcc, etrap 1279 ba,pt %xcc, etrap
1255 109: or %g7, %lo(109b), %g7 1280 109: or %g7, %lo(109b), %g7
1256 call do_privact 1281 call do_privact
1257 add %sp, PTREGS_OFF, %o0 1282 add %sp, PTREGS_OFF, %o0
1258 ba,pt %xcc, rtrap 1283 ba,pt %xcc, rtrap
1259 clr %l6 1284 clr %l6
1260 1285
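The `sethi %hi(109f), %g7` / `or %g7, %lo(109b), %g7` pair seen in these handlers builds the return PC in two halves: `sethi` loads a 22-bit immediate into bits 31..10 of the register and clears the rest, and the `or` merges in the low 10 bits. A C sketch of that split (for a 32-bit address):

```c
#include <assert.h>
#include <stdint.h>

/* %hi() is bits 31..10 (what sethi can encode); %lo() is the low
 * 10 bits, merged with a plain or. */
static uint32_t hi_part(uint32_t addr) { return addr & ~0x3ffu; }
static uint32_t lo_part(uint32_t addr) { return addr & 0x3ffu; }

static uint32_t sethi_or(uint32_t addr)
{
    uint32_t g7 = hi_part(addr); /* sethi %hi(addr), %g7 */
    g7 |= lo_part(addr);         /* or    %g7, %lo(addr), %g7 */
    return g7;                   /* full address recovered */
}
```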
1261 .globl do_mna 1286 .globl do_mna
1262 do_mna: 1287 do_mna:
1263 rdpr %tl, %g3 1288 rdpr %tl, %g3
1264 cmp %g3, 1 1289 cmp %g3, 1
1265 1290
1266 /* Setup %g4/%g5 now as they are used in the 1291 /* Setup %g4/%g5 now as they are used in the
1267 * winfixup code. 1292 * winfixup code.
1268 */ 1293 */
1269 mov TLB_SFSR, %g3 1294 mov TLB_SFSR, %g3
1270 mov DMMU_SFAR, %g4 1295 mov DMMU_SFAR, %g4
1271 ldxa [%g4] ASI_DMMU, %g4 1296 ldxa [%g4] ASI_DMMU, %g4
1272 ldxa [%g3] ASI_DMMU, %g5 1297 ldxa [%g3] ASI_DMMU, %g5
1273 stxa %g0, [%g3] ASI_DMMU ! Clear FaultValid bit 1298 stxa %g0, [%g3] ASI_DMMU ! Clear FaultValid bit
1274 membar #Sync 1299 membar #Sync
1275 bgu,pn %icc, winfix_mna 1300 bgu,pn %icc, winfix_mna
1276 rdpr %tpc, %g3 1301 rdpr %tpc, %g3
1277 1302
1278 1: sethi %hi(109f), %g7 1303 1: sethi %hi(109f), %g7
1279 ba,pt %xcc, etrap 1304 ba,pt %xcc, etrap
1280 109: or %g7, %lo(109b), %g7 1305 109: or %g7, %lo(109b), %g7
1281 mov %l4, %o1 1306 mov %l4, %o1
1282 mov %l5, %o2 1307 mov %l5, %o2
1283 call mem_address_unaligned 1308 call mem_address_unaligned
1284 add %sp, PTREGS_OFF, %o0 1309 add %sp, PTREGS_OFF, %o0
1285 ba,pt %xcc, rtrap 1310 ba,pt %xcc, rtrap
1286 clr %l6 1311 clr %l6
1287 1312
1288 .globl do_lddfmna 1313 .globl do_lddfmna
1289 do_lddfmna: 1314 do_lddfmna:
1290 sethi %hi(109f), %g7 1315 sethi %hi(109f), %g7
1291 mov TLB_SFSR, %g4 1316 mov TLB_SFSR, %g4
1292 ldxa [%g4] ASI_DMMU, %g5 1317 ldxa [%g4] ASI_DMMU, %g5
1293 stxa %g0, [%g4] ASI_DMMU ! Clear FaultValid bit 1318 stxa %g0, [%g4] ASI_DMMU ! Clear FaultValid bit
1294 membar #Sync 1319 membar #Sync
1295 mov DMMU_SFAR, %g4 1320 mov DMMU_SFAR, %g4
1296 ldxa [%g4] ASI_DMMU, %g4 1321 ldxa [%g4] ASI_DMMU, %g4
1297 ba,pt %xcc, etrap 1322 ba,pt %xcc, etrap
1298 109: or %g7, %lo(109b), %g7 1323 109: or %g7, %lo(109b), %g7
1299 mov %l4, %o1 1324 mov %l4, %o1
1300 mov %l5, %o2 1325 mov %l5, %o2
1301 call handle_lddfmna 1326 call handle_lddfmna
1302 add %sp, PTREGS_OFF, %o0 1327 add %sp, PTREGS_OFF, %o0
1303 ba,pt %xcc, rtrap 1328 ba,pt %xcc, rtrap
1304 clr %l6 1329 clr %l6
1305 1330
1306 .globl do_stdfmna 1331 .globl do_stdfmna
1307 do_stdfmna: 1332 do_stdfmna:
1308 sethi %hi(109f), %g7 1333 sethi %hi(109f), %g7
1309 mov TLB_SFSR, %g4 1334 mov TLB_SFSR, %g4
1310 ldxa [%g4] ASI_DMMU, %g5 1335 ldxa [%g4] ASI_DMMU, %g5
1311 stxa %g0, [%g4] ASI_DMMU ! Clear FaultValid bit 1336 stxa %g0, [%g4] ASI_DMMU ! Clear FaultValid bit
1312 membar #Sync 1337 membar #Sync
1313 mov DMMU_SFAR, %g4 1338 mov DMMU_SFAR, %g4
1314 ldxa [%g4] ASI_DMMU, %g4 1339 ldxa [%g4] ASI_DMMU, %g4
1315 ba,pt %xcc, etrap 1340 ba,pt %xcc, etrap
1316 109: or %g7, %lo(109b), %g7 1341 109: or %g7, %lo(109b), %g7
1317 mov %l4, %o1 1342 mov %l4, %o1
1318 mov %l5, %o2 1343 mov %l5, %o2
1319 call handle_stdfmna 1344 call handle_stdfmna
1320 add %sp, PTREGS_OFF, %o0 1345 add %sp, PTREGS_OFF, %o0
1321 ba,pt %xcc, rtrap 1346 ba,pt %xcc, rtrap
1322 clr %l6 1347 clr %l6
1323 1348
1324 .globl breakpoint_trap 1349 .globl breakpoint_trap
1325 breakpoint_trap: 1350 breakpoint_trap:
1326 call sparc_breakpoint 1351 call sparc_breakpoint
1327 add %sp, PTREGS_OFF, %o0 1352 add %sp, PTREGS_OFF, %o0
1328 ba,pt %xcc, rtrap 1353 ba,pt %xcc, rtrap
1329 nop 1354 nop
1330 1355
1331 #if defined(CONFIG_SUNOS_EMUL) || defined(CONFIG_SOLARIS_EMUL) || \ 1356 #if defined(CONFIG_SUNOS_EMUL) || defined(CONFIG_SOLARIS_EMUL) || \
1332 defined(CONFIG_SOLARIS_EMUL_MODULE) 1357 defined(CONFIG_SOLARIS_EMUL_MODULE)
1333 /* SunOS uses syscall zero as the 'indirect syscall' it looks 1358 /* SunOS uses syscall zero as the 'indirect syscall' it looks
1334 * like indir_syscall(scall_num, arg0, arg1, arg2...); etc. 1359 * like indir_syscall(scall_num, arg0, arg1, arg2...); etc.
1335 * This is complete brain damage. 1360 * This is complete brain damage.
1336 */ 1361 */
1337 .globl sunos_indir 1362 .globl sunos_indir
1338 sunos_indir: 1363 sunos_indir:
1339 srl %o0, 0, %o0 1364 srl %o0, 0, %o0
1340 mov %o7, %l4 1365 mov %o7, %l4
1341 cmp %o0, NR_SYSCALLS 1366 cmp %o0, NR_SYSCALLS
1342 blu,a,pt %icc, 1f 1367 blu,a,pt %icc, 1f
1343 sll %o0, 0x2, %o0 1368 sll %o0, 0x2, %o0
1344 sethi %hi(sunos_nosys), %l6 1369 sethi %hi(sunos_nosys), %l6
1345 b,pt %xcc, 2f 1370 b,pt %xcc, 2f
1346 or %l6, %lo(sunos_nosys), %l6 1371 or %l6, %lo(sunos_nosys), %l6
1347 1: sethi %hi(sunos_sys_table), %l7 1372 1: sethi %hi(sunos_sys_table), %l7
1348 or %l7, %lo(sunos_sys_table), %l7 1373 or %l7, %lo(sunos_sys_table), %l7
1349 lduw [%l7 + %o0], %l6 1374 lduw [%l7 + %o0], %l6
1350 2: mov %o1, %o0 1375 2: mov %o1, %o0
1351 mov %o2, %o1 1376 mov %o2, %o1
1352 mov %o3, %o2 1377 mov %o3, %o2
1353 mov %o4, %o3 1378 mov %o4, %o3
1354 mov %o5, %o4 1379 mov %o5, %o4
1355 call %l6 1380 call %l6
1356 mov %l4, %o7 1381 mov %l4, %o7
1357 1382
1358 .globl sunos_getpid 1383 .globl sunos_getpid
1359 sunos_getpid: 1384 sunos_getpid:
1360 call sys_getppid 1385 call sys_getppid
1361 nop 1386 nop
1362 call sys_getpid 1387 call sys_getpid
1363 stx %o0, [%sp + PTREGS_OFF + PT_V9_I1] 1388 stx %o0, [%sp + PTREGS_OFF + PT_V9_I1]
1364 b,pt %xcc, ret_sys_call 1389 b,pt %xcc, ret_sys_call
1365 stx %o0, [%sp + PTREGS_OFF + PT_V9_I0] 1390 stx %o0, [%sp + PTREGS_OFF + PT_V9_I0]
1366 1391
1367 /* SunOS getuid() returns uid in %o0 and euid in %o1 */ 1392 /* SunOS getuid() returns uid in %o0 and euid in %o1 */
1368 .globl sunos_getuid 1393 .globl sunos_getuid
1369 sunos_getuid: 1394 sunos_getuid:
1370 call sys32_geteuid16 1395 call sys32_geteuid16
1371 nop 1396 nop
1372 call sys32_getuid16 1397 call sys32_getuid16
1373 stx %o0, [%sp + PTREGS_OFF + PT_V9_I1] 1398 stx %o0, [%sp + PTREGS_OFF + PT_V9_I1]
1374 b,pt %xcc, ret_sys_call 1399 b,pt %xcc, ret_sys_call
1375 stx %o0, [%sp + PTREGS_OFF + PT_V9_I0] 1400 stx %o0, [%sp + PTREGS_OFF + PT_V9_I0]
1376 1401
1377 /* SunOS getgid() returns gid in %o0 and egid in %o1 */ 1402 /* SunOS getgid() returns gid in %o0 and egid in %o1 */
1378 .globl sunos_getgid 1403 .globl sunos_getgid
1379 sunos_getgid: 1404 sunos_getgid:
1380 call sys32_getegid16 1405 call sys32_getegid16
1381 nop 1406 nop
1382 call sys32_getgid16 1407 call sys32_getgid16
1383 stx %o0, [%sp + PTREGS_OFF + PT_V9_I1] 1408 stx %o0, [%sp + PTREGS_OFF + PT_V9_I1]
1384 b,pt %xcc, ret_sys_call 1409 b,pt %xcc, ret_sys_call
1385 stx %o0, [%sp + PTREGS_OFF + PT_V9_I0] 1410 stx %o0, [%sp + PTREGS_OFF + PT_V9_I0]
1386 #endif 1411 #endif
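The `sunos_getuid`/`sunos_getgid` stubs above return two values at once: the first call's result is stored (in the delay slot of the second call) into the `pt_regs` slot for `%i1`, and the second call's result goes into `%i0` via `ret_sys_call`. In C terms the convention behaves like a two-field struct return; a hypothetical model (the `fake_*` helpers are stand-ins, not the real syscalls):

```c
#include <assert.h>
#include <stdint.h>

/* Two return registers, as userspace sees them after the trap return
 * path restores PT_V9_I0 into %o0 and PT_V9_I1 into %o1. */
struct two_regs { uint32_t o0, o1; };

static uint32_t fake_getuid(void)  { return 1000; } /* stand-in for sys32_getuid16 */
static uint32_t fake_geteuid(void) { return 0; }    /* stand-in for sys32_geteuid16 */

static struct two_regs sunos_getuid_model(void)
{
    struct two_regs r;
    r.o1 = fake_geteuid(); /* first call; stashed into PT_V9_I1 */
    r.o0 = fake_getuid();  /* second call; lands in PT_V9_I0 */
    return r;
}
```

This matches the comment in the source: SunOS `getuid()` returns uid in `%o0` and euid in `%o1`.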
1387 1412
1388 /* SunOS's execv() call only specifies the argv argument, the 1413 /* SunOS's execv() call only specifies the argv argument, the
1389 * environment settings are the same as those of the calling process. 1414 * environment settings are the same as those of the calling process.
1390 */ 1415 */
1391 .globl sunos_execv 1416 .globl sunos_execv
1392 sys_execve: 1417 sys_execve:
1393 sethi %hi(sparc_execve), %g1 1418 sethi %hi(sparc_execve), %g1
1394 ba,pt %xcc, execve_merge 1419 ba,pt %xcc, execve_merge
1395 or %g1, %lo(sparc_execve), %g1 1420 or %g1, %lo(sparc_execve), %g1
1396 #ifdef CONFIG_COMPAT 1421 #ifdef CONFIG_COMPAT
1397 .globl sys_execve 1422 .globl sys_execve
1398 sunos_execv: 1423 sunos_execv:
1399 stx %g0, [%sp + PTREGS_OFF + PT_V9_I2] 1424 stx %g0, [%sp + PTREGS_OFF + PT_V9_I2]
1400 .globl sys32_execve 1425 .globl sys32_execve
1401 sys32_execve: 1426 sys32_execve:
1402 sethi %hi(sparc32_execve), %g1 1427 sethi %hi(sparc32_execve), %g1
1403 or %g1, %lo(sparc32_execve), %g1 1428 or %g1, %lo(sparc32_execve), %g1
1404 #endif 1429 #endif
1405 execve_merge: 1430 execve_merge:
1406 flushw 1431 flushw
1407 jmpl %g1, %g0 1432 jmpl %g1, %g0
1408 add %sp, PTREGS_OFF, %o0 1433 add %sp, PTREGS_OFF, %o0
1409 1434
1410 .globl sys_pipe, sys_sigpause, sys_nis_syscall 1435 .globl sys_pipe, sys_sigpause, sys_nis_syscall
1411 .globl sys_rt_sigreturn 1436 .globl sys_rt_sigreturn
1412 .globl sys_ptrace 1437 .globl sys_ptrace
1413 .globl sys_sigaltstack 1438 .globl sys_sigaltstack
1414 .align 32 1439 .align 32
1415 sys_pipe: ba,pt %xcc, sparc_pipe 1440 sys_pipe: ba,pt %xcc, sparc_pipe
1416 add %sp, PTREGS_OFF, %o0 1441 add %sp, PTREGS_OFF, %o0
1417 sys_nis_syscall:ba,pt %xcc, c_sys_nis_syscall 1442 sys_nis_syscall:ba,pt %xcc, c_sys_nis_syscall
1418 add %sp, PTREGS_OFF, %o0 1443 add %sp, PTREGS_OFF, %o0
1419 sys_memory_ordering: 1444 sys_memory_ordering:
1420 ba,pt %xcc, sparc_memory_ordering 1445 ba,pt %xcc, sparc_memory_ordering
1421 add %sp, PTREGS_OFF, %o1 1446 add %sp, PTREGS_OFF, %o1
1422 sys_sigaltstack:ba,pt %xcc, do_sigaltstack 1447 sys_sigaltstack:ba,pt %xcc, do_sigaltstack
1423 add %i6, STACK_BIAS, %o2 1448 add %i6, STACK_BIAS, %o2
1424 #ifdef CONFIG_COMPAT 1449 #ifdef CONFIG_COMPAT
1425 .globl sys32_sigstack 1450 .globl sys32_sigstack
1426 sys32_sigstack: ba,pt %xcc, do_sys32_sigstack 1451 sys32_sigstack: ba,pt %xcc, do_sys32_sigstack
1427 mov %i6, %o2 1452 mov %i6, %o2
1428 .globl sys32_sigaltstack 1453 .globl sys32_sigaltstack
1429 sys32_sigaltstack: 1454 sys32_sigaltstack:
1430 ba,pt %xcc, do_sys32_sigaltstack 1455 ba,pt %xcc, do_sys32_sigaltstack
1431 mov %i6, %o2 1456 mov %i6, %o2
1432 #endif 1457 #endif
1433 .align 32 1458 .align 32
1434 #ifdef CONFIG_COMPAT 1459 #ifdef CONFIG_COMPAT
1435 .globl sys32_sigreturn 1460 .globl sys32_sigreturn
1436 sys32_sigreturn: 1461 sys32_sigreturn:
1437 add %sp, PTREGS_OFF, %o0 1462 add %sp, PTREGS_OFF, %o0
1438 call do_sigreturn32 1463 call do_sigreturn32
1439 add %o7, 1f-.-4, %o7 1464 add %o7, 1f-.-4, %o7
1440 nop 1465 nop
1441 #endif 1466 #endif
1442 sys_rt_sigreturn: 1467 sys_rt_sigreturn:
1443 add %sp, PTREGS_OFF, %o0 1468 add %sp, PTREGS_OFF, %o0
1444 call do_rt_sigreturn 1469 call do_rt_sigreturn
1445 add %o7, 1f-.-4, %o7 1470 add %o7, 1f-.-4, %o7
1446 nop 1471 nop
1447 #ifdef CONFIG_COMPAT 1472 #ifdef CONFIG_COMPAT
1448 .globl sys32_rt_sigreturn 1473 .globl sys32_rt_sigreturn
1449 sys32_rt_sigreturn: 1474 sys32_rt_sigreturn:
1450 add %sp, PTREGS_OFF, %o0 1475 add %sp, PTREGS_OFF, %o0
1451 call do_rt_sigreturn32 1476 call do_rt_sigreturn32
1452 add %o7, 1f-.-4, %o7 1477 add %o7, 1f-.-4, %o7
1453 nop 1478 nop
1454 #endif 1479 #endif
1455 sys_ptrace: add %sp, PTREGS_OFF, %o0 1480 sys_ptrace: add %sp, PTREGS_OFF, %o0
1456 call do_ptrace 1481 call do_ptrace
1457 add %o7, 1f-.-4, %o7 1482 add %o7, 1f-.-4, %o7
1458 nop 1483 nop
1459 .align 32 1484 .align 32
1460 1: ldx [%curptr + TI_FLAGS], %l5 1485 1: ldx [%curptr + TI_FLAGS], %l5
1461 andcc %l5, (_TIF_SYSCALL_TRACE|_TIF_SECCOMP|_TIF_SYSCALL_AUDIT), %g0 1486 andcc %l5, (_TIF_SYSCALL_TRACE|_TIF_SECCOMP|_TIF_SYSCALL_AUDIT), %g0
1462 be,pt %icc, rtrap 1487 be,pt %icc, rtrap
1463 clr %l6 1488 clr %l6
1464 add %sp, PTREGS_OFF, %o0 1489 add %sp, PTREGS_OFF, %o0
1465 call syscall_trace 1490 call syscall_trace
1466 mov 1, %o1 1491 mov 1, %o1
1467 1492
1468 ba,pt %xcc, rtrap 1493 ba,pt %xcc, rtrap
1469 clr %l6 1494 clr %l6
1470 1495
1471 /* This is how fork() was meant to be done, 8 instruction entry. 1496 /* This is how fork() was meant to be done, 8 instruction entry.
1472 * 1497 *
1473 * I questioned the following code briefly, let me clear things 1498 * I questioned the following code briefly, let me clear things
1474 * up so you must not reason on it like I did. 1499 * up so you must not reason on it like I did.
1475 * 1500 *
1476 * Know the fork_kpsr etc. we use in the sparc32 port? We don't 1501 * Know the fork_kpsr etc. we use in the sparc32 port? We don't
1477 * need it here because the only piece of window state we copy to 1502 * need it here because the only piece of window state we copy to
1478 * the child is the CWP register. Even if the parent sleeps, 1503 * the child is the CWP register. Even if the parent sleeps,
1479 * we are safe because we stuck it into pt_regs of the parent 1504 * we are safe because we stuck it into pt_regs of the parent
1480 * so it will not change. 1505 * so it will not change.
1481 * 1506 *
1482 * XXX This raises the question, whether we can do the same on 1507 * XXX This raises the question, whether we can do the same on
1483 * XXX sparc32 to get rid of fork_kpsr _and_ fork_kwim. The 1508 * XXX sparc32 to get rid of fork_kpsr _and_ fork_kwim. The
1484 * XXX answer is yes. We stick fork_kpsr in UREG_G0 and 1509 * XXX answer is yes. We stick fork_kpsr in UREG_G0 and
1485 * XXX fork_kwim in UREG_G1 (global registers are considered 1510 * XXX fork_kwim in UREG_G1 (global registers are considered
1486 * XXX volatile across a system call in the sparc ABI I think 1511 * XXX volatile across a system call in the sparc ABI I think
1487 * XXX if it isn't we can use regs->y instead, anyone who depends 1512 * XXX if it isn't we can use regs->y instead, anyone who depends
1488 * XXX upon the Y register being preserved across a fork deserves 1513 * XXX upon the Y register being preserved across a fork deserves
1489 * XXX to lose). 1514 * XXX to lose).
1490 * 1515 *
1491 * In fact we should take advantage of that fact for other things 1516 * In fact we should take advantage of that fact for other things
1492 * during system calls... 1517 * during system calls...
1493 */ 1518 */
1494 .globl sys_fork, sys_vfork, sys_clone, sparc_exit 1519 .globl sys_fork, sys_vfork, sys_clone, sparc_exit
1495 .globl ret_from_syscall 1520 .globl ret_from_syscall
1496 .align 32 1521 .align 32
1497 sys_vfork: /* Under Linux, vfork and fork are just special cases of clone. */ 1522 sys_vfork: /* Under Linux, vfork and fork are just special cases of clone. */
1498 sethi %hi(0x4000 | 0x0100 | SIGCHLD), %o0 1523 sethi %hi(0x4000 | 0x0100 | SIGCHLD), %o0
1499 or %o0, %lo(0x4000 | 0x0100 | SIGCHLD), %o0 1524 or %o0, %lo(0x4000 | 0x0100 | SIGCHLD), %o0
1500 ba,pt %xcc, sys_clone 1525 ba,pt %xcc, sys_clone
1501 sys_fork: clr %o1 1526 sys_fork: clr %o1
1502 mov SIGCHLD, %o0 1527 mov SIGCHLD, %o0
1503 sys_clone: flushw 1528 sys_clone: flushw
1504 movrz %o1, %fp, %o1 1529 movrz %o1, %fp, %o1
1505 mov 0, %o3 1530 mov 0, %o3
1506 ba,pt %xcc, sparc_do_fork 1531 ba,pt %xcc, sparc_do_fork
1507 add %sp, PTREGS_OFF, %o2 1532 add %sp, PTREGS_OFF, %o2
1508 ret_from_syscall: 1533 ret_from_syscall:
1509 /* Clear current_thread_info()->new_child, and 1534 /* Clear current_thread_info()->new_child, and
1510 * check performance counter stuff too. 1535 * check performance counter stuff too.
1511 */ 1536 */
1512 stb %g0, [%g6 + TI_NEW_CHILD] 1537 stb %g0, [%g6 + TI_NEW_CHILD]
1513 ldx [%g6 + TI_FLAGS], %l0 1538 ldx [%g6 + TI_FLAGS], %l0
1514 call schedule_tail 1539 call schedule_tail
1515 mov %g7, %o0 1540 mov %g7, %o0
1516 andcc %l0, _TIF_PERFCTR, %g0 1541 andcc %l0, _TIF_PERFCTR, %g0
1517 be,pt %icc, 1f 1542 be,pt %icc, 1f
1518 nop 1543 nop
1519 ldx [%g6 + TI_PCR], %o7 1544 ldx [%g6 + TI_PCR], %o7
1520 wr %g0, %o7, %pcr 1545 wr %g0, %o7, %pcr
1521 1546
1522 /* Blackbird errata workaround. See commentary in 1547 /* Blackbird errata workaround. See commentary in
1523 * smp.c:smp_percpu_timer_interrupt() for more 1548 * smp.c:smp_percpu_timer_interrupt() for more
1524 * information. 1549 * information.
1525 */ 1550 */
1526 ba,pt %xcc, 99f 1551 ba,pt %xcc, 99f
1527 nop 1552 nop
1528 .align 64 1553 .align 64
1529 99: wr %g0, %g0, %pic 1554 99: wr %g0, %g0, %pic
1530 rd %pic, %g0 1555 rd %pic, %g0
1531 1556
1532 1: b,pt %xcc, ret_sys_call 1557 1: b,pt %xcc, ret_sys_call
1533 ldx [%sp + PTREGS_OFF + PT_V9_I0], %o0 1558 ldx [%sp + PTREGS_OFF + PT_V9_I0], %o0
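The hard-coded mask `sys_vfork` builds above, `0x4000 | 0x0100 | SIGCHLD`, spells out `CLONE_VFORK | CLONE_VM | SIGCHLD` from the clone(2) ABI. A small sketch checking those flag values (the signal number is left as a parameter rather than assumed):

```c
#include <assert.h>

/* CLONE_* values from the Linux clone(2) ABI. */
#define CLONE_VM    0x00000100UL /* share the address space */
#define CLONE_VFORK 0x00004000UL /* parent sleeps until child releases mm */

static unsigned long vfork_clone_mask(unsigned long sigchld)
{
    /* mirrors: sethi %hi(0x4000 | 0x0100 | SIGCHLD), %o0 ... */
    return CLONE_VFORK | CLONE_VM | sigchld;
}
```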
1534 sparc_exit: rdpr %pstate, %g2 1559 sparc_exit: rdpr %pstate, %g2
1535 wrpr %g2, PSTATE_IE, %pstate 1560 wrpr %g2, PSTATE_IE, %pstate
1536 rdpr %otherwin, %g1 1561 rdpr %otherwin, %g1
1537 rdpr %cansave, %g3 1562 rdpr %cansave, %g3
1538 add %g3, %g1, %g3 1563 add %g3, %g1, %g3
1539 wrpr %g3, 0x0, %cansave 1564 wrpr %g3, 0x0, %cansave
1540 wrpr %g0, 0x0, %otherwin 1565 wrpr %g0, 0x0, %otherwin
1541 wrpr %g2, 0x0, %pstate 1566 wrpr %g2, 0x0, %pstate
1542 ba,pt %xcc, sys_exit 1567 ba,pt %xcc, sys_exit
1543 stb %g0, [%g6 + TI_WSAVED] 1568 stb %g0, [%g6 + TI_WSAVED]
1544 1569
1545 linux_sparc_ni_syscall: 1570 linux_sparc_ni_syscall:
1546 sethi %hi(sys_ni_syscall), %l7 1571 sethi %hi(sys_ni_syscall), %l7
1547 b,pt %xcc, 4f 1572 b,pt %xcc, 4f
1548 or %l7, %lo(sys_ni_syscall), %l7 1573 or %l7, %lo(sys_ni_syscall), %l7
1549 1574
1550 linux_syscall_trace32: 1575 linux_syscall_trace32:
1551 add %sp, PTREGS_OFF, %o0 1576 add %sp, PTREGS_OFF, %o0
1552 call syscall_trace 1577 call syscall_trace
1553 clr %o1 1578 clr %o1
1554 srl %i0, 0, %o0 1579 srl %i0, 0, %o0
1555 srl %i4, 0, %o4 1580 srl %i4, 0, %o4
1556 srl %i1, 0, %o1 1581 srl %i1, 0, %o1
1557 srl %i2, 0, %o2 1582 srl %i2, 0, %o2
1558 b,pt %xcc, 2f 1583 b,pt %xcc, 2f
1559 srl %i3, 0, %o3 1584 srl %i3, 0, %o3
1560 1585
1561 linux_syscall_trace: 1586 linux_syscall_trace:
1562 add %sp, PTREGS_OFF, %o0 1587 add %sp, PTREGS_OFF, %o0
1563 call syscall_trace 1588 call syscall_trace
1564 clr %o1 1589 clr %o1
1565 mov %i0, %o0 1590 mov %i0, %o0
1566 mov %i1, %o1 1591 mov %i1, %o1
1567 mov %i2, %o2 1592 mov %i2, %o2
1568 mov %i3, %o3 1593 mov %i3, %o3
1569 b,pt %xcc, 2f 1594 b,pt %xcc, 2f
1570 mov %i4, %o4 1595 mov %i4, %o4
1571 1596
1572 1597
1573 /* Linux 32-bit and SunOS system calls enter here... */ 1598 /* Linux 32-bit and SunOS system calls enter here... */
1574 .align 32 1599 .align 32
1575 .globl linux_sparc_syscall32 1600 .globl linux_sparc_syscall32
1576 linux_sparc_syscall32: 1601 linux_sparc_syscall32:
1577 /* Direct access to user regs, much faster. */ 1602 /* Direct access to user regs, much faster. */
1578 cmp %g1, NR_SYSCALLS ! IEU1 Group 1603 cmp %g1, NR_SYSCALLS ! IEU1 Group
1579 bgeu,pn %xcc, linux_sparc_ni_syscall ! CTI 1604 bgeu,pn %xcc, linux_sparc_ni_syscall ! CTI
1580 srl %i0, 0, %o0 ! IEU0 1605 srl %i0, 0, %o0 ! IEU0
1581 sll %g1, 2, %l4 ! IEU0 Group 1606 sll %g1, 2, %l4 ! IEU0 Group
1582 srl %i4, 0, %o4 ! IEU1 1607 srl %i4, 0, %o4 ! IEU1
1583 lduw [%l7 + %l4], %l7 ! Load 1608 lduw [%l7 + %l4], %l7 ! Load
1584 srl %i1, 0, %o1 ! IEU0 Group 1609 srl %i1, 0, %o1 ! IEU0 Group
1585 ldx [%curptr + TI_FLAGS], %l0 ! Load 1610 ldx [%curptr + TI_FLAGS], %l0 ! Load
1586 1611
1587 srl %i5, 0, %o5 ! IEU1 1612 srl %i5, 0, %o5 ! IEU1
1588 srl %i2, 0, %o2 ! IEU0 Group 1613 srl %i2, 0, %o2 ! IEU0 Group
1589 andcc %l0, (_TIF_SYSCALL_TRACE|_TIF_SECCOMP|_TIF_SYSCALL_AUDIT), %g0 1614 andcc %l0, (_TIF_SYSCALL_TRACE|_TIF_SECCOMP|_TIF_SYSCALL_AUDIT), %g0
1590 bne,pn %icc, linux_syscall_trace32 ! CTI 1615 bne,pn %icc, linux_syscall_trace32 ! CTI
1591 mov %i0, %l5 ! IEU1 1616 mov %i0, %l5 ! IEU1
1592 call %l7 ! CTI Group brk forced 1617 call %l7 ! CTI Group brk forced
1593 srl %i3, 0, %o3 ! IEU0 1618 srl %i3, 0, %o3 ! IEU0
1594 ba,a,pt %xcc, 3f 1619 ba,a,pt %xcc, 3f
1595 1620
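The compat entry point above zero-extends every argument with `srl %iN, 0, %oN`: on sparc64 a 32-bit logical shift right by zero truncates the register to its low 32 bits and zero-extends the result into the full 64-bit register, scrubbing any stale high bits left by 32-bit userspace. Equivalent C:

```c
#include <assert.h>
#include <stdint.h>

/* srl %i0, 0, %o0 on sparc64: keep the low 32 bits, zero the rest. */
static uint64_t srl_zero_extend(uint64_t reg)
{
    return (uint64_t)(uint32_t)reg;
}
```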
1596 /* Linux native and SunOS system calls enter here... */ 1621 /* Linux native and SunOS system calls enter here... */
1597 .align 32 1622 .align 32
1598 .globl linux_sparc_syscall, ret_sys_call 1623 .globl linux_sparc_syscall, ret_sys_call
1599 linux_sparc_syscall: 1624 linux_sparc_syscall:
1600 /* Direct access to user regs, much faster. */ 1625 /* Direct access to user regs, much faster. */
1601 cmp %g1, NR_SYSCALLS ! IEU1 Group 1626 cmp %g1, NR_SYSCALLS ! IEU1 Group
1602 bgeu,pn %xcc, linux_sparc_ni_syscall ! CTI 1627 bgeu,pn %xcc, linux_sparc_ni_syscall ! CTI
1603 mov %i0, %o0 ! IEU0 1628 mov %i0, %o0 ! IEU0
1604 sll %g1, 2, %l4 ! IEU0 Group 1629 sll %g1, 2, %l4 ! IEU0 Group
1605 mov %i1, %o1 ! IEU1 1630 mov %i1, %o1 ! IEU1
1606 lduw [%l7 + %l4], %l7 ! Load 1631 lduw [%l7 + %l4], %l7 ! Load
1607 4: mov %i2, %o2 ! IEU0 Group 1632 4: mov %i2, %o2 ! IEU0 Group
1608 ldx [%curptr + TI_FLAGS], %l0 ! Load 1633 ldx [%curptr + TI_FLAGS], %l0 ! Load
1609 1634
1610 mov %i3, %o3 ! IEU1 1635 mov %i3, %o3 ! IEU1
1611 mov %i4, %o4 ! IEU0 Group 1636 mov %i4, %o4 ! IEU0 Group
1612 andcc %l0, (_TIF_SYSCALL_TRACE|_TIF_SECCOMP|_TIF_SYSCALL_AUDIT), %g0 1637 andcc %l0, (_TIF_SYSCALL_TRACE|_TIF_SECCOMP|_TIF_SYSCALL_AUDIT), %g0
1613 bne,pn %icc, linux_syscall_trace ! CTI Group 1638 bne,pn %icc, linux_syscall_trace ! CTI Group
1614 mov %i0, %l5 ! IEU0 1639 mov %i0, %l5 ! IEU0
1615 2: call %l7 ! CTI Group brk forced 1640 2: call %l7 ! CTI Group brk forced
1616 mov %i5, %o5 ! IEU0 1641 mov %i5, %o5 ! IEU0
1617 nop 1642 nop
1618 1643
1619 3: stx %o0, [%sp + PTREGS_OFF + PT_V9_I0] 1644 3: stx %o0, [%sp + PTREGS_OFF + PT_V9_I0]
1620 ret_sys_call: 1645 ret_sys_call:
1621 ldx [%sp + PTREGS_OFF + PT_V9_TSTATE], %g3 1646 ldx [%sp + PTREGS_OFF + PT_V9_TSTATE], %g3
1622 ldx [%sp + PTREGS_OFF + PT_V9_TNPC], %l1 ! pc = npc 1647 ldx [%sp + PTREGS_OFF + PT_V9_TNPC], %l1 ! pc = npc
1623 sra %o0, 0, %o0 1648 sra %o0, 0, %o0
1624 mov %ulo(TSTATE_XCARRY | TSTATE_ICARRY), %g2 1649 mov %ulo(TSTATE_XCARRY | TSTATE_ICARRY), %g2
1625 sllx %g2, 32, %g2 1650 sllx %g2, 32, %g2
1626 1651
1627 /* Check if force_successful_syscall_return() 1652 /* Check if force_successful_syscall_return()
1628 * was invoked. 1653 * was invoked.
1629 */ 1654 */
1630 ldub [%curptr + TI_SYS_NOERROR], %l2 1655 ldub [%curptr + TI_SYS_NOERROR], %l2
1631 brnz,a,pn %l2, 80f 1656 brnz,a,pn %l2, 80f
1632 stb %g0, [%curptr + TI_SYS_NOERROR] 1657 stb %g0, [%curptr + TI_SYS_NOERROR]
1633 1658
1634 cmp %o0, -ERESTART_RESTARTBLOCK 1659 cmp %o0, -ERESTART_RESTARTBLOCK
1635 bgeu,pn %xcc, 1f 1660 bgeu,pn %xcc, 1f
1636 andcc %l0, (_TIF_SYSCALL_TRACE|_TIF_SECCOMP|_TIF_SYSCALL_AUDIT), %l6 1661 andcc %l0, (_TIF_SYSCALL_TRACE|_TIF_SECCOMP|_TIF_SYSCALL_AUDIT), %l6
1637 80: 1662 80:
1638 /* System call success, clear Carry condition code. */ 1663 /* System call success, clear Carry condition code. */
1639 andn %g3, %g2, %g3 1664 andn %g3, %g2, %g3
1640 stx %g3, [%sp + PTREGS_OFF + PT_V9_TSTATE] 1665 stx %g3, [%sp + PTREGS_OFF + PT_V9_TSTATE]
1641 bne,pn %icc, linux_syscall_trace2 1666 bne,pn %icc, linux_syscall_trace2
1642 add %l1, 0x4, %l2 ! npc = npc+4 1667 add %l1, 0x4, %l2 ! npc = npc+4
1643 stx %l1, [%sp + PTREGS_OFF + PT_V9_TPC] 1668 stx %l1, [%sp + PTREGS_OFF + PT_V9_TPC]
1644 ba,pt %xcc, rtrap_clr_l6 1669 ba,pt %xcc, rtrap_clr_l6
1645 stx %l2, [%sp + PTREGS_OFF + PT_V9_TNPC] 1670 stx %l2, [%sp + PTREGS_OFF + PT_V9_TNPC]
1646 1671
1647 1: 1672 1:
1648 /* System call failure, set Carry condition code. 1673 /* System call failure, set Carry condition code.
1649 * Also, get abs(errno) to return to the process. 1674 * Also, get abs(errno) to return to the process.
1650 */ 1675 */
1651 andcc %l0, (_TIF_SYSCALL_TRACE|_TIF_SECCOMP|_TIF_SYSCALL_AUDIT), %l6 1676 andcc %l0, (_TIF_SYSCALL_TRACE|_TIF_SECCOMP|_TIF_SYSCALL_AUDIT), %l6
1652 sub %g0, %o0, %o0 1677 sub %g0, %o0, %o0
1653 or %g3, %g2, %g3 1678 or %g3, %g2, %g3
1654 stx %o0, [%sp + PTREGS_OFF + PT_V9_I0] 1679 stx %o0, [%sp + PTREGS_OFF + PT_V9_I0]
1655 mov 1, %l6 1680 mov 1, %l6
1656 stx %g3, [%sp + PTREGS_OFF + PT_V9_TSTATE] 1681 stx %g3, [%sp + PTREGS_OFF + PT_V9_TSTATE]
1657 bne,pn %icc, linux_syscall_trace2 1682 bne,pn %icc, linux_syscall_trace2
1658 add %l1, 0x4, %l2 ! npc = npc+4 1683 add %l1, 0x4, %l2 ! npc = npc+4
1659 stx %l1, [%sp + PTREGS_OFF + PT_V9_TPC] 1684 stx %l1, [%sp + PTREGS_OFF + PT_V9_TPC]
1660 1685
1661 b,pt %xcc, rtrap 1686 b,pt %xcc, rtrap
1662 stx %l2, [%sp + PTREGS_OFF + PT_V9_TNPC] 1687 stx %l2, [%sp + PTREGS_OFF + PT_V9_TNPC]
1663 linux_syscall_trace2: 1688 linux_syscall_trace2:
1664 add %sp, PTREGS_OFF, %o0 1689 add %sp, PTREGS_OFF, %o0
1665 call syscall_trace 1690 call syscall_trace
1666 mov 1, %o1 1691 mov 1, %o1
1667 stx %l1, [%sp + PTREGS_OFF + PT_V9_TPC] 1692 stx %l1, [%sp + PTREGS_OFF + PT_V9_TPC]
1668 ba,pt %xcc, rtrap 1693 ba,pt %xcc, rtrap
1669 stx %l2, [%sp + PTREGS_OFF + PT_V9_TNPC] 1694 stx %l2, [%sp + PTREGS_OFF + PT_V9_TNPC]
1670 1695
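The error path in `ret_sys_call` above encodes failure in the condition codes: it sets the XCARRY/ICARRY bits in TSTATE and returns `abs(errno)` in `%o0` (via `sub %g0, %o0, %o0`); the success path clears carry and returns the value directly. Userspace libc inverts this back into the familiar `-1`/`errno` form. A model of both sides (the `-516` cutoff corresponds to `-ERESTART_RESTARTBLOCK`; the exact constant is assumed here):

```c
#include <assert.h>
#include <stdint.h>

/* carry set => o0 holds a positive errno; carry clear => o0 is the result. */
struct sys_ret { int64_t o0; int carry; };

static struct sys_ret kernel_return(int64_t rv)
{
    struct sys_ret r;
    if (rv < 0 && rv >= -516) { /* cmp %o0, -ERESTART_RESTARTBLOCK; bgeu (assumed value) */
        r.o0 = -rv;             /* abs(errno), as sub %g0, %o0, %o0 */
        r.carry = 1;            /* or %g3, %g2, %g3 sets XCARRY|ICARRY */
    } else {
        r.o0 = rv;
        r.carry = 0;            /* andn %g3, %g2, %g3 clears them */
    }
    return r;
}

/* What a libc syscall stub does with the returned state. */
static int64_t libc_wrap(struct sys_ret r, int *errno_out)
{
    if (r.carry) {
        *errno_out = (int)r.o0;
        return -1;
    }
    return r.o0;
}
```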
1671 .align 32 1696 .align 32
1672 .globl __flushw_user 1697 .globl __flushw_user
1673 __flushw_user: 1698 __flushw_user:
1674 rdpr %otherwin, %g1 1699 rdpr %otherwin, %g1
1675 brz,pn %g1, 2f 1700 brz,pn %g1, 2f
1676 clr %g2 1701 clr %g2
1677 1: save %sp, -128, %sp 1702 1: save %sp, -128, %sp
1678 rdpr %otherwin, %g1 1703 rdpr %otherwin, %g1
1679 brnz,pt %g1, 1b 1704 brnz,pt %g1, 1b
1680 add %g2, 1, %g2 1705 add %g2, 1, %g2
1681 1: sub %g2, 1, %g2 1706 1: sub %g2, 1, %g2
1682 brnz,pt %g2, 1b 1707 brnz,pt %g2, 1b
1683 restore %g0, %g0, %g0 1708 restore %g0, %g0, %g0
1684 2: retl 1709 2: retl
1685 nop 1710 nop
1686 1711
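`__flushw_user` above spills all user register windows by executing `save` until `%otherwin` reaches zero, counting each step in `%g2`, then unwinds with the same number of `restore`s so it ends up back in the window it started from. A counter-based C model of that loop structure:

```c
#include <assert.h>

/* Model of __flushw_user: each save spills one user window and drops
 * otherwin by one; g2 counts the saves so the restores can rewind. */
static int flushw_user_model(int otherwin)
{
    int depth = 0, g2 = 0;

    while (otherwin > 0) {  /* 1: save %sp, -128, %sp ; brnz %otherwin, 1b */
        otherwin--;
        depth++;
        g2++;               /* add %g2, 1, %g2 in the delay slot */
    }
    while (g2 > 0) {        /* 1: sub %g2, 1, %g2 ; brnz %g2, 1b ; restore */
        g2--;
        depth--;
    }
    return depth;           /* 0: back at the original window */
}
```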
1687 #ifdef CONFIG_SMP 1712 #ifdef CONFIG_SMP
1688 .globl hard_smp_processor_id 1713 .globl hard_smp_processor_id
1689 hard_smp_processor_id: 1714 hard_smp_processor_id:
1690 #endif 1715 #endif
1691 .globl real_hard_smp_processor_id 1716 .globl real_hard_smp_processor_id
1692 real_hard_smp_processor_id: 1717 real_hard_smp_processor_id:
1693 __GET_CPUID(%o0) 1718 __GET_CPUID(%o0)
1694 retl 1719 retl
1695 nop 1720 nop
1696 1721
1697 /* %o0: devhandle 1722 /* %o0: devhandle
1698 * %o1: devino 1723 * %o1: devino
1699 * 1724 *
1700 * returns %o0: sysino 1725 * returns %o0: sysino
1701 */ 1726 */
1702 .globl sun4v_devino_to_sysino 1727 .globl sun4v_devino_to_sysino
1703 sun4v_devino_to_sysino: 1728 sun4v_devino_to_sysino:
1704 mov HV_FAST_INTR_DEVINO2SYSINO, %o5 1729 mov HV_FAST_INTR_DEVINO2SYSINO, %o5
1705 ta HV_FAST_TRAP 1730 ta HV_FAST_TRAP
1706 retl 1731 retl
1707 mov %o1, %o0 1732 mov %o1, %o0
1708 1733
1709 /* %o0: sysino 1734 /* %o0: sysino
1710 * 1735 *
1711 * returns %o0: intr_enabled (HV_INTR_{DISABLED,ENABLED}) 1736 * returns %o0: intr_enabled (HV_INTR_{DISABLED,ENABLED})
1712 */ 1737 */
1713 .globl sun4v_intr_getenabled 1738 .globl sun4v_intr_getenabled
1714 sun4v_intr_getenabled: 1739 sun4v_intr_getenabled:
1715 mov HV_FAST_INTR_GETENABLED, %o5 1740 mov HV_FAST_INTR_GETENABLED, %o5
1716 ta HV_FAST_TRAP 1741 ta HV_FAST_TRAP
1717 retl 1742 retl
1718 mov %o1, %o0 1743 mov %o1, %o0
1719 1744
1720 /* %o0: sysino 1745 /* %o0: sysino
1721 * %o1: intr_enabled (HV_INTR_{DISABLED,ENABLED}) 1746 * %o1: intr_enabled (HV_INTR_{DISABLED,ENABLED})
1722 */ 1747 */
1723 .globl sun4v_intr_setenabled 1748 .globl sun4v_intr_setenabled
1724 sun4v_intr_setenabled: 1749 sun4v_intr_setenabled:
1725 mov HV_FAST_INTR_SETENABLED, %o5 1750 mov HV_FAST_INTR_SETENABLED, %o5
1726 ta HV_FAST_TRAP 1751 ta HV_FAST_TRAP
1727 retl 1752 retl
1728 nop 1753 nop
1729 1754
1730 /* %o0: sysino 1755 /* %o0: sysino
1731 * 1756 *
1732 * returns %o0: intr_state (HV_INTR_STATE_*) 1757 * returns %o0: intr_state (HV_INTR_STATE_*)
1733 */ 1758 */
1734 .globl sun4v_intr_getstate 1759 .globl sun4v_intr_getstate
1735 sun4v_intr_getstate: 1760 sun4v_intr_getstate:
1736 mov HV_FAST_INTR_GETSTATE, %o5 1761 mov HV_FAST_INTR_GETSTATE, %o5
1737 ta HV_FAST_TRAP 1762 ta HV_FAST_TRAP
1738 retl 1763 retl
1739 mov %o1, %o0 1764 mov %o1, %o0
1740 1765
1741 /* %o0: sysino 1766 /* %o0: sysino
1742 * %o1: intr_state (HV_INTR_STATE_*) 1767 * %o1: intr_state (HV_INTR_STATE_*)
1743 */ 1768 */
1744 .globl sun4v_intr_setstate 1769 .globl sun4v_intr_setstate
1745 sun4v_intr_setstate: 1770 sun4v_intr_setstate:
1746 mov HV_FAST_INTR_SETSTATE, %o5 1771 mov HV_FAST_INTR_SETSTATE, %o5
1747 ta HV_FAST_TRAP 1772 ta HV_FAST_TRAP
1748 retl 1773 retl
1749 nop 1774 nop
1750 1775
1751 /* %o0: sysino 1776 /* %o0: sysino
1752 * 1777 *
1753 * returns %o0: cpuid 1778 * returns %o0: cpuid
1754 */ 1779 */
1755 .globl sun4v_intr_gettarget 1780 .globl sun4v_intr_gettarget
1756 sun4v_intr_gettarget: 1781 sun4v_intr_gettarget:
1757 mov HV_FAST_INTR_GETTARGET, %o5 1782 mov HV_FAST_INTR_GETTARGET, %o5
1758 ta HV_FAST_TRAP 1783 ta HV_FAST_TRAP
1759 retl 1784 retl
1760 mov %o1, %o0 1785 mov %o1, %o0
1761 1786
1762 /* %o0: sysino 1787 /* %o0: sysino
1763 * %o1: cpuid 1788 * %o1: cpuid
1764 */ 1789 */
1765 .globl sun4v_intr_settarget 1790 .globl sun4v_intr_settarget
1766 sun4v_intr_settarget: 1791 sun4v_intr_settarget:
1767 mov HV_FAST_INTR_SETTARGET, %o5 1792 mov HV_FAST_INTR_SETTARGET, %o5
1768 ta HV_FAST_TRAP 1793 ta HV_FAST_TRAP
1794 	retl
1795 	 nop
1796 
1797 	/* %o0:	type
1798 	 * %o1:	queue paddr
1799 	 * %o2:	num queue entries
1800 	 *
1801 	 * returns %o0:	status
1802 	 */
1803 	.globl	sun4v_cpu_qconf
1804 sun4v_cpu_qconf:
1805 	mov	HV_FAST_CPU_QCONF, %o5
1806 	ta	HV_FAST_TRAP
1807 	retl
1808 	 nop
1809 
1810 	/* returns %o0:	status
1811 	 */
1812 	.globl	sun4v_cpu_yield
1813 sun4v_cpu_yield:
1814 	mov	HV_FAST_CPU_YIELD, %o5
1815 	ta	HV_FAST_TRAP
1816 	retl
1817 	 nop
1818 
1819 	/* %o0:	num cpus in cpu list
1820 	 * %o1:	cpu list paddr
1821 	 * %o2:	mondo block paddr
1822 	 *
1823 	 * returns %o0:	status
1824 	 */
1825 	.globl	sun4v_cpu_mondo_send
1826 sun4v_cpu_mondo_send:
1827 	mov	HV_FAST_CPU_MONDO_SEND, %o5
1828 	ta	HV_FAST_TRAP
1829 	retl
1830 	 nop
1831 
1832 	/* %o0:	CPU ID
1833 	 *
1834 	 * returns %o0:	-status if status non-zero, else
1835 	 *         %o0:	cpu state as HV_CPU_STATE_*
1836 	 */
1837 	.globl	sun4v_cpu_state
1838 sun4v_cpu_state:
1839 	mov	HV_FAST_CPU_STATE, %o5
1840 	ta	HV_FAST_TRAP
1841 	brnz,pn	%o0, 1f
1842 	 sub	%g0, %o0, %o0
1843 	mov	%o1, %o0
1844 1:	retl
1845 	 nop
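The sun4v_cpu_state stub above folds the hypervisor's two return registers into one C-visible return value: the delay-slot `sub %g0, %o0, %o0` negates the status, and only when the status was zero does `mov %o1, %o0` substitute the CPU state. A minimal C sketch of that convention (the function and argument names here are illustrative, not kernel symbols; `status` models %o0 after the trap, `state` models %o1):

```c
#include <assert.h>

#define HV_EOK 0  /* hypervisor "no error" status */

/* Model of the stub's epilogue: negative status on failure,
 * otherwise the HV_CPU_STATE_* value from the second return reg. */
static long cpu_state_call_result(long status, long state)
{
    if (status != HV_EOK)
        return -status;   /* sub %g0, %o0, %o0 */
    return state;         /* mov %o1, %o0 */
}
```

Callers can then test `< 0` for errors without a separate status out-parameter, which is why the stub bothers with the negation.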
1846 
arch/sparc64/kernel/head.S
1 /* $Id: head.S,v 1.87 2002/02/09 19:49:31 davem Exp $
2  * head.S: Initial boot code for the Sparc64 port of Linux.
3  *
4  * Copyright (C) 1996,1997 David S. Miller (davem@caip.rutgers.edu)
5  * Copyright (C) 1996 David Sitsky (David.Sitsky@anu.edu.au)
6  * Copyright (C) 1997,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
7  * Copyright (C) 1997 Miguel de Icaza (miguel@nuclecu.unam.mx)
8  */
9 
10 #include <linux/version.h>
11 #include <linux/errno.h>
12 #include <linux/threads.h>
13 #include <asm/thread_info.h>
14 #include <asm/asi.h>
15 #include <asm/pstate.h>
16 #include <asm/ptrace.h>
17 #include <asm/spitfire.h>
18 #include <asm/page.h>
19 #include <asm/pgtable.h>
20 #include <asm/errno.h>
21 #include <asm/signal.h>
22 #include <asm/processor.h>
23 #include <asm/lsu.h>
24 #include <asm/dcr.h>
25 #include <asm/dcu.h>
26 #include <asm/head.h>
27 #include <asm/ttable.h>
28 #include <asm/mmu.h>
29 #include <asm/cpudata.h>
30 
31 /* This section from _start to sparc64_boot_end should fit into
32  * 0x0000000000404000 to 0x0000000000408000.
33  */
34 	.text
35 	.globl	start, _start, stext, _stext
36 _start:
37 start:
38 _stext:
39 stext:
40 ! 0x0000000000404000
41 	b	sparc64_boot
42 	 flushw			/* Flush register file. */
43 
44 /* This stuff has to be in sync with SILO and other potential boot loaders
45  * Fields should be kept upward compatible and whenever any change is made,
46  * HdrS version should be incremented.
47  */
48 	.global	root_flags, ram_flags, root_dev
49 	.global	sparc_ramdisk_image, sparc_ramdisk_size
50 	.global	sparc_ramdisk_image64
51 
52 	.ascii	"HdrS"
53 	.word	LINUX_VERSION_CODE
54 
55 /* History:
56  *
57  * 0x0300 : Supports being located at other than 0x4000
58  * 0x0202 : Supports kernel params string
59  * 0x0201 : Supports reboot_command
60  */
61 	.half	0x0301		/* HdrS version */
62 
63 root_flags:
64 	.half	1
65 root_dev:
66 	.half	0
67 ram_flags:
68 	.half	0
69 sparc_ramdisk_image:
70 	.word	0
71 sparc_ramdisk_size:
72 	.word	0
73 	.xword	reboot_command
74 	.xword	bootstr_info
75 sparc_ramdisk_image64:
76 	.xword	0
77 	.word	_end
78 
79 /* PROM cif handler code address is in %o4. */
80 sparc64_boot:
81 1:	rd	%pc, %g7
82 	set	1b, %g1
83 	cmp	%g1, %g7
84 	be,pn	%xcc, sparc64_boot_after_remap
85 	 mov	%o4, %l7
86 
87 	/* We need to remap the kernel.  Use position independent
88 	 * code to remap us to KERNBASE.
89 	 *
90 	 * SILO can invoke us with 32-bit address masking enabled,
91 	 * so make sure that's clear.
92 	 */
93 	rdpr	%pstate, %g1
94 	andn	%g1, PSTATE_AM, %g1
95 	wrpr	%g1, 0x0, %pstate
96 	ba,a,pt	%xcc, 1f
97 
98 	.globl	prom_finddev_name, prom_chosen_path, prom_root_node
99 	.globl	prom_getprop_name, prom_mmu_name, prom_peer_name
100 	.globl	prom_callmethod_name, prom_translate_name, prom_root_compatible
101 	.globl	prom_map_name, prom_unmap_name, prom_mmu_ihandle_cache
102 	.globl	prom_boot_mapped_pc, prom_boot_mapping_mode
103 	.globl	prom_boot_mapping_phys_high, prom_boot_mapping_phys_low
104 	.globl	is_sun4v
105 prom_peer_name:
106 	.asciz	"peer"
107 prom_compatible_name:
108 	.asciz	"compatible"
109 prom_finddev_name:
110 	.asciz	"finddevice"
111 prom_chosen_path:
112 	.asciz	"/chosen"
113 prom_getprop_name:
114 	.asciz	"getprop"
115 prom_mmu_name:
116 	.asciz	"mmu"
117 prom_callmethod_name:
118 	.asciz	"call-method"
119 prom_translate_name:
120 	.asciz	"translate"
121 prom_map_name:
122 	.asciz	"map"
123 prom_unmap_name:
124 	.asciz	"unmap"
125 prom_sun4v_name:
126 	.asciz	"sun4v"
127 	.align	4
128 prom_root_compatible:
129 	.skip	64
130 prom_root_node:
131 	.word	0
132 prom_mmu_ihandle_cache:
133 	.word	0
134 prom_boot_mapped_pc:
135 	.word	0
136 prom_boot_mapping_mode:
137 	.word	0
138 	.align	8
139 prom_boot_mapping_phys_high:
140 	.xword	0
141 prom_boot_mapping_phys_low:
142 	.xword	0
143 is_sun4v:
144 	.word	0
145 1:
146 	rd	%pc, %l0
147 
148 	mov	(1b - prom_peer_name), %l1
149 	sub	%l0, %l1, %l1
150 	mov	0, %l2
151 
152 	/* prom_root_node = prom_peer(0) */
153 	stx	%l1, [%sp + 2047 + 128 + 0x00]	! service, "peer"
154 	mov	1, %l3
155 	stx	%l3, [%sp + 2047 + 128 + 0x08]	! num_args, 1
156 	stx	%l3, [%sp + 2047 + 128 + 0x10]	! num_rets, 1
157 	stx	%l2, [%sp + 2047 + 128 + 0x18]	! arg1, 0
158 	stx	%g0, [%sp + 2047 + 128 + 0x20]	! ret1
159 	call	%l7
160 	 add	%sp, (2047 + 128), %o0	! argument array
161 
162 	ldx	[%sp + 2047 + 128 + 0x20], %l4	! prom root node
163 	mov	(1b - prom_root_node), %l1
164 	sub	%l0, %l1, %l1
165 	stw	%l4, [%l1]
166 
167 	mov	(1b - prom_getprop_name), %l1
168 	mov	(1b - prom_compatible_name), %l2
169 	mov	(1b - prom_root_compatible), %l5
170 	sub	%l0, %l1, %l1
171 	sub	%l0, %l2, %l2
172 	sub	%l0, %l5, %l5
173 
174 	/* prom_getproperty(prom_root_node, "compatible",
175 	 *                  &prom_root_compatible, 64)
176 	 */
177 	stx	%l1, [%sp + 2047 + 128 + 0x00]	! service, "getprop"
178 	mov	4, %l3
179 	stx	%l3, [%sp + 2047 + 128 + 0x08]	! num_args, 4
180 	mov	1, %l3
181 	stx	%l3, [%sp + 2047 + 128 + 0x10]	! num_rets, 1
182 	stx	%l4, [%sp + 2047 + 128 + 0x18]	! arg1, prom_root_node
183 	stx	%l2, [%sp + 2047 + 128 + 0x20]	! arg2, "compatible"
184 	stx	%l5, [%sp + 2047 + 128 + 0x28]	! arg3, &prom_root_compatible
185 	mov	64, %l3
186 	stx	%l3, [%sp + 2047 + 128 + 0x30]	! arg4, size
187 	stx	%g0, [%sp + 2047 + 128 + 0x38]	! ret1
188 	call	%l7
189 	 add	%sp, (2047 + 128), %o0	! argument array
190 
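Every PROM call in this file builds the same in-memory argument array at `%sp + 2047 + 128` (the sparc64 stack bias plus the register save area): a pointer to the service name, the argument count, the return count, then the argument and return slots, one 64-bit cell each at the 0x00/0x08/0x10/... offsets the stores use. A C view of that layout (the struct and field names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative layout of the Open Firmware client-interface
 * argument array built by the assembly above: one 64-bit cell
 * per slot, matching the stxa/stx offsets. */
struct p1275_args {
    unsigned long service;   /* 0x00: pointer to e.g. "getprop" */
    unsigned long num_args;  /* 0x08 */
    unsigned long num_rets;  /* 0x10 */
    unsigned long slots[5];  /* 0x18...: args, then return slots */
};
```

On an LP64 target the field offsets line up with the literal offsets in the assembly, which is what makes the two views interchangeable.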
191 	mov	(1b - prom_finddev_name), %l1
192 	mov	(1b - prom_chosen_path), %l2
193 	mov	(1b - prom_boot_mapped_pc), %l3
194 	sub	%l0, %l1, %l1
195 	sub	%l0, %l2, %l2
196 	sub	%l0, %l3, %l3
197 	stw	%l0, [%l3]
198 	sub	%sp, (192 + 128), %sp
199 
200 	/* chosen_node = prom_finddevice("/chosen") */
201 	stx	%l1, [%sp + 2047 + 128 + 0x00]	! service, "finddevice"
202 	mov	1, %l3
203 	stx	%l3, [%sp + 2047 + 128 + 0x08]	! num_args, 1
204 	stx	%l3, [%sp + 2047 + 128 + 0x10]	! num_rets, 1
205 	stx	%l2, [%sp + 2047 + 128 + 0x18]	! arg1, "/chosen"
206 	stx	%g0, [%sp + 2047 + 128 + 0x20]	! ret1
207 	call	%l7
208 	 add	%sp, (2047 + 128), %o0	! argument array
209 
210 	ldx	[%sp + 2047 + 128 + 0x20], %l4	! chosen device node
211 
212 	mov	(1b - prom_getprop_name), %l1
213 	mov	(1b - prom_mmu_name), %l2
214 	mov	(1b - prom_mmu_ihandle_cache), %l5
215 	sub	%l0, %l1, %l1
216 	sub	%l0, %l2, %l2
217 	sub	%l0, %l5, %l5
218 
219 	/* prom_mmu_ihandle_cache = prom_getint(chosen_node, "mmu") */
220 	stx	%l1, [%sp + 2047 + 128 + 0x00]	! service, "getprop"
221 	mov	4, %l3
222 	stx	%l3, [%sp + 2047 + 128 + 0x08]	! num_args, 4
223 	mov	1, %l3
224 	stx	%l3, [%sp + 2047 + 128 + 0x10]	! num_rets, 1
225 	stx	%l4, [%sp + 2047 + 128 + 0x18]	! arg1, chosen_node
226 	stx	%l2, [%sp + 2047 + 128 + 0x20]	! arg2, "mmu"
227 	stx	%l5, [%sp + 2047 + 128 + 0x28]	! arg3, &prom_mmu_ihandle_cache
228 	mov	4, %l3
229 	stx	%l3, [%sp + 2047 + 128 + 0x30]	! arg4, sizeof(arg3)
230 	stx	%g0, [%sp + 2047 + 128 + 0x38]	! ret1
231 	call	%l7
232 	 add	%sp, (2047 + 128), %o0	! argument array
233 
234 	mov	(1b - prom_callmethod_name), %l1
235 	mov	(1b - prom_translate_name), %l2
236 	sub	%l0, %l1, %l1
237 	sub	%l0, %l2, %l2
238 	lduw	[%l5], %l5		! prom_mmu_ihandle_cache
239 
240 	stx	%l1, [%sp + 2047 + 128 + 0x00]	! service, "call-method"
241 	mov	3, %l3
242 	stx	%l3, [%sp + 2047 + 128 + 0x08]	! num_args, 3
243 	mov	5, %l3
244 	stx	%l3, [%sp + 2047 + 128 + 0x10]	! num_rets, 5
245 	stx	%l2, [%sp + 2047 + 128 + 0x18]	! arg1: "translate"
246 	stx	%l5, [%sp + 2047 + 128 + 0x20]	! arg2: prom_mmu_ihandle_cache
247 	/* PAGE align */
248 	srlx	%l0, 13, %l3
249 	sllx	%l3, 13, %l3
250 	stx	%l3, [%sp + 2047 + 128 + 0x28]	! arg3: vaddr, our PC
251 	stx	%g0, [%sp + 2047 + 128 + 0x30]	! res1
252 	stx	%g0, [%sp + 2047 + 128 + 0x38]	! res2
253 	stx	%g0, [%sp + 2047 + 128 + 0x40]	! res3
254 	stx	%g0, [%sp + 2047 + 128 + 0x48]	! res4
255 	stx	%g0, [%sp + 2047 + 128 + 0x50]	! res5
256 	call	%l7
257 	 add	%sp, (2047 + 128), %o0	! argument array
258 
259 	ldx	[%sp + 2047 + 128 + 0x40], %l1	! translation mode
260 	mov	(1b - prom_boot_mapping_mode), %l4
261 	sub	%l0, %l4, %l4
262 	stw	%l1, [%l4]
263 	mov	(1b - prom_boot_mapping_phys_high), %l4
264 	sub	%l0, %l4, %l4
265 	ldx	[%sp + 2047 + 128 + 0x48], %l2	! physaddr high
266 	stx	%l2, [%l4 + 0x0]
267 	ldx	[%sp + 2047 + 128 + 0x50], %l3	! physaddr low
268 	/* 4MB align */
269 	srlx	%l3, 22, %l3
270 	sllx	%l3, 22, %l3
271 	stx	%l3, [%l4 + 0x8]
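The `srlx`/`sllx` pairs above implement alignment by shifting: right then left by the same count discards the low bits, rounding an address down to a page (13 bits) or 4 MB (22 bits) boundary. That is equivalent to masking, as this small C illustration checks:

```c
#include <assert.h>
#include <stdint.h>

/* Round an address down to a 4 MB (1 << 22) boundary, the way the
 * assembly's srlx/sllx by 22 does. */
static uint64_t align_4mb(uint64_t paddr)
{
    return (paddr >> 22) << 22;  /* same as paddr & ~((1ULL<<22)-1) */
}
```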
272 
273 	/* Leave service as-is, "call-method" */
274 	mov	7, %l3
275 	stx	%l3, [%sp + 2047 + 128 + 0x08]	! num_args, 7
276 	mov	1, %l3
277 	stx	%l3, [%sp + 2047 + 128 + 0x10]	! num_rets, 1
278 	mov	(1b - prom_map_name), %l3
279 	sub	%l0, %l3, %l3
280 	stx	%l3, [%sp + 2047 + 128 + 0x18]	! arg1: "map"
281 	/* Leave arg2 as-is, prom_mmu_ihandle_cache */
282 	mov	-1, %l3
283 	stx	%l3, [%sp + 2047 + 128 + 0x28]	! arg3: mode (-1 default)
284 	sethi	%hi(8 * 1024 * 1024), %l3
285 	stx	%l3, [%sp + 2047 + 128 + 0x30]	! arg4: size (8MB)
286 	sethi	%hi(KERNBASE), %l3
287 	stx	%l3, [%sp + 2047 + 128 + 0x38]	! arg5: vaddr (KERNBASE)
288 	stx	%g0, [%sp + 2047 + 128 + 0x40]	! arg6: empty
289 	mov	(1b - prom_boot_mapping_phys_low), %l3
290 	sub	%l0, %l3, %l3
291 	ldx	[%l3], %l3
292 	stx	%l3, [%sp + 2047 + 128 + 0x48]	! arg7: phys addr
293 	call	%l7
294 	 add	%sp, (2047 + 128), %o0	! argument array
295 
296 	add	%sp, (192 + 128), %sp
297 
298 sparc64_boot_after_remap:
299 	sethi	%hi(prom_root_compatible), %g1
300 	or	%g1, %lo(prom_root_compatible), %g1
301 	sethi	%hi(prom_sun4v_name), %g7
302 	or	%g7, %lo(prom_sun4v_name), %g7
303 	mov	5, %g3
304 1:	ldub	[%g7], %g2
305 	ldub	[%g1], %g4
306 	cmp	%g2, %g4
307 	bne,pn	%icc, 2f
308 	 add	%g7, 1, %g7
309 	subcc	%g3, 1, %g3
310 	bne,pt	%xcc, 1b
311 	 add	%g1, 1, %g1
312 
313 	sethi	%hi(is_sun4v), %g1
314 	or	%g1, %lo(is_sun4v), %g1
315 	mov	1, %g7
316 	stw	%g7, [%g1]
317 
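The byte-compare loop above is a fixed-length string match: it walks the first five bytes of the root node's "compatible" property against "sun4v" and sets is_sun4v only if all five match. In C the same test is a single memcmp (function name here is illustrative):

```c
#include <assert.h>
#include <string.h>

/* Mirrors the five-byte compare loop: nonzero if the root node's
 * "compatible" string starts with "sun4v". */
static int root_is_sun4v(const char *compatible)
{
    return memcmp(compatible, "sun4v", 5) == 0;
}
```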
318 2:
319 	BRANCH_IF_SUN4V(g1, jump_to_sun4u_init)
320 	BRANCH_IF_CHEETAH_BASE(g1,g7,cheetah_boot)
321 	BRANCH_IF_CHEETAH_PLUS_OR_FOLLOWON(g1,g7,cheetah_plus_boot)
322 	ba,pt	%xcc, spitfire_boot
323 	 nop
324 
325 cheetah_plus_boot:
326 	/* Preserve OBP chosen DCU and DCR register settings. */
327 	ba,pt	%xcc, cheetah_generic_boot
328 	 nop
329 
330 cheetah_boot:
331 	mov	DCR_BPE | DCR_RPE | DCR_SI | DCR_IFPOE | DCR_MS, %g1
332 	wr	%g1, %asr18
333 
334 	sethi	%uhi(DCU_ME|DCU_RE|DCU_HPE|DCU_SPE|DCU_SL|DCU_WE), %g7
335 	or	%g7, %ulo(DCU_ME|DCU_RE|DCU_HPE|DCU_SPE|DCU_SL|DCU_WE), %g7
336 	sllx	%g7, 32, %g7
337 	or	%g7, DCU_DM | DCU_IM | DCU_DC | DCU_IC, %g7
338 	stxa	%g7, [%g0] ASI_DCU_CONTROL_REG
339 	membar	#Sync
340 
341 cheetah_generic_boot:
342 	mov	TSB_EXTENSION_P, %g3
343 	stxa	%g0, [%g3] ASI_DMMU
344 	stxa	%g0, [%g3] ASI_IMMU
345 	membar	#Sync
346 
347 	mov	TSB_EXTENSION_S, %g3
348 	stxa	%g0, [%g3] ASI_DMMU
349 	membar	#Sync
350 
351 	mov	TSB_EXTENSION_N, %g3
352 	stxa	%g0, [%g3] ASI_DMMU
353 	stxa	%g0, [%g3] ASI_IMMU
354 	membar	#Sync
355 
356 	ba,a,pt	%xcc, jump_to_sun4u_init
357 
358 spitfire_boot:
359 	/* Typically PROM has already enabled both MMU's and both on-chip
360 	 * caches, but we do it here anyway just to be paranoid.
361 	 */
362 	mov	(LSU_CONTROL_IC|LSU_CONTROL_DC|LSU_CONTROL_IM|LSU_CONTROL_DM), %g1
363 	stxa	%g1, [%g0] ASI_LSU_CONTROL
364 	membar	#Sync
365 
366 jump_to_sun4u_init:
367 	/*
368 	 * Make sure we are in privileged mode, have address masking,
369 	 * using the ordinary globals and have enabled floating
370 	 * point.
371 	 *
372 	 * Again, typically PROM has left %pil at 13 or similar, and
373 	 * (PSTATE_PRIV | PSTATE_PEF | PSTATE_IE) in %pstate.
374 	 */
375 	wrpr	%g0, (PSTATE_PRIV|PSTATE_PEF|PSTATE_IE), %pstate
376 	wr	%g0, 0, %fprs
377 
378 	set	sun4u_init, %g2
379 	jmpl	%g2 + %g0, %g0
380 	 nop
381 
382 sun4u_init:
383 	BRANCH_IF_SUN4V(g1, sun4v_init)
384 
385 	/* Set ctx 0 */
386 	mov	PRIMARY_CONTEXT, %g7
387 	stxa	%g0, [%g7] ASI_DMMU
388 	membar	#Sync
389 
390 	mov	SECONDARY_CONTEXT, %g7
391 	stxa	%g0, [%g7] ASI_DMMU
392 	membar	#Sync
393 
394 	ba,pt	%xcc, sun4u_continue
395 	 nop
396 
397 sun4v_init:
398 	/* Set ctx 0 */
399 	mov	PRIMARY_CONTEXT, %g7
400 	stxa	%g0, [%g7] ASI_MMU
401 	membar	#Sync
402 
403 	mov	SECONDARY_CONTEXT, %g7
404 	stxa	%g0, [%g7] ASI_MMU
405 	membar	#Sync
406 	ba,pt	%xcc, niagara_tlb_fixup
407 	 nop
408 
409 sun4u_continue:
410 	BRANCH_IF_ANY_CHEETAH(g1, g7, cheetah_tlb_fixup)
411 
412 	ba,pt	%xcc, spitfire_tlb_fixup
413 	 nop
414 
415 niagara_tlb_fixup:
416 	mov	3, %g2		/* Set TLB type to hypervisor. */
417 	sethi	%hi(tlb_type), %g1
418 	stw	%g2, [%g1 + %lo(tlb_type)]
419 
420 	/* Patch copy/clear ops. */
421 	call	niagara_patch_copyops
422 	 nop
423 	call	niagara_patch_bzero
424 	 nop
425 	call	niagara_patch_pageops
426 	 nop
427 
428 	/* Patch TLB/cache ops. */
429 	call	hypervisor_patch_cachetlbops
430 	 nop
431 
432 	ba,pt	%xcc, tlb_fixup_done
433 	 nop
434 
435 cheetah_tlb_fixup:
436 	mov	2, %g2		/* Set TLB type to cheetah+. */
437 	BRANCH_IF_CHEETAH_PLUS_OR_FOLLOWON(g1,g7,1f)
438 
439 	mov	1, %g2		/* Set TLB type to cheetah. */
440 
441 1:	sethi	%hi(tlb_type), %g1
442 	stw	%g2, [%g1 + %lo(tlb_type)]
443 
444 	/* Patch copy/page operations to cheetah optimized versions. */
445 	call	cheetah_patch_copyops
446 	 nop
447 	call	cheetah_patch_copy_page
448 	 nop
449 	call	cheetah_patch_cachetlbops
450 	 nop
451 
452 	ba,pt	%xcc, tlb_fixup_done
453 	 nop
454 
455 spitfire_tlb_fixup:
456 	/* Set TLB type to spitfire. */
457 	mov	0, %g2
458 	sethi	%hi(tlb_type), %g1
459 	stw	%g2, [%g1 + %lo(tlb_type)]
460 
461 tlb_fixup_done:
462 	sethi	%hi(init_thread_union), %g6
463 	or	%g6, %lo(init_thread_union), %g6
464 	ldx	[%g6 + TI_TASK], %g4
465 	mov	%sp, %l6
466 	mov	%o4, %l7
467 
468 	wr	%g0, ASI_P, %asi
469 	mov	1, %g1
470 	sllx	%g1, THREAD_SHIFT, %g1
471 	sub	%g1, (STACKFRAME_SZ + STACK_BIAS), %g1
472 	add	%g6, %g1, %sp
473 	mov	0, %fp
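The stack setup above places the initial kernel %sp at the top of init_thread_union: the thread area size (1 << THREAD_SHIFT) minus an initial frame and the sparc64 ABI's 2047-byte stack bias. A C sketch of the arithmetic, with assumed values for the two symbolic constants (a 16 KB thread union and a 192-byte minimal frame; the real THREAD_SHIFT and STACKFRAME_SZ come from the kernel headers):

```c
#include <assert.h>
#include <stdint.h>

#define STACK_BIAS    2047  /* sparc64 ABI stack pointer bias */
#define THREAD_SHIFT  14    /* assumed: 16 KB thread union */
#define STACKFRAME_SZ 192   /* assumed: minimal register save frame */

/* Mirror the assembly:
 * %sp = init_thread_union + (1 << THREAD_SHIFT)
 *                         - (STACKFRAME_SZ + STACK_BIAS) */
static uint64_t initial_sp(uint64_t thread_union_base)
{
    uint64_t off = ((uint64_t)1 << THREAD_SHIFT)
                   - (STACKFRAME_SZ + STACK_BIAS);
    return thread_union_base + off;
}
```

Because %sp is biased, every stack access in this file adds 2047 back, which is where the recurring `%sp + 2047 + 128` addressing comes from.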
474 
475 	/* Set per-cpu pointer initially to zero, this makes
476 	 * the boot-cpu use the in-kernel-image per-cpu areas
477 	 * before setup_per_cpu_area() is invoked.
478 	 */
479 	clr	%g5
480 
481 	wrpr	%g0, 0, %wstate
482 	wrpr	%g0, 0x0, %tl
483 
484 	/* Clear the bss */
485 	sethi	%hi(__bss_start), %o0
486 	or	%o0, %lo(__bss_start), %o0
487 	sethi	%hi(_end), %o1
488 	or	%o1, %lo(_end), %o1
489 	call	__bzero
490 	 sub	%o1, %o0, %o1
491 
492 #ifdef CONFIG_LOCKDEP
493 	/* We have to call this super early, as even prom_init can grab
494 	 * spinlocks and thus call into the lockdep code.
495 	 */
496 	call	lockdep_init
497 	 nop
498 #endif
499 
500 	mov	%l6, %o1	! OpenPROM stack
501 	call	prom_init
502 	 mov	%l7, %o0	! OpenPROM cif handler
503 
504 	/* Initialize current_thread_info()->cpu as early as possible.
505 	 * In order to do that accurately we have to patch up the get_cpuid()
506 	 * assembler sequences.  And that, in turn, requires that we know
507 	 * if we are on a Starfire box or not.  While we're here, patch up
508 	 * the sun4v sequences as well.
509 	 */
510 	call	check_if_starfire
511 	 nop
512 	call	per_cpu_patch
513 	 nop
514 	call	sun4v_patch
515 	 nop
516 
517 #ifdef CONFIG_SMP
518 	call	hard_smp_processor_id
519 	 nop
520 	cmp	%o0, NR_CPUS
521 	blu,pt	%xcc, 1f
522 	 nop
523 	call	boot_cpu_id_too_large
524 	 nop
525 	/* Not reached... */
526 
527 1:
528 #else
529 	mov	0, %o0
530 #endif
531 	stb	%o0, [%g6 + TI_CPU]
532 
533 	/* Off we go.... */
534 	call	start_kernel
535 	 nop
536 	/* Not reached... */
537 
538 	/* This is meant to allow the sharing of this code between
539 	 * boot processor invocation (via setup_tba() below) and
540 	 * secondary processor startup (via trampoline.S).  The
541 	 * former does use this code, the latter does not yet due
542 	 * to some complexities.  That should be fixed up at some
543 	 * point.
544 	 *
545 	 * There used to be enormous complexity wrt. transferring
546 	 * over from the firmware's trap table to the Linux kernel's.
547 	 * For example, there was a chicken & egg problem wrt. building
548 	 * the OBP page tables, yet needing to be on the Linux kernel
549 	 * trap table (to translate PAGE_OFFSET addresses) in order to
550 	 * do that.
551 	 *
552 	 * We now handle OBP tlb misses differently, via linear lookups
553 	 * into the prom_trans[] array.  So that specific problem no
554 	 * longer exists.  Yet, unfortunately there are still some issues
555 	 * preventing trampoline.S from using this code... ho hum.
556 	 */
557 	.globl	setup_trap_table
558 setup_trap_table:
559 	save	%sp, -192, %sp
560 
561 	/* Force interrupts to be disabled. */
562 	rdpr	%pstate, %l0
563 	andn	%l0, PSTATE_IE, %o1
564 	wrpr	%o1, 0x0, %pstate
565 	rdpr	%pil, %l1
566 	wrpr	%g0, 15, %pil
567 
568 	/* Make the firmware call to jump over to the Linux trap table. */
569 	sethi	%hi(is_sun4v), %o0
570 	lduw	[%o0 + %lo(is_sun4v)], %o0
571 	brz,pt	%o0, 1f
572 	 nop
573 
574 	TRAP_LOAD_TRAP_BLOCK(%g2, %g3)
575 	add	%g2, TRAP_PER_CPU_FAULT_INFO, %g2
576 	stxa	%g2, [%g0] ASI_SCRATCHPAD
577 
578 	/* Compute physical address:
579 	 *
580 	 * paddr = kern_base + (mmfsa_vaddr - KERNBASE)
581 	 */
582 	sethi	%hi(KERNBASE), %g3
583 	sub	%g2, %g3, %g2
584 	sethi	%hi(kern_base), %g3
585 	ldx	[%g3 + %lo(kern_base)], %g3
586 	add	%g2, %g3, %o1
587 
588 	call	prom_set_trap_table_sun4v
589 	 sethi	%hi(sparc64_ttable_tl0), %o0
590 
591 	ba,pt	%xcc, 2f
592 	 nop
593 
594 1:	call	prom_set_trap_table
595 	 sethi	%hi(sparc64_ttable_tl0), %o0
596 
597 	/* Start using proper page size encodings in ctx register. */
598 2:	sethi	%hi(sparc64_kern_pri_context), %g3
599 	ldx	[%g3 + %lo(sparc64_kern_pri_context)], %g2
600 
601 	mov	PRIMARY_CONTEXT, %g1
602 
603 661:	stxa	%g2, [%g1] ASI_DMMU
604 	.section	.sun4v_1insn_patch, "ax"
605 	.word	661b
606 	stxa	%g2, [%g1] ASI_MMU
607 	.previous
608 
609 	membar	#Sync
610 
611 	/* Kill PROM timer */
612 	sethi	%hi(0x80000000), %o2
613 	sllx	%o2, 32, %o2
614 	wr	%o2, 0, %tick_cmpr
607 615
608 BRANCH_IF_SUN4V(o2, 1f) 616 BRANCH_IF_SUN4V(o2, 1f)
609 BRANCH_IF_ANY_CHEETAH(o2, o3, 1f) 617 BRANCH_IF_ANY_CHEETAH(o2, o3, 1f)
610 618
611 ba,pt %xcc, 2f 619 ba,pt %xcc, 2f
612 nop 620 nop
613 621
614 /* Disable STICK_INT interrupts. */ 622 /* Disable STICK_INT interrupts. */
615 1: 623 1:
616 sethi %hi(0x80000000), %o2 624 sethi %hi(0x80000000), %o2
617 sllx %o2, 32, %o2 625 sllx %o2, 32, %o2
618 wr %o2, %asr25 626 wr %o2, %asr25
619 627
620 2: 628 2:
621 wrpr %g0, %g0, %wstate 629 wrpr %g0, %g0, %wstate
622 630
623 call init_irqwork_curcpu 631 call init_irqwork_curcpu
624 nop 632 nop
625 633
626 /* Now we can restore interrupt state. */ 634 /* Now we can restore interrupt state. */
627 wrpr %l0, 0, %pstate 635 wrpr %l0, 0, %pstate
628 wrpr %l1, 0x0, %pil 636 wrpr %l1, 0x0, %pil
629 637
630 ret 638 ret
631 restore 639 restore
632 640
633 .globl setup_tba 641 .globl setup_tba
634 setup_tba: 642 setup_tba:
635 save %sp, -192, %sp 643 save %sp, -192, %sp
636 644
637 /* The boot processor is the only cpu which invokes this 645 /* The boot processor is the only cpu which invokes this
638 * routine, the other cpus set things up via trampoline.S. 646 * routine, the other cpus set things up via trampoline.S.
639 * So save the OBP trap table address here. 647 * So save the OBP trap table address here.
640 */ 648 */
641 rdpr %tba, %g7 649 rdpr %tba, %g7
642 sethi %hi(prom_tba), %o1 650 sethi %hi(prom_tba), %o1
643 or %o1, %lo(prom_tba), %o1 651 or %o1, %lo(prom_tba), %o1
644 stx %g7, [%o1] 652 stx %g7, [%o1]
645 653
646 call setup_trap_table 654 call setup_trap_table
647 nop 655 nop
648 656
649 ret 657 ret
650 restore 658 restore
651 sparc64_boot_end: 659 sparc64_boot_end:
652 660
653 #include "ktlb.S" 661 #include "ktlb.S"
654 #include "tsb.S" 662 #include "tsb.S"
655 #include "etrap.S" 663 #include "etrap.S"
656 #include "rtrap.S" 664 #include "rtrap.S"
657 #include "winfixup.S" 665 #include "winfixup.S"
658 #include "entry.S" 666 #include "entry.S"
659 #include "sun4v_tlb_miss.S" 667 #include "sun4v_tlb_miss.S"
660 #include "sun4v_ivec.S" 668 #include "sun4v_ivec.S"
661 669
662 /* 670 /*
663 * The following skip makes sure the trap table in ttable.S is aligned 671 * The following skip makes sure the trap table in ttable.S is aligned
664 * on a 32K boundary as required by the v9 specs for TBA register. 672 * on a 32K boundary as required by the v9 specs for TBA register.
665 * 673 *
666 * We align to a 32K boundary, then we have the 32K kernel TSB, 674 * We align to a 32K boundary, then we have the 32K kernel TSB,
667 * then the 32K aligned trap table. 675 * then the 32K aligned trap table.
668 */ 676 */
669 1: 677 1:
670 .skip 0x4000 + _start - 1b 678 .skip 0x4000 + _start - 1b
671 679
672 .globl swapper_tsb 680 .globl swapper_tsb
673 swapper_tsb: 681 swapper_tsb:
674 .skip (32 * 1024) 682 .skip (32 * 1024)
675 683
676 ! 0x0000000000408000 684 ! 0x0000000000408000
677 685
678 #include "ttable.S" 686 #include "ttable.S"
679 687
680 #include "systbls.S" 688 #include "systbls.S"
681 689
682 .data 690 .data
683 .align 8 691 .align 8
684 .globl prom_tba, tlb_type 692 .globl prom_tba, tlb_type
685 prom_tba: .xword 0 693 prom_tba: .xword 0
686 tlb_type: .word 0 /* Must NOT end up in BSS */ 694 tlb_type: .word 0 /* Must NOT end up in BSS */
687 .section ".fixup",#alloc,#execinstr 695 .section ".fixup",#alloc,#execinstr
688 696
689 .globl __ret_efault, __retl_efault 697 .globl __ret_efault, __retl_efault
690 __ret_efault: 698 __ret_efault:
691 ret 699 ret
692 restore %g0, -EFAULT, %o0 700 restore %g0, -EFAULT, %o0
693 __retl_efault: 701 __retl_efault:
694 retl 702 retl
695 mov -EFAULT, %o0 703 mov -EFAULT, %o0
696 704
arch/sparc64/kernel/rtrap.S
1 /* $Id: rtrap.S,v 1.61 2002/02/09 19:49:31 davem Exp $ 1 /* $Id: rtrap.S,v 1.61 2002/02/09 19:49:31 davem Exp $
2 * rtrap.S: Preparing for return from trap on Sparc V9. 2 * rtrap.S: Preparing for return from trap on Sparc V9.
3 * 3 *
4 * Copyright (C) 1997,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz) 4 * Copyright (C) 1997,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
5 * Copyright (C) 1997 David S. Miller (davem@caip.rutgers.edu) 5 * Copyright (C) 1997 David S. Miller (davem@caip.rutgers.edu)
6 */ 6 */
7 7
8 8
9 #include <asm/asi.h> 9 #include <asm/asi.h>
10 #include <asm/pstate.h> 10 #include <asm/pstate.h>
11 #include <asm/ptrace.h> 11 #include <asm/ptrace.h>
12 #include <asm/spitfire.h> 12 #include <asm/spitfire.h>
13 #include <asm/head.h> 13 #include <asm/head.h>
14 #include <asm/visasm.h> 14 #include <asm/visasm.h>
15 #include <asm/processor.h> 15 #include <asm/processor.h>
16 16
17 #define RTRAP_PSTATE (PSTATE_RMO|PSTATE_PEF|PSTATE_PRIV|PSTATE_IE) 17 #define RTRAP_PSTATE (PSTATE_RMO|PSTATE_PEF|PSTATE_PRIV|PSTATE_IE)
18 #define RTRAP_PSTATE_IRQOFF (PSTATE_RMO|PSTATE_PEF|PSTATE_PRIV) 18 #define RTRAP_PSTATE_IRQOFF (PSTATE_RMO|PSTATE_PEF|PSTATE_PRIV)
19 #define RTRAP_PSTATE_AG_IRQOFF (PSTATE_RMO|PSTATE_PEF|PSTATE_PRIV|PSTATE_AG) 19 #define RTRAP_PSTATE_AG_IRQOFF (PSTATE_RMO|PSTATE_PEF|PSTATE_PRIV|PSTATE_AG)
20 20
21 /* Register %l6 keeps track of whether we are returning 21 /* Register %l6 keeps track of whether we are returning
22 * from a system call or not. It is cleared if we call 22 * from a system call or not. It is cleared if we call
23 * do_notify_resume, and it must not be otherwise modified 23 * do_notify_resume, and it must not be otherwise modified
24 * until we fully commit to returning to userspace. 24 * until we fully commit to returning to userspace.
25 */ 25 */
26 26
27 .text 27 .text
28 .align 32 28 .align 32
29 __handle_softirq: 29 __handle_softirq:
30 call do_softirq 30 call do_softirq
31 nop 31 nop
32 ba,a,pt %xcc, __handle_softirq_continue 32 ba,a,pt %xcc, __handle_softirq_continue
33 nop 33 nop
34 __handle_preemption: 34 __handle_preemption:
35 call schedule 35 call schedule
36 wrpr %g0, RTRAP_PSTATE, %pstate 36 wrpr %g0, RTRAP_PSTATE, %pstate
37 ba,pt %xcc, __handle_preemption_continue 37 ba,pt %xcc, __handle_preemption_continue
38 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate 38 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate
39 39
40 __handle_user_windows: 40 __handle_user_windows:
41 call fault_in_user_windows 41 call fault_in_user_windows
42 wrpr %g0, RTRAP_PSTATE, %pstate 42 wrpr %g0, RTRAP_PSTATE, %pstate
43 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate 43 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate
44 /* Redo sched+sig checks */ 44 /* Redo sched+sig checks */
45 ldx [%g6 + TI_FLAGS], %l0 45 ldx [%g6 + TI_FLAGS], %l0
46 andcc %l0, _TIF_NEED_RESCHED, %g0 46 andcc %l0, _TIF_NEED_RESCHED, %g0
47 47
48 be,pt %xcc, 1f 48 be,pt %xcc, 1f
49 nop 49 nop
50 call schedule 50 call schedule
51 wrpr %g0, RTRAP_PSTATE, %pstate 51 wrpr %g0, RTRAP_PSTATE, %pstate
52 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate 52 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate
53 ldx [%g6 + TI_FLAGS], %l0 53 ldx [%g6 + TI_FLAGS], %l0
54 54
55 1: andcc %l0, (_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK), %g0 55 1: andcc %l0, (_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK), %g0
56 be,pt %xcc, __handle_user_windows_continue 56 be,pt %xcc, __handle_user_windows_continue
57 nop 57 nop
58 mov %l5, %o1 58 mov %l5, %o1
59 mov %l6, %o2 59 mov %l6, %o2
60 add %sp, PTREGS_OFF, %o0 60 add %sp, PTREGS_OFF, %o0
61 mov %l0, %o3 61 mov %l0, %o3
62 62
63 call do_notify_resume 63 call do_notify_resume
64 wrpr %g0, RTRAP_PSTATE, %pstate 64 wrpr %g0, RTRAP_PSTATE, %pstate
65 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate 65 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate
66 clr %l6 66 clr %l6
67 /* Signal delivery can modify pt_regs tstate, so we must 67 /* Signal delivery can modify pt_regs tstate, so we must
68 * reload it. 68 * reload it.
69 */ 69 */
70 ldx [%sp + PTREGS_OFF + PT_V9_TSTATE], %l1 70 ldx [%sp + PTREGS_OFF + PT_V9_TSTATE], %l1
71 sethi %hi(0xf << 20), %l4 71 sethi %hi(0xf << 20), %l4
72 and %l1, %l4, %l4 72 and %l1, %l4, %l4
73 ba,pt %xcc, __handle_user_windows_continue 73 ba,pt %xcc, __handle_user_windows_continue
74 74
75 andn %l1, %l4, %l1 75 andn %l1, %l4, %l1
76 __handle_perfctrs: 76 __handle_perfctrs:
77 call update_perfctrs 77 call update_perfctrs
78 wrpr %g0, RTRAP_PSTATE, %pstate 78 wrpr %g0, RTRAP_PSTATE, %pstate
79 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate 79 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate
80 ldub [%g6 + TI_WSAVED], %o2 80 ldub [%g6 + TI_WSAVED], %o2
81 brz,pt %o2, 1f 81 brz,pt %o2, 1f
82 nop 82 nop
83 /* Redo userwin+sched+sig checks */ 83 /* Redo userwin+sched+sig checks */
84 call fault_in_user_windows 84 call fault_in_user_windows
85 85
86 wrpr %g0, RTRAP_PSTATE, %pstate 86 wrpr %g0, RTRAP_PSTATE, %pstate
87 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate 87 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate
88 ldx [%g6 + TI_FLAGS], %l0 88 ldx [%g6 + TI_FLAGS], %l0
89 andcc %l0, _TIF_NEED_RESCHED, %g0 89 andcc %l0, _TIF_NEED_RESCHED, %g0
90 be,pt %xcc, 1f 90 be,pt %xcc, 1f
91 91
92 nop 92 nop
93 call schedule 93 call schedule
94 wrpr %g0, RTRAP_PSTATE, %pstate 94 wrpr %g0, RTRAP_PSTATE, %pstate
95 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate 95 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate
96 ldx [%g6 + TI_FLAGS], %l0 96 ldx [%g6 + TI_FLAGS], %l0
97 1: andcc %l0, (_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK), %g0 97 1: andcc %l0, (_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK), %g0
98 98
99 be,pt %xcc, __handle_perfctrs_continue 99 be,pt %xcc, __handle_perfctrs_continue
100 sethi %hi(TSTATE_PEF), %o0 100 sethi %hi(TSTATE_PEF), %o0
101 mov %l5, %o1 101 mov %l5, %o1
102 mov %l6, %o2 102 mov %l6, %o2
103 add %sp, PTREGS_OFF, %o0 103 add %sp, PTREGS_OFF, %o0
104 mov %l0, %o3 104 mov %l0, %o3
105 call do_notify_resume 105 call do_notify_resume
106 106
107 wrpr %g0, RTRAP_PSTATE, %pstate 107 wrpr %g0, RTRAP_PSTATE, %pstate
108 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate 108 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate
109 clr %l6 109 clr %l6
110 /* Signal delivery can modify pt_regs tstate, so we must 110 /* Signal delivery can modify pt_regs tstate, so we must
111 * reload it. 111 * reload it.
112 */ 112 */
113 ldx [%sp + PTREGS_OFF + PT_V9_TSTATE], %l1 113 ldx [%sp + PTREGS_OFF + PT_V9_TSTATE], %l1
114 sethi %hi(0xf << 20), %l4 114 sethi %hi(0xf << 20), %l4
115 and %l1, %l4, %l4 115 and %l1, %l4, %l4
116 andn %l1, %l4, %l1 116 andn %l1, %l4, %l1
117 ba,pt %xcc, __handle_perfctrs_continue 117 ba,pt %xcc, __handle_perfctrs_continue
118 118
119 sethi %hi(TSTATE_PEF), %o0 119 sethi %hi(TSTATE_PEF), %o0
120 __handle_userfpu: 120 __handle_userfpu:
121 rd %fprs, %l5 121 rd %fprs, %l5
122 andcc %l5, FPRS_FEF, %g0 122 andcc %l5, FPRS_FEF, %g0
123 sethi %hi(TSTATE_PEF), %o0 123 sethi %hi(TSTATE_PEF), %o0
124 be,a,pn %icc, __handle_userfpu_continue 124 be,a,pn %icc, __handle_userfpu_continue
125 andn %l1, %o0, %l1 125 andn %l1, %o0, %l1
126 ba,a,pt %xcc, __handle_userfpu_continue 126 ba,a,pt %xcc, __handle_userfpu_continue
127 127
128 __handle_signal: 128 __handle_signal:
129 mov %l5, %o1 129 mov %l5, %o1
130 mov %l6, %o2 130 mov %l6, %o2
131 add %sp, PTREGS_OFF, %o0 131 add %sp, PTREGS_OFF, %o0
132 mov %l0, %o3 132 mov %l0, %o3
133 call do_notify_resume 133 call do_notify_resume
134 wrpr %g0, RTRAP_PSTATE, %pstate 134 wrpr %g0, RTRAP_PSTATE, %pstate
135 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate 135 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate
136 clr %l6 136 clr %l6
137 137
138 /* Signal delivery can modify pt_regs tstate, so we must 138 /* Signal delivery can modify pt_regs tstate, so we must
139 * reload it. 139 * reload it.
140 */ 140 */
141 ldx [%sp + PTREGS_OFF + PT_V9_TSTATE], %l1 141 ldx [%sp + PTREGS_OFF + PT_V9_TSTATE], %l1
142 sethi %hi(0xf << 20), %l4 142 sethi %hi(0xf << 20), %l4
143 and %l1, %l4, %l4 143 and %l1, %l4, %l4
144 ba,pt %xcc, __handle_signal_continue 144 ba,pt %xcc, __handle_signal_continue
145 andn %l1, %l4, %l1 145 andn %l1, %l4, %l1
146 146
147 .align 64 147 .align 64
148 .globl rtrap_irq, rtrap_clr_l6, rtrap, irqsz_patchme, rtrap_xcall 148 .globl rtrap_irq, rtrap_clr_l6, rtrap, irqsz_patchme, rtrap_xcall
149 rtrap_irq: 149 rtrap_irq:
150 rtrap_clr_l6: clr %l6 150 rtrap_clr_l6: clr %l6
151 rtrap: 151 rtrap:
152 #ifndef CONFIG_SMP 152 #ifndef CONFIG_SMP
153 sethi %hi(per_cpu____cpu_data), %l0 153 sethi %hi(per_cpu____cpu_data), %l0
154 lduw [%l0 + %lo(per_cpu____cpu_data)], %l1 154 lduw [%l0 + %lo(per_cpu____cpu_data)], %l1
155 #else 155 #else
156 sethi %hi(per_cpu____cpu_data), %l0 156 sethi %hi(per_cpu____cpu_data), %l0
157 or %l0, %lo(per_cpu____cpu_data), %l0 157 or %l0, %lo(per_cpu____cpu_data), %l0
158 lduw [%l0 + %g5], %l1 158 lduw [%l0 + %g5], %l1
159 #endif 159 #endif
160 cmp %l1, 0 160 cmp %l1, 0
161 161
162 /* mm/ultra.S:xcall_report_regs KNOWS about this load. */ 162 /* mm/ultra.S:xcall_report_regs KNOWS about this load. */
163 bne,pn %icc, __handle_softirq 163 bne,pn %icc, __handle_softirq
164 ldx [%sp + PTREGS_OFF + PT_V9_TSTATE], %l1 164 ldx [%sp + PTREGS_OFF + PT_V9_TSTATE], %l1
165 __handle_softirq_continue: 165 __handle_softirq_continue:
166 rtrap_xcall: 166 rtrap_xcall:
167 sethi %hi(0xf << 20), %l4 167 sethi %hi(0xf << 20), %l4
168 andcc %l1, TSTATE_PRIV, %l3
169 and %l1, %l4, %l4 168 and %l1, %l4, %l4
169 andn %l1, %l4, %l1
170 srl %l4, 20, %l4
171 #ifdef CONFIG_TRACE_IRQFLAGS
172 brnz,pn %l4, rtrap_no_irq_enable
173 nop
174 call trace_hardirqs_on
175 nop
176 wrpr %l4, %pil
177 rtrap_no_irq_enable:
178 #endif
179 andcc %l1, TSTATE_PRIV, %l3
170 bne,pn %icc, to_kernel 180 bne,pn %icc, to_kernel
171 andn %l1, %l4, %l1 181 nop
172 182
173 /* We must hold IRQs off and atomically test schedule+signal 183 /* We must hold IRQs off and atomically test schedule+signal
174 * state, then hold them off all the way back to userspace. 184 * state, then hold them off all the way back to userspace.
175 * If we are returning to kernel, none of this matters. 185 * If we are returning to kernel, none of this matters. Note
186 * that we are disabling interrupts via PSTATE_IE, not using
187 * %pil.
176 * 188 *
177 * If we do not do this, there is a window where we would do 189 * If we do not do this, there is a window where we would do
178 * the tests, later the signal/resched event arrives but we do 190 * the tests, later the signal/resched event arrives but we do
179 * not process it since we are still in kernel mode. It would 191 * not process it since we are still in kernel mode. It would
180 * take until the next local IRQ before the signal/resched 192 * take until the next local IRQ before the signal/resched
181 * event would be handled. 193 * event would be handled.
182 * 194 *
183 * This also means that if we have to deal with performance 195 * This also means that if we have to deal with performance
184 * counters or user windows, we have to redo all of these 196 * counters or user windows, we have to redo all of these
185 * sched+signal checks with IRQs disabled. 197 * sched+signal checks with IRQs disabled.
186 */ 198 */
187 to_user: wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate 199 to_user: wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate
188 wrpr 0, %pil 200 wrpr 0, %pil
189 __handle_preemption_continue: 201 __handle_preemption_continue:
190 ldx [%g6 + TI_FLAGS], %l0 202 ldx [%g6 + TI_FLAGS], %l0
191 sethi %hi(_TIF_USER_WORK_MASK), %o0 203 sethi %hi(_TIF_USER_WORK_MASK), %o0
192 or %o0, %lo(_TIF_USER_WORK_MASK), %o0 204 or %o0, %lo(_TIF_USER_WORK_MASK), %o0
193 andcc %l0, %o0, %g0 205 andcc %l0, %o0, %g0
194 sethi %hi(TSTATE_PEF), %o0 206 sethi %hi(TSTATE_PEF), %o0
195 be,pt %xcc, user_nowork 207 be,pt %xcc, user_nowork
196 andcc %l1, %o0, %g0 208 andcc %l1, %o0, %g0
197 andcc %l0, _TIF_NEED_RESCHED, %g0 209 andcc %l0, _TIF_NEED_RESCHED, %g0
198 bne,pn %xcc, __handle_preemption 210 bne,pn %xcc, __handle_preemption
199 andcc %l0, (_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK), %g0 211 andcc %l0, (_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK), %g0
200 bne,pn %xcc, __handle_signal 212 bne,pn %xcc, __handle_signal
201 __handle_signal_continue: 213 __handle_signal_continue:
202 ldub [%g6 + TI_WSAVED], %o2 214 ldub [%g6 + TI_WSAVED], %o2
203 brnz,pn %o2, __handle_user_windows 215 brnz,pn %o2, __handle_user_windows
204 nop 216 nop
205 __handle_user_windows_continue: 217 __handle_user_windows_continue:
206 ldx [%g6 + TI_FLAGS], %l5 218 ldx [%g6 + TI_FLAGS], %l5
207 andcc %l5, _TIF_PERFCTR, %g0 219 andcc %l5, _TIF_PERFCTR, %g0
208 sethi %hi(TSTATE_PEF), %o0 220 sethi %hi(TSTATE_PEF), %o0
209 bne,pn %xcc, __handle_perfctrs 221 bne,pn %xcc, __handle_perfctrs
210 __handle_perfctrs_continue: 222 __handle_perfctrs_continue:
211 andcc %l1, %o0, %g0 223 andcc %l1, %o0, %g0
212 224
213 /* This fpdepth clear is necessary for non-syscall rtraps only */ 225 /* This fpdepth clear is necessary for non-syscall rtraps only */
214 user_nowork: 226 user_nowork:
215 bne,pn %xcc, __handle_userfpu 227 bne,pn %xcc, __handle_userfpu
216 stb %g0, [%g6 + TI_FPDEPTH] 228 stb %g0, [%g6 + TI_FPDEPTH]
217 __handle_userfpu_continue: 229 __handle_userfpu_continue:
218 230
219 rt_continue: ldx [%sp + PTREGS_OFF + PT_V9_G1], %g1 231 rt_continue: ldx [%sp + PTREGS_OFF + PT_V9_G1], %g1
220 ldx [%sp + PTREGS_OFF + PT_V9_G2], %g2 232 ldx [%sp + PTREGS_OFF + PT_V9_G2], %g2
221 233
222 ldx [%sp + PTREGS_OFF + PT_V9_G3], %g3 234 ldx [%sp + PTREGS_OFF + PT_V9_G3], %g3
223 ldx [%sp + PTREGS_OFF + PT_V9_G4], %g4 235 ldx [%sp + PTREGS_OFF + PT_V9_G4], %g4
224 ldx [%sp + PTREGS_OFF + PT_V9_G5], %g5 236 ldx [%sp + PTREGS_OFF + PT_V9_G5], %g5
225 brz,pt %l3, 1f 237 brz,pt %l3, 1f
226 mov %g6, %l2 238 mov %g6, %l2
227 239
228 /* Must do this before thread reg is clobbered below. */ 240 /* Must do this before thread reg is clobbered below. */
229 LOAD_PER_CPU_BASE(%g5, %g6, %i0, %i1, %i2) 241 LOAD_PER_CPU_BASE(%g5, %g6, %i0, %i1, %i2)
230 1: 242 1:
231 ldx [%sp + PTREGS_OFF + PT_V9_G6], %g6 243 ldx [%sp + PTREGS_OFF + PT_V9_G6], %g6
232 ldx [%sp + PTREGS_OFF + PT_V9_G7], %g7 244 ldx [%sp + PTREGS_OFF + PT_V9_G7], %g7
233 245
234 /* Normal globals are restored, go to trap globals. */ 246 /* Normal globals are restored, go to trap globals. */
235 661: wrpr %g0, RTRAP_PSTATE_AG_IRQOFF, %pstate 247 661: wrpr %g0, RTRAP_PSTATE_AG_IRQOFF, %pstate
236 nop 248 nop
237 .section .sun4v_2insn_patch, "ax" 249 .section .sun4v_2insn_patch, "ax"
238 .word 661b 250 .word 661b
239 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate 251 wrpr %g0, RTRAP_PSTATE_IRQOFF, %pstate
240 SET_GL(1) 252 SET_GL(1)
241 .previous 253 .previous
242 254
243 mov %l2, %g6 255 mov %l2, %g6
244 256
245 ldx [%sp + PTREGS_OFF + PT_V9_I0], %i0 257 ldx [%sp + PTREGS_OFF + PT_V9_I0], %i0
246 ldx [%sp + PTREGS_OFF + PT_V9_I1], %i1 258 ldx [%sp + PTREGS_OFF + PT_V9_I1], %i1
247 259
248 ldx [%sp + PTREGS_OFF + PT_V9_I2], %i2 260 ldx [%sp + PTREGS_OFF + PT_V9_I2], %i2
249 ldx [%sp + PTREGS_OFF + PT_V9_I3], %i3 261 ldx [%sp + PTREGS_OFF + PT_V9_I3], %i3
250 ldx [%sp + PTREGS_OFF + PT_V9_I4], %i4 262 ldx [%sp + PTREGS_OFF + PT_V9_I4], %i4
251 ldx [%sp + PTREGS_OFF + PT_V9_I5], %i5 263 ldx [%sp + PTREGS_OFF + PT_V9_I5], %i5
252 ldx [%sp + PTREGS_OFF + PT_V9_I6], %i6 264 ldx [%sp + PTREGS_OFF + PT_V9_I6], %i6
253 ldx [%sp + PTREGS_OFF + PT_V9_I7], %i7 265 ldx [%sp + PTREGS_OFF + PT_V9_I7], %i7
254 ldx [%sp + PTREGS_OFF + PT_V9_TPC], %l2 266 ldx [%sp + PTREGS_OFF + PT_V9_TPC], %l2
255 ldx [%sp + PTREGS_OFF + PT_V9_TNPC], %o2 267 ldx [%sp + PTREGS_OFF + PT_V9_TNPC], %o2
256 268
257 ld [%sp + PTREGS_OFF + PT_V9_Y], %o3 269 ld [%sp + PTREGS_OFF + PT_V9_Y], %o3
258 wr %o3, %g0, %y 270 wr %o3, %g0, %y
259 srl %l4, 20, %l4
260 wrpr %l4, 0x0, %pil 271 wrpr %l4, 0x0, %pil
261 wrpr %g0, 0x1, %tl 272 wrpr %g0, 0x1, %tl
262 wrpr %l1, %g0, %tstate 273 wrpr %l1, %g0, %tstate
263 wrpr %l2, %g0, %tpc 274 wrpr %l2, %g0, %tpc
264 wrpr %o2, %g0, %tnpc 275 wrpr %o2, %g0, %tnpc
265 276
266 brnz,pn %l3, kern_rtt 277 brnz,pn %l3, kern_rtt
267 mov PRIMARY_CONTEXT, %l7 278 mov PRIMARY_CONTEXT, %l7
268 279
269 661: ldxa [%l7 + %l7] ASI_DMMU, %l0 280 661: ldxa [%l7 + %l7] ASI_DMMU, %l0
270 .section .sun4v_1insn_patch, "ax" 281 .section .sun4v_1insn_patch, "ax"
271 .word 661b 282 .word 661b
272 ldxa [%l7 + %l7] ASI_MMU, %l0 283 ldxa [%l7 + %l7] ASI_MMU, %l0
273 .previous 284 .previous
274 285
275 sethi %hi(sparc64_kern_pri_nuc_bits), %l1 286 sethi %hi(sparc64_kern_pri_nuc_bits), %l1
276 ldx [%l1 + %lo(sparc64_kern_pri_nuc_bits)], %l1 287 ldx [%l1 + %lo(sparc64_kern_pri_nuc_bits)], %l1
277 or %l0, %l1, %l0 288 or %l0, %l1, %l0
278 289
279 661: stxa %l0, [%l7] ASI_DMMU 290 661: stxa %l0, [%l7] ASI_DMMU
280 .section .sun4v_1insn_patch, "ax" 291 .section .sun4v_1insn_patch, "ax"
281 .word 661b 292 .word 661b
282 stxa %l0, [%l7] ASI_MMU 293 stxa %l0, [%l7] ASI_MMU
283 .previous 294 .previous
284 295
285 sethi %hi(KERNBASE), %l7 296 sethi %hi(KERNBASE), %l7
286 flush %l7 297 flush %l7
287 rdpr %wstate, %l1 298 rdpr %wstate, %l1
288 rdpr %otherwin, %l2 299 rdpr %otherwin, %l2
289 srl %l1, 3, %l1 300 srl %l1, 3, %l1
290 301
291 wrpr %l2, %g0, %canrestore 302 wrpr %l2, %g0, %canrestore
292 wrpr %l1, %g0, %wstate 303 wrpr %l1, %g0, %wstate
293 brnz,pt %l2, user_rtt_restore 304 brnz,pt %l2, user_rtt_restore
294 wrpr %g0, %g0, %otherwin 305 wrpr %g0, %g0, %otherwin
295 306
296 ldx [%g6 + TI_FLAGS], %g3 307 ldx [%g6 + TI_FLAGS], %g3
297 wr %g0, ASI_AIUP, %asi 308 wr %g0, ASI_AIUP, %asi
298 rdpr %cwp, %g1 309 rdpr %cwp, %g1
299 andcc %g3, _TIF_32BIT, %g0 310 andcc %g3, _TIF_32BIT, %g0
300 sub %g1, 1, %g1 311 sub %g1, 1, %g1
301 bne,pt %xcc, user_rtt_fill_32bit 312 bne,pt %xcc, user_rtt_fill_32bit
302 wrpr %g1, %cwp 313 wrpr %g1, %cwp
303 ba,a,pt %xcc, user_rtt_fill_64bit 314 ba,a,pt %xcc, user_rtt_fill_64bit
304 315
305 user_rtt_fill_fixup: 316 user_rtt_fill_fixup:
306 rdpr %cwp, %g1 317 rdpr %cwp, %g1
307 add %g1, 1, %g1 318 add %g1, 1, %g1
308 wrpr %g1, 0x0, %cwp 319 wrpr %g1, 0x0, %cwp
309 320
310 rdpr %wstate, %g2 321 rdpr %wstate, %g2
311 sll %g2, 3, %g2 322 sll %g2, 3, %g2
312 wrpr %g2, 0x0, %wstate 323 wrpr %g2, 0x0, %wstate
313 324
314 /* We know %canrestore and %otherwin are both zero. */ 325 /* We know %canrestore and %otherwin are both zero. */
315 326
316 sethi %hi(sparc64_kern_pri_context), %g2 327 sethi %hi(sparc64_kern_pri_context), %g2
317 ldx [%g2 + %lo(sparc64_kern_pri_context)], %g2 328 ldx [%g2 + %lo(sparc64_kern_pri_context)], %g2
318 mov PRIMARY_CONTEXT, %g1 329 mov PRIMARY_CONTEXT, %g1
319 330
320 661: stxa %g2, [%g1] ASI_DMMU 331 661: stxa %g2, [%g1] ASI_DMMU
321 .section .sun4v_1insn_patch, "ax" 332 .section .sun4v_1insn_patch, "ax"
322 .word 661b 333 .word 661b
323 stxa %g2, [%g1] ASI_MMU 334 stxa %g2, [%g1] ASI_MMU
324 .previous 335 .previous
325 336
326 sethi %hi(KERNBASE), %g1 337 sethi %hi(KERNBASE), %g1
327 flush %g1 338 flush %g1
328 339
329 or %g4, FAULT_CODE_WINFIXUP, %g4 340 or %g4, FAULT_CODE_WINFIXUP, %g4
330 stb %g4, [%g6 + TI_FAULT_CODE] 341 stb %g4, [%g6 + TI_FAULT_CODE]
331 stx %g5, [%g6 + TI_FAULT_ADDR] 342 stx %g5, [%g6 + TI_FAULT_ADDR]
332 343
333 mov %g6, %l1 344 mov %g6, %l1
334 wrpr %g0, 0x0, %tl 345 wrpr %g0, 0x0, %tl
335 346
336 661: nop 347 661: nop
337 .section .sun4v_1insn_patch, "ax" 348 .section .sun4v_1insn_patch, "ax"
338 .word 661b 349 .word 661b
339 SET_GL(0) 350 SET_GL(0)
340 .previous 351 .previous
341 352
342 wrpr %g0, RTRAP_PSTATE, %pstate 353 wrpr %g0, RTRAP_PSTATE, %pstate
343 354
344 mov %l1, %g6 355 mov %l1, %g6
345 ldx [%g6 + TI_TASK], %g4 356 ldx [%g6 + TI_TASK], %g4
346 LOAD_PER_CPU_BASE(%g5, %g6, %g1, %g2, %g3) 357 LOAD_PER_CPU_BASE(%g5, %g6, %g1, %g2, %g3)
347 call do_sparc64_fault 358 call do_sparc64_fault
348 add %sp, PTREGS_OFF, %o0 359 add %sp, PTREGS_OFF, %o0
349 ba,pt %xcc, rtrap 360 ba,pt %xcc, rtrap
350 nop 361 nop
351 362
352 user_rtt_pre_restore: 363 user_rtt_pre_restore:
353 add %g1, 1, %g1 364 add %g1, 1, %g1
354 wrpr %g1, 0x0, %cwp 365 wrpr %g1, 0x0, %cwp
355 366
356 user_rtt_restore: 367 user_rtt_restore:
357 restore 368 restore
358 rdpr %canrestore, %g1 369 rdpr %canrestore, %g1
359 wrpr %g1, 0x0, %cleanwin 370 wrpr %g1, 0x0, %cleanwin
360 retry 371 retry
361 nop 372 nop
362 373
363 kern_rtt: rdpr %canrestore, %g1 374 kern_rtt: rdpr %canrestore, %g1
364 brz,pn %g1, kern_rtt_fill 375 brz,pn %g1, kern_rtt_fill
365 nop 376 nop
366 kern_rtt_restore: 377 kern_rtt_restore:
367 restore 378 restore
368 retry 379 retry
369 380
370 to_kernel: 381 to_kernel:
371 #ifdef CONFIG_PREEMPT 382 #ifdef CONFIG_PREEMPT
372 ldsw [%g6 + TI_PRE_COUNT], %l5 383 ldsw [%g6 + TI_PRE_COUNT], %l5
373 brnz %l5, kern_fpucheck 384 brnz %l5, kern_fpucheck
374 ldx [%g6 + TI_FLAGS], %l5 385 ldx [%g6 + TI_FLAGS], %l5
375 andcc %l5, _TIF_NEED_RESCHED, %g0 386 andcc %l5, _TIF_NEED_RESCHED, %g0
376 be,pt %xcc, kern_fpucheck 387 be,pt %xcc, kern_fpucheck
377 srl %l4, 20, %l5 388 nop
378 cmp %l5, 0 389 cmp %l4, 0
379 bne,pn %xcc, kern_fpucheck 390 bne,pn %xcc, kern_fpucheck
380 sethi %hi(PREEMPT_ACTIVE), %l6 391 sethi %hi(PREEMPT_ACTIVE), %l6
381 stw %l6, [%g6 + TI_PRE_COUNT] 392 stw %l6, [%g6 + TI_PRE_COUNT]
382 call schedule 393 call schedule
383 nop 394 nop
384 ba,pt %xcc, rtrap 395 ba,pt %xcc, rtrap
385 stw %g0, [%g6 + TI_PRE_COUNT] 396 stw %g0, [%g6 + TI_PRE_COUNT]
386 #endif 397 #endif
387 kern_fpucheck: ldub [%g6 + TI_FPDEPTH], %l5 398 kern_fpucheck: ldub [%g6 + TI_FPDEPTH], %l5
388 brz,pt %l5, rt_continue 399 brz,pt %l5, rt_continue
389 srl %l5, 1, %o0 400 srl %l5, 1, %o0
390 add %g6, TI_FPSAVED, %l6 401 add %g6, TI_FPSAVED, %l6
391 ldub [%l6 + %o0], %l2 402 ldub [%l6 + %o0], %l2
392 sub %l5, 2, %l5 403 sub %l5, 2, %l5
393 404
394 add %g6, TI_GSR, %o1 405 add %g6, TI_GSR, %o1
395 andcc %l2, (FPRS_FEF|FPRS_DU), %g0 406 andcc %l2, (FPRS_FEF|FPRS_DU), %g0
396 be,pt %icc, 2f 407 be,pt %icc, 2f
397 and %l2, FPRS_DL, %l6 408 and %l2, FPRS_DL, %l6
398 andcc %l2, FPRS_FEF, %g0 409 andcc %l2, FPRS_FEF, %g0
399 be,pn %icc, 5f 410 be,pn %icc, 5f
400 sll %o0, 3, %o5 411 sll %o0, 3, %o5
401 rd %fprs, %g1 412 rd %fprs, %g1
402 413
403 wr %g1, FPRS_FEF, %fprs 414 wr %g1, FPRS_FEF, %fprs
404 ldx [%o1 + %o5], %g1 415 ldx [%o1 + %o5], %g1
405 add %g6, TI_XFSR, %o1 416 add %g6, TI_XFSR, %o1
406 sll %o0, 8, %o2 417 sll %o0, 8, %o2
407 add %g6, TI_FPREGS, %o3 418 add %g6, TI_FPREGS, %o3
408 brz,pn %l6, 1f 419 brz,pn %l6, 1f
409 add %g6, TI_FPREGS+0x40, %o4 420 add %g6, TI_FPREGS+0x40, %o4
410 421
411 membar #Sync 422 membar #Sync
412 ldda [%o3 + %o2] ASI_BLK_P, %f0 423 ldda [%o3 + %o2] ASI_BLK_P, %f0
413 ldda [%o4 + %o2] ASI_BLK_P, %f16 424 ldda [%o4 + %o2] ASI_BLK_P, %f16
414 membar #Sync 425 membar #Sync
415 1: andcc %l2, FPRS_DU, %g0 426 1: andcc %l2, FPRS_DU, %g0
416 be,pn %icc, 1f 427 be,pn %icc, 1f
417 wr %g1, 0, %gsr 428 wr %g1, 0, %gsr
418 add %o2, 0x80, %o2 429 add %o2, 0x80, %o2
419 membar #Sync 430 membar #Sync
420 ldda [%o3 + %o2] ASI_BLK_P, %f32 431 ldda [%o3 + %o2] ASI_BLK_P, %f32
421 ldda [%o4 + %o2] ASI_BLK_P, %f48 432 ldda [%o4 + %o2] ASI_BLK_P, %f48
422 1: membar #Sync 433 1: membar #Sync
423 ldx [%o1 + %o5], %fsr 434 ldx [%o1 + %o5], %fsr
424 2: stb %l5, [%g6 + TI_FPDEPTH] 435 2: stb %l5, [%g6 + TI_FPDEPTH]
425 ba,pt %xcc, rt_continue 436 ba,pt %xcc, rt_continue
426 nop 437 nop
427 5: wr %g0, FPRS_FEF, %fprs 438 5: wr %g0, FPRS_FEF, %fprs
428 sll %o0, 8, %o2 439 sll %o0, 8, %o2
429 440
430 add %g6, TI_FPREGS+0x80, %o3 441 add %g6, TI_FPREGS+0x80, %o3
431 add %g6, TI_FPREGS+0xc0, %o4 442 add %g6, TI_FPREGS+0xc0, %o4
432 membar #Sync 443 membar #Sync
433 ldda [%o3 + %o2] ASI_BLK_P, %f32 444 ldda [%o3 + %o2] ASI_BLK_P, %f32
434 ldda [%o4 + %o2] ASI_BLK_P, %f48 445 ldda [%o4 + %o2] ASI_BLK_P, %f48
435 membar #Sync 446 membar #Sync
436 wr %g0, FPRS_DU, %fprs 447 wr %g0, FPRS_DU, %fprs
437 ba,pt %xcc, rt_continue 448 ba,pt %xcc, rt_continue
arch/sparc64/kernel/stacktrace.c
File was created 1 #include <linux/sched.h>
2 #include <linux/stacktrace.h>
3 #include <linux/thread_info.h>
4 #include <asm/ptrace.h>
5
6 void save_stack_trace(struct stack_trace *trace, struct task_struct *task)
7 {
8 unsigned long ksp, fp, thread_base;
9 struct thread_info *tp;
10
11 if (!task)
12 task = current;
13 tp = task_thread_info(task);
14 if (task == current) {
15 flushw_all();
16 __asm__ __volatile__(
17 "mov %%fp, %0"
18 : "=r" (ksp)
19 );
20 } else
21 ksp = tp->ksp;
22
23 fp = ksp + STACK_BIAS;
24 thread_base = (unsigned long) tp;
25 do {
26 struct reg_window *rw;
27
28 /* Bogus frame pointer? */
29 if (fp < (thread_base + sizeof(struct thread_info)) ||
30 fp >= (thread_base + THREAD_SIZE))
31 break;
32
33 rw = (struct reg_window *) fp;
34 if (trace->skip > 0)
35 trace->skip--;
36 else
37 trace->entries[trace->nr_entries++] = rw->ins[7];
38
39 fp = rw->ins[6] + STACK_BIAS;
40 } while (trace->nr_entries < trace->max_entries);
41 }
42
arch/sparc64/kernel/sun4v_ivec.S
1 /* sun4v_ivec.S: Sun4v interrupt vector handling. 1 /* sun4v_ivec.S: Sun4v interrupt vector handling.
2 * 2 *
3 * Copyright (C) 2006 <davem@davemloft.net> 3 * Copyright (C) 2006 <davem@davemloft.net>
4 */ 4 */
5 5
6 #include <asm/cpudata.h> 6 #include <asm/cpudata.h>
7 #include <asm/intr_queue.h> 7 #include <asm/intr_queue.h>
8 #include <asm/pil.h> 8 #include <asm/pil.h>
9 9
10 .text 10 .text
11 .align 32 11 .align 32
12 12
13 sun4v_cpu_mondo: 13 sun4v_cpu_mondo:
14 /* Head offset in %g2, tail offset in %g4. 14 /* Head offset in %g2, tail offset in %g4.
15 * If they are the same, no work. 15 * If they are the same, no work.
16 */ 16 */
17 mov INTRQ_CPU_MONDO_HEAD, %g2 17 mov INTRQ_CPU_MONDO_HEAD, %g2
18 ldxa [%g2] ASI_QUEUE, %g2 18 ldxa [%g2] ASI_QUEUE, %g2
19 mov INTRQ_CPU_MONDO_TAIL, %g4 19 mov INTRQ_CPU_MONDO_TAIL, %g4
20 ldxa [%g4] ASI_QUEUE, %g4 20 ldxa [%g4] ASI_QUEUE, %g4
21 cmp %g2, %g4 21 cmp %g2, %g4
22 be,pn %xcc, sun4v_cpu_mondo_queue_empty 22 be,pn %xcc, sun4v_cpu_mondo_queue_empty
23 nop 23 nop
24 24
25 /* Get &trap_block[smp_processor_id()] into %g3. */ 25 /* Get &trap_block[smp_processor_id()] into %g3. */
26 ldxa [%g0] ASI_SCRATCHPAD, %g3 26 ldxa [%g0] ASI_SCRATCHPAD, %g3
27 sub %g3, TRAP_PER_CPU_FAULT_INFO, %g3 27 sub %g3, TRAP_PER_CPU_FAULT_INFO, %g3
28 28
29 /* Get CPU mondo queue base phys address into %g7. */ 29 /* Get CPU mondo queue base phys address into %g7. */
30 ldx [%g3 + TRAP_PER_CPU_CPU_MONDO_PA], %g7 30 ldx [%g3 + TRAP_PER_CPU_CPU_MONDO_PA], %g7
31 31
32 /* Now get the cross-call arguments and handler PC, same 32 /* Now get the cross-call arguments and handler PC, same
33 * layout as sun4u: 33 * layout as sun4u:
34 * 34 *
35 * 1st 64-bit word: low half is 32-bit PC, put into %g3 and jmpl to it 35 * 1st 64-bit word: low half is 32-bit PC, put into %g3 and jmpl to it
36 * high half is context arg to MMU flushes, into %g5 36 * high half is context arg to MMU flushes, into %g5
37 * 2nd 64-bit word: 64-bit arg, load into %g1 37 * 2nd 64-bit word: 64-bit arg, load into %g1
38 * 3rd 64-bit word: 64-bit arg, load into %g7 38 * 3rd 64-bit word: 64-bit arg, load into %g7
39 */ 39 */
40 ldxa [%g7 + %g2] ASI_PHYS_USE_EC, %g3 40 ldxa [%g7 + %g2] ASI_PHYS_USE_EC, %g3
41 add %g2, 0x8, %g2 41 add %g2, 0x8, %g2
42 srlx %g3, 32, %g5 42 srlx %g3, 32, %g5
43 ldxa [%g7 + %g2] ASI_PHYS_USE_EC, %g1 43 ldxa [%g7 + %g2] ASI_PHYS_USE_EC, %g1
44 add %g2, 0x8, %g2 44 add %g2, 0x8, %g2
45 srl %g3, 0, %g3 45 srl %g3, 0, %g3
46 ldxa [%g7 + %g2] ASI_PHYS_USE_EC, %g7 46 ldxa [%g7 + %g2] ASI_PHYS_USE_EC, %g7
47 add %g2, 0x40 - 0x8 - 0x8, %g2 47 add %g2, 0x40 - 0x8 - 0x8, %g2
48 48
49 /* Update queue head pointer. */ 49 /* Update queue head pointer. */
50 sethi %hi(8192 - 1), %g4 50 sethi %hi(8192 - 1), %g4
51 or %g4, %lo(8192 - 1), %g4 51 or %g4, %lo(8192 - 1), %g4
52 and %g2, %g4, %g2 52 and %g2, %g4, %g2
53 53
54 mov INTRQ_CPU_MONDO_HEAD, %g4 54 mov INTRQ_CPU_MONDO_HEAD, %g4
55 stxa %g2, [%g4] ASI_QUEUE 55 stxa %g2, [%g4] ASI_QUEUE
56 membar #Sync 56 membar #Sync
57 57
58 jmpl %g3, %g0 58 jmpl %g3, %g0
59 nop 59 nop
60 60
61 sun4v_cpu_mondo_queue_empty: 61 sun4v_cpu_mondo_queue_empty:
62 retry 62 retry
63 63
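The queue bookkeeping in sun4v_cpu_mondo above reduces to two operations: the queue is empty when the head offset equals the tail offset, and the head advances by the 64-byte entry size modulo the 8K queue size (the `sethi %hi(8192 - 1)` / `and` pair). A minimal sketch, with the offsets as plain integers rather than ASI_QUEUE registers:

```c
#include <assert.h>

/* Illustrative constants: the queues here are 8K with 64-byte
 * entries, matching the masks used in the assembly above. */
#define QUEUE_SIZE 8192UL
#define ENTRY_SIZE 0x40UL

static int queue_empty(unsigned long head, unsigned long tail)
{
	return head == tail;	/* cmp %g2, %g4; be,pn ... */
}

static unsigned long advance_head(unsigned long head)
{
	/* add entry size, then wrap with the (8192 - 1) mask */
	return (head + ENTRY_SIZE) & (QUEUE_SIZE - 1);
}
```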
64 sun4v_dev_mondo: 64 sun4v_dev_mondo:
65 /* Head offset in %g2, tail offset in %g4. */ 65 /* Head offset in %g2, tail offset in %g4. */
66 mov INTRQ_DEVICE_MONDO_HEAD, %g2 66 mov INTRQ_DEVICE_MONDO_HEAD, %g2
67 ldxa [%g2] ASI_QUEUE, %g2 67 ldxa [%g2] ASI_QUEUE, %g2
68 mov INTRQ_DEVICE_MONDO_TAIL, %g4 68 mov INTRQ_DEVICE_MONDO_TAIL, %g4
69 ldxa [%g4] ASI_QUEUE, %g4 69 ldxa [%g4] ASI_QUEUE, %g4
70 cmp %g2, %g4 70 cmp %g2, %g4
71 be,pn %xcc, sun4v_dev_mondo_queue_empty 71 be,pn %xcc, sun4v_dev_mondo_queue_empty
72 nop 72 nop
73 73
74 /* Get &trap_block[smp_processor_id()] into %g3. */ 74 /* Get &trap_block[smp_processor_id()] into %g3. */
75 ldxa [%g0] ASI_SCRATCHPAD, %g3 75 ldxa [%g0] ASI_SCRATCHPAD, %g3
76 sub %g3, TRAP_PER_CPU_FAULT_INFO, %g3 76 sub %g3, TRAP_PER_CPU_FAULT_INFO, %g3
77 77
78 /* Get DEV mondo queue base phys address into %g5. */ 78 /* Get DEV mondo queue base phys address into %g5. */
79 ldx [%g3 + TRAP_PER_CPU_DEV_MONDO_PA], %g5 79 ldx [%g3 + TRAP_PER_CPU_DEV_MONDO_PA], %g5
80 80
81 /* Load IVEC into %g3. */ 81 /* Load IVEC into %g3. */
82 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3 82 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3
83 add %g2, 0x40, %g2 83 add %g2, 0x40, %g2
84 84
85 /* XXX There can be a full 64-byte block of data here. 85 /* XXX There can be a full 64-byte block of data here.
86 * XXX This is how we can get at MSI vector data. 86 * XXX This is how we can get at MSI vector data.
 87 * XXX Currently we do not capture this, but when we do we'll 87 * XXX Currently we do not capture this, but when we do we'll
88 * XXX need to add a 64-byte storage area in the struct ino_bucket 88 * XXX need to add a 64-byte storage area in the struct ino_bucket
89 * XXX or the struct irq_desc. 89 * XXX or the struct irq_desc.
90 */ 90 */
91 91
92 /* Update queue head pointer, this frees up some registers. */ 92 /* Update queue head pointer, this frees up some registers. */
93 sethi %hi(8192 - 1), %g4 93 sethi %hi(8192 - 1), %g4
94 or %g4, %lo(8192 - 1), %g4 94 or %g4, %lo(8192 - 1), %g4
95 and %g2, %g4, %g2 95 and %g2, %g4, %g2
96 96
97 mov INTRQ_DEVICE_MONDO_HEAD, %g4 97 mov INTRQ_DEVICE_MONDO_HEAD, %g4
98 stxa %g2, [%g4] ASI_QUEUE 98 stxa %g2, [%g4] ASI_QUEUE
99 membar #Sync 99 membar #Sync
100 100
101 /* Get &__irq_work[smp_processor_id()] into %g1. */ 101 /* Get &__irq_work[smp_processor_id()] into %g1. */
102 TRAP_LOAD_IRQ_WORK(%g1, %g4) 102 TRAP_LOAD_IRQ_WORK(%g1, %g4)
103 103
104 /* Get &ivector_table[IVEC] into %g4. */ 104 /* Get &ivector_table[IVEC] into %g4. */
105 sethi %hi(ivector_table), %g4 105 sethi %hi(ivector_table), %g4
106 sllx %g3, 3, %g3 106 sllx %g3, 3, %g3
107 or %g4, %lo(ivector_table), %g4 107 or %g4, %lo(ivector_table), %g4
108 add %g4, %g3, %g4 108 add %g4, %g3, %g4
109 109
110 /* Insert ivector_table[] entry into __irq_work[] queue. */ 110 /* Insert ivector_table[] entry into __irq_work[] queue. */
111 lduw [%g1], %g2 /* g2 = irq_work(cpu) */ 111 lduw [%g1], %g2 /* g2 = irq_work(cpu) */
112 stw %g2, [%g4 + 0x00] /* bucket->irq_chain = g2 */ 112 stw %g2, [%g4 + 0x00] /* bucket->irq_chain = g2 */
113 stw %g4, [%g1] /* irq_work(cpu) = bucket */ 113 stw %g4, [%g1] /* irq_work(cpu) = bucket */
114 114
115 /* Signal the interrupt by setting (1 << pil) in %softint. */ 115 /* Signal the interrupt by setting (1 << pil) in %softint. */
116 wr %g0, 1 << PIL_DEVICE_IRQ, %set_softint 116 wr %g0, 1 << PIL_DEVICE_IRQ, %set_softint
117 117
118 sun4v_dev_mondo_queue_empty: 118 sun4v_dev_mondo_queue_empty:
119 retry 119 retry
120 120
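The `__irq_work[]` insertion above is a singly linked push: the old per-cpu list head is stored into the bucket's `irq_chain`, and the bucket becomes the new head. The kernel stores these as 32-bit values (`lduw`/`stw`); the sketch below uses plain pointers for clarity.

```c
#include <assert.h>
#include <stddef.h>

/* 'struct bucket' stands in for struct ino_bucket; only the chain
 * field matters for this sketch. */
struct bucket {
	struct bucket *irq_chain;
};

static void push_irq_work(struct bucket **irq_work, struct bucket *b)
{
	b->irq_chain = *irq_work;	/* bucket->irq_chain = old head */
	*irq_work = b;			/* irq_work(cpu) = bucket */
}
```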
121 sun4v_res_mondo: 121 sun4v_res_mondo:
122 /* Head offset in %g2, tail offset in %g4. */ 122 /* Head offset in %g2, tail offset in %g4. */
123 mov INTRQ_RESUM_MONDO_HEAD, %g2 123 mov INTRQ_RESUM_MONDO_HEAD, %g2
124 ldxa [%g2] ASI_QUEUE, %g2 124 ldxa [%g2] ASI_QUEUE, %g2
125 mov INTRQ_RESUM_MONDO_TAIL, %g4 125 mov INTRQ_RESUM_MONDO_TAIL, %g4
126 ldxa [%g4] ASI_QUEUE, %g4 126 ldxa [%g4] ASI_QUEUE, %g4
127 cmp %g2, %g4 127 cmp %g2, %g4
128 be,pn %xcc, sun4v_res_mondo_queue_empty 128 be,pn %xcc, sun4v_res_mondo_queue_empty
129 nop 129 nop
130 130
131 /* Get &trap_block[smp_processor_id()] into %g3. */ 131 /* Get &trap_block[smp_processor_id()] into %g3. */
132 ldxa [%g0] ASI_SCRATCHPAD, %g3 132 ldxa [%g0] ASI_SCRATCHPAD, %g3
133 sub %g3, TRAP_PER_CPU_FAULT_INFO, %g3 133 sub %g3, TRAP_PER_CPU_FAULT_INFO, %g3
134 134
135 /* Get RES mondo queue base phys address into %g5. */ 135 /* Get RES mondo queue base phys address into %g5. */
136 ldx [%g3 + TRAP_PER_CPU_RESUM_MONDO_PA], %g5 136 ldx [%g3 + TRAP_PER_CPU_RESUM_MONDO_PA], %g5
137 137
138 /* Get RES kernel buffer base phys address into %g7. */ 138 /* Get RES kernel buffer base phys address into %g7. */
139 ldx [%g3 + TRAP_PER_CPU_RESUM_KBUF_PA], %g7 139 ldx [%g3 + TRAP_PER_CPU_RESUM_KBUF_PA], %g7
140 140
141 /* If the first word is non-zero, queue is full. */ 141 /* If the first word is non-zero, queue is full. */
142 ldxa [%g7 + %g2] ASI_PHYS_USE_EC, %g1 142 ldxa [%g7 + %g2] ASI_PHYS_USE_EC, %g1
143 brnz,pn %g1, sun4v_res_mondo_queue_full 143 brnz,pn %g1, sun4v_res_mondo_queue_full
144 nop 144 nop
145 145
146 /* Remember this entry's offset in %g1. */ 146 /* Remember this entry's offset in %g1. */
147 mov %g2, %g1 147 mov %g2, %g1
148 148
149 /* Copy 64-byte queue entry into kernel buffer. */ 149 /* Copy 64-byte queue entry into kernel buffer. */
150 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3 150 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3
151 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC 151 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC
152 add %g2, 0x08, %g2 152 add %g2, 0x08, %g2
153 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3 153 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3
154 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC 154 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC
155 add %g2, 0x08, %g2 155 add %g2, 0x08, %g2
156 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3 156 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3
157 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC 157 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC
158 add %g2, 0x08, %g2 158 add %g2, 0x08, %g2
159 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3 159 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3
160 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC 160 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC
161 add %g2, 0x08, %g2 161 add %g2, 0x08, %g2
162 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3 162 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3
163 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC 163 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC
164 add %g2, 0x08, %g2 164 add %g2, 0x08, %g2
165 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3 165 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3
166 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC 166 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC
167 add %g2, 0x08, %g2 167 add %g2, 0x08, %g2
168 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3 168 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3
169 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC 169 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC
170 add %g2, 0x08, %g2 170 add %g2, 0x08, %g2
171 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3 171 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3
172 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC 172 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC
173 add %g2, 0x08, %g2 173 add %g2, 0x08, %g2
174 174
175 /* Update queue head pointer. */ 175 /* Update queue head pointer. */
176 sethi %hi(8192 - 1), %g4 176 sethi %hi(8192 - 1), %g4
177 or %g4, %lo(8192 - 1), %g4 177 or %g4, %lo(8192 - 1), %g4
178 and %g2, %g4, %g2 178 and %g2, %g4, %g2
179 179
180 mov INTRQ_RESUM_MONDO_HEAD, %g4 180 mov INTRQ_RESUM_MONDO_HEAD, %g4
181 stxa %g2, [%g4] ASI_QUEUE 181 stxa %g2, [%g4] ASI_QUEUE
182 membar #Sync 182 membar #Sync
183 183
184 /* Disable interrupts and save register state so we can call 184 /* Disable interrupts and save register state so we can call
185 * C code. The etrap handling will leave %g4 in %l4 for us 185 * C code. The etrap handling will leave %g4 in %l4 for us
186 * when it's done. 186 * when it's done.
187 */ 187 */
188 rdpr %pil, %g2 188 rdpr %pil, %g2
189 wrpr %g0, 15, %pil 189 wrpr %g0, 15, %pil
190 mov %g1, %g4 190 mov %g1, %g4
191 ba,pt %xcc, etrap_irq 191 ba,pt %xcc, etrap_irq
192 rd %pc, %g7 192 rd %pc, %g7
193 193 #ifdef CONFIG_TRACE_IRQFLAGS
194 call trace_hardirqs_off
195 nop
196 #endif
194 /* Log the event. */ 197 /* Log the event. */
195 add %sp, PTREGS_OFF, %o0 198 add %sp, PTREGS_OFF, %o0
196 call sun4v_resum_error 199 call sun4v_resum_error
197 mov %l4, %o1 200 mov %l4, %o1
198 201
199 /* Return from trap. */ 202 /* Return from trap. */
200 ba,pt %xcc, rtrap_irq 203 ba,pt %xcc, rtrap_irq
201 nop 204 nop
202 205
203 sun4v_res_mondo_queue_empty: 206 sun4v_res_mondo_queue_empty:
204 retry 207 retry
205 208
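The unrolled copy in sun4v_res_mondo above moves one 64-byte queue entry to the kernel buffer as eight 8-byte loads/stores, leaving the offset advanced by 0x40 for the head update that follows. A loop-form sketch of the same movement:

```c
#include <assert.h>
#include <stdint.h>

/* Copy one 64-byte entry from 'queue' to 'kbuf' starting at byte
 * offset 'off'; returns the offset advanced past the entry, as the
 * unrolled ldxa/stxa sequence above leaves it in %g2. */
static unsigned long copy_entry(const uint64_t *queue, uint64_t *kbuf,
				unsigned long off)
{
	int i;

	for (i = 0; i < 8; i++) {
		kbuf[off / 8] = queue[off / 8];
		off += 0x08;
	}
	return off;
}
```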
206 sun4v_res_mondo_queue_full: 209 sun4v_res_mondo_queue_full:
207 /* The queue is full, consolidate our damage by setting 210 /* The queue is full, consolidate our damage by setting
208 * the head equal to the tail. We'll just trap again otherwise. 211 * the head equal to the tail. We'll just trap again otherwise.
209 * Call C code to log the event. 212 * Call C code to log the event.
210 */ 213 */
211 mov INTRQ_RESUM_MONDO_HEAD, %g2 214 mov INTRQ_RESUM_MONDO_HEAD, %g2
212 stxa %g4, [%g2] ASI_QUEUE 215 stxa %g4, [%g2] ASI_QUEUE
213 membar #Sync 216 membar #Sync
214 217
215 rdpr %pil, %g2 218 rdpr %pil, %g2
216 wrpr %g0, 15, %pil 219 wrpr %g0, 15, %pil
217 ba,pt %xcc, etrap_irq 220 ba,pt %xcc, etrap_irq
218 rd %pc, %g7 221 rd %pc, %g7
219 222 #ifdef CONFIG_TRACE_IRQFLAGS
223 call trace_hardirqs_off
224 nop
225 #endif
220 call sun4v_resum_overflow 226 call sun4v_resum_overflow
221 add %sp, PTREGS_OFF, %o0 227 add %sp, PTREGS_OFF, %o0
222 228
223 ba,pt %xcc, rtrap_irq 229 ba,pt %xcc, rtrap_irq
224 nop 230 nop
225 231
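The CONFIG_TRACE_IRQFLAGS hunks above all follow one pattern: etrap_irq disables interrupts in hardware, so lockdep's software view of the IRQ state must be brought in sync by calling trace_hardirqs_off before any C code runs. A toy sketch of that invariant (with `hardirqs_enabled` standing in for lockdep's per-cpu state):

```c
#include <assert.h>

static int hardirqs_enabled = 1;	/* lockdep's software view */

static void trace_hardirqs_off_sketch(void)
{
	hardirqs_enabled = 0;
}

static void etrap_irq_sketch(void)
{
	/* hardware: PIL raised, register state saved for C code */
	trace_hardirqs_off_sketch();	/* keep lockdep's view in sync */
}
```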
226 sun4v_nonres_mondo: 232 sun4v_nonres_mondo:
227 /* Head offset in %g2, tail offset in %g4. */ 233 /* Head offset in %g2, tail offset in %g4. */
228 mov INTRQ_NONRESUM_MONDO_HEAD, %g2 234 mov INTRQ_NONRESUM_MONDO_HEAD, %g2
229 ldxa [%g2] ASI_QUEUE, %g2 235 ldxa [%g2] ASI_QUEUE, %g2
230 mov INTRQ_NONRESUM_MONDO_TAIL, %g4 236 mov INTRQ_NONRESUM_MONDO_TAIL, %g4
231 ldxa [%g4] ASI_QUEUE, %g4 237 ldxa [%g4] ASI_QUEUE, %g4
232 cmp %g2, %g4 238 cmp %g2, %g4
233 be,pn %xcc, sun4v_nonres_mondo_queue_empty 239 be,pn %xcc, sun4v_nonres_mondo_queue_empty
234 nop 240 nop
235 241
236 /* Get &trap_block[smp_processor_id()] into %g3. */ 242 /* Get &trap_block[smp_processor_id()] into %g3. */
237 ldxa [%g0] ASI_SCRATCHPAD, %g3 243 ldxa [%g0] ASI_SCRATCHPAD, %g3
238 sub %g3, TRAP_PER_CPU_FAULT_INFO, %g3 244 sub %g3, TRAP_PER_CPU_FAULT_INFO, %g3
239 245
 240 /* Get NONRES mondo queue base phys address into %g5. */ 246 /* Get NONRES mondo queue base phys address into %g5. */
241 ldx [%g3 + TRAP_PER_CPU_NONRESUM_MONDO_PA], %g5 247 ldx [%g3 + TRAP_PER_CPU_NONRESUM_MONDO_PA], %g5
242 248
 243 /* Get NONRES kernel buffer base phys address into %g7. */ 249 /* Get NONRES kernel buffer base phys address into %g7. */
244 ldx [%g3 + TRAP_PER_CPU_NONRESUM_KBUF_PA], %g7 250 ldx [%g3 + TRAP_PER_CPU_NONRESUM_KBUF_PA], %g7
245 251
246 /* If the first word is non-zero, queue is full. */ 252 /* If the first word is non-zero, queue is full. */
247 ldxa [%g7 + %g2] ASI_PHYS_USE_EC, %g1 253 ldxa [%g7 + %g2] ASI_PHYS_USE_EC, %g1
248 brnz,pn %g1, sun4v_nonres_mondo_queue_full 254 brnz,pn %g1, sun4v_nonres_mondo_queue_full
249 nop 255 nop
250 256
251 /* Remember this entry's offset in %g1. */ 257 /* Remember this entry's offset in %g1. */
252 mov %g2, %g1 258 mov %g2, %g1
253 259
254 /* Copy 64-byte queue entry into kernel buffer. */ 260 /* Copy 64-byte queue entry into kernel buffer. */
255 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3 261 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3
256 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC 262 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC
257 add %g2, 0x08, %g2 263 add %g2, 0x08, %g2
258 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3 264 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3
259 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC 265 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC
260 add %g2, 0x08, %g2 266 add %g2, 0x08, %g2
261 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3 267 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3
262 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC 268 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC
263 add %g2, 0x08, %g2 269 add %g2, 0x08, %g2
264 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3 270 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3
265 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC 271 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC
266 add %g2, 0x08, %g2 272 add %g2, 0x08, %g2
267 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3 273 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3
268 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC 274 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC
269 add %g2, 0x08, %g2 275 add %g2, 0x08, %g2
270 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3 276 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3
271 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC 277 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC
272 add %g2, 0x08, %g2 278 add %g2, 0x08, %g2
273 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3 279 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3
274 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC 280 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC
275 add %g2, 0x08, %g2 281 add %g2, 0x08, %g2
276 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3 282 ldxa [%g5 + %g2] ASI_PHYS_USE_EC, %g3
277 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC 283 stxa %g3, [%g7 + %g2] ASI_PHYS_USE_EC
278 add %g2, 0x08, %g2 284 add %g2, 0x08, %g2
279 285
280 /* Update queue head pointer. */ 286 /* Update queue head pointer. */
281 sethi %hi(8192 - 1), %g4 287 sethi %hi(8192 - 1), %g4
282 or %g4, %lo(8192 - 1), %g4 288 or %g4, %lo(8192 - 1), %g4
283 and %g2, %g4, %g2 289 and %g2, %g4, %g2
284 290
285 mov INTRQ_NONRESUM_MONDO_HEAD, %g4 291 mov INTRQ_NONRESUM_MONDO_HEAD, %g4
286 stxa %g2, [%g4] ASI_QUEUE 292 stxa %g2, [%g4] ASI_QUEUE
287 membar #Sync 293 membar #Sync
288 294
289 /* Disable interrupts and save register state so we can call 295 /* Disable interrupts and save register state so we can call
290 * C code. The etrap handling will leave %g4 in %l4 for us 296 * C code. The etrap handling will leave %g4 in %l4 for us
291 * when it's done. 297 * when it's done.
292 */ 298 */
293 rdpr %pil, %g2 299 rdpr %pil, %g2
294 wrpr %g0, 15, %pil 300 wrpr %g0, 15, %pil
295 mov %g1, %g4 301 mov %g1, %g4
296 ba,pt %xcc, etrap_irq 302 ba,pt %xcc, etrap_irq
297 rd %pc, %g7 303 rd %pc, %g7
298 304 #ifdef CONFIG_TRACE_IRQFLAGS
305 call trace_hardirqs_off
306 nop
307 #endif
299 /* Log the event. */ 308 /* Log the event. */
300 add %sp, PTREGS_OFF, %o0 309 add %sp, PTREGS_OFF, %o0
301 call sun4v_nonresum_error 310 call sun4v_nonresum_error
302 mov %l4, %o1 311 mov %l4, %o1
303 312
304 /* Return from trap. */ 313 /* Return from trap. */
305 ba,pt %xcc, rtrap_irq 314 ba,pt %xcc, rtrap_irq
306 nop 315 nop
307 316
308 sun4v_nonres_mondo_queue_empty: 317 sun4v_nonres_mondo_queue_empty:
309 retry 318 retry
310 319
311 sun4v_nonres_mondo_queue_full: 320 sun4v_nonres_mondo_queue_full:
312 /* The queue is full, consolidate our damage by setting 321 /* The queue is full, consolidate our damage by setting
313 * the head equal to the tail. We'll just trap again otherwise. 322 * the head equal to the tail. We'll just trap again otherwise.
314 * Call C code to log the event. 323 * Call C code to log the event.
315 */ 324 */
316 mov INTRQ_NONRESUM_MONDO_HEAD, %g2 325 mov INTRQ_NONRESUM_MONDO_HEAD, %g2
317 stxa %g4, [%g2] ASI_QUEUE 326 stxa %g4, [%g2] ASI_QUEUE
318 membar #Sync 327 membar #Sync
319 328
320 rdpr %pil, %g2 329 rdpr %pil, %g2
321 wrpr %g0, 15, %pil 330 wrpr %g0, 15, %pil
322 ba,pt %xcc, etrap_irq 331 ba,pt %xcc, etrap_irq
323 rd %pc, %g7 332 rd %pc, %g7
324 333 #ifdef CONFIG_TRACE_IRQFLAGS
334 call trace_hardirqs_off
335 nop
336 #endif
325 call sun4v_nonresum_overflow 337 call sun4v_nonresum_overflow
326 add %sp, PTREGS_OFF, %o0 338 add %sp, PTREGS_OFF, %o0
327 339
328 ba,pt %xcc, rtrap_irq 340 ba,pt %xcc, rtrap_irq
329 nop 341 nop
330 342
arch/sparc64/mm/ultra.S
1 /* $Id: ultra.S,v 1.72 2002/02/09 19:49:31 davem Exp $ 1 /* $Id: ultra.S,v 1.72 2002/02/09 19:49:31 davem Exp $
2 * ultra.S: Don't expand these all over the place... 2 * ultra.S: Don't expand these all over the place...
3 * 3 *
4 * Copyright (C) 1997, 2000 David S. Miller (davem@redhat.com) 4 * Copyright (C) 1997, 2000 David S. Miller (davem@redhat.com)
5 */ 5 */
6 6
7 #include <asm/asi.h> 7 #include <asm/asi.h>
8 #include <asm/pgtable.h> 8 #include <asm/pgtable.h>
9 #include <asm/page.h> 9 #include <asm/page.h>
10 #include <asm/spitfire.h> 10 #include <asm/spitfire.h>
11 #include <asm/mmu_context.h> 11 #include <asm/mmu_context.h>
12 #include <asm/mmu.h> 12 #include <asm/mmu.h>
13 #include <asm/pil.h> 13 #include <asm/pil.h>
14 #include <asm/head.h> 14 #include <asm/head.h>
15 #include <asm/thread_info.h> 15 #include <asm/thread_info.h>
16 #include <asm/cacheflush.h> 16 #include <asm/cacheflush.h>
17 #include <asm/hypervisor.h> 17 #include <asm/hypervisor.h>
18 18
19 /* Basically, most of the Spitfire vs. Cheetah madness 19 /* Basically, most of the Spitfire vs. Cheetah madness
20 * has to do with the fact that Cheetah does not support 20 * has to do with the fact that Cheetah does not support
21 * IMMU flushes out of the secondary context. Someone needs 21 * IMMU flushes out of the secondary context. Someone needs
22 * to throw a south lake birthday party for the folks 22 * to throw a south lake birthday party for the folks
23 * in Microelectronics who refused to fix this shit. 23 * in Microelectronics who refused to fix this shit.
24 */ 24 */
25 25
26 /* This file is meant to be read efficiently by the CPU, not humans. 26 /* This file is meant to be read efficiently by the CPU, not humans.
 27 * Try not to screw this up for anybody... 27 * Try not to screw this up for anybody...
28 */ 28 */
29 .text 29 .text
30 .align 32 30 .align 32
31 .globl __flush_tlb_mm 31 .globl __flush_tlb_mm
32 __flush_tlb_mm: /* 18 insns */ 32 __flush_tlb_mm: /* 18 insns */
33 /* %o0=(ctx & TAG_CONTEXT_BITS), %o1=SECONDARY_CONTEXT */ 33 /* %o0=(ctx & TAG_CONTEXT_BITS), %o1=SECONDARY_CONTEXT */
34 ldxa [%o1] ASI_DMMU, %g2 34 ldxa [%o1] ASI_DMMU, %g2
35 cmp %g2, %o0 35 cmp %g2, %o0
36 bne,pn %icc, __spitfire_flush_tlb_mm_slow 36 bne,pn %icc, __spitfire_flush_tlb_mm_slow
37 mov 0x50, %g3 37 mov 0x50, %g3
38 stxa %g0, [%g3] ASI_DMMU_DEMAP 38 stxa %g0, [%g3] ASI_DMMU_DEMAP
39 stxa %g0, [%g3] ASI_IMMU_DEMAP 39 stxa %g0, [%g3] ASI_IMMU_DEMAP
40 sethi %hi(KERNBASE), %g3 40 sethi %hi(KERNBASE), %g3
41 flush %g3 41 flush %g3
42 retl 42 retl
43 nop 43 nop
44 nop 44 nop
45 nop 45 nop
46 nop 46 nop
47 nop 47 nop
48 nop 48 nop
49 nop 49 nop
50 nop 50 nop
51 nop 51 nop
52 nop 52 nop
53 53
54 .align 32 54 .align 32
55 .globl __flush_tlb_pending 55 .globl __flush_tlb_pending
56 __flush_tlb_pending: /* 26 insns */ 56 __flush_tlb_pending: /* 26 insns */
57 /* %o0 = context, %o1 = nr, %o2 = vaddrs[] */ 57 /* %o0 = context, %o1 = nr, %o2 = vaddrs[] */
58 rdpr %pstate, %g7 58 rdpr %pstate, %g7
59 sllx %o1, 3, %o1 59 sllx %o1, 3, %o1
60 andn %g7, PSTATE_IE, %g2 60 andn %g7, PSTATE_IE, %g2
61 wrpr %g2, %pstate 61 wrpr %g2, %pstate
62 mov SECONDARY_CONTEXT, %o4 62 mov SECONDARY_CONTEXT, %o4
63 ldxa [%o4] ASI_DMMU, %g2 63 ldxa [%o4] ASI_DMMU, %g2
64 stxa %o0, [%o4] ASI_DMMU 64 stxa %o0, [%o4] ASI_DMMU
65 1: sub %o1, (1 << 3), %o1 65 1: sub %o1, (1 << 3), %o1
66 ldx [%o2 + %o1], %o3 66 ldx [%o2 + %o1], %o3
67 andcc %o3, 1, %g0 67 andcc %o3, 1, %g0
68 andn %o3, 1, %o3 68 andn %o3, 1, %o3
69 be,pn %icc, 2f 69 be,pn %icc, 2f
70 or %o3, 0x10, %o3 70 or %o3, 0x10, %o3
71 stxa %g0, [%o3] ASI_IMMU_DEMAP 71 stxa %g0, [%o3] ASI_IMMU_DEMAP
72 2: stxa %g0, [%o3] ASI_DMMU_DEMAP 72 2: stxa %g0, [%o3] ASI_DMMU_DEMAP
73 membar #Sync 73 membar #Sync
74 brnz,pt %o1, 1b 74 brnz,pt %o1, 1b
75 nop 75 nop
76 stxa %g2, [%o4] ASI_DMMU 76 stxa %g2, [%o4] ASI_DMMU
77 sethi %hi(KERNBASE), %o4 77 sethi %hi(KERNBASE), %o4
78 flush %o4 78 flush %o4
79 retl 79 retl
80 wrpr %g7, 0x0, %pstate 80 wrpr %g7, 0x0, %pstate
81 nop 81 nop
82 nop 82 nop
83 nop 83 nop
84 nop 84 nop
85 85
86 .align 32 86 .align 32
87 .globl __flush_tlb_kernel_range 87 .globl __flush_tlb_kernel_range
88 __flush_tlb_kernel_range: /* 16 insns */ 88 __flush_tlb_kernel_range: /* 16 insns */
89 /* %o0=start, %o1=end */ 89 /* %o0=start, %o1=end */
90 cmp %o0, %o1 90 cmp %o0, %o1
91 be,pn %xcc, 2f 91 be,pn %xcc, 2f
92 sethi %hi(PAGE_SIZE), %o4 92 sethi %hi(PAGE_SIZE), %o4
93 sub %o1, %o0, %o3 93 sub %o1, %o0, %o3
94 sub %o3, %o4, %o3 94 sub %o3, %o4, %o3
95 or %o0, 0x20, %o0 ! Nucleus 95 or %o0, 0x20, %o0 ! Nucleus
96 1: stxa %g0, [%o0 + %o3] ASI_DMMU_DEMAP 96 1: stxa %g0, [%o0 + %o3] ASI_DMMU_DEMAP
97 stxa %g0, [%o0 + %o3] ASI_IMMU_DEMAP 97 stxa %g0, [%o0 + %o3] ASI_IMMU_DEMAP
98 membar #Sync 98 membar #Sync
99 brnz,pt %o3, 1b 99 brnz,pt %o3, 1b
100 sub %o3, %o4, %o3 100 sub %o3, %o4, %o3
101 2: sethi %hi(KERNBASE), %o3 101 2: sethi %hi(KERNBASE), %o3
102 flush %o3 102 flush %o3
103 retl 103 retl
104 nop 104 nop
105 nop 105 nop
106 106
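The loop in __flush_tlb_kernel_range above starts at offset `end - start - PAGE_SIZE` and walks down to zero, demapping one page per iteration, with an early exit when `start == end`. A sketch that counts the pages the loop would demap (assuming the default 8K PAGE_SIZE):

```c
#include <assert.h>

#define PAGE_SIZE_SK 8192UL	/* illustrative; PAGE_SHIFT == 13 */

static int flush_range_pages(unsigned long start, unsigned long end)
{
	unsigned long off;
	int n = 0;

	if (start == end)
		return 0;	/* be,pn %xcc, 2f */
	for (off = end - start - PAGE_SIZE_SK; ; off -= PAGE_SIZE_SK) {
		n++;		/* demap page at start + off */
		if (off == 0)
			break;	/* brnz,pt %o3, 1b not taken */
	}
	return n;
}
```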
107 __spitfire_flush_tlb_mm_slow: 107 __spitfire_flush_tlb_mm_slow:
108 rdpr %pstate, %g1 108 rdpr %pstate, %g1
109 wrpr %g1, PSTATE_IE, %pstate 109 wrpr %g1, PSTATE_IE, %pstate
110 stxa %o0, [%o1] ASI_DMMU 110 stxa %o0, [%o1] ASI_DMMU
111 stxa %g0, [%g3] ASI_DMMU_DEMAP 111 stxa %g0, [%g3] ASI_DMMU_DEMAP
112 stxa %g0, [%g3] ASI_IMMU_DEMAP 112 stxa %g0, [%g3] ASI_IMMU_DEMAP
113 flush %g6 113 flush %g6
114 stxa %g2, [%o1] ASI_DMMU 114 stxa %g2, [%o1] ASI_DMMU
115 sethi %hi(KERNBASE), %o1 115 sethi %hi(KERNBASE), %o1
116 flush %o1 116 flush %o1
117 retl 117 retl
118 wrpr %g1, 0, %pstate 118 wrpr %g1, 0, %pstate
119 119
120 /* 120 /*
121 * The following code flushes one page_size worth. 121 * The following code flushes one page_size worth.
122 */ 122 */
123 #if (PAGE_SHIFT == 13) 123 #if (PAGE_SHIFT == 13)
124 #define ITAG_MASK 0xfe 124 #define ITAG_MASK 0xfe
125 #elif (PAGE_SHIFT == 16) 125 #elif (PAGE_SHIFT == 16)
126 #define ITAG_MASK 0x7fe 126 #define ITAG_MASK 0x7fe
127 #else 127 #else
128 #error unsupported PAGE_SIZE 128 #error unsupported PAGE_SIZE
129 #endif 129 #endif
130 .section .kprobes.text, "ax" 130 .section .kprobes.text, "ax"
131 .align 32 131 .align 32
132 .globl __flush_icache_page 132 .globl __flush_icache_page
133 __flush_icache_page: /* %o0 = phys_page */ 133 __flush_icache_page: /* %o0 = phys_page */
134 membar #StoreStore 134 membar #StoreStore
135 srlx %o0, PAGE_SHIFT, %o0 135 srlx %o0, PAGE_SHIFT, %o0
136 sethi %uhi(PAGE_OFFSET), %g1 136 sethi %uhi(PAGE_OFFSET), %g1
137 sllx %o0, PAGE_SHIFT, %o0 137 sllx %o0, PAGE_SHIFT, %o0
138 sethi %hi(PAGE_SIZE), %g2 138 sethi %hi(PAGE_SIZE), %g2
139 sllx %g1, 32, %g1 139 sllx %g1, 32, %g1
140 add %o0, %g1, %o0 140 add %o0, %g1, %o0
141 1: subcc %g2, 32, %g2 141 1: subcc %g2, 32, %g2
142 bne,pt %icc, 1b 142 bne,pt %icc, 1b
143 flush %o0 + %g2 143 flush %o0 + %g2
144 retl 144 retl
145 nop 145 nop
146 146
147 #ifdef DCACHE_ALIASING_POSSIBLE 147 #ifdef DCACHE_ALIASING_POSSIBLE
148 148
149 #if (PAGE_SHIFT != 13) 149 #if (PAGE_SHIFT != 13)
150 #error only page shift of 13 is supported by dcache flush 150 #error only page shift of 13 is supported by dcache flush
151 #endif 151 #endif
152 152
153 #define DTAG_MASK 0x3 153 #define DTAG_MASK 0x3
154 154
155 /* This routine is Spitfire specific so the hardcoded 155 /* This routine is Spitfire specific so the hardcoded
156 * D-cache size and line-size are OK. 156 * D-cache size and line-size are OK.
157 */ 157 */
158 .align 64 158 .align 64
159 .globl __flush_dcache_page 159 .globl __flush_dcache_page
160 __flush_dcache_page: /* %o0=kaddr, %o1=flush_icache */ 160 __flush_dcache_page: /* %o0=kaddr, %o1=flush_icache */
161 sethi %uhi(PAGE_OFFSET), %g1 161 sethi %uhi(PAGE_OFFSET), %g1
162 sllx %g1, 32, %g1 162 sllx %g1, 32, %g1
163 sub %o0, %g1, %o0 ! physical address 163 sub %o0, %g1, %o0 ! physical address
164 srlx %o0, 11, %o0 ! make D-cache TAG 164 srlx %o0, 11, %o0 ! make D-cache TAG
165 sethi %hi(1 << 14), %o2 ! D-cache size 165 sethi %hi(1 << 14), %o2 ! D-cache size
166 sub %o2, (1 << 5), %o2 ! D-cache line size 166 sub %o2, (1 << 5), %o2 ! D-cache line size
167 1: ldxa [%o2] ASI_DCACHE_TAG, %o3 ! load D-cache TAG 167 1: ldxa [%o2] ASI_DCACHE_TAG, %o3 ! load D-cache TAG
168 andcc %o3, DTAG_MASK, %g0 ! Valid? 168 andcc %o3, DTAG_MASK, %g0 ! Valid?
169 be,pn %xcc, 2f ! Nope, branch 169 be,pn %xcc, 2f ! Nope, branch
170 andn %o3, DTAG_MASK, %o3 ! Clear valid bits 170 andn %o3, DTAG_MASK, %o3 ! Clear valid bits
171 cmp %o3, %o0 ! TAG match? 171 cmp %o3, %o0 ! TAG match?
172 bne,pt %xcc, 2f ! Nope, branch 172 bne,pt %xcc, 2f ! Nope, branch
173 nop 173 nop
174 stxa %g0, [%o2] ASI_DCACHE_TAG ! Invalidate TAG 174 stxa %g0, [%o2] ASI_DCACHE_TAG ! Invalidate TAG
175 membar #Sync 175 membar #Sync
176 2: brnz,pt %o2, 1b 176 2: brnz,pt %o2, 1b
177 sub %o2, (1 << 5), %o2 ! D-cache line size 177 sub %o2, (1 << 5), %o2 ! D-cache line size
178 178
179 /* The I-cache does not snoop local stores so we 179 /* The I-cache does not snoop local stores so we
180 * better flush that too when necessary. 180 * better flush that too when necessary.
181 */ 181 */
182 brnz,pt %o1, __flush_icache_page 182 brnz,pt %o1, __flush_icache_page
183 sllx %o0, 11, %o0 183 sllx %o0, 11, %o0
184 retl 184 retl
185 nop 185 nop
186 186
187 #endif /* DCACHE_ALIASING_POSSIBLE */ 187 #endif /* DCACHE_ALIASING_POSSIBLE */
188 188
189 .previous 189 .previous
190 190
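The Spitfire __flush_dcache_page tag walk above invalidates a line only when its valid bits (DTAG_MASK) are set and the tag, with those bits cleared, equals the page's tag (physical address shifted right by 11). That match test in isolation:

```c
#include <assert.h>
#include <stdint.h>

#define DTAG_MASK_SK 0x3UL	/* valid bits, as in the asm above */

/* 'page_tag' is the physical address >> 11; 'line_tag' is the raw
 * ASI_DCACHE_TAG value for one line. */
static int line_matches(uint64_t line_tag, uint64_t page_tag)
{
	if (!(line_tag & DTAG_MASK_SK))
		return 0;			/* not valid: skip */
	return (line_tag & ~DTAG_MASK_SK) == page_tag;
}
```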
191 /* Cheetah specific versions, patched at boot time. */ 191 /* Cheetah specific versions, patched at boot time. */
192 __cheetah_flush_tlb_mm: /* 19 insns */ 192 __cheetah_flush_tlb_mm: /* 19 insns */
193 rdpr %pstate, %g7 193 rdpr %pstate, %g7
194 andn %g7, PSTATE_IE, %g2 194 andn %g7, PSTATE_IE, %g2
195 wrpr %g2, 0x0, %pstate 195 wrpr %g2, 0x0, %pstate
196 wrpr %g0, 1, %tl 196 wrpr %g0, 1, %tl
197 mov PRIMARY_CONTEXT, %o2 197 mov PRIMARY_CONTEXT, %o2
198 mov 0x40, %g3 198 mov 0x40, %g3
199 ldxa [%o2] ASI_DMMU, %g2 199 ldxa [%o2] ASI_DMMU, %g2
200 srlx %g2, CTX_PGSZ1_NUC_SHIFT, %o1 200 srlx %g2, CTX_PGSZ1_NUC_SHIFT, %o1
201 sllx %o1, CTX_PGSZ1_NUC_SHIFT, %o1 201 sllx %o1, CTX_PGSZ1_NUC_SHIFT, %o1
202 or %o0, %o1, %o0 /* Preserve nucleus page size fields */ 202 or %o0, %o1, %o0 /* Preserve nucleus page size fields */
203 stxa %o0, [%o2] ASI_DMMU 203 stxa %o0, [%o2] ASI_DMMU
204 stxa %g0, [%g3] ASI_DMMU_DEMAP 204 stxa %g0, [%g3] ASI_DMMU_DEMAP
205 stxa %g0, [%g3] ASI_IMMU_DEMAP 205 stxa %g0, [%g3] ASI_IMMU_DEMAP
206 stxa %g2, [%o2] ASI_DMMU 206 stxa %g2, [%o2] ASI_DMMU
207 sethi %hi(KERNBASE), %o2 207 sethi %hi(KERNBASE), %o2
208 flush %o2 208 flush %o2
209 wrpr %g0, 0, %tl 209 wrpr %g0, 0, %tl
210 retl 210 retl
211 wrpr %g7, 0x0, %pstate 211 wrpr %g7, 0x0, %pstate
212 212
213 __cheetah_flush_tlb_pending: /* 27 insns */ 213 __cheetah_flush_tlb_pending: /* 27 insns */
214 /* %o0 = context, %o1 = nr, %o2 = vaddrs[] */ 214 /* %o0 = context, %o1 = nr, %o2 = vaddrs[] */
215 rdpr %pstate, %g7 215 rdpr %pstate, %g7
216 sllx %o1, 3, %o1 216 sllx %o1, 3, %o1
217 andn %g7, PSTATE_IE, %g2 217 andn %g7, PSTATE_IE, %g2
218 wrpr %g2, 0x0, %pstate 218 wrpr %g2, 0x0, %pstate
219 wrpr %g0, 1, %tl 219 wrpr %g0, 1, %tl
220 mov PRIMARY_CONTEXT, %o4 220 mov PRIMARY_CONTEXT, %o4
221 ldxa [%o4] ASI_DMMU, %g2 221 ldxa [%o4] ASI_DMMU, %g2
222 srlx %g2, CTX_PGSZ1_NUC_SHIFT, %o3 222 srlx %g2, CTX_PGSZ1_NUC_SHIFT, %o3
223 sllx %o3, CTX_PGSZ1_NUC_SHIFT, %o3 223 sllx %o3, CTX_PGSZ1_NUC_SHIFT, %o3
224 or %o0, %o3, %o0 /* Preserve nucleus page size fields */ 224 or %o0, %o3, %o0 /* Preserve nucleus page size fields */
225 stxa %o0, [%o4] ASI_DMMU 225 stxa %o0, [%o4] ASI_DMMU
226 1: sub %o1, (1 << 3), %o1 226 1: sub %o1, (1 << 3), %o1
227 ldx [%o2 + %o1], %o3 227 ldx [%o2 + %o1], %o3
228 andcc %o3, 1, %g0 228 andcc %o3, 1, %g0
229 be,pn %icc, 2f 229 be,pn %icc, 2f
230 andn %o3, 1, %o3 230 andn %o3, 1, %o3
231 stxa %g0, [%o3] ASI_IMMU_DEMAP 231 stxa %g0, [%o3] ASI_IMMU_DEMAP
232 2: stxa %g0, [%o3] ASI_DMMU_DEMAP 232 2: stxa %g0, [%o3] ASI_DMMU_DEMAP
233 membar #Sync 233 membar #Sync
234 brnz,pt %o1, 1b 234 brnz,pt %o1, 1b
235 nop 235 nop
236 stxa %g2, [%o4] ASI_DMMU 236 stxa %g2, [%o4] ASI_DMMU
237 sethi %hi(KERNBASE), %o4 237 sethi %hi(KERNBASE), %o4
238 flush %o4 238 flush %o4
239 wrpr %g0, 0, %tl 239 wrpr %g0, 0, %tl
240 retl 240 retl
241 wrpr %g7, 0x0, %pstate 241 wrpr %g7, 0x0, %pstate
242 242
243 #ifdef DCACHE_ALIASING_POSSIBLE 243 #ifdef DCACHE_ALIASING_POSSIBLE
244 __cheetah_flush_dcache_page: /* 11 insns */ 244 __cheetah_flush_dcache_page: /* 11 insns */
245 sethi %uhi(PAGE_OFFSET), %g1 245 sethi %uhi(PAGE_OFFSET), %g1
246 sllx %g1, 32, %g1 246 sllx %g1, 32, %g1
247 sub %o0, %g1, %o0 247 sub %o0, %g1, %o0
248 sethi %hi(PAGE_SIZE), %o4 248 sethi %hi(PAGE_SIZE), %o4
249 1: subcc %o4, (1 << 5), %o4 249 1: subcc %o4, (1 << 5), %o4
250 stxa %g0, [%o0 + %o4] ASI_DCACHE_INVALIDATE 250 stxa %g0, [%o0 + %o4] ASI_DCACHE_INVALIDATE
251 membar #Sync 251 membar #Sync
252 bne,pt %icc, 1b 252 bne,pt %icc, 1b
253 nop 253 nop
254 retl /* I-cache flush never needed on Cheetah, see callers. */ 254 retl /* I-cache flush never needed on Cheetah, see callers. */
255 nop 255 nop
256 #endif /* DCACHE_ALIASING_POSSIBLE */ 256 #endif /* DCACHE_ALIASING_POSSIBLE */
257 257
258 /* Hypervisor specific versions, patched at boot time. */ 258 /* Hypervisor specific versions, patched at boot time. */
259 __hypervisor_tlb_tl0_error: 259 __hypervisor_tlb_tl0_error:
260 save %sp, -192, %sp 260 save %sp, -192, %sp
261 mov %i0, %o0 261 mov %i0, %o0
262 call hypervisor_tlbop_error 262 call hypervisor_tlbop_error
263 mov %i1, %o1 263 mov %i1, %o1
264 ret 264 ret
265 restore 265 restore
266 266
267 __hypervisor_flush_tlb_mm: /* 10 insns */ 267 __hypervisor_flush_tlb_mm: /* 10 insns */
268 mov %o0, %o2 /* ARG2: mmu context */ 268 mov %o0, %o2 /* ARG2: mmu context */
269 mov 0, %o0 /* ARG0: CPU lists unimplemented */ 269 mov 0, %o0 /* ARG0: CPU lists unimplemented */
270 mov 0, %o1 /* ARG1: CPU lists unimplemented */ 270 mov 0, %o1 /* ARG1: CPU lists unimplemented */
271 mov HV_MMU_ALL, %o3 /* ARG3: flags */ 271 mov HV_MMU_ALL, %o3 /* ARG3: flags */
272 mov HV_FAST_MMU_DEMAP_CTX, %o5 272 mov HV_FAST_MMU_DEMAP_CTX, %o5
273 ta HV_FAST_TRAP 273 ta HV_FAST_TRAP
274 brnz,pn %o0, __hypervisor_tlb_tl0_error 274 brnz,pn %o0, __hypervisor_tlb_tl0_error
275 mov HV_FAST_MMU_DEMAP_CTX, %o1 275 mov HV_FAST_MMU_DEMAP_CTX, %o1
276 retl 276 retl
277 nop 277 nop
278 278
279 __hypervisor_flush_tlb_pending: /* 16 insns */ 279 __hypervisor_flush_tlb_pending: /* 16 insns */
280 /* %o0 = context, %o1 = nr, %o2 = vaddrs[] */ 280 /* %o0 = context, %o1 = nr, %o2 = vaddrs[] */
281 sllx %o1, 3, %g1 281 sllx %o1, 3, %g1
282 mov %o2, %g2 282 mov %o2, %g2
283 mov %o0, %g3 283 mov %o0, %g3
284 1: sub %g1, (1 << 3), %g1 284 1: sub %g1, (1 << 3), %g1
285 ldx [%g2 + %g1], %o0 /* ARG0: vaddr + IMMU-bit */ 285 ldx [%g2 + %g1], %o0 /* ARG0: vaddr + IMMU-bit */
286 mov %g3, %o1 /* ARG1: mmu context */ 286 mov %g3, %o1 /* ARG1: mmu context */
287 mov HV_MMU_ALL, %o2 /* ARG2: flags */ 287 mov HV_MMU_ALL, %o2 /* ARG2: flags */
288 srlx %o0, PAGE_SHIFT, %o0 288 srlx %o0, PAGE_SHIFT, %o0
289 sllx %o0, PAGE_SHIFT, %o0 289 sllx %o0, PAGE_SHIFT, %o0
290 ta HV_MMU_UNMAP_ADDR_TRAP 290 ta HV_MMU_UNMAP_ADDR_TRAP
291 brnz,pn %o0, __hypervisor_tlb_tl0_error 291 brnz,pn %o0, __hypervisor_tlb_tl0_error
292 mov HV_MMU_UNMAP_ADDR_TRAP, %o1 292 mov HV_MMU_UNMAP_ADDR_TRAP, %o1
293 brnz,pt %g1, 1b 293 brnz,pt %g1, 1b
294 nop 294 nop
295 retl 295 retl
296 nop 296 nop
297 297
298 __hypervisor_flush_tlb_kernel_range: /* 16 insns */ 298 __hypervisor_flush_tlb_kernel_range: /* 16 insns */
299 /* %o0=start, %o1=end */ 299 /* %o0=start, %o1=end */
300 cmp %o0, %o1 300 cmp %o0, %o1
301 be,pn %xcc, 2f 301 be,pn %xcc, 2f
302 sethi %hi(PAGE_SIZE), %g3 302 sethi %hi(PAGE_SIZE), %g3
303 mov %o0, %g1 303 mov %o0, %g1
304 sub %o1, %g1, %g2 304 sub %o1, %g1, %g2
305 sub %g2, %g3, %g2 305 sub %g2, %g3, %g2
306 1: add %g1, %g2, %o0 /* ARG0: virtual address */ 306 1: add %g1, %g2, %o0 /* ARG0: virtual address */
307 mov 0, %o1 /* ARG1: mmu context */ 307 mov 0, %o1 /* ARG1: mmu context */
308 mov HV_MMU_ALL, %o2 /* ARG2: flags */ 308 mov HV_MMU_ALL, %o2 /* ARG2: flags */
309 ta HV_MMU_UNMAP_ADDR_TRAP 309 ta HV_MMU_UNMAP_ADDR_TRAP
310 brnz,pn %o0, __hypervisor_tlb_tl0_error 310 brnz,pn %o0, __hypervisor_tlb_tl0_error
311 mov HV_MMU_UNMAP_ADDR_TRAP, %o1 311 mov HV_MMU_UNMAP_ADDR_TRAP, %o1
312 brnz,pt %g2, 1b 312 brnz,pt %g2, 1b
313 sub %g2, %g3, %g2 313 sub %g2, %g3, %g2
314 2: retl 314 2: retl
315 nop 315 nop
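The loop bounds in __hypervisor_flush_tlb_kernel_range above are easy to misread: the offset register %g2 starts at (end - start - PAGE_SIZE) and is decremented by PAGE_SIZE in the branch delay slot, so exactly one unmap hypercall is issued per page in [start, end). A hedged userspace C sketch of just that loop-count logic (the names and the PAGE_SIZE value are illustrative, not kernel API):

```c
/* Userspace model of __hypervisor_flush_tlb_kernel_range's loop:
 * the offset starts at (end - start - PAGE_SIZE) and is decremented
 * by PAGE_SIZE after each trap until it reaches 0, so one
 * HV_MMU_UNMAP_ADDR_TRAP is issued per page in [start, end). */
#define MODEL_PAGE_SIZE 8192UL  /* illustrative; sparc64 uses 8K base pages */

static unsigned long count_unmap_traps(unsigned long start, unsigned long end)
{
    unsigned long n = 0;
    unsigned long off;

    if (start == end)            /* the be,pn %xcc, 2f early exit */
        return 0;
    off = end - start - MODEL_PAGE_SIZE;
    for (;;) {
        n++;                     /* one ta HV_MMU_UNMAP_ADDR_TRAP */
        if (off == 0)            /* brnz,pt %g2, 1b falls through */
            break;
        off -= MODEL_PAGE_SIZE;  /* sub %g2, %g3, %g2 (delay slot) */
    }
    return n;
}
```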
316 316
317 #ifdef DCACHE_ALIASING_POSSIBLE 317 #ifdef DCACHE_ALIASING_POSSIBLE
318 /* XXX Niagara and friends have an 8K cache, so no aliasing is 318 /* XXX Niagara and friends have an 8K cache, so no aliasing is
319 * XXX possible, but nothing explicit in the Hypervisor API 319 * XXX possible, but nothing explicit in the Hypervisor API
320 * XXX guarantees this. 320 * XXX guarantees this.
321 */ 321 */
322 __hypervisor_flush_dcache_page: /* 2 insns */ 322 __hypervisor_flush_dcache_page: /* 2 insns */
323 retl 323 retl
324 nop 324 nop
325 #endif 325 #endif
326 326
327 tlb_patch_one: 327 tlb_patch_one:
328 1: lduw [%o1], %g1 328 1: lduw [%o1], %g1
329 stw %g1, [%o0] 329 stw %g1, [%o0]
330 flush %o0 330 flush %o0
331 subcc %o2, 1, %o2 331 subcc %o2, 1, %o2
332 add %o1, 4, %o1 332 add %o1, 4, %o1
333 bne,pt %icc, 1b 333 bne,pt %icc, 1b
334 add %o0, 4, %o0 334 add %o0, 4, %o0
335 retl 335 retl
336 nop 336 nop
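tlb_patch_one above is the patching primitive used by both cheetah_patch_cachetlbops and hypervisor_patch_cachetlbops: it copies %o2 instruction words from the replacement routine into the live kernel text, flushing each destination word as it goes. A minimal C sketch of the copy loop (the `flush` has no userspace equivalent and is noted as a comment; the function name is illustrative):

```c
/* Model of tlb_patch_one: copy `count` 32-bit instruction words from
 * src into dst.  On real sparc64 hardware each store is followed by a
 * `flush` of the destination address to keep the I-cache coherent
 * with the freshly patched text. */
static void patch_one_model(unsigned int *dst, const unsigned int *src,
                            unsigned long count)
{
    while (count--) {
        *dst++ = *src++;
        /* flush of the just-written word would go here on sparc64 */
    }
}
```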
337 337
338 .globl cheetah_patch_cachetlbops 338 .globl cheetah_patch_cachetlbops
339 cheetah_patch_cachetlbops: 339 cheetah_patch_cachetlbops:
340 save %sp, -128, %sp 340 save %sp, -128, %sp
341 341
342 sethi %hi(__flush_tlb_mm), %o0 342 sethi %hi(__flush_tlb_mm), %o0
343 or %o0, %lo(__flush_tlb_mm), %o0 343 or %o0, %lo(__flush_tlb_mm), %o0
344 sethi %hi(__cheetah_flush_tlb_mm), %o1 344 sethi %hi(__cheetah_flush_tlb_mm), %o1
345 or %o1, %lo(__cheetah_flush_tlb_mm), %o1 345 or %o1, %lo(__cheetah_flush_tlb_mm), %o1
346 call tlb_patch_one 346 call tlb_patch_one
347 mov 19, %o2 347 mov 19, %o2
348 348
349 sethi %hi(__flush_tlb_pending), %o0 349 sethi %hi(__flush_tlb_pending), %o0
350 or %o0, %lo(__flush_tlb_pending), %o0 350 or %o0, %lo(__flush_tlb_pending), %o0
351 sethi %hi(__cheetah_flush_tlb_pending), %o1 351 sethi %hi(__cheetah_flush_tlb_pending), %o1
352 or %o1, %lo(__cheetah_flush_tlb_pending), %o1 352 or %o1, %lo(__cheetah_flush_tlb_pending), %o1
353 call tlb_patch_one 353 call tlb_patch_one
354 mov 27, %o2 354 mov 27, %o2
355 355
356 #ifdef DCACHE_ALIASING_POSSIBLE 356 #ifdef DCACHE_ALIASING_POSSIBLE
357 sethi %hi(__flush_dcache_page), %o0 357 sethi %hi(__flush_dcache_page), %o0
358 or %o0, %lo(__flush_dcache_page), %o0 358 or %o0, %lo(__flush_dcache_page), %o0
359 sethi %hi(__cheetah_flush_dcache_page), %o1 359 sethi %hi(__cheetah_flush_dcache_page), %o1
360 or %o1, %lo(__cheetah_flush_dcache_page), %o1 360 or %o1, %lo(__cheetah_flush_dcache_page), %o1
361 call tlb_patch_one 361 call tlb_patch_one
362 mov 11, %o2 362 mov 11, %o2
363 #endif /* DCACHE_ALIASING_POSSIBLE */ 363 #endif /* DCACHE_ALIASING_POSSIBLE */
364 364
365 ret 365 ret
366 restore 366 restore
367 367
368 #ifdef CONFIG_SMP 368 #ifdef CONFIG_SMP
369 /* These are all called by the slaves of a cross call, at 369 /* These are all called by the slaves of a cross call, at
370 * trap level 1, with interrupts fully disabled. 370 * trap level 1, with interrupts fully disabled.
371 * 371 *
372 * Register usage: 372 * Register usage:
373 * %g5 mm->context (all tlb flushes) 373 * %g5 mm->context (all tlb flushes)
374 * %g1 address arg 1 (tlb page and range flushes) 374 * %g1 address arg 1 (tlb page and range flushes)
375 * %g7 address arg 2 (tlb range flush only) 375 * %g7 address arg 2 (tlb range flush only)
376 * 376 *
377 * %g6 scratch 1 377 * %g6 scratch 1
378 * %g2 scratch 2 378 * %g2 scratch 2
379 * %g3 scratch 3 379 * %g3 scratch 3
380 * %g4 scratch 4 380 * %g4 scratch 4
381 */ 381 */
382 .align 32 382 .align 32
383 .globl xcall_flush_tlb_mm 383 .globl xcall_flush_tlb_mm
384 xcall_flush_tlb_mm: /* 21 insns */ 384 xcall_flush_tlb_mm: /* 21 insns */
385 mov PRIMARY_CONTEXT, %g2 385 mov PRIMARY_CONTEXT, %g2
386 ldxa [%g2] ASI_DMMU, %g3 386 ldxa [%g2] ASI_DMMU, %g3
387 srlx %g3, CTX_PGSZ1_NUC_SHIFT, %g4 387 srlx %g3, CTX_PGSZ1_NUC_SHIFT, %g4
388 sllx %g4, CTX_PGSZ1_NUC_SHIFT, %g4 388 sllx %g4, CTX_PGSZ1_NUC_SHIFT, %g4
389 or %g5, %g4, %g5 /* Preserve nucleus page size fields */ 389 or %g5, %g4, %g5 /* Preserve nucleus page size fields */
390 stxa %g5, [%g2] ASI_DMMU 390 stxa %g5, [%g2] ASI_DMMU
391 mov 0x40, %g4 391 mov 0x40, %g4
392 stxa %g0, [%g4] ASI_DMMU_DEMAP 392 stxa %g0, [%g4] ASI_DMMU_DEMAP
393 stxa %g0, [%g4] ASI_IMMU_DEMAP 393 stxa %g0, [%g4] ASI_IMMU_DEMAP
394 stxa %g3, [%g2] ASI_DMMU 394 stxa %g3, [%g2] ASI_DMMU
395 retry 395 retry
396 nop 396 nop
397 nop 397 nop
398 nop 398 nop
399 nop 399 nop
400 nop 400 nop
401 nop 401 nop
402 nop 402 nop
403 nop 403 nop
404 nop 404 nop
405 nop 405 nop
406 406
407 .globl xcall_flush_tlb_pending 407 .globl xcall_flush_tlb_pending
408 xcall_flush_tlb_pending: /* 21 insns */ 408 xcall_flush_tlb_pending: /* 21 insns */
409 /* %g5=context, %g1=nr, %g7=vaddrs[] */ 409 /* %g5=context, %g1=nr, %g7=vaddrs[] */
410 sllx %g1, 3, %g1 410 sllx %g1, 3, %g1
411 mov PRIMARY_CONTEXT, %g4 411 mov PRIMARY_CONTEXT, %g4
412 ldxa [%g4] ASI_DMMU, %g2 412 ldxa [%g4] ASI_DMMU, %g2
413 srlx %g2, CTX_PGSZ1_NUC_SHIFT, %g4 413 srlx %g2, CTX_PGSZ1_NUC_SHIFT, %g4
414 sllx %g4, CTX_PGSZ1_NUC_SHIFT, %g4 414 sllx %g4, CTX_PGSZ1_NUC_SHIFT, %g4
415 or %g5, %g4, %g5 415 or %g5, %g4, %g5
416 mov PRIMARY_CONTEXT, %g4 416 mov PRIMARY_CONTEXT, %g4
417 stxa %g5, [%g4] ASI_DMMU 417 stxa %g5, [%g4] ASI_DMMU
418 1: sub %g1, (1 << 3), %g1 418 1: sub %g1, (1 << 3), %g1
419 ldx [%g7 + %g1], %g5 419 ldx [%g7 + %g1], %g5
420 andcc %g5, 0x1, %g0 420 andcc %g5, 0x1, %g0
421 be,pn %icc, 2f 421 be,pn %icc, 2f
422 422
423 andn %g5, 0x1, %g5 423 andn %g5, 0x1, %g5
424 stxa %g0, [%g5] ASI_IMMU_DEMAP 424 stxa %g0, [%g5] ASI_IMMU_DEMAP
425 2: stxa %g0, [%g5] ASI_DMMU_DEMAP 425 2: stxa %g0, [%g5] ASI_DMMU_DEMAP
426 membar #Sync 426 membar #Sync
427 brnz,pt %g1, 1b 427 brnz,pt %g1, 1b
428 nop 428 nop
429 stxa %g2, [%g4] ASI_DMMU 429 stxa %g2, [%g4] ASI_DMMU
430 retry 430 retry
431 nop 431 nop
432 432
433 .globl xcall_flush_tlb_kernel_range 433 .globl xcall_flush_tlb_kernel_range
434 xcall_flush_tlb_kernel_range: /* 25 insns */ 434 xcall_flush_tlb_kernel_range: /* 25 insns */
435 sethi %hi(PAGE_SIZE - 1), %g2 435 sethi %hi(PAGE_SIZE - 1), %g2
436 or %g2, %lo(PAGE_SIZE - 1), %g2 436 or %g2, %lo(PAGE_SIZE - 1), %g2
437 andn %g1, %g2, %g1 437 andn %g1, %g2, %g1
438 andn %g7, %g2, %g7 438 andn %g7, %g2, %g7
439 sub %g7, %g1, %g3 439 sub %g7, %g1, %g3
440 add %g2, 1, %g2 440 add %g2, 1, %g2
441 sub %g3, %g2, %g3 441 sub %g3, %g2, %g3
442 or %g1, 0x20, %g1 ! Nucleus 442 or %g1, 0x20, %g1 ! Nucleus
443 1: stxa %g0, [%g1 + %g3] ASI_DMMU_DEMAP 443 1: stxa %g0, [%g1 + %g3] ASI_DMMU_DEMAP
444 stxa %g0, [%g1 + %g3] ASI_IMMU_DEMAP 444 stxa %g0, [%g1 + %g3] ASI_IMMU_DEMAP
445 membar #Sync 445 membar #Sync
446 brnz,pt %g3, 1b 446 brnz,pt %g3, 1b
447 sub %g3, %g2, %g3 447 sub %g3, %g2, %g3
448 retry 448 retry
449 nop 449 nop
450 nop 450 nop
451 nop 451 nop
452 nop 452 nop
453 nop 453 nop
454 nop 454 nop
455 nop 455 nop
456 nop 456 nop
457 nop 457 nop
458 nop 458 nop
459 nop 459 nop
460 460
461 /* This runs in a very controlled environment, so we do 461 /* This runs in a very controlled environment, so we do
462 * not need to worry about BH races etc. 462 * not need to worry about BH races etc.
463 */ 463 */
464 .globl xcall_sync_tick 464 .globl xcall_sync_tick
465 xcall_sync_tick: 465 xcall_sync_tick:
466 466
467 661: rdpr %pstate, %g2 467 661: rdpr %pstate, %g2
468 wrpr %g2, PSTATE_IG | PSTATE_AG, %pstate 468 wrpr %g2, PSTATE_IG | PSTATE_AG, %pstate
469 .section .sun4v_2insn_patch, "ax" 469 .section .sun4v_2insn_patch, "ax"
470 .word 661b 470 .word 661b
471 nop 471 nop
472 nop 472 nop
473 .previous 473 .previous
474 474
475 rdpr %pil, %g2 475 rdpr %pil, %g2
476 wrpr %g0, 15, %pil 476 wrpr %g0, 15, %pil
477 sethi %hi(109f), %g7 477 sethi %hi(109f), %g7
478 b,pt %xcc, etrap_irq 478 b,pt %xcc, etrap_irq
479 109: or %g7, %lo(109b), %g7 479 109: or %g7, %lo(109b), %g7
480 #ifdef CONFIG_TRACE_IRQFLAGS
481 call trace_hardirqs_off
482 nop
483 #endif
480 call smp_synchronize_tick_client 484 call smp_synchronize_tick_client
481 nop 485 nop
482 clr %l6 486 clr %l6
483 b rtrap_xcall 487 b rtrap_xcall
484 ldx [%sp + PTREGS_OFF + PT_V9_TSTATE], %l1 488 ldx [%sp + PTREGS_OFF + PT_V9_TSTATE], %l1
485 489
486 /* NOTE: This is SPECIAL!! We do etrap/rtrap however 490 /* NOTE: This is SPECIAL!! We do etrap/rtrap however
487 * we choose to deal with the "BH's run with 491 * we choose to deal with the "BH's run with
488 * %pil==15" problem (described in asm/pil.h) 492 * %pil==15" problem (described in asm/pil.h)
489 * by just invoking rtrap directly past where 493 * by just invoking rtrap directly past where
490 * BH's are checked for. 494 * BH's are checked for.
491 * 495 *
492 * We do it like this because we do not want %pil==15 496 * We do it like this because we do not want %pil==15
493 * lockups to prevent regs being reported. 497 * lockups to prevent regs being reported.
494 */ 498 */
495 .globl xcall_report_regs 499 .globl xcall_report_regs
496 xcall_report_regs: 500 xcall_report_regs:
497 501
498 661: rdpr %pstate, %g2 502 661: rdpr %pstate, %g2
499 wrpr %g2, PSTATE_IG | PSTATE_AG, %pstate 503 wrpr %g2, PSTATE_IG | PSTATE_AG, %pstate
500 .section .sun4v_2insn_patch, "ax" 504 .section .sun4v_2insn_patch, "ax"
501 .word 661b 505 .word 661b
502 nop 506 nop
503 nop 507 nop
504 .previous 508 .previous
505 509
506 rdpr %pil, %g2 510 rdpr %pil, %g2
507 wrpr %g0, 15, %pil 511 wrpr %g0, 15, %pil
508 sethi %hi(109f), %g7 512 sethi %hi(109f), %g7
509 b,pt %xcc, etrap_irq 513 b,pt %xcc, etrap_irq
510 109: or %g7, %lo(109b), %g7 514 109: or %g7, %lo(109b), %g7
515 #ifdef CONFIG_TRACE_IRQFLAGS
516 call trace_hardirqs_off
517 nop
518 #endif
511 call __show_regs 519 call __show_regs
512 add %sp, PTREGS_OFF, %o0 520 add %sp, PTREGS_OFF, %o0
513 clr %l6 521 clr %l6
514 /* Has to be a non-v9 branch due to the large distance. */ 522 /* Has to be a non-v9 branch due to the large distance. */
515 b rtrap_xcall 523 b rtrap_xcall
516 ldx [%sp + PTREGS_OFF + PT_V9_TSTATE], %l1 524 ldx [%sp + PTREGS_OFF + PT_V9_TSTATE], %l1
517 525
518 #ifdef DCACHE_ALIASING_POSSIBLE 526 #ifdef DCACHE_ALIASING_POSSIBLE
519 .align 32 527 .align 32
520 .globl xcall_flush_dcache_page_cheetah 528 .globl xcall_flush_dcache_page_cheetah
521 xcall_flush_dcache_page_cheetah: /* %g1 == physical page address */ 529 xcall_flush_dcache_page_cheetah: /* %g1 == physical page address */
522 sethi %hi(PAGE_SIZE), %g3 530 sethi %hi(PAGE_SIZE), %g3
523 1: subcc %g3, (1 << 5), %g3 531 1: subcc %g3, (1 << 5), %g3
524 stxa %g0, [%g1 + %g3] ASI_DCACHE_INVALIDATE 532 stxa %g0, [%g1 + %g3] ASI_DCACHE_INVALIDATE
525 membar #Sync 533 membar #Sync
526 bne,pt %icc, 1b 534 bne,pt %icc, 1b
527 nop 535 nop
528 retry 536 retry
529 nop 537 nop
530 #endif /* DCACHE_ALIASING_POSSIBLE */ 538 #endif /* DCACHE_ALIASING_POSSIBLE */
531 539
532 .globl xcall_flush_dcache_page_spitfire 540 .globl xcall_flush_dcache_page_spitfire
533 xcall_flush_dcache_page_spitfire: /* %g1 == physical page address 541 xcall_flush_dcache_page_spitfire: /* %g1 == physical page address
534 %g7 == kernel page virtual address 542 %g7 == kernel page virtual address
535 %g5 == (page->mapping != NULL) */ 543 %g5 == (page->mapping != NULL) */
536 #ifdef DCACHE_ALIASING_POSSIBLE 544 #ifdef DCACHE_ALIASING_POSSIBLE
537 srlx %g1, (13 - 2), %g1 ! Form tag comparator 545 srlx %g1, (13 - 2), %g1 ! Form tag comparator
538 sethi %hi(L1DCACHE_SIZE), %g3 ! D$ size == 16K 546 sethi %hi(L1DCACHE_SIZE), %g3 ! D$ size == 16K
539 sub %g3, (1 << 5), %g3 ! D$ linesize == 32 547 sub %g3, (1 << 5), %g3 ! D$ linesize == 32
540 1: ldxa [%g3] ASI_DCACHE_TAG, %g2 548 1: ldxa [%g3] ASI_DCACHE_TAG, %g2
541 andcc %g2, 0x3, %g0 549 andcc %g2, 0x3, %g0
542 be,pn %xcc, 2f 550 be,pn %xcc, 2f
543 andn %g2, 0x3, %g2 551 andn %g2, 0x3, %g2
544 cmp %g2, %g1 552 cmp %g2, %g1
545 553
546 bne,pt %xcc, 2f 554 bne,pt %xcc, 2f
547 nop 555 nop
548 stxa %g0, [%g3] ASI_DCACHE_TAG 556 stxa %g0, [%g3] ASI_DCACHE_TAG
549 membar #Sync 557 membar #Sync
550 2: cmp %g3, 0 558 2: cmp %g3, 0
551 bne,pt %xcc, 1b 559 bne,pt %xcc, 1b
552 sub %g3, (1 << 5), %g3 560 sub %g3, (1 << 5), %g3
553 561
554 brz,pn %g5, 2f 562 brz,pn %g5, 2f
555 #endif /* DCACHE_ALIASING_POSSIBLE */ 563 #endif /* DCACHE_ALIASING_POSSIBLE */
556 sethi %hi(PAGE_SIZE), %g3 564 sethi %hi(PAGE_SIZE), %g3
557 565
558 1: flush %g7 566 1: flush %g7
559 subcc %g3, (1 << 5), %g3 567 subcc %g3, (1 << 5), %g3
560 bne,pt %icc, 1b 568 bne,pt %icc, 1b
561 add %g7, (1 << 5), %g7 569 add %g7, (1 << 5), %g7
562 570
563 2: retry 571 2: retry
564 nop 572 nop
565 nop 573 nop
566 574
567 /* %g5: error 575 /* %g5: error
568 * %g6: tlb op 576 * %g6: tlb op
569 */ 577 */
570 __hypervisor_tlb_xcall_error: 578 __hypervisor_tlb_xcall_error:
571 mov %g5, %g4 579 mov %g5, %g4
572 mov %g6, %g5 580 mov %g6, %g5
573 ba,pt %xcc, etrap 581 ba,pt %xcc, etrap
574 rd %pc, %g7 582 rd %pc, %g7
575 mov %l4, %o0 583 mov %l4, %o0
576 call hypervisor_tlbop_error_xcall 584 call hypervisor_tlbop_error_xcall
577 mov %l5, %o1 585 mov %l5, %o1
578 ba,a,pt %xcc, rtrap_clr_l6 586 ba,a,pt %xcc, rtrap_clr_l6
579 587
580 .globl __hypervisor_xcall_flush_tlb_mm 588 .globl __hypervisor_xcall_flush_tlb_mm
581 __hypervisor_xcall_flush_tlb_mm: /* 21 insns */ 589 __hypervisor_xcall_flush_tlb_mm: /* 21 insns */
582 /* %g5=ctx, g1,g2,g3,g4,g7=scratch, %g6=unusable */ 590 /* %g5=ctx, g1,g2,g3,g4,g7=scratch, %g6=unusable */
583 mov %o0, %g2 591 mov %o0, %g2
584 mov %o1, %g3 592 mov %o1, %g3
585 mov %o2, %g4 593 mov %o2, %g4
586 mov %o3, %g1 594 mov %o3, %g1
587 mov %o5, %g7 595 mov %o5, %g7
588 clr %o0 /* ARG0: CPU lists unimplemented */ 596 clr %o0 /* ARG0: CPU lists unimplemented */
589 clr %o1 /* ARG1: CPU lists unimplemented */ 597 clr %o1 /* ARG1: CPU lists unimplemented */
590 mov %g5, %o2 /* ARG2: mmu context */ 598 mov %g5, %o2 /* ARG2: mmu context */
591 mov HV_MMU_ALL, %o3 /* ARG3: flags */ 599 mov HV_MMU_ALL, %o3 /* ARG3: flags */
592 mov HV_FAST_MMU_DEMAP_CTX, %o5 600 mov HV_FAST_MMU_DEMAP_CTX, %o5
593 ta HV_FAST_TRAP 601 ta HV_FAST_TRAP
594 mov HV_FAST_MMU_DEMAP_CTX, %g6 602 mov HV_FAST_MMU_DEMAP_CTX, %g6
595 brnz,pn %o0, __hypervisor_tlb_xcall_error 603 brnz,pn %o0, __hypervisor_tlb_xcall_error
596 mov %o0, %g5 604 mov %o0, %g5
597 mov %g2, %o0 605 mov %g2, %o0
598 mov %g3, %o1 606 mov %g3, %o1
599 mov %g4, %o2 607 mov %g4, %o2
600 mov %g1, %o3 608 mov %g1, %o3
601 mov %g7, %o5 609 mov %g7, %o5
602 membar #Sync 610 membar #Sync
603 retry 611 retry
604 612
605 .globl __hypervisor_xcall_flush_tlb_pending 613 .globl __hypervisor_xcall_flush_tlb_pending
606 __hypervisor_xcall_flush_tlb_pending: /* 21 insns */ 614 __hypervisor_xcall_flush_tlb_pending: /* 21 insns */
607 /* %g5=ctx, %g1=nr, %g7=vaddrs[], %g2,%g3,%g4,g6=scratch */ 615 /* %g5=ctx, %g1=nr, %g7=vaddrs[], %g2,%g3,%g4,g6=scratch */
608 sllx %g1, 3, %g1 616 sllx %g1, 3, %g1
609 mov %o0, %g2 617 mov %o0, %g2
610 mov %o1, %g3 618 mov %o1, %g3
611 mov %o2, %g4 619 mov %o2, %g4
612 1: sub %g1, (1 << 3), %g1 620 1: sub %g1, (1 << 3), %g1
613 ldx [%g7 + %g1], %o0 /* ARG0: virtual address */ 621 ldx [%g7 + %g1], %o0 /* ARG0: virtual address */
614 mov %g5, %o1 /* ARG1: mmu context */ 622 mov %g5, %o1 /* ARG1: mmu context */
615 mov HV_MMU_ALL, %o2 /* ARG2: flags */ 623 mov HV_MMU_ALL, %o2 /* ARG2: flags */
616 srlx %o0, PAGE_SHIFT, %o0 624 srlx %o0, PAGE_SHIFT, %o0
617 sllx %o0, PAGE_SHIFT, %o0 625 sllx %o0, PAGE_SHIFT, %o0
618 ta HV_MMU_UNMAP_ADDR_TRAP 626 ta HV_MMU_UNMAP_ADDR_TRAP
619 mov HV_MMU_UNMAP_ADDR_TRAP, %g6 627 mov HV_MMU_UNMAP_ADDR_TRAP, %g6
620 brnz,a,pn %o0, __hypervisor_tlb_xcall_error 628 brnz,a,pn %o0, __hypervisor_tlb_xcall_error
621 mov %o0, %g5 629 mov %o0, %g5
622 brnz,pt %g1, 1b 630 brnz,pt %g1, 1b
623 nop 631 nop
624 mov %g2, %o0 632 mov %g2, %o0
625 mov %g3, %o1 633 mov %g3, %o1
626 mov %g4, %o2 634 mov %g4, %o2
627 membar #Sync 635 membar #Sync
628 retry 636 retry
629 637
630 .globl __hypervisor_xcall_flush_tlb_kernel_range 638 .globl __hypervisor_xcall_flush_tlb_kernel_range
631 __hypervisor_xcall_flush_tlb_kernel_range: /* 25 insns */ 639 __hypervisor_xcall_flush_tlb_kernel_range: /* 25 insns */
632 /* %g1=start, %g7=end, g2,g3,g4,g5,g6=scratch */ 640 /* %g1=start, %g7=end, g2,g3,g4,g5,g6=scratch */
633 sethi %hi(PAGE_SIZE - 1), %g2 641 sethi %hi(PAGE_SIZE - 1), %g2
634 or %g2, %lo(PAGE_SIZE - 1), %g2 642 or %g2, %lo(PAGE_SIZE - 1), %g2
635 andn %g1, %g2, %g1 643 andn %g1, %g2, %g1
636 andn %g7, %g2, %g7 644 andn %g7, %g2, %g7
637 sub %g7, %g1, %g3 645 sub %g7, %g1, %g3
638 add %g2, 1, %g2 646 add %g2, 1, %g2
639 sub %g3, %g2, %g3 647 sub %g3, %g2, %g3
640 mov %o0, %g2 648 mov %o0, %g2
641 mov %o1, %g4 649 mov %o1, %g4
642 mov %o2, %g7 650 mov %o2, %g7
643 1: add %g1, %g3, %o0 /* ARG0: virtual address */ 651 1: add %g1, %g3, %o0 /* ARG0: virtual address */
644 mov 0, %o1 /* ARG1: mmu context */ 652 mov 0, %o1 /* ARG1: mmu context */
645 mov HV_MMU_ALL, %o2 /* ARG2: flags */ 653 mov HV_MMU_ALL, %o2 /* ARG2: flags */
646 ta HV_MMU_UNMAP_ADDR_TRAP 654 ta HV_MMU_UNMAP_ADDR_TRAP
647 mov HV_MMU_UNMAP_ADDR_TRAP, %g6 655 mov HV_MMU_UNMAP_ADDR_TRAP, %g6
648 brnz,pn %o0, __hypervisor_tlb_xcall_error 656 brnz,pn %o0, __hypervisor_tlb_xcall_error
649 mov %o0, %g5 657 mov %o0, %g5
650 sethi %hi(PAGE_SIZE), %o2 658 sethi %hi(PAGE_SIZE), %o2
651 brnz,pt %g3, 1b 659 brnz,pt %g3, 1b
652 sub %g3, %o2, %g3 660 sub %g3, %o2, %g3
653 mov %g2, %o0 661 mov %g2, %o0
654 mov %g4, %o1 662 mov %g4, %o1
655 mov %g7, %o2 663 mov %g7, %o2
656 membar #Sync 664 membar #Sync
657 retry 665 retry
658 666
659 /* These just get rescheduled to PIL vectors. */ 667 /* These just get rescheduled to PIL vectors. */
660 .globl xcall_call_function 668 .globl xcall_call_function
661 xcall_call_function: 669 xcall_call_function:
662 wr %g0, (1 << PIL_SMP_CALL_FUNC), %set_softint 670 wr %g0, (1 << PIL_SMP_CALL_FUNC), %set_softint
663 retry 671 retry
664 672
665 .globl xcall_receive_signal 673 .globl xcall_receive_signal
666 xcall_receive_signal: 674 xcall_receive_signal:
667 wr %g0, (1 << PIL_SMP_RECEIVE_SIGNAL), %set_softint 675 wr %g0, (1 << PIL_SMP_RECEIVE_SIGNAL), %set_softint
668 retry 676 retry
669 677
670 .globl xcall_capture 678 .globl xcall_capture
671 xcall_capture: 679 xcall_capture:
672 wr %g0, (1 << PIL_SMP_CAPTURE), %set_softint 680 wr %g0, (1 << PIL_SMP_CAPTURE), %set_softint
673 retry 681 retry
674 682
675 .globl xcall_new_mmu_context_version 683 .globl xcall_new_mmu_context_version
676 xcall_new_mmu_context_version: 684 xcall_new_mmu_context_version:
677 wr %g0, (1 << PIL_SMP_CTX_NEW_VERSION), %set_softint 685 wr %g0, (1 << PIL_SMP_CTX_NEW_VERSION), %set_softint
678 retry 686 retry
679 687
680 #endif /* CONFIG_SMP */ 688 #endif /* CONFIG_SMP */
681 689
682 690
683 .globl hypervisor_patch_cachetlbops 691 .globl hypervisor_patch_cachetlbops
684 hypervisor_patch_cachetlbops: 692 hypervisor_patch_cachetlbops:
685 save %sp, -128, %sp 693 save %sp, -128, %sp
686 694
687 sethi %hi(__flush_tlb_mm), %o0 695 sethi %hi(__flush_tlb_mm), %o0
688 or %o0, %lo(__flush_tlb_mm), %o0 696 or %o0, %lo(__flush_tlb_mm), %o0
689 sethi %hi(__hypervisor_flush_tlb_mm), %o1 697 sethi %hi(__hypervisor_flush_tlb_mm), %o1
690 or %o1, %lo(__hypervisor_flush_tlb_mm), %o1 698 or %o1, %lo(__hypervisor_flush_tlb_mm), %o1
691 call tlb_patch_one 699 call tlb_patch_one
692 mov 10, %o2 700 mov 10, %o2
693 701
694 sethi %hi(__flush_tlb_pending), %o0 702 sethi %hi(__flush_tlb_pending), %o0
695 or %o0, %lo(__flush_tlb_pending), %o0 703 or %o0, %lo(__flush_tlb_pending), %o0
696 sethi %hi(__hypervisor_flush_tlb_pending), %o1 704 sethi %hi(__hypervisor_flush_tlb_pending), %o1
697 or %o1, %lo(__hypervisor_flush_tlb_pending), %o1 705 or %o1, %lo(__hypervisor_flush_tlb_pending), %o1
698 call tlb_patch_one 706 call tlb_patch_one
699 mov 16, %o2 707 mov 16, %o2
700 708
701 sethi %hi(__flush_tlb_kernel_range), %o0 709 sethi %hi(__flush_tlb_kernel_range), %o0
702 or %o0, %lo(__flush_tlb_kernel_range), %o0 710 or %o0, %lo(__flush_tlb_kernel_range), %o0
703 sethi %hi(__hypervisor_flush_tlb_kernel_range), %o1 711 sethi %hi(__hypervisor_flush_tlb_kernel_range), %o1
704 or %o1, %lo(__hypervisor_flush_tlb_kernel_range), %o1 712 or %o1, %lo(__hypervisor_flush_tlb_kernel_range), %o1
705 call tlb_patch_one 713 call tlb_patch_one
706 mov 16, %o2 714 mov 16, %o2
707 715
708 #ifdef DCACHE_ALIASING_POSSIBLE 716 #ifdef DCACHE_ALIASING_POSSIBLE
709 sethi %hi(__flush_dcache_page), %o0 717 sethi %hi(__flush_dcache_page), %o0
710 or %o0, %lo(__flush_dcache_page), %o0 718 or %o0, %lo(__flush_dcache_page), %o0
711 sethi %hi(__hypervisor_flush_dcache_page), %o1 719 sethi %hi(__hypervisor_flush_dcache_page), %o1
712 or %o1, %lo(__hypervisor_flush_dcache_page), %o1 720 or %o1, %lo(__hypervisor_flush_dcache_page), %o1
713 call tlb_patch_one 721 call tlb_patch_one
714 mov 2, %o2 722 mov 2, %o2
715 #endif /* DCACHE_ALIASING_POSSIBLE */ 723 #endif /* DCACHE_ALIASING_POSSIBLE */
716 724
717 #ifdef CONFIG_SMP 725 #ifdef CONFIG_SMP
718 sethi %hi(xcall_flush_tlb_mm), %o0 726 sethi %hi(xcall_flush_tlb_mm), %o0
719 or %o0, %lo(xcall_flush_tlb_mm), %o0 727 or %o0, %lo(xcall_flush_tlb_mm), %o0
720 sethi %hi(__hypervisor_xcall_flush_tlb_mm), %o1 728 sethi %hi(__hypervisor_xcall_flush_tlb_mm), %o1
721 or %o1, %lo(__hypervisor_xcall_flush_tlb_mm), %o1 729 or %o1, %lo(__hypervisor_xcall_flush_tlb_mm), %o1
722 call tlb_patch_one 730 call tlb_patch_one
723 mov 21, %o2 731 mov 21, %o2
724 732
725 sethi %hi(xcall_flush_tlb_pending), %o0 733 sethi %hi(xcall_flush_tlb_pending), %o0
726 or %o0, %lo(xcall_flush_tlb_pending), %o0 734 or %o0, %lo(xcall_flush_tlb_pending), %o0
727 sethi %hi(__hypervisor_xcall_flush_tlb_pending), %o1 735 sethi %hi(__hypervisor_xcall_flush_tlb_pending), %o1
728 or %o1, %lo(__hypervisor_xcall_flush_tlb_pending), %o1 736 or %o1, %lo(__hypervisor_xcall_flush_tlb_pending), %o1
729 call tlb_patch_one 737 call tlb_patch_one
730 mov 21, %o2 738 mov 21, %o2
731 739
732 sethi %hi(xcall_flush_tlb_kernel_range), %o0 740 sethi %hi(xcall_flush_tlb_kernel_range), %o0
733 or %o0, %lo(xcall_flush_tlb_kernel_range), %o0 741 or %o0, %lo(xcall_flush_tlb_kernel_range), %o0
734 sethi %hi(__hypervisor_xcall_flush_tlb_kernel_range), %o1 742 sethi %hi(__hypervisor_xcall_flush_tlb_kernel_range), %o1
735 or %o1, %lo(__hypervisor_xcall_flush_tlb_kernel_range), %o1 743 or %o1, %lo(__hypervisor_xcall_flush_tlb_kernel_range), %o1
736 call tlb_patch_one 744 call tlb_patch_one
737 mov 25, %o2 745 mov 25, %o2
738 #endif /* CONFIG_SMP */ 746 #endif /* CONFIG_SMP */
739 747
740 ret 748 ret
741 restore 749 restore
742 750
include/asm-sparc64/irqflags.h
File was created 1 /*
2 * include/asm-sparc64/irqflags.h
3 *
4 * IRQ flags handling
5 *
6 * This file gets included from lowlevel asm headers too, to provide
7 * wrapped versions of the local_irq_*() APIs, based on the
8 * raw_local_irq_*() functions from the lowlevel headers.
9 */
10 #ifndef _ASM_IRQFLAGS_H
11 #define _ASM_IRQFLAGS_H
12
13 #ifndef __ASSEMBLY__
14
15 static inline unsigned long __raw_local_save_flags(void)
16 {
17 unsigned long flags;
18
19 __asm__ __volatile__(
20 "rdpr %%pil, %0"
21 : "=r" (flags)
22 );
23
24 return flags;
25 }
26
27 #define raw_local_save_flags(flags) \
28 do { (flags) = __raw_local_save_flags(); } while (0)
29
30 static inline void raw_local_irq_restore(unsigned long flags)
31 {
32 __asm__ __volatile__(
33 "wrpr %0, %%pil"
34 : /* no output */
35 : "r" (flags)
36 : "memory"
37 );
38 }
39
40 static inline void raw_local_irq_disable(void)
41 {
42 __asm__ __volatile__(
43 "wrpr 15, %%pil"
44 : /* no outputs */
45 : /* no inputs */
46 : "memory"
47 );
48 }
49
50 static inline void raw_local_irq_enable(void)
51 {
52 __asm__ __volatile__(
53 "wrpr 0, %%pil"
54 : /* no outputs */
55 : /* no inputs */
56 : "memory"
57 );
58 }
59
60 static inline int raw_irqs_disabled_flags(unsigned long flags)
61 {
62 return (flags > 0);
63 }
64
65 static inline int raw_irqs_disabled(void)
66 {
67 unsigned long flags = __raw_local_save_flags();
68
69 return raw_irqs_disabled_flags(flags);
70 }
71
72 /*
73 * For spinlocks, etc:
74 */
75 static inline unsigned long __raw_local_irq_save(void)
76 {
77 unsigned long flags = __raw_local_save_flags();
78
79 raw_local_irq_disable();
80
81 return flags;
82 }
83
84 #define raw_local_irq_save(flags) \
85 do { (flags) = __raw_local_irq_save(); } while (0)
86
87 #endif /* (__ASSEMBLY__) */
88
89 #endif /* !(_ASM_IRQFLAGS_H) */
90
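The contract of the new header is that the saved flags value is simply the %pil priority level: nonzero means interrupts are masked, and raw_local_irq_save reads the old level before raising it to 15. A userspace sketch of that save/disable/restore protocol, with a global standing in for the %pil register (all names here are illustrative models, not the kernel API):

```c
/* Userspace model of the PIL-based irqflags protocol from
 * asm-sparc64/irqflags.h: a global variable stands in for %pil. */
static unsigned long model_pil;

static unsigned long model_save_flags(void)        { return model_pil; }
static void model_irq_disable(void)                { model_pil = 15; }
static void model_irq_restore(unsigned long flags) { model_pil = flags; }

/* raw_irqs_disabled_flags: any nonzero PIL masks interrupts */
static int model_irqs_disabled_flags(unsigned long flags)
{
    return flags > 0;
}

/* __raw_local_irq_save: read the old flags, then raise PIL to 15 */
static unsigned long model_irq_save(void)
{
    unsigned long flags = model_save_flags();
    model_irq_disable();
    return flags;
}
```

Note that restore simply writes the saved level back, so nested save/restore pairs unwind correctly regardless of whether the outer context had interrupts enabled.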
include/asm-sparc64/rwsem.h
1 /* $Id: rwsem.h,v 1.5 2001/11/18 00:12:56 davem Exp $ 1 /* $Id: rwsem.h,v 1.5 2001/11/18 00:12:56 davem Exp $
2 * rwsem.h: R/W semaphores implemented using CAS 2 * rwsem.h: R/W semaphores implemented using CAS
3 * 3 *
4 * Written by David S. Miller (davem@redhat.com), 2001. 4 * Written by David S. Miller (davem@redhat.com), 2001.
5 * Derived from asm-i386/rwsem.h 5 * Derived from asm-i386/rwsem.h
6 */ 6 */
7 #ifndef _SPARC64_RWSEM_H 7 #ifndef _SPARC64_RWSEM_H
8 #define _SPARC64_RWSEM_H 8 #define _SPARC64_RWSEM_H
9 9
10 #ifndef _LINUX_RWSEM_H 10 #ifndef _LINUX_RWSEM_H
11 #error "please don't include asm/rwsem.h directly, use linux/rwsem.h instead" 11 #error "please don't include asm/rwsem.h directly, use linux/rwsem.h instead"
12 #endif 12 #endif
13 13
14 #ifdef __KERNEL__ 14 #ifdef __KERNEL__
15 15
16 #include <linux/list.h> 16 #include <linux/list.h>
17 #include <linux/spinlock.h> 17 #include <linux/spinlock.h>
18 #include <asm/rwsem-const.h> 18 #include <asm/rwsem-const.h>
19 19
20 struct rwsem_waiter; 20 struct rwsem_waiter;
21 21
22 struct rw_semaphore { 22 struct rw_semaphore {
23 signed int count; 23 signed int count;
24 spinlock_t wait_lock; 24 spinlock_t wait_lock;
25 struct list_head wait_list; 25 struct list_head wait_list;
26 #ifdef CONFIG_DEBUG_LOCK_ALLOC
27 struct lockdep_map dep_map;
28 #endif
26 }; 29 };
27 30
31 #ifdef CONFIG_DEBUG_LOCK_ALLOC
32 # define __RWSEM_DEP_MAP_INIT(lockname) , .dep_map = { .name = #lockname }
33 #else
34 # define __RWSEM_DEP_MAP_INIT(lockname)
35 #endif
36
28 #define __RWSEM_INITIALIZER(name) \ 37 #define __RWSEM_INITIALIZER(name) \
29 { RWSEM_UNLOCKED_VALUE, SPIN_LOCK_UNLOCKED, LIST_HEAD_INIT((name).wait_list) } 38 { RWSEM_UNLOCKED_VALUE, SPIN_LOCK_UNLOCKED, LIST_HEAD_INIT((name).wait_list) \
39 __RWSEM_DEP_MAP_INIT(name) }
30 40
31 #define DECLARE_RWSEM(name) \ 41 #define DECLARE_RWSEM(name) \
32 struct rw_semaphore name = __RWSEM_INITIALIZER(name) 42 struct rw_semaphore name = __RWSEM_INITIALIZER(name)
33 43
34 static __inline__ void init_rwsem(struct rw_semaphore *sem) 44 extern void __init_rwsem(struct rw_semaphore *sem, const char *name,
35 { 45 struct lock_class_key *key);
36 sem->count = RWSEM_UNLOCKED_VALUE;
37 spin_lock_init(&sem->wait_lock);
38 INIT_LIST_HEAD(&sem->wait_list);
39 }
40 46
47 #define init_rwsem(sem) \
48 do { \
49 static struct lock_class_key __key; \
50 \
51 __init_rwsem((sem), #sem, &__key); \
52 } while (0)
53
41 extern void __down_read(struct rw_semaphore *sem); 54 extern void __down_read(struct rw_semaphore *sem);
42 extern int __down_read_trylock(struct rw_semaphore *sem); 55 extern int __down_read_trylock(struct rw_semaphore *sem);
43 extern void __down_write(struct rw_semaphore *sem); 56 extern void __down_write(struct rw_semaphore *sem);
44 extern int __down_write_trylock(struct rw_semaphore *sem); 57 extern int __down_write_trylock(struct rw_semaphore *sem);
45 extern void __up_read(struct rw_semaphore *sem); 58 extern void __up_read(struct rw_semaphore *sem);
46 extern void __up_write(struct rw_semaphore *sem); 59 extern void __up_write(struct rw_semaphore *sem);
47 extern void __downgrade_write(struct rw_semaphore *sem); 60 extern void __downgrade_write(struct rw_semaphore *sem);
61
62 static inline void __down_write_nested(struct rw_semaphore *sem, int subclass)
63 {
64 __down_write(sem);
65 }
48 66
49 static inline int rwsem_atomic_update(int delta, struct rw_semaphore *sem) 67 static inline int rwsem_atomic_update(int delta, struct rw_semaphore *sem)
50 { 68 {
51 return atomic_add_return(delta, (atomic_t *)(&sem->count)); 69 return atomic_add_return(delta, (atomic_t *)(&sem->count));
52 } 70 }
53 71
54 static inline void rwsem_atomic_add(int delta, struct rw_semaphore *sem) 72 static inline void rwsem_atomic_add(int delta, struct rw_semaphore *sem)
55 { 73 {
56 atomic_add(delta, (atomic_t *)(&sem->count)); 74 atomic_add(delta, (atomic_t *)(&sem->count));
57 } 75 }
58 76
59 static inline int rwsem_is_locked(struct rw_semaphore *sem) 77 static inline int rwsem_is_locked(struct rw_semaphore *sem)
60 { 78 {
61 return (sem->count != 0); 79 return (sem->count != 0);
62 } 80 }
63 81
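The rwsem change converts init_rwsem from an inline function into a macro so that each call site expands its own static struct lock_class_key, which is what lets lockdep assign a distinct lock class per declaration site. A minimal C sketch of that per-expansion-static pattern (the type and macro names are illustrative):

```c
struct model_key { int unused; };

/* Each textual expansion of this macro defines a distinct static
 * object, so two call sites yield two different key addresses --
 * the same trick the new init_rwsem uses for lockdep class keys. */
#define MODEL_INIT(out)                 \
    do {                                \
        static struct model_key __key;  \
        (out) = &__key;                 \
    } while (0)
```

Because the static lives inside the macro body, expanding MODEL_INIT at two different places produces two independent objects, while re-executing the same expansion reuses one.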
include/asm-sparc64/system.h
1 /* $Id: system.h,v 1.69 2002/02/09 19:49:31 davem Exp $ */ 1 /* $Id: system.h,v 1.69 2002/02/09 19:49:31 davem Exp $ */
2 #ifndef __SPARC64_SYSTEM_H 2 #ifndef __SPARC64_SYSTEM_H
3 #define __SPARC64_SYSTEM_H 3 #define __SPARC64_SYSTEM_H
4 4
5 #include <asm/ptrace.h> 5 #include <asm/ptrace.h>
6 #include <asm/processor.h> 6 #include <asm/processor.h>
7 #include <asm/visasm.h> 7 #include <asm/visasm.h>
8 8
9 #ifndef __ASSEMBLY__ 9 #ifndef __ASSEMBLY__
10
11 #include <linux/irqflags.h>
12
10 /* 13 /*
11 * Sparc (general) CPU types 14 * Sparc (general) CPU types
12 */ 15 */
13 enum sparc_cpu { 16 enum sparc_cpu {
14 sun4 = 0x00, 17 sun4 = 0x00,
15 sun4c = 0x01, 18 sun4c = 0x01,
16 sun4m = 0x02, 19 sun4m = 0x02,
17 sun4d = 0x03, 20 sun4d = 0x03,
18 sun4e = 0x04, 21 sun4e = 0x04,
19 sun4u = 0x05, /* V8 ploos ploos */ 22 sun4u = 0x05, /* V8 ploos ploos */
20 sun_unknown = 0x06, 23 sun_unknown = 0x06,
21 ap1000 = 0x07, /* almost a sun4m */ 24 ap1000 = 0x07, /* almost a sun4m */
22 }; 25 };
23 26
24 #define sparc_cpu_model sun4u 27 #define sparc_cpu_model sun4u
25 28
26 /* This cannot ever be a sun4c nor sun4 :) That's just history. */ 29 /* This cannot ever be a sun4c nor sun4 :) That's just history. */
27 #define ARCH_SUN4C_SUN4 0 30 #define ARCH_SUN4C_SUN4 0
28 #define ARCH_SUN4 0 31 #define ARCH_SUN4 0
29 32
30 /* These are here in an effort to more fully work around Spitfire Errata 33 /* These are here in an effort to more fully work around Spitfire Errata
31 * #51. Essentially, if a memory barrier occurs soon after a mispredicted 34 * #51. Essentially, if a memory barrier occurs soon after a mispredicted
32 * branch, the chip can stop executing instructions until a trap occurs. 35 * branch, the chip can stop executing instructions until a trap occurs.
33 * Therefore, if interrupts are disabled, the chip can hang forever. 36 * Therefore, if interrupts are disabled, the chip can hang forever.
34 * 37 *
35 * It used to be believed that the memory barrier had to be right in the 38 * It used to be believed that the memory barrier had to be right in the
36 * delay slot, but a case has been traced recently wherein the memory barrier 39 * delay slot, but a case has been traced recently wherein the memory barrier
37 * was one instruction after the branch delay slot and the chip still hung. 40 * was one instruction after the branch delay slot and the chip still hung.
38 * The offending sequence was the following in sym_wakeup_done() of the 41 * The offending sequence was the following in sym_wakeup_done() of the
39 * sym53c8xx_2 driver: 42 * sym53c8xx_2 driver:
40 * 43 *
41 * call sym_ccb_from_dsa, 0 44 * call sym_ccb_from_dsa, 0
42 * movge %icc, 0, %l0 45 * movge %icc, 0, %l0
43 * brz,pn %o0, .LL1303 46 * brz,pn %o0, .LL1303
44 * mov %o0, %l2 47 * mov %o0, %l2
45 * membar #LoadLoad 48 * membar #LoadLoad
46 * 49 *
47 * The branch has to be mispredicted for the bug to occur. Therefore, we put 50 * The branch has to be mispredicted for the bug to occur. Therefore, we put
48 * the memory barrier explicitly into a "branch always, predicted taken" 51 * the memory barrier explicitly into a "branch always, predicted taken"
49 * delay slot to avoid the problem case. 52 * delay slot to avoid the problem case.
50 */ 53 */
51 #define membar_safe(type) \ 54 #define membar_safe(type) \
52 do { __asm__ __volatile__("ba,pt %%xcc, 1f\n\t" \ 55 do { __asm__ __volatile__("ba,pt %%xcc, 1f\n\t" \
53 " membar " type "\n" \ 56 " membar " type "\n" \
54 "1:\n" \ 57 "1:\n" \
55 : : : "memory"); \ 58 : : : "memory"); \
56 } while (0) 59 } while (0)
57 60
58 #define mb() \ 61 #define mb() \
59 membar_safe("#LoadLoad | #LoadStore | #StoreStore | #StoreLoad") 62 membar_safe("#LoadLoad | #LoadStore | #StoreStore | #StoreLoad")
60 #define rmb() \ 63 #define rmb() \
61 membar_safe("#LoadLoad") 64 membar_safe("#LoadLoad")
62 #define wmb() \ 65 #define wmb() \
63 membar_safe("#StoreStore") 66 membar_safe("#StoreStore")
64 #define membar_storeload() \ 67 #define membar_storeload() \
65 membar_safe("#StoreLoad") 68 membar_safe("#StoreLoad")
66 #define membar_storeload_storestore() \ 69 #define membar_storeload_storestore() \
67 membar_safe("#StoreLoad | #StoreStore") 70 membar_safe("#StoreLoad | #StoreStore")
68 #define membar_storeload_loadload() \ 71 #define membar_storeload_loadload() \
69 membar_safe("#StoreLoad | #LoadLoad") 72 membar_safe("#StoreLoad | #LoadLoad")
70 #define membar_storestore_loadstore() \ 73 #define membar_storestore_loadstore() \
71 membar_safe("#StoreStore | #LoadStore") 74 membar_safe("#StoreStore | #LoadStore")
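The membar_safe() wrapper above tucks each `membar` into a "branch always, predicted taken" delay slot to dodge Spitfire Errata #51, and mb()/rmb()/wmb() just pick the membar flavor. As a loose, portable illustration of the publish/consume ordering that wmb()/rmb() provide, here is a sketch using `__sync_synchronize()` in place of the SPARC membar sequences; the `payload`/`ready` names are invented for illustration and are not kernel API:

```c
/* Illustrative publish/consume pattern.  __sync_synchronize() is a
 * full barrier standing in for the kernel's wmb()/rmb(); on sparc64
 * those expand to membar_safe("#StoreStore") / membar_safe("#LoadLoad").
 */
static int payload;
static volatile int ready;

static void publisher(void)
{
	payload = 42;
	__sync_synchronize();	/* like wmb(): payload store is ordered
				 * before the ready store */
	ready = 1;
}

static int consumer(void)
{
	while (!ready)
		;		/* spin until the flag is published */
	__sync_synchronize();	/* like rmb(): flag load is ordered
				 * before the payload load */
	return payload;
}
```

Single-threaded this is trivially ordered; the barriers matter when publisher and consumer run on different CPUs.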
72 75
73 #endif 76 #endif
74
75 #define setipl(__new_ipl) \
76 __asm__ __volatile__("wrpr %0, %%pil" : : "r" (__new_ipl) : "memory")
77
78 #define local_irq_disable() \
79 __asm__ __volatile__("wrpr 15, %%pil" : : : "memory")
80
81 #define local_irq_enable() \
82 __asm__ __volatile__("wrpr 0, %%pil" : : : "memory")
83
84 #define getipl() \
85 ({ unsigned long retval; __asm__ __volatile__("rdpr %%pil, %0" : "=r" (retval)); retval; })
86
87 #define swap_pil(__new_pil) \
88 ({ unsigned long retval; \
89 __asm__ __volatile__("rdpr %%pil, %0\n\t" \
90 "wrpr %1, %%pil" \
91 : "=&r" (retval) \
92 : "r" (__new_pil) \
93 : "memory"); \
94 retval; \
95 })
96
97 #define read_pil_and_cli() \
98 ({ unsigned long retval; \
99 __asm__ __volatile__("rdpr %%pil, %0\n\t" \
100 "wrpr 15, %%pil" \
101 : "=r" (retval) \
102 : : "memory"); \
103 retval; \
104 })
105
106 #define local_save_flags(flags) ((flags) = getipl())
107 #define local_irq_save(flags) ((flags) = read_pil_and_cli())
108 #define local_irq_restore(flags) setipl((flags))
109
110 /* On sparc64 IRQ flags are the PIL register. A value of zero
111 * means all interrupt levels are enabled, any other value means
112 * only IRQ levels greater than that value will be received.
113 * Consequently this means that the lowest IRQ level is one.
114 */
115 #define irqs_disabled() \
116 ({ unsigned long flags; \
117 local_save_flags(flags);\
118 (flags > 0); \
119 })
120 77
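The PIL-based flag operations removed in this hunk (presumably now supplied through the `<linux/irqflags.h>` include added above, so lockdep can wrap them) have simple semantics, spelled out in the removed comment: PIL 0 means all interrupt levels enabled, 15 masks everything, and irqs_disabled() is just `flags > 0`. A toy C model of that behavior, with a plain variable standing in for the %pil register; all `model_` names are hypothetical, not kernel API:

```c
/* Models the sparc64 %pil register: 0 = all interrupt levels enabled,
 * any non-zero value masks levels at or below it, 15 = all masked. */
static unsigned long pil;

static unsigned long model_getipl(void)		{ return pil; }
static void model_setipl(unsigned long new_ipl)	{ pil = new_ipl; }

static unsigned long model_read_pil_and_cli(void)
{
	unsigned long old = pil;	/* rdpr %pil, %0 */
	pil = 15;			/* wrpr 15, %pil */
	return old;
}

#define model_local_irq_save(flags)	((flags) = model_read_pil_and_cli())
#define model_local_irq_restore(flags)	model_setipl(flags)
#define model_irqs_disabled()		(model_getipl() > 0)
```

A save/restore pair round-trips the old PIL value, which is exactly why local_irq_save()/local_irq_restore() nest correctly.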
121 #define nop() __asm__ __volatile__ ("nop") 78 #define nop() __asm__ __volatile__ ("nop")
122 79
123 #define read_barrier_depends() do { } while(0) 80 #define read_barrier_depends() do { } while(0)
124 #define set_mb(__var, __value) \ 81 #define set_mb(__var, __value) \
125 do { __var = __value; membar_storeload_storestore(); } while(0) 82 do { __var = __value; membar_storeload_storestore(); } while(0)
126 83
127 #ifdef CONFIG_SMP 84 #ifdef CONFIG_SMP
128 #define smp_mb() mb() 85 #define smp_mb() mb()
129 #define smp_rmb() rmb() 86 #define smp_rmb() rmb()
130 #define smp_wmb() wmb() 87 #define smp_wmb() wmb()
131 #define smp_read_barrier_depends() read_barrier_depends() 88 #define smp_read_barrier_depends() read_barrier_depends()
132 #else 89 #else
133 #define smp_mb() __asm__ __volatile__("":::"memory") 90 #define smp_mb() __asm__ __volatile__("":::"memory")
134 #define smp_rmb() __asm__ __volatile__("":::"memory") 91 #define smp_rmb() __asm__ __volatile__("":::"memory")
135 #define smp_wmb() __asm__ __volatile__("":::"memory") 92 #define smp_wmb() __asm__ __volatile__("":::"memory")
136 #define smp_read_barrier_depends() do { } while(0) 93 #define smp_read_barrier_depends() do { } while(0)
137 #endif 94 #endif
138 95
139 #define flushi(addr) __asm__ __volatile__ ("flush %0" : : "r" (addr) : "memory") 96 #define flushi(addr) __asm__ __volatile__ ("flush %0" : : "r" (addr) : "memory")
140 97
141 #define flushw_all() __asm__ __volatile__("flushw") 98 #define flushw_all() __asm__ __volatile__("flushw")
142 99
143 /* Performance counter register access. */ 100 /* Performance counter register access. */
144 #define read_pcr(__p) __asm__ __volatile__("rd %%pcr, %0" : "=r" (__p)) 101 #define read_pcr(__p) __asm__ __volatile__("rd %%pcr, %0" : "=r" (__p))
145 #define write_pcr(__p) __asm__ __volatile__("wr %0, 0x0, %%pcr" : : "r" (__p)) 102 #define write_pcr(__p) __asm__ __volatile__("wr %0, 0x0, %%pcr" : : "r" (__p))
146 #define read_pic(__p) __asm__ __volatile__("rd %%pic, %0" : "=r" (__p)) 103 #define read_pic(__p) __asm__ __volatile__("rd %%pic, %0" : "=r" (__p))
147 104
148 /* Blackbird errata workaround. See commentary in 105 /* Blackbird errata workaround. See commentary in
149 * arch/sparc64/kernel/smp.c:smp_percpu_timer_interrupt() 106 * arch/sparc64/kernel/smp.c:smp_percpu_timer_interrupt()
150 * for more information. 107 * for more information.
151 */ 108 */
152 #define reset_pic() \ 109 #define reset_pic() \
153 __asm__ __volatile__("ba,pt %xcc, 99f\n\t" \ 110 __asm__ __volatile__("ba,pt %xcc, 99f\n\t" \
154 ".align 64\n" \ 111 ".align 64\n" \
155 "99:wr %g0, 0x0, %pic\n\t" \ 112 "99:wr %g0, 0x0, %pic\n\t" \
156 "rd %pic, %g0") 113 "rd %pic, %g0")
157 114
158 #ifndef __ASSEMBLY__ 115 #ifndef __ASSEMBLY__
159 116
160 extern void sun_do_break(void); 117 extern void sun_do_break(void);
161 extern int serial_console; 118 extern int serial_console;
162 extern int stop_a_enabled; 119 extern int stop_a_enabled;
163 120
164 static __inline__ int con_is_present(void) 121 static __inline__ int con_is_present(void)
165 { 122 {
166 return serial_console ? 0 : 1; 123 return serial_console ? 0 : 1;
167 } 124 }
168 125
169 extern void synchronize_user_stack(void); 126 extern void synchronize_user_stack(void);
170 127
171 extern void __flushw_user(void); 128 extern void __flushw_user(void);
172 #define flushw_user() __flushw_user() 129 #define flushw_user() __flushw_user()
173 130
174 #define flush_user_windows flushw_user 131 #define flush_user_windows flushw_user
175 #define flush_register_windows flushw_all 132 #define flush_register_windows flushw_all
176 133
177 /* Don't hold the runqueue lock over context switch */ 134 /* Don't hold the runqueue lock over context switch */
178 #define __ARCH_WANT_UNLOCKED_CTXSW 135 #define __ARCH_WANT_UNLOCKED_CTXSW
179 #define prepare_arch_switch(next) \ 136 #define prepare_arch_switch(next) \
180 do { \ 137 do { \
181 flushw_all(); \ 138 flushw_all(); \
182 } while (0) 139 } while (0)
183 140
184 /* See what happens when you design the chip correctly? 141 /* See what happens when you design the chip correctly?
185 * 142 *
186 * We tell gcc we clobber all non-fixed-usage registers except 143 * We tell gcc we clobber all non-fixed-usage registers except
187 * for l0/l1. It will use one for 'next' and the other to hold 144 * for l0/l1. It will use one for 'next' and the other to hold
188 * the output value of 'last'. 'next' is not referenced again 145 * the output value of 'last'. 'next' is not referenced again
189 * past the invocation of switch_to in the scheduler, so we need 146 * past the invocation of switch_to in the scheduler, so we need
 190 * not preserve its value. Hairy, but it lets us remove 2 loads 147 * not preserve its value. Hairy, but it lets us remove 2 loads
191 * and 2 stores in this critical code path. -DaveM 148 * and 2 stores in this critical code path. -DaveM
192 */ 149 */
193 #define EXTRA_CLOBBER ,"%l1" 150 #define EXTRA_CLOBBER ,"%l1"
194 #define switch_to(prev, next, last) \ 151 #define switch_to(prev, next, last) \
195 do { if (test_thread_flag(TIF_PERFCTR)) { \ 152 do { if (test_thread_flag(TIF_PERFCTR)) { \
196 unsigned long __tmp; \ 153 unsigned long __tmp; \
197 read_pcr(__tmp); \ 154 read_pcr(__tmp); \
198 current_thread_info()->pcr_reg = __tmp; \ 155 current_thread_info()->pcr_reg = __tmp; \
199 read_pic(__tmp); \ 156 read_pic(__tmp); \
200 current_thread_info()->kernel_cntd0 += (unsigned int)(__tmp);\ 157 current_thread_info()->kernel_cntd0 += (unsigned int)(__tmp);\
201 current_thread_info()->kernel_cntd1 += ((__tmp) >> 32); \ 158 current_thread_info()->kernel_cntd1 += ((__tmp) >> 32); \
202 } \ 159 } \
203 flush_tlb_pending(); \ 160 flush_tlb_pending(); \
204 save_and_clear_fpu(); \ 161 save_and_clear_fpu(); \
205 /* If you are tempted to conditionalize the following */ \ 162 /* If you are tempted to conditionalize the following */ \
206 /* so that ASI is only written if it changes, think again. */ \ 163 /* so that ASI is only written if it changes, think again. */ \
207 __asm__ __volatile__("wr %%g0, %0, %%asi" \ 164 __asm__ __volatile__("wr %%g0, %0, %%asi" \
208 : : "r" (__thread_flag_byte_ptr(task_thread_info(next))[TI_FLAG_BYTE_CURRENT_DS]));\ 165 : : "r" (__thread_flag_byte_ptr(task_thread_info(next))[TI_FLAG_BYTE_CURRENT_DS]));\
209 trap_block[current_thread_info()->cpu].thread = \ 166 trap_block[current_thread_info()->cpu].thread = \
210 task_thread_info(next); \ 167 task_thread_info(next); \
211 __asm__ __volatile__( \ 168 __asm__ __volatile__( \
212 "mov %%g4, %%g7\n\t" \ 169 "mov %%g4, %%g7\n\t" \
213 "stx %%i6, [%%sp + 2047 + 0x70]\n\t" \ 170 "stx %%i6, [%%sp + 2047 + 0x70]\n\t" \
214 "stx %%i7, [%%sp + 2047 + 0x78]\n\t" \ 171 "stx %%i7, [%%sp + 2047 + 0x78]\n\t" \
215 "rdpr %%wstate, %%o5\n\t" \ 172 "rdpr %%wstate, %%o5\n\t" \
216 "stx %%o6, [%%g6 + %3]\n\t" \ 173 "stx %%o6, [%%g6 + %3]\n\t" \
217 "stb %%o5, [%%g6 + %2]\n\t" \ 174 "stb %%o5, [%%g6 + %2]\n\t" \
218 "rdpr %%cwp, %%o5\n\t" \ 175 "rdpr %%cwp, %%o5\n\t" \
219 "stb %%o5, [%%g6 + %5]\n\t" \ 176 "stb %%o5, [%%g6 + %5]\n\t" \
220 "mov %1, %%g6\n\t" \ 177 "mov %1, %%g6\n\t" \
221 "ldub [%1 + %5], %%g1\n\t" \ 178 "ldub [%1 + %5], %%g1\n\t" \
222 "wrpr %%g1, %%cwp\n\t" \ 179 "wrpr %%g1, %%cwp\n\t" \
223 "ldx [%%g6 + %3], %%o6\n\t" \ 180 "ldx [%%g6 + %3], %%o6\n\t" \
224 "ldub [%%g6 + %2], %%o5\n\t" \ 181 "ldub [%%g6 + %2], %%o5\n\t" \
225 "ldub [%%g6 + %4], %%o7\n\t" \ 182 "ldub [%%g6 + %4], %%o7\n\t" \
226 "wrpr %%o5, 0x0, %%wstate\n\t" \ 183 "wrpr %%o5, 0x0, %%wstate\n\t" \
227 "ldx [%%sp + 2047 + 0x70], %%i6\n\t" \ 184 "ldx [%%sp + 2047 + 0x70], %%i6\n\t" \
228 "ldx [%%sp + 2047 + 0x78], %%i7\n\t" \ 185 "ldx [%%sp + 2047 + 0x78], %%i7\n\t" \
229 "ldx [%%g6 + %6], %%g4\n\t" \ 186 "ldx [%%g6 + %6], %%g4\n\t" \
230 "brz,pt %%o7, 1f\n\t" \ 187 "brz,pt %%o7, 1f\n\t" \
231 " mov %%g7, %0\n\t" \ 188 " mov %%g7, %0\n\t" \
232 "b,a ret_from_syscall\n\t" \ 189 "b,a ret_from_syscall\n\t" \
233 "1:\n\t" \ 190 "1:\n\t" \
234 : "=&r" (last) \ 191 : "=&r" (last) \
235 : "0" (task_thread_info(next)), \ 192 : "0" (task_thread_info(next)), \
236 "i" (TI_WSTATE), "i" (TI_KSP), "i" (TI_NEW_CHILD), \ 193 "i" (TI_WSTATE), "i" (TI_KSP), "i" (TI_NEW_CHILD), \
237 "i" (TI_CWP), "i" (TI_TASK) \ 194 "i" (TI_CWP), "i" (TI_TASK) \
238 : "cc", \ 195 : "cc", \
239 "g1", "g2", "g3", "g7", \ 196 "g1", "g2", "g3", "g7", \
240 "l2", "l3", "l4", "l5", "l6", "l7", \ 197 "l2", "l3", "l4", "l5", "l6", "l7", \
241 "i0", "i1", "i2", "i3", "i4", "i5", \ 198 "i0", "i1", "i2", "i3", "i4", "i5", \
242 "o0", "o1", "o2", "o3", "o4", "o5", "o7" EXTRA_CLOBBER);\ 199 "o0", "o1", "o2", "o3", "o4", "o5", "o7" EXTRA_CLOBBER);\
243 /* If you fuck with this, update ret_from_syscall code too. */ \ 200 /* If you fuck with this, update ret_from_syscall code too. */ \
244 if (test_thread_flag(TIF_PERFCTR)) { \ 201 if (test_thread_flag(TIF_PERFCTR)) { \
245 write_pcr(current_thread_info()->pcr_reg); \ 202 write_pcr(current_thread_info()->pcr_reg); \
246 reset_pic(); \ 203 reset_pic(); \
247 } \ 204 } \
248 } while(0) 205 } while(0)
249 206
250 /* 207 /*
251 * On SMP systems, when the scheduler does migration-cost autodetection, 208 * On SMP systems, when the scheduler does migration-cost autodetection,
252 * it needs a way to flush as much of the CPU's caches as possible. 209 * it needs a way to flush as much of the CPU's caches as possible.
253 * 210 *
254 * TODO: fill this in! 211 * TODO: fill this in!
255 */ 212 */
256 static inline void sched_cacheflush(void) 213 static inline void sched_cacheflush(void)
257 { 214 {
258 } 215 }
259 216
260 static inline unsigned long xchg32(__volatile__ unsigned int *m, unsigned int val) 217 static inline unsigned long xchg32(__volatile__ unsigned int *m, unsigned int val)
261 { 218 {
262 unsigned long tmp1, tmp2; 219 unsigned long tmp1, tmp2;
263 220
264 __asm__ __volatile__( 221 __asm__ __volatile__(
265 " membar #StoreLoad | #LoadLoad\n" 222 " membar #StoreLoad | #LoadLoad\n"
266 " mov %0, %1\n" 223 " mov %0, %1\n"
267 "1: lduw [%4], %2\n" 224 "1: lduw [%4], %2\n"
268 " cas [%4], %2, %0\n" 225 " cas [%4], %2, %0\n"
269 " cmp %2, %0\n" 226 " cmp %2, %0\n"
270 " bne,a,pn %%icc, 1b\n" 227 " bne,a,pn %%icc, 1b\n"
271 " mov %1, %0\n" 228 " mov %1, %0\n"
272 " membar #StoreLoad | #StoreStore\n" 229 " membar #StoreLoad | #StoreStore\n"
273 : "=&r" (val), "=&r" (tmp1), "=&r" (tmp2) 230 : "=&r" (val), "=&r" (tmp1), "=&r" (tmp2)
274 : "0" (val), "r" (m) 231 : "0" (val), "r" (m)
275 : "cc", "memory"); 232 : "cc", "memory");
276 return val; 233 return val;
277 } 234 }
278 235
279 static inline unsigned long xchg64(__volatile__ unsigned long *m, unsigned long val) 236 static inline unsigned long xchg64(__volatile__ unsigned long *m, unsigned long val)
280 { 237 {
281 unsigned long tmp1, tmp2; 238 unsigned long tmp1, tmp2;
282 239
283 __asm__ __volatile__( 240 __asm__ __volatile__(
284 " membar #StoreLoad | #LoadLoad\n" 241 " membar #StoreLoad | #LoadLoad\n"
285 " mov %0, %1\n" 242 " mov %0, %1\n"
286 "1: ldx [%4], %2\n" 243 "1: ldx [%4], %2\n"
287 " casx [%4], %2, %0\n" 244 " casx [%4], %2, %0\n"
288 " cmp %2, %0\n" 245 " cmp %2, %0\n"
289 " bne,a,pn %%xcc, 1b\n" 246 " bne,a,pn %%xcc, 1b\n"
290 " mov %1, %0\n" 247 " mov %1, %0\n"
291 " membar #StoreLoad | #StoreStore\n" 248 " membar #StoreLoad | #StoreStore\n"
292 : "=&r" (val), "=&r" (tmp1), "=&r" (tmp2) 249 : "=&r" (val), "=&r" (tmp1), "=&r" (tmp2)
293 : "0" (val), "r" (m) 250 : "0" (val), "r" (m)
294 : "cc", "memory"); 251 : "cc", "memory");
295 return val; 252 return val;
296 } 253 }
297 254
298 #define xchg(ptr,x) ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr)))) 255 #define xchg(ptr,x) ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
299 #define tas(ptr) (xchg((ptr),1)) 256 #define tas(ptr) (xchg((ptr),1))
300 257
301 extern void __xchg_called_with_bad_pointer(void); 258 extern void __xchg_called_with_bad_pointer(void);
302 259
303 static __inline__ unsigned long __xchg(unsigned long x, __volatile__ void * ptr, 260 static __inline__ unsigned long __xchg(unsigned long x, __volatile__ void * ptr,
304 int size) 261 int size)
305 { 262 {
306 switch (size) { 263 switch (size) {
307 case 4: 264 case 4:
308 return xchg32(ptr, x); 265 return xchg32(ptr, x);
309 case 8: 266 case 8:
310 return xchg64(ptr, x); 267 return xchg64(ptr, x);
311 }; 268 };
312 __xchg_called_with_bad_pointer(); 269 __xchg_called_with_bad_pointer();
313 return x; 270 return x;
314 } 271 }
315 272
316 extern void die_if_kernel(char *str, struct pt_regs *regs) __attribute__ ((noreturn)); 273 extern void die_if_kernel(char *str, struct pt_regs *regs) __attribute__ ((noreturn));
317 274
318 /* 275 /*
319 * Atomic compare and exchange. Compare OLD with MEM, if identical, 276 * Atomic compare and exchange. Compare OLD with MEM, if identical,
320 * store NEW in MEM. Return the initial value in MEM. Success is 277 * store NEW in MEM. Return the initial value in MEM. Success is
321 * indicated by comparing RETURN with OLD. 278 * indicated by comparing RETURN with OLD.
322 */ 279 */
323 280
324 #define __HAVE_ARCH_CMPXCHG 1 281 #define __HAVE_ARCH_CMPXCHG 1
325 282
326 static __inline__ unsigned long 283 static __inline__ unsigned long
327 __cmpxchg_u32(volatile int *m, int old, int new) 284 __cmpxchg_u32(volatile int *m, int old, int new)
328 { 285 {
329 __asm__ __volatile__("membar #StoreLoad | #LoadLoad\n" 286 __asm__ __volatile__("membar #StoreLoad | #LoadLoad\n"
330 "cas [%2], %3, %0\n\t" 287 "cas [%2], %3, %0\n\t"
331 "membar #StoreLoad | #StoreStore" 288 "membar #StoreLoad | #StoreStore"
332 : "=&r" (new) 289 : "=&r" (new)
333 : "0" (new), "r" (m), "r" (old) 290 : "0" (new), "r" (m), "r" (old)
334 : "memory"); 291 : "memory");
335 292
336 return new; 293 return new;
337 } 294 }
338 295
339 static __inline__ unsigned long 296 static __inline__ unsigned long
340 __cmpxchg_u64(volatile long *m, unsigned long old, unsigned long new) 297 __cmpxchg_u64(volatile long *m, unsigned long old, unsigned long new)
341 { 298 {
342 __asm__ __volatile__("membar #StoreLoad | #LoadLoad\n" 299 __asm__ __volatile__("membar #StoreLoad | #LoadLoad\n"
343 "casx [%2], %3, %0\n\t" 300 "casx [%2], %3, %0\n\t"
344 "membar #StoreLoad | #StoreStore" 301 "membar #StoreLoad | #StoreStore"
345 : "=&r" (new) 302 : "=&r" (new)
346 : "0" (new), "r" (m), "r" (old) 303 : "0" (new), "r" (m), "r" (old)
347 : "memory"); 304 : "memory");
348 305
349 return new; 306 return new;
350 } 307 }
351 308
352 /* This function doesn't exist, so you'll get a linker error 309 /* This function doesn't exist, so you'll get a linker error
353 if something tries to do an invalid cmpxchg(). */ 310 if something tries to do an invalid cmpxchg(). */
354 extern void __cmpxchg_called_with_bad_pointer(void); 311 extern void __cmpxchg_called_with_bad_pointer(void);
355 312
356 static __inline__ unsigned long 313 static __inline__ unsigned long
357 __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new, int size) 314 __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new, int size)
358 { 315 {
359 switch (size) { 316 switch (size) {
360 case 4: 317 case 4:
361 return __cmpxchg_u32(ptr, old, new); 318 return __cmpxchg_u32(ptr, old, new);
362 case 8: 319 case 8:
363 return __cmpxchg_u64(ptr, old, new); 320 return __cmpxchg_u64(ptr, old, new);
364 } 321 }
365 __cmpxchg_called_with_bad_pointer(); 322 __cmpxchg_called_with_bad_pointer();
366 return old; 323 return old;
367 } 324 }
368 325
369 #define cmpxchg(ptr,o,n) \ 326 #define cmpxchg(ptr,o,n) \
370 ({ \ 327 ({ \
371 __typeof__(*(ptr)) _o_ = (o); \ 328 __typeof__(*(ptr)) _o_ = (o); \
372 __typeof__(*(ptr)) _n_ = (n); \ 329 __typeof__(*(ptr)) _n_ = (n); \
373 (__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long)_o_, \ 330 (__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long)_o_, \
374 (unsigned long)_n_, sizeof(*(ptr))); \ 331 (unsigned long)_n_, sizeof(*(ptr))); \
375 }) 332 })
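As the comment above says, cmpxchg() success is indicated by comparing RETURN with OLD, so callers typically wrap it in an optimistic retry loop. A hedged, portable sketch of that usage pattern, with a GCC `__sync` builtin standing in for `casx` and invented `model_` names:

```c
/* Stand-in for __cmpxchg_u64(): returns the value that was in *m;
 * the store happened iff that equals the expected old value. */
static unsigned long model_cmpxchg_ulong(volatile unsigned long *m,
					 unsigned long old_val,
					 unsigned long new_val)
{
	return __sync_val_compare_and_swap(m, old_val, new_val);
}

/* Classic optimistic update: re-read and retry until no one raced us. */
static void model_atomic_add(volatile unsigned long *ctr, unsigned long delta)
{
	unsigned long old, seen;

	do {
		old  = *ctr;
		seen = model_cmpxchg_ulong(ctr, old, old + delta);
	} while (seen != old);		/* success iff RETURN == OLD */
}
```

This is the shape most lock-free counters and list heads built on cmpxchg() take.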
376 333
377 #endif /* !(__ASSEMBLY__) */ 334 #endif /* !(__ASSEMBLY__) */
378 335
379 #define arch_align_stack(x) (x) 336 #define arch_align_stack(x) (x)
include/asm-sparc64/ttable.h
1 /* $Id: ttable.h,v 1.18 2002/02/09 19:49:32 davem Exp $ */ 1 /* $Id: ttable.h,v 1.18 2002/02/09 19:49:32 davem Exp $ */
2 #ifndef _SPARC64_TTABLE_H 2 #ifndef _SPARC64_TTABLE_H
3 #define _SPARC64_TTABLE_H 3 #define _SPARC64_TTABLE_H
4 4
5 #include <asm/utrap.h> 5 #include <asm/utrap.h>
6 6
7 #ifdef __ASSEMBLY__ 7 #ifdef __ASSEMBLY__
8 #include <asm/thread_info.h> 8 #include <asm/thread_info.h>
9 #endif 9 #endif
10 10
11 #define BOOT_KERNEL b sparc64_boot; nop; nop; nop; nop; nop; nop; nop; 11 #define BOOT_KERNEL b sparc64_boot; nop; nop; nop; nop; nop; nop; nop;
12 12
13 /* We need a "cleaned" instruction... */ 13 /* We need a "cleaned" instruction... */
14 #define CLEAN_WINDOW \ 14 #define CLEAN_WINDOW \
15 rdpr %cleanwin, %l0; add %l0, 1, %l0; \ 15 rdpr %cleanwin, %l0; add %l0, 1, %l0; \
16 wrpr %l0, 0x0, %cleanwin; \ 16 wrpr %l0, 0x0, %cleanwin; \
17 clr %o0; clr %o1; clr %o2; clr %o3; \ 17 clr %o0; clr %o1; clr %o2; clr %o3; \
18 clr %o4; clr %o5; clr %o6; clr %o7; \ 18 clr %o4; clr %o5; clr %o6; clr %o7; \
19 clr %l0; clr %l1; clr %l2; clr %l3; \ 19 clr %l0; clr %l1; clr %l2; clr %l3; \
20 clr %l4; clr %l5; clr %l6; clr %l7; \ 20 clr %l4; clr %l5; clr %l6; clr %l7; \
21 retry; \ 21 retry; \
22 nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop; 22 nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;nop;
23 23
24 #define TRAP(routine) \ 24 #define TRAP(routine) \
25 sethi %hi(109f), %g7; \ 25 sethi %hi(109f), %g7; \
26 ba,pt %xcc, etrap; \ 26 ba,pt %xcc, etrap; \
27 109: or %g7, %lo(109b), %g7; \ 27 109: or %g7, %lo(109b), %g7; \
28 call routine; \ 28 call routine; \
29 add %sp, PTREGS_OFF, %o0; \ 29 add %sp, PTREGS_OFF, %o0; \
30 ba,pt %xcc, rtrap; \ 30 ba,pt %xcc, rtrap; \
31 clr %l6; \ 31 clr %l6; \
32 nop; 32 nop;
33 33
34 #define TRAP_7INSNS(routine) \ 34 #define TRAP_7INSNS(routine) \
35 sethi %hi(109f), %g7; \ 35 sethi %hi(109f), %g7; \
36 ba,pt %xcc, etrap; \ 36 ba,pt %xcc, etrap; \
37 109: or %g7, %lo(109b), %g7; \ 37 109: or %g7, %lo(109b), %g7; \
38 call routine; \ 38 call routine; \
39 add %sp, PTREGS_OFF, %o0; \ 39 add %sp, PTREGS_OFF, %o0; \
40 ba,pt %xcc, rtrap; \ 40 ba,pt %xcc, rtrap; \
41 clr %l6; 41 clr %l6;
42 42
43 #define TRAP_SAVEFPU(routine) \ 43 #define TRAP_SAVEFPU(routine) \
44 sethi %hi(109f), %g7; \ 44 sethi %hi(109f), %g7; \
45 ba,pt %xcc, do_fptrap; \ 45 ba,pt %xcc, do_fptrap; \
46 109: or %g7, %lo(109b), %g7; \ 46 109: or %g7, %lo(109b), %g7; \
47 call routine; \ 47 call routine; \
48 add %sp, PTREGS_OFF, %o0; \ 48 add %sp, PTREGS_OFF, %o0; \
49 ba,pt %xcc, rtrap; \ 49 ba,pt %xcc, rtrap; \
50 clr %l6; \ 50 clr %l6; \
51 nop; 51 nop;
52 52
53 #define TRAP_NOSAVE(routine) \ 53 #define TRAP_NOSAVE(routine) \
54 ba,pt %xcc, routine; \ 54 ba,pt %xcc, routine; \
55 nop; \ 55 nop; \
56 nop; nop; nop; nop; nop; nop; 56 nop; nop; nop; nop; nop; nop;
57 57
58 #define TRAP_NOSAVE_7INSNS(routine) \ 58 #define TRAP_NOSAVE_7INSNS(routine) \
59 ba,pt %xcc, routine; \ 59 ba,pt %xcc, routine; \
60 nop; \ 60 nop; \
61 nop; nop; nop; nop; nop; 61 nop; nop; nop; nop; nop;
62 62
63 #define TRAPTL1(routine) \ 63 #define TRAPTL1(routine) \
64 sethi %hi(109f), %g7; \ 64 sethi %hi(109f), %g7; \
65 ba,pt %xcc, etraptl1; \ 65 ba,pt %xcc, etraptl1; \
66 109: or %g7, %lo(109b), %g7; \ 66 109: or %g7, %lo(109b), %g7; \
67 call routine; \ 67 call routine; \
68 add %sp, PTREGS_OFF, %o0; \ 68 add %sp, PTREGS_OFF, %o0; \
69 ba,pt %xcc, rtrap; \ 69 ba,pt %xcc, rtrap; \
70 clr %l6; \ 70 clr %l6; \
71 nop; 71 nop;
72 72
73 #define TRAP_ARG(routine, arg) \ 73 #define TRAP_ARG(routine, arg) \
74 sethi %hi(109f), %g7; \ 74 sethi %hi(109f), %g7; \
75 ba,pt %xcc, etrap; \ 75 ba,pt %xcc, etrap; \
76 109: or %g7, %lo(109b), %g7; \ 76 109: or %g7, %lo(109b), %g7; \
77 add %sp, PTREGS_OFF, %o0; \ 77 add %sp, PTREGS_OFF, %o0; \
78 call routine; \ 78 call routine; \
79 mov arg, %o1; \ 79 mov arg, %o1; \
80 ba,pt %xcc, rtrap; \ 80 ba,pt %xcc, rtrap; \
81 clr %l6; 81 clr %l6;
82 82
83 #define TRAPTL1_ARG(routine, arg) \ 83 #define TRAPTL1_ARG(routine, arg) \
84 sethi %hi(109f), %g7; \ 84 sethi %hi(109f), %g7; \
85 ba,pt %xcc, etraptl1; \ 85 ba,pt %xcc, etraptl1; \
86 109: or %g7, %lo(109b), %g7; \ 86 109: or %g7, %lo(109b), %g7; \
87 add %sp, PTREGS_OFF, %o0; \ 87 add %sp, PTREGS_OFF, %o0; \
88 call routine; \ 88 call routine; \
89 mov arg, %o1; \ 89 mov arg, %o1; \
90 ba,pt %xcc, rtrap; \ 90 ba,pt %xcc, rtrap; \
91 clr %l6; 91 clr %l6;
92 92
93 #define SYSCALL_TRAP(routine, systbl) \ 93 #define SYSCALL_TRAP(routine, systbl) \
94 sethi %hi(109f), %g7; \ 94 sethi %hi(109f), %g7; \
95 ba,pt %xcc, etrap; \ 95 ba,pt %xcc, etrap; \
96 109: or %g7, %lo(109b), %g7; \ 96 109: or %g7, %lo(109b), %g7; \
97 sethi %hi(systbl), %l7; \ 97 sethi %hi(systbl), %l7; \
98 ba,pt %xcc, routine; \ 98 ba,pt %xcc, routine; \
99 or %l7, %lo(systbl), %l7; \ 99 or %l7, %lo(systbl), %l7; \
100 nop; nop; 100 nop; nop;
101 101
102 #define INDIRECT_SOLARIS_SYSCALL(num) \ 102 #define INDIRECT_SOLARIS_SYSCALL(num) \
103 sethi %hi(109f), %g7; \ 103 sethi %hi(109f), %g7; \
104 ba,pt %xcc, etrap; \ 104 ba,pt %xcc, etrap; \
105 109: or %g7, %lo(109b), %g7; \ 105 109: or %g7, %lo(109b), %g7; \
106 ba,pt %xcc, tl0_solaris + 0xc; \ 106 ba,pt %xcc, tl0_solaris + 0xc; \
107 mov num, %g1; \ 107 mov num, %g1; \
108 nop;nop;nop; 108 nop;nop;nop;
109 109
110 #define TRAP_UTRAP(handler,lvl) \ 110 #define TRAP_UTRAP(handler,lvl) \
111 mov handler, %g3; \ 111 mov handler, %g3; \
112 ba,pt %xcc, utrap_trap; \ 112 ba,pt %xcc, utrap_trap; \
113 mov lvl, %g4; \ 113 mov lvl, %g4; \
114 nop; \ 114 nop; \
115 nop; \ 115 nop; \
116 nop; \ 116 nop; \
117 nop; \ 117 nop; \
118 nop; 118 nop;
119 119
120 #ifdef CONFIG_SUNOS_EMUL 120 #ifdef CONFIG_SUNOS_EMUL
121 #define SUNOS_SYSCALL_TRAP SYSCALL_TRAP(linux_sparc_syscall32, sunos_sys_table) 121 #define SUNOS_SYSCALL_TRAP SYSCALL_TRAP(linux_sparc_syscall32, sunos_sys_table)
122 #else 122 #else
123 #define SUNOS_SYSCALL_TRAP TRAP(sunos_syscall) 123 #define SUNOS_SYSCALL_TRAP TRAP(sunos_syscall)
124 #endif 124 #endif
125 #ifdef CONFIG_COMPAT 125 #ifdef CONFIG_COMPAT
126 #define LINUX_32BIT_SYSCALL_TRAP SYSCALL_TRAP(linux_sparc_syscall32, sys_call_table32) 126 #define LINUX_32BIT_SYSCALL_TRAP SYSCALL_TRAP(linux_sparc_syscall32, sys_call_table32)
127 #else 127 #else
128 #define LINUX_32BIT_SYSCALL_TRAP BTRAP(0x110) 128 #define LINUX_32BIT_SYSCALL_TRAP BTRAP(0x110)
129 #endif 129 #endif
130 #define LINUX_64BIT_SYSCALL_TRAP SYSCALL_TRAP(linux_sparc_syscall, sys_call_table64) 130 #define LINUX_64BIT_SYSCALL_TRAP SYSCALL_TRAP(linux_sparc_syscall, sys_call_table64)
131 #define GETCC_TRAP TRAP(getcc) 131 #define GETCC_TRAP TRAP(getcc)
132 #define SETCC_TRAP TRAP(setcc) 132 #define SETCC_TRAP TRAP(setcc)
133 #ifdef CONFIG_SOLARIS_EMUL 133 #ifdef CONFIG_SOLARIS_EMUL
134 #define SOLARIS_SYSCALL_TRAP TRAP(solaris_sparc_syscall) 134 #define SOLARIS_SYSCALL_TRAP TRAP(solaris_sparc_syscall)
135 #else 135 #else
136 #define SOLARIS_SYSCALL_TRAP TRAP(solaris_syscall) 136 #define SOLARIS_SYSCALL_TRAP TRAP(solaris_syscall)
137 #endif 137 #endif
138 #define BREAKPOINT_TRAP TRAP(breakpoint_trap) 138 #define BREAKPOINT_TRAP TRAP(breakpoint_trap)
139 139
140 #ifdef CONFIG_TRACE_IRQFLAGS
141
140 #define TRAP_IRQ(routine, level) \ 142 #define TRAP_IRQ(routine, level) \
141 rdpr %pil, %g2; \ 143 rdpr %pil, %g2; \
142 wrpr %g0, 15, %pil; \ 144 wrpr %g0, 15, %pil; \
143 b,pt %xcc, etrap_irq; \ 145 sethi %hi(1f-4), %g7; \
146 ba,pt %xcc, etrap_irq; \
147 or %g7, %lo(1f-4), %g7; \
148 nop; \
149 nop; \
150 nop; \
151 .subsection 2; \
152 1: call trace_hardirqs_off; \
153 nop; \
154 mov level, %o0; \
155 call routine; \
156 add %sp, PTREGS_OFF, %o1; \
157 ba,a,pt %xcc, rtrap_irq; \
158 .previous;
159
160 #define TICK_SMP_IRQ \
161 rdpr %pil, %g2; \
162 wrpr %g0, 15, %pil; \
163 sethi %hi(1f-4), %g7; \
164 ba,pt %xcc, etrap_irq; \
165 or %g7, %lo(1f-4), %g7; \
166 nop; \
167 nop; \
168 nop; \
169 .subsection 2; \
170 1: call trace_hardirqs_off; \
171 nop; \
172 call smp_percpu_timer_interrupt; \
173 add %sp, PTREGS_OFF, %o0; \
174 ba,a,pt %xcc, rtrap_irq; \
175 .previous;
176
177 #else
178
179 #define TRAP_IRQ(routine, level) \
180 rdpr %pil, %g2; \
181 wrpr %g0, 15, %pil; \
182 ba,pt %xcc, etrap_irq; \
144 rd %pc, %g7; \ 183 rd %pc, %g7; \
145 mov level, %o0; \ 184 mov level, %o0; \
146 call routine; \ 185 call routine; \
147 add %sp, PTREGS_OFF, %o1; \ 186 add %sp, PTREGS_OFF, %o1; \
148 ba,a,pt %xcc, rtrap_irq; 187 ba,a,pt %xcc, rtrap_irq;
149 188
150 #define TICK_SMP_IRQ \ 189 #define TICK_SMP_IRQ \
151 rdpr %pil, %g2; \ 190 rdpr %pil, %g2; \
152 wrpr %g0, 15, %pil; \ 191 wrpr %g0, 15, %pil; \
153 sethi %hi(109f), %g7; \ 192 sethi %hi(109f), %g7; \
154 b,pt %xcc, etrap_irq; \ 193 ba,pt %xcc, etrap_irq; \
155 109: or %g7, %lo(109b), %g7; \ 194 109: or %g7, %lo(109b), %g7; \
156 call smp_percpu_timer_interrupt; \ 195 call smp_percpu_timer_interrupt; \
157 add %sp, PTREGS_OFF, %o0; \ 196 add %sp, PTREGS_OFF, %o0; \
158 ba,a,pt %xcc, rtrap_irq; 197 ba,a,pt %xcc, rtrap_irq;
198
199 #endif
159 200
160 #define TRAP_IVEC TRAP_NOSAVE(do_ivec) 201 #define TRAP_IVEC TRAP_NOSAVE(do_ivec)
161 202
162 #define BTRAP(lvl) TRAP_ARG(bad_trap, lvl) 203 #define BTRAP(lvl) TRAP_ARG(bad_trap, lvl)
163 204
164 #define BTRAPTL1(lvl) TRAPTL1_ARG(bad_trap_tl1, lvl) 205 #define BTRAPTL1(lvl) TRAPTL1_ARG(bad_trap_tl1, lvl)
165 206
166 #define FLUSH_WINDOW_TRAP \ 207 #define FLUSH_WINDOW_TRAP \
167 ba,pt %xcc, etrap; \ 208 ba,pt %xcc, etrap; \
168 rd %pc, %g7; \ 209 rd %pc, %g7; \
169 flushw; \ 210 flushw; \
170 ldx [%sp + PTREGS_OFF + PT_V9_TNPC], %l1; \ 211 ldx [%sp + PTREGS_OFF + PT_V9_TNPC], %l1; \
171 add %l1, 4, %l2; \ 212 add %l1, 4, %l2; \
172 stx %l1, [%sp + PTREGS_OFF + PT_V9_TPC]; \ 213 stx %l1, [%sp + PTREGS_OFF + PT_V9_TPC]; \
173 ba,pt %xcc, rtrap_clr_l6; \ 214 ba,pt %xcc, rtrap_clr_l6; \
174 stx %l2, [%sp + PTREGS_OFF + PT_V9_TNPC]; 215 stx %l2, [%sp + PTREGS_OFF + PT_V9_TNPC];
175 216
176 #ifdef CONFIG_KPROBES 217 #ifdef CONFIG_KPROBES
177 #define KPROBES_TRAP(lvl) TRAP_IRQ(kprobe_trap, lvl) 218 #define KPROBES_TRAP(lvl) TRAP_IRQ(kprobe_trap, lvl)
178 #else 219 #else
179 #define KPROBES_TRAP(lvl) TRAP_ARG(bad_trap, lvl) 220 #define KPROBES_TRAP(lvl) TRAP_ARG(bad_trap, lvl)
180 #endif 221 #endif
181 222
182 #define SUN4V_ITSB_MISS \ 223 #define SUN4V_ITSB_MISS \
183 ldxa [%g0] ASI_SCRATCHPAD, %g2; \ 224 ldxa [%g0] ASI_SCRATCHPAD, %g2; \
184 ldx [%g2 + HV_FAULT_I_ADDR_OFFSET], %g4; \ 225 ldx [%g2 + HV_FAULT_I_ADDR_OFFSET], %g4; \
185 ldx [%g2 + HV_FAULT_I_CTX_OFFSET], %g5; \ 226 ldx [%g2 + HV_FAULT_I_CTX_OFFSET], %g5; \
186 srlx %g4, 22, %g6; \ 227 srlx %g4, 22, %g6; \
187 ba,pt %xcc, sun4v_itsb_miss; \ 228 ba,pt %xcc, sun4v_itsb_miss; \
188 nop; \ 229 nop; \
189 nop; \ 230 nop; \
190 nop; 231 nop;
191 232
192 #define SUN4V_DTSB_MISS \ 233 #define SUN4V_DTSB_MISS \
193 ldxa [%g0] ASI_SCRATCHPAD, %g2; \ 234 ldxa [%g0] ASI_SCRATCHPAD, %g2; \
194 ldx [%g2 + HV_FAULT_D_ADDR_OFFSET], %g4; \ 235 ldx [%g2 + HV_FAULT_D_ADDR_OFFSET], %g4; \
195 ldx [%g2 + HV_FAULT_D_CTX_OFFSET], %g5; \ 236 ldx [%g2 + HV_FAULT_D_CTX_OFFSET], %g5; \
196 srlx %g4, 22, %g6; \ 237 srlx %g4, 22, %g6; \
197 ba,pt %xcc, sun4v_dtsb_miss; \ 238 ba,pt %xcc, sun4v_dtsb_miss; \
198 nop; \ 239 nop; \
199 nop; \ 240 nop; \
200 nop; 241 nop;
201 242
/* Before touching these macros, you owe it to yourself to go and
 * see how arch/sparc64/kernel/winfixup.S works... -DaveM
 *
 * For the user cases we used to use the %asi register, but
 * it turns out that the "wr xxx, %asi" costs ~5 cycles, so
 * now we use immediate ASI loads and stores instead. Kudos
 * to Greg Onufer for pointing out this performance anomaly.
 *
 * Further note that we cannot use the g2, g4, g5, and g7 alternate
 * globals in the spill routines; check out the save instruction in
 * arch/sparc64/kernel/etrap.S to see what I mean about g2, and
 * g4/g5 are the globals which are preserved by etrap processing
 * for the caller of it. The g7 register is the return pc for
 * etrap. Finally, g6 is the current thread register, so we cannot
 * use it in the spill handlers either. Most of these rules do not
 * apply to fill processing; there only g6 is not usable.
 */

/* Normal kernel spill */
#define SPILL_0_NORMAL \
	stx	%l0, [%sp + STACK_BIAS + 0x00]; \
	stx	%l1, [%sp + STACK_BIAS + 0x08]; \
	stx	%l2, [%sp + STACK_BIAS + 0x10]; \
	stx	%l3, [%sp + STACK_BIAS + 0x18]; \
	stx	%l4, [%sp + STACK_BIAS + 0x20]; \
	stx	%l5, [%sp + STACK_BIAS + 0x28]; \
	stx	%l6, [%sp + STACK_BIAS + 0x30]; \
	stx	%l7, [%sp + STACK_BIAS + 0x38]; \
	stx	%i0, [%sp + STACK_BIAS + 0x40]; \
	stx	%i1, [%sp + STACK_BIAS + 0x48]; \
	stx	%i2, [%sp + STACK_BIAS + 0x50]; \
	stx	%i3, [%sp + STACK_BIAS + 0x58]; \
	stx	%i4, [%sp + STACK_BIAS + 0x60]; \
	stx	%i5, [%sp + STACK_BIAS + 0x68]; \
	stx	%i6, [%sp + STACK_BIAS + 0x70]; \
	stx	%i7, [%sp + STACK_BIAS + 0x78]; \
	saved; retry; nop; nop; nop; nop; nop; nop; \
	nop; nop; nop; nop; nop; nop; nop; nop;

#define SPILL_0_NORMAL_ETRAP \
etrap_kernel_spill: \
	stx	%l0, [%sp + STACK_BIAS + 0x00]; \
	stx	%l1, [%sp + STACK_BIAS + 0x08]; \
	stx	%l2, [%sp + STACK_BIAS + 0x10]; \
	stx	%l3, [%sp + STACK_BIAS + 0x18]; \
	stx	%l4, [%sp + STACK_BIAS + 0x20]; \
	stx	%l5, [%sp + STACK_BIAS + 0x28]; \
	stx	%l6, [%sp + STACK_BIAS + 0x30]; \
	stx	%l7, [%sp + STACK_BIAS + 0x38]; \
	stx	%i0, [%sp + STACK_BIAS + 0x40]; \
	stx	%i1, [%sp + STACK_BIAS + 0x48]; \
	stx	%i2, [%sp + STACK_BIAS + 0x50]; \
	stx	%i3, [%sp + STACK_BIAS + 0x58]; \
	stx	%i4, [%sp + STACK_BIAS + 0x60]; \
	stx	%i5, [%sp + STACK_BIAS + 0x68]; \
	stx	%i6, [%sp + STACK_BIAS + 0x70]; \
	stx	%i7, [%sp + STACK_BIAS + 0x78]; \
	saved; \
	sub	%g1, 2, %g1; \
	ba,pt	%xcc, etrap_save; \
	 wrpr	%g1, %cwp; \
	nop; nop; nop; nop; nop; nop; nop; nop; \
	nop; nop; nop; nop;

/* Normal 64bit spill */
#define SPILL_1_GENERIC(ASI) \
	add	%sp, STACK_BIAS + 0x00, %g1; \
	stxa	%l0, [%g1 + %g0] ASI; \
	mov	0x08, %g3; \
	stxa	%l1, [%g1 + %g3] ASI; \
	add	%g1, 0x10, %g1; \
	stxa	%l2, [%g1 + %g0] ASI; \
	stxa	%l3, [%g1 + %g3] ASI; \
	add	%g1, 0x10, %g1; \
	stxa	%l4, [%g1 + %g0] ASI; \
	stxa	%l5, [%g1 + %g3] ASI; \
	add	%g1, 0x10, %g1; \
	stxa	%l6, [%g1 + %g0] ASI; \
	stxa	%l7, [%g1 + %g3] ASI; \
	add	%g1, 0x10, %g1; \
	stxa	%i0, [%g1 + %g0] ASI; \
	stxa	%i1, [%g1 + %g3] ASI; \
	add	%g1, 0x10, %g1; \
	stxa	%i2, [%g1 + %g0] ASI; \
	stxa	%i3, [%g1 + %g3] ASI; \
	add	%g1, 0x10, %g1; \
	stxa	%i4, [%g1 + %g0] ASI; \
	stxa	%i5, [%g1 + %g3] ASI; \
	add	%g1, 0x10, %g1; \
	stxa	%i6, [%g1 + %g0] ASI; \
	stxa	%i7, [%g1 + %g3] ASI; \
	saved; \
	retry; nop; nop; \
	b,a,pt	%xcc, spill_fixup_dax; \
	b,a,pt	%xcc, spill_fixup_mna; \
	b,a,pt	%xcc, spill_fixup;

#define SPILL_1_GENERIC_ETRAP \
etrap_user_spill_64bit: \
	stxa	%l0, [%sp + STACK_BIAS + 0x00] %asi; \
	stxa	%l1, [%sp + STACK_BIAS + 0x08] %asi; \
	stxa	%l2, [%sp + STACK_BIAS + 0x10] %asi; \
	stxa	%l3, [%sp + STACK_BIAS + 0x18] %asi; \
	stxa	%l4, [%sp + STACK_BIAS + 0x20] %asi; \
	stxa	%l5, [%sp + STACK_BIAS + 0x28] %asi; \
	stxa	%l6, [%sp + STACK_BIAS + 0x30] %asi; \
	stxa	%l7, [%sp + STACK_BIAS + 0x38] %asi; \
	stxa	%i0, [%sp + STACK_BIAS + 0x40] %asi; \
	stxa	%i1, [%sp + STACK_BIAS + 0x48] %asi; \
	stxa	%i2, [%sp + STACK_BIAS + 0x50] %asi; \
	stxa	%i3, [%sp + STACK_BIAS + 0x58] %asi; \
	stxa	%i4, [%sp + STACK_BIAS + 0x60] %asi; \
	stxa	%i5, [%sp + STACK_BIAS + 0x68] %asi; \
	stxa	%i6, [%sp + STACK_BIAS + 0x70] %asi; \
	stxa	%i7, [%sp + STACK_BIAS + 0x78] %asi; \
	saved; \
	sub	%g1, 2, %g1; \
	ba,pt	%xcc, etrap_save; \
	 wrpr	%g1, %cwp; \
	nop; nop; nop; nop; nop; \
	nop; nop; nop; nop; \
	ba,a,pt	%xcc, etrap_spill_fixup_64bit; \
	ba,a,pt	%xcc, etrap_spill_fixup_64bit; \
	ba,a,pt	%xcc, etrap_spill_fixup_64bit;

#define SPILL_1_GENERIC_ETRAP_FIXUP \
etrap_spill_fixup_64bit: \
	ldub	[%g6 + TI_WSAVED], %g1; \
	sll	%g1, 3, %g3; \
	add	%g6, %g3, %g3; \
	stx	%sp, [%g3 + TI_RWIN_SPTRS]; \
	sll	%g1, 7, %g3; \
	add	%g6, %g3, %g3; \
	stx	%l0, [%g3 + TI_REG_WINDOW + 0x00]; \
	stx	%l1, [%g3 + TI_REG_WINDOW + 0x08]; \
	stx	%l2, [%g3 + TI_REG_WINDOW + 0x10]; \
	stx	%l3, [%g3 + TI_REG_WINDOW + 0x18]; \
	stx	%l4, [%g3 + TI_REG_WINDOW + 0x20]; \
	stx	%l5, [%g3 + TI_REG_WINDOW + 0x28]; \
	stx	%l6, [%g3 + TI_REG_WINDOW + 0x30]; \
	stx	%l7, [%g3 + TI_REG_WINDOW + 0x38]; \
	stx	%i0, [%g3 + TI_REG_WINDOW + 0x40]; \
	stx	%i1, [%g3 + TI_REG_WINDOW + 0x48]; \
	stx	%i2, [%g3 + TI_REG_WINDOW + 0x50]; \
	stx	%i3, [%g3 + TI_REG_WINDOW + 0x58]; \
	stx	%i4, [%g3 + TI_REG_WINDOW + 0x60]; \
	stx	%i5, [%g3 + TI_REG_WINDOW + 0x68]; \
	stx	%i6, [%g3 + TI_REG_WINDOW + 0x70]; \
	stx	%i7, [%g3 + TI_REG_WINDOW + 0x78]; \
	add	%g1, 1, %g1; \
	stb	%g1, [%g6 + TI_WSAVED]; \
	saved; \
	rdpr	%cwp, %g1; \
	sub	%g1, 2, %g1; \
	ba,pt	%xcc, etrap_save; \
	 wrpr	%g1, %cwp; \
	nop; nop; nop
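The fixup path above is easiest to read as bookkeeping on the thread_info structure: it loads the pending-window count from TI_WSAVED, scales it twice (by 8 for the stack-pointer slot, by 128 for the 16 x 8-byte register window image), stores %sp and the window, then increments the count. A minimal C model of that logic is sketched below; the struct layout and `NSWINS` value here are illustrative stand-ins, not the kernel's exact definitions.

```c
/* Hypothetical C model of the etrap_spill_fixup_64bit bookkeeping.
 * Field names mirror the TI_* offsets used by the assembly; sizes
 * are chosen so the two shift amounts (sll %g1, 3 and sll %g1, 7)
 * fall out of the array element sizes. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define NSWINS 7	/* max pending user windows; illustrative value */

struct reg_window {
	uint64_t locals[8];	/* %l0..%l7 */
	uint64_t ins[8];	/* %i0..%i7 */
};				/* 128 bytes -> the sll %g1, 7 scaling */

struct thread_info_model {
	uint8_t wsaved;				/* TI_WSAVED      */
	uint64_t rwin_sptrs[NSWINS];		/* TI_RWIN_SPTRS  */
	struct reg_window reg_window[NSWINS];	/* TI_REG_WINDOW  */
};

/* Stash one register window plus its %sp, then bump the count. */
static void spill_fixup(struct thread_info_model *ti, uint64_t sp,
			const struct reg_window *win)
{
	uint8_t idx = ti->wsaved;	/* ldub [%g6 + TI_WSAVED], %g1 */
	ti->rwin_sptrs[idx] = sp;	/* sll %g1, 3: 8-byte slots    */
	ti->reg_window[idx] = *win;	/* sll %g1, 7: 128-byte slots  */
	ti->wsaved = idx + 1;		/* stb %g1, [%g6 + TI_WSAVED]  */
}
```

The saved windows are flushed to the user stack later, once the kernel can safely take the fault that made the direct spill impossible.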

/* Normal 32bit spill */
#define SPILL_2_GENERIC(ASI) \
	srl	%sp, 0, %sp; \
	stwa	%l0, [%sp + %g0] ASI; \
	mov	0x04, %g3; \
	stwa	%l1, [%sp + %g3] ASI; \
	add	%sp, 0x08, %g1; \
	stwa	%l2, [%g1 + %g0] ASI; \
	stwa	%l3, [%g1 + %g3] ASI; \
	add	%g1, 0x08, %g1; \
	stwa	%l4, [%g1 + %g0] ASI; \
	stwa	%l5, [%g1 + %g3] ASI; \
	add	%g1, 0x08, %g1; \
	stwa	%l6, [%g1 + %g0] ASI; \
	stwa	%l7, [%g1 + %g3] ASI; \
	add	%g1, 0x08, %g1; \
	stwa	%i0, [%g1 + %g0] ASI; \
	stwa	%i1, [%g1 + %g3] ASI; \
	add	%g1, 0x08, %g1; \
	stwa	%i2, [%g1 + %g0] ASI; \
	stwa	%i3, [%g1 + %g3] ASI; \
	add	%g1, 0x08, %g1; \
	stwa	%i4, [%g1 + %g0] ASI; \
	stwa	%i5, [%g1 + %g3] ASI; \
	add	%g1, 0x08, %g1; \
	stwa	%i6, [%g1 + %g0] ASI; \
	stwa	%i7, [%g1 + %g3] ASI; \
	saved; \
	retry; nop; nop; \
	b,a,pt	%xcc, spill_fixup_dax; \
	b,a,pt	%xcc, spill_fixup_mna; \
	b,a,pt	%xcc, spill_fixup;

#define SPILL_2_GENERIC_ETRAP \
etrap_user_spill_32bit: \
	srl	%sp, 0, %sp; \
	stwa	%l0, [%sp + 0x00] %asi; \
	stwa	%l1, [%sp + 0x04] %asi; \
	stwa	%l2, [%sp + 0x08] %asi; \
	stwa	%l3, [%sp + 0x0c] %asi; \
	stwa	%l4, [%sp + 0x10] %asi; \
	stwa	%l5, [%sp + 0x14] %asi; \
	stwa	%l6, [%sp + 0x18] %asi; \
	stwa	%l7, [%sp + 0x1c] %asi; \
	stwa	%i0, [%sp + 0x20] %asi; \
	stwa	%i1, [%sp + 0x24] %asi; \
	stwa	%i2, [%sp + 0x28] %asi; \
	stwa	%i3, [%sp + 0x2c] %asi; \
	stwa	%i4, [%sp + 0x30] %asi; \
	stwa	%i5, [%sp + 0x34] %asi; \
	stwa	%i6, [%sp + 0x38] %asi; \
	stwa	%i7, [%sp + 0x3c] %asi; \
	saved; \
	sub	%g1, 2, %g1; \
	ba,pt	%xcc, etrap_save; \
	 wrpr	%g1, %cwp; \
	nop; nop; nop; nop; \
	nop; nop; nop; nop; \
	ba,a,pt	%xcc, etrap_spill_fixup_32bit; \
	ba,a,pt	%xcc, etrap_spill_fixup_32bit; \
	ba,a,pt	%xcc, etrap_spill_fixup_32bit;

#define SPILL_2_GENERIC_ETRAP_FIXUP \
etrap_spill_fixup_32bit: \
	ldub	[%g6 + TI_WSAVED], %g1; \
	sll	%g1, 3, %g3; \
	add	%g6, %g3, %g3; \
	stx	%sp, [%g3 + TI_RWIN_SPTRS]; \
	sll	%g1, 7, %g3; \
	add	%g6, %g3, %g3; \
	stw	%l0, [%g3 + TI_REG_WINDOW + 0x00]; \
	stw	%l1, [%g3 + TI_REG_WINDOW + 0x04]; \
	stw	%l2, [%g3 + TI_REG_WINDOW + 0x08]; \
	stw	%l3, [%g3 + TI_REG_WINDOW + 0x0c]; \
	stw	%l4, [%g3 + TI_REG_WINDOW + 0x10]; \
	stw	%l5, [%g3 + TI_REG_WINDOW + 0x14]; \
	stw	%l6, [%g3 + TI_REG_WINDOW + 0x18]; \
	stw	%l7, [%g3 + TI_REG_WINDOW + 0x1c]; \
	stw	%i0, [%g3 + TI_REG_WINDOW + 0x20]; \
	stw	%i1, [%g3 + TI_REG_WINDOW + 0x24]; \
	stw	%i2, [%g3 + TI_REG_WINDOW + 0x28]; \
	stw	%i3, [%g3 + TI_REG_WINDOW + 0x2c]; \
	stw	%i4, [%g3 + TI_REG_WINDOW + 0x30]; \
	stw	%i5, [%g3 + TI_REG_WINDOW + 0x34]; \
	stw	%i6, [%g3 + TI_REG_WINDOW + 0x38]; \
	stw	%i7, [%g3 + TI_REG_WINDOW + 0x3c]; \
	add	%g1, 1, %g1; \
	stb	%g1, [%g6 + TI_WSAVED]; \
	saved; \
	rdpr	%cwp, %g1; \
	sub	%g1, 2, %g1; \
	ba,pt	%xcc, etrap_save; \
	 wrpr	%g1, %cwp; \
	nop; nop; nop

#define SPILL_1_NORMAL	SPILL_1_GENERIC(ASI_AIUP)
#define SPILL_2_NORMAL	SPILL_2_GENERIC(ASI_AIUP)
#define SPILL_3_NORMAL	SPILL_0_NORMAL
#define SPILL_4_NORMAL	SPILL_0_NORMAL
#define SPILL_5_NORMAL	SPILL_0_NORMAL
#define SPILL_6_NORMAL	SPILL_0_NORMAL
#define SPILL_7_NORMAL	SPILL_0_NORMAL

#define SPILL_0_OTHER	SPILL_0_NORMAL
#define SPILL_1_OTHER	SPILL_1_GENERIC(ASI_AIUS)
#define SPILL_2_OTHER	SPILL_2_GENERIC(ASI_AIUS)
#define SPILL_3_OTHER	SPILL_3_NORMAL
#define SPILL_4_OTHER	SPILL_4_NORMAL
#define SPILL_5_OTHER	SPILL_5_NORMAL
#define SPILL_6_OTHER	SPILL_6_NORMAL
#define SPILL_7_OTHER	SPILL_7_NORMAL

/* Normal kernel fill */
#define FILL_0_NORMAL \
	ldx	[%sp + STACK_BIAS + 0x00], %l0; \
	ldx	[%sp + STACK_BIAS + 0x08], %l1; \
	ldx	[%sp + STACK_BIAS + 0x10], %l2; \
	ldx	[%sp + STACK_BIAS + 0x18], %l3; \
	ldx	[%sp + STACK_BIAS + 0x20], %l4; \
	ldx	[%sp + STACK_BIAS + 0x28], %l5; \
	ldx	[%sp + STACK_BIAS + 0x30], %l6; \
	ldx	[%sp + STACK_BIAS + 0x38], %l7; \
	ldx	[%sp + STACK_BIAS + 0x40], %i0; \
	ldx	[%sp + STACK_BIAS + 0x48], %i1; \
	ldx	[%sp + STACK_BIAS + 0x50], %i2; \
	ldx	[%sp + STACK_BIAS + 0x58], %i3; \
	ldx	[%sp + STACK_BIAS + 0x60], %i4; \
	ldx	[%sp + STACK_BIAS + 0x68], %i5; \
	ldx	[%sp + STACK_BIAS + 0x70], %i6; \
	ldx	[%sp + STACK_BIAS + 0x78], %i7; \
	restored; retry; nop; nop; nop; nop; nop; nop; \
	nop; nop; nop; nop; nop; nop; nop; nop;

#define FILL_0_NORMAL_RTRAP \
kern_rtt_fill: \
	rdpr	%cwp, %g1; \
	sub	%g1, 1, %g1; \
	wrpr	%g1, %cwp; \
	ldx	[%sp + STACK_BIAS + 0x00], %l0; \
	ldx	[%sp + STACK_BIAS + 0x08], %l1; \
	ldx	[%sp + STACK_BIAS + 0x10], %l2; \
	ldx	[%sp + STACK_BIAS + 0x18], %l3; \
	ldx	[%sp + STACK_BIAS + 0x20], %l4; \
	ldx	[%sp + STACK_BIAS + 0x28], %l5; \
	ldx	[%sp + STACK_BIAS + 0x30], %l6; \
	ldx	[%sp + STACK_BIAS + 0x38], %l7; \
	ldx	[%sp + STACK_BIAS + 0x40], %i0; \
	ldx	[%sp + STACK_BIAS + 0x48], %i1; \
	ldx	[%sp + STACK_BIAS + 0x50], %i2; \
	ldx	[%sp + STACK_BIAS + 0x58], %i3; \
	ldx	[%sp + STACK_BIAS + 0x60], %i4; \
	ldx	[%sp + STACK_BIAS + 0x68], %i5; \
	ldx	[%sp + STACK_BIAS + 0x70], %i6; \
	ldx	[%sp + STACK_BIAS + 0x78], %i7; \
	restored; \
	add	%g1, 1, %g1; \
	ba,pt	%xcc, kern_rtt_restore; \
	 wrpr	%g1, %cwp; \
	nop; nop; nop; nop; nop; \
	nop; nop; nop; nop;


/* Normal 64bit fill */
#define FILL_1_GENERIC(ASI) \
	add	%sp, STACK_BIAS + 0x00, %g1; \
	ldxa	[%g1 + %g0] ASI, %l0; \
	mov	0x08, %g2; \
	mov	0x10, %g3; \
	ldxa	[%g1 + %g2] ASI, %l1; \
	mov	0x18, %g5; \
	ldxa	[%g1 + %g3] ASI, %l2; \
	ldxa	[%g1 + %g5] ASI, %l3; \
	add	%g1, 0x20, %g1; \
	ldxa	[%g1 + %g0] ASI, %l4; \
	ldxa	[%g1 + %g2] ASI, %l5; \
	ldxa	[%g1 + %g3] ASI, %l6; \
	ldxa	[%g1 + %g5] ASI, %l7; \
	add	%g1, 0x20, %g1; \
	ldxa	[%g1 + %g0] ASI, %i0; \
	ldxa	[%g1 + %g2] ASI, %i1; \
	ldxa	[%g1 + %g3] ASI, %i2; \
	ldxa	[%g1 + %g5] ASI, %i3; \
	add	%g1, 0x20, %g1; \
	ldxa	[%g1 + %g0] ASI, %i4; \
	ldxa	[%g1 + %g2] ASI, %i5; \
	ldxa	[%g1 + %g3] ASI, %i6; \
	ldxa	[%g1 + %g5] ASI, %i7; \
	restored; \
	retry; nop; nop; nop; nop; \
	b,a,pt	%xcc, fill_fixup_dax; \
	b,a,pt	%xcc, fill_fixup_mna; \
	b,a,pt	%xcc, fill_fixup;

#define FILL_1_GENERIC_RTRAP \
user_rtt_fill_64bit: \
	ldxa	[%sp + STACK_BIAS + 0x00] %asi, %l0; \
	ldxa	[%sp + STACK_BIAS + 0x08] %asi, %l1; \
	ldxa	[%sp + STACK_BIAS + 0x10] %asi, %l2; \
	ldxa	[%sp + STACK_BIAS + 0x18] %asi, %l3; \
	ldxa	[%sp + STACK_BIAS + 0x20] %asi, %l4; \
	ldxa	[%sp + STACK_BIAS + 0x28] %asi, %l5; \
	ldxa	[%sp + STACK_BIAS + 0x30] %asi, %l6; \
	ldxa	[%sp + STACK_BIAS + 0x38] %asi, %l7; \
	ldxa	[%sp + STACK_BIAS + 0x40] %asi, %i0; \
	ldxa	[%sp + STACK_BIAS + 0x48] %asi, %i1; \
	ldxa	[%sp + STACK_BIAS + 0x50] %asi, %i2; \
	ldxa	[%sp + STACK_BIAS + 0x58] %asi, %i3; \
	ldxa	[%sp + STACK_BIAS + 0x60] %asi, %i4; \
	ldxa	[%sp + STACK_BIAS + 0x68] %asi, %i5; \
	ldxa	[%sp + STACK_BIAS + 0x70] %asi, %i6; \
	ldxa	[%sp + STACK_BIAS + 0x78] %asi, %i7; \
	ba,pt	%xcc, user_rtt_pre_restore; \
	 restored; \
	nop; nop; nop; nop; nop; nop; \
	nop; nop; nop; nop; nop; \
	ba,a,pt	%xcc, user_rtt_fill_fixup; \
	ba,a,pt	%xcc, user_rtt_fill_fixup; \
	ba,a,pt	%xcc, user_rtt_fill_fixup;


/* Normal 32bit fill */
#define FILL_2_GENERIC(ASI) \
	srl	%sp, 0, %sp; \
	lduwa	[%sp + %g0] ASI, %l0; \
	mov	0x04, %g2; \
	mov	0x08, %g3; \
	lduwa	[%sp + %g2] ASI, %l1; \
	mov	0x0c, %g5; \
	lduwa	[%sp + %g3] ASI, %l2; \
	lduwa	[%sp + %g5] ASI, %l3; \
	add	%sp, 0x10, %g1; \
	lduwa	[%g1 + %g0] ASI, %l4; \
	lduwa	[%g1 + %g2] ASI, %l5; \
	lduwa	[%g1 + %g3] ASI, %l6; \
	lduwa	[%g1 + %g5] ASI, %l7; \
	add	%g1, 0x10, %g1; \
	lduwa	[%g1 + %g0] ASI, %i0; \
	lduwa	[%g1 + %g2] ASI, %i1; \
	lduwa	[%g1 + %g3] ASI, %i2; \
	lduwa	[%g1 + %g5] ASI, %i3; \
	add	%g1, 0x10, %g1; \
	lduwa	[%g1 + %g0] ASI, %i4; \
	lduwa	[%g1 + %g2] ASI, %i5; \
	lduwa	[%g1 + %g3] ASI, %i6; \
	lduwa	[%g1 + %g5] ASI, %i7; \
	restored; \
	retry; nop; nop; nop; nop; \
	b,a,pt	%xcc, fill_fixup_dax; \
	b,a,pt	%xcc, fill_fixup_mna; \
	b,a,pt	%xcc, fill_fixup;

#define FILL_2_GENERIC_RTRAP \
user_rtt_fill_32bit: \
	srl	%sp, 0, %sp; \
	lduwa	[%sp + 0x00] %asi, %l0; \
	lduwa	[%sp + 0x04] %asi, %l1; \
	lduwa	[%sp + 0x08] %asi, %l2; \
	lduwa	[%sp + 0x0c] %asi, %l3; \
	lduwa	[%sp + 0x10] %asi, %l4; \
	lduwa	[%sp + 0x14] %asi, %l5; \
	lduwa	[%sp + 0x18] %asi, %l6; \
	lduwa	[%sp + 0x1c] %asi, %l7; \
	lduwa	[%sp + 0x20] %asi, %i0; \
	lduwa	[%sp + 0x24] %asi, %i1; \
	lduwa	[%sp + 0x28] %asi, %i2; \
	lduwa	[%sp + 0x2c] %asi, %i3; \
	lduwa	[%sp + 0x30] %asi, %i4; \
	lduwa	[%sp + 0x34] %asi, %i5; \
	lduwa	[%sp + 0x38] %asi, %i6; \
	lduwa	[%sp + 0x3c] %asi, %i7; \
	ba,pt	%xcc, user_rtt_pre_restore; \
	 restored; \
	nop; nop; nop; nop; nop; \
	nop; nop; nop; nop; nop; \
	ba,a,pt	%xcc, user_rtt_fill_fixup; \
	ba,a,pt	%xcc, user_rtt_fill_fixup; \
	ba,a,pt	%xcc, user_rtt_fill_fixup;


#define FILL_1_NORMAL	FILL_1_GENERIC(ASI_AIUP)
#define FILL_2_NORMAL	FILL_2_GENERIC(ASI_AIUP)
#define FILL_3_NORMAL	FILL_0_NORMAL
#define FILL_4_NORMAL	FILL_0_NORMAL
#define FILL_5_NORMAL	FILL_0_NORMAL
#define FILL_6_NORMAL	FILL_0_NORMAL
#define FILL_7_NORMAL	FILL_0_NORMAL

#define FILL_0_OTHER	FILL_0_NORMAL
#define FILL_1_OTHER	FILL_1_GENERIC(ASI_AIUS)
#define FILL_2_OTHER	FILL_2_GENERIC(ASI_AIUS)
#define FILL_3_OTHER	FILL_3_NORMAL
#define FILL_4_OTHER	FILL_4_NORMAL
#define FILL_5_OTHER	FILL_5_NORMAL
#define FILL_6_OTHER	FILL_6_NORMAL
#define FILL_7_OTHER	FILL_7_NORMAL

#endif /* !(_SPARC64_TTABLE_H) */