Commit bcdcd8e725b923ad7c0de809680d5d5658a7bf8c

Authored by Pavel Emelianov
Committed by Linus Torvalds
Parent: 74489a91dd

Report that kernel is tainted if there was an OOPS

If the kernel OOPSed or BUGed then it probably should be considered
tainted.  Thus, all subsequent OOPSes and SysRq dumps will report the
tainted kernel.  This saves a lot of time explaining oddities in the
call traces.
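
The change itself is mechanical: each architecture's die path gains a
one-line add_taint(TAINT_DIE) call next to its register dump.  A condensed
sketch of the pattern, using the names from the arch/alpha hunk below (the
recursion check and SMP printout are omitted here, and exact placement
varies per architecture):

	/* Condensed sketch of the per-architecture pattern in this commit. */
	void
	die_if_kernel(char *str, struct pt_regs *regs, long err,
		      unsigned long *r9_15)
	{
		if (regs->ps & 8)	/* trap came from user mode */
			return;
		printk("%s(%d): %s %ld\n", current->comm, current->pid, str, err);
		dik_show_regs(regs, r9_15);	/* header prints print_tainted() */
		add_taint(TAINT_DIE);	/* later oopses now show 'D' in "Tainted:" */
		dik_show_trace((unsigned long *)(regs + 1));
		dik_show_code((unsigned int *)regs->pc);
		do_exit(SIGSEGV);
	}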

Signed-off-by: Pavel Emelianov <xemul@openvz.org>
Acked-by: Randy Dunlap <randy.dunlap@oracle.com>
Cc: <linux-arch@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
[ Added parisc patch from Matthew Wilson  -Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Showing 21 changed files with 24 additions and 2 deletions

Documentation/oops-tracing.txt
NOTE: ksymoops is useless on 2.6.  Please use the Oops in its original format
(from dmesg, etc).  Ignore any references in this or other docs to "decoding
the Oops" or "running it through ksymoops".  If you post an Oops from 2.6 that
has been run through ksymoops, people will just tell you to repost it.

Quick Summary
-------------

Find the Oops and send it to the maintainer of the kernel area that seems to be
involved with the problem.  Don't worry too much about getting the wrong person.
If you are unsure send it to the person responsible for the code relevant to
what you were doing.  If it occurs repeatably try and describe how to recreate
it.  That's worth even more than the oops.

If you are totally stumped as to whom to send the report, send it to
linux-kernel@vger.kernel.org.  Thanks for your help in making Linux as
stable as humanly possible.

Where is the Oops?
----------------------

Normally the Oops text is read from the kernel buffers by klogd and
handed to syslogd which writes it to a syslog file, typically
/var/log/messages (depends on /etc/syslog.conf).  Sometimes klogd dies,
in which case you can run dmesg > file to read the data from the kernel
buffers and save it.  Or you can cat /proc/kmsg > file, however you
have to break in to stop the transfer, kmsg is a "never ending file".
If the machine has crashed so badly that you cannot enter commands or
the disk is not available then you have three options :-

(1) Hand copy the text from the screen and type it in after the machine
    has restarted.  Messy but it is the only option if you have not
    planned for a crash.  Alternatively, you can take a picture of
    the screen with a digital camera - not nice, but better than
    nothing.  If the messages scroll off the top of the console, you
    may find that booting with a higher resolution (eg, vga=791)
    will allow you to read more of the text. (Caveat: This needs vesafb,
    so won't help for 'early' oopses)

(2) Boot with a serial console (see Documentation/serial-console.txt),
    run a null modem to a second machine and capture the output there
    using your favourite communication program.  Minicom works well.

(3) Use Kdump (see Documentation/kdump/kdump.txt) and
    extract the kernel ring buffer from old memory using the dmesg
    gdbmacro in Documentation/kdump/gdbmacros.txt.


Full Information
----------------

NOTE: the message from Linus below applies to the 2.4 kernel.  I have
preserved it for historical reasons, and because some of the information
in it still applies.  Especially, please ignore any references to ksymoops.

From: Linus Torvalds <torvalds@osdl.org>

How to track down an Oops.. [originally a mail to linux-kernel]

The main trick is having 5 years of experience with those pesky oops
messages ;-)

Actually, there are things you can do that make this easier.  I have two
separate approaches:

	gdb /usr/src/linux/vmlinux
	gdb> disassemble <offending_function>

That's the easy way to find the problem, at least if the bug-report is
well made (like this one was - run through ksymoops to get the
information of which function and the offset in the function that it
happened in).

Oh, it helps if the report happens on a kernel that is compiled with the
same compiler and similar setups.

The other thing to do is disassemble the "Code:" part of the bug report:
ksymoops will do this too with the correct tools, but if you don't have
the tools you can just do a silly program:

	char str[] = "\xXX\xXX\xXX...";
	main(){}

and compile it with gcc -g and then do "disassemble str" (where the "XX"
stuff are the values reported by the Oops - you can just cut-and-paste
and do a replace of spaces to "\x" - that's what I do, as I'm too lazy
to write a program to automate this all).

Alternatively, you can use the shell script in scripts/decodecode.
Its usage is: decodecode < oops.txt

The hex bytes that follow "Code:" may (in some architectures) have a series
of bytes that precede the current instruction pointer as well as bytes at and
following the current instruction pointer.  In some cases, one instruction
byte or word is surrounded by <> or (), as in "<86>" or "(f00d)".  These
<> or () markings indicate the current instruction pointer.  Example from
i386, split into multiple lines for readability:

Code: f9 0f 8d f9 00 00 00 8d 42 0c e8 dd 26 11 c7 a1 60 ea 2b f9 8b 50 08 a1
64 ea 2b f9 8d 34 82 8b 1e 85 db 74 6d 8b 15 60 ea 2b f9 <8b> 43 04 39 42 54
7e 04 40 89 42 54 8b 43 04 3b 05 00 f6 52 c0

Finally, if you want to see where the code comes from, you can do

	cd /usr/src/linux
	make fs/buffer.s	# or whatever file the bug happened in

and then you get a better idea of what happens than with the gdb
disassembly.

Now, the trick is just then to combine all the data you have: the C
sources (and general knowledge of what it _should_ do), the assembly
listing and the code disassembly (and additionally the register dump you
also get from the "oops" message - that can be useful to see _what_ the
corrupted pointers were, and when you have the assembler listing you can
also match the other registers to whatever C expressions they were used
for).

Essentially, you just look at what doesn't match (in this case it was the
"Code" disassembly that didn't match with what the compiler generated).
Then you need to find out _why_ they don't match.  Often it's simple - you
see that the code uses a NULL pointer and then you look at the code and
wonder how the NULL pointer got there, and if it's a valid thing to do
you just check against it..

Now, if somebody gets the idea that this is time-consuming and requires
some small amount of concentration, you're right.  Which is why I will
mostly just ignore any panic reports that don't have the symbol table
info etc looked up: it simply gets too hard to look it up (I have some
programs to search for specific patterns in the kernel code segment, and
sometimes I have been able to look up those kinds of panics too, but
that really requires pretty good knowledge of the kernel just to be able
to pick out the right sequences etc..)

_Sometimes_ it happens that I just see the disassembled code sequence
from the panic, and I know immediately where it's coming from.  That's when
I get worried that I've been doing this for too long ;-)

		Linus


---------------------------------------------------------------------------
Notes on Oops tracing with klogd:

In order to help Linus and the other kernel developers there has been
substantial support incorporated into klogd for processing protection
faults.  In order to have full support for address resolution at least
version 1.3-pl3 of the sysklogd package should be used.

When a protection fault occurs the klogd daemon automatically
translates important addresses in the kernel log messages to their
symbolic equivalents.  This translated kernel message is then
forwarded through whatever reporting mechanism klogd is using.  The
protection fault message can be simply cut out of the message files
and forwarded to the kernel developers.

Two types of address resolution are performed by klogd.  The first is
static translation and the second is dynamic translation.  Static
translation uses the System.map file in much the same manner that
ksymoops does.  In order to do static translation the klogd daemon
must be able to find a system map file at daemon initialization time.
See the klogd man page for information on how klogd searches for map
files.

Dynamic address translation is important when kernel loadable modules
are being used.  Since memory for kernel modules is allocated from the
kernel's dynamic memory pools there are no fixed locations for either
the start of the module or for functions and symbols in the module.

The kernel supports system calls which allow a program to determine
which modules are loaded and their location in memory.  Using these
system calls the klogd daemon builds a symbol table which can be used
to debug a protection fault which occurs in a loadable kernel module.

At the very minimum klogd will provide the name of the module which
generated the protection fault.  There may be additional symbolic
information available if the developer of the loadable module chose to
export symbol information from the module.

Since the kernel module environment can be dynamic there must be a
mechanism for notifying the klogd daemon when a change in module
environment occurs.  There are command line options available which
allow klogd to signal the currently executing daemon that symbol
information should be refreshed.  See the klogd manual page for more
information.

A patch is included with the sysklogd distribution which modifies the
modules-2.0.0 package to automatically signal klogd whenever a module
is loaded or unloaded.  Applying this patch provides essentially
seamless support for debugging protection faults which occur with
kernel loadable modules.

The following is an example of a protection fault in a loadable module
processed by klogd:
---------------------------------------------------------------------------
Aug 29 09:51:01 blizard kernel: Unable to handle kernel paging request at virtual address f15e97cc
Aug 29 09:51:01 blizard kernel: current->tss.cr3 = 0062d000, %cr3 = 0062d000
Aug 29 09:51:01 blizard kernel: *pde = 00000000
Aug 29 09:51:01 blizard kernel: Oops: 0002
Aug 29 09:51:01 blizard kernel: CPU:    0
Aug 29 09:51:01 blizard kernel: EIP:    0010:[oops:_oops+16/3868]
Aug 29 09:51:01 blizard kernel: EFLAGS: 00010212
Aug 29 09:51:01 blizard kernel: eax: 315e97cc   ebx: 003a6f80   ecx: 001be77b   edx: 00237c0c
Aug 29 09:51:01 blizard kernel: esi: 00000000   edi: bffffdb3   ebp: 00589f90   esp: 00589f8c
Aug 29 09:51:01 blizard kernel: ds: 0018   es: 0018   fs: 002b   gs: 002b   ss: 0018
Aug 29 09:51:01 blizard kernel: Process oops_test (pid: 3374, process nr: 21, stackpage=00589000)
Aug 29 09:51:01 blizard kernel: Stack: 315e97cc 00589f98 0100b0b4 bffffed4 0012e38e 00240c64 003a6f80 00000001
Aug 29 09:51:01 blizard kernel:        00000000 00237810 bfffff00 0010a7fa 00000003 00000001 00000000 bfffff00
Aug 29 09:51:01 blizard kernel:        bffffdb3 bffffed4 ffffffda 0000002b 0007002b 0000002b 0000002b 00000036
Aug 29 09:51:01 blizard kernel: Call Trace: [oops:_oops_ioctl+48/80] [_sys_ioctl+254/272] [_system_call+82/128]
Aug 29 09:51:01 blizard kernel: Code: c7 00 05 00 00 00 eb 08 90 90 90 90 90 90 90 90 89 ec 5d c3
---------------------------------------------------------------------------

Dr. G.W. Wettstein            Oncology Research Div. Computing Facility
Roger Maris Cancer Center     INTERNET: greg@wind.rmcc.com
820 4th St. N.
Fargo, ND  58122
Phone: 701-234-7556


---------------------------------------------------------------------------
Tainted kernels:

Some oops reports contain the string 'Tainted: ' after the program
counter.  This indicates that the kernel has been tainted by some
mechanism.  The string is followed by a series of position-sensitive
characters, each representing a particular tainted value.

  1: 'G' if all modules loaded have a GPL or compatible license, 'P' if
     any proprietary module has been loaded.  Modules without a
     MODULE_LICENSE or with a MODULE_LICENSE that is not recognised by
     insmod as GPL compatible are assumed to be proprietary.

  2: 'F' if any module was force loaded by "insmod -f", ' ' if all
     modules were loaded normally.

  3: 'S' if the oops occurred on an SMP kernel running on hardware that
     hasn't been certified as safe to run multiprocessor.
     Currently this occurs only on various Athlons that are not
     SMP capable.

  4: 'R' if a module was force unloaded by "rmmod -f", ' ' if all
     modules were unloaded normally.

  5: 'M' if any processor has reported a Machine Check Exception,
     ' ' if no Machine Check Exceptions have occurred.

  6: 'B' if a page-release function has found a bad page reference or
     some unexpected page flags.

  7: 'U' if a user or user application specifically requested that the
     Tainted flag be set, ' ' otherwise.

+  8: 'D' if the kernel has died recently, i.e. there was an OOPS or BUG.
+
The primary reason for the 'Tainted: ' string is to tell kernel
debuggers if this is a clean kernel or if anything unusual has
occurred.  Tainting is permanent: even if an offending module is
unloaded, the tainted value remains to indicate that the kernel is not
trustworthy.

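The position-sensitive characters above are composed by print_tainted() in
kernel/panic.c, which this commit also extends with the new 'D' flag (that
hunk is among the 21 changed files but not shown in this excerpt).  A rough
sketch of the 2.6-era logic, for orientation only:

	/* Rough sketch of 2.6-era print_tainted(); not part of this excerpt. */
	const char *print_tainted(void)
	{
		static char buf[20];

		if (tainted)
			snprintf(buf, sizeof(buf), "Tainted: %c%c%c%c%c%c%c%c",
				 tainted & TAINT_PROPRIETARY_MODULE ? 'P' : 'G',
				 tainted & TAINT_FORCED_MODULE ? 'F' : ' ',
				 tainted & TAINT_UNSAFE_SMP ? 'S' : ' ',
				 tainted & TAINT_FORCED_RMMOD ? 'R' : ' ',
				 tainted & TAINT_MACHINE_CHECK ? 'M' : ' ',
				 tainted & TAINT_BAD_PAGE ? 'B' : ' ',
				 tainted & TAINT_USER ? 'U' : ' ',
				 tainted & TAINT_DIE ? 'D' : ' ');
		else
			snprintf(buf, sizeof(buf), "Not tainted");
		return buf;
	}
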
arch/alpha/kernel/traps.c
/*
 * arch/alpha/kernel/traps.c
 *
 * (C) Copyright 1994 Linus Torvalds
 */

/*
 * This file initializes the trap entry points
 */

#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/tty.h>
#include <linux/delay.h>
#include <linux/smp_lock.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/kallsyms.h>

#include <asm/gentrap.h>
#include <asm/uaccess.h>
#include <asm/unaligned.h>
#include <asm/sysinfo.h>
#include <asm/hwrpb.h>
#include <asm/mmu_context.h>

#include "proto.h"

/* Work-around for some SRMs which mishandle opDEC faults. */

static int opDEC_fix;

static void __init
opDEC_check(void)
{
	__asm__ __volatile__ (
	/* Load the address of... */
	"	br	$16, 1f\n"
	/* A stub instruction fault handler.  Just add 4 to the
	   pc and continue. */
	"	ldq	$16, 8($sp)\n"
	"	addq	$16, 4, $16\n"
	"	stq	$16, 8($sp)\n"
	"	call_pal %[rti]\n"
	/* Install the instruction fault handler. */
	"1:	lda	$17, 3\n"
	"	call_pal %[wrent]\n"
	/* With that in place, the fault from the round-to-minf fp
	   insn will arrive either at the "lda 4" insn (bad) or one
	   past that (good).  This places the correct fixup in %0. */
	"	lda %[fix], 0\n"
	"	cvttq/svm $f31,$f31\n"
	"	lda %[fix], 4"
	: [fix] "=r" (opDEC_fix)
	: [rti] "n" (PAL_rti), [wrent] "n" (PAL_wrent)
	: "$0", "$1", "$16", "$17", "$22", "$23", "$24", "$25");

	if (opDEC_fix)
		printk("opDEC fixup enabled.\n");
}

void
dik_show_regs(struct pt_regs *regs, unsigned long *r9_15)
{
	printk("pc = [<%016lx>]  ra = [<%016lx>]  ps = %04lx    %s\n",
	       regs->pc, regs->r26, regs->ps, print_tainted());
	print_symbol("pc is at %s\n", regs->pc);
	print_symbol("ra is at %s\n", regs->r26);
	printk("v0 = %016lx  t0 = %016lx  t1 = %016lx\n",
	       regs->r0, regs->r1, regs->r2);
	printk("t2 = %016lx  t3 = %016lx  t4 = %016lx\n",
	       regs->r3, regs->r4, regs->r5);
	printk("t5 = %016lx  t6 = %016lx  t7 = %016lx\n",
	       regs->r6, regs->r7, regs->r8);

	if (r9_15) {
		printk("s0 = %016lx  s1 = %016lx  s2 = %016lx\n",
		       r9_15[9], r9_15[10], r9_15[11]);
		printk("s3 = %016lx  s4 = %016lx  s5 = %016lx\n",
		       r9_15[12], r9_15[13], r9_15[14]);
		printk("s6 = %016lx\n", r9_15[15]);
	}

	printk("a0 = %016lx  a1 = %016lx  a2 = %016lx\n",
	       regs->r16, regs->r17, regs->r18);
	printk("a3 = %016lx  a4 = %016lx  a5 = %016lx\n",
	       regs->r19, regs->r20, regs->r21);
	printk("t8 = %016lx  t9 = %016lx  t10= %016lx\n",
	       regs->r22, regs->r23, regs->r24);
	printk("t11= %016lx  pv = %016lx  at = %016lx\n",
	       regs->r25, regs->r27, regs->r28);
	printk("gp = %016lx  sp = %p\n", regs->gp, regs+1);
#if 0
	__halt();
#endif
}

#if 0
static char * ireg_name[] = {"v0", "t0", "t1", "t2", "t3", "t4", "t5", "t6",
			   "t7", "s0", "s1", "s2", "s3", "s4", "s5", "s6",
			   "a0", "a1", "a2", "a3", "a4", "a5", "t8", "t9",
			   "t10", "t11", "ra", "pv", "at", "gp", "sp", "zero"};
#endif

static void
dik_show_code(unsigned int *pc)
{
	long i;

	printk("Code:");
	for (i = -6; i < 2; i++) {
		unsigned int insn;
		if (__get_user(insn, (unsigned int __user *)pc + i))
			break;
		printk("%c%08x%c", i ? ' ' : '<', insn, i ? ' ' : '>');
	}
	printk("\n");
}

static void
dik_show_trace(unsigned long *sp)
{
	long i = 0;
	printk("Trace:\n");
	while (0x1ff8 & (unsigned long) sp) {
		extern char _stext[], _etext[];
		unsigned long tmp = *sp;
		sp++;
		if (tmp < (unsigned long) &_stext)
			continue;
		if (tmp >= (unsigned long) &_etext)
			continue;
		printk("[<%lx>]", tmp);
		print_symbol(" %s", tmp);
		printk("\n");
		if (i > 40) {
			printk(" ...");
			break;
		}
	}
	printk("\n");
}

static int kstack_depth_to_print = 24;

void show_stack(struct task_struct *task, unsigned long *sp)
{
	unsigned long *stack;
	int i;

	/*
	 * debugging aid: "show_stack(NULL);" prints the
	 * back trace for this cpu.
	 */
	if (sp == NULL)
		sp = (unsigned long *)&sp;

	stack = sp;
	for (i = 0; i < kstack_depth_to_print; i++) {
		if (((long) stack & (THREAD_SIZE-1)) == 0)
			break;
		if (i && ((i % 4) == 0))
			printk("\n       ");
		printk("%016lx ", *stack++);
	}
	printk("\n");
	dik_show_trace(sp);
}

void dump_stack(void)
{
	show_stack(NULL, NULL);
}

EXPORT_SYMBOL(dump_stack);

void
die_if_kernel(char * str, struct pt_regs *regs, long err, unsigned long *r9_15)
{
	if (regs->ps & 8)
		return;
#ifdef CONFIG_SMP
	printk("CPU %d ", hard_smp_processor_id());
#endif
	printk("%s(%d): %s %ld\n", current->comm, current->pid, str, err);
	dik_show_regs(regs, r9_15);
+	add_taint(TAINT_DIE);
	dik_show_trace((unsigned long *)(regs+1));
	dik_show_code((unsigned int *)regs->pc);

	if (test_and_set_thread_flag (TIF_DIE_IF_KERNEL)) {
		printk("die_if_kernel recursion detected.\n");
		local_irq_enable();
		while (1);
	}
	do_exit(SIGSEGV);
}

#ifndef CONFIG_MATHEMU
static long dummy_emul(void) { return 0; }
long (*alpha_fp_emul_imprecise)(struct pt_regs *regs, unsigned long writemask)
  = (void *)dummy_emul;
long (*alpha_fp_emul) (unsigned long pc)
  = (void *)dummy_emul;
#else
long alpha_fp_emul_imprecise(struct pt_regs *regs, unsigned long writemask);
long alpha_fp_emul (unsigned long pc);
#endif

asmlinkage void
do_entArith(unsigned long summary, unsigned long write_mask,
	    struct pt_regs *regs)
{
	long si_code = FPE_FLTINV;
	siginfo_t info;

	if (summary & 1) {
		/* Software-completion summary bit is set, so try to
		   emulate the instruction.  If the processor supports
		   precise exceptions, we don't have to search. */
		if (!amask(AMASK_PRECISE_TRAP))
			si_code = alpha_fp_emul(regs->pc - 4);
		else
			si_code = alpha_fp_emul_imprecise(regs, write_mask);
		if (si_code == 0)
			return;
	}
	die_if_kernel("Arithmetic fault", regs, 0, NULL);

	info.si_signo = SIGFPE;
	info.si_errno = 0;
	info.si_code = si_code;
	info.si_addr = (void __user *) regs->pc;
	send_sig_info(SIGFPE, &info, current);
}

asmlinkage void
do_entIF(unsigned long type, struct pt_regs *regs)
{
	siginfo_t info;
	int signo, code;

	if ((regs->ps & ~IPL_MAX) == 0) {
		if (type == 1) {
			const unsigned int *data
			  = (const unsigned int *) regs->pc;
			printk("Kernel bug at %s:%d\n",
			       (const char *)(data[1] | (long)data[2] << 32),
			       data[0]);
		}
		die_if_kernel((type == 1 ? "Kernel Bug" : "Instruction fault"),
			      regs, type, NULL);
	}

	switch (type) {
	case 0: /* breakpoint */
		info.si_signo = SIGTRAP;
		info.si_errno = 0;
		info.si_code = TRAP_BRKPT;
		info.si_trapno = 0;
		info.si_addr = (void __user *) regs->pc;

		if (ptrace_cancel_bpt(current)) {
			regs->pc -= 4;	/* make pc point to former bpt */
		}

		send_sig_info(SIGTRAP, &info, current);
		return;

	case 1: /* bugcheck */
		info.si_signo = SIGTRAP;
		info.si_errno = 0;
		info.si_code = __SI_FAULT;
		info.si_addr = (void __user *) regs->pc;
		info.si_trapno = 0;
		send_sig_info(SIGTRAP, &info, current);
		return;

	case 2: /* gentrap */
		info.si_addr = (void __user *) regs->pc;
		info.si_trapno = regs->r16;
		switch ((long) regs->r16) {
		case GEN_INTOVF:
			signo = SIGFPE;
			code = FPE_INTOVF;
			break;
		case GEN_INTDIV:
			signo = SIGFPE;
			code = FPE_INTDIV;
			break;
		case GEN_FLTOVF:
			signo = SIGFPE;
			code = FPE_FLTOVF;
			break;
		case GEN_FLTDIV:
			signo = SIGFPE;
			code = FPE_FLTDIV;
			break;
		case GEN_FLTUND:
			signo = SIGFPE;
			code = FPE_FLTUND;
			break;
		case GEN_FLTINV:
			signo = SIGFPE;
			code = FPE_FLTINV;
			break;
		case GEN_FLTINE:
			signo = SIGFPE;
			code = FPE_FLTRES;
			break;
		case GEN_ROPRAND:
			signo = SIGFPE;
			code = __SI_FAULT;
			break;

		case GEN_DECOVF:
		case GEN_DECDIV:
		case GEN_DECINV:
		case GEN_ASSERTERR:
		case GEN_NULPTRERR:
		case GEN_STKOVF:
		case GEN_STRLENERR:
		case GEN_SUBSTRERR:
		case GEN_RANGERR:
		case GEN_SUBRNG:
		case GEN_SUBRNG1:
		case GEN_SUBRNG2:
		case GEN_SUBRNG3:
		case GEN_SUBRNG4:
		case GEN_SUBRNG5:
		case GEN_SUBRNG6:
		case GEN_SUBRNG7:
		default:
			signo = SIGTRAP;
			code = __SI_FAULT;
			break;
		}

		info.si_signo = signo;
		info.si_errno = 0;
		info.si_code = code;
		info.si_addr = (void __user *) regs->pc;
		send_sig_info(signo, &info, current);
		return;

	case 4: /* opDEC */
		if (implver() == IMPLVER_EV4) {
			long si_code;

			/* Some versions of SRM do not handle
			   the opDEC properly - they return the PC of the
			   opDEC fault, not the instruction after as the
			   Alpha architecture requires.  Here we fix it up.
			   We do this by intentionally causing an opDEC
			   fault during the boot sequence and testing if
			   we get the correct PC.  If not, we set a flag
			   to correct it every time through. */
			regs->pc += opDEC_fix;

			/* EV4 does not implement anything except normal
			   rounding.  Everything else will come here as
			   an illegal instruction.  Emulate them. */
			si_code = alpha_fp_emul(regs->pc - 4);
			if (si_code == 0)
				return;
			if (si_code > 0) {
				info.si_signo = SIGFPE;
				info.si_errno = 0;
				info.si_code = si_code;
				info.si_addr = (void __user *) regs->pc;
				send_sig_info(SIGFPE, &info, current);
				return;
			}
		}
		break;

	case 3: /* FEN fault */
		/* Irritating users can call PAL_clrfen to disable the
		   FPU for the process.  The kernel will then trap in
		   do_switch_stack and undo_switch_stack when we try
		   to save and restore the FP registers.

		   Given that GCC by default generates code that uses the
		   FP registers, PAL_clrfen is not useful except for DoS
		   attacks.  So turn the bleeding FPU back on and be done
		   with it. */
		current_thread_info()->pcb.flags |= 1;
		__reload_thread(&current_thread_info()->pcb);
		return;

	case 5: /* illoc */
	default: /* unexpected instruction-fault type */
		;
	}

	info.si_signo = SIGILL;
	info.si_errno = 0;
	info.si_code = ILL_ILLOPC;
	info.si_addr = (void __user *) regs->pc;
	send_sig_info(SIGILL, &info, current);
}

/* There is an ifdef in the PALcode in MILO that enables a
   "kernel debugging entry point" as an unprivileged call_pal.

   We don't want to have anything to do with it, but unfortunately
   several versions of MILO included in distributions have it enabled,
   and if we don't put something on the entry point we'll oops. */

asmlinkage void
do_entDbg(struct pt_regs *regs)
{
	siginfo_t info;

	die_if_kernel("Instruction fault", regs, 0, NULL);

	info.si_signo = SIGILL;
	info.si_errno = 0;
	info.si_code = ILL_ILLOPC;
	info.si_addr = (void __user *) regs->pc;
	force_sig_info(SIGILL, &info, current);
}


/*
 * entUna has a different register layout to be reasonably simple. It
 * needs access to all the integer registers (the kernel doesn't use
 * fp-regs), and it needs to have them in order for simpler access.
 *
 * Due to the non-standard register layout (and because we don't want
 * to handle floating-point regs), user-mode unaligned accesses are
 * handled separately by do_entUnaUser below.
 *
 * Oh, btw, we don't handle the "gp" register correctly, but if we fault
 * on a gp-register unaligned load/store, something is _very_ wrong
 * in the kernel anyway..
 */
struct allregs {
	unsigned long regs[32];
	unsigned long ps, pc, gp, a0, a1, a2;
};

struct unaligned_stat {
	unsigned long count, va, pc;
} unaligned[2];


/* Macro for exception fixup code to access integer registers. */
#define una_reg(r)	(regs->regs[(r) >= 16 && (r) <= 18 ? (r)+19 : (r)])


asmlinkage void
do_entUna(void * va, unsigned long opcode, unsigned long reg,
	  struct allregs *regs)
{
	long error, tmp1, tmp2, tmp3, tmp4;
	unsigned long pc = regs->pc - 4;
	const struct exception_table_entry *fixup;

	unaligned[0].count++;
	unaligned[0].va = (unsigned long) va;
	unaligned[0].pc = pc;

	/* We don't want to use the generic get/put unaligned macros as
	   we want to trap exceptions.  Only if we actually get an
	   exception will we decide whether we should have caught it. */

	switch (opcode) {
	case 0x0c: /* ldwu */
		__asm__ __volatile__(
		"1:	ldq_u %1,0(%3)\n"
		"2:	ldq_u %2,1(%3)\n"
		"	extwl %1,%3,%1\n"
		"	extwh %2,%3,%2\n"
		"3:\n"
		".section __ex_table,\"a\"\n"
		"	.long 1b - .\n"
		"	lda %1,3b-1b(%0)\n"
		"	.long 2b - .\n"
		"	lda %2,3b-2b(%0)\n"
		".previous"
			: "=r"(error), "=&r"(tmp1), "=&r"(tmp2)
			: "r"(va), "0"(0));
		if (error)
			goto got_exception;
		una_reg(reg) = tmp1|tmp2;
		return;

	case 0x28: /* ldl */
		__asm__ __volatile__(
		"1:	ldq_u %1,0(%3)\n"
		"2:	ldq_u %2,3(%3)\n"
		"	extll %1,%3,%1\n"
		"	extlh %2,%3,%2\n"
		"3:\n"
		".section __ex_table,\"a\"\n"
		"	.long 1b - .\n"
		"	lda %1,3b-1b(%0)\n"
		"	.long 2b - .\n"
		"	lda %2,3b-2b(%0)\n"
		".previous"
			: "=r"(error), "=&r"(tmp1), "=&r"(tmp2)
			: "r"(va), "0"(0));
		if (error)
			goto got_exception;
		una_reg(reg) = (int)(tmp1|tmp2);
		return;

	case 0x29: /* ldq */
		__asm__ __volatile__(
		"1:	ldq_u %1,0(%3)\n"
		"2:	ldq_u %2,7(%3)\n"
		"	extql %1,%3,%1\n"
		"	extqh %2,%3,%2\n"
		"3:\n"
		".section __ex_table,\"a\"\n"
		"	.long 1b - .\n"
		"	lda %1,3b-1b(%0)\n"
		"	.long 2b - .\n"
		"	lda %2,3b-2b(%0)\n"
		".previous"
			: "=r"(error), "=&r"(tmp1), "=&r"(tmp2)
			: "r"(va), "0"(0));
		if (error)
			goto got_exception;
		una_reg(reg) = tmp1|tmp2;
		return;

	/* Note that the store sequences do not indicate that they change
	   memory because it _should_ be affecting nothing in this context.
	   (Otherwise we have other, much larger, problems.) */
	case 0x0d: /* stw */
		__asm__ __volatile__(
		"1:	ldq_u %2,1(%5)\n"
		"2:	ldq_u %1,0(%5)\n"
		"	inswh %6,%5,%4\n"
		"	inswl %6,%5,%3\n"
		"	mskwh %2,%5,%2\n"
		"	mskwl %1,%5,%1\n"
		"	or %2,%4,%2\n"
		"	or %1,%3,%1\n"
		"3:	stq_u %2,1(%5)\n"
		"4:	stq_u %1,0(%5)\n"
		"5:\n"
		".section __ex_table,\"a\"\n"
		"	.long 1b - .\n"
		"	lda %2,5b-1b(%0)\n"
		"	.long 2b - .\n"
		"	lda %1,5b-2b(%0)\n"
		"	.long 3b - .\n"
		"	lda $31,5b-3b(%0)\n"
		"	.long 4b - .\n"
		"	lda $31,5b-4b(%0)\n"
		".previous"
			: "=r"(error), "=&r"(tmp1), "=&r"(tmp2),
			  "=&r"(tmp3), "=&r"(tmp4)
			: "r"(va), "r"(una_reg(reg)), "0"(0));
		if (error)
			goto got_exception;
		return;

	case 0x2c: /* stl */
		__asm__ __volatile__(
		"1:	ldq_u %2,3(%5)\n"
		"2:	ldq_u %1,0(%5)\n"
		"	inslh %6,%5,%4\n"
		"	insll %6,%5,%3\n"
		"	msklh %2,%5,%2\n"
		"	mskll %1,%5,%1\n"
		"	or %2,%4,%2\n"
		"	or %1,%3,%1\n"
		"3:	stq_u %2,3(%5)\n"
		"4:	stq_u %1,0(%5)\n"
		"5:\n"
		".section __ex_table,\"a\"\n"
		"	.long 1b - .\n"
		"	lda %2,5b-1b(%0)\n"
		"	.long 2b - .\n"
		"	lda %1,5b-2b(%0)\n"
		"	.long 3b - .\n"
		"	lda $31,5b-3b(%0)\n"
		"	.long 4b - .\n"
		"	lda $31,5b-4b(%0)\n"
		".previous"
			: "=r"(error), "=&r"(tmp1), "=&r"(tmp2),
			  "=&r"(tmp3), "=&r"(tmp4)
			: "r"(va), "r"(una_reg(reg)), "0"(0));
		if (error)
			goto got_exception;
		return;

	case 0x2d: /* stq */
		__asm__ __volatile__(
		"1:	ldq_u %2,7(%5)\n"
		"2:	ldq_u %1,0(%5)\n"
		"	insqh %6,%5,%4\n"
		"	insql %6,%5,%3\n"
		"	mskqh %2,%5,%2\n"
		"	mskql %1,%5,%1\n"
		"	or %2,%4,%2\n"
		"	or %1,%3,%1\n"
		"3:	stq_u %2,7(%5)\n"
		"4:	stq_u %1,0(%5)\n"
		"5:\n"
		".section __ex_table,\"a\"\n\t"
		"	.long 1b - .\n"
		"	lda %2,5b-1b(%0)\n"
		"	.long 2b - .\n"
		"	lda %1,5b-2b(%0)\n"
		"	.long 3b - .\n"
		"	lda $31,5b-3b(%0)\n"
		"	.long 4b - .\n"
		"	lda $31,5b-4b(%0)\n"
		".previous"
			: "=r"(error), "=&r"(tmp1), "=&r"(tmp2),
			  "=&r"(tmp3), "=&r"(tmp4)
			: "r"(va), "r"(una_reg(reg)), "0"(0));
		if (error)
			goto got_exception;
		return;
	}

	lock_kernel();
	printk("Bad unaligned kernel access at %016lx: %p %lx %ld\n",
		pc, va, opcode, reg);
	do_exit(SIGSEGV);

got_exception:
	/* Ok, we caught the exception, but we don't want it.  Is there
	   someone to pass it along to? */
	if ((fixup = search_exception_tables(pc)) != 0) {
		unsigned long newpc;
		newpc = fixup_exception(una_reg, fixup, pc);
633 634
634 printk("Forwarding unaligned exception at %lx (%lx)\n", 635 printk("Forwarding unaligned exception at %lx (%lx)\n",
635 pc, newpc); 636 pc, newpc);
636 637
637 regs->pc = newpc; 638 regs->pc = newpc;
638 return; 639 return;
639 } 640 }
640 641
641 /* 642 /*
642 * Yikes! No one to forward the exception to. 643 * Yikes! No one to forward the exception to.
643 * Since the registers are in a weird format, dump them ourselves. 644 * Since the registers are in a weird format, dump them ourselves.
644 */ 645 */
645 lock_kernel(); 646 lock_kernel();
646 647
647 printk("%s(%d): unhandled unaligned exception\n", 648 printk("%s(%d): unhandled unaligned exception\n",
648 current->comm, current->pid); 649 current->comm, current->pid);
649 650
650 printk("pc = [<%016lx>] ra = [<%016lx>] ps = %04lx\n", 651 printk("pc = [<%016lx>] ra = [<%016lx>] ps = %04lx\n",
651 pc, una_reg(26), regs->ps); 652 pc, una_reg(26), regs->ps);
652 printk("r0 = %016lx r1 = %016lx r2 = %016lx\n", 653 printk("r0 = %016lx r1 = %016lx r2 = %016lx\n",
653 una_reg(0), una_reg(1), una_reg(2)); 654 una_reg(0), una_reg(1), una_reg(2));
654 printk("r3 = %016lx r4 = %016lx r5 = %016lx\n", 655 printk("r3 = %016lx r4 = %016lx r5 = %016lx\n",
655 una_reg(3), una_reg(4), una_reg(5)); 656 una_reg(3), una_reg(4), una_reg(5));
656 printk("r6 = %016lx r7 = %016lx r8 = %016lx\n", 657 printk("r6 = %016lx r7 = %016lx r8 = %016lx\n",
657 una_reg(6), una_reg(7), una_reg(8)); 658 una_reg(6), una_reg(7), una_reg(8));
658 printk("r9 = %016lx r10= %016lx r11= %016lx\n", 659 printk("r9 = %016lx r10= %016lx r11= %016lx\n",
659 una_reg(9), una_reg(10), una_reg(11)); 660 una_reg(9), una_reg(10), una_reg(11));
660 printk("r12= %016lx r13= %016lx r14= %016lx\n", 661 printk("r12= %016lx r13= %016lx r14= %016lx\n",
661 una_reg(12), una_reg(13), una_reg(14)); 662 una_reg(12), una_reg(13), una_reg(14));
662 printk("r15= %016lx\n", una_reg(15)); 663 printk("r15= %016lx\n", una_reg(15));
663 printk("r16= %016lx r17= %016lx r18= %016lx\n", 664 printk("r16= %016lx r17= %016lx r18= %016lx\n",
664 una_reg(16), una_reg(17), una_reg(18)); 665 una_reg(16), una_reg(17), una_reg(18));
665 printk("r19= %016lx r20= %016lx r21= %016lx\n", 666 printk("r19= %016lx r20= %016lx r21= %016lx\n",
666 una_reg(19), una_reg(20), una_reg(21)); 667 una_reg(19), una_reg(20), una_reg(21));
667 printk("r22= %016lx r23= %016lx r24= %016lx\n", 668 printk("r22= %016lx r23= %016lx r24= %016lx\n",
668 una_reg(22), una_reg(23), una_reg(24)); 669 una_reg(22), una_reg(23), una_reg(24));
669 printk("r25= %016lx r27= %016lx r28= %016lx\n", 670 printk("r25= %016lx r27= %016lx r28= %016lx\n",
670 una_reg(25), una_reg(27), una_reg(28)); 671 una_reg(25), una_reg(27), una_reg(28));
671 printk("gp = %016lx sp = %p\n", regs->gp, regs+1); 672 printk("gp = %016lx sp = %p\n", regs->gp, regs+1);
672 673
673 dik_show_code((unsigned int *)pc); 674 dik_show_code((unsigned int *)pc);
674 dik_show_trace((unsigned long *)(regs+1)); 675 dik_show_trace((unsigned long *)(regs+1));
675 676
676 if (test_and_set_thread_flag (TIF_DIE_IF_KERNEL)) { 677 if (test_and_set_thread_flag (TIF_DIE_IF_KERNEL)) {
677 printk("die_if_kernel recursion detected.\n"); 678 printk("die_if_kernel recursion detected.\n");
678 local_irq_enable(); 679 local_irq_enable();
679 while (1); 680 while (1);
680 } 681 }
681 do_exit(SIGSEGV); 682 do_exit(SIGSEGV);
682 } 683 }
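
A note on the fixup encoding used throughout the handlers above: each __ex_table entry pairs a PC-relative pointer to the possibly-faulting instruction (".long 1b - .") with an "lda" word that is never executed, only decoded. A sketch of the consumer side, modelled on the asm-alpha/uaccess.h of this era (the field names are my assumption, not a verbatim copy):

	struct exception_table_entry {
		signed int insn;	/* ".long 1b - .": PC-relative address
					   of the load/store that may fault */
		union {
			unsigned unit;	/* the raw "lda" word */
			struct {
				signed int nextinsn : 16; /* lda displacement:
							     offset to the
							     continuation label */
				unsigned int errreg : 5;  /* lda rb field: the
							     error register */
				unsigned int valreg : 5;  /* lda ra field: the
							     register to patch
							     on a fault */
			} bits;
		} fixup;
	};

So "lda %1,3b-1b(%0)" encodes "on a fault, fix up operand %1, report through %0, resume at 3b" in a single word; fixup_exception() in the got_exception path above unpacks it.
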
683 684
684 /* 685 /*
685 * Convert an s-floating point value in memory format to the 686 * Convert an s-floating point value in memory format to the
686 * corresponding value in register format. The exponent 687 * corresponding value in register format. The exponent
687 * needs to be remapped to preserve non-finite values 688 * needs to be remapped to preserve non-finite values
688 * (infinities, not-a-numbers, denormals). 689 * (infinities, not-a-numbers, denormals).
689 */ 690 */
690 static inline unsigned long 691 static inline unsigned long
691 s_mem_to_reg (unsigned long s_mem) 692 s_mem_to_reg (unsigned long s_mem)
692 { 693 {
693 unsigned long frac = (s_mem >> 0) & 0x7fffff; 694 unsigned long frac = (s_mem >> 0) & 0x7fffff;
694 unsigned long sign = (s_mem >> 31) & 0x1; 695 unsigned long sign = (s_mem >> 31) & 0x1;
695 unsigned long exp_msb = (s_mem >> 30) & 0x1; 696 unsigned long exp_msb = (s_mem >> 30) & 0x1;
696 unsigned long exp_low = (s_mem >> 23) & 0x7f; 697 unsigned long exp_low = (s_mem >> 23) & 0x7f;
697 unsigned long exp; 698 unsigned long exp;
698 699
699 exp = (exp_msb << 10) | exp_low; /* common case */ 700 exp = (exp_msb << 10) | exp_low; /* common case */
700 if (exp_msb) { 701 if (exp_msb) {
701 if (exp_low == 0x7f) { 702 if (exp_low == 0x7f) {
702 exp = 0x7ff; 703 exp = 0x7ff;
703 } 704 }
704 } else { 705 } else {
705 if (exp_low == 0x00) { 706 if (exp_low == 0x00) {
706 exp = 0x000; 707 exp = 0x000;
707 } else { 708 } else {
708 exp |= (0x7 << 7); 709 exp |= (0x7 << 7);
709 } 710 }
710 } 711 }
711 return (sign << 63) | (exp << 52) | (frac << 29); 712 return (sign << 63) | (exp << 52) | (frac << 29);
712 } 713 }
713 714
714 /* 715 /*
715 * Convert an s-floating point value in register format to the 716 * Convert an s-floating point value in register format to the
716 * corresponding value in memory format. 717 * corresponding value in memory format.
717 */ 718 */
718 static inline unsigned long 719 static inline unsigned long
719 s_reg_to_mem (unsigned long s_reg) 720 s_reg_to_mem (unsigned long s_reg)
720 { 721 {
721 return ((s_reg >> 62) << 30) | ((s_reg << 5) >> 34); 722 return ((s_reg >> 62) << 30) | ((s_reg << 5) >> 34);
722 } 723 }
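
As a sanity check on the exponent remapping, a standalone userspace re-implementation (a sketch with fixed-width types, not kernel code; the names are reused for clarity) round-trips an IEEE single through both helpers:

	#include <stdio.h>
	#include <stdint.h>
	#include <string.h>

	static uint64_t s_mem_to_reg(uint64_t s_mem)
	{
		uint64_t frac = (s_mem >> 0) & 0x7fffff;
		uint64_t sign = (s_mem >> 31) & 0x1;
		uint64_t exp_msb = (s_mem >> 30) & 0x1;
		uint64_t exp_low = (s_mem >> 23) & 0x7f;
		uint64_t exp = (exp_msb << 10) | exp_low;

		if (exp_msb) {
			if (exp_low == 0x7f)
				exp = 0x7ff;		/* Inf/NaN stay non-finite */
		} else {
			if (exp_low == 0x00)
				exp = 0x000;		/* zero/denormal */
			else
				exp |= (0x7 << 7);	/* re-bias to 11 bits */
		}
		return (sign << 63) | (exp << 52) | (frac << 29);
	}

	static uint64_t s_reg_to_mem(uint64_t s_reg)
	{
		return ((s_reg >> 62) << 30) | ((s_reg << 5) >> 34);
	}

	int main(void)
	{
		float f = 1.5f;			/* memory format 0x3fc00000 */
		uint32_t mem;

		memcpy(&mem, &f, sizeof mem);
		uint64_t reg = s_mem_to_reg(mem);
		printf("mem %08x -> reg %016llx -> mem %08llx\n",
		       (unsigned)mem, (unsigned long long)reg,
		       (unsigned long long)s_reg_to_mem(reg));
		return 0;
	}

For 1.5f this prints mem 3fc00000 -> reg 3ff8000000000000 -> mem 3fc00000: the 8-bit exponent 0x7f becomes the 11-bit T-float exponent 0x3ff and survives the trip back.
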
723 724
724 /* 725 /*
725 * Handle user-level unaligned fault. Handling user-level unaligned 726 * Handle user-level unaligned fault. Handling user-level unaligned
726 * faults is *extremely* slow and produces nasty messages. A user 727 * faults is *extremely* slow and produces nasty messages. A user
727 * program *should* fix unaligned faults ASAP. 728 * program *should* fix unaligned faults ASAP.
728 * 729 *
729 * Notice that we have (almost) the regular kernel stack layout here, 730 * Notice that we have (almost) the regular kernel stack layout here,
730 * so finding the appropriate registers is a little more difficult 731 * so finding the appropriate registers is a little more difficult
731 * than in the kernel case. 732 * than in the kernel case.
732 * 733 *
733 * Finally, we handle regular integer load/stores only. In 734 * Finally, we handle regular integer load/stores only. In
734 * particular, load-linked/store-conditionally and floating point 735 * particular, load-linked/store-conditionally and floating point
735 * load/stores are not supported. The former make no sense with 736 * load/stores are not supported. The former make no sense with
736 * unaligned faults (they are guaranteed to fail) and I don't think 737 * unaligned faults (they are guaranteed to fail) and I don't think
737 * the latter will occur in any decent program. 738 * the latter will occur in any decent program.
738 * 739 *
739 * Sigh. We *do* have to handle some FP operations, because GCC will 740 * Sigh. We *do* have to handle some FP operations, because GCC will
740 * use them as temporary storage for integer memory to memory copies. 741 * use them as temporary storage for integer memory to memory copies.

741 * However, we need to deal with stt/ldt and sts/lds only. 742 * However, we need to deal with stt/ldt and sts/lds only.
742 */ 743 */
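
To watch this path fire, it is enough to dereference a deliberately misaligned pointer. A hypothetical Alpha-only test (strictly undefined behaviour in C, which is the point; it assumes the compiler emits a plain ldq for the dereference, since it may assume alignment):

	#include <stdio.h>

	int main(void)
	{
		long storage[2] = { 0x1122334455667788L, 0 };
		long *p = (long *)((char *)storage + 1);	/* misaligned */

		/* On Alpha the ldq below traps to do_entUnaUser(), which
		   emulates the access and logs the rate-limited "unaligned
		   trap" message unless TIF_UAC_NOPRINT is set. */
		printf("%lx\n", *p);
		return 0;
	}
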
743 744
744 #define OP_INT_MASK ( 1L << 0x28 | 1L << 0x2c /* ldl stl */ \ 745 #define OP_INT_MASK ( 1L << 0x28 | 1L << 0x2c /* ldl stl */ \
745 | 1L << 0x29 | 1L << 0x2d /* ldq stq */ \ 746 | 1L << 0x29 | 1L << 0x2d /* ldq stq */ \
746 | 1L << 0x0c | 1L << 0x0d /* ldwu stw */ \ 747 | 1L << 0x0c | 1L << 0x0d /* ldwu stw */ \
747 | 1L << 0x0a | 1L << 0x0e ) /* ldbu stb */ 748 | 1L << 0x0a | 1L << 0x0e ) /* ldbu stb */
748 749
749 #define OP_WRITE_MASK ( 1L << 0x26 | 1L << 0x27 /* sts stt */ \ 750 #define OP_WRITE_MASK ( 1L << 0x26 | 1L << 0x27 /* sts stt */ \
750 | 1L << 0x2c | 1L << 0x2d /* stl stq */ \ 751 | 1L << 0x2c | 1L << 0x2d /* stl stq */ \
751 | 1L << 0x0d | 1L << 0x0e ) /* stw stb */ 752 | 1L << 0x0d | 1L << 0x0e ) /* stw stb */
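
These masks turn "is this opcode an integer access?" and "is it a store?" into a single shift and AND. A standalone illustration (note 1ULL: on Alpha long is 64-bit, but a portable test program must not shift a 32-bit long by 0x28):

	#include <stdio.h>

	#define OP_INT_MASK	( 1ULL << 0x28 | 1ULL << 0x2c	/* ldl stl */ \
				| 1ULL << 0x29 | 1ULL << 0x2d	/* ldq stq */ \
				| 1ULL << 0x0c | 1ULL << 0x0d	/* ldwu stw */ \
				| 1ULL << 0x0a | 1ULL << 0x0e )	/* ldbu stb */

	#define OP_WRITE_MASK	( 1ULL << 0x26 | 1ULL << 0x27	/* sts stt */ \
				| 1ULL << 0x2c | 1ULL << 0x2d	/* stl stq */ \
				| 1ULL << 0x0d | 1ULL << 0x0e )	/* stw stb */

	int main(void)
	{
		unsigned long opcode = 0x29;	/* ldq */

		printf("integer op: %d\n", !!((1ULL << opcode) & OP_INT_MASK));
		printf("write op:   %d\n", !!((1ULL << opcode) & OP_WRITE_MASK));
		return 0;	/* prints 1 then 0: ldq is an integer load */
	}
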
752 753
753 #define R(x) ((size_t) &((struct pt_regs *)0)->x) 754 #define R(x) ((size_t) &((struct pt_regs *)0)->x)
754 755
755 static int unauser_reg_offsets[32] = { 756 static int unauser_reg_offsets[32] = {
756 R(r0), R(r1), R(r2), R(r3), R(r4), R(r5), R(r6), R(r7), R(r8), 757 R(r0), R(r1), R(r2), R(r3), R(r4), R(r5), R(r6), R(r7), R(r8),
757 /* r9 ... r15 are stored in front of regs. */ 758 /* r9 ... r15 are stored in front of regs. */
758 -56, -48, -40, -32, -24, -16, -8, 759 -56, -48, -40, -32, -24, -16, -8,
759 R(r16), R(r17), R(r18), 760 R(r16), R(r17), R(r18),
760 R(r19), R(r20), R(r21), R(r22), R(r23), R(r24), R(r25), R(r26), 761 R(r19), R(r20), R(r21), R(r22), R(r23), R(r24), R(r25), R(r26),
761 R(r27), R(r28), R(gp), 762 R(r27), R(r28), R(gp),
762 0, 0 763 0, 0
763 }; 764 };
764 765
765 #undef R 766 #undef R
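
R(x) is the classic hand-rolled offsetof(). The seven negative entries locate r9-r15, which the trap frame saves immediately in front of the pt_regs block; that is why the lookup below computes (char *)regs + unauser_reg_offsets[reg] rather than indexing a flat array. With <stddef.h> the macro could equivalently have been written as:

	#include <stddef.h>

	/* Equivalent to the open-coded pointer arithmetic above (sketch): */
	#define R(x) ((size_t) offsetof(struct pt_regs, x))
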
766 767
767 asmlinkage void 768 asmlinkage void
768 do_entUnaUser(void __user * va, unsigned long opcode, 769 do_entUnaUser(void __user * va, unsigned long opcode,
769 unsigned long reg, struct pt_regs *regs) 770 unsigned long reg, struct pt_regs *regs)
770 { 771 {
771 static int cnt = 0; 772 static int cnt = 0;
772 static long last_time = 0; 773 static long last_time = 0;
773 774
774 unsigned long tmp1, tmp2, tmp3, tmp4; 775 unsigned long tmp1, tmp2, tmp3, tmp4;
775 unsigned long fake_reg, *reg_addr = &fake_reg; 776 unsigned long fake_reg, *reg_addr = &fake_reg;
776 siginfo_t info; 777 siginfo_t info;
777 long error; 778 long error;
778 779
779 /* Check the UAC bits to decide what the user wants us to do 780 /* Check the UAC bits to decide what the user wants us to do
780 with the unaligned access. */ 781 with the unaligned access. */
781 782
782 if (!test_thread_flag (TIF_UAC_NOPRINT)) { 783 if (!test_thread_flag (TIF_UAC_NOPRINT)) {
783 if (cnt >= 5 && jiffies - last_time > 5*HZ) { 784 if (cnt >= 5 && jiffies - last_time > 5*HZ) {
784 cnt = 0; 785 cnt = 0;
785 } 786 }
786 if (++cnt < 5) { 787 if (++cnt < 5) {
787 printk("%s(%d): unaligned trap at %016lx: %p %lx %ld\n", 788 printk("%s(%d): unaligned trap at %016lx: %p %lx %ld\n",
788 current->comm, current->pid, 789 current->comm, current->pid,
789 regs->pc - 4, va, opcode, reg); 790 regs->pc - 4, va, opcode, reg);
790 } 791 }
791 last_time = jiffies; 792 last_time = jiffies;
792 } 793 }
793 if (test_thread_flag (TIF_UAC_SIGBUS)) 794 if (test_thread_flag (TIF_UAC_SIGBUS))
794 goto give_sigbus; 795 goto give_sigbus;
795 /* Not sure why you'd want to use this, but... */ 796 /* Not sure why you'd want to use this, but... */
796 if (test_thread_flag (TIF_UAC_NOFIX)) 797 if (test_thread_flag (TIF_UAC_NOFIX))
797 return; 798 return;
798 799
799 /* Don't bother reading ds in the access check since we already 800 /* Don't bother reading ds in the access check since we already
800 know that this came from the user. Also rely on the fact that 801 know that this came from the user. Also rely on the fact that
801 the page at TASK_SIZE is unmapped and so can't be touched anyway. */ 802 the page at TASK_SIZE is unmapped and so can't be touched anyway. */
802 if (!__access_ok((unsigned long)va, 0, USER_DS)) 803 if (!__access_ok((unsigned long)va, 0, USER_DS))
803 goto give_sigsegv; 804 goto give_sigsegv;
804 805
805 ++unaligned[1].count; 806 ++unaligned[1].count;
806 unaligned[1].va = (unsigned long)va; 807 unaligned[1].va = (unsigned long)va;
807 unaligned[1].pc = regs->pc - 4; 808 unaligned[1].pc = regs->pc - 4;
808 809
809 if ((1L << opcode) & OP_INT_MASK) { 810 if ((1L << opcode) & OP_INT_MASK) {
810 /* it's an integer load/store */ 811 /* it's an integer load/store */
811 if (reg < 30) { 812 if (reg < 30) {
812 reg_addr = (unsigned long *) 813 reg_addr = (unsigned long *)
813 ((char *)regs + unauser_reg_offsets[reg]); 814 ((char *)regs + unauser_reg_offsets[reg]);
814 } else if (reg == 30) { 815 } else if (reg == 30) {
815 /* usp in PAL regs */ 816 /* usp in PAL regs */
816 fake_reg = rdusp(); 817 fake_reg = rdusp();
817 } else { 818 } else {
818 /* zero "register" */ 819 /* zero "register" */
819 fake_reg = 0; 820 fake_reg = 0;
820 } 821 }
821 } 822 }
822 823
823 /* We don't want to use the generic get/put unaligned macros as 824 /* We don't want to use the generic get/put unaligned macros as
824 we want to trap exceptions. Only if we actually get an 825 we want to trap exceptions. Only if we actually get an
825 exception will we decide whether we should have caught it. */ 826 exception will we decide whether we should have caught it. */
826 827
827 switch (opcode) { 828 switch (opcode) {
828 case 0x0c: /* ldwu */ 829 case 0x0c: /* ldwu */
829 __asm__ __volatile__( 830 __asm__ __volatile__(
830 "1: ldq_u %1,0(%3)\n" 831 "1: ldq_u %1,0(%3)\n"
831 "2: ldq_u %2,1(%3)\n" 832 "2: ldq_u %2,1(%3)\n"
832 " extwl %1,%3,%1\n" 833 " extwl %1,%3,%1\n"
833 " extwh %2,%3,%2\n" 834 " extwh %2,%3,%2\n"
834 "3:\n" 835 "3:\n"
835 ".section __ex_table,\"a\"\n" 836 ".section __ex_table,\"a\"\n"
836 " .long 1b - .\n" 837 " .long 1b - .\n"
837 " lda %1,3b-1b(%0)\n" 838 " lda %1,3b-1b(%0)\n"
838 " .long 2b - .\n" 839 " .long 2b - .\n"
839 " lda %2,3b-2b(%0)\n" 840 " lda %2,3b-2b(%0)\n"
840 ".previous" 841 ".previous"
841 : "=r"(error), "=&r"(tmp1), "=&r"(tmp2) 842 : "=r"(error), "=&r"(tmp1), "=&r"(tmp2)
842 : "r"(va), "0"(0)); 843 : "r"(va), "0"(0));
843 if (error) 844 if (error)
844 goto give_sigsegv; 845 goto give_sigsegv;
845 *reg_addr = tmp1|tmp2; 846 *reg_addr = tmp1|tmp2;
846 break; 847 break;
847 848
848 case 0x22: /* lds */ 849 case 0x22: /* lds */
849 __asm__ __volatile__( 850 __asm__ __volatile__(
850 "1: ldq_u %1,0(%3)\n" 851 "1: ldq_u %1,0(%3)\n"
851 "2: ldq_u %2,3(%3)\n" 852 "2: ldq_u %2,3(%3)\n"
852 " extll %1,%3,%1\n" 853 " extll %1,%3,%1\n"
853 " extlh %2,%3,%2\n" 854 " extlh %2,%3,%2\n"
854 "3:\n" 855 "3:\n"
855 ".section __ex_table,\"a\"\n" 856 ".section __ex_table,\"a\"\n"
856 " .long 1b - .\n" 857 " .long 1b - .\n"
857 " lda %1,3b-1b(%0)\n" 858 " lda %1,3b-1b(%0)\n"
858 " .long 2b - .\n" 859 " .long 2b - .\n"
859 " lda %2,3b-2b(%0)\n" 860 " lda %2,3b-2b(%0)\n"
860 ".previous" 861 ".previous"
861 : "=r"(error), "=&r"(tmp1), "=&r"(tmp2) 862 : "=r"(error), "=&r"(tmp1), "=&r"(tmp2)
862 : "r"(va), "0"(0)); 863 : "r"(va), "0"(0));
863 if (error) 864 if (error)
864 goto give_sigsegv; 865 goto give_sigsegv;
865 alpha_write_fp_reg(reg, s_mem_to_reg((int)(tmp1|tmp2))); 866 alpha_write_fp_reg(reg, s_mem_to_reg((int)(tmp1|tmp2)));
866 return; 867 return;
867 868
868 case 0x23: /* ldt */ 869 case 0x23: /* ldt */
869 __asm__ __volatile__( 870 __asm__ __volatile__(
870 "1: ldq_u %1,0(%3)\n" 871 "1: ldq_u %1,0(%3)\n"
871 "2: ldq_u %2,7(%3)\n" 872 "2: ldq_u %2,7(%3)\n"
872 " extql %1,%3,%1\n" 873 " extql %1,%3,%1\n"
873 " extqh %2,%3,%2\n" 874 " extqh %2,%3,%2\n"
874 "3:\n" 875 "3:\n"
875 ".section __ex_table,\"a\"\n" 876 ".section __ex_table,\"a\"\n"
876 " .long 1b - .\n" 877 " .long 1b - .\n"
877 " lda %1,3b-1b(%0)\n" 878 " lda %1,3b-1b(%0)\n"
878 " .long 2b - .\n" 879 " .long 2b - .\n"
879 " lda %2,3b-2b(%0)\n" 880 " lda %2,3b-2b(%0)\n"
880 ".previous" 881 ".previous"
881 : "=r"(error), "=&r"(tmp1), "=&r"(tmp2) 882 : "=r"(error), "=&r"(tmp1), "=&r"(tmp2)
882 : "r"(va), "0"(0)); 883 : "r"(va), "0"(0));
883 if (error) 884 if (error)
884 goto give_sigsegv; 885 goto give_sigsegv;
885 alpha_write_fp_reg(reg, tmp1|tmp2); 886 alpha_write_fp_reg(reg, tmp1|tmp2);
886 return; 887 return;
887 888
888 case 0x28: /* ldl */ 889 case 0x28: /* ldl */
889 __asm__ __volatile__( 890 __asm__ __volatile__(
890 "1: ldq_u %1,0(%3)\n" 891 "1: ldq_u %1,0(%3)\n"
891 "2: ldq_u %2,3(%3)\n" 892 "2: ldq_u %2,3(%3)\n"
892 " extll %1,%3,%1\n" 893 " extll %1,%3,%1\n"
893 " extlh %2,%3,%2\n" 894 " extlh %2,%3,%2\n"
894 "3:\n" 895 "3:\n"
895 ".section __ex_table,\"a\"\n" 896 ".section __ex_table,\"a\"\n"
896 " .long 1b - .\n" 897 " .long 1b - .\n"
897 " lda %1,3b-1b(%0)\n" 898 " lda %1,3b-1b(%0)\n"
898 " .long 2b - .\n" 899 " .long 2b - .\n"
899 " lda %2,3b-2b(%0)\n" 900 " lda %2,3b-2b(%0)\n"
900 ".previous" 901 ".previous"
901 : "=r"(error), "=&r"(tmp1), "=&r"(tmp2) 902 : "=r"(error), "=&r"(tmp1), "=&r"(tmp2)
902 : "r"(va), "0"(0)); 903 : "r"(va), "0"(0));
903 if (error) 904 if (error)
904 goto give_sigsegv; 905 goto give_sigsegv;
905 *reg_addr = (int)(tmp1|tmp2); 906 *reg_addr = (int)(tmp1|tmp2);
906 break; 907 break;
907 908
908 case 0x29: /* ldq */ 909 case 0x29: /* ldq */
909 __asm__ __volatile__( 910 __asm__ __volatile__(
910 "1: ldq_u %1,0(%3)\n" 911 "1: ldq_u %1,0(%3)\n"
911 "2: ldq_u %2,7(%3)\n" 912 "2: ldq_u %2,7(%3)\n"
912 " extql %1,%3,%1\n" 913 " extql %1,%3,%1\n"
913 " extqh %2,%3,%2\n" 914 " extqh %2,%3,%2\n"
914 "3:\n" 915 "3:\n"
915 ".section __ex_table,\"a\"\n" 916 ".section __ex_table,\"a\"\n"
916 " .long 1b - .\n" 917 " .long 1b - .\n"
917 " lda %1,3b-1b(%0)\n" 918 " lda %1,3b-1b(%0)\n"
918 " .long 2b - .\n" 919 " .long 2b - .\n"
919 " lda %2,3b-2b(%0)\n" 920 " lda %2,3b-2b(%0)\n"
920 ".previous" 921 ".previous"
921 : "=r"(error), "=&r"(tmp1), "=&r"(tmp2) 922 : "=r"(error), "=&r"(tmp1), "=&r"(tmp2)
922 : "r"(va), "0"(0)); 923 : "r"(va), "0"(0));
923 if (error) 924 if (error)
924 goto give_sigsegv; 925 goto give_sigsegv;
925 *reg_addr = tmp1|tmp2; 926 *reg_addr = tmp1|tmp2;
926 break; 927 break;
927 928
928 /* Note that the store sequences do not indicate that they change 929 /* Note that the store sequences do not indicate that they change
929 memory because it _should_ be affecting nothing in this context. 930 memory because it _should_ be affecting nothing in this context.
930 (Otherwise we have other, much larger, problems.) */ 931 (Otherwise we have other, much larger, problems.) */
931 case 0x0d: /* stw */ 932 case 0x0d: /* stw */
932 __asm__ __volatile__( 933 __asm__ __volatile__(
933 "1: ldq_u %2,1(%5)\n" 934 "1: ldq_u %2,1(%5)\n"
934 "2: ldq_u %1,0(%5)\n" 935 "2: ldq_u %1,0(%5)\n"
935 " inswh %6,%5,%4\n" 936 " inswh %6,%5,%4\n"
936 " inswl %6,%5,%3\n" 937 " inswl %6,%5,%3\n"
937 " mskwh %2,%5,%2\n" 938 " mskwh %2,%5,%2\n"
938 " mskwl %1,%5,%1\n" 939 " mskwl %1,%5,%1\n"
939 " or %2,%4,%2\n" 940 " or %2,%4,%2\n"
940 " or %1,%3,%1\n" 941 " or %1,%3,%1\n"
941 "3: stq_u %2,1(%5)\n" 942 "3: stq_u %2,1(%5)\n"
942 "4: stq_u %1,0(%5)\n" 943 "4: stq_u %1,0(%5)\n"
943 "5:\n" 944 "5:\n"
944 ".section __ex_table,\"a\"\n" 945 ".section __ex_table,\"a\"\n"
945 " .long 1b - .\n" 946 " .long 1b - .\n"
946 " lda %2,5b-1b(%0)\n" 947 " lda %2,5b-1b(%0)\n"
947 " .long 2b - .\n" 948 " .long 2b - .\n"
948 " lda %1,5b-2b(%0)\n" 949 " lda %1,5b-2b(%0)\n"
949 " .long 3b - .\n" 950 " .long 3b - .\n"
950 " lda $31,5b-3b(%0)\n" 951 " lda $31,5b-3b(%0)\n"
951 " .long 4b - .\n" 952 " .long 4b - .\n"
952 " lda $31,5b-4b(%0)\n" 953 " lda $31,5b-4b(%0)\n"
953 ".previous" 954 ".previous"
954 : "=r"(error), "=&r"(tmp1), "=&r"(tmp2), 955 : "=r"(error), "=&r"(tmp1), "=&r"(tmp2),
955 "=&r"(tmp3), "=&r"(tmp4) 956 "=&r"(tmp3), "=&r"(tmp4)
956 : "r"(va), "r"(*reg_addr), "0"(0)); 957 : "r"(va), "r"(*reg_addr), "0"(0));
957 if (error) 958 if (error)
958 goto give_sigsegv; 959 goto give_sigsegv;
959 return; 960 return;
960 961
961 case 0x26: /* sts */ 962 case 0x26: /* sts */
962 fake_reg = s_reg_to_mem(alpha_read_fp_reg(reg)); 963 fake_reg = s_reg_to_mem(alpha_read_fp_reg(reg));
963 /* FALLTHRU */ 964 /* FALLTHRU */
964 965
965 case 0x2c: /* stl */ 966 case 0x2c: /* stl */
966 __asm__ __volatile__( 967 __asm__ __volatile__(
967 "1: ldq_u %2,3(%5)\n" 968 "1: ldq_u %2,3(%5)\n"
968 "2: ldq_u %1,0(%5)\n" 969 "2: ldq_u %1,0(%5)\n"
969 " inslh %6,%5,%4\n" 970 " inslh %6,%5,%4\n"
970 " insll %6,%5,%3\n" 971 " insll %6,%5,%3\n"
971 " msklh %2,%5,%2\n" 972 " msklh %2,%5,%2\n"
972 " mskll %1,%5,%1\n" 973 " mskll %1,%5,%1\n"
973 " or %2,%4,%2\n" 974 " or %2,%4,%2\n"
974 " or %1,%3,%1\n" 975 " or %1,%3,%1\n"
975 "3: stq_u %2,3(%5)\n" 976 "3: stq_u %2,3(%5)\n"
976 "4: stq_u %1,0(%5)\n" 977 "4: stq_u %1,0(%5)\n"
977 "5:\n" 978 "5:\n"
978 ".section __ex_table,\"a\"\n" 979 ".section __ex_table,\"a\"\n"
979 " .long 1b - .\n" 980 " .long 1b - .\n"
980 " lda %2,5b-1b(%0)\n" 981 " lda %2,5b-1b(%0)\n"
981 " .long 2b - .\n" 982 " .long 2b - .\n"
982 " lda %1,5b-2b(%0)\n" 983 " lda %1,5b-2b(%0)\n"
983 " .long 3b - .\n" 984 " .long 3b - .\n"
984 " lda $31,5b-3b(%0)\n" 985 " lda $31,5b-3b(%0)\n"
985 " .long 4b - .\n" 986 " .long 4b - .\n"
986 " lda $31,5b-4b(%0)\n" 987 " lda $31,5b-4b(%0)\n"
987 ".previous" 988 ".previous"
988 : "=r"(error), "=&r"(tmp1), "=&r"(tmp2), 989 : "=r"(error), "=&r"(tmp1), "=&r"(tmp2),
989 "=&r"(tmp3), "=&r"(tmp4) 990 "=&r"(tmp3), "=&r"(tmp4)
990 : "r"(va), "r"(*reg_addr), "0"(0)); 991 : "r"(va), "r"(*reg_addr), "0"(0));
991 if (error) 992 if (error)
992 goto give_sigsegv; 993 goto give_sigsegv;
993 return; 994 return;
994 995
995 case 0x27: /* stt */ 996 case 0x27: /* stt */
996 fake_reg = alpha_read_fp_reg(reg); 997 fake_reg = alpha_read_fp_reg(reg);
997 /* FALLTHRU */ 998 /* FALLTHRU */
998 999
999 case 0x2d: /* stq */ 1000 case 0x2d: /* stq */
1000 __asm__ __volatile__( 1001 __asm__ __volatile__(
1001 "1: ldq_u %2,7(%5)\n" 1002 "1: ldq_u %2,7(%5)\n"
1002 "2: ldq_u %1,0(%5)\n" 1003 "2: ldq_u %1,0(%5)\n"
1003 " insqh %6,%5,%4\n" 1004 " insqh %6,%5,%4\n"
1004 " insql %6,%5,%3\n" 1005 " insql %6,%5,%3\n"
1005 " mskqh %2,%5,%2\n" 1006 " mskqh %2,%5,%2\n"
1006 " mskql %1,%5,%1\n" 1007 " mskql %1,%5,%1\n"
1007 " or %2,%4,%2\n" 1008 " or %2,%4,%2\n"
1008 " or %1,%3,%1\n" 1009 " or %1,%3,%1\n"
1009 "3: stq_u %2,7(%5)\n" 1010 "3: stq_u %2,7(%5)\n"
1010 "4: stq_u %1,0(%5)\n" 1011 "4: stq_u %1,0(%5)\n"
1011 "5:\n" 1012 "5:\n"
1012 ".section __ex_table,\"a\"\n\t" 1013 ".section __ex_table,\"a\"\n\t"
1013 " .long 1b - .\n" 1014 " .long 1b - .\n"
1014 " lda %2,5b-1b(%0)\n" 1015 " lda %2,5b-1b(%0)\n"
1015 " .long 2b - .\n" 1016 " .long 2b - .\n"
1016 " lda %1,5b-2b(%0)\n" 1017 " lda %1,5b-2b(%0)\n"
1017 " .long 3b - .\n" 1018 " .long 3b - .\n"
1018 " lda $31,5b-3b(%0)\n" 1019 " lda $31,5b-3b(%0)\n"
1019 " .long 4b - .\n" 1020 " .long 4b - .\n"
1020 " lda $31,5b-4b(%0)\n" 1021 " lda $31,5b-4b(%0)\n"
1021 ".previous" 1022 ".previous"
1022 : "=r"(error), "=&r"(tmp1), "=&r"(tmp2), 1023 : "=r"(error), "=&r"(tmp1), "=&r"(tmp2),
1023 "=&r"(tmp3), "=&r"(tmp4) 1024 "=&r"(tmp3), "=&r"(tmp4)
1024 : "r"(va), "r"(*reg_addr), "0"(0)); 1025 : "r"(va), "r"(*reg_addr), "0"(0));
1025 if (error) 1026 if (error)
1026 goto give_sigsegv; 1027 goto give_sigsegv;
1027 return; 1028 return;
1028 1029
1029 default: 1030 default:
1030 /* What instruction were you trying to use, exactly? */ 1031 /* What instruction were you trying to use, exactly? */
1031 goto give_sigbus; 1032 goto give_sigbus;
1032 } 1033 }
1033 1034
1034 /* Only integer loads should get here; everyone else returns early. */ 1035 /* Only integer loads should get here; everyone else returns early. */
1035 if (reg == 30) 1036 if (reg == 30)
1036 wrusp(fake_reg); 1037 wrusp(fake_reg);
1037 return; 1038 return;
1038 1039
1039 give_sigsegv: 1040 give_sigsegv:
1040 regs->pc -= 4; /* make pc point to faulting insn */ 1041 regs->pc -= 4; /* make pc point to faulting insn */
1041 info.si_signo = SIGSEGV; 1042 info.si_signo = SIGSEGV;
1042 info.si_errno = 0; 1043 info.si_errno = 0;
1043 1044
1044 /* We need to replicate some of the logic in mm/fault.c, 1045 /* We need to replicate some of the logic in mm/fault.c,
1045 since we don't have access to the fault code in the 1046 since we don't have access to the fault code in the
1046 exception handling return path. */ 1047 exception handling return path. */
1047 if (!__access_ok((unsigned long)va, 0, USER_DS)) 1048 if (!__access_ok((unsigned long)va, 0, USER_DS))
1048 info.si_code = SEGV_ACCERR; 1049 info.si_code = SEGV_ACCERR;
1049 else { 1050 else {
1050 struct mm_struct *mm = current->mm; 1051 struct mm_struct *mm = current->mm;
1051 down_read(&mm->mmap_sem); 1052 down_read(&mm->mmap_sem);
1052 if (find_vma(mm, (unsigned long)va)) 1053 if (find_vma(mm, (unsigned long)va))
1053 info.si_code = SEGV_ACCERR; 1054 info.si_code = SEGV_ACCERR;
1054 else 1055 else
1055 info.si_code = SEGV_MAPERR; 1056 info.si_code = SEGV_MAPERR;
1056 up_read(&mm->mmap_sem); 1057 up_read(&mm->mmap_sem);
1057 } 1058 }
1058 info.si_addr = va; 1059 info.si_addr = va;
1059 send_sig_info(SIGSEGV, &info, current); 1060 send_sig_info(SIGSEGV, &info, current);
1060 return; 1061 return;
1061 1062
1062 give_sigbus: 1063 give_sigbus:
1063 regs->pc -= 4; 1064 regs->pc -= 4;
1064 info.si_signo = SIGBUS; 1065 info.si_signo = SIGBUS;
1065 info.si_errno = 0; 1066 info.si_errno = 0;
1066 info.si_code = BUS_ADRALN; 1067 info.si_code = BUS_ADRALN;
1067 info.si_addr = va; 1068 info.si_addr = va;
1068 send_sig_info(SIGBUS, &info, current); 1069 send_sig_info(SIGBUS, &info, current);
1069 return; 1070 return;
1070 } 1071 }
1071 1072
1072 void __init 1073 void __init
1073 trap_init(void) 1074 trap_init(void)
1074 { 1075 {
1075 /* Tell PAL-code what global pointer we want in the kernel. */ 1076 /* Tell PAL-code what global pointer we want in the kernel. */
1076 register unsigned long gptr __asm__("$29"); 1077 register unsigned long gptr __asm__("$29");
1077 wrkgp(gptr); 1078 wrkgp(gptr);
1078 1079
1079 /* Hack for Multia (UDB) and JENSEN: some of their SRMs have 1080 /* Hack for Multia (UDB) and JENSEN: some of their SRMs have
1080 a bug in the handling of the opDEC fault. Fix it up if so. */ 1081 a bug in the handling of the opDEC fault. Fix it up if so. */
1081 if (implver() == IMPLVER_EV4) 1082 if (implver() == IMPLVER_EV4)
1082 opDEC_check(); 1083 opDEC_check();
1083 1084
1084 wrent(entArith, 1); 1085 wrent(entArith, 1);
1085 wrent(entMM, 2); 1086 wrent(entMM, 2);
1086 wrent(entIF, 3); 1087 wrent(entIF, 3);
1087 wrent(entUna, 4); 1088 wrent(entUna, 4);
1088 wrent(entSys, 5); 1089 wrent(entSys, 5);
1089 wrent(entDbg, 6); 1090 wrent(entDbg, 6);
1090 } 1091 }
1091 1092
arch/arm/kernel/traps.c
1 /* 1 /*
2 * linux/arch/arm/kernel/traps.c 2 * linux/arch/arm/kernel/traps.c
3 * 3 *
4 * Copyright (C) 1995-2002 Russell King 4 * Copyright (C) 1995-2002 Russell King
5 * Fragments that appear the same as linux/arch/i386/kernel/traps.c (C) Linus Torvalds 5 * Fragments that appear the same as linux/arch/i386/kernel/traps.c (C) Linus Torvalds
6 * 6 *
7 * This program is free software; you can redistribute it and/or modify 7 * This program is free software; you can redistribute it and/or modify
8 * it under the terms of the GNU General Public License version 2 as 8 * it under the terms of the GNU General Public License version 2 as
9 * published by the Free Software Foundation. 9 * published by the Free Software Foundation.
10 * 10 *
11 * 'traps.c' handles hardware exceptions after we have saved some state in 11 * 'traps.c' handles hardware exceptions after we have saved some state in
12 * 'linux/arch/arm/lib/traps.S'. Mostly a debugging aid, but will probably 12 * 'linux/arch/arm/lib/traps.S'. Mostly a debugging aid, but will probably
13 * kill the offending process. 13 * kill the offending process.
14 */ 14 */
15 #include <linux/module.h> 15 #include <linux/module.h>
16 #include <linux/signal.h> 16 #include <linux/signal.h>
17 #include <linux/spinlock.h> 17 #include <linux/spinlock.h>
18 #include <linux/personality.h> 18 #include <linux/personality.h>
19 #include <linux/kallsyms.h> 19 #include <linux/kallsyms.h>
20 #include <linux/delay.h> 20 #include <linux/delay.h>
21 #include <linux/init.h> 21 #include <linux/init.h>
22 22
23 #include <asm/atomic.h> 23 #include <asm/atomic.h>
24 #include <asm/cacheflush.h> 24 #include <asm/cacheflush.h>
25 #include <asm/system.h> 25 #include <asm/system.h>
26 #include <asm/uaccess.h> 26 #include <asm/uaccess.h>
27 #include <asm/unistd.h> 27 #include <asm/unistd.h>
28 #include <asm/traps.h> 28 #include <asm/traps.h>
29 #include <asm/io.h> 29 #include <asm/io.h>
30 30
31 #include "ptrace.h" 31 #include "ptrace.h"
32 #include "signal.h" 32 #include "signal.h"
33 33
34 static const char *handler[]= { "prefetch abort", "data abort", "address exception", "interrupt" }; 34 static const char *handler[]= { "prefetch abort", "data abort", "address exception", "interrupt" };
35 35
36 #ifdef CONFIG_DEBUG_USER 36 #ifdef CONFIG_DEBUG_USER
37 unsigned int user_debug; 37 unsigned int user_debug;
38 38
39 static int __init user_debug_setup(char *str) 39 static int __init user_debug_setup(char *str)
40 { 40 {
41 get_option(&str, &user_debug); 41 get_option(&str, &user_debug);
42 return 1; 42 return 1;
43 } 43 }
44 __setup("user_debug=", user_debug_setup); 44 __setup("user_debug=", user_debug_setup);
45 #endif 45 #endif
46 46
47 static void dump_mem(const char *str, unsigned long bottom, unsigned long top); 47 static void dump_mem(const char *str, unsigned long bottom, unsigned long top);
48 48
49 static inline int in_exception_text(unsigned long ptr) 49 static inline int in_exception_text(unsigned long ptr)
50 { 50 {
51 extern char __exception_text_start[]; 51 extern char __exception_text_start[];
52 extern char __exception_text_end[]; 52 extern char __exception_text_end[];
53 53
54 return ptr >= (unsigned long)&__exception_text_start && 54 return ptr >= (unsigned long)&__exception_text_start &&
55 ptr < (unsigned long)&__exception_text_end; 55 ptr < (unsigned long)&__exception_text_end;
56 } 56 }
57 57
58 void dump_backtrace_entry(unsigned long where, unsigned long from, unsigned long frame) 58 void dump_backtrace_entry(unsigned long where, unsigned long from, unsigned long frame)
59 { 59 {
60 #ifdef CONFIG_KALLSYMS 60 #ifdef CONFIG_KALLSYMS
61 printk("[<%08lx>] ", where); 61 printk("[<%08lx>] ", where);
62 print_symbol("(%s) ", where); 62 print_symbol("(%s) ", where);
63 printk("from [<%08lx>] ", from); 63 printk("from [<%08lx>] ", from);
64 print_symbol("(%s)\n", from); 64 print_symbol("(%s)\n", from);
65 #else 65 #else
66 printk("Function entered at [<%08lx>] from [<%08lx>]\n", where, from); 66 printk("Function entered at [<%08lx>] from [<%08lx>]\n", where, from);
67 #endif 67 #endif
68 68
69 if (in_exception_text(where)) 69 if (in_exception_text(where))
70 dump_mem("Exception stack", frame + 4, frame + 4 + sizeof(struct pt_regs)); 70 dump_mem("Exception stack", frame + 4, frame + 4 + sizeof(struct pt_regs));
71 } 71 }
72 72
73 /* 73 /*
74 * Stack pointers should always be within the kernel's view of 74 * Stack pointers should always be within the kernel's view of
75 * physical memory. If it is not there, then we can't dump 75 * physical memory. If it is not there, then we can't dump
76 * out any information relating to the stack. 76 * out any information relating to the stack.
77 */ 77 */
78 static int verify_stack(unsigned long sp) 78 static int verify_stack(unsigned long sp)
79 { 79 {
80 if (sp < PAGE_OFFSET || (sp > (unsigned long)high_memory && high_memory != 0)) 80 if (sp < PAGE_OFFSET || (sp > (unsigned long)high_memory && high_memory != 0))
81 return -EFAULT; 81 return -EFAULT;
82 82
83 return 0; 83 return 0;
84 } 84 }
85 85
86 /* 86 /*
87 * Dump out the contents of some memory nicely... 87 * Dump out the contents of some memory nicely...
88 */ 88 */
89 static void dump_mem(const char *str, unsigned long bottom, unsigned long top) 89 static void dump_mem(const char *str, unsigned long bottom, unsigned long top)
90 { 90 {
91 unsigned long p = bottom & ~31; 91 unsigned long p = bottom & ~31;
92 mm_segment_t fs; 92 mm_segment_t fs;
93 int i; 93 int i;
94 94
95 /* 95 /*
96 * We need to switch to kernel mode so that we can use __get_user 96 * We need to switch to kernel mode so that we can use __get_user
97 * to safely read from kernel space. Note that we now dump the 97 * to safely read from kernel space. Note that we now dump the
98 * code first, just in case the backtrace kills us. 98 * code first, just in case the backtrace kills us.
99 */ 99 */
100 fs = get_fs(); 100 fs = get_fs();
101 set_fs(KERNEL_DS); 101 set_fs(KERNEL_DS);
102 102
103 printk("%s(0x%08lx to 0x%08lx)\n", str, bottom, top); 103 printk("%s(0x%08lx to 0x%08lx)\n", str, bottom, top);
104 104
105 for (p = bottom & ~31; p < top;) { 105 for (p = bottom & ~31; p < top;) {
106 printk("%04lx: ", p & 0xffff); 106 printk("%04lx: ", p & 0xffff);
107 107
108 for (i = 0; i < 8; i++, p += 4) { 108 for (i = 0; i < 8; i++, p += 4) {
109 unsigned int val; 109 unsigned int val;
110 110
111 if (p < bottom || p >= top) 111 if (p < bottom || p >= top)
112 printk(" "); 112 printk(" ");
113 else { 113 else {
114 __get_user(val, (unsigned long *)p); 114 __get_user(val, (unsigned long *)p);
115 printk("%08x ", val); 115 printk("%08x ", val);
116 } 116 }
117 } 117 }
118 printk ("\n"); 118 printk ("\n");
119 } 119 }
120 120
121 set_fs(fs); 121 set_fs(fs);
122 } 122 }
123 123
124 static void dump_instr(struct pt_regs *regs) 124 static void dump_instr(struct pt_regs *regs)
125 { 125 {
126 unsigned long addr = instruction_pointer(regs); 126 unsigned long addr = instruction_pointer(regs);
127 const int thumb = thumb_mode(regs); 127 const int thumb = thumb_mode(regs);
128 const int width = thumb ? 4 : 8; 128 const int width = thumb ? 4 : 8;
129 mm_segment_t fs; 129 mm_segment_t fs;
130 int i; 130 int i;
131 131
132 /* 132 /*
133 * We need to switch to kernel mode so that we can use __get_user 133 * We need to switch to kernel mode so that we can use __get_user
134 * to safely read from kernel space. Note that we now dump the 134 * to safely read from kernel space. Note that we now dump the
135 * code first, just in case the backtrace kills us. 135 * code first, just in case the backtrace kills us.
136 */ 136 */
137 fs = get_fs(); 137 fs = get_fs();
138 set_fs(KERNEL_DS); 138 set_fs(KERNEL_DS);
139 139
140 printk("Code: "); 140 printk("Code: ");
141 for (i = -4; i < 1; i++) { 141 for (i = -4; i < 1; i++) {
142 unsigned int val, bad; 142 unsigned int val, bad;
143 143
144 if (thumb) 144 if (thumb)
145 bad = __get_user(val, &((u16 *)addr)[i]); 145 bad = __get_user(val, &((u16 *)addr)[i]);
146 else 146 else
147 bad = __get_user(val, &((u32 *)addr)[i]); 147 bad = __get_user(val, &((u32 *)addr)[i]);
148 148
149 if (!bad) 149 if (!bad)
150 printk(i == 0 ? "(%0*x) " : "%0*x ", width, val); 150 printk(i == 0 ? "(%0*x) " : "%0*x ", width, val);
151 else { 151 else {
152 printk("bad PC value."); 152 printk("bad PC value.");
153 break; 153 break;
154 } 154 }
155 } 155 }
156 printk("\n"); 156 printk("\n");
157 157
158 set_fs(fs); 158 set_fs(fs);
159 } 159 }
160 160
161 static void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk) 161 static void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
162 { 162 {
163 unsigned int fp; 163 unsigned int fp;
164 int ok = 1; 164 int ok = 1;
165 165
166 printk("Backtrace: "); 166 printk("Backtrace: ");
167 fp = regs->ARM_fp; 167 fp = regs->ARM_fp;
168 if (!fp) { 168 if (!fp) {
169 printk("no frame pointer"); 169 printk("no frame pointer");
170 ok = 0; 170 ok = 0;
171 } else if (verify_stack(fp)) { 171 } else if (verify_stack(fp)) {
172 printk("invalid frame pointer 0x%08x", fp); 172 printk("invalid frame pointer 0x%08x", fp);
173 ok = 0; 173 ok = 0;
174 } else if (fp < (unsigned long)end_of_stack(tsk)) 174 } else if (fp < (unsigned long)end_of_stack(tsk))
175 printk("frame pointer underflow"); 175 printk("frame pointer underflow");
176 printk("\n"); 176 printk("\n");
177 177
178 if (ok) 178 if (ok)
179 c_backtrace(fp, processor_mode(regs)); 179 c_backtrace(fp, processor_mode(regs));
180 } 180 }
181 181
182 void dump_stack(void) 182 void dump_stack(void)
183 { 183 {
184 __backtrace(); 184 __backtrace();
185 } 185 }
186 186
187 EXPORT_SYMBOL(dump_stack); 187 EXPORT_SYMBOL(dump_stack);
188 188
189 void show_stack(struct task_struct *tsk, unsigned long *sp) 189 void show_stack(struct task_struct *tsk, unsigned long *sp)
190 { 190 {
191 unsigned long fp; 191 unsigned long fp;
192 192
193 if (!tsk) 193 if (!tsk)
194 tsk = current; 194 tsk = current;
195 195
196 if (tsk != current) 196 if (tsk != current)
197 fp = thread_saved_fp(tsk); 197 fp = thread_saved_fp(tsk);
198 else 198 else
199 asm("mov %0, fp" : "=r" (fp) : : "cc"); 199 asm("mov %0, fp" : "=r" (fp) : : "cc");
200 200
201 c_backtrace(fp, 0x10); 201 c_backtrace(fp, 0x10);
202 barrier(); 202 barrier();
203 } 203 }
204 204
205 #ifdef CONFIG_PREEMPT 205 #ifdef CONFIG_PREEMPT
206 #define S_PREEMPT " PREEMPT" 206 #define S_PREEMPT " PREEMPT"
207 #else 207 #else
208 #define S_PREEMPT "" 208 #define S_PREEMPT ""
209 #endif 209 #endif
210 #ifdef CONFIG_SMP 210 #ifdef CONFIG_SMP
211 #define S_SMP " SMP" 211 #define S_SMP " SMP"
212 #else 212 #else
213 #define S_SMP "" 213 #define S_SMP ""
214 #endif 214 #endif
215 215
216 static void __die(const char *str, int err, struct thread_info *thread, struct pt_regs *regs) 216 static void __die(const char *str, int err, struct thread_info *thread, struct pt_regs *regs)
217 { 217 {
218 struct task_struct *tsk = thread->task; 218 struct task_struct *tsk = thread->task;
219 static int die_counter; 219 static int die_counter;
220 220
221 printk("Internal error: %s: %x [#%d]" S_PREEMPT S_SMP "\n", 221 printk("Internal error: %s: %x [#%d]" S_PREEMPT S_SMP "\n",
222 str, err, ++die_counter); 222 str, err, ++die_counter);
223 print_modules(); 223 print_modules();
224 __show_regs(regs); 224 __show_regs(regs);
225 printk("Process %s (pid: %d, stack limit = 0x%p)\n", 225 printk("Process %s (pid: %d, stack limit = 0x%p)\n",
226 tsk->comm, tsk->pid, thread + 1); 226 tsk->comm, tsk->pid, thread + 1);
227 227
228 if (!user_mode(regs) || in_interrupt()) { 228 if (!user_mode(regs) || in_interrupt()) {
229 dump_mem("Stack: ", regs->ARM_sp, 229 dump_mem("Stack: ", regs->ARM_sp,
230 THREAD_SIZE + (unsigned long)task_stack_page(tsk)); 230 THREAD_SIZE + (unsigned long)task_stack_page(tsk));
231 dump_backtrace(regs, tsk); 231 dump_backtrace(regs, tsk);
232 dump_instr(regs); 232 dump_instr(regs);
233 } 233 }
234 } 234 }
235 235
236 DEFINE_SPINLOCK(die_lock); 236 DEFINE_SPINLOCK(die_lock);
237 237
238 /* 238 /*
239 * This function is protected against re-entrancy. 239 * This function is protected against re-entrancy.
240 */ 240 */
241 NORET_TYPE void die(const char *str, struct pt_regs *regs, int err) 241 NORET_TYPE void die(const char *str, struct pt_regs *regs, int err)
242 { 242 {
243 struct thread_info *thread = current_thread_info(); 243 struct thread_info *thread = current_thread_info();
244 244
245 oops_enter(); 245 oops_enter();
246 246
247 console_verbose(); 247 console_verbose();
248 spin_lock_irq(&die_lock); 248 spin_lock_irq(&die_lock);
249 bust_spinlocks(1); 249 bust_spinlocks(1);
250 __die(str, err, thread, regs); 250 __die(str, err, thread, regs);
251 bust_spinlocks(0); 251 bust_spinlocks(0);
252 add_taint(TAINT_DIE);
252 spin_unlock_irq(&die_lock); 253 spin_unlock_irq(&die_lock);
253 254
254 if (in_interrupt()) 255 if (in_interrupt())
255 panic("Fatal exception in interrupt"); 256 panic("Fatal exception in interrupt");
256 257
257 if (panic_on_oops) 258 if (panic_on_oops)
258 panic("Fatal exception"); 259 panic("Fatal exception");
259 260
260 oops_exit(); 261 oops_exit();
261 do_exit(SIGSEGV); 262 do_exit(SIGSEGV);
262 } 263 }
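
The add_taint(TAINT_DIE) call added above is the point of this patch: once die() has run, the kernel stays marked. A simplified, standalone sketch of the machinery it leans on (modelled on kernel/panic.c; the TAINT_DIE value is my assumption of the (1 << 7) bit this commit introduces in linux/kernel.h):

	#include <stdio.h>

	#define TAINT_DIE	(1 << 7)	/* assumed value */

	static int tainted;		/* sticky for the lifetime of the kernel */

	static void add_taint(unsigned flag)
	{
		tainted |= flag;	/* set once, never cleared at run time */
	}

	static const char *print_tainted(void)
	{
		static char buf[32];

		/* The real print_tainted() reports every taint bit; 'D'
		   is the one a previous die() leaves behind. */
		snprintf(buf, sizeof(buf), "Tainted: %c",
			 (tainted & TAINT_DIE) ? 'D' : ' ');
		return buf;
	}

	int main(void)
	{
		add_taint(TAINT_DIE);		/* what die() now does */
		printf("%s\n", print_tainted());
		return 0;
	}

Any later oops or SysRq dump that prints the taint string will now carry the 'D', so the reader of a calltrace can see at a glance that the machine had already died once before.
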
263 264
264 void arm_notify_die(const char *str, struct pt_regs *regs, 265 void arm_notify_die(const char *str, struct pt_regs *regs,
265 struct siginfo *info, unsigned long err, unsigned long trap) 266 struct siginfo *info, unsigned long err, unsigned long trap)
266 { 267 {
267 if (user_mode(regs)) { 268 if (user_mode(regs)) {
268 current->thread.error_code = err; 269 current->thread.error_code = err;
269 current->thread.trap_no = trap; 270 current->thread.trap_no = trap;
270 271
271 force_sig_info(info->si_signo, info, current); 272 force_sig_info(info->si_signo, info, current);
272 } else { 273 } else {
273 die(str, regs, err); 274 die(str, regs, err);
274 } 275 }
275 } 276 }
276 277
277 static LIST_HEAD(undef_hook); 278 static LIST_HEAD(undef_hook);
278 static DEFINE_SPINLOCK(undef_lock); 279 static DEFINE_SPINLOCK(undef_lock);
279 280
280 void register_undef_hook(struct undef_hook *hook) 281 void register_undef_hook(struct undef_hook *hook)
281 { 282 {
282 unsigned long flags; 283 unsigned long flags;
283 284
284 spin_lock_irqsave(&undef_lock, flags); 285 spin_lock_irqsave(&undef_lock, flags);
285 list_add(&hook->node, &undef_hook); 286 list_add(&hook->node, &undef_hook);
286 spin_unlock_irqrestore(&undef_lock, flags); 287 spin_unlock_irqrestore(&undef_lock, flags);
287 } 288 }
288 289
289 void unregister_undef_hook(struct undef_hook *hook) 290 void unregister_undef_hook(struct undef_hook *hook)
290 { 291 {
291 unsigned long flags; 292 unsigned long flags;
292 293
293 spin_lock_irqsave(&undef_lock, flags); 294 spin_lock_irqsave(&undef_lock, flags);
294 list_del(&hook->node); 295 list_del(&hook->node);
295 spin_unlock_irqrestore(&undef_lock, flags); 296 spin_unlock_irqrestore(&undef_lock, flags);
296 } 297 }
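
A hypothetical client of this hook API (the instruction encoding and the init placement are invented for illustration) claims one undefined instruction before do_undefinstr() below falls through to SIGILL:

	/* Called from do_undefinstr() with undef_lock held. */
	static int my_undef_fn(struct pt_regs *regs, unsigned int instr)
	{
		regs->ARM_pc += 4;	/* step over the trapping instruction */
		return 0;		/* 0 = handled; non-zero = try the next hook */
	}

	static struct undef_hook my_hook = {
		.instr_mask	= 0xffffffff,	/* match exactly one encoding */
		.instr_val	= 0xe7f000f0,	/* hypothetical undefined encoding */
		.cpsr_mask	= MODE_MASK,	/* only when trapped from... */
		.cpsr_val	= SVC_MODE,	/* ...kernel (SVC) mode */
		.fn		= my_undef_fn,
	};

	static int __init my_hook_init(void)
	{
		register_undef_hook(&my_hook);
		return 0;
	}
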
297 298
298 asmlinkage void __exception do_undefinstr(struct pt_regs *regs) 299 asmlinkage void __exception do_undefinstr(struct pt_regs *regs)
299 { 300 {
300 unsigned int correction = thumb_mode(regs) ? 2 : 4; 301 unsigned int correction = thumb_mode(regs) ? 2 : 4;
301 unsigned int instr; 302 unsigned int instr;
302 struct undef_hook *hook; 303 struct undef_hook *hook;
303 siginfo_t info; 304 siginfo_t info;
304 void __user *pc; 305 void __user *pc;
305 unsigned long flags; 306 unsigned long flags;
306 307
307 /* 308 /*
308 * According to the ARM ARM, PC is 2 or 4 bytes ahead, 309 * According to the ARM ARM, PC is 2 or 4 bytes ahead,
309 * depending on whether we're in Thumb mode or not. 310 * depending on whether we're in Thumb mode or not.
310 * Correct this offset. 311 * Correct this offset.
311 */ 312 */
312 regs->ARM_pc -= correction; 313 regs->ARM_pc -= correction;
313 314
314 pc = (void __user *)instruction_pointer(regs); 315 pc = (void __user *)instruction_pointer(regs);
315 316
316 if (processor_mode(regs) == SVC_MODE) { 317 if (processor_mode(regs) == SVC_MODE) {
317 instr = *(u32 *) pc; 318 instr = *(u32 *) pc;
318 } else if (thumb_mode(regs)) { 319 } else if (thumb_mode(regs)) {
319 get_user(instr, (u16 __user *)pc); 320 get_user(instr, (u16 __user *)pc);
320 } else { 321 } else {
321 get_user(instr, (u32 __user *)pc); 322 get_user(instr, (u32 __user *)pc);
322 } 323 }
323 324
324 spin_lock_irqsave(&undef_lock, flags); 325 spin_lock_irqsave(&undef_lock, flags);
325 list_for_each_entry(hook, &undef_hook, node) { 326 list_for_each_entry(hook, &undef_hook, node) {
326 if ((instr & hook->instr_mask) == hook->instr_val && 327 if ((instr & hook->instr_mask) == hook->instr_val &&
327 (regs->ARM_cpsr & hook->cpsr_mask) == hook->cpsr_val) { 328 (regs->ARM_cpsr & hook->cpsr_mask) == hook->cpsr_val) {
328 if (hook->fn(regs, instr) == 0) { 329 if (hook->fn(regs, instr) == 0) {
329 spin_unlock_irq(&undef_lock); 330 spin_unlock_irq(&undef_lock);
330 return; 331 return;
331 } 332 }
332 } 333 }
333 } 334 }
334 spin_unlock_irqrestore(&undef_lock, flags); 335 spin_unlock_irqrestore(&undef_lock, flags);
335 336
336 #ifdef CONFIG_DEBUG_USER 337 #ifdef CONFIG_DEBUG_USER
337 if (user_debug & UDBG_UNDEFINED) { 338 if (user_debug & UDBG_UNDEFINED) {
338 printk(KERN_INFO "%s (%d): undefined instruction: pc=%p\n", 339 printk(KERN_INFO "%s (%d): undefined instruction: pc=%p\n",
339 current->comm, current->pid, pc); 340 current->comm, current->pid, pc);
340 dump_instr(regs); 341 dump_instr(regs);
341 } 342 }
342 #endif 343 #endif
343 344
344 info.si_signo = SIGILL; 345 info.si_signo = SIGILL;
345 info.si_errno = 0; 346 info.si_errno = 0;
346 info.si_code = ILL_ILLOPC; 347 info.si_code = ILL_ILLOPC;
347 info.si_addr = pc; 348 info.si_addr = pc;
348 349
349 arm_notify_die("Oops - undefined instruction", regs, &info, 0, 6); 350 arm_notify_die("Oops - undefined instruction", regs, &info, 0, 6);
350 } 351 }
351 352
352 asmlinkage void do_unexp_fiq (struct pt_regs *regs) 353 asmlinkage void do_unexp_fiq (struct pt_regs *regs)
353 { 354 {
354 #ifndef CONFIG_IGNORE_FIQ 355 #ifndef CONFIG_IGNORE_FIQ
355 printk("Hmm. Unexpected FIQ received, but trying to continue\n"); 356 printk("Hmm. Unexpected FIQ received, but trying to continue\n");
356 printk("You may have a hardware problem...\n"); 357 printk("You may have a hardware problem...\n");
357 #endif 358 #endif
358 } 359 }
359 360
360 /* 361 /*
361 * bad_mode handles the impossible case in the vectors. If you see one of 362 * bad_mode handles the impossible case in the vectors. If you see one of
362 * these, then it's extremely serious, and could mean you have buggy hardware. 363 * these, then it's extremely serious, and could mean you have buggy hardware.
363 * It never returns, and never tries to sync. We hope that we can at least 364 * It never returns, and never tries to sync. We hope that we can at least
364 * dump out some state information... 365 * dump out some state information...
365 */ 366 */
366 asmlinkage void bad_mode(struct pt_regs *regs, int reason) 367 asmlinkage void bad_mode(struct pt_regs *regs, int reason)
367 { 368 {
368 console_verbose(); 369 console_verbose();
369 370
370 printk(KERN_CRIT "Bad mode in %s handler detected\n", handler[reason]); 371 printk(KERN_CRIT "Bad mode in %s handler detected\n", handler[reason]);
371 372
372 die("Oops - bad mode", regs, 0); 373 die("Oops - bad mode", regs, 0);
373 local_irq_disable(); 374 local_irq_disable();
374 panic("bad mode"); 375 panic("bad mode");
375 } 376 }
376 377
377 static int bad_syscall(int n, struct pt_regs *regs) 378 static int bad_syscall(int n, struct pt_regs *regs)
378 { 379 {
379 struct thread_info *thread = current_thread_info(); 380 struct thread_info *thread = current_thread_info();
380 siginfo_t info; 381 siginfo_t info;
381 382
382 if (current->personality != PER_LINUX && 383 if (current->personality != PER_LINUX &&
383 current->personality != PER_LINUX_32BIT && 384 current->personality != PER_LINUX_32BIT &&
384 thread->exec_domain->handler) { 385 thread->exec_domain->handler) {
385 thread->exec_domain->handler(n, regs); 386 thread->exec_domain->handler(n, regs);
386 return regs->ARM_r0; 387 return regs->ARM_r0;
387 } 388 }
388 389
389 #ifdef CONFIG_DEBUG_USER 390 #ifdef CONFIG_DEBUG_USER
390 if (user_debug & UDBG_SYSCALL) { 391 if (user_debug & UDBG_SYSCALL) {
391 printk(KERN_ERR "[%d] %s: obsolete system call %08x.\n", 392 printk(KERN_ERR "[%d] %s: obsolete system call %08x.\n",
392 current->pid, current->comm, n); 393 current->pid, current->comm, n);
393 dump_instr(regs); 394 dump_instr(regs);
394 } 395 }
395 #endif 396 #endif
396 397
397 info.si_signo = SIGILL; 398 info.si_signo = SIGILL;
398 info.si_errno = 0; 399 info.si_errno = 0;
399 info.si_code = ILL_ILLTRP; 400 info.si_code = ILL_ILLTRP;
400 info.si_addr = (void __user *)instruction_pointer(regs) - 401 info.si_addr = (void __user *)instruction_pointer(regs) -
401 (thumb_mode(regs) ? 2 : 4); 402 (thumb_mode(regs) ? 2 : 4);
402 403
403 arm_notify_die("Oops - bad syscall", regs, &info, n, 0); 404 arm_notify_die("Oops - bad syscall", regs, &info, n, 0);
404 405
405 return regs->ARM_r0; 406 return regs->ARM_r0;
406 } 407 }
407 408
408 static inline void 409 static inline void
409 do_cache_op(unsigned long start, unsigned long end, int flags) 410 do_cache_op(unsigned long start, unsigned long end, int flags)
410 { 411 {
411 struct vm_area_struct *vma; 412 struct vm_area_struct *vma;
412 413
413 if (end < start || flags) 414 if (end < start || flags)
414 return; 415 return;
415 416
416 vma = find_vma(current->active_mm, start); 417 vma = find_vma(current->active_mm, start);
417 if (vma && vma->vm_start < end) { 418 if (vma && vma->vm_start < end) {
418 if (start < vma->vm_start) 419 if (start < vma->vm_start)
419 start = vma->vm_start; 420 start = vma->vm_start;
420 if (end > vma->vm_end) 421 if (end > vma->vm_end)
421 end = vma->vm_end; 422 end = vma->vm_end;
422 423
423 flush_cache_user_range(vma, start, end); 424 flush_cache_user_range(vma, start, end);
424 } 425 }
425 } 426 }
426 427
427 /* 428 /*
428 * Handle all unrecognised system calls. 429 * Handle all unrecognised system calls.
429 * 0x9f0000 - 0x9fffff are some more esoteric system calls 430 * 0x9f0000 - 0x9fffff are some more esoteric system calls
430 */ 431 */
431 #define NR(x) ((__ARM_NR_##x) - __ARM_NR_BASE) 432 #define NR(x) ((__ARM_NR_##x) - __ARM_NR_BASE)
432 asmlinkage int arm_syscall(int no, struct pt_regs *regs) 433 asmlinkage int arm_syscall(int no, struct pt_regs *regs)
433 { 434 {
434 struct thread_info *thread = current_thread_info(); 435 struct thread_info *thread = current_thread_info();
435 siginfo_t info; 436 siginfo_t info;
436 437
437 if ((no >> 16) != (__ARM_NR_BASE>> 16)) 438 if ((no >> 16) != (__ARM_NR_BASE>> 16))
438 return bad_syscall(no, regs); 439 return bad_syscall(no, regs);
439 440
440 switch (no & 0xffff) { 441 switch (no & 0xffff) {
441 case 0: /* branch through 0 */ 442 case 0: /* branch through 0 */
442 info.si_signo = SIGSEGV; 443 info.si_signo = SIGSEGV;
443 info.si_errno = 0; 444 info.si_errno = 0;
444 info.si_code = SEGV_MAPERR; 445 info.si_code = SEGV_MAPERR;
445 info.si_addr = NULL; 446 info.si_addr = NULL;
446 447
447 arm_notify_die("branch through zero", regs, &info, 0, 0); 448 arm_notify_die("branch through zero", regs, &info, 0, 0);
448 return 0; 449 return 0;
449 450
450 case NR(breakpoint): /* SWI BREAK_POINT */ 451 case NR(breakpoint): /* SWI BREAK_POINT */
451 regs->ARM_pc -= thumb_mode(regs) ? 2 : 4; 452 regs->ARM_pc -= thumb_mode(regs) ? 2 : 4;
452 ptrace_break(current, regs); 453 ptrace_break(current, regs);
453 return regs->ARM_r0; 454 return regs->ARM_r0;
454 455
455 /* 456 /*
456 * Flush a region from virtual address 'r0' to virtual address 'r1' 457 * Flush a region from virtual address 'r0' to virtual address 'r1'
457 * _exclusive_. There is no alignment requirement on either address; 458 * _exclusive_. There is no alignment requirement on either address;
458 * user space does not need to know the hardware cache layout. 459 * user space does not need to know the hardware cache layout.
459 * 460 *
460 * r2 contains flags. It should ALWAYS be passed as ZERO until it 461 * r2 contains flags. It should ALWAYS be passed as ZERO until it
461 * is defined to be something else. For now we ignore it, but may 462 * is defined to be something else. For now we ignore it, but may
462 * the fires of hell burn in your belly if you break this rule. ;) 463 * the fires of hell burn in your belly if you break this rule. ;)
463 * 464 *
464 * (at a later date, we may want to allow this call to not flush 465 * (at a later date, we may want to allow this call to not flush
465 * various aspects of the cache. Passing '0' will guarantee that 466 * various aspects of the cache. Passing '0' will guarantee that
466 * everything necessary gets flushed to maintain consistency in 467 * everything necessary gets flushed to maintain consistency in
467 * the specified region). 468 * the specified region).
468 */ 469 */
469 case NR(cacheflush): 470 case NR(cacheflush):
470 do_cache_op(regs->ARM_r0, regs->ARM_r1, regs->ARM_r2); 471 do_cache_op(regs->ARM_r0, regs->ARM_r1, regs->ARM_r2);
471 return 0; 472 return 0;
472 473
473 case NR(usr26): 474 case NR(usr26):
474 if (!(elf_hwcap & HWCAP_26BIT)) 475 if (!(elf_hwcap & HWCAP_26BIT))
475 break; 476 break;
476 regs->ARM_cpsr &= ~MODE32_BIT; 477 regs->ARM_cpsr &= ~MODE32_BIT;
477 return regs->ARM_r0; 478 return regs->ARM_r0;
478 479
479 case NR(usr32): 480 case NR(usr32):
480 if (!(elf_hwcap & HWCAP_26BIT)) 481 if (!(elf_hwcap & HWCAP_26BIT))
481 break; 482 break;
482 regs->ARM_cpsr |= MODE32_BIT; 483 regs->ARM_cpsr |= MODE32_BIT;
483 return regs->ARM_r0; 484 return regs->ARM_r0;
484 485
485 case NR(set_tls): 486 case NR(set_tls):
486 thread->tp_value = regs->ARM_r0; 487 thread->tp_value = regs->ARM_r0;
487 #if defined(CONFIG_HAS_TLS_REG) 488 #if defined(CONFIG_HAS_TLS_REG)
488 asm ("mcr p15, 0, %0, c13, c0, 3" : : "r" (regs->ARM_r0) ); 489 asm ("mcr p15, 0, %0, c13, c0, 3" : : "r" (regs->ARM_r0) );
489 #elif !defined(CONFIG_TLS_REG_EMUL) 490 #elif !defined(CONFIG_TLS_REG_EMUL)
490 /* 491 /*
491 * User space must never try to access this directly. 492 * User space must never try to access this directly.
492 * Expect your app to break eventually if you do so. 493 * Expect your app to break eventually if you do so.
493 * The user helper at 0xffff0fe0 must be used instead. 494 * The user helper at 0xffff0fe0 must be used instead.
494 * (see entry-armv.S for details) 495 * (see entry-armv.S for details)
495 */ 496 */
496 *((unsigned int *)0xffff0ff0) = regs->ARM_r0; 497 *((unsigned int *)0xffff0ff0) = regs->ARM_r0;
497 #endif 498 #endif
498 return 0; 499 return 0;
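
The helper address the comment above points at can be called directly from user code. A hypothetical TLS read for pre-ARMv6 cores without the hardware register (a sketch; see entry-armv.S for the real helper):

	typedef unsigned int (*kuser_get_tls_t)(void);
	#define __kuser_get_tls ((kuser_get_tls_t)0xffff0fe0)

	unsigned int read_tp(void)
	{
		return __kuser_get_tls();	/* returns what NR(set_tls) stored */
	}
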
499 500
500 #ifdef CONFIG_NEEDS_SYSCALL_FOR_CMPXCHG 501 #ifdef CONFIG_NEEDS_SYSCALL_FOR_CMPXCHG
501 /* 502 /*
502 * Atomically store r1 in *r2 if *r2 is equal to r0 for user space. 503 * Atomically store r1 in *r2 if *r2 is equal to r0 for user space.
503 * Return zero in r0 if *MEM was changed or non-zero if no exchange 504 * Return zero in r0 if *MEM was changed or non-zero if no exchange
504 * happened. Also set the user C flag accordingly. 505 * happened. Also set the user C flag accordingly.
505 * If access permissions have to be fixed up then non-zero is 506 * If access permissions have to be fixed up then non-zero is
506 * returned and the operation has to be re-attempted. 507 * returned and the operation has to be re-attempted.
507 * 508 *
508 * *NOTE*: This is a ghost syscall private to the kernel. Only the 509 * *NOTE*: This is a ghost syscall private to the kernel. Only the
509 * __kuser_cmpxchg code in entry-armv.S should be aware of its 510 * __kuser_cmpxchg code in entry-armv.S should be aware of its
510 * existence. Don't ever use this from user code. 511 * existence. Don't ever use this from user code.
511 */ 512 */
512 case 0xfff0: 513 case 0xfff0:
513 { 514 {
514 extern void do_DataAbort(unsigned long addr, unsigned int fsr, 515 extern void do_DataAbort(unsigned long addr, unsigned int fsr,
515 struct pt_regs *regs); 516 struct pt_regs *regs);
516 unsigned long val; 517 unsigned long val;
517 unsigned long addr = regs->ARM_r2; 518 unsigned long addr = regs->ARM_r2;
518 struct mm_struct *mm = current->mm; 519 struct mm_struct *mm = current->mm;
519 pgd_t *pgd; pmd_t *pmd; pte_t *pte; 520 pgd_t *pgd; pmd_t *pmd; pte_t *pte;
520 spinlock_t *ptl; 521 spinlock_t *ptl;
521 522
522 regs->ARM_cpsr &= ~PSR_C_BIT; 523 regs->ARM_cpsr &= ~PSR_C_BIT;
523 down_read(&mm->mmap_sem); 524 down_read(&mm->mmap_sem);
524 pgd = pgd_offset(mm, addr); 525 pgd = pgd_offset(mm, addr);
525 if (!pgd_present(*pgd)) 526 if (!pgd_present(*pgd))
526 goto bad_access; 527 goto bad_access;
527 pmd = pmd_offset(pgd, addr); 528 pmd = pmd_offset(pgd, addr);
528 if (!pmd_present(*pmd)) 529 if (!pmd_present(*pmd))
529 goto bad_access; 530 goto bad_access;
530 pte = pte_offset_map_lock(mm, pmd, addr, &ptl); 531 pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
531 if (!pte_present(*pte) || !pte_dirty(*pte)) { 532 if (!pte_present(*pte) || !pte_dirty(*pte)) {
532 pte_unmap_unlock(pte, ptl); 533 pte_unmap_unlock(pte, ptl);
533 goto bad_access; 534 goto bad_access;
534 } 535 }
535 val = *(unsigned long *)addr; 536 val = *(unsigned long *)addr;
536 val -= regs->ARM_r0; 537 val -= regs->ARM_r0;
537 if (val == 0) { 538 if (val == 0) {
538 *(unsigned long *)addr = regs->ARM_r1; 539 *(unsigned long *)addr = regs->ARM_r1;
539 regs->ARM_cpsr |= PSR_C_BIT; 540 regs->ARM_cpsr |= PSR_C_BIT;
540 } 541 }
541 pte_unmap_unlock(pte, ptl); 542 pte_unmap_unlock(pte, ptl);
542 up_read(&mm->mmap_sem); 543 up_read(&mm->mmap_sem);
543 return val; 544 return val;
544 545
545 bad_access: 546 bad_access:
546 up_read(&mm->mmap_sem); 547 up_read(&mm->mmap_sem);
547 /* simulate a write access fault */ 548 /* simulate a write access fault */
548 do_DataAbort(addr, 15 + (1 << 11), regs); 549 do_DataAbort(addr, 15 + (1 << 11), regs);
549 return -1; 550 return -1;
550 } 551 }
551 #endif 552 #endif
552 553
553 default: 554 default:
554 /* Calls 9f00xx..9f07ff are defined to return -ENOSYS 555 /* Calls 9f00xx..9f07ff are defined to return -ENOSYS
555 if not implemented, rather than raising SIGILL. This 556 if not implemented, rather than raising SIGILL. This
556 way the calling program can gracefully determine whether 557 way the calling program can gracefully determine whether
557 a feature is supported. */ 558 a feature is supported. */
558 if (no <= 0x7ff) 559 if (no <= 0x7ff)
559 return -ENOSYS; 560 return -ENOSYS;
560 break; 561 break;
561 } 562 }
562 #ifdef CONFIG_DEBUG_USER 563 #ifdef CONFIG_DEBUG_USER
563 /* 564 /*
564 * experience shows that these seem to indicate that 565 * experience shows that these seem to indicate that
565 * something catastrophic has happened 566 * something catastrophic has happened
566 */ 567 */
567 if (user_debug & UDBG_SYSCALL) { 568 if (user_debug & UDBG_SYSCALL) {
568 printk("[%d] %s: arm syscall %d\n", 569 printk("[%d] %s: arm syscall %d\n",
569 current->pid, current->comm, no); 570 current->pid, current->comm, no);
570 dump_instr(regs); 571 dump_instr(regs);
571 if (user_mode(regs)) { 572 if (user_mode(regs)) {
572 __show_regs(regs); 573 __show_regs(regs);
573 c_backtrace(regs->ARM_fp, processor_mode(regs)); 574 c_backtrace(regs->ARM_fp, processor_mode(regs));
574 } 575 }
575 } 576 }
576 #endif 577 #endif
577 info.si_signo = SIGILL; 578 info.si_signo = SIGILL;
578 info.si_errno = 0; 579 info.si_errno = 0;
579 info.si_code = ILL_ILLTRP; 580 info.si_code = ILL_ILLTRP;
580 info.si_addr = (void __user *)instruction_pointer(regs) - 581 info.si_addr = (void __user *)instruction_pointer(regs) -
581 (thumb_mode(regs) ? 2 : 4); 582 (thumb_mode(regs) ? 2 : 4);
582 583
583 arm_notify_die("Oops - bad syscall(2)", regs, &info, no, 0); 584 arm_notify_die("Oops - bad syscall(2)", regs, &info, no, 0);
584 return 0; 585 return 0;
585 } 586 }
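The 0xfff0 ghost syscall above is only the slow path; user code reaches it through the __kuser_cmpxchg helper that trap_init() below copies into the vector page. A minimal sketch of that documented helper ABI (0xffff0fc0 is the fixed helper address; the kuser_cmpxchg macro and atomic_inc_user() wrapper are our illustrative names):

    /* Sketch: ARM kuser cmpxchg from user space.  r0 = oldval,
     * r1 = newval, r2 = ptr; returns zero (and sets the C flag)
     * iff *ptr was atomically updated.
     */
    typedef int (kuser_cmpxchg_t)(int oldval, int newval, volatile int *ptr);
    #define kuser_cmpxchg (*(kuser_cmpxchg_t *)0xffff0fc0)

    static int atomic_inc_user(volatile int *ctr)
    {
            int old;

            do {
                    old = *ctr;
            } while (kuser_cmpxchg(old, old + 1, ctr) != 0);

            return old + 1;
    }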
586 587
587 #ifdef CONFIG_TLS_REG_EMUL 588 #ifdef CONFIG_TLS_REG_EMUL
588 589
589 /* 590 /*
590 * We might be running on an ARMv6+ processor which should have the TLS 591 * We might be running on an ARMv6+ processor which should have the TLS
591 * register but for some reason we can't use it, or maybe an SMP system 592 * register but for some reason we can't use it, or maybe an SMP system
592 * using a pre-ARMv6 processor (there are apparently a few prototypes like 593 * using a pre-ARMv6 processor (there are apparently a few prototypes like
593 * that in existence) and therefore access to that register must be 594 * that in existence) and therefore access to that register must be
594 * emulated. 595 * emulated.
595 */ 596 */
596 597
597 static int get_tp_trap(struct pt_regs *regs, unsigned int instr) 598 static int get_tp_trap(struct pt_regs *regs, unsigned int instr)
598 { 599 {
599 int reg = (instr >> 12) & 15; 600 int reg = (instr >> 12) & 15;
600 if (reg == 15) 601 if (reg == 15)
601 return 1; 602 return 1;
602 regs->uregs[reg] = current_thread_info()->tp_value; 603 regs->uregs[reg] = current_thread_info()->tp_value;
603 regs->ARM_pc += 4; 604 regs->ARM_pc += 4;
604 return 0; 605 return 0;
605 } 606 }
606 607
607 static struct undef_hook arm_mrc_hook = { 608 static struct undef_hook arm_mrc_hook = {
608 .instr_mask = 0x0fff0fff, 609 .instr_mask = 0x0fff0fff,
609 .instr_val = 0x0e1d0f70, 610 .instr_val = 0x0e1d0f70,
610 .cpsr_mask = PSR_T_BIT, 611 .cpsr_mask = PSR_T_BIT,
611 .cpsr_val = 0, 612 .cpsr_val = 0,
612 .fn = get_tp_trap, 613 .fn = get_tp_trap,
613 }; 614 };
614 615
615 static int __init arm_mrc_hook_init(void) 616 static int __init arm_mrc_hook_init(void)
616 { 617 {
617 register_undef_hook(&arm_mrc_hook); 618 register_undef_hook(&arm_mrc_hook);
618 return 0; 619 return 0;
619 } 620 }
620 621
621 late_initcall(arm_mrc_hook_init); 622 late_initcall(arm_mrc_hook_init);
622 623
623 #endif 624 #endif
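For completeness, the instruction this hook emulates is the ARMv6 TLS read. A hedged sketch (read_tls() is our illustrative name; on pre-ARMv6 hardware it only works because get_tp_trap() above catches the resulting undefined-instruction trap):

    static inline unsigned long read_tls(void)
    {
            unsigned long tp;

            /* mrc p15, 0, rN, c13, c0, 3 -- matches arm_mrc_hook's
             * instr_mask/instr_val, so it either executes natively
             * (ARMv6+) or is emulated from tp_value.
             */
            asm("mrc p15, 0, %0, c13, c0, 3" : "=r" (tp));
            return tp;
    }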
624 625
625 void __bad_xchg(volatile void *ptr, int size) 626 void __bad_xchg(volatile void *ptr, int size)
626 { 627 {
627 printk("xchg: bad data size: pc 0x%p, ptr 0x%p, size %d\n", 628 printk("xchg: bad data size: pc 0x%p, ptr 0x%p, size %d\n",
628 __builtin_return_address(0), ptr, size); 629 __builtin_return_address(0), ptr, size);
629 BUG(); 630 BUG();
630 } 631 }
631 EXPORT_SYMBOL(__bad_xchg); 632 EXPORT_SYMBOL(__bad_xchg);
632 633
633 /* 634 /*
634 * A data abort trap was taken, but we did not handle the instruction. 635 * A data abort trap was taken, but we did not handle the instruction.
635 * Try to abort the user program, or panic if it was the kernel. 636 * Try to abort the user program, or panic if it was the kernel.
636 */ 637 */
637 asmlinkage void 638 asmlinkage void
638 baddataabort(int code, unsigned long instr, struct pt_regs *regs) 639 baddataabort(int code, unsigned long instr, struct pt_regs *regs)
639 { 640 {
640 unsigned long addr = instruction_pointer(regs); 641 unsigned long addr = instruction_pointer(regs);
641 siginfo_t info; 642 siginfo_t info;
642 643
643 #ifdef CONFIG_DEBUG_USER 644 #ifdef CONFIG_DEBUG_USER
644 if (user_debug & UDBG_BADABORT) { 645 if (user_debug & UDBG_BADABORT) {
645 printk(KERN_ERR "[%d] %s: bad data abort: code %d instr 0x%08lx\n", 646 printk(KERN_ERR "[%d] %s: bad data abort: code %d instr 0x%08lx\n",
646 current->pid, current->comm, code, instr); 647 current->pid, current->comm, code, instr);
647 dump_instr(regs); 648 dump_instr(regs);
648 show_pte(current->mm, addr); 649 show_pte(current->mm, addr);
649 } 650 }
650 #endif 651 #endif
651 652
652 info.si_signo = SIGILL; 653 info.si_signo = SIGILL;
653 info.si_errno = 0; 654 info.si_errno = 0;
654 info.si_code = ILL_ILLOPC; 655 info.si_code = ILL_ILLOPC;
655 info.si_addr = (void __user *)addr; 656 info.si_addr = (void __user *)addr;
656 657
657 arm_notify_die("unknown data abort code", regs, &info, instr, 0); 658 arm_notify_die("unknown data abort code", regs, &info, instr, 0);
658 } 659 }
659 660
660 void __attribute__((noreturn)) __bug(const char *file, int line) 661 void __attribute__((noreturn)) __bug(const char *file, int line)
661 { 662 {
662 printk(KERN_CRIT"kernel BUG at %s:%d!\n", file, line); 663 printk(KERN_CRIT"kernel BUG at %s:%d!\n", file, line);
663 *(int *)0 = 0; 664 *(int *)0 = 0;
664 665
665 /* Avoid "noreturn function does return" */ 666 /* Avoid "noreturn function does return" */
666 for (;;); 667 for (;;);
667 } 668 }
668 EXPORT_SYMBOL(__bug); 669 EXPORT_SYMBOL(__bug);
669 670
670 void __readwrite_bug(const char *fn) 671 void __readwrite_bug(const char *fn)
671 { 672 {
672 printk("%s called, but not implemented\n", fn); 673 printk("%s called, but not implemented\n", fn);
673 BUG(); 674 BUG();
674 } 675 }
675 EXPORT_SYMBOL(__readwrite_bug); 676 EXPORT_SYMBOL(__readwrite_bug);
676 677
677 void __pte_error(const char *file, int line, unsigned long val) 678 void __pte_error(const char *file, int line, unsigned long val)
678 { 679 {
679 printk("%s:%d: bad pte %08lx.\n", file, line, val); 680 printk("%s:%d: bad pte %08lx.\n", file, line, val);
680 } 681 }
681 682
682 void __pmd_error(const char *file, int line, unsigned long val) 683 void __pmd_error(const char *file, int line, unsigned long val)
683 { 684 {
684 printk("%s:%d: bad pmd %08lx.\n", file, line, val); 685 printk("%s:%d: bad pmd %08lx.\n", file, line, val);
685 } 686 }
686 687
687 void __pgd_error(const char *file, int line, unsigned long val) 688 void __pgd_error(const char *file, int line, unsigned long val)
688 { 689 {
689 printk("%s:%d: bad pgd %08lx.\n", file, line, val); 690 printk("%s:%d: bad pgd %08lx.\n", file, line, val);
690 } 691 }
691 692
692 asmlinkage void __div0(void) 693 asmlinkage void __div0(void)
693 { 694 {
694 printk("Division by zero in kernel.\n"); 695 printk("Division by zero in kernel.\n");
695 dump_stack(); 696 dump_stack();
696 } 697 }
697 EXPORT_SYMBOL(__div0); 698 EXPORT_SYMBOL(__div0);
698 699
699 void abort(void) 700 void abort(void)
700 { 701 {
701 BUG(); 702 BUG();
702 703
703 /* if that doesn't kill us, halt */ 704 /* if that doesn't kill us, halt */
704 panic("Oops failed to kill thread"); 705 panic("Oops failed to kill thread");
705 } 706 }
706 EXPORT_SYMBOL(abort); 707 EXPORT_SYMBOL(abort);
707 708
708 void __init trap_init(void) 709 void __init trap_init(void)
709 { 710 {
710 unsigned long vectors = CONFIG_VECTORS_BASE; 711 unsigned long vectors = CONFIG_VECTORS_BASE;
711 extern char __stubs_start[], __stubs_end[]; 712 extern char __stubs_start[], __stubs_end[];
712 extern char __vectors_start[], __vectors_end[]; 713 extern char __vectors_start[], __vectors_end[];
713 extern char __kuser_helper_start[], __kuser_helper_end[]; 714 extern char __kuser_helper_start[], __kuser_helper_end[];
714 int kuser_sz = __kuser_helper_end - __kuser_helper_start; 715 int kuser_sz = __kuser_helper_end - __kuser_helper_start;
715 716
716 /* 717 /*
717 * Copy the vectors, stubs and kuser helpers (in entry-armv.S) 718 * Copy the vectors, stubs and kuser helpers (in entry-armv.S)
718 * into the vector page, mapped at 0xffff0000, and ensure these 719 * into the vector page, mapped at 0xffff0000, and ensure these
719 * are visible to the instruction stream. 720 * are visible to the instruction stream.
720 */ 721 */
721 memcpy((void *)vectors, __vectors_start, __vectors_end - __vectors_start); 722 memcpy((void *)vectors, __vectors_start, __vectors_end - __vectors_start);
722 memcpy((void *)vectors + 0x200, __stubs_start, __stubs_end - __stubs_start); 723 memcpy((void *)vectors + 0x200, __stubs_start, __stubs_end - __stubs_start);
723 memcpy((void *)vectors + 0x1000 - kuser_sz, __kuser_helper_start, kuser_sz); 724 memcpy((void *)vectors + 0x1000 - kuser_sz, __kuser_helper_start, kuser_sz);
724 725
725 /* 726 /*
726 * Copy signal return handlers into the vector page, and 727 * Copy signal return handlers into the vector page, and
727 * set sigreturn to be a pointer to these. 728 * set sigreturn to be a pointer to these.
728 */ 729 */
729 memcpy((void *)KERN_SIGRETURN_CODE, sigreturn_codes, 730 memcpy((void *)KERN_SIGRETURN_CODE, sigreturn_codes,
730 sizeof(sigreturn_codes)); 731 sizeof(sigreturn_codes));
731 732
732 flush_icache_range(vectors, vectors + PAGE_SIZE); 733 flush_icache_range(vectors, vectors + PAGE_SIZE);
733 modify_domain(DOMAIN_USER, DOMAIN_CLIENT); 734 modify_domain(DOMAIN_USER, DOMAIN_CLIENT);
734 } 735 }
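The offsets used above imply a fixed vector-page layout; summarising (the addresses follow from the memcpy() arguments, and the named slots are the documented kuser helper ABI):

    /*
     *   0xffff0000             exception vectors (__vectors_start..__vectors_end)
     *   0xffff0200             vector stubs      (__stubs_start..__stubs_end)
     *   0xffff1000 - kuser_sz  kuser helpers     (__kuser_helper_start..__kuser_helper_end)
     *
     * Packing the helpers against the top of the page keeps entry
     * points such as __kuser_cmpxchg (0xffff0fc0) and the TLS helper
     * (0xffff0fe0) at fixed, ABI-stable addresses.
     */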
735 736
arch/arm26/kernel/traps.c
1 /* 1 /*
2 * linux/arch/arm26/kernel/traps.c 2 * linux/arch/arm26/kernel/traps.c
3 * 3 *
4 * Copyright (C) 1995-2002 Russell King 4 * Copyright (C) 1995-2002 Russell King
5 * Fragments that appear the same as linux/arch/i386/kernel/traps.c (C) Linus Torvalds 5 * Fragments that appear the same as linux/arch/i386/kernel/traps.c (C) Linus Torvalds
6 * Copyright (C) 2003 Ian Molton (ARM26) 6 * Copyright (C) 2003 Ian Molton (ARM26)
7 * 7 *
8 * This program is free software; you can redistribute it and/or modify 8 * This program is free software; you can redistribute it and/or modify
9 * it under the terms of the GNU General Public License version 2 as 9 * it under the terms of the GNU General Public License version 2 as
10 * published by the Free Software Foundation. 10 * published by the Free Software Foundation.
11 * 11 *
12 * 'traps.c' handles hardware exceptions after we have saved some state in 12 * 'traps.c' handles hardware exceptions after we have saved some state in
13 * 'linux/arch/arm26/lib/traps.S'. Mostly a debugging aid, but will probably 13 * 'linux/arch/arm26/lib/traps.S'. Mostly a debugging aid, but will probably
14 * kill the offending process. 14 * kill the offending process.
15 */ 15 */
16 16
17 #include <linux/module.h> 17 #include <linux/module.h>
18 #include <linux/types.h> 18 #include <linux/types.h>
19 #include <linux/kernel.h> 19 #include <linux/kernel.h>
20 #include <linux/signal.h> 20 #include <linux/signal.h>
21 #include <linux/sched.h> 21 #include <linux/sched.h>
22 #include <linux/mm.h> 22 #include <linux/mm.h>
23 #include <linux/spinlock.h> 23 #include <linux/spinlock.h>
24 #include <linux/personality.h> 24 #include <linux/personality.h>
25 #include <linux/ptrace.h> 25 #include <linux/ptrace.h>
26 #include <linux/elf.h> 26 #include <linux/elf.h>
27 #include <linux/interrupt.h> 27 #include <linux/interrupt.h>
28 #include <linux/init.h> 28 #include <linux/init.h>
29 29
30 #include <asm/atomic.h> 30 #include <asm/atomic.h>
31 #include <asm/io.h> 31 #include <asm/io.h>
32 #include <asm/pgtable.h> 32 #include <asm/pgtable.h>
33 #include <asm/system.h> 33 #include <asm/system.h>
34 #include <asm/uaccess.h> 34 #include <asm/uaccess.h>
35 #include <asm/unistd.h> 35 #include <asm/unistd.h>
36 #include <linux/mutex.h> 36 #include <linux/mutex.h>
37 37
38 #include "ptrace.h" 38 #include "ptrace.h"
39 39
40 extern void c_backtrace (unsigned long fp, int pmode); 40 extern void c_backtrace (unsigned long fp, int pmode);
41 extern void show_pte(struct mm_struct *mm, unsigned long addr); 41 extern void show_pte(struct mm_struct *mm, unsigned long addr);
42 42
43 const char *processor_modes[] = { "USER_26", "FIQ_26" , "IRQ_26" , "SVC_26" }; 43 const char *processor_modes[] = { "USER_26", "FIQ_26" , "IRQ_26" , "SVC_26" };
44 44
45 static const char *handler[]= { "prefetch abort", "data abort", "address exception", "interrupt", "*bad reason*"}; 45 static const char *handler[]= { "prefetch abort", "data abort", "address exception", "interrupt", "*bad reason*"};
46 46
47 /* 47 /*
48 * Stack pointers should always be within the kernel's view of 48 * Stack pointers should always be within the kernel's view of
49 * physical memory. If it is not there, then we can't dump 49 * physical memory. If it is not there, then we can't dump
50 * out any information relating to the stack. 50 * out any information relating to the stack.
51 */ 51 */
52 static int verify_stack(unsigned long sp) 52 static int verify_stack(unsigned long sp)
53 { 53 {
54 if (sp < PAGE_OFFSET || (sp > (unsigned long)high_memory && high_memory != 0)) 54 if (sp < PAGE_OFFSET || (sp > (unsigned long)high_memory && high_memory != 0))
55 return -EFAULT; 55 return -EFAULT;
56 56
57 return 0; 57 return 0;
58 } 58 }
59 59
60 /* 60 /*
61 * Dump out the contents of some memory nicely... 61 * Dump out the contents of some memory nicely...
62 */ 62 */
63 static void dump_mem(const char *str, unsigned long bottom, unsigned long top) 63 static void dump_mem(const char *str, unsigned long bottom, unsigned long top)
64 { 64 {
65 unsigned long p = bottom & ~31; 65 unsigned long p = bottom & ~31;
66 mm_segment_t fs; 66 mm_segment_t fs;
67 int i; 67 int i;
68 68
69 /* 69 /*
70 * We need to switch to kernel mode so that we can use __get_user 70 * We need to switch to kernel mode so that we can use __get_user
71 * to safely read from kernel space. Note that we now dump the 71 * to safely read from kernel space. Note that we now dump the
72 * code first, just in case the backtrace kills us. 72 * code first, just in case the backtrace kills us.
73 */ 73 */
74 fs = get_fs(); 74 fs = get_fs();
75 set_fs(KERNEL_DS); 75 set_fs(KERNEL_DS);
76 76
77 printk("%s", str); 77 printk("%s", str);
78 printk("(0x%08lx to 0x%08lx)\n", bottom, top); 78 printk("(0x%08lx to 0x%08lx)\n", bottom, top);
79 79
80 for (p = bottom & ~31; p < top;) { 80 for (p = bottom & ~31; p < top;) {
81 printk("%04lx: ", p & 0xffff); 81 printk("%04lx: ", p & 0xffff);
82 82
83 for (i = 0; i < 8; i++, p += 4) { 83 for (i = 0; i < 8; i++, p += 4) {
84 unsigned int val; 84 unsigned int val;
85 85
86 if (p < bottom || p >= top) 86 if (p < bottom || p >= top)
87 printk(" "); 87 printk(" ");
88 else { 88 else {
89 __get_user(val, (unsigned long *)p); 89 __get_user(val, (unsigned long *)p);
90 printk("%08x ", val); 90 printk("%08x ", val);
91 } 91 }
92 } 92 }
93 printk ("\n"); 93 printk ("\n");
94 } 94 }
95 95
96 set_fs(fs); 96 set_fs(fs);
97 } 97 }
98 98
99 static void dump_instr(struct pt_regs *regs) 99 static void dump_instr(struct pt_regs *regs)
100 { 100 {
101 unsigned long addr = instruction_pointer(regs); 101 unsigned long addr = instruction_pointer(regs);
102 const int width = 8; 102 const int width = 8;
103 mm_segment_t fs; 103 mm_segment_t fs;
104 int i; 104 int i;
105 105
106 /* 106 /*
107 * We need to switch to kernel mode so that we can use __get_user 107 * We need to switch to kernel mode so that we can use __get_user
108 * to safely read from kernel space. Note that we now dump the 108 * to safely read from kernel space. Note that we now dump the
109 * code first, just in case the backtrace kills us. 109 * code first, just in case the backtrace kills us.
110 */ 110 */
111 fs = get_fs(); 111 fs = get_fs();
112 set_fs(KERNEL_DS); 112 set_fs(KERNEL_DS);
113 113
114 printk("Code: "); 114 printk("Code: ");
115 for (i = -4; i < 1; i++) { 115 for (i = -4; i < 1; i++) {
116 unsigned int val, bad; 116 unsigned int val, bad;
117 117
118 bad = __get_user(val, &((u32 *)addr)[i]); 118 bad = __get_user(val, &((u32 *)addr)[i]);
119 119
120 if (!bad) 120 if (!bad)
121 printk(i == 0 ? "(%0*x) " : "%0*x ", width, val); 121 printk(i == 0 ? "(%0*x) " : "%0*x ", width, val);
122 else { 122 else {
123 printk("bad PC value."); 123 printk("bad PC value.");
124 break; 124 break;
125 } 125 }
126 } 126 }
127 printk("\n"); 127 printk("\n");
128 128
129 set_fs(fs); 129 set_fs(fs);
130 } 130 }
131 131
132 /*static*/ void __dump_stack(struct task_struct *tsk, unsigned long sp) 132 /*static*/ void __dump_stack(struct task_struct *tsk, unsigned long sp)
133 { 133 {
134 dump_mem("Stack: ", sp, 8192+(unsigned long)task_stack_page(tsk)); 134 dump_mem("Stack: ", sp, 8192+(unsigned long)task_stack_page(tsk));
135 } 135 }
136 136
137 void dump_stack(void) 137 void dump_stack(void)
138 { 138 {
139 #ifdef CONFIG_DEBUG_ERRORS 139 #ifdef CONFIG_DEBUG_ERRORS
140 __backtrace(); 140 __backtrace();
141 #endif 141 #endif
142 } 142 }
143 143
144 EXPORT_SYMBOL(dump_stack); 144 EXPORT_SYMBOL(dump_stack);
145 145
146 //FIXME - was a static fn 146 //FIXME - was a static fn
147 void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk) 147 void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk)
148 { 148 {
149 unsigned int fp; 149 unsigned int fp;
150 int ok = 1; 150 int ok = 1;
151 151
152 printk("Backtrace: "); 152 printk("Backtrace: ");
153 fp = regs->ARM_fp; 153 fp = regs->ARM_fp;
154 if (!fp) { 154 if (!fp) {
155 printk("no frame pointer"); 155 printk("no frame pointer");
156 ok = 0; 156 ok = 0;
157 } else if (verify_stack(fp)) { 157 } else if (verify_stack(fp)) {
158 printk("invalid frame pointer 0x%08x", fp); 158 printk("invalid frame pointer 0x%08x", fp);
159 ok = 0; 159 ok = 0;
160 } else if (fp < (unsigned long)end_of_stack(tsk)) 160 } else if (fp < (unsigned long)end_of_stack(tsk))
161 printk("frame pointer underflow"); 161 printk("frame pointer underflow");
162 printk("\n"); 162 printk("\n");
163 163
164 if (ok) 164 if (ok)
165 c_backtrace(fp, processor_mode(regs)); 165 c_backtrace(fp, processor_mode(regs));
166 } 166 }
167 167
168 /* FIXME - this is probably wrong.. */ 168 /* FIXME - this is probably wrong.. */
169 void show_stack(struct task_struct *task, unsigned long *sp) { 169 void show_stack(struct task_struct *task, unsigned long *sp) {
170 dump_mem("Stack: ", (unsigned long)sp, 8192+(unsigned long)task_stack_page(task)); 170 dump_mem("Stack: ", (unsigned long)sp, 8192+(unsigned long)task_stack_page(task));
171 } 171 }
172 172
173 DEFINE_SPINLOCK(die_lock); 173 DEFINE_SPINLOCK(die_lock);
174 174
175 /* 175 /*
176 * This function is protected against re-entrancy. 176 * This function is protected against re-entrancy.
177 */ 177 */
178 NORET_TYPE void die(const char *str, struct pt_regs *regs, int err) 178 NORET_TYPE void die(const char *str, struct pt_regs *regs, int err)
179 { 179 {
180 struct task_struct *tsk = current; 180 struct task_struct *tsk = current;
181 181
182 console_verbose(); 182 console_verbose();
183 spin_lock_irq(&die_lock); 183 spin_lock_irq(&die_lock);
184 184
185 printk("Internal error: %s: %x\n", str, err); 185 printk("Internal error: %s: %x\n", str, err);
186 printk("CPU: %d\n", smp_processor_id()); 186 printk("CPU: %d\n", smp_processor_id());
187 show_regs(regs); 187 show_regs(regs);
188 add_taint(TAINT_DIE);
188 printk("Process %s (pid: %d, stack limit = 0x%p)\n", 189 printk("Process %s (pid: %d, stack limit = 0x%p)\n",
189 current->comm, current->pid, end_of_stack(tsk)); 190 current->comm, current->pid, end_of_stack(tsk));
190 191
191 if (!user_mode(regs) || in_interrupt()) { 192 if (!user_mode(regs) || in_interrupt()) {
192 __dump_stack(tsk, (unsigned long)(regs + 1)); 193 __dump_stack(tsk, (unsigned long)(regs + 1));
193 dump_backtrace(regs, tsk); 194 dump_backtrace(regs, tsk);
194 dump_instr(regs); 195 dump_instr(regs);
195 } 196 }
196 while(1); /* never returns: the unlock and do_exit below are unreachable */ 197 while(1); /* never returns: the unlock and do_exit below are unreachable */
197 spin_unlock_irq(&die_lock); 198 spin_unlock_irq(&die_lock);
198 do_exit(SIGSEGV); 199 do_exit(SIGSEGV);
199 } 200 }
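The add_taint(TAINT_DIE) call is the new line from this commit: once any die() has run, every later oops or SysRq dump can say so. A minimal sketch of a consumer, assuming only the existing taint interfaces in <linux/kernel.h> (report_prior_oops() is our illustrative name):

    static void report_prior_oops(void)
    {
            /* print_tainted() renders TAINT_DIE as 'D' in the flag string */
            if (tainted & TAINT_DIE)
                    printk(KERN_WARNING "note: kernel %s\n", print_tainted());
    }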
200 201
201 void die_if_kernel(const char *str, struct pt_regs *regs, int err) 202 void die_if_kernel(const char *str, struct pt_regs *regs, int err)
202 { 203 {
203 if (user_mode(regs)) 204 if (user_mode(regs))
204 return; 205 return;
205 206
206 die(str, regs, err); 207 die(str, regs, err);
207 } 208 }
208 209
209 static DEFINE_MUTEX(undef_mutex); 210 static DEFINE_MUTEX(undef_mutex);
210 static int (*undef_hook)(struct pt_regs *); 211 static int (*undef_hook)(struct pt_regs *);
211 212
212 int request_undef_hook(int (*fn)(struct pt_regs *)) 213 int request_undef_hook(int (*fn)(struct pt_regs *))
213 { 214 {
214 int ret = -EBUSY; 215 int ret = -EBUSY;
215 216
216 mutex_lock(&undef_mutex); 217 mutex_lock(&undef_mutex);
217 if (undef_hook == NULL) { 218 if (undef_hook == NULL) {
218 undef_hook = fn; 219 undef_hook = fn;
219 ret = 0; 220 ret = 0;
220 } 221 }
221 mutex_unlock(&undef_mutex); 222 mutex_unlock(&undef_mutex);
222 223
223 return ret; 224 return ret;
224 } 225 }
225 226
226 int release_undef_hook(int (*fn)(struct pt_regs *)) 227 int release_undef_hook(int (*fn)(struct pt_regs *))
227 { 228 {
228 int ret = -EINVAL; 229 int ret = -EINVAL;
229 230
230 mutex_lock(&undef_mutex); 231 mutex_lock(&undef_mutex);
231 if (undef_hook == fn) { 232 if (undef_hook == fn) {
232 undef_hook = NULL; 233 undef_hook = NULL;
233 ret = 0; 234 ret = 0;
234 } 235 }
235 mutex_unlock(&undef_mutex); 236 mutex_unlock(&undef_mutex);
236 237
237 return ret; 238 return ret;
238 } 239 }
239 240
240 static int undefined_extension(struct pt_regs *regs, unsigned int op) 241 static int undefined_extension(struct pt_regs *regs, unsigned int op)
241 { 242 {
242 switch (op) { 243 switch (op) {
243 case 1: /* 0xde01 / 0x?7f001f0 */ 244 case 1: /* 0xde01 / 0x?7f001f0 */
244 ptrace_break(current, regs); 245 ptrace_break(current, regs);
245 return 0; 246 return 0;
246 } 247 }
247 return 1; 248 return 1;
248 } 249 }
249 250
250 asmlinkage void do_undefinstr(struct pt_regs *regs) 251 asmlinkage void do_undefinstr(struct pt_regs *regs)
251 { 252 {
252 siginfo_t info; 253 siginfo_t info;
253 void *pc; 254 void *pc;
254 255
255 regs->ARM_pc -= 4; 256 regs->ARM_pc -= 4;
256 257
257 pc = (unsigned long *)instruction_pointer(regs); /* strip PSR */ 258 pc = (unsigned long *)instruction_pointer(regs); /* strip PSR */
258 259
259 if (user_mode(regs)) { 260 if (user_mode(regs)) {
260 u32 instr; 261 u32 instr;
261 262
262 get_user(instr, (u32 *)pc); 263 get_user(instr, (u32 *)pc);
263 264
264 if ((instr & 0x0fff00ff) == 0x07f000f0 && 265 if ((instr & 0x0fff00ff) == 0x07f000f0 &&
265 undefined_extension(regs, (instr >> 8) & 255) == 0) { 266 undefined_extension(regs, (instr >> 8) & 255) == 0) {
266 regs->ARM_pc += 4; 267 regs->ARM_pc += 4;
267 return; 268 return;
268 } 269 }
269 } else { 270 } else {
270 if (undef_hook && undef_hook(regs) == 0) { 271 if (undef_hook && undef_hook(regs) == 0) {
271 regs->ARM_pc += 4; 272 regs->ARM_pc += 4;
272 return; 273 return;
273 } 274 }
274 } 275 }
275 276
276 #ifdef CONFIG_DEBUG_USER 277 #ifdef CONFIG_DEBUG_USER
277 printk(KERN_INFO "%s (%d): undefined instruction: pc=%p\n", 278 printk(KERN_INFO "%s (%d): undefined instruction: pc=%p\n",
278 current->comm, current->pid, pc); 279 current->comm, current->pid, pc);
279 dump_instr(regs); 280 dump_instr(regs);
280 #endif 281 #endif
281 282
282 current->thread.error_code = 0; 283 current->thread.error_code = 0;
283 current->thread.trap_no = 6; 284 current->thread.trap_no = 6;
284 285
285 info.si_signo = SIGILL; 286 info.si_signo = SIGILL;
286 info.si_errno = 0; 287 info.si_errno = 0;
287 info.si_code = ILL_ILLOPC; 288 info.si_code = ILL_ILLOPC;
288 info.si_addr = pc; 289 info.si_addr = pc;
289 290
290 force_sig_info(SIGILL, &info, current); 291 force_sig_info(SIGILL, &info, current);
291 292
292 die_if_kernel("Oops - undefined instruction", regs, 0); 293 die_if_kernel("Oops - undefined instruction", regs, 0);
293 } 294 }
294 295
295 asmlinkage void do_excpt(unsigned long address, struct pt_regs *regs, int mode) 296 asmlinkage void do_excpt(unsigned long address, struct pt_regs *regs, int mode)
296 { 297 {
297 siginfo_t info; 298 siginfo_t info;
298 299
299 #ifdef CONFIG_DEBUG_USER 300 #ifdef CONFIG_DEBUG_USER
300 printk(KERN_INFO "%s (%d): address exception: pc=%08lx\n", 301 printk(KERN_INFO "%s (%d): address exception: pc=%08lx\n",
301 current->comm, current->pid, instruction_pointer(regs)); 302 current->comm, current->pid, instruction_pointer(regs));
302 dump_instr(regs); 303 dump_instr(regs);
303 #endif 304 #endif
304 305
305 current->thread.error_code = 0; 306 current->thread.error_code = 0;
306 current->thread.trap_no = 11; 307 current->thread.trap_no = 11;
307 308
308 info.si_signo = SIGBUS; 309 info.si_signo = SIGBUS;
309 info.si_errno = 0; 310 info.si_errno = 0;
310 info.si_code = BUS_ADRERR; 311 info.si_code = BUS_ADRERR;
311 info.si_addr = (void *)address; 312 info.si_addr = (void *)address;
312 313
313 force_sig_info(SIGBUS, &info, current); 314 force_sig_info(SIGBUS, &info, current);
314 315
315 die_if_kernel("Oops - address exception", regs, mode); 316 die_if_kernel("Oops - address exception", regs, mode);
316 } 317 }
317 318
318 asmlinkage void do_unexp_fiq (struct pt_regs *regs) 319 asmlinkage void do_unexp_fiq (struct pt_regs *regs)
319 { 320 {
320 #ifndef CONFIG_IGNORE_FIQ 321 #ifndef CONFIG_IGNORE_FIQ
321 printk("Hmm. Unexpected FIQ received, but trying to continue\n"); 322 printk("Hmm. Unexpected FIQ received, but trying to continue\n");
322 printk("You may have a hardware problem...\n"); 323 printk("You may have a hardware problem...\n");
323 #endif 324 #endif
324 } 325 }
325 326
326 /* 327 /*
327 * bad_mode handles the impossible case in the vectors. If you see one of 328 * bad_mode handles the impossible case in the vectors. If you see one of
328 * these, then it's extremely serious, and could mean you have buggy hardware. 329 * these, then it's extremely serious, and could mean you have buggy hardware.
329 * It never returns, and never tries to sync. We hope that we can at least 330 * It never returns, and never tries to sync. We hope that we can at least
330 * dump out some state information... 331 * dump out some state information...
331 */ 332 */
332 asmlinkage void bad_mode(struct pt_regs *regs, int reason, int proc_mode) 333 asmlinkage void bad_mode(struct pt_regs *regs, int reason, int proc_mode)
333 { 334 {
334 unsigned int vectors = vectors_base(); 335 unsigned int vectors = vectors_base();
335 336
336 console_verbose(); 337 console_verbose();
337 338
338 printk(KERN_CRIT "Bad mode in %s handler detected: mode %s\n", 339 printk(KERN_CRIT "Bad mode in %s handler detected: mode %s\n",
339 handler[reason<5?reason:4], processor_modes[proc_mode]); 340 handler[reason<5?reason:4], processor_modes[proc_mode]);
340 341
341 /* 342 /*
342 * Dump out the vectors and stub routines. Maybe a better solution 343 * Dump out the vectors and stub routines. Maybe a better solution
343 * would be to dump them out only if we detect that they are corrupted. 344 * would be to dump them out only if we detect that they are corrupted.
344 */ 345 */
345 dump_mem(KERN_CRIT "Vectors: ", vectors, vectors + 0x40); 346 dump_mem(KERN_CRIT "Vectors: ", vectors, vectors + 0x40);
346 dump_mem(KERN_CRIT "Stubs: ", vectors + 0x200, vectors + 0x4b8); 347 dump_mem(KERN_CRIT "Stubs: ", vectors + 0x200, vectors + 0x4b8);
347 348
348 die("Oops", regs, 0); 349 die("Oops", regs, 0);
349 local_irq_disable(); 350 local_irq_disable();
350 panic("bad mode"); 351 panic("bad mode");
351 } 352 }
352 353
353 static int bad_syscall(int n, struct pt_regs *regs) 354 static int bad_syscall(int n, struct pt_regs *regs)
354 { 355 {
355 struct thread_info *thread = current_thread_info(); 356 struct thread_info *thread = current_thread_info();
356 siginfo_t info; 357 siginfo_t info;
357 358
358 if (current->personality != PER_LINUX && thread->exec_domain->handler) { 359 if (current->personality != PER_LINUX && thread->exec_domain->handler) {
359 thread->exec_domain->handler(n, regs); 360 thread->exec_domain->handler(n, regs);
360 return regs->ARM_r0; 361 return regs->ARM_r0;
361 } 362 }
362 363
363 #ifdef CONFIG_DEBUG_USER 364 #ifdef CONFIG_DEBUG_USER
364 printk(KERN_ERR "[%d] %s: obsolete system call %08x.\n", 365 printk(KERN_ERR "[%d] %s: obsolete system call %08x.\n",
365 current->pid, current->comm, n); 366 current->pid, current->comm, n);
366 dump_instr(regs); 367 dump_instr(regs);
367 #endif 368 #endif
368 369
369 info.si_signo = SIGILL; 370 info.si_signo = SIGILL;
370 info.si_errno = 0; 371 info.si_errno = 0;
371 info.si_code = ILL_ILLTRP; 372 info.si_code = ILL_ILLTRP;
372 info.si_addr = (void *)instruction_pointer(regs) - 4; 373 info.si_addr = (void *)instruction_pointer(regs) - 4;
373 374
374 force_sig_info(SIGILL, &info, current); 375 force_sig_info(SIGILL, &info, current);
375 die_if_kernel("Oops", regs, n); 376 die_if_kernel("Oops", regs, n);
376 return regs->ARM_r0; 377 return regs->ARM_r0;
377 } 378 }
378 379
379 static inline void 380 static inline void
380 do_cache_op(unsigned long start, unsigned long end, int flags) 381 do_cache_op(unsigned long start, unsigned long end, int flags)
381 { 382 {
382 struct vm_area_struct *vma; 383 struct vm_area_struct *vma;
383 384
384 if (end < start) 385 if (end < start)
385 return; 386 return;
386 387
387 vma = find_vma(current->active_mm, start); 388 vma = find_vma(current->active_mm, start);
388 if (vma && vma->vm_start < end) { 389 if (vma && vma->vm_start < end) {
389 if (start < vma->vm_start) 390 if (start < vma->vm_start)
390 start = vma->vm_start; 391 start = vma->vm_start;
391 if (end > vma->vm_end) 392 if (end > vma->vm_end)
392 end = vma->vm_end; 393 end = vma->vm_end;
393 } 394 }
394 } 395 }
395 396
396 /* 397 /*
397 * Handle all unrecognised system calls. 398 * Handle all unrecognised system calls.
398 * 0x9f0000 - 0x9fffff are some more esoteric system calls 399 * 0x9f0000 - 0x9fffff are some more esoteric system calls
399 */ 400 */
400 #define NR(x) ((__ARM_NR_##x) - __ARM_NR_BASE) 401 #define NR(x) ((__ARM_NR_##x) - __ARM_NR_BASE)
401 asmlinkage int arm_syscall(int no, struct pt_regs *regs) 402 asmlinkage int arm_syscall(int no, struct pt_regs *regs)
402 { 403 {
403 siginfo_t info; 404 siginfo_t info;
404 405
405 if ((no >> 16) != 0x9f) 406 if ((no >> 16) != 0x9f)
406 return bad_syscall(no, regs); 407 return bad_syscall(no, regs);
407 408
408 switch (no & 0xffff) { 409 switch (no & 0xffff) {
409 case 0: /* branch through 0 */ 410 case 0: /* branch through 0 */
410 info.si_signo = SIGSEGV; 411 info.si_signo = SIGSEGV;
411 info.si_errno = 0; 412 info.si_errno = 0;
412 info.si_code = SEGV_MAPERR; 413 info.si_code = SEGV_MAPERR;
413 info.si_addr = NULL; 414 info.si_addr = NULL;
414 415
415 force_sig_info(SIGSEGV, &info, current); 416 force_sig_info(SIGSEGV, &info, current);
416 417
417 die_if_kernel("branch through zero", regs, 0); 418 die_if_kernel("branch through zero", regs, 0);
418 return 0; 419 return 0;
419 420
420 case NR(breakpoint): /* SWI BREAK_POINT */ 421 case NR(breakpoint): /* SWI BREAK_POINT */
421 ptrace_break(current, regs); 422 ptrace_break(current, regs);
422 return regs->ARM_r0; 423 return regs->ARM_r0;
423 424
424 case NR(cacheflush): 425 case NR(cacheflush):
425 return 0; 426 return 0;
426 427
427 case NR(usr26): 428 case NR(usr26):
428 break; 429 break;
429 430
430 default: 431 default:
431 /* Calls 9f00xx..9f07ff are defined to return -ENOSYS 432 /* Calls 9f00xx..9f07ff are defined to return -ENOSYS
432 if not implemented, rather than raising SIGILL. This 433 if not implemented, rather than raising SIGILL. This
433 way the calling program can gracefully determine whether 434 way the calling program can gracefully determine whether
434 a feature is supported. */ 435 a feature is supported. */
435 if (no <= 0x7ff) 436 if (no <= 0x7ff)
436 return -ENOSYS; 437 return -ENOSYS;
437 break; 438 break;
438 } 439 }
439 #ifdef CONFIG_DEBUG_USER 440 #ifdef CONFIG_DEBUG_USER
440 /* 441 /*
441 * experience shows that these seem to indicate that 442 * experience shows that these seem to indicate that
442 * something catastrophic has happened 443 * something catastrophic has happened
443 */ 444 */
444 printk("[%d] %s: arm syscall %d\n", current->pid, current->comm, no); 445 printk("[%d] %s: arm syscall %d\n", current->pid, current->comm, no);
445 dump_instr(regs); 446 dump_instr(regs);
446 if (user_mode(regs)) { 447 if (user_mode(regs)) {
447 show_regs(regs); 448 show_regs(regs);
448 c_backtrace(regs->ARM_fp, processor_mode(regs)); 449 c_backtrace(regs->ARM_fp, processor_mode(regs));
449 } 450 }
450 #endif 451 #endif
451 info.si_signo = SIGILL; 452 info.si_signo = SIGILL;
452 info.si_errno = 0; 453 info.si_errno = 0;
453 info.si_code = ILL_ILLTRP; 454 info.si_code = ILL_ILLTRP;
454 info.si_addr = (void *)instruction_pointer(regs) - 4; 455 info.si_addr = (void *)instruction_pointer(regs) - 4;
455 456
456 force_sig_info(SIGILL, &info, current); 457 force_sig_info(SIGILL, &info, current);
457 die_if_kernel("Oops", regs, no); 458 die_if_kernel("Oops", regs, no);
458 return 0; 459 return 0;
459 } 460 }
460 461
461 void __bad_xchg(volatile void *ptr, int size) 462 void __bad_xchg(volatile void *ptr, int size)
462 { 463 {
463 printk("xchg: bad data size: pc 0x%p, ptr 0x%p, size %d\n", 464 printk("xchg: bad data size: pc 0x%p, ptr 0x%p, size %d\n",
464 __builtin_return_address(0), ptr, size); 465 __builtin_return_address(0), ptr, size);
465 BUG(); 466 BUG();
466 } 467 }
467 468
468 /* 469 /*
469 * A data abort trap was taken, but we did not handle the instruction. 470 * A data abort trap was taken, but we did not handle the instruction.
470 * Try to abort the user program, or panic if it was the kernel. 471 * Try to abort the user program, or panic if it was the kernel.
471 */ 472 */
472 asmlinkage void 473 asmlinkage void
473 baddataabort(int code, unsigned long instr, struct pt_regs *regs) 474 baddataabort(int code, unsigned long instr, struct pt_regs *regs)
474 { 475 {
475 unsigned long addr = instruction_pointer(regs); 476 unsigned long addr = instruction_pointer(regs);
476 siginfo_t info; 477 siginfo_t info;
477 478
478 #ifdef CONFIG_DEBUG_USER 479 #ifdef CONFIG_DEBUG_USER
479 printk(KERN_ERR "[%d] %s: bad data abort: code %d instr 0x%08lx\n", 480 printk(KERN_ERR "[%d] %s: bad data abort: code %d instr 0x%08lx\n",
480 current->pid, current->comm, code, instr); 481 current->pid, current->comm, code, instr);
481 dump_instr(regs); 482 dump_instr(regs);
482 show_pte(current->mm, addr); 483 show_pte(current->mm, addr);
483 #endif 484 #endif
484 485
485 info.si_signo = SIGILL; 486 info.si_signo = SIGILL;
486 info.si_errno = 0; 487 info.si_errno = 0;
487 info.si_code = ILL_ILLOPC; 488 info.si_code = ILL_ILLOPC;
488 info.si_addr = (void *)addr; 489 info.si_addr = (void *)addr;
489 490
490 force_sig_info(SIGILL, &info, current); 491 force_sig_info(SIGILL, &info, current);
491 die_if_kernel("unknown data abort code", regs, instr); 492 die_if_kernel("unknown data abort code", regs, instr);
492 } 493 }
493 494
494 volatile void __bug(const char *file, int line, void *data) 495 volatile void __bug(const char *file, int line, void *data)
495 { 496 {
496 printk(KERN_CRIT"kernel BUG at %s:%d!", file, line); 497 printk(KERN_CRIT"kernel BUG at %s:%d!", file, line);
497 if (data) 498 if (data)
498 printk(KERN_CRIT" - extra data = %p", data); 499 printk(KERN_CRIT" - extra data = %p", data);
499 printk("\n"); 500 printk("\n");
500 *(int *)0 = 0; 501 *(int *)0 = 0;
501 } 502 }
502 503
503 void __readwrite_bug(const char *fn) 504 void __readwrite_bug(const char *fn)
504 { 505 {
505 printk("%s called, but not implemented", fn); 506 printk("%s called, but not implemented", fn);
506 BUG(); 507 BUG();
507 } 508 }
508 509
509 void __pte_error(const char *file, int line, unsigned long val) 510 void __pte_error(const char *file, int line, unsigned long val)
510 { 511 {
511 printk("%s:%d: bad pte %08lx.\n", file, line, val); 512 printk("%s:%d: bad pte %08lx.\n", file, line, val);
512 } 513 }
513 514
514 void __pmd_error(const char *file, int line, unsigned long val) 515 void __pmd_error(const char *file, int line, unsigned long val)
515 { 516 {
516 printk("%s:%d: bad pmd %08lx.\n", file, line, val); 517 printk("%s:%d: bad pmd %08lx.\n", file, line, val);
517 } 518 }
518 519
519 void __pgd_error(const char *file, int line, unsigned long val) 520 void __pgd_error(const char *file, int line, unsigned long val)
520 { 521 {
521 printk("%s:%d: bad pgd %08lx.\n", file, line, val); 522 printk("%s:%d: bad pgd %08lx.\n", file, line, val);
522 } 523 }
523 524
524 asmlinkage void __div0(void) 525 asmlinkage void __div0(void)
525 { 526 {
526 printk("Division by zero in kernel.\n"); 527 printk("Division by zero in kernel.\n");
527 dump_stack(); 528 dump_stack();
528 } 529 }
529 530
530 void abort(void) 531 void abort(void)
531 { 532 {
532 BUG(); 533 BUG();
533 534
534 /* if that doesn't kill us, halt */ 535 /* if that doesn't kill us, halt */
535 panic("Oops failed to kill thread"); 536 panic("Oops failed to kill thread");
536 } 537 }
537 538
538 void __init trap_init(void) 539 void __init trap_init(void)
539 { 540 {
540 extern void __trap_init(unsigned long); 541 extern void __trap_init(unsigned long);
541 unsigned long base = vectors_base(); 542 unsigned long base = vectors_base();
542 543
543 __trap_init(base); 544 __trap_init(base);
544 if (base != 0) 545 if (base != 0)
545 printk(KERN_DEBUG "Relocating machine vectors to 0x%08lx\n", 546 printk(KERN_DEBUG "Relocating machine vectors to 0x%08lx\n",
546 base); 547 base);
547 } 548 }
548 549
arch/avr32/kernel/traps.c
1 /* 1 /*
2 * Copyright (C) 2004-2006 Atmel Corporation 2 * Copyright (C) 2004-2006 Atmel Corporation
3 * 3 *
4 * This program is free software; you can redistribute it and/or modify 4 * This program is free software; you can redistribute it and/or modify
5 * it under the terms of the GNU General Public License version 2 as 5 * it under the terms of the GNU General Public License version 2 as
6 * published by the Free Software Foundation. 6 * published by the Free Software Foundation.
7 */ 7 */
8 8
9 #include <linux/bug.h> 9 #include <linux/bug.h>
10 #include <linux/init.h> 10 #include <linux/init.h>
11 #include <linux/kallsyms.h> 11 #include <linux/kallsyms.h>
12 #include <linux/module.h> 12 #include <linux/module.h>
13 #include <linux/notifier.h> 13 #include <linux/notifier.h>
14 #include <linux/sched.h> 14 #include <linux/sched.h>
15 #include <linux/uaccess.h> 15 #include <linux/uaccess.h>
16 16
17 #include <asm/addrspace.h> 17 #include <asm/addrspace.h>
18 #include <asm/mmu_context.h> 18 #include <asm/mmu_context.h>
19 #include <asm/ocd.h> 19 #include <asm/ocd.h>
20 #include <asm/sysreg.h> 20 #include <asm/sysreg.h>
21 #include <asm/traps.h> 21 #include <asm/traps.h>
22 22
23 static DEFINE_SPINLOCK(die_lock); 23 static DEFINE_SPINLOCK(die_lock);
24 24
25 void NORET_TYPE die(const char *str, struct pt_regs *regs, long err) 25 void NORET_TYPE die(const char *str, struct pt_regs *regs, long err)
26 { 26 {
27 static int die_counter; 27 static int die_counter;
28 28
29 console_verbose(); 29 console_verbose();
30 spin_lock_irq(&die_lock); 30 spin_lock_irq(&die_lock);
31 bust_spinlocks(1); 31 bust_spinlocks(1);
32 32
33 printk(KERN_ALERT "Oops: %s, sig: %ld [#%d]\n" KERN_EMERG, 33 printk(KERN_ALERT "Oops: %s, sig: %ld [#%d]\n" KERN_EMERG,
34 str, err, ++die_counter); 34 str, err, ++die_counter);
35 #ifdef CONFIG_PREEMPT 35 #ifdef CONFIG_PREEMPT
36 printk("PREEMPT "); 36 printk("PREEMPT ");
37 #endif 37 #endif
38 #ifdef CONFIG_FRAME_POINTER 38 #ifdef CONFIG_FRAME_POINTER
39 printk("FRAME_POINTER "); 39 printk("FRAME_POINTER ");
40 #endif 40 #endif
41 if (current_cpu_data.features & AVR32_FEATURE_OCD) { 41 if (current_cpu_data.features & AVR32_FEATURE_OCD) {
42 unsigned long did = __mfdr(DBGREG_DID); 42 unsigned long did = __mfdr(DBGREG_DID);
43 printk("chip: 0x%03lx:0x%04lx rev %lu\n", 43 printk("chip: 0x%03lx:0x%04lx rev %lu\n",
44 (did >> 1) & 0x7ff, 44 (did >> 1) & 0x7ff,
45 (did >> 12) & 0x7fff, 45 (did >> 12) & 0x7fff,
46 (did >> 28) & 0xf); 46 (did >> 28) & 0xf);
47 } else { 47 } else {
48 printk("cpu: arch %u r%u / core %u r%u\n", 48 printk("cpu: arch %u r%u / core %u r%u\n",
49 current_cpu_data.arch_type, 49 current_cpu_data.arch_type,
50 current_cpu_data.arch_revision, 50 current_cpu_data.arch_revision,
51 current_cpu_data.cpu_type, 51 current_cpu_data.cpu_type,
52 current_cpu_data.cpu_revision); 52 current_cpu_data.cpu_revision);
53 } 53 }
54 54
55 print_modules(); 55 print_modules();
56 show_regs_log_lvl(regs, KERN_EMERG); 56 show_regs_log_lvl(regs, KERN_EMERG);
57 show_stack_log_lvl(current, regs->sp, regs, KERN_EMERG); 57 show_stack_log_lvl(current, regs->sp, regs, KERN_EMERG);
58 bust_spinlocks(0); 58 bust_spinlocks(0);
59 add_taint(TAINT_DIE);
59 spin_unlock_irq(&die_lock); 60 spin_unlock_irq(&die_lock);
60 61
61 if (in_interrupt()) 62 if (in_interrupt())
62 panic("Fatal exception in interrupt"); 63 panic("Fatal exception in interrupt");
63 64
64 if (panic_on_oops) 65 if (panic_on_oops)
65 panic("Fatal exception"); 66 panic("Fatal exception");
66 67
67 do_exit(err); 68 do_exit(err);
68 } 69 }
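Note the ordering the avr32 die() now has: taint is recorded before the lock is dropped and before any escalation. Whether an oops escalates at all is policy, controlled by the standard sysctl; a sketch of enabling it from user space:

    /* panic_on_oops is the ordinary kernel.panic_on_oops sysctl:
     *
     *      echo 1 > /proc/sys/kernel/panic_on_oops
     *
     * with it set, a die() that is not already in interrupt context
     * ends in panic("Fatal exception") instead of do_exit(err).
     */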
69 70
70 void _exception(long signr, struct pt_regs *regs, int code, 71 void _exception(long signr, struct pt_regs *regs, int code,
71 unsigned long addr) 72 unsigned long addr)
72 { 73 {
73 siginfo_t info; 74 siginfo_t info;
74 75
75 if (!user_mode(regs)) 76 if (!user_mode(regs))
76 die("Unhandled exception in kernel mode", regs, signr); 77 die("Unhandled exception in kernel mode", regs, signr);
77 78
78 memset(&info, 0, sizeof(info)); 79 memset(&info, 0, sizeof(info));
79 info.si_signo = signr; 80 info.si_signo = signr;
80 info.si_code = code; 81 info.si_code = code;
81 info.si_addr = (void __user *)addr; 82 info.si_addr = (void __user *)addr;
82 force_sig_info(signr, &info, current); 83 force_sig_info(signr, &info, current);
83 84
84 /* 85 /*
85 * Init gets no signals that it doesn't have a handler for. 86 * Init gets no signals that it doesn't have a handler for.
86 * That's all very well, but if it has caused a synchronous 87 * That's all very well, but if it has caused a synchronous
87 * exception and we ignore the resulting signal, it will just 88 * exception and we ignore the resulting signal, it will just
88 * generate the same exception over and over again and we get 89 * generate the same exception over and over again and we get
89 * nowhere. Better to kill it and let the kernel panic. 90 * nowhere. Better to kill it and let the kernel panic.
90 */ 91 */
91 if (is_init(current)) { 92 if (is_init(current)) {
92 __sighandler_t handler; 93 __sighandler_t handler;
93 94
94 spin_lock_irq(&current->sighand->siglock); 95 spin_lock_irq(&current->sighand->siglock);
95 handler = current->sighand->action[signr-1].sa.sa_handler; 96 handler = current->sighand->action[signr-1].sa.sa_handler;
96 spin_unlock_irq(&current->sighand->siglock); 97 spin_unlock_irq(&current->sighand->siglock);
97 if (handler == SIG_DFL) { 98 if (handler == SIG_DFL) {
98 /* init has generated a synchronous exception 99 /* init has generated a synchronous exception
99 and it doesn't have a handler for the signal */ 100 and it doesn't have a handler for the signal */
100 printk(KERN_CRIT "init has generated signal %ld " 101 printk(KERN_CRIT "init has generated signal %ld "
101 "but has no handler for it\n", signr); 102 "but has no handler for it\n", signr);
102 do_exit(signr); 103 do_exit(signr);
103 } 104 }
104 } 105 }
105 } 106 }
106 107
107 asmlinkage void do_nmi(unsigned long ecr, struct pt_regs *regs) 108 asmlinkage void do_nmi(unsigned long ecr, struct pt_regs *regs)
108 { 109 {
109 printk(KERN_ALERT "Got Non-Maskable Interrupt, dumping regs\n"); 110 printk(KERN_ALERT "Got Non-Maskable Interrupt, dumping regs\n");
110 show_regs_log_lvl(regs, KERN_ALERT); 111 show_regs_log_lvl(regs, KERN_ALERT);
111 show_stack_log_lvl(current, regs->sp, regs, KERN_ALERT); 112 show_stack_log_lvl(current, regs->sp, regs, KERN_ALERT);
112 } 113 }
113 114
114 asmlinkage void do_critical_exception(unsigned long ecr, struct pt_regs *regs) 115 asmlinkage void do_critical_exception(unsigned long ecr, struct pt_regs *regs)
115 { 116 {
116 die("Critical exception", regs, SIGKILL); 117 die("Critical exception", regs, SIGKILL);
117 } 118 }
118 119
119 asmlinkage void do_address_exception(unsigned long ecr, struct pt_regs *regs) 120 asmlinkage void do_address_exception(unsigned long ecr, struct pt_regs *regs)
120 { 121 {
121 _exception(SIGBUS, regs, BUS_ADRALN, regs->pc); 122 _exception(SIGBUS, regs, BUS_ADRALN, regs->pc);
122 } 123 }
123 124
124 /* This way of handling undefined instructions is stolen from ARM */ 125 /* This way of handling undefined instructions is stolen from ARM */
125 static LIST_HEAD(undef_hook); 126 static LIST_HEAD(undef_hook);
126 static DEFINE_SPINLOCK(undef_lock); 127 static DEFINE_SPINLOCK(undef_lock);
127 128
128 void register_undef_hook(struct undef_hook *hook) 129 void register_undef_hook(struct undef_hook *hook)
129 { 130 {
130 spin_lock_irq(&undef_lock); 131 spin_lock_irq(&undef_lock);
131 list_add(&hook->node, &undef_hook); 132 list_add(&hook->node, &undef_hook);
132 spin_unlock_irq(&undef_lock); 133 spin_unlock_irq(&undef_lock);
133 } 134 }
134 135
135 void unregister_undef_hook(struct undef_hook *hook) 136 void unregister_undef_hook(struct undef_hook *hook)
136 { 137 {
137 spin_lock_irq(&undef_lock); 138 spin_lock_irq(&undef_lock);
138 list_del(&hook->node); 139 list_del(&hook->node);
139 spin_unlock_irq(&undef_lock); 140 spin_unlock_irq(&undef_lock);
140 } 141 }
141 142
142 static int do_cop_absent(u32 insn) 143 static int do_cop_absent(u32 insn)
143 { 144 {
144 int cop_nr; 145 int cop_nr;
145 u32 cpucr; 146 u32 cpucr;
146 147
147 if ((insn & 0xfdf00000) == 0xf1900000) 148 if ((insn & 0xfdf00000) == 0xf1900000)
148 /* LDC0 */ 149 /* LDC0 */
149 cop_nr = 0; 150 cop_nr = 0;
150 else 151 else
151 cop_nr = (insn >> 13) & 0x7; 152 cop_nr = (insn >> 13) & 0x7;
152 153
153 /* Try enabling the coprocessor */ 154 /* Try enabling the coprocessor */
154 cpucr = sysreg_read(CPUCR); 155 cpucr = sysreg_read(CPUCR);
155 cpucr |= (1 << (24 + cop_nr)); 156 cpucr |= (1 << (24 + cop_nr));
156 sysreg_write(CPUCR, cpucr); 157 sysreg_write(CPUCR, cpucr);
157 158
158 cpucr = sysreg_read(CPUCR); 159 cpucr = sysreg_read(CPUCR);
159 if (!(cpucr & (1 << (24 + cop_nr)))) 160 if (!(cpucr & (1 << (24 + cop_nr))))
160 return -ENODEV; 161 return -ENODEV;
161 162
162 return 0; 163 return 0;
163 } 164 }
164 165
165 int is_valid_bugaddr(unsigned long pc) 166 int is_valid_bugaddr(unsigned long pc)
166 { 167 {
167 unsigned short opcode; 168 unsigned short opcode;
168 169
169 if (pc < PAGE_OFFSET) 170 if (pc < PAGE_OFFSET)
170 return 0; 171 return 0;
171 if (probe_kernel_address((u16 *)pc, opcode)) 172 if (probe_kernel_address((u16 *)pc, opcode))
172 return 0; 173 return 0;
173 174
174 return opcode == AVR32_BUG_OPCODE; 175 return opcode == AVR32_BUG_OPCODE;
175 } 176 }
176 177
177 asmlinkage void do_illegal_opcode(unsigned long ecr, struct pt_regs *regs) 178 asmlinkage void do_illegal_opcode(unsigned long ecr, struct pt_regs *regs)
178 { 179 {
179 u32 insn; 180 u32 insn;
180 struct undef_hook *hook; 181 struct undef_hook *hook;
181 void __user *pc; 182 void __user *pc;
182 long code; 183 long code;
183 184
184 if (!user_mode(regs) && (ecr == ECR_ILLEGAL_OPCODE)) { 185 if (!user_mode(regs) && (ecr == ECR_ILLEGAL_OPCODE)) {
185 enum bug_trap_type type; 186 enum bug_trap_type type;
186 187
187 type = report_bug(regs->pc, regs); 188 type = report_bug(regs->pc, regs);
188 switch (type) { 189 switch (type) {
189 case BUG_TRAP_TYPE_NONE: 190 case BUG_TRAP_TYPE_NONE:
190 break; 191 break;
191 case BUG_TRAP_TYPE_WARN: 192 case BUG_TRAP_TYPE_WARN:
192 regs->pc += 2; 193 regs->pc += 2;
193 return; 194 return;
194 case BUG_TRAP_TYPE_BUG: 195 case BUG_TRAP_TYPE_BUG:
195 die("Kernel BUG", regs, SIGKILL); 196 die("Kernel BUG", regs, SIGKILL);
196 } 197 }
197 } 198 }
198 199
199 local_irq_enable(); 200 local_irq_enable();
200 201
201 if (user_mode(regs)) { 202 if (user_mode(regs)) {
202 pc = (void __user *)instruction_pointer(regs); 203 pc = (void __user *)instruction_pointer(regs);
203 if (get_user(insn, (u32 __user *)pc)) 204 if (get_user(insn, (u32 __user *)pc))
204 goto invalid_area; 205 goto invalid_area;
205 206
206 if (ecr == ECR_COPROC_ABSENT && !do_cop_absent(insn)) 207 if (ecr == ECR_COPROC_ABSENT && !do_cop_absent(insn))
207 return; 208 return;
208 209
209 spin_lock_irq(&undef_lock); 210 spin_lock_irq(&undef_lock);
210 list_for_each_entry(hook, &undef_hook, node) { 211 list_for_each_entry(hook, &undef_hook, node) {
211 if ((insn & hook->insn_mask) == hook->insn_val) { 212 if ((insn & hook->insn_mask) == hook->insn_val) {
212 if (hook->fn(regs, insn) == 0) { 213 if (hook->fn(regs, insn) == 0) {
213 spin_unlock_irq(&undef_lock); 214 spin_unlock_irq(&undef_lock);
214 return; 215 return;
215 } 216 }
216 } 217 }
217 } 218 }
218 spin_unlock_irq(&undef_lock); 219 spin_unlock_irq(&undef_lock);
219 } 220 }
220 221
221 switch (ecr) { 222 switch (ecr) {
222 case ECR_PRIVILEGE_VIOLATION: 223 case ECR_PRIVILEGE_VIOLATION:
223 code = ILL_PRVOPC; 224 code = ILL_PRVOPC;
224 break; 225 break;
225 case ECR_COPROC_ABSENT: 226 case ECR_COPROC_ABSENT:
226 code = ILL_COPROC; 227 code = ILL_COPROC;
227 break; 228 break;
228 default: 229 default:
229 code = ILL_ILLOPC; 230 code = ILL_ILLOPC;
230 break; 231 break;
231 } 232 }
232 233
233 _exception(SIGILL, regs, code, regs->pc); 234 _exception(SIGILL, regs, code, regs->pc);
234 return; 235 return;
235 236
236 invalid_area: 237 invalid_area:
237 _exception(SIGSEGV, regs, SEGV_MAPERR, regs->pc); 238 _exception(SIGSEGV, regs, SEGV_MAPERR, regs->pc);
238 } 239 }
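The hook walk above mirrors the ARM mechanism it credits; a hedged sketch of registering one (the mask/value below are placeholder encodings, and my_emulate()/my_hook are our names):

    static int my_emulate(struct pt_regs *regs, u32 insn)
    {
            regs->pc += 4;          /* placeholder: skip the insn we handled */
            return 0;               /* 0 = handled, stops the hook walk */
    }

    static struct undef_hook my_hook = {
            .insn_mask = 0xffff0000,        /* placeholder encoding */
            .insn_val  = 0xdead0000,
            .fn        = my_emulate,
    };

    /* register_undef_hook(&my_hook); e.g. from an initcall */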
239 240
240 asmlinkage void do_fpe(unsigned long ecr, struct pt_regs *regs) 241 asmlinkage void do_fpe(unsigned long ecr, struct pt_regs *regs)
241 { 242 {
242 /* We have no FPU yet */ 243 /* We have no FPU yet */
243 _exception(SIGILL, regs, ILL_COPROC, regs->pc); 244 _exception(SIGILL, regs, ILL_COPROC, regs->pc);
244 } 245 }
245 246
246 247
247 void __init trap_init(void) 248 void __init trap_init(void)
248 { 249 {
249 250
250 } 251 }
251 252
arch/i386/kernel/traps.c
1 /* 1 /*
2 * linux/arch/i386/traps.c 2 * linux/arch/i386/traps.c
3 * 3 *
4 * Copyright (C) 1991, 1992 Linus Torvalds 4 * Copyright (C) 1991, 1992 Linus Torvalds
5 * 5 *
6 * Pentium III FXSR, SSE support 6 * Pentium III FXSR, SSE support
7 * Gareth Hughes <gareth@valinux.com>, May 2000 7 * Gareth Hughes <gareth@valinux.com>, May 2000
8 */ 8 */
9 9
10 /* 10 /*
11 * 'Traps.c' handles hardware traps and faults after we have saved some 11 * 'Traps.c' handles hardware traps and faults after we have saved some
12 * state in 'asm.s'. 12 * state in 'asm.s'.
13 */ 13 */
14 #include <linux/sched.h> 14 #include <linux/sched.h>
15 #include <linux/kernel.h> 15 #include <linux/kernel.h>
16 #include <linux/string.h> 16 #include <linux/string.h>
17 #include <linux/errno.h> 17 #include <linux/errno.h>
18 #include <linux/timer.h> 18 #include <linux/timer.h>
19 #include <linux/mm.h> 19 #include <linux/mm.h>
20 #include <linux/init.h> 20 #include <linux/init.h>
21 #include <linux/delay.h> 21 #include <linux/delay.h>
22 #include <linux/spinlock.h> 22 #include <linux/spinlock.h>
23 #include <linux/interrupt.h> 23 #include <linux/interrupt.h>
24 #include <linux/highmem.h> 24 #include <linux/highmem.h>
25 #include <linux/kallsyms.h> 25 #include <linux/kallsyms.h>
26 #include <linux/ptrace.h> 26 #include <linux/ptrace.h>
27 #include <linux/utsname.h> 27 #include <linux/utsname.h>
28 #include <linux/kprobes.h> 28 #include <linux/kprobes.h>
29 #include <linux/kexec.h> 29 #include <linux/kexec.h>
30 #include <linux/unwind.h> 30 #include <linux/unwind.h>
31 #include <linux/uaccess.h> 31 #include <linux/uaccess.h>
32 #include <linux/nmi.h> 32 #include <linux/nmi.h>
33 #include <linux/bug.h> 33 #include <linux/bug.h>
34 34
35 #ifdef CONFIG_EISA 35 #ifdef CONFIG_EISA
36 #include <linux/ioport.h> 36 #include <linux/ioport.h>
37 #include <linux/eisa.h> 37 #include <linux/eisa.h>
38 #endif 38 #endif
39 39
40 #ifdef CONFIG_MCA 40 #ifdef CONFIG_MCA
41 #include <linux/mca.h> 41 #include <linux/mca.h>
42 #endif 42 #endif
43 43
44 #include <asm/processor.h> 44 #include <asm/processor.h>
45 #include <asm/system.h> 45 #include <asm/system.h>
46 #include <asm/io.h> 46 #include <asm/io.h>
47 #include <asm/atomic.h> 47 #include <asm/atomic.h>
48 #include <asm/debugreg.h> 48 #include <asm/debugreg.h>
49 #include <asm/desc.h> 49 #include <asm/desc.h>
50 #include <asm/i387.h> 50 #include <asm/i387.h>
51 #include <asm/nmi.h> 51 #include <asm/nmi.h>
52 #include <asm/unwind.h> 52 #include <asm/unwind.h>
53 #include <asm/smp.h> 53 #include <asm/smp.h>
54 #include <asm/arch_hooks.h> 54 #include <asm/arch_hooks.h>
55 #include <linux/kdebug.h> 55 #include <linux/kdebug.h>
56 #include <asm/stacktrace.h> 56 #include <asm/stacktrace.h>
57 57
58 #include <linux/module.h> 58 #include <linux/module.h>
59 59
60 #include "mach_traps.h" 60 #include "mach_traps.h"
61 61
62 int panic_on_unrecovered_nmi; 62 int panic_on_unrecovered_nmi;
63 63
64 asmlinkage int system_call(void); 64 asmlinkage int system_call(void);
65 65
66 /* Do we ignore FPU interrupts ? */ 66 /* Do we ignore FPU interrupts ? */
67 char ignore_fpu_irq = 0; 67 char ignore_fpu_irq = 0;
68 68
69 /* 69 /*
70 * The IDT has to be page-aligned to simplify the Pentium 70 * The IDT has to be page-aligned to simplify the Pentium
71 * F0 0F bug workaround. We have a special link segment 71 * F0 0F bug workaround. We have a special link segment
72 * for this. 72 * for this.
73 */ 73 */
74 struct desc_struct idt_table[256] __attribute__((__section__(".data.idt"))) = { {0, 0}, }; 74 struct desc_struct idt_table[256] __attribute__((__section__(".data.idt"))) = { {0, 0}, };
75 75
76 asmlinkage void divide_error(void); 76 asmlinkage void divide_error(void);
77 asmlinkage void debug(void); 77 asmlinkage void debug(void);
78 asmlinkage void nmi(void); 78 asmlinkage void nmi(void);
79 asmlinkage void int3(void); 79 asmlinkage void int3(void);
80 asmlinkage void overflow(void); 80 asmlinkage void overflow(void);
81 asmlinkage void bounds(void); 81 asmlinkage void bounds(void);
82 asmlinkage void invalid_op(void); 82 asmlinkage void invalid_op(void);
83 asmlinkage void device_not_available(void); 83 asmlinkage void device_not_available(void);
84 asmlinkage void coprocessor_segment_overrun(void); 84 asmlinkage void coprocessor_segment_overrun(void);
85 asmlinkage void invalid_TSS(void); 85 asmlinkage void invalid_TSS(void);
86 asmlinkage void segment_not_present(void); 86 asmlinkage void segment_not_present(void);
87 asmlinkage void stack_segment(void); 87 asmlinkage void stack_segment(void);
88 asmlinkage void general_protection(void); 88 asmlinkage void general_protection(void);
89 asmlinkage void page_fault(void); 89 asmlinkage void page_fault(void);
90 asmlinkage void coprocessor_error(void); 90 asmlinkage void coprocessor_error(void);
91 asmlinkage void simd_coprocessor_error(void); 91 asmlinkage void simd_coprocessor_error(void);
92 asmlinkage void alignment_check(void); 92 asmlinkage void alignment_check(void);
93 asmlinkage void spurious_interrupt_bug(void); 93 asmlinkage void spurious_interrupt_bug(void);
94 asmlinkage void machine_check(void); 94 asmlinkage void machine_check(void);
95 95
96 int kstack_depth_to_print = 24; 96 int kstack_depth_to_print = 24;
97 static unsigned int code_bytes = 64; 97 static unsigned int code_bytes = 64;
98 98
99 static inline int valid_stack_ptr(struct thread_info *tinfo, void *p) 99 static inline int valid_stack_ptr(struct thread_info *tinfo, void *p)
100 { 100 {
101 return p > (void *)tinfo && 101 return p > (void *)tinfo &&
102 p < (void *)tinfo + THREAD_SIZE - 3; 102 p < (void *)tinfo + THREAD_SIZE - 3;
103 } 103 }
104 104
105 static inline unsigned long print_context_stack(struct thread_info *tinfo, 105 static inline unsigned long print_context_stack(struct thread_info *tinfo,
106 unsigned long *stack, unsigned long ebp, 106 unsigned long *stack, unsigned long ebp,
107 struct stacktrace_ops *ops, void *data) 107 struct stacktrace_ops *ops, void *data)
108 { 108 {
109 unsigned long addr; 109 unsigned long addr;
110 110
111 #ifdef CONFIG_FRAME_POINTER 111 #ifdef CONFIG_FRAME_POINTER
112 while (valid_stack_ptr(tinfo, (void *)ebp)) { 112 while (valid_stack_ptr(tinfo, (void *)ebp)) {
113 unsigned long new_ebp; 113 unsigned long new_ebp;
114 addr = *(unsigned long *)(ebp + 4); 114 addr = *(unsigned long *)(ebp + 4);
115 ops->address(data, addr); 115 ops->address(data, addr);
116 /* 116 /*
117 * break out of recursive entries (such as 117 * break out of recursive entries (such as
118 * end_of_stack_stop_unwind_function). Also, 118 * end_of_stack_stop_unwind_function). Also,
119 * we can never allow a frame pointer to 119 * we can never allow a frame pointer to
120 * move downwards! 120 * move downwards!
121 */ 121 */
122 new_ebp = *(unsigned long *)ebp; 122 new_ebp = *(unsigned long *)ebp;
123 if (new_ebp <= ebp) 123 if (new_ebp <= ebp)
124 break; 124 break;
125 ebp = new_ebp; 125 ebp = new_ebp;
126 } 126 }
127 #else 127 #else
128 while (valid_stack_ptr(tinfo, stack)) { 128 while (valid_stack_ptr(tinfo, stack)) {
129 addr = *stack++; 129 addr = *stack++;
130 if (__kernel_text_address(addr)) 130 if (__kernel_text_address(addr))
131 ops->address(data, addr); 131 ops->address(data, addr);
132 } 132 }
133 #endif 133 #endif
134 return ebp; 134 return ebp;
135 } 135 }
136 136
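The CONFIG_FRAME_POINTER walk above relies on the i386 frame layout: each
frame stores the caller's saved EBP at [ebp] and the return address at
[ebp+4], so following the chain of saved frame pointers yields one return
address per frame, and the new_ebp <= ebp check stops the walk as soon as
the chain stops climbing. A minimal user-space sketch of the same walk
(an illustration only; assumes GCC on i386 with frame pointers enabled):

    #include <stdio.h>

    /* Sketch: follow the EBP chain as print_context_stack() does.
     * Frame layout: [ebp] = saved ebp, [ebp + 4] = return address. */
    static void walk_frames(void)
    {
            unsigned long ebp;

            asm("movl %%ebp, %0" : "=r" (ebp));
            while (ebp) {
                    unsigned long new_ebp;

                    printf("return address: %08lx\n",
                           *(unsigned long *)(ebp + 4));
                    new_ebp = *(unsigned long *)ebp;
                    /* frame pointers may only move upwards */
                    if (new_ebp <= ebp)
                            break;
                    ebp = new_ebp;
            }
    }

    int main(void)
    {
            walk_frames();
            return 0;
    }
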
137 #define MSG(msg) ops->warning(data, msg) 137 #define MSG(msg) ops->warning(data, msg)
138 138
139 void dump_trace(struct task_struct *task, struct pt_regs *regs, 139 void dump_trace(struct task_struct *task, struct pt_regs *regs,
140 unsigned long *stack, 140 unsigned long *stack,
141 struct stacktrace_ops *ops, void *data) 141 struct stacktrace_ops *ops, void *data)
142 { 142 {
143 unsigned long ebp = 0; 143 unsigned long ebp = 0;
144 144
145 if (!task) 145 if (!task)
146 task = current; 146 task = current;
147 147
148 if (!stack) { 148 if (!stack) {
149 unsigned long dummy; 149 unsigned long dummy;
150 stack = &dummy; 150 stack = &dummy;
151 if (task && task != current) 151 if (task && task != current)
152 stack = (unsigned long *)task->thread.esp; 152 stack = (unsigned long *)task->thread.esp;
153 } 153 }
154 154
155 #ifdef CONFIG_FRAME_POINTER 155 #ifdef CONFIG_FRAME_POINTER
156 if (!ebp) { 156 if (!ebp) {
157 if (task == current) { 157 if (task == current) {
158 /* Grab ebp right from our regs */ 158 /* Grab ebp right from our regs */
159 asm ("movl %%ebp, %0" : "=r" (ebp) : ); 159 asm ("movl %%ebp, %0" : "=r" (ebp) : );
160 } else { 160 } else {
161 /* ebp is the last reg pushed by switch_to */ 161 /* ebp is the last reg pushed by switch_to */
162 ebp = *(unsigned long *) task->thread.esp; 162 ebp = *(unsigned long *) task->thread.esp;
163 } 163 }
164 } 164 }
165 #endif 165 #endif
166 166
167 while (1) { 167 while (1) {
168 struct thread_info *context; 168 struct thread_info *context;
169 context = (struct thread_info *) 169 context = (struct thread_info *)
170 ((unsigned long)stack & (~(THREAD_SIZE - 1))); 170 ((unsigned long)stack & (~(THREAD_SIZE - 1)));
171 ebp = print_context_stack(context, stack, ebp, ops, data); 171 ebp = print_context_stack(context, stack, ebp, ops, data);
172 /* Should be after the line below, but somewhere 172 /* Should be after the line below, but somewhere
173 in early boot the context comes out corrupted and we 173 in early boot the context comes out corrupted and we
174 can't reference it -AK */ 174 can't reference it -AK */
175 if (ops->stack(data, "IRQ") < 0) 175 if (ops->stack(data, "IRQ") < 0)
176 break; 176 break;
177 stack = (unsigned long*)context->previous_esp; 177 stack = (unsigned long*)context->previous_esp;
178 if (!stack) 178 if (!stack)
179 break; 179 break;
180 touch_nmi_watchdog(); 180 touch_nmi_watchdog();
181 } 181 }
182 } 182 }
183 EXPORT_SYMBOL(dump_trace); 183 EXPORT_SYMBOL(dump_trace);
184 184
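dump_trace() above recovers the thread_info for any stack address purely by
masking: kernel stacks are THREAD_SIZE-aligned with thread_info at the
bottom, so rounding the stack pointer down to a THREAD_SIZE boundary lands
on it. A small sketch of the masking (the THREAD_SIZE value here is
illustrative):

    #include <assert.h>

    #define THREAD_SIZE 8192UL      /* illustrative; i386 uses 4K or 8K */

    int main(void)
    {
            unsigned long sp = 0xc0123abc;  /* address somewhere in a stack */
            unsigned long base = sp & ~(THREAD_SIZE - 1);

            assert(base == 0xc0122000);     /* rounded down to the 8K boundary */
            return 0;
    }
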
185 static void 185 static void
186 print_trace_warning_symbol(void *data, char *msg, unsigned long symbol) 186 print_trace_warning_symbol(void *data, char *msg, unsigned long symbol)
187 { 187 {
188 printk("%s", (char *)data); 188 printk("%s", (char *)data);
189 print_symbol(msg, symbol); 189 print_symbol(msg, symbol);
190 printk("\n"); 190 printk("\n");
191 } 191 }
192 192
193 static void print_trace_warning(void *data, char *msg) 193 static void print_trace_warning(void *data, char *msg)
194 { 194 {
195 printk("%s%s\n", (char *)data, msg); 195 printk("%s%s\n", (char *)data, msg);
196 } 196 }
197 197
198 static int print_trace_stack(void *data, char *name) 198 static int print_trace_stack(void *data, char *name)
199 { 199 {
200 return 0; 200 return 0;
201 } 201 }
202 202
203 /* 203 /*
204 * Print one address/symbol entry per line. 204 * Print one address/symbol entry per line.
205 */ 205 */
206 static void print_trace_address(void *data, unsigned long addr) 206 static void print_trace_address(void *data, unsigned long addr)
207 { 207 {
208 printk("%s [<%08lx>] ", (char *)data, addr); 208 printk("%s [<%08lx>] ", (char *)data, addr);
209 print_symbol("%s\n", addr); 209 print_symbol("%s\n", addr);
210 } 210 }
211 211
212 static struct stacktrace_ops print_trace_ops = { 212 static struct stacktrace_ops print_trace_ops = {
213 .warning = print_trace_warning, 213 .warning = print_trace_warning,
214 .warning_symbol = print_trace_warning_symbol, 214 .warning_symbol = print_trace_warning_symbol,
215 .stack = print_trace_stack, 215 .stack = print_trace_stack,
216 .address = print_trace_address, 216 .address = print_trace_address,
217 }; 217 };
218 218
219 static void 219 static void
220 show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs, 220 show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
221 unsigned long * stack, char *log_lvl) 221 unsigned long * stack, char *log_lvl)
222 { 222 {
223 dump_trace(task, regs, stack, &print_trace_ops, log_lvl); 223 dump_trace(task, regs, stack, &print_trace_ops, log_lvl);
224 printk("%s =======================\n", log_lvl); 224 printk("%s =======================\n", log_lvl);
225 } 225 }
226 226
227 void show_trace(struct task_struct *task, struct pt_regs *regs, 227 void show_trace(struct task_struct *task, struct pt_regs *regs,
228 unsigned long * stack) 228 unsigned long * stack)
229 { 229 {
230 show_trace_log_lvl(task, regs, stack, ""); 230 show_trace_log_lvl(task, regs, stack, "");
231 } 231 }
232 232
233 static void show_stack_log_lvl(struct task_struct *task, struct pt_regs *regs, 233 static void show_stack_log_lvl(struct task_struct *task, struct pt_regs *regs,
234 unsigned long *esp, char *log_lvl) 234 unsigned long *esp, char *log_lvl)
235 { 235 {
236 unsigned long *stack; 236 unsigned long *stack;
237 int i; 237 int i;
238 238
239 if (esp == NULL) { 239 if (esp == NULL) {
240 if (task) 240 if (task)
241 esp = (unsigned long*)task->thread.esp; 241 esp = (unsigned long*)task->thread.esp;
242 else 242 else
243 esp = (unsigned long *)&esp; 243 esp = (unsigned long *)&esp;
244 } 244 }
245 245
246 stack = esp; 246 stack = esp;
247 for(i = 0; i < kstack_depth_to_print; i++) { 247 for(i = 0; i < kstack_depth_to_print; i++) {
248 if (kstack_end(stack)) 248 if (kstack_end(stack))
249 break; 249 break;
250 if (i && ((i % 8) == 0)) 250 if (i && ((i % 8) == 0))
251 printk("\n%s ", log_lvl); 251 printk("\n%s ", log_lvl);
252 printk("%08lx ", *stack++); 252 printk("%08lx ", *stack++);
253 } 253 }
254 printk("\n%sCall Trace:\n", log_lvl); 254 printk("\n%sCall Trace:\n", log_lvl);
255 show_trace_log_lvl(task, regs, esp, log_lvl); 255 show_trace_log_lvl(task, regs, esp, log_lvl);
256 } 256 }
257 257
258 void show_stack(struct task_struct *task, unsigned long *esp) 258 void show_stack(struct task_struct *task, unsigned long *esp)
259 { 259 {
260 printk(" "); 260 printk(" ");
261 show_stack_log_lvl(task, NULL, esp, ""); 261 show_stack_log_lvl(task, NULL, esp, "");
262 } 262 }
263 263
264 /* 264 /*
265 * The architecture-independent dump_stack generator 265 * The architecture-independent dump_stack generator
266 */ 266 */
267 void dump_stack(void) 267 void dump_stack(void)
268 { 268 {
269 unsigned long stack; 269 unsigned long stack;
270 270
271 show_trace(current, NULL, &stack); 271 show_trace(current, NULL, &stack);
272 } 272 }
273 273
274 EXPORT_SYMBOL(dump_stack); 274 EXPORT_SYMBOL(dump_stack);
275 275
276 void show_registers(struct pt_regs *regs) 276 void show_registers(struct pt_regs *regs)
277 { 277 {
278 int i; 278 int i;
279 int in_kernel = 1; 279 int in_kernel = 1;
280 unsigned long esp; 280 unsigned long esp;
281 unsigned short ss, gs; 281 unsigned short ss, gs;
282 282
283 esp = (unsigned long) (&regs->esp); 283 esp = (unsigned long) (&regs->esp);
284 savesegment(ss, ss); 284 savesegment(ss, ss);
285 savesegment(gs, gs); 285 savesegment(gs, gs);
286 if (user_mode_vm(regs)) { 286 if (user_mode_vm(regs)) {
287 in_kernel = 0; 287 in_kernel = 0;
288 esp = regs->esp; 288 esp = regs->esp;
289 ss = regs->xss & 0xffff; 289 ss = regs->xss & 0xffff;
290 } 290 }
291 print_modules(); 291 print_modules();
292 printk(KERN_EMERG "CPU: %d\n" 292 printk(KERN_EMERG "CPU: %d\n"
293 KERN_EMERG "EIP: %04x:[<%08lx>] %s VLI\n" 293 KERN_EMERG "EIP: %04x:[<%08lx>] %s VLI\n"
294 KERN_EMERG "EFLAGS: %08lx (%s %.*s)\n", 294 KERN_EMERG "EFLAGS: %08lx (%s %.*s)\n",
295 smp_processor_id(), 0xffff & regs->xcs, regs->eip, 295 smp_processor_id(), 0xffff & regs->xcs, regs->eip,
296 print_tainted(), regs->eflags, init_utsname()->release, 296 print_tainted(), regs->eflags, init_utsname()->release,
297 (int)strcspn(init_utsname()->version, " "), 297 (int)strcspn(init_utsname()->version, " "),
298 init_utsname()->version); 298 init_utsname()->version);
299 print_symbol(KERN_EMERG "EIP is at %s\n", regs->eip); 299 print_symbol(KERN_EMERG "EIP is at %s\n", regs->eip);
300 printk(KERN_EMERG "eax: %08lx ebx: %08lx ecx: %08lx edx: %08lx\n", 300 printk(KERN_EMERG "eax: %08lx ebx: %08lx ecx: %08lx edx: %08lx\n",
301 regs->eax, regs->ebx, regs->ecx, regs->edx); 301 regs->eax, regs->ebx, regs->ecx, regs->edx);
302 printk(KERN_EMERG "esi: %08lx edi: %08lx ebp: %08lx esp: %08lx\n", 302 printk(KERN_EMERG "esi: %08lx edi: %08lx ebp: %08lx esp: %08lx\n",
303 regs->esi, regs->edi, regs->ebp, esp); 303 regs->esi, regs->edi, regs->ebp, esp);
304 printk(KERN_EMERG "ds: %04x es: %04x fs: %04x gs: %04x ss: %04x\n", 304 printk(KERN_EMERG "ds: %04x es: %04x fs: %04x gs: %04x ss: %04x\n",
305 regs->xds & 0xffff, regs->xes & 0xffff, regs->xfs & 0xffff, gs, ss); 305 regs->xds & 0xffff, regs->xes & 0xffff, regs->xfs & 0xffff, gs, ss);
306 printk(KERN_EMERG "Process %.*s (pid: %d, ti=%p task=%p task.ti=%p)", 306 printk(KERN_EMERG "Process %.*s (pid: %d, ti=%p task=%p task.ti=%p)",
307 TASK_COMM_LEN, current->comm, current->pid, 307 TASK_COMM_LEN, current->comm, current->pid,
308 current_thread_info(), current, task_thread_info(current)); 308 current_thread_info(), current, task_thread_info(current));
309 /* 309 /*
310 * When in-kernel, we also print out the stack and code at the 310 * When in-kernel, we also print out the stack and code at the
311 * time of the fault. 311 * time of the fault.
312 */ 312 */
313 if (in_kernel) { 313 if (in_kernel) {
314 u8 *eip; 314 u8 *eip;
315 unsigned int code_prologue = code_bytes * 43 / 64; 315 unsigned int code_prologue = code_bytes * 43 / 64;
316 unsigned int code_len = code_bytes; 316 unsigned int code_len = code_bytes;
317 unsigned char c; 317 unsigned char c;
318 318
319 printk("\n" KERN_EMERG "Stack: "); 319 printk("\n" KERN_EMERG "Stack: ");
320 show_stack_log_lvl(NULL, regs, (unsigned long *)esp, KERN_EMERG); 320 show_stack_log_lvl(NULL, regs, (unsigned long *)esp, KERN_EMERG);
321 321
322 printk(KERN_EMERG "Code: "); 322 printk(KERN_EMERG "Code: ");
323 323
324 eip = (u8 *)regs->eip - code_prologue; 324 eip = (u8 *)regs->eip - code_prologue;
325 if (eip < (u8 *)PAGE_OFFSET || 325 if (eip < (u8 *)PAGE_OFFSET ||
326 probe_kernel_address(eip, c)) { 326 probe_kernel_address(eip, c)) {
327 /* try starting at EIP */ 327 /* try starting at EIP */
328 eip = (u8 *)regs->eip; 328 eip = (u8 *)regs->eip;
329 code_len = code_len - code_prologue + 1; 329 code_len = code_len - code_prologue + 1;
330 } 330 }
331 for (i = 0; i < code_len; i++, eip++) { 331 for (i = 0; i < code_len; i++, eip++) {
332 if (eip < (u8 *)PAGE_OFFSET || 332 if (eip < (u8 *)PAGE_OFFSET ||
333 probe_kernel_address(eip, c)) { 333 probe_kernel_address(eip, c)) {
334 printk(" Bad EIP value."); 334 printk(" Bad EIP value.");
335 break; 335 break;
336 } 336 }
337 if (eip == (u8 *)regs->eip) 337 if (eip == (u8 *)regs->eip)
338 printk("<%02x> ", c); 338 printk("<%02x> ", c);
339 else 339 else
340 printk("%02x ", c); 340 printk("%02x ", c);
341 } 341 }
342 } 342 }
343 printk("\n"); 343 printk("\n");
344 } 344 }
345 345
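The Code: dump in show_registers() budgets its bytes with code_prologue =
code_bytes * 43 / 64, i.e. with the default code_bytes of 64 it prints 43
bytes before EIP and 21 from EIP onward; when the prologue is unreadable it
falls back to starting at EIP with code_len - code_prologue + 1 bytes. The
arithmetic, checked in a standalone sketch:

    #include <assert.h>

    int main(void)
    {
            unsigned int code_bytes = 64;   /* default set above */
            unsigned int code_prologue = code_bytes * 43 / 64;

            assert(code_prologue == 43);                    /* bytes before EIP */
            assert(code_bytes - code_prologue == 21);       /* EIP onward */
            assert(code_bytes - code_prologue + 1 == 22);   /* fallback at EIP */
            return 0;
    }
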
346 int is_valid_bugaddr(unsigned long eip) 346 int is_valid_bugaddr(unsigned long eip)
347 { 347 {
348 unsigned short ud2; 348 unsigned short ud2;
349 349
350 if (eip < PAGE_OFFSET) 350 if (eip < PAGE_OFFSET)
351 return 0; 351 return 0;
352 if (probe_kernel_address((unsigned short *)eip, ud2)) 352 if (probe_kernel_address((unsigned short *)eip, ud2))
353 return 0; 353 return 0;
354 354
355 return ud2 == 0x0b0f; 355 return ud2 == 0x0b0f;
356 } 356 }
357 357
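is_valid_bugaddr() works because BUG() on i386 compiles to the two-byte ud2
instruction, opcode bytes 0f 0b; read as a 16-bit value on a little-endian
machine those bytes come out as 0x0b0f, which is exactly what the comparison
tests. A sketch (assumes a little-endian host, as i386 is):

    #include <assert.h>
    #include <string.h>

    int main(void)
    {
            unsigned char insn[2] = { 0x0f, 0x0b }; /* ud2 */
            unsigned short v;

            memcpy(&v, insn, sizeof(v));
            assert(v == 0x0b0f);    /* the check in is_valid_bugaddr() */
            return 0;
    }
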
358 /* 358 /*
359 * This path is taken when something in the kernel has done something bad and 359 * This path is taken when something in the kernel has done something bad and
360 * is about to be terminated. 360 * is about to be terminated.
361 */ 361 */
362 void die(const char * str, struct pt_regs * regs, long err) 362 void die(const char * str, struct pt_regs * regs, long err)
363 { 363 {
364 static struct { 364 static struct {
365 spinlock_t lock; 365 spinlock_t lock;
366 u32 lock_owner; 366 u32 lock_owner;
367 int lock_owner_depth; 367 int lock_owner_depth;
368 } die = { 368 } die = {
369 .lock = __SPIN_LOCK_UNLOCKED(die.lock), 369 .lock = __SPIN_LOCK_UNLOCKED(die.lock),
370 .lock_owner = -1, 370 .lock_owner = -1,
371 .lock_owner_depth = 0 371 .lock_owner_depth = 0
372 }; 372 };
373 static int die_counter; 373 static int die_counter;
374 unsigned long flags; 374 unsigned long flags;
375 375
376 oops_enter(); 376 oops_enter();
377 377
378 if (die.lock_owner != raw_smp_processor_id()) { 378 if (die.lock_owner != raw_smp_processor_id()) {
379 console_verbose(); 379 console_verbose();
380 spin_lock_irqsave(&die.lock, flags); 380 spin_lock_irqsave(&die.lock, flags);
381 die.lock_owner = smp_processor_id(); 381 die.lock_owner = smp_processor_id();
382 die.lock_owner_depth = 0; 382 die.lock_owner_depth = 0;
383 bust_spinlocks(1); 383 bust_spinlocks(1);
384 } 384 }
385 else 385 else
386 local_save_flags(flags); 386 local_save_flags(flags);
387 387
388 if (++die.lock_owner_depth < 3) { 388 if (++die.lock_owner_depth < 3) {
389 int nl = 0; 389 int nl = 0;
390 unsigned long esp; 390 unsigned long esp;
391 unsigned short ss; 391 unsigned short ss;
392 392
393 report_bug(regs->eip, regs); 393 report_bug(regs->eip, regs);
394 394
395 printk(KERN_EMERG "%s: %04lx [#%d]\n", str, err & 0xffff, ++die_counter); 395 printk(KERN_EMERG "%s: %04lx [#%d]\n", str, err & 0xffff, ++die_counter);
396 #ifdef CONFIG_PREEMPT 396 #ifdef CONFIG_PREEMPT
397 printk(KERN_EMERG "PREEMPT "); 397 printk(KERN_EMERG "PREEMPT ");
398 nl = 1; 398 nl = 1;
399 #endif 399 #endif
400 #ifdef CONFIG_SMP 400 #ifdef CONFIG_SMP
401 if (!nl) 401 if (!nl)
402 printk(KERN_EMERG); 402 printk(KERN_EMERG);
403 printk("SMP "); 403 printk("SMP ");
404 nl = 1; 404 nl = 1;
405 #endif 405 #endif
406 #ifdef CONFIG_DEBUG_PAGEALLOC 406 #ifdef CONFIG_DEBUG_PAGEALLOC
407 if (!nl) 407 if (!nl)
408 printk(KERN_EMERG); 408 printk(KERN_EMERG);
409 printk("DEBUG_PAGEALLOC"); 409 printk("DEBUG_PAGEALLOC");
410 nl = 1; 410 nl = 1;
411 #endif 411 #endif
412 if (nl) 412 if (nl)
413 printk("\n"); 413 printk("\n");
414 if (notify_die(DIE_OOPS, str, regs, err, 414 if (notify_die(DIE_OOPS, str, regs, err,
415 current->thread.trap_no, SIGSEGV) != 415 current->thread.trap_no, SIGSEGV) !=
416 NOTIFY_STOP) { 416 NOTIFY_STOP) {
417 show_registers(regs); 417 show_registers(regs);
418 /* Executive summary in case the oops scrolled away */ 418 /* Executive summary in case the oops scrolled away */
419 esp = (unsigned long) (&regs->esp); 419 esp = (unsigned long) (&regs->esp);
420 savesegment(ss, ss); 420 savesegment(ss, ss);
421 if (user_mode(regs)) { 421 if (user_mode(regs)) {
422 esp = regs->esp; 422 esp = regs->esp;
423 ss = regs->xss & 0xffff; 423 ss = regs->xss & 0xffff;
424 } 424 }
425 printk(KERN_EMERG "EIP: [<%08lx>] ", regs->eip); 425 printk(KERN_EMERG "EIP: [<%08lx>] ", regs->eip);
426 print_symbol("%s", regs->eip); 426 print_symbol("%s", regs->eip);
427 printk(" SS:ESP %04x:%08lx\n", ss, esp); 427 printk(" SS:ESP %04x:%08lx\n", ss, esp);
428 } 428 }
429 else 429 else
430 regs = NULL; 430 regs = NULL;
431 } else 431 } else
432 printk(KERN_EMERG "Recursive die() failure, output suppressed\n"); 432 printk(KERN_EMERG "Recursive die() failure, output suppressed\n");
433 433
434 bust_spinlocks(0); 434 bust_spinlocks(0);
435 die.lock_owner = -1; 435 die.lock_owner = -1;
436 add_taint(TAINT_DIE);
436 spin_unlock_irqrestore(&die.lock, flags); 437 spin_unlock_irqrestore(&die.lock, flags);
437 438
438 if (!regs) 439 if (!regs)
439 return; 440 return;
440 441
441 if (kexec_should_crash(current)) 442 if (kexec_should_crash(current))
442 crash_kexec(regs); 443 crash_kexec(regs);
443 444
444 if (in_interrupt()) 445 if (in_interrupt())
445 panic("Fatal exception in interrupt"); 446 panic("Fatal exception in interrupt");
446 447
447 if (panic_on_oops) 448 if (panic_on_oops)
448 panic("Fatal exception"); 449 panic("Fatal exception");
449 450
450 oops_exit(); 451 oops_exit();
451 do_exit(SIGSEGV); 452 do_exit(SIGSEGV);
452 } 453 }
453 454
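The add_taint(TAINT_DIE) call added above is the heart of this patch: once
die() has run, the taint mask records the fact, and every later oops or
SysRq dump printed via print_tainted() carries the flag. A simplified sketch
of the mechanism (the real code lives in kernel/panic.c; the bit value and
the 'D' letter follow that code but are shown here only for illustration):

    #include <stdio.h>

    /* Sketch of the taint bookkeeping, not the kernel's implementation. */
    #define TAINT_DIE       (1 << 7)        /* 'D': kernel died recently */

    static unsigned long tainted_mask;

    static void add_taint(unsigned int flag)
    {
            tainted_mask |= flag;
    }

    static const char *print_tainted(void)
    {
            return (tainted_mask & TAINT_DIE) ? "Tainted: D" : "Not tainted";
    }

    int main(void)
    {
            printf("%s\n", print_tainted());        /* Not tainted */
            add_taint(TAINT_DIE);                   /* what die() now does */
            printf("%s\n", print_tainted());        /* Tainted: D */
            return 0;
    }
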
454 static inline void die_if_kernel(const char * str, struct pt_regs * regs, long err) 455 static inline void die_if_kernel(const char * str, struct pt_regs * regs, long err)
455 { 456 {
456 if (!user_mode_vm(regs)) 457 if (!user_mode_vm(regs))
457 die(str, regs, err); 458 die(str, regs, err);
458 } 459 }
459 460
460 static void __kprobes do_trap(int trapnr, int signr, char *str, int vm86, 461 static void __kprobes do_trap(int trapnr, int signr, char *str, int vm86,
461 struct pt_regs * regs, long error_code, 462 struct pt_regs * regs, long error_code,
462 siginfo_t *info) 463 siginfo_t *info)
463 { 464 {
464 struct task_struct *tsk = current; 465 struct task_struct *tsk = current;
465 466
466 if (regs->eflags & VM_MASK) { 467 if (regs->eflags & VM_MASK) {
467 if (vm86) 468 if (vm86)
468 goto vm86_trap; 469 goto vm86_trap;
469 goto trap_signal; 470 goto trap_signal;
470 } 471 }
471 472
472 if (!user_mode(regs)) 473 if (!user_mode(regs))
473 goto kernel_trap; 474 goto kernel_trap;
474 475
475 trap_signal: { 476 trap_signal: {
476 /* 477 /*
477 * We want error_code and trap_no set for userspace faults and 478 * We want error_code and trap_no set for userspace faults and
478 * kernelspace faults which result in die(), but not 479 * kernelspace faults which result in die(), but not
479 * kernelspace faults which are fixed up. die() gives the 480 * kernelspace faults which are fixed up. die() gives the
480 * process no chance to handle the signal and notice the 481 * process no chance to handle the signal and notice the
481 * kernel fault information, so that won't result in polluting 482 * kernel fault information, so that won't result in polluting
482 * the information about previously queued, but not yet 483 * the information about previously queued, but not yet
483 * delivered, faults. See also do_general_protection below. 484 * delivered, faults. See also do_general_protection below.
484 */ 485 */
485 tsk->thread.error_code = error_code; 486 tsk->thread.error_code = error_code;
486 tsk->thread.trap_no = trapnr; 487 tsk->thread.trap_no = trapnr;
487 488
488 if (info) 489 if (info)
489 force_sig_info(signr, info, tsk); 490 force_sig_info(signr, info, tsk);
490 else 491 else
491 force_sig(signr, tsk); 492 force_sig(signr, tsk);
492 return; 493 return;
493 } 494 }
494 495
495 kernel_trap: { 496 kernel_trap: {
496 if (!fixup_exception(regs)) { 497 if (!fixup_exception(regs)) {
497 tsk->thread.error_code = error_code; 498 tsk->thread.error_code = error_code;
498 tsk->thread.trap_no = trapnr; 499 tsk->thread.trap_no = trapnr;
499 die(str, regs, error_code); 500 die(str, regs, error_code);
500 } 501 }
501 return; 502 return;
502 } 503 }
503 504
504 vm86_trap: { 505 vm86_trap: {
505 int ret = handle_vm86_trap((struct kernel_vm86_regs *) regs, error_code, trapnr); 506 int ret = handle_vm86_trap((struct kernel_vm86_regs *) regs, error_code, trapnr);
506 if (ret) goto trap_signal; 507 if (ret) goto trap_signal;
507 return; 508 return;
508 } 509 }
509 } 510 }
510 511
511 #define DO_ERROR(trapnr, signr, str, name) \ 512 #define DO_ERROR(trapnr, signr, str, name) \
512 fastcall void do_##name(struct pt_regs * regs, long error_code) \ 513 fastcall void do_##name(struct pt_regs * regs, long error_code) \
513 { \ 514 { \
514 if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr) \ 515 if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr) \
515 == NOTIFY_STOP) \ 516 == NOTIFY_STOP) \
516 return; \ 517 return; \
517 do_trap(trapnr, signr, str, 0, regs, error_code, NULL); \ 518 do_trap(trapnr, signr, str, 0, regs, error_code, NULL); \
518 } 519 }
519 520
520 #define DO_ERROR_INFO(trapnr, signr, str, name, sicode, siaddr) \ 521 #define DO_ERROR_INFO(trapnr, signr, str, name, sicode, siaddr) \
521 fastcall void do_##name(struct pt_regs * regs, long error_code) \ 522 fastcall void do_##name(struct pt_regs * regs, long error_code) \
522 { \ 523 { \
523 siginfo_t info; \ 524 siginfo_t info; \
524 info.si_signo = signr; \ 525 info.si_signo = signr; \
525 info.si_errno = 0; \ 526 info.si_errno = 0; \
526 info.si_code = sicode; \ 527 info.si_code = sicode; \
527 info.si_addr = (void __user *)siaddr; \ 528 info.si_addr = (void __user *)siaddr; \
528 if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr) \ 529 if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr) \
529 == NOTIFY_STOP) \ 530 == NOTIFY_STOP) \
530 return; \ 531 return; \
531 do_trap(trapnr, signr, str, 0, regs, error_code, &info); \ 532 do_trap(trapnr, signr, str, 0, regs, error_code, &info); \
532 } 533 }
533 534
534 #define DO_VM86_ERROR(trapnr, signr, str, name) \ 535 #define DO_VM86_ERROR(trapnr, signr, str, name) \
535 fastcall void do_##name(struct pt_regs * regs, long error_code) \ 536 fastcall void do_##name(struct pt_regs * regs, long error_code) \
536 { \ 537 { \
537 if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr) \ 538 if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr) \
538 == NOTIFY_STOP) \ 539 == NOTIFY_STOP) \
539 return; \ 540 return; \
540 do_trap(trapnr, signr, str, 1, regs, error_code, NULL); \ 541 do_trap(trapnr, signr, str, 1, regs, error_code, NULL); \
541 } 542 }
542 543
543 #define DO_VM86_ERROR_INFO(trapnr, signr, str, name, sicode, siaddr) \ 544 #define DO_VM86_ERROR_INFO(trapnr, signr, str, name, sicode, siaddr) \
544 fastcall void do_##name(struct pt_regs * regs, long error_code) \ 545 fastcall void do_##name(struct pt_regs * regs, long error_code) \
545 { \ 546 { \
546 siginfo_t info; \ 547 siginfo_t info; \
547 info.si_signo = signr; \ 548 info.si_signo = signr; \
548 info.si_errno = 0; \ 549 info.si_errno = 0; \
549 info.si_code = sicode; \ 550 info.si_code = sicode; \
550 info.si_addr = (void __user *)siaddr; \ 551 info.si_addr = (void __user *)siaddr; \
551 if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr) \ 552 if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr) \
552 == NOTIFY_STOP) \ 553 == NOTIFY_STOP) \
553 return; \ 554 return; \
554 do_trap(trapnr, signr, str, 1, regs, error_code, &info); \ 555 do_trap(trapnr, signr, str, 1, regs, error_code, &info); \
555 } 556 }
556 557
557 DO_VM86_ERROR_INFO( 0, SIGFPE, "divide error", divide_error, FPE_INTDIV, regs->eip) 558 DO_VM86_ERROR_INFO( 0, SIGFPE, "divide error", divide_error, FPE_INTDIV, regs->eip)
558 #ifndef CONFIG_KPROBES 559 #ifndef CONFIG_KPROBES
559 DO_VM86_ERROR( 3, SIGTRAP, "int3", int3) 560 DO_VM86_ERROR( 3, SIGTRAP, "int3", int3)
560 #endif 561 #endif
561 DO_VM86_ERROR( 4, SIGSEGV, "overflow", overflow) 562 DO_VM86_ERROR( 4, SIGSEGV, "overflow", overflow)
562 DO_VM86_ERROR( 5, SIGSEGV, "bounds", bounds) 563 DO_VM86_ERROR( 5, SIGSEGV, "bounds", bounds)
563 DO_ERROR_INFO( 6, SIGILL, "invalid opcode", invalid_op, ILL_ILLOPN, regs->eip) 564 DO_ERROR_INFO( 6, SIGILL, "invalid opcode", invalid_op, ILL_ILLOPN, regs->eip)
564 DO_ERROR( 9, SIGFPE, "coprocessor segment overrun", coprocessor_segment_overrun) 565 DO_ERROR( 9, SIGFPE, "coprocessor segment overrun", coprocessor_segment_overrun)
565 DO_ERROR(10, SIGSEGV, "invalid TSS", invalid_TSS) 566 DO_ERROR(10, SIGSEGV, "invalid TSS", invalid_TSS)
566 DO_ERROR(11, SIGBUS, "segment not present", segment_not_present) 567 DO_ERROR(11, SIGBUS, "segment not present", segment_not_present)
567 DO_ERROR(12, SIGBUS, "stack segment", stack_segment) 568 DO_ERROR(12, SIGBUS, "stack segment", stack_segment)
568 DO_ERROR_INFO(17, SIGBUS, "alignment check", alignment_check, BUS_ADRALN, 0) 569 DO_ERROR_INFO(17, SIGBUS, "alignment check", alignment_check, BUS_ADRALN, 0)
569 DO_ERROR_INFO(32, SIGSEGV, "iret exception", iret_error, ILL_BADSTK, 0) 570 DO_ERROR_INFO(32, SIGSEGV, "iret exception", iret_error, ILL_BADSTK, 0)
570 571
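Each DO_ERROR*/DO_VM86_ERROR* line above stamps out a complete trap handler
from the corresponding macro; for instance, DO_ERROR(10, SIGSEGV,
"invalid TSS", invalid_TSS) expands to (reformatted for readability):

    fastcall void do_invalid_TSS(struct pt_regs *regs, long error_code)
    {
            if (notify_die(DIE_TRAP, "invalid TSS", regs, error_code,
                            10, SIGSEGV) == NOTIFY_STOP)
                    return;
            do_trap(10, SIGSEGV, "invalid TSS", 0, regs, error_code, NULL);
    }
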
571 fastcall void __kprobes do_general_protection(struct pt_regs * regs, 572 fastcall void __kprobes do_general_protection(struct pt_regs * regs,
572 long error_code) 573 long error_code)
573 { 574 {
574 int cpu = get_cpu(); 575 int cpu = get_cpu();
575 struct tss_struct *tss = &per_cpu(init_tss, cpu); 576 struct tss_struct *tss = &per_cpu(init_tss, cpu);
576 struct thread_struct *thread = &current->thread; 577 struct thread_struct *thread = &current->thread;
577 578
578 /* 579 /*
579 * Perform the lazy TSS's I/O bitmap copy. If the TSS has an 580 * Perform the lazy TSS's I/O bitmap copy. If the TSS has an
580 * invalid offset set (the LAZY one) and the faulting thread has 581 * invalid offset set (the LAZY one) and the faulting thread has
581 * a valid I/O bitmap pointer, we copy the I/O bitmap in the TSS 582 * a valid I/O bitmap pointer, we copy the I/O bitmap in the TSS
582 * and we set the offset field correctly. Then we let the CPU to 583 * and we set the offset field correctly. Then we let the CPU to
583 * restart the faulting instruction. 584 * restart the faulting instruction.
584 */ 585 */
585 if (tss->x86_tss.io_bitmap_base == INVALID_IO_BITMAP_OFFSET_LAZY && 586 if (tss->x86_tss.io_bitmap_base == INVALID_IO_BITMAP_OFFSET_LAZY &&
586 thread->io_bitmap_ptr) { 587 thread->io_bitmap_ptr) {
587 memcpy(tss->io_bitmap, thread->io_bitmap_ptr, 588 memcpy(tss->io_bitmap, thread->io_bitmap_ptr,
588 thread->io_bitmap_max); 589 thread->io_bitmap_max);
589 /* 590 /*
590 * If the previously set map extended to higher ports 591 * If the previously set map extended to higher ports
591 * than the current one, pad extra space with 0xff (no access). 592 * than the current one, pad extra space with 0xff (no access).
592 */ 593 */
593 if (thread->io_bitmap_max < tss->io_bitmap_max) 594 if (thread->io_bitmap_max < tss->io_bitmap_max)
594 memset((char *) tss->io_bitmap + 595 memset((char *) tss->io_bitmap +
595 thread->io_bitmap_max, 0xff, 596 thread->io_bitmap_max, 0xff,
596 tss->io_bitmap_max - thread->io_bitmap_max); 597 tss->io_bitmap_max - thread->io_bitmap_max);
597 tss->io_bitmap_max = thread->io_bitmap_max; 598 tss->io_bitmap_max = thread->io_bitmap_max;
598 tss->x86_tss.io_bitmap_base = IO_BITMAP_OFFSET; 599 tss->x86_tss.io_bitmap_base = IO_BITMAP_OFFSET;
599 tss->io_bitmap_owner = thread; 600 tss->io_bitmap_owner = thread;
600 put_cpu(); 601 put_cpu();
601 return; 602 return;
602 } 603 }
603 put_cpu(); 604 put_cpu();
604 605
605 if (regs->eflags & VM_MASK) 606 if (regs->eflags & VM_MASK)
606 goto gp_in_vm86; 607 goto gp_in_vm86;
607 608
608 if (!user_mode(regs)) 609 if (!user_mode(regs))
609 goto gp_in_kernel; 610 goto gp_in_kernel;
610 611
611 current->thread.error_code = error_code; 612 current->thread.error_code = error_code;
612 current->thread.trap_no = 13; 613 current->thread.trap_no = 13;
613 force_sig(SIGSEGV, current); 614 force_sig(SIGSEGV, current);
614 return; 615 return;
615 616
616 gp_in_vm86: 617 gp_in_vm86:
617 local_irq_enable(); 618 local_irq_enable();
618 handle_vm86_fault((struct kernel_vm86_regs *) regs, error_code); 619 handle_vm86_fault((struct kernel_vm86_regs *) regs, error_code);
619 return; 620 return;
620 621
621 gp_in_kernel: 622 gp_in_kernel:
622 if (!fixup_exception(regs)) { 623 if (!fixup_exception(regs)) {
623 current->thread.error_code = error_code; 624 current->thread.error_code = error_code;
624 current->thread.trap_no = 13; 625 current->thread.trap_no = 13;
625 if (notify_die(DIE_GPF, "general protection fault", regs, 626 if (notify_die(DIE_GPF, "general protection fault", regs,
626 error_code, 13, SIGSEGV) == NOTIFY_STOP) 627 error_code, 13, SIGSEGV) == NOTIFY_STOP)
627 return; 628 return;
628 die("general protection fault", regs, error_code); 629 die("general protection fault", regs, error_code);
629 } 630 }
630 } 631 }
631 632
632 static __kprobes void 633 static __kprobes void
633 mem_parity_error(unsigned char reason, struct pt_regs * regs) 634 mem_parity_error(unsigned char reason, struct pt_regs * regs)
634 { 635 {
635 printk(KERN_EMERG "Uhhuh. NMI received for unknown reason %02x on " 636 printk(KERN_EMERG "Uhhuh. NMI received for unknown reason %02x on "
636 "CPU %d.\n", reason, smp_processor_id()); 637 "CPU %d.\n", reason, smp_processor_id());
637 printk(KERN_EMERG "You have some hardware problem, likely on the PCI bus.\n"); 638 printk(KERN_EMERG "You have some hardware problem, likely on the PCI bus.\n");
638 if (panic_on_unrecovered_nmi) 639 if (panic_on_unrecovered_nmi)
639 panic("NMI: Not continuing"); 640 panic("NMI: Not continuing");
640 641
641 printk(KERN_EMERG "Dazed and confused, but trying to continue\n"); 642 printk(KERN_EMERG "Dazed and confused, but trying to continue\n");
642 643
643 /* Clear and disable the memory parity error line. */ 644 /* Clear and disable the memory parity error line. */
644 clear_mem_error(reason); 645 clear_mem_error(reason);
645 } 646 }
646 647
647 static __kprobes void 648 static __kprobes void
648 io_check_error(unsigned char reason, struct pt_regs * regs) 649 io_check_error(unsigned char reason, struct pt_regs * regs)
649 { 650 {
650 unsigned long i; 651 unsigned long i;
651 652
652 printk(KERN_EMERG "NMI: IOCK error (debug interrupt?)\n"); 653 printk(KERN_EMERG "NMI: IOCK error (debug interrupt?)\n");
653 show_registers(regs); 654 show_registers(regs);
654 655
655 /* Re-enable the IOCK line, wait for a few seconds */ 656 /* Re-enable the IOCK line, wait for a few seconds */
656 reason = (reason & 0xf) | 8; 657 reason = (reason & 0xf) | 8;
657 outb(reason, 0x61); 658 outb(reason, 0x61);
658 i = 2000; 659 i = 2000;
659 while (--i) udelay(1000); 660 while (--i) udelay(1000);
660 reason &= ~8; 661 reason &= ~8;
661 outb(reason, 0x61); 662 outb(reason, 0x61);
662 } 663 }
663 664
664 static __kprobes void 665 static __kprobes void
665 unknown_nmi_error(unsigned char reason, struct pt_regs * regs) 666 unknown_nmi_error(unsigned char reason, struct pt_regs * regs)
666 { 667 {
667 #ifdef CONFIG_MCA 668 #ifdef CONFIG_MCA
668 /* Might actually be able to figure out what the guilty party 669 /* Might actually be able to figure out what the guilty party
669 * is. */ 670 * is. */
670 if( MCA_bus ) { 671 if( MCA_bus ) {
671 mca_handle_nmi(); 672 mca_handle_nmi();
672 return; 673 return;
673 } 674 }
674 #endif 675 #endif
675 printk(KERN_EMERG "Uhhuh. NMI received for unknown reason %02x on " 676 printk(KERN_EMERG "Uhhuh. NMI received for unknown reason %02x on "
676 "CPU %d.\n", reason, smp_processor_id()); 677 "CPU %d.\n", reason, smp_processor_id());
677 printk(KERN_EMERG "Do you have a strange power saving mode enabled?\n"); 678 printk(KERN_EMERG "Do you have a strange power saving mode enabled?\n");
678 if (panic_on_unrecovered_nmi) 679 if (panic_on_unrecovered_nmi)
679 panic("NMI: Not continuing"); 680 panic("NMI: Not continuing");
680 681
681 printk(KERN_EMERG "Dazed and confused, but trying to continue\n"); 682 printk(KERN_EMERG "Dazed and confused, but trying to continue\n");
682 } 683 }
683 684
684 static DEFINE_SPINLOCK(nmi_print_lock); 685 static DEFINE_SPINLOCK(nmi_print_lock);
685 686
686 void __kprobes die_nmi(struct pt_regs *regs, const char *msg) 687 void __kprobes die_nmi(struct pt_regs *regs, const char *msg)
687 { 688 {
688 if (notify_die(DIE_NMIWATCHDOG, msg, regs, 0, 2, SIGINT) == 689 if (notify_die(DIE_NMIWATCHDOG, msg, regs, 0, 2, SIGINT) ==
689 NOTIFY_STOP) 690 NOTIFY_STOP)
690 return; 691 return;
691 692
692 spin_lock(&nmi_print_lock); 693 spin_lock(&nmi_print_lock);
693 /* 694 /*
694 * We are in trouble anyway, let's at least try 695 * We are in trouble anyway, let's at least try
695 * to get a message out. 696 * to get a message out.
696 */ 697 */
697 bust_spinlocks(1); 698 bust_spinlocks(1);
698 printk(KERN_EMERG "%s", msg); 699 printk(KERN_EMERG "%s", msg);
699 printk(" on CPU%d, eip %08lx, registers:\n", 700 printk(" on CPU%d, eip %08lx, registers:\n",
700 smp_processor_id(), regs->eip); 701 smp_processor_id(), regs->eip);
701 show_registers(regs); 702 show_registers(regs);
702 console_silent(); 703 console_silent();
703 spin_unlock(&nmi_print_lock); 704 spin_unlock(&nmi_print_lock);
704 bust_spinlocks(0); 705 bust_spinlocks(0);
705 706
706 /* If we are in the kernel we are probably nested up pretty badly 707 /* If we are in the kernel we are probably nested up pretty badly
707 * and might as well get out now while we still can. 708 * and might as well get out now while we still can.
708 */ 709 */
709 if (!user_mode_vm(regs)) { 710 if (!user_mode_vm(regs)) {
710 current->thread.trap_no = 2; 711 current->thread.trap_no = 2;
711 crash_kexec(regs); 712 crash_kexec(regs);
712 } 713 }
713 714
714 do_exit(SIGSEGV); 715 do_exit(SIGSEGV);
715 } 716 }
716 717
717 static __kprobes void default_do_nmi(struct pt_regs * regs) 718 static __kprobes void default_do_nmi(struct pt_regs * regs)
718 { 719 {
719 unsigned char reason = 0; 720 unsigned char reason = 0;
720 721
721 /* Only the BSP gets external NMIs from the system. */ 722 /* Only the BSP gets external NMIs from the system. */
722 if (!smp_processor_id()) 723 if (!smp_processor_id())
723 reason = get_nmi_reason(); 724 reason = get_nmi_reason();
724 725
725 if (!(reason & 0xc0)) { 726 if (!(reason & 0xc0)) {
726 if (notify_die(DIE_NMI_IPI, "nmi_ipi", regs, reason, 2, SIGINT) 727 if (notify_die(DIE_NMI_IPI, "nmi_ipi", regs, reason, 2, SIGINT)
727 == NOTIFY_STOP) 728 == NOTIFY_STOP)
728 return; 729 return;
729 #ifdef CONFIG_X86_LOCAL_APIC 730 #ifdef CONFIG_X86_LOCAL_APIC
730 /* 731 /*
731 * Ok, so this is none of the documented NMI sources, 732 * Ok, so this is none of the documented NMI sources,
732 * so it must be the NMI watchdog. 733 * so it must be the NMI watchdog.
733 */ 734 */
734 if (nmi_watchdog_tick(regs, reason)) 735 if (nmi_watchdog_tick(regs, reason))
735 return; 736 return;
736 if (!do_nmi_callback(regs, smp_processor_id())) 737 if (!do_nmi_callback(regs, smp_processor_id()))
737 #endif 738 #endif
738 unknown_nmi_error(reason, regs); 739 unknown_nmi_error(reason, regs);
739 740
740 return; 741 return;
741 } 742 }
742 if (notify_die(DIE_NMI, "nmi", regs, reason, 2, SIGINT) == NOTIFY_STOP) 743 if (notify_die(DIE_NMI, "nmi", regs, reason, 2, SIGINT) == NOTIFY_STOP)
743 return; 744 return;
744 if (reason & 0x80) 745 if (reason & 0x80)
745 mem_parity_error(reason, regs); 746 mem_parity_error(reason, regs);
746 if (reason & 0x40) 747 if (reason & 0x40)
747 io_check_error(reason, regs); 748 io_check_error(reason, regs);
748 /* 749 /*
749 * Reassert NMI in case it became active meanwhile 750 * Reassert NMI in case it became active meanwhile
750 * as it's edge-triggered. 751 * as it's edge-triggered.
751 */ 752 */
752 reassert_nmi(); 753 reassert_nmi();
753 } 754 }
754 755
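default_do_nmi() classifies the NMI by the status byte that get_nmi_reason()
reads (from system control port 0x61 on PC-compatible hardware): bit 7
(0x80) signals a memory parity/ECC error and bit 6 (0x40) an I/O channel
check, which is why the fast path first tests reason & 0xc0. A sketch of the
decode (port access elided; the reason value is illustrative):

    #include <assert.h>

    int main(void)
    {
            unsigned char reason = 0x40;    /* pretend IOCHK fired */

            assert((reason & 0xc0) != 0);   /* a documented NMI source */
            assert((reason & 0x80) == 0);   /* not a parity error */
            assert((reason & 0x40) != 0);   /* -> io_check_error() */
            return 0;
    }
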
755 fastcall __kprobes void do_nmi(struct pt_regs * regs, long error_code) 756 fastcall __kprobes void do_nmi(struct pt_regs * regs, long error_code)
756 { 757 {
757 int cpu; 758 int cpu;
758 759
759 nmi_enter(); 760 nmi_enter();
760 761
761 cpu = smp_processor_id(); 762 cpu = smp_processor_id();
762 763
763 ++nmi_count(cpu); 764 ++nmi_count(cpu);
764 765
765 default_do_nmi(regs); 766 default_do_nmi(regs);
766 767
767 nmi_exit(); 768 nmi_exit();
768 } 769 }
769 770
770 #ifdef CONFIG_KPROBES 771 #ifdef CONFIG_KPROBES
771 fastcall void __kprobes do_int3(struct pt_regs *regs, long error_code) 772 fastcall void __kprobes do_int3(struct pt_regs *regs, long error_code)
772 { 773 {
773 if (notify_die(DIE_INT3, "int3", regs, error_code, 3, SIGTRAP) 774 if (notify_die(DIE_INT3, "int3", regs, error_code, 3, SIGTRAP)
774 == NOTIFY_STOP) 775 == NOTIFY_STOP)
775 return; 776 return;
776 /* This is an interrupt gate, because kprobes wants interrupts 777 /* This is an interrupt gate, because kprobes wants interrupts
777 disabled. Normal trap handlers don't. */ 778 disabled. Normal trap handlers don't. */
778 restore_interrupts(regs); 779 restore_interrupts(regs);
779 do_trap(3, SIGTRAP, "int3", 1, regs, error_code, NULL); 780 do_trap(3, SIGTRAP, "int3", 1, regs, error_code, NULL);
780 } 781 }
781 #endif 782 #endif
782 783
783 /* 784 /*
784 * Our handling of the processor debug registers is non-trivial. 785 * Our handling of the processor debug registers is non-trivial.
785 * We do not clear them on entry and exit from the kernel. Therefore 786 * We do not clear them on entry and exit from the kernel. Therefore
786 * it is possible to get a watchpoint trap here from inside the kernel. 787 * it is possible to get a watchpoint trap here from inside the kernel.
787 * However, the code in ./ptrace.c has ensured that the user can 788 * However, the code in ./ptrace.c has ensured that the user can
788 * only set watchpoints on userspace addresses. Therefore the in-kernel 789 * only set watchpoints on userspace addresses. Therefore the in-kernel
789 * watchpoint trap can only occur in code which is reading/writing 790 * watchpoint trap can only occur in code which is reading/writing
790 * from user space. Such code must not hold kernel locks (since it 791 * from user space. Such code must not hold kernel locks (since it
791 * can equally take a page fault), therefore it is safe to call 792 * can equally take a page fault), therefore it is safe to call
792 * force_sig_info even though that claims and releases locks. 793 * force_sig_info even though that claims and releases locks.
793 * 794 *
794 * Code in ./signal.c ensures that the debug control register 795 * Code in ./signal.c ensures that the debug control register
795 * is restored before we deliver any signal, and therefore that 796 * is restored before we deliver any signal, and therefore that
796 * user code runs with the correct debug control register even though 797 * user code runs with the correct debug control register even though
797 * we clear it here. 798 * we clear it here.
798 * 799 *
799 * Being careful here means that we don't have to be as careful in a 800 * Being careful here means that we don't have to be as careful in a
800 * lot of more complicated places (task switching can be a bit lazy 801 * lot of more complicated places (task switching can be a bit lazy
801 * about restoring all the debug state, and ptrace doesn't have to 802 * about restoring all the debug state, and ptrace doesn't have to
802 * find every occurrence of the TF bit that could be saved away even 803 * find every occurrence of the TF bit that could be saved away even
803 * by user code) 804 * by user code)
804 */ 805 */
805 fastcall void __kprobes do_debug(struct pt_regs * regs, long error_code) 806 fastcall void __kprobes do_debug(struct pt_regs * regs, long error_code)
806 { 807 {
807 unsigned int condition; 808 unsigned int condition;
808 struct task_struct *tsk = current; 809 struct task_struct *tsk = current;
809 810
810 get_debugreg(condition, 6); 811 get_debugreg(condition, 6);
811 812
812 if (notify_die(DIE_DEBUG, "debug", regs, condition, error_code, 813 if (notify_die(DIE_DEBUG, "debug", regs, condition, error_code,
813 SIGTRAP) == NOTIFY_STOP) 814 SIGTRAP) == NOTIFY_STOP)
814 return; 815 return;
815 /* It's safe to allow irqs after DR6 has been saved */ 816 /* It's safe to allow irqs after DR6 has been saved */
816 if (regs->eflags & X86_EFLAGS_IF) 817 if (regs->eflags & X86_EFLAGS_IF)
817 local_irq_enable(); 818 local_irq_enable();
818 819
819 /* Mask out spurious debug traps due to lazy DR7 setting */ 820 /* Mask out spurious debug traps due to lazy DR7 setting */
820 if (condition & (DR_TRAP0|DR_TRAP1|DR_TRAP2|DR_TRAP3)) { 821 if (condition & (DR_TRAP0|DR_TRAP1|DR_TRAP2|DR_TRAP3)) {
821 if (!tsk->thread.debugreg[7]) 822 if (!tsk->thread.debugreg[7])
822 goto clear_dr7; 823 goto clear_dr7;
823 } 824 }
824 825
825 if (regs->eflags & VM_MASK) 826 if (regs->eflags & VM_MASK)
826 goto debug_vm86; 827 goto debug_vm86;
827 828
828 /* Save debug status register where ptrace can see it */ 829 /* Save debug status register where ptrace can see it */
829 tsk->thread.debugreg[6] = condition; 830 tsk->thread.debugreg[6] = condition;
830 831
831 /* 832 /*
832 * Single-stepping through TF: make sure we ignore any events in 833 * Single-stepping through TF: make sure we ignore any events in
833 * kernel space (but re-enable TF when returning to user mode). 834 * kernel space (but re-enable TF when returning to user mode).
834 */ 835 */
835 if (condition & DR_STEP) { 836 if (condition & DR_STEP) {
836 /* 837 /*
837 * We already checked v86 mode above, so we can 838 * We already checked v86 mode above, so we can
838 * check for kernel mode by just checking the CPL 839 * check for kernel mode by just checking the CPL
839 * of CS. 840 * of CS.
840 */ 841 */
841 if (!user_mode(regs)) 842 if (!user_mode(regs))
842 goto clear_TF_reenable; 843 goto clear_TF_reenable;
843 } 844 }
844 845
845 /* Ok, finally something we can handle */ 846 /* Ok, finally something we can handle */
846 send_sigtrap(tsk, regs, error_code); 847 send_sigtrap(tsk, regs, error_code);
847 848
848 /* Disable additional traps. They'll be re-enabled when 849 /* Disable additional traps. They'll be re-enabled when
849 * the signal is delivered. 850 * the signal is delivered.
850 */ 851 */
851 clear_dr7: 852 clear_dr7:
852 set_debugreg(0, 7); 853 set_debugreg(0, 7);
853 return; 854 return;
854 855
855 debug_vm86: 856 debug_vm86:
856 handle_vm86_trap((struct kernel_vm86_regs *) regs, error_code, 1); 857 handle_vm86_trap((struct kernel_vm86_regs *) regs, error_code, 1);
857 return; 858 return;
858 859
859 clear_TF_reenable: 860 clear_TF_reenable:
860 set_tsk_thread_flag(tsk, TIF_SINGLESTEP); 861 set_tsk_thread_flag(tsk, TIF_SINGLESTEP);
861 regs->eflags &= ~TF_MASK; 862 regs->eflags &= ~TF_MASK;
862 return; 863 return;
863 } 864 }
864 865
865 /* 866 /*
866 * Note that we play around with the 'TS' bit in an attempt to get 867 * Note that we play around with the 'TS' bit in an attempt to get
867 * the correct behaviour even in the presence of the asynchronous 868 * the correct behaviour even in the presence of the asynchronous
868 * IRQ13 behaviour 869 * IRQ13 behaviour
869 */ 870 */
870 void math_error(void __user *eip) 871 void math_error(void __user *eip)
871 { 872 {
872 struct task_struct * task; 873 struct task_struct * task;
873 siginfo_t info; 874 siginfo_t info;
874 unsigned short cwd, swd; 875 unsigned short cwd, swd;
875 876
876 /* 877 /*
877 * Save the info for the exception handler and clear the error. 878 * Save the info for the exception handler and clear the error.
878 */ 879 */
879 task = current; 880 task = current;
880 save_init_fpu(task); 881 save_init_fpu(task);
881 task->thread.trap_no = 16; 882 task->thread.trap_no = 16;
882 task->thread.error_code = 0; 883 task->thread.error_code = 0;
883 info.si_signo = SIGFPE; 884 info.si_signo = SIGFPE;
884 info.si_errno = 0; 885 info.si_errno = 0;
885 info.si_code = __SI_FAULT; 886 info.si_code = __SI_FAULT;
886 info.si_addr = eip; 887 info.si_addr = eip;
887 /* 888 /*
888 * (~cwd & swd) will mask out exceptions that are not set to unmasked 889 * (~cwd & swd) will mask out exceptions that are not set to unmasked
889 * status. 0x3f is the exception bits in these regs, 0x200 is the 890 * status. 0x3f is the exception bits in these regs, 0x200 is the
890 * C1 reg you need in case of a stack fault, 0x040 is the stack 891 * C1 reg you need in case of a stack fault, 0x040 is the stack
891 * fault bit. We should only be taking one exception at a time, 892 * fault bit. We should only be taking one exception at a time,
892 * so if this combination doesn't produce any single exception, 893 * so if this combination doesn't produce any single exception,
893 * then we have a bad program that isn't synchronizing its FPU usage 894 * then we have a bad program that isn't synchronizing its FPU usage
894 * and it will suffer the consequences since we won't be able to 895 * and it will suffer the consequences since we won't be able to
895 * fully reproduce the context of the exception 896 * fully reproduce the context of the exception
896 */ 897 */
897 cwd = get_fpu_cwd(task); 898 cwd = get_fpu_cwd(task);
898 swd = get_fpu_swd(task); 899 swd = get_fpu_swd(task);
899 switch (swd & ~cwd & 0x3f) { 900 switch (swd & ~cwd & 0x3f) {
900 case 0x000: /* No unmasked exception */ 901 case 0x000: /* No unmasked exception */
901 return; 902 return;
902 default: /* Multiple exceptions */ 903 default: /* Multiple exceptions */
903 break; 904 break;
904 case 0x001: /* Invalid Op */ 905 case 0x001: /* Invalid Op */
905 /* 906 /*
906 * swd & 0x240 == 0x040: Stack Underflow 907 * swd & 0x240 == 0x040: Stack Underflow
907 * swd & 0x240 == 0x240: Stack Overflow 908 * swd & 0x240 == 0x240: Stack Overflow
908 * User must clear the SF bit (0x40) if set 909 * User must clear the SF bit (0x40) if set
909 */ 910 */
910 info.si_code = FPE_FLTINV; 911 info.si_code = FPE_FLTINV;
911 break; 912 break;
912 case 0x002: /* Denormalize */ 913 case 0x002: /* Denormalize */
913 case 0x010: /* Underflow */ 914 case 0x010: /* Underflow */
914 info.si_code = FPE_FLTUND; 915 info.si_code = FPE_FLTUND;
915 break; 916 break;
916 case 0x004: /* Zero Divide */ 917 case 0x004: /* Zero Divide */
917 info.si_code = FPE_FLTDIV; 918 info.si_code = FPE_FLTDIV;
918 break; 919 break;
919 case 0x008: /* Overflow */ 920 case 0x008: /* Overflow */
920 info.si_code = FPE_FLTOVF; 921 info.si_code = FPE_FLTOVF;
921 break; 922 break;
922 case 0x020: /* Precision */ 923 case 0x020: /* Precision */
923 info.si_code = FPE_FLTRES; 924 info.si_code = FPE_FLTRES;
924 break; 925 break;
925 } 926 }
926 force_sig_info(SIGFPE, &info, task); 927 force_sig_info(SIGFPE, &info, task);
927 } 928 }
928 929
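The switch in math_error() keys on swd & ~cwd & 0x3f: the status word holds
the exception bits that fired, the control word holds their mask bits (a set
bit masks the exception), so ~cwd keeps only unmasked exceptions and 0x3f
restricts the result to the six x87 exception flags. A worked example
(values illustrative; 0x037f is the x87 default control word):

    #include <assert.h>

    int main(void)
    {
            unsigned short cwd = 0x037b;    /* default 0x037f with #Z unmasked */
            unsigned short swd = 0x0004;    /* zero-divide exception pending */

            /* same decode as math_error() above: lands on case 0x004 */
            assert((swd & ~cwd & 0x3f) == 0x004);   /* -> FPE_FLTDIV */
            return 0;
    }
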
929 fastcall void do_coprocessor_error(struct pt_regs * regs, long error_code) 930 fastcall void do_coprocessor_error(struct pt_regs * regs, long error_code)
930 { 931 {
931 ignore_fpu_irq = 1; 932 ignore_fpu_irq = 1;
932 math_error((void __user *)regs->eip); 933 math_error((void __user *)regs->eip);
933 } 934 }
934 935
935 static void simd_math_error(void __user *eip) 936 static void simd_math_error(void __user *eip)
936 { 937 {
937 struct task_struct * task; 938 struct task_struct * task;
938 siginfo_t info; 939 siginfo_t info;
939 unsigned short mxcsr; 940 unsigned short mxcsr;
940 941
941 /* 942 /*
942 * Save the info for the exception handler and clear the error. 943 * Save the info for the exception handler and clear the error.
943 */ 944 */
944 task = current; 945 task = current;
945 save_init_fpu(task); 946 save_init_fpu(task);
946 task->thread.trap_no = 19; 947 task->thread.trap_no = 19;
947 task->thread.error_code = 0; 948 task->thread.error_code = 0;
948 info.si_signo = SIGFPE; 949 info.si_signo = SIGFPE;
949 info.si_errno = 0; 950 info.si_errno = 0;
950 info.si_code = __SI_FAULT; 951 info.si_code = __SI_FAULT;
951 info.si_addr = eip; 952 info.si_addr = eip;
952 /* 953 /*
953 * The SIMD FPU exceptions are handled a little differently, as there 954 * The SIMD FPU exceptions are handled a little differently, as there
954 * is only a single status/control register. Thus, to determine which 955 * is only a single status/control register. Thus, to determine which
955 * unmasked exception was caught we must mask the exception mask bits 956 * unmasked exception was caught we must mask the exception mask bits
956 * at 0x1f80, and then use these to mask the exception bits at 0x3f. 957 * at 0x1f80, and then use these to mask the exception bits at 0x3f.
957 */ 958 */
958 mxcsr = get_fpu_mxcsr(task); 959 mxcsr = get_fpu_mxcsr(task);
959 switch (~((mxcsr & 0x1f80) >> 7) & (mxcsr & 0x3f)) { 960 switch (~((mxcsr & 0x1f80) >> 7) & (mxcsr & 0x3f)) {
960 case 0x000: 961 case 0x000:
961 default: 962 default:
962 break; 963 break;
963 case 0x001: /* Invalid Op */ 964 case 0x001: /* Invalid Op */
964 info.si_code = FPE_FLTINV; 965 info.si_code = FPE_FLTINV;
965 break; 966 break;
966 case 0x002: /* Denormalize */ 967 case 0x002: /* Denormalize */
967 case 0x010: /* Underflow */ 968 case 0x010: /* Underflow */
968 info.si_code = FPE_FLTUND; 969 info.si_code = FPE_FLTUND;
969 break; 970 break;
970 case 0x004: /* Zero Divide */ 971 case 0x004: /* Zero Divide */
971 info.si_code = FPE_FLTDIV; 972 info.si_code = FPE_FLTDIV;
972 break; 973 break;
973 case 0x008: /* Overflow */ 974 case 0x008: /* Overflow */
974 info.si_code = FPE_FLTOVF; 975 info.si_code = FPE_FLTOVF;
975 break; 976 break;
976 case 0x020: /* Precision */ 977 case 0x020: /* Precision */
977 info.si_code = FPE_FLTRES; 978 info.si_code = FPE_FLTRES;
978 break; 979 break;
979 } 980 }
980 force_sig_info(SIGFPE, &info, task); 981 force_sig_info(SIGFPE, &info, task);
981 } 982 }
982 983
983 fastcall void do_simd_coprocessor_error(struct pt_regs * regs, 984 fastcall void do_simd_coprocessor_error(struct pt_regs * regs,
984 long error_code) 985 long error_code)
985 { 986 {
986 if (cpu_has_xmm) { 987 if (cpu_has_xmm) {
987 /* Handle SIMD FPU exceptions on PIII+ processors. */ 988 /* Handle SIMD FPU exceptions on PIII+ processors. */
988 ignore_fpu_irq = 1; 989 ignore_fpu_irq = 1;
989 simd_math_error((void __user *)regs->eip); 990 simd_math_error((void __user *)regs->eip);
990 } else { 991 } else {
991 /* 992 /*
992 * Handle strange cache flush from user space exception 993 * Handle strange cache flush from user space exception
993 * in all other cases. This is undocumented behaviour. 994 * in all other cases. This is undocumented behaviour.
994 */ 995 */
995 if (regs->eflags & VM_MASK) { 996 if (regs->eflags & VM_MASK) {
996 handle_vm86_fault((struct kernel_vm86_regs *)regs, 997 handle_vm86_fault((struct kernel_vm86_regs *)regs,
997 error_code); 998 error_code);
998 return; 999 return;
999 } 1000 }
1000 current->thread.trap_no = 19; 1001 current->thread.trap_no = 19;
1001 current->thread.error_code = error_code; 1002 current->thread.error_code = error_code;
1002 die_if_kernel("cache flush denied", regs, error_code); 1003 die_if_kernel("cache flush denied", regs, error_code);
1003 force_sig(SIGSEGV, current); 1004 force_sig(SIGSEGV, current);
1004 } 1005 }
1005 } 1006 }
1006 1007
1007 fastcall void do_spurious_interrupt_bug(struct pt_regs * regs, 1008 fastcall void do_spurious_interrupt_bug(struct pt_regs * regs,
1008 long error_code) 1009 long error_code)
1009 { 1010 {
1010 #if 0 1011 #if 0
1011 /* No need to warn about this any longer. */ 1012 /* No need to warn about this any longer. */
1012 printk("Ignoring P6 Local APIC Spurious Interrupt Bug...\n"); 1013 printk("Ignoring P6 Local APIC Spurious Interrupt Bug...\n");
1013 #endif 1014 #endif
1014 } 1015 }
1015 1016
1016 fastcall unsigned long patch_espfix_desc(unsigned long uesp, 1017 fastcall unsigned long patch_espfix_desc(unsigned long uesp,
1017 unsigned long kesp) 1018 unsigned long kesp)
1018 { 1019 {
1019 struct desc_struct *gdt = __get_cpu_var(gdt_page).gdt; 1020 struct desc_struct *gdt = __get_cpu_var(gdt_page).gdt;
1020 unsigned long base = (kesp - uesp) & -THREAD_SIZE; 1021 unsigned long base = (kesp - uesp) & -THREAD_SIZE;
1021 unsigned long new_kesp = kesp - base; 1022 unsigned long new_kesp = kesp - base;
1022 unsigned long lim_pages = (new_kesp | (THREAD_SIZE - 1)) >> PAGE_SHIFT; 1023 unsigned long lim_pages = (new_kesp | (THREAD_SIZE - 1)) >> PAGE_SHIFT;
1023 __u64 desc = *(__u64 *)&gdt[GDT_ENTRY_ESPFIX_SS]; 1024 __u64 desc = *(__u64 *)&gdt[GDT_ENTRY_ESPFIX_SS];
1024 /* Set up base for espfix segment */ 1025 /* Set up base for espfix segment */
1025 desc &= 0x00f0ff0000000000ULL; 1026 desc &= 0x00f0ff0000000000ULL;
1026 desc |= ((((__u64)base) << 16) & 0x000000ffffff0000ULL) | 1027 desc |= ((((__u64)base) << 16) & 0x000000ffffff0000ULL) |
1027 ((((__u64)base) << 32) & 0xff00000000000000ULL) | 1028 ((((__u64)base) << 32) & 0xff00000000000000ULL) |
1028 ((((__u64)lim_pages) << 32) & 0x000f000000000000ULL) | 1029 ((((__u64)lim_pages) << 32) & 0x000f000000000000ULL) |
1029 (lim_pages & 0xffff); 1030 (lim_pages & 0xffff);
1030 *(__u64 *)&gdt[GDT_ENTRY_ESPFIX_SS] = desc; 1031 *(__u64 *)&gdt[GDT_ENTRY_ESPFIX_SS] = desc;
1031 return new_kesp; 1032 return new_kesp;
1032 } 1033 }
1033 1034
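patch_espfix_desc() rebuilds the ESPFIX stack segment descriptor in place.
The masks follow the i386 descriptor layout: limit 0..15 lives in bits
0..15, base 0..23 in bits 16..39, limit 16..19 in bits 48..51 and base
24..31 in bits 56..63, while 0x00f0ff0000000000 preserves the type/DPL/G
flag bits. A standalone sketch of the same packing (the helper is invented
for illustration):

    #include <assert.h>

    typedef unsigned long long u64;

    /* Hypothetical helper packing base/limit exactly as the masks
     * in patch_espfix_desc() do. */
    static u64 pack_desc(u64 flags, unsigned long base, unsigned long lim_pages)
    {
            return flags
                    | (((u64)base << 16) & 0x000000ffffff0000ULL)      /* base 0..23 */
                    | (((u64)base << 32) & 0xff00000000000000ULL)      /* base 24..31 */
                    | (((u64)lim_pages << 32) & 0x000f000000000000ULL) /* limit 16..19 */
                    | (lim_pages & 0xffff);                            /* limit 0..15 */
    }

    int main(void)
    {
            u64 d = pack_desc(0, 0x12345678, 0xABCDE);

            assert(((d >> 16) & 0xffffff) == 0x345678);     /* base 0..23 */
            assert((d >> 56) == 0x12);                      /* base 24..31 */
            assert((d & 0xffff) == 0xBCDE);                 /* limit 0..15 */
            assert(((d >> 48) & 0xf) == 0xA);               /* limit 16..19 */
            return 0;
    }
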
1034 /* 1035 /*
1035 * 'math_state_restore()' saves the current math information in the 1036 * 'math_state_restore()' saves the current math information in the
1036 * old math state array, and gets the new ones from the current task 1037 * old math state array, and gets the new ones from the current task
1037 * 1038 *
1038 * Careful: there are problems with IBM-designed IRQ13 behaviour. 1039 * Careful: there are problems with IBM-designed IRQ13 behaviour.
1039 * Don't touch unless you *really* know how it works. 1040 * Don't touch unless you *really* know how it works.
1040 * 1041 *
1041 * Must be called with kernel preemption disabled (in this case, 1042 * Must be called with kernel preemption disabled (in this case,
1042 * local interrupts are disabled at the call-site in entry.S). 1043 * local interrupts are disabled at the call-site in entry.S).
1043 */ 1044 */
1044 asmlinkage void math_state_restore(void) 1045 asmlinkage void math_state_restore(void)
1045 { 1046 {
1046 struct thread_info *thread = current_thread_info(); 1047 struct thread_info *thread = current_thread_info();
1047 struct task_struct *tsk = thread->task; 1048 struct task_struct *tsk = thread->task;
1048 1049
1049 clts(); /* Allow maths ops (or we recurse) */ 1050 clts(); /* Allow maths ops (or we recurse) */
1050 if (!tsk_used_math(tsk)) 1051 if (!tsk_used_math(tsk))
1051 init_fpu(tsk); 1052 init_fpu(tsk);
1052 restore_fpu(tsk); 1053 restore_fpu(tsk);
1053 thread->status |= TS_USEDFPU; /* So we fnsave on switch_to() */ 1054 thread->status |= TS_USEDFPU; /* So we fnsave on switch_to() */
1054 tsk->fpu_counter++; 1055 tsk->fpu_counter++;
1055 } 1056 }
1056 1057
1057 #ifndef CONFIG_MATH_EMULATION 1058 #ifndef CONFIG_MATH_EMULATION
1058 1059
1059 asmlinkage void math_emulate(long arg) 1060 asmlinkage void math_emulate(long arg)
1060 { 1061 {
1061 printk(KERN_EMERG "math-emulation not enabled and no coprocessor found.\n"); 1062 printk(KERN_EMERG "math-emulation not enabled and no coprocessor found.\n");
1062 printk(KERN_EMERG "killing %s.\n",current->comm); 1063 printk(KERN_EMERG "killing %s.\n",current->comm);
1063 force_sig(SIGFPE,current); 1064 force_sig(SIGFPE,current);
1064 schedule(); 1065 schedule();
1065 } 1066 }
1066 1067
1067 #endif /* CONFIG_MATH_EMULATION */ 1068 #endif /* CONFIG_MATH_EMULATION */
1068 1069
1069 #ifdef CONFIG_X86_F00F_BUG 1070 #ifdef CONFIG_X86_F00F_BUG
1070 void __init trap_init_f00f_bug(void) 1071 void __init trap_init_f00f_bug(void)
1071 { 1072 {
1072 __set_fixmap(FIX_F00F_IDT, __pa(&idt_table), PAGE_KERNEL_RO); 1073 __set_fixmap(FIX_F00F_IDT, __pa(&idt_table), PAGE_KERNEL_RO);
1073 1074
1074 /* 1075 /*
1075 * Update the IDT descriptor and reload the IDT so that 1076 * Update the IDT descriptor and reload the IDT so that
1076 * it uses the read-only mapped virtual address. 1077 * it uses the read-only mapped virtual address.
1077 */ 1078 */
1078 idt_descr.address = fix_to_virt(FIX_F00F_IDT); 1079 idt_descr.address = fix_to_virt(FIX_F00F_IDT);
1079 load_idt(&idt_descr); 1080 load_idt(&idt_descr);
1080 } 1081 }
1081 #endif 1082 #endif
1082 1083
1083 /* 1084 /*
1084 * This needs to use 'idt_table' rather than 'idt', and 1085 * This needs to use 'idt_table' rather than 'idt', and
1085 * thus use the _nonmapped_ version of the IDT, as the 1086 * thus use the _nonmapped_ version of the IDT, as the
1086 * Pentium F0 0F bugfix can have resulted in the mapped 1087 * Pentium F0 0F bugfix can have resulted in the mapped
1087 * IDT being write-protected. 1088 * IDT being write-protected.
1088 */ 1089 */
1089 void set_intr_gate(unsigned int n, void *addr) 1090 void set_intr_gate(unsigned int n, void *addr)
1090 { 1091 {
1091 _set_gate(n, DESCTYPE_INT, addr, __KERNEL_CS); 1092 _set_gate(n, DESCTYPE_INT, addr, __KERNEL_CS);
1092 } 1093 }
1093 1094
1094 /* 1095 /*
1095 * This routine sets up an interrupt gate at descriptor privilege level 3. 1096 * This routine sets up an interrupt gate at descriptor privilege level 3.
1096 */ 1097 */
1097 static inline void set_system_intr_gate(unsigned int n, void *addr) 1098 static inline void set_system_intr_gate(unsigned int n, void *addr)
1098 { 1099 {
1099 _set_gate(n, DESCTYPE_INT | DESCTYPE_DPL3, addr, __KERNEL_CS); 1100 _set_gate(n, DESCTYPE_INT | DESCTYPE_DPL3, addr, __KERNEL_CS);
1100 } 1101 }
1101 1102
1102 static void __init set_trap_gate(unsigned int n, void *addr) 1103 static void __init set_trap_gate(unsigned int n, void *addr)
1103 { 1104 {
1104 _set_gate(n, DESCTYPE_TRAP, addr, __KERNEL_CS); 1105 _set_gate(n, DESCTYPE_TRAP, addr, __KERNEL_CS);
1105 } 1106 }
1106 1107
1107 static void __init set_system_gate(unsigned int n, void *addr) 1108 static void __init set_system_gate(unsigned int n, void *addr)
1108 { 1109 {
1109 _set_gate(n, DESCTYPE_TRAP | DESCTYPE_DPL3, addr, __KERNEL_CS); 1110 _set_gate(n, DESCTYPE_TRAP | DESCTYPE_DPL3, addr, __KERNEL_CS);
1110 } 1111 }
1111 1112
1112 static void __init set_task_gate(unsigned int n, unsigned int gdt_entry) 1113 static void __init set_task_gate(unsigned int n, unsigned int gdt_entry)
1113 { 1114 {
1114 _set_gate(n, DESCTYPE_TASK, (void *)0, (gdt_entry<<3)); 1115 _set_gate(n, DESCTYPE_TASK, (void *)0, (gdt_entry<<3));
1115 } 1116 }
1116 1117
1117 1118
1118 void __init trap_init(void) 1119 void __init trap_init(void)
1119 { 1120 {
1120 #ifdef CONFIG_EISA 1121 #ifdef CONFIG_EISA
1121 void __iomem *p = ioremap(0x0FFFD9, 4); 1122 void __iomem *p = ioremap(0x0FFFD9, 4);
1122 if (readl(p) == 'E'+('I'<<8)+('S'<<16)+('A'<<24)) { 1123 if (readl(p) == 'E'+('I'<<8)+('S'<<16)+('A'<<24)) {
1123 EISA_bus = 1; 1124 EISA_bus = 1;
1124 } 1125 }
1125 iounmap(p); 1126 iounmap(p);
1126 #endif 1127 #endif
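
The readl() comparison in the block above works because i386 is little-endian: the four signature bytes "EISA" in memory read back as the 32-bit value 'E' + ('I'<<8) + ('S'<<16) + ('A'<<24). A small host-side check of that identity (it assumes a little-endian machine, and the names are illustrative):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            const char sig[4] = { 'E', 'I', 'S', 'A' };
            uint32_t v;

            memcpy(&v, sig, sizeof(v));     /* what readl() would see */
            printf("%s\n", v == ('E' + ('I' << 8) + ('S' << 16) + ('A' << 24))
                   ? "signature matches" : "not little-endian");
            return 0;
    }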
1127 1128
1128 #ifdef CONFIG_X86_LOCAL_APIC 1129 #ifdef CONFIG_X86_LOCAL_APIC
1129 init_apic_mappings(); 1130 init_apic_mappings();
1130 #endif 1131 #endif
1131 1132
1132 set_trap_gate(0,&divide_error); 1133 set_trap_gate(0,&divide_error);
1133 set_intr_gate(1,&debug); 1134 set_intr_gate(1,&debug);
1134 set_intr_gate(2,&nmi); 1135 set_intr_gate(2,&nmi);
1135 set_system_intr_gate(3, &int3); /* int3/4 can be called from all */ 1136 set_system_intr_gate(3, &int3); /* int3/4 can be called from all */
1136 set_system_gate(4,&overflow); 1137 set_system_gate(4,&overflow);
1137 set_trap_gate(5,&bounds); 1138 set_trap_gate(5,&bounds);
1138 set_trap_gate(6,&invalid_op); 1139 set_trap_gate(6,&invalid_op);
1139 set_trap_gate(7,&device_not_available); 1140 set_trap_gate(7,&device_not_available);
1140 set_task_gate(8,GDT_ENTRY_DOUBLEFAULT_TSS); 1141 set_task_gate(8,GDT_ENTRY_DOUBLEFAULT_TSS);
1141 set_trap_gate(9,&coprocessor_segment_overrun); 1142 set_trap_gate(9,&coprocessor_segment_overrun);
1142 set_trap_gate(10,&invalid_TSS); 1143 set_trap_gate(10,&invalid_TSS);
1143 set_trap_gate(11,&segment_not_present); 1144 set_trap_gate(11,&segment_not_present);
1144 set_trap_gate(12,&stack_segment); 1145 set_trap_gate(12,&stack_segment);
1145 set_trap_gate(13,&general_protection); 1146 set_trap_gate(13,&general_protection);
1146 set_intr_gate(14,&page_fault); 1147 set_intr_gate(14,&page_fault);
1147 set_trap_gate(15,&spurious_interrupt_bug); 1148 set_trap_gate(15,&spurious_interrupt_bug);
1148 set_trap_gate(16,&coprocessor_error); 1149 set_trap_gate(16,&coprocessor_error);
1149 set_trap_gate(17,&alignment_check); 1150 set_trap_gate(17,&alignment_check);
1150 #ifdef CONFIG_X86_MCE 1151 #ifdef CONFIG_X86_MCE
1151 set_trap_gate(18,&machine_check); 1152 set_trap_gate(18,&machine_check);
1152 #endif 1153 #endif
1153 set_trap_gate(19,&simd_coprocessor_error); 1154 set_trap_gate(19,&simd_coprocessor_error);
1154 1155
1155 if (cpu_has_fxsr) { 1156 if (cpu_has_fxsr) {
1156 /* 1157 /*
1157 * Verify that the FXSAVE/FXRSTOR data will be 16-byte aligned. 1158 * Verify that the FXSAVE/FXRSTOR data will be 16-byte aligned.
1158 * Generates a compile-time "error: zero width for bit-field" if 1159 * Generates a compile-time "error: zero width for bit-field" if
1159 * the alignment is wrong. 1160 * the alignment is wrong.
1160 */ 1161 */
1161 struct fxsrAlignAssert { 1162 struct fxsrAlignAssert {
1162 int _:!(offsetof(struct task_struct, 1163 int _:!(offsetof(struct task_struct,
1163 thread.i387.fxsave) & 15); 1164 thread.i387.fxsave) & 15);
1164 }; 1165 };
1165 1166
1166 printk(KERN_INFO "Enabling fast FPU save and restore... "); 1167 printk(KERN_INFO "Enabling fast FPU save and restore... ");
1167 set_in_cr4(X86_CR4_OSFXSR); 1168 set_in_cr4(X86_CR4_OSFXSR);
1168 printk("done.\n"); 1169 printk("done.\n");
1169 } 1170 }
1170 if (cpu_has_xmm) { 1171 if (cpu_has_xmm) {
1171 printk(KERN_INFO "Enabling unmasked SIMD FPU exception " 1172 printk(KERN_INFO "Enabling unmasked SIMD FPU exception "
1172 "support... "); 1173 "support... ");
1173 set_in_cr4(X86_CR4_OSXMMEXCPT); 1174 set_in_cr4(X86_CR4_OSXMMEXCPT);
1174 printk("done.\n"); 1175 printk("done.\n");
1175 } 1176 }
1176 1177
1177 set_system_gate(SYSCALL_VECTOR,&system_call); 1178 set_system_gate(SYSCALL_VECTOR,&system_call);
1178 1179
1179 /* 1180 /*
1180 * Should be a barrier for any external CPU state. 1181 * Should be a barrier for any external CPU state.
1181 */ 1182 */
1182 cpu_init(); 1183 cpu_init();
1183 1184
1184 trap_init_hook(); 1185 trap_init_hook();
1185 } 1186 }
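
The fxsrAlignAssert struct inside trap_init() above is a pre-C11 compile-time assertion: a named bit-field may not have zero width, so giving it width !(offset & 15) compiles only when the fxsave area really is 16-byte aligned. A minimal illustration of the same trick outside the kernel (the struct and macro names are hypothetical; in C11 one would simply write _Static_assert):

    #include <stddef.h>

    /* Compiles only when cond is true: a zero-width *named* bit-field
     * is a constraint violation, so a false condition fails with
     * "zero width for bit-field", exactly as in trap_init(). */
    #define BUILD_ASSERT(cond) struct { int _ : !!(cond); }

    struct frame {                  /* hypothetical example layout */
            char header[16];
            char buf[16];           /* expected at a 16-byte offset */
    };

    typedef BUILD_ASSERT(offsetof(struct frame, buf) % 16 == 0)
            frame_buf_aligned;

    int main(void) { return 0; }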
1186 1187
1187 static int __init kstack_setup(char *s) 1188 static int __init kstack_setup(char *s)
1188 { 1189 {
1189 kstack_depth_to_print = simple_strtoul(s, NULL, 0); 1190 kstack_depth_to_print = simple_strtoul(s, NULL, 0);
1190 return 1; 1191 return 1;
1191 } 1192 }
1192 __setup("kstack=", kstack_setup); 1193 __setup("kstack=", kstack_setup);
1193 1194
1194 static int __init code_bytes_setup(char *s) 1195 static int __init code_bytes_setup(char *s)
1195 { 1196 {
1196 code_bytes = simple_strtoul(s, NULL, 0); 1197 code_bytes = simple_strtoul(s, NULL, 0);
1197 if (code_bytes > 8192) 1198 if (code_bytes > 8192)
1198 code_bytes = 8192; 1199 code_bytes = 8192;
1199 1200
1200 return 1; 1201 return 1;
1201 } 1202 }
1202 __setup("code_bytes=", code_bytes_setup); 1203 __setup("code_bytes=", code_bytes_setup);
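
Both handlers above follow the standard __setup() pattern: the text after "kstack=" or "code_bytes=" arrives as s, simple_strtoul(s, NULL, 0) parses it (base 0, so decimal, octal and hex all work), and returning 1 tells the boot code the option was consumed. A userspace sketch of the code_bytes parse-and-clamp logic, with the standard-library strtoul standing in for the kernel helper (names illustrative):

    #include <stdio.h>
    #include <stdlib.h>

    static unsigned long code_bytes = 64;   /* illustrative default */

    /* Mirrors code_bytes_setup(): parse with base 0, clamp to 8192. */
    static int parse_code_bytes(const char *s)
    {
            code_bytes = strtoul(s, NULL, 0);
            if (code_bytes > 8192)
                    code_bytes = 8192;
            return 1;                       /* option consumed */
    }

    int main(void)
    {
            parse_code_bytes("0x4000");     /* 16384, gets clamped */
            printf("code_bytes = %lu\n", code_bytes);   /* prints 8192 */
            return 0;
    }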
1203 1204
arch/ia64/kernel/traps.c
1 /* 1 /*
2 * Architecture-specific trap handling. 2 * Architecture-specific trap handling.
3 * 3 *
4 * Copyright (C) 1998-2003 Hewlett-Packard Co 4 * Copyright (C) 1998-2003 Hewlett-Packard Co
5 * David Mosberger-Tang <davidm@hpl.hp.com> 5 * David Mosberger-Tang <davidm@hpl.hp.com>
6 * 6 *
7 * 05/12/00 grao <goutham.rao@intel.com> : added isr in siginfo for SIGFPE 7 * 05/12/00 grao <goutham.rao@intel.com> : added isr in siginfo for SIGFPE
8 */ 8 */
9 9
10 #include <linux/kernel.h> 10 #include <linux/kernel.h>
11 #include <linux/init.h> 11 #include <linux/init.h>
12 #include <linux/sched.h> 12 #include <linux/sched.h>
13 #include <linux/tty.h> 13 #include <linux/tty.h>
14 #include <linux/vt_kern.h> /* For unblank_screen() */ 14 #include <linux/vt_kern.h> /* For unblank_screen() */
15 #include <linux/module.h> /* for EXPORT_SYMBOL */ 15 #include <linux/module.h> /* for EXPORT_SYMBOL */
16 #include <linux/hardirq.h> 16 #include <linux/hardirq.h>
17 #include <linux/kprobes.h> 17 #include <linux/kprobes.h>
18 #include <linux/delay.h> /* for ssleep() */ 18 #include <linux/delay.h> /* for ssleep() */
19 #include <linux/kdebug.h> 19 #include <linux/kdebug.h>
20 20
21 #include <asm/fpswa.h> 21 #include <asm/fpswa.h>
22 #include <asm/ia32.h> 22 #include <asm/ia32.h>
23 #include <asm/intrinsics.h> 23 #include <asm/intrinsics.h>
24 #include <asm/processor.h> 24 #include <asm/processor.h>
25 #include <asm/uaccess.h> 25 #include <asm/uaccess.h>
26 26
27 fpswa_interface_t *fpswa_interface; 27 fpswa_interface_t *fpswa_interface;
28 EXPORT_SYMBOL(fpswa_interface); 28 EXPORT_SYMBOL(fpswa_interface);
29 29
30 void __init 30 void __init
31 trap_init (void) 31 trap_init (void)
32 { 32 {
33 if (ia64_boot_param->fpswa) 33 if (ia64_boot_param->fpswa)
34 /* FPSWA fixup: make the interface pointer a kernel virtual address: */ 34 /* FPSWA fixup: make the interface pointer a kernel virtual address: */
35 fpswa_interface = __va(ia64_boot_param->fpswa); 35 fpswa_interface = __va(ia64_boot_param->fpswa);
36 } 36 }
37 37
38 void 38 void
39 die (const char *str, struct pt_regs *regs, long err) 39 die (const char *str, struct pt_regs *regs, long err)
40 { 40 {
41 static struct { 41 static struct {
42 spinlock_t lock; 42 spinlock_t lock;
43 u32 lock_owner; 43 u32 lock_owner;
44 int lock_owner_depth; 44 int lock_owner_depth;
45 } die = { 45 } die = {
46 .lock = __SPIN_LOCK_UNLOCKED(die.lock), 46 .lock = __SPIN_LOCK_UNLOCKED(die.lock),
47 .lock_owner = -1, 47 .lock_owner = -1,
48 .lock_owner_depth = 0 48 .lock_owner_depth = 0
49 }; 49 };
50 static int die_counter; 50 static int die_counter;
51 int cpu = get_cpu(); 51 int cpu = get_cpu();
52 52
53 if (die.lock_owner != cpu) { 53 if (die.lock_owner != cpu) {
54 console_verbose(); 54 console_verbose();
55 spin_lock_irq(&die.lock); 55 spin_lock_irq(&die.lock);
56 die.lock_owner = cpu; 56 die.lock_owner = cpu;
57 die.lock_owner_depth = 0; 57 die.lock_owner_depth = 0;
58 bust_spinlocks(1); 58 bust_spinlocks(1);
59 } 59 }
60 put_cpu(); 60 put_cpu();
61 61
62 if (++die.lock_owner_depth < 3) { 62 if (++die.lock_owner_depth < 3) {
63 printk("%s[%d]: %s %ld [%d]\n", 63 printk("%s[%d]: %s %ld [%d]\n",
64 current->comm, current->pid, str, err, ++die_counter); 64 current->comm, current->pid, str, err, ++die_counter);
65 (void) notify_die(DIE_OOPS, (char *)str, regs, err, 255, SIGSEGV); 65 (void) notify_die(DIE_OOPS, (char *)str, regs, err, 255, SIGSEGV);
66 show_regs(regs); 66 show_regs(regs);
67 } else 67 } else
68 printk(KERN_ERR "Recursive die() failure, output suppressed\n"); 68 printk(KERN_ERR "Recursive die() failure, output suppressed\n");
69 69
70 bust_spinlocks(0); 70 bust_spinlocks(0);
71 die.lock_owner = -1; 71 die.lock_owner = -1;
72 add_taint(TAINT_DIE);
72 spin_unlock_irq(&die.lock); 73 spin_unlock_irq(&die.lock);
73 74
74 if (panic_on_oops) 75 if (panic_on_oops)
75 panic("Fatal exception"); 76 panic("Fatal exception");
76 77
77 do_exit(SIGSEGV); 78 do_exit(SIGSEGV);
78 } 79 }
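
This hunk is the commit's payload for ia64: just before the die lock is released, add_taint(TAINT_DIE) latches the fact that the kernel has oopsed, so every later oops or SysRq dump is reported against a tainted kernel. A userspace model of that bookkeeping follows; the bit position and the 'D' letter are illustrative stand-ins rather than the kernel's exact taint table:

    #include <stdio.h>

    #define TAINT_DIE 7                     /* illustrative bit number */

    static unsigned long tainted;

    static void add_taint(unsigned int flag)
    {
            tainted |= 1UL << flag;         /* sticky: never cleared */
    }

    static const char *print_tainted(void)
    {
            return (tainted & (1UL << TAINT_DIE))
                   ? "Tainted: D" : "Not tainted";
    }

    int main(void)
    {
            printf("%s\n", print_tainted());    /* Not tainted */
            add_taint(TAINT_DIE);               /* what die() now does */
            printf("%s\n", print_tainted());    /* Tainted: D */
            return 0;
    }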
79 80
80 void 81 void
81 die_if_kernel (char *str, struct pt_regs *regs, long err) 82 die_if_kernel (char *str, struct pt_regs *regs, long err)
82 { 83 {
83 if (!user_mode(regs)) 84 if (!user_mode(regs))
84 die(str, regs, err); 85 die(str, regs, err);
85 } 86 }
86 87
87 void 88 void
88 __kprobes ia64_bad_break (unsigned long break_num, struct pt_regs *regs) 89 __kprobes ia64_bad_break (unsigned long break_num, struct pt_regs *regs)
89 { 90 {
90 siginfo_t siginfo; 91 siginfo_t siginfo;
91 int sig, code; 92 int sig, code;
92 93
93 /* SIGILL, SIGFPE, SIGSEGV, and SIGBUS want these fields initialized: */ 94 /* SIGILL, SIGFPE, SIGSEGV, and SIGBUS want these fields initialized: */
94 siginfo.si_addr = (void __user *) (regs->cr_iip + ia64_psr(regs)->ri); 95 siginfo.si_addr = (void __user *) (regs->cr_iip + ia64_psr(regs)->ri);
95 siginfo.si_imm = break_num; 96 siginfo.si_imm = break_num;
96 siginfo.si_flags = 0; /* clear __ISR_VALID */ 97 siginfo.si_flags = 0; /* clear __ISR_VALID */
97 siginfo.si_isr = 0; 98 siginfo.si_isr = 0;
98 99
99 switch (break_num) { 100 switch (break_num) {
100 case 0: /* unknown error (used by GCC for __builtin_abort()) */ 101 case 0: /* unknown error (used by GCC for __builtin_abort()) */
101 if (notify_die(DIE_BREAK, "break 0", regs, break_num, TRAP_BRKPT, SIGTRAP) 102 if (notify_die(DIE_BREAK, "break 0", regs, break_num, TRAP_BRKPT, SIGTRAP)
102 == NOTIFY_STOP) 103 == NOTIFY_STOP)
103 return; 104 return;
104 die_if_kernel("bugcheck!", regs, break_num); 105 die_if_kernel("bugcheck!", regs, break_num);
105 sig = SIGILL; code = ILL_ILLOPC; 106 sig = SIGILL; code = ILL_ILLOPC;
106 break; 107 break;
107 108
108 case 1: /* integer divide by zero */ 109 case 1: /* integer divide by zero */
109 sig = SIGFPE; code = FPE_INTDIV; 110 sig = SIGFPE; code = FPE_INTDIV;
110 break; 111 break;
111 112
112 case 2: /* integer overflow */ 113 case 2: /* integer overflow */
113 sig = SIGFPE; code = FPE_INTOVF; 114 sig = SIGFPE; code = FPE_INTOVF;
114 break; 115 break;
115 116
116 case 3: /* range check/bounds check */ 117 case 3: /* range check/bounds check */
117 sig = SIGFPE; code = FPE_FLTSUB; 118 sig = SIGFPE; code = FPE_FLTSUB;
118 break; 119 break;
119 120
120 case 4: /* null pointer dereference */ 121 case 4: /* null pointer dereference */
121 sig = SIGSEGV; code = SEGV_MAPERR; 122 sig = SIGSEGV; code = SEGV_MAPERR;
122 break; 123 break;
123 124
124 case 5: /* misaligned data */ 125 case 5: /* misaligned data */
125 sig = SIGSEGV; code = BUS_ADRALN; 126 sig = SIGSEGV; code = BUS_ADRALN;
126 break; 127 break;
127 128
128 case 6: /* decimal overflow */ 129 case 6: /* decimal overflow */
129 sig = SIGFPE; code = __FPE_DECOVF; 130 sig = SIGFPE; code = __FPE_DECOVF;
130 break; 131 break;
131 132
132 case 7: /* decimal divide by zero */ 133 case 7: /* decimal divide by zero */
133 sig = SIGFPE; code = __FPE_DECDIV; 134 sig = SIGFPE; code = __FPE_DECDIV;
134 break; 135 break;
135 136
136 case 8: /* packed decimal error */ 137 case 8: /* packed decimal error */
137 sig = SIGFPE; code = __FPE_DECERR; 138 sig = SIGFPE; code = __FPE_DECERR;
138 break; 139 break;
139 140
140 case 9: /* invalid ASCII digit */ 141 case 9: /* invalid ASCII digit */
141 sig = SIGFPE; code = __FPE_INVASC; 142 sig = SIGFPE; code = __FPE_INVASC;
142 break; 143 break;
143 144
144 case 10: /* invalid decimal digit */ 145 case 10: /* invalid decimal digit */
145 sig = SIGFPE; code = __FPE_INVDEC; 146 sig = SIGFPE; code = __FPE_INVDEC;
146 break; 147 break;
147 148
148 case 11: /* paragraph stack overflow */ 149 case 11: /* paragraph stack overflow */
149 sig = SIGSEGV; code = __SEGV_PSTKOVF; 150 sig = SIGSEGV; code = __SEGV_PSTKOVF;
150 break; 151 break;
151 152
152 case 0x3f000 ... 0x3ffff: /* bundle-update in progress */ 153 case 0x3f000 ... 0x3ffff: /* bundle-update in progress */
153 sig = SIGILL; code = __ILL_BNDMOD; 154 sig = SIGILL; code = __ILL_BNDMOD;
154 break; 155 break;
155 156
156 default: 157 default:
157 if (break_num < 0x40000 || break_num > 0x100000) 158 if (break_num < 0x40000 || break_num > 0x100000)
158 die_if_kernel("Bad break", regs, break_num); 159 die_if_kernel("Bad break", regs, break_num);
159 160
160 if (break_num < 0x80000) { 161 if (break_num < 0x80000) {
161 sig = SIGILL; code = __ILL_BREAK; 162 sig = SIGILL; code = __ILL_BREAK;
162 } else { 163 } else {
163 if (notify_die(DIE_BREAK, "bad break", regs, break_num, TRAP_BRKPT, SIGTRAP) 164 if (notify_die(DIE_BREAK, "bad break", regs, break_num, TRAP_BRKPT, SIGTRAP)
164 == NOTIFY_STOP) 165 == NOTIFY_STOP)
165 return; 166 return;
166 sig = SIGTRAP; code = TRAP_BRKPT; 167 sig = SIGTRAP; code = TRAP_BRKPT;
167 } 168 }
168 } 169 }
169 siginfo.si_signo = sig; 170 siginfo.si_signo = sig;
170 siginfo.si_errno = 0; 171 siginfo.si_errno = 0;
171 siginfo.si_code = code; 172 siginfo.si_code = code;
172 force_sig_info(sig, &siginfo, current); 173 force_sig_info(sig, &siginfo, current);
173 } 174 }
174 175
175 /* 176 /*
176 * disabled_fph_fault() is called when a user-level process attempts to access f32..f127 177 * disabled_fph_fault() is called when a user-level process attempts to access f32..f127
177 * and it doesn't own the fp-high register partition. When this happens, we save the 178 * and it doesn't own the fp-high register partition. When this happens, we save the
178 * current fph partition in the task_struct of the fpu-owner (if necessary) and then load 179 * current fph partition in the task_struct of the fpu-owner (if necessary) and then load
179 * the fp-high partition of the current task (if necessary). Note that the kernel has 180 * the fp-high partition of the current task (if necessary). Note that the kernel has
180 * access to fph by the time we get here, as the IVT's "Disabled FP-Register" handler takes 181 * access to fph by the time we get here, as the IVT's "Disabled FP-Register" handler takes
181 * care of clearing psr.dfh. 182 * care of clearing psr.dfh.
182 */ 183 */
183 static inline void 184 static inline void
184 disabled_fph_fault (struct pt_regs *regs) 185 disabled_fph_fault (struct pt_regs *regs)
185 { 186 {
186 struct ia64_psr *psr = ia64_psr(regs); 187 struct ia64_psr *psr = ia64_psr(regs);
187 188
188 /* first, grant user-level access to fph partition: */ 189 /* first, grant user-level access to fph partition: */
189 psr->dfh = 0; 190 psr->dfh = 0;
190 191
191 /* 192 /*
192 * Make sure that no other task gets in on this processor 193 * Make sure that no other task gets in on this processor
193 * while we're claiming the FPU 194 * while we're claiming the FPU
194 */ 195 */
195 preempt_disable(); 196 preempt_disable();
196 #ifndef CONFIG_SMP 197 #ifndef CONFIG_SMP
197 { 198 {
198 struct task_struct *fpu_owner 199 struct task_struct *fpu_owner
199 = (struct task_struct *)ia64_get_kr(IA64_KR_FPU_OWNER); 200 = (struct task_struct *)ia64_get_kr(IA64_KR_FPU_OWNER);
200 201
201 if (ia64_is_local_fpu_owner(current)) { 202 if (ia64_is_local_fpu_owner(current)) {
202 preempt_enable_no_resched(); 203 preempt_enable_no_resched();
203 return; 204 return;
204 } 205 }
205 206
206 if (fpu_owner) 207 if (fpu_owner)
207 ia64_flush_fph(fpu_owner); 208 ia64_flush_fph(fpu_owner);
208 } 209 }
209 #endif /* !CONFIG_SMP */ 210 #endif /* !CONFIG_SMP */
210 ia64_set_local_fpu_owner(current); 211 ia64_set_local_fpu_owner(current);
211 if ((current->thread.flags & IA64_THREAD_FPH_VALID) != 0) { 212 if ((current->thread.flags & IA64_THREAD_FPH_VALID) != 0) {
212 __ia64_load_fpu(current->thread.fph); 213 __ia64_load_fpu(current->thread.fph);
213 psr->mfh = 0; 214 psr->mfh = 0;
214 } else { 215 } else {
215 __ia64_init_fpu(); 216 __ia64_init_fpu();
216 /* 217 /*
217 * Set mfh because the state in thread.fph does not match the state in 218 * Set mfh because the state in thread.fph does not match the state in
218 * the fph partition. 219 * the fph partition.
219 */ 220 */
220 psr->mfh = 1; 221 psr->mfh = 1;
221 } 222 }
222 preempt_enable_no_resched(); 223 preempt_enable_no_resched();
223 } 224 }
224 225
225 static inline int 226 static inline int
226 fp_emulate (int fp_fault, void *bundle, long *ipsr, long *fpsr, long *isr, long *pr, long *ifs, 227 fp_emulate (int fp_fault, void *bundle, long *ipsr, long *fpsr, long *isr, long *pr, long *ifs,
227 struct pt_regs *regs) 228 struct pt_regs *regs)
228 { 229 {
229 fp_state_t fp_state; 230 fp_state_t fp_state;
230 fpswa_ret_t ret; 231 fpswa_ret_t ret;
231 232
232 if (!fpswa_interface) 233 if (!fpswa_interface)
233 return -1; 234 return -1;
234 235
235 memset(&fp_state, 0, sizeof(fp_state_t)); 236 memset(&fp_state, 0, sizeof(fp_state_t));
236 237
237 /* 238 /*
238 * compute fp_state. only FP registers f6 - f11 are used by the 239 * compute fp_state. only FP registers f6 - f11 are used by the
239 * kernel, so set those bits in the mask and set the low volatile 240 * kernel, so set those bits in the mask and set the low volatile
240 * pointer to point to these registers. 241 * pointer to point to these registers.
241 */ 242 */
242 fp_state.bitmask_low64 = 0xfc0; /* bit6..bit11 */ 243 fp_state.bitmask_low64 = 0xfc0; /* bit6..bit11 */
243 244
244 fp_state.fp_state_low_volatile = (fp_state_low_volatile_t *) &regs->f6; 245 fp_state.fp_state_low_volatile = (fp_state_low_volatile_t *) &regs->f6;
245 /* 246 /*
246 * unsigned long (*EFI_FPSWA) ( 247 * unsigned long (*EFI_FPSWA) (
247 * unsigned long trap_type, 248 * unsigned long trap_type,
248 * void *Bundle, 249 * void *Bundle,
249 * unsigned long *pipsr, 250 * unsigned long *pipsr,
250 * unsigned long *pfsr, 251 * unsigned long *pfsr,
251 * unsigned long *pisr, 252 * unsigned long *pisr,
252 * unsigned long *ppreds, 253 * unsigned long *ppreds,
253 * unsigned long *pifs, 254 * unsigned long *pifs,
254 * void *fp_state); 255 * void *fp_state);
255 */ 256 */
256 ret = (*fpswa_interface->fpswa)((unsigned long) fp_fault, bundle, 257 ret = (*fpswa_interface->fpswa)((unsigned long) fp_fault, bundle,
257 (unsigned long *) ipsr, (unsigned long *) fpsr, 258 (unsigned long *) ipsr, (unsigned long *) fpsr,
258 (unsigned long *) isr, (unsigned long *) pr, 259 (unsigned long *) isr, (unsigned long *) pr,
259 (unsigned long *) ifs, &fp_state); 260 (unsigned long *) ifs, &fp_state);
260 261
261 return ret.status; 262 return ret.status;
262 } 263 }
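
fp_emulate() is essentially a guarded indirect call: if firmware published no FPSWA handler the function bails with -1, otherwise it forwards the fault through the interface's function pointer, whose prototype is spelled out in the comment above. A stripped-down sketch of the same call-through-an-interface pattern (the types, the dummy handler and the simplified signature are all illustrative):

    #include <stdio.h>

    struct fpswa_if {                       /* illustrative interface */
            unsigned int revision;
            long (*fpswa)(unsigned long trap_type, void *bundle);
    };

    static long dummy_fpswa(unsigned long trap_type, void *bundle)
    {
            (void)bundle;
            return trap_type ? 0 : -1;      /* pretend faults succeed */
    }

    static struct fpswa_if impl = { 1, dummy_fpswa };
    static struct fpswa_if *fpswa_interface = &impl;    /* set at boot */

    static long fp_emulate(int fp_fault, void *bundle)
    {
            if (!fpswa_interface)           /* firmware gave us nothing */
                    return -1;
            return fpswa_interface->fpswa((unsigned long)fp_fault, bundle);
    }

    int main(void)
    {
            printf("status = %ld\n", fp_emulate(1, NULL));
            return 0;
    }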
263 264
264 struct fpu_swa_msg { 265 struct fpu_swa_msg {
265 unsigned long count; 266 unsigned long count;
266 unsigned long time; 267 unsigned long time;
267 }; 268 };
268 static DEFINE_PER_CPU(struct fpu_swa_msg, cpulast); 269 static DEFINE_PER_CPU(struct fpu_swa_msg, cpulast);
269 DECLARE_PER_CPU(struct fpu_swa_msg, cpulast); 270 DECLARE_PER_CPU(struct fpu_swa_msg, cpulast);
270 static struct fpu_swa_msg last __cacheline_aligned; 271 static struct fpu_swa_msg last __cacheline_aligned;
271 272
272 273
273 /* 274 /*
274 * Handle floating-point assist faults and traps. 275 * Handle floating-point assist faults and traps.
275 */ 276 */
276 static int 277 static int
277 handle_fpu_swa (int fp_fault, struct pt_regs *regs, unsigned long isr) 278 handle_fpu_swa (int fp_fault, struct pt_regs *regs, unsigned long isr)
278 { 279 {
279 long exception, bundle[2]; 280 long exception, bundle[2];
280 unsigned long fault_ip; 281 unsigned long fault_ip;
281 struct siginfo siginfo; 282 struct siginfo siginfo;
282 283
283 fault_ip = regs->cr_iip; 284 fault_ip = regs->cr_iip;
284 if (!fp_fault && (ia64_psr(regs)->ri == 0)) 285 if (!fp_fault && (ia64_psr(regs)->ri == 0))
285 fault_ip -= 16; 286 fault_ip -= 16;
286 if (copy_from_user(bundle, (void __user *) fault_ip, sizeof(bundle))) 287 if (copy_from_user(bundle, (void __user *) fault_ip, sizeof(bundle)))
287 return -1; 288 return -1;
288 289
289 if (!(current->thread.flags & IA64_THREAD_FPEMU_NOPRINT)) { 290 if (!(current->thread.flags & IA64_THREAD_FPEMU_NOPRINT)) {
290 unsigned long count, current_jiffies = jiffies; 291 unsigned long count, current_jiffies = jiffies;
291 struct fpu_swa_msg *cp = &__get_cpu_var(cpulast); 292 struct fpu_swa_msg *cp = &__get_cpu_var(cpulast);
292 293
293 if (unlikely(current_jiffies > cp->time)) 294 if (unlikely(current_jiffies > cp->time))
294 cp->count = 0; 295 cp->count = 0;
295 if (unlikely(cp->count < 5)) { 296 if (unlikely(cp->count < 5)) {
296 cp->count++; 297 cp->count++;
297 cp->time = current_jiffies + 5 * HZ; 298 cp->time = current_jiffies + 5 * HZ;
298 299
299 /* minimize races by grabbing a copy of count BEFORE checking last.time. */ 300 /* minimize races by grabbing a copy of count BEFORE checking last.time. */
300 count = last.count; 301 count = last.count;
301 barrier(); 302 barrier();
302 303
303 /* 304 /*
304 * Lower 4 bits are used as a count. Upper bits are a sequence 305 * Lower 4 bits are used as a count. Upper bits are a sequence
305 * number that is updated when count is reset. The cmpxchg will 306 * number that is updated when count is reset. The cmpxchg will
306 * fail if seqno has changed. This minimizes multiple cpus 307 * fail if seqno has changed. This minimizes multiple cpus
307 * resetting the count. 308 * resetting the count.
308 */ 309 */
309 if (current_jiffies > last.time) 310 if (current_jiffies > last.time)
310 (void) cmpxchg_acq(&last.count, count, 16 + (count & ~15)); 311 (void) cmpxchg_acq(&last.count, count, 16 + (count & ~15));
311 312
312 /* use fetchadd to atomically update the count */ 313 /* use fetchadd to atomically update the count */
313 if ((last.count & 15) < 5 && (ia64_fetchadd(1, &last.count, acq) & 15) < 5) { 314 if ((last.count & 15) < 5 && (ia64_fetchadd(1, &last.count, acq) & 15) < 5) {
314 last.time = current_jiffies + 5 * HZ; 315 last.time = current_jiffies + 5 * HZ;
315 printk(KERN_WARNING 316 printk(KERN_WARNING
316 "%s(%d): floating-point assist fault at ip %016lx, isr %016lx\n", 317 "%s(%d): floating-point assist fault at ip %016lx, isr %016lx\n",
317 current->comm, current->pid, regs->cr_iip + ia64_psr(regs)->ri, isr); 318 current->comm, current->pid, regs->cr_iip + ia64_psr(regs)->ri, isr);
318 } 319 }
319 } 320 }
320 } 321 }
321 322
322 exception = fp_emulate(fp_fault, bundle, &regs->cr_ipsr, &regs->ar_fpsr, &isr, &regs->pr, 323 exception = fp_emulate(fp_fault, bundle, &regs->cr_ipsr, &regs->ar_fpsr, &isr, &regs->pr,
323 &regs->cr_ifs, regs); 324 &regs->cr_ifs, regs);
324 if (fp_fault) { 325 if (fp_fault) {
325 if (exception == 0) { 326 if (exception == 0) {
326 /* emulation was successful */ 327 /* emulation was successful */
327 ia64_increment_ip(regs); 328 ia64_increment_ip(regs);
328 } else if (exception == -1) { 329 } else if (exception == -1) {
329 printk(KERN_ERR "handle_fpu_swa: fp_emulate() returned -1\n"); 330 printk(KERN_ERR "handle_fpu_swa: fp_emulate() returned -1\n");
330 return -1; 331 return -1;
331 } else { 332 } else {
332 /* is next instruction a trap? */ 333 /* is next instruction a trap? */
333 if (exception & 2) { 334 if (exception & 2) {
334 ia64_increment_ip(regs); 335 ia64_increment_ip(regs);
335 } 336 }
336 siginfo.si_signo = SIGFPE; 337 siginfo.si_signo = SIGFPE;
337 siginfo.si_errno = 0; 338 siginfo.si_errno = 0;
338 siginfo.si_code = __SI_FAULT; /* default code */ 339 siginfo.si_code = __SI_FAULT; /* default code */
339 siginfo.si_addr = (void __user *) (regs->cr_iip + ia64_psr(regs)->ri); 340 siginfo.si_addr = (void __user *) (regs->cr_iip + ia64_psr(regs)->ri);
340 if (isr & 0x11) { 341 if (isr & 0x11) {
341 siginfo.si_code = FPE_FLTINV; 342 siginfo.si_code = FPE_FLTINV;
342 } else if (isr & 0x22) { 343 } else if (isr & 0x22) {
343 /* denormal operand gets the same si_code as underflow 344 /* denormal operand gets the same si_code as underflow
344 * see arch/i386/kernel/traps.c:math_error() */ 345 * see arch/i386/kernel/traps.c:math_error() */
345 siginfo.si_code = FPE_FLTUND; 346 siginfo.si_code = FPE_FLTUND;
346 } else if (isr & 0x44) { 347 } else if (isr & 0x44) {
347 siginfo.si_code = FPE_FLTDIV; 348 siginfo.si_code = FPE_FLTDIV;
348 } 349 }
349 siginfo.si_isr = isr; 350 siginfo.si_isr = isr;
350 siginfo.si_flags = __ISR_VALID; 351 siginfo.si_flags = __ISR_VALID;
351 siginfo.si_imm = 0; 352 siginfo.si_imm = 0;
352 force_sig_info(SIGFPE, &siginfo, current); 353 force_sig_info(SIGFPE, &siginfo, current);
353 } 354 }
354 } else { 355 } else {
355 if (exception == -1) { 356 if (exception == -1) {
356 printk(KERN_ERR "handle_fpu_swa: fp_emulate() returned -1\n"); 357 printk(KERN_ERR "handle_fpu_swa: fp_emulate() returned -1\n");
357 return -1; 358 return -1;
358 } else if (exception != 0) { 359 } else if (exception != 0) {
359 /* raise exception */ 360 /* raise exception */
360 siginfo.si_signo = SIGFPE; 361 siginfo.si_signo = SIGFPE;
361 siginfo.si_errno = 0; 362 siginfo.si_errno = 0;
362 siginfo.si_code = __SI_FAULT; /* default code */ 363 siginfo.si_code = __SI_FAULT; /* default code */
363 siginfo.si_addr = (void __user *) (regs->cr_iip + ia64_psr(regs)->ri); 364 siginfo.si_addr = (void __user *) (regs->cr_iip + ia64_psr(regs)->ri);
364 if (isr & 0x880) { 365 if (isr & 0x880) {
365 siginfo.si_code = FPE_FLTOVF; 366 siginfo.si_code = FPE_FLTOVF;
366 } else if (isr & 0x1100) { 367 } else if (isr & 0x1100) {
367 siginfo.si_code = FPE_FLTUND; 368 siginfo.si_code = FPE_FLTUND;
368 } else if (isr & 0x2200) { 369 } else if (isr & 0x2200) {
369 siginfo.si_code = FPE_FLTRES; 370 siginfo.si_code = FPE_FLTRES;
370 } 371 }
371 siginfo.si_isr = isr; 372 siginfo.si_isr = isr;
372 siginfo.si_flags = __ISR_VALID; 373 siginfo.si_flags = __ISR_VALID;
373 siginfo.si_imm = 0; 374 siginfo.si_imm = 0;
374 force_sig_info(SIGFPE, &siginfo, current); 375 force_sig_info(SIGFPE, &siginfo, current);
375 } 376 }
376 } 377 }
377 return 0; 378 return 0;
378 } 379 }
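
The printk throttle in handle_fpu_swa() packs two values into last.count: the low 4 bits count messages in the current window, and the upper bits form a sequence number bumped on each reset, so the cmpxchg that resets the window fails harmlessly when another CPU got there first, and the fetchadd hands each CPU a distinct slot. A C11 rendering of that lock-free idea, with <stdatomic.h> standing in for ia64's cmpxchg_acq and fetchadd (names illustrative):

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_ulong last_count;  /* low 4 bits: count; rest: seqno */

    /* At most 5 messages per window; call with window_expired = true
     * once the time window has lapsed, as the jiffies check does. */
    static bool ratelimit_ok(bool window_expired)
    {
            unsigned long count = atomic_load(&last_count);

            if (window_expired)
                    /* 16 + (count & ~15): zero the count, bump the
                     * seqno; losing this race just means another CPU
                     * already performed the equivalent reset. */
                    atomic_compare_exchange_strong(&last_count, &count,
                                                   16 + (count & ~15UL));

            return (atomic_load(&last_count) & 15) < 5 &&
                   (atomic_fetch_add(&last_count, 1) & 15) < 5;
    }

    int main(void) { return ratelimit_ok(true) ? 0 : 1; }

As in the original, the fetchadd only runs when the cheap pre-check passes, so late arrivals stop printing once the window's five slots are gone.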
379 380
380 struct illegal_op_return { 381 struct illegal_op_return {
381 unsigned long fkt, arg1, arg2, arg3; 382 unsigned long fkt, arg1, arg2, arg3;
382 }; 383 };
383 384
384 struct illegal_op_return 385 struct illegal_op_return
385 ia64_illegal_op_fault (unsigned long ec, long arg1, long arg2, long arg3, 386 ia64_illegal_op_fault (unsigned long ec, long arg1, long arg2, long arg3,
386 long arg4, long arg5, long arg6, long arg7, 387 long arg4, long arg5, long arg6, long arg7,
387 struct pt_regs regs) 388 struct pt_regs regs)
388 { 389 {
389 struct illegal_op_return rv; 390 struct illegal_op_return rv;
390 struct siginfo si; 391 struct siginfo si;
391 char buf[128]; 392 char buf[128];
392 393
393 #ifdef CONFIG_IA64_BRL_EMU 394 #ifdef CONFIG_IA64_BRL_EMU
394 { 395 {
395 extern struct illegal_op_return ia64_emulate_brl (struct pt_regs *, unsigned long); 396 extern struct illegal_op_return ia64_emulate_brl (struct pt_regs *, unsigned long);
396 397
397 rv = ia64_emulate_brl(&regs, ec); 398 rv = ia64_emulate_brl(&regs, ec);
398 if (rv.fkt != (unsigned long) -1) 399 if (rv.fkt != (unsigned long) -1)
399 return rv; 400 return rv;
400 } 401 }
401 #endif 402 #endif
402 403
403 sprintf(buf, "IA-64 Illegal operation fault"); 404 sprintf(buf, "IA-64 Illegal operation fault");
404 die_if_kernel(buf, &regs, 0); 405 die_if_kernel(buf, &regs, 0);
405 406
406 memset(&si, 0, sizeof(si)); 407 memset(&si, 0, sizeof(si));
407 si.si_signo = SIGILL; 408 si.si_signo = SIGILL;
408 si.si_code = ILL_ILLOPC; 409 si.si_code = ILL_ILLOPC;
409 si.si_addr = (void __user *) (regs.cr_iip + ia64_psr(&regs)->ri); 410 si.si_addr = (void __user *) (regs.cr_iip + ia64_psr(&regs)->ri);
410 force_sig_info(SIGILL, &si, current); 411 force_sig_info(SIGILL, &si, current);
411 rv.fkt = 0; 412 rv.fkt = 0;
412 return rv; 413 return rv;
413 } 414 }
414 415
415 void __kprobes 416 void __kprobes
416 ia64_fault (unsigned long vector, unsigned long isr, unsigned long ifa, 417 ia64_fault (unsigned long vector, unsigned long isr, unsigned long ifa,
417 unsigned long iim, unsigned long itir, long arg5, long arg6, 418 unsigned long iim, unsigned long itir, long arg5, long arg6,
418 long arg7, struct pt_regs regs) 419 long arg7, struct pt_regs regs)
419 { 420 {
420 unsigned long code, error = isr, iip; 421 unsigned long code, error = isr, iip;
421 struct siginfo siginfo; 422 struct siginfo siginfo;
422 char buf[128]; 423 char buf[128];
423 int result, sig; 424 int result, sig;
424 static const char *reason[] = { 425 static const char *reason[] = {
425 "IA-64 Illegal Operation fault", 426 "IA-64 Illegal Operation fault",
426 "IA-64 Privileged Operation fault", 427 "IA-64 Privileged Operation fault",
427 "IA-64 Privileged Register fault", 428 "IA-64 Privileged Register fault",
428 "IA-64 Reserved Register/Field fault", 429 "IA-64 Reserved Register/Field fault",
429 "Disabled Instruction Set Transition fault", 430 "Disabled Instruction Set Transition fault",
430 "Unknown fault 5", "Unknown fault 6", "Unknown fault 7", "Illegal Hazard fault", 431 "Unknown fault 5", "Unknown fault 6", "Unknown fault 7", "Illegal Hazard fault",
431 "Unknown fault 9", "Unknown fault 10", "Unknown fault 11", "Unknown fault 12", 432 "Unknown fault 9", "Unknown fault 10", "Unknown fault 11", "Unknown fault 12",
432 "Unknown fault 13", "Unknown fault 14", "Unknown fault 15" 433 "Unknown fault 13", "Unknown fault 14", "Unknown fault 15"
433 }; 434 };
434 435
435 if ((isr & IA64_ISR_NA) && ((isr & IA64_ISR_CODE_MASK) == IA64_ISR_CODE_LFETCH)) { 436 if ((isr & IA64_ISR_NA) && ((isr & IA64_ISR_CODE_MASK) == IA64_ISR_CODE_LFETCH)) {
436 /* 437 /*
437 * This fault was due to lfetch.fault, set "ed" bit in the psr to cancel 438 * This fault was due to lfetch.fault, set "ed" bit in the psr to cancel
438 * the lfetch. 439 * the lfetch.
439 */ 440 */
440 ia64_psr(&regs)->ed = 1; 441 ia64_psr(&regs)->ed = 1;
441 return; 442 return;
442 } 443 }
443 444
444 iip = regs.cr_iip + ia64_psr(&regs)->ri; 445 iip = regs.cr_iip + ia64_psr(&regs)->ri;
445 446
446 switch (vector) { 447 switch (vector) {
447 case 24: /* General Exception */ 448 case 24: /* General Exception */
448 code = (isr >> 4) & 0xf; 449 code = (isr >> 4) & 0xf;
449 sprintf(buf, "General Exception: %s%s", reason[code], 450 sprintf(buf, "General Exception: %s%s", reason[code],
450 (code == 3) ? ((isr & (1UL << 37)) 451 (code == 3) ? ((isr & (1UL << 37))
451 ? " (RSE access)" : " (data access)") : ""); 452 ? " (RSE access)" : " (data access)") : "");
452 if (code == 8) { 453 if (code == 8) {
453 # ifdef CONFIG_IA64_PRINT_HAZARDS 454 # ifdef CONFIG_IA64_PRINT_HAZARDS
454 printk("%s[%d]: possible hazard @ ip=%016lx (pr = %016lx)\n", 455 printk("%s[%d]: possible hazard @ ip=%016lx (pr = %016lx)\n",
455 current->comm, current->pid, 456 current->comm, current->pid,
456 regs.cr_iip + ia64_psr(&regs)->ri, regs.pr); 457 regs.cr_iip + ia64_psr(&regs)->ri, regs.pr);
457 # endif 458 # endif
458 return; 459 return;
459 } 460 }
460 break; 461 break;
461 462
462 case 25: /* Disabled FP-Register */ 463 case 25: /* Disabled FP-Register */
463 if (isr & 2) { 464 if (isr & 2) {
464 disabled_fph_fault(&regs); 465 disabled_fph_fault(&regs);
465 return; 466 return;
466 } 467 }
467 sprintf(buf, "Disabled FPL fault---not supposed to happen!"); 468 sprintf(buf, "Disabled FPL fault---not supposed to happen!");
468 break; 469 break;
469 470
470 case 26: /* NaT Consumption */ 471 case 26: /* NaT Consumption */
471 if (user_mode(&regs)) { 472 if (user_mode(&regs)) {
472 void __user *addr; 473 void __user *addr;
473 474
474 if (((isr >> 4) & 0xf) == 2) { 475 if (((isr >> 4) & 0xf) == 2) {
475 /* NaT page consumption */ 476 /* NaT page consumption */
476 sig = SIGSEGV; 477 sig = SIGSEGV;
477 code = SEGV_ACCERR; 478 code = SEGV_ACCERR;
478 addr = (void __user *) ifa; 479 addr = (void __user *) ifa;
479 } else { 480 } else {
480 /* register NaT consumption */ 481 /* register NaT consumption */
481 sig = SIGILL; 482 sig = SIGILL;
482 code = ILL_ILLOPN; 483 code = ILL_ILLOPN;
483 addr = (void __user *) (regs.cr_iip 484 addr = (void __user *) (regs.cr_iip
484 + ia64_psr(&regs)->ri); 485 + ia64_psr(&regs)->ri);
485 } 486 }
486 siginfo.si_signo = sig; 487 siginfo.si_signo = sig;
487 siginfo.si_code = code; 488 siginfo.si_code = code;
488 siginfo.si_errno = 0; 489 siginfo.si_errno = 0;
489 siginfo.si_addr = addr; 490 siginfo.si_addr = addr;
490 siginfo.si_imm = vector; 491 siginfo.si_imm = vector;
491 siginfo.si_flags = __ISR_VALID; 492 siginfo.si_flags = __ISR_VALID;
492 siginfo.si_isr = isr; 493 siginfo.si_isr = isr;
493 force_sig_info(sig, &siginfo, current); 494 force_sig_info(sig, &siginfo, current);
494 return; 495 return;
495 } else if (ia64_done_with_exception(&regs)) 496 } else if (ia64_done_with_exception(&regs))
496 return; 497 return;
497 sprintf(buf, "NaT consumption"); 498 sprintf(buf, "NaT consumption");
498 break; 499 break;
499 500
500 case 31: /* Unsupported Data Reference */ 501 case 31: /* Unsupported Data Reference */
501 if (user_mode(&regs)) { 502 if (user_mode(&regs)) {
502 siginfo.si_signo = SIGILL; 503 siginfo.si_signo = SIGILL;
503 siginfo.si_code = ILL_ILLOPN; 504 siginfo.si_code = ILL_ILLOPN;
504 siginfo.si_errno = 0; 505 siginfo.si_errno = 0;
505 siginfo.si_addr = (void __user *) iip; 506 siginfo.si_addr = (void __user *) iip;
506 siginfo.si_imm = vector; 507 siginfo.si_imm = vector;
507 siginfo.si_flags = __ISR_VALID; 508 siginfo.si_flags = __ISR_VALID;
508 siginfo.si_isr = isr; 509 siginfo.si_isr = isr;
509 force_sig_info(SIGILL, &siginfo, current); 510 force_sig_info(SIGILL, &siginfo, current);
510 return; 511 return;
511 } 512 }
512 sprintf(buf, "Unsupported data reference"); 513 sprintf(buf, "Unsupported data reference");
513 break; 514 break;
514 515
515 case 29: /* Debug */ 516 case 29: /* Debug */
516 case 35: /* Taken Branch Trap */ 517 case 35: /* Taken Branch Trap */
517 case 36: /* Single Step Trap */ 518 case 36: /* Single Step Trap */
518 if (fsys_mode(current, &regs)) { 519 if (fsys_mode(current, &regs)) {
519 extern char __kernel_syscall_via_break[]; 520 extern char __kernel_syscall_via_break[];
520 /* 521 /*
521 * Got a trap in fsys-mode: Taken Branch Trap 522 * Got a trap in fsys-mode: Taken Branch Trap
522 * and Single Step trap need special handling; 523 * and Single Step trap need special handling;
523 * Debug trap is ignored (we disable it here 524 * Debug trap is ignored (we disable it here
524 * and re-enable it in the lower-privilege trap). 525 * and re-enable it in the lower-privilege trap).
525 */ 526 */
526 if (unlikely(vector == 29)) { 527 if (unlikely(vector == 29)) {
527 set_thread_flag(TIF_DB_DISABLED); 528 set_thread_flag(TIF_DB_DISABLED);
528 ia64_psr(&regs)->db = 0; 529 ia64_psr(&regs)->db = 0;
529 ia64_psr(&regs)->lp = 1; 530 ia64_psr(&regs)->lp = 1;
530 return; 531 return;
531 } 532 }
532 /* re-do the system call via break 0x100000: */ 533 /* re-do the system call via break 0x100000: */
533 regs.cr_iip = (unsigned long) __kernel_syscall_via_break; 534 regs.cr_iip = (unsigned long) __kernel_syscall_via_break;
534 ia64_psr(&regs)->ri = 0; 535 ia64_psr(&regs)->ri = 0;
535 ia64_psr(&regs)->cpl = 3; 536 ia64_psr(&regs)->cpl = 3;
536 return; 537 return;
537 } 538 }
538 switch (vector) { 539 switch (vector) {
539 case 29: 540 case 29:
540 siginfo.si_code = TRAP_HWBKPT; 541 siginfo.si_code = TRAP_HWBKPT;
541 #ifdef CONFIG_ITANIUM 542 #ifdef CONFIG_ITANIUM
542 /* 543 /*
543 * Erratum 10 (IFA may contain incorrect address) now has 544 * Erratum 10 (IFA may contain incorrect address) now has
544 * "NoFix" status. There are no plans for fixing this. 545 * "NoFix" status. There are no plans for fixing this.
545 */ 546 */
546 if (ia64_psr(&regs)->is == 0) 547 if (ia64_psr(&regs)->is == 0)
547 ifa = regs.cr_iip; 548 ifa = regs.cr_iip;
548 #endif 549 #endif
549 break; 550 break;
550 case 35: siginfo.si_code = TRAP_BRANCH; ifa = 0; break; 551 case 35: siginfo.si_code = TRAP_BRANCH; ifa = 0; break;
551 case 36: siginfo.si_code = TRAP_TRACE; ifa = 0; break; 552 case 36: siginfo.si_code = TRAP_TRACE; ifa = 0; break;
552 } 553 }
553 if (notify_die(DIE_FAULT, "ia64_fault", &regs, vector, siginfo.si_code, SIGTRAP) 554 if (notify_die(DIE_FAULT, "ia64_fault", &regs, vector, siginfo.si_code, SIGTRAP)
554 == NOTIFY_STOP) 555 == NOTIFY_STOP)
555 return; 556 return;
556 siginfo.si_signo = SIGTRAP; 557 siginfo.si_signo = SIGTRAP;
557 siginfo.si_errno = 0; 558 siginfo.si_errno = 0;
558 siginfo.si_addr = (void __user *) ifa; 559 siginfo.si_addr = (void __user *) ifa;
559 siginfo.si_imm = 0; 560 siginfo.si_imm = 0;
560 siginfo.si_flags = __ISR_VALID; 561 siginfo.si_flags = __ISR_VALID;
561 siginfo.si_isr = isr; 562 siginfo.si_isr = isr;
562 force_sig_info(SIGTRAP, &siginfo, current); 563 force_sig_info(SIGTRAP, &siginfo, current);
563 return; 564 return;
564 565
565 case 32: /* fp fault */ 566 case 32: /* fp fault */
566 case 33: /* fp trap */ 567 case 33: /* fp trap */
567 result = handle_fpu_swa((vector == 32) ? 1 : 0, &regs, isr); 568 result = handle_fpu_swa((vector == 32) ? 1 : 0, &regs, isr);
568 if ((result < 0) || (current->thread.flags & IA64_THREAD_FPEMU_SIGFPE)) { 569 if ((result < 0) || (current->thread.flags & IA64_THREAD_FPEMU_SIGFPE)) {
569 siginfo.si_signo = SIGFPE; 570 siginfo.si_signo = SIGFPE;
570 siginfo.si_errno = 0; 571 siginfo.si_errno = 0;
571 siginfo.si_code = FPE_FLTINV; 572 siginfo.si_code = FPE_FLTINV;
572 siginfo.si_addr = (void __user *) iip; 573 siginfo.si_addr = (void __user *) iip;
573 siginfo.si_flags = __ISR_VALID; 574 siginfo.si_flags = __ISR_VALID;
574 siginfo.si_isr = isr; 575 siginfo.si_isr = isr;
575 siginfo.si_imm = 0; 576 siginfo.si_imm = 0;
576 force_sig_info(SIGFPE, &siginfo, current); 577 force_sig_info(SIGFPE, &siginfo, current);
577 } 578 }
578 return; 579 return;
579 580
580 case 34: 581 case 34:
581 if (isr & 0x2) { 582 if (isr & 0x2) {
582 /* Lower-Privilege Transfer Trap */ 583 /* Lower-Privilege Transfer Trap */
583 584
584 /* If we disabled debug traps during an fsyscall, 585 /* If we disabled debug traps during an fsyscall,
585 * re-enable them here. 586 * re-enable them here.
586 */ 587 */
587 if (test_thread_flag(TIF_DB_DISABLED)) { 588 if (test_thread_flag(TIF_DB_DISABLED)) {
588 clear_thread_flag(TIF_DB_DISABLED); 589 clear_thread_flag(TIF_DB_DISABLED);
589 ia64_psr(&regs)->db = 1; 590 ia64_psr(&regs)->db = 1;
590 } 591 }
591 592
592 /* 593 /*
593 * Just clear PSR.lp and then return immediately: 594 * Just clear PSR.lp and then return immediately:
594 * all the interesting work (e.g., signal delivery) 595 * all the interesting work (e.g., signal delivery)
595 * is done in the kernel exit path. 596 * is done in the kernel exit path.
596 */ 597 */
597 ia64_psr(&regs)->lp = 0; 598 ia64_psr(&regs)->lp = 0;
598 return; 599 return;
599 } else { 600 } else {
600 /* Unimplemented Instr. Address Trap */ 601 /* Unimplemented Instr. Address Trap */
601 if (user_mode(&regs)) { 602 if (user_mode(&regs)) {
602 siginfo.si_signo = SIGILL; 603 siginfo.si_signo = SIGILL;
603 siginfo.si_code = ILL_BADIADDR; 604 siginfo.si_code = ILL_BADIADDR;
604 siginfo.si_errno = 0; 605 siginfo.si_errno = 0;
605 siginfo.si_flags = 0; 606 siginfo.si_flags = 0;
606 siginfo.si_isr = 0; 607 siginfo.si_isr = 0;
607 siginfo.si_imm = 0; 608 siginfo.si_imm = 0;
608 siginfo.si_addr = (void __user *) iip; 609 siginfo.si_addr = (void __user *) iip;
609 force_sig_info(SIGILL, &siginfo, current); 610 force_sig_info(SIGILL, &siginfo, current);
610 return; 611 return;
611 } 612 }
612 sprintf(buf, "Unimplemented Instruction Address fault"); 613 sprintf(buf, "Unimplemented Instruction Address fault");
613 } 614 }
614 break; 615 break;
615 616
616 case 45: 617 case 45:
617 #ifdef CONFIG_IA32_SUPPORT 618 #ifdef CONFIG_IA32_SUPPORT
618 if (ia32_exception(&regs, isr) == 0) 619 if (ia32_exception(&regs, isr) == 0)
619 return; 620 return;
620 #endif 621 #endif
621 printk(KERN_ERR "Unexpected IA-32 exception (Trap 45)\n"); 622 printk(KERN_ERR "Unexpected IA-32 exception (Trap 45)\n");
622 printk(KERN_ERR " iip - 0x%lx, ifa - 0x%lx, isr - 0x%lx\n", 623 printk(KERN_ERR " iip - 0x%lx, ifa - 0x%lx, isr - 0x%lx\n",
623 iip, ifa, isr); 624 iip, ifa, isr);
624 force_sig(SIGSEGV, current); 625 force_sig(SIGSEGV, current);
625 break; 626 break;
626 627
627 case 46: 628 case 46:
628 #ifdef CONFIG_IA32_SUPPORT 629 #ifdef CONFIG_IA32_SUPPORT
629 if (ia32_intercept(&regs, isr) == 0) 630 if (ia32_intercept(&regs, isr) == 0)
630 return; 631 return;
631 #endif 632 #endif
632 printk(KERN_ERR "Unexpected IA-32 intercept trap (Trap 46)\n"); 633 printk(KERN_ERR "Unexpected IA-32 intercept trap (Trap 46)\n");
633 printk(KERN_ERR " iip - 0x%lx, ifa - 0x%lx, isr - 0x%lx, iim - 0x%lx\n", 634 printk(KERN_ERR " iip - 0x%lx, ifa - 0x%lx, isr - 0x%lx, iim - 0x%lx\n",
634 iip, ifa, isr, iim); 635 iip, ifa, isr, iim);
635 force_sig(SIGSEGV, current); 636 force_sig(SIGSEGV, current);
636 return; 637 return;
637 638
638 case 47: 639 case 47:
639 sprintf(buf, "IA-32 Interruption Fault (int 0x%lx)", isr >> 16); 640 sprintf(buf, "IA-32 Interruption Fault (int 0x%lx)", isr >> 16);
640 break; 641 break;
641 642
642 default: 643 default:
643 sprintf(buf, "Fault %lu", vector); 644 sprintf(buf, "Fault %lu", vector);
644 break; 645 break;
645 } 646 }
646 die_if_kernel(buf, &regs, error); 647 die_if_kernel(buf, &regs, error);
647 force_sig(SIGILL, current); 648 force_sig(SIGILL, current);
648 } 649 }
649 650
arch/m68k/kernel/traps.c
1 /* 1 /*
2 * linux/arch/m68k/kernel/traps.c 2 * linux/arch/m68k/kernel/traps.c
3 * 3 *
4 * Copyright (C) 1993, 1994 by Hamish Macdonald 4 * Copyright (C) 1993, 1994 by Hamish Macdonald
5 * 5 *
6 * 68040 fixes by Michael Rausch 6 * 68040 fixes by Michael Rausch
7 * 68040 fixes by Martin Apel 7 * 68040 fixes by Martin Apel
8 * 68040 fixes and writeback by Richard Zidlicky 8 * 68040 fixes and writeback by Richard Zidlicky
9 * 68060 fixes by Roman Hodek 9 * 68060 fixes by Roman Hodek
10 * 68060 fixes by Jesper Skov 10 * 68060 fixes by Jesper Skov
11 * 11 *
12 * This file is subject to the terms and conditions of the GNU General Public 12 * This file is subject to the terms and conditions of the GNU General Public
13 * License. See the file COPYING in the main directory of this archive 13 * License. See the file COPYING in the main directory of this archive
14 * for more details. 14 * for more details.
15 */ 15 */
16 16
17 /* 17 /*
18 * Sets up all exception vectors 18 * Sets up all exception vectors
19 */ 19 */
20 20
21 #include <linux/sched.h> 21 #include <linux/sched.h>
22 #include <linux/signal.h> 22 #include <linux/signal.h>
23 #include <linux/kernel.h> 23 #include <linux/kernel.h>
24 #include <linux/mm.h> 24 #include <linux/mm.h>
25 #include <linux/module.h> 25 #include <linux/module.h>
26 #include <linux/a.out.h> 26 #include <linux/a.out.h>
27 #include <linux/user.h> 27 #include <linux/user.h>
28 #include <linux/string.h> 28 #include <linux/string.h>
29 #include <linux/linkage.h> 29 #include <linux/linkage.h>
30 #include <linux/init.h> 30 #include <linux/init.h>
31 #include <linux/ptrace.h> 31 #include <linux/ptrace.h>
32 #include <linux/kallsyms.h> 32 #include <linux/kallsyms.h>
33 33
34 #include <asm/setup.h> 34 #include <asm/setup.h>
35 #include <asm/fpu.h> 35 #include <asm/fpu.h>
36 #include <asm/system.h> 36 #include <asm/system.h>
37 #include <asm/uaccess.h> 37 #include <asm/uaccess.h>
38 #include <asm/traps.h> 38 #include <asm/traps.h>
39 #include <asm/pgalloc.h> 39 #include <asm/pgalloc.h>
40 #include <asm/machdep.h> 40 #include <asm/machdep.h>
41 #include <asm/siginfo.h> 41 #include <asm/siginfo.h>
42 42
43 /* assembler routines */ 43 /* assembler routines */
44 asmlinkage void system_call(void); 44 asmlinkage void system_call(void);
45 asmlinkage void buserr(void); 45 asmlinkage void buserr(void);
46 asmlinkage void trap(void); 46 asmlinkage void trap(void);
47 asmlinkage void nmihandler(void); 47 asmlinkage void nmihandler(void);
48 #ifdef CONFIG_M68KFPU_EMU 48 #ifdef CONFIG_M68KFPU_EMU
49 asmlinkage void fpu_emu(void); 49 asmlinkage void fpu_emu(void);
50 #endif 50 #endif
51 51
52 e_vector vectors[256] = { 52 e_vector vectors[256] = {
53 [VEC_BUSERR] = buserr, 53 [VEC_BUSERR] = buserr,
54 [VEC_SYS] = system_call, 54 [VEC_SYS] = system_call,
55 }; 55 };
56 56
57 /* nmi handler for the Amiga */ 57 /* nmi handler for the Amiga */
58 asm(".text\n" 58 asm(".text\n"
59 __ALIGN_STR "\n" 59 __ALIGN_STR "\n"
60 "nmihandler: rte"); 60 "nmihandler: rte");
61 61
62 /* 62 /*
63 * this must be called very early as the kernel might 63 * this must be called very early as the kernel might
64 * use some instructions that are emulated on the 060 65 * use some instructions that are emulated on the 060
65 */ 65 */
66 void __init base_trap_init(void) 66 void __init base_trap_init(void)
67 { 67 {
68 if(MACH_IS_SUN3X) { 68 if(MACH_IS_SUN3X) {
69 extern e_vector *sun3x_prom_vbr; 69 extern e_vector *sun3x_prom_vbr;
70 70
71 __asm__ volatile ("movec %%vbr, %0" : "=r" (sun3x_prom_vbr)); 71 __asm__ volatile ("movec %%vbr, %0" : "=r" (sun3x_prom_vbr));
72 } 72 }
73 73
74 /* setup the exception vector table */ 74 /* setup the exception vector table */
75 __asm__ volatile ("movec %0,%%vbr" : : "r" ((void*)vectors)); 75 __asm__ volatile ("movec %0,%%vbr" : : "r" ((void*)vectors));
76 76
77 if (CPU_IS_060) { 77 if (CPU_IS_060) {
78 /* set up ISP entry points */ 78 /* set up ISP entry points */
79 asmlinkage void unimp_vec(void) asm ("_060_isp_unimp"); 79 asmlinkage void unimp_vec(void) asm ("_060_isp_unimp");
80 80
81 vectors[VEC_UNIMPII] = unimp_vec; 81 vectors[VEC_UNIMPII] = unimp_vec;
82 } 82 }
83 } 83 }
84 84
85 void __init trap_init (void) 85 void __init trap_init (void)
86 { 86 {
87 int i; 87 int i;
88 88
89 for (i = VEC_SPUR; i <= VEC_INT7; i++) 89 for (i = VEC_SPUR; i <= VEC_INT7; i++)
90 vectors[i] = bad_inthandler; 90 vectors[i] = bad_inthandler;
91 91
92 for (i = 0; i < VEC_USER; i++) 92 for (i = 0; i < VEC_USER; i++)
93 if (!vectors[i]) 93 if (!vectors[i])
94 vectors[i] = trap; 94 vectors[i] = trap;
95 95
96 for (i = VEC_USER; i < 256; i++) 96 for (i = VEC_USER; i < 256; i++)
97 vectors[i] = bad_inthandler; 97 vectors[i] = bad_inthandler;
98 98
99 #ifdef CONFIG_M68KFPU_EMU 99 #ifdef CONFIG_M68KFPU_EMU
100 if (FPU_IS_EMU) 100 if (FPU_IS_EMU)
101 vectors[VEC_LINE11] = fpu_emu; 101 vectors[VEC_LINE11] = fpu_emu;
102 #endif 102 #endif
103 103
104 if (CPU_IS_040 && !FPU_IS_EMU) { 104 if (CPU_IS_040 && !FPU_IS_EMU) {
105 /* set up FPSP entry points */ 105 /* set up FPSP entry points */
106 asmlinkage void dz_vec(void) asm ("dz"); 106 asmlinkage void dz_vec(void) asm ("dz");
107 asmlinkage void inex_vec(void) asm ("inex"); 107 asmlinkage void inex_vec(void) asm ("inex");
108 asmlinkage void ovfl_vec(void) asm ("ovfl"); 108 asmlinkage void ovfl_vec(void) asm ("ovfl");
109 asmlinkage void unfl_vec(void) asm ("unfl"); 109 asmlinkage void unfl_vec(void) asm ("unfl");
110 asmlinkage void snan_vec(void) asm ("snan"); 110 asmlinkage void snan_vec(void) asm ("snan");
111 asmlinkage void operr_vec(void) asm ("operr"); 111 asmlinkage void operr_vec(void) asm ("operr");
112 asmlinkage void bsun_vec(void) asm ("bsun"); 112 asmlinkage void bsun_vec(void) asm ("bsun");
113 asmlinkage void fline_vec(void) asm ("fline"); 113 asmlinkage void fline_vec(void) asm ("fline");
114 asmlinkage void unsupp_vec(void) asm ("unsupp"); 114 asmlinkage void unsupp_vec(void) asm ("unsupp");
115 115
116 vectors[VEC_FPDIVZ] = dz_vec; 116 vectors[VEC_FPDIVZ] = dz_vec;
117 vectors[VEC_FPIR] = inex_vec; 117 vectors[VEC_FPIR] = inex_vec;
118 vectors[VEC_FPOVER] = ovfl_vec; 118 vectors[VEC_FPOVER] = ovfl_vec;
119 vectors[VEC_FPUNDER] = unfl_vec; 119 vectors[VEC_FPUNDER] = unfl_vec;
120 vectors[VEC_FPNAN] = snan_vec; 120 vectors[VEC_FPNAN] = snan_vec;
121 vectors[VEC_FPOE] = operr_vec; 121 vectors[VEC_FPOE] = operr_vec;
122 vectors[VEC_FPBRUC] = bsun_vec; 122 vectors[VEC_FPBRUC] = bsun_vec;
123 vectors[VEC_LINE11] = fline_vec; 123 vectors[VEC_LINE11] = fline_vec;
124 vectors[VEC_FPUNSUP] = unsupp_vec; 124 vectors[VEC_FPUNSUP] = unsupp_vec;
125 } 125 }
126 126
127 if (CPU_IS_060 && !FPU_IS_EMU) { 127 if (CPU_IS_060 && !FPU_IS_EMU) {
128 /* set up IFPSP entry points */ 128 /* set up IFPSP entry points */
129 asmlinkage void snan_vec6(void) asm ("_060_fpsp_snan"); 129 asmlinkage void snan_vec6(void) asm ("_060_fpsp_snan");
130 asmlinkage void operr_vec6(void) asm ("_060_fpsp_operr"); 130 asmlinkage void operr_vec6(void) asm ("_060_fpsp_operr");
131 asmlinkage void ovfl_vec6(void) asm ("_060_fpsp_ovfl"); 131 asmlinkage void ovfl_vec6(void) asm ("_060_fpsp_ovfl");
132 asmlinkage void unfl_vec6(void) asm ("_060_fpsp_unfl"); 132 asmlinkage void unfl_vec6(void) asm ("_060_fpsp_unfl");
133 asmlinkage void dz_vec6(void) asm ("_060_fpsp_dz"); 133 asmlinkage void dz_vec6(void) asm ("_060_fpsp_dz");
134 asmlinkage void inex_vec6(void) asm ("_060_fpsp_inex"); 134 asmlinkage void inex_vec6(void) asm ("_060_fpsp_inex");
135 asmlinkage void fline_vec6(void) asm ("_060_fpsp_fline"); 135 asmlinkage void fline_vec6(void) asm ("_060_fpsp_fline");
136 asmlinkage void unsupp_vec6(void) asm ("_060_fpsp_unsupp"); 136 asmlinkage void unsupp_vec6(void) asm ("_060_fpsp_unsupp");
137 asmlinkage void effadd_vec6(void) asm ("_060_fpsp_effadd"); 137 asmlinkage void effadd_vec6(void) asm ("_060_fpsp_effadd");
138 138
139 vectors[VEC_FPNAN] = snan_vec6; 139 vectors[VEC_FPNAN] = snan_vec6;
140 vectors[VEC_FPOE] = operr_vec6; 140 vectors[VEC_FPOE] = operr_vec6;
141 vectors[VEC_FPOVER] = ovfl_vec6; 141 vectors[VEC_FPOVER] = ovfl_vec6;
142 vectors[VEC_FPUNDER] = unfl_vec6; 142 vectors[VEC_FPUNDER] = unfl_vec6;
143 vectors[VEC_FPDIVZ] = dz_vec6; 143 vectors[VEC_FPDIVZ] = dz_vec6;
144 vectors[VEC_FPIR] = inex_vec6; 144 vectors[VEC_FPIR] = inex_vec6;
145 vectors[VEC_LINE11] = fline_vec6; 145 vectors[VEC_LINE11] = fline_vec6;
146 vectors[VEC_FPUNSUP] = unsupp_vec6; 146 vectors[VEC_FPUNSUP] = unsupp_vec6;
147 vectors[VEC_UNIMPEA] = effadd_vec6; 147 vectors[VEC_UNIMPEA] = effadd_vec6;
148 } 148 }
149 149
150 /* if running on an amiga, make the NMI interrupt do nothing */ 150 /* if running on an amiga, make the NMI interrupt do nothing */
151 if (MACH_IS_AMIGA) { 151 if (MACH_IS_AMIGA) {
152 vectors[VEC_INT7] = nmihandler; 152 vectors[VEC_INT7] = nmihandler;
153 } 153 }
154 } 154 }
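
The m68k table shows a pattern worth naming: vectors[256] is declared with C99 designated initializers so only VEC_BUSERR and VEC_SYS start non-NULL, and trap_init() then backfills every remaining slot with trap or bad_inthandler. The same initialize-then-backfill idiom in miniature (the handler names and the 8-entry table are illustrative):

    #include <stdio.h>

    typedef void (*e_vector)(void);

    static void buserr(void)        { puts("bus error"); }
    static void default_trap(void)  { puts("default trap"); }

    /* Designated initializers: slot 2 set, every other slot NULL. */
    static e_vector vectors[8] = {
            [2] = buserr,
    };

    int main(void)
    {
            /* Backfill, as trap_init() does for the unset entries. */
            for (int i = 0; i < 8; i++)
                    if (!vectors[i])
                            vectors[i] = default_trap;

            vectors[2]();           /* bus error */
            vectors[5]();           /* default trap */
            return 0;
    }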
155 155
156 156
157 static const char *vec_names[] = { 157 static const char *vec_names[] = {
158 [VEC_RESETSP] = "RESET SP", 158 [VEC_RESETSP] = "RESET SP",
159 [VEC_RESETPC] = "RESET PC", 159 [VEC_RESETPC] = "RESET PC",
160 [VEC_BUSERR] = "BUS ERROR", 160 [VEC_BUSERR] = "BUS ERROR",
161 [VEC_ADDRERR] = "ADDRESS ERROR", 161 [VEC_ADDRERR] = "ADDRESS ERROR",
162 [VEC_ILLEGAL] = "ILLEGAL INSTRUCTION", 162 [VEC_ILLEGAL] = "ILLEGAL INSTRUCTION",
163 [VEC_ZERODIV] = "ZERO DIVIDE", 163 [VEC_ZERODIV] = "ZERO DIVIDE",
164 [VEC_CHK] = "CHK", 164 [VEC_CHK] = "CHK",
165 [VEC_TRAP] = "TRAPcc", 165 [VEC_TRAP] = "TRAPcc",
166 [VEC_PRIV] = "PRIVILEGE VIOLATION", 166 [VEC_PRIV] = "PRIVILEGE VIOLATION",
167 [VEC_TRACE] = "TRACE", 167 [VEC_TRACE] = "TRACE",
168 [VEC_LINE10] = "LINE 1010", 168 [VEC_LINE10] = "LINE 1010",
169 [VEC_LINE11] = "LINE 1111", 169 [VEC_LINE11] = "LINE 1111",
170 [VEC_RESV12] = "UNASSIGNED RESERVED 12", 170 [VEC_RESV12] = "UNASSIGNED RESERVED 12",
171 [VEC_COPROC] = "COPROCESSOR PROTOCOL VIOLATION", 171 [VEC_COPROC] = "COPROCESSOR PROTOCOL VIOLATION",
172 [VEC_FORMAT] = "FORMAT ERROR", 172 [VEC_FORMAT] = "FORMAT ERROR",
173 [VEC_UNINT] = "UNINITIALIZED INTERRUPT", 173 [VEC_UNINT] = "UNINITIALIZED INTERRUPT",
174 [VEC_RESV16] = "UNASSIGNED RESERVED 16", 174 [VEC_RESV16] = "UNASSIGNED RESERVED 16",
175 [VEC_RESV17] = "UNASSIGNED RESERVED 17", 175 [VEC_RESV17] = "UNASSIGNED RESERVED 17",
176 [VEC_RESV18] = "UNASSIGNED RESERVED 18", 176 [VEC_RESV18] = "UNASSIGNED RESERVED 18",
177 [VEC_RESV19] = "UNASSIGNED RESERVED 19", 177 [VEC_RESV19] = "UNASSIGNED RESERVED 19",
178 [VEC_RESV20] = "UNASSIGNED RESERVED 20", 178 [VEC_RESV20] = "UNASSIGNED RESERVED 20",
179 [VEC_RESV21] = "UNASSIGNED RESERVED 21", 179 [VEC_RESV21] = "UNASSIGNED RESERVED 21",
180 [VEC_RESV22] = "UNASSIGNED RESERVED 22", 180 [VEC_RESV22] = "UNASSIGNED RESERVED 22",
181 [VEC_RESV23] = "UNASSIGNED RESERVED 23", 181 [VEC_RESV23] = "UNASSIGNED RESERVED 23",
182 [VEC_SPUR] = "SPURIOUS INTERRUPT", 182 [VEC_SPUR] = "SPURIOUS INTERRUPT",
183 [VEC_INT1] = "LEVEL 1 INT", 183 [VEC_INT1] = "LEVEL 1 INT",
184 [VEC_INT2] = "LEVEL 2 INT", 184 [VEC_INT2] = "LEVEL 2 INT",
185 [VEC_INT3] = "LEVEL 3 INT", 185 [VEC_INT3] = "LEVEL 3 INT",
186 [VEC_INT4] = "LEVEL 4 INT", 186 [VEC_INT4] = "LEVEL 4 INT",
187 [VEC_INT5] = "LEVEL 5 INT", 187 [VEC_INT5] = "LEVEL 5 INT",
188 [VEC_INT6] = "LEVEL 6 INT", 188 [VEC_INT6] = "LEVEL 6 INT",
189 [VEC_INT7] = "LEVEL 7 INT", 189 [VEC_INT7] = "LEVEL 7 INT",
190 [VEC_SYS] = "SYSCALL", 190 [VEC_SYS] = "SYSCALL",
191 [VEC_TRAP1] = "TRAP #1", 191 [VEC_TRAP1] = "TRAP #1",
192 [VEC_TRAP2] = "TRAP #2", 192 [VEC_TRAP2] = "TRAP #2",
193 [VEC_TRAP3] = "TRAP #3", 193 [VEC_TRAP3] = "TRAP #3",
194 [VEC_TRAP4] = "TRAP #4", 194 [VEC_TRAP4] = "TRAP #4",
195 [VEC_TRAP5] = "TRAP #5", 195 [VEC_TRAP5] = "TRAP #5",
196 [VEC_TRAP6] = "TRAP #6", 196 [VEC_TRAP6] = "TRAP #6",
197 [VEC_TRAP7] = "TRAP #7", 197 [VEC_TRAP7] = "TRAP #7",
198 [VEC_TRAP8] = "TRAP #8", 198 [VEC_TRAP8] = "TRAP #8",
199 [VEC_TRAP9] = "TRAP #9", 199 [VEC_TRAP9] = "TRAP #9",
200 [VEC_TRAP10] = "TRAP #10", 200 [VEC_TRAP10] = "TRAP #10",
201 [VEC_TRAP11] = "TRAP #11", 201 [VEC_TRAP11] = "TRAP #11",
202 [VEC_TRAP12] = "TRAP #12", 202 [VEC_TRAP12] = "TRAP #12",
203 [VEC_TRAP13] = "TRAP #13", 203 [VEC_TRAP13] = "TRAP #13",
204 [VEC_TRAP14] = "TRAP #14", 204 [VEC_TRAP14] = "TRAP #14",
205 [VEC_TRAP15] = "TRAP #15", 205 [VEC_TRAP15] = "TRAP #15",
206 [VEC_FPBRUC] = "FPCP BSUN", 206 [VEC_FPBRUC] = "FPCP BSUN",
207 [VEC_FPIR] = "FPCP INEXACT", 207 [VEC_FPIR] = "FPCP INEXACT",
208 [VEC_FPDIVZ] = "FPCP DIV BY 0", 208 [VEC_FPDIVZ] = "FPCP DIV BY 0",
209 [VEC_FPUNDER] = "FPCP UNDERFLOW", 209 [VEC_FPUNDER] = "FPCP UNDERFLOW",
210 [VEC_FPOE] = "FPCP OPERAND ERROR", 210 [VEC_FPOE] = "FPCP OPERAND ERROR",
211 [VEC_FPOVER] = "FPCP OVERFLOW", 211 [VEC_FPOVER] = "FPCP OVERFLOW",
212 [VEC_FPNAN] = "FPCP SNAN", 212 [VEC_FPNAN] = "FPCP SNAN",
213 [VEC_FPUNSUP] = "FPCP UNSUPPORTED OPERATION", 213 [VEC_FPUNSUP] = "FPCP UNSUPPORTED OPERATION",
214 [VEC_MMUCFG] = "MMU CONFIGURATION ERROR", 214 [VEC_MMUCFG] = "MMU CONFIGURATION ERROR",
215 [VEC_MMUILL] = "MMU ILLEGAL OPERATION ERROR", 215 [VEC_MMUILL] = "MMU ILLEGAL OPERATION ERROR",
216 [VEC_MMUACC] = "MMU ACCESS LEVEL VIOLATION ERROR", 216 [VEC_MMUACC] = "MMU ACCESS LEVEL VIOLATION ERROR",
217 [VEC_RESV59] = "UNASSIGNED RESERVED 59", 217 [VEC_RESV59] = "UNASSIGNED RESERVED 59",
218 [VEC_UNIMPEA] = "UNASSIGNED RESERVED 60", 218 [VEC_UNIMPEA] = "UNASSIGNED RESERVED 60",
219 [VEC_UNIMPII] = "UNASSIGNED RESERVED 61", 219 [VEC_UNIMPII] = "UNASSIGNED RESERVED 61",
220 [VEC_RESV62] = "UNASSIGNED RESERVED 62", 220 [VEC_RESV62] = "UNASSIGNED RESERVED 62",
221 [VEC_RESV63] = "UNASSIGNED RESERVED 63", 221 [VEC_RESV63] = "UNASSIGNED RESERVED 63",
222 }; 222 };
223 223
224 static const char *space_names[] = { 224 static const char *space_names[] = {
225 [0] = "Space 0", 225 [0] = "Space 0",
226 [USER_DATA] = "User Data", 226 [USER_DATA] = "User Data",
227 [USER_PROGRAM] = "User Program", 227 [USER_PROGRAM] = "User Program",
228 #ifndef CONFIG_SUN3 228 #ifndef CONFIG_SUN3
229 [3] = "Space 3", 229 [3] = "Space 3",
230 #else 230 #else
231 [FC_CONTROL] = "Control", 231 [FC_CONTROL] = "Control",
232 #endif 232 #endif
233 [4] = "Space 4", 233 [4] = "Space 4",
234 [SUPER_DATA] = "Super Data", 234 [SUPER_DATA] = "Super Data",
235 [SUPER_PROGRAM] = "Super Program", 235 [SUPER_PROGRAM] = "Super Program",
236 [CPU_SPACE] = "CPU" 236 [CPU_SPACE] = "CPU"
237 }; 237 };
238 238
239 void die_if_kernel(char *,struct pt_regs *,int); 239 void die_if_kernel(char *,struct pt_regs *,int);
240 asmlinkage int do_page_fault(struct pt_regs *regs, unsigned long address, 240 asmlinkage int do_page_fault(struct pt_regs *regs, unsigned long address,
241 unsigned long error_code); 241 unsigned long error_code);
242 int send_fault_sig(struct pt_regs *regs); 242 int send_fault_sig(struct pt_regs *regs);
243 243
244 asmlinkage void trap_c(struct frame *fp); 244 asmlinkage void trap_c(struct frame *fp);
245 245
246 #if defined (CONFIG_M68060) 246 #if defined (CONFIG_M68060)
247 static inline void access_error060 (struct frame *fp) 247 static inline void access_error060 (struct frame *fp)
248 { 248 {
249 unsigned long fslw = fp->un.fmt4.pc; /* is really FSLW for access error */ 249 unsigned long fslw = fp->un.fmt4.pc; /* is really FSLW for access error */
250 250
251 #ifdef DEBUG 251 #ifdef DEBUG
252 printk("fslw=%#lx, fa=%#lx\n", fslw, fp->un.fmt4.effaddr); 252 printk("fslw=%#lx, fa=%#lx\n", fslw, fp->un.fmt4.effaddr);
253 #endif 253 #endif
254 254
255 if (fslw & MMU060_BPE) { 255 if (fslw & MMU060_BPE) {
256 /* branch prediction error -> clear branch cache */ 256 /* branch prediction error -> clear branch cache */
257 __asm__ __volatile__ ("movec %/cacr,%/d0\n\t" 257 __asm__ __volatile__ ("movec %/cacr,%/d0\n\t"
258 "orl #0x00400000,%/d0\n\t" 258 "orl #0x00400000,%/d0\n\t"
259 "movec %/d0,%/cacr" 259 "movec %/d0,%/cacr"
260 : : : "d0" ); 260 : : : "d0" );
261 /* return if there's no other error */ 261 /* return if there's no other error */
262 if (!(fslw & MMU060_ERR_BITS) && !(fslw & MMU060_SEE)) 262 if (!(fslw & MMU060_ERR_BITS) && !(fslw & MMU060_SEE))
263 return; 263 return;
264 } 264 }
265 265
266 if (fslw & (MMU060_DESC_ERR | MMU060_WP | MMU060_SP)) { 266 if (fslw & (MMU060_DESC_ERR | MMU060_WP | MMU060_SP)) {
267 unsigned long errorcode; 267 unsigned long errorcode;
268 unsigned long addr = fp->un.fmt4.effaddr; 268 unsigned long addr = fp->un.fmt4.effaddr;
269 269
270 if (fslw & MMU060_MA) 270 if (fslw & MMU060_MA)
271 addr = (addr + PAGE_SIZE - 1) & PAGE_MASK; 271 addr = (addr + PAGE_SIZE - 1) & PAGE_MASK;
272 272
273 errorcode = 1; 273 errorcode = 1;
274 if (fslw & MMU060_DESC_ERR) { 274 if (fslw & MMU060_DESC_ERR) {
275 __flush_tlb040_one(addr); 275 __flush_tlb040_one(addr);
276 errorcode = 0; 276 errorcode = 0;
277 } 277 }
278 if (fslw & MMU060_W) 278 if (fslw & MMU060_W)
279 errorcode |= 2; 279 errorcode |= 2;
280 #ifdef DEBUG 280 #ifdef DEBUG
281 printk("errorcode = %d\n", errorcode ); 281 printk("errorcode = %d\n", errorcode );
282 #endif 282 #endif
283 do_page_fault(&fp->ptregs, addr, errorcode); 283 do_page_fault(&fp->ptregs, addr, errorcode);
284 } else if (fslw & (MMU060_SEE)){ 284 } else if (fslw & (MMU060_SEE)){
285 /* Software Emulation Error. 285 /* Software Emulation Error.
286 * fault during mem_read/mem_write in ifpsp060/os.S 286 * fault during mem_read/mem_write in ifpsp060/os.S
287 */ 287 */
288 send_fault_sig(&fp->ptregs); 288 send_fault_sig(&fp->ptregs);
289 } else if (!(fslw & (MMU060_RE|MMU060_WE)) || 289 } else if (!(fslw & (MMU060_RE|MMU060_WE)) ||
290 send_fault_sig(&fp->ptregs) > 0) { 290 send_fault_sig(&fp->ptregs) > 0) {
291 printk("pc=%#lx, fa=%#lx\n", fp->ptregs.pc, fp->un.fmt4.effaddr); 291 printk("pc=%#lx, fa=%#lx\n", fp->ptregs.pc, fp->un.fmt4.effaddr);
292 printk( "68060 access error, fslw=%lx\n", fslw ); 292 printk( "68060 access error, fslw=%lx\n", fslw );
293 trap_c( fp ); 293 trap_c( fp );
294 } 294 }
295 } 295 }
296 #endif /* CONFIG_M68060 */ 296 #endif /* CONFIG_M68060 */
297 297
298 #if defined (CONFIG_M68040) 298 #if defined (CONFIG_M68040)
299 static inline unsigned long probe040(int iswrite, unsigned long addr, int wbs) 299 static inline unsigned long probe040(int iswrite, unsigned long addr, int wbs)
300 { 300 {
301 unsigned long mmusr; 301 unsigned long mmusr;
302 mm_segment_t old_fs = get_fs(); 302 mm_segment_t old_fs = get_fs();
303 303
304 set_fs(MAKE_MM_SEG(wbs)); 304 set_fs(MAKE_MM_SEG(wbs));
305 305
306 if (iswrite) 306 if (iswrite)
307 asm volatile (".chip 68040; ptestw (%0); .chip 68k" : : "a" (addr)); 307 asm volatile (".chip 68040; ptestw (%0); .chip 68k" : : "a" (addr));
308 else 308 else
309 asm volatile (".chip 68040; ptestr (%0); .chip 68k" : : "a" (addr)); 309 asm volatile (".chip 68040; ptestr (%0); .chip 68k" : : "a" (addr));
310 310
311 asm volatile (".chip 68040; movec %%mmusr,%0; .chip 68k" : "=r" (mmusr)); 311 asm volatile (".chip 68040; movec %%mmusr,%0; .chip 68k" : "=r" (mmusr));
312 312
313 set_fs(old_fs); 313 set_fs(old_fs);
314 314
315 return mmusr; 315 return mmusr;
316 } 316 }
317 317
318 static inline int do_040writeback1(unsigned short wbs, unsigned long wba, 318 static inline int do_040writeback1(unsigned short wbs, unsigned long wba,
319 unsigned long wbd) 319 unsigned long wbd)
320 { 320 {
321 int res = 0; 321 int res = 0;
322 mm_segment_t old_fs = get_fs(); 322 mm_segment_t old_fs = get_fs();
323 323
324 /* set_fs cannot be moved, otherwise put_user() may oops */ 324 /* set_fs cannot be moved, otherwise put_user() may oops */
325 set_fs(MAKE_MM_SEG(wbs)); 325 set_fs(MAKE_MM_SEG(wbs));
326 326
327 switch (wbs & WBSIZ_040) { 327 switch (wbs & WBSIZ_040) {
328 case BA_SIZE_BYTE: 328 case BA_SIZE_BYTE:
329 res = put_user(wbd & 0xff, (char __user *)wba); 329 res = put_user(wbd & 0xff, (char __user *)wba);
330 break; 330 break;
331 case BA_SIZE_WORD: 331 case BA_SIZE_WORD:
332 res = put_user(wbd & 0xffff, (short __user *)wba); 332 res = put_user(wbd & 0xffff, (short __user *)wba);
333 break; 333 break;
334 case BA_SIZE_LONG: 334 case BA_SIZE_LONG:
335 res = put_user(wbd, (int __user *)wba); 335 res = put_user(wbd, (int __user *)wba);
336 break; 336 break;
337 } 337 }
338 338
339 /* set_fs cannot be moved, otherwise put_user() may oops */ 339 /* set_fs cannot be moved, otherwise put_user() may oops */
340 set_fs(old_fs); 340 set_fs(old_fs);
341 341
342 342
343 #ifdef DEBUG 343 #ifdef DEBUG
344 printk("do_040writeback1, res=%d\n",res); 344 printk("do_040writeback1, res=%d\n",res);
345 #endif 345 #endif
346 346
347 return res; 347 return res;
348 } 348 }
349 349
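
The set_fs() comments in do_040writeback1() above mark a real ordering constraint: put_user() is only safe while the segment installed by set_fs(MAKE_MM_SEG(wbs)) is in effect, so the save/override/restore sequence has to bracket the store exactly. A minimal user-space model of that invariant (get_seg, set_seg and checked_store are stand-ins invented for illustration, not kernel APIs):

    #include <stdio.h>

    static unsigned int current_seg;        /* models the task's fs segment */

    static unsigned int get_seg(void) { return current_seg; }
    static void set_seg(unsigned int seg) { current_seg = seg; }

    /* A store that only succeeds while the matching segment is active,
     * the way put_user() only works under the right set_fs() setting. */
    static int checked_store(unsigned int val, unsigned int *addr,
                             unsigned int need)
    {
        if (current_seg != need)
            return -1;                      /* would fault, i.e. oops */
        *addr = val;
        return 0;
    }

    static int writeback(unsigned int *addr, unsigned int data,
                         unsigned int seg)
    {
        unsigned int old = get_seg();       /* save before the override */
        int res;

        set_seg(seg);                       /* must precede the store */
        res = checked_store(data, addr, seg);
        set_seg(old);                       /* must follow the store */
        return res;
    }

    int main(void)
    {
        unsigned int word = 0;
        printf("res=%d word=%u\n", writeback(&word, 42, 1), word);
        return 0;
    }

Moving either set_seg() call across the store makes checked_store() fail, which is the user-space analogue of put_user() oopsing.
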
350 /* after an exception in a writeback, the stack frame corresponding 350 /* after an exception in a writeback, the stack frame corresponding
351 * to that exception is discarded; set a few bits in the old frame 351 * to that exception is discarded; set a few bits in the old frame
352 * to simulate what it should look like 352 * to simulate what it should look like
353 */ 353 */
354 static inline void fix_xframe040(struct frame *fp, unsigned long wba, unsigned short wbs) 354 static inline void fix_xframe040(struct frame *fp, unsigned long wba, unsigned short wbs)
355 { 355 {
356 fp->un.fmt7.faddr = wba; 356 fp->un.fmt7.faddr = wba;
357 fp->un.fmt7.ssw = wbs & 0xff; 357 fp->un.fmt7.ssw = wbs & 0xff;
358 if (wba != current->thread.faddr) 358 if (wba != current->thread.faddr)
359 fp->un.fmt7.ssw |= MA_040; 359 fp->un.fmt7.ssw |= MA_040;
360 } 360 }
361 361
362 static inline void do_040writebacks(struct frame *fp) 362 static inline void do_040writebacks(struct frame *fp)
363 { 363 {
364 int res = 0; 364 int res = 0;
365 #if 0 365 #if 0
366 if (fp->un.fmt7.wb1s & WBV_040) 366 if (fp->un.fmt7.wb1s & WBV_040)
367 printk("access_error040: cannot handle 1st writeback. oops.\n"); 367 printk("access_error040: cannot handle 1st writeback. oops.\n");
368 #endif 368 #endif
369 369
370 if ((fp->un.fmt7.wb2s & WBV_040) && 370 if ((fp->un.fmt7.wb2s & WBV_040) &&
371 !(fp->un.fmt7.wb2s & WBTT_040)) { 371 !(fp->un.fmt7.wb2s & WBTT_040)) {
372 res = do_040writeback1(fp->un.fmt7.wb2s, fp->un.fmt7.wb2a, 372 res = do_040writeback1(fp->un.fmt7.wb2s, fp->un.fmt7.wb2a,
373 fp->un.fmt7.wb2d); 373 fp->un.fmt7.wb2d);
374 if (res) 374 if (res)
375 fix_xframe040(fp, fp->un.fmt7.wb2a, fp->un.fmt7.wb2s); 375 fix_xframe040(fp, fp->un.fmt7.wb2a, fp->un.fmt7.wb2s);
376 else 376 else
377 fp->un.fmt7.wb2s = 0; 377 fp->un.fmt7.wb2s = 0;
378 } 378 }
379 379
380 /* do the 2nd wb only if the first one was successful (except for a kernel wb) */ 380 /* do the 2nd wb only if the first one was successful (except for a kernel wb) */
381 if (fp->un.fmt7.wb3s & WBV_040 && (!res || fp->un.fmt7.wb3s & 4)) { 381 if (fp->un.fmt7.wb3s & WBV_040 && (!res || fp->un.fmt7.wb3s & 4)) {
382 res = do_040writeback1(fp->un.fmt7.wb3s, fp->un.fmt7.wb3a, 382 res = do_040writeback1(fp->un.fmt7.wb3s, fp->un.fmt7.wb3a,
383 fp->un.fmt7.wb3d); 383 fp->un.fmt7.wb3d);
384 if (res) 384 if (res)
385 { 385 {
386 fix_xframe040(fp, fp->un.fmt7.wb3a, fp->un.fmt7.wb3s); 386 fix_xframe040(fp, fp->un.fmt7.wb3a, fp->un.fmt7.wb3s);
387 387
388 fp->un.fmt7.wb2s = fp->un.fmt7.wb3s; 388 fp->un.fmt7.wb2s = fp->un.fmt7.wb3s;
389 fp->un.fmt7.wb3s &= (~WBV_040); 389 fp->un.fmt7.wb3s &= (~WBV_040);
390 fp->un.fmt7.wb2a = fp->un.fmt7.wb3a; 390 fp->un.fmt7.wb2a = fp->un.fmt7.wb3a;
391 fp->un.fmt7.wb2d = fp->un.fmt7.wb3d; 391 fp->un.fmt7.wb2d = fp->un.fmt7.wb3d;
392 } 392 }
393 else 393 else
394 fp->un.fmt7.wb3s = 0; 394 fp->un.fmt7.wb3s = 0;
395 } 395 }
396 396
397 if (res) 397 if (res)
398 send_fault_sig(&fp->ptregs); 398 send_fault_sig(&fp->ptregs);
399 } 399 }
400 400
401 /* 401 /*
402 * called from sigreturn(), must ensure userspace code didn't 402 * called from sigreturn(), must ensure userspace code didn't
403 * manipulate exception frame to circumvent protection, then complete 403 * manipulate exception frame to circumvent protection, then complete
404 * pending writebacks 404 * pending writebacks
405 * we just clear TM2 to turn it into a userspace access 405 * we just clear TM2 to turn it into a userspace access
406 */ 406 */
407 asmlinkage void berr_040cleanup(struct frame *fp) 407 asmlinkage void berr_040cleanup(struct frame *fp)
408 { 408 {
409 fp->un.fmt7.wb2s &= ~4; 409 fp->un.fmt7.wb2s &= ~4;
410 fp->un.fmt7.wb3s &= ~4; 410 fp->un.fmt7.wb3s &= ~4;
411 411
412 do_040writebacks(fp); 412 do_040writebacks(fp);
413 } 413 }
414 414
415 static inline void access_error040(struct frame *fp) 415 static inline void access_error040(struct frame *fp)
416 { 416 {
417 unsigned short ssw = fp->un.fmt7.ssw; 417 unsigned short ssw = fp->un.fmt7.ssw;
418 unsigned long mmusr; 418 unsigned long mmusr;
419 419
420 #ifdef DEBUG 420 #ifdef DEBUG
421 printk("ssw=%#x, fa=%#lx\n", ssw, fp->un.fmt7.faddr); 421 printk("ssw=%#x, fa=%#lx\n", ssw, fp->un.fmt7.faddr);
422 printk("wb1s=%#x, wb2s=%#x, wb3s=%#x\n", fp->un.fmt7.wb1s, 422 printk("wb1s=%#x, wb2s=%#x, wb3s=%#x\n", fp->un.fmt7.wb1s,
423 fp->un.fmt7.wb2s, fp->un.fmt7.wb3s); 423 fp->un.fmt7.wb2s, fp->un.fmt7.wb3s);
424 printk ("wb2a=%lx, wb3a=%lx, wb2d=%lx, wb3d=%lx\n", 424 printk ("wb2a=%lx, wb3a=%lx, wb2d=%lx, wb3d=%lx\n",
425 fp->un.fmt7.wb2a, fp->un.fmt7.wb3a, 425 fp->un.fmt7.wb2a, fp->un.fmt7.wb3a,
426 fp->un.fmt7.wb2d, fp->un.fmt7.wb3d); 426 fp->un.fmt7.wb2d, fp->un.fmt7.wb3d);
427 #endif 427 #endif
428 428
429 if (ssw & ATC_040) { 429 if (ssw & ATC_040) {
430 unsigned long addr = fp->un.fmt7.faddr; 430 unsigned long addr = fp->un.fmt7.faddr;
431 unsigned long errorcode; 431 unsigned long errorcode;
432 432
433 /* 433 /*
434 * The MMU status has to be determined AFTER the address 434 * The MMU status has to be determined AFTER the address
435 * has been corrected if there was a misaligned access (MA). 435 * has been corrected if there was a misaligned access (MA).
436 */ 436 */
437 if (ssw & MA_040) 437 if (ssw & MA_040)
438 addr = (addr + 7) & -8; 438 addr = (addr + 7) & -8;
439 439
440 /* MMU error, get the MMUSR info for this access */ 440 /* MMU error, get the MMUSR info for this access */
441 mmusr = probe040(!(ssw & RW_040), addr, ssw); 441 mmusr = probe040(!(ssw & RW_040), addr, ssw);
442 #ifdef DEBUG 442 #ifdef DEBUG
443 printk("mmusr = %lx\n", mmusr); 443 printk("mmusr = %lx\n", mmusr);
444 #endif 444 #endif
445 errorcode = 1; 445 errorcode = 1;
446 if (!(mmusr & MMU_R_040)) { 446 if (!(mmusr & MMU_R_040)) {
447 /* clear the invalid atc entry */ 447 /* clear the invalid atc entry */
448 __flush_tlb040_one(addr); 448 __flush_tlb040_one(addr);
449 errorcode = 0; 449 errorcode = 0;
450 } 450 }
451 451
452 /* despite what the documentation seems to say, RMW 452 /* despite what the documentation seems to say, RMW
453 * accesses always have both the LK and RW bits set */ 453 * accesses always have both the LK and RW bits set */
454 if (!(ssw & RW_040) || (ssw & LK_040)) 454 if (!(ssw & RW_040) || (ssw & LK_040))
455 errorcode |= 2; 455 errorcode |= 2;
456 456
457 if (do_page_fault(&fp->ptregs, addr, errorcode)) { 457 if (do_page_fault(&fp->ptregs, addr, errorcode)) {
458 #ifdef DEBUG 458 #ifdef DEBUG
459 printk("do_page_fault() !=0 \n"); 459 printk("do_page_fault() !=0 \n");
460 #endif 460 #endif
461 if (user_mode(&fp->ptregs)){ 461 if (user_mode(&fp->ptregs)){
462 /* delay writebacks after signal delivery */ 462 /* delay writebacks after signal delivery */
463 #ifdef DEBUG 463 #ifdef DEBUG
464 printk(".. was usermode - return\n"); 464 printk(".. was usermode - return\n");
465 #endif 465 #endif
466 return; 466 return;
467 } 467 }
468 /* disable writeback into user space from kernel 468 /* disable writeback into user space from kernel
469 * (if do_page_fault didn't fix the mapping, 469 * (if do_page_fault didn't fix the mapping,
470 * the writeback won't do any good) 470 * the writeback won't do any good)
471 */ 471 */
472 #ifdef DEBUG 472 #ifdef DEBUG
473 printk(".. disabling wb2\n"); 473 printk(".. disabling wb2\n");
474 #endif 474 #endif
475 if (fp->un.fmt7.wb2a == fp->un.fmt7.faddr) 475 if (fp->un.fmt7.wb2a == fp->un.fmt7.faddr)
476 fp->un.fmt7.wb2s &= ~WBV_040; 476 fp->un.fmt7.wb2s &= ~WBV_040;
477 } 477 }
478 } else if (send_fault_sig(&fp->ptregs) > 0) { 478 } else if (send_fault_sig(&fp->ptregs) > 0) {
479 printk("68040 access error, ssw=%x\n", ssw); 479 printk("68040 access error, ssw=%x\n", ssw);
480 trap_c(fp); 480 trap_c(fp);
481 } 481 }
482 482
483 do_040writebacks(fp); 483 do_040writebacks(fp);
484 } 484 }
485 #endif /* CONFIG_M68040 */ 485 #endif /* CONFIG_M68040 */
486 486
487 #if defined(CONFIG_SUN3) 487 #if defined(CONFIG_SUN3)
488 #include <asm/sun3mmu.h> 488 #include <asm/sun3mmu.h>
489 489
490 extern int mmu_emu_handle_fault (unsigned long, int, int); 490 extern int mmu_emu_handle_fault (unsigned long, int, int);
491 491
492 /* sun3 version of bus_error030 */ 492 /* sun3 version of bus_error030 */
493 493
494 static inline void bus_error030 (struct frame *fp) 494 static inline void bus_error030 (struct frame *fp)
495 { 495 {
496 unsigned char buserr_type = sun3_get_buserr (); 496 unsigned char buserr_type = sun3_get_buserr ();
497 unsigned long addr, errorcode; 497 unsigned long addr, errorcode;
498 unsigned short ssw = fp->un.fmtb.ssw; 498 unsigned short ssw = fp->un.fmtb.ssw;
499 extern unsigned long _sun3_map_test_start, _sun3_map_test_end; 499 extern unsigned long _sun3_map_test_start, _sun3_map_test_end;
500 500
501 #ifdef DEBUG 501 #ifdef DEBUG
502 if (ssw & (FC | FB)) 502 if (ssw & (FC | FB))
503 printk ("Instruction fault at %#010lx\n", 503 printk ("Instruction fault at %#010lx\n",
504 ssw & FC ? 504 ssw & FC ?
505 fp->ptregs.format == 0xa ? fp->ptregs.pc + 2 : fp->un.fmtb.baddr - 2 505 fp->ptregs.format == 0xa ? fp->ptregs.pc + 2 : fp->un.fmtb.baddr - 2
506 : 506 :
507 fp->ptregs.format == 0xa ? fp->ptregs.pc + 4 : fp->un.fmtb.baddr); 507 fp->ptregs.format == 0xa ? fp->ptregs.pc + 4 : fp->un.fmtb.baddr);
508 if (ssw & DF) 508 if (ssw & DF)
509 printk ("Data %s fault at %#010lx in %s (pc=%#lx)\n", 509 printk ("Data %s fault at %#010lx in %s (pc=%#lx)\n",
510 ssw & RW ? "read" : "write", 510 ssw & RW ? "read" : "write",
511 fp->un.fmtb.daddr, 511 fp->un.fmtb.daddr,
512 space_names[ssw & DFC], fp->ptregs.pc); 512 space_names[ssw & DFC], fp->ptregs.pc);
513 #endif 513 #endif
514 514
515 /* 515 /*
516 * Check if this page should be demand-mapped. This needs to go before 516 * Check if this page should be demand-mapped. This needs to go before
517 * the test for a bad kernel-space access (demand-mapping applies 517 * the test for a bad kernel-space access (demand-mapping applies
518 * to kernel accesses too). 518 * to kernel accesses too).
519 */ 519 */
520 520
521 if ((ssw & DF) 521 if ((ssw & DF)
522 && (buserr_type & (SUN3_BUSERR_PROTERR | SUN3_BUSERR_INVALID))) { 522 && (buserr_type & (SUN3_BUSERR_PROTERR | SUN3_BUSERR_INVALID))) {
523 if (mmu_emu_handle_fault (fp->un.fmtb.daddr, ssw & RW, 0)) 523 if (mmu_emu_handle_fault (fp->un.fmtb.daddr, ssw & RW, 0))
524 return; 524 return;
525 } 525 }
526 526
527 /* Check for kernel-space pagefault (BAD). */ 527 /* Check for kernel-space pagefault (BAD). */
528 if (fp->ptregs.sr & PS_S) { 528 if (fp->ptregs.sr & PS_S) {
529 /* kernel fault must be a data fault to user space */ 529 /* kernel fault must be a data fault to user space */
530 if (! ((ssw & DF) && ((ssw & DFC) == USER_DATA))) { 530 if (! ((ssw & DF) && ((ssw & DFC) == USER_DATA))) {
531 // try checking the kernel mappings before surrender 531 // try checking the kernel mappings before surrender
532 if (mmu_emu_handle_fault (fp->un.fmtb.daddr, ssw & RW, 1)) 532 if (mmu_emu_handle_fault (fp->un.fmtb.daddr, ssw & RW, 1))
533 return; 533 return;
534 /* instruction fault or kernel data fault! */ 534 /* instruction fault or kernel data fault! */
535 if (ssw & (FC | FB)) 535 if (ssw & (FC | FB))
536 printk ("Instruction fault at %#010lx\n", 536 printk ("Instruction fault at %#010lx\n",
537 fp->ptregs.pc); 537 fp->ptregs.pc);
538 if (ssw & DF) { 538 if (ssw & DF) {
539 /* was this fault incurred testing bus mappings? */ 539 /* was this fault incurred testing bus mappings? */
540 if((fp->ptregs.pc >= (unsigned long)&_sun3_map_test_start) && 540 if((fp->ptregs.pc >= (unsigned long)&_sun3_map_test_start) &&
541 (fp->ptregs.pc <= (unsigned long)&_sun3_map_test_end)) { 541 (fp->ptregs.pc <= (unsigned long)&_sun3_map_test_end)) {
542 send_fault_sig(&fp->ptregs); 542 send_fault_sig(&fp->ptregs);
543 return; 543 return;
544 } 544 }
545 545
546 printk ("Data %s fault at %#010lx in %s (pc=%#lx)\n", 546 printk ("Data %s fault at %#010lx in %s (pc=%#lx)\n",
547 ssw & RW ? "read" : "write", 547 ssw & RW ? "read" : "write",
548 fp->un.fmtb.daddr, 548 fp->un.fmtb.daddr,
549 space_names[ssw & DFC], fp->ptregs.pc); 549 space_names[ssw & DFC], fp->ptregs.pc);
550 } 550 }
551 printk ("BAD KERNEL BUSERR\n"); 551 printk ("BAD KERNEL BUSERR\n");
552 552
553 die_if_kernel("Oops", &fp->ptregs,0); 553 die_if_kernel("Oops", &fp->ptregs,0);
554 force_sig(SIGKILL, current); 554 force_sig(SIGKILL, current);
555 return; 555 return;
556 } 556 }
557 } else { 557 } else {
558 /* user fault */ 558 /* user fault */
559 if (!(ssw & (FC | FB)) && !(ssw & DF)) 559 if (!(ssw & (FC | FB)) && !(ssw & DF))
560 /* not an instruction fault or data fault! BAD */ 560 /* not an instruction fault or data fault! BAD */
561 panic ("USER BUSERR w/o instruction or data fault"); 561 panic ("USER BUSERR w/o instruction or data fault");
562 } 562 }
563 563
564 564
565 /* First handle the data fault, if any. */ 565 /* First handle the data fault, if any. */
566 if (ssw & DF) { 566 if (ssw & DF) {
567 addr = fp->un.fmtb.daddr; 567 addr = fp->un.fmtb.daddr;
568 568
569 // errorcode bit 0: 0 -> no page 1 -> protection fault 569 // errorcode bit 0: 0 -> no page 1 -> protection fault
570 // errorcode bit 1: 0 -> read fault 1 -> write fault 570 // errorcode bit 1: 0 -> read fault 1 -> write fault
571 571
572 // (buserr_type & SUN3_BUSERR_PROTERR) -> protection fault 572 // (buserr_type & SUN3_BUSERR_PROTERR) -> protection fault
573 // (buserr_type & SUN3_BUSERR_INVALID) -> invalid page fault 573 // (buserr_type & SUN3_BUSERR_INVALID) -> invalid page fault
574 574
575 if (buserr_type & SUN3_BUSERR_PROTERR) 575 if (buserr_type & SUN3_BUSERR_PROTERR)
576 errorcode = 0x01; 576 errorcode = 0x01;
577 else if (buserr_type & SUN3_BUSERR_INVALID) 577 else if (buserr_type & SUN3_BUSERR_INVALID)
578 errorcode = 0x00; 578 errorcode = 0x00;
579 else { 579 else {
580 #ifdef DEBUG 580 #ifdef DEBUG
581 printk ("*** unexpected busfault type=%#04x\n", buserr_type); 581 printk ("*** unexpected busfault type=%#04x\n", buserr_type);
582 printk ("invalid %s access at %#lx from pc %#lx\n", 582 printk ("invalid %s access at %#lx from pc %#lx\n",
583 !(ssw & RW) ? "write" : "read", addr, 583 !(ssw & RW) ? "write" : "read", addr,
584 fp->ptregs.pc); 584 fp->ptregs.pc);
585 #endif 585 #endif
586 die_if_kernel ("Oops", &fp->ptregs, buserr_type); 586 die_if_kernel ("Oops", &fp->ptregs, buserr_type);
587 force_sig (SIGBUS, current); 587 force_sig (SIGBUS, current);
588 return; 588 return;
589 } 589 }
590 590
591 // TODO: what is the RM bit? --m 591 // TODO: what is the RM bit? --m
592 if (!(ssw & RW) || ssw & RM) 592 if (!(ssw & RW) || ssw & RM)
593 errorcode |= 0x02; 593 errorcode |= 0x02;
594 594
595 /* Handle page fault. */ 595 /* Handle page fault. */
596 do_page_fault (&fp->ptregs, addr, errorcode); 596 do_page_fault (&fp->ptregs, addr, errorcode);
597 597
598 /* Retry the data fault now. */ 598 /* Retry the data fault now. */
599 return; 599 return;
600 } 600 }
601 601
602 /* Now handle the instruction fault. */ 602 /* Now handle the instruction fault. */
603 603
604 /* Get the fault address. */ 604 /* Get the fault address. */
605 if (fp->ptregs.format == 0xA) 605 if (fp->ptregs.format == 0xA)
606 addr = fp->ptregs.pc + 4; 606 addr = fp->ptregs.pc + 4;
607 else 607 else
608 addr = fp->un.fmtb.baddr; 608 addr = fp->un.fmtb.baddr;
609 if (ssw & FC) 609 if (ssw & FC)
610 addr -= 2; 610 addr -= 2;
611 611
612 if (buserr_type & SUN3_BUSERR_INVALID) { 612 if (buserr_type & SUN3_BUSERR_INVALID) {
613 if (!mmu_emu_handle_fault (fp->un.fmtb.daddr, 1, 0)) 613 if (!mmu_emu_handle_fault (fp->un.fmtb.daddr, 1, 0))
614 do_page_fault (&fp->ptregs, addr, 0); 614 do_page_fault (&fp->ptregs, addr, 0);
615 } else { 615 } else {
616 #ifdef DEBUG 616 #ifdef DEBUG
617 printk ("protection fault on insn access (segv).\n"); 617 printk ("protection fault on insn access (segv).\n");
618 #endif 618 #endif
619 force_sig (SIGSEGV, current); 619 force_sig (SIGSEGV, current);
620 } 620 }
621 } 621 }
622 #else 622 #else
623 #if defined(CPU_M68020_OR_M68030) 623 #if defined(CPU_M68020_OR_M68030)
624 static inline void bus_error030 (struct frame *fp) 624 static inline void bus_error030 (struct frame *fp)
625 { 625 {
626 volatile unsigned short temp; 626 volatile unsigned short temp;
627 unsigned short mmusr; 627 unsigned short mmusr;
628 unsigned long addr, errorcode; 628 unsigned long addr, errorcode;
629 unsigned short ssw = fp->un.fmtb.ssw; 629 unsigned short ssw = fp->un.fmtb.ssw;
630 #ifdef DEBUG 630 #ifdef DEBUG
631 unsigned long desc; 631 unsigned long desc;
632 632
633 printk ("pid = %x ", current->pid); 633 printk ("pid = %x ", current->pid);
634 printk ("SSW=%#06x ", ssw); 634 printk ("SSW=%#06x ", ssw);
635 635
636 if (ssw & (FC | FB)) 636 if (ssw & (FC | FB))
637 printk ("Instruction fault at %#010lx\n", 637 printk ("Instruction fault at %#010lx\n",
638 ssw & FC ? 638 ssw & FC ?
639 fp->ptregs.format == 0xa ? fp->ptregs.pc + 2 : fp->un.fmtb.baddr - 2 639 fp->ptregs.format == 0xa ? fp->ptregs.pc + 2 : fp->un.fmtb.baddr - 2
640 : 640 :
641 fp->ptregs.format == 0xa ? fp->ptregs.pc + 4 : fp->un.fmtb.baddr); 641 fp->ptregs.format == 0xa ? fp->ptregs.pc + 4 : fp->un.fmtb.baddr);
642 if (ssw & DF) 642 if (ssw & DF)
643 printk ("Data %s fault at %#010lx in %s (pc=%#lx)\n", 643 printk ("Data %s fault at %#010lx in %s (pc=%#lx)\n",
644 ssw & RW ? "read" : "write", 644 ssw & RW ? "read" : "write",
645 fp->un.fmtb.daddr, 645 fp->un.fmtb.daddr,
646 space_names[ssw & DFC], fp->ptregs.pc); 646 space_names[ssw & DFC], fp->ptregs.pc);
647 #endif 647 #endif
648 648
649 /* ++andreas: If a data fault and an instruction fault happen 649 /* ++andreas: If a data fault and an instruction fault happen
650 at the same time, map in both pages. */ 650 at the same time, map in both pages. */
651 651
652 /* First handle the data fault, if any. */ 652 /* First handle the data fault, if any. */
653 if (ssw & DF) { 653 if (ssw & DF) {
654 addr = fp->un.fmtb.daddr; 654 addr = fp->un.fmtb.daddr;
655 655
656 #ifdef DEBUG 656 #ifdef DEBUG
657 asm volatile ("ptestr %3,%2@,#7,%0\n\t" 657 asm volatile ("ptestr %3,%2@,#7,%0\n\t"
658 "pmove %%psr,%1@" 658 "pmove %%psr,%1@"
659 : "=a&" (desc) 659 : "=a&" (desc)
660 : "a" (&temp), "a" (addr), "d" (ssw)); 660 : "a" (&temp), "a" (addr), "d" (ssw));
661 #else 661 #else
662 asm volatile ("ptestr %2,%1@,#7\n\t" 662 asm volatile ("ptestr %2,%1@,#7\n\t"
663 "pmove %%psr,%0@" 663 "pmove %%psr,%0@"
664 : : "a" (&temp), "a" (addr), "d" (ssw)); 664 : : "a" (&temp), "a" (addr), "d" (ssw));
665 #endif 665 #endif
666 mmusr = temp; 666 mmusr = temp;
667 667
668 #ifdef DEBUG 668 #ifdef DEBUG
669 printk("mmusr is %#x for addr %#lx in task %p\n", 669 printk("mmusr is %#x for addr %#lx in task %p\n",
670 mmusr, addr, current); 670 mmusr, addr, current);
671 printk("descriptor address is %#lx, contents %#lx\n", 671 printk("descriptor address is %#lx, contents %#lx\n",
672 __va(desc), *(unsigned long *)__va(desc)); 672 __va(desc), *(unsigned long *)__va(desc));
673 #endif 673 #endif
674 674
675 errorcode = (mmusr & MMU_I) ? 0 : 1; 675 errorcode = (mmusr & MMU_I) ? 0 : 1;
676 if (!(ssw & RW) || (ssw & RM)) 676 if (!(ssw & RW) || (ssw & RM))
677 errorcode |= 2; 677 errorcode |= 2;
678 678
679 if (mmusr & (MMU_I | MMU_WP)) { 679 if (mmusr & (MMU_I | MMU_WP)) {
680 if (ssw & 4) { 680 if (ssw & 4) {
681 printk("Data %s fault at %#010lx in %s (pc=%#lx)\n", 681 printk("Data %s fault at %#010lx in %s (pc=%#lx)\n",
682 ssw & RW ? "read" : "write", 682 ssw & RW ? "read" : "write",
683 fp->un.fmtb.daddr, 683 fp->un.fmtb.daddr,
684 space_names[ssw & DFC], fp->ptregs.pc); 684 space_names[ssw & DFC], fp->ptregs.pc);
685 goto buserr; 685 goto buserr;
686 } 686 }
687 /* Don't try to do anything further if an exception was 687 /* Don't try to do anything further if an exception was
688 handled. */ 688 handled. */
689 if (do_page_fault (&fp->ptregs, addr, errorcode) < 0) 689 if (do_page_fault (&fp->ptregs, addr, errorcode) < 0)
690 return; 690 return;
691 } else if (!(mmusr & MMU_I)) { 691 } else if (!(mmusr & MMU_I)) {
692 /* probably a 020 cas fault */ 692 /* probably a 020 cas fault */
693 if (!(ssw & RM) && send_fault_sig(&fp->ptregs) > 0) 693 if (!(ssw & RM) && send_fault_sig(&fp->ptregs) > 0)
694 printk("unexpected bus error (%#x,%#x)\n", ssw, mmusr); 694 printk("unexpected bus error (%#x,%#x)\n", ssw, mmusr);
695 } else if (mmusr & (MMU_B|MMU_L|MMU_S)) { 695 } else if (mmusr & (MMU_B|MMU_L|MMU_S)) {
696 printk("invalid %s access at %#lx from pc %#lx\n", 696 printk("invalid %s access at %#lx from pc %#lx\n",
697 !(ssw & RW) ? "write" : "read", addr, 697 !(ssw & RW) ? "write" : "read", addr,
698 fp->ptregs.pc); 698 fp->ptregs.pc);
699 die_if_kernel("Oops",&fp->ptregs,mmusr); 699 die_if_kernel("Oops",&fp->ptregs,mmusr);
700 force_sig(SIGSEGV, current); 700 force_sig(SIGSEGV, current);
701 return; 701 return;
702 } else { 702 } else {
703 #if 0 703 #if 0
704 static volatile long tlong; 704 static volatile long tlong;
705 #endif 705 #endif
706 706
707 printk("weird %s access at %#lx from pc %#lx (ssw is %#x)\n", 707 printk("weird %s access at %#lx from pc %#lx (ssw is %#x)\n",
708 !(ssw & RW) ? "write" : "read", addr, 708 !(ssw & RW) ? "write" : "read", addr,
709 fp->ptregs.pc, ssw); 709 fp->ptregs.pc, ssw);
710 asm volatile ("ptestr #1,%1@,#0\n\t" 710 asm volatile ("ptestr #1,%1@,#0\n\t"
711 "pmove %%psr,%0@" 711 "pmove %%psr,%0@"
712 : /* no outputs */ 712 : /* no outputs */
713 : "a" (&temp), "a" (addr)); 713 : "a" (&temp), "a" (addr));
714 mmusr = temp; 714 mmusr = temp;
715 715
716 printk ("level 0 mmusr is %#x\n", mmusr); 716 printk ("level 0 mmusr is %#x\n", mmusr);
717 #if 0 717 #if 0
718 asm volatile ("pmove %%tt0,%0@" 718 asm volatile ("pmove %%tt0,%0@"
719 : /* no outputs */ 719 : /* no outputs */
720 : "a" (&tlong)); 720 : "a" (&tlong));
721 printk("tt0 is %#lx, ", tlong); 721 printk("tt0 is %#lx, ", tlong);
722 asm volatile ("pmove %%tt1,%0@" 722 asm volatile ("pmove %%tt1,%0@"
723 : /* no outputs */ 723 : /* no outputs */
724 : "a" (&tlong)); 724 : "a" (&tlong));
725 printk("tt1 is %#lx\n", tlong); 725 printk("tt1 is %#lx\n", tlong);
726 #endif 726 #endif
727 #ifdef DEBUG 727 #ifdef DEBUG
728 printk("Unknown SIGSEGV - 1\n"); 728 printk("Unknown SIGSEGV - 1\n");
729 #endif 729 #endif
730 die_if_kernel("Oops",&fp->ptregs,mmusr); 730 die_if_kernel("Oops",&fp->ptregs,mmusr);
731 force_sig(SIGSEGV, current); 731 force_sig(SIGSEGV, current);
732 return; 732 return;
733 } 733 }
734 734
735 /* set up an ATC entry for the access about to be retried */ 735 /* set up an ATC entry for the access about to be retried */
736 if (!(ssw & RW) || (ssw & RM)) 736 if (!(ssw & RW) || (ssw & RM))
737 asm volatile ("ploadw %1,%0@" : /* no outputs */ 737 asm volatile ("ploadw %1,%0@" : /* no outputs */
738 : "a" (addr), "d" (ssw)); 738 : "a" (addr), "d" (ssw));
739 else 739 else
740 asm volatile ("ploadr %1,%0@" : /* no outputs */ 740 asm volatile ("ploadr %1,%0@" : /* no outputs */
741 : "a" (addr), "d" (ssw)); 741 : "a" (addr), "d" (ssw));
742 } 742 }
743 743
744 /* Now handle the instruction fault. */ 744 /* Now handle the instruction fault. */
745 745
746 if (!(ssw & (FC|FB))) 746 if (!(ssw & (FC|FB)))
747 return; 747 return;
748 748
749 if (fp->ptregs.sr & PS_S) { 749 if (fp->ptregs.sr & PS_S) {
750 printk("Instruction fault at %#010lx\n", 750 printk("Instruction fault at %#010lx\n",
751 fp->ptregs.pc); 751 fp->ptregs.pc);
752 buserr: 752 buserr:
753 printk ("BAD KERNEL BUSERR\n"); 753 printk ("BAD KERNEL BUSERR\n");
754 die_if_kernel("Oops",&fp->ptregs,0); 754 die_if_kernel("Oops",&fp->ptregs,0);
755 force_sig(SIGKILL, current); 755 force_sig(SIGKILL, current);
756 return; 756 return;
757 } 757 }
758 758
759 /* get the fault address */ 759 /* get the fault address */
760 if (fp->ptregs.format == 10) 760 if (fp->ptregs.format == 10)
761 addr = fp->ptregs.pc + 4; 761 addr = fp->ptregs.pc + 4;
762 else 762 else
763 addr = fp->un.fmtb.baddr; 763 addr = fp->un.fmtb.baddr;
764 if (ssw & FC) 764 if (ssw & FC)
765 addr -= 2; 765 addr -= 2;
766 766
767 if ((ssw & DF) && ((addr ^ fp->un.fmtb.daddr) & PAGE_MASK) == 0) 767 if ((ssw & DF) && ((addr ^ fp->un.fmtb.daddr) & PAGE_MASK) == 0)
768 /* Insn fault on same page as data fault. But we 768 /* Insn fault on same page as data fault. But we
769 should still create the ATC entry. */ 769 should still create the ATC entry. */
770 goto create_atc_entry; 770 goto create_atc_entry;
771 771
772 #ifdef DEBUG 772 #ifdef DEBUG
773 asm volatile ("ptestr #1,%2@,#7,%0\n\t" 773 asm volatile ("ptestr #1,%2@,#7,%0\n\t"
774 "pmove %%psr,%1@" 774 "pmove %%psr,%1@"
775 : "=a&" (desc) 775 : "=a&" (desc)
776 : "a" (&temp), "a" (addr)); 776 : "a" (&temp), "a" (addr));
777 #else 777 #else
778 asm volatile ("ptestr #1,%1@,#7\n\t" 778 asm volatile ("ptestr #1,%1@,#7\n\t"
779 "pmove %%psr,%0@" 779 "pmove %%psr,%0@"
780 : : "a" (&temp), "a" (addr)); 780 : : "a" (&temp), "a" (addr));
781 #endif 781 #endif
782 mmusr = temp; 782 mmusr = temp;
783 783
784 #ifdef DEBUG 784 #ifdef DEBUG
785 printk ("mmusr is %#x for addr %#lx in task %p\n", 785 printk ("mmusr is %#x for addr %#lx in task %p\n",
786 mmusr, addr, current); 786 mmusr, addr, current);
787 printk ("descriptor address is %#lx, contents %#lx\n", 787 printk ("descriptor address is %#lx, contents %#lx\n",
788 __va(desc), *(unsigned long *)__va(desc)); 788 __va(desc), *(unsigned long *)__va(desc));
789 #endif 789 #endif
790 790
791 if (mmusr & MMU_I) 791 if (mmusr & MMU_I)
792 do_page_fault (&fp->ptregs, addr, 0); 792 do_page_fault (&fp->ptregs, addr, 0);
793 else if (mmusr & (MMU_B|MMU_L|MMU_S)) { 793 else if (mmusr & (MMU_B|MMU_L|MMU_S)) {
794 printk ("invalid insn access at %#lx from pc %#lx\n", 794 printk ("invalid insn access at %#lx from pc %#lx\n",
795 addr, fp->ptregs.pc); 795 addr, fp->ptregs.pc);
796 #ifdef DEBUG 796 #ifdef DEBUG
797 printk("Unknown SIGSEGV - 2\n"); 797 printk("Unknown SIGSEGV - 2\n");
798 #endif 798 #endif
799 die_if_kernel("Oops",&fp->ptregs,mmusr); 799 die_if_kernel("Oops",&fp->ptregs,mmusr);
800 force_sig(SIGSEGV, current); 800 force_sig(SIGSEGV, current);
801 return; 801 return;
802 } 802 }
803 803
804 create_atc_entry: 804 create_atc_entry:
805 /* set up an ATC entry for the access about to be retried */ 805 /* set up an ATC entry for the access about to be retried */
806 asm volatile ("ploadr #2,%0@" : /* no outputs */ 806 asm volatile ("ploadr #2,%0@" : /* no outputs */
807 : "a" (addr)); 807 : "a" (addr));
808 } 808 }
809 #endif /* CPU_M68020_OR_M68030 */ 809 #endif /* CPU_M68020_OR_M68030 */
810 #endif /* !CONFIG_SUN3 */ 810 #endif /* !CONFIG_SUN3 */
811 811
812 asmlinkage void buserr_c(struct frame *fp) 812 asmlinkage void buserr_c(struct frame *fp)
813 { 813 {
814 /* Only set esp0 if coming from user mode */ 814 /* Only set esp0 if coming from user mode */
815 if (user_mode(&fp->ptregs)) 815 if (user_mode(&fp->ptregs))
816 current->thread.esp0 = (unsigned long) fp; 816 current->thread.esp0 = (unsigned long) fp;
817 817
818 #ifdef DEBUG 818 #ifdef DEBUG
819 printk ("*** Bus Error *** Format is %x\n", fp->ptregs.format); 819 printk ("*** Bus Error *** Format is %x\n", fp->ptregs.format);
820 #endif 820 #endif
821 821
822 switch (fp->ptregs.format) { 822 switch (fp->ptregs.format) {
823 #if defined (CONFIG_M68060) 823 #if defined (CONFIG_M68060)
824 case 4: /* 68060 access error */ 824 case 4: /* 68060 access error */
825 access_error060 (fp); 825 access_error060 (fp);
826 break; 826 break;
827 #endif 827 #endif
828 #if defined (CONFIG_M68040) 828 #if defined (CONFIG_M68040)
829 case 0x7: /* 68040 access error */ 829 case 0x7: /* 68040 access error */
830 access_error040 (fp); 830 access_error040 (fp);
831 break; 831 break;
832 #endif 832 #endif
833 #if defined (CPU_M68020_OR_M68030) 833 #if defined (CPU_M68020_OR_M68030)
834 case 0xa: 834 case 0xa:
835 case 0xb: 835 case 0xb:
836 bus_error030 (fp); 836 bus_error030 (fp);
837 break; 837 break;
838 #endif 838 #endif
839 default: 839 default:
840 die_if_kernel("bad frame format",&fp->ptregs,0); 840 die_if_kernel("bad frame format",&fp->ptregs,0);
841 #ifdef DEBUG 841 #ifdef DEBUG
842 printk("Unknown SIGSEGV - 4\n"); 842 printk("Unknown SIGSEGV - 4\n");
843 #endif 843 #endif
844 force_sig(SIGSEGV, current); 844 force_sig(SIGSEGV, current);
845 } 845 }
846 } 846 }
847 847
848 848
849 static int kstack_depth_to_print = 48; 849 static int kstack_depth_to_print = 48;
850 850
851 void show_trace(unsigned long *stack) 851 void show_trace(unsigned long *stack)
852 { 852 {
853 unsigned long *endstack; 853 unsigned long *endstack;
854 unsigned long addr; 854 unsigned long addr;
855 int i; 855 int i;
856 856
857 printk("Call Trace:"); 857 printk("Call Trace:");
858 addr = (unsigned long)stack + THREAD_SIZE - 1; 858 addr = (unsigned long)stack + THREAD_SIZE - 1;
859 endstack = (unsigned long *)(addr & -THREAD_SIZE); 859 endstack = (unsigned long *)(addr & -THREAD_SIZE);
860 i = 0; 860 i = 0;
861 while (stack + 1 <= endstack) { 861 while (stack + 1 <= endstack) {
862 addr = *stack++; 862 addr = *stack++;
863 /* 863 /*
864 * If the address is either in the text segment of the 864 * If the address is either in the text segment of the
865 * kernel, or in the region which contains vmalloc'ed 865 * kernel, or in the region which contains vmalloc'ed
866 * memory, it *may* be the address of a calling 866 * memory, it *may* be the address of a calling
867 * routine; if so, print it so that someone tracing 867 * routine; if so, print it so that someone tracing
868 * down the cause of the crash will be able to figure 868 * down the cause of the crash will be able to figure
869 * out the call path that was taken. 869 * out the call path that was taken.
870 */ 870 */
871 if (__kernel_text_address(addr)) { 871 if (__kernel_text_address(addr)) {
872 #ifndef CONFIG_KALLSYMS 872 #ifndef CONFIG_KALLSYMS
873 if (i % 5 == 0) 873 if (i % 5 == 0)
874 printk("\n "); 874 printk("\n ");
875 #endif 875 #endif
876 printk(" [<%08lx>]", addr); 876 printk(" [<%08lx>]", addr);
877 print_symbol(" %s\n", addr); 877 print_symbol(" %s\n", addr);
878 i++; 878 i++;
879 } 879 }
880 } 880 }
881 printk("\n"); 881 printk("\n");
882 } 882 }
883 883
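show_trace() above is a heuristic backtracer: with no frame pointers to follow, it walks every word on the kernel stack and prints the ones that land inside kernel text, accepting the occasional stale value as a false positive. A standalone sketch of the same scan, assuming an invented text range in place of __kernel_text_address():

    #include <stdio.h>

    /* Invented bounds standing in for __kernel_text_address(). */
    #define TEXT_START 0x00001000UL
    #define TEXT_END   0x00100000UL

    static int looks_like_text(unsigned long addr)
    {
        return addr >= TEXT_START && addr < TEXT_END;
    }

    /* Print every stack word that could be a return address. */
    static void sketch_show_trace(const unsigned long *sp,
                                  const unsigned long *end)
    {
        printf("Call Trace:");
        for (; sp < end; sp++)
            if (looks_like_text(*sp))
                printf(" [<%08lx>]", *sp);
        printf("\n");
    }

    int main(void)
    {
        unsigned long stack[] = { 0xdeadbeefUL, 0x2040UL, 42UL, 0x8000UL, 7UL };

        sketch_show_trace(stack, stack + sizeof(stack) / sizeof(stack[0]));
        return 0;
    }

Everything printed is only a candidate return address, which is why the comment in show_trace() says it may be the address of a calling routine.
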
884 void show_registers(struct pt_regs *regs) 884 void show_registers(struct pt_regs *regs)
885 { 885 {
886 struct frame *fp = (struct frame *)regs; 886 struct frame *fp = (struct frame *)regs;
887 mm_segment_t old_fs = get_fs(); 887 mm_segment_t old_fs = get_fs();
888 u16 c, *cp; 888 u16 c, *cp;
889 unsigned long addr; 889 unsigned long addr;
890 int i; 890 int i;
891 891
892 print_modules(); 892 print_modules();
893 printk("PC: [<%08lx>]",regs->pc); 893 printk("PC: [<%08lx>]",regs->pc);
894 print_symbol(" %s", regs->pc); 894 print_symbol(" %s", regs->pc);
895 printk("\nSR: %04x SP: %p a2: %08lx\n", 895 printk("\nSR: %04x SP: %p a2: %08lx\n",
896 regs->sr, regs, regs->a2); 896 regs->sr, regs, regs->a2);
897 printk("d0: %08lx d1: %08lx d2: %08lx d3: %08lx\n", 897 printk("d0: %08lx d1: %08lx d2: %08lx d3: %08lx\n",
898 regs->d0, regs->d1, regs->d2, regs->d3); 898 regs->d0, regs->d1, regs->d2, regs->d3);
899 printk("d4: %08lx d5: %08lx a0: %08lx a1: %08lx\n", 899 printk("d4: %08lx d5: %08lx a0: %08lx a1: %08lx\n",
900 regs->d4, regs->d5, regs->a0, regs->a1); 900 regs->d4, regs->d5, regs->a0, regs->a1);
901 901
902 printk("Process %s (pid: %d, task=%p)\n", 902 printk("Process %s (pid: %d, task=%p)\n",
903 current->comm, current->pid, current); 903 current->comm, current->pid, current);
904 addr = (unsigned long)&fp->un; 904 addr = (unsigned long)&fp->un;
905 printk("Frame format=%X ", regs->format); 905 printk("Frame format=%X ", regs->format);
906 switch (regs->format) { 906 switch (regs->format) {
907 case 0x2: 907 case 0x2:
908 printk("instr addr=%08lx\n", fp->un.fmt2.iaddr); 908 printk("instr addr=%08lx\n", fp->un.fmt2.iaddr);
909 addr += sizeof(fp->un.fmt2); 909 addr += sizeof(fp->un.fmt2);
910 break; 910 break;
911 case 0x3: 911 case 0x3:
912 printk("eff addr=%08lx\n", fp->un.fmt3.effaddr); 912 printk("eff addr=%08lx\n", fp->un.fmt3.effaddr);
913 addr += sizeof(fp->un.fmt3); 913 addr += sizeof(fp->un.fmt3);
914 break; 914 break;
915 case 0x4: 915 case 0x4:
916 printk((CPU_IS_060 ? "fault addr=%08lx fslw=%08lx\n" 916 printk((CPU_IS_060 ? "fault addr=%08lx fslw=%08lx\n"
917 : "eff addr=%08lx pc=%08lx\n"), 917 : "eff addr=%08lx pc=%08lx\n"),
918 fp->un.fmt4.effaddr, fp->un.fmt4.pc); 918 fp->un.fmt4.effaddr, fp->un.fmt4.pc);
919 addr += sizeof(fp->un.fmt4); 919 addr += sizeof(fp->un.fmt4);
920 break; 920 break;
921 case 0x7: 921 case 0x7:
922 printk("eff addr=%08lx ssw=%04x faddr=%08lx\n", 922 printk("eff addr=%08lx ssw=%04x faddr=%08lx\n",
923 fp->un.fmt7.effaddr, fp->un.fmt7.ssw, fp->un.fmt7.faddr); 923 fp->un.fmt7.effaddr, fp->un.fmt7.ssw, fp->un.fmt7.faddr);
924 printk("wb 1 stat/addr/data: %04x %08lx %08lx\n", 924 printk("wb 1 stat/addr/data: %04x %08lx %08lx\n",
925 fp->un.fmt7.wb1s, fp->un.fmt7.wb1a, fp->un.fmt7.wb1dpd0); 925 fp->un.fmt7.wb1s, fp->un.fmt7.wb1a, fp->un.fmt7.wb1dpd0);
926 printk("wb 2 stat/addr/data: %04x %08lx %08lx\n", 926 printk("wb 2 stat/addr/data: %04x %08lx %08lx\n",
927 fp->un.fmt7.wb2s, fp->un.fmt7.wb2a, fp->un.fmt7.wb2d); 927 fp->un.fmt7.wb2s, fp->un.fmt7.wb2a, fp->un.fmt7.wb2d);
928 printk("wb 3 stat/addr/data: %04x %08lx %08lx\n", 928 printk("wb 3 stat/addr/data: %04x %08lx %08lx\n",
929 fp->un.fmt7.wb3s, fp->un.fmt7.wb3a, fp->un.fmt7.wb3d); 929 fp->un.fmt7.wb3s, fp->un.fmt7.wb3a, fp->un.fmt7.wb3d);
930 printk("push data: %08lx %08lx %08lx %08lx\n", 930 printk("push data: %08lx %08lx %08lx %08lx\n",
931 fp->un.fmt7.wb1dpd0, fp->un.fmt7.pd1, fp->un.fmt7.pd2, 931 fp->un.fmt7.wb1dpd0, fp->un.fmt7.pd1, fp->un.fmt7.pd2,
932 fp->un.fmt7.pd3); 932 fp->un.fmt7.pd3);
933 addr += sizeof(fp->un.fmt7); 933 addr += sizeof(fp->un.fmt7);
934 break; 934 break;
935 case 0x9: 935 case 0x9:
936 printk("instr addr=%08lx\n", fp->un.fmt9.iaddr); 936 printk("instr addr=%08lx\n", fp->un.fmt9.iaddr);
937 addr += sizeof(fp->un.fmt9); 937 addr += sizeof(fp->un.fmt9);
938 break; 938 break;
939 case 0xa: 939 case 0xa:
940 printk("ssw=%04x isc=%04x isb=%04x daddr=%08lx dobuf=%08lx\n", 940 printk("ssw=%04x isc=%04x isb=%04x daddr=%08lx dobuf=%08lx\n",
941 fp->un.fmta.ssw, fp->un.fmta.isc, fp->un.fmta.isb, 941 fp->un.fmta.ssw, fp->un.fmta.isc, fp->un.fmta.isb,
942 fp->un.fmta.daddr, fp->un.fmta.dobuf); 942 fp->un.fmta.daddr, fp->un.fmta.dobuf);
943 addr += sizeof(fp->un.fmta); 943 addr += sizeof(fp->un.fmta);
944 break; 944 break;
945 case 0xb: 945 case 0xb:
946 printk("ssw=%04x isc=%04x isb=%04x daddr=%08lx dobuf=%08lx\n", 946 printk("ssw=%04x isc=%04x isb=%04x daddr=%08lx dobuf=%08lx\n",
947 fp->un.fmtb.ssw, fp->un.fmtb.isc, fp->un.fmtb.isb, 947 fp->un.fmtb.ssw, fp->un.fmtb.isc, fp->un.fmtb.isb,
948 fp->un.fmtb.daddr, fp->un.fmtb.dobuf); 948 fp->un.fmtb.daddr, fp->un.fmtb.dobuf);
949 printk("baddr=%08lx dibuf=%08lx ver=%x\n", 949 printk("baddr=%08lx dibuf=%08lx ver=%x\n",
950 fp->un.fmtb.baddr, fp->un.fmtb.dibuf, fp->un.fmtb.ver); 950 fp->un.fmtb.baddr, fp->un.fmtb.dibuf, fp->un.fmtb.ver);
951 addr += sizeof(fp->un.fmtb); 951 addr += sizeof(fp->un.fmtb);
952 break; 952 break;
953 default: 953 default:
954 printk("\n"); 954 printk("\n");
955 } 955 }
956 show_stack(NULL, (unsigned long *)addr); 956 show_stack(NULL, (unsigned long *)addr);
957 957
958 printk("Code:"); 958 printk("Code:");
959 set_fs(KERNEL_DS); 959 set_fs(KERNEL_DS);
960 cp = (u16 *)regs->pc; 960 cp = (u16 *)regs->pc;
961 for (i = -8; i < 16; i++) { 961 for (i = -8; i < 16; i++) {
962 if (get_user(c, cp + i) && i >= 0) { 962 if (get_user(c, cp + i) && i >= 0) {
963 printk(" Bad PC value."); 963 printk(" Bad PC value.");
964 break; 964 break;
965 } 965 }
966 printk(i ? " %04x" : " <%04x>", c); 966 printk(i ? " %04x" : " <%04x>", c);
967 } 967 }
968 set_fs(old_fs); 968 set_fs(old_fs);
969 printk ("\n"); 969 printk ("\n");
970 } 970 }
971 971
972 void show_stack(struct task_struct *task, unsigned long *stack) 972 void show_stack(struct task_struct *task, unsigned long *stack)
973 { 973 {
974 unsigned long *p; 974 unsigned long *p;
975 unsigned long *endstack; 975 unsigned long *endstack;
976 int i; 976 int i;
977 977
978 if (!stack) { 978 if (!stack) {
979 if (task) 979 if (task)
980 stack = (unsigned long *)task->thread.esp0; 980 stack = (unsigned long *)task->thread.esp0;
981 else 981 else
982 stack = (unsigned long *)&stack; 982 stack = (unsigned long *)&stack;
983 } 983 }
984 endstack = (unsigned long *)(((unsigned long)stack + THREAD_SIZE - 1) & -THREAD_SIZE); 984 endstack = (unsigned long *)(((unsigned long)stack + THREAD_SIZE - 1) & -THREAD_SIZE);
985 985
986 printk("Stack from %08lx:", (unsigned long)stack); 986 printk("Stack from %08lx:", (unsigned long)stack);
987 p = stack; 987 p = stack;
988 for (i = 0; i < kstack_depth_to_print; i++) { 988 for (i = 0; i < kstack_depth_to_print; i++) {
989 if (p + 1 > endstack) 989 if (p + 1 > endstack)
990 break; 990 break;
991 if (i % 8 == 0) 991 if (i % 8 == 0)
992 printk("\n "); 992 printk("\n ");
993 printk(" %08lx", *p++); 993 printk(" %08lx", *p++);
994 } 994 }
995 printk("\n"); 995 printk("\n");
996 show_trace(stack); 996 show_trace(stack);
997 } 997 }
998 998
999 /* 999 /*
1000 * The architecture-independent backtrace generator 1000 * The architecture-independent backtrace generator
1001 */ 1001 */
1002 void dump_stack(void) 1002 void dump_stack(void)
1003 { 1003 {
1004 unsigned long stack; 1004 unsigned long stack;
1005 1005
1006 show_trace(&stack); 1006 show_trace(&stack);
1007 } 1007 }
1008 1008
1009 EXPORT_SYMBOL(dump_stack); 1009 EXPORT_SYMBOL(dump_stack);
1010 1010
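
dump_stack() can be a one-liner because of two facts this file relies on throughout: the address of a local variable necessarily lies within the current kernel stack, and kernel stacks occupy a single THREAD_SIZE-aligned block, so rounding any in-stack address up to the next THREAD_SIZE boundary gives the end of the stack (the addr & -THREAD_SIZE trick in show_trace() and show_stack()). A sketch of the arithmetic, with an assumed THREAD_SIZE chosen only for illustration:

    #include <stdio.h>

    #define THREAD_SIZE 8192UL      /* assumed value; must be a power of two */

    /* Round an in-stack address up to the stack's end, exploiting the fact
     * that the whole stack lives in one THREAD_SIZE-aligned block. */
    static unsigned long stack_end(unsigned long addr)
    {
        return (addr + THREAD_SIZE - 1) & -THREAD_SIZE;
    }

    int main(void)
    {
        unsigned long stack;        /* its address lies within this stack */
        unsigned long sp = (unsigned long)&stack;

        printf("sp=%#lx end=%#lx room=%lu bytes\n",
               sp, stack_end(sp), stack_end(sp) - sp);
        return 0;
    }
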
1011 void bad_super_trap (struct frame *fp) 1011 void bad_super_trap (struct frame *fp)
1012 { 1012 {
1013 console_verbose(); 1013 console_verbose();
1014 if (fp->ptregs.vector < 4 * ARRAY_SIZE(vec_names)) 1014 if (fp->ptregs.vector < 4 * ARRAY_SIZE(vec_names))
1015 printk ("*** %s *** FORMAT=%X\n", 1015 printk ("*** %s *** FORMAT=%X\n",
1016 vec_names[(fp->ptregs.vector) >> 2], 1016 vec_names[(fp->ptregs.vector) >> 2],
1017 fp->ptregs.format); 1017 fp->ptregs.format);
1018 else 1018 else
1019 printk ("*** Exception %d *** FORMAT=%X\n", 1019 printk ("*** Exception %d *** FORMAT=%X\n",
1020 (fp->ptregs.vector) >> 2, 1020 (fp->ptregs.vector) >> 2,
1021 fp->ptregs.format); 1021 fp->ptregs.format);
1022 if (fp->ptregs.vector >> 2 == VEC_ADDRERR && CPU_IS_020_OR_030) { 1022 if (fp->ptregs.vector >> 2 == VEC_ADDRERR && CPU_IS_020_OR_030) {
1023 unsigned short ssw = fp->un.fmtb.ssw; 1023 unsigned short ssw = fp->un.fmtb.ssw;
1024 1024
1025 printk ("SSW=%#06x ", ssw); 1025 printk ("SSW=%#06x ", ssw);
1026 1026
1027 if (ssw & RC) 1027 if (ssw & RC)
1028 printk ("Pipe stage C instruction fault at %#010lx\n", 1028 printk ("Pipe stage C instruction fault at %#010lx\n",
1029 (fp->ptregs.format) == 0xA ? 1029 (fp->ptregs.format) == 0xA ?
1030 fp->ptregs.pc + 2 : fp->un.fmtb.baddr - 2); 1030 fp->ptregs.pc + 2 : fp->un.fmtb.baddr - 2);
1031 if (ssw & RB) 1031 if (ssw & RB)
1032 printk ("Pipe stage B instruction fault at %#010lx\n", 1032 printk ("Pipe stage B instruction fault at %#010lx\n",
1033 (fp->ptregs.format) == 0xA ? 1033 (fp->ptregs.format) == 0xA ?
1034 fp->ptregs.pc + 4 : fp->un.fmtb.baddr); 1034 fp->ptregs.pc + 4 : fp->un.fmtb.baddr);
1035 if (ssw & DF) 1035 if (ssw & DF)
1036 printk ("Data %s fault at %#010lx in %s (pc=%#lx)\n", 1036 printk ("Data %s fault at %#010lx in %s (pc=%#lx)\n",
1037 ssw & RW ? "read" : "write", 1037 ssw & RW ? "read" : "write",
1038 fp->un.fmtb.daddr, space_names[ssw & DFC], 1038 fp->un.fmtb.daddr, space_names[ssw & DFC],
1039 fp->ptregs.pc); 1039 fp->ptregs.pc);
1040 } 1040 }
1041 printk ("Current process id is %d\n", current->pid); 1041 printk ("Current process id is %d\n", current->pid);
1042 die_if_kernel("BAD KERNEL TRAP", &fp->ptregs, 0); 1042 die_if_kernel("BAD KERNEL TRAP", &fp->ptregs, 0);
1043 } 1043 }
1044 1044
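The fp->ptregs.vector >> 2 indexing in bad_super_trap() deserves a note: on the m68k the saved vector field is a byte offset into the vector table, four bytes per entry, so dividing by four recovers the vector number, and the bounds check correspondingly compares against 4 * ARRAY_SIZE(vec_names). A small standalone illustration:

    #include <stdio.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    static const char *names[] = { "RESET SP", "RESET PC", "BUS ERROR" };

    /* The hardware reports vector * 4, a byte offset into the vector
     * table; shift right by two to recover the table index. */
    static const char *vec_name(unsigned int vector_offset)
    {
        if (vector_offset < 4 * ARRAY_SIZE(names))
            return names[vector_offset >> 2];
        return "unknown";
    }

    int main(void)
    {
        printf("%s\n", vec_name(8));    /* offset 8 -> vector 2 -> BUS ERROR */
        return 0;
    }
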
1045 asmlinkage void trap_c(struct frame *fp) 1045 asmlinkage void trap_c(struct frame *fp)
1046 { 1046 {
1047 int sig; 1047 int sig;
1048 siginfo_t info; 1048 siginfo_t info;
1049 1049
1050 if (fp->ptregs.sr & PS_S) { 1050 if (fp->ptregs.sr & PS_S) {
1051 if ((fp->ptregs.vector >> 2) == VEC_TRACE) { 1051 if ((fp->ptregs.vector >> 2) == VEC_TRACE) {
1052 /* traced a trapping instruction */ 1052 /* traced a trapping instruction */
1053 current->ptrace |= PT_DTRACE; 1053 current->ptrace |= PT_DTRACE;
1054 } else 1054 } else
1055 bad_super_trap(fp); 1055 bad_super_trap(fp);
1056 return; 1056 return;
1057 } 1057 }
1058 1058
1059 /* send the appropriate signal to the user program */ 1059 /* send the appropriate signal to the user program */
1060 switch ((fp->ptregs.vector) >> 2) { 1060 switch ((fp->ptregs.vector) >> 2) {
1061 case VEC_ADDRERR: 1061 case VEC_ADDRERR:
1062 info.si_code = BUS_ADRALN; 1062 info.si_code = BUS_ADRALN;
1063 sig = SIGBUS; 1063 sig = SIGBUS;
1064 break; 1064 break;
1065 case VEC_ILLEGAL: 1065 case VEC_ILLEGAL:
1066 case VEC_LINE10: 1066 case VEC_LINE10:
1067 case VEC_LINE11: 1067 case VEC_LINE11:
1068 info.si_code = ILL_ILLOPC; 1068 info.si_code = ILL_ILLOPC;
1069 sig = SIGILL; 1069 sig = SIGILL;
1070 break; 1070 break;
1071 case VEC_PRIV: 1071 case VEC_PRIV:
1072 info.si_code = ILL_PRVOPC; 1072 info.si_code = ILL_PRVOPC;
1073 sig = SIGILL; 1073 sig = SIGILL;
1074 break; 1074 break;
1075 case VEC_COPROC: 1075 case VEC_COPROC:
1076 info.si_code = ILL_COPROC; 1076 info.si_code = ILL_COPROC;
1077 sig = SIGILL; 1077 sig = SIGILL;
1078 break; 1078 break;
1079 case VEC_TRAP1: 1079 case VEC_TRAP1:
1080 case VEC_TRAP2: 1080 case VEC_TRAP2:
1081 case VEC_TRAP3: 1081 case VEC_TRAP3:
1082 case VEC_TRAP4: 1082 case VEC_TRAP4:
1083 case VEC_TRAP5: 1083 case VEC_TRAP5:
1084 case VEC_TRAP6: 1084 case VEC_TRAP6:
1085 case VEC_TRAP7: 1085 case VEC_TRAP7:
1086 case VEC_TRAP8: 1086 case VEC_TRAP8:
1087 case VEC_TRAP9: 1087 case VEC_TRAP9:
1088 case VEC_TRAP10: 1088 case VEC_TRAP10:
1089 case VEC_TRAP11: 1089 case VEC_TRAP11:
1090 case VEC_TRAP12: 1090 case VEC_TRAP12:
1091 case VEC_TRAP13: 1091 case VEC_TRAP13:
1092 case VEC_TRAP14: 1092 case VEC_TRAP14:
1093 info.si_code = ILL_ILLTRP; 1093 info.si_code = ILL_ILLTRP;
1094 sig = SIGILL; 1094 sig = SIGILL;
1095 break; 1095 break;
1096 case VEC_FPBRUC: 1096 case VEC_FPBRUC:
1097 case VEC_FPOE: 1097 case VEC_FPOE:
1098 case VEC_FPNAN: 1098 case VEC_FPNAN:
1099 info.si_code = FPE_FLTINV; 1099 info.si_code = FPE_FLTINV;
1100 sig = SIGFPE; 1100 sig = SIGFPE;
1101 break; 1101 break;
1102 case VEC_FPIR: 1102 case VEC_FPIR:
1103 info.si_code = FPE_FLTRES; 1103 info.si_code = FPE_FLTRES;
1104 sig = SIGFPE; 1104 sig = SIGFPE;
1105 break; 1105 break;
1106 case VEC_FPDIVZ: 1106 case VEC_FPDIVZ:
1107 info.si_code = FPE_FLTDIV; 1107 info.si_code = FPE_FLTDIV;
1108 sig = SIGFPE; 1108 sig = SIGFPE;
1109 break; 1109 break;
1110 case VEC_FPUNDER: 1110 case VEC_FPUNDER:
1111 info.si_code = FPE_FLTUND; 1111 info.si_code = FPE_FLTUND;
1112 sig = SIGFPE; 1112 sig = SIGFPE;
1113 break; 1113 break;
1114 case VEC_FPOVER: 1114 case VEC_FPOVER:
1115 info.si_code = FPE_FLTOVF; 1115 info.si_code = FPE_FLTOVF;
1116 sig = SIGFPE; 1116 sig = SIGFPE;
1117 break; 1117 break;
1118 case VEC_ZERODIV: 1118 case VEC_ZERODIV:
1119 info.si_code = FPE_INTDIV; 1119 info.si_code = FPE_INTDIV;
1120 sig = SIGFPE; 1120 sig = SIGFPE;
1121 break; 1121 break;
1122 case VEC_CHK: 1122 case VEC_CHK:
1123 case VEC_TRAP: 1123 case VEC_TRAP:
1124 info.si_code = FPE_INTOVF; 1124 info.si_code = FPE_INTOVF;
1125 sig = SIGFPE; 1125 sig = SIGFPE;
1126 break; 1126 break;
1127 case VEC_TRACE: /* ptrace single step */ 1127 case VEC_TRACE: /* ptrace single step */
1128 info.si_code = TRAP_TRACE; 1128 info.si_code = TRAP_TRACE;
1129 sig = SIGTRAP; 1129 sig = SIGTRAP;
1130 break; 1130 break;
1131 case VEC_TRAP15: /* breakpoint */ 1131 case VEC_TRAP15: /* breakpoint */
1132 info.si_code = TRAP_BRKPT; 1132 info.si_code = TRAP_BRKPT;
1133 sig = SIGTRAP; 1133 sig = SIGTRAP;
1134 break; 1134 break;
1135 default: 1135 default:
1136 info.si_code = ILL_ILLOPC; 1136 info.si_code = ILL_ILLOPC;
1137 sig = SIGILL; 1137 sig = SIGILL;
1138 break; 1138 break;
1139 } 1139 }
1140 info.si_signo = sig; 1140 info.si_signo = sig;
1141 info.si_errno = 0; 1141 info.si_errno = 0;
1142 switch (fp->ptregs.format) { 1142 switch (fp->ptregs.format) {
1143 default: 1143 default:
1144 info.si_addr = (void *) fp->ptregs.pc; 1144 info.si_addr = (void *) fp->ptregs.pc;
1145 break; 1145 break;
1146 case 2: 1146 case 2:
1147 info.si_addr = (void *) fp->un.fmt2.iaddr; 1147 info.si_addr = (void *) fp->un.fmt2.iaddr;
1148 break; 1148 break;
1149 case 7: 1149 case 7:
1150 info.si_addr = (void *) fp->un.fmt7.effaddr; 1150 info.si_addr = (void *) fp->un.fmt7.effaddr;
1151 break; 1151 break;
1152 case 9: 1152 case 9:
1153 info.si_addr = (void *) fp->un.fmt9.iaddr; 1153 info.si_addr = (void *) fp->un.fmt9.iaddr;
1154 break; 1154 break;
1155 case 10: 1155 case 10:
1156 info.si_addr = (void *) fp->un.fmta.daddr; 1156 info.si_addr = (void *) fp->un.fmta.daddr;
1157 break; 1157 break;
1158 case 11: 1158 case 11:
1159 info.si_addr = (void *) fp->un.fmtb.daddr; 1159 info.si_addr = (void *) fp->un.fmtb.daddr;
1160 break; 1160 break;
1161 } 1161 }
1162 force_sig_info (sig, &info, current); 1162 force_sig_info (sig, &info, current);
1163 } 1163 }
1164 1164
1165 void die_if_kernel (char *str, struct pt_regs *fp, int nr) 1165 void die_if_kernel (char *str, struct pt_regs *fp, int nr)
1166 { 1166 {
1167 if (!(fp->sr & PS_S)) 1167 if (!(fp->sr & PS_S))
1168 return; 1168 return;
1169 1169
1170 console_verbose(); 1170 console_verbose();
1171 printk("%s: %08x\n",str,nr); 1171 printk("%s: %08x\n",str,nr);
1172 show_registers(fp); 1172 show_registers(fp);
1173 add_taint(TAINT_DIE);
1173 do_exit(SIGSEGV); 1174 do_exit(SIGSEGV);
1174 } 1175 }
1175 1176
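The add_taint(TAINT_DIE) line added to die_if_kernel() above is the substance of this commit for the m68k: the first oops sets a sticky flag, so every subsequent oops or SysRq dump reports the kernel as tainted rather than posing as a first failure. A minimal model of the mechanism (the bit value and the "Tainted: D" string here are illustrative, not lifted from the kernel):

    #include <stdio.h>

    /* Illustrative taint bit; the kernel keeps a bitmask like this one. */
    #define TAINT_DIE (1U << 7)

    static unsigned int tainted;

    static void add_taint(unsigned int flag)
    {
        tainted |= flag;        /* sticky: never cleared for the uptime */
    }

    static void oops(const char *why)
    {
        printf("Oops: %s %s\n", why,
               tainted & TAINT_DIE ? "Tainted: D" : "Not tainted");
        add_taint(TAINT_DIE);   /* every later report now says tainted */
    }

    int main(void)
    {
        oops("first");          /* reports "Not tainted" */
        oops("second");         /* reports "Tainted: D" */
        return 0;
    }

Running the sketch prints "Not tainted" for the first oops and "Tainted: D" for the second, mirroring how dumps on a once-crashed kernel are flagged from then on.
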
1176 /* 1177 /*
1177 * This function is called if an error occurs while accessing 1178 * This function is called if an error occurs while accessing
1178 * user-space from the fpsp040 code. 1179 * user-space from the fpsp040 code.
1179 */ 1180 */
1180 asmlinkage void fpsp040_die(void) 1181 asmlinkage void fpsp040_die(void)
1181 { 1182 {
1182 do_exit(SIGSEGV); 1183 do_exit(SIGSEGV);
1183 } 1184 }
1184 1185
1185 #ifdef CONFIG_M68KFPU_EMU 1186 #ifdef CONFIG_M68KFPU_EMU
1186 asmlinkage void fpemu_signal(int signal, int code, void *addr) 1187 asmlinkage void fpemu_signal(int signal, int code, void *addr)
1187 { 1188 {
1188 siginfo_t info; 1189 siginfo_t info;
1189 1190
1190 info.si_signo = signal; 1191 info.si_signo = signal;
1191 info.si_errno = 0; 1192 info.si_errno = 0;
1192 info.si_code = code; 1193 info.si_code = code;
1193 info.si_addr = addr; 1194 info.si_addr = addr;
1194 force_sig_info(signal, &info, current); 1195 force_sig_info(signal, &info, current);
1195 } 1196 }
1196 #endif 1197 #endif
1197 1198
arch/m68knommu/kernel/traps.c
1 /* 1 /*
2 * linux/arch/m68knommu/kernel/traps.c 2 * linux/arch/m68knommu/kernel/traps.c
3 * 3 *
4 * Copyright (C) 1993, 1994 by Hamish Macdonald 4 * Copyright (C) 1993, 1994 by Hamish Macdonald
5 * 5 *
6 * 68040 fixes by Michael Rausch 6 * 68040 fixes by Michael Rausch
7 * 68040 fixes by Martin Apel 7 * 68040 fixes by Martin Apel
8 * 68060 fixes by Roman Hodek 8 * 68060 fixes by Roman Hodek
9 * 68060 fixes by Jesper Skov 9 * 68060 fixes by Jesper Skov
10 * 10 *
11 * This file is subject to the terms and conditions of the GNU General Public 11 * This file is subject to the terms and conditions of the GNU General Public
12 * License. See the file COPYING in the main directory of this archive 12 * License. See the file COPYING in the main directory of this archive
13 * for more details. 13 * for more details.
14 */ 14 */
15 15
16 /* 16 /*
17 * Sets up all exception vectors 17 * Sets up all exception vectors
18 */ 18 */
19 #include <linux/sched.h> 19 #include <linux/sched.h>
20 #include <linux/signal.h> 20 #include <linux/signal.h>
21 #include <linux/kernel.h> 21 #include <linux/kernel.h>
22 #include <linux/mm.h> 22 #include <linux/mm.h>
23 #include <linux/module.h> 23 #include <linux/module.h>
24 #include <linux/types.h> 24 #include <linux/types.h>
25 #include <linux/a.out.h> 25 #include <linux/a.out.h>
26 #include <linux/user.h> 26 #include <linux/user.h>
27 #include <linux/string.h> 27 #include <linux/string.h>
28 #include <linux/linkage.h> 28 #include <linux/linkage.h>
29 #include <linux/init.h> 29 #include <linux/init.h>
30 #include <linux/ptrace.h> 30 #include <linux/ptrace.h>
31 31
32 #include <asm/setup.h> 32 #include <asm/setup.h>
33 #include <asm/fpu.h> 33 #include <asm/fpu.h>
34 #include <asm/system.h> 34 #include <asm/system.h>
35 #include <asm/uaccess.h> 35 #include <asm/uaccess.h>
36 #include <asm/traps.h> 36 #include <asm/traps.h>
37 #include <asm/pgtable.h> 37 #include <asm/pgtable.h>
38 #include <asm/machdep.h> 38 #include <asm/machdep.h>
39 #include <asm/siginfo.h> 39 #include <asm/siginfo.h>
40 40
41 static char const * const vec_names[] = { 41 static char const * const vec_names[] = {
42 "RESET SP", "RESET PC", "BUS ERROR", "ADDRESS ERROR", 42 "RESET SP", "RESET PC", "BUS ERROR", "ADDRESS ERROR",
43 "ILLEGAL INSTRUCTION", "ZERO DIVIDE", "CHK", "TRAPcc", 43 "ILLEGAL INSTRUCTION", "ZERO DIVIDE", "CHK", "TRAPcc",
44 "PRIVILEGE VIOLATION", "TRACE", "LINE 1010", "LINE 1111", 44 "PRIVILEGE VIOLATION", "TRACE", "LINE 1010", "LINE 1111",
45 "UNASSIGNED RESERVED 12", "COPROCESSOR PROTOCOL VIOLATION", 45 "UNASSIGNED RESERVED 12", "COPROCESSOR PROTOCOL VIOLATION",
46 "FORMAT ERROR", "UNINITIALIZED INTERRUPT", 46 "FORMAT ERROR", "UNINITIALIZED INTERRUPT",
47 "UNASSIGNED RESERVED 16", "UNASSIGNED RESERVED 17", 47 "UNASSIGNED RESERVED 16", "UNASSIGNED RESERVED 17",
48 "UNASSIGNED RESERVED 18", "UNASSIGNED RESERVED 19", 48 "UNASSIGNED RESERVED 18", "UNASSIGNED RESERVED 19",
49 "UNASSIGNED RESERVED 20", "UNASSIGNED RESERVED 21", 49 "UNASSIGNED RESERVED 20", "UNASSIGNED RESERVED 21",
50 "UNASSIGNED RESERVED 22", "UNASSIGNED RESERVED 23", 50 "UNASSIGNED RESERVED 22", "UNASSIGNED RESERVED 23",
51 "SPURIOUS INTERRUPT", "LEVEL 1 INT", "LEVEL 2 INT", "LEVEL 3 INT", 51 "SPURIOUS INTERRUPT", "LEVEL 1 INT", "LEVEL 2 INT", "LEVEL 3 INT",
52 "LEVEL 4 INT", "LEVEL 5 INT", "LEVEL 6 INT", "LEVEL 7 INT", 52 "LEVEL 4 INT", "LEVEL 5 INT", "LEVEL 6 INT", "LEVEL 7 INT",
53 "SYSCALL", "TRAP #1", "TRAP #2", "TRAP #3", 53 "SYSCALL", "TRAP #1", "TRAP #2", "TRAP #3",
54 "TRAP #4", "TRAP #5", "TRAP #6", "TRAP #7", 54 "TRAP #4", "TRAP #5", "TRAP #6", "TRAP #7",
55 "TRAP #8", "TRAP #9", "TRAP #10", "TRAP #11", 55 "TRAP #8", "TRAP #9", "TRAP #10", "TRAP #11",
56 "TRAP #12", "TRAP #13", "TRAP #14", "TRAP #15", 56 "TRAP #12", "TRAP #13", "TRAP #14", "TRAP #15",
57 "FPCP BSUN", "FPCP INEXACT", "FPCP DIV BY 0", "FPCP UNDERFLOW", 57 "FPCP BSUN", "FPCP INEXACT", "FPCP DIV BY 0", "FPCP UNDERFLOW",
58 "FPCP OPERAND ERROR", "FPCP OVERFLOW", "FPCP SNAN", 58 "FPCP OPERAND ERROR", "FPCP OVERFLOW", "FPCP SNAN",
59 "FPCP UNSUPPORTED OPERATION", 59 "FPCP UNSUPPORTED OPERATION",
60 "MMU CONFIGURATION ERROR" 60 "MMU CONFIGURATION ERROR"
61 }; 61 };
62 62
63 void __init trap_init(void) 63 void __init trap_init(void)
64 { 64 {
65 if (mach_trap_init) 65 if (mach_trap_init)
66 mach_trap_init(); 66 mach_trap_init();
67 } 67 }
68 68
69 void die_if_kernel(char *str, struct pt_regs *fp, int nr) 69 void die_if_kernel(char *str, struct pt_regs *fp, int nr)
70 { 70 {
71 if (!(fp->sr & PS_S)) 71 if (!(fp->sr & PS_S))
72 return; 72 return;
73 73
74 console_verbose(); 74 console_verbose();
75 printk(KERN_EMERG "%s: %08x\n",str,nr); 75 printk(KERN_EMERG "%s: %08x\n",str,nr);
76 printk(KERN_EMERG "PC: [<%08lx>]\nSR: %04x SP: %p a2: %08lx\n", 76 printk(KERN_EMERG "PC: [<%08lx>]\nSR: %04x SP: %p a2: %08lx\n",
77 fp->pc, fp->sr, fp, fp->a2); 77 fp->pc, fp->sr, fp, fp->a2);
78 printk(KERN_EMERG "d0: %08lx d1: %08lx d2: %08lx d3: %08lx\n", 78 printk(KERN_EMERG "d0: %08lx d1: %08lx d2: %08lx d3: %08lx\n",
79 fp->d0, fp->d1, fp->d2, fp->d3); 79 fp->d0, fp->d1, fp->d2, fp->d3);
80 printk(KERN_EMERG "d4: %08lx d5: %08lx a0: %08lx a1: %08lx\n", 80 printk(KERN_EMERG "d4: %08lx d5: %08lx a0: %08lx a1: %08lx\n",
81 fp->d4, fp->d5, fp->a0, fp->a1); 81 fp->d4, fp->d5, fp->a0, fp->a1);
82 82
83 printk(KERN_EMERG "Process %s (pid: %d, stackpage=%08lx)\n", 83 printk(KERN_EMERG "Process %s (pid: %d, stackpage=%08lx)\n",
84 current->comm, current->pid, PAGE_SIZE+(unsigned long)current); 84 current->comm, current->pid, PAGE_SIZE+(unsigned long)current);
85 show_stack(NULL, (unsigned long *)fp); 85 show_stack(NULL, (unsigned long *)fp);
86 add_taint(TAINT_DIE);
86 do_exit(SIGSEGV); 87 do_exit(SIGSEGV);
87 } 88 }
88 89
89 asmlinkage void buserr_c(struct frame *fp) 90 asmlinkage void buserr_c(struct frame *fp)
90 { 91 {
91 /* Only set esp0 if coming from user mode */ 92 /* Only set esp0 if coming from user mode */
92 if (user_mode(&fp->ptregs)) 93 if (user_mode(&fp->ptregs))
93 current->thread.esp0 = (unsigned long) fp; 94 current->thread.esp0 = (unsigned long) fp;
94 95
95 #if defined(DEBUG) 96 #if defined(DEBUG)
96 printk (KERN_DEBUG "*** Bus Error *** Format is %x\n", fp->ptregs.format); 97 printk (KERN_DEBUG "*** Bus Error *** Format is %x\n", fp->ptregs.format);
97 #endif 98 #endif
98 99
99 die_if_kernel("bad frame format",&fp->ptregs,0); 100 die_if_kernel("bad frame format",&fp->ptregs,0);
100 #if defined(DEBUG) 101 #if defined(DEBUG)
101 printk(KERN_DEBUG "Unknown SIGSEGV - 4\n"); 102 printk(KERN_DEBUG "Unknown SIGSEGV - 4\n");
102 #endif 103 #endif
103 force_sig(SIGSEGV, current); 104 force_sig(SIGSEGV, current);
104 } 105 }
105 106
106 107
107 int kstack_depth_to_print = 48; 108 int kstack_depth_to_print = 48;
108 109
109 void show_stack(struct task_struct *task, unsigned long *stack) 110 void show_stack(struct task_struct *task, unsigned long *stack)
110 { 111 {
111 unsigned long *endstack, addr; 112 unsigned long *endstack, addr;
112 extern char _start, _etext; 113 extern char _start, _etext;
113 int i; 114 int i;
114 115
115 if (!stack) { 116 if (!stack) {
116 if (task) 117 if (task)
117 stack = (unsigned long *)task->thread.ksp; 118 stack = (unsigned long *)task->thread.ksp;
118 else 119 else
119 stack = (unsigned long *)&stack; 120 stack = (unsigned long *)&stack;
120 } 121 }
121 122
122 addr = (unsigned long) stack; 123 addr = (unsigned long) stack;
123 endstack = (unsigned long *) PAGE_ALIGN(addr); 124 endstack = (unsigned long *) PAGE_ALIGN(addr);
124 125
125 printk(KERN_EMERG "Stack from %08lx:", (unsigned long)stack); 126 printk(KERN_EMERG "Stack from %08lx:", (unsigned long)stack);
126 for (i = 0; i < kstack_depth_to_print; i++) { 127 for (i = 0; i < kstack_depth_to_print; i++) {
127 if (stack + 1 > endstack) 128 if (stack + 1 > endstack)
128 break; 129 break;
129 if (i % 8 == 0) 130 if (i % 8 == 0)
130 printk("\n" KERN_EMERG " "); 131 printk("\n" KERN_EMERG " ");
131 printk(" %08lx", *stack++); 132 printk(" %08lx", *stack++);
132 } 133 }
133 printk("\n"); 134 printk("\n");
134 135
135 printk(KERN_EMERG "Call Trace:"); 136 printk(KERN_EMERG "Call Trace:");
136 i = 0; 137 i = 0;
137 while (stack + 1 <= endstack) { 138 while (stack + 1 <= endstack) {
138 addr = *stack++; 139 addr = *stack++;
139 /* 140 /*
140 * If the address is either in the text segment of the 141 * If the address is either in the text segment of the
141 * kernel, or in the region which contains vmalloc'ed 142 * kernel, or in the region which contains vmalloc'ed
142 * memory, it *may* be the address of a calling 143 * memory, it *may* be the address of a calling
143 * routine; if so, print it so that someone tracing 144 * routine; if so, print it so that someone tracing
144 * down the cause of the crash will be able to figure 145 * down the cause of the crash will be able to figure
145 * out the call path that was taken. 146 * out the call path that was taken.
146 */ 147 */
147 if (((addr >= (unsigned long) &_start) && 148 if (((addr >= (unsigned long) &_start) &&
148 (addr <= (unsigned long) &_etext))) { 149 (addr <= (unsigned long) &_etext))) {
149 if (i % 4 == 0) 150 if (i % 4 == 0)
150 printk("\n" KERN_EMERG " "); 151 printk("\n" KERN_EMERG " ");
151 printk(" [<%08lx>]", addr); 152 printk(" [<%08lx>]", addr);
152 i++; 153 i++;
153 } 154 }
154 } 155 }
155 printk("\n"); 156 printk("\n");
156 } 157 }
157 158
158 void bad_super_trap(struct frame *fp) 159 void bad_super_trap(struct frame *fp)
159 { 160 {
160 console_verbose(); 161 console_verbose();
161 if (fp->ptregs.vector < 4 * ARRAY_SIZE(vec_names)) 162 if (fp->ptregs.vector < 4 * ARRAY_SIZE(vec_names))
162 printk (KERN_WARNING "*** %s *** FORMAT=%X\n", 163 printk (KERN_WARNING "*** %s *** FORMAT=%X\n",
163 vec_names[(fp->ptregs.vector) >> 2], 164 vec_names[(fp->ptregs.vector) >> 2],
164 fp->ptregs.format); 165 fp->ptregs.format);
165 else 166 else
166 printk (KERN_WARNING "*** Exception %d *** FORMAT=%X\n", 167 printk (KERN_WARNING "*** Exception %d *** FORMAT=%X\n",
167 (fp->ptregs.vector) >> 2, 168 (fp->ptregs.vector) >> 2,
168 fp->ptregs.format); 169 fp->ptregs.format);
169 printk (KERN_WARNING "Current process id is %d\n", current->pid); 170 printk (KERN_WARNING "Current process id is %d\n", current->pid);
170 die_if_kernel("BAD KERNEL TRAP", &fp->ptregs, 0); 171 die_if_kernel("BAD KERNEL TRAP", &fp->ptregs, 0);
171 } 172 }
172 173
173 asmlinkage void trap_c(struct frame *fp) 174 asmlinkage void trap_c(struct frame *fp)
174 { 175 {
175 int sig; 176 int sig;
176 siginfo_t info; 177 siginfo_t info;
177 178
178 if (fp->ptregs.sr & PS_S) { 179 if (fp->ptregs.sr & PS_S) {
179 if ((fp->ptregs.vector >> 2) == VEC_TRACE) { 180 if ((fp->ptregs.vector >> 2) == VEC_TRACE) {
180 /* traced a trapping instruction */ 181 /* traced a trapping instruction */
181 current->ptrace |= PT_DTRACE; 182 current->ptrace |= PT_DTRACE;
182 } else 183 } else
183 bad_super_trap(fp); 184 bad_super_trap(fp);
184 return; 185 return;
185 } 186 }
186 187
187 /* send the appropriate signal to the user program */ 188 /* send the appropriate signal to the user program */
188 switch ((fp->ptregs.vector) >> 2) { 189 switch ((fp->ptregs.vector) >> 2) {
189 case VEC_ADDRERR: 190 case VEC_ADDRERR:
190 info.si_code = BUS_ADRALN; 191 info.si_code = BUS_ADRALN;
191 sig = SIGBUS; 192 sig = SIGBUS;
192 break; 193 break;
193 case VEC_ILLEGAL: 194 case VEC_ILLEGAL:
194 case VEC_LINE10: 195 case VEC_LINE10:
195 case VEC_LINE11: 196 case VEC_LINE11:
196 info.si_code = ILL_ILLOPC; 197 info.si_code = ILL_ILLOPC;
197 sig = SIGILL; 198 sig = SIGILL;
198 break; 199 break;
199 case VEC_PRIV: 200 case VEC_PRIV:
200 info.si_code = ILL_PRVOPC; 201 info.si_code = ILL_PRVOPC;
201 sig = SIGILL; 202 sig = SIGILL;
202 break; 203 break;
203 case VEC_COPROC: 204 case VEC_COPROC:
204 info.si_code = ILL_COPROC; 205 info.si_code = ILL_COPROC;
205 sig = SIGILL; 206 sig = SIGILL;
206 break; 207 break;
207 case VEC_TRAP1: /* gdbserver breakpoint */ 208 case VEC_TRAP1: /* gdbserver breakpoint */
208 fp->ptregs.pc -= 2; 209 fp->ptregs.pc -= 2;
209 info.si_code = TRAP_TRACE; 210 info.si_code = TRAP_TRACE;
210 sig = SIGTRAP; 211 sig = SIGTRAP;
211 break; 212 break;
212 case VEC_TRAP2: 213 case VEC_TRAP2:
213 case VEC_TRAP3: 214 case VEC_TRAP3:
214 case VEC_TRAP4: 215 case VEC_TRAP4:
215 case VEC_TRAP5: 216 case VEC_TRAP5:
216 case VEC_TRAP6: 217 case VEC_TRAP6:
217 case VEC_TRAP7: 218 case VEC_TRAP7:
218 case VEC_TRAP8: 219 case VEC_TRAP8:
219 case VEC_TRAP9: 220 case VEC_TRAP9:
220 case VEC_TRAP10: 221 case VEC_TRAP10:
221 case VEC_TRAP11: 222 case VEC_TRAP11:
222 case VEC_TRAP12: 223 case VEC_TRAP12:
223 case VEC_TRAP13: 224 case VEC_TRAP13:
224 case VEC_TRAP14: 225 case VEC_TRAP14:
225 info.si_code = ILL_ILLTRP; 226 info.si_code = ILL_ILLTRP;
226 sig = SIGILL; 227 sig = SIGILL;
227 break; 228 break;
228 case VEC_FPBRUC: 229 case VEC_FPBRUC:
229 case VEC_FPOE: 230 case VEC_FPOE:
230 case VEC_FPNAN: 231 case VEC_FPNAN:
231 info.si_code = FPE_FLTINV; 232 info.si_code = FPE_FLTINV;
232 sig = SIGFPE; 233 sig = SIGFPE;
233 break; 234 break;
234 case VEC_FPIR: 235 case VEC_FPIR:
235 info.si_code = FPE_FLTRES; 236 info.si_code = FPE_FLTRES;
236 sig = SIGFPE; 237 sig = SIGFPE;
237 break; 238 break;
238 case VEC_FPDIVZ: 239 case VEC_FPDIVZ:
239 info.si_code = FPE_FLTDIV; 240 info.si_code = FPE_FLTDIV;
240 sig = SIGFPE; 241 sig = SIGFPE;
241 break; 242 break;
242 case VEC_FPUNDER: 243 case VEC_FPUNDER:
243 info.si_code = FPE_FLTUND; 244 info.si_code = FPE_FLTUND;
244 sig = SIGFPE; 245 sig = SIGFPE;
245 break; 246 break;
246 case VEC_FPOVER: 247 case VEC_FPOVER:
247 info.si_code = FPE_FLTOVF; 248 info.si_code = FPE_FLTOVF;
248 sig = SIGFPE; 249 sig = SIGFPE;
249 break; 250 break;
250 case VEC_ZERODIV: 251 case VEC_ZERODIV:
251 info.si_code = FPE_INTDIV; 252 info.si_code = FPE_INTDIV;
252 sig = SIGFPE; 253 sig = SIGFPE;
253 break; 254 break;
254 case VEC_CHK: 255 case VEC_CHK:
255 case VEC_TRAP: 256 case VEC_TRAP:
256 info.si_code = FPE_INTOVF; 257 info.si_code = FPE_INTOVF;
257 sig = SIGFPE; 258 sig = SIGFPE;
258 break; 259 break;
259 case VEC_TRACE: /* ptrace single step */ 260 case VEC_TRACE: /* ptrace single step */
260 info.si_code = TRAP_TRACE; 261 info.si_code = TRAP_TRACE;
261 sig = SIGTRAP; 262 sig = SIGTRAP;
262 break; 263 break;
263 case VEC_TRAP15: /* breakpoint */ 264 case VEC_TRAP15: /* breakpoint */
264 info.si_code = TRAP_BRKPT; 265 info.si_code = TRAP_BRKPT;
265 sig = SIGTRAP; 266 sig = SIGTRAP;
266 break; 267 break;
267 default: 268 default:
268 info.si_code = ILL_ILLOPC; 269 info.si_code = ILL_ILLOPC;
269 sig = SIGILL; 270 sig = SIGILL;
270 break; 271 break;
271 } 272 }
272 info.si_signo = sig; 273 info.si_signo = sig;
273 info.si_errno = 0; 274 info.si_errno = 0;
274 switch (fp->ptregs.format) { 275 switch (fp->ptregs.format) {
275 default: 276 default:
276 info.si_addr = (void *) fp->ptregs.pc; 277 info.si_addr = (void *) fp->ptregs.pc;
277 break; 278 break;
278 case 2: 279 case 2:
279 info.si_addr = (void *) fp->un.fmt2.iaddr; 280 info.si_addr = (void *) fp->un.fmt2.iaddr;
280 break; 281 break;
281 case 7: 282 case 7:
282 info.si_addr = (void *) fp->un.fmt7.effaddr; 283 info.si_addr = (void *) fp->un.fmt7.effaddr;
283 break; 284 break;
284 case 9: 285 case 9:
285 info.si_addr = (void *) fp->un.fmt9.iaddr; 286 info.si_addr = (void *) fp->un.fmt9.iaddr;
286 break; 287 break;
287 case 10: 288 case 10:
288 info.si_addr = (void *) fp->un.fmta.daddr; 289 info.si_addr = (void *) fp->un.fmta.daddr;
289 break; 290 break;
290 case 11: 291 case 11:
291 info.si_addr = (void *) fp->un.fmtb.daddr; 292 info.si_addr = (void *) fp->un.fmtb.daddr;
292 break; 293 break;
293 } 294 }
294 force_sig_info (sig, &info, current); 295 force_sig_info (sig, &info, current);
295 } 296 }
296 297
297 asmlinkage void set_esp0(unsigned long ssp) 298 asmlinkage void set_esp0(unsigned long ssp)
298 { 299 {
299 current->thread.esp0 = ssp; 300 current->thread.esp0 = ssp;
300 } 301 }
301 302
302 303
303 /* 304 /*
304 * The architecture-independent backtrace generator 305 * The architecture-independent backtrace generator
305 */ 306 */
306 void dump_stack(void) 307 void dump_stack(void)
307 { 308 {
308 unsigned long stack; 309 unsigned long stack;
309 310
310 show_stack(current, &stack); 311 show_stack(current, &stack);
311 } 312 }
312 313
313 EXPORT_SYMBOL(dump_stack); 314 EXPORT_SYMBOL(dump_stack);
314 315
315 #ifdef CONFIG_M68KFPU_EMU 316 #ifdef CONFIG_M68KFPU_EMU
316 asmlinkage void fpemu_signal(int signal, int code, void *addr) 317 asmlinkage void fpemu_signal(int signal, int code, void *addr)
317 { 318 {
318 siginfo_t info; 319 siginfo_t info;
319 320
320 info.si_signo = signal; 321 info.si_signo = signal;
321 info.si_errno = 0; 322 info.si_errno = 0;
322 info.si_code = code; 323 info.si_code = code;
323 info.si_addr = addr; 324 info.si_addr = addr;
324 force_sig_info(signal, &info, current); 325 force_sig_info(signal, &info, current);
325 } 326 }
326 #endif 327 #endif
327 328
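The call-trace loop in show_stack() above is purely heuristic, as its comment admits: any word on the stack whose value falls between _start and _etext is printed as a possible return address, whether it is one or not. Below is a self-contained sketch of the same scan; the text-range bounds and stack contents are invented for the demo.

    /* Stand-alone version of the naive backtrace: walk a buffer of
     * words and report any value that falls inside the "text" range
     * as a candidate return address.  False positives are expected;
     * that is why the kernel comment says such addresses *may* be
     * calling routines. */
    #include <stdio.h>

    static void scan_for_text(const unsigned long *stack, int nwords,
                              unsigned long text_start, unsigned long text_end)
    {
        printf("Call Trace:");
        for (int i = 0; i < nwords; i++) {
            if (stack[i] >= text_start && stack[i] <= text_end)
                printf(" [<%08lx>]", stack[i]);
        }
        printf("\n");
    }

    int main(void)
    {
        /* fabricated stack: two plausible text addresses among data */
        unsigned long stack[] = { 0xdeadbeef, 0x00101234, 42, 0x00105678 };
        scan_for_text(stack, (int)(sizeof(stack) / sizeof(stack[0])),
                      0x00100000, 0x00200000);  /* assumed _start/_etext */
        return 0;
    }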
arch/mips/kernel/traps.c
1 /* 1 /*
2 * This file is subject to the terms and conditions of the GNU General Public 2 * This file is subject to the terms and conditions of the GNU General Public
3 * License. See the file "COPYING" in the main directory of this archive 3 * License. See the file "COPYING" in the main directory of this archive
4 * for more details. 4 * for more details.
5 * 5 *
6 * Copyright (C) 1994 - 1999, 2000, 01, 06 Ralf Baechle 6 * Copyright (C) 1994 - 1999, 2000, 01, 06 Ralf Baechle
7 * Copyright (C) 1995, 1996 Paul M. Antoine 7 * Copyright (C) 1995, 1996 Paul M. Antoine
8 * Copyright (C) 1998 Ulf Carlsson 8 * Copyright (C) 1998 Ulf Carlsson
9 * Copyright (C) 1999 Silicon Graphics, Inc. 9 * Copyright (C) 1999 Silicon Graphics, Inc.
10 * Kevin D. Kissell, kevink@mips.com and Carsten Langgaard, carstenl@mips.com 10 * Kevin D. Kissell, kevink@mips.com and Carsten Langgaard, carstenl@mips.com
11 * Copyright (C) 2000, 01 MIPS Technologies, Inc. 11 * Copyright (C) 2000, 01 MIPS Technologies, Inc.
12 * Copyright (C) 2002, 2003, 2004, 2005 Maciej W. Rozycki 12 * Copyright (C) 2002, 2003, 2004, 2005 Maciej W. Rozycki
13 */ 13 */
14 #include <linux/bug.h> 14 #include <linux/bug.h>
15 #include <linux/init.h> 15 #include <linux/init.h>
16 #include <linux/mm.h> 16 #include <linux/mm.h>
17 #include <linux/module.h> 17 #include <linux/module.h>
18 #include <linux/sched.h> 18 #include <linux/sched.h>
19 #include <linux/smp.h> 19 #include <linux/smp.h>
20 #include <linux/spinlock.h> 20 #include <linux/spinlock.h>
21 #include <linux/kallsyms.h> 21 #include <linux/kallsyms.h>
22 #include <linux/bootmem.h> 22 #include <linux/bootmem.h>
23 #include <linux/interrupt.h> 23 #include <linux/interrupt.h>
24 24
25 #include <asm/bootinfo.h> 25 #include <asm/bootinfo.h>
26 #include <asm/branch.h> 26 #include <asm/branch.h>
27 #include <asm/break.h> 27 #include <asm/break.h>
28 #include <asm/cpu.h> 28 #include <asm/cpu.h>
29 #include <asm/dsp.h> 29 #include <asm/dsp.h>
30 #include <asm/fpu.h> 30 #include <asm/fpu.h>
31 #include <asm/mipsregs.h> 31 #include <asm/mipsregs.h>
32 #include <asm/mipsmtregs.h> 32 #include <asm/mipsmtregs.h>
33 #include <asm/module.h> 33 #include <asm/module.h>
34 #include <asm/pgtable.h> 34 #include <asm/pgtable.h>
35 #include <asm/ptrace.h> 35 #include <asm/ptrace.h>
36 #include <asm/sections.h> 36 #include <asm/sections.h>
37 #include <asm/system.h> 37 #include <asm/system.h>
38 #include <asm/tlbdebug.h> 38 #include <asm/tlbdebug.h>
39 #include <asm/traps.h> 39 #include <asm/traps.h>
40 #include <asm/uaccess.h> 40 #include <asm/uaccess.h>
41 #include <asm/mmu_context.h> 41 #include <asm/mmu_context.h>
42 #include <asm/types.h> 42 #include <asm/types.h>
43 #include <asm/stacktrace.h> 43 #include <asm/stacktrace.h>
44 44
45 extern asmlinkage void handle_int(void); 45 extern asmlinkage void handle_int(void);
46 extern asmlinkage void handle_tlbm(void); 46 extern asmlinkage void handle_tlbm(void);
47 extern asmlinkage void handle_tlbl(void); 47 extern asmlinkage void handle_tlbl(void);
48 extern asmlinkage void handle_tlbs(void); 48 extern asmlinkage void handle_tlbs(void);
49 extern asmlinkage void handle_adel(void); 49 extern asmlinkage void handle_adel(void);
50 extern asmlinkage void handle_ades(void); 50 extern asmlinkage void handle_ades(void);
51 extern asmlinkage void handle_ibe(void); 51 extern asmlinkage void handle_ibe(void);
52 extern asmlinkage void handle_dbe(void); 52 extern asmlinkage void handle_dbe(void);
53 extern asmlinkage void handle_sys(void); 53 extern asmlinkage void handle_sys(void);
54 extern asmlinkage void handle_bp(void); 54 extern asmlinkage void handle_bp(void);
55 extern asmlinkage void handle_ri(void); 55 extern asmlinkage void handle_ri(void);
56 extern asmlinkage void handle_ri_rdhwr_vivt(void); 56 extern asmlinkage void handle_ri_rdhwr_vivt(void);
57 extern asmlinkage void handle_ri_rdhwr(void); 57 extern asmlinkage void handle_ri_rdhwr(void);
58 extern asmlinkage void handle_cpu(void); 58 extern asmlinkage void handle_cpu(void);
59 extern asmlinkage void handle_ov(void); 59 extern asmlinkage void handle_ov(void);
60 extern asmlinkage void handle_tr(void); 60 extern asmlinkage void handle_tr(void);
61 extern asmlinkage void handle_fpe(void); 61 extern asmlinkage void handle_fpe(void);
62 extern asmlinkage void handle_mdmx(void); 62 extern asmlinkage void handle_mdmx(void);
63 extern asmlinkage void handle_watch(void); 63 extern asmlinkage void handle_watch(void);
64 extern asmlinkage void handle_mt(void); 64 extern asmlinkage void handle_mt(void);
65 extern asmlinkage void handle_dsp(void); 65 extern asmlinkage void handle_dsp(void);
66 extern asmlinkage void handle_mcheck(void); 66 extern asmlinkage void handle_mcheck(void);
67 extern asmlinkage void handle_reserved(void); 67 extern asmlinkage void handle_reserved(void);
68 68
69 extern int fpu_emulator_cop1Handler(struct pt_regs *xcp, 69 extern int fpu_emulator_cop1Handler(struct pt_regs *xcp,
70 struct mips_fpu_struct *ctx, int has_fpu); 70 struct mips_fpu_struct *ctx, int has_fpu);
71 71
72 void (*board_watchpoint_handler)(struct pt_regs *regs); 72 void (*board_watchpoint_handler)(struct pt_regs *regs);
73 void (*board_be_init)(void); 73 void (*board_be_init)(void);
74 int (*board_be_handler)(struct pt_regs *regs, int is_fixup); 74 int (*board_be_handler)(struct pt_regs *regs, int is_fixup);
75 void (*board_nmi_handler_setup)(void); 75 void (*board_nmi_handler_setup)(void);
76 void (*board_ejtag_handler_setup)(void); 76 void (*board_ejtag_handler_setup)(void);
77 void (*board_bind_eic_interrupt)(int irq, int regset); 77 void (*board_bind_eic_interrupt)(int irq, int regset);
78 78
79 79
80 static void show_raw_backtrace(unsigned long reg29) 80 static void show_raw_backtrace(unsigned long reg29)
81 { 81 {
82 unsigned long *sp = (unsigned long *)reg29; 82 unsigned long *sp = (unsigned long *)reg29;
83 unsigned long addr; 83 unsigned long addr;
84 84
85 printk("Call Trace:"); 85 printk("Call Trace:");
86 #ifdef CONFIG_KALLSYMS 86 #ifdef CONFIG_KALLSYMS
87 printk("\n"); 87 printk("\n");
88 #endif 88 #endif
89 while (!kstack_end(sp)) { 89 while (!kstack_end(sp)) {
90 addr = *sp++; 90 addr = *sp++;
91 if (__kernel_text_address(addr)) 91 if (__kernel_text_address(addr))
92 print_ip_sym(addr); 92 print_ip_sym(addr);
93 } 93 }
94 printk("\n"); 94 printk("\n");
95 } 95 }
96 96
97 #ifdef CONFIG_KALLSYMS 97 #ifdef CONFIG_KALLSYMS
98 int raw_show_trace; 98 int raw_show_trace;
99 static int __init set_raw_show_trace(char *str) 99 static int __init set_raw_show_trace(char *str)
100 { 100 {
101 raw_show_trace = 1; 101 raw_show_trace = 1;
102 return 1; 102 return 1;
103 } 103 }
104 __setup("raw_show_trace", set_raw_show_trace); 104 __setup("raw_show_trace", set_raw_show_trace);
105 #endif 105 #endif
106 106
107 static void show_backtrace(struct task_struct *task, struct pt_regs *regs) 107 static void show_backtrace(struct task_struct *task, struct pt_regs *regs)
108 { 108 {
109 unsigned long sp = regs->regs[29]; 109 unsigned long sp = regs->regs[29];
110 unsigned long ra = regs->regs[31]; 110 unsigned long ra = regs->regs[31];
111 unsigned long pc = regs->cp0_epc; 111 unsigned long pc = regs->cp0_epc;
112 112
113 if (raw_show_trace || !__kernel_text_address(pc)) { 113 if (raw_show_trace || !__kernel_text_address(pc)) {
114 show_raw_backtrace(sp); 114 show_raw_backtrace(sp);
115 return; 115 return;
116 } 116 }
117 printk("Call Trace:\n"); 117 printk("Call Trace:\n");
118 do { 118 do {
119 print_ip_sym(pc); 119 print_ip_sym(pc);
120 pc = unwind_stack(task, &sp, pc, &ra); 120 pc = unwind_stack(task, &sp, pc, &ra);
121 } while (pc); 121 } while (pc);
122 printk("\n"); 122 printk("\n");
123 } 123 }
124 124
125 /* 125 /*
126 * This routine abuses get_user()/put_user() to reference pointers 126 * This routine abuses get_user()/put_user() to reference pointers
127 * with at least a bit of error checking ... 127 * with at least a bit of error checking ...
128 */ 128 */
129 static void show_stacktrace(struct task_struct *task, struct pt_regs *regs) 129 static void show_stacktrace(struct task_struct *task, struct pt_regs *regs)
130 { 130 {
131 const int field = 2 * sizeof(unsigned long); 131 const int field = 2 * sizeof(unsigned long);
132 long stackdata; 132 long stackdata;
133 int i; 133 int i;
134 unsigned long __user *sp = (unsigned long __user *)regs->regs[29]; 134 unsigned long __user *sp = (unsigned long __user *)regs->regs[29];
135 135
136 printk("Stack :"); 136 printk("Stack :");
137 i = 0; 137 i = 0;
138 while ((unsigned long) sp & (PAGE_SIZE - 1)) { 138 while ((unsigned long) sp & (PAGE_SIZE - 1)) {
139 if (i && ((i % (64 / field)) == 0)) 139 if (i && ((i % (64 / field)) == 0))
140 printk("\n "); 140 printk("\n ");
141 if (i > 39) { 141 if (i > 39) {
142 printk(" ..."); 142 printk(" ...");
143 break; 143 break;
144 } 144 }
145 145
146 if (__get_user(stackdata, sp++)) { 146 if (__get_user(stackdata, sp++)) {
147 printk(" (Bad stack address)"); 147 printk(" (Bad stack address)");
148 break; 148 break;
149 } 149 }
150 150
151 printk(" %0*lx", field, stackdata); 151 printk(" %0*lx", field, stackdata);
152 i++; 152 i++;
153 } 153 }
154 printk("\n"); 154 printk("\n");
155 show_backtrace(task, regs); 155 show_backtrace(task, regs);
156 } 156 }
157 157
158 void show_stack(struct task_struct *task, unsigned long *sp) 158 void show_stack(struct task_struct *task, unsigned long *sp)
159 { 159 {
160 struct pt_regs regs; 160 struct pt_regs regs;
161 if (sp) { 161 if (sp) {
162 regs.regs[29] = (unsigned long)sp; 162 regs.regs[29] = (unsigned long)sp;
163 regs.regs[31] = 0; 163 regs.regs[31] = 0;
164 regs.cp0_epc = 0; 164 regs.cp0_epc = 0;
165 } else { 165 } else {
166 if (task && task != current) { 166 if (task && task != current) {
167 regs.regs[29] = task->thread.reg29; 167 regs.regs[29] = task->thread.reg29;
168 regs.regs[31] = 0; 168 regs.regs[31] = 0;
169 regs.cp0_epc = task->thread.reg31; 169 regs.cp0_epc = task->thread.reg31;
170 } else { 170 } else {
171 prepare_frametrace(&regs); 171 prepare_frametrace(&regs);
172 } 172 }
173 } 173 }
174 show_stacktrace(task, &regs); 174 show_stacktrace(task, &regs);
175 } 175 }
176 176
177 /* 177 /*
178 * The architecture-independent dump_stack generator 178 * The architecture-independent dump_stack generator
179 */ 179 */
180 void dump_stack(void) 180 void dump_stack(void)
181 { 181 {
182 struct pt_regs regs; 182 struct pt_regs regs;
183 183
184 prepare_frametrace(&regs); 184 prepare_frametrace(&regs);
185 show_backtrace(current, &regs); 185 show_backtrace(current, &regs);
186 } 186 }
187 187
188 EXPORT_SYMBOL(dump_stack); 188 EXPORT_SYMBOL(dump_stack);
189 189
190 static void show_code(unsigned int __user *pc) 190 static void show_code(unsigned int __user *pc)
191 { 191 {
192 long i; 192 long i;
193 193
194 printk("\nCode:"); 194 printk("\nCode:");
195 195
196 for(i = -3 ; i < 6 ; i++) { 196 for(i = -3 ; i < 6 ; i++) {
197 unsigned int insn; 197 unsigned int insn;
198 if (__get_user(insn, pc + i)) { 198 if (__get_user(insn, pc + i)) {
199 printk(" (Bad address in epc)\n"); 199 printk(" (Bad address in epc)\n");
200 break; 200 break;
201 } 201 }
202 printk("%c%08x%c", (i?' ':'<'), insn, (i?' ':'>')); 202 printk("%c%08x%c", (i?' ':'<'), insn, (i?' ':'>'));
203 } 203 }
204 } 204 }
205 205
206 void show_regs(struct pt_regs *regs) 206 void show_regs(struct pt_regs *regs)
207 { 207 {
208 const int field = 2 * sizeof(unsigned long); 208 const int field = 2 * sizeof(unsigned long);
209 unsigned int cause = regs->cp0_cause; 209 unsigned int cause = regs->cp0_cause;
210 int i; 210 int i;
211 211
212 printk("Cpu %d\n", smp_processor_id()); 212 printk("Cpu %d\n", smp_processor_id());
213 213
214 /* 214 /*
215 * Saved main processor registers 215 * Saved main processor registers
216 */ 216 */
217 for (i = 0; i < 32; ) { 217 for (i = 0; i < 32; ) {
218 if ((i % 4) == 0) 218 if ((i % 4) == 0)
219 printk("$%2d :", i); 219 printk("$%2d :", i);
220 if (i == 0) 220 if (i == 0)
221 printk(" %0*lx", field, 0UL); 221 printk(" %0*lx", field, 0UL);
222 else if (i == 26 || i == 27) 222 else if (i == 26 || i == 27)
223 printk(" %*s", field, ""); 223 printk(" %*s", field, "");
224 else 224 else
225 printk(" %0*lx", field, regs->regs[i]); 225 printk(" %0*lx", field, regs->regs[i]);
226 226
227 i++; 227 i++;
228 if ((i % 4) == 0) 228 if ((i % 4) == 0)
229 printk("\n"); 229 printk("\n");
230 } 230 }
231 231
232 #ifdef CONFIG_CPU_HAS_SMARTMIPS 232 #ifdef CONFIG_CPU_HAS_SMARTMIPS
233 printk("Acx : %0*lx\n", field, regs->acx); 233 printk("Acx : %0*lx\n", field, regs->acx);
234 #endif 234 #endif
235 printk("Hi : %0*lx\n", field, regs->hi); 235 printk("Hi : %0*lx\n", field, regs->hi);
236 printk("Lo : %0*lx\n", field, regs->lo); 236 printk("Lo : %0*lx\n", field, regs->lo);
237 237
238 /* 238 /*
239 * Saved cp0 registers 239 * Saved cp0 registers
240 */ 240 */
241 printk("epc : %0*lx ", field, regs->cp0_epc); 241 printk("epc : %0*lx ", field, regs->cp0_epc);
242 print_symbol("%s ", regs->cp0_epc); 242 print_symbol("%s ", regs->cp0_epc);
243 printk(" %s\n", print_tainted()); 243 printk(" %s\n", print_tainted());
244 printk("ra : %0*lx ", field, regs->regs[31]); 244 printk("ra : %0*lx ", field, regs->regs[31]);
245 print_symbol("%s\n", regs->regs[31]); 245 print_symbol("%s\n", regs->regs[31]);
246 246
247 printk("Status: %08x ", (uint32_t) regs->cp0_status); 247 printk("Status: %08x ", (uint32_t) regs->cp0_status);
248 248
249 if (current_cpu_data.isa_level == MIPS_CPU_ISA_I) { 249 if (current_cpu_data.isa_level == MIPS_CPU_ISA_I) {
250 if (regs->cp0_status & ST0_KUO) 250 if (regs->cp0_status & ST0_KUO)
251 printk("KUo "); 251 printk("KUo ");
252 if (regs->cp0_status & ST0_IEO) 252 if (regs->cp0_status & ST0_IEO)
253 printk("IEo "); 253 printk("IEo ");
254 if (regs->cp0_status & ST0_KUP) 254 if (regs->cp0_status & ST0_KUP)
255 printk("KUp "); 255 printk("KUp ");
256 if (regs->cp0_status & ST0_IEP) 256 if (regs->cp0_status & ST0_IEP)
257 printk("IEp "); 257 printk("IEp ");
258 if (regs->cp0_status & ST0_KUC) 258 if (regs->cp0_status & ST0_KUC)
259 printk("KUc "); 259 printk("KUc ");
260 if (regs->cp0_status & ST0_IEC) 260 if (regs->cp0_status & ST0_IEC)
261 printk("IEc "); 261 printk("IEc ");
262 } else { 262 } else {
263 if (regs->cp0_status & ST0_KX) 263 if (regs->cp0_status & ST0_KX)
264 printk("KX "); 264 printk("KX ");
265 if (regs->cp0_status & ST0_SX) 265 if (regs->cp0_status & ST0_SX)
266 printk("SX "); 266 printk("SX ");
267 if (regs->cp0_status & ST0_UX) 267 if (regs->cp0_status & ST0_UX)
268 printk("UX "); 268 printk("UX ");
269 switch (regs->cp0_status & ST0_KSU) { 269 switch (regs->cp0_status & ST0_KSU) {
270 case KSU_USER: 270 case KSU_USER:
271 printk("USER "); 271 printk("USER ");
272 break; 272 break;
273 case KSU_SUPERVISOR: 273 case KSU_SUPERVISOR:
274 printk("SUPERVISOR "); 274 printk("SUPERVISOR ");
275 break; 275 break;
276 case KSU_KERNEL: 276 case KSU_KERNEL:
277 printk("KERNEL "); 277 printk("KERNEL ");
278 break; 278 break;
279 default: 279 default:
280 printk("BAD_MODE "); 280 printk("BAD_MODE ");
281 break; 281 break;
282 } 282 }
283 if (regs->cp0_status & ST0_ERL) 283 if (regs->cp0_status & ST0_ERL)
284 printk("ERL "); 284 printk("ERL ");
285 if (regs->cp0_status & ST0_EXL) 285 if (regs->cp0_status & ST0_EXL)
286 printk("EXL "); 286 printk("EXL ");
287 if (regs->cp0_status & ST0_IE) 287 if (regs->cp0_status & ST0_IE)
288 printk("IE "); 288 printk("IE ");
289 } 289 }
290 printk("\n"); 290 printk("\n");
291 291
292 printk("Cause : %08x\n", cause); 292 printk("Cause : %08x\n", cause);
293 293
294 cause = (cause & CAUSEF_EXCCODE) >> CAUSEB_EXCCODE; 294 cause = (cause & CAUSEF_EXCCODE) >> CAUSEB_EXCCODE;
295 if (1 <= cause && cause <= 5) 295 if (1 <= cause && cause <= 5)
296 printk("BadVA : %0*lx\n", field, regs->cp0_badvaddr); 296 printk("BadVA : %0*lx\n", field, regs->cp0_badvaddr);
297 297
298 printk("PrId : %08x\n", read_c0_prid()); 298 printk("PrId : %08x\n", read_c0_prid());
299 } 299 }
300 300
301 void show_registers(struct pt_regs *regs) 301 void show_registers(struct pt_regs *regs)
302 { 302 {
303 show_regs(regs); 303 show_regs(regs);
304 print_modules(); 304 print_modules();
305 printk("Process %s (pid: %d, threadinfo=%p, task=%p)\n", 305 printk("Process %s (pid: %d, threadinfo=%p, task=%p)\n",
306 current->comm, current->pid, current_thread_info(), current); 306 current->comm, current->pid, current_thread_info(), current);
307 show_stacktrace(current, regs); 307 show_stacktrace(current, regs);
308 show_code((unsigned int __user *) regs->cp0_epc); 308 show_code((unsigned int __user *) regs->cp0_epc);
309 printk("\n"); 309 printk("\n");
310 } 310 }
311 311
312 static DEFINE_SPINLOCK(die_lock); 312 static DEFINE_SPINLOCK(die_lock);
313 313
314 void __noreturn die(const char * str, struct pt_regs * regs) 314 void __noreturn die(const char * str, struct pt_regs * regs)
315 { 315 {
316 static int die_counter; 316 static int die_counter;
317 #ifdef CONFIG_MIPS_MT_SMTC 317 #ifdef CONFIG_MIPS_MT_SMTC
318 unsigned long dvpret = dvpe(); 318 unsigned long dvpret = dvpe();
319 #endif /* CONFIG_MIPS_MT_SMTC */ 319 #endif /* CONFIG_MIPS_MT_SMTC */
320 320
321 console_verbose(); 321 console_verbose();
322 spin_lock_irq(&die_lock); 322 spin_lock_irq(&die_lock);
323 bust_spinlocks(1); 323 bust_spinlocks(1);
324 #ifdef CONFIG_MIPS_MT_SMTC 324 #ifdef CONFIG_MIPS_MT_SMTC
325 mips_mt_regdump(dvpret); 325 mips_mt_regdump(dvpret);
326 #endif /* CONFIG_MIPS_MT_SMTC */ 326 #endif /* CONFIG_MIPS_MT_SMTC */
327 printk("%s[#%d]:\n", str, ++die_counter); 327 printk("%s[#%d]:\n", str, ++die_counter);
328 show_registers(regs); 328 show_registers(regs);
329 add_taint(TAINT_DIE);
329 spin_unlock_irq(&die_lock); 330 spin_unlock_irq(&die_lock);
330 331
331 if (in_interrupt()) 332 if (in_interrupt())
332 panic("Fatal exception in interrupt"); 333 panic("Fatal exception in interrupt");
333 334
334 if (panic_on_oops) { 335 if (panic_on_oops) {
335 printk(KERN_EMERG "Fatal exception: panic in 5 seconds\n"); 336 printk(KERN_EMERG "Fatal exception: panic in 5 seconds\n");
336 ssleep(5); 337 ssleep(5);
337 panic("Fatal exception"); 338 panic("Fatal exception");
338 } 339 }
339 340
340 do_exit(SIGSEGV); 341 do_exit(SIGSEGV);
341 } 342 }
342 343
343 extern const struct exception_table_entry __start___dbe_table[]; 344 extern const struct exception_table_entry __start___dbe_table[];
344 extern const struct exception_table_entry __stop___dbe_table[]; 345 extern const struct exception_table_entry __stop___dbe_table[];
345 346
346 __asm__( 347 __asm__(
347 " .section __dbe_table, \"a\"\n" 348 " .section __dbe_table, \"a\"\n"
348 " .previous \n"); 349 " .previous \n");
349 350
350 /* Given an address, look for it in the exception tables. */ 351 /* Given an address, look for it in the exception tables. */
351 static const struct exception_table_entry *search_dbe_tables(unsigned long addr) 352 static const struct exception_table_entry *search_dbe_tables(unsigned long addr)
352 { 353 {
353 const struct exception_table_entry *e; 354 const struct exception_table_entry *e;
354 355
355 e = search_extable(__start___dbe_table, __stop___dbe_table - 1, addr); 356 e = search_extable(__start___dbe_table, __stop___dbe_table - 1, addr);
356 if (!e) 357 if (!e)
357 e = search_module_dbetables(addr); 358 e = search_module_dbetables(addr);
358 return e; 359 return e;
359 } 360 }
360 361
361 asmlinkage void do_be(struct pt_regs *regs) 362 asmlinkage void do_be(struct pt_regs *regs)
362 { 363 {
363 const int field = 2 * sizeof(unsigned long); 364 const int field = 2 * sizeof(unsigned long);
364 const struct exception_table_entry *fixup = NULL; 365 const struct exception_table_entry *fixup = NULL;
365 int data = regs->cp0_cause & 4; 366 int data = regs->cp0_cause & 4;
366 int action = MIPS_BE_FATAL; 367 int action = MIPS_BE_FATAL;
367 368
368 /* XXX For now. Fixme, this searches the wrong table ... */ 369 /* XXX For now. Fixme, this searches the wrong table ... */
369 if (data && !user_mode(regs)) 370 if (data && !user_mode(regs))
370 fixup = search_dbe_tables(exception_epc(regs)); 371 fixup = search_dbe_tables(exception_epc(regs));
371 372
372 if (fixup) 373 if (fixup)
373 action = MIPS_BE_FIXUP; 374 action = MIPS_BE_FIXUP;
374 375
375 if (board_be_handler) 376 if (board_be_handler)
376 action = board_be_handler(regs, fixup != NULL); 377 action = board_be_handler(regs, fixup != NULL);
377 378
378 switch (action) { 379 switch (action) {
379 case MIPS_BE_DISCARD: 380 case MIPS_BE_DISCARD:
380 return; 381 return;
381 case MIPS_BE_FIXUP: 382 case MIPS_BE_FIXUP:
382 if (fixup) { 383 if (fixup) {
383 regs->cp0_epc = fixup->nextinsn; 384 regs->cp0_epc = fixup->nextinsn;
384 return; 385 return;
385 } 386 }
386 break; 387 break;
387 default: 388 default:
388 break; 389 break;
389 } 390 }
390 391
391 /* 392 /*
392 * Assume it would be too dangerous to continue ... 393 * Assume it would be too dangerous to continue ...
393 */ 394 */
394 printk(KERN_ALERT "%s bus error, epc == %0*lx, ra == %0*lx\n", 395 printk(KERN_ALERT "%s bus error, epc == %0*lx, ra == %0*lx\n",
395 data ? "Data" : "Instruction", 396 data ? "Data" : "Instruction",
396 field, regs->cp0_epc, field, regs->regs[31]); 397 field, regs->cp0_epc, field, regs->regs[31]);
397 die_if_kernel("Oops", regs); 398 die_if_kernel("Oops", regs);
398 force_sig(SIGBUS, current); 399 force_sig(SIGBUS, current);
399 } 400 }
400 401
401 /* 402 /*
402 * ll/sc emulation 403 * ll/sc emulation
403 */ 404 */
404 405
405 #define OPCODE 0xfc000000 406 #define OPCODE 0xfc000000
406 #define BASE 0x03e00000 407 #define BASE 0x03e00000
407 #define RT 0x001f0000 408 #define RT 0x001f0000
408 #define OFFSET 0x0000ffff 409 #define OFFSET 0x0000ffff
409 #define LL 0xc0000000 410 #define LL 0xc0000000
410 #define SC 0xe0000000 411 #define SC 0xe0000000
411 #define SPEC3 0x7c000000 412 #define SPEC3 0x7c000000
412 #define RD 0x0000f800 413 #define RD 0x0000f800
413 #define FUNC 0x0000003f 414 #define FUNC 0x0000003f
414 #define RDHWR 0x0000003b 415 #define RDHWR 0x0000003b
415 416
416 /* 417 /*
417 * The ll_bit is cleared by r*_switch.S 418 * The ll_bit is cleared by r*_switch.S
418 */ 419 */
419 420
420 unsigned long ll_bit; 421 unsigned long ll_bit;
421 422
422 static struct task_struct *ll_task = NULL; 423 static struct task_struct *ll_task = NULL;
423 424
424 static inline void simulate_ll(struct pt_regs *regs, unsigned int opcode) 425 static inline void simulate_ll(struct pt_regs *regs, unsigned int opcode)
425 { 426 {
426 unsigned long value, __user *vaddr; 427 unsigned long value, __user *vaddr;
427 long offset; 428 long offset;
428 int signal = 0; 429 int signal = 0;
429 430
430 /* 431 /*
431 * analyse the ll instruction that just caused a ri exception 432 * analyse the ll instruction that just caused a ri exception
432 * and compute the referenced address. 433 * and compute the referenced address.
433 */ 434 */
434 435
435 /* sign extend offset */ 436 /* sign extend offset */
436 offset = opcode & OFFSET; 437 offset = opcode & OFFSET;
437 offset <<= 16; 438 offset <<= 16;
438 offset >>= 16; 439 offset >>= 16;
439 440
440 vaddr = (unsigned long __user *) 441 vaddr = (unsigned long __user *)
441 ((unsigned long)(regs->regs[(opcode & BASE) >> 21]) + offset); 442 ((unsigned long)(regs->regs[(opcode & BASE) >> 21]) + offset);
442 443
443 if ((unsigned long)vaddr & 3) { 444 if ((unsigned long)vaddr & 3) {
444 signal = SIGBUS; 445 signal = SIGBUS;
445 goto sig; 446 goto sig;
446 } 447 }
447 if (get_user(value, vaddr)) { 448 if (get_user(value, vaddr)) {
448 signal = SIGSEGV; 449 signal = SIGSEGV;
449 goto sig; 450 goto sig;
450 } 451 }
451 452
452 preempt_disable(); 453 preempt_disable();
453 454
454 if (ll_task == NULL || ll_task == current) { 455 if (ll_task == NULL || ll_task == current) {
455 ll_bit = 1; 456 ll_bit = 1;
456 } else { 457 } else {
457 ll_bit = 0; 458 ll_bit = 0;
458 } 459 }
459 ll_task = current; 460 ll_task = current;
460 461
461 preempt_enable(); 462 preempt_enable();
462 463
463 compute_return_epc(regs); 464 compute_return_epc(regs);
464 465
465 regs->regs[(opcode & RT) >> 16] = value; 466 regs->regs[(opcode & RT) >> 16] = value;
466 467
467 return; 468 return;
468 469
469 sig: 470 sig:
470 force_sig(signal, current); 471 force_sig(signal, current);
471 } 472 }
472 473
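The offset decode in simulate_ll(), and again in simulate_sc() below, recovers a signed 16-bit displacement with a shift pair: move the field's sign bit up into the word's sign position, then arithmetic-shift it back down. Here is a pinned-width sketch of the idiom; like the kernel source, it relies on right shifts of signed values being arithmetic.

    /* Sign-extend the low 16 bits of an opcode, as the ll/sc emulation
     * does with "offset <<= 16; offset >>= 16".  Width is pinned to 32
     * bits so the example is host-independent; two's-complement
     * conversion and arithmetic right shift are assumed. */
    #include <stdint.h>
    #include <stdio.h>

    static int32_t sext16(uint32_t opcode)
    {
        uint32_t field = (opcode & 0xffffu) << 16; /* bit 15 -> bit 31 */
        return (int32_t)field >> 16;  /* shift drags the sign bit back */
    }

    int main(void)
    {
        printf("%d\n", sext16(0x7fff));  /* prints  32767 */
        printf("%d\n", sext16(0x8000));  /* prints -32768 */
        printf("%d\n", sext16(0xffff));  /* prints     -1 */
        return 0;
    }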
473 static inline void simulate_sc(struct pt_regs *regs, unsigned int opcode) 474 static inline void simulate_sc(struct pt_regs *regs, unsigned int opcode)
474 { 475 {
475 unsigned long __user *vaddr; 476 unsigned long __user *vaddr;
476 unsigned long reg; 477 unsigned long reg;
477 long offset; 478 long offset;
478 int signal = 0; 479 int signal = 0;
479 480
480 /* 481 /*
481 * analyse the sc instruction that just caused a ri exception 482 * analyse the sc instruction that just caused a ri exception
482 * and compute the referenced address. 483 * and compute the referenced address.
483 */ 484 */
484 485
485 /* sign extend offset */ 486 /* sign extend offset */
486 offset = opcode & OFFSET; 487 offset = opcode & OFFSET;
487 offset <<= 16; 488 offset <<= 16;
488 offset >>= 16; 489 offset >>= 16;
489 490
490 vaddr = (unsigned long __user *) 491 vaddr = (unsigned long __user *)
491 ((unsigned long)(regs->regs[(opcode & BASE) >> 21]) + offset); 492 ((unsigned long)(regs->regs[(opcode & BASE) >> 21]) + offset);
492 reg = (opcode & RT) >> 16; 493 reg = (opcode & RT) >> 16;
493 494
494 if ((unsigned long)vaddr & 3) { 495 if ((unsigned long)vaddr & 3) {
495 signal = SIGBUS; 496 signal = SIGBUS;
496 goto sig; 497 goto sig;
497 } 498 }
498 499
499 preempt_disable(); 500 preempt_disable();
500 501
501 if (ll_bit == 0 || ll_task != current) { 502 if (ll_bit == 0 || ll_task != current) {
502 compute_return_epc(regs); 503 compute_return_epc(regs);
503 regs->regs[reg] = 0; 504 regs->regs[reg] = 0;
504 preempt_enable(); 505 preempt_enable();
505 return; 506 return;
506 } 507 }
507 508
508 preempt_enable(); 509 preempt_enable();
509 510
510 if (put_user(regs->regs[reg], vaddr)) { 511 if (put_user(regs->regs[reg], vaddr)) {
511 signal = SIGSEGV; 512 signal = SIGSEGV;
512 goto sig; 513 goto sig;
513 } 514 }
514 515
515 compute_return_epc(regs); 516 compute_return_epc(regs);
516 regs->regs[reg] = 1; 517 regs->regs[reg] = 1;
517 518
518 return; 519 return;
519 520
520 sig: 521 sig:
521 force_sig(signal, current); 522 force_sig(signal, current);
522 } 523 }
523 524
524 /* 525 /*
525 * ll uses the opcode of lwc0 and sc uses the opcode of swc0. That is, both 526 * ll uses the opcode of lwc0 and sc uses the opcode of swc0. That is, both
526 * opcodes are supposed to result in coprocessor unusable exceptions if 527 * opcodes are supposed to result in coprocessor unusable exceptions if
527 * executed on ll/sc-less processors. That's the theory. In practice a 528 * executed on ll/sc-less processors. That's the theory. In practice a
528 * few processors such as NEC's VR4100 throw reserved instruction exceptions 529 * few processors such as NEC's VR4100 throw reserved instruction exceptions
529 * instead, so we're doing the emulation thing in both exception handlers. 530 * instead, so we're doing the emulation thing in both exception handlers.
530 */ 531 */
531 static inline int simulate_llsc(struct pt_regs *regs) 532 static inline int simulate_llsc(struct pt_regs *regs)
532 { 533 {
533 unsigned int opcode; 534 unsigned int opcode;
534 535
535 if (get_user(opcode, (unsigned int __user *) exception_epc(regs))) 536 if (get_user(opcode, (unsigned int __user *) exception_epc(regs)))
536 goto out_sigsegv; 537 goto out_sigsegv;
537 538
538 if ((opcode & OPCODE) == LL) { 539 if ((opcode & OPCODE) == LL) {
539 simulate_ll(regs, opcode); 540 simulate_ll(regs, opcode);
540 return 0; 541 return 0;
541 } 542 }
542 if ((opcode & OPCODE) == SC) { 543 if ((opcode & OPCODE) == SC) {
543 simulate_sc(regs, opcode); 544 simulate_sc(regs, opcode);
544 return 0; 545 return 0;
545 } 546 }
546 547
547 return -EFAULT; /* Strange things going on ... */ 548 return -EFAULT; /* Strange things going on ... */
548 549
549 out_sigsegv: 550 out_sigsegv:
550 force_sig(SIGSEGV, current); 551 force_sig(SIGSEGV, current);
551 return -EFAULT; 552 return -EFAULT;
552 } 553 }
553 554
554 /* 555 /*
555 * Simulate trapping 'rdhwr' instructions to provide user accessible 556 * Simulate trapping 'rdhwr' instructions to provide user accessible
556 * registers not implemented in hardware. The only current use of this 557 * registers not implemented in hardware. The only current use of this
557 * is the thread area pointer. 558 * is the thread area pointer.
558 */ 559 */
559 static inline int simulate_rdhwr(struct pt_regs *regs) 560 static inline int simulate_rdhwr(struct pt_regs *regs)
560 { 561 {
561 struct thread_info *ti = task_thread_info(current); 562 struct thread_info *ti = task_thread_info(current);
562 unsigned int opcode; 563 unsigned int opcode;
563 564
564 if (get_user(opcode, (unsigned int __user *) exception_epc(regs))) 565 if (get_user(opcode, (unsigned int __user *) exception_epc(regs)))
565 goto out_sigsegv; 566 goto out_sigsegv;
566 567
567 if (unlikely(compute_return_epc(regs))) 568 if (unlikely(compute_return_epc(regs)))
568 return -EFAULT; 569 return -EFAULT;
569 570
570 if ((opcode & OPCODE) == SPEC3 && (opcode & FUNC) == RDHWR) { 571 if ((opcode & OPCODE) == SPEC3 && (opcode & FUNC) == RDHWR) {
571 int rd = (opcode & RD) >> 11; 572 int rd = (opcode & RD) >> 11;
572 int rt = (opcode & RT) >> 16; 573 int rt = (opcode & RT) >> 16;
573 switch (rd) { 574 switch (rd) {
574 case 29: 575 case 29:
575 regs->regs[rt] = ti->tp_value; 576 regs->regs[rt] = ti->tp_value;
576 return 0; 577 return 0;
577 default: 578 default:
578 return -EFAULT; 579 return -EFAULT;
579 } 580 }
580 } 581 }
581 582
582 /* Not ours. */ 583 /* Not ours. */
583 return -EFAULT; 584 return -EFAULT;
584 585
585 out_sigsegv: 586 out_sigsegv:
586 force_sig(SIGSEGV, current); 587 force_sig(SIGSEGV, current);
587 return -EFAULT; 588 return -EFAULT;
588 } 589 }
589 590
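simulate_rdhwr() above is straight field extraction using the masks defined earlier in this file. The following compact demonstration decodes a hand-assembled instruction word; the rt value is arbitrary, and rd 29 is the thread-area register the handler actually services.

    /* Decode a trapped rdhwr the way the handler does: check the major
     * opcode and function code, then pull out the rd/rt register fields.
     * Mask values are copied from the #defines in this file. */
    #include <stdio.h>

    #define OPCODE 0xfc000000
    #define RT     0x001f0000
    #define SPEC3  0x7c000000
    #define RD     0x0000f800
    #define FUNC   0x0000003f
    #define RDHWR  0x0000003b

    int main(void)
    {
        /* hand-built "rdhwr rt=3, rd=29" word, assuming the encoding
         * implied by the masks above */
        unsigned insn = SPEC3 | (3u << 16) | (29u << 11) | RDHWR;

        if ((insn & OPCODE) == SPEC3 && (insn & FUNC) == RDHWR)
            printf("rdhwr rt=%u rd=%u\n",
                   (insn & RT) >> 16, (insn & RD) >> 11);
        return 0;
    }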
590 asmlinkage void do_ov(struct pt_regs *regs) 591 asmlinkage void do_ov(struct pt_regs *regs)
591 { 592 {
592 siginfo_t info; 593 siginfo_t info;
593 594
594 die_if_kernel("Integer overflow", regs); 595 die_if_kernel("Integer overflow", regs);
595 596
596 info.si_code = FPE_INTOVF; 597 info.si_code = FPE_INTOVF;
597 info.si_signo = SIGFPE; 598 info.si_signo = SIGFPE;
598 info.si_errno = 0; 599 info.si_errno = 0;
599 info.si_addr = (void __user *) regs->cp0_epc; 600 info.si_addr = (void __user *) regs->cp0_epc;
600 force_sig_info(SIGFPE, &info, current); 601 force_sig_info(SIGFPE, &info, current);
601 } 602 }
602 603
603 /* 604 /*
604 * XXX Delayed fp exceptions when doing a lazy ctx switch XXX 605 * XXX Delayed fp exceptions when doing a lazy ctx switch XXX
605 */ 606 */
606 asmlinkage void do_fpe(struct pt_regs *regs, unsigned long fcr31) 607 asmlinkage void do_fpe(struct pt_regs *regs, unsigned long fcr31)
607 { 608 {
608 die_if_kernel("FP exception in kernel code", regs); 609 die_if_kernel("FP exception in kernel code", regs);
609 610
610 if (fcr31 & FPU_CSR_UNI_X) { 611 if (fcr31 & FPU_CSR_UNI_X) {
611 int sig; 612 int sig;
612 613
613 /* 614 /*
614 * Unimplemented operation exception. If we've got the full 615 * Unimplemented operation exception. If we've got the full
615 * software emulator on-board, let's use it... 616 * software emulator on-board, let's use it...
616 * 617 *
617 * Force FPU to dump state into task/thread context. We're 618 * Force FPU to dump state into task/thread context. We're
618 * moving a lot of data here for what is probably a single 619 * moving a lot of data here for what is probably a single
619 * instruction, but the alternative is to pre-decode the FP 620 * instruction, but the alternative is to pre-decode the FP
620 * register operands before invoking the emulator, which seems 621 * register operands before invoking the emulator, which seems
621 * a bit extreme for what should be an infrequent event. 622 * a bit extreme for what should be an infrequent event.
622 */ 623 */
623 /* Ensure 'resume' does not overwrite the saved fp context again. */ 624 /* Ensure 'resume' does not overwrite the saved fp context again. */
624 lose_fpu(1); 625 lose_fpu(1);
625 626
626 /* Run the emulator */ 627 /* Run the emulator */
627 sig = fpu_emulator_cop1Handler (regs, &current->thread.fpu, 1); 628 sig = fpu_emulator_cop1Handler (regs, &current->thread.fpu, 1);
628 629
629 /* 630 /*
630 * We can't allow the emulated instruction to leave any of 631 * We can't allow the emulated instruction to leave any of
631 * the cause bits set in $fcr31. 632 * the cause bits set in $fcr31.
632 */ 633 */
633 current->thread.fpu.fcr31 &= ~FPU_CSR_ALL_X; 634 current->thread.fpu.fcr31 &= ~FPU_CSR_ALL_X;
634 635
635 /* Restore the hardware register state */ 636 /* Restore the hardware register state */
636 own_fpu(1); /* Using the FPU again. */ 637 own_fpu(1); /* Using the FPU again. */
637 638
638 /* If something went wrong, signal */ 639 /* If something went wrong, signal */
639 if (sig) 640 if (sig)
640 force_sig(sig, current); 641 force_sig(sig, current);
641 642
642 return; 643 return;
643 } 644 }
644 645
645 force_sig(SIGFPE, current); 646 force_sig(SIGFPE, current);
646 } 647 }
647 648
648 asmlinkage void do_bp(struct pt_regs *regs) 649 asmlinkage void do_bp(struct pt_regs *regs)
649 { 650 {
650 unsigned int opcode, bcode; 651 unsigned int opcode, bcode;
651 siginfo_t info; 652 siginfo_t info;
652 653
653 if (__get_user(opcode, (unsigned int __user *) exception_epc(regs))) 654 if (__get_user(opcode, (unsigned int __user *) exception_epc(regs)))
654 goto out_sigsegv; 655 goto out_sigsegv;
655 656
656 /* 657 /*
657 * There is an ancient bug in the MIPS assemblers: the break 658 * There is an ancient bug in the MIPS assemblers: the break
659 * code starts at bit 16 instead of bit 6 in the opcode. 659 * code starts at bit 16 instead of bit 6 in the opcode.
659 * Gas is bug-compatible, but not always, grrr... 660 * Gas is bug-compatible, but not always, grrr...
660 * We handle both cases with a simple heuristic. --macro 661 * We handle both cases with a simple heuristic. --macro
661 */ 662 */
662 bcode = ((opcode >> 6) & ((1 << 20) - 1)); 663 bcode = ((opcode >> 6) & ((1 << 20) - 1));
663 if (bcode < (1 << 10)) 664 if (bcode < (1 << 10))
664 bcode <<= 10; 665 bcode <<= 10;
665 666
666 /* 667 /*
667 * (A short test says that IRIX 5.3 sends SIGTRAP for all break 668 * (A short test says that IRIX 5.3 sends SIGTRAP for all break
668 * insns, even for break codes that indicate arithmetic failures. 669 * insns, even for break codes that indicate arithmetic failures.
669 * Weird ...) 670 * Weird ...)
670 * But should we continue the brokenness??? --macro 671 * But should we continue the brokenness??? --macro
671 */ 672 */
672 switch (bcode) { 673 switch (bcode) {
673 case BRK_OVERFLOW << 10: 674 case BRK_OVERFLOW << 10:
674 case BRK_DIVZERO << 10: 675 case BRK_DIVZERO << 10:
675 die_if_kernel("Break instruction in kernel code", regs); 676 die_if_kernel("Break instruction in kernel code", regs);
676 if (bcode == (BRK_DIVZERO << 10)) 677 if (bcode == (BRK_DIVZERO << 10))
677 info.si_code = FPE_INTDIV; 678 info.si_code = FPE_INTDIV;
678 else 679 else
679 info.si_code = FPE_INTOVF; 680 info.si_code = FPE_INTOVF;
680 info.si_signo = SIGFPE; 681 info.si_signo = SIGFPE;
681 info.si_errno = 0; 682 info.si_errno = 0;
682 info.si_addr = (void __user *) regs->cp0_epc; 683 info.si_addr = (void __user *) regs->cp0_epc;
683 force_sig_info(SIGFPE, &info, current); 684 force_sig_info(SIGFPE, &info, current);
684 break; 685 break;
685 case BRK_BUG: 686 case BRK_BUG:
686 die("Kernel bug detected", regs); 687 die("Kernel bug detected", regs);
687 break; 688 break;
688 default: 689 default:
689 die_if_kernel("Break instruction in kernel code", regs); 690 die_if_kernel("Break instruction in kernel code", regs);
690 force_sig(SIGTRAP, current); 691 force_sig(SIGTRAP, current);
691 } 692 }
692 return; 693 return;
693 694
694 out_sigsegv: 695 out_sigsegv:
695 force_sig(SIGSEGV, current); 696 force_sig(SIGSEGV, current);
696 } 697 }
697 698
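Because of the assembler quirk described in do_bp()'s comment, the same break code may sit at bit 6 or bit 16 of the instruction, so the handler folds both encodings into one form before the switch. Here is a runnable check of that normalization; the numeric value used for BRK_DIVZERO is an assumption.

    /* Normalize a break code the way do_bp() does: take the 20-bit
     * field above the low 6 opcode bits, and if the value is small
     * enough to have come from the bit-6 encoding, promote it to the
     * bit-16 form so both assembler conventions compare equal. */
    #include <stdio.h>

    static unsigned normalize_bcode(unsigned opcode)
    {
        unsigned bcode = (opcode >> 6) & ((1u << 20) - 1);
        if (bcode < (1u << 10))
            bcode <<= 10;
        return bcode;
    }

    int main(void)
    {
        unsigned brk_divzero = 7;  /* assumed code, for the demo only */
        printf("%u\n", normalize_bcode(brk_divzero << 6));   /* 7168 */
        printf("%u\n", normalize_bcode(brk_divzero << 16));  /* 7168 */
        return 0;  /* both encodings normalize to the same value */
    }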
698 asmlinkage void do_tr(struct pt_regs *regs) 699 asmlinkage void do_tr(struct pt_regs *regs)
699 { 700 {
700 unsigned int opcode, tcode = 0; 701 unsigned int opcode, tcode = 0;
701 siginfo_t info; 702 siginfo_t info;
702 703
703 if (__get_user(opcode, (unsigned int __user *) exception_epc(regs))) 704 if (__get_user(opcode, (unsigned int __user *) exception_epc(regs)))
704 goto out_sigsegv; 705 goto out_sigsegv;
705 706
706 /* Immediate versions don't provide a code. */ 707 /* Immediate versions don't provide a code. */
707 if (!(opcode & OPCODE)) 708 if (!(opcode & OPCODE))
708 tcode = ((opcode >> 6) & ((1 << 10) - 1)); 709 tcode = ((opcode >> 6) & ((1 << 10) - 1));
709 710
710 /* 711 /*
711 * (A short test says that IRIX 5.3 sends SIGTRAP for all trap 712 * (A short test says that IRIX 5.3 sends SIGTRAP for all trap
712 * insns, even for trap codes that indicate arithmetic failures. 713 * insns, even for trap codes that indicate arithmetic failures.
713 * Weird ...) 714 * Weird ...)
714 * But should we continue the brokenness??? --macro 715 * But should we continue the brokenness??? --macro
715 */ 716 */
716 switch (tcode) { 717 switch (tcode) {
717 case BRK_OVERFLOW: 718 case BRK_OVERFLOW:
718 case BRK_DIVZERO: 719 case BRK_DIVZERO:
719 die_if_kernel("Trap instruction in kernel code", regs); 720 die_if_kernel("Trap instruction in kernel code", regs);
720 if (tcode == BRK_DIVZERO) 721 if (tcode == BRK_DIVZERO)
721 info.si_code = FPE_INTDIV; 722 info.si_code = FPE_INTDIV;
722 else 723 else
723 info.si_code = FPE_INTOVF; 724 info.si_code = FPE_INTOVF;
724 info.si_signo = SIGFPE; 725 info.si_signo = SIGFPE;
725 info.si_errno = 0; 726 info.si_errno = 0;
726 info.si_addr = (void __user *) regs->cp0_epc; 727 info.si_addr = (void __user *) regs->cp0_epc;
727 force_sig_info(SIGFPE, &info, current); 728 force_sig_info(SIGFPE, &info, current);
728 break; 729 break;
729 case BRK_BUG: 730 case BRK_BUG:
730 die("Kernel bug detected", regs); 731 die("Kernel bug detected", regs);
731 break; 732 break;
732 default: 733 default:
733 die_if_kernel("Trap instruction in kernel code", regs); 734 die_if_kernel("Trap instruction in kernel code", regs);
734 force_sig(SIGTRAP, current); 735 force_sig(SIGTRAP, current);
735 } 736 }
736 return; 737 return;
737 738
738 out_sigsegv: 739 out_sigsegv:
739 force_sig(SIGSEGV, current); 740 force_sig(SIGSEGV, current);
740 } 741 }
741 742
742 asmlinkage void do_ri(struct pt_regs *regs) 743 asmlinkage void do_ri(struct pt_regs *regs)
743 { 744 {
744 die_if_kernel("Reserved instruction in kernel code", regs); 745 die_if_kernel("Reserved instruction in kernel code", regs);
745 746
746 if (!cpu_has_llsc) 747 if (!cpu_has_llsc)
747 if (!simulate_llsc(regs)) 748 if (!simulate_llsc(regs))
748 return; 749 return;
749 750
750 if (!simulate_rdhwr(regs)) 751 if (!simulate_rdhwr(regs))
751 return; 752 return;
752 753
753 force_sig(SIGILL, current); 754 force_sig(SIGILL, current);
754 } 755 }
755 756
756 /* 757 /*
757 * MIPS MT processors may have fewer FPU contexts than CPU threads. If we've 758 * MIPS MT processors may have fewer FPU contexts than CPU threads. If we've
758 * emulated more than some threshold number of instructions, force migration to 759 * emulated more than some threshold number of instructions, force migration to
759 * a "CPU" that has FP support. 760 * a "CPU" that has FP support.
760 */ 761 */
761 static void mt_ase_fp_affinity(void) 762 static void mt_ase_fp_affinity(void)
762 { 763 {
763 #ifdef CONFIG_MIPS_MT_FPAFF 764 #ifdef CONFIG_MIPS_MT_FPAFF
764 if (mt_fpemul_threshold > 0 && 765 if (mt_fpemul_threshold > 0 &&
765 ((current->thread.emulated_fp++ > mt_fpemul_threshold))) { 766 ((current->thread.emulated_fp++ > mt_fpemul_threshold))) {
766 /* 767 /*
767 * If there's no FPU present, or if the application has already 768 * If there's no FPU present, or if the application has already
768 * restricted the allowed set to exclude any CPUs with FPUs, 769 * restricted the allowed set to exclude any CPUs with FPUs,
769 * we'll skip the procedure. 770 * we'll skip the procedure.
770 */ 771 */
771 if (cpus_intersects(current->cpus_allowed, mt_fpu_cpumask)) { 772 if (cpus_intersects(current->cpus_allowed, mt_fpu_cpumask)) {
772 cpumask_t tmask; 773 cpumask_t tmask;
773 774
774 cpus_and(tmask, current->thread.user_cpus_allowed, 775 cpus_and(tmask, current->thread.user_cpus_allowed,
775 mt_fpu_cpumask); 776 mt_fpu_cpumask);
776 set_cpus_allowed(current, tmask); 777 set_cpus_allowed(current, tmask);
777 current->thread.mflags |= MF_FPUBOUND; 778 current->thread.mflags |= MF_FPUBOUND;
778 } 779 }
779 } 780 }
780 #endif /* CONFIG_MIPS_MT_FPAFF */ 781 #endif /* CONFIG_MIPS_MT_FPAFF */
781 } 782 }
782 783
783 asmlinkage void do_cpu(struct pt_regs *regs) 784 asmlinkage void do_cpu(struct pt_regs *regs)
784 { 785 {
785 unsigned int cpid; 786 unsigned int cpid;
786 787
787 die_if_kernel("do_cpu invoked from kernel context!", regs); 788 die_if_kernel("do_cpu invoked from kernel context!", regs);
788 789
789 cpid = (regs->cp0_cause >> CAUSEB_CE) & 3; 790 cpid = (regs->cp0_cause >> CAUSEB_CE) & 3;
790 791
791 switch (cpid) { 792 switch (cpid) {
792 case 0: 793 case 0:
793 if (!cpu_has_llsc) 794 if (!cpu_has_llsc)
794 if (!simulate_llsc(regs)) 795 if (!simulate_llsc(regs))
795 return; 796 return;
796 797
797 if (!simulate_rdhwr(regs)) 798 if (!simulate_rdhwr(regs))
798 return; 799 return;
799 800
800 break; 801 break;
801 802
802 case 1: 803 case 1:
803 if (used_math()) /* Using the FPU again. */ 804 if (used_math()) /* Using the FPU again. */
804 own_fpu(1); 805 own_fpu(1);
805 else { /* First time FPU user. */ 806 else { /* First time FPU user. */
806 init_fpu(); 807 init_fpu();
807 set_used_math(); 808 set_used_math();
808 } 809 }
809 810
810 if (!raw_cpu_has_fpu) { 811 if (!raw_cpu_has_fpu) {
811 int sig; 812 int sig;
812 sig = fpu_emulator_cop1Handler(regs, 813 sig = fpu_emulator_cop1Handler(regs,
813 &current->thread.fpu, 0); 814 &current->thread.fpu, 0);
814 if (sig) 815 if (sig)
815 force_sig(sig, current); 816 force_sig(sig, current);
816 else 817 else
817 mt_ase_fp_affinity(); 818 mt_ase_fp_affinity();
818 } 819 }
819 820
820 return; 821 return;
821 822
822 case 2: 823 case 2:
823 case 3: 824 case 3:
824 break; 825 break;
825 } 826 }
826 827
827 force_sig(SIGILL, current); 828 force_sig(SIGILL, current);
828 } 829 }
829 830
830 asmlinkage void do_mdmx(struct pt_regs *regs) 831 asmlinkage void do_mdmx(struct pt_regs *regs)
831 { 832 {
832 force_sig(SIGILL, current); 833 force_sig(SIGILL, current);
833 } 834 }
834 835
835 asmlinkage void do_watch(struct pt_regs *regs) 836 asmlinkage void do_watch(struct pt_regs *regs)
836 { 837 {
837 if (board_watchpoint_handler) { 838 if (board_watchpoint_handler) {
838 (*board_watchpoint_handler)(regs); 839 (*board_watchpoint_handler)(regs);
839 return; 840 return;
840 } 841 }
841 842
842 /* 843 /*
843 * We use the watch exception where available to detect stack 844 * We use the watch exception where available to detect stack
844 * overflows. 845 * overflows.
845 */ 846 */
846 dump_tlb_all(); 847 dump_tlb_all();
847 show_regs(regs); 848 show_regs(regs);
848 panic("Caught WATCH exception - probably caused by stack overflow."); 849 panic("Caught WATCH exception - probably caused by stack overflow.");
849 } 850 }
850 851
851 asmlinkage void do_mcheck(struct pt_regs *regs) 852 asmlinkage void do_mcheck(struct pt_regs *regs)
852 { 853 {
853 const int field = 2 * sizeof(unsigned long); 854 const int field = 2 * sizeof(unsigned long);
854 int multi_match = regs->cp0_status & ST0_TS; 855 int multi_match = regs->cp0_status & ST0_TS;
855 856
856 show_regs(regs); 857 show_regs(regs);
857 858
858 if (multi_match) { 859 if (multi_match) {
859 printk("Index : %0x\n", read_c0_index()); 860 printk("Index : %0x\n", read_c0_index());
860 printk("Pagemask: %0x\n", read_c0_pagemask()); 861 printk("Pagemask: %0x\n", read_c0_pagemask());
861 printk("EntryHi : %0*lx\n", field, read_c0_entryhi()); 862 printk("EntryHi : %0*lx\n", field, read_c0_entryhi());
862 printk("EntryLo0: %0*lx\n", field, read_c0_entrylo0()); 863 printk("EntryLo0: %0*lx\n", field, read_c0_entrylo0());
863 printk("EntryLo1: %0*lx\n", field, read_c0_entrylo1()); 864 printk("EntryLo1: %0*lx\n", field, read_c0_entrylo1());
864 printk("\n"); 865 printk("\n");
865 dump_tlb_all(); 866 dump_tlb_all();
866 } 867 }
867 868
868 show_code((unsigned int __user *) regs->cp0_epc); 869 show_code((unsigned int __user *) regs->cp0_epc);
869 870
870 /* 871 /*
871 * Some chips may have other causes of machine check (e.g. SB1 872 * Some chips may have other causes of machine check (e.g. SB1
872 * graduation timer) 873 * graduation timer)
873 */ 874 */
874 panic("Caught Machine Check exception - %scaused by multiple " 875 panic("Caught Machine Check exception - %scaused by multiple "
875 "matching entries in the TLB.", 876 "matching entries in the TLB.",
876 (multi_match) ? "" : "not "); 877 (multi_match) ? "" : "not ");
877 } 878 }
878 879
879 asmlinkage void do_mt(struct pt_regs *regs) 880 asmlinkage void do_mt(struct pt_regs *regs)
880 { 881 {
881 int subcode; 882 int subcode;
882 883
883 subcode = (read_vpe_c0_vpecontrol() & VPECONTROL_EXCPT) 884 subcode = (read_vpe_c0_vpecontrol() & VPECONTROL_EXCPT)
884 >> VPECONTROL_EXCPT_SHIFT; 885 >> VPECONTROL_EXCPT_SHIFT;
885 switch (subcode) { 886 switch (subcode) {
886 case 0: 887 case 0:
887 printk(KERN_DEBUG "Thread Underflow\n"); 888 printk(KERN_DEBUG "Thread Underflow\n");
888 break; 889 break;
889 case 1: 890 case 1:
890 printk(KERN_DEBUG "Thread Overflow\n"); 891 printk(KERN_DEBUG "Thread Overflow\n");
891 break; 892 break;
892 case 2: 893 case 2:
893 printk(KERN_DEBUG "Invalid YIELD Qualifier\n"); 894 printk(KERN_DEBUG "Invalid YIELD Qualifier\n");
894 break; 895 break;
895 case 3: 896 case 3:
896 printk(KERN_DEBUG "Gating Storage Exception\n"); 897 printk(KERN_DEBUG "Gating Storage Exception\n");
897 break; 898 break;
898 case 4: 899 case 4:
899 printk(KERN_DEBUG "YIELD Scheduler Exception\n"); 900 printk(KERN_DEBUG "YIELD Scheduler Exception\n");
900 break; 901 break;
901 case 5: 902 case 5:
902 printk(KERN_DEBUG "Gating Storage Scheduler Exception\n"); 903 printk(KERN_DEBUG "Gating Storage Scheduler Exception\n");
903 break; 904 break;
904 default: 905 default:
905 printk(KERN_DEBUG "*** UNKNOWN THREAD EXCEPTION %d ***\n", 906 printk(KERN_DEBUG "*** UNKNOWN THREAD EXCEPTION %d ***\n",
906 subcode); 907 subcode);
907 break; 908 break;
908 } 909 }
909 die_if_kernel("MIPS MT Thread exception in kernel", regs); 910 die_if_kernel("MIPS MT Thread exception in kernel", regs);
910 911
911 force_sig(SIGILL, current); 912 force_sig(SIGILL, current);
912 } 913 }
913 914
914 915
915 asmlinkage void do_dsp(struct pt_regs *regs) 916 asmlinkage void do_dsp(struct pt_regs *regs)
916 { 917 {
917 if (cpu_has_dsp) 918 if (cpu_has_dsp)
918 panic("Unexpected DSP exception\n"); 919 panic("Unexpected DSP exception\n");
919 920
920 force_sig(SIGILL, current); 921 force_sig(SIGILL, current);
921 } 922 }
922 923
923 asmlinkage void do_reserved(struct pt_regs *regs) 924 asmlinkage void do_reserved(struct pt_regs *regs)
924 { 925 {
925 /* 926 /*
926 * Game over - no way to handle this if it ever occurs. Most probably 927 * Game over - no way to handle this if it ever occurs. Most probably
927 * caused by a new, unknown CPU type or by another deadly 928 * caused by a new, unknown CPU type or by another deadly
928 * hardware/software error. 929 * hardware/software error.
929 */ 930 */
930 show_regs(regs); 931 show_regs(regs);
931 panic("Caught reserved exception %ld - should not happen.", 932 panic("Caught reserved exception %ld - should not happen.",
932 (regs->cp0_cause & 0x7f) >> 2); 933 (regs->cp0_cause & 0x7f) >> 2);
933 } 934 }
934 935
935 /* 936 /*
936 * Some MIPS CPUs can enable/disable cache parity detection, but do 937 * Some MIPS CPUs can enable/disable cache parity detection, but do
937 * it in different ways. 938 * it in different ways.
938 */ 939 */
939 static inline void parity_protection_init(void) 940 static inline void parity_protection_init(void)
940 { 941 {
941 switch (current_cpu_data.cputype) { 942 switch (current_cpu_data.cputype) {
942 case CPU_24K: 943 case CPU_24K:
943 case CPU_34K: 944 case CPU_34K:
944 case CPU_5KC: 945 case CPU_5KC:
945 write_c0_ecc(0x80000000); 946 write_c0_ecc(0x80000000);
946 back_to_back_c0_hazard(); 947 back_to_back_c0_hazard();
947 /* Set the PE bit (bit 31) in the c0_errctl register. */ 948 /* Set the PE bit (bit 31) in the c0_errctl register. */
948 printk(KERN_INFO "Cache parity protection %sabled\n", 949 printk(KERN_INFO "Cache parity protection %sabled\n",
949 (read_c0_ecc() & 0x80000000) ? "en" : "dis"); 950 (read_c0_ecc() & 0x80000000) ? "en" : "dis");
950 break; 951 break;
951 case CPU_20KC: 952 case CPU_20KC:
952 case CPU_25KF: 953 case CPU_25KF:
953 /* Clear the DE bit (bit 16) in the c0_status register. */ 954 /* Clear the DE bit (bit 16) in the c0_status register. */
954 printk(KERN_INFO "Enable cache parity protection for " 955 printk(KERN_INFO "Enable cache parity protection for "
955 "MIPS 20KC/25KF CPUs.\n"); 956 "MIPS 20KC/25KF CPUs.\n");
956 clear_c0_status(ST0_DE); 957 clear_c0_status(ST0_DE);
957 break; 958 break;
958 default: 959 default:
959 break; 960 break;
960 } 961 }
961 } 962 }
962 963
963 asmlinkage void cache_parity_error(void) 964 asmlinkage void cache_parity_error(void)
964 { 965 {
965 const int field = 2 * sizeof(unsigned long); 966 const int field = 2 * sizeof(unsigned long);
966 unsigned int reg_val; 967 unsigned int reg_val;
967 968
968 /* For the moment, report the problem and hang. */ 969 /* For the moment, report the problem and hang. */
969 printk("Cache error exception:\n"); 970 printk("Cache error exception:\n");
970 printk("cp0_errorepc == %0*lx\n", field, read_c0_errorepc()); 971 printk("cp0_errorepc == %0*lx\n", field, read_c0_errorepc());
971 reg_val = read_c0_cacheerr(); 972 reg_val = read_c0_cacheerr();
972 printk("c0_cacheerr == %08x\n", reg_val); 973 printk("c0_cacheerr == %08x\n", reg_val);
973 974
974 printk("Decoded c0_cacheerr: %s cache fault in %s reference.\n", 975 printk("Decoded c0_cacheerr: %s cache fault in %s reference.\n",
975 reg_val & (1<<30) ? "secondary" : "primary", 976 reg_val & (1<<30) ? "secondary" : "primary",
976 reg_val & (1<<31) ? "data" : "insn"); 977 reg_val & (1<<31) ? "data" : "insn");
977 printk("Error bits: %s%s%s%s%s%s%s\n", 978 printk("Error bits: %s%s%s%s%s%s%s\n",
978 reg_val & (1<<29) ? "ED " : "", 979 reg_val & (1<<29) ? "ED " : "",
979 reg_val & (1<<28) ? "ET " : "", 980 reg_val & (1<<28) ? "ET " : "",
980 reg_val & (1<<26) ? "EE " : "", 981 reg_val & (1<<26) ? "EE " : "",
981 reg_val & (1<<25) ? "EB " : "", 982 reg_val & (1<<25) ? "EB " : "",
982 reg_val & (1<<24) ? "EI " : "", 983 reg_val & (1<<24) ? "EI " : "",
983 reg_val & (1<<23) ? "E1 " : "", 984 reg_val & (1<<23) ? "E1 " : "",
984 reg_val & (1<<22) ? "E0 " : ""); 985 reg_val & (1<<22) ? "E0 " : "");
985 printk("IDX: 0x%08x\n", reg_val & ((1<<22)-1)); 986 printk("IDX: 0x%08x\n", reg_val & ((1<<22)-1));
986 987
987 #if defined(CONFIG_CPU_MIPS32) || defined(CONFIG_CPU_MIPS64) 988 #if defined(CONFIG_CPU_MIPS32) || defined(CONFIG_CPU_MIPS64)
988 if (reg_val & (1<<22)) 989 if (reg_val & (1<<22))
989 printk("DErrAddr0: 0x%0*lx\n", field, read_c0_derraddr0()); 990 printk("DErrAddr0: 0x%0*lx\n", field, read_c0_derraddr0());
990 991
991 if (reg_val & (1<<23)) 992 if (reg_val & (1<<23))
992 printk("DErrAddr1: 0x%0*lx\n", field, read_c0_derraddr1()); 993 printk("DErrAddr1: 0x%0*lx\n", field, read_c0_derraddr1());
993 #endif 994 #endif
994 995
995 panic("Can't handle the cache error!"); 996 panic("Can't handle the cache error!");
996 } 997 }
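
The decoder above walks the c0_cacheerr flag bits with a chain of ternaries. The same decoding can also be written table-driven; the following is a self-contained sketch for illustration only (bit positions and flag names are taken from the printk above, nothing here is kernel API):

#include <stdio.h>
#include <stdint.h>

static const struct {
        int bit;
        const char *name;
} cacheerr_bits[] = {
        { 29, "ED" }, { 28, "ET" }, { 26, "EE" }, { 25, "EB" },
        { 24, "EI" }, { 23, "E1" }, { 22, "E0" },
};

static void decode_cacheerr(uint32_t reg_val)
{
        size_t i;

        printf("Error bits:");
        for (i = 0; i < sizeof(cacheerr_bits) / sizeof(cacheerr_bits[0]); i++)
                if (reg_val & (1u << cacheerr_bits[i].bit))
                        printf(" %s", cacheerr_bits[i].name);
        printf("\nIDX: 0x%08x\n", reg_val & ((1u << 22) - 1));
}

int main(void)
{
        decode_cacheerr(0x30000007u);   /* made-up value: ED and ET set */
        return 0;
}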
997 998
998 /* 999 /*
999 * SDBBP EJTAG debug exception handler. 1000 * SDBBP EJTAG debug exception handler.
1000 * We skip the instruction and return to the next instruction. 1001 * We skip the instruction and return to the next instruction.
1001 */ 1002 */
1002 void ejtag_exception_handler(struct pt_regs *regs) 1003 void ejtag_exception_handler(struct pt_regs *regs)
1003 { 1004 {
1004 const int field = 2 * sizeof(unsigned long); 1005 const int field = 2 * sizeof(unsigned long);
1005 unsigned long depc, old_epc; 1006 unsigned long depc, old_epc;
1006 unsigned int debug; 1007 unsigned int debug;
1007 1008
1008 printk(KERN_DEBUG "SDBBP EJTAG debug exception - not handled yet, just ignored!\n"); 1009 printk(KERN_DEBUG "SDBBP EJTAG debug exception - not handled yet, just ignored!\n");
1009 depc = read_c0_depc(); 1010 depc = read_c0_depc();
1010 debug = read_c0_debug(); 1011 debug = read_c0_debug();
1011 printk(KERN_DEBUG "c0_depc = %0*lx, DEBUG = %08x\n", field, depc, debug); 1012 printk(KERN_DEBUG "c0_depc = %0*lx, DEBUG = %08x\n", field, depc, debug);
1012 if (debug & 0x80000000) { 1013 if (debug & 0x80000000) {
1013 /* 1014 /*
1014 * In branch delay slot. 1015 * In branch delay slot.
1015 * We cheat a little bit here and use EPC to calculate the 1016 * We cheat a little bit here and use EPC to calculate the
1016 * debug return address (DEPC). EPC is restored after the 1017 * debug return address (DEPC). EPC is restored after the
1017 * calculation. 1018 * calculation.
1018 */ 1019 */
1019 old_epc = regs->cp0_epc; 1020 old_epc = regs->cp0_epc;
1020 regs->cp0_epc = depc; 1021 regs->cp0_epc = depc;
1021 __compute_return_epc(regs); 1022 __compute_return_epc(regs);
1022 depc = regs->cp0_epc; 1023 depc = regs->cp0_epc;
1023 regs->cp0_epc = old_epc; 1024 regs->cp0_epc = old_epc;
1024 } else 1025 } else
1025 depc += 4; 1026 depc += 4;
1026 write_c0_depc(depc); 1027 write_c0_depc(depc);
1027 1028
1028 #if 0 1029 #if 0
1029 printk(KERN_DEBUG "\n\n----- Enable EJTAG single stepping ----\n\n"); 1030 printk(KERN_DEBUG "\n\n----- Enable EJTAG single stepping ----\n\n");
1030 write_c0_debug(debug | 0x100); 1031 write_c0_debug(debug | 0x100);
1031 #endif 1032 #endif
1032 } 1033 }
1033 1034
1034 /* 1035 /*
1035 * NMI exception handler. 1036 * NMI exception handler.
1036 */ 1037 */
1037 void nmi_exception_handler(struct pt_regs *regs) 1038 void nmi_exception_handler(struct pt_regs *regs)
1038 { 1039 {
1039 #ifdef CONFIG_MIPS_MT_SMTC 1040 #ifdef CONFIG_MIPS_MT_SMTC
1040 unsigned long dvpret = dvpe(); 1041 unsigned long dvpret = dvpe();
1041 bust_spinlocks(1); 1042 bust_spinlocks(1);
1042 printk("NMI taken!!!!\n"); 1043 printk("NMI taken!!!!\n");
1043 mips_mt_regdump(dvpret); 1044 mips_mt_regdump(dvpret);
1044 #else 1045 #else
1045 bust_spinlocks(1); 1046 bust_spinlocks(1);
1046 printk("NMI taken!!!!\n"); 1047 printk("NMI taken!!!!\n");
1047 #endif /* CONFIG_MIPS_MT_SMTC */ 1048 #endif /* CONFIG_MIPS_MT_SMTC */
1048 die("NMI", regs); 1049 die("NMI", regs);
1049 while(1) ; 1050 while(1) ;
1050 } 1051 }
1051 1052
1052 #define VECTORSPACING 0x100 /* for EI/VI mode */ 1053 #define VECTORSPACING 0x100 /* for EI/VI mode */
1053 1054
1054 unsigned long ebase; 1055 unsigned long ebase;
1055 unsigned long exception_handlers[32]; 1056 unsigned long exception_handlers[32];
1056 unsigned long vi_handlers[64]; 1057 unsigned long vi_handlers[64];
1057 1058
1058 /* 1059 /*
1059 * As a side effect of the way this is implemented, we're limited 1060 * As a side effect of the way this is implemented, we're limited
1060 * to interrupt handlers in the address range from 1061 * to interrupt handlers in the address range from
1061 * KSEG0 <= x < KSEG0 + 256MB on the Nevada. Oh well ... 1062 * KSEG0 <= x < KSEG0 + 256MB on the Nevada. Oh well ...
1062 */ 1063 */
1063 void *set_except_vector(int n, void *addr) 1064 void *set_except_vector(int n, void *addr)
1064 { 1065 {
1065 unsigned long handler = (unsigned long) addr; 1066 unsigned long handler = (unsigned long) addr;
1066 unsigned long old_handler = exception_handlers[n]; 1067 unsigned long old_handler = exception_handlers[n];
1067 1068
1068 exception_handlers[n] = handler; 1069 exception_handlers[n] = handler;
1069 if (n == 0 && cpu_has_divec) { 1070 if (n == 0 && cpu_has_divec) {
1070 *(volatile u32 *)(ebase + 0x200) = 0x08000000 | 1071 *(volatile u32 *)(ebase + 0x200) = 0x08000000 |
1071 (0x03ffffff & (handler >> 2)); 1072 (0x03ffffff & (handler >> 2));
1072 flush_icache_range(ebase + 0x200, ebase + 0x204); 1073 flush_icache_range(ebase + 0x200, ebase + 0x204);
1073 } 1074 }
1074 return (void *)old_handler; 1075 return (void *)old_handler;
1075 } 1076 }
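
The word written at ebase + 0x200 above is a MIPS "j" instruction built by hand: opcode 000010 in the top six bits, and the handler address shifted right by two in the low 26 bits (hence the 256MB-segment limit noted in the comment). A standalone sketch of that encoding, with a hypothetical handler address for illustration:

#include <stdio.h>
#include <stdint.h>

/* Encode "j target": opcode 000010 in bits 31..26, target bits 27..2
 * in the low 26 bits. The two low address bits are implied zero. */
static uint32_t mips_encode_j(uint32_t target)
{
        return 0x08000000u | ((target >> 2) & 0x03ffffffu);
}

int main(void)
{
        uint32_t handler = 0x80001000u; /* hypothetical KSEG0 handler */

        printf("j 0x%08x -> 0x%08x\n", handler, mips_encode_j(handler));
        return 0;
}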
1076 1077
1077 #ifdef CONFIG_CPU_MIPSR2_SRS 1078 #ifdef CONFIG_CPU_MIPSR2_SRS
1078 /* 1079 /*
1079 * MIPSR2 shadow register set allocation 1080 * MIPSR2 shadow register set allocation
1080 * FIXME: SMP... 1081 * FIXME: SMP...
1081 */ 1082 */
1082 1083
1083 static struct shadow_registers { 1084 static struct shadow_registers {
1084 /* 1085 /*
1085 * Number of shadow register sets supported 1086 * Number of shadow register sets supported
1086 */ 1087 */
1087 unsigned long sr_supported; 1088 unsigned long sr_supported;
1088 /* 1089 /*
1089 * Bitmap of allocated shadow registers 1090 * Bitmap of allocated shadow registers
1090 */ 1091 */
1091 unsigned long sr_allocated; 1092 unsigned long sr_allocated;
1092 } shadow_registers; 1093 } shadow_registers;
1093 1094
1094 static void mips_srs_init(void) 1095 static void mips_srs_init(void)
1095 { 1096 {
1096 shadow_registers.sr_supported = ((read_c0_srsctl() >> 26) & 0x0f) + 1; 1097 shadow_registers.sr_supported = ((read_c0_srsctl() >> 26) & 0x0f) + 1;
1097 printk(KERN_INFO "%ld MIPSR2 register sets available\n", 1098 printk(KERN_INFO "%ld MIPSR2 register sets available\n",
1098 shadow_registers.sr_supported); 1099 shadow_registers.sr_supported);
1099 shadow_registers.sr_allocated = 1; /* Set 0 used by kernel */ 1100 shadow_registers.sr_allocated = 1; /* Set 0 used by kernel */
1100 } 1101 }
1101 1102
1102 int mips_srs_max(void) 1103 int mips_srs_max(void)
1103 { 1104 {
1104 return shadow_registers.sr_supported; 1105 return shadow_registers.sr_supported;
1105 } 1106 }
1106 1107
1107 int mips_srs_alloc(void) 1108 int mips_srs_alloc(void)
1108 { 1109 {
1109 struct shadow_registers *sr = &shadow_registers; 1110 struct shadow_registers *sr = &shadow_registers;
1110 int set; 1111 int set;
1111 1112
1112 again: 1113 again:
1113 set = find_first_zero_bit(&sr->sr_allocated, sr->sr_supported); 1114 set = find_first_zero_bit(&sr->sr_allocated, sr->sr_supported);
1114 if (set >= sr->sr_supported) 1115 if (set >= sr->sr_supported)
1115 return -1; 1116 return -1;
1116 1117
1117 if (test_and_set_bit(set, &sr->sr_allocated)) 1118 if (test_and_set_bit(set, &sr->sr_allocated))
1118 goto again; 1119 goto again;
1119 1120
1120 return set; 1121 return set;
1121 } 1122 }
1122 1123
1123 void mips_srs_free(int set) 1124 void mips_srs_free(int set)
1124 { 1125 {
1125 struct shadow_registers *sr = &shadow_registers; 1126 struct shadow_registers *sr = &shadow_registers;
1126 1127
1127 clear_bit(set, &sr->sr_allocated); 1128 clear_bit(set, &sr->sr_allocated);
1128 } 1129 }
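
mips_srs_alloc() above claims a shadow register set lock-free: it scans for a clear bit with find_first_zero_bit(), claims it with the atomic test_and_set_bit(), and retries from the top if another CPU set the same bit in between. Below is a minimal userspace sketch of that allocate/retry pattern, with GCC __atomic builtins standing in for the kernel bit operations (names and values are illustrative):

#include <stdio.h>

static unsigned long sr_allocated = 1UL;  /* set 0 reserved, as in the kernel */
static const int sr_supported = 4;        /* pretend SRSCtl.HSS reported 4 sets */

static int srs_alloc(void)
{
        for (;;) {
                unsigned long snap = __atomic_load_n(&sr_allocated, __ATOMIC_RELAXED);
                int set;

                /* stand-in for find_first_zero_bit() on a single word */
                for (set = 0; set < sr_supported; set++)
                        if (!(snap & (1UL << set)))
                                break;
                if (set >= sr_supported)
                        return -1;        /* every set is in use */

                /* stand-in for test_and_set_bit(); retry if we lost the race */
                if (!(__atomic_fetch_or(&sr_allocated, 1UL << set,
                                        __ATOMIC_ACQ_REL) & (1UL << set)))
                        return set;
        }
}

int main(void)
{
        printf("allocated shadow set %d\n", srs_alloc());  /* expect 1 */
        return 0;
}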
1129 1130
1130 static asmlinkage void do_default_vi(void) 1131 static asmlinkage void do_default_vi(void)
1131 { 1132 {
1132 show_regs(get_irq_regs()); 1133 show_regs(get_irq_regs());
1133 panic("Caught unexpected vectored interrupt."); 1134 panic("Caught unexpected vectored interrupt.");
1134 } 1135 }
1135 1136
1136 static void *set_vi_srs_handler(int n, vi_handler_t addr, int srs) 1137 static void *set_vi_srs_handler(int n, vi_handler_t addr, int srs)
1137 { 1138 {
1138 unsigned long handler; 1139 unsigned long handler;
1139 unsigned long old_handler = vi_handlers[n]; 1140 unsigned long old_handler = vi_handlers[n];
1140 u32 *w; 1141 u32 *w;
1141 unsigned char *b; 1142 unsigned char *b;
1142 1143
1143 if (!cpu_has_veic && !cpu_has_vint) 1144 if (!cpu_has_veic && !cpu_has_vint)
1144 BUG(); 1145 BUG();
1145 1146
1146 if (addr == NULL) { 1147 if (addr == NULL) {
1147 handler = (unsigned long) do_default_vi; 1148 handler = (unsigned long) do_default_vi;
1148 srs = 0; 1149 srs = 0;
1149 } else 1150 } else
1150 handler = (unsigned long) addr; 1151 handler = (unsigned long) addr;
1151 vi_handlers[n] = (unsigned long) addr; 1152 vi_handlers[n] = (unsigned long) addr;
1152 1153
1153 b = (unsigned char *)(ebase + 0x200 + n*VECTORSPACING); 1154 b = (unsigned char *)(ebase + 0x200 + n*VECTORSPACING);
1154 1155
1155 if (srs >= mips_srs_max()) 1156 if (srs >= mips_srs_max())
1156 panic("Shadow register set %d not supported", srs); 1157 panic("Shadow register set %d not supported", srs);
1157 1158
1158 if (cpu_has_veic) { 1159 if (cpu_has_veic) {
1159 if (board_bind_eic_interrupt) 1160 if (board_bind_eic_interrupt)
1160 board_bind_eic_interrupt (n, srs); 1161 board_bind_eic_interrupt (n, srs);
1161 } else if (cpu_has_vint) { 1162 } else if (cpu_has_vint) {
1162 /* SRSMap is only defined if shadow sets are implemented */ 1163 /* SRSMap is only defined if shadow sets are implemented */
1163 if (mips_srs_max() > 1) 1164 if (mips_srs_max() > 1)
1164 change_c0_srsmap (0xf << n*4, srs << n*4); 1165 change_c0_srsmap (0xf << n*4, srs << n*4);
1165 } 1166 }
1166 1167
1167 if (srs == 0) { 1168 if (srs == 0) {
1168 /* 1169 /*
1169 * If no shadow set is selected then use the default handler 1170 * If no shadow set is selected then use the default handler
1170 * that does normal register saving and a standard interrupt exit 1171 * that does normal register saving and a standard interrupt exit
1171 */ 1172 */
1172 1173
1173 extern char except_vec_vi, except_vec_vi_lui; 1174 extern char except_vec_vi, except_vec_vi_lui;
1174 extern char except_vec_vi_ori, except_vec_vi_end; 1175 extern char except_vec_vi_ori, except_vec_vi_end;
1175 #ifdef CONFIG_MIPS_MT_SMTC 1176 #ifdef CONFIG_MIPS_MT_SMTC
1176 /* 1177 /*
1177 * We need to provide the SMTC vectored interrupt handler 1178 * We need to provide the SMTC vectored interrupt handler
1178 * not only with the address of the handler, but with the 1179 * not only with the address of the handler, but with the
1179 * Status.IM bit to be masked before going there. 1180 * Status.IM bit to be masked before going there.
1180 */ 1181 */
1181 extern char except_vec_vi_mori; 1182 extern char except_vec_vi_mori;
1182 const int mori_offset = &except_vec_vi_mori - &except_vec_vi; 1183 const int mori_offset = &except_vec_vi_mori - &except_vec_vi;
1183 #endif /* CONFIG_MIPS_MT_SMTC */ 1184 #endif /* CONFIG_MIPS_MT_SMTC */
1184 const int handler_len = &except_vec_vi_end - &except_vec_vi; 1185 const int handler_len = &except_vec_vi_end - &except_vec_vi;
1185 const int lui_offset = &except_vec_vi_lui - &except_vec_vi; 1186 const int lui_offset = &except_vec_vi_lui - &except_vec_vi;
1186 const int ori_offset = &except_vec_vi_ori - &except_vec_vi; 1187 const int ori_offset = &except_vec_vi_ori - &except_vec_vi;
1187 1188
1188 if (handler_len > VECTORSPACING) { 1189 if (handler_len > VECTORSPACING) {
1189 /* 1190 /*
1190 * Sigh... panicking won't help as the console 1191 * Sigh... panicking won't help as the console
1191 * is probably not configured :( 1192 * is probably not configured :(
1192 */ 1193 */
1193 panic ("VECTORSPACING too small"); 1194 panic ("VECTORSPACING too small");
1194 } 1195 }
1195 1196
1196 memcpy (b, &except_vec_vi, handler_len); 1197 memcpy (b, &except_vec_vi, handler_len);
1197 #ifdef CONFIG_MIPS_MT_SMTC 1198 #ifdef CONFIG_MIPS_MT_SMTC
1198 BUG_ON(n > 7); /* Vector index %d exceeds SMTC maximum. */ 1199 BUG_ON(n > 7); /* Vector index %d exceeds SMTC maximum. */
1199 1200
1200 w = (u32 *)(b + mori_offset); 1201 w = (u32 *)(b + mori_offset);
1201 *w = (*w & 0xffff0000) | (0x100 << n); 1202 *w = (*w & 0xffff0000) | (0x100 << n);
1202 #endif /* CONFIG_MIPS_MT_SMTC */ 1203 #endif /* CONFIG_MIPS_MT_SMTC */
1203 w = (u32 *)(b + lui_offset); 1204 w = (u32 *)(b + lui_offset);
1204 *w = (*w & 0xffff0000) | (((u32)handler >> 16) & 0xffff); 1205 *w = (*w & 0xffff0000) | (((u32)handler >> 16) & 0xffff);
1205 w = (u32 *)(b + ori_offset); 1206 w = (u32 *)(b + ori_offset);
1206 *w = (*w & 0xffff0000) | ((u32)handler & 0xffff); 1207 *w = (*w & 0xffff0000) | ((u32)handler & 0xffff);
1207 flush_icache_range((unsigned long)b, (unsigned long)(b+handler_len)); 1208 flush_icache_range((unsigned long)b, (unsigned long)(b+handler_len));
1208 } 1209 }
1209 else { 1210 else {
1210 /* 1211 /*
1211 * In other cases jump directly to the interrupt handler 1212 * In other cases jump directly to the interrupt handler
1212 * 1213 *
1213 * It is the handler's responsibility to save registers if required 1214 * It is the handler's responsibility to save registers if required
1214 * (e.g. hi/lo) and return from the exception using "eret" 1215 * (e.g. hi/lo) and return from the exception using "eret"
1215 */ 1216 */
1216 w = (u32 *)b; 1217 w = (u32 *)b;
1217 *w++ = 0x08000000 | (((u32)handler >> 2) & 0x03fffff); /* j handler */ 1218 *w++ = 0x08000000 | (((u32)handler >> 2) & 0x03fffff); /* j handler */
1218 *w = 0; 1219 *w = 0;
1219 flush_icache_range((unsigned long)b, (unsigned long)(b+8)); 1220 flush_icache_range((unsigned long)b, (unsigned long)(b+8));
1220 } 1221 }
1221 1222
1222 return (void *)old_handler; 1223 return (void *)old_handler;
1223 } 1224 }
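
When a handler does not fit in its vector slot, the code above patches a stub instead: the 32-bit handler address is split across a lui/ori instruction pair, each carrying a 16-bit immediate in its low half. The split in isolation, as a sketch (the address used is hypothetical):

#include <stdio.h>
#include <stdint.h>

/* Split a 32-bit handler address into the two 16-bit immediates the
 * code above patches into the stub's lui and ori instructions. */
static void split_for_lui_ori(uint32_t handler, uint16_t *hi, uint16_t *lo)
{
        *hi = (handler >> 16) & 0xffff;         /* lui immediate */
        *lo = handler & 0xffff;                 /* ori immediate */
}

int main(void)
{
        uint16_t hi, lo;

        split_for_lui_ori(0x80103abcu, &hi, &lo);       /* hypothetical address */
        printf("lui 0x%04x / ori 0x%04x\n", hi, lo);
        return 0;
}

Because ori zero-extends its immediate, the pair reconstructs the address exactly; no addiu-style carry correction is needed.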
1224 1225
1225 void *set_vi_handler(int n, vi_handler_t addr) 1226 void *set_vi_handler(int n, vi_handler_t addr)
1226 { 1227 {
1227 return set_vi_srs_handler(n, addr, 0); 1228 return set_vi_srs_handler(n, addr, 0);
1228 } 1229 }
1229 1230
1230 #else 1231 #else
1231 1232
1232 static inline void mips_srs_init(void) 1233 static inline void mips_srs_init(void)
1233 { 1234 {
1234 } 1235 }
1235 1236
1236 #endif /* CONFIG_CPU_MIPSR2_SRS */ 1237 #endif /* CONFIG_CPU_MIPSR2_SRS */
1237 1238
1238 /* 1239 /*
1239 * This is used by native signal handling 1240 * This is used by native signal handling
1240 */ 1241 */
1241 asmlinkage int (*save_fp_context)(struct sigcontext __user *sc); 1242 asmlinkage int (*save_fp_context)(struct sigcontext __user *sc);
1242 asmlinkage int (*restore_fp_context)(struct sigcontext __user *sc); 1243 asmlinkage int (*restore_fp_context)(struct sigcontext __user *sc);
1243 1244
1244 extern asmlinkage int _save_fp_context(struct sigcontext __user *sc); 1245 extern asmlinkage int _save_fp_context(struct sigcontext __user *sc);
1245 extern asmlinkage int _restore_fp_context(struct sigcontext __user *sc); 1246 extern asmlinkage int _restore_fp_context(struct sigcontext __user *sc);
1246 1247
1247 extern asmlinkage int fpu_emulator_save_context(struct sigcontext __user *sc); 1248 extern asmlinkage int fpu_emulator_save_context(struct sigcontext __user *sc);
1248 extern asmlinkage int fpu_emulator_restore_context(struct sigcontext __user *sc); 1249 extern asmlinkage int fpu_emulator_restore_context(struct sigcontext __user *sc);
1249 1250
1250 #ifdef CONFIG_SMP 1251 #ifdef CONFIG_SMP
1251 static int smp_save_fp_context(struct sigcontext __user *sc) 1252 static int smp_save_fp_context(struct sigcontext __user *sc)
1252 { 1253 {
1253 return raw_cpu_has_fpu 1254 return raw_cpu_has_fpu
1254 ? _save_fp_context(sc) 1255 ? _save_fp_context(sc)
1255 : fpu_emulator_save_context(sc); 1256 : fpu_emulator_save_context(sc);
1256 } 1257 }
1257 1258
1258 static int smp_restore_fp_context(struct sigcontext __user *sc) 1259 static int smp_restore_fp_context(struct sigcontext __user *sc)
1259 { 1260 {
1260 return raw_cpu_has_fpu 1261 return raw_cpu_has_fpu
1261 ? _restore_fp_context(sc) 1262 ? _restore_fp_context(sc)
1262 : fpu_emulator_restore_context(sc); 1263 : fpu_emulator_restore_context(sc);
1263 } 1264 }
1264 #endif 1265 #endif
1265 1266
1266 static inline void signal_init(void) 1267 static inline void signal_init(void)
1267 { 1268 {
1268 #ifdef CONFIG_SMP 1269 #ifdef CONFIG_SMP
1269 /* For now just do the cpu_has_fpu check when the functions are invoked */ 1270 /* For now just do the cpu_has_fpu check when the functions are invoked */
1270 save_fp_context = smp_save_fp_context; 1271 save_fp_context = smp_save_fp_context;
1271 restore_fp_context = smp_restore_fp_context; 1272 restore_fp_context = smp_restore_fp_context;
1272 #else 1273 #else
1273 if (cpu_has_fpu) { 1274 if (cpu_has_fpu) {
1274 save_fp_context = _save_fp_context; 1275 save_fp_context = _save_fp_context;
1275 restore_fp_context = _restore_fp_context; 1276 restore_fp_context = _restore_fp_context;
1276 } else { 1277 } else {
1277 save_fp_context = fpu_emulator_save_context; 1278 save_fp_context = fpu_emulator_save_context;
1278 restore_fp_context = fpu_emulator_restore_context; 1279 restore_fp_context = fpu_emulator_restore_context;
1279 } 1280 }
1280 #endif 1281 #endif
1281 } 1282 }
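
signal_init() binds save_fp_context/restore_fp_context once at boot, so the per-signal hot path pays only an indirect call instead of re-testing cpu_has_fpu every time. The same init-time dispatch, reduced to a standalone sketch (names are illustrative, not kernel API):

#include <stdio.h>

static int hw_save(void)  { return printf("hardware FP save\n"); }
static int emu_save(void) { return printf("emulated FP save\n"); }

/* bound once at init, called many times afterwards */
static int (*save_ctx)(void);

static void my_signal_init(int have_fpu)        /* have_fpu mimics cpu_has_fpu */
{
        save_ctx = have_fpu ? hw_save : emu_save;
}

int main(void)
{
        my_signal_init(0);
        save_ctx();             /* prints "emulated FP save" */
        return 0;
}

The SMP variant above deliberately keeps the runtime raw_cpu_has_fpu test inside the bound function instead, as the comment in signal_init() notes.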
1282 1283
1283 #ifdef CONFIG_MIPS32_COMPAT 1284 #ifdef CONFIG_MIPS32_COMPAT
1284 1285
1285 /* 1286 /*
1286 * This is used by 32-bit signal stuff on the 64-bit kernel 1287 * This is used by 32-bit signal stuff on the 64-bit kernel
1287 */ 1288 */
1288 asmlinkage int (*save_fp_context32)(struct sigcontext32 __user *sc); 1289 asmlinkage int (*save_fp_context32)(struct sigcontext32 __user *sc);
1289 asmlinkage int (*restore_fp_context32)(struct sigcontext32 __user *sc); 1290 asmlinkage int (*restore_fp_context32)(struct sigcontext32 __user *sc);
1290 1291
1291 extern asmlinkage int _save_fp_context32(struct sigcontext32 __user *sc); 1292 extern asmlinkage int _save_fp_context32(struct sigcontext32 __user *sc);
1292 extern asmlinkage int _restore_fp_context32(struct sigcontext32 __user *sc); 1293 extern asmlinkage int _restore_fp_context32(struct sigcontext32 __user *sc);
1293 1294
1294 extern asmlinkage int fpu_emulator_save_context32(struct sigcontext32 __user *sc); 1295 extern asmlinkage int fpu_emulator_save_context32(struct sigcontext32 __user *sc);
1295 extern asmlinkage int fpu_emulator_restore_context32(struct sigcontext32 __user *sc); 1296 extern asmlinkage int fpu_emulator_restore_context32(struct sigcontext32 __user *sc);
1296 1297
1297 static inline void signal32_init(void) 1298 static inline void signal32_init(void)
1298 { 1299 {
1299 if (cpu_has_fpu) { 1300 if (cpu_has_fpu) {
1300 save_fp_context32 = _save_fp_context32; 1301 save_fp_context32 = _save_fp_context32;
1301 restore_fp_context32 = _restore_fp_context32; 1302 restore_fp_context32 = _restore_fp_context32;
1302 } else { 1303 } else {
1303 save_fp_context32 = fpu_emulator_save_context32; 1304 save_fp_context32 = fpu_emulator_save_context32;
1304 restore_fp_context32 = fpu_emulator_restore_context32; 1305 restore_fp_context32 = fpu_emulator_restore_context32;
1305 } 1306 }
1306 } 1307 }
1307 #endif 1308 #endif
1308 1309
1309 extern void cpu_cache_init(void); 1310 extern void cpu_cache_init(void);
1310 extern void tlb_init(void); 1311 extern void tlb_init(void);
1311 extern void flush_tlb_handlers(void); 1312 extern void flush_tlb_handlers(void);
1312 1313
1313 void __init per_cpu_trap_init(void) 1314 void __init per_cpu_trap_init(void)
1314 { 1315 {
1315 unsigned int cpu = smp_processor_id(); 1316 unsigned int cpu = smp_processor_id();
1316 unsigned int status_set = ST0_CU0; 1317 unsigned int status_set = ST0_CU0;
1317 #ifdef CONFIG_MIPS_MT_SMTC 1318 #ifdef CONFIG_MIPS_MT_SMTC
1318 int secondaryTC = 0; 1319 int secondaryTC = 0;
1319 int bootTC = (cpu == 0); 1320 int bootTC = (cpu == 0);
1320 1321
1321 /* 1322 /*
1322 * Only do per_cpu_trap_init() for the first TC of each VPE. 1323 * Only do per_cpu_trap_init() for the first TC of each VPE.
1323 * Note that this hack assumes that the SMTC init code 1324 * Note that this hack assumes that the SMTC init code
1324 * assigns TCs consecutively and in ascending order. 1325 * assigns TCs consecutively and in ascending order.
1325 */ 1326 */
1326 1327
1327 if (((read_c0_tcbind() & TCBIND_CURTC) != 0) && 1328 if (((read_c0_tcbind() & TCBIND_CURTC) != 0) &&
1328 ((read_c0_tcbind() & TCBIND_CURVPE) == cpu_data[cpu - 1].vpe_id)) 1329 ((read_c0_tcbind() & TCBIND_CURVPE) == cpu_data[cpu - 1].vpe_id))
1329 secondaryTC = 1; 1330 secondaryTC = 1;
1330 #endif /* CONFIG_MIPS_MT_SMTC */ 1331 #endif /* CONFIG_MIPS_MT_SMTC */
1331 1332
1332 /* 1333 /*
1333 * Disable coprocessors and select 32-bit or 64-bit addressing 1334 * Disable coprocessors and select 32-bit or 64-bit addressing
1334 * and the 16/32 or 32/32 FPR register model. Reset the BEV 1335 * and the 16/32 or 32/32 FPR register model. Reset the BEV
1335 * flag that some firmware may have left set and the TS bit (for 1336 * flag that some firmware may have left set and the TS bit (for
1336 * IP27). Set XX for ISA IV code to work. 1337 * IP27). Set XX for ISA IV code to work.
1337 */ 1338 */
1338 #ifdef CONFIG_64BIT 1339 #ifdef CONFIG_64BIT
1339 status_set |= ST0_FR|ST0_KX|ST0_SX|ST0_UX; 1340 status_set |= ST0_FR|ST0_KX|ST0_SX|ST0_UX;
1340 #endif 1341 #endif
1341 if (current_cpu_data.isa_level == MIPS_CPU_ISA_IV) 1342 if (current_cpu_data.isa_level == MIPS_CPU_ISA_IV)
1342 status_set |= ST0_XX; 1343 status_set |= ST0_XX;
1343 change_c0_status(ST0_CU|ST0_MX|ST0_RE|ST0_FR|ST0_BEV|ST0_TS|ST0_KX|ST0_SX|ST0_UX, 1344 change_c0_status(ST0_CU|ST0_MX|ST0_RE|ST0_FR|ST0_BEV|ST0_TS|ST0_KX|ST0_SX|ST0_UX,
1344 status_set); 1345 status_set);
1345 1346
1346 if (cpu_has_dsp) 1347 if (cpu_has_dsp)
1347 set_c0_status(ST0_MX); 1348 set_c0_status(ST0_MX);
1348 1349
1349 #ifdef CONFIG_CPU_MIPSR2 1350 #ifdef CONFIG_CPU_MIPSR2
1350 if (cpu_has_mips_r2) { 1351 if (cpu_has_mips_r2) {
1351 unsigned int enable = 0x0000000f; 1352 unsigned int enable = 0x0000000f;
1352 1353
1353 if (cpu_has_userlocal) 1354 if (cpu_has_userlocal)
1354 enable |= (1 << 29); 1355 enable |= (1 << 29);
1355 1356
1356 write_c0_hwrena(enable); 1357 write_c0_hwrena(enable);
1357 } 1358 }
1358 #endif 1359 #endif
1359 1360
1360 #ifdef CONFIG_MIPS_MT_SMTC 1361 #ifdef CONFIG_MIPS_MT_SMTC
1361 if (!secondaryTC) { 1362 if (!secondaryTC) {
1362 #endif /* CONFIG_MIPS_MT_SMTC */ 1363 #endif /* CONFIG_MIPS_MT_SMTC */
1363 1364
1364 if (cpu_has_veic || cpu_has_vint) { 1365 if (cpu_has_veic || cpu_has_vint) {
1365 write_c0_ebase (ebase); 1366 write_c0_ebase (ebase);
1366 /* Setting vector spacing enables EI/VI mode */ 1367 /* Setting vector spacing enables EI/VI mode */
1367 change_c0_intctl (0x3e0, VECTORSPACING); 1368 change_c0_intctl (0x3e0, VECTORSPACING);
1368 } 1369 }
1369 if (cpu_has_divec) { 1370 if (cpu_has_divec) {
1370 if (cpu_has_mipsmt) { 1371 if (cpu_has_mipsmt) {
1371 unsigned int vpflags = dvpe(); 1372 unsigned int vpflags = dvpe();
1372 set_c0_cause(CAUSEF_IV); 1373 set_c0_cause(CAUSEF_IV);
1373 evpe(vpflags); 1374 evpe(vpflags);
1374 } else 1375 } else
1375 set_c0_cause(CAUSEF_IV); 1376 set_c0_cause(CAUSEF_IV);
1376 } 1377 }
1377 1378
1378 /* 1379 /*
1379 * Before R2 both interrupt numbers were fixed to 7, so on R2 only: 1380 * Before R2 both interrupt numbers were fixed to 7, so on R2 only:
1380 * 1381 *
1381 * o read IntCtl.IPTI to determine the timer interrupt 1382 * o read IntCtl.IPTI to determine the timer interrupt
1382 * o read IntCtl.IPPCI to determine the performance counter interrupt 1383 * o read IntCtl.IPPCI to determine the performance counter interrupt
1383 */ 1384 */
1384 if (cpu_has_mips_r2) { 1385 if (cpu_has_mips_r2) {
1385 cp0_compare_irq = (read_c0_intctl () >> 29) & 7; 1386 cp0_compare_irq = (read_c0_intctl () >> 29) & 7;
1386 cp0_perfcount_irq = (read_c0_intctl () >> 26) & 7; 1387 cp0_perfcount_irq = (read_c0_intctl () >> 26) & 7;
1387 if (cp0_perfcount_irq == cp0_compare_irq) 1388 if (cp0_perfcount_irq == cp0_compare_irq)
1388 cp0_perfcount_irq = -1; 1389 cp0_perfcount_irq = -1;
1389 } else { 1390 } else {
1390 cp0_compare_irq = CP0_LEGACY_COMPARE_IRQ; 1391 cp0_compare_irq = CP0_LEGACY_COMPARE_IRQ;
1391 cp0_perfcount_irq = -1; 1392 cp0_perfcount_irq = -1;
1392 } 1393 }
1393 1394
1394 #ifdef CONFIG_MIPS_MT_SMTC 1395 #ifdef CONFIG_MIPS_MT_SMTC
1395 } 1396 }
1396 #endif /* CONFIG_MIPS_MT_SMTC */ 1397 #endif /* CONFIG_MIPS_MT_SMTC */
1397 1398
1398 cpu_data[cpu].asid_cache = ASID_FIRST_VERSION; 1399 cpu_data[cpu].asid_cache = ASID_FIRST_VERSION;
1399 TLBMISS_HANDLER_SETUP(); 1400 TLBMISS_HANDLER_SETUP();
1400 1401
1401 atomic_inc(&init_mm.mm_count); 1402 atomic_inc(&init_mm.mm_count);
1402 current->active_mm = &init_mm; 1403 current->active_mm = &init_mm;
1403 BUG_ON(current->mm); 1404 BUG_ON(current->mm);
1404 enter_lazy_tlb(&init_mm, current); 1405 enter_lazy_tlb(&init_mm, current);
1405 1406
1406 #ifdef CONFIG_MIPS_MT_SMTC 1407 #ifdef CONFIG_MIPS_MT_SMTC
1407 if (bootTC) { 1408 if (bootTC) {
1408 #endif /* CONFIG_MIPS_MT_SMTC */ 1409 #endif /* CONFIG_MIPS_MT_SMTC */
1409 cpu_cache_init(); 1410 cpu_cache_init();
1410 tlb_init(); 1411 tlb_init();
1411 #ifdef CONFIG_MIPS_MT_SMTC 1412 #ifdef CONFIG_MIPS_MT_SMTC
1412 } else if (!secondaryTC) { 1413 } else if (!secondaryTC) {
1413 /* 1414 /*
1414 * First TC in non-boot VPE must do subset of tlb_init() 1415 * First TC in non-boot VPE must do subset of tlb_init()
1415 * for MMU control registers. 1416 * for MMU control registers.
1416 */ 1417 */
1417 write_c0_pagemask(PM_DEFAULT_MASK); 1418 write_c0_pagemask(PM_DEFAULT_MASK);
1418 write_c0_wired(0); 1419 write_c0_wired(0);
1419 } 1420 }
1420 #endif /* CONFIG_MIPS_MT_SMTC */ 1421 #endif /* CONFIG_MIPS_MT_SMTC */
1421 } 1422 }
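
On R2 cores, per_cpu_trap_init() above reads the timer and performance-counter interrupt lines out of IntCtl: IPTI lives in bits 31..29 and IPPCI in bits 28..26. A sketch of just that decoding, with a made-up register value:

#include <stdio.h>
#include <stdint.h>

static void decode_intctl(uint32_t intctl, int *ipti, int *ippci)
{
        *ipti  = (intctl >> 29) & 7;    /* IntCtl.IPTI, bits 31..29 */
        *ippci = (intctl >> 26) & 7;    /* IntCtl.IPPCI, bits 28..26 */
}

int main(void)
{
        int ipti, ippci;

        decode_intctl(0xe4000000u, &ipti, &ippci);      /* made-up value */
        printf("timer IRQ %d, perfcount IRQ %d\n", ipti, ippci);
        return 0;
}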
1422 1423
1423 /* Install CPU exception handler */ 1424 /* Install CPU exception handler */
1424 void __init set_handler (unsigned long offset, void *addr, unsigned long size) 1425 void __init set_handler (unsigned long offset, void *addr, unsigned long size)
1425 { 1426 {
1426 memcpy((void *)(ebase + offset), addr, size); 1427 memcpy((void *)(ebase + offset), addr, size);
1427 flush_icache_range(ebase + offset, ebase + offset + size); 1428 flush_icache_range(ebase + offset, ebase + offset + size);
1428 } 1429 }
1429 1430
1430 /* Install uncached CPU exception handler */ 1431 /* Install uncached CPU exception handler */
1431 void __init set_uncached_handler (unsigned long offset, void *addr, unsigned long size) 1432 void __init set_uncached_handler (unsigned long offset, void *addr, unsigned long size)
1432 { 1433 {
1433 #ifdef CONFIG_32BIT 1434 #ifdef CONFIG_32BIT
1434 unsigned long uncached_ebase = KSEG1ADDR(ebase); 1435 unsigned long uncached_ebase = KSEG1ADDR(ebase);
1435 #endif 1436 #endif
1436 #ifdef CONFIG_64BIT 1437 #ifdef CONFIG_64BIT
1437 unsigned long uncached_ebase = TO_UNCAC(ebase); 1438 unsigned long uncached_ebase = TO_UNCAC(ebase);
1438 #endif 1439 #endif
1439 1440
1440 memcpy((void *)(uncached_ebase + offset), addr, size); 1441 memcpy((void *)(uncached_ebase + offset), addr, size);
1441 } 1442 }
1442 1443
1443 static int __initdata rdhwr_noopt; 1444 static int __initdata rdhwr_noopt;
1444 static int __init set_rdhwr_noopt(char *str) 1445 static int __init set_rdhwr_noopt(char *str)
1445 { 1446 {
1446 rdhwr_noopt = 1; 1447 rdhwr_noopt = 1;
1447 return 1; 1448 return 1;
1448 } 1449 }
1449 1450
1450 __setup("rdhwr_noopt", set_rdhwr_noopt); 1451 __setup("rdhwr_noopt", set_rdhwr_noopt);
1451 1452
1452 void __init trap_init(void) 1453 void __init trap_init(void)
1453 { 1454 {
1454 extern char except_vec3_generic, except_vec3_r4000; 1455 extern char except_vec3_generic, except_vec3_r4000;
1455 extern char except_vec4; 1456 extern char except_vec4;
1456 unsigned long i; 1457 unsigned long i;
1457 1458
1458 if (cpu_has_veic || cpu_has_vint) 1459 if (cpu_has_veic || cpu_has_vint)
1459 ebase = (unsigned long) alloc_bootmem_low_pages (0x200 + VECTORSPACING*64); 1460 ebase = (unsigned long) alloc_bootmem_low_pages (0x200 + VECTORSPACING*64);
1460 else 1461 else
1461 ebase = CAC_BASE; 1462 ebase = CAC_BASE;
1462 1463
1463 mips_srs_init(); 1464 mips_srs_init();
1464 1465
1465 per_cpu_trap_init(); 1466 per_cpu_trap_init();
1466 1467
1467 /* 1468 /*
1468 * Copy the generic exception handlers to their final destination. 1469 * Copy the generic exception handlers to their final destination.
1469 * This will be overridden later as suitable for a particular 1470 * This will be overridden later as suitable for a particular
1470 * configuration. 1471 * configuration.
1471 */ 1472 */
1472 set_handler(0x180, &except_vec3_generic, 0x80); 1473 set_handler(0x180, &except_vec3_generic, 0x80);
1473 1474
1474 /* 1475 /*
1475 * Setup default vectors 1476 * Setup default vectors
1476 */ 1477 */
1477 for (i = 0; i <= 31; i++) 1478 for (i = 0; i <= 31; i++)
1478 set_except_vector(i, handle_reserved); 1479 set_except_vector(i, handle_reserved);
1479 1480
1480 /* 1481 /*
1481 * Copy the EJTAG debug exception vector handler code to its final 1482 * Copy the EJTAG debug exception vector handler code to its final
1482 * destination. 1483 * destination.
1483 */ 1484 */
1484 if (cpu_has_ejtag && board_ejtag_handler_setup) 1485 if (cpu_has_ejtag && board_ejtag_handler_setup)
1485 board_ejtag_handler_setup (); 1486 board_ejtag_handler_setup ();
1486 1487
1487 /* 1488 /*
1488 * Only some CPUs have the watch exceptions. 1489 * Only some CPUs have the watch exceptions.
1489 */ 1490 */
1490 if (cpu_has_watch) 1491 if (cpu_has_watch)
1491 set_except_vector(23, handle_watch); 1492 set_except_vector(23, handle_watch);
1492 1493
1493 /* 1494 /*
1494 * Initialise interrupt handlers 1495 * Initialise interrupt handlers
1495 */ 1496 */
1496 if (cpu_has_veic || cpu_has_vint) { 1497 if (cpu_has_veic || cpu_has_vint) {
1497 int nvec = cpu_has_veic ? 64 : 8; 1498 int nvec = cpu_has_veic ? 64 : 8;
1498 for (i = 0; i < nvec; i++) 1499 for (i = 0; i < nvec; i++)
1499 set_vi_handler(i, NULL); 1500 set_vi_handler(i, NULL);
1500 } 1501 }
1501 else if (cpu_has_divec) 1502 else if (cpu_has_divec)
1502 set_handler(0x200, &except_vec4, 0x8); 1503 set_handler(0x200, &except_vec4, 0x8);
1503 1504
1504 /* 1505 /*
1505 * Some CPUs can enable/disable cache parity detection, but do 1506 * Some CPUs can enable/disable cache parity detection, but do
1506 * it in different ways. 1507 * it in different ways.
1507 */ 1508 */
1508 parity_protection_init(); 1509 parity_protection_init();
1509 1510
1510 /* 1511 /*
1511 * The Data Bus Errors / Instruction Bus Errors are signaled 1512 * The Data Bus Errors / Instruction Bus Errors are signaled
1512 * by external hardware. Therefore these two exceptions 1513 * by external hardware. Therefore these two exceptions
1513 * may have board specific handlers. 1514 * may have board specific handlers.
1514 */ 1515 */
1515 if (board_be_init) 1516 if (board_be_init)
1516 board_be_init(); 1517 board_be_init();
1517 1518
1518 set_except_vector(0, handle_int); 1519 set_except_vector(0, handle_int);
1519 set_except_vector(1, handle_tlbm); 1520 set_except_vector(1, handle_tlbm);
1520 set_except_vector(2, handle_tlbl); 1521 set_except_vector(2, handle_tlbl);
1521 set_except_vector(3, handle_tlbs); 1522 set_except_vector(3, handle_tlbs);
1522 1523
1523 set_except_vector(4, handle_adel); 1524 set_except_vector(4, handle_adel);
1524 set_except_vector(5, handle_ades); 1525 set_except_vector(5, handle_ades);
1525 1526
1526 set_except_vector(6, handle_ibe); 1527 set_except_vector(6, handle_ibe);
1527 set_except_vector(7, handle_dbe); 1528 set_except_vector(7, handle_dbe);
1528 1529
1529 set_except_vector(8, handle_sys); 1530 set_except_vector(8, handle_sys);
1530 set_except_vector(9, handle_bp); 1531 set_except_vector(9, handle_bp);
1531 set_except_vector(10, rdhwr_noopt ? handle_ri : 1532 set_except_vector(10, rdhwr_noopt ? handle_ri :
1532 (cpu_has_vtag_icache ? 1533 (cpu_has_vtag_icache ?
1533 handle_ri_rdhwr_vivt : handle_ri_rdhwr)); 1534 handle_ri_rdhwr_vivt : handle_ri_rdhwr));
1534 set_except_vector(11, handle_cpu); 1535 set_except_vector(11, handle_cpu);
1535 set_except_vector(12, handle_ov); 1536 set_except_vector(12, handle_ov);
1536 set_except_vector(13, handle_tr); 1537 set_except_vector(13, handle_tr);
1537 1538
1538 if (current_cpu_data.cputype == CPU_R6000 || 1539 if (current_cpu_data.cputype == CPU_R6000 ||
1539 current_cpu_data.cputype == CPU_R6000A) { 1540 current_cpu_data.cputype == CPU_R6000A) {
1540 /* 1541 /*
1541 * The R6000 is the only R-series CPU that features a machine 1542 * The R6000 is the only R-series CPU that features a machine
1542 * check exception (similar to the R4000 cache error) and 1543 * check exception (similar to the R4000 cache error) and
1543 * unaligned ldc1/sdc1 exception. The handlers have not been 1544 * unaligned ldc1/sdc1 exception. The handlers have not been
1544 * written yet. Well, anyway there is no R6000 machine on the 1545 * written yet. Well, anyway there is no R6000 machine on the
1545 * current list of targets for Linux/MIPS. 1546 * current list of targets for Linux/MIPS.
1546 * (Duh, crap, there is someone with a triple R6k machine) 1547 * (Duh, crap, there is someone with a triple R6k machine)
1547 */ 1548 */
1548 //set_except_vector(14, handle_mc); 1549 //set_except_vector(14, handle_mc);
1549 //set_except_vector(15, handle_ndc); 1550 //set_except_vector(15, handle_ndc);
1550 } 1551 }
1551 1552
1552 1553
1553 if (board_nmi_handler_setup) 1554 if (board_nmi_handler_setup)
1554 board_nmi_handler_setup(); 1555 board_nmi_handler_setup();
1555 1556
1556 if (cpu_has_fpu && !cpu_has_nofpuex) 1557 if (cpu_has_fpu && !cpu_has_nofpuex)
1557 set_except_vector(15, handle_fpe); 1558 set_except_vector(15, handle_fpe);
1558 1559
1559 set_except_vector(22, handle_mdmx); 1560 set_except_vector(22, handle_mdmx);
1560 1561
1561 if (cpu_has_mcheck) 1562 if (cpu_has_mcheck)
1562 set_except_vector(24, handle_mcheck); 1563 set_except_vector(24, handle_mcheck);
1563 1564
1564 if (cpu_has_mipsmt) 1565 if (cpu_has_mipsmt)
1565 set_except_vector(25, handle_mt); 1566 set_except_vector(25, handle_mt);
1566 1567
1567 set_except_vector(26, handle_dsp); 1568 set_except_vector(26, handle_dsp);
1568 1569
1569 if (cpu_has_vce) 1570 if (cpu_has_vce)
1570 /* Special exception: R4[04]00 uses also the divec space. */ 1571 /* Special exception: R4[04]00 uses also the divec space. */
1571 memcpy((void *)(CAC_BASE + 0x180), &except_vec3_r4000, 0x100); 1572 memcpy((void *)(CAC_BASE + 0x180), &except_vec3_r4000, 0x100);
1572 else if (cpu_has_4kex) 1573 else if (cpu_has_4kex)
1573 memcpy((void *)(CAC_BASE + 0x180), &except_vec3_generic, 0x80); 1574 memcpy((void *)(CAC_BASE + 0x180), &except_vec3_generic, 0x80);
1574 else 1575 else
1575 memcpy((void *)(CAC_BASE + 0x080), &except_vec3_generic, 0x80); 1576 memcpy((void *)(CAC_BASE + 0x080), &except_vec3_generic, 0x80);
1576 1577
1577 signal_init(); 1578 signal_init();
1578 #ifdef CONFIG_MIPS32_COMPAT 1579 #ifdef CONFIG_MIPS32_COMPAT
1579 signal32_init(); 1580 signal32_init();
1580 #endif 1581 #endif
1581 1582
1582 flush_icache_range(ebase, ebase + 0x400); 1583 flush_icache_range(ebase, ebase + 0x400);
1583 flush_tlb_handlers(); 1584 flush_tlb_handlers();
1584 } 1585 }
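
At heart, trap_init() above fills a 32-entry dispatch table indexed by the CP0 Cause ExcCode field, with handle_reserved as the fallback. The shape of that scheme as a small self-contained model (the kernel patches real vectors in memory; this sketch only models the table):

#include <stdio.h>

#define NR_EXCEPTIONS 32

static void handle_reserved_ex(int code)
{
        printf("reserved exception %d\n", code);
}

static void handle_int_ex(int code)
{
        (void)code;
        printf("interrupt\n");
}

static void (*exc_table[NR_EXCEPTIONS])(int);

static void set_vector(int n, void (*handler)(int))
{
        exc_table[n] = handler;
}

static void dispatch(unsigned int cause)
{
        int code = (cause >> 2) & 0x1f; /* Cause.ExcCode, bits 6..2 */

        exc_table[code](code);
}

int main(void)
{
        int i;

        for (i = 0; i < NR_EXCEPTIONS; i++)
                set_vector(i, handle_reserved_ex);      /* default, as above */
        set_vector(0, handle_int_ex);                   /* ExcCode 0: interrupt */

        dispatch(0);                    /* -> interrupt */
        dispatch(13 << 2);              /* -> reserved exception 13 */
        return 0;
}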
1585 1586
arch/parisc/kernel/traps.c
1 /* 1 /*
2 * linux/arch/parisc/traps.c 2 * linux/arch/parisc/traps.c
3 * 3 *
4 * Copyright (C) 1991, 1992 Linus Torvalds 4 * Copyright (C) 1991, 1992 Linus Torvalds
5 * Copyright (C) 1999, 2000 Philipp Rumpf <prumpf@tux.org> 5 * Copyright (C) 1999, 2000 Philipp Rumpf <prumpf@tux.org>
6 */ 6 */
7 7
8 /* 8 /*
9 * 'Traps.c' handles hardware traps and faults after we have saved some 9 * 'Traps.c' handles hardware traps and faults after we have saved some
10 * state in 'asm.s'. 10 * state in 'asm.s'.
11 */ 11 */
12 12
13 #include <linux/sched.h> 13 #include <linux/sched.h>
14 #include <linux/kernel.h> 14 #include <linux/kernel.h>
15 #include <linux/string.h> 15 #include <linux/string.h>
16 #include <linux/errno.h> 16 #include <linux/errno.h>
17 #include <linux/ptrace.h> 17 #include <linux/ptrace.h>
18 #include <linux/timer.h> 18 #include <linux/timer.h>
19 #include <linux/delay.h> 19 #include <linux/delay.h>
20 #include <linux/mm.h> 20 #include <linux/mm.h>
21 #include <linux/module.h> 21 #include <linux/module.h>
22 #include <linux/smp.h> 22 #include <linux/smp.h>
23 #include <linux/spinlock.h> 23 #include <linux/spinlock.h>
24 #include <linux/init.h> 24 #include <linux/init.h>
25 #include <linux/interrupt.h> 25 #include <linux/interrupt.h>
26 #include <linux/console.h> 26 #include <linux/console.h>
27 #include <linux/kallsyms.h> 27 #include <linux/kallsyms.h>
28 #include <linux/bug.h> 28 #include <linux/bug.h>
29 29
30 #include <asm/assembly.h> 30 #include <asm/assembly.h>
31 #include <asm/system.h> 31 #include <asm/system.h>
32 #include <asm/uaccess.h> 32 #include <asm/uaccess.h>
33 #include <asm/io.h> 33 #include <asm/io.h>
34 #include <asm/irq.h> 34 #include <asm/irq.h>
35 #include <asm/traps.h> 35 #include <asm/traps.h>
36 #include <asm/unaligned.h> 36 #include <asm/unaligned.h>
37 #include <asm/atomic.h> 37 #include <asm/atomic.h>
38 #include <asm/smp.h> 38 #include <asm/smp.h>
39 #include <asm/pdc.h> 39 #include <asm/pdc.h>
40 #include <asm/pdc_chassis.h> 40 #include <asm/pdc_chassis.h>
41 #include <asm/unwind.h> 41 #include <asm/unwind.h>
42 #include <asm/tlbflush.h> 42 #include <asm/tlbflush.h>
43 #include <asm/cacheflush.h> 43 #include <asm/cacheflush.h>
44 44
45 #include "../math-emu/math-emu.h" /* for handle_fpe() */ 45 #include "../math-emu/math-emu.h" /* for handle_fpe() */
46 46
47 #define PRINT_USER_FAULTS /* (turn this on if you want user faults to be */ 47 #define PRINT_USER_FAULTS /* (turn this on if you want user faults to be */
48 /* dumped to the console via printk) */ 48 /* dumped to the console via printk) */
49 49
50 #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK) 50 #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
51 DEFINE_SPINLOCK(pa_dbit_lock); 51 DEFINE_SPINLOCK(pa_dbit_lock);
52 #endif 52 #endif
53 53
54 static int printbinary(char *buf, unsigned long x, int nbits) 54 static int printbinary(char *buf, unsigned long x, int nbits)
55 { 55 {
56 unsigned long mask = 1UL << (nbits - 1); 56 unsigned long mask = 1UL << (nbits - 1);
57 while (mask != 0) { 57 while (mask != 0) {
58 *buf++ = (mask & x ? '1' : '0'); 58 *buf++ = (mask & x ? '1' : '0');
59 mask >>= 1; 59 mask >>= 1;
60 } 60 }
61 *buf = '\0'; 61 *buf = '\0';
62 62
63 return nbits; 63 return nbits;
64 } 64 }
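
printbinary() renders the low nbits of x as a '0'/'1' string, most significant bit first, so each character lines up under the one-letter flag banner printed above it in print_gr()/print_fr(). Used outside the kernel it behaves like this (self-contained copy for illustration):

#include <stdio.h>

/* self-contained copy of printbinary() from above */
static int printbinary(char *buf, unsigned long x, int nbits)
{
        unsigned long mask = 1UL << (nbits - 1);

        while (mask != 0) {
                *buf++ = (mask & x ? '1' : '0');
                mask >>= 1;
        }
        *buf = '\0';

        return nbits;
}

int main(void)
{
        char buf[64];

        printbinary(buf, 0x8000000fUL, 32);
        printf("PSW: %s\n", buf);       /* '1', 27 zeros, then "1111" */
        return 0;
}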
65 65
66 #ifdef CONFIG_64BIT 66 #ifdef CONFIG_64BIT
67 #define RFMT "%016lx" 67 #define RFMT "%016lx"
68 #else 68 #else
69 #define RFMT "%08lx" 69 #define RFMT "%08lx"
70 #endif 70 #endif
71 #define FFMT "%016llx" /* fpregs are 64-bit always */ 71 #define FFMT "%016llx" /* fpregs are 64-bit always */
72 72
73 #define PRINTREGS(lvl,r,f,fmt,x) \ 73 #define PRINTREGS(lvl,r,f,fmt,x) \
74 printk("%s%s%02d-%02d " fmt " " fmt " " fmt " " fmt "\n", \ 74 printk("%s%s%02d-%02d " fmt " " fmt " " fmt " " fmt "\n", \
75 lvl, f, (x), (x+3), (r)[(x)+0], (r)[(x)+1], \ 75 lvl, f, (x), (x+3), (r)[(x)+0], (r)[(x)+1], \
76 (r)[(x)+2], (r)[(x)+3]) 76 (r)[(x)+2], (r)[(x)+3])
77 77
78 static void print_gr(char *level, struct pt_regs *regs) 78 static void print_gr(char *level, struct pt_regs *regs)
79 { 79 {
80 int i; 80 int i;
81 char buf[64]; 81 char buf[64];
82 82
83 printk("%s\n", level); 83 printk("%s\n", level);
84 printk("%s YZrvWESTHLNXBCVMcbcbcbcbOGFRQPDI\n", level); 84 printk("%s YZrvWESTHLNXBCVMcbcbcbcbOGFRQPDI\n", level);
85 printbinary(buf, regs->gr[0], 32); 85 printbinary(buf, regs->gr[0], 32);
86 printk("%sPSW: %s %s\n", level, buf, print_tainted()); 86 printk("%sPSW: %s %s\n", level, buf, print_tainted());
87 87
88 for (i = 0; i < 32; i += 4) 88 for (i = 0; i < 32; i += 4)
89 PRINTREGS(level, regs->gr, "r", RFMT, i); 89 PRINTREGS(level, regs->gr, "r", RFMT, i);
90 } 90 }
91 91
92 static void print_fr(char *level, struct pt_regs *regs) 92 static void print_fr(char *level, struct pt_regs *regs)
93 { 93 {
94 int i; 94 int i;
95 char buf[64]; 95 char buf[64];
96 struct { u32 sw[2]; } s; 96 struct { u32 sw[2]; } s;
97 97
98 /* FR are 64bit everywhere. Need to use asm to get the content 98 /* FR are 64bit everywhere. Need to use asm to get the content
99 * of fpsr/fper1, and we assume that we won't have an FP Identify 99 * of fpsr/fper1, and we assume that we won't have an FP Identify
100 * in our way, otherwise we're screwed. 100 * in our way, otherwise we're screwed.
101 * The fldd is used to restore the T-bit if there was one, as the 101 * The fldd is used to restore the T-bit if there was one, as the
102 * store clears it anyway. 102 * store clears it anyway.
103 * PA2.0 book says "thou shall not use fstw on FPSR/FPERs" - T-Bone */ 103 * PA2.0 book says "thou shall not use fstw on FPSR/FPERs" - T-Bone */
104 asm volatile ("fstd %%fr0,0(%1) \n\t" 104 asm volatile ("fstd %%fr0,0(%1) \n\t"
105 "fldd 0(%1),%%fr0 \n\t" 105 "fldd 0(%1),%%fr0 \n\t"
106 : "=m" (s) : "r" (&s) : "r0"); 106 : "=m" (s) : "r" (&s) : "r0");
107 107
108 printk("%s\n", level); 108 printk("%s\n", level);
109 printk("%s VZOUICununcqcqcqcqcqcrmunTDVZOUI\n", level); 109 printk("%s VZOUICununcqcqcqcqcqcrmunTDVZOUI\n", level);
110 printbinary(buf, s.sw[0], 32); 110 printbinary(buf, s.sw[0], 32);
111 printk("%sFPSR: %s\n", level, buf); 111 printk("%sFPSR: %s\n", level, buf);
112 printk("%sFPER1: %08x\n", level, s.sw[1]); 112 printk("%sFPER1: %08x\n", level, s.sw[1]);
113 113
114 /* here we'll print fr0 again, though it'll be meaningless */ 114 /* here we'll print fr0 again, though it'll be meaningless */
115 for (i = 0; i < 32; i += 4) 115 for (i = 0; i < 32; i += 4)
116 PRINTREGS(level, regs->fr, "fr", FFMT, i); 116 PRINTREGS(level, regs->fr, "fr", FFMT, i);
117 } 117 }
118 118
119 void show_regs(struct pt_regs *regs) 119 void show_regs(struct pt_regs *regs)
120 { 120 {
121 int i; 121 int i;
122 char *level; 122 char *level;
123 unsigned long cr30, cr31; 123 unsigned long cr30, cr31;
124 124
125 level = user_mode(regs) ? KERN_DEBUG : KERN_CRIT; 125 level = user_mode(regs) ? KERN_DEBUG : KERN_CRIT;
126 126
127 print_gr(level, regs); 127 print_gr(level, regs);
128 128
129 for (i = 0; i < 8; i += 4) 129 for (i = 0; i < 8; i += 4)
130 PRINTREGS(level, regs->sr, "sr", RFMT, i); 130 PRINTREGS(level, regs->sr, "sr", RFMT, i);
131 131
132 if (user_mode(regs)) 132 if (user_mode(regs))
133 print_fr(level, regs); 133 print_fr(level, regs);
134 134
135 cr30 = mfctl(30); 135 cr30 = mfctl(30);
136 cr31 = mfctl(31); 136 cr31 = mfctl(31);
137 printk("%s\n", level); 137 printk("%s\n", level);
138 printk("%sIASQ: " RFMT " " RFMT " IAOQ: " RFMT " " RFMT "\n", 138 printk("%sIASQ: " RFMT " " RFMT " IAOQ: " RFMT " " RFMT "\n",
139 level, regs->iasq[0], regs->iasq[1], regs->iaoq[0], regs->iaoq[1]); 139 level, regs->iasq[0], regs->iasq[1], regs->iaoq[0], regs->iaoq[1]);
140 printk("%s IIR: %08lx ISR: " RFMT " IOR: " RFMT "\n", 140 printk("%s IIR: %08lx ISR: " RFMT " IOR: " RFMT "\n",
141 level, regs->iir, regs->isr, regs->ior); 141 level, regs->iir, regs->isr, regs->ior);
142 printk("%s CPU: %8d CR30: " RFMT " CR31: " RFMT "\n", 142 printk("%s CPU: %8d CR30: " RFMT " CR31: " RFMT "\n",
143 level, current_thread_info()->cpu, cr30, cr31); 143 level, current_thread_info()->cpu, cr30, cr31);
144 printk("%s ORIG_R28: " RFMT "\n", level, regs->orig_r28); 144 printk("%s ORIG_R28: " RFMT "\n", level, regs->orig_r28);
145 printk(level); 145 printk(level);
146 print_symbol(" IAOQ[0]: %s\n", regs->iaoq[0]); 146 print_symbol(" IAOQ[0]: %s\n", regs->iaoq[0]);
147 printk(level); 147 printk(level);
148 print_symbol(" IAOQ[1]: %s\n", regs->iaoq[1]); 148 print_symbol(" IAOQ[1]: %s\n", regs->iaoq[1]);
149 printk(level); 149 printk(level);
150 print_symbol(" RP(r2): %s\n", regs->gr[2]); 150 print_symbol(" RP(r2): %s\n", regs->gr[2]);
151 } 151 }
152 152
153 153
154 void dump_stack(void) 154 void dump_stack(void)
155 { 155 {
156 show_stack(NULL, NULL); 156 show_stack(NULL, NULL);
157 } 157 }
158 158
159 EXPORT_SYMBOL(dump_stack); 159 EXPORT_SYMBOL(dump_stack);
160 160
161 static void do_show_stack(struct unwind_frame_info *info) 161 static void do_show_stack(struct unwind_frame_info *info)
162 { 162 {
163 int i = 1; 163 int i = 1;
164 164
165 printk(KERN_CRIT "Backtrace:\n"); 165 printk(KERN_CRIT "Backtrace:\n");
166 while (i <= 16) { 166 while (i <= 16) {
167 if (unwind_once(info) < 0 || info->ip == 0) 167 if (unwind_once(info) < 0 || info->ip == 0)
168 break; 168 break;
169 169
170 if (__kernel_text_address(info->ip)) { 170 if (__kernel_text_address(info->ip)) {
171 printk("%s [<" RFMT ">] ", (i&0x3)==1 ? KERN_CRIT : "", info->ip); 171 printk("%s [<" RFMT ">] ", (i&0x3)==1 ? KERN_CRIT : "", info->ip);
172 #ifdef CONFIG_KALLSYMS 172 #ifdef CONFIG_KALLSYMS
173 print_symbol("%s\n", info->ip); 173 print_symbol("%s\n", info->ip);
174 #else 174 #else
175 if ((i & 0x03) == 0) 175 if ((i & 0x03) == 0)
176 printk("\n"); 176 printk("\n");
177 #endif 177 #endif
178 i++; 178 i++;
179 } 179 }
180 } 180 }
181 printk("\n"); 181 printk("\n");
182 } 182 }
183 183
184 void show_stack(struct task_struct *task, unsigned long *s) 184 void show_stack(struct task_struct *task, unsigned long *s)
185 { 185 {
186 struct unwind_frame_info info; 186 struct unwind_frame_info info;
187 187
188 if (!task) { 188 if (!task) {
189 unsigned long sp; 189 unsigned long sp;
190 190
191 HERE: 191 HERE:
192 asm volatile ("copy %%r30, %0" : "=r"(sp)); 192 asm volatile ("copy %%r30, %0" : "=r"(sp));
193 { 193 {
194 struct pt_regs r; 194 struct pt_regs r;
195 195
196 memset(&r, 0, sizeof(struct pt_regs)); 196 memset(&r, 0, sizeof(struct pt_regs));
197 r.iaoq[0] = (unsigned long)&&HERE; 197 r.iaoq[0] = (unsigned long)&&HERE;
198 r.gr[2] = (unsigned long)__builtin_return_address(0); 198 r.gr[2] = (unsigned long)__builtin_return_address(0);
199 r.gr[30] = sp; 199 r.gr[30] = sp;
200 200
201 unwind_frame_init(&info, current, &r); 201 unwind_frame_init(&info, current, &r);
202 } 202 }
203 } else { 203 } else {
204 unwind_frame_init_from_blocked_task(&info, task); 204 unwind_frame_init_from_blocked_task(&info, task);
205 } 205 }
206 206
207 do_show_stack(&info); 207 do_show_stack(&info);
208 } 208 }
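
show_stack(NULL, ...) above bootstraps the unwinder from the current frame: inline asm copies the stack pointer out of %r30, the GCC label-as-value extension supplies a program-counter value, and __builtin_return_address(0) supplies the return link. A condensed, architecture-neutral sketch of that capture step, relying only on the same GCC extensions the hunk itself uses (the helper name is illustrative):

	/* Sketch: capture just enough of the current frame to seed an
	 * unwinder, in the spirit of the HERE: trick above. */
	static void example_capture_frame(unsigned long *pc, unsigned long *ra)
	{
	here:
		*pc = (unsigned long)&&here;		/* label-as-value */
		*ra = (unsigned long)__builtin_return_address(0);
	}
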
209 209
210 int is_valid_bugaddr(unsigned long iaoq) 210 int is_valid_bugaddr(unsigned long iaoq)
211 { 211 {
212 return 1; 212 return 1;
213 } 213 }
214 214
215 void die_if_kernel(char *str, struct pt_regs *regs, long err) 215 void die_if_kernel(char *str, struct pt_regs *regs, long err)
216 { 216 {
217 if (user_mode(regs)) { 217 if (user_mode(regs)) {
218 if (err == 0) 218 if (err == 0)
219 return; /* STFU */ 219 return; /* STFU */
220 220
221 printk(KERN_CRIT "%s (pid %d): %s (code %ld) at " RFMT "\n", 221 printk(KERN_CRIT "%s (pid %d): %s (code %ld) at " RFMT "\n",
222 current->comm, current->pid, str, err, regs->iaoq[0]); 222 current->comm, current->pid, str, err, regs->iaoq[0]);
223 #ifdef PRINT_USER_FAULTS 223 #ifdef PRINT_USER_FAULTS
224 /* XXX for debugging only */ 224 /* XXX for debugging only */
225 show_regs(regs); 225 show_regs(regs);
226 #endif 226 #endif
227 return; 227 return;
228 } 228 }
229 229
230 oops_in_progress = 1; 230 oops_in_progress = 1;
231 231
232 /* Amuse the user in a SPARC fashion */ 232 /* Amuse the user in a SPARC fashion */
233 if (err) printk( 233 if (err) printk(
234 KERN_CRIT " _______________________________ \n" 234 KERN_CRIT " _______________________________ \n"
235 KERN_CRIT " < Your System ate a SPARC! Gah! >\n" 235 KERN_CRIT " < Your System ate a SPARC! Gah! >\n"
236 KERN_CRIT " ------------------------------- \n" 236 KERN_CRIT " ------------------------------- \n"
237 KERN_CRIT " \\ ^__^\n" 237 KERN_CRIT " \\ ^__^\n"
238 KERN_CRIT " \\ (xx)\\_______\n" 238 KERN_CRIT " \\ (xx)\\_______\n"
239 KERN_CRIT " (__)\\ )\\/\\\n" 239 KERN_CRIT " (__)\\ )\\/\\\n"
240 KERN_CRIT " U ||----w |\n" 240 KERN_CRIT " U ||----w |\n"
241 KERN_CRIT " || ||\n"); 241 KERN_CRIT " || ||\n");
242 242
243 /* unlock the pdc lock if necessary */ 243 /* unlock the pdc lock if necessary */
244 pdc_emergency_unlock(); 244 pdc_emergency_unlock();
245 245
246 /* maybe the kernel hasn't booted very far yet and hasn't been able 246 /* maybe the kernel hasn't booted very far yet and hasn't been able
247 * to initialize the serial or STI console. In that case we should 247 * to initialize the serial or STI console. In that case we should
248 * re-enable the pdc console, so that the user will be able to 248 * re-enable the pdc console, so that the user will be able to
249 * identify the problem. */ 249 * identify the problem. */
250 if (!console_drivers) 250 if (!console_drivers)
251 pdc_console_restart(); 251 pdc_console_restart();
252 252
253 if (err) 253 if (err)
254 printk(KERN_CRIT "%s (pid %d): %s (code %ld)\n", 254 printk(KERN_CRIT "%s (pid %d): %s (code %ld)\n",
255 current->comm, current->pid, str, err); 255 current->comm, current->pid, str, err);
256 256
257 /* Wot's wrong wif bein' racy? */ 257 /* Wot's wrong wif bein' racy? */
258 if (current->thread.flags & PARISC_KERNEL_DEATH) { 258 if (current->thread.flags & PARISC_KERNEL_DEATH) {
259 printk(KERN_CRIT "%s() recursion detected.\n", __FUNCTION__); 259 printk(KERN_CRIT "%s() recursion detected.\n", __FUNCTION__);
260 local_irq_enable(); 260 local_irq_enable();
261 while (1); 261 while (1);
262 } 262 }
263 current->thread.flags |= PARISC_KERNEL_DEATH; 263 current->thread.flags |= PARISC_KERNEL_DEATH;
264 264
265 show_regs(regs); 265 show_regs(regs);
266 dump_stack(); 266 dump_stack();
267 add_taint(TAINT_DIE);
267 268
268 if (in_interrupt()) 269 if (in_interrupt())
269 panic("Fatal exception in interrupt"); 270 panic("Fatal exception in interrupt");
270 271
271 if (panic_on_oops) { 272 if (panic_on_oops) {
272 printk(KERN_EMERG "Fatal exception: panic in 5 seconds\n"); 273 printk(KERN_EMERG "Fatal exception: panic in 5 seconds\n");
273 ssleep(5); 274 ssleep(5);
274 panic("Fatal exception"); 275 panic("Fatal exception");
275 } 276 }
276 277
277 do_exit(SIGSEGV); 278 do_exit(SIGSEGV);
278 } 279 }
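
The add_taint(TAINT_DIE) line above is the parisc half of this patch: once die_if_kernel() has handled a kernel-mode fault, the taint word records it, and every later Oops or SysRq dump is reported against a tainted kernel. A minimal sketch of what add_taint() amounts to in this series, assuming the kernel/panic.c implementation of the era (the real helper may also switch off lock debugging; the example_ name is illustrative):

	#include <linux/kernel.h>	/* tainted, TAINT_DIE in this series */

	/* Sketch, not the verbatim kernel/panic.c body: taint flags are
	 * only ever set at run time, so a plain OR is sufficient. */
	static inline void example_add_taint(unsigned flag)
	{
		tainted |= flag;	/* here: flag == TAINT_DIE */
	}
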
279 280
280 int syscall_ipi(int (*syscall) (struct pt_regs *), struct pt_regs *regs) 281 int syscall_ipi(int (*syscall) (struct pt_regs *), struct pt_regs *regs)
281 { 282 {
282 return syscall(regs); 283 return syscall(regs);
283 } 284 }
284 285
285 /* gdb uses break 4,8 */ 286 /* gdb uses break 4,8 */
286 #define GDB_BREAK_INSN 0x10004 287 #define GDB_BREAK_INSN 0x10004
287 static void handle_gdb_break(struct pt_regs *regs, int wot) 288 static void handle_gdb_break(struct pt_regs *regs, int wot)
288 { 289 {
289 struct siginfo si; 290 struct siginfo si;
290 291
291 si.si_signo = SIGTRAP; 292 si.si_signo = SIGTRAP;
292 si.si_errno = 0; 293 si.si_errno = 0;
293 si.si_code = wot; 294 si.si_code = wot;
294 si.si_addr = (void __user *) (regs->iaoq[0] & ~3); 295 si.si_addr = (void __user *) (regs->iaoq[0] & ~3);
295 force_sig_info(SIGTRAP, &si, current); 296 force_sig_info(SIGTRAP, &si, current);
296 } 297 }
297 298
298 static void handle_break(struct pt_regs *regs) 299 static void handle_break(struct pt_regs *regs)
299 { 300 {
300 unsigned iir = regs->iir; 301 unsigned iir = regs->iir;
301 302
302 if (unlikely(iir == PARISC_BUG_BREAK_INSN && !user_mode(regs))) { 303 if (unlikely(iir == PARISC_BUG_BREAK_INSN && !user_mode(regs))) {
303 /* check if a BUG() or WARN() trapped here. */ 304 /* check if a BUG() or WARN() trapped here. */
304 enum bug_trap_type tt; 305 enum bug_trap_type tt;
305 tt = report_bug(regs->iaoq[0] & ~3, regs); 306 tt = report_bug(regs->iaoq[0] & ~3, regs);
306 if (tt == BUG_TRAP_TYPE_WARN) { 307 if (tt == BUG_TRAP_TYPE_WARN) {
307 regs->iaoq[0] += 4; 308 regs->iaoq[0] += 4;
308 regs->iaoq[1] += 4; 309 regs->iaoq[1] += 4;
309 return; /* return to next instruction when WARN_ON(). */ 310 return; /* return to next instruction when WARN_ON(). */
310 } 311 }
311 die_if_kernel("Unknown kernel breakpoint", regs, 312 die_if_kernel("Unknown kernel breakpoint", regs,
312 (tt == BUG_TRAP_TYPE_NONE) ? 9 : 0); 313 (tt == BUG_TRAP_TYPE_NONE) ? 9 : 0);
313 } 314 }
314 315
315 #ifdef PRINT_USER_FAULTS 316 #ifdef PRINT_USER_FAULTS
316 if (unlikely(iir != GDB_BREAK_INSN)) { 317 if (unlikely(iir != GDB_BREAK_INSN)) {
317 printk(KERN_DEBUG "break %d,%d: pid=%d command='%s'\n", 318 printk(KERN_DEBUG "break %d,%d: pid=%d command='%s'\n",
318 iir & 31, (iir>>13) & ((1<<13)-1), 319 iir & 31, (iir>>13) & ((1<<13)-1),
319 current->pid, current->comm); 320 current->pid, current->comm);
320 show_regs(regs); 321 show_regs(regs);
321 } 322 }
322 #endif 323 #endif
323 324
324 /* send standard GDB signal */ 325 /* send standard GDB signal */
325 handle_gdb_break(regs, TRAP_BRKPT); 326 handle_gdb_break(regs, TRAP_BRKPT);
326 } 327 }
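
handle_break() above leans on the generic report_bug() to tell a WARN() from a BUG(): a warning bumps iaoq past the break instruction and resumes, while anything else drops into die_if_kernel() and, with this patch, also taints the kernel. A condensed sketch of that classification, assuming the report_bug(addr, regs) signature used in the hunk above (the wrapper itself is illustrative):

	#include <linux/bug.h>	/* report_bug(), enum bug_trap_type */

	/* Sketch of the WARN-vs-BUG dispatch handle_break() performs. */
	static int example_classify_break(struct pt_regs *regs, unsigned long addr)
	{
		enum bug_trap_type tt = report_bug(addr, regs);

		if (tt == BUG_TRAP_TYPE_WARN)
			return 0;	/* WARN_ON(): skip the break, resume */
		return -1;		/* BUG() or unknown break: fatal path */
	}
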
327 328
328 static void default_trap(int code, struct pt_regs *regs) 329 static void default_trap(int code, struct pt_regs *regs)
329 { 330 {
330 printk(KERN_ERR "Trap %d on CPU %d\n", code, smp_processor_id()); 331 printk(KERN_ERR "Trap %d on CPU %d\n", code, smp_processor_id());
331 show_regs(regs); 332 show_regs(regs);
332 } 333 }
333 334
334 void (*cpu_lpmc) (int code, struct pt_regs *regs) __read_mostly = default_trap; 335 void (*cpu_lpmc) (int code, struct pt_regs *regs) __read_mostly = default_trap;
335 336
336 337
337 void transfer_pim_to_trap_frame(struct pt_regs *regs) 338 void transfer_pim_to_trap_frame(struct pt_regs *regs)
338 { 339 {
339 register int i; 340 register int i;
340 extern unsigned int hpmc_pim_data[]; 341 extern unsigned int hpmc_pim_data[];
341 struct pdc_hpmc_pim_11 *pim_narrow; 342 struct pdc_hpmc_pim_11 *pim_narrow;
342 struct pdc_hpmc_pim_20 *pim_wide; 343 struct pdc_hpmc_pim_20 *pim_wide;
343 344
344 if (boot_cpu_data.cpu_type >= pcxu) { 345 if (boot_cpu_data.cpu_type >= pcxu) {
345 346
346 pim_wide = (struct pdc_hpmc_pim_20 *)hpmc_pim_data; 347 pim_wide = (struct pdc_hpmc_pim_20 *)hpmc_pim_data;
347 348
348 /* 349 /*
349 * Note: The following code will probably generate a 350 * Note: The following code will probably generate a
350 * bunch of truncation error warnings from the compiler. 351 * bunch of truncation error warnings from the compiler.
351 * Could be handled with an ifdef, but perhaps there 352 * Could be handled with an ifdef, but perhaps there
352 * is a better way. 353 * is a better way.
353 */ 354 */
354 355
355 regs->gr[0] = pim_wide->cr[22]; 356 regs->gr[0] = pim_wide->cr[22];
356 357
357 for (i = 1; i < 32; i++) 358 for (i = 1; i < 32; i++)
358 regs->gr[i] = pim_wide->gr[i]; 359 regs->gr[i] = pim_wide->gr[i];
359 360
360 for (i = 0; i < 32; i++) 361 for (i = 0; i < 32; i++)
361 regs->fr[i] = pim_wide->fr[i]; 362 regs->fr[i] = pim_wide->fr[i];
362 363
363 for (i = 0; i < 8; i++) 364 for (i = 0; i < 8; i++)
364 regs->sr[i] = pim_wide->sr[i]; 365 regs->sr[i] = pim_wide->sr[i];
365 366
366 regs->iasq[0] = pim_wide->cr[17]; 367 regs->iasq[0] = pim_wide->cr[17];
367 regs->iasq[1] = pim_wide->iasq_back; 368 regs->iasq[1] = pim_wide->iasq_back;
368 regs->iaoq[0] = pim_wide->cr[18]; 369 regs->iaoq[0] = pim_wide->cr[18];
369 regs->iaoq[1] = pim_wide->iaoq_back; 370 regs->iaoq[1] = pim_wide->iaoq_back;
370 371
371 regs->sar = pim_wide->cr[11]; 372 regs->sar = pim_wide->cr[11];
372 regs->iir = pim_wide->cr[19]; 373 regs->iir = pim_wide->cr[19];
373 regs->isr = pim_wide->cr[20]; 374 regs->isr = pim_wide->cr[20];
374 regs->ior = pim_wide->cr[21]; 375 regs->ior = pim_wide->cr[21];
375 } 376 }
376 else { 377 else {
377 pim_narrow = (struct pdc_hpmc_pim_11 *)hpmc_pim_data; 378 pim_narrow = (struct pdc_hpmc_pim_11 *)hpmc_pim_data;
378 379
379 regs->gr[0] = pim_narrow->cr[22]; 380 regs->gr[0] = pim_narrow->cr[22];
380 381
381 for (i = 1; i < 32; i++) 382 for (i = 1; i < 32; i++)
382 regs->gr[i] = pim_narrow->gr[i]; 383 regs->gr[i] = pim_narrow->gr[i];
383 384
384 for (i = 0; i < 32; i++) 385 for (i = 0; i < 32; i++)
385 regs->fr[i] = pim_narrow->fr[i]; 386 regs->fr[i] = pim_narrow->fr[i];
386 387
387 for (i = 0; i < 8; i++) 388 for (i = 0; i < 8; i++)
388 regs->sr[i] = pim_narrow->sr[i]; 389 regs->sr[i] = pim_narrow->sr[i];
389 390
390 regs->iasq[0] = pim_narrow->cr[17]; 391 regs->iasq[0] = pim_narrow->cr[17];
391 regs->iasq[1] = pim_narrow->iasq_back; 392 regs->iasq[1] = pim_narrow->iasq_back;
392 regs->iaoq[0] = pim_narrow->cr[18]; 393 regs->iaoq[0] = pim_narrow->cr[18];
393 regs->iaoq[1] = pim_narrow->iaoq_back; 394 regs->iaoq[1] = pim_narrow->iaoq_back;
394 395
395 regs->sar = pim_narrow->cr[11]; 396 regs->sar = pim_narrow->cr[11];
396 regs->iir = pim_narrow->cr[19]; 397 regs->iir = pim_narrow->cr[19];
397 regs->isr = pim_narrow->cr[20]; 398 regs->isr = pim_narrow->cr[20];
398 regs->ior = pim_narrow->cr[21]; 399 regs->ior = pim_narrow->cr[21];
399 } 400 }
400 401
401 /* 402 /*
402 * The following fields only have meaning if we came through 403 * The following fields only have meaning if we came through
403 * another path. So just zero them here. 404 * another path. So just zero them here.
404 */ 405 */
405 406
406 regs->ksp = 0; 407 regs->ksp = 0;
407 regs->kpc = 0; 408 regs->kpc = 0;
408 regs->orig_r28 = 0; 409 regs->orig_r28 = 0;
409 } 410 }
410 411
411 412
412 /* 413 /*
413 * This routine is called as a last resort when everything else 414 * This routine is called as a last resort when everything else
414 * has gone clearly wrong. We get called for faults in kernel space, 415 * has gone clearly wrong. We get called for faults in kernel space,
415 * and HPMC's. 416 * and HPMC's.
416 */ 417 */
417 void parisc_terminate(char *msg, struct pt_regs *regs, int code, unsigned long offset) 418 void parisc_terminate(char *msg, struct pt_regs *regs, int code, unsigned long offset)
418 { 419 {
419 static DEFINE_SPINLOCK(terminate_lock); 420 static DEFINE_SPINLOCK(terminate_lock);
420 421
421 oops_in_progress = 1; 422 oops_in_progress = 1;
422 423
423 set_eiem(0); 424 set_eiem(0);
424 local_irq_disable(); 425 local_irq_disable();
425 spin_lock(&terminate_lock); 426 spin_lock(&terminate_lock);
426 427
427 /* unlock the pdc lock if necessary */ 428 /* unlock the pdc lock if necessary */
428 pdc_emergency_unlock(); 429 pdc_emergency_unlock();
429 430
430 /* restart pdc console if necessary */ 431 /* restart pdc console if necessary */
431 if (!console_drivers) 432 if (!console_drivers)
432 pdc_console_restart(); 433 pdc_console_restart();
433 434
434 /* Not all paths will gutter the processor... */ 435 /* Not all paths will gutter the processor... */
435 switch(code){ 436 switch(code){
436 437
437 case 1: 438 case 1:
438 transfer_pim_to_trap_frame(regs); 439 transfer_pim_to_trap_frame(regs);
439 break; 440 break;
440 441
441 default: 442 default:
442 /* Fall through */ 443 /* Fall through */
443 break; 444 break;
444 445
445 } 446 }
446 447
447 { 448 {
448 /* show_stack(NULL, (unsigned long *)regs->gr[30]); */ 449 /* show_stack(NULL, (unsigned long *)regs->gr[30]); */
449 struct unwind_frame_info info; 450 struct unwind_frame_info info;
450 unwind_frame_init(&info, current, regs); 451 unwind_frame_init(&info, current, regs);
451 do_show_stack(&info); 452 do_show_stack(&info);
452 } 453 }
453 454
454 printk("\n"); 455 printk("\n");
455 printk(KERN_CRIT "%s: Code=%d regs=%p (Addr=" RFMT ")\n", 456 printk(KERN_CRIT "%s: Code=%d regs=%p (Addr=" RFMT ")\n",
456 msg, code, regs, offset); 457 msg, code, regs, offset);
457 show_regs(regs); 458 show_regs(regs);
458 459
459 spin_unlock(&terminate_lock); 460 spin_unlock(&terminate_lock);
460 461
461 /* put soft power button back under hardware control; 462 /* put soft power button back under hardware control;
462 * if the user had pressed it once at any time, the 463 * if the user had pressed it once at any time, the
463 * system will shut down immediately right here. */ 464 * system will shut down immediately right here. */
464 pdc_soft_power_button(0); 465 pdc_soft_power_button(0);
465 466
466 /* Call kernel panic() so reboot timeouts work properly 467 /* Call kernel panic() so reboot timeouts work properly
467 * FIXME: This function should be on the list of 468 * FIXME: This function should be on the list of
468 * panic notifiers, and we should call panic 469 * panic notifiers, and we should call panic
469 * directly from the location that we wish. 470 * directly from the location that we wish.
470 * e.g. We should not call panic from 471 * e.g. We should not call panic from
471 * parisc_terminate, but rather the other way around. 472 * parisc_terminate, but rather the other way around.
472 * This hack works, prints the panic message twice, 473 * This hack works, prints the panic message twice,
473 * and it enables reboot timers! 474 * and it enables reboot timers!
474 */ 475 */
475 panic(msg); 476 panic(msg);
476 } 477 }
477 478
478 void handle_interruption(int code, struct pt_regs *regs) 479 void handle_interruption(int code, struct pt_regs *regs)
479 { 480 {
480 unsigned long fault_address = 0; 481 unsigned long fault_address = 0;
481 unsigned long fault_space = 0; 482 unsigned long fault_space = 0;
482 struct siginfo si; 483 struct siginfo si;
483 484
484 if (code == 1) 485 if (code == 1)
485 pdc_console_restart(); /* switch back to pdc if HPMC */ 486 pdc_console_restart(); /* switch back to pdc if HPMC */
486 else 487 else
487 local_irq_enable(); 488 local_irq_enable();
488 489
489 /* Security check: 490 /* Security check:
490 * If the priority level is still user, and the 491 * If the priority level is still user, and the
491 * faulting space is not equal to the active space 492 * faulting space is not equal to the active space
492 * then the user is attempting something in a space 493 * then the user is attempting something in a space
493 * that does not belong to them. Kill the process. 494 * that does not belong to them. Kill the process.
494 * 495 *
495 * This is normally the situation when the user 496 * This is normally the situation when the user
496 * attempts to jump into the kernel space at the 497 * attempts to jump into the kernel space at the
497 * wrong offset, be it at the gateway page or a 498 * wrong offset, be it at the gateway page or a
498 * random location. 499 * random location.
499 * 500 *
500 * We cannot normally signal the process because it 501 * We cannot normally signal the process because it
501 * could *be* on the gateway page, and processes 502 * could *be* on the gateway page, and processes
502 * executing on the gateway page can't have signals 503 * executing on the gateway page can't have signals
503 * delivered. 504 * delivered.
504 * 505 *
505 * We merely readjust the address into the user's 506 * We merely readjust the address into the user's
506 * space, at a destination address of zero, and 507 * space, at a destination address of zero, and
507 * allow processing to continue. 508 * allow processing to continue.
508 */ 509 */
509 if (((unsigned long)regs->iaoq[0] & 3) && 510 if (((unsigned long)regs->iaoq[0] & 3) &&
510 ((unsigned long)regs->iasq[0] != (unsigned long)regs->sr[7])) { 511 ((unsigned long)regs->iasq[0] != (unsigned long)regs->sr[7])) {
511 /* Kill the user process later */ 512 /* Kill the user process later */
512 regs->iaoq[0] = 0 | 3; 513 regs->iaoq[0] = 0 | 3;
513 regs->iaoq[1] = regs->iaoq[0] + 4; 514 regs->iaoq[1] = regs->iaoq[0] + 4;
514 regs->iasq[0] = regs->iasq[1] = regs->sr[7]; 515 regs->iasq[0] = regs->iasq[1] = regs->sr[7];
515 regs->gr[0] &= ~PSW_B; 516 regs->gr[0] &= ~PSW_B;
516 return; 517 return;
517 } 518 }
518 519
519 #if 0 520 #if 0
520 printk(KERN_CRIT "Interruption # %d\n", code); 521 printk(KERN_CRIT "Interruption # %d\n", code);
521 #endif 522 #endif
522 523
523 switch(code) { 524 switch(code) {
524 525
525 case 1: 526 case 1:
526 /* High-priority machine check (HPMC) */ 527 /* High-priority machine check (HPMC) */
527 528
528 /* set up a new LED state on systems shipped with a LED State panel */ 529 /* set up a new LED state on systems shipped with a LED State panel */
529 pdc_chassis_send_status(PDC_CHASSIS_DIRECT_HPMC); 530 pdc_chassis_send_status(PDC_CHASSIS_DIRECT_HPMC);
530 531
531 parisc_terminate("High Priority Machine Check (HPMC)", 532 parisc_terminate("High Priority Machine Check (HPMC)",
532 regs, code, 0); 533 regs, code, 0);
533 /* NOT REACHED */ 534 /* NOT REACHED */
534 535
535 case 2: 536 case 2:
536 /* Power failure interrupt */ 537 /* Power failure interrupt */
537 printk(KERN_CRIT "Power failure interrupt !\n"); 538 printk(KERN_CRIT "Power failure interrupt !\n");
538 return; 539 return;
539 540
540 case 3: 541 case 3:
541 /* Recovery counter trap */ 542 /* Recovery counter trap */
542 regs->gr[0] &= ~PSW_R; 543 regs->gr[0] &= ~PSW_R;
543 if (user_space(regs)) 544 if (user_space(regs))
544 handle_gdb_break(regs, TRAP_TRACE); 545 handle_gdb_break(regs, TRAP_TRACE);
545 /* else this must be the start of a syscall - just let it run */ 546 /* else this must be the start of a syscall - just let it run */
546 return; 547 return;
547 548
548 case 5: 549 case 5:
549 /* Low-priority machine check */ 550 /* Low-priority machine check */
550 pdc_chassis_send_status(PDC_CHASSIS_DIRECT_LPMC); 551 pdc_chassis_send_status(PDC_CHASSIS_DIRECT_LPMC);
551 552
552 flush_cache_all(); 553 flush_cache_all();
553 flush_tlb_all(); 554 flush_tlb_all();
554 cpu_lpmc(5, regs); 555 cpu_lpmc(5, regs);
555 return; 556 return;
556 557
557 case 6: 558 case 6:
558 /* Instruction TLB miss fault/Instruction page fault */ 559 /* Instruction TLB miss fault/Instruction page fault */
559 fault_address = regs->iaoq[0]; 560 fault_address = regs->iaoq[0];
560 fault_space = regs->iasq[0]; 561 fault_space = regs->iasq[0];
561 break; 562 break;
562 563
563 case 8: 564 case 8:
564 /* Illegal instruction trap */ 565 /* Illegal instruction trap */
565 die_if_kernel("Illegal instruction", regs, code); 566 die_if_kernel("Illegal instruction", regs, code);
566 si.si_code = ILL_ILLOPC; 567 si.si_code = ILL_ILLOPC;
567 goto give_sigill; 568 goto give_sigill;
568 569
569 case 9: 570 case 9:
570 /* Break instruction trap */ 571 /* Break instruction trap */
571 handle_break(regs); 572 handle_break(regs);
572 return; 573 return;
573 574
574 case 10: 575 case 10:
575 /* Privileged operation trap */ 576 /* Privileged operation trap */
576 die_if_kernel("Privileged operation", regs, code); 577 die_if_kernel("Privileged operation", regs, code);
577 si.si_code = ILL_PRVOPC; 578 si.si_code = ILL_PRVOPC;
578 goto give_sigill; 579 goto give_sigill;
579 580
580 case 11: 581 case 11:
581 /* Privileged register trap */ 582 /* Privileged register trap */
582 if ((regs->iir & 0xffdfffe0) == 0x034008a0) { 583 if ((regs->iir & 0xffdfffe0) == 0x034008a0) {
583 584
584 /* This is a MFCTL cr26/cr27 to gr instruction. 585 /* This is a MFCTL cr26/cr27 to gr instruction.
585 * PCXS traps on this, so we need to emulate it. 586 * PCXS traps on this, so we need to emulate it.
586 */ 587 */
587 588
588 if (regs->iir & 0x00200000) 589 if (regs->iir & 0x00200000)
589 regs->gr[regs->iir & 0x1f] = mfctl(27); 590 regs->gr[regs->iir & 0x1f] = mfctl(27);
590 else 591 else
591 regs->gr[regs->iir & 0x1f] = mfctl(26); 592 regs->gr[regs->iir & 0x1f] = mfctl(26);
592 593
593 regs->iaoq[0] = regs->iaoq[1]; 594 regs->iaoq[0] = regs->iaoq[1];
594 regs->iaoq[1] += 4; 595 regs->iaoq[1] += 4;
595 regs->iasq[0] = regs->iasq[1]; 596 regs->iasq[0] = regs->iasq[1];
596 return; 597 return;
597 } 598 }
598 599
599 die_if_kernel("Privileged register usage", regs, code); 600 die_if_kernel("Privileged register usage", regs, code);
600 si.si_code = ILL_PRVREG; 601 si.si_code = ILL_PRVREG;
601 give_sigill: 602 give_sigill:
602 si.si_signo = SIGILL; 603 si.si_signo = SIGILL;
603 si.si_errno = 0; 604 si.si_errno = 0;
604 si.si_addr = (void __user *) regs->iaoq[0]; 605 si.si_addr = (void __user *) regs->iaoq[0];
605 force_sig_info(SIGILL, &si, current); 606 force_sig_info(SIGILL, &si, current);
606 return; 607 return;
607 608
608 case 12: 609 case 12:
609 /* Overflow Trap, let the userland signal handler do the cleanup */ 610 /* Overflow Trap, let the userland signal handler do the cleanup */
610 si.si_signo = SIGFPE; 611 si.si_signo = SIGFPE;
611 si.si_code = FPE_INTOVF; 612 si.si_code = FPE_INTOVF;
612 si.si_addr = (void __user *) regs->iaoq[0]; 613 si.si_addr = (void __user *) regs->iaoq[0];
613 force_sig_info(SIGFPE, &si, current); 614 force_sig_info(SIGFPE, &si, current);
614 return; 615 return;
615 616
616 case 13: 617 case 13:
617 /* Conditional Trap 618 /* Conditional Trap
618 The condition succeeds in an instruction which traps 619 The condition succeeds in an instruction which traps
619 on condition */ 620 on condition */
620 if(user_mode(regs)){ 621 if(user_mode(regs)){
621 si.si_signo = SIGFPE; 622 si.si_signo = SIGFPE;
622 /* Set to zero, and let the userspace app figure it out from 623 /* Set to zero, and let the userspace app figure it out from
623 the insn pointed to by si_addr */ 624 the insn pointed to by si_addr */
624 si.si_code = 0; 625 si.si_code = 0;
625 si.si_addr = (void __user *) regs->iaoq[0]; 626 si.si_addr = (void __user *) regs->iaoq[0];
626 force_sig_info(SIGFPE, &si, current); 627 force_sig_info(SIGFPE, &si, current);
627 return; 628 return;
628 } 629 }
629 /* The kernel doesn't want to handle condition codes */ 630 /* The kernel doesn't want to handle condition codes */
630 break; 631 break;
631 632
632 case 14: 633 case 14:
633 /* Assist Exception Trap, i.e. floating point exception. */ 634 /* Assist Exception Trap, i.e. floating point exception. */
634 die_if_kernel("Floating point exception", regs, 0); /* quiet */ 635 die_if_kernel("Floating point exception", regs, 0); /* quiet */
635 handle_fpe(regs); 636 handle_fpe(regs);
636 return; 637 return;
637 638
638 case 15: 639 case 15:
639 /* Data TLB miss fault/Data page fault */ 640 /* Data TLB miss fault/Data page fault */
640 /* Fall through */ 641 /* Fall through */
641 case 16: 642 case 16:
642 /* Non-access instruction TLB miss fault */ 643 /* Non-access instruction TLB miss fault */
643 /* The instruction TLB entry needed for the target address of the FIC 644 /* The instruction TLB entry needed for the target address of the FIC
644 is absent, and hardware can't find it, so we get to clean up 645 is absent, and hardware can't find it, so we get to clean up
645 /* Fall through */ 646 /* Fall through */
646 case 17: 647 case 17:
647 /* Non-access data TLB miss fault/Non-access data page fault */ 648 /* Non-access data TLB miss fault/Non-access data page fault */
648 /* FIXME: 649 /* FIXME:
649 Still need to add slow path emulation code here! 650 Still need to add slow path emulation code here!
650 If the insn used a non-shadow register, then the tlb 651 If the insn used a non-shadow register, then the tlb
651 handlers could not have their side-effect (e.g. probe 652 handlers could not have their side-effect (e.g. probe
652 writing to a target register) emulated since rfir would 653 writing to a target register) emulated since rfir would
653 erase the changes to said register. Instead we have to 654 erase the changes to said register. Instead we have to
654 setup everything, call this function we are in, and emulate 655 setup everything, call this function we are in, and emulate
655 by hand. Technically we need to emulate: 656 by hand. Technically we need to emulate:
656 fdc,fdce,pdc,"fic,4f",prober,probeir,probew, probeiw 657 fdc,fdce,pdc,"fic,4f",prober,probeir,probew, probeiw
657 */ 658 */
658 fault_address = regs->ior; 659 fault_address = regs->ior;
659 fault_space = regs->isr; 660 fault_space = regs->isr;
660 break; 661 break;
661 662
662 case 18: 663 case 18:
663 /* PCXS only -- later CPUs split this into types 26,27 & 28 */ 664 /* PCXS only -- later CPUs split this into types 26,27 & 28 */
664 /* Check for unaligned access */ 665 /* Check for unaligned access */
665 if (check_unaligned(regs)) { 666 if (check_unaligned(regs)) {
666 handle_unaligned(regs); 667 handle_unaligned(regs);
667 return; 668 return;
668 } 669 }
669 /* Fall Through */ 670 /* Fall Through */
670 case 26: 671 case 26:
671 /* PCXL: Data memory access rights trap */ 672 /* PCXL: Data memory access rights trap */
672 fault_address = regs->ior; 673 fault_address = regs->ior;
673 fault_space = regs->isr; 674 fault_space = regs->isr;
674 break; 675 break;
675 676
676 case 19: 677 case 19:
677 /* Data memory break trap */ 678 /* Data memory break trap */
678 regs->gr[0] |= PSW_X; /* So we can single-step over the trap */ 679 regs->gr[0] |= PSW_X; /* So we can single-step over the trap */
679 /* fall through */ 680 /* fall through */
680 case 21: 681 case 21:
681 /* Page reference trap */ 682 /* Page reference trap */
682 handle_gdb_break(regs, TRAP_HWBKPT); 683 handle_gdb_break(regs, TRAP_HWBKPT);
683 return; 684 return;
684 685
685 case 25: 686 case 25:
686 /* Taken branch trap */ 687 /* Taken branch trap */
687 regs->gr[0] &= ~PSW_T; 688 regs->gr[0] &= ~PSW_T;
688 if (user_space(regs)) 689 if (user_space(regs))
689 handle_gdb_break(regs, TRAP_BRANCH); 690 handle_gdb_break(regs, TRAP_BRANCH);
690 /* else this must be the start of a syscall - just let it 691 /* else this must be the start of a syscall - just let it
691 * run. 692 * run.
692 */ 693 */
693 return; 694 return;
694 695
695 case 7: 696 case 7:
696 /* Instruction access rights */ 697 /* Instruction access rights */
697 /* PCXL: Instruction memory protection trap */ 698 /* PCXL: Instruction memory protection trap */
698 699
699 /* 700 /*
700 * This could be caused by either: 1) a process attempting 701 * This could be caused by either: 1) a process attempting
701 * to execute within a vma that does not have execute 702 * to execute within a vma that does not have execute
702 * permission, or 2) an access rights violation caused by a 703 * permission, or 2) an access rights violation caused by a
703 * flush only translation set up by ptep_get_and_clear(). 704 * flush only translation set up by ptep_get_and_clear().
704 * So we check the vma permissions to differentiate the two. 705 * So we check the vma permissions to differentiate the two.
705 * If the vma indicates we have execute permission, then 706 * If the vma indicates we have execute permission, then
706 * the cause is the latter one. In this case, we need to 707 * the cause is the latter one. In this case, we need to
707 * call do_page_fault() to fix the problem. 708 * call do_page_fault() to fix the problem.
708 */ 709 */
709 710
710 if (user_mode(regs)) { 711 if (user_mode(regs)) {
711 struct vm_area_struct *vma; 712 struct vm_area_struct *vma;
712 713
713 down_read(&current->mm->mmap_sem); 714 down_read(&current->mm->mmap_sem);
714 vma = find_vma(current->mm,regs->iaoq[0]); 715 vma = find_vma(current->mm,regs->iaoq[0]);
715 if (vma && (regs->iaoq[0] >= vma->vm_start) 716 if (vma && (regs->iaoq[0] >= vma->vm_start)
716 && (vma->vm_flags & VM_EXEC)) { 717 && (vma->vm_flags & VM_EXEC)) {
717 718
718 fault_address = regs->iaoq[0]; 719 fault_address = regs->iaoq[0];
719 fault_space = regs->iasq[0]; 720 fault_space = regs->iasq[0];
720 721
721 up_read(&current->mm->mmap_sem); 722 up_read(&current->mm->mmap_sem);
722 break; /* call do_page_fault() */ 723 break; /* call do_page_fault() */
723 } 724 }
724 up_read(&current->mm->mmap_sem); 725 up_read(&current->mm->mmap_sem);
725 } 726 }
726 /* Fall Through */ 727 /* Fall Through */
727 case 27: 728 case 27:
728 /* Data memory protection ID trap */ 729 /* Data memory protection ID trap */
729 die_if_kernel("Protection id trap", regs, code); 730 die_if_kernel("Protection id trap", regs, code);
730 si.si_code = SEGV_MAPERR; 731 si.si_code = SEGV_MAPERR;
731 si.si_signo = SIGSEGV; 732 si.si_signo = SIGSEGV;
732 si.si_errno = 0; 733 si.si_errno = 0;
733 if (code == 7) 734 if (code == 7)
734 si.si_addr = (void __user *) regs->iaoq[0]; 735 si.si_addr = (void __user *) regs->iaoq[0];
735 else 736 else
736 si.si_addr = (void __user *) regs->ior; 737 si.si_addr = (void __user *) regs->ior;
737 force_sig_info(SIGSEGV, &si, current); 738 force_sig_info(SIGSEGV, &si, current);
738 return; 739 return;
739 740
740 case 28: 741 case 28:
741 /* Unaligned data reference trap */ 742 /* Unaligned data reference trap */
742 handle_unaligned(regs); 743 handle_unaligned(regs);
743 return; 744 return;
744 745
745 default: 746 default:
746 if (user_mode(regs)) { 747 if (user_mode(regs)) {
747 #ifdef PRINT_USER_FAULTS 748 #ifdef PRINT_USER_FAULTS
748 printk(KERN_DEBUG "\nhandle_interruption() pid=%d command='%s'\n", 749 printk(KERN_DEBUG "\nhandle_interruption() pid=%d command='%s'\n",
749 current->pid, current->comm); 750 current->pid, current->comm);
750 show_regs(regs); 751 show_regs(regs);
751 #endif 752 #endif
752 /* SIGBUS, for lack of a better one. */ 753 /* SIGBUS, for lack of a better one. */
753 si.si_signo = SIGBUS; 754 si.si_signo = SIGBUS;
754 si.si_code = BUS_OBJERR; 755 si.si_code = BUS_OBJERR;
755 si.si_errno = 0; 756 si.si_errno = 0;
756 si.si_addr = (void __user *) regs->ior; 757 si.si_addr = (void __user *) regs->ior;
757 force_sig_info(SIGBUS, &si, current); 758 force_sig_info(SIGBUS, &si, current);
758 return; 759 return;
759 } 760 }
760 pdc_chassis_send_status(PDC_CHASSIS_DIRECT_PANIC); 761 pdc_chassis_send_status(PDC_CHASSIS_DIRECT_PANIC);
761 762
762 parisc_terminate("Unexpected interruption", regs, code, 0); 763 parisc_terminate("Unexpected interruption", regs, code, 0);
763 /* NOT REACHED */ 764 /* NOT REACHED */
764 } 765 }
765 766
766 if (user_mode(regs)) { 767 if (user_mode(regs)) {
767 if ((fault_space >> SPACEID_SHIFT) != (regs->sr[7] >> SPACEID_SHIFT)) { 768 if ((fault_space >> SPACEID_SHIFT) != (regs->sr[7] >> SPACEID_SHIFT)) {
768 #ifdef PRINT_USER_FAULTS 769 #ifdef PRINT_USER_FAULTS
769 if (fault_space == 0) 770 if (fault_space == 0)
770 printk(KERN_DEBUG "User Fault on Kernel Space "); 771 printk(KERN_DEBUG "User Fault on Kernel Space ");
771 else 772 else
772 printk(KERN_DEBUG "User Fault (long pointer) (fault %d) ", 773 printk(KERN_DEBUG "User Fault (long pointer) (fault %d) ",
773 code); 774 code);
774 printk("pid=%d command='%s'\n", current->pid, current->comm); 775 printk("pid=%d command='%s'\n", current->pid, current->comm);
775 show_regs(regs); 776 show_regs(regs);
776 #endif 777 #endif
777 si.si_signo = SIGSEGV; 778 si.si_signo = SIGSEGV;
778 si.si_errno = 0; 779 si.si_errno = 0;
779 si.si_code = SEGV_MAPERR; 780 si.si_code = SEGV_MAPERR;
780 si.si_addr = (void __user *) regs->ior; 781 si.si_addr = (void __user *) regs->ior;
781 force_sig_info(SIGSEGV, &si, current); 782 force_sig_info(SIGSEGV, &si, current);
782 return; 783 return;
783 } 784 }
784 } 785 }
785 else { 786 else {
786 787
787 /* 788 /*
788 * The kernel should never fault on its own address space. 789 * The kernel should never fault on its own address space.
789 */ 790 */
790 791
791 if (fault_space == 0) 792 if (fault_space == 0)
792 { 793 {
793 pdc_chassis_send_status(PDC_CHASSIS_DIRECT_PANIC); 794 pdc_chassis_send_status(PDC_CHASSIS_DIRECT_PANIC);
794 parisc_terminate("Kernel Fault", regs, code, fault_address); 795 parisc_terminate("Kernel Fault", regs, code, fault_address);
795 796
796 } 797 }
797 } 798 }
798 799
799 do_page_fault(regs, code, fault_address); 800 do_page_fault(regs, code, fault_address);
800 } 801 }
801 802
802 803
803 int __init check_ivt(void *iva) 804 int __init check_ivt(void *iva)
804 { 805 {
805 extern const u32 os_hpmc[]; 806 extern const u32 os_hpmc[];
806 extern const u32 os_hpmc_end[]; 807 extern const u32 os_hpmc_end[];
807 808
808 int i; 809 int i;
809 u32 check = 0; 810 u32 check = 0;
810 u32 *ivap; 811 u32 *ivap;
811 u32 *hpmcp; 812 u32 *hpmcp;
812 u32 length; 813 u32 length;
813 814
814 if (strcmp((char *)iva, "cows can fly")) 815 if (strcmp((char *)iva, "cows can fly"))
815 return -1; 816 return -1;
816 817
817 ivap = (u32 *)iva; 818 ivap = (u32 *)iva;
818 819
819 for (i = 0; i < 8; i++) 820 for (i = 0; i < 8; i++)
820 *ivap++ = 0; 821 *ivap++ = 0;
821 822
822 /* Compute Checksum for HPMC handler */ 823 /* Compute Checksum for HPMC handler */
823 824
824 length = os_hpmc_end - os_hpmc; 825 length = os_hpmc_end - os_hpmc;
825 ivap[7] = length; 826 ivap[7] = length;
826 827
827 hpmcp = (u32 *)os_hpmc; 828 hpmcp = (u32 *)os_hpmc;
828 829
829 for (i=0; i<length/4; i++) 830 for (i=0; i<length/4; i++)
830 check += *hpmcp++; 831 check += *hpmcp++;
831 832
832 for (i=0; i<8; i++) 833 for (i=0; i<8; i++)
833 check += ivap[i]; 834 check += ivap[i];
834 835
835 ivap[5] = -check; 836 ivap[5] = -check;
836 837
837 return 0; 838 return 0;
838 } 839 }
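
check_ivt() above seals the interruption vector table with a negated-sum checksum: it totals the HPMC handler words plus the eight IVA words (slot 5 still zero at that point) and stores the arithmetic negation in slot 5, so the whole region sums to zero modulo 2^32 and any corruption shows up as a non-zero sum. A stand-alone sketch of the same scheme (illustrative helper, same arithmetic as the function above):

	/* Sketch: after sealing, the n words sum to 0 (mod 2^32). */
	static u32 example_seal_checksum(u32 *words, int n, int slot)
	{
		u32 sum = 0;
		int i;

		words[slot] = 0;
		for (i = 0; i < n; i++)
			sum += words[i];
		words[slot] = -sum;	/* now the n words sum to 0 */
		return words[slot];
	}
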
839 840
840 #ifndef CONFIG_64BIT 841 #ifndef CONFIG_64BIT
841 extern const void fault_vector_11; 842 extern const void fault_vector_11;
842 #endif 843 #endif
843 extern const void fault_vector_20; 844 extern const void fault_vector_20;
844 845
845 void __init trap_init(void) 846 void __init trap_init(void)
846 { 847 {
847 void *iva; 848 void *iva;
848 849
849 if (boot_cpu_data.cpu_type >= pcxu) 850 if (boot_cpu_data.cpu_type >= pcxu)
850 iva = (void *) &fault_vector_20; 851 iva = (void *) &fault_vector_20;
851 else 852 else
852 #ifdef CONFIG_64BIT 853 #ifdef CONFIG_64BIT
853 panic("Can't boot 64-bit OS on PA1.1 processor!"); 854 panic("Can't boot 64-bit OS on PA1.1 processor!");
854 #else 855 #else
855 iva = (void *) &fault_vector_11; 856 iva = (void *) &fault_vector_11;
856 #endif 857 #endif
857 858
858 if (check_ivt(iva)) 859 if (check_ivt(iva))
859 panic("IVT invalid"); 860 panic("IVT invalid");
860 } 861 }
861 862
arch/powerpc/kernel/traps.c
1 /* 1 /*
2 * Copyright (C) 1995-1996 Gary Thomas (gdt@linuxppc.org) 2 * Copyright (C) 1995-1996 Gary Thomas (gdt@linuxppc.org)
3 * 3 *
4 * This program is free software; you can redistribute it and/or 4 * This program is free software; you can redistribute it and/or
5 * modify it under the terms of the GNU General Public License 5 * modify it under the terms of the GNU General Public License
6 * as published by the Free Software Foundation; either version 6 * as published by the Free Software Foundation; either version
7 * 2 of the License, or (at your option) any later version. 7 * 2 of the License, or (at your option) any later version.
8 * 8 *
9 * Modified by Cort Dougan (cort@cs.nmt.edu) 9 * Modified by Cort Dougan (cort@cs.nmt.edu)
10 * and Paul Mackerras (paulus@samba.org) 10 * and Paul Mackerras (paulus@samba.org)
11 */ 11 */
12 12
13 /* 13 /*
14 * This file handles the architecture-dependent parts of hardware exceptions 14 * This file handles the architecture-dependent parts of hardware exceptions
15 */ 15 */
16 16
17 #include <linux/errno.h> 17 #include <linux/errno.h>
18 #include <linux/sched.h> 18 #include <linux/sched.h>
19 #include <linux/kernel.h> 19 #include <linux/kernel.h>
20 #include <linux/mm.h> 20 #include <linux/mm.h>
21 #include <linux/stddef.h> 21 #include <linux/stddef.h>
22 #include <linux/unistd.h> 22 #include <linux/unistd.h>
23 #include <linux/ptrace.h> 23 #include <linux/ptrace.h>
24 #include <linux/slab.h> 24 #include <linux/slab.h>
25 #include <linux/user.h> 25 #include <linux/user.h>
26 #include <linux/a.out.h> 26 #include <linux/a.out.h>
27 #include <linux/interrupt.h> 27 #include <linux/interrupt.h>
28 #include <linux/init.h> 28 #include <linux/init.h>
29 #include <linux/module.h> 29 #include <linux/module.h>
30 #include <linux/prctl.h> 30 #include <linux/prctl.h>
31 #include <linux/delay.h> 31 #include <linux/delay.h>
32 #include <linux/kprobes.h> 32 #include <linux/kprobes.h>
33 #include <linux/kexec.h> 33 #include <linux/kexec.h>
34 #include <linux/backlight.h> 34 #include <linux/backlight.h>
35 #include <linux/bug.h> 35 #include <linux/bug.h>
36 #include <linux/kdebug.h> 36 #include <linux/kdebug.h>
37 37
38 #include <asm/pgtable.h> 38 #include <asm/pgtable.h>
39 #include <asm/uaccess.h> 39 #include <asm/uaccess.h>
40 #include <asm/system.h> 40 #include <asm/system.h>
41 #include <asm/io.h> 41 #include <asm/io.h>
42 #include <asm/machdep.h> 42 #include <asm/machdep.h>
43 #include <asm/rtas.h> 43 #include <asm/rtas.h>
44 #include <asm/pmc.h> 44 #include <asm/pmc.h>
45 #ifdef CONFIG_PPC32 45 #ifdef CONFIG_PPC32
46 #include <asm/reg.h> 46 #include <asm/reg.h>
47 #endif 47 #endif
48 #ifdef CONFIG_PMAC_BACKLIGHT 48 #ifdef CONFIG_PMAC_BACKLIGHT
49 #include <asm/backlight.h> 49 #include <asm/backlight.h>
50 #endif 50 #endif
51 #ifdef CONFIG_PPC64 51 #ifdef CONFIG_PPC64
52 #include <asm/firmware.h> 52 #include <asm/firmware.h>
53 #include <asm/processor.h> 53 #include <asm/processor.h>
54 #endif 54 #endif
55 #include <asm/kexec.h> 55 #include <asm/kexec.h>
56 56
57 #ifdef CONFIG_DEBUGGER 57 #ifdef CONFIG_DEBUGGER
58 int (*__debugger)(struct pt_regs *regs); 58 int (*__debugger)(struct pt_regs *regs);
59 int (*__debugger_ipi)(struct pt_regs *regs); 59 int (*__debugger_ipi)(struct pt_regs *regs);
60 int (*__debugger_bpt)(struct pt_regs *regs); 60 int (*__debugger_bpt)(struct pt_regs *regs);
61 int (*__debugger_sstep)(struct pt_regs *regs); 61 int (*__debugger_sstep)(struct pt_regs *regs);
62 int (*__debugger_iabr_match)(struct pt_regs *regs); 62 int (*__debugger_iabr_match)(struct pt_regs *regs);
63 int (*__debugger_dabr_match)(struct pt_regs *regs); 63 int (*__debugger_dabr_match)(struct pt_regs *regs);
64 int (*__debugger_fault_handler)(struct pt_regs *regs); 64 int (*__debugger_fault_handler)(struct pt_regs *regs);
65 65
66 EXPORT_SYMBOL(__debugger); 66 EXPORT_SYMBOL(__debugger);
67 EXPORT_SYMBOL(__debugger_ipi); 67 EXPORT_SYMBOL(__debugger_ipi);
68 EXPORT_SYMBOL(__debugger_bpt); 68 EXPORT_SYMBOL(__debugger_bpt);
69 EXPORT_SYMBOL(__debugger_sstep); 69 EXPORT_SYMBOL(__debugger_sstep);
70 EXPORT_SYMBOL(__debugger_iabr_match); 70 EXPORT_SYMBOL(__debugger_iabr_match);
71 EXPORT_SYMBOL(__debugger_dabr_match); 71 EXPORT_SYMBOL(__debugger_dabr_match);
72 EXPORT_SYMBOL(__debugger_fault_handler); 72 EXPORT_SYMBOL(__debugger_fault_handler);
73 #endif 73 #endif
74 74
75 /* 75 /*
76 * Trap & Exception support 76 * Trap & Exception support
77 */ 77 */
78 78
79 #ifdef CONFIG_PMAC_BACKLIGHT 79 #ifdef CONFIG_PMAC_BACKLIGHT
80 static void pmac_backlight_unblank(void) 80 static void pmac_backlight_unblank(void)
81 { 81 {
82 mutex_lock(&pmac_backlight_mutex); 82 mutex_lock(&pmac_backlight_mutex);
83 if (pmac_backlight) { 83 if (pmac_backlight) {
84 struct backlight_properties *props; 84 struct backlight_properties *props;
85 85
86 props = &pmac_backlight->props; 86 props = &pmac_backlight->props;
87 props->brightness = props->max_brightness; 87 props->brightness = props->max_brightness;
88 props->power = FB_BLANK_UNBLANK; 88 props->power = FB_BLANK_UNBLANK;
89 backlight_update_status(pmac_backlight); 89 backlight_update_status(pmac_backlight);
90 } 90 }
91 mutex_unlock(&pmac_backlight_mutex); 91 mutex_unlock(&pmac_backlight_mutex);
92 } 92 }
93 #else 93 #else
94 static inline void pmac_backlight_unblank(void) { } 94 static inline void pmac_backlight_unblank(void) { }
95 #endif 95 #endif
96 96
97 int die(const char *str, struct pt_regs *regs, long err) 97 int die(const char *str, struct pt_regs *regs, long err)
98 { 98 {
99 static struct { 99 static struct {
100 spinlock_t lock; 100 spinlock_t lock;
101 u32 lock_owner; 101 u32 lock_owner;
102 int lock_owner_depth; 102 int lock_owner_depth;
103 } die = { 103 } die = {
104 .lock = __SPIN_LOCK_UNLOCKED(die.lock), 104 .lock = __SPIN_LOCK_UNLOCKED(die.lock),
105 .lock_owner = -1, 105 .lock_owner = -1,
106 .lock_owner_depth = 0 106 .lock_owner_depth = 0
107 }; 107 };
108 static int die_counter; 108 static int die_counter;
109 unsigned long flags; 109 unsigned long flags;
110 110
111 if (debugger(regs)) 111 if (debugger(regs))
112 return 1; 112 return 1;
113 113
114 oops_enter(); 114 oops_enter();
115 115
116 if (die.lock_owner != raw_smp_processor_id()) { 116 if (die.lock_owner != raw_smp_processor_id()) {
117 console_verbose(); 117 console_verbose();
118 spin_lock_irqsave(&die.lock, flags); 118 spin_lock_irqsave(&die.lock, flags);
119 die.lock_owner = smp_processor_id(); 119 die.lock_owner = smp_processor_id();
120 die.lock_owner_depth = 0; 120 die.lock_owner_depth = 0;
121 bust_spinlocks(1); 121 bust_spinlocks(1);
122 if (machine_is(powermac)) 122 if (machine_is(powermac))
123 pmac_backlight_unblank(); 123 pmac_backlight_unblank();
124 } else { 124 } else {
125 local_save_flags(flags); 125 local_save_flags(flags);
126 } 126 }
127 127
128 if (++die.lock_owner_depth < 3) { 128 if (++die.lock_owner_depth < 3) {
129 printk("Oops: %s, sig: %ld [#%d]\n", str, err, ++die_counter); 129 printk("Oops: %s, sig: %ld [#%d]\n", str, err, ++die_counter);
130 #ifdef CONFIG_PREEMPT 130 #ifdef CONFIG_PREEMPT
131 printk("PREEMPT "); 131 printk("PREEMPT ");
132 #endif 132 #endif
133 #ifdef CONFIG_SMP 133 #ifdef CONFIG_SMP
134 printk("SMP NR_CPUS=%d ", NR_CPUS); 134 printk("SMP NR_CPUS=%d ", NR_CPUS);
135 #endif 135 #endif
136 #ifdef CONFIG_DEBUG_PAGEALLOC 136 #ifdef CONFIG_DEBUG_PAGEALLOC
137 printk("DEBUG_PAGEALLOC "); 137 printk("DEBUG_PAGEALLOC ");
138 #endif 138 #endif
139 #ifdef CONFIG_NUMA 139 #ifdef CONFIG_NUMA
140 printk("NUMA "); 140 printk("NUMA ");
141 #endif 141 #endif
142 printk("%s\n", ppc_md.name ? ppc_md.name : ""); 142 printk("%s\n", ppc_md.name ? ppc_md.name : "");
143 143
144 print_modules(); 144 print_modules();
145 show_regs(regs); 145 show_regs(regs);
146 } else { 146 } else {
147 printk("Recursive die() failure, output suppressed\n"); 147 printk("Recursive die() failure, output suppressed\n");
148 } 148 }
149 149
150 bust_spinlocks(0); 150 bust_spinlocks(0);
151 die.lock_owner = -1; 151 die.lock_owner = -1;
152 add_taint(TAINT_DIE);
152 spin_unlock_irqrestore(&die.lock, flags); 153 spin_unlock_irqrestore(&die.lock, flags);
153 154
154 if (kexec_should_crash(current) || 155 if (kexec_should_crash(current) ||
155 kexec_sr_activated(smp_processor_id())) 156 kexec_sr_activated(smp_processor_id()))
156 crash_kexec(regs); 157 crash_kexec(regs);
157 crash_kexec_secondary(regs); 158 crash_kexec_secondary(regs);
158 159
159 if (in_interrupt()) 160 if (in_interrupt())
160 panic("Fatal exception in interrupt"); 161 panic("Fatal exception in interrupt");
161 162
162 if (panic_on_oops) 163 if (panic_on_oops)
163 panic("Fatal exception"); 164 panic("Fatal exception");
164 165
165 oops_exit(); 166 oops_exit();
166 do_exit(err); 167 do_exit(err);
167 168
168 return 0; 169 return 0;
169 } 170 }
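
On powerpc the same one-line add_taint(TAINT_DIE) lands inside die(), after the recursion-depth bookkeeping and before the die lock is dropped, so the flag is already visible by the time another CPU can take the lock. The observable effect, assuming print_tainted() maps TAINT_DIE to the 'D' character as this series does, is that every later dump names itself tainted:

	#include <linux/kernel.h>	/* print_tainted(), printk() */

	/* Sketch: how a subsequent report surfaces the new flag; the
	 * exact string ("Tainted: G      D") is this era's convention. */
	static void example_show_taint(void)
	{
		printk(KERN_INFO "%s\n", print_tainted());
	}
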
170 171
171 void _exception(int signr, struct pt_regs *regs, int code, unsigned long addr) 172 void _exception(int signr, struct pt_regs *regs, int code, unsigned long addr)
172 { 173 {
173 siginfo_t info; 174 siginfo_t info;
174 175
175 if (!user_mode(regs)) { 176 if (!user_mode(regs)) {
176 if (die("Exception in kernel mode", regs, signr)) 177 if (die("Exception in kernel mode", regs, signr))
177 return; 178 return;
178 } 179 }
179 180
180 memset(&info, 0, sizeof(info)); 181 memset(&info, 0, sizeof(info));
181 info.si_signo = signr; 182 info.si_signo = signr;
182 info.si_code = code; 183 info.si_code = code;
183 info.si_addr = (void __user *) addr; 184 info.si_addr = (void __user *) addr;
184 force_sig_info(signr, &info, current); 185 force_sig_info(signr, &info, current);
185 186
186 /* 187 /*
187 * Init gets no signals that it doesn't have a handler for. 188 * Init gets no signals that it doesn't have a handler for.
188 * That's all very well, but if it has caused a synchronous 189 * That's all very well, but if it has caused a synchronous
189 * exception and we ignore the resulting signal, it will just 190 * exception and we ignore the resulting signal, it will just
190 * generate the same exception over and over again and we get 191 * generate the same exception over and over again and we get
191 * nowhere. Better to kill it and let the kernel panic. 192 * nowhere. Better to kill it and let the kernel panic.
192 */ 193 */
193 if (is_init(current)) { 194 if (is_init(current)) {
194 __sighandler_t handler; 195 __sighandler_t handler;
195 196
196 spin_lock_irq(&current->sighand->siglock); 197 spin_lock_irq(&current->sighand->siglock);
197 handler = current->sighand->action[signr-1].sa.sa_handler; 198 handler = current->sighand->action[signr-1].sa.sa_handler;
198 spin_unlock_irq(&current->sighand->siglock); 199 spin_unlock_irq(&current->sighand->siglock);
199 if (handler == SIG_DFL) { 200 if (handler == SIG_DFL) {
200 /* init has generated a synchronous exception 201 /* init has generated a synchronous exception
201 and it doesn't have a handler for the signal */ 202 and it doesn't have a handler for the signal */
202 printk(KERN_CRIT "init has generated signal %d " 203 printk(KERN_CRIT "init has generated signal %d "
203 "but has no handler for it\n", signr); 204 "but has no handler for it\n", signr);
204 do_exit(signr); 205 do_exit(signr);
205 } 206 }
206 } 207 }
207 } 208 }
208 209
209 #ifdef CONFIG_PPC64 210 #ifdef CONFIG_PPC64
210 void system_reset_exception(struct pt_regs *regs) 211 void system_reset_exception(struct pt_regs *regs)
211 { 212 {
212 /* See if any machine dependent calls */ 213 /* See if any machine dependent calls */
213 if (ppc_md.system_reset_exception) { 214 if (ppc_md.system_reset_exception) {
214 if (ppc_md.system_reset_exception(regs)) 215 if (ppc_md.system_reset_exception(regs))
215 return; 216 return;
216 } 217 }
217 218
218 #ifdef CONFIG_KEXEC 219 #ifdef CONFIG_KEXEC
219 cpu_set(smp_processor_id(), cpus_in_sr); 220 cpu_set(smp_processor_id(), cpus_in_sr);
220 #endif 221 #endif
221 222
222 die("System Reset", regs, SIGABRT); 223 die("System Reset", regs, SIGABRT);
223 224
224 /* 225 /*
225 * Some CPUs when released from the debugger will execute this path. 226 * Some CPUs when released from the debugger will execute this path.
226 * These CPUs entered the debugger via a soft-reset. If the CPU was 227 * These CPUs entered the debugger via a soft-reset. If the CPU was
227 * hung before entering the debugger it will return to the hung 228 * hung before entering the debugger it will return to the hung
228 * state when exiting this function. This causes a problem in 229 * state when exiting this function. This causes a problem in
229 * kdump since the hung CPU(s) will not respond to the IPI sent 230 * kdump since the hung CPU(s) will not respond to the IPI sent
230 * from kdump. To prevent the problem we call crash_kexec_secondary() 231 * from kdump. To prevent the problem we call crash_kexec_secondary()
231 * here. If a kdump had not been initiated or we exit the debugger 232 * here. If a kdump had not been initiated or we exit the debugger
232 * with the "exit and recover" command (x) crash_kexec_secondary() 233 * with the "exit and recover" command (x) crash_kexec_secondary()
233 * will return after 5ms and the CPU returns to its previous state. 234 * will return after 5ms and the CPU returns to its previous state.
234 */ 235 */
235 crash_kexec_secondary(regs); 236 crash_kexec_secondary(regs);
236 237
237 /* Must die if the interrupt is not recoverable */ 238 /* Must die if the interrupt is not recoverable */
238 if (!(regs->msr & MSR_RI)) 239 if (!(regs->msr & MSR_RI))
239 panic("Unrecoverable System Reset"); 240 panic("Unrecoverable System Reset");
240 241
241 /* What should we do here? We could issue a shutdown or hard reset. */ 242 /* What should we do here? We could issue a shutdown or hard reset. */
242 } 243 }
243 #endif 244 #endif
244 245
245 /* 246 /*
246 * I/O accesses can cause machine checks on powermacs. 247 * I/O accesses can cause machine checks on powermacs.
247 * Check if the NIP corresponds to the address of a sync 248 * Check if the NIP corresponds to the address of a sync
248 * instruction for which there is an entry in the exception 249 * instruction for which there is an entry in the exception
249 * table. 250 * table.
250 * Note that the 601 only takes a machine check on TEA 251 * Note that the 601 only takes a machine check on TEA
251 * (transfer error ack) signal assertion, and does not 252 * (transfer error ack) signal assertion, and does not
252 * set any of the top 16 bits of SRR1. 253 * set any of the top 16 bits of SRR1.
253 * -- paulus. 254 * -- paulus.
254 */ 255 */
255 static inline int check_io_access(struct pt_regs *regs) 256 static inline int check_io_access(struct pt_regs *regs)
256 { 257 {
257 #ifdef CONFIG_PPC32 258 #ifdef CONFIG_PPC32
258 unsigned long msr = regs->msr; 259 unsigned long msr = regs->msr;
259 const struct exception_table_entry *entry; 260 const struct exception_table_entry *entry;
260 unsigned int *nip = (unsigned int *)regs->nip; 261 unsigned int *nip = (unsigned int *)regs->nip;
261 262
262 if (((msr & 0xffff0000) == 0 || (msr & (0x80000 | 0x40000))) 263 if (((msr & 0xffff0000) == 0 || (msr & (0x80000 | 0x40000)))
263 && (entry = search_exception_tables(regs->nip)) != NULL) { 264 && (entry = search_exception_tables(regs->nip)) != NULL) {
264 /* 265 /*
265 * Check that it's a sync instruction, or somewhere 266 * Check that it's a sync instruction, or somewhere
266 * in the twi; isync; nop sequence that inb/inw/inl uses. 267 * in the twi; isync; nop sequence that inb/inw/inl uses.
267 * As the address is in the exception table 268 * As the address is in the exception table
268 * we should be able to read the instr there. 269 * we should be able to read the instr there.
269 * For the debug message, we look at the preceding 270 * For the debug message, we look at the preceding
270 * load or store. 271 * load or store.
271 */ 272 */
272 if (*nip == 0x60000000) /* nop */ 273 if (*nip == 0x60000000) /* nop */
273 nip -= 2; 274 nip -= 2;
274 else if (*nip == 0x4c00012c) /* isync */ 275 else if (*nip == 0x4c00012c) /* isync */
275 --nip; 276 --nip;
276 if (*nip == 0x7c0004ac || (*nip >> 26) == 3) { 277 if (*nip == 0x7c0004ac || (*nip >> 26) == 3) {
277 /* sync or twi */ 278 /* sync or twi */
278 unsigned int rb; 279 unsigned int rb;
279 280
280 --nip; 281 --nip;
281 rb = (*nip >> 11) & 0x1f; 282 rb = (*nip >> 11) & 0x1f;
282 printk(KERN_DEBUG "%s bad port %lx at %p\n", 283 printk(KERN_DEBUG "%s bad port %lx at %p\n",
283 (*nip & 0x100)? "OUT to": "IN from", 284 (*nip & 0x100)? "OUT to": "IN from",
284 regs->gpr[rb] - _IO_BASE, nip); 285 regs->gpr[rb] - _IO_BASE, nip);
285 regs->msr |= MSR_RI; 286 regs->msr |= MSR_RI;
286 regs->nip = entry->fixup; 287 regs->nip = entry->fixup;
287 return 1; 288 return 1;
288 } 289 }
289 } 290 }
290 #endif /* CONFIG_PPC32 */ 291 #endif /* CONFIG_PPC32 */
291 return 0; 292 return 0;
292 } 293 }
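
check_io_access() above is an instance of the exception-table fixup pattern: if the faulting nip has an entry in the kernel's exception table, execution resumes at the entry's fixup address instead of dying. A minimal sketch of just that lookup-and-redirect step, assuming search_exception_tables() as declared in this era's <linux/module.h> (the wrapper is illustrative):

	#include <linux/module.h>	/* search_exception_tables() */

	/* Sketch: recover from a faulting access that was registered
	 * in the exception table, as check_io_access() does. */
	static int example_try_fixup(struct pt_regs *regs)
	{
		const struct exception_table_entry *entry;

		entry = search_exception_tables(regs->nip);
		if (!entry)
			return 0;		/* no fixup: let it die */
		regs->nip = entry->fixup;	/* resume at the fixup stub */
		return 1;
	}
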
293 294
294 #if defined(CONFIG_4xx) || defined(CONFIG_BOOKE) 295 #if defined(CONFIG_4xx) || defined(CONFIG_BOOKE)
295 /* On 4xx, the reason for the machine check or program exception 296 /* On 4xx, the reason for the machine check or program exception
296 is in the ESR. */ 297 is in the ESR. */
297 #define get_reason(regs) ((regs)->dsisr) 298 #define get_reason(regs) ((regs)->dsisr)
298 #ifndef CONFIG_FSL_BOOKE 299 #ifndef CONFIG_FSL_BOOKE
299 #define get_mc_reason(regs) ((regs)->dsisr) 300 #define get_mc_reason(regs) ((regs)->dsisr)
300 #else 301 #else
301 #define get_mc_reason(regs) (mfspr(SPRN_MCSR)) 302 #define get_mc_reason(regs) (mfspr(SPRN_MCSR))
302 #endif 303 #endif
303 #define REASON_FP ESR_FP 304 #define REASON_FP ESR_FP
304 #define REASON_ILLEGAL (ESR_PIL | ESR_PUO) 305 #define REASON_ILLEGAL (ESR_PIL | ESR_PUO)
305 #define REASON_PRIVILEGED ESR_PPR 306 #define REASON_PRIVILEGED ESR_PPR
306 #define REASON_TRAP ESR_PTR 307 #define REASON_TRAP ESR_PTR
307 308
308 /* single-step stuff */ 309 /* single-step stuff */
309 #define single_stepping(regs) (current->thread.dbcr0 & DBCR0_IC) 310 #define single_stepping(regs) (current->thread.dbcr0 & DBCR0_IC)
310 #define clear_single_step(regs) (current->thread.dbcr0 &= ~DBCR0_IC) 311 #define clear_single_step(regs) (current->thread.dbcr0 &= ~DBCR0_IC)
311 312
312 #else 313 #else
313 /* On non-4xx, the reason for the machine check or program 314 /* On non-4xx, the reason for the machine check or program
314 exception is in the MSR. */ 315 exception is in the MSR. */
315 #define get_reason(regs) ((regs)->msr) 316 #define get_reason(regs) ((regs)->msr)
316 #define get_mc_reason(regs) ((regs)->msr) 317 #define get_mc_reason(regs) ((regs)->msr)
317 #define REASON_FP 0x100000 318 #define REASON_FP 0x100000
318 #define REASON_ILLEGAL 0x80000 319 #define REASON_ILLEGAL 0x80000
319 #define REASON_PRIVILEGED 0x40000 320 #define REASON_PRIVILEGED 0x40000
320 #define REASON_TRAP 0x20000 321 #define REASON_TRAP 0x20000
321 322
322 #define single_stepping(regs) ((regs)->msr & MSR_SE) 323 #define single_stepping(regs) ((regs)->msr & MSR_SE)
323 #define clear_single_step(regs) ((regs)->msr &= ~MSR_SE) 324 #define clear_single_step(regs) ((regs)->msr &= ~MSR_SE)
324 #endif 325 #endif
325 326
326 /* 327 /*
327 * This is "fall-back" implementation for configurations 328 * This is "fall-back" implementation for configurations
328 * which don't provide platform-specific machine check info 329 * which don't provide platform-specific machine check info
329 */ 330 */
330 void __attribute__ ((weak)) 331 void __attribute__ ((weak))
331 platform_machine_check(struct pt_regs *regs) 332 platform_machine_check(struct pt_regs *regs)
332 { 333 {
333 } 334 }
334 335
335 void machine_check_exception(struct pt_regs *regs) 336 void machine_check_exception(struct pt_regs *regs)
336 { 337 {
337 int recover = 0; 338 int recover = 0;
338 unsigned long reason = get_mc_reason(regs); 339 unsigned long reason = get_mc_reason(regs);
339 340
340 /* See if any machine dependent calls */ 341 /* See if any machine dependent calls */
341 if (ppc_md.machine_check_exception) 342 if (ppc_md.machine_check_exception)
342 recover = ppc_md.machine_check_exception(regs); 343 recover = ppc_md.machine_check_exception(regs);
343 344
344 if (recover) 345 if (recover)
345 return; 346 return;
346 347
347 if (user_mode(regs)) { 348 if (user_mode(regs)) {
348 regs->msr |= MSR_RI; 349 regs->msr |= MSR_RI;
349 _exception(SIGBUS, regs, BUS_ADRERR, regs->nip); 350 _exception(SIGBUS, regs, BUS_ADRERR, regs->nip);
350 return; 351 return;
351 } 352 }
352 353
353 #if defined(CONFIG_8xx) && defined(CONFIG_PCI) 354 #if defined(CONFIG_8xx) && defined(CONFIG_PCI)
354 /* the qspan pci read routines can cause machine checks -- Cort */ 355 /* the qspan pci read routines can cause machine checks -- Cort */
355 bad_page_fault(regs, regs->dar, SIGBUS); 356 bad_page_fault(regs, regs->dar, SIGBUS);
356 return; 357 return;
357 #endif 358 #endif
358 359
359 if (debugger_fault_handler(regs)) { 360 if (debugger_fault_handler(regs)) {
360 regs->msr |= MSR_RI; 361 regs->msr |= MSR_RI;
361 return; 362 return;
362 } 363 }
363 364
364 if (check_io_access(regs)) 365 if (check_io_access(regs))
365 return; 366 return;
366 367
367 #if defined(CONFIG_4xx) && !defined(CONFIG_440A) 368 #if defined(CONFIG_4xx) && !defined(CONFIG_440A)
368 if (reason & ESR_IMCP) { 369 if (reason & ESR_IMCP) {
369 printk("Instruction"); 370 printk("Instruction");
370 mtspr(SPRN_ESR, reason & ~ESR_IMCP); 371 mtspr(SPRN_ESR, reason & ~ESR_IMCP);
371 } else 372 } else
372 printk("Data"); 373 printk("Data");
373 printk(" machine check in kernel mode.\n"); 374 printk(" machine check in kernel mode.\n");
374 #elif defined(CONFIG_440A) 375 #elif defined(CONFIG_440A)
375 printk("Machine check in kernel mode.\n"); 376 printk("Machine check in kernel mode.\n");
376 if (reason & ESR_IMCP){ 377 if (reason & ESR_IMCP){
377 printk("Instruction Synchronous Machine Check exception\n"); 378 printk("Instruction Synchronous Machine Check exception\n");
378 mtspr(SPRN_ESR, reason & ~ESR_IMCP); 379 mtspr(SPRN_ESR, reason & ~ESR_IMCP);
379 } 380 }
380 else { 381 else {
381 u32 mcsr = mfspr(SPRN_MCSR); 382 u32 mcsr = mfspr(SPRN_MCSR);
382 if (mcsr & MCSR_IB) 383 if (mcsr & MCSR_IB)
383 printk("Instruction Read PLB Error\n"); 384 printk("Instruction Read PLB Error\n");
384 if (mcsr & MCSR_DRB) 385 if (mcsr & MCSR_DRB)
385 printk("Data Read PLB Error\n"); 386 printk("Data Read PLB Error\n");
386 if (mcsr & MCSR_DWB) 387 if (mcsr & MCSR_DWB)
387 printk("Data Write PLB Error\n"); 388 printk("Data Write PLB Error\n");
388 if (mcsr & MCSR_TLBP) 389 if (mcsr & MCSR_TLBP)
389 printk("TLB Parity Error\n"); 390 printk("TLB Parity Error\n");
390 if (mcsr & MCSR_ICP){ 391 if (mcsr & MCSR_ICP){
391 flush_instruction_cache(); 392 flush_instruction_cache();
392 printk("I-Cache Parity Error\n"); 393 printk("I-Cache Parity Error\n");
393 } 394 }
394 if (mcsr & MCSR_DCSP) 395 if (mcsr & MCSR_DCSP)
395 printk("D-Cache Search Parity Error\n"); 396 printk("D-Cache Search Parity Error\n");
396 if (mcsr & MCSR_DCFP) 397 if (mcsr & MCSR_DCFP)
397 printk("D-Cache Flush Parity Error\n"); 398 printk("D-Cache Flush Parity Error\n");
398 if (mcsr & MCSR_IMPE) 399 if (mcsr & MCSR_IMPE)
399 printk("Machine Check exception is imprecise\n"); 400 printk("Machine Check exception is imprecise\n");
400 401
401 /* Clear MCSR */ 402 /* Clear MCSR */
402 mtspr(SPRN_MCSR, mcsr); 403 mtspr(SPRN_MCSR, mcsr);
403 } 404 }
404 #elif defined (CONFIG_E500) 405 #elif defined (CONFIG_E500)
405 printk("Machine check in kernel mode.\n"); 406 printk("Machine check in kernel mode.\n");
406 printk("Caused by (from MCSR=%lx): ", reason); 407 printk("Caused by (from MCSR=%lx): ", reason);
407 408
408 if (reason & MCSR_MCP) 409 if (reason & MCSR_MCP)
409 printk("Machine Check Signal\n"); 410 printk("Machine Check Signal\n");
410 if (reason & MCSR_ICPERR) 411 if (reason & MCSR_ICPERR)
411 printk("Instruction Cache Parity Error\n"); 412 printk("Instruction Cache Parity Error\n");
412 if (reason & MCSR_DCP_PERR) 413 if (reason & MCSR_DCP_PERR)
413 printk("Data Cache Push Parity Error\n"); 414 printk("Data Cache Push Parity Error\n");
414 if (reason & MCSR_DCPERR) 415 if (reason & MCSR_DCPERR)
415 printk("Data Cache Parity Error\n"); 416 printk("Data Cache Parity Error\n");
416 if (reason & MCSR_GL_CI) 417 if (reason & MCSR_GL_CI)
417 printk("Guarded Load or Cache-Inhibited stwcx.\n"); 418 printk("Guarded Load or Cache-Inhibited stwcx.\n");
418 if (reason & MCSR_BUS_IAERR) 419 if (reason & MCSR_BUS_IAERR)
419 printk("Bus - Instruction Address Error\n"); 420 printk("Bus - Instruction Address Error\n");
420 if (reason & MCSR_BUS_RAERR) 421 if (reason & MCSR_BUS_RAERR)
421 printk("Bus - Read Address Error\n"); 422 printk("Bus - Read Address Error\n");
422 if (reason & MCSR_BUS_WAERR) 423 if (reason & MCSR_BUS_WAERR)
423 printk("Bus - Write Address Error\n"); 424 printk("Bus - Write Address Error\n");
424 if (reason & MCSR_BUS_IBERR) 425 if (reason & MCSR_BUS_IBERR)
425 printk("Bus - Instruction Data Error\n"); 426 printk("Bus - Instruction Data Error\n");
426 if (reason & MCSR_BUS_RBERR) 427 if (reason & MCSR_BUS_RBERR)
427 printk("Bus - Read Data Bus Error\n"); 428 printk("Bus - Read Data Bus Error\n");
428 if (reason & MCSR_BUS_WBERR) 429 if (reason & MCSR_BUS_WBERR)
429 printk("Bus - Write Data Bus Error\n"); 430 printk("Bus - Write Data Bus Error\n");
430 if (reason & MCSR_BUS_IPERR) 431 if (reason & MCSR_BUS_IPERR)
431 printk("Bus - Instruction Parity Error\n"); 432 printk("Bus - Instruction Parity Error\n");
432 if (reason & MCSR_BUS_RPERR) 433 if (reason & MCSR_BUS_RPERR)
433 printk("Bus - Read Parity Error\n"); 434 printk("Bus - Read Parity Error\n");
434 #elif defined (CONFIG_E200) 435 #elif defined (CONFIG_E200)
435 printk("Machine check in kernel mode.\n"); 436 printk("Machine check in kernel mode.\n");
436 printk("Caused by (from MCSR=%lx): ", reason); 437 printk("Caused by (from MCSR=%lx): ", reason);
437 438
438 if (reason & MCSR_MCP) 439 if (reason & MCSR_MCP)
439 printk("Machine Check Signal\n"); 440 printk("Machine Check Signal\n");
440 if (reason & MCSR_CP_PERR) 441 if (reason & MCSR_CP_PERR)
441 printk("Cache Push Parity Error\n"); 442 printk("Cache Push Parity Error\n");
442 if (reason & MCSR_CPERR) 443 if (reason & MCSR_CPERR)
443 printk("Cache Parity Error\n"); 444 printk("Cache Parity Error\n");
444 if (reason & MCSR_EXCP_ERR) 445 if (reason & MCSR_EXCP_ERR)
445 printk("ISI, ITLB, or Bus Error on first instruction fetch for an exception handler\n"); 446 printk("ISI, ITLB, or Bus Error on first instruction fetch for an exception handler\n");
446 if (reason & MCSR_BUS_IRERR) 447 if (reason & MCSR_BUS_IRERR)
447 printk("Bus - Read Bus Error on instruction fetch\n"); 448 printk("Bus - Read Bus Error on instruction fetch\n");
448 if (reason & MCSR_BUS_DRERR) 449 if (reason & MCSR_BUS_DRERR)
449 printk("Bus - Read Bus Error on data load\n"); 450 printk("Bus - Read Bus Error on data load\n");
450 if (reason & MCSR_BUS_WRERR) 451 if (reason & MCSR_BUS_WRERR)
451 printk("Bus - Write Bus Error on buffered store or cache line push\n"); 452 printk("Bus - Write Bus Error on buffered store or cache line push\n");
452 #else /* !CONFIG_4xx && !CONFIG_E500 && !CONFIG_E200 */ 453 #else /* !CONFIG_4xx && !CONFIG_E500 && !CONFIG_E200 */
453 printk("Machine check in kernel mode.\n"); 454 printk("Machine check in kernel mode.\n");
454 printk("Caused by (from SRR1=%lx): ", reason); 455 printk("Caused by (from SRR1=%lx): ", reason);
455 switch (reason & 0x601F0000) { 456 switch (reason & 0x601F0000) {
456 case 0x80000: 457 case 0x80000:
457 printk("Machine check signal\n"); 458 printk("Machine check signal\n");
458 break; 459 break;
459 case 0: /* for 601 */ 460 case 0: /* for 601 */
460 case 0x40000: 461 case 0x40000:
461 case 0x140000: /* 7450 MSS error and TEA */ 462 case 0x140000: /* 7450 MSS error and TEA */
462 printk("Transfer error ack signal\n"); 463 printk("Transfer error ack signal\n");
463 break; 464 break;
464 case 0x20000: 465 case 0x20000:
465 printk("Data parity error signal\n"); 466 printk("Data parity error signal\n");
466 break; 467 break;
467 case 0x10000: 468 case 0x10000:
468 printk("Address parity error signal\n"); 469 printk("Address parity error signal\n");
469 break; 470 break;
470 case 0x20000000: 471 case 0x20000000:
471 printk("L1 Data Cache error\n"); 472 printk("L1 Data Cache error\n");
472 break; 473 break;
473 case 0x40000000: 474 case 0x40000000:
474 printk("L1 Instruction Cache error\n"); 475 printk("L1 Instruction Cache error\n");
475 break; 476 break;
476 case 0x00100000: 477 case 0x00100000:
477 printk("L2 data cache parity error\n"); 478 printk("L2 data cache parity error\n");
478 break; 479 break;
479 default: 480 default:
480 printk("Unknown values in msr\n"); 481 printk("Unknown values in msr\n");
481 } 482 }
482 #endif /* CONFIG_4xx */ 483 #endif /* CONFIG_4xx */
483 484
484 /* 485 /*
485 * Optional platform-provided routine to print out 486 * Optional platform-provided routine to print out
486 * additional info, e.g. bus error registers. 487 * additional info, e.g. bus error registers.
487 */ 488 */
488 platform_machine_check(regs); 489 platform_machine_check(regs);
489 490
490 if (debugger_fault_handler(regs)) 491 if (debugger_fault_handler(regs))
491 return; 492 return;
492 die("Machine check", regs, SIGBUS); 493 die("Machine check", regs, SIGBUS);
493 494
494 /* Must die if the interrupt is not recoverable */ 495 /* Must die if the interrupt is not recoverable */
495 if (!(regs->msr & MSR_RI)) 496 if (!(regs->msr & MSR_RI))
496 panic("Unrecoverable Machine check"); 497 panic("Unrecoverable Machine check");
497 } 498 }
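
The recover logic at the top of machine_check_exception() gives the platform
first refusal: if ppc_md.machine_check_exception() returns non-zero, the event
counts as handled and none of the reporting below runs. A minimal user-space
mock of that dispatch shape -- only the hook-pointer pattern is taken from the
code above; the struct, the status bit, and the board name are invented for
illustration:

    #include <stdio.h>

    struct pt_regs { unsigned long msr; };   /* stand-in for the kernel type */

    /* Mirrors ppc_md.machine_check_exception: a board may install a
     * handler; returning non-zero means "recovered, stop here". */
    static int (*machine_check_hook)(struct pt_regs *);

    static int myboard_machine_check(struct pt_regs *regs)
    {
        /* 0x4000 is a made-up "known benign" status bit for this demo */
        if (regs->msr & 0x4000)
            return 1;                        /* recovered */
        return 0;                            /* not ours, keep reporting */
    }

    int main(void)
    {
        struct pt_regs regs = { .msr = 0x4000 };
        int recover = 0;

        machine_check_hook = myboard_machine_check;   /* board setup code */
        if (machine_check_hook)
            recover = machine_check_hook(&regs);
        printf("%s\n", recover ? "recovered by platform hook"
                               : "fall through to generic handler");
        return 0;
    }
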
498 499
499 void SMIException(struct pt_regs *regs) 500 void SMIException(struct pt_regs *regs)
500 { 501 {
501 die("System Management Interrupt", regs, SIGABRT); 502 die("System Management Interrupt", regs, SIGABRT);
502 } 503 }
503 504
504 void unknown_exception(struct pt_regs *regs) 505 void unknown_exception(struct pt_regs *regs)
505 { 506 {
506 printk("Bad trap at PC: %lx, SR: %lx, vector=%lx\n", 507 printk("Bad trap at PC: %lx, SR: %lx, vector=%lx\n",
507 regs->nip, regs->msr, regs->trap); 508 regs->nip, regs->msr, regs->trap);
508 509
509 _exception(SIGTRAP, regs, 0, 0); 510 _exception(SIGTRAP, regs, 0, 0);
510 } 511 }
511 512
512 void instruction_breakpoint_exception(struct pt_regs *regs) 513 void instruction_breakpoint_exception(struct pt_regs *regs)
513 { 514 {
514 if (notify_die(DIE_IABR_MATCH, "iabr_match", regs, 5, 515 if (notify_die(DIE_IABR_MATCH, "iabr_match", regs, 5,
515 5, SIGTRAP) == NOTIFY_STOP) 516 5, SIGTRAP) == NOTIFY_STOP)
516 return; 517 return;
517 if (debugger_iabr_match(regs)) 518 if (debugger_iabr_match(regs))
518 return; 519 return;
519 _exception(SIGTRAP, regs, TRAP_BRKPT, regs->nip); 520 _exception(SIGTRAP, regs, TRAP_BRKPT, regs->nip);
520 } 521 }
521 522
522 void RunModeException(struct pt_regs *regs) 523 void RunModeException(struct pt_regs *regs)
523 { 524 {
524 _exception(SIGTRAP, regs, 0, 0); 525 _exception(SIGTRAP, regs, 0, 0);
525 } 526 }
526 527
527 void __kprobes single_step_exception(struct pt_regs *regs) 528 void __kprobes single_step_exception(struct pt_regs *regs)
528 { 529 {
529 regs->msr &= ~(MSR_SE | MSR_BE); /* Turn off 'trace' bits */ 530 regs->msr &= ~(MSR_SE | MSR_BE); /* Turn off 'trace' bits */
530 531
531 if (notify_die(DIE_SSTEP, "single_step", regs, 5, 532 if (notify_die(DIE_SSTEP, "single_step", regs, 5,
532 5, SIGTRAP) == NOTIFY_STOP) 533 5, SIGTRAP) == NOTIFY_STOP)
533 return; 534 return;
534 if (debugger_sstep(regs)) 535 if (debugger_sstep(regs))
535 return; 536 return;
536 537
537 _exception(SIGTRAP, regs, TRAP_TRACE, regs->nip); 538 _exception(SIGTRAP, regs, TRAP_TRACE, regs->nip);
538 } 539 }
539 540
540 /* 541 /*
541 * After we have successfully emulated an instruction, we have to 542 * After we have successfully emulated an instruction, we have to
542 * check if the instruction was being single-stepped, and if so, 543 * check if the instruction was being single-stepped, and if so,
543 * pretend we got a single-step exception. This was pointed out 544 * pretend we got a single-step exception. This was pointed out
544 * by Kumar Gala. -- paulus 545 * by Kumar Gala. -- paulus
545 */ 546 */
546 static void emulate_single_step(struct pt_regs *regs) 547 static void emulate_single_step(struct pt_regs *regs)
547 { 548 {
548 if (single_stepping(regs)) { 549 if (single_stepping(regs)) {
549 clear_single_step(regs); 550 clear_single_step(regs);
550 _exception(SIGTRAP, regs, TRAP_TRACE, 0); 551 _exception(SIGTRAP, regs, TRAP_TRACE, 0);
551 } 552 }
552 } 553 }
553 554
554 static inline int __parse_fpscr(unsigned long fpscr) 555 static inline int __parse_fpscr(unsigned long fpscr)
555 { 556 {
556 int ret = 0; 557 int ret = 0;
557 558
558 /* Invalid operation */ 559 /* Invalid operation */
559 if ((fpscr & FPSCR_VE) && (fpscr & FPSCR_VX)) 560 if ((fpscr & FPSCR_VE) && (fpscr & FPSCR_VX))
560 ret = FPE_FLTINV; 561 ret = FPE_FLTINV;
561 562
562 /* Overflow */ 563 /* Overflow */
563 else if ((fpscr & FPSCR_OE) && (fpscr & FPSCR_OX)) 564 else if ((fpscr & FPSCR_OE) && (fpscr & FPSCR_OX))
564 ret = FPE_FLTOVF; 565 ret = FPE_FLTOVF;
565 566
566 /* Underflow */ 567 /* Underflow */
567 else if ((fpscr & FPSCR_UE) && (fpscr & FPSCR_UX)) 568 else if ((fpscr & FPSCR_UE) && (fpscr & FPSCR_UX))
568 ret = FPE_FLTUND; 569 ret = FPE_FLTUND;
569 570
570 /* Divide by zero */ 571 /* Divide by zero */
571 else if ((fpscr & FPSCR_ZE) && (fpscr & FPSCR_ZX)) 572 else if ((fpscr & FPSCR_ZE) && (fpscr & FPSCR_ZX))
572 ret = FPE_FLTDIV; 573 ret = FPE_FLTDIV;
573 574
574 /* Inexact result */ 575 /* Inexact result */
575 else if ((fpscr & FPSCR_XE) && (fpscr & FPSCR_XX)) 576 else if ((fpscr & FPSCR_XE) && (fpscr & FPSCR_XX))
576 ret = FPE_FLTRES; 577 ret = FPE_FLTRES;
577 578
578 return ret; 579 return ret;
579 } 580 }
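
__parse_fpscr() pairs each FPSCR exception-enable bit with its sticky status
bit and maps the first pair that is both enabled and raised to a SIGFPE
si_code. A compilable sketch of the same decoding, trimmed to two of the five
cases; the FPSCR_* values are written out locally as assumptions for the
sketch (the kernel takes them from asm/reg.h):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <signal.h>                     /* FPE_* si_codes */

    /* Assumed PowerPC FPSCR layout: status bits in the high half,
     * matching enable bits in the low half. */
    #define FPSCR_VX 0x20000000u            /* invalid-op status     */
    #define FPSCR_ZX 0x04000000u            /* divide-by-zero status */
    #define FPSCR_VE 0x00000080u            /* invalid-op enable     */
    #define FPSCR_ZE 0x00000010u            /* divide-by-zero enable */

    static int parse_fpscr(unsigned long fpscr)
    {
        if ((fpscr & FPSCR_VE) && (fpscr & FPSCR_VX))
            return FPE_FLTINV;
        if ((fpscr & FPSCR_ZE) && (fpscr & FPSCR_ZX))
            return FPE_FLTDIV;
        return 0;   /* overflow, underflow and inexact elided for brevity */
    }

    int main(void)
    {
        /* raised and enabled -> a signal code; raised but masked -> none */
        printf("%d\n", parse_fpscr(FPSCR_ZE | FPSCR_ZX) == FPE_FLTDIV);
        printf("%d\n", parse_fpscr(FPSCR_ZX) == 0);
        return 0;
    }
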
580 581
581 static void parse_fpe(struct pt_regs *regs) 582 static void parse_fpe(struct pt_regs *regs)
582 { 583 {
583 int code = 0; 584 int code = 0;
584 585
585 flush_fp_to_thread(current); 586 flush_fp_to_thread(current);
586 587
587 code = __parse_fpscr(current->thread.fpscr.val); 588 code = __parse_fpscr(current->thread.fpscr.val);
588 589
589 _exception(SIGFPE, regs, code, regs->nip); 590 _exception(SIGFPE, regs, code, regs->nip);
590 } 591 }
591 592
592 /* 593 /*
593 * Illegal instruction emulation support. Originally written to 594 * Illegal instruction emulation support. Originally written to
594 * provide the PVR to user applications using the mfspr rd, PVR. 595 * provide the PVR to user applications using the mfspr rd, PVR.
595 * Return non-zero if we can't emulate, or -EFAULT if the associated 596 * Return non-zero if we can't emulate, or -EFAULT if the associated
596 * memory access caused an access fault. Return zero on success. 597 * memory access caused an access fault. Return zero on success.
597 * 598 *
598 * There are a couple of ways to do this, either "decode" the instruction 599 * There are a couple of ways to do this, either "decode" the instruction
599 * or directly match lots of bits. In this case, matching lots of 600 * or directly match lots of bits. In this case, matching lots of
600 * bits is faster and easier. 601 * bits is faster and easier.
601 * 602 *
602 */ 603 */
603 #define INST_MFSPR_PVR 0x7c1f42a6 604 #define INST_MFSPR_PVR 0x7c1f42a6
604 #define INST_MFSPR_PVR_MASK 0xfc1fffff 605 #define INST_MFSPR_PVR_MASK 0xfc1fffff
605 606
606 #define INST_DCBA 0x7c0005ec 607 #define INST_DCBA 0x7c0005ec
607 #define INST_DCBA_MASK 0xfc0007fe 608 #define INST_DCBA_MASK 0xfc0007fe
608 609
609 #define INST_MCRXR 0x7c000400 610 #define INST_MCRXR 0x7c000400
610 #define INST_MCRXR_MASK 0xfc0007fe 611 #define INST_MCRXR_MASK 0xfc0007fe
611 612
612 #define INST_STRING 0x7c00042a 613 #define INST_STRING 0x7c00042a
613 #define INST_STRING_MASK 0xfc0007fe 614 #define INST_STRING_MASK 0xfc0007fe
614 #define INST_STRING_GEN_MASK 0xfc00067e 615 #define INST_STRING_GEN_MASK 0xfc00067e
615 #define INST_LSWI 0x7c0004aa 616 #define INST_LSWI 0x7c0004aa
616 #define INST_LSWX 0x7c00042a 617 #define INST_LSWX 0x7c00042a
617 #define INST_STSWI 0x7c0005aa 618 #define INST_STSWI 0x7c0005aa
618 #define INST_STSWX 0x7c00052a 619 #define INST_STSWX 0x7c00052a
619 620
620 #define INST_POPCNTB 0x7c0000f4 621 #define INST_POPCNTB 0x7c0000f4
621 #define INST_POPCNTB_MASK 0xfc0007fe 622 #define INST_POPCNTB_MASK 0xfc0007fe
622 623
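As the comment above says, these patterns match bits rather than decode
fields: each mask keeps the primary and extended opcode bits and clears the
register fields, so one AND-and-compare recognizes the instruction for any rD.
A small self-check against a hand-assembled mfspr rD,PVR -- the encode helper
is written for this sketch, not taken from the patch:

    #include <stdio.h>
    #include <stdint.h>

    #define INST_MFSPR_PVR      0x7c1f42a6
    #define INST_MFSPR_PVR_MASK 0xfc1fffff

    /* Build "mfspr rD, 287" (PVR is SPR 287).  The PowerPC ISA stores
     * the 10-bit SPR number with its two 5-bit halves swapped. */
    static uint32_t encode_mfspr_pvr(unsigned rd)
    {
        unsigned spr = 287;
        return (31u << 26) | (rd << 21)
             | ((spr & 0x1f) << 16) | (((spr >> 5) & 0x1f) << 11)
             | (339u << 1);                  /* XO for mfspr */
    }

    int main(void)
    {
        for (unsigned rd = 0; rd < 32; rd++)
            if ((encode_mfspr_pvr(rd) & INST_MFSPR_PVR_MASK)
                != INST_MFSPR_PVR) {
                printf("miss for rd=%u\n", rd);
                return 1;
            }
        printf("mask/match hits mfspr rD,PVR for all 32 rD values\n");
        return 0;
    }
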
623 static int emulate_string_inst(struct pt_regs *regs, u32 instword) 624 static int emulate_string_inst(struct pt_regs *regs, u32 instword)
624 { 625 {
625 u8 rT = (instword >> 21) & 0x1f; 626 u8 rT = (instword >> 21) & 0x1f;
626 u8 rA = (instword >> 16) & 0x1f; 627 u8 rA = (instword >> 16) & 0x1f;
627 u8 NB_RB = (instword >> 11) & 0x1f; 628 u8 NB_RB = (instword >> 11) & 0x1f;
628 u32 num_bytes; 629 u32 num_bytes;
629 unsigned long EA; 630 unsigned long EA;
630 int pos = 0; 631 int pos = 0;
631 632
632 /* Early out if we are an invalid form of lswx */ 633 /* Early out if we are an invalid form of lswx */
633 if ((instword & INST_STRING_MASK) == INST_LSWX) 634 if ((instword & INST_STRING_MASK) == INST_LSWX)
634 if ((rT == rA) || (rT == NB_RB)) 635 if ((rT == rA) || (rT == NB_RB))
635 return -EINVAL; 636 return -EINVAL;
636 637
637 EA = (rA == 0) ? 0 : regs->gpr[rA]; 638 EA = (rA == 0) ? 0 : regs->gpr[rA];
638 639
639 switch (instword & INST_STRING_MASK) { 640 switch (instword & INST_STRING_MASK) {
640 case INST_LSWX: 641 case INST_LSWX:
641 case INST_STSWX: 642 case INST_STSWX:
642 EA += NB_RB; 643 EA += NB_RB;
643 num_bytes = regs->xer & 0x7f; 644 num_bytes = regs->xer & 0x7f;
644 break; 645 break;
645 case INST_LSWI: 646 case INST_LSWI:
646 case INST_STSWI: 647 case INST_STSWI:
647 num_bytes = (NB_RB == 0) ? 32 : NB_RB; 648 num_bytes = (NB_RB == 0) ? 32 : NB_RB;
648 break; 649 break;
649 default: 650 default:
650 return -EINVAL; 651 return -EINVAL;
651 } 652 }
652 653
653 while (num_bytes != 0) 654 while (num_bytes != 0)
654 { 655 {
655 u8 val; 656 u8 val;
656 u32 shift = 8 * (3 - (pos & 0x3)); 657 u32 shift = 8 * (3 - (pos & 0x3));
657 658
658 switch ((instword & INST_STRING_MASK)) { 659 switch ((instword & INST_STRING_MASK)) {
659 case INST_LSWX: 660 case INST_LSWX:
660 case INST_LSWI: 661 case INST_LSWI:
661 if (get_user(val, (u8 __user *)EA)) 662 if (get_user(val, (u8 __user *)EA))
662 return -EFAULT; 663 return -EFAULT;
663 /* first time updating this reg, 664 /* first time updating this reg,
664 * zero it out */ 665 * zero it out */
665 if (pos == 0) 666 if (pos == 0)
666 regs->gpr[rT] = 0; 667 regs->gpr[rT] = 0;
667 regs->gpr[rT] |= val << shift; 668 regs->gpr[rT] |= val << shift;
668 break; 669 break;
669 case INST_STSWI: 670 case INST_STSWI:
670 case INST_STSWX: 671 case INST_STSWX:
671 val = regs->gpr[rT] >> shift; 672 val = regs->gpr[rT] >> shift;
672 if (put_user(val, (u8 __user *)EA)) 673 if (put_user(val, (u8 __user *)EA))
673 return -EFAULT; 674 return -EFAULT;
674 break; 675 break;
675 } 676 }
676 /* move EA to next address */ 677 /* move EA to next address */
677 EA += 1; 678 EA += 1;
678 num_bytes--; 679 num_bytes--;
679 680
680 /* manage our position within the register */ 681 /* manage our position within the register */
681 if (++pos == 4) { 682 if (++pos == 4) {
682 pos = 0; 683 pos = 0;
683 if (++rT == 32) 684 if (++rT == 32)
684 rT = 0; 685 rT = 0;
685 } 686 }
686 } 687 }
687 688
688 return 0; 689 return 0;
689 } 690 }
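
The loop above packs bytes big-endian: within a register the first byte lands
in bits 31-24 (shift 24) and the fourth in bits 7-0, after which the target
register advances, wrapping from r31 back to r0. The packing rule in
isolation, as a runnable sketch -- gpr[] stands in for the register file, and
the start register is chosen to show the wrap:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const uint8_t src[] = "ABCDEFG";   /* 7 bytes, like lswi with NB=7 */
        uint32_t gpr[32] = { 0 };
        unsigned rT = 31, pos = 0;         /* start at r31 to force the wrap */

        for (unsigned i = 0; i < 7; i++) {
            unsigned shift = 8 * (3 - (pos & 3));
            if (pos == 0)
                gpr[rT] = 0;               /* first byte of this register */
            gpr[rT] |= (uint32_t)src[i] << shift;
            if (++pos == 4) {
                pos = 0;
                if (++rT == 32)
                    rT = 0;                /* register number wraps to r0 */
            }
        }
        /* expect r31=41424344 ("ABCD") and r0=45464700 ("EFG" + zero) */
        printf("r31=%08x r0=%08x\n", gpr[31], gpr[0]);
        return 0;
    }
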
690 691
691 static int emulate_popcntb_inst(struct pt_regs *regs, u32 instword) 692 static int emulate_popcntb_inst(struct pt_regs *regs, u32 instword)
692 { 693 {
693 u32 ra,rs; 694 u32 ra,rs;
694 unsigned long tmp; 695 unsigned long tmp;
695 696
696 ra = (instword >> 16) & 0x1f; 697 ra = (instword >> 16) & 0x1f;
697 rs = (instword >> 21) & 0x1f; 698 rs = (instword >> 21) & 0x1f;
698 699
699 tmp = regs->gpr[rs]; 700 tmp = regs->gpr[rs];
700 tmp = tmp - ((tmp >> 1) & 0x5555555555555555ULL); 701 tmp = tmp - ((tmp >> 1) & 0x5555555555555555ULL);
701 tmp = (tmp & 0x3333333333333333ULL) + ((tmp >> 2) & 0x3333333333333333ULL); 702 tmp = (tmp & 0x3333333333333333ULL) + ((tmp >> 2) & 0x3333333333333333ULL);
702 tmp = (tmp + (tmp >> 4)) & 0x0f0f0f0f0f0f0f0fULL; 703 tmp = (tmp + (tmp >> 4)) & 0x0f0f0f0f0f0f0f0fULL;
703 regs->gpr[ra] = tmp; 704 regs->gpr[ra] = tmp;
704 705
705 return 0; 706 return 0;
706 } 707 }
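
The three masked adds above are the standard SWAR bit count stopped one step
early: after the final 0x0f0f... mask, each byte of the result holds the
population count of the corresponding byte of rs, which is exactly what
popcntb is defined to produce (per-byte counts, not one total). A runnable
cross-check against a naive per-byte count:

    #include <stdio.h>
    #include <stdint.h>

    /* Same steps as emulate_popcntb_inst(), on a 64-bit value. */
    static uint64_t popcntb(uint64_t tmp)
    {
        tmp = tmp - ((tmp >> 1) & 0x5555555555555555ULL); /* 2-bit sums */
        tmp = (tmp & 0x3333333333333333ULL)
            + ((tmp >> 2) & 0x3333333333333333ULL);       /* 4-bit sums */
        tmp = (tmp + (tmp >> 4)) & 0x0f0f0f0f0f0f0f0fULL; /* per-byte sums */
        return tmp;
    }

    int main(void)
    {
        uint64_t x = 0xff00f0a501234567ULL, r = popcntb(x);

        for (int i = 7; i >= 0; i--) {
            unsigned byte = (x >> (8 * i)) & 0xff, naive = 0;
            for (unsigned b = byte; b; b >>= 1)
                naive += b & 1;
            printf("byte %d: 0x%02x -> %llu (naive %u)\n", i, byte,
                   (unsigned long long)((r >> (8 * i)) & 0xff), naive);
        }
        return 0;
    }

On 32-bit parts unsigned long is 32 bits, so in the kernel version the same
ULL constants simply truncate to their low halves when the result is stored.
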
707 708
708 static int emulate_instruction(struct pt_regs *regs) 709 static int emulate_instruction(struct pt_regs *regs)
709 { 710 {
710 u32 instword; 711 u32 instword;
711 u32 rd; 712 u32 rd;
712 713
713 if (!user_mode(regs) || (regs->msr & MSR_LE)) 714 if (!user_mode(regs) || (regs->msr & MSR_LE))
714 return -EINVAL; 715 return -EINVAL;
715 CHECK_FULL_REGS(regs); 716 CHECK_FULL_REGS(regs);
716 717
717 if (get_user(instword, (u32 __user *)(regs->nip))) 718 if (get_user(instword, (u32 __user *)(regs->nip)))
718 return -EFAULT; 719 return -EFAULT;
719 720
720 /* Emulate the mfspr rD, PVR. */ 721 /* Emulate the mfspr rD, PVR. */
721 if ((instword & INST_MFSPR_PVR_MASK) == INST_MFSPR_PVR) { 722 if ((instword & INST_MFSPR_PVR_MASK) == INST_MFSPR_PVR) {
722 rd = (instword >> 21) & 0x1f; 723 rd = (instword >> 21) & 0x1f;
723 regs->gpr[rd] = mfspr(SPRN_PVR); 724 regs->gpr[rd] = mfspr(SPRN_PVR);
724 return 0; 725 return 0;
725 } 726 }
726 727
727 /* Emulating the dcba insn is just a no-op. */ 728 /* Emulating the dcba insn is just a no-op. */
728 if ((instword & INST_DCBA_MASK) == INST_DCBA) 729 if ((instword & INST_DCBA_MASK) == INST_DCBA)
729 return 0; 730 return 0;
730 731
731 /* Emulate the mcrxr insn. */ 732 /* Emulate the mcrxr insn. */
732 if ((instword & INST_MCRXR_MASK) == INST_MCRXR) { 733 if ((instword & INST_MCRXR_MASK) == INST_MCRXR) {
733 int shift = (instword >> 21) & 0x1c; 734 int shift = (instword >> 21) & 0x1c;
734 unsigned long msk = 0xf0000000UL >> shift; 735 unsigned long msk = 0xf0000000UL >> shift;
735 736
736 regs->ccr = (regs->ccr & ~msk) | ((regs->xer >> shift) & msk); 737 regs->ccr = (regs->ccr & ~msk) | ((regs->xer >> shift) & msk);
737 regs->xer &= ~0xf0000000UL; 738 regs->xer &= ~0xf0000000UL;
738 return 0; 739 return 0;
739 } 740 }
740 741
741 /* Emulate load/store string insn. */ 742 /* Emulate load/store string insn. */
742 if ((instword & INST_STRING_GEN_MASK) == INST_STRING) 743 if ((instword & INST_STRING_GEN_MASK) == INST_STRING)
743 return emulate_string_inst(regs, instword); 744 return emulate_string_inst(regs, instword);
744 745
745 /* Emulate the popcntb (Population Count Bytes) instruction. */ 746 /* Emulate the popcntb (Population Count Bytes) instruction. */
746 if ((instword & INST_POPCNTB_MASK) == INST_POPCNTB) { 747 if ((instword & INST_POPCNTB_MASK) == INST_POPCNTB) {
747 return emulate_popcntb_inst(regs, instword); 748 return emulate_popcntb_inst(regs, instword);
748 } 749 }
749 750
750 return -EINVAL; 751 return -EINVAL;
751 } 752 }
752 753
753 int is_valid_bugaddr(unsigned long addr) 754 int is_valid_bugaddr(unsigned long addr)
754 { 755 {
755 return is_kernel_addr(addr); 756 return is_kernel_addr(addr);
756 } 757 }
757 758
758 void __kprobes program_check_exception(struct pt_regs *regs) 759 void __kprobes program_check_exception(struct pt_regs *regs)
759 { 760 {
760 unsigned int reason = get_reason(regs); 761 unsigned int reason = get_reason(regs);
761 extern int do_mathemu(struct pt_regs *regs); 762 extern int do_mathemu(struct pt_regs *regs);
762 763
763 /* We can now get here via a FP Unavailable exception if the core 764 /* We can now get here via a FP Unavailable exception if the core
764 * has no FPU, in that case the reason flags will be 0 */ 765 * has no FPU, in that case the reason flags will be 0 */
765 766
766 if (reason & REASON_FP) { 767 if (reason & REASON_FP) {
767 /* IEEE FP exception */ 768 /* IEEE FP exception */
768 parse_fpe(regs); 769 parse_fpe(regs);
769 return; 770 return;
770 } 771 }
771 if (reason & REASON_TRAP) { 772 if (reason & REASON_TRAP) {
772 /* trap exception */ 773 /* trap exception */
773 if (notify_die(DIE_BPT, "breakpoint", regs, 5, 5, SIGTRAP) 774 if (notify_die(DIE_BPT, "breakpoint", regs, 5, 5, SIGTRAP)
774 == NOTIFY_STOP) 775 == NOTIFY_STOP)
775 return; 776 return;
776 if (debugger_bpt(regs)) 777 if (debugger_bpt(regs))
777 return; 778 return;
778 779
779 if (!(regs->msr & MSR_PR) && /* not user-mode */ 780 if (!(regs->msr & MSR_PR) && /* not user-mode */
780 report_bug(regs->nip, regs) == BUG_TRAP_TYPE_WARN) { 781 report_bug(regs->nip, regs) == BUG_TRAP_TYPE_WARN) {
781 regs->nip += 4; 782 regs->nip += 4;
782 return; 783 return;
783 } 784 }
784 _exception(SIGTRAP, regs, TRAP_BRKPT, regs->nip); 785 _exception(SIGTRAP, regs, TRAP_BRKPT, regs->nip);
785 return; 786 return;
786 } 787 }
787 788
788 local_irq_enable(); 789 local_irq_enable();
789 790
790 #ifdef CONFIG_MATH_EMULATION 791 #ifdef CONFIG_MATH_EMULATION
791 /* (reason & REASON_ILLEGAL) would be the obvious thing here, 792 /* (reason & REASON_ILLEGAL) would be the obvious thing here,
792 * but there seems to be a hardware bug on the 405GP (RevD) 793 * but there seems to be a hardware bug on the 405GP (RevD)
793 * that means ESR is sometimes set incorrectly - either to 794 * that means ESR is sometimes set incorrectly - either to
794 * ESR_DST (!?) or 0. In the process of chasing this with the 795 * ESR_DST (!?) or 0. In the process of chasing this with the
795 * hardware people - not sure if it can happen on any illegal 796 * hardware people - not sure if it can happen on any illegal
796 * instruction or only on FP instructions, whether there is a 797 * instruction or only on FP instructions, whether there is a
797 * pattern to occurrences etc. -dgibson 31/Mar/2003 */ 798 * pattern to occurrences etc. -dgibson 31/Mar/2003 */
798 switch (do_mathemu(regs)) { 799 switch (do_mathemu(regs)) {
799 case 0: 800 case 0:
800 emulate_single_step(regs); 801 emulate_single_step(regs);
801 return; 802 return;
802 case 1: { 803 case 1: {
803 int code = 0; 804 int code = 0;
804 code = __parse_fpscr(current->thread.fpscr.val); 805 code = __parse_fpscr(current->thread.fpscr.val);
805 _exception(SIGFPE, regs, code, regs->nip); 806 _exception(SIGFPE, regs, code, regs->nip);
806 return; 807 return;
807 } 808 }
808 case -EFAULT: 809 case -EFAULT:
809 _exception(SIGSEGV, regs, SEGV_MAPERR, regs->nip); 810 _exception(SIGSEGV, regs, SEGV_MAPERR, regs->nip);
810 return; 811 return;
811 } 812 }
812 /* fall through on any other errors */ 813 /* fall through on any other errors */
813 #endif /* CONFIG_MATH_EMULATION */ 814 #endif /* CONFIG_MATH_EMULATION */
814 815
815 /* Try to emulate it if we should. */ 816 /* Try to emulate it if we should. */
816 if (reason & (REASON_ILLEGAL | REASON_PRIVILEGED)) { 817 if (reason & (REASON_ILLEGAL | REASON_PRIVILEGED)) {
817 switch (emulate_instruction(regs)) { 818 switch (emulate_instruction(regs)) {
818 case 0: 819 case 0:
819 regs->nip += 4; 820 regs->nip += 4;
820 emulate_single_step(regs); 821 emulate_single_step(regs);
821 return; 822 return;
822 case -EFAULT: 823 case -EFAULT:
823 _exception(SIGSEGV, regs, SEGV_MAPERR, regs->nip); 824 _exception(SIGSEGV, regs, SEGV_MAPERR, regs->nip);
824 return; 825 return;
825 } 826 }
826 } 827 }
827 828
828 if (reason & REASON_PRIVILEGED) 829 if (reason & REASON_PRIVILEGED)
829 _exception(SIGILL, regs, ILL_PRVOPC, regs->nip); 830 _exception(SIGILL, regs, ILL_PRVOPC, regs->nip);
830 else 831 else
831 _exception(SIGILL, regs, ILL_ILLOPC, regs->nip); 832 _exception(SIGILL, regs, ILL_ILLOPC, regs->nip);
832 } 833 }
833 834
834 void alignment_exception(struct pt_regs *regs) 835 void alignment_exception(struct pt_regs *regs)
835 { 836 {
836 int sig, code, fixed = 0; 837 int sig, code, fixed = 0;
837 838
838 /* we don't implement logging of alignment exceptions */ 839 /* we don't implement logging of alignment exceptions */
839 if (!(current->thread.align_ctl & PR_UNALIGN_SIGBUS)) 840 if (!(current->thread.align_ctl & PR_UNALIGN_SIGBUS))
840 fixed = fix_alignment(regs); 841 fixed = fix_alignment(regs);
841 842
842 if (fixed == 1) { 843 if (fixed == 1) {
843 regs->nip += 4; /* skip over emulated instruction */ 844 regs->nip += 4; /* skip over emulated instruction */
844 emulate_single_step(regs); 845 emulate_single_step(regs);
845 return; 846 return;
846 } 847 }
847 848
848 /* Operand address was bad */ 849 /* Operand address was bad */
849 if (fixed == -EFAULT) { 850 if (fixed == -EFAULT) {
850 sig = SIGSEGV; 851 sig = SIGSEGV;
851 code = SEGV_ACCERR; 852 code = SEGV_ACCERR;
852 } else { 853 } else {
853 sig = SIGBUS; 854 sig = SIGBUS;
854 code = BUS_ADRALN; 855 code = BUS_ADRALN;
855 } 856 }
856 if (user_mode(regs)) 857 if (user_mode(regs))
857 _exception(sig, regs, code, regs->dar); 858 _exception(sig, regs, code, regs->dar);
858 else 859 else
859 bad_page_fault(regs, regs->dar, sig); 860 bad_page_fault(regs, regs->dar, sig);
860 } 861 }
861 862
862 void StackOverflow(struct pt_regs *regs) 863 void StackOverflow(struct pt_regs *regs)
863 { 864 {
864 printk(KERN_CRIT "Kernel stack overflow in process %p, r1=%lx\n", 865 printk(KERN_CRIT "Kernel stack overflow in process %p, r1=%lx\n",
865 current, regs->gpr[1]); 866 current, regs->gpr[1]);
866 debugger(regs); 867 debugger(regs);
867 show_regs(regs); 868 show_regs(regs);
868 panic("kernel stack overflow"); 869 panic("kernel stack overflow");
869 } 870 }
870 871
871 void nonrecoverable_exception(struct pt_regs *regs) 872 void nonrecoverable_exception(struct pt_regs *regs)
872 { 873 {
873 printk(KERN_ERR "Non-recoverable exception at PC=%lx MSR=%lx\n", 874 printk(KERN_ERR "Non-recoverable exception at PC=%lx MSR=%lx\n",
874 regs->nip, regs->msr); 875 regs->nip, regs->msr);
875 debugger(regs); 876 debugger(regs);
876 die("nonrecoverable exception", regs, SIGKILL); 877 die("nonrecoverable exception", regs, SIGKILL);
877 } 878 }
878 879
879 void trace_syscall(struct pt_regs *regs) 880 void trace_syscall(struct pt_regs *regs)
880 { 881 {
881 printk("Task: %p(%d), PC: %08lX/%08lX, Syscall: %3ld, Result: %s%ld %s\n", 882 printk("Task: %p(%d), PC: %08lX/%08lX, Syscall: %3ld, Result: %s%ld %s\n",
882 current, current->pid, regs->nip, regs->link, regs->gpr[0], 883 current, current->pid, regs->nip, regs->link, regs->gpr[0],
883 regs->ccr&0x10000000?"Error=":"", regs->gpr[3], print_tainted()); 884 regs->ccr&0x10000000?"Error=":"", regs->gpr[3], print_tainted());
884 } 885 }
885 886
886 void kernel_fp_unavailable_exception(struct pt_regs *regs) 887 void kernel_fp_unavailable_exception(struct pt_regs *regs)
887 { 888 {
888 printk(KERN_EMERG "Unrecoverable FP Unavailable Exception " 889 printk(KERN_EMERG "Unrecoverable FP Unavailable Exception "
889 "%lx at %lx\n", regs->trap, regs->nip); 890 "%lx at %lx\n", regs->trap, regs->nip);
890 die("Unrecoverable FP Unavailable Exception", regs, SIGABRT); 891 die("Unrecoverable FP Unavailable Exception", regs, SIGABRT);
891 } 892 }
892 893
893 void altivec_unavailable_exception(struct pt_regs *regs) 894 void altivec_unavailable_exception(struct pt_regs *regs)
894 { 895 {
895 if (user_mode(regs)) { 896 if (user_mode(regs)) {
896 /* A user program has executed an altivec instruction, 897 /* A user program has executed an altivec instruction,
897 but this kernel doesn't support altivec. */ 898 but this kernel doesn't support altivec. */
898 _exception(SIGILL, regs, ILL_ILLOPC, regs->nip); 899 _exception(SIGILL, regs, ILL_ILLOPC, regs->nip);
899 return; 900 return;
900 } 901 }
901 902
902 printk(KERN_EMERG "Unrecoverable VMX/Altivec Unavailable Exception " 903 printk(KERN_EMERG "Unrecoverable VMX/Altivec Unavailable Exception "
903 "%lx at %lx\n", regs->trap, regs->nip); 904 "%lx at %lx\n", regs->trap, regs->nip);
904 die("Unrecoverable VMX/Altivec Unavailable Exception", regs, SIGABRT); 905 die("Unrecoverable VMX/Altivec Unavailable Exception", regs, SIGABRT);
905 } 906 }
906 907
907 void performance_monitor_exception(struct pt_regs *regs) 908 void performance_monitor_exception(struct pt_regs *regs)
908 { 909 {
909 perf_irq(regs); 910 perf_irq(regs);
910 } 911 }
911 912
912 #ifdef CONFIG_8xx 913 #ifdef CONFIG_8xx
913 void SoftwareEmulation(struct pt_regs *regs) 914 void SoftwareEmulation(struct pt_regs *regs)
914 { 915 {
915 extern int do_mathemu(struct pt_regs *); 916 extern int do_mathemu(struct pt_regs *);
916 extern int Soft_emulate_8xx(struct pt_regs *); 917 extern int Soft_emulate_8xx(struct pt_regs *);
917 int errcode; 918 int errcode;
918 919
919 CHECK_FULL_REGS(regs); 920 CHECK_FULL_REGS(regs);
920 921
921 if (!user_mode(regs)) { 922 if (!user_mode(regs)) {
922 debugger(regs); 923 debugger(regs);
923 die("Kernel Mode Software FPU Emulation", regs, SIGFPE); 924 die("Kernel Mode Software FPU Emulation", regs, SIGFPE);
924 } 925 }
925 926
926 #ifdef CONFIG_MATH_EMULATION 927 #ifdef CONFIG_MATH_EMULATION
927 errcode = do_mathemu(regs); 928 errcode = do_mathemu(regs);
928 929
929 switch (errcode) { 930 switch (errcode) {
930 case 0: 931 case 0:
931 emulate_single_step(regs); 932 emulate_single_step(regs);
932 return; 933 return;
933 case 1: { 934 case 1: {
934 int code = 0; 935 int code = 0;
935 code = __parse_fpscr(current->thread.fpscr.val); 936 code = __parse_fpscr(current->thread.fpscr.val);
936 _exception(SIGFPE, regs, code, regs->nip); 937 _exception(SIGFPE, regs, code, regs->nip);
937 return; 938 return;
938 } 939 }
939 case -EFAULT: 940 case -EFAULT:
940 _exception(SIGSEGV, regs, SEGV_MAPERR, regs->nip); 941 _exception(SIGSEGV, regs, SEGV_MAPERR, regs->nip);
941 return; 942 return;
942 default: 943 default:
943 _exception(SIGILL, regs, ILL_ILLOPC, regs->nip); 944 _exception(SIGILL, regs, ILL_ILLOPC, regs->nip);
944 return; 945 return;
945 } 946 }
946 947
947 #else 948 #else
948 errcode = Soft_emulate_8xx(regs); 949 errcode = Soft_emulate_8xx(regs);
949 switch (errcode) { 950 switch (errcode) {
950 case 0: 951 case 0:
951 emulate_single_step(regs); 952 emulate_single_step(regs);
952 return; 953 return;
953 case 1: 954 case 1:
954 _exception(SIGILL, regs, ILL_ILLOPC, regs->nip); 955 _exception(SIGILL, regs, ILL_ILLOPC, regs->nip);
955 return; 956 return;
956 case -EFAULT: 957 case -EFAULT:
957 _exception(SIGSEGV, regs, SEGV_MAPERR, regs->nip); 958 _exception(SIGSEGV, regs, SEGV_MAPERR, regs->nip);
958 return; 959 return;
959 } 960 }
960 #endif 961 #endif
961 } 962 }
962 #endif /* CONFIG_8xx */ 963 #endif /* CONFIG_8xx */
963 964
964 #if defined(CONFIG_40x) || defined(CONFIG_BOOKE) 965 #if defined(CONFIG_40x) || defined(CONFIG_BOOKE)
965 966
966 void DebugException(struct pt_regs *regs, unsigned long debug_status) 967 void DebugException(struct pt_regs *regs, unsigned long debug_status)
967 { 968 {
968 if (debug_status & DBSR_IC) { /* instruction completion */ 969 if (debug_status & DBSR_IC) { /* instruction completion */
969 regs->msr &= ~MSR_DE; 970 regs->msr &= ~MSR_DE;
970 if (user_mode(regs)) { 971 if (user_mode(regs)) {
971 current->thread.dbcr0 &= ~DBCR0_IC; 972 current->thread.dbcr0 &= ~DBCR0_IC;
972 } else { 973 } else {
973 /* Disable instruction completion */ 974 /* Disable instruction completion */
974 mtspr(SPRN_DBCR0, mfspr(SPRN_DBCR0) & ~DBCR0_IC); 975 mtspr(SPRN_DBCR0, mfspr(SPRN_DBCR0) & ~DBCR0_IC);
975 /* Clear the instruction completion event */ 976 /* Clear the instruction completion event */
976 mtspr(SPRN_DBSR, DBSR_IC); 977 mtspr(SPRN_DBSR, DBSR_IC);
977 if (debugger_sstep(regs)) 978 if (debugger_sstep(regs))
978 return; 979 return;
979 } 980 }
980 _exception(SIGTRAP, regs, TRAP_TRACE, 0); 981 _exception(SIGTRAP, regs, TRAP_TRACE, 0);
981 } 982 }
982 } 983 }
983 #endif /* CONFIG_40x || CONFIG_BOOKE */ 984 #endif /* CONFIG_40x || CONFIG_BOOKE */
984 985
985 #if !defined(CONFIG_TAU_INT) 986 #if !defined(CONFIG_TAU_INT)
986 void TAUException(struct pt_regs *regs) 987 void TAUException(struct pt_regs *regs)
987 { 988 {
988 printk("TAU trap at PC: %lx, MSR: %lx, vector=%lx %s\n", 989 printk("TAU trap at PC: %lx, MSR: %lx, vector=%lx %s\n",
989 regs->nip, regs->msr, regs->trap, print_tainted()); 990 regs->nip, regs->msr, regs->trap, print_tainted());
990 } 991 }
991 #endif /* CONFIG_TAU_INT */ 992 #endif /* CONFIG_TAU_INT */
992 993
993 #ifdef CONFIG_ALTIVEC 994 #ifdef CONFIG_ALTIVEC
994 void altivec_assist_exception(struct pt_regs *regs) 995 void altivec_assist_exception(struct pt_regs *regs)
995 { 996 {
996 int err; 997 int err;
997 998
998 if (!user_mode(regs)) { 999 if (!user_mode(regs)) {
999 printk(KERN_EMERG "VMX/Altivec assist exception in kernel mode" 1000 printk(KERN_EMERG "VMX/Altivec assist exception in kernel mode"
1000 " at %lx\n", regs->nip); 1001 " at %lx\n", regs->nip);
1001 die("Kernel VMX/Altivec assist exception", regs, SIGILL); 1002 die("Kernel VMX/Altivec assist exception", regs, SIGILL);
1002 } 1003 }
1003 1004
1004 flush_altivec_to_thread(current); 1005 flush_altivec_to_thread(current);
1005 1006
1006 err = emulate_altivec(regs); 1007 err = emulate_altivec(regs);
1007 if (err == 0) { 1008 if (err == 0) {
1008 regs->nip += 4; /* skip emulated instruction */ 1009 regs->nip += 4; /* skip emulated instruction */
1009 emulate_single_step(regs); 1010 emulate_single_step(regs);
1010 return; 1011 return;
1011 } 1012 }
1012 1013
1013 if (err == -EFAULT) { 1014 if (err == -EFAULT) {
1014 /* got an error reading the instruction */ 1015 /* got an error reading the instruction */
1015 _exception(SIGSEGV, regs, SEGV_ACCERR, regs->nip); 1016 _exception(SIGSEGV, regs, SEGV_ACCERR, regs->nip);
1016 } else { 1017 } else {
1017 /* didn't recognize the instruction */ 1018 /* didn't recognize the instruction */
1018 /* XXX quick hack for now: set the non-Java bit in the VSCR */ 1019 /* XXX quick hack for now: set the non-Java bit in the VSCR */
1019 if (printk_ratelimit()) 1020 if (printk_ratelimit())
1020 printk(KERN_ERR "Unrecognized altivec instruction " 1021 printk(KERN_ERR "Unrecognized altivec instruction "
1021 "in %s at %lx\n", current->comm, regs->nip); 1022 "in %s at %lx\n", current->comm, regs->nip);
1022 current->thread.vscr.u[3] |= 0x10000; 1023 current->thread.vscr.u[3] |= 0x10000;
1023 } 1024 }
1024 } 1025 }
1025 #endif /* CONFIG_ALTIVEC */ 1026 #endif /* CONFIG_ALTIVEC */
1026 1027
1027 #ifdef CONFIG_FSL_BOOKE 1028 #ifdef CONFIG_FSL_BOOKE
1028 void CacheLockingException(struct pt_regs *regs, unsigned long address, 1029 void CacheLockingException(struct pt_regs *regs, unsigned long address,
1029 unsigned long error_code) 1030 unsigned long error_code)
1030 { 1031 {
1031 /* We treat cache locking instructions from the user 1032 /* We treat cache locking instructions from the user
1032 * as priv ops; in the future we could try to do 1033 * as priv ops; in the future we could try to do
1033 * something smarter 1034 * something smarter
1034 */ 1035 */
1035 if (error_code & (ESR_DLK|ESR_ILK)) 1036 if (error_code & (ESR_DLK|ESR_ILK))
1036 _exception(SIGILL, regs, ILL_PRVOPC, regs->nip); 1037 _exception(SIGILL, regs, ILL_PRVOPC, regs->nip);
1037 return; 1038 return;
1038 } 1039 }
1039 #endif /* CONFIG_FSL_BOOKE */ 1040 #endif /* CONFIG_FSL_BOOKE */
1040 1041
1041 #ifdef CONFIG_SPE 1042 #ifdef CONFIG_SPE
1042 void SPEFloatingPointException(struct pt_regs *regs) 1043 void SPEFloatingPointException(struct pt_regs *regs)
1043 { 1044 {
1044 unsigned long spefscr; 1045 unsigned long spefscr;
1045 int fpexc_mode; 1046 int fpexc_mode;
1046 int code = 0; 1047 int code = 0;
1047 1048
1048 spefscr = current->thread.spefscr; 1049 spefscr = current->thread.spefscr;
1049 fpexc_mode = current->thread.fpexc_mode; 1050 fpexc_mode = current->thread.fpexc_mode;
1050 1051
1051 /* Hardware does not necessarily set sticky 1052 /* Hardware does not necessarily set sticky
1052 * underflow/overflow/invalid flags */ 1053 * underflow/overflow/invalid flags */
1053 if ((spefscr & SPEFSCR_FOVF) && (fpexc_mode & PR_FP_EXC_OVF)) { 1054 if ((spefscr & SPEFSCR_FOVF) && (fpexc_mode & PR_FP_EXC_OVF)) {
1054 code = FPE_FLTOVF; 1055 code = FPE_FLTOVF;
1055 spefscr |= SPEFSCR_FOVFS; 1056 spefscr |= SPEFSCR_FOVFS;
1056 } 1057 }
1057 else if ((spefscr & SPEFSCR_FUNF) && (fpexc_mode & PR_FP_EXC_UND)) { 1058 else if ((spefscr & SPEFSCR_FUNF) && (fpexc_mode & PR_FP_EXC_UND)) {
1058 code = FPE_FLTUND; 1059 code = FPE_FLTUND;
1059 spefscr |= SPEFSCR_FUNFS; 1060 spefscr |= SPEFSCR_FUNFS;
1060 } 1061 }
1061 else if ((spefscr & SPEFSCR_FDBZ) && (fpexc_mode & PR_FP_EXC_DIV)) 1062 else if ((spefscr & SPEFSCR_FDBZ) && (fpexc_mode & PR_FP_EXC_DIV))
1062 code = FPE_FLTDIV; 1063 code = FPE_FLTDIV;
1063 else if ((spefscr & SPEFSCR_FINV) && (fpexc_mode & PR_FP_EXC_INV)) { 1064 else if ((spefscr & SPEFSCR_FINV) && (fpexc_mode & PR_FP_EXC_INV)) {
1064 code = FPE_FLTINV; 1065 code = FPE_FLTINV;
1065 spefscr |= SPEFSCR_FINVS; 1066 spefscr |= SPEFSCR_FINVS;
1066 } 1067 }
1067 else if ((spefscr & (SPEFSCR_FG | SPEFSCR_FX)) && (fpexc_mode & PR_FP_EXC_RES)) 1068 else if ((spefscr & (SPEFSCR_FG | SPEFSCR_FX)) && (fpexc_mode & PR_FP_EXC_RES))
1068 code = FPE_FLTRES; 1069 code = FPE_FLTRES;
1069 1070
1070 current->thread.spefscr = spefscr; 1071 current->thread.spefscr = spefscr;
1071 1072
1072 _exception(SIGFPE, regs, code, regs->nip); 1073 _exception(SIGFPE, regs, code, regs->nip);
1073 return; 1074 return;
1074 } 1075 }
1075 #endif 1076 #endif
1076 1077
1077 /* 1078 /*
1078 * We enter here if we get an unrecoverable exception, that is, one 1079 * We enter here if we get an unrecoverable exception, that is, one
1079 * that happened at a point where the RI (recoverable interrupt) bit 1080 * that happened at a point where the RI (recoverable interrupt) bit
1080 * in the MSR is 0. This indicates that SRR0/1 are live, and that 1081 * in the MSR is 0. This indicates that SRR0/1 are live, and that
1081 * we therefore lost state by taking this exception. 1082 * we therefore lost state by taking this exception.
1082 */ 1083 */
1083 void unrecoverable_exception(struct pt_regs *regs) 1084 void unrecoverable_exception(struct pt_regs *regs)
1084 { 1085 {
1085 printk(KERN_EMERG "Unrecoverable exception %lx at %lx\n", 1086 printk(KERN_EMERG "Unrecoverable exception %lx at %lx\n",
1086 regs->trap, regs->nip); 1087 regs->trap, regs->nip);
1087 die("Unrecoverable exception", regs, SIGABRT); 1088 die("Unrecoverable exception", regs, SIGABRT);
1088 } 1089 }
1089 1090
1090 #ifdef CONFIG_BOOKE_WDT 1091 #ifdef CONFIG_BOOKE_WDT
1091 /* 1092 /*
1092 * Default handler for a Watchdog exception, 1093 * Default handler for a Watchdog exception,
1093 * spins until a reboot occurs 1094 * spins until a reboot occurs
1094 */ 1095 */
1095 void __attribute__ ((weak)) WatchdogHandler(struct pt_regs *regs) 1096 void __attribute__ ((weak)) WatchdogHandler(struct pt_regs *regs)
1096 { 1097 {
1097 /* Generic WatchdogHandler, implement your own */ 1098 /* Generic WatchdogHandler, implement your own */
1098 mtspr(SPRN_TCR, mfspr(SPRN_TCR)&(~TCR_WIE)); 1099 mtspr(SPRN_TCR, mfspr(SPRN_TCR)&(~TCR_WIE));
1099 return; 1100 return;
1100 } 1101 }
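
WatchdogHandler() is deliberately weak: a board port can ship its own handler
in another object file and win at link time without editing this file. The
same mechanism in a minimal, compilable form (the function name is invented
for the demo):

    #include <stdio.h>

    /* Default, overridable implementation -- analogous to the weak
     * WatchdogHandler above. */
    void __attribute__((weak)) handler(void)
    {
        printf("generic default handler\n");
    }

    /* Linking in another object that defines a plain (strong)
     * void handler(void) silently replaces the weak body above;
     * with no override, the default runs. */
    int main(void)
    {
        handler();
        return 0;
    }
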
1101 1102
1102 void WatchdogException(struct pt_regs *regs) 1103 void WatchdogException(struct pt_regs *regs)
1103 { 1104 {
1104 printk (KERN_EMERG "PowerPC Book-E Watchdog Exception\n"); 1105 printk (KERN_EMERG "PowerPC Book-E Watchdog Exception\n");
1105 WatchdogHandler(regs); 1106 WatchdogHandler(regs);
1106 } 1107 }
1107 #endif 1108 #endif
1108 1109
1109 /* 1110 /*
1110 * We enter here if we discover during exception entry that we are 1111 * We enter here if we discover during exception entry that we are
1111 * running in supervisor mode with a userspace value in the stack pointer. 1112 * running in supervisor mode with a userspace value in the stack pointer.
1112 */ 1113 */
1113 void kernel_bad_stack(struct pt_regs *regs) 1114 void kernel_bad_stack(struct pt_regs *regs)
1114 { 1115 {
1115 printk(KERN_EMERG "Bad kernel stack pointer %lx at %lx\n", 1116 printk(KERN_EMERG "Bad kernel stack pointer %lx at %lx\n",
1116 regs->gpr[1], regs->nip); 1117 regs->gpr[1], regs->nip);
1117 die("Bad kernel stack pointer", regs, SIGABRT); 1118 die("Bad kernel stack pointer", regs, SIGABRT);
1118 } 1119 }
1119 1120
1120 void __init trap_init(void) 1121 void __init trap_init(void)
1121 { 1122 {
1122 } 1123 }
1123 1124
arch/ppc/kernel/traps.c
1 /* 1 /*
2 * Copyright (C) 1995-1996 Gary Thomas (gdt@linuxppc.org) 2 * Copyright (C) 1995-1996 Gary Thomas (gdt@linuxppc.org)
3 * 3 *
4 * This program is free software; you can redistribute it and/or 4 * This program is free software; you can redistribute it and/or
5 * modify it under the terms of the GNU General Public License 5 * modify it under the terms of the GNU General Public License
6 * as published by the Free Software Foundation; either version 6 * as published by the Free Software Foundation; either version
7 * 2 of the License, or (at your option) any later version. 7 * 2 of the License, or (at your option) any later version.
8 * 8 *
9 * Modified by Cort Dougan (cort@cs.nmt.edu) 9 * Modified by Cort Dougan (cort@cs.nmt.edu)
10 * and Paul Mackerras (paulus@cs.anu.edu.au) 10 * and Paul Mackerras (paulus@cs.anu.edu.au)
11 */ 11 */
12 12
13 /* 13 /*
14 * This file handles the architecture-dependent parts of hardware exceptions 14 * This file handles the architecture-dependent parts of hardware exceptions
15 */ 15 */
16 16
17 #include <linux/errno.h> 17 #include <linux/errno.h>
18 #include <linux/sched.h> 18 #include <linux/sched.h>
19 #include <linux/kernel.h> 19 #include <linux/kernel.h>
20 #include <linux/mm.h> 20 #include <linux/mm.h>
21 #include <linux/stddef.h> 21 #include <linux/stddef.h>
22 #include <linux/unistd.h> 22 #include <linux/unistd.h>
23 #include <linux/ptrace.h> 23 #include <linux/ptrace.h>
24 #include <linux/slab.h> 24 #include <linux/slab.h>
25 #include <linux/user.h> 25 #include <linux/user.h>
26 #include <linux/a.out.h> 26 #include <linux/a.out.h>
27 #include <linux/interrupt.h> 27 #include <linux/interrupt.h>
28 #include <linux/init.h> 28 #include <linux/init.h>
29 #include <linux/module.h> 29 #include <linux/module.h>
30 #include <linux/prctl.h> 30 #include <linux/prctl.h>
31 #include <linux/bug.h> 31 #include <linux/bug.h>
32 32
33 #include <asm/pgtable.h> 33 #include <asm/pgtable.h>
34 #include <asm/uaccess.h> 34 #include <asm/uaccess.h>
35 #include <asm/system.h> 35 #include <asm/system.h>
36 #include <asm/io.h> 36 #include <asm/io.h>
37 #include <asm/reg.h> 37 #include <asm/reg.h>
38 #include <asm/xmon.h> 38 #include <asm/xmon.h>
39 #include <asm/pmc.h> 39 #include <asm/pmc.h>
40 40
41 #ifdef CONFIG_XMON 41 #ifdef CONFIG_XMON
42 extern int xmon_bpt(struct pt_regs *regs); 42 extern int xmon_bpt(struct pt_regs *regs);
43 extern int xmon_sstep(struct pt_regs *regs); 43 extern int xmon_sstep(struct pt_regs *regs);
44 extern int xmon_iabr_match(struct pt_regs *regs); 44 extern int xmon_iabr_match(struct pt_regs *regs);
45 extern int xmon_dabr_match(struct pt_regs *regs); 45 extern int xmon_dabr_match(struct pt_regs *regs);
46 46
47 int (*debugger)(struct pt_regs *regs) = xmon; 47 int (*debugger)(struct pt_regs *regs) = xmon;
48 int (*debugger_bpt)(struct pt_regs *regs) = xmon_bpt; 48 int (*debugger_bpt)(struct pt_regs *regs) = xmon_bpt;
49 int (*debugger_sstep)(struct pt_regs *regs) = xmon_sstep; 49 int (*debugger_sstep)(struct pt_regs *regs) = xmon_sstep;
50 int (*debugger_iabr_match)(struct pt_regs *regs) = xmon_iabr_match; 50 int (*debugger_iabr_match)(struct pt_regs *regs) = xmon_iabr_match;
51 int (*debugger_dabr_match)(struct pt_regs *regs) = xmon_dabr_match; 51 int (*debugger_dabr_match)(struct pt_regs *regs) = xmon_dabr_match;
52 void (*debugger_fault_handler)(struct pt_regs *regs); 52 void (*debugger_fault_handler)(struct pt_regs *regs);
53 #else 53 #else
54 #ifdef CONFIG_KGDB 54 #ifdef CONFIG_KGDB
55 int (*debugger)(struct pt_regs *regs); 55 int (*debugger)(struct pt_regs *regs);
56 int (*debugger_bpt)(struct pt_regs *regs); 56 int (*debugger_bpt)(struct pt_regs *regs);
57 int (*debugger_sstep)(struct pt_regs *regs); 57 int (*debugger_sstep)(struct pt_regs *regs);
58 int (*debugger_iabr_match)(struct pt_regs *regs); 58 int (*debugger_iabr_match)(struct pt_regs *regs);
59 int (*debugger_dabr_match)(struct pt_regs *regs); 59 int (*debugger_dabr_match)(struct pt_regs *regs);
60 void (*debugger_fault_handler)(struct pt_regs *regs); 60 void (*debugger_fault_handler)(struct pt_regs *regs);
61 #else 61 #else
62 #define debugger(regs) do { } while (0) 62 #define debugger(regs) do { } while (0)
63 #define debugger_bpt(regs) 0 63 #define debugger_bpt(regs) 0
64 #define debugger_sstep(regs) 0 64 #define debugger_sstep(regs) 0
65 #define debugger_iabr_match(regs) 0 65 #define debugger_iabr_match(regs) 0
66 #define debugger_dabr_match(regs) 0 66 #define debugger_dabr_match(regs) 0
67 #define debugger_fault_handler ((void (*)(struct pt_regs *))0) 67 #define debugger_fault_handler ((void (*)(struct pt_regs *))0)
68 #endif 68 #endif
69 #endif 69 #endif
70 70
71 /* 71 /*
72 * Trap & Exception support 72 * Trap & Exception support
73 */ 73 */
74 74
75 DEFINE_SPINLOCK(die_lock); 75 DEFINE_SPINLOCK(die_lock);
76 76
77 int die(const char * str, struct pt_regs * fp, long err) 77 int die(const char * str, struct pt_regs * fp, long err)
78 { 78 {
79 static int die_counter; 79 static int die_counter;
80 int nl = 0; 80 int nl = 0;
81 console_verbose(); 81 console_verbose();
82 spin_lock_irq(&die_lock); 82 spin_lock_irq(&die_lock);
83 printk("Oops: %s, sig: %ld [#%d]\n", str, err, ++die_counter); 83 printk("Oops: %s, sig: %ld [#%d]\n", str, err, ++die_counter);
84 #ifdef CONFIG_PREEMPT 84 #ifdef CONFIG_PREEMPT
85 printk("PREEMPT "); 85 printk("PREEMPT ");
86 nl = 1; 86 nl = 1;
87 #endif 87 #endif
88 #ifdef CONFIG_SMP 88 #ifdef CONFIG_SMP
89 printk("SMP NR_CPUS=%d ", NR_CPUS); 89 printk("SMP NR_CPUS=%d ", NR_CPUS);
90 nl = 1; 90 nl = 1;
91 #endif 91 #endif
92 if (nl) 92 if (nl)
93 printk("\n"); 93 printk("\n");
94 show_regs(fp); 94 show_regs(fp);
95 add_taint(TAINT_DIE);
95 spin_unlock_irq(&die_lock); 96 spin_unlock_irq(&die_lock);
96 /* do_exit() should take care of panic'ing from an interrupt 97 /* do_exit() should take care of panic'ing from an interrupt
97 * context so we don't handle it here 98 * context so we don't handle it here
98 */ 99 */
99 do_exit(err); 100 do_exit(err);
100 } 101 }
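
The add_taint(TAINT_DIE) line is the point of this patch: once the kernel has
oopsed, the taint mask remembers it, so subsequent oopses and SysRq dumps no
longer look like first failures. Taint is only a bitmask plus a formatting
helper; a user-space sketch of the mechanism -- the bit value (1 << 7) and the
'D' letter are assumptions for this sketch, the real definitions live in
include/linux/kernel.h and kernel/panic.c:

    #include <stdio.h>

    #define TAINT_DIE (1 << 7)         /* assumed bit for "kernel died" */

    static unsigned long tainted;      /* stand-in for the kernel's mask */

    static void add_taint(unsigned flag)
    {
        tainted |= flag;               /* same shape as the kernel helper */
    }

    static const char *print_tainted(void)
    {
        /* the real print_tainted() emits one letter per set bit;
         * only 'D' (died) is modeled here */
        return (tainted & TAINT_DIE) ? "Tainted: D" : "Not tainted";
    }

    int main(void)
    {
        printf("before oops: %s\n", print_tainted());
        add_taint(TAINT_DIE);          /* what die() now does */
        printf("after oops:  %s\n", print_tainted());
        return 0;
    }
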
101 102
102 void _exception(int signr, struct pt_regs *regs, int code, unsigned long addr) 103 void _exception(int signr, struct pt_regs *regs, int code, unsigned long addr)
103 { 104 {
104 siginfo_t info; 105 siginfo_t info;
105 106
106 if (!user_mode(regs)) { 107 if (!user_mode(regs)) {
107 debugger(regs); 108 debugger(regs);
108 die("Exception in kernel mode", regs, signr); 109 die("Exception in kernel mode", regs, signr);
109 } 110 }
110 info.si_signo = signr; 111 info.si_signo = signr;
111 info.si_errno = 0; 112 info.si_errno = 0;
112 info.si_code = code; 113 info.si_code = code;
113 info.si_addr = (void __user *) addr; 114 info.si_addr = (void __user *) addr;
114 force_sig_info(signr, &info, current); 115 force_sig_info(signr, &info, current);
115 116
116 /* 117 /*
117 * Init gets no signals that it doesn't have a handler for. 118 * Init gets no signals that it doesn't have a handler for.
118 * That's all very well, but if it has caused a synchronous 119 * That's all very well, but if it has caused a synchronous
119 * exception and we ignore the resulting signal, it will just 120 * exception and we ignore the resulting signal, it will just
120 * generate the same exception over and over again and we get 121 * generate the same exception over and over again and we get
121 * nowhere. Better to kill it and let the kernel panic. 122 * nowhere. Better to kill it and let the kernel panic.
122 */ 123 */
123 if (is_init(current)) { 124 if (is_init(current)) {
124 __sighandler_t handler; 125 __sighandler_t handler;
125 126
126 spin_lock_irq(&current->sighand->siglock); 127 spin_lock_irq(&current->sighand->siglock);
127 handler = current->sighand->action[signr-1].sa.sa_handler; 128 handler = current->sighand->action[signr-1].sa.sa_handler;
128 spin_unlock_irq(&current->sighand->siglock); 129 spin_unlock_irq(&current->sighand->siglock);
129 if (handler == SIG_DFL) { 130 if (handler == SIG_DFL) {
130 /* init has generated a synchronous exception 131 /* init has generated a synchronous exception
131 and it doesn't have a handler for the signal */ 132 and it doesn't have a handler for the signal */
132 printk(KERN_CRIT "init has generated signal %d " 133 printk(KERN_CRIT "init has generated signal %d "
133 "but has no handler for it\n", signr); 134 "but has no handler for it\n", signr);
134 do_exit(signr); 135 do_exit(signr);
135 } 136 }
136 } 137 }
137 } 138 }
138 139
139 /* 140 /*
140 * I/O accesses can cause machine checks on powermacs. 141 * I/O accesses can cause machine checks on powermacs.
141 * Check if the NIP corresponds to the address of a sync 142 * Check if the NIP corresponds to the address of a sync
142 * instruction for which there is an entry in the exception 143 * instruction for which there is an entry in the exception
143 * table. 144 * table.
144 * Note that the 601 only takes a machine check on TEA 145 * Note that the 601 only takes a machine check on TEA
145 * (transfer error ack) signal assertion, and does not 146 * (transfer error ack) signal assertion, and does not
146 * set any of the top 16 bits of SRR1. 147 * set any of the top 16 bits of SRR1.
147 * -- paulus. 148 * -- paulus.
148 */ 149 */
149 static inline int check_io_access(struct pt_regs *regs) 150 static inline int check_io_access(struct pt_regs *regs)
150 { 151 {
151 #if defined CONFIG_8xx 152 #if defined CONFIG_8xx
152 unsigned long msr = regs->msr; 153 unsigned long msr = regs->msr;
153 const struct exception_table_entry *entry; 154 const struct exception_table_entry *entry;
154 unsigned int *nip = (unsigned int *)regs->nip; 155 unsigned int *nip = (unsigned int *)regs->nip;
155 156
156 if (((msr & 0xffff0000) == 0 || (msr & (0x80000 | 0x40000))) 157 if (((msr & 0xffff0000) == 0 || (msr & (0x80000 | 0x40000)))
157 && (entry = search_exception_tables(regs->nip)) != NULL) { 158 && (entry = search_exception_tables(regs->nip)) != NULL) {
158 /* 159 /*
159 * Check that it's a sync instruction, or somewhere 160 * Check that it's a sync instruction, or somewhere
160 * in the twi; isync; nop sequence that inb/inw/inl uses. 161 * in the twi; isync; nop sequence that inb/inw/inl uses.
161 * As the address is in the exception table 162 * As the address is in the exception table
162 * we should be able to read the instr there. 163 * we should be able to read the instr there.
163 * For the debug message, we look at the preceding 164 * For the debug message, we look at the preceding
164 * load or store. 165 * load or store.
165 */ 166 */
166 if (*nip == 0x60000000) /* nop */ 167 if (*nip == 0x60000000) /* nop */
167 nip -= 2; 168 nip -= 2;
168 else if (*nip == 0x4c00012c) /* isync */ 169 else if (*nip == 0x4c00012c) /* isync */
169 --nip; 170 --nip;
170 /* eieio from I/O string functions */ 171 /* eieio from I/O string functions */
171 else if ((*nip) == 0x7c0006ac || *(nip+1) == 0x7c0006ac) 172 else if ((*nip) == 0x7c0006ac || *(nip+1) == 0x7c0006ac)
172 nip += 2; 173 nip += 2;
173 if (*nip == 0x7c0004ac || (*nip >> 26) == 3 || 174 if (*nip == 0x7c0004ac || (*nip >> 26) == 3 ||
174 (*(nip+1) >> 26) == 3) { 175 (*(nip+1) >> 26) == 3) {
175 /* sync or twi */ 176 /* sync or twi */
176 unsigned int rb; 177 unsigned int rb;
177 178
178 --nip; 179 --nip;
179 rb = (*nip >> 11) & 0x1f; 180 rb = (*nip >> 11) & 0x1f;
180 printk(KERN_DEBUG "%s bad port %lx at %p\n", 181 printk(KERN_DEBUG "%s bad port %lx at %p\n",
181 (*nip & 0x100)? "OUT to": "IN from", 182 (*nip & 0x100)? "OUT to": "IN from",
182 regs->gpr[rb] - _IO_BASE, nip); 183 regs->gpr[rb] - _IO_BASE, nip);
183 regs->msr |= MSR_RI; 184 regs->msr |= MSR_RI;
184 regs->nip = entry->fixup; 185 regs->nip = entry->fixup;
185 return 1; 186 return 1;
186 } 187 }
187 } 188 }
188 #endif /* CONFIG_8xx */ 189 #endif /* CONFIG_8xx */
189 return 0; 190 return 0;
190 } 191 }
191 192
192 #if defined(CONFIG_4xx) || defined(CONFIG_BOOKE) 193 #if defined(CONFIG_4xx) || defined(CONFIG_BOOKE)
193 /* On 4xx, the reason for the machine check or program exception 194 /* On 4xx, the reason for the machine check or program exception
194 is in the ESR. */ 195 is in the ESR. */
195 #define get_reason(regs) ((regs)->dsisr) 196 #define get_reason(regs) ((regs)->dsisr)
196 #ifndef CONFIG_FSL_BOOKE 197 #ifndef CONFIG_FSL_BOOKE
197 #define get_mc_reason(regs) ((regs)->dsisr) 198 #define get_mc_reason(regs) ((regs)->dsisr)
198 #else 199 #else
199 #define get_mc_reason(regs) (mfspr(SPRN_MCSR)) 200 #define get_mc_reason(regs) (mfspr(SPRN_MCSR))
200 #endif 201 #endif
201 #define REASON_FP ESR_FP 202 #define REASON_FP ESR_FP
202 #define REASON_ILLEGAL (ESR_PIL | ESR_PUO) 203 #define REASON_ILLEGAL (ESR_PIL | ESR_PUO)
203 #define REASON_PRIVILEGED ESR_PPR 204 #define REASON_PRIVILEGED ESR_PPR
204 #define REASON_TRAP ESR_PTR 205 #define REASON_TRAP ESR_PTR
205 206
206 /* single-step stuff */ 207 /* single-step stuff */
207 #define single_stepping(regs) (current->thread.dbcr0 & DBCR0_IC) 208 #define single_stepping(regs) (current->thread.dbcr0 & DBCR0_IC)
208 #define clear_single_step(regs) (current->thread.dbcr0 &= ~DBCR0_IC) 209 #define clear_single_step(regs) (current->thread.dbcr0 &= ~DBCR0_IC)
209 210
210 #else 211 #else
211 /* On non-4xx, the reason for the machine check or program 212 /* On non-4xx, the reason for the machine check or program
212 exception is in the MSR. */ 213 exception is in the MSR. */
213 #define get_reason(regs) ((regs)->msr) 214 #define get_reason(regs) ((regs)->msr)
214 #define get_mc_reason(regs) ((regs)->msr) 215 #define get_mc_reason(regs) ((regs)->msr)
215 #define REASON_FP 0x100000 216 #define REASON_FP 0x100000
216 #define REASON_ILLEGAL 0x80000 217 #define REASON_ILLEGAL 0x80000
217 #define REASON_PRIVILEGED 0x40000 218 #define REASON_PRIVILEGED 0x40000
218 #define REASON_TRAP 0x20000 219 #define REASON_TRAP 0x20000
219 220
220 #define single_stepping(regs) ((regs)->msr & MSR_SE) 221 #define single_stepping(regs) ((regs)->msr & MSR_SE)
221 #define clear_single_step(regs) ((regs)->msr &= ~MSR_SE) 222 #define clear_single_step(regs) ((regs)->msr &= ~MSR_SE)
222 #endif 223 #endif
223 224
224 /* 225 /*
225 * This is a "fall-back" implementation for configurations 226 * This is a "fall-back" implementation for configurations
226 * that don't provide platform-specific machine check info 227 * that don't provide platform-specific machine check info
227 */ 228 */
228 void __attribute__ ((weak)) 229 void __attribute__ ((weak))
229 platform_machine_check(struct pt_regs *regs) 230 platform_machine_check(struct pt_regs *regs)
230 { 231 {
231 } 232 }
232 233
233 void machine_check_exception(struct pt_regs *regs) 234 void machine_check_exception(struct pt_regs *regs)
234 { 235 {
235 unsigned long reason = get_mc_reason(regs); 236 unsigned long reason = get_mc_reason(regs);
236 237
237 if (user_mode(regs)) { 238 if (user_mode(regs)) {
238 regs->msr |= MSR_RI; 239 regs->msr |= MSR_RI;
239 _exception(SIGBUS, regs, BUS_ADRERR, regs->nip); 240 _exception(SIGBUS, regs, BUS_ADRERR, regs->nip);
240 return; 241 return;
241 } 242 }
242 243
243 #if defined(CONFIG_8xx) && defined(CONFIG_PCI) 244 #if defined(CONFIG_8xx) && defined(CONFIG_PCI)
244 /* the qspan pci read routines can cause machine checks -- Cort */ 245 /* the qspan pci read routines can cause machine checks -- Cort */
245 bad_page_fault(regs, regs->dar, SIGBUS); 246 bad_page_fault(regs, regs->dar, SIGBUS);
246 return; 247 return;
247 #endif 248 #endif
248 249
249 if (debugger_fault_handler) { 250 if (debugger_fault_handler) {
250 debugger_fault_handler(regs); 251 debugger_fault_handler(regs);
251 regs->msr |= MSR_RI; 252 regs->msr |= MSR_RI;
252 return; 253 return;
253 } 254 }
254 255
255 if (check_io_access(regs)) 256 if (check_io_access(regs))
256 return; 257 return;
257 258
258 #if defined(CONFIG_4xx) && !defined(CONFIG_440A) 259 #if defined(CONFIG_4xx) && !defined(CONFIG_440A)
259 if (reason & ESR_IMCP) { 260 if (reason & ESR_IMCP) {
260 printk("Instruction"); 261 printk("Instruction");
261 mtspr(SPRN_ESR, reason & ~ESR_IMCP); 262 mtspr(SPRN_ESR, reason & ~ESR_IMCP);
262 } else 263 } else
263 printk("Data"); 264 printk("Data");
264 printk(" machine check in kernel mode.\n"); 265 printk(" machine check in kernel mode.\n");
265 #elif defined(CONFIG_440A) 266 #elif defined(CONFIG_440A)
266 printk("Machine check in kernel mode.\n"); 267 printk("Machine check in kernel mode.\n");
267 if (reason & ESR_IMCP){ 268 if (reason & ESR_IMCP){
268 printk("Instruction Synchronous Machine Check exception\n"); 269 printk("Instruction Synchronous Machine Check exception\n");
269 mtspr(SPRN_ESR, reason & ~ESR_IMCP); 270 mtspr(SPRN_ESR, reason & ~ESR_IMCP);
270 } 271 }
271 else { 272 else {
272 u32 mcsr = mfspr(SPRN_MCSR); 273 u32 mcsr = mfspr(SPRN_MCSR);
273 if (mcsr & MCSR_IB) 274 if (mcsr & MCSR_IB)
274 printk("Instruction Read PLB Error\n"); 275 printk("Instruction Read PLB Error\n");
275 if (mcsr & MCSR_DRB) 276 if (mcsr & MCSR_DRB)
276 printk("Data Read PLB Error\n"); 277 printk("Data Read PLB Error\n");
277 if (mcsr & MCSR_DWB) 278 if (mcsr & MCSR_DWB)
278 printk("Data Write PLB Error\n"); 279 printk("Data Write PLB Error\n");
279 if (mcsr & MCSR_TLBP) 280 if (mcsr & MCSR_TLBP)
280 printk("TLB Parity Error\n"); 281 printk("TLB Parity Error\n");
281 if (mcsr & MCSR_ICP){ 282 if (mcsr & MCSR_ICP){
282 flush_instruction_cache(); 283 flush_instruction_cache();
283 printk("I-Cache Parity Error\n"); 284 printk("I-Cache Parity Error\n");
284 } 285 }
285 if (mcsr & MCSR_DCSP) 286 if (mcsr & MCSR_DCSP)
286 printk("D-Cache Search Parity Error\n"); 287 printk("D-Cache Search Parity Error\n");
287 if (mcsr & MCSR_DCFP) 288 if (mcsr & MCSR_DCFP)
288 printk("D-Cache Flush Parity Error\n"); 289 printk("D-Cache Flush Parity Error\n");
289 if (mcsr & MCSR_IMPE) 290 if (mcsr & MCSR_IMPE)
290 printk("Machine Check exception is imprecise\n"); 291 printk("Machine Check exception is imprecise\n");
291 292
292 /* Clear MCSR */ 293 /* Clear MCSR */
293 mtspr(SPRN_MCSR, mcsr); 294 mtspr(SPRN_MCSR, mcsr);
294 } 295 }
295 #elif defined (CONFIG_E500) 296 #elif defined (CONFIG_E500)
296 printk("Machine check in kernel mode.\n"); 297 printk("Machine check in kernel mode.\n");
297 printk("Caused by (from MCSR=%lx): ", reason); 298 printk("Caused by (from MCSR=%lx): ", reason);
298 299
299 if (reason & MCSR_MCP) 300 if (reason & MCSR_MCP)
300 printk("Machine Check Signal\n"); 301 printk("Machine Check Signal\n");
301 if (reason & MCSR_ICPERR) 302 if (reason & MCSR_ICPERR)
302 printk("Instruction Cache Parity Error\n"); 303 printk("Instruction Cache Parity Error\n");
303 if (reason & MCSR_DCP_PERR) 304 if (reason & MCSR_DCP_PERR)
304 printk("Data Cache Push Parity Error\n"); 305 printk("Data Cache Push Parity Error\n");
305 if (reason & MCSR_DCPERR) 306 if (reason & MCSR_DCPERR)
306 printk("Data Cache Parity Error\n"); 307 printk("Data Cache Parity Error\n");
307 if (reason & MCSR_GL_CI) 308 if (reason & MCSR_GL_CI)
308 printk("Guarded Load or Cache-Inhibited stwcx.\n"); 309 printk("Guarded Load or Cache-Inhibited stwcx.\n");
309 if (reason & MCSR_BUS_IAERR) 310 if (reason & MCSR_BUS_IAERR)
310 printk("Bus - Instruction Address Error\n"); 311 printk("Bus - Instruction Address Error\n");
311 if (reason & MCSR_BUS_RAERR) 312 if (reason & MCSR_BUS_RAERR)
312 printk("Bus - Read Address Error\n"); 313 printk("Bus - Read Address Error\n");
313 if (reason & MCSR_BUS_WAERR) 314 if (reason & MCSR_BUS_WAERR)
314 printk("Bus - Write Address Error\n"); 315 printk("Bus - Write Address Error\n");
315 if (reason & MCSR_BUS_IBERR) 316 if (reason & MCSR_BUS_IBERR)
316 printk("Bus - Instruction Data Error\n"); 317 printk("Bus - Instruction Data Error\n");
317 if (reason & MCSR_BUS_RBERR) 318 if (reason & MCSR_BUS_RBERR)
318 printk("Bus - Read Data Bus Error\n"); 319 printk("Bus - Read Data Bus Error\n");
319 if (reason & MCSR_BUS_WBERR) 320 if (reason & MCSR_BUS_WBERR)
320 printk("Bus - Write Data Bus Error\n"); 321 printk("Bus - Write Data Bus Error\n");
321 if (reason & MCSR_BUS_IPERR) 322 if (reason & MCSR_BUS_IPERR)
322 printk("Bus - Instruction Parity Error\n"); 323 printk("Bus - Instruction Parity Error\n");
323 if (reason & MCSR_BUS_RPERR) 324 if (reason & MCSR_BUS_RPERR)
324 printk("Bus - Read Parity Error\n"); 325 printk("Bus - Read Parity Error\n");
325 #elif defined (CONFIG_E200) 326 #elif defined (CONFIG_E200)
326 printk("Machine check in kernel mode.\n"); 327 printk("Machine check in kernel mode.\n");
327 printk("Caused by (from MCSR=%lx): ", reason); 328 printk("Caused by (from MCSR=%lx): ", reason);
328 329
329 if (reason & MCSR_MCP) 330 if (reason & MCSR_MCP)
330 printk("Machine Check Signal\n"); 331 printk("Machine Check Signal\n");
331 if (reason & MCSR_CP_PERR) 332 if (reason & MCSR_CP_PERR)
332 printk("Cache Push Parity Error\n"); 333 printk("Cache Push Parity Error\n");
333 if (reason & MCSR_CPERR) 334 if (reason & MCSR_CPERR)
334 printk("Cache Parity Error\n"); 335 printk("Cache Parity Error\n");
335 if (reason & MCSR_EXCP_ERR) 336 if (reason & MCSR_EXCP_ERR)
336 printk("ISI, ITLB, or Bus Error on first instruction fetch for an exception handler\n"); 337 printk("ISI, ITLB, or Bus Error on first instruction fetch for an exception handler\n");
337 if (reason & MCSR_BUS_IRERR) 338 if (reason & MCSR_BUS_IRERR)
338 printk("Bus - Read Bus Error on instruction fetch\n"); 339 printk("Bus - Read Bus Error on instruction fetch\n");
339 if (reason & MCSR_BUS_DRERR) 340 if (reason & MCSR_BUS_DRERR)
340 printk("Bus - Read Bus Error on data load\n"); 341 printk("Bus - Read Bus Error on data load\n");
341 if (reason & MCSR_BUS_WRERR) 342 if (reason & MCSR_BUS_WRERR)
342 printk("Bus - Write Bus Error on buffered store or cache line push\n"); 343 printk("Bus - Write Bus Error on buffered store or cache line push\n");
343 #else /* !CONFIG_4xx && !CONFIG_E500 && !CONFIG_E200 */ 344 #else /* !CONFIG_4xx && !CONFIG_E500 && !CONFIG_E200 */
344 printk("Machine check in kernel mode.\n"); 345 printk("Machine check in kernel mode.\n");
345 printk("Caused by (from SRR1=%lx): ", reason); 346 printk("Caused by (from SRR1=%lx): ", reason);
346 switch (reason & 0x601F0000) { 347 switch (reason & 0x601F0000) {
347 case 0x80000: 348 case 0x80000:
348 printk("Machine check signal\n"); 349 printk("Machine check signal\n");
349 break; 350 break;
350 case 0: /* for 601 */ 351 case 0: /* for 601 */
351 case 0x40000: 352 case 0x40000:
352 case 0x140000: /* 7450 MSS error and TEA */ 353 case 0x140000: /* 7450 MSS error and TEA */
353 printk("Transfer error ack signal\n"); 354 printk("Transfer error ack signal\n");
354 break; 355 break;
355 case 0x20000: 356 case 0x20000:
356 printk("Data parity error signal\n"); 357 printk("Data parity error signal\n");
357 break; 358 break;
358 case 0x10000: 359 case 0x10000:
359 printk("Address parity error signal\n"); 360 printk("Address parity error signal\n");
360 break; 361 break;
361 case 0x20000000: 362 case 0x20000000:
362 printk("L1 Data Cache error\n"); 363 printk("L1 Data Cache error\n");
363 break; 364 break;
364 case 0x40000000: 365 case 0x40000000:
365 printk("L1 Instruction Cache error\n"); 366 printk("L1 Instruction Cache error\n");
366 break; 367 break;
367 case 0x00100000: 368 case 0x00100000:
368 printk("L2 data cache parity error\n"); 369 printk("L2 data cache parity error\n");
369 break; 370 break;
370 default: 371 default:
371 printk("Unknown values in msr\n"); 372 printk("Unknown values in msr\n");
372 } 373 }
373 #endif /* CONFIG_4xx */ 374 #endif /* CONFIG_4xx */
374 375
375 /* 376 /*
376 * Optional platform-provided routine to print out 377 * Optional platform-provided routine to print out
377 * additional info, e.g. bus error registers. 378 * additional info, e.g. bus error registers.
378 */ 379 */
379 platform_machine_check(regs); 380 platform_machine_check(regs);
380 381
381 debugger(regs); 382 debugger(regs);
382 die("machine check", regs, SIGBUS); 383 die("machine check", regs, SIGBUS);
383 } 384 }
384 385
385 void SMIException(struct pt_regs *regs) 386 void SMIException(struct pt_regs *regs)
386 { 387 {
387 debugger(regs); 388 debugger(regs);
388 #if !(defined(CONFIG_XMON) || defined(CONFIG_KGDB)) 389 #if !(defined(CONFIG_XMON) || defined(CONFIG_KGDB))
389 show_regs(regs); 390 show_regs(regs);
390 panic("System Management Interrupt"); 391 panic("System Management Interrupt");
391 #endif 392 #endif
392 } 393 }
393 394
394 void unknown_exception(struct pt_regs *regs) 395 void unknown_exception(struct pt_regs *regs)
395 { 396 {
396 printk("Bad trap at PC: %lx, MSR: %lx, vector=%lx %s\n", 397 printk("Bad trap at PC: %lx, MSR: %lx, vector=%lx %s\n",
397 regs->nip, regs->msr, regs->trap, print_tainted()); 398 regs->nip, regs->msr, regs->trap, print_tainted());
398 _exception(SIGTRAP, regs, 0, 0); 399 _exception(SIGTRAP, regs, 0, 0);
399 } 400 }
400 401
401 void instruction_breakpoint_exception(struct pt_regs *regs) 402 void instruction_breakpoint_exception(struct pt_regs *regs)
402 { 403 {
403 if (debugger_iabr_match(regs)) 404 if (debugger_iabr_match(regs))
404 return; 405 return;
405 _exception(SIGTRAP, regs, TRAP_BRKPT, 0); 406 _exception(SIGTRAP, regs, TRAP_BRKPT, 0);
406 } 407 }
407 408
408 void RunModeException(struct pt_regs *regs) 409 void RunModeException(struct pt_regs *regs)
409 { 410 {
410 _exception(SIGTRAP, regs, 0, 0); 411 _exception(SIGTRAP, regs, 0, 0);
411 } 412 }
412 413
413 /* Illegal instruction emulation support. Originally written to 414 /* Illegal instruction emulation support. Originally written to
414 * provide the PVR to user applications using the mfspr rd,PVR instruction. 415 * provide the PVR to user applications using the mfspr rd,PVR instruction.
415 * Return non-zero if we can't emulate, or -EFAULT if the associated 416 * Return non-zero if we can't emulate, or -EFAULT if the associated
416 * memory access caused an access fault. Return zero on success. 417 * memory access caused an access fault. Return zero on success.
417 * 418 *
418 * There are a couple of ways to do this, either "decode" the instruction 419 * There are a couple of ways to do this, either "decode" the instruction
419 * or directly match lots of bits. In this case, matching lots of 420 * or directly match lots of bits. In this case, matching lots of
420 * bits is faster and easier. 421 * bits is faster and easier.
421 * 422 *
422 */ 423 */
423 #define INST_MFSPR_PVR 0x7c1f42a6 424 #define INST_MFSPR_PVR 0x7c1f42a6
424 #define INST_MFSPR_PVR_MASK 0xfc1fffff 425 #define INST_MFSPR_PVR_MASK 0xfc1fffff
425 426
426 #define INST_DCBA 0x7c0005ec 427 #define INST_DCBA 0x7c0005ec
427 #define INST_DCBA_MASK 0x7c0007fe 428 #define INST_DCBA_MASK 0x7c0007fe
428 429
429 #define INST_MCRXR 0x7c000400 430 #define INST_MCRXR 0x7c000400
430 #define INST_MCRXR_MASK 0x7c0007fe 431 #define INST_MCRXR_MASK 0x7c0007fe
431 432
432 #define INST_STRING 0x7c00042a 433 #define INST_STRING 0x7c00042a
433 #define INST_STRING_MASK 0x7c0007fe 434 #define INST_STRING_MASK 0x7c0007fe
434 #define INST_STRING_GEN_MASK 0x7c00067e 435 #define INST_STRING_GEN_MASK 0x7c00067e
435 #define INST_LSWI 0x7c0004aa 436 #define INST_LSWI 0x7c0004aa
436 #define INST_LSWX 0x7c00042a 437 #define INST_LSWX 0x7c00042a
437 #define INST_STSWI 0x7c0005aa 438 #define INST_STSWI 0x7c0005aa
438 #define INST_STSWX 0x7c00052a 439 #define INST_STSWX 0x7c00052a
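
As the comment above says, the emulation matches bits directly: each INST_*_MASK clears the operand fields, and what remains is compared with the template. A worked example outside the kernel, using the mfspr rD,PVR pair above (the encoding 0x7cbf42a6 for mfspr r5,PVR is computed from the ISA, not taken from this file):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
            /* "mfspr r5,PVR" is the rD=0 template plus rD=5 in bits 21-25. */
            uint32_t insn = 0x7c1f42a6u | (5u << 21);

            assert(insn == 0x7cbf42a6u);
            /* Masking the rD field away makes any destination register match. */
            assert((insn & 0xfc1fffffu) == 0x7c1f42a6u);
            return 0;
    }
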
439 440
440 static int emulate_string_inst(struct pt_regs *regs, u32 instword) 441 static int emulate_string_inst(struct pt_regs *regs, u32 instword)
441 { 442 {
442 u8 rT = (instword >> 21) & 0x1f; 443 u8 rT = (instword >> 21) & 0x1f;
443 u8 rA = (instword >> 16) & 0x1f; 444 u8 rA = (instword >> 16) & 0x1f;
444 u8 NB_RB = (instword >> 11) & 0x1f; 445 u8 NB_RB = (instword >> 11) & 0x1f;
445 u32 num_bytes; 446 u32 num_bytes;
446 unsigned long EA; 447 unsigned long EA;
447 int pos = 0; 448 int pos = 0;
448 449
449 /* Early out if we are an invalid form of lswx */ 450 /* Early out if we are an invalid form of lswx */
450 if ((instword & INST_STRING_MASK) == INST_LSWX) 451 if ((instword & INST_STRING_MASK) == INST_LSWX)
451 if ((rT == rA) || (rT == NB_RB)) 452 if ((rT == rA) || (rT == NB_RB))
452 return -EINVAL; 453 return -EINVAL;
453 454
454 EA = (rA == 0) ? 0 : regs->gpr[rA]; 455 EA = (rA == 0) ? 0 : regs->gpr[rA];
455 456
456 switch (instword & INST_STRING_MASK) { 457 switch (instword & INST_STRING_MASK) {
457 case INST_LSWX: 458 case INST_LSWX:
458 case INST_STSWX: 459 case INST_STSWX:
459 EA += NB_RB; 460 EA += NB_RB;
460 num_bytes = regs->xer & 0x7f; 461 num_bytes = regs->xer & 0x7f;
461 break; 462 break;
462 case INST_LSWI: 463 case INST_LSWI:
463 case INST_STSWI: 464 case INST_STSWI:
464 num_bytes = (NB_RB == 0) ? 32 : NB_RB; 465 num_bytes = (NB_RB == 0) ? 32 : NB_RB;
465 break; 466 break;
466 default: 467 default:
467 return -EINVAL; 468 return -EINVAL;
468 } 469 }
469 470
470 while (num_bytes != 0) 471 while (num_bytes != 0)
471 { 472 {
472 u8 val; 473 u8 val;
473 u32 shift = 8 * (3 - (pos & 0x3)); 474 u32 shift = 8 * (3 - (pos & 0x3));
474 475
475 switch ((instword & INST_STRING_MASK)) { 476 switch ((instword & INST_STRING_MASK)) {
476 case INST_LSWX: 477 case INST_LSWX:
477 case INST_LSWI: 478 case INST_LSWI:
478 if (get_user(val, (u8 __user *)EA)) 479 if (get_user(val, (u8 __user *)EA))
479 return -EFAULT; 480 return -EFAULT;
480 /* first time updating this reg, 481 /* first time updating this reg,
481 * zero it out */ 482 * zero it out */
482 if (pos == 0) 483 if (pos == 0)
483 regs->gpr[rT] = 0; 484 regs->gpr[rT] = 0;
484 regs->gpr[rT] |= val << shift; 485 regs->gpr[rT] |= val << shift;
485 break; 486 break;
486 case INST_STSWI: 487 case INST_STSWI:
487 case INST_STSWX: 488 case INST_STSWX:
488 val = regs->gpr[rT] >> shift; 489 val = regs->gpr[rT] >> shift;
489 if (put_user(val, (u8 __user *)EA)) 490 if (put_user(val, (u8 __user *)EA))
490 return -EFAULT; 491 return -EFAULT;
491 break; 492 break;
492 } 493 }
493 /* move EA to next address */ 494 /* move EA to next address */
494 EA += 1; 495 EA += 1;
495 num_bytes--; 496 num_bytes--;
496 497
497 /* manage our position within the register */ 498 /* manage our position within the register */
498 if (++pos == 4) { 499 if (++pos == 4) {
499 pos = 0; 500 pos = 0;
500 if (++rT == 32) 501 if (++rT == 32)
501 rT = 0; 502 rT = 0;
502 } 503 }
503 } 504 }
504 505
505 return 0; 506 return 0;
506 } 507 }
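
The loop above packs bytes big-endian into successive registers, four per register, wrapping from r31 back to r0. A minimal user-space sketch of the shift arithmetic, assuming a five-byte lswi into two registers (array indices stand in for GPRs):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
            uint32_t reg[2] = { 0, 0 };
            uint8_t mem[5] = { 0x11, 0x22, 0x33, 0x44, 0x55 };
            int pos;

            for (pos = 0; pos < 5; pos++) {
                    /* Same formula as the kernel loop: byte 0 lands in
                     * the most significant byte of the register. */
                    uint32_t shift = 8 * (3 - (pos & 0x3));
                    reg[pos / 4] |= (uint32_t)mem[pos] << shift;
            }
            assert(reg[0] == 0x11223344u);
            assert(reg[1] == 0x55000000u);
            return 0;
    }
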
507 508
508 static int emulate_instruction(struct pt_regs *regs) 509 static int emulate_instruction(struct pt_regs *regs)
509 { 510 {
510 u32 instword; 511 u32 instword;
511 u32 rd; 512 u32 rd;
512 513
513 if (!user_mode(regs)) 514 if (!user_mode(regs))
514 return -EINVAL; 515 return -EINVAL;
515 CHECK_FULL_REGS(regs); 516 CHECK_FULL_REGS(regs);
516 517
517 if (get_user(instword, (u32 __user *)(regs->nip))) 518 if (get_user(instword, (u32 __user *)(regs->nip)))
518 return -EFAULT; 519 return -EFAULT;
519 520
520 /* Emulate the mfspr rD, PVR. 521 /* Emulate the mfspr rD, PVR.
521 */ 522 */
522 if ((instword & INST_MFSPR_PVR_MASK) == INST_MFSPR_PVR) { 523 if ((instword & INST_MFSPR_PVR_MASK) == INST_MFSPR_PVR) {
523 rd = (instword >> 21) & 0x1f; 524 rd = (instword >> 21) & 0x1f;
524 regs->gpr[rd] = mfspr(SPRN_PVR); 525 regs->gpr[rd] = mfspr(SPRN_PVR);
525 return 0; 526 return 0;
526 } 527 }
527 528
528 /* Emulating the dcba insn is just a no-op. */ 529 /* Emulating the dcba insn is just a no-op. */
529 if ((instword & INST_DCBA_MASK) == INST_DCBA) 530 if ((instword & INST_DCBA_MASK) == INST_DCBA)
530 return 0; 531 return 0;
531 532
532 /* Emulate the mcrxr insn. */ 533 /* Emulate the mcrxr insn. */
533 if ((instword & INST_MCRXR_MASK) == INST_MCRXR) { 534 if ((instword & INST_MCRXR_MASK) == INST_MCRXR) {
534 int shift = (instword >> 21) & 0x1c; 535 int shift = (instword >> 21) & 0x1c;
535 unsigned long msk = 0xf0000000UL >> shift; 536 unsigned long msk = 0xf0000000UL >> shift;
536 537
537 regs->ccr = (regs->ccr & ~msk) | ((regs->xer >> shift) & msk); 538 regs->ccr = (regs->ccr & ~msk) | ((regs->xer >> shift) & msk);
538 regs->xer &= ~0xf0000000UL; 539 regs->xer &= ~0xf0000000UL;
539 return 0; 540 return 0;
540 } 541 }
541 542
542 /* Emulate load/store string insn. */ 543 /* Emulate load/store string insn. */
543 if ((instword & INST_STRING_GEN_MASK) == INST_STRING) 544 if ((instword & INST_STRING_GEN_MASK) == INST_STRING)
544 return emulate_string_inst(regs, instword); 545 return emulate_string_inst(regs, instword);
545 546
546 return -EINVAL; 547 return -EINVAL;
547 } 548 }
548 549
549 /* 550 /*
550 * After we have successfully emulated an instruction, we have to 551 * After we have successfully emulated an instruction, we have to
551 * check if the instruction was being single-stepped, and if so, 552 * check if the instruction was being single-stepped, and if so,
552 * pretend we got a single-step exception. This was pointed out 553 * pretend we got a single-step exception. This was pointed out
553 * by Kumar Gala. -- paulus 554 * by Kumar Gala. -- paulus
554 */ 555 */
555 static void emulate_single_step(struct pt_regs *regs) 556 static void emulate_single_step(struct pt_regs *regs)
556 { 557 {
557 if (single_stepping(regs)) { 558 if (single_stepping(regs)) {
558 clear_single_step(regs); 559 clear_single_step(regs);
559 _exception(SIGTRAP, regs, TRAP_TRACE, 0); 560 _exception(SIGTRAP, regs, TRAP_TRACE, 0);
560 } 561 }
561 } 562 }
562 563
563 int is_valid_bugaddr(unsigned long addr) 564 int is_valid_bugaddr(unsigned long addr)
564 { 565 {
565 return addr >= PAGE_OFFSET; 566 return addr >= PAGE_OFFSET;
566 } 567 }
567 568
568 void program_check_exception(struct pt_regs *regs) 569 void program_check_exception(struct pt_regs *regs)
569 { 570 {
570 unsigned int reason = get_reason(regs); 571 unsigned int reason = get_reason(regs);
571 extern int do_mathemu(struct pt_regs *regs); 572 extern int do_mathemu(struct pt_regs *regs);
572 573
573 #ifdef CONFIG_MATH_EMULATION 574 #ifdef CONFIG_MATH_EMULATION
574 /* (reason & REASON_ILLEGAL) would be the obvious thing here, 575 /* (reason & REASON_ILLEGAL) would be the obvious thing here,
575 * but there seems to be a hardware bug on the 405GP (RevD) 576 * but there seems to be a hardware bug on the 405GP (RevD)
576 * that means ESR is sometimes set incorrectly - either to 577 * that means ESR is sometimes set incorrectly - either to
577 * ESR_DST (!?) or 0. In the process of chasing this with the 578 * ESR_DST (!?) or 0. In the process of chasing this with the
578 * hardware people - not sure if it can happen on any illegal 579 * hardware people - not sure if it can happen on any illegal
579 * instruction or only on FP instructions, whether there is a 580 * instruction or only on FP instructions, whether there is a
580 * pattern to occurrences etc. -dgibson 31/Mar/2003 */ 581 * pattern to occurrences etc. -dgibson 31/Mar/2003 */
581 if (!(reason & REASON_TRAP) && do_mathemu(regs) == 0) { 582 if (!(reason & REASON_TRAP) && do_mathemu(regs) == 0) {
582 emulate_single_step(regs); 583 emulate_single_step(regs);
583 return; 584 return;
584 } 585 }
585 #endif /* CONFIG_MATH_EMULATION */ 586 #endif /* CONFIG_MATH_EMULATION */
586 587
587 if (reason & REASON_FP) { 588 if (reason & REASON_FP) {
588 /* IEEE FP exception */ 589 /* IEEE FP exception */
589 int code = 0; 590 int code = 0;
590 u32 fpscr; 591 u32 fpscr;
591 592
592 /* We must make sure the FP state is consistent with 593 /* We must make sure the FP state is consistent with
593 * our MSR_FP in regs 594 * our MSR_FP in regs
594 */ 595 */
595 preempt_disable(); 596 preempt_disable();
596 if (regs->msr & MSR_FP) 597 if (regs->msr & MSR_FP)
597 giveup_fpu(current); 598 giveup_fpu(current);
598 preempt_enable(); 599 preempt_enable();
599 600
600 fpscr = current->thread.fpscr.val; 601 fpscr = current->thread.fpscr.val;
601 fpscr &= fpscr << 22; /* mask summary bits with enables */ 602 fpscr &= fpscr << 22; /* mask summary bits with enables */
602 if (fpscr & FPSCR_VX) 603 if (fpscr & FPSCR_VX)
603 code = FPE_FLTINV; 604 code = FPE_FLTINV;
604 else if (fpscr & FPSCR_OX) 605 else if (fpscr & FPSCR_OX)
605 code = FPE_FLTOVF; 606 code = FPE_FLTOVF;
606 else if (fpscr & FPSCR_UX) 607 else if (fpscr & FPSCR_UX)
607 code = FPE_FLTUND; 608 code = FPE_FLTUND;
608 else if (fpscr & FPSCR_ZX) 609 else if (fpscr & FPSCR_ZX)
609 code = FPE_FLTDIV; 610 code = FPE_FLTDIV;
610 else if (fpscr & FPSCR_XX) 611 else if (fpscr & FPSCR_XX)
611 code = FPE_FLTRES; 612 code = FPE_FLTRES;
612 _exception(SIGFPE, regs, code, regs->nip); 613 _exception(SIGFPE, regs, code, regs->nip);
613 return; 614 return;
614 } 615 }
615 616
616 if (reason & REASON_TRAP) { 617 if (reason & REASON_TRAP) {
617 /* trap exception */ 618 /* trap exception */
618 if (debugger_bpt(regs)) 619 if (debugger_bpt(regs))
619 return; 620 return;
620 621
621 if (!(regs->msr & MSR_PR) && /* not user-mode */ 622 if (!(regs->msr & MSR_PR) && /* not user-mode */
622 report_bug(regs->nip, regs) == BUG_TRAP_TYPE_WARN) { 623 report_bug(regs->nip, regs) == BUG_TRAP_TYPE_WARN) {
623 regs->nip += 4; 624 regs->nip += 4;
624 return; 625 return;
625 } 626 }
626 _exception(SIGTRAP, regs, TRAP_BRKPT, 0); 627 _exception(SIGTRAP, regs, TRAP_BRKPT, 0);
627 return; 628 return;
628 } 629 }
629 630
630 /* Try to emulate it if we should. */ 631 /* Try to emulate it if we should. */
631 if (reason & (REASON_ILLEGAL | REASON_PRIVILEGED)) { 632 if (reason & (REASON_ILLEGAL | REASON_PRIVILEGED)) {
632 switch (emulate_instruction(regs)) { 633 switch (emulate_instruction(regs)) {
633 case 0: 634 case 0:
634 regs->nip += 4; 635 regs->nip += 4;
635 emulate_single_step(regs); 636 emulate_single_step(regs);
636 return; 637 return;
637 case -EFAULT: 638 case -EFAULT:
638 _exception(SIGSEGV, regs, SEGV_MAPERR, regs->nip); 639 _exception(SIGSEGV, regs, SEGV_MAPERR, regs->nip);
639 return; 640 return;
640 } 641 }
641 } 642 }
642 643
643 if (reason & REASON_PRIVILEGED) 644 if (reason & REASON_PRIVILEGED)
644 _exception(SIGILL, regs, ILL_PRVOPC, regs->nip); 645 _exception(SIGILL, regs, ILL_PRVOPC, regs->nip);
645 else 646 else
646 _exception(SIGILL, regs, ILL_ILLOPC, regs->nip); 647 _exception(SIGILL, regs, ILL_ILLOPC, regs->nip);
647 } 648 }
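
The line "fpscr &= fpscr << 22" above works because, in the PowerPC FPSCR, each exception status bit sits exactly 22 bits above its matching enable bit (OX at bit 28 vs. OE at bit 6, and likewise for UX/UE, ZX/ZE, XX/XE, VX/VE), so shifting the enables up by 22 and ANDing leaves only the exceptions that are both raised and enabled. A self-contained sketch with the two overflow bits (values per the architecture, defined locally here):

    #include <assert.h>
    #include <stdint.h>

    #define FPSCR_OX (1u << 28)     /* overflow exception occurred */
    #define FPSCR_OE (1u << 6)      /* overflow exception enabled */

    int main(void)
    {
            /* Raised and enabled: the status bit survives the AND. */
            uint32_t fpscr = FPSCR_OX | FPSCR_OE;
            assert((fpscr & (fpscr << 22)) & FPSCR_OX);

            /* Raised but masked: the AND wipes it out. */
            fpscr = FPSCR_OX;
            assert(!((fpscr & (fpscr << 22)) & FPSCR_OX));
            return 0;
    }
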
648 649
649 void single_step_exception(struct pt_regs *regs) 650 void single_step_exception(struct pt_regs *regs)
650 { 651 {
651 regs->msr &= ~(MSR_SE | MSR_BE); /* Turn off 'trace' bits */ 652 regs->msr &= ~(MSR_SE | MSR_BE); /* Turn off 'trace' bits */
652 if (debugger_sstep(regs)) 653 if (debugger_sstep(regs))
653 return; 654 return;
654 _exception(SIGTRAP, regs, TRAP_TRACE, 0); 655 _exception(SIGTRAP, regs, TRAP_TRACE, 0);
655 } 656 }
656 657
657 void alignment_exception(struct pt_regs *regs) 658 void alignment_exception(struct pt_regs *regs)
658 { 659 {
659 int sig, code, fixed = 0; 660 int sig, code, fixed = 0;
660 661
661 fixed = fix_alignment(regs); 662 fixed = fix_alignment(regs);
662 if (fixed == 1) { 663 if (fixed == 1) {
663 regs->nip += 4; /* skip over emulated instruction */ 664 regs->nip += 4; /* skip over emulated instruction */
664 emulate_single_step(regs); 665 emulate_single_step(regs);
665 return; 666 return;
666 } 667 }
667 if (fixed == -EFAULT) { 668 if (fixed == -EFAULT) {
668 sig = SIGSEGV; 669 sig = SIGSEGV;
669 code = SEGV_ACCERR; 670 code = SEGV_ACCERR;
670 } else { 671 } else {
671 sig = SIGBUS; 672 sig = SIGBUS;
672 code = BUS_ADRALN; 673 code = BUS_ADRALN;
673 } 674 }
674 if (user_mode(regs)) 675 if (user_mode(regs))
675 _exception(sig, regs, code, regs->dar); 676 _exception(sig, regs, code, regs->dar);
676 else 677 else
677 bad_page_fault(regs, regs->dar, sig); 678 bad_page_fault(regs, regs->dar, sig);
678 } 679 }
679 680
680 void StackOverflow(struct pt_regs *regs) 681 void StackOverflow(struct pt_regs *regs)
681 { 682 {
682 printk(KERN_CRIT "Kernel stack overflow in process %p, r1=%lx\n", 683 printk(KERN_CRIT "Kernel stack overflow in process %p, r1=%lx\n",
683 current, regs->gpr[1]); 684 current, regs->gpr[1]);
684 debugger(regs); 685 debugger(regs);
685 show_regs(regs); 686 show_regs(regs);
686 panic("kernel stack overflow"); 687 panic("kernel stack overflow");
687 } 688 }
688 689
689 void nonrecoverable_exception(struct pt_regs *regs) 690 void nonrecoverable_exception(struct pt_regs *regs)
690 { 691 {
691 printk(KERN_ERR "Non-recoverable exception at PC=%lx MSR=%lx\n", 692 printk(KERN_ERR "Non-recoverable exception at PC=%lx MSR=%lx\n",
692 regs->nip, regs->msr); 693 regs->nip, regs->msr);
693 debugger(regs); 694 debugger(regs);
694 die("nonrecoverable exception", regs, SIGKILL); 695 die("nonrecoverable exception", regs, SIGKILL);
695 } 696 }
696 697
697 void trace_syscall(struct pt_regs *regs) 698 void trace_syscall(struct pt_regs *regs)
698 { 699 {
699 printk("Task: %p(%d), PC: %08lX/%08lX, Syscall: %3ld, Result: %s%ld %s\n", 700 printk("Task: %p(%d), PC: %08lX/%08lX, Syscall: %3ld, Result: %s%ld %s\n",
700 current, current->pid, regs->nip, regs->link, regs->gpr[0], 701 current, current->pid, regs->nip, regs->link, regs->gpr[0],
701 regs->ccr&0x10000000?"Error=":"", regs->gpr[3], print_tainted()); 702 regs->ccr&0x10000000?"Error=":"", regs->gpr[3], print_tainted());
702 } 703 }
703 704
704 #ifdef CONFIG_8xx 705 #ifdef CONFIG_8xx
705 void SoftwareEmulation(struct pt_regs *regs) 706 void SoftwareEmulation(struct pt_regs *regs)
706 { 707 {
707 extern int do_mathemu(struct pt_regs *); 708 extern int do_mathemu(struct pt_regs *);
708 extern int Soft_emulate_8xx(struct pt_regs *); 709 extern int Soft_emulate_8xx(struct pt_regs *);
709 int errcode; 710 int errcode;
710 711
711 CHECK_FULL_REGS(regs); 712 CHECK_FULL_REGS(regs);
712 713
713 if (!user_mode(regs)) { 714 if (!user_mode(regs)) {
714 debugger(regs); 715 debugger(regs);
715 die("Kernel Mode Software FPU Emulation", regs, SIGFPE); 716 die("Kernel Mode Software FPU Emulation", regs, SIGFPE);
716 } 717 }
717 718
718 #ifdef CONFIG_MATH_EMULATION 719 #ifdef CONFIG_MATH_EMULATION
719 errcode = do_mathemu(regs); 720 errcode = do_mathemu(regs);
720 #else 721 #else
721 errcode = Soft_emulate_8xx(regs); 722 errcode = Soft_emulate_8xx(regs);
722 #endif 723 #endif
723 if (errcode) { 724 if (errcode) {
724 if (errcode > 0) 725 if (errcode > 0)
725 _exception(SIGFPE, regs, 0, 0); 726 _exception(SIGFPE, regs, 0, 0);
726 else if (errcode == -EFAULT) 727 else if (errcode == -EFAULT)
727 _exception(SIGSEGV, regs, 0, 0); 728 _exception(SIGSEGV, regs, 0, 0);
728 else 729 else
729 _exception(SIGILL, regs, ILL_ILLOPC, regs->nip); 730 _exception(SIGILL, regs, ILL_ILLOPC, regs->nip);
730 } else 731 } else
731 emulate_single_step(regs); 732 emulate_single_step(regs);
732 } 733 }
733 #endif /* CONFIG_8xx */ 734 #endif /* CONFIG_8xx */
734 735
735 #if defined(CONFIG_40x) || defined(CONFIG_BOOKE) 736 #if defined(CONFIG_40x) || defined(CONFIG_BOOKE)
736 737
737 void DebugException(struct pt_regs *regs, unsigned long debug_status) 738 void DebugException(struct pt_regs *regs, unsigned long debug_status)
738 { 739 {
739 if (debug_status & DBSR_IC) { /* instruction completion */ 740 if (debug_status & DBSR_IC) { /* instruction completion */
740 regs->msr &= ~MSR_DE; 741 regs->msr &= ~MSR_DE;
741 if (user_mode(regs)) { 742 if (user_mode(regs)) {
742 current->thread.dbcr0 &= ~DBCR0_IC; 743 current->thread.dbcr0 &= ~DBCR0_IC;
743 } else { 744 } else {
744 /* Disable instruction completion */ 745 /* Disable instruction completion */
745 mtspr(SPRN_DBCR0, mfspr(SPRN_DBCR0) & ~DBCR0_IC); 746 mtspr(SPRN_DBCR0, mfspr(SPRN_DBCR0) & ~DBCR0_IC);
746 /* Clear the instruction completion event */ 747 /* Clear the instruction completion event */
747 mtspr(SPRN_DBSR, DBSR_IC); 748 mtspr(SPRN_DBSR, DBSR_IC);
748 if (debugger_sstep(regs)) 749 if (debugger_sstep(regs))
749 return; 750 return;
750 } 751 }
751 _exception(SIGTRAP, regs, TRAP_TRACE, 0); 752 _exception(SIGTRAP, regs, TRAP_TRACE, 0);
752 } 753 }
753 } 754 }
754 #endif /* CONFIG_40x || CONFIG_BOOKE */ 755 #endif /* CONFIG_40x || CONFIG_BOOKE */
755 756
756 #if !defined(CONFIG_TAU_INT) 757 #if !defined(CONFIG_TAU_INT)
757 void TAUException(struct pt_regs *regs) 758 void TAUException(struct pt_regs *regs)
758 { 759 {
759 printk("TAU trap at PC: %lx, MSR: %lx, vector=%lx %s\n", 760 printk("TAU trap at PC: %lx, MSR: %lx, vector=%lx %s\n",
760 regs->nip, regs->msr, regs->trap, print_tainted()); 761 regs->nip, regs->msr, regs->trap, print_tainted());
761 } 762 }
762 #endif /* CONFIG_TAU_INT */ 763 #endif /* CONFIG_TAU_INT */
763 764
764 /* 765 /*
765 * FP unavailable trap from kernel - print a message, but let 766 * FP unavailable trap from kernel - print a message, but let
766 * the task use FP in the kernel until it returns to user mode. 767 * the task use FP in the kernel until it returns to user mode.
767 */ 768 */
768 void kernel_fp_unavailable_exception(struct pt_regs *regs) 769 void kernel_fp_unavailable_exception(struct pt_regs *regs)
769 { 770 {
770 regs->msr |= MSR_FP; 771 regs->msr |= MSR_FP;
771 printk(KERN_ERR "floating point used in kernel (task=%p, pc=%lx)\n", 772 printk(KERN_ERR "floating point used in kernel (task=%p, pc=%lx)\n",
772 current, regs->nip); 773 current, regs->nip);
773 } 774 }
774 775
775 void altivec_unavailable_exception(struct pt_regs *regs) 776 void altivec_unavailable_exception(struct pt_regs *regs)
776 { 777 {
777 static int kernel_altivec_count; 778 static int kernel_altivec_count;
778 779
779 #ifndef CONFIG_ALTIVEC 780 #ifndef CONFIG_ALTIVEC
780 if (user_mode(regs)) { 781 if (user_mode(regs)) {
781 /* A user program has executed an altivec instruction, 782 /* A user program has executed an altivec instruction,
782 but this kernel doesn't support altivec. */ 783 but this kernel doesn't support altivec. */
783 _exception(SIGILL, regs, ILL_ILLOPC, regs->nip); 784 _exception(SIGILL, regs, ILL_ILLOPC, regs->nip);
784 return; 785 return;
785 } 786 }
786 #endif 787 #endif
787 /* The kernel has executed an altivec instruction without 788 /* The kernel has executed an altivec instruction without
788 first enabling altivec. Whinge but let it do it. */ 789 first enabling altivec. Whinge but let it do it. */
789 if (++kernel_altivec_count < 10) 790 if (++kernel_altivec_count < 10)
790 printk(KERN_ERR "AltiVec used in kernel (task=%p, pc=%lx)\n", 791 printk(KERN_ERR "AltiVec used in kernel (task=%p, pc=%lx)\n",
791 current, regs->nip); 792 current, regs->nip);
792 regs->msr |= MSR_VEC; 793 regs->msr |= MSR_VEC;
793 } 794 }
794 795
795 #ifdef CONFIG_ALTIVEC 796 #ifdef CONFIG_ALTIVEC
796 void altivec_assist_exception(struct pt_regs *regs) 797 void altivec_assist_exception(struct pt_regs *regs)
797 { 798 {
798 int err; 799 int err;
799 800
800 preempt_disable(); 801 preempt_disable();
801 if (regs->msr & MSR_VEC) 802 if (regs->msr & MSR_VEC)
802 giveup_altivec(current); 803 giveup_altivec(current);
803 preempt_enable(); 804 preempt_enable();
804 if (!user_mode(regs)) { 805 if (!user_mode(regs)) {
805 printk(KERN_ERR "altivec assist exception in kernel mode" 806 printk(KERN_ERR "altivec assist exception in kernel mode"
806 " at %lx\n", regs->nip); 807 " at %lx\n", regs->nip);
807 debugger(regs); 808 debugger(regs);
808 die("altivec assist exception", regs, SIGFPE); 809 die("altivec assist exception", regs, SIGFPE);
809 return; 810 return;
810 } 811 }
811 812
812 err = emulate_altivec(regs); 813 err = emulate_altivec(regs);
813 if (err == 0) { 814 if (err == 0) {
814 regs->nip += 4; /* skip emulated instruction */ 815 regs->nip += 4; /* skip emulated instruction */
815 emulate_single_step(regs); 816 emulate_single_step(regs);
816 return; 817 return;
817 } 818 }
818 819
819 if (err == -EFAULT) { 820 if (err == -EFAULT) {
820 /* got an error reading the instruction */ 821 /* got an error reading the instruction */
821 _exception(SIGSEGV, regs, SEGV_ACCERR, regs->nip); 822 _exception(SIGSEGV, regs, SEGV_ACCERR, regs->nip);
822 } else { 823 } else {
823 /* didn't recognize the instruction */ 824 /* didn't recognize the instruction */
824 /* XXX quick hack for now: set the non-Java bit in the VSCR */ 825 /* XXX quick hack for now: set the non-Java bit in the VSCR */
825 printk(KERN_ERR "unrecognized altivec instruction " 826 printk(KERN_ERR "unrecognized altivec instruction "
826 "in %s at %lx\n", current->comm, regs->nip); 827 "in %s at %lx\n", current->comm, regs->nip);
827 current->thread.vscr.u[3] |= 0x10000; 828 current->thread.vscr.u[3] |= 0x10000;
828 } 829 }
829 } 830 }
830 #endif /* CONFIG_ALTIVEC */ 831 #endif /* CONFIG_ALTIVEC */
831 832
832 #ifdef CONFIG_E500 833 #ifdef CONFIG_E500
833 void performance_monitor_exception(struct pt_regs *regs) 834 void performance_monitor_exception(struct pt_regs *regs)
834 { 835 {
835 perf_irq(regs); 836 perf_irq(regs);
836 } 837 }
837 #endif 838 #endif
838 839
839 #ifdef CONFIG_FSL_BOOKE 840 #ifdef CONFIG_FSL_BOOKE
840 void CacheLockingException(struct pt_regs *regs, unsigned long address, 841 void CacheLockingException(struct pt_regs *regs, unsigned long address,
841 unsigned long error_code) 842 unsigned long error_code)
842 { 843 {
843 /* We treat cache locking instructions from the user 844 /* We treat cache locking instructions from the user
844 * as priv ops; in the future we could try to do 845 * as priv ops; in the future we could try to do
845 * something smarter 846 * something smarter
846 */ 847 */
847 if (error_code & (ESR_DLK|ESR_ILK)) 848 if (error_code & (ESR_DLK|ESR_ILK))
848 _exception(SIGILL, regs, ILL_PRVOPC, regs->nip); 849 _exception(SIGILL, regs, ILL_PRVOPC, regs->nip);
849 return; 850 return;
850 } 851 }
851 #endif /* CONFIG_FSL_BOOKE */ 852 #endif /* CONFIG_FSL_BOOKE */
852 853
853 #ifdef CONFIG_SPE 854 #ifdef CONFIG_SPE
854 void SPEFloatingPointException(struct pt_regs *regs) 855 void SPEFloatingPointException(struct pt_regs *regs)
855 { 856 {
856 unsigned long spefscr; 857 unsigned long spefscr;
857 int fpexc_mode; 858 int fpexc_mode;
858 int code = 0; 859 int code = 0;
859 860
860 spefscr = current->thread.spefscr; 861 spefscr = current->thread.spefscr;
861 fpexc_mode = current->thread.fpexc_mode; 862 fpexc_mode = current->thread.fpexc_mode;
862 863
863 /* Hardware does not necessarily set sticky 864 /* Hardware does not necessarily set sticky
864 * underflow/overflow/invalid flags */ 865 * underflow/overflow/invalid flags */
865 if ((spefscr & SPEFSCR_FOVF) && (fpexc_mode & PR_FP_EXC_OVF)) { 866 if ((spefscr & SPEFSCR_FOVF) && (fpexc_mode & PR_FP_EXC_OVF)) {
866 code = FPE_FLTOVF; 867 code = FPE_FLTOVF;
867 spefscr |= SPEFSCR_FOVFS; 868 spefscr |= SPEFSCR_FOVFS;
868 } 869 }
869 else if ((spefscr & SPEFSCR_FUNF) && (fpexc_mode & PR_FP_EXC_UND)) { 870 else if ((spefscr & SPEFSCR_FUNF) && (fpexc_mode & PR_FP_EXC_UND)) {
870 code = FPE_FLTUND; 871 code = FPE_FLTUND;
871 spefscr |= SPEFSCR_FUNFS; 872 spefscr |= SPEFSCR_FUNFS;
872 } 873 }
873 else if ((spefscr & SPEFSCR_FDBZ) && (fpexc_mode & PR_FP_EXC_DIV)) 874 else if ((spefscr & SPEFSCR_FDBZ) && (fpexc_mode & PR_FP_EXC_DIV))
874 code = FPE_FLTDIV; 875 code = FPE_FLTDIV;
875 else if ((spefscr & SPEFSCR_FINV) && (fpexc_mode & PR_FP_EXC_INV)) { 876 else if ((spefscr & SPEFSCR_FINV) && (fpexc_mode & PR_FP_EXC_INV)) {
876 code = FPE_FLTINV; 877 code = FPE_FLTINV;
877 spefscr |= SPEFSCR_FINVS; 878 spefscr |= SPEFSCR_FINVS;
878 } 879 }
879 else if ((spefscr & (SPEFSCR_FG | SPEFSCR_FX)) && (fpexc_mode & PR_FP_EXC_RES)) 880 else if ((spefscr & (SPEFSCR_FG | SPEFSCR_FX)) && (fpexc_mode & PR_FP_EXC_RES))
880 code = FPE_FLTRES; 881 code = FPE_FLTRES;
881 882
882 current->thread.spefscr = spefscr; 883 current->thread.spefscr = spefscr;
883 884
884 _exception(SIGFPE, regs, code, regs->nip); 885 _exception(SIGFPE, regs, code, regs->nip);
885 return; 886 return;
886 } 887 }
887 #endif 888 #endif
888 889
889 #ifdef CONFIG_BOOKE_WDT 890 #ifdef CONFIG_BOOKE_WDT
890 /* 891 /*
891 * Default handler for a Watchdog exception, 892 * Default handler for a Watchdog exception,
892 * spins until a reboot occurs 893 * spins until a reboot occurs
893 */ 894 */
894 void __attribute__ ((weak)) WatchdogHandler(struct pt_regs *regs) 895 void __attribute__ ((weak)) WatchdogHandler(struct pt_regs *regs)
895 { 896 {
896 /* Generic WatchdogHandler, implement your own */ 897 /* Generic WatchdogHandler, implement your own */
897 mtspr(SPRN_TCR, mfspr(SPRN_TCR)&(~TCR_WIE)); 898 mtspr(SPRN_TCR, mfspr(SPRN_TCR)&(~TCR_WIE));
898 return; 899 return;
899 } 900 }
900 901
901 void WatchdogException(struct pt_regs *regs) 902 void WatchdogException(struct pt_regs *regs)
902 { 903 {
903 printk (KERN_EMERG "PowerPC Book-E Watchdog Exception\n"); 904 printk (KERN_EMERG "PowerPC Book-E Watchdog Exception\n");
904 WatchdogHandler(regs); 905 WatchdogHandler(regs);
905 } 906 }
906 #endif 907 #endif
907 908
908 void __init trap_init(void) 909 void __init trap_init(void)
909 { 910 {
910 } 911 }
911 912
arch/s390/kernel/traps.c
1 /* 1 /*
2 * arch/s390/kernel/traps.c 2 * arch/s390/kernel/traps.c
3 * 3 *
4 * S390 version 4 * S390 version
5 * Copyright (C) 1999,2000 IBM Deutschland Entwicklung GmbH, IBM Corporation 5 * Copyright (C) 1999,2000 IBM Deutschland Entwicklung GmbH, IBM Corporation
6 * Author(s): Martin Schwidefsky (schwidefsky@de.ibm.com), 6 * Author(s): Martin Schwidefsky (schwidefsky@de.ibm.com),
7 * Denis Joseph Barrow (djbarrow@de.ibm.com,barrow_dj@yahoo.com), 7 * Denis Joseph Barrow (djbarrow@de.ibm.com,barrow_dj@yahoo.com),
8 * 8 *
9 * Derived from "arch/i386/kernel/traps.c" 9 * Derived from "arch/i386/kernel/traps.c"
10 * Copyright (C) 1991, 1992 Linus Torvalds 10 * Copyright (C) 1991, 1992 Linus Torvalds
11 */ 11 */
12 12
13 /* 13 /*
14 * 'Traps.c' handles hardware traps and faults after we have saved some 14 * 'Traps.c' handles hardware traps and faults after we have saved some
15 * state in 'asm.s'. 15 * state in 'asm.s'.
16 */ 16 */
17 #include <linux/sched.h> 17 #include <linux/sched.h>
18 #include <linux/kernel.h> 18 #include <linux/kernel.h>
19 #include <linux/string.h> 19 #include <linux/string.h>
20 #include <linux/errno.h> 20 #include <linux/errno.h>
21 #include <linux/ptrace.h> 21 #include <linux/ptrace.h>
22 #include <linux/timer.h> 22 #include <linux/timer.h>
23 #include <linux/mm.h> 23 #include <linux/mm.h>
24 #include <linux/smp.h> 24 #include <linux/smp.h>
25 #include <linux/init.h> 25 #include <linux/init.h>
26 #include <linux/interrupt.h> 26 #include <linux/interrupt.h>
27 #include <linux/delay.h> 27 #include <linux/delay.h>
28 #include <linux/module.h> 28 #include <linux/module.h>
29 #include <linux/kdebug.h> 29 #include <linux/kdebug.h>
30 #include <linux/kallsyms.h> 30 #include <linux/kallsyms.h>
31 #include <linux/reboot.h> 31 #include <linux/reboot.h>
32 #include <linux/kprobes.h> 32 #include <linux/kprobes.h>
33 #include <linux/bug.h> 33 #include <linux/bug.h>
34 #include <asm/system.h> 34 #include <asm/system.h>
35 #include <asm/uaccess.h> 35 #include <asm/uaccess.h>
36 #include <asm/io.h> 36 #include <asm/io.h>
37 #include <asm/atomic.h> 37 #include <asm/atomic.h>
38 #include <asm/mathemu.h> 38 #include <asm/mathemu.h>
39 #include <asm/cpcmd.h> 39 #include <asm/cpcmd.h>
40 #include <asm/s390_ext.h> 40 #include <asm/s390_ext.h>
41 #include <asm/lowcore.h> 41 #include <asm/lowcore.h>
42 #include <asm/debug.h> 42 #include <asm/debug.h>
43 43
44 /* Called from entry.S only */ 44 /* Called from entry.S only */
45 extern void handle_per_exception(struct pt_regs *regs); 45 extern void handle_per_exception(struct pt_regs *regs);
46 46
47 typedef void pgm_check_handler_t(struct pt_regs *, long); 47 typedef void pgm_check_handler_t(struct pt_regs *, long);
48 pgm_check_handler_t *pgm_check_table[128]; 48 pgm_check_handler_t *pgm_check_table[128];
49 49
50 #ifdef CONFIG_SYSCTL 50 #ifdef CONFIG_SYSCTL
51 #ifdef CONFIG_PROCESS_DEBUG 51 #ifdef CONFIG_PROCESS_DEBUG
52 int sysctl_userprocess_debug = 1; 52 int sysctl_userprocess_debug = 1;
53 #else 53 #else
54 int sysctl_userprocess_debug = 0; 54 int sysctl_userprocess_debug = 0;
55 #endif 55 #endif
56 #endif 56 #endif
57 57
58 extern pgm_check_handler_t do_protection_exception; 58 extern pgm_check_handler_t do_protection_exception;
59 extern pgm_check_handler_t do_dat_exception; 59 extern pgm_check_handler_t do_dat_exception;
60 extern pgm_check_handler_t do_monitor_call; 60 extern pgm_check_handler_t do_monitor_call;
61 61
62 #define stack_pointer ({ void **sp; asm("la %0,0(15)" : "=&d" (sp)); sp; }) 62 #define stack_pointer ({ void **sp; asm("la %0,0(15)" : "=&d" (sp)); sp; })
63 63
64 #ifndef CONFIG_64BIT 64 #ifndef CONFIG_64BIT
65 #define FOURLONG "%08lx %08lx %08lx %08lx\n" 65 #define FOURLONG "%08lx %08lx %08lx %08lx\n"
66 static int kstack_depth_to_print = 12; 66 static int kstack_depth_to_print = 12;
67 #else /* CONFIG_64BIT */ 67 #else /* CONFIG_64BIT */
68 #define FOURLONG "%016lx %016lx %016lx %016lx\n" 68 #define FOURLONG "%016lx %016lx %016lx %016lx\n"
69 static int kstack_depth_to_print = 20; 69 static int kstack_depth_to_print = 20;
70 #endif /* CONFIG_64BIT */ 70 #endif /* CONFIG_64BIT */
71 71
72 /* 72 /*
73 * For show_trace we have three different stacks to consider: 73 * For show_trace we have three different stacks to consider:
74 * - the panic stack which is used if the kernel stack has overflowed 74 * - the panic stack which is used if the kernel stack has overflowed
75 * - the asynchronous interrupt stack (cpu related) 75 * - the asynchronous interrupt stack (cpu related)
76 * - the synchronous kernel stack (process related) 76 * - the synchronous kernel stack (process related)
77 * The stack trace can start at any of the three stacks and can potentially 77 * The stack trace can start at any of the three stacks and can potentially
78 * touch all of them. The order is: panic stack, async stack, sync stack. 78 * touch all of them. The order is: panic stack, async stack, sync stack.
79 */ 79 */
80 static unsigned long 80 static unsigned long
81 __show_trace(unsigned long sp, unsigned long low, unsigned long high) 81 __show_trace(unsigned long sp, unsigned long low, unsigned long high)
82 { 82 {
83 struct stack_frame *sf; 83 struct stack_frame *sf;
84 struct pt_regs *regs; 84 struct pt_regs *regs;
85 85
86 while (1) { 86 while (1) {
87 sp = sp & PSW_ADDR_INSN; 87 sp = sp & PSW_ADDR_INSN;
88 if (sp < low || sp > high - sizeof(*sf)) 88 if (sp < low || sp > high - sizeof(*sf))
89 return sp; 89 return sp;
90 sf = (struct stack_frame *) sp; 90 sf = (struct stack_frame *) sp;
91 printk("([<%016lx>] ", sf->gprs[8] & PSW_ADDR_INSN); 91 printk("([<%016lx>] ", sf->gprs[8] & PSW_ADDR_INSN);
92 print_symbol("%s)\n", sf->gprs[8] & PSW_ADDR_INSN); 92 print_symbol("%s)\n", sf->gprs[8] & PSW_ADDR_INSN);
93 /* Follow the backchain. */ 93 /* Follow the backchain. */
94 while (1) { 94 while (1) {
95 low = sp; 95 low = sp;
96 sp = sf->back_chain & PSW_ADDR_INSN; 96 sp = sf->back_chain & PSW_ADDR_INSN;
97 if (!sp) 97 if (!sp)
98 break; 98 break;
99 if (sp <= low || sp > high - sizeof(*sf)) 99 if (sp <= low || sp > high - sizeof(*sf))
100 return sp; 100 return sp;
101 sf = (struct stack_frame *) sp; 101 sf = (struct stack_frame *) sp;
102 printk(" [<%016lx>] ", sf->gprs[8] & PSW_ADDR_INSN); 102 printk(" [<%016lx>] ", sf->gprs[8] & PSW_ADDR_INSN);
103 print_symbol("%s\n", sf->gprs[8] & PSW_ADDR_INSN); 103 print_symbol("%s\n", sf->gprs[8] & PSW_ADDR_INSN);
104 } 104 }
105 /* Zero backchain detected, check for interrupt frame. */ 105 /* Zero backchain detected, check for interrupt frame. */
106 sp = (unsigned long) (sf + 1); 106 sp = (unsigned long) (sf + 1);
107 if (sp <= low || sp > high - sizeof(*regs)) 107 if (sp <= low || sp > high - sizeof(*regs))
108 return sp; 108 return sp;
109 regs = (struct pt_regs *) sp; 109 regs = (struct pt_regs *) sp;
110 printk(" [<%016lx>] ", regs->psw.addr & PSW_ADDR_INSN); 110 printk(" [<%016lx>] ", regs->psw.addr & PSW_ADDR_INSN);
111 print_symbol("%s\n", regs->psw.addr & PSW_ADDR_INSN); 111 print_symbol("%s\n", regs->psw.addr & PSW_ADDR_INSN);
112 low = sp; 112 low = sp;
113 sp = regs->gprs[15]; 113 sp = regs->gprs[15];
114 } 114 }
115 } 115 }
116 116
117 void show_trace(struct task_struct *task, unsigned long *stack) 117 void show_trace(struct task_struct *task, unsigned long *stack)
118 { 118 {
119 register unsigned long __r15 asm ("15"); 119 register unsigned long __r15 asm ("15");
120 unsigned long sp; 120 unsigned long sp;
121 121
122 sp = (unsigned long) stack; 122 sp = (unsigned long) stack;
123 if (!sp) 123 if (!sp)
124 sp = task ? task->thread.ksp : __r15; 124 sp = task ? task->thread.ksp : __r15;
125 printk("Call Trace:\n"); 125 printk("Call Trace:\n");
126 #ifdef CONFIG_CHECK_STACK 126 #ifdef CONFIG_CHECK_STACK
127 sp = __show_trace(sp, S390_lowcore.panic_stack - 4096, 127 sp = __show_trace(sp, S390_lowcore.panic_stack - 4096,
128 S390_lowcore.panic_stack); 128 S390_lowcore.panic_stack);
129 #endif 129 #endif
130 sp = __show_trace(sp, S390_lowcore.async_stack - ASYNC_SIZE, 130 sp = __show_trace(sp, S390_lowcore.async_stack - ASYNC_SIZE,
131 S390_lowcore.async_stack); 131 S390_lowcore.async_stack);
132 if (task) 132 if (task)
133 __show_trace(sp, (unsigned long) task_stack_page(task), 133 __show_trace(sp, (unsigned long) task_stack_page(task),
134 (unsigned long) task_stack_page(task) + THREAD_SIZE); 134 (unsigned long) task_stack_page(task) + THREAD_SIZE);
135 else 135 else
136 __show_trace(sp, S390_lowcore.thread_info, 136 __show_trace(sp, S390_lowcore.thread_info,
137 S390_lowcore.thread_info + THREAD_SIZE); 137 S390_lowcore.thread_info + THREAD_SIZE);
138 printk("\n"); 138 printk("\n");
139 if (!task) 139 if (!task)
140 task = current; 140 task = current;
141 debug_show_held_locks(task); 141 debug_show_held_locks(task);
142 } 142 }
143 143
144 void show_stack(struct task_struct *task, unsigned long *sp) 144 void show_stack(struct task_struct *task, unsigned long *sp)
145 { 145 {
146 register unsigned long * __r15 asm ("15"); 146 register unsigned long * __r15 asm ("15");
147 unsigned long *stack; 147 unsigned long *stack;
148 int i; 148 int i;
149 149
150 if (!sp) 150 if (!sp)
151 stack = task ? (unsigned long *) task->thread.ksp : __r15; 151 stack = task ? (unsigned long *) task->thread.ksp : __r15;
152 else 152 else
153 stack = sp; 153 stack = sp;
154 154
155 for (i = 0; i < kstack_depth_to_print; i++) { 155 for (i = 0; i < kstack_depth_to_print; i++) {
156 if (((addr_t) stack & (THREAD_SIZE-1)) == 0) 156 if (((addr_t) stack & (THREAD_SIZE-1)) == 0)
157 break; 157 break;
158 if (i && ((i * sizeof (long) % 32) == 0)) 158 if (i && ((i * sizeof (long) % 32) == 0))
159 printk("\n "); 159 printk("\n ");
160 printk("%p ", (void *)*stack++); 160 printk("%p ", (void *)*stack++);
161 } 161 }
162 printk("\n"); 162 printk("\n");
163 show_trace(task, sp); 163 show_trace(task, sp);
164 } 164 }
165 165
166 /* 166 /*
167 * The architecture-independent dump_stack generator 167 * The architecture-independent dump_stack generator
168 */ 168 */
169 void dump_stack(void) 169 void dump_stack(void)
170 { 170 {
171 show_stack(NULL, NULL); 171 show_stack(NULL, NULL);
172 } 172 }
173 173
174 EXPORT_SYMBOL(dump_stack); 174 EXPORT_SYMBOL(dump_stack);
175 175
176 static inline int mask_bits(struct pt_regs *regs, unsigned long bits) 176 static inline int mask_bits(struct pt_regs *regs, unsigned long bits)
177 { 177 {
178 return (regs->psw.mask & bits) / ((~bits + 1) & bits); 178 return (regs->psw.mask & bits) / ((~bits + 1) & bits);
179 } 179 }
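
mask_bits() above is a shift-free field extractor: (~bits + 1) & bits is the two's-complement trick that isolates the lowest set bit of the mask, and dividing by that power of two right-aligns the masked field. A small demonstration with a two-bit field (mask value chosen purely for illustration):

    #include <assert.h>

    int main(void)
    {
            unsigned long mask = 0x00030000UL;      /* 2-bit field at bits 16-17 */
            unsigned long word = 0x00020000UL;      /* field holds the value 2 */

            assert(((~mask + 1) & mask) == 0x00010000UL);
            assert((word & mask) / ((~mask + 1) & mask) == 2);
            return 0;
    }
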
180 180
181 void show_registers(struct pt_regs *regs) 181 void show_registers(struct pt_regs *regs)
182 { 182 {
183 char *mode; 183 char *mode;
184 184
185 mode = (regs->psw.mask & PSW_MASK_PSTATE) ? "User" : "Krnl"; 185 mode = (regs->psw.mask & PSW_MASK_PSTATE) ? "User" : "Krnl";
186 printk("%s PSW : %p %p", 186 printk("%s PSW : %p %p",
187 mode, (void *) regs->psw.mask, 187 mode, (void *) regs->psw.mask,
188 (void *) regs->psw.addr); 188 (void *) regs->psw.addr);
189 print_symbol(" (%s)\n", regs->psw.addr & PSW_ADDR_INSN); 189 print_symbol(" (%s)\n", regs->psw.addr & PSW_ADDR_INSN);
190 printk(" R:%x T:%x IO:%x EX:%x Key:%x M:%x W:%x " 190 printk(" R:%x T:%x IO:%x EX:%x Key:%x M:%x W:%x "
191 "P:%x AS:%x CC:%x PM:%x", mask_bits(regs, PSW_MASK_PER), 191 "P:%x AS:%x CC:%x PM:%x", mask_bits(regs, PSW_MASK_PER),
192 mask_bits(regs, PSW_MASK_DAT), mask_bits(regs, PSW_MASK_IO), 192 mask_bits(regs, PSW_MASK_DAT), mask_bits(regs, PSW_MASK_IO),
193 mask_bits(regs, PSW_MASK_EXT), mask_bits(regs, PSW_MASK_KEY), 193 mask_bits(regs, PSW_MASK_EXT), mask_bits(regs, PSW_MASK_KEY),
194 mask_bits(regs, PSW_MASK_MCHECK), mask_bits(regs, PSW_MASK_WAIT), 194 mask_bits(regs, PSW_MASK_MCHECK), mask_bits(regs, PSW_MASK_WAIT),
195 mask_bits(regs, PSW_MASK_PSTATE), mask_bits(regs, PSW_MASK_ASC), 195 mask_bits(regs, PSW_MASK_PSTATE), mask_bits(regs, PSW_MASK_ASC),
196 mask_bits(regs, PSW_MASK_CC), mask_bits(regs, PSW_MASK_PM)); 196 mask_bits(regs, PSW_MASK_CC), mask_bits(regs, PSW_MASK_PM));
197 #ifdef CONFIG_64BIT 197 #ifdef CONFIG_64BIT
198 printk(" EA:%x", mask_bits(regs, PSW_BASE_BITS)); 198 printk(" EA:%x", mask_bits(regs, PSW_BASE_BITS));
199 #endif 199 #endif
200 printk("\n%s GPRS: " FOURLONG, mode, 200 printk("\n%s GPRS: " FOURLONG, mode,
201 regs->gprs[0], regs->gprs[1], regs->gprs[2], regs->gprs[3]); 201 regs->gprs[0], regs->gprs[1], regs->gprs[2], regs->gprs[3]);
202 printk(" " FOURLONG, 202 printk(" " FOURLONG,
203 regs->gprs[4], regs->gprs[5], regs->gprs[6], regs->gprs[7]); 203 regs->gprs[4], regs->gprs[5], regs->gprs[6], regs->gprs[7]);
204 printk(" " FOURLONG, 204 printk(" " FOURLONG,
205 regs->gprs[8], regs->gprs[9], regs->gprs[10], regs->gprs[11]); 205 regs->gprs[8], regs->gprs[9], regs->gprs[10], regs->gprs[11]);
206 printk(" " FOURLONG, 206 printk(" " FOURLONG,
207 regs->gprs[12], regs->gprs[13], regs->gprs[14], regs->gprs[15]); 207 regs->gprs[12], regs->gprs[13], regs->gprs[14], regs->gprs[15]);
208 208
209 show_code(regs); 209 show_code(regs);
210 } 210 }
211 211
212 /* This is called from fs/proc/array.c */ 212 /* This is called from fs/proc/array.c */
213 char *task_show_regs(struct task_struct *task, char *buffer) 213 char *task_show_regs(struct task_struct *task, char *buffer)
214 { 214 {
215 struct pt_regs *regs; 215 struct pt_regs *regs;
216 216
217 regs = task_pt_regs(task); 217 regs = task_pt_regs(task);
218 buffer += sprintf(buffer, "task: %p, ksp: %p\n", 218 buffer += sprintf(buffer, "task: %p, ksp: %p\n",
219 task, (void *)task->thread.ksp); 219 task, (void *)task->thread.ksp);
220 buffer += sprintf(buffer, "User PSW : %p %p\n", 220 buffer += sprintf(buffer, "User PSW : %p %p\n",
221 (void *) regs->psw.mask, (void *)regs->psw.addr); 221 (void *) regs->psw.mask, (void *)regs->psw.addr);
222 222
223 buffer += sprintf(buffer, "User GPRS: " FOURLONG, 223 buffer += sprintf(buffer, "User GPRS: " FOURLONG,
224 regs->gprs[0], regs->gprs[1], 224 regs->gprs[0], regs->gprs[1],
225 regs->gprs[2], regs->gprs[3]); 225 regs->gprs[2], regs->gprs[3]);
226 buffer += sprintf(buffer, " " FOURLONG, 226 buffer += sprintf(buffer, " " FOURLONG,
227 regs->gprs[4], regs->gprs[5], 227 regs->gprs[4], regs->gprs[5],
228 regs->gprs[6], regs->gprs[7]); 228 regs->gprs[6], regs->gprs[7]);
229 buffer += sprintf(buffer, " " FOURLONG, 229 buffer += sprintf(buffer, " " FOURLONG,
230 regs->gprs[8], regs->gprs[9], 230 regs->gprs[8], regs->gprs[9],
231 regs->gprs[10], regs->gprs[11]); 231 regs->gprs[10], regs->gprs[11]);
232 buffer += sprintf(buffer, " " FOURLONG, 232 buffer += sprintf(buffer, " " FOURLONG,
233 regs->gprs[12], regs->gprs[13], 233 regs->gprs[12], regs->gprs[13],
234 regs->gprs[14], regs->gprs[15]); 234 regs->gprs[14], regs->gprs[15]);
235 buffer += sprintf(buffer, "User ACRS: %08x %08x %08x %08x\n", 235 buffer += sprintf(buffer, "User ACRS: %08x %08x %08x %08x\n",
236 task->thread.acrs[0], task->thread.acrs[1], 236 task->thread.acrs[0], task->thread.acrs[1],
237 task->thread.acrs[2], task->thread.acrs[3]); 237 task->thread.acrs[2], task->thread.acrs[3]);
238 buffer += sprintf(buffer, " %08x %08x %08x %08x\n", 238 buffer += sprintf(buffer, " %08x %08x %08x %08x\n",
239 task->thread.acrs[4], task->thread.acrs[5], 239 task->thread.acrs[4], task->thread.acrs[5],
240 task->thread.acrs[6], task->thread.acrs[7]); 240 task->thread.acrs[6], task->thread.acrs[7]);
241 buffer += sprintf(buffer, " %08x %08x %08x %08x\n", 241 buffer += sprintf(buffer, " %08x %08x %08x %08x\n",
242 task->thread.acrs[8], task->thread.acrs[9], 242 task->thread.acrs[8], task->thread.acrs[9],
243 task->thread.acrs[10], task->thread.acrs[11]); 243 task->thread.acrs[10], task->thread.acrs[11]);
244 buffer += sprintf(buffer, " %08x %08x %08x %08x\n", 244 buffer += sprintf(buffer, " %08x %08x %08x %08x\n",
245 task->thread.acrs[12], task->thread.acrs[13], 245 task->thread.acrs[12], task->thread.acrs[13],
246 task->thread.acrs[14], task->thread.acrs[15]); 246 task->thread.acrs[14], task->thread.acrs[15]);
247 return buffer; 247 return buffer;
248 } 248 }
249 249
250 static DEFINE_SPINLOCK(die_lock); 250 static DEFINE_SPINLOCK(die_lock);
251 251
252 void die(const char * str, struct pt_regs * regs, long err) 252 void die(const char * str, struct pt_regs * regs, long err)
253 { 253 {
254 static int die_counter; 254 static int die_counter;
255 255
256 oops_enter(); 256 oops_enter();
257 debug_stop_all(); 257 debug_stop_all();
258 console_verbose(); 258 console_verbose();
259 spin_lock_irq(&die_lock); 259 spin_lock_irq(&die_lock);
260 bust_spinlocks(1); 260 bust_spinlocks(1);
261 printk("%s: %04lx [#%d]\n", str, err & 0xffff, ++die_counter); 261 printk("%s: %04lx [#%d]\n", str, err & 0xffff, ++die_counter);
262 print_modules(); 262 print_modules();
263 show_regs(regs); 263 show_regs(regs);
264 bust_spinlocks(0); 264 bust_spinlocks(0);
265 add_taint(TAINT_DIE);
265 spin_unlock_irq(&die_lock); 266 spin_unlock_irq(&die_lock);
266 if (in_interrupt()) 267 if (in_interrupt())
267 panic("Fatal exception in interrupt"); 268 panic("Fatal exception in interrupt");
268 if (panic_on_oops) 269 if (panic_on_oops)
269 panic("Fatal exception: panic_on_oops"); 270 panic("Fatal exception: panic_on_oops");
270 oops_exit(); 271 oops_exit();
271 do_exit(SIGSEGV); 272 do_exit(SIGSEGV);
272 } 273 }
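
The add_taint(TAINT_DIE) call is the one-line change this commit makes to each architecture's die(): the first oops sets a bit in the kernel's global taint mask, so every later oops or SysRq dump that prints print_tainted() carries the extra flag. A simplified user-space model of that mechanism, not the kernel's actual implementation (names ending in _model are ours; the bit value mirrors the 2.6-era TAINT_DIE definition):

    #include <assert.h>
    #include <string.h>

    #define TAINT_DIE_MODEL (1UL << 7)      /* mirrors 2.6's TAINT_DIE */

    static unsigned long tainted_model;

    static void die_model(void)
    {
            tainted_model |= TAINT_DIE_MODEL;       /* what add_taint() does */
    }

    static const char *taint_flag_model(void)
    {
            /* print_tainted() appends 'D' when this bit is set. */
            return (tainted_model & TAINT_DIE_MODEL) ? "D" : "";
    }

    int main(void)
    {
            assert(strcmp(taint_flag_model(), "") == 0);
            die_model();                    /* first oops taints the kernel */
            assert(strcmp(taint_flag_model(), "D") == 0);
            return 0;
    }
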
273 274
274 static void inline 275 static void inline
275 report_user_fault(long interruption_code, struct pt_regs *regs) 276 report_user_fault(long interruption_code, struct pt_regs *regs)
276 { 277 {
277 #if defined(CONFIG_SYSCTL) 278 #if defined(CONFIG_SYSCTL)
278 if (!sysctl_userprocess_debug) 279 if (!sysctl_userprocess_debug)
279 return; 280 return;
280 #endif 281 #endif
281 #if defined(CONFIG_SYSCTL) || defined(CONFIG_PROCESS_DEBUG) 282 #if defined(CONFIG_SYSCTL) || defined(CONFIG_PROCESS_DEBUG)
282 printk("User process fault: interruption code 0x%lX\n", 283 printk("User process fault: interruption code 0x%lX\n",
283 interruption_code); 284 interruption_code);
284 show_regs(regs); 285 show_regs(regs);
285 #endif 286 #endif
286 } 287 }
287 288
288 int is_valid_bugaddr(unsigned long addr) 289 int is_valid_bugaddr(unsigned long addr)
289 { 290 {
290 return 1; 291 return 1;
291 } 292 }
292 293
293 static void __kprobes inline do_trap(long interruption_code, int signr, 294 static void __kprobes inline do_trap(long interruption_code, int signr,
294 char *str, struct pt_regs *regs, 295 char *str, struct pt_regs *regs,
295 siginfo_t *info) 296 siginfo_t *info)
296 { 297 {
297 /* 298 /*
298 * We got all needed information from the lowcore and can 299 * We got all needed information from the lowcore and can
299 * now safely switch on interrupts. 300 * now safely switch on interrupts.
300 */ 301 */
301 if (regs->psw.mask & PSW_MASK_PSTATE) 302 if (regs->psw.mask & PSW_MASK_PSTATE)
302 local_irq_enable(); 303 local_irq_enable();
303 304
304 if (notify_die(DIE_TRAP, str, regs, interruption_code, 305 if (notify_die(DIE_TRAP, str, regs, interruption_code,
305 interruption_code, signr) == NOTIFY_STOP) 306 interruption_code, signr) == NOTIFY_STOP)
306 return; 307 return;
307 308
308 if (regs->psw.mask & PSW_MASK_PSTATE) { 309 if (regs->psw.mask & PSW_MASK_PSTATE) {
309 struct task_struct *tsk = current; 310 struct task_struct *tsk = current;
310 311
311 tsk->thread.trap_no = interruption_code & 0xffff; 312 tsk->thread.trap_no = interruption_code & 0xffff;
312 force_sig_info(signr, info, tsk); 313 force_sig_info(signr, info, tsk);
313 report_user_fault(interruption_code, regs); 314 report_user_fault(interruption_code, regs);
314 } else { 315 } else {
315 const struct exception_table_entry *fixup; 316 const struct exception_table_entry *fixup;
316 fixup = search_exception_tables(regs->psw.addr & PSW_ADDR_INSN); 317 fixup = search_exception_tables(regs->psw.addr & PSW_ADDR_INSN);
317 if (fixup) 318 if (fixup)
318 regs->psw.addr = fixup->fixup | PSW_ADDR_AMODE; 319 regs->psw.addr = fixup->fixup | PSW_ADDR_AMODE;
319 else { 320 else {
320 enum bug_trap_type btt; 321 enum bug_trap_type btt;
321 322
322 btt = report_bug(regs->psw.addr & PSW_ADDR_INSN, regs); 323 btt = report_bug(regs->psw.addr & PSW_ADDR_INSN, regs);
323 if (btt == BUG_TRAP_TYPE_WARN) 324 if (btt == BUG_TRAP_TYPE_WARN)
324 return; 325 return;
325 die(str, regs, interruption_code); 326 die(str, regs, interruption_code);
326 } 327 }
327 } 328 }
328 } 329 }
329 330
330 static inline void __user *get_check_address(struct pt_regs *regs) 331 static inline void __user *get_check_address(struct pt_regs *regs)
331 { 332 {
332 return (void __user *)((regs->psw.addr-S390_lowcore.pgm_ilc) & PSW_ADDR_INSN); 333 return (void __user *)((regs->psw.addr-S390_lowcore.pgm_ilc) & PSW_ADDR_INSN);
333 } 334 }
334 335
335 void __kprobes do_single_step(struct pt_regs *regs) 336 void __kprobes do_single_step(struct pt_regs *regs)
336 { 337 {
337 if (notify_die(DIE_SSTEP, "sstep", regs, 0, 0, 338 if (notify_die(DIE_SSTEP, "sstep", regs, 0, 0,
338 SIGTRAP) == NOTIFY_STOP){ 339 SIGTRAP) == NOTIFY_STOP){
339 return; 340 return;
340 } 341 }
341 if ((current->ptrace & PT_PTRACED) != 0) 342 if ((current->ptrace & PT_PTRACED) != 0)
342 force_sig(SIGTRAP, current); 343 force_sig(SIGTRAP, current);
343 } 344 }
344 345
345 static void default_trap_handler(struct pt_regs * regs, long interruption_code) 346 static void default_trap_handler(struct pt_regs * regs, long interruption_code)
346 { 347 {
347 if (regs->psw.mask & PSW_MASK_PSTATE) { 348 if (regs->psw.mask & PSW_MASK_PSTATE) {
348 local_irq_enable(); 349 local_irq_enable();
349 report_user_fault(interruption_code, regs); 350 report_user_fault(interruption_code, regs);
350 do_exit(SIGSEGV); 351 do_exit(SIGSEGV);
351 } else 352 } else
352 die("Unknown program exception", regs, interruption_code); 353 die("Unknown program exception", regs, interruption_code);
353 } 354 }
354 355
355 #define DO_ERROR_INFO(signr, str, name, sicode, siaddr) \ 356 #define DO_ERROR_INFO(signr, str, name, sicode, siaddr) \
356 static void name(struct pt_regs * regs, long interruption_code) \ 357 static void name(struct pt_regs * regs, long interruption_code) \
357 { \ 358 { \
358 siginfo_t info; \ 359 siginfo_t info; \
359 info.si_signo = signr; \ 360 info.si_signo = signr; \
360 info.si_errno = 0; \ 361 info.si_errno = 0; \
361 info.si_code = sicode; \ 362 info.si_code = sicode; \
362 info.si_addr = siaddr; \ 363 info.si_addr = siaddr; \
363 do_trap(interruption_code, signr, str, regs, &info); \ 364 do_trap(interruption_code, signr, str, regs, &info); \
364 } 365 }
365 366
366 DO_ERROR_INFO(SIGILL, "addressing exception", addressing_exception, 367 DO_ERROR_INFO(SIGILL, "addressing exception", addressing_exception,
367 ILL_ILLADR, get_check_address(regs)) 368 ILL_ILLADR, get_check_address(regs))
368 DO_ERROR_INFO(SIGILL, "execute exception", execute_exception, 369 DO_ERROR_INFO(SIGILL, "execute exception", execute_exception,
369 ILL_ILLOPN, get_check_address(regs)) 370 ILL_ILLOPN, get_check_address(regs))
370 DO_ERROR_INFO(SIGFPE, "fixpoint divide exception", divide_exception, 371 DO_ERROR_INFO(SIGFPE, "fixpoint divide exception", divide_exception,
371 FPE_INTDIV, get_check_address(regs)) 372 FPE_INTDIV, get_check_address(regs))
372 DO_ERROR_INFO(SIGFPE, "fixpoint overflow exception", overflow_exception, 373 DO_ERROR_INFO(SIGFPE, "fixpoint overflow exception", overflow_exception,
373 FPE_INTOVF, get_check_address(regs)) 374 FPE_INTOVF, get_check_address(regs))
374 DO_ERROR_INFO(SIGFPE, "HFP overflow exception", hfp_overflow_exception, 375 DO_ERROR_INFO(SIGFPE, "HFP overflow exception", hfp_overflow_exception,
375 FPE_FLTOVF, get_check_address(regs)) 376 FPE_FLTOVF, get_check_address(regs))
376 DO_ERROR_INFO(SIGFPE, "HFP underflow exception", hfp_underflow_exception, 377 DO_ERROR_INFO(SIGFPE, "HFP underflow exception", hfp_underflow_exception,
377 FPE_FLTUND, get_check_address(regs)) 378 FPE_FLTUND, get_check_address(regs))
378 DO_ERROR_INFO(SIGFPE, "HFP significance exception", hfp_significance_exception, 379 DO_ERROR_INFO(SIGFPE, "HFP significance exception", hfp_significance_exception,
379 FPE_FLTRES, get_check_address(regs)) 380 FPE_FLTRES, get_check_address(regs))
380 DO_ERROR_INFO(SIGFPE, "HFP divide exception", hfp_divide_exception, 381 DO_ERROR_INFO(SIGFPE, "HFP divide exception", hfp_divide_exception,
381 FPE_FLTDIV, get_check_address(regs)) 382 FPE_FLTDIV, get_check_address(regs))
382 DO_ERROR_INFO(SIGFPE, "HFP square root exception", hfp_sqrt_exception, 383 DO_ERROR_INFO(SIGFPE, "HFP square root exception", hfp_sqrt_exception,
383 FPE_FLTINV, get_check_address(regs)) 384 FPE_FLTINV, get_check_address(regs))
384 DO_ERROR_INFO(SIGILL, "operand exception", operand_exception, 385 DO_ERROR_INFO(SIGILL, "operand exception", operand_exception,
385 ILL_ILLOPN, get_check_address(regs)) 386 ILL_ILLOPN, get_check_address(regs))
386 DO_ERROR_INFO(SIGILL, "privileged operation", privileged_op, 387 DO_ERROR_INFO(SIGILL, "privileged operation", privileged_op,
387 ILL_PRVOPC, get_check_address(regs)) 388 ILL_PRVOPC, get_check_address(regs))
388 DO_ERROR_INFO(SIGILL, "special operation exception", special_op_exception, 389 DO_ERROR_INFO(SIGILL, "special operation exception", special_op_exception,
389 ILL_ILLOPN, get_check_address(regs)) 390 ILL_ILLOPN, get_check_address(regs))
390 DO_ERROR_INFO(SIGILL, "translation exception", translation_exception, 391 DO_ERROR_INFO(SIGILL, "translation exception", translation_exception,
391 ILL_ILLOPN, get_check_address(regs)) 392 ILL_ILLOPN, get_check_address(regs))
392 393
393 static inline void 394 static inline void
394 do_fp_trap(struct pt_regs *regs, void __user *location, 395 do_fp_trap(struct pt_regs *regs, void __user *location,
395 int fpc, long interruption_code) 396 int fpc, long interruption_code)
396 { 397 {
397 siginfo_t si; 398 siginfo_t si;
398 399
399 si.si_signo = SIGFPE; 400 si.si_signo = SIGFPE;
400 si.si_errno = 0; 401 si.si_errno = 0;
401 si.si_addr = location; 402 si.si_addr = location;
402 si.si_code = 0; 403 si.si_code = 0;
403 /* FPC[2] is Data Exception Code */ 404 /* FPC[2] is Data Exception Code */
404 if ((fpc & 0x00000300) == 0) { 405 if ((fpc & 0x00000300) == 0) {
405 /* bits 6 and 7 of DXC are 0 iff IEEE exception */ 406 /* bits 6 and 7 of DXC are 0 iff IEEE exception */
406 if (fpc & 0x8000) /* invalid fp operation */ 407 if (fpc & 0x8000) /* invalid fp operation */
407 si.si_code = FPE_FLTINV; 408 si.si_code = FPE_FLTINV;
408 else if (fpc & 0x4000) /* div by 0 */ 409 else if (fpc & 0x4000) /* div by 0 */
409 si.si_code = FPE_FLTDIV; 410 si.si_code = FPE_FLTDIV;
410 else if (fpc & 0x2000) /* overflow */ 411 else if (fpc & 0x2000) /* overflow */
411 si.si_code = FPE_FLTOVF; 412 si.si_code = FPE_FLTOVF;
412 else if (fpc & 0x1000) /* underflow */ 413 else if (fpc & 0x1000) /* underflow */
413 si.si_code = FPE_FLTUND; 414 si.si_code = FPE_FLTUND;
414 else if (fpc & 0x0800) /* inexact */ 415 else if (fpc & 0x0800) /* inexact */
415 si.si_code = FPE_FLTRES; 416 si.si_code = FPE_FLTRES;
416 } 417 }
417 current->thread.ieee_instruction_pointer = (addr_t) location; 418 current->thread.ieee_instruction_pointer = (addr_t) location;
418 do_trap(interruption_code, SIGFPE, 419 do_trap(interruption_code, SIGFPE,
419 "floating point exception", regs, &si); 420 "floating point exception", regs, &si);
420 } 421 }
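
The DXC decoding in do_fp_trap() above is easier to audit as a standalone table: byte 2 of the FPC holds the Data Exception Code, and only when its bits 6-7 are clear do the flag bits select an IEEE condition. A userspace restatement of that mapping (a reference sketch, not part of the patch; the FPE_* constants come from signal.h):

	#include <signal.h>
	#include <stdio.h>

	/* Mirror the bit tests in do_fp_trap(); 0 means a non-IEEE DXC. */
	static int fpc_to_si_code(unsigned int fpc)
	{
		if (fpc & 0x00000300)	/* DXC bits 6-7 set: not IEEE */
			return 0;
		if (fpc & 0x8000)	/* invalid fp operation */
			return FPE_FLTINV;
		if (fpc & 0x4000)	/* division by zero */
			return FPE_FLTDIV;
		if (fpc & 0x2000)	/* overflow */
			return FPE_FLTOVF;
		if (fpc & 0x1000)	/* underflow */
			return FPE_FLTUND;
		if (fpc & 0x0800)	/* inexact */
			return FPE_FLTRES;
		return 0;
	}

	int main(void)
	{
		printf("%d\n", fpc_to_si_code(0x8000));	/* prints FPE_FLTINV */
		return 0;
	}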
421 422
422 static void illegal_op(struct pt_regs * regs, long interruption_code) 423 static void illegal_op(struct pt_regs * regs, long interruption_code)
423 { 424 {
424 siginfo_t info; 425 siginfo_t info;
425 __u8 opcode[6]; 426 __u8 opcode[6];
426 __u16 __user *location; 427 __u16 __user *location;
427 int signal = 0; 428 int signal = 0;
428 429
429 location = get_check_address(regs); 430 location = get_check_address(regs);
430 431
431 /* 432 /*
432 * We got all needed information from the lowcore and can 433 * We got all needed information from the lowcore and can
433 * now safely switch on interrupts. 434 * now safely switch on interrupts.
434 */ 435 */
435 if (regs->psw.mask & PSW_MASK_PSTATE) 436 if (regs->psw.mask & PSW_MASK_PSTATE)
436 local_irq_enable(); 437 local_irq_enable();
437 438
438 if (regs->psw.mask & PSW_MASK_PSTATE) { 439 if (regs->psw.mask & PSW_MASK_PSTATE) {
439 if (get_user(*((__u16 *) opcode), (__u16 __user *) location)) 440 if (get_user(*((__u16 *) opcode), (__u16 __user *) location))
440 return; 441 return;
441 if (*((__u16 *) opcode) == S390_BREAKPOINT_U16) { 442 if (*((__u16 *) opcode) == S390_BREAKPOINT_U16) {
442 if (current->ptrace & PT_PTRACED) 443 if (current->ptrace & PT_PTRACED)
443 force_sig(SIGTRAP, current); 444 force_sig(SIGTRAP, current);
444 else 445 else
445 signal = SIGILL; 446 signal = SIGILL;
446 #ifdef CONFIG_MATHEMU 447 #ifdef CONFIG_MATHEMU
447 } else if (opcode[0] == 0xb3) { 448 } else if (opcode[0] == 0xb3) {
448 if (get_user(*((__u16 *) (opcode+2)), location+1)) 449 if (get_user(*((__u16 *) (opcode+2)), location+1))
449 return; 450 return;
450 signal = math_emu_b3(opcode, regs); 451 signal = math_emu_b3(opcode, regs);
451 } else if (opcode[0] == 0xed) { 452 } else if (opcode[0] == 0xed) {
452 if (get_user(*((__u32 *) (opcode+2)), 453 if (get_user(*((__u32 *) (opcode+2)),
453 (__u32 __user *)(location+1))) 454 (__u32 __user *)(location+1)))
454 return; 455 return;
455 signal = math_emu_ed(opcode, regs); 456 signal = math_emu_ed(opcode, regs);
456 } else if (*((__u16 *) opcode) == 0xb299) { 457 } else if (*((__u16 *) opcode) == 0xb299) {
457 if (get_user(*((__u16 *) (opcode+2)), location+1)) 458 if (get_user(*((__u16 *) (opcode+2)), location+1))
458 return; 459 return;
459 signal = math_emu_srnm(opcode, regs); 460 signal = math_emu_srnm(opcode, regs);
460 } else if (*((__u16 *) opcode) == 0xb29c) { 461 } else if (*((__u16 *) opcode) == 0xb29c) {
461 if (get_user(*((__u16 *) (opcode+2)), location+1)) 462 if (get_user(*((__u16 *) (opcode+2)), location+1))
462 return; 463 return;
463 signal = math_emu_stfpc(opcode, regs); 464 signal = math_emu_stfpc(opcode, regs);
464 } else if (*((__u16 *) opcode) == 0xb29d) { 465 } else if (*((__u16 *) opcode) == 0xb29d) {
465 if (get_user(*((__u16 *) (opcode+2)), location+1)) 466 if (get_user(*((__u16 *) (opcode+2)), location+1))
466 return; 467 return;
467 signal = math_emu_lfpc(opcode, regs); 468 signal = math_emu_lfpc(opcode, regs);
468 #endif 469 #endif
469 } else 470 } else
470 signal = SIGILL; 471 signal = SIGILL;
471 } else { 472 } else {
472 /* 473 /*
473 * If we get an illegal op in kernel mode, send it through the 474 * If we get an illegal op in kernel mode, send it through the
474 * kprobes notifier. If kprobes doesn't pick it up, SIGILL 475 * kprobes notifier. If kprobes doesn't pick it up, SIGILL
475 */ 476 */
476 if (notify_die(DIE_BPT, "bpt", regs, interruption_code, 477 if (notify_die(DIE_BPT, "bpt", regs, interruption_code,
477 3, SIGTRAP) != NOTIFY_STOP) 478 3, SIGTRAP) != NOTIFY_STOP)
478 signal = SIGILL; 479 signal = SIGILL;
479 } 480 }
480 481
481 #ifdef CONFIG_MATHEMU 482 #ifdef CONFIG_MATHEMU
482 if (signal == SIGFPE) 483 if (signal == SIGFPE)
483 do_fp_trap(regs, location, 484 do_fp_trap(regs, location,
484 current->thread.fp_regs.fpc, interruption_code); 485 current->thread.fp_regs.fpc, interruption_code);
485 else if (signal == SIGSEGV) { 486 else if (signal == SIGSEGV) {
486 info.si_signo = signal; 487 info.si_signo = signal;
487 info.si_errno = 0; 488 info.si_errno = 0;
488 info.si_code = SEGV_MAPERR; 489 info.si_code = SEGV_MAPERR;
489 info.si_addr = (void __user *) location; 490 info.si_addr = (void __user *) location;
490 do_trap(interruption_code, signal, 491 do_trap(interruption_code, signal,
491 "user address fault", regs, &info); 492 "user address fault", regs, &info);
492 } else 493 } else
493 #endif 494 #endif
494 if (signal) { 495 if (signal) {
495 info.si_signo = signal; 496 info.si_signo = signal;
496 info.si_errno = 0; 497 info.si_errno = 0;
497 info.si_code = ILL_ILLOPC; 498 info.si_code = ILL_ILLOPC;
498 info.si_addr = (void __user *) location; 499 info.si_addr = (void __user *) location;
499 do_trap(interruption_code, signal, 500 do_trap(interruption_code, signal,
500 "illegal operation", regs, &info); 501 "illegal operation", regs, &info);
501 } 502 }
502 } 503 }
503 504
504 505
505 #ifdef CONFIG_MATHEMU 506 #ifdef CONFIG_MATHEMU
506 asmlinkage void 507 asmlinkage void
507 specification_exception(struct pt_regs * regs, long interruption_code) 508 specification_exception(struct pt_regs * regs, long interruption_code)
508 { 509 {
509 __u8 opcode[6]; 510 __u8 opcode[6];
510 __u16 __user *location = NULL; 511 __u16 __user *location = NULL;
511 int signal = 0; 512 int signal = 0;
512 513
513 location = (__u16 __user *) get_check_address(regs); 514 location = (__u16 __user *) get_check_address(regs);
514 515
515 /* 516 /*
516 * We got all needed information from the lowcore and can 517 * We got all needed information from the lowcore and can
517 * now safely switch on interrupts. 518 * now safely switch on interrupts.
518 */ 519 */
519 if (regs->psw.mask & PSW_MASK_PSTATE) 520 if (regs->psw.mask & PSW_MASK_PSTATE)
520 local_irq_enable(); 521 local_irq_enable();
521 522
522 if (regs->psw.mask & PSW_MASK_PSTATE) { 523 if (regs->psw.mask & PSW_MASK_PSTATE) {
523 get_user(*((__u16 *) opcode), location); 524 get_user(*((__u16 *) opcode), location);
524 switch (opcode[0]) { 525 switch (opcode[0]) {
525 case 0x28: /* LDR Rx,Ry */ 526 case 0x28: /* LDR Rx,Ry */
526 signal = math_emu_ldr(opcode); 527 signal = math_emu_ldr(opcode);
527 break; 528 break;
528 case 0x38: /* LER Rx,Ry */ 529 case 0x38: /* LER Rx,Ry */
529 signal = math_emu_ler(opcode); 530 signal = math_emu_ler(opcode);
530 break; 531 break;
531 case 0x60: /* STD R,D(X,B) */ 532 case 0x60: /* STD R,D(X,B) */
532 get_user(*((__u16 *) (opcode+2)), location+1); 533 get_user(*((__u16 *) (opcode+2)), location+1);
533 signal = math_emu_std(opcode, regs); 534 signal = math_emu_std(opcode, regs);
534 break; 535 break;
535 case 0x68: /* LD R,D(X,B) */ 536 case 0x68: /* LD R,D(X,B) */
536 get_user(*((__u16 *) (opcode+2)), location+1); 537 get_user(*((__u16 *) (opcode+2)), location+1);
537 signal = math_emu_ld(opcode, regs); 538 signal = math_emu_ld(opcode, regs);
538 break; 539 break;
539 case 0x70: /* STE R,D(X,B) */ 540 case 0x70: /* STE R,D(X,B) */
540 get_user(*((__u16 *) (opcode+2)), location+1); 541 get_user(*((__u16 *) (opcode+2)), location+1);
541 signal = math_emu_ste(opcode, regs); 542 signal = math_emu_ste(opcode, regs);
542 break; 543 break;
543 case 0x78: /* LE R,D(X,B) */ 544 case 0x78: /* LE R,D(X,B) */
544 get_user(*((__u16 *) (opcode+2)), location+1); 545 get_user(*((__u16 *) (opcode+2)), location+1);
545 signal = math_emu_le(opcode, regs); 546 signal = math_emu_le(opcode, regs);
546 break; 547 break;
547 default: 548 default:
548 signal = SIGILL; 549 signal = SIGILL;
549 break; 550 break;
550 } 551 }
551 } else 552 } else
552 signal = SIGILL; 553 signal = SIGILL;
553 554
554 if (signal == SIGFPE) 555 if (signal == SIGFPE)
555 do_fp_trap(regs, location, 556 do_fp_trap(regs, location,
556 current->thread.fp_regs.fpc, interruption_code); 557 current->thread.fp_regs.fpc, interruption_code);
557 else if (signal) { 558 else if (signal) {
558 siginfo_t info; 559 siginfo_t info;
559 info.si_signo = signal; 560 info.si_signo = signal;
560 info.si_errno = 0; 561 info.si_errno = 0;
561 info.si_code = ILL_ILLOPN; 562 info.si_code = ILL_ILLOPN;
562 info.si_addr = location; 563 info.si_addr = location;
563 do_trap(interruption_code, signal, 564 do_trap(interruption_code, signal,
564 "specification exception", regs, &info); 565 "specification exception", regs, &info);
565 } 566 }
566 } 567 }
567 #else 568 #else
568 DO_ERROR_INFO(SIGILL, "specification exception", specification_exception, 569 DO_ERROR_INFO(SIGILL, "specification exception", specification_exception,
569 ILL_ILLOPN, get_check_address(regs)); 570 ILL_ILLOPN, get_check_address(regs));
570 #endif 571 #endif
571 572
572 static void data_exception(struct pt_regs * regs, long interruption_code) 573 static void data_exception(struct pt_regs * regs, long interruption_code)
573 { 574 {
574 __u16 __user *location; 575 __u16 __user *location;
575 int signal = 0; 576 int signal = 0;
576 577
577 location = get_check_address(regs); 578 location = get_check_address(regs);
578 579
579 /* 580 /*
580 * We got all needed information from the lowcore and can 581 * We got all needed information from the lowcore and can
581 * now safely switch on interrupts. 582 * now safely switch on interrupts.
582 */ 583 */
583 if (regs->psw.mask & PSW_MASK_PSTATE) 584 if (regs->psw.mask & PSW_MASK_PSTATE)
584 local_irq_enable(); 585 local_irq_enable();
585 586
586 if (MACHINE_HAS_IEEE) 587 if (MACHINE_HAS_IEEE)
587 asm volatile("stfpc %0" : "=m" (current->thread.fp_regs.fpc)); 588 asm volatile("stfpc %0" : "=m" (current->thread.fp_regs.fpc));
588 589
589 #ifdef CONFIG_MATHEMU 590 #ifdef CONFIG_MATHEMU
590 else if (regs->psw.mask & PSW_MASK_PSTATE) { 591 else if (regs->psw.mask & PSW_MASK_PSTATE) {
591 __u8 opcode[6]; 592 __u8 opcode[6];
592 get_user(*((__u16 *) opcode), location); 593 get_user(*((__u16 *) opcode), location);
593 switch (opcode[0]) { 594 switch (opcode[0]) {
594 case 0x28: /* LDR Rx,Ry */ 595 case 0x28: /* LDR Rx,Ry */
595 signal = math_emu_ldr(opcode); 596 signal = math_emu_ldr(opcode);
596 break; 597 break;
597 case 0x38: /* LER Rx,Ry */ 598 case 0x38: /* LER Rx,Ry */
598 signal = math_emu_ler(opcode); 599 signal = math_emu_ler(opcode);
599 break; 600 break;
600 case 0x60: /* STD R,D(X,B) */ 601 case 0x60: /* STD R,D(X,B) */
601 get_user(*((__u16 *) (opcode+2)), location+1); 602 get_user(*((__u16 *) (opcode+2)), location+1);
602 signal = math_emu_std(opcode, regs); 603 signal = math_emu_std(opcode, regs);
603 break; 604 break;
604 case 0x68: /* LD R,D(X,B) */ 605 case 0x68: /* LD R,D(X,B) */
605 get_user(*((__u16 *) (opcode+2)), location+1); 606 get_user(*((__u16 *) (opcode+2)), location+1);
606 signal = math_emu_ld(opcode, regs); 607 signal = math_emu_ld(opcode, regs);
607 break; 608 break;
608 case 0x70: /* STE R,D(X,B) */ 609 case 0x70: /* STE R,D(X,B) */
609 get_user(*((__u16 *) (opcode+2)), location+1); 610 get_user(*((__u16 *) (opcode+2)), location+1);
610 signal = math_emu_ste(opcode, regs); 611 signal = math_emu_ste(opcode, regs);
611 break; 612 break;
612 case 0x78: /* LE R,D(X,B) */ 613 case 0x78: /* LE R,D(X,B) */
613 get_user(*((__u16 *) (opcode+2)), location+1); 614 get_user(*((__u16 *) (opcode+2)), location+1);
614 signal = math_emu_le(opcode, regs); 615 signal = math_emu_le(opcode, regs);
615 break; 616 break;
616 case 0xb3: 617 case 0xb3:
617 get_user(*((__u16 *) (opcode+2)), location+1); 618 get_user(*((__u16 *) (opcode+2)), location+1);
618 signal = math_emu_b3(opcode, regs); 619 signal = math_emu_b3(opcode, regs);
619 break; 620 break;
620 case 0xed: 621 case 0xed:
621 get_user(*((__u32 *) (opcode+2)), 622 get_user(*((__u32 *) (opcode+2)),
622 (__u32 __user *)(location+1)); 623 (__u32 __user *)(location+1));
623 signal = math_emu_ed(opcode, regs); 624 signal = math_emu_ed(opcode, regs);
624 break; 625 break;
625 case 0xb2: 626 case 0xb2:
626 if (opcode[1] == 0x99) { 627 if (opcode[1] == 0x99) {
627 get_user(*((__u16 *) (opcode+2)), location+1); 628 get_user(*((__u16 *) (opcode+2)), location+1);
628 signal = math_emu_srnm(opcode, regs); 629 signal = math_emu_srnm(opcode, regs);
629 } else if (opcode[1] == 0x9c) { 630 } else if (opcode[1] == 0x9c) {
630 get_user(*((__u16 *) (opcode+2)), location+1); 631 get_user(*((__u16 *) (opcode+2)), location+1);
631 signal = math_emu_stfpc(opcode, regs); 632 signal = math_emu_stfpc(opcode, regs);
632 } else if (opcode[1] == 0x9d) { 633 } else if (opcode[1] == 0x9d) {
633 get_user(*((__u16 *) (opcode+2)), location+1); 634 get_user(*((__u16 *) (opcode+2)), location+1);
634 signal = math_emu_lfpc(opcode, regs); 635 signal = math_emu_lfpc(opcode, regs);
635 } else 636 } else
636 signal = SIGILL; 637 signal = SIGILL;
637 break; 638 break;
638 default: 639 default:
639 signal = SIGILL; 640 signal = SIGILL;
640 break; 641 break;
641 } 642 }
642 } 643 }
643 #endif 644 #endif
644 if (current->thread.fp_regs.fpc & FPC_DXC_MASK) 645 if (current->thread.fp_regs.fpc & FPC_DXC_MASK)
645 signal = SIGFPE; 646 signal = SIGFPE;
646 else 647 else
647 signal = SIGILL; 648 signal = SIGILL;
648 if (signal == SIGFPE) 649 if (signal == SIGFPE)
649 do_fp_trap(regs, location, 650 do_fp_trap(regs, location,
650 current->thread.fp_regs.fpc, interruption_code); 651 current->thread.fp_regs.fpc, interruption_code);
651 else if (signal) { 652 else if (signal) {
652 siginfo_t info; 653 siginfo_t info;
653 info.si_signo = signal; 654 info.si_signo = signal;
654 info.si_errno = 0; 655 info.si_errno = 0;
655 info.si_code = ILL_ILLOPN; 656 info.si_code = ILL_ILLOPN;
656 info.si_addr = location; 657 info.si_addr = location;
657 do_trap(interruption_code, signal, 658 do_trap(interruption_code, signal,
658 "data exception", regs, &info); 659 "data exception", regs, &info);
659 } 660 }
660 } 661 }
661 662
662 static void space_switch_exception(struct pt_regs * regs, long int_code) 663 static void space_switch_exception(struct pt_regs * regs, long int_code)
663 { 664 {
664 siginfo_t info; 665 siginfo_t info;
665 666
666 /* Set user psw back to home space mode. */ 667 /* Set user psw back to home space mode. */
667 if (regs->psw.mask & PSW_MASK_PSTATE) 668 if (regs->psw.mask & PSW_MASK_PSTATE)
668 regs->psw.mask |= PSW_ASC_HOME; 669 regs->psw.mask |= PSW_ASC_HOME;
669 /* Send SIGILL. */ 670 /* Send SIGILL. */
670 info.si_signo = SIGILL; 671 info.si_signo = SIGILL;
671 info.si_errno = 0; 672 info.si_errno = 0;
672 info.si_code = ILL_PRVOPC; 673 info.si_code = ILL_PRVOPC;
673 info.si_addr = get_check_address(regs); 674 info.si_addr = get_check_address(regs);
674 do_trap(int_code, SIGILL, "space switch event", regs, &info); 675 do_trap(int_code, SIGILL, "space switch event", regs, &info);
675 } 676 }
676 677
677 asmlinkage void kernel_stack_overflow(struct pt_regs * regs) 678 asmlinkage void kernel_stack_overflow(struct pt_regs * regs)
678 { 679 {
679 bust_spinlocks(1); 680 bust_spinlocks(1);
680 printk("Kernel stack overflow.\n"); 681 printk("Kernel stack overflow.\n");
681 show_regs(regs); 682 show_regs(regs);
682 bust_spinlocks(0); 683 bust_spinlocks(0);
683 panic("Corrupt kernel stack, can't continue."); 684 panic("Corrupt kernel stack, can't continue.");
684 } 685 }
685 686
686 /* init is done in lowcore.S and head.S */ 687 /* init is done in lowcore.S and head.S */
687 688
688 void __init trap_init(void) 689 void __init trap_init(void)
689 { 690 {
690 int i; 691 int i;
691 692
692 for (i = 0; i < 128; i++) 693 for (i = 0; i < 128; i++)
693 pgm_check_table[i] = &default_trap_handler; 694 pgm_check_table[i] = &default_trap_handler;
694 pgm_check_table[1] = &illegal_op; 695 pgm_check_table[1] = &illegal_op;
695 pgm_check_table[2] = &privileged_op; 696 pgm_check_table[2] = &privileged_op;
696 pgm_check_table[3] = &execute_exception; 697 pgm_check_table[3] = &execute_exception;
697 pgm_check_table[4] = &do_protection_exception; 698 pgm_check_table[4] = &do_protection_exception;
698 pgm_check_table[5] = &addressing_exception; 699 pgm_check_table[5] = &addressing_exception;
699 pgm_check_table[6] = &specification_exception; 700 pgm_check_table[6] = &specification_exception;
700 pgm_check_table[7] = &data_exception; 701 pgm_check_table[7] = &data_exception;
701 pgm_check_table[8] = &overflow_exception; 702 pgm_check_table[8] = &overflow_exception;
702 pgm_check_table[9] = &divide_exception; 703 pgm_check_table[9] = &divide_exception;
703 pgm_check_table[0x0A] = &overflow_exception; 704 pgm_check_table[0x0A] = &overflow_exception;
704 pgm_check_table[0x0B] = &divide_exception; 705 pgm_check_table[0x0B] = &divide_exception;
705 pgm_check_table[0x0C] = &hfp_overflow_exception; 706 pgm_check_table[0x0C] = &hfp_overflow_exception;
706 pgm_check_table[0x0D] = &hfp_underflow_exception; 707 pgm_check_table[0x0D] = &hfp_underflow_exception;
707 pgm_check_table[0x0E] = &hfp_significance_exception; 708 pgm_check_table[0x0E] = &hfp_significance_exception;
708 pgm_check_table[0x0F] = &hfp_divide_exception; 709 pgm_check_table[0x0F] = &hfp_divide_exception;
709 pgm_check_table[0x10] = &do_dat_exception; 710 pgm_check_table[0x10] = &do_dat_exception;
710 pgm_check_table[0x11] = &do_dat_exception; 711 pgm_check_table[0x11] = &do_dat_exception;
711 pgm_check_table[0x12] = &translation_exception; 712 pgm_check_table[0x12] = &translation_exception;
712 pgm_check_table[0x13] = &special_op_exception; 713 pgm_check_table[0x13] = &special_op_exception;
713 #ifdef CONFIG_64BIT 714 #ifdef CONFIG_64BIT
714 pgm_check_table[0x38] = &do_dat_exception; 715 pgm_check_table[0x38] = &do_dat_exception;
715 pgm_check_table[0x39] = &do_dat_exception; 716 pgm_check_table[0x39] = &do_dat_exception;
716 pgm_check_table[0x3A] = &do_dat_exception; 717 pgm_check_table[0x3A] = &do_dat_exception;
717 pgm_check_table[0x3B] = &do_dat_exception; 718 pgm_check_table[0x3B] = &do_dat_exception;
718 #endif /* CONFIG_64BIT */ 719 #endif /* CONFIG_64BIT */
719 pgm_check_table[0x15] = &operand_exception; 720 pgm_check_table[0x15] = &operand_exception;
720 pgm_check_table[0x1C] = &space_switch_exception; 721 pgm_check_table[0x1C] = &space_switch_exception;
721 pgm_check_table[0x1D] = &hfp_sqrt_exception; 722 pgm_check_table[0x1D] = &hfp_sqrt_exception;
722 pgm_check_table[0x40] = &do_monitor_call; 723 pgm_check_table[0x40] = &do_monitor_call;
723 pfault_irq_init(); 724 pfault_irq_init();
724 } 725 }
725 726
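trap_init() above only fills a 128-entry table indexed by the s390 program-interruption code; the indirect call itself lives in the low-level entry code. A hypothetical C rendering of that dispatch, purely for orientation (the name do_program_check and the C form are assumptions here; in this kernel the indexing is done in assembly in entry.S):

	struct pt_regs;
	typedef void (*pgm_check_handler_t)(struct pt_regs *, long);
	extern pgm_check_handler_t pgm_check_table[128];

	/* Hypothetical: codes 0..0x7f index straight into the table;
	 * entries not set explicitly keep the default_trap_handler
	 * installed by the loop at the top of trap_init(). */
	void do_program_check(struct pt_regs *regs, long interruption_code)
	{
		pgm_check_table[interruption_code & 0x7f](regs, interruption_code);
	}
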
arch/sh/kernel/traps.c
1 /* 1 /*
2 * 'traps.c' handles hardware traps and faults after we have saved some 2 * 'traps.c' handles hardware traps and faults after we have saved some
3 * state in 'entry.S'. 3 * state in 'entry.S'.
4 * 4 *
5 * SuperH version: Copyright (C) 1999 Niibe Yutaka 5 * SuperH version: Copyright (C) 1999 Niibe Yutaka
6 * Copyright (C) 2000 Philipp Rumpf 6 * Copyright (C) 2000 Philipp Rumpf
7 * Copyright (C) 2000 David Howells 7 * Copyright (C) 2000 David Howells
8 * Copyright (C) 2002 - 2007 Paul Mundt 8 * Copyright (C) 2002 - 2007 Paul Mundt
9 * 9 *
10 * This file is subject to the terms and conditions of the GNU General Public 10 * This file is subject to the terms and conditions of the GNU General Public
11 * License. See the file "COPYING" in the main directory of this archive 11 * License. See the file "COPYING" in the main directory of this archive
12 * for more details. 12 * for more details.
13 */ 13 */
14 #include <linux/kernel.h> 14 #include <linux/kernel.h>
15 #include <linux/ptrace.h> 15 #include <linux/ptrace.h>
16 #include <linux/init.h> 16 #include <linux/init.h>
17 #include <linux/spinlock.h> 17 #include <linux/spinlock.h>
18 #include <linux/module.h> 18 #include <linux/module.h>
19 #include <linux/kallsyms.h> 19 #include <linux/kallsyms.h>
20 #include <linux/io.h> 20 #include <linux/io.h>
21 #include <linux/bug.h> 21 #include <linux/bug.h>
22 #include <linux/debug_locks.h> 22 #include <linux/debug_locks.h>
23 #include <linux/kdebug.h> 23 #include <linux/kdebug.h>
24 #include <linux/kexec.h> 24 #include <linux/kexec.h>
25 #include <linux/limits.h> 25 #include <linux/limits.h>
26 #include <asm/system.h> 26 #include <asm/system.h>
27 #include <asm/uaccess.h> 27 #include <asm/uaccess.h>
28 28
29 #ifdef CONFIG_SH_KGDB 29 #ifdef CONFIG_SH_KGDB
30 #include <asm/kgdb.h> 30 #include <asm/kgdb.h>
31 #define CHK_REMOTE_DEBUG(regs) \ 31 #define CHK_REMOTE_DEBUG(regs) \
32 { \ 32 { \
33 if (kgdb_debug_hook && !user_mode(regs))\ 33 if (kgdb_debug_hook && !user_mode(regs))\
34 (*kgdb_debug_hook)(regs); \ 34 (*kgdb_debug_hook)(regs); \
35 } 35 }
36 #else 36 #else
37 #define CHK_REMOTE_DEBUG(regs) 37 #define CHK_REMOTE_DEBUG(regs)
38 #endif 38 #endif
39 39
40 #ifdef CONFIG_CPU_SH2 40 #ifdef CONFIG_CPU_SH2
41 # define TRAP_RESERVED_INST 4 41 # define TRAP_RESERVED_INST 4
42 # define TRAP_ILLEGAL_SLOT_INST 6 42 # define TRAP_ILLEGAL_SLOT_INST 6
43 # define TRAP_ADDRESS_ERROR 9 43 # define TRAP_ADDRESS_ERROR 9
44 # ifdef CONFIG_CPU_SH2A 44 # ifdef CONFIG_CPU_SH2A
45 # define TRAP_DIVZERO_ERROR 17 45 # define TRAP_DIVZERO_ERROR 17
46 # define TRAP_DIVOVF_ERROR 18 46 # define TRAP_DIVOVF_ERROR 18
47 # endif 47 # endif
48 #else 48 #else
49 #define TRAP_RESERVED_INST 12 49 #define TRAP_RESERVED_INST 12
50 #define TRAP_ILLEGAL_SLOT_INST 13 50 #define TRAP_ILLEGAL_SLOT_INST 13
51 #endif 51 #endif
52 52
53 static void dump_mem(const char *str, unsigned long bottom, unsigned long top) 53 static void dump_mem(const char *str, unsigned long bottom, unsigned long top)
54 { 54 {
55 unsigned long p; 55 unsigned long p;
56 int i; 56 int i;
57 57
58 printk("%s(0x%08lx to 0x%08lx)\n", str, bottom, top); 58 printk("%s(0x%08lx to 0x%08lx)\n", str, bottom, top);
59 59
60 for (p = bottom & ~31; p < top; ) { 60 for (p = bottom & ~31; p < top; ) {
61 printk("%04lx: ", p & 0xffff); 61 printk("%04lx: ", p & 0xffff);
62 62
63 for (i = 0; i < 8; i++, p += 4) { 63 for (i = 0; i < 8; i++, p += 4) {
64 unsigned int val; 64 unsigned int val;
65 65
66 if (p < bottom || p >= top) 66 if (p < bottom || p >= top)
67 printk(" "); 67 printk(" ");
68 else { 68 else {
69 if (__get_user(val, (unsigned int __user *)p)) { 69 if (__get_user(val, (unsigned int __user *)p)) {
70 printk("\n"); 70 printk("\n");
71 return; 71 return;
72 } 72 }
73 printk("%08x ", val); 73 printk("%08x ", val);
74 } 74 }
75 } 75 }
76 printk("\n"); 76 printk("\n");
77 } 77 }
78 } 78 }
79 79
80 static DEFINE_SPINLOCK(die_lock); 80 static DEFINE_SPINLOCK(die_lock);
81 81
82 void die(const char * str, struct pt_regs * regs, long err) 82 void die(const char * str, struct pt_regs * regs, long err)
83 { 83 {
84 static int die_counter; 84 static int die_counter;
85 85
86 oops_enter(); 86 oops_enter();
87 87
88 console_verbose(); 88 console_verbose();
89 spin_lock_irq(&die_lock); 89 spin_lock_irq(&die_lock);
90 bust_spinlocks(1); 90 bust_spinlocks(1);
91 91
92 printk("%s: %04lx [#%d]\n", str, err & 0xffff, ++die_counter); 92 printk("%s: %04lx [#%d]\n", str, err & 0xffff, ++die_counter);
93 93
94 CHK_REMOTE_DEBUG(regs); 94 CHK_REMOTE_DEBUG(regs);
95 print_modules(); 95 print_modules();
96 show_regs(regs); 96 show_regs(regs);
97 97
98 printk("Process: %s (pid: %d, stack limit = %p)\n", 98 printk("Process: %s (pid: %d, stack limit = %p)\n",
99 current->comm, current->pid, task_stack_page(current) + 1); 99 current->comm, current->pid, task_stack_page(current) + 1);
100 100
101 if (!user_mode(regs) || in_interrupt()) 101 if (!user_mode(regs) || in_interrupt())
102 dump_mem("Stack: ", regs->regs[15], THREAD_SIZE + 102 dump_mem("Stack: ", regs->regs[15], THREAD_SIZE +
103 (unsigned long)task_stack_page(current)); 103 (unsigned long)task_stack_page(current));
104 104
105 bust_spinlocks(0); 105 bust_spinlocks(0);
106 add_taint(TAINT_DIE);
106 spin_unlock_irq(&die_lock); 107 spin_unlock_irq(&die_lock);
107 108
108 if (kexec_should_crash(current)) 109 if (kexec_should_crash(current))
109 crash_kexec(regs); 110 crash_kexec(regs);
110 111
111 if (in_interrupt()) 112 if (in_interrupt())
112 panic("Fatal exception in interrupt"); 113 panic("Fatal exception in interrupt");
113 114
114 if (panic_on_oops) 115 if (panic_on_oops)
115 panic("Fatal exception"); 116 panic("Fatal exception");
116 117
117 oops_exit(); 118 oops_exit();
118 do_exit(SIGSEGV); 119 do_exit(SIGSEGV);
119 } 120 }
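
This is what the tainting buys in practice: once any die() has run, later oopses and SysRq dumps carry a 'D' in their taint string, and the mask is also visible to userspace. A minimal check, assuming procfs is mounted at /proc:

	#include <stdio.h>

	/* Read the kernel taint mask; after a first oops the TAINT_DIE
	 * bit set by the hunk above should appear in it. */
	int main(void)
	{
		unsigned long tainted;
		FILE *f = fopen("/proc/sys/kernel/tainted", "r");

		if (!f || fscanf(f, "%lu", &tainted) != 1) {
			perror("tainted");
			return 1;
		}
		fclose(f);
		printf("taint mask: %#lx\n", tainted);
		return 0;
	}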
120 121
121 static inline void die_if_kernel(const char *str, struct pt_regs *regs, 122 static inline void die_if_kernel(const char *str, struct pt_regs *regs,
122 long err) 123 long err)
123 { 124 {
124 if (!user_mode(regs)) 125 if (!user_mode(regs))
125 die(str, regs, err); 126 die(str, regs, err);
126 } 127 }
127 128
128 /* 129 /*
129 * try and fix up kernelspace address errors 130 * try and fix up kernelspace address errors
130 * - userspace errors just cause EFAULT to be returned, resulting in SEGV 131 * - userspace errors just cause EFAULT to be returned, resulting in SEGV
131 * - kernel/userspace interfaces cause a jump to an appropriate handler 132 * - kernel/userspace interfaces cause a jump to an appropriate handler
132 * - other kernel errors are bad 133 * - other kernel errors are bad
133 * - return 0 if fixed-up, -EFAULT if non-fatal (to the kernel) fault 134 * - return 0 if fixed-up, -EFAULT if non-fatal (to the kernel) fault
134 */ 135 */
135 static int die_if_no_fixup(const char * str, struct pt_regs * regs, long err) 136 static int die_if_no_fixup(const char * str, struct pt_regs * regs, long err)
136 { 137 {
137 if (!user_mode(regs)) { 138 if (!user_mode(regs)) {
138 const struct exception_table_entry *fixup; 139 const struct exception_table_entry *fixup;
139 fixup = search_exception_tables(regs->pc); 140 fixup = search_exception_tables(regs->pc);
140 if (fixup) { 141 if (fixup) {
141 regs->pc = fixup->fixup; 142 regs->pc = fixup->fixup;
142 return 0; 143 return 0;
143 } 144 }
144 die(str, regs, err); 145 die(str, regs, err);
145 } 146 }
146 return -EFAULT; 147 return -EFAULT;
147 } 148 }
148 149
149 /* 150 /*
150 * handle an instruction that does an unaligned memory access by emulating the 151 * handle an instruction that does an unaligned memory access by emulating the
151 * desired behaviour 152 * desired behaviour
152 * - note that PC _may not_ point to the faulting instruction 153 * - note that PC _may not_ point to the faulting instruction
153 * (if that instruction is in a branch delay slot) 154 * (if that instruction is in a branch delay slot)
154 * - return 0 if emulation okay, -EFAULT on existential error 155 * - return 0 if emulation okay, -EFAULT on existential error
155 */ 156 */
156 static int handle_unaligned_ins(u16 instruction, struct pt_regs *regs) 157 static int handle_unaligned_ins(u16 instruction, struct pt_regs *regs)
157 { 158 {
158 int ret, index, count; 159 int ret, index, count;
159 unsigned long *rm, *rn; 160 unsigned long *rm, *rn;
160 unsigned char *src, *dst; 161 unsigned char *src, *dst;
161 162
162 index = (instruction>>8)&15; /* 0x0F00 */ 163 index = (instruction>>8)&15; /* 0x0F00 */
163 rn = &regs->regs[index]; 164 rn = &regs->regs[index];
164 165
165 index = (instruction>>4)&15; /* 0x00F0 */ 166 index = (instruction>>4)&15; /* 0x00F0 */
166 rm = &regs->regs[index]; 167 rm = &regs->regs[index];
167 168
168 count = 1<<(instruction&3); 169 count = 1<<(instruction&3);
169 170
170 ret = -EFAULT; 171 ret = -EFAULT;
171 switch (instruction>>12) { 172 switch (instruction>>12) {
172 case 0: /* mov.[bwl] to/from memory via r0+rn */ 173 case 0: /* mov.[bwl] to/from memory via r0+rn */
173 if (instruction & 8) { 174 if (instruction & 8) {
174 /* from memory */ 175 /* from memory */
175 src = (unsigned char*) *rm; 176 src = (unsigned char*) *rm;
176 src += regs->regs[0]; 177 src += regs->regs[0];
177 dst = (unsigned char*) rn; 178 dst = (unsigned char*) rn;
178 *(unsigned long*)dst = 0; 179 *(unsigned long*)dst = 0;
179 180
180 #ifdef __LITTLE_ENDIAN__ 181 #ifdef __LITTLE_ENDIAN__
181 if (copy_from_user(dst, src, count)) 182 if (copy_from_user(dst, src, count))
182 goto fetch_fault; 183 goto fetch_fault;
183 184
184 if ((count == 2) && dst[1] & 0x80) { 185 if ((count == 2) && dst[1] & 0x80) {
185 dst[2] = 0xff; 186 dst[2] = 0xff;
186 dst[3] = 0xff; 187 dst[3] = 0xff;
187 } 188 }
188 #else 189 #else
189 dst += 4-count; 190 dst += 4-count;
190 191
191 if (__copy_user(dst, src, count)) 192 if (__copy_user(dst, src, count))
192 goto fetch_fault; 193 goto fetch_fault;
193 194
194 if ((count == 2) && dst[2] & 0x80) { 195 if ((count == 2) && dst[2] & 0x80) {
195 dst[0] = 0xff; 196 dst[0] = 0xff;
196 dst[1] = 0xff; 197 dst[1] = 0xff;
197 } 198 }
198 #endif 199 #endif
199 } else { 200 } else {
200 /* to memory */ 201 /* to memory */
201 src = (unsigned char*) rm; 202 src = (unsigned char*) rm;
202 #if !defined(__LITTLE_ENDIAN__) 203 #if !defined(__LITTLE_ENDIAN__)
203 src += 4-count; 204 src += 4-count;
204 #endif 205 #endif
205 dst = (unsigned char*) *rn; 206 dst = (unsigned char*) *rn;
206 dst += regs->regs[0]; 207 dst += regs->regs[0];
207 208
208 if (copy_to_user(dst, src, count)) 209 if (copy_to_user(dst, src, count))
209 goto fetch_fault; 210 goto fetch_fault;
210 } 211 }
211 ret = 0; 212 ret = 0;
212 break; 213 break;
213 214
214 case 1: /* mov.l Rm,@(disp,Rn) */ 215 case 1: /* mov.l Rm,@(disp,Rn) */
215 src = (unsigned char*) rm; 216 src = (unsigned char*) rm;
216 dst = (unsigned char*) *rn; 217 dst = (unsigned char*) *rn;
217 dst += (instruction&0x000F)<<2; 218 dst += (instruction&0x000F)<<2;
218 219
219 if (copy_to_user(dst,src,4)) 220 if (copy_to_user(dst,src,4))
220 goto fetch_fault; 221 goto fetch_fault;
221 ret = 0; 222 ret = 0;
222 break; 223 break;
223 224
224 case 2: /* mov.[bwl] to memory, possibly with pre-decrement */ 225 case 2: /* mov.[bwl] to memory, possibly with pre-decrement */
225 if (instruction & 4) 226 if (instruction & 4)
226 *rn -= count; 227 *rn -= count;
227 src = (unsigned char*) rm; 228 src = (unsigned char*) rm;
228 dst = (unsigned char*) *rn; 229 dst = (unsigned char*) *rn;
229 #if !defined(__LITTLE_ENDIAN__) 230 #if !defined(__LITTLE_ENDIAN__)
230 src += 4-count; 231 src += 4-count;
231 #endif 232 #endif
232 if (copy_to_user(dst, src, count)) 233 if (copy_to_user(dst, src, count))
233 goto fetch_fault; 234 goto fetch_fault;
234 ret = 0; 235 ret = 0;
235 break; 236 break;
236 237
237 case 5: /* mov.l @(disp,Rm),Rn */ 238 case 5: /* mov.l @(disp,Rm),Rn */
238 src = (unsigned char*) *rm; 239 src = (unsigned char*) *rm;
239 src += (instruction&0x000F)<<2; 240 src += (instruction&0x000F)<<2;
240 dst = (unsigned char*) rn; 241 dst = (unsigned char*) rn;
241 *(unsigned long*)dst = 0; 242 *(unsigned long*)dst = 0;
242 243
243 if (copy_from_user(dst,src,4)) 244 if (copy_from_user(dst,src,4))
244 goto fetch_fault; 245 goto fetch_fault;
245 ret = 0; 246 ret = 0;
246 break; 247 break;
247 248
248 case 6: /* mov.[bwl] from memory, possibly with post-increment */ 249 case 6: /* mov.[bwl] from memory, possibly with post-increment */
249 src = (unsigned char*) *rm; 250 src = (unsigned char*) *rm;
250 if (instruction & 4) 251 if (instruction & 4)
251 *rm += count; 252 *rm += count;
252 dst = (unsigned char*) rn; 253 dst = (unsigned char*) rn;
253 *(unsigned long*)dst = 0; 254 *(unsigned long*)dst = 0;
254 255
255 #ifdef __LITTLE_ENDIAN__ 256 #ifdef __LITTLE_ENDIAN__
256 if (copy_from_user(dst, src, count)) 257 if (copy_from_user(dst, src, count))
257 goto fetch_fault; 258 goto fetch_fault;
258 259
259 if ((count == 2) && dst[1] & 0x80) { 260 if ((count == 2) && dst[1] & 0x80) {
260 dst[2] = 0xff; 261 dst[2] = 0xff;
261 dst[3] = 0xff; 262 dst[3] = 0xff;
262 } 263 }
263 #else 264 #else
264 dst += 4-count; 265 dst += 4-count;
265 266
266 if (copy_from_user(dst, src, count)) 267 if (copy_from_user(dst, src, count))
267 goto fetch_fault; 268 goto fetch_fault;
268 269
269 if ((count == 2) && dst[2] & 0x80) { 270 if ((count == 2) && dst[2] & 0x80) {
270 dst[0] = 0xff; 271 dst[0] = 0xff;
271 dst[1] = 0xff; 272 dst[1] = 0xff;
272 } 273 }
273 #endif 274 #endif
274 ret = 0; 275 ret = 0;
275 break; 276 break;
276 277
277 case 8: 278 case 8:
278 switch ((instruction&0xFF00)>>8) { 279 switch ((instruction&0xFF00)>>8) {
279 case 0x81: /* mov.w R0,@(disp,Rn) */ 280 case 0x81: /* mov.w R0,@(disp,Rn) */
280 src = (unsigned char*) &regs->regs[0]; 281 src = (unsigned char*) &regs->regs[0];
281 #if !defined(__LITTLE_ENDIAN__) 282 #if !defined(__LITTLE_ENDIAN__)
282 src += 2; 283 src += 2;
283 #endif 284 #endif
284 dst = (unsigned char*) *rm; /* called Rn in the spec */ 285 dst = (unsigned char*) *rm; /* called Rn in the spec */
285 dst += (instruction&0x000F)<<1; 286 dst += (instruction&0x000F)<<1;
286 287
287 if (copy_to_user(dst, src, 2)) 288 if (copy_to_user(dst, src, 2))
288 goto fetch_fault; 289 goto fetch_fault;
289 ret = 0; 290 ret = 0;
290 break; 291 break;
291 292
292 case 0x85: /* mov.w @(disp,Rm),R0 */ 293 case 0x85: /* mov.w @(disp,Rm),R0 */
293 src = (unsigned char*) *rm; 294 src = (unsigned char*) *rm;
294 src += (instruction&0x000F)<<1; 295 src += (instruction&0x000F)<<1;
295 dst = (unsigned char*) &regs->regs[0]; 296 dst = (unsigned char*) &regs->regs[0];
296 *(unsigned long*)dst = 0; 297 *(unsigned long*)dst = 0;
297 298
298 #if !defined(__LITTLE_ENDIAN__) 299 #if !defined(__LITTLE_ENDIAN__)
299 dst += 2; 300 dst += 2;
300 #endif 301 #endif
301 302
302 if (copy_from_user(dst, src, 2)) 303 if (copy_from_user(dst, src, 2))
303 goto fetch_fault; 304 goto fetch_fault;
304 305
305 #ifdef __LITTLE_ENDIAN__ 306 #ifdef __LITTLE_ENDIAN__
306 if (dst[1] & 0x80) { 307 if (dst[1] & 0x80) {
307 dst[2] = 0xff; 308 dst[2] = 0xff;
308 dst[3] = 0xff; 309 dst[3] = 0xff;
309 } 310 }
310 #else 311 #else
311 if (dst[2] & 0x80) { 312 if (dst[2] & 0x80) {
312 dst[0] = 0xff; 313 dst[0] = 0xff;
313 dst[1] = 0xff; 314 dst[1] = 0xff;
314 } 315 }
315 #endif 316 #endif
316 ret = 0; 317 ret = 0;
317 break; 318 break;
318 } 319 }
319 break; 320 break;
320 } 321 }
321 return ret; 322 return ret;
322 323
323 fetch_fault: 324 fetch_fault:
324 /* Argh. Address not only misaligned but also non-existent. 325 /* Argh. Address not only misaligned but also non-existent.
325 * Raise an EFAULT and see if it's trapped 326 * Raise an EFAULT and see if it's trapped
326 */ 327 */
327 return die_if_no_fixup("Fault in unaligned fixup", regs, 0); 328 return die_if_no_fixup("Fault in unaligned fixup", regs, 0);
328 } 329 }
329 330
330 /* 331 /*
331 * emulate the instruction in the delay slot 332 * emulate the instruction in the delay slot
332 * - fetches the instruction from PC+2 333 * - fetches the instruction from PC+2
333 */ 334 */
334 static inline int handle_unaligned_delayslot(struct pt_regs *regs) 335 static inline int handle_unaligned_delayslot(struct pt_regs *regs)
335 { 336 {
336 u16 instruction; 337 u16 instruction;
337 338
338 if (copy_from_user(&instruction, (u16 *)(regs->pc+2), 2)) { 339 if (copy_from_user(&instruction, (u16 *)(regs->pc+2), 2)) {
339 /* the instruction-fetch faulted */ 340 /* the instruction-fetch faulted */
340 if (user_mode(regs)) 341 if (user_mode(regs))
341 return -EFAULT; 342 return -EFAULT;
342 343
343 /* kernel */ 344 /* kernel */
344 die("delay-slot-insn faulting in handle_unaligned_delayslot", 345 die("delay-slot-insn faulting in handle_unaligned_delayslot",
345 regs, 0); 346 regs, 0);
346 } 347 }
347 348
348 return handle_unaligned_ins(instruction,regs); 349 return handle_unaligned_ins(instruction,regs);
349 } 350 }
350 351
351 /* 352 /*
352 * handle an instruction that does an unaligned memory access 353 * handle an instruction that does an unaligned memory access
353 * - have to be careful of branch delay-slot instructions that fault 354 * - have to be careful of branch delay-slot instructions that fault
354 * SH3: 355 * SH3:
355 * - if the branch would be taken PC points to the branch 356 * - if the branch would be taken PC points to the branch
356 * - if the branch would not be taken, PC points to delay-slot 357 * - if the branch would not be taken, PC points to delay-slot
357 * SH4: 358 * SH4:
358 * - PC always points to delayed branch 359 * - PC always points to delayed branch
359 * - return 0 if handled, -EFAULT if failed (may not return if in kernel) 360 * - return 0 if handled, -EFAULT if failed (may not return if in kernel)
360 */ 361 */
361 362
362 /* Macros to determine offset from current PC for branch instructions */ 363 /* Macros to determine offset from current PC for branch instructions */
363 /* Explicit type coercion is used to force sign extension where needed */ 364 /* Explicit type coercion is used to force sign extension where needed */
364 #define SH_PC_8BIT_OFFSET(instr) ((((signed char)(instr))*2) + 4) 365 #define SH_PC_8BIT_OFFSET(instr) ((((signed char)(instr))*2) + 4)
365 #define SH_PC_12BIT_OFFSET(instr) ((((signed short)(instr<<4))>>3) + 4) 366 #define SH_PC_12BIT_OFFSET(instr) ((((signed short)(instr<<4))>>3) + 4)
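
The shifts above fold sign extension and the multiply-by-2 into one expression: the 12-bit displacement is shifted left 4 so its sign bit lands in bit 15, narrowed to signed short, then arithmetically shifted right 3 (sign-extend and double in one step; the kernel assumes gcc's arithmetic right shift on negative values). A standalone check of both macros:

	#include <assert.h>

	/* Reproduced from above for a self-contained test. */
	#define SH_PC_8BIT_OFFSET(instr)  ((((signed char)(instr)) * 2) + 4)
	#define SH_PC_12BIT_OFFSET(instr) ((((signed short)((instr) << 4)) >> 3) + 4)

	int main(void)
	{
		/* bra with displacement +4: 4 * 2 + 4 = 12 bytes forward */
		assert(SH_PC_12BIT_OFFSET(0xA004) == 12);
		/* bra with displacement -1 (0xFFF): -1 * 2 + 4 = 2 */
		assert(SH_PC_12BIT_OFFSET(0xAFFF) == 2);
		/* bf with 8-bit displacement -2 (0xFE): -2 * 2 + 4 = 0 */
		assert(SH_PC_8BIT_OFFSET(0xFE) == 0);
		return 0;
	}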
366 367
367 /* 368 /*
368 * XXX: SH-2A needs this too, but it needs an overhaul thanks to mixed 32-bit 369 * XXX: SH-2A needs this too, but it needs an overhaul thanks to mixed 32-bit
369 * opcodes.. 370 * opcodes..
370 */ 371 */
371 #ifndef CONFIG_CPU_SH2A 372 #ifndef CONFIG_CPU_SH2A
372 static int handle_unaligned_notify_count = 10; 373 static int handle_unaligned_notify_count = 10;
373 374
374 static int handle_unaligned_access(u16 instruction, struct pt_regs *regs) 375 static int handle_unaligned_access(u16 instruction, struct pt_regs *regs)
375 { 376 {
376 u_int rm; 377 u_int rm;
377 int ret, index; 378 int ret, index;
378 379
379 index = (instruction>>8)&15; /* 0x0F00 */ 380 index = (instruction>>8)&15; /* 0x0F00 */
380 rm = regs->regs[index]; 381 rm = regs->regs[index];
381 382
382 /* shout about the first ten userspace fixups */ 383 /* shout about the first ten userspace fixups */
383 if (user_mode(regs) && handle_unaligned_notify_count>0) { 384 if (user_mode(regs) && handle_unaligned_notify_count>0) {
384 handle_unaligned_notify_count--; 385 handle_unaligned_notify_count--;
385 386
386 printk(KERN_NOTICE "Fixing up unaligned userspace access " 387 printk(KERN_NOTICE "Fixing up unaligned userspace access "
387 "in \"%s\" pid=%d pc=0x%p ins=0x%04hx\n", 388 "in \"%s\" pid=%d pc=0x%p ins=0x%04hx\n",
388 current->comm,current->pid,(u16*)regs->pc,instruction); 389 current->comm,current->pid,(u16*)regs->pc,instruction);
389 } 390 }
390 391
391 ret = -EFAULT; 392 ret = -EFAULT;
392 switch (instruction&0xF000) { 393 switch (instruction&0xF000) {
393 case 0x0000: 394 case 0x0000:
394 if (instruction==0x000B) { 395 if (instruction==0x000B) {
395 /* rts */ 396 /* rts */
396 ret = handle_unaligned_delayslot(regs); 397 ret = handle_unaligned_delayslot(regs);
397 if (ret==0) 398 if (ret==0)
398 regs->pc = regs->pr; 399 regs->pc = regs->pr;
399 } 400 }
400 else if ((instruction&0x00FF)==0x0023) { 401 else if ((instruction&0x00FF)==0x0023) {
401 /* braf @Rm */ 402 /* braf @Rm */
402 ret = handle_unaligned_delayslot(regs); 403 ret = handle_unaligned_delayslot(regs);
403 if (ret==0) 404 if (ret==0)
404 regs->pc += rm + 4; 405 regs->pc += rm + 4;
405 } 406 }
406 else if ((instruction&0x00FF)==0x0003) { 407 else if ((instruction&0x00FF)==0x0003) {
407 /* bsrf @Rm */ 408 /* bsrf @Rm */
408 ret = handle_unaligned_delayslot(regs); 409 ret = handle_unaligned_delayslot(regs);
409 if (ret==0) { 410 if (ret==0) {
410 regs->pr = regs->pc + 4; 411 regs->pr = regs->pc + 4;
411 regs->pc += rm + 4; 412 regs->pc += rm + 4;
412 } 413 }
413 } 414 }
414 else { 415 else {
415 /* mov.[bwl] to/from memory via r0+rn */ 416 /* mov.[bwl] to/from memory via r0+rn */
416 goto simple; 417 goto simple;
417 } 418 }
418 break; 419 break;
419 420
420 case 0x1000: /* mov.l Rm,@(disp,Rn) */ 421 case 0x1000: /* mov.l Rm,@(disp,Rn) */
421 goto simple; 422 goto simple;
422 423
423 case 0x2000: /* mov.[bwl] to memory, possibly with pre-decrement */ 424 case 0x2000: /* mov.[bwl] to memory, possibly with pre-decrement */
424 goto simple; 425 goto simple;
425 426
426 case 0x4000: 427 case 0x4000:
427 if ((instruction&0x00FF)==0x002B) { 428 if ((instruction&0x00FF)==0x002B) {
428 /* jmp @Rm */ 429 /* jmp @Rm */
429 ret = handle_unaligned_delayslot(regs); 430 ret = handle_unaligned_delayslot(regs);
430 if (ret==0) 431 if (ret==0)
431 regs->pc = rm; 432 regs->pc = rm;
432 } 433 }
433 else if ((instruction&0x00FF)==0x000B) { 434 else if ((instruction&0x00FF)==0x000B) {
434 /* jsr @Rm */ 435 /* jsr @Rm */
435 ret = handle_unaligned_delayslot(regs); 436 ret = handle_unaligned_delayslot(regs);
436 if (ret==0) { 437 if (ret==0) {
437 regs->pr = regs->pc + 4; 438 regs->pr = regs->pc + 4;
438 regs->pc = rm; 439 regs->pc = rm;
439 } 440 }
440 } 441 }
441 else { 442 else {
442 /* mov.[bwl] to/from memory via r0+rn */ 443 /* mov.[bwl] to/from memory via r0+rn */
443 goto simple; 444 goto simple;
444 } 445 }
445 break; 446 break;
446 447
447 case 0x5000: /* mov.l @(disp,Rm),Rn */ 448 case 0x5000: /* mov.l @(disp,Rm),Rn */
448 goto simple; 449 goto simple;
449 450
450 case 0x6000: /* mov.[bwl] from memory, possibly with post-increment */ 451 case 0x6000: /* mov.[bwl] from memory, possibly with post-increment */
451 goto simple; 452 goto simple;
452 453
453 case 0x8000: /* bf lab, bf/s lab, bt lab, bt/s lab */ 454 case 0x8000: /* bf lab, bf/s lab, bt lab, bt/s lab */
454 switch (instruction&0x0F00) { 455 switch (instruction&0x0F00) {
455 case 0x0100: /* mov.w R0,@(disp,Rm) */ 456 case 0x0100: /* mov.w R0,@(disp,Rm) */
456 goto simple; 457 goto simple;
457 case 0x0500: /* mov.w @(disp,Rm),R0 */ 458 case 0x0500: /* mov.w @(disp,Rm),R0 */
458 goto simple; 459 goto simple;
459 case 0x0B00: /* bf lab - no delayslot*/ 460 case 0x0B00: /* bf lab - no delayslot*/
460 break; 461 break;
461 case 0x0F00: /* bf/s lab */ 462 case 0x0F00: /* bf/s lab */
462 ret = handle_unaligned_delayslot(regs); 463 ret = handle_unaligned_delayslot(regs);
463 if (ret==0) { 464 if (ret==0) {
464 #if defined(CONFIG_CPU_SH4) || defined(CONFIG_SH7705_CACHE_32KB) 465 #if defined(CONFIG_CPU_SH4) || defined(CONFIG_SH7705_CACHE_32KB)
465 if ((regs->sr & 0x00000001) != 0) 466 if ((regs->sr & 0x00000001) != 0)
466 regs->pc += 4; /* next after slot */ 467 regs->pc += 4; /* next after slot */
467 else 468 else
468 #endif 469 #endif
469 regs->pc += SH_PC_8BIT_OFFSET(instruction); 470 regs->pc += SH_PC_8BIT_OFFSET(instruction);
470 } 471 }
471 break; 472 break;
472 case 0x0900: /* bt lab - no delayslot */ 473 case 0x0900: /* bt lab - no delayslot */
473 break; 474 break;
474 case 0x0D00: /* bt/s lab */ 475 case 0x0D00: /* bt/s lab */
475 ret = handle_unaligned_delayslot(regs); 476 ret = handle_unaligned_delayslot(regs);
476 if (ret==0) { 477 if (ret==0) {
477 #if defined(CONFIG_CPU_SH4) || defined(CONFIG_SH7705_CACHE_32KB) 478 #if defined(CONFIG_CPU_SH4) || defined(CONFIG_SH7705_CACHE_32KB)
478 if ((regs->sr & 0x00000001) == 0) 479 if ((regs->sr & 0x00000001) == 0)
479 regs->pc += 4; /* next after slot */ 480 regs->pc += 4; /* next after slot */
480 else 481 else
481 #endif 482 #endif
482 regs->pc += SH_PC_8BIT_OFFSET(instruction); 483 regs->pc += SH_PC_8BIT_OFFSET(instruction);
483 } 484 }
484 break; 485 break;
485 } 486 }
486 break; 487 break;
487 488
488 case 0xA000: /* bra label */ 489 case 0xA000: /* bra label */
489 ret = handle_unaligned_delayslot(regs); 490 ret = handle_unaligned_delayslot(regs);
490 if (ret==0) 491 if (ret==0)
491 regs->pc += SH_PC_12BIT_OFFSET(instruction); 492 regs->pc += SH_PC_12BIT_OFFSET(instruction);
492 break; 493 break;
493 494
494 case 0xB000: /* bsr label */ 495 case 0xB000: /* bsr label */
495 ret = handle_unaligned_delayslot(regs); 496 ret = handle_unaligned_delayslot(regs);
496 if (ret==0) { 497 if (ret==0) {
497 regs->pr = regs->pc + 4; 498 regs->pr = regs->pc + 4;
498 regs->pc += SH_PC_12BIT_OFFSET(instruction); 499 regs->pc += SH_PC_12BIT_OFFSET(instruction);
499 } 500 }
500 break; 501 break;
501 } 502 }
502 return ret; 503 return ret;
503 504
504 /* handle non-delay-slot instruction */ 505 /* handle non-delay-slot instruction */
505 simple: 506 simple:
506 ret = handle_unaligned_ins(instruction,regs); 507 ret = handle_unaligned_ins(instruction,regs);
507 if (ret==0) 508 if (ret==0)
508 regs->pc += instruction_size(instruction); 509 regs->pc += instruction_size(instruction);
509 return ret; 510 return ret;
510 } 511 }
511 #endif /* CONFIG_CPU_SH2A */ 512 #endif /* CONFIG_CPU_SH2A */
512 513
513 #ifdef CONFIG_CPU_HAS_SR_RB 514 #ifdef CONFIG_CPU_HAS_SR_RB
514 #define lookup_exception_vector(x) \ 515 #define lookup_exception_vector(x) \
515 __asm__ __volatile__ ("stc r2_bank, %0\n\t" : "=r" ((x))) 516 __asm__ __volatile__ ("stc r2_bank, %0\n\t" : "=r" ((x)))
516 #else 517 #else
517 #define lookup_exception_vector(x) \ 518 #define lookup_exception_vector(x) \
518 __asm__ __volatile__ ("mov r4, %0\n\t" : "=r" ((x))) 519 __asm__ __volatile__ ("mov r4, %0\n\t" : "=r" ((x)))
519 #endif 520 #endif
520 521
521 /* 522 /*
522 * Handle various address error exceptions: 523 * Handle various address error exceptions:
523 * - instruction address error: 524 * - instruction address error:
524 * misaligned PC 525 * misaligned PC
525 * PC >= 0x80000000 in user mode 526 * PC >= 0x80000000 in user mode
526 * - data address error (read and write) 527 * - data address error (read and write)
527 * misaligned data access 528 * misaligned data access
528 * access to >= 0x80000000 in user mode 529 * access to >= 0x80000000 in user mode
529 * Unfortunately we can't distinguish between instruction address errors 530 * Unfortunately we can't distinguish between instruction address errors
530 * and data address errors caused by read accesses. 531 * and data address errors caused by read accesses.
531 */ 532 */
532 asmlinkage void do_address_error(struct pt_regs *regs, 533 asmlinkage void do_address_error(struct pt_regs *regs,
533 unsigned long writeaccess, 534 unsigned long writeaccess,
534 unsigned long address) 535 unsigned long address)
535 { 536 {
536 unsigned long error_code = 0; 537 unsigned long error_code = 0;
537 mm_segment_t oldfs; 538 mm_segment_t oldfs;
538 siginfo_t info; 539 siginfo_t info;
539 #ifndef CONFIG_CPU_SH2A 540 #ifndef CONFIG_CPU_SH2A
540 u16 instruction; 541 u16 instruction;
541 int tmp; 542 int tmp;
542 #endif 543 #endif
543 544
544 /* Intentional ifdef */ 545 /* Intentional ifdef */
545 #ifdef CONFIG_CPU_HAS_SR_RB 546 #ifdef CONFIG_CPU_HAS_SR_RB
546 lookup_exception_vector(error_code); 547 lookup_exception_vector(error_code);
547 #endif 548 #endif
548 549
549 oldfs = get_fs(); 550 oldfs = get_fs();
550 551
551 if (user_mode(regs)) { 552 if (user_mode(regs)) {
552 int si_code = BUS_ADRERR; 553 int si_code = BUS_ADRERR;
553 554
554 local_irq_enable(); 555 local_irq_enable();
555 556
556 /* bad PC is not something we can fix */ 557 /* bad PC is not something we can fix */
557 if (regs->pc & 1) { 558 if (regs->pc & 1) {
558 si_code = BUS_ADRALN; 559 si_code = BUS_ADRALN;
559 goto uspace_segv; 560 goto uspace_segv;
560 } 561 }
561 562
562 #ifndef CONFIG_CPU_SH2A 563 #ifndef CONFIG_CPU_SH2A
563 set_fs(USER_DS); 564 set_fs(USER_DS);
564 if (copy_from_user(&instruction, (u16 *)(regs->pc), 2)) { 565 if (copy_from_user(&instruction, (u16 *)(regs->pc), 2)) {
565 /* Argh. Fault on the instruction itself. 566 /* Argh. Fault on the instruction itself.
566 This should never happen on non-SMP 567 This should never happen on non-SMP
567 */ 568 */
568 set_fs(oldfs); 569 set_fs(oldfs);
569 goto uspace_segv; 570 goto uspace_segv;
570 } 571 }
571 572
572 tmp = handle_unaligned_access(instruction, regs); 573 tmp = handle_unaligned_access(instruction, regs);
573 set_fs(oldfs); 574 set_fs(oldfs);
574 575
575 if (tmp==0) 576 if (tmp==0)
576 return; /* sorted */ 577 return; /* sorted */
577 #endif 578 #endif
578 579
579 uspace_segv: 580 uspace_segv:
580 printk(KERN_NOTICE "Sending SIGBUS to \"%s\" due to unaligned " 581 printk(KERN_NOTICE "Sending SIGBUS to \"%s\" due to unaligned "
581 "access (PC %lx PR %lx)\n", current->comm, regs->pc, 582 "access (PC %lx PR %lx)\n", current->comm, regs->pc,
582 regs->pr); 583 regs->pr);
583 584
584 info.si_signo = SIGBUS; 585 info.si_signo = SIGBUS;
585 info.si_errno = 0; 586 info.si_errno = 0;
586 info.si_code = si_code; 587 info.si_code = si_code;
587 info.si_addr = (void __user *)address; 588 info.si_addr = (void __user *)address;
588 force_sig_info(SIGBUS, &info, current); 589 force_sig_info(SIGBUS, &info, current);
589 } else { 590 } else {
590 if (regs->pc & 1) 591 if (regs->pc & 1)
591 die("unaligned program counter", regs, error_code); 592 die("unaligned program counter", regs, error_code);
592 593
593 #ifndef CONFIG_CPU_SH2A 594 #ifndef CONFIG_CPU_SH2A
594 set_fs(KERNEL_DS); 595 set_fs(KERNEL_DS);
595 if (copy_from_user(&instruction, (u16 *)(regs->pc), 2)) { 596 if (copy_from_user(&instruction, (u16 *)(regs->pc), 2)) {
596 /* Argh. Fault on the instruction itself. 597 /* Argh. Fault on the instruction itself.
597 This should never happen on non-SMP 598 This should never happen on non-SMP
598 */ 599 */
599 set_fs(oldfs); 600 set_fs(oldfs);
600 die("insn faulting in do_address_error", regs, 0); 601 die("insn faulting in do_address_error", regs, 0);
601 } 602 }
602 603
603 handle_unaligned_access(instruction, regs); 604 handle_unaligned_access(instruction, regs);
604 set_fs(oldfs); 605 set_fs(oldfs);
605 #else 606 #else
606 printk(KERN_NOTICE "Killing process \"%s\" due to unaligned " 607 printk(KERN_NOTICE "Killing process \"%s\" due to unaligned "
607 "access\n", current->comm); 608 "access\n", current->comm);
608 609
609 force_sig(SIGSEGV, current); 610 force_sig(SIGSEGV, current);
610 #endif 611 #endif
611 } 612 }
612 } 613 }
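
For reference, a minimal sketch of what the fixup amounts to once the faulting opcode has been fetched: the access is redone one byte at a time, so no misaligned bus cycle is ever issued. handle_unaligned_access(), defined earlier in this file, does the real per-instruction work; this only illustrates the underlying idea, assuming a little-endian configuration.

static inline unsigned long read_unaligned_32(const unsigned char *p)
{
	/* Assemble the word byte by byte; every load here is a
	 * 1-byte access, so alignment can never trap. */
	return  (unsigned long)p[0]        |
	       ((unsigned long)p[1] << 8)  |
	       ((unsigned long)p[2] << 16) |
	       ((unsigned long)p[3] << 24);
}
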
613 614
614 #ifdef CONFIG_SH_DSP 615 #ifdef CONFIG_SH_DSP
615 /* 616 /*
616 * SH-DSP support gerg@snapgear.com. 617 * SH-DSP support gerg@snapgear.com.
617 */ 618 */
618 int is_dsp_inst(struct pt_regs *regs) 619 int is_dsp_inst(struct pt_regs *regs)
619 { 620 {
620 unsigned short inst = 0; 621 unsigned short inst = 0;
621 622
622 /* 623 /*
623 * Safeguard if DSP mode is already enabled or we're lacking 624 * Safeguard if DSP mode is already enabled or we're lacking
624 * the DSP altogether. 625 * the DSP altogether.
625 */ 626 */
626 if (!(current_cpu_data.flags & CPU_HAS_DSP) || (regs->sr & SR_DSP)) 627 if (!(current_cpu_data.flags & CPU_HAS_DSP) || (regs->sr & SR_DSP))
627 return 0; 628 return 0;
628 629
629 get_user(inst, ((unsigned short *) regs->pc)); 630 get_user(inst, ((unsigned short *) regs->pc));
630 631
631 inst &= 0xf000; 632 inst &= 0xf000;
632 633
633 /* Check for any type of DSP or support instruction */ 634 /* Check for any type of DSP or support instruction */
634 if ((inst == 0xf000) || (inst == 0x4000)) 635 if ((inst == 0xf000) || (inst == 0x4000))
635 return 1; 636 return 1;
636 637
637 return 0; 638 return 0;
638 } 639 }
639 #else 640 #else
640 #define is_dsp_inst(regs) (0) 641 #define is_dsp_inst(regs) (0)
641 #endif /* CONFIG_SH_DSP */ 642 #endif /* CONFIG_SH_DSP */
642 643
643 #ifdef CONFIG_CPU_SH2A 644 #ifdef CONFIG_CPU_SH2A
644 asmlinkage void do_divide_error(unsigned long r4, unsigned long r5, 645 asmlinkage void do_divide_error(unsigned long r4, unsigned long r5,
645 unsigned long r6, unsigned long r7, 646 unsigned long r6, unsigned long r7,
646 struct pt_regs __regs) 647 struct pt_regs __regs)
647 { 648 {
648 siginfo_t info; 649 siginfo_t info;
649 650
650 switch (r4) { 651 switch (r4) {
651 case TRAP_DIVZERO_ERROR: 652 case TRAP_DIVZERO_ERROR:
652 info.si_code = FPE_INTDIV; 653 info.si_code = FPE_INTDIV;
653 break; 654 break;
654 case TRAP_DIVOVF_ERROR: 655 case TRAP_DIVOVF_ERROR:
655 info.si_code = FPE_INTOVF; 656 info.si_code = FPE_INTOVF;
656 break; 657 break;
657 } 658 }
658 659
659 force_sig_info(SIGFPE, &info, current); 660 force_sig_info(SIGFPE, &info, current);
660 } 661 }
661 #endif 662 #endif
662 663
663 /* arch/sh/kernel/cpu/sh4/fpu.c */ 664 /* arch/sh/kernel/cpu/sh4/fpu.c */
664 extern int do_fpu_inst(unsigned short, struct pt_regs *); 665 extern int do_fpu_inst(unsigned short, struct pt_regs *);
665 extern asmlinkage void do_fpu_state_restore(unsigned long r4, unsigned long r5, 666 extern asmlinkage void do_fpu_state_restore(unsigned long r4, unsigned long r5,
666 unsigned long r6, unsigned long r7, struct pt_regs __regs); 667 unsigned long r6, unsigned long r7, struct pt_regs __regs);
667 668
668 asmlinkage void do_reserved_inst(unsigned long r4, unsigned long r5, 669 asmlinkage void do_reserved_inst(unsigned long r4, unsigned long r5,
669 unsigned long r6, unsigned long r7, 670 unsigned long r6, unsigned long r7,
670 struct pt_regs __regs) 671 struct pt_regs __regs)
671 { 672 {
672 struct pt_regs *regs = RELOC_HIDE(&__regs, 0); 673 struct pt_regs *regs = RELOC_HIDE(&__regs, 0);
673 unsigned long error_code; 674 unsigned long error_code;
674 struct task_struct *tsk = current; 675 struct task_struct *tsk = current;
675 676
676 #ifdef CONFIG_SH_FPU_EMU 677 #ifdef CONFIG_SH_FPU_EMU
677 unsigned short inst = 0; 678 unsigned short inst = 0;
678 int err; 679 int err;
679 680
680 get_user(inst, (unsigned short*)regs->pc); 681 get_user(inst, (unsigned short*)regs->pc);
681 682
682 err = do_fpu_inst(inst, regs); 683 err = do_fpu_inst(inst, regs);
683 if (!err) { 684 if (!err) {
684 regs->pc += instruction_size(inst); 685 regs->pc += instruction_size(inst);
685 return; 686 return;
686 } 687 }
687 /* not an FPU inst. */ 688 /* not an FPU inst. */
688 #endif 689 #endif
689 690
690 #ifdef CONFIG_SH_DSP 691 #ifdef CONFIG_SH_DSP
691 /* Check if it's a DSP instruction */ 692 /* Check if it's a DSP instruction */
692 if (is_dsp_inst(regs)) { 693 if (is_dsp_inst(regs)) {
693 /* Enable DSP mode, and restart instruction. */ 694 /* Enable DSP mode, and restart instruction. */
694 regs->sr |= SR_DSP; 695 regs->sr |= SR_DSP;
695 return; 696 return;
696 } 697 }
697 #endif 698 #endif
698 699
699 lookup_exception_vector(error_code); 700 lookup_exception_vector(error_code);
700 701
701 local_irq_enable(); 702 local_irq_enable();
702 CHK_REMOTE_DEBUG(regs); 703 CHK_REMOTE_DEBUG(regs);
703 force_sig(SIGILL, tsk); 704 force_sig(SIGILL, tsk);
704 die_if_no_fixup("reserved instruction", regs, error_code); 705 die_if_no_fixup("reserved instruction", regs, error_code);
705 } 706 }
706 707
707 #ifdef CONFIG_SH_FPU_EMU 708 #ifdef CONFIG_SH_FPU_EMU
708 static int emulate_branch(unsigned short inst, struct pt_regs* regs) 709 static int emulate_branch(unsigned short inst, struct pt_regs* regs)
709 { 710 {
710 /* 711 /*
711 * bfs: 8fxx: PC+=d*2+4; 712 * bfs: 8fxx: PC+=d*2+4;
712 * bts: 8dxx: PC+=d*2+4; 713 * bts: 8dxx: PC+=d*2+4;
713 * bra: axxx: PC+=D*2+4; 714 * bra: axxx: PC+=D*2+4;
714 * bsr: bxxx: PC+=D*2+4 after PR=PC+4; 715 * bsr: bxxx: PC+=D*2+4 after PR=PC+4;
715 * braf:0x23: PC+=Rn*2+4; 716 * braf:0x23: PC+=Rn*2+4;
716 * bsrf:0x03: PC+=Rn*2+4 after PR=PC+4; 717 * bsrf:0x03: PC+=Rn*2+4 after PR=PC+4;
717 * jmp: 4x2b: PC=Rn; 718 * jmp: 4x2b: PC=Rn;
718 * jsr: 4x0b: PC=Rn after PR=PC+4; 719 * jsr: 4x0b: PC=Rn after PR=PC+4;
719 * rts: 000b: PC=PR; 720 * rts: 000b: PC=PR;
720 */ 721 */
721 if ((inst & 0xfd00) == 0x8d00) { 722 if ((inst & 0xfd00) == 0x8d00) {
722 regs->pc += SH_PC_8BIT_OFFSET(inst); 723 regs->pc += SH_PC_8BIT_OFFSET(inst);
723 return 0; 724 return 0;
724 } 725 }
725 726
726 if ((inst & 0xe000) == 0xa000) { 727 if ((inst & 0xe000) == 0xa000) {
727 regs->pc += SH_PC_12BIT_OFFSET(inst); 728 regs->pc += SH_PC_12BIT_OFFSET(inst);
728 return 0; 729 return 0;
729 } 730 }
730 731
731 if ((inst & 0xf0df) == 0x0003) { 732 if ((inst & 0xf0df) == 0x0003) {
732 regs->pc += regs->regs[(inst & 0x0f00) >> 8] + 4; 733 regs->pc += regs->regs[(inst & 0x0f00) >> 8] + 4;
733 return 0; 734 return 0;
734 } 735 }
735 736
736 if ((inst & 0xf0df) == 0x400b) { 737 if ((inst & 0xf0df) == 0x400b) {
737 regs->pc = regs->regs[(inst & 0x0f00) >> 8]; 738 regs->pc = regs->regs[(inst & 0x0f00) >> 8];
738 return 0; 739 return 0;
739 } 740 }
740 741
741 if ((inst & 0xffff) == 0x000b) { 742 if ((inst & 0xffff) == 0x000b) {
742 regs->pc = regs->pr; 743 regs->pc = regs->pr;
743 return 0; 744 return 0;
744 } 745 }
745 746
746 return 1; 747 return 1;
747 } 748 }
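
The two displacement helpers used above are defined in the arch/sh headers rather than in this file. A plausible rendering, for reference only (an assumption, not quoted from this diff): both displacements are sign-extended, counted in 16-bit instruction units, and biased by 4 for the delay slot.

/* bfs/bts: signed 8-bit displacement in the low byte */
#define SH_PC_8BIT_OFFSET(instr)  ((((signed char)(instr)) * 2) + 4)
/* bra/bsr: signed 12-bit displacement in the low 12 bits */
#define SH_PC_12BIT_OFFSET(instr) ((((signed short)((instr) << 4)) >> 3) + 4)
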
748 #endif 749 #endif
749 750
750 asmlinkage void do_illegal_slot_inst(unsigned long r4, unsigned long r5, 751 asmlinkage void do_illegal_slot_inst(unsigned long r4, unsigned long r5,
751 unsigned long r6, unsigned long r7, 752 unsigned long r6, unsigned long r7,
752 struct pt_regs __regs) 753 struct pt_regs __regs)
753 { 754 {
754 struct pt_regs *regs = RELOC_HIDE(&__regs, 0); 755 struct pt_regs *regs = RELOC_HIDE(&__regs, 0);
755 unsigned long error_code; 756 unsigned long error_code;
756 struct task_struct *tsk = current; 757 struct task_struct *tsk = current;
757 #ifdef CONFIG_SH_FPU_EMU 758 #ifdef CONFIG_SH_FPU_EMU
758 unsigned short inst = 0; 759 unsigned short inst = 0;
759 760
760 get_user(inst, (unsigned short *)regs->pc + 1); 761 get_user(inst, (unsigned short *)regs->pc + 1);
761 if (!do_fpu_inst(inst, regs)) { 762 if (!do_fpu_inst(inst, regs)) {
762 get_user(inst, (unsigned short *)regs->pc); 763 get_user(inst, (unsigned short *)regs->pc);
763 if (!emulate_branch(inst, regs)) 764 if (!emulate_branch(inst, regs))
764 return; 765 return;
765 /* fault in branch.*/ 766 /* fault in branch.*/
766 } 767 }
767 /* not an FPU inst. */ 768 /* not an FPU inst. */
768 #endif 769 #endif
769 770
770 lookup_exception_vector(error_code); 771 lookup_exception_vector(error_code);
771 772
772 local_irq_enable(); 773 local_irq_enable();
773 CHK_REMOTE_DEBUG(regs); 774 CHK_REMOTE_DEBUG(regs);
774 force_sig(SIGILL, tsk); 775 force_sig(SIGILL, tsk);
775 die_if_no_fixup("illegal slot instruction", regs, error_code); 776 die_if_no_fixup("illegal slot instruction", regs, error_code);
776 } 777 }
777 778
778 asmlinkage void do_exception_error(unsigned long r4, unsigned long r5, 779 asmlinkage void do_exception_error(unsigned long r4, unsigned long r5,
779 unsigned long r6, unsigned long r7, 780 unsigned long r6, unsigned long r7,
780 struct pt_regs __regs) 781 struct pt_regs __regs)
781 { 782 {
782 struct pt_regs *regs = RELOC_HIDE(&__regs, 0); 783 struct pt_regs *regs = RELOC_HIDE(&__regs, 0);
783 long ex; 784 long ex;
784 785
785 lookup_exception_vector(ex); 786 lookup_exception_vector(ex);
786 die_if_kernel("exception", regs, ex); 787 die_if_kernel("exception", regs, ex);
787 } 788 }
788 789
789 #if defined(CONFIG_SH_STANDARD_BIOS) 790 #if defined(CONFIG_SH_STANDARD_BIOS)
790 void *gdb_vbr_vector; 791 void *gdb_vbr_vector;
791 792
792 static inline void __init gdb_vbr_init(void) 793 static inline void __init gdb_vbr_init(void)
793 { 794 {
794 register unsigned long vbr; 795 register unsigned long vbr;
795 796
796 /* 797 /*
797 * Read the old value of the VBR register to initialise 798 * Read the old value of the VBR register to initialise
798 * the vector through which debug and BIOS traps are 799 * the vector through which debug and BIOS traps are
799 * delegated by the Linux trap handler. 800 * delegated by the Linux trap handler.
800 */ 801 */
801 asm volatile("stc vbr, %0" : "=r" (vbr)); 802 asm volatile("stc vbr, %0" : "=r" (vbr));
802 803
803 gdb_vbr_vector = (void *)(vbr + 0x100); 804 gdb_vbr_vector = (void *)(vbr + 0x100);
804 printk("Setting GDB trap vector to 0x%08lx\n", 805 printk("Setting GDB trap vector to 0x%08lx\n",
805 (unsigned long)gdb_vbr_vector); 806 (unsigned long)gdb_vbr_vector);
806 } 807 }
807 #endif 808 #endif
808 809
809 void __init per_cpu_trap_init(void) 810 void __init per_cpu_trap_init(void)
810 { 811 {
811 extern void *vbr_base; 812 extern void *vbr_base;
812 813
813 #ifdef CONFIG_SH_STANDARD_BIOS 814 #ifdef CONFIG_SH_STANDARD_BIOS
814 gdb_vbr_init(); 815 gdb_vbr_init();
815 #endif 816 #endif
816 817
817 /* NOTE: The VBR value should be at P1 818 /* NOTE: The VBR value should be at P1
818 (or P2, virtual "fixed" address space). 819 (or P2, virtual "fixed" address space).
819 It definitely should not be a physical address. */ 820 It definitely should not be a physical address. */
820 821
821 asm volatile("ldc %0, vbr" 822 asm volatile("ldc %0, vbr"
822 : /* no output */ 823 : /* no output */
823 : "r" (&vbr_base) 824 : "r" (&vbr_base)
824 : "memory"); 825 : "memory");
825 } 826 }
826 827
827 void *set_exception_table_vec(unsigned int vec, void *handler) 828 void *set_exception_table_vec(unsigned int vec, void *handler)
828 { 829 {
829 extern void *exception_handling_table[]; 830 extern void *exception_handling_table[];
830 void *old_handler; 831 void *old_handler;
831 832
832 old_handler = exception_handling_table[vec]; 833 old_handler = exception_handling_table[vec];
833 exception_handling_table[vec] = handler; 834 exception_handling_table[vec] = handler;
834 return old_handler; 835 return old_handler;
835 } 836 }
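
Returning the old handler lets a caller install a hook temporarily and put things back afterwards; a hypothetical use (my_debug_handler is not part of this diff):

static void install_hook(void)
{
	void *old;

	old = set_exception_table_vec(TRAP_RESERVED_INST, my_debug_handler);
	/* ... run with the hook in place ... */
	set_exception_table_vec(TRAP_RESERVED_INST, old);
}
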
836 837
837 extern asmlinkage void address_error_handler(unsigned long r4, unsigned long r5, 838 extern asmlinkage void address_error_handler(unsigned long r4, unsigned long r5,
838 unsigned long r6, unsigned long r7, 839 unsigned long r6, unsigned long r7,
839 struct pt_regs __regs); 840 struct pt_regs __regs);
840 841
841 void __init trap_init(void) 842 void __init trap_init(void)
842 { 843 {
843 set_exception_table_vec(TRAP_RESERVED_INST, do_reserved_inst); 844 set_exception_table_vec(TRAP_RESERVED_INST, do_reserved_inst);
844 set_exception_table_vec(TRAP_ILLEGAL_SLOT_INST, do_illegal_slot_inst); 845 set_exception_table_vec(TRAP_ILLEGAL_SLOT_INST, do_illegal_slot_inst);
845 846
846 #if defined(CONFIG_CPU_SH4) && !defined(CONFIG_SH_FPU) || \ 847 #if defined(CONFIG_CPU_SH4) && !defined(CONFIG_SH_FPU) || \
847 defined(CONFIG_SH_FPU_EMU) 848 defined(CONFIG_SH_FPU_EMU)
848 /* 849 /*
849 * For SH-4 lacking an FPU, treat floating point instructions as 850 * For SH-4 lacking an FPU, treat floating point instructions as
850 * reserved. They'll be handled in the math-emu case, or faulted on 851 * reserved. They'll be handled in the math-emu case, or faulted on
851 * otherwise. 852 * otherwise.
852 */ 853 */
853 set_exception_table_evt(0x800, do_reserved_inst); 854 set_exception_table_evt(0x800, do_reserved_inst);
854 set_exception_table_evt(0x820, do_illegal_slot_inst); 855 set_exception_table_evt(0x820, do_illegal_slot_inst);
855 #elif defined(CONFIG_SH_FPU) 856 #elif defined(CONFIG_SH_FPU)
856 set_exception_table_evt(0x800, do_fpu_state_restore); 857 set_exception_table_evt(0x800, do_fpu_state_restore);
857 set_exception_table_evt(0x820, do_fpu_state_restore); 858 set_exception_table_evt(0x820, do_fpu_state_restore);
858 #endif 859 #endif
859 860
860 #ifdef CONFIG_CPU_SH2 861 #ifdef CONFIG_CPU_SH2
861 set_exception_table_vec(TRAP_ADDRESS_ERROR, address_error_handler); 862 set_exception_table_vec(TRAP_ADDRESS_ERROR, address_error_handler);
862 #endif 863 #endif
863 #ifdef CONFIG_CPU_SH2A 864 #ifdef CONFIG_CPU_SH2A
864 set_exception_table_vec(TRAP_DIVZERO_ERROR, do_divide_error); 865 set_exception_table_vec(TRAP_DIVZERO_ERROR, do_divide_error);
865 set_exception_table_vec(TRAP_DIVOVF_ERROR, do_divide_error); 866 set_exception_table_vec(TRAP_DIVOVF_ERROR, do_divide_error);
866 #endif 867 #endif
867 868
868 /* Setup VBR for boot cpu */ 869 /* Setup VBR for boot cpu */
869 per_cpu_trap_init(); 870 per_cpu_trap_init();
870 } 871 }
871 872
872 #ifdef CONFIG_BUG 873 #ifdef CONFIG_BUG
873 void handle_BUG(struct pt_regs *regs) 874 void handle_BUG(struct pt_regs *regs)
874 { 875 {
875 enum bug_trap_type tt; 876 enum bug_trap_type tt;
876 tt = report_bug(regs->pc, regs); 877 tt = report_bug(regs->pc, regs);
877 if (tt == BUG_TRAP_TYPE_WARN) { 878 if (tt == BUG_TRAP_TYPE_WARN) {
878 regs->pc += 2; 879 regs->pc += 2;
879 return; 880 return;
880 } 881 }
881 882
882 die("Kernel BUG", regs, TRAPA_BUG_OPCODE & 0xff); 883 die("Kernel BUG", regs, TRAPA_BUG_OPCODE & 0xff);
883 } 884 }
884 885
885 int is_valid_bugaddr(unsigned long addr) 886 int is_valid_bugaddr(unsigned long addr)
886 { 887 {
887 return addr >= PAGE_OFFSET; 888 return addr >= PAGE_OFFSET;
888 } 889 }
889 #endif 890 #endif
890 891
891 void show_trace(struct task_struct *tsk, unsigned long *sp, 892 void show_trace(struct task_struct *tsk, unsigned long *sp,
892 struct pt_regs *regs) 893 struct pt_regs *regs)
893 { 894 {
894 unsigned long addr; 895 unsigned long addr;
895 896
896 if (regs && user_mode(regs)) 897 if (regs && user_mode(regs))
897 return; 898 return;
898 899
899 printk("\nCall trace: "); 900 printk("\nCall trace: ");
900 #ifdef CONFIG_KALLSYMS 901 #ifdef CONFIG_KALLSYMS
901 printk("\n"); 902 printk("\n");
902 #endif 903 #endif
903 904
904 while (!kstack_end(sp)) { 905 while (!kstack_end(sp)) {
905 addr = *sp++; 906 addr = *sp++;
906 if (kernel_text_address(addr)) 907 if (kernel_text_address(addr))
907 print_ip_sym(addr); 908 print_ip_sym(addr);
908 } 909 }
909 910
910 printk("\n"); 911 printk("\n");
911 912
912 if (!tsk) 913 if (!tsk)
913 tsk = current; 914 tsk = current;
914 915
915 debug_show_held_locks(tsk); 916 debug_show_held_locks(tsk);
916 } 917 }
917 918
918 void show_stack(struct task_struct *tsk, unsigned long *sp) 919 void show_stack(struct task_struct *tsk, unsigned long *sp)
919 { 920 {
920 unsigned long stack; 921 unsigned long stack;
921 922
922 if (!tsk) 923 if (!tsk)
923 tsk = current; 924 tsk = current;
924 if (tsk == current) 925 if (tsk == current)
925 sp = (unsigned long *)current_stack_pointer; 926 sp = (unsigned long *)current_stack_pointer;
926 else 927 else
927 sp = (unsigned long *)tsk->thread.sp; 928 sp = (unsigned long *)tsk->thread.sp;
928 929
929 stack = (unsigned long)sp; 930 stack = (unsigned long)sp;
930 dump_mem("Stack: ", stack, THREAD_SIZE + 931 dump_mem("Stack: ", stack, THREAD_SIZE +
931 (unsigned long)task_stack_page(tsk)); 932 (unsigned long)task_stack_page(tsk));
932 show_trace(tsk, sp, NULL); 933 show_trace(tsk, sp, NULL);
933 } 934 }
934 935
935 void dump_stack(void) 936 void dump_stack(void)
936 { 937 {
937 show_stack(NULL, NULL); 938 show_stack(NULL, NULL);
938 } 939 }
939 EXPORT_SYMBOL(dump_stack); 940 EXPORT_SYMBOL(dump_stack);
940 941
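
Since dump_stack() is exported just above, any module can print a backtrace at a point of interest. A minimal, hypothetical example:

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>

static int __init whocalled_init(void)
{
	printk(KERN_DEBUG "whocalled: dumping the current call trace\n");
	dump_stack();	/* emits the "Call trace:" output shown above */
	return 0;
}
module_init(whocalled_init);
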
arch/sparc/kernel/traps.c
1 /* $Id: traps.c,v 1.64 2000/09/03 15:00:49 anton Exp $ 1 /* $Id: traps.c,v 1.64 2000/09/03 15:00:49 anton Exp $
2 * arch/sparc/kernel/traps.c 2 * arch/sparc/kernel/traps.c
3 * 3 *
4 * Copyright 1995 David S. Miller (davem@caip.rutgers.edu) 4 * Copyright 1995 David S. Miller (davem@caip.rutgers.edu)
5 * Copyright 2000 Jakub Jelinek (jakub@redhat.com) 5 * Copyright 2000 Jakub Jelinek (jakub@redhat.com)
6 */ 6 */
7 7
8 /* 8 /*
9 * I hate traps on the sparc, grrr... 9 * I hate traps on the sparc, grrr...
10 */ 10 */
11 11
12 #include <linux/sched.h> /* for jiffies */ 12 #include <linux/sched.h> /* for jiffies */
13 #include <linux/kernel.h> 13 #include <linux/kernel.h>
14 #include <linux/kallsyms.h> 14 #include <linux/kallsyms.h>
15 #include <linux/signal.h> 15 #include <linux/signal.h>
16 #include <linux/smp.h> 16 #include <linux/smp.h>
17 #include <linux/smp_lock.h> 17 #include <linux/smp_lock.h>
18 #include <linux/kdebug.h> 18 #include <linux/kdebug.h>
19 19
20 #include <asm/delay.h> 20 #include <asm/delay.h>
21 #include <asm/system.h> 21 #include <asm/system.h>
22 #include <asm/ptrace.h> 22 #include <asm/ptrace.h>
23 #include <asm/oplib.h> 23 #include <asm/oplib.h>
24 #include <asm/page.h> 24 #include <asm/page.h>
25 #include <asm/pgtable.h> 25 #include <asm/pgtable.h>
26 #include <asm/unistd.h> 26 #include <asm/unistd.h>
27 #include <asm/traps.h> 27 #include <asm/traps.h>
28 28
29 /* #define TRAP_DEBUG */ 29 /* #define TRAP_DEBUG */
30 30
31 struct trap_trace_entry { 31 struct trap_trace_entry {
32 unsigned long pc; 32 unsigned long pc;
33 unsigned long type; 33 unsigned long type;
34 }; 34 };
35 35
36 int trap_curbuf = 0; 36 int trap_curbuf = 0;
37 struct trap_trace_entry trapbuf[1024]; 37 struct trap_trace_entry trapbuf[1024];
38 38
39 void syscall_trace_entry(struct pt_regs *regs) 39 void syscall_trace_entry(struct pt_regs *regs)
40 { 40 {
41 printk("%s[%d]: ", current->comm, current->pid); 41 printk("%s[%d]: ", current->comm, current->pid);
42 printk("scall<%d> (could be %d)\n", (int) regs->u_regs[UREG_G1], 42 printk("scall<%d> (could be %d)\n", (int) regs->u_regs[UREG_G1],
43 (int) regs->u_regs[UREG_I0]); 43 (int) regs->u_regs[UREG_I0]);
44 } 44 }
45 45
46 void syscall_trace_exit(struct pt_regs *regs) 46 void syscall_trace_exit(struct pt_regs *regs)
47 { 47 {
48 } 48 }
49 49
50 void sun4m_nmi(struct pt_regs *regs) 50 void sun4m_nmi(struct pt_regs *regs)
51 { 51 {
52 unsigned long afsr, afar; 52 unsigned long afsr, afar;
53 53
54 printk("Aieee: sun4m NMI received!\n"); 54 printk("Aieee: sun4m NMI received!\n");
55 /* XXX HyperSparc hack XXX */ 55 /* XXX HyperSparc hack XXX */
56 __asm__ __volatile__("mov 0x500, %%g1\n\t" 56 __asm__ __volatile__("mov 0x500, %%g1\n\t"
57 "lda [%%g1] 0x4, %0\n\t" 57 "lda [%%g1] 0x4, %0\n\t"
58 "mov 0x600, %%g1\n\t" 58 "mov 0x600, %%g1\n\t"
59 "lda [%%g1] 0x4, %1\n\t" : 59 "lda [%%g1] 0x4, %1\n\t" :
60 "=r" (afsr), "=r" (afar)); 60 "=r" (afsr), "=r" (afar));
61 printk("afsr=%08lx afar=%08lx\n", afsr, afar); 61 printk("afsr=%08lx afar=%08lx\n", afsr, afar);
62 printk("you lose buddy boy...\n"); 62 printk("you lose buddy boy...\n");
63 show_regs(regs); 63 show_regs(regs);
64 prom_halt(); 64 prom_halt();
65 } 65 }
66 66
67 void sun4d_nmi(struct pt_regs *regs) 67 void sun4d_nmi(struct pt_regs *regs)
68 { 68 {
69 printk("Aieee: sun4d NMI received!\n"); 69 printk("Aieee: sun4d NMI received!\n");
70 printk("you lose buddy boy...\n"); 70 printk("you lose buddy boy...\n");
71 show_regs(regs); 71 show_regs(regs);
72 prom_halt(); 72 prom_halt();
73 } 73 }
74 74
75 void instruction_dump (unsigned long *pc) 75 void instruction_dump (unsigned long *pc)
76 { 76 {
77 int i; 77 int i;
78 78
79 if((((unsigned long) pc) & 3)) 79 if((((unsigned long) pc) & 3))
80 return; 80 return;
81 81
82 for(i = -3; i < 6; i++) 82 for(i = -3; i < 6; i++)
83 printk("%c%08lx%c",i?' ':'<',pc[i],i?' ':'>'); 83 printk("%c%08lx%c",i?' ':'<',pc[i],i?' ':'>');
84 printk("\n"); 84 printk("\n");
85 } 85 }
86 86
87 #define __SAVE __asm__ __volatile__("save %sp, -0x40, %sp\n\t") 87 #define __SAVE __asm__ __volatile__("save %sp, -0x40, %sp\n\t")
88 #define __RESTORE __asm__ __volatile__("restore %g0, %g0, %g0\n\t") 88 #define __RESTORE __asm__ __volatile__("restore %g0, %g0, %g0\n\t")
89 89
90 void die_if_kernel(char *str, struct pt_regs *regs) 90 void die_if_kernel(char *str, struct pt_regs *regs)
91 { 91 {
92 static int die_counter; 92 static int die_counter;
93 int count = 0; 93 int count = 0;
94 94
95 /* Amuse the user. */ 95 /* Amuse the user. */
96 printk( 96 printk(
97 " \\|/ ____ \\|/\n" 97 " \\|/ ____ \\|/\n"
98 " \"@'/ ,. \\`@\"\n" 98 " \"@'/ ,. \\`@\"\n"
99 " /_| \\__/ |_\\\n" 99 " /_| \\__/ |_\\\n"
100 " \\__U_/\n"); 100 " \\__U_/\n");
101 101
102 printk("%s(%d): %s [#%d]\n", current->comm, current->pid, str, ++die_counter); 102 printk("%s(%d): %s [#%d]\n", current->comm, current->pid, str, ++die_counter);
103 show_regs(regs); 103 show_regs(regs);
104 add_taint(TAINT_DIE);
104 105
105 __SAVE; __SAVE; __SAVE; __SAVE; 106 __SAVE; __SAVE; __SAVE; __SAVE;
106 __SAVE; __SAVE; __SAVE; __SAVE; 107 __SAVE; __SAVE; __SAVE; __SAVE;
107 __RESTORE; __RESTORE; __RESTORE; __RESTORE; 108 __RESTORE; __RESTORE; __RESTORE; __RESTORE;
108 __RESTORE; __RESTORE; __RESTORE; __RESTORE; 109 __RESTORE; __RESTORE; __RESTORE; __RESTORE;
109 110
110 { 111 {
111 struct reg_window *rw = (struct reg_window *)regs->u_regs[UREG_FP]; 112 struct reg_window *rw = (struct reg_window *)regs->u_regs[UREG_FP];
112 113
113 /* Stop the back trace when we hit userland or we 114 /* Stop the back trace when we hit userland or we
114 * find some badly aligned kernel stack. Set an upper 115 * find some badly aligned kernel stack. Set an upper
115 * bound in case our stack is trashed and we loop. 116 * bound in case our stack is trashed and we loop.
116 */ 117 */
117 while(rw && 118 while(rw &&
118 count++ < 30 && 119 count++ < 30 &&
119 (((unsigned long) rw) >= PAGE_OFFSET) && 120 (((unsigned long) rw) >= PAGE_OFFSET) &&
120 !(((unsigned long) rw) & 0x7)) { 121 !(((unsigned long) rw) & 0x7)) {
121 printk("Caller[%08lx]", rw->ins[7]); 122 printk("Caller[%08lx]", rw->ins[7]);
122 print_symbol(": %s\n", rw->ins[7]); 123 print_symbol(": %s\n", rw->ins[7]);
123 rw = (struct reg_window *)rw->ins[6]; 124 rw = (struct reg_window *)rw->ins[6];
124 } 125 }
125 } 126 }
126 printk("Instruction DUMP:"); 127 printk("Instruction DUMP:");
127 instruction_dump ((unsigned long *) regs->pc); 128 instruction_dump ((unsigned long *) regs->pc);
128 if(regs->psr & PSR_PS) 129 if(regs->psr & PSR_PS)
129 do_exit(SIGKILL); 130 do_exit(SIGKILL);
130 do_exit(SIGSEGV); 131 do_exit(SIGSEGV);
131 } 132 }
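
The add_taint(TAINT_DIE) call added above is this commit's change on sparc: once die_if_kernel() has run, the kernel stays marked as tainted, and every later oops or SysRq dump reports it (print_tainted() shows the bit as a 'D' flag). A rough sketch of the mechanism, not the kernel's exact code:

#define TAINT_DIE	(1 << 7)	/* bit value introduced by this commit */

static unsigned long tainted;		/* sketch of the global taint mask */

void add_taint(unsigned flag)
{
	tainted |= flag;	/* sticky: never cleared at run time */
}
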
132 133
133 void do_hw_interrupt(struct pt_regs *regs, unsigned long type) 134 void do_hw_interrupt(struct pt_regs *regs, unsigned long type)
134 { 135 {
135 siginfo_t info; 136 siginfo_t info;
136 137
137 if(type < 0x80) { 138 if(type < 0x80) {
138 /* Sun OS's puke from bad traps, Linux survives! */ 139 /* Sun OS's puke from bad traps, Linux survives! */
139 printk("Unimplemented Sparc TRAP, type = %02lx\n", type); 140 printk("Unimplemented Sparc TRAP, type = %02lx\n", type);
140 die_if_kernel("Whee... Hello Mr. Penguin", regs); 141 die_if_kernel("Whee... Hello Mr. Penguin", regs);
141 } 142 }
142 143
143 if(regs->psr & PSR_PS) 144 if(regs->psr & PSR_PS)
144 die_if_kernel("Kernel bad trap", regs); 145 die_if_kernel("Kernel bad trap", regs);
145 146
146 info.si_signo = SIGILL; 147 info.si_signo = SIGILL;
147 info.si_errno = 0; 148 info.si_errno = 0;
148 info.si_code = ILL_ILLTRP; 149 info.si_code = ILL_ILLTRP;
149 info.si_addr = (void __user *)regs->pc; 150 info.si_addr = (void __user *)regs->pc;
150 info.si_trapno = type - 0x80; 151 info.si_trapno = type - 0x80;
151 force_sig_info(SIGILL, &info, current); 152 force_sig_info(SIGILL, &info, current);
152 } 153 }
153 154
154 void do_illegal_instruction(struct pt_regs *regs, unsigned long pc, unsigned long npc, 155 void do_illegal_instruction(struct pt_regs *regs, unsigned long pc, unsigned long npc,
155 unsigned long psr) 156 unsigned long psr)
156 { 157 {
157 extern int do_user_muldiv (struct pt_regs *, unsigned long); 158 extern int do_user_muldiv (struct pt_regs *, unsigned long);
158 siginfo_t info; 159 siginfo_t info;
159 160
160 if(psr & PSR_PS) 161 if(psr & PSR_PS)
161 die_if_kernel("Kernel illegal instruction", regs); 162 die_if_kernel("Kernel illegal instruction", regs);
162 #ifdef TRAP_DEBUG 163 #ifdef TRAP_DEBUG
163 printk("Ill instr. at pc=%08lx instruction is %08lx\n", 164 printk("Ill instr. at pc=%08lx instruction is %08lx\n",
164 regs->pc, *(unsigned long *)regs->pc); 165 regs->pc, *(unsigned long *)regs->pc);
165 #endif 166 #endif
166 if (!do_user_muldiv (regs, pc)) 167 if (!do_user_muldiv (regs, pc))
167 return; 168 return;
168 169
169 info.si_signo = SIGILL; 170 info.si_signo = SIGILL;
170 info.si_errno = 0; 171 info.si_errno = 0;
171 info.si_code = ILL_ILLOPC; 172 info.si_code = ILL_ILLOPC;
172 info.si_addr = (void __user *)pc; 173 info.si_addr = (void __user *)pc;
173 info.si_trapno = 0; 174 info.si_trapno = 0;
174 send_sig_info(SIGILL, &info, current); 175 send_sig_info(SIGILL, &info, current);
175 } 176 }
176 177
177 void do_priv_instruction(struct pt_regs *regs, unsigned long pc, unsigned long npc, 178 void do_priv_instruction(struct pt_regs *regs, unsigned long pc, unsigned long npc,
178 unsigned long psr) 179 unsigned long psr)
179 { 180 {
180 siginfo_t info; 181 siginfo_t info;
181 182
182 if(psr & PSR_PS) 183 if(psr & PSR_PS)
183 die_if_kernel("Penguin instruction from Penguin mode??!?!", regs); 184 die_if_kernel("Penguin instruction from Penguin mode??!?!", regs);
184 info.si_signo = SIGILL; 185 info.si_signo = SIGILL;
185 info.si_errno = 0; 186 info.si_errno = 0;
186 info.si_code = ILL_PRVOPC; 187 info.si_code = ILL_PRVOPC;
187 info.si_addr = (void __user *)pc; 188 info.si_addr = (void __user *)pc;
188 info.si_trapno = 0; 189 info.si_trapno = 0;
189 send_sig_info(SIGILL, &info, current); 190 send_sig_info(SIGILL, &info, current);
190 } 191 }
191 192
192 /* XXX User may want to be allowed to do this. XXX */ 193 /* XXX User may want to be allowed to do this. XXX */
193 194
194 void do_memaccess_unaligned(struct pt_regs *regs, unsigned long pc, unsigned long npc, 195 void do_memaccess_unaligned(struct pt_regs *regs, unsigned long pc, unsigned long npc,
195 unsigned long psr) 196 unsigned long psr)
196 { 197 {
197 siginfo_t info; 198 siginfo_t info;
198 199
199 if(regs->psr & PSR_PS) { 200 if(regs->psr & PSR_PS) {
200 printk("KERNEL MNA at pc %08lx npc %08lx called by %08lx\n", pc, npc, 201 printk("KERNEL MNA at pc %08lx npc %08lx called by %08lx\n", pc, npc,
201 regs->u_regs[UREG_RETPC]); 202 regs->u_regs[UREG_RETPC]);
202 die_if_kernel("BOGUS", regs); 203 die_if_kernel("BOGUS", regs);
203 /* die_if_kernel("Kernel MNA access", regs); */ 204 /* die_if_kernel("Kernel MNA access", regs); */
204 } 205 }
205 #if 0 206 #if 0
206 show_regs (regs); 207 show_regs (regs);
207 instruction_dump ((unsigned long *) regs->pc); 208 instruction_dump ((unsigned long *) regs->pc);
208 printk ("do_MNA!\n"); 209 printk ("do_MNA!\n");
209 #endif 210 #endif
210 info.si_signo = SIGBUS; 211 info.si_signo = SIGBUS;
211 info.si_errno = 0; 212 info.si_errno = 0;
212 info.si_code = BUS_ADRALN; 213 info.si_code = BUS_ADRALN;
213 info.si_addr = /* FIXME: Should dig out mna address */ (void *)0; 214 info.si_addr = /* FIXME: Should dig out mna address */ (void *)0;
214 info.si_trapno = 0; 215 info.si_trapno = 0;
215 send_sig_info(SIGBUS, &info, current); 216 send_sig_info(SIGBUS, &info, current);
216 } 217 }
217 218
218 extern void fpsave(unsigned long *fpregs, unsigned long *fsr, 219 extern void fpsave(unsigned long *fpregs, unsigned long *fsr,
219 void *fpqueue, unsigned long *fpqdepth); 220 void *fpqueue, unsigned long *fpqdepth);
220 extern void fpload(unsigned long *fpregs, unsigned long *fsr); 221 extern void fpload(unsigned long *fpregs, unsigned long *fsr);
221 222
222 static unsigned long init_fsr = 0x0UL; 223 static unsigned long init_fsr = 0x0UL;
223 static unsigned long init_fregs[32] __attribute__ ((aligned (8))) = 224 static unsigned long init_fregs[32] __attribute__ ((aligned (8))) =
224 { ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, 225 { ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL,
225 ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, 226 ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL,
226 ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, 227 ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL,
227 ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL }; 228 ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL, ~0UL };
228 229
229 void do_fpd_trap(struct pt_regs *regs, unsigned long pc, unsigned long npc, 230 void do_fpd_trap(struct pt_regs *regs, unsigned long pc, unsigned long npc,
230 unsigned long psr) 231 unsigned long psr)
231 { 232 {
232 /* Sanity check... */ 233 /* Sanity check... */
233 if(psr & PSR_PS) 234 if(psr & PSR_PS)
234 die_if_kernel("Kernel gets FloatingPenguinUnit disabled trap", regs); 235 die_if_kernel("Kernel gets FloatingPenguinUnit disabled trap", regs);
235 236
236 put_psr(get_psr() | PSR_EF); /* Allow FPU ops. */ 237 put_psr(get_psr() | PSR_EF); /* Allow FPU ops. */
237 regs->psr |= PSR_EF; 238 regs->psr |= PSR_EF;
238 #ifndef CONFIG_SMP 239 #ifndef CONFIG_SMP
239 if(last_task_used_math == current) 240 if(last_task_used_math == current)
240 return; 241 return;
241 if(last_task_used_math) { 242 if(last_task_used_math) {
242 /* Other process's fpu state, save away */ 243 /* Other process's fpu state, save away */
243 struct task_struct *fptask = last_task_used_math; 244 struct task_struct *fptask = last_task_used_math;
244 fpsave(&fptask->thread.float_regs[0], &fptask->thread.fsr, 245 fpsave(&fptask->thread.float_regs[0], &fptask->thread.fsr,
245 &fptask->thread.fpqueue[0], &fptask->thread.fpqdepth); 246 &fptask->thread.fpqueue[0], &fptask->thread.fpqdepth);
246 } 247 }
247 last_task_used_math = current; 248 last_task_used_math = current;
248 if(used_math()) { 249 if(used_math()) {
249 fpload(&current->thread.float_regs[0], &current->thread.fsr); 250 fpload(&current->thread.float_regs[0], &current->thread.fsr);
250 } else { 251 } else {
251 /* Set initial sane state. */ 252 /* Set initial sane state. */
252 fpload(&init_fregs[0], &init_fsr); 253 fpload(&init_fregs[0], &init_fsr);
253 set_used_math(); 254 set_used_math();
254 } 255 }
255 #else 256 #else
256 if(!used_math()) { 257 if(!used_math()) {
257 fpload(&init_fregs[0], &init_fsr); 258 fpload(&init_fregs[0], &init_fsr);
258 set_used_math(); 259 set_used_math();
259 } else { 260 } else {
260 fpload(&current->thread.float_regs[0], &current->thread.fsr); 261 fpload(&current->thread.float_regs[0], &current->thread.fsr);
261 } 262 }
262 set_thread_flag(TIF_USEDFPU); 263 set_thread_flag(TIF_USEDFPU);
263 #endif 264 #endif
264 } 265 }
265 266
266 static unsigned long fake_regs[32] __attribute__ ((aligned (8))); 267 static unsigned long fake_regs[32] __attribute__ ((aligned (8)));
267 static unsigned long fake_fsr; 268 static unsigned long fake_fsr;
268 static unsigned long fake_queue[32] __attribute__ ((aligned (8))); 269 static unsigned long fake_queue[32] __attribute__ ((aligned (8)));
269 static unsigned long fake_depth; 270 static unsigned long fake_depth;
270 271
271 extern int do_mathemu(struct pt_regs *, struct task_struct *); 272 extern int do_mathemu(struct pt_regs *, struct task_struct *);
272 273
273 void do_fpe_trap(struct pt_regs *regs, unsigned long pc, unsigned long npc, 274 void do_fpe_trap(struct pt_regs *regs, unsigned long pc, unsigned long npc,
274 unsigned long psr) 275 unsigned long psr)
275 { 276 {
276 static int calls; 277 static int calls;
277 siginfo_t info; 278 siginfo_t info;
278 unsigned long fsr; 279 unsigned long fsr;
279 int ret = 0; 280 int ret = 0;
280 #ifndef CONFIG_SMP 281 #ifndef CONFIG_SMP
281 struct task_struct *fpt = last_task_used_math; 282 struct task_struct *fpt = last_task_used_math;
282 #else 283 #else
283 struct task_struct *fpt = current; 284 struct task_struct *fpt = current;
284 #endif 285 #endif
285 put_psr(get_psr() | PSR_EF); 286 put_psr(get_psr() | PSR_EF);
286 /* If nobody owns the fpu right now, just clear the 287 /* If nobody owns the fpu right now, just clear the
287 * error into our fake static buffer and hope it don't 288 * error into our fake static buffer and hope it don't
288 * happen again. Thank you crashme... 289 * happen again. Thank you crashme...
289 */ 290 */
290 #ifndef CONFIG_SMP 291 #ifndef CONFIG_SMP
291 if(!fpt) { 292 if(!fpt) {
292 #else 293 #else
293 if (!test_tsk_thread_flag(fpt, TIF_USEDFPU)) { 294 if (!test_tsk_thread_flag(fpt, TIF_USEDFPU)) {
294 #endif 295 #endif
295 fpsave(&fake_regs[0], &fake_fsr, &fake_queue[0], &fake_depth); 296 fpsave(&fake_regs[0], &fake_fsr, &fake_queue[0], &fake_depth);
296 regs->psr &= ~PSR_EF; 297 regs->psr &= ~PSR_EF;
297 return; 298 return;
298 } 299 }
299 fpsave(&fpt->thread.float_regs[0], &fpt->thread.fsr, 300 fpsave(&fpt->thread.float_regs[0], &fpt->thread.fsr,
300 &fpt->thread.fpqueue[0], &fpt->thread.fpqdepth); 301 &fpt->thread.fpqueue[0], &fpt->thread.fpqdepth);
301 #ifdef DEBUG_FPU 302 #ifdef DEBUG_FPU
302 printk("Hmm, FP exception, fsr was %016lx\n", fpt->thread.fsr); 303 printk("Hmm, FP exception, fsr was %016lx\n", fpt->thread.fsr);
303 #endif 304 #endif
304 305
305 switch ((fpt->thread.fsr & 0x1c000)) { 306 switch ((fpt->thread.fsr & 0x1c000)) {
306 /* switch on the contents of the ftt [floating point trap type] field */ 307 /* switch on the contents of the ftt [floating point trap type] field */
307 #ifdef DEBUG_FPU 308 #ifdef DEBUG_FPU
308 case (1 << 14): 309 case (1 << 14):
309 printk("IEEE_754_exception\n"); 310 printk("IEEE_754_exception\n");
310 break; 311 break;
311 #endif 312 #endif
312 case (2 << 14): /* unfinished_FPop (underflow & co) */ 313 case (2 << 14): /* unfinished_FPop (underflow & co) */
313 case (3 << 14): /* unimplemented_FPop (quad stuff, maybe sqrt) */ 314 case (3 << 14): /* unimplemented_FPop (quad stuff, maybe sqrt) */
314 ret = do_mathemu(regs, fpt); 315 ret = do_mathemu(regs, fpt);
315 break; 316 break;
316 #ifdef DEBUG_FPU 317 #ifdef DEBUG_FPU
317 case (4 << 14): 318 case (4 << 14):
318 printk("sequence_error (OS bug...)\n"); 319 printk("sequence_error (OS bug...)\n");
319 break; 320 break;
320 case (5 << 14): 321 case (5 << 14):
321 printk("hardware_error (uhoh!)\n"); 322 printk("hardware_error (uhoh!)\n");
322 break; 323 break;
323 case (6 << 14): 324 case (6 << 14):
324 printk("invalid_fp_register (user error)\n"); 325 printk("invalid_fp_register (user error)\n");
325 break; 326 break;
326 #endif /* DEBUG_FPU */ 327 #endif /* DEBUG_FPU */
327 } 328 }
328 /* If we successfully emulated the FPop, we pretend the trap never happened :-> */ 329 /* If we successfully emulated the FPop, we pretend the trap never happened :-> */
329 if (ret) { 330 if (ret) {
330 fpload(&current->thread.float_regs[0], &current->thread.fsr); 331 fpload(&current->thread.float_regs[0], &current->thread.fsr);
331 return; 332 return;
332 } 333 }
333 /* nope, better SIGFPE the offending process... */ 334 /* nope, better SIGFPE the offending process... */
334 335
335 #ifdef CONFIG_SMP 336 #ifdef CONFIG_SMP
336 clear_tsk_thread_flag(fpt, TIF_USEDFPU); 337 clear_tsk_thread_flag(fpt, TIF_USEDFPU);
337 #endif 338 #endif
338 if(psr & PSR_PS) { 339 if(psr & PSR_PS) {
339 /* The first fsr store/load we tried trapped, 340 /* The first fsr store/load we tried trapped,
340 * the second one will not (we hope). 341 * the second one will not (we hope).
341 */ 342 */
342 printk("WARNING: FPU exception from kernel mode. at pc=%08lx\n", 343 printk("WARNING: FPU exception from kernel mode. at pc=%08lx\n",
343 regs->pc); 344 regs->pc);
344 regs->pc = regs->npc; 345 regs->pc = regs->npc;
345 regs->npc += 4; 346 regs->npc += 4;
346 calls++; 347 calls++;
347 if(calls > 2) 348 if(calls > 2)
348 die_if_kernel("Too many Penguin-FPU traps from kernel mode", 349 die_if_kernel("Too many Penguin-FPU traps from kernel mode",
349 regs); 350 regs);
350 return; 351 return;
351 } 352 }
352 353
353 fsr = fpt->thread.fsr; 354 fsr = fpt->thread.fsr;
354 info.si_signo = SIGFPE; 355 info.si_signo = SIGFPE;
355 info.si_errno = 0; 356 info.si_errno = 0;
356 info.si_addr = (void __user *)pc; 357 info.si_addr = (void __user *)pc;
357 info.si_trapno = 0; 358 info.si_trapno = 0;
358 info.si_code = __SI_FAULT; 359 info.si_code = __SI_FAULT;
359 if ((fsr & 0x1c000) == (1 << 14)) { 360 if ((fsr & 0x1c000) == (1 << 14)) {
360 if (fsr & 0x10) 361 if (fsr & 0x10)
361 info.si_code = FPE_FLTINV; 362 info.si_code = FPE_FLTINV;
362 else if (fsr & 0x08) 363 else if (fsr & 0x08)
363 info.si_code = FPE_FLTOVF; 364 info.si_code = FPE_FLTOVF;
364 else if (fsr & 0x04) 365 else if (fsr & 0x04)
365 info.si_code = FPE_FLTUND; 366 info.si_code = FPE_FLTUND;
366 else if (fsr & 0x02) 367 else if (fsr & 0x02)
367 info.si_code = FPE_FLTDIV; 368 info.si_code = FPE_FLTDIV;
368 else if (fsr & 0x01) 369 else if (fsr & 0x01)
369 info.si_code = FPE_FLTRES; 370 info.si_code = FPE_FLTRES;
370 } 371 }
371 send_sig_info(SIGFPE, &info, fpt); 372 send_sig_info(SIGFPE, &info, fpt);
372 #ifndef CONFIG_SMP 373 #ifndef CONFIG_SMP
373 last_task_used_math = NULL; 374 last_task_used_math = NULL;
374 #endif 375 #endif
375 regs->psr &= ~PSR_EF; 376 regs->psr &= ~PSR_EF;
376 if(calls > 0) 377 if(calls > 0)
377 calls=0; 378 calls=0;
378 } 379 }
379 380
380 void handle_tag_overflow(struct pt_regs *regs, unsigned long pc, unsigned long npc, 381 void handle_tag_overflow(struct pt_regs *regs, unsigned long pc, unsigned long npc,
381 unsigned long psr) 382 unsigned long psr)
382 { 383 {
383 siginfo_t info; 384 siginfo_t info;
384 385
385 if(psr & PSR_PS) 386 if(psr & PSR_PS)
386 die_if_kernel("Penguin overflow trap from kernel mode", regs); 387 die_if_kernel("Penguin overflow trap from kernel mode", regs);
387 info.si_signo = SIGEMT; 388 info.si_signo = SIGEMT;
388 info.si_errno = 0; 389 info.si_errno = 0;
389 info.si_code = EMT_TAGOVF; 390 info.si_code = EMT_TAGOVF;
390 info.si_addr = (void __user *)pc; 391 info.si_addr = (void __user *)pc;
391 info.si_trapno = 0; 392 info.si_trapno = 0;
392 send_sig_info(SIGEMT, &info, current); 393 send_sig_info(SIGEMT, &info, current);
393 } 394 }
394 395
395 void handle_watchpoint(struct pt_regs *regs, unsigned long pc, unsigned long npc, 396 void handle_watchpoint(struct pt_regs *regs, unsigned long pc, unsigned long npc,
396 unsigned long psr) 397 unsigned long psr)
397 { 398 {
398 #ifdef TRAP_DEBUG 399 #ifdef TRAP_DEBUG
399 printk("Watchpoint detected at PC %08lx NPC %08lx PSR %08lx\n", 400 printk("Watchpoint detected at PC %08lx NPC %08lx PSR %08lx\n",
400 pc, npc, psr); 401 pc, npc, psr);
401 #endif 402 #endif
402 if(psr & PSR_PS) 403 if(psr & PSR_PS)
403 panic("Tell me what a watchpoint trap is, and I'll then deal " 404 panic("Tell me what a watchpoint trap is, and I'll then deal "
404 "with such a beast..."); 405 "with such a beast...");
405 } 406 }
406 407
407 void handle_reg_access(struct pt_regs *regs, unsigned long pc, unsigned long npc, 408 void handle_reg_access(struct pt_regs *regs, unsigned long pc, unsigned long npc,
408 unsigned long psr) 409 unsigned long psr)
409 { 410 {
410 siginfo_t info; 411 siginfo_t info;
411 412
412 #ifdef TRAP_DEBUG 413 #ifdef TRAP_DEBUG
413 printk("Register Access Exception at PC %08lx NPC %08lx PSR %08lx\n", 414 printk("Register Access Exception at PC %08lx NPC %08lx PSR %08lx\n",
414 pc, npc, psr); 415 pc, npc, psr);
415 #endif 416 #endif
416 info.si_signo = SIGBUS; 417 info.si_signo = SIGBUS;
417 info.si_errno = 0; 418 info.si_errno = 0;
418 info.si_code = BUS_OBJERR; 419 info.si_code = BUS_OBJERR;
419 info.si_addr = (void __user *)pc; 420 info.si_addr = (void __user *)pc;
420 info.si_trapno = 0; 421 info.si_trapno = 0;
421 force_sig_info(SIGBUS, &info, current); 422 force_sig_info(SIGBUS, &info, current);
422 } 423 }
423 424
424 void handle_cp_disabled(struct pt_regs *regs, unsigned long pc, unsigned long npc, 425 void handle_cp_disabled(struct pt_regs *regs, unsigned long pc, unsigned long npc,
425 unsigned long psr) 426 unsigned long psr)
426 { 427 {
427 siginfo_t info; 428 siginfo_t info;
428 429
429 info.si_signo = SIGILL; 430 info.si_signo = SIGILL;
430 info.si_errno = 0; 431 info.si_errno = 0;
431 info.si_code = ILL_COPROC; 432 info.si_code = ILL_COPROC;
432 info.si_addr = (void __user *)pc; 433 info.si_addr = (void __user *)pc;
433 info.si_trapno = 0; 434 info.si_trapno = 0;
434 send_sig_info(SIGILL, &info, current); 435 send_sig_info(SIGILL, &info, current);
435 } 436 }
436 437
437 void handle_cp_exception(struct pt_regs *regs, unsigned long pc, unsigned long npc, 438 void handle_cp_exception(struct pt_regs *regs, unsigned long pc, unsigned long npc,
438 unsigned long psr) 439 unsigned long psr)
439 { 440 {
440 siginfo_t info; 441 siginfo_t info;
441 442
442 #ifdef TRAP_DEBUG 443 #ifdef TRAP_DEBUG
443 printk("Co-Processor Exception at PC %08lx NPC %08lx PSR %08lx\n", 444 printk("Co-Processor Exception at PC %08lx NPC %08lx PSR %08lx\n",
444 pc, npc, psr); 445 pc, npc, psr);
445 #endif 446 #endif
446 info.si_signo = SIGILL; 447 info.si_signo = SIGILL;
447 info.si_errno = 0; 448 info.si_errno = 0;
448 info.si_code = ILL_COPROC; 449 info.si_code = ILL_COPROC;
449 info.si_addr = (void __user *)pc; 450 info.si_addr = (void __user *)pc;
450 info.si_trapno = 0; 451 info.si_trapno = 0;
451 send_sig_info(SIGILL, &info, current); 452 send_sig_info(SIGILL, &info, current);
452 } 453 }
453 454
454 void handle_hw_divzero(struct pt_regs *regs, unsigned long pc, unsigned long npc, 455 void handle_hw_divzero(struct pt_regs *regs, unsigned long pc, unsigned long npc,
455 unsigned long psr) 456 unsigned long psr)
456 { 457 {
457 siginfo_t info; 458 siginfo_t info;
458 459
459 info.si_signo = SIGFPE; 460 info.si_signo = SIGFPE;
460 info.si_errno = 0; 461 info.si_errno = 0;
461 info.si_code = FPE_INTDIV; 462 info.si_code = FPE_INTDIV;
462 info.si_addr = (void __user *)pc; 463 info.si_addr = (void __user *)pc;
463 info.si_trapno = 0; 464 info.si_trapno = 0;
464 send_sig_info(SIGFPE, &info, current); 465 send_sig_info(SIGFPE, &info, current);
465 } 466 }
466 467
467 #ifdef CONFIG_DEBUG_BUGVERBOSE 468 #ifdef CONFIG_DEBUG_BUGVERBOSE
468 void do_BUG(const char *file, int line) 469 void do_BUG(const char *file, int line)
469 { 470 {
470 // bust_spinlocks(1); XXX Not in our original BUG() 471 // bust_spinlocks(1); XXX Not in our original BUG()
471 printk("kernel BUG at %s:%d!\n", file, line); 472 printk("kernel BUG at %s:%d!\n", file, line);
472 } 473 }
473 #endif 474 #endif
474 475
475 /* Since we have our mappings set up, on multiprocessors we can spin them 476 /* Since we have our mappings set up, on multiprocessors we can spin them
476 * up here so that timer interrupts work during initialization. 477 * up here so that timer interrupts work during initialization.
477 */ 478 */
478 479
479 extern void sparc_cpu_startup(void); 480 extern void sparc_cpu_startup(void);
480 481
481 int linux_smp_still_initting; 482 int linux_smp_still_initting;
482 unsigned int thiscpus_tbr; 483 unsigned int thiscpus_tbr;
483 int thiscpus_mid; 484 int thiscpus_mid;
484 485
485 void trap_init(void) 486 void trap_init(void)
486 { 487 {
487 extern void thread_info_offsets_are_bolixed_pete(void); 488 extern void thread_info_offsets_are_bolixed_pete(void);
488 489
489 /* Force linker to barf if mismatched */ 490 /* Force linker to barf if mismatched */
490 if (TI_UWINMASK != offsetof(struct thread_info, uwinmask) || 491 if (TI_UWINMASK != offsetof(struct thread_info, uwinmask) ||
491 TI_TASK != offsetof(struct thread_info, task) || 492 TI_TASK != offsetof(struct thread_info, task) ||
492 TI_EXECDOMAIN != offsetof(struct thread_info, exec_domain) || 493 TI_EXECDOMAIN != offsetof(struct thread_info, exec_domain) ||
493 TI_FLAGS != offsetof(struct thread_info, flags) || 494 TI_FLAGS != offsetof(struct thread_info, flags) ||
494 TI_CPU != offsetof(struct thread_info, cpu) || 495 TI_CPU != offsetof(struct thread_info, cpu) ||
495 TI_PREEMPT != offsetof(struct thread_info, preempt_count) || 496 TI_PREEMPT != offsetof(struct thread_info, preempt_count) ||
496 TI_SOFTIRQ != offsetof(struct thread_info, softirq_count) || 497 TI_SOFTIRQ != offsetof(struct thread_info, softirq_count) ||
497 TI_HARDIRQ != offsetof(struct thread_info, hardirq_count) || 498 TI_HARDIRQ != offsetof(struct thread_info, hardirq_count) ||
498 TI_KSP != offsetof(struct thread_info, ksp) || 499 TI_KSP != offsetof(struct thread_info, ksp) ||
499 TI_KPC != offsetof(struct thread_info, kpc) || 500 TI_KPC != offsetof(struct thread_info, kpc) ||
500 TI_KPSR != offsetof(struct thread_info, kpsr) || 501 TI_KPSR != offsetof(struct thread_info, kpsr) ||
501 TI_KWIM != offsetof(struct thread_info, kwim) || 502 TI_KWIM != offsetof(struct thread_info, kwim) ||
502 TI_REG_WINDOW != offsetof(struct thread_info, reg_window) || 503 TI_REG_WINDOW != offsetof(struct thread_info, reg_window) ||
503 TI_RWIN_SPTRS != offsetof(struct thread_info, rwbuf_stkptrs) || 504 TI_RWIN_SPTRS != offsetof(struct thread_info, rwbuf_stkptrs) ||
504 TI_W_SAVED != offsetof(struct thread_info, w_saved)) 505 TI_W_SAVED != offsetof(struct thread_info, w_saved))
505 thread_info_offsets_are_bolixed_pete(); 506 thread_info_offsets_are_bolixed_pete();
506 507
507 /* Attach to the address space of init_task. */ 508 /* Attach to the address space of init_task. */
508 atomic_inc(&init_mm.mm_count); 509 atomic_inc(&init_mm.mm_count);
509 current->active_mm = &init_mm; 510 current->active_mm = &init_mm;
510 511
511 /* NOTE: Other cpus have this done as they are started 512 /* NOTE: Other cpus have this done as they are started
512 * up on SMP. 513 * up on SMP.
513 */ 514 */
514 } 515 }
515 516
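
trap_init() above relies on a link-time assertion trick: thread_info_offsets_are_bolixed_pete() is declared but never defined, so if any of the offsetof() comparisons is wrong the call survives constant folding and the final link fails. The same idiom in isolation, with hypothetical names (it assumes the compiler eliminates the call when the condition is compile-time false, as it does at the kernel's optimization levels):

extern void layout_is_wrong(void);	/* deliberately never defined */

static inline void check_layout(void)
{
	/* Constant-false at compile time: the call is dropped and the
	 * symbol is never referenced. If the check ever fails, the
	 * reference survives and the link breaks loudly. */
	if (sizeof(unsigned long) != sizeof(void *))
		layout_is_wrong();
}
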
arch/sparc64/kernel/traps.c
1 /* $Id: traps.c,v 1.85 2002/02/09 19:49:31 davem Exp $ 1 /* $Id: traps.c,v 1.85 2002/02/09 19:49:31 davem Exp $
2 * arch/sparc64/kernel/traps.c 2 * arch/sparc64/kernel/traps.c
3 * 3 *
4 * Copyright (C) 1995,1997 David S. Miller (davem@caip.rutgers.edu) 4 * Copyright (C) 1995,1997 David S. Miller (davem@caip.rutgers.edu)
5 * Copyright (C) 1997,1999,2000 Jakub Jelinek (jakub@redhat.com) 5 * Copyright (C) 1997,1999,2000 Jakub Jelinek (jakub@redhat.com)
6 */ 6 */
7 7
8 /* 8 /*
9 * I like traps on v9, :)))) 9 * I like traps on v9, :))))
10 */ 10 */
11 11
12 #include <linux/module.h> 12 #include <linux/module.h>
13 #include <linux/sched.h> 13 #include <linux/sched.h>
14 #include <linux/kernel.h> 14 #include <linux/kernel.h>
15 #include <linux/kallsyms.h> 15 #include <linux/kallsyms.h>
16 #include <linux/signal.h> 16 #include <linux/signal.h>
17 #include <linux/smp.h> 17 #include <linux/smp.h>
18 #include <linux/mm.h> 18 #include <linux/mm.h>
19 #include <linux/init.h> 19 #include <linux/init.h>
20 #include <linux/kdebug.h> 20 #include <linux/kdebug.h>
21 21
22 #include <asm/smp.h> 22 #include <asm/smp.h>
23 #include <asm/delay.h> 23 #include <asm/delay.h>
24 #include <asm/system.h> 24 #include <asm/system.h>
25 #include <asm/ptrace.h> 25 #include <asm/ptrace.h>
26 #include <asm/oplib.h> 26 #include <asm/oplib.h>
27 #include <asm/page.h> 27 #include <asm/page.h>
28 #include <asm/pgtable.h> 28 #include <asm/pgtable.h>
29 #include <asm/unistd.h> 29 #include <asm/unistd.h>
30 #include <asm/uaccess.h> 30 #include <asm/uaccess.h>
31 #include <asm/fpumacro.h> 31 #include <asm/fpumacro.h>
32 #include <asm/lsu.h> 32 #include <asm/lsu.h>
33 #include <asm/dcu.h> 33 #include <asm/dcu.h>
34 #include <asm/estate.h> 34 #include <asm/estate.h>
35 #include <asm/chafsr.h> 35 #include <asm/chafsr.h>
36 #include <asm/sfafsr.h> 36 #include <asm/sfafsr.h>
37 #include <asm/psrcompat.h> 37 #include <asm/psrcompat.h>
38 #include <asm/processor.h> 38 #include <asm/processor.h>
39 #include <asm/timer.h> 39 #include <asm/timer.h>
40 #include <asm/head.h> 40 #include <asm/head.h>
41 #ifdef CONFIG_KMOD 41 #ifdef CONFIG_KMOD
42 #include <linux/kmod.h> 42 #include <linux/kmod.h>
43 #endif 43 #endif
44 #include <asm/prom.h> 44 #include <asm/prom.h>
45 45
46 46
47 /* When an irrecoverable trap occurs at tl > 0, the trap entry 47 /* When an irrecoverable trap occurs at tl > 0, the trap entry
48 * code logs the trap state registers at every level in the trap 48 * code logs the trap state registers at every level in the trap
49 * stack. It is found at (pt_regs + sizeof(pt_regs)) and the layout 49 * stack. It is found at (pt_regs + sizeof(pt_regs)) and the layout
50 * is as follows: 50 * is as follows:
51 */ 51 */
52 struct tl1_traplog { 52 struct tl1_traplog {
53 struct { 53 struct {
54 unsigned long tstate; 54 unsigned long tstate;
55 unsigned long tpc; 55 unsigned long tpc;
56 unsigned long tnpc; 56 unsigned long tnpc;
57 unsigned long tt; 57 unsigned long tt;
58 } trapstack[4]; 58 } trapstack[4];
59 unsigned long tl; 59 unsigned long tl;
60 }; 60 };
61 61
62 static void dump_tl1_traplog(struct tl1_traplog *p) 62 static void dump_tl1_traplog(struct tl1_traplog *p)
63 { 63 {
64 int i, limit; 64 int i, limit;
65 65
66 printk(KERN_EMERG "TRAPLOG: Error at trap level 0x%lx, " 66 printk(KERN_EMERG "TRAPLOG: Error at trap level 0x%lx, "
67 "dumping track stack.\n", p->tl); 67 "dumping track stack.\n", p->tl);
68 68
69 limit = (tlb_type == hypervisor) ? 2 : 4; 69 limit = (tlb_type == hypervisor) ? 2 : 4;
70 for (i = 0; i < limit; i++) { 70 for (i = 0; i < limit; i++) {
71 printk(KERN_EMERG 71 printk(KERN_EMERG
72 "TRAPLOG: Trap level %d TSTATE[%016lx] TPC[%016lx] " 72 "TRAPLOG: Trap level %d TSTATE[%016lx] TPC[%016lx] "
73 "TNPC[%016lx] TT[%lx]\n", 73 "TNPC[%016lx] TT[%lx]\n",
74 i + 1, 74 i + 1,
75 p->trapstack[i].tstate, p->trapstack[i].tpc, 75 p->trapstack[i].tstate, p->trapstack[i].tpc,
76 p->trapstack[i].tnpc, p->trapstack[i].tt); 76 p->trapstack[i].tnpc, p->trapstack[i].tt);
77 print_symbol("TRAPLOG: TPC<%s>\n", p->trapstack[i].tpc); 77 print_symbol("TRAPLOG: TPC<%s>\n", p->trapstack[i].tpc);
78 } 78 }
79 } 79 }
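
The trap log sits immediately after the saved register frame, which is why every caller below passes (regs + 1): pointer arithmetic on a struct pt_regs * advances in whole-struct units, so given a regs pointer from a handler the cast is equivalent to the spelled-out byte arithmetic:

struct tl1_traplog *p = (struct tl1_traplog *)(regs + 1);

/* the same address, computed explicitly: */
struct tl1_traplog *q =
	(struct tl1_traplog *)((char *)regs + sizeof(struct pt_regs));
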
80 80
81 void do_call_debug(struct pt_regs *regs) 81 void do_call_debug(struct pt_regs *regs)
82 { 82 {
83 notify_die(DIE_CALL, "debug call", regs, 0, 255, SIGINT); 83 notify_die(DIE_CALL, "debug call", regs, 0, 255, SIGINT);
84 } 84 }
85 85
86 void bad_trap(struct pt_regs *regs, long lvl) 86 void bad_trap(struct pt_regs *regs, long lvl)
87 { 87 {
88 char buffer[32]; 88 char buffer[32];
89 siginfo_t info; 89 siginfo_t info;
90 90
91 if (notify_die(DIE_TRAP, "bad trap", regs, 91 if (notify_die(DIE_TRAP, "bad trap", regs,
92 0, lvl, SIGTRAP) == NOTIFY_STOP) 92 0, lvl, SIGTRAP) == NOTIFY_STOP)
93 return; 93 return;
94 94
95 if (lvl < 0x100) { 95 if (lvl < 0x100) {
96 sprintf(buffer, "Bad hw trap %lx at tl0\n", lvl); 96 sprintf(buffer, "Bad hw trap %lx at tl0\n", lvl);
97 die_if_kernel(buffer, regs); 97 die_if_kernel(buffer, regs);
98 } 98 }
99 99
100 lvl -= 0x100; 100 lvl -= 0x100;
101 if (regs->tstate & TSTATE_PRIV) { 101 if (regs->tstate & TSTATE_PRIV) {
102 sprintf(buffer, "Kernel bad sw trap %lx", lvl); 102 sprintf(buffer, "Kernel bad sw trap %lx", lvl);
103 die_if_kernel(buffer, regs); 103 die_if_kernel(buffer, regs);
104 } 104 }
105 if (test_thread_flag(TIF_32BIT)) { 105 if (test_thread_flag(TIF_32BIT)) {
106 regs->tpc &= 0xffffffff; 106 regs->tpc &= 0xffffffff;
107 regs->tnpc &= 0xffffffff; 107 regs->tnpc &= 0xffffffff;
108 } 108 }
109 info.si_signo = SIGILL; 109 info.si_signo = SIGILL;
110 info.si_errno = 0; 110 info.si_errno = 0;
111 info.si_code = ILL_ILLTRP; 111 info.si_code = ILL_ILLTRP;
112 info.si_addr = (void __user *)regs->tpc; 112 info.si_addr = (void __user *)regs->tpc;
113 info.si_trapno = lvl; 113 info.si_trapno = lvl;
114 force_sig_info(SIGILL, &info, current); 114 force_sig_info(SIGILL, &info, current);
115 } 115 }
116 116
117 void bad_trap_tl1(struct pt_regs *regs, long lvl) 117 void bad_trap_tl1(struct pt_regs *regs, long lvl)
118 { 118 {
119 char buffer[32]; 119 char buffer[32];
120 120
121 if (notify_die(DIE_TRAP_TL1, "bad trap tl1", regs, 121 if (notify_die(DIE_TRAP_TL1, "bad trap tl1", regs,
122 0, lvl, SIGTRAP) == NOTIFY_STOP) 122 0, lvl, SIGTRAP) == NOTIFY_STOP)
123 return; 123 return;
124 124
125 dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 125 dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
126 126
127 sprintf (buffer, "Bad trap %lx at tl>0", lvl); 127 sprintf (buffer, "Bad trap %lx at tl>0", lvl);
128 die_if_kernel (buffer, regs); 128 die_if_kernel (buffer, regs);
129 } 129 }
130 130
131 #ifdef CONFIG_DEBUG_BUGVERBOSE 131 #ifdef CONFIG_DEBUG_BUGVERBOSE
132 void do_BUG(const char *file, int line) 132 void do_BUG(const char *file, int line)
133 { 133 {
134 bust_spinlocks(1); 134 bust_spinlocks(1);
135 printk("kernel BUG at %s:%d!\n", file, line); 135 printk("kernel BUG at %s:%d!\n", file, line);
136 } 136 }
137 #endif 137 #endif
138 138
void spitfire_insn_access_exception(struct pt_regs *regs, unsigned long sfsr, unsigned long sfar)
{
	siginfo_t info;

	if (notify_die(DIE_TRAP, "instruction access exception", regs,
		       0, 0x8, SIGTRAP) == NOTIFY_STOP)
		return;

	if (regs->tstate & TSTATE_PRIV) {
		printk("spitfire_insn_access_exception: SFSR[%016lx] "
		       "SFAR[%016lx], going.\n", sfsr, sfar);
		die_if_kernel("Iax", regs);
	}
	if (test_thread_flag(TIF_32BIT)) {
		regs->tpc &= 0xffffffff;
		regs->tnpc &= 0xffffffff;
	}
	info.si_signo = SIGSEGV;
	info.si_errno = 0;
	info.si_code = SEGV_MAPERR;
	info.si_addr = (void __user *)regs->tpc;
	info.si_trapno = 0;
	force_sig_info(SIGSEGV, &info, current);
}

void spitfire_insn_access_exception_tl1(struct pt_regs *regs, unsigned long sfsr, unsigned long sfar)
{
	if (notify_die(DIE_TRAP_TL1, "instruction access exception tl1", regs,
		       0, 0x8, SIGTRAP) == NOTIFY_STOP)
		return;

	dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
	spitfire_insn_access_exception(regs, sfsr, sfar);
}

void sun4v_insn_access_exception(struct pt_regs *regs, unsigned long addr, unsigned long type_ctx)
{
	unsigned short type = (type_ctx >> 16);
	unsigned short ctx = (type_ctx & 0xffff);
	siginfo_t info;

	if (notify_die(DIE_TRAP, "instruction access exception", regs,
		       0, 0x8, SIGTRAP) == NOTIFY_STOP)
		return;

	if (regs->tstate & TSTATE_PRIV) {
		printk("sun4v_insn_access_exception: ADDR[%016lx] "
		       "CTX[%04x] TYPE[%04x], going.\n",
		       addr, ctx, type);
		die_if_kernel("Iax", regs);
	}

	if (test_thread_flag(TIF_32BIT)) {
		regs->tpc &= 0xffffffff;
		regs->tnpc &= 0xffffffff;
	}
	info.si_signo = SIGSEGV;
	info.si_errno = 0;
	info.si_code = SEGV_MAPERR;
	info.si_addr = (void __user *) addr;
	info.si_trapno = 0;
	force_sig_info(SIGSEGV, &info, current);
}

void sun4v_insn_access_exception_tl1(struct pt_regs *regs, unsigned long addr, unsigned long type_ctx)
{
	if (notify_die(DIE_TRAP_TL1, "instruction access exception tl1", regs,
		       0, 0x8, SIGTRAP) == NOTIFY_STOP)
		return;

	dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
	sun4v_insn_access_exception(regs, addr, type_ctx);
}

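/* Data access exceptions get one extra twist: a fault taken in
 * privileged mode may come from a uaccess routine, in which case the
 * exception table supplies a fixup address and execution resumes
 * there instead of dying.
 */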
void spitfire_data_access_exception(struct pt_regs *regs, unsigned long sfsr, unsigned long sfar)
{
	siginfo_t info;

	if (notify_die(DIE_TRAP, "data access exception", regs,
		       0, 0x30, SIGTRAP) == NOTIFY_STOP)
		return;

	if (regs->tstate & TSTATE_PRIV) {
		/* Test if this comes from uaccess places. */
		const struct exception_table_entry *entry;

		entry = search_exception_tables(regs->tpc);
		if (entry) {
			/* Ouch, somebody is trying VM hole tricks on us... */
#ifdef DEBUG_EXCEPTIONS
			printk("Exception: PC<%016lx> faddr<UNKNOWN>\n", regs->tpc);
			printk("EX_TABLE: insn<%016lx> fixup<%016lx>\n",
			       regs->tpc, entry->fixup);
#endif
			regs->tpc = entry->fixup;
			regs->tnpc = regs->tpc + 4;
			return;
		}
		/* Shit... */
		printk("spitfire_data_access_exception: SFSR[%016lx] "
		       "SFAR[%016lx], going.\n", sfsr, sfar);
		die_if_kernel("Dax", regs);
	}

	info.si_signo = SIGSEGV;
	info.si_errno = 0;
	info.si_code = SEGV_MAPERR;
	info.si_addr = (void __user *)sfar;
	info.si_trapno = 0;
	force_sig_info(SIGSEGV, &info, current);
}

void spitfire_data_access_exception_tl1(struct pt_regs *regs, unsigned long sfsr, unsigned long sfar)
{
	if (notify_die(DIE_TRAP_TL1, "data access exception tl1", regs,
		       0, 0x30, SIGTRAP) == NOTIFY_STOP)
		return;

	dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
	spitfire_data_access_exception(regs, sfsr, sfar);
}

void sun4v_data_access_exception(struct pt_regs *regs, unsigned long addr, unsigned long type_ctx)
{
	unsigned short type = (type_ctx >> 16);
	unsigned short ctx = (type_ctx & 0xffff);
	siginfo_t info;

	if (notify_die(DIE_TRAP, "data access exception", regs,
		       0, 0x8, SIGTRAP) == NOTIFY_STOP)
		return;

	if (regs->tstate & TSTATE_PRIV) {
		printk("sun4v_data_access_exception: ADDR[%016lx] "
		       "CTX[%04x] TYPE[%04x], going.\n",
		       addr, ctx, type);
		die_if_kernel("Dax", regs);
	}

	if (test_thread_flag(TIF_32BIT)) {
		regs->tpc &= 0xffffffff;
		regs->tnpc &= 0xffffffff;
	}
	info.si_signo = SIGSEGV;
	info.si_errno = 0;
	info.si_code = SEGV_MAPERR;
	info.si_addr = (void __user *) addr;
	info.si_trapno = 0;
	force_sig_info(SIGSEGV, &info, current);
}

void sun4v_data_access_exception_tl1(struct pt_regs *regs, unsigned long addr, unsigned long type_ctx)
{
	if (notify_die(DIE_TRAP_TL1, "data access exception tl1", regs,
		       0, 0x8, SIGTRAP) == NOTIFY_STOP)
		return;

	dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
	sun4v_data_access_exception(regs, addr, type_ctx);
}

#ifdef CONFIG_PCI
/* This is really pathetic... */
extern volatile int pci_poke_in_progress;
extern volatile int pci_poke_cpu;
extern volatile int pci_poke_faulted;
#endif

/* When access exceptions happen, we must do this. */
static void spitfire_clean_and_reenable_l1_caches(void)
{
	unsigned long va;

	if (tlb_type != spitfire)
		BUG();

	/* Clean 'em. */
	for (va = 0; va < (PAGE_SIZE << 1); va += 32) {
		spitfire_put_icache_tag(va, 0x0);
		spitfire_put_dcache_tag(va, 0x0);
	}

	/* Re-enable in LSU. */
	__asm__ __volatile__("flush %%g6\n\t"
			     "membar #Sync\n\t"
			     "stxa %0, [%%g0] %1\n\t"
			     "membar #Sync"
			     : /* no outputs */
			     : "r" (LSU_CONTROL_IC | LSU_CONTROL_DC |
				    LSU_CONTROL_IM | LSU_CONTROL_DM),
			       "i" (ASI_LSU_CONTROL)
			     : "memory");
}

static void spitfire_enable_estate_errors(void)
{
	__asm__ __volatile__("stxa %0, [%%g0] %1\n\t"
			     "membar #Sync"
			     : /* no outputs */
			     : "r" (ESTATE_ERR_ALL),
			       "i" (ASI_ESTATE_ERROR_EN));
}

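/* Maps an 8-bit UDB ECC syndrome to a code that prom_getunumber() can
 * translate into a memory module name; consumed by
 * spitfire_log_udb_syndrome() below.
 */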
static char ecc_syndrome_table[] = {
	0x4c, 0x40, 0x41, 0x48, 0x42, 0x48, 0x48, 0x49,
	0x43, 0x48, 0x48, 0x49, 0x48, 0x49, 0x49, 0x4a,
	0x44, 0x48, 0x48, 0x20, 0x48, 0x39, 0x4b, 0x48,
	0x48, 0x25, 0x31, 0x48, 0x28, 0x48, 0x48, 0x2c,
	0x45, 0x48, 0x48, 0x21, 0x48, 0x3d, 0x04, 0x48,
	0x48, 0x4b, 0x35, 0x48, 0x2d, 0x48, 0x48, 0x29,
	0x48, 0x00, 0x01, 0x48, 0x0a, 0x48, 0x48, 0x4b,
	0x0f, 0x48, 0x48, 0x4b, 0x48, 0x49, 0x49, 0x48,
	0x46, 0x48, 0x48, 0x2a, 0x48, 0x3b, 0x27, 0x48,
	0x48, 0x4b, 0x33, 0x48, 0x22, 0x48, 0x48, 0x2e,
	0x48, 0x19, 0x1d, 0x48, 0x1b, 0x4a, 0x48, 0x4b,
	0x1f, 0x48, 0x4a, 0x4b, 0x48, 0x4b, 0x4b, 0x48,
	0x48, 0x4b, 0x24, 0x48, 0x07, 0x48, 0x48, 0x36,
	0x4b, 0x48, 0x48, 0x3e, 0x48, 0x30, 0x38, 0x48,
	0x49, 0x48, 0x48, 0x4b, 0x48, 0x4b, 0x16, 0x48,
	0x48, 0x12, 0x4b, 0x48, 0x49, 0x48, 0x48, 0x4b,
	0x47, 0x48, 0x48, 0x2f, 0x48, 0x3f, 0x4b, 0x48,
	0x48, 0x06, 0x37, 0x48, 0x23, 0x48, 0x48, 0x2b,
	0x48, 0x05, 0x4b, 0x48, 0x4b, 0x48, 0x48, 0x32,
	0x26, 0x48, 0x48, 0x3a, 0x48, 0x34, 0x3c, 0x48,
	0x48, 0x11, 0x15, 0x48, 0x13, 0x4a, 0x48, 0x4b,
	0x17, 0x48, 0x4a, 0x4b, 0x48, 0x4b, 0x4b, 0x48,
	0x49, 0x48, 0x48, 0x4b, 0x48, 0x4b, 0x1e, 0x48,
	0x48, 0x1a, 0x4b, 0x48, 0x49, 0x48, 0x48, 0x4b,
	0x48, 0x08, 0x0d, 0x48, 0x02, 0x48, 0x48, 0x49,
	0x03, 0x48, 0x48, 0x49, 0x48, 0x4b, 0x4b, 0x48,
	0x49, 0x48, 0x48, 0x49, 0x48, 0x4b, 0x10, 0x48,
	0x48, 0x14, 0x4b, 0x48, 0x4b, 0x48, 0x48, 0x4b,
	0x49, 0x48, 0x48, 0x49, 0x48, 0x4b, 0x18, 0x48,
	0x48, 0x1c, 0x4b, 0x48, 0x4b, 0x48, 0x48, 0x4b,
	0x4a, 0x0c, 0x09, 0x48, 0x0e, 0x48, 0x48, 0x4b,
	0x0b, 0x48, 0x48, 0x4b, 0x48, 0x4b, 0x4b, 0x4a
};

static char *syndrome_unknown = "<Unknown>";

static void spitfire_log_udb_syndrome(unsigned long afar, unsigned long udbh, unsigned long udbl, unsigned long bit)
{
	unsigned short scode;
	char memmod_str[64], *p;

	if (udbl & bit) {
		scode = ecc_syndrome_table[udbl & 0xff];
		if (prom_getunumber(scode, afar,
				    memmod_str, sizeof(memmod_str)) == -1)
			p = syndrome_unknown;
		else
			p = memmod_str;
		printk(KERN_WARNING "CPU[%d]: UDBL Syndrome[%x] "
		       "Memory Module \"%s\"\n",
		       smp_processor_id(), scode, p);
	}

	if (udbh & bit) {
		scode = ecc_syndrome_table[udbh & 0xff];
		if (prom_getunumber(scode, afar,
				    memmod_str, sizeof(memmod_str)) == -1)
			p = syndrome_unknown;
		else
			p = memmod_str;
		printk(KERN_WARNING "CPU[%d]: UDBH Syndrome[%x] "
		       "Memory Module \"%s\"\n",
		       smp_processor_id(), scode, p);
	}
}

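/* Correctable ECC error: log it (including the implicated memory
 * module, if the PROM can name it) and resume; only the ESTATE error
 * enable register needs to be restored.
 */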
static void spitfire_cee_log(unsigned long afsr, unsigned long afar, unsigned long udbh, unsigned long udbl, int tl1, struct pt_regs *regs)
{
	printk(KERN_WARNING "CPU[%d]: Correctable ECC Error "
	       "AFSR[%lx] AFAR[%016lx] UDBL[%lx] UDBH[%lx] TL>1[%d]\n",
	       smp_processor_id(), afsr, afar, udbl, udbh, tl1);

	spitfire_log_udb_syndrome(afar, udbh, udbl, UDBE_CE);

	/* We always log it, even if someone is listening for this
	 * trap.
	 */
	notify_die(DIE_TRAP, "Correctable ECC Error", regs,
		   0, TRAP_TYPE_CEE, SIGTRAP);

	/* The Correctable ECC Error trap does not disable I/D caches.  So
	 * we only have to restore the ESTATE Error Enable register.
	 */
	spitfire_enable_estate_errors();
}

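/* Uncorrectable error: fatal if it hit the kernel, otherwise the user
 * task gets a SIGBUS after the L1 caches and ESTATE error reporting
 * are put back into a sane state.
 */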
static void spitfire_ue_log(unsigned long afsr, unsigned long afar, unsigned long udbh, unsigned long udbl, unsigned long tt, int tl1, struct pt_regs *regs)
{
	siginfo_t info;

	printk(KERN_WARNING "CPU[%d]: Uncorrectable Error AFSR[%lx] "
	       "AFAR[%lx] UDBL[%lx] UDBH[%lx] TT[%lx] TL>1[%d]\n",
	       smp_processor_id(), afsr, afar, udbl, udbh, tt, tl1);

	/* XXX add more human friendly logging of the error status
	 * XXX as is implemented for cheetah
	 */

	spitfire_log_udb_syndrome(afar, udbh, udbl, UDBE_UE);

	/* We always log it, even if someone is listening for this
	 * trap.
	 */
	notify_die(DIE_TRAP, "Uncorrectable Error", regs,
		   0, tt, SIGTRAP);

	if (regs->tstate & TSTATE_PRIV) {
		if (tl1)
			dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
		die_if_kernel("UE", regs);
	}

	/* XXX need more intelligent processing here, such as is implemented
	 * XXX for cheetah errors, in fact if the E-cache still holds the
	 * XXX line with bad parity this will loop
	 */

	spitfire_clean_and_reenable_l1_caches();
	spitfire_enable_estate_errors();

	if (test_thread_flag(TIF_32BIT)) {
		regs->tpc &= 0xffffffff;
		regs->tnpc &= 0xffffffff;
	}
	info.si_signo = SIGBUS;
	info.si_errno = 0;
	info.si_code = BUS_OBJERR;
	info.si_addr = (void *)0;
	info.si_trapno = 0;
	force_sig_info(SIGBUS, &info, current);
}

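/* Top-level Spitfire access error handler.  status_encoded packs the
 * AFSR, trap type, TL>1 flag and both UDB error registers into one
 * word (presumably assembled by the low-level trap entry code);
 * decode it and dispatch to the UE/CEE loggers above.
 */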
void spitfire_access_error(struct pt_regs *regs, unsigned long status_encoded, unsigned long afar)
{
	unsigned long afsr, tt, udbh, udbl;
	int tl1;

	afsr = (status_encoded & SFSTAT_AFSR_MASK) >> SFSTAT_AFSR_SHIFT;
	tt = (status_encoded & SFSTAT_TRAP_TYPE) >> SFSTAT_TRAP_TYPE_SHIFT;
	tl1 = (status_encoded & SFSTAT_TL_GT_ONE) ? 1 : 0;
	udbl = (status_encoded & SFSTAT_UDBL_MASK) >> SFSTAT_UDBL_SHIFT;
	udbh = (status_encoded & SFSTAT_UDBH_MASK) >> SFSTAT_UDBH_SHIFT;

#ifdef CONFIG_PCI
	if (tt == TRAP_TYPE_DAE &&
	    pci_poke_in_progress && pci_poke_cpu == smp_processor_id()) {
		spitfire_clean_and_reenable_l1_caches();
		spitfire_enable_estate_errors();

		pci_poke_faulted = 1;
		regs->tnpc = regs->tpc + 4;
		return;
	}
#endif

	if (afsr & SFAFSR_UE)
		spitfire_ue_log(afsr, afar, udbh, udbl, tt, tl1, regs);

	if (tt == TRAP_TYPE_CEE) {
		/* Handle the case where we took a CEE trap, but ACK'd
		 * only the UE state in the UDB error registers.
		 */
		if (afsr & SFAFSR_UE) {
			if (udbh & UDBE_CE) {
				__asm__ __volatile__(
					"stxa %0, [%1] %2\n\t"
					"membar #Sync"
					: /* no outputs */
					: "r" (udbh & UDBE_CE),
					  "r" (0x0), "i" (ASI_UDB_ERROR_W));
			}
			if (udbl & UDBE_CE) {
				__asm__ __volatile__(
					"stxa %0, [%1] %2\n\t"
					"membar #Sync"
					: /* no outputs */
					: "r" (udbl & UDBE_CE),
					  "r" (0x18), "i" (ASI_UDB_ERROR_W));
			}
		}

		spitfire_cee_log(afsr, afar, udbh, udbl, tl1, regs);
	}
}

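/* Set when the P-cache (prefetch cache) has been forced on, e.g. by
 * an early boot option handled elsewhere; cheetah_enable_pcache()
 * below then flips the relevant DCU control register bits on the
 * calling cpu.
 */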
int cheetah_pcache_forced_on;

void cheetah_enable_pcache(void)
{
	unsigned long dcr;

	printk("CHEETAH: Enabling P-Cache on cpu %d.\n",
	       smp_processor_id());

	__asm__ __volatile__("ldxa [%%g0] %1, %0"
			     : "=r" (dcr)
			     : "i" (ASI_DCU_CONTROL_REG));
	dcr |= (DCU_PE | DCU_HPE | DCU_SPE | DCU_SL);
	__asm__ __volatile__("stxa %0, [%%g0] %1\n\t"
			     "membar #Sync"
			     : /* no outputs */
			     : "r" (dcr), "i" (ASI_DCU_CONTROL_REG));
}

/* Cheetah error trap handling. */
static unsigned long ecache_flush_physbase;
static unsigned long ecache_flush_linesize;
static unsigned long ecache_flush_size;

/* WARNING: The error trap handlers in assembly know the precise
 * layout of the following structure.
 *
 * C-level handlers below use this information to log the error
 * and then determine how to recover (if possible).
 */
struct cheetah_err_info {
/*0x00*/u64 afsr;
/*0x08*/u64 afar;

	/* D-cache state */
/*0x10*/u64 dcache_data[4];	/* The actual data */
/*0x30*/u64 dcache_index;	/* D-cache index */
/*0x38*/u64 dcache_tag;		/* D-cache tag/valid */
/*0x40*/u64 dcache_utag;	/* D-cache microtag */
/*0x48*/u64 dcache_stag;	/* D-cache snooptag */

	/* I-cache state */
/*0x50*/u64 icache_data[8];	/* The actual insns + predecode */
/*0x90*/u64 icache_index;	/* I-cache index */
/*0x98*/u64 icache_tag;		/* I-cache phys tag */
/*0xa0*/u64 icache_utag;	/* I-cache microtag */
/*0xa8*/u64 icache_stag;	/* I-cache snooptag */
/*0xb0*/u64 icache_upper;	/* I-cache upper-tag */
/*0xb8*/u64 icache_lower;	/* I-cache lower-tag */

	/* E-cache state */
/*0xc0*/u64 ecache_data[4];	/* 32 bytes from staging registers */
/*0xe0*/u64 ecache_index;	/* E-cache index */
/*0xe8*/u64 ecache_tag;		/* E-cache tag/state */

/*0xf0*/u64 __pad[32 - 30];
};
#define CHAFSR_INVALID		((u64)-1L)

/* This table is ordered in priority of errors and matches the
 * AFAR overwrite policy as well.
 */

struct afsr_error_table {
	unsigned long mask;
	const char *name;
};

static const char CHAFSR_PERR_msg[] =
	"System interface protocol error";
static const char CHAFSR_IERR_msg[] =
	"Internal processor error";
static const char CHAFSR_ISAP_msg[] =
	"System request parity error on incoming address";
static const char CHAFSR_UCU_msg[] =
	"Uncorrectable E-cache ECC error for ifetch/data";
static const char CHAFSR_UCC_msg[] =
	"SW Correctable E-cache ECC error for ifetch/data";
static const char CHAFSR_UE_msg[] =
	"Uncorrectable system bus data ECC error for read";
static const char CHAFSR_EDU_msg[] =
	"Uncorrectable E-cache ECC error for stmerge/blkld";
static const char CHAFSR_EMU_msg[] =
	"Uncorrectable system bus MTAG error";
static const char CHAFSR_WDU_msg[] =
	"Uncorrectable E-cache ECC error for writeback";
static const char CHAFSR_CPU_msg[] =
	"Uncorrectable ECC error for copyout";
static const char CHAFSR_CE_msg[] =
	"HW corrected system bus data ECC error for read";
static const char CHAFSR_EDC_msg[] =
	"HW corrected E-cache ECC error for stmerge/blkld";
static const char CHAFSR_EMC_msg[] =
	"HW corrected system bus MTAG ECC error";
static const char CHAFSR_WDC_msg[] =
	"HW corrected E-cache ECC error for writeback";
static const char CHAFSR_CPC_msg[] =
	"HW corrected ECC error for copyout";
static const char CHAFSR_TO_msg[] =
	"Unmapped error from system bus";
static const char CHAFSR_BERR_msg[] =
	"Bus error response from system bus";
static const char CHAFSR_IVC_msg[] =
	"HW corrected system bus data ECC error for ivec read";
static const char CHAFSR_IVU_msg[] =
	"Uncorrectable system bus data ECC error for ivec read";
static struct afsr_error_table __cheetah_error_table[] = {
	{ CHAFSR_PERR, CHAFSR_PERR_msg },
	{ CHAFSR_IERR, CHAFSR_IERR_msg },
	{ CHAFSR_ISAP, CHAFSR_ISAP_msg },
	{ CHAFSR_UCU, CHAFSR_UCU_msg },
	{ CHAFSR_UCC, CHAFSR_UCC_msg },
	{ CHAFSR_UE, CHAFSR_UE_msg },
	{ CHAFSR_EDU, CHAFSR_EDU_msg },
	{ CHAFSR_EMU, CHAFSR_EMU_msg },
	{ CHAFSR_WDU, CHAFSR_WDU_msg },
	{ CHAFSR_CPU, CHAFSR_CPU_msg },
	{ CHAFSR_CE, CHAFSR_CE_msg },
	{ CHAFSR_EDC, CHAFSR_EDC_msg },
	{ CHAFSR_EMC, CHAFSR_EMC_msg },
	{ CHAFSR_WDC, CHAFSR_WDC_msg },
	{ CHAFSR_CPC, CHAFSR_CPC_msg },
	{ CHAFSR_TO, CHAFSR_TO_msg },
	{ CHAFSR_BERR, CHAFSR_BERR_msg },
	/* These two do not update the AFAR. */
	{ CHAFSR_IVC, CHAFSR_IVC_msg },
	{ CHAFSR_IVU, CHAFSR_IVU_msg },
	{ 0, NULL },
};
static const char CHPAFSR_DTO_msg[] =
	"System bus unmapped error for prefetch/storequeue-read";
static const char CHPAFSR_DBERR_msg[] =
	"System bus error for prefetch/storequeue-read";
static const char CHPAFSR_THCE_msg[] =
	"Hardware corrected E-cache Tag ECC error";
static const char CHPAFSR_TSCE_msg[] =
	"SW handled correctable E-cache Tag ECC error";
static const char CHPAFSR_TUE_msg[] =
	"Uncorrectable E-cache Tag ECC error";
static const char CHPAFSR_DUE_msg[] =
	"System bus uncorrectable data ECC error due to prefetch/store-fill";
static struct afsr_error_table __cheetah_plus_error_table[] = {
	{ CHAFSR_PERR, CHAFSR_PERR_msg },
	{ CHAFSR_IERR, CHAFSR_IERR_msg },
	{ CHAFSR_ISAP, CHAFSR_ISAP_msg },
	{ CHAFSR_UCU, CHAFSR_UCU_msg },
	{ CHAFSR_UCC, CHAFSR_UCC_msg },
	{ CHAFSR_UE, CHAFSR_UE_msg },
	{ CHAFSR_EDU, CHAFSR_EDU_msg },
	{ CHAFSR_EMU, CHAFSR_EMU_msg },
	{ CHAFSR_WDU, CHAFSR_WDU_msg },
	{ CHAFSR_CPU, CHAFSR_CPU_msg },
	{ CHAFSR_CE, CHAFSR_CE_msg },
	{ CHAFSR_EDC, CHAFSR_EDC_msg },
	{ CHAFSR_EMC, CHAFSR_EMC_msg },
	{ CHAFSR_WDC, CHAFSR_WDC_msg },
	{ CHAFSR_CPC, CHAFSR_CPC_msg },
	{ CHAFSR_TO, CHAFSR_TO_msg },
	{ CHAFSR_BERR, CHAFSR_BERR_msg },
	{ CHPAFSR_DTO, CHPAFSR_DTO_msg },
	{ CHPAFSR_DBERR, CHPAFSR_DBERR_msg },
	{ CHPAFSR_THCE, CHPAFSR_THCE_msg },
	{ CHPAFSR_TSCE, CHPAFSR_TSCE_msg },
	{ CHPAFSR_TUE, CHPAFSR_TUE_msg },
	{ CHPAFSR_DUE, CHPAFSR_DUE_msg },
	/* These two do not update the AFAR. */
	{ CHAFSR_IVC, CHAFSR_IVC_msg },
	{ CHAFSR_IVU, CHAFSR_IVU_msg },
	{ 0, NULL },
};
static const char JPAFSR_JETO_msg[] =
	"System interface protocol error, hw timeout caused";
static const char JPAFSR_SCE_msg[] =
	"Parity error on system snoop results";
static const char JPAFSR_JEIC_msg[] =
	"System interface protocol error, illegal command detected";
static const char JPAFSR_JEIT_msg[] =
	"System interface protocol error, illegal ADTYPE detected";
static const char JPAFSR_OM_msg[] =
	"Out of range memory error has occurred";
static const char JPAFSR_ETP_msg[] =
	"Parity error on L2 cache tag SRAM";
static const char JPAFSR_UMS_msg[] =
	"Error due to unsupported store";
static const char JPAFSR_RUE_msg[] =
	"Uncorrectable ECC error from remote cache/memory";
static const char JPAFSR_RCE_msg[] =
	"Correctable ECC error from remote cache/memory";
static const char JPAFSR_BP_msg[] =
	"JBUS parity error on returned read data";
static const char JPAFSR_WBP_msg[] =
	"JBUS parity error on data for writeback or block store";
static const char JPAFSR_FRC_msg[] =
	"Foreign read to DRAM incurring correctable ECC error";
static const char JPAFSR_FRU_msg[] =
	"Foreign read to DRAM incurring uncorrectable ECC error";
static struct afsr_error_table __jalapeno_error_table[] = {
	{ JPAFSR_JETO, JPAFSR_JETO_msg },
	{ JPAFSR_SCE, JPAFSR_SCE_msg },
	{ JPAFSR_JEIC, JPAFSR_JEIC_msg },
	{ JPAFSR_JEIT, JPAFSR_JEIT_msg },
	{ CHAFSR_PERR, CHAFSR_PERR_msg },
	{ CHAFSR_IERR, CHAFSR_IERR_msg },
	{ CHAFSR_ISAP, CHAFSR_ISAP_msg },
	{ CHAFSR_UCU, CHAFSR_UCU_msg },
	{ CHAFSR_UCC, CHAFSR_UCC_msg },
	{ CHAFSR_UE, CHAFSR_UE_msg },
	{ CHAFSR_EDU, CHAFSR_EDU_msg },
	{ JPAFSR_OM, JPAFSR_OM_msg },
	{ CHAFSR_WDU, CHAFSR_WDU_msg },
	{ CHAFSR_CPU, CHAFSR_CPU_msg },
	{ CHAFSR_CE, CHAFSR_CE_msg },
	{ CHAFSR_EDC, CHAFSR_EDC_msg },
	{ JPAFSR_ETP, JPAFSR_ETP_msg },
	{ CHAFSR_WDC, CHAFSR_WDC_msg },
	{ CHAFSR_CPC, CHAFSR_CPC_msg },
	{ CHAFSR_TO, CHAFSR_TO_msg },
	{ CHAFSR_BERR, CHAFSR_BERR_msg },
	{ JPAFSR_UMS, JPAFSR_UMS_msg },
	{ JPAFSR_RUE, JPAFSR_RUE_msg },
	{ JPAFSR_RCE, JPAFSR_RCE_msg },
	{ JPAFSR_BP, JPAFSR_BP_msg },
	{ JPAFSR_WBP, JPAFSR_WBP_msg },
	{ JPAFSR_FRC, JPAFSR_FRC_msg },
	{ JPAFSR_FRU, JPAFSR_FRU_msg },
	/* These two do not update the AFAR. */
	{ CHAFSR_IVU, CHAFSR_IVU_msg },
	{ 0, NULL },
};
static struct afsr_error_table *cheetah_error_table;
static unsigned long cheetah_afsr_errors;

/* This is allocated at boot time based upon the largest hardware
 * cpu ID in the system.  We allocate two entries per cpu, one for
 * TL==0 logging and one for TL >= 1 logging.
 */
struct cheetah_err_info *cheetah_error_log;

static __inline__ struct cheetah_err_info *cheetah_get_error_log(unsigned long afsr)
{
	struct cheetah_err_info *p;
	int cpu = smp_processor_id();

	if (!cheetah_error_log)
		return NULL;

	p = cheetah_error_log + (cpu * 2);
	if ((afsr & CHAFSR_TL1) != 0UL)
		p++;

	return p;
}

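/* Trap table entry points and their replacements, each eight
 * instructions (32 bytes) long; cheetah_ecache_flush_init() patches
 * the live trap table with the cheetah-specific versions.
 */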
extern unsigned int tl0_icpe[], tl1_icpe[];
extern unsigned int tl0_dcpe[], tl1_dcpe[];
extern unsigned int tl0_fecc[], tl1_fecc[];
extern unsigned int tl0_cee[], tl1_cee[];
extern unsigned int tl0_iae[], tl1_iae[];
extern unsigned int tl0_dae[], tl1_dae[];
extern unsigned int cheetah_plus_icpe_trap_vector[], cheetah_plus_icpe_trap_vector_tl1[];
extern unsigned int cheetah_plus_dcpe_trap_vector[], cheetah_plus_dcpe_trap_vector_tl1[];
extern unsigned int cheetah_fecc_trap_vector[], cheetah_fecc_trap_vector_tl1[];
extern unsigned int cheetah_cee_trap_vector[], cheetah_cee_trap_vector_tl1[];
extern unsigned int cheetah_deferred_trap_vector[], cheetah_deferred_trap_vector_tl1[];

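/* Boot-time setup: size the E-cache displacement-flush region off the
 * largest E-cache found, allocate the per-cpu error log scoreboard,
 * pick the AFSR decode table matching this CPU implementation, and
 * patch the trap table.
 */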
void __init cheetah_ecache_flush_init(void)
{
	unsigned long largest_size, smallest_linesize, order, ver;
	int i, sz;

	/* Scan all cpu device tree nodes, note two values:
	 * 1) largest E-cache size
	 * 2) smallest E-cache line size
	 */
	largest_size = 0UL;
	smallest_linesize = ~0UL;

	for (i = 0; i < NR_CPUS; i++) {
		unsigned long val;

		val = cpu_data(i).ecache_size;
		if (!val)
			continue;

		if (val > largest_size)
			largest_size = val;

		val = cpu_data(i).ecache_line_size;
		if (val < smallest_linesize)
			smallest_linesize = val;
	}

	if (largest_size == 0UL || smallest_linesize == ~0UL) {
		prom_printf("cheetah_ecache_flush_init: Cannot probe cpu E-cache "
			    "parameters.\n");
		prom_halt();
	}

	ecache_flush_size = (2 * largest_size);
	ecache_flush_linesize = smallest_linesize;

	ecache_flush_physbase = find_ecache_flush_span(ecache_flush_size);

	if (ecache_flush_physbase == ~0UL) {
		prom_printf("cheetah_ecache_flush_init: Cannot find %d byte "
			    "contiguous physical memory.\n",
			    ecache_flush_size);
		prom_halt();
	}

	/* Now allocate error trap reporting scoreboard. */
	sz = NR_CPUS * (2 * sizeof(struct cheetah_err_info));
	for (order = 0; order < MAX_ORDER; order++) {
		if ((PAGE_SIZE << order) >= sz)
			break;
	}
	cheetah_error_log = (struct cheetah_err_info *)
		__get_free_pages(GFP_KERNEL, order);
	if (!cheetah_error_log) {
		prom_printf("cheetah_ecache_flush_init: Failed to allocate "
			    "error logging scoreboard (%d bytes).\n", sz);
		prom_halt();
	}
	memset(cheetah_error_log, 0, PAGE_SIZE << order);

	/* Mark all AFSRs as invalid so that the trap handler will
	 * log new information there.
	 */
	for (i = 0; i < 2 * NR_CPUS; i++)
		cheetah_error_log[i].afsr = CHAFSR_INVALID;

	__asm__ ("rdpr %%ver, %0" : "=r" (ver));
	if ((ver >> 32) == __JALAPENO_ID ||
	    (ver >> 32) == __SERRANO_ID) {
		cheetah_error_table = &__jalapeno_error_table[0];
		cheetah_afsr_errors = JPAFSR_ERRORS;
	} else if ((ver >> 32) == 0x003e0015) {
		cheetah_error_table = &__cheetah_plus_error_table[0];
		cheetah_afsr_errors = CHPAFSR_ERRORS;
	} else {
		cheetah_error_table = &__cheetah_error_table[0];
		cheetah_afsr_errors = CHAFSR_ERRORS;
	}

	/* Now patch trap tables. */
	memcpy(tl0_fecc, cheetah_fecc_trap_vector, (8 * 4));
	memcpy(tl1_fecc, cheetah_fecc_trap_vector_tl1, (8 * 4));
	memcpy(tl0_cee, cheetah_cee_trap_vector, (8 * 4));
	memcpy(tl1_cee, cheetah_cee_trap_vector_tl1, (8 * 4));
	memcpy(tl0_iae, cheetah_deferred_trap_vector, (8 * 4));
	memcpy(tl1_iae, cheetah_deferred_trap_vector_tl1, (8 * 4));
	memcpy(tl0_dae, cheetah_deferred_trap_vector, (8 * 4));
	memcpy(tl1_dae, cheetah_deferred_trap_vector_tl1, (8 * 4));
	if (tlb_type == cheetah_plus) {
		memcpy(tl0_dcpe, cheetah_plus_dcpe_trap_vector, (8 * 4));
		memcpy(tl1_dcpe, cheetah_plus_dcpe_trap_vector_tl1, (8 * 4));
		memcpy(tl0_icpe, cheetah_plus_icpe_trap_vector, (8 * 4));
		memcpy(tl1_icpe, cheetah_plus_icpe_trap_vector_tl1, (8 * 4));
	}
	flushi(PAGE_OFFSET);
}

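/* Displacement-flush the entire E-cache by reading a contiguous
 * physical span twice the size of the largest E-cache.
 */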
static void cheetah_flush_ecache(void)
{
	unsigned long flush_base = ecache_flush_physbase;
	unsigned long flush_linesize = ecache_flush_linesize;
	unsigned long flush_size = ecache_flush_size;

	__asm__ __volatile__("1: subcc %0, %4, %0\n\t"
			     "   bne,pt %%xcc, 1b\n\t"
			     "   ldxa [%2 + %0] %3, %%g0\n\t"
			     : "=&r" (flush_size)
			     : "0" (flush_size), "r" (flush_base),
			       "i" (ASI_PHYS_USE_EC), "r" (flush_linesize));
}

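/* Flush a single E-cache line by reading the two flush-span addresses
 * that map to the same cache index as @physaddr, displacing whatever
 * was cached there.
 */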
static void cheetah_flush_ecache_line(unsigned long physaddr)
{
	unsigned long alias;

	physaddr &= ~(8UL - 1UL);
	physaddr = (ecache_flush_physbase +
		    (physaddr & ((ecache_flush_size>>1UL) - 1UL)));
	alias = physaddr + (ecache_flush_size >> 1UL);
	__asm__ __volatile__("ldxa [%0] %2, %%g0\n\t"
			     "ldxa [%1] %2, %%g0\n\t"
			     "membar #Sync"
			     : /* no outputs */
			     : "r" (physaddr), "r" (alias),
			       "i" (ASI_PHYS_USE_EC));
}

/* Unfortunately, the diagnostic access to the I-cache tags we need to
 * use to clear the thing interferes with I-cache coherency transactions.
 *
 * So we must only flush the I-cache when it is disabled.
 */
static void __cheetah_flush_icache(void)
{
	unsigned int icache_size, icache_line_size;
	unsigned long addr;

	icache_size = local_cpu_data().icache_size;
	icache_line_size = local_cpu_data().icache_line_size;

	/* Clear the valid bits in all the tags. */
	for (addr = 0; addr < icache_size; addr += icache_line_size) {
		__asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
				     "membar #Sync"
				     : /* no outputs */
				     : "r" (addr | (2 << 3)),
				       "i" (ASI_IC_TAG));
	}
}

static void cheetah_flush_icache(void)
{
	unsigned long dcu_save;

	/* Save current DCU, disable I-cache. */
	__asm__ __volatile__("ldxa [%%g0] %1, %0\n\t"
			     "or %0, %2, %%g1\n\t"
			     "stxa %%g1, [%%g0] %1\n\t"
			     "membar #Sync"
			     : "=r" (dcu_save)
			     : "i" (ASI_DCU_CONTROL_REG), "i" (DCU_IC)
			     : "g1");

	__cheetah_flush_icache();

	/* Restore DCU register */
	__asm__ __volatile__("stxa %0, [%%g0] %1\n\t"
			     "membar #Sync"
			     : /* no outputs */
			     : "r" (dcu_save), "i" (ASI_DCU_CONTROL_REG));
}

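/* Invalidate the local D-cache by writing a zero tag to every line
 * via ASI_DCACHE_TAG.
 */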
static void cheetah_flush_dcache(void)
{
	unsigned int dcache_size, dcache_line_size;
	unsigned long addr;

	dcache_size = local_cpu_data().dcache_size;
	dcache_line_size = local_cpu_data().dcache_line_size;

	for (addr = 0; addr < dcache_size; addr += dcache_line_size) {
		__asm__ __volatile__("stxa %%g0, [%0] %1\n\t"
				     "membar #Sync"
				     : /* no outputs */
				     : "r" (addr), "i" (ASI_DCACHE_TAG));
	}
}

/* In order to make the even parity correct we must do two things.
 * First, we clear DC_data_parity and set DC_utag to an appropriate value.
 * Next, we clear out all 32-bytes of data for that line.  Data of
 * all-zero + tag parity value of zero == correct parity.
 */
static void cheetah_plus_zap_dcache_parity(void)
{
	unsigned int dcache_size, dcache_line_size;
	unsigned long addr;

	dcache_size = local_cpu_data().dcache_size;
	dcache_line_size = local_cpu_data().dcache_line_size;

	for (addr = 0; addr < dcache_size; addr += dcache_line_size) {
		unsigned long tag = (addr >> 14);
		unsigned long line;

		__asm__ __volatile__("membar #Sync\n\t"
				     "stxa %0, [%1] %2\n\t"
				     "membar #Sync"
				     : /* no outputs */
				     : "r" (tag), "r" (addr),
				       "i" (ASI_DCACHE_UTAG));
		for (line = addr; line < addr + dcache_line_size; line += 8)
			__asm__ __volatile__("membar #Sync\n\t"
					     "stxa %%g0, [%0] %1\n\t"
					     "membar #Sync"
					     : /* no outputs */
					     : "r" (line),
					       "i" (ASI_DCACHE_DATA));
	}
}

/* Conversion tables used to frob Cheetah AFSR syndrome values into
 * something palatable to the memory controller driver get_unumber
 * routine.
 */
#define MT0	137
#define MT1	138
#define MT2	139
#define NONE	254
#define MTC0	140
#define MTC1	141
#define MTC2	142
#define MTC3	143
#define C0	128
#define C1	129
#define C2	130
#define C3	131
#define C4	132
#define C5	133
#define C6	134
#define C7	135
#define C8	136
#define M2	144
#define M3	145
#define M4	146
#define M	147
static unsigned char cheetah_ecc_syntab[] = {
/*00*/NONE, C0, C1, M2, C2, M2, M3, 47, C3, M2, M2, 53, M2, 41, 29, M,
/*01*/C4, M, M, 50, M2, 38, 25, M2, M2, 33, 24, M2, 11, M, M2, 16,
/*02*/C5, M, M, 46, M2, 37, 19, M2, M, 31, 32, M, 7, M2, M2, 10,
/*03*/M2, 40, 13, M2, 59, M, M2, 66, M, M2, M2, 0, M2, 67, 71, M,
/*04*/C6, M, M, 43, M, 36, 18, M, M2, 49, 15, M, 63, M2, M2, 6,
/*05*/M2, 44, 28, M2, M, M2, M2, 52, 68, M2, M2, 62, M2, M3, M3, M4,
/*06*/M2, 26, 106, M2, 64, M, M2, 2, 120, M, M2, M3, M, M3, M3, M4,
/*07*/116, M2, M2, M3, M2, M3, M, M4, M2, 58, 54, M2, M, M4, M4, M3,
/*08*/C7, M2, M, 42, M, 35, 17, M2, M, 45, 14, M2, 21, M2, M2, 5,
/*09*/M, 27, M, M, 99, M, M, 3, 114, M2, M2, 20, M2, M3, M3, M,
/*0a*/M2, 23, 113, M2, 112, M2, M, 51, 95, M, M2, M3, M2, M3, M3, M2,
/*0b*/103, M, M2, M3, M2, M3, M3, M4, M2, 48, M, M, 73, M2, M, M3,
/*0c*/M2, 22, 110, M2, 109, M2, M, 9, 108, M2, M, M3, M2, M3, M3, M,
/*0d*/102, M2, M, M, M2, M3, M3, M, M2, M3, M3, M2, M, M4, M, M3,
/*0e*/98, M, M2, M3, M2, M, M3, M4, M2, M3, M3, M4, M3, M, M, M,
/*0f*/M2, M3, M3, M, M3, M, M, M, 56, M4, M, M3, M4, M, M, M,
/*10*/C8, M, M2, 39, M, 34, 105, M2, M, 30, 104, M, 101, M, M, 4,
/*11*/M, M, 100, M, 83, M, M2, 12, 87, M, M, 57, M2, M, M3, M,
/*12*/M2, 97, 82, M2, 78, M2, M2, 1, 96, M, M, M, M, M, M3, M2,
/*13*/94, M, M2, M3, M2, M, M3, M, M2, M, 79, M, 69, M, M4, M,
1063 /*14*/M2, 93, 92, M, 91, M, M2, 8, 90, M2, M2, M, M, M, M, M4, 1063 /*14*/M2, 93, 92, M, 91, M, M2, 8, 90, M2, M2, M, M, M, M, M4,
1064 /*15*/89, M, M, M3, M2, M3, M3, M, M, M, M3, M2, M3, M2, M, M3, 1064 /*15*/89, M, M, M3, M2, M3, M3, M, M, M, M3, M2, M3, M2, M, M3,
1065 /*16*/86, M, M2, M3, M2, M, M3, M, M2, M, M3, M, M3, M, M, M3, 1065 /*16*/86, M, M2, M3, M2, M, M3, M, M2, M, M3, M, M3, M, M, M3,
1066 /*17*/M, M, M3, M2, M3, M2, M4, M, 60, M, M2, M3, M4, M, M, M2, 1066 /*17*/M, M, M3, M2, M3, M2, M4, M, 60, M, M2, M3, M4, M, M, M2,
1067 /*18*/M2, 88, 85, M2, 84, M, M2, 55, 81, M2, M2, M3, M2, M3, M3, M4, 1067 /*18*/M2, 88, 85, M2, 84, M, M2, 55, 81, M2, M2, M3, M2, M3, M3, M4,
1068 /*19*/77, M, M, M, M2, M3, M, M, M2, M3, M3, M4, M3, M2, M, M, 1068 /*19*/77, M, M, M, M2, M3, M, M, M2, M3, M3, M4, M3, M2, M, M,
1069 /*1a*/74, M, M2, M3, M, M, M3, M, M, M, M3, M, M3, M, M4, M3, 1069 /*1a*/74, M, M2, M3, M, M, M3, M, M, M, M3, M, M3, M, M4, M3,
1070 /*1b*/M2, 70, 107, M4, 65, M2, M2, M, 127, M, M, M, M2, M3, M3, M, 1070 /*1b*/M2, 70, 107, M4, 65, M2, M2, M, 127, M, M, M, M2, M3, M3, M,
1071 /*1c*/80, M2, M2, 72, M, 119, 118, M, M2, 126, 76, M, 125, M, M4, M3, 1071 /*1c*/80, M2, M2, 72, M, 119, 118, M, M2, 126, 76, M, 125, M, M4, M3,
1072 /*1d*/M2, 115, 124, M, 75, M, M, M3, 61, M, M4, M, M4, M, M, M, 1072 /*1d*/M2, 115, 124, M, 75, M, M, M3, 61, M, M4, M, M4, M, M, M,
1073 /*1e*/M, 123, 122, M4, 121, M4, M, M3, 117, M2, M2, M3, M4, M3, M, M, 1073 /*1e*/M, 123, 122, M4, 121, M4, M, M3, 117, M2, M2, M3, M4, M3, M, M,
1074 /*1f*/111, M, M, M, M4, M3, M3, M, M, M, M3, M, M3, M2, M, M 1074 /*1f*/111, M, M, M, M4, M3, M3, M, M, M, M3, M, M3, M2, M, M
1075 }; 1075 };
1076 static unsigned char cheetah_mtag_syntab[] = { 1076 static unsigned char cheetah_mtag_syntab[] = {
1077 NONE, MTC0, 1077 NONE, MTC0,
1078 MTC1, NONE, 1078 MTC1, NONE,
1079 MTC2, NONE, 1079 MTC2, NONE,
1080 NONE, MT0, 1080 NONE, MT0,
1081 MTC3, NONE, 1081 MTC3, NONE,
1082 NONE, MT1, 1082 NONE, MT1,
1083 NONE, MT2, 1083 NONE, MT2,
1084 NONE, NONE 1084 NONE, NONE
1085 }; 1085 };
1086 1086
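The syntab values fall into disjoint ranges: 0-127 name the failing data bit, 128-136 (C0-C8) a failing check bit, 137-143 the mtag codes, 144-147 (M2-M4, M) multi-bit errors, and 254 (NONE) no error. A sketch of how a consumer such as the get_unumber path can classify a translated syndrome, reusing the sentinel values defined above:

#include <stdio.h>

/* ranges taken from the #defines above the tables */
static const char *classify_syndrome(unsigned char v)
{
	if (v == 254)                 /* NONE */
		return "no error";
	if (v >= 144)                 /* M2, M3, M4, M */
		return "multi-bit error";
	if (v >= 137)                 /* MT0-MT2, MTC0-MTC3 */
		return "mtag error";
	if (v >= 128)                 /* C0-C8 */
		return "check-bit error";
	return "single data-bit error"; /* 0-127: the failing bit number */
}

int main(void)
{
	printf("%s\n", classify_syndrome(47));  /* data bit 47 */
	printf("%s\n", classify_syndrome(131)); /* C3 */
	return 0;
}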
1087 /* Return the highest priority error condition mentioned. */ 1087 /* Return the highest priority error condition mentioned. */
1088 static __inline__ unsigned long cheetah_get_hipri(unsigned long afsr) 1088 static __inline__ unsigned long cheetah_get_hipri(unsigned long afsr)
1089 { 1089 {
1090 unsigned long tmp = 0; 1090 unsigned long tmp = 0;
1091 int i; 1091 int i;
1092 1092
1093 for (i = 0; cheetah_error_table[i].mask; i++) { 1093 for (i = 0; cheetah_error_table[i].mask; i++) {
1094 if ((tmp = (afsr & cheetah_error_table[i].mask)) != 0UL) 1094 if ((tmp = (afsr & cheetah_error_table[i].mask)) != 0UL)
1095 return tmp; 1095 return tmp;
1096 } 1096 }
1097 return tmp; 1097 return tmp;
1098 } 1098 }
1099 1099
1100 static const char *cheetah_get_string(unsigned long bit) 1100 static const char *cheetah_get_string(unsigned long bit)
1101 { 1101 {
1102 int i; 1102 int i;
1103 1103
1104 for (i = 0; cheetah_error_table[i].mask; i++) { 1104 for (i = 0; cheetah_error_table[i].mask; i++) {
1105 if ((bit & cheetah_error_table[i].mask) != 0UL) 1105 if ((bit & cheetah_error_table[i].mask) != 0UL)
1106 return cheetah_error_table[i].name; 1106 return cheetah_error_table[i].name;
1107 } 1107 }
1108 return "???"; 1108 return "???";
1109 } 1109 }
1110 1110
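Both helpers scan cheetah_error_table (defined earlier in this file) in order, and because that table is sorted most-severe-first, the first matching mask is the highest-priority error. A cut-down sketch of the same scan with a hypothetical two-entry table:

#include <stdio.h>

/* hypothetical, priority-ordered stand-in for cheetah_error_table */
struct err_desc {
	unsigned long mask;
	const char *name;
};

static const struct err_desc table[] = {
	{ 0x100UL, "fatal-bus-error" },  /* highest priority first */
	{ 0x002UL, "correctable-ecc" },
	{ 0x0UL,   0 },                  /* mask == 0 terminates the scan */
};

static unsigned long get_hipri(unsigned long afsr)
{
	int i;

	for (i = 0; table[i].mask; i++)
		if (afsr & table[i].mask)
			return afsr & table[i].mask;
	return 0;
}

int main(void)
{
	/* with both bits set, the higher-priority one wins */
	printf("%lx\n", get_hipri(0x102UL)); /* prints 100 */
	return 0;
}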
1111 extern int chmc_getunumber(int, unsigned long, char *, int); 1111 extern int chmc_getunumber(int, unsigned long, char *, int);
1112 1112
1113 static void cheetah_log_errors(struct pt_regs *regs, struct cheetah_err_info *info, 1113 static void cheetah_log_errors(struct pt_regs *regs, struct cheetah_err_info *info,
1114 unsigned long afsr, unsigned long afar, int recoverable) 1114 unsigned long afsr, unsigned long afar, int recoverable)
1115 { 1115 {
1116 unsigned long hipri; 1116 unsigned long hipri;
1117 char unum[256]; 1117 char unum[256];
1118 1118
1119 printk("%s" "ERROR(%d): Cheetah error trap taken afsr[%016lx] afar[%016lx] TL1(%d)\n", 1119 printk("%s" "ERROR(%d): Cheetah error trap taken afsr[%016lx] afar[%016lx] TL1(%d)\n",
1120 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id(), 1120 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id(),
1121 afsr, afar, 1121 afsr, afar,
1122 (afsr & CHAFSR_TL1) ? 1 : 0); 1122 (afsr & CHAFSR_TL1) ? 1 : 0);
1123 printk("%s" "ERROR(%d): TPC[%lx] TNPC[%lx] O7[%lx] TSTATE[%lx]\n", 1123 printk("%s" "ERROR(%d): TPC[%lx] TNPC[%lx] O7[%lx] TSTATE[%lx]\n",
1124 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id(), 1124 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id(),
1125 regs->tpc, regs->tnpc, regs->u_regs[UREG_I7], regs->tstate); 1125 regs->tpc, regs->tnpc, regs->u_regs[UREG_I7], regs->tstate);
1126 printk("%s" "ERROR(%d): ", 1126 printk("%s" "ERROR(%d): ",
1127 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id()); 1127 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id());
1128 print_symbol("TPC<%s>\n", regs->tpc); 1128 print_symbol("TPC<%s>\n", regs->tpc);
1129 printk("%s" "ERROR(%d): M_SYND(%lx), E_SYND(%lx)%s%s\n", 1129 printk("%s" "ERROR(%d): M_SYND(%lx), E_SYND(%lx)%s%s\n",
1130 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id(), 1130 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id(),
1131 (afsr & CHAFSR_M_SYNDROME) >> CHAFSR_M_SYNDROME_SHIFT, 1131 (afsr & CHAFSR_M_SYNDROME) >> CHAFSR_M_SYNDROME_SHIFT,
1132 (afsr & CHAFSR_E_SYNDROME) >> CHAFSR_E_SYNDROME_SHIFT, 1132 (afsr & CHAFSR_E_SYNDROME) >> CHAFSR_E_SYNDROME_SHIFT,
1133 (afsr & CHAFSR_ME) ? ", Multiple Errors" : "", 1133 (afsr & CHAFSR_ME) ? ", Multiple Errors" : "",
1134 (afsr & CHAFSR_PRIV) ? ", Privileged" : ""); 1134 (afsr & CHAFSR_PRIV) ? ", Privileged" : "");
1135 hipri = cheetah_get_hipri(afsr); 1135 hipri = cheetah_get_hipri(afsr);
1136 printk("%s" "ERROR(%d): Highest priority error (%016lx) \"%s\"\n", 1136 printk("%s" "ERROR(%d): Highest priority error (%016lx) \"%s\"\n",
1137 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id(), 1137 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id(),
1138 hipri, cheetah_get_string(hipri)); 1138 hipri, cheetah_get_string(hipri));
1139 1139
1140 /* Try to get unumber if relevant. */ 1140 /* Try to get unumber if relevant. */
1141 #define ESYND_ERRORS (CHAFSR_IVC | CHAFSR_IVU | \ 1141 #define ESYND_ERRORS (CHAFSR_IVC | CHAFSR_IVU | \
1142 CHAFSR_CPC | CHAFSR_CPU | \ 1142 CHAFSR_CPC | CHAFSR_CPU | \
1143 CHAFSR_UE | CHAFSR_CE | \ 1143 CHAFSR_UE | CHAFSR_CE | \
1144 CHAFSR_EDC | CHAFSR_EDU | \ 1144 CHAFSR_EDC | CHAFSR_EDU | \
1145 CHAFSR_UCC | CHAFSR_UCU | \ 1145 CHAFSR_UCC | CHAFSR_UCU | \
1146 CHAFSR_WDU | CHAFSR_WDC) 1146 CHAFSR_WDU | CHAFSR_WDC)
1147 #define MSYND_ERRORS (CHAFSR_EMC | CHAFSR_EMU) 1147 #define MSYND_ERRORS (CHAFSR_EMC | CHAFSR_EMU)
1148 if (afsr & ESYND_ERRORS) { 1148 if (afsr & ESYND_ERRORS) {
1149 int syndrome; 1149 int syndrome;
1150 int ret; 1150 int ret;
1151 1151
1152 syndrome = (afsr & CHAFSR_E_SYNDROME) >> CHAFSR_E_SYNDROME_SHIFT; 1152 syndrome = (afsr & CHAFSR_E_SYNDROME) >> CHAFSR_E_SYNDROME_SHIFT;
1153 syndrome = cheetah_ecc_syntab[syndrome]; 1153 syndrome = cheetah_ecc_syntab[syndrome];
1154 ret = chmc_getunumber(syndrome, afar, unum, sizeof(unum)); 1154 ret = chmc_getunumber(syndrome, afar, unum, sizeof(unum));
1155 if (ret != -1) 1155 if (ret != -1)
1156 printk("%s" "ERROR(%d): AFAR E-syndrome [%s]\n", 1156 printk("%s" "ERROR(%d): AFAR E-syndrome [%s]\n",
1157 (recoverable ? KERN_WARNING : KERN_CRIT), 1157 (recoverable ? KERN_WARNING : KERN_CRIT),
1158 smp_processor_id(), unum); 1158 smp_processor_id(), unum);
1159 } else if (afsr & MSYND_ERRORS) { 1159 } else if (afsr & MSYND_ERRORS) {
1160 int syndrome; 1160 int syndrome;
1161 int ret; 1161 int ret;
1162 1162
1163 syndrome = (afsr & CHAFSR_M_SYNDROME) >> CHAFSR_M_SYNDROME_SHIFT; 1163 syndrome = (afsr & CHAFSR_M_SYNDROME) >> CHAFSR_M_SYNDROME_SHIFT;
1164 syndrome = cheetah_mtag_syntab[syndrome]; 1164 syndrome = cheetah_mtag_syntab[syndrome];
1165 ret = chmc_getunumber(syndrome, afar, unum, sizeof(unum)); 1165 ret = chmc_getunumber(syndrome, afar, unum, sizeof(unum));
1166 if (ret != -1) 1166 if (ret != -1)
1167 printk("%s" "ERROR(%d): AFAR M-syndrome [%s]\n", 1167 printk("%s" "ERROR(%d): AFAR M-syndrome [%s]\n",
1168 (recoverable ? KERN_WARNING : KERN_CRIT), 1168 (recoverable ? KERN_WARNING : KERN_CRIT),
1169 smp_processor_id(), unum); 1169 smp_processor_id(), unum);
1170 } 1170 }
1171 1171
1172 /* Now dump the cache snapshots. */ 1172 /* Now dump the cache snapshots. */
1173 printk("%s" "ERROR(%d): D-cache idx[%x] tag[%016lx] utag[%016lx] stag[%016lx]\n", 1173 printk("%s" "ERROR(%d): D-cache idx[%x] tag[%016lx] utag[%016lx] stag[%016lx]\n",
1174 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id(), 1174 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id(),
1175 (int) info->dcache_index, 1175 (int) info->dcache_index,
1176 info->dcache_tag, 1176 info->dcache_tag,
1177 info->dcache_utag, 1177 info->dcache_utag,
1178 info->dcache_stag); 1178 info->dcache_stag);
1179 printk("%s" "ERROR(%d): D-cache data0[%016lx] data1[%016lx] data2[%016lx] data3[%016lx]\n", 1179 printk("%s" "ERROR(%d): D-cache data0[%016lx] data1[%016lx] data2[%016lx] data3[%016lx]\n",
1180 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id(), 1180 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id(),
1181 info->dcache_data[0], 1181 info->dcache_data[0],
1182 info->dcache_data[1], 1182 info->dcache_data[1],
1183 info->dcache_data[2], 1183 info->dcache_data[2],
1184 info->dcache_data[3]); 1184 info->dcache_data[3]);
1185 printk("%s" "ERROR(%d): I-cache idx[%x] tag[%016lx] utag[%016lx] stag[%016lx] " 1185 printk("%s" "ERROR(%d): I-cache idx[%x] tag[%016lx] utag[%016lx] stag[%016lx] "
1186 "u[%016lx] l[%016lx]\n", 1186 "u[%016lx] l[%016lx]\n",
1187 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id(), 1187 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id(),
1188 (int) info->icache_index, 1188 (int) info->icache_index,
1189 info->icache_tag, 1189 info->icache_tag,
1190 info->icache_utag, 1190 info->icache_utag,
1191 info->icache_stag, 1191 info->icache_stag,
1192 info->icache_upper, 1192 info->icache_upper,
1193 info->icache_lower); 1193 info->icache_lower);
1194 printk("%s" "ERROR(%d): I-cache INSN0[%016lx] INSN1[%016lx] INSN2[%016lx] INSN3[%016lx]\n", 1194 printk("%s" "ERROR(%d): I-cache INSN0[%016lx] INSN1[%016lx] INSN2[%016lx] INSN3[%016lx]\n",
1195 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id(), 1195 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id(),
1196 info->icache_data[0], 1196 info->icache_data[0],
1197 info->icache_data[1], 1197 info->icache_data[1],
1198 info->icache_data[2], 1198 info->icache_data[2],
1199 info->icache_data[3]); 1199 info->icache_data[3]);
1200 printk("%s" "ERROR(%d): I-cache INSN4[%016lx] INSN5[%016lx] INSN6[%016lx] INSN7[%016lx]\n", 1200 printk("%s" "ERROR(%d): I-cache INSN4[%016lx] INSN5[%016lx] INSN6[%016lx] INSN7[%016lx]\n",
1201 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id(), 1201 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id(),
1202 info->icache_data[4], 1202 info->icache_data[4],
1203 info->icache_data[5], 1203 info->icache_data[5],
1204 info->icache_data[6], 1204 info->icache_data[6],
1205 info->icache_data[7]); 1205 info->icache_data[7]);
1206 printk("%s" "ERROR(%d): E-cache idx[%x] tag[%016lx]\n", 1206 printk("%s" "ERROR(%d): E-cache idx[%x] tag[%016lx]\n",
1207 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id(), 1207 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id(),
1208 (int) info->ecache_index, info->ecache_tag); 1208 (int) info->ecache_index, info->ecache_tag);
1209 printk("%s" "ERROR(%d): E-cache data0[%016lx] data1[%016lx] data2[%016lx] data3[%016lx]\n", 1209 printk("%s" "ERROR(%d): E-cache data0[%016lx] data1[%016lx] data2[%016lx] data3[%016lx]\n",
1210 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id(), 1210 (recoverable ? KERN_WARNING : KERN_CRIT), smp_processor_id(),
1211 info->ecache_data[0], 1211 info->ecache_data[0],
1212 info->ecache_data[1], 1212 info->ecache_data[1],
1213 info->ecache_data[2], 1213 info->ecache_data[2],
1214 info->ecache_data[3]); 1214 info->ecache_data[3]);
1215 1215
1216 afsr = (afsr & ~hipri) & cheetah_afsr_errors; 1216 afsr = (afsr & ~hipri) & cheetah_afsr_errors;
1217 while (afsr != 0UL) { 1217 while (afsr != 0UL) {
1218 unsigned long bit = cheetah_get_hipri(afsr); 1218 unsigned long bit = cheetah_get_hipri(afsr);
1219 1219
1220 printk("%s" "ERROR: Multiple-error (%016lx) \"%s\"\n", 1220 printk("%s" "ERROR: Multiple-error (%016lx) \"%s\"\n",
1221 (recoverable ? KERN_WARNING : KERN_CRIT), 1221 (recoverable ? KERN_WARNING : KERN_CRIT),
1222 bit, cheetah_get_string(bit)); 1222 bit, cheetah_get_string(bit));
1223 1223
1224 afsr &= ~bit; 1224 afsr &= ~bit;
1225 } 1225 }
1226 1226
1227 if (!recoverable) 1227 if (!recoverable)
1228 printk(KERN_CRIT "ERROR: This condition is not recoverable.\n"); 1228 printk(KERN_CRIT "ERROR: This condition is not recoverable.\n");
1229 } 1229 }
1230 1230
1231 static int cheetah_recheck_errors(struct cheetah_err_info *logp) 1231 static int cheetah_recheck_errors(struct cheetah_err_info *logp)
1232 { 1232 {
1233 unsigned long afsr, afar; 1233 unsigned long afsr, afar;
1234 int ret = 0; 1234 int ret = 0;
1235 1235
1236 __asm__ __volatile__("ldxa [%%g0] %1, %0\n\t" 1236 __asm__ __volatile__("ldxa [%%g0] %1, %0\n\t"
1237 : "=r" (afsr) 1237 : "=r" (afsr)
1238 : "i" (ASI_AFSR)); 1238 : "i" (ASI_AFSR));
1239 if ((afsr & cheetah_afsr_errors) != 0) { 1239 if ((afsr & cheetah_afsr_errors) != 0) {
1240 if (logp != NULL) { 1240 if (logp != NULL) {
1241 __asm__ __volatile__("ldxa [%%g0] %1, %0\n\t" 1241 __asm__ __volatile__("ldxa [%%g0] %1, %0\n\t"
1242 : "=r" (afar) 1242 : "=r" (afar)
1243 : "i" (ASI_AFAR)); 1243 : "i" (ASI_AFAR));
1244 logp->afsr = afsr; 1244 logp->afsr = afsr;
1245 logp->afar = afar; 1245 logp->afar = afar;
1246 } 1246 }
1247 ret = 1; 1247 ret = 1;
1248 } 1248 }
1249 __asm__ __volatile__("stxa %0, [%%g0] %1\n\t" 1249 __asm__ __volatile__("stxa %0, [%%g0] %1\n\t"
1250 "membar #Sync\n\t" 1250 "membar #Sync\n\t"
1251 : : "r" (afsr), "i" (ASI_AFSR)); 1251 : : "r" (afsr), "i" (ASI_AFSR));
1252 1252
1253 return ret; 1253 return ret;
1254 } 1254 }
1255 1255
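The recheck helper captures whatever the AFSR accumulated while traps were off, then clears it by writing the read value back: the AFSR error bits are write-one-to-clear. A sketch of that read/log/clear shape, with a plain variable standing in for the ldxa/stxa ASI accesses and an assumed error mask:

#include <stdint.h>
#include <stdbool.h>

static uint64_t fake_afsr = 0x42;   /* stands in for the ldxa from ASI_AFSR */
#define AFSR_ERROR_BITS 0xffUL      /* assumed error-bit mask */

static bool recheck_errors(uint64_t *logged)
{
	uint64_t afsr = fake_afsr;  /* read the accumulated status */
	bool seen = (afsr & AFSR_ERROR_BITS) != 0;

	if (seen && logged)
		*logged = afsr;     /* capture for the caller's log */
	fake_afsr &= ~afsr;         /* writing the value back clears it (W1C) */
	return seen;
}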
1256 void cheetah_fecc_handler(struct pt_regs *regs, unsigned long afsr, unsigned long afar) 1256 void cheetah_fecc_handler(struct pt_regs *regs, unsigned long afsr, unsigned long afar)
1257 { 1257 {
1258 struct cheetah_err_info local_snapshot, *p; 1258 struct cheetah_err_info local_snapshot, *p;
1259 int recoverable; 1259 int recoverable;
1260 1260
1261 /* Flush E-cache */ 1261 /* Flush E-cache */
1262 cheetah_flush_ecache(); 1262 cheetah_flush_ecache();
1263 1263
1264 p = cheetah_get_error_log(afsr); 1264 p = cheetah_get_error_log(afsr);
1265 if (!p) { 1265 if (!p) {
1266 prom_printf("ERROR: Early Fast-ECC error afsr[%016lx] afar[%016lx]\n", 1266 prom_printf("ERROR: Early Fast-ECC error afsr[%016lx] afar[%016lx]\n",
1267 afsr, afar); 1267 afsr, afar);
1268 prom_printf("ERROR: CPU(%d) TPC[%016lx] TNPC[%016lx] TSTATE[%016lx]\n", 1268 prom_printf("ERROR: CPU(%d) TPC[%016lx] TNPC[%016lx] TSTATE[%016lx]\n",
1269 smp_processor_id(), regs->tpc, regs->tnpc, regs->tstate); 1269 smp_processor_id(), regs->tpc, regs->tnpc, regs->tstate);
1270 prom_halt(); 1270 prom_halt();
1271 } 1271 }
1272 1272
1273 /* Grab snapshot of logged error. */ 1273 /* Grab snapshot of logged error. */
1274 memcpy(&local_snapshot, p, sizeof(local_snapshot)); 1274 memcpy(&local_snapshot, p, sizeof(local_snapshot));
1275 1275
1276 /* If the current trap snapshot does not match what the 1276 /* If the current trap snapshot does not match what the
1277 * trap handler passed along into our args, big trouble. 1277 * trap handler passed along into our args, big trouble.
1278 * In such a case, mark the local copy as invalid. 1278 * In such a case, mark the local copy as invalid.
1279 * 1279 *
1280 * Else, it matches and we mark the afsr in the non-local 1280 * Else, it matches and we mark the afsr in the non-local
1281 * copy as invalid so we may log new error traps there. 1281 * copy as invalid so we may log new error traps there.
1282 */ 1282 */
1283 if (p->afsr != afsr || p->afar != afar) 1283 if (p->afsr != afsr || p->afar != afar)
1284 local_snapshot.afsr = CHAFSR_INVALID; 1284 local_snapshot.afsr = CHAFSR_INVALID;
1285 else 1285 else
1286 p->afsr = CHAFSR_INVALID; 1286 p->afsr = CHAFSR_INVALID;
1287 1287
1288 cheetah_flush_icache(); 1288 cheetah_flush_icache();
1289 cheetah_flush_dcache(); 1289 cheetah_flush_dcache();
1290 1290
1291 /* Re-enable I-cache/D-cache */ 1291 /* Re-enable I-cache/D-cache */
1292 __asm__ __volatile__("ldxa [%%g0] %0, %%g1\n\t" 1292 __asm__ __volatile__("ldxa [%%g0] %0, %%g1\n\t"
1293 "or %%g1, %1, %%g1\n\t" 1293 "or %%g1, %1, %%g1\n\t"
1294 "stxa %%g1, [%%g0] %0\n\t" 1294 "stxa %%g1, [%%g0] %0\n\t"
1295 "membar #Sync" 1295 "membar #Sync"
1296 : /* no outputs */ 1296 : /* no outputs */
1297 : "i" (ASI_DCU_CONTROL_REG), 1297 : "i" (ASI_DCU_CONTROL_REG),
1298 "i" (DCU_DC | DCU_IC) 1298 "i" (DCU_DC | DCU_IC)
1299 : "g1"); 1299 : "g1");
1300 1300
1301 /* Re-enable error reporting */ 1301 /* Re-enable error reporting */
1302 __asm__ __volatile__("ldxa [%%g0] %0, %%g1\n\t" 1302 __asm__ __volatile__("ldxa [%%g0] %0, %%g1\n\t"
1303 "or %%g1, %1, %%g1\n\t" 1303 "or %%g1, %1, %%g1\n\t"
1304 "stxa %%g1, [%%g0] %0\n\t" 1304 "stxa %%g1, [%%g0] %0\n\t"
1305 "membar #Sync" 1305 "membar #Sync"
1306 : /* no outputs */ 1306 : /* no outputs */
1307 : "i" (ASI_ESTATE_ERROR_EN), 1307 : "i" (ASI_ESTATE_ERROR_EN),
1308 "i" (ESTATE_ERROR_NCEEN | ESTATE_ERROR_CEEN) 1308 "i" (ESTATE_ERROR_NCEEN | ESTATE_ERROR_CEEN)
1309 : "g1"); 1309 : "g1");
1310 1310
1311 /* Decide if we can continue after handling this trap and 1311 /* Decide if we can continue after handling this trap and
1312 * logging the error. 1312 * logging the error.
1313 */ 1313 */
1314 recoverable = 1; 1314 recoverable = 1;
1315 if (afsr & (CHAFSR_PERR | CHAFSR_IERR | CHAFSR_ISAP)) 1315 if (afsr & (CHAFSR_PERR | CHAFSR_IERR | CHAFSR_ISAP))
1316 recoverable = 0; 1316 recoverable = 0;
1317 1317
1318 /* Re-check AFSR/AFAR. What we are looking for here is whether a new 1318 /* Re-check AFSR/AFAR. What we are looking for here is whether a new
1319 * error was logged while we had error reporting traps disabled. 1319 * error was logged while we had error reporting traps disabled.
1320 */ 1320 */
1321 if (cheetah_recheck_errors(&local_snapshot)) { 1321 if (cheetah_recheck_errors(&local_snapshot)) {
1322 unsigned long new_afsr = local_snapshot.afsr; 1322 unsigned long new_afsr = local_snapshot.afsr;
1323 1323
1324 /* If we got a new asynchronous error, die... */ 1324 /* If we got a new asynchronous error, die... */
1325 if (new_afsr & (CHAFSR_EMU | CHAFSR_EDU | 1325 if (new_afsr & (CHAFSR_EMU | CHAFSR_EDU |
1326 CHAFSR_WDU | CHAFSR_CPU | 1326 CHAFSR_WDU | CHAFSR_CPU |
1327 CHAFSR_IVU | CHAFSR_UE | 1327 CHAFSR_IVU | CHAFSR_UE |
1328 CHAFSR_BERR | CHAFSR_TO)) 1328 CHAFSR_BERR | CHAFSR_TO))
1329 recoverable = 0; 1329 recoverable = 0;
1330 } 1330 }
1331 1331
1332 /* Log errors. */ 1332 /* Log errors. */
1333 cheetah_log_errors(regs, &local_snapshot, afsr, afar, recoverable); 1333 cheetah_log_errors(regs, &local_snapshot, afsr, afar, recoverable);
1334 1334
1335 if (!recoverable) 1335 if (!recoverable)
1336 panic("Irrecoverable Fast-ECC error trap.\n"); 1336 panic("Irrecoverable Fast-ECC error trap.\n");
1337 1337
1338 /* Flush E-cache to kick the error trap handlers out. */ 1338 /* Flush E-cache to kick the error trap handlers out. */
1339 cheetah_flush_ecache(); 1339 cheetah_flush_ecache();
1340 } 1340 }
1341 1341
1342 /* Try to fix a correctable error by pushing the line out from 1342 /* Try to fix a correctable error by pushing the line out from
1343 * the E-cache. Recheck error reporting registers to see if the 1343 * the E-cache. Recheck error reporting registers to see if the
1344 * problem is intermittent. 1344 * problem is intermittent.
1345 */ 1345 */
1346 static int cheetah_fix_ce(unsigned long physaddr) 1346 static int cheetah_fix_ce(unsigned long physaddr)
1347 { 1347 {
1348 unsigned long orig_estate; 1348 unsigned long orig_estate;
1349 unsigned long alias1, alias2; 1349 unsigned long alias1, alias2;
1350 int ret; 1350 int ret;
1351 1351
1352 /* Make sure correctable error traps are disabled. */ 1352 /* Make sure correctable error traps are disabled. */
1353 __asm__ __volatile__("ldxa [%%g0] %2, %0\n\t" 1353 __asm__ __volatile__("ldxa [%%g0] %2, %0\n\t"
1354 "andn %0, %1, %%g1\n\t" 1354 "andn %0, %1, %%g1\n\t"
1355 "stxa %%g1, [%%g0] %2\n\t" 1355 "stxa %%g1, [%%g0] %2\n\t"
1356 "membar #Sync" 1356 "membar #Sync"
1357 : "=&r" (orig_estate) 1357 : "=&r" (orig_estate)
1358 : "i" (ESTATE_ERROR_CEEN), 1358 : "i" (ESTATE_ERROR_CEEN),
1359 "i" (ASI_ESTATE_ERROR_EN) 1359 "i" (ASI_ESTATE_ERROR_EN)
1360 : "g1"); 1360 : "g1");
1361 1361
1362 /* We calculate alias addresses that will force the 1362 /* We calculate alias addresses that will force the
1363 * cache line in question out of the E-cache. Then 1363 * cache line in question out of the E-cache. Then
1364 * we bring it back in with an atomic instruction so 1364 * we bring it back in with an atomic instruction so
1365 * that we get it in some modified/exclusive state, 1365 * that we get it in some modified/exclusive state,
1366 * then we displace it again to try and get proper ECC 1366 * then we displace it again to try and get proper ECC
1367 * pushed back into the system. 1367 * pushed back into the system.
1368 */ 1368 */
1369 physaddr &= ~(8UL - 1UL); 1369 physaddr &= ~(8UL - 1UL);
1370 alias1 = (ecache_flush_physbase + 1370 alias1 = (ecache_flush_physbase +
1371 (physaddr & ((ecache_flush_size >> 1) - 1))); 1371 (physaddr & ((ecache_flush_size >> 1) - 1)));
1372 alias2 = alias1 + (ecache_flush_size >> 1); 1372 alias2 = alias1 + (ecache_flush_size >> 1);
1373 __asm__ __volatile__("ldxa [%0] %3, %%g0\n\t" 1373 __asm__ __volatile__("ldxa [%0] %3, %%g0\n\t"
1374 "ldxa [%1] %3, %%g0\n\t" 1374 "ldxa [%1] %3, %%g0\n\t"
1375 "casxa [%2] %3, %%g0, %%g0\n\t" 1375 "casxa [%2] %3, %%g0, %%g0\n\t"
1376 "membar #StoreLoad | #StoreStore\n\t" 1376 "membar #StoreLoad | #StoreStore\n\t"
1377 "ldxa [%0] %3, %%g0\n\t" 1377 "ldxa [%0] %3, %%g0\n\t"
1378 "ldxa [%1] %3, %%g0\n\t" 1378 "ldxa [%1] %3, %%g0\n\t"
1379 "membar #Sync" 1379 "membar #Sync"
1380 : /* no outputs */ 1380 : /* no outputs */
1381 : "r" (alias1), "r" (alias2), 1381 : "r" (alias1), "r" (alias2),
1382 "r" (physaddr), "i" (ASI_PHYS_USE_EC)); 1382 "r" (physaddr), "i" (ASI_PHYS_USE_EC));
1383 1383
1384 /* Did that trigger another error? */ 1384 /* Did that trigger another error? */
1385 if (cheetah_recheck_errors(NULL)) { 1385 if (cheetah_recheck_errors(NULL)) {
1386 /* Try one more time. */ 1386 /* Try one more time. */
1387 __asm__ __volatile__("ldxa [%0] %1, %%g0\n\t" 1387 __asm__ __volatile__("ldxa [%0] %1, %%g0\n\t"
1388 "membar #Sync" 1388 "membar #Sync"
1389 : : "r" (physaddr), "i" (ASI_PHYS_USE_EC)); 1389 : : "r" (physaddr), "i" (ASI_PHYS_USE_EC));
1390 if (cheetah_recheck_errors(NULL)) 1390 if (cheetah_recheck_errors(NULL))
1391 ret = 2; 1391 ret = 2;
1392 else 1392 else
1393 ret = 1; 1393 ret = 1;
1394 } else { 1394 } else {
1395 /* No new error, intermittent problem. */ 1395 /* No new error, intermittent problem. */
1396 ret = 0; 1396 ret = 0;
1397 } 1397 }
1398 1398
1399 /* Restore error enables. */ 1399 /* Restore error enables. */
1400 __asm__ __volatile__("stxa %0, [%%g0] %1\n\t" 1400 __asm__ __volatile__("stxa %0, [%%g0] %1\n\t"
1401 "membar #Sync" 1401 "membar #Sync"
1402 : : "r" (orig_estate), "i" (ASI_ESTATE_ERROR_EN)); 1402 : : "r" (orig_estate), "i" (ASI_ESTATE_ERROR_EN));
1403 1403
1404 return ret; 1404 return ret;
1405 } 1405 }
1406 1406
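The displacement-flush arithmetic depends only on ecache_flush_physbase and ecache_flush_size (both set up elsewhere in this file): two loads spaced half the flush area apart index the same E-cache set, so touching both evicts the victim line before the casxa pulls it back in dirty. A sketch of just that alias computation, using assumed placeholder values for the two globals:

#include <stdio.h>

/* placeholder values; the real ones are probed at boot elsewhere */
static unsigned long ecache_flush_physbase = 0x40000000UL;  /* assumed */
static unsigned long ecache_flush_size = 4UL * 1024 * 1024; /* assumed 4MB */

static void compute_aliases(unsigned long physaddr,
                            unsigned long *alias1, unsigned long *alias2)
{
	physaddr &= ~7UL;  /* 8-byte align, as cheetah_fix_ce() does */
	/* keep only the set-index bits within half the flush area... */
	*alias1 = ecache_flush_physbase +
	          (physaddr & ((ecache_flush_size >> 1) - 1));
	/* ...the address half a flush-size away maps to the same set */
	*alias2 = *alias1 + (ecache_flush_size >> 1);
}

int main(void)
{
	unsigned long a1, a2;

	compute_aliases(0x12345678UL, &a1, &a2);
	printf("%lx %lx\n", a1, a2);
	return 0;
}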
1407 /* Return non-zero if PADDR is a valid physical memory address. */ 1407 /* Return non-zero if PADDR is a valid physical memory address. */
1408 static int cheetah_check_main_memory(unsigned long paddr) 1408 static int cheetah_check_main_memory(unsigned long paddr)
1409 { 1409 {
1410 unsigned long vaddr = PAGE_OFFSET + paddr; 1410 unsigned long vaddr = PAGE_OFFSET + paddr;
1411 1411
1412 if (vaddr > (unsigned long) high_memory) 1412 if (vaddr > (unsigned long) high_memory)
1413 return 0; 1413 return 0;
1414 1414
1415 return kern_addr_valid(vaddr); 1415 return kern_addr_valid(vaddr);
1416 } 1416 }
1417 1417
1418 void cheetah_cee_handler(struct pt_regs *regs, unsigned long afsr, unsigned long afar) 1418 void cheetah_cee_handler(struct pt_regs *regs, unsigned long afsr, unsigned long afar)
1419 { 1419 {
1420 struct cheetah_err_info local_snapshot, *p; 1420 struct cheetah_err_info local_snapshot, *p;
1421 int recoverable, is_memory; 1421 int recoverable, is_memory;
1422 1422
1423 p = cheetah_get_error_log(afsr); 1423 p = cheetah_get_error_log(afsr);
1424 if (!p) { 1424 if (!p) {
1425 prom_printf("ERROR: Early CEE error afsr[%016lx] afar[%016lx]\n", 1425 prom_printf("ERROR: Early CEE error afsr[%016lx] afar[%016lx]\n",
1426 afsr, afar); 1426 afsr, afar);
1427 prom_printf("ERROR: CPU(%d) TPC[%016lx] TNPC[%016lx] TSTATE[%016lx]\n", 1427 prom_printf("ERROR: CPU(%d) TPC[%016lx] TNPC[%016lx] TSTATE[%016lx]\n",
1428 smp_processor_id(), regs->tpc, regs->tnpc, regs->tstate); 1428 smp_processor_id(), regs->tpc, regs->tnpc, regs->tstate);
1429 prom_halt(); 1429 prom_halt();
1430 } 1430 }
1431 1431
1432 /* Grab snapshot of logged error. */ 1432 /* Grab snapshot of logged error. */
1433 memcpy(&local_snapshot, p, sizeof(local_snapshot)); 1433 memcpy(&local_snapshot, p, sizeof(local_snapshot));
1434 1434
1435 /* If the current trap snapshot does not match what the 1435 /* If the current trap snapshot does not match what the
1436 * trap handler passed along into our args, big trouble. 1436 * trap handler passed along into our args, big trouble.
1437 * In such a case, mark the local copy as invalid. 1437 * In such a case, mark the local copy as invalid.
1438 * 1438 *
1439 * Else, it matches and we mark the afsr in the non-local 1439 * Else, it matches and we mark the afsr in the non-local
1440 * copy as invalid so we may log new error traps there. 1440 * copy as invalid so we may log new error traps there.
1441 */ 1441 */
1442 if (p->afsr != afsr || p->afar != afar) 1442 if (p->afsr != afsr || p->afar != afar)
1443 local_snapshot.afsr = CHAFSR_INVALID; 1443 local_snapshot.afsr = CHAFSR_INVALID;
1444 else 1444 else
1445 p->afsr = CHAFSR_INVALID; 1445 p->afsr = CHAFSR_INVALID;
1446 1446
1447 is_memory = cheetah_check_main_memory(afar); 1447 is_memory = cheetah_check_main_memory(afar);
1448 1448
1449 if (is_memory && (afsr & CHAFSR_CE) != 0UL) { 1449 if (is_memory && (afsr & CHAFSR_CE) != 0UL) {
1450 /* XXX Might want to log the results of this operation 1450 /* XXX Might want to log the results of this operation
1451 * XXX somewhere... -DaveM 1451 * XXX somewhere... -DaveM
1452 */ 1452 */
1453 cheetah_fix_ce(afar); 1453 cheetah_fix_ce(afar);
1454 } 1454 }
1455 1455
1456 { 1456 {
1457 int flush_all, flush_line; 1457 int flush_all, flush_line;
1458 1458
1459 flush_all = flush_line = 0; 1459 flush_all = flush_line = 0;
1460 if ((afsr & CHAFSR_EDC) != 0UL) { 1460 if ((afsr & CHAFSR_EDC) != 0UL) {
1461 if ((afsr & cheetah_afsr_errors) == CHAFSR_EDC) 1461 if ((afsr & cheetah_afsr_errors) == CHAFSR_EDC)
1462 flush_line = 1; 1462 flush_line = 1;
1463 else 1463 else
1464 flush_all = 1; 1464 flush_all = 1;
1465 } else if ((afsr & CHAFSR_CPC) != 0UL) { 1465 } else if ((afsr & CHAFSR_CPC) != 0UL) {
1466 if ((afsr & cheetah_afsr_errors) == CHAFSR_CPC) 1466 if ((afsr & cheetah_afsr_errors) == CHAFSR_CPC)
1467 flush_line = 1; 1467 flush_line = 1;
1468 else 1468 else
1469 flush_all = 1; 1469 flush_all = 1;
1470 } 1470 }
1471 1471
1472 /* Trap handler only disabled I-cache, flush it. */ 1472 /* Trap handler only disabled I-cache, flush it. */
1473 cheetah_flush_icache(); 1473 cheetah_flush_icache();
1474 1474
1475 /* Re-enable I-cache */ 1475 /* Re-enable I-cache */
1476 __asm__ __volatile__("ldxa [%%g0] %0, %%g1\n\t" 1476 __asm__ __volatile__("ldxa [%%g0] %0, %%g1\n\t"
1477 "or %%g1, %1, %%g1\n\t" 1477 "or %%g1, %1, %%g1\n\t"
1478 "stxa %%g1, [%%g0] %0\n\t" 1478 "stxa %%g1, [%%g0] %0\n\t"
1479 "membar #Sync" 1479 "membar #Sync"
1480 : /* no outputs */ 1480 : /* no outputs */
1481 : "i" (ASI_DCU_CONTROL_REG), 1481 : "i" (ASI_DCU_CONTROL_REG),
1482 "i" (DCU_IC) 1482 "i" (DCU_IC)
1483 : "g1"); 1483 : "g1");
1484 1484
1485 if (flush_all) 1485 if (flush_all)
1486 cheetah_flush_ecache(); 1486 cheetah_flush_ecache();
1487 else if (flush_line) 1487 else if (flush_line)
1488 cheetah_flush_ecache_line(afar); 1488 cheetah_flush_ecache_line(afar);
1489 } 1489 }
1490 1490
1491 /* Re-enable error reporting */ 1491 /* Re-enable error reporting */
1492 __asm__ __volatile__("ldxa [%%g0] %0, %%g1\n\t" 1492 __asm__ __volatile__("ldxa [%%g0] %0, %%g1\n\t"
1493 "or %%g1, %1, %%g1\n\t" 1493 "or %%g1, %1, %%g1\n\t"
1494 "stxa %%g1, [%%g0] %0\n\t" 1494 "stxa %%g1, [%%g0] %0\n\t"
1495 "membar #Sync" 1495 "membar #Sync"
1496 : /* no outputs */ 1496 : /* no outputs */
1497 : "i" (ASI_ESTATE_ERROR_EN), 1497 : "i" (ASI_ESTATE_ERROR_EN),
1498 "i" (ESTATE_ERROR_CEEN) 1498 "i" (ESTATE_ERROR_CEEN)
1499 : "g1"); 1499 : "g1");
1500 1500
1501 /* Decide if we can continue after handling this trap and 1501 /* Decide if we can continue after handling this trap and
1502 * logging the error. 1502 * logging the error.
1503 */ 1503 */
1504 recoverable = 1; 1504 recoverable = 1;
1505 if (afsr & (CHAFSR_PERR | CHAFSR_IERR | CHAFSR_ISAP)) 1505 if (afsr & (CHAFSR_PERR | CHAFSR_IERR | CHAFSR_ISAP))
1506 recoverable = 0; 1506 recoverable = 0;
1507 1507
1508 /* Re-check AFSR/AFAR */ 1508 /* Re-check AFSR/AFAR */
1509 (void) cheetah_recheck_errors(&local_snapshot); 1509 (void) cheetah_recheck_errors(&local_snapshot);
1510 1510
1511 /* Log errors. */ 1511 /* Log errors. */
1512 cheetah_log_errors(regs, &local_snapshot, afsr, afar, recoverable); 1512 cheetah_log_errors(regs, &local_snapshot, afsr, afar, recoverable);
1513 1513
1514 if (!recoverable) 1514 if (!recoverable)
1515 panic("Irrecoverable Correctable-ECC error trap.\n"); 1515 panic("Irrecoverable Correctable-ECC error trap.\n");
1516 } 1516 }
1517 1517
1518 void cheetah_deferred_handler(struct pt_regs *regs, unsigned long afsr, unsigned long afar) 1518 void cheetah_deferred_handler(struct pt_regs *regs, unsigned long afsr, unsigned long afar)
1519 { 1519 {
1520 struct cheetah_err_info local_snapshot, *p; 1520 struct cheetah_err_info local_snapshot, *p;
1521 int recoverable, is_memory; 1521 int recoverable, is_memory;
1522 1522
1523 #ifdef CONFIG_PCI 1523 #ifdef CONFIG_PCI
1524 /* Check for the special PCI poke sequence. */ 1524 /* Check for the special PCI poke sequence. */
1525 if (pci_poke_in_progress && pci_poke_cpu == smp_processor_id()) { 1525 if (pci_poke_in_progress && pci_poke_cpu == smp_processor_id()) {
1526 cheetah_flush_icache(); 1526 cheetah_flush_icache();
1527 cheetah_flush_dcache(); 1527 cheetah_flush_dcache();
1528 1528
1529 /* Re-enable I-cache/D-cache */ 1529 /* Re-enable I-cache/D-cache */
1530 __asm__ __volatile__("ldxa [%%g0] %0, %%g1\n\t" 1530 __asm__ __volatile__("ldxa [%%g0] %0, %%g1\n\t"
1531 "or %%g1, %1, %%g1\n\t" 1531 "or %%g1, %1, %%g1\n\t"
1532 "stxa %%g1, [%%g0] %0\n\t" 1532 "stxa %%g1, [%%g0] %0\n\t"
1533 "membar #Sync" 1533 "membar #Sync"
1534 : /* no outputs */ 1534 : /* no outputs */
1535 : "i" (ASI_DCU_CONTROL_REG), 1535 : "i" (ASI_DCU_CONTROL_REG),
1536 "i" (DCU_DC | DCU_IC) 1536 "i" (DCU_DC | DCU_IC)
1537 : "g1"); 1537 : "g1");
1538 1538
1539 /* Re-enable error reporting */ 1539 /* Re-enable error reporting */
1540 __asm__ __volatile__("ldxa [%%g0] %0, %%g1\n\t" 1540 __asm__ __volatile__("ldxa [%%g0] %0, %%g1\n\t"
1541 "or %%g1, %1, %%g1\n\t" 1541 "or %%g1, %1, %%g1\n\t"
1542 "stxa %%g1, [%%g0] %0\n\t" 1542 "stxa %%g1, [%%g0] %0\n\t"
1543 "membar #Sync" 1543 "membar #Sync"
1544 : /* no outputs */ 1544 : /* no outputs */
1545 : "i" (ASI_ESTATE_ERROR_EN), 1545 : "i" (ASI_ESTATE_ERROR_EN),
1546 "i" (ESTATE_ERROR_NCEEN | ESTATE_ERROR_CEEN) 1546 "i" (ESTATE_ERROR_NCEEN | ESTATE_ERROR_CEEN)
1547 : "g1"); 1547 : "g1");
1548 1548
1549 (void) cheetah_recheck_errors(NULL); 1549 (void) cheetah_recheck_errors(NULL);
1550 1550
1551 pci_poke_faulted = 1; 1551 pci_poke_faulted = 1;
1552 regs->tpc += 4; 1552 regs->tpc += 4;
1553 regs->tnpc = regs->tpc + 4; 1553 regs->tnpc = regs->tpc + 4;
1554 return; 1554 return;
1555 } 1555 }
1556 #endif 1556 #endif
1557 1557
1558 p = cheetah_get_error_log(afsr); 1558 p = cheetah_get_error_log(afsr);
1559 if (!p) { 1559 if (!p) {
1560 prom_printf("ERROR: Early deferred error afsr[%016lx] afar[%016lx]\n", 1560 prom_printf("ERROR: Early deferred error afsr[%016lx] afar[%016lx]\n",
1561 afsr, afar); 1561 afsr, afar);
1562 prom_printf("ERROR: CPU(%d) TPC[%016lx] TNPC[%016lx] TSTATE[%016lx]\n", 1562 prom_printf("ERROR: CPU(%d) TPC[%016lx] TNPC[%016lx] TSTATE[%016lx]\n",
1563 smp_processor_id(), regs->tpc, regs->tnpc, regs->tstate); 1563 smp_processor_id(), regs->tpc, regs->tnpc, regs->tstate);
1564 prom_halt(); 1564 prom_halt();
1565 } 1565 }
1566 1566
1567 /* Grab snapshot of logged error. */ 1567 /* Grab snapshot of logged error. */
1568 memcpy(&local_snapshot, p, sizeof(local_snapshot)); 1568 memcpy(&local_snapshot, p, sizeof(local_snapshot));
1569 1569
1570 /* If the current trap snapshot does not match what the 1570 /* If the current trap snapshot does not match what the
1571 * trap handler passed along into our args, big trouble. 1571 * trap handler passed along into our args, big trouble.
1572 * In such a case, mark the local copy as invalid. 1572 * In such a case, mark the local copy as invalid.
1573 * 1573 *
1574 * Else, it matches and we mark the afsr in the non-local 1574 * Else, it matches and we mark the afsr in the non-local
1575 * copy as invalid so we may log new error traps there. 1575 * copy as invalid so we may log new error traps there.
1576 */ 1576 */
1577 if (p->afsr != afsr || p->afar != afar) 1577 if (p->afsr != afsr || p->afar != afar)
1578 local_snapshot.afsr = CHAFSR_INVALID; 1578 local_snapshot.afsr = CHAFSR_INVALID;
1579 else 1579 else
1580 p->afsr = CHAFSR_INVALID; 1580 p->afsr = CHAFSR_INVALID;
1581 1581
1582 is_memory = cheetah_check_main_memory(afar); 1582 is_memory = cheetah_check_main_memory(afar);
1583 1583
1584 { 1584 {
1585 int flush_all, flush_line; 1585 int flush_all, flush_line;
1586 1586
1587 flush_all = flush_line = 0; 1587 flush_all = flush_line = 0;
1588 if ((afsr & CHAFSR_EDU) != 0UL) { 1588 if ((afsr & CHAFSR_EDU) != 0UL) {
1589 if ((afsr & cheetah_afsr_errors) == CHAFSR_EDU) 1589 if ((afsr & cheetah_afsr_errors) == CHAFSR_EDU)
1590 flush_line = 1; 1590 flush_line = 1;
1591 else 1591 else
1592 flush_all = 1; 1592 flush_all = 1;
1593 } else if ((afsr & CHAFSR_BERR) != 0UL) { 1593 } else if ((afsr & CHAFSR_BERR) != 0UL) {
1594 if ((afsr & cheetah_afsr_errors) == CHAFSR_BERR) 1594 if ((afsr & cheetah_afsr_errors) == CHAFSR_BERR)
1595 flush_line = 1; 1595 flush_line = 1;
1596 else 1596 else
1597 flush_all = 1; 1597 flush_all = 1;
1598 } 1598 }
1599 1599
1600 cheetah_flush_icache(); 1600 cheetah_flush_icache();
1601 cheetah_flush_dcache(); 1601 cheetah_flush_dcache();
1602 1602
1603 /* Re-enable I/D caches */ 1603 /* Re-enable I/D caches */
1604 __asm__ __volatile__("ldxa [%%g0] %0, %%g1\n\t" 1604 __asm__ __volatile__("ldxa [%%g0] %0, %%g1\n\t"
1605 "or %%g1, %1, %%g1\n\t" 1605 "or %%g1, %1, %%g1\n\t"
1606 "stxa %%g1, [%%g0] %0\n\t" 1606 "stxa %%g1, [%%g0] %0\n\t"
1607 "membar #Sync" 1607 "membar #Sync"
1608 : /* no outputs */ 1608 : /* no outputs */
1609 : "i" (ASI_DCU_CONTROL_REG), 1609 : "i" (ASI_DCU_CONTROL_REG),
1610 "i" (DCU_IC | DCU_DC) 1610 "i" (DCU_IC | DCU_DC)
1611 : "g1"); 1611 : "g1");
1612 1612
1613 if (flush_all) 1613 if (flush_all)
1614 cheetah_flush_ecache(); 1614 cheetah_flush_ecache();
1615 else if (flush_line) 1615 else if (flush_line)
1616 cheetah_flush_ecache_line(afar); 1616 cheetah_flush_ecache_line(afar);
1617 } 1617 }
1618 1618
1619 /* Re-enable error reporting */ 1619 /* Re-enable error reporting */
1620 __asm__ __volatile__("ldxa [%%g0] %0, %%g1\n\t" 1620 __asm__ __volatile__("ldxa [%%g0] %0, %%g1\n\t"
1621 "or %%g1, %1, %%g1\n\t" 1621 "or %%g1, %1, %%g1\n\t"
1622 "stxa %%g1, [%%g0] %0\n\t" 1622 "stxa %%g1, [%%g0] %0\n\t"
1623 "membar #Sync" 1623 "membar #Sync"
1624 : /* no outputs */ 1624 : /* no outputs */
1625 : "i" (ASI_ESTATE_ERROR_EN), 1625 : "i" (ASI_ESTATE_ERROR_EN),
1626 "i" (ESTATE_ERROR_NCEEN | ESTATE_ERROR_CEEN) 1626 "i" (ESTATE_ERROR_NCEEN | ESTATE_ERROR_CEEN)
1627 : "g1"); 1627 : "g1");
1628 1628
1629 /* Decide if we can continue after handling this trap and 1629 /* Decide if we can continue after handling this trap and
1630 * logging the error. 1630 * logging the error.
1631 */ 1631 */
1632 recoverable = 1; 1632 recoverable = 1;
1633 if (afsr & (CHAFSR_PERR | CHAFSR_IERR | CHAFSR_ISAP)) 1633 if (afsr & (CHAFSR_PERR | CHAFSR_IERR | CHAFSR_ISAP))
1634 recoverable = 0; 1634 recoverable = 0;
1635 1635
1636 /* Re-check AFSR/AFAR. What we are looking for here is whether a new 1636 /* Re-check AFSR/AFAR. What we are looking for here is whether a new
1637 * error was logged while we had error reporting traps disabled. 1637 * error was logged while we had error reporting traps disabled.
1638 */ 1638 */
1639 if (cheetah_recheck_errors(&local_snapshot)) { 1639 if (cheetah_recheck_errors(&local_snapshot)) {
1640 unsigned long new_afsr = local_snapshot.afsr; 1640 unsigned long new_afsr = local_snapshot.afsr;
1641 1641
1642 /* If we got a new asynchronous error, die... */ 1642 /* If we got a new asynchronous error, die... */
1643 if (new_afsr & (CHAFSR_EMU | CHAFSR_EDU | 1643 if (new_afsr & (CHAFSR_EMU | CHAFSR_EDU |
1644 CHAFSR_WDU | CHAFSR_CPU | 1644 CHAFSR_WDU | CHAFSR_CPU |
1645 CHAFSR_IVU | CHAFSR_UE | 1645 CHAFSR_IVU | CHAFSR_UE |
1646 CHAFSR_BERR | CHAFSR_TO)) 1646 CHAFSR_BERR | CHAFSR_TO))
1647 recoverable = 0; 1647 recoverable = 0;
1648 } 1648 }
1649 1649
1650 /* Log errors. */ 1650 /* Log errors. */
1651 cheetah_log_errors(regs, &local_snapshot, afsr, afar, recoverable); 1651 cheetah_log_errors(regs, &local_snapshot, afsr, afar, recoverable);
1652 1652
1653 /* "Recoverable" here means we try to yank the page from ever 1653 /* "Recoverable" here means we try to yank the page from ever
1654 * being newly used again. This depends upon a few things: 1654 * being newly used again. This depends upon a few things:
1655 * 1) Must be main memory, and AFAR must be valid. 1655 * 1) Must be main memory, and AFAR must be valid.
1656 * 2) If we trapped from user, OK. 1656 * 2) If we trapped from user, OK.
1657 * 3) Else, if we trapped from kernel we must find exception 1657 * 3) Else, if we trapped from kernel we must find exception
1658 * table entry (i.e. we must have been accessing user 1658 * table entry (i.e. we must have been accessing user
1659 * space). 1659 * space).
1660 * 1660 *
1661 * If AFAR is not in main memory, or we trapped from kernel 1661 * If AFAR is not in main memory, or we trapped from kernel
1662 * and cannot find an exception table entry, it is unacceptable 1662 * and cannot find an exception table entry, it is unacceptable
1663 * to try and continue. 1663 * to try and continue.
1664 */ 1664 */
1665 if (recoverable && is_memory) { 1665 if (recoverable && is_memory) {
1666 if ((regs->tstate & TSTATE_PRIV) == 0UL) { 1666 if ((regs->tstate & TSTATE_PRIV) == 0UL) {
1667 /* OK, usermode access. */ 1667 /* OK, usermode access. */
1668 recoverable = 1; 1668 recoverable = 1;
1669 } else { 1669 } else {
1670 const struct exception_table_entry *entry; 1670 const struct exception_table_entry *entry;
1671 1671
1672 entry = search_exception_tables(regs->tpc); 1672 entry = search_exception_tables(regs->tpc);
1673 if (entry) { 1673 if (entry) {
1674 /* OK, kernel access to userspace. */ 1674 /* OK, kernel access to userspace. */
1675 recoverable = 1; 1675 recoverable = 1;
1676 1676
1677 } else { 1677 } else {
1678 /* BAD, privileged state is corrupted. */ 1678 /* BAD, privileged state is corrupted. */
1679 recoverable = 0; 1679 recoverable = 0;
1680 } 1680 }
1681 1681
1682 if (recoverable) { 1682 if (recoverable) {
1683 if (pfn_valid(afar >> PAGE_SHIFT)) 1683 if (pfn_valid(afar >> PAGE_SHIFT))
1684 get_page(pfn_to_page(afar >> PAGE_SHIFT)); 1684 get_page(pfn_to_page(afar >> PAGE_SHIFT));
1685 else 1685 else
1686 recoverable = 0; 1686 recoverable = 0;
1687 1687
1688 /* Only perform fixup if we still have a 1688 /* Only perform fixup if we still have a
1689 * recoverable condition. 1689 * recoverable condition.
1690 */ 1690 */
1691 if (recoverable) { 1691 if (recoverable) {
1692 regs->tpc = entry->fixup; 1692 regs->tpc = entry->fixup;
1693 regs->tnpc = regs->tpc + 4; 1693 regs->tnpc = regs->tpc + 4;
1694 } 1694 }
1695 } 1695 }
1696 } 1696 }
1697 } else { 1697 } else {
1698 recoverable = 0; 1698 recoverable = 0;
1699 } 1699 }
1700 1700
1701 if (!recoverable) 1701 if (!recoverable)
1702 panic("Irrecoverable deferred error trap.\n"); 1702 panic("Irrecoverable deferred error trap.\n");
1703 } 1703 }
1704 1704
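The comment block above encodes a three-step test; flattened into a predicate it reads as: main memory is mandatory, a user-mode trap is always acceptable, and a kernel-mode trap is acceptable only with an exception-table fixup and a valid pfn. A sketch of that decision tree (the argument names are illustrative, not kernel identifiers):

#include <stdbool.h>

static bool deferred_error_recoverable(bool is_memory, bool from_user,
                                       bool have_fixup, bool pfn_ok)
{
	if (!is_memory)    /* 1) AFAR must point at valid main memory */
		return false;
	if (from_user)     /* 2) user-mode trap: OK to continue */
		return true;
	/* 3) kernel-mode trap: only a fixable userspace access is OK */
	return have_fixup && pfn_ok;
}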
1705 /* Handle a D/I cache parity error trap. TYPE is encoded as: 1705 /* Handle a D/I cache parity error trap. TYPE is encoded as:
1706 * 1706 *
1707 * Bit0: 0=dcache,1=icache 1707 * Bit0: 0=dcache,1=icache
1708 * Bit1: 0=recoverable,1=unrecoverable 1708 * Bit1: 0=recoverable,1=unrecoverable
1709 * 1709 *
1710 * The hardware has disabled both the I-cache and D-cache in 1710 * The hardware has disabled both the I-cache and D-cache in
1711 * the %dcr register. 1711 * the %dcr register.
1712 */ 1712 */
1713 void cheetah_plus_parity_error(int type, struct pt_regs *regs) 1713 void cheetah_plus_parity_error(int type, struct pt_regs *regs)
1714 { 1714 {
1715 if (type & 0x1) 1715 if (type & 0x1)
1716 __cheetah_flush_icache(); 1716 __cheetah_flush_icache();
1717 else 1717 else
1718 cheetah_plus_zap_dcache_parity(); 1718 cheetah_plus_zap_dcache_parity();
1719 cheetah_flush_dcache(); 1719 cheetah_flush_dcache();
1720 1720
1721 /* Re-enable I-cache/D-cache */ 1721 /* Re-enable I-cache/D-cache */
1722 __asm__ __volatile__("ldxa [%%g0] %0, %%g1\n\t" 1722 __asm__ __volatile__("ldxa [%%g0] %0, %%g1\n\t"
1723 "or %%g1, %1, %%g1\n\t" 1723 "or %%g1, %1, %%g1\n\t"
1724 "stxa %%g1, [%%g0] %0\n\t" 1724 "stxa %%g1, [%%g0] %0\n\t"
1725 "membar #Sync" 1725 "membar #Sync"
1726 : /* no outputs */ 1726 : /* no outputs */
1727 : "i" (ASI_DCU_CONTROL_REG), 1727 : "i" (ASI_DCU_CONTROL_REG),
1728 "i" (DCU_DC | DCU_IC) 1728 "i" (DCU_DC | DCU_IC)
1729 : "g1"); 1729 : "g1");
1730 1730
1731 if (type & 0x2) { 1731 if (type & 0x2) {
1732 printk(KERN_EMERG "CPU[%d]: Cheetah+ %c-cache parity error at TPC[%016lx]\n", 1732 printk(KERN_EMERG "CPU[%d]: Cheetah+ %c-cache parity error at TPC[%016lx]\n",
1733 smp_processor_id(), 1733 smp_processor_id(),
1734 (type & 0x1) ? 'I' : 'D', 1734 (type & 0x1) ? 'I' : 'D',
1735 regs->tpc); 1735 regs->tpc);
1736 print_symbol(KERN_EMERG "TPC<%s>\n", regs->tpc); 1736 print_symbol(KERN_EMERG "TPC<%s>\n", regs->tpc);
1737 panic("Irrecoverable Cheetah+ parity error."); 1737 panic("Irrecoverable Cheetah+ parity error.");
1738 } 1738 }
1739 1739
1740 printk(KERN_WARNING "CPU[%d]: Cheetah+ %c-cache parity error at TPC[%016lx]\n", 1740 printk(KERN_WARNING "CPU[%d]: Cheetah+ %c-cache parity error at TPC[%016lx]\n",
1741 smp_processor_id(), 1741 smp_processor_id(),
1742 (type & 0x1) ? 'I' : 'D', 1742 (type & 0x1) ? 'I' : 'D',
1743 regs->tpc); 1743 regs->tpc);
1744 print_symbol(KERN_WARNING "TPC<%s>\n", regs->tpc); 1744 print_symbol(KERN_WARNING "TPC<%s>\n", regs->tpc);
1745 } 1745 }
1746 1746
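The TYPE decode used twice in the function above, as a standalone sketch:

#include <stdio.h>

/* decode of the TYPE bits documented before cheetah_plus_parity_error() */
static void describe_parity_type(int type)
{
	printf("%c-cache parity error, %s\n",
	       (type & 0x1) ? 'I' : 'D',
	       (type & 0x2) ? "unrecoverable" : "recoverable");
}

int main(void)
{
	describe_parity_type(0x3); /* I-cache parity error, unrecoverable */
	return 0;
}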
1747 struct sun4v_error_entry { 1747 struct sun4v_error_entry {
1748 u64 err_handle; 1748 u64 err_handle;
1749 u64 err_stick; 1749 u64 err_stick;
1750 1750
1751 u32 err_type; 1751 u32 err_type;
1752 #define SUN4V_ERR_TYPE_UNDEFINED 0 1752 #define SUN4V_ERR_TYPE_UNDEFINED 0
1753 #define SUN4V_ERR_TYPE_UNCORRECTED_RES 1 1753 #define SUN4V_ERR_TYPE_UNCORRECTED_RES 1
1754 #define SUN4V_ERR_TYPE_PRECISE_NONRES 2 1754 #define SUN4V_ERR_TYPE_PRECISE_NONRES 2
1755 #define SUN4V_ERR_TYPE_DEFERRED_NONRES 3 1755 #define SUN4V_ERR_TYPE_DEFERRED_NONRES 3
1756 #define SUN4V_ERR_TYPE_WARNING_RES 4 1756 #define SUN4V_ERR_TYPE_WARNING_RES 4
1757 1757
1758 u32 err_attrs; 1758 u32 err_attrs;
1759 #define SUN4V_ERR_ATTRS_PROCESSOR 0x00000001 1759 #define SUN4V_ERR_ATTRS_PROCESSOR 0x00000001
1760 #define SUN4V_ERR_ATTRS_MEMORY 0x00000002 1760 #define SUN4V_ERR_ATTRS_MEMORY 0x00000002
1761 #define SUN4V_ERR_ATTRS_PIO 0x00000004 1761 #define SUN4V_ERR_ATTRS_PIO 0x00000004
1762 #define SUN4V_ERR_ATTRS_INT_REGISTERS 0x00000008 1762 #define SUN4V_ERR_ATTRS_INT_REGISTERS 0x00000008
1763 #define SUN4V_ERR_ATTRS_FPU_REGISTERS 0x00000010 1763 #define SUN4V_ERR_ATTRS_FPU_REGISTERS 0x00000010
1764 #define SUN4V_ERR_ATTRS_USER_MODE 0x01000000 1764 #define SUN4V_ERR_ATTRS_USER_MODE 0x01000000
1765 #define SUN4V_ERR_ATTRS_PRIV_MODE 0x02000000 1765 #define SUN4V_ERR_ATTRS_PRIV_MODE 0x02000000
1766 #define SUN4V_ERR_ATTRS_RES_QUEUE_FULL 0x80000000 1766 #define SUN4V_ERR_ATTRS_RES_QUEUE_FULL 0x80000000
1767 1767
1768 u64 err_raddr; 1768 u64 err_raddr;
1769 u32 err_size; 1769 u32 err_size;
1770 u16 err_cpu; 1770 u16 err_cpu;
1771 u16 err_pad; 1771 u16 err_pad;
1772 }; 1772 };
1773 1773
1774 static atomic_t sun4v_resum_oflow_cnt = ATOMIC_INIT(0); 1774 static atomic_t sun4v_resum_oflow_cnt = ATOMIC_INIT(0);
1775 static atomic_t sun4v_nonresum_oflow_cnt = ATOMIC_INIT(0); 1775 static atomic_t sun4v_nonresum_oflow_cnt = ATOMIC_INIT(0);
1776 1776
1777 static const char *sun4v_err_type_to_str(u32 type) 1777 static const char *sun4v_err_type_to_str(u32 type)
1778 { 1778 {
1779 switch (type) { 1779 switch (type) {
1780 case SUN4V_ERR_TYPE_UNDEFINED: 1780 case SUN4V_ERR_TYPE_UNDEFINED:
1781 return "undefined"; 1781 return "undefined";
1782 case SUN4V_ERR_TYPE_UNCORRECTED_RES: 1782 case SUN4V_ERR_TYPE_UNCORRECTED_RES:
1783 return "uncorrected resumable"; 1783 return "uncorrected resumable";
1784 case SUN4V_ERR_TYPE_PRECISE_NONRES: 1784 case SUN4V_ERR_TYPE_PRECISE_NONRES:
1785 return "precise nonresumable"; 1785 return "precise nonresumable";
1786 case SUN4V_ERR_TYPE_DEFERRED_NONRES: 1786 case SUN4V_ERR_TYPE_DEFERRED_NONRES:
1787 return "deferred nonresumable"; 1787 return "deferred nonresumable";
1788 case SUN4V_ERR_TYPE_WARNING_RES: 1788 case SUN4V_ERR_TYPE_WARNING_RES:
1789 return "warning resumable"; 1789 return "warning resumable";
1790 default: 1790 default:
1791 return "unknown"; 1791 return "unknown";
1792 } 1792 }
1793 } 1793 }
1794 1794
1795 extern void __show_regs(struct pt_regs * regs); 1795 extern void __show_regs(struct pt_regs * regs);
1796 1796
1797 static void sun4v_log_error(struct pt_regs *regs, struct sun4v_error_entry *ent, int cpu, const char *pfx, atomic_t *ocnt) 1797 static void sun4v_log_error(struct pt_regs *regs, struct sun4v_error_entry *ent, int cpu, const char *pfx, atomic_t *ocnt)
1798 { 1798 {
1799 int cnt; 1799 int cnt;
1800 1800
1801 printk("%s: Reporting on cpu %d\n", pfx, cpu); 1801 printk("%s: Reporting on cpu %d\n", pfx, cpu);
1802 printk("%s: err_handle[%lx] err_stick[%lx] err_type[%08x:%s]\n", 1802 printk("%s: err_handle[%lx] err_stick[%lx] err_type[%08x:%s]\n",
1803 pfx, 1803 pfx,
1804 ent->err_handle, ent->err_stick, 1804 ent->err_handle, ent->err_stick,
1805 ent->err_type, 1805 ent->err_type,
1806 sun4v_err_type_to_str(ent->err_type)); 1806 sun4v_err_type_to_str(ent->err_type));
1807 printk("%s: err_attrs[%08x:%s %s %s %s %s %s %s %s]\n", 1807 printk("%s: err_attrs[%08x:%s %s %s %s %s %s %s %s]\n",
1808 pfx, 1808 pfx,
1809 ent->err_attrs, 1809 ent->err_attrs,
1810 ((ent->err_attrs & SUN4V_ERR_ATTRS_PROCESSOR) ? 1810 ((ent->err_attrs & SUN4V_ERR_ATTRS_PROCESSOR) ?
1811 "processor" : ""), 1811 "processor" : ""),
1812 ((ent->err_attrs & SUN4V_ERR_ATTRS_MEMORY) ? 1812 ((ent->err_attrs & SUN4V_ERR_ATTRS_MEMORY) ?
1813 "memory" : ""), 1813 "memory" : ""),
1814 ((ent->err_attrs & SUN4V_ERR_ATTRS_PIO) ? 1814 ((ent->err_attrs & SUN4V_ERR_ATTRS_PIO) ?
1815 "pio" : ""), 1815 "pio" : ""),
1816 ((ent->err_attrs & SUN4V_ERR_ATTRS_INT_REGISTERS) ? 1816 ((ent->err_attrs & SUN4V_ERR_ATTRS_INT_REGISTERS) ?
1817 "integer-regs" : ""), 1817 "integer-regs" : ""),
1818 ((ent->err_attrs & SUN4V_ERR_ATTRS_FPU_REGISTERS) ? 1818 ((ent->err_attrs & SUN4V_ERR_ATTRS_FPU_REGISTERS) ?
1819 "fpu-regs" : ""), 1819 "fpu-regs" : ""),
1820 ((ent->err_attrs & SUN4V_ERR_ATTRS_USER_MODE) ? 1820 ((ent->err_attrs & SUN4V_ERR_ATTRS_USER_MODE) ?
1821 "user" : ""), 1821 "user" : ""),
1822 ((ent->err_attrs & SUN4V_ERR_ATTRS_PRIV_MODE) ? 1822 ((ent->err_attrs & SUN4V_ERR_ATTRS_PRIV_MODE) ?
1823 "privileged" : ""), 1823 "privileged" : ""),
1824 ((ent->err_attrs & SUN4V_ERR_ATTRS_RES_QUEUE_FULL) ? 1824 ((ent->err_attrs & SUN4V_ERR_ATTRS_RES_QUEUE_FULL) ?
1825 "queue-full" : "")); 1825 "queue-full" : ""));
1826 printk("%s: err_raddr[%016lx] err_size[%u] err_cpu[%u]\n", 1826 printk("%s: err_raddr[%016lx] err_size[%u] err_cpu[%u]\n",
1827 pfx, 1827 pfx,
1828 ent->err_raddr, ent->err_size, ent->err_cpu); 1828 ent->err_raddr, ent->err_size, ent->err_cpu);
1829 1829
1830 __show_regs(regs); 1830 __show_regs(regs);
1831 1831
1832 if ((cnt = atomic_read(ocnt)) != 0) { 1832 if ((cnt = atomic_read(ocnt)) != 0) {
1833 atomic_set(ocnt, 0); 1833 atomic_set(ocnt, 0);
1834 wmb(); 1834 wmb();
1835 printk("%s: Queue overflowed %d times.\n", 1835 printk("%s: Queue overflowed %d times.\n",
1836 pfx, cnt); 1836 pfx, cnt);
1837 } 1837 }
1838 } 1838 }
1839 1839
1840 /* We run with %pil set to 15 and PSTATE_IE enabled in %pstate. 1840 /* We run with %pil set to 15 and PSTATE_IE enabled in %pstate.
1841 * Log the event and clear the first word of the entry. 1841 * Log the event and clear the first word of the entry.
1842 */ 1842 */
1843 void sun4v_resum_error(struct pt_regs *regs, unsigned long offset) 1843 void sun4v_resum_error(struct pt_regs *regs, unsigned long offset)
1844 { 1844 {
1845 struct sun4v_error_entry *ent, local_copy; 1845 struct sun4v_error_entry *ent, local_copy;
1846 struct trap_per_cpu *tb; 1846 struct trap_per_cpu *tb;
1847 unsigned long paddr; 1847 unsigned long paddr;
1848 int cpu; 1848 int cpu;
1849 1849
1850 cpu = get_cpu(); 1850 cpu = get_cpu();
1851 1851
1852 tb = &trap_block[cpu]; 1852 tb = &trap_block[cpu];
1853 paddr = tb->resum_kernel_buf_pa + offset; 1853 paddr = tb->resum_kernel_buf_pa + offset;
1854 ent = __va(paddr); 1854 ent = __va(paddr);
1855 1855
1856 memcpy(&local_copy, ent, sizeof(struct sun4v_error_entry)); 1856 memcpy(&local_copy, ent, sizeof(struct sun4v_error_entry));
1857 1857
1858 /* We have a local copy now, so release the entry. */ 1858 /* We have a local copy now, so release the entry. */
1859 ent->err_handle = 0; 1859 ent->err_handle = 0;
1860 wmb(); 1860 wmb();
1861 1861
1862 put_cpu(); 1862 put_cpu();
1863 1863
1864 if (local_copy.err_type == SUN4V_ERR_TYPE_WARNING_RES) { 1864 if (local_copy.err_type == SUN4V_ERR_TYPE_WARNING_RES) {
1865 /* If err_type is 0x4, it's a powerdown request. Do 1865 /* If err_type is 0x4, it's a powerdown request. Do
1866 * not do the usual resumable error log because that 1866 * not do the usual resumable error log because that
1867 * makes it look like some abnormal error. 1867 * makes it look like some abnormal error.
1868 */ 1868 */
1869 printk(KERN_INFO "Power down request...\n"); 1869 printk(KERN_INFO "Power down request...\n");
1870 kill_cad_pid(SIGINT, 1); 1870 kill_cad_pid(SIGINT, 1);
1871 return; 1871 return;
1872 } 1872 }
1873 1873
1874 sun4v_log_error(regs, &local_copy, cpu, 1874 sun4v_log_error(regs, &local_copy, cpu,
1875 KERN_ERR "RESUMABLE ERROR", 1875 KERN_ERR "RESUMABLE ERROR",
1876 &sun4v_resum_oflow_cnt); 1876 &sun4v_resum_oflow_cnt);
1877 } 1877 }
1878 1878
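Both sun4v handlers use the same claim protocol on the per-cpu error queue: snapshot the entry into a local copy, zero err_handle to hand the slot back to the producer, and order that release store with wmb() before doing anything slow. A minimal sketch of the pattern, assuming a two-field entry and using a GCC builtin in place of the kernel's wmb():

#include <stdint.h>

struct entry {
	uint64_t handle;  /* non-zero while owned by the producer */
	uint64_t payload;
};

static void consume_entry(volatile struct entry *slot, struct entry *out)
{
	out->handle  = slot->handle;  /* snapshot the slot first... */
	out->payload = slot->payload;
	slot->handle = 0;             /* ...then mark it free... */
	__sync_synchronize();         /* ...and order the release store */
}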
1879 /* If we try to printk() we'll probably make matters worse, by trying 1879 /* If we try to printk() we'll probably make matters worse, by trying
1880 * to retake locks this cpu already holds or causing more errors. So 1880 * to retake locks this cpu already holds or causing more errors. So
1881 * just bump a counter, and we'll report these counter bumps above. 1881 * just bump a counter, and we'll report these counter bumps above.
1882 */ 1882 */
1883 void sun4v_resum_overflow(struct pt_regs *regs) 1883 void sun4v_resum_overflow(struct pt_regs *regs)
1884 { 1884 {
1885 atomic_inc(&sun4v_resum_oflow_cnt); 1885 atomic_inc(&sun4v_resum_oflow_cnt);
1886 } 1886 }
1887 1887
1888 /* We run with %pil set to 15 and PSTATE_IE enabled in %pstate. 1888 /* We run with %pil set to 15 and PSTATE_IE enabled in %pstate.
1889 * Log the event, clear the first word of the entry, and die. 1889 * Log the event, clear the first word of the entry, and die.
1890 */ 1890 */
1891 void sun4v_nonresum_error(struct pt_regs *regs, unsigned long offset) 1891 void sun4v_nonresum_error(struct pt_regs *regs, unsigned long offset)
1892 { 1892 {
1893 struct sun4v_error_entry *ent, local_copy; 1893 struct sun4v_error_entry *ent, local_copy;
1894 struct trap_per_cpu *tb; 1894 struct trap_per_cpu *tb;
1895 unsigned long paddr; 1895 unsigned long paddr;
1896 int cpu; 1896 int cpu;
1897 1897
1898 cpu = get_cpu(); 1898 cpu = get_cpu();
1899 1899
1900 tb = &trap_block[cpu]; 1900 tb = &trap_block[cpu];
1901 paddr = tb->nonresum_kernel_buf_pa + offset; 1901 paddr = tb->nonresum_kernel_buf_pa + offset;
1902 ent = __va(paddr); 1902 ent = __va(paddr);
1903 1903
1904 memcpy(&local_copy, ent, sizeof(struct sun4v_error_entry)); 1904 memcpy(&local_copy, ent, sizeof(struct sun4v_error_entry));
1905 1905
1906 /* We have a local copy now, so release the entry. */ 1906 /* We have a local copy now, so release the entry. */
1907 ent->err_handle = 0; 1907 ent->err_handle = 0;
1908 wmb(); 1908 wmb();
1909 1909
1910 put_cpu(); 1910 put_cpu();
1911 1911
1912 #ifdef CONFIG_PCI 1912 #ifdef CONFIG_PCI
1913 /* Check for the special PCI poke sequence. */ 1913 /* Check for the special PCI poke sequence. */
1914 if (pci_poke_in_progress && pci_poke_cpu == cpu) { 1914 if (pci_poke_in_progress && pci_poke_cpu == cpu) {
1915 pci_poke_faulted = 1; 1915 pci_poke_faulted = 1;
1916 regs->tpc += 4; 1916 regs->tpc += 4;
1917 regs->tnpc = regs->tpc + 4; 1917 regs->tnpc = regs->tpc + 4;
1918 return; 1918 return;
1919 } 1919 }
1920 #endif 1920 #endif
1921 1921
1922 sun4v_log_error(regs, &local_copy, cpu, 1922 sun4v_log_error(regs, &local_copy, cpu,
1923 KERN_EMERG "NON-RESUMABLE ERROR", 1923 KERN_EMERG "NON-RESUMABLE ERROR",
1924 &sun4v_nonresum_oflow_cnt); 1924 &sun4v_nonresum_oflow_cnt);
1925 1925
1926 panic("Non-resumable error."); 1926 panic("Non-resumable error.");
1927 } 1927 }
1928 1928
1929 /* If we try to printk() we'll probably make matters worse, by trying 1929 /* If we try to printk() we'll probably make matters worse, by trying
1930 * to retake locks this cpu already holds or causing more errors. So 1930 * to retake locks this cpu already holds or causing more errors. So
1931 * just bump a counter, and we'll report these counter bumps above. 1931 * just bump a counter, and we'll report these counter bumps above.
1932 */ 1932 */
1933 void sun4v_nonresum_overflow(struct pt_regs *regs) 1933 void sun4v_nonresum_overflow(struct pt_regs *regs)
1934 { 1934 {
1935 /* XXX Actually even this can make not that much sense. Perhaps 1935 /* XXX Actually even this can make not that much sense. Perhaps
1936 * XXX we should just pull the plug and panic directly from here? 1936 * XXX we should just pull the plug and panic directly from here?
1937 */ 1937 */
1938 atomic_inc(&sun4v_nonresum_oflow_cnt); 1938 atomic_inc(&sun4v_nonresum_oflow_cnt);
1939 } 1939 }
1940 1940
1941 unsigned long sun4v_err_itlb_vaddr; 1941 unsigned long sun4v_err_itlb_vaddr;
1942 unsigned long sun4v_err_itlb_ctx; 1942 unsigned long sun4v_err_itlb_ctx;
1943 unsigned long sun4v_err_itlb_pte; 1943 unsigned long sun4v_err_itlb_pte;
1944 unsigned long sun4v_err_itlb_error; 1944 unsigned long sun4v_err_itlb_error;
1945 1945
1946 void sun4v_itlb_error_report(struct pt_regs *regs, int tl) 1946 void sun4v_itlb_error_report(struct pt_regs *regs, int tl)
1947 { 1947 {
1948 if (tl > 1) 1948 if (tl > 1)
1949 dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 1949 dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
1950 1950
1951 printk(KERN_EMERG "SUN4V-ITLB: Error at TPC[%lx], tl %d\n", 1951 printk(KERN_EMERG "SUN4V-ITLB: Error at TPC[%lx], tl %d\n",
1952 regs->tpc, tl); 1952 regs->tpc, tl);
1953 print_symbol(KERN_EMERG "SUN4V-ITLB: TPC<%s>\n", regs->tpc); 1953 print_symbol(KERN_EMERG "SUN4V-ITLB: TPC<%s>\n", regs->tpc);
1954 printk(KERN_EMERG "SUN4V-ITLB: vaddr[%lx] ctx[%lx] " 1954 printk(KERN_EMERG "SUN4V-ITLB: vaddr[%lx] ctx[%lx] "
1955 "pte[%lx] error[%lx]\n", 1955 "pte[%lx] error[%lx]\n",
1956 sun4v_err_itlb_vaddr, sun4v_err_itlb_ctx, 1956 sun4v_err_itlb_vaddr, sun4v_err_itlb_ctx,
1957 sun4v_err_itlb_pte, sun4v_err_itlb_error); 1957 sun4v_err_itlb_pte, sun4v_err_itlb_error);
1958 1958
1959 prom_halt(); 1959 prom_halt();
1960 } 1960 }
1961 1961
1962 unsigned long sun4v_err_dtlb_vaddr; 1962 unsigned long sun4v_err_dtlb_vaddr;
1963 unsigned long sun4v_err_dtlb_ctx; 1963 unsigned long sun4v_err_dtlb_ctx;
1964 unsigned long sun4v_err_dtlb_pte; 1964 unsigned long sun4v_err_dtlb_pte;
1965 unsigned long sun4v_err_dtlb_error; 1965 unsigned long sun4v_err_dtlb_error;
1966 1966
1967 void sun4v_dtlb_error_report(struct pt_regs *regs, int tl) 1967 void sun4v_dtlb_error_report(struct pt_regs *regs, int tl)
1968 { 1968 {
1969 if (tl > 1) 1969 if (tl > 1)
1970 dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 1970 dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
1971 1971
1972 printk(KERN_EMERG "SUN4V-DTLB: Error at TPC[%lx], tl %d\n", 1972 printk(KERN_EMERG "SUN4V-DTLB: Error at TPC[%lx], tl %d\n",
1973 regs->tpc, tl); 1973 regs->tpc, tl);
1974 print_symbol(KERN_EMERG "SUN4V-DTLB: TPC<%s>\n", regs->tpc); 1974 print_symbol(KERN_EMERG "SUN4V-DTLB: TPC<%s>\n", regs->tpc);
1975 printk(KERN_EMERG "SUN4V-DTLB: vaddr[%lx] ctx[%lx] " 1975 printk(KERN_EMERG "SUN4V-DTLB: vaddr[%lx] ctx[%lx] "
1976 "pte[%lx] error[%lx]\n", 1976 "pte[%lx] error[%lx]\n",
1977 sun4v_err_dtlb_vaddr, sun4v_err_dtlb_ctx, 1977 sun4v_err_dtlb_vaddr, sun4v_err_dtlb_ctx,
1978 sun4v_err_dtlb_pte, sun4v_err_dtlb_error); 1978 sun4v_err_dtlb_pte, sun4v_err_dtlb_error);
1979 1979
1980 prom_halt(); 1980 prom_halt();
1981 } 1981 }
1982 1982
1983 void hypervisor_tlbop_error(unsigned long err, unsigned long op) 1983 void hypervisor_tlbop_error(unsigned long err, unsigned long op)
1984 { 1984 {
1985 printk(KERN_CRIT "SUN4V: TLB hv call error %lu for op %lu\n", 1985 printk(KERN_CRIT "SUN4V: TLB hv call error %lu for op %lu\n",
1986 err, op); 1986 err, op);
1987 } 1987 }
1988 1988
1989 void hypervisor_tlbop_error_xcall(unsigned long err, unsigned long op) 1989 void hypervisor_tlbop_error_xcall(unsigned long err, unsigned long op)
1990 { 1990 {
1991 printk(KERN_CRIT "SUN4V: XCALL TLB hv call error %lu for op %lu\n", 1991 printk(KERN_CRIT "SUN4V: XCALL TLB hv call error %lu for op %lu\n",
1992 err, op); 1992 err, op);
1993 } 1993 }
1994 1994
1995 void do_fpe_common(struct pt_regs *regs) 1995 void do_fpe_common(struct pt_regs *regs)
1996 { 1996 {
1997 if (regs->tstate & TSTATE_PRIV) { 1997 if (regs->tstate & TSTATE_PRIV) {
1998 regs->tpc = regs->tnpc; 1998 regs->tpc = regs->tnpc;
1999 regs->tnpc += 4; 1999 regs->tnpc += 4;
2000 } else { 2000 } else {
2001 unsigned long fsr = current_thread_info()->xfsr[0]; 2001 unsigned long fsr = current_thread_info()->xfsr[0];
2002 siginfo_t info; 2002 siginfo_t info;
2003 2003
2004 if (test_thread_flag(TIF_32BIT)) { 2004 if (test_thread_flag(TIF_32BIT)) {
2005 regs->tpc &= 0xffffffff; 2005 regs->tpc &= 0xffffffff;
2006 regs->tnpc &= 0xffffffff; 2006 regs->tnpc &= 0xffffffff;
2007 } 2007 }
2008 info.si_signo = SIGFPE; 2008 info.si_signo = SIGFPE;
2009 info.si_errno = 0; 2009 info.si_errno = 0;
2010 info.si_addr = (void __user *)regs->tpc; 2010 info.si_addr = (void __user *)regs->tpc;
2011 info.si_trapno = 0; 2011 info.si_trapno = 0;
2012 info.si_code = __SI_FAULT; 2012 info.si_code = __SI_FAULT;
2013 if ((fsr & 0x1c000) == (1 << 14)) { 2013 if ((fsr & 0x1c000) == (1 << 14)) {
2014 if (fsr & 0x10) 2014 if (fsr & 0x10)
2015 info.si_code = FPE_FLTINV; 2015 info.si_code = FPE_FLTINV;
2016 else if (fsr & 0x08) 2016 else if (fsr & 0x08)
2017 info.si_code = FPE_FLTOVF; 2017 info.si_code = FPE_FLTOVF;
2018 else if (fsr & 0x04) 2018 else if (fsr & 0x04)
2019 info.si_code = FPE_FLTUND; 2019 info.si_code = FPE_FLTUND;
2020 else if (fsr & 0x02) 2020 else if (fsr & 0x02)
2021 info.si_code = FPE_FLTDIV; 2021 info.si_code = FPE_FLTDIV;
2022 else if (fsr & 0x01) 2022 else if (fsr & 0x01)
2023 info.si_code = FPE_FLTRES; 2023 info.si_code = FPE_FLTRES;
2024 } 2024 }
2025 force_sig_info(SIGFPE, &info, current); 2025 force_sig_info(SIGFPE, &info, current);
2026 } 2026 }
2027 } 2027 }
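
The si_code selection above decodes the SPARC-V9 %fsr: the test (fsr & 0x1c000) == (1 << 14) checks that the floating-point trap type field (bits 16:14) holds IEEE_754_exception, after which the current-exception bits 4:0 (nv, of, uf, dz, nx) identify the cause. The same decode as a stand-alone sketch, assuming that architectural layout:

	/* Sketch: the %fsr decode above as a helper.  Assumes the V9
	 * layout: ftt in bits 16:14 (1 == IEEE_754_exception), cexc
	 * in bits 4:0.
	 */
	static int fsr_to_si_code(unsigned long fsr)
	{
		if (((fsr >> 14) & 0x7) != 1)
			return __SI_FAULT;		/* not an IEEE exception */
		if (fsr & 0x10) return FPE_FLTINV;	/* nv: invalid operation */
		if (fsr & 0x08) return FPE_FLTOVF;	/* of: overflow */
		if (fsr & 0x04) return FPE_FLTUND;	/* uf: underflow */
		if (fsr & 0x02) return FPE_FLTDIV;	/* dz: divide by zero */
		if (fsr & 0x01) return FPE_FLTRES;	/* nx: inexact result */
		return __SI_FAULT;
	}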
2028 2028
2029 void do_fpieee(struct pt_regs *regs) 2029 void do_fpieee(struct pt_regs *regs)
2030 { 2030 {
2031 if (notify_die(DIE_TRAP, "fpu exception ieee", regs, 2031 if (notify_die(DIE_TRAP, "fpu exception ieee", regs,
2032 0, 0x24, SIGFPE) == NOTIFY_STOP) 2032 0, 0x24, SIGFPE) == NOTIFY_STOP)
2033 return; 2033 return;
2034 2034
2035 do_fpe_common(regs); 2035 do_fpe_common(regs);
2036 } 2036 }
2037 2037
2038 extern int do_mathemu(struct pt_regs *, struct fpustate *); 2038 extern int do_mathemu(struct pt_regs *, struct fpustate *);
2039 2039
2040 void do_fpother(struct pt_regs *regs) 2040 void do_fpother(struct pt_regs *regs)
2041 { 2041 {
2042 struct fpustate *f = FPUSTATE; 2042 struct fpustate *f = FPUSTATE;
2043 int ret = 0; 2043 int ret = 0;
2044 2044
2045 if (notify_die(DIE_TRAP, "fpu exception other", regs, 2045 if (notify_die(DIE_TRAP, "fpu exception other", regs,
2046 0, 0x25, SIGFPE) == NOTIFY_STOP) 2046 0, 0x25, SIGFPE) == NOTIFY_STOP)
2047 return; 2047 return;
2048 2048
2049 switch ((current_thread_info()->xfsr[0] & 0x1c000)) { 2049 switch ((current_thread_info()->xfsr[0] & 0x1c000)) {
2050 case (2 << 14): /* unfinished_FPop */ 2050 case (2 << 14): /* unfinished_FPop */
2051 case (3 << 14): /* unimplemented_FPop */ 2051 case (3 << 14): /* unimplemented_FPop */
2052 ret = do_mathemu(regs, f); 2052 ret = do_mathemu(regs, f);
2053 break; 2053 break;
2054 } 2054 }
2055 if (ret) 2055 if (ret)
2056 return; 2056 return;
2057 do_fpe_common(regs); 2057 do_fpe_common(regs);
2058 } 2058 }
2059 2059
2060 void do_tof(struct pt_regs *regs) 2060 void do_tof(struct pt_regs *regs)
2061 { 2061 {
2062 siginfo_t info; 2062 siginfo_t info;
2063 2063
2064 if (notify_die(DIE_TRAP, "tagged arithmetic overflow", regs, 2064 if (notify_die(DIE_TRAP, "tagged arithmetic overflow", regs,
2065 0, 0x26, SIGEMT) == NOTIFY_STOP) 2065 0, 0x26, SIGEMT) == NOTIFY_STOP)
2066 return; 2066 return;
2067 2067
2068 if (regs->tstate & TSTATE_PRIV) 2068 if (regs->tstate & TSTATE_PRIV)
2069 die_if_kernel("Penguin overflow trap from kernel mode", regs); 2069 die_if_kernel("Penguin overflow trap from kernel mode", regs);
2070 if (test_thread_flag(TIF_32BIT)) { 2070 if (test_thread_flag(TIF_32BIT)) {
2071 regs->tpc &= 0xffffffff; 2071 regs->tpc &= 0xffffffff;
2072 regs->tnpc &= 0xffffffff; 2072 regs->tnpc &= 0xffffffff;
2073 } 2073 }
2074 info.si_signo = SIGEMT; 2074 info.si_signo = SIGEMT;
2075 info.si_errno = 0; 2075 info.si_errno = 0;
2076 info.si_code = EMT_TAGOVF; 2076 info.si_code = EMT_TAGOVF;
2077 info.si_addr = (void __user *)regs->tpc; 2077 info.si_addr = (void __user *)regs->tpc;
2078 info.si_trapno = 0; 2078 info.si_trapno = 0;
2079 force_sig_info(SIGEMT, &info, current); 2079 force_sig_info(SIGEMT, &info, current);
2080 } 2080 }
2081 2081
2082 void do_div0(struct pt_regs *regs) 2082 void do_div0(struct pt_regs *regs)
2083 { 2083 {
2084 siginfo_t info; 2084 siginfo_t info;
2085 2085
2086 if (notify_die(DIE_TRAP, "integer division by zero", regs, 2086 if (notify_die(DIE_TRAP, "integer division by zero", regs,
2087 0, 0x28, SIGFPE) == NOTIFY_STOP) 2087 0, 0x28, SIGFPE) == NOTIFY_STOP)
2088 return; 2088 return;
2089 2089
2090 if (regs->tstate & TSTATE_PRIV) 2090 if (regs->tstate & TSTATE_PRIV)
2091 die_if_kernel("TL0: Kernel divide by zero.", regs); 2091 die_if_kernel("TL0: Kernel divide by zero.", regs);
2092 if (test_thread_flag(TIF_32BIT)) { 2092 if (test_thread_flag(TIF_32BIT)) {
2093 regs->tpc &= 0xffffffff; 2093 regs->tpc &= 0xffffffff;
2094 regs->tnpc &= 0xffffffff; 2094 regs->tnpc &= 0xffffffff;
2095 } 2095 }
2096 info.si_signo = SIGFPE; 2096 info.si_signo = SIGFPE;
2097 info.si_errno = 0; 2097 info.si_errno = 0;
2098 info.si_code = FPE_INTDIV; 2098 info.si_code = FPE_INTDIV;
2099 info.si_addr = (void __user *)regs->tpc; 2099 info.si_addr = (void __user *)regs->tpc;
2100 info.si_trapno = 0; 2100 info.si_trapno = 0;
2101 force_sig_info(SIGFPE, &info, current); 2101 force_sig_info(SIGFPE, &info, current);
2102 } 2102 }
2103 2103
2104 void instruction_dump (unsigned int *pc) 2104 void instruction_dump (unsigned int *pc)
2105 { 2105 {
2106 int i; 2106 int i;
2107 2107
2108 if ((((unsigned long) pc) & 3)) 2108 if ((((unsigned long) pc) & 3))
2109 return; 2109 return;
2110 2110
2111 printk("Instruction DUMP:"); 2111 printk("Instruction DUMP:");
2112 for (i = -3; i < 6; i++) 2112 for (i = -3; i < 6; i++)
2113 printk("%c%08x%c",i?' ':'<',pc[i],i?' ':'>'); 2113 printk("%c%08x%c",i?' ':'<',pc[i],i?' ':'>');
2114 printk("\n"); 2114 printk("\n");
2115 } 2115 }
2116 2116
2117 static void user_instruction_dump (unsigned int __user *pc) 2117 static void user_instruction_dump (unsigned int __user *pc)
2118 { 2118 {
2119 int i; 2119 int i;
2120 unsigned int buf[9]; 2120 unsigned int buf[9];
2121 2121
2122 if ((((unsigned long) pc) & 3)) 2122 if ((((unsigned long) pc) & 3))
2123 return; 2123 return;
2124 2124
2125 if (copy_from_user(buf, pc - 3, sizeof(buf))) 2125 if (copy_from_user(buf, pc - 3, sizeof(buf)))
2126 return; 2126 return;
2127 2127
2128 printk("Instruction DUMP:"); 2128 printk("Instruction DUMP:");
2129 for (i = 0; i < 9; i++) 2129 for (i = 0; i < 9; i++)
2130 printk("%c%08x%c",i==3?' ':'<',buf[i],i==3?' ':'>'); 2130 printk("%c%08x%c",i==3?' ':'<',buf[i],i==3?' ':'>');
2131 printk("\n"); 2131 printk("\n");
2132 } 2132 }
2133 2133
2134 void show_stack(struct task_struct *tsk, unsigned long *_ksp) 2134 void show_stack(struct task_struct *tsk, unsigned long *_ksp)
2135 { 2135 {
2136 unsigned long pc, fp, thread_base, ksp; 2136 unsigned long pc, fp, thread_base, ksp;
2137 void *tp = task_stack_page(tsk); 2137 void *tp = task_stack_page(tsk);
2138 struct reg_window *rw; 2138 struct reg_window *rw;
2139 int count = 0; 2139 int count = 0;
2140 2140
2141 ksp = (unsigned long) _ksp; 2141 ksp = (unsigned long) _ksp;
2142 2142
2143 if (tp == current_thread_info()) 2143 if (tp == current_thread_info())
2144 flushw_all(); 2144 flushw_all();
2145 2145
2146 fp = ksp + STACK_BIAS; 2146 fp = ksp + STACK_BIAS;
2147 thread_base = (unsigned long) tp; 2147 thread_base = (unsigned long) tp;
2148 2148
2149 printk("Call Trace:"); 2149 printk("Call Trace:");
2150 #ifdef CONFIG_KALLSYMS 2150 #ifdef CONFIG_KALLSYMS
2151 printk("\n"); 2151 printk("\n");
2152 #endif 2152 #endif
2153 do { 2153 do {
2154 /* Bogus frame pointer? */ 2154 /* Bogus frame pointer? */
2155 if (fp < (thread_base + sizeof(struct thread_info)) || 2155 if (fp < (thread_base + sizeof(struct thread_info)) ||
2156 fp >= (thread_base + THREAD_SIZE)) 2156 fp >= (thread_base + THREAD_SIZE))
2157 break; 2157 break;
2158 rw = (struct reg_window *)fp; 2158 rw = (struct reg_window *)fp;
2159 pc = rw->ins[7]; 2159 pc = rw->ins[7];
2160 printk(" [%016lx] ", pc); 2160 printk(" [%016lx] ", pc);
2161 print_symbol("%s\n", pc); 2161 print_symbol("%s\n", pc);
2162 fp = rw->ins[6] + STACK_BIAS; 2162 fp = rw->ins[6] + STACK_BIAS;
2163 } while (++count < 16); 2163 } while (++count < 16);
2164 #ifndef CONFIG_KALLSYMS 2164 #ifndef CONFIG_KALLSYMS
2165 printk("\n"); 2165 printk("\n");
2166 #endif 2166 #endif
2167 } 2167 }
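
show_stack() leans on the SPARC-V9 ABI: each frame pointer, after adding the 2047-byte STACK_BIAS, addresses a spilled register window whose ins[7] holds the caller's return address and ins[6] the previous frame pointer, so the trace is a bounded pointer chase kept inside the task's stack page. The walk reduced to its essentials, assuming those conventions:

	/* Sketch: the sparc64 frame walk above, assuming the V9
	 * conventions (STACK_BIAS == 2047; a reg_window is spilled
	 * at the biased %fp).
	 */
	static void walk_frames(unsigned long fp, unsigned long stack_lo,
				unsigned long stack_hi)
	{
		int depth;

		for (depth = 0; depth < 16; depth++) {
			struct reg_window *rw;

			if (fp < stack_lo || fp >= stack_hi)
				break;				/* bogus %fp */
			rw = (struct reg_window *)fp;
			printk(" [%016lx]\n", rw->ins[7]);	/* saved %i7: caller */
			fp = rw->ins[6] + STACK_BIAS;		/* saved %i6: next frame */
		}
	}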
2168 2168
2169 void dump_stack(void) 2169 void dump_stack(void)
2170 { 2170 {
2171 unsigned long *ksp; 2171 unsigned long *ksp;
2172 2172
2173 __asm__ __volatile__("mov %%fp, %0" 2173 __asm__ __volatile__("mov %%fp, %0"
2174 : "=r" (ksp)); 2174 : "=r" (ksp));
2175 show_stack(current, ksp); 2175 show_stack(current, ksp);
2176 } 2176 }
2177 2177
2178 EXPORT_SYMBOL(dump_stack); 2178 EXPORT_SYMBOL(dump_stack);
2179 2179
2180 static inline int is_kernel_stack(struct task_struct *task, 2180 static inline int is_kernel_stack(struct task_struct *task,
2181 struct reg_window *rw) 2181 struct reg_window *rw)
2182 { 2182 {
2183 unsigned long rw_addr = (unsigned long) rw; 2183 unsigned long rw_addr = (unsigned long) rw;
2184 unsigned long thread_base, thread_end; 2184 unsigned long thread_base, thread_end;
2185 2185
2186 if (rw_addr < PAGE_OFFSET) { 2186 if (rw_addr < PAGE_OFFSET) {
2187 if (task != &init_task) 2187 if (task != &init_task)
2188 return 0; 2188 return 0;
2189 } 2189 }
2190 2190
2191 thread_base = (unsigned long) task_stack_page(task); 2191 thread_base = (unsigned long) task_stack_page(task);
2192 thread_end = thread_base + sizeof(union thread_union); 2192 thread_end = thread_base + sizeof(union thread_union);
2193 if (rw_addr >= thread_base && 2193 if (rw_addr >= thread_base &&
2194 rw_addr < thread_end && 2194 rw_addr < thread_end &&
2195 !(rw_addr & 0x7UL)) 2195 !(rw_addr & 0x7UL))
2196 return 1; 2196 return 1;
2197 2197
2198 return 0; 2198 return 0;
2199 } 2199 }
2200 2200
2201 static inline struct reg_window *kernel_stack_up(struct reg_window *rw) 2201 static inline struct reg_window *kernel_stack_up(struct reg_window *rw)
2202 { 2202 {
2203 unsigned long fp = rw->ins[6]; 2203 unsigned long fp = rw->ins[6];
2204 2204
2205 if (!fp) 2205 if (!fp)
2206 return NULL; 2206 return NULL;
2207 2207
2208 return (struct reg_window *) (fp + STACK_BIAS); 2208 return (struct reg_window *) (fp + STACK_BIAS);
2209 } 2209 }
2210 2210
2211 void die_if_kernel(char *str, struct pt_regs *regs) 2211 void die_if_kernel(char *str, struct pt_regs *regs)
2212 { 2212 {
2213 static int die_counter; 2213 static int die_counter;
2214 extern void smp_report_regs(void); 2214 extern void smp_report_regs(void);
2215 int count = 0; 2215 int count = 0;
2216 2216
2217 /* Amuse the user. */ 2217 /* Amuse the user. */
2218 printk( 2218 printk(
2219 " \\|/ ____ \\|/\n" 2219 " \\|/ ____ \\|/\n"
2220 " \"@'/ .. \\`@\"\n" 2220 " \"@'/ .. \\`@\"\n"
2221 " /_| \\__/ |_\\\n" 2221 " /_| \\__/ |_\\\n"
2222 " \\__U_/\n"); 2222 " \\__U_/\n");
2223 2223
2224 printk("%s(%d): %s [#%d]\n", current->comm, current->pid, str, ++die_counter); 2224 printk("%s(%d): %s [#%d]\n", current->comm, current->pid, str, ++die_counter);
2225 notify_die(DIE_OOPS, str, regs, 0, 255, SIGSEGV); 2225 notify_die(DIE_OOPS, str, regs, 0, 255, SIGSEGV);
2226 __asm__ __volatile__("flushw"); 2226 __asm__ __volatile__("flushw");
2227 __show_regs(regs); 2227 __show_regs(regs);
2228 add_taint(TAINT_DIE);
2228 if (regs->tstate & TSTATE_PRIV) { 2229 if (regs->tstate & TSTATE_PRIV) {
2229 struct reg_window *rw = (struct reg_window *) 2230 struct reg_window *rw = (struct reg_window *)
2230 (regs->u_regs[UREG_FP] + STACK_BIAS); 2231 (regs->u_regs[UREG_FP] + STACK_BIAS);
2231 2232
2232 /* Stop the back trace when we hit userland or we 2233 /* Stop the back trace when we hit userland or we
2233 * find some badly aligned kernel stack. 2234 * find some badly aligned kernel stack.
2234 */ 2235 */
2235 while (rw && 2236 while (rw &&
2236 count++ < 30 && 2237 count++ < 30 &&
2237 is_kernel_stack(current, rw)) { 2238 is_kernel_stack(current, rw)) {
2238 printk("Caller[%016lx]", rw->ins[7]); 2239 printk("Caller[%016lx]", rw->ins[7]);
2239 print_symbol(": %s", rw->ins[7]); 2240 print_symbol(": %s", rw->ins[7]);
2240 printk("\n"); 2241 printk("\n");
2241 2242
2242 rw = kernel_stack_up(rw); 2243 rw = kernel_stack_up(rw);
2243 } 2244 }
2244 instruction_dump ((unsigned int *) regs->tpc); 2245 instruction_dump ((unsigned int *) regs->tpc);
2245 } else { 2246 } else {
2246 if (test_thread_flag(TIF_32BIT)) { 2247 if (test_thread_flag(TIF_32BIT)) {
2247 regs->tpc &= 0xffffffff; 2248 regs->tpc &= 0xffffffff;
2248 regs->tnpc &= 0xffffffff; 2249 regs->tnpc &= 0xffffffff;
2249 } 2250 }
2250 user_instruction_dump ((unsigned int __user *) regs->tpc); 2251 user_instruction_dump ((unsigned int __user *) regs->tpc);
2251 } 2252 }
2252 #if 0 2253 #if 0
2253 #ifdef CONFIG_SMP 2254 #ifdef CONFIG_SMP
2254 smp_report_regs(); 2255 smp_report_regs();
2255 #endif 2256 #endif
2256 #endif 2257 #endif
2257 if (regs->tstate & TSTATE_PRIV) 2258 if (regs->tstate & TSTATE_PRIV)
2258 do_exit(SIGKILL); 2259 do_exit(SIGKILL);
2259 do_exit(SIGSEGV); 2260 do_exit(SIGSEGV);
2260 } 2261 }
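
The add_taint(TAINT_DIE) call is the one line this commit adds to the sparc64 die path: the taint mask is sticky, so once any OOPS or BUG has fired, every later oops and SysRq dump reports the kernel as tainted. A rough sketch of the generic machinery being hooked, per kernel/panic.c of this era (simplified, not the literal code):

	/* Sketch of the generic taint machinery (simplified). */
	int tainted;			/* global, sticky bitmask */

	void add_taint(unsigned flag)
	{
		tainted |= flag;	/* never cleared at runtime */
	}

	/* print_tainted() later renders the mask ('D' for TAINT_DIE)
	 * in the header of every subsequent oops and register dump.
	 */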
2261 2262
2262 #define VIS_OPCODE_MASK ((0x3 << 30) | (0x3f << 19)) 2263 #define VIS_OPCODE_MASK ((0x3 << 30) | (0x3f << 19))
2263 #define VIS_OPCODE_VAL ((0x2 << 30) | (0x36 << 19)) 2264 #define VIS_OPCODE_VAL ((0x2 << 30) | (0x36 << 19))
2264 2265
2265 extern int handle_popc(u32 insn, struct pt_regs *regs); 2266 extern int handle_popc(u32 insn, struct pt_regs *regs);
2266 extern int handle_ldf_stq(u32 insn, struct pt_regs *regs); 2267 extern int handle_ldf_stq(u32 insn, struct pt_regs *regs);
2267 extern int vis_emul(struct pt_regs *, unsigned int); 2268 extern int vis_emul(struct pt_regs *, unsigned int);
2268 2269
2269 void do_illegal_instruction(struct pt_regs *regs) 2270 void do_illegal_instruction(struct pt_regs *regs)
2270 { 2271 {
2271 unsigned long pc = regs->tpc; 2272 unsigned long pc = regs->tpc;
2272 unsigned long tstate = regs->tstate; 2273 unsigned long tstate = regs->tstate;
2273 u32 insn; 2274 u32 insn;
2274 siginfo_t info; 2275 siginfo_t info;
2275 2276
2276 if (notify_die(DIE_TRAP, "illegal instruction", regs, 2277 if (notify_die(DIE_TRAP, "illegal instruction", regs,
2277 0, 0x10, SIGILL) == NOTIFY_STOP) 2278 0, 0x10, SIGILL) == NOTIFY_STOP)
2278 return; 2279 return;
2279 2280
2280 if (tstate & TSTATE_PRIV) 2281 if (tstate & TSTATE_PRIV)
2281 die_if_kernel("Kernel illegal instruction", regs); 2282 die_if_kernel("Kernel illegal instruction", regs);
2282 if (test_thread_flag(TIF_32BIT)) 2283 if (test_thread_flag(TIF_32BIT))
2283 pc = (u32)pc; 2284 pc = (u32)pc;
2284 if (get_user(insn, (u32 __user *) pc) != -EFAULT) { 2285 if (get_user(insn, (u32 __user *) pc) != -EFAULT) {
2285 if ((insn & 0xc1ffc000) == 0x81700000) /* POPC */ { 2286 if ((insn & 0xc1ffc000) == 0x81700000) /* POPC */ {
2286 if (handle_popc(insn, regs)) 2287 if (handle_popc(insn, regs))
2287 return; 2288 return;
2288 } else if ((insn & 0xc1580000) == 0xc1100000) /* LDQ/STQ */ { 2289 } else if ((insn & 0xc1580000) == 0xc1100000) /* LDQ/STQ */ {
2289 if (handle_ldf_stq(insn, regs)) 2290 if (handle_ldf_stq(insn, regs))
2290 return; 2291 return;
2291 } else if (tlb_type == hypervisor) { 2292 } else if (tlb_type == hypervisor) {
2292 if ((insn & VIS_OPCODE_MASK) == VIS_OPCODE_VAL) { 2293 if ((insn & VIS_OPCODE_MASK) == VIS_OPCODE_VAL) {
2293 if (!vis_emul(regs, insn)) 2294 if (!vis_emul(regs, insn))
2294 return; 2295 return;
2295 } else { 2296 } else {
2296 struct fpustate *f = FPUSTATE; 2297 struct fpustate *f = FPUSTATE;
2297 2298
2298 /* XXX maybe verify XFSR bits like 2299 /* XXX maybe verify XFSR bits like
2299 * XXX do_fpother() does? 2300 * XXX do_fpother() does?
2300 */ 2301 */
2301 if (do_mathemu(regs, f)) 2302 if (do_mathemu(regs, f))
2302 return; 2303 return;
2303 } 2304 }
2304 } 2305 }
2305 } 2306 }
2306 info.si_signo = SIGILL; 2307 info.si_signo = SIGILL;
2307 info.si_errno = 0; 2308 info.si_errno = 0;
2308 info.si_code = ILL_ILLOPC; 2309 info.si_code = ILL_ILLOPC;
2309 info.si_addr = (void __user *)pc; 2310 info.si_addr = (void __user *)pc;
2310 info.si_trapno = 0; 2311 info.si_trapno = 0;
2311 force_sig_info(SIGILL, &info, current); 2312 force_sig_info(SIGILL, &info, current);
2312 } 2313 }
2313 2314
2314 extern void kernel_unaligned_trap(struct pt_regs *regs, unsigned int insn); 2315 extern void kernel_unaligned_trap(struct pt_regs *regs, unsigned int insn);
2315 2316
2316 void mem_address_unaligned(struct pt_regs *regs, unsigned long sfar, unsigned long sfsr) 2317 void mem_address_unaligned(struct pt_regs *regs, unsigned long sfar, unsigned long sfsr)
2317 { 2318 {
2318 siginfo_t info; 2319 siginfo_t info;
2319 2320
2320 if (notify_die(DIE_TRAP, "memory address unaligned", regs, 2321 if (notify_die(DIE_TRAP, "memory address unaligned", regs,
2321 0, 0x34, SIGSEGV) == NOTIFY_STOP) 2322 0, 0x34, SIGSEGV) == NOTIFY_STOP)
2322 return; 2323 return;
2323 2324
2324 if (regs->tstate & TSTATE_PRIV) { 2325 if (regs->tstate & TSTATE_PRIV) {
2325 kernel_unaligned_trap(regs, *((unsigned int *)regs->tpc)); 2326 kernel_unaligned_trap(regs, *((unsigned int *)regs->tpc));
2326 return; 2327 return;
2327 } 2328 }
2328 info.si_signo = SIGBUS; 2329 info.si_signo = SIGBUS;
2329 info.si_errno = 0; 2330 info.si_errno = 0;
2330 info.si_code = BUS_ADRALN; 2331 info.si_code = BUS_ADRALN;
2331 info.si_addr = (void __user *)sfar; 2332 info.si_addr = (void __user *)sfar;
2332 info.si_trapno = 0; 2333 info.si_trapno = 0;
2333 force_sig_info(SIGBUS, &info, current); 2334 force_sig_info(SIGBUS, &info, current);
2334 } 2335 }
2335 2336
2336 void sun4v_do_mna(struct pt_regs *regs, unsigned long addr, unsigned long type_ctx) 2337 void sun4v_do_mna(struct pt_regs *regs, unsigned long addr, unsigned long type_ctx)
2337 { 2338 {
2338 siginfo_t info; 2339 siginfo_t info;
2339 2340
2340 if (notify_die(DIE_TRAP, "memory address unaligned", regs, 2341 if (notify_die(DIE_TRAP, "memory address unaligned", regs,
2341 0, 0x34, SIGSEGV) == NOTIFY_STOP) 2342 0, 0x34, SIGSEGV) == NOTIFY_STOP)
2342 return; 2343 return;
2343 2344
2344 if (regs->tstate & TSTATE_PRIV) { 2345 if (regs->tstate & TSTATE_PRIV) {
2345 kernel_unaligned_trap(regs, *((unsigned int *)regs->tpc)); 2346 kernel_unaligned_trap(regs, *((unsigned int *)regs->tpc));
2346 return; 2347 return;
2347 } 2348 }
2348 info.si_signo = SIGBUS; 2349 info.si_signo = SIGBUS;
2349 info.si_errno = 0; 2350 info.si_errno = 0;
2350 info.si_code = BUS_ADRALN; 2351 info.si_code = BUS_ADRALN;
2351 info.si_addr = (void __user *) addr; 2352 info.si_addr = (void __user *) addr;
2352 info.si_trapno = 0; 2353 info.si_trapno = 0;
2353 force_sig_info(SIGBUS, &info, current); 2354 force_sig_info(SIGBUS, &info, current);
2354 } 2355 }
2355 2356
2356 void do_privop(struct pt_regs *regs) 2357 void do_privop(struct pt_regs *regs)
2357 { 2358 {
2358 siginfo_t info; 2359 siginfo_t info;
2359 2360
2360 if (notify_die(DIE_TRAP, "privileged operation", regs, 2361 if (notify_die(DIE_TRAP, "privileged operation", regs,
2361 0, 0x11, SIGILL) == NOTIFY_STOP) 2362 0, 0x11, SIGILL) == NOTIFY_STOP)
2362 return; 2363 return;
2363 2364
2364 if (test_thread_flag(TIF_32BIT)) { 2365 if (test_thread_flag(TIF_32BIT)) {
2365 regs->tpc &= 0xffffffff; 2366 regs->tpc &= 0xffffffff;
2366 regs->tnpc &= 0xffffffff; 2367 regs->tnpc &= 0xffffffff;
2367 } 2368 }
2368 info.si_signo = SIGILL; 2369 info.si_signo = SIGILL;
2369 info.si_errno = 0; 2370 info.si_errno = 0;
2370 info.si_code = ILL_PRVOPC; 2371 info.si_code = ILL_PRVOPC;
2371 info.si_addr = (void __user *)regs->tpc; 2372 info.si_addr = (void __user *)regs->tpc;
2372 info.si_trapno = 0; 2373 info.si_trapno = 0;
2373 force_sig_info(SIGILL, &info, current); 2374 force_sig_info(SIGILL, &info, current);
2374 } 2375 }
2375 2376
2376 void do_privact(struct pt_regs *regs) 2377 void do_privact(struct pt_regs *regs)
2377 { 2378 {
2378 do_privop(regs); 2379 do_privop(regs);
2379 } 2380 }
2380 2381
2381 /* Trap level 1 stuff or other traps we should never see... */ 2382 /* Trap level 1 stuff or other traps we should never see... */
2382 void do_cee(struct pt_regs *regs) 2383 void do_cee(struct pt_regs *regs)
2383 { 2384 {
2384 die_if_kernel("TL0: Cache Error Exception", regs); 2385 die_if_kernel("TL0: Cache Error Exception", regs);
2385 } 2386 }
2386 2387
2387 void do_cee_tl1(struct pt_regs *regs) 2388 void do_cee_tl1(struct pt_regs *regs)
2388 { 2389 {
2389 dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2390 dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
2390 die_if_kernel("TL1: Cache Error Exception", regs); 2391 die_if_kernel("TL1: Cache Error Exception", regs);
2391 } 2392 }
2392 2393
2393 void do_dae_tl1(struct pt_regs *regs) 2394 void do_dae_tl1(struct pt_regs *regs)
2394 { 2395 {
2395 dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2396 dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
2396 die_if_kernel("TL1: Data Access Exception", regs); 2397 die_if_kernel("TL1: Data Access Exception", regs);
2397 } 2398 }
2398 2399
2399 void do_iae_tl1(struct pt_regs *regs) 2400 void do_iae_tl1(struct pt_regs *regs)
2400 { 2401 {
2401 dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2402 dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
2402 die_if_kernel("TL1: Instruction Access Exception", regs); 2403 die_if_kernel("TL1: Instruction Access Exception", regs);
2403 } 2404 }
2404 2405
2405 void do_div0_tl1(struct pt_regs *regs) 2406 void do_div0_tl1(struct pt_regs *regs)
2406 { 2407 {
2407 dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2408 dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
2408 die_if_kernel("TL1: DIV0 Exception", regs); 2409 die_if_kernel("TL1: DIV0 Exception", regs);
2409 } 2410 }
2410 2411
2411 void do_fpdis_tl1(struct pt_regs *regs) 2412 void do_fpdis_tl1(struct pt_regs *regs)
2412 { 2413 {
2413 dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2414 dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
2414 die_if_kernel("TL1: FPU Disabled", regs); 2415 die_if_kernel("TL1: FPU Disabled", regs);
2415 } 2416 }
2416 2417
2417 void do_fpieee_tl1(struct pt_regs *regs) 2418 void do_fpieee_tl1(struct pt_regs *regs)
2418 { 2419 {
2419 dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2420 dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
2420 die_if_kernel("TL1: FPU IEEE Exception", regs); 2421 die_if_kernel("TL1: FPU IEEE Exception", regs);
2421 } 2422 }
2422 2423
2423 void do_fpother_tl1(struct pt_regs *regs) 2424 void do_fpother_tl1(struct pt_regs *regs)
2424 { 2425 {
2425 dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2426 dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
2426 die_if_kernel("TL1: FPU Other Exception", regs); 2427 die_if_kernel("TL1: FPU Other Exception", regs);
2427 } 2428 }
2428 2429
2429 void do_ill_tl1(struct pt_regs *regs) 2430 void do_ill_tl1(struct pt_regs *regs)
2430 { 2431 {
2431 dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2432 dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
2432 die_if_kernel("TL1: Illegal Instruction Exception", regs); 2433 die_if_kernel("TL1: Illegal Instruction Exception", regs);
2433 } 2434 }
2434 2435
2435 void do_irq_tl1(struct pt_regs *regs) 2436 void do_irq_tl1(struct pt_regs *regs)
2436 { 2437 {
2437 dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2438 dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
2438 die_if_kernel("TL1: IRQ Exception", regs); 2439 die_if_kernel("TL1: IRQ Exception", regs);
2439 } 2440 }
2440 2441
2441 void do_lddfmna_tl1(struct pt_regs *regs) 2442 void do_lddfmna_tl1(struct pt_regs *regs)
2442 { 2443 {
2443 dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2444 dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
2444 die_if_kernel("TL1: LDDF Exception", regs); 2445 die_if_kernel("TL1: LDDF Exception", regs);
2445 } 2446 }
2446 2447
2447 void do_stdfmna_tl1(struct pt_regs *regs) 2448 void do_stdfmna_tl1(struct pt_regs *regs)
2448 { 2449 {
2449 dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2450 dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
2450 die_if_kernel("TL1: STDF Exception", regs); 2451 die_if_kernel("TL1: STDF Exception", regs);
2451 } 2452 }
2452 2453
2453 void do_paw(struct pt_regs *regs) 2454 void do_paw(struct pt_regs *regs)
2454 { 2455 {
2455 die_if_kernel("TL0: Phys Watchpoint Exception", regs); 2456 die_if_kernel("TL0: Phys Watchpoint Exception", regs);
2456 } 2457 }
2457 2458
2458 void do_paw_tl1(struct pt_regs *regs) 2459 void do_paw_tl1(struct pt_regs *regs)
2459 { 2460 {
2460 dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2461 dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
2461 die_if_kernel("TL1: Phys Watchpoint Exception", regs); 2462 die_if_kernel("TL1: Phys Watchpoint Exception", regs);
2462 } 2463 }
2463 2464
2464 void do_vaw(struct pt_regs *regs) 2465 void do_vaw(struct pt_regs *regs)
2465 { 2466 {
2466 die_if_kernel("TL0: Virt Watchpoint Exception", regs); 2467 die_if_kernel("TL0: Virt Watchpoint Exception", regs);
2467 } 2468 }
2468 2469
2469 void do_vaw_tl1(struct pt_regs *regs) 2470 void do_vaw_tl1(struct pt_regs *regs)
2470 { 2471 {
2471 dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2472 dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
2472 die_if_kernel("TL1: Virt Watchpoint Exception", regs); 2473 die_if_kernel("TL1: Virt Watchpoint Exception", regs);
2473 } 2474 }
2474 2475
2475 void do_tof_tl1(struct pt_regs *regs) 2476 void do_tof_tl1(struct pt_regs *regs)
2476 { 2477 {
2477 dump_tl1_traplog((struct tl1_traplog *)(regs + 1)); 2478 dump_tl1_traplog((struct tl1_traplog *)(regs + 1));
2478 die_if_kernel("TL1: Tag Overflow Exception", regs); 2479 die_if_kernel("TL1: Tag Overflow Exception", regs);
2479 } 2480 }
2480 2481
2481 void do_getpsr(struct pt_regs *regs) 2482 void do_getpsr(struct pt_regs *regs)
2482 { 2483 {
2483 regs->u_regs[UREG_I0] = tstate_to_psr(regs->tstate); 2484 regs->u_regs[UREG_I0] = tstate_to_psr(regs->tstate);
2484 regs->tpc = regs->tnpc; 2485 regs->tpc = regs->tnpc;
2485 regs->tnpc += 4; 2486 regs->tnpc += 4;
2486 if (test_thread_flag(TIF_32BIT)) { 2487 if (test_thread_flag(TIF_32BIT)) {
2487 regs->tpc &= 0xffffffff; 2488 regs->tpc &= 0xffffffff;
2488 regs->tnpc &= 0xffffffff; 2489 regs->tnpc &= 0xffffffff;
2489 } 2490 }
2490 } 2491 }
2491 2492
2492 struct trap_per_cpu trap_block[NR_CPUS]; 2493 struct trap_per_cpu trap_block[NR_CPUS];
2493 2494
2494 /* This can get invoked before sched_init() so play it super safe 2495 /* This can get invoked before sched_init() so play it super safe
2495 * and use hard_smp_processor_id(). 2496 * and use hard_smp_processor_id().
2496 */ 2497 */
2497 void init_cur_cpu_trap(struct thread_info *t) 2498 void init_cur_cpu_trap(struct thread_info *t)
2498 { 2499 {
2499 int cpu = hard_smp_processor_id(); 2500 int cpu = hard_smp_processor_id();
2500 struct trap_per_cpu *p = &trap_block[cpu]; 2501 struct trap_per_cpu *p = &trap_block[cpu];
2501 2502
2502 p->thread = t; 2503 p->thread = t;
2503 p->pgd_paddr = 0; 2504 p->pgd_paddr = 0;
2504 } 2505 }
2505 2506
2506 extern void thread_info_offsets_are_bolixed_dave(void); 2507 extern void thread_info_offsets_are_bolixed_dave(void);
2507 extern void trap_per_cpu_offsets_are_bolixed_dave(void); 2508 extern void trap_per_cpu_offsets_are_bolixed_dave(void);
2508 extern void tsb_config_offsets_are_bolixed_dave(void); 2509 extern void tsb_config_offsets_are_bolixed_dave(void);
2509 2510
2510 /* Only invoked on boot processor. */ 2511 /* Only invoked on boot processor. */
2511 void __init trap_init(void) 2512 void __init trap_init(void)
2512 { 2513 {
2513 /* Compile time sanity check. */ 2514 /* Compile time sanity check. */
2514 if (TI_TASK != offsetof(struct thread_info, task) || 2515 if (TI_TASK != offsetof(struct thread_info, task) ||
2515 TI_FLAGS != offsetof(struct thread_info, flags) || 2516 TI_FLAGS != offsetof(struct thread_info, flags) ||
2516 TI_CPU != offsetof(struct thread_info, cpu) || 2517 TI_CPU != offsetof(struct thread_info, cpu) ||
2517 TI_FPSAVED != offsetof(struct thread_info, fpsaved) || 2518 TI_FPSAVED != offsetof(struct thread_info, fpsaved) ||
2518 TI_KSP != offsetof(struct thread_info, ksp) || 2519 TI_KSP != offsetof(struct thread_info, ksp) ||
2519 TI_FAULT_ADDR != offsetof(struct thread_info, fault_address) || 2520 TI_FAULT_ADDR != offsetof(struct thread_info, fault_address) ||
2520 TI_KREGS != offsetof(struct thread_info, kregs) || 2521 TI_KREGS != offsetof(struct thread_info, kregs) ||
2521 TI_UTRAPS != offsetof(struct thread_info, utraps) || 2522 TI_UTRAPS != offsetof(struct thread_info, utraps) ||
2522 TI_EXEC_DOMAIN != offsetof(struct thread_info, exec_domain) || 2523 TI_EXEC_DOMAIN != offsetof(struct thread_info, exec_domain) ||
2523 TI_REG_WINDOW != offsetof(struct thread_info, reg_window) || 2524 TI_REG_WINDOW != offsetof(struct thread_info, reg_window) ||
2524 TI_RWIN_SPTRS != offsetof(struct thread_info, rwbuf_stkptrs) || 2525 TI_RWIN_SPTRS != offsetof(struct thread_info, rwbuf_stkptrs) ||
2525 TI_GSR != offsetof(struct thread_info, gsr) || 2526 TI_GSR != offsetof(struct thread_info, gsr) ||
2526 TI_XFSR != offsetof(struct thread_info, xfsr) || 2527 TI_XFSR != offsetof(struct thread_info, xfsr) ||
2527 TI_USER_CNTD0 != offsetof(struct thread_info, user_cntd0) || 2528 TI_USER_CNTD0 != offsetof(struct thread_info, user_cntd0) ||
2528 TI_USER_CNTD1 != offsetof(struct thread_info, user_cntd1) || 2529 TI_USER_CNTD1 != offsetof(struct thread_info, user_cntd1) ||
2529 TI_KERN_CNTD0 != offsetof(struct thread_info, kernel_cntd0) || 2530 TI_KERN_CNTD0 != offsetof(struct thread_info, kernel_cntd0) ||
2530 TI_KERN_CNTD1 != offsetof(struct thread_info, kernel_cntd1) || 2531 TI_KERN_CNTD1 != offsetof(struct thread_info, kernel_cntd1) ||
2531 TI_PCR != offsetof(struct thread_info, pcr_reg) || 2532 TI_PCR != offsetof(struct thread_info, pcr_reg) ||
2532 TI_PRE_COUNT != offsetof(struct thread_info, preempt_count) || 2533 TI_PRE_COUNT != offsetof(struct thread_info, preempt_count) ||
2533 TI_NEW_CHILD != offsetof(struct thread_info, new_child) || 2534 TI_NEW_CHILD != offsetof(struct thread_info, new_child) ||
2534 TI_SYS_NOERROR != offsetof(struct thread_info, syscall_noerror) || 2535 TI_SYS_NOERROR != offsetof(struct thread_info, syscall_noerror) ||
2535 TI_RESTART_BLOCK != offsetof(struct thread_info, restart_block) || 2536 TI_RESTART_BLOCK != offsetof(struct thread_info, restart_block) ||
2536 TI_KUNA_REGS != offsetof(struct thread_info, kern_una_regs) || 2537 TI_KUNA_REGS != offsetof(struct thread_info, kern_una_regs) ||
2537 TI_KUNA_INSN != offsetof(struct thread_info, kern_una_insn) || 2538 TI_KUNA_INSN != offsetof(struct thread_info, kern_una_insn) ||
2538 TI_FPREGS != offsetof(struct thread_info, fpregs) || 2539 TI_FPREGS != offsetof(struct thread_info, fpregs) ||
2539 (TI_FPREGS & (64 - 1))) 2540 (TI_FPREGS & (64 - 1)))
2540 thread_info_offsets_are_bolixed_dave(); 2541 thread_info_offsets_are_bolixed_dave();
2541 2542
2542 if (TRAP_PER_CPU_THREAD != offsetof(struct trap_per_cpu, thread) || 2543 if (TRAP_PER_CPU_THREAD != offsetof(struct trap_per_cpu, thread) ||
2543 (TRAP_PER_CPU_PGD_PADDR != 2544 (TRAP_PER_CPU_PGD_PADDR !=
2544 offsetof(struct trap_per_cpu, pgd_paddr)) || 2545 offsetof(struct trap_per_cpu, pgd_paddr)) ||
2545 (TRAP_PER_CPU_CPU_MONDO_PA != 2546 (TRAP_PER_CPU_CPU_MONDO_PA !=
2546 offsetof(struct trap_per_cpu, cpu_mondo_pa)) || 2547 offsetof(struct trap_per_cpu, cpu_mondo_pa)) ||
2547 (TRAP_PER_CPU_DEV_MONDO_PA != 2548 (TRAP_PER_CPU_DEV_MONDO_PA !=
2548 offsetof(struct trap_per_cpu, dev_mondo_pa)) || 2549 offsetof(struct trap_per_cpu, dev_mondo_pa)) ||
2549 (TRAP_PER_CPU_RESUM_MONDO_PA != 2550 (TRAP_PER_CPU_RESUM_MONDO_PA !=
2550 offsetof(struct trap_per_cpu, resum_mondo_pa)) || 2551 offsetof(struct trap_per_cpu, resum_mondo_pa)) ||
2551 (TRAP_PER_CPU_RESUM_KBUF_PA != 2552 (TRAP_PER_CPU_RESUM_KBUF_PA !=
2552 offsetof(struct trap_per_cpu, resum_kernel_buf_pa)) || 2553 offsetof(struct trap_per_cpu, resum_kernel_buf_pa)) ||
2553 (TRAP_PER_CPU_NONRESUM_MONDO_PA != 2554 (TRAP_PER_CPU_NONRESUM_MONDO_PA !=
2554 offsetof(struct trap_per_cpu, nonresum_mondo_pa)) || 2555 offsetof(struct trap_per_cpu, nonresum_mondo_pa)) ||
2555 (TRAP_PER_CPU_NONRESUM_KBUF_PA != 2556 (TRAP_PER_CPU_NONRESUM_KBUF_PA !=
2556 offsetof(struct trap_per_cpu, nonresum_kernel_buf_pa)) || 2557 offsetof(struct trap_per_cpu, nonresum_kernel_buf_pa)) ||
2557 (TRAP_PER_CPU_FAULT_INFO != 2558 (TRAP_PER_CPU_FAULT_INFO !=
2558 offsetof(struct trap_per_cpu, fault_info)) || 2559 offsetof(struct trap_per_cpu, fault_info)) ||
2559 (TRAP_PER_CPU_CPU_MONDO_BLOCK_PA != 2560 (TRAP_PER_CPU_CPU_MONDO_BLOCK_PA !=
2560 offsetof(struct trap_per_cpu, cpu_mondo_block_pa)) || 2561 offsetof(struct trap_per_cpu, cpu_mondo_block_pa)) ||
2561 (TRAP_PER_CPU_CPU_LIST_PA != 2562 (TRAP_PER_CPU_CPU_LIST_PA !=
2562 offsetof(struct trap_per_cpu, cpu_list_pa)) || 2563 offsetof(struct trap_per_cpu, cpu_list_pa)) ||
2563 (TRAP_PER_CPU_TSB_HUGE != 2564 (TRAP_PER_CPU_TSB_HUGE !=
2564 offsetof(struct trap_per_cpu, tsb_huge)) || 2565 offsetof(struct trap_per_cpu, tsb_huge)) ||
2565 (TRAP_PER_CPU_TSB_HUGE_TEMP != 2566 (TRAP_PER_CPU_TSB_HUGE_TEMP !=
2566 offsetof(struct trap_per_cpu, tsb_huge_temp)) || 2567 offsetof(struct trap_per_cpu, tsb_huge_temp)) ||
2567 (TRAP_PER_CPU_IRQ_WORKLIST != 2568 (TRAP_PER_CPU_IRQ_WORKLIST !=
2568 offsetof(struct trap_per_cpu, irq_worklist)) || 2569 offsetof(struct trap_per_cpu, irq_worklist)) ||
2569 (TRAP_PER_CPU_CPU_MONDO_QMASK != 2570 (TRAP_PER_CPU_CPU_MONDO_QMASK !=
2570 offsetof(struct trap_per_cpu, cpu_mondo_qmask)) || 2571 offsetof(struct trap_per_cpu, cpu_mondo_qmask)) ||
2571 (TRAP_PER_CPU_DEV_MONDO_QMASK != 2572 (TRAP_PER_CPU_DEV_MONDO_QMASK !=
2572 offsetof(struct trap_per_cpu, dev_mondo_qmask)) || 2573 offsetof(struct trap_per_cpu, dev_mondo_qmask)) ||
2573 (TRAP_PER_CPU_RESUM_QMASK != 2574 (TRAP_PER_CPU_RESUM_QMASK !=
2574 offsetof(struct trap_per_cpu, resum_qmask)) || 2575 offsetof(struct trap_per_cpu, resum_qmask)) ||
2575 (TRAP_PER_CPU_NONRESUM_QMASK != 2576 (TRAP_PER_CPU_NONRESUM_QMASK !=
2576 offsetof(struct trap_per_cpu, nonresum_qmask))) 2577 offsetof(struct trap_per_cpu, nonresum_qmask)))
2577 trap_per_cpu_offsets_are_bolixed_dave(); 2578 trap_per_cpu_offsets_are_bolixed_dave();
2578 2579
2579 if ((TSB_CONFIG_TSB != 2580 if ((TSB_CONFIG_TSB !=
2580 offsetof(struct tsb_config, tsb)) || 2581 offsetof(struct tsb_config, tsb)) ||
2581 (TSB_CONFIG_RSS_LIMIT != 2582 (TSB_CONFIG_RSS_LIMIT !=
2582 offsetof(struct tsb_config, tsb_rss_limit)) || 2583 offsetof(struct tsb_config, tsb_rss_limit)) ||
2583 (TSB_CONFIG_NENTRIES != 2584 (TSB_CONFIG_NENTRIES !=
2584 offsetof(struct tsb_config, tsb_nentries)) || 2585 offsetof(struct tsb_config, tsb_nentries)) ||
2585 (TSB_CONFIG_REG_VAL != 2586 (TSB_CONFIG_REG_VAL !=
2586 offsetof(struct tsb_config, tsb_reg_val)) || 2587 offsetof(struct tsb_config, tsb_reg_val)) ||
2587 (TSB_CONFIG_MAP_VADDR != 2588 (TSB_CONFIG_MAP_VADDR !=
2588 offsetof(struct tsb_config, tsb_map_vaddr)) || 2589 offsetof(struct tsb_config, tsb_map_vaddr)) ||
2589 (TSB_CONFIG_MAP_PTE != 2590 (TSB_CONFIG_MAP_PTE !=
2590 offsetof(struct tsb_config, tsb_map_pte))) 2591 offsetof(struct tsb_config, tsb_map_pte)))
2591 tsb_config_offsets_are_bolixed_dave(); 2592 tsb_config_offsets_are_bolixed_dave();
2592 2593
2593 /* Attach to the address space of init_task. On SMP we 2594 /* Attach to the address space of init_task. On SMP we
2594 * do this in smp.c:smp_callin for other cpus. 2595 * do this in smp.c:smp_callin for other cpus.
2595 */ 2596 */
2596 atomic_inc(&init_mm.mm_count); 2597 atomic_inc(&init_mm.mm_count);
2597 current->active_mm = &init_mm; 2598 current->active_mm = &init_mm;
2598 } 2599 }
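
The *_are_bolixed_dave() functions are declared extern but defined nowhere: if any offsetof() comparison fails, the call cannot be optimized away and the final link dies with an unresolved symbol, turning a stale asm-offsets value into a build failure instead of a runtime crash. The idiom in isolation, with hypothetical names:

	/* Sketch of the link-time assertion idiom used above.  The
	 * checker is never defined, so a failing compile-time-constant
	 * comparison leaves an unresolved reference at link time.
	 * MY_FIELD_OFFSET and struct my_struct are hypothetical.
	 */
	extern void my_offsets_are_wrong(void);	/* no definition exists */

	static inline void check_my_offsets(void)
	{
		if (MY_FIELD_OFFSET != offsetof(struct my_struct, field))
			my_offsets_are_wrong();	/* survives only on mismatch */
	}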
2599 2600
arch/x86_64/kernel/traps.c
1 /* 1 /*
2 * linux/arch/x86-64/traps.c 2 * linux/arch/x86-64/traps.c
3 * 3 *
4 * Copyright (C) 1991, 1992 Linus Torvalds 4 * Copyright (C) 1991, 1992 Linus Torvalds
5 * Copyright (C) 2000, 2001, 2002 Andi Kleen, SuSE Labs 5 * Copyright (C) 2000, 2001, 2002 Andi Kleen, SuSE Labs
6 * 6 *
7 * Pentium III FXSR, SSE support 7 * Pentium III FXSR, SSE support
8 * Gareth Hughes <gareth@valinux.com>, May 2000 8 * Gareth Hughes <gareth@valinux.com>, May 2000
9 */ 9 */
10 10
11 /* 11 /*
12 * 'Traps.c' handles hardware traps and faults after we have saved some 12 * 'Traps.c' handles hardware traps and faults after we have saved some
13 * state in 'entry.S'. 13 * state in 'entry.S'.
14 */ 14 */
15 #include <linux/sched.h> 15 #include <linux/sched.h>
16 #include <linux/kernel.h> 16 #include <linux/kernel.h>
17 #include <linux/string.h> 17 #include <linux/string.h>
18 #include <linux/errno.h> 18 #include <linux/errno.h>
19 #include <linux/ptrace.h> 19 #include <linux/ptrace.h>
20 #include <linux/timer.h> 20 #include <linux/timer.h>
21 #include <linux/mm.h> 21 #include <linux/mm.h>
22 #include <linux/init.h> 22 #include <linux/init.h>
23 #include <linux/delay.h> 23 #include <linux/delay.h>
24 #include <linux/spinlock.h> 24 #include <linux/spinlock.h>
25 #include <linux/interrupt.h> 25 #include <linux/interrupt.h>
26 #include <linux/kallsyms.h> 26 #include <linux/kallsyms.h>
27 #include <linux/module.h> 27 #include <linux/module.h>
28 #include <linux/moduleparam.h> 28 #include <linux/moduleparam.h>
29 #include <linux/nmi.h> 29 #include <linux/nmi.h>
30 #include <linux/kprobes.h> 30 #include <linux/kprobes.h>
31 #include <linux/kexec.h> 31 #include <linux/kexec.h>
32 #include <linux/unwind.h> 32 #include <linux/unwind.h>
33 #include <linux/uaccess.h> 33 #include <linux/uaccess.h>
34 #include <linux/bug.h> 34 #include <linux/bug.h>
35 #include <linux/kdebug.h> 35 #include <linux/kdebug.h>
36 36
37 #include <asm/system.h> 37 #include <asm/system.h>
38 #include <asm/io.h> 38 #include <asm/io.h>
39 #include <asm/atomic.h> 39 #include <asm/atomic.h>
40 #include <asm/debugreg.h> 40 #include <asm/debugreg.h>
41 #include <asm/desc.h> 41 #include <asm/desc.h>
42 #include <asm/i387.h> 42 #include <asm/i387.h>
43 #include <asm/processor.h> 43 #include <asm/processor.h>
44 #include <asm/unwind.h> 44 #include <asm/unwind.h>
45 #include <asm/smp.h> 45 #include <asm/smp.h>
46 #include <asm/pgalloc.h> 46 #include <asm/pgalloc.h>
47 #include <asm/pda.h> 47 #include <asm/pda.h>
48 #include <asm/proto.h> 48 #include <asm/proto.h>
49 #include <asm/nmi.h> 49 #include <asm/nmi.h>
50 #include <asm/stacktrace.h> 50 #include <asm/stacktrace.h>
51 51
52 asmlinkage void divide_error(void); 52 asmlinkage void divide_error(void);
53 asmlinkage void debug(void); 53 asmlinkage void debug(void);
54 asmlinkage void nmi(void); 54 asmlinkage void nmi(void);
55 asmlinkage void int3(void); 55 asmlinkage void int3(void);
56 asmlinkage void overflow(void); 56 asmlinkage void overflow(void);
57 asmlinkage void bounds(void); 57 asmlinkage void bounds(void);
58 asmlinkage void invalid_op(void); 58 asmlinkage void invalid_op(void);
59 asmlinkage void device_not_available(void); 59 asmlinkage void device_not_available(void);
60 asmlinkage void double_fault(void); 60 asmlinkage void double_fault(void);
61 asmlinkage void coprocessor_segment_overrun(void); 61 asmlinkage void coprocessor_segment_overrun(void);
62 asmlinkage void invalid_TSS(void); 62 asmlinkage void invalid_TSS(void);
63 asmlinkage void segment_not_present(void); 63 asmlinkage void segment_not_present(void);
64 asmlinkage void stack_segment(void); 64 asmlinkage void stack_segment(void);
65 asmlinkage void general_protection(void); 65 asmlinkage void general_protection(void);
66 asmlinkage void page_fault(void); 66 asmlinkage void page_fault(void);
67 asmlinkage void coprocessor_error(void); 67 asmlinkage void coprocessor_error(void);
68 asmlinkage void simd_coprocessor_error(void); 68 asmlinkage void simd_coprocessor_error(void);
69 asmlinkage void reserved(void); 69 asmlinkage void reserved(void);
70 asmlinkage void alignment_check(void); 70 asmlinkage void alignment_check(void);
71 asmlinkage void machine_check(void); 71 asmlinkage void machine_check(void);
72 asmlinkage void spurious_interrupt_bug(void); 72 asmlinkage void spurious_interrupt_bug(void);
73 73
74 static inline void conditional_sti(struct pt_regs *regs) 74 static inline void conditional_sti(struct pt_regs *regs)
75 { 75 {
76 if (regs->eflags & X86_EFLAGS_IF) 76 if (regs->eflags & X86_EFLAGS_IF)
77 local_irq_enable(); 77 local_irq_enable();
78 } 78 }
79 79
80 static inline void preempt_conditional_sti(struct pt_regs *regs) 80 static inline void preempt_conditional_sti(struct pt_regs *regs)
81 { 81 {
82 preempt_disable(); 82 preempt_disable();
83 if (regs->eflags & X86_EFLAGS_IF) 83 if (regs->eflags & X86_EFLAGS_IF)
84 local_irq_enable(); 84 local_irq_enable();
85 } 85 }
86 86
87 static inline void preempt_conditional_cli(struct pt_regs *regs) 87 static inline void preempt_conditional_cli(struct pt_regs *regs)
88 { 88 {
89 if (regs->eflags & X86_EFLAGS_IF) 89 if (regs->eflags & X86_EFLAGS_IF)
90 local_irq_disable(); 90 local_irq_disable();
91 /* Make sure to not schedule here because we could be running 91 /* Make sure to not schedule here because we could be running
92 on an exception stack. */ 92 on an exception stack. */
93 preempt_enable_no_resched(); 93 preempt_enable_no_resched();
94 } 94 }
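
conditional_sti() and friends re-enable interrupts inside a trap handler only when the interrupted context had them enabled (X86_EFLAGS_IF set in the saved flags); the preempt_* pair additionally disables preemption across the window, since the handler may be running on a per-CPU exception stack where scheduling away would be fatal. A typical bracket, sketched rather than taken from a specific handler in this file:

	/* Sketch: how a handler of this era brackets its body with the
	 * helpers above (do_example_trap is hypothetical).
	 */
	asmlinkage void do_example_trap(struct pt_regs *regs, long error_code)
	{
		preempt_conditional_sti(regs);	/* IRQs on iff they were on */

		/* ... inspect regs, deliver a signal, etc.; must not
		 * schedule off this exception stack ...
		 */

		preempt_conditional_cli(regs);	/* IRQs back off, no resched */
	}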
95 95
96 int kstack_depth_to_print = 12; 96 int kstack_depth_to_print = 12;
97 97
98 #ifdef CONFIG_KALLSYMS 98 #ifdef CONFIG_KALLSYMS
99 void printk_address(unsigned long address) 99 void printk_address(unsigned long address)
100 { 100 {
101 unsigned long offset = 0, symsize; 101 unsigned long offset = 0, symsize;
102 const char *symname; 102 const char *symname;
103 char *modname; 103 char *modname;
104 char *delim = ":"; 104 char *delim = ":";
105 char namebuf[128]; 105 char namebuf[128];
106 106
107 symname = kallsyms_lookup(address, &symsize, &offset, 107 symname = kallsyms_lookup(address, &symsize, &offset,
108 &modname, namebuf); 108 &modname, namebuf);
109 if (!symname) { 109 if (!symname) {
110 printk(" [<%016lx>]\n", address); 110 printk(" [<%016lx>]\n", address);
111 return; 111 return;
112 } 112 }
113 if (!modname) 113 if (!modname)
114 modname = delim = ""; 114 modname = delim = "";
115 printk(" [<%016lx>] %s%s%s%s+0x%lx/0x%lx\n", 115 printk(" [<%016lx>] %s%s%s%s+0x%lx/0x%lx\n",
116 address, delim, modname, delim, symname, offset, symsize); 116 address, delim, modname, delim, symname, offset, symsize);
117 } 117 }
118 #else 118 #else
119 void printk_address(unsigned long address) 119 void printk_address(unsigned long address)
120 { 120 {
121 printk(" [<%016lx>]\n", address); 121 printk(" [<%016lx>]\n", address);
122 } 122 }
123 #endif 123 #endif
124 124
125 static unsigned long *in_exception_stack(unsigned cpu, unsigned long stack, 125 static unsigned long *in_exception_stack(unsigned cpu, unsigned long stack,
126 unsigned *usedp, char **idp) 126 unsigned *usedp, char **idp)
127 { 127 {
128 static char ids[][8] = { 128 static char ids[][8] = {
129 [DEBUG_STACK - 1] = "#DB", 129 [DEBUG_STACK - 1] = "#DB",
130 [NMI_STACK - 1] = "NMI", 130 [NMI_STACK - 1] = "NMI",
131 [DOUBLEFAULT_STACK - 1] = "#DF", 131 [DOUBLEFAULT_STACK - 1] = "#DF",
132 [STACKFAULT_STACK - 1] = "#SS", 132 [STACKFAULT_STACK - 1] = "#SS",
133 [MCE_STACK - 1] = "#MC", 133 [MCE_STACK - 1] = "#MC",
134 #if DEBUG_STKSZ > EXCEPTION_STKSZ 134 #if DEBUG_STKSZ > EXCEPTION_STKSZ
135 [N_EXCEPTION_STACKS ... N_EXCEPTION_STACKS + DEBUG_STKSZ / EXCEPTION_STKSZ - 2] = "#DB[?]" 135 [N_EXCEPTION_STACKS ... N_EXCEPTION_STACKS + DEBUG_STKSZ / EXCEPTION_STKSZ - 2] = "#DB[?]"
136 #endif 136 #endif
137 }; 137 };
138 unsigned k; 138 unsigned k;
139 139
140 /* 140 /*
141 * Iterate over all exception stacks, and figure out whether 141 * Iterate over all exception stacks, and figure out whether
142 * 'stack' is in one of them: 142 * 'stack' is in one of them:
143 */ 143 */
144 for (k = 0; k < N_EXCEPTION_STACKS; k++) { 144 for (k = 0; k < N_EXCEPTION_STACKS; k++) {
145 unsigned long end = per_cpu(orig_ist, cpu).ist[k]; 145 unsigned long end = per_cpu(orig_ist, cpu).ist[k];
146 /* 146 /*
147 * Is 'stack' above this exception frame's end? 147 * Is 'stack' above this exception frame's end?
148 * If yes then skip to the next frame. 148 * If yes then skip to the next frame.
149 */ 149 */
150 if (stack >= end) 150 if (stack >= end)
151 continue; 151 continue;
152 /* 152 /*
153 * Is 'stack' above this exception frame's start address? 153 * Is 'stack' above this exception frame's start address?
154 * If yes then we found the right frame. 154 * If yes then we found the right frame.
155 */ 155 */
156 if (stack >= end - EXCEPTION_STKSZ) { 156 if (stack >= end - EXCEPTION_STKSZ) {
157 /* 157 /*
158 * Make sure we only iterate through an exception 158 * Make sure we only iterate through an exception
159 * stack once. If it comes up for the second time 159 * stack once. If it comes up for the second time
160 * then there's something wrong going on - just 160 * then there's something wrong going on - just
161 * break out and return NULL: 161 * break out and return NULL:
162 */ 162 */
163 if (*usedp & (1U << k)) 163 if (*usedp & (1U << k))
164 break; 164 break;
165 *usedp |= 1U << k; 165 *usedp |= 1U << k;
166 *idp = ids[k]; 166 *idp = ids[k];
167 return (unsigned long *)end; 167 return (unsigned long *)end;
168 } 168 }
169 /* 169 /*
170 * If this is a debug stack, and if it has a larger size than 170 * If this is a debug stack, and if it has a larger size than
171 * the usual exception stacks, then 'stack' might still 171 * the usual exception stacks, then 'stack' might still
172 * be within the lower portion of the debug stack: 172 * be within the lower portion of the debug stack:
173 */ 173 */
174 #if DEBUG_STKSZ > EXCEPTION_STKSZ 174 #if DEBUG_STKSZ > EXCEPTION_STKSZ
175 if (k == DEBUG_STACK - 1 && stack >= end - DEBUG_STKSZ) { 175 if (k == DEBUG_STACK - 1 && stack >= end - DEBUG_STKSZ) {
176 unsigned j = N_EXCEPTION_STACKS - 1; 176 unsigned j = N_EXCEPTION_STACKS - 1;
177 177
178 /* 178 /*
179 * Black magic. A large debug stack is composed of 179 * Black magic. A large debug stack is composed of
180 * multiple exception stack entries, which we 180 * multiple exception stack entries, which we
181 * iterate through now. Don't look: 181 * iterate through now. Don't look:
182 */ 182 */
183 do { 183 do {
184 ++j; 184 ++j;
185 end -= EXCEPTION_STKSZ; 185 end -= EXCEPTION_STKSZ;
186 ids[j][4] = '1' + (j - N_EXCEPTION_STACKS); 186 ids[j][4] = '1' + (j - N_EXCEPTION_STACKS);
187 } while (stack < end - EXCEPTION_STKSZ); 187 } while (stack < end - EXCEPTION_STKSZ);
188 if (*usedp & (1U << j)) 188 if (*usedp & (1U << j))
189 break; 189 break;
190 *usedp |= 1U << j; 190 *usedp |= 1U << j;
191 *idp = ids[j]; 191 *idp = ids[j];
192 return (unsigned long *)end; 192 return (unsigned long *)end;
193 } 193 }
194 #endif 194 #endif
195 } 195 }
196 return NULL; 196 return NULL;
197 } 197 }
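
The *usedp bitmask is what keeps this unwind finite: each exception stack is handed out at most once, so a corrupted frame pointing back into an already-visited IST stack ends the walk instead of looping forever. The visited-set in isolation:

	/* Sketch: the bitmask visited-set used above.  Each stack
	 * index may be claimed once; a second sighting aborts.
	 */
	static int claim_stack(unsigned *usedp, unsigned k)
	{
		if (*usedp & (1U << k))
			return 0;		/* already walked: stop */
		*usedp |= 1U << k;
		return 1;
	}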
198 198
199 #define MSG(txt) ops->warning(data, txt) 199 #define MSG(txt) ops->warning(data, txt)
200 200
201 /* 201 /*
202 * x86-64 can have up to three kernel stacks: 202 * x86-64 can have up to three kernel stacks:
203 * process stack 203 * process stack
204 * interrupt stack 204 * interrupt stack
205 * severe exception (double fault, nmi, stack fault, debug, mce) hardware stack 205 * severe exception (double fault, nmi, stack fault, debug, mce) hardware stack
206 */ 206 */
207 207
208 static inline int valid_stack_ptr(struct thread_info *tinfo, void *p) 208 static inline int valid_stack_ptr(struct thread_info *tinfo, void *p)
209 { 209 {
210 void *t = (void *)tinfo; 210 void *t = (void *)tinfo;
211 return p > t && p < t + THREAD_SIZE - 3; 211 return p > t && p < t + THREAD_SIZE - 3;
212 } 212 }
213 213
214 void dump_trace(struct task_struct *tsk, struct pt_regs *regs, 214 void dump_trace(struct task_struct *tsk, struct pt_regs *regs,
215 unsigned long *stack, 215 unsigned long *stack,
216 struct stacktrace_ops *ops, void *data) 216 struct stacktrace_ops *ops, void *data)
217 { 217 {
218 const unsigned cpu = get_cpu(); 218 const unsigned cpu = get_cpu();
219 unsigned long *irqstack_end = (unsigned long*)cpu_pda(cpu)->irqstackptr; 219 unsigned long *irqstack_end = (unsigned long*)cpu_pda(cpu)->irqstackptr;
220 unsigned used = 0; 220 unsigned used = 0;
221 struct thread_info *tinfo; 221 struct thread_info *tinfo;
222 222
223 if (!tsk) 223 if (!tsk)
224 tsk = current; 224 tsk = current;
225 225
226 if (!stack) { 226 if (!stack) {
227 unsigned long dummy; 227 unsigned long dummy;
228 stack = &dummy; 228 stack = &dummy;
229 if (tsk && tsk != current) 229 if (tsk && tsk != current)
230 stack = (unsigned long *)tsk->thread.rsp; 230 stack = (unsigned long *)tsk->thread.rsp;
231 } 231 }
232 232
233 /* 233 /*
234 * Print function call entries within a stack. 'cond' is the 234 * Print function call entries within a stack. 'cond' is the
235 * "end of stackframe" condition, that the 'stack++' 235 * "end of stackframe" condition, that the 'stack++'
236 * iteration will eventually trigger. 236 * iteration will eventually trigger.
237 */ 237 */
238 #define HANDLE_STACK(cond) \ 238 #define HANDLE_STACK(cond) \
239 do while (cond) { \ 239 do while (cond) { \
240 unsigned long addr = *stack++; \ 240 unsigned long addr = *stack++; \
241 /* Use unlocked access here because except for NMIs \ 241 /* Use unlocked access here because except for NMIs \
242 we should be already protected against module unloads */ \ 242 we should be already protected against module unloads */ \
243 if (__kernel_text_address(addr)) { \ 243 if (__kernel_text_address(addr)) { \
244 /* \ 244 /* \
245 * If the address is either in the text segment of the \ 245 * If the address is either in the text segment of the \
246 * kernel, or in the region which contains vmalloc'ed \ 246 * kernel, or in the region which contains vmalloc'ed \
247 * memory, it *may* be the address of a calling \ 247 * memory, it *may* be the address of a calling \
248 * routine; if so, print it so that someone tracing \ 248 * routine; if so, print it so that someone tracing \
249 * down the cause of the crash will be able to figure \ 249 * down the cause of the crash will be able to figure \
250 * out the call path that was taken. \ 250 * out the call path that was taken. \
251 */ \ 251 */ \
252 ops->address(data, addr); \ 252 ops->address(data, addr); \
253 } \ 253 } \
254 } while (0) 254 } while (0)
255 255
256 /* 256 /*
257 * Print function call entries in all stacks, starting at the 257 * Print function call entries in all stacks, starting at the
258 * current stack address. If the stacks consist of nested 258 * current stack address. If the stacks consist of nested
259 * exceptions 259 * exceptions
260 */ 260 */
261 for (;;) { 261 for (;;) {
262 char *id; 262 char *id;
263 unsigned long *estack_end; 263 unsigned long *estack_end;
264 estack_end = in_exception_stack(cpu, (unsigned long)stack, 264 estack_end = in_exception_stack(cpu, (unsigned long)stack,
265 &used, &id); 265 &used, &id);
266 266
267 if (estack_end) { 267 if (estack_end) {
268 if (ops->stack(data, id) < 0) 268 if (ops->stack(data, id) < 0)
269 break; 269 break;
270 HANDLE_STACK (stack < estack_end); 270 HANDLE_STACK (stack < estack_end);
271 ops->stack(data, "<EOE>"); 271 ops->stack(data, "<EOE>");
272 /* 272 /*
273 * We link to the next stack via the 273 * We link to the next stack via the
274 * second-to-last pointer (index -2 to end) in the 274 * second-to-last pointer (index -2 to end) in the
275 * exception stack: 275 * exception stack:
276 */ 276 */
277 stack = (unsigned long *) estack_end[-2]; 277 stack = (unsigned long *) estack_end[-2];
278 continue; 278 continue;
279 } 279 }
280 if (irqstack_end) { 280 if (irqstack_end) {
281 unsigned long *irqstack; 281 unsigned long *irqstack;
282 irqstack = irqstack_end - 282 irqstack = irqstack_end -
283 (IRQSTACKSIZE - 64) / sizeof(*irqstack); 283 (IRQSTACKSIZE - 64) / sizeof(*irqstack);
284 284
285 if (stack >= irqstack && stack < irqstack_end) { 285 if (stack >= irqstack && stack < irqstack_end) {
286 if (ops->stack(data, "IRQ") < 0) 286 if (ops->stack(data, "IRQ") < 0)
287 break; 287 break;
288 HANDLE_STACK (stack < irqstack_end); 288 HANDLE_STACK (stack < irqstack_end);
289 /* 289 /*
290 * We link to the next stack (which would be 290 * We link to the next stack (which would be
291 * the process stack normally) the last 291 * the process stack normally) the last
292 * pointer (index -1 to end) in the IRQ stack: 292 * pointer (index -1 to end) in the IRQ stack:
293 */ 293 */
294 stack = (unsigned long *) (irqstack_end[-1]); 294 stack = (unsigned long *) (irqstack_end[-1]);
295 irqstack_end = NULL; 295 irqstack_end = NULL;
296 ops->stack(data, "EOI"); 296 ops->stack(data, "EOI");
297 continue; 297 continue;
298 } 298 }
299 } 299 }
300 break; 300 break;
301 } 301 }
302 302
303 /* 303 /*
304 * This handles the process stack: 304 * This handles the process stack:
305 */ 305 */
306 tinfo = task_thread_info(tsk); 306 tinfo = task_thread_info(tsk);
307 HANDLE_STACK (valid_stack_ptr(tinfo, stack)); 307 HANDLE_STACK (valid_stack_ptr(tinfo, stack));
308 #undef HANDLE_STACK 308 #undef HANDLE_STACK
309 put_cpu(); 309 put_cpu();
310 } 310 }
311 EXPORT_SYMBOL(dump_trace); 311 EXPORT_SYMBOL(dump_trace);
312 312
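dump_trace() is deliberately policy-free: the stacktrace_ops callbacks decide what happens to each frame, so the same walker can feed consumers other than the oops printer defined next. A minimal sketch, assuming only the callback signatures visible above (the names here are illustrative, not part of this commit), of an ops that collects return addresses into a buffer instead of printing them:

	struct addr_buf {
		unsigned long *entries;
		unsigned int max_entries, nr_entries;
	};

	/* No-op warning callbacks so the walker can always call them. */
	static void collect_warning(void *data, char *msg) { }
	static void collect_warning_symbol(void *data, char *msg, unsigned long sym) { }

	/* Returning 0 keeps the walk going across stack boundaries. */
	static int collect_stack(void *data, char *name)
	{
		return 0;
	}

	/* Store each text address until the buffer is full. */
	static void collect_address(void *data, unsigned long addr)
	{
		struct addr_buf *buf = data;

		if (buf->nr_entries < buf->max_entries)
			buf->entries[buf->nr_entries++] = addr;
	}

	static struct stacktrace_ops collect_ops = {
		.warning	= collect_warning,
		.warning_symbol	= collect_warning_symbol,
		.stack		= collect_stack,
		.address	= collect_address,
	};

	/* Usage: dump_trace(current, NULL, NULL, &collect_ops, &buf); */

This mirrors how the in-tree stack-trace saving code reuses the same walker.
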
313 static void 313 static void
314 print_trace_warning_symbol(void *data, char *msg, unsigned long symbol) 314 print_trace_warning_symbol(void *data, char *msg, unsigned long symbol)
315 { 315 {
316 print_symbol(msg, symbol); 316 print_symbol(msg, symbol);
317 printk("\n"); 317 printk("\n");
318 } 318 }
319 319
320 static void print_trace_warning(void *data, char *msg) 320 static void print_trace_warning(void *data, char *msg)
321 { 321 {
322 printk("%s\n", msg); 322 printk("%s\n", msg);
323 } 323 }
324 324
325 static int print_trace_stack(void *data, char *name) 325 static int print_trace_stack(void *data, char *name)
326 { 326 {
327 printk(" <%s> ", name); 327 printk(" <%s> ", name);
328 return 0; 328 return 0;
329 } 329 }
330 330
331 static void print_trace_address(void *data, unsigned long addr) 331 static void print_trace_address(void *data, unsigned long addr)
332 { 332 {
333 printk_address(addr); 333 printk_address(addr);
334 } 334 }
335 335
336 static struct stacktrace_ops print_trace_ops = { 336 static struct stacktrace_ops print_trace_ops = {
337 .warning = print_trace_warning, 337 .warning = print_trace_warning,
338 .warning_symbol = print_trace_warning_symbol, 338 .warning_symbol = print_trace_warning_symbol,
339 .stack = print_trace_stack, 339 .stack = print_trace_stack,
340 .address = print_trace_address, 340 .address = print_trace_address,
341 }; 341 };
342 342
343 void 343 void
344 show_trace(struct task_struct *tsk, struct pt_regs *regs, unsigned long *stack) 344 show_trace(struct task_struct *tsk, struct pt_regs *regs, unsigned long *stack)
345 { 345 {
346 printk("\nCall Trace:\n"); 346 printk("\nCall Trace:\n");
347 dump_trace(tsk, regs, stack, &print_trace_ops, NULL); 347 dump_trace(tsk, regs, stack, &print_trace_ops, NULL);
348 printk("\n"); 348 printk("\n");
349 } 349 }
350 350
351 static void 351 static void
352 _show_stack(struct task_struct *tsk, struct pt_regs *regs, unsigned long *rsp) 352 _show_stack(struct task_struct *tsk, struct pt_regs *regs, unsigned long *rsp)
353 { 353 {
354 unsigned long *stack; 354 unsigned long *stack;
355 int i; 355 int i;
356 const int cpu = smp_processor_id(); 356 const int cpu = smp_processor_id();
357 unsigned long *irqstack_end = (unsigned long *) (cpu_pda(cpu)->irqstackptr); 357 unsigned long *irqstack_end = (unsigned long *) (cpu_pda(cpu)->irqstackptr);
358 unsigned long *irqstack = (unsigned long *) (cpu_pda(cpu)->irqstackptr - IRQSTACKSIZE); 358 unsigned long *irqstack = (unsigned long *) (cpu_pda(cpu)->irqstackptr - IRQSTACKSIZE);
359 359
360 // debugging aid: "show_stack(NULL, NULL);" prints the 360 // debugging aid: "show_stack(NULL, NULL);" prints the
361 // back trace for this cpu. 361 // back trace for this cpu.
362 362
363 if (rsp == NULL) { 363 if (rsp == NULL) {
364 if (tsk) 364 if (tsk)
365 rsp = (unsigned long *)tsk->thread.rsp; 365 rsp = (unsigned long *)tsk->thread.rsp;
366 else 366 else
367 rsp = (unsigned long *)&rsp; 367 rsp = (unsigned long *)&rsp;
368 } 368 }
369 369
370 stack = rsp; 370 stack = rsp;
371 for(i=0; i < kstack_depth_to_print; i++) { 371 for(i=0; i < kstack_depth_to_print; i++) {
372 if (stack >= irqstack && stack <= irqstack_end) { 372 if (stack >= irqstack && stack <= irqstack_end) {
373 if (stack == irqstack_end) { 373 if (stack == irqstack_end) {
374 stack = (unsigned long *) (irqstack_end[-1]); 374 stack = (unsigned long *) (irqstack_end[-1]);
375 printk(" <EOI> "); 375 printk(" <EOI> ");
376 } 376 }
377 } else { 377 } else {
378 if (((long) stack & (THREAD_SIZE-1)) == 0) 378 if (((long) stack & (THREAD_SIZE-1)) == 0)
379 break; 379 break;
380 } 380 }
381 if (i && ((i % 4) == 0)) 381 if (i && ((i % 4) == 0))
382 printk("\n"); 382 printk("\n");
383 printk(" %016lx", *stack++); 383 printk(" %016lx", *stack++);
384 touch_nmi_watchdog(); 384 touch_nmi_watchdog();
385 } 385 }
386 show_trace(tsk, regs, rsp); 386 show_trace(tsk, regs, rsp);
387 } 387 }
388 388
389 void show_stack(struct task_struct *tsk, unsigned long * rsp) 389 void show_stack(struct task_struct *tsk, unsigned long * rsp)
390 { 390 {
391 _show_stack(tsk, NULL, rsp); 391 _show_stack(tsk, NULL, rsp);
392 } 392 }
393 393
394 /* 394 /*
395 * The architecture-independent dump_stack generator 395 * The architecture-independent dump_stack generator
396 */ 396 */
397 void dump_stack(void) 397 void dump_stack(void)
398 { 398 {
399 unsigned long dummy; 399 unsigned long dummy;
400 show_trace(NULL, NULL, &dummy); 400 show_trace(NULL, NULL, &dummy);
401 } 401 }
402 402
403 EXPORT_SYMBOL(dump_stack); 403 EXPORT_SYMBOL(dump_stack);
404 404
405 void show_registers(struct pt_regs *regs) 405 void show_registers(struct pt_regs *regs)
406 { 406 {
407 int i; 407 int i;
408 int in_kernel = !user_mode(regs); 408 int in_kernel = !user_mode(regs);
409 unsigned long rsp; 409 unsigned long rsp;
410 const int cpu = smp_processor_id(); 410 const int cpu = smp_processor_id();
411 struct task_struct *cur = cpu_pda(cpu)->pcurrent; 411 struct task_struct *cur = cpu_pda(cpu)->pcurrent;
412 412
413 rsp = regs->rsp; 413 rsp = regs->rsp;
414 printk("CPU %d ", cpu); 414 printk("CPU %d ", cpu);
415 __show_regs(regs); 415 __show_regs(regs);
416 printk("Process %s (pid: %d, threadinfo %p, task %p)\n", 416 printk("Process %s (pid: %d, threadinfo %p, task %p)\n",
417 cur->comm, cur->pid, task_thread_info(cur), cur); 417 cur->comm, cur->pid, task_thread_info(cur), cur);
418 418
419 /* 419 /*
420 * When in-kernel, we also print out the stack and code at the 420 * When in-kernel, we also print out the stack and code at the
421 * time of the fault.. 421 * time of the fault..
422 */ 422 */
423 if (in_kernel) { 423 if (in_kernel) {
424 printk("Stack: "); 424 printk("Stack: ");
425 _show_stack(NULL, regs, (unsigned long*)rsp); 425 _show_stack(NULL, regs, (unsigned long*)rsp);
426 426
427 printk("\nCode: "); 427 printk("\nCode: ");
428 if (regs->rip < PAGE_OFFSET) 428 if (regs->rip < PAGE_OFFSET)
429 goto bad; 429 goto bad;
430 430
431 for (i=0; i<20; i++) { 431 for (i=0; i<20; i++) {
432 unsigned char c; 432 unsigned char c;
433 if (__get_user(c, &((unsigned char*)regs->rip)[i])) { 433 if (__get_user(c, &((unsigned char*)regs->rip)[i])) {
434 bad: 434 bad:
435 printk(" Bad RIP value."); 435 printk(" Bad RIP value.");
436 break; 436 break;
437 } 437 }
438 printk("%02x ", c); 438 printk("%02x ", c);
439 } 439 }
440 } 440 }
441 printk("\n"); 441 printk("\n");
442 } 442 }
443 443
444 int is_valid_bugaddr(unsigned long rip) 444 int is_valid_bugaddr(unsigned long rip)
445 { 445 {
446 unsigned short ud2; 446 unsigned short ud2;
447 447
448 if (__copy_from_user(&ud2, (const void __user *) rip, sizeof(ud2))) 448 if (__copy_from_user(&ud2, (const void __user *) rip, sizeof(ud2)))
449 return 0; 449 return 0;
450 450
451 return ud2 == 0x0b0f; 451 return ud2 == 0x0b0f;
452 } 452 }
453 453
454 #ifdef CONFIG_BUG 454 #ifdef CONFIG_BUG
455 void out_of_line_bug(void) 455 void out_of_line_bug(void)
456 { 456 {
457 BUG(); 457 BUG();
458 } 458 }
459 EXPORT_SYMBOL(out_of_line_bug); 459 EXPORT_SYMBOL(out_of_line_bug);
460 #endif 460 #endif
461 461
462 static DEFINE_SPINLOCK(die_lock); 462 static DEFINE_SPINLOCK(die_lock);
463 static int die_owner = -1; 463 static int die_owner = -1;
464 static unsigned int die_nest_count; 464 static unsigned int die_nest_count;
465 465
466 unsigned __kprobes long oops_begin(void) 466 unsigned __kprobes long oops_begin(void)
467 { 467 {
468 int cpu; 468 int cpu;
469 unsigned long flags; 469 unsigned long flags;
470 470
471 oops_enter(); 471 oops_enter();
472 472
473 /* racy, but better than risking deadlock. */ 473 /* racy, but better than risking deadlock. */
474 local_irq_save(flags); 474 local_irq_save(flags);
475 cpu = smp_processor_id(); 475 cpu = smp_processor_id();
476 if (!spin_trylock(&die_lock)) { 476 if (!spin_trylock(&die_lock)) {
477 if (cpu == die_owner) 477 if (cpu == die_owner)
478 /* nested oops. should stop eventually */; 478 /* nested oops. should stop eventually */;
479 else 479 else
480 spin_lock(&die_lock); 480 spin_lock(&die_lock);
481 } 481 }
482 die_nest_count++; 482 die_nest_count++;
483 die_owner = cpu; 483 die_owner = cpu;
484 console_verbose(); 484 console_verbose();
485 bust_spinlocks(1); 485 bust_spinlocks(1);
486 return flags; 486 return flags;
487 } 487 }
488 488
489 void __kprobes oops_end(unsigned long flags) 489 void __kprobes oops_end(unsigned long flags)
490 { 490 {
491 die_owner = -1; 491 die_owner = -1;
492 bust_spinlocks(0); 492 bust_spinlocks(0);
493 die_nest_count--; 493 die_nest_count--;
494 if (die_nest_count) 494 if (die_nest_count)
495 /* We still own the lock */ 495 /* We still own the lock */
496 local_irq_restore(flags); 496 local_irq_restore(flags);
497 else 497 else
498 /* Nest count reaches zero, release the lock. */ 498 /* Nest count reaches zero, release the lock. */
499 spin_unlock_irqrestore(&die_lock, flags); 499 spin_unlock_irqrestore(&die_lock, flags);
500 if (panic_on_oops) 500 if (panic_on_oops)
501 panic("Fatal exception"); 501 panic("Fatal exception");
502 oops_exit(); 502 oops_exit();
503 } 503 }
504 504
505 void __kprobes __die(const char * str, struct pt_regs * regs, long err) 505 void __kprobes __die(const char * str, struct pt_regs * regs, long err)
506 { 506 {
507 static int die_counter; 507 static int die_counter;
508 printk(KERN_EMERG "%s: %04lx [%u] ", str, err & 0xffff, ++die_counter); 508 printk(KERN_EMERG "%s: %04lx [%u] ", str, err & 0xffff, ++die_counter);
509 #ifdef CONFIG_PREEMPT 509 #ifdef CONFIG_PREEMPT
510 printk("PREEMPT "); 510 printk("PREEMPT ");
511 #endif 511 #endif
512 #ifdef CONFIG_SMP 512 #ifdef CONFIG_SMP
513 printk("SMP "); 513 printk("SMP ");
514 #endif 514 #endif
515 #ifdef CONFIG_DEBUG_PAGEALLOC 515 #ifdef CONFIG_DEBUG_PAGEALLOC
516 printk("DEBUG_PAGEALLOC"); 516 printk("DEBUG_PAGEALLOC");
517 #endif 517 #endif
518 printk("\n"); 518 printk("\n");
519 notify_die(DIE_OOPS, str, regs, err, current->thread.trap_no, SIGSEGV); 519 notify_die(DIE_OOPS, str, regs, err, current->thread.trap_no, SIGSEGV);
520 show_registers(regs); 520 show_registers(regs);
521 add_taint(TAINT_DIE);
521 /* Executive summary in case the oops scrolled away */ 522 /* Executive summary in case the oops scrolled away */
522 printk(KERN_ALERT "RIP "); 523 printk(KERN_ALERT "RIP ");
523 printk_address(regs->rip); 524 printk_address(regs->rip);
524 printk(" RSP <%016lx>\n", regs->rsp); 525 printk(" RSP <%016lx>\n", regs->rsp);
525 if (kexec_should_crash(current)) 526 if (kexec_should_crash(current))
526 crash_kexec(regs); 527 crash_kexec(regs);
527 } 528 }
528 529
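The add_taint(TAINT_DIE) call above is the substance of this patch on x86-64: once an oops has run through __die(), the taint bit stays set, so every later oops and SysRq dump carries a taint marker and oddities in those calltraces are explained up front. As a hedged sketch (assuming the tainted bitmask API of this kernel generation, where print_tainted() reports this bit as a 'D' flag), other debugging code could test for a prior oops like so:

	#include <linux/kernel.h>

	/* Illustrative only: TAINT_DIE is one bit in the global
	 * 'tainted' mask that add_taint() ORs in. */
	static int kernel_has_oopsed(void)
	{
		return tainted & TAINT_DIE;
	}
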
529 void die(const char * str, struct pt_regs * regs, long err) 530 void die(const char * str, struct pt_regs * regs, long err)
530 { 531 {
531 unsigned long flags = oops_begin(); 532 unsigned long flags = oops_begin();
532 533
533 if (!user_mode(regs)) 534 if (!user_mode(regs))
534 report_bug(regs->rip, regs); 535 report_bug(regs->rip, regs);
535 536
536 __die(str, regs, err); 537 __die(str, regs, err);
537 oops_end(flags); 538 oops_end(flags);
538 do_exit(SIGSEGV); 539 do_exit(SIGSEGV);
539 } 540 }
540 541
541 void __kprobes die_nmi(char *str, struct pt_regs *regs, int do_panic) 542 void __kprobes die_nmi(char *str, struct pt_regs *regs, int do_panic)
542 { 543 {
543 unsigned long flags = oops_begin(); 544 unsigned long flags = oops_begin();
544 545
545 /* 546 /*
546 * We are in trouble anyway, let's at least try 547 * We are in trouble anyway, let's at least try
547 * to get a message out. 548 * to get a message out.
548 */ 549 */
549 printk(str, smp_processor_id()); 550 printk(str, smp_processor_id());
550 show_registers(regs); 551 show_registers(regs);
551 if (kexec_should_crash(current)) 552 if (kexec_should_crash(current))
552 crash_kexec(regs); 553 crash_kexec(regs);
553 if (do_panic || panic_on_oops) 554 if (do_panic || panic_on_oops)
554 panic("Non maskable interrupt"); 555 panic("Non maskable interrupt");
555 oops_end(flags); 556 oops_end(flags);
556 nmi_exit(); 557 nmi_exit();
557 local_irq_enable(); 558 local_irq_enable();
558 do_exit(SIGSEGV); 559 do_exit(SIGSEGV);
559 } 560 }
560 561
561 static void __kprobes do_trap(int trapnr, int signr, char *str, 562 static void __kprobes do_trap(int trapnr, int signr, char *str,
562 struct pt_regs * regs, long error_code, 563 struct pt_regs * regs, long error_code,
563 siginfo_t *info) 564 siginfo_t *info)
564 { 565 {
565 struct task_struct *tsk = current; 566 struct task_struct *tsk = current;
566 567
567 if (user_mode(regs)) { 568 if (user_mode(regs)) {
568 /* 569 /*
569 * We want error_code and trap_no set for userspace 570 * We want error_code and trap_no set for userspace
570 * faults and kernelspace faults which result in 571 * faults and kernelspace faults which result in
571 * die(), but not kernelspace faults which are fixed 572 * die(), but not kernelspace faults which are fixed
572 * up. die() gives the process no chance to handle 573 * up. die() gives the process no chance to handle
573 * the signal and notice the kernel fault information, 574 * the signal and notice the kernel fault information,
574 * so that won't result in polluting the information 575 * so that won't result in polluting the information
575 * about previously queued, but not yet delivered, 576 * about previously queued, but not yet delivered,
576 * faults. See also do_general_protection below. 577 * faults. See also do_general_protection below.
577 */ 578 */
578 tsk->thread.error_code = error_code; 579 tsk->thread.error_code = error_code;
579 tsk->thread.trap_no = trapnr; 580 tsk->thread.trap_no = trapnr;
580 581
581 if (exception_trace && unhandled_signal(tsk, signr)) 582 if (exception_trace && unhandled_signal(tsk, signr))
582 printk(KERN_INFO 583 printk(KERN_INFO
583 "%s[%d] trap %s rip:%lx rsp:%lx error:%lx\n", 584 "%s[%d] trap %s rip:%lx rsp:%lx error:%lx\n",
584 tsk->comm, tsk->pid, str, 585 tsk->comm, tsk->pid, str,
585 regs->rip, regs->rsp, error_code); 586 regs->rip, regs->rsp, error_code);
586 587
587 if (info) 588 if (info)
588 force_sig_info(signr, info, tsk); 589 force_sig_info(signr, info, tsk);
589 else 590 else
590 force_sig(signr, tsk); 591 force_sig(signr, tsk);
591 return; 592 return;
592 } 593 }
593 594
594 595
595 /* kernel trap */ 596 /* kernel trap */
596 { 597 {
597 const struct exception_table_entry *fixup; 598 const struct exception_table_entry *fixup;
598 fixup = search_exception_tables(regs->rip); 599 fixup = search_exception_tables(regs->rip);
599 if (fixup) 600 if (fixup)
600 regs->rip = fixup->fixup; 601 regs->rip = fixup->fixup;
601 else { 602 else {
602 tsk->thread.error_code = error_code; 603 tsk->thread.error_code = error_code;
603 tsk->thread.trap_no = trapnr; 604 tsk->thread.trap_no = trapnr;
604 die(str, regs, error_code); 605 die(str, regs, error_code);
605 } 606 }
606 return; 607 return;
607 } 608 }
608 } 609 }
609 610
610 #define DO_ERROR(trapnr, signr, str, name) \ 611 #define DO_ERROR(trapnr, signr, str, name) \
611 asmlinkage void do_##name(struct pt_regs * regs, long error_code) \ 612 asmlinkage void do_##name(struct pt_regs * regs, long error_code) \
612 { \ 613 { \
613 if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr) \ 614 if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr) \
614 == NOTIFY_STOP) \ 615 == NOTIFY_STOP) \
615 return; \ 616 return; \
616 conditional_sti(regs); \ 617 conditional_sti(regs); \
617 do_trap(trapnr, signr, str, regs, error_code, NULL); \ 618 do_trap(trapnr, signr, str, regs, error_code, NULL); \
618 } 619 }
619 620
620 #define DO_ERROR_INFO(trapnr, signr, str, name, sicode, siaddr) \ 621 #define DO_ERROR_INFO(trapnr, signr, str, name, sicode, siaddr) \
621 asmlinkage void do_##name(struct pt_regs * regs, long error_code) \ 622 asmlinkage void do_##name(struct pt_regs * regs, long error_code) \
622 { \ 623 { \
623 siginfo_t info; \ 624 siginfo_t info; \
624 info.si_signo = signr; \ 625 info.si_signo = signr; \
625 info.si_errno = 0; \ 626 info.si_errno = 0; \
626 info.si_code = sicode; \ 627 info.si_code = sicode; \
627 info.si_addr = (void __user *)siaddr; \ 628 info.si_addr = (void __user *)siaddr; \
628 if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr) \ 629 if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr) \
629 == NOTIFY_STOP) \ 630 == NOTIFY_STOP) \
630 return; \ 631 return; \
631 conditional_sti(regs); \ 632 conditional_sti(regs); \
632 do_trap(trapnr, signr, str, regs, error_code, &info); \ 633 do_trap(trapnr, signr, str, regs, error_code, &info); \
633 } 634 }
634 635
635 DO_ERROR_INFO( 0, SIGFPE, "divide error", divide_error, FPE_INTDIV, regs->rip) 636 DO_ERROR_INFO( 0, SIGFPE, "divide error", divide_error, FPE_INTDIV, regs->rip)
636 DO_ERROR( 4, SIGSEGV, "overflow", overflow) 637 DO_ERROR( 4, SIGSEGV, "overflow", overflow)
637 DO_ERROR( 5, SIGSEGV, "bounds", bounds) 638 DO_ERROR( 5, SIGSEGV, "bounds", bounds)
638 DO_ERROR_INFO( 6, SIGILL, "invalid opcode", invalid_op, ILL_ILLOPN, regs->rip) 639 DO_ERROR_INFO( 6, SIGILL, "invalid opcode", invalid_op, ILL_ILLOPN, regs->rip)
639 DO_ERROR( 7, SIGSEGV, "device not available", device_not_available) 640 DO_ERROR( 7, SIGSEGV, "device not available", device_not_available)
640 DO_ERROR( 9, SIGFPE, "coprocessor segment overrun", coprocessor_segment_overrun) 641 DO_ERROR( 9, SIGFPE, "coprocessor segment overrun", coprocessor_segment_overrun)
641 DO_ERROR(10, SIGSEGV, "invalid TSS", invalid_TSS) 642 DO_ERROR(10, SIGSEGV, "invalid TSS", invalid_TSS)
642 DO_ERROR(11, SIGBUS, "segment not present", segment_not_present) 643 DO_ERROR(11, SIGBUS, "segment not present", segment_not_present)
643 DO_ERROR_INFO(17, SIGBUS, "alignment check", alignment_check, BUS_ADRALN, 0) 644 DO_ERROR_INFO(17, SIGBUS, "alignment check", alignment_check, BUS_ADRALN, 0)
644 DO_ERROR(18, SIGSEGV, "reserved", reserved) 645 DO_ERROR(18, SIGSEGV, "reserved", reserved)
645 646
646 /* Runs on IST stack */ 647 /* Runs on IST stack */
647 asmlinkage void do_stack_segment(struct pt_regs *regs, long error_code) 648 asmlinkage void do_stack_segment(struct pt_regs *regs, long error_code)
648 { 649 {
649 if (notify_die(DIE_TRAP, "stack segment", regs, error_code, 650 if (notify_die(DIE_TRAP, "stack segment", regs, error_code,
650 12, SIGBUS) == NOTIFY_STOP) 651 12, SIGBUS) == NOTIFY_STOP)
651 return; 652 return;
652 preempt_conditional_sti(regs); 653 preempt_conditional_sti(regs);
653 do_trap(12, SIGBUS, "stack segment", regs, error_code, NULL); 654 do_trap(12, SIGBUS, "stack segment", regs, error_code, NULL);
654 preempt_conditional_cli(regs); 655 preempt_conditional_cli(regs);
655 } 656 }
656 657
657 asmlinkage void do_double_fault(struct pt_regs * regs, long error_code) 658 asmlinkage void do_double_fault(struct pt_regs * regs, long error_code)
658 { 659 {
659 static const char str[] = "double fault"; 660 static const char str[] = "double fault";
660 struct task_struct *tsk = current; 661 struct task_struct *tsk = current;
661 662
662 /* Return not checked because a double fault cannot be ignored */ 663 /* Return not checked because a double fault cannot be ignored */
663 notify_die(DIE_TRAP, str, regs, error_code, 8, SIGSEGV); 664 notify_die(DIE_TRAP, str, regs, error_code, 8, SIGSEGV);
664 665
665 tsk->thread.error_code = error_code; 666 tsk->thread.error_code = error_code;
666 tsk->thread.trap_no = 8; 667 tsk->thread.trap_no = 8;
667 668
668 /* This is always a kernel trap and never fixable (and thus must 669 /* This is always a kernel trap and never fixable (and thus must
669 never return). */ 670 never return). */
670 for (;;) 671 for (;;)
671 die(str, regs, error_code); 672 die(str, regs, error_code);
672 } 673 }
673 674
674 asmlinkage void __kprobes do_general_protection(struct pt_regs * regs, 675 asmlinkage void __kprobes do_general_protection(struct pt_regs * regs,
675 long error_code) 676 long error_code)
676 { 677 {
677 struct task_struct *tsk = current; 678 struct task_struct *tsk = current;
678 679
679 conditional_sti(regs); 680 conditional_sti(regs);
680 681
681 if (user_mode(regs)) { 682 if (user_mode(regs)) {
682 tsk->thread.error_code = error_code; 683 tsk->thread.error_code = error_code;
683 tsk->thread.trap_no = 13; 684 tsk->thread.trap_no = 13;
684 685
685 if (exception_trace && unhandled_signal(tsk, SIGSEGV)) 686 if (exception_trace && unhandled_signal(tsk, SIGSEGV))
686 printk(KERN_INFO 687 printk(KERN_INFO
687 "%s[%d] general protection rip:%lx rsp:%lx error:%lx\n", 688 "%s[%d] general protection rip:%lx rsp:%lx error:%lx\n",
688 tsk->comm, tsk->pid, 689 tsk->comm, tsk->pid,
689 regs->rip, regs->rsp, error_code); 690 regs->rip, regs->rsp, error_code);
690 691
691 force_sig(SIGSEGV, tsk); 692 force_sig(SIGSEGV, tsk);
692 return; 693 return;
693 } 694 }
694 695
695 /* kernel gp */ 696 /* kernel gp */
696 { 697 {
697 const struct exception_table_entry *fixup; 698 const struct exception_table_entry *fixup;
698 fixup = search_exception_tables(regs->rip); 699 fixup = search_exception_tables(regs->rip);
699 if (fixup) { 700 if (fixup) {
700 regs->rip = fixup->fixup; 701 regs->rip = fixup->fixup;
701 return; 702 return;
702 } 703 }
703 704
704 tsk->thread.error_code = error_code; 705 tsk->thread.error_code = error_code;
705 tsk->thread.trap_no = 13; 706 tsk->thread.trap_no = 13;
706 if (notify_die(DIE_GPF, "general protection fault", regs, 707 if (notify_die(DIE_GPF, "general protection fault", regs,
707 error_code, 13, SIGSEGV) == NOTIFY_STOP) 708 error_code, 13, SIGSEGV) == NOTIFY_STOP)
708 return; 709 return;
709 die("general protection fault", regs, error_code); 710 die("general protection fault", regs, error_code);
710 } 711 }
711 } 712 }
712 713
713 static __kprobes void 714 static __kprobes void
714 mem_parity_error(unsigned char reason, struct pt_regs * regs) 715 mem_parity_error(unsigned char reason, struct pt_regs * regs)
715 { 716 {
716 printk(KERN_EMERG "Uhhuh. NMI received for unknown reason %02x.\n", 717 printk(KERN_EMERG "Uhhuh. NMI received for unknown reason %02x.\n",
717 reason); 718 reason);
718 printk(KERN_EMERG "You have some hardware problem, likely on the PCI bus.\n"); 719 printk(KERN_EMERG "You have some hardware problem, likely on the PCI bus.\n");
719 720
720 if (panic_on_unrecovered_nmi) 721 if (panic_on_unrecovered_nmi)
721 panic("NMI: Not continuing"); 722 panic("NMI: Not continuing");
722 723
723 printk(KERN_EMERG "Dazed and confused, but trying to continue\n"); 724 printk(KERN_EMERG "Dazed and confused, but trying to continue\n");
724 725
725 /* Clear and disable the memory parity error line. */ 726 /* Clear and disable the memory parity error line. */
726 reason = (reason & 0xf) | 4; 727 reason = (reason & 0xf) | 4;
727 outb(reason, 0x61); 728 outb(reason, 0x61);
728 } 729 }
729 730
730 static __kprobes void 731 static __kprobes void
731 io_check_error(unsigned char reason, struct pt_regs * regs) 732 io_check_error(unsigned char reason, struct pt_regs * regs)
732 { 733 {
733 printk("NMI: IOCK error (debug interrupt?)\n"); 734 printk("NMI: IOCK error (debug interrupt?)\n");
734 show_registers(regs); 735 show_registers(regs);
735 736
736 /* Re-enable the IOCK line, wait for a few seconds */ 737 /* Re-enable the IOCK line, wait for a few seconds */
737 reason = (reason & 0xf) | 8; 738 reason = (reason & 0xf) | 8;
738 outb(reason, 0x61); 739 outb(reason, 0x61);
739 mdelay(2000); 740 mdelay(2000);
740 reason &= ~8; 741 reason &= ~8;
741 outb(reason, 0x61); 742 outb(reason, 0x61);
742 } 743 }
743 744
744 static __kprobes void 745 static __kprobes void
745 unknown_nmi_error(unsigned char reason, struct pt_regs * regs) 746 unknown_nmi_error(unsigned char reason, struct pt_regs * regs)
746 { 747 {
747 printk(KERN_EMERG "Uhhuh. NMI received for unknown reason %02x.\n", 748 printk(KERN_EMERG "Uhhuh. NMI received for unknown reason %02x.\n",
748 reason); 749 reason);
749 printk(KERN_EMERG "Do you have a strange power saving mode enabled?\n"); 750 printk(KERN_EMERG "Do you have a strange power saving mode enabled?\n");
750 751
751 if (panic_on_unrecovered_nmi) 752 if (panic_on_unrecovered_nmi)
752 panic("NMI: Not continuing"); 753 panic("NMI: Not continuing");
753 754
754 printk(KERN_EMERG "Dazed and confused, but trying to continue\n"); 755 printk(KERN_EMERG "Dazed and confused, but trying to continue\n");
755 } 756 }
756 757
757 /* Runs on IST stack. This code must keep interrupts off all the time. 758 /* Runs on IST stack. This code must keep interrupts off all the time.
758 Nested NMIs are prevented by the CPU. */ 759 Nested NMIs are prevented by the CPU. */
759 asmlinkage __kprobes void default_do_nmi(struct pt_regs *regs) 760 asmlinkage __kprobes void default_do_nmi(struct pt_regs *regs)
760 { 761 {
761 unsigned char reason = 0; 762 unsigned char reason = 0;
762 int cpu; 763 int cpu;
763 764
764 cpu = smp_processor_id(); 765 cpu = smp_processor_id();
765 766
766 /* Only the BSP gets external NMIs from the system. */ 767 /* Only the BSP gets external NMIs from the system. */
767 if (!cpu) 768 if (!cpu)
768 reason = get_nmi_reason(); 769 reason = get_nmi_reason();
769 770
770 if (!(reason & 0xc0)) { 771 if (!(reason & 0xc0)) {
771 if (notify_die(DIE_NMI_IPI, "nmi_ipi", regs, reason, 2, SIGINT) 772 if (notify_die(DIE_NMI_IPI, "nmi_ipi", regs, reason, 2, SIGINT)
772 == NOTIFY_STOP) 773 == NOTIFY_STOP)
773 return; 774 return;
774 /* 775 /*
775 * Ok, so this is none of the documented NMI sources, 776 * Ok, so this is none of the documented NMI sources,
776 * so it must be the NMI watchdog. 777 * so it must be the NMI watchdog.
777 */ 778 */
778 if (nmi_watchdog_tick(regs,reason)) 779 if (nmi_watchdog_tick(regs,reason))
779 return; 780 return;
780 if (!do_nmi_callback(regs,cpu)) 781 if (!do_nmi_callback(regs,cpu))
781 unknown_nmi_error(reason, regs); 782 unknown_nmi_error(reason, regs);
782 783
783 return; 784 return;
784 } 785 }
785 if (notify_die(DIE_NMI, "nmi", regs, reason, 2, SIGINT) == NOTIFY_STOP) 786 if (notify_die(DIE_NMI, "nmi", regs, reason, 2, SIGINT) == NOTIFY_STOP)
786 return; 787 return;
787 788
788 /* AK: following checks seem to be broken on modern chipsets. FIXME */ 789 /* AK: following checks seem to be broken on modern chipsets. FIXME */
789 790
790 if (reason & 0x80) 791 if (reason & 0x80)
791 mem_parity_error(reason, regs); 792 mem_parity_error(reason, regs);
792 if (reason & 0x40) 793 if (reason & 0x40)
793 io_check_error(reason, regs); 794 io_check_error(reason, regs);
794 } 795 }
795 796
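The reason byte tested above (0x80 for a memory/system error, 0x40 for an I/O channel check) comes from the legacy NMI status/control port that get_nmi_reason() reads on the boot CPU. A small reference sketch, with constants named here for illustration only:

	#include <asm/io.h>

	#define NMI_REASON_PORT		0x61
	#define NMI_REASON_SERR		0x80	/* memory / PCI system error */
	#define NMI_REASON_IOCHK	0x40	/* I/O channel check */

	/* Sketch of what get_nmi_reason() amounts to on PC hardware. */
	static inline unsigned char read_nmi_reason(void)
	{
		return inb(NMI_REASON_PORT);
	}
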
796 /* runs on IST stack. */ 797 /* runs on IST stack. */
797 asmlinkage void __kprobes do_int3(struct pt_regs * regs, long error_code) 798 asmlinkage void __kprobes do_int3(struct pt_regs * regs, long error_code)
798 { 799 {
799 if (notify_die(DIE_INT3, "int3", regs, error_code, 3, SIGTRAP) == NOTIFY_STOP) { 800 if (notify_die(DIE_INT3, "int3", regs, error_code, 3, SIGTRAP) == NOTIFY_STOP) {
800 return; 801 return;
801 } 802 }
802 preempt_conditional_sti(regs); 803 preempt_conditional_sti(regs);
803 do_trap(3, SIGTRAP, "int3", regs, error_code, NULL); 804 do_trap(3, SIGTRAP, "int3", regs, error_code, NULL);
804 preempt_conditional_cli(regs); 805 preempt_conditional_cli(regs);
805 } 806 }
806 807
807 /* Help handler running on IST stack to switch back to user stack 808 /* Help handler running on IST stack to switch back to user stack
808 for scheduling or signal handling. The actual stack switch is done in 809 for scheduling or signal handling. The actual stack switch is done in
809 entry.S */ 810 entry.S */
810 asmlinkage __kprobes struct pt_regs *sync_regs(struct pt_regs *eregs) 811 asmlinkage __kprobes struct pt_regs *sync_regs(struct pt_regs *eregs)
811 { 812 {
812 struct pt_regs *regs = eregs; 813 struct pt_regs *regs = eregs;
813 /* Already synced */ 814 /* Already synced */
814 if (eregs == (struct pt_regs *)eregs->rsp) 815 if (eregs == (struct pt_regs *)eregs->rsp)
815 ; 816 ;
816 /* Exception from user space */ 817 /* Exception from user space */
817 else if (user_mode(eregs)) 818 else if (user_mode(eregs))
818 regs = task_pt_regs(current); 819 regs = task_pt_regs(current);
819 /* Exception from kernel and interrupts are enabled. Move to 820 /* Exception from kernel and interrupts are enabled. Move to
820 kernel process stack. */ 821 kernel process stack. */
821 else if (eregs->eflags & X86_EFLAGS_IF) 822 else if (eregs->eflags & X86_EFLAGS_IF)
822 regs = (struct pt_regs *)(eregs->rsp -= sizeof(struct pt_regs)); 823 regs = (struct pt_regs *)(eregs->rsp -= sizeof(struct pt_regs));
823 if (eregs != regs) 824 if (eregs != regs)
824 *regs = *eregs; 825 *regs = *eregs;
825 return regs; 826 return regs;
826 } 827 }
827 828
828 /* runs on IST stack. */ 829 /* runs on IST stack. */
829 asmlinkage void __kprobes do_debug(struct pt_regs * regs, 830 asmlinkage void __kprobes do_debug(struct pt_regs * regs,
830 unsigned long error_code) 831 unsigned long error_code)
831 { 832 {
832 unsigned long condition; 833 unsigned long condition;
833 struct task_struct *tsk = current; 834 struct task_struct *tsk = current;
834 siginfo_t info; 835 siginfo_t info;
835 836
836 get_debugreg(condition, 6); 837 get_debugreg(condition, 6);
837 838
838 if (notify_die(DIE_DEBUG, "debug", regs, condition, error_code, 839 if (notify_die(DIE_DEBUG, "debug", regs, condition, error_code,
839 SIGTRAP) == NOTIFY_STOP) 840 SIGTRAP) == NOTIFY_STOP)
840 return; 841 return;
841 842
842 preempt_conditional_sti(regs); 843 preempt_conditional_sti(regs);
843 844
844 /* Mask out spurious debug traps due to lazy DR7 setting */ 845 /* Mask out spurious debug traps due to lazy DR7 setting */
845 if (condition & (DR_TRAP0|DR_TRAP1|DR_TRAP2|DR_TRAP3)) { 846 if (condition & (DR_TRAP0|DR_TRAP1|DR_TRAP2|DR_TRAP3)) {
846 if (!tsk->thread.debugreg7) { 847 if (!tsk->thread.debugreg7) {
847 goto clear_dr7; 848 goto clear_dr7;
848 } 849 }
849 } 850 }
850 851
851 tsk->thread.debugreg6 = condition; 852 tsk->thread.debugreg6 = condition;
852 853
853 /* Mask out spurious TF errors due to lazy TF clearing */ 854 /* Mask out spurious TF errors due to lazy TF clearing */
854 if (condition & DR_STEP) { 855 if (condition & DR_STEP) {
855 /* 856 /*
856 * The TF error should be masked out only if the current 857 * The TF error should be masked out only if the current
857 * process is not traced and if the TRAP flag has been set 858 * process is not traced and if the TRAP flag has been set
858 * previously by a tracing process (condition detected by 859 * previously by a tracing process (condition detected by
859 * the PT_DTRACE flag); remember that the i386 TRAP flag 860 * the PT_DTRACE flag); remember that the i386 TRAP flag
860 * can be modified by the process itself in user mode, 861 * can be modified by the process itself in user mode,
861 * allowing programs to debug themselves without the ptrace() 862 * allowing programs to debug themselves without the ptrace()
862 * interface. 863 * interface.
863 */ 864 */
864 if (!user_mode(regs)) 865 if (!user_mode(regs))
865 goto clear_TF_reenable; 866 goto clear_TF_reenable;
866 /* 867 /*
867 * Was the TF flag set by a debugger? If so, clear it now, 868 * Was the TF flag set by a debugger? If so, clear it now,
868 * so that register information is correct. 869 * so that register information is correct.
869 */ 870 */
870 if (tsk->ptrace & PT_DTRACE) { 871 if (tsk->ptrace & PT_DTRACE) {
871 regs->eflags &= ~TF_MASK; 872 regs->eflags &= ~TF_MASK;
872 tsk->ptrace &= ~PT_DTRACE; 873 tsk->ptrace &= ~PT_DTRACE;
873 } 874 }
874 } 875 }
875 876
876 /* Ok, finally something we can handle */ 877 /* Ok, finally something we can handle */
877 tsk->thread.trap_no = 1; 878 tsk->thread.trap_no = 1;
878 tsk->thread.error_code = error_code; 879 tsk->thread.error_code = error_code;
879 info.si_signo = SIGTRAP; 880 info.si_signo = SIGTRAP;
880 info.si_errno = 0; 881 info.si_errno = 0;
881 info.si_code = TRAP_BRKPT; 882 info.si_code = TRAP_BRKPT;
882 info.si_addr = user_mode(regs) ? (void __user *)regs->rip : NULL; 883 info.si_addr = user_mode(regs) ? (void __user *)regs->rip : NULL;
883 force_sig_info(SIGTRAP, &info, tsk); 884 force_sig_info(SIGTRAP, &info, tsk);
884 885
885 clear_dr7: 886 clear_dr7:
886 set_debugreg(0UL, 7); 887 set_debugreg(0UL, 7);
887 preempt_conditional_cli(regs); 888 preempt_conditional_cli(regs);
888 return; 889 return;
889 890
890 clear_TF_reenable: 891 clear_TF_reenable:
891 set_tsk_thread_flag(tsk, TIF_SINGLESTEP); 892 set_tsk_thread_flag(tsk, TIF_SINGLESTEP);
892 regs->eflags &= ~TF_MASK; 893 regs->eflags &= ~TF_MASK;
893 preempt_conditional_cli(regs); 894 preempt_conditional_cli(regs);
894 } 895 }
895 896
896 static int kernel_math_error(struct pt_regs *regs, const char *str, int trapnr) 897 static int kernel_math_error(struct pt_regs *regs, const char *str, int trapnr)
897 { 898 {
898 const struct exception_table_entry *fixup; 899 const struct exception_table_entry *fixup;
899 fixup = search_exception_tables(regs->rip); 900 fixup = search_exception_tables(regs->rip);
900 if (fixup) { 901 if (fixup) {
901 regs->rip = fixup->fixup; 902 regs->rip = fixup->fixup;
902 return 1; 903 return 1;
903 } 904 }
904 notify_die(DIE_GPF, str, regs, 0, trapnr, SIGFPE); 905 notify_die(DIE_GPF, str, regs, 0, trapnr, SIGFPE);
905 /* Illegal floating point operation in the kernel */ 906 /* Illegal floating point operation in the kernel */
906 current->thread.trap_no = trapnr; 907 current->thread.trap_no = trapnr;
907 die(str, regs, 0); 908 die(str, regs, 0);
908 return 0; 909 return 0;
909 } 910 }
910 911
911 /* 912 /*
912 * Note that we play around with the 'TS' bit in an attempt to get 913 * Note that we play around with the 'TS' bit in an attempt to get
913 * the correct behaviour even in the presence of the asynchronous 914 * the correct behaviour even in the presence of the asynchronous
914 * IRQ13 behaviour 915 * IRQ13 behaviour
915 */ 916 */
916 asmlinkage void do_coprocessor_error(struct pt_regs *regs) 917 asmlinkage void do_coprocessor_error(struct pt_regs *regs)
917 { 918 {
918 void __user *rip = (void __user *)(regs->rip); 919 void __user *rip = (void __user *)(regs->rip);
919 struct task_struct * task; 920 struct task_struct * task;
920 siginfo_t info; 921 siginfo_t info;
921 unsigned short cwd, swd; 922 unsigned short cwd, swd;
922 923
923 conditional_sti(regs); 924 conditional_sti(regs);
924 if (!user_mode(regs) && 925 if (!user_mode(regs) &&
925 kernel_math_error(regs, "kernel x87 math error", 16)) 926 kernel_math_error(regs, "kernel x87 math error", 16))
926 return; 927 return;
927 928
928 /* 929 /*
929 * Save the info for the exception handler and clear the error. 930 * Save the info for the exception handler and clear the error.
930 */ 931 */
931 task = current; 932 task = current;
932 save_init_fpu(task); 933 save_init_fpu(task);
933 task->thread.trap_no = 16; 934 task->thread.trap_no = 16;
934 task->thread.error_code = 0; 935 task->thread.error_code = 0;
935 info.si_signo = SIGFPE; 936 info.si_signo = SIGFPE;
936 info.si_errno = 0; 937 info.si_errno = 0;
937 info.si_code = __SI_FAULT; 938 info.si_code = __SI_FAULT;
938 info.si_addr = rip; 939 info.si_addr = rip;
939 /* 940 /*
940 * (~cwd & swd) will mask out exceptions that are not set to unmasked 941 * (~cwd & swd) will mask out exceptions that are not set to unmasked
941 * status. 0x3f is the exception bits in these regs, 0x200 is the 942 * status. 0x3f is the exception bits in these regs, 0x200 is the
942 * C1 reg you need in case of a stack fault, 0x040 is the stack 943 * C1 reg you need in case of a stack fault, 0x040 is the stack
943 * fault bit. We should only be taking one exception at a time, 944 * fault bit. We should only be taking one exception at a time,
944 * so if this combination doesn't produce any single exception, 945 * so if this combination doesn't produce any single exception,
945 * then we have a bad program that isn't synchronizing its FPU usage 946 * then we have a bad program that isn't synchronizing its FPU usage
946 * and it will suffer the consequences since we won't be able to 947 * and it will suffer the consequences since we won't be able to
947 * fully reproduce the context of the exception 948 * fully reproduce the context of the exception
948 */ 949 */
949 cwd = get_fpu_cwd(task); 950 cwd = get_fpu_cwd(task);
950 swd = get_fpu_swd(task); 951 swd = get_fpu_swd(task);
951 switch (swd & ~cwd & 0x3f) { 952 switch (swd & ~cwd & 0x3f) {
952 case 0x000: 953 case 0x000:
953 default: 954 default:
954 break; 955 break;
955 case 0x001: /* Invalid Op */ 956 case 0x001: /* Invalid Op */
956 /* 957 /*
957 * swd & 0x240 == 0x040: Stack Underflow 958 * swd & 0x240 == 0x040: Stack Underflow
958 * swd & 0x240 == 0x240: Stack Overflow 959 * swd & 0x240 == 0x240: Stack Overflow
959 * User must clear the SF bit (0x40) if set 960 * User must clear the SF bit (0x40) if set
960 */ 961 */
961 info.si_code = FPE_FLTINV; 962 info.si_code = FPE_FLTINV;
962 break; 963 break;
963 case 0x002: /* Denormalize */ 964 case 0x002: /* Denormalize */
964 case 0x010: /* Underflow */ 965 case 0x010: /* Underflow */
965 info.si_code = FPE_FLTUND; 966 info.si_code = FPE_FLTUND;
966 break; 967 break;
967 case 0x004: /* Zero Divide */ 968 case 0x004: /* Zero Divide */
968 info.si_code = FPE_FLTDIV; 969 info.si_code = FPE_FLTDIV;
969 break; 970 break;
970 case 0x008: /* Overflow */ 971 case 0x008: /* Overflow */
971 info.si_code = FPE_FLTOVF; 972 info.si_code = FPE_FLTOVF;
972 break; 973 break;
973 case 0x020: /* Precision */ 974 case 0x020: /* Precision */
974 info.si_code = FPE_FLTRES; 975 info.si_code = FPE_FLTRES;
975 break; 976 break;
976 } 977 }
977 force_sig_info(SIGFPE, &info, task); 978 force_sig_info(SIGFPE, &info, task);
978 } 979 }
979 980
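A worked example of the masking described in the comment above: with an unmasked divide-by-zero, the zero-divide bit (0x004) is set in the status word and its mask bit is clear in the control word, so the switch lands on the 0x004 case. An illustrative helper mirroring that test:

	/* Sketch only: reproduces the switch condition for zero divide. */
	static int is_zero_divide(unsigned short cwd, unsigned short swd)
	{
		return (swd & ~cwd & 0x3f) == 0x004;	/* FPE_FLTDIV case */
	}

	/* e.g. is_zero_divide(0x037b, 0x0004) == 1 */
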
980 asmlinkage void bad_intr(void) 981 asmlinkage void bad_intr(void)
981 { 982 {
982 printk("bad interrupt\n"); 983 printk("bad interrupt\n");
983 } 984 }
984 985
985 asmlinkage void do_simd_coprocessor_error(struct pt_regs *regs) 986 asmlinkage void do_simd_coprocessor_error(struct pt_regs *regs)
986 { 987 {
987 void __user *rip = (void __user *)(regs->rip); 988 void __user *rip = (void __user *)(regs->rip);
988 struct task_struct * task; 989 struct task_struct * task;
989 siginfo_t info; 990 siginfo_t info;
990 unsigned short mxcsr; 991 unsigned short mxcsr;
991 992
992 conditional_sti(regs); 993 conditional_sti(regs);
993 if (!user_mode(regs) && 994 if (!user_mode(regs) &&
994 kernel_math_error(regs, "kernel simd math error", 19)) 995 kernel_math_error(regs, "kernel simd math error", 19))
995 return; 996 return;
996 997
997 /* 998 /*
998 * Save the info for the exception handler and clear the error. 999 * Save the info for the exception handler and clear the error.
999 */ 1000 */
1000 task = current; 1001 task = current;
1001 save_init_fpu(task); 1002 save_init_fpu(task);
1002 task->thread.trap_no = 19; 1003 task->thread.trap_no = 19;
1003 task->thread.error_code = 0; 1004 task->thread.error_code = 0;
1004 info.si_signo = SIGFPE; 1005 info.si_signo = SIGFPE;
1005 info.si_errno = 0; 1006 info.si_errno = 0;
1006 info.si_code = __SI_FAULT; 1007 info.si_code = __SI_FAULT;
1007 info.si_addr = rip; 1008 info.si_addr = rip;
1008 /* 1009 /*
1009 * The SIMD FPU exceptions are handled a little differently, as there 1010 * The SIMD FPU exceptions are handled a little differently, as there
1010 * is only a single status/control register. Thus, to determine which 1011 * is only a single status/control register. Thus, to determine which
1011 * unmasked exception was caught we must mask the exception mask bits 1012 * unmasked exception was caught we must mask the exception mask bits
1012 * at 0x1f80, and then use these to mask the exception bits at 0x3f. 1013 * at 0x1f80, and then use these to mask the exception bits at 0x3f.
1013 */ 1014 */
1014 mxcsr = get_fpu_mxcsr(task); 1015 mxcsr = get_fpu_mxcsr(task);
1015 switch (~((mxcsr & 0x1f80) >> 7) & (mxcsr & 0x3f)) { 1016 switch (~((mxcsr & 0x1f80) >> 7) & (mxcsr & 0x3f)) {
1016 case 0x000: 1017 case 0x000:
1017 default: 1018 default:
1018 break; 1019 break;
1019 case 0x001: /* Invalid Op */ 1020 case 0x001: /* Invalid Op */
1020 info.si_code = FPE_FLTINV; 1021 info.si_code = FPE_FLTINV;
1021 break; 1022 break;
1022 case 0x002: /* Denormalize */ 1023 case 0x002: /* Denormalize */
1023 case 0x010: /* Underflow */ 1024 case 0x010: /* Underflow */
1024 info.si_code = FPE_FLTUND; 1025 info.si_code = FPE_FLTUND;
1025 break; 1026 break;
1026 case 0x004: /* Zero Divide */ 1027 case 0x004: /* Zero Divide */
1027 info.si_code = FPE_FLTDIV; 1028 info.si_code = FPE_FLTDIV;
1028 break; 1029 break;
1029 case 0x008: /* Overflow */ 1030 case 0x008: /* Overflow */
1030 info.si_code = FPE_FLTOVF; 1031 info.si_code = FPE_FLTOVF;
1031 break; 1032 break;
1032 case 0x020: /* Precision */ 1033 case 0x020: /* Precision */
1033 info.si_code = FPE_FLTRES; 1034 info.si_code = FPE_FLTRES;
1034 break; 1035 break;
1035 } 1036 }
1036 force_sig_info(SIGFPE, &info, task); 1037 force_sig_info(SIGFPE, &info, task);
1037 } 1038 }
1038 1039
1039 asmlinkage void do_spurious_interrupt_bug(struct pt_regs * regs) 1040 asmlinkage void do_spurious_interrupt_bug(struct pt_regs * regs)
1040 { 1041 {
1041 } 1042 }
1042 1043
1043 asmlinkage void __attribute__((weak)) smp_thermal_interrupt(void) 1044 asmlinkage void __attribute__((weak)) smp_thermal_interrupt(void)
1044 { 1045 {
1045 } 1046 }
1046 1047
1047 asmlinkage void __attribute__((weak)) mce_threshold_interrupt(void) 1048 asmlinkage void __attribute__((weak)) mce_threshold_interrupt(void)
1048 { 1049 {
1049 } 1050 }
1050 1051
1051 /* 1052 /*
1052 * 'math_state_restore()' saves the current math information in the 1053 * 'math_state_restore()' saves the current math information in the
1053 * old math state array, and gets the new ones from the current task 1054 * old math state array, and gets the new ones from the current task
1054 * 1055 *
1055 * Careful.. There are problems with IBM-designed IRQ13 behaviour. 1056 * Careful.. There are problems with IBM-designed IRQ13 behaviour.
1056 * Don't touch unless you *really* know how it works. 1057 * Don't touch unless you *really* know how it works.
1057 */ 1058 */
1058 asmlinkage void math_state_restore(void) 1059 asmlinkage void math_state_restore(void)
1059 { 1060 {
1060 struct task_struct *me = current; 1061 struct task_struct *me = current;
1061 clts(); /* Allow maths ops (or we recurse) */ 1062 clts(); /* Allow maths ops (or we recurse) */
1062 1063
1063 if (!used_math()) 1064 if (!used_math())
1064 init_fpu(me); 1065 init_fpu(me);
1065 restore_fpu_checking(&me->thread.i387.fxsave); 1066 restore_fpu_checking(&me->thread.i387.fxsave);
1066 task_thread_info(me)->status |= TS_USEDFPU; 1067 task_thread_info(me)->status |= TS_USEDFPU;
1067 me->fpu_counter++; 1068 me->fpu_counter++;
1068 } 1069 }
1069 1070
1070 void __init trap_init(void) 1071 void __init trap_init(void)
1071 { 1072 {
1072 set_intr_gate(0,&divide_error); 1073 set_intr_gate(0,&divide_error);
1073 set_intr_gate_ist(1,&debug,DEBUG_STACK); 1074 set_intr_gate_ist(1,&debug,DEBUG_STACK);
1074 set_intr_gate_ist(2,&nmi,NMI_STACK); 1075 set_intr_gate_ist(2,&nmi,NMI_STACK);
1075 set_system_gate_ist(3,&int3,DEBUG_STACK); /* int3 can be called from all */ 1076 set_system_gate_ist(3,&int3,DEBUG_STACK); /* int3 can be called from all */
1076 set_system_gate(4,&overflow); /* int4 can be called from all */ 1077 set_system_gate(4,&overflow); /* int4 can be called from all */
1077 set_intr_gate(5,&bounds); 1078 set_intr_gate(5,&bounds);
1078 set_intr_gate(6,&invalid_op); 1079 set_intr_gate(6,&invalid_op);
1079 set_intr_gate(7,&device_not_available); 1080 set_intr_gate(7,&device_not_available);
1080 set_intr_gate_ist(8,&double_fault, DOUBLEFAULT_STACK); 1081 set_intr_gate_ist(8,&double_fault, DOUBLEFAULT_STACK);
1081 set_intr_gate(9,&coprocessor_segment_overrun); 1082 set_intr_gate(9,&coprocessor_segment_overrun);
1082 set_intr_gate(10,&invalid_TSS); 1083 set_intr_gate(10,&invalid_TSS);
1083 set_intr_gate(11,&segment_not_present); 1084 set_intr_gate(11,&segment_not_present);
1084 set_intr_gate_ist(12,&stack_segment,STACKFAULT_STACK); 1085 set_intr_gate_ist(12,&stack_segment,STACKFAULT_STACK);
1085 set_intr_gate(13,&general_protection); 1086 set_intr_gate(13,&general_protection);
1086 set_intr_gate(14,&page_fault); 1087 set_intr_gate(14,&page_fault);
1087 set_intr_gate(15,&spurious_interrupt_bug); 1088 set_intr_gate(15,&spurious_interrupt_bug);
1088 set_intr_gate(16,&coprocessor_error); 1089 set_intr_gate(16,&coprocessor_error);
1089 set_intr_gate(17,&alignment_check); 1090 set_intr_gate(17,&alignment_check);
1090 #ifdef CONFIG_X86_MCE 1091 #ifdef CONFIG_X86_MCE
1091 set_intr_gate_ist(18,&machine_check, MCE_STACK); 1092 set_intr_gate_ist(18,&machine_check, MCE_STACK);
1092 #endif 1093 #endif
1093 set_intr_gate(19,&simd_coprocessor_error); 1094 set_intr_gate(19,&simd_coprocessor_error);
1094 1095
1095 #ifdef CONFIG_IA32_EMULATION 1096 #ifdef CONFIG_IA32_EMULATION
1096 set_system_gate(IA32_SYSCALL_VECTOR, ia32_syscall); 1097 set_system_gate(IA32_SYSCALL_VECTOR, ia32_syscall);
1097 #endif 1098 #endif
1098 1099
1099 /* 1100 /*
1100 * Should be a barrier for any external CPU state. 1101 * Should be a barrier for any external CPU state.
1101 */ 1102 */
1102 cpu_init(); 1103 cpu_init();
1103 } 1104 }
1104 1105
1105 1106
1106 static int __init oops_setup(char *s) 1107 static int __init oops_setup(char *s)
1107 { 1108 {
1108 if (!s) 1109 if (!s)
1109 return -EINVAL; 1110 return -EINVAL;
1110 if (!strcmp(s, "panic")) 1111 if (!strcmp(s, "panic"))
1111 panic_on_oops = 1; 1112 panic_on_oops = 1;
1112 return 0; 1113 return 0;
1113 } 1114 }
1114 early_param("oops", oops_setup); 1115 early_param("oops", oops_setup);
1115 1116
1116 static int __init kstack_setup(char *s) 1117 static int __init kstack_setup(char *s)
1117 { 1118 {
1118 if (!s) 1119 if (!s)
1119 return -EINVAL; 1120 return -EINVAL;
1120 kstack_depth_to_print = simple_strtoul(s,NULL,0); 1121 kstack_depth_to_print = simple_strtoul(s,NULL,0);
1121 return 0; 1122 return 0;
1122 } 1123 }
1123 early_param("kstack", kstack_setup); 1124 early_param("kstack", kstack_setup);
1124 1125
arch/xtensa/kernel/traps.c
1 /* 1 /*
2 * arch/xtensa/kernel/traps.c 2 * arch/xtensa/kernel/traps.c
3 * 3 *
4 * Exception handling. 4 * Exception handling.
5 * 5 *
6 * Derived from code with the following copyrights: 6 * Derived from code with the following copyrights:
7 * Copyright (C) 1994 - 1999 by Ralf Baechle 7 * Copyright (C) 1994 - 1999 by Ralf Baechle
8 * Modified for R3000 by Paul M. Antoine, 1995, 1996 8 * Modified for R3000 by Paul M. Antoine, 1995, 1996
9 * Complete output from die() by Ulf Carlsson, 1998 9 * Complete output from die() by Ulf Carlsson, 1998
10 * Copyright (C) 1999 Silicon Graphics, Inc. 10 * Copyright (C) 1999 Silicon Graphics, Inc.
11 * 11 *
12 * Essentially rewritten for the Xtensa architecture port. 12 * Essentially rewritten for the Xtensa architecture port.
13 * 13 *
14 * Copyright (C) 2001 - 2005 Tensilica Inc. 14 * Copyright (C) 2001 - 2005 Tensilica Inc.
15 * 15 *
16 * Joe Taylor <joe@tensilica.com, joetylr@yahoo.com> 16 * Joe Taylor <joe@tensilica.com, joetylr@yahoo.com>
17 * Chris Zankel <chris@zankel.net> 17 * Chris Zankel <chris@zankel.net>
18 * Marc Gauthier<marc@tensilica.com, marc@alumni.uwaterloo.ca> 18 * Marc Gauthier<marc@tensilica.com, marc@alumni.uwaterloo.ca>
19 * Kevin Chea 19 * Kevin Chea
20 * 20 *
21 * This file is subject to the terms and conditions of the GNU General Public 21 * This file is subject to the terms and conditions of the GNU General Public
22 * License. See the file "COPYING" in the main directory of this archive 22 * License. See the file "COPYING" in the main directory of this archive
23 * for more details. 23 * for more details.
24 */ 24 */
25 25
26 #include <linux/kernel.h> 26 #include <linux/kernel.h>
27 #include <linux/sched.h> 27 #include <linux/sched.h>
28 #include <linux/init.h> 28 #include <linux/init.h>
29 #include <linux/module.h> 29 #include <linux/module.h>
30 #include <linux/stringify.h> 30 #include <linux/stringify.h>
31 #include <linux/kallsyms.h> 31 #include <linux/kallsyms.h>
32 #include <linux/delay.h> 32 #include <linux/delay.h>
33 33
34 #include <asm/ptrace.h> 34 #include <asm/ptrace.h>
35 #include <asm/timex.h> 35 #include <asm/timex.h>
36 #include <asm/uaccess.h> 36 #include <asm/uaccess.h>
37 #include <asm/pgtable.h> 37 #include <asm/pgtable.h>
38 #include <asm/processor.h> 38 #include <asm/processor.h>
39 39
40 #ifdef CONFIG_KGDB 40 #ifdef CONFIG_KGDB
41 extern int gdb_enter; 41 extern int gdb_enter;
42 extern int return_from_debug_flag; 42 extern int return_from_debug_flag;
43 #endif 43 #endif
44 44
45 /* 45 /*
46 * Machine specific interrupt handlers 46 * Machine specific interrupt handlers
47 */ 47 */
48 48
49 extern void kernel_exception(void); 49 extern void kernel_exception(void);
50 extern void user_exception(void); 50 extern void user_exception(void);
51 51
52 extern void fast_syscall_kernel(void); 52 extern void fast_syscall_kernel(void);
53 extern void fast_syscall_user(void); 53 extern void fast_syscall_user(void);
54 extern void fast_alloca(void); 54 extern void fast_alloca(void);
55 extern void fast_unaligned(void); 55 extern void fast_unaligned(void);
56 extern void fast_second_level_miss(void); 56 extern void fast_second_level_miss(void);
57 extern void fast_store_prohibited(void); 57 extern void fast_store_prohibited(void);
58 extern void fast_coprocessor(void); 58 extern void fast_coprocessor(void);
59 59
60 extern void do_illegal_instruction (struct pt_regs*); 60 extern void do_illegal_instruction (struct pt_regs*);
61 extern void do_interrupt (struct pt_regs*); 61 extern void do_interrupt (struct pt_regs*);
62 extern void do_unaligned_user (struct pt_regs*); 62 extern void do_unaligned_user (struct pt_regs*);
63 extern void do_multihit (struct pt_regs*, unsigned long); 63 extern void do_multihit (struct pt_regs*, unsigned long);
64 extern void do_page_fault (struct pt_regs*, unsigned long); 64 extern void do_page_fault (struct pt_regs*, unsigned long);
65 extern void do_debug (struct pt_regs*); 65 extern void do_debug (struct pt_regs*);
66 extern void system_call (struct pt_regs*); 66 extern void system_call (struct pt_regs*);
67 67
68 /* 68 /*
69 * The vector table must be preceded by a save area (which 69 * The vector table must be preceded by a save area (which
70 * implies it must be in RAM, unless one places RAM immediately 70 * implies it must be in RAM, unless one places RAM immediately
71 * before a ROM and puts the vector at the start of the ROM (!)) 71 * before a ROM and puts the vector at the start of the ROM (!))
72 */ 72 */
73 73
74 #define KRNL 0x01 74 #define KRNL 0x01
75 #define USER 0x02 75 #define USER 0x02
76 76
77 #define COPROCESSOR(x) \ 77 #define COPROCESSOR(x) \
78 { EXCCAUSE_COPROCESSOR ## x ## _DISABLED, USER, fast_coprocessor } 78 { EXCCAUSE_COPROCESSOR ## x ## _DISABLED, USER, fast_coprocessor }
79 79
80 typedef struct { 80 typedef struct {
81 int cause; 81 int cause;
82 int fast; 82 int fast;
83 void* handler; 83 void* handler;
84 } dispatch_init_table_t; 84 } dispatch_init_table_t;
85 85
86 dispatch_init_table_t __init dispatch_init_table[] = { 86 dispatch_init_table_t __init dispatch_init_table[] = {
87 87
88 { EXCCAUSE_ILLEGAL_INSTRUCTION, 0, do_illegal_instruction}, 88 { EXCCAUSE_ILLEGAL_INSTRUCTION, 0, do_illegal_instruction},
89 { EXCCAUSE_SYSTEM_CALL, KRNL, fast_syscall_kernel }, 89 { EXCCAUSE_SYSTEM_CALL, KRNL, fast_syscall_kernel },
90 { EXCCAUSE_SYSTEM_CALL, USER, fast_syscall_user }, 90 { EXCCAUSE_SYSTEM_CALL, USER, fast_syscall_user },
91 { EXCCAUSE_SYSTEM_CALL, 0, system_call }, 91 { EXCCAUSE_SYSTEM_CALL, 0, system_call },
92 /* EXCCAUSE_INSTRUCTION_FETCH unhandled */ 92 /* EXCCAUSE_INSTRUCTION_FETCH unhandled */
93 /* EXCCAUSE_LOAD_STORE_ERROR unhandled*/ 93 /* EXCCAUSE_LOAD_STORE_ERROR unhandled*/
94 { EXCCAUSE_LEVEL1_INTERRUPT, 0, do_interrupt }, 94 { EXCCAUSE_LEVEL1_INTERRUPT, 0, do_interrupt },
95 { EXCCAUSE_ALLOCA, USER|KRNL, fast_alloca }, 95 { EXCCAUSE_ALLOCA, USER|KRNL, fast_alloca },
96 /* EXCCAUSE_INTEGER_DIVIDE_BY_ZERO unhandled */ 96 /* EXCCAUSE_INTEGER_DIVIDE_BY_ZERO unhandled */
97 /* EXCCAUSE_PRIVILEGED unhandled */ 97 /* EXCCAUSE_PRIVILEGED unhandled */
98 #if XCHAL_UNALIGNED_LOAD_EXCEPTION || XCHAL_UNALIGNED_STORE_EXCEPTION 98 #if XCHAL_UNALIGNED_LOAD_EXCEPTION || XCHAL_UNALIGNED_STORE_EXCEPTION
99 #ifdef CONFIG_UNALIGNED_USER 99 #ifdef CONFIG_UNALIGNED_USER
100 { EXCCAUSE_UNALIGNED, USER, fast_unaligned }, 100 { EXCCAUSE_UNALIGNED, USER, fast_unaligned },
101 #else 101 #else
102 { EXCCAUSE_UNALIGNED, 0, do_unaligned_user }, 102 { EXCCAUSE_UNALIGNED, 0, do_unaligned_user },
103 #endif 103 #endif
104 { EXCCAUSE_UNALIGNED, KRNL, fast_unaligned }, 104 { EXCCAUSE_UNALIGNED, KRNL, fast_unaligned },
105 #endif 105 #endif
106 { EXCCAUSE_ITLB_MISS, 0, do_page_fault }, 106 { EXCCAUSE_ITLB_MISS, 0, do_page_fault },
107 { EXCCAUSE_ITLB_MISS, USER|KRNL, fast_second_level_miss}, 107 { EXCCAUSE_ITLB_MISS, USER|KRNL, fast_second_level_miss},
108 { EXCCAUSE_ITLB_MULTIHIT, 0, do_multihit }, 108 { EXCCAUSE_ITLB_MULTIHIT, 0, do_multihit },
109 { EXCCAUSE_ITLB_PRIVILEGE, 0, do_page_fault }, 109 { EXCCAUSE_ITLB_PRIVILEGE, 0, do_page_fault },
110 /* EXCCAUSE_SIZE_RESTRICTION unhandled */ 110 /* EXCCAUSE_SIZE_RESTRICTION unhandled */
111 { EXCCAUSE_FETCH_CACHE_ATTRIBUTE, 0, do_page_fault }, 111 { EXCCAUSE_FETCH_CACHE_ATTRIBUTE, 0, do_page_fault },
112 { EXCCAUSE_DTLB_MISS, USER|KRNL, fast_second_level_miss}, 112 { EXCCAUSE_DTLB_MISS, USER|KRNL, fast_second_level_miss},
113 { EXCCAUSE_DTLB_MISS, 0, do_page_fault }, 113 { EXCCAUSE_DTLB_MISS, 0, do_page_fault },
114 { EXCCAUSE_DTLB_MULTIHIT, 0, do_multihit }, 114 { EXCCAUSE_DTLB_MULTIHIT, 0, do_multihit },
115 { EXCCAUSE_DTLB_PRIVILEGE, 0, do_page_fault }, 115 { EXCCAUSE_DTLB_PRIVILEGE, 0, do_page_fault },
116 /* EXCCAUSE_DTLB_SIZE_RESTRICTION unhandled */ 116 /* EXCCAUSE_DTLB_SIZE_RESTRICTION unhandled */
117 { EXCCAUSE_STORE_CACHE_ATTRIBUTE, USER|KRNL, fast_store_prohibited }, 117 { EXCCAUSE_STORE_CACHE_ATTRIBUTE, USER|KRNL, fast_store_prohibited },
118 { EXCCAUSE_STORE_CACHE_ATTRIBUTE, 0, do_page_fault }, 118 { EXCCAUSE_STORE_CACHE_ATTRIBUTE, 0, do_page_fault },
119 { EXCCAUSE_LOAD_CACHE_ATTRIBUTE, 0, do_page_fault }, 119 { EXCCAUSE_LOAD_CACHE_ATTRIBUTE, 0, do_page_fault },
120 /* XCCHAL_EXCCAUSE_FLOATING_POINT unhandled */ 120 /* XCCHAL_EXCCAUSE_FLOATING_POINT unhandled */
121 #if (XCHAL_CP_MASK & 1) 121 #if (XCHAL_CP_MASK & 1)
122 COPROCESSOR(0), 122 COPROCESSOR(0),
123 #endif 123 #endif
124 #if (XCHAL_CP_MASK & 2) 124 #if (XCHAL_CP_MASK & 2)
125 COPROCESSOR(1), 125 COPROCESSOR(1),
126 #endif 126 #endif
127 #if (XCHAL_CP_MASK & 4) 127 #if (XCHAL_CP_MASK & 4)
128 COPROCESSOR(2), 128 COPROCESSOR(2),
129 #endif 129 #endif
130 #if (XCHAL_CP_MASK & 8) 130 #if (XCHAL_CP_MASK & 8)
131 COPROCESSOR(3), 131 COPROCESSOR(3),
132 #endif 132 #endif
133 #if (XCHAL_CP_MASK & 16) 133 #if (XCHAL_CP_MASK & 16)
134 COPROCESSOR(4), 134 COPROCESSOR(4),
135 #endif 135 #endif
136 #if (XCHAL_CP_MASK & 32) 136 #if (XCHAL_CP_MASK & 32)
137 COPROCESSOR(5), 137 COPROCESSOR(5),
138 #endif 138 #endif
139 #if (XCHAL_CP_MASK & 64) 139 #if (XCHAL_CP_MASK & 64)
140 COPROCESSOR(6), 140 COPROCESSOR(6),
141 #endif 141 #endif
142 #if (XCHAL_CP_MASK & 128) 142 #if (XCHAL_CP_MASK & 128)
143 COPROCESSOR(7), 143 COPROCESSOR(7),
144 #endif 144 #endif
145 { EXCCAUSE_MAPPED_DEBUG, 0, do_debug }, 145 { EXCCAUSE_MAPPED_DEBUG, 0, do_debug },
146 { -1, -1, 0 } 146 { -1, -1, 0 }
147 147
148 }; 148 };
149 149
150 /* The exception table <exc_table> serves two functions: 150 /* The exception table <exc_table> serves two functions:
151 * 1. it contains three dispatch tables (fast_user, fast_kernel, default-c) 151 * 1. it contains three dispatch tables (fast_user, fast_kernel, default-c)
152 * 2. it is a temporary memory buffer for the exception handlers. 152 * 2. it is a temporary memory buffer for the exception handlers.
153 */ 153 */
154 154
155 unsigned long exc_table[EXC_TABLE_SIZE/4]; 155 unsigned long exc_table[EXC_TABLE_SIZE/4];
156 156
157 void die(const char*, struct pt_regs*, long); 157 void die(const char*, struct pt_regs*, long);
158 158
159 static inline void 159 static inline void
160 __die_if_kernel(const char *str, struct pt_regs *regs, long err) 160 __die_if_kernel(const char *str, struct pt_regs *regs, long err)
161 { 161 {
162 if (!user_mode(regs)) 162 if (!user_mode(regs))
163 die(str, regs, err); 163 die(str, regs, err);
164 } 164 }
165 165
166 /* 166 /*
167 * Unhandled Exceptions. Kill user task or panic if in kernel space. 167 * Unhandled Exceptions. Kill user task or panic if in kernel space.
168 */ 168 */
169 169
170 void do_unhandled(struct pt_regs *regs, unsigned long exccause) 170 void do_unhandled(struct pt_regs *regs, unsigned long exccause)
171 { 171 {
172 __die_if_kernel("Caught unhandled exception - should not happen", 172 __die_if_kernel("Caught unhandled exception - should not happen",
173 regs, SIGKILL); 173 regs, SIGKILL);
174 174
175 /* If in user mode, send SIGILL signal to current process */ 175 /* If in user mode, send SIGILL signal to current process */
176 printk("Caught unhandled exception in '%s' " 176 printk("Caught unhandled exception in '%s' "
177 "(pid = %d, pc = %#010lx) - should not happen\n" 177 "(pid = %d, pc = %#010lx) - should not happen\n"
178 "\tEXCCAUSE is %ld\n", 178 "\tEXCCAUSE is %ld\n",
179 current->comm, current->pid, regs->pc, exccause); 179 current->comm, current->pid, regs->pc, exccause);
180 force_sig(SIGILL, current); 180 force_sig(SIGILL, current);
181 } 181 }
182 182
183 /* 183 /*
184 * Multi-hit exception. This is fatal! 184 * Multi-hit exception. This is fatal!
185 */ 185 */
186 186
187 void do_multihit(struct pt_regs *regs, unsigned long exccause) 187 void do_multihit(struct pt_regs *regs, unsigned long exccause)
188 { 188 {
189 die("Caught multihit exception", regs, SIGKILL); 189 die("Caught multihit exception", regs, SIGKILL);
190 } 190 }
191 191
192 /* 192 /*
193 * Level-1 interrupt. 193 * Level-1 interrupt.
194 * We currently have no priority encoding. 194 * We currently have no priority encoding.
195 */ 195 */
196 196
197 unsigned long ignored_level1_interrupts; 197 unsigned long ignored_level1_interrupts;
198 extern void do_IRQ(int, struct pt_regs *); 198 extern void do_IRQ(int, struct pt_regs *);
199 199
200 void do_interrupt (struct pt_regs *regs) 200 void do_interrupt (struct pt_regs *regs)
201 { 201 {
202 unsigned long intread = get_sr (INTREAD); 202 unsigned long intread = get_sr (INTREAD);
203 unsigned long intenable = get_sr (INTENABLE); 203 unsigned long intenable = get_sr (INTENABLE);
204 int i, mask; 204 int i, mask;
205 205
206 /* Handle all interrupts (no priorities). 206 /* Handle all interrupts (no priorities).
207 * (Clear the interrupt before processing, in case it's 207 * (Clear the interrupt before processing, in case it's
208 * edge-triggered or software-generated) 208 * edge-triggered or software-generated)
209 */ 209 */
210 210
211 for (i=0, mask = 1; i < XCHAL_NUM_INTERRUPTS; i++, mask <<= 1) { 211 for (i=0, mask = 1; i < XCHAL_NUM_INTERRUPTS; i++, mask <<= 1) {
212 if (mask & (intread & intenable)) { 212 if (mask & (intread & intenable)) {
213 set_sr (mask, INTCLEAR); 213 set_sr (mask, INTCLEAR);
214 do_IRQ (i,regs); 214 do_IRQ (i,regs);
215 } 215 }
216 } 216 }
217 } 217 }
218 218
219 /* 219 /*
220 * Illegal instruction. Fatal if in kernel space. 220 * Illegal instruction. Fatal if in kernel space.
221 */ 221 */
222 222
223 void 223 void
224 do_illegal_instruction(struct pt_regs *regs) 224 do_illegal_instruction(struct pt_regs *regs)
225 { 225 {
226 __die_if_kernel("Illegal instruction in kernel", regs, SIGKILL); 226 __die_if_kernel("Illegal instruction in kernel", regs, SIGKILL);
227 227
228 /* If in user mode, send SIGILL signal to current process. */ 228 /* If in user mode, send SIGILL signal to current process. */
229 229
230 printk("Illegal Instruction in '%s' (pid = %d, pc = %#010lx)\n", 230 printk("Illegal Instruction in '%s' (pid = %d, pc = %#010lx)\n",
231 current->comm, current->pid, regs->pc); 231 current->comm, current->pid, regs->pc);
232 force_sig(SIGILL, current); 232 force_sig(SIGILL, current);
233 } 233 }
234 234
235 235
236 /* 236 /*
237 * Handle unaligned memory accesses from user space. Kill task. 237 * Handle unaligned memory accesses from user space. Kill task.
238 * 238 *
239 * If CONFIG_UNALIGNED_USER is not set, we don't allow unaligned memory 239 * If CONFIG_UNALIGNED_USER is not set, we don't allow unaligned memory
240 * accesses from user space. 240 * accesses from user space.
241 */ 241 */
242 242
243 #if XCHAL_UNALIGNED_LOAD_EXCEPTION || XCHAL_UNALIGNED_STORE_EXCEPTION 243 #if XCHAL_UNALIGNED_LOAD_EXCEPTION || XCHAL_UNALIGNED_STORE_EXCEPTION
244 #ifndef CONFIG_UNALIGNED_USER 244 #ifndef CONFIG_UNALIGNED_USER
245 void 245 void
246 do_unaligned_user (struct pt_regs *regs) 246 do_unaligned_user (struct pt_regs *regs)
247 { 247 {
248 siginfo_t info; 248 siginfo_t info;
249 249
250 __die_if_kernel("Unhandled unaligned exception in kernel", 250 __die_if_kernel("Unhandled unaligned exception in kernel",
251 regs, SIGKILL); 251 regs, SIGKILL);
252 252
253 current->thread.bad_vaddr = regs->excvaddr; 253 current->thread.bad_vaddr = regs->excvaddr;
254 current->thread.error_code = -3; 254 current->thread.error_code = -3;
255 printk("Unaligned memory access to %08lx in '%s' " 255 printk("Unaligned memory access to %08lx in '%s' "
256 "(pid = %d, pc = %#010lx)\n", 256 "(pid = %d, pc = %#010lx)\n",
257 regs->excvaddr, current->comm, current->pid, regs->pc); 257 regs->excvaddr, current->comm, current->pid, regs->pc);
258 info.si_signo = SIGBUS; 258 info.si_signo = SIGBUS;
259 info.si_errno = 0; 259 info.si_errno = 0;
260 info.si_code = BUS_ADRALN; 260 info.si_code = BUS_ADRALN;
261 info.si_addr = (void *) regs->excvaddr; 261 info.si_addr = (void *) regs->excvaddr;
262 force_sig_info(SIGBUS, &info, current); 262 force_sig_info(SIGBUS, &info, current);
263 263
264 } 264 }
265 #endif 265 #endif
266 #endif 266 #endif
267 267
268 void 268 void
269 do_debug(struct pt_regs *regs) 269 do_debug(struct pt_regs *regs)
270 { 270 {
271 #ifdef CONFIG_KGDB 271 #ifdef CONFIG_KGDB
272 /* If remote debugging is configured AND enabled, we give control to 272 /* If remote debugging is configured AND enabled, we give control to
273 * kgdb. Otherwise, we fall through, perhaps giving control to the 273 * kgdb. Otherwise, we fall through, perhaps giving control to the
274 * native debugger. 274 * native debugger.
275 */ 275 */
276 276
277 if (gdb_enter) { 277 if (gdb_enter) {
278 extern void gdb_handle_exception(struct pt_regs *); 278 extern void gdb_handle_exception(struct pt_regs *);
279 gdb_handle_exception(regs); 279 gdb_handle_exception(regs);
280 return_from_debug_flag = 1; 280 return_from_debug_flag = 1;
281 return; 281 return;
282 } 282 }
283 #endif 283 #endif
284 284
285 __die_if_kernel("Breakpoint in kernel", regs, SIGKILL); 285 __die_if_kernel("Breakpoint in kernel", regs, SIGKILL);
286 286
287 /* If in user mode, send SIGTRAP signal to current process */ 287 /* If in user mode, send SIGTRAP signal to current process */
288 288
289 force_sig(SIGTRAP, current); 289 force_sig(SIGTRAP, current);
290 } 290 }
291 291
292 292
293 /* 293 /*
294 * Initialize dispatch tables. 294 * Initialize dispatch tables.
295 * 295 *
296 * The exception vectors are stored compressed in the __init section in the 296 * The exception vectors are stored compressed in the __init section in the
297 * dispatch_init_table. This function initializes the following three tables 297 * dispatch_init_table. This function initializes the following three tables
298 * from that compressed table: 298 * from that compressed table:
299 * - fast user: first dispatch table for user exceptions 299 * - fast user: first dispatch table for user exceptions
300 * - fast kernel: first dispatch table for kernel exceptions 300 * - fast kernel: first dispatch table for kernel exceptions
301 * - default C-handler: the C-handler called by the default fast handler. 301 * - default C-handler: the C-handler called by the default fast handler.
302 * 302 *
303 * See vectors.S for more details. 303 * See vectors.S for more details.
304 */ 304 */
305 305
306 #define set_handler(idx,handler) (exc_table[idx] = (unsigned long) (handler)) 306 #define set_handler(idx,handler) (exc_table[idx] = (unsigned long) (handler))
307 307
308 void trap_init(void) 308 void trap_init(void)
309 { 309 {
310 int i; 310 int i;
311 311
312 /* Setup default vectors. */ 312 /* Setup default vectors. */
313 313
314 for(i = 0; i < 64; i++) { 314 for(i = 0; i < 64; i++) {
315 set_handler(EXC_TABLE_FAST_USER/4 + i, user_exception); 315 set_handler(EXC_TABLE_FAST_USER/4 + i, user_exception);
316 set_handler(EXC_TABLE_FAST_KERNEL/4 + i, kernel_exception); 316 set_handler(EXC_TABLE_FAST_KERNEL/4 + i, kernel_exception);
317 set_handler(EXC_TABLE_DEFAULT/4 + i, do_unhandled); 317 set_handler(EXC_TABLE_DEFAULT/4 + i, do_unhandled);
318 } 318 }
319 319
320 /* Setup specific handlers. */ 320 /* Setup specific handlers. */
321 321
322 for(i = 0; dispatch_init_table[i].cause >= 0; i++) { 322 for(i = 0; dispatch_init_table[i].cause >= 0; i++) {
323 323
324 int fast = dispatch_init_table[i].fast; 324 int fast = dispatch_init_table[i].fast;
325 int cause = dispatch_init_table[i].cause; 325 int cause = dispatch_init_table[i].cause;
326 void *handler = dispatch_init_table[i].handler; 326 void *handler = dispatch_init_table[i].handler;
327 327
328 if (fast == 0) 328 if (fast == 0)
329 set_handler (EXC_TABLE_DEFAULT/4 + cause, handler); 329 set_handler (EXC_TABLE_DEFAULT/4 + cause, handler);
330 if (fast && fast & USER) 330 if (fast && fast & USER)
331 set_handler (EXC_TABLE_FAST_USER/4 + cause, handler); 331 set_handler (EXC_TABLE_FAST_USER/4 + cause, handler);
332 if (fast && fast & KRNL) 332 if (fast && fast & KRNL)
333 set_handler (EXC_TABLE_FAST_KERNEL/4 + cause, handler); 333 set_handler (EXC_TABLE_FAST_KERNEL/4 + cause, handler);
334 } 334 }
335 335
336 /* Initialize EXCSAVE_1 to hold the address of the exception table. */ 336 /* Initialize EXCSAVE_1 to hold the address of the exception table. */
337 337
338 i = (unsigned long)exc_table; 338 i = (unsigned long)exc_table;
339 __asm__ __volatile__("wsr %0, "__stringify(EXCSAVE_1)"\n" : : "a" (i)); 339 __asm__ __volatile__("wsr %0, "__stringify(EXCSAVE_1)"\n" : : "a" (i));
340 } 340 }
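
To make the fast/default routing concrete: a hedged reading of the two EXCCAUSE_DTLB_MISS rows in dispatch_init_table above (one with fast == USER|KRNL, one with fast == 0) is that trap_init() leaves all three tables populated for that cause:

    /* Illustrative only -- derived from the loop above, not literal output. */
    exc_table[EXC_TABLE_FAST_USER/4   + EXCCAUSE_DTLB_MISS] == (unsigned long) fast_second_level_miss;
    exc_table[EXC_TABLE_FAST_KERNEL/4 + EXCCAUSE_DTLB_MISS] == (unsigned long) fast_second_level_miss;
    exc_table[EXC_TABLE_DEFAULT/4     + EXCCAUSE_DTLB_MISS] == (unsigned long) do_page_fault;

So a DTLB miss is first attempted by the fast assembly handler in either mode, and only falls back to do_page_fault through the default table.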
341 341
342 /* 342 /*
343 * This function dumps the current valid window frame and other base registers. 343 * This function dumps the current valid window frame and other base registers.
344 */ 344 */
345 345
346 void show_regs(struct pt_regs * regs) 346 void show_regs(struct pt_regs * regs)
347 { 347 {
348 int i, wmask; 348 int i, wmask;
349 349
350 wmask = regs->wmask & ~1; 350 wmask = regs->wmask & ~1;
351 351
352 for (i = 0; i < 32; i++) { 352 for (i = 0; i < 32; i++) {
353 if (wmask & (1 << (i / 4))) 353 if (wmask & (1 << (i / 4)))
354 break; 354 break;
355 if ((i % 8) == 0) 355 if ((i % 8) == 0)
356 printk ("\n" KERN_INFO "a%02d: ", i); 356 printk ("\n" KERN_INFO "a%02d: ", i);
357 printk("%08lx ", regs->areg[i]); 357 printk("%08lx ", regs->areg[i]);
358 } 358 }
359 printk("\n"); 359 printk("\n");
360 360
361 printk("pc: %08lx, ps: %08lx, depc: %08lx, excvaddr: %08lx\n", 361 printk("pc: %08lx, ps: %08lx, depc: %08lx, excvaddr: %08lx\n",
362 regs->pc, regs->ps, regs->depc, regs->excvaddr); 362 regs->pc, regs->ps, regs->depc, regs->excvaddr);
363 printk("lbeg: %08lx, lend: %08lx lcount: %08lx, sar: %08lx\n", 363 printk("lbeg: %08lx, lend: %08lx lcount: %08lx, sar: %08lx\n",
364 regs->lbeg, regs->lend, regs->lcount, regs->sar); 364 regs->lbeg, regs->lend, regs->lcount, regs->sar);
365 if (user_mode(regs)) 365 if (user_mode(regs))
366 printk("wb: %08lx, ws: %08lx, wmask: %08lx, syscall: %ld\n", 366 printk("wb: %08lx, ws: %08lx, wmask: %08lx, syscall: %ld\n",
367 regs->windowbase, regs->windowstart, regs->wmask, 367 regs->windowbase, regs->windowstart, regs->wmask,
368 regs->syscall); 368 regs->syscall);
369 } 369 }
370 370
371 void show_trace(struct task_struct *task, unsigned long *sp) 371 void show_trace(struct task_struct *task, unsigned long *sp)
372 { 372 {
373 unsigned long a0, a1, pc; 373 unsigned long a0, a1, pc;
374 unsigned long sp_start, sp_end; 374 unsigned long sp_start, sp_end;
375 375
376 a1 = (unsigned long)sp; 376 a1 = (unsigned long)sp;
377 377
378 if (a1 == 0) 378 if (a1 == 0)
379 __asm__ __volatile__ ("mov %0, a1\n" : "=a"(a1)); 379 __asm__ __volatile__ ("mov %0, a1\n" : "=a"(a1));
380 380
381 381
382 sp_start = a1 & ~(THREAD_SIZE-1); 382 sp_start = a1 & ~(THREAD_SIZE-1);
383 sp_end = sp_start + THREAD_SIZE; 383 sp_end = sp_start + THREAD_SIZE;
384 384
385 printk("Call Trace:"); 385 printk("Call Trace:");
386 #ifdef CONFIG_KALLSYMS 386 #ifdef CONFIG_KALLSYMS
387 printk("\n"); 387 printk("\n");
388 #endif 388 #endif
389 spill_registers(); 389 spill_registers();
390 390
391 while (a1 > sp_start && a1 < sp_end) { 391 while (a1 > sp_start && a1 < sp_end) {
392 sp = (unsigned long*)a1; 392 sp = (unsigned long*)a1;
393 393
394 a0 = *(sp - 4); 394 a0 = *(sp - 4);
395 a1 = *(sp - 3); 395 a1 = *(sp - 3);
396 396
397 if (a1 <= (unsigned long) sp) 397 if (a1 <= (unsigned long) sp)
398 break; 398 break;
399 399
400 pc = MAKE_PC_FROM_RA(a0, a1); 400 pc = MAKE_PC_FROM_RA(a0, a1);
401 401
402 if (kernel_text_address(pc)) { 402 if (kernel_text_address(pc)) {
403 printk(" [<%08lx>] ", pc); 403 printk(" [<%08lx>] ", pc);
404 print_symbol("%s\n", pc); 404 print_symbol("%s\n", pc);
405 } 405 }
406 } 406 }
407 printk("\n"); 407 printk("\n");
408 } 408 }
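
The windowed-call ABI stores the window increment in the top bits of the return address, which is why the walker cannot use a0 directly and calls MAKE_PC_FROM_RA(a0, a1). A sketch of what such a macro does, assuming the usual 30-bit split (the exact masks are an assumption; see asm/ptrace.h for the real definition):

    /* Assumed shape: keep the low 30 bits of the return address and
     * splice the top two (region) bits back in from a known kernel
     * address such as the stack pointer. */
    #define MAKE_PC_FROM_RA(ra, sp) (((ra) & 0x3fffffff) | ((sp) & 0xc0000000))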
409 409
410 /* 410 /*
411 * This routine abuses get_user()/put_user() to reference pointers 411 * This routine abuses get_user()/put_user() to reference pointers
412 * with at least a bit of error checking ... 412 * with at least a bit of error checking ...
413 */ 413 */
414 414
415 static int kstack_depth_to_print = 24; 415 static int kstack_depth_to_print = 24;
416 416
417 void show_stack(struct task_struct *task, unsigned long *sp) 417 void show_stack(struct task_struct *task, unsigned long *sp)
418 { 418 {
419 int i = 0; 419 int i = 0;
420 unsigned long *stack; 420 unsigned long *stack;
421 421
422 if (sp == 0) 422 if (sp == 0)
423 __asm__ __volatile__ ("mov %0, a1\n" : "=a"(sp)); 423 __asm__ __volatile__ ("mov %0, a1\n" : "=a"(sp));
424 424
425 stack = sp; 425 stack = sp;
426 426
427 printk("\nStack: "); 427 printk("\nStack: ");
428 428
429 for (i = 0; i < kstack_depth_to_print; i++) { 429 for (i = 0; i < kstack_depth_to_print; i++) {
430 if (kstack_end(sp)) 430 if (kstack_end(sp))
431 break; 431 break;
432 if (i && ((i % 8) == 0)) 432 if (i && ((i % 8) == 0))
433 printk("\n "); 433 printk("\n ");
434 printk("%08lx ", *sp++); 434 printk("%08lx ", *sp++);
435 } 435 }
436 printk("\n"); 436 printk("\n");
437 show_trace(task, stack); 437 show_trace(task, stack);
438 } 438 }
439 439
440 void dump_stack(void) 440 void dump_stack(void)
441 { 441 {
442 show_stack(current, NULL); 442 show_stack(current, NULL);
443 } 443 }
444 444
445 EXPORT_SYMBOL(dump_stack); 445 EXPORT_SYMBOL(dump_stack);
446 446
447 447
448 void show_code(unsigned int *pc) 448 void show_code(unsigned int *pc)
449 { 449 {
450 long i; 450 long i;
451 451
452 printk("\nCode:"); 452 printk("\nCode:");
453 453
454 for(i = -3 ; i < 6 ; i++) { 454 for(i = -3 ; i < 6 ; i++) {
455 unsigned long insn; 455 unsigned long insn;
456 if (__get_user(insn, pc + i)) { 456 if (__get_user(insn, pc + i)) {
457 printk(" (Bad address in pc)\n"); 457 printk(" (Bad address in pc)\n");
458 break; 458 break;
459 } 459 }
460 printk("%c%08lx%c",(i?' ':'<'),insn,(i?' ':'>')); 460 printk("%c%08lx%c",(i?' ':'<'),insn,(i?' ':'>'));
461 } 461 }
462 } 462 }
463 463
464 DEFINE_SPINLOCK(die_lock); 464 DEFINE_SPINLOCK(die_lock);
465 465
466 void die(const char * str, struct pt_regs * regs, long err) 466 void die(const char * str, struct pt_regs * regs, long err)
467 { 467 {
468 static int die_counter; 468 static int die_counter;
469 int nl = 0; 469 int nl = 0;
470 470
471 console_verbose(); 471 console_verbose();
472 spin_lock_irq(&die_lock); 472 spin_lock_irq(&die_lock);
473 473
474 printk("%s: sig: %ld [#%d]\n", str, err, ++die_counter); 474 printk("%s: sig: %ld [#%d]\n", str, err, ++die_counter);
475 #ifdef CONFIG_PREEMPT 475 #ifdef CONFIG_PREEMPT
476 printk("PREEMPT "); 476 printk("PREEMPT ");
477 nl = 1; 477 nl = 1;
478 #endif 478 #endif
479 if (nl) 479 if (nl)
480 printk("\n"); 480 printk("\n");
481 show_regs(regs); 481 show_regs(regs);
482 if (!user_mode(regs)) 482 if (!user_mode(regs))
483 show_stack(NULL, (unsigned long*)regs->areg[1]); 483 show_stack(NULL, (unsigned long*)regs->areg[1]);
484 484
485 add_taint(TAINT_DIE);
485 spin_unlock_irq(&die_lock); 486 spin_unlock_irq(&die_lock);
486 487
487 if (in_interrupt()) 488 if (in_interrupt())
488 panic("Fatal exception in interrupt"); 489 panic("Fatal exception in interrupt");
489 490
490 if (panic_on_oops) 491 if (panic_on_oops)
491 panic("Fatal exception"); 492 panic("Fatal exception");
492 493
493 do_exit(err); 494 do_exit(err);
494 } 495 }
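
The add_taint(TAINT_DIE) line above is the whole point of the patch, and the same one-line change is repeated in each architecture's die() across this commit. A minimal sketch of the pattern (a hypothetical skeleton, not any particular arch's code):

    void die(const char *str, struct pt_regs *regs, long err)
    {
    	console_verbose();
    	spin_lock_irq(&die_lock);
    	/* ... print the oops: registers, stack, etc. ... */
    	add_taint(TAINT_DIE);		/* taint while still under die_lock */
    	spin_unlock_irq(&die_lock);
    	do_exit(err);
    }

Setting the bit before the lock is dropped means any oops racing in on another CPU already sees the kernel as tainted.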
495 496
496 497
497 498
include/linux/kernel.h
1 #ifndef _LINUX_KERNEL_H 1 #ifndef _LINUX_KERNEL_H
2 #define _LINUX_KERNEL_H 2 #define _LINUX_KERNEL_H
3 3
4 /* 4 /*
5 * 'kernel.h' contains some often-used function prototypes etc 5 * 'kernel.h' contains some often-used function prototypes etc
6 */ 6 */
7 7
8 #ifdef __KERNEL__ 8 #ifdef __KERNEL__
9 9
10 #include <stdarg.h> 10 #include <stdarg.h>
11 #include <linux/linkage.h> 11 #include <linux/linkage.h>
12 #include <linux/stddef.h> 12 #include <linux/stddef.h>
13 #include <linux/types.h> 13 #include <linux/types.h>
14 #include <linux/compiler.h> 14 #include <linux/compiler.h>
15 #include <linux/bitops.h> 15 #include <linux/bitops.h>
16 #include <linux/log2.h> 16 #include <linux/log2.h>
17 #include <asm/byteorder.h> 17 #include <asm/byteorder.h>
18 #include <asm/bug.h> 18 #include <asm/bug.h>
19 19
20 extern const char linux_banner[]; 20 extern const char linux_banner[];
21 extern const char linux_proc_banner[]; 21 extern const char linux_proc_banner[];
22 22
23 #define INT_MAX ((int)(~0U>>1)) 23 #define INT_MAX ((int)(~0U>>1))
24 #define INT_MIN (-INT_MAX - 1) 24 #define INT_MIN (-INT_MAX - 1)
25 #define UINT_MAX (~0U) 25 #define UINT_MAX (~0U)
26 #define LONG_MAX ((long)(~0UL>>1)) 26 #define LONG_MAX ((long)(~0UL>>1))
27 #define LONG_MIN (-LONG_MAX - 1) 27 #define LONG_MIN (-LONG_MAX - 1)
28 #define ULONG_MAX (~0UL) 28 #define ULONG_MAX (~0UL)
29 #define LLONG_MAX ((long long)(~0ULL>>1)) 29 #define LLONG_MAX ((long long)(~0ULL>>1))
30 #define LLONG_MIN (-LLONG_MAX - 1) 30 #define LLONG_MIN (-LLONG_MAX - 1)
31 #define ULLONG_MAX (~0ULL) 31 #define ULLONG_MAX (~0ULL)
32 32
33 #define STACK_MAGIC 0xdeadbeef 33 #define STACK_MAGIC 0xdeadbeef
34 34
35 #define ALIGN(x,a) __ALIGN_MASK(x,(typeof(x))(a)-1) 35 #define ALIGN(x,a) __ALIGN_MASK(x,(typeof(x))(a)-1)
36 #define __ALIGN_MASK(x,mask) (((x)+(mask))&~(mask)) 36 #define __ALIGN_MASK(x,mask) (((x)+(mask))&~(mask))
37 37
38 #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]) + __must_be_array(arr)) 38 #define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]) + __must_be_array(arr))
39 39
40 #define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f)) 40 #define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f))
41 #define DIV_ROUND_UP(n,d) (((n) + (d) - 1) / (d)) 41 #define DIV_ROUND_UP(n,d) (((n) + (d) - 1) / (d))
42 #define roundup(x, y) ((((x) + ((y) - 1)) / (y)) * (y)) 42 #define roundup(x, y) ((((x) + ((y) - 1)) / (y)) * (y))
43 43
44 /** 44 /**
45 * upper_32_bits - return bits 32-63 of a number 45 * upper_32_bits - return bits 32-63 of a number
46 * @n: the number we're accessing 46 * @n: the number we're accessing
47 * 47 *
48 * A basic shift-right of a 64- or 32-bit quantity. Use this to suppress 48 * A basic shift-right of a 64- or 32-bit quantity. Use this to suppress
49 * the "right shift count >= width of type" warning when that quantity is 49 * the "right shift count >= width of type" warning when that quantity is
50 * 32-bits. 50 * 32-bits.
51 */ 51 */
52 #define upper_32_bits(n) ((u32)(((n) >> 16) >> 16)) 52 #define upper_32_bits(n) ((u32)(((n) >> 16) >> 16))
53 53
54 #define KERN_EMERG "<0>" /* system is unusable */ 54 #define KERN_EMERG "<0>" /* system is unusable */
55 #define KERN_ALERT "<1>" /* action must be taken immediately */ 55 #define KERN_ALERT "<1>" /* action must be taken immediately */
56 #define KERN_CRIT "<2>" /* critical conditions */ 56 #define KERN_CRIT "<2>" /* critical conditions */
57 #define KERN_ERR "<3>" /* error conditions */ 57 #define KERN_ERR "<3>" /* error conditions */
58 #define KERN_WARNING "<4>" /* warning conditions */ 58 #define KERN_WARNING "<4>" /* warning conditions */
59 #define KERN_NOTICE "<5>" /* normal but significant condition */ 59 #define KERN_NOTICE "<5>" /* normal but significant condition */
60 #define KERN_INFO "<6>" /* informational */ 60 #define KERN_INFO "<6>" /* informational */
61 #define KERN_DEBUG "<7>" /* debug-level messages */ 61 #define KERN_DEBUG "<7>" /* debug-level messages */
62 62
63 extern int console_printk[]; 63 extern int console_printk[];
64 64
65 #define console_loglevel (console_printk[0]) 65 #define console_loglevel (console_printk[0])
66 #define default_message_loglevel (console_printk[1]) 66 #define default_message_loglevel (console_printk[1])
67 #define minimum_console_loglevel (console_printk[2]) 67 #define minimum_console_loglevel (console_printk[2])
68 #define default_console_loglevel (console_printk[3]) 68 #define default_console_loglevel (console_printk[3])
69 69
70 struct completion; 70 struct completion;
71 struct pt_regs; 71 struct pt_regs;
72 struct user; 72 struct user;
73 73
74 /** 74 /**
75 * might_sleep - annotation for functions that can sleep 75 * might_sleep - annotation for functions that can sleep
76 * 76 *
77 * this macro will print a stack trace if it is executed in an atomic 77 * this macro will print a stack trace if it is executed in an atomic
78 * context (spinlock, irq-handler, ...). 78 * context (spinlock, irq-handler, ...).
79 * 79 *
80 * This is a useful debugging help to be able to catch problems early and not 80 * This is a useful debugging help to be able to catch problems early and not
81 * be bitten later when the calling function happens to sleep when it is not 81 * be bitten later when the calling function happens to sleep when it is not
82 * supposed to. 82 * supposed to.
83 */ 83 */
84 #ifdef CONFIG_PREEMPT_VOLUNTARY 84 #ifdef CONFIG_PREEMPT_VOLUNTARY
85 extern int cond_resched(void); 85 extern int cond_resched(void);
86 # define might_resched() cond_resched() 86 # define might_resched() cond_resched()
87 #else 87 #else
88 # define might_resched() do { } while (0) 88 # define might_resched() do { } while (0)
89 #endif 89 #endif
90 90
91 #ifdef CONFIG_DEBUG_SPINLOCK_SLEEP 91 #ifdef CONFIG_DEBUG_SPINLOCK_SLEEP
92 void __might_sleep(char *file, int line); 92 void __might_sleep(char *file, int line);
93 # define might_sleep() \ 93 # define might_sleep() \
94 do { __might_sleep(__FILE__, __LINE__); might_resched(); } while (0) 94 do { __might_sleep(__FILE__, __LINE__); might_resched(); } while (0)
95 #else 95 #else
96 # define might_sleep() do { might_resched(); } while (0) 96 # define might_sleep() do { might_resched(); } while (0)
97 #endif 97 #endif
98 98
99 #define might_sleep_if(cond) do { if (cond) might_sleep(); } while (0) 99 #define might_sleep_if(cond) do { if (cond) might_sleep(); } while (0)
100 100
101 #define abs(x) ({ \ 101 #define abs(x) ({ \
102 int __x = (x); \ 102 int __x = (x); \
103 (__x < 0) ? -__x : __x; \ 103 (__x < 0) ? -__x : __x; \
104 }) 104 })
105 105
106 extern struct atomic_notifier_head panic_notifier_list; 106 extern struct atomic_notifier_head panic_notifier_list;
107 extern long (*panic_blink)(long time); 107 extern long (*panic_blink)(long time);
108 NORET_TYPE void panic(const char * fmt, ...) 108 NORET_TYPE void panic(const char * fmt, ...)
109 __attribute__ ((NORET_AND format (printf, 1, 2))); 109 __attribute__ ((NORET_AND format (printf, 1, 2)));
110 extern void oops_enter(void); 110 extern void oops_enter(void);
111 extern void oops_exit(void); 111 extern void oops_exit(void);
112 extern int oops_may_print(void); 112 extern int oops_may_print(void);
113 fastcall NORET_TYPE void do_exit(long error_code) 113 fastcall NORET_TYPE void do_exit(long error_code)
114 ATTRIB_NORET; 114 ATTRIB_NORET;
115 NORET_TYPE void complete_and_exit(struct completion *, long) 115 NORET_TYPE void complete_and_exit(struct completion *, long)
116 ATTRIB_NORET; 116 ATTRIB_NORET;
117 extern unsigned long simple_strtoul(const char *,char **,unsigned int); 117 extern unsigned long simple_strtoul(const char *,char **,unsigned int);
118 extern long simple_strtol(const char *,char **,unsigned int); 118 extern long simple_strtol(const char *,char **,unsigned int);
119 extern unsigned long long simple_strtoull(const char *,char **,unsigned int); 119 extern unsigned long long simple_strtoull(const char *,char **,unsigned int);
120 extern long long simple_strtoll(const char *,char **,unsigned int); 120 extern long long simple_strtoll(const char *,char **,unsigned int);
121 extern int sprintf(char * buf, const char * fmt, ...) 121 extern int sprintf(char * buf, const char * fmt, ...)
122 __attribute__ ((format (printf, 2, 3))); 122 __attribute__ ((format (printf, 2, 3)));
123 extern int vsprintf(char *buf, const char *, va_list) 123 extern int vsprintf(char *buf, const char *, va_list)
124 __attribute__ ((format (printf, 2, 0))); 124 __attribute__ ((format (printf, 2, 0)));
125 extern int snprintf(char * buf, size_t size, const char * fmt, ...) 125 extern int snprintf(char * buf, size_t size, const char * fmt, ...)
126 __attribute__ ((format (printf, 3, 4))); 126 __attribute__ ((format (printf, 3, 4)));
127 extern int vsnprintf(char *buf, size_t size, const char *fmt, va_list args) 127 extern int vsnprintf(char *buf, size_t size, const char *fmt, va_list args)
128 __attribute__ ((format (printf, 3, 0))); 128 __attribute__ ((format (printf, 3, 0)));
129 extern int scnprintf(char * buf, size_t size, const char * fmt, ...) 129 extern int scnprintf(char * buf, size_t size, const char * fmt, ...)
130 __attribute__ ((format (printf, 3, 4))); 130 __attribute__ ((format (printf, 3, 4)));
131 extern int vscnprintf(char *buf, size_t size, const char *fmt, va_list args) 131 extern int vscnprintf(char *buf, size_t size, const char *fmt, va_list args)
132 __attribute__ ((format (printf, 3, 0))); 132 __attribute__ ((format (printf, 3, 0)));
133 extern char *kasprintf(gfp_t gfp, const char *fmt, ...) 133 extern char *kasprintf(gfp_t gfp, const char *fmt, ...)
134 __attribute__ ((format (printf, 2, 3))); 134 __attribute__ ((format (printf, 2, 3)));
135 extern char *kvasprintf(gfp_t gfp, const char *fmt, va_list args); 135 extern char *kvasprintf(gfp_t gfp, const char *fmt, va_list args);
136 136
137 extern int sscanf(const char *, const char *, ...) 137 extern int sscanf(const char *, const char *, ...)
138 __attribute__ ((format (scanf, 2, 3))); 138 __attribute__ ((format (scanf, 2, 3)));
139 extern int vsscanf(const char *, const char *, va_list) 139 extern int vsscanf(const char *, const char *, va_list)
140 __attribute__ ((format (scanf, 2, 0))); 140 __attribute__ ((format (scanf, 2, 0)));
141 141
142 extern int get_option(char **str, int *pint); 142 extern int get_option(char **str, int *pint);
143 extern char *get_options(const char *str, int nints, int *ints); 143 extern char *get_options(const char *str, int nints, int *ints);
144 extern unsigned long long memparse(char *ptr, char **retptr); 144 extern unsigned long long memparse(char *ptr, char **retptr);
145 145
146 extern int core_kernel_text(unsigned long addr); 146 extern int core_kernel_text(unsigned long addr);
147 extern int __kernel_text_address(unsigned long addr); 147 extern int __kernel_text_address(unsigned long addr);
148 extern int kernel_text_address(unsigned long addr); 148 extern int kernel_text_address(unsigned long addr);
149 struct pid; 149 struct pid;
150 extern struct pid *session_of_pgrp(struct pid *pgrp); 150 extern struct pid *session_of_pgrp(struct pid *pgrp);
151 151
152 extern void dump_thread(struct pt_regs *regs, struct user *dump); 152 extern void dump_thread(struct pt_regs *regs, struct user *dump);
153 153
154 #ifdef CONFIG_PRINTK 154 #ifdef CONFIG_PRINTK
155 asmlinkage int vprintk(const char *fmt, va_list args) 155 asmlinkage int vprintk(const char *fmt, va_list args)
156 __attribute__ ((format (printf, 1, 0))); 156 __attribute__ ((format (printf, 1, 0)));
157 asmlinkage int printk(const char * fmt, ...) 157 asmlinkage int printk(const char * fmt, ...)
158 __attribute__ ((format (printf, 1, 2))); 158 __attribute__ ((format (printf, 1, 2)));
159 #else 159 #else
160 static inline int vprintk(const char *s, va_list args) 160 static inline int vprintk(const char *s, va_list args)
161 __attribute__ ((format (printf, 1, 0))); 161 __attribute__ ((format (printf, 1, 0)));
162 static inline int vprintk(const char *s, va_list args) { return 0; } 162 static inline int vprintk(const char *s, va_list args) { return 0; }
163 static inline int printk(const char *s, ...) 163 static inline int printk(const char *s, ...)
164 __attribute__ ((format (printf, 1, 2))); 164 __attribute__ ((format (printf, 1, 2)));
165 static inline int printk(const char *s, ...) { return 0; } 165 static inline int printk(const char *s, ...) { return 0; }
166 #endif 166 #endif
167 167
168 unsigned long int_sqrt(unsigned long); 168 unsigned long int_sqrt(unsigned long);
169 169
170 extern int printk_ratelimit(void); 170 extern int printk_ratelimit(void);
171 extern int __printk_ratelimit(int ratelimit_jiffies, int ratelimit_burst); 171 extern int __printk_ratelimit(int ratelimit_jiffies, int ratelimit_burst);
172 extern bool printk_timed_ratelimit(unsigned long *caller_jiffies, 172 extern bool printk_timed_ratelimit(unsigned long *caller_jiffies,
173 unsigned int interval_msec); 173 unsigned int interval_msec);
174 174
175 static inline void console_silent(void) 175 static inline void console_silent(void)
176 { 176 {
177 console_loglevel = 0; 177 console_loglevel = 0;
178 } 178 }
179 179
180 static inline void console_verbose(void) 180 static inline void console_verbose(void)
181 { 181 {
182 if (console_loglevel) 182 if (console_loglevel)
183 console_loglevel = 15; 183 console_loglevel = 15;
184 } 184 }
185 185
186 extern void bust_spinlocks(int yes); 186 extern void bust_spinlocks(int yes);
187 extern void wake_up_klogd(void); 187 extern void wake_up_klogd(void);
188 extern int oops_in_progress; /* If set, an oops, panic(), BUG() or die() is in progress */ 188 extern int oops_in_progress; /* If set, an oops, panic(), BUG() or die() is in progress */
189 extern int panic_timeout; 189 extern int panic_timeout;
190 extern int panic_on_oops; 190 extern int panic_on_oops;
191 extern int panic_on_unrecovered_nmi; 191 extern int panic_on_unrecovered_nmi;
192 extern int tainted; 192 extern int tainted;
193 extern const char *print_tainted(void); 193 extern const char *print_tainted(void);
194 extern void add_taint(unsigned); 194 extern void add_taint(unsigned);
195 195
196 /* Values used for system_state */ 196 /* Values used for system_state */
197 extern enum system_states { 197 extern enum system_states {
198 SYSTEM_BOOTING, 198 SYSTEM_BOOTING,
199 SYSTEM_RUNNING, 199 SYSTEM_RUNNING,
200 SYSTEM_HALT, 200 SYSTEM_HALT,
201 SYSTEM_POWER_OFF, 201 SYSTEM_POWER_OFF,
202 SYSTEM_RESTART, 202 SYSTEM_RESTART,
203 SYSTEM_SUSPEND_DISK, 203 SYSTEM_SUSPEND_DISK,
204 } system_state; 204 } system_state;
205 205
206 #define TAINT_PROPRIETARY_MODULE (1<<0) 206 #define TAINT_PROPRIETARY_MODULE (1<<0)
207 #define TAINT_FORCED_MODULE (1<<1) 207 #define TAINT_FORCED_MODULE (1<<1)
208 #define TAINT_UNSAFE_SMP (1<<2) 208 #define TAINT_UNSAFE_SMP (1<<2)
209 #define TAINT_FORCED_RMMOD (1<<3) 209 #define TAINT_FORCED_RMMOD (1<<3)
210 #define TAINT_MACHINE_CHECK (1<<4) 210 #define TAINT_MACHINE_CHECK (1<<4)
211 #define TAINT_BAD_PAGE (1<<5) 211 #define TAINT_BAD_PAGE (1<<5)
212 #define TAINT_USER (1<<6) 212 #define TAINT_USER (1<<6)
213 #define TAINT_DIE (1<<7)
213 214
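With TAINT_DIE defined as bit 7, any kernel code that includes this header can test whether an oops has already happened. A hedged illustration (the printk text is made up):

    if (tainted & TAINT_DIE)
    	printk(KERN_WARNING "kernel has oopsed before; later traces may be unreliable\n");
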
214 extern void dump_stack(void); 215 extern void dump_stack(void);
215 216
216 enum { 217 enum {
217 DUMP_PREFIX_NONE, 218 DUMP_PREFIX_NONE,
218 DUMP_PREFIX_ADDRESS, 219 DUMP_PREFIX_ADDRESS,
219 DUMP_PREFIX_OFFSET 220 DUMP_PREFIX_OFFSET
220 }; 221 };
221 extern void hex_dump_to_buffer(const void *buf, size_t len, 222 extern void hex_dump_to_buffer(const void *buf, size_t len,
222 int rowsize, int groupsize, 223 int rowsize, int groupsize,
223 char *linebuf, size_t linebuflen, bool ascii); 224 char *linebuf, size_t linebuflen, bool ascii);
224 extern void print_hex_dump(const char *level, const char *prefix_str, 225 extern void print_hex_dump(const char *level, const char *prefix_str,
225 int prefix_type, int rowsize, int groupsize, 226 int prefix_type, int rowsize, int groupsize,
226 void *buf, size_t len, bool ascii); 227 void *buf, size_t len, bool ascii);
227 extern void print_hex_dump_bytes(const char *prefix_str, int prefix_type, 228 extern void print_hex_dump_bytes(const char *prefix_str, int prefix_type,
228 void *buf, size_t len); 229 void *buf, size_t len);
229 #define hex_asc(x) "0123456789abcdef"[x] 230 #define hex_asc(x) "0123456789abcdef"[x]
230 231
231 #ifdef DEBUG 232 #ifdef DEBUG
232 /* If you are writing a driver, please use dev_dbg instead */ 233 /* If you are writing a driver, please use dev_dbg instead */
233 #define pr_debug(fmt,arg...) \ 234 #define pr_debug(fmt,arg...) \
234 printk(KERN_DEBUG fmt,##arg) 235 printk(KERN_DEBUG fmt,##arg)
235 #else 236 #else
236 static inline int __attribute__ ((format (printf, 1, 2))) pr_debug(const char * fmt, ...) 237 static inline int __attribute__ ((format (printf, 1, 2))) pr_debug(const char * fmt, ...)
237 { 238 {
238 return 0; 239 return 0;
239 } 240 }
240 #endif 241 #endif
241 242
242 #define pr_info(fmt,arg...) \ 243 #define pr_info(fmt,arg...) \
243 printk(KERN_INFO fmt,##arg) 244 printk(KERN_INFO fmt,##arg)
244 245
245 /* 246 /*
246 * Display an IP address in readable format. 247 * Display an IP address in readable format.
247 */ 248 */
248 249
249 #define NIPQUAD(addr) \ 250 #define NIPQUAD(addr) \
250 ((unsigned char *)&addr)[0], \ 251 ((unsigned char *)&addr)[0], \
251 ((unsigned char *)&addr)[1], \ 252 ((unsigned char *)&addr)[1], \
252 ((unsigned char *)&addr)[2], \ 253 ((unsigned char *)&addr)[2], \
253 ((unsigned char *)&addr)[3] 254 ((unsigned char *)&addr)[3]
254 #define NIPQUAD_FMT "%u.%u.%u.%u" 255 #define NIPQUAD_FMT "%u.%u.%u.%u"
255 256
256 #define NIP6(addr) \ 257 #define NIP6(addr) \
257 ntohs((addr).s6_addr16[0]), \ 258 ntohs((addr).s6_addr16[0]), \
258 ntohs((addr).s6_addr16[1]), \ 259 ntohs((addr).s6_addr16[1]), \
259 ntohs((addr).s6_addr16[2]), \ 260 ntohs((addr).s6_addr16[2]), \
260 ntohs((addr).s6_addr16[3]), \ 261 ntohs((addr).s6_addr16[3]), \
261 ntohs((addr).s6_addr16[4]), \ 262 ntohs((addr).s6_addr16[4]), \
262 ntohs((addr).s6_addr16[5]), \ 263 ntohs((addr).s6_addr16[5]), \
263 ntohs((addr).s6_addr16[6]), \ 264 ntohs((addr).s6_addr16[6]), \
264 ntohs((addr).s6_addr16[7]) 265 ntohs((addr).s6_addr16[7])
265 #define NIP6_FMT "%04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x" 266 #define NIP6_FMT "%04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x"
266 #define NIP6_SEQFMT "%04x%04x%04x%04x%04x%04x%04x%04x" 267 #define NIP6_SEQFMT "%04x%04x%04x%04x%04x%04x%04x%04x"
267 268
268 #if defined(__LITTLE_ENDIAN) 269 #if defined(__LITTLE_ENDIAN)
269 #define HIPQUAD(addr) \ 270 #define HIPQUAD(addr) \
270 ((unsigned char *)&addr)[3], \ 271 ((unsigned char *)&addr)[3], \
271 ((unsigned char *)&addr)[2], \ 272 ((unsigned char *)&addr)[2], \
272 ((unsigned char *)&addr)[1], \ 273 ((unsigned char *)&addr)[1], \
273 ((unsigned char *)&addr)[0] 274 ((unsigned char *)&addr)[0]
274 #elif defined(__BIG_ENDIAN) 275 #elif defined(__BIG_ENDIAN)
275 #define HIPQUAD NIPQUAD 276 #define HIPQUAD NIPQUAD
276 #else 277 #else
277 #error "Please fix asm/byteorder.h" 278 #error "Please fix asm/byteorder.h"
278 #endif /* __LITTLE_ENDIAN */ 279 #endif /* __LITTLE_ENDIAN */
279 280
280 /* 281 /*
281 * min()/max() macros that also do 282 * min()/max() macros that also do
282 * strict type-checking.. See the 283 * strict type-checking.. See the
283 * "unnecessary" pointer comparison. 284 * "unnecessary" pointer comparison.
284 */ 285 */
285 #define min(x,y) ({ \ 286 #define min(x,y) ({ \
286 typeof(x) _x = (x); \ 287 typeof(x) _x = (x); \
287 typeof(y) _y = (y); \ 288 typeof(y) _y = (y); \
288 (void) (&_x == &_y); \ 289 (void) (&_x == &_y); \
289 _x < _y ? _x : _y; }) 290 _x < _y ? _x : _y; })
290 291
291 #define max(x,y) ({ \ 292 #define max(x,y) ({ \
292 typeof(x) _x = (x); \ 293 typeof(x) _x = (x); \
293 typeof(y) _y = (y); \ 294 typeof(y) _y = (y); \
294 (void) (&_x == &_y); \ 295 (void) (&_x == &_y); \
295 _x > _y ? _x : _y; }) 296 _x > _y ? _x : _y; })
296 297
297 /* 298 /*
298 * ..and if you can't take the strict 299 * ..and if you can't take the strict
299 * types, you can specify one yourself. 300 * types, you can specify one yourself.
300 * 301 *
301 * Or not use min/max at all, of course. 302 * Or not use min/max at all, of course.
302 */ 303 */
303 #define min_t(type,x,y) \ 304 #define min_t(type,x,y) \
304 ({ type __x = (x); type __y = (y); __x < __y ? __x: __y; }) 305 ({ type __x = (x); type __y = (y); __x < __y ? __x: __y; })
305 #define max_t(type,x,y) \ 306 #define max_t(type,x,y) \
306 ({ type __x = (x); type __y = (y); __x > __y ? __x: __y; }) 307 ({ type __x = (x); type __y = (y); __x > __y ? __x: __y; })
307 308
308 309
309 /** 310 /**
310 * container_of - cast a member of a structure out to the containing structure 311 * container_of - cast a member of a structure out to the containing structure
311 * @ptr: the pointer to the member. 312 * @ptr: the pointer to the member.
312 * @type: the type of the container struct this is embedded in. 313 * @type: the type of the container struct this is embedded in.
313 * @member: the name of the member within the struct. 314 * @member: the name of the member within the struct.
314 * 315 *
315 */ 316 */
316 #define container_of(ptr, type, member) ({ \ 317 #define container_of(ptr, type, member) ({ \
317 const typeof( ((type *)0)->member ) *__mptr = (ptr); \ 318 const typeof( ((type *)0)->member ) *__mptr = (ptr); \
318 (type *)( (char *)__mptr - offsetof(type,member) );}) 319 (type *)( (char *)__mptr - offsetof(type,member) );})
319 320
320 /* 321 /*
321 * Check at compile time that something is of a particular type. 322 * Check at compile time that something is of a particular type.
322 * Always evaluates to 1 so you may use it easily in comparisons. 323 * Always evaluates to 1 so you may use it easily in comparisons.
323 */ 324 */
324 #define typecheck(type,x) \ 325 #define typecheck(type,x) \
325 ({ type __dummy; \ 326 ({ type __dummy; \
326 typeof(x) __dummy2; \ 327 typeof(x) __dummy2; \
327 (void)(&__dummy == &__dummy2); \ 328 (void)(&__dummy == &__dummy2); \
328 1; \ 329 1; \
329 }) 330 })
330 331
331 /* 332 /*
332 * Check at compile time that 'function' is a certain type, or is a pointer 333 * Check at compile time that 'function' is a certain type, or is a pointer
333 * to that type (needs to use typedef for the function type.) 334 * to that type (needs to use typedef for the function type.)
334 */ 335 */
335 #define typecheck_fn(type,function) \ 336 #define typecheck_fn(type,function) \
336 ({ typeof(type) __tmp = function; \ 337 ({ typeof(type) __tmp = function; \
337 (void)__tmp; \ 338 (void)__tmp; \
338 }) 339 })
339 340
340 struct sysinfo; 341 struct sysinfo;
341 extern int do_sysinfo(struct sysinfo *info); 342 extern int do_sysinfo(struct sysinfo *info);
342 343
343 #endif /* __KERNEL__ */ 344 #endif /* __KERNEL__ */
344 345
345 #define SI_LOAD_SHIFT 16 346 #define SI_LOAD_SHIFT 16
346 struct sysinfo { 347 struct sysinfo {
347 long uptime; /* Seconds since boot */ 348 long uptime; /* Seconds since boot */
348 unsigned long loads[3]; /* 1, 5, and 15 minute load averages */ 349 unsigned long loads[3]; /* 1, 5, and 15 minute load averages */
349 unsigned long totalram; /* Total usable main memory size */ 350 unsigned long totalram; /* Total usable main memory size */
350 unsigned long freeram; /* Available memory size */ 351 unsigned long freeram; /* Available memory size */
351 unsigned long sharedram; /* Amount of shared memory */ 352 unsigned long sharedram; /* Amount of shared memory */
352 unsigned long bufferram; /* Memory used by buffers */ 353 unsigned long bufferram; /* Memory used by buffers */
353 unsigned long totalswap; /* Total swap space size */ 354 unsigned long totalswap; /* Total swap space size */
354 unsigned long freeswap; /* swap space still available */ 355 unsigned long freeswap; /* swap space still available */
355 unsigned short procs; /* Number of current processes */ 356 unsigned short procs; /* Number of current processes */
356 unsigned short pad; /* explicit padding for m68k */ 357 unsigned short pad; /* explicit padding for m68k */
357 unsigned long totalhigh; /* Total high memory size */ 358 unsigned long totalhigh; /* Total high memory size */
358 unsigned long freehigh; /* Available high memory size */ 359 unsigned long freehigh; /* Available high memory size */
359 unsigned int mem_unit; /* Memory unit size in bytes */ 360 unsigned int mem_unit; /* Memory unit size in bytes */
360 char _f[20-2*sizeof(long)-sizeof(int)]; /* Padding: libc5 uses this.. */ 361 char _f[20-2*sizeof(long)-sizeof(int)]; /* Padding: libc5 uses this.. */
361 }; 362 };
362 363
363 /* Force a compilation error if condition is true */ 364 /* Force a compilation error if condition is true */
364 #define BUILD_BUG_ON(condition) ((void)sizeof(char[1 - 2*!!(condition)])) 365 #define BUILD_BUG_ON(condition) ((void)sizeof(char[1 - 2*!!(condition)]))
365 366
366 /* Force a compilation error if condition is true, but also produce a 367 /* Force a compilation error if condition is true, but also produce a
367 result (of value 0 and type size_t), so the expression can be used 368 result (of value 0 and type size_t), so the expression can be used
368 e.g. in a structure initializer (or where-ever else comma expressions 369 e.g. in a structure initializer (or where-ever else comma expressions
369 aren't permitted). */ 370 aren't permitted). */
370 #define BUILD_BUG_ON_ZERO(e) (sizeof(char[1 - 2 * !!(e)]) - 1) 371 #define BUILD_BUG_ON_ZERO(e) (sizeof(char[1 - 2 * !!(e)]) - 1)
371 372
372 /* Trap pasters of __FUNCTION__ at compile-time */ 373 /* Trap pasters of __FUNCTION__ at compile-time */
373 #define __FUNCTION__ (__func__) 374 #define __FUNCTION__ (__func__)
374 375
375 /* This helps us to avoid #ifdef CONFIG_NUMA */ 376 /* This helps us to avoid #ifdef CONFIG_NUMA */
376 #ifdef CONFIG_NUMA 377 #ifdef CONFIG_NUMA
377 #define NUMA_BUILD 1 378 #define NUMA_BUILD 1
378 #else 379 #else
379 #define NUMA_BUILD 0 380 #define NUMA_BUILD 0
380 #endif 381 #endif
381 382
382 #endif 383 #endif
383 384
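The taint mask is also visible from user space through the standard /proc/sys/kernel/tainted sysctl, so a bug-reporting script can check for a prior oops without parsing logs. A small hedged sketch (bit 7 corresponds to the TAINT_DIE value added above):

    #include <stdio.h>

    int main(void)
    {
    	unsigned long mask = 0;
    	FILE *f = fopen("/proc/sys/kernel/tainted", "r");

    	if (!f)
    		return 1;
    	if (fscanf(f, "%lu", &mask) != 1) {
    		fclose(f);
    		return 1;
    	}
    	fclose(f);
    	printf("taint mask: %#lx%s\n", mask,
    	       (mask & (1UL << 7)) ? " (kernel has oopsed: TAINT_DIE)" : "");
    	return 0;
    }
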
kernel/panic.c
1 /* 1 /*
2 * linux/kernel/panic.c 2 * linux/kernel/panic.c
3 * 3 *
4 * Copyright (C) 1991, 1992 Linus Torvalds 4 * Copyright (C) 1991, 1992 Linus Torvalds
5 */ 5 */
6 6
7 /* 7 /*
8 * This function is used through-out the kernel (including mm and fs) 8 * This function is used through-out the kernel (including mm and fs)
9 * to indicate a major problem. 9 * to indicate a major problem.
10 */ 10 */
11 #include <linux/module.h> 11 #include <linux/module.h>
12 #include <linux/sched.h> 12 #include <linux/sched.h>
13 #include <linux/delay.h> 13 #include <linux/delay.h>
14 #include <linux/reboot.h> 14 #include <linux/reboot.h>
15 #include <linux/notifier.h> 15 #include <linux/notifier.h>
16 #include <linux/init.h> 16 #include <linux/init.h>
17 #include <linux/sysrq.h> 17 #include <linux/sysrq.h>
18 #include <linux/interrupt.h> 18 #include <linux/interrupt.h>
19 #include <linux/nmi.h> 19 #include <linux/nmi.h>
20 #include <linux/kexec.h> 20 #include <linux/kexec.h>
21 #include <linux/debug_locks.h> 21 #include <linux/debug_locks.h>
22 22
23 int panic_on_oops; 23 int panic_on_oops;
24 int tainted; 24 int tainted;
25 static int pause_on_oops; 25 static int pause_on_oops;
26 static int pause_on_oops_flag; 26 static int pause_on_oops_flag;
27 static DEFINE_SPINLOCK(pause_on_oops_lock); 27 static DEFINE_SPINLOCK(pause_on_oops_lock);
28 28
29 int panic_timeout; 29 int panic_timeout;
30 30
31 ATOMIC_NOTIFIER_HEAD(panic_notifier_list); 31 ATOMIC_NOTIFIER_HEAD(panic_notifier_list);
32 32
33 EXPORT_SYMBOL(panic_notifier_list); 33 EXPORT_SYMBOL(panic_notifier_list);
34 34
35 static int __init panic_setup(char *str) 35 static int __init panic_setup(char *str)
36 { 36 {
37 panic_timeout = simple_strtoul(str, NULL, 0); 37 panic_timeout = simple_strtoul(str, NULL, 0);
38 return 1; 38 return 1;
39 } 39 }
40 __setup("panic=", panic_setup); 40 __setup("panic=", panic_setup);
41 41
42 static long no_blink(long time) 42 static long no_blink(long time)
43 { 43 {
44 return 0; 44 return 0;
45 } 45 }
46 46
47 /* Returns how long it waited in ms */ 47 /* Returns how long it waited in ms */
48 long (*panic_blink)(long time); 48 long (*panic_blink)(long time);
49 EXPORT_SYMBOL(panic_blink); 49 EXPORT_SYMBOL(panic_blink);
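
panic_blink is a hook: the panic loops below hand it the elapsed time and add its return value (extra milliseconds it spent) back into the counter. A hypothetical implementation that a board file might install (the LED call is a stand-in):

    static long led_panic_blink(long time)
    {
    	if ((time % 500) == 0)
    		; /* toggle_front_panel_led() -- hypothetical hardware hook */
    	return 0;	/* consumed no measurable extra time */
    }

    /* installed somewhere in board setup:  panic_blink = led_panic_blink; */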
50 50
51 /** 51 /**
52 * panic - halt the system 52 * panic - halt the system
53 * @fmt: The text string to print 53 * @fmt: The text string to print
54 * 54 *
55 * Display a message, then perform cleanups. 55 * Display a message, then perform cleanups.
56 * 56 *
57 * This function never returns. 57 * This function never returns.
58 */ 58 */
59 59
60 NORET_TYPE void panic(const char * fmt, ...) 60 NORET_TYPE void panic(const char * fmt, ...)
61 { 61 {
62 long i; 62 long i;
63 static char buf[1024]; 63 static char buf[1024];
64 va_list args; 64 va_list args;
65 #if defined(CONFIG_S390) 65 #if defined(CONFIG_S390)
66 unsigned long caller = (unsigned long) __builtin_return_address(0); 66 unsigned long caller = (unsigned long) __builtin_return_address(0);
67 #endif 67 #endif
68 68
69 /* 69 /*
70 * It's possible to come here directly from a panic-assertion and not 70 * It's possible to come here directly from a panic-assertion and not
71 * have preempt disabled. Some functions called from here want 71 * have preempt disabled. Some functions called from here want
72 * preempt to be disabled. No point enabling it later though... 72 * preempt to be disabled. No point enabling it later though...
73 */ 73 */
74 preempt_disable(); 74 preempt_disable();
75 75
76 bust_spinlocks(1); 76 bust_spinlocks(1);
77 va_start(args, fmt); 77 va_start(args, fmt);
78 vsnprintf(buf, sizeof(buf), fmt, args); 78 vsnprintf(buf, sizeof(buf), fmt, args);
79 va_end(args); 79 va_end(args);
80 printk(KERN_EMERG "Kernel panic - not syncing: %s\n",buf); 80 printk(KERN_EMERG "Kernel panic - not syncing: %s\n",buf);
81 bust_spinlocks(0); 81 bust_spinlocks(0);
82 82
83 /* 83 /*
84 * If we have crashed and we have a crash kernel loaded let it handle 84 * If we have crashed and we have a crash kernel loaded let it handle
85 * everything else. 85 * everything else.
86 * Do we want to call this before we try to display a message? 86 * Do we want to call this before we try to display a message?
87 */ 87 */
88 crash_kexec(NULL); 88 crash_kexec(NULL);
89 89
90 #ifdef CONFIG_SMP 90 #ifdef CONFIG_SMP
91 /* 91 /*
92 * Note smp_send_stop is the usual smp shutdown function, which 92 * Note smp_send_stop is the usual smp shutdown function, which
93 * unfortunately means it may not be hardened to work in a panic 93 * unfortunately means it may not be hardened to work in a panic
94 * situation. 94 * situation.
95 */ 95 */
96 smp_send_stop(); 96 smp_send_stop();
97 #endif 97 #endif
98 98
99 atomic_notifier_call_chain(&panic_notifier_list, 0, buf); 99 atomic_notifier_call_chain(&panic_notifier_list, 0, buf);
100 100
101 if (!panic_blink) 101 if (!panic_blink)
102 panic_blink = no_blink; 102 panic_blink = no_blink;
103 103
104 if (panic_timeout > 0) { 104 if (panic_timeout > 0) {
105 /* 105 /*
106 * Delay timeout seconds before rebooting the machine. 106 * Delay timeout seconds before rebooting the machine.
107 * We can't use the "normal" timers since we just panicked.. 107 * We can't use the "normal" timers since we just panicked..
108 */ 108 */
109 printk(KERN_EMERG "Rebooting in %d seconds..",panic_timeout); 109 printk(KERN_EMERG "Rebooting in %d seconds..",panic_timeout);
110 for (i = 0; i < panic_timeout*1000; ) { 110 for (i = 0; i < panic_timeout*1000; ) {
111 touch_nmi_watchdog(); 111 touch_nmi_watchdog();
112 i += panic_blink(i); 112 i += panic_blink(i);
113 mdelay(1); 113 mdelay(1);
114 i++; 114 i++;
115 } 115 }
116 /* This will not be a clean reboot, with everything 116 /* This will not be a clean reboot, with everything
117 * shutting down. But if there is a chance of 117 * shutting down. But if there is a chance of
118 * rebooting the system it will be rebooted. 118 * rebooting the system it will be rebooted.
119 */ 119 */
120 emergency_restart(); 120 emergency_restart();
121 } 121 }
122 #ifdef __sparc__ 122 #ifdef __sparc__
123 { 123 {
124 extern int stop_a_enabled; 124 extern int stop_a_enabled;
125 /* Make sure the user can actually press Stop-A (L1-A) */ 125 /* Make sure the user can actually press Stop-A (L1-A) */
126 stop_a_enabled = 1; 126 stop_a_enabled = 1;
127 printk(KERN_EMERG "Press Stop-A (L1-A) to return to the boot prom\n"); 127 printk(KERN_EMERG "Press Stop-A (L1-A) to return to the boot prom\n");
128 } 128 }
129 #endif 129 #endif
130 #if defined(CONFIG_S390) 130 #if defined(CONFIG_S390)
131 disabled_wait(caller); 131 disabled_wait(caller);
132 #endif 132 #endif
133 local_irq_enable(); 133 local_irq_enable();
134 for (i = 0;;) { 134 for (i = 0;;) {
135 touch_softlockup_watchdog(); 135 touch_softlockup_watchdog();
136 i += panic_blink(i); 136 i += panic_blink(i);
137 mdelay(1); 137 mdelay(1);
138 i++; 138 i++;
139 } 139 }
140 } 140 }
141 141
142 EXPORT_SYMBOL(panic); 142 EXPORT_SYMBOL(panic);
143 143
144 /** 144 /**
145 * print_tainted - return a string to represent the kernel taint state. 145 * print_tainted - return a string to represent the kernel taint state.
146 * 146 *
147 * 'P' - Proprietary module has been loaded. 147 * 'P' - Proprietary module has been loaded.
148 * 'F' - Module has been forcibly loaded. 148 * 'F' - Module has been forcibly loaded.
149 * 'S' - SMP with CPUs not designed for SMP. 149 * 'S' - SMP with CPUs not designed for SMP.
150 * 'R' - User forced a module unload. 150 * 'R' - User forced a module unload.
151 * 'M' - Machine had a machine check experience. 151 * 'M' - Machine had a machine check experience.
152 * 'B' - System has hit bad_page. 152 * 'B' - System has hit bad_page.
153 * 'U' - Userspace-defined naughtiness. 153 * 'U' - Userspace-defined naughtiness.
154 * 154 *
155 * The string is overwritten by the next call to print_tainted(). 155 * The string is overwritten by the next call to print_tainted().
156 */ 156 */
157 157
158 const char *print_tainted(void) 158 const char *print_tainted(void)
159 { 159 {
160 static char buf[20]; 160 static char buf[20];
161 if (tainted) { 161 if (tainted) {
162 snprintf(buf, sizeof(buf), "Tainted: %c%c%c%c%c%c%c", 162 snprintf(buf, sizeof(buf), "Tainted: %c%c%c%c%c%c%c%c",
163 tainted & TAINT_PROPRIETARY_MODULE ? 'P' : 'G', 163 tainted & TAINT_PROPRIETARY_MODULE ? 'P' : 'G',
164 tainted & TAINT_FORCED_MODULE ? 'F' : ' ', 164 tainted & TAINT_FORCED_MODULE ? 'F' : ' ',
165 tainted & TAINT_UNSAFE_SMP ? 'S' : ' ', 165 tainted & TAINT_UNSAFE_SMP ? 'S' : ' ',
166 tainted & TAINT_FORCED_RMMOD ? 'R' : ' ', 166 tainted & TAINT_FORCED_RMMOD ? 'R' : ' ',
167 tainted & TAINT_MACHINE_CHECK ? 'M' : ' ', 167 tainted & TAINT_MACHINE_CHECK ? 'M' : ' ',
168 tainted & TAINT_BAD_PAGE ? 'B' : ' ', 168 tainted & TAINT_BAD_PAGE ? 'B' : ' ',
169 tainted & TAINT_USER ? 'U' : ' '); 169 tainted & TAINT_USER ? 'U' : ' ',
170 tainted & TAINT_DIE ? 'D' : ' ');
170 } 171 }
171 else 172 else
172 snprintf(buf, sizeof(buf), "Not tainted"); 173 snprintf(buf, sizeof(buf), "Not tainted");
173 return(buf); 174 return(buf);
174 } 175 }
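
For reference, the new eighth column makes a kernel that loaded a proprietary module and then oopsed report (values assumed for the example):

    /* tainted == TAINT_PROPRIETARY_MODULE | TAINT_DIE */
    /* print_tainted() returns: "Tainted: P      D"    */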
175 176
176 void add_taint(unsigned flag) 177 void add_taint(unsigned flag)
177 { 178 {
178 debug_locks = 0; /* can't trust the integrity of the kernel anymore */ 179 debug_locks = 0; /* can't trust the integrity of the kernel anymore */
179 tainted |= flag; 180 tainted |= flag;
180 } 181 }
181 EXPORT_SYMBOL(add_taint); 182 EXPORT_SYMBOL(add_taint);
182 183
183 static int __init pause_on_oops_setup(char *str) 184 static int __init pause_on_oops_setup(char *str)
184 { 185 {
185 pause_on_oops = simple_strtoul(str, NULL, 0); 186 pause_on_oops = simple_strtoul(str, NULL, 0);
186 return 1; 187 return 1;
187 } 188 }
188 __setup("pause_on_oops=", pause_on_oops_setup); 189 __setup("pause_on_oops=", pause_on_oops_setup);
189 190
190 static void spin_msec(int msecs) 191 static void spin_msec(int msecs)
191 { 192 {
192 int i; 193 int i;
193 194
194 for (i = 0; i < msecs; i++) { 195 for (i = 0; i < msecs; i++) {
195 touch_nmi_watchdog(); 196 touch_nmi_watchdog();
196 mdelay(1); 197 mdelay(1);
197 } 198 }
198 } 199 }
199 200
200 /* 201 /*
201 * It just happens that oops_enter() and oops_exit() are identically 202 * It just happens that oops_enter() and oops_exit() are identically
202 * implemented... 203 * implemented...
203 */ 204 */
204 static void do_oops_enter_exit(void) 205 static void do_oops_enter_exit(void)
205 { 206 {
206 unsigned long flags; 207 unsigned long flags;
207 static int spin_counter; 208 static int spin_counter;
208 209
209 if (!pause_on_oops) 210 if (!pause_on_oops)
210 return; 211 return;
211 212
212 spin_lock_irqsave(&pause_on_oops_lock, flags); 213 spin_lock_irqsave(&pause_on_oops_lock, flags);
213 if (pause_on_oops_flag == 0) { 214 if (pause_on_oops_flag == 0) {
214 /* This CPU may now print the oops message */ 215 /* This CPU may now print the oops message */
215 pause_on_oops_flag = 1; 216 pause_on_oops_flag = 1;
216 } else { 217 } else {
217 /* We need to stall this CPU */ 218 /* We need to stall this CPU */
218 if (!spin_counter) { 219 if (!spin_counter) {
219 /* This CPU gets to do the counting */ 220 /* This CPU gets to do the counting */
220 spin_counter = pause_on_oops; 221 spin_counter = pause_on_oops;
221 do { 222 do {
222 spin_unlock(&pause_on_oops_lock); 223 spin_unlock(&pause_on_oops_lock);
223 spin_msec(MSEC_PER_SEC); 224 spin_msec(MSEC_PER_SEC);
224 spin_lock(&pause_on_oops_lock); 225 spin_lock(&pause_on_oops_lock);
225 } while (--spin_counter); 226 } while (--spin_counter);
226 pause_on_oops_flag = 0; 227 pause_on_oops_flag = 0;
227 } else { 228 } else {
228 /* This CPU waits for a different one */ 229 /* This CPU waits for a different one */
229 while (spin_counter) { 230 while (spin_counter) {
230 spin_unlock(&pause_on_oops_lock); 231 spin_unlock(&pause_on_oops_lock);
231 spin_msec(1); 232 spin_msec(1);
232 spin_lock(&pause_on_oops_lock); 233 spin_lock(&pause_on_oops_lock);
233 } 234 }
234 } 235 }
235 } 236 }
236 spin_unlock_irqrestore(&pause_on_oops_lock, flags); 237 spin_unlock_irqrestore(&pause_on_oops_lock, flags);
237 } 238 }
238 239
239 /* 240 /*
240 * Return true if the calling CPU is allowed to print oops-related info. This 241 * Return true if the calling CPU is allowed to print oops-related info. This
241 * is a bit racy.. 242 * is a bit racy..
242 */ 243 */
243 int oops_may_print(void) 244 int oops_may_print(void)
244 { 245 {
245 return pause_on_oops_flag == 0; 246 return pause_on_oops_flag == 0;
246 } 247 }
247 248
248 /* 249 /*
249 * Called when the architecture enters its oops handler, before it prints 250 * Called when the architecture enters its oops handler, before it prints
250 * anything. If this is the first CPU to oops, and it's oopsing the first time 251 * anything. If this is the first CPU to oops, and it's oopsing the first time
251 * then let it proceed. 252 * then let it proceed.
252 * 253 *
253 * This is all enabled by the pause_on_oops kernel boot option. We do all this 254 * This is all enabled by the pause_on_oops kernel boot option. We do all this
254 * to ensure that oopses don't scroll off the screen. It has the side-effect 255 * to ensure that oopses don't scroll off the screen. It has the side-effect
255 * of preventing later-oopsing CPUs from mucking up the display, too. 256 * of preventing later-oopsing CPUs from mucking up the display, too.
256 * 257 *
257 * It turns out that the CPU which is allowed to print ends up pausing for the 258 * It turns out that the CPU which is allowed to print ends up pausing for the
258 * right duration, whereas all the other CPUs pause for twice as long: once in 259 * right duration, whereas all the other CPUs pause for twice as long: once in
259 * oops_enter(), once in oops_exit(). 260 * oops_enter(), once in oops_exit().
260 */ 261 */
261 void oops_enter(void) 262 void oops_enter(void)
262 { 263 {
263 debug_locks_off(); /* can't trust the integrity of the kernel anymore */ 264 debug_locks_off(); /* can't trust the integrity of the kernel anymore */
264 do_oops_enter_exit(); 265 do_oops_enter_exit();
265 } 266 }
266 267
267 /* 268 /*
268 * Called when the architecture exits its oops handler, after printing 269 * Called when the architecture exits its oops handler, after printing
269 * everything. 270 * everything.
270 */ 271 */
271 void oops_exit(void) 272 void oops_exit(void)
272 { 273 {
273 do_oops_enter_exit(); 274 do_oops_enter_exit();
274 } 275 }
275 276
276 #ifdef CONFIG_CC_STACKPROTECTOR 277 #ifdef CONFIG_CC_STACKPROTECTOR
277 /* 278 /*
278 * Called when gcc's -fstack-protector feature is used, and 279 * Called when gcc's -fstack-protector feature is used, and
279 * gcc detects corruption of the on-stack canary value 280 * gcc detects corruption of the on-stack canary value
280 */ 281 */
281 void __stack_chk_fail(void) 282 void __stack_chk_fail(void)
282 { 283 {
283 panic("stack-protector: Kernel stack is corrupted"); 284 panic("stack-protector: Kernel stack is corrupted");
284 } 285 }
285 EXPORT_SYMBOL(__stack_chk_fail); 286 EXPORT_SYMBOL(__stack_chk_fail);
286 #endif 287 #endif
287 288