Commit 13049537007dee73a76f0a30fcbc24d02c6fa9e4

Authored by Joseph Handzik
Committed by Jens Axboe
1 parent 322a8b0340

cciss: Adds simple mode functionality

Signed-off-by: Joseph Handzik <joseph.t.handzik@beardog.cce.hp.com>
Acked-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Jens Axboe <jaxboe@fusionio.com>

Showing 3 changed files with 56 additions and 11 deletions

Documentation/blockdev/cciss.txt
This driver is for Compaq's SMART Array Controllers.

Supported Cards:
----------------

This driver is known to work with the following cards:

	* SA 5300
	* SA 5i
	* SA 532
	* SA 5312
	* SA 641
	* SA 642
	* SA 6400
	* SA 6400 U320 Expansion Module
	* SA 6i
	* SA P600
	* SA P800
	* SA E400
	* SA P400i
	* SA E200
	* SA E200i
	* SA E500
	* SA P700m
	* SA P212
	* SA P410
	* SA P410i
	* SA P411
	* SA P812
	* SA P712m
	* SA P711m

Detecting drive failures:
-------------------------

To get the status of logical volumes and to detect physical drive
failures, you can use the cciss_vol_status program found here:
http://cciss.sourceforge.net/#cciss_utils

Device Naming:
--------------

If nodes are not already created in the /dev/cciss directory, run as root:

# cd /dev
# ./MAKEDEV cciss

You need some entries in /dev for the cciss device.  The MAKEDEV script
can make device nodes for you automatically.  Currently the device setup
is as follows:

Major numbers:
	104	cciss0
	105	cciss1
	106	cciss2
	107	cciss3
	108	cciss4
	109	cciss5
	110	cciss6
	111	cciss7

Minor numbers:
	b7 b6 b5 b4 b3 b2 b1 b0
	|----+----| |----+----|
	     |           |
	     |           +-------- Partition ID (0=wholedev, 1-15 partition)
	     |
	     +-------------------- Logical Volume number

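As a sketch of the bit layout above: given a minor number, the logical
volume and partition can be recovered with ordinary shell arithmetic (the
minor value here is an illustrative example, not taken from a real system):

```shell
# Decode a cciss minor number per the layout above:
# low 4 bits = partition ID (0 = whole device), upper bits = logical volume.
minor=18                       # hypothetical example value
volume=$(( minor >> 4 ))       # upper nibble: logical volume number
partition=$(( minor & 0xf ))   # lower nibble: partition ID
echo "volume=$volume partition=$partition"
# prints: volume=1 partition=2
```

On controller cciss0 (major 104), a minor of 18 would therefore correspond
to /dev/cciss/c0d1p2.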
The device naming scheme is:
	/dev/cciss/c0d0		Controller 0, disk 0, whole device
	/dev/cciss/c0d0p1	Controller 0, disk 0, partition 1
	/dev/cciss/c0d0p2	Controller 0, disk 0, partition 2
	/dev/cciss/c0d0p3	Controller 0, disk 0, partition 3

	/dev/cciss/c1d1		Controller 1, disk 1, whole device
	/dev/cciss/c1d1p1	Controller 1, disk 1, partition 1
	/dev/cciss/c1d1p2	Controller 1, disk 1, partition 2
	/dev/cciss/c1d1p3	Controller 1, disk 1, partition 3

+CCISS simple mode support
+-------------------------
+
+The "cciss_simple_mode=1" boot parameter may be used to prevent the driver
+from putting the controller into "performant" mode. The difference is that
+with simple mode, each command completion requires an interrupt, while with
+"performant" mode (the default, and ordinarily better performing) it is
+possible to have multiple command completions indicated by a single
+interrupt.
+
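For instance, the parameter can be set either on the kernel command line
(when the driver is built in, built-in module parameters take the
modulename.parameter form) or as a module option; the modprobe.d file name
below is only an example:

```
# On the kernel command line (e.g. via grub), when cciss is built in:
#     cciss.cciss_simple_mode=1
#
# Or as a module option, e.g. in /etc/modprobe.d/cciss.conf:
options cciss cciss_simple_mode=1
```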
SCSI tape drive and medium changer support
------------------------------------------

SCSI sequential access devices and medium changer devices are supported and
appropriate device nodes are automatically created. (e.g.
/dev/st0, /dev/st1, etc. See the "st" man page for more details.)
You must enable "SCSI tape drive support for Smart Array 5xxx" and
"SCSI support" in your kernel configuration to be able to use SCSI
tape drives with your Smart Array 5xxx controller.

Additionally, note that the driver will not engage the SCSI core at init
time. The driver must be directed to dynamically engage the SCSI core via
the /proc filesystem entry which the "block" side of the driver creates as
/proc/driver/cciss/cciss* at runtime. This is because at driver init time,
the SCSI core may not yet be initialized (because the driver is a block
driver) and attempting to register it with the SCSI core in such a case
would cause a hang. This is best done via an initialization script
(typically in /etc/init.d, but could vary depending on distribution).
For example:

	for x in /proc/driver/cciss/cciss[0-9]*
	do
		echo "engage scsi" > $x
	done

Once the SCSI core is engaged by the driver, it cannot be disengaged
(except by unloading the driver, if it happens to be linked as a module.)

Note also that if no sequential access devices or medium changers are
detected, the SCSI core will not be engaged by the action of the above
script.

Hot plug support for SCSI tape drives
-------------------------------------

Hot plugging of SCSI tape drives is supported, with some caveats.
The cciss driver must be informed that changes to the SCSI bus
have been made. This may be done via the /proc filesystem.
For example:

	echo "rescan" > /proc/scsi/cciss0/1

This causes the driver to query the adapter about changes to the
physical SCSI buses and/or fibre channel arbitrated loop, and the
driver to make note of any new or removed sequential access devices
or medium changers. The driver will output messages indicating which
devices have been added or removed and the controller, bus, target and
lun used to address each device. It then notifies the SCSI mid layer
of these changes.

Note that the naming convention of the /proc filesystem entries
contains a number in addition to the driver name. (E.g. "cciss0"
instead of just "cciss", which you might expect.)

Note: ONLY sequential access devices and medium changers are presented
as SCSI devices to the SCSI mid layer by the cciss driver. Specifically,
physical SCSI disk drives are NOT presented to the SCSI mid layer. The
physical SCSI disk drives are controlled directly by the array controller
hardware, and it is important to prevent the kernel from attempting to
access these devices directly too, as if the array controller were merely
a SCSI controller, in the same way that we are allowing it to access SCSI
tape drives.

SCSI error handling for tape drives and medium changers
-------------------------------------------------------

The Linux SCSI mid layer provides an error handling protocol which
kicks into gear whenever a SCSI command fails to complete within a
certain amount of time (which can vary depending on the command).
The cciss driver participates in this protocol to some extent. The
normal protocol is a four step process: first the device is told
to abort the command; if that doesn't work, the device is reset;
if that doesn't work, the SCSI bus is reset; and if that doesn't
work, the host bus adapter is reset. The cciss driver is a block
driver as well as a SCSI driver, only the tape drives and medium
changers are presented to the SCSI mid layer, and, unlike more
straightforward SCSI drivers, disk i/o continues through the block
side during the SCSI error recovery process. For these reasons, the
cciss driver only implements the first two of these actions: aborting
the command and resetting the device. Additionally, most tape drives
will not oblige in aborting commands, and sometimes it appears they
will not even obey a reset command, though in most circumstances they
will. If the command cannot be aborted and the device cannot be
reset, the device will be set offline.

In the event the error handling code is triggered and a tape drive is
successfully reset or the tardy command is successfully aborted, the
tape drive may still not allow i/o to continue until some command
is issued which positions the tape to a known position. Typically you
must rewind the tape (by issuing "mt -f /dev/st0 rewind" for example)
before i/o can proceed again to a tape drive which was reset.

There is a cciss_tape_cmds module parameter which can be used to make
cciss allocate more commands for use by tape drives. Ordinarily only a
few commands (6) are allocated for tape drives because tape drives are
slow and infrequently used, and because the primary purpose of Smart
Array controllers is to act as a RAID controller for disk drives; the
vast majority of commands are therefore allocated for disk devices.
However, if you have more than a few tape drives attached to a smart
array, the default number of commands may not be enough (for example,
if you have 8 tape drives, you could only rewind 6 at one time with the
default number of commands.) The cciss_tape_cmds module parameter allows
more commands (up to 16 more) to be allocated for use by tape drives.
For example:

	insmod cciss.ko cciss_tape_cmds=16

Or, as a kernel boot parameter passed in via grub: cciss.cciss_tape_cmds=8

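Because cciss_tape_cmds is declared with module_param() and a readable
permission mode, its current value is exposed through sysfs once the
driver is loaded. A sketch of checking it (the sysfs path only exists
when the cciss module is actually present):

```shell
# Read the current cciss_tape_cmds value, if the driver is loaded.
p=/sys/module/cciss/parameters/cciss_tape_cmds
if [ -r "$p" ]; then
	cat "$p"
else
	echo "cciss module not loaded"
fi
```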
drivers/block/cciss.c
/*
 *    Disk Array driver for HP Smart Array controllers.
 *    (C) Copyright 2000, 2007 Hewlett-Packard Development Company, L.P.
 *
 *    This program is free software; you can redistribute it and/or modify
 *    it under the terms of the GNU General Public License as published by
 *    the Free Software Foundation; version 2 of the License.
 *
 *    This program is distributed in the hope that it will be useful,
 *    but WITHOUT ANY WARRANTY; without even the implied warranty of
 *    MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 *    General Public License for more details.
 *
 *    You should have received a copy of the GNU General Public License
 *    along with this program; if not, write to the Free Software
 *    Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
 *    02111-1307, USA.
 *
 *    Questions/Comments/Bugfixes to iss_storagedev@hp.com
 *
 */

#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/types.h>
#include <linux/pci.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/delay.h>
#include <linux/major.h>
#include <linux/fs.h>
#include <linux/bio.h>
#include <linux/blkpg.h>
#include <linux/timer.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/init.h>
#include <linux/jiffies.h>
#include <linux/hdreg.h>
#include <linux/spinlock.h>
#include <linux/compat.h>
#include <linux/mutex.h>
#include <asm/uaccess.h>
#include <asm/io.h>

#include <linux/dma-mapping.h>
#include <linux/blkdev.h>
#include <linux/genhd.h>
#include <linux/completion.h>
#include <scsi/scsi.h>
#include <scsi/sg.h>
#include <scsi/scsi_ioctl.h>
#include <linux/cdrom.h>
#include <linux/scatterlist.h>
#include <linux/kthread.h>

#define CCISS_DRIVER_VERSION(maj,min,submin) ((maj<<16)|(min<<8)|(submin))
#define DRIVER_NAME "HP CISS Driver (v 3.6.26)"
#define DRIVER_VERSION CCISS_DRIVER_VERSION(3, 6, 26)

/* Embedded module documentation macros - see modules.h */
MODULE_AUTHOR("Hewlett-Packard Company");
MODULE_DESCRIPTION("Driver for HP Smart Array Controllers");
MODULE_SUPPORTED_DEVICE("HP Smart Array Controllers");
MODULE_VERSION("3.6.26");
MODULE_LICENSE("GPL");
static int cciss_tape_cmds = 6;
module_param(cciss_tape_cmds, int, 0644);
MODULE_PARM_DESC(cciss_tape_cmds,
	"number of commands to allocate for tape devices (default: 6)");
+static int cciss_simple_mode;
+module_param(cciss_simple_mode, int, S_IRUGO|S_IWUSR);
+MODULE_PARM_DESC(cciss_simple_mode,
+	"Use 'simple mode' rather than 'performant mode'");

static DEFINE_MUTEX(cciss_mutex);
static struct proc_dir_entry *proc_cciss;

#include "cciss_cmd.h"
#include "cciss.h"
#include <linux/cciss_ioctl.h>

/* define the PCI info for the cards we can control */
static const struct pci_device_id cciss_pci_device_id[] = {
	{PCI_VENDOR_ID_COMPAQ, PCI_DEVICE_ID_COMPAQ_CISS, 0x0E11, 0x4070},
	{PCI_VENDOR_ID_COMPAQ, PCI_DEVICE_ID_COMPAQ_CISSB, 0x0E11, 0x4080},
	{PCI_VENDOR_ID_COMPAQ, PCI_DEVICE_ID_COMPAQ_CISSB, 0x0E11, 0x4082},
	{PCI_VENDOR_ID_COMPAQ, PCI_DEVICE_ID_COMPAQ_CISSB, 0x0E11, 0x4083},
	{PCI_VENDOR_ID_COMPAQ, PCI_DEVICE_ID_COMPAQ_CISSC, 0x0E11, 0x4091},
	{PCI_VENDOR_ID_COMPAQ, PCI_DEVICE_ID_COMPAQ_CISSC, 0x0E11, 0x409A},
	{PCI_VENDOR_ID_COMPAQ, PCI_DEVICE_ID_COMPAQ_CISSC, 0x0E11, 0x409B},
	{PCI_VENDOR_ID_COMPAQ, PCI_DEVICE_ID_COMPAQ_CISSC, 0x0E11, 0x409C},
	{PCI_VENDOR_ID_COMPAQ, PCI_DEVICE_ID_COMPAQ_CISSC, 0x0E11, 0x409D},
	{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSA, 0x103C, 0x3225},
	{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSC, 0x103C, 0x3223},
	{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSC, 0x103C, 0x3234},
	{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSC, 0x103C, 0x3235},
	{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSD, 0x103C, 0x3211},
	{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSD, 0x103C, 0x3212},
	{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSD, 0x103C, 0x3213},
	{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSD, 0x103C, 0x3214},
	{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSD, 0x103C, 0x3215},
	{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSC, 0x103C, 0x3237},
	{PCI_VENDOR_ID_HP, PCI_DEVICE_ID_HP_CISSC, 0x103C, 0x323D},
	{0,}
};

MODULE_DEVICE_TABLE(pci, cciss_pci_device_id);

/*  board_id = Subsystem Device ID & Vendor ID
 *  product = Marketing Name for the board
 *  access = Address of the struct of function pointers
 */
static struct board_type products[] = {
	{0x40700E11, "Smart Array 5300", &SA5_access},
	{0x40800E11, "Smart Array 5i", &SA5B_access},
	{0x40820E11, "Smart Array 532", &SA5B_access},
	{0x40830E11, "Smart Array 5312", &SA5B_access},
	{0x409A0E11, "Smart Array 641", &SA5_access},
	{0x409B0E11, "Smart Array 642", &SA5_access},
	{0x409C0E11, "Smart Array 6400", &SA5_access},
	{0x409D0E11, "Smart Array 6400 EM", &SA5_access},
	{0x40910E11, "Smart Array 6i", &SA5_access},
	{0x3225103C, "Smart Array P600", &SA5_access},
	{0x3223103C, "Smart Array P800", &SA5_access},
	{0x3234103C, "Smart Array P400", &SA5_access},
	{0x3235103C, "Smart Array P400i", &SA5_access},
	{0x3211103C, "Smart Array E200i", &SA5_access},
	{0x3212103C, "Smart Array E200", &SA5_access},
	{0x3213103C, "Smart Array E200i", &SA5_access},
	{0x3214103C, "Smart Array E200i", &SA5_access},
	{0x3215103C, "Smart Array E200i", &SA5_access},
	{0x3237103C, "Smart Array E500", &SA5_access},
	{0x3223103C, "Smart Array P800", &SA5_access},
	{0x3234103C, "Smart Array P400", &SA5_access},
	{0x323D103C, "Smart Array P700m", &SA5_access},
};

/* How long to wait (in milliseconds) for board to go into simple mode */
#define MAX_CONFIG_WAIT 30000
#define MAX_IOCTL_CONFIG_WAIT 1000

/*define how many times we will try a command because of bus resets */
#define MAX_CMD_RETRIES 3

#define MAX_CTLR	32

/* Originally cciss driver only supports 8 major numbers */
#define MAX_CTLR_ORIG	8

static ctlr_info_t *hba[MAX_CTLR];

static struct task_struct *cciss_scan_thread;
static DEFINE_MUTEX(scan_mutex);
static LIST_HEAD(scan_q);

static void do_cciss_request(struct request_queue *q);
static irqreturn_t do_cciss_intx(int irq, void *dev_id);
static irqreturn_t do_cciss_msix_intr(int irq, void *dev_id);
static int cciss_open(struct block_device *bdev, fmode_t mode);
static int cciss_unlocked_open(struct block_device *bdev, fmode_t mode);
static int cciss_release(struct gendisk *disk, fmode_t mode);
static int do_ioctl(struct block_device *bdev, fmode_t mode,
		    unsigned int cmd, unsigned long arg);
static int cciss_ioctl(struct block_device *bdev, fmode_t mode,
		       unsigned int cmd, unsigned long arg);
static int cciss_getgeo(struct block_device *bdev, struct hd_geometry *geo);

static int cciss_revalidate(struct gendisk *disk);
static int rebuild_lun_table(ctlr_info_t *h, int first_time, int via_ioctl);
static int deregister_disk(ctlr_info_t *h, int drv_index,
			   int clear_all, int via_ioctl);

static void cciss_read_capacity(ctlr_info_t *h, int logvol,
			sector_t *total_size, unsigned int *block_size);
static void cciss_read_capacity_16(ctlr_info_t *h, int logvol,
			sector_t *total_size, unsigned int *block_size);
static void cciss_geometry_inquiry(ctlr_info_t *h, int logvol,
			sector_t total_size,
			unsigned int block_size, InquiryData_struct *inq_buff,
			drive_info_struct *drv);
static void __devinit cciss_interrupt_mode(ctlr_info_t *);
+static int __devinit cciss_enter_simple_mode(struct ctlr_info *h);
static void start_io(ctlr_info_t *h);
static int sendcmd_withirq(ctlr_info_t *h, __u8 cmd, void *buff, size_t size,
			__u8 page_code, unsigned char scsi3addr[],
			int cmd_type);
static int sendcmd_withirq_core(ctlr_info_t *h, CommandList_struct *c,
	int attempt_retry);
static int process_sendcmd_error(ctlr_info_t *h, CommandList_struct *c);

static int add_to_scan_list(struct ctlr_info *h);
static int scan_thread(void *data);
static int check_for_unit_attention(ctlr_info_t *h, CommandList_struct *c);
static void cciss_hba_release(struct device *dev);
static void cciss_device_release(struct device *dev);
static void cciss_free_gendisk(ctlr_info_t *h, int drv_index);
static void cciss_free_drive_info(ctlr_info_t *h, int drv_index);
static inline u32 next_command(ctlr_info_t *h);
static int __devinit cciss_find_cfg_addrs(struct pci_dev *pdev,
	void __iomem *vaddr, u32 *cfg_base_addr, u64 *cfg_base_addr_index,
	u64 *cfg_offset);
static int __devinit cciss_pci_find_memory_BAR(struct pci_dev *pdev,
	unsigned long *memory_bar);
static inline u32 cciss_tag_discard_error_bits(ctlr_info_t *h, u32 tag);
static __devinit int write_driver_ver_to_cfgtable(
	CfgTable_struct __iomem *cfgtable);
203 208
204 /* performant mode helper functions */ 209 /* performant mode helper functions */
205 static void calc_bucket_map(int *bucket, int num_buckets, int nsgs, 210 static void calc_bucket_map(int *bucket, int num_buckets, int nsgs,
206 int *bucket_map); 211 int *bucket_map);
207 static void cciss_put_controller_into_performant_mode(ctlr_info_t *h); 212 static void cciss_put_controller_into_performant_mode(ctlr_info_t *h);
208 213
209 #ifdef CONFIG_PROC_FS 214 #ifdef CONFIG_PROC_FS
210 static void cciss_procinit(ctlr_info_t *h); 215 static void cciss_procinit(ctlr_info_t *h);
211 #else 216 #else
212 static void cciss_procinit(ctlr_info_t *h) 217 static void cciss_procinit(ctlr_info_t *h)
213 { 218 {
214 } 219 }
215 #endif /* CONFIG_PROC_FS */ 220 #endif /* CONFIG_PROC_FS */
216 221
217 #ifdef CONFIG_COMPAT 222 #ifdef CONFIG_COMPAT
218 static int cciss_compat_ioctl(struct block_device *, fmode_t, 223 static int cciss_compat_ioctl(struct block_device *, fmode_t,
219 unsigned, unsigned long); 224 unsigned, unsigned long);
220 #endif 225 #endif
221 226
222 static const struct block_device_operations cciss_fops = { 227 static const struct block_device_operations cciss_fops = {
223 .owner = THIS_MODULE, 228 .owner = THIS_MODULE,
224 .open = cciss_unlocked_open, 229 .open = cciss_unlocked_open,
225 .release = cciss_release, 230 .release = cciss_release,
226 .ioctl = do_ioctl, 231 .ioctl = do_ioctl,
227 .getgeo = cciss_getgeo, 232 .getgeo = cciss_getgeo,
228 #ifdef CONFIG_COMPAT 233 #ifdef CONFIG_COMPAT
229 .compat_ioctl = cciss_compat_ioctl, 234 .compat_ioctl = cciss_compat_ioctl,
230 #endif 235 #endif
231 .revalidate_disk = cciss_revalidate, 236 .revalidate_disk = cciss_revalidate,
232 }; 237 };
233 238
234 /* set_performant_mode: Modify the tag for cciss performant 239 /* set_performant_mode: Modify the tag for cciss performant
235 * set bit 0 for pull model, bits 3-1 for block fetch 240 * set bit 0 for pull model, bits 3-1 for block fetch
236 * register number 241 * register number
237 */ 242 */
238 static void set_performant_mode(ctlr_info_t *h, CommandList_struct *c) 243 static void set_performant_mode(ctlr_info_t *h, CommandList_struct *c)
239 { 244 {
240 if (likely(h->transMethod & CFGTBL_Trans_Performant)) 245 if (likely(h->transMethod & CFGTBL_Trans_Performant))
241 c->busaddr |= 1 | (h->blockFetchTable[c->Header.SGList] << 1); 246 c->busaddr |= 1 | (h->blockFetchTable[c->Header.SGList] << 1);
242 } 247 }

/*
 * Enqueuing and dequeuing functions for cmdlists.
 */
static inline void addQ(struct list_head *list, CommandList_struct *c)
{
        list_add_tail(&c->list, list);
}

static inline void removeQ(CommandList_struct *c)
{
        /*
         * After kexec/dump some commands might still
         * be in flight, which the firmware will try
         * to complete. Resetting the firmware doesn't work
         * with old fw revisions, so we have to mark
         * them off as 'stale' to prevent the driver from
         * falling over.
         */
        if (WARN_ON(list_empty(&c->list))) {
                c->cmd_type = CMD_MSG_STALE;
                return;
        }

        list_del_init(&c->list);
}

static void enqueue_cmd_and_start_io(ctlr_info_t *h,
        CommandList_struct *c)
{
        unsigned long flags;
        set_performant_mode(h, c);
        spin_lock_irqsave(&h->lock, flags);
        addQ(&h->reqQ, c);
        h->Qdepth++;
        if (h->Qdepth > h->maxQsinceinit)
                h->maxQsinceinit = h->Qdepth;
        start_io(h);
        spin_unlock_irqrestore(&h->lock, flags);
}

static void cciss_free_sg_chain_blocks(SGDescriptor_struct **cmd_sg_list,
        int nr_cmds)
{
        int i;

        if (!cmd_sg_list)
                return;
        for (i = 0; i < nr_cmds; i++) {
                kfree(cmd_sg_list[i]);
                cmd_sg_list[i] = NULL;
        }
        kfree(cmd_sg_list);
}

static SGDescriptor_struct **cciss_allocate_sg_chain_blocks(
        ctlr_info_t *h, int chainsize, int nr_cmds)
{
        int j;
        SGDescriptor_struct **cmd_sg_list;

        if (chainsize <= 0)
                return NULL;

        cmd_sg_list = kmalloc(sizeof(*cmd_sg_list) * nr_cmds, GFP_KERNEL);
        if (!cmd_sg_list)
                return NULL;

        /* Build up chain blocks for each command */
        for (j = 0; j < nr_cmds; j++) {
                /* Need a block of chainsized s/g elements. */
                cmd_sg_list[j] = kmalloc((chainsize *
                        sizeof(*cmd_sg_list[j])), GFP_KERNEL);
                if (!cmd_sg_list[j]) {
                        dev_err(&h->pdev->dev, "Cannot get memory "
                                "for s/g chains.\n");
                        goto clean;
                }
        }
        return cmd_sg_list;
clean:
        cciss_free_sg_chain_blocks(cmd_sg_list, nr_cmds);
        return NULL;
}

static void cciss_unmap_sg_chain_block(ctlr_info_t *h, CommandList_struct *c)
{
        SGDescriptor_struct *chain_sg;
        u64bit temp64;

        if (c->Header.SGTotal <= h->max_cmd_sgentries)
                return;

        chain_sg = &c->SG[h->max_cmd_sgentries - 1];
        temp64.val32.lower = chain_sg->Addr.lower;
        temp64.val32.upper = chain_sg->Addr.upper;
        pci_unmap_single(h->pdev, temp64.val, chain_sg->Len, PCI_DMA_TODEVICE);
}

static void cciss_map_sg_chain_block(ctlr_info_t *h, CommandList_struct *c,
        SGDescriptor_struct *chain_block, int len)
{
        SGDescriptor_struct *chain_sg;
        u64bit temp64;

        chain_sg = &c->SG[h->max_cmd_sgentries - 1];
        chain_sg->Ext = CCISS_SG_CHAIN;
        chain_sg->Len = len;
        temp64.val = pci_map_single(h->pdev, chain_block, len,
                PCI_DMA_TODEVICE);
        chain_sg->Addr.lower = temp64.val32.lower;
        chain_sg->Addr.upper = temp64.val32.upper;
}

#include "cciss_scsi.c"         /* For SCSI tape support */

static const char *raid_label[] = { "0", "4", "1(1+0)", "5", "5+1", "ADG",
        "UNKNOWN"
};
#define RAID_UNKNOWN (ARRAY_SIZE(raid_label)-1)

#ifdef CONFIG_PROC_FS

/*
 * Report information about this controller.
 */
#define ENG_GIG 1000000000
#define ENG_GIG_FACTOR (ENG_GIG/512)
#define ENGAGE_SCSI     "engage scsi"

static void cciss_seq_show_header(struct seq_file *seq)
{
        ctlr_info_t *h = seq->private;

        seq_printf(seq, "%s: HP %s Controller\n"
                "Board ID: 0x%08lx\n"
                "Firmware Version: %c%c%c%c\n"
                "IRQ: %d\n"
                "Logical drives: %d\n"
                "Current Q depth: %d\n"
                "Current # commands on controller: %d\n"
                "Max Q depth since init: %d\n"
                "Max # commands on controller since init: %d\n"
                "Max SG entries since init: %d\n",
                h->devname,
                h->product_name,
                (unsigned long)h->board_id,
                h->firm_ver[0], h->firm_ver[1], h->firm_ver[2],
-               h->firm_ver[3], (unsigned int)h->intr[PERF_MODE_INT],
+               h->firm_ver[3], (unsigned int)h->intr[h->intr_mode],
                h->num_luns,
                h->Qdepth, h->commands_outstanding,
                h->maxQsinceinit, h->max_outstanding, h->maxSG);

#ifdef CONFIG_CISS_SCSI_TAPE
        cciss_seq_tape_report(seq, h);
#endif /* CONFIG_CISS_SCSI_TAPE */
}

static void *cciss_seq_start(struct seq_file *seq, loff_t *pos)
{
        ctlr_info_t *h = seq->private;
        unsigned long flags;

        /* prevent displaying bogus info during configuration
         * or deconfiguration of a logical volume
         */
        spin_lock_irqsave(&h->lock, flags);
        if (h->busy_configuring) {
                spin_unlock_irqrestore(&h->lock, flags);
                return ERR_PTR(-EBUSY);
        }
        h->busy_configuring = 1;
        spin_unlock_irqrestore(&h->lock, flags);

        if (*pos == 0)
                cciss_seq_show_header(seq);

        return pos;
}

static int cciss_seq_show(struct seq_file *seq, void *v)
{
        sector_t vol_sz, vol_sz_frac;
        ctlr_info_t *h = seq->private;
        unsigned ctlr = h->ctlr;
        loff_t *pos = v;
        drive_info_struct *drv = h->drv[*pos];

        if (*pos > h->highest_lun)
                return 0;

        if (drv == NULL)        /* it's possible for h->drv[] to have holes. */
                return 0;

        if (drv->heads == 0)
                return 0;

        vol_sz = drv->nr_blocks;
        vol_sz_frac = sector_div(vol_sz, ENG_GIG_FACTOR);
        vol_sz_frac *= 100;
        sector_div(vol_sz_frac, ENG_GIG_FACTOR);

        if (drv->raid_level < 0 || drv->raid_level > RAID_UNKNOWN)
                drv->raid_level = RAID_UNKNOWN;
        seq_printf(seq, "cciss/c%dd%d:"
                "\t%4u.%02uGB\tRAID %s\n",
                ctlr, (int) *pos, (int)vol_sz, (int)vol_sz_frac,
                raid_label[drv->raid_level]);
        return 0;
}

static void *cciss_seq_next(struct seq_file *seq, void *v, loff_t *pos)
{
        ctlr_info_t *h = seq->private;

        if (*pos > h->highest_lun)
                return NULL;
        *pos += 1;

        return pos;
}

static void cciss_seq_stop(struct seq_file *seq, void *v)
{
        ctlr_info_t *h = seq->private;

        /* Only reset h->busy_configuring if we succeeded in setting
         * it during cciss_seq_start. */
        if (v == ERR_PTR(-EBUSY))
                return;

        h->busy_configuring = 0;
}

static const struct seq_operations cciss_seq_ops = {
        .start = cciss_seq_start,
        .show = cciss_seq_show,
        .next = cciss_seq_next,
        .stop = cciss_seq_stop,
};

static int cciss_seq_open(struct inode *inode, struct file *file)
{
        int ret = seq_open(file, &cciss_seq_ops);
        struct seq_file *seq = file->private_data;

        if (!ret)
                seq->private = PDE(inode)->data;

        return ret;
}

static ssize_t
cciss_proc_write(struct file *file, const char __user *buf,
        size_t length, loff_t *ppos)
{
        int err;
        char *buffer;

#ifndef CONFIG_CISS_SCSI_TAPE
        return -EINVAL;
#endif

        if (!buf || length > PAGE_SIZE - 1)
                return -EINVAL;

        buffer = (char *)__get_free_page(GFP_KERNEL);
        if (!buffer)
                return -ENOMEM;

        err = -EFAULT;
        if (copy_from_user(buffer, buf, length))
                goto out;
        buffer[length] = '\0';

#ifdef CONFIG_CISS_SCSI_TAPE
        if (strncmp(ENGAGE_SCSI, buffer, sizeof ENGAGE_SCSI - 1) == 0) {
                struct seq_file *seq = file->private_data;
                ctlr_info_t *h = seq->private;

                err = cciss_engage_scsi(h);
                if (err == 0)
                        err = length;
        } else
#endif /* CONFIG_CISS_SCSI_TAPE */
                err = -EINVAL;
        /* might be nice to have "disengage" too, but it's not
           safely possible. (only 1 module use count, lock issues.) */

out:
        free_page((unsigned long)buffer);
        return err;
}

static const struct file_operations cciss_proc_fops = {
        .owner = THIS_MODULE,
        .open = cciss_seq_open,
        .read = seq_read,
        .llseek = seq_lseek,
        .release = seq_release,
        .write = cciss_proc_write,
};

static void __devinit cciss_procinit(ctlr_info_t *h)
{
        struct proc_dir_entry *pde;

        if (proc_cciss == NULL)
                proc_cciss = proc_mkdir("driver/cciss", NULL);
        if (!proc_cciss)
                return;
        pde = proc_create_data(h->devname, S_IWUSR | S_IRUSR | S_IRGRP |
                S_IROTH, proc_cciss,
                &cciss_proc_fops, h);
}
#endif /* CONFIG_PROC_FS */

#define MAX_PRODUCT_NAME_LEN 19

#define to_hba(n) container_of(n, struct ctlr_info, dev)
#define to_drv(n) container_of(n, drive_info_struct, dev)

/* List of controllers which cannot be hard reset on kexec with reset_devices */
static u32 unresettable_controller[] = {
        0x324a103C, /* Smart Array P712m */
        0x324b103C, /* SmartArray P711m */
        0x3223103C, /* Smart Array P800 */
        0x3234103C, /* Smart Array P400 */
        0x3235103C, /* Smart Array P400i */
        0x3211103C, /* Smart Array E200i */
        0x3212103C, /* Smart Array E200 */
        0x3213103C, /* Smart Array E200i */
        0x3214103C, /* Smart Array E200i */
        0x3215103C, /* Smart Array E200i */
        0x3237103C, /* Smart Array E500 */
        0x323D103C, /* Smart Array P700m */
        0x409C0E11, /* Smart Array 6400 */
        0x409D0E11, /* Smart Array 6400 EM */
};

/* List of controllers which cannot even be soft reset */
static u32 soft_unresettable_controller[] = {
        0x409C0E11, /* Smart Array 6400 */
        0x409D0E11, /* Smart Array 6400 EM */
};

static int ctlr_is_hard_resettable(u32 board_id)
{
        int i;

        for (i = 0; i < ARRAY_SIZE(unresettable_controller); i++)
                if (unresettable_controller[i] == board_id)
                        return 0;
        return 1;
}

static int ctlr_is_soft_resettable(u32 board_id)
{
        int i;

        for (i = 0; i < ARRAY_SIZE(soft_unresettable_controller); i++)
                if (soft_unresettable_controller[i] == board_id)
                        return 0;
        return 1;
}

static int ctlr_is_resettable(u32 board_id)
{
        return ctlr_is_hard_resettable(board_id) ||
                ctlr_is_soft_resettable(board_id);
}

static ssize_t host_show_resettable(struct device *dev,
        struct device_attribute *attr,
        char *buf)
{
        struct ctlr_info *h = to_hba(dev);

        return snprintf(buf, 20, "%d\n", ctlr_is_resettable(h->board_id));
}
static DEVICE_ATTR(resettable, S_IRUGO, host_show_resettable, NULL);

static ssize_t host_store_rescan(struct device *dev,
        struct device_attribute *attr,
        const char *buf, size_t count)
{
        struct ctlr_info *h = to_hba(dev);

        add_to_scan_list(h);
        wake_up_process(cciss_scan_thread);
        wait_for_completion_interruptible(&h->scan_wait);

        return count;
}
static DEVICE_ATTR(rescan, S_IWUSR, NULL, host_store_rescan);

static ssize_t dev_show_unique_id(struct device *dev,
        struct device_attribute *attr,
        char *buf)
{
        drive_info_struct *drv = to_drv(dev);
        struct ctlr_info *h = to_hba(drv->dev.parent);
        __u8 sn[16];
        unsigned long flags;
        int ret = 0;

        spin_lock_irqsave(&h->lock, flags);
        if (h->busy_configuring)
                ret = -EBUSY;
        else
                memcpy(sn, drv->serial_no, sizeof(sn));
        spin_unlock_irqrestore(&h->lock, flags);

        if (ret)
                return ret;
        else
                return snprintf(buf, 16 * 2 + 2,
                        "%02X%02X%02X%02X%02X%02X%02X%02X"
                        "%02X%02X%02X%02X%02X%02X%02X%02X\n",
                        sn[0], sn[1], sn[2], sn[3],
                        sn[4], sn[5], sn[6], sn[7],
                        sn[8], sn[9], sn[10], sn[11],
                        sn[12], sn[13], sn[14], sn[15]);
}
static DEVICE_ATTR(unique_id, S_IRUGO, dev_show_unique_id, NULL);

static ssize_t dev_show_vendor(struct device *dev,
        struct device_attribute *attr,
        char *buf)
{
        drive_info_struct *drv = to_drv(dev);
        struct ctlr_info *h = to_hba(drv->dev.parent);
        char vendor[VENDOR_LEN + 1];
        unsigned long flags;
        int ret = 0;

        spin_lock_irqsave(&h->lock, flags);
        if (h->busy_configuring)
                ret = -EBUSY;
        else
                memcpy(vendor, drv->vendor, VENDOR_LEN + 1);
        spin_unlock_irqrestore(&h->lock, flags);

        if (ret)
                return ret;
        else
                return snprintf(buf, sizeof(vendor) + 1, "%s\n", drv->vendor);
}
static DEVICE_ATTR(vendor, S_IRUGO, dev_show_vendor, NULL);

static ssize_t dev_show_model(struct device *dev,
        struct device_attribute *attr,
        char *buf)
{
        drive_info_struct *drv = to_drv(dev);
        struct ctlr_info *h = to_hba(drv->dev.parent);
        char model[MODEL_LEN + 1];
        unsigned long flags;
        int ret = 0;

        spin_lock_irqsave(&h->lock, flags);
        if (h->busy_configuring)
                ret = -EBUSY;
        else
                memcpy(model, drv->model, MODEL_LEN + 1);
        spin_unlock_irqrestore(&h->lock, flags);

        if (ret)
                return ret;
        else
                return snprintf(buf, sizeof(model) + 1, "%s\n", drv->model);
}
static DEVICE_ATTR(model, S_IRUGO, dev_show_model, NULL);

static ssize_t dev_show_rev(struct device *dev,
        struct device_attribute *attr,
        char *buf)
{
        drive_info_struct *drv = to_drv(dev);
        struct ctlr_info *h = to_hba(drv->dev.parent);
        char rev[REV_LEN + 1];
        unsigned long flags;
        int ret = 0;

        spin_lock_irqsave(&h->lock, flags);
        if (h->busy_configuring)
                ret = -EBUSY;
        else
                memcpy(rev, drv->rev, REV_LEN + 1);
        spin_unlock_irqrestore(&h->lock, flags);

        if (ret)
                return ret;
        else
                return snprintf(buf, sizeof(rev) + 1, "%s\n", drv->rev);
}
static DEVICE_ATTR(rev, S_IRUGO, dev_show_rev, NULL);

static ssize_t cciss_show_lunid(struct device *dev,
        struct device_attribute *attr, char *buf)
{
        drive_info_struct *drv = to_drv(dev);
        struct ctlr_info *h = to_hba(drv->dev.parent);
        unsigned long flags;
        unsigned char lunid[8];

        spin_lock_irqsave(&h->lock, flags);
        if (h->busy_configuring) {
                spin_unlock_irqrestore(&h->lock, flags);
                return -EBUSY;
        }
        if (!drv->heads) {
                spin_unlock_irqrestore(&h->lock, flags);
                return -ENOTTY;
        }
        memcpy(lunid, drv->LunID, sizeof(lunid));
        spin_unlock_irqrestore(&h->lock, flags);
        return snprintf(buf, 20, "0x%02x%02x%02x%02x%02x%02x%02x%02x\n",
                lunid[0], lunid[1], lunid[2], lunid[3],
                lunid[4], lunid[5], lunid[6], lunid[7]);
}
static DEVICE_ATTR(lunid, S_IRUGO, cciss_show_lunid, NULL);

static ssize_t cciss_show_raid_level(struct device *dev,
        struct device_attribute *attr, char *buf)
{
        drive_info_struct *drv = to_drv(dev);
        struct ctlr_info *h = to_hba(drv->dev.parent);
        int raid;
        unsigned long flags;

        spin_lock_irqsave(&h->lock, flags);
        if (h->busy_configuring) {
                spin_unlock_irqrestore(&h->lock, flags);
                return -EBUSY;
        }
        raid = drv->raid_level;
        spin_unlock_irqrestore(&h->lock, flags);
        if (raid < 0 || raid > RAID_UNKNOWN)
                raid = RAID_UNKNOWN;
783 788
784 return snprintf(buf, strlen(raid_label[raid]) + 7, "RAID %s\n", 789 return snprintf(buf, strlen(raid_label[raid]) + 7, "RAID %s\n",
785 raid_label[raid]); 790 raid_label[raid]);
786 } 791 }
787 static DEVICE_ATTR(raid_level, S_IRUGO, cciss_show_raid_level, NULL); 792 static DEVICE_ATTR(raid_level, S_IRUGO, cciss_show_raid_level, NULL);
788 793
789 static ssize_t cciss_show_usage_count(struct device *dev, 794 static ssize_t cciss_show_usage_count(struct device *dev,
790 struct device_attribute *attr, char *buf) 795 struct device_attribute *attr, char *buf)
791 { 796 {
792 drive_info_struct *drv = to_drv(dev); 797 drive_info_struct *drv = to_drv(dev);
793 struct ctlr_info *h = to_hba(drv->dev.parent); 798 struct ctlr_info *h = to_hba(drv->dev.parent);
794 unsigned long flags; 799 unsigned long flags;
795 int count; 800 int count;
796 801
797 spin_lock_irqsave(&h->lock, flags); 802 spin_lock_irqsave(&h->lock, flags);
798 if (h->busy_configuring) { 803 if (h->busy_configuring) {
799 spin_unlock_irqrestore(&h->lock, flags); 804 spin_unlock_irqrestore(&h->lock, flags);
800 return -EBUSY; 805 return -EBUSY;
801 } 806 }
802 count = drv->usage_count; 807 count = drv->usage_count;
803 spin_unlock_irqrestore(&h->lock, flags); 808 spin_unlock_irqrestore(&h->lock, flags);
804 return snprintf(buf, 20, "%d\n", count); 809 return snprintf(buf, 20, "%d\n", count);
805 } 810 }
806 static DEVICE_ATTR(usage_count, S_IRUGO, cciss_show_usage_count, NULL); 811 static DEVICE_ATTR(usage_count, S_IRUGO, cciss_show_usage_count, NULL);
807 812
808 static struct attribute *cciss_host_attrs[] = { 813 static struct attribute *cciss_host_attrs[] = {
809 &dev_attr_rescan.attr, 814 &dev_attr_rescan.attr,
810 &dev_attr_resettable.attr, 815 &dev_attr_resettable.attr,
811 NULL 816 NULL
812 }; 817 };
813 818
814 static struct attribute_group cciss_host_attr_group = { 819 static struct attribute_group cciss_host_attr_group = {
815 .attrs = cciss_host_attrs, 820 .attrs = cciss_host_attrs,
816 }; 821 };
817 822
818 static const struct attribute_group *cciss_host_attr_groups[] = { 823 static const struct attribute_group *cciss_host_attr_groups[] = {
819 &cciss_host_attr_group, 824 &cciss_host_attr_group,
820 NULL 825 NULL
821 }; 826 };
822 827
823 static struct device_type cciss_host_type = { 828 static struct device_type cciss_host_type = {
824 .name = "cciss_host", 829 .name = "cciss_host",
825 .groups = cciss_host_attr_groups, 830 .groups = cciss_host_attr_groups,
826 .release = cciss_hba_release, 831 .release = cciss_hba_release,
827 }; 832 };
828 833
829 static struct attribute *cciss_dev_attrs[] = { 834 static struct attribute *cciss_dev_attrs[] = {
830 &dev_attr_unique_id.attr, 835 &dev_attr_unique_id.attr,
831 &dev_attr_model.attr, 836 &dev_attr_model.attr,
832 &dev_attr_vendor.attr, 837 &dev_attr_vendor.attr,
833 &dev_attr_rev.attr, 838 &dev_attr_rev.attr,
834 &dev_attr_lunid.attr, 839 &dev_attr_lunid.attr,
835 &dev_attr_raid_level.attr, 840 &dev_attr_raid_level.attr,
836 &dev_attr_usage_count.attr, 841 &dev_attr_usage_count.attr,
837 NULL 842 NULL
838 }; 843 };
839 844
840 static struct attribute_group cciss_dev_attr_group = { 845 static struct attribute_group cciss_dev_attr_group = {
841 .attrs = cciss_dev_attrs, 846 .attrs = cciss_dev_attrs,
842 }; 847 };
843 848
844 static const struct attribute_group *cciss_dev_attr_groups[] = { 849 static const struct attribute_group *cciss_dev_attr_groups[] = {
845 &cciss_dev_attr_group, 850 &cciss_dev_attr_group,
846 NULL 851 NULL
847 }; 852 };
848 853
849 static struct device_type cciss_dev_type = { 854 static struct device_type cciss_dev_type = {
850 .name = "cciss_device", 855 .name = "cciss_device",
851 .groups = cciss_dev_attr_groups, 856 .groups = cciss_dev_attr_groups,
852 .release = cciss_device_release, 857 .release = cciss_device_release,
853 }; 858 };
854 859
855 static struct bus_type cciss_bus_type = { 860 static struct bus_type cciss_bus_type = {
856 .name = "cciss", 861 .name = "cciss",
857 }; 862 };
858 863
859 /* 864 /*
860 * cciss_hba_release is called when the reference count 865 * cciss_hba_release is called when the reference count
861 * of h->dev goes to zero. 866 * of h->dev goes to zero.
862 */ 867 */
863 static void cciss_hba_release(struct device *dev) 868 static void cciss_hba_release(struct device *dev)
864 { 869 {
865 /* 870 /*
866 * nothing to do, but need this to avoid a warning 871 * nothing to do, but need this to avoid a warning
867 * about not having a release handler from lib/kref.c. 872 * about not having a release handler from lib/kref.c.
868 */ 873 */
869 } 874 }
870 875
871 /* 876 /*
872 * Initialize sysfs entry for each controller. This sets up and registers 877 * Initialize sysfs entry for each controller. This sets up and registers
873 * the 'cciss#' directory for each individual controller under 878 * the 'cciss#' directory for each individual controller under
874 * /sys/bus/pci/devices/<dev>/. 879 * /sys/bus/pci/devices/<dev>/.
875 */ 880 */
876 static int cciss_create_hba_sysfs_entry(struct ctlr_info *h) 881 static int cciss_create_hba_sysfs_entry(struct ctlr_info *h)
877 { 882 {
878 device_initialize(&h->dev); 883 device_initialize(&h->dev);
879 h->dev.type = &cciss_host_type; 884 h->dev.type = &cciss_host_type;
880 h->dev.bus = &cciss_bus_type; 885 h->dev.bus = &cciss_bus_type;
881 dev_set_name(&h->dev, "%s", h->devname); 886 dev_set_name(&h->dev, "%s", h->devname);
882 h->dev.parent = &h->pdev->dev; 887 h->dev.parent = &h->pdev->dev;
883 888
884 return device_add(&h->dev); 889 return device_add(&h->dev);
885 } 890 }
886 891
887 /* 892 /*
888 * Remove sysfs entries for an hba. 893 * Remove sysfs entries for an hba.
889 */ 894 */
890 static void cciss_destroy_hba_sysfs_entry(struct ctlr_info *h) 895 static void cciss_destroy_hba_sysfs_entry(struct ctlr_info *h)
891 { 896 {
892 device_del(&h->dev); 897 device_del(&h->dev);
893 put_device(&h->dev); /* final put. */ 898 put_device(&h->dev); /* final put. */
894 } 899 }
895 900
896 /* cciss_device_release is called when the reference count 901 /* cciss_device_release is called when the reference count
897 * of h->drv[x]->dev goes to zero. 902 * of h->drv[x]->dev goes to zero.
898 */ 903 */
899 static void cciss_device_release(struct device *dev) 904 static void cciss_device_release(struct device *dev)
900 { 905 {
901 drive_info_struct *drv = to_drv(dev); 906 drive_info_struct *drv = to_drv(dev);
902 kfree(drv); 907 kfree(drv);
903 } 908 }
904 909
905 /* 910 /*
906 * Initialize sysfs for each logical drive. This sets up and registers 911 * Initialize sysfs for each logical drive. This sets up and registers
907 * the 'c#d#' directory for each individual logical drive under 912 * the 'c#d#' directory for each individual logical drive under
908 * /sys/bus/pci/devices/<dev>/cciss#/. We also create a link from 913 * /sys/bus/pci/devices/<dev>/cciss#/. We also create a link from
909 * /sys/block/cciss!c#d# to this entry. 914 * /sys/block/cciss!c#d# to this entry.
910 */ 915 */
911 static long cciss_create_ld_sysfs_entry(struct ctlr_info *h, 916 static long cciss_create_ld_sysfs_entry(struct ctlr_info *h,
912 int drv_index) 917 int drv_index)
913 { 918 {
914 struct device *dev; 919 struct device *dev;
915 920
916 if (h->drv[drv_index]->device_initialized) 921 if (h->drv[drv_index]->device_initialized)
917 return 0; 922 return 0;
918 923
919 dev = &h->drv[drv_index]->dev; 924 dev = &h->drv[drv_index]->dev;
920 device_initialize(dev); 925 device_initialize(dev);
921 dev->type = &cciss_dev_type; 926 dev->type = &cciss_dev_type;
922 dev->bus = &cciss_bus_type; 927 dev->bus = &cciss_bus_type;
923 dev_set_name(dev, "c%dd%d", h->ctlr, drv_index); 928 dev_set_name(dev, "c%dd%d", h->ctlr, drv_index);
924 dev->parent = &h->dev; 929 dev->parent = &h->dev;
925 h->drv[drv_index]->device_initialized = 1; 930 h->drv[drv_index]->device_initialized = 1;
926 return device_add(dev); 931 return device_add(dev);
927 } 932 }
928 933
929 /* 934 /*
930 * Remove sysfs entries for a logical drive. 935 * Remove sysfs entries for a logical drive.
931 */ 936 */
932 static void cciss_destroy_ld_sysfs_entry(struct ctlr_info *h, int drv_index, 937 static void cciss_destroy_ld_sysfs_entry(struct ctlr_info *h, int drv_index,
933 int ctlr_exiting) 938 int ctlr_exiting)
934 { 939 {
935 struct device *dev = &h->drv[drv_index]->dev; 940 struct device *dev = &h->drv[drv_index]->dev;
936 941
937 /* special case for c*d0, we only destroy it on controller exit */ 942 /* special case for c*d0, we only destroy it on controller exit */
938 if (drv_index == 0 && !ctlr_exiting) 943 if (drv_index == 0 && !ctlr_exiting)
939 return; 944 return;
940 945
941 device_del(dev); 946 device_del(dev);
942 put_device(dev); /* the "final" put. */ 947 put_device(dev); /* the "final" put. */
943 h->drv[drv_index] = NULL; 948 h->drv[drv_index] = NULL;
944 } 949 }
945 950
946 /* 951 /*
947 * For operations that cannot sleep, a command block is allocated at init, 952 * For operations that cannot sleep, a command block is allocated at init,
948 * and managed by cmd_alloc() and cmd_free() using a simple bitmap to track 953 * and managed by cmd_alloc() and cmd_free() using a simple bitmap to track
949 * which ones are free or in use. 954 * which ones are free or in use.
950 */ 955 */
951 static CommandList_struct *cmd_alloc(ctlr_info_t *h) 956 static CommandList_struct *cmd_alloc(ctlr_info_t *h)
952 { 957 {
953 CommandList_struct *c; 958 CommandList_struct *c;
954 int i; 959 int i;
955 u64bit temp64; 960 u64bit temp64;
956 dma_addr_t cmd_dma_handle, err_dma_handle; 961 dma_addr_t cmd_dma_handle, err_dma_handle;
957 962
958 do { 963 do {
959 i = find_first_zero_bit(h->cmd_pool_bits, h->nr_cmds); 964 i = find_first_zero_bit(h->cmd_pool_bits, h->nr_cmds);
960 if (i == h->nr_cmds) 965 if (i == h->nr_cmds)
961 return NULL; 966 return NULL;
962 } while (test_and_set_bit(i & (BITS_PER_LONG - 1), 967 } while (test_and_set_bit(i & (BITS_PER_LONG - 1),
963 h->cmd_pool_bits + (i / BITS_PER_LONG)) != 0); 968 h->cmd_pool_bits + (i / BITS_PER_LONG)) != 0);
964 c = h->cmd_pool + i; 969 c = h->cmd_pool + i;
965 memset(c, 0, sizeof(CommandList_struct)); 970 memset(c, 0, sizeof(CommandList_struct));
966 cmd_dma_handle = h->cmd_pool_dhandle + i * sizeof(CommandList_struct); 971 cmd_dma_handle = h->cmd_pool_dhandle + i * sizeof(CommandList_struct);
967 c->err_info = h->errinfo_pool + i; 972 c->err_info = h->errinfo_pool + i;
968 memset(c->err_info, 0, sizeof(ErrorInfo_struct)); 973 memset(c->err_info, 0, sizeof(ErrorInfo_struct));
969 err_dma_handle = h->errinfo_pool_dhandle 974 err_dma_handle = h->errinfo_pool_dhandle
970 + i * sizeof(ErrorInfo_struct); 975 + i * sizeof(ErrorInfo_struct);
971 h->nr_allocs++; 976 h->nr_allocs++;
972 977
973 c->cmdindex = i; 978 c->cmdindex = i;
974 979
975 INIT_LIST_HEAD(&c->list); 980 INIT_LIST_HEAD(&c->list);
976 c->busaddr = (__u32) cmd_dma_handle; 981 c->busaddr = (__u32) cmd_dma_handle;
977 temp64.val = (__u64) err_dma_handle; 982 temp64.val = (__u64) err_dma_handle;
978 c->ErrDesc.Addr.lower = temp64.val32.lower; 983 c->ErrDesc.Addr.lower = temp64.val32.lower;
979 c->ErrDesc.Addr.upper = temp64.val32.upper; 984 c->ErrDesc.Addr.upper = temp64.val32.upper;
980 c->ErrDesc.Len = sizeof(ErrorInfo_struct); 985 c->ErrDesc.Len = sizeof(ErrorInfo_struct);
981 986
982 c->ctlr = h->ctlr; 987 c->ctlr = h->ctlr;
983 return c; 988 return c;
984 } 989 }
985 990
986 /* allocate a command using pci_alloc_consistent, used for ioctls, 991 /* allocate a command using pci_alloc_consistent, used for ioctls,
987 * etc., not for the main i/o path. 992 * etc., not for the main i/o path.
988 */ 993 */
989 static CommandList_struct *cmd_special_alloc(ctlr_info_t *h) 994 static CommandList_struct *cmd_special_alloc(ctlr_info_t *h)
990 { 995 {
991 CommandList_struct *c; 996 CommandList_struct *c;
992 u64bit temp64; 997 u64bit temp64;
993 dma_addr_t cmd_dma_handle, err_dma_handle; 998 dma_addr_t cmd_dma_handle, err_dma_handle;
994 999
995 c = (CommandList_struct *) pci_alloc_consistent(h->pdev, 1000 c = (CommandList_struct *) pci_alloc_consistent(h->pdev,
996 sizeof(CommandList_struct), &cmd_dma_handle); 1001 sizeof(CommandList_struct), &cmd_dma_handle);
997 if (c == NULL) 1002 if (c == NULL)
998 return NULL; 1003 return NULL;
999 memset(c, 0, sizeof(CommandList_struct)); 1004 memset(c, 0, sizeof(CommandList_struct));
1000 1005
1001 c->cmdindex = -1; 1006 c->cmdindex = -1;
1002 1007
1003 c->err_info = (ErrorInfo_struct *) 1008 c->err_info = (ErrorInfo_struct *)
1004 pci_alloc_consistent(h->pdev, sizeof(ErrorInfo_struct), 1009 pci_alloc_consistent(h->pdev, sizeof(ErrorInfo_struct),
1005 &err_dma_handle); 1010 &err_dma_handle);
1006 1011
1007 if (c->err_info == NULL) { 1012 if (c->err_info == NULL) {
1008 pci_free_consistent(h->pdev, 1013 pci_free_consistent(h->pdev,
1009 sizeof(CommandList_struct), c, cmd_dma_handle); 1014 sizeof(CommandList_struct), c, cmd_dma_handle);
1010 return NULL; 1015 return NULL;
1011 } 1016 }
1012 memset(c->err_info, 0, sizeof(ErrorInfo_struct)); 1017 memset(c->err_info, 0, sizeof(ErrorInfo_struct));
1013 1018
1014 INIT_LIST_HEAD(&c->list); 1019 INIT_LIST_HEAD(&c->list);
1015 c->busaddr = (__u32) cmd_dma_handle; 1020 c->busaddr = (__u32) cmd_dma_handle;
1016 temp64.val = (__u64) err_dma_handle; 1021 temp64.val = (__u64) err_dma_handle;
1017 c->ErrDesc.Addr.lower = temp64.val32.lower; 1022 c->ErrDesc.Addr.lower = temp64.val32.lower;
1018 c->ErrDesc.Addr.upper = temp64.val32.upper; 1023 c->ErrDesc.Addr.upper = temp64.val32.upper;
1019 c->ErrDesc.Len = sizeof(ErrorInfo_struct); 1024 c->ErrDesc.Len = sizeof(ErrorInfo_struct);
1020 1025
1021 c->ctlr = h->ctlr; 1026 c->ctlr = h->ctlr;
1022 return c; 1027 return c;
1023 } 1028 }
1024 1029
1025 static void cmd_free(ctlr_info_t *h, CommandList_struct *c) 1030 static void cmd_free(ctlr_info_t *h, CommandList_struct *c)
1026 { 1031 {
1027 int i; 1032 int i;
1028 1033
1029 i = c - h->cmd_pool; 1034 i = c - h->cmd_pool;
1030 clear_bit(i & (BITS_PER_LONG - 1), 1035 clear_bit(i & (BITS_PER_LONG - 1),
1031 h->cmd_pool_bits + (i / BITS_PER_LONG)); 1036 h->cmd_pool_bits + (i / BITS_PER_LONG));
1032 h->nr_frees++; 1037 h->nr_frees++;
1033 } 1038 }
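The comment above cmd_alloc() describes the pre-allocated command pool tracked by a bitmap: the allocator scans for the first clear bit with find_first_zero_bit() and claims it atomically with test_and_set_bit(), while cmd_free() simply clears the bit again. A minimal userspace sketch of the same bookkeeping (the names slot_alloc/slot_free are hypothetical, and the atomic test-and-set is replaced by a plain read-modify-write since there is no concurrency here):

```c
#include <limits.h>

#define NR_CMDS 8
#define BITS_PER_LONG (CHAR_BIT * (int)sizeof(unsigned long))

static unsigned long pool_bits[(NR_CMDS + BITS_PER_LONG - 1) / BITS_PER_LONG];

/* Scan for the first clear bit, like find_first_zero_bit();
 * returns NR_CMDS when every slot is taken. */
static int find_first_zero(void)
{
	int i;
	for (i = 0; i < NR_CMDS; i++)
		if (!(pool_bits[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG))))
			return i;
	return NR_CMDS;
}

/* Claim a free slot; -1 when the pool is exhausted
 * (the point where cmd_alloc() returns NULL). */
static int slot_alloc(void)
{
	int i = find_first_zero();
	if (i == NR_CMDS)
		return -1;
	pool_bits[i / BITS_PER_LONG] |= 1UL << (i % BITS_PER_LONG);
	return i;
}

/* Release a slot, like cmd_free() clearing the bit. */
static void slot_free(int i)
{
	pool_bits[i / BITS_PER_LONG] &= ~(1UL << (i % BITS_PER_LONG));
}
```

Because the index doubles as the offset into cmd_pool and errinfo_pool, freeing is O(1) and no per-command pointer bookkeeping is needed.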
1034 1039
1035 static void cmd_special_free(ctlr_info_t *h, CommandList_struct *c) 1040 static void cmd_special_free(ctlr_info_t *h, CommandList_struct *c)
1036 { 1041 {
1037 u64bit temp64; 1042 u64bit temp64;
1038 1043
1039 temp64.val32.lower = c->ErrDesc.Addr.lower; 1044 temp64.val32.lower = c->ErrDesc.Addr.lower;
1040 temp64.val32.upper = c->ErrDesc.Addr.upper; 1045 temp64.val32.upper = c->ErrDesc.Addr.upper;
1041 pci_free_consistent(h->pdev, sizeof(ErrorInfo_struct), 1046 pci_free_consistent(h->pdev, sizeof(ErrorInfo_struct),
1042 c->err_info, (dma_addr_t) temp64.val); 1047 c->err_info, (dma_addr_t) temp64.val);
1043 pci_free_consistent(h->pdev, sizeof(CommandList_struct), c, 1048 pci_free_consistent(h->pdev, sizeof(CommandList_struct), c,
1044 (dma_addr_t) cciss_tag_discard_error_bits(h, (u32) c->busaddr)); 1049 (dma_addr_t) cciss_tag_discard_error_bits(h, (u32) c->busaddr));
1045 } 1050 }
1046 1051
1047 static inline ctlr_info_t *get_host(struct gendisk *disk) 1052 static inline ctlr_info_t *get_host(struct gendisk *disk)
1048 { 1053 {
1049 return disk->queue->queuedata; 1054 return disk->queue->queuedata;
1050 } 1055 }
1051 1056
1052 static inline drive_info_struct *get_drv(struct gendisk *disk) 1057 static inline drive_info_struct *get_drv(struct gendisk *disk)
1053 { 1058 {
1054 return disk->private_data; 1059 return disk->private_data;
1055 } 1060 }
1056 1061
1057 /* 1062 /*
1058 * Open. Make sure the device is really there. 1063 * Open. Make sure the device is really there.
1059 */ 1064 */
1060 static int cciss_open(struct block_device *bdev, fmode_t mode) 1065 static int cciss_open(struct block_device *bdev, fmode_t mode)
1061 { 1066 {
1062 ctlr_info_t *h = get_host(bdev->bd_disk); 1067 ctlr_info_t *h = get_host(bdev->bd_disk);
1063 drive_info_struct *drv = get_drv(bdev->bd_disk); 1068 drive_info_struct *drv = get_drv(bdev->bd_disk);
1064 1069
1065 dev_dbg(&h->pdev->dev, "cciss_open %s\n", bdev->bd_disk->disk_name); 1070 dev_dbg(&h->pdev->dev, "cciss_open %s\n", bdev->bd_disk->disk_name);
1066 if (drv->busy_configuring) 1071 if (drv->busy_configuring)
1067 return -EBUSY; 1072 return -EBUSY;
1068 /* 1073 /*
1069 * Root is allowed to open raw volume zero even if it's not configured 1074 * Root is allowed to open raw volume zero even if it's not configured
1070 * so array config can still work. Root is also allowed to open any 1075 * so array config can still work. Root is also allowed to open any
1071 * volume that has a LUN ID, so it can issue IOCTL to reread the 1076 * volume that has a LUN ID, so it can issue IOCTL to reread the
1072 * disk information. I don't think I really like this 1077 * disk information. I don't think I really like this
1073 * but I'm already using way too many device nodes to claim another one 1078 * but I'm already using way too many device nodes to claim another one
1074 * for "raw controller". 1079 * for "raw controller".
1075 */ 1080 */
1076 if (drv->heads == 0) { 1081 if (drv->heads == 0) {
1077 if (MINOR(bdev->bd_dev) != 0) { /* not node 0? */ 1082 if (MINOR(bdev->bd_dev) != 0) { /* not node 0? */
1078 /* if not node 0 make sure it is a partition = 0 */ 1083 /* if not node 0 make sure it is a partition = 0 */
1079 if (MINOR(bdev->bd_dev) & 0x0f) { 1084 if (MINOR(bdev->bd_dev) & 0x0f) {
1080 return -ENXIO; 1085 return -ENXIO;
1081 /* if it is, make sure we have a LUN ID */ 1086 /* if it is, make sure we have a LUN ID */
1082 } else if (memcmp(drv->LunID, CTLR_LUNID, 1087 } else if (memcmp(drv->LunID, CTLR_LUNID,
1083 sizeof(drv->LunID))) { 1088 sizeof(drv->LunID))) {
1084 return -ENXIO; 1089 return -ENXIO;
1085 } 1090 }
1086 } 1091 }
1087 if (!capable(CAP_SYS_ADMIN)) 1092 if (!capable(CAP_SYS_ADMIN))
1088 return -EPERM; 1093 return -EPERM;
1089 } 1094 }
1090 drv->usage_count++; 1095 drv->usage_count++;
1091 h->usage_count++; 1096 h->usage_count++;
1092 return 0; 1097 return 0;
1093 } 1098 }
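The special-casing in cciss_open() relies on the driver's minor-number layout: the low 4 bits select the partition (hence the `& 0x0f` test), so a nonzero result means a partition node rather than a whole-disk node. A tiny sketch of that decoding (the constant and function names here are hypothetical, chosen to mirror the 0x0f mask in the code above):

```c
#define PART_BITS 4
#define PART_MASK ((1u << PART_BITS) - 1)

/* Partition index within a logical drive: the low 4 bits of the minor.
 * Zero means the whole-disk node. */
static unsigned partition_of(unsigned minor)
{
	return minor & PART_MASK;
}

/* Logical drive index: the remaining high bits of the minor. */
static unsigned drive_of(unsigned minor)
{
	return minor >> PART_BITS;
}
```

With 4 partition bits, each logical drive consumes 16 consecutive minors, which is why minor 0 is the only node cciss_open() treats as the "raw controller".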
1094 1099
1095 static int cciss_unlocked_open(struct block_device *bdev, fmode_t mode) 1100 static int cciss_unlocked_open(struct block_device *bdev, fmode_t mode)
1096 { 1101 {
1097 int ret; 1102 int ret;
1098 1103
1099 mutex_lock(&cciss_mutex); 1104 mutex_lock(&cciss_mutex);
1100 ret = cciss_open(bdev, mode); 1105 ret = cciss_open(bdev, mode);
1101 mutex_unlock(&cciss_mutex); 1106 mutex_unlock(&cciss_mutex);
1102 1107
1103 return ret; 1108 return ret;
1104 } 1109 }
1105 1110
1106 /* 1111 /*
1107 * Close. Sync first. 1112 * Close. Sync first.
1108 */ 1113 */
1109 static int cciss_release(struct gendisk *disk, fmode_t mode) 1114 static int cciss_release(struct gendisk *disk, fmode_t mode)
1110 { 1115 {
1111 ctlr_info_t *h; 1116 ctlr_info_t *h;
1112 drive_info_struct *drv; 1117 drive_info_struct *drv;
1113 1118
1114 mutex_lock(&cciss_mutex); 1119 mutex_lock(&cciss_mutex);
1115 h = get_host(disk); 1120 h = get_host(disk);
1116 drv = get_drv(disk); 1121 drv = get_drv(disk);
1117 dev_dbg(&h->pdev->dev, "cciss_release %s\n", disk->disk_name); 1122 dev_dbg(&h->pdev->dev, "cciss_release %s\n", disk->disk_name);
1118 drv->usage_count--; 1123 drv->usage_count--;
1119 h->usage_count--; 1124 h->usage_count--;
1120 mutex_unlock(&cciss_mutex); 1125 mutex_unlock(&cciss_mutex);
1121 return 0; 1126 return 0;
1122 } 1127 }
1123 1128
1124 static int do_ioctl(struct block_device *bdev, fmode_t mode, 1129 static int do_ioctl(struct block_device *bdev, fmode_t mode,
1125 unsigned cmd, unsigned long arg) 1130 unsigned cmd, unsigned long arg)
1126 { 1131 {
1127 int ret; 1132 int ret;
1128 mutex_lock(&cciss_mutex); 1133 mutex_lock(&cciss_mutex);
1129 ret = cciss_ioctl(bdev, mode, cmd, arg); 1134 ret = cciss_ioctl(bdev, mode, cmd, arg);
1130 mutex_unlock(&cciss_mutex); 1135 mutex_unlock(&cciss_mutex);
1131 return ret; 1136 return ret;
1132 } 1137 }
1133 1138
1134 #ifdef CONFIG_COMPAT 1139 #ifdef CONFIG_COMPAT
1135 1140
1136 static int cciss_ioctl32_passthru(struct block_device *bdev, fmode_t mode, 1141 static int cciss_ioctl32_passthru(struct block_device *bdev, fmode_t mode,
1137 unsigned cmd, unsigned long arg); 1142 unsigned cmd, unsigned long arg);
1138 static int cciss_ioctl32_big_passthru(struct block_device *bdev, fmode_t mode, 1143 static int cciss_ioctl32_big_passthru(struct block_device *bdev, fmode_t mode,
1139 unsigned cmd, unsigned long arg); 1144 unsigned cmd, unsigned long arg);
1140 1145
1141 static int cciss_compat_ioctl(struct block_device *bdev, fmode_t mode, 1146 static int cciss_compat_ioctl(struct block_device *bdev, fmode_t mode,
1142 unsigned cmd, unsigned long arg) 1147 unsigned cmd, unsigned long arg)
1143 { 1148 {
1144 switch (cmd) { 1149 switch (cmd) {
1145 case CCISS_GETPCIINFO: 1150 case CCISS_GETPCIINFO:
1146 case CCISS_GETINTINFO: 1151 case CCISS_GETINTINFO:
1147 case CCISS_SETINTINFO: 1152 case CCISS_SETINTINFO:
1148 case CCISS_GETNODENAME: 1153 case CCISS_GETNODENAME:
1149 case CCISS_SETNODENAME: 1154 case CCISS_SETNODENAME:
1150 case CCISS_GETHEARTBEAT: 1155 case CCISS_GETHEARTBEAT:
1151 case CCISS_GETBUSTYPES: 1156 case CCISS_GETBUSTYPES:
1152 case CCISS_GETFIRMVER: 1157 case CCISS_GETFIRMVER:
1153 case CCISS_GETDRIVVER: 1158 case CCISS_GETDRIVVER:
1154 case CCISS_REVALIDVOLS: 1159 case CCISS_REVALIDVOLS:
1155 case CCISS_DEREGDISK: 1160 case CCISS_DEREGDISK:
1156 case CCISS_REGNEWDISK: 1161 case CCISS_REGNEWDISK:
1157 case CCISS_REGNEWD: 1162 case CCISS_REGNEWD:
1158 case CCISS_RESCANDISK: 1163 case CCISS_RESCANDISK:
1159 case CCISS_GETLUNINFO: 1164 case CCISS_GETLUNINFO:
1160 return do_ioctl(bdev, mode, cmd, arg); 1165 return do_ioctl(bdev, mode, cmd, arg);
1161 1166
1162 case CCISS_PASSTHRU32: 1167 case CCISS_PASSTHRU32:
1163 return cciss_ioctl32_passthru(bdev, mode, cmd, arg); 1168 return cciss_ioctl32_passthru(bdev, mode, cmd, arg);
1164 case CCISS_BIG_PASSTHRU32: 1169 case CCISS_BIG_PASSTHRU32:
1165 return cciss_ioctl32_big_passthru(bdev, mode, cmd, arg); 1170 return cciss_ioctl32_big_passthru(bdev, mode, cmd, arg);
1166 1171
1167 default: 1172 default:
1168 return -ENOIOCTLCMD; 1173 return -ENOIOCTLCMD;
1169 } 1174 }
1170 } 1175 }
1171 1176
1172 static int cciss_ioctl32_passthru(struct block_device *bdev, fmode_t mode, 1177 static int cciss_ioctl32_passthru(struct block_device *bdev, fmode_t mode,
1173 unsigned cmd, unsigned long arg) 1178 unsigned cmd, unsigned long arg)
1174 { 1179 {
1175 IOCTL32_Command_struct __user *arg32 = 1180 IOCTL32_Command_struct __user *arg32 =
1176 (IOCTL32_Command_struct __user *) arg; 1181 (IOCTL32_Command_struct __user *) arg;
1177 IOCTL_Command_struct arg64; 1182 IOCTL_Command_struct arg64;
1178 IOCTL_Command_struct __user *p = compat_alloc_user_space(sizeof(arg64)); 1183 IOCTL_Command_struct __user *p = compat_alloc_user_space(sizeof(arg64));
1179 int err; 1184 int err;
1180 u32 cp; 1185 u32 cp;
1181 1186
1182 err = 0; 1187 err = 0;
1183 err |= 1188 err |=
1184 copy_from_user(&arg64.LUN_info, &arg32->LUN_info, 1189 copy_from_user(&arg64.LUN_info, &arg32->LUN_info,
1185 sizeof(arg64.LUN_info)); 1190 sizeof(arg64.LUN_info));
1186 err |= 1191 err |=
1187 copy_from_user(&arg64.Request, &arg32->Request, 1192 copy_from_user(&arg64.Request, &arg32->Request,
1188 sizeof(arg64.Request)); 1193 sizeof(arg64.Request));
1189 err |= 1194 err |=
1190 copy_from_user(&arg64.error_info, &arg32->error_info, 1195 copy_from_user(&arg64.error_info, &arg32->error_info,
1191 sizeof(arg64.error_info)); 1196 sizeof(arg64.error_info));
1192 err |= get_user(arg64.buf_size, &arg32->buf_size); 1197 err |= get_user(arg64.buf_size, &arg32->buf_size);
1193 err |= get_user(cp, &arg32->buf); 1198 err |= get_user(cp, &arg32->buf);
1194 arg64.buf = compat_ptr(cp); 1199 arg64.buf = compat_ptr(cp);
1195 err |= copy_to_user(p, &arg64, sizeof(arg64)); 1200 err |= copy_to_user(p, &arg64, sizeof(arg64));
1196 1201
1197 if (err) 1202 if (err)
1198 return -EFAULT; 1203 return -EFAULT;
1199 1204
1200 err = do_ioctl(bdev, mode, CCISS_PASSTHRU, (unsigned long)p); 1205 err = do_ioctl(bdev, mode, CCISS_PASSTHRU, (unsigned long)p);
1201 if (err) 1206 if (err)
1202 return err; 1207 return err;
1203 err |= 1208 err |=
1204 copy_in_user(&arg32->error_info, &p->error_info, 1209 copy_in_user(&arg32->error_info, &p->error_info,
1205 sizeof(arg32->error_info)); 1210 sizeof(arg32->error_info));
1206 if (err) 1211 if (err)
1207 return -EFAULT; 1212 return -EFAULT;
1208 return err; 1213 return err;
1209 } 1214 }
1210 1215
1211 static int cciss_ioctl32_big_passthru(struct block_device *bdev, fmode_t mode, 1216 static int cciss_ioctl32_big_passthru(struct block_device *bdev, fmode_t mode,
1212 unsigned cmd, unsigned long arg) 1217 unsigned cmd, unsigned long arg)
1213 { 1218 {
1214 BIG_IOCTL32_Command_struct __user *arg32 = 1219 BIG_IOCTL32_Command_struct __user *arg32 =
1215 (BIG_IOCTL32_Command_struct __user *) arg; 1220 (BIG_IOCTL32_Command_struct __user *) arg;
1216 BIG_IOCTL_Command_struct arg64; 1221 BIG_IOCTL_Command_struct arg64;
1217 BIG_IOCTL_Command_struct __user *p = 1222 BIG_IOCTL_Command_struct __user *p =
1218 compat_alloc_user_space(sizeof(arg64)); 1223 compat_alloc_user_space(sizeof(arg64));
1219 int err; 1224 int err;
1220 u32 cp; 1225 u32 cp;
1221 1226
1222 memset(&arg64, 0, sizeof(arg64)); 1227 memset(&arg64, 0, sizeof(arg64));
1223 err = 0; 1228 err = 0;
1224 err |= 1229 err |=
1225 copy_from_user(&arg64.LUN_info, &arg32->LUN_info, 1230 copy_from_user(&arg64.LUN_info, &arg32->LUN_info,
1226 sizeof(arg64.LUN_info)); 1231 sizeof(arg64.LUN_info));
1227 err |= 1232 err |=
1228 copy_from_user(&arg64.Request, &arg32->Request, 1233 copy_from_user(&arg64.Request, &arg32->Request,
1229 sizeof(arg64.Request)); 1234 sizeof(arg64.Request));
1230 err |= 1235 err |=
1231 copy_from_user(&arg64.error_info, &arg32->error_info, 1236 copy_from_user(&arg64.error_info, &arg32->error_info,
1232 sizeof(arg64.error_info)); 1237 sizeof(arg64.error_info));
1233 err |= get_user(arg64.buf_size, &arg32->buf_size); 1238 err |= get_user(arg64.buf_size, &arg32->buf_size);
1234 err |= get_user(arg64.malloc_size, &arg32->malloc_size); 1239 err |= get_user(arg64.malloc_size, &arg32->malloc_size);
1235 err |= get_user(cp, &arg32->buf); 1240 err |= get_user(cp, &arg32->buf);
1236 arg64.buf = compat_ptr(cp); 1241 arg64.buf = compat_ptr(cp);
1237 err |= copy_to_user(p, &arg64, sizeof(arg64)); 1242 err |= copy_to_user(p, &arg64, sizeof(arg64));
1238 1243
1239 if (err) 1244 if (err)
1240 return -EFAULT; 1245 return -EFAULT;
1241 1246
1242 err = do_ioctl(bdev, mode, CCISS_BIG_PASSTHRU, (unsigned long)p); 1247 err = do_ioctl(bdev, mode, CCISS_BIG_PASSTHRU, (unsigned long)p);
1243 if (err) 1248 if (err)
1244 return err; 1249 return err;
1245 err |= 1250 err |=
1246 copy_in_user(&arg32->error_info, &p->error_info, 1251 copy_in_user(&arg32->error_info, &p->error_info,
1247 sizeof(arg32->error_info)); 1252 sizeof(arg32->error_info));
1248 if (err) 1253 if (err)
1249 return -EFAULT; 1254 return -EFAULT;
1250 return err; 1255 return err;
1251 } 1256 }
1252 #endif 1257 #endif
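Both compat handlers above follow the same pattern: copy the 32-bit userland layout field by field into the native structure, widening the 32-bit `buf` token with compat_ptr() before re-issuing the native ioctl through do_ioctl(). A userspace sketch of just the widening step (the struct layouts are simplified stand-ins, not the real IOCTL32_Command_struct/IOCTL_Command_struct):

```c
#include <stdint.h>

/* Simplified stand-in for the 32-bit layout: buf is a 32-bit token. */
struct args32 {
	uint32_t buf_size;
	uint32_t buf;
};

/* Simplified stand-in for the native layout: buf is a real pointer. */
struct args64 {
	uint32_t buf_size;
	void *buf;
};

/* Mirror of the compat_ptr() step: copy the plain fields and widen
 * the 32-bit buffer token to a native pointer. */
static void widen_args(const struct args32 *a32, struct args64 *a64)
{
	a64->buf_size = a32->buf_size;
	a64->buf = (void *)(uintptr_t)a32->buf;
}
```

The driver does this through a compat_alloc_user_space() scratch area so the native ioctl path sees an ordinary __user pointer; the sketch only shows the layout translation itself.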
1253 1258
1254 static int cciss_getgeo(struct block_device *bdev, struct hd_geometry *geo) 1259 static int cciss_getgeo(struct block_device *bdev, struct hd_geometry *geo)
1255 { 1260 {
1256 drive_info_struct *drv = get_drv(bdev->bd_disk); 1261 drive_info_struct *drv = get_drv(bdev->bd_disk);
1257 1262
1258 if (!drv->cylinders) 1263 if (!drv->cylinders)
1259 return -ENXIO; 1264 return -ENXIO;
1260 1265
1261 geo->heads = drv->heads; 1266 geo->heads = drv->heads;
1262 geo->sectors = drv->sectors; 1267 geo->sectors = drv->sectors;
1263 geo->cylinders = drv->cylinders; 1268 geo->cylinders = drv->cylinders;
1264 return 0; 1269 return 0;
1265 } 1270 }
1266 1271
1267 static void check_ioctl_unit_attention(ctlr_info_t *h, CommandList_struct *c) 1272 static void check_ioctl_unit_attention(ctlr_info_t *h, CommandList_struct *c)
1268 { 1273 {
1269 if (c->err_info->CommandStatus == CMD_TARGET_STATUS && 1274 if (c->err_info->CommandStatus == CMD_TARGET_STATUS &&
1270 c->err_info->ScsiStatus != SAM_STAT_CHECK_CONDITION) 1275 c->err_info->ScsiStatus != SAM_STAT_CHECK_CONDITION)
1271 (void)check_for_unit_attention(h, c); 1276 (void)check_for_unit_attention(h, c);
1272 } 1277 }
1273 1278
1274 static int cciss_getpciinfo(ctlr_info_t *h, void __user *argp) 1279 static int cciss_getpciinfo(ctlr_info_t *h, void __user *argp)
1275 { 1280 {
1276 cciss_pci_info_struct pciinfo; 1281 cciss_pci_info_struct pciinfo;
1277 1282
1278 if (!argp) 1283 if (!argp)
1279 return -EINVAL; 1284 return -EINVAL;
1280 pciinfo.domain = pci_domain_nr(h->pdev->bus); 1285 pciinfo.domain = pci_domain_nr(h->pdev->bus);
1281 pciinfo.bus = h->pdev->bus->number; 1286 pciinfo.bus = h->pdev->bus->number;
1282 pciinfo.dev_fn = h->pdev->devfn; 1287 pciinfo.dev_fn = h->pdev->devfn;
1283 pciinfo.board_id = h->board_id; 1288 pciinfo.board_id = h->board_id;
1284 if (copy_to_user(argp, &pciinfo, sizeof(cciss_pci_info_struct))) 1289 if (copy_to_user(argp, &pciinfo, sizeof(cciss_pci_info_struct)))
1285 return -EFAULT; 1290 return -EFAULT;
1286 return 0; 1291 return 0;
1287 } 1292 }
1288 1293
static int cciss_getintinfo(ctlr_info_t *h, void __user *argp)
{
	cciss_coalint_struct intinfo;

	if (!argp)
		return -EINVAL;
	intinfo.delay = readl(&h->cfgtable->HostWrite.CoalIntDelay);
	intinfo.count = readl(&h->cfgtable->HostWrite.CoalIntCount);
	if (copy_to_user(argp, &intinfo, sizeof(cciss_coalint_struct)))
		return -EFAULT;
	return 0;
}

static int cciss_setintinfo(ctlr_info_t *h, void __user *argp)
{
	cciss_coalint_struct intinfo;
	unsigned long flags;
	int i;

	if (!argp)
		return -EINVAL;
	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;
	if (copy_from_user(&intinfo, argp, sizeof(intinfo)))
		return -EFAULT;
	if ((intinfo.delay == 0) && (intinfo.count == 0))
		return -EINVAL;
	spin_lock_irqsave(&h->lock, flags);
	/* Update the field, and then ring the doorbell */
	writel(intinfo.delay, &(h->cfgtable->HostWrite.CoalIntDelay));
	writel(intinfo.count, &(h->cfgtable->HostWrite.CoalIntCount));
	writel(CFGTBL_ChangeReq, h->vaddr + SA5_DOORBELL);

	for (i = 0; i < MAX_IOCTL_CONFIG_WAIT; i++) {
		if (!(readl(h->vaddr + SA5_DOORBELL) & CFGTBL_ChangeReq))
			break;
		udelay(1000); /* delay and try again */
	}
	spin_unlock_irqrestore(&h->lock, flags);
	if (i >= MAX_IOCTL_CONFIG_WAIT)
		return -EAGAIN;
	return 0;
}

static int cciss_getnodename(ctlr_info_t *h, void __user *argp)
{
	NodeName_type NodeName;
	int i;

	if (!argp)
		return -EINVAL;
	for (i = 0; i < 16; i++)
		NodeName[i] = readb(&h->cfgtable->ServerName[i]);
	if (copy_to_user(argp, NodeName, sizeof(NodeName_type)))
		return -EFAULT;
	return 0;
}

static int cciss_setnodename(ctlr_info_t *h, void __user *argp)
{
	NodeName_type NodeName;
	unsigned long flags;
	int i;

	if (!argp)
		return -EINVAL;
	if (!capable(CAP_SYS_ADMIN))
		return -EPERM;
	if (copy_from_user(NodeName, argp, sizeof(NodeName_type)))
		return -EFAULT;
	spin_lock_irqsave(&h->lock, flags);
	/* Update the field, and then ring the doorbell */
	for (i = 0; i < 16; i++)
		writeb(NodeName[i], &h->cfgtable->ServerName[i]);
	writel(CFGTBL_ChangeReq, h->vaddr + SA5_DOORBELL);
	for (i = 0; i < MAX_IOCTL_CONFIG_WAIT; i++) {
		if (!(readl(h->vaddr + SA5_DOORBELL) & CFGTBL_ChangeReq))
			break;
		udelay(1000); /* delay and try again */
	}
	spin_unlock_irqrestore(&h->lock, flags);
	if (i >= MAX_IOCTL_CONFIG_WAIT)
		return -EAGAIN;
	return 0;
}

static int cciss_getheartbeat(ctlr_info_t *h, void __user *argp)
{
	Heartbeat_type heartbeat;

	if (!argp)
		return -EINVAL;
	heartbeat = readl(&h->cfgtable->HeartBeat);
	if (copy_to_user(argp, &heartbeat, sizeof(Heartbeat_type)))
		return -EFAULT;
	return 0;
}

static int cciss_getbustypes(ctlr_info_t *h, void __user *argp)
{
	BusTypes_type BusTypes;

	if (!argp)
		return -EINVAL;
	BusTypes = readl(&h->cfgtable->BusTypes);
	if (copy_to_user(argp, &BusTypes, sizeof(BusTypes_type)))
		return -EFAULT;
	return 0;
}

static int cciss_getfirmver(ctlr_info_t *h, void __user *argp)
{
	FirmwareVer_type firmware;

	if (!argp)
		return -EINVAL;
	memcpy(firmware, h->firm_ver, 4);

	if (copy_to_user(argp, firmware, sizeof(FirmwareVer_type)))
		return -EFAULT;
	return 0;
}

static int cciss_getdrivver(ctlr_info_t *h, void __user *argp)
{
	DriverVer_type DriverVer = DRIVER_VERSION;

	if (!argp)
		return -EINVAL;
	if (copy_to_user(argp, &DriverVer, sizeof(DriverVer_type)))
		return -EFAULT;
	return 0;
}

static int cciss_getluninfo(ctlr_info_t *h,
	struct gendisk *disk, void __user *argp)
{
	LogvolInfo_struct luninfo;
	drive_info_struct *drv = get_drv(disk);

	if (!argp)
		return -EINVAL;
	memcpy(&luninfo.LunID, drv->LunID, sizeof(luninfo.LunID));
	luninfo.num_opens = drv->usage_count;
	luninfo.num_parts = 0;
	if (copy_to_user(argp, &luninfo, sizeof(LogvolInfo_struct)))
		return -EFAULT;
	return 0;
}

static int cciss_passthru(ctlr_info_t *h, void __user *argp)
{
	IOCTL_Command_struct iocommand;
	CommandList_struct *c;
	char *buff = NULL;
	u64bit temp64;
	DECLARE_COMPLETION_ONSTACK(wait);

	if (!argp)
		return -EINVAL;

	if (!capable(CAP_SYS_RAWIO))
		return -EPERM;

	if (copy_from_user(&iocommand, argp, sizeof(IOCTL_Command_struct)))
		return -EFAULT;
	if ((iocommand.buf_size < 1) &&
	    (iocommand.Request.Type.Direction != XFER_NONE)) {
		return -EINVAL;
	}
	if (iocommand.buf_size > 0) {
		buff = kmalloc(iocommand.buf_size, GFP_KERNEL);
		if (buff == NULL)
			return -ENOMEM;
	}
	if (iocommand.Request.Type.Direction == XFER_WRITE) {
		/* Copy the data into the buffer we created */
		if (copy_from_user(buff, iocommand.buf, iocommand.buf_size)) {
			kfree(buff);
			return -EFAULT;
		}
	} else {
		memset(buff, 0, iocommand.buf_size);
	}
	c = cmd_special_alloc(h);
	if (!c) {
		kfree(buff);
		return -ENOMEM;
	}
	/* Fill in the command type */
	c->cmd_type = CMD_IOCTL_PEND;
	/* Fill in Command Header */
	c->Header.ReplyQueue = 0; /* unused in simple mode */
	if (iocommand.buf_size > 0) { /* buffer to fill */
		c->Header.SGList = 1;
		c->Header.SGTotal = 1;
	} else { /* no buffers to fill */
		c->Header.SGList = 0;
		c->Header.SGTotal = 0;
	}
	c->Header.LUN = iocommand.LUN_info;
	/* use the kernel address of the cmd block as the tag */
	c->Header.Tag.lower = c->busaddr;

	/* Fill in Request block */
	c->Request = iocommand.Request;

	/* Fill in the scatter gather information */
	if (iocommand.buf_size > 0) {
		temp64.val = pci_map_single(h->pdev, buff,
			iocommand.buf_size, PCI_DMA_BIDIRECTIONAL);
		c->SG[0].Addr.lower = temp64.val32.lower;
		c->SG[0].Addr.upper = temp64.val32.upper;
		c->SG[0].Len = iocommand.buf_size;
		c->SG[0].Ext = 0; /* we are not chaining */
	}
	c->waiting = &wait;

	enqueue_cmd_and_start_io(h, c);
	wait_for_completion(&wait);

	/* unlock the buffers from DMA */
	temp64.val32.lower = c->SG[0].Addr.lower;
	temp64.val32.upper = c->SG[0].Addr.upper;
	pci_unmap_single(h->pdev, (dma_addr_t) temp64.val, iocommand.buf_size,
		PCI_DMA_BIDIRECTIONAL);
	check_ioctl_unit_attention(h, c);

	/* Copy the error information out */
	iocommand.error_info = *(c->err_info);
	if (copy_to_user(argp, &iocommand, sizeof(IOCTL_Command_struct))) {
		kfree(buff);
		cmd_special_free(h, c);
		return -EFAULT;
	}

	if (iocommand.Request.Type.Direction == XFER_READ) {
		/* Copy the data out of the buffer we created */
		if (copy_to_user(iocommand.buf, buff, iocommand.buf_size)) {
			kfree(buff);
			cmd_special_free(h, c);
			return -EFAULT;
		}
	}
	kfree(buff);
	cmd_special_free(h, c);
	return 0;
}

static int cciss_bigpassthru(ctlr_info_t *h, void __user *argp)
{
	BIG_IOCTL_Command_struct *ioc;
	CommandList_struct *c;
	unsigned char **buff = NULL;
	int *buff_size = NULL;
	u64bit temp64;
	BYTE sg_used = 0;
	int status = 0;
	int i;
	DECLARE_COMPLETION_ONSTACK(wait);
	__u32 left;
	__u32 sz;
	BYTE __user *data_ptr;

	if (!argp)
		return -EINVAL;
	if (!capable(CAP_SYS_RAWIO))
		return -EPERM;
	ioc = kmalloc(sizeof(*ioc), GFP_KERNEL);
	if (!ioc) {
		status = -ENOMEM;
		goto cleanup1;
	}
	if (copy_from_user(ioc, argp, sizeof(*ioc))) {
		status = -EFAULT;
		goto cleanup1;
	}
	if ((ioc->buf_size < 1) &&
	    (ioc->Request.Type.Direction != XFER_NONE)) {
		status = -EINVAL;
		goto cleanup1;
	}
	/* Check kmalloc limits using all SGs */
	if (ioc->malloc_size > MAX_KMALLOC_SIZE) {
		status = -EINVAL;
		goto cleanup1;
	}
	if (ioc->buf_size > ioc->malloc_size * MAXSGENTRIES) {
		status = -EINVAL;
		goto cleanup1;
	}
	buff = kzalloc(MAXSGENTRIES * sizeof(char *), GFP_KERNEL);
	if (!buff) {
		status = -ENOMEM;
		goto cleanup1;
	}
	buff_size = kmalloc(MAXSGENTRIES * sizeof(int), GFP_KERNEL);
	if (!buff_size) {
		status = -ENOMEM;
		goto cleanup1;
	}
	left = ioc->buf_size;
	data_ptr = ioc->buf;
	while (left) {
		sz = (left > ioc->malloc_size) ? ioc->malloc_size : left;
		buff_size[sg_used] = sz;
		buff[sg_used] = kmalloc(sz, GFP_KERNEL);
		if (buff[sg_used] == NULL) {
			status = -ENOMEM;
			goto cleanup1;
		}
		if (ioc->Request.Type.Direction == XFER_WRITE) {
			if (copy_from_user(buff[sg_used], data_ptr, sz)) {
				status = -EFAULT;
				goto cleanup1;
			}
		} else {
			memset(buff[sg_used], 0, sz);
		}
		left -= sz;
		data_ptr += sz;
		sg_used++;
	}
	c = cmd_special_alloc(h);
	if (!c) {
		status = -ENOMEM;
		goto cleanup1;
	}
	c->cmd_type = CMD_IOCTL_PEND;
	c->Header.ReplyQueue = 0;
	c->Header.SGList = sg_used;
	c->Header.SGTotal = sg_used;
	c->Header.LUN = ioc->LUN_info;
	c->Header.Tag.lower = c->busaddr;

	c->Request = ioc->Request;
	for (i = 0; i < sg_used; i++) {
		temp64.val = pci_map_single(h->pdev, buff[i], buff_size[i],
			PCI_DMA_BIDIRECTIONAL);
		c->SG[i].Addr.lower = temp64.val32.lower;
		c->SG[i].Addr.upper = temp64.val32.upper;
		c->SG[i].Len = buff_size[i];
		c->SG[i].Ext = 0; /* we are not chaining */
	}
	c->waiting = &wait;
	enqueue_cmd_and_start_io(h, c);
	wait_for_completion(&wait);
	/* unlock the buffers from DMA */
	for (i = 0; i < sg_used; i++) {
		temp64.val32.lower = c->SG[i].Addr.lower;
		temp64.val32.upper = c->SG[i].Addr.upper;
		pci_unmap_single(h->pdev,
			(dma_addr_t) temp64.val, buff_size[i],
			PCI_DMA_BIDIRECTIONAL);
	}
	check_ioctl_unit_attention(h, c);
	/* Copy the error information out */
	ioc->error_info = *(c->err_info);
	if (copy_to_user(argp, ioc, sizeof(*ioc))) {
		cmd_special_free(h, c);
		status = -EFAULT;
		goto cleanup1;
	}
	if (ioc->Request.Type.Direction == XFER_READ) {
		/* Copy the data out of the buffer we created */
		BYTE __user *ptr = ioc->buf;
		for (i = 0; i < sg_used; i++) {
			if (copy_to_user(ptr, buff[i], buff_size[i])) {
				cmd_special_free(h, c);
				status = -EFAULT;
				goto cleanup1;
			}
			ptr += buff_size[i];
		}
	}
	cmd_special_free(h, c);
	status = 0;
cleanup1:
	if (buff) {
		for (i = 0; i < sg_used; i++)
			kfree(buff[i]);
		kfree(buff);
	}
	kfree(buff_size);
	kfree(ioc);
	return status;
}

static int cciss_ioctl(struct block_device *bdev, fmode_t mode,
	unsigned int cmd, unsigned long arg)
{
	struct gendisk *disk = bdev->bd_disk;
	ctlr_info_t *h = get_host(disk);
	void __user *argp = (void __user *)arg;

	dev_dbg(&h->pdev->dev, "cciss_ioctl: Called with cmd=%x %lx\n",
		cmd, arg);
	switch (cmd) {
	case CCISS_GETPCIINFO:
		return cciss_getpciinfo(h, argp);
	case CCISS_GETINTINFO:
		return cciss_getintinfo(h, argp);
	case CCISS_SETINTINFO:
		return cciss_setintinfo(h, argp);
	case CCISS_GETNODENAME:
		return cciss_getnodename(h, argp);
	case CCISS_SETNODENAME:
		return cciss_setnodename(h, argp);
	case CCISS_GETHEARTBEAT:
		return cciss_getheartbeat(h, argp);
	case CCISS_GETBUSTYPES:
		return cciss_getbustypes(h, argp);
	case CCISS_GETFIRMVER:
		return cciss_getfirmver(h, argp);
	case CCISS_GETDRIVVER:
		return cciss_getdrivver(h, argp);
	case CCISS_DEREGDISK:
	case CCISS_REGNEWD:
	case CCISS_REVALIDVOLS:
		return rebuild_lun_table(h, 0, 1);
	case CCISS_GETLUNINFO:
		return cciss_getluninfo(h, disk, argp);
	case CCISS_PASSTHRU:
		return cciss_passthru(h, argp);
	case CCISS_BIG_PASSTHRU:
		return cciss_bigpassthru(h, argp);

	/* scsi_cmd_ioctl handles these, below, though some are not */
	/* very meaningful for cciss.  SG_IO is the main one people want. */

	case SG_GET_VERSION_NUM:
	case SG_SET_TIMEOUT:
	case SG_GET_TIMEOUT:
	case SG_GET_RESERVED_SIZE:
	case SG_SET_RESERVED_SIZE:
	case SG_EMULATED_HOST:
	case SG_IO:
	case SCSI_IOCTL_SEND_COMMAND:
		return scsi_cmd_ioctl(disk->queue, disk, mode, cmd, argp);

	/* scsi_cmd_ioctl would normally handle these, below, but */
	/* they aren't a good fit for cciss, as CD-ROMs are */
	/* not supported, and we don't have any bus/target/lun */
	/* which we present to the kernel. */

	case CDROM_SEND_PACKET:
	case CDROMCLOSETRAY:
	case CDROMEJECT:
	case SCSI_IOCTL_GET_IDLUN:
	case SCSI_IOCTL_GET_BUS_NUMBER:
	default:
		return -ENOTTY;
	}
}

static void cciss_check_queues(ctlr_info_t *h)
{
	int start_queue = h->next_to_run;
	int i;

	/* check to see if we have maxed out the number of commands that can
	 * be placed on the queue.  If so then exit.  We do this check here
	 * in case the interrupt we serviced was from an ioctl and did not
	 * free any new commands.
	 */
	if ((find_first_zero_bit(h->cmd_pool_bits, h->nr_cmds)) == h->nr_cmds)
		return;

	/* We have room on the queue for more commands.  Now we need to queue
	 * them up.  We will also keep track of the next queue to run so
	 * that every queue gets a chance to be started first.
	 */
	for (i = 0; i < h->highest_lun + 1; i++) {
		int curr_queue = (start_queue + i) % (h->highest_lun + 1);
		/* make sure the disk has been added and the drive is real
		 * because this can be called from the middle of init_one.
		 */
		if (!h->drv[curr_queue])
			continue;
		if (!(h->drv[curr_queue]->queue) ||
			!(h->drv[curr_queue]->heads))
			continue;
		blk_start_queue(h->gendisk[curr_queue]->queue);

		/* check to see if we have maxed out the number of commands
		 * that can be placed on the queue.
		 */
		if ((find_first_zero_bit(h->cmd_pool_bits, h->nr_cmds)) == h->nr_cmds) {
			if (curr_queue == start_queue) {
				h->next_to_run =
					(start_queue + 1) % (h->highest_lun + 1);
				break;
			} else {
				h->next_to_run = curr_queue;
				break;
			}
		}
	}
}

static void cciss_softirq_done(struct request *rq)
{
	CommandList_struct *c = rq->completion_data;
	ctlr_info_t *h = hba[c->ctlr];
	SGDescriptor_struct *curr_sg = c->SG;
	u64bit temp64;
	unsigned long flags;
	int i, ddir;
	int sg_index = 0;

	if (c->Request.Type.Direction == XFER_READ)
		ddir = PCI_DMA_FROMDEVICE;
	else
		ddir = PCI_DMA_TODEVICE;

	/* command did not need to be retried */
	/* unmap the DMA mapping for all the scatter gather elements */
	for (i = 0; i < c->Header.SGList; i++) {
		if (curr_sg[sg_index].Ext == CCISS_SG_CHAIN) {
			cciss_unmap_sg_chain_block(h, c);
			/* Point to the next block */
			curr_sg = h->cmd_sg_list[c->cmdindex];
			sg_index = 0;
		}
		temp64.val32.lower = curr_sg[sg_index].Addr.lower;
		temp64.val32.upper = curr_sg[sg_index].Addr.upper;
		pci_unmap_page(h->pdev, temp64.val, curr_sg[sg_index].Len,
			ddir);
		++sg_index;
	}

	dev_dbg(&h->pdev->dev, "Done with %p\n", rq);

	/* set the residual count for pc requests */
	if (rq->cmd_type == REQ_TYPE_BLOCK_PC)
		rq->resid_len = c->err_info->ResidualCnt;

	blk_end_request_all(rq, (rq->errors == 0) ? 0 : -EIO);

	spin_lock_irqsave(&h->lock, flags);
	cmd_free(h, c);
	cciss_check_queues(h);
	spin_unlock_irqrestore(&h->lock, flags);
}

static inline void log_unit_to_scsi3addr(ctlr_info_t *h,
	unsigned char scsi3addr[], uint32_t log_unit)
{
	memcpy(scsi3addr, h->drv[log_unit]->LunID,
		sizeof(h->drv[log_unit]->LunID));
}

1844 /* This function gets the SCSI vendor, model, and revision of a logical drive 1849 /* This function gets the SCSI vendor, model, and revision of a logical drive
1845 * via the inquiry page 0. Model, vendor, and rev are set to empty strings if 1850 * via the inquiry page 0. Model, vendor, and rev are set to empty strings if
1846 * they cannot be read. 1851 * they cannot be read.
1847 */ 1852 */
1848 static void cciss_get_device_descr(ctlr_info_t *h, int logvol, 1853 static void cciss_get_device_descr(ctlr_info_t *h, int logvol,
1849 char *vendor, char *model, char *rev) 1854 char *vendor, char *model, char *rev)
1850 { 1855 {
1851 int rc; 1856 int rc;
1852 InquiryData_struct *inq_buf; 1857 InquiryData_struct *inq_buf;
1853 unsigned char scsi3addr[8]; 1858 unsigned char scsi3addr[8];
1854 1859
1855 *vendor = '\0'; 1860 *vendor = '\0';
1856 *model = '\0'; 1861 *model = '\0';
1857 *rev = '\0'; 1862 *rev = '\0';
1858 1863
1859 inq_buf = kzalloc(sizeof(InquiryData_struct), GFP_KERNEL); 1864 inq_buf = kzalloc(sizeof(InquiryData_struct), GFP_KERNEL);
1860 if (!inq_buf) 1865 if (!inq_buf)
1861 return; 1866 return;
1862 1867
1863 log_unit_to_scsi3addr(h, scsi3addr, logvol); 1868 log_unit_to_scsi3addr(h, scsi3addr, logvol);
1864 rc = sendcmd_withirq(h, CISS_INQUIRY, inq_buf, sizeof(*inq_buf), 0, 1869 rc = sendcmd_withirq(h, CISS_INQUIRY, inq_buf, sizeof(*inq_buf), 0,
1865 scsi3addr, TYPE_CMD); 1870 scsi3addr, TYPE_CMD);
1866 if (rc == IO_OK) { 1871 if (rc == IO_OK) {
1867 memcpy(vendor, &inq_buf->data_byte[8], VENDOR_LEN); 1872 memcpy(vendor, &inq_buf->data_byte[8], VENDOR_LEN);
1868 vendor[VENDOR_LEN] = '\0'; 1873 vendor[VENDOR_LEN] = '\0';
1869 memcpy(model, &inq_buf->data_byte[16], MODEL_LEN); 1874 memcpy(model, &inq_buf->data_byte[16], MODEL_LEN);
1870 model[MODEL_LEN] = '\0'; 1875 model[MODEL_LEN] = '\0';
1871 memcpy(rev, &inq_buf->data_byte[32], REV_LEN); 1876 memcpy(rev, &inq_buf->data_byte[32], REV_LEN);
1872 rev[REV_LEN] = '\0'; 1877 rev[REV_LEN] = '\0';
1873 } 1878 }
1874 1879
1875 kfree(inq_buf); 1880 kfree(inq_buf);
1876 return; 1881 return;
1877 } 1882 }
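The offsets used above follow the standard INQUIRY data layout: vendor identification at bytes 8-15, product identification at bytes 16-31, and product revision at bytes 32-35. A minimal userspace sketch of that parse, assuming the driver's VENDOR_LEN/MODEL_LEN/REV_LEN are 8, 16, and 4:

```c
#include <string.h>

#define VENDOR_LEN 8	/* assumed values, matching standard INQUIRY fields */
#define MODEL_LEN 16
#define REV_LEN 4

/* Copy the fixed-offset identification fields out of an INQUIRY reply.
 * Buffers must have room for the field plus a NUL terminator. */
static void parse_inquiry(const unsigned char *data,
			  char *vendor, char *model, char *rev)
{
	memcpy(vendor, &data[8], VENDOR_LEN);
	vendor[VENDOR_LEN] = '\0';
	memcpy(model, &data[16], MODEL_LEN);
	model[MODEL_LEN] = '\0';
	memcpy(rev, &data[32], REV_LEN);
	rev[REV_LEN] = '\0';
}
```

The fields are space-padded rather than NUL-terminated on the wire, which is why the driver appends its own terminators.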

/* This function gets the serial number of a logical drive via
 * inquiry page 0x83.  Serial no. is 16 bytes.  If the serial
 * number cannot be had, for whatever reason, 16 bytes of 0xff
 * are returned instead.
 */
static void cciss_get_serial_no(ctlr_info_t *h, int logvol,
	unsigned char *serial_no, int buflen)
{
#define PAGE_83_INQ_BYTES 64
	int rc;
	unsigned char *buf;
	unsigned char scsi3addr[8];

	if (buflen > 16)
		buflen = 16;
	memset(serial_no, 0xff, buflen);
	buf = kzalloc(PAGE_83_INQ_BYTES, GFP_KERNEL);
	if (!buf)
		return;
	memset(serial_no, 0, buflen);
	log_unit_to_scsi3addr(h, scsi3addr, logvol);
	rc = sendcmd_withirq(h, CISS_INQUIRY, buf,
		PAGE_83_INQ_BYTES, 0x83, scsi3addr, TYPE_CMD);
	if (rc == IO_OK)
		memcpy(serial_no, &buf[8], buflen);
	kfree(buf);
	return;
}
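The fallback behavior here is worth noting: the caller's buffer is pre-filled with 0xff, and real serial bytes (starting at offset 8 of the page 0x83 reply) overwrite it only if the command succeeds. A hedged userspace sketch of just that logic, with illustrative names:

```c
#include <string.h>

#define SERIAL_OFFSET 8	/* serial bytes begin at offset 8 of the reply */

/* Fill serial_no from a page 0x83 reply, or with 0xff on failure.
 * io_ok stands in for the driver's IO_OK return-code check. */
static void get_serial(const unsigned char *page83, int io_ok,
		       unsigned char *serial_no, int buflen)
{
	if (buflen > 16)
		buflen = 16;
	memset(serial_no, 0xff, buflen);	/* failure sentinel */
	if (io_ok)
		memcpy(serial_no, &page83[SERIAL_OFFSET], buflen);
}
```

An all-0xff serial number therefore reads unambiguously as "could not be obtained" to the comparison logic in cciss_update_drive_info below.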

/*
 * cciss_add_disk sets up the block device queue for a logical drive
 */
static int cciss_add_disk(ctlr_info_t *h, struct gendisk *disk,
				int drv_index)
{
	disk->queue = blk_init_queue(do_cciss_request, &h->lock);
	if (!disk->queue)
		goto init_queue_failure;
	sprintf(disk->disk_name, "cciss/c%dd%d", h->ctlr, drv_index);
	disk->major = h->major;
	disk->first_minor = drv_index << NWD_SHIFT;
	disk->fops = &cciss_fops;
	if (cciss_create_ld_sysfs_entry(h, drv_index))
		goto cleanup_queue;
	disk->private_data = h->drv[drv_index];
	disk->driverfs_dev = &h->drv[drv_index]->dev;

	/* Set up queue information */
	blk_queue_bounce_limit(disk->queue, h->pdev->dma_mask);

	/* This is a hardware imposed limit. */
	blk_queue_max_segments(disk->queue, h->maxsgentries);

	blk_queue_max_hw_sectors(disk->queue, h->cciss_max_sectors);

	blk_queue_softirq_done(disk->queue, cciss_softirq_done);

	disk->queue->queuedata = h;

	blk_queue_logical_block_size(disk->queue,
			h->drv[drv_index]->block_size);

	/* Make sure all queue data is written out before */
	/* setting h->drv[drv_index]->queue, as setting this */
	/* allows the interrupt handler to start the queue */
	wmb();
	h->drv[drv_index]->queue = disk->queue;
	add_disk(disk);
	return 0;

cleanup_queue:
	blk_cleanup_queue(disk->queue);
	disk->queue = NULL;
init_queue_failure:
	return -1;
}
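The `first_minor = drv_index << NWD_SHIFT` line, together with `alloc_disk(1 << NWD_SHIFT)` later in this patch, carves the minor-number space into fixed-size blocks per logical drive. A small sketch of that layout, assuming NWD_SHIFT is 4 (16 minors, i.e. whole disk plus 15 partitions, per drive):

```c
#define NWD_SHIFT 4	/* assumed value; 1 << NWD_SHIFT minors per drive */

/* First minor of logical drive drv_index (the whole-disk node). */
static int first_minor(int drv_index)
{
	return drv_index << NWD_SHIFT;
}

/* Minor of partition 'partition' (0 = whole disk) on drive drv_index. */
static int part_minor(int drv_index, int partition)
{
	return (drv_index << NWD_SHIFT) + partition;
}
```

So /dev/cciss/c0d3 would start at minor 48, and c0d3p1 would be minor 49, under this assumed shift.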

/* This function will check the usage_count of the drive to be updated/added.
 * If the usage_count is zero and it is a heretofore unknown drive, or,
 * the drive's capacity, geometry, or serial number has changed,
 * then the drive information will be updated and the disk will be
 * re-registered with the kernel.  If these conditions don't hold,
 * then it will be left alone for the next reboot.  The exception to this
 * is disk 0 which will always be left registered with the kernel since it
 * is also the controller node.  Any changes to disk 0 will show up on
 * the next reboot.
 */
static void cciss_update_drive_info(ctlr_info_t *h, int drv_index,
	int first_time, int via_ioctl)
{
	struct gendisk *disk;
	InquiryData_struct *inq_buff = NULL;
	unsigned int block_size;
	sector_t total_size;
	unsigned long flags = 0;
	int ret = 0;
	drive_info_struct *drvinfo;

	/* Get information about the disk and modify the driver structure */
	inq_buff = kmalloc(sizeof(InquiryData_struct), GFP_KERNEL);
	drvinfo = kzalloc(sizeof(*drvinfo), GFP_KERNEL);
	if (inq_buff == NULL || drvinfo == NULL)
		goto mem_msg;

	/* testing to see if 16-byte CDBs are already being used */
	if (h->cciss_read == CCISS_READ_16) {
		cciss_read_capacity_16(h, drv_index,
			&total_size, &block_size);

	} else {
		cciss_read_capacity(h, drv_index, &total_size, &block_size);
		/* if read_capacity returns all F's this volume is >2TB */
		/* in size so we switch to 16-byte CDB's for all */
		/* read/write ops */
		if (total_size == 0xFFFFFFFFULL) {
			cciss_read_capacity_16(h, drv_index,
				&total_size, &block_size);
			h->cciss_read = CCISS_READ_16;
			h->cciss_write = CCISS_WRITE_16;
		} else {
			h->cciss_read = CCISS_READ_10;
			h->cciss_write = CCISS_WRITE_10;
		}
	}

	cciss_geometry_inquiry(h, drv_index, total_size, block_size,
			       inq_buff, drvinfo);
	drvinfo->block_size = block_size;
	drvinfo->nr_blocks = total_size + 1;

	cciss_get_device_descr(h, drv_index, drvinfo->vendor,
				drvinfo->model, drvinfo->rev);
	cciss_get_serial_no(h, drv_index, drvinfo->serial_no,
			sizeof(drvinfo->serial_no));
	/* Save the lunid in case we deregister the disk, below. */
	memcpy(drvinfo->LunID, h->drv[drv_index]->LunID,
		sizeof(drvinfo->LunID));

	/* Is it the same disk we already know, and nothing's changed? */
	if (h->drv[drv_index]->raid_level != -1 &&
		((memcmp(drvinfo->serial_no,
				h->drv[drv_index]->serial_no, 16) == 0) &&
		drvinfo->block_size == h->drv[drv_index]->block_size &&
		drvinfo->nr_blocks == h->drv[drv_index]->nr_blocks &&
		drvinfo->heads == h->drv[drv_index]->heads &&
		drvinfo->sectors == h->drv[drv_index]->sectors &&
		drvinfo->cylinders == h->drv[drv_index]->cylinders))
			/* The disk is unchanged, nothing to update */
			goto freeret;

	/* If we get here it's not the same disk, or something's changed,
	 * so we need to deregister it, and re-register it, if it's not
	 * in use.
	 * If the disk already exists then deregister it before proceeding
	 * (unless it's the first disk, for the controller node).
	 */
	if (h->drv[drv_index]->raid_level != -1 && drv_index != 0) {
		dev_warn(&h->pdev->dev, "disk %d has changed.\n", drv_index);
		spin_lock_irqsave(&h->lock, flags);
		h->drv[drv_index]->busy_configuring = 1;
		spin_unlock_irqrestore(&h->lock, flags);

		/* deregister_disk sets h->drv[drv_index]->queue = NULL
		 * which keeps the interrupt handler from starting
		 * the queue.
		 */
		ret = deregister_disk(h, drv_index, 0, via_ioctl);
	}

	/* If the disk is in use return */
	if (ret)
		goto freeret;

	/* Save the new information from cciss_geometry_inquiry
	 * and serial number inquiry.  If the disk was deregistered
	 * above, then h->drv[drv_index] will be NULL.
	 */
	if (h->drv[drv_index] == NULL) {
		drvinfo->device_initialized = 0;
		h->drv[drv_index] = drvinfo;
		drvinfo = NULL; /* so it won't be freed below. */
	} else {
		/* special case for cxd0 */
		h->drv[drv_index]->block_size = drvinfo->block_size;
		h->drv[drv_index]->nr_blocks = drvinfo->nr_blocks;
		h->drv[drv_index]->heads = drvinfo->heads;
		h->drv[drv_index]->sectors = drvinfo->sectors;
		h->drv[drv_index]->cylinders = drvinfo->cylinders;
		h->drv[drv_index]->raid_level = drvinfo->raid_level;
		memcpy(h->drv[drv_index]->serial_no, drvinfo->serial_no, 16);
		memcpy(h->drv[drv_index]->vendor, drvinfo->vendor,
			VENDOR_LEN + 1);
		memcpy(h->drv[drv_index]->model, drvinfo->model, MODEL_LEN + 1);
		memcpy(h->drv[drv_index]->rev, drvinfo->rev, REV_LEN + 1);
	}

	++h->num_luns;
	disk = h->gendisk[drv_index];
	set_capacity(disk, h->drv[drv_index]->nr_blocks);

	/* If it's not disk 0 (drv_index != 0)
	 * or if it was disk 0, but there was previously
	 * no actual corresponding configured logical drive
	 * (raid_level == -1) then we want to update the
	 * logical drive's information.
	 */
	if (drv_index || first_time) {
		if (cciss_add_disk(h, disk, drv_index) != 0) {
			cciss_free_gendisk(h, drv_index);
			cciss_free_drive_info(h, drv_index);
			dev_warn(&h->pdev->dev, "could not update disk %d\n",
				drv_index);
			--h->num_luns;
		}
	}

freeret:
	kfree(inq_buff);
	kfree(drvinfo);
	return;
mem_msg:
	dev_err(&h->pdev->dev, "out of memory\n");
	goto freeret;
}
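The CDB-size switch above relies on a SCSI convention: a 10-byte READ CAPACITY reply of all F's means the block count does not fit in 32 bits (a volume larger than 2TB with 512-byte blocks), so the driver retries with READ CAPACITY (16) and stays on 16-byte read/write CDBs. A minimal userspace sketch of that decision, with illustrative names:

```c
#include <stdint.h>

enum cdb_kind { CDB_10, CDB_16 };

/* Decide which CDB size to use from a 10-byte READ CAPACITY result.
 * 0xFFFFFFFF is the sentinel for "capacity exceeds 32 bits". */
static enum cdb_kind pick_cdb(uint64_t capacity_from_read10)
{
	if (capacity_from_read10 == 0xFFFFFFFFULL)
		return CDB_16;	/* >2TB: fall back to READ/WRITE (16) */
	return CDB_10;
}
```

Note the switch is sticky per controller in the driver: once h->cciss_read is CCISS_READ_16, the 10-byte probe is skipped on later rescans.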

/* This function will find the first index of the controller's drive array
 * that has a null drv pointer and allocate the drive info struct and
 * will return that index.  This is where new drives will be added.
 * If the index to be returned is greater than the highest_lun index for
 * the controller then highest_lun is set to this new index.
 * If there are no available indexes or if the allocation fails, then -1
 * is returned.  "controller_node" is used to know if this is a real
 * logical drive, or just the controller node, which determines if this
 * counts towards highest_lun.
 */
static int cciss_alloc_drive_info(ctlr_info_t *h, int controller_node)
{
	int i;
	drive_info_struct *drv;

	/* Search for an empty slot for our drive info */
	for (i = 0; i < CISS_MAX_LUN; i++) {

		/* if not cxd0 case, and it's occupied, skip it. */
		if (h->drv[i] && i != 0)
			continue;
		/*
		 * If it's cxd0 case, and drv is alloc'ed already, and a
		 * disk is configured there, skip it.
		 */
		if (i == 0 && h->drv[i] && h->drv[i]->raid_level != -1)
			continue;

		/*
		 * We've found an empty slot.  Update highest_lun
		 * provided this isn't just the fake cxd0 controller node.
		 */
		if (i > h->highest_lun && !controller_node)
			h->highest_lun = i;

		/* If adding a real disk at cxd0, and it's already alloc'ed */
		if (i == 0 && h->drv[i] != NULL)
			return i;

		/*
		 * Found an empty slot, not already alloc'ed.  Allocate it.
		 * Mark it with raid_level == -1, so we know it's new later on.
		 */
		drv = kzalloc(sizeof(*drv), GFP_KERNEL);
		if (!drv)
			return -1;
		drv->raid_level = -1; /* so we know it's new */
		h->drv[i] = drv;
		return i;
	}
	return -1;
}
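Stripped of the kernel-specific cxd0 and highest_lun handling, the search above is a first-free-slot scan: deleted drives leave NULL holes that get reused, which is what preserves mount-point ordering across rescans. A minimal sketch of that core:

```c
#include <stddef.h>

#define MAX_LUN 16	/* illustrative stand-in for CISS_MAX_LUN */

/* Return the first index whose slot is NULL, or -1 if the array is full. */
static int find_free_slot(void *drv[MAX_LUN])
{
	int i;

	for (i = 0; i < MAX_LUN; i++)
		if (drv[i] == NULL)
			return i;
	return -1;	/* no available index */
}
```

Because the scan always returns the lowest free index, removing the drive at index 1 and adding a new one later puts the newcomer back at index 1 rather than shifting the others down.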
2156 2161
2157 static void cciss_free_drive_info(ctlr_info_t *h, int drv_index) 2162 static void cciss_free_drive_info(ctlr_info_t *h, int drv_index)
2158 { 2163 {
2159 kfree(h->drv[drv_index]); 2164 kfree(h->drv[drv_index]);
2160 h->drv[drv_index] = NULL; 2165 h->drv[drv_index] = NULL;
2161 } 2166 }
2162 2167
2163 static void cciss_free_gendisk(ctlr_info_t *h, int drv_index) 2168 static void cciss_free_gendisk(ctlr_info_t *h, int drv_index)
2164 { 2169 {
2165 put_disk(h->gendisk[drv_index]); 2170 put_disk(h->gendisk[drv_index]);
2166 h->gendisk[drv_index] = NULL; 2171 h->gendisk[drv_index] = NULL;
2167 } 2172 }
2168 2173
2169 /* cciss_add_gendisk finds a free hba[]->drv structure 2174 /* cciss_add_gendisk finds a free hba[]->drv structure
2170 * and allocates a gendisk if needed, and sets the lunid 2175 * and allocates a gendisk if needed, and sets the lunid
2171 * in the drvinfo structure. It returns the index into 2176 * in the drvinfo structure. It returns the index into
2172 * the ->drv[] array, or -1 if none are free. 2177 * the ->drv[] array, or -1 if none are free.
2173 * is_controller_node indicates whether highest_lun should 2178 * is_controller_node indicates whether highest_lun should
2174 * count this disk, or if it's only being added to provide 2179 * count this disk, or if it's only being added to provide
2175 * a means to talk to the controller in case no logical 2180 * a means to talk to the controller in case no logical
2176 * drives have yet been configured. 2181 * drives have yet been configured.
2177 */ 2182 */
2178 static int cciss_add_gendisk(ctlr_info_t *h, unsigned char lunid[], 2183 static int cciss_add_gendisk(ctlr_info_t *h, unsigned char lunid[],
2179 int controller_node) 2184 int controller_node)
2180 { 2185 {
2181 int drv_index; 2186 int drv_index;
2182 2187
2183 drv_index = cciss_alloc_drive_info(h, controller_node); 2188 drv_index = cciss_alloc_drive_info(h, controller_node);
2184 if (drv_index == -1) 2189 if (drv_index == -1)
2185 return -1; 2190 return -1;
2186 2191
2187 /*Check if the gendisk needs to be allocated */ 2192 /*Check if the gendisk needs to be allocated */
2188 if (!h->gendisk[drv_index]) { 2193 if (!h->gendisk[drv_index]) {
2189 h->gendisk[drv_index] = 2194 h->gendisk[drv_index] =
2190 alloc_disk(1 << NWD_SHIFT); 2195 alloc_disk(1 << NWD_SHIFT);
2191 if (!h->gendisk[drv_index]) { 2196 if (!h->gendisk[drv_index]) {
2192 dev_err(&h->pdev->dev, 2197 dev_err(&h->pdev->dev,
2193 "could not allocate a new disk %d\n", 2198 "could not allocate a new disk %d\n",
2194 drv_index); 2199 drv_index);
2195 goto err_free_drive_info; 2200 goto err_free_drive_info;
2196 } 2201 }
2197 } 2202 }
2198 memcpy(h->drv[drv_index]->LunID, lunid, 2203 memcpy(h->drv[drv_index]->LunID, lunid,
2199 sizeof(h->drv[drv_index]->LunID)); 2204 sizeof(h->drv[drv_index]->LunID));
2200 if (cciss_create_ld_sysfs_entry(h, drv_index)) 2205 if (cciss_create_ld_sysfs_entry(h, drv_index))
2201 goto err_free_disk; 2206 goto err_free_disk;
2202 /* Don't need to mark this busy because nobody */ 2207 /* Don't need to mark this busy because nobody */
2203 /* else knows about this disk yet to contend */ 2208 /* else knows about this disk yet to contend */
2204 /* for access to it. */ 2209 /* for access to it. */
2205 h->drv[drv_index]->busy_configuring = 0; 2210 h->drv[drv_index]->busy_configuring = 0;
2206 wmb(); 2211 wmb();
2207 return drv_index; 2212 return drv_index;
2208 2213
2209 err_free_disk: 2214 err_free_disk:
2210 cciss_free_gendisk(h, drv_index); 2215 cciss_free_gendisk(h, drv_index);
2211 err_free_drive_info: 2216 err_free_drive_info:
2212 cciss_free_drive_info(h, drv_index); 2217 cciss_free_drive_info(h, drv_index);
2213 return -1; 2218 return -1;
2214 } 2219 }
2215 2220
2216 /* This is for the special case of a controller which 2221 /* This is for the special case of a controller which
2217 * has no logical drives. In this case, we still need 2222 * has no logical drives. In this case, we still need
2218 * to register a disk so the controller can be accessed 2223 * to register a disk so the controller can be accessed
2219 * by the Array Config Utility. 2224 * by the Array Config Utility.
2220 */ 2225 */
2221 static void cciss_add_controller_node(ctlr_info_t *h) 2226 static void cciss_add_controller_node(ctlr_info_t *h)
2222 { 2227 {
2223 struct gendisk *disk; 2228 struct gendisk *disk;
2224 int drv_index; 2229 int drv_index;
2225 2230
2226 if (h->gendisk[0] != NULL) /* already did this? Then bail. */ 2231 if (h->gendisk[0] != NULL) /* already did this? Then bail. */
2227 return; 2232 return;
2228 2233
2229 drv_index = cciss_add_gendisk(h, CTLR_LUNID, 1); 2234 drv_index = cciss_add_gendisk(h, CTLR_LUNID, 1);
2230 if (drv_index == -1) 2235 if (drv_index == -1)
2231 goto error; 2236 goto error;
2232 h->drv[drv_index]->block_size = 512; 2237 h->drv[drv_index]->block_size = 512;
2233 h->drv[drv_index]->nr_blocks = 0; 2238 h->drv[drv_index]->nr_blocks = 0;
2234 h->drv[drv_index]->heads = 0; 2239 h->drv[drv_index]->heads = 0;
2235 h->drv[drv_index]->sectors = 0; 2240 h->drv[drv_index]->sectors = 0;
2236 h->drv[drv_index]->cylinders = 0; 2241 h->drv[drv_index]->cylinders = 0;
2237 h->drv[drv_index]->raid_level = -1; 2242 h->drv[drv_index]->raid_level = -1;
2238 memset(h->drv[drv_index]->serial_no, 0, 16); 2243 memset(h->drv[drv_index]->serial_no, 0, 16);
2239 disk = h->gendisk[drv_index]; 2244 disk = h->gendisk[drv_index];
2240 if (cciss_add_disk(h, disk, drv_index) == 0) 2245 if (cciss_add_disk(h, disk, drv_index) == 0)
2241 return; 2246 return;
2242 cciss_free_gendisk(h, drv_index); 2247 cciss_free_gendisk(h, drv_index);
2243 cciss_free_drive_info(h, drv_index); 2248 cciss_free_drive_info(h, drv_index);
2244 error: 2249 error:
2245 dev_warn(&h->pdev->dev, "could not add disk 0.\n"); 2250 dev_warn(&h->pdev->dev, "could not add disk 0.\n");
2246 return; 2251 return;
2247 } 2252 }
2248 2253
2249 /* This function will add and remove logical drives from the Logical 2254 /* This function will add and remove logical drives from the Logical
2250 * drive array of the controller and maintain persistency of ordering 2255 * drive array of the controller and maintain persistency of ordering
2251 * so that mount points are preserved until the next reboot. This allows 2256 * so that mount points are preserved until the next reboot. This allows
2252 * for the removal of logical drives in the middle of the drive array 2257 * for the removal of logical drives in the middle of the drive array
2253 * without a re-ordering of those drives. 2258 * without a re-ordering of those drives.
2254 * INPUT 2259 * INPUT
2255 * h = The controller to perform the operations on 2260 * h = The controller to perform the operations on
2256 */ 2261 */
2257 static int rebuild_lun_table(ctlr_info_t *h, int first_time, 2262 static int rebuild_lun_table(ctlr_info_t *h, int first_time,
2258 int via_ioctl) 2263 int via_ioctl)
2259 { 2264 {
2260 int num_luns; 2265 int num_luns;
2261 ReportLunData_struct *ld_buff = NULL; 2266 ReportLunData_struct *ld_buff = NULL;
2262 int return_code; 2267 int return_code;
2263 int listlength = 0; 2268 int listlength = 0;
2264 int i; 2269 int i;
2265 int drv_found; 2270 int drv_found;
2266 int drv_index = 0; 2271 int drv_index = 0;
2267 unsigned char lunid[8] = CTLR_LUNID; 2272 unsigned char lunid[8] = CTLR_LUNID;
2268 unsigned long flags; 2273 unsigned long flags;
2269 2274
2270 if (!capable(CAP_SYS_RAWIO)) 2275 if (!capable(CAP_SYS_RAWIO))
2271 return -EPERM; 2276 return -EPERM;
2272 2277
2273 /* Set busy_configuring flag for this operation */ 2278 /* Set busy_configuring flag for this operation */
2274 spin_lock_irqsave(&h->lock, flags); 2279 spin_lock_irqsave(&h->lock, flags);
2275 if (h->busy_configuring) { 2280 if (h->busy_configuring) {
2276 spin_unlock_irqrestore(&h->lock, flags); 2281 spin_unlock_irqrestore(&h->lock, flags);
2277 return -EBUSY; 2282 return -EBUSY;
2278 } 2283 }
2279 h->busy_configuring = 1; 2284 h->busy_configuring = 1;
2280 spin_unlock_irqrestore(&h->lock, flags); 2285 spin_unlock_irqrestore(&h->lock, flags);
2281 2286
2282 ld_buff = kzalloc(sizeof(ReportLunData_struct), GFP_KERNEL); 2287 ld_buff = kzalloc(sizeof(ReportLunData_struct), GFP_KERNEL);
2283 if (ld_buff == NULL) 2288 if (ld_buff == NULL)
2284 goto mem_msg; 2289 goto mem_msg;
2285 2290
2286 return_code = sendcmd_withirq(h, CISS_REPORT_LOG, ld_buff, 2291 return_code = sendcmd_withirq(h, CISS_REPORT_LOG, ld_buff,
2287 sizeof(ReportLunData_struct), 2292 sizeof(ReportLunData_struct),
2288 0, CTLR_LUNID, TYPE_CMD); 2293 0, CTLR_LUNID, TYPE_CMD);
2289 2294
2290 if (return_code == IO_OK) 2295 if (return_code == IO_OK)
2291 listlength = be32_to_cpu(*(__be32 *) ld_buff->LUNListLength); 2296 listlength = be32_to_cpu(*(__be32 *) ld_buff->LUNListLength);
2292 else { /* reading number of logical volumes failed */ 2297 else { /* reading number of logical volumes failed */
2293 dev_warn(&h->pdev->dev, 2298 dev_warn(&h->pdev->dev,
2294 "report logical volume command failed\n"); 2299 "report logical volume command failed\n");
2295 listlength = 0; 2300 listlength = 0;
2296 goto freeret; 2301 goto freeret;
2297 } 2302 }
2298 2303
2299 num_luns = listlength / 8; /* 8 bytes per entry */ 2304 num_luns = listlength / 8; /* 8 bytes per entry */
2300 if (num_luns > CISS_MAX_LUN) { 2305 if (num_luns > CISS_MAX_LUN) {
2301 num_luns = CISS_MAX_LUN; 2306 num_luns = CISS_MAX_LUN;
2302 dev_warn(&h->pdev->dev, "more luns configured" 2307 dev_warn(&h->pdev->dev, "more luns configured"
2303 " on controller than can be handled by" 2308 " on controller than can be handled by"
2304 " this driver.\n"); 2309 " this driver.\n");
2305 } 2310 }
2306
2307 if (num_luns == 0)
2308 cciss_add_controller_node(h);
2309
2310 /* Compare controller drive array to driver's drive array
2311 * to see if any drives are missing on the controller due
2312 * to action of Array Config Utility (user deletes drive)
2313 * and deregister logical drives which have disappeared.
2314 */
2315 for (i = 0; i <= h->highest_lun; i++) {
2316 int j;
2317 drv_found = 0;
2318
2319 /* skip holes in the array from already deleted drives */
2320 if (h->drv[i] == NULL)
2321 continue;
2322
2323 for (j = 0; j < num_luns; j++) {
2324 memcpy(lunid, &ld_buff->LUN[j][0], sizeof(lunid));
2325 if (memcmp(h->drv[i]->LunID, lunid,
2326 sizeof(lunid)) == 0) {
2327 drv_found = 1;
2328 break;
2329 }
2330 }
2331 if (!drv_found) {
2332 /* Deregister it from the OS, it's gone. */
2333 spin_lock_irqsave(&h->lock, flags);
2334 h->drv[i]->busy_configuring = 1;
2335 spin_unlock_irqrestore(&h->lock, flags);
2336 return_code = deregister_disk(h, i, 1, via_ioctl);
2337 if (h->drv[i] != NULL)
2338 h->drv[i]->busy_configuring = 0;
2339 }
2340 }
2341
2342 /* Compare controller drive array to driver's drive array.
2343 * Check for updates in the drive information and any new drives
2344 * on the controller due to ACU adding logical drives, or changing
2345 * a logical drive's size, etc. Reregister any new/changed drives
2346 */
2347 for (i = 0; i < num_luns; i++) {
2348 int j;
2349
2350 drv_found = 0;
2351
2352 memcpy(lunid, &ld_buff->LUN[i][0], sizeof(lunid));
2353 /* Find if the LUN is already in the drive array
2354 * of the driver. If so then update its info
2355 * if not in use. If it does not exist then find
2356 * the first free index and add it.
2357 */
2358 for (j = 0; j <= h->highest_lun; j++) {
2359 if (h->drv[j] != NULL &&
2360 memcmp(h->drv[j]->LunID, lunid,
2361 sizeof(h->drv[j]->LunID)) == 0) {
2362 drv_index = j;
2363 drv_found = 1;
2364 break;
2365 }
2366 }
2367
2368 /* check if the drive was found already in the array */
2369 if (!drv_found) {
2370 drv_index = cciss_add_gendisk(h, lunid, 0);
2371 if (drv_index == -1)
2372 goto freeret;
2373 }
2374 cciss_update_drive_info(h, drv_index, first_time, via_ioctl);
2375 } /* end for */
2376
2377 freeret:
2378 kfree(ld_buff);
2379 h->busy_configuring = 0;
2380 /* We return -1 here to tell the ACU that we have registered/updated
2381 * all of the drives that we can and to keep it from calling us
2382 * additional times.
2383 */
2384 return -1;
2385 mem_msg:
2386 dev_err(&h->pdev->dev, "out of memory\n");
2387 h->busy_configuring = 0;
2388 goto freeret;
2389 }
2390
2391 static void cciss_clear_drive_info(drive_info_struct *drive_info)
2392 {
2393 /* zero out the disk size info */
2394 drive_info->nr_blocks = 0;
2395 drive_info->block_size = 0;
2396 drive_info->heads = 0;
2397 drive_info->sectors = 0;
2398 drive_info->cylinders = 0;
2399 drive_info->raid_level = -1;
2400 memset(drive_info->serial_no, 0, sizeof(drive_info->serial_no));
2401 memset(drive_info->model, 0, sizeof(drive_info->model));
2402 memset(drive_info->rev, 0, sizeof(drive_info->rev));
2403 memset(drive_info->vendor, 0, sizeof(drive_info->vendor));
2404 /*
2405 * don't clear the LUNID though, we need to remember which
2406 * one this one is.
2407 */
2408 }
2409
2410 /* This function will deregister the disk and its queue from the
2411 * kernel. It must be called with the controller lock held and the
2412 * drv structure's busy_configuring flag set. Its parameters are:
2413 *
2414 * disk = This is the disk to be deregistered
2415 * drv = This is the drive_info_struct associated with the disk to be
2416 * deregistered. It contains information about the disk used
2417 * by the driver.
2418 * clear_all = This flag determines whether or not the disk information
2419 * is going to be completely cleared out and the highest_lun
2420 * reset. Sometimes we want to clear out information about
2421 * the disk in preparation for re-adding it. In this case
2422 * the highest_lun should be left unchanged and the LunID
2423 * should not be cleared.
2424 * via_ioctl
2425 * This indicates whether we've reached this path via ioctl.
2426 * This affects the maximum usage count allowed for c0d0 to be messed with.
2427 * If this path is reached via ioctl(), then the max_usage_count will
2428 * be 1, as the process calling ioctl() has got to have the device open.
2429 * If we get here via sysfs, then the max usage count will be zero.
2430 */
2431 static int deregister_disk(ctlr_info_t *h, int drv_index,
2432 int clear_all, int via_ioctl)
2433 {
2434 int i;
2435 struct gendisk *disk;
2436 drive_info_struct *drv;
2437 int recalculate_highest_lun;
2438
2439 if (!capable(CAP_SYS_RAWIO))
2440 return -EPERM;
2441
2442 drv = h->drv[drv_index];
2443 disk = h->gendisk[drv_index];
2444
2445 /* make sure logical volume is NOT in use */
2446 if (clear_all || (h->gendisk[0] == disk)) {
2447 if (drv->usage_count > via_ioctl)
2448 return -EBUSY;
2449 } else if (drv->usage_count > 0)
2450 return -EBUSY;
2451
2452 recalculate_highest_lun = (drv == h->drv[h->highest_lun]);
2453
2454 /* invalidate the devices and deregister the disk. If it is disk
2455 * zero do not deregister it but just zero out its values. This
2456 * allows us to delete disk zero but keep the controller registered.
2457 */
2458 if (h->gendisk[0] != disk) {
2459 struct request_queue *q = disk->queue;
2460 if (disk->flags & GENHD_FL_UP) {
2461 cciss_destroy_ld_sysfs_entry(h, drv_index, 0);
2462 del_gendisk(disk);
2463 }
2464 if (q)
2465 blk_cleanup_queue(q);
2466 /* If clear_all is set then we are deleting the logical
2467 * drive, not just refreshing its info. For drives
2468 * other than disk 0 we will call put_disk. We do not
2469 * do this for disk 0 as we need it to be able to
2470 * configure the controller.
2471 */
2472 if (clear_all) {
2473 /* This isn't pretty, but we need to find the
2474 * disk in our array and NULL out the pointer.
2475 * This is so that we will call alloc_disk if
2476 * this index is used again later.
2477 */
2478 for (i = 0; i < CISS_MAX_LUN; i++) {
2479 if (h->gendisk[i] == disk) {
2480 h->gendisk[i] = NULL;
2481 break;
2482 }
2483 }
2484 put_disk(disk);
2485 }
2486 } else {
2487 set_capacity(disk, 0);
2488 cciss_clear_drive_info(drv);
2489 }
2490
2491 --h->num_luns;
2492
2493 /* if it was the last disk, find the new highest lun */
2494 if (clear_all && recalculate_highest_lun) {
2495 int newhighest = -1;
2496 for (i = 0; i <= h->highest_lun; i++) {
2497 /* if the disk has size > 0, it is available */
2498 if (h->drv[i] && h->drv[i]->heads)
2499 newhighest = i;
2500 }
2501 h->highest_lun = newhighest;
2502 }
2503 return 0;
2504 }
2505
2506 static int fill_cmd(ctlr_info_t *h, CommandList_struct *c, __u8 cmd, void *buff,
2507 size_t size, __u8 page_code, unsigned char *scsi3addr,
2508 int cmd_type)
2509 {
2510 u64bit buff_dma_handle;
2511 int status = IO_OK;
2512
2513 c->cmd_type = CMD_IOCTL_PEND;
2514 c->Header.ReplyQueue = 0;
2515 if (buff != NULL) {
2516 c->Header.SGList = 1;
2517 c->Header.SGTotal = 1;
2518 } else {
2519 c->Header.SGList = 0;
2520 c->Header.SGTotal = 0;
2521 }
2522 c->Header.Tag.lower = c->busaddr;
2523 memcpy(c->Header.LUN.LunAddrBytes, scsi3addr, 8);
2524
2525 c->Request.Type.Type = cmd_type;
2526 if (cmd_type == TYPE_CMD) {
2527 switch (cmd) {
2528 case CISS_INQUIRY:
2529 /* are we trying to read a vital product page */
2530 if (page_code != 0) {
2531 c->Request.CDB[1] = 0x01;
2532 c->Request.CDB[2] = page_code;
2533 }
2534 c->Request.CDBLen = 6;
2535 c->Request.Type.Attribute = ATTR_SIMPLE;
2536 c->Request.Type.Direction = XFER_READ;
2537 c->Request.Timeout = 0;
2538 c->Request.CDB[0] = CISS_INQUIRY;
2539 c->Request.CDB[4] = size & 0xFF;
2540 break;
2541 case CISS_REPORT_LOG:
2542 case CISS_REPORT_PHYS:
2543 /* Talking to controller, so it's a physical command
2544 mode = 00 target = 0. Nothing to write.
2545 */
2546 c->Request.CDBLen = 12;
2547 c->Request.Type.Attribute = ATTR_SIMPLE;
2548 c->Request.Type.Direction = XFER_READ;
2549 c->Request.Timeout = 0;
2550 c->Request.CDB[0] = cmd;
2551 c->Request.CDB[6] = (size >> 24) & 0xFF; /* MSB */
2552 c->Request.CDB[7] = (size >> 16) & 0xFF;
2553 c->Request.CDB[8] = (size >> 8) & 0xFF;
2554 c->Request.CDB[9] = size & 0xFF;
2555 break;
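The size encoding used by the report commands above is plain big-endian packing of the allocation length into CDB bytes 6..9. A standalone sketch of that packing; the 16-byte buffer size and helper name are illustrative, not driver symbols:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Pack a 32-bit transfer size into CDB bytes 6..9, most significant
 * byte first, matching the big-endian layout used above. */
static void pack_report_size(uint8_t cdb[16], size_t size)
{
	cdb[6] = (size >> 24) & 0xFF; /* MSB */
	cdb[7] = (size >> 16) & 0xFF;
	cdb[8] = (size >> 8) & 0xFF;
	cdb[9] = size & 0xFF;         /* LSB */
}
```

The READ_CAPACITY_16 case later in the switch does the same thing, only shifted to bytes 10..13 of a 16-byte CDB.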
2556
2557 case CCISS_READ_CAPACITY:
2558 c->Request.CDBLen = 10;
2559 c->Request.Type.Attribute = ATTR_SIMPLE;
2560 c->Request.Type.Direction = XFER_READ;
2561 c->Request.Timeout = 0;
2562 c->Request.CDB[0] = cmd;
2563 break;
2564 case CCISS_READ_CAPACITY_16:
2565 c->Request.CDBLen = 16;
2566 c->Request.Type.Attribute = ATTR_SIMPLE;
2567 c->Request.Type.Direction = XFER_READ;
2568 c->Request.Timeout = 0;
2569 c->Request.CDB[0] = cmd;
2570 c->Request.CDB[1] = 0x10;
2571 c->Request.CDB[10] = (size >> 24) & 0xFF;
2572 c->Request.CDB[11] = (size >> 16) & 0xFF;
2573 c->Request.CDB[12] = (size >> 8) & 0xFF;
2574 c->Request.CDB[13] = size & 0xFF;
2575 c->Request.Timeout = 0;
2576 c->Request.CDB[0] = cmd;
2577 break;
2578 case CCISS_CACHE_FLUSH:
2579 c->Request.CDBLen = 12;
2580 c->Request.Type.Attribute = ATTR_SIMPLE;
2581 c->Request.Type.Direction = XFER_WRITE;
2582 c->Request.Timeout = 0;
2583 c->Request.CDB[0] = BMIC_WRITE;
2584 c->Request.CDB[6] = BMIC_CACHE_FLUSH;
2585 break;
2586 case TEST_UNIT_READY:
2587 c->Request.CDBLen = 6;
2588 c->Request.Type.Attribute = ATTR_SIMPLE;
2589 c->Request.Type.Direction = XFER_NONE;
2590 c->Request.Timeout = 0;
2591 break;
2592 default:
2593 dev_warn(&h->pdev->dev, "Unknown Command 0x%02x\n", cmd);
2594 return IO_ERROR;
2595 }
2596 } else if (cmd_type == TYPE_MSG) {
2597 switch (cmd) {
2598 case CCISS_ABORT_MSG:
2599 c->Request.CDBLen = 12;
2600 c->Request.Type.Attribute = ATTR_SIMPLE;
2601 c->Request.Type.Direction = XFER_WRITE;
2602 c->Request.Timeout = 0;
2603 c->Request.CDB[0] = cmd; /* abort */
2604 c->Request.CDB[1] = 0; /* abort a command */
2605 /* buff contains the tag of the command to abort */
2606 memcpy(&c->Request.CDB[4], buff, 8);
2607 break;
2608 case CCISS_RESET_MSG:
2609 c->Request.CDBLen = 16;
2610 c->Request.Type.Attribute = ATTR_SIMPLE;
2611 c->Request.Type.Direction = XFER_NONE;
2612 c->Request.Timeout = 0;
2613 memset(&c->Request.CDB[0], 0, sizeof(c->Request.CDB));
2614 c->Request.CDB[0] = cmd; /* reset */
2615 c->Request.CDB[1] = CCISS_RESET_TYPE_TARGET;
2616 break;
2617 case CCISS_NOOP_MSG:
2618 c->Request.CDBLen = 1;
2619 c->Request.Type.Attribute = ATTR_SIMPLE;
2620 c->Request.Type.Direction = XFER_WRITE;
2621 c->Request.Timeout = 0;
2622 c->Request.CDB[0] = cmd;
2623 break;
2624 default:
2625 dev_warn(&h->pdev->dev,
2626 "unknown message type %d\n", cmd);
2627 return IO_ERROR;
2628 }
2629 } else {
2630 dev_warn(&h->pdev->dev, "unknown command type %d\n", cmd_type);
2631 return IO_ERROR;
2632 }
2633 /* Fill in the scatter gather information */
2634 if (size > 0) {
2635 buff_dma_handle.val = (__u64) pci_map_single(h->pdev,
2636 buff, size,
2637 PCI_DMA_BIDIRECTIONAL);
2638 c->SG[0].Addr.lower = buff_dma_handle.val32.lower;
2639 c->SG[0].Addr.upper = buff_dma_handle.val32.upper;
2640 c->SG[0].Len = size;
2641 c->SG[0].Ext = 0; /* we are not chaining */
2642 }
2643 return status;
2644 }
2645
2646 static int __devinit cciss_send_reset(ctlr_info_t *h, unsigned char *scsi3addr,
2647 u8 reset_type)
2648 {
2649 CommandList_struct *c;
2650 int return_status;
2651
2652 c = cmd_alloc(h);
2653 if (!c)
2654 return -ENOMEM;
2655 return_status = fill_cmd(h, c, CCISS_RESET_MSG, NULL, 0, 0,
2656 CTLR_LUNID, TYPE_MSG);
2657 c->Request.CDB[1] = reset_type; /* fill_cmd defaults to target reset */
2658 if (return_status != IO_OK) {
2659 cmd_special_free(h, c);
2660 return return_status;
2661 }
2662 c->waiting = NULL;
2663 enqueue_cmd_and_start_io(h, c);
2664 /* Don't wait for completion, the reset won't complete. Don't free
2665 * the command either. This is the last command we will send before
2666 * re-initializing everything, so it doesn't matter and won't leak.
2667 */
2668 return 0;
2669 }
2670
2671 static int check_target_status(ctlr_info_t *h, CommandList_struct *c)
2672 {
2673 switch (c->err_info->ScsiStatus) {
2674 case SAM_STAT_GOOD:
2675 return IO_OK;
2676 case SAM_STAT_CHECK_CONDITION:
2677 switch (0xf & c->err_info->SenseInfo[2]) {
2678 case 0: return IO_OK; /* no sense */
2679 case 1: return IO_OK; /* recovered error */
2680 default:
2681 if (check_for_unit_attention(h, c))
2682 return IO_NEEDS_RETRY;
2683 dev_warn(&h->pdev->dev, "cmd 0x%02x "
2684 "check condition, sense key = 0x%02x\n",
2685 c->Request.CDB[0], c->err_info->SenseInfo[2]);
2686 }
2687 break;
2688 default:
2689 dev_warn(&h->pdev->dev, "cmd 0x%02x "
2690 "scsi status = 0x%02x\n",
2691 c->Request.CDB[0], c->err_info->ScsiStatus);
2692 break;
2693 }
2694 return IO_ERROR;
2695 }
2696
2697 static int process_sendcmd_error(ctlr_info_t *h, CommandList_struct *c)
2698 {
2699 int return_status = IO_OK;
2700
2701 if (c->err_info->CommandStatus == CMD_SUCCESS)
2702 return IO_OK;
2703
2704 switch (c->err_info->CommandStatus) {
2705 case CMD_TARGET_STATUS:
2706 return_status = check_target_status(h, c);
2707 break;
2708 case CMD_DATA_UNDERRUN:
2709 case CMD_DATA_OVERRUN:
2710 /* expected for inquiry and report lun commands */
2711 break;
2712 case CMD_INVALID:
2713 dev_warn(&h->pdev->dev, "cmd 0x%02x is "
2714 "reported invalid\n", c->Request.CDB[0]);
2715 return_status = IO_ERROR;
2716 break;
2717 case CMD_PROTOCOL_ERR:
2718 dev_warn(&h->pdev->dev, "cmd 0x%02x has "
2719 "protocol error\n", c->Request.CDB[0]);
2720 return_status = IO_ERROR;
2721 break;
2722 case CMD_HARDWARE_ERR:
2723 dev_warn(&h->pdev->dev, "cmd 0x%02x had "
2724 "hardware error\n", c->Request.CDB[0]);
2725 return_status = IO_ERROR;
2726 break;
2727 case CMD_CONNECTION_LOST:
2728 dev_warn(&h->pdev->dev, "cmd 0x%02x had "
2729 "connection lost\n", c->Request.CDB[0]);
2730 return_status = IO_ERROR;
2731 break;
2732 case CMD_ABORTED:
2733 dev_warn(&h->pdev->dev, "cmd 0x%02x was "
2734 "aborted\n", c->Request.CDB[0]);
2735 return_status = IO_ERROR;
2736 break;
2737 case CMD_ABORT_FAILED:
2738 dev_warn(&h->pdev->dev, "cmd 0x%02x reports "
2739 "abort failed\n", c->Request.CDB[0]);
2740 return_status = IO_ERROR;
2741 break;
2742 case CMD_UNSOLICITED_ABORT:
2743 dev_warn(&h->pdev->dev, "unsolicited abort 0x%02x\n",
2744 c->Request.CDB[0]);
2745 return_status = IO_NEEDS_RETRY;
2746 break;
2747 case CMD_UNABORTABLE:
2748 dev_warn(&h->pdev->dev, "cmd unabortable\n");
2749 return_status = IO_ERROR;
2750 break;
2751 default:
2752 dev_warn(&h->pdev->dev, "cmd 0x%02x returned "
2753 "unknown status %x\n", c->Request.CDB[0],
2754 c->err_info->CommandStatus);
2755 return_status = IO_ERROR;
2756 }
2757 return return_status;
2758 }
2759
2760 static int sendcmd_withirq_core(ctlr_info_t *h, CommandList_struct *c,
2761 int attempt_retry)
2762 {
2763 DECLARE_COMPLETION_ONSTACK(wait);
2764 u64bit buff_dma_handle;
2765 int return_status = IO_OK;
2766
2767 resend_cmd2:
2768 c->waiting = &wait;
2769 enqueue_cmd_and_start_io(h, c);
2770
2771 wait_for_completion(&wait);
2772
2773 if (c->err_info->CommandStatus == 0 || !attempt_retry)
2774 goto command_done;
2775
2776 return_status = process_sendcmd_error(h, c);
2777
2778 if (return_status == IO_NEEDS_RETRY &&
2779 c->retry_count < MAX_CMD_RETRIES) {
2780 dev_warn(&h->pdev->dev, "retrying 0x%02x\n",
2781 c->Request.CDB[0]);
2782 c->retry_count++;
2783 /* erase the old error information */
2784 memset(c->err_info, 0, sizeof(ErrorInfo_struct));
2785 return_status = IO_OK;
2786 INIT_COMPLETION(wait);
2787 goto resend_cmd2;
2788 }
2789
2790 command_done:
2791 /* unlock the buffers from DMA */
2792 buff_dma_handle.val32.lower = c->SG[0].Addr.lower;
2793 buff_dma_handle.val32.upper = c->SG[0].Addr.upper;
2794 pci_unmap_single(h->pdev, (dma_addr_t) buff_dma_handle.val,
2795 c->SG[0].Len, PCI_DMA_BIDIRECTIONAL);
2796 return return_status;
2797 }
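The resend_cmd2 path above is a bounded retry loop: re-issue the command while the error handler reports IO_NEEDS_RETRY and a per-command counter stays under MAX_CMD_RETRIES. A minimal user-space sketch of the same control flow; the status codes, the MAX_RETRIES cap, and flaky_op are stand-ins for the driver's symbols, not real kernel API:

```c
#include <assert.h>

enum { IO_OK = 0, IO_NEEDS_RETRY = 1, IO_ERROR = 2 };
#define MAX_RETRIES 3 /* stand-in for MAX_CMD_RETRIES */

/* Re-issue op() while it asks for a retry, allowing up to
 * MAX_RETRIES re-sends before giving up with its last status. */
static int send_with_retries(int (*op)(void))
{
	int retry_count = 0;
	int status;

	for (;;) {
		status = op();
		if (status != IO_NEEDS_RETRY || retry_count >= MAX_RETRIES)
			return status;
		retry_count++; /* clear-and-resend, like resend_cmd2 */
	}
}

/* Toy operation: asks for a retry twice, then succeeds. */
static int attempts;
static int flaky_op(void)
{
	return (++attempts < 3) ? IO_NEEDS_RETRY : IO_OK;
}
```

As in the driver, only IO_NEEDS_RETRY triggers a resend; hard errors (IO_ERROR) fall out of the loop immediately.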
2798
2799 static int sendcmd_withirq(ctlr_info_t *h, __u8 cmd, void *buff, size_t size,
2800 __u8 page_code, unsigned char scsi3addr[],
2801 int cmd_type)
2802 {
2803 CommandList_struct *c;
2804 int return_status;
2805
2806 c = cmd_special_alloc(h);
2807 if (!c)
2808 return -ENOMEM;
2809 return_status = fill_cmd(h, c, cmd, buff, size, page_code,
2810 scsi3addr, cmd_type);
2811 if (return_status == IO_OK)
2812 return_status = sendcmd_withirq_core(h, c, 1);
2813
2814 cmd_special_free(h, c);
2815 return return_status;
2816 }
2817
static void cciss_geometry_inquiry(ctlr_info_t *h, int logvol,
	sector_t total_size,
	unsigned int block_size,
	InquiryData_struct *inq_buff,
	drive_info_struct *drv)
{
	int return_code;
	unsigned long t;
	unsigned char scsi3addr[8];

	memset(inq_buff, 0, sizeof(InquiryData_struct));
	log_unit_to_scsi3addr(h, scsi3addr, logvol);
	return_code = sendcmd_withirq(h, CISS_INQUIRY, inq_buff,
		sizeof(*inq_buff), 0xC1, scsi3addr, TYPE_CMD);
	if (return_code == IO_OK) {
		if (inq_buff->data_byte[8] == 0xFF) {
			dev_warn(&h->pdev->dev,
				"reading geometry failed, volume "
				"does not support reading geometry\n");
			drv->heads = 255;
			drv->sectors = 32;	/* Sectors per track */
			drv->cylinders = total_size + 1;
			drv->raid_level = RAID_UNKNOWN;
		} else {
			drv->heads = inq_buff->data_byte[6];
			drv->sectors = inq_buff->data_byte[7];
			drv->cylinders = (inq_buff->data_byte[4] & 0xff) << 8;
			drv->cylinders += inq_buff->data_byte[5];
			drv->raid_level = inq_buff->data_byte[8];
		}
		drv->block_size = block_size;
		drv->nr_blocks = total_size + 1;
		t = drv->heads * drv->sectors;
		if (t > 1) {
			sector_t real_size = total_size + 1;
			unsigned long rem = sector_div(real_size, t);
			if (rem)
				real_size++;
			drv->cylinders = real_size;
		}
	} else {	/* Get geometry failed */
		dev_warn(&h->pdev->dev, "reading geometry failed\n");
	}
}

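The cylinder fixup at the end of cciss_geometry_inquiry() is a ceiling division of the usable block count by heads * sectors-per-track (sector_div() yields the quotient and remainder, and the remainder bumps the result by one). A minimal user-space sketch of the same arithmetic, with an illustrative name (`fake_cylinders` is not a driver symbol):

```c
#include <assert.h>

/* cylinders = ceil((total_size + 1) / (heads * sectors)),
 * where total_size is the last LBA, so total_size + 1 is the
 * number of blocks. Mirrors the sector_div()-plus-remainder
 * rounding in cciss_geometry_inquiry(). */
static unsigned long long fake_cylinders(unsigned long long total_size,
					 unsigned int heads,
					 unsigned int sectors)
{
	unsigned long long t = (unsigned long long)heads * sectors;
	unsigned long long blocks = total_size + 1;

	if (t <= 1)		/* degenerate geometry: one big cylinder each */
		return blocks;
	return (blocks + t - 1) / t;	/* round up */
}
```

With the fallback geometry of 255 heads and 32 sectors per track, exactly 8160 blocks fit in one cylinder, and one extra block rolls over to a second.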
static void
cciss_read_capacity(ctlr_info_t *h, int logvol, sector_t *total_size,
	unsigned int *block_size)
{
	ReadCapdata_struct *buf;
	int return_code;
	unsigned char scsi3addr[8];

	buf = kzalloc(sizeof(ReadCapdata_struct), GFP_KERNEL);
	if (!buf) {
		dev_warn(&h->pdev->dev, "out of memory\n");
		return;
	}

	log_unit_to_scsi3addr(h, scsi3addr, logvol);
	return_code = sendcmd_withirq(h, CCISS_READ_CAPACITY, buf,
		sizeof(ReadCapdata_struct), 0, scsi3addr, TYPE_CMD);
	if (return_code == IO_OK) {
		*total_size = be32_to_cpu(*(__be32 *) buf->total_size);
		*block_size = be32_to_cpu(*(__be32 *) buf->block_size);
	} else {	/* read capacity command failed */
		dev_warn(&h->pdev->dev, "read capacity failed\n");
		*total_size = 0;
		*block_size = BLOCK_SIZE;
	}
	kfree(buf);
}

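The READ CAPACITY reply buffer holds big-endian fields, which is why the driver casts the raw bytes and runs them through be32_to_cpu()/be64_to_cpu(). A portable user-space sketch of the 32-bit decode (illustrative helper name, not a driver or kernel symbol):

```c
#include <assert.h>
#include <stdint.h>

/* Decode a 4-byte big-endian field from a raw buffer; equivalent in
 * effect to be32_to_cpu() applied to the bytes, on any host byte order. */
static uint32_t be32_decode(const unsigned char *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}
```

For example, a block-size field of `00 00 02 00` decodes to 512, the usual logical block size.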
static void cciss_read_capacity_16(ctlr_info_t *h, int logvol,
	sector_t *total_size, unsigned int *block_size)
{
	ReadCapdata_struct_16 *buf;
	int return_code;
	unsigned char scsi3addr[8];

	buf = kzalloc(sizeof(ReadCapdata_struct_16), GFP_KERNEL);
	if (!buf) {
		dev_warn(&h->pdev->dev, "out of memory\n");
		return;
	}

	log_unit_to_scsi3addr(h, scsi3addr, logvol);
	return_code = sendcmd_withirq(h, CCISS_READ_CAPACITY_16,
		buf, sizeof(ReadCapdata_struct_16),
		0, scsi3addr, TYPE_CMD);
	if (return_code == IO_OK) {
		*total_size = be64_to_cpu(*(__be64 *) buf->total_size);
		*block_size = be32_to_cpu(*(__be32 *) buf->block_size);
	} else {	/* read capacity command failed */
		dev_warn(&h->pdev->dev, "read capacity failed\n");
		*total_size = 0;
		*block_size = BLOCK_SIZE;
	}
	dev_info(&h->pdev->dev, " blocks= %llu block_size= %d\n",
		(unsigned long long)*total_size+1, *block_size);
	kfree(buf);
}

static int cciss_revalidate(struct gendisk *disk)
{
	ctlr_info_t *h = get_host(disk);
	drive_info_struct *drv = get_drv(disk);
	int logvol;
	int FOUND = 0;
	unsigned int block_size;
	sector_t total_size;
	InquiryData_struct *inq_buff = NULL;

	for (logvol = 0; logvol <= h->highest_lun; logvol++) {
		if (!h->drv[logvol])
			continue;
		if (memcmp(h->drv[logvol]->LunID, drv->LunID,
			sizeof(drv->LunID)) == 0) {
			FOUND = 1;
			break;
		}
	}

	if (!FOUND)
		return 1;

	inq_buff = kmalloc(sizeof(InquiryData_struct), GFP_KERNEL);
	if (inq_buff == NULL) {
		dev_warn(&h->pdev->dev, "out of memory\n");
		return 1;
	}
	if (h->cciss_read == CCISS_READ_10) {
		cciss_read_capacity(h, logvol,
			&total_size, &block_size);
	} else {
		cciss_read_capacity_16(h, logvol,
			&total_size, &block_size);
	}
	cciss_geometry_inquiry(h, logvol, total_size, block_size,
		inq_buff, drv);

	blk_queue_logical_block_size(drv->queue, drv->block_size);
	set_capacity(disk, drv->nr_blocks);

	kfree(inq_buff);
	return 0;
}

/*
 * Map (physical) PCI mem into (virtual) kernel space
 */
static void __iomem *remap_pci_mem(ulong base, ulong size)
{
	ulong page_base = ((ulong) base) & PAGE_MASK;
	ulong page_offs = ((ulong) base) - page_base;
	void __iomem *page_remapped = ioremap(page_base, page_offs + size);

	return page_remapped ? (page_remapped + page_offs) : NULL;
}

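remap_pci_mem() first rounds the physical address down to a page boundary (ioremap() wants a page-aligned base), then re-adds the in-page offset to the returned virtual address. The address split can be sketched in user space, assuming 4 KiB pages for illustration (`split_addr`, `PG_SIZE`, and `PG_MASK` are made-up names, not kernel symbols):

```c
#include <assert.h>

#define PG_SIZE 4096UL			/* assumed page size */
#define PG_MASK (~(PG_SIZE - 1))	/* same shape as PAGE_MASK */

/* Split an address into a page-aligned base and an in-page offset,
 * the way remap_pci_mem() does before calling ioremap(). */
static void split_addr(unsigned long base, unsigned long *page_base,
		       unsigned long *page_offs)
{
	*page_base = base & PG_MASK;	/* round down to page boundary */
	*page_offs = base - *page_base;	/* remainder inside the page */
}
```

Adding the offset back onto the remapped base reconstructs a pointer to the exact byte the caller asked for.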
/*
 * Takes jobs off the Q and sends them to the hardware, then puts them on
 * the Q to wait for completion.
 */
static void start_io(ctlr_info_t *h)
{
	CommandList_struct *c;

	while (!list_empty(&h->reqQ)) {
		c = list_entry(h->reqQ.next, CommandList_struct, list);
		/* can't do anything if fifo is full */
		if ((h->access.fifo_full(h))) {
			dev_warn(&h->pdev->dev, "fifo full\n");
			break;
		}

		/* Get the first entry from the Request Q */
		removeQ(c);
		h->Qdepth--;

		/* Tell the controller to execute the command */
		h->access.submit_command(h, c);

		/* Put the job onto the completed Q */
		addQ(&h->cmpQ, c);
	}
}

/* Assumes that h->lock is held. */
/* Zeros out the error record and then resends the command back */
/* to the controller */
static inline void resend_cciss_cmd(ctlr_info_t *h, CommandList_struct *c)
{
	/* erase the old error information */
	memset(c->err_info, 0, sizeof(ErrorInfo_struct));

	/* add it to the software queue and then send it to the controller */
	addQ(&h->reqQ, c);
	h->Qdepth++;
	if (h->Qdepth > h->maxQsinceinit)
		h->maxQsinceinit = h->Qdepth;

	start_io(h);
}

static inline unsigned int make_status_bytes(unsigned int scsi_status_byte,
	unsigned int msg_byte, unsigned int host_byte,
	unsigned int driver_byte)
{
	/* inverse of macros in scsi.h */
	return (scsi_status_byte & 0xff) |
		((msg_byte & 0xff) << 8) |
		((host_byte & 0xff) << 16) |
		((driver_byte & 0xff) << 24);
}

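make_status_bytes() packs the four SCSI result bytes into one word: status in bits 0-7, message byte in 8-15, host byte in 16-23, driver byte in 24-31, the inverse of the status_byte()/msg_byte()/host_byte()/driver_byte() macros in scsi.h. A standalone copy of the same packing (renamed `pack_status_bytes` to avoid implying it is the driver's symbol):

```c
#include <assert.h>

/* Pack the four SCSI result bytes into one 32-bit word:
 *   bits  0-7  : SCSI status byte
 *   bits  8-15 : message byte
 *   bits 16-23 : host byte
 *   bits 24-31 : driver byte
 * Each input is masked to 8 bits first. */
static unsigned int pack_status_bytes(unsigned int scsi_status_byte,
				      unsigned int msg_byte,
				      unsigned int host_byte,
				      unsigned int driver_byte)
{
	return (scsi_status_byte & 0xff) |
	       ((msg_byte & 0xff) << 8) |
	       ((host_byte & 0xff) << 16) |
	       ((driver_byte & 0xff) << 24);
}
```

For example, CHECK CONDITION (0x02) with host byte DID_ERROR (0x07) packs to 0x00070002, and each field can be recovered by shifting and masking.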
static inline int evaluate_target_status(ctlr_info_t *h,
	CommandList_struct *cmd, int *retry_cmd)
{
	unsigned char sense_key;
	unsigned char status_byte, msg_byte, host_byte, driver_byte;
	int error_value;

	*retry_cmd = 0;
	/* If we get in here, it means we got "target status", that is, scsi status */
	status_byte = cmd->err_info->ScsiStatus;
	driver_byte = DRIVER_OK;
	msg_byte = cmd->err_info->CommandStatus; /* correct? seems too device specific */

	if (cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC)
		host_byte = DID_PASSTHROUGH;
	else
		host_byte = DID_OK;

	error_value = make_status_bytes(status_byte, msg_byte,
		host_byte, driver_byte);

	if (cmd->err_info->ScsiStatus != SAM_STAT_CHECK_CONDITION) {
		if (cmd->rq->cmd_type != REQ_TYPE_BLOCK_PC)
			dev_warn(&h->pdev->dev, "cmd %p "
				"has SCSI Status 0x%x\n",
				cmd, cmd->err_info->ScsiStatus);
		return error_value;
	}

	/* check the sense key */
	sense_key = 0xf & cmd->err_info->SenseInfo[2];
	/* no status or recovered error */
	if (((sense_key == 0x0) || (sense_key == 0x1)) &&
	    (cmd->rq->cmd_type != REQ_TYPE_BLOCK_PC))
		error_value = 0;

	if (check_for_unit_attention(h, cmd)) {
		*retry_cmd = !(cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC);
		return 0;
	}

	/* Not SG_IO or similar? */
	if (cmd->rq->cmd_type != REQ_TYPE_BLOCK_PC) {
		if (error_value != 0)
			dev_warn(&h->pdev->dev, "cmd %p has CHECK CONDITION"
				" sense key = 0x%x\n", cmd, sense_key);
		return error_value;
	}

	/* SG_IO or similar, copy sense data back */
	if (cmd->rq->sense) {
		if (cmd->rq->sense_len > cmd->err_info->SenseLen)
			cmd->rq->sense_len = cmd->err_info->SenseLen;
		memcpy(cmd->rq->sense, cmd->err_info->SenseInfo,
			cmd->rq->sense_len);
	} else
		cmd->rq->sense_len = 0;

	return error_value;
}

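evaluate_target_status() pulls the sense key from the low nibble of byte 2 of the sense buffer, which is where fixed-format SCSI sense data keeps it. A tiny sketch of that extraction (the helper name is illustrative):

```c
#include <assert.h>

/* In fixed-format SCSI sense data, the sense key is the low nibble
 * of byte 2; the high nibble holds flag bits. This mirrors the
 * "0xf & SenseInfo[2]" extraction in evaluate_target_status(). */
static unsigned char sense_key_of(const unsigned char *sense)
{
	return sense[2] & 0x0f;
}
```

Keys 0x0 (NO SENSE) and 0x1 (RECOVERED ERROR) are the two values the driver treats as success for filesystem requests.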
/* checks the status of the job and calls complete buffers to mark all
 * buffers for the completed job. Note that this function does not need
 * to hold the hba/queue lock.
 */
static inline void complete_command(ctlr_info_t *h, CommandList_struct *cmd,
	int timeout)
{
	int retry_cmd = 0;
	struct request *rq = cmd->rq;

	rq->errors = 0;

	if (timeout)
		rq->errors = make_status_bytes(0, 0, 0, DRIVER_TIMEOUT);

	if (cmd->err_info->CommandStatus == 0)	/* no error has occurred */
		goto after_error_processing;

	switch (cmd->err_info->CommandStatus) {
	case CMD_TARGET_STATUS:
		rq->errors = evaluate_target_status(h, cmd, &retry_cmd);
		break;
	case CMD_DATA_UNDERRUN:
		if (cmd->rq->cmd_type == REQ_TYPE_FS) {
			dev_warn(&h->pdev->dev, "cmd %p has"
				" completed with data underrun "
				"reported\n", cmd);
			cmd->rq->resid_len = cmd->err_info->ResidualCnt;
		}
		break;
	case CMD_DATA_OVERRUN:
		if (cmd->rq->cmd_type == REQ_TYPE_FS)
			dev_warn(&h->pdev->dev, "cciss: cmd %p has"
				" completed with data overrun "
				"reported\n", cmd);
		break;
	case CMD_INVALID:
		dev_warn(&h->pdev->dev, "cciss: cmd %p is "
			"reported invalid\n", cmd);
		rq->errors = make_status_bytes(SAM_STAT_GOOD,
			cmd->err_info->CommandStatus, DRIVER_OK,
			(cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC) ?
				DID_PASSTHROUGH : DID_ERROR);
		break;
	case CMD_PROTOCOL_ERR:
		dev_warn(&h->pdev->dev, "cciss: cmd %p has "
			"protocol error\n", cmd);
		rq->errors = make_status_bytes(SAM_STAT_GOOD,
			cmd->err_info->CommandStatus, DRIVER_OK,
			(cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC) ?
				DID_PASSTHROUGH : DID_ERROR);
		break;
	case CMD_HARDWARE_ERR:
		dev_warn(&h->pdev->dev, "cciss: cmd %p had "
			" hardware error\n", cmd);
		rq->errors = make_status_bytes(SAM_STAT_GOOD,
			cmd->err_info->CommandStatus, DRIVER_OK,
			(cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC) ?
				DID_PASSTHROUGH : DID_ERROR);
		break;
	case CMD_CONNECTION_LOST:
		dev_warn(&h->pdev->dev, "cciss: cmd %p had "
			"connection lost\n", cmd);
		rq->errors = make_status_bytes(SAM_STAT_GOOD,
			cmd->err_info->CommandStatus, DRIVER_OK,
			(cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC) ?
				DID_PASSTHROUGH : DID_ERROR);
		break;
	case CMD_ABORTED:
		dev_warn(&h->pdev->dev, "cciss: cmd %p was "
			"aborted\n", cmd);
		rq->errors = make_status_bytes(SAM_STAT_GOOD,
			cmd->err_info->CommandStatus, DRIVER_OK,
			(cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC) ?
				DID_PASSTHROUGH : DID_ABORT);
		break;
	case CMD_ABORT_FAILED:
		dev_warn(&h->pdev->dev, "cciss: cmd %p reports "
			"abort failed\n", cmd);
		rq->errors = make_status_bytes(SAM_STAT_GOOD,
			cmd->err_info->CommandStatus, DRIVER_OK,
			(cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC) ?
				DID_PASSTHROUGH : DID_ERROR);
		break;
	case CMD_UNSOLICITED_ABORT:
		dev_warn(&h->pdev->dev, "cciss%d: unsolicited "
			"abort %p\n", h->ctlr, cmd);
		if (cmd->retry_count < MAX_CMD_RETRIES) {
			retry_cmd = 1;
			dev_warn(&h->pdev->dev, "retrying %p\n", cmd);
			cmd->retry_count++;
		} else
			dev_warn(&h->pdev->dev,
				"%p retried too many times\n", cmd);
		rq->errors = make_status_bytes(SAM_STAT_GOOD,
			cmd->err_info->CommandStatus, DRIVER_OK,
			(cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC) ?
				DID_PASSTHROUGH : DID_ABORT);
		break;
	case CMD_TIMEOUT:
		dev_warn(&h->pdev->dev, "cmd %p timedout\n", cmd);
		rq->errors = make_status_bytes(SAM_STAT_GOOD,
			cmd->err_info->CommandStatus, DRIVER_OK,
			(cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC) ?
				DID_PASSTHROUGH : DID_ERROR);
		break;
	case CMD_UNABORTABLE:
		dev_warn(&h->pdev->dev, "cmd %p unabortable\n", cmd);
		rq->errors = make_status_bytes(SAM_STAT_GOOD,
			cmd->err_info->CommandStatus, DRIVER_OK,
			cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC ?
				DID_PASSTHROUGH : DID_ERROR);
		break;
	default:
		dev_warn(&h->pdev->dev, "cmd %p returned "
			"unknown status %x\n", cmd,
			cmd->err_info->CommandStatus);
		rq->errors = make_status_bytes(SAM_STAT_GOOD,
			cmd->err_info->CommandStatus, DRIVER_OK,
			(cmd->rq->cmd_type == REQ_TYPE_BLOCK_PC) ?
				DID_PASSTHROUGH : DID_ERROR);
	}

after_error_processing:

	/* We need to return this command */
	if (retry_cmd) {
		resend_cciss_cmd(h, cmd);
		return;
	}
	cmd->rq->completion_data = cmd;
	blk_complete_request(cmd->rq);
}

static inline u32 cciss_tag_contains_index(u32 tag)
{
#define DIRECT_LOOKUP_BIT 0x10
	return tag & DIRECT_LOOKUP_BIT;
}

static inline u32 cciss_tag_to_index(u32 tag)
{
#define DIRECT_LOOKUP_SHIFT 5
	return tag >> DIRECT_LOOKUP_SHIFT;
}

static inline u32 cciss_tag_discard_error_bits(ctlr_info_t *h, u32 tag)
{
#define CCISS_PERF_ERROR_BITS ((1 << DIRECT_LOOKUP_SHIFT) - 1)
#define CCISS_SIMPLE_ERROR_BITS 0x03
	if (likely(h->transMethod & CFGTBL_Trans_Performant))
		return tag & ~CCISS_PERF_ERROR_BITS;
	return tag & ~CCISS_SIMPLE_ERROR_BITS;
}

static inline void cciss_mark_tag_indexed(u32 *tag)
{
	*tag |= DIRECT_LOOKUP_BIT;
}

static inline void cciss_set_tag_index(u32 *tag, u32 index)
{
	*tag |= (index << DIRECT_LOOKUP_SHIFT);
}

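The tag helpers above encode a command-pool index into the upper bits of the tag (above DIRECT_LOOKUP_SHIFT), set DIRECT_LOOKUP_BIT to flag that the tag carries an index, and leave the low bits free for controller error reporting. A user-space round-trip sketch using the same constants (`encode_tag`/`decode_tag` are illustrative names, not driver symbols):

```c
#include <assert.h>
#include <stdint.h>

#define LOOKUP_BIT   0x10	/* same value as DIRECT_LOOKUP_BIT */
#define LOOKUP_SHIFT 5		/* same value as DIRECT_LOOKUP_SHIFT */

/* Encode a command-pool index into a fresh tag and mark it for
 * direct lookup, mirroring cciss_set_tag_index() followed by
 * cciss_mark_tag_indexed(). */
static uint32_t encode_tag(uint32_t index)
{
	uint32_t tag = 0;

	tag |= index << LOOKUP_SHIFT;	/* index lives above the low bits */
	tag |= LOOKUP_BIT;		/* flag: this tag carries an index */
	return tag;
}

/* Recover the index, mirroring cciss_tag_to_index(); the shift
 * also discards the error/flag bits below it. */
static uint32_t decode_tag(uint32_t tag)
{
	return tag >> LOOKUP_SHIFT;
}
```

Because the decode shifts the flag and error bits away, the round trip is lossless for the index even after the controller fills in the low error bits.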
/*
 * Get a request and submit it to the controller.
 */
static void do_cciss_request(struct request_queue *q)
{
	ctlr_info_t *h = q->queuedata;
	CommandList_struct *c;
	sector_t start_blk;
	int seg;
	struct request *creq;
	u64bit temp64;
	struct scatterlist *tmp_sg;
	SGDescriptor_struct *curr_sg;
	drive_info_struct *drv;
	int i, dir;
	int sg_index = 0;
	int chained = 0;

queue:
	creq = blk_peek_request(q);
	if (!creq)
		goto startio;

	BUG_ON(creq->nr_phys_segments > h->maxsgentries);

	c = cmd_alloc(h);
	if (!c)
		goto full;

	blk_start_request(creq);

	tmp_sg = h->scatter_list[c->cmdindex];
	spin_unlock_irq(q->queue_lock);

	c->cmd_type = CMD_RWREQ;
	c->rq = creq;

	/* fill in the request */
	drv = creq->rq_disk->private_data;
	c->Header.ReplyQueue = 0;	/* unused in simple mode */
	/* got command from pool, so use the command block index instead */
	/* for direct lookups. */
	/* The first 2 bits are reserved for controller error reporting. */
	cciss_set_tag_index(&c->Header.Tag.lower, c->cmdindex);
	cciss_mark_tag_indexed(&c->Header.Tag.lower);
	memcpy(&c->Header.LUN, drv->LunID, sizeof(drv->LunID));
	c->Request.CDBLen = 10;	/* 12 byte commands not in FW yet; */
	c->Request.Type.Type = TYPE_CMD;	/* It is a command. */
	c->Request.Type.Attribute = ATTR_SIMPLE;
	c->Request.Type.Direction =
		(rq_data_dir(creq) == READ) ? XFER_READ : XFER_WRITE;
	c->Request.Timeout = 0;	/* Don't time out */
	c->Request.CDB[0] =
		(rq_data_dir(creq) == READ) ? h->cciss_read : h->cciss_write;
	start_blk = blk_rq_pos(creq);
	dev_dbg(&h->pdev->dev, "sector =%d nr_sectors=%d\n",
		(int)blk_rq_pos(creq), (int)blk_rq_sectors(creq));
	sg_init_table(tmp_sg, h->maxsgentries);
	seg = blk_rq_map_sg(q, creq, tmp_sg);

	/* get the DMA records for the setup */
	if (c->Request.Type.Direction == XFER_READ)
		dir = PCI_DMA_FROMDEVICE;
	else
		dir = PCI_DMA_TODEVICE;

	curr_sg = c->SG;
	sg_index = 0;
	chained = 0;

	for (i = 0; i < seg; i++) {
		if (((sg_index+1) == (h->max_cmd_sgentries)) &&
		    !chained && ((seg - i) > 1)) {
			/* Point to next chain block. */
			curr_sg = h->cmd_sg_list[c->cmdindex];
			sg_index = 0;
			chained = 1;
		}
		curr_sg[sg_index].Len = tmp_sg[i].length;
		temp64.val = (__u64) pci_map_page(h->pdev, sg_page(&tmp_sg[i]),
			tmp_sg[i].offset,
			tmp_sg[i].length, dir);
		curr_sg[sg_index].Addr.lower = temp64.val32.lower;
		curr_sg[sg_index].Addr.upper = temp64.val32.upper;
		curr_sg[sg_index].Ext = 0;	/* we are not chaining */
		++sg_index;
	}
	if (chained)
		cciss_map_sg_chain_block(h, c, h->cmd_sg_list[c->cmdindex],
3349 (seg - (h->max_cmd_sgentries - 1)) * 3354 (seg - (h->max_cmd_sgentries - 1)) *
3350 sizeof(SGDescriptor_struct)); 3355 sizeof(SGDescriptor_struct));
3351 3356
3352 /* track how many SG entries we are using */ 3357 /* track how many SG entries we are using */
3353 if (seg > h->maxSG) 3358 if (seg > h->maxSG)
3354 h->maxSG = seg; 3359 h->maxSG = seg;
3355 3360
3356 dev_dbg(&h->pdev->dev, "Submitting %u sectors in %d segments " 3361 dev_dbg(&h->pdev->dev, "Submitting %u sectors in %d segments "
3357 "chained[%d]\n", 3362 "chained[%d]\n",
3358 blk_rq_sectors(creq), seg, chained); 3363 blk_rq_sectors(creq), seg, chained);
3359 3364
3360 c->Header.SGTotal = seg + chained; 3365 c->Header.SGTotal = seg + chained;
3361 if (seg <= h->max_cmd_sgentries) 3366 if (seg <= h->max_cmd_sgentries)
3362 c->Header.SGList = c->Header.SGTotal; 3367 c->Header.SGList = c->Header.SGTotal;
3363 else 3368 else
3364 c->Header.SGList = h->max_cmd_sgentries; 3369 c->Header.SGList = h->max_cmd_sgentries;
3365 set_performant_mode(h, c); 3370 set_performant_mode(h, c);
3366 3371
3367 if (likely(creq->cmd_type == REQ_TYPE_FS)) { 3372 if (likely(creq->cmd_type == REQ_TYPE_FS)) {
3368 if(h->cciss_read == CCISS_READ_10) { 3373 if(h->cciss_read == CCISS_READ_10) {
3369 c->Request.CDB[1] = 0; 3374 c->Request.CDB[1] = 0;
3370 c->Request.CDB[2] = (start_blk >> 24) & 0xff; /* MSB */ 3375 c->Request.CDB[2] = (start_blk >> 24) & 0xff; /* MSB */
3371 c->Request.CDB[3] = (start_blk >> 16) & 0xff; 3376 c->Request.CDB[3] = (start_blk >> 16) & 0xff;
3372 c->Request.CDB[4] = (start_blk >> 8) & 0xff; 3377 c->Request.CDB[4] = (start_blk >> 8) & 0xff;
3373 c->Request.CDB[5] = start_blk & 0xff; 3378 c->Request.CDB[5] = start_blk & 0xff;
3374 c->Request.CDB[6] = 0; /* (sect >> 24) & 0xff; MSB */ 3379 c->Request.CDB[6] = 0; /* (sect >> 24) & 0xff; MSB */
3375 c->Request.CDB[7] = (blk_rq_sectors(creq) >> 8) & 0xff; 3380 c->Request.CDB[7] = (blk_rq_sectors(creq) >> 8) & 0xff;
3376 c->Request.CDB[8] = blk_rq_sectors(creq) & 0xff; 3381 c->Request.CDB[8] = blk_rq_sectors(creq) & 0xff;
3377 c->Request.CDB[9] = c->Request.CDB[11] = c->Request.CDB[12] = 0; 3382 c->Request.CDB[9] = c->Request.CDB[11] = c->Request.CDB[12] = 0;
3378 } else { 3383 } else {
3379 u32 upper32 = upper_32_bits(start_blk); 3384 u32 upper32 = upper_32_bits(start_blk);
3380 3385
3381 c->Request.CDBLen = 16; 3386 c->Request.CDBLen = 16;
3382 c->Request.CDB[1]= 0; 3387 c->Request.CDB[1]= 0;
3383 c->Request.CDB[2]= (upper32 >> 24) & 0xff; /* MSB */ 3388 c->Request.CDB[2]= (upper32 >> 24) & 0xff; /* MSB */
3384 c->Request.CDB[3]= (upper32 >> 16) & 0xff; 3389 c->Request.CDB[3]= (upper32 >> 16) & 0xff;
3385 c->Request.CDB[4]= (upper32 >> 8) & 0xff; 3390 c->Request.CDB[4]= (upper32 >> 8) & 0xff;
3386 c->Request.CDB[5]= upper32 & 0xff; 3391 c->Request.CDB[5]= upper32 & 0xff;
3387 c->Request.CDB[6]= (start_blk >> 24) & 0xff; 3392 c->Request.CDB[6]= (start_blk >> 24) & 0xff;
3388 c->Request.CDB[7]= (start_blk >> 16) & 0xff; 3393 c->Request.CDB[7]= (start_blk >> 16) & 0xff;
3389 c->Request.CDB[8]= (start_blk >> 8) & 0xff; 3394 c->Request.CDB[8]= (start_blk >> 8) & 0xff;
3390 c->Request.CDB[9]= start_blk & 0xff; 3395 c->Request.CDB[9]= start_blk & 0xff;
3391 c->Request.CDB[10]= (blk_rq_sectors(creq) >> 24) & 0xff; 3396 c->Request.CDB[10]= (blk_rq_sectors(creq) >> 24) & 0xff;
3392 c->Request.CDB[11]= (blk_rq_sectors(creq) >> 16) & 0xff; 3397 c->Request.CDB[11]= (blk_rq_sectors(creq) >> 16) & 0xff;
3393 c->Request.CDB[12]= (blk_rq_sectors(creq) >> 8) & 0xff; 3398 c->Request.CDB[12]= (blk_rq_sectors(creq) >> 8) & 0xff;
3394 c->Request.CDB[13]= blk_rq_sectors(creq) & 0xff; 3399 c->Request.CDB[13]= blk_rq_sectors(creq) & 0xff;
3395 c->Request.CDB[14] = c->Request.CDB[15] = 0; 3400 c->Request.CDB[14] = c->Request.CDB[15] = 0;
3396 } 3401 }
3397 } else if (creq->cmd_type == REQ_TYPE_BLOCK_PC) { 3402 } else if (creq->cmd_type == REQ_TYPE_BLOCK_PC) {
3398 c->Request.CDBLen = creq->cmd_len; 3403 c->Request.CDBLen = creq->cmd_len;
3399 memcpy(c->Request.CDB, creq->cmd, BLK_MAX_CDB); 3404 memcpy(c->Request.CDB, creq->cmd, BLK_MAX_CDB);
3400 } else { 3405 } else {
3401 dev_warn(&h->pdev->dev, "bad request type %d\n", 3406 dev_warn(&h->pdev->dev, "bad request type %d\n",
3402 creq->cmd_type); 3407 creq->cmd_type);
3403 BUG(); 3408 BUG();
3404 } 3409 }

	spin_lock_irq(q->queue_lock);

	addQ(&h->reqQ, c);
	h->Qdepth++;
	if (h->Qdepth > h->maxQsinceinit)
		h->maxQsinceinit = h->Qdepth;

	goto queue;
full:
	blk_stop_queue(q);
startio:
	/* We will already have the driver lock here, so there is no need
	 * to take it.
	 */
	start_io(h);
}

static inline unsigned long get_next_completion(ctlr_info_t *h)
{
	return h->access.command_completed(h);
}

static inline int interrupt_pending(ctlr_info_t *h)
{
	return h->access.intr_pending(h);
}

static inline long interrupt_not_for_us(ctlr_info_t *h)
{
	return ((h->access.intr_pending(h) == 0) ||
		(h->interrupts_enabled == 0));
}

static inline int bad_tag(ctlr_info_t *h, u32 tag_index,
	u32 raw_tag)
{
	if (unlikely(tag_index >= h->nr_cmds)) {
		dev_warn(&h->pdev->dev, "bad tag 0x%08x ignored.\n", raw_tag);
		return 1;
	}
	return 0;
}

static inline void finish_cmd(ctlr_info_t *h, CommandList_struct *c,
	u32 raw_tag)
{
	removeQ(c);
	if (likely(c->cmd_type == CMD_RWREQ))
		complete_command(h, c, 0);
	else if (c->cmd_type == CMD_IOCTL_PEND)
		complete(c->waiting);
#ifdef CONFIG_CISS_SCSI_TAPE
	else if (c->cmd_type == CMD_SCSI)
		complete_scsi_command(c, 0, raw_tag);
#endif
}

static inline u32 next_command(ctlr_info_t *h)
{
	u32 a;

	if (unlikely(!(h->transMethod & CFGTBL_Trans_Performant)))
		return h->access.command_completed(h);

	if ((*(h->reply_pool_head) & 1) == (h->reply_pool_wraparound)) {
		a = *(h->reply_pool_head);	/* Next cmd in ring buffer */
		(h->reply_pool_head)++;
		h->commands_outstanding--;
	} else {
		a = FIFO_EMPTY;
	}
	/* Check for wraparound */
	if (h->reply_pool_head == (h->reply_pool + h->max_commands)) {
		h->reply_pool_head = h->reply_pool;
		h->reply_pool_wraparound ^= 1;
	}
	return a;
}

/* process completion of an indexed ("direct lookup") command */
static inline u32 process_indexed_cmd(ctlr_info_t *h, u32 raw_tag)
{
	u32 tag_index;
	CommandList_struct *c;

	tag_index = cciss_tag_to_index(raw_tag);
	if (bad_tag(h, tag_index, raw_tag))
		return next_command(h);
	c = h->cmd_pool + tag_index;
	finish_cmd(h, c, raw_tag);
	return next_command(h);
}

/* process completion of a non-indexed command */
static inline u32 process_nonindexed_cmd(ctlr_info_t *h, u32 raw_tag)
{
	CommandList_struct *c = NULL;
	__u32 busaddr_masked, tag_masked;

	tag_masked = cciss_tag_discard_error_bits(h, raw_tag);
	list_for_each_entry(c, &h->cmpQ, list) {
		busaddr_masked = cciss_tag_discard_error_bits(h, c->busaddr);
		if (busaddr_masked == tag_masked) {
			finish_cmd(h, c, raw_tag);
			return next_command(h);
		}
	}
	bad_tag(h, h->nr_cmds + 1, raw_tag);
	return next_command(h);
}

/* Some controllers, like p400, will give us one interrupt
 * after a soft reset, even if we turned interrupts off.
 * Only need to check for this in the cciss_xxx_discard_completions
 * functions.
 */
static int ignore_bogus_interrupt(ctlr_info_t *h)
{
	if (likely(!reset_devices))
		return 0;

	if (likely(h->interrupts_enabled))
		return 0;

	dev_info(&h->pdev->dev, "Received interrupt while interrupts disabled "
		"(known firmware bug.)  Ignoring.\n");

	return 1;
}

static irqreturn_t cciss_intx_discard_completions(int irq, void *dev_id)
{
	ctlr_info_t *h = dev_id;
	unsigned long flags;
	u32 raw_tag;

	if (ignore_bogus_interrupt(h))
		return IRQ_NONE;

	if (interrupt_not_for_us(h))
		return IRQ_NONE;
	spin_lock_irqsave(&h->lock, flags);
	while (interrupt_pending(h)) {
		raw_tag = get_next_completion(h);
		while (raw_tag != FIFO_EMPTY)
			raw_tag = next_command(h);
	}
	spin_unlock_irqrestore(&h->lock, flags);
	return IRQ_HANDLED;
}

static irqreturn_t cciss_msix_discard_completions(int irq, void *dev_id)
{
	ctlr_info_t *h = dev_id;
	unsigned long flags;
	u32 raw_tag;

	if (ignore_bogus_interrupt(h))
		return IRQ_NONE;

	spin_lock_irqsave(&h->lock, flags);
	raw_tag = get_next_completion(h);
	while (raw_tag != FIFO_EMPTY)
		raw_tag = next_command(h);
	spin_unlock_irqrestore(&h->lock, flags);
	return IRQ_HANDLED;
}

static irqreturn_t do_cciss_intx(int irq, void *dev_id)
{
	ctlr_info_t *h = dev_id;
	unsigned long flags;
	u32 raw_tag;

	if (interrupt_not_for_us(h))
		return IRQ_NONE;
	spin_lock_irqsave(&h->lock, flags);
	while (interrupt_pending(h)) {
		raw_tag = get_next_completion(h);
		while (raw_tag != FIFO_EMPTY) {
			if (cciss_tag_contains_index(raw_tag))
				raw_tag = process_indexed_cmd(h, raw_tag);
			else
				raw_tag = process_nonindexed_cmd(h, raw_tag);
		}
	}
	spin_unlock_irqrestore(&h->lock, flags);
	return IRQ_HANDLED;
}

/* Add a second interrupt handler for MSI/MSI-X mode. In this mode we never
 * check the interrupt pending register because it is not set.
 */
static irqreturn_t do_cciss_msix_intr(int irq, void *dev_id)
{
	ctlr_info_t *h = dev_id;
	unsigned long flags;
	u32 raw_tag;

	spin_lock_irqsave(&h->lock, flags);
	raw_tag = get_next_completion(h);
	while (raw_tag != FIFO_EMPTY) {
		if (cciss_tag_contains_index(raw_tag))
			raw_tag = process_indexed_cmd(h, raw_tag);
		else
			raw_tag = process_nonindexed_cmd(h, raw_tag);
	}
	spin_unlock_irqrestore(&h->lock, flags);
	return IRQ_HANDLED;
}

/**
 * add_to_scan_list() - add controller to rescan queue
 * @h: Pointer to the controller.
 *
 * Adds the controller to the rescan queue if not already on the queue.
 *
 * returns 1 if added to the queue, 0 if skipped (could be on the
 * queue already, or the controller could be initializing or shutting
 * down).
 **/
static int add_to_scan_list(struct ctlr_info *h)
{
	struct ctlr_info *test_h;
	int found = 0;
	int ret = 0;

	if (h->busy_initializing)
		return 0;

	if (!mutex_trylock(&h->busy_shutting_down))
		return 0;

	mutex_lock(&scan_mutex);
	list_for_each_entry(test_h, &scan_q, scan_list) {
		if (test_h == h) {
			found = 1;
			break;
		}
	}
	if (!found && !h->busy_scanning) {
		INIT_COMPLETION(h->scan_wait);
		list_add_tail(&h->scan_list, &scan_q);
		ret = 1;
	}
	mutex_unlock(&scan_mutex);
	mutex_unlock(&h->busy_shutting_down);

	return ret;
}

/**
 * remove_from_scan_list() - remove controller from rescan queue
 * @h: Pointer to the controller.
 *
 * Removes the controller from the rescan queue if present. Blocks if
 * the controller is currently conducting a rescan. The controller
 * can be in one of three states:
 * 1. Doesn't need a scan
 * 2. On the scan list, but not scanning yet (we remove it)
 * 3. Busy scanning (and not on the list). In this case we want to wait for
 *    the scan to complete to make sure the scanning thread for this
 *    controller is completely idle.
 **/
static void remove_from_scan_list(struct ctlr_info *h)
{
	struct ctlr_info *test_h, *tmp_h;

	mutex_lock(&scan_mutex);
	list_for_each_entry_safe(test_h, tmp_h, &scan_q, scan_list) {
		if (test_h == h) { /* state 2. */
			list_del(&h->scan_list);
			complete_all(&h->scan_wait);
			mutex_unlock(&scan_mutex);
			return;
		}
	}
	if (h->busy_scanning) { /* state 3. */
		mutex_unlock(&scan_mutex);
		wait_for_completion(&h->scan_wait);
	} else { /* state 1, nothing to do. */
		mutex_unlock(&scan_mutex);
	}
}

/**
 * scan_thread() - kernel thread used to rescan controllers
 * @data: Ignored.
 *
 * A kernel thread used to scan for drive topology changes on
 * controllers. The thread processes only one controller at a time
 * using a queue. Controllers are added to the queue using
 * add_to_scan_list() and removed from the queue either after done
 * processing or using remove_from_scan_list().
 *
 * returns 0.
 **/
static int scan_thread(void *data)
{
	struct ctlr_info *h;

	while (1) {
		set_current_state(TASK_INTERRUPTIBLE);
		schedule();
		if (kthread_should_stop())
			break;

		while (1) {
			mutex_lock(&scan_mutex);
			if (list_empty(&scan_q)) {
				mutex_unlock(&scan_mutex);
				break;
			}

			h = list_entry(scan_q.next,
				       struct ctlr_info,
				       scan_list);
			list_del(&h->scan_list);
			h->busy_scanning = 1;
			mutex_unlock(&scan_mutex);

			rebuild_lun_table(h, 0, 0);
			complete_all(&h->scan_wait);
			mutex_lock(&scan_mutex);
			h->busy_scanning = 0;
			mutex_unlock(&scan_mutex);
		}
	}

	return 0;
}

static int check_for_unit_attention(ctlr_info_t *h, CommandList_struct *c)
{
	if (c->err_info->SenseInfo[2] != UNIT_ATTENTION)
		return 0;

	switch (c->err_info->SenseInfo[12]) {
	case STATE_CHANGED:
		dev_warn(&h->pdev->dev, "a state change "
			"detected, command retried\n");
		return 1;
	case LUN_FAILED:
		dev_warn(&h->pdev->dev, "LUN failure "
			"detected, action required\n");
		return 1;
	case REPORT_LUNS_CHANGED:
		dev_warn(&h->pdev->dev, "report LUN data changed\n");
		/*
		 * Here, we could call add_to_scan_list and wake up the scan thread,
		 * except that it's quite likely that we will get more than one
		 * REPORT_LUNS_CHANGED condition in quick succession, which means
		 * that those which occur after the first one will likely happen
		 * *during* the scan_thread's rescan. And the rescan code is not
		 * robust enough to restart in the middle, undoing what it has already
		 * done, and it's not clear that it's even possible to do this, since
		 * part of what it does is notify the block layer, which starts
		 * doing its own i/o to read partition tables and so on, and the
		 * driver doesn't have visibility to know what might need undoing.
		 * In any event, even if possible, it is horribly complicated to get
		 * right, so we just don't do it for now.
		 *
		 * Note: this REPORT_LUNS_CHANGED condition only occurs on the MSA2012.
		 */
		return 1;
	case POWER_OR_RESET:
		dev_warn(&h->pdev->dev,
			"a power on or device reset detected\n");
		return 1;
	case UNIT_ATTENTION_CLEARED:
		dev_warn(&h->pdev->dev,
			"unit attention cleared by another initiator\n");
		return 1;
	default:
		dev_warn(&h->pdev->dev, "unknown unit attention detected\n");
		return 1;
	}
}

/*
 * We cannot read the structure directly; for portability we must use
 * the io functions.
 * This is for debug only.
 */
static void print_cfg_table(ctlr_info_t *h)
{
	int i;
	char temp_name[17];
	CfgTable_struct *tb = h->cfgtable;

	dev_dbg(&h->pdev->dev, "Controller Configuration information\n");
	dev_dbg(&h->pdev->dev, "------------------------------------\n");
	for (i = 0; i < 4; i++)
		temp_name[i] = readb(&(tb->Signature[i]));
	temp_name[4] = '\0';
	dev_dbg(&h->pdev->dev, "   Signature = %s\n", temp_name);
	dev_dbg(&h->pdev->dev, "   Spec Number = %d\n",
		readl(&(tb->SpecValence)));
	dev_dbg(&h->pdev->dev, "   Transport methods supported = 0x%x\n",
		readl(&(tb->TransportSupport)));
	dev_dbg(&h->pdev->dev, "   Transport methods active = 0x%x\n",
		readl(&(tb->TransportActive)));
	dev_dbg(&h->pdev->dev, "   Requested transport Method = 0x%x\n",
		readl(&(tb->HostWrite.TransportRequest)));
	dev_dbg(&h->pdev->dev, "   Coalesce Interrupt Delay = 0x%x\n",
		readl(&(tb->HostWrite.CoalIntDelay)));
	dev_dbg(&h->pdev->dev, "   Coalesce Interrupt Count = 0x%x\n",
		readl(&(tb->HostWrite.CoalIntCount)));
	dev_dbg(&h->pdev->dev, "   Max outstanding commands = %d\n",
		readl(&(tb->CmdsOutMax)));
	dev_dbg(&h->pdev->dev, "   Bus Types = 0x%x\n",
		readl(&(tb->BusTypes)));
	for (i = 0; i < 16; i++)
		temp_name[i] = readb(&(tb->ServerName[i]));
	temp_name[16] = '\0';
	dev_dbg(&h->pdev->dev, "   Server Name = %s\n", temp_name);
	dev_dbg(&h->pdev->dev, "   Heartbeat Counter = 0x%x\n\n\n",
		readl(&(tb->HeartBeat)));
}

static int find_PCI_BAR_index(struct pci_dev *pdev, unsigned long pci_bar_addr)
{
	int i, offset, mem_type, bar_type;

	if (pci_bar_addr == PCI_BASE_ADDRESS_0) /* looking for BAR zero? */
		return 0;
	offset = 0;
	for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) {
		bar_type = pci_resource_flags(pdev, i) & PCI_BASE_ADDRESS_SPACE;
		if (bar_type == PCI_BASE_ADDRESS_SPACE_IO)
			offset += 4;
		else {
			mem_type = pci_resource_flags(pdev, i) &
				PCI_BASE_ADDRESS_MEM_TYPE_MASK;
			switch (mem_type) {
			case PCI_BASE_ADDRESS_MEM_TYPE_32:
			case PCI_BASE_ADDRESS_MEM_TYPE_1M:
				offset += 4; /* 32 bit */
				break;
			case PCI_BASE_ADDRESS_MEM_TYPE_64:
				offset += 8;
				break;
			default: /* reserved in PCI 2.2 */
				dev_warn(&pdev->dev,
					"Base address is invalid\n");
				return -1;
			}
		}
		if (offset == pci_bar_addr - PCI_BASE_ADDRESS_0)
			return i + 1;
	}
	return -1;
}

/* Fill in bucket_map[], given nsgs (the max number of
 * scatter gather elements supported) and bucket[],
 * which is an array of 8 integers. The bucket[] array
 * contains 8 different DMA transfer sizes (in 16
 * byte increments) which the controller uses to fetch
 * commands. This function fills in bucket_map[], which
 * maps a given number of scatter gather elements to one of
 * the 8 DMA transfer sizes. The point of it is to allow the
 * controller to only do as much DMA as needed to fetch the
 * command, with the DMA transfer size encoded in the lower
 * bits of the command address.
 */
static void calc_bucket_map(int bucket[], int num_buckets,
	int nsgs, int *bucket_map)
{
	int i, j, b, size;

	/* even a command with 0 SGs requires 4 blocks */
#define MINIMUM_TRANSFER_BLOCKS 4
#define NUM_BUCKETS 8
	/* Note, bucket_map must have nsgs+1 entries. */
	for (i = 0; i <= nsgs; i++) {
		/* Compute size of a command with i SG entries */
		size = i + MINIMUM_TRANSFER_BLOCKS;
		b = num_buckets; /* Assume the biggest bucket */
		/* Find the bucket that is just big enough */
		for (j = 0; j < NUM_BUCKETS; j++) {
			if (bucket[j] >= size) {
				b = j;
				break;
			}
		}
		/* for a command with i SG entries, use bucket b. */
		bucket_map[i] = b;
	}
}

static void __devinit cciss_wait_for_mode_change_ack(ctlr_info_t *h)
{
	int i;

	/* under certain very rare conditions, this can take awhile.
	 * (e.g.: hot replace a failed 144GB drive in a RAID 5 set right
	 * as we enter this code.) */
	for (i = 0; i < MAX_CONFIG_WAIT; i++) {
		if (!(readl(h->vaddr + SA5_DOORBELL) & CFGTBL_ChangeReq))
			break;
		usleep_range(10000, 20000);
	}
}

static __devinit void cciss_enter_performant_mode(ctlr_info_t *h,
	u32 use_short_tags)
{
	/* This is a bit complicated. There are 8 registers on
	 * the controller which we write to to tell it 8 different
	 * sizes of commands which there may be. It's a way of
	 * reducing the DMA done to fetch each command. Encoded into
	 * each command's tag are 3 bits which communicate to the controller
	 * which of the eight sizes that command fits within. The size of
	 * each command depends on how many scatter gather entries there are.
	 * Each SG entry requires 16 bytes. The eight registers are programmed
	 * with the number of 16-byte blocks a command of that size requires.
	 * The smallest command possible requires 5 such 16-byte blocks.
	 * The largest command possible requires MAXSGENTRIES + 4 16-byte
	 * blocks. Note, this only extends to the SG entries contained
	 * within the command block, and does not extend to chained blocks
	 * of SG elements. bft[] contains the eight values we write to
	 * the registers. They are not evenly distributed, but have more
	 * sizes for small commands, and fewer sizes for larger commands.
	 */
	__u32 trans_offset;
	int bft[8] = { 5, 6, 8, 10, 12, 20, 28, MAXSGENTRIES + 4};
	/*
	 *  5 = 1 s/g entry or 4k
	 *  6 = 2 s/g entry or 8k
	 *  8 = 4 s/g entry or 16k
	 * 10 = 6 s/g entry or 24k
	 */
	unsigned long register_value;
	BUILD_BUG_ON(28 > MAXSGENTRIES + 4);

	h->reply_pool_wraparound = 1; /* spec: init to 1 */

	/* Controller spec: zero out this buffer. */
	memset(h->reply_pool, 0, h->max_commands * sizeof(__u64));
	h->reply_pool_head = h->reply_pool;

	trans_offset = readl(&(h->cfgtable->TransMethodOffset));
	calc_bucket_map(bft, ARRAY_SIZE(bft), h->maxsgentries,
		h->blockFetchTable);
	writel(bft[0], &h->transtable->BlockFetch0);
	writel(bft[1], &h->transtable->BlockFetch1);
	writel(bft[2], &h->transtable->BlockFetch2);
	writel(bft[3], &h->transtable->BlockFetch3);
	writel(bft[4], &h->transtable->BlockFetch4);
	writel(bft[5], &h->transtable->BlockFetch5);
	writel(bft[6], &h->transtable->BlockFetch6);
	writel(bft[7], &h->transtable->BlockFetch7);

	/* size of controller ring buffer */
	writel(h->max_commands, &h->transtable->RepQSize);
	writel(1, &h->transtable->RepQCount);
	writel(0, &h->transtable->RepQCtrAddrLow32);
	writel(0, &h->transtable->RepQCtrAddrHigh32);
	writel(h->reply_pool_dhandle, &h->transtable->RepQAddr0Low32);
	writel(0, &h->transtable->RepQAddr0High32);
	writel(CFGTBL_Trans_Performant | use_short_tags,
		&(h->cfgtable->HostWrite.TransportRequest));

	writel(CFGTBL_ChangeReq, h->vaddr + SA5_DOORBELL);
	cciss_wait_for_mode_change_ack(h);
	register_value = readl(&(h->cfgtable->TransportActive));
	if (!(register_value & CFGTBL_Trans_Performant))
		dev_warn(&h->pdev->dev, "cciss: unable to get board into"
			" performant mode\n");
}

static void __devinit cciss_put_controller_into_performant_mode(ctlr_info_t *h)
{
	__u32 trans_support;

+	if (cciss_simple_mode)
+		return;
+
	dev_dbg(&h->pdev->dev, "Trying to put board into Performant mode\n");
	/* Attempt to put controller into performant mode if supported */
	/* Does board support performant mode? */
	trans_support = readl(&(h->cfgtable->TransportSupport));
	if (!(trans_support & PERFORMANT_MODE))
		return;

	dev_dbg(&h->pdev->dev, "Placing controller into performant mode\n");
	/* Performant mode demands commands on a 32-byte boundary.
	 * pci_alloc_consistent aligns on page boundaries already,
	 * so we just need to check that the size is divisible by 32.
	 */
	if ((sizeof(CommandList_struct) % 32) != 0) {
		dev_warn(&h->pdev->dev,
			"cciss info: command size[%d] not divisible by 32, "
			"no performant mode\n",
			(int)sizeof(CommandList_struct));
		return;
	}

	/* Performant mode ring buffer and supporting data structures */
	h->reply_pool = (__u64 *)pci_alloc_consistent(
		h->pdev, h->max_commands * sizeof(__u64),
		&(h->reply_pool_dhandle));

	/* Need a block fetch table for performant mode */
	h->blockFetchTable = kmalloc(((h->maxsgentries+1) *
		sizeof(__u32)), GFP_KERNEL);

	if ((h->reply_pool == NULL) || (h->blockFetchTable == NULL))
		goto clean_up;

	cciss_enter_performant_mode(h,
		trans_support & CFGTBL_Trans_use_short_tags);

	/* Change the access methods to the performant access methods */
	h->access = SA5_performant_access;
	h->transMethod = CFGTBL_Trans_Performant;

	return;
clean_up:
	kfree(h->blockFetchTable);
	if (h->reply_pool)
		pci_free_consistent(h->pdev,
			h->max_commands * sizeof(__u64),
			h->reply_pool,
			h->reply_pool_dhandle);
	return;

} /* cciss_put_controller_into_performant_mode */

/* If MSI/MSI-X is supported by the kernel we will try to enable it on
 * controllers that are capable. If not, we use IO-APIC mode.
 */

static void __devinit cciss_interrupt_mode(ctlr_info_t *h)
{
#ifdef CONFIG_PCI_MSI
	int err;
	struct msix_entry cciss_msix_entries[4] = { {0, 0}, {0, 1},
		{0, 2}, {0, 3}
	};

	/* Some boards advertise MSI but don't really support it */
	if ((h->board_id == 0x40700E11) || (h->board_id == 0x40800E11) ||
	    (h->board_id == 0x40820E11) || (h->board_id == 0x40830E11))
		goto default_int_mode;

	if (pci_find_capability(h->pdev, PCI_CAP_ID_MSIX)) {
		err = pci_enable_msix(h->pdev, cciss_msix_entries, 4);
		if (!err) {
			h->intr[0] = cciss_msix_entries[0].vector;
			h->intr[1] = cciss_msix_entries[1].vector;
			h->intr[2] = cciss_msix_entries[2].vector;
			h->intr[3] = cciss_msix_entries[3].vector;
			h->msix_vector = 1;
			return;
		}
		if (err > 0) {
			dev_warn(&h->pdev->dev,
				"only %d MSI-X vectors available\n", err);
			goto default_int_mode;
		} else {
			dev_warn(&h->pdev->dev,
				"MSI-X init failed %d\n", err);
			goto default_int_mode;
		}
	}
	if (pci_find_capability(h->pdev, PCI_CAP_ID_MSI)) {
		if (!pci_enable_msi(h->pdev))
			h->msi_vector = 1;
		else
			dev_warn(&h->pdev->dev, "MSI init failed\n");
	}
default_int_mode:
#endif /* CONFIG_PCI_MSI */
	/* if we get here we're going to use the default interrupt mode */
-	h->intr[PERF_MODE_INT] = h->pdev->irq;
+	h->intr[h->intr_mode] = h->pdev->irq;
	return;
}

static int __devinit cciss_lookup_board_id(struct pci_dev *pdev, u32 *board_id)
{
	int i;
	u32 subsystem_vendor_id, subsystem_device_id;

	subsystem_vendor_id = pdev->subsystem_vendor;
	subsystem_device_id = pdev->subsystem_device;
	*board_id = ((subsystem_device_id << 16) & 0xffff0000) |
		subsystem_vendor_id;

	for (i = 0; i < ARRAY_SIZE(products); i++)
		if (*board_id == products[i].board_id)
			return i;
	dev_warn(&pdev->dev, "unrecognized board ID: 0x%08x, ignoring.\n",
		*board_id);
	return -ENODEV;
}

static inline bool cciss_board_disabled(ctlr_info_t *h)
{
	u16 command;

	(void) pci_read_config_word(h->pdev, PCI_COMMAND, &command);
	return ((command & PCI_COMMAND_MEMORY) == 0);
}

static int __devinit cciss_pci_find_memory_BAR(struct pci_dev *pdev,
	unsigned long *memory_bar)
{
	int i;

	for (i = 0; i < DEVICE_COUNT_RESOURCE; i++)
		if (pci_resource_flags(pdev, i) & IORESOURCE_MEM) {
			/* addressing mode bits already removed */
			*memory_bar = pci_resource_start(pdev, i);
			dev_dbg(&pdev->dev, "memory BAR = %lx\n",
				*memory_bar);
			return 0;
		}
	dev_warn(&pdev->dev, "no memory BAR found\n");
	return -ENODEV;
}

static int __devinit cciss_wait_for_board_state(struct pci_dev *pdev,
	void __iomem *vaddr, int wait_for_ready)
#define BOARD_READY 1
#define BOARD_NOT_READY 0
{
	int i, iterations;
	u32 scratchpad;

	if (wait_for_ready)
		iterations = CCISS_BOARD_READY_ITERATIONS;
	else
		iterations = CCISS_BOARD_NOT_READY_ITERATIONS;

	for (i = 0; i < iterations; i++) {
		scratchpad = readl(vaddr + SA5_SCRATCHPAD_OFFSET);
		if (wait_for_ready) {
			if (scratchpad == CCISS_FIRMWARE_READY)
				return 0;
		} else {
			if (scratchpad != CCISS_FIRMWARE_READY)
				return 0;
		}
		msleep(CCISS_BOARD_READY_POLL_INTERVAL_MSECS);
	}
	dev_warn(&pdev->dev, "board not ready, timed out.\n");
	return -ENODEV;
}

static int __devinit cciss_find_cfg_addrs(struct pci_dev *pdev,
	void __iomem *vaddr, u32 *cfg_base_addr, u64 *cfg_base_addr_index,
	u64 *cfg_offset)
{
	*cfg_base_addr = readl(vaddr + SA5_CTCFG_OFFSET);
	*cfg_offset = readl(vaddr + SA5_CTMEM_OFFSET);
	*cfg_base_addr &= (u32) 0x0000ffff;
	*cfg_base_addr_index = find_PCI_BAR_index(pdev, *cfg_base_addr);
	if (*cfg_base_addr_index == -1) {
		dev_warn(&pdev->dev, "cannot find cfg_base_addr_index, "
			"*cfg_base_addr = 0x%08x\n", *cfg_base_addr);
		return -ENODEV;
	}
	return 0;
}

static int __devinit cciss_find_cfgtables(ctlr_info_t *h)
{
	u64 cfg_offset;
	u32 cfg_base_addr;
	u64 cfg_base_addr_index;
	u32 trans_offset;
	int rc;

	rc = cciss_find_cfg_addrs(h->pdev, h->vaddr, &cfg_base_addr,
		&cfg_base_addr_index, &cfg_offset);
	if (rc)
		return rc;
	h->cfgtable = remap_pci_mem(pci_resource_start(h->pdev,
		cfg_base_addr_index) + cfg_offset, sizeof(h->cfgtable));
	if (!h->cfgtable)
		return -ENOMEM;
	rc = write_driver_ver_to_cfgtable(h->cfgtable);
	if (rc)
		return rc;
	/* Find performant mode table. */
	trans_offset = readl(&h->cfgtable->TransMethodOffset);
	h->transtable = remap_pci_mem(pci_resource_start(h->pdev,
		cfg_base_addr_index)+cfg_offset+trans_offset,
		sizeof(*h->transtable));
	if (!h->transtable)
		return -ENOMEM;
	return 0;
}

static void __devinit cciss_get_max_perf_mode_cmds(struct ctlr_info *h)
{
	h->max_commands = readl(&(h->cfgtable->MaxPerformantModeCommands));

	/* Limit commands in memory limited kdump scenario. */
	if (reset_devices && h->max_commands > 32)
		h->max_commands = 32;

	if (h->max_commands < 16) {
		dev_warn(&h->pdev->dev, "Controller reports "
			"max supported commands of %d, an obvious lie. "
			"Using 16. Ensure that firmware is up to date.\n",
			h->max_commands);
		h->max_commands = 16;
	}
}

/* Interrogate the hardware for some limits:
 * max commands, max SG elements without chaining, and with chaining,
 * SG chain block size, etc.
 */
static void __devinit cciss_find_board_params(ctlr_info_t *h)
{
	cciss_get_max_perf_mode_cmds(h);
	h->nr_cmds = h->max_commands - 4 - cciss_tape_cmds;
	h->maxsgentries = readl(&(h->cfgtable->MaxSGElements));
	/*
	 * Limit in-command s/g elements to 32 to save DMA-able memory.
	 * However, the spec says if 0, use 31.
	 */
	h->max_cmd_sgentries = 31;
	if (h->maxsgentries > 512) {
		h->max_cmd_sgentries = 32;
		h->chainsize = h->maxsgentries - h->max_cmd_sgentries + 1;
		h->maxsgentries--; /* save one for chain pointer */
	} else {
		h->maxsgentries = 31; /* default to traditional values */
		h->chainsize = 0;
	}
}

static inline bool CISS_signature_present(ctlr_info_t *h)
{
	if ((readb(&h->cfgtable->Signature[0]) != 'C') ||
	    (readb(&h->cfgtable->Signature[1]) != 'I') ||
	    (readb(&h->cfgtable->Signature[2]) != 'S') ||
	    (readb(&h->cfgtable->Signature[3]) != 'S')) {
		dev_warn(&h->pdev->dev, "not a valid CISS config table\n");
		return false;
	}
	return true;
}

/* Need to enable prefetch in the SCSI core for 6400 in x86 */
static inline void cciss_enable_scsi_prefetch(ctlr_info_t *h)
{
#ifdef CONFIG_X86
	u32 prefetch;

	prefetch = readl(&(h->cfgtable->SCSI_Prefetch));
	prefetch |= 0x100;
	writel(prefetch, &(h->cfgtable->SCSI_Prefetch));
#endif
}

/* Disable DMA prefetch for the P600. Otherwise an ASIC bug may result
 * in a prefetch beyond physical memory.
 */
static inline void cciss_p600_dma_prefetch_quirk(ctlr_info_t *h)
{
	u32 dma_prefetch;
	__u32 dma_refetch;

	if (h->board_id != 0x3225103C)
		return;
	dma_prefetch = readl(h->vaddr + I2O_DMA1_CFG);
	dma_prefetch |= 0x8000;
	writel(dma_prefetch, h->vaddr + I2O_DMA1_CFG);
	pci_read_config_dword(h->pdev, PCI_COMMAND_PARITY, &dma_refetch);
	dma_refetch |= 0x1;
	pci_write_config_dword(h->pdev, PCI_COMMAND_PARITY, dma_refetch);
}

static int __devinit cciss_pci_init(ctlr_info_t *h)
{
	int prod_index, err;

	prod_index = cciss_lookup_board_id(h->pdev, &h->board_id);
	if (prod_index < 0)
		return -ENODEV;
	h->product_name = products[prod_index].product_name;
	h->access = *(products[prod_index].access);

	if (cciss_board_disabled(h)) {
		dev_warn(&h->pdev->dev, "controller appears to be disabled\n");
		return -ENODEV;
	}
	err = pci_enable_device(h->pdev);
	if (err) {
		dev_warn(&h->pdev->dev, "Unable to Enable PCI device\n");
		return err;
	}

	err = pci_request_regions(h->pdev, "cciss");
	if (err) {
		dev_warn(&h->pdev->dev,
			"Cannot obtain PCI resources, aborting\n");
		return err;
	}

	dev_dbg(&h->pdev->dev, "irq = %x\n", h->pdev->irq);
	dev_dbg(&h->pdev->dev, "board_id = %x\n", h->board_id);

	/* If the kernel supports MSI/MSI-X we will try to enable that
	 * functionality, else we use the IO-APIC interrupt assigned to us by
	 * system ROM.
	 */
	cciss_interrupt_mode(h);
	err = cciss_pci_find_memory_BAR(h->pdev, &h->paddr);
	if (err)
		goto err_out_free_res;
	h->vaddr = remap_pci_mem(h->paddr, 0x250);
	if (!h->vaddr) {
		err = -ENOMEM;
		goto err_out_free_res;
	}
	err = cciss_wait_for_board_state(h->pdev, h->vaddr, BOARD_READY);
	if (err)
		goto err_out_free_res;
	err = cciss_find_cfgtables(h);
	if (err)
		goto err_out_free_res;
	print_cfg_table(h);
	cciss_find_board_params(h);

	if (!CISS_signature_present(h)) {
		err = -ENODEV;
		goto err_out_free_res;
	}
	cciss_enable_scsi_prefetch(h);
	cciss_p600_dma_prefetch_quirk(h);
	err = cciss_enter_simple_mode(h);
	if (err)
		goto err_out_free_res;
	cciss_put_controller_into_performant_mode(h);
	return 0;

err_out_free_res:
	/*
	 * Deliberately omit pci_disable_device(): it does something nasty to
	 * Smart Array controllers that pci_enable_device does not undo
	 */
	if (h->transtable)
		iounmap(h->transtable);
	if (h->cfgtable)
		iounmap(h->cfgtable);
	if (h->vaddr)
		iounmap(h->vaddr);
	pci_release_regions(h->pdev);
	return err;
}
4361 4372
/* Function to find the first free pointer into our hba[] array
 * Returns -1 if no free entries are left.
 */
static int alloc_cciss_hba(struct pci_dev *pdev)
{
	int i;

	for (i = 0; i < MAX_CTLR; i++) {
		if (!hba[i]) {
			ctlr_info_t *h;

			h = kzalloc(sizeof(ctlr_info_t), GFP_KERNEL);
			if (!h)
				goto Enomem;
			hba[i] = h;
			return i;
		}
	}
	dev_warn(&pdev->dev, "This driver supports a maximum"
		" of %d controllers.\n", MAX_CTLR);
	return -1;
Enomem:
	dev_warn(&pdev->dev, "out of memory.\n");
	return -1;
}
4387 4398
static void free_hba(ctlr_info_t *h)
{
	int i;

	hba[h->ctlr] = NULL;
	for (i = 0; i < h->highest_lun + 1; i++)
		if (h->gendisk[i] != NULL)
			put_disk(h->gendisk[i]);
	kfree(h);
}
4398 4409
/* Send a message CDB to the firmware. */
static __devinit int cciss_message(struct pci_dev *pdev, unsigned char opcode, unsigned char type)
{
	typedef struct {
		CommandListHeader_struct CommandHeader;
		RequestBlock_struct Request;
		ErrDescriptor_struct ErrorDescriptor;
	} Command;
	static const size_t cmd_sz = sizeof(Command) + sizeof(ErrorInfo_struct);
	Command *cmd;
	dma_addr_t paddr64;
	uint32_t paddr32, tag;
	void __iomem *vaddr;
	int i, err;

	vaddr = ioremap_nocache(pci_resource_start(pdev, 0), pci_resource_len(pdev, 0));
	if (vaddr == NULL)
		return -ENOMEM;

	/* The Inbound Post Queue only accepts 32-bit physical addresses for the
	   CCISS commands, so they must be allocated from the lower 4GiB of
	   memory. */
	err = pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(32));
	if (err) {
		iounmap(vaddr);
		return -ENOMEM;
	}

	cmd = pci_alloc_consistent(pdev, cmd_sz, &paddr64);
	if (cmd == NULL) {
		iounmap(vaddr);
		return -ENOMEM;
	}

	/* This must fit, because of the 32-bit consistent DMA mask.  Also,
	   although there's no guarantee, we assume that the address is at
	   least 4-byte aligned (most likely, it's page-aligned). */
	paddr32 = paddr64;

	cmd->CommandHeader.ReplyQueue = 0;
	cmd->CommandHeader.SGList = 0;
	cmd->CommandHeader.SGTotal = 0;
	cmd->CommandHeader.Tag.lower = paddr32;
	cmd->CommandHeader.Tag.upper = 0;
	memset(&cmd->CommandHeader.LUN.LunAddrBytes, 0, 8);

	cmd->Request.CDBLen = 16;
	cmd->Request.Type.Type = TYPE_MSG;
	cmd->Request.Type.Attribute = ATTR_HEADOFQUEUE;
	cmd->Request.Type.Direction = XFER_NONE;
	cmd->Request.Timeout = 0; /* Don't time out */
	cmd->Request.CDB[0] = opcode;
	cmd->Request.CDB[1] = type;
	memset(&cmd->Request.CDB[2], 0, 14); /* the rest of the CDB is reserved */

	cmd->ErrorDescriptor.Addr.lower = paddr32 + sizeof(Command);
	cmd->ErrorDescriptor.Addr.upper = 0;
	cmd->ErrorDescriptor.Len = sizeof(ErrorInfo_struct);

	writel(paddr32, vaddr + SA5_REQUEST_PORT_OFFSET);

	for (i = 0; i < 10; i++) {
		tag = readl(vaddr + SA5_REPLY_PORT_OFFSET);
		if ((tag & ~3) == paddr32)
			break;
		msleep(CCISS_POST_RESET_NOOP_TIMEOUT_MSECS);
	}

	iounmap(vaddr);

	/* we leak the DMA buffer here ... no choice since the controller could
	   still complete the command. */
	if (i == 10) {
		dev_err(&pdev->dev,
			"controller message %02x:%02x timed out\n",
			opcode, type);
		return -ETIMEDOUT;
	}

	pci_free_consistent(pdev, cmd_sz, cmd, paddr64);

	if (tag & 2) {
		dev_err(&pdev->dev, "controller message %02x:%02x failed\n",
			opcode, type);
		return -EIO;
	}

	dev_info(&pdev->dev, "controller message %02x:%02x succeeded\n",
		opcode, type);
	return 0;
}
4490 4501
#define cciss_noop(p) cciss_message(p, 3, 0)

static int cciss_controller_hard_reset(struct pci_dev *pdev,
	void * __iomem vaddr, u32 use_doorbell)
{
	u16 pmcsr;
	int pos;

	if (use_doorbell) {
		/* For everything after the P600, the PCI power state method
		 * of resetting the controller doesn't work, so we have this
		 * other way using the doorbell register.
		 */
		dev_info(&pdev->dev, "using doorbell to reset controller\n");
		writel(use_doorbell, vaddr + SA5_DOORBELL);
	} else { /* Try to do it the PCI power state way */

		/* Quoting from the Open CISS Specification: "The Power
		 * Management Control/Status Register (CSR) controls the power
		 * state of the device.  The normal operating state is D0,
		 * CSR=00h.  The software off state is D3, CSR=03h.  To reset
		 * the controller, place the interface device in D3 then to D0,
		 * this causes a secondary PCI reset which will reset the
		 * controller." */

		pos = pci_find_capability(pdev, PCI_CAP_ID_PM);
		if (pos == 0) {
			dev_err(&pdev->dev,
				"cciss_controller_hard_reset: "
				"PCI PM not supported\n");
			return -ENODEV;
		}
		dev_info(&pdev->dev, "using PCI PM to reset controller\n");
		/* enter the D3hot power management state */
		pci_read_config_word(pdev, pos + PCI_PM_CTRL, &pmcsr);
		pmcsr &= ~PCI_PM_CTRL_STATE_MASK;
		pmcsr |= PCI_D3hot;
		pci_write_config_word(pdev, pos + PCI_PM_CTRL, pmcsr);

		msleep(500);

		/* enter the D0 power management state */
		pmcsr &= ~PCI_PM_CTRL_STATE_MASK;
		pmcsr |= PCI_D0;
		pci_write_config_word(pdev, pos + PCI_PM_CTRL, pmcsr);
	}
	return 0;
}
4539 4550
static __devinit void init_driver_version(char *driver_version, int len)
{
	memset(driver_version, 0, len);
	strncpy(driver_version, "cciss " DRIVER_NAME, len - 1);
}

static __devinit int write_driver_ver_to_cfgtable(
	CfgTable_struct __iomem *cfgtable)
{
	char *driver_version;
	int i, size = sizeof(cfgtable->driver_version);

	driver_version = kmalloc(size, GFP_KERNEL);
	if (!driver_version)
		return -ENOMEM;

	init_driver_version(driver_version, size);
	for (i = 0; i < size; i++)
		writeb(driver_version[i], &cfgtable->driver_version[i]);
	kfree(driver_version);
	return 0;
}

static __devinit void read_driver_ver_from_cfgtable(
	CfgTable_struct __iomem *cfgtable, unsigned char *driver_ver)
{
	int i;

	for (i = 0; i < sizeof(cfgtable->driver_version); i++)
		driver_ver[i] = readb(&cfgtable->driver_version[i]);
}

static __devinit int controller_reset_failed(
	CfgTable_struct __iomem *cfgtable)
{
	char *driver_ver, *old_driver_ver;
	int rc, size = sizeof(cfgtable->driver_version);

	old_driver_ver = kmalloc(2 * size, GFP_KERNEL);
	if (!old_driver_ver)
		return -ENOMEM;
	driver_ver = old_driver_ver + size;

	/* After a reset, the 32 bytes of "driver version" in the cfgtable
	 * should have been changed, otherwise we know the reset failed.
	 */
	init_driver_version(old_driver_ver, size);
	read_driver_ver_from_cfgtable(cfgtable, driver_ver);
	rc = !memcmp(driver_ver, old_driver_ver, size);
	kfree(old_driver_ver);
	return rc;
}
4593 4604
/* This does a hard reset of the controller using PCI power management
 * states or using the doorbell register. */
static __devinit int cciss_kdump_hard_reset_controller(struct pci_dev *pdev)
{
	u64 cfg_offset;
	u32 cfg_base_addr;
	u64 cfg_base_addr_index;
	void __iomem *vaddr;
	unsigned long paddr;
	u32 misc_fw_support;
	int rc;
	CfgTable_struct __iomem *cfgtable;
	u32 use_doorbell;
	u32 board_id;
	u16 command_register;

	/* For controllers as old as the P600, this is very nearly
	 * the same thing as
	 *
	 * pci_save_state(pci_dev);
	 * pci_set_power_state(pci_dev, PCI_D3hot);
	 * pci_set_power_state(pci_dev, PCI_D0);
	 * pci_restore_state(pci_dev);
	 *
	 * For controllers newer than the P600, the pci power state
	 * method of resetting doesn't work so we have another way
	 * using the doorbell register.
	 */

	/* Exclude 640x boards.  These are two pci devices in one slot
	 * which share a battery backed cache module.  One controls the
	 * cache, the other accesses the cache through the one that controls
	 * it.  If we reset the one controlling the cache, the other will
	 * likely not be happy.  Just forbid resetting this conjoined mess.
	 */
	cciss_lookup_board_id(pdev, &board_id);
	if (!ctlr_is_resettable(board_id)) {
		dev_warn(&pdev->dev, "Cannot reset Smart Array 640x "
				"due to shared cache module.");
		return -ENODEV;
	}

	/* if controller is soft- but not hard resettable... */
	if (!ctlr_is_hard_resettable(board_id))
		return -ENOTSUPP; /* try soft reset later. */

	/* Save the PCI command register */
	pci_read_config_word(pdev, 4, &command_register);
	/* Turn the board off.  This is so that later pci_restore_state()
	 * won't turn the board on before the rest of config space is ready.
	 */
	pci_disable_device(pdev);
	pci_save_state(pdev);

	/* find the first memory BAR, so we can find the cfg table */
	rc = cciss_pci_find_memory_BAR(pdev, &paddr);
	if (rc)
		return rc;
	vaddr = remap_pci_mem(paddr, 0x250);
	if (!vaddr)
		return -ENOMEM;

	/* find cfgtable in order to check if reset via doorbell is supported */
	rc = cciss_find_cfg_addrs(pdev, vaddr, &cfg_base_addr,
					&cfg_base_addr_index, &cfg_offset);
	if (rc)
		goto unmap_vaddr;
	cfgtable = remap_pci_mem(pci_resource_start(pdev,
			cfg_base_addr_index) + cfg_offset, sizeof(*cfgtable));
	if (!cfgtable) {
		rc = -ENOMEM;
		goto unmap_vaddr;
	}
	rc = write_driver_ver_to_cfgtable(cfgtable);
	if (rc)
		goto unmap_vaddr;

	/* If reset via doorbell register is supported, use that.
	 * There are two such methods.  Favor the newest method.
	 */
	misc_fw_support = readl(&cfgtable->misc_fw_support);
	use_doorbell = misc_fw_support & MISC_FW_DOORBELL_RESET2;
	if (use_doorbell) {
		use_doorbell = DOORBELL_CTLR_RESET2;
	} else {
		use_doorbell = misc_fw_support & MISC_FW_DOORBELL_RESET;
		if (use_doorbell) {
			dev_warn(&pdev->dev, "Controller claims that "
				"'Bit 2 doorbell reset' is "
				"supported, but not 'bit 5 doorbell reset'.  "
				"Firmware update is recommended.\n");
			rc = -ENOTSUPP; /* use the soft reset */
			goto unmap_cfgtable;
		}
	}

	rc = cciss_controller_hard_reset(pdev, vaddr, use_doorbell);
	if (rc)
		goto unmap_cfgtable;
	pci_restore_state(pdev);
	rc = pci_enable_device(pdev);
	if (rc) {
		dev_warn(&pdev->dev, "failed to enable device.\n");
		goto unmap_cfgtable;
	}
	pci_write_config_word(pdev, 4, command_register);

	/* Some devices (notably the HP Smart Array 5i Controller)
	   need a little pause here */
	msleep(CCISS_POST_RESET_PAUSE_MSECS);

	/* Wait for board to become not ready, then ready. */
	dev_info(&pdev->dev, "Waiting for board to reset.\n");
	rc = cciss_wait_for_board_state(pdev, vaddr, BOARD_NOT_READY);
	if (rc) {
		dev_warn(&pdev->dev, "Failed waiting for board to hard reset."
				" Will try soft reset.\n");
		rc = -ENOTSUPP; /* Not expected, but try soft reset later */
		goto unmap_cfgtable;
	}
	rc = cciss_wait_for_board_state(pdev, vaddr, BOARD_READY);
	if (rc) {
		dev_warn(&pdev->dev,
			"failed waiting for board to become ready "
			"after hard reset\n");
		goto unmap_cfgtable;
	}

	rc = controller_reset_failed(vaddr);
	if (rc < 0)
		goto unmap_cfgtable;
	if (rc) {
		dev_warn(&pdev->dev, "Unable to successfully hard reset "
			"controller. Will try soft reset.\n");
		rc = -ENOTSUPP; /* Not expected, but try soft reset later */
	} else {
		dev_info(&pdev->dev, "Board ready after hard reset.\n");
	}

unmap_cfgtable:
	iounmap(cfgtable);

unmap_vaddr:
	iounmap(vaddr);
	return rc;
}
4740 4751
static __devinit int cciss_init_reset_devices(struct pci_dev *pdev)
{
	int rc, i;

	if (!reset_devices)
		return 0;

	/* Reset the controller with a PCI power-cycle or via doorbell */
	rc = cciss_kdump_hard_reset_controller(pdev);

	/* -ENOTSUPP here means we cannot reset the controller
	 * but it's already (and still) up and running in
	 * "performant mode".  Or, it might be 640x, which can't reset
	 * due to concerns about shared bbwc between 6402/6404 pair.
	 */
	if (rc == -ENOTSUPP)
		return rc; /* just try to do the kdump anyhow. */
	if (rc)
		return -ENODEV;

	/* Now try to get the controller to respond to a no-op */
	dev_warn(&pdev->dev, "Waiting for controller to respond to no-op\n");
	for (i = 0; i < CCISS_POST_RESET_NOOP_RETRIES; i++) {
		if (cciss_noop(pdev) == 0)
			break;
		else
			dev_warn(&pdev->dev, "no-op failed%s\n",
				(i < CCISS_POST_RESET_NOOP_RETRIES - 1 ?
					"; re-trying" : ""));
		msleep(CCISS_POST_RESET_NOOP_INTERVAL_MSECS);
	}
	return 0;
}
4774 4785
static __devinit int cciss_allocate_cmd_pool(ctlr_info_t *h)
{
	h->cmd_pool_bits = kmalloc(
		DIV_ROUND_UP(h->nr_cmds, BITS_PER_LONG) *
			sizeof(unsigned long), GFP_KERNEL);
	h->cmd_pool = pci_alloc_consistent(h->pdev,
		h->nr_cmds * sizeof(CommandList_struct),
		&(h->cmd_pool_dhandle));
	h->errinfo_pool = pci_alloc_consistent(h->pdev,
		h->nr_cmds * sizeof(ErrorInfo_struct),
		&(h->errinfo_pool_dhandle));
	if ((h->cmd_pool_bits == NULL)
		|| (h->cmd_pool == NULL)
		|| (h->errinfo_pool == NULL)) {
		dev_err(&h->pdev->dev, "out of memory");
		return -ENOMEM;
	}
	return 0;
}

static __devinit int cciss_allocate_scatterlists(ctlr_info_t *h)
{
	int i;

	/* zero it, so that on free we need not know how many were alloc'ed */
	h->scatter_list = kzalloc(h->max_commands *
			sizeof(struct scatterlist *), GFP_KERNEL);
	if (!h->scatter_list)
		return -ENOMEM;

	for (i = 0; i < h->nr_cmds; i++) {
		h->scatter_list[i] = kmalloc(sizeof(struct scatterlist) *
						h->maxsgentries, GFP_KERNEL);
		if (h->scatter_list[i] == NULL) {
			dev_err(&h->pdev->dev, "could not allocate "
				"s/g lists\n");
			return -ENOMEM;
		}
	}
	return 0;
}

static void cciss_free_scatterlists(ctlr_info_t *h)
{
	int i;

	if (h->scatter_list) {
		for (i = 0; i < h->nr_cmds; i++)
			kfree(h->scatter_list[i]);
		kfree(h->scatter_list);
	}
}

static void cciss_free_cmd_pool(ctlr_info_t *h)
{
	kfree(h->cmd_pool_bits);
	if (h->cmd_pool)
		pci_free_consistent(h->pdev,
			h->nr_cmds * sizeof(CommandList_struct),
			h->cmd_pool, h->cmd_pool_dhandle);
4835 if (h->errinfo_pool) 4846 if (h->errinfo_pool)
4836 pci_free_consistent(h->pdev, 4847 pci_free_consistent(h->pdev,
4837 h->nr_cmds * sizeof(ErrorInfo_struct), 4848 h->nr_cmds * sizeof(ErrorInfo_struct),
4838 h->errinfo_pool, h->errinfo_pool_dhandle); 4849 h->errinfo_pool, h->errinfo_pool_dhandle);
4839 } 4850 }
4840 4851
4841 static int cciss_request_irq(ctlr_info_t *h, 4852 static int cciss_request_irq(ctlr_info_t *h,
4842 irqreturn_t (*msixhandler)(int, void *), 4853 irqreturn_t (*msixhandler)(int, void *),
4843 irqreturn_t (*intxhandler)(int, void *)) 4854 irqreturn_t (*intxhandler)(int, void *))
4844 { 4855 {
4845 if (h->msix_vector || h->msi_vector) { 4856 if (h->msix_vector || h->msi_vector) {
4846 if (!request_irq(h->intr[PERF_MODE_INT], msixhandler, 4857 if (!request_irq(h->intr[h->intr_mode], msixhandler,
4847 IRQF_DISABLED, h->devname, h)) 4858 IRQF_DISABLED, h->devname, h))
4848 return 0; 4859 return 0;
4849 dev_err(&h->pdev->dev, "Unable to get msi irq %d" 4860 dev_err(&h->pdev->dev, "Unable to get msi irq %d"
4850 " for %s\n", h->intr[PERF_MODE_INT], 4861 " for %s\n", h->intr[h->intr_mode],
4851 h->devname); 4862 h->devname);
4852 return -1; 4863 return -1;
4853 } 4864 }
4854 4865
4855 if (!request_irq(h->intr[PERF_MODE_INT], intxhandler, 4866 if (!request_irq(h->intr[h->intr_mode], intxhandler,
4856 IRQF_DISABLED, h->devname, h)) 4867 IRQF_DISABLED, h->devname, h))
4857 return 0; 4868 return 0;
4858 dev_err(&h->pdev->dev, "Unable to get irq %d for %s\n", 4869 dev_err(&h->pdev->dev, "Unable to get irq %d for %s\n",
4859 h->intr[PERF_MODE_INT], h->devname); 4870 h->intr[h->intr_mode], h->devname);
4860 return -1; 4871 return -1;
4861 } 4872 }
4862 4873
4863 static int __devinit cciss_kdump_soft_reset(ctlr_info_t *h) 4874 static int __devinit cciss_kdump_soft_reset(ctlr_info_t *h)
4864 { 4875 {
4865 if (cciss_send_reset(h, CTLR_LUNID, CCISS_RESET_TYPE_CONTROLLER)) { 4876 if (cciss_send_reset(h, CTLR_LUNID, CCISS_RESET_TYPE_CONTROLLER)) {
4866 dev_warn(&h->pdev->dev, "Resetting array controller failed.\n"); 4877 dev_warn(&h->pdev->dev, "Resetting array controller failed.\n");
4867 return -EIO; 4878 return -EIO;
4868 } 4879 }
4869 4880
4870 dev_info(&h->pdev->dev, "Waiting for board to soft reset.\n"); 4881 dev_info(&h->pdev->dev, "Waiting for board to soft reset.\n");
4871 if (cciss_wait_for_board_state(h->pdev, h->vaddr, BOARD_NOT_READY)) { 4882 if (cciss_wait_for_board_state(h->pdev, h->vaddr, BOARD_NOT_READY)) {
4872 dev_warn(&h->pdev->dev, "Soft reset had no effect.\n"); 4883 dev_warn(&h->pdev->dev, "Soft reset had no effect.\n");
4873 return -1; 4884 return -1;
4874 } 4885 }
4875 4886
4876 dev_info(&h->pdev->dev, "Board reset, awaiting READY status.\n"); 4887 dev_info(&h->pdev->dev, "Board reset, awaiting READY status.\n");
4877 if (cciss_wait_for_board_state(h->pdev, h->vaddr, BOARD_READY)) { 4888 if (cciss_wait_for_board_state(h->pdev, h->vaddr, BOARD_READY)) {
4878 dev_warn(&h->pdev->dev, "Board failed to become ready " 4889 dev_warn(&h->pdev->dev, "Board failed to become ready "
4879 "after soft reset.\n"); 4890 "after soft reset.\n");
4880 return -1; 4891 return -1;
4881 } 4892 }
4882 4893
4883 return 0; 4894 return 0;
4884 } 4895 }
4885 4896
4886 static void cciss_undo_allocations_after_kdump_soft_reset(ctlr_info_t *h) 4897 static void cciss_undo_allocations_after_kdump_soft_reset(ctlr_info_t *h)
4887 { 4898 {
4888 int ctlr = h->ctlr; 4899 int ctlr = h->ctlr;
4889 4900
4890 free_irq(h->intr[PERF_MODE_INT], h); 4901 free_irq(h->intr[h->intr_mode], h);
4891 #ifdef CONFIG_PCI_MSI 4902 #ifdef CONFIG_PCI_MSI
4892 if (h->msix_vector) 4903 if (h->msix_vector)
4893 pci_disable_msix(h->pdev); 4904 pci_disable_msix(h->pdev);
4894 else if (h->msi_vector) 4905 else if (h->msi_vector)
4895 pci_disable_msi(h->pdev); 4906 pci_disable_msi(h->pdev);
4896 #endif /* CONFIG_PCI_MSI */ 4907 #endif /* CONFIG_PCI_MSI */
4897 cciss_free_sg_chain_blocks(h->cmd_sg_list, h->nr_cmds); 4908 cciss_free_sg_chain_blocks(h->cmd_sg_list, h->nr_cmds);
4898 cciss_free_scatterlists(h); 4909 cciss_free_scatterlists(h);
4899 cciss_free_cmd_pool(h); 4910 cciss_free_cmd_pool(h);
4900 kfree(h->blockFetchTable); 4911 kfree(h->blockFetchTable);
4901 if (h->reply_pool) 4912 if (h->reply_pool)
4902 pci_free_consistent(h->pdev, h->max_commands * sizeof(__u64), 4913 pci_free_consistent(h->pdev, h->max_commands * sizeof(__u64),
4903 h->reply_pool, h->reply_pool_dhandle); 4914 h->reply_pool, h->reply_pool_dhandle);
4904 if (h->transtable) 4915 if (h->transtable)
4905 iounmap(h->transtable); 4916 iounmap(h->transtable);
4906 if (h->cfgtable) 4917 if (h->cfgtable)
4907 iounmap(h->cfgtable); 4918 iounmap(h->cfgtable);
4908 if (h->vaddr) 4919 if (h->vaddr)
4909 iounmap(h->vaddr); 4920 iounmap(h->vaddr);
4910 unregister_blkdev(h->major, h->devname); 4921 unregister_blkdev(h->major, h->devname);
4911 cciss_destroy_hba_sysfs_entry(h); 4922 cciss_destroy_hba_sysfs_entry(h);
4912 pci_release_regions(h->pdev); 4923 pci_release_regions(h->pdev);
4913 kfree(h); 4924 kfree(h);
4914 hba[ctlr] = NULL; 4925 hba[ctlr] = NULL;
4915 } 4926 }
4916 4927
4917 /* 4928 /*
4918 * This is it. Find all the controllers and register them. I really hate 4929 * This is it. Find all the controllers and register them. I really hate
4919 * stealing all these major device numbers. 4930 * stealing all these major device numbers.
4920 * returns the number of block devices registered. 4931 * returns the number of block devices registered.
4921 */ 4932 */
4922 static int __devinit cciss_init_one(struct pci_dev *pdev, 4933 static int __devinit cciss_init_one(struct pci_dev *pdev,
4923 const struct pci_device_id *ent) 4934 const struct pci_device_id *ent)
4924 { 4935 {
4925 int i; 4936 int i;
4926 int j = 0; 4937 int j = 0;
4927 int rc; 4938 int rc;
4928 int try_soft_reset = 0; 4939 int try_soft_reset = 0;
4929 int dac, return_code; 4940 int dac, return_code;
4930 InquiryData_struct *inq_buff; 4941 InquiryData_struct *inq_buff;
4931 ctlr_info_t *h; 4942 ctlr_info_t *h;
4932 unsigned long flags; 4943 unsigned long flags;
4933 4944
4934 rc = cciss_init_reset_devices(pdev); 4945 rc = cciss_init_reset_devices(pdev);
4935 if (rc) { 4946 if (rc) {
4936 if (rc != -ENOTSUPP) 4947 if (rc != -ENOTSUPP)
4937 return rc; 4948 return rc;
4938 /* If the reset fails in a particular way (it has no way to do 4949 /* If the reset fails in a particular way (it has no way to do
4939 * a proper hard reset, so returns -ENOTSUPP) we can try to do 4950 * a proper hard reset, so returns -ENOTSUPP) we can try to do
4940 * a soft reset once we get the controller configured up to the 4951 * a soft reset once we get the controller configured up to the
4941 * point that it can accept a command. 4952 * point that it can accept a command.
4942 */ 4953 */
4943 try_soft_reset = 1; 4954 try_soft_reset = 1;
4944 rc = 0; 4955 rc = 0;
4945 } 4956 }
4946 4957
4947 reinit_after_soft_reset: 4958 reinit_after_soft_reset:
4948 4959
4949 i = alloc_cciss_hba(pdev); 4960 i = alloc_cciss_hba(pdev);
4950 if (i < 0) 4961 if (i < 0)
4951 return -1; 4962 return -1;
4952 4963
4953 h = hba[i]; 4964 h = hba[i];
4954 h->pdev = pdev; 4965 h->pdev = pdev;
4955 h->busy_initializing = 1; 4966 h->busy_initializing = 1;
4967 h->intr_mode = cciss_simple_mode ? SIMPLE_MODE_INT : PERF_MODE_INT;
4956 INIT_LIST_HEAD(&h->cmpQ); 4968 INIT_LIST_HEAD(&h->cmpQ);
4957 INIT_LIST_HEAD(&h->reqQ); 4969 INIT_LIST_HEAD(&h->reqQ);
4958 mutex_init(&h->busy_shutting_down); 4970 mutex_init(&h->busy_shutting_down);
4959 4971
4960 if (cciss_pci_init(h) != 0) 4972 if (cciss_pci_init(h) != 0)
4961 goto clean_no_release_regions; 4973 goto clean_no_release_regions;
4962 4974
4963 sprintf(h->devname, "cciss%d", i); 4975 sprintf(h->devname, "cciss%d", i);
4964 h->ctlr = i; 4976 h->ctlr = i;
4965 4977
4966 if (cciss_tape_cmds < 2) 4978 if (cciss_tape_cmds < 2)
4967 cciss_tape_cmds = 2; 4979 cciss_tape_cmds = 2;
4968 if (cciss_tape_cmds > 16) 4980 if (cciss_tape_cmds > 16)
4969 cciss_tape_cmds = 16; 4981 cciss_tape_cmds = 16;
4970 4982
4971 init_completion(&h->scan_wait); 4983 init_completion(&h->scan_wait);
4972 4984
4973 if (cciss_create_hba_sysfs_entry(h)) 4985 if (cciss_create_hba_sysfs_entry(h))
4974 goto clean0; 4986 goto clean0;
4975 4987
4976 /* configure PCI DMA stuff */ 4988 /* configure PCI DMA stuff */
4977 if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) 4989 if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(64)))
4978 dac = 1; 4990 dac = 1;
4979 else if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(32))) 4991 else if (!pci_set_dma_mask(pdev, DMA_BIT_MASK(32)))
4980 dac = 0; 4992 dac = 0;
4981 else { 4993 else {
4982 dev_err(&h->pdev->dev, "no suitable DMA available\n"); 4994 dev_err(&h->pdev->dev, "no suitable DMA available\n");
4983 goto clean1; 4995 goto clean1;
4984 } 4996 }
4985 4997
4986 /* 4998 /*
4987 * register with the major number, or get a dynamic major number 4999 * register with the major number, or get a dynamic major number
4988 * by passing 0 as argument. This is done for greater than 5000 * by passing 0 as argument. This is done for greater than
4989 * 8 controller support. 5001 * 8 controller support.
4990 */ 5002 */
4991 if (i < MAX_CTLR_ORIG) 5003 if (i < MAX_CTLR_ORIG)
4992 h->major = COMPAQ_CISS_MAJOR + i; 5004 h->major = COMPAQ_CISS_MAJOR + i;
4993 rc = register_blkdev(h->major, h->devname); 5005 rc = register_blkdev(h->major, h->devname);
4994 if (rc == -EBUSY || rc == -EINVAL) { 5006 if (rc == -EBUSY || rc == -EINVAL) {
4995 dev_err(&h->pdev->dev, 5007 dev_err(&h->pdev->dev,
4996 "Unable to get major number %d for %s " 5008 "Unable to get major number %d for %s "
4997 "on hba %d\n", h->major, h->devname, i); 5009 "on hba %d\n", h->major, h->devname, i);
4998 goto clean1; 5010 goto clean1;
4999 } else { 5011 } else {
5000 if (i >= MAX_CTLR_ORIG) 5012 if (i >= MAX_CTLR_ORIG)
5001 h->major = rc; 5013 h->major = rc;
5002 } 5014 }
5003 5015
5004 /* make sure the board interrupts are off */ 5016 /* make sure the board interrupts are off */
5005 h->access.set_intr_mask(h, CCISS_INTR_OFF); 5017 h->access.set_intr_mask(h, CCISS_INTR_OFF);
5006 rc = cciss_request_irq(h, do_cciss_msix_intr, do_cciss_intx); 5018 rc = cciss_request_irq(h, do_cciss_msix_intr, do_cciss_intx);
5007 if (rc) 5019 if (rc)
5008 goto clean2; 5020 goto clean2;
5009 5021
5010 dev_info(&h->pdev->dev, "%s: <0x%x> at PCI %s IRQ %d%s using DAC\n", 5022 dev_info(&h->pdev->dev, "%s: <0x%x> at PCI %s IRQ %d%s using DAC\n",
5011 h->devname, pdev->device, pci_name(pdev), 5023 h->devname, pdev->device, pci_name(pdev),
5012 h->intr[PERF_MODE_INT], dac ? "" : " not"); 5024 h->intr[h->intr_mode], dac ? "" : " not");
5013 5025
5014 if (cciss_allocate_cmd_pool(h)) 5026 if (cciss_allocate_cmd_pool(h))
5015 goto clean4; 5027 goto clean4;
5016 5028
5017 if (cciss_allocate_scatterlists(h)) 5029 if (cciss_allocate_scatterlists(h))
5018 goto clean4; 5030 goto clean4;
5019 5031
5020 h->cmd_sg_list = cciss_allocate_sg_chain_blocks(h, 5032 h->cmd_sg_list = cciss_allocate_sg_chain_blocks(h,
5021 h->chainsize, h->nr_cmds); 5033 h->chainsize, h->nr_cmds);
5022 if (!h->cmd_sg_list && h->chainsize > 0) 5034 if (!h->cmd_sg_list && h->chainsize > 0)
5023 goto clean4; 5035 goto clean4;
5024 5036
5025 spin_lock_init(&h->lock); 5037 spin_lock_init(&h->lock);
5026 5038
5027 /* Initialize the pdev driver private data. 5039 /* Initialize the pdev driver private data.
5028 have it point to h. */ 5040 have it point to h. */
5029 pci_set_drvdata(pdev, h); 5041 pci_set_drvdata(pdev, h);
5030 /* command and error info recs zeroed out before 5042 /* command and error info recs zeroed out before
5031 they are used */ 5043 they are used */
5032 memset(h->cmd_pool_bits, 0, 5044 memset(h->cmd_pool_bits, 0,
5033 DIV_ROUND_UP(h->nr_cmds, BITS_PER_LONG) 5045 DIV_ROUND_UP(h->nr_cmds, BITS_PER_LONG)
5034 * sizeof(unsigned long)); 5046 * sizeof(unsigned long));
5035 5047
5036 h->num_luns = 0; 5048 h->num_luns = 0;
5037 h->highest_lun = -1; 5049 h->highest_lun = -1;
5038 for (j = 0; j < CISS_MAX_LUN; j++) { 5050 for (j = 0; j < CISS_MAX_LUN; j++) {
5039 h->drv[j] = NULL; 5051 h->drv[j] = NULL;
5040 h->gendisk[j] = NULL; 5052 h->gendisk[j] = NULL;
5041 } 5053 }
5042 5054
5043 /* At this point, the controller is ready to take commands. 5055 /* At this point, the controller is ready to take commands.
5044 * Now, if reset_devices and the hard reset didn't work, try 5056 * Now, if reset_devices and the hard reset didn't work, try
5045 * the soft reset and see if that works. 5057 * the soft reset and see if that works.
5046 */ 5058 */
5047 if (try_soft_reset) { 5059 if (try_soft_reset) {
5048 5060
5049 /* This is kind of gross. We may or may not get a completion 5061 /* This is kind of gross. We may or may not get a completion
5050 * from the soft reset command, and if we do, then the value 5062 * from the soft reset command, and if we do, then the value
5051 * from the fifo may or may not be valid. So, we wait 10 secs 5063 * from the fifo may or may not be valid. So, we wait 10 secs
5052 * after the reset throwing away any completions we get during 5064 * after the reset throwing away any completions we get during
5053 * that time. Unregister the interrupt handler and register 5065 * that time. Unregister the interrupt handler and register
5054 * fake ones to scoop up any residual completions. 5066 * fake ones to scoop up any residual completions.
5055 */ 5067 */
5056 spin_lock_irqsave(&h->lock, flags); 5068 spin_lock_irqsave(&h->lock, flags);
5057 h->access.set_intr_mask(h, CCISS_INTR_OFF); 5069 h->access.set_intr_mask(h, CCISS_INTR_OFF);
5058 spin_unlock_irqrestore(&h->lock, flags); 5070 spin_unlock_irqrestore(&h->lock, flags);
5059 free_irq(h->intr[PERF_MODE_INT], h); 5071 free_irq(h->intr[h->intr_mode], h);
5060 rc = cciss_request_irq(h, cciss_msix_discard_completions, 5072 rc = cciss_request_irq(h, cciss_msix_discard_completions,
5061 cciss_intx_discard_completions); 5073 cciss_intx_discard_completions);
5062 if (rc) { 5074 if (rc) {
5063 dev_warn(&h->pdev->dev, "Failed to request_irq after " 5075 dev_warn(&h->pdev->dev, "Failed to request_irq after "
5064 "soft reset.\n"); 5076 "soft reset.\n");
5065 goto clean4; 5077 goto clean4;
5066 } 5078 }
5067 5079
5068 rc = cciss_kdump_soft_reset(h); 5080 rc = cciss_kdump_soft_reset(h);
5069 if (rc) { 5081 if (rc) {
5070 dev_warn(&h->pdev->dev, "Soft reset failed.\n"); 5082 dev_warn(&h->pdev->dev, "Soft reset failed.\n");
5071 goto clean4; 5083 goto clean4;
5072 } 5084 }
5073 5085
5074 dev_info(&h->pdev->dev, "Board READY.\n"); 5086 dev_info(&h->pdev->dev, "Board READY.\n");
5075 dev_info(&h->pdev->dev, 5087 dev_info(&h->pdev->dev,
5076 "Waiting for stale completions to drain.\n"); 5088 "Waiting for stale completions to drain.\n");
5077 h->access.set_intr_mask(h, CCISS_INTR_ON); 5089 h->access.set_intr_mask(h, CCISS_INTR_ON);
5078 msleep(10000); 5090 msleep(10000);
5079 h->access.set_intr_mask(h, CCISS_INTR_OFF); 5091 h->access.set_intr_mask(h, CCISS_INTR_OFF);
5080 5092
5081 rc = controller_reset_failed(h->cfgtable); 5093 rc = controller_reset_failed(h->cfgtable);
5082 if (rc) 5094 if (rc)
5083 dev_info(&h->pdev->dev, 5095 dev_info(&h->pdev->dev,
5084 "Soft reset appears to have failed.\n"); 5096 "Soft reset appears to have failed.\n");
5085 5097
5086 /* since the controller's reset, we have to go back and re-init 5098 /* since the controller's reset, we have to go back and re-init
5087 * everything. Easiest to just forget what we've done and do it 5099 * everything. Easiest to just forget what we've done and do it
5088 * all over again. 5100 * all over again.
5089 */ 5101 */
5090 cciss_undo_allocations_after_kdump_soft_reset(h); 5102 cciss_undo_allocations_after_kdump_soft_reset(h);
5091 try_soft_reset = 0; 5103 try_soft_reset = 0;
5092 if (rc) 5104 if (rc)
5093 /* don't go to clean4, we already unallocated */ 5105 /* don't go to clean4, we already unallocated */
5094 return -ENODEV; 5106 return -ENODEV;
5095 5107
5096 goto reinit_after_soft_reset; 5108 goto reinit_after_soft_reset;
5097 } 5109 }
5098 5110
5099 cciss_scsi_setup(h); 5111 cciss_scsi_setup(h);
5100 5112
5101 /* Turn the interrupts on so we can service requests */ 5113 /* Turn the interrupts on so we can service requests */
5102 h->access.set_intr_mask(h, CCISS_INTR_ON); 5114 h->access.set_intr_mask(h, CCISS_INTR_ON);
5103 5115
5104 /* Get the firmware version */ 5116 /* Get the firmware version */
5105 inq_buff = kzalloc(sizeof(InquiryData_struct), GFP_KERNEL); 5117 inq_buff = kzalloc(sizeof(InquiryData_struct), GFP_KERNEL);
5106 if (inq_buff == NULL) { 5118 if (inq_buff == NULL) {
5107 dev_err(&h->pdev->dev, "out of memory\n"); 5119 dev_err(&h->pdev->dev, "out of memory\n");
5108 goto clean4; 5120 goto clean4;
5109 } 5121 }
5110 5122
5111 return_code = sendcmd_withirq(h, CISS_INQUIRY, inq_buff, 5123 return_code = sendcmd_withirq(h, CISS_INQUIRY, inq_buff,
5112 sizeof(InquiryData_struct), 0, CTLR_LUNID, TYPE_CMD); 5124 sizeof(InquiryData_struct), 0, CTLR_LUNID, TYPE_CMD);
5113 if (return_code == IO_OK) { 5125 if (return_code == IO_OK) {
5114 h->firm_ver[0] = inq_buff->data_byte[32]; 5126 h->firm_ver[0] = inq_buff->data_byte[32];
5115 h->firm_ver[1] = inq_buff->data_byte[33]; 5127 h->firm_ver[1] = inq_buff->data_byte[33];
5116 h->firm_ver[2] = inq_buff->data_byte[34]; 5128 h->firm_ver[2] = inq_buff->data_byte[34];
5117 h->firm_ver[3] = inq_buff->data_byte[35]; 5129 h->firm_ver[3] = inq_buff->data_byte[35];
5118 } else { /* send command failed */ 5130 } else { /* send command failed */
5119 dev_warn(&h->pdev->dev, "unable to determine firmware" 5131 dev_warn(&h->pdev->dev, "unable to determine firmware"
5120 " version of controller\n"); 5132 " version of controller\n");
5121 } 5133 }
5122 kfree(inq_buff); 5134 kfree(inq_buff);
5123 5135
5124 cciss_procinit(h); 5136 cciss_procinit(h);
5125 5137
5126 h->cciss_max_sectors = 8192; 5138 h->cciss_max_sectors = 8192;
5127 5139
5128 rebuild_lun_table(h, 1, 0); 5140 rebuild_lun_table(h, 1, 0);
5129 h->busy_initializing = 0; 5141 h->busy_initializing = 0;
5130 return 1; 5142 return 1;
5131 5143
5132 clean4: 5144 clean4:
5133 cciss_free_cmd_pool(h); 5145 cciss_free_cmd_pool(h);
5134 cciss_free_scatterlists(h); 5146 cciss_free_scatterlists(h);
5135 cciss_free_sg_chain_blocks(h->cmd_sg_list, h->nr_cmds); 5147 cciss_free_sg_chain_blocks(h->cmd_sg_list, h->nr_cmds);
5136 free_irq(h->intr[PERF_MODE_INT], h); 5148 free_irq(h->intr[h->intr_mode], h);
5137 clean2: 5149 clean2:
5138 unregister_blkdev(h->major, h->devname); 5150 unregister_blkdev(h->major, h->devname);
5139 clean1: 5151 clean1:
5140 cciss_destroy_hba_sysfs_entry(h); 5152 cciss_destroy_hba_sysfs_entry(h);
5141 clean0: 5153 clean0:
5142 pci_release_regions(pdev); 5154 pci_release_regions(pdev);
5143 clean_no_release_regions: 5155 clean_no_release_regions:
5144 h->busy_initializing = 0; 5156 h->busy_initializing = 0;
5145 5157
5146 /* 5158 /*
5147 * Deliberately omit pci_disable_device(): it does something nasty to 5159 * Deliberately omit pci_disable_device(): it does something nasty to
5148 * Smart Array controllers that pci_enable_device does not undo 5160 * Smart Array controllers that pci_enable_device does not undo
5149 */ 5161 */
5150 pci_set_drvdata(pdev, NULL); 5162 pci_set_drvdata(pdev, NULL);
5151 free_hba(h); 5163 free_hba(h);
5152 return -1; 5164 return -1;
5153 } 5165 }
5154 5166
5155 static void cciss_shutdown(struct pci_dev *pdev) 5167 static void cciss_shutdown(struct pci_dev *pdev)
5156 { 5168 {
5157 ctlr_info_t *h; 5169 ctlr_info_t *h;
5158 char *flush_buf; 5170 char *flush_buf;
5159 int return_code; 5171 int return_code;
5160 5172
5161 h = pci_get_drvdata(pdev); 5173 h = pci_get_drvdata(pdev);
5162 flush_buf = kzalloc(4, GFP_KERNEL); 5174 flush_buf = kzalloc(4, GFP_KERNEL);
5163 if (!flush_buf) { 5175 if (!flush_buf) {
5164 dev_warn(&h->pdev->dev, "cache not flushed, out of memory.\n"); 5176 dev_warn(&h->pdev->dev, "cache not flushed, out of memory.\n");
5165 return; 5177 return;
5166 } 5178 }
5167 /* write all data in the battery backed cache to disk */ 5179 /* write all data in the battery backed cache to disk */
5168 memset(flush_buf, 0, 4); 5180 memset(flush_buf, 0, 4);
5169 return_code = sendcmd_withirq(h, CCISS_CACHE_FLUSH, flush_buf, 5181 return_code = sendcmd_withirq(h, CCISS_CACHE_FLUSH, flush_buf,
5170 4, 0, CTLR_LUNID, TYPE_CMD); 5182 4, 0, CTLR_LUNID, TYPE_CMD);
5171 kfree(flush_buf); 5183 kfree(flush_buf);
5172 if (return_code != IO_OK) 5184 if (return_code != IO_OK)
5173 dev_warn(&h->pdev->dev, "Error flushing cache\n"); 5185 dev_warn(&h->pdev->dev, "Error flushing cache\n");
5174 h->access.set_intr_mask(h, CCISS_INTR_OFF); 5186 h->access.set_intr_mask(h, CCISS_INTR_OFF);
5175 free_irq(h->intr[PERF_MODE_INT], h); 5187 free_irq(h->intr[h->intr_mode], h);
5176 } 5188 }
5189
5190 static int __devinit cciss_enter_simple_mode(struct ctlr_info *h)
5191 {
5192 u32 trans_support;
5193
5194 trans_support = readl(&(h->cfgtable->TransportSupport));
5195 if (!(trans_support & SIMPLE_MODE))
5196 return -ENOTSUPP;
5197
5198 h->max_commands = readl(&(h->cfgtable->CmdsOutMax));
5199 writel(CFGTBL_Trans_Simple, &(h->cfgtable->HostWrite.TransportRequest));
5200 writel(CFGTBL_ChangeReq, h->vaddr + SA5_DOORBELL);
5201 cciss_wait_for_mode_change_ack(h);
5202 print_cfg_table(h);
5203 if (!(readl(&(h->cfgtable->TransportActive)) & CFGTBL_Trans_Simple)) {
5204 dev_warn(&h->pdev->dev, "unable to get board into simple mode\n");
5205 return -ENODEV;
5206 }
5207 h->transMethod = CFGTBL_Trans_Simple;
5208 return 0;
5209 }
5210
5177 5211
5178 static void __devexit cciss_remove_one(struct pci_dev *pdev) 5212 static void __devexit cciss_remove_one(struct pci_dev *pdev)
5179 { 5213 {
5180 ctlr_info_t *h; 5214 ctlr_info_t *h;
5181 int i, j; 5215 int i, j;
5182 5216
5183 if (pci_get_drvdata(pdev) == NULL) { 5217 if (pci_get_drvdata(pdev) == NULL) {
5184 dev_err(&pdev->dev, "Unable to remove device\n"); 5218 dev_err(&pdev->dev, "Unable to remove device\n");
5185 return; 5219 return;
5186 } 5220 }
5187 5221
5188 h = pci_get_drvdata(pdev); 5222 h = pci_get_drvdata(pdev);
5189 i = h->ctlr; 5223 i = h->ctlr;
5190 if (hba[i] == NULL) { 5224 if (hba[i] == NULL) {
5191 dev_err(&pdev->dev, "device appears to already be removed\n"); 5225 dev_err(&pdev->dev, "device appears to already be removed\n");
5192 return; 5226 return;
5193 } 5227 }
5194 5228
5195 mutex_lock(&h->busy_shutting_down); 5229 mutex_lock(&h->busy_shutting_down);
5196 5230
5197 remove_from_scan_list(h); 5231 remove_from_scan_list(h);
5198 remove_proc_entry(h->devname, proc_cciss); 5232 remove_proc_entry(h->devname, proc_cciss);
5199 unregister_blkdev(h->major, h->devname); 5233 unregister_blkdev(h->major, h->devname);
5200 5234
5201 /* remove it from the disk list */ 5235 /* remove it from the disk list */
5202 for (j = 0; j < CISS_MAX_LUN; j++) { 5236 for (j = 0; j < CISS_MAX_LUN; j++) {
5203 struct gendisk *disk = h->gendisk[j]; 5237 struct gendisk *disk = h->gendisk[j];
5204 if (disk) { 5238 if (disk) {
5205 struct request_queue *q = disk->queue; 5239 struct request_queue *q = disk->queue;
5206 5240
5207 if (disk->flags & GENHD_FL_UP) { 5241 if (disk->flags & GENHD_FL_UP) {
5208 cciss_destroy_ld_sysfs_entry(h, j, 1); 5242 cciss_destroy_ld_sysfs_entry(h, j, 1);
5209 del_gendisk(disk); 5243 del_gendisk(disk);
5210 } 5244 }
5211 if (q) 5245 if (q)
5212 blk_cleanup_queue(q); 5246 blk_cleanup_queue(q);
5213 } 5247 }
5214 } 5248 }
5215 5249
5216 #ifdef CONFIG_CISS_SCSI_TAPE 5250 #ifdef CONFIG_CISS_SCSI_TAPE
5217 cciss_unregister_scsi(h); /* unhook from SCSI subsystem */ 5251 cciss_unregister_scsi(h); /* unhook from SCSI subsystem */
5218 #endif 5252 #endif
5219 5253
5220 cciss_shutdown(pdev); 5254 cciss_shutdown(pdev);
5221 5255
5222 #ifdef CONFIG_PCI_MSI 5256 #ifdef CONFIG_PCI_MSI
5223 if (h->msix_vector) 5257 if (h->msix_vector)
5224 pci_disable_msix(h->pdev); 5258 pci_disable_msix(h->pdev);
5225 else if (h->msi_vector) 5259 else if (h->msi_vector)
5226 pci_disable_msi(h->pdev); 5260 pci_disable_msi(h->pdev);
5227 #endif /* CONFIG_PCI_MSI */ 5261 #endif /* CONFIG_PCI_MSI */
5228 5262
5229 iounmap(h->transtable); 5263 iounmap(h->transtable);
5230 iounmap(h->cfgtable); 5264 iounmap(h->cfgtable);
5231 iounmap(h->vaddr); 5265 iounmap(h->vaddr);
5232 5266
5233 cciss_free_cmd_pool(h); 5267 cciss_free_cmd_pool(h);
5234 /* Free up sg elements */ 5268 /* Free up sg elements */
5235 for (j = 0; j < h->nr_cmds; j++) 5269 for (j = 0; j < h->nr_cmds; j++)
5236 kfree(h->scatter_list[j]); 5270 kfree(h->scatter_list[j]);
5237 kfree(h->scatter_list); 5271 kfree(h->scatter_list);
5238 cciss_free_sg_chain_blocks(h->cmd_sg_list, h->nr_cmds); 5272 cciss_free_sg_chain_blocks(h->cmd_sg_list, h->nr_cmds);
5239 kfree(h->blockFetchTable); 5273 kfree(h->blockFetchTable);
5240 if (h->reply_pool) 5274 if (h->reply_pool)
5241 pci_free_consistent(h->pdev, h->max_commands * sizeof(__u64), 5275 pci_free_consistent(h->pdev, h->max_commands * sizeof(__u64),
5242 h->reply_pool, h->reply_pool_dhandle); 5276 h->reply_pool, h->reply_pool_dhandle);
5243 /* 5277 /*
5244 * Deliberately omit pci_disable_device(): it does something nasty to 5278 * Deliberately omit pci_disable_device(): it does something nasty to
5245 * Smart Array controllers that pci_enable_device does not undo 5279 * Smart Array controllers that pci_enable_device does not undo
5246 */ 5280 */
5247 pci_release_regions(pdev); 5281 pci_release_regions(pdev);
5248 pci_set_drvdata(pdev, NULL); 5282 pci_set_drvdata(pdev, NULL);
5249 cciss_destroy_hba_sysfs_entry(h); 5283 cciss_destroy_hba_sysfs_entry(h);
5250 mutex_unlock(&h->busy_shutting_down); 5284 mutex_unlock(&h->busy_shutting_down);
5251 free_hba(h); 5285 free_hba(h);
5252 } 5286 }
5253 5287
5254 static struct pci_driver cciss_pci_driver = { 5288 static struct pci_driver cciss_pci_driver = {
5255 .name = "cciss", 5289 .name = "cciss",
5256 .probe = cciss_init_one, 5290 .probe = cciss_init_one,
5257 .remove = __devexit_p(cciss_remove_one), 5291 .remove = __devexit_p(cciss_remove_one),
5258 .id_table = cciss_pci_device_id, /* id_table */ 5292 .id_table = cciss_pci_device_id, /* id_table */
5259 .shutdown = cciss_shutdown, 5293 .shutdown = cciss_shutdown,
5260 }; 5294 };
5261 5295
5262 /* 5296 /*
5263 * This is it. Register the PCI driver information for the cards we control 5297 * This is it. Register the PCI driver information for the cards we control
5264 * the OS will call our registered routines when it finds one of our cards. 5298 * the OS will call our registered routines when it finds one of our cards.
5265 */ 5299 */
5266 static int __init cciss_init(void) 5300 static int __init cciss_init(void)
5267 { 5301 {
5268 int err; 5302 int err;
5269 5303
5270 /* 5304 /*
5271 * The hardware requires that commands are aligned on a 64-bit 5305 * The hardware requires that commands are aligned on a 64-bit
5272 * boundary. Given that we use pci_alloc_consistent() to allocate an 5306 * boundary. Given that we use pci_alloc_consistent() to allocate an
5273 * array of them, the size must be a multiple of 8 bytes. 5307 * array of them, the size must be a multiple of 8 bytes.
5274 */ 5308 */
5275 BUILD_BUG_ON(sizeof(CommandList_struct) % COMMANDLIST_ALIGNMENT); 5309 BUILD_BUG_ON(sizeof(CommandList_struct) % COMMANDLIST_ALIGNMENT);
5276 printk(KERN_INFO DRIVER_NAME "\n"); 5310 printk(KERN_INFO DRIVER_NAME "\n");
5277 5311
5278 err = bus_register(&cciss_bus_type); 5312 err = bus_register(&cciss_bus_type);
5279 if (err) 5313 if (err)
5280 return err; 5314 return err;
5281 5315
5282 /* Start the scan thread */ 5316 /* Start the scan thread */
5283 cciss_scan_thread = kthread_run(scan_thread, NULL, "cciss_scan"); 5317 cciss_scan_thread = kthread_run(scan_thread, NULL, "cciss_scan");
5284 if (IS_ERR(cciss_scan_thread)) { 5318 if (IS_ERR(cciss_scan_thread)) {
5285 err = PTR_ERR(cciss_scan_thread); 5319 err = PTR_ERR(cciss_scan_thread);
5286 goto err_bus_unregister; 5320 goto err_bus_unregister;
5287 } 5321 }
5288 5322
5289 /* Register for our PCI devices */ 5323 /* Register for our PCI devices */
5290 err = pci_register_driver(&cciss_pci_driver); 5324 err = pci_register_driver(&cciss_pci_driver);
5291 if (err) 5325 if (err)
5292 goto err_thread_stop; 5326 goto err_thread_stop;
5293 5327
5294 return err; 5328 return err;
5295 5329
5296 err_thread_stop: 5330 err_thread_stop:
5297 kthread_stop(cciss_scan_thread); 5331 kthread_stop(cciss_scan_thread);
5298 err_bus_unregister: 5332 err_bus_unregister:
5299 bus_unregister(&cciss_bus_type); 5333 bus_unregister(&cciss_bus_type);
5300 5334
5301 return err; 5335 return err;
5302 } 5336 }
5303 5337
5304 static void __exit cciss_cleanup(void) 5338 static void __exit cciss_cleanup(void)
5305 { 5339 {
5306 int i; 5340 int i;
5307 5341
5308 pci_unregister_driver(&cciss_pci_driver); 5342 pci_unregister_driver(&cciss_pci_driver);
5309 /* double check that all controller entries have been removed */ 5343 /* double check that all controller entries have been removed */
5310 for (i = 0; i < MAX_CTLR; i++) { 5344 for (i = 0; i < MAX_CTLR; i++) {
5311 if (hba[i] != NULL) { 5345 if (hba[i] != NULL) {
5312 dev_warn(&hba[i]->pdev->dev, 5346 dev_warn(&hba[i]->pdev->dev,
5313 "had to remove controller\n"); 5347 "had to remove controller\n");
5314 cciss_remove_one(hba[i]->pdev); 5348 cciss_remove_one(hba[i]->pdev);
5315 } 5349 }
5316 } 5350 }
5317 kthread_stop(cciss_scan_thread); 5351 kthread_stop(cciss_scan_thread);
5318 if (proc_cciss) 5352 if (proc_cciss)
5319 remove_proc_entry("driver/cciss", NULL); 5353 remove_proc_entry("driver/cciss", NULL);
5320 bus_unregister(&cciss_bus_type); 5354 bus_unregister(&cciss_bus_type);
5321 } 5355 }
5322 5356
5323 module_init(cciss_init); 5357 module_init(cciss_init);
5324 module_exit(cciss_cleanup); 5358 module_exit(cciss_cleanup);
5325 5359
drivers/block/cciss.h
#ifndef CCISS_H
#define CCISS_H

#include <linux/genhd.h>
#include <linux/mutex.h>

#include "cciss_cmd.h"


#define NWD_SHIFT	4
#define MAX_PART	(1 << NWD_SHIFT)

#define IO_OK		0
#define IO_ERROR	1
#define IO_NEEDS_RETRY	3

#define VENDOR_LEN	8
#define MODEL_LEN	16
#define REV_LEN		4

struct ctlr_info;
typedef struct ctlr_info ctlr_info_t;

struct access_method {
	void (*submit_command)(ctlr_info_t *h, CommandList_struct *c);
	void (*set_intr_mask)(ctlr_info_t *h, unsigned long val);
	unsigned long (*fifo_full)(ctlr_info_t *h);
	bool (*intr_pending)(ctlr_info_t *h);
	unsigned long (*command_completed)(ctlr_info_t *h);
};
typedef struct _drive_info_struct
{
	unsigned char LunID[8];
	int usage_count;
	struct request_queue *queue;
	sector_t nr_blocks;
	int block_size;
	int heads;
	int sectors;
	int cylinders;
	int raid_level; /* set to -1 to indicate that
			 * the drive is not in use/configured
			 */
	int busy_configuring; /* This is set when a drive is being removed
			       * to prevent it from being opened or its
			       * queue from being started.
			       */
	struct device dev;
	__u8 serial_no[16]; /* from inquiry page 0x83,
			     * not necessarily null terminated.
			     */
	char vendor[VENDOR_LEN + 1]; /* SCSI vendor string */
	char model[MODEL_LEN + 1]; /* SCSI model string */
	char rev[REV_LEN + 1]; /* SCSI revision string */
	char device_initialized; /* indicates whether dev is initialized */
} drive_info_struct;

struct ctlr_info
{
	int ctlr;
	char devname[8];
	char *product_name;
	char firm_ver[4]; /* Firmware version */
	struct pci_dev *pdev;
	__u32 board_id;
	void __iomem *vaddr;
	unsigned long paddr;
	int nr_cmds; /* Number of commands allowed on this controller */
	CfgTable_struct __iomem *cfgtable;
	int interrupts_enabled;
	int major;
	int max_commands;
	int commands_outstanding;
	int max_outstanding; /* Debug */
	int num_luns;
	int highest_lun;
	int usage_count; /* number of opens on all minor devices */
	/* Need space for temp sg list
	 * number of scatter/gathers supported
	 * number of scatter/gathers in chained block
	 */
	struct scatterlist **scatter_list;
	int maxsgentries;
	int chainsize;
	int max_cmd_sgentries;
	SGDescriptor_struct **cmd_sg_list;

#	define PERF_MODE_INT	0
#	define DOORBELL_INT	1
#	define SIMPLE_MODE_INT	2
#	define MEMQ_MODE_INT	3
	unsigned int intr[4];
	unsigned int msix_vector;
	unsigned int msi_vector;
	int intr_mode;
	int cciss_max_sectors;
	BYTE cciss_read;
	BYTE cciss_write;
	BYTE cciss_read_capacity;

	/* information about each logical volume */
	drive_info_struct *drv[CISS_MAX_LUN];

	struct access_method access;

	/* queue and queue Info */
	struct list_head reqQ;
	struct list_head cmpQ;
	unsigned int Qdepth;
	unsigned int maxQsinceinit;
	unsigned int maxSG;
	spinlock_t lock;

	/* pointers to command and error info pool */
	CommandList_struct *cmd_pool;
	dma_addr_t cmd_pool_dhandle;
	ErrorInfo_struct *errinfo_pool;
	dma_addr_t errinfo_pool_dhandle;
	unsigned long *cmd_pool_bits;
	int nr_allocs;
	int nr_frees;
	int busy_configuring;
	int busy_initializing;
	int busy_scanning;
	struct mutex busy_shutting_down;

	/* This element holds the zero based queue number of the last
	 * queue to be started. It is used for fairness.
	 */
	int next_to_run;

	/* Disk structures we need to pass back */
	struct gendisk *gendisk[CISS_MAX_LUN];
#ifdef CONFIG_CISS_SCSI_TAPE
	struct cciss_scsi_adapter_data_t *scsi_ctlr;
#endif
	unsigned char alive;
	struct list_head scan_list;
	struct completion scan_wait;
	struct device dev;
	/*
	 * Performant mode tables.
	 */
	u32 trans_support;
	u32 trans_offset;
	struct TransTable_struct *transtable;
	unsigned long transMethod;

	/*
	 * Performant mode completion buffer
	 */
	u64 *reply_pool;
	dma_addr_t reply_pool_dhandle;
	u64 *reply_pool_head;
	size_t reply_pool_size;
	unsigned char reply_pool_wraparound;
	u32 *blockFetchTable;
};

/* Defining the different access_methods
 *
 * Memory mapped FIFO interface (SMART 53xx cards)
 */
#define SA5_DOORBELL			0x20
#define SA5_REQUEST_PORT_OFFSET		0x40
#define SA5_REPLY_INTR_MASK_OFFSET	0x34
#define SA5_REPLY_PORT_OFFSET		0x44
#define SA5_INTR_STATUS			0x30
#define SA5_SCRATCHPAD_OFFSET		0xB0

#define SA5_CTCFG_OFFSET	0xB4
#define SA5_CTMEM_OFFSET	0xB8

#define SA5_INTR_OFF		0x08
#define SA5B_INTR_OFF		0x04
#define SA5_INTR_PENDING	0x08
#define SA5B_INTR_PENDING	0x04
#define FIFO_EMPTY		0xffffffff
#define CCISS_FIRMWARE_READY	0xffff0000 /* value in scratchpad register */
/* Perf. mode flags */
#define SA5_PERF_INTR_PENDING	0x04
#define SA5_PERF_INTR_OFF	0x05
#define SA5_OUTDB_STATUS_PERF_BIT	0x01
#define SA5_OUTDB_CLEAR_PERF_BIT	0x01
#define SA5_OUTDB_CLEAR		0xA0
#define SA5_OUTDB_STATUS	0x9C


#define CISS_ERROR_BIT		0x02

#define CCISS_INTR_ON	1
#define CCISS_INTR_OFF	0


/* CCISS_BOARD_READY_WAIT_SECS is how long to wait for a board
 * to become ready, in seconds, before giving up on it.
 * CCISS_BOARD_READY_POLL_INTERVAL_MSECS is how long to wait
 * between polling the board to see if it is ready, in
 * milliseconds. CCISS_BOARD_READY_ITERATIONS is derived from
 * the above.
 */
#define CCISS_BOARD_READY_WAIT_SECS (120)
#define CCISS_BOARD_NOT_READY_WAIT_SECS (100)
#define CCISS_BOARD_READY_POLL_INTERVAL_MSECS (100)
#define CCISS_BOARD_READY_ITERATIONS \
	((CCISS_BOARD_READY_WAIT_SECS * 1000) / \
		CCISS_BOARD_READY_POLL_INTERVAL_MSECS)
#define CCISS_BOARD_NOT_READY_ITERATIONS \
	((CCISS_BOARD_NOT_READY_WAIT_SECS * 1000) / \
		CCISS_BOARD_READY_POLL_INTERVAL_MSECS)
#define CCISS_POST_RESET_PAUSE_MSECS (3000)
#define CCISS_POST_RESET_NOOP_INTERVAL_MSECS (4000)
#define CCISS_POST_RESET_NOOP_RETRIES (12)
#define CCISS_POST_RESET_NOOP_TIMEOUT_MSECS (10000)

/*
 * Send the command to the hardware
 */
static void SA5_submit_command(ctlr_info_t *h, CommandList_struct *c)
{
#ifdef CCISS_DEBUG
	printk(KERN_WARNING "cciss%d: Sending %08x - down to controller\n",
			h->ctlr, c->busaddr);
#endif /* CCISS_DEBUG */
	writel(c->busaddr, h->vaddr + SA5_REQUEST_PORT_OFFSET);
	readl(h->vaddr + SA5_SCRATCHPAD_OFFSET);
	h->commands_outstanding++;
	if (h->commands_outstanding > h->max_outstanding)
		h->max_outstanding = h->commands_outstanding;
}

/*
 * This card is the opposite of the other cards.
 * 0 turns interrupts on...
 * 0x08 turns them off...
 */
static void SA5_intr_mask(ctlr_info_t *h, unsigned long val)
{
	if (val) { /* Turn interrupts on */
		h->interrupts_enabled = 1;
		writel(0, h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
		(void) readl(h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
	} else { /* Turn them off */
		h->interrupts_enabled = 0;
		writel(SA5_INTR_OFF,
			h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
		(void) readl(h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
	}
}

/*
 * This card is the opposite of the other cards.
 * 0 turns interrupts on...
 * 0x04 turns them off...
 */
static void SA5B_intr_mask(ctlr_info_t *h, unsigned long val)
{
	if (val) { /* Turn interrupts on */
		h->interrupts_enabled = 1;
		writel(0, h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
		(void) readl(h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
	} else { /* Turn them off */
		h->interrupts_enabled = 0;
		writel(SA5B_INTR_OFF,
			h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
		(void) readl(h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
	}
}

/* Performant mode intr_mask */
static void SA5_performant_intr_mask(ctlr_info_t *h, unsigned long val)
{
	if (val) { /* turn on interrupts */
		h->interrupts_enabled = 1;
		writel(0, h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
		(void) readl(h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
	} else {
		h->interrupts_enabled = 0;
		writel(SA5_PERF_INTR_OFF,
			h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
		(void) readl(h->vaddr + SA5_REPLY_INTR_MASK_OFFSET);
	}
}

/*
 * Returns true if fifo is full.
 */
static unsigned long SA5_fifo_full(ctlr_info_t *h)
{
	if (h->commands_outstanding >= h->max_commands)
		return 1;
	else
		return 0;
}

/*
 * returns value read from hardware.
 * returns FIFO_EMPTY if there is nothing to read
 */
static unsigned long SA5_completed(ctlr_info_t *h)
{
	unsigned long register_value
		= readl(h->vaddr + SA5_REPLY_PORT_OFFSET);
	if (register_value != FIFO_EMPTY) {
		h->commands_outstanding--;
#ifdef CCISS_DEBUG
		printk("cciss: Read %lx back from board\n", register_value);
#endif /* CCISS_DEBUG */
	}
#ifdef CCISS_DEBUG
	else {
		printk("cciss: FIFO Empty read\n");
	}
#endif
	return register_value;
}

/* Performant mode command completed */
static unsigned long SA5_performant_completed(ctlr_info_t *h)
{
	unsigned long register_value = FIFO_EMPTY;

	/* flush the controller write of the reply queue by reading
	 * outbound doorbell status register.
	 */
	register_value = readl(h->vaddr + SA5_OUTDB_STATUS);
	/* msi auto clears the interrupt pending bit. */
	if (!(h->msi_vector || h->msix_vector)) {
		writel(SA5_OUTDB_CLEAR_PERF_BIT, h->vaddr + SA5_OUTDB_CLEAR);
		/* Do a read in order to flush the write to the controller
		 * (as per spec.)
		 */
		register_value = readl(h->vaddr + SA5_OUTDB_STATUS);
	}

	if ((*(h->reply_pool_head) & 1) == (h->reply_pool_wraparound)) {
		register_value = *(h->reply_pool_head);
		(h->reply_pool_head)++;
		h->commands_outstanding--;
	} else {
		register_value = FIFO_EMPTY;
	}
	/* Check for wraparound */
	if (h->reply_pool_head == (h->reply_pool + h->max_commands)) {
		h->reply_pool_head = h->reply_pool;
		h->reply_pool_wraparound ^= 1;
	}

	return register_value;
}
/*
 * Returns true if an interrupt is pending.
 */
static bool SA5_intr_pending(ctlr_info_t *h)
{
	unsigned long register_value =
		readl(h->vaddr + SA5_INTR_STATUS);
#ifdef CCISS_DEBUG
	printk("cciss: intr_pending %lx\n", register_value);
#endif /* CCISS_DEBUG */
	if (register_value & SA5_INTR_PENDING)
		return 1;
	return 0;
}

/*
 * Returns true if an interrupt is pending.
 */
static bool SA5B_intr_pending(ctlr_info_t *h)
{
	unsigned long register_value =
		readl(h->vaddr + SA5_INTR_STATUS);
#ifdef CCISS_DEBUG
	printk("cciss: intr_pending %lx\n", register_value);
#endif /* CCISS_DEBUG */
	if (register_value & SA5B_INTR_PENDING)
		return 1;
	return 0;
}

static bool SA5_performant_intr_pending(ctlr_info_t *h)
{
	unsigned long register_value = readl(h->vaddr + SA5_INTR_STATUS);

	if (!register_value)
		return false;

	if (h->msi_vector || h->msix_vector)
		return true;

	/* Read outbound doorbell to flush */
	register_value = readl(h->vaddr + SA5_OUTDB_STATUS);
	return register_value & SA5_OUTDB_STATUS_PERF_BIT;
}

static struct access_method SA5_access = {
	SA5_submit_command,
	SA5_intr_mask,
	SA5_fifo_full,
	SA5_intr_pending,
	SA5_completed,
};

static struct access_method SA5B_access = {
	SA5_submit_command,
	SA5B_intr_mask,
	SA5_fifo_full,
	SA5B_intr_pending,
	SA5_completed,
};

static struct access_method SA5_performant_access = {
	SA5_submit_command,
	SA5_performant_intr_mask,
	SA5_fifo_full,
	SA5_performant_intr_pending,
	SA5_performant_completed,
};

struct board_type {
	__u32 board_id;
	char *product_name;
	struct access_method *access;
	int nr_cmds; /* Max cmds this kind of ctlr can handle. */
};

#endif /* CCISS_H */