Commit 0082c16e3a6d87c7b156ccf21f5e6c448b102809

Authored by Linus Torvalds

Merge tag 'spi-3.6' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/misc

Pull spi updates from Mark Brown:
 "Since Grant is even more spectacularly busy than usual for the time
  being I've been collecting SPI patches for him for this release -
  probably things will revert back to Grant before the next release.

  There's nothing too exciting here, mostly it's simple driver specific
  stuff:

   - Add spi: to the modaliases of SPI devices to provide namespacing.
   - A driver for AD-FMCOMMS1-EBZ.
   - DT binding for Orion.
   - Fixes and cleanups for i.MX, PL0022, OMAP and bitbang drivers.

   There may be a few more fixes I've missed, people keep sending me new
   things."

* tag 'spi-3.6' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/misc:
  spi/orion: remove uneeded spi_info
  spi/bcm63xx: fix clock configuration selection
  spi/orion: add device tree binding
  spi/omap2: mark omap2_mcspi_master_setup as __devinit
  spi: omap2-mcspi: Fix the below warning
  spi: Add AD-FMCOMMS1-EBZ I2C-SPI bridge driver
  spi/imx: use gpio_is_valid to determine if a gpio is valid
  spi/imx: remove redundant config.speed_hz setting
  spi/gpio: start with CS non-active
  spi: tegra: use dmaengine based dma driver
  spi/pl022: cleanup pl022 header documentation
  spi/pl022: enable runtime PM
  spi/pl022: delete DB5500 support
  spi/pl022: disable port when unused
  spi: Add "spi:" prefix to modalias attribute of spi devices

Showing 13 changed files (inline diff):

Documentation/devicetree/bindings/spi/spi-orion.txt (new file)

Marvell Orion SPI device

Required properties:
- compatible : should be "marvell,orion-spi".
- reg : offset and length of the register set for the device
- cell-index : Which of multiple SPI controllers is this.
Optional properties:
- interrupts : Is currently not used.

Example:
	spi@10600 {
		compatible = "marvell,orion-spi";
		#address-cells = <1>;
		#size-cells = <0>;
		cell-index = <0>;
		reg = <0x10600 0x28>;
		interrupts = <23>;
		status = "disabled";
	};
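
The node in the example above is left "disabled"; a board's device tree would typically flip it on and describe the attached slaves. A hypothetical board-level fragment (the flash child node, its compatible string, and the frequency are illustrative, not part of this binding):

```dts
spi@10600 {
	status = "okay";

	flash@0 {
		compatible = "st,m25p40";	/* illustrative slave on chipselect 0 */
		reg = <0>;			/* chipselect number */
		spi-max-frequency = <20000000>;
	};
};
```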
drivers/spi/Kconfig (lines changed or added by this merge are marked -/+)

#
# SPI driver configuration
#
# NOTE: the reason this doesn't show SPI slave support is mostly that
# nobody's needed a slave side API yet.  The master-role API is not
# fully appropriate there, so it'd need some thought to do well.
#
menuconfig SPI
	bool "SPI support"
	depends on HAS_IOMEM
	help
	  The "Serial Peripheral Interface" is a low level synchronous
	  protocol.  Chips that support SPI can have data transfer rates
	  up to several tens of Mbit/sec.  Chips are addressed with a
	  controller and a chipselect.  Most SPI slaves don't support
	  dynamic device discovery; some are even write-only or read-only.

	  SPI is widely used by microcontrollers to talk with sensors,
	  eeprom and flash memory, codecs and various other controller
	  chips, analog to digital (and d-to-a) converters, and more.
	  MMC and SD cards can be accessed using SPI protocol; and for
	  DataFlash cards used in MMC sockets, SPI must always be used.

	  SPI is one of a family of similar protocols using a four wire
	  interface (select, clock, data in, data out) including Microwire
	  (half duplex), SSP, SSI, and PSP.  This driver framework should
	  work with most such devices and controllers.

if SPI

config SPI_DEBUG
	boolean "Debug support for SPI drivers"
	depends on DEBUG_KERNEL
	help
	  Say "yes" to enable debug messaging (like dev_dbg and pr_debug),
	  sysfs, and debugfs support in SPI controller and protocol drivers.

#
# MASTER side ... talking to discrete SPI slave chips including microcontrollers
#

config SPI_MASTER
#	boolean "SPI Master Support"
	boolean
	default SPI
	help
	  If your system has an master-capable SPI controller (which
	  provides the clock and chipselect), you can enable that
	  controller and the protocol drivers for the SPI slave chips
	  that are connected.

if SPI_MASTER

comment "SPI Master Controller Drivers"

config SPI_ALTERA
	tristate "Altera SPI Controller"
	select SPI_BITBANG
	help
	  This is the driver for the Altera SPI Controller.

config SPI_ATH79
	tristate "Atheros AR71XX/AR724X/AR913X SPI controller driver"
	depends on ATH79 && GENERIC_GPIO
	select SPI_BITBANG
	help
	  This enables support for the SPI controller present on the
	  Atheros AR71XX/AR724X/AR913X SoCs.

config SPI_ATMEL
	tristate "Atmel SPI Controller"
	depends on (ARCH_AT91 || AVR32)
	help
	  This selects a driver for the Atmel SPI Controller, present on
	  many AT32 (AVR32) and AT91 (ARM) chips.

config SPI_BFIN5XX
	tristate "SPI controller driver for ADI Blackfin5xx"
	depends on BLACKFIN
	help
	  This is the SPI controller master driver for Blackfin 5xx processor.

config SPI_BFIN_SPORT
	tristate "SPI bus via Blackfin SPORT"
	depends on BLACKFIN
	help
	  Enable support for a SPI bus via the Blackfin SPORT peripheral.

config SPI_AU1550
	tristate "Au1550/Au1200/Au1300 SPI Controller"
	depends on MIPS_ALCHEMY && EXPERIMENTAL
	select SPI_BITBANG
	help
	  If you say yes to this option, support will be included for the
	  PSC SPI controller found on Au1550, Au1200 and Au1300 series.

config SPI_BCM63XX
	tristate "Broadcom BCM63xx SPI controller"
	depends on BCM63XX
	help
	  Enable support for the SPI controller on the Broadcom BCM63xx SoCs.

config SPI_BITBANG
	tristate "Utilities for Bitbanging SPI masters"
	help
	  With a few GPIO pins, your system can bitbang the SPI protocol.
	  Select this to get SPI support through I/O pins (GPIO, parallel
	  port, etc).  Or, some systems' SPI master controller drivers use
	  this code to manage the per-word or per-transfer accesses to the
	  hardware shift registers.

	  This is library code, and is automatically selected by drivers that
	  need it.  You only need to select this explicitly to support driver
	  modules that aren't part of this kernel tree.

config SPI_BUTTERFLY
	tristate "Parallel port adapter for AVR Butterfly (DEVELOPMENT)"
	depends on PARPORT
	select SPI_BITBANG
	help
	  This uses a custom parallel port cable to connect to an AVR
	  Butterfly <http://www.atmel.com/products/avr/butterfly>, an
	  inexpensive battery powered microcontroller evaluation board.
	  This same cable can be used to flash new firmware.

config SPI_COLDFIRE_QSPI
	tristate "Freescale Coldfire QSPI controller"
	depends on (M520x || M523x || M5249 || M525x || M527x || M528x || M532x)
	help
	  This enables support for the Coldfire QSPI controller in master
	  mode.

config SPI_DAVINCI
	tristate "Texas Instruments DaVinci/DA8x/OMAP-L/AM1x SoC SPI controller"
	depends on ARCH_DAVINCI
	select SPI_BITBANG
	help
	  SPI master controller for DaVinci/DA8x/OMAP-L/AM1x SPI modules.

config SPI_EP93XX
	tristate "Cirrus Logic EP93xx SPI controller"
	depends on ARCH_EP93XX
	help
	  This enables using the Cirrus EP93xx SPI controller in master
	  mode.

config SPI_GPIO
	tristate "GPIO-based bitbanging SPI Master"
	depends on GENERIC_GPIO
	select SPI_BITBANG
	help
	  This simple GPIO bitbanging SPI master uses the arch-neutral GPIO
	  interface to manage MOSI, MISO, SCK, and chipselect signals.  SPI
	  slaves connected to a bus using this driver are configured as usual,
	  except that the spi_board_info.controller_data holds the GPIO number
	  for the chipselect used by this controller driver.

	  Note that this driver often won't achieve even 1 Mbit/sec speeds,
	  making it unusually slow for SPI.  If your platform can inline
	  GPIO operations, you should be able to leverage that for better
	  speed with a custom version of this driver; see the source code.

config SPI_IMX
	tristate "Freescale i.MX SPI controllers"
	depends on ARCH_MXC
	select SPI_BITBANG
	default m if IMX_HAVE_PLATFORM_SPI_IMX
	help
	  This enables using the Freescale i.MX SPI controllers in master
	  mode.

config SPI_LM70_LLP
	tristate "Parallel port adapter for LM70 eval board (DEVELOPMENT)"
	depends on PARPORT && EXPERIMENTAL
	select SPI_BITBANG
	help
	  This driver supports the NS LM70 LLP Evaluation Board,
	  which interfaces to an LM70 temperature sensor using
	  a parallel port.

config SPI_MPC52xx
	tristate "Freescale MPC52xx SPI (non-PSC) controller support"
	depends on PPC_MPC52xx
	help
	  This drivers supports the MPC52xx SPI controller in master SPI
	  mode.

config SPI_MPC52xx_PSC
	tristate "Freescale MPC52xx PSC SPI controller"
	depends on PPC_MPC52xx && EXPERIMENTAL
	help
	  This enables using the Freescale MPC52xx Programmable Serial
	  Controller in master SPI mode.

config SPI_MPC512x_PSC
	tristate "Freescale MPC512x PSC SPI controller"
	depends on PPC_MPC512x
	help
	  This enables using the Freescale MPC5121 Programmable Serial
	  Controller in SPI master mode.

config SPI_FSL_LIB
	tristate
	depends on FSL_SOC

config SPI_FSL_SPI
	bool "Freescale SPI controller"
	depends on FSL_SOC
	select SPI_FSL_LIB
	help
	  This enables using the Freescale SPI controllers in master mode.
	  MPC83xx platform uses the controller in cpu mode or CPM/QE mode.
	  MPC8569 uses the controller in QE mode, MPC8610 in cpu mode.

config SPI_FSL_ESPI
	bool "Freescale eSPI controller"
	depends on FSL_SOC
	select SPI_FSL_LIB
	help
	  This enables using the Freescale eSPI controllers in master mode.
	  From MPC8536, 85xx platform uses the controller, and all P10xx,
	  P20xx, P30xx,P40xx, P50xx uses this controller.

config SPI_OC_TINY
	tristate "OpenCores tiny SPI"
	depends on GENERIC_GPIO
	select SPI_BITBANG
	help
	  This is the driver for OpenCores tiny SPI master controller.

config SPI_OMAP_UWIRE
	tristate "OMAP1 MicroWire"
	depends on ARCH_OMAP1
	select SPI_BITBANG
	help
	  This hooks up to the MicroWire controller on OMAP1 chips.

config SPI_OMAP24XX
	tristate "McSPI driver for OMAP"
	depends on ARCH_OMAP2PLUS
	help
	  SPI master controller for OMAP24XX and later Multichannel SPI
	  (McSPI) modules.

config SPI_OMAP_100K
	tristate "OMAP SPI 100K"
	depends on ARCH_OMAP850 || ARCH_OMAP730
	help
	  OMAP SPI 100K master controller for omap7xx boards.

config SPI_ORION
	tristate "Orion SPI master (EXPERIMENTAL)"
	depends on PLAT_ORION && EXPERIMENTAL
	help
	  This enables using the SPI master controller on the Orion chips.

config SPI_PL022
	tristate "ARM AMBA PL022 SSP controller"
	depends on ARM_AMBA
	default y if MACH_U300
	default y if ARCH_REALVIEW
	default y if INTEGRATOR_IMPD1
	default y if ARCH_VERSATILE
	help
	  This selects the ARM(R) AMBA(R) PrimeCell PL022 SSP
	  controller.  If you have an embedded system with an AMBA(R)
	  bus and a PL022 controller, say Y or M here.

config SPI_PPC4xx
	tristate "PPC4xx SPI Controller"
	depends on PPC32 && 4xx
	select SPI_BITBANG
	help
	  This selects a driver for the PPC4xx SPI Controller.

config SPI_PXA2XX
	tristate "PXA2xx SSP SPI master"
	depends on (ARCH_PXA || (X86_32 && PCI)) && EXPERIMENTAL
	select PXA_SSP if ARCH_PXA
	help
	  This enables using a PXA2xx or Sodaville SSP port as a SPI master
	  controller.  The driver can be configured to use any SSP port and
	  additional documentation can be found a Documentation/spi/pxa2xx.

config SPI_PXA2XX_PCI
	def_bool SPI_PXA2XX && X86_32 && PCI

config SPI_RSPI
	tristate "Renesas RSPI controller"
	depends on SUPERH
	help
	  SPI driver for Renesas RSPI blocks.

config SPI_S3C24XX
	tristate "Samsung S3C24XX series SPI"
	depends on ARCH_S3C24XX && EXPERIMENTAL
	select SPI_BITBANG
	help
	  SPI driver for Samsung S3C24XX series ARM SoCs

config SPI_S3C24XX_FIQ
	bool "S3C24XX driver with FIQ pseudo-DMA"
	depends on SPI_S3C24XX
	select FIQ
	help
	  Enable FIQ support for the S3C24XX SPI driver to provide pseudo
	  DMA by using the fast-interrupt request framework, This allows
	  the driver to get DMA-like performance when there are either
	  no free DMA channels, or when doing transfers that required both
	  TX and RX data paths.

config SPI_S3C64XX
	tristate "Samsung S3C64XX series type SPI"
	depends on (ARCH_S3C24XX || ARCH_S3C64XX || ARCH_S5P64X0 || ARCH_EXYNOS)
	select S3C64XX_DMA if ARCH_S3C64XX
	help
	  SPI driver for Samsung S3C64XX and newer SoCs.

config SPI_SH_MSIOF
	tristate "SuperH MSIOF SPI controller"
	depends on SUPERH && HAVE_CLK
	select SPI_BITBANG
	help
	  SPI driver for SuperH MSIOF blocks.

config SPI_SH
	tristate "SuperH SPI controller"
	depends on SUPERH
	help
	  SPI driver for SuperH SPI blocks.

config SPI_SH_SCI
	tristate "SuperH SCI SPI controller"
	depends on SUPERH
	select SPI_BITBANG
	help
	  SPI driver for SuperH SCI blocks.

config SPI_SH_HSPI
	tristate "SuperH HSPI controller"
	depends on ARCH_SHMOBILE
	help
	  SPI driver for SuperH HSPI blocks.

config SPI_SIRF
	tristate "CSR SiRFprimaII SPI controller"
	depends on ARCH_PRIMA2
	select SPI_BITBANG
	help
	  SPI driver for CSR SiRFprimaII SoCs

config SPI_STMP3XXX
	tristate "Freescale STMP37xx/378x SPI/SSP controller"
	depends on ARCH_STMP3XXX
	help
	  SPI driver for Freescale STMP37xx/378x SoC SSP interface

config SPI_TEGRA
	tristate "Nvidia Tegra SPI controller"
-	depends on ARCH_TEGRA && TEGRA_SYSTEM_DMA
+	depends on ARCH_TEGRA && (TEGRA_SYSTEM_DMA || TEGRA20_APB_DMA)
	help
	  SPI driver for NVidia Tegra SoCs

config SPI_TI_SSP
	tristate "TI Sequencer Serial Port - SPI Support"
	depends on MFD_TI_SSP
	help
	  This selects an SPI master implementation using a TI sequencer
	  serial port.

config SPI_TOPCLIFF_PCH
	tristate "Intel EG20T PCH/LAPIS Semicon IOH(ML7213/ML7223/ML7831) SPI"
	depends on PCI
	help
	  SPI driver for the Topcliff PCH (Platform Controller Hub) SPI bus
	  used in some x86 embedded processors.

	  This driver also supports the ML7213/ML7223/ML7831, a companion chip
	  for the Atom E6xx series and compatible with the Intel EG20T PCH.

config SPI_TXX9
	tristate "Toshiba TXx9 SPI controller"
	depends on GENERIC_GPIO && CPU_TX49XX
	help
	  SPI driver for Toshiba TXx9 MIPS SoCs
+
+config SPI_XCOMM
+	tristate "Analog Devices AD-FMCOMMS1-EBZ SPI-I2C-bridge driver"
+	depends on I2C
+	help
+	  Support for the SPI-I2C bridge found on the Analog Devices
+	  AD-FMCOMMS1-EBZ board.

config SPI_XILINX
	tristate "Xilinx SPI controller common module"
	depends on HAS_IOMEM && EXPERIMENTAL
	select SPI_BITBANG
	help
	  This exposes the SPI controller IP from the Xilinx EDK.

	  See the "OPB Serial Peripheral Interface (SPI) (v1.00e)"
	  Product Specification document (DS464) for hardware details.

	  Or for the DS570, see "XPS Serial Peripheral Interface (SPI) (v2.00b)"

config SPI_NUC900
	tristate "Nuvoton NUC900 series SPI"
	depends on ARCH_W90X900 && EXPERIMENTAL
	select SPI_BITBANG
	help
	  SPI driver for Nuvoton NUC900 series ARM SoCs

#
# Add new SPI master controllers in alphabetical order above this line
#

config SPI_DESIGNWARE
	tristate "DesignWare SPI controller core support"
	help
	  general driver for SPI controller core from DesignWare

config SPI_DW_PCI
	tristate "PCI interface driver for DW SPI core"
	depends on SPI_DESIGNWARE && PCI

config SPI_DW_MID_DMA
	bool "DMA support for DW SPI controller on Intel Moorestown platform"
	depends on SPI_DW_PCI && INTEL_MID_DMAC

config SPI_DW_MMIO
	tristate "Memory-mapped io interface driver for DW SPI core"
	depends on SPI_DESIGNWARE && HAVE_CLK

#
# There are lots of SPI device types, with sensors and memory
# being probably the most widely used ones.
#
comment "SPI Protocol Masters"

config SPI_SPIDEV
	tristate "User mode SPI device driver support"
	depends on EXPERIMENTAL
	help
	  This supports user mode SPI protocol drivers.

	  Note that this application programming interface is EXPERIMENTAL
	  and hence SUBJECT TO CHANGE WITHOUT NOTICE while it stabilizes.

config SPI_TLE62X0
	tristate "Infineon TLE62X0 (for power switching)"
	depends on SYSFS
	help
	  SPI driver for Infineon TLE62X0 series line driver chips,
	  such as the TLE6220, TLE6230 and TLE6240.  This provides a
	  sysfs interface, with each line presented as a kind of GPIO
	  exposing both switch control and diagnostic feedback.

#
# Add new SPI protocol masters in alphabetical order above this line
#

endif # SPI_MASTER

# (slave support would go here)

endif # SPI
drivers/spi/Makefile

#
# Makefile for kernel SPI drivers.
#

ccflags-$(CONFIG_SPI_DEBUG) := -DDEBUG

# small core, mostly translating board-specific
# config declarations into driver model code
obj-$(CONFIG_SPI_MASTER)		+= spi.o
obj-$(CONFIG_SPI_SPIDEV)		+= spidev.o

# SPI master controller drivers (bus)
obj-$(CONFIG_SPI_ALTERA)		+= spi-altera.o
obj-$(CONFIG_SPI_ATMEL)			+= spi-atmel.o
obj-$(CONFIG_SPI_ATH79)			+= spi-ath79.o
obj-$(CONFIG_SPI_AU1550)		+= spi-au1550.o
obj-$(CONFIG_SPI_BCM63XX)		+= spi-bcm63xx.o
obj-$(CONFIG_SPI_BFIN5XX)		+= spi-bfin5xx.o
obj-$(CONFIG_SPI_BFIN_SPORT)		+= spi-bfin-sport.o
obj-$(CONFIG_SPI_BITBANG)		+= spi-bitbang.o
obj-$(CONFIG_SPI_BUTTERFLY)		+= spi-butterfly.o
obj-$(CONFIG_SPI_COLDFIRE_QSPI)		+= spi-coldfire-qspi.o
obj-$(CONFIG_SPI_DAVINCI)		+= spi-davinci.o
obj-$(CONFIG_SPI_DESIGNWARE)		+= spi-dw.o
obj-$(CONFIG_SPI_DW_MMIO)		+= spi-dw-mmio.o
obj-$(CONFIG_SPI_DW_PCI)		+= spi-dw-midpci.o
spi-dw-midpci-objs			:= spi-dw-pci.o spi-dw-mid.o
obj-$(CONFIG_SPI_EP93XX)		+= spi-ep93xx.o
obj-$(CONFIG_SPI_FSL_LIB)		+= spi-fsl-lib.o
30 obj-$(CONFIG_SPI_FSL_ESPI) += spi-fsl-espi.o 30 obj-$(CONFIG_SPI_FSL_ESPI) += spi-fsl-espi.o
31 obj-$(CONFIG_SPI_FSL_SPI) += spi-fsl-spi.o 31 obj-$(CONFIG_SPI_FSL_SPI) += spi-fsl-spi.o
32 obj-$(CONFIG_SPI_GPIO) += spi-gpio.o 32 obj-$(CONFIG_SPI_GPIO) += spi-gpio.o
33 obj-$(CONFIG_SPI_IMX) += spi-imx.o 33 obj-$(CONFIG_SPI_IMX) += spi-imx.o
34 obj-$(CONFIG_SPI_LM70_LLP) += spi-lm70llp.o 34 obj-$(CONFIG_SPI_LM70_LLP) += spi-lm70llp.o
35 obj-$(CONFIG_SPI_MPC512x_PSC) += spi-mpc512x-psc.o 35 obj-$(CONFIG_SPI_MPC512x_PSC) += spi-mpc512x-psc.o
36 obj-$(CONFIG_SPI_MPC52xx_PSC) += spi-mpc52xx-psc.o 36 obj-$(CONFIG_SPI_MPC52xx_PSC) += spi-mpc52xx-psc.o
37 obj-$(CONFIG_SPI_MPC52xx) += spi-mpc52xx.o 37 obj-$(CONFIG_SPI_MPC52xx) += spi-mpc52xx.o
38 obj-$(CONFIG_SPI_NUC900) += spi-nuc900.o 38 obj-$(CONFIG_SPI_NUC900) += spi-nuc900.o
39 obj-$(CONFIG_SPI_OC_TINY) += spi-oc-tiny.o 39 obj-$(CONFIG_SPI_OC_TINY) += spi-oc-tiny.o
40 obj-$(CONFIG_SPI_OMAP_UWIRE) += spi-omap-uwire.o 40 obj-$(CONFIG_SPI_OMAP_UWIRE) += spi-omap-uwire.o
41 obj-$(CONFIG_SPI_OMAP_100K) += spi-omap-100k.o 41 obj-$(CONFIG_SPI_OMAP_100K) += spi-omap-100k.o
42 obj-$(CONFIG_SPI_OMAP24XX) += spi-omap2-mcspi.o 42 obj-$(CONFIG_SPI_OMAP24XX) += spi-omap2-mcspi.o
43 obj-$(CONFIG_SPI_ORION) += spi-orion.o 43 obj-$(CONFIG_SPI_ORION) += spi-orion.o
44 obj-$(CONFIG_SPI_PL022) += spi-pl022.o 44 obj-$(CONFIG_SPI_PL022) += spi-pl022.o
45 obj-$(CONFIG_SPI_PPC4xx) += spi-ppc4xx.o 45 obj-$(CONFIG_SPI_PPC4xx) += spi-ppc4xx.o
46 obj-$(CONFIG_SPI_PXA2XX) += spi-pxa2xx.o 46 obj-$(CONFIG_SPI_PXA2XX) += spi-pxa2xx.o
47 obj-$(CONFIG_SPI_PXA2XX_PCI) += spi-pxa2xx-pci.o 47 obj-$(CONFIG_SPI_PXA2XX_PCI) += spi-pxa2xx-pci.o
48 obj-$(CONFIG_SPI_RSPI) += spi-rspi.o 48 obj-$(CONFIG_SPI_RSPI) += spi-rspi.o
49 obj-$(CONFIG_SPI_S3C24XX) += spi-s3c24xx-hw.o 49 obj-$(CONFIG_SPI_S3C24XX) += spi-s3c24xx-hw.o
50 spi-s3c24xx-hw-y := spi-s3c24xx.o 50 spi-s3c24xx-hw-y := spi-s3c24xx.o
51 spi-s3c24xx-hw-$(CONFIG_SPI_S3C24XX_FIQ) += spi-s3c24xx-fiq.o 51 spi-s3c24xx-hw-$(CONFIG_SPI_S3C24XX_FIQ) += spi-s3c24xx-fiq.o
52 obj-$(CONFIG_SPI_S3C64XX) += spi-s3c64xx.o 52 obj-$(CONFIG_SPI_S3C64XX) += spi-s3c64xx.o
53 obj-$(CONFIG_SPI_SH) += spi-sh.o 53 obj-$(CONFIG_SPI_SH) += spi-sh.o
54 obj-$(CONFIG_SPI_SH_HSPI) += spi-sh-hspi.o 54 obj-$(CONFIG_SPI_SH_HSPI) += spi-sh-hspi.o
55 obj-$(CONFIG_SPI_SH_MSIOF) += spi-sh-msiof.o 55 obj-$(CONFIG_SPI_SH_MSIOF) += spi-sh-msiof.o
56 obj-$(CONFIG_SPI_SH_SCI) += spi-sh-sci.o 56 obj-$(CONFIG_SPI_SH_SCI) += spi-sh-sci.o
57 obj-$(CONFIG_SPI_SIRF) += spi-sirf.o 57 obj-$(CONFIG_SPI_SIRF) += spi-sirf.o
58 obj-$(CONFIG_SPI_STMP3XXX) += spi-stmp.o 58 obj-$(CONFIG_SPI_STMP3XXX) += spi-stmp.o
59 obj-$(CONFIG_SPI_TEGRA) += spi-tegra.o 59 obj-$(CONFIG_SPI_TEGRA) += spi-tegra.o
60 obj-$(CONFIG_SPI_TI_SSP) += spi-ti-ssp.o 60 obj-$(CONFIG_SPI_TI_SSP) += spi-ti-ssp.o
61 obj-$(CONFIG_SPI_TLE62X0) += spi-tle62x0.o 61 obj-$(CONFIG_SPI_TLE62X0) += spi-tle62x0.o
62 obj-$(CONFIG_SPI_TOPCLIFF_PCH) += spi-topcliff-pch.o 62 obj-$(CONFIG_SPI_TOPCLIFF_PCH) += spi-topcliff-pch.o
63 obj-$(CONFIG_SPI_TXX9) += spi-txx9.o 63 obj-$(CONFIG_SPI_TXX9) += spi-txx9.o
64 obj-$(CONFIG_SPI_XCOMM) += spi-xcomm.o
64 obj-$(CONFIG_SPI_XILINX) += spi-xilinx.o 65 obj-$(CONFIG_SPI_XILINX) += spi-xilinx.o
65 66
66 67
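The Makefile above follows the standard Kbuild convention: each controller is compiled only when its Kconfig symbol is enabled, via `obj-$(CONFIG_...)`, with entries kept in rough alphabetical order (which is why the new `spi-xcomm.o` line lands just before `spi-xilinx.o`). As a sketch, wiring up a hypothetical new driver would look like this; `CONFIG_SPI_FOO` and the `spi-foo*` files are invented for illustration and do not exist in the tree:

```make
# Hypothetical single-file driver: built in for =y, as a module for =m,
# skipped entirely for =n, depending on the CONFIG_SPI_FOO Kconfig symbol.
obj-$(CONFIG_SPI_FOO)			+= spi-foo.o

# A driver split across several source files uses a <name>-objs (or
# <name>-y) list, as spi-dw-midpci and spi-s3c24xx-hw do above.
spi-foo-objs				:= spi-foo-core.o spi-foo-dma.o
```

The matching `config SPI_FOO` entry would go in drivers/spi/Kconfig, above the "Add new SPI protocol masters in alphabetical order above this line" marker shown earlier.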
drivers/spi/spi-bcm63xx.c
1 /* 1 /*
2 * Broadcom BCM63xx SPI controller support 2 * Broadcom BCM63xx SPI controller support
3 * 3 *
4 * Copyright (C) 2009-2012 Florian Fainelli <florian@openwrt.org> 4 * Copyright (C) 2009-2012 Florian Fainelli <florian@openwrt.org>
5 * Copyright (C) 2010 Tanguy Bouzeloc <tanguy.bouzeloc@efixo.com> 5 * Copyright (C) 2010 Tanguy Bouzeloc <tanguy.bouzeloc@efixo.com>
6 * 6 *
7 * This program is free software; you can redistribute it and/or 7 * This program is free software; you can redistribute it and/or
8 * modify it under the terms of the GNU General Public License 8 * modify it under the terms of the GNU General Public License
9 * as published by the Free Software Foundation; either version 2 9 * as published by the Free Software Foundation; either version 2
10 * of the License, or (at your option) any later version. 10 * of the License, or (at your option) any later version.
11 * 11 *
12 * This program is distributed in the hope that it will be useful, 12 * This program is distributed in the hope that it will be useful,
13 * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 * but WITHOUT ANY WARRANTY; without even the implied warranty of
14 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 * GNU General Public License for more details. 15 * GNU General Public License for more details.
16 * 16 *
17 * You should have received a copy of the GNU General Public License 17 * You should have received a copy of the GNU General Public License
18 * along with this program; if not, write to the 18 * along with this program; if not, write to the
19 * Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, 19 * Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
20 */ 20 */
21 21
22 #include <linux/kernel.h> 22 #include <linux/kernel.h>
23 #include <linux/init.h> 23 #include <linux/init.h>
24 #include <linux/clk.h> 24 #include <linux/clk.h>
25 #include <linux/io.h> 25 #include <linux/io.h>
26 #include <linux/module.h> 26 #include <linux/module.h>
27 #include <linux/platform_device.h> 27 #include <linux/platform_device.h>
28 #include <linux/delay.h> 28 #include <linux/delay.h>
29 #include <linux/interrupt.h> 29 #include <linux/interrupt.h>
30 #include <linux/spi/spi.h> 30 #include <linux/spi/spi.h>
31 #include <linux/completion.h> 31 #include <linux/completion.h>
32 #include <linux/err.h> 32 #include <linux/err.h>
33 #include <linux/workqueue.h> 33 #include <linux/workqueue.h>
34 #include <linux/pm_runtime.h> 34 #include <linux/pm_runtime.h>
35 35
36 #include <bcm63xx_dev_spi.h> 36 #include <bcm63xx_dev_spi.h>
37 37
38 #define PFX KBUILD_MODNAME 38 #define PFX KBUILD_MODNAME
39 #define DRV_VER "0.1.2" 39 #define DRV_VER "0.1.2"
40 40
41 struct bcm63xx_spi { 41 struct bcm63xx_spi {
42 struct completion done; 42 struct completion done;
43 43
44 void __iomem *regs; 44 void __iomem *regs;
45 int irq; 45 int irq;
46 46
47 /* Platform data */ 47 /* Platform data */
48 u32 speed_hz; 48 u32 speed_hz;
49 unsigned fifo_size; 49 unsigned fifo_size;
50 50
51 /* Data buffers */ 51 /* Data buffers */
52 const unsigned char *tx_ptr; 52 const unsigned char *tx_ptr;
53 unsigned char *rx_ptr; 53 unsigned char *rx_ptr;
54 54
55 /* data iomem */ 55 /* data iomem */
56 u8 __iomem *tx_io; 56 u8 __iomem *tx_io;
57 const u8 __iomem *rx_io; 57 const u8 __iomem *rx_io;
58 58
59 int remaining_bytes; 59 int remaining_bytes;
60 60
61 struct clk *clk; 61 struct clk *clk;
62 struct platform_device *pdev; 62 struct platform_device *pdev;
63 }; 63 };
64 64
65 static inline u8 bcm_spi_readb(struct bcm63xx_spi *bs, 65 static inline u8 bcm_spi_readb(struct bcm63xx_spi *bs,
66 unsigned int offset) 66 unsigned int offset)
67 { 67 {
68 return bcm_readb(bs->regs + bcm63xx_spireg(offset)); 68 return bcm_readb(bs->regs + bcm63xx_spireg(offset));
69 } 69 }
70 70
71 static inline u16 bcm_spi_readw(struct bcm63xx_spi *bs, 71 static inline u16 bcm_spi_readw(struct bcm63xx_spi *bs,
72 unsigned int offset) 72 unsigned int offset)
73 { 73 {
74 return bcm_readw(bs->regs + bcm63xx_spireg(offset)); 74 return bcm_readw(bs->regs + bcm63xx_spireg(offset));
75 } 75 }
76 76
77 static inline void bcm_spi_writeb(struct bcm63xx_spi *bs, 77 static inline void bcm_spi_writeb(struct bcm63xx_spi *bs,
78 u8 value, unsigned int offset) 78 u8 value, unsigned int offset)
79 { 79 {
80 bcm_writeb(value, bs->regs + bcm63xx_spireg(offset)); 80 bcm_writeb(value, bs->regs + bcm63xx_spireg(offset));
81 } 81 }
82 82
83 static inline void bcm_spi_writew(struct bcm63xx_spi *bs, 83 static inline void bcm_spi_writew(struct bcm63xx_spi *bs,
84 u16 value, unsigned int offset) 84 u16 value, unsigned int offset)
85 { 85 {
86 bcm_writew(value, bs->regs + bcm63xx_spireg(offset)); 86 bcm_writew(value, bs->regs + bcm63xx_spireg(offset));
87 } 87 }
88 88
89 static const unsigned bcm63xx_spi_freq_table[SPI_CLK_MASK][2] = { 89 static const unsigned bcm63xx_spi_freq_table[SPI_CLK_MASK][2] = {
90 { 20000000, SPI_CLK_20MHZ }, 90 { 20000000, SPI_CLK_20MHZ },
91 { 12500000, SPI_CLK_12_50MHZ }, 91 { 12500000, SPI_CLK_12_50MHZ },
92 { 6250000, SPI_CLK_6_250MHZ }, 92 { 6250000, SPI_CLK_6_250MHZ },
93 { 3125000, SPI_CLK_3_125MHZ }, 93 { 3125000, SPI_CLK_3_125MHZ },
94 { 1563000, SPI_CLK_1_563MHZ }, 94 { 1563000, SPI_CLK_1_563MHZ },
95 { 781000, SPI_CLK_0_781MHZ }, 95 { 781000, SPI_CLK_0_781MHZ },
96 { 391000, SPI_CLK_0_391MHZ } 96 { 391000, SPI_CLK_0_391MHZ }
97 }; 97 };
98 98
99 static int bcm63xx_spi_check_transfer(struct spi_device *spi, 99 static int bcm63xx_spi_check_transfer(struct spi_device *spi,
100 struct spi_transfer *t) 100 struct spi_transfer *t)
101 { 101 {
102 u8 bits_per_word; 102 u8 bits_per_word;
103 103
104 bits_per_word = (t) ? t->bits_per_word : spi->bits_per_word; 104 bits_per_word = (t) ? t->bits_per_word : spi->bits_per_word;
105 if (bits_per_word != 8) { 105 if (bits_per_word != 8) {
106 dev_err(&spi->dev, "%s, unsupported bits_per_word=%d\n", 106 dev_err(&spi->dev, "%s, unsupported bits_per_word=%d\n",
107 __func__, bits_per_word); 107 __func__, bits_per_word);
108 return -EINVAL; 108 return -EINVAL;
109 } 109 }
110 110
111 if (spi->chip_select > spi->master->num_chipselect) { 111 if (spi->chip_select > spi->master->num_chipselect) {
112 dev_err(&spi->dev, "%s, unsupported slave %d\n", 112 dev_err(&spi->dev, "%s, unsupported slave %d\n",
113 __func__, spi->chip_select); 113 __func__, spi->chip_select);
114 return -EINVAL; 114 return -EINVAL;
115 } 115 }
116 116
117 return 0; 117 return 0;
118 } 118 }
119 119
120 static void bcm63xx_spi_setup_transfer(struct spi_device *spi, 120 static void bcm63xx_spi_setup_transfer(struct spi_device *spi,
121 struct spi_transfer *t) 121 struct spi_transfer *t)
122 { 122 {
123 struct bcm63xx_spi *bs = spi_master_get_devdata(spi->master); 123 struct bcm63xx_spi *bs = spi_master_get_devdata(spi->master);
124 u32 hz; 124 u32 hz;
125 u8 clk_cfg, reg; 125 u8 clk_cfg, reg;
126 int i; 126 int i;
127 127
128 hz = (t) ? t->speed_hz : spi->max_speed_hz; 128 hz = (t) ? t->speed_hz : spi->max_speed_hz;
129 129
130 /* Find the closest clock configuration */ 130 /* Find the closest clock configuration */
131 for (i = 0; i < SPI_CLK_MASK; i++) { 131 for (i = 0; i < SPI_CLK_MASK; i++) {
132 if (hz <= bcm63xx_spi_freq_table[i][0]) { 132 if (hz >= bcm63xx_spi_freq_table[i][0]) {
133 clk_cfg = bcm63xx_spi_freq_table[i][1]; 133 clk_cfg = bcm63xx_spi_freq_table[i][1];
134 break; 134 break;
135 } 135 }
136 } 136 }
137 137
138 /* No matching configuration found, default to lowest */ 138 /* No matching configuration found, default to lowest */
139 if (i == SPI_CLK_MASK) 139 if (i == SPI_CLK_MASK)
140 clk_cfg = SPI_CLK_0_391MHZ; 140 clk_cfg = SPI_CLK_0_391MHZ;
141 141
142 /* clear existing clock configuration bits of the register */ 142 /* clear existing clock configuration bits of the register */
143 reg = bcm_spi_readb(bs, SPI_CLK_CFG); 143 reg = bcm_spi_readb(bs, SPI_CLK_CFG);
144 reg &= ~SPI_CLK_MASK; 144 reg &= ~SPI_CLK_MASK;
145 reg |= clk_cfg; 145 reg |= clk_cfg;
146 146
147 bcm_spi_writeb(bs, reg, SPI_CLK_CFG); 147 bcm_spi_writeb(bs, reg, SPI_CLK_CFG);
148 dev_dbg(&spi->dev, "Setting clock register to %02x (hz %d)\n", 148 dev_dbg(&spi->dev, "Setting clock register to %02x (hz %d)\n",
149 clk_cfg, hz); 149 clk_cfg, hz);
150 } 150 }
151 151
152 /* the spi->mode bits understood by this driver: */ 152 /* the spi->mode bits understood by this driver: */
153 #define MODEBITS (SPI_CPOL | SPI_CPHA) 153 #define MODEBITS (SPI_CPOL | SPI_CPHA)
154 154
155 static int bcm63xx_spi_setup(struct spi_device *spi) 155 static int bcm63xx_spi_setup(struct spi_device *spi)
156 { 156 {
157 struct bcm63xx_spi *bs; 157 struct bcm63xx_spi *bs;
158 int ret; 158 int ret;
159 159
160 bs = spi_master_get_devdata(spi->master); 160 bs = spi_master_get_devdata(spi->master);
161 161
162 if (!spi->bits_per_word) 162 if (!spi->bits_per_word)
163 spi->bits_per_word = 8; 163 spi->bits_per_word = 8;
164 164
165 if (spi->mode & ~MODEBITS) { 165 if (spi->mode & ~MODEBITS) {
166 dev_err(&spi->dev, "%s, unsupported mode bits %x\n", 166 dev_err(&spi->dev, "%s, unsupported mode bits %x\n",
167 __func__, spi->mode & ~MODEBITS); 167 __func__, spi->mode & ~MODEBITS);
168 return -EINVAL; 168 return -EINVAL;
169 } 169 }
170 170
171 ret = bcm63xx_spi_check_transfer(spi, NULL); 171 ret = bcm63xx_spi_check_transfer(spi, NULL);
172 if (ret < 0) { 172 if (ret < 0) {
173 dev_err(&spi->dev, "setup: unsupported mode bits %x\n", 173 dev_err(&spi->dev, "setup: unsupported mode bits %x\n",
174 spi->mode & ~MODEBITS); 174 spi->mode & ~MODEBITS);
175 return ret; 175 return ret;
176 } 176 }
177 177
178 dev_dbg(&spi->dev, "%s, mode %d, %u bits/w, %u nsec/bit\n", 178 dev_dbg(&spi->dev, "%s, mode %d, %u bits/w, %u nsec/bit\n",
179 __func__, spi->mode & MODEBITS, spi->bits_per_word, 0); 179 __func__, spi->mode & MODEBITS, spi->bits_per_word, 0);
180 180
181 return 0; 181 return 0;
182 } 182 }
183 183
184 /* Fill the TX FIFO with as many bytes as possible */ 184 /* Fill the TX FIFO with as many bytes as possible */
185 static void bcm63xx_spi_fill_tx_fifo(struct bcm63xx_spi *bs) 185 static void bcm63xx_spi_fill_tx_fifo(struct bcm63xx_spi *bs)
186 { 186 {
187 u8 size; 187 u8 size;
188 188
189 /* Fill the Tx FIFO with as many bytes as possible */ 189 /* Fill the Tx FIFO with as many bytes as possible */
190 size = bs->remaining_bytes < bs->fifo_size ? bs->remaining_bytes : 190 size = bs->remaining_bytes < bs->fifo_size ? bs->remaining_bytes :
191 bs->fifo_size; 191 bs->fifo_size;
192 memcpy_toio(bs->tx_io, bs->tx_ptr, size); 192 memcpy_toio(bs->tx_io, bs->tx_ptr, size);
193 bs->remaining_bytes -= size; 193 bs->remaining_bytes -= size;
194 } 194 }
195 195
196 static unsigned int bcm63xx_txrx_bufs(struct spi_device *spi, 196 static unsigned int bcm63xx_txrx_bufs(struct spi_device *spi,
197 struct spi_transfer *t) 197 struct spi_transfer *t)
198 { 198 {
199 struct bcm63xx_spi *bs = spi_master_get_devdata(spi->master); 199 struct bcm63xx_spi *bs = spi_master_get_devdata(spi->master);
200 u16 msg_ctl; 200 u16 msg_ctl;
201 u16 cmd; 201 u16 cmd;
202 202
203 /* Disable the CMD_DONE interrupt */ 203 /* Disable the CMD_DONE interrupt */
204 bcm_spi_writeb(bs, 0, SPI_INT_MASK); 204 bcm_spi_writeb(bs, 0, SPI_INT_MASK);
205 205
206 dev_dbg(&spi->dev, "txrx: tx %p, rx %p, len %d\n", 206 dev_dbg(&spi->dev, "txrx: tx %p, rx %p, len %d\n",
207 t->tx_buf, t->rx_buf, t->len); 207 t->tx_buf, t->rx_buf, t->len);
208 208
209 /* Transmitter is inhibited */ 209 /* Transmitter is inhibited */
210 bs->tx_ptr = t->tx_buf; 210 bs->tx_ptr = t->tx_buf;
211 bs->rx_ptr = t->rx_buf; 211 bs->rx_ptr = t->rx_buf;
212 212
213 if (t->tx_buf) { 213 if (t->tx_buf) {
214 bs->remaining_bytes = t->len; 214 bs->remaining_bytes = t->len;
215 bcm63xx_spi_fill_tx_fifo(bs); 215 bcm63xx_spi_fill_tx_fifo(bs);
216 } 216 }
217 217
218 init_completion(&bs->done); 218 init_completion(&bs->done);
219 219
220 /* Fill in the Message control register */ 220 /* Fill in the Message control register */
221 msg_ctl = (t->len << SPI_BYTE_CNT_SHIFT); 221 msg_ctl = (t->len << SPI_BYTE_CNT_SHIFT);
222 222
223 if (t->rx_buf && t->tx_buf) 223 if (t->rx_buf && t->tx_buf)
224 msg_ctl |= (SPI_FD_RW << SPI_MSG_TYPE_SHIFT); 224 msg_ctl |= (SPI_FD_RW << SPI_MSG_TYPE_SHIFT);
225 else if (t->rx_buf) 225 else if (t->rx_buf)
226 msg_ctl |= (SPI_HD_R << SPI_MSG_TYPE_SHIFT); 226 msg_ctl |= (SPI_HD_R << SPI_MSG_TYPE_SHIFT);
227 else if (t->tx_buf) 227 else if (t->tx_buf)
228 msg_ctl |= (SPI_HD_W << SPI_MSG_TYPE_SHIFT); 228 msg_ctl |= (SPI_HD_W << SPI_MSG_TYPE_SHIFT);
229 229
230 bcm_spi_writew(bs, msg_ctl, SPI_MSG_CTL); 230 bcm_spi_writew(bs, msg_ctl, SPI_MSG_CTL);
231 231
232 /* Issue the transfer */ 232 /* Issue the transfer */
233 cmd = SPI_CMD_START_IMMEDIATE; 233 cmd = SPI_CMD_START_IMMEDIATE;
234 cmd |= (0 << SPI_CMD_PREPEND_BYTE_CNT_SHIFT); 234 cmd |= (0 << SPI_CMD_PREPEND_BYTE_CNT_SHIFT);
235 cmd |= (spi->chip_select << SPI_CMD_DEVICE_ID_SHIFT); 235 cmd |= (spi->chip_select << SPI_CMD_DEVICE_ID_SHIFT);
236 bcm_spi_writew(bs, cmd, SPI_CMD); 236 bcm_spi_writew(bs, cmd, SPI_CMD);
237 237
238 /* Enable the CMD_DONE interrupt */ 238 /* Enable the CMD_DONE interrupt */
239 bcm_spi_writeb(bs, SPI_INTR_CMD_DONE, SPI_INT_MASK); 239 bcm_spi_writeb(bs, SPI_INTR_CMD_DONE, SPI_INT_MASK);
240 240
241 return t->len - bs->remaining_bytes; 241 return t->len - bs->remaining_bytes;
242 } 242 }
243 243
244 static int bcm63xx_spi_prepare_transfer(struct spi_master *master) 244 static int bcm63xx_spi_prepare_transfer(struct spi_master *master)
245 { 245 {
246 struct bcm63xx_spi *bs = spi_master_get_devdata(master); 246 struct bcm63xx_spi *bs = spi_master_get_devdata(master);
247 247
248 pm_runtime_get_sync(&bs->pdev->dev); 248 pm_runtime_get_sync(&bs->pdev->dev);
249 249
250 return 0; 250 return 0;
251 } 251 }
252 252
253 static int bcm63xx_spi_unprepare_transfer(struct spi_master *master) 253 static int bcm63xx_spi_unprepare_transfer(struct spi_master *master)
254 { 254 {
255 struct bcm63xx_spi *bs = spi_master_get_devdata(master); 255 struct bcm63xx_spi *bs = spi_master_get_devdata(master);
256 256
257 pm_runtime_put(&bs->pdev->dev); 257 pm_runtime_put(&bs->pdev->dev);
258 258
259 return 0; 259 return 0;
260 } 260 }
261 261
262 static int bcm63xx_spi_transfer_one(struct spi_master *master, 262 static int bcm63xx_spi_transfer_one(struct spi_master *master,
263 struct spi_message *m) 263 struct spi_message *m)
264 { 264 {
265 struct bcm63xx_spi *bs = spi_master_get_devdata(master); 265 struct bcm63xx_spi *bs = spi_master_get_devdata(master);
266 struct spi_transfer *t; 266 struct spi_transfer *t;
267 struct spi_device *spi = m->spi; 267 struct spi_device *spi = m->spi;
268 int status = 0; 268 int status = 0;
269 unsigned int timeout = 0; 269 unsigned int timeout = 0;
270 270
271 list_for_each_entry(t, &m->transfers, transfer_list) { 271 list_for_each_entry(t, &m->transfers, transfer_list) {
272 unsigned int len = t->len; 272 unsigned int len = t->len;
273 u8 rx_tail; 273 u8 rx_tail;
274 274
275 status = bcm63xx_spi_check_transfer(spi, t); 275 status = bcm63xx_spi_check_transfer(spi, t);
276 if (status < 0) 276 if (status < 0)
277 goto exit; 277 goto exit;
278 278
279 /* configure adapter for a new transfer */ 279 /* configure adapter for a new transfer */
280 bcm63xx_spi_setup_transfer(spi, t); 280 bcm63xx_spi_setup_transfer(spi, t);
281 281
282 while (len) { 282 while (len) {
283 /* send the data */ 283 /* send the data */
284 len -= bcm63xx_txrx_bufs(spi, t); 284 len -= bcm63xx_txrx_bufs(spi, t);
285 285
286 timeout = wait_for_completion_timeout(&bs->done, HZ); 286 timeout = wait_for_completion_timeout(&bs->done, HZ);
287 if (!timeout) { 287 if (!timeout) {
288 status = -ETIMEDOUT; 288 status = -ETIMEDOUT;
289 goto exit; 289 goto exit;
290 } 290 }
291 291
292 /* read out all data */ 292 /* read out all data */
293 rx_tail = bcm_spi_readb(bs, SPI_RX_TAIL); 293 rx_tail = bcm_spi_readb(bs, SPI_RX_TAIL);
294 294
295 /* Read out all the data */ 295 /* Read out all the data */
296 if (rx_tail) 296 if (rx_tail)
297 memcpy_fromio(bs->rx_ptr, bs->rx_io, rx_tail); 297 memcpy_fromio(bs->rx_ptr, bs->rx_io, rx_tail);
298 } 298 }
299 299
300 m->actual_length += t->len; 300 m->actual_length += t->len;
301 } 301 }
302 exit: 302 exit:
303 m->status = status; 303 m->status = status;
304 spi_finalize_current_message(master); 304 spi_finalize_current_message(master);
305 305
306 return 0; 306 return 0;
307 } 307 }
308 308
309 /* This driver supports single master mode only. Hence 309 /* This driver supports single master mode only. Hence
310 * CMD_DONE is the only interrupt we care about 310 * CMD_DONE is the only interrupt we care about
311 */ 311 */
312 static irqreturn_t bcm63xx_spi_interrupt(int irq, void *dev_id) 312 static irqreturn_t bcm63xx_spi_interrupt(int irq, void *dev_id)
313 { 313 {
314 struct spi_master *master = (struct spi_master *)dev_id; 314 struct spi_master *master = (struct spi_master *)dev_id;
315 struct bcm63xx_spi *bs = spi_master_get_devdata(master); 315 struct bcm63xx_spi *bs = spi_master_get_devdata(master);
316 u8 intr; 316 u8 intr;
317 317
318 /* Read interrupts and clear them immediately */ 318 /* Read interrupts and clear them immediately */
319 intr = bcm_spi_readb(bs, SPI_INT_STATUS); 319 intr = bcm_spi_readb(bs, SPI_INT_STATUS);
320 bcm_spi_writeb(bs, SPI_INTR_CLEAR_ALL, SPI_INT_STATUS); 320 bcm_spi_writeb(bs, SPI_INTR_CLEAR_ALL, SPI_INT_STATUS);
321 bcm_spi_writeb(bs, 0, SPI_INT_MASK); 321 bcm_spi_writeb(bs, 0, SPI_INT_MASK);
322 322
323 /* A transfer completed */ 323 /* A transfer completed */
324 if (intr & SPI_INTR_CMD_DONE) 324 if (intr & SPI_INTR_CMD_DONE)
325 complete(&bs->done); 325 complete(&bs->done);
326 326
327 return IRQ_HANDLED; 327 return IRQ_HANDLED;
328 } 328 }
329 329
330 330
331 static int __devinit bcm63xx_spi_probe(struct platform_device *pdev) 331 static int __devinit bcm63xx_spi_probe(struct platform_device *pdev)
332 { 332 {
333 struct resource *r; 333 struct resource *r;
334 struct device *dev = &pdev->dev; 334 struct device *dev = &pdev->dev;
335 struct bcm63xx_spi_pdata *pdata = pdev->dev.platform_data; 335 struct bcm63xx_spi_pdata *pdata = pdev->dev.platform_data;
336 int irq; 336 int irq;
337 struct spi_master *master; 337 struct spi_master *master;
338 struct clk *clk; 338 struct clk *clk;
339 struct bcm63xx_spi *bs; 339 struct bcm63xx_spi *bs;
340 int ret; 340 int ret;
341 341
342 r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 342 r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
343 if (!r) { 343 if (!r) {
344 dev_err(dev, "no iomem\n"); 344 dev_err(dev, "no iomem\n");
345 ret = -ENXIO; 345 ret = -ENXIO;
346 goto out; 346 goto out;
347 } 347 }
348 348
349 irq = platform_get_irq(pdev, 0); 349 irq = platform_get_irq(pdev, 0);
350 if (irq < 0) { 350 if (irq < 0) {
351 dev_err(dev, "no irq\n"); 351 dev_err(dev, "no irq\n");
352 ret = -ENXIO; 352 ret = -ENXIO;
353 goto out; 353 goto out;
354 } 354 }
355 355
356 clk = clk_get(dev, "spi"); 356 clk = clk_get(dev, "spi");
357 if (IS_ERR(clk)) { 357 if (IS_ERR(clk)) {
358 dev_err(dev, "no clock for device\n"); 358 dev_err(dev, "no clock for device\n");
359 ret = PTR_ERR(clk); 359 ret = PTR_ERR(clk);
360 goto out; 360 goto out;
361 } 361 }
362 362
363 master = spi_alloc_master(dev, sizeof(*bs)); 363 master = spi_alloc_master(dev, sizeof(*bs));
364 if (!master) { 364 if (!master) {
365 dev_err(dev, "out of memory\n"); 365 dev_err(dev, "out of memory\n");
366 ret = -ENOMEM; 366 ret = -ENOMEM;
367 goto out_clk; 367 goto out_clk;
368 } 368 }
369 369
370 bs = spi_master_get_devdata(master); 370 bs = spi_master_get_devdata(master);
371 371
372 platform_set_drvdata(pdev, master); 372 platform_set_drvdata(pdev, master);
373 bs->pdev = pdev; 373 bs->pdev = pdev;
374 374
375 if (!devm_request_mem_region(&pdev->dev, r->start, 375 if (!devm_request_mem_region(&pdev->dev, r->start,
376 resource_size(r), PFX)) { 376 resource_size(r), PFX)) {
377 dev_err(dev, "iomem request failed\n"); 377 dev_err(dev, "iomem request failed\n");
378 ret = -ENXIO; 378 ret = -ENXIO;
379 goto out_err; 379 goto out_err;
380 } 380 }
381 381
382 bs->regs = devm_ioremap_nocache(&pdev->dev, r->start, 382 bs->regs = devm_ioremap_nocache(&pdev->dev, r->start,
383 resource_size(r)); 383 resource_size(r));
384 if (!bs->regs) { 384 if (!bs->regs) {
385 dev_err(dev, "unable to ioremap regs\n"); 385 dev_err(dev, "unable to ioremap regs\n");
386 ret = -ENOMEM; 386 ret = -ENOMEM;
387 goto out_err; 387 goto out_err;
388 } 388 }
389 389
390 bs->irq = irq; 390 bs->irq = irq;
391 bs->clk = clk; 391 bs->clk = clk;
392 bs->fifo_size = pdata->fifo_size; 392 bs->fifo_size = pdata->fifo_size;
393 393
394 ret = devm_request_irq(&pdev->dev, irq, bcm63xx_spi_interrupt, 0, 394 ret = devm_request_irq(&pdev->dev, irq, bcm63xx_spi_interrupt, 0,
395 pdev->name, master); 395 pdev->name, master);
396 if (ret) { 396 if (ret) {
397 dev_err(dev, "unable to request irq\n"); 397 dev_err(dev, "unable to request irq\n");
398 goto out_err; 398 goto out_err;
399 } 399 }
400 400
401 master->bus_num = pdata->bus_num; 401 master->bus_num = pdata->bus_num;
402 master->num_chipselect = pdata->num_chipselect; 402 master->num_chipselect = pdata->num_chipselect;
403 master->setup = bcm63xx_spi_setup; 403 master->setup = bcm63xx_spi_setup;
404 master->prepare_transfer_hardware = bcm63xx_spi_prepare_transfer; 404 master->prepare_transfer_hardware = bcm63xx_spi_prepare_transfer;
405 master->unprepare_transfer_hardware = bcm63xx_spi_unprepare_transfer; 405 master->unprepare_transfer_hardware = bcm63xx_spi_unprepare_transfer;
406 master->transfer_one_message = bcm63xx_spi_transfer_one; 406 master->transfer_one_message = bcm63xx_spi_transfer_one;
407 master->mode_bits = MODEBITS; 407 master->mode_bits = MODEBITS;
408 bs->speed_hz = pdata->speed_hz; 408 bs->speed_hz = pdata->speed_hz;
409 bs->tx_io = (u8 *)(bs->regs + bcm63xx_spireg(SPI_MSG_DATA)); 409 bs->tx_io = (u8 *)(bs->regs + bcm63xx_spireg(SPI_MSG_DATA));
410 bs->rx_io = (const u8 *)(bs->regs + bcm63xx_spireg(SPI_RX_DATA)); 410 bs->rx_io = (const u8 *)(bs->regs + bcm63xx_spireg(SPI_RX_DATA));
411 411
412 /* Initialize hardware */ 412 /* Initialize hardware */
413 clk_enable(bs->clk); 413 clk_enable(bs->clk);
414 bcm_spi_writeb(bs, SPI_INTR_CLEAR_ALL, SPI_INT_STATUS); 414 bcm_spi_writeb(bs, SPI_INTR_CLEAR_ALL, SPI_INT_STATUS);
415 415
416 /* register and we are done */ 416 /* register and we are done */
417 ret = spi_register_master(master); 417 ret = spi_register_master(master);
418 if (ret) { 418 if (ret) {
419 dev_err(dev, "spi register failed\n"); 419 dev_err(dev, "spi register failed\n");
420 goto out_clk_disable; 420 goto out_clk_disable;
421 } 421 }
422 422
423 dev_info(dev, "at 0x%08x (irq %d, FIFOs size %d) v%s\n", 423 dev_info(dev, "at 0x%08x (irq %d, FIFOs size %d) v%s\n",
424 r->start, irq, bs->fifo_size, DRV_VER); 424 r->start, irq, bs->fifo_size, DRV_VER);
425 425
426 return 0; 426 return 0;
427 427
428 out_clk_disable: 428 out_clk_disable:
429 clk_disable(clk); 429 clk_disable(clk);
430 out_err: 430 out_err:
431 platform_set_drvdata(pdev, NULL); 431 platform_set_drvdata(pdev, NULL);
432 spi_master_put(master); 432 spi_master_put(master);
433 out_clk: 433 out_clk:
434 clk_put(clk); 434 clk_put(clk);
435 out: 435 out:
436 return ret; 436 return ret;
437 } 437 }
438 438
439 static int __devexit bcm63xx_spi_remove(struct platform_device *pdev) 439 static int __devexit bcm63xx_spi_remove(struct platform_device *pdev)
440 { 440 {
441 struct spi_master *master = platform_get_drvdata(pdev); 441 struct spi_master *master = platform_get_drvdata(pdev);
442 struct bcm63xx_spi *bs = spi_master_get_devdata(master); 442 struct bcm63xx_spi *bs = spi_master_get_devdata(master);
443 443
444 spi_unregister_master(master); 444 spi_unregister_master(master);
445 445
446 /* reset spi block */ 446 /* reset spi block */
447 bcm_spi_writeb(bs, 0, SPI_INT_MASK); 447 bcm_spi_writeb(bs, 0, SPI_INT_MASK);
448 448
449 /* HW shutdown */ 449 /* HW shutdown */
450 clk_disable(bs->clk); 450 clk_disable(bs->clk);
451 clk_put(bs->clk); 451 clk_put(bs->clk);
452 452
453 platform_set_drvdata(pdev, 0); 453 platform_set_drvdata(pdev, 0);
454 454
455 return 0; 455 return 0;
456 } 456 }
457 457
458 #ifdef CONFIG_PM 458 #ifdef CONFIG_PM
459 static int bcm63xx_spi_suspend(struct device *dev) 459 static int bcm63xx_spi_suspend(struct device *dev)
460 { 460 {
461 struct spi_master *master = 461 struct spi_master *master =
462 platform_get_drvdata(to_platform_device(dev)); 462 platform_get_drvdata(to_platform_device(dev));
	struct bcm63xx_spi *bs = spi_master_get_devdata(master);

	clk_disable(bs->clk);

	return 0;
}

static int bcm63xx_spi_resume(struct device *dev)
{
	struct spi_master *master =
			platform_get_drvdata(to_platform_device(dev));
	struct bcm63xx_spi *bs = spi_master_get_devdata(master);

	clk_enable(bs->clk);

	return 0;
}

static const struct dev_pm_ops bcm63xx_spi_pm_ops = {
	.suspend	= bcm63xx_spi_suspend,
	.resume		= bcm63xx_spi_resume,
};

#define BCM63XX_SPI_PM_OPS	(&bcm63xx_spi_pm_ops)
#else
#define BCM63XX_SPI_PM_OPS	NULL
#endif

static struct platform_driver bcm63xx_spi_driver = {
	.driver = {
		.name	= "bcm63xx-spi",
		.owner	= THIS_MODULE,
		.pm	= BCM63XX_SPI_PM_OPS,
	},
	.probe		= bcm63xx_spi_probe,
	.remove		= __devexit_p(bcm63xx_spi_remove),
};

module_platform_driver(bcm63xx_spi_driver);

MODULE_ALIAS("platform:bcm63xx_spi");
MODULE_AUTHOR("Florian Fainelli <florian@openwrt.org>");
MODULE_AUTHOR("Tanguy Bouzeloc <tanguy.bouzeloc@efixo.com>");
MODULE_DESCRIPTION("Broadcom BCM63xx SPI Controller driver");
MODULE_LICENSE("GPL");

drivers/spi/spi-gpio.c
/*
 * SPI master driver using generic bitbanged GPIO
 *
 * Copyright (C) 2006,2008 David Brownell
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
 */
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/gpio.h>

#include <linux/spi/spi.h>
#include <linux/spi/spi_bitbang.h>
#include <linux/spi/spi_gpio.h>


/*
 * This bitbanging SPI master driver should help make systems usable
 * when a native hardware SPI engine is not available, perhaps because
 * its driver isn't yet working or because the I/O pins it requires
 * are used for other purposes.
 *
 * platform_device->driver_data ... points to spi_gpio
 *
 * spi->controller_state ... reserved for bitbang framework code
 * spi->controller_data ... holds chipselect GPIO
 *
 * spi->master->dev.driver_data ... points to spi_gpio->bitbang
 */

struct spi_gpio {
	struct spi_bitbang		bitbang;
	struct spi_gpio_platform_data	pdata;
	struct platform_device		*pdev;
};

/*----------------------------------------------------------------------*/

/*
 * Because the overhead of going through four GPIO procedure calls
 * per transferred bit can make performance a problem, this code
 * is set up so that you can use it in either of two ways:
 *
 *   - The slow generic way:  set up platform_data to hold the GPIO
 *     numbers used for MISO/MOSI/SCK, and issue procedure calls for
 *     each of them.  This driver can handle several such busses.
 *
 *   - The quicker inlined way:  only helps with platform GPIO code
 *     that inlines operations for constant GPIOs.  This can give
 *     you tight (fast!) inner loops, but each such bus needs a
 *     new driver.  You'll define a new C file, with Makefile and
 *     Kconfig support; the C code can be a total of six lines:
 *
 *		#define DRIVER_NAME	"myboard_spi2"
 *		#define SPI_MISO_GPIO	119
 *		#define SPI_MOSI_GPIO	120
 *		#define SPI_SCK_GPIO	121
 *		#define SPI_N_CHIPSEL	4
 *		#include "spi-gpio.c"
 */

#ifndef DRIVER_NAME
#define DRIVER_NAME	"spi_gpio"

#define GENERIC_BITBANG	/* vs tight inlines */

/* all functions referencing these symbols must define pdata */
#define SPI_MISO_GPIO	((pdata)->miso)
#define SPI_MOSI_GPIO	((pdata)->mosi)
#define SPI_SCK_GPIO	((pdata)->sck)

#define SPI_N_CHIPSEL	((pdata)->num_chipselect)

#endif

/*----------------------------------------------------------------------*/

static inline const struct spi_gpio_platform_data * __pure
spi_to_pdata(const struct spi_device *spi)
{
	const struct spi_bitbang	*bang;
	const struct spi_gpio		*spi_gpio;

	bang = spi_master_get_devdata(spi->master);
	spi_gpio = container_of(bang, struct spi_gpio, bitbang);
	return &spi_gpio->pdata;
}

/* this is #defined to avoid unused-variable warnings when inlining */
#define pdata		spi_to_pdata(spi)

static inline void setsck(const struct spi_device *spi, int is_on)
{
	gpio_set_value(SPI_SCK_GPIO, is_on);
}

static inline void setmosi(const struct spi_device *spi, int is_on)
{
	gpio_set_value(SPI_MOSI_GPIO, is_on);
}

static inline int getmiso(const struct spi_device *spi)
{
	return !!gpio_get_value(SPI_MISO_GPIO);
}

#undef pdata

/*
 * NOTE:  this clocks "as fast as we can".  It "should" be a function of the
 * requested device clock.  Software overhead means we usually have trouble
 * reaching even one Mbit/sec (except when we can inline bitops), so for now
 * we'll just assume we never need additional per-bit slowdowns.
 */
#define spidelay(nsecs)	do {} while (0)

#include "spi-bitbang-txrx.h"

/*
 * These functions can leverage inline expansion of GPIO calls to shrink
 * costs for a txrx bit, often by factors of around ten (by instruction
 * count).  That is particularly visible for larger word sizes, but helps
 * even with default 8-bit words.
 *
 * REVISIT overheads calling these functions for each word also have
 * significant performance costs.  Having txrx_bufs() calls that inline
 * the txrx_word() logic would help performance, e.g. on larger blocks
 * used with flash storage or MMC/SD.  There should also be ways to make
 * GCC be less stupid about reloading registers inside the I/O loops,
 * even without inlined GPIO calls; __attribute__((hot)) on GCC 4.3?
 */

static u32 spi_gpio_txrx_word_mode0(struct spi_device *spi,
		unsigned nsecs, u32 word, u8 bits)
{
	return bitbang_txrx_be_cpha0(spi, nsecs, 0, 0, word, bits);
}

static u32 spi_gpio_txrx_word_mode1(struct spi_device *spi,
		unsigned nsecs, u32 word, u8 bits)
{
	return bitbang_txrx_be_cpha1(spi, nsecs, 0, 0, word, bits);
}

static u32 spi_gpio_txrx_word_mode2(struct spi_device *spi,
		unsigned nsecs, u32 word, u8 bits)
{
	return bitbang_txrx_be_cpha0(spi, nsecs, 1, 0, word, bits);
}

static u32 spi_gpio_txrx_word_mode3(struct spi_device *spi,
		unsigned nsecs, u32 word, u8 bits)
{
	return bitbang_txrx_be_cpha1(spi, nsecs, 1, 0, word, bits);
}

/*
 * These functions do not call setmosi or getmiso if respective flag
 * (SPI_MASTER_NO_RX or SPI_MASTER_NO_TX) is set, so they are safe to
 * call when such pin is not present or defined in the controller.
 * A separate set of callbacks is defined to get highest possible
 * speed in the generic case (when both MISO and MOSI lines are
 * available), as optimiser will remove the checks when argument is
 * constant.
 */

static u32 spi_gpio_spec_txrx_word_mode0(struct spi_device *spi,
		unsigned nsecs, u32 word, u8 bits)
{
	unsigned flags = spi->master->flags;
	return bitbang_txrx_be_cpha0(spi, nsecs, 0, flags, word, bits);
}

static u32 spi_gpio_spec_txrx_word_mode1(struct spi_device *spi,
		unsigned nsecs, u32 word, u8 bits)
{
	unsigned flags = spi->master->flags;
	return bitbang_txrx_be_cpha1(spi, nsecs, 0, flags, word, bits);
}

static u32 spi_gpio_spec_txrx_word_mode2(struct spi_device *spi,
		unsigned nsecs, u32 word, u8 bits)
{
	unsigned flags = spi->master->flags;
	return bitbang_txrx_be_cpha0(spi, nsecs, 1, flags, word, bits);
}

static u32 spi_gpio_spec_txrx_word_mode3(struct spi_device *spi,
		unsigned nsecs, u32 word, u8 bits)
{
	unsigned flags = spi->master->flags;
	return bitbang_txrx_be_cpha1(spi, nsecs, 1, flags, word, bits);
}

/*----------------------------------------------------------------------*/

static void spi_gpio_chipselect(struct spi_device *spi, int is_active)
{
	unsigned long cs = (unsigned long) spi->controller_data;

	/* set initial clock polarity */
	if (is_active)
		setsck(spi, spi->mode & SPI_CPOL);

	if (cs != SPI_GPIO_NO_CHIPSELECT) {
		/* SPI is normally active-low */
		gpio_set_value(cs, (spi->mode & SPI_CS_HIGH) ? is_active : !is_active);
	}
}

static int spi_gpio_setup(struct spi_device *spi)
{
	unsigned long		cs = (unsigned long) spi->controller_data;
	int			status = 0;

	if (spi->bits_per_word > 32)
		return -EINVAL;

	if (!spi->controller_state) {
		if (cs != SPI_GPIO_NO_CHIPSELECT) {
			status = gpio_request(cs, dev_name(&spi->dev));
			if (status)
				return status;
-			status = gpio_direction_output(cs, spi->mode & SPI_CS_HIGH);
+			status = gpio_direction_output(cs,
+						!(spi->mode & SPI_CS_HIGH));
		}
	}
	if (!status)
		status = spi_bitbang_setup(spi);
	if (status) {
		if (!spi->controller_state && cs != SPI_GPIO_NO_CHIPSELECT)
			gpio_free(cs);
	}
	return status;
}

static void spi_gpio_cleanup(struct spi_device *spi)
{
	unsigned long	cs = (unsigned long) spi->controller_data;

	if (cs != SPI_GPIO_NO_CHIPSELECT)
		gpio_free(cs);
	spi_bitbang_cleanup(spi);
}

static int __devinit spi_gpio_alloc(unsigned pin, const char *label, bool is_in)
{
	int value;

	value = gpio_request(pin, label);
	if (value == 0) {
		if (is_in)
			value = gpio_direction_input(pin);
		else
			value = gpio_direction_output(pin, 0);
	}
	return value;
}

static int __devinit
spi_gpio_request(struct spi_gpio_platform_data *pdata, const char *label,
		 u16 *res_flags)
{
	int value;

	/* NOTE:  SPI_*_GPIO symbols may reference "pdata" */

	if (SPI_MOSI_GPIO != SPI_GPIO_NO_MOSI) {
		value = spi_gpio_alloc(SPI_MOSI_GPIO, label, false);
		if (value)
			goto done;
	} else {
		/* HW configuration without MOSI pin */
		*res_flags |= SPI_MASTER_NO_TX;
	}

	if (SPI_MISO_GPIO != SPI_GPIO_NO_MISO) {
		value = spi_gpio_alloc(SPI_MISO_GPIO, label, true);
		if (value)
			goto free_mosi;
	} else {
		/* HW configuration without MISO pin */
		*res_flags |= SPI_MASTER_NO_RX;
	}

	value = spi_gpio_alloc(SPI_SCK_GPIO, label, false);
	if (value)
		goto free_miso;

	goto done;

free_miso:
	if (SPI_MISO_GPIO != SPI_GPIO_NO_MISO)
		gpio_free(SPI_MISO_GPIO);
free_mosi:
	if (SPI_MOSI_GPIO != SPI_GPIO_NO_MOSI)
		gpio_free(SPI_MOSI_GPIO);
done:
	return value;
}

static int __devinit spi_gpio_probe(struct platform_device *pdev)
{
	int				status;
	struct spi_master		*master;
	struct spi_gpio			*spi_gpio;
	struct spi_gpio_platform_data	*pdata;
	u16 master_flags = 0;

	pdata = pdev->dev.platform_data;
#ifdef GENERIC_BITBANG
	if (!pdata || !pdata->num_chipselect)
		return -ENODEV;
#endif

	status = spi_gpio_request(pdata, dev_name(&pdev->dev), &master_flags);
	if (status < 0)
		return status;

	master = spi_alloc_master(&pdev->dev, sizeof *spi_gpio);
	if (!master) {
		status = -ENOMEM;
		goto gpio_free;
	}
	spi_gpio = spi_master_get_devdata(master);
	platform_set_drvdata(pdev, spi_gpio);

	spi_gpio->pdev = pdev;
	if (pdata)
		spi_gpio->pdata = *pdata;

	master->flags = master_flags;
	master->bus_num = pdev->id;
	master->num_chipselect = SPI_N_CHIPSEL;
	master->setup = spi_gpio_setup;
	master->cleanup = spi_gpio_cleanup;

	spi_gpio->bitbang.master = spi_master_get(master);
	spi_gpio->bitbang.chipselect = spi_gpio_chipselect;

	if ((master_flags & (SPI_MASTER_NO_TX | SPI_MASTER_NO_RX)) == 0) {
		spi_gpio->bitbang.txrx_word[SPI_MODE_0] = spi_gpio_txrx_word_mode0;
		spi_gpio->bitbang.txrx_word[SPI_MODE_1] = spi_gpio_txrx_word_mode1;
		spi_gpio->bitbang.txrx_word[SPI_MODE_2] = spi_gpio_txrx_word_mode2;
		spi_gpio->bitbang.txrx_word[SPI_MODE_3] = spi_gpio_txrx_word_mode3;
	} else {
		spi_gpio->bitbang.txrx_word[SPI_MODE_0] = spi_gpio_spec_txrx_word_mode0;
		spi_gpio->bitbang.txrx_word[SPI_MODE_1] = spi_gpio_spec_txrx_word_mode1;
		spi_gpio->bitbang.txrx_word[SPI_MODE_2] = spi_gpio_spec_txrx_word_mode2;
		spi_gpio->bitbang.txrx_word[SPI_MODE_3] = spi_gpio_spec_txrx_word_mode3;
	}
	spi_gpio->bitbang.setup_transfer = spi_bitbang_setup_transfer;
	spi_gpio->bitbang.flags = SPI_CS_HIGH;

	status = spi_bitbang_start(&spi_gpio->bitbang);
	if (status < 0) {
		spi_master_put(spi_gpio->bitbang.master);
gpio_free:
		if (SPI_MISO_GPIO != SPI_GPIO_NO_MISO)
			gpio_free(SPI_MISO_GPIO);
		if (SPI_MOSI_GPIO != SPI_GPIO_NO_MOSI)
			gpio_free(SPI_MOSI_GPIO);
		gpio_free(SPI_SCK_GPIO);
		spi_master_put(master);
	}

	return status;
}

static int __devexit spi_gpio_remove(struct platform_device *pdev)
{
	struct spi_gpio			*spi_gpio;
	struct spi_gpio_platform_data	*pdata;
	int				status;

	spi_gpio = platform_get_drvdata(pdev);
	pdata = pdev->dev.platform_data;

	/* stop() unregisters child devices too */
	status = spi_bitbang_stop(&spi_gpio->bitbang);
	spi_master_put(spi_gpio->bitbang.master);

	platform_set_drvdata(pdev, NULL);

	if (SPI_MISO_GPIO != SPI_GPIO_NO_MISO)
		gpio_free(SPI_MISO_GPIO);
	if (SPI_MOSI_GPIO != SPI_GPIO_NO_MOSI)
		gpio_free(SPI_MOSI_GPIO);
	gpio_free(SPI_SCK_GPIO);

	return status;
}

MODULE_ALIAS("platform:" DRIVER_NAME);

static struct platform_driver spi_gpio_driver = {
	.driver.name	= DRIVER_NAME,
	.driver.owner	= THIS_MODULE,
	.probe		= spi_gpio_probe,
	.remove		= __devexit_p(spi_gpio_remove),
};
module_platform_driver(spi_gpio_driver);

MODULE_DESCRIPTION("SPI master driver using generic bitbanged GPIO ");
MODULE_AUTHOR("David Brownell");
MODULE_LICENSE("GPL");

drivers/spi/spi-imx.c
1 /* 1 /*
2 * Copyright 2004-2007 Freescale Semiconductor, Inc. All Rights Reserved. 2 * Copyright 2004-2007 Freescale Semiconductor, Inc. All Rights Reserved.
3 * Copyright (C) 2008 Juergen Beisert 3 * Copyright (C) 2008 Juergen Beisert
4 * 4 *
5 * This program is free software; you can redistribute it and/or 5 * This program is free software; you can redistribute it and/or
6 * modify it under the terms of the GNU General Public License 6 * modify it under the terms of the GNU General Public License
7 * as published by the Free Software Foundation; either version 2 7 * as published by the Free Software Foundation; either version 2
8 * of the License, or (at your option) any later version. 8 * of the License, or (at your option) any later version.
9 * This program is distributed in the hope that it will be useful, 9 * This program is distributed in the hope that it will be useful,
10 * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 * but WITHOUT ANY WARRANTY; without even the implied warranty of
11 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12 * GNU General Public License for more details. 12 * GNU General Public License for more details.
13 * 13 *
14 * You should have received a copy of the GNU General Public License 14 * You should have received a copy of the GNU General Public License
15 * along with this program; if not, write to the 15 * along with this program; if not, write to the
16 * Free Software Foundation 16 * Free Software Foundation
17 * 51 Franklin Street, Fifth Floor 17 * 51 Franklin Street, Fifth Floor
18 * Boston, MA 02110-1301, USA. 18 * Boston, MA 02110-1301, USA.
19 */ 19 */
20 20
21 #include <linux/clk.h> 21 #include <linux/clk.h>
22 #include <linux/completion.h> 22 #include <linux/completion.h>
23 #include <linux/delay.h> 23 #include <linux/delay.h>
24 #include <linux/err.h> 24 #include <linux/err.h>
25 #include <linux/gpio.h> 25 #include <linux/gpio.h>
26 #include <linux/init.h> 26 #include <linux/init.h>
27 #include <linux/interrupt.h> 27 #include <linux/interrupt.h>
28 #include <linux/io.h> 28 #include <linux/io.h>
29 #include <linux/irq.h> 29 #include <linux/irq.h>
30 #include <linux/kernel.h> 30 #include <linux/kernel.h>
31 #include <linux/module.h> 31 #include <linux/module.h>
32 #include <linux/platform_device.h> 32 #include <linux/platform_device.h>
33 #include <linux/slab.h> 33 #include <linux/slab.h>
34 #include <linux/spi/spi.h> 34 #include <linux/spi/spi.h>
35 #include <linux/spi/spi_bitbang.h> 35 #include <linux/spi/spi_bitbang.h>
36 #include <linux/types.h> 36 #include <linux/types.h>
37 #include <linux/of.h> 37 #include <linux/of.h>
38 #include <linux/of_device.h> 38 #include <linux/of_device.h>
39 #include <linux/of_gpio.h> 39 #include <linux/of_gpio.h>
40 #include <linux/pinctrl/consumer.h> 40 #include <linux/pinctrl/consumer.h>
41 41
42 #include <mach/spi.h> 42 #include <mach/spi.h>
43 43
#define DRIVER_NAME "spi_imx"

#define MXC_CSPIRXDATA		0x00
#define MXC_CSPITXDATA		0x04
#define MXC_CSPICTRL		0x08
#define MXC_CSPIINT		0x0c
#define MXC_RESET		0x1c

/* generic defines to abstract from the different register layouts */
#define MXC_INT_RR	(1 << 0) /* Receive data ready interrupt */
#define MXC_INT_TE	(1 << 1) /* Transmit FIFO empty interrupt */

struct spi_imx_config {
	unsigned int speed_hz;
	unsigned int bpw;
	unsigned int mode;
	u8 cs;
};

enum spi_imx_devtype {
	IMX1_CSPI,
	IMX21_CSPI,
	IMX27_CSPI,
	IMX31_CSPI,
	IMX35_CSPI,	/* CSPI on all i.mx except above */
	IMX51_ECSPI,	/* ECSPI on i.mx51 and later */
};

struct spi_imx_data;

struct spi_imx_devtype_data {
	void (*intctrl)(struct spi_imx_data *, int);
	int (*config)(struct spi_imx_data *, struct spi_imx_config *);
	void (*trigger)(struct spi_imx_data *);
	int (*rx_available)(struct spi_imx_data *);
	void (*reset)(struct spi_imx_data *);
	enum spi_imx_devtype devtype;
};
struct spi_imx_data {
	struct spi_bitbang bitbang;

	struct completion xfer_done;
	void __iomem *base;
	int irq;
	struct clk *clk_per;
	struct clk *clk_ipg;
	unsigned long spi_clk;

	unsigned int count;
	void (*tx)(struct spi_imx_data *);
	void (*rx)(struct spi_imx_data *);
	void *rx_buf;
	const void *tx_buf;
	unsigned int txfifo; /* number of words pushed in tx FIFO */

	struct spi_imx_devtype_data *devtype_data;
	int chipselect[0];
};

static inline int is_imx27_cspi(struct spi_imx_data *d)
{
	return d->devtype_data->devtype == IMX27_CSPI;
}

static inline int is_imx35_cspi(struct spi_imx_data *d)
{
	return d->devtype_data->devtype == IMX35_CSPI;
}

static inline unsigned spi_imx_get_fifosize(struct spi_imx_data *d)
{
	return (d->devtype_data->devtype == IMX51_ECSPI) ? 64 : 8;
}

#define MXC_SPI_BUF_RX(type)						\
static void spi_imx_buf_rx_##type(struct spi_imx_data *spi_imx)		\
{									\
	unsigned int val = readl(spi_imx->base + MXC_CSPIRXDATA);	\
									\
	if (spi_imx->rx_buf) {						\
		*(type *)spi_imx->rx_buf = val;				\
		spi_imx->rx_buf += sizeof(type);			\
	}								\
}

#define MXC_SPI_BUF_TX(type)						\
static void spi_imx_buf_tx_##type(struct spi_imx_data *spi_imx)		\
{									\
	type val = 0;							\
									\
	if (spi_imx->tx_buf) {						\
		val = *(type *)spi_imx->tx_buf;				\
		spi_imx->tx_buf += sizeof(type);			\
	}								\
									\
	spi_imx->count -= sizeof(type);					\
									\
	writel(val, spi_imx->base + MXC_CSPITXDATA);			\
}

MXC_SPI_BUF_RX(u8)
MXC_SPI_BUF_TX(u8)
MXC_SPI_BUF_RX(u16)
MXC_SPI_BUF_TX(u16)
MXC_SPI_BUF_RX(u32)
MXC_SPI_BUF_TX(u32)

/* First entry is reserved, second entry is valid only if SDHC_SPIEN is set
 * (which is currently not the case in this driver)
 */
static int mxc_clkdivs[] = {0, 3, 4, 6, 8, 12, 16, 24, 32, 48, 64, 96, 128, 192,
	256, 384, 512, 768, 1024};

/* MX21, MX27 */
static unsigned int spi_imx_clkdiv_1(unsigned int fin,
		unsigned int fspi, unsigned int max)
{
	int i;

	for (i = 2; i < max; i++)
		if (fspi * mxc_clkdivs[i] >= fin)
			return i;

	return max;
}

/* MX1, MX31, MX35, MX51 CSPI */
static unsigned int spi_imx_clkdiv_2(unsigned int fin,
		unsigned int fspi)
{
	int i, div = 4;

	for (i = 0; i < 7; i++) {
		if (fspi * div >= fin)
			return i;
		div <<= 1;
	}

	return 7;
}

#define MX51_ECSPI_CTRL		0x08
#define MX51_ECSPI_CTRL_ENABLE		(1 <<  0)
#define MX51_ECSPI_CTRL_XCH		(1 <<  2)
#define MX51_ECSPI_CTRL_MODE_MASK	(0xf << 4)
#define MX51_ECSPI_CTRL_POSTDIV_OFFSET	8
#define MX51_ECSPI_CTRL_PREDIV_OFFSET	12
#define MX51_ECSPI_CTRL_CS(cs)		((cs) << 18)
#define MX51_ECSPI_CTRL_BL_OFFSET	20

#define MX51_ECSPI_CONFIG	0x0c
#define MX51_ECSPI_CONFIG_SCLKPHA(cs)	(1 << ((cs) +  0))
#define MX51_ECSPI_CONFIG_SCLKPOL(cs)	(1 << ((cs) +  4))
#define MX51_ECSPI_CONFIG_SBBCTRL(cs)	(1 << ((cs) +  8))
#define MX51_ECSPI_CONFIG_SSBPOL(cs)	(1 << ((cs) + 12))

#define MX51_ECSPI_INT		0x10
#define MX51_ECSPI_INT_TEEN		(1 << 0)
#define MX51_ECSPI_INT_RREN		(1 << 3)

#define MX51_ECSPI_STAT		0x18
#define MX51_ECSPI_STAT_RR		(1 << 3)

/* MX51 eCSPI */
static unsigned int mx51_ecspi_clkdiv(unsigned int fin, unsigned int fspi)
{
	/*
	 * there are two 4-bit dividers, the pre-divider divides by
	 * $pre, the post-divider by 2^$post
	 */
	unsigned int pre, post;

	if (unlikely(fspi > fin))
		return 0;

	post = fls(fin) - fls(fspi);
	if (fin > fspi << post)
		post++;

	/* now we have: (fin <= fspi << post) with post being minimal */

	post = max(4U, post) - 4;
	if (unlikely(post > 0xf)) {
		pr_err("%s: cannot set clock freq: %u (base freq: %u)\n",
				__func__, fspi, fin);
		return 0xff;
	}

	pre = DIV_ROUND_UP(fin, fspi << post) - 1;

	pr_debug("%s: fin: %u, fspi: %u, post: %u, pre: %u\n",
			__func__, fin, fspi, post, pre);
	return (pre << MX51_ECSPI_CTRL_PREDIV_OFFSET) |
		(post << MX51_ECSPI_CTRL_POSTDIV_OFFSET);
}

static void __maybe_unused mx51_ecspi_intctrl(struct spi_imx_data *spi_imx, int enable)
{
	unsigned val = 0;

	if (enable & MXC_INT_TE)
		val |= MX51_ECSPI_INT_TEEN;

	if (enable & MXC_INT_RR)
		val |= MX51_ECSPI_INT_RREN;

	writel(val, spi_imx->base + MX51_ECSPI_INT);
}

static void __maybe_unused mx51_ecspi_trigger(struct spi_imx_data *spi_imx)
{
	u32 reg;

	reg = readl(spi_imx->base + MX51_ECSPI_CTRL);
	reg |= MX51_ECSPI_CTRL_XCH;
	writel(reg, spi_imx->base + MX51_ECSPI_CTRL);
}

static int __maybe_unused mx51_ecspi_config(struct spi_imx_data *spi_imx,
		struct spi_imx_config *config)
{
	u32 ctrl = MX51_ECSPI_CTRL_ENABLE, cfg = 0;

	/*
	 * The hardware seems to have a race condition when changing modes. The
	 * current assumption is that the selection of the channel arrives
	 * earlier in the hardware than the mode bits when they are written at
	 * the same time.
	 * So set master mode for all channels as we do not support slave mode.
	 */
	ctrl |= MX51_ECSPI_CTRL_MODE_MASK;

	/* set clock speed */
	ctrl |= mx51_ecspi_clkdiv(spi_imx->spi_clk, config->speed_hz);

	/* set chip select to use */
	ctrl |= MX51_ECSPI_CTRL_CS(config->cs);

	ctrl |= (config->bpw - 1) << MX51_ECSPI_CTRL_BL_OFFSET;

	cfg |= MX51_ECSPI_CONFIG_SBBCTRL(config->cs);

	if (config->mode & SPI_CPHA)
		cfg |= MX51_ECSPI_CONFIG_SCLKPHA(config->cs);

	if (config->mode & SPI_CPOL)
		cfg |= MX51_ECSPI_CONFIG_SCLKPOL(config->cs);

	if (config->mode & SPI_CS_HIGH)
		cfg |= MX51_ECSPI_CONFIG_SSBPOL(config->cs);

	writel(ctrl, spi_imx->base + MX51_ECSPI_CTRL);
	writel(cfg, spi_imx->base + MX51_ECSPI_CONFIG);

	return 0;
}

static int __maybe_unused mx51_ecspi_rx_available(struct spi_imx_data *spi_imx)
{
	return readl(spi_imx->base + MX51_ECSPI_STAT) & MX51_ECSPI_STAT_RR;
}

static void __maybe_unused mx51_ecspi_reset(struct spi_imx_data *spi_imx)
{
	/* drain receive buffer */
	while (mx51_ecspi_rx_available(spi_imx))
		readl(spi_imx->base + MXC_CSPIRXDATA);
}

#define MX31_INTREG_TEEN	(1 << 0)
#define MX31_INTREG_RREN	(1 << 3)

#define MX31_CSPICTRL_ENABLE	(1 << 0)
#define MX31_CSPICTRL_MASTER	(1 << 1)
#define MX31_CSPICTRL_XCH	(1 << 2)
#define MX31_CSPICTRL_POL	(1 << 4)
#define MX31_CSPICTRL_PHA	(1 << 5)
#define MX31_CSPICTRL_SSCTL	(1 << 6)
#define MX31_CSPICTRL_SSPOL	(1 << 7)
#define MX31_CSPICTRL_BC_SHIFT	8
#define MX35_CSPICTRL_BL_SHIFT	20
#define MX31_CSPICTRL_CS_SHIFT	24
#define MX35_CSPICTRL_CS_SHIFT	12
#define MX31_CSPICTRL_DR_SHIFT	16

#define MX31_CSPISTATUS		0x14
#define MX31_STATUS_RR		(1 << 3)

/* These functions also work for the i.MX35, but be aware that
 * the i.MX35 has a slightly different register layout for bits
 * we do not use here.
 */
static void __maybe_unused mx31_intctrl(struct spi_imx_data *spi_imx, int enable)
{
	unsigned int val = 0;

	if (enable & MXC_INT_TE)
		val |= MX31_INTREG_TEEN;
	if (enable & MXC_INT_RR)
		val |= MX31_INTREG_RREN;

	writel(val, spi_imx->base + MXC_CSPIINT);
}

static void __maybe_unused mx31_trigger(struct spi_imx_data *spi_imx)
{
	unsigned int reg;

	reg = readl(spi_imx->base + MXC_CSPICTRL);
	reg |= MX31_CSPICTRL_XCH;
	writel(reg, spi_imx->base + MXC_CSPICTRL);
}

static int __maybe_unused mx31_config(struct spi_imx_data *spi_imx,
		struct spi_imx_config *config)
{
	unsigned int reg = MX31_CSPICTRL_ENABLE | MX31_CSPICTRL_MASTER;
	int cs = spi_imx->chipselect[config->cs];

	reg |= spi_imx_clkdiv_2(spi_imx->spi_clk, config->speed_hz) <<
		MX31_CSPICTRL_DR_SHIFT;

	if (is_imx35_cspi(spi_imx)) {
		reg |= (config->bpw - 1) << MX35_CSPICTRL_BL_SHIFT;
		reg |= MX31_CSPICTRL_SSCTL;
	} else {
		reg |= (config->bpw - 1) << MX31_CSPICTRL_BC_SHIFT;
	}

	if (config->mode & SPI_CPHA)
		reg |= MX31_CSPICTRL_PHA;
	if (config->mode & SPI_CPOL)
		reg |= MX31_CSPICTRL_POL;
	if (config->mode & SPI_CS_HIGH)
		reg |= MX31_CSPICTRL_SSPOL;
	if (cs < 0)
		reg |= (cs + 32) <<
			(is_imx35_cspi(spi_imx) ? MX35_CSPICTRL_CS_SHIFT :
						  MX31_CSPICTRL_CS_SHIFT);

	writel(reg, spi_imx->base + MXC_CSPICTRL);

	return 0;
}

static int __maybe_unused mx31_rx_available(struct spi_imx_data *spi_imx)
{
	return readl(spi_imx->base + MX31_CSPISTATUS) & MX31_STATUS_RR;
}

static void __maybe_unused mx31_reset(struct spi_imx_data *spi_imx)
{
	/* drain receive buffer */
	while (readl(spi_imx->base + MX31_CSPISTATUS) & MX31_STATUS_RR)
		readl(spi_imx->base + MXC_CSPIRXDATA);
}

#define MX21_INTREG_RR		(1 << 4)
#define MX21_INTREG_TEEN	(1 << 9)
#define MX21_INTREG_RREN	(1 << 13)

#define MX21_CSPICTRL_POL	(1 << 5)
#define MX21_CSPICTRL_PHA	(1 << 6)
#define MX21_CSPICTRL_SSPOL	(1 << 8)
#define MX21_CSPICTRL_XCH	(1 << 9)
#define MX21_CSPICTRL_ENABLE	(1 << 10)
#define MX21_CSPICTRL_MASTER	(1 << 11)
#define MX21_CSPICTRL_DR_SHIFT	14
#define MX21_CSPICTRL_CS_SHIFT	19

static void __maybe_unused mx21_intctrl(struct spi_imx_data *spi_imx, int enable)
{
	unsigned int val = 0;

	if (enable & MXC_INT_TE)
		val |= MX21_INTREG_TEEN;
	if (enable & MXC_INT_RR)
		val |= MX21_INTREG_RREN;

	writel(val, spi_imx->base + MXC_CSPIINT);
}

static void __maybe_unused mx21_trigger(struct spi_imx_data *spi_imx)
{
	unsigned int reg;

	reg = readl(spi_imx->base + MXC_CSPICTRL);
	reg |= MX21_CSPICTRL_XCH;
	writel(reg, spi_imx->base + MXC_CSPICTRL);
}

static int __maybe_unused mx21_config(struct spi_imx_data *spi_imx,
		struct spi_imx_config *config)
{
	unsigned int reg = MX21_CSPICTRL_ENABLE | MX21_CSPICTRL_MASTER;
	int cs = spi_imx->chipselect[config->cs];
	unsigned int max = is_imx27_cspi(spi_imx) ? 16 : 18;

	reg |= spi_imx_clkdiv_1(spi_imx->spi_clk, config->speed_hz, max) <<
		MX21_CSPICTRL_DR_SHIFT;
	reg |= config->bpw - 1;

	if (config->mode & SPI_CPHA)
		reg |= MX21_CSPICTRL_PHA;
	if (config->mode & SPI_CPOL)
		reg |= MX21_CSPICTRL_POL;
	if (config->mode & SPI_CS_HIGH)
		reg |= MX21_CSPICTRL_SSPOL;
	if (cs < 0)
		reg |= (cs + 32) << MX21_CSPICTRL_CS_SHIFT;

	writel(reg, spi_imx->base + MXC_CSPICTRL);

	return 0;
}

static int __maybe_unused mx21_rx_available(struct spi_imx_data *spi_imx)
{
	return readl(spi_imx->base + MXC_CSPIINT) & MX21_INTREG_RR;
}

static void __maybe_unused mx21_reset(struct spi_imx_data *spi_imx)
{
	writel(1, spi_imx->base + MXC_RESET);
}

#define MX1_INTREG_RR		(1 << 3)
#define MX1_INTREG_TEEN		(1 << 8)
#define MX1_INTREG_RREN		(1 << 11)

#define MX1_CSPICTRL_POL	(1 << 4)
#define MX1_CSPICTRL_PHA	(1 << 5)
#define MX1_CSPICTRL_XCH	(1 << 8)
#define MX1_CSPICTRL_ENABLE	(1 << 9)
#define MX1_CSPICTRL_MASTER	(1 << 10)
#define MX1_CSPICTRL_DR_SHIFT	13

static void __maybe_unused mx1_intctrl(struct spi_imx_data *spi_imx, int enable)
{
	unsigned int val = 0;

	if (enable & MXC_INT_TE)
		val |= MX1_INTREG_TEEN;
	if (enable & MXC_INT_RR)
		val |= MX1_INTREG_RREN;

	writel(val, spi_imx->base + MXC_CSPIINT);
}

static void __maybe_unused mx1_trigger(struct spi_imx_data *spi_imx)
{
	unsigned int reg;

	reg = readl(spi_imx->base + MXC_CSPICTRL);
	reg |= MX1_CSPICTRL_XCH;
	writel(reg, spi_imx->base + MXC_CSPICTRL);
}

static int __maybe_unused mx1_config(struct spi_imx_data *spi_imx,
		struct spi_imx_config *config)
{
	unsigned int reg = MX1_CSPICTRL_ENABLE | MX1_CSPICTRL_MASTER;

	reg |= spi_imx_clkdiv_2(spi_imx->spi_clk, config->speed_hz) <<
		MX1_CSPICTRL_DR_SHIFT;
	reg |= config->bpw - 1;

	if (config->mode & SPI_CPHA)
		reg |= MX1_CSPICTRL_PHA;
	if (config->mode & SPI_CPOL)
		reg |= MX1_CSPICTRL_POL;

	writel(reg, spi_imx->base + MXC_CSPICTRL);

	return 0;
}

static int __maybe_unused mx1_rx_available(struct spi_imx_data *spi_imx)
{
	return readl(spi_imx->base + MXC_CSPIINT) & MX1_INTREG_RR;
}

static void __maybe_unused mx1_reset(struct spi_imx_data *spi_imx)
{
	writel(1, spi_imx->base + MXC_RESET);
}

static struct spi_imx_devtype_data imx1_cspi_devtype_data = {
	.intctrl = mx1_intctrl,
	.config = mx1_config,
	.trigger = mx1_trigger,
	.rx_available = mx1_rx_available,
	.reset = mx1_reset,
	.devtype = IMX1_CSPI,
};

static struct spi_imx_devtype_data imx21_cspi_devtype_data = {
	.intctrl = mx21_intctrl,
	.config = mx21_config,
	.trigger = mx21_trigger,
	.rx_available = mx21_rx_available,
	.reset = mx21_reset,
	.devtype = IMX21_CSPI,
};

static struct spi_imx_devtype_data imx27_cspi_devtype_data = {
	/* i.mx27 cspi shares the functions with i.mx21 one */
	.intctrl = mx21_intctrl,
	.config = mx21_config,
	.trigger = mx21_trigger,
	.rx_available = mx21_rx_available,
	.reset = mx21_reset,
	.devtype = IMX27_CSPI,
};

static struct spi_imx_devtype_data imx31_cspi_devtype_data = {
	.intctrl = mx31_intctrl,
	.config = mx31_config,
	.trigger = mx31_trigger,
	.rx_available = mx31_rx_available,
	.reset = mx31_reset,
	.devtype = IMX31_CSPI,
};

static struct spi_imx_devtype_data imx35_cspi_devtype_data = {
	/* i.mx35 and later cspi shares the functions with i.mx31 one */
	.intctrl = mx31_intctrl,
	.config = mx31_config,
	.trigger = mx31_trigger,
	.rx_available = mx31_rx_available,
	.reset = mx31_reset,
	.devtype = IMX35_CSPI,
};

static struct spi_imx_devtype_data imx51_ecspi_devtype_data = {
	.intctrl = mx51_ecspi_intctrl,
	.config = mx51_ecspi_config,
	.trigger = mx51_ecspi_trigger,
	.rx_available = mx51_ecspi_rx_available,
	.reset = mx51_ecspi_reset,
	.devtype = IMX51_ECSPI,
};

static struct platform_device_id spi_imx_devtype[] = {
	{
		.name = "imx1-cspi",
		.driver_data = (kernel_ulong_t) &imx1_cspi_devtype_data,
	}, {
		.name = "imx21-cspi",
		.driver_data = (kernel_ulong_t) &imx21_cspi_devtype_data,
	}, {
		.name = "imx27-cspi",
		.driver_data = (kernel_ulong_t) &imx27_cspi_devtype_data,
	}, {
		.name = "imx31-cspi",
		.driver_data = (kernel_ulong_t) &imx31_cspi_devtype_data,
	}, {
		.name = "imx35-cspi",
		.driver_data = (kernel_ulong_t) &imx35_cspi_devtype_data,
	}, {
		.name = "imx51-ecspi",
		.driver_data = (kernel_ulong_t) &imx51_ecspi_devtype_data,
	}, {
		/* sentinel */
	}
};

static const struct of_device_id spi_imx_dt_ids[] = {
	{ .compatible = "fsl,imx1-cspi", .data = &imx1_cspi_devtype_data, },
	{ .compatible = "fsl,imx21-cspi", .data = &imx21_cspi_devtype_data, },
	{ .compatible = "fsl,imx27-cspi", .data = &imx27_cspi_devtype_data, },
	{ .compatible = "fsl,imx31-cspi", .data = &imx31_cspi_devtype_data, },
	{ .compatible = "fsl,imx35-cspi", .data = &imx35_cspi_devtype_data, },
	{ .compatible = "fsl,imx51-ecspi", .data = &imx51_ecspi_devtype_data, },
	{ /* sentinel */ }
};

static void spi_imx_chipselect(struct spi_device *spi, int is_active)
{
	struct spi_imx_data *spi_imx = spi_master_get_devdata(spi->master);
	int gpio = spi_imx->chipselect[spi->chip_select];
	int active = is_active != BITBANG_CS_INACTIVE;
	int dev_is_lowactive = !(spi->mode & SPI_CS_HIGH);

	if (!gpio_is_valid(gpio))
		return;

	gpio_set_value(gpio, dev_is_lowactive ^ active);
}
634 634
static void spi_imx_push(struct spi_imx_data *spi_imx)
{
	while (spi_imx->txfifo < spi_imx_get_fifosize(spi_imx)) {
		if (!spi_imx->count)
			break;
		spi_imx->tx(spi_imx);
		spi_imx->txfifo++;
	}

	spi_imx->devtype_data->trigger(spi_imx);
}

static irqreturn_t spi_imx_isr(int irq, void *dev_id)
{
	struct spi_imx_data *spi_imx = dev_id;

	while (spi_imx->devtype_data->rx_available(spi_imx)) {
		spi_imx->rx(spi_imx);
		spi_imx->txfifo--;
	}

	if (spi_imx->count) {
		spi_imx_push(spi_imx);
		return IRQ_HANDLED;
	}

	if (spi_imx->txfifo) {
		/* No data left to push, but still waiting for rx data,
		 * enable receive data available interrupt.
		 */
		spi_imx->devtype_data->intctrl(
				spi_imx, MXC_INT_RR);
		return IRQ_HANDLED;
	}

	spi_imx->devtype_data->intctrl(spi_imx, 0);
	complete(&spi_imx->xfer_done);

	return IRQ_HANDLED;
}

static int spi_imx_setupxfer(struct spi_device *spi,
				 struct spi_transfer *t)
{
	struct spi_imx_data *spi_imx = spi_master_get_devdata(spi->master);
	struct spi_imx_config config;

	config.bpw = t ? t->bits_per_word : spi->bits_per_word;
	config.speed_hz  = t ? t->speed_hz : spi->max_speed_hz;
	config.mode = spi->mode;
	config.cs = spi->chip_select;

	if (!config.speed_hz)
		config.speed_hz = spi->max_speed_hz;
	if (!config.bpw)
		config.bpw = spi->bits_per_word;
-	if (!config.speed_hz)
-		config.speed_hz = spi->max_speed_hz;

	/* Initialize the functions for transfer */
	if (config.bpw <= 8) {
		spi_imx->rx = spi_imx_buf_rx_u8;
		spi_imx->tx = spi_imx_buf_tx_u8;
	} else if (config.bpw <= 16) {
		spi_imx->rx = spi_imx_buf_rx_u16;
		spi_imx->tx = spi_imx_buf_tx_u16;
	} else if (config.bpw <= 32) {
		spi_imx->rx = spi_imx_buf_rx_u32;
		spi_imx->tx = spi_imx_buf_tx_u32;
	} else
		BUG();

	spi_imx->devtype_data->config(spi_imx, &config);

	return 0;
}

static int spi_imx_transfer(struct spi_device *spi,
				struct spi_transfer *transfer)
{
	struct spi_imx_data *spi_imx = spi_master_get_devdata(spi->master);

	spi_imx->tx_buf = transfer->tx_buf;
	spi_imx->rx_buf = transfer->rx_buf;
	spi_imx->count = transfer->len;
	spi_imx->txfifo = 0;

	init_completion(&spi_imx->xfer_done);

	spi_imx_push(spi_imx);

	spi_imx->devtype_data->intctrl(spi_imx, MXC_INT_TE);

	wait_for_completion(&spi_imx->xfer_done);

	return transfer->len;
}

static int spi_imx_setup(struct spi_device *spi)
{
	struct spi_imx_data *spi_imx = spi_master_get_devdata(spi->master);
	int gpio = spi_imx->chipselect[spi->chip_select];

	dev_dbg(&spi->dev, "%s: mode %d, %u bpw, %d hz\n", __func__,
		 spi->mode, spi->bits_per_word, spi->max_speed_hz);

-	if (gpio >= 0)
+	if (gpio_is_valid(gpio))
		gpio_direction_output(gpio, spi->mode & SPI_CS_HIGH ? 0 : 1);

	spi_imx_chipselect(spi, BITBANG_CS_INACTIVE);

	return 0;
}

static void spi_imx_cleanup(struct spi_device *spi)
{
}

static int __devinit spi_imx_probe(struct platform_device *pdev)
{
	struct device_node *np = pdev->dev.of_node;
	const struct of_device_id *of_id =
			of_match_device(spi_imx_dt_ids, &pdev->dev);
	struct spi_imx_master *mxc_platform_info =
			dev_get_platdata(&pdev->dev);
	struct spi_master *master;
	struct spi_imx_data *spi_imx;
	struct resource *res;
	struct pinctrl *pinctrl;
	int i, ret, num_cs;

	if (!np && !mxc_platform_info) {
		dev_err(&pdev->dev, "can't get the platform data\n");
		return -EINVAL;
	}

	ret = of_property_read_u32(np, "fsl,spi-num-chipselects", &num_cs);
	if (ret < 0) {
		if (mxc_platform_info)
			num_cs = mxc_platform_info->num_chipselect;
		else
			return ret;
	}

	master = spi_alloc_master(&pdev->dev,
			sizeof(struct spi_imx_data) + sizeof(int) * num_cs);
	if (!master)
		return -ENOMEM;

	platform_set_drvdata(pdev, master);

	master->bus_num = pdev->id;
	master->num_chipselect = num_cs;

	spi_imx = spi_master_get_devdata(master);
	spi_imx->bitbang.master = spi_master_get(master);

	for (i = 0; i < master->num_chipselect; i++) {
		int cs_gpio = of_get_named_gpio(np, "cs-gpios", i);
-		if (cs_gpio < 0 && mxc_platform_info)
+		if (!gpio_is_valid(cs_gpio) && mxc_platform_info)
			cs_gpio = mxc_platform_info->chipselect[i];

		spi_imx->chipselect[i] = cs_gpio;
-		if (cs_gpio < 0)
+		if (!gpio_is_valid(cs_gpio))
			continue;

		ret = gpio_request(spi_imx->chipselect[i], DRIVER_NAME);
		if (ret) {
			dev_err(&pdev->dev, "can't get cs gpios\n");
			goto out_gpio_free;
		}
	}

	spi_imx->bitbang.chipselect = spi_imx_chipselect;
	spi_imx->bitbang.setup_transfer = spi_imx_setupxfer;
	spi_imx->bitbang.txrx_bufs = spi_imx_transfer;
	spi_imx->bitbang.master->setup = spi_imx_setup;
	spi_imx->bitbang.master->cleanup = spi_imx_cleanup;
	spi_imx->bitbang.master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH;

	init_completion(&spi_imx->xfer_done);

	spi_imx->devtype_data = of_id ? of_id->data :
		(struct spi_imx_devtype_data *) pdev->id_entry->driver_data;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	if (!res) {
		dev_err(&pdev->dev, "can't get platform resource\n");
		ret = -ENOMEM;
		goto out_gpio_free;
	}

	if (!request_mem_region(res->start, resource_size(res), pdev->name)) {
		dev_err(&pdev->dev, "request_mem_region failed\n");
		ret = -EBUSY;
		goto out_gpio_free;
	}

	spi_imx->base = ioremap(res->start, resource_size(res));
	if (!spi_imx->base) {
		ret = -EINVAL;
		goto out_release_mem;
	}

	spi_imx->irq = platform_get_irq(pdev, 0);
	if (spi_imx->irq < 0) {
		ret = -EINVAL;
		goto out_iounmap;
	}

	ret = request_irq(spi_imx->irq, spi_imx_isr, 0, DRIVER_NAME, spi_imx);
	if (ret) {
		dev_err(&pdev->dev, "can't get irq%d: %d\n", spi_imx->irq, ret);
		goto out_iounmap;
	}

	pinctrl = devm_pinctrl_get_select_default(&pdev->dev);
	if (IS_ERR(pinctrl)) {
		ret = PTR_ERR(pinctrl);
		goto out_free_irq;
	}

	spi_imx->clk_ipg = devm_clk_get(&pdev->dev, "ipg");
	if (IS_ERR(spi_imx->clk_ipg)) {
		ret = PTR_ERR(spi_imx->clk_ipg);
		goto out_free_irq;
	}

	spi_imx->clk_per = devm_clk_get(&pdev->dev, "per");
	if (IS_ERR(spi_imx->clk_per)) {
		ret = PTR_ERR(spi_imx->clk_per);
		goto out_free_irq;
	}

	clk_prepare_enable(spi_imx->clk_per);
	clk_prepare_enable(spi_imx->clk_ipg);

	spi_imx->spi_clk = clk_get_rate(spi_imx->clk_per);

	spi_imx->devtype_data->reset(spi_imx);

	spi_imx->devtype_data->intctrl(spi_imx, 0);

	master->dev.of_node = pdev->dev.of_node;
	ret = spi_bitbang_start(&spi_imx->bitbang);
	if (ret) {
		dev_err(&pdev->dev, "bitbang start failed with %d\n", ret);
		goto out_clk_put;
	}

	dev_info(&pdev->dev, "probed\n");

	return ret;

out_clk_put:
	clk_disable_unprepare(spi_imx->clk_per);
	clk_disable_unprepare(spi_imx->clk_ipg);
out_free_irq:
	free_irq(spi_imx->irq, spi_imx);
out_iounmap:
	iounmap(spi_imx->base);
out_release_mem:
	release_mem_region(res->start, resource_size(res));
out_gpio_free:
	while (--i >= 0) {
-		if (spi_imx->chipselect[i] >= 0)
+		if (gpio_is_valid(spi_imx->chipselect[i]))
			gpio_free(spi_imx->chipselect[i]);
	}
	spi_master_put(master);
	kfree(master);
	platform_set_drvdata(pdev, NULL);
	return ret;
}

static int __devexit spi_imx_remove(struct platform_device *pdev)
{
	struct spi_master *master = platform_get_drvdata(pdev);
	struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	struct spi_imx_data *spi_imx = spi_master_get_devdata(master);
	int i;

	spi_bitbang_stop(&spi_imx->bitbang);

	writel(0, spi_imx->base + MXC_CSPICTRL);
	clk_disable_unprepare(spi_imx->clk_per);
	clk_disable_unprepare(spi_imx->clk_ipg);
	free_irq(spi_imx->irq, spi_imx);
	iounmap(spi_imx->base);

	for (i = 0; i < master->num_chipselect; i++)
-		if (spi_imx->chipselect[i] >= 0)
+		if (gpio_is_valid(spi_imx->chipselect[i]))
			gpio_free(spi_imx->chipselect[i]);

	spi_master_put(master);

	release_mem_region(res->start, resource_size(res));

	platform_set_drvdata(pdev, NULL);

	return 0;
}

static struct platform_driver spi_imx_driver = {
	.driver = {
		   .name = DRIVER_NAME,
		   .owner = THIS_MODULE,
		   .of_match_table = spi_imx_dt_ids,
		   },
	.id_table = spi_imx_devtype,
	.probe = spi_imx_probe,
	.remove = __devexit_p(spi_imx_remove),
};
module_platform_driver(spi_imx_driver);

MODULE_DESCRIPTION("SPI Master Controller driver");
MODULE_AUTHOR("Sascha Hauer, Pengutronix");
MODULE_LICENSE("GPL");

drivers/spi/spi-omap2-mcspi.c
/*
 * OMAP2 McSPI controller driver
 *
 * Copyright (C) 2005, 2006 Nokia Corporation
 * Author:	Samuel Ortiz <samuel.ortiz@nokia.com> and
 *		Juha Yrjölä <juha.yrjola@nokia.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307 USA
 *
 */

#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/module.h>
#include <linux/device.h>
#include <linux/delay.h>
#include <linux/dma-mapping.h>
#include <linux/platform_device.h>
#include <linux/err.h>
#include <linux/clk.h>
#include <linux/io.h>
#include <linux/slab.h>
#include <linux/pm_runtime.h>
#include <linux/of.h>
#include <linux/of_device.h>

#include <linux/spi/spi.h>

#include <plat/dma.h>
#include <plat/clock.h>
#include <plat/mcspi.h>

#define OMAP2_MCSPI_MAX_FREQ		48000000
#define SPI_AUTOSUSPEND_TIMEOUT		2000

#define OMAP2_MCSPI_REVISION		0x00
#define OMAP2_MCSPI_SYSSTATUS		0x14
#define OMAP2_MCSPI_IRQSTATUS		0x18
#define OMAP2_MCSPI_IRQENABLE		0x1c
#define OMAP2_MCSPI_WAKEUPENABLE	0x20
#define OMAP2_MCSPI_SYST		0x24
#define OMAP2_MCSPI_MODULCTRL		0x28

/* per-channel banks, 0x14 bytes each, first is: */
#define OMAP2_MCSPI_CHCONF0		0x2c
#define OMAP2_MCSPI_CHSTAT0		0x30
#define OMAP2_MCSPI_CHCTRL0		0x34
#define OMAP2_MCSPI_TX0			0x38
#define OMAP2_MCSPI_RX0			0x3c

/* per-register bitmasks: */

#define OMAP2_MCSPI_MODULCTRL_SINGLE	BIT(0)
#define OMAP2_MCSPI_MODULCTRL_MS	BIT(2)
#define OMAP2_MCSPI_MODULCTRL_STEST	BIT(3)

#define OMAP2_MCSPI_CHCONF_PHA		BIT(0)
#define OMAP2_MCSPI_CHCONF_POL		BIT(1)
#define OMAP2_MCSPI_CHCONF_CLKD_MASK	(0x0f << 2)
#define OMAP2_MCSPI_CHCONF_EPOL		BIT(6)
#define OMAP2_MCSPI_CHCONF_WL_MASK	(0x1f << 7)
#define OMAP2_MCSPI_CHCONF_TRM_RX_ONLY	BIT(12)
#define OMAP2_MCSPI_CHCONF_TRM_TX_ONLY	BIT(13)
#define OMAP2_MCSPI_CHCONF_TRM_MASK	(0x03 << 12)
#define OMAP2_MCSPI_CHCONF_DMAW		BIT(14)
#define OMAP2_MCSPI_CHCONF_DMAR		BIT(15)
#define OMAP2_MCSPI_CHCONF_DPE0		BIT(16)
#define OMAP2_MCSPI_CHCONF_DPE1		BIT(17)
#define OMAP2_MCSPI_CHCONF_IS		BIT(18)
#define OMAP2_MCSPI_CHCONF_TURBO	BIT(19)
#define OMAP2_MCSPI_CHCONF_FORCE	BIT(20)

#define OMAP2_MCSPI_CHSTAT_RXS		BIT(0)
#define OMAP2_MCSPI_CHSTAT_TXS		BIT(1)
#define OMAP2_MCSPI_CHSTAT_EOT		BIT(2)

#define OMAP2_MCSPI_CHCTRL_EN		BIT(0)

#define OMAP2_MCSPI_WAKEUPENABLE_WKEN	BIT(0)

/* We have 2 DMA channels per CS, one for RX and one for TX */
struct omap2_mcspi_dma {
	int dma_tx_channel;
	int dma_rx_channel;

	int dma_tx_sync_dev;
	int dma_rx_sync_dev;

	struct completion dma_tx_completion;
	struct completion dma_rx_completion;
};

/* use PIO for small transfers, avoiding DMA setup/teardown overhead and
 * cache operations; better heuristics consider wordsize and bitrate.
 */
#define DMA_MIN_BYTES			160


/*
 * Used for context save and restore, structure members to be updated whenever
 * corresponding registers are modified.
 */
struct omap2_mcspi_regs {
	u32 modulctrl;
	u32 wakeupenable;
	struct list_head cs;
};

struct omap2_mcspi {
	struct spi_master	*master;
	/* Virtual base address of the controller */
	void __iomem		*base;
	unsigned long		phys;
	/* SPI1 has 4 channels, while SPI2 has 2 */
	struct omap2_mcspi_dma	*dma_channels;
	struct device		*dev;
	struct omap2_mcspi_regs ctx;
};

struct omap2_mcspi_cs {
	void __iomem		*base;
	unsigned long		phys;
	int			word_len;
	struct list_head	node;
	/* Context save and restore shadow register */
	u32			chconf0;
};

#define MOD_REG_BIT(val, mask, set) do { \
	if (set) \
		val |= mask; \
	else \
		val &= ~mask; \
} while (0)

static inline void mcspi_write_reg(struct spi_master *master,
		int idx, u32 val)
{
	struct omap2_mcspi *mcspi = spi_master_get_devdata(master);

	__raw_writel(val, mcspi->base + idx);
}

static inline u32 mcspi_read_reg(struct spi_master *master, int idx)
{
	struct omap2_mcspi *mcspi = spi_master_get_devdata(master);

	return __raw_readl(mcspi->base + idx);
}

static inline void mcspi_write_cs_reg(const struct spi_device *spi,
		int idx, u32 val)
{
	struct omap2_mcspi_cs	*cs = spi->controller_state;

	__raw_writel(val, cs->base + idx);
}

static inline u32 mcspi_read_cs_reg(const struct spi_device *spi, int idx)
{
	struct omap2_mcspi_cs	*cs = spi->controller_state;

	return __raw_readl(cs->base + idx);
}

static inline u32 mcspi_cached_chconf0(const struct spi_device *spi)
{
	struct omap2_mcspi_cs *cs = spi->controller_state;

	return cs->chconf0;
}

static inline void mcspi_write_chconf0(const struct spi_device *spi, u32 val)
{
	struct omap2_mcspi_cs *cs = spi->controller_state;

	cs->chconf0 = val;
	mcspi_write_cs_reg(spi, OMAP2_MCSPI_CHCONF0, val);
	mcspi_read_cs_reg(spi, OMAP2_MCSPI_CHCONF0);
}

static void omap2_mcspi_set_dma_req(const struct spi_device *spi,
		int is_read, int enable)
{
	u32 l, rw;

	l = mcspi_cached_chconf0(spi);

	if (is_read) /* 1 is read, 0 write */
		rw = OMAP2_MCSPI_CHCONF_DMAR;
	else
		rw = OMAP2_MCSPI_CHCONF_DMAW;

	MOD_REG_BIT(l, rw, enable);
	mcspi_write_chconf0(spi, l);
}

static void omap2_mcspi_set_enable(const struct spi_device *spi, int enable)
{
	u32 l;

	l = enable ? OMAP2_MCSPI_CHCTRL_EN : 0;
	mcspi_write_cs_reg(spi, OMAP2_MCSPI_CHCTRL0, l);
	/* Flash post-writes */
	mcspi_read_cs_reg(spi, OMAP2_MCSPI_CHCTRL0);
}

static void omap2_mcspi_force_cs(struct spi_device *spi, int cs_active)
{
	u32 l;

	l = mcspi_cached_chconf0(spi);
	MOD_REG_BIT(l, OMAP2_MCSPI_CHCONF_FORCE, cs_active);
	mcspi_write_chconf0(spi, l);
}

static void omap2_mcspi_set_master_mode(struct spi_master *master)
{
	struct omap2_mcspi	*mcspi = spi_master_get_devdata(master);
	struct omap2_mcspi_regs	*ctx = &mcspi->ctx;
	u32 l;

	/*
	 * Setup when switching from (reset default) slave mode
	 * to single-channel master mode
	 */
	l = mcspi_read_reg(master, OMAP2_MCSPI_MODULCTRL);
	MOD_REG_BIT(l, OMAP2_MCSPI_MODULCTRL_STEST, 0);
	MOD_REG_BIT(l, OMAP2_MCSPI_MODULCTRL_MS, 0);
	MOD_REG_BIT(l, OMAP2_MCSPI_MODULCTRL_SINGLE, 1);
	mcspi_write_reg(master, OMAP2_MCSPI_MODULCTRL, l);

	ctx->modulctrl = l;
}

static void omap2_mcspi_restore_ctx(struct omap2_mcspi *mcspi)
{
	struct spi_master	*spi_cntrl = mcspi->master;
	struct omap2_mcspi_regs	*ctx = &mcspi->ctx;
	struct omap2_mcspi_cs	*cs;

	/* McSPI: context restore */
	mcspi_write_reg(spi_cntrl, OMAP2_MCSPI_MODULCTRL, ctx->modulctrl);
	mcspi_write_reg(spi_cntrl, OMAP2_MCSPI_WAKEUPENABLE, ctx->wakeupenable);

	list_for_each_entry(cs, &ctx->cs, node)
		__raw_writel(cs->chconf0, cs->base + OMAP2_MCSPI_CHCONF0);
}
static void omap2_mcspi_disable_clocks(struct omap2_mcspi *mcspi)
{
	pm_runtime_mark_last_busy(mcspi->dev);
	pm_runtime_put_autosuspend(mcspi->dev);
}

static int omap2_mcspi_enable_clocks(struct omap2_mcspi *mcspi)
{
	return pm_runtime_get_sync(mcspi->dev);
}

273 static int omap2_prepare_transfer(struct spi_master *master) 273 static int omap2_prepare_transfer(struct spi_master *master)
274 { 274 {
275 struct omap2_mcspi *mcspi = spi_master_get_devdata(master); 275 struct omap2_mcspi *mcspi = spi_master_get_devdata(master);
276 276
277 pm_runtime_get_sync(mcspi->dev); 277 pm_runtime_get_sync(mcspi->dev);
278 return 0; 278 return 0;
279 } 279 }
280 280
281 static int omap2_unprepare_transfer(struct spi_master *master) 281 static int omap2_unprepare_transfer(struct spi_master *master)
282 { 282 {
283 struct omap2_mcspi *mcspi = spi_master_get_devdata(master); 283 struct omap2_mcspi *mcspi = spi_master_get_devdata(master);
284 284
285 pm_runtime_mark_last_busy(mcspi->dev); 285 pm_runtime_mark_last_busy(mcspi->dev);
286 pm_runtime_put_autosuspend(mcspi->dev); 286 pm_runtime_put_autosuspend(mcspi->dev);
287 return 0; 287 return 0;
288 } 288 }
289 289
290 static int mcspi_wait_for_reg_bit(void __iomem *reg, unsigned long bit) 290 static int mcspi_wait_for_reg_bit(void __iomem *reg, unsigned long bit)
291 { 291 {
292 unsigned long timeout; 292 unsigned long timeout;
293 293
294 timeout = jiffies + msecs_to_jiffies(1000); 294 timeout = jiffies + msecs_to_jiffies(1000);
295 while (!(__raw_readl(reg) & bit)) { 295 while (!(__raw_readl(reg) & bit)) {
296 if (time_after(jiffies, timeout)) 296 if (time_after(jiffies, timeout))
297 return -1; 297 return -1;
298 cpu_relax(); 298 cpu_relax();
299 } 299 }
300 return 0; 300 return 0;
301 } 301 }
302 302
static unsigned
omap2_mcspi_txrx_dma(struct spi_device *spi, struct spi_transfer *xfer)
{
	struct omap2_mcspi *mcspi;
	struct omap2_mcspi_cs *cs = spi->controller_state;
	struct omap2_mcspi_dma *mcspi_dma;
	unsigned int count, c;
	unsigned long base, tx_reg, rx_reg;
	int word_len, data_type, element_count;
	int elements = 0;
	u32 l;
	u8 *rx;
	const u8 *tx;
	void __iomem *chstat_reg;

	mcspi = spi_master_get_devdata(spi->master);
	mcspi_dma = &mcspi->dma_channels[spi->chip_select];
	l = mcspi_cached_chconf0(spi);

	chstat_reg = cs->base + OMAP2_MCSPI_CHSTAT0;

	count = xfer->len;
	c = count;
	word_len = cs->word_len;

	base = cs->phys;
	tx_reg = base + OMAP2_MCSPI_TX0;
	rx_reg = base + OMAP2_MCSPI_RX0;
	rx = xfer->rx_buf;
	tx = xfer->tx_buf;

	if (word_len <= 8) {
		data_type = OMAP_DMA_DATA_TYPE_S8;
		element_count = count;
	} else if (word_len <= 16) {
		data_type = OMAP_DMA_DATA_TYPE_S16;
		element_count = count >> 1;
	} else /* word_len <= 32 */ {
		data_type = OMAP_DMA_DATA_TYPE_S32;
		element_count = count >> 2;
	}

	if (tx != NULL) {
		omap_set_dma_transfer_params(mcspi_dma->dma_tx_channel,
				data_type, element_count, 1,
				OMAP_DMA_SYNC_ELEMENT,
				mcspi_dma->dma_tx_sync_dev, 0);

		omap_set_dma_dest_params(mcspi_dma->dma_tx_channel, 0,
				OMAP_DMA_AMODE_CONSTANT,
				tx_reg, 0, 0);

		omap_set_dma_src_params(mcspi_dma->dma_tx_channel, 0,
				OMAP_DMA_AMODE_POST_INC,
				xfer->tx_dma, 0, 0);
	}

	if (rx != NULL) {
		elements = element_count - 1;
		if (l & OMAP2_MCSPI_CHCONF_TURBO)
			elements--;

		omap_set_dma_transfer_params(mcspi_dma->dma_rx_channel,
				data_type, elements, 1,
				OMAP_DMA_SYNC_ELEMENT,
				mcspi_dma->dma_rx_sync_dev, 1);

		omap_set_dma_src_params(mcspi_dma->dma_rx_channel, 0,
				OMAP_DMA_AMODE_CONSTANT,
				rx_reg, 0, 0);

		omap_set_dma_dest_params(mcspi_dma->dma_rx_channel, 0,
				OMAP_DMA_AMODE_POST_INC,
				xfer->rx_dma, 0, 0);
	}

	if (tx != NULL) {
		omap_start_dma(mcspi_dma->dma_tx_channel);
		omap2_mcspi_set_dma_req(spi, 0, 1);
	}

	if (rx != NULL) {
		omap_start_dma(mcspi_dma->dma_rx_channel);
		omap2_mcspi_set_dma_req(spi, 1, 1);
	}

	if (tx != NULL) {
		wait_for_completion(&mcspi_dma->dma_tx_completion);
		dma_unmap_single(mcspi->dev, xfer->tx_dma, count,
				 DMA_TO_DEVICE);

		/* for TX_ONLY mode, be sure all words have shifted out */
		if (rx == NULL) {
			if (mcspi_wait_for_reg_bit(chstat_reg,
						OMAP2_MCSPI_CHSTAT_TXS) < 0)
				dev_err(&spi->dev, "TXS timed out\n");
			else if (mcspi_wait_for_reg_bit(chstat_reg,
						OMAP2_MCSPI_CHSTAT_EOT) < 0)
				dev_err(&spi->dev, "EOT timed out\n");
		}
	}

	if (rx != NULL) {
		wait_for_completion(&mcspi_dma->dma_rx_completion);
		dma_unmap_single(mcspi->dev, xfer->rx_dma, count,
				 DMA_FROM_DEVICE);
		omap2_mcspi_set_enable(spi, 0);

		if (l & OMAP2_MCSPI_CHCONF_TURBO) {

			if (likely(mcspi_read_cs_reg(spi, OMAP2_MCSPI_CHSTAT0)
				   & OMAP2_MCSPI_CHSTAT_RXS)) {
				u32 w;

				w = mcspi_read_cs_reg(spi, OMAP2_MCSPI_RX0);
				if (word_len <= 8)
					((u8 *)xfer->rx_buf)[elements++] = w;
				else if (word_len <= 16)
					((u16 *)xfer->rx_buf)[elements++] = w;
				else /* word_len <= 32 */
					((u32 *)xfer->rx_buf)[elements++] = w;
			} else {
				dev_err(&spi->dev,
					"DMA RX penultimate word empty");
				count -= (word_len <= 8)  ? 2 :
					 (word_len <= 16) ? 4 :
				       /* word_len <= 32 */ 8;
				omap2_mcspi_set_enable(spi, 1);
				return count;
			}
		}

		if (likely(mcspi_read_cs_reg(spi, OMAP2_MCSPI_CHSTAT0)
			   & OMAP2_MCSPI_CHSTAT_RXS)) {
			u32 w;

			w = mcspi_read_cs_reg(spi, OMAP2_MCSPI_RX0);
			if (word_len <= 8)
				((u8 *)xfer->rx_buf)[elements] = w;
			else if (word_len <= 16)
				((u16 *)xfer->rx_buf)[elements] = w;
			else /* word_len <= 32 */
				((u32 *)xfer->rx_buf)[elements] = w;
		} else {
			dev_err(&spi->dev, "DMA RX last word empty");
			count -= (word_len <= 8)  ? 1 :
				 (word_len <= 16) ? 2 :
			       /* word_len <= 32 */ 4;
		}
		omap2_mcspi_set_enable(spi, 1);
	}
	return count;
}

static unsigned
omap2_mcspi_txrx_pio(struct spi_device *spi, struct spi_transfer *xfer)
{
	struct omap2_mcspi *mcspi;
	struct omap2_mcspi_cs *cs = spi->controller_state;
	unsigned int count, c;
	u32 l;
	void __iomem *base = cs->base;
	void __iomem *tx_reg;
	void __iomem *rx_reg;
	void __iomem *chstat_reg;
	int word_len;

	mcspi = spi_master_get_devdata(spi->master);
	count = xfer->len;
	c = count;
	word_len = cs->word_len;

	l = mcspi_cached_chconf0(spi);

	/* We store the pre-calculated register addresses on stack to speed
	 * up the transfer loop. */
	tx_reg = base + OMAP2_MCSPI_TX0;
	rx_reg = base + OMAP2_MCSPI_RX0;
	chstat_reg = base + OMAP2_MCSPI_CHSTAT0;

	if (c < (word_len >> 3))
		return 0;

	if (word_len <= 8) {
		u8 *rx;
		const u8 *tx;

		rx = xfer->rx_buf;
		tx = xfer->tx_buf;

		do {
			c -= 1;
			if (tx != NULL) {
				if (mcspi_wait_for_reg_bit(chstat_reg,
						OMAP2_MCSPI_CHSTAT_TXS) < 0) {
					dev_err(&spi->dev, "TXS timed out\n");
					goto out;
				}
				dev_vdbg(&spi->dev, "write-%d %02x\n",
						word_len, *tx);
				__raw_writel(*tx++, tx_reg);
			}
			if (rx != NULL) {
				if (mcspi_wait_for_reg_bit(chstat_reg,
						OMAP2_MCSPI_CHSTAT_RXS) < 0) {
					dev_err(&spi->dev, "RXS timed out\n");
					goto out;
				}

				if (c == 1 && tx == NULL &&
				    (l & OMAP2_MCSPI_CHCONF_TURBO)) {
					omap2_mcspi_set_enable(spi, 0);
					*rx++ = __raw_readl(rx_reg);
					dev_vdbg(&spi->dev, "read-%d %02x\n",
						    word_len, *(rx - 1));
					if (mcspi_wait_for_reg_bit(chstat_reg,
						OMAP2_MCSPI_CHSTAT_RXS) < 0) {
						dev_err(&spi->dev,
							"RXS timed out\n");
						goto out;
					}
					c = 0;
				} else if (c == 0 && tx == NULL) {
					omap2_mcspi_set_enable(spi, 0);
				}

				*rx++ = __raw_readl(rx_reg);
				dev_vdbg(&spi->dev, "read-%d %02x\n",
						word_len, *(rx - 1));
			}
		} while (c);
	} else if (word_len <= 16) {
		u16 *rx;
		const u16 *tx;

		rx = xfer->rx_buf;
		tx = xfer->tx_buf;
		do {
			c -= 2;
			if (tx != NULL) {
				if (mcspi_wait_for_reg_bit(chstat_reg,
						OMAP2_MCSPI_CHSTAT_TXS) < 0) {
					dev_err(&spi->dev, "TXS timed out\n");
					goto out;
				}
				dev_vdbg(&spi->dev, "write-%d %04x\n",
						word_len, *tx);
				__raw_writel(*tx++, tx_reg);
			}
			if (rx != NULL) {
				if (mcspi_wait_for_reg_bit(chstat_reg,
						OMAP2_MCSPI_CHSTAT_RXS) < 0) {
					dev_err(&spi->dev, "RXS timed out\n");
					goto out;
				}

				if (c == 2 && tx == NULL &&
				    (l & OMAP2_MCSPI_CHCONF_TURBO)) {
					omap2_mcspi_set_enable(spi, 0);
					*rx++ = __raw_readl(rx_reg);
					dev_vdbg(&spi->dev, "read-%d %04x\n",
						    word_len, *(rx - 1));
					if (mcspi_wait_for_reg_bit(chstat_reg,
						OMAP2_MCSPI_CHSTAT_RXS) < 0) {
						dev_err(&spi->dev,
							"RXS timed out\n");
						goto out;
					}
					c = 0;
				} else if (c == 0 && tx == NULL) {
					omap2_mcspi_set_enable(spi, 0);
				}

				*rx++ = __raw_readl(rx_reg);
				dev_vdbg(&spi->dev, "read-%d %04x\n",
						word_len, *(rx - 1));
			}
		} while (c >= 2);
	} else if (word_len <= 32) {
		u32 *rx;
		const u32 *tx;

		rx = xfer->rx_buf;
		tx = xfer->tx_buf;
		do {
			c -= 4;
			if (tx != NULL) {
				if (mcspi_wait_for_reg_bit(chstat_reg,
						OMAP2_MCSPI_CHSTAT_TXS) < 0) {
					dev_err(&spi->dev, "TXS timed out\n");
					goto out;
				}
				dev_vdbg(&spi->dev, "write-%d %08x\n",
						word_len, *tx);
				__raw_writel(*tx++, tx_reg);
			}
			if (rx != NULL) {
				if (mcspi_wait_for_reg_bit(chstat_reg,
						OMAP2_MCSPI_CHSTAT_RXS) < 0) {
					dev_err(&spi->dev, "RXS timed out\n");
					goto out;
				}

				if (c == 4 && tx == NULL &&
				    (l & OMAP2_MCSPI_CHCONF_TURBO)) {
					omap2_mcspi_set_enable(spi, 0);
					*rx++ = __raw_readl(rx_reg);
					dev_vdbg(&spi->dev, "read-%d %08x\n",
						    word_len, *(rx - 1));
					if (mcspi_wait_for_reg_bit(chstat_reg,
						OMAP2_MCSPI_CHSTAT_RXS) < 0) {
						dev_err(&spi->dev,
							"RXS timed out\n");
						goto out;
					}
					c = 0;
				} else if (c == 0 && tx == NULL) {
					omap2_mcspi_set_enable(spi, 0);
				}

				*rx++ = __raw_readl(rx_reg);
				dev_vdbg(&spi->dev, "read-%d %08x\n",
						word_len, *(rx - 1));
			}
		} while (c >= 4);
	}

	/* for TX_ONLY mode, be sure all words have shifted out */
	if (xfer->rx_buf == NULL) {
		if (mcspi_wait_for_reg_bit(chstat_reg,
				OMAP2_MCSPI_CHSTAT_TXS) < 0) {
			dev_err(&spi->dev, "TXS timed out\n");
		} else if (mcspi_wait_for_reg_bit(chstat_reg,
				OMAP2_MCSPI_CHSTAT_EOT) < 0)
			dev_err(&spi->dev, "EOT timed out\n");

		/* disable chan to purge rx data received in a TX_ONLY
		 * transfer, otherwise that stale rx data will corrupt the
		 * RX_ONLY transfer that directly follows.
		 */
		omap2_mcspi_set_enable(spi, 0);
	}
out:
	omap2_mcspi_set_enable(spi, 1);
	return count - c;
}

static u32 omap2_mcspi_calc_divisor(u32 speed_hz)
{
	u32 div;

	for (div = 0; div < 15; div++)
		if (speed_hz >= (OMAP2_MCSPI_MAX_FREQ >> div))
			return div;

	return 15;
}

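The divisor helper above is a pure function, so its behavior is easy to check outside the kernel. The sketch below mirrors its loop; the 48 MHz value for `OMAP2_MCSPI_MAX_FREQ` is an assumption (the McSPI functional-clock rate this driver uses), and `calc_divisor` is a hypothetical stand-in for the driver's static function:

```c
#include <stdint.h>

/* Assumed driver constant: 48 MHz McSPI functional clock. */
#define OMAP2_MCSPI_MAX_FREQ 48000000

/* Mirrors omap2_mcspi_calc_divisor(): return the smallest shift 'div'
 * for which the divided clock (MAX_FREQ >> div) does not exceed the
 * requested speed, capped at 15 to fit the 4-bit CLKD field. */
static uint32_t calc_divisor(uint32_t speed_hz)
{
	uint32_t div;

	for (div = 0; div < 15; div++)
		if (speed_hz >= (OMAP2_MCSPI_MAX_FREQ >> div))
			return div;

	return 15;
}
```

Because the divider is a power of two, a request of 1.5 MHz lands exactly on 48 MHz >> 5, while anything below 48 MHz >> 14 saturates at the maximum shift of 15.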
659 /* called only when no transfer is active to this device */ 661 /* called only when no transfer is active to this device */
660 static int omap2_mcspi_setup_transfer(struct spi_device *spi, 662 static int omap2_mcspi_setup_transfer(struct spi_device *spi,
661 struct spi_transfer *t) 663 struct spi_transfer *t)
662 { 664 {
663 struct omap2_mcspi_cs *cs = spi->controller_state; 665 struct omap2_mcspi_cs *cs = spi->controller_state;
664 struct omap2_mcspi *mcspi; 666 struct omap2_mcspi *mcspi;
665 struct spi_master *spi_cntrl; 667 struct spi_master *spi_cntrl;
666 u32 l = 0, div = 0; 668 u32 l = 0, div = 0;
667 u8 word_len = spi->bits_per_word; 669 u8 word_len = spi->bits_per_word;
668 u32 speed_hz = spi->max_speed_hz; 670 u32 speed_hz = spi->max_speed_hz;
669 671
670 mcspi = spi_master_get_devdata(spi->master); 672 mcspi = spi_master_get_devdata(spi->master);
671 spi_cntrl = mcspi->master; 673 spi_cntrl = mcspi->master;
672 674
673 if (t != NULL && t->bits_per_word) 675 if (t != NULL && t->bits_per_word)
674 word_len = t->bits_per_word; 676 word_len = t->bits_per_word;
675 677
676 cs->word_len = word_len; 678 cs->word_len = word_len;
677 679
678 if (t && t->speed_hz) 680 if (t && t->speed_hz)
679 speed_hz = t->speed_hz; 681 speed_hz = t->speed_hz;
680 682
681 speed_hz = min_t(u32, speed_hz, OMAP2_MCSPI_MAX_FREQ); 683 speed_hz = min_t(u32, speed_hz, OMAP2_MCSPI_MAX_FREQ);
682 div = omap2_mcspi_calc_divisor(speed_hz); 684 div = omap2_mcspi_calc_divisor(speed_hz);
683 685
684 l = mcspi_cached_chconf0(spi); 686 l = mcspi_cached_chconf0(spi);
685 687
686 /* standard 4-wire master mode: SCK, MOSI/out, MISO/in, nCS 688 /* standard 4-wire master mode: SCK, MOSI/out, MISO/in, nCS
687 * REVISIT: this controller could support SPI_3WIRE mode. 689 * REVISIT: this controller could support SPI_3WIRE mode.
688 */ 690 */
689 l &= ~(OMAP2_MCSPI_CHCONF_IS|OMAP2_MCSPI_CHCONF_DPE1); 691 l &= ~(OMAP2_MCSPI_CHCONF_IS|OMAP2_MCSPI_CHCONF_DPE1);
690 l |= OMAP2_MCSPI_CHCONF_DPE0; 692 l |= OMAP2_MCSPI_CHCONF_DPE0;
691 693
692 /* wordlength */ 694 /* wordlength */
693 l &= ~OMAP2_MCSPI_CHCONF_WL_MASK; 695 l &= ~OMAP2_MCSPI_CHCONF_WL_MASK;
694 l |= (word_len - 1) << 7; 696 l |= (word_len - 1) << 7;
695 697
696 /* set chipselect polarity; manage with FORCE */ 698 /* set chipselect polarity; manage with FORCE */
697 if (!(spi->mode & SPI_CS_HIGH)) 699 if (!(spi->mode & SPI_CS_HIGH))
698 l |= OMAP2_MCSPI_CHCONF_EPOL; /* active-low; normal */ 700 l |= OMAP2_MCSPI_CHCONF_EPOL; /* active-low; normal */
699 else 701 else
700 l &= ~OMAP2_MCSPI_CHCONF_EPOL; 702 l &= ~OMAP2_MCSPI_CHCONF_EPOL;
701 703
702 /* set clock divisor */ 704 /* set clock divisor */
703 l &= ~OMAP2_MCSPI_CHCONF_CLKD_MASK; 705 l &= ~OMAP2_MCSPI_CHCONF_CLKD_MASK;
704 l |= div << 2; 706 l |= div << 2;
705 707
706 /* set SPI mode 0..3 */ 708 /* set SPI mode 0..3 */
707 if (spi->mode & SPI_CPOL) 709 if (spi->mode & SPI_CPOL)
708 l |= OMAP2_MCSPI_CHCONF_POL; 710 l |= OMAP2_MCSPI_CHCONF_POL;
709 else 711 else
710 l &= ~OMAP2_MCSPI_CHCONF_POL; 712 l &= ~OMAP2_MCSPI_CHCONF_POL;
711 if (spi->mode & SPI_CPHA) 713 if (spi->mode & SPI_CPHA)
712 l |= OMAP2_MCSPI_CHCONF_PHA; 714 l |= OMAP2_MCSPI_CHCONF_PHA;
713 else 715 else
714 l &= ~OMAP2_MCSPI_CHCONF_PHA; 716 l &= ~OMAP2_MCSPI_CHCONF_PHA;
715 717
716 mcspi_write_chconf0(spi, l); 718 mcspi_write_chconf0(spi, l);
717 719
718 dev_dbg(&spi->dev, "setup: speed %d, sample %s edge, clk %s\n", 720 dev_dbg(&spi->dev, "setup: speed %d, sample %s edge, clk %s\n",
719 OMAP2_MCSPI_MAX_FREQ >> div, 721 OMAP2_MCSPI_MAX_FREQ >> div,
720 (spi->mode & SPI_CPHA) ? "trailing" : "leading", 722 (spi->mode & SPI_CPHA) ? "trailing" : "leading",
721 (spi->mode & SPI_CPOL) ? "inverted" : "normal"); 723 (spi->mode & SPI_CPOL) ? "inverted" : "normal");
722 724
723 return 0; 725 return 0;
724 } 726 }
725 727
726 static void omap2_mcspi_dma_rx_callback(int lch, u16 ch_status, void *data) 728 static void omap2_mcspi_dma_rx_callback(int lch, u16 ch_status, void *data)
727 { 729 {
728 struct spi_device *spi = data; 730 struct spi_device *spi = data;
729 struct omap2_mcspi *mcspi; 731 struct omap2_mcspi *mcspi;
730 struct omap2_mcspi_dma *mcspi_dma; 732 struct omap2_mcspi_dma *mcspi_dma;
731 733
732 mcspi = spi_master_get_devdata(spi->master); 734 mcspi = spi_master_get_devdata(spi->master);
733 mcspi_dma = &(mcspi->dma_channels[spi->chip_select]); 735 mcspi_dma = &(mcspi->dma_channels[spi->chip_select]);
734 736
735 complete(&mcspi_dma->dma_rx_completion); 737 complete(&mcspi_dma->dma_rx_completion);
736 738
737 /* We must disable the DMA RX request */ 739 /* We must disable the DMA RX request */
738 omap2_mcspi_set_dma_req(spi, 1, 0); 740 omap2_mcspi_set_dma_req(spi, 1, 0);
739 } 741 }
740 742
741 static void omap2_mcspi_dma_tx_callback(int lch, u16 ch_status, void *data) 743 static void omap2_mcspi_dma_tx_callback(int lch, u16 ch_status, void *data)
742 { 744 {
743 struct spi_device *spi = data; 745 struct spi_device *spi = data;
744 struct omap2_mcspi *mcspi; 746 struct omap2_mcspi *mcspi;
745 struct omap2_mcspi_dma *mcspi_dma; 747 struct omap2_mcspi_dma *mcspi_dma;
746 748
747 mcspi = spi_master_get_devdata(spi->master); 749 mcspi = spi_master_get_devdata(spi->master);
748 mcspi_dma = &(mcspi->dma_channels[spi->chip_select]); 750 mcspi_dma = &(mcspi->dma_channels[spi->chip_select]);
749 751
750 complete(&mcspi_dma->dma_tx_completion); 752 complete(&mcspi_dma->dma_tx_completion);
751 753
752 /* We must disable the DMA TX request */ 754 /* We must disable the DMA TX request */
753 omap2_mcspi_set_dma_req(spi, 0, 0); 755 omap2_mcspi_set_dma_req(spi, 0, 0);
754 } 756 }
755 757
756 static int omap2_mcspi_request_dma(struct spi_device *spi) 758 static int omap2_mcspi_request_dma(struct spi_device *spi)
757 { 759 {
758 struct spi_master *master = spi->master; 760 struct spi_master *master = spi->master;
759 struct omap2_mcspi *mcspi; 761 struct omap2_mcspi *mcspi;
760 struct omap2_mcspi_dma *mcspi_dma; 762 struct omap2_mcspi_dma *mcspi_dma;
761 763
762 mcspi = spi_master_get_devdata(master); 764 mcspi = spi_master_get_devdata(master);
763 mcspi_dma = mcspi->dma_channels + spi->chip_select; 765 mcspi_dma = mcspi->dma_channels + spi->chip_select;
764 766
765 if (omap_request_dma(mcspi_dma->dma_rx_sync_dev, "McSPI RX", 767 if (omap_request_dma(mcspi_dma->dma_rx_sync_dev, "McSPI RX",
766 omap2_mcspi_dma_rx_callback, spi, 768 omap2_mcspi_dma_rx_callback, spi,
767 &mcspi_dma->dma_rx_channel)) { 769 &mcspi_dma->dma_rx_channel)) {
768 dev_err(&spi->dev, "no RX DMA channel for McSPI\n"); 770 dev_err(&spi->dev, "no RX DMA channel for McSPI\n");
769 return -EAGAIN; 771 return -EAGAIN;
770 } 772 }
771 773
772 if (omap_request_dma(mcspi_dma->dma_tx_sync_dev, "McSPI TX", 774 if (omap_request_dma(mcspi_dma->dma_tx_sync_dev, "McSPI TX",
773 omap2_mcspi_dma_tx_callback, spi, 775 omap2_mcspi_dma_tx_callback, spi,
774 &mcspi_dma->dma_tx_channel)) { 776 &mcspi_dma->dma_tx_channel)) {
775 omap_free_dma(mcspi_dma->dma_rx_channel); 777 omap_free_dma(mcspi_dma->dma_rx_channel);
776 mcspi_dma->dma_rx_channel = -1; 778 mcspi_dma->dma_rx_channel = -1;
777 dev_err(&spi->dev, "no TX DMA channel for McSPI\n"); 779 dev_err(&spi->dev, "no TX DMA channel for McSPI\n");
778 return -EAGAIN; 780 return -EAGAIN;
779 } 781 }
780 782
781 init_completion(&mcspi_dma->dma_rx_completion); 783 init_completion(&mcspi_dma->dma_rx_completion);
782 init_completion(&mcspi_dma->dma_tx_completion); 784 init_completion(&mcspi_dma->dma_tx_completion);
783 785
784 return 0; 786 return 0;
785 } 787 }
786 788
787 static int omap2_mcspi_setup(struct spi_device *spi) 789 static int omap2_mcspi_setup(struct spi_device *spi)
788 { 790 {
789 int ret; 791 int ret;
790 struct omap2_mcspi *mcspi = spi_master_get_devdata(spi->master); 792 struct omap2_mcspi *mcspi = spi_master_get_devdata(spi->master);
791 struct omap2_mcspi_regs *ctx = &mcspi->ctx; 793 struct omap2_mcspi_regs *ctx = &mcspi->ctx;
	struct omap2_mcspi_dma	*mcspi_dma;
	struct omap2_mcspi_cs	*cs = spi->controller_state;

	if (spi->bits_per_word < 4 || spi->bits_per_word > 32) {
		dev_dbg(&spi->dev, "setup: unsupported %d bit words\n",
			spi->bits_per_word);
		return -EINVAL;
	}

	mcspi_dma = &mcspi->dma_channels[spi->chip_select];

	if (!cs) {
		cs = kzalloc(sizeof *cs, GFP_KERNEL);
		if (!cs)
			return -ENOMEM;
		cs->base = mcspi->base + spi->chip_select * 0x14;
		cs->phys = mcspi->phys + spi->chip_select * 0x14;
		cs->chconf0 = 0;
		spi->controller_state = cs;
		/* Link this to context save list */
		list_add_tail(&cs->node, &ctx->cs);
	}

	if (mcspi_dma->dma_rx_channel == -1
			|| mcspi_dma->dma_tx_channel == -1) {
		ret = omap2_mcspi_request_dma(spi);
		if (ret < 0)
			return ret;
	}

	ret = omap2_mcspi_enable_clocks(mcspi);
	if (ret < 0)
		return ret;

	ret = omap2_mcspi_setup_transfer(spi, NULL);
	omap2_mcspi_disable_clocks(mcspi);

	return ret;
}

static void omap2_mcspi_cleanup(struct spi_device *spi)
{
	struct omap2_mcspi	*mcspi;
	struct omap2_mcspi_dma	*mcspi_dma;
	struct omap2_mcspi_cs	*cs;

	mcspi = spi_master_get_devdata(spi->master);

	if (spi->controller_state) {
		/* Unlink controller state from context save list */
		cs = spi->controller_state;
		list_del(&cs->node);

		kfree(cs);
	}

	if (spi->chip_select < spi->master->num_chipselect) {
		mcspi_dma = &mcspi->dma_channels[spi->chip_select];

		if (mcspi_dma->dma_rx_channel != -1) {
			omap_free_dma(mcspi_dma->dma_rx_channel);
			mcspi_dma->dma_rx_channel = -1;
		}
		if (mcspi_dma->dma_tx_channel != -1) {
			omap_free_dma(mcspi_dma->dma_tx_channel);
			mcspi_dma->dma_tx_channel = -1;
		}
	}
}

static void omap2_mcspi_work(struct omap2_mcspi *mcspi, struct spi_message *m)
{

	/* We only enable one channel at a time -- the one whose message is
	 * at the head of the queue -- although this controller would gladly
	 * arbitrate among multiple channels.  This corresponds to "single
	 * channel" master mode.  As a side effect, we need to manage the
	 * chipselect with the FORCE bit ... CS != channel enable.
	 */

	struct spi_device		*spi;
	struct spi_transfer		*t = NULL;
	int				cs_active = 0;
	struct omap2_mcspi_cs		*cs;
	struct omap2_mcspi_device_config *cd;
	int				par_override = 0;
	int				status = 0;
	u32				chconf;

	spi = m->spi;
	cs = spi->controller_state;
	cd = spi->controller_data;

	omap2_mcspi_set_enable(spi, 1);
	list_for_each_entry(t, &m->transfers, transfer_list) {
		if (t->tx_buf == NULL && t->rx_buf == NULL && t->len) {
			status = -EINVAL;
			break;
		}
		if (par_override || t->speed_hz || t->bits_per_word) {
			par_override = 1;
			status = omap2_mcspi_setup_transfer(spi, t);
			if (status < 0)
				break;
			if (!t->speed_hz && !t->bits_per_word)
				par_override = 0;
		}

		if (!cs_active) {
			omap2_mcspi_force_cs(spi, 1);
			cs_active = 1;
		}

		chconf = mcspi_cached_chconf0(spi);
		chconf &= ~OMAP2_MCSPI_CHCONF_TRM_MASK;
		chconf &= ~OMAP2_MCSPI_CHCONF_TURBO;

		if (t->tx_buf == NULL)
			chconf |= OMAP2_MCSPI_CHCONF_TRM_RX_ONLY;
		else if (t->rx_buf == NULL)
			chconf |= OMAP2_MCSPI_CHCONF_TRM_TX_ONLY;

		if (cd && cd->turbo_mode && t->tx_buf == NULL) {
			/* Turbo mode is for more than one word */
			if (t->len > ((cs->word_len + 7) >> 3))
				chconf |= OMAP2_MCSPI_CHCONF_TURBO;
		}

		mcspi_write_chconf0(spi, chconf);

		if (t->len) {
			unsigned	count;

			/* RX_ONLY mode needs dummy data in TX reg */
			if (t->tx_buf == NULL)
				__raw_writel(0, cs->base
						+ OMAP2_MCSPI_TX0);

			if (m->is_dma_mapped || t->len >= DMA_MIN_BYTES)
				count = omap2_mcspi_txrx_dma(spi, t);
			else
				count = omap2_mcspi_txrx_pio(spi, t);
			m->actual_length += count;

			if (count != t->len) {
				status = -EIO;
				break;
			}
		}

		if (t->delay_usecs)
			udelay(t->delay_usecs);

		/* ignore the "leave it on after last xfer" hint */
		if (t->cs_change) {
			omap2_mcspi_force_cs(spi, 0);
			cs_active = 0;
		}
	}
	/* Restore defaults if they were overridden */
	if (par_override) {
		par_override = 0;
		status = omap2_mcspi_setup_transfer(spi, NULL);
	}

	if (cs_active)
		omap2_mcspi_force_cs(spi, 0);

	omap2_mcspi_set_enable(spi, 0);

	m->status = status;

}

static int omap2_mcspi_transfer_one_message(struct spi_master *master,
		struct spi_message *m)
{
	struct omap2_mcspi	*mcspi;
	struct spi_transfer	*t;

	mcspi = spi_master_get_devdata(master);
	m->actual_length = 0;
	m->status = 0;

	/* reject invalid messages and transfers */
	if (list_empty(&m->transfers))
		return -EINVAL;
	list_for_each_entry(t, &m->transfers, transfer_list) {
		const void	*tx_buf = t->tx_buf;
		void		*rx_buf = t->rx_buf;
		unsigned	len = t->len;

		if (t->speed_hz > OMAP2_MCSPI_MAX_FREQ
				|| (len && !(rx_buf || tx_buf))
				|| (t->bits_per_word &&
					(t->bits_per_word < 4
					|| t->bits_per_word > 32))) {
			dev_dbg(mcspi->dev, "transfer: %d Hz, %d %s%s, %d bpw\n",
					t->speed_hz,
					len,
					tx_buf ? "tx" : "",
					rx_buf ? "rx" : "",
					t->bits_per_word);
			return -EINVAL;
		}
		if (t->speed_hz && t->speed_hz < (OMAP2_MCSPI_MAX_FREQ >> 15)) {
			dev_dbg(mcspi->dev, "speed_hz %d below minimum %d Hz\n",
					t->speed_hz,
					OMAP2_MCSPI_MAX_FREQ >> 15);
			return -EINVAL;
		}

		if (m->is_dma_mapped || len < DMA_MIN_BYTES)
			continue;

		if (tx_buf != NULL) {
			t->tx_dma = dma_map_single(mcspi->dev, (void *) tx_buf,
					len, DMA_TO_DEVICE);
			if (dma_mapping_error(mcspi->dev, t->tx_dma)) {
				dev_dbg(mcspi->dev, "dma %cX %d bytes error\n",
						'T', len);
				return -EINVAL;
			}
		}
		if (rx_buf != NULL) {
			t->rx_dma = dma_map_single(mcspi->dev, rx_buf, t->len,
					DMA_FROM_DEVICE);
			if (dma_mapping_error(mcspi->dev, t->rx_dma)) {
				dev_dbg(mcspi->dev, "dma %cX %d bytes error\n",
						'R', len);
				if (tx_buf != NULL)
					dma_unmap_single(mcspi->dev, t->tx_dma,
							len, DMA_TO_DEVICE);
				return -EINVAL;
			}
		}
	}

	omap2_mcspi_work(mcspi, m);
	spi_finalize_current_message(master);
	return 0;
}

-static int __init omap2_mcspi_master_setup(struct omap2_mcspi *mcspi)
+static int __devinit omap2_mcspi_master_setup(struct omap2_mcspi *mcspi)
{
	struct spi_master	*master = mcspi->master;
	struct omap2_mcspi_regs	*ctx = &mcspi->ctx;
	int			ret = 0;

	ret = omap2_mcspi_enable_clocks(mcspi);
	if (ret < 0)
		return ret;

	mcspi_write_reg(master, OMAP2_MCSPI_WAKEUPENABLE,
			OMAP2_MCSPI_WAKEUPENABLE_WKEN);
	ctx->wakeupenable = OMAP2_MCSPI_WAKEUPENABLE_WKEN;

	omap2_mcspi_set_master_mode(master);
	omap2_mcspi_disable_clocks(mcspi);
	return 0;
}

static int omap_mcspi_runtime_resume(struct device *dev)
{
	struct omap2_mcspi	*mcspi;
	struct spi_master	*master;

	master = dev_get_drvdata(dev);
	mcspi = spi_master_get_devdata(master);
	omap2_mcspi_restore_ctx(mcspi);

	return 0;
}

static struct omap2_mcspi_platform_config omap2_pdata = {
	.regs_offset = 0,
};

static struct omap2_mcspi_platform_config omap4_pdata = {
	.regs_offset = OMAP4_MCSPI_REG_OFFSET,
};

static const struct of_device_id omap_mcspi_of_match[] = {
	{
		.compatible = "ti,omap2-mcspi",
		.data = &omap2_pdata,
	},
	{
		.compatible = "ti,omap4-mcspi",
		.data = &omap4_pdata,
	},
	{ },
};
MODULE_DEVICE_TABLE(of, omap_mcspi_of_match);

static int __devinit omap2_mcspi_probe(struct platform_device *pdev)
{
	struct spi_master	*master;
	struct omap2_mcspi_platform_config *pdata;
	struct omap2_mcspi	*mcspi;
	struct resource		*r;
	int			status = 0, i;
	u32			regs_offset = 0;
	static int		bus_num = 1;
	struct device_node	*node = pdev->dev.of_node;
	const struct of_device_id *match;

	master = spi_alloc_master(&pdev->dev, sizeof *mcspi);
	if (master == NULL) {
		dev_dbg(&pdev->dev, "master allocation failed\n");
		return -ENOMEM;
	}

	/* the spi->mode bits understood by this driver: */
	master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH;

	master->setup = omap2_mcspi_setup;
	master->prepare_transfer_hardware = omap2_prepare_transfer;
	master->unprepare_transfer_hardware = omap2_unprepare_transfer;
	master->transfer_one_message = omap2_mcspi_transfer_one_message;
	master->cleanup = omap2_mcspi_cleanup;
	master->dev.of_node = node;

	match = of_match_device(omap_mcspi_of_match, &pdev->dev);
	if (match) {
		u32 num_cs = 1; /* default number of chipselect */
		pdata = match->data;

		of_property_read_u32(node, "ti,spi-num-cs", &num_cs);
		master->num_chipselect = num_cs;
		master->bus_num = bus_num++;
	} else {
		pdata = pdev->dev.platform_data;
		master->num_chipselect = pdata->num_cs;
		if (pdev->id != -1)
			master->bus_num = pdev->id;
	}
	regs_offset = pdata->regs_offset;

	dev_set_drvdata(&pdev->dev, master);

	mcspi = spi_master_get_devdata(master);
	mcspi->master = master;

	r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	if (r == NULL) {
		status = -ENODEV;
		goto free_master;
	}

	r->start += regs_offset;
	r->end += regs_offset;
	mcspi->phys = r->start;

	mcspi->base = devm_request_and_ioremap(&pdev->dev, r);
	if (!mcspi->base) {
		dev_dbg(&pdev->dev, "can't ioremap MCSPI\n");
		status = -ENOMEM;
		goto free_master;
	}

	mcspi->dev = &pdev->dev;

	INIT_LIST_HEAD(&mcspi->ctx.cs);

	mcspi->dma_channels = kcalloc(master->num_chipselect,
			sizeof(struct omap2_mcspi_dma),
			GFP_KERNEL);

	if (mcspi->dma_channels == NULL)
		goto free_master;

	for (i = 0; i < master->num_chipselect; i++) {
		char dma_ch_name[14];
		struct resource *dma_res;

		sprintf(dma_ch_name, "rx%d", i);
		dma_res = platform_get_resource_byname(pdev, IORESOURCE_DMA,
				dma_ch_name);
		if (!dma_res) {
			dev_dbg(&pdev->dev, "cannot get DMA RX channel\n");
			status = -ENODEV;
			break;
		}

		mcspi->dma_channels[i].dma_rx_channel = -1;
		mcspi->dma_channels[i].dma_rx_sync_dev = dma_res->start;
		sprintf(dma_ch_name, "tx%d", i);
		dma_res = platform_get_resource_byname(pdev, IORESOURCE_DMA,
				dma_ch_name);
		if (!dma_res) {
			dev_dbg(&pdev->dev, "cannot get DMA TX channel\n");
			status = -ENODEV;
			break;
		}

		mcspi->dma_channels[i].dma_tx_channel = -1;
		mcspi->dma_channels[i].dma_tx_sync_dev = dma_res->start;
	}

	if (status < 0)
		goto dma_chnl_free;

	pm_runtime_use_autosuspend(&pdev->dev);
	pm_runtime_set_autosuspend_delay(&pdev->dev, SPI_AUTOSUSPEND_TIMEOUT);
	pm_runtime_enable(&pdev->dev);

	if (status || omap2_mcspi_master_setup(mcspi) < 0)
		goto disable_pm;

	status = spi_register_master(master);
	if (status < 0)
		goto err_spi_register;

	return status;

err_spi_register:
	spi_master_put(master);
disable_pm:
	pm_runtime_disable(&pdev->dev);
dma_chnl_free:
	kfree(mcspi->dma_channels);
free_master:
	kfree(master);
	platform_set_drvdata(pdev, NULL);
	return status;
}

static int __devexit omap2_mcspi_remove(struct platform_device *pdev)
{
	struct spi_master	*master;
	struct omap2_mcspi	*mcspi;
	struct omap2_mcspi_dma	*dma_channels;

	master = dev_get_drvdata(&pdev->dev);
	mcspi = spi_master_get_devdata(master);
	dma_channels = mcspi->dma_channels;

	omap2_mcspi_disable_clocks(mcspi);
	pm_runtime_disable(&pdev->dev);

	spi_unregister_master(master);
	kfree(dma_channels);
	platform_set_drvdata(pdev, NULL);

	return 0;
}

/* work with hotplug and coldplug */
MODULE_ALIAS("platform:omap2_mcspi");

#ifdef CONFIG_SUSPEND
/*
 * When the SPI module wakes up from off-mode, CS is in the active state.
 * If it was inactive when the driver was suspended, force it back to the
 * inactive state at wake-up.
 */
static int omap2_mcspi_resume(struct device *dev)
{
	struct spi_master	*master = dev_get_drvdata(dev);
	struct omap2_mcspi	*mcspi = spi_master_get_devdata(master);
	struct omap2_mcspi_regs	*ctx = &mcspi->ctx;
	struct omap2_mcspi_cs	*cs;

	omap2_mcspi_enable_clocks(mcspi);
	list_for_each_entry(cs, &ctx->cs, node) {
		if ((cs->chconf0 & OMAP2_MCSPI_CHCONF_FORCE) == 0) {
			/*
			 * We need to toggle the CS state for the OMAP to
			 * take this change into account.
			 */
			MOD_REG_BIT(cs->chconf0, OMAP2_MCSPI_CHCONF_FORCE, 1);
			__raw_writel(cs->chconf0, cs->base + OMAP2_MCSPI_CHCONF0);
			MOD_REG_BIT(cs->chconf0, OMAP2_MCSPI_CHCONF_FORCE, 0);
			__raw_writel(cs->chconf0, cs->base + OMAP2_MCSPI_CHCONF0);
		}
	}
	omap2_mcspi_disable_clocks(mcspi);
	return 0;
}
#else
#define omap2_mcspi_resume	NULL
#endif

static const struct dev_pm_ops omap2_mcspi_pm_ops = {
	.resume = omap2_mcspi_resume,
	.runtime_resume = omap_mcspi_runtime_resume,
};

static struct platform_driver omap2_mcspi_driver = {
	.driver = {
		.name = "omap2_mcspi",
		.owner = THIS_MODULE,
		.pm = &omap2_mcspi_pm_ops,
		.of_match_table = omap_mcspi_of_match,
	},
	.probe = omap2_mcspi_probe,
	.remove = __devexit_p(omap2_mcspi_remove),
};

module_platform_driver(omap2_mcspi_driver);
MODULE_LICENSE("GPL");

drivers/spi/spi-orion.c
/*
 * Marvell Orion SPI controller driver
 *
 * Author: Shadi Ammouri <shadi@marvell.com>
 * Copyright (C) 2007-2008 Marvell Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */

#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
#include <linux/platform_device.h>
#include <linux/err.h>
#include <linux/io.h>
#include <linux/spi/spi.h>
#include <linux/module.h>
+#include <linux/of.h>
#include <linux/clk.h>
#include <asm/unaligned.h>

#define DRIVER_NAME			"orion_spi"

#define ORION_NUM_CHIPSELECTS		1 /* only one slave is supported */
#define ORION_SPI_WAIT_RDY_MAX_LOOP	2000 /* in usec */

#define ORION_SPI_IF_CTRL_REG		0x00
#define ORION_SPI_IF_CONFIG_REG		0x04
#define ORION_SPI_DATA_OUT_REG		0x08
#define ORION_SPI_DATA_IN_REG		0x0c
#define ORION_SPI_INT_CAUSE_REG		0x10

#define ORION_SPI_IF_8_16_BIT_MODE	(1 << 5)
#define ORION_SPI_CLK_PRESCALE_MASK	0x1F

struct orion_spi {
	struct work_struct	work;

	/* Lock access to transfer list. */
	spinlock_t		lock;

	struct list_head	msg_queue;
	struct spi_master	*master;
	void __iomem		*base;
	unsigned int		max_speed;
	unsigned int		min_speed;
-	struct orion_spi_info	*spi_info;
	struct clk		*clk;
};

static struct workqueue_struct *orion_spi_wq;

static inline void __iomem *spi_reg(struct orion_spi *orion_spi, u32 reg)
{
	return orion_spi->base + reg;
}

static inline void
orion_spi_setbits(struct orion_spi *orion_spi, u32 reg, u32 mask)
{
	void __iomem *reg_addr = spi_reg(orion_spi, reg);
	u32 val;

	val = readl(reg_addr);
	val |= mask;
	writel(val, reg_addr);
}

static inline void
orion_spi_clrbits(struct orion_spi *orion_spi, u32 reg, u32 mask)
{
	void __iomem *reg_addr = spi_reg(orion_spi, reg);
	u32 val;

	val = readl(reg_addr);
	val &= ~mask;
	writel(val, reg_addr);
}

static int orion_spi_set_transfer_size(struct orion_spi *orion_spi, int size)
{
	if (size == 16) {
		orion_spi_setbits(orion_spi, ORION_SPI_IF_CONFIG_REG,
				  ORION_SPI_IF_8_16_BIT_MODE);
	} else if (size == 8) {
		orion_spi_clrbits(orion_spi, ORION_SPI_IF_CONFIG_REG,
				  ORION_SPI_IF_8_16_BIT_MODE);
	} else {
		pr_debug("Bad bits per word value %d (only 8 or 16 are "
			 "allowed).\n", size);
		return -EINVAL;
	}

	return 0;
}

static int orion_spi_baudrate_set(struct spi_device *spi, unsigned int speed)
{
	u32 tclk_hz;
	u32 rate;
	u32 prescale;
	u32 reg;
	struct orion_spi *orion_spi;

	orion_spi = spi_master_get_devdata(spi->master);

	tclk_hz = clk_get_rate(orion_spi->clk);

	/*
	 * the supported rates are: 4,6,8...30
	 * round up as we look for equal or less speed
	 */
	rate = DIV_ROUND_UP(tclk_hz, speed);
	rate = roundup(rate, 2);

	/* check if requested speed is too small */
	if (rate > 30)
		return -EINVAL;

	if (rate < 4)
		rate = 4;

	/* Convert the rate to SPI clock divisor value. */
	prescale = 0x10 + rate/2;

	reg = readl(spi_reg(orion_spi, ORION_SPI_IF_CONFIG_REG));
	reg = ((reg & ~ORION_SPI_CLK_PRESCALE_MASK) | prescale);
	writel(reg, spi_reg(orion_spi, ORION_SPI_IF_CONFIG_REG));

	return 0;
}
133 133
134 /* 134 /*
135 * called only when no transfer is active on the bus 135 * called only when no transfer is active on the bus
136 */ 136 */
137 static int 137 static int
138 orion_spi_setup_transfer(struct spi_device *spi, struct spi_transfer *t) 138 orion_spi_setup_transfer(struct spi_device *spi, struct spi_transfer *t)
139 { 139 {
140 struct orion_spi *orion_spi; 140 struct orion_spi *orion_spi;
141 unsigned int speed = spi->max_speed_hz; 141 unsigned int speed = spi->max_speed_hz;
142 unsigned int bits_per_word = spi->bits_per_word; 142 unsigned int bits_per_word = spi->bits_per_word;
143 int rc; 143 int rc;
144 144
145 orion_spi = spi_master_get_devdata(spi->master); 145 orion_spi = spi_master_get_devdata(spi->master);
146 146
147 if ((t != NULL) && t->speed_hz) 147 if ((t != NULL) && t->speed_hz)
148 speed = t->speed_hz; 148 speed = t->speed_hz;
149 149
150 if ((t != NULL) && t->bits_per_word) 150 if ((t != NULL) && t->bits_per_word)
151 bits_per_word = t->bits_per_word; 151 bits_per_word = t->bits_per_word;
152 152
153 rc = orion_spi_baudrate_set(spi, speed); 153 rc = orion_spi_baudrate_set(spi, speed);
154 if (rc) 154 if (rc)
155 return rc; 155 return rc;
156 156
157 return orion_spi_set_transfer_size(orion_spi, bits_per_word); 157 return orion_spi_set_transfer_size(orion_spi, bits_per_word);
158 } 158 }
159 159
160 static void orion_spi_set_cs(struct orion_spi *orion_spi, int enable) 160 static void orion_spi_set_cs(struct orion_spi *orion_spi, int enable)
161 { 161 {
162 if (enable) 162 if (enable)
163 orion_spi_setbits(orion_spi, ORION_SPI_IF_CTRL_REG, 0x1); 163 orion_spi_setbits(orion_spi, ORION_SPI_IF_CTRL_REG, 0x1);
164 else 164 else
165 orion_spi_clrbits(orion_spi, ORION_SPI_IF_CTRL_REG, 0x1); 165 orion_spi_clrbits(orion_spi, ORION_SPI_IF_CTRL_REG, 0x1);
166 } 166 }
167 167
168 static inline int orion_spi_wait_till_ready(struct orion_spi *orion_spi) 168 static inline int orion_spi_wait_till_ready(struct orion_spi *orion_spi)
169 { 169 {
170 int i; 170 int i;
171 171
172 for (i = 0; i < ORION_SPI_WAIT_RDY_MAX_LOOP; i++) { 172 for (i = 0; i < ORION_SPI_WAIT_RDY_MAX_LOOP; i++) {
173 if (readl(spi_reg(orion_spi, ORION_SPI_INT_CAUSE_REG))) 173 if (readl(spi_reg(orion_spi, ORION_SPI_INT_CAUSE_REG)))
174 return 1; 174 return 1;
175 else 175 else
176 udelay(1); 176 udelay(1);
177 } 177 }
178 178
179 return -1; 179 return -1;
180 } 180 }
181 181
182 static inline int 182 static inline int
183 orion_spi_write_read_8bit(struct spi_device *spi, 183 orion_spi_write_read_8bit(struct spi_device *spi,
184 const u8 **tx_buf, u8 **rx_buf) 184 const u8 **tx_buf, u8 **rx_buf)
185 { 185 {
186 void __iomem *tx_reg, *rx_reg, *int_reg; 186 void __iomem *tx_reg, *rx_reg, *int_reg;
187 struct orion_spi *orion_spi; 187 struct orion_spi *orion_spi;
188 188
189 orion_spi = spi_master_get_devdata(spi->master); 189 orion_spi = spi_master_get_devdata(spi->master);
190 tx_reg = spi_reg(orion_spi, ORION_SPI_DATA_OUT_REG); 190 tx_reg = spi_reg(orion_spi, ORION_SPI_DATA_OUT_REG);
191 rx_reg = spi_reg(orion_spi, ORION_SPI_DATA_IN_REG); 191 rx_reg = spi_reg(orion_spi, ORION_SPI_DATA_IN_REG);
192 int_reg = spi_reg(orion_spi, ORION_SPI_INT_CAUSE_REG); 192 int_reg = spi_reg(orion_spi, ORION_SPI_INT_CAUSE_REG);
193 193
194 /* clear the interrupt cause register */ 194 /* clear the interrupt cause register */
195 writel(0x0, int_reg); 195 writel(0x0, int_reg);
196 196
197 if (tx_buf && *tx_buf) 197 if (tx_buf && *tx_buf)
198 writel(*(*tx_buf)++, tx_reg); 198 writel(*(*tx_buf)++, tx_reg);
199 else 199 else
200 writel(0, tx_reg); 200 writel(0, tx_reg);
201 201
202 if (orion_spi_wait_till_ready(orion_spi) < 0) { 202 if (orion_spi_wait_till_ready(orion_spi) < 0) {
203 dev_err(&spi->dev, "TXS timed out\n"); 203 dev_err(&spi->dev, "TXS timed out\n");
204 return -1; 204 return -1;
205 } 205 }
206 206
207 if (rx_buf && *rx_buf) 207 if (rx_buf && *rx_buf)
208 *(*rx_buf)++ = readl(rx_reg); 208 *(*rx_buf)++ = readl(rx_reg);
209 209
210 return 1; 210 return 1;
211 } 211 }
212 212
213 static inline int 213 static inline int
214 orion_spi_write_read_16bit(struct spi_device *spi, 214 orion_spi_write_read_16bit(struct spi_device *spi,
215 const u16 **tx_buf, u16 **rx_buf) 215 const u16 **tx_buf, u16 **rx_buf)
216 { 216 {
217 void __iomem *tx_reg, *rx_reg, *int_reg; 217 void __iomem *tx_reg, *rx_reg, *int_reg;
218 struct orion_spi *orion_spi; 218 struct orion_spi *orion_spi;
219 219
220 orion_spi = spi_master_get_devdata(spi->master); 220 orion_spi = spi_master_get_devdata(spi->master);
221 tx_reg = spi_reg(orion_spi, ORION_SPI_DATA_OUT_REG); 221 tx_reg = spi_reg(orion_spi, ORION_SPI_DATA_OUT_REG);
222 rx_reg = spi_reg(orion_spi, ORION_SPI_DATA_IN_REG); 222 rx_reg = spi_reg(orion_spi, ORION_SPI_DATA_IN_REG);
223 int_reg = spi_reg(orion_spi, ORION_SPI_INT_CAUSE_REG); 223 int_reg = spi_reg(orion_spi, ORION_SPI_INT_CAUSE_REG);
224 224
225 /* clear the interrupt cause register */ 225 /* clear the interrupt cause register */
226 writel(0x0, int_reg); 226 writel(0x0, int_reg);
227 227
228 if (tx_buf && *tx_buf) 228 if (tx_buf && *tx_buf)
229 writel(__cpu_to_le16(get_unaligned((*tx_buf)++)), tx_reg); 229 writel(__cpu_to_le16(get_unaligned((*tx_buf)++)), tx_reg);
230 else 230 else
231 writel(0, tx_reg); 231 writel(0, tx_reg);
232 232
233 if (orion_spi_wait_till_ready(orion_spi) < 0) { 233 if (orion_spi_wait_till_ready(orion_spi) < 0) {
234 dev_err(&spi->dev, "TXS timed out\n"); 234 dev_err(&spi->dev, "TXS timed out\n");
235 return -1; 235 return -1;
236 } 236 }
237 237
238 if (rx_buf && *rx_buf) 238 if (rx_buf && *rx_buf)
239 put_unaligned(__le16_to_cpu(readl(rx_reg)), (*rx_buf)++); 239 put_unaligned(__le16_to_cpu(readl(rx_reg)), (*rx_buf)++);
240 240
241 return 1; 241 return 1;
242 } 242 }
243 243
244 static unsigned int 244 static unsigned int
245 orion_spi_write_read(struct spi_device *spi, struct spi_transfer *xfer) 245 orion_spi_write_read(struct spi_device *spi, struct spi_transfer *xfer)
246 { 246 {
247 struct orion_spi *orion_spi; 247 struct orion_spi *orion_spi;
248 unsigned int count; 248 unsigned int count;
249 int word_len; 249 int word_len;
250 250
251 orion_spi = spi_master_get_devdata(spi->master); 251 orion_spi = spi_master_get_devdata(spi->master);
252 word_len = spi->bits_per_word; 252 word_len = spi->bits_per_word;
253 count = xfer->len; 253 count = xfer->len;
254 254
255 if (word_len == 8) { 255 if (word_len == 8) {
256 const u8 *tx = xfer->tx_buf; 256 const u8 *tx = xfer->tx_buf;
257 u8 *rx = xfer->rx_buf; 257 u8 *rx = xfer->rx_buf;
258 258
259 do { 259 do {
260 if (orion_spi_write_read_8bit(spi, &tx, &rx) < 0) 260 if (orion_spi_write_read_8bit(spi, &tx, &rx) < 0)
261 goto out; 261 goto out;
262 count--; 262 count--;
263 } while (count); 263 } while (count);
264 } else if (word_len == 16) { 264 } else if (word_len == 16) {
265 const u16 *tx = xfer->tx_buf; 265 const u16 *tx = xfer->tx_buf;
266 u16 *rx = xfer->rx_buf; 266 u16 *rx = xfer->rx_buf;
267 267
268 do { 268 do {
269 if (orion_spi_write_read_16bit(spi, &tx, &rx) < 0) 269 if (orion_spi_write_read_16bit(spi, &tx, &rx) < 0)
270 goto out; 270 goto out;
271 count -= 2; 271 count -= 2;
272 } while (count); 272 } while (count);
273 } 273 }
274 274
275 out: 275 out:
276 return xfer->len - count; 276 return xfer->len - count;
277 } 277 }
278 278
279 279
280 static void orion_spi_work(struct work_struct *work) 280 static void orion_spi_work(struct work_struct *work)
281 { 281 {
282 struct orion_spi *orion_spi = 282 struct orion_spi *orion_spi =
283 container_of(work, struct orion_spi, work); 283 container_of(work, struct orion_spi, work);
284 284
285 spin_lock_irq(&orion_spi->lock); 285 spin_lock_irq(&orion_spi->lock);
286 while (!list_empty(&orion_spi->msg_queue)) { 286 while (!list_empty(&orion_spi->msg_queue)) {
287 struct spi_message *m; 287 struct spi_message *m;
288 struct spi_device *spi; 288 struct spi_device *spi;
289 struct spi_transfer *t = NULL; 289 struct spi_transfer *t = NULL;
290 int par_override = 0; 290 int par_override = 0;
291 int status = 0; 291 int status = 0;
292 int cs_active = 0; 292 int cs_active = 0;
293 293
294 m = container_of(orion_spi->msg_queue.next, struct spi_message, 294 m = container_of(orion_spi->msg_queue.next, struct spi_message,
295 queue); 295 queue);
296 296
297 list_del_init(&m->queue); 297 list_del_init(&m->queue);
298 spin_unlock_irq(&orion_spi->lock); 298 spin_unlock_irq(&orion_spi->lock);
299 299
300 spi = m->spi; 300 spi = m->spi;
301 301
302 /* Load defaults */ 302 /* Load defaults */
303 status = orion_spi_setup_transfer(spi, NULL); 303 status = orion_spi_setup_transfer(spi, NULL);
304 304
305 if (status < 0) 305 if (status < 0)
306 goto msg_done; 306 goto msg_done;
307 307
308 list_for_each_entry(t, &m->transfers, transfer_list) { 308 list_for_each_entry(t, &m->transfers, transfer_list) {
309 if (par_override || t->speed_hz || t->bits_per_word) { 309 if (par_override || t->speed_hz || t->bits_per_word) {
310 par_override = 1; 310 par_override = 1;
311 status = orion_spi_setup_transfer(spi, t); 311 status = orion_spi_setup_transfer(spi, t);
312 if (status < 0) 312 if (status < 0)
313 break; 313 break;
314 if (!t->speed_hz && !t->bits_per_word) 314 if (!t->speed_hz && !t->bits_per_word)
315 par_override = 0; 315 par_override = 0;
316 } 316 }
317 317
318 if (!cs_active) { 318 if (!cs_active) {
319 orion_spi_set_cs(orion_spi, 1); 319 orion_spi_set_cs(orion_spi, 1);
320 cs_active = 1; 320 cs_active = 1;
321 } 321 }
322 322
323 if (t->len) 323 if (t->len)
324 m->actual_length += 324 m->actual_length +=
325 orion_spi_write_read(spi, t); 325 orion_spi_write_read(spi, t);
326 326
327 if (t->delay_usecs) 327 if (t->delay_usecs)
328 udelay(t->delay_usecs); 328 udelay(t->delay_usecs);
329 329
330 if (t->cs_change) { 330 if (t->cs_change) {
331 orion_spi_set_cs(orion_spi, 0); 331 orion_spi_set_cs(orion_spi, 0);
332 cs_active = 0; 332 cs_active = 0;
333 } 333 }
334 } 334 }
335 335
336 msg_done: 336 msg_done:
337 if (cs_active) 337 if (cs_active)
338 orion_spi_set_cs(orion_spi, 0); 338 orion_spi_set_cs(orion_spi, 0);
339 339
340 m->status = status; 340 m->status = status;
341 m->complete(m->context); 341 m->complete(m->context);
342 342
343 spin_lock_irq(&orion_spi->lock); 343 spin_lock_irq(&orion_spi->lock);
344 } 344 }
345 345
346 spin_unlock_irq(&orion_spi->lock); 346 spin_unlock_irq(&orion_spi->lock);
347 } 347 }
348 348
349 static int __init orion_spi_reset(struct orion_spi *orion_spi) 349 static int __init orion_spi_reset(struct orion_spi *orion_spi)
350 { 350 {
351 /* Verify that the CS is deasserted */ 351 /* Verify that the CS is deasserted */
352 orion_spi_set_cs(orion_spi, 0); 352 orion_spi_set_cs(orion_spi, 0);
353 353
354 return 0; 354 return 0;
355 } 355 }
356 356
357 static int orion_spi_setup(struct spi_device *spi) 357 static int orion_spi_setup(struct spi_device *spi)
358 { 358 {
359 struct orion_spi *orion_spi; 359 struct orion_spi *orion_spi;
360 360
361 orion_spi = spi_master_get_devdata(spi->master); 361 orion_spi = spi_master_get_devdata(spi->master);
362 362
363 if ((spi->max_speed_hz == 0) 363 if ((spi->max_speed_hz == 0)
364 || (spi->max_speed_hz > orion_spi->max_speed)) 364 || (spi->max_speed_hz > orion_spi->max_speed))
365 spi->max_speed_hz = orion_spi->max_speed; 365 spi->max_speed_hz = orion_spi->max_speed;
366 366
367 if (spi->max_speed_hz < orion_spi->min_speed) { 367 if (spi->max_speed_hz < orion_spi->min_speed) {
368 dev_err(&spi->dev, "setup: requested speed too low %d Hz\n", 368 dev_err(&spi->dev, "setup: requested speed too low %d Hz\n",
369 spi->max_speed_hz); 369 spi->max_speed_hz);
370 return -EINVAL; 370 return -EINVAL;
371 } 371 }
372 372
373 /* 373 /*
374 * baudrate & width will be set orion_spi_setup_transfer 374 * baudrate & width will be set orion_spi_setup_transfer
375 */ 375 */
376 return 0; 376 return 0;
377 } 377 }
378 378
379 static int orion_spi_transfer(struct spi_device *spi, struct spi_message *m) 379 static int orion_spi_transfer(struct spi_device *spi, struct spi_message *m)
380 { 380 {
381 struct orion_spi *orion_spi; 381 struct orion_spi *orion_spi;
382 struct spi_transfer *t = NULL; 382 struct spi_transfer *t = NULL;
383 unsigned long flags; 383 unsigned long flags;
384 384
385 m->actual_length = 0; 385 m->actual_length = 0;
386 m->status = 0; 386 m->status = 0;
387 387
388 /* reject invalid messages and transfers */ 388 /* reject invalid messages and transfers */
389 if (list_empty(&m->transfers) || !m->complete) 389 if (list_empty(&m->transfers) || !m->complete)
390 return -EINVAL; 390 return -EINVAL;
391 391
392 orion_spi = spi_master_get_devdata(spi->master); 392 orion_spi = spi_master_get_devdata(spi->master);
393 393
394 list_for_each_entry(t, &m->transfers, transfer_list) { 394 list_for_each_entry(t, &m->transfers, transfer_list) {
395 unsigned int bits_per_word = spi->bits_per_word; 395 unsigned int bits_per_word = spi->bits_per_word;
396 396
397 if (t->tx_buf == NULL && t->rx_buf == NULL && t->len) { 397 if (t->tx_buf == NULL && t->rx_buf == NULL && t->len) {
398 dev_err(&spi->dev, 398 dev_err(&spi->dev,
399 "message rejected : " 399 "message rejected : "
400 "invalid transfer data buffers\n"); 400 "invalid transfer data buffers\n");
401 goto msg_rejected; 401 goto msg_rejected;
402 } 402 }
403 403
404 if (t->bits_per_word) 404 if (t->bits_per_word)
405 bits_per_word = t->bits_per_word; 405 bits_per_word = t->bits_per_word;
406 406
407 if ((bits_per_word != 8) && (bits_per_word != 16)) { 407 if ((bits_per_word != 8) && (bits_per_word != 16)) {
408 dev_err(&spi->dev, 408 dev_err(&spi->dev,
409 "message rejected : " 409 "message rejected : "
410 "invalid transfer bits_per_word (%d bits)\n", 410 "invalid transfer bits_per_word (%d bits)\n",
411 bits_per_word); 411 bits_per_word);
412 goto msg_rejected; 412 goto msg_rejected;
413 } 413 }
414 /*make sure buffer length is even when working in 16 bit mode*/ 414 /*make sure buffer length is even when working in 16 bit mode*/
415 if ((t->bits_per_word == 16) && (t->len & 1)) { 415 if ((t->bits_per_word == 16) && (t->len & 1)) {
416 dev_err(&spi->dev, 416 dev_err(&spi->dev,
417 "message rejected : " 417 "message rejected : "
418 "odd data length (%d) while in 16 bit mode\n", 418 "odd data length (%d) while in 16 bit mode\n",
419 t->len); 419 t->len);
420 goto msg_rejected; 420 goto msg_rejected;
421 } 421 }
422 422
423 if (t->speed_hz && t->speed_hz < orion_spi->min_speed) { 423 if (t->speed_hz && t->speed_hz < orion_spi->min_speed) {
424 dev_err(&spi->dev, 424 dev_err(&spi->dev,
425 "message rejected : " 425 "message rejected : "
426 "device min speed (%d Hz) exceeds " 426 "device min speed (%d Hz) exceeds "
427 "required transfer speed (%d Hz)\n", 427 "required transfer speed (%d Hz)\n",
428 orion_spi->min_speed, t->speed_hz); 428 orion_spi->min_speed, t->speed_hz);
429 goto msg_rejected; 429 goto msg_rejected;
430 } 430 }
431 } 431 }
432 432
433 433
434 spin_lock_irqsave(&orion_spi->lock, flags); 434 spin_lock_irqsave(&orion_spi->lock, flags);
435 list_add_tail(&m->queue, &orion_spi->msg_queue); 435 list_add_tail(&m->queue, &orion_spi->msg_queue);
436 queue_work(orion_spi_wq, &orion_spi->work); 436 queue_work(orion_spi_wq, &orion_spi->work);
437 spin_unlock_irqrestore(&orion_spi->lock, flags); 437 spin_unlock_irqrestore(&orion_spi->lock, flags);
438 438
439 return 0; 439 return 0;
440 msg_rejected: 440 msg_rejected:
441 /* Message rejected and not queued */ 441 /* Message rejected and not queued */
442 m->status = -EINVAL; 442 m->status = -EINVAL;
443 if (m->complete) 443 if (m->complete)
444 m->complete(m->context); 444 m->complete(m->context);
445 return -EINVAL; 445 return -EINVAL;
446 } 446 }
447 447
448 static int __init orion_spi_probe(struct platform_device *pdev) 448 static int __init orion_spi_probe(struct platform_device *pdev)
449 { 449 {
450 struct spi_master *master; 450 struct spi_master *master;
451 struct orion_spi *spi; 451 struct orion_spi *spi;
452 struct resource *r; 452 struct resource *r;
453 struct orion_spi_info *spi_info;
454 unsigned long tclk_hz; 453 unsigned long tclk_hz;
455 int status = 0; 454 int status = 0;
455 const u32 *iprop;
456 int size;
456 457
457 spi_info = pdev->dev.platform_data;
458
459 master = spi_alloc_master(&pdev->dev, sizeof *spi); 458 master = spi_alloc_master(&pdev->dev, sizeof *spi);
460 if (master == NULL) { 459 if (master == NULL) {
461 dev_dbg(&pdev->dev, "master allocation failed\n"); 460 dev_dbg(&pdev->dev, "master allocation failed\n");
462 return -ENOMEM; 461 return -ENOMEM;
463 } 462 }
464 463
465 if (pdev->id != -1) 464 if (pdev->id != -1)
466 master->bus_num = pdev->id; 465 master->bus_num = pdev->id;
466 if (pdev->dev.of_node) {
467 iprop = of_get_property(pdev->dev.of_node, "cell-index",
468 &size);
469 if (iprop && size == sizeof(*iprop))
470 master->bus_num = *iprop;
471 }
467 472
468 /* we support only mode 0, and no options */ 473 /* we support only mode 0, and no options */
469 master->mode_bits = 0; 474 master->mode_bits = 0;
470 475
471 master->setup = orion_spi_setup; 476 master->setup = orion_spi_setup;
472 master->transfer = orion_spi_transfer; 477 master->transfer = orion_spi_transfer;
473 master->num_chipselect = ORION_NUM_CHIPSELECTS; 478 master->num_chipselect = ORION_NUM_CHIPSELECTS;
474 479
475 dev_set_drvdata(&pdev->dev, master); 480 dev_set_drvdata(&pdev->dev, master);
476 481
477 spi = spi_master_get_devdata(master); 482 spi = spi_master_get_devdata(master);
478 spi->master = master; 483 spi->master = master;
479 spi->spi_info = spi_info;
480 484
481 spi->clk = clk_get(&pdev->dev, NULL); 485 spi->clk = clk_get(&pdev->dev, NULL);
482 if (IS_ERR(spi->clk)) { 486 if (IS_ERR(spi->clk)) {
483 status = PTR_ERR(spi->clk); 487 status = PTR_ERR(spi->clk);
484 goto out; 488 goto out;
485 } 489 }
486 490
487 clk_prepare(spi->clk); 491 clk_prepare(spi->clk);
488 clk_enable(spi->clk); 492 clk_enable(spi->clk);
489 tclk_hz = clk_get_rate(spi->clk); 493 tclk_hz = clk_get_rate(spi->clk);
490 spi->max_speed = DIV_ROUND_UP(tclk_hz, 4); 494 spi->max_speed = DIV_ROUND_UP(tclk_hz, 4);
491 spi->min_speed = DIV_ROUND_UP(tclk_hz, 30); 495 spi->min_speed = DIV_ROUND_UP(tclk_hz, 30);
492 496
493 r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 497 r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
494 if (r == NULL) { 498 if (r == NULL) {
495 status = -ENODEV; 499 status = -ENODEV;
496 goto out_rel_clk; 500 goto out_rel_clk;
497 } 501 }
498 502
499 if (!request_mem_region(r->start, resource_size(r), 503 if (!request_mem_region(r->start, resource_size(r),
500 dev_name(&pdev->dev))) { 504 dev_name(&pdev->dev))) {
501 status = -EBUSY; 505 status = -EBUSY;
502 goto out_rel_clk; 506 goto out_rel_clk;
503 } 507 }
504 spi->base = ioremap(r->start, SZ_1K); 508 spi->base = ioremap(r->start, SZ_1K);
505 509
506 INIT_WORK(&spi->work, orion_spi_work); 510 INIT_WORK(&spi->work, orion_spi_work);
507 511
508 spin_lock_init(&spi->lock); 512 spin_lock_init(&spi->lock);
509 INIT_LIST_HEAD(&spi->msg_queue); 513 INIT_LIST_HEAD(&spi->msg_queue);
510 514
511 if (orion_spi_reset(spi) < 0) 515 if (orion_spi_reset(spi) < 0)
512 goto out_rel_mem; 516 goto out_rel_mem;
513 517
518 master->dev.of_node = pdev->dev.of_node;
514 status = spi_register_master(master); 519 status = spi_register_master(master);
515 if (status < 0) 520 if (status < 0)
516 goto out_rel_mem; 521 goto out_rel_mem;
517 522
518 return status; 523 return status;
519 524
520 out_rel_mem: 525 out_rel_mem:
521 release_mem_region(r->start, resource_size(r)); 526 release_mem_region(r->start, resource_size(r));
522 out_rel_clk: 527 out_rel_clk:
523 clk_disable_unprepare(spi->clk); 528 clk_disable_unprepare(spi->clk);
524 clk_put(spi->clk); 529 clk_put(spi->clk);
525 out: 530 out:
526 spi_master_put(master); 531 spi_master_put(master);
527 return status; 532 return status;
528 } 533 }
529 534
530 535
531 static int __exit orion_spi_remove(struct platform_device *pdev) 536 static int __exit orion_spi_remove(struct platform_device *pdev)
532 { 537 {
533 struct spi_master *master; 538 struct spi_master *master;
534 struct orion_spi *spi; 539 struct orion_spi *spi;
535 struct resource *r; 540 struct resource *r;
536 541
537 master = dev_get_drvdata(&pdev->dev); 542 master = dev_get_drvdata(&pdev->dev);
538 spi = spi_master_get_devdata(master); 543 spi = spi_master_get_devdata(master);
539 544
540 cancel_work_sync(&spi->work); 545 cancel_work_sync(&spi->work);
541 546
542 clk_disable_unprepare(spi->clk); 547 clk_disable_unprepare(spi->clk);
543 clk_put(spi->clk); 548 clk_put(spi->clk);
544 549
545 r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 550 r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
546 release_mem_region(r->start, resource_size(r)); 551 release_mem_region(r->start, resource_size(r));
547 552
548 spi_unregister_master(master); 553 spi_unregister_master(master);
549 554
550 return 0; 555 return 0;
551 } 556 }
552 557
553 MODULE_ALIAS("platform:" DRIVER_NAME); 558 MODULE_ALIAS("platform:" DRIVER_NAME);
554 559
560 static const struct of_device_id orion_spi_of_match_table[] __devinitdata = {
561 { .compatible = "marvell,orion-spi", },
562 {}
563 };
564 MODULE_DEVICE_TABLE(of, orion_spi_of_match_table);
565
555 static struct platform_driver orion_spi_driver = { 566 static struct platform_driver orion_spi_driver = {
556 .driver = { 567 .driver = {
557 .name = DRIVER_NAME, 568 .name = DRIVER_NAME,
558 .owner = THIS_MODULE, 569 .owner = THIS_MODULE,
570 .of_match_table = of_match_ptr(orion_spi_of_match_table),
559 }, 571 },
560 .remove = __exit_p(orion_spi_remove), 572 .remove = __exit_p(orion_spi_remove),
561 }; 573 };
562 574
563 static int __init orion_spi_init(void) 575 static int __init orion_spi_init(void)
564 { 576 {
565 orion_spi_wq = create_singlethread_workqueue( 577 orion_spi_wq = create_singlethread_workqueue(
566 orion_spi_driver.driver.name); 578 orion_spi_driver.driver.name);
567 if (orion_spi_wq == NULL) 579 if (orion_spi_wq == NULL)
568 return -ENOMEM; 580 return -ENOMEM;
569 581
570 return platform_driver_probe(&orion_spi_driver, orion_spi_probe); 582 return platform_driver_probe(&orion_spi_driver, orion_spi_probe);
571 } 583 }
572 module_init(orion_spi_init); 584 module_init(orion_spi_init);
573 585
574 static void __exit orion_spi_exit(void) 586 static void __exit orion_spi_exit(void)
575 { 587 {
576 flush_workqueue(orion_spi_wq); 588 flush_workqueue(orion_spi_wq);
577 platform_driver_unregister(&orion_spi_driver); 589 platform_driver_unregister(&orion_spi_driver);
578 590
579 destroy_workqueue(orion_spi_wq); 591 destroy_workqueue(orion_spi_wq);
580 } 592 }
581 module_exit(orion_spi_exit); 593 module_exit(orion_spi_exit);
drivers/spi/spi-pl022.c
1 /*
2  * A driver for the ARM PL022 PrimeCell SSP/SPI bus master.
3  *
4  * Copyright (C) 2008-2009 ST-Ericsson AB
5  * Copyright (C) 2006 STMicroelectronics Pvt. Ltd.
6  *
7  * Author: Linus Walleij <linus.walleij@stericsson.com>
8  *
9  * Initial version inspired by:
10  *	linux-2.6.17-rc3-mm1/drivers/spi/pxa2xx_spi.c
11  * Initial adoption to PL022 by:
12  *      Sachin Verma <sachin.verma@st.com>
13  *
14  * This program is free software; you can redistribute it and/or modify
15  * it under the terms of the GNU General Public License as published by
16  * the Free Software Foundation; either version 2 of the License, or
17  * (at your option) any later version.
18  *
19  * This program is distributed in the hope that it will be useful,
20  * but WITHOUT ANY WARRANTY; without even the implied warranty of
21  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
22  * GNU General Public License for more details.
23  */
24
25 #include <linux/init.h>
26 #include <linux/module.h>
27 #include <linux/device.h>
28 #include <linux/ioport.h>
29 #include <linux/errno.h>
30 #include <linux/interrupt.h>
31 #include <linux/spi/spi.h>
32 #include <linux/delay.h>
33 #include <linux/clk.h>
34 #include <linux/err.h>
35 #include <linux/amba/bus.h>
36 #include <linux/amba/pl022.h>
37 #include <linux/io.h>
38 #include <linux/slab.h>
39 #include <linux/dmaengine.h>
40 #include <linux/dma-mapping.h>
41 #include <linux/scatterlist.h>
42 #include <linux/pm_runtime.h>
43
44 /*
45  * This macro is used to define some register default values.
46  * reg is masked with mask, the OR:ed with an (again masked)
47  * val shifted sb steps to the left.
48  */
49 #define SSP_WRITE_BITS(reg, val, mask, sb) \
50  ((reg) = (((reg) & ~(mask)) | (((val)<<(sb)) & (mask))))
51
52 /*
53  * This macro is also used to define some default values.
54  * It will just shift val by sb steps to the left and mask
55  * the result with mask.
56  */
57 #define GEN_MASK_BITS(val, mask, sb) \
58  (((val)<<(sb)) & (mask))
59
60 #define DRIVE_TX		0
61 #define DO_NOT_DRIVE_TX		1
62
63 #define DO_NOT_QUEUE_DMA	0
64 #define QUEUE_DMA		1
65
66 #define RX_TRANSFER		1
67 #define TX_TRANSFER		2
68
69 /*
70  * Macros to access SSP Registers with their offsets
71  */
72 #define SSP_CR0(r)	(r + 0x000)
73 #define SSP_CR1(r)	(r + 0x004)
74 #define SSP_DR(r)	(r + 0x008)
75 #define SSP_SR(r)	(r + 0x00C)
76 #define SSP_CPSR(r)	(r + 0x010)
77 #define SSP_IMSC(r)	(r + 0x014)
78 #define SSP_RIS(r)	(r + 0x018)
79 #define SSP_MIS(r)	(r + 0x01C)
80 #define SSP_ICR(r)	(r + 0x020)
81 #define SSP_DMACR(r)	(r + 0x024)
82 #define SSP_ITCR(r)	(r + 0x080)
83 #define SSP_ITIP(r)	(r + 0x084)
84 #define SSP_ITOP(r)	(r + 0x088)
85 #define SSP_TDR(r)	(r + 0x08C)
86
87 #define SSP_PID0(r)	(r + 0xFE0)
88 #define SSP_PID1(r)	(r + 0xFE4)
89 #define SSP_PID2(r)	(r + 0xFE8)
90 #define SSP_PID3(r)	(r + 0xFEC)
91
92 #define SSP_CID0(r)	(r + 0xFF0)
93 #define SSP_CID1(r)	(r + 0xFF4)
94 #define SSP_CID2(r)	(r + 0xFF8)
95 #define SSP_CID3(r)	(r + 0xFFC)
96
97 /*
98  * SSP Control Register 0  - SSP_CR0
99  */
100 #define SSP_CR0_MASK_DSS	(0x0FUL << 0)
101 #define SSP_CR0_MASK_FRF	(0x3UL << 4)
102 #define SSP_CR0_MASK_SPO	(0x1UL << 6)
103 #define SSP_CR0_MASK_SPH	(0x1UL << 7)
104 #define SSP_CR0_MASK_SCR	(0xFFUL << 8)
105
106 /*
107  * The ST version of this block moves som bits
108  * in SSP_CR0 and extends it to 32 bits
109  */
110 #define SSP_CR0_MASK_DSS_ST	(0x1FUL << 0)
111 #define SSP_CR0_MASK_HALFDUP_ST	(0x1UL << 5)
112 #define SSP_CR0_MASK_CSS_ST	(0x1FUL << 16)
113 #define SSP_CR0_MASK_FRF_ST	(0x3UL << 21)
114
115 /*
116  * SSP Control Register 0  - SSP_CR1
117  */
118 #define SSP_CR1_MASK_LBM	(0x1UL << 0)
119 #define SSP_CR1_MASK_SSE	(0x1UL << 1)
120 #define SSP_CR1_MASK_MS		(0x1UL << 2)
121 #define SSP_CR1_MASK_SOD	(0x1UL << 3)
122
123 /*
124  * The ST version of this block adds some bits
125  * in SSP_CR1
126  */
127 #define SSP_CR1_MASK_RENDN_ST	(0x1UL << 4)
128 #define SSP_CR1_MASK_TENDN_ST	(0x1UL << 5)
129 #define SSP_CR1_MASK_MWAIT_ST	(0x1UL << 6)
130 #define SSP_CR1_MASK_RXIFLSEL_ST (0x7UL << 7)
131 #define SSP_CR1_MASK_TXIFLSEL_ST (0x7UL << 10)
132 /* This one is only in the PL023 variant */
133 #define SSP_CR1_MASK_FBCLKDEL_ST (0x7UL << 13)
134
135 /*
136  * SSP Status Register - SSP_SR
137  */
138 #define SSP_SR_MASK_TFE		(0x1UL << 0) /* Transmit FIFO empty */
139 #define SSP_SR_MASK_TNF		(0x1UL << 1) /* Transmit FIFO not full */
140 #define SSP_SR_MASK_RNE		(0x1UL << 2) /* Receive FIFO not empty */
141 #define SSP_SR_MASK_RFF		(0x1UL << 3) /* Receive FIFO full */
142 #define SSP_SR_MASK_BSY		(0x1UL << 4) /* Busy Flag */
143
144 /*
145  * SSP Clock Prescale Register  - SSP_CPSR
146  */
147 #define SSP_CPSR_MASK_CPSDVSR	(0xFFUL << 0)
148
149 /*
150  * SSP Interrupt Mask Set/Clear Register - SSP_IMSC
151  */
152 #define SSP_IMSC_MASK_RORIM (0x1UL << 0) /* Receive Overrun Interrupt mask */
153 #define SSP_IMSC_MASK_RTIM  (0x1UL << 1) /* Receive timeout Interrupt mask */
154 #define SSP_IMSC_MASK_RXIM  (0x1UL << 2) /* Receive FIFO Interrupt mask */
155 #define SSP_IMSC_MASK_TXIM  (0x1UL << 3) /* Transmit FIFO Interrupt mask */
156
157 /*
158  * SSP Raw Interrupt Status Register - SSP_RIS
159  */
160 /* Receive Overrun Raw Interrupt status */
161 #define SSP_RIS_MASK_RORRIS		(0x1UL << 0)
162 /* Receive Timeout Raw Interrupt status */
163 #define SSP_RIS_MASK_RTRIS		(0x1UL << 1)
164 /* Receive FIFO Raw Interrupt status */ 164 /* Receive FIFO Raw Interrupt status */
165 #define SSP_RIS_MASK_RXRIS (0x1UL << 2) 165 #define SSP_RIS_MASK_RXRIS (0x1UL << 2)
166 /* Transmit FIFO Raw Interrupt status */ 166 /* Transmit FIFO Raw Interrupt status */
167 #define SSP_RIS_MASK_TXRIS (0x1UL << 3) 167 #define SSP_RIS_MASK_TXRIS (0x1UL << 3)
168 168
169 /* 169 /*
170 * SSP Masked Interrupt Status Register - SSP_MIS 170 * SSP Masked Interrupt Status Register - SSP_MIS
171 */ 171 */
172 /* Receive Overrun Masked Interrupt status */ 172 /* Receive Overrun Masked Interrupt status */
173 #define SSP_MIS_MASK_RORMIS (0x1UL << 0) 173 #define SSP_MIS_MASK_RORMIS (0x1UL << 0)
174 /* Receive Timeout Masked Interrupt status */ 174 /* Receive Timeout Masked Interrupt status */
175 #define SSP_MIS_MASK_RTMIS (0x1UL << 1) 175 #define SSP_MIS_MASK_RTMIS (0x1UL << 1)
176 /* Receive FIFO Masked Interrupt status */ 176 /* Receive FIFO Masked Interrupt status */
177 #define SSP_MIS_MASK_RXMIS (0x1UL << 2) 177 #define SSP_MIS_MASK_RXMIS (0x1UL << 2)
178 /* Transmit FIFO Masked Interrupt status */ 178 /* Transmit FIFO Masked Interrupt status */
179 #define SSP_MIS_MASK_TXMIS (0x1UL << 3) 179 #define SSP_MIS_MASK_TXMIS (0x1UL << 3)
180 180
181 /* 181 /*
182 * SSP Interrupt Clear Register - SSP_ICR 182 * SSP Interrupt Clear Register - SSP_ICR
183 */ 183 */
184 /* Receive Overrun Raw Clear Interrupt bit */ 184 /* Receive Overrun Raw Clear Interrupt bit */
185 #define SSP_ICR_MASK_RORIC (0x1UL << 0) 185 #define SSP_ICR_MASK_RORIC (0x1UL << 0)
186 /* Receive Timeout Clear Interrupt bit */ 186 /* Receive Timeout Clear Interrupt bit */
187 #define SSP_ICR_MASK_RTIC (0x1UL << 1) 187 #define SSP_ICR_MASK_RTIC (0x1UL << 1)
188 188
189 /* 189 /*
190 * SSP DMA Control Register - SSP_DMACR 190 * SSP DMA Control Register - SSP_DMACR
191 */ 191 */
192 /* Receive DMA Enable bit */ 192 /* Receive DMA Enable bit */
193 #define SSP_DMACR_MASK_RXDMAE (0x1UL << 0) 193 #define SSP_DMACR_MASK_RXDMAE (0x1UL << 0)
194 /* Transmit DMA Enable bit */ 194 /* Transmit DMA Enable bit */
195 #define SSP_DMACR_MASK_TXDMAE (0x1UL << 1) 195 #define SSP_DMACR_MASK_TXDMAE (0x1UL << 1)
196 196
197 /* 197 /*
198 * SSP Integration Test control Register - SSP_ITCR 198 * SSP Integration Test control Register - SSP_ITCR
199 */ 199 */
200 #define SSP_ITCR_MASK_ITEN (0x1UL << 0) 200 #define SSP_ITCR_MASK_ITEN (0x1UL << 0)
201 #define SSP_ITCR_MASK_TESTFIFO (0x1UL << 1) 201 #define SSP_ITCR_MASK_TESTFIFO (0x1UL << 1)
202 202
203 /* 203 /*
204 * SSP Integration Test Input Register - SSP_ITIP 204 * SSP Integration Test Input Register - SSP_ITIP
205 */ 205 */
206 #define ITIP_MASK_SSPRXD (0x1UL << 0) 206 #define ITIP_MASK_SSPRXD (0x1UL << 0)
207 #define ITIP_MASK_SSPFSSIN (0x1UL << 1) 207 #define ITIP_MASK_SSPFSSIN (0x1UL << 1)
208 #define ITIP_MASK_SSPCLKIN (0x1UL << 2) 208 #define ITIP_MASK_SSPCLKIN (0x1UL << 2)
209 #define ITIP_MASK_RXDMAC (0x1UL << 3) 209 #define ITIP_MASK_RXDMAC (0x1UL << 3)
210 #define ITIP_MASK_TXDMAC (0x1UL << 4) 210 #define ITIP_MASK_TXDMAC (0x1UL << 4)
211 #define ITIP_MASK_SSPTXDIN (0x1UL << 5) 211 #define ITIP_MASK_SSPTXDIN (0x1UL << 5)
212 212
213 /* 213 /*
214 * SSP Integration Test output Register - SSP_ITOP 214 * SSP Integration Test output Register - SSP_ITOP
215 */ 215 */
216 #define ITOP_MASK_SSPTXD (0x1UL << 0) 216 #define ITOP_MASK_SSPTXD (0x1UL << 0)
217 #define ITOP_MASK_SSPFSSOUT (0x1UL << 1) 217 #define ITOP_MASK_SSPFSSOUT (0x1UL << 1)
218 #define ITOP_MASK_SSPCLKOUT (0x1UL << 2) 218 #define ITOP_MASK_SSPCLKOUT (0x1UL << 2)
219 #define ITOP_MASK_SSPOEn (0x1UL << 3) 219 #define ITOP_MASK_SSPOEn (0x1UL << 3)
220 #define ITOP_MASK_SSPCTLOEn (0x1UL << 4) 220 #define ITOP_MASK_SSPCTLOEn (0x1UL << 4)
221 #define ITOP_MASK_RORINTR (0x1UL << 5) 221 #define ITOP_MASK_RORINTR (0x1UL << 5)
222 #define ITOP_MASK_RTINTR (0x1UL << 6) 222 #define ITOP_MASK_RTINTR (0x1UL << 6)
223 #define ITOP_MASK_RXINTR (0x1UL << 7) 223 #define ITOP_MASK_RXINTR (0x1UL << 7)
224 #define ITOP_MASK_TXINTR (0x1UL << 8) 224 #define ITOP_MASK_TXINTR (0x1UL << 8)
225 #define ITOP_MASK_INTR (0x1UL << 9) 225 #define ITOP_MASK_INTR (0x1UL << 9)
226 #define ITOP_MASK_RXDMABREQ (0x1UL << 10) 226 #define ITOP_MASK_RXDMABREQ (0x1UL << 10)
227 #define ITOP_MASK_RXDMASREQ (0x1UL << 11) 227 #define ITOP_MASK_RXDMASREQ (0x1UL << 11)
228 #define ITOP_MASK_TXDMABREQ (0x1UL << 12) 228 #define ITOP_MASK_TXDMABREQ (0x1UL << 12)
229 #define ITOP_MASK_TXDMASREQ (0x1UL << 13) 229 #define ITOP_MASK_TXDMASREQ (0x1UL << 13)
230 230
231 /* 231 /*
232 * SSP Test Data Register - SSP_TDR 232 * SSP Test Data Register - SSP_TDR
233 */ 233 */
234 #define TDR_MASK_TESTDATA (0xFFFFFFFF) 234 #define TDR_MASK_TESTDATA (0xFFFFFFFF)
235 235
236 /* 236 /*
237 * Message State 237 * Message State
238 * we use the spi_message.state (void *) pointer to 238 * we use the spi_message.state (void *) pointer to
239 * hold a single state value, that's why all this 239 * hold a single state value, that's why all this
240 * (void *) casting is done here. 240 * (void *) casting is done here.
241 */ 241 */
242 #define STATE_START ((void *) 0) 242 #define STATE_START ((void *) 0)
243 #define STATE_RUNNING ((void *) 1) 243 #define STATE_RUNNING ((void *) 1)
244 #define STATE_DONE ((void *) 2) 244 #define STATE_DONE ((void *) 2)
245 #define STATE_ERROR ((void *) -1) 245 #define STATE_ERROR ((void *) -1)

/*
 * SSP State - Whether Enabled or Disabled
 */
#define SSP_DISABLED	(0)
#define SSP_ENABLED	(1)

/*
 * SSP DMA State - Whether DMA Enabled or Disabled
 */
#define SSP_DMA_DISABLED	(0)
#define SSP_DMA_ENABLED		(1)

/*
 * SSP Clock Defaults
 */
#define SSP_DEFAULT_CLKRATE	0x2
#define SSP_DEFAULT_PRESCALE	0x40

/*
 * SSP Clock Parameter ranges
 */
#define CPSDVR_MIN 0x02
#define CPSDVR_MAX 0xFE
#define SCR_MIN 0x00
#define SCR_MAX 0xFF

/*
 * SSP Interrupt related Macros
 */
#define DEFAULT_SSP_REG_IMSC	0x0UL
#define DISABLE_ALL_INTERRUPTS	DEFAULT_SSP_REG_IMSC
#define ENABLE_ALL_INTERRUPTS	(~DEFAULT_SSP_REG_IMSC)

#define CLEAR_ALL_INTERRUPTS	0x3

#define SPI_POLLING_TIMEOUT	1000

/*
 * The type of reading currently going on on this chip
 */
enum ssp_reading {
	READING_NULL,
	READING_U8,
	READING_U16,
	READING_U32
};

/*
 * The type of writing currently going on on this chip
 */
enum ssp_writing {
	WRITING_NULL,
	WRITING_U8,
	WRITING_U16,
	WRITING_U32
};

/**
 * struct vendor_data - vendor-specific config parameters
 * for PL022 derivatives
 * @fifodepth: depth of FIFOs (both)
 * @max_bpw: maximum number of bits per word
 * @unidir: supports unidirectional transfers
 * @extended_cr: 32 bit wide control register 0 with extra
 * features and extra features in CR1 as found in the ST variants
 * @pl023: supports a subset of the ST extensions called "PL023"
 * @loopback: supports loopback mode
 */
struct vendor_data {
	int fifodepth;
	int max_bpw;
	bool unidir;
	bool extended_cr;
	bool pl023;
	bool loopback;
};

/**
 * struct pl022 - This is the private SSP driver data structure
 * @adev: AMBA device model hookup
 * @vendor: vendor data for the IP block
 * @phybase: the physical memory where the SSP device resides
 * @virtbase: the virtual memory where the SSP is mapped
 * @clk: outgoing clock "SPICLK" for the SPI bus
 * @master: SPI framework hookup
 * @master_info: controller-specific data from machine setup
 * @kworker: thread struct for message pump
 * @kworker_task: pointer to task for message pump kworker thread
 * @pump_messages: work struct for scheduling work to the message pump
 * @queue_lock: spinlock to synchronise access to message queue
 * @queue: message queue
 * @busy: message pump is busy
 * @running: message pump is running
 * @pump_transfers: Tasklet used in Interrupt Transfer mode
 * @cur_msg: Pointer to current spi_message being processed
 * @cur_transfer: Pointer to current spi_transfer
 * @cur_chip: pointer to current client's chip (assigned from controller_state)
 * @next_msg_cs_active: the next message in the queue has been examined
 *  and it was found that it uses the same chip select as the previous
 *  message, so we left it active after the previous transfer, and it's
 *  active already.
 * @tx: current position in TX buffer to be read
 * @tx_end: end position in TX buffer to be read
 * @rx: current position in RX buffer to be written
 * @rx_end: end position in RX buffer to be written
 * @read: the type of read currently going on
 * @write: the type of write currently going on
 * @exp_fifo_level: expected FIFO level
 * @dma_rx_channel: optional channel for RX DMA
 * @dma_tx_channel: optional channel for TX DMA
 * @sgt_rx: scattertable for the RX transfer
 * @sgt_tx: scattertable for the TX transfer
 * @dummypage: a dummy page used for driving data on the bus with DMA
 */
struct pl022 {
	struct amba_device		*adev;
	struct vendor_data		*vendor;
	resource_size_t			phybase;
	void __iomem			*virtbase;
	struct clk			*clk;
	struct spi_master		*master;
	struct pl022_ssp_controller	*master_info;
	/* Message per-transfer pump */
	struct tasklet_struct		pump_transfers;
	struct spi_message		*cur_msg;
	struct spi_transfer		*cur_transfer;
	struct chip_data		*cur_chip;
	bool				next_msg_cs_active;
	void				*tx;
	void				*tx_end;
	void				*rx;
	void				*rx_end;
	enum ssp_reading		read;
	enum ssp_writing		write;
	u32				exp_fifo_level;
	enum ssp_rx_level_trig		rx_lev_trig;
	enum ssp_tx_level_trig		tx_lev_trig;
	/* DMA settings */
#ifdef CONFIG_DMA_ENGINE
	struct dma_chan			*dma_rx_channel;
	struct dma_chan			*dma_tx_channel;
	struct sg_table			sgt_rx;
	struct sg_table			sgt_tx;
	char				*dummypage;
	bool				dma_running;
#endif
};

/**
 * struct chip_data - To maintain runtime state of SSP for each client chip
 * @cr0: Value of control register CR0 of SSP - on later ST variants this
 *  register is 32 bits wide rather than just 16
 * @cr1: Value of control register CR1 of SSP
 * @dmacr: Value of DMA control Register of SSP
 * @cpsr: Value of Clock prescale register
 * @n_bytes: how many bytes (a power of 2) are required for a given data
 *  width of the client
 * @enable_dma: Whether to enable DMA or not
 * @read: function ptr to be used to read when doing xfer for this chip
 * @write: function ptr to be used to write when doing xfer for this chip
 * @cs_control: chip select callback provided by chip
 * @xfer_type: polling/interrupt/DMA
 *
 * Runtime state of the SSP controller, maintained per chip.
 * This is set according to the current message being served.
 */
struct chip_data {
	u32 cr0;
	u16 cr1;
	u16 dmacr;
	u16 cpsr;
	u8 n_bytes;
	bool enable_dma;
	enum ssp_reading read;
	enum ssp_writing write;
	void (*cs_control) (u32 command);
	int xfer_type;
};

/**
 * null_cs_control - Dummy chip select function
 * @command: select/deselect the chip
 *
 * If no chip select function is provided by the client this is used
 * as a dummy chip select
 */
static void null_cs_control(u32 command)
{
	pr_debug("pl022: dummy chip select control, CS=0x%x\n", command);
}

/**
 * giveback - current spi_message is over, schedule next message and call
 * callback of this message. Assumes that caller already
 * set message->status; dma and pio irqs are blocked
 * @pl022: SSP driver private data structure
 */
static void giveback(struct pl022 *pl022)
{
	struct spi_transfer *last_transfer;
	pl022->next_msg_cs_active = false;

	last_transfer = list_entry(pl022->cur_msg->transfers.prev,
					struct spi_transfer,
					transfer_list);

	/* Delay if requested before any change in chip select */
	if (last_transfer->delay_usecs)
		/*
		 * FIXME: This runs in interrupt context.
		 * Is this really smart?
		 */
		udelay(last_transfer->delay_usecs);

	if (!last_transfer->cs_change) {
		struct spi_message *next_msg;

		/*
		 * cs_change was not set. We can keep the chip select
		 * enabled if there is a message in the queue and it is
		 * for the same spi device.
		 *
		 * We cannot postpone this until pump_messages, because
		 * after calling msg->complete (below) the driver that
		 * sent the current message could be unloaded, which
		 * could invalidate the cs_control() callback...
		 */
		/* get a pointer to the next message, if any */
		next_msg = spi_get_next_queued_message(pl022->master);

		/*
		 * see if the next and current messages point
		 * to the same spi device.
		 */
		if (next_msg && next_msg->spi != pl022->cur_msg->spi)
			next_msg = NULL;
		if (!next_msg || pl022->cur_msg->state == STATE_ERROR)
			pl022->cur_chip->cs_control(SSP_CHIP_DESELECT);
		else
			pl022->next_msg_cs_active = true;

	}

	pl022->cur_msg = NULL;
	pl022->cur_transfer = NULL;
	pl022->cur_chip = NULL;
	spi_finalize_current_message(pl022->master);

	/* disable the SPI/SSP operation */
	writew((readw(SSP_CR1(pl022->virtbase)) &
		(~SSP_CR1_MASK_SSE)), SSP_CR1(pl022->virtbase));
}

/**
 * flush - flush the FIFO to reach a clean state
 * @pl022: SSP driver private data structure
 */
static int flush(struct pl022 *pl022)
{
	unsigned long limit = loops_per_jiffy << 1;

	dev_dbg(&pl022->adev->dev, "flush\n");
	do {
		while (readw(SSP_SR(pl022->virtbase)) & SSP_SR_MASK_RNE)
			readw(SSP_DR(pl022->virtbase));
	} while ((readw(SSP_SR(pl022->virtbase)) & SSP_SR_MASK_BSY) && limit--);

	pl022->exp_fifo_level = 0;

	return limit;
}

/**
 * restore_state - Load configuration of current chip
 * @pl022: SSP driver private data structure
 */
static void restore_state(struct pl022 *pl022)
{
	struct chip_data *chip = pl022->cur_chip;

	if (pl022->vendor->extended_cr)
		writel(chip->cr0, SSP_CR0(pl022->virtbase));
	else
		writew(chip->cr0, SSP_CR0(pl022->virtbase));
	writew(chip->cr1, SSP_CR1(pl022->virtbase));
	writew(chip->dmacr, SSP_DMACR(pl022->virtbase));
	writew(chip->cpsr, SSP_CPSR(pl022->virtbase));
	writew(DISABLE_ALL_INTERRUPTS, SSP_IMSC(pl022->virtbase));
	writew(CLEAR_ALL_INTERRUPTS, SSP_ICR(pl022->virtbase));
}

/*
 * Default SSP Register Values
 */
#define DEFAULT_SSP_REG_CR0 ( \
	GEN_MASK_BITS(SSP_DATA_BITS_12, SSP_CR0_MASK_DSS, 0)	| \
	GEN_MASK_BITS(SSP_INTERFACE_MOTOROLA_SPI, SSP_CR0_MASK_FRF, 4) | \
	GEN_MASK_BITS(SSP_CLK_POL_IDLE_LOW, SSP_CR0_MASK_SPO, 6) | \
	GEN_MASK_BITS(SSP_CLK_SECOND_EDGE, SSP_CR0_MASK_SPH, 7) | \
	GEN_MASK_BITS(SSP_DEFAULT_CLKRATE, SSP_CR0_MASK_SCR, 8) \
)

/* ST versions have slightly different bit layout */
#define DEFAULT_SSP_REG_CR0_ST ( \
	GEN_MASK_BITS(SSP_DATA_BITS_12, SSP_CR0_MASK_DSS_ST, 0)	| \
	GEN_MASK_BITS(SSP_MICROWIRE_CHANNEL_FULL_DUPLEX, SSP_CR0_MASK_HALFDUP_ST, 5) | \
	GEN_MASK_BITS(SSP_CLK_POL_IDLE_LOW, SSP_CR0_MASK_SPO, 6) | \
	GEN_MASK_BITS(SSP_CLK_SECOND_EDGE, SSP_CR0_MASK_SPH, 7) | \
	GEN_MASK_BITS(SSP_DEFAULT_CLKRATE, SSP_CR0_MASK_SCR, 8) | \
	GEN_MASK_BITS(SSP_BITS_8, SSP_CR0_MASK_CSS_ST, 16)	| \
	GEN_MASK_BITS(SSP_INTERFACE_MOTOROLA_SPI, SSP_CR0_MASK_FRF_ST, 21) \
)

/* The PL023 version is slightly different again */
#define DEFAULT_SSP_REG_CR0_ST_PL023 ( \
	GEN_MASK_BITS(SSP_DATA_BITS_12, SSP_CR0_MASK_DSS_ST, 0)	| \
	GEN_MASK_BITS(SSP_CLK_POL_IDLE_LOW, SSP_CR0_MASK_SPO, 6) | \
	GEN_MASK_BITS(SSP_CLK_SECOND_EDGE, SSP_CR0_MASK_SPH, 7) | \
	GEN_MASK_BITS(SSP_DEFAULT_CLKRATE, SSP_CR0_MASK_SCR, 8) \
)

#define DEFAULT_SSP_REG_CR1 ( \
	GEN_MASK_BITS(LOOPBACK_DISABLED, SSP_CR1_MASK_LBM, 0) | \
	GEN_MASK_BITS(SSP_DISABLED, SSP_CR1_MASK_SSE, 1) | \
	GEN_MASK_BITS(SSP_MASTER, SSP_CR1_MASK_MS, 2) | \
	GEN_MASK_BITS(DO_NOT_DRIVE_TX, SSP_CR1_MASK_SOD, 3) \
)

/* ST versions extend this register to use all 16 bits */
#define DEFAULT_SSP_REG_CR1_ST ( \
	DEFAULT_SSP_REG_CR1 | \
	GEN_MASK_BITS(SSP_RX_MSB, SSP_CR1_MASK_RENDN_ST, 4) | \
	GEN_MASK_BITS(SSP_TX_MSB, SSP_CR1_MASK_TENDN_ST, 5) | \
	GEN_MASK_BITS(SSP_MWIRE_WAIT_ZERO, SSP_CR1_MASK_MWAIT_ST, 6) |\
	GEN_MASK_BITS(SSP_RX_1_OR_MORE_ELEM, SSP_CR1_MASK_RXIFLSEL_ST, 7) | \
	GEN_MASK_BITS(SSP_TX_1_OR_MORE_EMPTY_LOC, SSP_CR1_MASK_TXIFLSEL_ST, 10) \
)

/*
 * The PL023 variant has further differences: no loopback mode, no microwire
 * support, and a new clock feedback delay setting.
 */
#define DEFAULT_SSP_REG_CR1_ST_PL023 ( \
	GEN_MASK_BITS(SSP_DISABLED, SSP_CR1_MASK_SSE, 1) | \
	GEN_MASK_BITS(SSP_MASTER, SSP_CR1_MASK_MS, 2) | \
	GEN_MASK_BITS(DO_NOT_DRIVE_TX, SSP_CR1_MASK_SOD, 3) | \
	GEN_MASK_BITS(SSP_RX_MSB, SSP_CR1_MASK_RENDN_ST, 4) | \
	GEN_MASK_BITS(SSP_TX_MSB, SSP_CR1_MASK_TENDN_ST, 5) | \
	GEN_MASK_BITS(SSP_RX_1_OR_MORE_ELEM, SSP_CR1_MASK_RXIFLSEL_ST, 7) | \
	GEN_MASK_BITS(SSP_TX_1_OR_MORE_EMPTY_LOC, SSP_CR1_MASK_TXIFLSEL_ST, 10) | \
	GEN_MASK_BITS(SSP_FEEDBACK_CLK_DELAY_NONE, SSP_CR1_MASK_FBCLKDEL_ST, 13) \
)

#define DEFAULT_SSP_REG_CPSR ( \
	GEN_MASK_BITS(SSP_DEFAULT_PRESCALE, SSP_CPSR_MASK_CPSDVSR, 0) \
)

#define DEFAULT_SSP_REG_DMACR (\
	GEN_MASK_BITS(SSP_DMA_DISABLED, SSP_DMACR_MASK_RXDMAE, 0) | \
	GEN_MASK_BITS(SSP_DMA_DISABLED, SSP_DMACR_MASK_TXDMAE, 1) \
)
602 607
/**
 * load_ssp_default_config - Load default configuration for SSP
 * @pl022: SSP driver private data structure
 */
static void load_ssp_default_config(struct pl022 *pl022)
{
	if (pl022->vendor->pl023) {
		writel(DEFAULT_SSP_REG_CR0_ST_PL023, SSP_CR0(pl022->virtbase));
		writew(DEFAULT_SSP_REG_CR1_ST_PL023, SSP_CR1(pl022->virtbase));
	} else if (pl022->vendor->extended_cr) {
		writel(DEFAULT_SSP_REG_CR0_ST, SSP_CR0(pl022->virtbase));
		writew(DEFAULT_SSP_REG_CR1_ST, SSP_CR1(pl022->virtbase));
	} else {
		writew(DEFAULT_SSP_REG_CR0, SSP_CR0(pl022->virtbase));
		writew(DEFAULT_SSP_REG_CR1, SSP_CR1(pl022->virtbase));
	}
	writew(DEFAULT_SSP_REG_DMACR, SSP_DMACR(pl022->virtbase));
	writew(DEFAULT_SSP_REG_CPSR, SSP_CPSR(pl022->virtbase));
	writew(DISABLE_ALL_INTERRUPTS, SSP_IMSC(pl022->virtbase));
	writew(CLEAR_ALL_INTERRUPTS, SSP_ICR(pl022->virtbase));
}

/**
 * readwriter - Write to the TX FIFO and read from the RX FIFO
 * @pl022: SSP driver private data structure
 *
 * This will write to TX and read from RX according to the parameters
 * set in pl022.
 */
static void readwriter(struct pl022 *pl022)
{

	/*
	 * The FIFO depth is different between primecell variants.
	 * Filling in too much of the FIFO might cause errors in 8-bit
	 * wide transfers on ARM variants (just 8 words FIFO, means
	 * only 8x8 = 64 bits in FIFO) at least.
	 *
	 * To prevent this issue, the TX FIFO is only filled to the
	 * unused RX FIFO fill length, regardless of what the TX
	 * FIFO status flag indicates.
	 */
	dev_dbg(&pl022->adev->dev,
		"%s, rx: %p, rxend: %p, tx: %p, txend: %p\n",
		__func__, pl022->rx, pl022->rx_end, pl022->tx, pl022->tx_end);

	/* Read as much as you can */
	while ((readw(SSP_SR(pl022->virtbase)) & SSP_SR_MASK_RNE)
	       && (pl022->rx < pl022->rx_end)) {
		switch (pl022->read) {
		case READING_NULL:
			readw(SSP_DR(pl022->virtbase));
			break;
		case READING_U8:
			*(u8 *) (pl022->rx) =
				readw(SSP_DR(pl022->virtbase)) & 0xFFU;
			break;
		case READING_U16:
			*(u16 *) (pl022->rx) =
				(u16) readw(SSP_DR(pl022->virtbase));
			break;
		case READING_U32:
			*(u32 *) (pl022->rx) =
				readl(SSP_DR(pl022->virtbase));
			break;
		}
		pl022->rx += (pl022->cur_chip->n_bytes);
		pl022->exp_fifo_level--;
	}
	/*
	 * Write as much as possible up to the RX FIFO size
	 */
	while ((pl022->exp_fifo_level < pl022->vendor->fifodepth)
	       && (pl022->tx < pl022->tx_end)) {
		switch (pl022->write) {
		case WRITING_NULL:
			writew(0x0, SSP_DR(pl022->virtbase));
			break;
		case WRITING_U8:
			writew(*(u8 *) (pl022->tx), SSP_DR(pl022->virtbase));
			break;
		case WRITING_U16:
			writew((*(u16 *) (pl022->tx)), SSP_DR(pl022->virtbase));
			break;
		case WRITING_U32:
			writel(*(u32 *) (pl022->tx), SSP_DR(pl022->virtbase));
			break;
		}
		pl022->tx += (pl022->cur_chip->n_bytes);
		pl022->exp_fifo_level++;
		/*
		 * This inner reader takes care of things appearing in the RX
		 * FIFO as we're transmitting. This will happen a lot since the
		 * clock starts running when you put things into the TX FIFO,
		 * and then things are continuously clocked into the RX FIFO.
		 */
		while ((readw(SSP_SR(pl022->virtbase)) & SSP_SR_MASK_RNE)
		       && (pl022->rx < pl022->rx_end)) {
			switch (pl022->read) {
			case READING_NULL:
				readw(SSP_DR(pl022->virtbase));
				break;
			case READING_U8:
				*(u8 *) (pl022->rx) =
					readw(SSP_DR(pl022->virtbase)) & 0xFFU;
				break;
			case READING_U16:
				*(u16 *) (pl022->rx) =
					(u16) readw(SSP_DR(pl022->virtbase));
				break;
			case READING_U32:
				*(u32 *) (pl022->rx) =
					readl(SSP_DR(pl022->virtbase));
				break;
			}
			pl022->rx += (pl022->cur_chip->n_bytes);
			pl022->exp_fifo_level--;
		}
	}
	/*
	 * When we exit here the TX FIFO should be full and the RX FIFO
	 * should be empty
	 */
}

/**
 * next_transfer - Move to the next transfer in the current spi message
 * @pl022: SSP driver private data structure
 *
 * This function moves through the linked list of spi transfers in the
 * current spi message and returns with the state of the current spi
 * message, i.e. whether its last transfer is done (STATE_DONE) or the
 * next transfer is ready (STATE_RUNNING)
 */
static void *next_transfer(struct pl022 *pl022)
{
	struct spi_message *msg = pl022->cur_msg;
	struct spi_transfer *trans = pl022->cur_transfer;

	/* Move to next transfer */
	if (trans->transfer_list.next != &msg->transfers) {
		pl022->cur_transfer =
			list_entry(trans->transfer_list.next,
				   struct spi_transfer, transfer_list);
		return STATE_RUNNING;
	}
	return STATE_DONE;
}

/*
 * This DMA functionality is only compiled in if we have
 * access to the generic DMA devices/DMA engine.
 */
#ifdef CONFIG_DMA_ENGINE
static void unmap_free_dma_scatter(struct pl022 *pl022)
{
	/* Unmap and free the SG tables */
	dma_unmap_sg(pl022->dma_tx_channel->device->dev, pl022->sgt_tx.sgl,
		     pl022->sgt_tx.nents, DMA_TO_DEVICE);
	dma_unmap_sg(pl022->dma_rx_channel->device->dev, pl022->sgt_rx.sgl,
		     pl022->sgt_rx.nents, DMA_FROM_DEVICE);
	sg_free_table(&pl022->sgt_rx);
	sg_free_table(&pl022->sgt_tx);
}

static void dma_callback(void *data)
{
	struct pl022 *pl022 = data;
	struct spi_message *msg = pl022->cur_msg;

	BUG_ON(!pl022->sgt_rx.sgl);

#ifdef VERBOSE_DEBUG
	/*
	 * Optionally dump out buffers to inspect contents, this is
	 * good if you want to convince yourself that the loopback
	 * read/write contents are the same, when adapting to a new
	 * DMA engine.
	 */
	{
		struct scatterlist *sg;
		unsigned int i;

		dma_sync_sg_for_cpu(&pl022->adev->dev,
				    pl022->sgt_rx.sgl,
				    pl022->sgt_rx.nents,
				    DMA_FROM_DEVICE);

		for_each_sg(pl022->sgt_rx.sgl, sg, pl022->sgt_rx.nents, i) {
			dev_dbg(&pl022->adev->dev, "SPI RX SG ENTRY: %d", i);
			print_hex_dump(KERN_ERR, "SPI RX: ",
				       DUMP_PREFIX_OFFSET,
				       16,
				       1,
				       sg_virt(sg),
				       sg_dma_len(sg),
				       1);
		}
		for_each_sg(pl022->sgt_tx.sgl, sg, pl022->sgt_tx.nents, i) {
			dev_dbg(&pl022->adev->dev, "SPI TX SG ENTRY: %d", i);
			print_hex_dump(KERN_ERR, "SPI TX: ",
				       DUMP_PREFIX_OFFSET,
				       16,
				       1,
				       sg_virt(sg),
				       sg_dma_len(sg),
				       1);
		}
	}
#endif

	unmap_free_dma_scatter(pl022);

	/* Update total bytes transferred */
	msg->actual_length += pl022->cur_transfer->len;
	if (pl022->cur_transfer->cs_change)
		pl022->cur_chip->
			cs_control(SSP_CHIP_DESELECT);

	/* Move to next transfer */
	msg->state = next_transfer(pl022);
	tasklet_schedule(&pl022->pump_transfers);
}

static void setup_dma_scatter(struct pl022 *pl022,
			      void *buffer,
			      unsigned int length,
			      struct sg_table *sgtab)
{
	struct scatterlist *sg;
	int bytesleft = length;
	void *bufp = buffer;
	int mapbytes;
	int i;

	if (buffer) {
		for_each_sg(sgtab->sgl, sg, sgtab->nents, i) {
			/*
			 * If there are fewer bytes left than what fits
			 * in the current page (plus page alignment offset)
			 * we just feed in this, else we stuff in as much
			 * as we can.
			 */
			if (bytesleft < (PAGE_SIZE - offset_in_page(bufp)))
				mapbytes = bytesleft;
			else
				mapbytes = PAGE_SIZE - offset_in_page(bufp);
			sg_set_page(sg, virt_to_page(bufp),
				    mapbytes, offset_in_page(bufp));
			bufp += mapbytes;
			bytesleft -= mapbytes;
			dev_dbg(&pl022->adev->dev,
				"set RX/TX target page @ %p, %d bytes, %d left\n",
				bufp, mapbytes, bytesleft);
		}
	} else {
		/* Map the dummy buffer on every page */
		for_each_sg(sgtab->sgl, sg, sgtab->nents, i) {
			if (bytesleft < PAGE_SIZE)
				mapbytes = bytesleft;
			else
				mapbytes = PAGE_SIZE;
			sg_set_page(sg, virt_to_page(pl022->dummypage),
				    mapbytes, 0);
			bytesleft -= mapbytes;
			dev_dbg(&pl022->adev->dev,
				"set RX/TX to dummy page %d bytes, %d left\n",
				mapbytes, bytesleft);
		}
	}
	BUG_ON(bytesleft);
}

/**
 * configure_dma - configures the channels for the next transfer
 * @pl022: SSP driver's private data structure
 */
static int configure_dma(struct pl022 *pl022)
{
	struct dma_slave_config rx_conf = {
		.src_addr = SSP_DR(pl022->phybase),
		.direction = DMA_DEV_TO_MEM,
		.device_fc = false,
	};
	struct dma_slave_config tx_conf = {
		.dst_addr = SSP_DR(pl022->phybase),
		.direction = DMA_MEM_TO_DEV,
		.device_fc = false,
	};
	unsigned int pages;
	int ret;
	int rx_sglen, tx_sglen;
	struct dma_chan *rxchan = pl022->dma_rx_channel;
	struct dma_chan *txchan = pl022->dma_tx_channel;
	struct dma_async_tx_descriptor *rxdesc;
	struct dma_async_tx_descriptor *txdesc;

	/* Check that the channels are available */
	if (!rxchan || !txchan)
		return -ENODEV;

	/*
	 * If supplied, the DMA burstsize should equal the FIFO trigger level.
	 * Notice that the DMA engine uses one-to-one mapping. Since we can
	 * not trigger on 2 elements this needs explicit mapping rather than
	 * calculation.
	 */
	switch (pl022->rx_lev_trig) {
	case SSP_RX_1_OR_MORE_ELEM:
		rx_conf.src_maxburst = 1;
		break;
	case SSP_RX_4_OR_MORE_ELEM:
		rx_conf.src_maxburst = 4;
		break;
	case SSP_RX_8_OR_MORE_ELEM:
		rx_conf.src_maxburst = 8;
		break;
	case SSP_RX_16_OR_MORE_ELEM:
		rx_conf.src_maxburst = 16;
		break;
	case SSP_RX_32_OR_MORE_ELEM:
		rx_conf.src_maxburst = 32;
		break;
	default:
		rx_conf.src_maxburst = pl022->vendor->fifodepth >> 1;
		break;
	}

	switch (pl022->tx_lev_trig) {
	case SSP_TX_1_OR_MORE_EMPTY_LOC:
		tx_conf.dst_maxburst = 1;
		break;
	case SSP_TX_4_OR_MORE_EMPTY_LOC:
		tx_conf.dst_maxburst = 4;
		break;
	case SSP_TX_8_OR_MORE_EMPTY_LOC:
		tx_conf.dst_maxburst = 8;
		break;
	case SSP_TX_16_OR_MORE_EMPTY_LOC:
		tx_conf.dst_maxburst = 16;
		break;
	case SSP_TX_32_OR_MORE_EMPTY_LOC:
		tx_conf.dst_maxburst = 32;
		break;
	default:
		tx_conf.dst_maxburst = pl022->vendor->fifodepth >> 1;
		break;
	}

	switch (pl022->read) {
	case READING_NULL:
		/* Use the same as for writing */
		rx_conf.src_addr_width = DMA_SLAVE_BUSWIDTH_UNDEFINED;
		break;
	case READING_U8:
		rx_conf.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
		break;
	case READING_U16:
		rx_conf.src_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES;
		break;
	case READING_U32:
		rx_conf.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
		break;
	}

	switch (pl022->write) {
	case WRITING_NULL:
		/* Use the same as for reading */
		tx_conf.dst_addr_width = DMA_SLAVE_BUSWIDTH_UNDEFINED;
		break;
	case WRITING_U8:
		tx_conf.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
		break;
	case WRITING_U16:
		tx_conf.dst_addr_width = DMA_SLAVE_BUSWIDTH_2_BYTES;
		break;
	case WRITING_U32:
		tx_conf.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
		break;
	}

	/* SPI peculiarity: we need to read and write the same width */
	if (rx_conf.src_addr_width == DMA_SLAVE_BUSWIDTH_UNDEFINED)
		rx_conf.src_addr_width = tx_conf.dst_addr_width;
	if (tx_conf.dst_addr_width == DMA_SLAVE_BUSWIDTH_UNDEFINED)
		tx_conf.dst_addr_width = rx_conf.src_addr_width;
	BUG_ON(rx_conf.src_addr_width != tx_conf.dst_addr_width);

	dmaengine_slave_config(rxchan, &rx_conf);
	dmaengine_slave_config(txchan, &tx_conf);

	/* Create sglists for the transfers */
	pages = DIV_ROUND_UP(pl022->cur_transfer->len, PAGE_SIZE);
	dev_dbg(&pl022->adev->dev, "using %d pages for transfer\n", pages);

	ret = sg_alloc_table(&pl022->sgt_rx, pages, GFP_ATOMIC);
	if (ret)
		goto err_alloc_rx_sg;

	ret = sg_alloc_table(&pl022->sgt_tx, pages, GFP_ATOMIC);
	if (ret)
		goto err_alloc_tx_sg;

	/* Fill in the scatterlists for the RX+TX buffers */
	setup_dma_scatter(pl022, pl022->rx,
			  pl022->cur_transfer->len, &pl022->sgt_rx);
	setup_dma_scatter(pl022, pl022->tx,
			  pl022->cur_transfer->len, &pl022->sgt_tx);

	/* Map DMA buffers */
	rx_sglen = dma_map_sg(rxchan->device->dev, pl022->sgt_rx.sgl,
			      pl022->sgt_rx.nents, DMA_FROM_DEVICE);
	if (!rx_sglen)
		goto err_rx_sgmap;

	tx_sglen = dma_map_sg(txchan->device->dev, pl022->sgt_tx.sgl,
			      pl022->sgt_tx.nents, DMA_TO_DEVICE);
	if (!tx_sglen)
		goto err_tx_sgmap;

	/* Send both scatterlists */
	rxdesc = dmaengine_prep_slave_sg(rxchan,
					 pl022->sgt_rx.sgl,
					 rx_sglen,
					 DMA_DEV_TO_MEM,
					 DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
	if (!rxdesc)
		goto err_rxdesc;

	txdesc = dmaengine_prep_slave_sg(txchan,
					 pl022->sgt_tx.sgl,
					 tx_sglen,
					 DMA_MEM_TO_DEV,
					 DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
	if (!txdesc)
		goto err_txdesc;

	/* Put the callback on the RX transfer only, that should finish last */
	rxdesc->callback = dma_callback;
	rxdesc->callback_param = pl022;

	/* Submit and fire RX and TX with TX last so we're ready to read! */
	dmaengine_submit(rxdesc);
	dmaengine_submit(txdesc);
	dma_async_issue_pending(rxchan);
	dma_async_issue_pending(txchan);
	pl022->dma_running = true;

	return 0;

err_txdesc:
	dmaengine_terminate_all(txchan);
err_rxdesc:
	dmaengine_terminate_all(rxchan);
	dma_unmap_sg(txchan->device->dev, pl022->sgt_tx.sgl,
		     pl022->sgt_tx.nents, DMA_TO_DEVICE);
err_tx_sgmap:
	dma_unmap_sg(rxchan->device->dev, pl022->sgt_rx.sgl,
		     pl022->sgt_rx.nents, DMA_FROM_DEVICE);
err_rx_sgmap:
	sg_free_table(&pl022->sgt_tx);
err_alloc_tx_sg:
	sg_free_table(&pl022->sgt_rx);
err_alloc_rx_sg:
	return -ENOMEM;
}

static int __devinit pl022_dma_probe(struct pl022 *pl022)
{
	dma_cap_mask_t mask;

	/* Try to acquire a generic DMA engine slave channel */
	dma_cap_zero(mask);
	dma_cap_set(DMA_SLAVE, mask);
	/*
	 * We need both RX and TX channels to do DMA, else do none
	 * of them.
	 */
	pl022->dma_rx_channel = dma_request_channel(mask,
					pl022->master_info->dma_filter,
					pl022->master_info->dma_rx_param);
	if (!pl022->dma_rx_channel) {
		dev_dbg(&pl022->adev->dev, "no RX DMA channel!\n");
		goto err_no_rxchan;
	}

	pl022->dma_tx_channel = dma_request_channel(mask,
					pl022->master_info->dma_filter,
					pl022->master_info->dma_tx_param);
	if (!pl022->dma_tx_channel) {
		dev_dbg(&pl022->adev->dev, "no TX DMA channel!\n");
		goto err_no_txchan;
	}

	pl022->dummypage = kmalloc(PAGE_SIZE, GFP_KERNEL);
	if (!pl022->dummypage) {
		dev_dbg(&pl022->adev->dev, "no DMA dummypage!\n");
		goto err_no_dummypage;
	}

	dev_info(&pl022->adev->dev, "setup for DMA on RX %s, TX %s\n",
		 dma_chan_name(pl022->dma_rx_channel),
		 dma_chan_name(pl022->dma_tx_channel));

	return 0;

err_no_dummypage:
	dma_release_channel(pl022->dma_tx_channel);
err_no_txchan:
	dma_release_channel(pl022->dma_rx_channel);
	pl022->dma_rx_channel = NULL;
err_no_rxchan:
	dev_err(&pl022->adev->dev,
		"Failed to set up DMA, working without DMA!\n");
	return -ENODEV;
}

static void terminate_dma(struct pl022 *pl022)
{
	struct dma_chan *rxchan = pl022->dma_rx_channel;
	struct dma_chan *txchan = pl022->dma_tx_channel;

	dmaengine_terminate_all(rxchan);
	dmaengine_terminate_all(txchan);
	unmap_free_dma_scatter(pl022);
	pl022->dma_running = false;
}

static void pl022_dma_remove(struct pl022 *pl022)
{
	if (pl022->dma_running)
		terminate_dma(pl022);
	if (pl022->dma_tx_channel)
		dma_release_channel(pl022->dma_tx_channel);
	if (pl022->dma_rx_channel)
		dma_release_channel(pl022->dma_rx_channel);
	kfree(pl022->dummypage);
}

#else
static inline int configure_dma(struct pl022 *pl022)
{
	return -ENODEV;
}

static inline int pl022_dma_probe(struct pl022 *pl022)
{
	return 0;
}

static inline void pl022_dma_remove(struct pl022 *pl022)
{
}
#endif
1155 1160
/**
 * pl022_interrupt_handler - Interrupt handler for SSP controller
 *
 * This function handles interrupts generated for an interrupt based transfer.
 * If a receive overrun (ROR) interrupt occurs we disable the SSP, flag the
 * current message's state as STATE_ERROR and schedule the pump_transfers
 * tasklet, which postprocesses the current message by calling giveback().
 * Otherwise it drains the RX FIFO until no more data remains and fills the
 * TX FIFO until it is full. When the transfer completes, we move on to the
 * next transfer and schedule the tasklet.
 */
static irqreturn_t pl022_interrupt_handler(int irq, void *dev_id)
{
	struct pl022 *pl022 = dev_id;
	struct spi_message *msg = pl022->cur_msg;
	u16 irq_status = 0;
	u16 flag = 0;

	if (unlikely(!msg)) {
		dev_err(&pl022->adev->dev,
			"bad message state in interrupt handler");
		/* Never fail */
		return IRQ_HANDLED;
	}

	/* Read the Interrupt Status Register */
	irq_status = readw(SSP_MIS(pl022->virtbase));

	if (unlikely(!irq_status))
		return IRQ_NONE;

	/*
	 * This handles the FIFO interrupts, the timeout
	 * interrupts are flatly ignored, they cannot be
	 * trusted.
	 */
	if (unlikely(irq_status & SSP_MIS_MASK_RORMIS)) {
		/*
		 * Overrun interrupt - bail out since our Data has been
		 * corrupted
		 */
		dev_err(&pl022->adev->dev, "FIFO overrun\n");
		if (readw(SSP_SR(pl022->virtbase)) & SSP_SR_MASK_RFF)
			dev_err(&pl022->adev->dev,
				"RXFIFO is full\n");
		if (readw(SSP_SR(pl022->virtbase)) & SSP_SR_MASK_TNF)
			dev_err(&pl022->adev->dev,
				"TXFIFO is full\n");

		/*
		 * Disable and clear interrupts, disable SSP,
		 * mark message with bad status so it can be
		 * retried.
		 */
		writew(DISABLE_ALL_INTERRUPTS,
		       SSP_IMSC(pl022->virtbase));
		writew(CLEAR_ALL_INTERRUPTS, SSP_ICR(pl022->virtbase));
		writew((readw(SSP_CR1(pl022->virtbase)) &
			(~SSP_CR1_MASK_SSE)), SSP_CR1(pl022->virtbase));
		msg->state = STATE_ERROR;

		/* Schedule message queue handler */
		tasklet_schedule(&pl022->pump_transfers);
		return IRQ_HANDLED;
	}

	readwriter(pl022);

	if ((pl022->tx == pl022->tx_end) && (flag == 0)) {
		flag = 1;
		/* Disable Transmit interrupt, enable receive interrupt */
		writew((readw(SSP_IMSC(pl022->virtbase)) &
		       ~SSP_IMSC_MASK_TXIM) | SSP_IMSC_MASK_RXIM,
		       SSP_IMSC(pl022->virtbase));
	}

	/*
	 * Since all transactions must write as much as shall be read,
	 * we can conclude the entire transaction once RX is complete.
	 * At this point, all TX will always be finished.
	 */
	if (pl022->rx >= pl022->rx_end) {
		writew(DISABLE_ALL_INTERRUPTS,
		       SSP_IMSC(pl022->virtbase));
		writew(CLEAR_ALL_INTERRUPTS, SSP_ICR(pl022->virtbase));
		if (unlikely(pl022->rx > pl022->rx_end)) {
			dev_warn(&pl022->adev->dev, "read %u surplus "
				 "bytes (did you request an odd "
				 "number of bytes on a 16bit bus?)\n",
				 (u32) (pl022->rx - pl022->rx_end));
		}
		/* Update total bytes transferred */
		msg->actual_length += pl022->cur_transfer->len;
		if (pl022->cur_transfer->cs_change)
			pl022->cur_chip->
				cs_control(SSP_CHIP_DESELECT);
		/* Move to next transfer */
		msg->state = next_transfer(pl022);
		tasklet_schedule(&pl022->pump_transfers);
		return IRQ_HANDLED;
	}

	return IRQ_HANDLED;
}

/**
 * This sets up the pointers to memory for the next message to
 * send out on the SPI bus.
 */
static int set_up_next_transfer(struct pl022 *pl022,
				struct spi_transfer *transfer)
{
	int residue;

	/* Sanity check the message for this bus width */
	residue = pl022->cur_transfer->len % pl022->cur_chip->n_bytes;
	if (unlikely(residue != 0)) {
		dev_err(&pl022->adev->dev,
			"message of %u bytes to transmit but the current "
			"chip bus has a data width of %u bytes!\n",
			pl022->cur_transfer->len,
			pl022->cur_chip->n_bytes);
		dev_err(&pl022->adev->dev, "skipping this message\n");
		return -EIO;
	}
	pl022->tx = (void *)transfer->tx_buf;
	pl022->tx_end = pl022->tx + pl022->cur_transfer->len;
	pl022->rx = (void *)transfer->rx_buf;
	pl022->rx_end = pl022->rx + pl022->cur_transfer->len;
	pl022->write =
	    pl022->tx ? pl022->cur_chip->write : WRITING_NULL;
	pl022->read = pl022->rx ? pl022->cur_chip->read : READING_NULL;
	return 0;
}

/**
 * pump_transfers - Tasklet function which schedules next transfer
 * when running in interrupt or DMA transfer mode.
 * @data: SSP driver private data structure
 *
 */
static void pump_transfers(unsigned long data)
{
	struct pl022 *pl022 = (struct pl022 *) data;
	struct spi_message *message = NULL;
	struct spi_transfer *transfer = NULL;
	struct spi_transfer *previous = NULL;

	/* Get current state information */
	message = pl022->cur_msg;
	transfer = pl022->cur_transfer;

	/* Handle for abort */
	if (message->state == STATE_ERROR) {
		message->status = -EIO;
		giveback(pl022);
		return;
	}

	/* Handle end of message */
	if (message->state == STATE_DONE) {
		message->status = 0;
		giveback(pl022);
		return;
	}

	/* Delay if requested at end of transfer before CS change */
	if (message->state == STATE_RUNNING) {
		previous = list_entry(transfer->transfer_list.prev,
					struct spi_transfer,
					transfer_list);
		if (previous->delay_usecs)
			/*
			 * FIXME: This runs in interrupt context.
			 * Is this really smart?
			 */
			udelay(previous->delay_usecs);

		/* Reselect chip select only if cs_change was requested */
		if (previous->cs_change)
			pl022->cur_chip->cs_control(SSP_CHIP_SELECT);
	} else {
		/* STATE_START */
		message->state = STATE_RUNNING;
	}

	if (set_up_next_transfer(pl022, transfer)) {
		message->state = STATE_ERROR;
		message->status = -EIO;
		giveback(pl022);
		return;
	}
	/* Flush the FIFOs and let's go! */
	flush(pl022);

	if (pl022->cur_chip->enable_dma) {
		if (configure_dma(pl022)) {
			dev_dbg(&pl022->adev->dev,
				"configuration of DMA failed, fall back to interrupt mode\n");
			goto err_config_dma;
		}
		return;
	}

err_config_dma:
	/* enable all interrupts except RX */
	writew(ENABLE_ALL_INTERRUPTS & ~SSP_IMSC_MASK_RXIM, SSP_IMSC(pl022->virtbase));
}

static void do_interrupt_dma_transfer(struct pl022 *pl022)
{
	/*
	 * Default is to enable all interrupts except RX -
	 * this will be enabled once TX is complete
	 */
	u32 irqflags = ENABLE_ALL_INTERRUPTS & ~SSP_IMSC_MASK_RXIM;

	/* Enable target chip, if not already active */
	if (!pl022->next_msg_cs_active)
		pl022->cur_chip->cs_control(SSP_CHIP_SELECT);

	if (set_up_next_transfer(pl022, pl022->cur_transfer)) {
		/* Error path */
		pl022->cur_msg->state = STATE_ERROR;
		pl022->cur_msg->status = -EIO;
		giveback(pl022);
		return;
	}
	/* If we're using DMA, set up DMA here */
	if (pl022->cur_chip->enable_dma) {
		/* Configure DMA transfer */
		if (configure_dma(pl022)) {
			dev_dbg(&pl022->adev->dev,
				"configuration of DMA failed, fall back to interrupt mode\n");
			goto err_config_dma;
		}
		/* Disable interrupts in DMA mode, IRQ from DMA controller */
		irqflags = DISABLE_ALL_INTERRUPTS;
	}
err_config_dma:
	/* Enable SSP, turn on interrupts */
	writew((readw(SSP_CR1(pl022->virtbase)) | SSP_CR1_MASK_SSE),
	       SSP_CR1(pl022->virtbase));
	writew(irqflags, SSP_IMSC(pl022->virtbase));
}

static void do_polling_transfer(struct pl022 *pl022)
{
	struct spi_message *message = NULL;
	struct spi_transfer *transfer = NULL;
	struct spi_transfer *previous = NULL;
	struct chip_data *chip;
	unsigned long time, timeout;

	chip = pl022->cur_chip;
	message = pl022->cur_msg;

	while (message->state != STATE_DONE) {
		/* Handle for abort */
		if (message->state == STATE_ERROR)
			break;
		transfer = pl022->cur_transfer;

		/* Delay if requested at end of transfer */
		if (message->state == STATE_RUNNING) {
			previous =
			    list_entry(transfer->transfer_list.prev,
				       struct spi_transfer, transfer_list);
			if (previous->delay_usecs)
				udelay(previous->delay_usecs);
			if (previous->cs_change)
				pl022->cur_chip->cs_control(SSP_CHIP_SELECT);
		} else {
			/* STATE_START */
			message->state = STATE_RUNNING;
			if (!pl022->next_msg_cs_active)
				pl022->cur_chip->cs_control(SSP_CHIP_SELECT);
		}

		/* Configuration Changing Per Transfer */
		if (set_up_next_transfer(pl022, transfer)) {
			/* Error path */
			message->state = STATE_ERROR;
			break;
		}
		/* Flush FIFOs and enable SSP */
		flush(pl022);
		writew((readw(SSP_CR1(pl022->virtbase)) | SSP_CR1_MASK_SSE),
		       SSP_CR1(pl022->virtbase));

		dev_dbg(&pl022->adev->dev, "polling transfer ongoing ...\n");

		timeout = jiffies + msecs_to_jiffies(SPI_POLLING_TIMEOUT);
		while (pl022->tx < pl022->tx_end || pl022->rx < pl022->rx_end) {
			time = jiffies;
			readwriter(pl022);
			if (time_after(time, timeout)) {
				dev_warn(&pl022->adev->dev,
					 "%s: timeout!\n", __func__);
				message->state = STATE_ERROR;
				goto out;
			}
			cpu_relax();
		}

		/* Update total byte transferred */
		message->actual_length += pl022->cur_transfer->len;
		if (pl022->cur_transfer->cs_change)
			pl022->cur_chip->cs_control(SSP_CHIP_DESELECT);
		/* Move to next transfer */
		message->state = next_transfer(pl022);
	}
out:
	/* Handle end of message */
	if (message->state == STATE_DONE)
		message->status = 0;
	else
		message->status = -EIO;

	giveback(pl022);
	return;
}

static int pl022_transfer_one_message(struct spi_master *master,
				      struct spi_message *msg)
{
	struct pl022 *pl022 = spi_master_get_devdata(master);

	/* Initial message state */
	pl022->cur_msg = msg;
	msg->state = STATE_START;

	pl022->cur_transfer = list_entry(msg->transfers.next,
					 struct spi_transfer, transfer_list);

	/* Setup the SPI using the per chip configuration */
	pl022->cur_chip = spi_get_ctldata(msg->spi);

	restore_state(pl022);
	flush(pl022);

	if (pl022->cur_chip->xfer_type == POLLING_TRANSFER)
		do_polling_transfer(pl022);
	else
		do_interrupt_dma_transfer(pl022);

	return 0;
}

static int pl022_prepare_transfer_hardware(struct spi_master *master)
{
	struct pl022 *pl022 = spi_master_get_devdata(master);

	/*
	 * Just make sure we have all we need to run the transfer by syncing
	 * with the runtime PM framework.
	 */
	pm_runtime_get_sync(&pl022->adev->dev);
	return 0;
}

static int pl022_unprepare_transfer_hardware(struct spi_master *master)
{
	struct pl022 *pl022 = spi_master_get_devdata(master);

	/* nothing more to do - disable spi/ssp and power off */
	writew((readw(SSP_CR1(pl022->virtbase)) &
		(~SSP_CR1_MASK_SSE)), SSP_CR1(pl022->virtbase));

	if (pl022->master_info->autosuspend_delay > 0) {
		pm_runtime_mark_last_busy(&pl022->adev->dev);
		pm_runtime_put_autosuspend(&pl022->adev->dev);
	} else {
		pm_runtime_put(&pl022->adev->dev);
	}

	return 0;
}

static int verify_controller_parameters(struct pl022 *pl022,
				struct pl022_config_chip const *chip_info)
{
	if ((chip_info->iface < SSP_INTERFACE_MOTOROLA_SPI)
	    || (chip_info->iface > SSP_INTERFACE_UNIDIRECTIONAL)) {
		dev_err(&pl022->adev->dev,
			"interface is configured incorrectly\n");
		return -EINVAL;
	}
	if ((chip_info->iface == SSP_INTERFACE_UNIDIRECTIONAL) &&
	    (!pl022->vendor->unidir)) {
		dev_err(&pl022->adev->dev,
			"unidirectional mode not supported in this "
			"hardware version\n");
		return -EINVAL;
	}
	if ((chip_info->hierarchy != SSP_MASTER)
	    && (chip_info->hierarchy != SSP_SLAVE)) {
		dev_err(&pl022->adev->dev,
			"hierarchy is configured incorrectly\n");
		return -EINVAL;
	}
	if ((chip_info->com_mode != INTERRUPT_TRANSFER)
	    && (chip_info->com_mode != DMA_TRANSFER)
	    && (chip_info->com_mode != POLLING_TRANSFER)) {
		dev_err(&pl022->adev->dev,
			"Communication mode is configured incorrectly\n");
		return -EINVAL;
	}
	switch (chip_info->rx_lev_trig) {
	case SSP_RX_1_OR_MORE_ELEM:
	case SSP_RX_4_OR_MORE_ELEM:
	case SSP_RX_8_OR_MORE_ELEM:
		/* These are always OK, all variants can handle this */
		break;
	case SSP_RX_16_OR_MORE_ELEM:
		if (pl022->vendor->fifodepth < 16) {
			dev_err(&pl022->adev->dev,
				"RX FIFO Trigger Level is configured incorrectly\n");
			return -EINVAL;
		}
		break;
	case SSP_RX_32_OR_MORE_ELEM:
		if (pl022->vendor->fifodepth < 32) {
			dev_err(&pl022->adev->dev,
				"RX FIFO Trigger Level is configured incorrectly\n");
			return -EINVAL;
		}
		break;
	default:
		dev_err(&pl022->adev->dev,
			"RX FIFO Trigger Level is configured incorrectly\n");
		return -EINVAL;
	}
	switch (chip_info->tx_lev_trig) {
	case SSP_TX_1_OR_MORE_EMPTY_LOC:
	case SSP_TX_4_OR_MORE_EMPTY_LOC:
	case SSP_TX_8_OR_MORE_EMPTY_LOC:
		/* These are always OK, all variants can handle this */
		break;
	case SSP_TX_16_OR_MORE_EMPTY_LOC:
		if (pl022->vendor->fifodepth < 16) {
			dev_err(&pl022->adev->dev,
				"TX FIFO Trigger Level is configured incorrectly\n");
			return -EINVAL;
		}
		break;
	case SSP_TX_32_OR_MORE_EMPTY_LOC:
		if (pl022->vendor->fifodepth < 32) {
			dev_err(&pl022->adev->dev,
				"TX FIFO Trigger Level is configured incorrectly\n");
			return -EINVAL;
		}
		break;
	default:
		dev_err(&pl022->adev->dev,
			"TX FIFO Trigger Level is configured incorrectly\n");
		return -EINVAL;
	}
	if (chip_info->iface == SSP_INTERFACE_NATIONAL_MICROWIRE) {
		if ((chip_info->ctrl_len < SSP_BITS_4)
		    || (chip_info->ctrl_len > SSP_BITS_32)) {
			dev_err(&pl022->adev->dev,
				"CTRL LEN is configured incorrectly\n");
			return -EINVAL;
		}
		if ((chip_info->wait_state != SSP_MWIRE_WAIT_ZERO)
		    && (chip_info->wait_state != SSP_MWIRE_WAIT_ONE)) {
			dev_err(&pl022->adev->dev,
				"Wait State is configured incorrectly\n");
			return -EINVAL;
		}
		/* Half duplex is only available in the ST Micro version */
		if (pl022->vendor->extended_cr) {
			if ((chip_info->duplex !=
			     SSP_MICROWIRE_CHANNEL_FULL_DUPLEX)
			    && (chip_info->duplex !=
				SSP_MICROWIRE_CHANNEL_HALF_DUPLEX)) {
				dev_err(&pl022->adev->dev,
					"Microwire duplex mode is configured incorrectly\n");
				return -EINVAL;
			}
		} else {
			if (chip_info->duplex != SSP_MICROWIRE_CHANNEL_FULL_DUPLEX)
				dev_err(&pl022->adev->dev,
					"Microwire half duplex mode requested,"
					" but this is only available in the"
					" ST version of PL022\n");
			return -EINVAL;
		}
	}
	return 0;
}

static inline u32 spi_rate(u32 rate, u16 cpsdvsr, u16 scr)
{
	return rate / (cpsdvsr * (1 + scr));
}

static int calculate_effective_freq(struct pl022 *pl022, int freq, struct
				    ssp_clock_params * clk_freq)
{
	/* Lets calculate the frequency parameters */
	u16 cpsdvsr = CPSDVR_MIN, scr = SCR_MIN;
	u32 rate, max_tclk, min_tclk, best_freq = 0, best_cpsdvsr = 0,
		best_scr = 0, tmp, found = 0;

	rate = clk_get_rate(pl022->clk);
	/* cpsdvscr = 2 & scr 0 */
	max_tclk = spi_rate(rate, CPSDVR_MIN, SCR_MIN);
	/* cpsdvsr = 254 & scr = 255 */
	min_tclk = spi_rate(rate, CPSDVR_MAX, SCR_MAX);

	if (freq > max_tclk)
		dev_warn(&pl022->adev->dev,
			"Max speed that can be programmed is %d Hz, you requested %d\n",
			max_tclk, freq);

	if (freq < min_tclk) {
		dev_err(&pl022->adev->dev,
			"Requested frequency: %d Hz is less than minimum possible %d Hz\n",
			freq, min_tclk);
		return -EINVAL;
	}

	/*
	 * best_freq will give closest possible available rate (<= requested
	 * freq) for all values of scr & cpsdvsr.
	 */
	while ((cpsdvsr <= CPSDVR_MAX) && !found) {
		while (scr <= SCR_MAX) {
1688 tmp = spi_rate(rate, cpsdvsr, scr); 1693 tmp = spi_rate(rate, cpsdvsr, scr);
1689 1694
1690 if (tmp > freq) { 1695 if (tmp > freq) {
1691 /* we need lower freq */ 1696 /* we need lower freq */
1692 scr++; 1697 scr++;
1693 continue; 1698 continue;
1694 } 1699 }
1695 1700
1696 /* 1701 /*
1697 * If found exact value, mark found and break. 1702 * If found exact value, mark found and break.
1698 * If found more closer value, update and break. 1703 * If found more closer value, update and break.
1699 */ 1704 */
1700 if (tmp > best_freq) { 1705 if (tmp > best_freq) {
1701 best_freq = tmp; 1706 best_freq = tmp;
1702 best_cpsdvsr = cpsdvsr; 1707 best_cpsdvsr = cpsdvsr;
1703 best_scr = scr; 1708 best_scr = scr;
1704 1709
1705 if (tmp == freq) 1710 if (tmp == freq)
1706 found = 1; 1711 found = 1;
1707 } 1712 }
1708 /* 1713 /*
1709 * increased scr will give lower rates, which are not 1714 * increased scr will give lower rates, which are not
1710 * required 1715 * required
1711 */ 1716 */
1712 break; 1717 break;
1713 } 1718 }
1714 cpsdvsr += 2; 1719 cpsdvsr += 2;
1715 scr = SCR_MIN; 1720 scr = SCR_MIN;
1716 } 1721 }
1717 1722
1718 WARN(!best_freq, "pl022: Matching cpsdvsr and scr not found for %d Hz rate \n", 1723 WARN(!best_freq, "pl022: Matching cpsdvsr and scr not found for %d Hz rate \n",
1719 freq); 1724 freq);
1720 1725
1721 clk_freq->cpsdvsr = (u8) (best_cpsdvsr & 0xFF); 1726 clk_freq->cpsdvsr = (u8) (best_cpsdvsr & 0xFF);
1722 clk_freq->scr = (u8) (best_scr & 0xFF); 1727 clk_freq->scr = (u8) (best_scr & 0xFF);
1723 dev_dbg(&pl022->adev->dev, 1728 dev_dbg(&pl022->adev->dev,
1724 "SSP Target Frequency is: %u, Effective Frequency is %u\n", 1729 "SSP Target Frequency is: %u, Effective Frequency is %u\n",
1725 freq, best_freq); 1730 freq, best_freq);
1726 dev_dbg(&pl022->adev->dev, "SSP cpsdvsr = %d, scr = %d\n", 1731 dev_dbg(&pl022->adev->dev, "SSP cpsdvsr = %d, scr = %d\n",
1727 clk_freq->cpsdvsr, clk_freq->scr); 1732 clk_freq->cpsdvsr, clk_freq->scr);
1728 1733
1729 return 0; 1734 return 0;
1730 } 1735 }

/*
 * A piece of default chip info unless the platform
 * supplies it.
 */
static const struct pl022_config_chip pl022_default_chip_info = {
	.com_mode = POLLING_TRANSFER,
	.iface = SSP_INTERFACE_MOTOROLA_SPI,
	.hierarchy = SSP_SLAVE,
	.slave_tx_disable = DO_NOT_DRIVE_TX,
	.rx_lev_trig = SSP_RX_1_OR_MORE_ELEM,
	.tx_lev_trig = SSP_TX_1_OR_MORE_EMPTY_LOC,
	.ctrl_len = SSP_BITS_8,
	.wait_state = SSP_MWIRE_WAIT_ZERO,
	.duplex = SSP_MICROWIRE_CHANNEL_FULL_DUPLEX,
	.cs_control = null_cs_control,
};

/**
 * pl022_setup - setup function registered to SPI master framework
 * @spi: spi device which is requesting setup
 *
 * This function is registered to the SPI framework for this SPI master
 * controller. If it is the first time when setup is called by this device,
 * this function will initialize the runtime state for this chip and save
 * the same in the device structure. Else it will update the runtime info
 * with the updated chip info. Nothing is really being written to the
 * controller hardware here, that is not done until the actual transfer
 * commence.
 */
static int pl022_setup(struct spi_device *spi)
{
	struct pl022_config_chip const *chip_info;
	struct chip_data *chip;
	struct ssp_clock_params clk_freq = { .cpsdvsr = 0, .scr = 0};
	int status = 0;
	struct pl022 *pl022 = spi_master_get_devdata(spi->master);
	unsigned int bits = spi->bits_per_word;
	u32 tmp;

	if (!spi->max_speed_hz)
		return -EINVAL;

	/* Get controller_state if one is supplied */
	chip = spi_get_ctldata(spi);

	if (chip == NULL) {
		chip = kzalloc(sizeof(struct chip_data), GFP_KERNEL);
		if (!chip) {
			dev_err(&spi->dev,
				"cannot allocate controller state\n");
			return -ENOMEM;
		}
		dev_dbg(&spi->dev,
			"allocated memory for controller's runtime state\n");
	}

	/* Get controller data if one is supplied */
	chip_info = spi->controller_data;

	if (chip_info == NULL) {
		chip_info = &pl022_default_chip_info;
		/* spi_board_info.controller_data not is supplied */
		dev_dbg(&spi->dev,
			"using default controller_data settings\n");
	} else
		dev_dbg(&spi->dev,
			"using user supplied controller_data settings\n");

	/*
	 * We can override with custom divisors, else we use the board
	 * frequency setting
	 */
	if ((0 == chip_info->clk_freq.cpsdvsr)
	    && (0 == chip_info->clk_freq.scr)) {
		status = calculate_effective_freq(pl022,
						  spi->max_speed_hz,
						  &clk_freq);
		if (status < 0)
			goto err_config_params;
	} else {
		memcpy(&clk_freq, &chip_info->clk_freq, sizeof(clk_freq));
		if ((clk_freq.cpsdvsr % 2) != 0)
			clk_freq.cpsdvsr =
				clk_freq.cpsdvsr - 1;
	}
	if ((clk_freq.cpsdvsr < CPSDVR_MIN)
	    || (clk_freq.cpsdvsr > CPSDVR_MAX)) {
		status = -EINVAL;
		dev_err(&spi->dev,
			"cpsdvsr is configured incorrectly\n");
		goto err_config_params;
	}

	status = verify_controller_parameters(pl022, chip_info);
	if (status) {
		dev_err(&spi->dev, "controller data is incorrect");
		goto err_config_params;
	}

	pl022->rx_lev_trig = chip_info->rx_lev_trig;
	pl022->tx_lev_trig = chip_info->tx_lev_trig;

	/* Now set controller state based on controller data */
	chip->xfer_type = chip_info->com_mode;
	if (!chip_info->cs_control) {
		chip->cs_control = null_cs_control;
		dev_warn(&spi->dev,
			 "chip select function is NULL for this chip\n");
	} else
		chip->cs_control = chip_info->cs_control;

	/* Check bits per word with vendor specific range */
	if ((bits <= 3) || (bits > pl022->vendor->max_bpw)) {
		status = -ENOTSUPP;
		dev_err(&spi->dev, "illegal data size for this controller!\n");
		dev_err(&spi->dev, "This controller can only handle 4 <= n <= %d bit words\n",
				pl022->vendor->max_bpw);
		goto err_config_params;
	} else if (bits <= 8) {
		dev_dbg(&spi->dev, "4 <= n <=8 bits per word\n");
		chip->n_bytes = 1;
		chip->read = READING_U8;
		chip->write = WRITING_U8;
	} else if (bits <= 16) {
		dev_dbg(&spi->dev, "9 <= n <= 16 bits per word\n");
		chip->n_bytes = 2;
		chip->read = READING_U16;
		chip->write = WRITING_U16;
	} else {
		dev_dbg(&spi->dev, "17 <= n <= 32 bits per word\n");
		chip->n_bytes = 4;
		chip->read = READING_U32;
		chip->write = WRITING_U32;
	}

	/* Now Initialize all register settings required for this chip */
	chip->cr0 = 0;
	chip->cr1 = 0;
	chip->dmacr = 0;
	chip->cpsr = 0;
	if ((chip_info->com_mode == DMA_TRANSFER)
	    && ((pl022->master_info)->enable_dma)) {
		chip->enable_dma = true;
		dev_dbg(&spi->dev, "DMA mode set in controller state\n");
		SSP_WRITE_BITS(chip->dmacr, SSP_DMA_ENABLED,
			       SSP_DMACR_MASK_RXDMAE, 0);
		SSP_WRITE_BITS(chip->dmacr, SSP_DMA_ENABLED,
			       SSP_DMACR_MASK_TXDMAE, 1);
	} else {
		chip->enable_dma = false;
		dev_dbg(&spi->dev, "DMA mode NOT set in controller state\n");
		SSP_WRITE_BITS(chip->dmacr, SSP_DMA_DISABLED,
			       SSP_DMACR_MASK_RXDMAE, 0);
		SSP_WRITE_BITS(chip->dmacr, SSP_DMA_DISABLED,
			       SSP_DMACR_MASK_TXDMAE, 1);
	}

	chip->cpsr = clk_freq.cpsdvsr;

	/* Special setup for the ST micro extended control registers */
	if (pl022->vendor->extended_cr) {
		u32 etx;

		if (pl022->vendor->pl023) {
			/* These bits are only in the PL023 */
			SSP_WRITE_BITS(chip->cr1, chip_info->clkdelay,
				       SSP_CR1_MASK_FBCLKDEL_ST, 13);
		} else {
			/* These bits are in the PL022 but not PL023 */
			SSP_WRITE_BITS(chip->cr0, chip_info->duplex,
				       SSP_CR0_MASK_HALFDUP_ST, 5);
			SSP_WRITE_BITS(chip->cr0, chip_info->ctrl_len,
				       SSP_CR0_MASK_CSS_ST, 16);
			SSP_WRITE_BITS(chip->cr0, chip_info->iface,
				       SSP_CR0_MASK_FRF_ST, 21);
			SSP_WRITE_BITS(chip->cr1, chip_info->wait_state,
				       SSP_CR1_MASK_MWAIT_ST, 6);
		}
		SSP_WRITE_BITS(chip->cr0, bits - 1,
			       SSP_CR0_MASK_DSS_ST, 0);

		if (spi->mode & SPI_LSB_FIRST) {
			tmp = SSP_RX_LSB;
			etx = SSP_TX_LSB;
		} else {
			tmp = SSP_RX_MSB;
			etx = SSP_TX_MSB;
		}
		SSP_WRITE_BITS(chip->cr1, tmp, SSP_CR1_MASK_RENDN_ST, 4);
		SSP_WRITE_BITS(chip->cr1, etx, SSP_CR1_MASK_TENDN_ST, 5);
		SSP_WRITE_BITS(chip->cr1, chip_info->rx_lev_trig,
			       SSP_CR1_MASK_RXIFLSEL_ST, 7);
		SSP_WRITE_BITS(chip->cr1, chip_info->tx_lev_trig,
			       SSP_CR1_MASK_TXIFLSEL_ST, 10);
	} else {
		SSP_WRITE_BITS(chip->cr0, bits - 1,
			       SSP_CR0_MASK_DSS, 0);
		SSP_WRITE_BITS(chip->cr0, chip_info->iface,
			       SSP_CR0_MASK_FRF, 4);
	}

	/* Stuff that is common for all versions */
	if (spi->mode & SPI_CPOL)
		tmp = SSP_CLK_POL_IDLE_HIGH;
	else
		tmp = SSP_CLK_POL_IDLE_LOW;
	SSP_WRITE_BITS(chip->cr0, tmp, SSP_CR0_MASK_SPO, 6);

	if (spi->mode & SPI_CPHA)
		tmp = SSP_CLK_SECOND_EDGE;
	else
		tmp = SSP_CLK_FIRST_EDGE;
	SSP_WRITE_BITS(chip->cr0, tmp, SSP_CR0_MASK_SPH, 7);

	SSP_WRITE_BITS(chip->cr0, clk_freq.scr, SSP_CR0_MASK_SCR, 8);
	/* Loopback is available on all versions except PL023 */
	if (pl022->vendor->loopback) {
		if (spi->mode & SPI_LOOP)
			tmp = LOOPBACK_ENABLED;
		else
			tmp = LOOPBACK_DISABLED;
		SSP_WRITE_BITS(chip->cr1, tmp, SSP_CR1_MASK_LBM, 0);
	}
	SSP_WRITE_BITS(chip->cr1, SSP_DISABLED, SSP_CR1_MASK_SSE, 1);
	SSP_WRITE_BITS(chip->cr1, chip_info->hierarchy, SSP_CR1_MASK_MS, 2);
	SSP_WRITE_BITS(chip->cr1, chip_info->slave_tx_disable, SSP_CR1_MASK_SOD,
		       3);

	/* Save controller_state */
	spi_set_ctldata(spi, chip);
	return status;
 err_config_params:
	spi_set_ctldata(spi, NULL);
	kfree(chip);
	return status;
}

/**
 * pl022_cleanup - cleanup function registered to SPI master framework
 * @spi: spi device which is requesting cleanup
 *
 * This function is registered to the SPI framework for this SPI master
 * controller. It will free the runtime state of chip.
 */
static void pl022_cleanup(struct spi_device *spi)
{
	struct chip_data *chip = spi_get_ctldata(spi);

	spi_set_ctldata(spi, NULL);
	kfree(chip);
}

static int __devinit
pl022_probe(struct amba_device *adev, const struct amba_id *id)
{
	struct device *dev = &adev->dev;
	struct pl022_ssp_controller *platform_info = adev->dev.platform_data;
	struct spi_master *master;
	struct pl022 *pl022 = NULL;	/*Data for this driver */
	int status = 0;

	dev_info(&adev->dev,
		 "ARM PL022 driver, device ID: 0x%08x\n", adev->periphid);
	if (platform_info == NULL) {
		dev_err(&adev->dev, "probe - no platform data supplied\n");
		status = -ENODEV;
		goto err_no_pdata;
	}

	/* Allocate master with space for data */
	master = spi_alloc_master(dev, sizeof(struct pl022));
	if (master == NULL) {
		dev_err(&adev->dev, "probe - cannot alloc SPI master\n");
		status = -ENOMEM;
		goto err_no_master;
	}

	pl022 = spi_master_get_devdata(master);
	pl022->master = master;
	pl022->master_info = platform_info;
	pl022->adev = adev;
	pl022->vendor = id->data;

	/*
	 * Bus Number Which has been Assigned to this SSP controller
	 * on this board
	 */
	master->bus_num = platform_info->bus_id;
	master->num_chipselect = platform_info->num_chipselect;
	master->cleanup = pl022_cleanup;
	master->setup = pl022_setup;
	master->prepare_transfer_hardware = pl022_prepare_transfer_hardware;
	master->transfer_one_message = pl022_transfer_one_message;
	master->unprepare_transfer_hardware = pl022_unprepare_transfer_hardware;
	master->rt = platform_info->rt;

	/*
	 * Supports mode 0-3, loopback, and active low CS. Transfers are
	 * always MS bit first on the original pl022.
	 */
	master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH | SPI_LOOP;
	if (pl022->vendor->extended_cr)
		master->mode_bits |= SPI_LSB_FIRST;

	dev_dbg(&adev->dev, "BUSNO: %d\n", master->bus_num);

	status = amba_request_regions(adev, NULL);
	if (status)
		goto err_no_ioregion;

	pl022->phybase = adev->res.start;
	pl022->virtbase = ioremap(adev->res.start, resource_size(&adev->res));
	if (pl022->virtbase == NULL) {
		status = -ENOMEM;
		goto err_no_ioremap;
	}
	printk(KERN_INFO "pl022: mapped registers from 0x%08x to %p\n",
	       adev->res.start, pl022->virtbase);

	pm_runtime_enable(dev);
	pm_runtime_resume(dev);

	pl022->clk = clk_get(&adev->dev, NULL);
	if (IS_ERR(pl022->clk)) {
		status = PTR_ERR(pl022->clk);
		dev_err(&adev->dev, "could not retrieve SSP/SPI bus clock\n");
		goto err_no_clk;
	}

	status = clk_prepare(pl022->clk);
	if (status) {
		dev_err(&adev->dev, "could not prepare SSP/SPI bus clock\n");
		goto err_clk_prep;
	}

	status = clk_enable(pl022->clk);
	if (status) {
		dev_err(&adev->dev, "could not enable SSP/SPI bus clock\n");
		goto err_no_clk_en;
	}

	/* Initialize transfer pump */
	tasklet_init(&pl022->pump_transfers, pump_transfers,
		     (unsigned long)pl022);

	/* Disable SSP */
	writew((readw(SSP_CR1(pl022->virtbase)) & (~SSP_CR1_MASK_SSE)),
	       SSP_CR1(pl022->virtbase));
	load_ssp_default_config(pl022);

	status = request_irq(adev->irq[0], pl022_interrupt_handler, 0, "pl022",
			     pl022);
	if (status < 0) {
		dev_err(&adev->dev, "probe - cannot get IRQ (%d)\n", status);
		goto err_no_irq;
	}

	/* Get DMA channels */
	if (platform_info->enable_dma) {
		status = pl022_dma_probe(pl022);
		if (status != 0)
			platform_info->enable_dma = 0;
	}

	/* Register with the SPI framework */
	amba_set_drvdata(adev, pl022);
	status = spi_register_master(master);
	if (status != 0) {
		dev_err(&adev->dev,
			"probe - problem registering spi master\n");
		goto err_spi_register;
	}
	dev_dbg(dev, "probe succeeded\n");

	/* let runtime pm put suspend */
	if (platform_info->autosuspend_delay > 0) {
		dev_info(&adev->dev,
			 "will use autosuspend for runtime pm, delay %dms\n",
			 platform_info->autosuspend_delay);
		pm_runtime_set_autosuspend_delay(dev,
			platform_info->autosuspend_delay);
		pm_runtime_use_autosuspend(dev);
		pm_runtime_put_autosuspend(dev);
	} else {
		pm_runtime_put(dev);
	}
	return 0;

 err_spi_register:
	if (platform_info->enable_dma)
		pl022_dma_remove(pl022);

	free_irq(adev->irq[0], pl022);
 err_no_irq:
	clk_disable(pl022->clk);
 err_no_clk_en:
	clk_unprepare(pl022->clk);
 err_clk_prep:
	clk_put(pl022->clk);
 err_no_clk:
	iounmap(pl022->virtbase);
 err_no_ioremap:
	amba_release_regions(adev);
 err_no_ioregion:
	spi_master_put(master);
 err_no_master:
 err_no_pdata:
	return status;
}
2138 2146
2139 static int __devexit 2147 static int __devexit
2140 pl022_remove(struct amba_device *adev) 2148 pl022_remove(struct amba_device *adev)
2141 { 2149 {
2142 struct pl022 *pl022 = amba_get_drvdata(adev); 2150 struct pl022 *pl022 = amba_get_drvdata(adev);
2143 2151
2144 if (!pl022) 2152 if (!pl022)
2145 return 0; 2153 return 0;
2146 2154
2147 /* 2155 /*
2148 * undo pm_runtime_put() in probe. I assume that we're not 2156 * undo pm_runtime_put() in probe. I assume that we're not
2149 * accessing the primecell here. 2157 * accessing the primecell here.
2150 */ 2158 */
2151 pm_runtime_get_noresume(&adev->dev); 2159 pm_runtime_get_noresume(&adev->dev);
2152 2160
2153 load_ssp_default_config(pl022); 2161 load_ssp_default_config(pl022);
2154 if (pl022->master_info->enable_dma) 2162 if (pl022->master_info->enable_dma)
2155 pl022_dma_remove(pl022); 2163 pl022_dma_remove(pl022);
2156 2164
2157 free_irq(adev->irq[0], pl022); 2165 free_irq(adev->irq[0], pl022);
2158 clk_disable(pl022->clk); 2166 clk_disable(pl022->clk);
2159 clk_unprepare(pl022->clk); 2167 clk_unprepare(pl022->clk);
2160 clk_put(pl022->clk); 2168 clk_put(pl022->clk);
2169 pm_runtime_disable(&adev->dev);
2161 iounmap(pl022->virtbase); 2170 iounmap(pl022->virtbase);
2162 amba_release_regions(adev); 2171 amba_release_regions(adev);
2163 tasklet_disable(&pl022->pump_transfers); 2172 tasklet_disable(&pl022->pump_transfers);
2164 spi_unregister_master(pl022->master); 2173 spi_unregister_master(pl022->master);
2165 spi_master_put(pl022->master); 2174 spi_master_put(pl022->master);
2166 amba_set_drvdata(adev, NULL); 2175 amba_set_drvdata(adev, NULL);
2167 return 0; 2176 return 0;
2168 } 2177 }
2169 2178
2170 #ifdef CONFIG_SUSPEND 2179 #ifdef CONFIG_SUSPEND
2171 static int pl022_suspend(struct device *dev) 2180 static int pl022_suspend(struct device *dev)
2172 { 2181 {
2173 struct pl022 *pl022 = dev_get_drvdata(dev); 2182 struct pl022 *pl022 = dev_get_drvdata(dev);
2174 int ret; 2183 int ret;
2175 2184
2176 ret = spi_master_suspend(pl022->master); 2185 ret = spi_master_suspend(pl022->master);
2177 if (ret) { 2186 if (ret) {
2178 dev_warn(dev, "cannot suspend master\n"); 2187 dev_warn(dev, "cannot suspend master\n");
2179 return ret; 2188 return ret;
2180 } 2189 }
2181 2190
2182 dev_dbg(dev, "suspended\n"); 2191 dev_dbg(dev, "suspended\n");
2183 return 0; 2192 return 0;
2184 } 2193 }
2185 2194
2186 static int pl022_resume(struct device *dev) 2195 static int pl022_resume(struct device *dev)
2187 { 2196 {
2188 struct pl022 *pl022 = dev_get_drvdata(dev); 2197 struct pl022 *pl022 = dev_get_drvdata(dev);
2189 int ret; 2198 int ret;
2190 2199
2191 /* Start the queue running */ 2200 /* Start the queue running */
2192 ret = spi_master_resume(pl022->master); 2201 ret = spi_master_resume(pl022->master);
2193 if (ret) 2202 if (ret)
2194 dev_err(dev, "problem starting queue (%d)\n", ret); 2203 dev_err(dev, "problem starting queue (%d)\n", ret);
2195 else 2204 else
2196 dev_dbg(dev, "resumed\n"); 2205 dev_dbg(dev, "resumed\n");
2197 2206
2198 return ret; 2207 return ret;
2199 } 2208 }
2200 #endif /* CONFIG_SUSPEND */ 2209 #endif /* CONFIG_SUSPEND */
2201 2210
2202 #ifdef CONFIG_PM_RUNTIME 2211 #ifdef CONFIG_PM_RUNTIME
2203 static int pl022_runtime_suspend(struct device *dev) 2212 static int pl022_runtime_suspend(struct device *dev)
2204 { 2213 {
2205 struct pl022 *pl022 = dev_get_drvdata(dev); 2214 struct pl022 *pl022 = dev_get_drvdata(dev);
2206 2215
2207 clk_disable(pl022->clk); 2216 clk_disable(pl022->clk);
2208 2217
2209 return 0; 2218 return 0;
2210 } 2219 }
2211 2220
2212 static int pl022_runtime_resume(struct device *dev) 2221 static int pl022_runtime_resume(struct device *dev)
2213 { 2222 {
2214 struct pl022 *pl022 = dev_get_drvdata(dev); 2223 struct pl022 *pl022 = dev_get_drvdata(dev);
2215 2224
2216 clk_enable(pl022->clk); 2225 clk_enable(pl022->clk);
2217 2226
2218 return 0; 2227 return 0;
2219 } 2228 }
2220 #endif 2229 #endif
2221 2230
2222 static const struct dev_pm_ops pl022_dev_pm_ops = { 2231 static const struct dev_pm_ops pl022_dev_pm_ops = {
2223 SET_SYSTEM_SLEEP_PM_OPS(pl022_suspend, pl022_resume) 2232 SET_SYSTEM_SLEEP_PM_OPS(pl022_suspend, pl022_resume)
2224 SET_RUNTIME_PM_OPS(pl022_runtime_suspend, pl022_runtime_resume, NULL) 2233 SET_RUNTIME_PM_OPS(pl022_runtime_suspend, pl022_runtime_resume, NULL)
2225 }; 2234 };
2226 2235
2227 static struct vendor_data vendor_arm = { 2236 static struct vendor_data vendor_arm = {
2228 .fifodepth = 8, 2237 .fifodepth = 8,
2229 .max_bpw = 16, 2238 .max_bpw = 16,
2230 .unidir = false, 2239 .unidir = false,
2231 .extended_cr = false, 2240 .extended_cr = false,
2232 .pl023 = false, 2241 .pl023 = false,
2233 .loopback = true, 2242 .loopback = true,
2234 }; 2243 };
2235 2244
2236 static struct vendor_data vendor_st = { 2245 static struct vendor_data vendor_st = {
2237 .fifodepth = 32, 2246 .fifodepth = 32,
2238 .max_bpw = 32, 2247 .max_bpw = 32,
2239 .unidir = false, 2248 .unidir = false,
2240 .extended_cr = true, 2249 .extended_cr = true,
2241 .pl023 = false, 2250 .pl023 = false,
2242 .loopback = true, 2251 .loopback = true,
2243 }; 2252 };
2244 2253
2245 static struct vendor_data vendor_st_pl023 = { 2254 static struct vendor_data vendor_st_pl023 = {
2246 .fifodepth = 32, 2255 .fifodepth = 32,
2247 .max_bpw = 32, 2256 .max_bpw = 32,
2248 .unidir = false, 2257 .unidir = false,
2249 .extended_cr = true, 2258 .extended_cr = true,
2250 .pl023 = true, 2259 .pl023 = true,
2251 .loopback = false, 2260 .loopback = false,
2252 }; 2261 };
2253 2262
2254 static struct vendor_data vendor_db5500_pl023 = {
2255 .fifodepth = 32,
2256 .max_bpw = 32,
2257 .unidir = false,
2258 .extended_cr = true,
2259 .pl023 = true,
2260 .loopback = true,
2261 };
2262
2263 static struct amba_id pl022_ids[] = { 2263 static struct amba_id pl022_ids[] = {
2264 { 2264 {
2265 /* 2265 /*
2266 * ARM PL022 variant, this has a 16bit wide 2266 * ARM PL022 variant, this has a 16bit wide
2267 * and 8 locations deep TX/RX FIFO 2267 * and 8 locations deep TX/RX FIFO
2268 */ 2268 */
2269 .id = 0x00041022, 2269 .id = 0x00041022,
2270 .mask = 0x000fffff, 2270 .mask = 0x000fffff,
2271 .data = &vendor_arm, 2271 .data = &vendor_arm,
2272 }, 2272 },
2273 { 2273 {
2274 /* 2274 /*
2275 * ST Micro derivative, this has a 32bit wide 2275 * ST Micro derivative, this has a 32bit wide
2276 * and 32 locations deep TX/RX FIFO 2276 * and 32 locations deep TX/RX FIFO
2277 */ 2277 */
2278 .id = 0x01080022, 2278 .id = 0x01080022,
2279 .mask = 0xffffffff, 2279 .mask = 0xffffffff,
2280 .data = &vendor_st, 2280 .data = &vendor_st,
2281 }, 2281 },
2282 { 2282 {
2283 /* 2283 /*
2284 * ST-Ericsson derivative "PL023" (this is not 2284 * ST-Ericsson derivative "PL023" (this is not
2285 * an official ARM number), this is a PL022 SSP block 2285 * an official ARM number), this is a PL022 SSP block
2286 * stripped to SPI mode only, it has a 32bit wide 2286 * stripped to SPI mode only, it has a 32bit wide
2287 * and 32 locations deep TX/RX FIFO but no extended 2287 * and 32 locations deep TX/RX FIFO but no extended
2288 * CR0/CR1 register 2288 * CR0/CR1 register
2289 */ 2289 */
2290 .id = 0x00080023, 2290 .id = 0x00080023,
2291 .mask = 0xffffffff, 2291 .mask = 0xffffffff,
2292 .data = &vendor_st_pl023, 2292 .data = &vendor_st_pl023,
2293 },
2294 {
2295 .id = 0x10080023,
2296 .mask = 0xffffffff,
2297 .data = &vendor_db5500_pl023,
2298 }, 2293 },
2299 { 0, 0 }, 2294 { 0, 0 },
2300 }; 2295 };
2301 2296
2302 MODULE_DEVICE_TABLE(amba, pl022_ids); 2297 MODULE_DEVICE_TABLE(amba, pl022_ids);
2303 2298
2304 static struct amba_driver pl022_driver = { 2299 static struct amba_driver pl022_driver = {
2305 .drv = { 2300 .drv = {
2306 .name = "ssp-pl022", 2301 .name = "ssp-pl022",
2307 .pm = &pl022_dev_pm_ops, 2302 .pm = &pl022_dev_pm_ops,
2308 }, 2303 },
2309 .id_table = pl022_ids, 2304 .id_table = pl022_ids,
2310 .probe = pl022_probe, 2305 .probe = pl022_probe,
2311 .remove = __devexit_p(pl022_remove), 2306 .remove = __devexit_p(pl022_remove),
2312 }; 2307 };
2313 2308
2314 static int __init pl022_init(void) 2309 static int __init pl022_init(void)
2315 { 2310 {
2316 return amba_driver_register(&pl022_driver); 2311 return amba_driver_register(&pl022_driver);
2317 } 2312 }
2318 subsys_initcall(pl022_init); 2313 subsys_initcall(pl022_init);
2319 2314
2320 static void __exit pl022_exit(void) 2315 static void __exit pl022_exit(void)
drivers/spi/spi-tegra.c
1 /* 1 /*
2 * Driver for Nvidia TEGRA spi controller. 2 * Driver for Nvidia TEGRA spi controller.
3 * 3 *
4 * Copyright (C) 2010 Google, Inc. 4 * Copyright (C) 2010 Google, Inc.
5 * 5 *
6 * Author: 6 * Author:
7 * Erik Gilling <konkers@android.com> 7 * Erik Gilling <konkers@android.com>
8 * 8 *
9 * This software is licensed under the terms of the GNU General Public 9 * This software is licensed under the terms of the GNU General Public
10 * License version 2, as published by the Free Software Foundation, and 10 * License version 2, as published by the Free Software Foundation, and
11 * may be copied, distributed, and modified under those terms. 11 * may be copied, distributed, and modified under those terms.
12 * 12 *
13 * This program is distributed in the hope that it will be useful, 13 * This program is distributed in the hope that it will be useful,
14 * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 * but WITHOUT ANY WARRANTY; without even the implied warranty of
15 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 * GNU General Public License for more details. 16 * GNU General Public License for more details.
17 * 17 *
18 */ 18 */
19 19
20 #include <linux/kernel.h> 20 #include <linux/kernel.h>
21 #include <linux/module.h> 21 #include <linux/module.h>
22 #include <linux/init.h> 22 #include <linux/init.h>
23 #include <linux/err.h> 23 #include <linux/err.h>
24 #include <linux/platform_device.h> 24 #include <linux/platform_device.h>
25 #include <linux/io.h> 25 #include <linux/io.h>
26 #include <linux/dma-mapping.h> 26 #include <linux/dma-mapping.h>
27 #include <linux/dmapool.h> 27 #include <linux/dmapool.h>
28 #include <linux/clk.h> 28 #include <linux/clk.h>
29 #include <linux/interrupt.h> 29 #include <linux/interrupt.h>
30 #include <linux/delay.h> 30 #include <linux/delay.h>
31 31
32 #include <linux/spi/spi.h> 32 #include <linux/spi/spi.h>
33 #include <linux/dmaengine.h>
33 34
34 #include <mach/dma.h> 35 #include <mach/dma.h>
35 36
36 #define SLINK_COMMAND 0x000 37 #define SLINK_COMMAND 0x000
37 #define SLINK_BIT_LENGTH(x) (((x) & 0x1f) << 0) 38 #define SLINK_BIT_LENGTH(x) (((x) & 0x1f) << 0)
38 #define SLINK_WORD_SIZE(x) (((x) & 0x1f) << 5) 39 #define SLINK_WORD_SIZE(x) (((x) & 0x1f) << 5)
39 #define SLINK_BOTH_EN (1 << 10) 40 #define SLINK_BOTH_EN (1 << 10)
40 #define SLINK_CS_SW (1 << 11) 41 #define SLINK_CS_SW (1 << 11)
41 #define SLINK_CS_VALUE (1 << 12) 42 #define SLINK_CS_VALUE (1 << 12)
42 #define SLINK_CS_POLARITY (1 << 13) 43 #define SLINK_CS_POLARITY (1 << 13)
43 #define SLINK_IDLE_SDA_DRIVE_LOW (0 << 16) 44 #define SLINK_IDLE_SDA_DRIVE_LOW (0 << 16)
44 #define SLINK_IDLE_SDA_DRIVE_HIGH (1 << 16) 45 #define SLINK_IDLE_SDA_DRIVE_HIGH (1 << 16)
45 #define SLINK_IDLE_SDA_PULL_LOW (2 << 16) 46 #define SLINK_IDLE_SDA_PULL_LOW (2 << 16)
46 #define SLINK_IDLE_SDA_PULL_HIGH (3 << 16) 47 #define SLINK_IDLE_SDA_PULL_HIGH (3 << 16)
47 #define SLINK_IDLE_SDA_MASK (3 << 16) 48 #define SLINK_IDLE_SDA_MASK (3 << 16)
48 #define SLINK_CS_POLARITY1 (1 << 20) 49 #define SLINK_CS_POLARITY1 (1 << 20)
49 #define SLINK_CK_SDA (1 << 21) 50 #define SLINK_CK_SDA (1 << 21)
50 #define SLINK_CS_POLARITY2 (1 << 22) 51 #define SLINK_CS_POLARITY2 (1 << 22)
51 #define SLINK_CS_POLARITY3 (1 << 23) 52 #define SLINK_CS_POLARITY3 (1 << 23)
52 #define SLINK_IDLE_SCLK_DRIVE_LOW (0 << 24) 53 #define SLINK_IDLE_SCLK_DRIVE_LOW (0 << 24)
53 #define SLINK_IDLE_SCLK_DRIVE_HIGH (1 << 24) 54 #define SLINK_IDLE_SCLK_DRIVE_HIGH (1 << 24)
54 #define SLINK_IDLE_SCLK_PULL_LOW (2 << 24) 55 #define SLINK_IDLE_SCLK_PULL_LOW (2 << 24)
55 #define SLINK_IDLE_SCLK_PULL_HIGH (3 << 24) 56 #define SLINK_IDLE_SCLK_PULL_HIGH (3 << 24)
56 #define SLINK_IDLE_SCLK_MASK (3 << 24) 57 #define SLINK_IDLE_SCLK_MASK (3 << 24)
57 #define SLINK_M_S (1 << 28) 58 #define SLINK_M_S (1 << 28)
58 #define SLINK_WAIT (1 << 29) 59 #define SLINK_WAIT (1 << 29)
59 #define SLINK_GO (1 << 30) 60 #define SLINK_GO (1 << 30)
60 #define SLINK_ENB (1 << 31) 61 #define SLINK_ENB (1 << 31)
61 62
62 #define SLINK_COMMAND2 0x004 63 #define SLINK_COMMAND2 0x004
63 #define SLINK_LSBFE (1 << 0) 64 #define SLINK_LSBFE (1 << 0)
64 #define SLINK_SSOE (1 << 1) 65 #define SLINK_SSOE (1 << 1)
65 #define SLINK_SPIE (1 << 4) 66 #define SLINK_SPIE (1 << 4)
66 #define SLINK_BIDIROE (1 << 6) 67 #define SLINK_BIDIROE (1 << 6)
67 #define SLINK_MODFEN (1 << 7) 68 #define SLINK_MODFEN (1 << 7)
68 #define SLINK_INT_SIZE(x) (((x) & 0x1f) << 8) 69 #define SLINK_INT_SIZE(x) (((x) & 0x1f) << 8)
69 #define SLINK_CS_ACTIVE_BETWEEN (1 << 17) 70 #define SLINK_CS_ACTIVE_BETWEEN (1 << 17)
70 #define SLINK_SS_EN_CS(x) (((x) & 0x3) << 18) 71 #define SLINK_SS_EN_CS(x) (((x) & 0x3) << 18)
71 #define SLINK_SS_SETUP(x) (((x) & 0x3) << 20) 72 #define SLINK_SS_SETUP(x) (((x) & 0x3) << 20)
72 #define SLINK_FIFO_REFILLS_0 (0 << 22) 73 #define SLINK_FIFO_REFILLS_0 (0 << 22)
73 #define SLINK_FIFO_REFILLS_1 (1 << 22) 74 #define SLINK_FIFO_REFILLS_1 (1 << 22)
74 #define SLINK_FIFO_REFILLS_2 (2 << 22) 75 #define SLINK_FIFO_REFILLS_2 (2 << 22)
75 #define SLINK_FIFO_REFILLS_3 (3 << 22) 76 #define SLINK_FIFO_REFILLS_3 (3 << 22)
76 #define SLINK_FIFO_REFILLS_MASK (3 << 22) 77 #define SLINK_FIFO_REFILLS_MASK (3 << 22)
77 #define SLINK_WAIT_PACK_INT(x) (((x) & 0x7) << 26) 78 #define SLINK_WAIT_PACK_INT(x) (((x) & 0x7) << 26)
78 #define SLINK_SPC0 (1 << 29) 79 #define SLINK_SPC0 (1 << 29)
79 #define SLINK_TXEN (1 << 30) 80 #define SLINK_TXEN (1 << 30)
80 #define SLINK_RXEN (1 << 31) 81 #define SLINK_RXEN (1 << 31)
81 82
82 #define SLINK_STATUS 0x008 83 #define SLINK_STATUS 0x008
83 #define SLINK_COUNT(val) (((val) >> 0) & 0x1f) 84 #define SLINK_COUNT(val) (((val) >> 0) & 0x1f)
84 #define SLINK_WORD(val) (((val) >> 5) & 0x1f) 85 #define SLINK_WORD(val) (((val) >> 5) & 0x1f)
85 #define SLINK_BLK_CNT(val) (((val) >> 0) & 0xffff) 86 #define SLINK_BLK_CNT(val) (((val) >> 0) & 0xffff)
86 #define SLINK_MODF (1 << 16) 87 #define SLINK_MODF (1 << 16)
87 #define SLINK_RX_UNF (1 << 18) 88 #define SLINK_RX_UNF (1 << 18)
88 #define SLINK_TX_OVF (1 << 19) 89 #define SLINK_TX_OVF (1 << 19)
89 #define SLINK_TX_FULL (1 << 20) 90 #define SLINK_TX_FULL (1 << 20)
90 #define SLINK_TX_EMPTY (1 << 21) 91 #define SLINK_TX_EMPTY (1 << 21)
91 #define SLINK_RX_FULL (1 << 22) 92 #define SLINK_RX_FULL (1 << 22)
92 #define SLINK_RX_EMPTY (1 << 23) 93 #define SLINK_RX_EMPTY (1 << 23)
93 #define SLINK_TX_UNF (1 << 24) 94 #define SLINK_TX_UNF (1 << 24)
94 #define SLINK_RX_OVF (1 << 25) 95 #define SLINK_RX_OVF (1 << 25)
95 #define SLINK_TX_FLUSH (1 << 26) 96 #define SLINK_TX_FLUSH (1 << 26)
96 #define SLINK_RX_FLUSH (1 << 27) 97 #define SLINK_RX_FLUSH (1 << 27)
97 #define SLINK_SCLK (1 << 28) 98 #define SLINK_SCLK (1 << 28)
98 #define SLINK_ERR (1 << 29) 99 #define SLINK_ERR (1 << 29)
99 #define SLINK_RDY (1 << 30) 100 #define SLINK_RDY (1 << 30)
100 #define SLINK_BSY (1 << 31) 101 #define SLINK_BSY (1 << 31)
101 102
102 #define SLINK_MAS_DATA 0x010 103 #define SLINK_MAS_DATA 0x010
103 #define SLINK_SLAVE_DATA 0x014 104 #define SLINK_SLAVE_DATA 0x014
104 105
105 #define SLINK_DMA_CTL 0x018 106 #define SLINK_DMA_CTL 0x018
106 #define SLINK_DMA_BLOCK_SIZE(x) (((x) & 0xffff) << 0) 107 #define SLINK_DMA_BLOCK_SIZE(x) (((x) & 0xffff) << 0)
107 #define SLINK_TX_TRIG_1 (0 << 16) 108 #define SLINK_TX_TRIG_1 (0 << 16)
108 #define SLINK_TX_TRIG_4 (1 << 16) 109 #define SLINK_TX_TRIG_4 (1 << 16)
109 #define SLINK_TX_TRIG_8 (2 << 16) 110 #define SLINK_TX_TRIG_8 (2 << 16)
110 #define SLINK_TX_TRIG_16 (3 << 16) 111 #define SLINK_TX_TRIG_16 (3 << 16)
111 #define SLINK_TX_TRIG_MASK (3 << 16) 112 #define SLINK_TX_TRIG_MASK (3 << 16)
112 #define SLINK_RX_TRIG_1 (0 << 18) 113 #define SLINK_RX_TRIG_1 (0 << 18)
113 #define SLINK_RX_TRIG_4 (1 << 18) 114 #define SLINK_RX_TRIG_4 (1 << 18)
114 #define SLINK_RX_TRIG_8 (2 << 18) 115 #define SLINK_RX_TRIG_8 (2 << 18)
115 #define SLINK_RX_TRIG_16 (3 << 18) 116 #define SLINK_RX_TRIG_16 (3 << 18)
116 #define SLINK_RX_TRIG_MASK (3 << 18) 117 #define SLINK_RX_TRIG_MASK (3 << 18)
117 #define SLINK_PACKED (1 << 20) 118 #define SLINK_PACKED (1 << 20)
118 #define SLINK_PACK_SIZE_4 (0 << 21) 119 #define SLINK_PACK_SIZE_4 (0 << 21)
119 #define SLINK_PACK_SIZE_8 (1 << 21) 120 #define SLINK_PACK_SIZE_8 (1 << 21)
120 #define SLINK_PACK_SIZE_16 (2 << 21) 121 #define SLINK_PACK_SIZE_16 (2 << 21)
121 #define SLINK_PACK_SIZE_32 (3 << 21) 122 #define SLINK_PACK_SIZE_32 (3 << 21)
122 #define SLINK_PACK_SIZE_MASK (3 << 21) 123 #define SLINK_PACK_SIZE_MASK (3 << 21)
123 #define SLINK_IE_TXC (1 << 26) 124 #define SLINK_IE_TXC (1 << 26)
124 #define SLINK_IE_RXC (1 << 27) 125 #define SLINK_IE_RXC (1 << 27)
125 #define SLINK_DMA_EN (1 << 31) 126 #define SLINK_DMA_EN (1 << 31)
126 127
127 #define SLINK_STATUS2 0x01c 128 #define SLINK_STATUS2 0x01c
128 #define SLINK_TX_FIFO_EMPTY_COUNT(val) (((val) & 0x3f) >> 0) 129 #define SLINK_TX_FIFO_EMPTY_COUNT(val) (((val) & 0x3f) >> 0)
129 #define SLINK_RX_FIFO_FULL_COUNT(val) (((val) & 0x3f) >> 16) 130 #define SLINK_RX_FIFO_FULL_COUNT(val) (((val) & 0x3f) >> 16)
130 131
131 #define SLINK_TX_FIFO 0x100 132 #define SLINK_TX_FIFO 0x100
132 #define SLINK_RX_FIFO 0x180 133 #define SLINK_RX_FIFO 0x180
133 134
134 static const unsigned long spi_tegra_req_sels[] = { 135 static const unsigned long spi_tegra_req_sels[] = {
135 TEGRA_DMA_REQ_SEL_SL2B1, 136 TEGRA_DMA_REQ_SEL_SL2B1,
136 TEGRA_DMA_REQ_SEL_SL2B2, 137 TEGRA_DMA_REQ_SEL_SL2B2,
137 TEGRA_DMA_REQ_SEL_SL2B3, 138 TEGRA_DMA_REQ_SEL_SL2B3,
138 TEGRA_DMA_REQ_SEL_SL2B4, 139 TEGRA_DMA_REQ_SEL_SL2B4,
139 }; 140 };
140 141
141 #define BB_LEN 32 142 #define BB_LEN 32
142 143
143 struct spi_tegra_data { 144 struct spi_tegra_data {
144 struct spi_master *master; 145 struct spi_master *master;
145 struct platform_device *pdev; 146 struct platform_device *pdev;
146 spinlock_t lock; 147 spinlock_t lock;
147 148
148 struct clk *clk; 149 struct clk *clk;
149 void __iomem *base; 150 void __iomem *base;
150 unsigned long phys; 151 unsigned long phys;
151 152
152 u32 cur_speed; 153 u32 cur_speed;
153 154
154 struct list_head queue; 155 struct list_head queue;
155 struct spi_transfer *cur; 156 struct spi_transfer *cur;
156 unsigned cur_pos; 157 unsigned cur_pos;
157 unsigned cur_len; 158 unsigned cur_len;
158 unsigned cur_bytes_per_word; 159 unsigned cur_bytes_per_word;
159 160
160 /* The tegra spi controller has a bug which causes the first word 161 /* The tegra spi controller has a bug which causes the first word
161 * in PIO transactions to be garbage. Since packed DMA transactions 162 * in PIO transactions to be garbage. Since packed DMA transactions
162 * require transfers to be 4 byte aligned, we need a bounce buffer 163 * require transfers to be 4 byte aligned, we need a bounce buffer
163 * for the generic case. 164 * for the generic case.
164 */ 165 */
166 int dma_req_len;
167 #if defined(CONFIG_TEGRA_SYSTEM_DMA)
165 struct tegra_dma_req rx_dma_req; 168 struct tegra_dma_req rx_dma_req;
166 struct tegra_dma_channel *rx_dma; 169 struct tegra_dma_channel *rx_dma;
170 #else
171 struct dma_chan *rx_dma;
172 struct dma_slave_config sconfig;
173 struct dma_async_tx_descriptor *rx_dma_desc;
174 dma_cookie_t rx_cookie;
175 #endif
167 u32 *rx_bb; 176 u32 *rx_bb;
168 dma_addr_t rx_bb_phys; 177 dma_addr_t rx_bb_phys;
169 }; 178 };
170 179
180 #if !defined(CONFIG_TEGRA_SYSTEM_DMA)
181 static void tegra_spi_rx_dma_complete(void *args);
182 #endif
171 183
172 static inline unsigned long spi_tegra_readl(struct spi_tegra_data *tspi, 184 static inline unsigned long spi_tegra_readl(struct spi_tegra_data *tspi,
173 unsigned long reg) 185 unsigned long reg)
174 { 186 {
175 return readl(tspi->base + reg); 187 return readl(tspi->base + reg);
176 } 188 }
177 189
178 static inline void spi_tegra_writel(struct spi_tegra_data *tspi, 190 static inline void spi_tegra_writel(struct spi_tegra_data *tspi,
179 unsigned long val, 191 unsigned long val,
180 unsigned long reg) 192 unsigned long reg)
181 { 193 {
182 writel(val, tspi->base + reg); 194 writel(val, tspi->base + reg);
183 } 195 }
184 196
185 static void spi_tegra_go(struct spi_tegra_data *tspi) 197 static void spi_tegra_go(struct spi_tegra_data *tspi)
186 { 198 {
187 unsigned long val; 199 unsigned long val;
188 200
189 wmb(); 201 wmb();
190 202
191 val = spi_tegra_readl(tspi, SLINK_DMA_CTL); 203 val = spi_tegra_readl(tspi, SLINK_DMA_CTL);
192 val &= ~SLINK_DMA_BLOCK_SIZE(~0) & ~SLINK_DMA_EN; 204 val &= ~SLINK_DMA_BLOCK_SIZE(~0) & ~SLINK_DMA_EN;
193 val |= SLINK_DMA_BLOCK_SIZE(tspi->rx_dma_req.size / 4 - 1); 205 val |= SLINK_DMA_BLOCK_SIZE(tspi->dma_req_len / 4 - 1);
194 spi_tegra_writel(tspi, val, SLINK_DMA_CTL); 206 spi_tegra_writel(tspi, val, SLINK_DMA_CTL);
195 207 #if defined(CONFIG_TEGRA_SYSTEM_DMA)
208 tspi->rx_dma_req.size = tspi->dma_req_len;
196 tegra_dma_enqueue_req(tspi->rx_dma, &tspi->rx_dma_req); 209 tegra_dma_enqueue_req(tspi->rx_dma, &tspi->rx_dma_req);
210 #else
211 tspi->rx_dma_desc = dmaengine_prep_slave_single(tspi->rx_dma,
212 tspi->rx_bb_phys, tspi->dma_req_len,
213 DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
214 if (!tspi->rx_dma_desc) {
215 dev_err(&tspi->pdev->dev, "dmaengine slave prep failed\n");
216 return;
217 }
218 tspi->rx_dma_desc->callback = tegra_spi_rx_dma_complete;
219 tspi->rx_dma_desc->callback_param = tspi;
220 tspi->rx_cookie = dmaengine_submit(tspi->rx_dma_desc);
221 dma_async_issue_pending(tspi->rx_dma);
222 #endif
197 223
198 val |= SLINK_DMA_EN; 224 val |= SLINK_DMA_EN;
199 spi_tegra_writel(tspi, val, SLINK_DMA_CTL); 225 spi_tegra_writel(tspi, val, SLINK_DMA_CTL);
200 } 226 }
201 227
202 static unsigned spi_tegra_fill_tx_fifo(struct spi_tegra_data *tspi, 228 static unsigned spi_tegra_fill_tx_fifo(struct spi_tegra_data *tspi,
203 struct spi_transfer *t) 229 struct spi_transfer *t)
204 { 230 {
205 unsigned len = min(t->len - tspi->cur_pos, BB_LEN * 231 unsigned len = min(t->len - tspi->cur_pos, BB_LEN *
206 tspi->cur_bytes_per_word); 232 tspi->cur_bytes_per_word);
207 u8 *tx_buf = (u8 *)t->tx_buf + tspi->cur_pos; 233 u8 *tx_buf = (u8 *)t->tx_buf + tspi->cur_pos;
208 int i, j; 234 int i, j;
209 unsigned long val; 235 unsigned long val;
210 236
211 val = spi_tegra_readl(tspi, SLINK_COMMAND); 237 val = spi_tegra_readl(tspi, SLINK_COMMAND);
212 val &= ~SLINK_WORD_SIZE(~0); 238 val &= ~SLINK_WORD_SIZE(~0);
213 val |= SLINK_WORD_SIZE(len / tspi->cur_bytes_per_word - 1); 239 val |= SLINK_WORD_SIZE(len / tspi->cur_bytes_per_word - 1);
214 spi_tegra_writel(tspi, val, SLINK_COMMAND); 240 spi_tegra_writel(tspi, val, SLINK_COMMAND);
215 241
216 for (i = 0; i < len; i += tspi->cur_bytes_per_word) { 242 for (i = 0; i < len; i += tspi->cur_bytes_per_word) {
217 val = 0; 243 val = 0;
218 for (j = 0; j < tspi->cur_bytes_per_word; j++) 244 for (j = 0; j < tspi->cur_bytes_per_word; j++)
219 val |= tx_buf[i + j] << j * 8; 245 val |= tx_buf[i + j] << j * 8;
220 246
221 spi_tegra_writel(tspi, val, SLINK_TX_FIFO); 247 spi_tegra_writel(tspi, val, SLINK_TX_FIFO);
222 } 248 }
223 249
224 tspi->rx_dma_req.size = len / tspi->cur_bytes_per_word * 4; 250 tspi->dma_req_len = len / tspi->cur_bytes_per_word * 4;
225 251
226 return len; 252 return len;
227 } 253 }
228 254
229 static unsigned spi_tegra_drain_rx_fifo(struct spi_tegra_data *tspi, 255 static unsigned spi_tegra_drain_rx_fifo(struct spi_tegra_data *tspi,
230 struct spi_transfer *t) 256 struct spi_transfer *t)
231 { 257 {
232 unsigned len = tspi->cur_len; 258 unsigned len = tspi->cur_len;
233 u8 *rx_buf = (u8 *)t->rx_buf + tspi->cur_pos; 259 u8 *rx_buf = (u8 *)t->rx_buf + tspi->cur_pos;
234 int i, j; 260 int i, j;
235 unsigned long val; 261 unsigned long val;
236 262
237 for (i = 0; i < len; i += tspi->cur_bytes_per_word) { 263 for (i = 0; i < len; i += tspi->cur_bytes_per_word) {
238 val = tspi->rx_bb[i / tspi->cur_bytes_per_word]; 264 val = tspi->rx_bb[i / tspi->cur_bytes_per_word];
239 for (j = 0; j < tspi->cur_bytes_per_word; j++) 265 for (j = 0; j < tspi->cur_bytes_per_word; j++)
240 rx_buf[i + j] = (val >> (j * 8)) & 0xff; 266 rx_buf[i + j] = (val >> (j * 8)) & 0xff;
241 } 267 }
242 268
243 return len; 269 return len;
244 } 270 }
245 271
246 static void spi_tegra_start_transfer(struct spi_device *spi, 272 static void spi_tegra_start_transfer(struct spi_device *spi,
247 struct spi_transfer *t) 273 struct spi_transfer *t)
248 { 274 {
249 struct spi_tegra_data *tspi = spi_master_get_devdata(spi->master); 275 struct spi_tegra_data *tspi = spi_master_get_devdata(spi->master);
250 u32 speed; 276 u32 speed;
251 u8 bits_per_word; 277 u8 bits_per_word;
252 unsigned long val; 278 unsigned long val;
253 279
254 speed = t->speed_hz ? t->speed_hz : spi->max_speed_hz; 280 speed = t->speed_hz ? t->speed_hz : spi->max_speed_hz;
255 bits_per_word = t->bits_per_word ? t->bits_per_word : 281 bits_per_word = t->bits_per_word ? t->bits_per_word :
256 spi->bits_per_word; 282 spi->bits_per_word;
257 283
258 tspi->cur_bytes_per_word = (bits_per_word - 1) / 8 + 1; 284 tspi->cur_bytes_per_word = (bits_per_word - 1) / 8 + 1;
259 285
260 if (speed != tspi->cur_speed) 286 if (speed != tspi->cur_speed)
261 clk_set_rate(tspi->clk, speed); 287 clk_set_rate(tspi->clk, speed);
262 288
263 if (tspi->cur_speed == 0) 289 if (tspi->cur_speed == 0)
264 clk_prepare_enable(tspi->clk); 290 clk_prepare_enable(tspi->clk);
265 291
266 tspi->cur_speed = speed; 292 tspi->cur_speed = speed;
267 293
268 val = spi_tegra_readl(tspi, SLINK_COMMAND2); 294 val = spi_tegra_readl(tspi, SLINK_COMMAND2);
269 val &= ~(SLINK_SS_EN_CS(~0) | SLINK_RXEN | SLINK_TXEN); 295 val &= ~(SLINK_SS_EN_CS(~0) | SLINK_RXEN | SLINK_TXEN);
270 if (t->rx_buf) 296 if (t->rx_buf)
271 val |= SLINK_RXEN; 297 val |= SLINK_RXEN;
272 if (t->tx_buf) 298 if (t->tx_buf)
273 val |= SLINK_TXEN; 299 val |= SLINK_TXEN;
274 val |= SLINK_SS_EN_CS(spi->chip_select); 300 val |= SLINK_SS_EN_CS(spi->chip_select);
275 val |= SLINK_SPIE; 301 val |= SLINK_SPIE;
276 spi_tegra_writel(tspi, val, SLINK_COMMAND2); 302 spi_tegra_writel(tspi, val, SLINK_COMMAND2);
277 303
278 val = spi_tegra_readl(tspi, SLINK_COMMAND); 304 val = spi_tegra_readl(tspi, SLINK_COMMAND);
279 val &= ~SLINK_BIT_LENGTH(~0); 305 val &= ~SLINK_BIT_LENGTH(~0);
280 val |= SLINK_BIT_LENGTH(bits_per_word - 1); 306 val |= SLINK_BIT_LENGTH(bits_per_word - 1);
281 307
282 /* FIXME: should probably control CS manually so that we can be sure 308 /* FIXME: should probably control CS manually so that we can be sure
283 * it does not go low between transfers and to support delay_usecs 309 * it does not go low between transfers and to support delay_usecs
284 * correctly. 310 * correctly.
285 */ 311 */
286 val &= ~SLINK_IDLE_SCLK_MASK & ~SLINK_CK_SDA & ~SLINK_CS_SW; 312 val &= ~SLINK_IDLE_SCLK_MASK & ~SLINK_CK_SDA & ~SLINK_CS_SW;
287 313
288 if (spi->mode & SPI_CPHA) 314 if (spi->mode & SPI_CPHA)
289 val |= SLINK_CK_SDA; 315 val |= SLINK_CK_SDA;
290 316
291 if (spi->mode & SPI_CPOL) 317 if (spi->mode & SPI_CPOL)
292 val |= SLINK_IDLE_SCLK_DRIVE_HIGH; 318 val |= SLINK_IDLE_SCLK_DRIVE_HIGH;
293 else 319 else
294 val |= SLINK_IDLE_SCLK_DRIVE_LOW; 320 val |= SLINK_IDLE_SCLK_DRIVE_LOW;
295 321
296 val |= SLINK_M_S; 322 val |= SLINK_M_S;
297 323
298 spi_tegra_writel(tspi, val, SLINK_COMMAND); 324 spi_tegra_writel(tspi, val, SLINK_COMMAND);
299 325
300 spi_tegra_writel(tspi, SLINK_RX_FLUSH | SLINK_TX_FLUSH, SLINK_STATUS); 326 spi_tegra_writel(tspi, SLINK_RX_FLUSH | SLINK_TX_FLUSH, SLINK_STATUS);
301 327
302 tspi->cur = t; 328 tspi->cur = t;
303 tspi->cur_pos = 0; 329 tspi->cur_pos = 0;
304 tspi->cur_len = spi_tegra_fill_tx_fifo(tspi, t); 330 tspi->cur_len = spi_tegra_fill_tx_fifo(tspi, t);
305 331
306 spi_tegra_go(tspi); 332 spi_tegra_go(tspi);
307 } 333 }
308 334
309 static void spi_tegra_start_message(struct spi_device *spi, 335 static void spi_tegra_start_message(struct spi_device *spi,
310 struct spi_message *m) 336 struct spi_message *m)
311 { 337 {
312 struct spi_transfer *t; 338 struct spi_transfer *t;
313 339
314 m->actual_length = 0; 340 m->actual_length = 0;
315 m->status = 0; 341 m->status = 0;
316 342
317 t = list_first_entry(&m->transfers, struct spi_transfer, transfer_list); 343 t = list_first_entry(&m->transfers, struct spi_transfer, transfer_list);
318 spi_tegra_start_transfer(spi, t); 344 spi_tegra_start_transfer(spi, t);
319 } 345 }
320 346
321 static void tegra_spi_rx_dma_complete(struct tegra_dma_req *req) 347 static void handle_spi_rx_dma_complete(struct spi_tegra_data *tspi)
322 { 348 {
323 struct spi_tegra_data *tspi = req->dev;
324 unsigned long flags; 349 unsigned long flags;
325 struct spi_message *m; 350 struct spi_message *m;
326 struct spi_device *spi; 351 struct spi_device *spi;
327 int timeout = 0; 352 int timeout = 0;
328 unsigned long val; 353 unsigned long val;
329 354
330 /* the SPI controller may come back with both the BSY and RDY bits 355 /* the SPI controller may come back with both the BSY and RDY bits
331 * set. In this case we need to wait for the BSY bit to clear so 356 * set. In this case we need to wait for the BSY bit to clear so
332 * that we are sure the DMA is finished. 1000 reads was empirically 357 * that we are sure the DMA is finished. 1000 reads was empirically
333 * determined to be long enough. 358 * determined to be long enough.
334 */ 359 */
335 while (timeout++ < 1000) { 360 while (timeout++ < 1000) {
336 if (!(spi_tegra_readl(tspi, SLINK_STATUS) & SLINK_BSY)) 361 if (!(spi_tegra_readl(tspi, SLINK_STATUS) & SLINK_BSY))
337 break; 362 break;
338 } 363 }
339 364
340 spin_lock_irqsave(&tspi->lock, flags); 365 spin_lock_irqsave(&tspi->lock, flags);
341 366
342 val = spi_tegra_readl(tspi, SLINK_STATUS); 367 val = spi_tegra_readl(tspi, SLINK_STATUS);
343 val |= SLINK_RDY; 368 val |= SLINK_RDY;
344 spi_tegra_writel(tspi, val, SLINK_STATUS); 369 spi_tegra_writel(tspi, val, SLINK_STATUS);
345 370
346 m = list_first_entry(&tspi->queue, struct spi_message, queue); 371 m = list_first_entry(&tspi->queue, struct spi_message, queue);
347 372
348 if (timeout >= 1000) 373 if (timeout >= 1000)
349 m->status = -EIO; 374 m->status = -EIO;
350 375
351 spi = m->state; 376 spi = m->state;
352 377
353 tspi->cur_pos += spi_tegra_drain_rx_fifo(tspi, tspi->cur); 378 tspi->cur_pos += spi_tegra_drain_rx_fifo(tspi, tspi->cur);
354 m->actual_length += tspi->cur_pos; 379 m->actual_length += tspi->cur_pos;
355 380
356 if (tspi->cur_pos < tspi->cur->len) { 381 if (tspi->cur_pos < tspi->cur->len) {
357 tspi->cur_len = spi_tegra_fill_tx_fifo(tspi, tspi->cur); 382 tspi->cur_len = spi_tegra_fill_tx_fifo(tspi, tspi->cur);
358 spi_tegra_go(tspi); 383 spi_tegra_go(tspi);
359 } else if (!list_is_last(&tspi->cur->transfer_list, 384 } else if (!list_is_last(&tspi->cur->transfer_list,
360 &m->transfers)) { 385 &m->transfers)) {
361 tspi->cur = list_first_entry(&tspi->cur->transfer_list, 386 tspi->cur = list_first_entry(&tspi->cur->transfer_list,
362 struct spi_transfer, 387 struct spi_transfer,
363 transfer_list); 388 transfer_list);
364 spi_tegra_start_transfer(spi, tspi->cur); 389 spi_tegra_start_transfer(spi, tspi->cur);
365 } else { 390 } else {
366 list_del(&m->queue); 391 list_del(&m->queue);
367 392
393 m->complete(m->context);
394
395 if (!list_empty(&tspi->queue)) {
396 m = list_first_entry(&tspi->queue, struct spi_message,
397 queue);
398 spi = m->state;
399 spi_tegra_start_message(spi, m);
400 } else {
401 clk_disable_unprepare(tspi->clk);
402 tspi->cur_speed = 0;
403 }
404 }
405
406 spin_unlock_irqrestore(&tspi->lock, flags);
407 }
408 #if defined(CONFIG_TEGRA_SYSTEM_DMA)
409 static void tegra_spi_rx_dma_complete(struct tegra_dma_req *req)
410 {
411 struct spi_tegra_data *tspi = req->dev;
412 handle_spi_rx_dma_complete(tspi);
413 }
414 #else
415 static void tegra_spi_rx_dma_complete(void *args)
416 {
417 struct spi_tegra_data *tspi = args;
418 handle_spi_rx_dma_complete(tspi);
419 }
420 #endif
421
422 static int spi_tegra_setup(struct spi_device *spi)
423 {
424 struct spi_tegra_data *tspi = spi_master_get_devdata(spi->master);
425 unsigned long cs_bit;
426 unsigned long val;
427 unsigned long flags;
428
429 dev_dbg(&spi->dev, "setup %d bpw, %scpol, %scpha, %dHz\n",
430 spi->bits_per_word,
431 spi->mode & SPI_CPOL ? "" : "~",
432 spi->mode & SPI_CPHA ? "" : "~",
433 spi->max_speed_hz);
434
435
436 switch (spi->chip_select) {
437 case 0:
438 cs_bit = SLINK_CS_POLARITY;
439 break;
440
441 case 1:
442 cs_bit = SLINK_CS_POLARITY1;
443 break;
444
445 case 2:
446 cs_bit = SLINK_CS_POLARITY2;
447 break;
448
449 case 4:
450 cs_bit = SLINK_CS_POLARITY3;
451 break;
452
453 default:
454 return -EINVAL;
455 }
456
457 spin_lock_irqsave(&tspi->lock, flags);
458
459 val = spi_tegra_readl(tspi, SLINK_COMMAND);
460 if (spi->mode & SPI_CS_HIGH)
461 val |= cs_bit;
462 else
463 val &= ~cs_bit;
464 spi_tegra_writel(tspi, val, SLINK_COMMAND);
465
466 spin_unlock_irqrestore(&tspi->lock, flags);
467
468 return 0;
469 }
470
471 static int spi_tegra_transfer(struct spi_device *spi, struct spi_message *m)
472 {
473 struct spi_tegra_data *tspi = spi_master_get_devdata(spi->master);
474 struct spi_transfer *t;
475 unsigned long flags;
476 int was_empty;
477
478 if (list_empty(&m->transfers) || !m->complete)
479 return -EINVAL;
480
481 list_for_each_entry(t, &m->transfers, transfer_list) {
482 if (t->bits_per_word < 0 || t->bits_per_word > 32)
483 return -EINVAL;
484
485 if (t->len == 0)
486 return -EINVAL;
487
488 if (!t->rx_buf && !t->tx_buf)
489 return -EINVAL;
490 }
491
492 m->state = spi;
493
494 spin_lock_irqsave(&tspi->lock, flags);
495 was_empty = list_empty(&tspi->queue);
496 list_add_tail(&m->queue, &tspi->queue);
497
498 if (was_empty)
499 spi_tegra_start_message(spi, m);
500
501 spin_unlock_irqrestore(&tspi->lock, flags);
502
503 return 0;
504 }
505
506 static int __devinit spi_tegra_probe(struct platform_device *pdev)
507 {
508 struct spi_master *master;
509 struct spi_tegra_data *tspi;
510 struct resource *r;
511 int ret;
512 #if !defined(CONFIG_TEGRA_SYSTEM_DMA)
513 dma_cap_mask_t mask;
514 #endif
515
516 master = spi_alloc_master(&pdev->dev, sizeof *tspi);
517 if (master == NULL) {
518 dev_err(&pdev->dev, "master allocation failed\n");
519 return -ENOMEM;
520 }
521
522 /* the spi->mode bits understood by this driver: */
523 master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH;
524
525 master->bus_num = pdev->id;
526
527 master->setup = spi_tegra_setup;
528 master->transfer = spi_tegra_transfer;
529 master->num_chipselect = 4;
530
531 dev_set_drvdata(&pdev->dev, master);
532 tspi = spi_master_get_devdata(master);
533 tspi->master = master;
534 tspi->pdev = pdev;
535 spin_lock_init(&tspi->lock);
536
537 r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
538 if (r == NULL) {
539 ret = -ENODEV;
540 goto err0;
541 }
542
543 if (!request_mem_region(r->start, resource_size(r),
544 dev_name(&pdev->dev))) {
545 ret = -EBUSY;
546 goto err0;
547 }
548
549 tspi->phys = r->start;
550 tspi->base = ioremap(r->start, resource_size(r));
551 if (!tspi->base) {
552 dev_err(&pdev->dev, "can't ioremap iomem\n");
553 ret = -ENOMEM;
554 goto err1;
555 }
556
557 tspi->clk = clk_get(&pdev->dev, NULL);
558 if (IS_ERR(tspi->clk)) {
559 dev_err(&pdev->dev, "can not get clock\n");
560 ret = PTR_ERR(tspi->clk);
561 goto err2;
562 }
563
564 INIT_LIST_HEAD(&tspi->queue);
565
566 #if defined(CONFIG_TEGRA_SYSTEM_DMA)
567 tspi->rx_dma = tegra_dma_allocate_channel(TEGRA_DMA_MODE_ONESHOT);
568 if (!tspi->rx_dma) {
569 dev_err(&pdev->dev, "can not allocate rx dma channel\n");
570 ret = -ENODEV;
571 goto err3;
572 }
573 #else
574 dma_cap_zero(mask);
575 dma_cap_set(DMA_SLAVE, mask);
576 tspi->rx_dma = dma_request_channel(mask, NULL, NULL);
577 if (!tspi->rx_dma) {
578 dev_err(&pdev->dev, "can not allocate rx dma channel\n");
579 ret = -ENODEV;
580 goto err3;
581 }
582
583 #endif
584
585 tspi->rx_bb = dma_alloc_coherent(&pdev->dev, sizeof(u32) * BB_LEN,
586 &tspi->rx_bb_phys, GFP_KERNEL);
587 if (!tspi->rx_bb) {
588 dev_err(&pdev->dev, "can not allocate rx bounce buffer\n");
589 ret = -ENOMEM;
590 goto err4;
591 }
592
593 #if defined(CONFIG_TEGRA_SYSTEM_DMA)
594 tspi->rx_dma_req.complete = tegra_spi_rx_dma_complete;
595 tspi->rx_dma_req.to_memory = 1;
596 tspi->rx_dma_req.dest_addr = tspi->rx_bb_phys;
597 tspi->rx_dma_req.dest_bus_width = 32;
598 tspi->rx_dma_req.source_addr = tspi->phys + SLINK_RX_FIFO;
599 tspi->rx_dma_req.source_bus_width = 32;
600 tspi->rx_dma_req.source_wrap = 4;
601 tspi->rx_dma_req.req_sel = spi_tegra_req_sels[pdev->id];
602 tspi->rx_dma_req.dev = tspi;
603 #else
604 /* Dmaengine Dma slave config */
605 tspi->sconfig.src_addr = tspi->phys + SLINK_RX_FIFO;
606 tspi->sconfig.dst_addr = tspi->phys + SLINK_RX_FIFO;
607 tspi->sconfig.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
608 tspi->sconfig.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
609 tspi->sconfig.slave_id = spi_tegra_req_sels[pdev->id];
610 tspi->sconfig.src_maxburst = 1;
611 tspi->sconfig.dst_maxburst = 1;
612 ret = dmaengine_device_control(tspi->rx_dma,
613 DMA_SLAVE_CONFIG, (unsigned long) &tspi->sconfig);
614 if (ret < 0) {
615 dev_err(&pdev->dev, "can not do slave configure for dma %d\n",
616 ret);
617 goto err4;
618 }
619 #endif
620
621 master->dev.of_node = pdev->dev.of_node;
622 ret = spi_register_master(master);
623
624 if (ret < 0)
625 goto err5;
626
627 return ret;
628
629 err5:
630 dma_free_coherent(&pdev->dev, sizeof(u32) * BB_LEN,
631 tspi->rx_bb, tspi->rx_bb_phys);
632 err4:
633 #if defined(CONFIG_TEGRA_SYSTEM_DMA)
634 tegra_dma_free_channel(tspi->rx_dma);
635 #else
636 dma_release_channel(tspi->rx_dma);
637 #endif
638 err3:
639 clk_put(tspi->clk);
640 err2:
641 iounmap(tspi->base);
642 err1:
643 release_mem_region(r->start, resource_size(r));
644 err0:
645 spi_master_put(master);
646 return ret;
647 }
648
649 static int __devexit spi_tegra_remove(struct platform_device *pdev)
650 {
651 struct spi_master *master;
652 struct spi_tegra_data *tspi;
653 struct resource *r;
654
655 master = dev_get_drvdata(&pdev->dev);
656 tspi = spi_master_get_devdata(master);
657
658 spi_unregister_master(master);
659 #if defined(CONFIG_TEGRA_SYSTEM_DMA)
660 tegra_dma_free_channel(tspi->rx_dma);
661 #else
662 dma_release_channel(tspi->rx_dma);
663 #endif
664
665 dma_free_coherent(&pdev->dev, sizeof(u32) * BB_LEN,
666 tspi->rx_bb, tspi->rx_bb_phys);
667
668 clk_put(tspi->clk);
669 iounmap(tspi->base);
670
671 r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
672 release_mem_region(r->start, resource_size(r));
673
674 return 0;
675 }
676
677 MODULE_ALIAS("platform:spi_tegra");
678
679 #ifdef CONFIG_OF
680 static struct of_device_id spi_tegra_of_match_table[] __devinitdata = {
681 { .compatible = "nvidia,tegra20-spi", },
682 {}
683 };
684 MODULE_DEVICE_TABLE(of, spi_tegra_of_match_table);
685 #else /* CONFIG_OF */
686 #define spi_tegra_of_match_table NULL
687 #endif /* CONFIG_OF */
688
689 static struct platform_driver spi_tegra_driver = {
690 .driver = {
691 .name = "spi_tegra",
692 .owner = THIS_MODULE,
693 .of_match_table = spi_tegra_of_match_table,
694 },
695 .probe = spi_tegra_probe,
696 .remove = __devexit_p(spi_tegra_remove),
697 };
698 module_platform_driver(spi_tegra_driver);
699
700 MODULE_LICENSE("GPL");
drivers/spi/spi-xcomm.c
File was created 1 /*
2 * Analog Devices AD-FMCOMMS1-EBZ board I2C-SPI bridge driver
3 *
4 * Copyright 2012 Analog Devices Inc.
5 * Author: Lars-Peter Clausen <lars@metafoo.de>
6 *
7 * Licensed under the GPL-2 or later.
8 */
9
10 #include <linux/kernel.h>
11 #include <linux/init.h>
12 #include <linux/module.h>
13 #include <linux/delay.h>
14 #include <linux/i2c.h>
15 #include <linux/spi/spi.h>
16 #include <asm/unaligned.h>
17
18 #define SPI_XCOMM_SETTINGS_LEN_OFFSET 10
19 #define SPI_XCOMM_SETTINGS_3WIRE BIT(6)
20 #define SPI_XCOMM_SETTINGS_CS_HIGH BIT(5)
21 #define SPI_XCOMM_SETTINGS_SAMPLE_END BIT(4)
22 #define SPI_XCOMM_SETTINGS_CPHA BIT(3)
23 #define SPI_XCOMM_SETTINGS_CPOL BIT(2)
24 #define SPI_XCOMM_SETTINGS_CLOCK_DIV_MASK 0x3
25 #define SPI_XCOMM_SETTINGS_CLOCK_DIV_64 0x2
26 #define SPI_XCOMM_SETTINGS_CLOCK_DIV_16 0x1
27 #define SPI_XCOMM_SETTINGS_CLOCK_DIV_4 0x0
28
29 #define SPI_XCOMM_CMD_UPDATE_CONFIG 0x03
30 #define SPI_XCOMM_CMD_WRITE 0x04
31
32 #define SPI_XCOMM_CLOCK 48000000
33
34 struct spi_xcomm {
35 struct i2c_client *i2c;
36
37 uint16_t settings;
38 uint16_t chipselect;
39
40 unsigned int current_speed;
41
42 uint8_t buf[63];
43 };
44
45 static int spi_xcomm_sync_config(struct spi_xcomm *spi_xcomm, unsigned int len)
46 {
47 uint16_t settings;
48 uint8_t *buf = spi_xcomm->buf;
49
50 settings = spi_xcomm->settings;
51 settings |= len << SPI_XCOMM_SETTINGS_LEN_OFFSET;
52
53 buf[0] = SPI_XCOMM_CMD_UPDATE_CONFIG;
54 put_unaligned_be16(settings, &buf[1]);
55 put_unaligned_be16(spi_xcomm->chipselect, &buf[3]);
56
57 return i2c_master_send(spi_xcomm->i2c, buf, 5);
58 }
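For reference, the five-byte UPDATE_CONFIG packet built above (command byte, big-endian settings word, big-endian chip-select mask) can be modelled in plain userspace C. This is a minimal sketch, not the driver code: `put_be16` and `build_config_packet` are hypothetical stand-ins for `put_unaligned_be16` and `spi_xcomm_sync_config`.

```c
#include <assert.h>
#include <stdint.h>

/* Same field layout as spi_xcomm_sync_config's packet:
 * byte 0: command, bytes 1-2: settings (big-endian),
 * bytes 3-4: chipselect mask (big-endian). */
#define SPI_XCOMM_CMD_UPDATE_CONFIG   0x03
#define SPI_XCOMM_SETTINGS_LEN_OFFSET 10

/* Userspace stand-in for put_unaligned_be16(). */
static void put_be16(uint16_t val, uint8_t *p)
{
	p[0] = val >> 8;
	p[1] = val & 0xff;
}

static void build_config_packet(uint16_t settings, uint16_t chipselect,
				unsigned int len, uint8_t buf[5])
{
	/* Transfer length is folded into the upper settings bits. */
	settings |= len << SPI_XCOMM_SETTINGS_LEN_OFFSET;

	buf[0] = SPI_XCOMM_CMD_UPDATE_CONFIG;
	put_be16(settings, &buf[1]);
	put_be16(chipselect, &buf[3]);
}
```

For example, settings 0x0004 with a 2-byte transfer and chip-select mask 0x0002 packs to `{0x03, 0x08, 0x04, 0x00, 0x02}`.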
59
60 static void spi_xcomm_chipselect(struct spi_xcomm *spi_xcomm,
61 struct spi_device *spi, int is_active)
62 {
63 unsigned long cs = spi->chip_select;
64 uint16_t chipselect = spi_xcomm->chipselect;
65
66 if (is_active)
67 chipselect |= BIT(cs);
68 else
69 chipselect &= ~BIT(cs);
70
71 spi_xcomm->chipselect = chipselect;
72 }
73
74 static int spi_xcomm_setup_transfer(struct spi_xcomm *spi_xcomm,
75 struct spi_device *spi, struct spi_transfer *t, unsigned int *settings)
76 {
77 unsigned int speed;
78
79 if ((t->bits_per_word && t->bits_per_word != 8) || t->len > 62)
80 return -EINVAL;
81
82 speed = t->speed_hz ? t->speed_hz : spi->max_speed_hz;
83
84 if (speed != spi_xcomm->current_speed) {
85 unsigned int divider = DIV_ROUND_UP(SPI_XCOMM_CLOCK, speed);
86 if (divider >= 64)
87 *settings |= SPI_XCOMM_SETTINGS_CLOCK_DIV_64;
88 else if (divider >= 16)
89 *settings |= SPI_XCOMM_SETTINGS_CLOCK_DIV_16;
90 else
91 *settings |= SPI_XCOMM_SETTINGS_CLOCK_DIV_4;
92
93 spi_xcomm->current_speed = speed;
94 }
95
96 if (spi->mode & SPI_CPOL)
97 *settings |= SPI_XCOMM_SETTINGS_CPOL;
98 else
99 *settings &= ~SPI_XCOMM_SETTINGS_CPOL;
100
101 if (spi->mode & SPI_CPHA)
102 *settings &= ~SPI_XCOMM_SETTINGS_CPHA;
103 else
104 *settings |= SPI_XCOMM_SETTINGS_CPHA;
105
106 if (spi->mode & SPI_3WIRE)
107 *settings |= SPI_XCOMM_SETTINGS_3WIRE;
108 else
109 *settings &= ~SPI_XCOMM_SETTINGS_3WIRE;
110
111 return 0;
112 }
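The divider selection above maps any requested rate onto one of the three fixed dividers (4, 16 or 64) of the 48 MHz bridge clock. A small userspace model of that mapping (the `#define` names mirror the driver's; `DIV_ROUND_UP` is written out as the usual kernel macro):

```c
#include <assert.h>

#define SPI_XCOMM_CLOCK 48000000
#define SPI_XCOMM_SETTINGS_CLOCK_DIV_64 0x2
#define SPI_XCOMM_SETTINGS_CLOCK_DIV_16 0x1
#define SPI_XCOMM_SETTINGS_CLOCK_DIV_4  0x0
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Map the exact divider needed for speed_hz onto one of the three
 * supported settings, as spi_xcomm_setup_transfer does. */
static unsigned int xcomm_clock_div(unsigned int speed_hz)
{
	unsigned int divider = DIV_ROUND_UP(SPI_XCOMM_CLOCK, speed_hz);

	if (divider >= 64)
		return SPI_XCOMM_SETTINGS_CLOCK_DIV_64;
	else if (divider >= 16)
		return SPI_XCOMM_SETTINGS_CLOCK_DIV_16;
	else
		return SPI_XCOMM_SETTINGS_CLOCK_DIV_4;
}
```

So a 500 kHz request (exact divider 96) selects /64, a 1 MHz request (exact divider 48) selects /16, and 48 MHz (divider 1) selects /4. Note that between the fixed steps the resulting clock can be faster than the requested rate.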
113
114 static int spi_xcomm_txrx_bufs(struct spi_xcomm *spi_xcomm,
115 struct spi_device *spi, struct spi_transfer *t)
116 {
117 int ret;
118
119 if (t->tx_buf) {
120 spi_xcomm->buf[0] = SPI_XCOMM_CMD_WRITE;
121 memcpy(spi_xcomm->buf + 1, t->tx_buf, t->len);
122
123 ret = i2c_master_send(spi_xcomm->i2c, spi_xcomm->buf, t->len + 1);
124 if (ret < 0)
125 return ret;
126 else if (ret != t->len + 1)
127 return -EIO;
128 } else if (t->rx_buf) {
129 ret = i2c_master_recv(spi_xcomm->i2c, t->rx_buf, t->len);
130 if (ret < 0)
131 return ret;
132 else if (ret != t->len)
133 return -EIO;
134 }
135
136 return t->len;
137 }
138
139 static int spi_xcomm_transfer_one(struct spi_master *master,
140 struct spi_message *msg)
141 {
142 struct spi_xcomm *spi_xcomm = spi_master_get_devdata(master);
143 unsigned int settings = spi_xcomm->settings;
144 struct spi_device *spi = msg->spi;
145 unsigned cs_change = 0;
146 struct spi_transfer *t;
147 bool is_first = true;
148 int status = 0;
149 bool is_last;
150
151 is_first = true;
152
153 spi_xcomm_chipselect(spi_xcomm, spi, true);
154
155 list_for_each_entry(t, &msg->transfers, transfer_list) {
156
157 if (!t->tx_buf && !t->rx_buf && t->len) {
158 status = -EINVAL;
159 break;
160 }
161
162 status = spi_xcomm_setup_transfer(spi_xcomm, spi, t, &settings);
163 if (status < 0)
164 break;
165
166 is_last = list_is_last(&t->transfer_list, &msg->transfers);
167 cs_change = t->cs_change;
168
169 if (cs_change ^ is_last)
170 settings |= BIT(5);
171 else
172 settings &= ~BIT(5);
173
174 if (t->rx_buf) {
175 spi_xcomm->settings = settings;
176 status = spi_xcomm_sync_config(spi_xcomm, t->len);
177 if (status < 0)
178 break;
179 } else if (settings != spi_xcomm->settings || is_first) {
180 spi_xcomm->settings = settings;
181 status = spi_xcomm_sync_config(spi_xcomm, 0);
182 if (status < 0)
183 break;
184 }
185
186 if (t->len) {
187 status = spi_xcomm_txrx_bufs(spi_xcomm, spi, t);
188
189 if (status < 0)
190 break;
191
192 if (status > 0)
193 msg->actual_length += status;
194 }
195 status = 0;
196
197 if (t->delay_usecs)
198 udelay(t->delay_usecs);
199
200 is_first = false;
201 }
202
203 if (status != 0 || !cs_change)
204 spi_xcomm_chipselect(spi_xcomm, spi, false);
205
206 msg->status = status;
207 spi_finalize_current_message(master);
208
209 return status;
210 }
211
212 static int spi_xcomm_setup(struct spi_device *spi)
213 {
214 if (spi->bits_per_word != 8)
215 return -EINVAL;
216
217 return 0;
218 }
219
220 static int __devinit spi_xcomm_probe(struct i2c_client *i2c,
221 const struct i2c_device_id *id)
222 {
223 struct spi_xcomm *spi_xcomm;
224 struct spi_master *master;
225 int ret;
226
227 master = spi_alloc_master(&i2c->dev, sizeof(*spi_xcomm));
228 if (!master)
229 return -ENOMEM;
230
231 spi_xcomm = spi_master_get_devdata(master);
232 spi_xcomm->i2c = i2c;
233
234 master->num_chipselect = 16;
235 master->mode_bits = SPI_CPHA | SPI_CPOL | SPI_3WIRE;
236 master->flags = SPI_MASTER_HALF_DUPLEX;
237 master->setup = spi_xcomm_setup;
238 master->transfer_one_message = spi_xcomm_transfer_one;
239 master->dev.of_node = i2c->dev.of_node;
240 i2c_set_clientdata(i2c, master);
241
242 ret = spi_register_master(master);
243 if (ret < 0)
244 spi_master_put(master);
245
246 return ret;
247 }
248
249 static int __devexit spi_xcomm_remove(struct i2c_client *i2c)
250 {
251 struct spi_master *master = i2c_get_clientdata(i2c);
252
253 spi_unregister_master(master);
254
255 return 0;
256 }
257
258 static const struct i2c_device_id spi_xcomm_ids[] = {
259 { "spi-xcomm" },
260 { },
261 };
262
263 static struct i2c_driver spi_xcomm_driver = {
264 .driver = {
265 .name = "spi-xcomm",
266 .owner = THIS_MODULE,
267 },
268 .id_table = spi_xcomm_ids,
269 .probe = spi_xcomm_probe,
270 .remove = __devexit_p(spi_xcomm_remove),
271 };
272 module_i2c_driver(spi_xcomm_driver);
273
274 MODULE_LICENSE("GPL");
275 MODULE_AUTHOR("Lars-Peter Clausen <lars@metafoo.de>");
276 MODULE_DESCRIPTION("Analog Devices AD-FMCOMMS1-EBZ board I2C-SPI bridge driver");
277
drivers/spi/spi.c
1 /*
2 * SPI init/core code
3 *
4 * Copyright (C) 2005 David Brownell
5 * Copyright (C) 2008 Secret Lab Technologies Ltd.
6 *
7 * This program is free software; you can redistribute it and/or modify
8 * it under the terms of the GNU General Public License as published by
9 * the Free Software Foundation; either version 2 of the License, or
10 * (at your option) any later version.
11 *
12 * This program is distributed in the hope that it will be useful,
13 * but WITHOUT ANY WARRANTY; without even the implied warranty of
14 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 * GNU General Public License for more details.
16 *
17 * You should have received a copy of the GNU General Public License
18 * along with this program; if not, write to the Free Software
19 * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
20 */
21
22 #include <linux/kernel.h>
23 #include <linux/kmod.h>
24 #include <linux/device.h>
25 #include <linux/init.h>
26 #include <linux/cache.h>
27 #include <linux/mutex.h>
28 #include <linux/of_device.h>
29 #include <linux/of_irq.h>
30 #include <linux/slab.h>
31 #include <linux/mod_devicetable.h>
32 #include <linux/spi/spi.h>
33 #include <linux/pm_runtime.h>
34 #include <linux/export.h>
35 #include <linux/sched.h>
36 #include <linux/delay.h>
37 #include <linux/kthread.h>
38
39 static void spidev_release(struct device *dev)
40 {
41 struct spi_device *spi = to_spi_device(dev);
42
43 /* spi masters may cleanup for released devices */
44 if (spi->master->cleanup)
45 spi->master->cleanup(spi);
46
47 spi_master_put(spi->master);
48 kfree(spi);
49 }
50
51 static ssize_t
52 modalias_show(struct device *dev, struct device_attribute *a, char *buf)
53 {
54 const struct spi_device *spi = to_spi_device(dev);
55
56 return sprintf(buf, "%s%s\n", SPI_MODULE_PREFIX, spi->modalias);
57 }
58
59 static struct device_attribute spi_dev_attrs[] = {
60 __ATTR_RO(modalias),
61 __ATTR_NULL,
62 };
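With this commit, the sysfs `modalias` attribute carries the same `spi:` prefix as the uevent MODALIAS string, so userspace sees one consistent namespaced alias. A hedged userspace sketch of the resulting formatting, assuming `SPI_MODULE_PREFIX` expands to the string `"spi:"` as in `<linux/spi/spi.h>` (the function here is a stand-in, not the kernel's `modalias_show`):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define SPI_MODULE_PREFIX "spi:"

/* Model of what modalias_show() writes into the sysfs buffer
 * after this commit: "spi:<modalias>\n". */
static int format_modalias(const char *modalias, char *buf, size_t len)
{
	return snprintf(buf, len, "%s%s\n", SPI_MODULE_PREFIX, modalias);
}
```

Reading `/sys/bus/spi/devices/.../modalias` for a device with modalias `ad7877` would then yield `spi:ad7877`, matching the `MODALIAS=spi:ad7877` uevent variable.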
63
64 /* modalias support makes "modprobe $MODALIAS" new-style hotplug work,
65 * and the sysfs version makes coldplug work too.
66 */
67
68 static const struct spi_device_id *spi_match_id(const struct spi_device_id *id,
69 const struct spi_device *sdev)
70 {
71 while (id->name[0]) {
72 if (!strcmp(sdev->modalias, id->name))
73 return id;
74 id++;
75 }
76 return NULL;
77 }
78
79 const struct spi_device_id *spi_get_device_id(const struct spi_device *sdev)
80 {
81 const struct spi_driver *sdrv = to_spi_driver(sdev->dev.driver);
82
83 return spi_match_id(sdrv->id_table, sdev);
84 }
85 EXPORT_SYMBOL_GPL(spi_get_device_id);
86
87 static int spi_match_device(struct device *dev, struct device_driver *drv)
88 {
89 const struct spi_device *spi = to_spi_device(dev);
90 const struct spi_driver *sdrv = to_spi_driver(drv);
91
92 /* Attempt an OF style match */
93 if (of_driver_match_device(dev, drv))
94 return 1;
95
96 if (sdrv->id_table)
97 return !!spi_match_id(sdrv->id_table, spi);
98
99 return strcmp(spi->modalias, drv->name) == 0;
100 }
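spi_match_device() thus tries three mechanisms in order: an OF-style compatible match, a walk of the driver's id_table, and only when no table exists, a bare comparison of the modalias against the driver name. The table walk and fallback can be modelled in userspace; the struct and function signatures below are simplified stand-ins, not the kernel definitions:

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-in for struct spi_device_id. */
struct spi_device_id {
	char name[32];
};

/* Walk the table until the empty-name terminator, as spi_match_id does. */
static const struct spi_device_id *
match_id(const struct spi_device_id *id, const char *modalias)
{
	while (id->name[0]) {
		if (!strcmp(modalias, id->name))
			return id;
		id++;
	}
	return NULL;
}

/* Table match if a table exists; otherwise fall back to the driver name.
 * Mirrors the last two steps of spi_match_device(). */
static int match_device(const struct spi_device_id *id_table,
			const char *modalias, const char *drv_name)
{
	if (id_table)
		return match_id(id_table, modalias) != NULL;
	return strcmp(modalias, drv_name) == 0;
}
```

Note that when a driver supplies an id_table, a miss in the table is final: the name fallback is not consulted.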
101 101
102 static int spi_uevent(struct device *dev, struct kobj_uevent_env *env) 102 static int spi_uevent(struct device *dev, struct kobj_uevent_env *env)
103 { 103 {
104 const struct spi_device *spi = to_spi_device(dev); 104 const struct spi_device *spi = to_spi_device(dev);
105 105
106 add_uevent_var(env, "MODALIAS=%s%s", SPI_MODULE_PREFIX, spi->modalias); 106 add_uevent_var(env, "MODALIAS=%s%s", SPI_MODULE_PREFIX, spi->modalias);
107 return 0; 107 return 0;
108 } 108 }
109 109
110 #ifdef CONFIG_PM_SLEEP 110 #ifdef CONFIG_PM_SLEEP
111 static int spi_legacy_suspend(struct device *dev, pm_message_t message) 111 static int spi_legacy_suspend(struct device *dev, pm_message_t message)
112 { 112 {
113 int value = 0; 113 int value = 0;
114 struct spi_driver *drv = to_spi_driver(dev->driver); 114 struct spi_driver *drv = to_spi_driver(dev->driver);
115 115
116 /* suspend will stop irqs and dma; no more i/o */ 116 /* suspend will stop irqs and dma; no more i/o */
117 if (drv) { 117 if (drv) {
118 if (drv->suspend) 118 if (drv->suspend)
119 value = drv->suspend(to_spi_device(dev), message); 119 value = drv->suspend(to_spi_device(dev), message);
120 else 120 else
121 dev_dbg(dev, "... can't suspend\n"); 121 dev_dbg(dev, "... can't suspend\n");
122 } 122 }
123 return value; 123 return value;
124 } 124 }
125 125
126 static int spi_legacy_resume(struct device *dev) 126 static int spi_legacy_resume(struct device *dev)
127 { 127 {
128 int value = 0; 128 int value = 0;
129 struct spi_driver *drv = to_spi_driver(dev->driver); 129 struct spi_driver *drv = to_spi_driver(dev->driver);
130 130
131 /* resume may restart the i/o queue */ 131 /* resume may restart the i/o queue */
132 if (drv) { 132 if (drv) {
133 if (drv->resume) 133 if (drv->resume)
134 value = drv->resume(to_spi_device(dev)); 134 value = drv->resume(to_spi_device(dev));
135 else 135 else
136 dev_dbg(dev, "... can't resume\n"); 136 dev_dbg(dev, "... can't resume\n");
137 } 137 }
138 return value; 138 return value;
139 } 139 }
140 140
141 static int spi_pm_suspend(struct device *dev) 141 static int spi_pm_suspend(struct device *dev)
142 { 142 {
143 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 143 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
144 144
145 if (pm) 145 if (pm)
146 return pm_generic_suspend(dev); 146 return pm_generic_suspend(dev);
147 else 147 else
148 return spi_legacy_suspend(dev, PMSG_SUSPEND); 148 return spi_legacy_suspend(dev, PMSG_SUSPEND);
149 } 149 }
150 150
151 static int spi_pm_resume(struct device *dev) 151 static int spi_pm_resume(struct device *dev)
152 { 152 {
153 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 153 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
154 154
155 if (pm) 155 if (pm)
156 return pm_generic_resume(dev); 156 return pm_generic_resume(dev);
157 else 157 else
158 return spi_legacy_resume(dev); 158 return spi_legacy_resume(dev);
159 } 159 }
160 160
161 static int spi_pm_freeze(struct device *dev) 161 static int spi_pm_freeze(struct device *dev)
162 { 162 {
163 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 163 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
164 164
165 if (pm) 165 if (pm)
166 return pm_generic_freeze(dev); 166 return pm_generic_freeze(dev);
167 else 167 else
168 return spi_legacy_suspend(dev, PMSG_FREEZE); 168 return spi_legacy_suspend(dev, PMSG_FREEZE);
169 } 169 }
170 170
171 static int spi_pm_thaw(struct device *dev) 171 static int spi_pm_thaw(struct device *dev)
172 { 172 {
173 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 173 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
174 174
175 if (pm) 175 if (pm)
176 return pm_generic_thaw(dev); 176 return pm_generic_thaw(dev);
177 else 177 else
178 return spi_legacy_resume(dev); 178 return spi_legacy_resume(dev);
179 } 179 }
180 180
181 static int spi_pm_poweroff(struct device *dev) 181 static int spi_pm_poweroff(struct device *dev)
182 { 182 {
183 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 183 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
184 184
185 if (pm) 185 if (pm)
186 return pm_generic_poweroff(dev); 186 return pm_generic_poweroff(dev);
187 else 187 else
188 return spi_legacy_suspend(dev, PMSG_HIBERNATE); 188 return spi_legacy_suspend(dev, PMSG_HIBERNATE);
189 } 189 }
190 190
191 static int spi_pm_restore(struct device *dev) 191 static int spi_pm_restore(struct device *dev)
192 { 192 {
193 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; 193 const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL;
194 194
	if (pm)
		return pm_generic_restore(dev);
	else
		return spi_legacy_resume(dev);
}
#else
#define spi_pm_suspend	NULL
#define spi_pm_resume	NULL
#define spi_pm_freeze	NULL
#define spi_pm_thaw	NULL
#define spi_pm_poweroff	NULL
#define spi_pm_restore	NULL
#endif

static const struct dev_pm_ops spi_pm = {
	.suspend = spi_pm_suspend,
	.resume = spi_pm_resume,
	.freeze = spi_pm_freeze,
	.thaw = spi_pm_thaw,
	.poweroff = spi_pm_poweroff,
	.restore = spi_pm_restore,
	SET_RUNTIME_PM_OPS(
		pm_generic_runtime_suspend,
		pm_generic_runtime_resume,
		pm_generic_runtime_idle
	)
};

struct bus_type spi_bus_type = {
	.name = "spi",
	.dev_attrs = spi_dev_attrs,
	.match = spi_match_device,
	.uevent = spi_uevent,
	.pm = &spi_pm,
};
EXPORT_SYMBOL_GPL(spi_bus_type);


static int spi_drv_probe(struct device *dev)
{
	const struct spi_driver *sdrv = to_spi_driver(dev->driver);

	return sdrv->probe(to_spi_device(dev));
}

static int spi_drv_remove(struct device *dev)
{
	const struct spi_driver *sdrv = to_spi_driver(dev->driver);

	return sdrv->remove(to_spi_device(dev));
}

static void spi_drv_shutdown(struct device *dev)
{
	const struct spi_driver *sdrv = to_spi_driver(dev->driver);

	sdrv->shutdown(to_spi_device(dev));
}

/**
 * spi_register_driver - register a SPI driver
 * @sdrv: the driver to register
 * Context: can sleep
 */
int spi_register_driver(struct spi_driver *sdrv)
{
	sdrv->driver.bus = &spi_bus_type;
	if (sdrv->probe)
		sdrv->driver.probe = spi_drv_probe;
	if (sdrv->remove)
		sdrv->driver.remove = spi_drv_remove;
	if (sdrv->shutdown)
		sdrv->driver.shutdown = spi_drv_shutdown;
	return driver_register(&sdrv->driver);
}
EXPORT_SYMBOL_GPL(spi_register_driver);

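The spi_drv_* wrappers above adapt the typed spi_driver callbacks to the generic struct device signatures the driver core expects, downcasting the device before dispatch. A standalone sketch of that adapter pattern in plain C (all names here are illustrative stand-ins, not kernel API):

```c
#include <stddef.h>

/* Generic "device" core, standing in for struct device. */
struct device { int id; };

/* Typed device embedding the generic one, as spi_device embeds device. */
struct spi_dev { struct device dev; int chip_select; };

/* Typed driver with a typed probe() hook, as spi_driver has. */
struct spi_drv { int (*probe)(struct spi_dev *); };

static struct spi_drv *current_drv;	/* stands in for dev->driver */

/* Bus-level wrapper with the generic signature, like spi_drv_probe():
 * recover the typed device and call the typed hook. */
static int bus_probe(struct device *dev)
{
	/* valid because dev is the first member of struct spi_dev */
	struct spi_dev *sdev = (struct spi_dev *)dev;

	return current_drv->probe(sdev);
}

static int demo_probe(struct spi_dev *sdev) { return sdev->chip_select + 1; }

int run_probe_demo(void)
{
	static struct spi_drv drv = { .probe = demo_probe };
	struct spi_dev sdev = { .dev = { .id = 0 }, .chip_select = 2 };

	current_drv = &drv;
	return bus_probe(&sdev.dev);	/* dispatches through the wrapper */
}
```

The same shape repeats for remove and shutdown; only the hook signature differs.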
/*-------------------------------------------------------------------------*/

/* SPI devices should normally not be created by SPI device drivers; that
 * would make them board-specific.  Similarly with SPI master drivers.
 * Device registration normally goes into a file like
 * arch/.../mach.../board-YYY.c with other readonly (flashable)
 * information about mainboard devices.
 */

struct boardinfo {
	struct list_head list;
	struct spi_board_info board_info;
};

static LIST_HEAD(board_list);
static LIST_HEAD(spi_master_list);

/*
 * Used to protect add/del operations on the board_info list and the
 * spi_master list, and their matching process
 */
static DEFINE_MUTEX(board_lock);

/**
 * spi_alloc_device - Allocate a new SPI device
 * @master: Controller to which device is connected
 * Context: can sleep
 *
 * Allows a driver to allocate and initialize a spi_device without
 * registering it immediately.  This allows a driver to directly
 * fill the spi_device with device parameters before calling
 * spi_add_device() on it.
 *
 * The caller is responsible for calling spi_add_device() on the returned
 * spi_device structure to add it to the SPI master.  If the caller
 * needs to discard the spi_device without adding it, it should
 * call spi_dev_put() on it.
 *
 * Returns a pointer to the new device, or NULL.
 */
struct spi_device *spi_alloc_device(struct spi_master *master)
{
	struct spi_device *spi;
	struct device *dev = master->dev.parent;

	if (!spi_master_get(master))
		return NULL;

	spi = kzalloc(sizeof(*spi), GFP_KERNEL);
	if (!spi) {
		dev_err(dev, "cannot alloc spi_device\n");
		spi_master_put(master);
		return NULL;
	}

	spi->master = master;
	spi->dev.parent = &master->dev;
	spi->dev.bus = &spi_bus_type;
	spi->dev.release = spidev_release;
	device_initialize(&spi->dev);
	return spi;
}
EXPORT_SYMBOL_GPL(spi_alloc_device);

/**
 * spi_add_device - Add spi_device allocated with spi_alloc_device
 * @spi: spi_device to register
 *
 * Companion function to spi_alloc_device.  Devices allocated with
 * spi_alloc_device can be added onto the spi bus with this function.
 *
 * Returns 0 on success; negative errno on failure
 */
int spi_add_device(struct spi_device *spi)
{
	static DEFINE_MUTEX(spi_add_lock);
	struct device *dev = spi->master->dev.parent;
	struct device *d;
	int status;

	/* Chipselects are numbered 0..max; validate. */
	if (spi->chip_select >= spi->master->num_chipselect) {
		dev_err(dev, "cs%d >= max %d\n",
			spi->chip_select,
			spi->master->num_chipselect);
		return -EINVAL;
	}

	/* Set the bus ID string */
	dev_set_name(&spi->dev, "%s.%u", dev_name(&spi->master->dev),
			spi->chip_select);

	/* We need to make sure there's no other device with this
	 * chipselect **BEFORE** we call setup(), else we'll trash
	 * its configuration.  Lock against concurrent add() calls.
	 */
	mutex_lock(&spi_add_lock);

	d = bus_find_device_by_name(&spi_bus_type, NULL, dev_name(&spi->dev));
	if (d != NULL) {
		dev_err(dev, "chipselect %d already in use\n",
				spi->chip_select);
		put_device(d);
		status = -EBUSY;
		goto done;
	}

	/* Drivers may modify this initial i/o setup, but will
	 * normally rely on the device being set up.  Devices
	 * using SPI_CS_HIGH can't coexist well otherwise...
	 */
	status = spi_setup(spi);
	if (status < 0) {
		dev_err(dev, "can't setup %s, status %d\n",
				dev_name(&spi->dev), status);
		goto done;
	}

	/* Device may be bound to an active driver when this returns */
	status = device_add(&spi->dev);
	if (status < 0)
		dev_err(dev, "can't add %s, status %d\n",
				dev_name(&spi->dev), status);
	else
		dev_dbg(dev, "registered child %s\n", dev_name(&spi->dev));

done:
	mutex_unlock(&spi_add_lock);
	return status;
}
EXPORT_SYMBOL_GPL(spi_add_device);

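spi_add_device() rejects any chip select at or beyond the controller's num_chipselect, then derives the child's bus ID as "<master-name>.<chip_select>"; that same string is what bus_find_device_by_name() matches against to detect a duplicate chip select. A standalone sketch of those two steps (helper names are hypothetical):

```c
#include <stdio.h>
#include <string.h>

/* Chipselects are numbered 0..max-1; mirror the -EINVAL check. */
int check_chipselect(unsigned cs, unsigned num_chipselect)
{
	return cs >= num_chipselect ? -1 : 0;
}

/* Build the "<master-name>.<cs>" bus ID that dev_set_name() installs. */
void make_bus_id(char *buf, size_t len, const char *master, unsigned cs)
{
	snprintf(buf, len, "%s.%u", master, cs);
}

int bus_id_demo(void)
{
	char buf[32];

	if (check_chipselect(4, 4) != -1)	/* cs == max: rejected */
		return 0;
	make_bus_id(buf, sizeof(buf), "spi1", 2);
	return strcmp(buf, "spi1.2") == 0;	/* 1 when the ID matches */
}
```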
/**
 * spi_new_device - instantiate one new SPI device
 * @master: Controller to which device is connected
 * @chip: Describes the SPI device
 * Context: can sleep
 *
 * On typical mainboards, this is purely internal; and it's not needed
 * after board init creates the hard-wired devices.  Some development
 * platforms may not be able to use spi_register_board_info though, and
 * this is exported so that for example a USB or parport based adapter
 * driver could add devices (which it would learn about out-of-band).
 *
 * Returns the new device, or NULL.
 */
struct spi_device *spi_new_device(struct spi_master *master,
				  struct spi_board_info *chip)
{
	struct spi_device *proxy;
	int status;

	/* NOTE:  caller did any chip->bus_num checks necessary.
	 *
	 * Also, unless we change the return value convention to use
	 * error-or-pointer (not NULL-or-pointer), troubleshootability
	 * suggests syslogged diagnostics are best here (ugh).
	 */

	proxy = spi_alloc_device(master);
	if (!proxy)
		return NULL;

	WARN_ON(strlen(chip->modalias) >= sizeof(proxy->modalias));

	proxy->chip_select = chip->chip_select;
	proxy->max_speed_hz = chip->max_speed_hz;
	proxy->mode = chip->mode;
	proxy->irq = chip->irq;
	strlcpy(proxy->modalias, chip->modalias, sizeof(proxy->modalias));
	proxy->dev.platform_data = (void *) chip->platform_data;
	proxy->controller_data = chip->controller_data;
	proxy->controller_state = NULL;

	status = spi_add_device(proxy);
	if (status < 0) {
		spi_dev_put(proxy);
		return NULL;
	}

	return proxy;
}
EXPORT_SYMBOL_GPL(spi_new_device);

static void spi_match_master_to_boardinfo(struct spi_master *master,
					  struct spi_board_info *bi)
{
	struct spi_device *dev;

	if (master->bus_num != bi->bus_num)
		return;

	dev = spi_new_device(master, bi);
	if (!dev)
		dev_err(master->dev.parent, "can't create new device for %s\n",
			bi->modalias);
}

/**
 * spi_register_board_info - register SPI devices for a given board
 * @info: array of chip descriptors
 * @n: how many descriptors are provided
 * Context: can sleep
 *
 * Board-specific early init code calls this (probably during arch_initcall)
 * with segments of the SPI device table.  Any device nodes are created later,
 * after the relevant parent SPI controller (bus_num) is defined.  We keep
 * this table of devices forever, so that reloading a controller driver will
 * not make Linux forget about these hard-wired devices.
 *
 * Other code can also call this, e.g. a particular add-on board might provide
 * SPI devices through its expansion connector, so code initializing that board
 * would naturally declare its SPI devices.
 *
 * The board info passed can safely be __initdata ... but be careful of
 * any embedded pointers (platform_data, etc), they're copied as-is.
 */
int __devinit
spi_register_board_info(struct spi_board_info const *info, unsigned n)
{
	struct boardinfo *bi;
	int i;

	bi = kzalloc(n * sizeof(*bi), GFP_KERNEL);
	if (!bi)
		return -ENOMEM;

	for (i = 0; i < n; i++, bi++, info++) {
		struct spi_master *master;

		memcpy(&bi->board_info, info, sizeof(*info));
		mutex_lock(&board_lock);
		list_add_tail(&bi->list, &board_list);
		list_for_each_entry(master, &spi_master_list, list)
			spi_match_master_to_boardinfo(master, &bi->board_info);
		mutex_unlock(&board_lock);
	}

	return 0;
}

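As the kdoc above notes, spi_register_board_info() copies each descriptor by value into an allocated node, so the caller's (possibly __initdata) array can be discarded afterwards, while embedded pointers such as platform_data are copied as-is. A standalone sketch of that copy-into-list pattern (types and names here are simplified stand-ins, and the sketch prepends rather than appends):

```c
#include <stdlib.h>
#include <string.h>

struct board_info { int bus_num; int chip_select; };

struct boardinfo_node {
	struct boardinfo_node *next;	/* stands in for list_head */
	struct board_info board_info;	/* copied by value, as in the core */
};

/* Copy n descriptors into freshly allocated nodes; returns the head,
 * or the partial list on allocation failure (real code returns -ENOMEM). */
struct boardinfo_node *copy_board_info(const struct board_info *info,
				       unsigned n)
{
	struct boardinfo_node *head = NULL, *bi;
	unsigned i;

	for (i = 0; i < n; i++, info++) {
		bi = calloc(1, sizeof(*bi));
		if (!bi)
			return head;
		memcpy(&bi->board_info, info, sizeof(*info));
		bi->next = head;
		head = bi;
	}
	return head;
}

int board_info_demo(void)
{
	struct board_info src[2] = { { 0, 1 }, { 1, 3 } };
	struct boardinfo_node *head = copy_board_info(src, 2);

	src[0].chip_select = 99;	/* mutating the source changes nothing */
	return head->board_info.chip_select
		+ head->next->board_info.chip_select;
}
```

(The demo leaks its two nodes; the kernel keeps them forever by design.)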
/*-------------------------------------------------------------------------*/

/**
 * spi_pump_messages - kthread work function which processes spi message queue
 * @work: pointer to kthread work struct contained in the master struct
 *
 * This function checks if there is any spi message in the queue that
 * needs processing and if so calls out to the driver to initialize hardware
 * and transfer each message.
 */
static void spi_pump_messages(struct kthread_work *work)
{
	struct spi_master *master =
		container_of(work, struct spi_master, pump_messages);
	unsigned long flags;
	bool was_busy = false;
	int ret;

	/* Lock queue and check for queue work */
	spin_lock_irqsave(&master->queue_lock, flags);
	if (list_empty(&master->queue) || !master->running) {
		if (master->busy && master->unprepare_transfer_hardware) {
			ret = master->unprepare_transfer_hardware(master);
			if (ret) {
				spin_unlock_irqrestore(&master->queue_lock, flags);
				dev_err(&master->dev,
					"failed to unprepare transfer hardware\n");
				return;
			}
		}
		master->busy = false;
		spin_unlock_irqrestore(&master->queue_lock, flags);
		return;
	}

	/* Make sure we are not already running a message */
	if (master->cur_msg) {
		spin_unlock_irqrestore(&master->queue_lock, flags);
		return;
	}
	/* Extract head of queue */
	master->cur_msg =
		list_entry(master->queue.next, struct spi_message, queue);

	list_del_init(&master->cur_msg->queue);
	if (master->busy)
		was_busy = true;
	else
		master->busy = true;
	spin_unlock_irqrestore(&master->queue_lock, flags);

	if (!was_busy && master->prepare_transfer_hardware) {
		ret = master->prepare_transfer_hardware(master);
		if (ret) {
			dev_err(&master->dev,
				"failed to prepare transfer hardware\n");
			return;
		}
	}

	ret = master->transfer_one_message(master, master->cur_msg);
	if (ret) {
		dev_err(&master->dev,
			"failed to transfer one message from queue\n");
		return;
	}
}

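spi_pump_messages() recovers its spi_master from the embedded kthread_work via container_of(), which is just pointer arithmetic over offsetof. A standalone sketch of the macro and the recovery step (struct names here are illustrative, much smaller than the real ones):

```c
#include <stddef.h>

/* Minimal container_of, as used above to get from the embedded
 * pump_messages work struct back to its enclosing spi_master. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct work { int pending; };
struct master { int bus_num; struct work pump; };

int container_of_demo(void)
{
	struct master m = { .bus_num = 7, .pump = { .pending = 1 } };
	struct work *w = &m.pump;	/* callee only sees the member... */

	/* ...yet can recover the whole enclosing structure. */
	return container_of(w, struct master, pump)->bus_num;
}
```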
static int spi_init_queue(struct spi_master *master)
{
	struct sched_param param = { .sched_priority = MAX_RT_PRIO - 1 };

	INIT_LIST_HEAD(&master->queue);
	spin_lock_init(&master->queue_lock);

	master->running = false;
	master->busy = false;

	init_kthread_worker(&master->kworker);
	master->kworker_task = kthread_run(kthread_worker_fn,
					   &master->kworker,
					   dev_name(&master->dev));
	if (IS_ERR(master->kworker_task)) {
		dev_err(&master->dev, "failed to create message pump task\n");
		return -ENOMEM;
	}
	init_kthread_work(&master->pump_messages, spi_pump_messages);

	/*
	 * Master config will indicate if this controller should run the
	 * message pump with high (realtime) priority to reduce the transfer
	 * latency on the bus by minimising the delay between a transfer
	 * request and the scheduling of the message pump thread.  Without this
	 * setting the message pump thread will remain at default priority.
	 */
	if (master->rt) {
		dev_info(&master->dev,
			"will run message pump with realtime priority\n");
		sched_setscheduler(master->kworker_task, SCHED_FIFO, &param);
	}

	return 0;
}

/**
 * spi_get_next_queued_message() - called by driver to check for queued
 * messages
 * @master: the master to check for queued messages
 *
 * If there are more messages in the queue, the next message is returned from
 * this call.
 */
struct spi_message *spi_get_next_queued_message(struct spi_master *master)
{
	struct spi_message *next;
	unsigned long flags;

	/* get a pointer to the next message, if any */
	spin_lock_irqsave(&master->queue_lock, flags);
	if (list_empty(&master->queue))
		next = NULL;
	else
		next = list_entry(master->queue.next,
				  struct spi_message, queue);
	spin_unlock_irqrestore(&master->queue_lock, flags);

	return next;
}
EXPORT_SYMBOL_GPL(spi_get_next_queued_message);

/**
 * spi_finalize_current_message() - the current message is complete
 * @master: the master to return the message to
 *
 * Called by the driver to notify the core that the message in the front of the
 * queue is complete and can be removed from the queue.
 */
void spi_finalize_current_message(struct spi_master *master)
{
	struct spi_message *mesg;
	unsigned long flags;

	spin_lock_irqsave(&master->queue_lock, flags);
	mesg = master->cur_msg;
	master->cur_msg = NULL;

	queue_kthread_work(&master->kworker, &master->pump_messages);
	spin_unlock_irqrestore(&master->queue_lock, flags);

	mesg->state = NULL;
	if (mesg->complete)
		mesg->complete(mesg->context);
}
EXPORT_SYMBOL_GPL(spi_finalize_current_message);

static int spi_start_queue(struct spi_master *master)
{
	unsigned long flags;

	spin_lock_irqsave(&master->queue_lock, flags);

	if (master->running || master->busy) {
		spin_unlock_irqrestore(&master->queue_lock, flags);
		return -EBUSY;
	}

	master->running = true;
	master->cur_msg = NULL;
	spin_unlock_irqrestore(&master->queue_lock, flags);

	queue_kthread_work(&master->kworker, &master->pump_messages);

	return 0;
}

static int spi_stop_queue(struct spi_master *master)
{
	unsigned long flags;
	unsigned limit = 500;
	int ret = 0;

	spin_lock_irqsave(&master->queue_lock, flags);

	/*
	 * This is a bit lame, but is optimized for the common execution path.
	 * A wait_queue on the master->busy could be used, but then the common
	 * execution path (pump_messages) would be required to call wake_up or
	 * friends on every SPI message.  Do this instead.
	 */
	while ((!list_empty(&master->queue) || master->busy) && limit--) {
		spin_unlock_irqrestore(&master->queue_lock, flags);
		msleep(10);
		spin_lock_irqsave(&master->queue_lock, flags);
	}

	if (!list_empty(&master->queue) || master->busy)
		ret = -EBUSY;
	else
		master->running = false;

	spin_unlock_irqrestore(&master->queue_lock, flags);

	if (ret)
		dev_warn(&master->dev, "could not stop message queue\n");
	return ret;
}

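As its comment explains, spi_stop_queue() deliberately polls (up to 500 iterations of msleep(10)) rather than using a wait queue, keeping wakeup bookkeeping off the hot pump_messages path at the cost of a bounded wait on the rare stop path. The shape of that bounded-drain loop, sketched without the lock/sleep mechanics (names are illustrative):

```c
/* Poll a condition with a retry limit, as spi_stop_queue() does;
 * each loop body stands in for one drop-lock/msleep(10)/retake cycle.
 * Returns -1 (like -EBUSY) if still busy when the limit runs out. */
int drain_with_limit(int (*still_busy)(void *), void *ctx, unsigned limit)
{
	while (still_busy(ctx) && limit--)
		;	/* real code sleeps here with the lock dropped */
	return still_busy(ctx) ? -1 : 0;
}

/* Toy workload: "busy" until an internal counter reaches zero. */
static int counter_busy(void *ctx) { return --*(int *)ctx > 0; }

int drain_demo(void)
{
	int work_left = 5;

	return drain_with_limit(counter_busy, &work_left, 500);
}
```

Note the kernel loop shares this sketch's quirk that `limit--` on an unsigned zero wraps, which is harmless here because the loop exits on that same evaluation.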
static int spi_destroy_queue(struct spi_master *master)
{
	int ret;

	ret = spi_stop_queue(master);

	/*
	 * flush_kthread_worker will block until all work is done.
	 * If the reason that stop_queue timed out is that the work will never
	 * finish, then it does no good to call flush/stop thread, so
	 * return anyway.
	 */
	if (ret) {
		dev_err(&master->dev, "problem destroying queue\n");
		return ret;
	}

	flush_kthread_worker(&master->kworker);
	kthread_stop(master->kworker_task);

	return 0;
}

747 /** 747 /**
748 * spi_queued_transfer - transfer function for queued transfers 748 * spi_queued_transfer - transfer function for queued transfers
749 * @spi: spi device which is requesting transfer 749 * @spi: spi device which is requesting transfer
750 * @msg: spi message which is to handled is queued to driver queue 750 * @msg: spi message which is to handled is queued to driver queue
751 */ 751 */
752 static int spi_queued_transfer(struct spi_device *spi, struct spi_message *msg) 752 static int spi_queued_transfer(struct spi_device *spi, struct spi_message *msg)
753 { 753 {
754 struct spi_master *master = spi->master; 754 struct spi_master *master = spi->master;
755 unsigned long flags; 755 unsigned long flags;
756 756
757 spin_lock_irqsave(&master->queue_lock, flags); 757 spin_lock_irqsave(&master->queue_lock, flags);
758 758
759 if (!master->running) { 759 if (!master->running) {
760 spin_unlock_irqrestore(&master->queue_lock, flags); 760 spin_unlock_irqrestore(&master->queue_lock, flags);
761 return -ESHUTDOWN; 761 return -ESHUTDOWN;
762 } 762 }
763 msg->actual_length = 0; 763 msg->actual_length = 0;
764 msg->status = -EINPROGRESS; 764 msg->status = -EINPROGRESS;
765 765
766 list_add_tail(&msg->queue, &master->queue); 766 list_add_tail(&msg->queue, &master->queue);
767 if (master->running && !master->busy) 767 if (master->running && !master->busy)
768 queue_kthread_work(&master->kworker, &master->pump_messages); 768 queue_kthread_work(&master->kworker, &master->pump_messages);
769 769
770 spin_unlock_irqrestore(&master->queue_lock, flags); 770 spin_unlock_irqrestore(&master->queue_lock, flags);
771 return 0; 771 return 0;
772 } 772 }

static int spi_master_initialize_queue(struct spi_master *master)
{
	int ret;

	master->queued = true;
	master->transfer = spi_queued_transfer;

	/* Initialize and start queue */
	ret = spi_init_queue(master);
	if (ret) {
		dev_err(&master->dev, "problem initializing queue\n");
		goto err_init_queue;
	}
	ret = spi_start_queue(master);
	if (ret) {
		dev_err(&master->dev, "problem starting queue\n");
		goto err_start_queue;
	}

	return 0;

err_start_queue:
err_init_queue:
	spi_destroy_queue(master);
	return ret;
}

/*-------------------------------------------------------------------------*/

#if defined(CONFIG_OF) && !defined(CONFIG_SPARC)
/**
 * of_register_spi_devices() - Register child devices onto the SPI bus
 * @master: Pointer to spi_master device
 *
 * Registers an spi_device for each child node of master node which has a 'reg'
 * property.
 */
static void of_register_spi_devices(struct spi_master *master)
{
	struct spi_device *spi;
	struct device_node *nc;
	const __be32 *prop;
	int rc;
	int len;

	if (!master->dev.of_node)
		return;

	for_each_child_of_node(master->dev.of_node, nc) {
		/* Alloc an spi_device */
		spi = spi_alloc_device(master);
		if (!spi) {
			dev_err(&master->dev, "spi_device alloc error for %s\n",
				nc->full_name);
			spi_dev_put(spi);
			continue;
		}

		/* Select device driver */
		if (of_modalias_node(nc, spi->modalias,
				     sizeof(spi->modalias)) < 0) {
			dev_err(&master->dev, "cannot find modalias for %s\n",
				nc->full_name);
			spi_dev_put(spi);
			continue;
		}

		/* Device address */
		prop = of_get_property(nc, "reg", &len);
		if (!prop || len < sizeof(*prop)) {
			dev_err(&master->dev, "%s has no 'reg' property\n",
				nc->full_name);
			spi_dev_put(spi);
			continue;
		}
		spi->chip_select = be32_to_cpup(prop);

		/* Mode (clock phase/polarity/etc.) */
		if (of_find_property(nc, "spi-cpha", NULL))
			spi->mode |= SPI_CPHA;
		if (of_find_property(nc, "spi-cpol", NULL))
			spi->mode |= SPI_CPOL;
		if (of_find_property(nc, "spi-cs-high", NULL))
			spi->mode |= SPI_CS_HIGH;

		/* Device speed */
		prop = of_get_property(nc, "spi-max-frequency", &len);
		if (!prop || len < sizeof(*prop)) {
			dev_err(&master->dev, "%s has no 'spi-max-frequency' property\n",
				nc->full_name);
			spi_dev_put(spi);
			continue;
		}
		spi->max_speed_hz = be32_to_cpup(prop);

		/* IRQ */
		spi->irq = irq_of_parse_and_map(nc, 0);

		/* Store a pointer to the node in the device structure */
		of_node_get(nc);
		spi->dev.of_node = nc;

		/* Register the new device */
		request_module(spi->modalias);
		rc = spi_add_device(spi);
		if (rc) {
			dev_err(&master->dev, "spi_device register error %s\n",
				nc->full_name);
			spi_dev_put(spi);
		}

	}
}
#else
static void of_register_spi_devices(struct spi_master *master) { }
#endif

static void spi_master_release(struct device *dev)
{
	struct spi_master *master;

	master = container_of(dev, struct spi_master, dev);
	kfree(master);
}

static struct class spi_master_class = {
	.name		= "spi_master",
	.owner		= THIS_MODULE,
	.dev_release	= spi_master_release,
};



/**
 * spi_alloc_master - allocate SPI master controller
 * @dev: the controller, possibly using the platform_bus
 * @size: how much zeroed driver-private data to allocate; the pointer to this
 *	memory is in the driver_data field of the returned device,
 *	accessible with spi_master_get_devdata().
 * Context: can sleep
 *
 * This call is used only by SPI master controller drivers, which are the
 * only ones directly touching chip registers.  It's how they allocate
 * an spi_master structure, prior to calling spi_register_master().
 *
 * This must be called from context that can sleep.  It returns the SPI
 * master structure on success, else NULL.
 *
 * The caller is responsible for assigning the bus number and initializing
 * the master's methods before calling spi_register_master(); and (after errors
 * adding the device) calling spi_master_put() and kfree() to prevent a memory
 * leak.
 */
struct spi_master *spi_alloc_master(struct device *dev, unsigned size)
{
	struct spi_master *master;

	if (!dev)
		return NULL;

	master = kzalloc(size + sizeof *master, GFP_KERNEL);
	if (!master)
		return NULL;

	device_initialize(&master->dev);
	master->bus_num = -1;
	master->num_chipselect = 1;
	master->dev.class = &spi_master_class;
	master->dev.parent = get_device(dev);
	spi_master_set_devdata(master, &master[1]);

	return master;
}
EXPORT_SYMBOL_GPL(spi_alloc_master);
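
/*
 * Illustrative sketch (not part of this file): a typical controller
 * driver's probe() pairs spi_alloc_master() with spi_register_master(),
 * dropping the reference with spi_master_put() on registration failure.
 * The foo_* names and the values below are hypothetical.
 *
 *	static int foo_spi_probe(struct platform_device *pdev)
 *	{
 *		struct spi_master *master;
 *		struct foo_spi *fs;
 *		int ret;
 *
 *		master = spi_alloc_master(&pdev->dev, sizeof(*fs));
 *		if (!master)
 *			return -ENOMEM;
 *
 *		fs = spi_master_get_devdata(master);
 *		master->bus_num = pdev->id;
 *		master->num_chipselect = 4;
 *		master->mode_bits = SPI_CPOL | SPI_CPHA | SPI_CS_HIGH;
 *		master->setup = foo_spi_setup;
 *
 *		ret = spi_register_master(master);
 *		if (ret)
 *			spi_master_put(master);
 *		return ret;
 *	}
 */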

/**
 * spi_register_master - register SPI master controller
 * @master: initialized master, originally from spi_alloc_master()
 * Context: can sleep
 *
 * SPI master controllers connect to their drivers using some non-SPI bus,
 * such as the platform bus.  The final stage of probe() in that code
 * includes calling spi_register_master() to hook up to this SPI bus glue.
 *
 * SPI controllers use board specific (often SOC specific) bus numbers,
 * and board-specific addressing for SPI devices combines those numbers
 * with chip select numbers.  Since SPI does not directly support dynamic
 * device identification, boards need configuration tables telling which
 * chip is at which address.
 *
 * This must be called from context that can sleep.  It returns zero on
 * success, else a negative error code (dropping the master's refcount).
 * After a successful return, the caller is responsible for calling
 * spi_unregister_master().
 */
int spi_register_master(struct spi_master *master)
{
	static atomic_t dyn_bus_id = ATOMIC_INIT((1<<15) - 1);
	struct device *dev = master->dev.parent;
	struct boardinfo *bi;
	int status = -ENODEV;
	int dynamic = 0;

	if (!dev)
		return -ENODEV;

	/* even if it's just one always-selected device, there must
	 * be at least one chipselect
	 */
	if (master->num_chipselect == 0)
		return -EINVAL;

	/* convention: dynamically assigned bus IDs count down from the max */
	if (master->bus_num < 0) {
		/* FIXME switch to an IDR based scheme, something like
		 * I2C now uses, so we can't run out of "dynamic" IDs
		 */
		master->bus_num = atomic_dec_return(&dyn_bus_id);
		dynamic = 1;
	}

	spin_lock_init(&master->bus_lock_spinlock);
	mutex_init(&master->bus_lock_mutex);
	master->bus_lock_flag = 0;

	/* register the device, then userspace will see it.
	 * registration fails if the bus ID is in use.
	 */
	dev_set_name(&master->dev, "spi%u", master->bus_num);
	status = device_add(&master->dev);
	if (status < 0)
		goto done;
	dev_dbg(dev, "registered master %s%s\n", dev_name(&master->dev),
			dynamic ? " (dynamic)" : "");

	/* If we're using a queued driver, start the queue */
	if (master->transfer)
		dev_info(dev, "master is unqueued, this is deprecated\n");
	else {
		status = spi_master_initialize_queue(master);
		if (status) {
			device_unregister(&master->dev);
			goto done;
		}
	}

	mutex_lock(&board_lock);
	list_add_tail(&master->list, &spi_master_list);
	list_for_each_entry(bi, &board_list, list)
		spi_match_master_to_boardinfo(master, &bi->board_info);
	mutex_unlock(&board_lock);

	/* Register devices from the device tree */
	of_register_spi_devices(master);
done:
	return status;
}
EXPORT_SYMBOL_GPL(spi_register_master);

static int __unregister(struct device *dev, void *null)
{
	spi_unregister_device(to_spi_device(dev));
	return 0;
}

/**
 * spi_unregister_master - unregister SPI master controller
 * @master: the master being unregistered
 * Context: can sleep
 *
 * This call is used only by SPI master controller drivers, which are the
 * only ones directly touching chip registers.
 *
 * This must be called from context that can sleep.
 */
void spi_unregister_master(struct spi_master *master)
{
	int dummy;

	if (master->queued) {
		if (spi_destroy_queue(master))
			dev_err(&master->dev, "queue remove failed\n");
	}

	mutex_lock(&board_lock);
	list_del(&master->list);
	mutex_unlock(&board_lock);

	dummy = device_for_each_child(&master->dev, NULL, __unregister);
	device_unregister(&master->dev);
}
EXPORT_SYMBOL_GPL(spi_unregister_master);

int spi_master_suspend(struct spi_master *master)
{
	int ret;

	/* Basically no-ops for non-queued masters */
	if (!master->queued)
		return 0;

	ret = spi_stop_queue(master);
	if (ret)
		dev_err(&master->dev, "queue stop failed\n");

	return ret;
}
EXPORT_SYMBOL_GPL(spi_master_suspend);

int spi_master_resume(struct spi_master *master)
{
	int ret;

	if (!master->queued)
		return 0;

	ret = spi_start_queue(master);
	if (ret)
		dev_err(&master->dev, "queue restart failed\n");

	return ret;
}
EXPORT_SYMBOL_GPL(spi_master_resume);

static int __spi_master_match(struct device *dev, void *data)
{
	struct spi_master *m;
	u16 *bus_num = data;

	m = container_of(dev, struct spi_master, dev);
	return m->bus_num == *bus_num;
}

/**
 * spi_busnum_to_master - look up master associated with bus_num
 * @bus_num: the master's bus number
 * Context: can sleep
 *
 * This call may be used with devices that are registered after
 * arch init time.  It returns a refcounted pointer to the relevant
 * spi_master (which the caller must release), or NULL if there is
 * no such master registered.
 */
struct spi_master *spi_busnum_to_master(u16 bus_num)
{
	struct device *dev;
	struct spi_master *master = NULL;

	dev = class_find_device(&spi_master_class, NULL, &bus_num,
				__spi_master_match);
	if (dev)
		master = container_of(dev, struct spi_master, dev);
	/* reference got in class_find_device */
	return master;
}
EXPORT_SYMBOL_GPL(spi_busnum_to_master);
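
/*
 * Illustrative sketch (not part of this file): since the pointer returned
 * by spi_busnum_to_master() is refcounted, callers must drop the reference
 * with spi_master_put() when done.  Bus number 0 is only an example.
 *
 *	struct spi_master *master = spi_busnum_to_master(0);
 *
 *	if (master) {
 *		dev_info(&master->dev, "found master for bus 0\n");
 *		spi_master_put(master);
 *	}
 */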


/*-------------------------------------------------------------------------*/

/* Core methods for SPI master protocol drivers.  Some of the
 * other core methods are currently defined as inline functions.
 */

/**
 * spi_setup - setup SPI mode and clock rate
 * @spi: the device whose settings are being modified
 * Context: can sleep, and no requests are queued to the device
 *
 * SPI protocol drivers may need to update the transfer mode if the
 * device doesn't work with its default.  They may likewise need
 * to update clock rates or word sizes from initial values.  This function
 * changes those settings, and must be called from a context that can sleep.
 * Except for SPI_CS_HIGH, which takes effect immediately, the changes take
 * effect the next time the device is selected and data is transferred to
 * or from it.  When this function returns, the spi device is deselected.
 *
 * Note that this call will fail if the protocol driver specifies an option
 * that the underlying controller or its driver does not support.  For
 * example, not all hardware supports wire transfers using nine bit words,
 * LSB-first wire encoding, or active-high chipselects.
 */
int spi_setup(struct spi_device *spi)
{
	unsigned bad_bits;
	int status;

	/* help drivers fail *cleanly* when they need options
	 * that aren't supported with their current master
	 */
	bad_bits = spi->mode & ~spi->master->mode_bits;
	if (bad_bits) {
		dev_err(&spi->dev, "setup: unsupported mode bits %x\n",
			bad_bits);
		return -EINVAL;
	}

	if (!spi->bits_per_word)
		spi->bits_per_word = 8;

	status = spi->master->setup(spi);

	dev_dbg(&spi->dev, "setup mode %d, %s%s%s%s"
				"%u bits/w, %u Hz max --> %d\n",
			(int) (spi->mode & (SPI_CPOL | SPI_CPHA)),
			(spi->mode & SPI_CS_HIGH) ? "cs_high, " : "",
			(spi->mode & SPI_LSB_FIRST) ? "lsb, " : "",
			(spi->mode & SPI_3WIRE) ? "3wire, " : "",
			(spi->mode & SPI_LOOP) ? "loopback, " : "",
			spi->bits_per_word, spi->max_speed_hz,
			status);

	return status;
}
EXPORT_SYMBOL_GPL(spi_setup);
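
/*
 * Illustrative sketch (not part of this file): a protocol driver's
 * probe() typically adjusts the device settings and calls spi_setup()
 * before issuing any transfers.  The bar_probe name and the mode/word
 * size/speed values below are hypothetical.
 *
 *	static int bar_probe(struct spi_device *spi)
 *	{
 *		spi->mode = SPI_MODE_3;
 *		spi->bits_per_word = 16;
 *		spi->max_speed_hz = 1000000;
 *		return spi_setup(spi);
 *	}
 */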

static int __spi_async(struct spi_device *spi, struct spi_message *message)
{
	struct spi_master *master = spi->master;

	/* Half-duplex links include original MicroWire, and ones with
	 * only one data pin like SPI_3WIRE (switches direction) or where
	 * either MOSI or MISO is missing.  They can also be caused by
	 * software limitations.
	 */
	if ((master->flags & SPI_MASTER_HALF_DUPLEX)
			|| (spi->mode & SPI_3WIRE)) {
		struct spi_transfer *xfer;
		unsigned flags = master->flags;

		list_for_each_entry(xfer, &message->transfers, transfer_list) {
			if (xfer->rx_buf && xfer->tx_buf)
				return -EINVAL;
			if ((flags & SPI_MASTER_NO_TX) && xfer->tx_buf)
				return -EINVAL;
			if ((flags & SPI_MASTER_NO_RX) && xfer->rx_buf)
				return -EINVAL;
		}
	}

	message->spi = spi;
	message->status = -EINPROGRESS;
	return master->transfer(spi, message);
}
1218 1218
1219 /** 1219 /**
1220 * spi_async - asynchronous SPI transfer 1220 * spi_async - asynchronous SPI transfer
1221 * @spi: device with which data will be exchanged 1221 * @spi: device with which data will be exchanged
1222 * @message: describes the data transfers, including completion callback 1222 * @message: describes the data transfers, including completion callback
1223 * Context: any (irqs may be blocked, etc) 1223 * Context: any (irqs may be blocked, etc)
1224 * 1224 *
1225 * This call may be used in_irq and other contexts which can't sleep, 1225 * This call may be used in_irq and other contexts which can't sleep,
1226 * as well as from task contexts which can sleep. 1226 * as well as from task contexts which can sleep.
1227 * 1227 *
1228 * The completion callback is invoked in a context which can't sleep. 1228 * The completion callback is invoked in a context which can't sleep.
1229 * Before that invocation, the value of message->status is undefined. 1229 * Before that invocation, the value of message->status is undefined.
1230 * When the callback is issued, message->status holds either zero (to 1230 * When the callback is issued, message->status holds either zero (to
1231 * indicate complete success) or a negative error code. After that 1231 * indicate complete success) or a negative error code. After that
1232 * callback returns, the driver which issued the transfer request may 1232 * callback returns, the driver which issued the transfer request may
1233 * deallocate the associated memory; it's no longer in use by any SPI 1233 * deallocate the associated memory; it's no longer in use by any SPI
1234 * core or controller driver code. 1234 * core or controller driver code.
1235 * 1235 *
 * Note that although all messages to a spi_device are handled in
 * FIFO order, messages may go to different devices in other orders.
 * Some devices might be higher priority, or have various "hard" access
 * time requirements, for example.
 *
 * On detection of any fault during the transfer, processing of
 * the entire message is aborted, and the device is deselected.
 * Until returning from the associated message completion callback,
 * no other spi_message queued to that device will be processed.
 * (This rule applies equally to all the synchronous transfer calls,
 * which are wrappers around this core asynchronous primitive.)
 */
int spi_async(struct spi_device *spi, struct spi_message *message)
{
	struct spi_master *master = spi->master;
	int ret;
	unsigned long flags;

	spin_lock_irqsave(&master->bus_lock_spinlock, flags);

	if (master->bus_lock_flag)
		ret = -EBUSY;
	else
		ret = __spi_async(spi, message);

	spin_unlock_irqrestore(&master->bus_lock_spinlock, flags);

	return ret;
}
EXPORT_SYMBOL_GPL(spi_async);
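
/* For readers following the asynchronous contract described above, a
 * minimal sketch of a client driver queuing a message might look like
 * the following.  my_ctx, my_done and my_queue_read are illustrative
 * names, not part of the spi core; the contract points shown (DMA-safe
 * buffers, non-sleeping callback, buffer lifetime) come from the
 * kernel-doc above.
 */
#if 0	/* illustrative sketch, not part of spi.c */
struct my_ctx {
	struct spi_message	msg;
	struct spi_transfer	xfer;
	u8			rx[4];
	struct completion	done;
};

static void my_done(void *context)
{
	struct my_ctx *ctx = context;

	/* runs in a context that must not sleep;
	 * ctx->msg.status now holds 0 or a negative errno */
	complete(&ctx->done);
}

static int my_queue_read(struct spi_device *spi, struct my_ctx *ctx)
{
	spi_message_init(&ctx->msg);

	ctx->xfer.rx_buf = ctx->rx;	/* must be DMA-safe */
	ctx->xfer.len = sizeof(ctx->rx);
	spi_message_add_tail(&ctx->xfer, &ctx->msg);

	ctx->msg.complete = my_done;
	ctx->msg.context = ctx;

	/* buffers must stay allocated until my_done() has run */
	return spi_async(spi, &ctx->msg);
}
#endif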

/**
 * spi_async_locked - version of spi_async with exclusive bus usage
 * @spi: device with which data will be exchanged
 * @message: describes the data transfers, including completion callback
 * Context: any (irqs may be blocked, etc)
 *
 * This call may be used from IRQ and other contexts which can't sleep,
 * as well as from task contexts which can sleep.
 *
 * The completion callback is invoked in a context which can't sleep.
 * Before that invocation, the value of message->status is undefined.
 * When the callback is issued, message->status holds either zero (to
 * indicate complete success) or a negative error code.  After that
 * callback returns, the driver which issued the transfer request may
 * deallocate the associated memory; it's no longer in use by any SPI
 * core or controller driver code.
 *
 * Note that although all messages to a spi_device are handled in
 * FIFO order, messages may go to different devices in other orders.
 * Some devices might be higher priority, or have various "hard" access
 * time requirements, for example.
 *
 * On detection of any fault during the transfer, processing of
 * the entire message is aborted, and the device is deselected.
 * Until returning from the associated message completion callback,
 * no other spi_message queued to that device will be processed.
 * (This rule applies equally to all the synchronous transfer calls,
 * which are wrappers around this core asynchronous primitive.)
 */
int spi_async_locked(struct spi_device *spi, struct spi_message *message)
{
	struct spi_master *master = spi->master;
	int ret;
	unsigned long flags;

	spin_lock_irqsave(&master->bus_lock_spinlock, flags);

	ret = __spi_async(spi, message);

	spin_unlock_irqrestore(&master->bus_lock_spinlock, flags);

	return ret;
}
EXPORT_SYMBOL_GPL(spi_async_locked);


/*-------------------------------------------------------------------------*/

/* Utility methods for SPI master protocol drivers, layered on
 * top of the core.  Some other utility methods are defined as
 * inline functions.
 */

static void spi_complete(void *arg)
{
	complete(arg);
}

static int __spi_sync(struct spi_device *spi, struct spi_message *message,
		      int bus_locked)
{
	DECLARE_COMPLETION_ONSTACK(done);
	int status;
	struct spi_master *master = spi->master;

	message->complete = spi_complete;
	message->context = &done;

	if (!bus_locked)
		mutex_lock(&master->bus_lock_mutex);

	status = spi_async_locked(spi, message);

	if (!bus_locked)
		mutex_unlock(&master->bus_lock_mutex);

	if (status == 0) {
		wait_for_completion(&done);
		status = message->status;
	}
	message->context = NULL;
	return status;
}

/**
 * spi_sync - blocking/synchronous SPI data transfers
 * @spi: device with which data will be exchanged
 * @message: describes the data transfers
 * Context: can sleep
 *
 * This call may only be used from a context that may sleep.  The sleep
 * is non-interruptible, and has no timeout.  Low-overhead controller
 * drivers may DMA directly into and out of the message buffers.
 *
 * Note that the SPI device's chip select is active during the message,
 * and then is normally disabled between messages.  Drivers for some
 * frequently-used devices may want to minimize costs of selecting a chip,
 * by leaving it selected in anticipation that the next message will go
 * to the same chip.  (That may increase power usage.)
 *
 * Also, the caller is guaranteeing that the memory associated with the
 * message will not be freed before this call returns.
 *
 * It returns zero on success, else a negative error code.
 */
int spi_sync(struct spi_device *spi, struct spi_message *message)
{
	return __spi_sync(spi, message, 0);
}
EXPORT_SYMBOL_GPL(spi_sync);
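
/* A minimal sketch of the synchronous path: one full-duplex transfer
 * built on the spi_message/spi_transfer pattern used throughout this
 * file.  my_xfer is an illustrative name; the call may sleep, so it
 * must not be used from IRQ context.
 */
#if 0	/* illustrative sketch, not part of spi.c */
static int my_xfer(struct spi_device *spi, const u8 *tx, u8 *rx, size_t len)
{
	struct spi_transfer t = {
		.tx_buf = tx,	/* must be DMA-safe */
		.rx_buf = rx,
		.len = len,
	};
	struct spi_message m;

	spi_message_init(&m);
	spi_message_add_tail(&t, &m);

	return spi_sync(spi, &m);	/* 0 on success, -errno on fault */
}
#endif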

/**
 * spi_sync_locked - version of spi_sync with exclusive bus usage
 * @spi: device with which data will be exchanged
 * @message: describes the data transfers
 * Context: can sleep
 *
 * This call may only be used from a context that may sleep.  The sleep
 * is non-interruptible, and has no timeout.  Low-overhead controller
 * drivers may DMA directly into and out of the message buffers.
 *
 * This call should be used by drivers that require exclusive access to the
 * SPI bus.  It has to be preceded by a spi_bus_lock call.  The SPI bus must
 * be released by a spi_bus_unlock call when the exclusive access is over.
 *
 * It returns zero on success, else a negative error code.
 */
int spi_sync_locked(struct spi_device *spi, struct spi_message *message)
{
	return __spi_sync(spi, message, 1);
}
EXPORT_SYMBOL_GPL(spi_sync_locked);

/**
 * spi_bus_lock - obtain a lock for exclusive SPI bus usage
 * @master: SPI bus master that should be locked for exclusive bus access
 * Context: can sleep
 *
 * This call may only be used from a context that may sleep.  The sleep
 * is non-interruptible, and has no timeout.
 *
 * This call should be used by drivers that require exclusive access to the
 * SPI bus.  The SPI bus must be released by a spi_bus_unlock call when the
 * exclusive access is over.  Data transfer must be done by spi_sync_locked
 * and spi_async_locked calls when the SPI bus lock is held.
 *
 * It returns zero on success, else a negative error code.
 */
int spi_bus_lock(struct spi_master *master)
{
	unsigned long flags;

	mutex_lock(&master->bus_lock_mutex);

	spin_lock_irqsave(&master->bus_lock_spinlock, flags);
	master->bus_lock_flag = 1;
	spin_unlock_irqrestore(&master->bus_lock_spinlock, flags);

	/* mutex remains locked until spi_bus_unlock is called */

	return 0;
}
EXPORT_SYMBOL_GPL(spi_bus_lock);

/**
 * spi_bus_unlock - release the lock for exclusive SPI bus usage
 * @master: SPI bus master that was locked for exclusive bus access
 * Context: can sleep
 *
 * This call may only be used from a context that may sleep.  The sleep
 * is non-interruptible, and has no timeout.
 *
 * This call releases an SPI bus lock previously obtained by an spi_bus_lock
 * call.
 *
 * It returns zero on success, else a negative error code.
 */
int spi_bus_unlock(struct spi_master *master)
{
	master->bus_lock_flag = 0;

	mutex_unlock(&master->bus_lock_mutex);

	return 0;
}
EXPORT_SYMBOL_GPL(spi_bus_unlock);
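
/* A sketch of the lock/transfer/unlock pattern the three calls above
 * implement.  While the bus is locked, other clients calling plain
 * spi_async() get -EBUSY and spi_sync() callers block on the mutex
 * until spi_bus_unlock().  my_atomic_sequence and the two messages
 * are illustrative names.
 */
#if 0	/* illustrative sketch, not part of spi.c */
static int my_atomic_sequence(struct spi_device *spi,
			      struct spi_message *m1, struct spi_message *m2)
{
	struct spi_master *master = spi->master;
	int ret;

	spi_bus_lock(master);

	/* only the *_locked transfer calls may be used in here */
	ret = spi_sync_locked(spi, m1);
	if (ret == 0)
		ret = spi_sync_locked(spi, m2);

	spi_bus_unlock(master);
	return ret;
}
#endif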

/* portable code must never pass more than 32 bytes */
#define SPI_BUFSIZ	max(32, SMP_CACHE_BYTES)

static u8	*buf;

/**
 * spi_write_then_read - SPI synchronous write followed by read
 * @spi: device with which data will be exchanged
 * @txbuf: data to be written (need not be dma-safe)
 * @n_tx: size of txbuf, in bytes
 * @rxbuf: buffer into which data will be read (need not be dma-safe)
 * @n_rx: size of rxbuf, in bytes
 * Context: can sleep
 *
 * This performs a half duplex MicroWire style transaction with the
 * device, sending txbuf and then reading rxbuf.  The return value
 * is zero for success, else a negative errno status code.
 * This call may only be used from a context that may sleep.
 *
 * Parameters to this routine are always copied using a small buffer;
 * portable code should never use this for more than 32 bytes.
 * Performance-sensitive or bulk transfer code should instead use
 * spi_{async,sync}() calls with dma-safe buffers.
 */
int spi_write_then_read(struct spi_device *spi,
		const void *txbuf, unsigned n_tx,
		void *rxbuf, unsigned n_rx)
{
	static DEFINE_MUTEX(lock);

	int			status;
	struct spi_message	message;
	struct spi_transfer	x[2];
	u8			*local_buf;

	/* Use the preallocated DMA-safe buffer.  We can't avoid copying
	 * here (this is a pure convenience thing), but we can keep heap
	 * costs out of the hot path ...
	 */
	if ((n_tx + n_rx) > SPI_BUFSIZ)
		return -EINVAL;

	spi_message_init(&message);
	memset(x, 0, sizeof(x));
	if (n_tx) {
		x[0].len = n_tx;
		spi_message_add_tail(&x[0], &message);
	}
	if (n_rx) {
		x[1].len = n_rx;
		spi_message_add_tail(&x[1], &message);
	}

	/* ... unless someone else is using the preallocated buffer */
	if (!mutex_trylock(&lock)) {
		local_buf = kmalloc(SPI_BUFSIZ, GFP_KERNEL);
		if (!local_buf)
			return -ENOMEM;
	} else
		local_buf = buf;

	memcpy(local_buf, txbuf, n_tx);
	x[0].tx_buf = local_buf;
	x[1].rx_buf = local_buf + n_tx;

	/* do the i/o */
	status = spi_sync(spi, &message);
	if (status == 0)
		memcpy(rxbuf, x[1].rx_buf, n_rx);

	if (x[0].tx_buf == buf)
		mutex_unlock(&lock);
	else
		kfree(local_buf);

	return status;
}
EXPORT_SYMBOL_GPL(spi_write_then_read);
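
/* Typical use of the convenience helper above: one command byte out,
 * two data bytes back, with stack buffers, since the helper bounces
 * everything through its own DMA-safe buffer.  my_read_reg16 and the
 * 0x80 "read" bit are illustrative, not from any real device.
 */
#if 0	/* illustrative sketch, not part of spi.c */
static int my_read_reg16(struct spi_device *spi, u8 reg, u16 *val)
{
	u8 cmd = reg | 0x80;	/* hypothetical read-command bit */
	u8 rx[2];
	int status;

	status = spi_write_then_read(spi, &cmd, 1, rx, 2);
	if (status == 0)
		*val = (rx[0] << 8) | rx[1];
	return status;
}
#endif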

/*-------------------------------------------------------------------------*/

static int __init spi_init(void)
{
	int	status;

	buf = kmalloc(SPI_BUFSIZ, GFP_KERNEL);
	if (!buf) {
		status = -ENOMEM;
		goto err0;
	}

	status = bus_register(&spi_bus_type);
	if (status < 0)
		goto err1;

	status = class_register(&spi_master_class);
	if (status < 0)
		goto err2;
	return 0;

err2:
	bus_unregister(&spi_bus_type);
err1:
	kfree(buf);
	buf = NULL;
err0:
	return status;
}

/* board_info is normally registered in arch_initcall(),
 * but even essential drivers wait till later
 *
 * REVISIT only boardinfo really needs static linking.  The rest (device and
 * driver registration) _could_ be dynamically linked (modular) ... costs
 * include needing to have boardinfo data structures be much more public.
 */
postcore_initcall(spi_init);

include/linux/amba/pl022.h
/*
 * include/linux/amba/pl022.h
 *
 * Copyright (C) 2008-2009 ST-Ericsson AB
 * Copyright (C) 2006 STMicroelectronics Pvt. Ltd.
 *
 * Author: Linus Walleij <linus.walleij@stericsson.com>
 *
 * Initial version inspired by:
 *	linux-2.6.17-rc3-mm1/drivers/spi/pxa2xx_spi.c
 * Initial adoption to PL022 by:
 *	Sachin Verma <sachin.verma@st.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 */

#ifndef _SSP_PL022_H
#define _SSP_PL022_H

#include <linux/types.h>

/**
 * enum ssp_loopback - whether the SSP is in loopback mode or not
 */
enum ssp_loopback {
	LOOPBACK_DISABLED,
	LOOPBACK_ENABLED
};

/**
 * enum ssp_interface - interfaces allowed for this SSP Controller
 * @SSP_INTERFACE_MOTOROLA_SPI: Motorola Interface
 * @SSP_INTERFACE_TI_SYNC_SERIAL: Texas Instruments Synchronous Serial
 *	interface
 * @SSP_INTERFACE_NATIONAL_MICROWIRE: National Semiconductor Microwire
 *	interface
 * @SSP_INTERFACE_UNIDIRECTIONAL: Unidirectional interface (STn8810
 *	& STn8815 only)
 */
enum ssp_interface {
	SSP_INTERFACE_MOTOROLA_SPI,
	SSP_INTERFACE_TI_SYNC_SERIAL,
	SSP_INTERFACE_NATIONAL_MICROWIRE,
	SSP_INTERFACE_UNIDIRECTIONAL
};

/**
 * enum ssp_hierarchy - whether the SSP is configured as master or slave
 */
enum ssp_hierarchy {
	SSP_MASTER,
	SSP_SLAVE
};

/**
 * struct ssp_clock_params - clock parameters, to set the SSP clock at a
 * desired frequency
 */
struct ssp_clock_params {
	u8 cpsdvsr;	/* value from 2 to 254 (even only!) */
	u8 scr;		/* value from 0 to 255 */
};

/**
 * enum ssp_rx_endian - endianness of Rx FIFO data;
 * this feature is only available in the ST version of the PL022
 */
enum ssp_rx_endian {
	SSP_RX_MSB,
	SSP_RX_LSB
};

/**
 * enum ssp_tx_endian - endianness of Tx FIFO data
 */
enum ssp_tx_endian {
	SSP_TX_MSB,
	SSP_TX_LSB
};

/**
 * enum ssp_data_size - number of bits in one data element
 */
enum ssp_data_size {
	SSP_DATA_BITS_4 = 0x03, SSP_DATA_BITS_5, SSP_DATA_BITS_6,
	SSP_DATA_BITS_7, SSP_DATA_BITS_8, SSP_DATA_BITS_9,
	SSP_DATA_BITS_10, SSP_DATA_BITS_11, SSP_DATA_BITS_12,
	SSP_DATA_BITS_13, SSP_DATA_BITS_14, SSP_DATA_BITS_15,
	SSP_DATA_BITS_16, SSP_DATA_BITS_17, SSP_DATA_BITS_18,
	SSP_DATA_BITS_19, SSP_DATA_BITS_20, SSP_DATA_BITS_21,
	SSP_DATA_BITS_22, SSP_DATA_BITS_23, SSP_DATA_BITS_24,
	SSP_DATA_BITS_25, SSP_DATA_BITS_26, SSP_DATA_BITS_27,
	SSP_DATA_BITS_28, SSP_DATA_BITS_29, SSP_DATA_BITS_30,
	SSP_DATA_BITS_31, SSP_DATA_BITS_32
};

/**
 * enum ssp_mode - SSP mode of operation (communication modes)
 */
enum ssp_mode {
	INTERRUPT_TRANSFER,
	POLLING_TRANSFER,
	DMA_TRANSFER
};

/**
 * enum ssp_rx_level_trig - receive FIFO watermark level which triggers
 * the interrupt; the interrupt fires when _N_ or more elements are in
 * the RX FIFO.
 */
enum ssp_rx_level_trig {
	SSP_RX_1_OR_MORE_ELEM,
	SSP_RX_4_OR_MORE_ELEM,
	SSP_RX_8_OR_MORE_ELEM,
	SSP_RX_16_OR_MORE_ELEM,
	SSP_RX_32_OR_MORE_ELEM
};

/**
 * enum ssp_tx_level_trig - transmit FIFO watermark level which triggers
 * the interrupt; the interrupt fires when _N_ or more empty locations
 * are in the TX FIFO.
 */
enum ssp_tx_level_trig {
	SSP_TX_1_OR_MORE_EMPTY_LOC,
	SSP_TX_4_OR_MORE_EMPTY_LOC,
	SSP_TX_8_OR_MORE_EMPTY_LOC,
	SSP_TX_16_OR_MORE_EMPTY_LOC,
	SSP_TX_32_OR_MORE_EMPTY_LOC
};

/**
 * enum ssp_spi_clk_phase - clock phase (Motorola SPI interface only)
 * @SSP_CLK_FIRST_EDGE: Receive data on first edge transition (actual direction depends on polarity)
 * @SSP_CLK_SECOND_EDGE: Receive data on second edge transition (actual direction depends on polarity)
 */
enum ssp_spi_clk_phase {
	SSP_CLK_FIRST_EDGE,
	SSP_CLK_SECOND_EDGE
};

/**
 * enum ssp_spi_clk_pol - clock polarity (Motorola SPI interface only)
 * @SSP_CLK_POL_IDLE_LOW: Low inactive level
 * @SSP_CLK_POL_IDLE_HIGH: High inactive level
 */
enum ssp_spi_clk_pol {
	SSP_CLK_POL_IDLE_LOW,
	SSP_CLK_POL_IDLE_HIGH
};

/**
 * enum ssp_microwire_ctrl_len - Microwire control length; the command
 * size in Microwire format
 */
enum ssp_microwire_ctrl_len {
	SSP_BITS_4 = 0x03, SSP_BITS_5, SSP_BITS_6,
	SSP_BITS_7, SSP_BITS_8, SSP_BITS_9,
	SSP_BITS_10, SSP_BITS_11, SSP_BITS_12,
	SSP_BITS_13, SSP_BITS_14, SSP_BITS_15,
	SSP_BITS_16, SSP_BITS_17, SSP_BITS_18,
	SSP_BITS_19, SSP_BITS_20, SSP_BITS_21,
	SSP_BITS_22, SSP_BITS_23, SSP_BITS_24,
	SSP_BITS_25, SSP_BITS_26, SSP_BITS_27,
	SSP_BITS_28, SSP_BITS_29, SSP_BITS_30,
	SSP_BITS_31, SSP_BITS_32
};

/**
 * enum ssp_microwire_wait_state - Microwire wait state
 * @SSP_MWIRE_WAIT_ZERO: No wait state inserted after last command bit
 * @SSP_MWIRE_WAIT_ONE: One wait state inserted after last command bit
 */
enum ssp_microwire_wait_state {
	SSP_MWIRE_WAIT_ZERO,
	SSP_MWIRE_WAIT_ONE
};

/**
 * enum ssp_duplex - whether full/half duplex on Microwire; only
 * available in the ST Micro variant.
 * @SSP_MICROWIRE_CHANNEL_FULL_DUPLEX: SSPTXD becomes bi-directional,
 *	SSPRXD not used
 * @SSP_MICROWIRE_CHANNEL_HALF_DUPLEX: SSPTXD is an output, SSPRXD is
 *	an input.
 */
enum ssp_duplex {
	SSP_MICROWIRE_CHANNEL_FULL_DUPLEX,
	SSP_MICROWIRE_CHANNEL_HALF_DUPLEX
};

/**
 * enum ssp_clkdelay - an optional clock delay on the feedback clock,
 * only available in the ST Micro PL023 variant.
 * @SSP_FEEDBACK_CLK_DELAY_NONE: no delay, the data coming in from the
 *	slave is sampled directly
 * @SSP_FEEDBACK_CLK_DELAY_1T: the incoming slave data is sampled with
 *	a delay of T-dt
 * @SSP_FEEDBACK_CLK_DELAY_2T: ditto, with a delay of 2T-dt
 * @SSP_FEEDBACK_CLK_DELAY_3T: ditto, with a delay of 3T-dt
 * @SSP_FEEDBACK_CLK_DELAY_4T: ditto, with a delay of 4T-dt
 * @SSP_FEEDBACK_CLK_DELAY_5T: ditto, with a delay of 5T-dt
 * @SSP_FEEDBACK_CLK_DELAY_6T: ditto, with a delay of 6T-dt
 * @SSP_FEEDBACK_CLK_DELAY_7T: ditto, with a delay of 7T-dt
210 */ 210 */
211 enum ssp_clkdelay { 211 enum ssp_clkdelay {
212 SSP_FEEDBACK_CLK_DELAY_NONE, 212 SSP_FEEDBACK_CLK_DELAY_NONE,
213 SSP_FEEDBACK_CLK_DELAY_1T, 213 SSP_FEEDBACK_CLK_DELAY_1T,
214 SSP_FEEDBACK_CLK_DELAY_2T, 214 SSP_FEEDBACK_CLK_DELAY_2T,
215 SSP_FEEDBACK_CLK_DELAY_3T, 215 SSP_FEEDBACK_CLK_DELAY_3T,
216 SSP_FEEDBACK_CLK_DELAY_4T, 216 SSP_FEEDBACK_CLK_DELAY_4T,
217 SSP_FEEDBACK_CLK_DELAY_5T, 217 SSP_FEEDBACK_CLK_DELAY_5T,
218 SSP_FEEDBACK_CLK_DELAY_6T, 218 SSP_FEEDBACK_CLK_DELAY_6T,
219 SSP_FEEDBACK_CLK_DELAY_7T 219 SSP_FEEDBACK_CLK_DELAY_7T
220 }; 220 };
221 221
222 /** 222 /**
223 * CHIP select/deselect commands 223 * CHIP select/deselect commands
224 */ 224 */
225 enum ssp_chip_select { 225 enum ssp_chip_select {
226 SSP_CHIP_SELECT, 226 SSP_CHIP_SELECT,
227 SSP_CHIP_DESELECT 227 SSP_CHIP_DESELECT
228 }; 228 };
229 229
230 230
231 struct dma_chan; 231 struct dma_chan;
232 /** 232 /**
233 * struct pl022_ssp_master - device.platform_data for SPI controller devices. 233 * struct pl022_ssp_master - device.platform_data for SPI controller devices.
234 * @bus_id: identifier for this bus
234 * @num_chipselect: chipselects are used to distinguish individual 235 * @num_chipselect: chipselects are used to distinguish individual
235 * SPI slaves, and are numbered from zero to num_chipselects - 1. 236 * SPI slaves, and are numbered from zero to num_chipselects - 1.
236 * each slave has a chipselect signal, but it's common that not 237 * each slave has a chipselect signal, but it's common that not
237 * every chipselect is connected to a slave. 238 * every chipselect is connected to a slave.
238 * @enable_dma: if true enables DMA driven transfers. 239 * @enable_dma: if true enables DMA driven transfers.
239 * @dma_rx_param: parameter to locate an RX DMA channel. 240 * @dma_rx_param: parameter to locate an RX DMA channel.
240 * @dma_tx_param: parameter to locate a TX DMA channel. 241 * @dma_tx_param: parameter to locate a TX DMA channel.
241 * @autosuspend_delay: delay in ms following transfer completion before the 242 * @autosuspend_delay: delay in ms following transfer completion before the
242 * runtime power management system suspends the device. A setting of 0 243 * runtime power management system suspends the device. A setting of 0
243 * indicates no delay and the device will be suspended immediately. 244 * indicates no delay and the device will be suspended immediately.
244 * @rt: indicates the controller should run the message pump with realtime 245 * @rt: indicates the controller should run the message pump with realtime
245 * priority to minimise the transfer latency on the bus. 246 * priority to minimise the transfer latency on the bus.
246 */ 247 */
247 struct pl022_ssp_controller { 248 struct pl022_ssp_controller {
248 u16 bus_id; 249 u16 bus_id;
249 u8 num_chipselect; 250 u8 num_chipselect;
250 u8 enable_dma:1; 251 u8 enable_dma:1;
251 bool (*dma_filter)(struct dma_chan *chan, void *filter_param); 252 bool (*dma_filter)(struct dma_chan *chan, void *filter_param);
252 void *dma_rx_param; 253 void *dma_rx_param;
253 void *dma_tx_param; 254 void *dma_tx_param;
254 int autosuspend_delay; 255 int autosuspend_delay;
255 bool rt; 256 bool rt;
256 }; 257 };
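Platform data of this shape is normally filled in by board support code and attached to the controller's AMBA device. The following is a minimal sketch only, not code from this commit: the DMA request macros, the filter routine, and every value chosen are illustrative assumptions.

```c
/*
 * Hypothetical board-file fragment supplying pl022_ssp_controller as
 * platform data. BOARD_DMA_RX_REQ/BOARD_DMA_TX_REQ and board_dma_filter()
 * are made-up placeholders; a real board would match its DMA controller.
 */
#include <linux/dmaengine.h>
#include <linux/amba/pl022.h>

#define BOARD_DMA_RX_REQ	((void *)5)	/* invented request lines */
#define BOARD_DMA_TX_REQ	((void *)6)

static bool board_dma_filter(struct dma_chan *chan, void *filter_param)
{
	/*
	 * A real filter would compare chan against the request line in
	 * filter_param; this sketch accepts any channel.
	 */
	return true;
}

static struct pl022_ssp_controller board_ssp0_plat = {
	.bus_id            = 0,
	.num_chipselect    = 1,		/* only CS0 wired to a slave */
	.enable_dma        = 1,
	.dma_filter        = board_dma_filter,
	.dma_rx_param      = BOARD_DMA_RX_REQ,
	.dma_tx_param      = BOARD_DMA_TX_REQ,
	.autosuspend_delay = 100,	/* ms idle before runtime suspend */
	.rt                = false,
};
```

With the runtime PM change in this pull, @autosuspend_delay gives boards a knob to trade wake-up latency against idle power.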
257 258 
258 259 /**
259 260  * struct ssp_config_chip - spi_board_info.controller_data for SPI
260 261  * slave devices, copied to spi_device.controller_data.
261 262  *
262     - * @lbm: used for test purpose to internally connect RX and TX
263 263  * @iface: Interface type(Motorola, TI, Microwire, Universal)
264 264  * @hierarchy: sets whether interface is master or slave
265 265  * @slave_tx_disable: SSPTXD is disconnected (in slave mode only)
266 266  * @clk_freq: Tune freq parameters of SSP(when in master mode)
267     - * @endian_rx: Endianess of Data in Rx FIFO
268     - * @endian_tx: Endianess of Data in Tx FIFO
269     - * @data_size: Width of data element(4 to 32 bits)
270 267  * @com_mode: communication mode: polling, Interrupt or DMA
271 268  * @rx_lev_trig: Rx FIFO watermark level (for IT & DMA mode)
272 269  * @tx_lev_trig: Tx FIFO watermark level (for IT & DMA mode)
273     - * @clk_phase: Motorola SPI interface Clock phase
274     - * @clk_pol: Motorola SPI interface Clock polarity
275 270  * @ctrl_len: Microwire interface: Control length
276 271  * @wait_state: Microwire interface: Wait state
277 272  * @duplex: Microwire interface: Full/Half duplex
278 273  * @clkdelay: on the PL023 variant, the delay in feeback clock cycles
279 274  * before sampling the incoming line
280 275  * @cs_control: function pointer to board-specific function to
281 276  * assert/deassert I/O port to control HW generation of devices chip-select.
282     - * @dma_xfer_type: Type of DMA xfer (Mem-to-periph or Periph-to-Periph)
283     - * @dma_config: DMA configuration for SSP controller and peripheral
284 277  */
285 278 struct pl022_config_chip {
286 279 	enum ssp_interface iface;
287 280 	enum ssp_hierarchy hierarchy;
288 281 	bool slave_tx_disable;
289 282 	struct ssp_clock_params clk_freq;
290 283 	enum ssp_mode com_mode;
291 284 	enum ssp_rx_level_trig rx_lev_trig;
292 285 	enum ssp_tx_level_trig tx_lev_trig;
293 286 	enum ssp_microwire_ctrl_len ctrl_len;
294 287 	enum ssp_microwire_wait_state wait_state;
295 288 	enum ssp_duplex duplex;
296 289 	enum ssp_clkdelay clkdelay;
297 290 	void (*cs_control) (u32 control);
298 291 };
299 292 
300 293 #endif /* _SSP_PL022_H */
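The per-slave configuration in the diff above reaches the driver through spi_board_info.controller_data. A hedged sketch of how a board might wire that up follows; the modalias, speed, bus number, GPIO number and cs_control polarity are all invented for illustration, not taken from this commit.

```c
/*
 * Hypothetical board-file fragment hanging a pl022_config_chip off
 * spi_board_info.controller_data. BOARD_SPI_CS_GPIO and the values
 * below are illustrative assumptions.
 */
#include <linux/gpio.h>
#include <linux/spi/spi.h>
#include <linux/amba/pl022.h>

#define BOARD_SPI_CS_GPIO	42	/* made-up chip-select GPIO */

static void board_cs_control(u32 control)
{
	/* Assume an active-low CS line driven from a GPIO. */
	gpio_set_value(BOARD_SPI_CS_GPIO,
		       control == SSP_CHIP_SELECT ? 0 : 1);
}

static struct pl022_config_chip board_chip_cfg = {
	.iface       = SSP_INTERFACE_MOTOROLA_SPI,
	.hierarchy   = SSP_MASTER,
	.com_mode    = INTERRUPT_TRANSFER,
	.rx_lev_trig = SSP_RX_1_OR_MORE_ELEM,
	.tx_lev_trig = SSP_TX_1_OR_MORE_EMPTY_LOC,
	.cs_control  = board_cs_control,
};

static struct spi_board_info board_spi_devs[] __initdata = {
	{
		.modalias        = "spidev",
		.controller_data = &board_chip_cfg,
		.max_speed_hz    = 1000000,
		.bus_num         = 0,
		.chip_select     = 0,
		.mode            = SPI_MODE_0,
	},
};
```

The board's init code would then register the table once with spi_register_board_info(board_spi_devs, ARRAY_SIZE(board_spi_devs)); the Motorola-format SPI clock phase and polarity removed from the doc above are expressed through spi_board_info.mode instead.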