Commit 5030c807907ae90ad21e9220c1a9d592558deba2

Authored by Stefan Richter
1 parent 0fcff4e393

ieee1394: remove unused variables

which caused gcc 4.6 (via its new -Wunused-but-set-variable) to warn about
    variable 'XYZ' set but not used.
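
For illustration, the warning fires on this kind of hypothetical
pattern (not code from the tree):

    static void example(void)
    {
            int x;          /* set below, but never read */

            x = 42;         /* gcc 4.6: variable 'x' set but not used */
    }

The store happens, but nothing ever reads the value afterwards, so both
the variable and the assignment are dead code.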

sbp2.c, unit_characteristics:

The underlying problem spotted here, an incomplete implementation, is
already 50% fixed in drivers/firewire/sbp2.c, which observes
mgt_ORB_timeout but not yet ORB_size.
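
For reference, a completed decoder could look like the sketch below.
It assumes the SBP-2 layout of the unit_characteristics immediate value
(mgt_ORB_timeout in units of 500 ms in bits 15..8, ORB_size in quadlets
in bits 7..0); the function name and out-parameters are illustrative,
not existing driver API:

    #include <linux/types.h>

    /* sketch: decode an SBP-2 unit_characteristics immediate value */
    static void decode_unit_characteristics(u32 value,
                                            unsigned int *mgt_orb_timeout_ms,
                                            unsigned int *orb_size_quadlets)
    {
            /* the management ORB timeout is stored in units of 500 ms */
            *mgt_orb_timeout_ms = ((value >> 8) & 0xff) * 500;

            /* maximum ORB length the target accepts, in quadlets; this
               is the half that drivers/firewire/sbp2.c does not observe
               yet */
            *orb_size_quadlets = value & 0xff;
    }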

raw1394.c, length_conflict; dv1394.c, ts_off:

It is impossible to tell why these variables are there.  We can safely
remove them though, because we don't need a compiler warning to realize
that we are dealing with (at least stylistically) flawed code here.

dv1394.c, packet_time:

This was used in a debug macro that is only compiled in when
DV1394_DEBUG_LEVEL >= 2 is defined at compile time.  Just drop it, since
nobody debugs dv1394 anymore.  This avoids noise in regular kernel builds.
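
The pattern was roughly the following (a simplified sketch, not the
verbatim dv1394.c code; the helper name and computation are made up):

    #include <linux/kernel.h>

    #define DV1394_DEBUG_LEVEL 0

    #if DV1394_DEBUG_LEVEL >= 2
    #define irq_printk( args... ) printk( args )
    #else
    #define irq_printk( args... ) do {} while (0)
    #endif

    static void log_packet_time(u32 cycle)      /* hypothetical helper */
    {
            u32 packet_time;

            packet_time = cycle & 0x1FFF;       /* illustrative computation */
            irq_printk("packet_time = %u\n", packet_time);
    }

At DV1394_DEBUG_LEVEL 0, irq_printk() expands to do {} while (0), so
packet_time is set but never read, which is exactly what gcc 4.6 flags.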

dv1394.c, ohci; eth1394.c, priv:

These variables can clearly go away.  Somebody wanted to use them but
then didn't (or no longer does).

Note, all of this code is considered to be at its end of life and is
thus not really meant to receive janitorial updates anymore.  But if we
can easily remove noisy warnings from kernel builds, we should.

Reported-by: Justin P. Mattock <justinmattock@gmail.com>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>

Showing 4 changed files with 7 additions and 24 deletions

drivers/ieee1394/dv1394.c
1 /* 1 /*
2 * dv1394.c - DV input/output over IEEE 1394 on OHCI chips 2 * dv1394.c - DV input/output over IEEE 1394 on OHCI chips
3 * Copyright (C)2001 Daniel Maas <dmaas@dcine.com> 3 * Copyright (C)2001 Daniel Maas <dmaas@dcine.com>
4 * receive by Dan Dennedy <dan@dennedy.org> 4 * receive by Dan Dennedy <dan@dennedy.org>
5 * 5 *
6 * based on: 6 * based on:
7 * video1394.c - video driver for OHCI 1394 boards 7 * video1394.c - video driver for OHCI 1394 boards
8 * Copyright (C)1999,2000 Sebastien Rougeaux <sebastien.rougeaux@anu.edu.au> 8 * Copyright (C)1999,2000 Sebastien Rougeaux <sebastien.rougeaux@anu.edu.au>
9 * 9 *
10 * This program is free software; you can redistribute it and/or modify 10 * This program is free software; you can redistribute it and/or modify
11 * it under the terms of the GNU General Public License as published by 11 * it under the terms of the GNU General Public License as published by
12 * the Free Software Foundation; either version 2 of the License, or 12 * the Free Software Foundation; either version 2 of the License, or
13 * (at your option) any later version. 13 * (at your option) any later version.
14 * 14 *
15 * This program is distributed in the hope that it will be useful, 15 * This program is distributed in the hope that it will be useful,
16 * but WITHOUT ANY WARRANTY; without even the implied warranty of 16 * but WITHOUT ANY WARRANTY; without even the implied warranty of
17 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 17 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
18 * GNU General Public License for more details. 18 * GNU General Public License for more details.
19 * 19 *
20 * You should have received a copy of the GNU General Public License 20 * You should have received a copy of the GNU General Public License
21 * along with this program; if not, write to the Free Software Foundation, 21 * along with this program; if not, write to the Free Software Foundation,
22 * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 22 * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
23 */ 23 */
24 24
25 /* 25 /*
26 OVERVIEW 26 OVERVIEW
27 27
28 I designed dv1394 as a "pipe" that you can use to shoot DV onto a 28 I designed dv1394 as a "pipe" that you can use to shoot DV onto a
29 FireWire bus. In transmission mode, dv1394 does the following: 29 FireWire bus. In transmission mode, dv1394 does the following:
30 30
31 1. accepts contiguous frames of DV data from user-space, via write() 31 1. accepts contiguous frames of DV data from user-space, via write()
32 or mmap() (see dv1394.h for the complete API) 32 or mmap() (see dv1394.h for the complete API)
33 2. wraps IEC 61883 packets around the DV data, inserting 33 2. wraps IEC 61883 packets around the DV data, inserting
34 empty synchronization packets as necessary 34 empty synchronization packets as necessary
35 3. assigns accurate SYT timestamps to the outgoing packets 35 3. assigns accurate SYT timestamps to the outgoing packets
36 4. shoots them out using the OHCI card's IT DMA engine 36 4. shoots them out using the OHCI card's IT DMA engine
37 37
38 Thanks to Dan Dennedy, we now have a receive mode that does the following: 38 Thanks to Dan Dennedy, we now have a receive mode that does the following:
39 39
40 1. accepts raw IEC 61883 packets from the OHCI card 40 1. accepts raw IEC 61883 packets from the OHCI card
41 2. re-assembles the DV data payloads into contiguous frames, 41 2. re-assembles the DV data payloads into contiguous frames,
42 discarding empty packets 42 discarding empty packets
43 3. sends the DV data to user-space via read() or mmap() 43 3. sends the DV data to user-space via read() or mmap()
44 */ 44 */
45 45
46 /* 46 /*
47 TODO: 47 TODO:
48 48
49 - tunable frame-drop behavior: either loop last frame, or halt transmission 49 - tunable frame-drop behavior: either loop last frame, or halt transmission
50 50
51 - use a scatter/gather buffer for DMA programs (f->descriptor_pool) 51 - use a scatter/gather buffer for DMA programs (f->descriptor_pool)
52 so that we don't rely on allocating 64KB of contiguous kernel memory 52 so that we don't rely on allocating 64KB of contiguous kernel memory
53 via pci_alloc_consistent() 53 via pci_alloc_consistent()
54 54
55 DONE: 55 DONE:
56 - during reception, better handling of dropped frames and continuity errors 56 - during reception, better handling of dropped frames and continuity errors
57 - during reception, prevent DMA from bypassing the irq tasklets 57 - during reception, prevent DMA from bypassing the irq tasklets
58 - reduce irq rate during reception (1/250 packets). 58 - reduce irq rate during reception (1/250 packets).
59 - add many more internal buffers during reception with scatter/gather dma. 59 - add many more internal buffers during reception with scatter/gather dma.
60 - add dbc (continuity) checking on receive, increment status.dropped_frames 60 - add dbc (continuity) checking on receive, increment status.dropped_frames
61 if not continuous. 61 if not continuous.
62 - restart IT DMA after a bus reset 62 - restart IT DMA after a bus reset
63 - safely obtain and release ISO Tx channels in cooperation with OHCI driver 63 - safely obtain and release ISO Tx channels in cooperation with OHCI driver
64 - map received DIF blocks to their proper location in DV frame (ensure 64 - map received DIF blocks to their proper location in DV frame (ensure
65 recovery if dropped packet) 65 recovery if dropped packet)
66 - handle bus resets gracefully (OHCI card seems to take care of this itself(!)) 66 - handle bus resets gracefully (OHCI card seems to take care of this itself(!))
67 - do not allow resizing the user_buf once allocated; eliminate nuke_buffer_mappings 67 - do not allow resizing the user_buf once allocated; eliminate nuke_buffer_mappings
68 - eliminated #ifdef DV1394_DEBUG_LEVEL by inventing macros debug_printk and irq_printk 68 - eliminated #ifdef DV1394_DEBUG_LEVEL by inventing macros debug_printk and irq_printk
69 - added wmb() and mb() to places where PCI read/write ordering needs to be enforced 69 - added wmb() and mb() to places where PCI read/write ordering needs to be enforced
70 - set video->id correctly 70 - set video->id correctly
71 - store video_cards in an array indexed by OHCI card ID, rather than a list 71 - store video_cards in an array indexed by OHCI card ID, rather than a list
72 - implement DMA context allocation to cooperate with other users of the OHCI 72 - implement DMA context allocation to cooperate with other users of the OHCI
73 - fix all XXX showstoppers 73 - fix all XXX showstoppers
74 - disable IR/IT DMA interrupts on shutdown 74 - disable IR/IT DMA interrupts on shutdown
75 - flush pci writes to the card by issuing a read 75 - flush pci writes to the card by issuing a read
76 - character device dispatching 76 - character device dispatching
77 - switch over to the new kernel DMA API (pci_map_*()) (* needs testing on platforms with IOMMU!) 77 - switch over to the new kernel DMA API (pci_map_*()) (* needs testing on platforms with IOMMU!)
78 - keep all video_cards in a list (for open() via chardev), set file->private_data = video 78 - keep all video_cards in a list (for open() via chardev), set file->private_data = video
79 - dv1394_poll should indicate POLLIN when receiving buffers are available 79 - dv1394_poll should indicate POLLIN when receiving buffers are available
80 - add proc fs interface to set cip_n, cip_d, syt_offset, and video signal 80 - add proc fs interface to set cip_n, cip_d, syt_offset, and video signal
81 - expose xmit and recv as separate devices (not exclusive) 81 - expose xmit and recv as separate devices (not exclusive)
82 - expose NTSC and PAL as separate devices (can be overridden) 82 - expose NTSC and PAL as separate devices (can be overridden)
83 83
84 */ 84 */
85 85
86 #include <linux/kernel.h> 86 #include <linux/kernel.h>
87 #include <linux/list.h> 87 #include <linux/list.h>
88 #include <linux/slab.h> 88 #include <linux/slab.h>
89 #include <linux/interrupt.h> 89 #include <linux/interrupt.h>
90 #include <linux/wait.h> 90 #include <linux/wait.h>
91 #include <linux/errno.h> 91 #include <linux/errno.h>
92 #include <linux/module.h> 92 #include <linux/module.h>
93 #include <linux/init.h> 93 #include <linux/init.h>
94 #include <linux/pci.h> 94 #include <linux/pci.h>
95 #include <linux/fs.h> 95 #include <linux/fs.h>
96 #include <linux/poll.h> 96 #include <linux/poll.h>
97 #include <linux/mutex.h> 97 #include <linux/mutex.h>
98 #include <linux/bitops.h> 98 #include <linux/bitops.h>
99 #include <asm/byteorder.h> 99 #include <asm/byteorder.h>
100 #include <asm/atomic.h> 100 #include <asm/atomic.h>
101 #include <asm/io.h> 101 #include <asm/io.h>
102 #include <asm/uaccess.h> 102 #include <asm/uaccess.h>
103 #include <linux/delay.h> 103 #include <linux/delay.h>
104 #include <asm/pgtable.h> 104 #include <asm/pgtable.h>
105 #include <asm/page.h> 105 #include <asm/page.h>
106 #include <linux/sched.h> 106 #include <linux/sched.h>
107 #include <linux/types.h> 107 #include <linux/types.h>
108 #include <linux/vmalloc.h> 108 #include <linux/vmalloc.h>
109 #include <linux/string.h> 109 #include <linux/string.h>
110 #include <linux/compat.h> 110 #include <linux/compat.h>
111 #include <linux/cdev.h> 111 #include <linux/cdev.h>
112 112
113 #include "dv1394.h" 113 #include "dv1394.h"
114 #include "dv1394-private.h" 114 #include "dv1394-private.h"
115 #include "highlevel.h" 115 #include "highlevel.h"
116 #include "hosts.h" 116 #include "hosts.h"
117 #include "ieee1394.h" 117 #include "ieee1394.h"
118 #include "ieee1394_core.h" 118 #include "ieee1394_core.h"
119 #include "ieee1394_hotplug.h" 119 #include "ieee1394_hotplug.h"
120 #include "ieee1394_types.h" 120 #include "ieee1394_types.h"
121 #include "nodemgr.h" 121 #include "nodemgr.h"
122 #include "ohci1394.h" 122 #include "ohci1394.h"
123 123
124 /* DEBUG LEVELS: 124 /* DEBUG LEVELS:
125 0 - no debugging messages 125 0 - no debugging messages
126 1 - some debugging messages, but none during DMA frame transmission 126 1 - some debugging messages, but none during DMA frame transmission
127 2 - lots of messages, including during DMA frame transmission 127 2 - lots of messages, including during DMA frame transmission
128 (will cause underflows if your machine is too slow!) 128 (will cause underflows if your machine is too slow!)
129 */ 129 */
130 130
131 #define DV1394_DEBUG_LEVEL 0 131 #define DV1394_DEBUG_LEVEL 0
132 132
133 /* for debugging use ONLY: allow more than one open() of the device */ 133 /* for debugging use ONLY: allow more than one open() of the device */
134 /* #define DV1394_ALLOW_MORE_THAN_ONE_OPEN 1 */ 134 /* #define DV1394_ALLOW_MORE_THAN_ONE_OPEN 1 */
135 135
136 #if DV1394_DEBUG_LEVEL >= 2 136 #if DV1394_DEBUG_LEVEL >= 2
137 #define irq_printk( args... ) printk( args ) 137 #define irq_printk( args... ) printk( args )
138 #else 138 #else
139 #define irq_printk( args... ) do {} while (0) 139 #define irq_printk( args... ) do {} while (0)
140 #endif 140 #endif
141 141
142 #if DV1394_DEBUG_LEVEL >= 1 142 #if DV1394_DEBUG_LEVEL >= 1
143 #define debug_printk( args... ) printk( args) 143 #define debug_printk( args... ) printk( args)
144 #else 144 #else
145 #define debug_printk( args... ) do {} while (0) 145 #define debug_printk( args... ) do {} while (0)
146 #endif 146 #endif
147 147
148 /* issue a dummy PCI read to force the preceding write 148 /* issue a dummy PCI read to force the preceding write
149 to be posted to the PCI bus immediately */ 149 to be posted to the PCI bus immediately */
150 150
151 static inline void flush_pci_write(struct ti_ohci *ohci) 151 static inline void flush_pci_write(struct ti_ohci *ohci)
152 { 152 {
153 mb(); 153 mb();
154 reg_read(ohci, OHCI1394_IsochronousCycleTimer); 154 reg_read(ohci, OHCI1394_IsochronousCycleTimer);
155 } 155 }
156 156
157 static void it_tasklet_func(unsigned long data); 157 static void it_tasklet_func(unsigned long data);
158 static void ir_tasklet_func(unsigned long data); 158 static void ir_tasklet_func(unsigned long data);
159 159
160 #ifdef CONFIG_COMPAT 160 #ifdef CONFIG_COMPAT
161 static long dv1394_compat_ioctl(struct file *file, unsigned int cmd, 161 static long dv1394_compat_ioctl(struct file *file, unsigned int cmd,
162 unsigned long arg); 162 unsigned long arg);
163 #endif 163 #endif
164 164
165 /* GLOBAL DATA */ 165 /* GLOBAL DATA */
166 166
167 /* list of all video_cards */ 167 /* list of all video_cards */
168 static LIST_HEAD(dv1394_cards); 168 static LIST_HEAD(dv1394_cards);
169 static DEFINE_SPINLOCK(dv1394_cards_lock); 169 static DEFINE_SPINLOCK(dv1394_cards_lock);
170 170
171 /* translate from a struct file* to the corresponding struct video_card* */ 171 /* translate from a struct file* to the corresponding struct video_card* */
172 172
173 static inline struct video_card* file_to_video_card(struct file *file) 173 static inline struct video_card* file_to_video_card(struct file *file)
174 { 174 {
175 return (struct video_card*) file->private_data; 175 return (struct video_card*) file->private_data;
176 } 176 }
177 177
178 /*** FRAME METHODS *********************************************************/ 178 /*** FRAME METHODS *********************************************************/
179 179
180 static void frame_reset(struct frame *f) 180 static void frame_reset(struct frame *f)
181 { 181 {
182 f->state = FRAME_CLEAR; 182 f->state = FRAME_CLEAR;
183 f->done = 0; 183 f->done = 0;
184 f->n_packets = 0; 184 f->n_packets = 0;
185 f->frame_begin_timestamp = NULL; 185 f->frame_begin_timestamp = NULL;
186 f->assigned_timestamp = 0; 186 f->assigned_timestamp = 0;
187 f->cip_syt1 = NULL; 187 f->cip_syt1 = NULL;
188 f->cip_syt2 = NULL; 188 f->cip_syt2 = NULL;
189 f->mid_frame_timestamp = NULL; 189 f->mid_frame_timestamp = NULL;
190 f->frame_end_timestamp = NULL; 190 f->frame_end_timestamp = NULL;
191 f->frame_end_branch = NULL; 191 f->frame_end_branch = NULL;
192 } 192 }
193 193
194 static struct frame* frame_new(unsigned int frame_num, struct video_card *video) 194 static struct frame* frame_new(unsigned int frame_num, struct video_card *video)
195 { 195 {
196 struct frame *f = kmalloc(sizeof(*f), GFP_KERNEL); 196 struct frame *f = kmalloc(sizeof(*f), GFP_KERNEL);
197 if (!f) 197 if (!f)
198 return NULL; 198 return NULL;
199 199
200 f->video = video; 200 f->video = video;
201 f->frame_num = frame_num; 201 f->frame_num = frame_num;
202 202
203 f->header_pool = pci_alloc_consistent(f->video->ohci->dev, PAGE_SIZE, &f->header_pool_dma); 203 f->header_pool = pci_alloc_consistent(f->video->ohci->dev, PAGE_SIZE, &f->header_pool_dma);
204 if (!f->header_pool) { 204 if (!f->header_pool) {
205 printk(KERN_ERR "dv1394: failed to allocate CIP header pool\n"); 205 printk(KERN_ERR "dv1394: failed to allocate CIP header pool\n");
206 kfree(f); 206 kfree(f);
207 return NULL; 207 return NULL;
208 } 208 }
209 209
210 debug_printk("dv1394: frame_new: allocated CIP header pool at virt 0x%08lx (contig) dma 0x%08lx size %ld\n", 210 debug_printk("dv1394: frame_new: allocated CIP header pool at virt 0x%08lx (contig) dma 0x%08lx size %ld\n",
211 (unsigned long) f->header_pool, (unsigned long) f->header_pool_dma, PAGE_SIZE); 211 (unsigned long) f->header_pool, (unsigned long) f->header_pool_dma, PAGE_SIZE);
212 212
213 f->descriptor_pool_size = MAX_PACKETS * sizeof(struct DMA_descriptor_block); 213 f->descriptor_pool_size = MAX_PACKETS * sizeof(struct DMA_descriptor_block);
214 /* make it an even # of pages */ 214 /* make it an even # of pages */
215 f->descriptor_pool_size += PAGE_SIZE - (f->descriptor_pool_size%PAGE_SIZE); 215 f->descriptor_pool_size += PAGE_SIZE - (f->descriptor_pool_size%PAGE_SIZE);
216 216
217 f->descriptor_pool = pci_alloc_consistent(f->video->ohci->dev, 217 f->descriptor_pool = pci_alloc_consistent(f->video->ohci->dev,
218 f->descriptor_pool_size, 218 f->descriptor_pool_size,
219 &f->descriptor_pool_dma); 219 &f->descriptor_pool_dma);
220 if (!f->descriptor_pool) { 220 if (!f->descriptor_pool) {
221 pci_free_consistent(f->video->ohci->dev, PAGE_SIZE, f->header_pool, f->header_pool_dma); 221 pci_free_consistent(f->video->ohci->dev, PAGE_SIZE, f->header_pool, f->header_pool_dma);
222 kfree(f); 222 kfree(f);
223 return NULL; 223 return NULL;
224 } 224 }
225 225
226 debug_printk("dv1394: frame_new: allocated DMA program memory at virt 0x%08lx (contig) dma 0x%08lx size %ld\n", 226 debug_printk("dv1394: frame_new: allocated DMA program memory at virt 0x%08lx (contig) dma 0x%08lx size %ld\n",
227 (unsigned long) f->descriptor_pool, (unsigned long) f->descriptor_pool_dma, f->descriptor_pool_size); 227 (unsigned long) f->descriptor_pool, (unsigned long) f->descriptor_pool_dma, f->descriptor_pool_size);
228 228
229 f->data = 0; 229 f->data = 0;
230 frame_reset(f); 230 frame_reset(f);
231 231
232 return f; 232 return f;
233 } 233 }
234 234
235 static void frame_delete(struct frame *f) 235 static void frame_delete(struct frame *f)
236 { 236 {
237 pci_free_consistent(f->video->ohci->dev, PAGE_SIZE, f->header_pool, f->header_pool_dma); 237 pci_free_consistent(f->video->ohci->dev, PAGE_SIZE, f->header_pool, f->header_pool_dma);
238 pci_free_consistent(f->video->ohci->dev, f->descriptor_pool_size, f->descriptor_pool, f->descriptor_pool_dma); 238 pci_free_consistent(f->video->ohci->dev, f->descriptor_pool_size, f->descriptor_pool, f->descriptor_pool_dma);
239 kfree(f); 239 kfree(f);
240 } 240 }
241 241
242 242
243 243
244 244
245 /* 245 /*
246 frame_prepare() - build the DMA program for transmitting 246 frame_prepare() - build the DMA program for transmitting
247 247
248 Frame_prepare() must be called OUTSIDE the video->spinlock. 248 Frame_prepare() must be called OUTSIDE the video->spinlock.
249 However, frame_prepare() must still be serialized, so 249 However, frame_prepare() must still be serialized, so
250 it should be called WITH the video->mtx taken. 250 it should be called WITH the video->mtx taken.
251 */ 251 */
252 252
253 static void frame_prepare(struct video_card *video, unsigned int this_frame) 253 static void frame_prepare(struct video_card *video, unsigned int this_frame)
254 { 254 {
255 struct frame *f = video->frames[this_frame]; 255 struct frame *f = video->frames[this_frame];
256 int last_frame; 256 int last_frame;
257 257
258 struct DMA_descriptor_block *block; 258 struct DMA_descriptor_block *block;
259 dma_addr_t block_dma; 259 dma_addr_t block_dma;
260 struct CIP_header *cip; 260 struct CIP_header *cip;
261 dma_addr_t cip_dma; 261 dma_addr_t cip_dma;
262 262
263 unsigned int n_descriptors, full_packets, packets_per_frame, payload_size; 263 unsigned int n_descriptors, full_packets, packets_per_frame, payload_size;
264 264
265 /* these flags denote packets that need special attention */ 265 /* these flags denote packets that need special attention */
266 int empty_packet, first_packet, last_packet, mid_packet; 266 int empty_packet, first_packet, last_packet, mid_packet;
267 267
268 __le32 *branch_address, *last_branch_address = NULL; 268 __le32 *branch_address, *last_branch_address = NULL;
269 unsigned long data_p; 269 unsigned long data_p;
270 int first_packet_empty = 0; 270 int first_packet_empty = 0;
271 u32 cycleTimer, ct_sec, ct_cyc, ct_off; 271 u32 cycleTimer, ct_sec, ct_cyc, ct_off;
272 unsigned long irq_flags; 272 unsigned long irq_flags;
273 273
274 irq_printk("frame_prepare( %d ) ---------------------\n", this_frame); 274 irq_printk("frame_prepare( %d ) ---------------------\n", this_frame);
275 275
276 full_packets = 0; 276 full_packets = 0;
277 277
278 278
279 279
280 if (video->pal_or_ntsc == DV1394_PAL) 280 if (video->pal_or_ntsc == DV1394_PAL)
281 packets_per_frame = DV1394_PAL_PACKETS_PER_FRAME; 281 packets_per_frame = DV1394_PAL_PACKETS_PER_FRAME;
282 else 282 else
283 packets_per_frame = DV1394_NTSC_PACKETS_PER_FRAME; 283 packets_per_frame = DV1394_NTSC_PACKETS_PER_FRAME;
284 284
285 while ( full_packets < packets_per_frame ) { 285 while ( full_packets < packets_per_frame ) {
286 empty_packet = first_packet = last_packet = mid_packet = 0; 286 empty_packet = first_packet = last_packet = mid_packet = 0;
287 287
288 data_p = f->data + full_packets * 480; 288 data_p = f->data + full_packets * 480;
289 289
290 /************************************************/ 290 /************************************************/
291 /* allocate a descriptor block and a CIP header */ 291 /* allocate a descriptor block and a CIP header */
292 /************************************************/ 292 /************************************************/
293 293
294 /* note: these should NOT cross a page boundary (DMA restriction) */ 294 /* note: these should NOT cross a page boundary (DMA restriction) */
295 295
296 if (f->n_packets >= MAX_PACKETS) { 296 if (f->n_packets >= MAX_PACKETS) {
297 printk(KERN_ERR "dv1394: FATAL ERROR: max packet count exceeded\n"); 297 printk(KERN_ERR "dv1394: FATAL ERROR: max packet count exceeded\n");
298 return; 298 return;
299 } 299 }
300 300
301 /* the block surely won't cross a page boundary, 301 /* the block surely won't cross a page boundary,
302 since an even number of descriptor_blocks fit on a page */ 302 since an even number of descriptor_blocks fit on a page */
303 block = &(f->descriptor_pool[f->n_packets]); 303 block = &(f->descriptor_pool[f->n_packets]);
304 304
305 /* DMA address of the block = offset of block relative 305 /* DMA address of the block = offset of block relative
306 to the kernel base address of the descriptor pool 306 to the kernel base address of the descriptor pool
307 + DMA base address of the descriptor pool */ 307 + DMA base address of the descriptor pool */
308 block_dma = ((unsigned long) block - (unsigned long) f->descriptor_pool) + f->descriptor_pool_dma; 308 block_dma = ((unsigned long) block - (unsigned long) f->descriptor_pool) + f->descriptor_pool_dma;
309 309
310 310
311 /* the whole CIP pool fits on one page, so no worries about boundaries */ 311 /* the whole CIP pool fits on one page, so no worries about boundaries */
312 if ( ((unsigned long) &(f->header_pool[f->n_packets]) - (unsigned long) f->header_pool) 312 if ( ((unsigned long) &(f->header_pool[f->n_packets]) - (unsigned long) f->header_pool)
313 > PAGE_SIZE) { 313 > PAGE_SIZE) {
314 printk(KERN_ERR "dv1394: FATAL ERROR: no room to allocate CIP header\n"); 314 printk(KERN_ERR "dv1394: FATAL ERROR: no room to allocate CIP header\n");
315 return; 315 return;
316 } 316 }
317 317
318 cip = &(f->header_pool[f->n_packets]); 318 cip = &(f->header_pool[f->n_packets]);
319 319
320 /* DMA address of the CIP header = offset of cip 320 /* DMA address of the CIP header = offset of cip
321 relative to kernel base address of the header pool 321 relative to kernel base address of the header pool
322 + DMA base address of the header pool */ 322 + DMA base address of the header pool */
323 cip_dma = (unsigned long) cip % PAGE_SIZE + f->header_pool_dma; 323 cip_dma = (unsigned long) cip % PAGE_SIZE + f->header_pool_dma;
324 324
325 /* is this an empty packet? */ 325 /* is this an empty packet? */
326 326
327 if (video->cip_accum > (video->cip_d - video->cip_n)) { 327 if (video->cip_accum > (video->cip_d - video->cip_n)) {
328 empty_packet = 1; 328 empty_packet = 1;
329 payload_size = 8; 329 payload_size = 8;
330 video->cip_accum -= (video->cip_d - video->cip_n); 330 video->cip_accum -= (video->cip_d - video->cip_n);
331 } else { 331 } else {
332 payload_size = 488; 332 payload_size = 488;
333 video->cip_accum += video->cip_n; 333 video->cip_accum += video->cip_n;
334 } 334 }
335 335
336 /* there are three important packets each frame: 336 /* there are three important packets each frame:
337 337
338 the first packet in the frame - we ask the card to record the timestamp when 338 the first packet in the frame - we ask the card to record the timestamp when
339 this packet is actually sent, so we can monitor 339 this packet is actually sent, so we can monitor
340 how accurate our timestamps are. Also, the first 340 how accurate our timestamps are. Also, the first
341 packet serves as a semaphore to let us know that 341 packet serves as a semaphore to let us know that
342 it's OK to free the *previous* frame's DMA buffer 342 it's OK to free the *previous* frame's DMA buffer
343 343
344 the last packet in the frame - this packet is used to detect buffer underflows. 344 the last packet in the frame - this packet is used to detect buffer underflows.
345 if this is the last ready frame, the last DMA block 345 if this is the last ready frame, the last DMA block
346 will have a branch back to the beginning of the frame 346 will have a branch back to the beginning of the frame
347 (so that the card will re-send the frame on underflow). 347 (so that the card will re-send the frame on underflow).
348 if this branch gets taken, we know that at least one 348 if this branch gets taken, we know that at least one
349 frame has been dropped. When the next frame is ready, 349 frame has been dropped. When the next frame is ready,
350 the branch is pointed to its first packet, and the 350 the branch is pointed to its first packet, and the
351 semaphore is disabled. 351 semaphore is disabled.
352 352
353 a "mid" packet slightly before the end of the frame - this packet should trigger 353 a "mid" packet slightly before the end of the frame - this packet should trigger
354 an interrupt so we can go and assign a timestamp to the first packet 354 an interrupt so we can go and assign a timestamp to the first packet
355 in the next frame. We don't use the very last packet in the frame 355 in the next frame. We don't use the very last packet in the frame
356 for this purpose, because that would leave very little time to set 356 for this purpose, because that would leave very little time to set
357 the timestamp before DMA starts on the next frame. 357 the timestamp before DMA starts on the next frame.
358 */ 358 */
359 359
360 if (f->n_packets == 0) { 360 if (f->n_packets == 0) {
361 first_packet = 1; 361 first_packet = 1;
362 } else if ( full_packets == (packets_per_frame-1) ) { 362 } else if ( full_packets == (packets_per_frame-1) ) {
363 last_packet = 1; 363 last_packet = 1;
364 } else if (f->n_packets == packets_per_frame) { 364 } else if (f->n_packets == packets_per_frame) {
365 mid_packet = 1; 365 mid_packet = 1;
366 } 366 }
367 367
368 368
369 /********************/ 369 /********************/
370 /* setup CIP header */ 370 /* setup CIP header */
371 /********************/ 371 /********************/
372 372
373 /* the timestamp will be written later from the 373 /* the timestamp will be written later from the
374 mid-frame interrupt handler. For now we just 374 mid-frame interrupt handler. For now we just
375 store the address of the CIP header(s) that 375 store the address of the CIP header(s) that
376 need a timestamp. */ 376 need a timestamp. */
377 377
378 /* first packet in the frame needs a timestamp */ 378 /* first packet in the frame needs a timestamp */
379 if (first_packet) { 379 if (first_packet) {
380 f->cip_syt1 = cip; 380 f->cip_syt1 = cip;
381 if (empty_packet) 381 if (empty_packet)
382 first_packet_empty = 1; 382 first_packet_empty = 1;
383 383
384 } else if (first_packet_empty && (f->n_packets == 1) ) { 384 } else if (first_packet_empty && (f->n_packets == 1) ) {
385 /* if the first packet was empty, the second 385 /* if the first packet was empty, the second
386 packet's CIP header also needs a timestamp */ 386 packet's CIP header also needs a timestamp */
387 f->cip_syt2 = cip; 387 f->cip_syt2 = cip;
388 } 388 }
389 389
390 fill_cip_header(cip, 390 fill_cip_header(cip,
391 /* the node ID number of the OHCI card */ 391 /* the node ID number of the OHCI card */
392 reg_read(video->ohci, OHCI1394_NodeID) & 0x3F, 392 reg_read(video->ohci, OHCI1394_NodeID) & 0x3F,
393 video->continuity_counter, 393 video->continuity_counter,
394 video->pal_or_ntsc, 394 video->pal_or_ntsc,
395 0xFFFF /* the timestamp is filled in later */); 395 0xFFFF /* the timestamp is filled in later */);
396 396
397 /* advance counter, only for full packets */ 397 /* advance counter, only for full packets */
398 if ( ! empty_packet ) 398 if ( ! empty_packet )
399 video->continuity_counter++; 399 video->continuity_counter++;
400 400
401 /******************************/ 401 /******************************/
402 /* setup DMA descriptor block */ 402 /* setup DMA descriptor block */
403 /******************************/ 403 /******************************/
404 404
405 /* first descriptor - OUTPUT_MORE_IMMEDIATE, for the controller's IT header */ 405 /* first descriptor - OUTPUT_MORE_IMMEDIATE, for the controller's IT header */
406 fill_output_more_immediate( &(block->u.out.omi), 1, video->channel, 0, payload_size); 406 fill_output_more_immediate( &(block->u.out.omi), 1, video->channel, 0, payload_size);
407 407
408 if (empty_packet) { 408 if (empty_packet) {
409 /* second descriptor - OUTPUT_LAST for CIP header */ 409 /* second descriptor - OUTPUT_LAST for CIP header */
410 fill_output_last( &(block->u.out.u.empty.ol), 410 fill_output_last( &(block->u.out.u.empty.ol),
411 411
412 /* want completion status on all interesting packets */ 412 /* want completion status on all interesting packets */
413 (first_packet || mid_packet || last_packet) ? 1 : 0, 413 (first_packet || mid_packet || last_packet) ? 1 : 0,
414 414
415 /* want interrupts on all interesting packets */ 415 /* want interrupts on all interesting packets */
416 (first_packet || mid_packet || last_packet) ? 1 : 0, 416 (first_packet || mid_packet || last_packet) ? 1 : 0,
417 417
418 sizeof(struct CIP_header), /* data size */ 418 sizeof(struct CIP_header), /* data size */
419 cip_dma); 419 cip_dma);
420 420
421 if (first_packet) 421 if (first_packet)
422 f->frame_begin_timestamp = &(block->u.out.u.empty.ol.q[3]); 422 f->frame_begin_timestamp = &(block->u.out.u.empty.ol.q[3]);
423 else if (mid_packet) 423 else if (mid_packet)
424 f->mid_frame_timestamp = &(block->u.out.u.empty.ol.q[3]); 424 f->mid_frame_timestamp = &(block->u.out.u.empty.ol.q[3]);
425 else if (last_packet) { 425 else if (last_packet) {
426 f->frame_end_timestamp = &(block->u.out.u.empty.ol.q[3]); 426 f->frame_end_timestamp = &(block->u.out.u.empty.ol.q[3]);
427 f->frame_end_branch = &(block->u.out.u.empty.ol.q[2]); 427 f->frame_end_branch = &(block->u.out.u.empty.ol.q[2]);
428 } 428 }
429 429
430 branch_address = &(block->u.out.u.empty.ol.q[2]); 430 branch_address = &(block->u.out.u.empty.ol.q[2]);
431 n_descriptors = 3; 431 n_descriptors = 3;
432 if (first_packet) 432 if (first_packet)
433 f->first_n_descriptors = n_descriptors; 433 f->first_n_descriptors = n_descriptors;
434 434
435 } else { /* full packet */ 435 } else { /* full packet */
436 436
437 /* second descriptor - OUTPUT_MORE for CIP header */ 437 /* second descriptor - OUTPUT_MORE for CIP header */
438 fill_output_more( &(block->u.out.u.full.om), 438 fill_output_more( &(block->u.out.u.full.om),
439 sizeof(struct CIP_header), /* data size */ 439 sizeof(struct CIP_header), /* data size */
440 cip_dma); 440 cip_dma);
441 441
442 442
443 /* third (and possibly fourth) descriptor - for DV data */ 443 /* third (and possibly fourth) descriptor - for DV data */
444 /* the 480-byte payload can cross a page boundary; if so, 444 /* the 480-byte payload can cross a page boundary; if so,
445 we need to split it into two DMA descriptors */ 445 we need to split it into two DMA descriptors */
446 446
447 /* does the 480-byte data payload cross a page boundary? */ 447 /* does the 480-byte data payload cross a page boundary? */
448 if ( (PAGE_SIZE- ((unsigned long)data_p % PAGE_SIZE) ) < 480 ) { 448 if ( (PAGE_SIZE- ((unsigned long)data_p % PAGE_SIZE) ) < 480 ) {
449 449
450 /* page boundary crossed */ 450 /* page boundary crossed */
451 451
452 fill_output_more( &(block->u.out.u.full.u.cross.om), 452 fill_output_more( &(block->u.out.u.full.u.cross.om),
453 /* data size - how much of data_p fits on the first page */ 453 /* data size - how much of data_p fits on the first page */
454 PAGE_SIZE - (data_p % PAGE_SIZE), 454 PAGE_SIZE - (data_p % PAGE_SIZE),
455 455
456 /* DMA address of data_p */ 456 /* DMA address of data_p */
457 dma_region_offset_to_bus(&video->dv_buf, 457 dma_region_offset_to_bus(&video->dv_buf,
458 data_p - (unsigned long) video->dv_buf.kvirt)); 458 data_p - (unsigned long) video->dv_buf.kvirt));
459 459
460 fill_output_last( &(block->u.out.u.full.u.cross.ol), 460 fill_output_last( &(block->u.out.u.full.u.cross.ol),
461 461
462 /* want completion status on all interesting packets */ 462 /* want completion status on all interesting packets */
463 (first_packet || mid_packet || last_packet) ? 1 : 0, 463 (first_packet || mid_packet || last_packet) ? 1 : 0,
464 464
465 /* want interrupt on all interesting packets */ 465 /* want interrupt on all interesting packets */
466 (first_packet || mid_packet || last_packet) ? 1 : 0, 466 (first_packet || mid_packet || last_packet) ? 1 : 0,
467 467
468 /* data size - remaining portion of data_p */ 468 /* data size - remaining portion of data_p */
469 480 - (PAGE_SIZE - (data_p % PAGE_SIZE)), 469 480 - (PAGE_SIZE - (data_p % PAGE_SIZE)),
470 470
471 /* DMA address of data_p + PAGE_SIZE - (data_p % PAGE_SIZE) */ 471 /* DMA address of data_p + PAGE_SIZE - (data_p % PAGE_SIZE) */
472 dma_region_offset_to_bus(&video->dv_buf, 472 dma_region_offset_to_bus(&video->dv_buf,
473 data_p + PAGE_SIZE - (data_p % PAGE_SIZE) - (unsigned long) video->dv_buf.kvirt)); 473 data_p + PAGE_SIZE - (data_p % PAGE_SIZE) - (unsigned long) video->dv_buf.kvirt));
474 474
475 if (first_packet) 475 if (first_packet)
476 f->frame_begin_timestamp = &(block->u.out.u.full.u.cross.ol.q[3]); 476 f->frame_begin_timestamp = &(block->u.out.u.full.u.cross.ol.q[3]);
477 else if (mid_packet) 477 else if (mid_packet)
478 f->mid_frame_timestamp = &(block->u.out.u.full.u.cross.ol.q[3]); 478 f->mid_frame_timestamp = &(block->u.out.u.full.u.cross.ol.q[3]);
479 else if (last_packet) { 479 else if (last_packet) {
480 f->frame_end_timestamp = &(block->u.out.u.full.u.cross.ol.q[3]); 480 f->frame_end_timestamp = &(block->u.out.u.full.u.cross.ol.q[3]);
481 f->frame_end_branch = &(block->u.out.u.full.u.cross.ol.q[2]); 481 f->frame_end_branch = &(block->u.out.u.full.u.cross.ol.q[2]);
482 } 482 }
483 483
484 branch_address = &(block->u.out.u.full.u.cross.ol.q[2]); 484 branch_address = &(block->u.out.u.full.u.cross.ol.q[2]);
485 485
486 n_descriptors = 5; 486 n_descriptors = 5;
487 if (first_packet) 487 if (first_packet)
488 f->first_n_descriptors = n_descriptors; 488 f->first_n_descriptors = n_descriptors;
489 489
490 full_packets++; 490 full_packets++;
491 491
492 } else { 492 } else {
493 /* fits on one page */ 493 /* fits on one page */
494 494
495 fill_output_last( &(block->u.out.u.full.u.nocross.ol), 495 fill_output_last( &(block->u.out.u.full.u.nocross.ol),
496 496
497 /* want completion status on all interesting packets */ 497 /* want completion status on all interesting packets */
498 (first_packet || mid_packet || last_packet) ? 1 : 0, 498 (first_packet || mid_packet || last_packet) ? 1 : 0,
499 499
500 /* want interrupt on all interesting packets */ 500 /* want interrupt on all interesting packets */
501 (first_packet || mid_packet || last_packet) ? 1 : 0, 501 (first_packet || mid_packet || last_packet) ? 1 : 0,
502 502
503 480, /* data size (480 bytes of DV data) */ 503 480, /* data size (480 bytes of DV data) */
504 504
505 505
506 /* DMA address of data_p */ 506 /* DMA address of data_p */
507 dma_region_offset_to_bus(&video->dv_buf, 507 dma_region_offset_to_bus(&video->dv_buf,
508 data_p - (unsigned long) video->dv_buf.kvirt)); 508 data_p - (unsigned long) video->dv_buf.kvirt));
509 509
510 if (first_packet) 510 if (first_packet)
511 f->frame_begin_timestamp = &(block->u.out.u.full.u.nocross.ol.q[3]); 511 f->frame_begin_timestamp = &(block->u.out.u.full.u.nocross.ol.q[3]);
512 else if (mid_packet) 512 else if (mid_packet)
513 f->mid_frame_timestamp = &(block->u.out.u.full.u.nocross.ol.q[3]); 513 f->mid_frame_timestamp = &(block->u.out.u.full.u.nocross.ol.q[3]);
514 else if (last_packet) { 514 else if (last_packet) {
515 f->frame_end_timestamp = &(block->u.out.u.full.u.nocross.ol.q[3]); 515 f->frame_end_timestamp = &(block->u.out.u.full.u.nocross.ol.q[3]);
516 f->frame_end_branch = &(block->u.out.u.full.u.nocross.ol.q[2]); 516 f->frame_end_branch = &(block->u.out.u.full.u.nocross.ol.q[2]);
517 } 517 }
518 518
519 branch_address = &(block->u.out.u.full.u.nocross.ol.q[2]); 519 branch_address = &(block->u.out.u.full.u.nocross.ol.q[2]);
520 520
521 n_descriptors = 4; 521 n_descriptors = 4;
522 if (first_packet) 522 if (first_packet)
523 f->first_n_descriptors = n_descriptors; 523 f->first_n_descriptors = n_descriptors;
524 524
525 full_packets++; 525 full_packets++;
526 } 526 }
527 } 527 }
528 528
529 /* link this descriptor block into the DMA program by filling in 529 /* link this descriptor block into the DMA program by filling in
530 the branch address of the previous block */ 530 the branch address of the previous block */
531 531
532 /* note: we are not linked into the active DMA chain yet */ 532 /* note: we are not linked into the active DMA chain yet */
533 533
534 if (last_branch_address) { 534 if (last_branch_address) {
535 *(last_branch_address) = cpu_to_le32(block_dma | n_descriptors); 535 *(last_branch_address) = cpu_to_le32(block_dma | n_descriptors);
536 } 536 }
537 537
538 last_branch_address = branch_address; 538 last_branch_address = branch_address;
539 539
540 540
541 f->n_packets++; 541 f->n_packets++;
542 542
543 } 543 }
544 544
545 /* when we first assemble a new frame, set the final branch 545 /* when we first assemble a new frame, set the final branch
546 to loop back up to the top */ 546 to loop back up to the top */
547 *(f->frame_end_branch) = cpu_to_le32(f->descriptor_pool_dma | f->first_n_descriptors); 547 *(f->frame_end_branch) = cpu_to_le32(f->descriptor_pool_dma | f->first_n_descriptors);
548 548
549 /* make the latest version of this frame visible to the PCI card */ 549 /* make the latest version of this frame visible to the PCI card */
550 dma_region_sync_for_device(&video->dv_buf, f->data - (unsigned long) video->dv_buf.kvirt, video->frame_size); 550 dma_region_sync_for_device(&video->dv_buf, f->data - (unsigned long) video->dv_buf.kvirt, video->frame_size);
551 551
552 /* lock against DMA interrupt */ 552 /* lock against DMA interrupt */
553 spin_lock_irqsave(&video->spinlock, irq_flags); 553 spin_lock_irqsave(&video->spinlock, irq_flags);
554 554
555 f->state = FRAME_READY; 555 f->state = FRAME_READY;
556 556
557 video->n_clear_frames--; 557 video->n_clear_frames--;
558 558
559 last_frame = video->first_clear_frame - 1; 559 last_frame = video->first_clear_frame - 1;
560 if (last_frame == -1) 560 if (last_frame == -1)
561 last_frame = video->n_frames-1; 561 last_frame = video->n_frames-1;
562 562
563 video->first_clear_frame = (video->first_clear_frame + 1) % video->n_frames; 563 video->first_clear_frame = (video->first_clear_frame + 1) % video->n_frames;
564 564
565 irq_printk(" frame %d prepared, active_frame = %d, n_clear_frames = %d, first_clear_frame = %d\n last=%d\n", 565 irq_printk(" frame %d prepared, active_frame = %d, n_clear_frames = %d, first_clear_frame = %d\n last=%d\n",
566 this_frame, video->active_frame, video->n_clear_frames, video->first_clear_frame, last_frame); 566 this_frame, video->active_frame, video->n_clear_frames, video->first_clear_frame, last_frame);
567 567
568 irq_printk(" begin_ts %08lx mid_ts %08lx end_ts %08lx end_br %08lx\n", 568 irq_printk(" begin_ts %08lx mid_ts %08lx end_ts %08lx end_br %08lx\n",
569 (unsigned long) f->frame_begin_timestamp, 569 (unsigned long) f->frame_begin_timestamp,
570 (unsigned long) f->mid_frame_timestamp, 570 (unsigned long) f->mid_frame_timestamp,
571 (unsigned long) f->frame_end_timestamp, 571 (unsigned long) f->frame_end_timestamp,
572 (unsigned long) f->frame_end_branch); 572 (unsigned long) f->frame_end_branch);
573 573
574 if (video->active_frame != -1) { 574 if (video->active_frame != -1) {
575 575
576 /* if DMA is already active, we are almost done */ 576 /* if DMA is already active, we are almost done */
577 /* just link us onto the active DMA chain */ 577 /* just link us onto the active DMA chain */
578 if (video->frames[last_frame]->frame_end_branch) { 578 if (video->frames[last_frame]->frame_end_branch) {
579 u32 temp; 579 u32 temp;
580 580
581 /* point the previous frame's tail to this frame's head */ 581 /* point the previous frame's tail to this frame's head */
582 *(video->frames[last_frame]->frame_end_branch) = cpu_to_le32(f->descriptor_pool_dma | f->first_n_descriptors); 582 *(video->frames[last_frame]->frame_end_branch) = cpu_to_le32(f->descriptor_pool_dma | f->first_n_descriptors);
583 583
584 /* this write MUST precede the next one, or we could silently drop frames */ 584 /* this write MUST precede the next one, or we could silently drop frames */
585 wmb(); 585 wmb();
586 586
587 /* disable the want_status semaphore on the last packet */ 587 /* disable the want_status semaphore on the last packet */
588 temp = le32_to_cpu(*(video->frames[last_frame]->frame_end_branch - 2)); 588 temp = le32_to_cpu(*(video->frames[last_frame]->frame_end_branch - 2));
589 temp &= 0xF7CFFFFF; 589 temp &= 0xF7CFFFFF;
590 *(video->frames[last_frame]->frame_end_branch - 2) = cpu_to_le32(temp); 590 *(video->frames[last_frame]->frame_end_branch - 2) = cpu_to_le32(temp);
591 591
592 /* flush these writes to memory ASAP */ 592 /* flush these writes to memory ASAP */
593 flush_pci_write(video->ohci); 593 flush_pci_write(video->ohci);
594 594
595 /* NOTE: 595 /* NOTE:
596 ideally the writes should be "atomic": if 596 ideally the writes should be "atomic": if
597 the OHCI card reads the want_status flag in 597 the OHCI card reads the want_status flag in
598 between them, we'll falsely report a 598 between them, we'll falsely report a
599 dropped frame. Hopefully this window is too 599 dropped frame. Hopefully this window is too
600 small to really matter, and the consequence 600 small to really matter, and the consequence
601 is rather harmless. */ 601 is rather harmless. */
602 602
603 603
604 irq_printk(" new frame %d linked onto DMA chain\n", this_frame); 604 irq_printk(" new frame %d linked onto DMA chain\n", this_frame);
605 605
606 } else { 606 } else {
607 printk(KERN_ERR "dv1394: last frame not ready???\n"); 607 printk(KERN_ERR "dv1394: last frame not ready???\n");
608 } 608 }
609 609
610 } else { 610 } else {
611 611
612 u32 transmit_sec, transmit_cyc; 612 u32 transmit_sec, transmit_cyc;
613 u32 ts_cyc, ts_off; 613 u32 ts_cyc;
614 614
615 /* DMA is stopped, so this is the very first frame */ 615 /* DMA is stopped, so this is the very first frame */
616 video->active_frame = this_frame; 616 video->active_frame = this_frame;
617 617
618 /* set CommandPtr to address and size of first descriptor block */ 618 /* set CommandPtr to address and size of first descriptor block */
619 reg_write(video->ohci, video->ohci_IsoXmitCommandPtr, 619 reg_write(video->ohci, video->ohci_IsoXmitCommandPtr,
620 video->frames[video->active_frame]->descriptor_pool_dma | 620 video->frames[video->active_frame]->descriptor_pool_dma |
621 f->first_n_descriptors); 621 f->first_n_descriptors);
622 622
623 /* assign a timestamp based on the current cycle time... 623 /* assign a timestamp based on the current cycle time...
624 We'll tell the card to begin DMA 100 cycles from now, 624 We'll tell the card to begin DMA 100 cycles from now,
625 and assign a timestamp 103 cycles from now */ 625 and assign a timestamp 103 cycles from now */
626 626
627 cycleTimer = reg_read(video->ohci, OHCI1394_IsochronousCycleTimer); 627 cycleTimer = reg_read(video->ohci, OHCI1394_IsochronousCycleTimer);
628 628
629 ct_sec = cycleTimer >> 25; 629 ct_sec = cycleTimer >> 25;
630 ct_cyc = (cycleTimer >> 12) & 0x1FFF; 630 ct_cyc = (cycleTimer >> 12) & 0x1FFF;
631 ct_off = cycleTimer & 0xFFF; 631 ct_off = cycleTimer & 0xFFF;
632 632
633 transmit_sec = ct_sec; 633 transmit_sec = ct_sec;
634 transmit_cyc = ct_cyc + 100; 634 transmit_cyc = ct_cyc + 100;
635 635
636 transmit_sec += transmit_cyc/8000; 636 transmit_sec += transmit_cyc/8000;
637 transmit_cyc %= 8000; 637 transmit_cyc %= 8000;
638 638
639 ts_off = ct_off;
640 ts_cyc = transmit_cyc + 3; 639 ts_cyc = transmit_cyc + 3;
641 ts_cyc %= 8000; 640 ts_cyc %= 8000;
642 641
643 f->assigned_timestamp = (ts_cyc&0xF) << 12; 642 f->assigned_timestamp = (ts_cyc&0xF) << 12;
644 643
645 /* now actually write the timestamp into the appropriate CIP headers */ 644 /* now actually write the timestamp into the appropriate CIP headers */
646 if (f->cip_syt1) { 645 if (f->cip_syt1) {
647 f->cip_syt1->b[6] = f->assigned_timestamp >> 8; 646 f->cip_syt1->b[6] = f->assigned_timestamp >> 8;
648 f->cip_syt1->b[7] = f->assigned_timestamp & 0xFF; 647 f->cip_syt1->b[7] = f->assigned_timestamp & 0xFF;
649 } 648 }
650 if (f->cip_syt2) { 649 if (f->cip_syt2) {
651 f->cip_syt2->b[6] = f->assigned_timestamp >> 8; 650 f->cip_syt2->b[6] = f->assigned_timestamp >> 8;
652 f->cip_syt2->b[7] = f->assigned_timestamp & 0xFF; 651 f->cip_syt2->b[7] = f->assigned_timestamp & 0xFF;
653 } 652 }
654 653
655 /* --- start DMA --- */ 654 /* --- start DMA --- */
656 655
657 /* clear all bits in ContextControl register */ 656 /* clear all bits in ContextControl register */
658 657
659 reg_write(video->ohci, video->ohci_IsoXmitContextControlClear, 0xFFFFFFFF); 658 reg_write(video->ohci, video->ohci_IsoXmitContextControlClear, 0xFFFFFFFF);
660 wmb(); 659 wmb();
661 660
662 /* the OHCI card has the ability to start ISO transmission on a 661 /* the OHCI card has the ability to start ISO transmission on a
663 particular cycle (start-on-cycle). This way we can ensure that 662 particular cycle (start-on-cycle). This way we can ensure that
664 the first DV frame will have an accurate timestamp. 663 the first DV frame will have an accurate timestamp.
665 664
666 However, start-on-cycle only appears to work if the OHCI card 665 However, start-on-cycle only appears to work if the OHCI card
667 is cycle master! Since the consequences of messing up the first 666 is cycle master! Since the consequences of messing up the first
668 timestamp are minimal*, just disable start-on-cycle for now. 667 timestamp are minimal*, just disable start-on-cycle for now.
669 668
670 * my DV deck drops the first few frames before it "locks in;" 669 * my DV deck drops the first few frames before it "locks in;"
671 so the first frame having an incorrect timestamp is inconsequential. 670 so the first frame having an incorrect timestamp is inconsequential.
672 */ 671 */
673 672
674 #if 0 673 #if 0
675 reg_write(video->ohci, video->ohci_IsoXmitContextControlSet, 674 reg_write(video->ohci, video->ohci_IsoXmitContextControlSet,
676 (1 << 31) /* enable start-on-cycle */ 675 (1 << 31) /* enable start-on-cycle */
677 | ( (transmit_sec & 0x3) << 29) 676 | ( (transmit_sec & 0x3) << 29)
678 | (transmit_cyc << 16)); 677 | (transmit_cyc << 16));
679 wmb(); 678 wmb();
680 #endif 679 #endif
681 680
682 video->dma_running = 1; 681 video->dma_running = 1;
683 682
684 /* set the 'run' bit */ 683 /* set the 'run' bit */
685 reg_write(video->ohci, video->ohci_IsoXmitContextControlSet, 0x8000); 684 reg_write(video->ohci, video->ohci_IsoXmitContextControlSet, 0x8000);
686 flush_pci_write(video->ohci); 685 flush_pci_write(video->ohci);
687 686
688 /* --- DMA should be running now --- */ 687 /* --- DMA should be running now --- */
689 688
690 debug_printk(" Cycle = %4u ContextControl = %08x CmdPtr = %08x\n", 689 debug_printk(" Cycle = %4u ContextControl = %08x CmdPtr = %08x\n",
691 (reg_read(video->ohci, OHCI1394_IsochronousCycleTimer) >> 12) & 0x1FFF, 690 (reg_read(video->ohci, OHCI1394_IsochronousCycleTimer) >> 12) & 0x1FFF,
692 reg_read(video->ohci, video->ohci_IsoXmitContextControlSet), 691 reg_read(video->ohci, video->ohci_IsoXmitContextControlSet),
693 reg_read(video->ohci, video->ohci_IsoXmitCommandPtr)); 692 reg_read(video->ohci, video->ohci_IsoXmitCommandPtr));
694 693
695 debug_printk(" DMA start - current cycle %4u, transmit cycle %4u (%2u), assigning ts cycle %2u\n", 694 debug_printk(" DMA start - current cycle %4u, transmit cycle %4u (%2u), assigning ts cycle %2u\n",
696 ct_cyc, transmit_cyc, transmit_cyc & 0xF, ts_cyc & 0xF); 695 ct_cyc, transmit_cyc, transmit_cyc & 0xF, ts_cyc & 0xF);
697 696
698 #if DV1394_DEBUG_LEVEL >= 2 697 #if DV1394_DEBUG_LEVEL >= 2
699 { 698 {
700 /* check if DMA is really running */ 699 /* check if DMA is really running */
701 int i = 0; 700 int i = 0;
702 while (i < 20) { 701 while (i < 20) {
703 mb(); 702 mb();
704 mdelay(1); 703 mdelay(1);
705 if (reg_read(video->ohci, video->ohci_IsoXmitContextControlSet) & (1 << 10)) { 704 if (reg_read(video->ohci, video->ohci_IsoXmitContextControlSet) & (1 << 10)) {
706 printk("DMA ACTIVE after %d msec\n", i); 705 printk("DMA ACTIVE after %d msec\n", i);
707 break; 706 break;
708 } 707 }
709 i++; 708 i++;
710 } 709 }
711 710
712 printk("set = %08x, cmdPtr = %08x\n", 711 printk("set = %08x, cmdPtr = %08x\n",
713 reg_read(video->ohci, video->ohci_IsoXmitContextControlSet), 712 reg_read(video->ohci, video->ohci_IsoXmitContextControlSet),
714 reg_read(video->ohci, video->ohci_IsoXmitCommandPtr) 713 reg_read(video->ohci, video->ohci_IsoXmitCommandPtr)
715 ); 714 );
716 715
717 if ( ! (reg_read(video->ohci, video->ohci_IsoXmitContextControlSet) & (1 << 10)) ) { 716 if ( ! (reg_read(video->ohci, video->ohci_IsoXmitContextControlSet) & (1 << 10)) ) {
718 printk("DMA did NOT go active after 20ms, event = %x\n", 717 printk("DMA did NOT go active after 20ms, event = %x\n",
719 reg_read(video->ohci, video->ohci_IsoXmitContextControlSet) & 0x1F); 718 reg_read(video->ohci, video->ohci_IsoXmitContextControlSet) & 0x1F);
720 } else 719 } else
721 printk("DMA is RUNNING!\n"); 720 printk("DMA is RUNNING!\n");
722 } 721 }
723 #endif 722 #endif
724 723
725 } 724 }
726 725
727 726
728 spin_unlock_irqrestore(&video->spinlock, irq_flags); 727 spin_unlock_irqrestore(&video->spinlock, irq_flags);
729 } 728 }
730 729
731 730
732 731
733 /*** RECEIVE FUNCTIONS *****************************************************/ 732 /*** RECEIVE FUNCTIONS *****************************************************/
734 733
735 /* 734 /*
736 frame method put_packet 735 frame method put_packet
737 736
738 map and copy the packet data to its location in the frame 737 map and copy the packet data to its location in the frame
739 based upon DIF section and sequence 738 based upon DIF section and sequence
740 */ 739 */
741 740
742 static void inline 741 static void inline
743 frame_put_packet (struct frame *f, struct packet *p) 742 frame_put_packet (struct frame *f, struct packet *p)
744 { 743 {
745 int section_type = p->data[0] >> 5; /* section type is in bits 5 - 7 */ 744 int section_type = p->data[0] >> 5; /* section type is in bits 5 - 7 */
746 int dif_sequence = p->data[1] >> 4; /* dif sequence number is in bits 4 - 7 */ 745 int dif_sequence = p->data[1] >> 4; /* dif sequence number is in bits 4 - 7 */
747 int dif_block = p->data[2]; 746 int dif_block = p->data[2];
748 747
749 /* sanity check */ 748 /* sanity check */
750 if (dif_sequence > 11 || dif_block > 149) return; 749 if (dif_sequence > 11 || dif_block > 149) return;
751 750
752 switch (section_type) { 751 switch (section_type) {
753 case 0: /* 1 Header block */ 752 case 0: /* 1 Header block */
754 memcpy( (void *) f->data + dif_sequence * 150 * 80, p->data, 480); 753 memcpy( (void *) f->data + dif_sequence * 150 * 80, p->data, 480);
755 break; 754 break;
756 755
757 case 1: /* 2 Subcode blocks */ 756 case 1: /* 2 Subcode blocks */
758 memcpy( (void *) f->data + dif_sequence * 150 * 80 + (1 + dif_block) * 80, p->data, 480); 757 memcpy( (void *) f->data + dif_sequence * 150 * 80 + (1 + dif_block) * 80, p->data, 480);
759 break; 758 break;
760 759
761 case 2: /* 3 VAUX blocks */ 760 case 2: /* 3 VAUX blocks */
762 memcpy( (void *) f->data + dif_sequence * 150 * 80 + (3 + dif_block) * 80, p->data, 480); 761 memcpy( (void *) f->data + dif_sequence * 150 * 80 + (3 + dif_block) * 80, p->data, 480);
763 break; 762 break;
764 763
765 case 3: /* 9 Audio blocks interleaved with video */ 764 case 3: /* 9 Audio blocks interleaved with video */
766 memcpy( (void *) f->data + dif_sequence * 150 * 80 + (6 + dif_block * 16) * 80, p->data, 480); 765 memcpy( (void *) f->data + dif_sequence * 150 * 80 + (6 + dif_block * 16) * 80, p->data, 480);
767 break; 766 break;
768 767
769 case 4: /* 135 Video blocks interleaved with audio */ 768 case 4: /* 135 Video blocks interleaved with audio */
770 memcpy( (void *) f->data + dif_sequence * 150 * 80 + (7 + (dif_block / 15) + dif_block) * 80, p->data, 480); 769 memcpy( (void *) f->data + dif_sequence * 150 * 80 + (7 + (dif_block / 15) + dif_block) * 80, p->data, 480);
771 break; 770 break;
772 771
773 default: /* we can not handle any other data */ 772 default: /* we can not handle any other data */
774 break; 773 break;
775 } 774 }
776 } 775 }


static void start_dma_receive(struct video_card *video)
{
	if (video->first_run == 1) {
		video->first_run = 0;

		/* start DMA once all of the frames are READY */
		video->n_clear_frames = 0;
		video->first_clear_frame = -1;
		video->current_packet = 0;
		video->active_frame = 0;

		/* reset iso recv control register */
		reg_write(video->ohci, video->ohci_IsoRcvContextControlClear, 0xFFFFFFFF);
		wmb();

		/* clear bufferFill, set isochHeader and speed (0=100) */
		reg_write(video->ohci, video->ohci_IsoRcvContextControlSet, 0x40000000);

		/* match on all tags, listen on channel */
		reg_write(video->ohci, video->ohci_IsoRcvContextMatch, 0xf0000000 | video->channel);

		/* address and first descriptor block + Z=1 */
		reg_write(video->ohci, video->ohci_IsoRcvCommandPtr,
			  video->frames[0]->descriptor_pool_dma | 1); /* Z=1 */
		wmb();

		video->dma_running = 1;

		/* run */
		reg_write(video->ohci, video->ohci_IsoRcvContextControlSet, 0x8000);
		flush_pci_write(video->ohci);

		debug_printk("dv1394: DMA started\n");

#if DV1394_DEBUG_LEVEL >= 2
	{
		int i;

		for (i = 0; i < 1000; ++i) {
			mdelay(1);
			if (reg_read(video->ohci, video->ohci_IsoRcvContextControlSet) & (1 << 10)) {
				printk("DMA ACTIVE after %d msec\n", i);
				break;
			}
		}
		if ( reg_read(video->ohci, video->ohci_IsoRcvContextControlSet) & (1 << 11) ) {
			printk("DEAD, event = %x\n",
			       reg_read(video->ohci, video->ohci_IsoRcvContextControlSet) & 0x1F);
		} else
			printk("RUNNING!\n");
	}
#endif
	} else if ( reg_read(video->ohci, video->ohci_IsoRcvContextControlSet) & (1 << 11) ) {
		debug_printk("DEAD, event = %x\n",
			     reg_read(video->ohci, video->ohci_IsoRcvContextControlSet) & 0x1F);

		/* wake */
		reg_write(video->ohci, video->ohci_IsoRcvContextControlSet, (1 << 12));
	}
}
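/*
 * The magic numbers written above are OHCI 1.1 isochronous receive
 * ContextControl bits.  Purely for illustration (the driver uses raw
 * shifts; these names are not defined anywhere in this file):
 *
 *	run    = 1 << 15  -- software sets/clears to start/stop the context
 *	wake   = 1 << 12  -- software sets to make hardware re-scan the list
 *	dead   = 1 << 11  -- hardware sets when the context dies
 *	active = 1 << 10  -- hardware sets while the context is running
 *
 * The low five bits hold the event code once a context is dead, which
 * is why both DEAD messages mask the register with 0x1F.
 */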


/*
   receive_packets() - build the DMA program for receiving
*/

static void receive_packets(struct video_card *video)
{
	struct DMA_descriptor_block *block = NULL;
	dma_addr_t block_dma = 0;
	struct packet *data = NULL;
	dma_addr_t data_dma = 0;
	__le32 *last_branch_address = NULL;
	unsigned long irq_flags;
	int want_interrupt = 0;
	struct frame *f = NULL;
	int i, j;

	spin_lock_irqsave(&video->spinlock, irq_flags);

	for (j = 0; j < video->n_frames; j++) {

		/* connect frames */
		if (j > 0 && f != NULL && f->frame_end_branch != NULL)
			*(f->frame_end_branch) = cpu_to_le32(video->frames[j]->descriptor_pool_dma | 1); /* set Z=1 */

		f = video->frames[j];

		for (i = 0; i < MAX_PACKETS; i++) {
			/* locate a descriptor block and packet from the buffer */
			block = &(f->descriptor_pool[i]);
			block_dma = ((unsigned long) block - (unsigned long) f->descriptor_pool) + f->descriptor_pool_dma;

			data = ((struct packet*)video->packet_buf.kvirt) + f->frame_num * MAX_PACKETS + i;
			data_dma = dma_region_offset_to_bus( &video->packet_buf,
							     ((unsigned long) data - (unsigned long) video->packet_buf.kvirt) );

			/* setup DMA descriptor block */
			want_interrupt = ((i % (MAX_PACKETS/2)) == 0 || i == (MAX_PACKETS-1));
			fill_input_last( &(block->u.in.il), want_interrupt, 512, data_dma);

			/* link descriptors */
			last_branch_address = f->frame_end_branch;

			if (last_branch_address != NULL)
				*(last_branch_address) = cpu_to_le32(block_dma | 1); /* set Z=1 */

			f->frame_end_branch = &(block->u.in.il.q[2]);
		}

	} /* next j */

	spin_unlock_irqrestore(&video->spinlock, irq_flags);

}
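/*
 * A note on the "| 1" above: an OHCI branchAddress word is the
 * 16-byte-aligned address of the next descriptor block OR'd with Z,
 * the number of descriptors in that block.  Every receive block built
 * here consists of a single INPUT_LAST descriptor, so Z is always 1;
 * a Z of 0 would mean "end of program" and the context would stall
 * there instead of chaining on to the next packet's descriptor.
 */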



/*** MANAGEMENT FUNCTIONS **************************************************/

static int do_dv1394_init(struct video_card *video, struct dv1394_init *init)
{
	unsigned long flags, new_buf_size;
	int i;
	u64 chan_mask;
	int retval = -EINVAL;

	debug_printk("dv1394: initialising %d\n", video->id);
	if (init->api_version != DV1394_API_VERSION)
		return -EINVAL;

	/* first sanitize all the parameters */
	if ( (init->n_frames < 2) || (init->n_frames > DV1394_MAX_FRAMES) )
		return -EINVAL;

	if ( (init->format != DV1394_NTSC) && (init->format != DV1394_PAL) )
		return -EINVAL;

	if ( (init->syt_offset == 0) || (init->syt_offset > 50) )
		/* default SYT offset is 3 cycles */
		init->syt_offset = 3;

	if (init->channel > 63)
		init->channel = 63;

	chan_mask = (u64)1 << init->channel;

	/* calculate what size DMA buffer is needed */
	if (init->format == DV1394_NTSC)
		new_buf_size = DV1394_NTSC_FRAME_SIZE * init->n_frames;
	else
		new_buf_size = DV1394_PAL_FRAME_SIZE * init->n_frames;

	/* round up to PAGE_SIZE */
	if (new_buf_size % PAGE_SIZE)
		new_buf_size += PAGE_SIZE - (new_buf_size % PAGE_SIZE);

	/* don't allow the user to allocate the DMA buffer more than once */
	if (video->dv_buf.kvirt && video->dv_buf_size != new_buf_size) {
		printk(KERN_ERR "dv1394: re-sizing the DMA buffer is not allowed\n");
		return -EINVAL;
	}

	/* shutdown the card if it's currently active */
	/* (the card should not be reset if the parameters are screwy) */

	do_dv1394_shutdown(video, 0);

	/* try to claim the ISO channel */
	spin_lock_irqsave(&video->ohci->IR_channel_lock, flags);
	if (video->ohci->ISO_channel_usage & chan_mask) {
		spin_unlock_irqrestore(&video->ohci->IR_channel_lock, flags);
		retval = -EBUSY;
		goto err;
	}
	video->ohci->ISO_channel_usage |= chan_mask;
	spin_unlock_irqrestore(&video->ohci->IR_channel_lock, flags);

	video->channel = init->channel;

	/* initialize misc. fields of video */
	video->n_frames = init->n_frames;
	video->pal_or_ntsc = init->format;

	video->cip_accum = 0;
	video->continuity_counter = 0;

	video->active_frame = -1;
	video->first_clear_frame = 0;
	video->n_clear_frames = video->n_frames;
	video->dropped_frames = 0;

	video->write_off = 0;

	video->first_run = 1;
	video->current_packet = -1;
	video->first_frame = 0;

	if (video->pal_or_ntsc == DV1394_NTSC) {
		video->cip_n = init->cip_n != 0 ? init->cip_n : CIP_N_NTSC;
		video->cip_d = init->cip_d != 0 ? init->cip_d : CIP_D_NTSC;
		video->frame_size = DV1394_NTSC_FRAME_SIZE;
	} else {
		video->cip_n = init->cip_n != 0 ? init->cip_n : CIP_N_PAL;
		video->cip_d = init->cip_d != 0 ? init->cip_d : CIP_D_PAL;
		video->frame_size = DV1394_PAL_FRAME_SIZE;
	}

	video->syt_offset = init->syt_offset;

	/* find and claim DMA contexts on the OHCI card */

	if (video->ohci_it_ctx == -1) {
		ohci1394_init_iso_tasklet(&video->it_tasklet, OHCI_ISO_TRANSMIT,
					  it_tasklet_func, (unsigned long) video);

		if (ohci1394_register_iso_tasklet(video->ohci, &video->it_tasklet) < 0) {
			printk(KERN_ERR "dv1394: could not find an available IT DMA context\n");
			retval = -EBUSY;
			goto err;
		}

		video->ohci_it_ctx = video->it_tasklet.context;
		debug_printk("dv1394: claimed IT DMA context %d\n", video->ohci_it_ctx);
	}

	if (video->ohci_ir_ctx == -1) {
		ohci1394_init_iso_tasklet(&video->ir_tasklet, OHCI_ISO_RECEIVE,
					  ir_tasklet_func, (unsigned long) video);

		if (ohci1394_register_iso_tasklet(video->ohci, &video->ir_tasklet) < 0) {
			printk(KERN_ERR "dv1394: could not find an available IR DMA context\n");
			retval = -EBUSY;
			goto err;
		}
		video->ohci_ir_ctx = video->ir_tasklet.context;
		debug_printk("dv1394: claimed IR DMA context %d\n", video->ohci_ir_ctx);
	}

	/* allocate struct frames */
	for (i = 0; i < init->n_frames; i++) {
		video->frames[i] = frame_new(i, video);

		if (!video->frames[i]) {
			printk(KERN_ERR "dv1394: Cannot allocate frame structs\n");
			retval = -ENOMEM;
			goto err;
		}
	}

	if (!video->dv_buf.kvirt) {
		/* allocate the ringbuffer */
		retval = dma_region_alloc(&video->dv_buf, new_buf_size, video->ohci->dev, PCI_DMA_TODEVICE);
		if (retval)
			goto err;

		video->dv_buf_size = new_buf_size;

		debug_printk("dv1394: Allocated %d frame buffers, total %u pages (%u DMA pages), %lu bytes\n",
			     video->n_frames, video->dv_buf.n_pages,
			     video->dv_buf.n_dma_pages, video->dv_buf_size);
	}

	/* set up the frame->data pointers */
	for (i = 0; i < video->n_frames; i++)
		video->frames[i]->data = (unsigned long) video->dv_buf.kvirt + i * video->frame_size;

	if (!video->packet_buf.kvirt) {
		/* allocate packet buffer */
		video->packet_buf_size = sizeof(struct packet) * video->n_frames * MAX_PACKETS;
		if (video->packet_buf_size % PAGE_SIZE)
			video->packet_buf_size += PAGE_SIZE - (video->packet_buf_size % PAGE_SIZE);

		retval = dma_region_alloc(&video->packet_buf, video->packet_buf_size,
					  video->ohci->dev, PCI_DMA_FROMDEVICE);
		if (retval)
			goto err;

		debug_printk("dv1394: Allocated %d packets in buffer, total %u pages (%u DMA pages), %lu bytes\n",
			     video->n_frames*MAX_PACKETS, video->packet_buf.n_pages,
			     video->packet_buf.n_dma_pages, video->packet_buf_size);
	}

	/* set up register offsets for IT context */
	/* IT DMA context registers are spaced 16 bytes apart */
	video->ohci_IsoXmitContextControlSet = OHCI1394_IsoXmitContextControlSet+16*video->ohci_it_ctx;
	video->ohci_IsoXmitContextControlClear = OHCI1394_IsoXmitContextControlClear+16*video->ohci_it_ctx;
	video->ohci_IsoXmitCommandPtr = OHCI1394_IsoXmitCommandPtr+16*video->ohci_it_ctx;

	/* enable interrupts for IT context */
	reg_write(video->ohci, OHCI1394_IsoXmitIntMaskSet, (1 << video->ohci_it_ctx));
	debug_printk("dv1394: interrupts enabled for IT context %d\n", video->ohci_it_ctx);

	/* set up register offsets for IR context */
	/* IR DMA context registers are spaced 32 bytes apart */
	video->ohci_IsoRcvContextControlSet = OHCI1394_IsoRcvContextControlSet+32*video->ohci_ir_ctx;
	video->ohci_IsoRcvContextControlClear = OHCI1394_IsoRcvContextControlClear+32*video->ohci_ir_ctx;
	video->ohci_IsoRcvCommandPtr = OHCI1394_IsoRcvCommandPtr+32*video->ohci_ir_ctx;
	video->ohci_IsoRcvContextMatch = OHCI1394_IsoRcvContextMatch+32*video->ohci_ir_ctx;

	/* enable interrupts for IR context */
	reg_write(video->ohci, OHCI1394_IsoRecvIntMaskSet, (1 << video->ohci_ir_ctx) );
	debug_printk("dv1394: interrupts enabled for IR context %d\n", video->ohci_ir_ctx);

	return 0;

err:
	do_dv1394_shutdown(video, 1);
	return retval;
}
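/*
 * For illustration, a minimal user-space caller of the init path might
 * look like the sketch below.  The device path and error handling are
 * assumptions for the example, not something this file defines:
 *
 *	#include <fcntl.h>
 *	#include <sys/ioctl.h>
 *	#include "dv1394.h"
 *
 *	int fd = open("/dev/dv1394/0", O_RDWR);
 *	struct dv1394_init init = {
 *		.api_version = DV1394_API_VERSION,
 *		.channel     = 63,
 *		.n_frames    = 4,
 *		.format      = DV1394_PAL,
 *		.cip_n       = 0,	// 0 selects the built-in default
 *		.cip_d       = 0,
 *		.syt_offset  = 0,	// out of range, so 3 cycles is used
 *	};
 *	if (fd >= 0 && ioctl(fd, DV1394_IOC_INIT, &init) < 0)
 *		perror("DV1394_IOC_INIT");
 */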

/* if the user doesn't bother to call ioctl(INIT) before starting
   mmap() or read()/write(), just give him some default values */

static int do_dv1394_init_default(struct video_card *video)
{
	struct dv1394_init init;

	init.api_version = DV1394_API_VERSION;
	init.n_frames = DV1394_MAX_FRAMES / 4;
	init.channel = video->channel;
	init.format = video->pal_or_ntsc;
	init.cip_n = video->cip_n;
	init.cip_d = video->cip_d;
	init.syt_offset = video->syt_offset;

	return do_dv1394_init(video, &init);
}

/* do NOT call from interrupt context */
static void stop_dma(struct video_card *video)
{
	unsigned long flags;
	int i;

	/* no interrupts */
	spin_lock_irqsave(&video->spinlock, flags);

	video->dma_running = 0;

	if ( (video->ohci_it_ctx == -1) && (video->ohci_ir_ctx == -1) )
		goto out;

	/* stop DMA if in progress */
	if ( (video->active_frame != -1) ||
	     (reg_read(video->ohci, video->ohci_IsoXmitContextControlClear) & (1 << 10)) ||
	     (reg_read(video->ohci, video->ohci_IsoRcvContextControlClear) & (1 << 10)) ) {

		/* clear the .run bits */
		reg_write(video->ohci, video->ohci_IsoXmitContextControlClear, (1 << 15));
		reg_write(video->ohci, video->ohci_IsoRcvContextControlClear, (1 << 15));
		flush_pci_write(video->ohci);

		video->active_frame = -1;
		video->first_run = 1;

		/* wait until DMA really stops */
		i = 0;
		while (i < 1000) {

			/* wait 0.1 millisecond */
			udelay(100);

			if ( (reg_read(video->ohci, video->ohci_IsoXmitContextControlClear) & (1 << 10)) ||
			     (reg_read(video->ohci, video->ohci_IsoRcvContextControlClear) & (1 << 10)) ) {
				/* still active */
				debug_printk("dv1394: stop_dma: DMA not stopped yet\n" );
				mb();
			} else {
				debug_printk("dv1394: stop_dma: DMA stopped safely after %d ms\n", i/10);
				break;
			}

			i++;
		}

		if (i == 1000) {
			printk(KERN_ERR "dv1394: stop_dma: DMA still going after %d ms!\n", i/10);
		}
	}
	else
		debug_printk("dv1394: stop_dma: already stopped.\n");

out:
	spin_unlock_irqrestore(&video->spinlock, flags);
}
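/*
 * Timing note on the loop above: 1000 iterations of udelay(100) give
 * the contexts at most 100 ms to drop their active bits, and i/10
 * converts the iteration count back into milliseconds for the
 * messages.  All of this busy-waits with the spinlock held, which is
 * one more reason stop_dma() must never run in interrupt context.
 */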


static void do_dv1394_shutdown(struct video_card *video, int free_dv_buf)
{
	int i;

	debug_printk("dv1394: shutdown...\n");

	/* stop DMA if in progress */
	stop_dma(video);

	/* release the DMA contexts */
	if (video->ohci_it_ctx != -1) {
		video->ohci_IsoXmitContextControlSet = 0;
		video->ohci_IsoXmitContextControlClear = 0;
		video->ohci_IsoXmitCommandPtr = 0;

		/* disable interrupts for IT context */
		reg_write(video->ohci, OHCI1394_IsoXmitIntMaskClear, (1 << video->ohci_it_ctx));

		/* remove tasklet */
		ohci1394_unregister_iso_tasklet(video->ohci, &video->it_tasklet);
		debug_printk("dv1394: IT context %d released\n", video->ohci_it_ctx);
		video->ohci_it_ctx = -1;
	}

	if (video->ohci_ir_ctx != -1) {
		video->ohci_IsoRcvContextControlSet = 0;
		video->ohci_IsoRcvContextControlClear = 0;
		video->ohci_IsoRcvCommandPtr = 0;
		video->ohci_IsoRcvContextMatch = 0;

		/* disable interrupts for IR context */
		reg_write(video->ohci, OHCI1394_IsoRecvIntMaskClear, (1 << video->ohci_ir_ctx));

		/* remove tasklet */
		ohci1394_unregister_iso_tasklet(video->ohci, &video->ir_tasklet);
		debug_printk("dv1394: IR context %d released\n", video->ohci_ir_ctx);
		video->ohci_ir_ctx = -1;
	}

	/* release the ISO channel */
	if (video->channel != -1) {
		u64 chan_mask;
		unsigned long flags;

		chan_mask = (u64)1 << video->channel;

		spin_lock_irqsave(&video->ohci->IR_channel_lock, flags);
		video->ohci->ISO_channel_usage &= ~(chan_mask);
		spin_unlock_irqrestore(&video->ohci->IR_channel_lock, flags);

		video->channel = -1;
	}

	/* free the frame structs */
	for (i = 0; i < DV1394_MAX_FRAMES; i++) {
		if (video->frames[i])
			frame_delete(video->frames[i]);
		video->frames[i] = NULL;
	}

	video->n_frames = 0;

	/* we can't free the DMA buffer unless it is guaranteed that
	   no more user-space mappings exist */

	if (free_dv_buf) {
		dma_region_free(&video->dv_buf);
		video->dv_buf_size = 0;
	}

	/* free packet buffer */
	dma_region_free(&video->packet_buf);
	video->packet_buf_size = 0;

	debug_printk("dv1394: shutdown OK\n");
}

/*
   **********************************
   *** MMAP() THEORY OF OPERATION ***
   **********************************

   The ringbuffer cannot be re-allocated or freed while
   a user program maintains a mapping of it.  (note that a mapping
   can persist even after the device fd is closed!)

   So, only let the user process allocate the DMA buffer once.
   To resize or deallocate it, you must close the device file
   and open it again.

   Previously Dan M. hacked out a scheme that allowed the DMA
   buffer to change by forcefully unmapping it from the user's
   address space.  It was prone to error because it's very hard to
   track all the places the buffer could have been mapped (we
   would have had to walk the vma list of every process in the
   system to be sure we found all the mappings!).  Instead, we
   force the user to choose one buffer size and stick with
   it.  This small sacrifice is worth the huge reduction in
   error-prone code in dv1394.
*/
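/*
 * The consequence for user space: map once, with the full buffer size,
 * and keep that mapping for the lifetime of the fd.  A sketch (the
 * device setup, frame count and error handling are assumptions for
 * the example):
 *
 *	size_t len = 4 * DV1394_PAL_FRAME_SIZE;   // must match INIT's n_frames
 *	unsigned char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
 *				  MAP_SHARED, fd, 0);
 *	if (buf == MAP_FAILED)
 *		perror("mmap");
 *	// frame i lives at buf + i * DV1394_PAL_FRAME_SIZE
 */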

static int dv1394_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct video_card *video = file_to_video_card(file);
	int retval = -EINVAL;

	/*
	 * We cannot use the blocking variant mutex_lock here because .mmap
	 * is called with mmap_sem held, while .ioctl, .read, .write acquire
	 * video->mtx and subsequently call copy_to/from_user which will
	 * grab mmap_sem in case of a page fault.
	 */
	if (!mutex_trylock(&video->mtx))
		return -EAGAIN;

	if ( ! video_card_initialized(video) ) {
		retval = do_dv1394_init_default(video);
		if (retval)
			goto out;
	}

	retval = dma_region_mmap(&video->dv_buf, file, vma);
out:
	mutex_unlock(&video->mtx);
	return retval;
}

/*** DEVICE FILE INTERFACE *************************************************/

/* no need to serialize, multiple threads OK */
static unsigned int dv1394_poll(struct file *file, struct poll_table_struct *wait)
{
	struct video_card *video = file_to_video_card(file);
	unsigned int mask = 0;
	unsigned long flags;

	poll_wait(file, &video->waitq, wait);

	spin_lock_irqsave(&video->spinlock, flags);
	if ( video->n_frames == 0 ) {

	} else if ( video->active_frame == -1 ) {
		/* nothing going on */
		mask |= POLLOUT;
	} else {
		/* any clear/ready buffers? */
		if (video->n_clear_frames > 0)
			mask |= POLLOUT | POLLIN;
	}
	spin_unlock_irqrestore(&video->spinlock, flags);

	return mask;
}

static int dv1394_fasync(int fd, struct file *file, int on)
{
	/* I just copied this code verbatim from Alan Cox's mouse driver example
	   (Documentation/DocBook/) */

	struct video_card *video = file_to_video_card(file);

	return fasync_helper(fd, file, on, &video->fasync);
}

static ssize_t dv1394_write(struct file *file, const char __user *buffer, size_t count, loff_t *ppos)
{
	struct video_card *video = file_to_video_card(file);
	DECLARE_WAITQUEUE(wait, current);
	ssize_t ret;
	size_t cnt;
	unsigned long flags;
	int target_frame;

	/* serialize this to prevent multi-threaded mayhem */
	if (file->f_flags & O_NONBLOCK) {
		if (!mutex_trylock(&video->mtx))
			return -EAGAIN;
	} else {
		if (mutex_lock_interruptible(&video->mtx))
			return -ERESTARTSYS;
	}

	if ( !video_card_initialized(video) ) {
		ret = do_dv1394_init_default(video);
		if (ret) {
			mutex_unlock(&video->mtx);
			return ret;
		}
	}

	ret = 0;
	add_wait_queue(&video->waitq, &wait);

	while (count > 0) {

		/* must set TASK_INTERRUPTIBLE *before* checking for free
		   buffers; otherwise we could miss a wakeup if the interrupt
		   fires between the check and the schedule() */

		set_current_state(TASK_INTERRUPTIBLE);

		spin_lock_irqsave(&video->spinlock, flags);

		target_frame = video->first_clear_frame;

		spin_unlock_irqrestore(&video->spinlock, flags);

		if (video->frames[target_frame]->state == FRAME_CLEAR) {

			/* how much room is left in the target frame buffer */
			cnt = video->frame_size - (video->write_off - target_frame * video->frame_size);

		} else {
			/* buffer is already used */
			cnt = 0;
		}

		if (cnt > count)
			cnt = count;

		if (cnt <= 0) {
			/* no room left, gotta wait */
			if (file->f_flags & O_NONBLOCK) {
				if (!ret)
					ret = -EAGAIN;
				break;
			}
			if (signal_pending(current)) {
				if (!ret)
					ret = -ERESTARTSYS;
				break;
			}

			schedule();

			continue; /* start over from 'while(count > 0)...' */
		}

		if (copy_from_user(video->dv_buf.kvirt + video->write_off, buffer, cnt)) {
			if (!ret)
				ret = -EFAULT;
			break;
		}

		video->write_off = (video->write_off + cnt) % (video->n_frames * video->frame_size);

		count -= cnt;
		buffer += cnt;
		ret += cnt;

		if (video->write_off == video->frame_size * ((target_frame + 1) % video->n_frames))
			frame_prepare(video, target_frame);
	}

	remove_wait_queue(&video->waitq, &wait);
	set_current_state(TASK_RUNNING);
	mutex_unlock(&video->mtx);
	return ret;
}
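/*
 * Usage sketch for the write path above (user space; the retry loop
 * and buffer are the example's own, not defined by this file): write
 * whole frames and let the driver block until a frame slot is clear:
 *
 *	unsigned char frame[DV1394_PAL_FRAME_SIZE];
 *	size_t done = 0;
 *	while (done < sizeof(frame)) {
 *		ssize_t n = write(fd, frame + done, sizeof(frame) - done);
 *		if (n < 0) {
 *			if (errno == EINTR)
 *				continue;
 *			break;
 *		}
 *		done += n;
 *	}
 *
 * Once write_off crosses a frame boundary, the driver calls
 * frame_prepare() and that frame is queued for transmission.
 */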


static ssize_t dv1394_read(struct file *file, char __user *buffer, size_t count, loff_t *ppos)
{
	struct video_card *video = file_to_video_card(file);
	DECLARE_WAITQUEUE(wait, current);
	ssize_t ret;
	size_t cnt;
	unsigned long flags;
	int target_frame;

	/* serialize this to prevent multi-threaded mayhem */
	if (file->f_flags & O_NONBLOCK) {
		if (!mutex_trylock(&video->mtx))
			return -EAGAIN;
	} else {
		if (mutex_lock_interruptible(&video->mtx))
			return -ERESTARTSYS;
	}

	if ( !video_card_initialized(video) ) {
		ret = do_dv1394_init_default(video);
		if (ret) {
			mutex_unlock(&video->mtx);
			return ret;
		}
		video->continuity_counter = -1;

		receive_packets(video);

		start_dma_receive(video);
	}

	ret = 0;
	add_wait_queue(&video->waitq, &wait);

	while (count > 0) {

		/* must set TASK_INTERRUPTIBLE *before* checking for free
		   buffers; otherwise we could miss a wakeup if the interrupt
		   fires between the check and the schedule() */

		set_current_state(TASK_INTERRUPTIBLE);

		spin_lock_irqsave(&video->spinlock, flags);

		target_frame = video->first_clear_frame;

		spin_unlock_irqrestore(&video->spinlock, flags);

		if (target_frame >= 0 &&
		    video->n_clear_frames > 0 &&
		    video->frames[target_frame]->state == FRAME_CLEAR) {

			/* how much data is left to read out of the target frame */
			cnt = video->frame_size - (video->write_off - target_frame * video->frame_size);

		} else {
			/* no received frame is ready to be read yet */
			cnt = 0;
		}

		if (cnt > count)
			cnt = count;

		if (cnt <= 0) {
			/* nothing to read yet, gotta wait */
			if (file->f_flags & O_NONBLOCK) {
				if (!ret)
					ret = -EAGAIN;
				break;
			}
			if (signal_pending(current)) {
				if (!ret)
					ret = -ERESTARTSYS;
				break;
			}

			schedule();

			continue; /* start over from 'while(count > 0)...' */
		}

		if (copy_to_user(buffer, video->dv_buf.kvirt + video->write_off, cnt)) {
			if (!ret)
				ret = -EFAULT;
			break;
		}

		video->write_off = (video->write_off + cnt) % (video->n_frames * video->frame_size);

		count -= cnt;
		buffer += cnt;
		ret += cnt;

		if (video->write_off == video->frame_size * ((target_frame + 1) % video->n_frames)) {
			spin_lock_irqsave(&video->spinlock, flags);
			video->n_clear_frames--;
			video->first_clear_frame = (video->first_clear_frame + 1) % video->n_frames;
			spin_unlock_irqrestore(&video->spinlock, flags);
		}
	}

	remove_wait_queue(&video->waitq, &wait);
	set_current_state(TASK_RUNNING);
	mutex_unlock(&video->mtx);
	return ret;
}
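/*
 * Both I/O loops above rely on the classic lockless wait idiom: mark
 * the task TASK_INTERRUPTIBLE *first*, then test for work.  If the
 * interrupt fires in between, its wake_up() puts the task back to
 * TASK_RUNNING and the subsequent schedule() returns immediately, so
 * the wakeup cannot be lost.  In outline:
 *
 *	add_wait_queue(&q, &wait);
 *	for (;;) {
 *		set_current_state(TASK_INTERRUPTIBLE);
 *		if (condition)
 *			break;
 *		schedule();
 *	}
 *	set_current_state(TASK_RUNNING);
 *	remove_wait_queue(&q, &wait);
 */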


/*** DEVICE IOCTL INTERFACE ************************************************/

static long dv1394_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
	struct video_card *video = file_to_video_card(file);
	unsigned long flags;
	int ret = -EINVAL;
	void __user *argp = (void __user *)arg;

	DECLARE_WAITQUEUE(wait, current);

	/* serialize this to prevent multi-threaded mayhem */
	if (file->f_flags & O_NONBLOCK) {
		if (!mutex_trylock(&video->mtx))
			return -EAGAIN;
	} else {
		if (mutex_lock_interruptible(&video->mtx))
			return -ERESTARTSYS;
	}

	switch(cmd)
	{
	case DV1394_IOC_SUBMIT_FRAMES: {
		unsigned int n_submit;

		if ( !video_card_initialized(video) ) {
			ret = do_dv1394_init_default(video);
			if (ret)
				goto out;
		}

		n_submit = (unsigned int) arg;

		if (n_submit > video->n_frames) {
			ret = -EINVAL;
			goto out;
		}

		while (n_submit > 0) {

			add_wait_queue(&video->waitq, &wait);
			set_current_state(TASK_INTERRUPTIBLE);

			spin_lock_irqsave(&video->spinlock, flags);

			/* wait until video->first_clear_frame is really CLEAR */
			while (video->frames[video->first_clear_frame]->state != FRAME_CLEAR) {

				spin_unlock_irqrestore(&video->spinlock, flags);

				if (signal_pending(current)) {
					remove_wait_queue(&video->waitq, &wait);
					set_current_state(TASK_RUNNING);
					ret = -EINTR;
					goto out;
				}

				schedule();
				set_current_state(TASK_INTERRUPTIBLE);

				spin_lock_irqsave(&video->spinlock, flags);
			}
			spin_unlock_irqrestore(&video->spinlock, flags);

			remove_wait_queue(&video->waitq, &wait);
			set_current_state(TASK_RUNNING);

			frame_prepare(video, video->first_clear_frame);

			n_submit--;
		}

		ret = 0;
		break;
	}

	case DV1394_IOC_WAIT_FRAMES: {
		unsigned int n_wait;

		if ( !video_card_initialized(video) ) {
			ret = -EINVAL;
			goto out;
		}

		n_wait = (unsigned int) arg;

		/* since we re-run the last frame on underflow, we will
		   never actually have n_frames clear frames; at most only
		   n_frames - 1 */

		if (n_wait > (video->n_frames-1) ) {
			ret = -EINVAL;
			goto out;
		}

		add_wait_queue(&video->waitq, &wait);
		set_current_state(TASK_INTERRUPTIBLE);

		spin_lock_irqsave(&video->spinlock, flags);

		while (video->n_clear_frames < n_wait) {

			spin_unlock_irqrestore(&video->spinlock, flags);

			if (signal_pending(current)) {
				remove_wait_queue(&video->waitq, &wait);
				set_current_state(TASK_RUNNING);
				ret = -EINTR;
				goto out;
			}

			schedule();
			set_current_state(TASK_INTERRUPTIBLE);

			spin_lock_irqsave(&video->spinlock, flags);
		}

		spin_unlock_irqrestore(&video->spinlock, flags);

		remove_wait_queue(&video->waitq, &wait);
		set_current_state(TASK_RUNNING);
		ret = 0;
		break;
	}

	case DV1394_IOC_RECEIVE_FRAMES: {
		unsigned int n_recv;

		if ( !video_card_initialized(video) ) {
			ret = -EINVAL;
			goto out;
		}

		n_recv = (unsigned int) arg;

		/* at least one frame must be active */
		if (n_recv > (video->n_frames-1) ) {
			ret = -EINVAL;
			goto out;
		}

		spin_lock_irqsave(&video->spinlock, flags);

		/* release the clear frames */
		video->n_clear_frames -= n_recv;

		/* advance the clear frame cursor */
		video->first_clear_frame = (video->first_clear_frame + n_recv) % video->n_frames;

		/* reset dropped_frames */
		video->dropped_frames = 0;

		spin_unlock_irqrestore(&video->spinlock, flags);

		ret = 0;
		break;
	}

	case DV1394_IOC_START_RECEIVE: {
		if ( !video_card_initialized(video) ) {
			ret = do_dv1394_init_default(video);
			if (ret)
				goto out;
		}

		video->continuity_counter = -1;

		receive_packets(video);

		start_dma_receive(video);

		ret = 0;
		break;
	}

	case DV1394_IOC_INIT: {
		struct dv1394_init init;
		if (!argp) {
			ret = do_dv1394_init_default(video);
		} else {
			if (copy_from_user(&init, argp, sizeof(init))) {
				ret = -EFAULT;
				goto out;
			}
			ret = do_dv1394_init(video, &init);
		}
		break;
	}

	case DV1394_IOC_SHUTDOWN:
		do_dv1394_shutdown(video, 0);
		ret = 0;
		break;


	case DV1394_IOC_GET_STATUS: {
		struct dv1394_status status;

		if ( !video_card_initialized(video) ) {
			ret = -EINVAL;
			goto out;
		}

		status.init.api_version = DV1394_API_VERSION;
		status.init.channel = video->channel;
		status.init.n_frames = video->n_frames;
		status.init.format = video->pal_or_ntsc;
		status.init.cip_n = video->cip_n;
		status.init.cip_d = video->cip_d;
		status.init.syt_offset = video->syt_offset;

		status.first_clear_frame = video->first_clear_frame;

		/* the rest of the fields need to be locked against the interrupt */
		spin_lock_irqsave(&video->spinlock, flags);

		status.active_frame = video->active_frame;
		status.n_clear_frames = video->n_clear_frames;

		status.dropped_frames = video->dropped_frames;

		/* reset dropped_frames */
		video->dropped_frames = 0;

		spin_unlock_irqrestore(&video->spinlock, flags);

		if (copy_to_user(argp, &status, sizeof(status))) {
			ret = -EFAULT;
			goto out;
		}

		ret = 0;
		break;
	}

	default:
		break;
	}

out:
	mutex_unlock(&video->mtx);
	return ret;
}
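/*
 * Receive-side usage sketch for the ioctls above (user space; the loop
 * structure is the example's own, only the ioctl names come from this
 * driver): start reception, then alternate WAIT/RECEIVE while reading
 * frames out of the mmap()ed buffer:
 *
 *	struct dv1394_status st;
 *	ioctl(fd, DV1394_IOC_START_RECEIVE, 0);
 *	for (;;) {
 *		ioctl(fd, DV1394_IOC_WAIT_FRAMES, 1);    // block for >= 1 frame
 *		ioctl(fd, DV1394_IOC_GET_STATUS, &st);   // where did it land?
 *		// consume st.n_clear_frames frames starting at
 *		// buf + st.first_clear_frame * frame_size ...
 *		ioctl(fd, DV1394_IOC_RECEIVE_FRAMES, st.n_clear_frames);
 *	}
 */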

/*** DEVICE FILE INTERFACE CONTINUED ***************************************/

static int dv1394_open(struct inode *inode, struct file *file)
{
	struct video_card *video = NULL;

	if (file->private_data) {
		video = (struct video_card*) file->private_data;

	} else {
		/* look up the card by ID */
		unsigned long flags;
		int idx = ieee1394_file_to_instance(file);

		spin_lock_irqsave(&dv1394_cards_lock, flags);
		if (!list_empty(&dv1394_cards)) {
			struct video_card *p;
			list_for_each_entry(p, &dv1394_cards, list) {
				if ((p->id) == idx) {
					video = p;
					break;
				}
			}
		}
		spin_unlock_irqrestore(&dv1394_cards_lock, flags);

		if (!video) {
			debug_printk("dv1394: OHCI card %d not found\n", idx);
			return -ENODEV;
		}

		file->private_data = (void*) video;
	}

#ifndef DV1394_ALLOW_MORE_THAN_ONE_OPEN

	if ( test_and_set_bit(0, &video->open) ) {
		/* video is already open by someone else */
		return -EBUSY;
	}

#endif

	printk(KERN_INFO "%s: NOTE, the dv1394 interface is unsupported "
	       "and will not be available in the new firewire driver stack. "
	       "Try libraw1394 based programs instead.\n", current->comm);

	return nonseekable_open(inode, file);
}
1829 1828
1830 1829
1831 static int dv1394_release(struct inode *inode, struct file *file) 1830 static int dv1394_release(struct inode *inode, struct file *file)
1832 { 1831 {
1833 struct video_card *video = file_to_video_card(file); 1832 struct video_card *video = file_to_video_card(file);
1834 1833
1835 /* OK to free the DMA buffer, no more mappings can exist */ 1834 /* OK to free the DMA buffer, no more mappings can exist */
1836 do_dv1394_shutdown(video, 1); 1835 do_dv1394_shutdown(video, 1);
1837 1836
1838 /* give someone else a turn */ 1837 /* give someone else a turn */
1839 clear_bit(0, &video->open); 1838 clear_bit(0, &video->open);
1840 1839
1841 return 0; 1840 return 0;
1842 } 1841 }
1843 1842
1844 1843
1845 /*** DEVICE DRIVER HANDLERS ************************************************/ 1844 /*** DEVICE DRIVER HANDLERS ************************************************/
1846 1845
1847 static void it_tasklet_func(unsigned long data) 1846 static void it_tasklet_func(unsigned long data)
1848 { 1847 {
1849 int wake = 0; 1848 int wake = 0;
1850 struct video_card *video = (struct video_card*) data; 1849 struct video_card *video = (struct video_card*) data;
1851 1850
1852 spin_lock(&video->spinlock); 1851 spin_lock(&video->spinlock);
1853 1852
1854 if (!video->dma_running) 1853 if (!video->dma_running)
1855 goto out; 1854 goto out;
1856 1855
1857 irq_printk("ContextControl = %08x, CommandPtr = %08x\n", 1856 irq_printk("ContextControl = %08x, CommandPtr = %08x\n",
1858 reg_read(video->ohci, video->ohci_IsoXmitContextControlSet), 1857 reg_read(video->ohci, video->ohci_IsoXmitContextControlSet),
1859 reg_read(video->ohci, video->ohci_IsoXmitCommandPtr) 1858 reg_read(video->ohci, video->ohci_IsoXmitCommandPtr)
1860 ); 1859 );
1861 1860
1862 1861
1863 if ( (video->ohci_it_ctx != -1) && 1862 if ( (video->ohci_it_ctx != -1) &&
1864 (reg_read(video->ohci, video->ohci_IsoXmitContextControlSet) & (1 << 10)) ) { 1863 (reg_read(video->ohci, video->ohci_IsoXmitContextControlSet) & (1 << 10)) ) {
1865 1864
1866 struct frame *f; 1865 struct frame *f;
1867 unsigned int frame, i; 1866 unsigned int frame, i;
1868 1867
1869 1868
1870 if (video->active_frame == -1) 1869 if (video->active_frame == -1)
1871 frame = 0; 1870 frame = 0;
1872 else 1871 else
1873 frame = video->active_frame; 1872 frame = video->active_frame;
1874 1873
1875 /* check all the DMA-able frames */ 1874 /* check all the DMA-able frames */
1876 for (i = 0; i < video->n_frames; i++, frame = (frame+1) % video->n_frames) { 1875 for (i = 0; i < video->n_frames; i++, frame = (frame+1) % video->n_frames) {
1877 1876
1878 irq_printk("IRQ checking frame %d...", frame); 1877 irq_printk("IRQ checking frame %d...", frame);
1879 f = video->frames[frame]; 1878 f = video->frames[frame];
1880 if (f->state != FRAME_READY) { 1879 if (f->state != FRAME_READY) {
1881 irq_printk("clear, skipping\n"); 1880 irq_printk("clear, skipping\n");
1882 /* we don't own this frame */ 1881 /* we don't own this frame */
1883 continue; 1882 continue;
1884 } 1883 }
1885 1884
1886 irq_printk("DMA\n"); 1885 irq_printk("DMA\n");
1887 1886
1888 /* check the frame begin semaphore to see if we can free the previous frame */ 1887 /* check the frame begin semaphore to see if we can free the previous frame */
1889 if ( *(f->frame_begin_timestamp) ) { 1888 if ( *(f->frame_begin_timestamp) ) {
1890 int prev_frame; 1889 int prev_frame;
1891 struct frame *prev_f; 1890 struct frame *prev_f;
1892 1891
1893 1892
1894 1893
1895 /* don't reset, need this later *(f->frame_begin_timestamp) = 0; */ 1894 /* don't reset, need this later *(f->frame_begin_timestamp) = 0; */
1896 irq_printk(" BEGIN\n"); 1895 irq_printk(" BEGIN\n");
1897 1896
1898 prev_frame = frame - 1; 1897 prev_frame = frame - 1;
1899 if (prev_frame == -1) 1898 if (prev_frame == -1)
1900 prev_frame += video->n_frames; 1899 prev_frame += video->n_frames;
1901 prev_f = video->frames[prev_frame]; 1900 prev_f = video->frames[prev_frame];
1902 1901
1903 /* make sure we can actually garbage collect 1902 /* make sure we can actually garbage collect
1904 this frame */ 1903 this frame */
1905 if ( (prev_f->state == FRAME_READY) && 1904 if ( (prev_f->state == FRAME_READY) &&
1906 prev_f->done && (!f->done) ) 1905 prev_f->done && (!f->done) )
1907 { 1906 {
1908 frame_reset(prev_f); 1907 frame_reset(prev_f);
1909 video->n_clear_frames++; 1908 video->n_clear_frames++;
1910 wake = 1; 1909 wake = 1;
1911 video->active_frame = frame; 1910 video->active_frame = frame;
1912 1911
1913 irq_printk(" BEGIN - freeing previous frame %d, new active frame is %d\n", prev_frame, frame); 1912 irq_printk(" BEGIN - freeing previous frame %d, new active frame is %d\n", prev_frame, frame);
1914 } else { 1913 } else {
1915 irq_printk(" BEGIN - can't free yet\n"); 1914 irq_printk(" BEGIN - can't free yet\n");
1916 } 1915 }
1917 1916
1918 f->done = 1; 1917 f->done = 1;
1919 } 1918 }
1920 1919
1921 1920
1922 /* see if we need to set the timestamp for the next frame */ 1921 /* see if we need to set the timestamp for the next frame */
1923 if ( *(f->mid_frame_timestamp) ) { 1922 if ( *(f->mid_frame_timestamp) ) {
1924 struct frame *next_frame; 1923 struct frame *next_frame;
1925 u32 begin_ts, ts_cyc, ts_off; 1924 u32 begin_ts, ts_cyc, ts_off;
1926 1925
1927 *(f->mid_frame_timestamp) = 0; 1926 *(f->mid_frame_timestamp) = 0;
1928 1927
1929 begin_ts = le32_to_cpu(*(f->frame_begin_timestamp)); 1928 begin_ts = le32_to_cpu(*(f->frame_begin_timestamp));
1930 1929
1931 irq_printk(" MIDDLE - first packet was sent at cycle %4u (%2u), assigned timestamp was (%2u) %4u\n", 1930 irq_printk(" MIDDLE - first packet was sent at cycle %4u (%2u), assigned timestamp was (%2u) %4u\n",
1932 begin_ts & 0x1FFF, begin_ts & 0xF, 1931 begin_ts & 0x1FFF, begin_ts & 0xF,
1933 f->assigned_timestamp >> 12, f->assigned_timestamp & 0xFFF); 1932 f->assigned_timestamp >> 12, f->assigned_timestamp & 0xFFF);
1934 1933
1935 /* prepare next frame and assign timestamp */ 1934 /* prepare next frame and assign timestamp */
1936 next_frame = video->frames[ (frame+1) % video->n_frames ]; 1935 next_frame = video->frames[ (frame+1) % video->n_frames ];
1937 1936
1938 if (next_frame->state == FRAME_READY) { 1937 if (next_frame->state == FRAME_READY) {
1939 irq_printk(" MIDDLE - next frame is ready, good\n"); 1938 irq_printk(" MIDDLE - next frame is ready, good\n");
1940 } else { 1939 } else {
1941 debug_printk("dv1394: Underflow! At least one frame has been dropped.\n"); 1940 debug_printk("dv1394: Underflow! At least one frame has been dropped.\n");
1942 next_frame = f; 1941 next_frame = f;
1943 } 1942 }
1944 1943
1945 /* set the timestamp to the timestamp of the last frame sent, 1944 /* set the timestamp to the timestamp of the last frame sent,
1946 plus the length of the last frame sent, plus the syt latency */ 1945 plus the length of the last frame sent, plus the syt latency */
1947 ts_cyc = begin_ts & 0xF; 1946 ts_cyc = begin_ts & 0xF;
1948 /* advance one frame, plus syt latency (typically 2-3) */ 1947 /* advance one frame, plus syt latency (typically 2-3) */
1949 ts_cyc += f->n_packets + video->syt_offset ; 1948 ts_cyc += f->n_packets + video->syt_offset ;
1950 1949
1951 ts_off = 0; 1950 ts_off = 0;
1952 1951
1953 ts_cyc += ts_off/3072; 1952 ts_cyc += ts_off/3072;
1954 ts_off %= 3072; 1953 ts_off %= 3072;
1955 1954
1956 next_frame->assigned_timestamp = ((ts_cyc&0xF) << 12) + ts_off; 1955 next_frame->assigned_timestamp = ((ts_cyc&0xF) << 12) + ts_off;
1957 if (next_frame->cip_syt1) { 1956 if (next_frame->cip_syt1) {
1958 next_frame->cip_syt1->b[6] = next_frame->assigned_timestamp >> 8; 1957 next_frame->cip_syt1->b[6] = next_frame->assigned_timestamp >> 8;
1959 next_frame->cip_syt1->b[7] = next_frame->assigned_timestamp & 0xFF; 1958 next_frame->cip_syt1->b[7] = next_frame->assigned_timestamp & 0xFF;
1960 } 1959 }
1961 if (next_frame->cip_syt2) { 1960 if (next_frame->cip_syt2) {
1962 next_frame->cip_syt2->b[6] = next_frame->assigned_timestamp >> 8; 1961 next_frame->cip_syt2->b[6] = next_frame->assigned_timestamp >> 8;
1963 next_frame->cip_syt2->b[7] = next_frame->assigned_timestamp & 0xFF; 1962 next_frame->cip_syt2->b[7] = next_frame->assigned_timestamp & 0xFF;
1964 } 1963 }
1965 1964
1966 } 1965 }
1967 1966
1968 /* see if the frame looped */ 1967 /* see if the frame looped */
1969 if ( *(f->frame_end_timestamp) ) { 1968 if ( *(f->frame_end_timestamp) ) {
1970 1969
1971 *(f->frame_end_timestamp) = 0; 1970 *(f->frame_end_timestamp) = 0;
1972 1971
1973 debug_printk(" END - the frame looped at least once\n"); 1972 debug_printk(" END - the frame looped at least once\n");
1974 1973
1975 video->dropped_frames++; 1974 video->dropped_frames++;
1976 } 1975 }
1977 1976
1978 } /* for (each frame) */ 1977 } /* for (each frame) */
1979 } 1978 }
1980 1979
1981 if (wake) { 1980 if (wake) {
1982 kill_fasync(&video->fasync, SIGIO, POLL_OUT); 1981 kill_fasync(&video->fasync, SIGIO, POLL_OUT);
1983 1982
1984 /* wake readers/writers/ioctl'ers */ 1983 /* wake readers/writers/ioctl'ers */
1985 wake_up_interruptible(&video->waitq); 1984 wake_up_interruptible(&video->waitq);
1986 } 1985 }
1987 1986
1988 out: 1987 out:
1989 spin_unlock(&video->spinlock); 1988 spin_unlock(&video->spinlock);
1990 } 1989 }
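
The timestamp block in it_tasklet_func packs an IEC 61883 SYT value: the low 4 bits of the 8 kHz cycle count go in bits 15..12 and the cycle offset (0..3071 ticks of the 24.576 MHz clock) in bits 11..0. Since ts_off is pinned to 0 here, the ts_off/3072 and ts_off %= 3072 steps are no-ops — the same kind of vestigial arithmetic this commit removes elsewhere in the file. A sketch of the packing, with pack_syt as a hypothetical helper:

        static inline u16 pack_syt(unsigned int cycle, unsigned int offset)
        {
                /* cycle count low nibble | cycle offset, per IEC 61883-1 */
                return ((cycle & 0xF) << 12) | (offset % 3072);
        }

For example, cycle 18 with offset 100 yields ((18 & 0xF) << 12) | 100 = 0x2064.
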
1991 1990
1992 static void ir_tasklet_func(unsigned long data) 1991 static void ir_tasklet_func(unsigned long data)
1993 { 1992 {
1994 int wake = 0; 1993 int wake = 0;
1995 struct video_card *video = (struct video_card*) data; 1994 struct video_card *video = (struct video_card*) data;
1996 1995
1997 spin_lock(&video->spinlock); 1996 spin_lock(&video->spinlock);
1998 1997
1999 if (!video->dma_running) 1998 if (!video->dma_running)
2000 goto out; 1999 goto out;
2001 2000
2002 if ( (video->ohci_ir_ctx != -1) && 2001 if ( (video->ohci_ir_ctx != -1) &&
2003 (reg_read(video->ohci, video->ohci_IsoRcvContextControlSet) & (1 << 10)) ) { 2002 (reg_read(video->ohci, video->ohci_IsoRcvContextControlSet) & (1 << 10)) ) {
2004 2003
2005 int sof=0; /* start-of-frame flag */ 2004 int sof=0; /* start-of-frame flag */
2006 struct frame *f; 2005 struct frame *f;
2007 u16 packet_length, packet_time; 2006 u16 packet_length;
2008 int i, dbc=0; 2007 int i, dbc=0;
2009 struct DMA_descriptor_block *block = NULL; 2008 struct DMA_descriptor_block *block = NULL;
2010 u16 xferstatus; 2009 u16 xferstatus;
2011 2010
2012 int next_i, prev_i; 2011 int next_i, prev_i;
2013 struct DMA_descriptor_block *next = NULL; 2012 struct DMA_descriptor_block *next = NULL;
2014 dma_addr_t next_dma = 0; 2013 dma_addr_t next_dma = 0;
2015 struct DMA_descriptor_block *prev = NULL; 2014 struct DMA_descriptor_block *prev = NULL;
2016 2015
2017 /* loop over all descriptors in all frames */ 2016 /* loop over all descriptors in all frames */
2018 for (i = 0; i < video->n_frames*MAX_PACKETS; i++) { 2017 for (i = 0; i < video->n_frames*MAX_PACKETS; i++) {
2019 struct packet *p = dma_region_i(&video->packet_buf, struct packet, video->current_packet); 2018 struct packet *p = dma_region_i(&video->packet_buf, struct packet, video->current_packet);
2020 2019
2021 /* make sure we are seeing the latest changes to p */ 2020 /* make sure we are seeing the latest changes to p */
2022 dma_region_sync_for_cpu(&video->packet_buf, 2021 dma_region_sync_for_cpu(&video->packet_buf,
2023 (unsigned long) p - (unsigned long) video->packet_buf.kvirt, 2022 (unsigned long) p - (unsigned long) video->packet_buf.kvirt,
2024 sizeof(struct packet)); 2023 sizeof(struct packet));
2025 2024
2026 packet_length = le16_to_cpu(p->data_length); 2025 packet_length = le16_to_cpu(p->data_length);
2027 packet_time = le16_to_cpu(p->timestamp);
2028 2026
2029 irq_printk("received packet %02d, timestamp=%04x, length=%04x, sof=%02x%02x\n", video->current_packet,
2030 packet_time, packet_length,
2031 p->data[0], p->data[1]);
2032
2033 /* get the descriptor based on packet_buffer cursor */ 2027 /* get the descriptor based on packet_buffer cursor */
2034 f = video->frames[video->current_packet / MAX_PACKETS]; 2028 f = video->frames[video->current_packet / MAX_PACKETS];
2035 block = &(f->descriptor_pool[video->current_packet % MAX_PACKETS]); 2029 block = &(f->descriptor_pool[video->current_packet % MAX_PACKETS]);
2036 xferstatus = le32_to_cpu(block->u.in.il.q[3]) >> 16; 2030 xferstatus = le32_to_cpu(block->u.in.il.q[3]) >> 16;
2037 xferstatus &= 0x1F; 2031 xferstatus &= 0x1F;
2038 irq_printk("ir_tasklet_func: xferStatus/resCount [%d] = 0x%08x\n", i, le32_to_cpu(block->u.in.il.q[3]) ); 2032 irq_printk("ir_tasklet_func: xferStatus/resCount [%d] = 0x%08x\n", i, le32_to_cpu(block->u.in.il.q[3]) );
2039 2033
2040 /* get the current frame */ 2034 /* get the current frame */
2041 f = video->frames[video->active_frame]; 2035 f = video->frames[video->active_frame];
2042 2036
2043 /* exclude empty packet */ 2037 /* exclude empty packet */
2044 if (packet_length > 8 && xferstatus == 0x11) { 2038 if (packet_length > 8 && xferstatus == 0x11) {
2045 /* check for start of frame */ 2039 /* check for start of frame */
2046 /* DRD> Changed to check section type ([0]>>5==0) 2040 /* DRD> Changed to check section type ([0]>>5==0)
2047 and dif sequence ([1]>>4==0) */ 2041 and dif sequence ([1]>>4==0) */
2048 sof = ( (p->data[0] >> 5) == 0 && (p->data[1] >> 4) == 0); 2042 sof = ( (p->data[0] >> 5) == 0 && (p->data[1] >> 4) == 0);
2049 2043
2050 dbc = (int) (p->cip_h1 >> 24); 2044 dbc = (int) (p->cip_h1 >> 24);
2051 if ( video->continuity_counter != -1 && dbc > ((video->continuity_counter + 1) % 256) ) 2045 if ( video->continuity_counter != -1 && dbc > ((video->continuity_counter + 1) % 256) )
2052 { 2046 {
2053 printk(KERN_WARNING "dv1394: discontinuity detected, dropping all frames\n" ); 2047 printk(KERN_WARNING "dv1394: discontinuity detected, dropping all frames\n" );
2054 video->dropped_frames += video->n_clear_frames + 1; 2048 video->dropped_frames += video->n_clear_frames + 1;
2055 video->first_frame = 0; 2049 video->first_frame = 0;
2056 video->n_clear_frames = 0; 2050 video->n_clear_frames = 0;
2057 video->first_clear_frame = -1; 2051 video->first_clear_frame = -1;
2058 } 2052 }
2059 video->continuity_counter = dbc; 2053 video->continuity_counter = dbc;
2060 2054
2061 if (!video->first_frame) { 2055 if (!video->first_frame) {
2062 if (sof) { 2056 if (sof) {
2063 video->first_frame = 1; 2057 video->first_frame = 1;
2064 } 2058 }
2065 2059
2066 } else if (sof) { 2060 } else if (sof) {
2067 /* close current frame */ 2061 /* close current frame */
2068 frame_reset(f); /* f->state = STATE_CLEAR */ 2062 frame_reset(f); /* f->state = STATE_CLEAR */
2069 video->n_clear_frames++; 2063 video->n_clear_frames++;
2070 if (video->n_clear_frames > video->n_frames) { 2064 if (video->n_clear_frames > video->n_frames) {
2071 video->dropped_frames++; 2065 video->dropped_frames++;
2072 printk(KERN_WARNING "dv1394: dropped a frame during reception\n" ); 2066 printk(KERN_WARNING "dv1394: dropped a frame during reception\n" );
2073 video->n_clear_frames = video->n_frames-1; 2067 video->n_clear_frames = video->n_frames-1;
2074 video->first_clear_frame = (video->first_clear_frame + 1) % video->n_frames; 2068 video->first_clear_frame = (video->first_clear_frame + 1) % video->n_frames;
2075 } 2069 }
2076 if (video->first_clear_frame == -1) 2070 if (video->first_clear_frame == -1)
2077 video->first_clear_frame = video->active_frame; 2071 video->first_clear_frame = video->active_frame;
2078 2072
2079 /* get the next frame */ 2073 /* get the next frame */
2080 video->active_frame = (video->active_frame + 1) % video->n_frames; 2074 video->active_frame = (video->active_frame + 1) % video->n_frames;
2081 f = video->frames[video->active_frame]; 2075 f = video->frames[video->active_frame];
2082 irq_printk(" frame received, active_frame = %d, n_clear_frames = %d, first_clear_frame = %d\n", 2076 irq_printk(" frame received, active_frame = %d, n_clear_frames = %d, first_clear_frame = %d\n",
2083 video->active_frame, video->n_clear_frames, video->first_clear_frame); 2077 video->active_frame, video->n_clear_frames, video->first_clear_frame);
2084 } 2078 }
2085 if (video->first_frame) { 2079 if (video->first_frame) {
2086 if (sof) { 2080 if (sof) {
2087 /* open next frame */ 2081 /* open next frame */
2088 f->state = FRAME_READY; 2082 f->state = FRAME_READY;
2089 } 2083 }
2090 2084
2091 /* copy to buffer */ 2085 /* copy to buffer */
2092 if (f->n_packets > (video->frame_size / 480)) { 2086 if (f->n_packets > (video->frame_size / 480)) {
2093 printk(KERN_ERR "frame buffer overflow during receive\n"); 2087 printk(KERN_ERR "frame buffer overflow during receive\n");
2094 } 2088 }
2095 2089
2096 frame_put_packet(f, p); 2090 frame_put_packet(f, p);
2097 2091
2098 } /* first_frame */ 2092 } /* first_frame */
2099 } 2093 }
2100 2094
2101 /* stop, end of ready packets */ 2095 /* stop, end of ready packets */
2102 else if (xferstatus == 0) { 2096 else if (xferstatus == 0) {
2103 break; 2097 break;
2104 } 2098 }
2105 2099
2106 /* reset xferStatus & resCount */ 2100 /* reset xferStatus & resCount */
2107 block->u.in.il.q[3] = cpu_to_le32(512); 2101 block->u.in.il.q[3] = cpu_to_le32(512);
2108 2102
2109 /* terminate dma chain at this (next) packet */ 2103 /* terminate dma chain at this (next) packet */
2110 next_i = video->current_packet; 2104 next_i = video->current_packet;
2111 f = video->frames[next_i / MAX_PACKETS]; 2105 f = video->frames[next_i / MAX_PACKETS];
2112 next = &(f->descriptor_pool[next_i % MAX_PACKETS]); 2106 next = &(f->descriptor_pool[next_i % MAX_PACKETS]);
2113 next_dma = ((unsigned long) block - (unsigned long) f->descriptor_pool) + f->descriptor_pool_dma; 2107 next_dma = ((unsigned long) block - (unsigned long) f->descriptor_pool) + f->descriptor_pool_dma;
2114 next->u.in.il.q[0] |= cpu_to_le32(3 << 20); /* enable interrupt */ 2108 next->u.in.il.q[0] |= cpu_to_le32(3 << 20); /* enable interrupt */
2115 next->u.in.il.q[2] = cpu_to_le32(0); /* disable branch */ 2109 next->u.in.il.q[2] = cpu_to_le32(0); /* disable branch */
2116 2110
2117 /* link previous to next */ 2111 /* link previous to next */
2118 prev_i = (next_i == 0) ? (MAX_PACKETS * video->n_frames - 1) : (next_i - 1); 2112 prev_i = (next_i == 0) ? (MAX_PACKETS * video->n_frames - 1) : (next_i - 1);
2119 f = video->frames[prev_i / MAX_PACKETS]; 2113 f = video->frames[prev_i / MAX_PACKETS];
2120 prev = &(f->descriptor_pool[prev_i % MAX_PACKETS]); 2114 prev = &(f->descriptor_pool[prev_i % MAX_PACKETS]);
2121 if (prev_i % (MAX_PACKETS/2)) { 2115 if (prev_i % (MAX_PACKETS/2)) {
2122 prev->u.in.il.q[0] &= ~cpu_to_le32(3 << 20); /* no interrupt */ 2116 prev->u.in.il.q[0] &= ~cpu_to_le32(3 << 20); /* no interrupt */
2123 } else { 2117 } else {
2124 prev->u.in.il.q[0] |= cpu_to_le32(3 << 20); /* enable interrupt */ 2118 prev->u.in.il.q[0] |= cpu_to_le32(3 << 20); /* enable interrupt */
2125 } 2119 }
2126 prev->u.in.il.q[2] = cpu_to_le32(next_dma | 1); /* set Z=1 */ 2120 prev->u.in.il.q[2] = cpu_to_le32(next_dma | 1); /* set Z=1 */
2127 wmb(); 2121 wmb();
2128 2122
2129 /* wake up DMA in case it fell asleep */ 2123 /* wake up DMA in case it fell asleep */
2130 reg_write(video->ohci, video->ohci_IsoRcvContextControlSet, (1 << 12)); 2124 reg_write(video->ohci, video->ohci_IsoRcvContextControlSet, (1 << 12));
2131 2125
2132 /* advance packet_buffer cursor */ 2126 /* advance packet_buffer cursor */
2133 video->current_packet = (video->current_packet + 1) % (MAX_PACKETS * video->n_frames); 2127 video->current_packet = (video->current_packet + 1) % (MAX_PACKETS * video->n_frames);
2134 2128
2135 } /* for all packets */ 2129 } /* for all packets */
2136 2130
2137 wake = 1; /* why the hell not? */ 2131 wake = 1; /* why the hell not? */
2138 2132
2139 } /* receive interrupt */ 2133 } /* receive interrupt */
2140 2134
2141 if (wake) { 2135 if (wake) {
2142 kill_fasync(&video->fasync, SIGIO, POLL_IN); 2136 kill_fasync(&video->fasync, SIGIO, POLL_IN);
2143 2137
2144 /* wake readers/writers/ioctl'ers */ 2138 /* wake readers/writers/ioctl'ers */
2145 wake_up_interruptible(&video->waitq); 2139 wake_up_interruptible(&video->waitq);
2146 } 2140 }
2147 2141
2148 out: 2142 out:
2149 spin_unlock(&video->spinlock); 2143 spin_unlock(&video->spinlock);
2150 } 2144 }
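
Two details of the receive loop above deserve a note. The data block counter (DBC) sits in the top byte of the first CIP quadlet and wraps modulo 256; the driver drops everything when it sees a jump past the expected successor. And to throttle interrupts, only every (MAX_PACKETS/2)-th descriptor in the relinked ring requests one. Hedged restatements of both tests (cip_jumped_forward and want_irq are hypothetical names):

        static bool cip_jumped_forward(int prev_dbc, int dbc)
        {
                /* only forward jumps past prev_dbc + 1 (mod 256) are flagged */
                return prev_dbc != -1 && dbc > (prev_dbc + 1) % 256;
        }

        static bool want_irq(int prev_i)
        {
                /* interrupt twice per frame: on multiples of MAX_PACKETS/2 */
                return (prev_i % (MAX_PACKETS / 2)) == 0;
        }
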
2151 2145
2152 static struct cdev dv1394_cdev; 2146 static struct cdev dv1394_cdev;
2153 static const struct file_operations dv1394_fops= 2147 static const struct file_operations dv1394_fops=
2154 { 2148 {
2155 .owner = THIS_MODULE, 2149 .owner = THIS_MODULE,
2156 .poll = dv1394_poll, 2150 .poll = dv1394_poll,
2157 .unlocked_ioctl = dv1394_ioctl, 2151 .unlocked_ioctl = dv1394_ioctl,
2158 #ifdef CONFIG_COMPAT 2152 #ifdef CONFIG_COMPAT
2159 .compat_ioctl = dv1394_compat_ioctl, 2153 .compat_ioctl = dv1394_compat_ioctl,
2160 #endif 2154 #endif
2161 .mmap = dv1394_mmap, 2155 .mmap = dv1394_mmap,
2162 .open = dv1394_open, 2156 .open = dv1394_open,
2163 .write = dv1394_write, 2157 .write = dv1394_write,
2164 .read = dv1394_read, 2158 .read = dv1394_read,
2165 .release = dv1394_release, 2159 .release = dv1394_release,
2166 .fasync = dv1394_fasync, 2160 .fasync = dv1394_fasync,
2167 .llseek = no_llseek, 2161 .llseek = no_llseek,
2168 }; 2162 };
2169 2163
2170 2164
2171 /*** HOTPLUG STUFF **********************************************************/ 2165 /*** HOTPLUG STUFF **********************************************************/
2172 /* 2166 /*
2173 * Export information about protocols/devices supported by this driver. 2167 * Export information about protocols/devices supported by this driver.
2174 */ 2168 */
2175 #ifdef MODULE 2169 #ifdef MODULE
2176 static const struct ieee1394_device_id dv1394_id_table[] = { 2170 static const struct ieee1394_device_id dv1394_id_table[] = {
2177 { 2171 {
2178 .match_flags = IEEE1394_MATCH_SPECIFIER_ID | IEEE1394_MATCH_VERSION, 2172 .match_flags = IEEE1394_MATCH_SPECIFIER_ID | IEEE1394_MATCH_VERSION,
2179 .specifier_id = AVC_UNIT_SPEC_ID_ENTRY & 0xffffff, 2173 .specifier_id = AVC_UNIT_SPEC_ID_ENTRY & 0xffffff,
2180 .version = AVC_SW_VERSION_ENTRY & 0xffffff 2174 .version = AVC_SW_VERSION_ENTRY & 0xffffff
2181 }, 2175 },
2182 { } 2176 { }
2183 }; 2177 };
2184 2178
2185 MODULE_DEVICE_TABLE(ieee1394, dv1394_id_table); 2179 MODULE_DEVICE_TABLE(ieee1394, dv1394_id_table);
2186 #endif /* MODULE */ 2180 #endif /* MODULE */
2187 2181
2188 static struct hpsb_protocol_driver dv1394_driver = { 2182 static struct hpsb_protocol_driver dv1394_driver = {
2189 .name = "dv1394", 2183 .name = "dv1394",
2190 }; 2184 };
2191 2185
2192 2186
2193 /*** IEEE1394 HPSB CALLBACKS ***********************************************/ 2187 /*** IEEE1394 HPSB CALLBACKS ***********************************************/
2194 2188
2195 static int dv1394_init(struct ti_ohci *ohci, enum pal_or_ntsc format, enum modes mode) 2189 static int dv1394_init(struct ti_ohci *ohci, enum pal_or_ntsc format, enum modes mode)
2196 { 2190 {
2197 struct video_card *video; 2191 struct video_card *video;
2198 unsigned long flags; 2192 unsigned long flags;
2199 int i; 2193 int i;
2200 2194
2201 video = kzalloc(sizeof(*video), GFP_KERNEL); 2195 video = kzalloc(sizeof(*video), GFP_KERNEL);
2202 if (!video) { 2196 if (!video) {
2203 printk(KERN_ERR "dv1394: cannot allocate video_card\n"); 2197 printk(KERN_ERR "dv1394: cannot allocate video_card\n");
2204 return -1; 2198 return -1;
2205 } 2199 }
2206 2200
2207 video->ohci = ohci; 2201 video->ohci = ohci;
2208 /* lower 2 bits of id indicate which of four "plugs" 2202 /* lower 2 bits of id indicate which of four "plugs"
2209 per host */ 2203 per host */
2210 video->id = ohci->host->id << 2; 2204 video->id = ohci->host->id << 2;
2211 if (format == DV1394_NTSC) 2205 if (format == DV1394_NTSC)
2212 video->id |= mode; 2206 video->id |= mode;
2213 else 2207 else
2214 video->id |= 2 + mode; 2208 video->id |= 2 + mode;
2215 2209
2216 video->ohci_it_ctx = -1; 2210 video->ohci_it_ctx = -1;
2217 video->ohci_ir_ctx = -1; 2211 video->ohci_ir_ctx = -1;
2218 2212
2219 video->ohci_IsoXmitContextControlSet = 0; 2213 video->ohci_IsoXmitContextControlSet = 0;
2220 video->ohci_IsoXmitContextControlClear = 0; 2214 video->ohci_IsoXmitContextControlClear = 0;
2221 video->ohci_IsoXmitCommandPtr = 0; 2215 video->ohci_IsoXmitCommandPtr = 0;
2222 2216
2223 video->ohci_IsoRcvContextControlSet = 0; 2217 video->ohci_IsoRcvContextControlSet = 0;
2224 video->ohci_IsoRcvContextControlClear = 0; 2218 video->ohci_IsoRcvContextControlClear = 0;
2225 video->ohci_IsoRcvCommandPtr = 0; 2219 video->ohci_IsoRcvCommandPtr = 0;
2226 video->ohci_IsoRcvContextMatch = 0; 2220 video->ohci_IsoRcvContextMatch = 0;
2227 2221
2228 video->n_frames = 0; /* flag that video is not initialized */ 2222 video->n_frames = 0; /* flag that video is not initialized */
2229 video->channel = 63; /* default to broadcast channel */ 2223 video->channel = 63; /* default to broadcast channel */
2230 video->active_frame = -1; 2224 video->active_frame = -1;
2231 2225
2232 /* initialize the following */ 2226 /* initialize the following */
2233 video->pal_or_ntsc = format; 2227 video->pal_or_ntsc = format;
2234 video->cip_n = 0; /* 0 = use builtin default */ 2228 video->cip_n = 0; /* 0 = use builtin default */
2235 video->cip_d = 0; 2229 video->cip_d = 0;
2236 video->syt_offset = 0; 2230 video->syt_offset = 0;
2237 video->mode = mode; 2231 video->mode = mode;
2238 2232
2239 for (i = 0; i < DV1394_MAX_FRAMES; i++) 2233 for (i = 0; i < DV1394_MAX_FRAMES; i++)
2240 video->frames[i] = NULL; 2234 video->frames[i] = NULL;
2241 2235
2242 dma_region_init(&video->dv_buf); 2236 dma_region_init(&video->dv_buf);
2243 video->dv_buf_size = 0; 2237 video->dv_buf_size = 0;
2244 dma_region_init(&video->packet_buf); 2238 dma_region_init(&video->packet_buf);
2245 video->packet_buf_size = 0; 2239 video->packet_buf_size = 0;
2246 2240
2247 clear_bit(0, &video->open); 2241 clear_bit(0, &video->open);
2248 spin_lock_init(&video->spinlock); 2242 spin_lock_init(&video->spinlock);
2249 video->dma_running = 0; 2243 video->dma_running = 0;
2250 mutex_init(&video->mtx); 2244 mutex_init(&video->mtx);
2251 init_waitqueue_head(&video->waitq); 2245 init_waitqueue_head(&video->waitq);
2252 video->fasync = NULL; 2246 video->fasync = NULL;
2253 2247
2254 spin_lock_irqsave(&dv1394_cards_lock, flags); 2248 spin_lock_irqsave(&dv1394_cards_lock, flags);
2255 INIT_LIST_HEAD(&video->list); 2249 INIT_LIST_HEAD(&video->list);
2256 list_add_tail(&video->list, &dv1394_cards); 2250 list_add_tail(&video->list, &dv1394_cards);
2257 spin_unlock_irqrestore(&dv1394_cards_lock, flags); 2251 spin_unlock_irqrestore(&dv1394_cards_lock, flags);
2258 2252
2259 debug_printk("dv1394: dv1394_init() OK on ID %d\n", video->id); 2253 debug_printk("dv1394: dv1394_init() OK on ID %d\n", video->id);
2260 return 0; 2254 return 0;
2261 } 2255 }
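
The id arithmetic above gives each OHCI host four device "plugs". Assuming MODE_RECEIVE is 0 and MODE_TRANSMIT is 1 (the enum is not shown in this hunk), the layout works out to the following sketch (dv1394_plug_id is hypothetical):

        enum dv1394_plug { NTSC_RX = 0, NTSC_TX = 1, PAL_RX = 2, PAL_TX = 3 };

        static inline int dv1394_plug_id(int host_id, enum dv1394_plug plug)
        {
                /* host id in the upper bits, plug in the low two */
                return (host_id << 2) | plug;
        }

dv1394_add_host() below instantiates all four plugs per host, and the character-device minor embeds the same id.
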
2262 2256
2263 static void dv1394_remove_host(struct hpsb_host *host) 2257 static void dv1394_remove_host(struct hpsb_host *host)
2264 { 2258 {
2265 struct video_card *video, *tmp_video; 2259 struct video_card *video, *tmp_video;
2266 unsigned long flags; 2260 unsigned long flags;
2267 int found_ohci_card = 0; 2261 int found_ohci_card = 0;
2268 2262
2269 do { 2263 do {
2270 video = NULL; 2264 video = NULL;
2271 spin_lock_irqsave(&dv1394_cards_lock, flags); 2265 spin_lock_irqsave(&dv1394_cards_lock, flags);
2272 list_for_each_entry(tmp_video, &dv1394_cards, list) { 2266 list_for_each_entry(tmp_video, &dv1394_cards, list) {
2273 if ((tmp_video->id >> 2) == host->id) { 2267 if ((tmp_video->id >> 2) == host->id) {
2274 list_del(&tmp_video->list); 2268 list_del(&tmp_video->list);
2275 video = tmp_video; 2269 video = tmp_video;
2276 found_ohci_card = 1; 2270 found_ohci_card = 1;
2277 break; 2271 break;
2278 } 2272 }
2279 } 2273 }
2280 spin_unlock_irqrestore(&dv1394_cards_lock, flags); 2274 spin_unlock_irqrestore(&dv1394_cards_lock, flags);
2281 2275
2282 if (video) { 2276 if (video) {
2283 do_dv1394_shutdown(video, 1); 2277 do_dv1394_shutdown(video, 1);
2284 kfree(video); 2278 kfree(video);
2285 } 2279 }
2286 } while (video); 2280 } while (video);
2287 2281
2288 if (found_ohci_card) 2282 if (found_ohci_card)
2289 device_destroy(hpsb_protocol_class, MKDEV(IEEE1394_MAJOR, 2283 device_destroy(hpsb_protocol_class, MKDEV(IEEE1394_MAJOR,
2290 IEEE1394_MINOR_BLOCK_DV1394 * 16 + (host->id << 2))); 2284 IEEE1394_MINOR_BLOCK_DV1394 * 16 + (host->id << 2)));
2291 } 2285 }
2292 2286
2293 static void dv1394_add_host(struct hpsb_host *host) 2287 static void dv1394_add_host(struct hpsb_host *host)
2294 { 2288 {
2295 struct ti_ohci *ohci; 2289 struct ti_ohci *ohci;
2296 int id = host->id; 2290 int id = host->id;
2297 2291
2298 /* We only work with the OHCI-1394 driver */ 2292 /* We only work with the OHCI-1394 driver */
2299 if (strcmp(host->driver->name, OHCI1394_DRIVER_NAME)) 2293 if (strcmp(host->driver->name, OHCI1394_DRIVER_NAME))
2300 return; 2294 return;
2301 2295
2302 ohci = (struct ti_ohci *)host->hostdata; 2296 ohci = (struct ti_ohci *)host->hostdata;
2303 2297
2304 device_create(hpsb_protocol_class, NULL, 2298 device_create(hpsb_protocol_class, NULL,
2305 MKDEV(IEEE1394_MAJOR, 2299 MKDEV(IEEE1394_MAJOR,
2306 IEEE1394_MINOR_BLOCK_DV1394 * 16 + (id<<2)), 2300 IEEE1394_MINOR_BLOCK_DV1394 * 16 + (id<<2)),
2307 NULL, "dv1394-%d", id); 2301 NULL, "dv1394-%d", id);
2308 2302
2309 dv1394_init(ohci, DV1394_NTSC, MODE_RECEIVE); 2303 dv1394_init(ohci, DV1394_NTSC, MODE_RECEIVE);
2310 dv1394_init(ohci, DV1394_NTSC, MODE_TRANSMIT); 2304 dv1394_init(ohci, DV1394_NTSC, MODE_TRANSMIT);
2311 dv1394_init(ohci, DV1394_PAL, MODE_RECEIVE); 2305 dv1394_init(ohci, DV1394_PAL, MODE_RECEIVE);
2312 dv1394_init(ohci, DV1394_PAL, MODE_TRANSMIT); 2306 dv1394_init(ohci, DV1394_PAL, MODE_TRANSMIT);
2313 } 2307 }
2314 2308
2315 2309
2316 /* Bus reset handler. In the event of a bus reset, we may need to 2310 /* Bus reset handler. In the event of a bus reset, we may need to
2317 re-start the DMA contexts - otherwise the user program would 2311 re-start the DMA contexts - otherwise the user program would
2318 end up waiting forever. 2312 end up waiting forever.
2319 */ 2313 */
2320 2314
2321 static void dv1394_host_reset(struct hpsb_host *host) 2315 static void dv1394_host_reset(struct hpsb_host *host)
2322 { 2316 {
2323 struct ti_ohci *ohci;
2324 struct video_card *video = NULL, *tmp_vid; 2317 struct video_card *video = NULL, *tmp_vid;
2325 unsigned long flags; 2318 unsigned long flags;
2326 2319
2327 /* We only work with the OHCI-1394 driver */ 2320 /* We only work with the OHCI-1394 driver */
2328 if (strcmp(host->driver->name, OHCI1394_DRIVER_NAME)) 2321 if (strcmp(host->driver->name, OHCI1394_DRIVER_NAME))
2329 return; 2322 return;
2330
2331 ohci = (struct ti_ohci *)host->hostdata;
2332
2333 2323
2334 /* find the corresponding video_cards */ 2324 /* find the corresponding video_cards */
2335 spin_lock_irqsave(&dv1394_cards_lock, flags); 2325 spin_lock_irqsave(&dv1394_cards_lock, flags);
2336 list_for_each_entry(tmp_vid, &dv1394_cards, list) { 2326 list_for_each_entry(tmp_vid, &dv1394_cards, list) {
2337 if ((tmp_vid->id >> 2) == host->id) { 2327 if ((tmp_vid->id >> 2) == host->id) {
2338 video = tmp_vid; 2328 video = tmp_vid;
2339 break; 2329 break;
2340 } 2330 }
2341 } 2331 }
2342 spin_unlock_irqrestore(&dv1394_cards_lock, flags); 2332 spin_unlock_irqrestore(&dv1394_cards_lock, flags);
2343 2333
2344 if (!video) 2334 if (!video)
2345 return; 2335 return;
2346 2336
2347 2337
2348 spin_lock_irqsave(&video->spinlock, flags); 2338 spin_lock_irqsave(&video->spinlock, flags);
2349 2339
2350 if (!video->dma_running) 2340 if (!video->dma_running)
2351 goto out; 2341 goto out;
2352 2342
2353 /* check IT context */ 2343 /* check IT context */
2354 if (video->ohci_it_ctx != -1) { 2344 if (video->ohci_it_ctx != -1) {
2355 u32 ctx; 2345 u32 ctx;
2356 2346
2357 ctx = reg_read(video->ohci, video->ohci_IsoXmitContextControlSet); 2347 ctx = reg_read(video->ohci, video->ohci_IsoXmitContextControlSet);
2358 2348
2359 /* if (RUN but not ACTIVE) */ 2349 /* if (RUN but not ACTIVE) */
2360 if ( (ctx & (1<<15)) && 2350 if ( (ctx & (1<<15)) &&
2361 !(ctx & (1<<10)) ) { 2351 !(ctx & (1<<10)) ) {
2362 2352
2363 debug_printk("dv1394: IT context stopped due to bus reset; waking it up\n"); 2353 debug_printk("dv1394: IT context stopped due to bus reset; waking it up\n");
2364 2354
2365 /* to be safe, assume a frame has been dropped. User-space programs 2355 /* to be safe, assume a frame has been dropped. User-space programs
2366 should handle this condition like an underflow. */ 2356 should handle this condition like an underflow. */
2367 video->dropped_frames++; 2357 video->dropped_frames++;
2368 2358
2369 /* for some reason you must clear, then re-set the RUN bit to restart DMA */ 2359 /* for some reason you must clear, then re-set the RUN bit to restart DMA */
2370 2360
2371 /* clear RUN */ 2361 /* clear RUN */
2372 reg_write(video->ohci, video->ohci_IsoXmitContextControlClear, (1 << 15)); 2362 reg_write(video->ohci, video->ohci_IsoXmitContextControlClear, (1 << 15));
2373 flush_pci_write(video->ohci); 2363 flush_pci_write(video->ohci);
2374 2364
2375 /* set RUN */ 2365 /* set RUN */
2376 reg_write(video->ohci, video->ohci_IsoXmitContextControlSet, (1 << 15)); 2366 reg_write(video->ohci, video->ohci_IsoXmitContextControlSet, (1 << 15));
2377 flush_pci_write(video->ohci); 2367 flush_pci_write(video->ohci);
2378 2368
2379 /* set the WAKE bit (just in case; this isn't strictly necessary) */ 2369 /* set the WAKE bit (just in case; this isn't strictly necessary) */
2380 reg_write(video->ohci, video->ohci_IsoXmitContextControlSet, (1 << 12)); 2370 reg_write(video->ohci, video->ohci_IsoXmitContextControlSet, (1 << 12));
2381 flush_pci_write(video->ohci); 2371 flush_pci_write(video->ohci);
2382 2372
2383 irq_printk("dv1394: AFTER IT restart ctx 0x%08x ptr 0x%08x\n", 2373 irq_printk("dv1394: AFTER IT restart ctx 0x%08x ptr 0x%08x\n",
2384 reg_read(video->ohci, video->ohci_IsoXmitContextControlSet), 2374 reg_read(video->ohci, video->ohci_IsoXmitContextControlSet),
2385 reg_read(video->ohci, video->ohci_IsoXmitCommandPtr)); 2375 reg_read(video->ohci, video->ohci_IsoXmitCommandPtr));
2386 } 2376 }
2387 } 2377 }
2388 2378
2389 /* check IR context */ 2379 /* check IR context */
2390 if (video->ohci_ir_ctx != -1) { 2380 if (video->ohci_ir_ctx != -1) {
2391 u32 ctx; 2381 u32 ctx;
2392 2382
2393 ctx = reg_read(video->ohci, video->ohci_IsoRcvContextControlSet); 2383 ctx = reg_read(video->ohci, video->ohci_IsoRcvContextControlSet);
2394 2384
2395 /* if (RUN but not ACTIVE) */ 2385 /* if (RUN but not ACTIVE) */
2396 if ( (ctx & (1<<15)) && 2386 if ( (ctx & (1<<15)) &&
2397 !(ctx & (1<<10)) ) { 2387 !(ctx & (1<<10)) ) {
2398 2388
2399 debug_printk("dv1394: IR context stopped due to bus reset; waking it up\n"); 2389 debug_printk("dv1394: IR context stopped due to bus reset; waking it up\n");
2400 2390
2401 /* to be safe, assume a frame has been dropped. User-space programs 2391 /* to be safe, assume a frame has been dropped. User-space programs
2402 should handle this condition like an overflow. */ 2392 should handle this condition like an overflow. */
2403 video->dropped_frames++; 2393 video->dropped_frames++;
2404 2394
2405 /* for some reason you must clear, then re-set the RUN bit to restart DMA */ 2395 /* for some reason you must clear, then re-set the RUN bit to restart DMA */
2406 /* XXX this doesn't work for me, I can't get IR DMA to restart :[ */ 2396 /* XXX this doesn't work for me, I can't get IR DMA to restart :[ */
2407 2397
2408 /* clear RUN */ 2398 /* clear RUN */
2409 reg_write(video->ohci, video->ohci_IsoRcvContextControlClear, (1 << 15)); 2399 reg_write(video->ohci, video->ohci_IsoRcvContextControlClear, (1 << 15));
2410 flush_pci_write(video->ohci); 2400 flush_pci_write(video->ohci);
2411 2401
2412 /* set RUN */ 2402 /* set RUN */
2413 reg_write(video->ohci, video->ohci_IsoRcvContextControlSet, (1 << 15)); 2403 reg_write(video->ohci, video->ohci_IsoRcvContextControlSet, (1 << 15));
2414 flush_pci_write(video->ohci); 2404 flush_pci_write(video->ohci);
2415 2405
2416 /* set the WAKE bit (just in case; this isn't strictly necessary) */ 2406 /* set the WAKE bit (just in case; this isn't strictly necessary) */
2417 reg_write(video->ohci, video->ohci_IsoRcvContextControlSet, (1 << 12)); 2407 reg_write(video->ohci, video->ohci_IsoRcvContextControlSet, (1 << 12));
2418 flush_pci_write(video->ohci); 2408 flush_pci_write(video->ohci);
2419 2409
2420 irq_printk("dv1394: AFTER IR restart ctx 0x%08x ptr 0x%08x\n", 2410 irq_printk("dv1394: AFTER IR restart ctx 0x%08x ptr 0x%08x\n",
2421 reg_read(video->ohci, video->ohci_IsoRcvContextControlSet), 2411 reg_read(video->ohci, video->ohci_IsoRcvContextControlSet),
2422 reg_read(video->ohci, video->ohci_IsoRcvCommandPtr)); 2412 reg_read(video->ohci, video->ohci_IsoRcvCommandPtr));
2423 } 2413 }
2424 } 2414 }
2425 2415
2426 out: 2416 out:
2427 spin_unlock_irqrestore(&video->spinlock, flags); 2417 spin_unlock_irqrestore(&video->spinlock, flags);
2428 2418
2429 /* wake readers/writers/ioctl'ers */ 2419 /* wake readers/writers/ioctl'ers */
2430 wake_up_interruptible(&video->waitq); 2420 wake_up_interruptible(&video->waitq);
2431 } 2421 }
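
The bus-reset handler restarts a stalled context with the same three-write sequence for both the IT and IR cases. A hypothetical helper capturing it (restart_context is not in the driver; the register arguments mirror the Set/Clear pairs used above):

        static void restart_context(struct ti_ohci *ohci, int ctrl_clear, int ctrl_set)
        {
                reg_write(ohci, ctrl_clear, 1 << 15);   /* clear RUN */
                flush_pci_write(ohci);
                reg_write(ohci, ctrl_set, 1 << 15);     /* re-set RUN */
                flush_pci_write(ohci);
                reg_write(ohci, ctrl_set, 1 << 12);     /* WAKE, just in case */
                flush_pci_write(ohci);
        }
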
2432 2422
2433 static struct hpsb_highlevel dv1394_highlevel = { 2423 static struct hpsb_highlevel dv1394_highlevel = {
2434 .name = "dv1394", 2424 .name = "dv1394",
2435 .add_host = dv1394_add_host, 2425 .add_host = dv1394_add_host,
2436 .remove_host = dv1394_remove_host, 2426 .remove_host = dv1394_remove_host,
2437 .host_reset = dv1394_host_reset, 2427 .host_reset = dv1394_host_reset,
2438 }; 2428 };
2439 2429
2440 #ifdef CONFIG_COMPAT 2430 #ifdef CONFIG_COMPAT
2441 2431
2442 #define DV1394_IOC32_INIT _IOW('#', 0x06, struct dv1394_init32) 2432 #define DV1394_IOC32_INIT _IOW('#', 0x06, struct dv1394_init32)
2443 #define DV1394_IOC32_GET_STATUS _IOR('#', 0x0c, struct dv1394_status32) 2433 #define DV1394_IOC32_GET_STATUS _IOR('#', 0x0c, struct dv1394_status32)
2444 2434
2445 struct dv1394_init32 { 2435 struct dv1394_init32 {
2446 u32 api_version; 2436 u32 api_version;
2447 u32 channel; 2437 u32 channel;
2448 u32 n_frames; 2438 u32 n_frames;
2449 u32 format; 2439 u32 format;
2450 u32 cip_n; 2440 u32 cip_n;
2451 u32 cip_d; 2441 u32 cip_d;
2452 u32 syt_offset; 2442 u32 syt_offset;
2453 }; 2443 };
2454 2444
2455 struct dv1394_status32 { 2445 struct dv1394_status32 {
2456 struct dv1394_init32 init; 2446 struct dv1394_init32 init;
2457 s32 active_frame; 2447 s32 active_frame;
2458 u32 first_clear_frame; 2448 u32 first_clear_frame;
2459 u32 n_clear_frames; 2449 u32 n_clear_frames;
2460 u32 dropped_frames; 2450 u32 dropped_frames;
2461 }; 2451 };
2462 2452
2463 /* RED-PEN: this should use compat_alloc_userspace instead */ 2453 /* RED-PEN: this should use compat_alloc_userspace instead */
2464 2454
2465 static int handle_dv1394_init(struct file *file, unsigned int cmd, unsigned long arg) 2455 static int handle_dv1394_init(struct file *file, unsigned int cmd, unsigned long arg)
2466 { 2456 {
2467 struct dv1394_init32 dv32; 2457 struct dv1394_init32 dv32;
2468 struct dv1394_init dv; 2458 struct dv1394_init dv;
2469 mm_segment_t old_fs; 2459 mm_segment_t old_fs;
2470 int ret; 2460 int ret;
2471 2461
2472 if (file->f_op->unlocked_ioctl != dv1394_ioctl) 2462 if (file->f_op->unlocked_ioctl != dv1394_ioctl)
2473 return -EFAULT; 2463 return -EFAULT;
2474 2464
2475 if (copy_from_user(&dv32, (void __user *)arg, sizeof(dv32))) 2465 if (copy_from_user(&dv32, (void __user *)arg, sizeof(dv32)))
2476 return -EFAULT; 2466 return -EFAULT;
2477 2467
2478 dv.api_version = dv32.api_version; 2468 dv.api_version = dv32.api_version;
2479 dv.channel = dv32.channel; 2469 dv.channel = dv32.channel;
2480 dv.n_frames = dv32.n_frames; 2470 dv.n_frames = dv32.n_frames;
2481 dv.format = dv32.format; 2471 dv.format = dv32.format;
2482 dv.cip_n = (unsigned long)dv32.cip_n; 2472 dv.cip_n = (unsigned long)dv32.cip_n;
2483 dv.cip_d = (unsigned long)dv32.cip_d; 2473 dv.cip_d = (unsigned long)dv32.cip_d;
2484 dv.syt_offset = dv32.syt_offset; 2474 dv.syt_offset = dv32.syt_offset;
2485 2475
2486 old_fs = get_fs(); 2476 old_fs = get_fs();
2487 set_fs(KERNEL_DS); 2477 set_fs(KERNEL_DS);
2488 ret = dv1394_ioctl(file, DV1394_IOC_INIT, (unsigned long)&dv); 2478 ret = dv1394_ioctl(file, DV1394_IOC_INIT, (unsigned long)&dv);
2489 set_fs(old_fs); 2479 set_fs(old_fs);
2490 2480
2491 return ret; 2481 return ret;
2492 } 2482 }
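
The RED-PEN note above refers to compat_alloc_user_space() (the real helper's name; the comment drops an underscore): instead of flipping the address limit with set_fs(KERNEL_DS), a compat handler can stage the native layout in compat-allocated user stack space. A hedged sketch of that alternative (handle_dv1394_init_alt is hypothetical, not part of this commit):

        static int handle_dv1394_init_alt(struct file *file, unsigned long arg)
        {
                struct dv1394_init32 dv32;
                struct dv1394_init __user *dv;

                if (copy_from_user(&dv32, (void __user *)arg, sizeof(dv32)))
                        return -EFAULT;

                dv = compat_alloc_user_space(sizeof(*dv));
                if (put_user(dv32.api_version, &dv->api_version) ||
                    put_user(dv32.channel, &dv->channel) ||
                    put_user(dv32.n_frames, &dv->n_frames) ||
                    put_user(dv32.format, &dv->format) ||
                    put_user((unsigned long)dv32.cip_n, &dv->cip_n) ||
                    put_user((unsigned long)dv32.cip_d, &dv->cip_d) ||
                    put_user(dv32.syt_offset, &dv->syt_offset))
                        return -EFAULT;

                return dv1394_ioctl(file, DV1394_IOC_INIT, (unsigned long)dv);
        }
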
2493 2483
2494 static int handle_dv1394_get_status(struct file *file, unsigned int cmd, unsigned long arg) 2484 static int handle_dv1394_get_status(struct file *file, unsigned int cmd, unsigned long arg)
2495 { 2485 {
2496 struct dv1394_status32 dv32; 2486 struct dv1394_status32 dv32;
2497 struct dv1394_status dv; 2487 struct dv1394_status dv;
2498 mm_segment_t old_fs; 2488 mm_segment_t old_fs;
2499 int ret; 2489 int ret;
2500 2490
2501 if (file->f_op->unlocked_ioctl != dv1394_ioctl) 2491 if (file->f_op->unlocked_ioctl != dv1394_ioctl)
2502 return -EFAULT; 2492 return -EFAULT;
2503 2493
2504 old_fs = get_fs(); 2494 old_fs = get_fs();
2505 set_fs(KERNEL_DS); 2495 set_fs(KERNEL_DS);
2506 ret = dv1394_ioctl(file, DV1394_IOC_GET_STATUS, (unsigned long)&dv); 2496 ret = dv1394_ioctl(file, DV1394_IOC_GET_STATUS, (unsigned long)&dv);
2507 set_fs(old_fs); 2497 set_fs(old_fs);
2508 2498
2509 if (!ret) { 2499 if (!ret) {
2510 dv32.init.api_version = dv.init.api_version; 2500 dv32.init.api_version = dv.init.api_version;
2511 dv32.init.channel = dv.init.channel; 2501 dv32.init.channel = dv.init.channel;
2512 dv32.init.n_frames = dv.init.n_frames; 2502 dv32.init.n_frames = dv.init.n_frames;
2513 dv32.init.format = dv.init.format; 2503 dv32.init.format = dv.init.format;
2514 dv32.init.cip_n = (u32)dv.init.cip_n; 2504 dv32.init.cip_n = (u32)dv.init.cip_n;
2515 dv32.init.cip_d = (u32)dv.init.cip_d; 2505 dv32.init.cip_d = (u32)dv.init.cip_d;
2516 dv32.init.syt_offset = dv.init.syt_offset; 2506 dv32.init.syt_offset = dv.init.syt_offset;
2517 dv32.active_frame = dv.active_frame; 2507 dv32.active_frame = dv.active_frame;
2518 dv32.first_clear_frame = dv.first_clear_frame; 2508 dv32.first_clear_frame = dv.first_clear_frame;
2519 dv32.n_clear_frames = dv.n_clear_frames; 2509 dv32.n_clear_frames = dv.n_clear_frames;
2520 dv32.dropped_frames = dv.dropped_frames; 2510 dv32.dropped_frames = dv.dropped_frames;
2521 2511
2522 if (copy_to_user((struct dv1394_status32 __user *)arg, &dv32, sizeof(dv32))) 2512 if (copy_to_user((struct dv1394_status32 __user *)arg, &dv32, sizeof(dv32)))
2523 ret = -EFAULT; 2513 ret = -EFAULT;
2524 } 2514 }
2525 2515
2526 return ret; 2516 return ret;
2527 } 2517 }
2528 2518
2529 2519
2530 2520
2531 static long dv1394_compat_ioctl(struct file *file, unsigned int cmd, 2521 static long dv1394_compat_ioctl(struct file *file, unsigned int cmd,
2532 unsigned long arg) 2522 unsigned long arg)
2533 { 2523 {
2534 switch (cmd) { 2524 switch (cmd) {
2535 case DV1394_IOC_SHUTDOWN: 2525 case DV1394_IOC_SHUTDOWN:
2536 case DV1394_IOC_SUBMIT_FRAMES: 2526 case DV1394_IOC_SUBMIT_FRAMES:
2537 case DV1394_IOC_WAIT_FRAMES: 2527 case DV1394_IOC_WAIT_FRAMES:
2538 case DV1394_IOC_RECEIVE_FRAMES: 2528 case DV1394_IOC_RECEIVE_FRAMES:
2539 case DV1394_IOC_START_RECEIVE: 2529 case DV1394_IOC_START_RECEIVE:
2540 return dv1394_ioctl(file, cmd, arg); 2530 return dv1394_ioctl(file, cmd, arg);
2541 2531
2542 case DV1394_IOC32_INIT: 2532 case DV1394_IOC32_INIT:
2543 return handle_dv1394_init(file, cmd, arg); 2533 return handle_dv1394_init(file, cmd, arg);
2544 case DV1394_IOC32_GET_STATUS: 2534 case DV1394_IOC32_GET_STATUS:
2545 return handle_dv1394_get_status(file, cmd, arg); 2535 return handle_dv1394_get_status(file, cmd, arg);
2546 default: 2536 default:
2547 return -ENOIOCTLCMD; 2537 return -ENOIOCTLCMD;
2548 } 2538 }
2549 } 2539 }
2550 2540
2551 #endif /* CONFIG_COMPAT */ 2541 #endif /* CONFIG_COMPAT */
2552 2542
2553 2543
2554 /*** KERNEL MODULE HANDLERS ************************************************/ 2544 /*** KERNEL MODULE HANDLERS ************************************************/
2555 2545
2556 MODULE_AUTHOR("Dan Maas <dmaas@dcine.com>, Dan Dennedy <dan@dennedy.org>"); 2546 MODULE_AUTHOR("Dan Maas <dmaas@dcine.com>, Dan Dennedy <dan@dennedy.org>");
2557 MODULE_DESCRIPTION("driver for DV input/output on OHCI board"); 2547 MODULE_DESCRIPTION("driver for DV input/output on OHCI board");
2558 MODULE_SUPPORTED_DEVICE("dv1394"); 2548 MODULE_SUPPORTED_DEVICE("dv1394");
2559 MODULE_LICENSE("GPL"); 2549 MODULE_LICENSE("GPL");
2560 2550
2561 static void __exit dv1394_exit_module(void) 2551 static void __exit dv1394_exit_module(void)
2562 { 2552 {
2563 hpsb_unregister_protocol(&dv1394_driver); 2553 hpsb_unregister_protocol(&dv1394_driver);
2564 hpsb_unregister_highlevel(&dv1394_highlevel); 2554 hpsb_unregister_highlevel(&dv1394_highlevel);
2565 cdev_del(&dv1394_cdev); 2555 cdev_del(&dv1394_cdev);
2566 } 2556 }
2567 2557
2568 static int __init dv1394_init_module(void) 2558 static int __init dv1394_init_module(void)
2569 { 2559 {
2570 int ret; 2560 int ret;
2571 2561
2572 cdev_init(&dv1394_cdev, &dv1394_fops); 2562 cdev_init(&dv1394_cdev, &dv1394_fops);
2573 dv1394_cdev.owner = THIS_MODULE; 2563 dv1394_cdev.owner = THIS_MODULE;
2574 ret = cdev_add(&dv1394_cdev, IEEE1394_DV1394_DEV, 16); 2564 ret = cdev_add(&dv1394_cdev, IEEE1394_DV1394_DEV, 16);
2575 if (ret) { 2565 if (ret) {
2576 printk(KERN_ERR "dv1394: unable to register character device\n"); 2566 printk(KERN_ERR "dv1394: unable to register character device\n");
2577 return ret; 2567 return ret;
2578 } 2568 }
2579 2569
2580 hpsb_register_highlevel(&dv1394_highlevel); 2570 hpsb_register_highlevel(&dv1394_highlevel);
2581 2571
2582 ret = hpsb_register_protocol(&dv1394_driver); 2572 ret = hpsb_register_protocol(&dv1394_driver);
2583 if (ret) { 2573 if (ret) {
2584 printk(KERN_ERR "dv1394: failed to register protocol\n"); 2574 printk(KERN_ERR "dv1394: failed to register protocol\n");
2585 hpsb_unregister_highlevel(&dv1394_highlevel); 2575 hpsb_unregister_highlevel(&dv1394_highlevel);
2586 cdev_del(&dv1394_cdev); 2576 cdev_del(&dv1394_cdev);
2587 return ret; 2577 return ret;
2588 } 2578 }
2589 2579
2590 return 0; 2580 return 0;
2591 } 2581 }
2592 2582
2593 module_init(dv1394_init_module); 2583 module_init(dv1394_init_module);
2594 module_exit(dv1394_exit_module); 2584 module_exit(dv1394_exit_module);
2595 2585
drivers/ieee1394/eth1394.c
1 /* 1 /*
2 * eth1394.c -- IPv4 driver for Linux IEEE-1394 Subsystem 2 * eth1394.c -- IPv4 driver for Linux IEEE-1394 Subsystem
3 * 3 *
4 * Copyright (C) 2001-2003 Ben Collins <bcollins@debian.org> 4 * Copyright (C) 2001-2003 Ben Collins <bcollins@debian.org>
5 * 2000 Bonin Franck <boninf@free.fr> 5 * 2000 Bonin Franck <boninf@free.fr>
6 * 2003 Steve Kinneberg <kinnebergsteve@acmsystems.com> 6 * 2003 Steve Kinneberg <kinnebergsteve@acmsystems.com>
7 * 7 *
8 * Mainly based on work by Emanuel Pirker and Andreas E. Bombe 8 * Mainly based on work by Emanuel Pirker and Andreas E. Bombe
9 * 9 *
10 * This program is free software; you can redistribute it and/or modify 10 * This program is free software; you can redistribute it and/or modify
11 * it under the terms of the GNU General Public License as published by 11 * it under the terms of the GNU General Public License as published by
12 * the Free Software Foundation; either version 2 of the License, or 12 * the Free Software Foundation; either version 2 of the License, or
13 * (at your option) any later version. 13 * (at your option) any later version.
14 * 14 *
15 * This program is distributed in the hope that it will be useful, 15 * This program is distributed in the hope that it will be useful,
16 * but WITHOUT ANY WARRANTY; without even the implied warranty of 16 * but WITHOUT ANY WARRANTY; without even the implied warranty of
17 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 17 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
18 * GNU General Public License for more details. 18 * GNU General Public License for more details.
19 * 19 *
20 * You should have received a copy of the GNU General Public License 20 * You should have received a copy of the GNU General Public License
21 * along with this program; if not, write to the Free Software Foundation, 21 * along with this program; if not, write to the Free Software Foundation,
22 * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 22 * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
23 */ 23 */
24 24
25 /* 25 /*
26 * This driver intends to support RFC 2734, which describes a method for 26 * This driver intends to support RFC 2734, which describes a method for
27 * transporting IPv4 datagrams over IEEE-1394 serial busses. 27 * transporting IPv4 datagrams over IEEE-1394 serial busses.
28 * 28 *
29 * TODO: 29 * TODO:
30 * RFC 2734 related: 30 * RFC 2734 related:
31 * - Add MCAP. Limited Multicast exists only to 224.0.0.1 and 224.0.0.2. 31 * - Add MCAP. Limited Multicast exists only to 224.0.0.1 and 224.0.0.2.
32 * 32 *
33 * Non-RFC 2734 related: 33 * Non-RFC 2734 related:
34 * - Handle fragmented skb's coming from the networking layer. 34 * - Handle fragmented skb's coming from the networking layer.
35 * - Move generic GASP reception to core 1394 code 35 * - Move generic GASP reception to core 1394 code
36 * - Convert kmalloc/kfree for link fragments to use kmem_cache_* instead 36 * - Convert kmalloc/kfree for link fragments to use kmem_cache_* instead
37 * - Stability improvements 37 * - Stability improvements
38 * - Performance enhancements 38 * - Performance enhancements
39 * - Consider garbage collecting old partial datagrams after X amount of time 39 * - Consider garbage collecting old partial datagrams after X amount of time
40 */ 40 */
41 41
42 #include <linux/module.h> 42 #include <linux/module.h>
43 43
44 #include <linux/kernel.h> 44 #include <linux/kernel.h>
45 #include <linux/slab.h> 45 #include <linux/slab.h>
46 #include <linux/errno.h> 46 #include <linux/errno.h>
47 #include <linux/types.h> 47 #include <linux/types.h>
48 #include <linux/delay.h> 48 #include <linux/delay.h>
49 #include <linux/init.h> 49 #include <linux/init.h>
50 #include <linux/workqueue.h> 50 #include <linux/workqueue.h>
51 51
52 #include <linux/netdevice.h> 52 #include <linux/netdevice.h>
53 #include <linux/inetdevice.h> 53 #include <linux/inetdevice.h>
54 #include <linux/if_arp.h> 54 #include <linux/if_arp.h>
55 #include <linux/if_ether.h> 55 #include <linux/if_ether.h>
56 #include <linux/ip.h> 56 #include <linux/ip.h>
57 #include <linux/in.h> 57 #include <linux/in.h>
58 #include <linux/tcp.h> 58 #include <linux/tcp.h>
59 #include <linux/skbuff.h> 59 #include <linux/skbuff.h>
60 #include <linux/bitops.h> 60 #include <linux/bitops.h>
61 #include <linux/ethtool.h> 61 #include <linux/ethtool.h>
62 #include <asm/uaccess.h> 62 #include <asm/uaccess.h>
63 #include <asm/delay.h> 63 #include <asm/delay.h>
64 #include <asm/unaligned.h> 64 #include <asm/unaligned.h>
65 #include <net/arp.h> 65 #include <net/arp.h>
66 66
67 #include "config_roms.h" 67 #include "config_roms.h"
68 #include "csr1212.h" 68 #include "csr1212.h"
69 #include "eth1394.h" 69 #include "eth1394.h"
70 #include "highlevel.h" 70 #include "highlevel.h"
71 #include "ieee1394.h" 71 #include "ieee1394.h"
72 #include "ieee1394_core.h" 72 #include "ieee1394_core.h"
73 #include "ieee1394_hotplug.h" 73 #include "ieee1394_hotplug.h"
74 #include "ieee1394_transactions.h" 74 #include "ieee1394_transactions.h"
75 #include "ieee1394_types.h" 75 #include "ieee1394_types.h"
76 #include "iso.h" 76 #include "iso.h"
77 #include "nodemgr.h" 77 #include "nodemgr.h"
78 78
79 #define ETH1394_PRINT_G(level, fmt, args...) \ 79 #define ETH1394_PRINT_G(level, fmt, args...) \
80 printk(level "%s: " fmt, driver_name, ## args) 80 printk(level "%s: " fmt, driver_name, ## args)
81 81
82 #define ETH1394_PRINT(level, dev_name, fmt, args...) \ 82 #define ETH1394_PRINT(level, dev_name, fmt, args...) \
83 printk(level "%s: %s: " fmt, driver_name, dev_name, ## args) 83 printk(level "%s: %s: " fmt, driver_name, dev_name, ## args)
84 84
85 struct fragment_info { 85 struct fragment_info {
86 struct list_head list; 86 struct list_head list;
87 int offset; 87 int offset;
88 int len; 88 int len;
89 }; 89 };
90 90
91 struct partial_datagram { 91 struct partial_datagram {
92 struct list_head list; 92 struct list_head list;
93 u16 dgl; 93 u16 dgl;
94 u16 dg_size; 94 u16 dg_size;
95 __be16 ether_type; 95 __be16 ether_type;
96 struct sk_buff *skb; 96 struct sk_buff *skb;
97 char *pbuf; 97 char *pbuf;
98 struct list_head frag_info; 98 struct list_head frag_info;
99 }; 99 };
100 100
101 struct pdg_list { 101 struct pdg_list {
102 struct list_head list; /* partial datagram list per node */ 102 struct list_head list; /* partial datagram list per node */
103 unsigned int sz; /* partial datagram list size per node */ 103 unsigned int sz; /* partial datagram list size per node */
104 spinlock_t lock; /* partial datagram lock */ 104 spinlock_t lock; /* partial datagram lock */
105 }; 105 };
106 106
107 struct eth1394_host_info { 107 struct eth1394_host_info {
108 struct hpsb_host *host; 108 struct hpsb_host *host;
109 struct net_device *dev; 109 struct net_device *dev;
110 }; 110 };
111 111
112 struct eth1394_node_ref { 112 struct eth1394_node_ref {
113 struct unit_directory *ud; 113 struct unit_directory *ud;
114 struct list_head list; 114 struct list_head list;
115 }; 115 };
116 116
117 struct eth1394_node_info { 117 struct eth1394_node_info {
118 u16 maxpayload; /* max payload */ 118 u16 maxpayload; /* max payload */
119 u8 sspd; /* max speed */ 119 u8 sspd; /* max speed */
120 u64 fifo; /* FIFO address */ 120 u64 fifo; /* FIFO address */
121 struct pdg_list pdg; /* partial RX datagram lists */ 121 struct pdg_list pdg; /* partial RX datagram lists */
122 int dgl; /* outgoing datagram label */ 122 int dgl; /* outgoing datagram label */
123 }; 123 };
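
The three structs above carry RFC 2734 link-fragment reassembly state: each sender node keeps a list of partial datagrams, and each partial datagram a list of received (offset, len) spans. A hypothetical completeness check built on that bookkeeping (datagram_complete is not in the driver; it assumes dg_size holds the full datagram length and frag_info is kept sorted by offset):

        static bool datagram_complete(struct partial_datagram *pd)
        {
                struct fragment_info *fi;
                int expected = 0;

                list_for_each_entry(fi, &pd->frag_info, list) {
                        if (fi->offset != expected)
                                return false;   /* hole before this span */
                        expected += fi->len;
                }
                return expected == pd->dg_size;
        }
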
124 124
125 static const char driver_name[] = "eth1394"; 125 static const char driver_name[] = "eth1394";
126 126
127 static struct kmem_cache *packet_task_cache; 127 static struct kmem_cache *packet_task_cache;
128 128
129 static struct hpsb_highlevel eth1394_highlevel; 129 static struct hpsb_highlevel eth1394_highlevel;
130 130
131 /* Use common.lf to determine header len */ 131 /* Use common.lf to determine header len */
132 static const int hdr_type_len[] = { 132 static const int hdr_type_len[] = {
133 sizeof(struct eth1394_uf_hdr), 133 sizeof(struct eth1394_uf_hdr),
134 sizeof(struct eth1394_ff_hdr), 134 sizeof(struct eth1394_ff_hdr),
135 sizeof(struct eth1394_sf_hdr), 135 sizeof(struct eth1394_sf_hdr),
136 sizeof(struct eth1394_sf_hdr) 136 sizeof(struct eth1394_sf_hdr)
137 }; 137 };
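/*
 * Index into the array above is the encapsulation header's lf value, per
 * RFC 2734's encoding: 0 = unfragmented (eth1394_uf_hdr), 1 = first fragment
 * (eth1394_ff_hdr), 2 and 3 = last and interior fragments, which share the
 * eth1394_sf_hdr layout.
 */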

static const u16 eth1394_speedto_maxpayload[] = {
	/* S100, S200, S400, S800, S1600, S3200 */
	512, 1024, 2048, 4096, 4096, 4096
};

MODULE_AUTHOR("Ben Collins (bcollins@debian.org)");
MODULE_DESCRIPTION("IEEE 1394 IPv4 Driver (IPv4-over-1394 as per RFC 2734)");
MODULE_LICENSE("GPL");

/*
 * The max_partial_datagrams parameter is the maximum number of fragmented
 * datagrams per node that eth1394 will keep in memory.  Providing an upper
 * bound allows us to limit the amount of memory that partial datagrams
 * consume in the event that some partial datagrams are never completed.
 */
static int max_partial_datagrams = 25;
module_param(max_partial_datagrams, int, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(max_partial_datagrams,
		 "Maximum number of partially received fragmented datagrams "
		 "(default = 25).");


static int ether1394_header(struct sk_buff *skb, struct net_device *dev,
			    unsigned short type, const void *daddr,
			    const void *saddr, unsigned len);
static int ether1394_rebuild_header(struct sk_buff *skb);
static int ether1394_header_parse(const struct sk_buff *skb,
				  unsigned char *haddr);
static int ether1394_header_cache(const struct neighbour *neigh,
				  struct hh_cache *hh);
static void ether1394_header_cache_update(struct hh_cache *hh,
					  const struct net_device *dev,
					  const unsigned char *haddr);
static netdev_tx_t ether1394_tx(struct sk_buff *skb,
				struct net_device *dev);
static void ether1394_iso(struct hpsb_iso *iso);

static const struct ethtool_ops ethtool_ops;

static int ether1394_write(struct hpsb_host *host, int srcid, int destid,
			   quadlet_t *data, u64 addr, size_t len, u16 flags);
static void ether1394_add_host(struct hpsb_host *host);
static void ether1394_remove_host(struct hpsb_host *host);
static void ether1394_host_reset(struct hpsb_host *host);

/* Function for incoming 1394 packets */
static const struct hpsb_address_ops addr_ops = {
	.write = ether1394_write,
};

/* IEEE 1394 highlevel driver functions */
static struct hpsb_highlevel eth1394_highlevel = {
	.name = driver_name,
	.add_host = ether1394_add_host,
	.remove_host = ether1394_remove_host,
	.host_reset = ether1394_host_reset,
};

static int ether1394_recv_init(struct eth1394_priv *priv)
{
	unsigned int iso_buf_size;

	/* FIXME: rawiso limits us to PAGE_SIZE */
	iso_buf_size = min((unsigned int)PAGE_SIZE,
			   2 * (1U << (priv->host->csr.max_rec + 1)));

	priv->iso = hpsb_iso_recv_init(priv->host,
				       ETHER1394_GASP_BUFFERS * iso_buf_size,
				       ETHER1394_GASP_BUFFERS,
				       priv->broadcast_channel,
				       HPSB_ISO_DMA_PACKET_PER_BUFFER,
				       1, ether1394_iso);
	if (priv->iso == NULL) {
		ETH1394_PRINT_G(KERN_ERR, "Failed to allocate IR context\n");
		priv->bc_state = ETHER1394_BC_ERROR;
		return -EAGAIN;
	}

	if (hpsb_iso_recv_start(priv->iso, -1, (1 << 3), -1) < 0)
		priv->bc_state = ETHER1394_BC_STOPPED;
	else
		priv->bc_state = ETHER1394_BC_RUNNING;
	return 0;
}
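/*
 * Worked example of the sizing above: with csr.max_rec = 9 the largest
 * asynchronous payload is 1 << (9 + 1) = 1024 bytes, so iso_buf_size =
 * min(PAGE_SIZE, 2048), i.e. 2048 on 4 KiB-page machines, and the receive
 * context is given ETHER1394_GASP_BUFFERS buffers of that size.
 */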

/* This is called after an "ifup" */
static int ether1394_open(struct net_device *dev)
{
	struct eth1394_priv *priv = netdev_priv(dev);
	int ret;

	if (priv->bc_state == ETHER1394_BC_ERROR) {
		ret = ether1394_recv_init(priv);
		if (ret)
			return ret;
	}
	netif_start_queue(dev);
	return 0;
}

/* This is called after an "ifdown" */
static int ether1394_stop(struct net_device *dev)
{
	/* flush priv->wake */
	flush_scheduled_work();

	netif_stop_queue(dev);
	return 0;
}

/* FIXME: What to do if we timeout? I think a host reset is probably in order,
 * so that's what we do.  Should we increment the stat counters too? */
static void ether1394_tx_timeout(struct net_device *dev)
{
	struct hpsb_host *host =
		((struct eth1394_priv *)netdev_priv(dev))->host;

	ETH1394_PRINT(KERN_ERR, dev->name, "Timeout, resetting host\n");
	ether1394_host_reset(host);
}

static inline int ether1394_max_mtu(struct hpsb_host* host)
{
	return (1 << (host->csr.max_rec + 1))
			- sizeof(union eth1394_hdr) - ETHER1394_GASP_OVERHEAD;
}
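/*
 * Illustrative arithmetic: for csr.max_rec = 8 the receivable packet limit
 * is 1 << 9 = 512 octets; the encapsulation header and the GASP overhead are
 * then subtracted to get the largest MTU this node can handle.
 */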

static int ether1394_change_mtu(struct net_device *dev, int new_mtu)
{
	int max_mtu;

	if (new_mtu < 68)
		return -EINVAL;

	max_mtu = ether1394_max_mtu(
			((struct eth1394_priv *)netdev_priv(dev))->host);
	if (new_mtu > max_mtu) {
		ETH1394_PRINT(KERN_INFO, dev->name,
			      "Local node constrains MTU to %d\n", max_mtu);
		return -ERANGE;
	}

	dev->mtu = new_mtu;
	return 0;
}
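/*
 * The lower bound above is the minimum IPv4 MTU: RFC 791 requires every link
 * to carry a 68-octet datagram (a maximal 60-octet header plus an 8-octet
 * fragment) without fragmentation.
 */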

static void purge_partial_datagram(struct list_head *old)
{
	struct partial_datagram *pd;
	struct list_head *lh, *n;
	struct fragment_info *fi;

	pd = list_entry(old, struct partial_datagram, list);

	list_for_each_safe(lh, n, &pd->frag_info) {
		fi = list_entry(lh, struct fragment_info, list);
		list_del(lh);
		kfree(fi);
	}
	list_del(old);
	kfree_skb(pd->skb);
	kfree(pd);
}

/******************************************
 * 1394 bus activity functions
 ******************************************/

static struct eth1394_node_ref *eth1394_find_node(struct list_head *inl,
						  struct unit_directory *ud)
{
	struct eth1394_node_ref *node;

	list_for_each_entry(node, inl, list)
		if (node->ud == ud)
			return node;

	return NULL;
}

static struct eth1394_node_ref *eth1394_find_node_guid(struct list_head *inl,
							u64 guid)
{
	struct eth1394_node_ref *node;

	list_for_each_entry(node, inl, list)
		if (node->ud->ne->guid == guid)
			return node;

	return NULL;
}

static struct eth1394_node_ref *eth1394_find_node_nodeid(struct list_head *inl,
							  nodeid_t nodeid)
{
	struct eth1394_node_ref *node;

	list_for_each_entry(node, inl, list)
		if (node->ud->ne->nodeid == nodeid)
			return node;

	return NULL;
}

static int eth1394_new_node(struct eth1394_host_info *hi,
			    struct unit_directory *ud)
{
	struct eth1394_priv *priv;
	struct eth1394_node_ref *new_node;
	struct eth1394_node_info *node_info;

	new_node = kmalloc(sizeof(*new_node), GFP_KERNEL);
	if (!new_node)
		return -ENOMEM;

	node_info = kmalloc(sizeof(*node_info), GFP_KERNEL);
	if (!node_info) {
		kfree(new_node);
		return -ENOMEM;
	}

	spin_lock_init(&node_info->pdg.lock);
	INIT_LIST_HEAD(&node_info->pdg.list);
	node_info->pdg.sz = 0;
	node_info->fifo = CSR1212_INVALID_ADDR_SPACE;

	dev_set_drvdata(&ud->device, node_info);
	new_node->ud = ud;

	priv = netdev_priv(hi->dev);
	list_add_tail(&new_node->list, &priv->ip_node_list);
	return 0;
}
373 static int eth1394_probe(struct device *dev) 373 static int eth1394_probe(struct device *dev)
374 { 374 {
375 struct unit_directory *ud; 375 struct unit_directory *ud;
376 struct eth1394_host_info *hi; 376 struct eth1394_host_info *hi;
377 377
378 ud = container_of(dev, struct unit_directory, device); 378 ud = container_of(dev, struct unit_directory, device);
379 hi = hpsb_get_hostinfo(&eth1394_highlevel, ud->ne->host); 379 hi = hpsb_get_hostinfo(&eth1394_highlevel, ud->ne->host);
380 if (!hi) 380 if (!hi)
381 return -ENOENT; 381 return -ENOENT;
382 382
383 return eth1394_new_node(hi, ud); 383 return eth1394_new_node(hi, ud);
384 } 384 }
385 385
386 static int eth1394_remove(struct device *dev) 386 static int eth1394_remove(struct device *dev)
387 { 387 {
388 struct unit_directory *ud; 388 struct unit_directory *ud;
389 struct eth1394_host_info *hi; 389 struct eth1394_host_info *hi;
390 struct eth1394_priv *priv; 390 struct eth1394_priv *priv;
391 struct eth1394_node_ref *old_node; 391 struct eth1394_node_ref *old_node;
392 struct eth1394_node_info *node_info; 392 struct eth1394_node_info *node_info;
393 struct list_head *lh, *n; 393 struct list_head *lh, *n;
394 unsigned long flags; 394 unsigned long flags;
395 395
396 ud = container_of(dev, struct unit_directory, device); 396 ud = container_of(dev, struct unit_directory, device);
397 hi = hpsb_get_hostinfo(&eth1394_highlevel, ud->ne->host); 397 hi = hpsb_get_hostinfo(&eth1394_highlevel, ud->ne->host);
398 if (!hi) 398 if (!hi)
399 return -ENOENT; 399 return -ENOENT;
400 400
401 priv = netdev_priv(hi->dev); 401 priv = netdev_priv(hi->dev);
402 402
403 old_node = eth1394_find_node(&priv->ip_node_list, ud); 403 old_node = eth1394_find_node(&priv->ip_node_list, ud);
404 if (!old_node) 404 if (!old_node)
405 return 0; 405 return 0;
406 406
407 list_del(&old_node->list); 407 list_del(&old_node->list);
408 kfree(old_node); 408 kfree(old_node);
409 409
410 node_info = dev_get_drvdata(&ud->device); 410 node_info = dev_get_drvdata(&ud->device);
411 411
412 spin_lock_irqsave(&node_info->pdg.lock, flags); 412 spin_lock_irqsave(&node_info->pdg.lock, flags);
413 /* The partial datagram list should be empty, but we'll just 413 /* The partial datagram list should be empty, but we'll just
414 * make sure anyway... */ 414 * make sure anyway... */
415 list_for_each_safe(lh, n, &node_info->pdg.list) 415 list_for_each_safe(lh, n, &node_info->pdg.list)
416 purge_partial_datagram(lh); 416 purge_partial_datagram(lh);
417 spin_unlock_irqrestore(&node_info->pdg.lock, flags); 417 spin_unlock_irqrestore(&node_info->pdg.lock, flags);
418 418
419 kfree(node_info); 419 kfree(node_info);
420 dev_set_drvdata(&ud->device, NULL); 420 dev_set_drvdata(&ud->device, NULL);
421 return 0; 421 return 0;
422 } 422 }
423 423
424 static int eth1394_update(struct unit_directory *ud) 424 static int eth1394_update(struct unit_directory *ud)
425 { 425 {
426 struct eth1394_host_info *hi; 426 struct eth1394_host_info *hi;
427 struct eth1394_priv *priv; 427 struct eth1394_priv *priv;
428 struct eth1394_node_ref *node; 428 struct eth1394_node_ref *node;
429 429
430 hi = hpsb_get_hostinfo(&eth1394_highlevel, ud->ne->host); 430 hi = hpsb_get_hostinfo(&eth1394_highlevel, ud->ne->host);
431 if (!hi) 431 if (!hi)
432 return -ENOENT; 432 return -ENOENT;
433 433
434 priv = netdev_priv(hi->dev); 434 priv = netdev_priv(hi->dev);
435 node = eth1394_find_node(&priv->ip_node_list, ud); 435 node = eth1394_find_node(&priv->ip_node_list, ud);
436 if (node) 436 if (node)
437 return 0; 437 return 0;
438 438
439 return eth1394_new_node(hi, ud); 439 return eth1394_new_node(hi, ud);
440 } 440 }
441 441
442 static const struct ieee1394_device_id eth1394_id_table[] = { 442 static const struct ieee1394_device_id eth1394_id_table[] = {
443 { 443 {
444 .match_flags = (IEEE1394_MATCH_SPECIFIER_ID | 444 .match_flags = (IEEE1394_MATCH_SPECIFIER_ID |
445 IEEE1394_MATCH_VERSION), 445 IEEE1394_MATCH_VERSION),
446 .specifier_id = ETHER1394_GASP_SPECIFIER_ID, 446 .specifier_id = ETHER1394_GASP_SPECIFIER_ID,
447 .version = ETHER1394_GASP_VERSION, 447 .version = ETHER1394_GASP_VERSION,
448 }, 448 },
449 {} 449 {}
450 }; 450 };
451 451
452 MODULE_DEVICE_TABLE(ieee1394, eth1394_id_table); 452 MODULE_DEVICE_TABLE(ieee1394, eth1394_id_table);
453 453
454 static struct hpsb_protocol_driver eth1394_proto_driver = { 454 static struct hpsb_protocol_driver eth1394_proto_driver = {
455 .name = driver_name, 455 .name = driver_name,
456 .id_table = eth1394_id_table, 456 .id_table = eth1394_id_table,
457 .update = eth1394_update, 457 .update = eth1394_update,
458 .driver = { 458 .driver = {
459 .probe = eth1394_probe, 459 .probe = eth1394_probe,
460 .remove = eth1394_remove, 460 .remove = eth1394_remove,
461 }, 461 },
462 }; 462 };

static void ether1394_reset_priv(struct net_device *dev, int set_mtu)
{
	unsigned long flags;
	int i;
	struct eth1394_priv *priv = netdev_priv(dev);
	struct hpsb_host *host = priv->host;
	u64 guid = get_unaligned((u64 *)&(host->csr.rom->bus_info_data[3]));
	int max_speed = IEEE1394_SPEED_MAX;

	spin_lock_irqsave(&priv->lock, flags);

	memset(priv->ud_list, 0, sizeof(priv->ud_list));
	priv->bc_maxpayload = 512;

	/* Determine speed limit */
	/* FIXME: This is broken for nodes with link speed < PHY speed,
	 * and it is suboptimal for S200B...S800B hardware.
	 * The result of nodemgr's speed probe should be used somehow. */
	for (i = 0; i < host->node_count; i++) {
		/* take care of S100B...S400B PHY ports */
		if (host->speed[i] == SELFID_SPEED_UNKNOWN) {
			max_speed = IEEE1394_SPEED_100;
			break;
		}
		if (max_speed > host->speed[i])
			max_speed = host->speed[i];
	}
	priv->bc_sspd = max_speed;

	if (set_mtu) {
		/* Use the RFC 2734 default 1500 octets or the maximum payload
		 * as initial MTU */
		dev->mtu = min(1500, ether1394_max_mtu(host));

		/* Set our hardware address while we're at it */
		memcpy(dev->dev_addr, &guid, sizeof(u64));
		memset(dev->broadcast, 0xff, sizeof(u64));
	}

	spin_unlock_irqrestore(&priv->lock, flags);
}

static const struct header_ops ether1394_header_ops = {
	.create		= ether1394_header,
	.rebuild	= ether1394_rebuild_header,
	.cache		= ether1394_header_cache,
	.cache_update	= ether1394_header_cache_update,
	.parse		= ether1394_header_parse,
};

static const struct net_device_ops ether1394_netdev_ops = {
	.ndo_open	= ether1394_open,
	.ndo_stop	= ether1394_stop,
	.ndo_start_xmit	= ether1394_tx,
	.ndo_tx_timeout	= ether1394_tx_timeout,
	.ndo_change_mtu	= ether1394_change_mtu,
};

static void ether1394_init_dev(struct net_device *dev)
{
	dev->header_ops = &ether1394_header_ops;
	dev->netdev_ops = &ether1394_netdev_ops;

	SET_ETHTOOL_OPS(dev, &ethtool_ops);

	dev->watchdog_timeo = ETHER1394_TIMEOUT;
	dev->flags = IFF_BROADCAST | IFF_MULTICAST;
	dev->features = NETIF_F_HIGHDMA;
	dev->addr_len = ETH1394_ALEN;
	dev->hard_header_len = ETH1394_HLEN;
	dev->type = ARPHRD_IEEE1394;

	/* FIXME: This value was copied from ether_setup().  Is it too much? */
	dev->tx_queue_len = 1000;
}

/*
 * Wake the queue up after commonly encountered transmit failure conditions are
 * hopefully over.  Currently only tlabel exhaustion is accounted for.
 */
static void ether1394_wake_queue(struct work_struct *work)
{
	struct eth1394_priv *priv;
	struct hpsb_packet *packet;

	priv = container_of(work, struct eth1394_priv, wake);
	packet = hpsb_alloc_packet(0);

	/* This is really bad, but unjam the queue anyway. */
	if (!packet)
		goto out;

	packet->host = priv->host;
	packet->node_id = priv->wake_node;
	/*
	 * A transaction label is all we really want.  If we get one, it almost
	 * always means we can get a lot more because the ieee1394 core recycled
	 * a whole batch of tlabels, at last.
	 */
	if (hpsb_get_tlabel(packet) == 0)
		hpsb_free_tlabel(packet);

	hpsb_free_packet(packet);
out:
	netif_wake_queue(priv->wake_dev);
}

/*
 * This function is called every time a card is found.  It is generally called
 * when the module is installed.  This is where we add all of our ethernet
 * devices.  One for each host.
 */
static void ether1394_add_host(struct hpsb_host *host)
{
	struct eth1394_host_info *hi = NULL;
	struct net_device *dev = NULL;
	struct eth1394_priv *priv;
	u64 fifo_addr;

	if (hpsb_config_rom_ip1394_add(host) != 0) {
		ETH1394_PRINT_G(KERN_ERR, "Can't add IP-over-1394 ROM entry\n");
		return;
	}

	fifo_addr = hpsb_allocate_and_register_addrspace(
			&eth1394_highlevel, host, &addr_ops,
			ETHER1394_REGION_ADDR_LEN, ETHER1394_REGION_ADDR_LEN,
			CSR1212_INVALID_ADDR_SPACE, CSR1212_INVALID_ADDR_SPACE);
	if (fifo_addr == CSR1212_INVALID_ADDR_SPACE) {
		ETH1394_PRINT_G(KERN_ERR, "Cannot register CSR space\n");
		hpsb_config_rom_ip1394_remove(host);
		return;
	}

	dev = alloc_netdev(sizeof(*priv), "eth%d", ether1394_init_dev);
	if (dev == NULL) {
		ETH1394_PRINT_G(KERN_ERR, "Out of memory\n");
		goto out;
	}

	SET_NETDEV_DEV(dev, &host->device);

	priv = netdev_priv(dev);
	INIT_LIST_HEAD(&priv->ip_node_list);
	spin_lock_init(&priv->lock);
	priv->host = host;
	priv->local_fifo = fifo_addr;
	INIT_WORK(&priv->wake, ether1394_wake_queue);
	priv->wake_dev = dev;

	hi = hpsb_create_hostinfo(&eth1394_highlevel, host, sizeof(*hi));
	if (hi == NULL) {
		ETH1394_PRINT_G(KERN_ERR, "Out of memory\n");
		goto out;
	}

	ether1394_reset_priv(dev, 1);

	if (register_netdev(dev)) {
		ETH1394_PRINT_G(KERN_ERR, "Cannot register the driver\n");
		goto out;
	}

	ETH1394_PRINT(KERN_INFO, dev->name, "IPv4 over IEEE 1394 (fw-host%d)\n",
		      host->id);

	hi->host = host;
	hi->dev = dev;

	/* Ignore validity in hopes that it will be set in the future.  It'll
	 * be checked when the eth device is opened. */
	priv->broadcast_channel = host->csr.broadcast_channel & 0x3f;

	ether1394_recv_init(priv);
	return;
out:
	if (dev)
		free_netdev(dev);
	if (hi)
		hpsb_destroy_hostinfo(&eth1394_highlevel, host);
	hpsb_unregister_addrspace(&eth1394_highlevel, host, fifo_addr);
	hpsb_config_rom_ip1394_remove(host);
}

/* Remove a card from our list */
static void ether1394_remove_host(struct hpsb_host *host)
{
	struct eth1394_host_info *hi;
	struct eth1394_priv *priv;

	hi = hpsb_get_hostinfo(&eth1394_highlevel, host);
	if (!hi)
		return;
	priv = netdev_priv(hi->dev);
	hpsb_unregister_addrspace(&eth1394_highlevel, host, priv->local_fifo);
	hpsb_config_rom_ip1394_remove(host);
	if (priv->iso)
		hpsb_iso_shutdown(priv->iso);
	unregister_netdev(hi->dev);
	free_netdev(hi->dev);
}

/* A bus reset happened */
static void ether1394_host_reset(struct hpsb_host *host)
{
	struct eth1394_host_info *hi;
	struct eth1394_priv *priv;
	struct net_device *dev;
	struct list_head *lh, *n;
	struct eth1394_node_ref *node;
	struct eth1394_node_info *node_info;
	unsigned long flags;

	hi = hpsb_get_hostinfo(&eth1394_highlevel, host);

	/* This can happen for hosts that we don't use */
	if (!hi)
		return;

	dev = hi->dev;
	priv = netdev_priv(dev);

	/* Reset our private host data, but not our MTU */
	netif_stop_queue(dev);
	ether1394_reset_priv(dev, 0);

	list_for_each_entry(node, &priv->ip_node_list, list) {
		node_info = dev_get_drvdata(&node->ud->device);

		spin_lock_irqsave(&node_info->pdg.lock, flags);

		list_for_each_safe(lh, n, &node_info->pdg.list)
			purge_partial_datagram(lh);

		INIT_LIST_HEAD(&(node_info->pdg.list));
		node_info->pdg.sz = 0;

		spin_unlock_irqrestore(&node_info->pdg.lock, flags);
	}

	netif_wake_queue(dev);
}

/******************************************
 * HW Header net device functions
 ******************************************/
/* These functions have been adapted from net/ethernet/eth.c */

/* Create a fake MAC header for an arbitrary protocol layer.
 * saddr=NULL means use device source address
 * daddr=NULL means leave destination address (e.g. unresolved ARP). */
static int ether1394_header(struct sk_buff *skb, struct net_device *dev,
			    unsigned short type, const void *daddr,
			    const void *saddr, unsigned len)
{
	struct eth1394hdr *eth =
		(struct eth1394hdr *)skb_push(skb, ETH1394_HLEN);

	eth->h_proto = htons(type);

	if (dev->flags & (IFF_LOOPBACK | IFF_NOARP)) {
		memset(eth->h_dest, 0, dev->addr_len);
		return dev->hard_header_len;
	}

	if (daddr) {
		memcpy(eth->h_dest, daddr, dev->addr_len);
		return dev->hard_header_len;
	}

	return -dev->hard_header_len;
}

/* Rebuild the faked MAC header.  This is called after an ARP
 * (or, in the future, other address resolution) has completed on this
 * sk_buff.  We now let ARP fill in the other fields.
 *
 * This routine CANNOT use a cached dst->neigh!
 * Really, it is used only when dst->neigh is wrong.
 */
static int ether1394_rebuild_header(struct sk_buff *skb)
{
	struct eth1394hdr *eth = (struct eth1394hdr *)skb->data;

	if (eth->h_proto == htons(ETH_P_IP))
		return arp_find((unsigned char *)&eth->h_dest, skb);

	ETH1394_PRINT(KERN_DEBUG, skb->dev->name,
		      "unable to resolve type %04x addresses\n",
		      ntohs(eth->h_proto));
	return 0;
}

static int ether1394_header_parse(const struct sk_buff *skb,
				  unsigned char *haddr)
{
	memcpy(haddr, skb->dev->dev_addr, ETH1394_ALEN);
	return ETH1394_ALEN;
}

static int ether1394_header_cache(const struct neighbour *neigh,
				  struct hh_cache *hh)
{
	__be16 type = hh->hh_type;
	struct net_device *dev = neigh->dev;
	struct eth1394hdr *eth =
		(struct eth1394hdr *)((u8 *)hh->hh_data + 16 - ETH1394_HLEN);

	if (type == htons(ETH_P_802_3))
		return -1;

	eth->h_proto = type;
	memcpy(eth->h_dest, neigh->ha, dev->addr_len);

	hh->hh_len = ETH1394_HLEN;
	return 0;
}
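/*
 * Layout assumption behind the "16 - ETH1394_HLEN" offset above: the cached
 * hardware header is stored right-aligned in hh_data so that it ends on a
 * 16-byte boundary.  With ETH1394_HLEN = 10 (an 8-byte EUI-64 destination
 * plus a 2-byte type) the header starts at offset 6, the same spot
 * ether1394_header_cache_update() patches below.
 */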

/* Called by the Address Resolution module to notify changes in address. */
static void ether1394_header_cache_update(struct hh_cache *hh,
					  const struct net_device *dev,
					  const unsigned char *haddr)
{
	memcpy((u8 *)hh->hh_data + 16 - ETH1394_HLEN, haddr, dev->addr_len);
}

/******************************************
 * Datagram reception code
 ******************************************/

/* Copied from net/ethernet/eth.c */
static __be16 ether1394_type_trans(struct sk_buff *skb, struct net_device *dev)
{
	struct eth1394hdr *eth;
	unsigned char *rawp;

	skb_reset_mac_header(skb);
	skb_pull(skb, ETH1394_HLEN);
	eth = eth1394_hdr(skb);

	if (*eth->h_dest & 1) {
		if (memcmp(eth->h_dest, dev->broadcast, dev->addr_len) == 0)
			skb->pkt_type = PACKET_BROADCAST;
#if 0
		else
			skb->pkt_type = PACKET_MULTICAST;
#endif
	} else {
		if (memcmp(eth->h_dest, dev->dev_addr, dev->addr_len))
			skb->pkt_type = PACKET_OTHERHOST;
	}

	if (ntohs(eth->h_proto) >= 1536)
		return eth->h_proto;

	rawp = skb->data;

	if (*(unsigned short *)rawp == 0xFFFF)
		return htons(ETH_P_802_3);

	return htons(ETH_P_802_2);
}
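/*
 * Decision table for the return value above (standard Ethernet convention):
 * h_proto >= 1536 (0x0600) is a real EtherType (e.g. 0x0800 for IPv4);
 * smaller values are 802.3 length fields, where a payload beginning with
 * 0xFFFF marks a raw 802.3 frame and anything else is assumed to carry an
 * 802.2 LLC header.
 */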

/* Parse an encapsulated IP1394 header into an ethernet frame packet.
 * We also perform ARP translation here, if need be. */
static __be16 ether1394_parse_encap(struct sk_buff *skb, struct net_device *dev,
				    nodeid_t srcid, nodeid_t destid,
				    __be16 ether_type)
{
	struct eth1394_priv *priv = netdev_priv(dev);
	__be64 dest_hw;
	__be16 ret = 0;

	/* Setup our hw addresses.  We use these to build the ethernet header. */
	if (destid == (LOCAL_BUS | ALL_NODES))
		dest_hw = ~cpu_to_be64(0);	/* broadcast */
	else
		dest_hw = cpu_to_be64((u64)priv->host->csr.guid_hi << 32 |
				      priv->host->csr.guid_lo);

	/* If this is an ARP packet, convert it.  First, we want to make
	 * use of some of the fields, since they tell us a little bit
	 * about the sending machine. */
	if (ether_type == htons(ETH_P_ARP)) {
		struct eth1394_arp *arp1394 = (struct eth1394_arp *)skb->data;
		struct arphdr *arp = (struct arphdr *)skb->data;
		unsigned char *arp_ptr = (unsigned char *)(arp + 1);
		u64 fifo_addr = (u64)ntohs(arp1394->fifo_hi) << 32 |
				ntohl(arp1394->fifo_lo);
		u8 max_rec = min(priv->host->csr.max_rec,
				 (u8)(arp1394->max_rec));
		int sspd = arp1394->sspd;
		u16 maxpayload;
		struct eth1394_node_ref *node;
		struct eth1394_node_info *node_info;
		__be64 guid;

		/* Sanity check.  Mac OS X seems to be sending us 131 in this
		 * field (at least on my Panther G5).  Not sure why. */
		if (sspd > 5 || sspd < 0)
			sspd = 0;

		maxpayload = min(eth1394_speedto_maxpayload[sspd],
				 (u16)(1 << (max_rec + 1)));

		guid = get_unaligned(&arp1394->s_uniq_id);
		node = eth1394_find_node_guid(&priv->ip_node_list,
					      be64_to_cpu(guid));
		if (!node)
			return cpu_to_be16(0);

		node_info = dev_get_drvdata(&node->ud->device);

		/* Update our speed/payload/fifo_offset table */
		node_info->maxpayload = maxpayload;
		node_info->sspd = sspd;
		node_info->fifo = fifo_addr;

		/* Now that we're done with the 1394 specific stuff, we'll
		 * need to alter some of the data.  Believe it or not, all
		 * that needs to be done is that the sender_IP_address gets
		 * moved, the destination hardware address gets stuffed in,
		 * and the hardware address length is set to 8.
		 *
		 * IMPORTANT: The code below overwrites 1394 specific data
		 * needed above, so keep the munging of the data for the
		 * higher level IP stack last. */

		arp->ar_hln = 8;
		arp_ptr += arp->ar_hln;		/* skip over sender unique id */
		*(u32 *)arp_ptr = arp1394->sip;	/* move sender IP addr */
		arp_ptr += arp->ar_pln;		/* skip over sender IP addr */

		if (arp->ar_op == htons(ARPOP_REQUEST))
			memset(arp_ptr, 0, sizeof(u64));
		else
			memcpy(arp_ptr, dev->dev_addr, sizeof(u64));
	}

	/* Now add the ethernet header. */
	if (dev_hard_header(skb, dev, ntohs(ether_type), &dest_hw, NULL,
			    skb->len) >= 0)
		ret = ether1394_type_trans(skb, dev);

	return ret;
}

static int fragment_overlap(struct list_head *frag_list, int offset, int len)
{
	struct fragment_info *fi;
	int end = offset + len;

	list_for_each_entry(fi, frag_list, list)
		if (offset < fi->offset + fi->len && end > fi->offset)
			return 1;

	return 0;
}

static struct list_head *find_partial_datagram(struct list_head *pdgl, int dgl)
{
	struct partial_datagram *pd;

	list_for_each_entry(pd, pdgl, list)
		if (pd->dgl == dgl)
			return &pd->list;

	return NULL;
}

/* Assumes that new fragment does not overlap any existing fragments */
static int new_fragment(struct list_head *frag_info, int offset, int len)
{
	struct list_head *lh;
	struct fragment_info *fi, *fi2, *new;

	list_for_each(lh, frag_info) {
		fi = list_entry(lh, struct fragment_info, list);
		if (fi->offset + fi->len == offset) {
			/* The new fragment can be tacked on to the end */
			fi->len += len;
			/* Did the new fragment plug a hole? */
			fi2 = list_entry(lh->next, struct fragment_info, list);
			if (fi->offset + fi->len == fi2->offset) {
				/* glue fragments together */
				fi->len += fi2->len;
				list_del(lh->next);
				kfree(fi2);
			}
			return 0;
		} else if (offset + len == fi->offset) {
			/* The new fragment can be tacked on to the beginning */
			fi->offset = offset;
			fi->len += len;
			/* Did the new fragment plug a hole? */
			fi2 = list_entry(lh->prev, struct fragment_info, list);
			if (fi2->offset + fi2->len == fi->offset) {
				/* glue fragments together */
				fi2->len += fi->len;
				list_del(lh);
				kfree(fi);
			}
			return 0;
		} else if (offset > fi->offset + fi->len) {
			break;
		} else if (offset + len < fi->offset) {
			lh = lh->prev;
			break;
		}
	}

	new = kmalloc(sizeof(*new), GFP_ATOMIC);
	if (!new)
		return -ENOMEM;

	new->offset = offset;
	new->len = len;

	list_add(&new->list, lh);
	return 0;
}
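/*
 * Coalescing sketch with made-up numbers: given fragments [0,512) and
 * [1024,512) on the list, adding [512,512) first extends the head entry to
 * [0,1024) and then glues it to its neighbour, leaving a single [0,1536)
 * entry (the "plug a hole" case above).
 */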

static int new_partial_datagram(struct net_device *dev, struct list_head *pdgl,
				int dgl, int dg_size, char *frag_buf,
				int frag_off, int frag_len)
{
	struct partial_datagram *new;

	new = kmalloc(sizeof(*new), GFP_ATOMIC);
	if (!new)
		return -ENOMEM;

	INIT_LIST_HEAD(&new->frag_info);

	if (new_fragment(&new->frag_info, frag_off, frag_len) < 0) {
		kfree(new);
		return -ENOMEM;
	}

	new->dgl = dgl;
	new->dg_size = dg_size;

	new->skb = dev_alloc_skb(dg_size + dev->hard_header_len + 15);
	if (!new->skb) {
		struct fragment_info *fi = list_entry(new->frag_info.next,
						      struct fragment_info,
						      list);
		kfree(fi);
		kfree(new);
		return -ENOMEM;
	}

	skb_reserve(new->skb, (dev->hard_header_len + 15) & ~15);
	new->pbuf = skb_put(new->skb, dg_size);
	memcpy(new->pbuf + frag_off, frag_buf, frag_len);

	list_add(&new->list, pdgl);
	return 0;
}

static int update_partial_datagram(struct list_head *pdgl, struct list_head *lh,
				   char *frag_buf, int frag_off, int frag_len)
{
	struct partial_datagram *pd =
		list_entry(lh, struct partial_datagram, list);

	if (new_fragment(&pd->frag_info, frag_off, frag_len) < 0)
		return -ENOMEM;

	memcpy(pd->pbuf + frag_off, frag_buf, frag_len);

	/* Move the list entry to the beginning of the list so that the
	 * oldest partial datagrams percolate to the end of the list */
	list_move(lh, pdgl);
	return 0;
}

static int is_datagram_complete(struct list_head *lh, int dg_size)
{
	struct partial_datagram *pd;
	struct fragment_info *fi;

	pd = list_entry(lh, struct partial_datagram, list);
	fi = list_entry(pd->frag_info.next, struct fragment_info, list);

	return (fi->len == dg_size);
}
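/*
 * Why checking only the first entry suffices (inferred from new_fragment()
 * above): adjacent ranges are glued together as fragments arrive and the
 * list is kept ordered, so a fully reassembled datagram collapses into one
 * fragment_info spanning [0, dg_size).
 */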
1052 1052
1053 /* Packet reception. We convert the IP1394 encapsulation header to an 1053 /* Packet reception. We convert the IP1394 encapsulation header to an
1054 * ethernet header, and fill it with some of our other fields. This is 1054 * ethernet header, and fill it with some of our other fields. This is
1055 * an incoming packet from the 1394 bus. */ 1055 * an incoming packet from the 1394 bus. */
1056 static int ether1394_data_handler(struct net_device *dev, int srcid, int destid, 1056 static int ether1394_data_handler(struct net_device *dev, int srcid, int destid,
1057 char *buf, int len) 1057 char *buf, int len)
1058 { 1058 {
1059 struct sk_buff *skb; 1059 struct sk_buff *skb;
1060 unsigned long flags; 1060 unsigned long flags;
1061 struct eth1394_priv *priv = netdev_priv(dev); 1061 struct eth1394_priv *priv = netdev_priv(dev);
1062 union eth1394_hdr *hdr = (union eth1394_hdr *)buf; 1062 union eth1394_hdr *hdr = (union eth1394_hdr *)buf;
1063 __be16 ether_type = cpu_to_be16(0); /* initialized to clear warning */ 1063 __be16 ether_type = cpu_to_be16(0); /* initialized to clear warning */
1064 int hdr_len; 1064 int hdr_len;
1065 struct unit_directory *ud = priv->ud_list[NODEID_TO_NODE(srcid)]; 1065 struct unit_directory *ud = priv->ud_list[NODEID_TO_NODE(srcid)];
1066 struct eth1394_node_info *node_info; 1066 struct eth1394_node_info *node_info;
1067 1067
1068 if (!ud) { 1068 if (!ud) {
1069 struct eth1394_node_ref *node; 1069 struct eth1394_node_ref *node;
1070 node = eth1394_find_node_nodeid(&priv->ip_node_list, srcid); 1070 node = eth1394_find_node_nodeid(&priv->ip_node_list, srcid);
1071 if (unlikely(!node)) { 1071 if (unlikely(!node)) {
1072 HPSB_PRINT(KERN_ERR, "ether1394 rx: sender nodeid " 1072 HPSB_PRINT(KERN_ERR, "ether1394 rx: sender nodeid "
1073 "lookup failure: " NODE_BUS_FMT, 1073 "lookup failure: " NODE_BUS_FMT,
1074 NODE_BUS_ARGS(priv->host, srcid)); 1074 NODE_BUS_ARGS(priv->host, srcid));
1075 dev->stats.rx_dropped++; 1075 dev->stats.rx_dropped++;
1076 return -1; 1076 return -1;
1077 } 1077 }
1078 ud = node->ud; 1078 ud = node->ud;
1079 1079
1080 priv->ud_list[NODEID_TO_NODE(srcid)] = ud; 1080 priv->ud_list[NODEID_TO_NODE(srcid)] = ud;
1081 } 1081 }
1082 1082
1083 node_info = dev_get_drvdata(&ud->device); 1083 node_info = dev_get_drvdata(&ud->device);
1084 1084
1085 /* First, did we receive a fragmented or unfragmented datagram? */ 1085 /* First, did we receive a fragmented or unfragmented datagram? */
1086 hdr->words.word1 = ntohs(hdr->words.word1); 1086 hdr->words.word1 = ntohs(hdr->words.word1);
1087 1087
1088 hdr_len = hdr_type_len[hdr->common.lf]; 1088 hdr_len = hdr_type_len[hdr->common.lf];
1089 1089
1090 if (hdr->common.lf == ETH1394_HDR_LF_UF) { 1090 if (hdr->common.lf == ETH1394_HDR_LF_UF) {
1091 /* An unfragmented datagram has been received by the ieee1394 1091 /* An unfragmented datagram has been received by the ieee1394
1092 * bus. Build an skbuff around it so we can pass it to the 1092 * bus. Build an skbuff around it so we can pass it to the
1093 * high level network layer. */ 1093 * high level network layer. */
1094 1094
1095 skb = dev_alloc_skb(len + dev->hard_header_len + 15); 1095 skb = dev_alloc_skb(len + dev->hard_header_len + 15);
1096 if (unlikely(!skb)) { 1096 if (unlikely(!skb)) {
1097 ETH1394_PRINT_G(KERN_ERR, "Out of memory\n"); 1097 ETH1394_PRINT_G(KERN_ERR, "Out of memory\n");
1098 dev->stats.rx_dropped++; 1098 dev->stats.rx_dropped++;
1099 return -1; 1099 return -1;
1100 } 1100 }
1101 skb_reserve(skb, (dev->hard_header_len + 15) & ~15); 1101 skb_reserve(skb, (dev->hard_header_len + 15) & ~15);
1102 memcpy(skb_put(skb, len - hdr_len), buf + hdr_len, 1102 memcpy(skb_put(skb, len - hdr_len), buf + hdr_len,
1103 len - hdr_len); 1103 len - hdr_len);
1104 ether_type = hdr->uf.ether_type; 1104 ether_type = hdr->uf.ether_type;
1105 } else { 1105 } else {
1106 		/* A datagram fragment has been received; now the fun begins. */ 1106 		/* A datagram fragment has been received; now the fun begins. */
1107 1107
1108 struct list_head *pdgl, *lh; 1108 struct list_head *pdgl, *lh;
1109 struct partial_datagram *pd; 1109 struct partial_datagram *pd;
1110 int fg_off; 1110 int fg_off;
1111 int fg_len = len - hdr_len; 1111 int fg_len = len - hdr_len;
1112 int dg_size; 1112 int dg_size;
1113 int dgl; 1113 int dgl;
1114 int retval; 1114 int retval;
1115 struct pdg_list *pdg = &(node_info->pdg); 1115 struct pdg_list *pdg = &(node_info->pdg);
1116 1116
1117 hdr->words.word3 = ntohs(hdr->words.word3); 1117 hdr->words.word3 = ntohs(hdr->words.word3);
1118 /* The 4th header word is reserved so no need to do ntohs() */ 1118 /* The 4th header word is reserved so no need to do ntohs() */
1119 1119
1120 if (hdr->common.lf == ETH1394_HDR_LF_FF) { 1120 if (hdr->common.lf == ETH1394_HDR_LF_FF) {
1121 ether_type = hdr->ff.ether_type; 1121 ether_type = hdr->ff.ether_type;
1122 dgl = hdr->ff.dgl; 1122 dgl = hdr->ff.dgl;
1123 dg_size = hdr->ff.dg_size + 1; 1123 dg_size = hdr->ff.dg_size + 1;
1124 fg_off = 0; 1124 fg_off = 0;
1125 } else { 1125 } else {
1126 hdr->words.word2 = ntohs(hdr->words.word2); 1126 hdr->words.word2 = ntohs(hdr->words.word2);
1127 dgl = hdr->sf.dgl; 1127 dgl = hdr->sf.dgl;
1128 dg_size = hdr->sf.dg_size + 1; 1128 dg_size = hdr->sf.dg_size + 1;
1129 fg_off = hdr->sf.fg_off; 1129 fg_off = hdr->sf.fg_off;
1130 } 1130 }
1131 spin_lock_irqsave(&pdg->lock, flags); 1131 spin_lock_irqsave(&pdg->lock, flags);
1132 1132
1133 pdgl = &(pdg->list); 1133 pdgl = &(pdg->list);
1134 lh = find_partial_datagram(pdgl, dgl); 1134 lh = find_partial_datagram(pdgl, dgl);
1135 1135
1136 if (lh == NULL) { 1136 if (lh == NULL) {
1137 while (pdg->sz >= max_partial_datagrams) { 1137 while (pdg->sz >= max_partial_datagrams) {
1138 /* remove the oldest */ 1138 /* remove the oldest */
1139 purge_partial_datagram(pdgl->prev); 1139 purge_partial_datagram(pdgl->prev);
1140 pdg->sz--; 1140 pdg->sz--;
1141 } 1141 }
1142 1142
1143 retval = new_partial_datagram(dev, pdgl, dgl, dg_size, 1143 retval = new_partial_datagram(dev, pdgl, dgl, dg_size,
1144 buf + hdr_len, fg_off, 1144 buf + hdr_len, fg_off,
1145 fg_len); 1145 fg_len);
1146 if (retval < 0) { 1146 if (retval < 0) {
1147 spin_unlock_irqrestore(&pdg->lock, flags); 1147 spin_unlock_irqrestore(&pdg->lock, flags);
1148 goto bad_proto; 1148 goto bad_proto;
1149 } 1149 }
1150 pdg->sz++; 1150 pdg->sz++;
1151 lh = find_partial_datagram(pdgl, dgl); 1151 lh = find_partial_datagram(pdgl, dgl);
1152 } else { 1152 } else {
1153 pd = list_entry(lh, struct partial_datagram, list); 1153 pd = list_entry(lh, struct partial_datagram, list);
1154 1154
1155 if (fragment_overlap(&pd->frag_info, fg_off, fg_len)) { 1155 if (fragment_overlap(&pd->frag_info, fg_off, fg_len)) {
1156 /* Overlapping fragments, obliterate old 1156 /* Overlapping fragments, obliterate old
1157 * datagram and start new one. */ 1157 * datagram and start new one. */
1158 purge_partial_datagram(lh); 1158 purge_partial_datagram(lh);
1159 retval = new_partial_datagram(dev, pdgl, dgl, 1159 retval = new_partial_datagram(dev, pdgl, dgl,
1160 dg_size, 1160 dg_size,
1161 buf + hdr_len, 1161 buf + hdr_len,
1162 fg_off, fg_len); 1162 fg_off, fg_len);
1163 if (retval < 0) { 1163 if (retval < 0) {
1164 pdg->sz--; 1164 pdg->sz--;
1165 spin_unlock_irqrestore(&pdg->lock, flags); 1165 spin_unlock_irqrestore(&pdg->lock, flags);
1166 goto bad_proto; 1166 goto bad_proto;
1167 } 1167 }
1168 } else { 1168 } else {
1169 retval = update_partial_datagram(pdgl, lh, 1169 retval = update_partial_datagram(pdgl, lh,
1170 buf + hdr_len, 1170 buf + hdr_len,
1171 fg_off, fg_len); 1171 fg_off, fg_len);
1172 if (retval < 0) { 1172 if (retval < 0) {
1173 /* Couldn't save off fragment anyway 1173 /* Couldn't save off fragment anyway
1174 * so might as well obliterate the 1174 * so might as well obliterate the
1175 * datagram now. */ 1175 * datagram now. */
1176 purge_partial_datagram(lh); 1176 purge_partial_datagram(lh);
1177 pdg->sz--; 1177 pdg->sz--;
1178 spin_unlock_irqrestore(&pdg->lock, flags); 1178 spin_unlock_irqrestore(&pdg->lock, flags);
1179 goto bad_proto; 1179 goto bad_proto;
1180 } 1180 }
1181 } /* fragment overlap */ 1181 } /* fragment overlap */
1182 } /* new datagram or add to existing one */ 1182 } /* new datagram or add to existing one */
1183 1183
1184 pd = list_entry(lh, struct partial_datagram, list); 1184 pd = list_entry(lh, struct partial_datagram, list);
1185 1185
1186 if (hdr->common.lf == ETH1394_HDR_LF_FF) 1186 if (hdr->common.lf == ETH1394_HDR_LF_FF)
1187 pd->ether_type = ether_type; 1187 pd->ether_type = ether_type;
1188 1188
1189 if (is_datagram_complete(lh, dg_size)) { 1189 if (is_datagram_complete(lh, dg_size)) {
1190 ether_type = pd->ether_type; 1190 ether_type = pd->ether_type;
1191 pdg->sz--; 1191 pdg->sz--;
1192 skb = skb_get(pd->skb); 1192 skb = skb_get(pd->skb);
1193 purge_partial_datagram(lh); 1193 purge_partial_datagram(lh);
1194 spin_unlock_irqrestore(&pdg->lock, flags); 1194 spin_unlock_irqrestore(&pdg->lock, flags);
1195 } else { 1195 } else {
1196 			/* Datagram is not complete; we're done for the 1196 			/* Datagram is not complete; we're done for the
1197 * moment. */ 1197 * moment. */
1198 spin_unlock_irqrestore(&pdg->lock, flags); 1198 spin_unlock_irqrestore(&pdg->lock, flags);
1199 return 0; 1199 return 0;
1200 } 1200 }
1201 	} /* unfragmented datagram or fragmented one */ 1201 	} /* unfragmented datagram or fragmented one */
1202 1202
1203 /* Write metadata, and then pass to the receive level */ 1203 /* Write metadata, and then pass to the receive level */
1204 skb->dev = dev; 1204 skb->dev = dev;
1205 skb->ip_summed = CHECKSUM_UNNECESSARY; /* don't check it */ 1205 skb->ip_summed = CHECKSUM_UNNECESSARY; /* don't check it */
1206 1206
1207 /* Parse the encapsulation header. This actually does the job of 1207 /* Parse the encapsulation header. This actually does the job of
1208 	 * converting to an ethernet frame header, as well as ARP 1208 	 * converting to an ethernet frame header, as well as ARP
1209 * conversion if needed. ARP conversion is easier in this 1209 * conversion if needed. ARP conversion is easier in this
1210 * direction, since we are using ethernet as our backend. */ 1210 * direction, since we are using ethernet as our backend. */
1211 skb->protocol = ether1394_parse_encap(skb, dev, srcid, destid, 1211 skb->protocol = ether1394_parse_encap(skb, dev, srcid, destid,
1212 ether_type); 1212 ether_type);
1213 1213
1214 spin_lock_irqsave(&priv->lock, flags); 1214 spin_lock_irqsave(&priv->lock, flags);
1215 1215
1216 if (!skb->protocol) { 1216 if (!skb->protocol) {
1217 dev->stats.rx_errors++; 1217 dev->stats.rx_errors++;
1218 dev->stats.rx_dropped++; 1218 dev->stats.rx_dropped++;
1219 dev_kfree_skb_any(skb); 1219 dev_kfree_skb_any(skb);
1220 } else if (netif_rx(skb) == NET_RX_DROP) { 1220 } else if (netif_rx(skb) == NET_RX_DROP) {
1221 dev->stats.rx_errors++; 1221 dev->stats.rx_errors++;
1222 dev->stats.rx_dropped++; 1222 dev->stats.rx_dropped++;
1223 } else { 1223 } else {
1224 dev->stats.rx_packets++; 1224 dev->stats.rx_packets++;
1225 dev->stats.rx_bytes += skb->len; 1225 dev->stats.rx_bytes += skb->len;
1226 } 1226 }
1227 1227
1228 spin_unlock_irqrestore(&priv->lock, flags); 1228 spin_unlock_irqrestore(&priv->lock, flags);
1229 1229
1230 bad_proto: 1230 bad_proto:
1231 if (netif_queue_stopped(dev)) 1231 if (netif_queue_stopped(dev))
1232 netif_wake_queue(dev); 1232 netif_wake_queue(dev);
1233 1233
1234 return 0; 1234 return 0;
1235 } 1235 }
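
The reassembly branch above condenses to a small decision tree. A minimal
sketch using the driver's own helpers (locking, the pdg->sz bookkeeping, and
the completed-datagram hand-off are elided; buf and hdr_len as in the handler):

    /* Sketch only: the per-fragment policy of ether1394_data_handler. */
    static int reassemble_fragment(struct net_device *dev,
                                   struct list_head *pdgl, int dgl,
                                   int dg_size, char *buf, int hdr_len,
                                   int fg_off, int fg_len)
    {
            struct list_head *lh = find_partial_datagram(pdgl, dgl);
            struct partial_datagram *pd;

            if (!lh)        /* first fragment of a new datagram */
                    return new_partial_datagram(dev, pdgl, dgl, dg_size,
                                                buf + hdr_len, fg_off, fg_len);

            pd = list_entry(lh, struct partial_datagram, list);
            if (fragment_overlap(&pd->frag_info, fg_off, fg_len)) {
                    /* conflicting retransmission: obliterate and restart */
                    purge_partial_datagram(lh);
                    return new_partial_datagram(dev, pdgl, dgl, dg_size,
                                                buf + hdr_len, fg_off, fg_len);
            }
            /* non-overlapping: merge into the existing datagram */
            return update_partial_datagram(pdgl, lh, buf + hdr_len,
                                           fg_off, fg_len);
    }

Each path returns a negative value on allocation failure, which the handler
maps to the bad_proto exit.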
1236 1236
1237 static int ether1394_write(struct hpsb_host *host, int srcid, int destid, 1237 static int ether1394_write(struct hpsb_host *host, int srcid, int destid,
1238 quadlet_t *data, u64 addr, size_t len, u16 flags) 1238 quadlet_t *data, u64 addr, size_t len, u16 flags)
1239 { 1239 {
1240 struct eth1394_host_info *hi; 1240 struct eth1394_host_info *hi;
1241 1241
1242 hi = hpsb_get_hostinfo(&eth1394_highlevel, host); 1242 hi = hpsb_get_hostinfo(&eth1394_highlevel, host);
1243 if (unlikely(!hi)) { 1243 if (unlikely(!hi)) {
1244 ETH1394_PRINT_G(KERN_ERR, "No net device at fw-host%d\n", 1244 ETH1394_PRINT_G(KERN_ERR, "No net device at fw-host%d\n",
1245 host->id); 1245 host->id);
1246 return RCODE_ADDRESS_ERROR; 1246 return RCODE_ADDRESS_ERROR;
1247 } 1247 }
1248 1248
1249 if (ether1394_data_handler(hi->dev, srcid, destid, (char*)data, len)) 1249 if (ether1394_data_handler(hi->dev, srcid, destid, (char*)data, len))
1250 return RCODE_ADDRESS_ERROR; 1250 return RCODE_ADDRESS_ERROR;
1251 else 1251 else
1252 return RCODE_COMPLETE; 1252 return RCODE_COMPLETE;
1253 } 1253 }
1254 1254
1255 static void ether1394_iso(struct hpsb_iso *iso) 1255 static void ether1394_iso(struct hpsb_iso *iso)
1256 { 1256 {
1257 __be32 *data; 1257 __be32 *data;
1258 char *buf; 1258 char *buf;
1259 struct eth1394_host_info *hi; 1259 struct eth1394_host_info *hi;
1260 struct net_device *dev; 1260 struct net_device *dev;
1261 struct eth1394_priv *priv;
1262 unsigned int len; 1261 unsigned int len;
1263 u32 specifier_id; 1262 u32 specifier_id;
1264 u16 source_id; 1263 u16 source_id;
1265 int i; 1264 int i;
1266 int nready; 1265 int nready;
1267 1266
1268 hi = hpsb_get_hostinfo(&eth1394_highlevel, iso->host); 1267 hi = hpsb_get_hostinfo(&eth1394_highlevel, iso->host);
1269 if (unlikely(!hi)) { 1268 if (unlikely(!hi)) {
1270 ETH1394_PRINT_G(KERN_ERR, "No net device at fw-host%d\n", 1269 ETH1394_PRINT_G(KERN_ERR, "No net device at fw-host%d\n",
1271 iso->host->id); 1270 iso->host->id);
1272 return; 1271 return;
1273 } 1272 }
1274 1273
1275 dev = hi->dev; 1274 dev = hi->dev;
1276 1275
1277 nready = hpsb_iso_n_ready(iso); 1276 nready = hpsb_iso_n_ready(iso);
1278 for (i = 0; i < nready; i++) { 1277 for (i = 0; i < nready; i++) {
1279 struct hpsb_iso_packet_info *info = 1278 struct hpsb_iso_packet_info *info =
1280 &iso->infos[(iso->first_packet + i) % iso->buf_packets]; 1279 &iso->infos[(iso->first_packet + i) % iso->buf_packets];
1281 data = (__be32 *)(iso->data_buf.kvirt + info->offset); 1280 data = (__be32 *)(iso->data_buf.kvirt + info->offset);
1282 1281
1283 /* skip over GASP header */ 1282 /* skip over GASP header */
1284 buf = (char *)data + 8; 1283 buf = (char *)data + 8;
1285 len = info->len - 8; 1284 len = info->len - 8;
1286 1285
1287 specifier_id = (be32_to_cpu(data[0]) & 0xffff) << 8 | 1286 specifier_id = (be32_to_cpu(data[0]) & 0xffff) << 8 |
1288 (be32_to_cpu(data[1]) & 0xff000000) >> 24; 1287 (be32_to_cpu(data[1]) & 0xff000000) >> 24;
1289 source_id = be32_to_cpu(data[0]) >> 16; 1288 source_id = be32_to_cpu(data[0]) >> 16;
1290
1291 priv = netdev_priv(dev);
1292 1289
1293 if (info->channel != (iso->host->csr.broadcast_channel & 0x3f) 1290 if (info->channel != (iso->host->csr.broadcast_channel & 0x3f)
1294 || specifier_id != ETHER1394_GASP_SPECIFIER_ID) { 1291 || specifier_id != ETHER1394_GASP_SPECIFIER_ID) {
1295 /* This packet is not for us */ 1292 /* This packet is not for us */
1296 continue; 1293 continue;
1297 } 1294 }
1298 ether1394_data_handler(dev, source_id, LOCAL_BUS | ALL_NODES, 1295 ether1394_data_handler(dev, source_id, LOCAL_BUS | ALL_NODES,
1299 buf, len); 1296 buf, len);
1300 } 1297 }
1301 1298
1302 hpsb_iso_recv_release_packets(iso, i); 1299 hpsb_iso_recv_release_packets(iso, i);
1303 1300
1304 } 1301 }
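
The two quadlets skipped over as the GASP header decompose as shown below; a
sketch of the extraction done above (the 0x00005e value is RFC 2734's IANA
OUI, assumed here to be what ETHER1394_GASP_SPECIFIER_ID expands to):

    u32 q0 = be32_to_cpu(data[0]);  /* source_id:16 | specifier_id_hi:16 */
    u32 q1 = be32_to_cpu(data[1]);  /* specifier_id_lo:8 | gasp_version:24 */
    u16 source_id    = q0 >> 16;
    u32 specifier_id = (q0 & 0xffff) << 8 | q1 >> 24;  /* 0x00005e for IP1394 */
    u32 version      = q1 & 0x00ffffff;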
1305 1302
1306 /****************************************** 1303 /******************************************
1307 * Datagram transmission code 1304 * Datagram transmission code
1308 ******************************************/ 1305 ******************************************/
1309 1306
1310 /* Convert a standard ARP packet to 1394 ARP. The first 8 bytes (the entire 1307 /* Convert a standard ARP packet to 1394 ARP. The first 8 bytes (the entire
1311  * arphdr) are in the same format as the ip1394 header, so they overlap. The rest 1308  * arphdr) are in the same format as the ip1394 header, so they overlap. The rest
1312 * needs to be munged a bit. The remainder of the arphdr is formatted based 1309 * needs to be munged a bit. The remainder of the arphdr is formatted based
1313 * on hwaddr len and ipaddr len. We know what they'll be, so it's easy to 1310 * on hwaddr len and ipaddr len. We know what they'll be, so it's easy to
1314 * judge. 1311 * judge.
1315 * 1312 *
1316 * Now that the EUI is used for the hardware address all we need to do to make 1313 * Now that the EUI is used for the hardware address all we need to do to make
1317 * this work for 1394 is to insert 2 quadlets that contain max_rec size, 1314 * this work for 1394 is to insert 2 quadlets that contain max_rec size,
1318 * speed, and unicast FIFO address information between the sender_unique_id 1315 * speed, and unicast FIFO address information between the sender_unique_id
1319 * and the IP addresses. 1316 * and the IP addresses.
1320 */ 1317 */
1321 static void ether1394_arp_to_1394arp(struct sk_buff *skb, 1318 static void ether1394_arp_to_1394arp(struct sk_buff *skb,
1322 struct net_device *dev) 1319 struct net_device *dev)
1323 { 1320 {
1324 struct eth1394_priv *priv = netdev_priv(dev); 1321 struct eth1394_priv *priv = netdev_priv(dev);
1325 struct arphdr *arp = (struct arphdr *)skb->data; 1322 struct arphdr *arp = (struct arphdr *)skb->data;
1326 unsigned char *arp_ptr = (unsigned char *)(arp + 1); 1323 unsigned char *arp_ptr = (unsigned char *)(arp + 1);
1327 struct eth1394_arp *arp1394 = (struct eth1394_arp *)skb->data; 1324 struct eth1394_arp *arp1394 = (struct eth1394_arp *)skb->data;
1328 1325
1329 arp1394->hw_addr_len = 16; 1326 arp1394->hw_addr_len = 16;
1330 arp1394->sip = *(u32*)(arp_ptr + ETH1394_ALEN); 1327 arp1394->sip = *(u32*)(arp_ptr + ETH1394_ALEN);
1331 arp1394->max_rec = priv->host->csr.max_rec; 1328 arp1394->max_rec = priv->host->csr.max_rec;
1332 arp1394->sspd = priv->host->csr.lnk_spd; 1329 arp1394->sspd = priv->host->csr.lnk_spd;
1333 arp1394->fifo_hi = htons(priv->local_fifo >> 32); 1330 arp1394->fifo_hi = htons(priv->local_fifo >> 32);
1334 arp1394->fifo_lo = htonl(priv->local_fifo & ~0x0); 1331 arp1394->fifo_lo = htonl(priv->local_fifo & ~0x0);
1335 } 1332 }
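
Note how the 48-bit unicast FIFO offset is split for the ARP payload: the
upper 16 bits go out as a big-endian halfword, the lower 32 bits as a
big-endian word. A sketch with a hypothetical offset value:

    u64 fifo = 0xfffee0000000ULL;               /* example offset, made up */
    __be16 fifo_hi = htons(fifo >> 32);         /* upper 16 of the 48 bits */
    __be32 fifo_lo = htonl(fifo & 0xffffffff);  /* lower 32 bits */

(The "& ~0x0" in the driver is a no-op mask; htonl()'s implicit truncation
to 32 bits does the actual work.)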
1336 1333
1337 /* We need to encapsulate the standard header with our own. We use the 1334 /* We need to encapsulate the standard header with our own. We use the
1338 * ethernet header's proto for our own. */ 1335 * ethernet header's proto for our own. */
1339 static unsigned int ether1394_encapsulate_prep(unsigned int max_payload, 1336 static unsigned int ether1394_encapsulate_prep(unsigned int max_payload,
1340 __be16 proto, 1337 __be16 proto,
1341 union eth1394_hdr *hdr, 1338 union eth1394_hdr *hdr,
1342 u16 dg_size, u16 dgl) 1339 u16 dg_size, u16 dgl)
1343 { 1340 {
1344 unsigned int adj_max_payload = 1341 unsigned int adj_max_payload =
1345 max_payload - hdr_type_len[ETH1394_HDR_LF_UF]; 1342 max_payload - hdr_type_len[ETH1394_HDR_LF_UF];
1346 1343
1347 /* Does it all fit in one packet? */ 1344 /* Does it all fit in one packet? */
1348 if (dg_size <= adj_max_payload) { 1345 if (dg_size <= adj_max_payload) {
1349 hdr->uf.lf = ETH1394_HDR_LF_UF; 1346 hdr->uf.lf = ETH1394_HDR_LF_UF;
1350 hdr->uf.ether_type = proto; 1347 hdr->uf.ether_type = proto;
1351 } else { 1348 } else {
1352 hdr->ff.lf = ETH1394_HDR_LF_FF; 1349 hdr->ff.lf = ETH1394_HDR_LF_FF;
1353 hdr->ff.ether_type = proto; 1350 hdr->ff.ether_type = proto;
1354 hdr->ff.dg_size = dg_size - 1; 1351 hdr->ff.dg_size = dg_size - 1;
1355 hdr->ff.dgl = dgl; 1352 hdr->ff.dgl = dgl;
1356 adj_max_payload = max_payload - hdr_type_len[ETH1394_HDR_LF_FF]; 1353 adj_max_payload = max_payload - hdr_type_len[ETH1394_HDR_LF_FF];
1357 } 1354 }
1358 return DIV_ROUND_UP(dg_size, adj_max_payload); 1355 return DIV_ROUND_UP(dg_size, adj_max_payload);
1359 } 1356 }
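
A worked example of the packet-count computation, assuming RFC 2734's 4-byte
unfragmented and 8-byte fragment encapsulation headers (consistent with the
word1/word2 vs. word1..word4 stores elsewhere in this file):

    /* dg_size = 3000 bytes, max_payload = 2048:
     * UF check:  3000 > 2048 - 4        -> must fragment
     * FF/IF/LF:  adj_max_payload = 2048 - 8 = 2040
     * packets:   DIV_ROUND_UP(3000, 2040) = 2
     */

Such a datagram goes out as a first fragment carrying 2040 payload bytes and
a last fragment carrying the remaining 960.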
1360 1357
1361 static unsigned int ether1394_encapsulate(struct sk_buff *skb, 1358 static unsigned int ether1394_encapsulate(struct sk_buff *skb,
1362 unsigned int max_payload, 1359 unsigned int max_payload,
1363 union eth1394_hdr *hdr) 1360 union eth1394_hdr *hdr)
1364 { 1361 {
1365 union eth1394_hdr *bufhdr; 1362 union eth1394_hdr *bufhdr;
1366 int ftype = hdr->common.lf; 1363 int ftype = hdr->common.lf;
1367 int hdrsz = hdr_type_len[ftype]; 1364 int hdrsz = hdr_type_len[ftype];
1368 unsigned int adj_max_payload = max_payload - hdrsz; 1365 unsigned int adj_max_payload = max_payload - hdrsz;
1369 1366
1370 switch (ftype) { 1367 switch (ftype) {
1371 case ETH1394_HDR_LF_UF: 1368 case ETH1394_HDR_LF_UF:
1372 bufhdr = (union eth1394_hdr *)skb_push(skb, hdrsz); 1369 bufhdr = (union eth1394_hdr *)skb_push(skb, hdrsz);
1373 bufhdr->words.word1 = htons(hdr->words.word1); 1370 bufhdr->words.word1 = htons(hdr->words.word1);
1374 bufhdr->words.word2 = hdr->words.word2; 1371 bufhdr->words.word2 = hdr->words.word2;
1375 break; 1372 break;
1376 1373
1377 case ETH1394_HDR_LF_FF: 1374 case ETH1394_HDR_LF_FF:
1378 bufhdr = (union eth1394_hdr *)skb_push(skb, hdrsz); 1375 bufhdr = (union eth1394_hdr *)skb_push(skb, hdrsz);
1379 bufhdr->words.word1 = htons(hdr->words.word1); 1376 bufhdr->words.word1 = htons(hdr->words.word1);
1380 bufhdr->words.word2 = hdr->words.word2; 1377 bufhdr->words.word2 = hdr->words.word2;
1381 bufhdr->words.word3 = htons(hdr->words.word3); 1378 bufhdr->words.word3 = htons(hdr->words.word3);
1382 bufhdr->words.word4 = 0; 1379 bufhdr->words.word4 = 0;
1383 1380
1384 /* Set frag type here for future interior fragments */ 1381 /* Set frag type here for future interior fragments */
1385 hdr->common.lf = ETH1394_HDR_LF_IF; 1382 hdr->common.lf = ETH1394_HDR_LF_IF;
1386 hdr->sf.fg_off = 0; 1383 hdr->sf.fg_off = 0;
1387 break; 1384 break;
1388 1385
1389 default: 1386 default:
1390 hdr->sf.fg_off += adj_max_payload; 1387 hdr->sf.fg_off += adj_max_payload;
1391 bufhdr = (union eth1394_hdr *)skb_pull(skb, adj_max_payload); 1388 bufhdr = (union eth1394_hdr *)skb_pull(skb, adj_max_payload);
1392 if (max_payload >= skb->len) 1389 if (max_payload >= skb->len)
1393 hdr->common.lf = ETH1394_HDR_LF_LF; 1390 hdr->common.lf = ETH1394_HDR_LF_LF;
1394 bufhdr->words.word1 = htons(hdr->words.word1); 1391 bufhdr->words.word1 = htons(hdr->words.word1);
1395 bufhdr->words.word2 = htons(hdr->words.word2); 1392 bufhdr->words.word2 = htons(hdr->words.word2);
1396 bufhdr->words.word3 = htons(hdr->words.word3); 1393 bufhdr->words.word3 = htons(hdr->words.word3);
1397 bufhdr->words.word4 = 0; 1394 bufhdr->words.word4 = 0;
1398 } 1395 }
1399 return min(max_payload, skb->len); 1396 return min(max_payload, skb->len);
1400 } 1397 }
1401 1398
1402 static struct hpsb_packet *ether1394_alloc_common_packet(struct hpsb_host *host) 1399 static struct hpsb_packet *ether1394_alloc_common_packet(struct hpsb_host *host)
1403 { 1400 {
1404 struct hpsb_packet *p; 1401 struct hpsb_packet *p;
1405 1402
1406 p = hpsb_alloc_packet(0); 1403 p = hpsb_alloc_packet(0);
1407 if (p) { 1404 if (p) {
1408 p->host = host; 1405 p->host = host;
1409 p->generation = get_hpsb_generation(host); 1406 p->generation = get_hpsb_generation(host);
1410 p->type = hpsb_async; 1407 p->type = hpsb_async;
1411 } 1408 }
1412 return p; 1409 return p;
1413 } 1410 }
1414 1411
1415 static int ether1394_prep_write_packet(struct hpsb_packet *p, 1412 static int ether1394_prep_write_packet(struct hpsb_packet *p,
1416 struct hpsb_host *host, nodeid_t node, 1413 struct hpsb_host *host, nodeid_t node,
1417 u64 addr, void *data, int tx_len) 1414 u64 addr, void *data, int tx_len)
1418 { 1415 {
1419 p->node_id = node; 1416 p->node_id = node;
1420 1417
1421 if (hpsb_get_tlabel(p)) 1418 if (hpsb_get_tlabel(p))
1422 return -EAGAIN; 1419 return -EAGAIN;
1423 1420
1424 p->tcode = TCODE_WRITEB; 1421 p->tcode = TCODE_WRITEB;
1425 p->header_size = 16; 1422 p->header_size = 16;
1426 p->expect_response = 1; 1423 p->expect_response = 1;
1427 p->header[0] = 1424 p->header[0] =
1428 p->node_id << 16 | p->tlabel << 10 | 1 << 8 | TCODE_WRITEB << 4; 1425 p->node_id << 16 | p->tlabel << 10 | 1 << 8 | TCODE_WRITEB << 4;
1429 p->header[1] = host->node_id << 16 | addr >> 32; 1426 p->header[1] = host->node_id << 16 | addr >> 32;
1430 p->header[2] = addr & 0xffffffff; 1427 p->header[2] = addr & 0xffffffff;
1431 p->header[3] = tx_len << 16; 1428 p->header[3] = tx_len << 16;
1432 p->data_size = (tx_len + 3) & ~3; 1429 p->data_size = (tx_len + 3) & ~3;
1433 p->data = data; 1430 p->data = data;
1434 1431
1435 return 0; 1432 return 0;
1436 } 1433 }
1437 1434
1438 static void ether1394_prep_gasp_packet(struct hpsb_packet *p, 1435 static void ether1394_prep_gasp_packet(struct hpsb_packet *p,
1439 struct eth1394_priv *priv, 1436 struct eth1394_priv *priv,
1440 struct sk_buff *skb, int length) 1437 struct sk_buff *skb, int length)
1441 { 1438 {
1442 p->header_size = 4; 1439 p->header_size = 4;
1443 p->tcode = TCODE_STREAM_DATA; 1440 p->tcode = TCODE_STREAM_DATA;
1444 1441
1445 p->header[0] = length << 16 | 3 << 14 | priv->broadcast_channel << 8 | 1442 p->header[0] = length << 16 | 3 << 14 | priv->broadcast_channel << 8 |
1446 TCODE_STREAM_DATA << 4; 1443 TCODE_STREAM_DATA << 4;
1447 p->data_size = length; 1444 p->data_size = length;
1448 p->data = (quadlet_t *)skb->data - 2; 1445 p->data = (quadlet_t *)skb->data - 2;
1449 p->data[0] = cpu_to_be32(priv->host->node_id << 16 | 1446 p->data[0] = cpu_to_be32(priv->host->node_id << 16 |
1450 ETHER1394_GASP_SPECIFIER_ID_HI); 1447 ETHER1394_GASP_SPECIFIER_ID_HI);
1451 p->data[1] = cpu_to_be32(ETHER1394_GASP_SPECIFIER_ID_LO << 24 | 1448 p->data[1] = cpu_to_be32(ETHER1394_GASP_SPECIFIER_ID_LO << 24 |
1452 ETHER1394_GASP_VERSION); 1449 ETHER1394_GASP_VERSION);
1453 1450
1454 p->speed_code = priv->bc_sspd; 1451 p->speed_code = priv->bc_sspd;
1455 1452
1456 /* prevent hpsb_send_packet() from overriding our speed code */ 1453 /* prevent hpsb_send_packet() from overriding our speed code */
1457 p->node_id = LOCAL_BUS | ALL_NODES; 1454 p->node_id = LOCAL_BUS | ALL_NODES;
1458 } 1455 }
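
header[0] built above follows the IEEE 1394 stream-packet layout; reading the
shifts back out:

    /* bits 31..16  data_length  (payload incl. the two GASP quadlets)
     * bits 15..14  tag = 3      (GASP packets use tag 3 per IEEE 1394a)
     * bits 13..8   channel      (priv->broadcast_channel)
     * bits  7..4   tcode        (TCODE_STREAM_DATA)
     * bits  3..0   sy = 0
     */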
1459 1456
1460 static void ether1394_free_packet(struct hpsb_packet *packet) 1457 static void ether1394_free_packet(struct hpsb_packet *packet)
1461 { 1458 {
1462 if (packet->tcode != TCODE_STREAM_DATA) 1459 if (packet->tcode != TCODE_STREAM_DATA)
1463 hpsb_free_tlabel(packet); 1460 hpsb_free_tlabel(packet);
1464 hpsb_free_packet(packet); 1461 hpsb_free_packet(packet);
1465 } 1462 }
1466 1463
1467 static void ether1394_complete_cb(void *__ptask); 1464 static void ether1394_complete_cb(void *__ptask);
1468 1465
1469 static int ether1394_send_packet(struct packet_task *ptask, unsigned int tx_len) 1466 static int ether1394_send_packet(struct packet_task *ptask, unsigned int tx_len)
1470 { 1467 {
1471 struct eth1394_priv *priv = ptask->priv; 1468 struct eth1394_priv *priv = ptask->priv;
1472 struct hpsb_packet *packet = NULL; 1469 struct hpsb_packet *packet = NULL;
1473 1470
1474 packet = ether1394_alloc_common_packet(priv->host); 1471 packet = ether1394_alloc_common_packet(priv->host);
1475 if (!packet) 1472 if (!packet)
1476 return -ENOMEM; 1473 return -ENOMEM;
1477 1474
1478 if (ptask->tx_type == ETH1394_GASP) { 1475 if (ptask->tx_type == ETH1394_GASP) {
1479 int length = tx_len + 2 * sizeof(quadlet_t); 1476 int length = tx_len + 2 * sizeof(quadlet_t);
1480 1477
1481 ether1394_prep_gasp_packet(packet, priv, ptask->skb, length); 1478 ether1394_prep_gasp_packet(packet, priv, ptask->skb, length);
1482 } else if (ether1394_prep_write_packet(packet, priv->host, 1479 } else if (ether1394_prep_write_packet(packet, priv->host,
1483 ptask->dest_node, 1480 ptask->dest_node,
1484 ptask->addr, ptask->skb->data, 1481 ptask->addr, ptask->skb->data,
1485 tx_len)) { 1482 tx_len)) {
1486 hpsb_free_packet(packet); 1483 hpsb_free_packet(packet);
1487 return -EAGAIN; 1484 return -EAGAIN;
1488 } 1485 }
1489 1486
1490 ptask->packet = packet; 1487 ptask->packet = packet;
1491 hpsb_set_packet_complete_task(ptask->packet, ether1394_complete_cb, 1488 hpsb_set_packet_complete_task(ptask->packet, ether1394_complete_cb,
1492 ptask); 1489 ptask);
1493 1490
1494 if (hpsb_send_packet(packet) < 0) { 1491 if (hpsb_send_packet(packet) < 0) {
1495 ether1394_free_packet(packet); 1492 ether1394_free_packet(packet);
1496 return -EIO; 1493 return -EIO;
1497 } 1494 }
1498 1495
1499 return 0; 1496 return 0;
1500 } 1497 }
1501 1498
1502 /* Task function to be run when a datagram transmission is completed */ 1499 /* Task function to be run when a datagram transmission is completed */
1503 static void ether1394_dg_complete(struct packet_task *ptask, int fail) 1500 static void ether1394_dg_complete(struct packet_task *ptask, int fail)
1504 { 1501 {
1505 struct sk_buff *skb = ptask->skb; 1502 struct sk_buff *skb = ptask->skb;
1506 struct net_device *dev = skb->dev; 1503 struct net_device *dev = skb->dev;
1507 struct eth1394_priv *priv = netdev_priv(dev); 1504 struct eth1394_priv *priv = netdev_priv(dev);
1508 unsigned long flags; 1505 unsigned long flags;
1509 1506
1510 /* Statistics */ 1507 /* Statistics */
1511 spin_lock_irqsave(&priv->lock, flags); 1508 spin_lock_irqsave(&priv->lock, flags);
1512 if (fail) { 1509 if (fail) {
1513 dev->stats.tx_dropped++; 1510 dev->stats.tx_dropped++;
1514 dev->stats.tx_errors++; 1511 dev->stats.tx_errors++;
1515 } else { 1512 } else {
1516 dev->stats.tx_bytes += skb->len; 1513 dev->stats.tx_bytes += skb->len;
1517 dev->stats.tx_packets++; 1514 dev->stats.tx_packets++;
1518 } 1515 }
1519 spin_unlock_irqrestore(&priv->lock, flags); 1516 spin_unlock_irqrestore(&priv->lock, flags);
1520 1517
1521 dev_kfree_skb_any(skb); 1518 dev_kfree_skb_any(skb);
1522 kmem_cache_free(packet_task_cache, ptask); 1519 kmem_cache_free(packet_task_cache, ptask);
1523 } 1520 }
1524 1521
1525 /* Callback for when a packet has been sent and the status of that packet is 1522 /* Callback for when a packet has been sent and the status of that packet is
1526 * known */ 1523 * known */
1527 static void ether1394_complete_cb(void *__ptask) 1524 static void ether1394_complete_cb(void *__ptask)
1528 { 1525 {
1529 struct packet_task *ptask = (struct packet_task *)__ptask; 1526 struct packet_task *ptask = (struct packet_task *)__ptask;
1530 struct hpsb_packet *packet = ptask->packet; 1527 struct hpsb_packet *packet = ptask->packet;
1531 int fail = 0; 1528 int fail = 0;
1532 1529
1533 if (packet->tcode != TCODE_STREAM_DATA) 1530 if (packet->tcode != TCODE_STREAM_DATA)
1534 fail = hpsb_packet_success(packet); 1531 fail = hpsb_packet_success(packet);
1535 1532
1536 ether1394_free_packet(packet); 1533 ether1394_free_packet(packet);
1537 1534
1538 ptask->outstanding_pkts--; 1535 ptask->outstanding_pkts--;
1539 if (ptask->outstanding_pkts > 0 && !fail) { 1536 if (ptask->outstanding_pkts > 0 && !fail) {
1540 int tx_len, err; 1537 int tx_len, err;
1541 1538
1542 /* Add the encapsulation header to the fragment */ 1539 /* Add the encapsulation header to the fragment */
1543 tx_len = ether1394_encapsulate(ptask->skb, ptask->max_payload, 1540 tx_len = ether1394_encapsulate(ptask->skb, ptask->max_payload,
1544 &ptask->hdr); 1541 &ptask->hdr);
1545 err = ether1394_send_packet(ptask, tx_len); 1542 err = ether1394_send_packet(ptask, tx_len);
1546 if (err) { 1543 if (err) {
1547 if (err == -EAGAIN) 1544 if (err == -EAGAIN)
1548 ETH1394_PRINT_G(KERN_ERR, "Out of tlabels\n"); 1545 ETH1394_PRINT_G(KERN_ERR, "Out of tlabels\n");
1549 1546
1550 ether1394_dg_complete(ptask, 1); 1547 ether1394_dg_complete(ptask, 1);
1551 } 1548 }
1552 } else { 1549 } else {
1553 ether1394_dg_complete(ptask, fail); 1550 ether1394_dg_complete(ptask, fail);
1554 } 1551 }
1555 } 1552 }
1556 1553
1557 /* Transmit a packet (called by kernel) */ 1554 /* Transmit a packet (called by kernel) */
1558 static netdev_tx_t ether1394_tx(struct sk_buff *skb, 1555 static netdev_tx_t ether1394_tx(struct sk_buff *skb,
1559 struct net_device *dev) 1556 struct net_device *dev)
1560 { 1557 {
1561 struct eth1394hdr hdr_buf; 1558 struct eth1394hdr hdr_buf;
1562 struct eth1394_priv *priv = netdev_priv(dev); 1559 struct eth1394_priv *priv = netdev_priv(dev);
1563 __be16 proto; 1560 __be16 proto;
1564 unsigned long flags; 1561 unsigned long flags;
1565 nodeid_t dest_node; 1562 nodeid_t dest_node;
1566 eth1394_tx_type tx_type; 1563 eth1394_tx_type tx_type;
1567 unsigned int tx_len; 1564 unsigned int tx_len;
1568 unsigned int max_payload; 1565 unsigned int max_payload;
1569 u16 dg_size; 1566 u16 dg_size;
1570 u16 dgl; 1567 u16 dgl;
1571 struct packet_task *ptask; 1568 struct packet_task *ptask;
1572 struct eth1394_node_ref *node; 1569 struct eth1394_node_ref *node;
1573 struct eth1394_node_info *node_info = NULL; 1570 struct eth1394_node_info *node_info = NULL;
1574 1571
1575 ptask = kmem_cache_alloc(packet_task_cache, GFP_ATOMIC); 1572 ptask = kmem_cache_alloc(packet_task_cache, GFP_ATOMIC);
1576 if (ptask == NULL) 1573 if (ptask == NULL)
1577 goto fail; 1574 goto fail;
1578 1575
1579 /* XXX Ignore this for now. Noticed that when MacOSX is the IRM, 1576 /* XXX Ignore this for now. Noticed that when MacOSX is the IRM,
1580 * it does not set our validity bit. We need to compensate for 1577 * it does not set our validity bit. We need to compensate for
1581 * that somewhere else, but not in eth1394. */ 1578 * that somewhere else, but not in eth1394. */
1582 #if 0 1579 #if 0
1583 if ((priv->host->csr.broadcast_channel & 0xc0000000) != 0xc0000000) 1580 if ((priv->host->csr.broadcast_channel & 0xc0000000) != 0xc0000000)
1584 goto fail; 1581 goto fail;
1585 #endif 1582 #endif
1586 1583
1587 skb = skb_share_check(skb, GFP_ATOMIC); 1584 skb = skb_share_check(skb, GFP_ATOMIC);
1588 if (!skb) 1585 if (!skb)
1589 goto fail; 1586 goto fail;
1590 1587
1591 /* Get rid of the fake eth1394 header, but first make a copy. 1588 /* Get rid of the fake eth1394 header, but first make a copy.
1592 * We might need to rebuild the header on tx failure. */ 1589 * We might need to rebuild the header on tx failure. */
1593 memcpy(&hdr_buf, skb->data, sizeof(hdr_buf)); 1590 memcpy(&hdr_buf, skb->data, sizeof(hdr_buf));
1594 skb_pull(skb, ETH1394_HLEN); 1591 skb_pull(skb, ETH1394_HLEN);
1595 1592
1596 proto = hdr_buf.h_proto; 1593 proto = hdr_buf.h_proto;
1597 dg_size = skb->len; 1594 dg_size = skb->len;
1598 1595
1599 /* Set the transmission type for the packet. ARP packets and IP 1596 /* Set the transmission type for the packet. ARP packets and IP
1600 * broadcast packets are sent via GASP. */ 1597 * broadcast packets are sent via GASP. */
1601 if (memcmp(hdr_buf.h_dest, dev->broadcast, ETH1394_ALEN) == 0 || 1598 if (memcmp(hdr_buf.h_dest, dev->broadcast, ETH1394_ALEN) == 0 ||
1602 proto == htons(ETH_P_ARP) || 1599 proto == htons(ETH_P_ARP) ||
1603 (proto == htons(ETH_P_IP) && 1600 (proto == htons(ETH_P_IP) &&
1604 IN_MULTICAST(ntohl(ip_hdr(skb)->daddr)))) { 1601 IN_MULTICAST(ntohl(ip_hdr(skb)->daddr)))) {
1605 tx_type = ETH1394_GASP; 1602 tx_type = ETH1394_GASP;
1606 dest_node = LOCAL_BUS | ALL_NODES; 1603 dest_node = LOCAL_BUS | ALL_NODES;
1607 max_payload = priv->bc_maxpayload - ETHER1394_GASP_OVERHEAD; 1604 max_payload = priv->bc_maxpayload - ETHER1394_GASP_OVERHEAD;
1608 BUG_ON(max_payload < 512 - ETHER1394_GASP_OVERHEAD); 1605 BUG_ON(max_payload < 512 - ETHER1394_GASP_OVERHEAD);
1609 dgl = priv->bc_dgl; 1606 dgl = priv->bc_dgl;
1610 if (max_payload < dg_size + hdr_type_len[ETH1394_HDR_LF_UF]) 1607 if (max_payload < dg_size + hdr_type_len[ETH1394_HDR_LF_UF])
1611 priv->bc_dgl++; 1608 priv->bc_dgl++;
1612 } else { 1609 } else {
1613 __be64 guid = get_unaligned((__be64 *)hdr_buf.h_dest); 1610 __be64 guid = get_unaligned((__be64 *)hdr_buf.h_dest);
1614 1611
1615 node = eth1394_find_node_guid(&priv->ip_node_list, 1612 node = eth1394_find_node_guid(&priv->ip_node_list,
1616 be64_to_cpu(guid)); 1613 be64_to_cpu(guid));
1617 if (!node) 1614 if (!node)
1618 goto fail; 1615 goto fail;
1619 1616
1620 node_info = dev_get_drvdata(&node->ud->device); 1617 node_info = dev_get_drvdata(&node->ud->device);
1621 if (node_info->fifo == CSR1212_INVALID_ADDR_SPACE) 1618 if (node_info->fifo == CSR1212_INVALID_ADDR_SPACE)
1622 goto fail; 1619 goto fail;
1623 1620
1624 dest_node = node->ud->ne->nodeid; 1621 dest_node = node->ud->ne->nodeid;
1625 max_payload = node_info->maxpayload; 1622 max_payload = node_info->maxpayload;
1626 BUG_ON(max_payload < 512 - ETHER1394_GASP_OVERHEAD); 1623 BUG_ON(max_payload < 512 - ETHER1394_GASP_OVERHEAD);
1627 1624
1628 dgl = node_info->dgl; 1625 dgl = node_info->dgl;
1629 if (max_payload < dg_size + hdr_type_len[ETH1394_HDR_LF_UF]) 1626 if (max_payload < dg_size + hdr_type_len[ETH1394_HDR_LF_UF])
1630 node_info->dgl++; 1627 node_info->dgl++;
1631 tx_type = ETH1394_WRREQ; 1628 tx_type = ETH1394_WRREQ;
1632 } 1629 }
1633 1630
1634 /* If this is an ARP packet, convert it */ 1631 /* If this is an ARP packet, convert it */
1635 if (proto == htons(ETH_P_ARP)) 1632 if (proto == htons(ETH_P_ARP))
1636 ether1394_arp_to_1394arp(skb, dev); 1633 ether1394_arp_to_1394arp(skb, dev);
1637 1634
1638 ptask->hdr.words.word1 = 0; 1635 ptask->hdr.words.word1 = 0;
1639 ptask->hdr.words.word2 = 0; 1636 ptask->hdr.words.word2 = 0;
1640 ptask->hdr.words.word3 = 0; 1637 ptask->hdr.words.word3 = 0;
1641 ptask->hdr.words.word4 = 0; 1638 ptask->hdr.words.word4 = 0;
1642 ptask->skb = skb; 1639 ptask->skb = skb;
1643 ptask->priv = priv; 1640 ptask->priv = priv;
1644 ptask->tx_type = tx_type; 1641 ptask->tx_type = tx_type;
1645 1642
1646 if (tx_type != ETH1394_GASP) { 1643 if (tx_type != ETH1394_GASP) {
1647 u64 addr; 1644 u64 addr;
1648 1645
1649 spin_lock_irqsave(&priv->lock, flags); 1646 spin_lock_irqsave(&priv->lock, flags);
1650 addr = node_info->fifo; 1647 addr = node_info->fifo;
1651 spin_unlock_irqrestore(&priv->lock, flags); 1648 spin_unlock_irqrestore(&priv->lock, flags);
1652 1649
1653 ptask->addr = addr; 1650 ptask->addr = addr;
1654 ptask->dest_node = dest_node; 1651 ptask->dest_node = dest_node;
1655 } 1652 }
1656 1653
1657 ptask->tx_type = tx_type; 1654 ptask->tx_type = tx_type;
1658 ptask->max_payload = max_payload; 1655 ptask->max_payload = max_payload;
1659 ptask->outstanding_pkts = ether1394_encapsulate_prep(max_payload, 1656 ptask->outstanding_pkts = ether1394_encapsulate_prep(max_payload,
1660 proto, &ptask->hdr, dg_size, dgl); 1657 proto, &ptask->hdr, dg_size, dgl);
1661 1658
1662 /* Add the encapsulation header to the fragment */ 1659 /* Add the encapsulation header to the fragment */
1663 tx_len = ether1394_encapsulate(skb, max_payload, &ptask->hdr); 1660 tx_len = ether1394_encapsulate(skb, max_payload, &ptask->hdr);
1664 dev->trans_start = jiffies; 1661 dev->trans_start = jiffies;
1665 if (ether1394_send_packet(ptask, tx_len)) { 1662 if (ether1394_send_packet(ptask, tx_len)) {
1666 if (dest_node == (LOCAL_BUS | ALL_NODES)) 1663 if (dest_node == (LOCAL_BUS | ALL_NODES))
1667 goto fail; 1664 goto fail;
1668 1665
1669 /* At this point we want to restore the packet. When we return 1666 /* At this point we want to restore the packet. When we return
1670 		 * NETDEV_TX_BUSY here, this routine will be entered again with 1667 		 * NETDEV_TX_BUSY here, this routine will be entered again with
1671 		 * the same skb, and we need it to look the same. 1668 		 * the same skb, and we need it to look the same.
1672 * So we pull 4 more bytes, then build the header again. */ 1669 * So we pull 4 more bytes, then build the header again. */
1673 skb_pull(skb, 4); 1670 skb_pull(skb, 4);
1674 ether1394_header(skb, dev, ntohs(hdr_buf.h_proto), 1671 ether1394_header(skb, dev, ntohs(hdr_buf.h_proto),
1675 hdr_buf.h_dest, NULL, 0); 1672 hdr_buf.h_dest, NULL, 0);
1676 1673
1677 /* Most failures of ether1394_send_packet are recoverable. */ 1674 /* Most failures of ether1394_send_packet are recoverable. */
1678 netif_stop_queue(dev); 1675 netif_stop_queue(dev);
1679 priv->wake_node = dest_node; 1676 priv->wake_node = dest_node;
1680 schedule_work(&priv->wake); 1677 schedule_work(&priv->wake);
1681 kmem_cache_free(packet_task_cache, ptask); 1678 kmem_cache_free(packet_task_cache, ptask);
1682 return NETDEV_TX_BUSY; 1679 return NETDEV_TX_BUSY;
1683 } 1680 }
1684 1681
1685 return NETDEV_TX_OK; 1682 return NETDEV_TX_OK;
1686 fail: 1683 fail:
1687 if (ptask) 1684 if (ptask)
1688 kmem_cache_free(packet_task_cache, ptask); 1685 kmem_cache_free(packet_task_cache, ptask);
1689 1686
1690 if (skb != NULL) 1687 if (skb != NULL)
1691 dev_kfree_skb(skb); 1688 dev_kfree_skb(skb);
1692 1689
1693 spin_lock_irqsave(&priv->lock, flags); 1690 spin_lock_irqsave(&priv->lock, flags);
1694 dev->stats.tx_dropped++; 1691 dev->stats.tx_dropped++;
1695 dev->stats.tx_errors++; 1692 dev->stats.tx_errors++;
1696 spin_unlock_irqrestore(&priv->lock, flags); 1693 spin_unlock_irqrestore(&priv->lock, flags);
1697 1694
1698 return NETDEV_TX_OK; 1695 return NETDEV_TX_OK;
1699 } 1696 }
1700 1697
1701 static void ether1394_get_drvinfo(struct net_device *dev, 1698 static void ether1394_get_drvinfo(struct net_device *dev,
1702 struct ethtool_drvinfo *info) 1699 struct ethtool_drvinfo *info)
1703 { 1700 {
1704 strcpy(info->driver, driver_name); 1701 strcpy(info->driver, driver_name);
1705 strcpy(info->bus_info, "ieee1394"); /* FIXME provide more detail? */ 1702 strcpy(info->bus_info, "ieee1394"); /* FIXME provide more detail? */
1706 } 1703 }
1707 1704
1708 static const struct ethtool_ops ethtool_ops = { 1705 static const struct ethtool_ops ethtool_ops = {
1709 .get_drvinfo = ether1394_get_drvinfo 1706 .get_drvinfo = ether1394_get_drvinfo
1710 }; 1707 };
1711 1708
1712 static int __init ether1394_init_module(void) 1709 static int __init ether1394_init_module(void)
1713 { 1710 {
1714 int err; 1711 int err;
1715 1712
1716 packet_task_cache = kmem_cache_create("packet_task", 1713 packet_task_cache = kmem_cache_create("packet_task",
1717 sizeof(struct packet_task), 1714 sizeof(struct packet_task),
1718 0, 0, NULL); 1715 0, 0, NULL);
1719 if (!packet_task_cache) 1716 if (!packet_task_cache)
1720 return -ENOMEM; 1717 return -ENOMEM;
1721 1718
1722 hpsb_register_highlevel(&eth1394_highlevel); 1719 hpsb_register_highlevel(&eth1394_highlevel);
1723 err = hpsb_register_protocol(&eth1394_proto_driver); 1720 err = hpsb_register_protocol(&eth1394_proto_driver);
1724 if (err) { 1721 if (err) {
1725 hpsb_unregister_highlevel(&eth1394_highlevel); 1722 hpsb_unregister_highlevel(&eth1394_highlevel);
1726 kmem_cache_destroy(packet_task_cache); 1723 kmem_cache_destroy(packet_task_cache);
1727 } 1724 }
1728 return err; 1725 return err;
1729 } 1726 }
1730 1727
1731 static void __exit ether1394_exit_module(void) 1728 static void __exit ether1394_exit_module(void)
1732 { 1729 {
1733 hpsb_unregister_protocol(&eth1394_proto_driver); 1730 hpsb_unregister_protocol(&eth1394_proto_driver);
1734 hpsb_unregister_highlevel(&eth1394_highlevel); 1731 hpsb_unregister_highlevel(&eth1394_highlevel);
1735 kmem_cache_destroy(packet_task_cache); 1732 kmem_cache_destroy(packet_task_cache);
1736 } 1733 }
1737 1734
1738 module_init(ether1394_init_module); 1735 module_init(ether1394_init_module);
1739 module_exit(ether1394_exit_module); 1736 module_exit(ether1394_exit_module);
1740 1737
drivers/ieee1394/raw1394.c
1 /* 1 /*
2 * IEEE 1394 for Linux 2 * IEEE 1394 for Linux
3 * 3 *
4 * Raw interface to the bus 4 * Raw interface to the bus
5 * 5 *
6 * Copyright (C) 1999, 2000 Andreas E. Bombe 6 * Copyright (C) 1999, 2000 Andreas E. Bombe
7 * 2001, 2002 Manfred Weihs <weihs@ict.tuwien.ac.at> 7 * 2001, 2002 Manfred Weihs <weihs@ict.tuwien.ac.at>
8 * 2002 Christian Toegel <christian.toegel@gmx.at> 8 * 2002 Christian Toegel <christian.toegel@gmx.at>
9 * 9 *
10 * This code is licensed under the GPL. See the file COPYING in the root 10 * This code is licensed under the GPL. See the file COPYING in the root
11 * directory of the kernel sources for details. 11 * directory of the kernel sources for details.
12 * 12 *
13 * 13 *
14 * Contributions: 14 * Contributions:
15 * 15 *
16 * Manfred Weihs <weihs@ict.tuwien.ac.at> 16 * Manfred Weihs <weihs@ict.tuwien.ac.at>
17 * configuration ROM manipulation 17 * configuration ROM manipulation
18 * address range mapping 18 * address range mapping
19 * adaptation for new (transparent) loopback mechanism 19 * adaptation for new (transparent) loopback mechanism
20 * sending of arbitrary async packets 20 * sending of arbitrary async packets
21 * Christian Toegel <christian.toegel@gmx.at> 21 * Christian Toegel <christian.toegel@gmx.at>
22 * address range mapping 22 * address range mapping
23 * lock64 request 23 * lock64 request
24 * transmit physical packet 24 * transmit physical packet
25 * busreset notification control (switch on/off) 25 * busreset notification control (switch on/off)
26 * busreset with selection of type (short/long) 26 * busreset with selection of type (short/long)
27 * request_reply 27 * request_reply
28 */ 28 */
29 29
30 #include <linux/kernel.h> 30 #include <linux/kernel.h>
31 #include <linux/list.h> 31 #include <linux/list.h>
32 #include <linux/sched.h> 32 #include <linux/sched.h>
33 #include <linux/string.h> 33 #include <linux/string.h>
34 #include <linux/slab.h> 34 #include <linux/slab.h>
35 #include <linux/fs.h> 35 #include <linux/fs.h>
36 #include <linux/poll.h> 36 #include <linux/poll.h>
37 #include <linux/module.h> 37 #include <linux/module.h>
38 #include <linux/mutex.h> 38 #include <linux/mutex.h>
39 #include <linux/init.h> 39 #include <linux/init.h>
40 #include <linux/interrupt.h> 40 #include <linux/interrupt.h>
41 #include <linux/vmalloc.h> 41 #include <linux/vmalloc.h>
42 #include <linux/cdev.h> 42 #include <linux/cdev.h>
43 #include <asm/uaccess.h> 43 #include <asm/uaccess.h>
44 #include <asm/atomic.h> 44 #include <asm/atomic.h>
45 #include <linux/compat.h> 45 #include <linux/compat.h>
46 46
47 #include "csr1212.h" 47 #include "csr1212.h"
48 #include "highlevel.h" 48 #include "highlevel.h"
49 #include "hosts.h" 49 #include "hosts.h"
50 #include "ieee1394.h" 50 #include "ieee1394.h"
51 #include "ieee1394_core.h" 51 #include "ieee1394_core.h"
52 #include "ieee1394_hotplug.h" 52 #include "ieee1394_hotplug.h"
53 #include "ieee1394_transactions.h" 53 #include "ieee1394_transactions.h"
54 #include "ieee1394_types.h" 54 #include "ieee1394_types.h"
55 #include "iso.h" 55 #include "iso.h"
56 #include "nodemgr.h" 56 #include "nodemgr.h"
57 #include "raw1394.h" 57 #include "raw1394.h"
58 #include "raw1394-private.h" 58 #include "raw1394-private.h"
59 59
60 #define int2ptr(x) ((void __user *)(unsigned long)x) 60 #define int2ptr(x) ((void __user *)(unsigned long)x)
61 #define ptr2int(x) ((u64)(unsigned long)(void __user *)x) 61 #define ptr2int(x) ((u64)(unsigned long)(void __user *)x)
62 62
63 #ifdef CONFIG_IEEE1394_VERBOSEDEBUG 63 #ifdef CONFIG_IEEE1394_VERBOSEDEBUG
64 #define RAW1394_DEBUG 64 #define RAW1394_DEBUG
65 #endif 65 #endif
66 66
67 #ifdef RAW1394_DEBUG 67 #ifdef RAW1394_DEBUG
68 #define DBGMSG(fmt, args...) \ 68 #define DBGMSG(fmt, args...) \
69 printk(KERN_INFO "raw1394:" fmt "\n" , ## args) 69 printk(KERN_INFO "raw1394:" fmt "\n" , ## args)
70 #else 70 #else
71 #define DBGMSG(fmt, args...) do {} while (0) 71 #define DBGMSG(fmt, args...) do {} while (0)
72 #endif 72 #endif
73 73
74 static LIST_HEAD(host_info_list); 74 static LIST_HEAD(host_info_list);
75 static int host_count; 75 static int host_count;
76 static DEFINE_SPINLOCK(host_info_lock); 76 static DEFINE_SPINLOCK(host_info_lock);
77 static atomic_t internal_generation = ATOMIC_INIT(0); 77 static atomic_t internal_generation = ATOMIC_INIT(0);
78 78
79 static atomic_t iso_buffer_size; 79 static atomic_t iso_buffer_size;
80 static const int iso_buffer_max = 4 * 1024 * 1024; /* 4 MB */ 80 static const int iso_buffer_max = 4 * 1024 * 1024; /* 4 MB */
81 81
82 static struct hpsb_highlevel raw1394_highlevel; 82 static struct hpsb_highlevel raw1394_highlevel;
83 83
84 static int arm_read(struct hpsb_host *host, int nodeid, quadlet_t * buffer, 84 static int arm_read(struct hpsb_host *host, int nodeid, quadlet_t * buffer,
85 u64 addr, size_t length, u16 flags); 85 u64 addr, size_t length, u16 flags);
86 static int arm_write(struct hpsb_host *host, int nodeid, int destid, 86 static int arm_write(struct hpsb_host *host, int nodeid, int destid,
87 quadlet_t * data, u64 addr, size_t length, u16 flags); 87 quadlet_t * data, u64 addr, size_t length, u16 flags);
88 static int arm_lock(struct hpsb_host *host, int nodeid, quadlet_t * store, 88 static int arm_lock(struct hpsb_host *host, int nodeid, quadlet_t * store,
89 u64 addr, quadlet_t data, quadlet_t arg, int ext_tcode, 89 u64 addr, quadlet_t data, quadlet_t arg, int ext_tcode,
90 u16 flags); 90 u16 flags);
91 static int arm_lock64(struct hpsb_host *host, int nodeid, octlet_t * store, 91 static int arm_lock64(struct hpsb_host *host, int nodeid, octlet_t * store,
92 u64 addr, octlet_t data, octlet_t arg, int ext_tcode, 92 u64 addr, octlet_t data, octlet_t arg, int ext_tcode,
93 u16 flags); 93 u16 flags);
94 static const struct hpsb_address_ops arm_ops = { 94 static const struct hpsb_address_ops arm_ops = {
95 .read = arm_read, 95 .read = arm_read,
96 .write = arm_write, 96 .write = arm_write,
97 .lock = arm_lock, 97 .lock = arm_lock,
98 .lock64 = arm_lock64, 98 .lock64 = arm_lock64,
99 }; 99 };
100 100
101 static void queue_complete_cb(struct pending_request *req); 101 static void queue_complete_cb(struct pending_request *req);
102 102
103 static struct pending_request *__alloc_pending_request(gfp_t flags) 103 static struct pending_request *__alloc_pending_request(gfp_t flags)
104 { 104 {
105 struct pending_request *req; 105 struct pending_request *req;
106 106
107 req = kzalloc(sizeof(*req), flags); 107 req = kzalloc(sizeof(*req), flags);
108 if (req) 108 if (req)
109 INIT_LIST_HEAD(&req->list); 109 INIT_LIST_HEAD(&req->list);
110 110
111 return req; 111 return req;
112 } 112 }
113 113
114 static inline struct pending_request *alloc_pending_request(void) 114 static inline struct pending_request *alloc_pending_request(void)
115 { 115 {
116 return __alloc_pending_request(GFP_KERNEL); 116 return __alloc_pending_request(GFP_KERNEL);
117 } 117 }
118 118
119 static void free_pending_request(struct pending_request *req) 119 static void free_pending_request(struct pending_request *req)
120 { 120 {
121 if (req->ibs) { 121 if (req->ibs) {
122 if (atomic_dec_and_test(&req->ibs->refcount)) { 122 if (atomic_dec_and_test(&req->ibs->refcount)) {
123 atomic_sub(req->ibs->data_size, &iso_buffer_size); 123 atomic_sub(req->ibs->data_size, &iso_buffer_size);
124 kfree(req->ibs); 124 kfree(req->ibs);
125 } 125 }
126 } else if (req->free_data) { 126 } else if (req->free_data) {
127 kfree(req->data); 127 kfree(req->data);
128 } 128 }
129 hpsb_free_packet(req->packet); 129 hpsb_free_packet(req->packet);
130 kfree(req); 130 kfree(req);
131 } 131 }
132 132
133 /* fi->reqlists_lock must be taken */ 133 /* fi->reqlists_lock must be taken */
134 static void __queue_complete_req(struct pending_request *req) 134 static void __queue_complete_req(struct pending_request *req)
135 { 135 {
136 struct file_info *fi = req->file_info; 136 struct file_info *fi = req->file_info;
137 137
138 list_move_tail(&req->list, &fi->req_complete); 138 list_move_tail(&req->list, &fi->req_complete);
139 wake_up(&fi->wait_complete); 139 wake_up(&fi->wait_complete);
140 } 140 }
141 141
142 static void queue_complete_req(struct pending_request *req) 142 static void queue_complete_req(struct pending_request *req)
143 { 143 {
144 unsigned long flags; 144 unsigned long flags;
145 struct file_info *fi = req->file_info; 145 struct file_info *fi = req->file_info;
146 146
147 spin_lock_irqsave(&fi->reqlists_lock, flags); 147 spin_lock_irqsave(&fi->reqlists_lock, flags);
148 __queue_complete_req(req); 148 __queue_complete_req(req);
149 spin_unlock_irqrestore(&fi->reqlists_lock, flags); 149 spin_unlock_irqrestore(&fi->reqlists_lock, flags);
150 } 150 }
151 151
152 static void queue_complete_cb(struct pending_request *req) 152 static void queue_complete_cb(struct pending_request *req)
153 { 153 {
154 struct hpsb_packet *packet = req->packet; 154 struct hpsb_packet *packet = req->packet;
155 int rcode = (packet->header[1] >> 12) & 0xf; 155 int rcode = (packet->header[1] >> 12) & 0xf;
156 156
157 switch (packet->ack_code) { 157 switch (packet->ack_code) {
158 case ACKX_NONE: 158 case ACKX_NONE:
159 case ACKX_SEND_ERROR: 159 case ACKX_SEND_ERROR:
160 req->req.error = RAW1394_ERROR_SEND_ERROR; 160 req->req.error = RAW1394_ERROR_SEND_ERROR;
161 break; 161 break;
162 case ACKX_ABORTED: 162 case ACKX_ABORTED:
163 req->req.error = RAW1394_ERROR_ABORTED; 163 req->req.error = RAW1394_ERROR_ABORTED;
164 break; 164 break;
165 case ACKX_TIMEOUT: 165 case ACKX_TIMEOUT:
166 req->req.error = RAW1394_ERROR_TIMEOUT; 166 req->req.error = RAW1394_ERROR_TIMEOUT;
167 break; 167 break;
168 default: 168 default:
169 req->req.error = (packet->ack_code << 16) | rcode; 169 req->req.error = (packet->ack_code << 16) | rcode;
170 break; 170 break;
171 } 171 }
172 172
173 if (!((packet->ack_code == ACK_PENDING) && (rcode == RCODE_COMPLETE))) { 173 if (!((packet->ack_code == ACK_PENDING) && (rcode == RCODE_COMPLETE))) {
174 req->req.length = 0; 174 req->req.length = 0;
175 } 175 }
176 176
177 if ((req->req.type == RAW1394_REQ_ASYNC_READ) || 177 if ((req->req.type == RAW1394_REQ_ASYNC_READ) ||
178 (req->req.type == RAW1394_REQ_ASYNC_WRITE) || 178 (req->req.type == RAW1394_REQ_ASYNC_WRITE) ||
179 (req->req.type == RAW1394_REQ_ASYNC_STREAM) || 179 (req->req.type == RAW1394_REQ_ASYNC_STREAM) ||
180 (req->req.type == RAW1394_REQ_LOCK) || 180 (req->req.type == RAW1394_REQ_LOCK) ||
181 (req->req.type == RAW1394_REQ_LOCK64)) 181 (req->req.type == RAW1394_REQ_LOCK64))
182 hpsb_free_tlabel(packet); 182 hpsb_free_tlabel(packet);
183 183
184 queue_complete_req(req); 184 queue_complete_req(req);
185 } 185 }
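
The default case packs both status values into one word for user space.
Illustrative accessors (hypothetical names, not part of the raw1394 ABI):

    static inline int raw1394_error_ack(u32 error)   { return error >> 16; }
    static inline int raw1394_error_rcode(u32 error) { return error & 0xffff; }

A client can apply the same split to req.error whenever it is not one of the
RAW1394_ERROR_* constants handled explicitly above.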
186 186
187 static void add_host(struct hpsb_host *host) 187 static void add_host(struct hpsb_host *host)
188 { 188 {
189 struct host_info *hi; 189 struct host_info *hi;
190 unsigned long flags; 190 unsigned long flags;
191 191
192 hi = kmalloc(sizeof(*hi), GFP_KERNEL); 192 hi = kmalloc(sizeof(*hi), GFP_KERNEL);
193 193
194 if (hi) { 194 if (hi) {
195 INIT_LIST_HEAD(&hi->list); 195 INIT_LIST_HEAD(&hi->list);
196 hi->host = host; 196 hi->host = host;
197 INIT_LIST_HEAD(&hi->file_info_list); 197 INIT_LIST_HEAD(&hi->file_info_list);
198 198
199 spin_lock_irqsave(&host_info_lock, flags); 199 spin_lock_irqsave(&host_info_lock, flags);
200 list_add_tail(&hi->list, &host_info_list); 200 list_add_tail(&hi->list, &host_info_list);
201 host_count++; 201 host_count++;
202 spin_unlock_irqrestore(&host_info_lock, flags); 202 spin_unlock_irqrestore(&host_info_lock, flags);
203 } 203 }
204 204
205 atomic_inc(&internal_generation); 205 atomic_inc(&internal_generation);
206 } 206 }
207 207
208 static struct host_info *find_host_info(struct hpsb_host *host) 208 static struct host_info *find_host_info(struct hpsb_host *host)
209 { 209 {
210 struct host_info *hi; 210 struct host_info *hi;
211 211
212 list_for_each_entry(hi, &host_info_list, list) 212 list_for_each_entry(hi, &host_info_list, list)
213 if (hi->host == host) 213 if (hi->host == host)
214 return hi; 214 return hi;
215 215
216 return NULL; 216 return NULL;
217 } 217 }
218 218
219 static void remove_host(struct hpsb_host *host) 219 static void remove_host(struct hpsb_host *host)
220 { 220 {
221 struct host_info *hi; 221 struct host_info *hi;
222 unsigned long flags; 222 unsigned long flags;
223 223
224 spin_lock_irqsave(&host_info_lock, flags); 224 spin_lock_irqsave(&host_info_lock, flags);
225 hi = find_host_info(host); 225 hi = find_host_info(host);
226 226
227 if (hi != NULL) { 227 if (hi != NULL) {
228 list_del(&hi->list); 228 list_del(&hi->list);
229 host_count--; 229 host_count--;
230 /* 230 /*
231 FIXME: address ranges should be removed 231 FIXME: address ranges should be removed
232 and fileinfo states should be initialized 232 and fileinfo states should be initialized
233 (including setting generation to 233 (including setting generation to
234 internal-generation ...) 234 internal-generation ...)
235 */ 235 */
236 } 236 }
237 spin_unlock_irqrestore(&host_info_lock, flags); 237 spin_unlock_irqrestore(&host_info_lock, flags);
238 238
239 if (hi == NULL) { 239 if (hi == NULL) {
240 printk(KERN_ERR "raw1394: attempt to remove unknown host " 240 printk(KERN_ERR "raw1394: attempt to remove unknown host "
241 "0x%p\n", host); 241 "0x%p\n", host);
242 return; 242 return;
243 } 243 }
244 244
245 kfree(hi); 245 kfree(hi);
246 246
247 atomic_inc(&internal_generation); 247 atomic_inc(&internal_generation);
248 } 248 }
249 249
250 static void host_reset(struct hpsb_host *host) 250 static void host_reset(struct hpsb_host *host)
251 { 251 {
252 unsigned long flags; 252 unsigned long flags;
253 struct host_info *hi; 253 struct host_info *hi;
254 struct file_info *fi; 254 struct file_info *fi;
255 struct pending_request *req; 255 struct pending_request *req;
256 256
257 spin_lock_irqsave(&host_info_lock, flags); 257 spin_lock_irqsave(&host_info_lock, flags);
258 hi = find_host_info(host); 258 hi = find_host_info(host);
259 259
260 if (hi != NULL) { 260 if (hi != NULL) {
261 list_for_each_entry(fi, &hi->file_info_list, list) { 261 list_for_each_entry(fi, &hi->file_info_list, list) {
262 if (fi->notification == RAW1394_NOTIFY_ON) { 262 if (fi->notification == RAW1394_NOTIFY_ON) {
263 req = __alloc_pending_request(GFP_ATOMIC); 263 req = __alloc_pending_request(GFP_ATOMIC);
264 264
265 if (req != NULL) { 265 if (req != NULL) {
266 req->file_info = fi; 266 req->file_info = fi;
267 req->req.type = RAW1394_REQ_BUS_RESET; 267 req->req.type = RAW1394_REQ_BUS_RESET;
268 req->req.generation = 268 req->req.generation =
269 get_hpsb_generation(host); 269 get_hpsb_generation(host);
270 req->req.misc = (host->node_id << 16) 270 req->req.misc = (host->node_id << 16)
271 | host->node_count; 271 | host->node_count;
272 if (fi->protocol_version > 3) { 272 if (fi->protocol_version > 3) {
273 req->req.misc |= 273 req->req.misc |=
274 (NODEID_TO_NODE 274 (NODEID_TO_NODE
275 (host->irm_id) 275 (host->irm_id)
276 << 8); 276 << 8);
277 } 277 }
278 278
279 queue_complete_req(req); 279 queue_complete_req(req);
280 } 280 }
281 } 281 }
282 } 282 }
283 } 283 }
284 spin_unlock_irqrestore(&host_info_lock, flags); 284 spin_unlock_irqrestore(&host_info_lock, flags);
285 } 285 }
286 286
287 static void fcp_request(struct hpsb_host *host, int nodeid, int direction, 287 static void fcp_request(struct hpsb_host *host, int nodeid, int direction,
288 int cts, u8 * data, size_t length) 288 int cts, u8 * data, size_t length)
289 { 289 {
290 unsigned long flags; 290 unsigned long flags;
291 struct host_info *hi; 291 struct host_info *hi;
292 struct file_info *fi; 292 struct file_info *fi;
293 struct pending_request *req, *req_next; 293 struct pending_request *req, *req_next;
294 struct iso_block_store *ibs = NULL; 294 struct iso_block_store *ibs = NULL;
295 LIST_HEAD(reqs); 295 LIST_HEAD(reqs);
296 296
297 if ((atomic_read(&iso_buffer_size) + length) > iso_buffer_max) { 297 if ((atomic_read(&iso_buffer_size) + length) > iso_buffer_max) {
298 HPSB_INFO("dropped fcp request"); 298 HPSB_INFO("dropped fcp request");
299 return; 299 return;
300 } 300 }
301 301
302 spin_lock_irqsave(&host_info_lock, flags); 302 spin_lock_irqsave(&host_info_lock, flags);
303 hi = find_host_info(host); 303 hi = find_host_info(host);
304 304
305 if (hi != NULL) { 305 if (hi != NULL) {
306 list_for_each_entry(fi, &hi->file_info_list, list) { 306 list_for_each_entry(fi, &hi->file_info_list, list) {
307 if (!fi->fcp_buffer) 307 if (!fi->fcp_buffer)
308 continue; 308 continue;
309 309
310 req = __alloc_pending_request(GFP_ATOMIC); 310 req = __alloc_pending_request(GFP_ATOMIC);
311 if (!req) 311 if (!req)
312 break; 312 break;
313 313
314 if (!ibs) { 314 if (!ibs) {
315 ibs = kmalloc(sizeof(*ibs) + length, 315 ibs = kmalloc(sizeof(*ibs) + length,
316 GFP_ATOMIC); 316 GFP_ATOMIC);
317 if (!ibs) { 317 if (!ibs) {
318 kfree(req); 318 kfree(req);
319 break; 319 break;
320 } 320 }
321 321
322 atomic_add(length, &iso_buffer_size); 322 atomic_add(length, &iso_buffer_size);
323 atomic_set(&ibs->refcount, 0); 323 atomic_set(&ibs->refcount, 0);
324 ibs->data_size = length; 324 ibs->data_size = length;
325 memcpy(ibs->data, data, length); 325 memcpy(ibs->data, data, length);
326 } 326 }
327 327
328 atomic_inc(&ibs->refcount); 328 atomic_inc(&ibs->refcount);
329 329
330 req->file_info = fi; 330 req->file_info = fi;
331 req->ibs = ibs; 331 req->ibs = ibs;
332 req->data = ibs->data; 332 req->data = ibs->data;
333 req->req.type = RAW1394_REQ_FCP_REQUEST; 333 req->req.type = RAW1394_REQ_FCP_REQUEST;
334 req->req.generation = get_hpsb_generation(host); 334 req->req.generation = get_hpsb_generation(host);
335 req->req.misc = nodeid | (direction << 16); 335 req->req.misc = nodeid | (direction << 16);
336 req->req.recvb = ptr2int(fi->fcp_buffer); 336 req->req.recvb = ptr2int(fi->fcp_buffer);
337 req->req.length = length; 337 req->req.length = length;
338 338
339 list_add_tail(&req->list, &reqs); 339 list_add_tail(&req->list, &reqs);
340 } 340 }
341 } 341 }
342 spin_unlock_irqrestore(&host_info_lock, flags); 342 spin_unlock_irqrestore(&host_info_lock, flags);
343 343
344 list_for_each_entry_safe(req, req_next, &reqs, list) 344 list_for_each_entry_safe(req, req_next, &reqs, list)
345 queue_complete_req(req); 345 queue_complete_req(req);
346 } 346 }
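fcp_request() copies the FCP payload exactly once into an iso_block_store and then hands every listening client a reference to the same allocation, bumping ibs->refcount per queued request (the count starts at zero here; the matching drop lives in the request teardown path). The same copy-once/share-many shape in plain userspace C, using the more conventional start-at-one refcount, as a sketch:

#include <stdatomic.h>
#include <stdlib.h>
#include <string.h>

/* One allocation carries the refcount, the length, and the payload,
 * like iso_block_store; consumers share it instead of copying. */
struct shared_blob {
	atomic_int refcount;
	size_t size;
	unsigned char data[];		/* C99 flexible array member */
};

static struct shared_blob *blob_new(const void *src, size_t len)
{
	struct shared_blob *b = malloc(sizeof(*b) + len);

	if (!b)
		return NULL;
	atomic_init(&b->refcount, 1);
	b->size = len;
	memcpy(b->data, src, len);
	return b;
}

static void blob_get(struct shared_blob *b)
{
	atomic_fetch_add(&b->refcount, 1);
}

static void blob_put(struct shared_blob *b)
{
	if (atomic_fetch_sub(&b->refcount, 1) == 1)
		free(b);		/* last reference gone */
}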
347 347
348 #ifdef CONFIG_COMPAT 348 #ifdef CONFIG_COMPAT
349 struct compat_raw1394_req { 349 struct compat_raw1394_req {
350 __u32 type; 350 __u32 type;
351 __s32 error; 351 __s32 error;
352 __u32 misc; 352 __u32 misc;
353 353
354 __u32 generation; 354 __u32 generation;
355 __u32 length; 355 __u32 length;
356 356
357 __u64 address; 357 __u64 address;
358 358
359 __u64 tag; 359 __u64 tag;
360 360
361 __u64 sendb; 361 __u64 sendb;
362 __u64 recvb; 362 __u64 recvb;
363 } 363 }
364 #if defined(CONFIG_X86_64) || defined(CONFIG_IA64) 364 #if defined(CONFIG_X86_64) || defined(CONFIG_IA64)
365 __attribute__((packed)) 365 __attribute__((packed))
366 #endif 366 #endif
367 ; 367 ;
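The conditional __attribute__((packed)) exists because the 32-bit and 64-bit ABIs disagree on the alignment of __u64: the five leading 32-bit fields end at offset 20, which is where i386 places address, but x86-64 and ia64 would pad it out to offset 24 unless the struct is packed. A compile-it-twice sketch that makes the difference visible (build with and without -DPACKED on a 64-bit machine):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#ifdef PACKED
#define ATTR __attribute__((packed))
#else
#define ATTR
#endif

/* Same shape as compat_raw1394_req: five 32-bit words, then four
 * 64-bit ones. */
struct req {
	uint32_t type, error, misc, generation, length;
	uint64_t address, tag, sendb, recvb;
} ATTR;

int main(void)
{
	/* i386 reports offset 20 either way; x86-64 reports 24 unless
	 * packed -- exactly the mismatch the attribute papers over. */
	printf("offsetof(address)=%zu sizeof=%zu\n",
	       offsetof(struct req, address), sizeof(struct req));
	return 0;
}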
368 368
369 static const char __user *raw1394_compat_write(const char __user *buf) 369 static const char __user *raw1394_compat_write(const char __user *buf)
370 { 370 {
371 struct compat_raw1394_req __user *cr = (typeof(cr)) buf; 371 struct compat_raw1394_req __user *cr = (typeof(cr)) buf;
372 struct raw1394_request __user *r; 372 struct raw1394_request __user *r;
373 373
374 r = compat_alloc_user_space(sizeof(struct raw1394_request)); 374 r = compat_alloc_user_space(sizeof(struct raw1394_request));
375 375
376 #define C(x) __copy_in_user(&r->x, &cr->x, sizeof(r->x)) 376 #define C(x) __copy_in_user(&r->x, &cr->x, sizeof(r->x))
377 377
378 if (copy_in_user(r, cr, sizeof(struct compat_raw1394_req)) || 378 if (copy_in_user(r, cr, sizeof(struct compat_raw1394_req)) ||
379 C(address) || 379 C(address) ||
380 C(tag) || 380 C(tag) ||
381 C(sendb) || 381 C(sendb) ||
382 C(recvb)) 382 C(recvb))
383 return (__force const char __user *)ERR_PTR(-EFAULT); 383 return (__force const char __user *)ERR_PTR(-EFAULT);
384 384
385 return (const char __user *)r; 385 return (const char __user *)r;
386 } 386 }
387 #undef C 387 #undef C
388 388
389 #define P(x) __put_user(r->x, &cr->x) 389 #define P(x) __put_user(r->x, &cr->x)
390 390
391 static int 391 static int
392 raw1394_compat_read(const char __user *buf, struct raw1394_request *r) 392 raw1394_compat_read(const char __user *buf, struct raw1394_request *r)
393 { 393 {
394 struct compat_raw1394_req __user *cr = (typeof(cr)) buf; 394 struct compat_raw1394_req __user *cr = (typeof(cr)) buf;
395 395
396 if (!access_ok(VERIFY_WRITE, cr, sizeof(struct compat_raw1394_req)) || 396 if (!access_ok(VERIFY_WRITE, cr, sizeof(struct compat_raw1394_req)) ||
397 P(type) || 397 P(type) ||
398 P(error) || 398 P(error) ||
399 P(misc) || 399 P(misc) ||
400 P(generation) || 400 P(generation) ||
401 P(length) || 401 P(length) ||
402 P(address) || 402 P(address) ||
403 P(tag) || 403 P(tag) ||
404 P(sendb) || 404 P(sendb) ||
405 P(recvb)) 405 P(recvb))
406 return -EFAULT; 406 return -EFAULT;
407 407
408 return sizeof(struct compat_raw1394_req); 408 return sizeof(struct compat_raw1394_req);
409 } 409 }
410 #undef P 410 #undef P
411 411
412 #endif 412 #endif
413 413
414 /* get next completed request (caller must hold fi->reqlists_lock) */ 414 /* get next completed request (caller must hold fi->reqlists_lock) */
415 static inline struct pending_request *__next_complete_req(struct file_info *fi) 415 static inline struct pending_request *__next_complete_req(struct file_info *fi)
416 { 416 {
417 struct list_head *lh; 417 struct list_head *lh;
418 struct pending_request *req = NULL; 418 struct pending_request *req = NULL;
419 419
420 if (!list_empty(&fi->req_complete)) { 420 if (!list_empty(&fi->req_complete)) {
421 lh = fi->req_complete.next; 421 lh = fi->req_complete.next;
422 list_del(lh); 422 list_del(lh);
423 req = list_entry(lh, struct pending_request, list); 423 req = list_entry(lh, struct pending_request, list);
424 } 424 }
425 return req; 425 return req;
426 } 426 }
427 427
428 /* atomically get next completed request */ 428 /* atomically get next completed request */
429 static struct pending_request *next_complete_req(struct file_info *fi) 429 static struct pending_request *next_complete_req(struct file_info *fi)
430 { 430 {
431 unsigned long flags; 431 unsigned long flags;
432 struct pending_request *req; 432 struct pending_request *req;
433 433
434 spin_lock_irqsave(&fi->reqlists_lock, flags); 434 spin_lock_irqsave(&fi->reqlists_lock, flags);
435 req = __next_complete_req(fi); 435 req = __next_complete_req(fi);
436 spin_unlock_irqrestore(&fi->reqlists_lock, flags); 436 spin_unlock_irqrestore(&fi->reqlists_lock, flags);
437 return req; 437 return req;
438 } 438 }
439 439
440 static ssize_t raw1394_read(struct file *file, char __user * buffer, 440 static ssize_t raw1394_read(struct file *file, char __user * buffer,
441 size_t count, loff_t * offset_is_ignored) 441 size_t count, loff_t * offset_is_ignored)
442 { 442 {
443 struct file_info *fi = (struct file_info *)file->private_data; 443 struct file_info *fi = (struct file_info *)file->private_data;
444 struct pending_request *req; 444 struct pending_request *req;
445 ssize_t ret; 445 ssize_t ret;
446 446
447 #ifdef CONFIG_COMPAT 447 #ifdef CONFIG_COMPAT
448 if (count == sizeof(struct compat_raw1394_req)) { 448 if (count == sizeof(struct compat_raw1394_req)) {
449 /* ok */ 449 /* ok */
450 } else 450 } else
451 #endif 451 #endif
452 if (count != sizeof(struct raw1394_request)) { 452 if (count != sizeof(struct raw1394_request)) {
453 return -EINVAL; 453 return -EINVAL;
454 } 454 }
455 455
456 if (!access_ok(VERIFY_WRITE, buffer, count)) { 456 if (!access_ok(VERIFY_WRITE, buffer, count)) {
457 return -EFAULT; 457 return -EFAULT;
458 } 458 }
459 459
460 if (file->f_flags & O_NONBLOCK) { 460 if (file->f_flags & O_NONBLOCK) {
461 if (!(req = next_complete_req(fi))) 461 if (!(req = next_complete_req(fi)))
462 return -EAGAIN; 462 return -EAGAIN;
463 } else { 463 } else {
464 /* 464 /*
465 * NB: We call the macro wait_event_interruptible() with a 465 * NB: We call the macro wait_event_interruptible() with a
466 * condition argument with side effect. This is only possible 466 * condition argument with side effect. This is only possible
467 * because the side effect does not occur until the condition 467 * because the side effect does not occur until the condition
468 * becomes true, and wait_event_interruptible() won't evaluate 468 * becomes true, and wait_event_interruptible() won't evaluate
469 * the condition again after that. 469 * the condition again after that.
470 */ 470 */
471 if (wait_event_interruptible(fi->wait_complete, 471 if (wait_event_interruptible(fi->wait_complete,
472 (req = next_complete_req(fi)))) 472 (req = next_complete_req(fi))))
473 return -ERESTARTSYS; 473 return -ERESTARTSYS;
474 } 474 }
475 475
476 if (req->req.length) { 476 if (req->req.length) {
477 if (copy_to_user(int2ptr(req->req.recvb), req->data, 477 if (copy_to_user(int2ptr(req->req.recvb), req->data,
478 req->req.length)) { 478 req->req.length)) {
479 req->req.error = RAW1394_ERROR_MEMFAULT; 479 req->req.error = RAW1394_ERROR_MEMFAULT;
480 } 480 }
481 } 481 }
482 482
483 #ifdef CONFIG_COMPAT 483 #ifdef CONFIG_COMPAT
484 if (count == sizeof(struct compat_raw1394_req) && 484 if (count == sizeof(struct compat_raw1394_req) &&
485 sizeof(struct compat_raw1394_req) != 485 sizeof(struct compat_raw1394_req) !=
486 sizeof(struct raw1394_request)) { 486 sizeof(struct raw1394_request)) {
487 ret = raw1394_compat_read(buffer, &req->req); 487 ret = raw1394_compat_read(buffer, &req->req);
488 } else 488 } else
489 #endif 489 #endif
490 { 490 {
491 if (copy_to_user(buffer, &req->req, sizeof(req->req))) { 491 if (copy_to_user(buffer, &req->req, sizeof(req->req))) {
492 ret = -EFAULT; 492 ret = -EFAULT;
493 goto out; 493 goto out;
494 } 494 }
495 ret = (ssize_t) sizeof(struct raw1394_request); 495 ret = (ssize_t) sizeof(struct raw1394_request);
496 } 496 }
497 out: 497 out:
498 free_pending_request(req); 498 free_pending_request(req);
499 return ret; 499 return ret;
500 } 500 }
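The NB comment inside raw1394_read() is worth unpacking: the wait condition both tests and consumes (next_complete_req() dequeues a request as a side effect), which is normally forbidden, and is safe only because the side effect coincides with the condition turning true and the macro stops evaluating at that point, so no dequeued request can be lost. A userspace analogue of the same idiom, sketched with pthreads:

#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int available;			/* stand-in for fi->req_complete */

/* Test-and-consume in one step, like next_complete_req(): returns 1
 * and takes an item, or returns 0 and changes nothing. */
static int try_take(void)
{
	if (available > 0) {
		available--;		/* the side effect */
		return 1;
	}
	return 0;
}

static void wait_take(void)
{
	pthread_mutex_lock(&lock);
	/* Each pass evaluates try_take() exactly once, and the loop
	 * exits the moment it succeeds, so the consumed item is never
	 * re-tested away. */
	while (!try_take())
		pthread_cond_wait(&cond, &lock);
	pthread_mutex_unlock(&lock);
}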
501 501
502 static int state_opened(struct file_info *fi, struct pending_request *req) 502 static int state_opened(struct file_info *fi, struct pending_request *req)
503 { 503 {
504 if (req->req.type == RAW1394_REQ_INITIALIZE) { 504 if (req->req.type == RAW1394_REQ_INITIALIZE) {
505 switch (req->req.misc) { 505 switch (req->req.misc) {
506 case RAW1394_KERNELAPI_VERSION: 506 case RAW1394_KERNELAPI_VERSION:
507 case 3: 507 case 3:
508 fi->state = initialized; 508 fi->state = initialized;
509 fi->protocol_version = req->req.misc; 509 fi->protocol_version = req->req.misc;
510 req->req.error = RAW1394_ERROR_NONE; 510 req->req.error = RAW1394_ERROR_NONE;
511 req->req.generation = atomic_read(&internal_generation); 511 req->req.generation = atomic_read(&internal_generation);
512 break; 512 break;
513 513
514 default: 514 default:
515 req->req.error = RAW1394_ERROR_COMPAT; 515 req->req.error = RAW1394_ERROR_COMPAT;
516 req->req.misc = RAW1394_KERNELAPI_VERSION; 516 req->req.misc = RAW1394_KERNELAPI_VERSION;
517 } 517 }
518 } else { 518 } else {
519 req->req.error = RAW1394_ERROR_STATE_ORDER; 519 req->req.error = RAW1394_ERROR_STATE_ORDER;
520 } 520 }
521 521
522 req->req.length = 0; 522 req->req.length = 0;
523 queue_complete_req(req); 523 queue_complete_req(req);
524 return 0; 524 return 0;
525 } 525 }
526 526
527 static int state_initialized(struct file_info *fi, struct pending_request *req) 527 static int state_initialized(struct file_info *fi, struct pending_request *req)
528 { 528 {
529 unsigned long flags; 529 unsigned long flags;
530 struct host_info *hi; 530 struct host_info *hi;
531 struct raw1394_khost_list *khl; 531 struct raw1394_khost_list *khl;
532 532
533 if (req->req.generation != atomic_read(&internal_generation)) { 533 if (req->req.generation != atomic_read(&internal_generation)) {
534 req->req.error = RAW1394_ERROR_GENERATION; 534 req->req.error = RAW1394_ERROR_GENERATION;
535 req->req.generation = atomic_read(&internal_generation); 535 req->req.generation = atomic_read(&internal_generation);
536 req->req.length = 0; 536 req->req.length = 0;
537 queue_complete_req(req); 537 queue_complete_req(req);
538 return 0; 538 return 0;
539 } 539 }
540 540
541 switch (req->req.type) { 541 switch (req->req.type) {
542 case RAW1394_REQ_LIST_CARDS: 542 case RAW1394_REQ_LIST_CARDS:
543 spin_lock_irqsave(&host_info_lock, flags); 543 spin_lock_irqsave(&host_info_lock, flags);
544 khl = kmalloc(sizeof(*khl) * host_count, GFP_ATOMIC); 544 khl = kmalloc(sizeof(*khl) * host_count, GFP_ATOMIC);
545 545
546 if (khl) { 546 if (khl) {
547 req->req.misc = host_count; 547 req->req.misc = host_count;
548 req->data = (quadlet_t *) khl; 548 req->data = (quadlet_t *) khl;
549 549
550 list_for_each_entry(hi, &host_info_list, list) { 550 list_for_each_entry(hi, &host_info_list, list) {
551 khl->nodes = hi->host->node_count; 551 khl->nodes = hi->host->node_count;
552 strcpy(khl->name, hi->host->driver->name); 552 strcpy(khl->name, hi->host->driver->name);
553 khl++; 553 khl++;
554 } 554 }
555 } 555 }
556 spin_unlock_irqrestore(&host_info_lock, flags); 556 spin_unlock_irqrestore(&host_info_lock, flags);
557 557
558 if (khl) { 558 if (khl) {
559 req->req.error = RAW1394_ERROR_NONE; 559 req->req.error = RAW1394_ERROR_NONE;
560 req->req.length = min(req->req.length, 560 req->req.length = min(req->req.length,
561 (u32) (sizeof 561 (u32) (sizeof
562 (struct raw1394_khost_list) 562 (struct raw1394_khost_list)
563 * req->req.misc)); 563 * req->req.misc));
564 req->free_data = 1; 564 req->free_data = 1;
565 } else { 565 } else {
566 return -ENOMEM; 566 return -ENOMEM;
567 } 567 }
568 break; 568 break;
569 569
570 case RAW1394_REQ_SET_CARD: 570 case RAW1394_REQ_SET_CARD:
571 spin_lock_irqsave(&host_info_lock, flags); 571 spin_lock_irqsave(&host_info_lock, flags);
572 if (req->req.misc >= host_count) { 572 if (req->req.misc >= host_count) {
573 req->req.error = RAW1394_ERROR_INVALID_ARG; 573 req->req.error = RAW1394_ERROR_INVALID_ARG;
574 goto out_set_card; 574 goto out_set_card;
575 } 575 }
576 list_for_each_entry(hi, &host_info_list, list) 576 list_for_each_entry(hi, &host_info_list, list)
577 if (!req->req.misc--) 577 if (!req->req.misc--)
578 break; 578 break;
579 get_device(&hi->host->device); /* FIXME handle failure case */ 579 get_device(&hi->host->device); /* FIXME handle failure case */
580 list_add_tail(&fi->list, &hi->file_info_list); 580 list_add_tail(&fi->list, &hi->file_info_list);
581 581
582 /* prevent unloading of the host's low-level driver */ 582 /* prevent unloading of the host's low-level driver */
583 if (!try_module_get(hi->host->driver->owner)) { 583 if (!try_module_get(hi->host->driver->owner)) {
584 req->req.error = RAW1394_ERROR_ABORTED; 584 req->req.error = RAW1394_ERROR_ABORTED;
585 goto out_set_card; 585 goto out_set_card;
586 } 586 }
587 WARN_ON(fi->host); 587 WARN_ON(fi->host);
588 fi->host = hi->host; 588 fi->host = hi->host;
589 fi->state = connected; 589 fi->state = connected;
590 590
591 req->req.error = RAW1394_ERROR_NONE; 591 req->req.error = RAW1394_ERROR_NONE;
592 req->req.generation = get_hpsb_generation(fi->host); 592 req->req.generation = get_hpsb_generation(fi->host);
593 req->req.misc = (fi->host->node_id << 16) 593 req->req.misc = (fi->host->node_id << 16)
594 | fi->host->node_count; 594 | fi->host->node_count;
595 if (fi->protocol_version > 3) 595 if (fi->protocol_version > 3)
596 req->req.misc |= NODEID_TO_NODE(fi->host->irm_id) << 8; 596 req->req.misc |= NODEID_TO_NODE(fi->host->irm_id) << 8;
597 out_set_card: 597 out_set_card:
598 spin_unlock_irqrestore(&host_info_lock, flags); 598 spin_unlock_irqrestore(&host_info_lock, flags);
599 599
600 req->req.length = 0; 600 req->req.length = 0;
601 break; 601 break;
602 602
603 default: 603 default:
604 req->req.error = RAW1394_ERROR_STATE_ORDER; 604 req->req.error = RAW1394_ERROR_STATE_ORDER;
605 req->req.length = 0; 605 req->req.length = 0;
606 break; 606 break;
607 } 607 }
608 608
609 queue_complete_req(req); 609 queue_complete_req(req);
610 return 0; 610 return 0;
611 } 611 }
612 612
613 static void handle_fcp_listen(struct file_info *fi, struct pending_request *req) 613 static void handle_fcp_listen(struct file_info *fi, struct pending_request *req)
614 { 614 {
615 if (req->req.misc) { 615 if (req->req.misc) {
616 if (fi->fcp_buffer) { 616 if (fi->fcp_buffer) {
617 req->req.error = RAW1394_ERROR_ALREADY; 617 req->req.error = RAW1394_ERROR_ALREADY;
618 } else { 618 } else {
619 fi->fcp_buffer = int2ptr(req->req.recvb); 619 fi->fcp_buffer = int2ptr(req->req.recvb);
620 } 620 }
621 } else { 621 } else {
622 if (!fi->fcp_buffer) { 622 if (!fi->fcp_buffer) {
623 req->req.error = RAW1394_ERROR_ALREADY; 623 req->req.error = RAW1394_ERROR_ALREADY;
624 } else { 624 } else {
625 fi->fcp_buffer = NULL; 625 fi->fcp_buffer = NULL;
626 } 626 }
627 } 627 }
628 628
629 req->req.length = 0; 629 req->req.length = 0;
630 queue_complete_req(req); 630 queue_complete_req(req);
631 } 631 }
632 632
633 static int handle_async_request(struct file_info *fi, 633 static int handle_async_request(struct file_info *fi,
634 struct pending_request *req, int node) 634 struct pending_request *req, int node)
635 { 635 {
636 unsigned long flags; 636 unsigned long flags;
637 struct hpsb_packet *packet = NULL; 637 struct hpsb_packet *packet = NULL;
638 u64 addr = req->req.address & 0xffffffffffffULL; 638 u64 addr = req->req.address & 0xffffffffffffULL;
639 639
640 switch (req->req.type) { 640 switch (req->req.type) {
641 case RAW1394_REQ_ASYNC_READ: 641 case RAW1394_REQ_ASYNC_READ:
642 DBGMSG("read_request called"); 642 DBGMSG("read_request called");
643 packet = 643 packet =
644 hpsb_make_readpacket(fi->host, node, addr, req->req.length); 644 hpsb_make_readpacket(fi->host, node, addr, req->req.length);
645 645
646 if (!packet) 646 if (!packet)
647 return -ENOMEM; 647 return -ENOMEM;
648 648
649 if (req->req.length == 4) 649 if (req->req.length == 4)
650 req->data = &packet->header[3]; 650 req->data = &packet->header[3];
651 else 651 else
652 req->data = packet->data; 652 req->data = packet->data;
653 653
654 break; 654 break;
655 655
656 case RAW1394_REQ_ASYNC_WRITE: 656 case RAW1394_REQ_ASYNC_WRITE:
657 DBGMSG("write_request called"); 657 DBGMSG("write_request called");
658 658
659 packet = hpsb_make_writepacket(fi->host, node, addr, NULL, 659 packet = hpsb_make_writepacket(fi->host, node, addr, NULL,
660 req->req.length); 660 req->req.length);
661 if (!packet) 661 if (!packet)
662 return -ENOMEM; 662 return -ENOMEM;
663 663
664 if (req->req.length == 4) { 664 if (req->req.length == 4) {
665 if (copy_from_user 665 if (copy_from_user
666 (&packet->header[3], int2ptr(req->req.sendb), 666 (&packet->header[3], int2ptr(req->req.sendb),
667 req->req.length)) 667 req->req.length))
668 req->req.error = RAW1394_ERROR_MEMFAULT; 668 req->req.error = RAW1394_ERROR_MEMFAULT;
669 } else { 669 } else {
670 if (copy_from_user 670 if (copy_from_user
671 (packet->data, int2ptr(req->req.sendb), 671 (packet->data, int2ptr(req->req.sendb),
672 req->req.length)) 672 req->req.length))
673 req->req.error = RAW1394_ERROR_MEMFAULT; 673 req->req.error = RAW1394_ERROR_MEMFAULT;
674 } 674 }
675 675
676 req->req.length = 0; 676 req->req.length = 0;
677 break; 677 break;
678 678
679 case RAW1394_REQ_ASYNC_STREAM: 679 case RAW1394_REQ_ASYNC_STREAM:
680 DBGMSG("stream_request called"); 680 DBGMSG("stream_request called");
681 681
682 packet = 682 packet =
683 hpsb_make_streampacket(fi->host, NULL, req->req.length, 683 hpsb_make_streampacket(fi->host, NULL, req->req.length,
684 node & 0x3f /*channel */ , 684 node & 0x3f /*channel */ ,
685 (req->req.misc >> 16) & 0x3, 685 (req->req.misc >> 16) & 0x3,
686 req->req.misc & 0xf); 686 req->req.misc & 0xf);
687 if (!packet) 687 if (!packet)
688 return -ENOMEM; 688 return -ENOMEM;
689 689
690 if (copy_from_user(packet->data, int2ptr(req->req.sendb), 690 if (copy_from_user(packet->data, int2ptr(req->req.sendb),
691 req->req.length)) 691 req->req.length))
692 req->req.error = RAW1394_ERROR_MEMFAULT; 692 req->req.error = RAW1394_ERROR_MEMFAULT;
693 693
694 req->req.length = 0; 694 req->req.length = 0;
695 break; 695 break;
696 696
697 case RAW1394_REQ_LOCK: 697 case RAW1394_REQ_LOCK:
698 DBGMSG("lock_request called"); 698 DBGMSG("lock_request called");
699 if ((req->req.misc == EXTCODE_FETCH_ADD) 699 if ((req->req.misc == EXTCODE_FETCH_ADD)
700 || (req->req.misc == EXTCODE_LITTLE_ADD)) { 700 || (req->req.misc == EXTCODE_LITTLE_ADD)) {
701 if (req->req.length != 4) { 701 if (req->req.length != 4) {
702 req->req.error = RAW1394_ERROR_INVALID_ARG; 702 req->req.error = RAW1394_ERROR_INVALID_ARG;
703 break; 703 break;
704 } 704 }
705 } else { 705 } else {
706 if (req->req.length != 8) { 706 if (req->req.length != 8) {
707 req->req.error = RAW1394_ERROR_INVALID_ARG; 707 req->req.error = RAW1394_ERROR_INVALID_ARG;
708 break; 708 break;
709 } 709 }
710 } 710 }
711 711
712 packet = hpsb_make_lockpacket(fi->host, node, addr, 712 packet = hpsb_make_lockpacket(fi->host, node, addr,
713 req->req.misc, NULL, 0); 713 req->req.misc, NULL, 0);
714 if (!packet) 714 if (!packet)
715 return -ENOMEM; 715 return -ENOMEM;
716 716
717 if (copy_from_user(packet->data, int2ptr(req->req.sendb), 717 if (copy_from_user(packet->data, int2ptr(req->req.sendb),
718 req->req.length)) { 718 req->req.length)) {
719 req->req.error = RAW1394_ERROR_MEMFAULT; 719 req->req.error = RAW1394_ERROR_MEMFAULT;
720 break; 720 break;
721 } 721 }
722 722
723 req->data = packet->data; 723 req->data = packet->data;
724 req->req.length = 4; 724 req->req.length = 4;
725 break; 725 break;
726 726
727 case RAW1394_REQ_LOCK64: 727 case RAW1394_REQ_LOCK64:
728 DBGMSG("lock64_request called"); 728 DBGMSG("lock64_request called");
729 if ((req->req.misc == EXTCODE_FETCH_ADD) 729 if ((req->req.misc == EXTCODE_FETCH_ADD)
730 || (req->req.misc == EXTCODE_LITTLE_ADD)) { 730 || (req->req.misc == EXTCODE_LITTLE_ADD)) {
731 if (req->req.length != 8) { 731 if (req->req.length != 8) {
732 req->req.error = RAW1394_ERROR_INVALID_ARG; 732 req->req.error = RAW1394_ERROR_INVALID_ARG;
733 break; 733 break;
734 } 734 }
735 } else { 735 } else {
736 if (req->req.length != 16) { 736 if (req->req.length != 16) {
737 req->req.error = RAW1394_ERROR_INVALID_ARG; 737 req->req.error = RAW1394_ERROR_INVALID_ARG;
738 break; 738 break;
739 } 739 }
740 } 740 }
741 packet = hpsb_make_lock64packet(fi->host, node, addr, 741 packet = hpsb_make_lock64packet(fi->host, node, addr,
742 req->req.misc, NULL, 0); 742 req->req.misc, NULL, 0);
743 if (!packet) 743 if (!packet)
744 return -ENOMEM; 744 return -ENOMEM;
745 745
746 if (copy_from_user(packet->data, int2ptr(req->req.sendb), 746 if (copy_from_user(packet->data, int2ptr(req->req.sendb),
747 req->req.length)) { 747 req->req.length)) {
748 req->req.error = RAW1394_ERROR_MEMFAULT; 748 req->req.error = RAW1394_ERROR_MEMFAULT;
749 break; 749 break;
750 } 750 }
751 751
752 req->data = packet->data; 752 req->data = packet->data;
753 req->req.length = 8; 753 req->req.length = 8;
754 break; 754 break;
755 755
756 default: 756 default:
757 req->req.error = RAW1394_ERROR_STATE_ORDER; 757 req->req.error = RAW1394_ERROR_STATE_ORDER;
758 } 758 }
759 759
760 req->packet = packet; 760 req->packet = packet;
761 761
762 if (req->req.error) { 762 if (req->req.error) {
763 req->req.length = 0; 763 req->req.length = 0;
764 queue_complete_req(req); 764 queue_complete_req(req);
765 return 0; 765 return 0;
766 } 766 }
767 767
768 hpsb_set_packet_complete_task(packet, 768 hpsb_set_packet_complete_task(packet,
769 (void (*)(void *))queue_complete_cb, req); 769 (void (*)(void *))queue_complete_cb, req);
770 770
771 spin_lock_irqsave(&fi->reqlists_lock, flags); 771 spin_lock_irqsave(&fi->reqlists_lock, flags);
772 list_add_tail(&req->list, &fi->req_pending); 772 list_add_tail(&req->list, &fi->req_pending);
773 spin_unlock_irqrestore(&fi->reqlists_lock, flags); 773 spin_unlock_irqrestore(&fi->reqlists_lock, flags);
774 774
775 packet->generation = req->req.generation; 775 packet->generation = req->req.generation;
776 776
777 if (hpsb_send_packet(packet) < 0) { 777 if (hpsb_send_packet(packet) < 0) {
778 req->req.error = RAW1394_ERROR_SEND_ERROR; 778 req->req.error = RAW1394_ERROR_SEND_ERROR;
779 req->req.length = 0; 779 req->req.length = 0;
780 hpsb_free_tlabel(packet); 780 hpsb_free_tlabel(packet);
781 queue_complete_req(req); 781 queue_complete_req(req);
782 } 782 }
783 return 0; 783 return 0;
784 } 784 }
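The length checks under RAW1394_REQ_LOCK and RAW1394_REQ_LOCK64 encode the IEEE 1394 lock-operand rule: fetch_add and little_add carry a single operand, all other extended tcodes carry an argument plus a data value, and the operand width doubles for the 64-bit variant. Expressed as a small helper, with the extended tcode numbers taken from the standard's lock table:

#include <stdio.h>

/* IEEE 1394 extended transaction codes for lock requests. */
enum {
	EXTCODE_MASK_SWAP	= 1,
	EXTCODE_COMPARE_SWAP	= 2,
	EXTCODE_FETCH_ADD	= 3,
	EXTCODE_LITTLE_ADD	= 4,
	EXTCODE_BOUNDED_ADD	= 5,
	EXTCODE_WRAP_ADD	= 6,
};

/* Payload size a lock request must carry, mirroring the checks in
 * handle_async_request(): the two add-only extcodes send one operand,
 * every other extcode sends arg + data. */
static int lock_payload_size(int extcode, int is_lock64)
{
	int operand = is_lock64 ? 8 : 4;

	if (extcode == EXTCODE_FETCH_ADD || extcode == EXTCODE_LITTLE_ADD)
		return operand;
	return 2 * operand;
}

int main(void)
{
	printf("%d %d\n",
	       lock_payload_size(EXTCODE_FETCH_ADD, 0),		/* 4  */
	       lock_payload_size(EXTCODE_COMPARE_SWAP, 1));	/* 16 */
	return 0;
}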
785 785
786 static int handle_async_send(struct file_info *fi, struct pending_request *req) 786 static int handle_async_send(struct file_info *fi, struct pending_request *req)
787 { 787 {
788 unsigned long flags; 788 unsigned long flags;
789 struct hpsb_packet *packet; 789 struct hpsb_packet *packet;
790 int header_length = req->req.misc & 0xffff; 790 int header_length = req->req.misc & 0xffff;
791 int expect_response = req->req.misc >> 16; 791 int expect_response = req->req.misc >> 16;
792 size_t data_size; 792 size_t data_size;
793 793
794 if (header_length > req->req.length || header_length < 12 || 794 if (header_length > req->req.length || header_length < 12 ||
795 header_length > FIELD_SIZEOF(struct hpsb_packet, header)) { 795 header_length > FIELD_SIZEOF(struct hpsb_packet, header)) {
796 req->req.error = RAW1394_ERROR_INVALID_ARG; 796 req->req.error = RAW1394_ERROR_INVALID_ARG;
797 req->req.length = 0; 797 req->req.length = 0;
798 queue_complete_req(req); 798 queue_complete_req(req);
799 return 0; 799 return 0;
800 } 800 }
801 801
802 data_size = req->req.length - header_length; 802 data_size = req->req.length - header_length;
803 packet = hpsb_alloc_packet(data_size); 803 packet = hpsb_alloc_packet(data_size);
804 req->packet = packet; 804 req->packet = packet;
805 if (!packet) 805 if (!packet)
806 return -ENOMEM; 806 return -ENOMEM;
807 807
808 if (copy_from_user(packet->header, int2ptr(req->req.sendb), 808 if (copy_from_user(packet->header, int2ptr(req->req.sendb),
809 header_length)) { 809 header_length)) {
810 req->req.error = RAW1394_ERROR_MEMFAULT; 810 req->req.error = RAW1394_ERROR_MEMFAULT;
811 req->req.length = 0; 811 req->req.length = 0;
812 queue_complete_req(req); 812 queue_complete_req(req);
813 return 0; 813 return 0;
814 } 814 }
815 815
816 if (copy_from_user 816 if (copy_from_user
817 (packet->data, int2ptr(req->req.sendb) + header_length, 817 (packet->data, int2ptr(req->req.sendb) + header_length,
818 data_size)) { 818 data_size)) {
819 req->req.error = RAW1394_ERROR_MEMFAULT; 819 req->req.error = RAW1394_ERROR_MEMFAULT;
820 req->req.length = 0; 820 req->req.length = 0;
821 queue_complete_req(req); 821 queue_complete_req(req);
822 return 0; 822 return 0;
823 } 823 }
824 824
825 packet->type = hpsb_async; 825 packet->type = hpsb_async;
826 packet->node_id = packet->header[0] >> 16; 826 packet->node_id = packet->header[0] >> 16;
827 packet->tcode = (packet->header[0] >> 4) & 0xf; 827 packet->tcode = (packet->header[0] >> 4) & 0xf;
828 packet->tlabel = (packet->header[0] >> 10) & 0x3f; 828 packet->tlabel = (packet->header[0] >> 10) & 0x3f;
829 packet->host = fi->host; 829 packet->host = fi->host;
830 packet->expect_response = expect_response; 830 packet->expect_response = expect_response;
831 packet->header_size = header_length; 831 packet->header_size = header_length;
832 packet->data_size = data_size; 832 packet->data_size = data_size;
833 833
834 req->req.length = 0; 834 req->req.length = 0;
835 hpsb_set_packet_complete_task(packet, 835 hpsb_set_packet_complete_task(packet,
836 (void (*)(void *))queue_complete_cb, req); 836 (void (*)(void *))queue_complete_cb, req);
837 837
838 spin_lock_irqsave(&fi->reqlists_lock, flags); 838 spin_lock_irqsave(&fi->reqlists_lock, flags);
839 list_add_tail(&req->list, &fi->req_pending); 839 list_add_tail(&req->list, &fi->req_pending);
840 spin_unlock_irqrestore(&fi->reqlists_lock, flags); 840 spin_unlock_irqrestore(&fi->reqlists_lock, flags);
841 841
842 /* Update the generation of the packet just before sending. */ 842 /* Update the generation of the packet just before sending. */
843 packet->generation = req->req.generation; 843 packet->generation = req->req.generation;
844 844
845 if (hpsb_send_packet(packet) < 0) { 845 if (hpsb_send_packet(packet) < 0) {
846 req->req.error = RAW1394_ERROR_SEND_ERROR; 846 req->req.error = RAW1394_ERROR_SEND_ERROR;
847 queue_complete_req(req); 847 queue_complete_req(req);
848 } 848 }
849 849
850 return 0; 850 return 0;
851 } 851 }
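handle_async_send() trusts userspace to supply raw packet headers and recovers the routing fields from the first quadlet; the shifts match the common IEEE 1394 async header layout, with the destination ID in bits 16..31, the transaction label in bits 10..15, and the tcode in bits 4..7. The same decode as a standalone sketch:

#include <stdint.h>
#include <stdio.h>

struct hdr0 {
	unsigned node_id;	/* destination bus + physical ID, 16 bits */
	unsigned tlabel;	/* transaction label, 6 bits */
	unsigned tcode;		/* transaction code, 4 bits */
};

/* Extract the routing fields from the first header quadlet, using the
 * same shifts and masks as handle_async_send() above. */
static struct hdr0 decode_hdr0(uint32_t q)
{
	struct hdr0 h = {
		.node_id = (q >> 16) & 0xffff,
		.tlabel  = (q >> 10) & 0x3f,
		.tcode   = (q >> 4) & 0xf,
	};
	return h;
}

int main(void)
{
	struct hdr0 h = decode_hdr0(0xffc10150u); /* node 0xffc1, tcode 5 */

	printf("node=0x%04x tlabel=%u tcode=%u\n",
	       h.node_id, h.tlabel, h.tcode);
	return 0;
}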
852 852
853 static int arm_read(struct hpsb_host *host, int nodeid, quadlet_t * buffer, 853 static int arm_read(struct hpsb_host *host, int nodeid, quadlet_t * buffer,
854 u64 addr, size_t length, u16 flags) 854 u64 addr, size_t length, u16 flags)
855 { 855 {
856 unsigned long irqflags; 856 unsigned long irqflags;
857 struct pending_request *req; 857 struct pending_request *req;
858 struct host_info *hi; 858 struct host_info *hi;
859 struct file_info *fi = NULL; 859 struct file_info *fi = NULL;
860 struct list_head *entry; 860 struct list_head *entry;
861 struct arm_addr *arm_addr = NULL; 861 struct arm_addr *arm_addr = NULL;
862 struct arm_request *arm_req = NULL; 862 struct arm_request *arm_req = NULL;
863 struct arm_response *arm_resp = NULL; 863 struct arm_response *arm_resp = NULL;
864 int found = 0, size = 0, rcode = -1; 864 int found = 0, size = 0, rcode = -1;
865 struct arm_request_response *arm_req_resp = NULL; 865 struct arm_request_response *arm_req_resp = NULL;
866 866
867 DBGMSG("arm_read called by node: %X " 867 DBGMSG("arm_read called by node: %X "
868 "addr: %4.4x %8.8x length: %Zu", nodeid, 868 "addr: %4.4x %8.8x length: %Zu", nodeid,
869 (u16) ((addr >> 32) & 0xFFFF), (u32) (addr & 0xFFFFFFFF), 869 (u16) ((addr >> 32) & 0xFFFF), (u32) (addr & 0xFFFFFFFF),
870 length); 870 length);
871 spin_lock_irqsave(&host_info_lock, irqflags); 871 spin_lock_irqsave(&host_info_lock, irqflags);
872 hi = find_host_info(host); /* search address-entry */ 872 hi = find_host_info(host); /* search address-entry */
873 if (hi != NULL) { 873 if (hi != NULL) {
874 list_for_each_entry(fi, &hi->file_info_list, list) { 874 list_for_each_entry(fi, &hi->file_info_list, list) {
875 entry = fi->addr_list.next; 875 entry = fi->addr_list.next;
876 while (entry != &(fi->addr_list)) { 876 while (entry != &(fi->addr_list)) {
877 arm_addr = 877 arm_addr =
878 list_entry(entry, struct arm_addr, 878 list_entry(entry, struct arm_addr,
879 addr_list); 879 addr_list);
880 if (((arm_addr->start) <= (addr)) 880 if (((arm_addr->start) <= (addr))
881 && ((arm_addr->end) >= (addr + length))) { 881 && ((arm_addr->end) >= (addr + length))) {
882 found = 1; 882 found = 1;
883 break; 883 break;
884 } 884 }
885 entry = entry->next; 885 entry = entry->next;
886 } 886 }
887 if (found) { 887 if (found) {
888 break; 888 break;
889 } 889 }
890 } 890 }
891 } 891 }
892 rcode = -1; 892 rcode = -1;
893 if (!found) { 893 if (!found) {
894 printk(KERN_ERR "raw1394: arm_read FAILED addr_entry not found" 894 printk(KERN_ERR "raw1394: arm_read FAILED addr_entry not found"
895 " -> rcode_address_error\n"); 895 " -> rcode_address_error\n");
896 spin_unlock_irqrestore(&host_info_lock, irqflags); 896 spin_unlock_irqrestore(&host_info_lock, irqflags);
897 return (RCODE_ADDRESS_ERROR); 897 return (RCODE_ADDRESS_ERROR);
898 } else { 898 } else {
899 DBGMSG("arm_read addr_entry FOUND"); 899 DBGMSG("arm_read addr_entry FOUND");
900 } 900 }
901 if (arm_addr->rec_length < length) { 901 if (arm_addr->rec_length < length) {
902 DBGMSG("arm_read blocklength too big -> rcode_data_error"); 902 DBGMSG("arm_read blocklength too big -> rcode_data_error");
903 rcode = RCODE_DATA_ERROR; /* hardware error, data is unavailable */ 903 rcode = RCODE_DATA_ERROR; /* hardware error, data is unavailable */
904 } 904 }
905 if (rcode == -1) { 905 if (rcode == -1) {
906 if (arm_addr->access_rights & ARM_READ) { 906 if (arm_addr->access_rights & ARM_READ) {
907 if (!(arm_addr->client_transactions & ARM_READ)) { 907 if (!(arm_addr->client_transactions & ARM_READ)) {
908 memcpy(buffer, 908 memcpy(buffer,
909 (arm_addr->addr_space_buffer) + (addr - 909 (arm_addr->addr_space_buffer) + (addr -
910 (arm_addr-> 910 (arm_addr->
911 start)), 911 start)),
912 length); 912 length);
913 DBGMSG("arm_read -> (rcode_complete)"); 913 DBGMSG("arm_read -> (rcode_complete)");
914 rcode = RCODE_COMPLETE; 914 rcode = RCODE_COMPLETE;
915 } 915 }
916 } else { 916 } else {
917 rcode = RCODE_TYPE_ERROR; /* function not allowed */ 917 rcode = RCODE_TYPE_ERROR; /* function not allowed */
918 DBGMSG("arm_read -> rcode_type_error (access denied)"); 918 DBGMSG("arm_read -> rcode_type_error (access denied)");
919 } 919 }
920 } 920 }
921 if (arm_addr->notification_options & ARM_READ) { 921 if (arm_addr->notification_options & ARM_READ) {
922 DBGMSG("arm_read -> entering notification-section"); 922 DBGMSG("arm_read -> entering notification-section");
923 req = __alloc_pending_request(GFP_ATOMIC); 923 req = __alloc_pending_request(GFP_ATOMIC);
924 if (!req) { 924 if (!req) {
925 DBGMSG("arm_read -> rcode_conflict_error"); 925 DBGMSG("arm_read -> rcode_conflict_error");
926 spin_unlock_irqrestore(&host_info_lock, irqflags); 926 spin_unlock_irqrestore(&host_info_lock, irqflags);
927 return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected. 927 return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected.
928 The request may be retried */ 928 The request may be retried */
929 } 929 }
930 if (rcode == RCODE_COMPLETE) { 930 if (rcode == RCODE_COMPLETE) {
931 size = 931 size =
932 sizeof(struct arm_request) + 932 sizeof(struct arm_request) +
933 sizeof(struct arm_response) + 933 sizeof(struct arm_response) +
934 length * sizeof(byte_t) + 934 length * sizeof(byte_t) +
935 sizeof(struct arm_request_response); 935 sizeof(struct arm_request_response);
936 } else { 936 } else {
937 size = 937 size =
938 sizeof(struct arm_request) + 938 sizeof(struct arm_request) +
939 sizeof(struct arm_response) + 939 sizeof(struct arm_response) +
940 sizeof(struct arm_request_response); 940 sizeof(struct arm_request_response);
941 } 941 }
942 req->data = kmalloc(size, GFP_ATOMIC); 942 req->data = kmalloc(size, GFP_ATOMIC);
943 if (!(req->data)) { 943 if (!(req->data)) {
944 free_pending_request(req); 944 free_pending_request(req);
945 DBGMSG("arm_read -> rcode_conflict_error"); 945 DBGMSG("arm_read -> rcode_conflict_error");
946 spin_unlock_irqrestore(&host_info_lock, irqflags); 946 spin_unlock_irqrestore(&host_info_lock, irqflags);
947 return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected. 947 return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected.
948 The request may be retried */ 948 The request may be retried */
949 } 949 }
950 req->free_data = 1; 950 req->free_data = 1;
951 req->file_info = fi; 951 req->file_info = fi;
952 req->req.type = RAW1394_REQ_ARM; 952 req->req.type = RAW1394_REQ_ARM;
953 req->req.generation = get_hpsb_generation(host); 953 req->req.generation = get_hpsb_generation(host);
954 req->req.misc = 954 req->req.misc =
955 (((length << 16) & (0xFFFF0000)) | (ARM_READ & 0xFF)); 955 (((length << 16) & (0xFFFF0000)) | (ARM_READ & 0xFF));
956 req->req.tag = arm_addr->arm_tag; 956 req->req.tag = arm_addr->arm_tag;
957 req->req.recvb = arm_addr->recvb; 957 req->req.recvb = arm_addr->recvb;
958 req->req.length = size; 958 req->req.length = size;
959 arm_req_resp = (struct arm_request_response *)(req->data); 959 arm_req_resp = (struct arm_request_response *)(req->data);
960 arm_req = (struct arm_request *)((byte_t *) (req->data) + 960 arm_req = (struct arm_request *)((byte_t *) (req->data) +
961 (sizeof 961 (sizeof
962 (struct 962 (struct
963 arm_request_response))); 963 arm_request_response)));
964 arm_resp = 964 arm_resp =
965 (struct arm_response *)((byte_t *) (arm_req) + 965 (struct arm_response *)((byte_t *) (arm_req) +
966 (sizeof(struct arm_request))); 966 (sizeof(struct arm_request)));
967 arm_req->buffer = NULL; 967 arm_req->buffer = NULL;
968 arm_resp->buffer = NULL; 968 arm_resp->buffer = NULL;
969 if (rcode == RCODE_COMPLETE) { 969 if (rcode == RCODE_COMPLETE) {
970 byte_t *buf = 970 byte_t *buf =
971 (byte_t *) arm_resp + sizeof(struct arm_response); 971 (byte_t *) arm_resp + sizeof(struct arm_response);
972 memcpy(buf, 972 memcpy(buf,
973 (arm_addr->addr_space_buffer) + (addr - 973 (arm_addr->addr_space_buffer) + (addr -
974 (arm_addr-> 974 (arm_addr->
975 start)), 975 start)),
976 length); 976 length);
977 arm_resp->buffer = 977 arm_resp->buffer =
978 int2ptr((arm_addr->recvb) + 978 int2ptr((arm_addr->recvb) +
979 sizeof(struct arm_request_response) + 979 sizeof(struct arm_request_response) +
980 sizeof(struct arm_request) + 980 sizeof(struct arm_request) +
981 sizeof(struct arm_response)); 981 sizeof(struct arm_response));
982 } 982 }
983 arm_resp->buffer_length = 983 arm_resp->buffer_length =
984 (rcode == RCODE_COMPLETE) ? length : 0; 984 (rcode == RCODE_COMPLETE) ? length : 0;
985 arm_resp->response_code = rcode; 985 arm_resp->response_code = rcode;
986 arm_req->buffer_length = 0; 986 arm_req->buffer_length = 0;
987 arm_req->generation = req->req.generation; 987 arm_req->generation = req->req.generation;
988 arm_req->extended_transaction_code = 0; 988 arm_req->extended_transaction_code = 0;
989 arm_req->destination_offset = addr; 989 arm_req->destination_offset = addr;
990 arm_req->source_nodeid = nodeid; 990 arm_req->source_nodeid = nodeid;
991 arm_req->destination_nodeid = host->node_id; 991 arm_req->destination_nodeid = host->node_id;
992 arm_req->tlabel = (flags >> 10) & 0x3f; 992 arm_req->tlabel = (flags >> 10) & 0x3f;
993 arm_req->tcode = (flags >> 4) & 0x0f; 993 arm_req->tcode = (flags >> 4) & 0x0f;
994 arm_req_resp->request = int2ptr((arm_addr->recvb) + 994 arm_req_resp->request = int2ptr((arm_addr->recvb) +
995 sizeof(struct 995 sizeof(struct
996 arm_request_response)); 996 arm_request_response));
997 arm_req_resp->response = 997 arm_req_resp->response =
998 int2ptr((arm_addr->recvb) + 998 int2ptr((arm_addr->recvb) +
999 sizeof(struct arm_request_response) + 999 sizeof(struct arm_request_response) +
1000 sizeof(struct arm_request)); 1000 sizeof(struct arm_request));
1001 queue_complete_req(req); 1001 queue_complete_req(req);
1002 } 1002 }
1003 spin_unlock_irqrestore(&host_info_lock, irqflags); 1003 spin_unlock_irqrestore(&host_info_lock, irqflags);
1004 return (rcode); 1004 return (rcode);
1005 } 1005 }
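All the int2ptr() arithmetic above serializes one notification into a single allocation and then rebases its internal pointers onto the client's receive buffer: the layout is always arm_request_response, then arm_request, then arm_response, then the raw payload. The offset computation in isolation, as a sketch (recvb and the three sizes stand in for the driver's real values):

#include <stddef.h>
#include <stdint.h>

/* Client-side addresses of the pieces inside an ARM notification
 * buffer; the order matches what arm_read()/arm_write() lay down. */
struct arm_layout {
	uint64_t request;	/* recvb + sizeof(arm_request_response) */
	uint64_t response;	/* request + sizeof(arm_request) */
	uint64_t payload;	/* response + sizeof(arm_response) */
};

static struct arm_layout arm_offsets(uint64_t recvb, size_t sz_reqresp,
				     size_t sz_req, size_t sz_resp)
{
	struct arm_layout l;

	l.request  = recvb + sz_reqresp;
	l.response = l.request + sz_req;
	l.payload  = l.response + sz_resp;
	return l;
}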
1006 1006
1007 static int arm_write(struct hpsb_host *host, int nodeid, int destid, 1007 static int arm_write(struct hpsb_host *host, int nodeid, int destid,
1008 quadlet_t * data, u64 addr, size_t length, u16 flags) 1008 quadlet_t * data, u64 addr, size_t length, u16 flags)
1009 { 1009 {
1010 unsigned long irqflags; 1010 unsigned long irqflags;
1011 struct pending_request *req; 1011 struct pending_request *req;
1012 struct host_info *hi; 1012 struct host_info *hi;
1013 struct file_info *fi = NULL; 1013 struct file_info *fi = NULL;
1014 struct list_head *entry; 1014 struct list_head *entry;
1015 struct arm_addr *arm_addr = NULL; 1015 struct arm_addr *arm_addr = NULL;
1016 struct arm_request *arm_req = NULL; 1016 struct arm_request *arm_req = NULL;
1017 struct arm_response *arm_resp = NULL; 1017 struct arm_response *arm_resp = NULL;
1018 int found = 0, size = 0, rcode = -1, length_conflict = 0; 1018 int found = 0, size = 0, rcode = -1;
1019 struct arm_request_response *arm_req_resp = NULL; 1019 struct arm_request_response *arm_req_resp = NULL;
1020 1020
1021 DBGMSG("arm_write called by node: %X " 1021 DBGMSG("arm_write called by node: %X "
1022 "addr: %4.4x %8.8x length: %Zu", nodeid, 1022 "addr: %4.4x %8.8x length: %Zu", nodeid,
1023 (u16) ((addr >> 32) & 0xFFFF), (u32) (addr & 0xFFFFFFFF), 1023 (u16) ((addr >> 32) & 0xFFFF), (u32) (addr & 0xFFFFFFFF),
1024 length); 1024 length);
1025 spin_lock_irqsave(&host_info_lock, irqflags); 1025 spin_lock_irqsave(&host_info_lock, irqflags);
1026 hi = find_host_info(host); /* search address-entry */ 1026 hi = find_host_info(host); /* search address-entry */
1027 if (hi != NULL) { 1027 if (hi != NULL) {
1028 list_for_each_entry(fi, &hi->file_info_list, list) { 1028 list_for_each_entry(fi, &hi->file_info_list, list) {
1029 entry = fi->addr_list.next; 1029 entry = fi->addr_list.next;
1030 while (entry != &(fi->addr_list)) { 1030 while (entry != &(fi->addr_list)) {
1031 arm_addr = 1031 arm_addr =
1032 list_entry(entry, struct arm_addr, 1032 list_entry(entry, struct arm_addr,
1033 addr_list); 1033 addr_list);
1034 if (((arm_addr->start) <= (addr)) 1034 if (((arm_addr->start) <= (addr))
1035 && ((arm_addr->end) >= (addr + length))) { 1035 && ((arm_addr->end) >= (addr + length))) {
1036 found = 1; 1036 found = 1;
1037 break; 1037 break;
1038 } 1038 }
1039 entry = entry->next; 1039 entry = entry->next;
1040 } 1040 }
1041 if (found) { 1041 if (found) {
1042 break; 1042 break;
1043 } 1043 }
1044 } 1044 }
1045 } 1045 }
1046 rcode = -1; 1046 rcode = -1;
1047 if (!found) { 1047 if (!found) {
1048 printk(KERN_ERR "raw1394: arm_write FAILED addr_entry not found" 1048 printk(KERN_ERR "raw1394: arm_write FAILED addr_entry not found"
1049 " -> rcode_address_error\n"); 1049 " -> rcode_address_error\n");
1050 spin_unlock_irqrestore(&host_info_lock, irqflags); 1050 spin_unlock_irqrestore(&host_info_lock, irqflags);
1051 return (RCODE_ADDRESS_ERROR); 1051 return (RCODE_ADDRESS_ERROR);
1052 } else { 1052 } else {
1053 DBGMSG("arm_write addr_entry FOUND"); 1053 DBGMSG("arm_write addr_entry FOUND");
1054 } 1054 }
1055 if (arm_addr->rec_length < length) { 1055 if (arm_addr->rec_length < length) {
1056 DBGMSG("arm_write blocklength too big -> rcode_data_error"); 1056 DBGMSG("arm_write blocklength too big -> rcode_data_error");
1057 length_conflict = 1;
1058 rcode = RCODE_DATA_ERROR; /* hardware error, data is unavailable */ 1057 rcode = RCODE_DATA_ERROR; /* hardware error, data is unavailable */
1059 } 1058 }
1060 if (rcode == -1) { 1059 if (rcode == -1) {
1061 if (arm_addr->access_rights & ARM_WRITE) { 1060 if (arm_addr->access_rights & ARM_WRITE) {
1062 if (!(arm_addr->client_transactions & ARM_WRITE)) { 1061 if (!(arm_addr->client_transactions & ARM_WRITE)) {
1063 memcpy((arm_addr->addr_space_buffer) + 1062 memcpy((arm_addr->addr_space_buffer) +
1064 (addr - (arm_addr->start)), data, 1063 (addr - (arm_addr->start)), data,
1065 length); 1064 length);
1066 DBGMSG("arm_write -> (rcode_complete)"); 1065 DBGMSG("arm_write -> (rcode_complete)");
1067 rcode = RCODE_COMPLETE; 1066 rcode = RCODE_COMPLETE;
1068 } 1067 }
1069 } else { 1068 } else {
1070 rcode = RCODE_TYPE_ERROR; /* function not allowed */ 1069 rcode = RCODE_TYPE_ERROR; /* function not allowed */
1071 DBGMSG("arm_write -> rcode_type_error (access denied)"); 1070 DBGMSG("arm_write -> rcode_type_error (access denied)");
1072 } 1071 }
1073 } 1072 }
1074 if (arm_addr->notification_options & ARM_WRITE) { 1073 if (arm_addr->notification_options & ARM_WRITE) {
1075 DBGMSG("arm_write -> entering notification-section"); 1074 DBGMSG("arm_write -> entering notification-section");
1076 req = __alloc_pending_request(GFP_ATOMIC); 1075 req = __alloc_pending_request(GFP_ATOMIC);
1077 if (!req) { 1076 if (!req) {
1078 DBGMSG("arm_write -> rcode_conflict_error"); 1077 DBGMSG("arm_write -> rcode_conflict_error");
1079 spin_unlock_irqrestore(&host_info_lock, irqflags); 1078 spin_unlock_irqrestore(&host_info_lock, irqflags);
1080 return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected. 1079 return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected.
1081 The request may be retried */ 1080 The request may be retried */
1082 } 1081 }
1083 size = 1082 size =
1084 sizeof(struct arm_request) + sizeof(struct arm_response) + 1083 sizeof(struct arm_request) + sizeof(struct arm_response) +
1085 (length) * sizeof(byte_t) + 1084 (length) * sizeof(byte_t) +
1086 sizeof(struct arm_request_response); 1085 sizeof(struct arm_request_response);
1087 req->data = kmalloc(size, GFP_ATOMIC); 1086 req->data = kmalloc(size, GFP_ATOMIC);
1088 if (!(req->data)) { 1087 if (!(req->data)) {
1089 free_pending_request(req); 1088 free_pending_request(req);
1090 DBGMSG("arm_write -> rcode_conflict_error"); 1089 DBGMSG("arm_write -> rcode_conflict_error");
1091 spin_unlock_irqrestore(&host_info_lock, irqflags); 1090 spin_unlock_irqrestore(&host_info_lock, irqflags);
1092 return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected. 1091 return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected.
1093 The request may be retried */ 1092 The request may be retried */
1094 } 1093 }
1095 req->free_data = 1; 1094 req->free_data = 1;
1096 req->file_info = fi; 1095 req->file_info = fi;
1097 req->req.type = RAW1394_REQ_ARM; 1096 req->req.type = RAW1394_REQ_ARM;
1098 req->req.generation = get_hpsb_generation(host); 1097 req->req.generation = get_hpsb_generation(host);
1099 req->req.misc = 1098 req->req.misc =
1100 (((length << 16) & (0xFFFF0000)) | (ARM_WRITE & 0xFF)); 1099 (((length << 16) & (0xFFFF0000)) | (ARM_WRITE & 0xFF));
1101 req->req.tag = arm_addr->arm_tag; 1100 req->req.tag = arm_addr->arm_tag;
1102 req->req.recvb = arm_addr->recvb; 1101 req->req.recvb = arm_addr->recvb;
1103 req->req.length = size; 1102 req->req.length = size;
1104 arm_req_resp = (struct arm_request_response *)(req->data); 1103 arm_req_resp = (struct arm_request_response *)(req->data);
1105 arm_req = (struct arm_request *)((byte_t *) (req->data) + 1104 arm_req = (struct arm_request *)((byte_t *) (req->data) +
1106 (sizeof 1105 (sizeof
1107 (struct 1106 (struct
1108 arm_request_response))); 1107 arm_request_response)));
1109 arm_resp = 1108 arm_resp =
1110 (struct arm_response *)((byte_t *) (arm_req) + 1109 (struct arm_response *)((byte_t *) (arm_req) +
1111 (sizeof(struct arm_request))); 1110 (sizeof(struct arm_request)));
1112 arm_resp->buffer = NULL; 1111 arm_resp->buffer = NULL;
1113 memcpy((byte_t *) arm_resp + sizeof(struct arm_response), 1112 memcpy((byte_t *) arm_resp + sizeof(struct arm_response),
1114 data, length); 1113 data, length);
1115 arm_req->buffer = int2ptr((arm_addr->recvb) + 1114 arm_req->buffer = int2ptr((arm_addr->recvb) +
1116 sizeof(struct arm_request_response) + 1115 sizeof(struct arm_request_response) +
1117 sizeof(struct arm_request) + 1116 sizeof(struct arm_request) +
1118 sizeof(struct arm_response)); 1117 sizeof(struct arm_response));
1119 arm_req->buffer_length = length; 1118 arm_req->buffer_length = length;
1120 arm_req->generation = req->req.generation; 1119 arm_req->generation = req->req.generation;
1121 arm_req->extended_transaction_code = 0; 1120 arm_req->extended_transaction_code = 0;
1122 arm_req->destination_offset = addr; 1121 arm_req->destination_offset = addr;
1123 arm_req->source_nodeid = nodeid; 1122 arm_req->source_nodeid = nodeid;
1124 arm_req->destination_nodeid = destid; 1123 arm_req->destination_nodeid = destid;
1125 arm_req->tlabel = (flags >> 10) & 0x3f; 1124 arm_req->tlabel = (flags >> 10) & 0x3f;
1126 arm_req->tcode = (flags >> 4) & 0x0f; 1125 arm_req->tcode = (flags >> 4) & 0x0f;
1127 arm_resp->buffer_length = 0; 1126 arm_resp->buffer_length = 0;
1128 arm_resp->response_code = rcode; 1127 arm_resp->response_code = rcode;
1129 arm_req_resp->request = int2ptr((arm_addr->recvb) + 1128 arm_req_resp->request = int2ptr((arm_addr->recvb) +
1130 sizeof(struct 1129 sizeof(struct
1131 arm_request_response)); 1130 arm_request_response));
1132 arm_req_resp->response = 1131 arm_req_resp->response =
1133 int2ptr((arm_addr->recvb) + 1132 int2ptr((arm_addr->recvb) +
1134 sizeof(struct arm_request_response) + 1133 sizeof(struct arm_request_response) +
1135 sizeof(struct arm_request)); 1134 sizeof(struct arm_request));
1136 queue_complete_req(req); 1135 queue_complete_req(req);
1137 } 1136 }
1138 spin_unlock_irqrestore(&host_info_lock, irqflags); 1137 spin_unlock_irqrestore(&host_info_lock, irqflags);
1139 return (rcode); 1138 return (rcode);
1140 } 1139 }
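arm_lock(), below, computes the lock result in plain C before notifying listeners. The two swap-style operations, lifted out of the switch for readability (host-endian 32-bit values here; the driver operates on big-endian quadlets):

#include <stdint.h>

/* new = data | (old & ~arg), exactly as the EXTCODE_MASK_SWAP case
 * below computes it: bits outside the mask keep their old value. */
static uint32_t lock_mask_swap(uint32_t old, uint32_t data, uint32_t arg)
{
	return data | (old & ~arg);
}

/* Store data only if the current value matches arg; either way the
 * requester gets the old value back in *store. */
static uint32_t lock_compare_swap(uint32_t old, uint32_t data, uint32_t arg)
{
	return old == arg ? data : old;
}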
1141 1140
1142 static int arm_lock(struct hpsb_host *host, int nodeid, quadlet_t * store, 1141 static int arm_lock(struct hpsb_host *host, int nodeid, quadlet_t * store,
1143 u64 addr, quadlet_t data, quadlet_t arg, int ext_tcode, 1142 u64 addr, quadlet_t data, quadlet_t arg, int ext_tcode,
1144 u16 flags) 1143 u16 flags)
1145 { 1144 {
1146 unsigned long irqflags; 1145 unsigned long irqflags;
1147 struct pending_request *req; 1146 struct pending_request *req;
1148 struct host_info *hi; 1147 struct host_info *hi;
1149 struct file_info *fi = NULL; 1148 struct file_info *fi = NULL;
1150 struct list_head *entry; 1149 struct list_head *entry;
1151 struct arm_addr *arm_addr = NULL; 1150 struct arm_addr *arm_addr = NULL;
1152 struct arm_request *arm_req = NULL; 1151 struct arm_request *arm_req = NULL;
1153 struct arm_response *arm_resp = NULL; 1152 struct arm_response *arm_resp = NULL;
1154 int found = 0, size = 0, rcode = -1; 1153 int found = 0, size = 0, rcode = -1;
1155 quadlet_t old, new; 1154 quadlet_t old, new;
1156 struct arm_request_response *arm_req_resp = NULL; 1155 struct arm_request_response *arm_req_resp = NULL;
1157 1156
1158 if (((ext_tcode & 0xFF) == EXTCODE_FETCH_ADD) || 1157 if (((ext_tcode & 0xFF) == EXTCODE_FETCH_ADD) ||
1159 ((ext_tcode & 0xFF) == EXTCODE_LITTLE_ADD)) { 1158 ((ext_tcode & 0xFF) == EXTCODE_LITTLE_ADD)) {
1160 DBGMSG("arm_lock called by node: %X " 1159 DBGMSG("arm_lock called by node: %X "
1161 "addr: %4.4x %8.8x extcode: %2.2X data: %8.8X", 1160 "addr: %4.4x %8.8x extcode: %2.2X data: %8.8X",
1162 nodeid, (u16) ((addr >> 32) & 0xFFFF), 1161 nodeid, (u16) ((addr >> 32) & 0xFFFF),
1163 (u32) (addr & 0xFFFFFFFF), ext_tcode & 0xFF, 1162 (u32) (addr & 0xFFFFFFFF), ext_tcode & 0xFF,
1164 be32_to_cpu(data)); 1163 be32_to_cpu(data));
1165 } else { 1164 } else {
1166 DBGMSG("arm_lock called by node: %X " 1165 DBGMSG("arm_lock called by node: %X "
1167 "addr: %4.4x %8.8x extcode: %2.2X data: %8.8X arg: %8.8X", 1166 "addr: %4.4x %8.8x extcode: %2.2X data: %8.8X arg: %8.8X",
1168 nodeid, (u16) ((addr >> 32) & 0xFFFF), 1167 nodeid, (u16) ((addr >> 32) & 0xFFFF),
1169 (u32) (addr & 0xFFFFFFFF), ext_tcode & 0xFF, 1168 (u32) (addr & 0xFFFFFFFF), ext_tcode & 0xFF,
1170 be32_to_cpu(data), be32_to_cpu(arg)); 1169 be32_to_cpu(data), be32_to_cpu(arg));
1171 } 1170 }
1172 spin_lock_irqsave(&host_info_lock, irqflags); 1171 spin_lock_irqsave(&host_info_lock, irqflags);
1173 hi = find_host_info(host); /* search address-entry */ 1172 hi = find_host_info(host); /* search address-entry */
1174 if (hi != NULL) { 1173 if (hi != NULL) {
1175 list_for_each_entry(fi, &hi->file_info_list, list) { 1174 list_for_each_entry(fi, &hi->file_info_list, list) {
1176 entry = fi->addr_list.next; 1175 entry = fi->addr_list.next;
1177 while (entry != &(fi->addr_list)) { 1176 while (entry != &(fi->addr_list)) {
1178 arm_addr = 1177 arm_addr =
1179 list_entry(entry, struct arm_addr, 1178 list_entry(entry, struct arm_addr,
1180 addr_list); 1179 addr_list);
1181 if (((arm_addr->start) <= (addr)) 1180 if (((arm_addr->start) <= (addr))
1182 && ((arm_addr->end) >= 1181 && ((arm_addr->end) >=
1183 (addr + sizeof(*store)))) { 1182 (addr + sizeof(*store)))) {
1184 found = 1; 1183 found = 1;
1185 break; 1184 break;
1186 } 1185 }
1187 entry = entry->next; 1186 entry = entry->next;
1188 } 1187 }
1189 if (found) { 1188 if (found) {
1190 break; 1189 break;
1191 } 1190 }
1192 } 1191 }
1193 } 1192 }
1194 rcode = -1; 1193 rcode = -1;
1195 if (!found) { 1194 if (!found) {
1196 printk(KERN_ERR "raw1394: arm_lock FAILED addr_entry not found" 1195 printk(KERN_ERR "raw1394: arm_lock FAILED addr_entry not found"
1197 " -> rcode_address_error\n"); 1196 " -> rcode_address_error\n");
1198 spin_unlock_irqrestore(&host_info_lock, irqflags); 1197 spin_unlock_irqrestore(&host_info_lock, irqflags);
1199 return (RCODE_ADDRESS_ERROR); 1198 return (RCODE_ADDRESS_ERROR);
1200 } else { 1199 } else {
1201 DBGMSG("arm_lock addr_entry FOUND"); 1200 DBGMSG("arm_lock addr_entry FOUND");
1202 } 1201 }
1203 if (rcode == -1) { 1202 if (rcode == -1) {
1204 if (arm_addr->access_rights & ARM_LOCK) { 1203 if (arm_addr->access_rights & ARM_LOCK) {
1205 if (!(arm_addr->client_transactions & ARM_LOCK)) { 1204 if (!(arm_addr->client_transactions & ARM_LOCK)) {
1206 memcpy(&old, 1205 memcpy(&old,
1207 (arm_addr->addr_space_buffer) + (addr - 1206 (arm_addr->addr_space_buffer) + (addr -
1208 (arm_addr-> 1207 (arm_addr->
1209 start)), 1208 start)),
1210 sizeof(old)); 1209 sizeof(old));
1211 switch (ext_tcode) { 1210 switch (ext_tcode) {
1212 case (EXTCODE_MASK_SWAP): 1211 case (EXTCODE_MASK_SWAP):
1213 new = data | (old & ~arg); 1212 new = data | (old & ~arg);
1214 break; 1213 break;
1215 case (EXTCODE_COMPARE_SWAP): 1214 case (EXTCODE_COMPARE_SWAP):
1216 if (old == arg) { 1215 if (old == arg) {
1217 new = data; 1216 new = data;
1218 } else { 1217 } else {
1219 new = old; 1218 new = old;
1220 } 1219 }
1221 break; 1220 break;
1222 case (EXTCODE_FETCH_ADD): 1221 case (EXTCODE_FETCH_ADD):
1223 new = 1222 new =
1224 cpu_to_be32(be32_to_cpu(data) + 1223 cpu_to_be32(be32_to_cpu(data) +
1225 be32_to_cpu(old)); 1224 be32_to_cpu(old));
1226 break; 1225 break;
1227 case (EXTCODE_LITTLE_ADD): 1226 case (EXTCODE_LITTLE_ADD):
1228 new = 1227 new =
1229 cpu_to_le32(le32_to_cpu(data) + 1228 cpu_to_le32(le32_to_cpu(data) +
1230 le32_to_cpu(old)); 1229 le32_to_cpu(old));
1231 break; 1230 break;
1232 case (EXTCODE_BOUNDED_ADD): 1231 case (EXTCODE_BOUNDED_ADD):
1233 if (old != arg) { 1232 if (old != arg) {
1234 new = 1233 new =
1235 cpu_to_be32(be32_to_cpu 1234 cpu_to_be32(be32_to_cpu
1236 (data) + 1235 (data) +
1237 be32_to_cpu 1236 be32_to_cpu
1238 (old)); 1237 (old));
1239 } else { 1238 } else {
1240 new = old; 1239 new = old;
1241 } 1240 }
1242 break; 1241 break;
1243 case (EXTCODE_WRAP_ADD): 1242 case (EXTCODE_WRAP_ADD):
1244 if (old != arg) { 1243 if (old != arg) {
1245 new = 1244 new =
1246 cpu_to_be32(be32_to_cpu 1245 cpu_to_be32(be32_to_cpu
1247 (data) + 1246 (data) +
1248 be32_to_cpu 1247 be32_to_cpu
1249 (old)); 1248 (old));
1250 } else { 1249 } else {
1251 new = data; 1250 new = data;
1252 } 1251 }
1253 break; 1252 break;
1254 default: 1253 default:
1255 rcode = RCODE_TYPE_ERROR; /* function not allowed */ 1254 rcode = RCODE_TYPE_ERROR; /* function not allowed */
1256 printk(KERN_ERR 1255 printk(KERN_ERR
1257 "raw1394: arm_lock FAILED " 1256 "raw1394: arm_lock FAILED "
1258 "ext_tcode not allowed -> rcode_type_error\n"); 1257 "ext_tcode not allowed -> rcode_type_error\n");
1259 break; 1258 break;
1260 } /*switch */ 1259 } /*switch */
1261 if (rcode == -1) { 1260 if (rcode == -1) {
1262 DBGMSG("arm_lock -> (rcode_complete)"); 1261 DBGMSG("arm_lock -> (rcode_complete)");
1263 rcode = RCODE_COMPLETE; 1262 rcode = RCODE_COMPLETE;
1264 memcpy(store, &old, sizeof(*store)); 1263 memcpy(store, &old, sizeof(*store));
1265 memcpy((arm_addr->addr_space_buffer) + 1264 memcpy((arm_addr->addr_space_buffer) +
1266 (addr - (arm_addr->start)), 1265 (addr - (arm_addr->start)),
1267 &new, sizeof(*store)); 1266 &new, sizeof(*store));
1268 } 1267 }
1269 } 1268 }
1270 } else { 1269 } else {
1271 rcode = RCODE_TYPE_ERROR; /* function not allowed */ 1270 rcode = RCODE_TYPE_ERROR; /* function not allowed */
1272 DBGMSG("arm_lock -> rcode_type_error (access denied)"); 1271 DBGMSG("arm_lock -> rcode_type_error (access denied)");
1273 } 1272 }
1274 } 1273 }
1275 if (arm_addr->notification_options & ARM_LOCK) { 1274 if (arm_addr->notification_options & ARM_LOCK) {
1276 byte_t *buf1, *buf2; 1275 byte_t *buf1, *buf2;
1277 DBGMSG("arm_lock -> entering notification-section"); 1276 DBGMSG("arm_lock -> entering notification-section");
1278 req = __alloc_pending_request(GFP_ATOMIC); 1277 req = __alloc_pending_request(GFP_ATOMIC);
1279 if (!req) { 1278 if (!req) {
1280 DBGMSG("arm_lock -> rcode_conflict_error"); 1279 DBGMSG("arm_lock -> rcode_conflict_error");
1281 spin_unlock_irqrestore(&host_info_lock, irqflags); 1280 spin_unlock_irqrestore(&host_info_lock, irqflags);
1282 return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected. 1281 return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected.
1283 The request may be retried */ 1282 The request may be retried */
1284 } 1283 }
1285 size = sizeof(struct arm_request) + sizeof(struct arm_response) + 3 * sizeof(*store) + sizeof(struct arm_request_response); /* maximum */ 1284 size = sizeof(struct arm_request) + sizeof(struct arm_response) + 3 * sizeof(*store) + sizeof(struct arm_request_response); /* maximum */
1286 req->data = kmalloc(size, GFP_ATOMIC); 1285 req->data = kmalloc(size, GFP_ATOMIC);
1287 if (!(req->data)) { 1286 if (!(req->data)) {
1288 free_pending_request(req); 1287 free_pending_request(req);
1289 DBGMSG("arm_lock -> rcode_conflict_error"); 1288 DBGMSG("arm_lock -> rcode_conflict_error");
1290 spin_unlock_irqrestore(&host_info_lock, irqflags); 1289 spin_unlock_irqrestore(&host_info_lock, irqflags);
1291 return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected. 1290 return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected.
1292 The request may be retried */ 1291 The request may be retried */
1293 } 1292 }
1294 req->free_data = 1; 1293 req->free_data = 1;
1295 arm_req_resp = (struct arm_request_response *)(req->data); 1294 arm_req_resp = (struct arm_request_response *)(req->data);
1296 arm_req = (struct arm_request *)((byte_t *) (req->data) + 1295 arm_req = (struct arm_request *)((byte_t *) (req->data) +
1297 (sizeof 1296 (sizeof
1298 (struct 1297 (struct
1299 arm_request_response))); 1298 arm_request_response)));
1300 arm_resp = 1299 arm_resp =
1301 (struct arm_response *)((byte_t *) (arm_req) + 1300 (struct arm_response *)((byte_t *) (arm_req) +
1302 (sizeof(struct arm_request))); 1301 (sizeof(struct arm_request)));
1303 buf1 = (byte_t *) arm_resp + sizeof(struct arm_response); 1302 buf1 = (byte_t *) arm_resp + sizeof(struct arm_response);
1304 buf2 = buf1 + 2 * sizeof(*store); 1303 buf2 = buf1 + 2 * sizeof(*store);
1305 if ((ext_tcode == EXTCODE_FETCH_ADD) || 1304 if ((ext_tcode == EXTCODE_FETCH_ADD) ||
1306 (ext_tcode == EXTCODE_LITTLE_ADD)) { 1305 (ext_tcode == EXTCODE_LITTLE_ADD)) {
1307 arm_req->buffer_length = sizeof(*store); 1306 arm_req->buffer_length = sizeof(*store);
1308 memcpy(buf1, &data, sizeof(*store)); 1307 memcpy(buf1, &data, sizeof(*store));
1309 1308
1310 } else { 1309 } else {
1311 arm_req->buffer_length = 2 * sizeof(*store); 1310 arm_req->buffer_length = 2 * sizeof(*store);
1312 memcpy(buf1, &arg, sizeof(*store)); 1311 memcpy(buf1, &arg, sizeof(*store));
1313 memcpy(buf1 + sizeof(*store), &data, sizeof(*store)); 1312 memcpy(buf1 + sizeof(*store), &data, sizeof(*store));
1314 } 1313 }
1315 if (rcode == RCODE_COMPLETE) { 1314 if (rcode == RCODE_COMPLETE) {
1316 arm_resp->buffer_length = sizeof(*store); 1315 arm_resp->buffer_length = sizeof(*store);
1317 memcpy(buf2, &old, sizeof(*store)); 1316 memcpy(buf2, &old, sizeof(*store));
1318 } else { 1317 } else {
1319 arm_resp->buffer_length = 0; 1318 arm_resp->buffer_length = 0;
1320 } 1319 }
1321 req->file_info = fi; 1320 req->file_info = fi;
1322 req->req.type = RAW1394_REQ_ARM; 1321 req->req.type = RAW1394_REQ_ARM;
1323 req->req.generation = get_hpsb_generation(host); 1322 req->req.generation = get_hpsb_generation(host);
1324 req->req.misc = ((((sizeof(*store)) << 16) & (0xFFFF0000)) | 1323 req->req.misc = ((((sizeof(*store)) << 16) & (0xFFFF0000)) |
1325 (ARM_LOCK & 0xFF)); 1324 (ARM_LOCK & 0xFF));
1326 req->req.tag = arm_addr->arm_tag; 1325 req->req.tag = arm_addr->arm_tag;
1327 req->req.recvb = arm_addr->recvb; 1326 req->req.recvb = arm_addr->recvb;
1328 req->req.length = size; 1327 req->req.length = size;
1329 arm_req->generation = req->req.generation; 1328 arm_req->generation = req->req.generation;
1330 arm_req->extended_transaction_code = ext_tcode; 1329 arm_req->extended_transaction_code = ext_tcode;
1331 arm_req->destination_offset = addr; 1330 arm_req->destination_offset = addr;
1332 arm_req->source_nodeid = nodeid; 1331 arm_req->source_nodeid = nodeid;
1333 arm_req->destination_nodeid = host->node_id; 1332 arm_req->destination_nodeid = host->node_id;
1334 arm_req->tlabel = (flags >> 10) & 0x3f; 1333 arm_req->tlabel = (flags >> 10) & 0x3f;
1335 arm_req->tcode = (flags >> 4) & 0x0f; 1334 arm_req->tcode = (flags >> 4) & 0x0f;
1336 arm_resp->response_code = rcode; 1335 arm_resp->response_code = rcode;
1337 arm_req_resp->request = int2ptr((arm_addr->recvb) + 1336 arm_req_resp->request = int2ptr((arm_addr->recvb) +
1338 sizeof(struct 1337 sizeof(struct
1339 arm_request_response)); 1338 arm_request_response));
1340 arm_req_resp->response = 1339 arm_req_resp->response =
1341 int2ptr((arm_addr->recvb) + 1340 int2ptr((arm_addr->recvb) +
1342 sizeof(struct arm_request_response) + 1341 sizeof(struct arm_request_response) +
1343 sizeof(struct arm_request)); 1342 sizeof(struct arm_request));
1344 arm_req->buffer = 1343 arm_req->buffer =
1345 int2ptr((arm_addr->recvb) + 1344 int2ptr((arm_addr->recvb) +
1346 sizeof(struct arm_request_response) + 1345 sizeof(struct arm_request_response) +
1347 sizeof(struct arm_request) + 1346 sizeof(struct arm_request) +
1348 sizeof(struct arm_response)); 1347 sizeof(struct arm_response));
1349 arm_resp->buffer = 1348 arm_resp->buffer =
1350 int2ptr((arm_addr->recvb) + 1349 int2ptr((arm_addr->recvb) +
1351 sizeof(struct arm_request_response) + 1350 sizeof(struct arm_request_response) +
1352 sizeof(struct arm_request) + 1351 sizeof(struct arm_request) +
1353 sizeof(struct arm_response) + 2 * sizeof(*store)); 1352 sizeof(struct arm_response) + 2 * sizeof(*store));
1354 queue_complete_req(req); 1353 queue_complete_req(req);
1355 } 1354 }
1356 spin_unlock_irqrestore(&host_info_lock, irqflags); 1355 spin_unlock_irqrestore(&host_info_lock, irqflags);
1357 return (rcode); 1356 return (rcode);
1358 } 1357 }
1359 1358
1360 static int arm_lock64(struct hpsb_host *host, int nodeid, octlet_t * store, 1359 static int arm_lock64(struct hpsb_host *host, int nodeid, octlet_t * store,
1361 u64 addr, octlet_t data, octlet_t arg, int ext_tcode, 1360 u64 addr, octlet_t data, octlet_t arg, int ext_tcode,
1362 u16 flags) 1361 u16 flags)
1363 { 1362 {
1364 unsigned long irqflags; 1363 unsigned long irqflags;
1365 struct pending_request *req; 1364 struct pending_request *req;
1366 struct host_info *hi; 1365 struct host_info *hi;
1367 struct file_info *fi = NULL; 1366 struct file_info *fi = NULL;
1368 struct list_head *entry; 1367 struct list_head *entry;
1369 struct arm_addr *arm_addr = NULL; 1368 struct arm_addr *arm_addr = NULL;
1370 struct arm_request *arm_req = NULL; 1369 struct arm_request *arm_req = NULL;
1371 struct arm_response *arm_resp = NULL; 1370 struct arm_response *arm_resp = NULL;
1372 int found = 0, size = 0, rcode = -1; 1371 int found = 0, size = 0, rcode = -1;
1373 octlet_t old, new; 1372 octlet_t old, new;
1374 struct arm_request_response *arm_req_resp = NULL; 1373 struct arm_request_response *arm_req_resp = NULL;
1375 1374
1376 if (((ext_tcode & 0xFF) == EXTCODE_FETCH_ADD) || 1375 if (((ext_tcode & 0xFF) == EXTCODE_FETCH_ADD) ||
1377 ((ext_tcode & 0xFF) == EXTCODE_LITTLE_ADD)) { 1376 ((ext_tcode & 0xFF) == EXTCODE_LITTLE_ADD)) {
1378 DBGMSG("arm_lock64 called by node: %X " 1377 DBGMSG("arm_lock64 called by node: %X "
1379 "addr: %4.4x %8.8x extcode: %2.2X data: %8.8X %8.8X ", 1378 "addr: %4.4x %8.8x extcode: %2.2X data: %8.8X %8.8X ",
1380 nodeid, (u16) ((addr >> 32) & 0xFFFF), 1379 nodeid, (u16) ((addr >> 32) & 0xFFFF),
1381 (u32) (addr & 0xFFFFFFFF), 1380 (u32) (addr & 0xFFFFFFFF),
1382 ext_tcode & 0xFF, 1381 ext_tcode & 0xFF,
1383 (u32) ((be64_to_cpu(data) >> 32) & 0xFFFFFFFF), 1382 (u32) ((be64_to_cpu(data) >> 32) & 0xFFFFFFFF),
1384 (u32) (be64_to_cpu(data) & 0xFFFFFFFF)); 1383 (u32) (be64_to_cpu(data) & 0xFFFFFFFF));
1385 } else { 1384 } else {
1386 DBGMSG("arm_lock64 called by node: %X " 1385 DBGMSG("arm_lock64 called by node: %X "
1387 "addr: %4.4x %8.8x extcode: %2.2X data: %8.8X %8.8X arg: " 1386 "addr: %4.4x %8.8x extcode: %2.2X data: %8.8X %8.8X arg: "
1388 "%8.8X %8.8X ", 1387 "%8.8X %8.8X ",
1389 nodeid, (u16) ((addr >> 32) & 0xFFFF), 1388 nodeid, (u16) ((addr >> 32) & 0xFFFF),
1390 (u32) (addr & 0xFFFFFFFF), 1389 (u32) (addr & 0xFFFFFFFF),
1391 ext_tcode & 0xFF, 1390 ext_tcode & 0xFF,
1392 (u32) ((be64_to_cpu(data) >> 32) & 0xFFFFFFFF), 1391 (u32) ((be64_to_cpu(data) >> 32) & 0xFFFFFFFF),
1393 (u32) (be64_to_cpu(data) & 0xFFFFFFFF), 1392 (u32) (be64_to_cpu(data) & 0xFFFFFFFF),
1394 (u32) ((be64_to_cpu(arg) >> 32) & 0xFFFFFFFF), 1393 (u32) ((be64_to_cpu(arg) >> 32) & 0xFFFFFFFF),
1395 (u32) (be64_to_cpu(arg) & 0xFFFFFFFF)); 1394 (u32) (be64_to_cpu(arg) & 0xFFFFFFFF));
1396 } 1395 }
1397 spin_lock_irqsave(&host_info_lock, irqflags); 1396 spin_lock_irqsave(&host_info_lock, irqflags);
1398 hi = find_host_info(host); /* search address entry in file_info list for host */ 1397 hi = find_host_info(host); /* search address entry in file_info list for host */
1399 if (hi != NULL) { 1398 if (hi != NULL) {
1400 list_for_each_entry(fi, &hi->file_info_list, list) { 1399 list_for_each_entry(fi, &hi->file_info_list, list) {
1401 entry = fi->addr_list.next; 1400 entry = fi->addr_list.next;
1402 while (entry != &(fi->addr_list)) { 1401 while (entry != &(fi->addr_list)) {
1403 arm_addr = 1402 arm_addr =
1404 list_entry(entry, struct arm_addr, 1403 list_entry(entry, struct arm_addr,
1405 addr_list); 1404 addr_list);
1406 if (((arm_addr->start) <= (addr)) 1405 if (((arm_addr->start) <= (addr))
1407 && ((arm_addr->end) >= 1406 && ((arm_addr->end) >=
1408 (addr + sizeof(*store)))) { 1407 (addr + sizeof(*store)))) {
1409 found = 1; 1408 found = 1;
1410 break; 1409 break;
1411 } 1410 }
1412 entry = entry->next; 1411 entry = entry->next;
1413 } 1412 }
1414 if (found) { 1413 if (found) {
1415 break; 1414 break;
1416 } 1415 }
1417 } 1416 }
1418 } 1417 }
1419 rcode = -1; 1418 rcode = -1;
1420 if (!found) { 1419 if (!found) {
1421 printk(KERN_ERR 1420 printk(KERN_ERR
1422 "raw1394: arm_lock64 FAILED addr_entry not found" 1421 "raw1394: arm_lock64 FAILED addr_entry not found"
1423 " -> rcode_address_error\n"); 1422 " -> rcode_address_error\n");
1424 spin_unlock_irqrestore(&host_info_lock, irqflags); 1423 spin_unlock_irqrestore(&host_info_lock, irqflags);
1425 return (RCODE_ADDRESS_ERROR); 1424 return (RCODE_ADDRESS_ERROR);
1426 } else { 1425 } else {
1427 DBGMSG("arm_lock64 addr_entry FOUND"); 1426 DBGMSG("arm_lock64 addr_entry FOUND");
1428 } 1427 }
1429 if (rcode == -1) { 1428 if (rcode == -1) {
1430 if (arm_addr->access_rights & ARM_LOCK) { 1429 if (arm_addr->access_rights & ARM_LOCK) {
1431 if (!(arm_addr->client_transactions & ARM_LOCK)) { 1430 if (!(arm_addr->client_transactions & ARM_LOCK)) {
1432 memcpy(&old, 1431 memcpy(&old,
1433 (arm_addr->addr_space_buffer) + (addr - 1432 (arm_addr->addr_space_buffer) + (addr -
1434 (arm_addr-> 1433 (arm_addr->
1435 start)), 1434 start)),
1436 sizeof(old)); 1435 sizeof(old));
1437 switch (ext_tcode) { 1436 switch (ext_tcode) {
1438 case (EXTCODE_MASK_SWAP): 1437 case (EXTCODE_MASK_SWAP):
1439 new = data | (old & ~arg); 1438 new = data | (old & ~arg);
1440 break; 1439 break;
1441 case (EXTCODE_COMPARE_SWAP): 1440 case (EXTCODE_COMPARE_SWAP):
1442 if (old == arg) { 1441 if (old == arg) {
1443 new = data; 1442 new = data;
1444 } else { 1443 } else {
1445 new = old; 1444 new = old;
1446 } 1445 }
1447 break; 1446 break;
1448 case (EXTCODE_FETCH_ADD): 1447 case (EXTCODE_FETCH_ADD):
1449 new = 1448 new =
1450 cpu_to_be64(be64_to_cpu(data) + 1449 cpu_to_be64(be64_to_cpu(data) +
1451 be64_to_cpu(old)); 1450 be64_to_cpu(old));
1452 break; 1451 break;
1453 case (EXTCODE_LITTLE_ADD): 1452 case (EXTCODE_LITTLE_ADD):
1454 new = 1453 new =
1455 cpu_to_le64(le64_to_cpu(data) + 1454 cpu_to_le64(le64_to_cpu(data) +
1456 le64_to_cpu(old)); 1455 le64_to_cpu(old));
1457 break; 1456 break;
1458 case (EXTCODE_BOUNDED_ADD): 1457 case (EXTCODE_BOUNDED_ADD):
1459 if (old != arg) { 1458 if (old != arg) {
1460 new = 1459 new =
1461 cpu_to_be64(be64_to_cpu 1460 cpu_to_be64(be64_to_cpu
1462 (data) + 1461 (data) +
1463 be64_to_cpu 1462 be64_to_cpu
1464 (old)); 1463 (old));
1465 } else { 1464 } else {
1466 new = old; 1465 new = old;
1467 } 1466 }
1468 break; 1467 break;
1469 case (EXTCODE_WRAP_ADD): 1468 case (EXTCODE_WRAP_ADD):
1470 if (old != arg) { 1469 if (old != arg) {
1471 new = 1470 new =
1472 cpu_to_be64(be64_to_cpu 1471 cpu_to_be64(be64_to_cpu
1473 (data) + 1472 (data) +
1474 be64_to_cpu 1473 be64_to_cpu
1475 (old)); 1474 (old));
1476 } else { 1475 } else {
1477 new = data; 1476 new = data;
1478 } 1477 }
1479 break; 1478 break;
1480 default: 1479 default:
1481 printk(KERN_ERR 1480 printk(KERN_ERR
1482 "raw1394: arm_lock64 FAILED " 1481 "raw1394: arm_lock64 FAILED "
1483 "ext_tcode not allowed -> rcode_type_error\n"); 1482 "ext_tcode not allowed -> rcode_type_error\n");
1484 rcode = RCODE_TYPE_ERROR; /* function not allowed */ 1483 rcode = RCODE_TYPE_ERROR; /* function not allowed */
1485 break; 1484 break;
1486 } /*switch */ 1485 } /*switch */
1487 if (rcode == -1) { 1486 if (rcode == -1) {
1488 DBGMSG 1487 DBGMSG
1489 ("arm_lock64 -> (rcode_complete)"); 1488 ("arm_lock64 -> (rcode_complete)");
1490 rcode = RCODE_COMPLETE; 1489 rcode = RCODE_COMPLETE;
1491 memcpy(store, &old, sizeof(*store)); 1490 memcpy(store, &old, sizeof(*store));
1492 memcpy((arm_addr->addr_space_buffer) + 1491 memcpy((arm_addr->addr_space_buffer) +
1493 (addr - (arm_addr->start)), 1492 (addr - (arm_addr->start)),
1494 &new, sizeof(*store)); 1493 &new, sizeof(*store));
1495 } 1494 }
1496 } 1495 }
1497 } else { 1496 } else {
1498 rcode = RCODE_TYPE_ERROR; /* function not allowed */ 1497 rcode = RCODE_TYPE_ERROR; /* function not allowed */
1499 DBGMSG 1498 DBGMSG
1500 ("arm_lock64 -> rcode_type_error (access denied)"); 1499 ("arm_lock64 -> rcode_type_error (access denied)");
1501 } 1500 }
1502 } 1501 }
1503 if (arm_addr->notification_options & ARM_LOCK) { 1502 if (arm_addr->notification_options & ARM_LOCK) {
1504 byte_t *buf1, *buf2; 1503 byte_t *buf1, *buf2;
1505 DBGMSG("arm_lock64 -> entering notification-section"); 1504 DBGMSG("arm_lock64 -> entering notification-section");
1506 req = __alloc_pending_request(GFP_ATOMIC); 1505 req = __alloc_pending_request(GFP_ATOMIC);
1507 if (!req) { 1506 if (!req) {
1508 spin_unlock_irqrestore(&host_info_lock, irqflags); 1507 spin_unlock_irqrestore(&host_info_lock, irqflags);
1509 DBGMSG("arm_lock64 -> rcode_conflict_error"); 1508 DBGMSG("arm_lock64 -> rcode_conflict_error");
1510 return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected. 1509 return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected.
1511 The request may be retried */ 1510 The request may be retried */
1512 } 1511 }
1513 size = sizeof(struct arm_request) + sizeof(struct arm_response) + 3 * sizeof(*store) + sizeof(struct arm_request_response); /* maximum */ 1512 size = sizeof(struct arm_request) + sizeof(struct arm_response) + 3 * sizeof(*store) + sizeof(struct arm_request_response); /* maximum */
1514 req->data = kmalloc(size, GFP_ATOMIC); 1513 req->data = kmalloc(size, GFP_ATOMIC);
1515 if (!(req->data)) { 1514 if (!(req->data)) {
1516 free_pending_request(req); 1515 free_pending_request(req);
1517 spin_unlock_irqrestore(&host_info_lock, irqflags); 1516 spin_unlock_irqrestore(&host_info_lock, irqflags);
1518 DBGMSG("arm_lock64 -> rcode_conflict_error"); 1517 DBGMSG("arm_lock64 -> rcode_conflict_error");
1519 return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected. 1518 return (RCODE_CONFLICT_ERROR); /* A resource conflict was detected.
1520 The request may be retried */ 1519 The request may be retried */
1521 } 1520 }
1522 req->free_data = 1; 1521 req->free_data = 1;
1523 arm_req_resp = (struct arm_request_response *)(req->data); 1522 arm_req_resp = (struct arm_request_response *)(req->data);
1524 arm_req = (struct arm_request *)((byte_t *) (req->data) + 1523 arm_req = (struct arm_request *)((byte_t *) (req->data) +
1525 (sizeof 1524 (sizeof
1526 (struct 1525 (struct
1527 arm_request_response))); 1526 arm_request_response)));
1528 arm_resp = 1527 arm_resp =
1529 (struct arm_response *)((byte_t *) (arm_req) + 1528 (struct arm_response *)((byte_t *) (arm_req) +
1530 (sizeof(struct arm_request))); 1529 (sizeof(struct arm_request)));
1531 buf1 = (byte_t *) arm_resp + sizeof(struct arm_response); 1530 buf1 = (byte_t *) arm_resp + sizeof(struct arm_response);
1532 buf2 = buf1 + 2 * sizeof(*store); 1531 buf2 = buf1 + 2 * sizeof(*store);
1533 if ((ext_tcode == EXTCODE_FETCH_ADD) || 1532 if ((ext_tcode == EXTCODE_FETCH_ADD) ||
1534 (ext_tcode == EXTCODE_LITTLE_ADD)) { 1533 (ext_tcode == EXTCODE_LITTLE_ADD)) {
1535 arm_req->buffer_length = sizeof(*store); 1534 arm_req->buffer_length = sizeof(*store);
1536 memcpy(buf1, &data, sizeof(*store)); 1535 memcpy(buf1, &data, sizeof(*store));
1537 1536
1538 } else { 1537 } else {
1539 arm_req->buffer_length = 2 * sizeof(*store); 1538 arm_req->buffer_length = 2 * sizeof(*store);
1540 memcpy(buf1, &arg, sizeof(*store)); 1539 memcpy(buf1, &arg, sizeof(*store));
1541 memcpy(buf1 + sizeof(*store), &data, sizeof(*store)); 1540 memcpy(buf1 + sizeof(*store), &data, sizeof(*store));
1542 } 1541 }
1543 if (rcode == RCODE_COMPLETE) { 1542 if (rcode == RCODE_COMPLETE) {
1544 arm_resp->buffer_length = sizeof(*store); 1543 arm_resp->buffer_length = sizeof(*store);
1545 memcpy(buf2, &old, sizeof(*store)); 1544 memcpy(buf2, &old, sizeof(*store));
1546 } else { 1545 } else {
1547 arm_resp->buffer_length = 0; 1546 arm_resp->buffer_length = 0;
1548 } 1547 }
1549 req->file_info = fi; 1548 req->file_info = fi;
1550 req->req.type = RAW1394_REQ_ARM; 1549 req->req.type = RAW1394_REQ_ARM;
1551 req->req.generation = get_hpsb_generation(host); 1550 req->req.generation = get_hpsb_generation(host);
1552 req->req.misc = ((((sizeof(*store)) << 16) & (0xFFFF0000)) | 1551 req->req.misc = ((((sizeof(*store)) << 16) & (0xFFFF0000)) |
1553 (ARM_LOCK & 0xFF)); 1552 (ARM_LOCK & 0xFF));
1554 req->req.tag = arm_addr->arm_tag; 1553 req->req.tag = arm_addr->arm_tag;
1555 req->req.recvb = arm_addr->recvb; 1554 req->req.recvb = arm_addr->recvb;
1556 req->req.length = size; 1555 req->req.length = size;
1557 arm_req->generation = req->req.generation; 1556 arm_req->generation = req->req.generation;
1558 arm_req->extended_transaction_code = ext_tcode; 1557 arm_req->extended_transaction_code = ext_tcode;
1559 arm_req->destination_offset = addr; 1558 arm_req->destination_offset = addr;
1560 arm_req->source_nodeid = nodeid; 1559 arm_req->source_nodeid = nodeid;
1561 arm_req->destination_nodeid = host->node_id; 1560 arm_req->destination_nodeid = host->node_id;
1562 arm_req->tlabel = (flags >> 10) & 0x3f; 1561 arm_req->tlabel = (flags >> 10) & 0x3f;
1563 arm_req->tcode = (flags >> 4) & 0x0f; 1562 arm_req->tcode = (flags >> 4) & 0x0f;
1564 arm_resp->response_code = rcode; 1563 arm_resp->response_code = rcode;
1565 arm_req_resp->request = int2ptr((arm_addr->recvb) + 1564 arm_req_resp->request = int2ptr((arm_addr->recvb) +
1566 sizeof(struct 1565 sizeof(struct
1567 arm_request_response)); 1566 arm_request_response));
1568 arm_req_resp->response = 1567 arm_req_resp->response =
1569 int2ptr((arm_addr->recvb) + 1568 int2ptr((arm_addr->recvb) +
1570 sizeof(struct arm_request_response) + 1569 sizeof(struct arm_request_response) +
1571 sizeof(struct arm_request)); 1570 sizeof(struct arm_request));
1572 arm_req->buffer = 1571 arm_req->buffer =
1573 int2ptr((arm_addr->recvb) + 1572 int2ptr((arm_addr->recvb) +
1574 sizeof(struct arm_request_response) + 1573 sizeof(struct arm_request_response) +
1575 sizeof(struct arm_request) + 1574 sizeof(struct arm_request) +
1576 sizeof(struct arm_response)); 1575 sizeof(struct arm_response));
1577 arm_resp->buffer = 1576 arm_resp->buffer =
1578 int2ptr((arm_addr->recvb) + 1577 int2ptr((arm_addr->recvb) +
1579 sizeof(struct arm_request_response) + 1578 sizeof(struct arm_request_response) +
1580 sizeof(struct arm_request) + 1579 sizeof(struct arm_request) +
1581 sizeof(struct arm_response) + 2 * sizeof(*store)); 1580 sizeof(struct arm_response) + 2 * sizeof(*store));
1582 queue_complete_req(req); 1581 queue_complete_req(req);
1583 } 1582 }
1584 spin_unlock_irqrestore(&host_info_lock, irqflags); 1583 spin_unlock_irqrestore(&host_info_lock, irqflags);
1585 return (rcode); 1584 return (rcode);
1586 } 1585 }
1587 1586
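The arm_lock() and arm_lock64() handlers above emulate the extended tcodes of the IEEE 1394 lock transaction in software on the registered buffer. As a minimal user-space sketch of that arithmetic — do_lock64() and its parameter names are illustrative only, and the big-/little-endian handling the driver performs for the *_ADD variants is elided here:

    #include <stdint.h>

    enum ext_tcode {
            EXT_MASK_SWAP,
            EXT_COMPARE_SWAP,
            EXT_FETCH_ADD,          /* driver: big-endian operands   */
            EXT_BOUNDED_ADD,
            EXT_WRAP_ADD,
    };

    uint64_t do_lock64(enum ext_tcode tc, uint64_t old, uint64_t data,
                       uint64_t arg)
    {
            switch (tc) {
            case EXT_MASK_SWAP:    return data | (old & ~arg);
            case EXT_COMPARE_SWAP: return old == arg ? data : old;
            case EXT_FETCH_ADD:    return old + data;
            case EXT_BOUNDED_ADD:  return old != arg ? old + data : old;
            case EXT_WRAP_ADD:     return old != arg ? old + data : data;
            }
            return old;     /* unknown tcode: leave the buffer unchanged */
    }

EXTCODE_FETCH_ADD and EXTCODE_LITTLE_ADD differ only in whether the operands are interpreted as big- or little-endian quantities; the selection logic is otherwise identical.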
1588 static int arm_register(struct file_info *fi, struct pending_request *req) 1587 static int arm_register(struct file_info *fi, struct pending_request *req)
1589 { 1588 {
1590 int retval; 1589 int retval;
1591 struct arm_addr *addr; 1590 struct arm_addr *addr;
1592 struct host_info *hi; 1591 struct host_info *hi;
1593 struct file_info *fi_hlp = NULL; 1592 struct file_info *fi_hlp = NULL;
1594 struct list_head *entry; 1593 struct list_head *entry;
1595 struct arm_addr *arm_addr = NULL; 1594 struct arm_addr *arm_addr = NULL;
1596 int same_host, another_host; 1595 int same_host, another_host;
1597 unsigned long flags; 1596 unsigned long flags;
1598 1597
1599 DBGMSG("arm_register called " 1598 DBGMSG("arm_register called "
1600 "addr(Offset): %8.8x %8.8x length: %u " 1599 "addr(Offset): %8.8x %8.8x length: %u "
1601 "rights: %2.2X notify: %2.2X " 1600 "rights: %2.2X notify: %2.2X "
1602 "max_blk_len: %4.4X", 1601 "max_blk_len: %4.4X",
1603 (u32) ((req->req.address >> 32) & 0xFFFF), 1602 (u32) ((req->req.address >> 32) & 0xFFFF),
1604 (u32) (req->req.address & 0xFFFFFFFF), 1603 (u32) (req->req.address & 0xFFFFFFFF),
1605 req->req.length, ((req->req.misc >> 8) & 0xFF), 1604 req->req.length, ((req->req.misc >> 8) & 0xFF),
1606 (req->req.misc & 0xFF), ((req->req.misc >> 16) & 0xFFFF)); 1605 (req->req.misc & 0xFF), ((req->req.misc >> 16) & 0xFFFF));
1607 /* check addressrange */ 1606 /* check addressrange */
1608 if ((((req->req.address) & ~(0xFFFFFFFFFFFFULL)) != 0) || 1607 if ((((req->req.address) & ~(0xFFFFFFFFFFFFULL)) != 0) ||
1609 (((req->req.address + req->req.length) & ~(0xFFFFFFFFFFFFULL)) != 1608 (((req->req.address + req->req.length) & ~(0xFFFFFFFFFFFFULL)) !=
1610 0)) { 1609 0)) {
1611 req->req.length = 0; 1610 req->req.length = 0;
1612 return (-EINVAL); 1611 return (-EINVAL);
1613 } 1612 }
1614 /* addr-list-entry for fileinfo */ 1613 /* addr-list-entry for fileinfo */
1615 addr = kmalloc(sizeof(*addr), GFP_KERNEL); 1614 addr = kmalloc(sizeof(*addr), GFP_KERNEL);
1616 if (!addr) { 1615 if (!addr) {
1617 req->req.length = 0; 1616 req->req.length = 0;
1618 return (-ENOMEM); 1617 return (-ENOMEM);
1619 } 1618 }
1620 /* allocation of addr_space_buffer */ 1619 /* allocation of addr_space_buffer */
1621 addr->addr_space_buffer = vmalloc(req->req.length); 1620 addr->addr_space_buffer = vmalloc(req->req.length);
1622 if (!(addr->addr_space_buffer)) { 1621 if (!(addr->addr_space_buffer)) {
1623 kfree(addr); 1622 kfree(addr);
1624 req->req.length = 0; 1623 req->req.length = 0;
1625 return (-ENOMEM); 1624 return (-ENOMEM);
1626 } 1625 }
1627 /* initialization of addr_space_buffer */ 1626 /* initialization of addr_space_buffer */
1628 if ((req->req.sendb) == (unsigned long)NULL) { 1627 if ((req->req.sendb) == (unsigned long)NULL) {
1629 /* init: set 0 */ 1628 /* init: set 0 */
1630 memset(addr->addr_space_buffer, 0, req->req.length); 1629 memset(addr->addr_space_buffer, 0, req->req.length);
1631 } else { 1630 } else {
1632 /* init: user -> kernel */ 1631 /* init: user -> kernel */
1633 if (copy_from_user 1632 if (copy_from_user
1634 (addr->addr_space_buffer, int2ptr(req->req.sendb), 1633 (addr->addr_space_buffer, int2ptr(req->req.sendb),
1635 req->req.length)) { 1634 req->req.length)) {
1636 vfree(addr->addr_space_buffer); 1635 vfree(addr->addr_space_buffer);
1637 kfree(addr); 1636 kfree(addr);
1638 return (-EFAULT); 1637 return (-EFAULT);
1639 } 1638 }
1640 } 1639 }
1641 INIT_LIST_HEAD(&addr->addr_list); 1640 INIT_LIST_HEAD(&addr->addr_list);
1642 addr->arm_tag = req->req.tag; 1641 addr->arm_tag = req->req.tag;
1643 addr->start = req->req.address; 1642 addr->start = req->req.address;
1644 addr->end = req->req.address + req->req.length; 1643 addr->end = req->req.address + req->req.length;
1645 addr->access_rights = (u8) (req->req.misc & 0x0F); 1644 addr->access_rights = (u8) (req->req.misc & 0x0F);
1646 addr->notification_options = (u8) ((req->req.misc >> 4) & 0x0F); 1645 addr->notification_options = (u8) ((req->req.misc >> 4) & 0x0F);
1647 addr->client_transactions = (u8) ((req->req.misc >> 8) & 0x0F); 1646 addr->client_transactions = (u8) ((req->req.misc >> 8) & 0x0F);
1648 addr->access_rights |= addr->client_transactions; 1647 addr->access_rights |= addr->client_transactions;
1649 addr->notification_options |= addr->client_transactions; 1648 addr->notification_options |= addr->client_transactions;
1650 addr->recvb = req->req.recvb; 1649 addr->recvb = req->req.recvb;
1651 addr->rec_length = (u16) ((req->req.misc >> 16) & 0xFFFF); 1650 addr->rec_length = (u16) ((req->req.misc >> 16) & 0xFFFF);
1652 1651
1653 spin_lock_irqsave(&host_info_lock, flags); 1652 spin_lock_irqsave(&host_info_lock, flags);
1654 hi = find_host_info(fi->host); 1653 hi = find_host_info(fi->host);
1655 same_host = 0; 1654 same_host = 0;
1656 another_host = 0; 1655 another_host = 0;
1657 /* same host with address-entry containing same addressrange ? */ 1656 /* same host with address-entry containing same addressrange ? */
1658 list_for_each_entry(fi_hlp, &hi->file_info_list, list) { 1657 list_for_each_entry(fi_hlp, &hi->file_info_list, list) {
1659 entry = fi_hlp->addr_list.next; 1658 entry = fi_hlp->addr_list.next;
1660 while (entry != &(fi_hlp->addr_list)) { 1659 while (entry != &(fi_hlp->addr_list)) {
1661 arm_addr = 1660 arm_addr =
1662 list_entry(entry, struct arm_addr, addr_list); 1661 list_entry(entry, struct arm_addr, addr_list);
1663 if ((arm_addr->start == addr->start) 1662 if ((arm_addr->start == addr->start)
1664 && (arm_addr->end == addr->end)) { 1663 && (arm_addr->end == addr->end)) {
1665 DBGMSG("same host ownes same " 1664 DBGMSG("same host ownes same "
1666 "addressrange -> EALREADY"); 1665 "addressrange -> EALREADY");
1667 same_host = 1; 1666 same_host = 1;
1668 break; 1667 break;
1669 } 1668 }
1670 entry = entry->next; 1669 entry = entry->next;
1671 } 1670 }
1672 if (same_host) { 1671 if (same_host) {
1673 break; 1672 break;
1674 } 1673 }
1675 } 1674 }
1676 if (same_host) { 1675 if (same_host) {
1677 /* addressrange occupied by same host */ 1676 /* addressrange occupied by same host */
1678 spin_unlock_irqrestore(&host_info_lock, flags); 1677 spin_unlock_irqrestore(&host_info_lock, flags);
1679 vfree(addr->addr_space_buffer); 1678 vfree(addr->addr_space_buffer);
1680 kfree(addr); 1679 kfree(addr);
1681 return (-EALREADY); 1680 return (-EALREADY);
1682 } 1681 }
1683 /* another host with valid address-entry containing same addressrange */ 1682 /* another host with valid address-entry containing same addressrange */
1684 list_for_each_entry(hi, &host_info_list, list) { 1683 list_for_each_entry(hi, &host_info_list, list) {
1685 if (hi->host != fi->host) { 1684 if (hi->host != fi->host) {
1686 list_for_each_entry(fi_hlp, &hi->file_info_list, list) { 1685 list_for_each_entry(fi_hlp, &hi->file_info_list, list) {
1687 entry = fi_hlp->addr_list.next; 1686 entry = fi_hlp->addr_list.next;
1688 while (entry != &(fi_hlp->addr_list)) { 1687 while (entry != &(fi_hlp->addr_list)) {
1689 arm_addr = 1688 arm_addr =
1690 list_entry(entry, struct arm_addr, 1689 list_entry(entry, struct arm_addr,
1691 addr_list); 1690 addr_list);
1692 if ((arm_addr->start == addr->start) 1691 if ((arm_addr->start == addr->start)
1693 && (arm_addr->end == addr->end)) { 1692 && (arm_addr->end == addr->end)) {
1694 DBGMSG 1693 DBGMSG
1695 ("another host ownes same " 1694 ("another host ownes same "
1696 "addressrange"); 1695 "addressrange");
1697 another_host = 1; 1696 another_host = 1;
1698 break; 1697 break;
1699 } 1698 }
1700 entry = entry->next; 1699 entry = entry->next;
1701 } 1700 }
1702 if (another_host) { 1701 if (another_host) {
1703 break; 1702 break;
1704 } 1703 }
1705 } 1704 }
1706 } 1705 }
1707 } 1706 }
1708 spin_unlock_irqrestore(&host_info_lock, flags); 1707 spin_unlock_irqrestore(&host_info_lock, flags);
1709 1708
1710 if (another_host) { 1709 if (another_host) {
1711 DBGMSG("another hosts entry is valid -> SUCCESS"); 1710 DBGMSG("another hosts entry is valid -> SUCCESS");
1712 if (copy_to_user(int2ptr(req->req.recvb), 1711 if (copy_to_user(int2ptr(req->req.recvb),
1713 &addr->start, sizeof(u64))) { 1712 &addr->start, sizeof(u64))) {
1714 printk(KERN_ERR "raw1394: arm_register failed " 1713 printk(KERN_ERR "raw1394: arm_register failed "
1715 " address-range-entry is invalid -> EFAULT !!!\n"); 1714 " address-range-entry is invalid -> EFAULT !!!\n");
1716 vfree(addr->addr_space_buffer); 1715 vfree(addr->addr_space_buffer);
1717 kfree(addr); 1716 kfree(addr);
1718 return (-EFAULT); 1717 return (-EFAULT);
1719 } 1718 }
1720 free_pending_request(req); /* immediate success or fail */ 1719 free_pending_request(req); /* immediate success or fail */
1721 /* INSERT ENTRY */ 1720 /* INSERT ENTRY */
1722 spin_lock_irqsave(&host_info_lock, flags); 1721 spin_lock_irqsave(&host_info_lock, flags);
1723 list_add_tail(&addr->addr_list, &fi->addr_list); 1722 list_add_tail(&addr->addr_list, &fi->addr_list);
1724 spin_unlock_irqrestore(&host_info_lock, flags); 1723 spin_unlock_irqrestore(&host_info_lock, flags);
1725 return 0; 1724 return 0;
1726 } 1725 }
1727 retval = 1726 retval =
1728 hpsb_register_addrspace(&raw1394_highlevel, fi->host, &arm_ops, 1727 hpsb_register_addrspace(&raw1394_highlevel, fi->host, &arm_ops,
1729 req->req.address, 1728 req->req.address,
1730 req->req.address + req->req.length); 1729 req->req.address + req->req.length);
1731 if (retval) { 1730 if (retval) {
1732 /* INSERT ENTRY */ 1731 /* INSERT ENTRY */
1733 spin_lock_irqsave(&host_info_lock, flags); 1732 spin_lock_irqsave(&host_info_lock, flags);
1734 list_add_tail(&addr->addr_list, &fi->addr_list); 1733 list_add_tail(&addr->addr_list, &fi->addr_list);
1735 spin_unlock_irqrestore(&host_info_lock, flags); 1734 spin_unlock_irqrestore(&host_info_lock, flags);
1736 } else { 1735 } else {
1737 DBGMSG("arm_register failed errno: %d \n", retval); 1736 DBGMSG("arm_register failed errno: %d \n", retval);
1738 vfree(addr->addr_space_buffer); 1737 vfree(addr->addr_space_buffer);
1739 kfree(addr); 1738 kfree(addr);
1740 return (-EALREADY); 1739 return (-EALREADY);
1741 } 1740 }
1742 free_pending_request(req); /* immediate success or fail */ 1741 free_pending_request(req); /* immediate success or fail */
1743 return 0; 1742 return 0;
1744 } 1743 }
1745 1744
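arm_register() above packs four fields into req->req.misc. A sketch of the decoding, with illustrative struct and function names; the field layout is read directly off the assignments in the function:

    #include <stdint.h>

    struct arm_params {
            uint8_t  access_rights;         /* misc bits 0..3   */
            uint8_t  notification_options;  /* misc bits 4..7   */
            uint8_t  client_transactions;   /* misc bits 8..11  */
            uint16_t rec_length;            /* misc bits 16..31 */
    };

    struct arm_params unpack_arm_misc(uint32_t misc)
    {
            struct arm_params p = {
                    .access_rights        = misc & 0x0F,
                    .notification_options = (misc >> 4) & 0x0F,
                    .client_transactions  = (misc >> 8) & 0x0F,
                    .rec_length           = (misc >> 16) & 0xFFFF,
            };

            /* transaction types handled by the client imply both the
             * access right and the notification for that type */
            p.access_rights        |= p.client_transactions;
            p.notification_options |= p.client_transactions;
            return p;
    }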
1746 static int arm_unregister(struct file_info *fi, struct pending_request *req) 1745 static int arm_unregister(struct file_info *fi, struct pending_request *req)
1747 { 1746 {
1748 int found = 0; 1747 int found = 0;
1749 int retval = 0; 1748 int retval = 0;
1750 struct list_head *entry; 1749 struct list_head *entry;
1751 struct arm_addr *addr = NULL; 1750 struct arm_addr *addr = NULL;
1752 struct host_info *hi; 1751 struct host_info *hi;
1753 struct file_info *fi_hlp = NULL; 1752 struct file_info *fi_hlp = NULL;
1754 struct arm_addr *arm_addr = NULL; 1753 struct arm_addr *arm_addr = NULL;
1755 int another_host; 1754 int another_host;
1756 unsigned long flags; 1755 unsigned long flags;
1757 1756
1758 DBGMSG("arm_Unregister called addr(Offset): " 1757 DBGMSG("arm_Unregister called addr(Offset): "
1759 "%8.8x %8.8x", 1758 "%8.8x %8.8x",
1760 (u32) ((req->req.address >> 32) & 0xFFFF), 1759 (u32) ((req->req.address >> 32) & 0xFFFF),
1761 (u32) (req->req.address & 0xFFFFFFFF)); 1760 (u32) (req->req.address & 0xFFFFFFFF));
1762 spin_lock_irqsave(&host_info_lock, flags); 1761 spin_lock_irqsave(&host_info_lock, flags);
1763 /* get addr */ 1762 /* get addr */
1764 entry = fi->addr_list.next; 1763 entry = fi->addr_list.next;
1765 while (entry != &(fi->addr_list)) { 1764 while (entry != &(fi->addr_list)) {
1766 addr = list_entry(entry, struct arm_addr, addr_list); 1765 addr = list_entry(entry, struct arm_addr, addr_list);
1767 if (addr->start == req->req.address) { 1766 if (addr->start == req->req.address) {
1768 found = 1; 1767 found = 1;
1769 break; 1768 break;
1770 } 1769 }
1771 entry = entry->next; 1770 entry = entry->next;
1772 } 1771 }
1773 if (!found) { 1772 if (!found) {
1774 DBGMSG("arm_Unregister addr not found"); 1773 DBGMSG("arm_Unregister addr not found");
1775 spin_unlock_irqrestore(&host_info_lock, flags); 1774 spin_unlock_irqrestore(&host_info_lock, flags);
1776 return (-EINVAL); 1775 return (-EINVAL);
1777 } 1776 }
1778 DBGMSG("arm_Unregister addr found"); 1777 DBGMSG("arm_Unregister addr found");
1779 another_host = 0; 1778 another_host = 0;
1780 /* another host with valid address-entry containing 1779 /* another host with valid address-entry containing
1781 same addressrange */ 1780 same addressrange */
1782 list_for_each_entry(hi, &host_info_list, list) { 1781 list_for_each_entry(hi, &host_info_list, list) {
1783 if (hi->host != fi->host) { 1782 if (hi->host != fi->host) {
1784 list_for_each_entry(fi_hlp, &hi->file_info_list, list) { 1783 list_for_each_entry(fi_hlp, &hi->file_info_list, list) {
1785 entry = fi_hlp->addr_list.next; 1784 entry = fi_hlp->addr_list.next;
1786 while (entry != &(fi_hlp->addr_list)) { 1785 while (entry != &(fi_hlp->addr_list)) {
1787 arm_addr = list_entry(entry, 1786 arm_addr = list_entry(entry,
1788 struct arm_addr, 1787 struct arm_addr,
1789 addr_list); 1788 addr_list);
1790 if (arm_addr->start == addr->start) { 1789 if (arm_addr->start == addr->start) {
1791 DBGMSG("another host ownes " 1790 DBGMSG("another host ownes "
1792 "same addressrange"); 1791 "same addressrange");
1793 another_host = 1; 1792 another_host = 1;
1794 break; 1793 break;
1795 } 1794 }
1796 entry = entry->next; 1795 entry = entry->next;
1797 } 1796 }
1798 if (another_host) { 1797 if (another_host) {
1799 break; 1798 break;
1800 } 1799 }
1801 } 1800 }
1802 } 1801 }
1803 } 1802 }
1804 if (another_host) { 1803 if (another_host) {
1805 DBGMSG("delete entry from list -> success"); 1804 DBGMSG("delete entry from list -> success");
1806 list_del(&addr->addr_list); 1805 list_del(&addr->addr_list);
1807 spin_unlock_irqrestore(&host_info_lock, flags); 1806 spin_unlock_irqrestore(&host_info_lock, flags);
1808 vfree(addr->addr_space_buffer); 1807 vfree(addr->addr_space_buffer);
1809 kfree(addr); 1808 kfree(addr);
1810 free_pending_request(req); /* immediate success or fail */ 1809 free_pending_request(req); /* immediate success or fail */
1811 return 0; 1810 return 0;
1812 } 1811 }
1813 retval = 1812 retval =
1814 hpsb_unregister_addrspace(&raw1394_highlevel, fi->host, 1813 hpsb_unregister_addrspace(&raw1394_highlevel, fi->host,
1815 addr->start); 1814 addr->start);
1816 if (!retval) { 1815 if (!retval) {
1817 printk(KERN_ERR "raw1394: arm_Unregister failed -> EINVAL\n"); 1816 printk(KERN_ERR "raw1394: arm_Unregister failed -> EINVAL\n");
1818 spin_unlock_irqrestore(&host_info_lock, flags); 1817 spin_unlock_irqrestore(&host_info_lock, flags);
1819 return (-EINVAL); 1818 return (-EINVAL);
1820 } 1819 }
1821 DBGMSG("delete entry from list -> success"); 1820 DBGMSG("delete entry from list -> success");
1822 list_del(&addr->addr_list); 1821 list_del(&addr->addr_list);
1823 spin_unlock_irqrestore(&host_info_lock, flags); 1822 spin_unlock_irqrestore(&host_info_lock, flags);
1824 vfree(addr->addr_space_buffer); 1823 vfree(addr->addr_space_buffer);
1825 kfree(addr); 1824 kfree(addr);
1826 free_pending_request(req); /* immediate success or fail */ 1825 free_pending_request(req); /* immediate success or fail */
1827 return 0; 1826 return 0;
1828 } 1827 }
1829 1828
1830 /* Copy data from ARM buffer(s) to user buffer. */ 1829 /* Copy data from ARM buffer(s) to user buffer. */
1831 static int arm_get_buf(struct file_info *fi, struct pending_request *req) 1830 static int arm_get_buf(struct file_info *fi, struct pending_request *req)
1832 { 1831 {
1833 struct arm_addr *arm_addr = NULL; 1832 struct arm_addr *arm_addr = NULL;
1834 unsigned long flags; 1833 unsigned long flags;
1835 unsigned long offset; 1834 unsigned long offset;
1836 1835
1837 struct list_head *entry; 1836 struct list_head *entry;
1838 1837
1839 DBGMSG("arm_get_buf " 1838 DBGMSG("arm_get_buf "
1840 "addr(Offset): %04X %08X length: %u", 1839 "addr(Offset): %04X %08X length: %u",
1841 (u32) ((req->req.address >> 32) & 0xFFFF), 1840 (u32) ((req->req.address >> 32) & 0xFFFF),
1842 (u32) (req->req.address & 0xFFFFFFFF), (u32) req->req.length); 1841 (u32) (req->req.address & 0xFFFFFFFF), (u32) req->req.length);
1843 1842
1844 spin_lock_irqsave(&host_info_lock, flags); 1843 spin_lock_irqsave(&host_info_lock, flags);
1845 entry = fi->addr_list.next; 1844 entry = fi->addr_list.next;
1846 while (entry != &(fi->addr_list)) { 1845 while (entry != &(fi->addr_list)) {
1847 arm_addr = list_entry(entry, struct arm_addr, addr_list); 1846 arm_addr = list_entry(entry, struct arm_addr, addr_list);
1848 if ((arm_addr->start <= req->req.address) && 1847 if ((arm_addr->start <= req->req.address) &&
1849 (arm_addr->end > req->req.address)) { 1848 (arm_addr->end > req->req.address)) {
1850 if (req->req.address + req->req.length <= arm_addr->end) { 1849 if (req->req.address + req->req.length <= arm_addr->end) {
1851 offset = req->req.address - arm_addr->start; 1850 offset = req->req.address - arm_addr->start;
1852 spin_unlock_irqrestore(&host_info_lock, flags); 1851 spin_unlock_irqrestore(&host_info_lock, flags);
1853 1852
1854 DBGMSG 1853 DBGMSG
1855 ("arm_get_buf copy_to_user( %08X, %p, %u )", 1854 ("arm_get_buf copy_to_user( %08X, %p, %u )",
1856 (u32) req->req.recvb, 1855 (u32) req->req.recvb,
1857 arm_addr->addr_space_buffer + offset, 1856 arm_addr->addr_space_buffer + offset,
1858 (u32) req->req.length); 1857 (u32) req->req.length);
1859 if (copy_to_user 1858 if (copy_to_user
1860 (int2ptr(req->req.recvb), 1859 (int2ptr(req->req.recvb),
1861 arm_addr->addr_space_buffer + offset, 1860 arm_addr->addr_space_buffer + offset,
1862 req->req.length)) 1861 req->req.length))
1863 return (-EFAULT); 1862 return (-EFAULT);
1864 1863
1865 /* We have to free the request, because we 1864 /* We have to free the request, because we
1866 * queue no response, and therefore nobody 1865 * queue no response, and therefore nobody
1867 * will free it. */ 1866 * will free it. */
1868 free_pending_request(req); 1867 free_pending_request(req);
1869 return 0; 1868 return 0;
1870 } else { 1869 } else {
1871 DBGMSG("arm_get_buf request exceeded mapping"); 1870 DBGMSG("arm_get_buf request exceeded mapping");
1872 spin_unlock_irqrestore(&host_info_lock, flags); 1871 spin_unlock_irqrestore(&host_info_lock, flags);
1873 return (-EINVAL); 1872 return (-EINVAL);
1874 } 1873 }
1875 } 1874 }
1876 entry = entry->next; 1875 entry = entry->next;
1877 } 1876 }
1878 spin_unlock_irqrestore(&host_info_lock, flags); 1877 spin_unlock_irqrestore(&host_info_lock, flags);
1879 return (-EINVAL); 1878 return (-EINVAL);
1880 } 1879 }
1881 1880
1882 /* Copy data from user buffer to ARM buffer(s). */ 1881 /* Copy data from user buffer to ARM buffer(s). */
1883 static int arm_set_buf(struct file_info *fi, struct pending_request *req) 1882 static int arm_set_buf(struct file_info *fi, struct pending_request *req)
1884 { 1883 {
1885 struct arm_addr *arm_addr = NULL; 1884 struct arm_addr *arm_addr = NULL;
1886 unsigned long flags; 1885 unsigned long flags;
1887 unsigned long offset; 1886 unsigned long offset;
1888 1887
1889 struct list_head *entry; 1888 struct list_head *entry;
1890 1889
1891 DBGMSG("arm_set_buf " 1890 DBGMSG("arm_set_buf "
1892 "addr(Offset): %04X %08X length: %u", 1891 "addr(Offset): %04X %08X length: %u",
1893 (u32) ((req->req.address >> 32) & 0xFFFF), 1892 (u32) ((req->req.address >> 32) & 0xFFFF),
1894 (u32) (req->req.address & 0xFFFFFFFF), (u32) req->req.length); 1893 (u32) (req->req.address & 0xFFFFFFFF), (u32) req->req.length);
1895 1894
1896 spin_lock_irqsave(&host_info_lock, flags); 1895 spin_lock_irqsave(&host_info_lock, flags);
1897 entry = fi->addr_list.next; 1896 entry = fi->addr_list.next;
1898 while (entry != &(fi->addr_list)) { 1897 while (entry != &(fi->addr_list)) {
1899 arm_addr = list_entry(entry, struct arm_addr, addr_list); 1898 arm_addr = list_entry(entry, struct arm_addr, addr_list);
1900 if ((arm_addr->start <= req->req.address) && 1899 if ((arm_addr->start <= req->req.address) &&
1901 (arm_addr->end > req->req.address)) { 1900 (arm_addr->end > req->req.address)) {
1902 if (req->req.address + req->req.length <= arm_addr->end) { 1901 if (req->req.address + req->req.length <= arm_addr->end) {
1903 offset = req->req.address - arm_addr->start; 1902 offset = req->req.address - arm_addr->start;
1904 spin_unlock_irqrestore(&host_info_lock, flags); 1903 spin_unlock_irqrestore(&host_info_lock, flags);
1905 1904
1906 DBGMSG 1905 DBGMSG
1907 ("arm_set_buf copy_from_user( %p, %08X, %u )", 1906 ("arm_set_buf copy_from_user( %p, %08X, %u )",
1908 arm_addr->addr_space_buffer + offset, 1907 arm_addr->addr_space_buffer + offset,
1909 (u32) req->req.sendb, 1908 (u32) req->req.sendb,
1910 (u32) req->req.length); 1909 (u32) req->req.length);
1911 if (copy_from_user 1910 if (copy_from_user
1912 (arm_addr->addr_space_buffer + offset, 1911 (arm_addr->addr_space_buffer + offset,
1913 int2ptr(req->req.sendb), 1912 int2ptr(req->req.sendb),
1914 req->req.length)) 1913 req->req.length))
1915 return (-EFAULT); 1914 return (-EFAULT);
1916 1915
1917 /* We have to free the request, because we 1916 /* We have to free the request, because we
1918 * queue no response, and therefore nobody 1917 * queue no response, and therefore nobody
1919 * will free it. */ 1918 * will free it. */
1920 free_pending_request(req); 1919 free_pending_request(req);
1921 return 0; 1920 return 0;
1922 } else { 1921 } else {
1923 DBGMSG("arm_set_buf request exceeded mapping"); 1922 DBGMSG("arm_set_buf request exceeded mapping");
1924 spin_unlock_irqrestore(&host_info_lock, flags); 1923 spin_unlock_irqrestore(&host_info_lock, flags);
1925 return (-EINVAL); 1924 return (-EINVAL);
1926 } 1925 }
1927 } 1926 }
1928 entry = entry->next; 1927 entry = entry->next;
1929 } 1928 }
1930 spin_unlock_irqrestore(&host_info_lock, flags); 1929 spin_unlock_irqrestore(&host_info_lock, flags);
1931 return (-EINVAL); 1930 return (-EINVAL);
1932 } 1931 }
1933 1932
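arm_get_buf() and arm_set_buf() above apply the same containment test against a registered [start, end) range before touching the buffer. Reduced to a predicate (names illustrative):

    #include <stdint.h>

    /* request [addr, addr+len) must lie entirely inside [start, end) */
    int range_contains(uint64_t start, uint64_t end,
                       uint64_t addr, uint64_t len)
    {
            return start <= addr && addr < end && addr + len <= end;
    }

A request that starts inside the range but runs past its end fails the third term and is rejected with -EINVAL ("request exceeded mapping").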
1934 static int reset_notification(struct file_info *fi, struct pending_request *req) 1933 static int reset_notification(struct file_info *fi, struct pending_request *req)
1935 { 1934 {
1936 DBGMSG("reset_notification called - switch %s ", 1935 DBGMSG("reset_notification called - switch %s ",
1937 (req->req.misc == RAW1394_NOTIFY_OFF) ? "OFF" : "ON"); 1936 (req->req.misc == RAW1394_NOTIFY_OFF) ? "OFF" : "ON");
1938 if ((req->req.misc == RAW1394_NOTIFY_OFF) || 1937 if ((req->req.misc == RAW1394_NOTIFY_OFF) ||
1939 (req->req.misc == RAW1394_NOTIFY_ON)) { 1938 (req->req.misc == RAW1394_NOTIFY_ON)) {
1940 fi->notification = (u8) req->req.misc; 1939 fi->notification = (u8) req->req.misc;
1941 free_pending_request(req); /* we have to free the request, because we queue no response, and therefore nobody will free it */ 1940 free_pending_request(req); /* we have to free the request, because we queue no response, and therefore nobody will free it */
1942 return 0; 1941 return 0;
1943 } 1942 }
1944 /* error EINVAL (22) invalid argument */ 1943 /* error EINVAL (22) invalid argument */
1945 return (-EINVAL); 1944 return (-EINVAL);
1946 } 1945 }
1947 1946
1948 static int write_phypacket(struct file_info *fi, struct pending_request *req) 1947 static int write_phypacket(struct file_info *fi, struct pending_request *req)
1949 { 1948 {
1950 struct hpsb_packet *packet = NULL; 1949 struct hpsb_packet *packet = NULL;
1951 int retval = 0; 1950 int retval = 0;
1952 quadlet_t data; 1951 quadlet_t data;
1953 unsigned long flags; 1952 unsigned long flags;
1954 1953
1955 data = be32_to_cpu((u32) req->req.sendb); 1954 data = be32_to_cpu((u32) req->req.sendb);
1956 DBGMSG("write_phypacket called - quadlet 0x%8.8x ", data); 1955 DBGMSG("write_phypacket called - quadlet 0x%8.8x ", data);
1957 packet = hpsb_make_phypacket(fi->host, data); 1956 packet = hpsb_make_phypacket(fi->host, data);
1958 if (!packet) 1957 if (!packet)
1959 return -ENOMEM; 1958 return -ENOMEM;
1960 req->req.length = 0; 1959 req->req.length = 0;
1961 req->packet = packet; 1960 req->packet = packet;
1962 hpsb_set_packet_complete_task(packet, 1961 hpsb_set_packet_complete_task(packet,
1963 (void (*)(void *))queue_complete_cb, req); 1962 (void (*)(void *))queue_complete_cb, req);
1964 spin_lock_irqsave(&fi->reqlists_lock, flags); 1963 spin_lock_irqsave(&fi->reqlists_lock, flags);
1965 list_add_tail(&req->list, &fi->req_pending); 1964 list_add_tail(&req->list, &fi->req_pending);
1966 spin_unlock_irqrestore(&fi->reqlists_lock, flags); 1965 spin_unlock_irqrestore(&fi->reqlists_lock, flags);
1967 packet->generation = req->req.generation; 1966 packet->generation = req->req.generation;
1968 retval = hpsb_send_packet(packet); 1967 retval = hpsb_send_packet(packet);
1969 DBGMSG("write_phypacket send_packet called => retval: %d ", retval); 1968 DBGMSG("write_phypacket send_packet called => retval: %d ", retval);
1970 if (retval < 0) { 1969 if (retval < 0) {
1971 req->req.error = RAW1394_ERROR_SEND_ERROR; 1970 req->req.error = RAW1394_ERROR_SEND_ERROR;
1972 req->req.length = 0; 1971 req->req.length = 0;
1973 queue_complete_req(req); 1972 queue_complete_req(req);
1974 } 1973 }
1975 return 0; 1974 return 0;
1976 } 1975 }
1977 1976
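write_phypacket() above transmits whatever quadlet userspace placed in req.sendb. For reference, a sketch of composing a PHY configuration packet quadlet per the IEEE 1394 layout — the helper name is illustrative, and on the wire the quadlet is followed by its bitwise inverse, which the 1394 stack, not userspace, is responsible for appending:

    #include <stdint.h>

    uint32_t phy_config_quadlet(unsigned int root_id, int force_root,
                                int set_gap_count, unsigned int gap_count)
    {
            uint32_t q = (root_id & 0x3f) << 24 | (gap_count & 0x3f) << 16;

            if (force_root)
                    q |= 1u << 23;  /* R bit: make root_id the root node */
            if (set_gap_count)
                    q |= 1u << 22;  /* T bit: gap_cnt field is valid */
            return q;
    }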
1978 static int get_config_rom(struct file_info *fi, struct pending_request *req) 1977 static int get_config_rom(struct file_info *fi, struct pending_request *req)
1979 { 1978 {
1980 int ret = 0; 1979 int ret = 0;
1981 quadlet_t *data = kmalloc(req->req.length, GFP_KERNEL); 1980 quadlet_t *data = kmalloc(req->req.length, GFP_KERNEL);
1982 int status; 1981 int status;
1983 1982
1984 if (!data) 1983 if (!data)
1985 return -ENOMEM; 1984 return -ENOMEM;
1986 1985
1987 status = 1986 status =
1988 csr1212_read(fi->host->csr.rom, CSR1212_CONFIG_ROM_SPACE_OFFSET, 1987 csr1212_read(fi->host->csr.rom, CSR1212_CONFIG_ROM_SPACE_OFFSET,
1989 data, req->req.length); 1988 data, req->req.length);
1990 if (copy_to_user(int2ptr(req->req.recvb), data, req->req.length)) 1989 if (copy_to_user(int2ptr(req->req.recvb), data, req->req.length))
1991 ret = -EFAULT; 1990 ret = -EFAULT;
1992 if (copy_to_user 1991 if (copy_to_user
1993 (int2ptr(req->req.tag), &fi->host->csr.rom->cache_head->len, 1992 (int2ptr(req->req.tag), &fi->host->csr.rom->cache_head->len,
1994 sizeof(fi->host->csr.rom->cache_head->len))) 1993 sizeof(fi->host->csr.rom->cache_head->len)))
1995 ret = -EFAULT; 1994 ret = -EFAULT;
1996 if (copy_to_user(int2ptr(req->req.address), &fi->host->csr.generation, 1995 if (copy_to_user(int2ptr(req->req.address), &fi->host->csr.generation,
1997 sizeof(fi->host->csr.generation))) 1996 sizeof(fi->host->csr.generation)))
1998 ret = -EFAULT; 1997 ret = -EFAULT;
1999 if (copy_to_user(int2ptr(req->req.sendb), &status, sizeof(status))) 1998 if (copy_to_user(int2ptr(req->req.sendb), &status, sizeof(status)))
2000 ret = -EFAULT; 1999 ret = -EFAULT;
2001 kfree(data); 2000 kfree(data);
2002 if (ret >= 0) { 2001 if (ret >= 0) {
2003 free_pending_request(req); /* we have to free the request, because we queue no response, and therefore nobody will free it */ 2002 free_pending_request(req); /* we have to free the request, because we queue no response, and therefore nobody will free it */
2004 } 2003 }
2005 return ret; 2004 return ret;
2006 } 2005 }
2007 2006
2008 static int update_config_rom(struct file_info *fi, struct pending_request *req) 2007 static int update_config_rom(struct file_info *fi, struct pending_request *req)
2009 { 2008 {
2010 int ret = 0; 2009 int ret = 0;
2011 quadlet_t *data = kmalloc(req->req.length, GFP_KERNEL); 2010 quadlet_t *data = kmalloc(req->req.length, GFP_KERNEL);
2012 if (!data) 2011 if (!data)
2013 return -ENOMEM; 2012 return -ENOMEM;
2014 if (copy_from_user(data, int2ptr(req->req.sendb), req->req.length)) { 2013 if (copy_from_user(data, int2ptr(req->req.sendb), req->req.length)) {
2015 ret = -EFAULT; 2014 ret = -EFAULT;
2016 } else { 2015 } else {
2017 int status = hpsb_update_config_rom(fi->host, 2016 int status = hpsb_update_config_rom(fi->host,
2018 data, req->req.length, 2017 data, req->req.length,
2019 (unsigned char)req->req. 2018 (unsigned char)req->req.
2020 misc); 2019 misc);
2021 if (copy_to_user 2020 if (copy_to_user
2022 (int2ptr(req->req.recvb), &status, sizeof(status))) 2021 (int2ptr(req->req.recvb), &status, sizeof(status)))
2023 ret = -ENOMEM; 2022 ret = -ENOMEM;
2024 } 2023 }
2025 kfree(data); 2024 kfree(data);
2026 if (ret >= 0) { 2025 if (ret >= 0) {
2027 free_pending_request(req); /* we have to free the request, because we queue no response, and therefore nobody will free it */ 2026 free_pending_request(req); /* we have to free the request, because we queue no response, and therefore nobody will free it */
2028 fi->cfgrom_upd = 1; 2027 fi->cfgrom_upd = 1;
2029 } 2028 }
2030 return ret; 2029 return ret;
2031 } 2030 }
2032 2031
2033 static int modify_config_rom(struct file_info *fi, struct pending_request *req) 2032 static int modify_config_rom(struct file_info *fi, struct pending_request *req)
2034 { 2033 {
2035 struct csr1212_keyval *kv; 2034 struct csr1212_keyval *kv;
2036 struct csr1212_csr_rom_cache *cache; 2035 struct csr1212_csr_rom_cache *cache;
2037 struct csr1212_dentry *dentry; 2036 struct csr1212_dentry *dentry;
2038 u32 dr; 2037 u32 dr;
2039 int ret = 0; 2038 int ret = 0;
2040 2039
2041 if (req->req.misc == ~0) { 2040 if (req->req.misc == ~0) {
2042 if (req->req.length == 0) 2041 if (req->req.length == 0)
2043 return -EINVAL; 2042 return -EINVAL;
2044 2043
2045 /* Find an unused slot */ 2044 /* Find an unused slot */
2046 for (dr = 0; 2045 for (dr = 0;
2047 dr < RAW1394_MAX_USER_CSR_DIRS && fi->csr1212_dirs[dr]; 2046 dr < RAW1394_MAX_USER_CSR_DIRS && fi->csr1212_dirs[dr];
2048 dr++) ; 2047 dr++) ;
2049 2048
2050 if (dr == RAW1394_MAX_USER_CSR_DIRS) 2049 if (dr == RAW1394_MAX_USER_CSR_DIRS)
2051 return -ENOMEM; 2050 return -ENOMEM;
2052 2051
2053 fi->csr1212_dirs[dr] = 2052 fi->csr1212_dirs[dr] =
2054 csr1212_new_directory(CSR1212_KV_ID_VENDOR); 2053 csr1212_new_directory(CSR1212_KV_ID_VENDOR);
2055 if (!fi->csr1212_dirs[dr]) 2054 if (!fi->csr1212_dirs[dr])
2056 return -ENOMEM; 2055 return -ENOMEM;
2057 } else { 2056 } else {
2058 dr = req->req.misc; 2057 dr = req->req.misc;
2059 if (!fi->csr1212_dirs[dr]) 2058 if (!fi->csr1212_dirs[dr])
2060 return -EINVAL; 2059 return -EINVAL;
2061 2060
2062 /* Delete old stuff */ 2061 /* Delete old stuff */
2063 for (dentry = 2062 for (dentry =
2064 fi->csr1212_dirs[dr]->value.directory.dentries_head; 2063 fi->csr1212_dirs[dr]->value.directory.dentries_head;
2065 dentry; dentry = dentry->next) { 2064 dentry; dentry = dentry->next) {
2066 csr1212_detach_keyval_from_directory(fi->host->csr.rom-> 2065 csr1212_detach_keyval_from_directory(fi->host->csr.rom->
2067 root_kv, 2066 root_kv,
2068 dentry->kv); 2067 dentry->kv);
2069 } 2068 }
2070 2069
2071 if (req->req.length == 0) { 2070 if (req->req.length == 0) {
2072 csr1212_release_keyval(fi->csr1212_dirs[dr]); 2071 csr1212_release_keyval(fi->csr1212_dirs[dr]);
2073 fi->csr1212_dirs[dr] = NULL; 2072 fi->csr1212_dirs[dr] = NULL;
2074 2073
2075 hpsb_update_config_rom_image(fi->host); 2074 hpsb_update_config_rom_image(fi->host);
2076 free_pending_request(req); 2075 free_pending_request(req);
2077 return 0; 2076 return 0;
2078 } 2077 }
2079 } 2078 }
2080 2079
2081 cache = csr1212_rom_cache_malloc(0, req->req.length); 2080 cache = csr1212_rom_cache_malloc(0, req->req.length);
2082 if (!cache) { 2081 if (!cache) {
2083 csr1212_release_keyval(fi->csr1212_dirs[dr]); 2082 csr1212_release_keyval(fi->csr1212_dirs[dr]);
2084 fi->csr1212_dirs[dr] = NULL; 2083 fi->csr1212_dirs[dr] = NULL;
2085 return -ENOMEM; 2084 return -ENOMEM;
2086 } 2085 }
2087 2086
2088 cache->filled_head = kmalloc(sizeof(*cache->filled_head), GFP_KERNEL); 2087 cache->filled_head = kmalloc(sizeof(*cache->filled_head), GFP_KERNEL);
2089 if (!cache->filled_head) { 2088 if (!cache->filled_head) {
2090 csr1212_release_keyval(fi->csr1212_dirs[dr]); 2089 csr1212_release_keyval(fi->csr1212_dirs[dr]);
2091 fi->csr1212_dirs[dr] = NULL; 2090 fi->csr1212_dirs[dr] = NULL;
2092 CSR1212_FREE(cache); 2091 CSR1212_FREE(cache);
2093 return -ENOMEM; 2092 return -ENOMEM;
2094 } 2093 }
2095 cache->filled_tail = cache->filled_head; 2094 cache->filled_tail = cache->filled_head;
2096 2095
2097 if (copy_from_user(cache->data, int2ptr(req->req.sendb), 2096 if (copy_from_user(cache->data, int2ptr(req->req.sendb),
2098 req->req.length)) { 2097 req->req.length)) {
2099 csr1212_release_keyval(fi->csr1212_dirs[dr]); 2098 csr1212_release_keyval(fi->csr1212_dirs[dr]);
2100 fi->csr1212_dirs[dr] = NULL; 2099 fi->csr1212_dirs[dr] = NULL;
2101 ret = -EFAULT; 2100 ret = -EFAULT;
2102 } else { 2101 } else {
2103 cache->len = req->req.length; 2102 cache->len = req->req.length;
2104 cache->filled_head->offset_start = 0; 2103 cache->filled_head->offset_start = 0;
2105 cache->filled_head->offset_end = cache->size - 1; 2104 cache->filled_head->offset_end = cache->size - 1;
2106 2105
2107 cache->layout_head = cache->layout_tail = fi->csr1212_dirs[dr]; 2106 cache->layout_head = cache->layout_tail = fi->csr1212_dirs[dr];
2108 2107
2109 ret = CSR1212_SUCCESS; 2108 ret = CSR1212_SUCCESS;
2110 /* parse all the items */ 2109 /* parse all the items */
2111 for (kv = cache->layout_head; ret == CSR1212_SUCCESS && kv; 2110 for (kv = cache->layout_head; ret == CSR1212_SUCCESS && kv;
2112 kv = kv->next) { 2111 kv = kv->next) {
2113 ret = csr1212_parse_keyval(kv, cache); 2112 ret = csr1212_parse_keyval(kv, cache);
2114 } 2113 }
2115 2114
2116 /* attach top level items to the root directory */ 2115 /* attach top level items to the root directory */
2117 for (dentry = 2116 for (dentry =
2118 fi->csr1212_dirs[dr]->value.directory.dentries_head; 2117 fi->csr1212_dirs[dr]->value.directory.dentries_head;
2119 ret == CSR1212_SUCCESS && dentry; dentry = dentry->next) { 2118 ret == CSR1212_SUCCESS && dentry; dentry = dentry->next) {
2120 ret = 2119 ret =
2121 csr1212_attach_keyval_to_directory(fi->host->csr. 2120 csr1212_attach_keyval_to_directory(fi->host->csr.
2122 rom->root_kv, 2121 rom->root_kv,
2123 dentry->kv); 2122 dentry->kv);
2124 } 2123 }
2125 2124
2126 if (ret == CSR1212_SUCCESS) { 2125 if (ret == CSR1212_SUCCESS) {
2127 ret = hpsb_update_config_rom_image(fi->host); 2126 ret = hpsb_update_config_rom_image(fi->host);
2128 2127
2129 if (ret >= 0 && copy_to_user(int2ptr(req->req.recvb), 2128 if (ret >= 0 && copy_to_user(int2ptr(req->req.recvb),
2130 &dr, sizeof(dr))) { 2129 &dr, sizeof(dr))) {
2131 ret = -ENOMEM; 2130 ret = -ENOMEM;
2132 } 2131 }
2133 } 2132 }
2134 } 2133 }
2135 kfree(cache->filled_head); 2134 kfree(cache->filled_head);
2136 CSR1212_FREE(cache); 2135 CSR1212_FREE(cache);
2137 2136
2138 if (ret >= 0) { 2137 if (ret >= 0) {
2139 /* we have to free the request, because we queue no response, 2138 /* we have to free the request, because we queue no response,
2140 * and therefore nobody will free it */ 2139 * and therefore nobody will free it */
2141 free_pending_request(req); 2140 free_pending_request(req);
2142 return 0; 2141 return 0;
2143 } else { 2142 } else {
2144 for (dentry = 2143 for (dentry =
2145 fi->csr1212_dirs[dr]->value.directory.dentries_head; 2144 fi->csr1212_dirs[dr]->value.directory.dentries_head;
2146 dentry; dentry = dentry->next) { 2145 dentry; dentry = dentry->next) {
2147 csr1212_detach_keyval_from_directory(fi->host->csr.rom-> 2146 csr1212_detach_keyval_from_directory(fi->host->csr.rom->
2148 root_kv, 2147 root_kv,
2149 dentry->kv); 2148 dentry->kv);
2150 } 2149 }
2151 csr1212_release_keyval(fi->csr1212_dirs[dr]); 2150 csr1212_release_keyval(fi->csr1212_dirs[dr]);
2152 fi->csr1212_dirs[dr] = NULL; 2151 fi->csr1212_dirs[dr] = NULL;
2153 return ret; 2152 return ret;
2154 } 2153 }
2155 } 2154 }
2156 2155
2157 static int state_connected(struct file_info *fi, struct pending_request *req) 2156 static int state_connected(struct file_info *fi, struct pending_request *req)
2158 { 2157 {
2159 int node = req->req.address >> 48; 2158 int node = req->req.address >> 48;
2160 2159
2161 req->req.error = RAW1394_ERROR_NONE; 2160 req->req.error = RAW1394_ERROR_NONE;
2162 2161
2163 switch (req->req.type) { 2162 switch (req->req.type) {
2164 2163
2165 case RAW1394_REQ_ECHO: 2164 case RAW1394_REQ_ECHO:
2166 queue_complete_req(req); 2165 queue_complete_req(req);
2167 return 0; 2166 return 0;
2168 2167
2169 case RAW1394_REQ_ARM_REGISTER: 2168 case RAW1394_REQ_ARM_REGISTER:
2170 return arm_register(fi, req); 2169 return arm_register(fi, req);
2171 2170
2172 case RAW1394_REQ_ARM_UNREGISTER: 2171 case RAW1394_REQ_ARM_UNREGISTER:
2173 return arm_unregister(fi, req); 2172 return arm_unregister(fi, req);
2174 2173
2175 case RAW1394_REQ_ARM_SET_BUF: 2174 case RAW1394_REQ_ARM_SET_BUF:
2176 return arm_set_buf(fi, req); 2175 return arm_set_buf(fi, req);
2177 2176
2178 case RAW1394_REQ_ARM_GET_BUF: 2177 case RAW1394_REQ_ARM_GET_BUF:
2179 return arm_get_buf(fi, req); 2178 return arm_get_buf(fi, req);
2180 2179
2181 case RAW1394_REQ_RESET_NOTIFY: 2180 case RAW1394_REQ_RESET_NOTIFY:
2182 return reset_notification(fi, req); 2181 return reset_notification(fi, req);
2183 2182
2184 case RAW1394_REQ_ISO_SEND: 2183 case RAW1394_REQ_ISO_SEND:
2185 case RAW1394_REQ_ISO_LISTEN: 2184 case RAW1394_REQ_ISO_LISTEN:
2186 printk(KERN_DEBUG "raw1394: old iso ABI has been removed\n"); 2185 printk(KERN_DEBUG "raw1394: old iso ABI has been removed\n");
2187 req->req.error = RAW1394_ERROR_COMPAT; 2186 req->req.error = RAW1394_ERROR_COMPAT;
2188 req->req.misc = RAW1394_KERNELAPI_VERSION; 2187 req->req.misc = RAW1394_KERNELAPI_VERSION;
2189 queue_complete_req(req); 2188 queue_complete_req(req);
2190 return 0; 2189 return 0;
2191 2190
2192 case RAW1394_REQ_FCP_LISTEN: 2191 case RAW1394_REQ_FCP_LISTEN:
2193 handle_fcp_listen(fi, req); 2192 handle_fcp_listen(fi, req);
2194 return 0; 2193 return 0;
2195 2194
2196 case RAW1394_REQ_RESET_BUS: 2195 case RAW1394_REQ_RESET_BUS:
2197 if (req->req.misc == RAW1394_LONG_RESET) { 2196 if (req->req.misc == RAW1394_LONG_RESET) {
2198 DBGMSG("busreset called (type: LONG)"); 2197 DBGMSG("busreset called (type: LONG)");
2199 hpsb_reset_bus(fi->host, LONG_RESET); 2198 hpsb_reset_bus(fi->host, LONG_RESET);
2200 free_pending_request(req); /* we have to free the request, because we queue no response, and therefore nobody will free it */ 2199 free_pending_request(req); /* we have to free the request, because we queue no response, and therefore nobody will free it */
2201 return 0; 2200 return 0;
2202 } 2201 }
2203 if (req->req.misc == RAW1394_SHORT_RESET) { 2202 if (req->req.misc == RAW1394_SHORT_RESET) {
2204 DBGMSG("busreset called (type: SHORT)"); 2203 DBGMSG("busreset called (type: SHORT)");
2205 hpsb_reset_bus(fi->host, SHORT_RESET); 2204 hpsb_reset_bus(fi->host, SHORT_RESET);
2206 free_pending_request(req); /* we have to free the request, because we queue no response, and therefore nobody will free it */ 2205 free_pending_request(req); /* we have to free the request, because we queue no response, and therefore nobody will free it */
2207 return 0; 2206 return 0;
2208 } 2207 }
2209 /* error EINVAL (22) invalid argument */ 2208 /* error EINVAL (22) invalid argument */
2210 return (-EINVAL); 2209 return (-EINVAL);
2211 case RAW1394_REQ_GET_ROM: 2210 case RAW1394_REQ_GET_ROM:
2212 return get_config_rom(fi, req); 2211 return get_config_rom(fi, req);
2213 2212
2214 case RAW1394_REQ_UPDATE_ROM: 2213 case RAW1394_REQ_UPDATE_ROM:
2215 return update_config_rom(fi, req); 2214 return update_config_rom(fi, req);
2216 2215
2217 case RAW1394_REQ_MODIFY_ROM: 2216 case RAW1394_REQ_MODIFY_ROM:
2218 return modify_config_rom(fi, req); 2217 return modify_config_rom(fi, req);
2219 } 2218 }
2220 2219
2221 if (req->req.generation != get_hpsb_generation(fi->host)) { 2220 if (req->req.generation != get_hpsb_generation(fi->host)) {
2222 req->req.error = RAW1394_ERROR_GENERATION; 2221 req->req.error = RAW1394_ERROR_GENERATION;
2223 req->req.generation = get_hpsb_generation(fi->host); 2222 req->req.generation = get_hpsb_generation(fi->host);
2224 req->req.length = 0; 2223 req->req.length = 0;
2225 queue_complete_req(req); 2224 queue_complete_req(req);
2226 return 0; 2225 return 0;
2227 } 2226 }
2228 2227
2229 switch (req->req.type) { 2228 switch (req->req.type) {
2230 case RAW1394_REQ_PHYPACKET: 2229 case RAW1394_REQ_PHYPACKET:
2231 return write_phypacket(fi, req); 2230 return write_phypacket(fi, req);
2232 case RAW1394_REQ_ASYNC_SEND: 2231 case RAW1394_REQ_ASYNC_SEND:
2233 return handle_async_send(fi, req); 2232 return handle_async_send(fi, req);
2234 } 2233 }
2235 2234
2236 if (req->req.length == 0) { 2235 if (req->req.length == 0) {
2237 req->req.error = RAW1394_ERROR_INVALID_ARG; 2236 req->req.error = RAW1394_ERROR_INVALID_ARG;
2238 queue_complete_req(req); 2237 queue_complete_req(req);
2239 return 0; 2238 return 0;
2240 } 2239 }
2241 2240
2242 return handle_async_request(fi, req, node); 2241 return handle_async_request(fi, req, node);
2243 } 2242 }
2244 2243
2245 static ssize_t raw1394_write(struct file *file, const char __user * buffer, 2244 static ssize_t raw1394_write(struct file *file, const char __user * buffer,
2246 size_t count, loff_t * offset_is_ignored) 2245 size_t count, loff_t * offset_is_ignored)
2247 { 2246 {
2248 struct file_info *fi = (struct file_info *)file->private_data; 2247 struct file_info *fi = (struct file_info *)file->private_data;
2249 struct pending_request *req; 2248 struct pending_request *req;
2250 ssize_t retval = -EBADFD; 2249 ssize_t retval = -EBADFD;
2251 2250
2252 #ifdef CONFIG_COMPAT 2251 #ifdef CONFIG_COMPAT
2253 if (count == sizeof(struct compat_raw1394_req) && 2252 if (count == sizeof(struct compat_raw1394_req) &&
2254 sizeof(struct compat_raw1394_req) != 2253 sizeof(struct compat_raw1394_req) !=
2255 sizeof(struct raw1394_request)) { 2254 sizeof(struct raw1394_request)) {
2256 buffer = raw1394_compat_write(buffer); 2255 buffer = raw1394_compat_write(buffer);
2257 if (IS_ERR((__force void *)buffer)) 2256 if (IS_ERR((__force void *)buffer))
2258 return PTR_ERR((__force void *)buffer); 2257 return PTR_ERR((__force void *)buffer);
2259 } else 2258 } else
2260 #endif 2259 #endif
2261 if (count != sizeof(struct raw1394_request)) { 2260 if (count != sizeof(struct raw1394_request)) {
2262 return -EINVAL; 2261 return -EINVAL;
2263 } 2262 }
2264 2263
2265 req = alloc_pending_request(); 2264 req = alloc_pending_request();
2266 if (req == NULL) { 2265 if (req == NULL) {
2267 return -ENOMEM; 2266 return -ENOMEM;
2268 } 2267 }
2269 req->file_info = fi; 2268 req->file_info = fi;
2270 2269
2271 if (copy_from_user(&req->req, buffer, sizeof(struct raw1394_request))) { 2270 if (copy_from_user(&req->req, buffer, sizeof(struct raw1394_request))) {
2272 free_pending_request(req); 2271 free_pending_request(req);
2273 return -EFAULT; 2272 return -EFAULT;
2274 } 2273 }
2275 2274
2276 if (!mutex_trylock(&fi->state_mutex)) { 2275 if (!mutex_trylock(&fi->state_mutex)) {
2277 free_pending_request(req); 2276 free_pending_request(req);
2278 return -EAGAIN; 2277 return -EAGAIN;
2279 } 2278 }
2280 2279
2281 switch (fi->state) { 2280 switch (fi->state) {
2282 case opened: 2281 case opened:
2283 retval = state_opened(fi, req); 2282 retval = state_opened(fi, req);
2284 break; 2283 break;
2285 2284
2286 case initialized: 2285 case initialized:
2287 retval = state_initialized(fi, req); 2286 retval = state_initialized(fi, req);
2288 break; 2287 break;
2289 2288
2290 case connected: 2289 case connected:
2291 retval = state_connected(fi, req); 2290 retval = state_connected(fi, req);
2292 break; 2291 break;
2293 } 2292 }
2294 2293
2295 mutex_unlock(&fi->state_mutex); 2294 mutex_unlock(&fi->state_mutex);
2296 2295
2297 if (retval < 0) { 2296 if (retval < 0) {
2298 free_pending_request(req); 2297 free_pending_request(req);
2299 } else { 2298 } else {
2300 BUG_ON(retval); 2299 BUG_ON(retval);
2301 retval = count; 2300 retval = count;
2302 } 2301 }
2303 2302
2304 return retval; 2303 return retval;
2305 } 2304 }
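
For orientation, a minimal user-space sketch of the write() contract implemented above: exactly one struct raw1394_request per call, with the full size echoed back on success. This assumes an open raw1394 device handle that has already reached the connected state; the header path in the include is an assumption.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include "raw1394.h"	/* struct raw1394_request, RAW1394_REQ_* (path assumed) */

static void send_echo_request(int fd)
{
	struct raw1394_request req;

	memset(&req, 0, sizeof(req));
	req.type = RAW1394_REQ_ECHO;	/* completed directly in state_connected() */

	/* raw1394_write() returns -EINVAL for any other size */
	if (write(fd, &req, sizeof(req)) != sizeof(req))
		perror("raw1394 write");
}
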
2306 2305
2307 /* rawiso operations */ 2306 /* rawiso operations */
2308 2307
2309 /* check if any RAW1394_REQ_RAWISO_ACTIVITY event is already in the 2308 /* check if any RAW1394_REQ_RAWISO_ACTIVITY event is already in the
2310 * completion queue (reqlists_lock must be taken) */ 2309 * completion queue (reqlists_lock must be taken) */
2311 static inline int __rawiso_event_in_queue(struct file_info *fi) 2310 static inline int __rawiso_event_in_queue(struct file_info *fi)
2312 { 2311 {
2313 struct pending_request *req; 2312 struct pending_request *req;
2314 2313
2315 list_for_each_entry(req, &fi->req_complete, list) 2314 list_for_each_entry(req, &fi->req_complete, list)
2316 if (req->req.type == RAW1394_REQ_RAWISO_ACTIVITY) 2315 if (req->req.type == RAW1394_REQ_RAWISO_ACTIVITY)
2317 return 1; 2316 return 1;
2318 2317
2319 return 0; 2318 return 0;
2320 } 2319 }
2321 2320
2322 /* put a RAWISO_ACTIVITY event in the queue, if one isn't there already */ 2321 /* put a RAWISO_ACTIVITY event in the queue, if one isn't there already */
2323 static void queue_rawiso_event(struct file_info *fi) 2322 static void queue_rawiso_event(struct file_info *fi)
2324 { 2323 {
2325 unsigned long flags; 2324 unsigned long flags;
2326 2325
2327 spin_lock_irqsave(&fi->reqlists_lock, flags); 2326 spin_lock_irqsave(&fi->reqlists_lock, flags);
2328 2327
2329 /* only one ISO activity event may be in the queue */ 2328 /* only one ISO activity event may be in the queue */
2330 if (!__rawiso_event_in_queue(fi)) { 2329 if (!__rawiso_event_in_queue(fi)) {
2331 struct pending_request *req = 2330 struct pending_request *req =
2332 __alloc_pending_request(GFP_ATOMIC); 2331 __alloc_pending_request(GFP_ATOMIC);
2333 2332
2334 if (req) { 2333 if (req) {
2335 req->file_info = fi; 2334 req->file_info = fi;
2336 req->req.type = RAW1394_REQ_RAWISO_ACTIVITY; 2335 req->req.type = RAW1394_REQ_RAWISO_ACTIVITY;
2337 req->req.generation = get_hpsb_generation(fi->host); 2336 req->req.generation = get_hpsb_generation(fi->host);
2338 __queue_complete_req(req); 2337 __queue_complete_req(req);
2339 } else { 2338 } else {
2340 /* on allocation failure, signal an overflow */ 2339 /* on allocation failure, signal an overflow */
2341 if (fi->iso_handle) { 2340 if (fi->iso_handle) {
2342 atomic_inc(&fi->iso_handle->overflows); 2341 atomic_inc(&fi->iso_handle->overflows);
2343 } 2342 }
2344 } 2343 }
2345 } 2344 }
2346 spin_unlock_irqrestore(&fi->reqlists_lock, flags); 2345 spin_unlock_irqrestore(&fi->reqlists_lock, flags);
2347 } 2346 }
2348 2347
2349 static void rawiso_activity_cb(struct hpsb_iso *iso) 2348 static void rawiso_activity_cb(struct hpsb_iso *iso)
2350 { 2349 {
2351 unsigned long flags; 2350 unsigned long flags;
2352 struct host_info *hi; 2351 struct host_info *hi;
2353 struct file_info *fi; 2352 struct file_info *fi;
2354 2353
2355 spin_lock_irqsave(&host_info_lock, flags); 2354 spin_lock_irqsave(&host_info_lock, flags);
2356 hi = find_host_info(iso->host); 2355 hi = find_host_info(iso->host);
2357 2356
2358 if (hi != NULL) { 2357 if (hi != NULL) {
2359 list_for_each_entry(fi, &hi->file_info_list, list) { 2358 list_for_each_entry(fi, &hi->file_info_list, list) {
2360 if (fi->iso_handle == iso) 2359 if (fi->iso_handle == iso)
2361 queue_rawiso_event(fi); 2360 queue_rawiso_event(fi);
2362 } 2361 }
2363 } 2362 }
2364 2363
2365 spin_unlock_irqrestore(&host_info_lock, flags); 2364 spin_unlock_irqrestore(&host_info_lock, flags);
2366 } 2365 }
2367 2366
2368 /* helper function - gather all the kernel iso status bits for returning to user-space */ 2367 /* helper function - gather all the kernel iso status bits for returning to user-space */
2369 static void raw1394_iso_fill_status(struct hpsb_iso *iso, 2368 static void raw1394_iso_fill_status(struct hpsb_iso *iso,
2370 struct raw1394_iso_status *stat) 2369 struct raw1394_iso_status *stat)
2371 { 2370 {
2372 int overflows = atomic_read(&iso->overflows); 2371 int overflows = atomic_read(&iso->overflows);
2373 int skips = atomic_read(&iso->skips); 2372 int skips = atomic_read(&iso->skips);
2374 2373
2375 stat->config.data_buf_size = iso->buf_size; 2374 stat->config.data_buf_size = iso->buf_size;
2376 stat->config.buf_packets = iso->buf_packets; 2375 stat->config.buf_packets = iso->buf_packets;
2377 stat->config.channel = iso->channel; 2376 stat->config.channel = iso->channel;
2378 stat->config.speed = iso->speed; 2377 stat->config.speed = iso->speed;
2379 stat->config.irq_interval = iso->irq_interval; 2378 stat->config.irq_interval = iso->irq_interval;
2380 stat->n_packets = hpsb_iso_n_ready(iso); 2379 stat->n_packets = hpsb_iso_n_ready(iso);
2381 stat->overflows = ((skips & 0xFFFF) << 16) | ((overflows & 0xFFFF)); 2380 stat->overflows = ((skips & 0xFFFF) << 16) | ((overflows & 0xFFFF));
2382 stat->xmit_cycle = iso->xmit_cycle; 2381 stat->xmit_cycle = iso->xmit_cycle;
2383 } 2382 }
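
Since the packed stat->overflows word above is easy to misread, here is a hedged sketch of the matching user-side unpacking after a RAW1394_IOC_ISO_GET_STATUS call (raw1394_iso_get_status() below resets both counters afterwards):

#include <stdio.h>

static void report_iso_counters(const struct raw1394_iso_status *stat)
{
	/* mirrors the packing in raw1394_iso_fill_status(): skips in the
	 * high 16 bits, overflows in the low 16 bits */
	unsigned int skips     = (stat->overflows >> 16) & 0xFFFF;
	unsigned int overflows = stat->overflows & 0xFFFF;

	printf("iso: %u skips, %u overflows\n", skips, overflows);
}
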
2384 2383
2385 static int raw1394_iso_xmit_init(struct file_info *fi, void __user * uaddr) 2384 static int raw1394_iso_xmit_init(struct file_info *fi, void __user * uaddr)
2386 { 2385 {
2387 struct raw1394_iso_status stat; 2386 struct raw1394_iso_status stat;
2388 2387
2389 if (!fi->host) 2388 if (!fi->host)
2390 return -EINVAL; 2389 return -EINVAL;
2391 2390
2392 if (copy_from_user(&stat, uaddr, sizeof(stat))) 2391 if (copy_from_user(&stat, uaddr, sizeof(stat)))
2393 return -EFAULT; 2392 return -EFAULT;
2394 2393
2395 fi->iso_handle = hpsb_iso_xmit_init(fi->host, 2394 fi->iso_handle = hpsb_iso_xmit_init(fi->host,
2396 stat.config.data_buf_size, 2395 stat.config.data_buf_size,
2397 stat.config.buf_packets, 2396 stat.config.buf_packets,
2398 stat.config.channel, 2397 stat.config.channel,
2399 stat.config.speed, 2398 stat.config.speed,
2400 stat.config.irq_interval, 2399 stat.config.irq_interval,
2401 rawiso_activity_cb); 2400 rawiso_activity_cb);
2402 if (!fi->iso_handle) 2401 if (!fi->iso_handle)
2403 return -ENOMEM; 2402 return -ENOMEM;
2404 2403
2405 fi->iso_state = RAW1394_ISO_XMIT; 2404 fi->iso_state = RAW1394_ISO_XMIT;
2406 2405
2407 raw1394_iso_fill_status(fi->iso_handle, &stat); 2406 raw1394_iso_fill_status(fi->iso_handle, &stat);
2408 if (copy_to_user(uaddr, &stat, sizeof(stat))) 2407 if (copy_to_user(uaddr, &stat, sizeof(stat)))
2409 return -EFAULT; 2408 return -EFAULT;
2410 2409
2411 /* queue an event to get things started */ 2410 /* queue an event to get things started */
2412 rawiso_activity_cb(fi->iso_handle); 2411 rawiso_activity_cb(fi->iso_handle);
2413 2412
2414 return 0; 2413 return 0;
2415 } 2414 }
2416 2415
2417 static int raw1394_iso_recv_init(struct file_info *fi, void __user * uaddr) 2416 static int raw1394_iso_recv_init(struct file_info *fi, void __user * uaddr)
2418 { 2417 {
2419 struct raw1394_iso_status stat; 2418 struct raw1394_iso_status stat;
2420 2419
2421 if (!fi->host) 2420 if (!fi->host)
2422 return -EINVAL; 2421 return -EINVAL;
2423 2422
2424 if (copy_from_user(&stat, uaddr, sizeof(stat))) 2423 if (copy_from_user(&stat, uaddr, sizeof(stat)))
2425 return -EFAULT; 2424 return -EFAULT;
2426 2425
2427 fi->iso_handle = hpsb_iso_recv_init(fi->host, 2426 fi->iso_handle = hpsb_iso_recv_init(fi->host,
2428 stat.config.data_buf_size, 2427 stat.config.data_buf_size,
2429 stat.config.buf_packets, 2428 stat.config.buf_packets,
2430 stat.config.channel, 2429 stat.config.channel,
2431 stat.config.dma_mode, 2430 stat.config.dma_mode,
2432 stat.config.irq_interval, 2431 stat.config.irq_interval,
2433 rawiso_activity_cb); 2432 rawiso_activity_cb);
2434 if (!fi->iso_handle) 2433 if (!fi->iso_handle)
2435 return -ENOMEM; 2434 return -ENOMEM;
2436 2435
2437 fi->iso_state = RAW1394_ISO_RECV; 2436 fi->iso_state = RAW1394_ISO_RECV;
2438 2437
2439 raw1394_iso_fill_status(fi->iso_handle, &stat); 2438 raw1394_iso_fill_status(fi->iso_handle, &stat);
2440 if (copy_to_user(uaddr, &stat, sizeof(stat))) 2439 if (copy_to_user(uaddr, &stat, sizeof(stat)))
2441 return -EFAULT; 2440 return -EFAULT;
2442 return 0; 2441 return 0;
2443 } 2442 }
2444 2443
2445 static int raw1394_iso_get_status(struct file_info *fi, void __user * uaddr) 2444 static int raw1394_iso_get_status(struct file_info *fi, void __user * uaddr)
2446 { 2445 {
2447 struct raw1394_iso_status stat; 2446 struct raw1394_iso_status stat;
2448 struct hpsb_iso *iso = fi->iso_handle; 2447 struct hpsb_iso *iso = fi->iso_handle;
2449 2448
2450 raw1394_iso_fill_status(fi->iso_handle, &stat); 2449 raw1394_iso_fill_status(fi->iso_handle, &stat);
2451 if (copy_to_user(uaddr, &stat, sizeof(stat))) 2450 if (copy_to_user(uaddr, &stat, sizeof(stat)))
2452 return -EFAULT; 2451 return -EFAULT;
2453 2452
2454 /* reset overflow counter */ 2453 /* reset overflow counter */
2455 atomic_set(&iso->overflows, 0); 2454 atomic_set(&iso->overflows, 0);
2456 /* reset skip counter */ 2455 /* reset skip counter */
2457 atomic_set(&iso->skips, 0); 2456 atomic_set(&iso->skips, 0);
2458 2457
2459 return 0; 2458 return 0;
2460 } 2459 }
2461 2460
2462 /* copy N packet_infos out of the ringbuffer into user-supplied array */ 2461 /* copy N packet_infos out of the ringbuffer into user-supplied array */
2463 static int raw1394_iso_recv_packets(struct file_info *fi, void __user * uaddr) 2462 static int raw1394_iso_recv_packets(struct file_info *fi, void __user * uaddr)
2464 { 2463 {
2465 struct raw1394_iso_packets upackets; 2464 struct raw1394_iso_packets upackets;
2466 unsigned int packet = fi->iso_handle->first_packet; 2465 unsigned int packet = fi->iso_handle->first_packet;
2467 int i; 2466 int i;
2468 2467
2469 if (copy_from_user(&upackets, uaddr, sizeof(upackets))) 2468 if (copy_from_user(&upackets, uaddr, sizeof(upackets)))
2470 return -EFAULT; 2469 return -EFAULT;
2471 2470
2472 if (upackets.n_packets > hpsb_iso_n_ready(fi->iso_handle)) 2471 if (upackets.n_packets > hpsb_iso_n_ready(fi->iso_handle))
2473 return -EINVAL; 2472 return -EINVAL;
2474 2473
2475 /* ensure user-supplied buffer is accessible and big enough */ 2474 /* ensure user-supplied buffer is accessible and big enough */
2476 if (!access_ok(VERIFY_WRITE, upackets.infos, 2475 if (!access_ok(VERIFY_WRITE, upackets.infos,
2477 upackets.n_packets * 2476 upackets.n_packets *
2478 sizeof(struct raw1394_iso_packet_info))) 2477 sizeof(struct raw1394_iso_packet_info)))
2479 return -EFAULT; 2478 return -EFAULT;
2480 2479
2481 /* copy the packet_infos out */ 2480 /* copy the packet_infos out */
2482 for (i = 0; i < upackets.n_packets; i++) { 2481 for (i = 0; i < upackets.n_packets; i++) {
2483 if (__copy_to_user(&upackets.infos[i], 2482 if (__copy_to_user(&upackets.infos[i],
2484 &fi->iso_handle->infos[packet], 2483 &fi->iso_handle->infos[packet],
2485 sizeof(struct raw1394_iso_packet_info))) 2484 sizeof(struct raw1394_iso_packet_info)))
2486 return -EFAULT; 2485 return -EFAULT;
2487 2486
2488 packet = (packet + 1) % fi->iso_handle->buf_packets; 2487 packet = (packet + 1) % fi->iso_handle->buf_packets;
2489 } 2488 }
2490 2489
2491 return 0; 2490 return 0;
2492 } 2491 }
2493 2492
2494 /* copy N packet_infos from user to ringbuffer, and queue them for transmission */ 2493 /* copy N packet_infos from user to ringbuffer, and queue them for transmission */
2495 static int raw1394_iso_send_packets(struct file_info *fi, void __user * uaddr) 2494 static int raw1394_iso_send_packets(struct file_info *fi, void __user * uaddr)
2496 { 2495 {
2497 struct raw1394_iso_packets upackets; 2496 struct raw1394_iso_packets upackets;
2498 int i, rv; 2497 int i, rv;
2499 2498
2500 if (copy_from_user(&upackets, uaddr, sizeof(upackets))) 2499 if (copy_from_user(&upackets, uaddr, sizeof(upackets)))
2501 return -EFAULT; 2500 return -EFAULT;
2502 2501
2503 if (upackets.n_packets >= fi->iso_handle->buf_packets) 2502 if (upackets.n_packets >= fi->iso_handle->buf_packets)
2504 return -EINVAL; 2503 return -EINVAL;
2505 2504
2506 if (upackets.n_packets >= hpsb_iso_n_ready(fi->iso_handle)) 2505 if (upackets.n_packets >= hpsb_iso_n_ready(fi->iso_handle))
2507 return -EAGAIN; 2506 return -EAGAIN;
2508 2507
2509 /* ensure user-supplied buffer is accessible and big enough */ 2508 /* ensure user-supplied buffer is accessible and big enough */
2510 if (!access_ok(VERIFY_READ, upackets.infos, 2509 if (!access_ok(VERIFY_READ, upackets.infos,
2511 upackets.n_packets * 2510 upackets.n_packets *
2512 sizeof(struct raw1394_iso_packet_info))) 2511 sizeof(struct raw1394_iso_packet_info)))
2513 return -EFAULT; 2512 return -EFAULT;
2514 2513
2515 /* copy the infos structs in and queue the packets */ 2514 /* copy the infos structs in and queue the packets */
2516 for (i = 0; i < upackets.n_packets; i++) { 2515 for (i = 0; i < upackets.n_packets; i++) {
2517 struct raw1394_iso_packet_info info; 2516 struct raw1394_iso_packet_info info;
2518 2517
2519 if (__copy_from_user(&info, &upackets.infos[i], 2518 if (__copy_from_user(&info, &upackets.infos[i],
2520 sizeof(struct raw1394_iso_packet_info))) 2519 sizeof(struct raw1394_iso_packet_info)))
2521 return -EFAULT; 2520 return -EFAULT;
2522 2521
2523 rv = hpsb_iso_xmit_queue_packet(fi->iso_handle, info.offset, 2522 rv = hpsb_iso_xmit_queue_packet(fi->iso_handle, info.offset,
2524 info.len, info.tag, info.sy); 2523 info.len, info.tag, info.sy);
2525 if (rv) 2524 if (rv)
2526 return rv; 2525 return rv;
2527 } 2526 }
2528 2527
2529 return 0; 2528 return 0;
2530 } 2529 }
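
To make the expected user-side layout concrete, a hypothetical sketch of queueing a single transmit packet via RAW1394_IOC_ISO_XMIT_PACKETS. Only offset, len, tag and sy matter here, because those are all the loop above reads; the concrete values are placeholders.

#include <stdio.h>
#include <sys/ioctl.h>

static void queue_one_xmit_packet(int fd)
{
	struct raw1394_iso_packet_info info = {
		.offset	= 0,	/* byte offset into the mmap()ed data buffer */
		.len	= 488,	/* payload length; placeholder value */
		.tag	= 1,	/* iso header tag field */
		.sy	= 0,	/* iso header sy field */
	};
	struct raw1394_iso_packets upackets = {
		.n_packets	= 1,
		.infos		= &info,
	};

	if (ioctl(fd, RAW1394_IOC_ISO_XMIT_PACKETS, &upackets) < 0)
		perror("ISO_XMIT_PACKETS");
}
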
2531 2530
2532 static void raw1394_iso_shutdown(struct file_info *fi) 2531 static void raw1394_iso_shutdown(struct file_info *fi)
2533 { 2532 {
2534 if (fi->iso_handle) 2533 if (fi->iso_handle)
2535 hpsb_iso_shutdown(fi->iso_handle); 2534 hpsb_iso_shutdown(fi->iso_handle);
2536 2535
2537 fi->iso_handle = NULL; 2536 fi->iso_handle = NULL;
2538 fi->iso_state = RAW1394_ISO_INACTIVE; 2537 fi->iso_state = RAW1394_ISO_INACTIVE;
2539 } 2538 }
2540 2539
2541 static int raw1394_read_cycle_timer(struct file_info *fi, void __user * uaddr) 2540 static int raw1394_read_cycle_timer(struct file_info *fi, void __user * uaddr)
2542 { 2541 {
2543 struct raw1394_cycle_timer ct; 2542 struct raw1394_cycle_timer ct;
2544 int err; 2543 int err;
2545 2544
2546 err = hpsb_read_cycle_timer(fi->host, &ct.cycle_timer, &ct.local_time); 2545 err = hpsb_read_cycle_timer(fi->host, &ct.cycle_timer, &ct.local_time);
2547 if (!err) 2546 if (!err)
2548 if (copy_to_user(uaddr, &ct, sizeof(ct))) 2547 if (copy_to_user(uaddr, &ct, sizeof(ct)))
2549 err = -EFAULT; 2548 err = -EFAULT;
2550 return err; 2549 return err;
2551 } 2550 }
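
A short usage sketch for the cycle-timer read above. struct raw1394_cycle_timer pairs the 32-bit cycle timer register with a 64-bit local timestamp, as the compat declaration further below also shows; illustrative only.

#include <stdio.h>
#include <sys/ioctl.h>

static void dump_cycle_timer(int fd)
{
	struct raw1394_cycle_timer ct;

	if (ioctl(fd, RAW1394_IOC_GET_CYCLE_TIMER, &ct) == 0)
		printf("cycle timer 0x%08x, local time %llu\n",
		       (unsigned int)ct.cycle_timer,
		       (unsigned long long)ct.local_time);
}
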
2552 2551
2553 /* mmap the rawiso xmit/recv buffer */ 2552 /* mmap the rawiso xmit/recv buffer */
2554 static int raw1394_mmap(struct file *file, struct vm_area_struct *vma) 2553 static int raw1394_mmap(struct file *file, struct vm_area_struct *vma)
2555 { 2554 {
2556 struct file_info *fi = file->private_data; 2555 struct file_info *fi = file->private_data;
2557 int ret; 2556 int ret;
2558 2557
2559 if (!mutex_trylock(&fi->state_mutex)) 2558 if (!mutex_trylock(&fi->state_mutex))
2560 return -EAGAIN; 2559 return -EAGAIN;
2561 2560
2562 if (fi->iso_state == RAW1394_ISO_INACTIVE) 2561 if (fi->iso_state == RAW1394_ISO_INACTIVE)
2563 ret = -EINVAL; 2562 ret = -EINVAL;
2564 else 2563 else
2565 ret = dma_region_mmap(&fi->iso_handle->data_buf, file, vma); 2564 ret = dma_region_mmap(&fi->iso_handle->data_buf, file, vma);
2566 2565
2567 mutex_unlock(&fi->state_mutex); 2566 mutex_unlock(&fi->state_mutex);
2568 2567
2569 return ret; 2568 return ret;
2570 } 2569 }
2571 2570
2572 static long raw1394_ioctl_inactive(struct file_info *fi, unsigned int cmd, 2571 static long raw1394_ioctl_inactive(struct file_info *fi, unsigned int cmd,
2573 void __user *argp) 2572 void __user *argp)
2574 { 2573 {
2575 switch (cmd) { 2574 switch (cmd) {
2576 case RAW1394_IOC_ISO_XMIT_INIT: 2575 case RAW1394_IOC_ISO_XMIT_INIT:
2577 return raw1394_iso_xmit_init(fi, argp); 2576 return raw1394_iso_xmit_init(fi, argp);
2578 case RAW1394_IOC_ISO_RECV_INIT: 2577 case RAW1394_IOC_ISO_RECV_INIT:
2579 return raw1394_iso_recv_init(fi, argp); 2578 return raw1394_iso_recv_init(fi, argp);
2580 default: 2579 default:
2581 return -EINVAL; 2580 return -EINVAL;
2582 } 2581 }
2583 } 2582 }
2584 2583
2585 static long raw1394_ioctl_recv(struct file_info *fi, unsigned int cmd, 2584 static long raw1394_ioctl_recv(struct file_info *fi, unsigned int cmd,
2586 unsigned long arg) 2585 unsigned long arg)
2587 { 2586 {
2588 void __user *argp = (void __user *)arg; 2587 void __user *argp = (void __user *)arg;
2589 2588
2590 switch (cmd) { 2589 switch (cmd) {
2591 case RAW1394_IOC_ISO_RECV_START:{ 2590 case RAW1394_IOC_ISO_RECV_START:{
2592 int args[3]; 2591 int args[3];
2593 2592
2594 if (copy_from_user(&args[0], argp, sizeof(args))) 2593 if (copy_from_user(&args[0], argp, sizeof(args)))
2595 return -EFAULT; 2594 return -EFAULT;
2596 return hpsb_iso_recv_start(fi->iso_handle, 2595 return hpsb_iso_recv_start(fi->iso_handle,
2597 args[0], args[1], args[2]); 2596 args[0], args[1], args[2]);
2598 } 2597 }
2599 case RAW1394_IOC_ISO_XMIT_RECV_STOP: 2598 case RAW1394_IOC_ISO_XMIT_RECV_STOP:
2600 hpsb_iso_stop(fi->iso_handle); 2599 hpsb_iso_stop(fi->iso_handle);
2601 return 0; 2600 return 0;
2602 case RAW1394_IOC_ISO_RECV_LISTEN_CHANNEL: 2601 case RAW1394_IOC_ISO_RECV_LISTEN_CHANNEL:
2603 return hpsb_iso_recv_listen_channel(fi->iso_handle, arg); 2602 return hpsb_iso_recv_listen_channel(fi->iso_handle, arg);
2604 case RAW1394_IOC_ISO_RECV_UNLISTEN_CHANNEL: 2603 case RAW1394_IOC_ISO_RECV_UNLISTEN_CHANNEL:
2605 return hpsb_iso_recv_unlisten_channel(fi->iso_handle, arg); 2604 return hpsb_iso_recv_unlisten_channel(fi->iso_handle, arg);
2606 case RAW1394_IOC_ISO_RECV_SET_CHANNEL_MASK:{ 2605 case RAW1394_IOC_ISO_RECV_SET_CHANNEL_MASK:{
2607 u64 mask; 2606 u64 mask;
2608 2607
2609 if (copy_from_user(&mask, argp, sizeof(mask))) 2608 if (copy_from_user(&mask, argp, sizeof(mask)))
2610 return -EFAULT; 2609 return -EFAULT;
2611 return hpsb_iso_recv_set_channel_mask(fi->iso_handle, 2610 return hpsb_iso_recv_set_channel_mask(fi->iso_handle,
2612 mask); 2611 mask);
2613 } 2612 }
2614 case RAW1394_IOC_ISO_GET_STATUS: 2613 case RAW1394_IOC_ISO_GET_STATUS:
2615 return raw1394_iso_get_status(fi, argp); 2614 return raw1394_iso_get_status(fi, argp);
2616 case RAW1394_IOC_ISO_RECV_PACKETS: 2615 case RAW1394_IOC_ISO_RECV_PACKETS:
2617 return raw1394_iso_recv_packets(fi, argp); 2616 return raw1394_iso_recv_packets(fi, argp);
2618 case RAW1394_IOC_ISO_RECV_RELEASE_PACKETS: 2617 case RAW1394_IOC_ISO_RECV_RELEASE_PACKETS:
2619 return hpsb_iso_recv_release_packets(fi->iso_handle, arg); 2618 return hpsb_iso_recv_release_packets(fi->iso_handle, arg);
2620 case RAW1394_IOC_ISO_RECV_FLUSH: 2619 case RAW1394_IOC_ISO_RECV_FLUSH:
2621 return hpsb_iso_recv_flush(fi->iso_handle); 2620 return hpsb_iso_recv_flush(fi->iso_handle);
2622 case RAW1394_IOC_ISO_SHUTDOWN: 2621 case RAW1394_IOC_ISO_SHUTDOWN:
2623 raw1394_iso_shutdown(fi); 2622 raw1394_iso_shutdown(fi);
2624 return 0; 2623 return 0;
2625 case RAW1394_IOC_ISO_QUEUE_ACTIVITY: 2624 case RAW1394_IOC_ISO_QUEUE_ACTIVITY:
2626 queue_rawiso_event(fi); 2625 queue_rawiso_event(fi);
2627 return 0; 2626 return 0;
2628 default: 2627 default:
2629 return -EINVAL; 2628 return -EINVAL;
2630 } 2629 }
2631 } 2630 }
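
The three ints consumed by RAW1394_IOC_ISO_RECV_START above are passed straight through to hpsb_iso_recv_start(); their interpretation in this sketch (start cycle, tag mask, sync) is an assumption inferred from that argument order, so treat it as illustrative:

#include <stdio.h>
#include <sys/ioctl.h>

static void start_iso_receive(int fd)
{
	int args[3] = {
		-1,	/* start cycle: assumed to mean "as soon as possible" */
		-1,	/* tag mask: assumed to mean "accept all tags" */
		0,	/* sync field */
	};

	if (ioctl(fd, RAW1394_IOC_ISO_RECV_START, args) < 0)
		perror("ISO_RECV_START");
}
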
2632 2631
2633 static long raw1394_ioctl_xmit(struct file_info *fi, unsigned int cmd, 2632 static long raw1394_ioctl_xmit(struct file_info *fi, unsigned int cmd,
2634 void __user *argp) 2633 void __user *argp)
2635 { 2634 {
2636 switch (cmd) { 2635 switch (cmd) {
2637 case RAW1394_IOC_ISO_XMIT_START:{ 2636 case RAW1394_IOC_ISO_XMIT_START:{
2638 int args[2]; 2637 int args[2];
2639 2638
2640 if (copy_from_user(&args[0], argp, sizeof(args))) 2639 if (copy_from_user(&args[0], argp, sizeof(args)))
2641 return -EFAULT; 2640 return -EFAULT;
2642 return hpsb_iso_xmit_start(fi->iso_handle, 2641 return hpsb_iso_xmit_start(fi->iso_handle,
2643 args[0], args[1]); 2642 args[0], args[1]);
2644 } 2643 }
2645 case RAW1394_IOC_ISO_XMIT_SYNC: 2644 case RAW1394_IOC_ISO_XMIT_SYNC:
2646 return hpsb_iso_xmit_sync(fi->iso_handle); 2645 return hpsb_iso_xmit_sync(fi->iso_handle);
2647 case RAW1394_IOC_ISO_XMIT_RECV_STOP: 2646 case RAW1394_IOC_ISO_XMIT_RECV_STOP:
2648 hpsb_iso_stop(fi->iso_handle); 2647 hpsb_iso_stop(fi->iso_handle);
2649 return 0; 2648 return 0;
2650 case RAW1394_IOC_ISO_GET_STATUS: 2649 case RAW1394_IOC_ISO_GET_STATUS:
2651 return raw1394_iso_get_status(fi, argp); 2650 return raw1394_iso_get_status(fi, argp);
2652 case RAW1394_IOC_ISO_XMIT_PACKETS: 2651 case RAW1394_IOC_ISO_XMIT_PACKETS:
2653 return raw1394_iso_send_packets(fi, argp); 2652 return raw1394_iso_send_packets(fi, argp);
2654 case RAW1394_IOC_ISO_SHUTDOWN: 2653 case RAW1394_IOC_ISO_SHUTDOWN:
2655 raw1394_iso_shutdown(fi); 2654 raw1394_iso_shutdown(fi);
2656 return 0; 2655 return 0;
2657 case RAW1394_IOC_ISO_QUEUE_ACTIVITY: 2656 case RAW1394_IOC_ISO_QUEUE_ACTIVITY:
2658 queue_rawiso_event(fi); 2657 queue_rawiso_event(fi);
2659 return 0; 2658 return 0;
2660 default: 2659 default:
2661 return -EINVAL; 2660 return -EINVAL;
2662 } 2661 }
2663 } 2662 }
2664 2663
2665 /* ioctl is only used for rawiso operations */ 2664 /* ioctl is only used for rawiso operations */
2666 static long raw1394_ioctl(struct file *file, unsigned int cmd, 2665 static long raw1394_ioctl(struct file *file, unsigned int cmd,
2667 unsigned long arg) 2666 unsigned long arg)
2668 { 2667 {
2669 struct file_info *fi = file->private_data; 2668 struct file_info *fi = file->private_data;
2670 void __user *argp = (void __user *)arg; 2669 void __user *argp = (void __user *)arg;
2671 long ret; 2670 long ret;
2672 2671
2673 /* state-independent commands */ 2672 /* state-independent commands */
2674 switch(cmd) { 2673 switch(cmd) {
2675 case RAW1394_IOC_GET_CYCLE_TIMER: 2674 case RAW1394_IOC_GET_CYCLE_TIMER:
2676 return raw1394_read_cycle_timer(fi, argp); 2675 return raw1394_read_cycle_timer(fi, argp);
2677 default: 2676 default:
2678 break; 2677 break;
2679 } 2678 }
2680 2679
2681 if (!mutex_trylock(&fi->state_mutex)) 2680 if (!mutex_trylock(&fi->state_mutex))
2682 return -EAGAIN; 2681 return -EAGAIN;
2683 2682
2684 switch (fi->iso_state) { 2683 switch (fi->iso_state) {
2685 case RAW1394_ISO_INACTIVE: 2684 case RAW1394_ISO_INACTIVE:
2686 ret = raw1394_ioctl_inactive(fi, cmd, argp); 2685 ret = raw1394_ioctl_inactive(fi, cmd, argp);
2687 break; 2686 break;
2688 case RAW1394_ISO_RECV: 2687 case RAW1394_ISO_RECV:
2689 ret = raw1394_ioctl_recv(fi, cmd, arg); 2688 ret = raw1394_ioctl_recv(fi, cmd, arg);
2690 break; 2689 break;
2691 case RAW1394_ISO_XMIT: 2690 case RAW1394_ISO_XMIT:
2692 ret = raw1394_ioctl_xmit(fi, cmd, argp); 2691 ret = raw1394_ioctl_xmit(fi, cmd, argp);
2693 break; 2692 break;
2694 default: 2693 default:
2695 ret = -EINVAL; 2694 ret = -EINVAL;
2696 break; 2695 break;
2697 } 2696 }
2698 2697
2699 mutex_unlock(&fi->state_mutex); 2698 mutex_unlock(&fi->state_mutex);
2700 2699
2701 return ret; 2700 return ret;
2702 } 2701 }
2703 2702
2704 #ifdef CONFIG_COMPAT 2703 #ifdef CONFIG_COMPAT
2705 struct raw1394_iso_packets32 { 2704 struct raw1394_iso_packets32 {
2706 __u32 n_packets; 2705 __u32 n_packets;
2707 compat_uptr_t infos; 2706 compat_uptr_t infos;
2708 } __attribute__((packed)); 2707 } __attribute__((packed));
2709 2708
2710 struct raw1394_cycle_timer32 { 2709 struct raw1394_cycle_timer32 {
2711 __u32 cycle_timer; 2710 __u32 cycle_timer;
2712 __u64 local_time; 2711 __u64 local_time;
2713 } 2712 }
2714 #if defined(CONFIG_X86_64) || defined(CONFIG_IA64) 2713 #if defined(CONFIG_X86_64) || defined(CONFIG_IA64)
2715 __attribute__((packed)) 2714 __attribute__((packed))
2716 #endif 2715 #endif
2717 ; 2716 ;
2718 2717
2719 #define RAW1394_IOC_ISO_RECV_PACKETS32 \ 2718 #define RAW1394_IOC_ISO_RECV_PACKETS32 \
2720 _IOW ('#', 0x25, struct raw1394_iso_packets32) 2719 _IOW ('#', 0x25, struct raw1394_iso_packets32)
2721 #define RAW1394_IOC_ISO_XMIT_PACKETS32 \ 2720 #define RAW1394_IOC_ISO_XMIT_PACKETS32 \
2722 _IOW ('#', 0x27, struct raw1394_iso_packets32) 2721 _IOW ('#', 0x27, struct raw1394_iso_packets32)
2723 #define RAW1394_IOC_GET_CYCLE_TIMER32 \ 2722 #define RAW1394_IOC_GET_CYCLE_TIMER32 \
2724 _IOR ('#', 0x30, struct raw1394_cycle_timer32) 2723 _IOR ('#', 0x30, struct raw1394_cycle_timer32)
2725 2724
2726 static long raw1394_iso_xmit_recv_packets32(struct file *file, unsigned int cmd, 2725 static long raw1394_iso_xmit_recv_packets32(struct file *file, unsigned int cmd,
2727 struct raw1394_iso_packets32 __user *arg) 2726 struct raw1394_iso_packets32 __user *arg)
2728 { 2727 {
2729 compat_uptr_t infos32; 2728 compat_uptr_t infos32;
2730 void __user *infos; 2729 void __user *infos;
2731 long err = -EFAULT; 2730 long err = -EFAULT;
2732 struct raw1394_iso_packets __user *dst = compat_alloc_user_space(sizeof(struct raw1394_iso_packets)); 2731 struct raw1394_iso_packets __user *dst = compat_alloc_user_space(sizeof(struct raw1394_iso_packets));
2733 2732
2734 if (!copy_in_user(&dst->n_packets, &arg->n_packets, sizeof arg->n_packets) && 2733 if (!copy_in_user(&dst->n_packets, &arg->n_packets, sizeof arg->n_packets) &&
2735 !copy_from_user(&infos32, &arg->infos, sizeof infos32)) { 2734 !copy_from_user(&infos32, &arg->infos, sizeof infos32)) {
2736 infos = compat_ptr(infos32); 2735 infos = compat_ptr(infos32);
2737 if (!copy_to_user(&dst->infos, &infos, sizeof infos)) 2736 if (!copy_to_user(&dst->infos, &infos, sizeof infos))
2738 err = raw1394_ioctl(file, cmd, (unsigned long)dst); 2737 err = raw1394_ioctl(file, cmd, (unsigned long)dst);
2739 } 2738 }
2740 return err; 2739 return err;
2741 } 2740 }
2742 2741
2743 static long raw1394_read_cycle_timer32(struct file_info *fi, void __user * uaddr) 2742 static long raw1394_read_cycle_timer32(struct file_info *fi, void __user * uaddr)
2744 { 2743 {
2745 struct raw1394_cycle_timer32 ct; 2744 struct raw1394_cycle_timer32 ct;
2746 int err; 2745 int err;
2747 2746
2748 err = hpsb_read_cycle_timer(fi->host, &ct.cycle_timer, &ct.local_time); 2747 err = hpsb_read_cycle_timer(fi->host, &ct.cycle_timer, &ct.local_time);
2749 if (!err) 2748 if (!err)
2750 if (copy_to_user(uaddr, &ct, sizeof(ct))) 2749 if (copy_to_user(uaddr, &ct, sizeof(ct)))
2751 err = -EFAULT; 2750 err = -EFAULT;
2752 return err; 2751 return err;
2753 } 2752 }
2754 2753
2755 static long raw1394_compat_ioctl(struct file *file, 2754 static long raw1394_compat_ioctl(struct file *file,
2756 unsigned int cmd, unsigned long arg) 2755 unsigned int cmd, unsigned long arg)
2757 { 2756 {
2758 struct file_info *fi = file->private_data; 2757 struct file_info *fi = file->private_data;
2759 void __user *argp = (void __user *)arg; 2758 void __user *argp = (void __user *)arg;
2760 long err; 2759 long err;
2761 2760
2762 switch (cmd) { 2761 switch (cmd) {
2763 /* These requests have the same format as long as 'int' has the same size. */ 2762 /* These requests have the same format as long as 'int' has the same size. */
2764 case RAW1394_IOC_ISO_RECV_INIT: 2763 case RAW1394_IOC_ISO_RECV_INIT:
2765 case RAW1394_IOC_ISO_RECV_START: 2764 case RAW1394_IOC_ISO_RECV_START:
2766 case RAW1394_IOC_ISO_RECV_LISTEN_CHANNEL: 2765 case RAW1394_IOC_ISO_RECV_LISTEN_CHANNEL:
2767 case RAW1394_IOC_ISO_RECV_UNLISTEN_CHANNEL: 2766 case RAW1394_IOC_ISO_RECV_UNLISTEN_CHANNEL:
2768 case RAW1394_IOC_ISO_RECV_SET_CHANNEL_MASK: 2767 case RAW1394_IOC_ISO_RECV_SET_CHANNEL_MASK:
2769 case RAW1394_IOC_ISO_RECV_RELEASE_PACKETS: 2768 case RAW1394_IOC_ISO_RECV_RELEASE_PACKETS:
2770 case RAW1394_IOC_ISO_RECV_FLUSH: 2769 case RAW1394_IOC_ISO_RECV_FLUSH:
2771 case RAW1394_IOC_ISO_XMIT_RECV_STOP: 2770 case RAW1394_IOC_ISO_XMIT_RECV_STOP:
2772 case RAW1394_IOC_ISO_XMIT_INIT: 2771 case RAW1394_IOC_ISO_XMIT_INIT:
2773 case RAW1394_IOC_ISO_XMIT_START: 2772 case RAW1394_IOC_ISO_XMIT_START:
2774 case RAW1394_IOC_ISO_XMIT_SYNC: 2773 case RAW1394_IOC_ISO_XMIT_SYNC:
2775 case RAW1394_IOC_ISO_GET_STATUS: 2774 case RAW1394_IOC_ISO_GET_STATUS:
2776 case RAW1394_IOC_ISO_SHUTDOWN: 2775 case RAW1394_IOC_ISO_SHUTDOWN:
2777 case RAW1394_IOC_ISO_QUEUE_ACTIVITY: 2776 case RAW1394_IOC_ISO_QUEUE_ACTIVITY:
2778 err = raw1394_ioctl(file, cmd, arg); 2777 err = raw1394_ioctl(file, cmd, arg);
2779 break; 2778 break;
2780 /* These requests have a different format. */ 2779 /* These requests have a different format. */
2781 case RAW1394_IOC_ISO_RECV_PACKETS32: 2780 case RAW1394_IOC_ISO_RECV_PACKETS32:
2782 err = raw1394_iso_xmit_recv_packets32(file, RAW1394_IOC_ISO_RECV_PACKETS, argp); 2781 err = raw1394_iso_xmit_recv_packets32(file, RAW1394_IOC_ISO_RECV_PACKETS, argp);
2783 break; 2782 break;
2784 case RAW1394_IOC_ISO_XMIT_PACKETS32: 2783 case RAW1394_IOC_ISO_XMIT_PACKETS32:
2785 err = raw1394_iso_xmit_recv_packets32(file, RAW1394_IOC_ISO_XMIT_PACKETS, argp); 2784 err = raw1394_iso_xmit_recv_packets32(file, RAW1394_IOC_ISO_XMIT_PACKETS, argp);
2786 break; 2785 break;
2787 case RAW1394_IOC_GET_CYCLE_TIMER32: 2786 case RAW1394_IOC_GET_CYCLE_TIMER32:
2788 err = raw1394_read_cycle_timer32(fi, argp); 2787 err = raw1394_read_cycle_timer32(fi, argp);
2789 break; 2788 break;
2790 default: 2789 default:
2791 err = -EINVAL; 2790 err = -EINVAL;
2792 break; 2791 break;
2793 } 2792 }
2794 2793
2795 return err; 2794 return err;
2796 } 2795 }
2797 #endif 2796 #endif
2798 2797
2799 static unsigned int raw1394_poll(struct file *file, poll_table * pt) 2798 static unsigned int raw1394_poll(struct file *file, poll_table * pt)
2800 { 2799 {
2801 struct file_info *fi = file->private_data; 2800 struct file_info *fi = file->private_data;
2802 unsigned int mask = POLLOUT | POLLWRNORM; 2801 unsigned int mask = POLLOUT | POLLWRNORM;
2803 unsigned long flags; 2802 unsigned long flags;
2804 2803
2805 poll_wait(file, &fi->wait_complete, pt); 2804 poll_wait(file, &fi->wait_complete, pt);
2806 2805
2807 spin_lock_irqsave(&fi->reqlists_lock, flags); 2806 spin_lock_irqsave(&fi->reqlists_lock, flags);
2808 if (!list_empty(&fi->req_complete)) { 2807 if (!list_empty(&fi->req_complete)) {
2809 mask |= POLLIN | POLLRDNORM; 2808 mask |= POLLIN | POLLRDNORM;
2810 } 2809 }
2811 spin_unlock_irqrestore(&fi->reqlists_lock, flags); 2810 spin_unlock_irqrestore(&fi->reqlists_lock, flags);
2812 2811
2813 return mask; 2812 return mask;
2814 } 2813 }
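
The poll() implementation above always reports the file as writable and adds readability once a completed request sits in fi->req_complete, so a minimal caller looks like this (read_one_completed_request() is a hypothetical helper):

#include <poll.h>

static void wait_for_completion(int fd)
{
	struct pollfd pfd = { .fd = fd, .events = POLLIN };

	/* blocks until raw1394_read() would return a completed request */
	if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLIN))
		read_one_completed_request(fd);	/* hypothetical helper */
}
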
2815 2814
2816 static int raw1394_open(struct inode *inode, struct file *file) 2815 static int raw1394_open(struct inode *inode, struct file *file)
2817 { 2816 {
2818 struct file_info *fi; 2817 struct file_info *fi;
2819 2818
2820 fi = kzalloc(sizeof(*fi), GFP_KERNEL); 2819 fi = kzalloc(sizeof(*fi), GFP_KERNEL);
2821 if (!fi) 2820 if (!fi)
2822 return -ENOMEM; 2821 return -ENOMEM;
2823 2822
2824 fi->notification = (u8) RAW1394_NOTIFY_ON; /* busreset notification */ 2823 fi->notification = (u8) RAW1394_NOTIFY_ON; /* busreset notification */
2825 2824
2826 INIT_LIST_HEAD(&fi->list); 2825 INIT_LIST_HEAD(&fi->list);
2827 mutex_init(&fi->state_mutex); 2826 mutex_init(&fi->state_mutex);
2828 fi->state = opened; 2827 fi->state = opened;
2829 INIT_LIST_HEAD(&fi->req_pending); 2828 INIT_LIST_HEAD(&fi->req_pending);
2830 INIT_LIST_HEAD(&fi->req_complete); 2829 INIT_LIST_HEAD(&fi->req_complete);
2831 spin_lock_init(&fi->reqlists_lock); 2830 spin_lock_init(&fi->reqlists_lock);
2832 init_waitqueue_head(&fi->wait_complete); 2831 init_waitqueue_head(&fi->wait_complete);
2833 INIT_LIST_HEAD(&fi->addr_list); 2832 INIT_LIST_HEAD(&fi->addr_list);
2834 2833
2835 file->private_data = fi; 2834 file->private_data = fi;
2836 2835
2837 return nonseekable_open(inode, file); 2836 return nonseekable_open(inode, file);
2838 } 2837 }
2839 2838
2840 static int raw1394_release(struct inode *inode, struct file *file) 2839 static int raw1394_release(struct inode *inode, struct file *file)
2841 { 2840 {
2842 struct file_info *fi = file->private_data; 2841 struct file_info *fi = file->private_data;
2843 struct list_head *lh; 2842 struct list_head *lh;
2844 struct pending_request *req; 2843 struct pending_request *req;
2845 int i, fail; 2844 int i, fail;
2846 int retval = 0; 2845 int retval = 0;
2847 struct list_head *entry; 2846 struct list_head *entry;
2848 struct arm_addr *addr = NULL; 2847 struct arm_addr *addr = NULL;
2849 struct host_info *hi; 2848 struct host_info *hi;
2850 struct file_info *fi_hlp = NULL; 2849 struct file_info *fi_hlp = NULL;
2851 struct arm_addr *arm_addr = NULL; 2850 struct arm_addr *arm_addr = NULL;
2852 int another_host; 2851 int another_host;
2853 int csr_mod = 0; 2852 int csr_mod = 0;
2854 unsigned long flags; 2853 unsigned long flags;
2855 2854
2856 if (fi->iso_state != RAW1394_ISO_INACTIVE) 2855 if (fi->iso_state != RAW1394_ISO_INACTIVE)
2857 raw1394_iso_shutdown(fi); 2856 raw1394_iso_shutdown(fi);
2858 2857
2859 spin_lock_irqsave(&host_info_lock, flags); 2858 spin_lock_irqsave(&host_info_lock, flags);
2860 2859
2861 fail = 0; 2860 fail = 0;
2862 /* invalidate the address entries */ 2861 /* invalidate the address entries */
2863 2862
2864 while (!list_empty(&fi->addr_list)) { 2863 while (!list_empty(&fi->addr_list)) {
2865 another_host = 0; 2864 another_host = 0;
2866 lh = fi->addr_list.next; 2865 lh = fi->addr_list.next;
2867 addr = list_entry(lh, struct arm_addr, addr_list); 2866 addr = list_entry(lh, struct arm_addr, addr_list);
2868 /* another host with a valid address entry containing 2867 /* another host with a valid address entry containing
2869 the same address range? */ 2868 the same address range? */
2870 list_for_each_entry(hi, &host_info_list, list) { 2869 list_for_each_entry(hi, &host_info_list, list) {
2871 if (hi->host != fi->host) { 2870 if (hi->host != fi->host) {
2872 list_for_each_entry(fi_hlp, &hi->file_info_list, 2871 list_for_each_entry(fi_hlp, &hi->file_info_list,
2873 list) { 2872 list) {
2874 entry = fi_hlp->addr_list.next; 2873 entry = fi_hlp->addr_list.next;
2875 while (entry != &(fi_hlp->addr_list)) { 2874 while (entry != &(fi_hlp->addr_list)) {
2876 arm_addr = list_entry(entry, struct 2875 arm_addr = list_entry(entry, struct
2877 arm_addr, 2876 arm_addr,
2878 addr_list); 2877 addr_list);
2879 if (arm_addr->start == 2878 if (arm_addr->start ==
2880 addr->start) { 2879 addr->start) {
2881 DBGMSG 2880 DBGMSG
2882 ("raw1394_release: " 2881 ("raw1394_release: "
2883 "another host owns " 2882 "another host owns "
2884 "same address range"); 2883 "same address range");
2885 another_host = 1; 2884 another_host = 1;
2886 break; 2885 break;
2887 } 2886 }
2888 entry = entry->next; 2887 entry = entry->next;
2889 } 2888 }
2890 if (another_host) { 2889 if (another_host) {
2891 break; 2890 break;
2892 } 2891 }
2893 } 2892 }
2894 } 2893 }
2895 } 2894 }
2896 if (!another_host) { 2895 if (!another_host) {
2897 DBGMSG("raw1394_release: call hpsb_unregister_addrspace"); 2896 DBGMSG("raw1394_release: call hpsb_unregister_addrspace");
2898 retval = 2897 retval =
2899 hpsb_unregister_addrspace(&raw1394_highlevel, 2898 hpsb_unregister_addrspace(&raw1394_highlevel,
2900 fi->host, addr->start); 2899 fi->host, addr->start);
2901 if (!retval) { 2900 if (!retval) {
2902 ++fail; 2901 ++fail;
2903 printk(KERN_ERR 2902 printk(KERN_ERR
2904 "raw1394_release: hpsb_unregister_addrspace failed\n"); 2903 "raw1394_release: hpsb_unregister_addrspace failed\n");
2905 } 2904 }
2906 } 2905 }
2907 DBGMSG("raw1394_release: delete addr_entry from list"); 2906 DBGMSG("raw1394_release: delete addr_entry from list");
2908 list_del(&addr->addr_list); 2907 list_del(&addr->addr_list);
2909 vfree(addr->addr_space_buffer); 2908 vfree(addr->addr_space_buffer);
2910 kfree(addr); 2909 kfree(addr);
2911 } /* while */ 2910 } /* while */
2912 spin_unlock_irqrestore(&host_info_lock, flags); 2911 spin_unlock_irqrestore(&host_info_lock, flags);
2913 if (fail > 0) { 2912 if (fail > 0) {
2914 printk(KERN_ERR "raw1394: during addr_list-release " 2913 printk(KERN_ERR "raw1394: during addr_list-release "
2915 "error(s) occurred\n"); 2914 "error(s) occurred\n");
2916 } 2915 }
2917 2916
2918 for (;;) { 2917 for (;;) {
2919 /* This locked section guarantees that neither 2918 /* This locked section guarantees that neither
2920 * complete nor pending requests exist once i!=0 */ 2919 * complete nor pending requests exist once i!=0 */
2921 spin_lock_irqsave(&fi->reqlists_lock, flags); 2920 spin_lock_irqsave(&fi->reqlists_lock, flags);
2922 while ((req = __next_complete_req(fi))) 2921 while ((req = __next_complete_req(fi)))
2923 free_pending_request(req); 2922 free_pending_request(req);
2924 2923
2925 i = list_empty(&fi->req_pending); 2924 i = list_empty(&fi->req_pending);
2926 spin_unlock_irqrestore(&fi->reqlists_lock, flags); 2925 spin_unlock_irqrestore(&fi->reqlists_lock, flags);
2927 2926
2928 if (i) 2927 if (i)
2929 break; 2928 break;
2930 /* 2929 /*
2931 * Sleep until more requests can be freed. 2930 * Sleep until more requests can be freed.
2932 * 2931 *
2933 * NB: We call the macro wait_event() with a condition argument 2932 * NB: We call the macro wait_event() with a condition argument
2934 * with side effect. This is only possible because the side 2933 * with side effect. This is only possible because the side
2935 * effect does not occur until the condition becomes true, and 2934 * effect does not occur until the condition becomes true, and
2936 * wait_event() won't evaluate the condition again after that. 2935 * wait_event() won't evaluate the condition again after that.
2937 */ 2936 */
2938 wait_event(fi->wait_complete, (req = next_complete_req(fi))); 2937 wait_event(fi->wait_complete, (req = next_complete_req(fi)));
2939 free_pending_request(req); 2938 free_pending_request(req);
2940 } 2939 }
2941 2940
2942 /* Remove any sub-trees left by user space programs */ 2941 /* Remove any sub-trees left by user space programs */
2943 for (i = 0; i < RAW1394_MAX_USER_CSR_DIRS; i++) { 2942 for (i = 0; i < RAW1394_MAX_USER_CSR_DIRS; i++) {
2944 struct csr1212_dentry *dentry; 2943 struct csr1212_dentry *dentry;
2945 if (!fi->csr1212_dirs[i]) 2944 if (!fi->csr1212_dirs[i])
2946 continue; 2945 continue;
2947 for (dentry = 2946 for (dentry =
2948 fi->csr1212_dirs[i]->value.directory.dentries_head; dentry; 2947 fi->csr1212_dirs[i]->value.directory.dentries_head; dentry;
2949 dentry = dentry->next) { 2948 dentry = dentry->next) {
2950 csr1212_detach_keyval_from_directory(fi->host->csr.rom-> 2949 csr1212_detach_keyval_from_directory(fi->host->csr.rom->
2951 root_kv, 2950 root_kv,
2952 dentry->kv); 2951 dentry->kv);
2953 } 2952 }
2954 csr1212_release_keyval(fi->csr1212_dirs[i]); 2953 csr1212_release_keyval(fi->csr1212_dirs[i]);
2955 fi->csr1212_dirs[i] = NULL; 2954 fi->csr1212_dirs[i] = NULL;
2956 csr_mod = 1; 2955 csr_mod = 1;
2957 } 2956 }
2958 2957
2959 if ((csr_mod || fi->cfgrom_upd) 2958 if ((csr_mod || fi->cfgrom_upd)
2960 && hpsb_update_config_rom_image(fi->host) < 0) 2959 && hpsb_update_config_rom_image(fi->host) < 0)
2961 HPSB_ERR 2960 HPSB_ERR
2962 ("Failed to generate Configuration ROM image for host %d", 2961 ("Failed to generate Configuration ROM image for host %d",
2963 fi->host->id); 2962 fi->host->id);
2964 2963
2965 if (fi->state == connected) { 2964 if (fi->state == connected) {
2966 spin_lock_irqsave(&host_info_lock, flags); 2965 spin_lock_irqsave(&host_info_lock, flags);
2967 list_del(&fi->list); 2966 list_del(&fi->list);
2968 spin_unlock_irqrestore(&host_info_lock, flags); 2967 spin_unlock_irqrestore(&host_info_lock, flags);
2969 2968
2970 put_device(&fi->host->device); 2969 put_device(&fi->host->device);
2971 } 2970 }
2972 2971
2973 spin_lock_irqsave(&host_info_lock, flags); 2972 spin_lock_irqsave(&host_info_lock, flags);
2974 if (fi->host) 2973 if (fi->host)
2975 module_put(fi->host->driver->owner); 2974 module_put(fi->host->driver->owner);
2976 spin_unlock_irqrestore(&host_info_lock, flags); 2975 spin_unlock_irqrestore(&host_info_lock, flags);
2977 2976
2978 kfree(fi); 2977 kfree(fi);
2979 2978
2980 return 0; 2979 return 0;
2981 } 2980 }
2982 2981
2983 /*** HOTPLUG STUFF **********************************************************/ 2982 /*** HOTPLUG STUFF **********************************************************/
2984 /* 2983 /*
2985 * Export information about protocols/devices supported by this driver. 2984 * Export information about protocols/devices supported by this driver.
2986 */ 2985 */
2987 #ifdef MODULE 2986 #ifdef MODULE
2988 static const struct ieee1394_device_id raw1394_id_table[] = { 2987 static const struct ieee1394_device_id raw1394_id_table[] = {
2989 { 2988 {
2990 .match_flags = IEEE1394_MATCH_SPECIFIER_ID | IEEE1394_MATCH_VERSION, 2989 .match_flags = IEEE1394_MATCH_SPECIFIER_ID | IEEE1394_MATCH_VERSION,
2991 .specifier_id = AVC_UNIT_SPEC_ID_ENTRY & 0xffffff, 2990 .specifier_id = AVC_UNIT_SPEC_ID_ENTRY & 0xffffff,
2992 .version = AVC_SW_VERSION_ENTRY & 0xffffff}, 2991 .version = AVC_SW_VERSION_ENTRY & 0xffffff},
2993 { 2992 {
2994 .match_flags = IEEE1394_MATCH_SPECIFIER_ID | IEEE1394_MATCH_VERSION, 2993 .match_flags = IEEE1394_MATCH_SPECIFIER_ID | IEEE1394_MATCH_VERSION,
2995 .specifier_id = CAMERA_UNIT_SPEC_ID_ENTRY & 0xffffff, 2994 .specifier_id = CAMERA_UNIT_SPEC_ID_ENTRY & 0xffffff,
2996 .version = CAMERA_SW_VERSION_ENTRY & 0xffffff}, 2995 .version = CAMERA_SW_VERSION_ENTRY & 0xffffff},
2997 { 2996 {
2998 .match_flags = IEEE1394_MATCH_SPECIFIER_ID | IEEE1394_MATCH_VERSION, 2997 .match_flags = IEEE1394_MATCH_SPECIFIER_ID | IEEE1394_MATCH_VERSION,
2999 .specifier_id = CAMERA_UNIT_SPEC_ID_ENTRY & 0xffffff, 2998 .specifier_id = CAMERA_UNIT_SPEC_ID_ENTRY & 0xffffff,
3000 .version = (CAMERA_SW_VERSION_ENTRY + 1) & 0xffffff}, 2999 .version = (CAMERA_SW_VERSION_ENTRY + 1) & 0xffffff},
3001 { 3000 {
3002 .match_flags = IEEE1394_MATCH_SPECIFIER_ID | IEEE1394_MATCH_VERSION, 3001 .match_flags = IEEE1394_MATCH_SPECIFIER_ID | IEEE1394_MATCH_VERSION,
3003 .specifier_id = CAMERA_UNIT_SPEC_ID_ENTRY & 0xffffff, 3002 .specifier_id = CAMERA_UNIT_SPEC_ID_ENTRY & 0xffffff,
3004 .version = (CAMERA_SW_VERSION_ENTRY + 2) & 0xffffff}, 3003 .version = (CAMERA_SW_VERSION_ENTRY + 2) & 0xffffff},
3005 {} 3004 {}
3006 }; 3005 };
3007 3006
3008 MODULE_DEVICE_TABLE(ieee1394, raw1394_id_table); 3007 MODULE_DEVICE_TABLE(ieee1394, raw1394_id_table);
3009 #endif /* MODULE */ 3008 #endif /* MODULE */
3010 3009
3011 static struct hpsb_protocol_driver raw1394_driver = { 3010 static struct hpsb_protocol_driver raw1394_driver = {
3012 .name = "raw1394", 3011 .name = "raw1394",
3013 }; 3012 };
3014 3013
3015 /******************************************************************************/ 3014 /******************************************************************************/
3016 3015
3017 static struct hpsb_highlevel raw1394_highlevel = { 3016 static struct hpsb_highlevel raw1394_highlevel = {
3018 .name = RAW1394_DEVICE_NAME, 3017 .name = RAW1394_DEVICE_NAME,
3019 .add_host = add_host, 3018 .add_host = add_host,
3020 .remove_host = remove_host, 3019 .remove_host = remove_host,
3021 .host_reset = host_reset, 3020 .host_reset = host_reset,
3022 .fcp_request = fcp_request, 3021 .fcp_request = fcp_request,
3023 }; 3022 };
3024 3023
3025 static struct cdev raw1394_cdev; 3024 static struct cdev raw1394_cdev;
3026 static const struct file_operations raw1394_fops = { 3025 static const struct file_operations raw1394_fops = {
3027 .owner = THIS_MODULE, 3026 .owner = THIS_MODULE,
3028 .read = raw1394_read, 3027 .read = raw1394_read,
3029 .write = raw1394_write, 3028 .write = raw1394_write,
3030 .mmap = raw1394_mmap, 3029 .mmap = raw1394_mmap,
3031 .unlocked_ioctl = raw1394_ioctl, 3030 .unlocked_ioctl = raw1394_ioctl,
3032 #ifdef CONFIG_COMPAT 3031 #ifdef CONFIG_COMPAT
3033 .compat_ioctl = raw1394_compat_ioctl, 3032 .compat_ioctl = raw1394_compat_ioctl,
3034 #endif 3033 #endif
3035 .poll = raw1394_poll, 3034 .poll = raw1394_poll,
3036 .open = raw1394_open, 3035 .open = raw1394_open,
3037 .release = raw1394_release, 3036 .release = raw1394_release,
3038 .llseek = no_llseek, 3037 .llseek = no_llseek,
3039 }; 3038 };
3040 3039
3041 static int __init init_raw1394(void) 3040 static int __init init_raw1394(void)
3042 { 3041 {
3043 int ret = 0; 3042 int ret = 0;
3044 3043
3045 hpsb_register_highlevel(&raw1394_highlevel); 3044 hpsb_register_highlevel(&raw1394_highlevel);
3046 3045
3047 if (IS_ERR 3046 if (IS_ERR
3048 (device_create(hpsb_protocol_class, NULL, 3047 (device_create(hpsb_protocol_class, NULL,
3049 MKDEV(IEEE1394_MAJOR, 3048 MKDEV(IEEE1394_MAJOR,
3050 IEEE1394_MINOR_BLOCK_RAW1394 * 16), 3049 IEEE1394_MINOR_BLOCK_RAW1394 * 16),
3051 NULL, RAW1394_DEVICE_NAME))) { 3050 NULL, RAW1394_DEVICE_NAME))) {
3052 ret = -EFAULT; 3051 ret = -EFAULT;
3053 goto out_unreg; 3052 goto out_unreg;
3054 } 3053 }
3055 3054
3056 cdev_init(&raw1394_cdev, &raw1394_fops); 3055 cdev_init(&raw1394_cdev, &raw1394_fops);
3057 raw1394_cdev.owner = THIS_MODULE; 3056 raw1394_cdev.owner = THIS_MODULE;
3058 ret = cdev_add(&raw1394_cdev, IEEE1394_RAW1394_DEV, 1); 3057 ret = cdev_add(&raw1394_cdev, IEEE1394_RAW1394_DEV, 1);
3059 if (ret) { 3058 if (ret) {
3060 HPSB_ERR("raw1394 failed to register minor device block"); 3059 HPSB_ERR("raw1394 failed to register minor device block");
3061 goto out_dev; 3060 goto out_dev;
3062 } 3061 }
3063 3062
3064 HPSB_INFO("raw1394: /dev/%s device initialized", RAW1394_DEVICE_NAME); 3063 HPSB_INFO("raw1394: /dev/%s device initialized", RAW1394_DEVICE_NAME);
3065 3064
3066 ret = hpsb_register_protocol(&raw1394_driver); 3065 ret = hpsb_register_protocol(&raw1394_driver);
3067 if (ret) { 3066 if (ret) {
3068 HPSB_ERR("raw1394: failed to register protocol"); 3067 HPSB_ERR("raw1394: failed to register protocol");
3069 cdev_del(&raw1394_cdev); 3068 cdev_del(&raw1394_cdev);
3070 goto out_dev; 3069 goto out_dev;
3071 } 3070 }
3072 3071
3073 goto out; 3072 goto out;
3074 3073
3075 out_dev: 3074 out_dev:
3076 device_destroy(hpsb_protocol_class, 3075 device_destroy(hpsb_protocol_class,
3077 MKDEV(IEEE1394_MAJOR, 3076 MKDEV(IEEE1394_MAJOR,
3078 IEEE1394_MINOR_BLOCK_RAW1394 * 16)); 3077 IEEE1394_MINOR_BLOCK_RAW1394 * 16));
3079 out_unreg: 3078 out_unreg:
3080 hpsb_unregister_highlevel(&raw1394_highlevel); 3079 hpsb_unregister_highlevel(&raw1394_highlevel);
3081 out: 3080 out:
3082 return ret; 3081 return ret;
3083 } 3082 }
3084 3083
3085 static void __exit cleanup_raw1394(void) 3084 static void __exit cleanup_raw1394(void)
3086 { 3085 {
3087 device_destroy(hpsb_protocol_class, 3086 device_destroy(hpsb_protocol_class,
3088 MKDEV(IEEE1394_MAJOR, 3087 MKDEV(IEEE1394_MAJOR,
3089 IEEE1394_MINOR_BLOCK_RAW1394 * 16)); 3088 IEEE1394_MINOR_BLOCK_RAW1394 * 16));
3090 cdev_del(&raw1394_cdev); 3089 cdev_del(&raw1394_cdev);
3091 hpsb_unregister_highlevel(&raw1394_highlevel); 3090 hpsb_unregister_highlevel(&raw1394_highlevel);
3092 hpsb_unregister_protocol(&raw1394_driver); 3091 hpsb_unregister_protocol(&raw1394_driver);
3093 } 3092 }
3094 3093
3095 module_init(init_raw1394); 3094 module_init(init_raw1394);
3096 module_exit(cleanup_raw1394); 3095 module_exit(cleanup_raw1394);
3097 MODULE_LICENSE("GPL"); 3096 MODULE_LICENSE("GPL");
3098 3097
drivers/ieee1394/sbp2.c
1 /* 1 /*
2 * sbp2.c - SBP-2 protocol driver for IEEE-1394 2 * sbp2.c - SBP-2 protocol driver for IEEE-1394
3 * 3 *
4 * Copyright (C) 2000 James Goodwin, Filanet Corporation (www.filanet.com) 4 * Copyright (C) 2000 James Goodwin, Filanet Corporation (www.filanet.com)
5 * jamesg@filanet.com (JSG) 5 * jamesg@filanet.com (JSG)
6 * 6 *
7 * Copyright (C) 2003 Ben Collins <bcollins@debian.org> 7 * Copyright (C) 2003 Ben Collins <bcollins@debian.org>
8 * 8 *
9 * This program is free software; you can redistribute it and/or modify 9 * This program is free software; you can redistribute it and/or modify
10 * it under the terms of the GNU General Public License as published by 10 * it under the terms of the GNU General Public License as published by
11 * the Free Software Foundation; either version 2 of the License, or 11 * the Free Software Foundation; either version 2 of the License, or
12 * (at your option) any later version. 12 * (at your option) any later version.
13 * 13 *
14 * This program is distributed in the hope that it will be useful, 14 * This program is distributed in the hope that it will be useful,
15 * but WITHOUT ANY WARRANTY; without even the implied warranty of 15 * but WITHOUT ANY WARRANTY; without even the implied warranty of
16 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
17 * GNU General Public License for more details. 17 * GNU General Public License for more details.
18 * 18 *
19 * You should have received a copy of the GNU General Public License 19 * You should have received a copy of the GNU General Public License
20 * along with this program; if not, write to the Free Software Foundation, 20 * along with this program; if not, write to the Free Software Foundation,
21 * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 21 * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
22 */ 22 */
23 23
24 /* 24 /*
25 * Brief Description: 25 * Brief Description:
26 * 26 *
27 * This driver implements the Serial Bus Protocol 2 (SBP-2) over IEEE-1394 27 * This driver implements the Serial Bus Protocol 2 (SBP-2) over IEEE-1394
28 * under Linux. The SBP-2 driver is implemented as an IEEE-1394 high-level 28 * under Linux. The SBP-2 driver is implemented as an IEEE-1394 high-level
29 * driver. It also registers as a SCSI lower-level driver in order to accept 29 * driver. It also registers as a SCSI lower-level driver in order to accept
30 * SCSI commands for transport using SBP-2. 30 * SCSI commands for transport using SBP-2.
31 * 31 *
32 * You may access any attached SBP-2 unit (usually a storage device) as a 32 * You may access any attached SBP-2 unit (usually a storage device) as a
33 * regular SCSI device, e.g. mount /dev/sda1, fdisk, mkfs, etc. 33 * regular SCSI device, e.g. mount /dev/sda1, fdisk, mkfs, etc.
34 * 34 *
35 * See http://www.t10.org/drafts.htm#sbp2 for the final draft of the SBP-2 35 * See http://www.t10.org/drafts.htm#sbp2 for the final draft of the SBP-2
36 * specification and for where to purchase the official standard. 36 * specification and for where to purchase the official standard.
37 * 37 *
38 * TODO: 38 * TODO:
39 * - look into possible improvements of the SCSI error handlers 39 * - look into possible improvements of the SCSI error handlers
40 * - handle Unit_Characteristics.mgt_ORB_timeout and .ORB_size 40 * - handle Unit_Characteristics.mgt_ORB_timeout and .ORB_size
41 * - handle Logical_Unit_Number.ordered 41 * - handle Logical_Unit_Number.ordered
42 * - handle src == 1 in status blocks 42 * - handle src == 1 in status blocks
43 * - reimplement the DMA mapping in absence of physical DMA so that 43 * - reimplement the DMA mapping in absence of physical DMA so that
44 * bus_to_virt is no longer required 44 * bus_to_virt is no longer required
45 * - debug the handling of absent physical DMA 45 * - debug the handling of absent physical DMA
46 * - replace CONFIG_IEEE1394_SBP2_PHYS_DMA by automatic detection 46 * - replace CONFIG_IEEE1394_SBP2_PHYS_DMA by automatic detection
47 * (this is easy but depends on the previous two TODO items) 47 * (this is easy but depends on the previous two TODO items)
48 * - make the parameter serialize_io configurable per device 48 * - make the parameter serialize_io configurable per device
49 * - move all requests to fetch agent registers into non-atomic context, 49 * - move all requests to fetch agent registers into non-atomic context,
50 * replace all usages of sbp2util_node_write_no_wait by true transactions 50 * replace all usages of sbp2util_node_write_no_wait by true transactions
51 * Grep for inline FIXME comments below. 51 * Grep for inline FIXME comments below.
52 */ 52 */
53 53
#include <linux/blkdev.h>
#include <linux/compiler.h>
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/stat.h>
#include <linux/string.h>
#include <linux/stringify.h>
#include <linux/types.h>
#include <linux/wait.h>
#include <linux/workqueue.h>
#include <linux/scatterlist.h>

#include <asm/byteorder.h>
#include <asm/errno.h>
#include <asm/param.h>
#include <asm/system.h>
#include <asm/types.h>

#ifdef CONFIG_IEEE1394_SBP2_PHYS_DMA
#include <asm/io.h> /* for bus_to_virt */
#endif

#include <scsi/scsi.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_dbg.h>
#include <scsi/scsi_device.h>
#include <scsi/scsi_host.h>

#include "csr1212.h"
#include "highlevel.h"
#include "hosts.h"
#include "ieee1394.h"
#include "ieee1394_core.h"
#include "ieee1394_hotplug.h"
#include "ieee1394_transactions.h"
#include "ieee1394_types.h"
#include "nodemgr.h"
#include "sbp2.h"

/*
 * Module load parameter definitions
 */

/*
 * Change max_speed on module load if you have a bad IEEE-1394
 * controller that has trouble running 2KB packets at 400 Mb/s.
 *
 * NOTE: On certain OHCI parts I have seen short packets on async transmit
 * (probably due to PCI latency/throughput issues with the part). You can
 * bump down the speed if you are running into problems.
 */
static int sbp2_max_speed = IEEE1394_SPEED_MAX;
module_param_named(max_speed, sbp2_max_speed, int, 0644);
MODULE_PARM_DESC(max_speed, "Limit data transfer speed (5 <= 3200, "
		 "4 <= 1600, 3 <= 800, 2 <= 400, 1 <= 200, 0 = 100 Mb/s)");

/*
 * Set serialize_io to 0 or N to use dynamically appended lists of command ORBs.
 * This is and always has been buggy in multiple subtle ways. See above TODOs.
 */
static int sbp2_serialize_io = 1;
module_param_named(serialize_io, sbp2_serialize_io, bool, 0444);
MODULE_PARM_DESC(serialize_io, "Serialize requests coming from SCSI drivers "
		 "(default = Y, faster but buggy = N)");

/*
 * Adjust max_sectors if you'd like to influence how many sectors each SCSI
 * command can transfer at most. Please note that some older SBP-2 bridge
 * chips are broken for transfers greater or equal to 128KB, therefore
 * max_sectors used to be a safe 255 sectors for many years. We now have a
 * default of 0 here which means that we let the SCSI stack choose a limit.
 *
 * The SBP2_WORKAROUND_128K_MAX_TRANS flag, if set either in the workarounds
 * module parameter or in the sbp2_workarounds_table[], will override the
 * value of max_sectors. We should use sbp2_workarounds_table[] to cover any
 * bridge chip which becomes known to need the 255 sectors limit.
 */
static int sbp2_max_sectors;
module_param_named(max_sectors, sbp2_max_sectors, int, 0444);
MODULE_PARM_DESC(max_sectors, "Change max sectors per I/O supported "
		 "(default = 0 = use SCSI stack's default)");

/*
 * Exclusive login to sbp2 device? In most cases, the sbp2 driver should
 * do an exclusive login, as it's generally unsafe to have two hosts
 * talking to a single sbp2 device at the same time (filesystem coherency,
 * etc.). If you're running an sbp2 device that supports multiple logins,
 * and you're either running read-only filesystems or some sort of special
 * filesystem supporting multiple hosts, e.g. OpenGFS, Oracle Cluster
 * File System, or Lustre, then set exclusive_login to zero.
 *
 * So far only bridges from Oxford Semiconductor are known to support
 * concurrent logins. Depending on firmware, four or two concurrent logins
 * are possible on OXFW911 and newer Oxsemi bridges.
 */
static int sbp2_exclusive_login = 1;
module_param_named(exclusive_login, sbp2_exclusive_login, bool, 0644);
MODULE_PARM_DESC(exclusive_login, "Exclusive login to sbp2 device "
		 "(default = Y, use N for concurrent initiators)");

/*
 * If any of the following workarounds is required for your device to work,
 * please submit the kernel messages logged by sbp2 to the linux1394-devel
 * mailing list.
 *
 * - 128kB max transfer
 *   Limit transfer size. Necessary for some old bridges.
 *
 * - 36 byte inquiry
 *   When scsi_mod probes the device, let the inquiry command look like that
 *   from MS Windows.
 *
 * - skip mode page 8
 *   Suppress sending of mode_sense for mode page 8 if the device pretends to
 *   support the SCSI Primary Block commands instead of Reduced Block Commands.
 *
 * - fix capacity
 *   Tell sd_mod to correct the last sector number reported by read_capacity.
 *   Avoids access beyond actual disk limits on devices with an off-by-one bug.
 *   Don't use this with devices which don't have this bug.
 *
 * - delay inquiry
 *   Wait extra SBP2_INQUIRY_DELAY seconds after login before SCSI inquiry.
 *
 * - power condition
 *   Set the power condition field in the START STOP UNIT commands sent by
 *   sd_mod on suspend, resume, and shutdown (if manage_start_stop is on).
 *   Some disks need this to spin down or to resume properly.
 *
 * - override internal blacklist
 *   Instead of adding to the built-in blacklist, use only the workarounds
 *   specified in the module load parameter.
 *   Useful if a blacklist entry interfered with a non-broken device.
 */
static int sbp2_default_workarounds;
module_param_named(workarounds, sbp2_default_workarounds, int, 0644);
MODULE_PARM_DESC(workarounds, "Work around device bugs (default = 0"
	", 128kB max transfer = " __stringify(SBP2_WORKAROUND_128K_MAX_TRANS)
	", 36 byte inquiry = " __stringify(SBP2_WORKAROUND_INQUIRY_36)
	", skip mode page 8 = " __stringify(SBP2_WORKAROUND_MODE_SENSE_8)
	", fix capacity = " __stringify(SBP2_WORKAROUND_FIX_CAPACITY)
	", delay inquiry = " __stringify(SBP2_WORKAROUND_DELAY_INQUIRY)
	", set power condition in start stop unit = "
	__stringify(SBP2_WORKAROUND_POWER_CONDITION)
	", override internal blacklist = " __stringify(SBP2_WORKAROUND_OVERRIDE)
	", or a combination)");

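/*
 * (Editor's illustrative sketch, not part of the original file: the
 * workarounds parameter is a bitwise OR of the SBP2_WORKAROUND_* flags
 * from sbp2.h described above. The helper name below is hypothetical;
 * the block is disabled so it does not alter the driver.)
 */
#if 0
static unsigned int sbp2_example_combined_workarounds(void)
{
	/* e.g. pass this value via "modprobe sbp2 workarounds=..." */
	return SBP2_WORKAROUND_INQUIRY_36 | SBP2_WORKAROUND_FIX_CAPACITY;
}
#endif
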
212 /* 212 /*
213 * This influences the format of the sysfs attribute 213 * This influences the format of the sysfs attribute
214 * /sys/bus/scsi/devices/.../ieee1394_id. 214 * /sys/bus/scsi/devices/.../ieee1394_id.
215 * 215 *
216 * The default format is like in older kernels: %016Lx:%d:%d 216 * The default format is like in older kernels: %016Lx:%d:%d
217 * It contains the target's EUI-64, a number given to the logical unit by 217 * It contains the target's EUI-64, a number given to the logical unit by
218 * the ieee1394 driver's nodemgr (starting at 0), and the LUN. 218 * the ieee1394 driver's nodemgr (starting at 0), and the LUN.
219 * 219 *
220 * The long format is: %016Lx:%06x:%04x 220 * The long format is: %016Lx:%06x:%04x
221 * It contains the target's EUI-64, the unit directory's directory_ID as per 221 * It contains the target's EUI-64, the unit directory's directory_ID as per
222 * IEEE 1212 clause 7.7.19, and the LUN. This format comes closest to the 222 * IEEE 1212 clause 7.7.19, and the LUN. This format comes closest to the
223 * format of SBP(-3) target port and logical unit identifier as per SAM (SCSI 223 * format of SBP(-3) target port and logical unit identifier as per SAM (SCSI
224 * Architecture Model) rev.2 to 4 annex A. Therefore and because it is 224 * Architecture Model) rev.2 to 4 annex A. Therefore and because it is
225 * independent of the implementation of the ieee1394 nodemgr, the longer format 225 * independent of the implementation of the ieee1394 nodemgr, the longer format
226 * is recommended for future use. 226 * is recommended for future use.
227 */ 227 */
228 static int sbp2_long_sysfs_ieee1394_id; 228 static int sbp2_long_sysfs_ieee1394_id;
229 module_param_named(long_ieee1394_id, sbp2_long_sysfs_ieee1394_id, bool, 0644); 229 module_param_named(long_ieee1394_id, sbp2_long_sysfs_ieee1394_id, bool, 0644);
230 MODULE_PARM_DESC(long_ieee1394_id, "8+3+2 bytes format of ieee1394_id in sysfs " 230 MODULE_PARM_DESC(long_ieee1394_id, "8+3+2 bytes format of ieee1394_id in sysfs "
231 "(default = backwards-compatible = N, SAM-conforming = Y)"); 231 "(default = backwards-compatible = N, SAM-conforming = Y)");
232 232
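/*
 * (Editor's illustrative sketch, not part of the original file: how the two
 * ieee1394_id formats described above might be rendered. The function and
 * parameter names are hypothetical placeholders; the block is disabled so
 * it does not alter the driver.)
 */
#if 0
static void sbp2_example_format_ieee1394_id(char *buf, size_t len,
					    u64 eui64, int unit_id, int lun)
{
	if (sbp2_long_sysfs_ieee1394_id)
		/* long format: EUI-64, directory_ID, LUN */
		snprintf(buf, len, "%016Lx:%06x:%04x",
			 (unsigned long long)eui64, unit_id, lun);
	else
		/* default format: EUI-64, nodemgr unit number, LUN */
		snprintf(buf, len, "%016Lx:%d:%d",
			 (unsigned long long)eui64, unit_id, lun);
}
#endif
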

#define SBP2_INFO(fmt, args...)	HPSB_INFO("sbp2: "fmt, ## args)
#define SBP2_ERR(fmt, args...)	HPSB_ERR("sbp2: "fmt, ## args)

/*
 * Globals
 */
static void sbp2scsi_complete_all_commands(struct sbp2_lu *, u32);
static void sbp2scsi_complete_command(struct sbp2_lu *, u32, struct scsi_cmnd *,
				      void (*)(struct scsi_cmnd *));
static struct sbp2_lu *sbp2_alloc_device(struct unit_directory *);
static int sbp2_start_device(struct sbp2_lu *);
static void sbp2_remove_device(struct sbp2_lu *);
static int sbp2_login_device(struct sbp2_lu *);
static int sbp2_reconnect_device(struct sbp2_lu *);
static int sbp2_logout_device(struct sbp2_lu *);
static void sbp2_host_reset(struct hpsb_host *);
static int sbp2_handle_status_write(struct hpsb_host *, int, int, quadlet_t *,
				    u64, size_t, u16);
static int sbp2_agent_reset(struct sbp2_lu *, int);
static void sbp2_parse_unit_directory(struct sbp2_lu *,
				      struct unit_directory *);
static int sbp2_set_busy_timeout(struct sbp2_lu *);
static int sbp2_max_speed_and_size(struct sbp2_lu *);


static const u8 sbp2_speedto_max_payload[] = { 0x7, 0x8, 0x9, 0xa, 0xa, 0xa };

static DEFINE_RWLOCK(sbp2_hi_logical_units_lock);

static struct hpsb_highlevel sbp2_highlevel = {
	.name		= SBP2_DEVICE_NAME,
	.host_reset	= sbp2_host_reset,
};

static const struct hpsb_address_ops sbp2_ops = {
	.write		= sbp2_handle_status_write
};

#ifdef CONFIG_IEEE1394_SBP2_PHYS_DMA
static int sbp2_handle_physdma_write(struct hpsb_host *, int, int, quadlet_t *,
				     u64, size_t, u16);
static int sbp2_handle_physdma_read(struct hpsb_host *, int, quadlet_t *, u64,
				    size_t, u16);

static const struct hpsb_address_ops sbp2_physdma_ops = {
	.read		= sbp2_handle_physdma_read,
	.write		= sbp2_handle_physdma_write,
};
#endif


/*
 * Interface to driver core and IEEE 1394 core
 */
static const struct ieee1394_device_id sbp2_id_table[] = {
	{
	 .match_flags	= IEEE1394_MATCH_SPECIFIER_ID | IEEE1394_MATCH_VERSION,
	 .specifier_id	= SBP2_UNIT_SPEC_ID_ENTRY & 0xffffff,
	 .version	= SBP2_SW_VERSION_ENTRY & 0xffffff},
	{}
};
MODULE_DEVICE_TABLE(ieee1394, sbp2_id_table);

static int sbp2_probe(struct device *);
static int sbp2_remove(struct device *);
static int sbp2_update(struct unit_directory *);

static struct hpsb_protocol_driver sbp2_driver = {
	.name		= SBP2_DEVICE_NAME,
	.id_table	= sbp2_id_table,
	.update		= sbp2_update,
	.driver		= {
		.probe		= sbp2_probe,
		.remove		= sbp2_remove,
	},
};


/*
 * Interface to SCSI core
 */
static int sbp2scsi_queuecommand(struct scsi_cmnd *,
				 void (*)(struct scsi_cmnd *));
static int sbp2scsi_abort(struct scsi_cmnd *);
static int sbp2scsi_reset(struct scsi_cmnd *);
static int sbp2scsi_slave_alloc(struct scsi_device *);
static int sbp2scsi_slave_configure(struct scsi_device *);
static void sbp2scsi_slave_destroy(struct scsi_device *);
static ssize_t sbp2_sysfs_ieee1394_id_show(struct device *,
					   struct device_attribute *, char *);

static DEVICE_ATTR(ieee1394_id, S_IRUGO, sbp2_sysfs_ieee1394_id_show, NULL);

static struct device_attribute *sbp2_sysfs_sdev_attrs[] = {
	&dev_attr_ieee1394_id,
	NULL
};

static struct scsi_host_template sbp2_shost_template = {
	.module			 = THIS_MODULE,
	.name			 = "SBP-2 IEEE-1394",
	.proc_name		 = SBP2_DEVICE_NAME,
	.queuecommand		 = sbp2scsi_queuecommand,
	.eh_abort_handler	 = sbp2scsi_abort,
	.eh_device_reset_handler = sbp2scsi_reset,
	.slave_alloc		 = sbp2scsi_slave_alloc,
	.slave_configure	 = sbp2scsi_slave_configure,
	.slave_destroy		 = sbp2scsi_slave_destroy,
	.this_id		 = -1,
	.sg_tablesize		 = SG_ALL,
	.use_clustering		 = ENABLE_CLUSTERING,
	.cmd_per_lun		 = SBP2_MAX_CMDS,
	.can_queue		 = SBP2_MAX_CMDS,
	.sdev_attrs		 = sbp2_sysfs_sdev_attrs,
};

#define SBP2_ROM_VALUE_WILDCARD	~0	/* match all */
#define SBP2_ROM_VALUE_MISSING	0xff000000 /* not present in the unit dir. */

/*
 * List of devices with known bugs.
 *
 * The firmware_revision field, masked with 0xffff00, is the best indicator
 * for the type of bridge chip of a device. It yields a few false positives
 * but this did not break correctly behaving devices so far.
 */
static const struct {
	u32 firmware_revision;
	u32 model;
	unsigned workarounds;
} sbp2_workarounds_table[] = {
	/* DViCO Momobay CX-1 with TSB42AA9 bridge */ {
		.firmware_revision	= 0x002800,
		.model			= 0x001010,
		.workarounds		= SBP2_WORKAROUND_INQUIRY_36 |
					  SBP2_WORKAROUND_MODE_SENSE_8 |
					  SBP2_WORKAROUND_POWER_CONDITION,
	},
	/* DViCO Momobay FX-3A with TSB42AA9A bridge */ {
		.firmware_revision	= 0x002800,
		.model			= 0x000000,
		.workarounds		= SBP2_WORKAROUND_POWER_CONDITION,
	},
	/* Initio bridges, actually only needed for some older ones */ {
		.firmware_revision	= 0x000200,
		.model			= SBP2_ROM_VALUE_WILDCARD,
		.workarounds		= SBP2_WORKAROUND_INQUIRY_36,
	},
	/* PL-3507 bridge with Prolific firmware */ {
		.firmware_revision	= 0x012800,
		.model			= SBP2_ROM_VALUE_WILDCARD,
		.workarounds		= SBP2_WORKAROUND_POWER_CONDITION,
	},
	/* Symbios bridge */ {
		.firmware_revision	= 0xa0b800,
		.model			= SBP2_ROM_VALUE_WILDCARD,
		.workarounds		= SBP2_WORKAROUND_128K_MAX_TRANS,
	},
	/* Datafab MD2-FW2 with Symbios/LSILogic SYM13FW500 bridge */ {
		.firmware_revision	= 0x002600,
		.model			= SBP2_ROM_VALUE_WILDCARD,
		.workarounds		= SBP2_WORKAROUND_128K_MAX_TRANS,
	},
	/*
	 * iPod 2nd generation: needs 128k max transfer size workaround
	 * iPod 3rd generation: needs fix capacity workaround
	 */
	{
		.firmware_revision	= 0x0a2700,
		.model			= 0x000000,
		.workarounds		= SBP2_WORKAROUND_128K_MAX_TRANS |
					  SBP2_WORKAROUND_FIX_CAPACITY,
	},
	/* iPod 4th generation */ {
		.firmware_revision	= 0x0a2700,
		.model			= 0x000021,
		.workarounds		= SBP2_WORKAROUND_FIX_CAPACITY,
	},
	/* iPod mini */ {
		.firmware_revision	= 0x0a2700,
		.model			= 0x000022,
		.workarounds		= SBP2_WORKAROUND_FIX_CAPACITY,
	},
	/* iPod mini */ {
		.firmware_revision	= 0x0a2700,
		.model			= 0x000023,
		.workarounds		= SBP2_WORKAROUND_FIX_CAPACITY,
	},
	/* iPod Photo */ {
		.firmware_revision	= 0x0a2700,
		.model			= 0x00007e,
		.workarounds		= SBP2_WORKAROUND_FIX_CAPACITY,
	}
};

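/*
 * (Editor's illustrative sketch, not part of the original file: the table
 * above is matched roughly like this — the device's firmware_revision is
 * masked with 0xffff00 to identify the bridge chip, and entries whose model
 * is SBP2_ROM_VALUE_WILDCARD match any model. Function and parameter names
 * are hypothetical; the block is disabled so it does not alter the driver.)
 */
#if 0
static unsigned int sbp2_example_lookup_workarounds(u32 firmware_revision,
						    u32 model)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(sbp2_workarounds_table); i++) {
		if (sbp2_workarounds_table[i].firmware_revision !=
		    (firmware_revision & 0xffff00))
			continue;
		if (sbp2_workarounds_table[i].model != model &&
		    sbp2_workarounds_table[i].model != SBP2_ROM_VALUE_WILDCARD)
			continue;
		return sbp2_workarounds_table[i].workarounds;
	}
	return 0;
}
#endif
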
/**************************************
 * General utility functions
 **************************************/

#ifndef __BIG_ENDIAN
/*
 * Converts a buffer from be32 to cpu byte ordering. Length is in bytes.
 */
static inline void sbp2util_be32_to_cpu_buffer(void *buffer, int length)
{
	u32 *temp = buffer;

	for (length = (length >> 2); length--; )
		temp[length] = be32_to_cpu(temp[length]);
}

/*
 * Converts a buffer from cpu to be32 byte ordering. Length is in bytes.
 */
static inline void sbp2util_cpu_to_be32_buffer(void *buffer, int length)
{
	u32 *temp = buffer;

	for (length = (length >> 2); length--; )
		temp[length] = cpu_to_be32(temp[length]);
}
#else /* BIG_ENDIAN */
/* Why waste the cpu cycles? */
#define sbp2util_be32_to_cpu_buffer(x,y) do {} while (0)
#define sbp2util_cpu_to_be32_buffer(x,y) do {} while (0)
#endif

static DECLARE_WAIT_QUEUE_HEAD(sbp2_access_wq);

/*
 * Waits for completion of an SBP-2 access request.
 * Returns nonzero if timed out or prematurely interrupted.
 */
static int sbp2util_access_timeout(struct sbp2_lu *lu, int timeout)
{
	long leftover;

	leftover = wait_event_interruptible_timeout(
			sbp2_access_wq, lu->access_complete, timeout);
	lu->access_complete = 0;
	return leftover <= 0;
}

static void sbp2_free_packet(void *packet)
{
	hpsb_free_tlabel(packet);
	hpsb_free_packet(packet);
}

/*
 * This is much like hpsb_node_write(), except it ignores the response
 * subaction and returns immediately. Can be used from atomic context.
 */
static int sbp2util_node_write_no_wait(struct node_entry *ne, u64 addr,
				       quadlet_t *buf, size_t len)
{
	struct hpsb_packet *packet;

	packet = hpsb_make_writepacket(ne->host, ne->nodeid, addr, buf, len);
	if (!packet)
		return -ENOMEM;

	hpsb_set_packet_complete_task(packet, sbp2_free_packet, packet);
	hpsb_node_fill_packet(ne, packet);
	if (hpsb_send_packet(packet) < 0) {
		sbp2_free_packet(packet);
		return -EIO;
	}
	return 0;
}

static void sbp2util_notify_fetch_agent(struct sbp2_lu *lu, u64 offset,
					quadlet_t *data, size_t len)
{
	/* There is a small window after a bus reset within which the node
	 * entry's generation is current but the reconnect wasn't completed. */
	if (unlikely(atomic_read(&lu->state) == SBP2LU_STATE_IN_RESET))
		return;

	if (hpsb_node_write(lu->ne, lu->command_block_agent_addr + offset,
			    data, len))
		SBP2_ERR("sbp2util_notify_fetch_agent failed.");

	/* Now accept new SCSI commands, unless a bus reset happened during
	 * hpsb_node_write. */
519 if (likely(atomic_read(&lu->state) != SBP2LU_STATE_IN_RESET)) 519 if (likely(atomic_read(&lu->state) != SBP2LU_STATE_IN_RESET))
520 scsi_unblock_requests(lu->shost); 520 scsi_unblock_requests(lu->shost);
521 } 521 }
522 522
523 static void sbp2util_write_orb_pointer(struct work_struct *work) 523 static void sbp2util_write_orb_pointer(struct work_struct *work)
524 { 524 {
525 struct sbp2_lu *lu = container_of(work, struct sbp2_lu, protocol_work); 525 struct sbp2_lu *lu = container_of(work, struct sbp2_lu, protocol_work);
526 quadlet_t data[2]; 526 quadlet_t data[2];
527 527
528 data[0] = ORB_SET_NODE_ID(lu->hi->host->node_id); 528 data[0] = ORB_SET_NODE_ID(lu->hi->host->node_id);
529 data[1] = lu->last_orb_dma; 529 data[1] = lu->last_orb_dma;
530 sbp2util_cpu_to_be32_buffer(data, 8); 530 sbp2util_cpu_to_be32_buffer(data, 8);
531 sbp2util_notify_fetch_agent(lu, SBP2_ORB_POINTER_OFFSET, data, 8); 531 sbp2util_notify_fetch_agent(lu, SBP2_ORB_POINTER_OFFSET, data, 8);
532 } 532 }
533 533
534 static void sbp2util_write_doorbell(struct work_struct *work) 534 static void sbp2util_write_doorbell(struct work_struct *work)
535 { 535 {
536 struct sbp2_lu *lu = container_of(work, struct sbp2_lu, protocol_work); 536 struct sbp2_lu *lu = container_of(work, struct sbp2_lu, protocol_work);
537 537
538 sbp2util_notify_fetch_agent(lu, SBP2_DOORBELL_OFFSET, NULL, 4); 538 sbp2util_notify_fetch_agent(lu, SBP2_DOORBELL_OFFSET, NULL, 4);
539 } 539 }
540 540
541 static int sbp2util_create_command_orb_pool(struct sbp2_lu *lu) 541 static int sbp2util_create_command_orb_pool(struct sbp2_lu *lu)
542 { 542 {
543 struct sbp2_command_info *cmd; 543 struct sbp2_command_info *cmd;
544 struct device *dmadev = lu->hi->host->device.parent; 544 struct device *dmadev = lu->hi->host->device.parent;
545 int i, orbs = sbp2_serialize_io ? 2 : SBP2_MAX_CMDS; 545 int i, orbs = sbp2_serialize_io ? 2 : SBP2_MAX_CMDS;
546 546
547 for (i = 0; i < orbs; i++) { 547 for (i = 0; i < orbs; i++) {
548 cmd = kzalloc(sizeof(*cmd), GFP_KERNEL); 548 cmd = kzalloc(sizeof(*cmd), GFP_KERNEL);
549 if (!cmd) 549 if (!cmd)
550 goto failed_alloc; 550 goto failed_alloc;
551 551
552 cmd->command_orb_dma = 552 cmd->command_orb_dma =
553 dma_map_single(dmadev, &cmd->command_orb, 553 dma_map_single(dmadev, &cmd->command_orb,
554 sizeof(struct sbp2_command_orb), 554 sizeof(struct sbp2_command_orb),
555 DMA_TO_DEVICE); 555 DMA_TO_DEVICE);
556 if (dma_mapping_error(dmadev, cmd->command_orb_dma)) 556 if (dma_mapping_error(dmadev, cmd->command_orb_dma))
557 goto failed_orb; 557 goto failed_orb;
558 558
559 cmd->sge_dma = 559 cmd->sge_dma =
560 dma_map_single(dmadev, &cmd->scatter_gather_element, 560 dma_map_single(dmadev, &cmd->scatter_gather_element,
561 sizeof(cmd->scatter_gather_element), 561 sizeof(cmd->scatter_gather_element),
562 DMA_TO_DEVICE); 562 DMA_TO_DEVICE);
563 if (dma_mapping_error(dmadev, cmd->sge_dma)) 563 if (dma_mapping_error(dmadev, cmd->sge_dma))
564 goto failed_sge; 564 goto failed_sge;
565 565
566 INIT_LIST_HEAD(&cmd->list); 566 INIT_LIST_HEAD(&cmd->list);
567 list_add_tail(&cmd->list, &lu->cmd_orb_completed); 567 list_add_tail(&cmd->list, &lu->cmd_orb_completed);
568 } 568 }
569 return 0; 569 return 0;
570 570
571 failed_sge: 571 failed_sge:
572 dma_unmap_single(dmadev, cmd->command_orb_dma, 572 dma_unmap_single(dmadev, cmd->command_orb_dma,
573 sizeof(struct sbp2_command_orb), DMA_TO_DEVICE); 573 sizeof(struct sbp2_command_orb), DMA_TO_DEVICE);
574 failed_orb: 574 failed_orb:
575 kfree(cmd); 575 kfree(cmd);
576 failed_alloc: 576 failed_alloc:
577 return -ENOMEM; 577 return -ENOMEM;
578 } 578 }
579 579
580 static void sbp2util_remove_command_orb_pool(struct sbp2_lu *lu, 580 static void sbp2util_remove_command_orb_pool(struct sbp2_lu *lu,
581 struct hpsb_host *host) 581 struct hpsb_host *host)
582 { 582 {
583 struct list_head *lh, *next; 583 struct list_head *lh, *next;
584 struct sbp2_command_info *cmd; 584 struct sbp2_command_info *cmd;
585 unsigned long flags; 585 unsigned long flags;
586 586
587 spin_lock_irqsave(&lu->cmd_orb_lock, flags); 587 spin_lock_irqsave(&lu->cmd_orb_lock, flags);
588 if (!list_empty(&lu->cmd_orb_completed)) 588 if (!list_empty(&lu->cmd_orb_completed))
589 list_for_each_safe(lh, next, &lu->cmd_orb_completed) { 589 list_for_each_safe(lh, next, &lu->cmd_orb_completed) {
590 cmd = list_entry(lh, struct sbp2_command_info, list); 590 cmd = list_entry(lh, struct sbp2_command_info, list);
591 dma_unmap_single(host->device.parent, 591 dma_unmap_single(host->device.parent,
592 cmd->command_orb_dma, 592 cmd->command_orb_dma,
593 sizeof(struct sbp2_command_orb), 593 sizeof(struct sbp2_command_orb),
594 DMA_TO_DEVICE); 594 DMA_TO_DEVICE);
595 dma_unmap_single(host->device.parent, cmd->sge_dma, 595 dma_unmap_single(host->device.parent, cmd->sge_dma,
596 sizeof(cmd->scatter_gather_element), 596 sizeof(cmd->scatter_gather_element),
597 DMA_TO_DEVICE); 597 DMA_TO_DEVICE);
598 kfree(cmd); 598 kfree(cmd);
599 } 599 }
600 spin_unlock_irqrestore(&lu->cmd_orb_lock, flags); 600 spin_unlock_irqrestore(&lu->cmd_orb_lock, flags);
601 return; 601 return;
602 } 602 }
603 603
604 /* 604 /*
605 * Finds the sbp2_command for a given outstanding command ORB. 605 * Finds the sbp2_command for a given outstanding command ORB.
606 * Only looks at the in-use list. 606 * Only looks at the in-use list.
607 */ 607 */
608 static struct sbp2_command_info *sbp2util_find_command_for_orb( 608 static struct sbp2_command_info *sbp2util_find_command_for_orb(
609 struct sbp2_lu *lu, dma_addr_t orb) 609 struct sbp2_lu *lu, dma_addr_t orb)
610 { 610 {
611 struct sbp2_command_info *cmd; 611 struct sbp2_command_info *cmd;
612 unsigned long flags; 612 unsigned long flags;
613 613
614 spin_lock_irqsave(&lu->cmd_orb_lock, flags); 614 spin_lock_irqsave(&lu->cmd_orb_lock, flags);
615 if (!list_empty(&lu->cmd_orb_inuse)) 615 if (!list_empty(&lu->cmd_orb_inuse))
616 list_for_each_entry(cmd, &lu->cmd_orb_inuse, list) 616 list_for_each_entry(cmd, &lu->cmd_orb_inuse, list)
617 if (cmd->command_orb_dma == orb) { 617 if (cmd->command_orb_dma == orb) {
618 spin_unlock_irqrestore( 618 spin_unlock_irqrestore(
619 &lu->cmd_orb_lock, flags); 619 &lu->cmd_orb_lock, flags);
620 return cmd; 620 return cmd;
621 } 621 }
622 spin_unlock_irqrestore(&lu->cmd_orb_lock, flags); 622 spin_unlock_irqrestore(&lu->cmd_orb_lock, flags);
623 return NULL; 623 return NULL;
624 } 624 }
625 625
626 /* 626 /*
627 * Finds the sbp2_command for a given outstanding SCpnt. 627 * Finds the sbp2_command for a given outstanding SCpnt.
628 * Only looks at the in-use list. 628 * Only looks at the in-use list.
629 * Must be called with lu->cmd_orb_lock held. 629 * Must be called with lu->cmd_orb_lock held.
630 */ 630 */
631 static struct sbp2_command_info *sbp2util_find_command_for_SCpnt( 631 static struct sbp2_command_info *sbp2util_find_command_for_SCpnt(
632 struct sbp2_lu *lu, void *SCpnt) 632 struct sbp2_lu *lu, void *SCpnt)
633 { 633 {
634 struct sbp2_command_info *cmd; 634 struct sbp2_command_info *cmd;
635 635
636 if (!list_empty(&lu->cmd_orb_inuse)) 636 if (!list_empty(&lu->cmd_orb_inuse))
637 list_for_each_entry(cmd, &lu->cmd_orb_inuse, list) 637 list_for_each_entry(cmd, &lu->cmd_orb_inuse, list)
638 if (cmd->Current_SCpnt == SCpnt) 638 if (cmd->Current_SCpnt == SCpnt)
639 return cmd; 639 return cmd;
640 return NULL; 640 return NULL;
641 } 641 }
642 642
643 static struct sbp2_command_info *sbp2util_allocate_command_orb( 643 static struct sbp2_command_info *sbp2util_allocate_command_orb(
644 struct sbp2_lu *lu, 644 struct sbp2_lu *lu,
645 struct scsi_cmnd *Current_SCpnt, 645 struct scsi_cmnd *Current_SCpnt,
646 void (*Current_done)(struct scsi_cmnd *)) 646 void (*Current_done)(struct scsi_cmnd *))
647 { 647 {
648 struct list_head *lh; 648 struct list_head *lh;
649 struct sbp2_command_info *cmd = NULL; 649 struct sbp2_command_info *cmd = NULL;
650 unsigned long flags; 650 unsigned long flags;
651 651
652 spin_lock_irqsave(&lu->cmd_orb_lock, flags); 652 spin_lock_irqsave(&lu->cmd_orb_lock, flags);
653 if (!list_empty(&lu->cmd_orb_completed)) { 653 if (!list_empty(&lu->cmd_orb_completed)) {
654 lh = lu->cmd_orb_completed.next; 654 lh = lu->cmd_orb_completed.next;
655 list_del(lh); 655 list_del(lh);
656 cmd = list_entry(lh, struct sbp2_command_info, list); 656 cmd = list_entry(lh, struct sbp2_command_info, list);
657 cmd->Current_done = Current_done; 657 cmd->Current_done = Current_done;
658 cmd->Current_SCpnt = Current_SCpnt; 658 cmd->Current_SCpnt = Current_SCpnt;
659 list_add_tail(&cmd->list, &lu->cmd_orb_inuse); 659 list_add_tail(&cmd->list, &lu->cmd_orb_inuse);
660 } else 660 } else
661 SBP2_ERR("%s: no orbs available", __func__); 661 SBP2_ERR("%s: no orbs available", __func__);
662 spin_unlock_irqrestore(&lu->cmd_orb_lock, flags); 662 spin_unlock_irqrestore(&lu->cmd_orb_lock, flags);
663 return cmd; 663 return cmd;
664 } 664 }
665 665
666 /* 666 /*
667 * Unmaps the DMAs of a command and moves the command to the completed ORB list. 667 * Unmaps the DMAs of a command and moves the command to the completed ORB list.
668 * Must be called with lu->cmd_orb_lock held. 668 * Must be called with lu->cmd_orb_lock held.
669 */ 669 */
670 static void sbp2util_mark_command_completed(struct sbp2_lu *lu, 670 static void sbp2util_mark_command_completed(struct sbp2_lu *lu,
671 struct sbp2_command_info *cmd) 671 struct sbp2_command_info *cmd)
672 { 672 {
673 if (scsi_sg_count(cmd->Current_SCpnt)) 673 if (scsi_sg_count(cmd->Current_SCpnt))
674 dma_unmap_sg(lu->ud->ne->host->device.parent, 674 dma_unmap_sg(lu->ud->ne->host->device.parent,
675 scsi_sglist(cmd->Current_SCpnt), 675 scsi_sglist(cmd->Current_SCpnt),
676 scsi_sg_count(cmd->Current_SCpnt), 676 scsi_sg_count(cmd->Current_SCpnt),
677 cmd->Current_SCpnt->sc_data_direction); 677 cmd->Current_SCpnt->sc_data_direction);
678 list_move_tail(&cmd->list, &lu->cmd_orb_completed); 678 list_move_tail(&cmd->list, &lu->cmd_orb_completed);
679 } 679 }
680 680
681 /* 681 /*
682 * Is lu valid? Is the 1394 node still present? 682 * Is lu valid? Is the 1394 node still present?
683 */ 683 */
684 static inline int sbp2util_node_is_available(struct sbp2_lu *lu) 684 static inline int sbp2util_node_is_available(struct sbp2_lu *lu)
685 { 685 {
686 return lu && lu->ne && !lu->ne->in_limbo; 686 return lu && lu->ne && !lu->ne->in_limbo;
687 } 687 }
688 688
689 /********************************************* 689 /*********************************************
690 * IEEE-1394 core driver stack related section 690 * IEEE-1394 core driver stack related section
691 *********************************************/ 691 *********************************************/
692 692
693 static int sbp2_probe(struct device *dev) 693 static int sbp2_probe(struct device *dev)
694 { 694 {
695 struct unit_directory *ud; 695 struct unit_directory *ud;
696 struct sbp2_lu *lu; 696 struct sbp2_lu *lu;
697 697
698 ud = container_of(dev, struct unit_directory, device); 698 ud = container_of(dev, struct unit_directory, device);
699 699
700 /* Don't probe UD's that have the LUN flag. We'll probe the LUN(s) 700 /* Don't probe UD's that have the LUN flag. We'll probe the LUN(s)
701 * instead. */ 701 * instead. */
702 if (ud->flags & UNIT_DIRECTORY_HAS_LUN_DIRECTORY) 702 if (ud->flags & UNIT_DIRECTORY_HAS_LUN_DIRECTORY)
703 return -ENODEV; 703 return -ENODEV;
704 704
705 lu = sbp2_alloc_device(ud); 705 lu = sbp2_alloc_device(ud);
706 if (!lu) 706 if (!lu)
707 return -ENOMEM; 707 return -ENOMEM;
708 708
709 sbp2_parse_unit_directory(lu, ud); 709 sbp2_parse_unit_directory(lu, ud);
710 return sbp2_start_device(lu); 710 return sbp2_start_device(lu);
711 } 711 }
712 712
713 static int sbp2_remove(struct device *dev) 713 static int sbp2_remove(struct device *dev)
714 { 714 {
715 struct unit_directory *ud; 715 struct unit_directory *ud;
716 struct sbp2_lu *lu; 716 struct sbp2_lu *lu;
717 struct scsi_device *sdev; 717 struct scsi_device *sdev;
718 718
719 ud = container_of(dev, struct unit_directory, device); 719 ud = container_of(dev, struct unit_directory, device);
720 lu = dev_get_drvdata(&ud->device); 720 lu = dev_get_drvdata(&ud->device);
721 if (!lu) 721 if (!lu)
722 return 0; 722 return 0;
723 723
724 if (lu->shost) { 724 if (lu->shost) {
725 /* Get rid of enqueued commands if there is no chance to 725 /* Get rid of enqueued commands if there is no chance to
726 * send them. */ 726 * send them. */
727 if (!sbp2util_node_is_available(lu)) 727 if (!sbp2util_node_is_available(lu))
728 sbp2scsi_complete_all_commands(lu, DID_NO_CONNECT); 728 sbp2scsi_complete_all_commands(lu, DID_NO_CONNECT);
729 /* scsi_remove_device() may trigger shutdown functions of SCSI 729 /* scsi_remove_device() may trigger shutdown functions of SCSI
730 * highlevel drivers which would deadlock if blocked. */ 730 * highlevel drivers which would deadlock if blocked. */
731 atomic_set(&lu->state, SBP2LU_STATE_IN_SHUTDOWN); 731 atomic_set(&lu->state, SBP2LU_STATE_IN_SHUTDOWN);
732 scsi_unblock_requests(lu->shost); 732 scsi_unblock_requests(lu->shost);
733 } 733 }
734 sdev = lu->sdev; 734 sdev = lu->sdev;
735 if (sdev) { 735 if (sdev) {
736 lu->sdev = NULL; 736 lu->sdev = NULL;
737 scsi_remove_device(sdev); 737 scsi_remove_device(sdev);
738 } 738 }
739 739
740 sbp2_logout_device(lu); 740 sbp2_logout_device(lu);
741 sbp2_remove_device(lu); 741 sbp2_remove_device(lu);
742 742
743 return 0; 743 return 0;
744 } 744 }
745 745
746 static int sbp2_update(struct unit_directory *ud) 746 static int sbp2_update(struct unit_directory *ud)
747 { 747 {
748 struct sbp2_lu *lu = dev_get_drvdata(&ud->device); 748 struct sbp2_lu *lu = dev_get_drvdata(&ud->device);
749 749
750 if (sbp2_reconnect_device(lu) != 0) { 750 if (sbp2_reconnect_device(lu) != 0) {
751 /* 751 /*
752 * Reconnect failed. If another bus reset happened, 752 * Reconnect failed. If another bus reset happened,
753 * let nodemgr proceed and call sbp2_update again later 753 * let nodemgr proceed and call sbp2_update again later
754 * (or sbp2_remove if this node went away). 754 * (or sbp2_remove if this node went away).
755 */ 755 */
756 if (!hpsb_node_entry_valid(lu->ne)) 756 if (!hpsb_node_entry_valid(lu->ne))
757 return 0; 757 return 0;
758 /* 758 /*
759 * Or the target rejected the reconnect because we weren't 759 * Or the target rejected the reconnect because we weren't
760 * fast enough. Try a regular login, but first log out 760 * fast enough. Try a regular login, but first log out
761 * just in case of any weirdness. 761 * just in case of any weirdness.
762 */ 762 */
763 sbp2_logout_device(lu); 763 sbp2_logout_device(lu);
764 764
765 if (sbp2_login_device(lu) != 0) { 765 if (sbp2_login_device(lu) != 0) {
766 if (!hpsb_node_entry_valid(lu->ne)) 766 if (!hpsb_node_entry_valid(lu->ne))
767 return 0; 767 return 0;
768 768
769 /* Maybe another initiator won the login. */ 769 /* Maybe another initiator won the login. */
770 SBP2_ERR("Failed to reconnect to sbp2 device!"); 770 SBP2_ERR("Failed to reconnect to sbp2 device!");
771 return -EBUSY; 771 return -EBUSY;
772 } 772 }
773 } 773 }
774 774
775 sbp2_set_busy_timeout(lu); 775 sbp2_set_busy_timeout(lu);
776 sbp2_agent_reset(lu, 1); 776 sbp2_agent_reset(lu, 1);
777 sbp2_max_speed_and_size(lu); 777 sbp2_max_speed_and_size(lu);
778 778
779 /* Complete any pending commands with busy (so they get retried) 779 /* Complete any pending commands with busy (so they get retried)
780 * and remove them from our queue. */ 780 * and remove them from our queue. */
781 sbp2scsi_complete_all_commands(lu, DID_BUS_BUSY); 781 sbp2scsi_complete_all_commands(lu, DID_BUS_BUSY);
782 782
783 /* Accept new commands unless there was another bus reset in the 783 /* Accept new commands unless there was another bus reset in the
784 * meantime. */ 784 * meantime. */
785 if (hpsb_node_entry_valid(lu->ne)) { 785 if (hpsb_node_entry_valid(lu->ne)) {
786 atomic_set(&lu->state, SBP2LU_STATE_RUNNING); 786 atomic_set(&lu->state, SBP2LU_STATE_RUNNING);
787 scsi_unblock_requests(lu->shost); 787 scsi_unblock_requests(lu->shost);
788 } 788 }
789 return 0; 789 return 0;
790 } 790 }
791 791
792 static struct sbp2_lu *sbp2_alloc_device(struct unit_directory *ud) 792 static struct sbp2_lu *sbp2_alloc_device(struct unit_directory *ud)
793 { 793 {
794 struct sbp2_fwhost_info *hi; 794 struct sbp2_fwhost_info *hi;
795 struct Scsi_Host *shost = NULL; 795 struct Scsi_Host *shost = NULL;
796 struct sbp2_lu *lu = NULL; 796 struct sbp2_lu *lu = NULL;
797 unsigned long flags; 797 unsigned long flags;
798 798
799 lu = kzalloc(sizeof(*lu), GFP_KERNEL); 799 lu = kzalloc(sizeof(*lu), GFP_KERNEL);
800 if (!lu) { 800 if (!lu) {
801 SBP2_ERR("failed to create lu"); 801 SBP2_ERR("failed to create lu");
802 goto failed_alloc; 802 goto failed_alloc;
803 } 803 }
804 804
805 lu->ne = ud->ne; 805 lu->ne = ud->ne;
806 lu->ud = ud; 806 lu->ud = ud;
807 lu->speed_code = IEEE1394_SPEED_100; 807 lu->speed_code = IEEE1394_SPEED_100;
808 lu->max_payload_size = sbp2_speedto_max_payload[IEEE1394_SPEED_100]; 808 lu->max_payload_size = sbp2_speedto_max_payload[IEEE1394_SPEED_100];
809 lu->status_fifo_addr = CSR1212_INVALID_ADDR_SPACE; 809 lu->status_fifo_addr = CSR1212_INVALID_ADDR_SPACE;
810 INIT_LIST_HEAD(&lu->cmd_orb_inuse); 810 INIT_LIST_HEAD(&lu->cmd_orb_inuse);
811 INIT_LIST_HEAD(&lu->cmd_orb_completed); 811 INIT_LIST_HEAD(&lu->cmd_orb_completed);
812 INIT_LIST_HEAD(&lu->lu_list); 812 INIT_LIST_HEAD(&lu->lu_list);
813 spin_lock_init(&lu->cmd_orb_lock); 813 spin_lock_init(&lu->cmd_orb_lock);
814 atomic_set(&lu->state, SBP2LU_STATE_RUNNING); 814 atomic_set(&lu->state, SBP2LU_STATE_RUNNING);
815 INIT_WORK(&lu->protocol_work, NULL); 815 INIT_WORK(&lu->protocol_work, NULL);
816 816
817 dev_set_drvdata(&ud->device, lu); 817 dev_set_drvdata(&ud->device, lu);
818 818
819 hi = hpsb_get_hostinfo(&sbp2_highlevel, ud->ne->host); 819 hi = hpsb_get_hostinfo(&sbp2_highlevel, ud->ne->host);
820 if (!hi) { 820 if (!hi) {
821 hi = hpsb_create_hostinfo(&sbp2_highlevel, ud->ne->host, 821 hi = hpsb_create_hostinfo(&sbp2_highlevel, ud->ne->host,
822 sizeof(*hi)); 822 sizeof(*hi));
823 if (!hi) { 823 if (!hi) {
824 SBP2_ERR("failed to allocate hostinfo"); 824 SBP2_ERR("failed to allocate hostinfo");
825 goto failed_alloc; 825 goto failed_alloc;
826 } 826 }
827 hi->host = ud->ne->host; 827 hi->host = ud->ne->host;
828 INIT_LIST_HEAD(&hi->logical_units); 828 INIT_LIST_HEAD(&hi->logical_units);
829 829
830 #ifdef CONFIG_IEEE1394_SBP2_PHYS_DMA 830 #ifdef CONFIG_IEEE1394_SBP2_PHYS_DMA
831 /* Handle data movement if physical dma is not 831 /* Handle data movement if physical dma is not
832 * enabled or not supported on host controller */ 832 * enabled or not supported on host controller */
833 if (!hpsb_register_addrspace(&sbp2_highlevel, ud->ne->host, 833 if (!hpsb_register_addrspace(&sbp2_highlevel, ud->ne->host,
834 &sbp2_physdma_ops, 834 &sbp2_physdma_ops,
835 0x0ULL, 0xfffffffcULL)) { 835 0x0ULL, 0xfffffffcULL)) {
836 SBP2_ERR("failed to register lower 4GB address range"); 836 SBP2_ERR("failed to register lower 4GB address range");
837 goto failed_alloc; 837 goto failed_alloc;
838 } 838 }
839 #endif 839 #endif
840 } 840 }
841 841
842 if (dma_get_max_seg_size(hi->host->device.parent) > SBP2_MAX_SEG_SIZE) 842 if (dma_get_max_seg_size(hi->host->device.parent) > SBP2_MAX_SEG_SIZE)
843 BUG_ON(dma_set_max_seg_size(hi->host->device.parent, 843 BUG_ON(dma_set_max_seg_size(hi->host->device.parent,
844 SBP2_MAX_SEG_SIZE)); 844 SBP2_MAX_SEG_SIZE));
845 845
846 /* Prevent unloading of the 1394 host */ 846 /* Prevent unloading of the 1394 host */
847 if (!try_module_get(hi->host->driver->owner)) { 847 if (!try_module_get(hi->host->driver->owner)) {
848 SBP2_ERR("failed to get a reference on 1394 host driver"); 848 SBP2_ERR("failed to get a reference on 1394 host driver");
849 goto failed_alloc; 849 goto failed_alloc;
850 } 850 }
851 851
852 lu->hi = hi; 852 lu->hi = hi;
853 853
854 write_lock_irqsave(&sbp2_hi_logical_units_lock, flags); 854 write_lock_irqsave(&sbp2_hi_logical_units_lock, flags);
855 list_add_tail(&lu->lu_list, &hi->logical_units); 855 list_add_tail(&lu->lu_list, &hi->logical_units);
856 write_unlock_irqrestore(&sbp2_hi_logical_units_lock, flags); 856 write_unlock_irqrestore(&sbp2_hi_logical_units_lock, flags);
857 857
858 /* Register the status FIFO address range. We could use the same FIFO 858 /* Register the status FIFO address range. We could use the same FIFO
859 * for targets at different nodes. However we need different FIFOs per 859 * for targets at different nodes. However we need different FIFOs per
860 * target in order to support multi-unit devices. 860 * target in order to support multi-unit devices.
861 * The FIFO is located out of the local host controller's physical range 861 * The FIFO is located out of the local host controller's physical range
862 * but, if possible, within the posted write area. Status writes will 862 * but, if possible, within the posted write area. Status writes will
863 * then be performed as unified transactions. This slightly reduces 863 * then be performed as unified transactions. This slightly reduces
864 * bandwidth usage, and some Prolific based devices seem to require it. 864 * bandwidth usage, and some Prolific based devices seem to require it.
865 */ 865 */
866 lu->status_fifo_addr = hpsb_allocate_and_register_addrspace( 866 lu->status_fifo_addr = hpsb_allocate_and_register_addrspace(
867 &sbp2_highlevel, ud->ne->host, &sbp2_ops, 867 &sbp2_highlevel, ud->ne->host, &sbp2_ops,
868 sizeof(struct sbp2_status_block), sizeof(quadlet_t), 868 sizeof(struct sbp2_status_block), sizeof(quadlet_t),
869 ud->ne->host->low_addr_space, CSR1212_ALL_SPACE_END); 869 ud->ne->host->low_addr_space, CSR1212_ALL_SPACE_END);
870 if (lu->status_fifo_addr == CSR1212_INVALID_ADDR_SPACE) { 870 if (lu->status_fifo_addr == CSR1212_INVALID_ADDR_SPACE) {
871 SBP2_ERR("failed to allocate status FIFO address range"); 871 SBP2_ERR("failed to allocate status FIFO address range");
872 goto failed_alloc; 872 goto failed_alloc;
873 } 873 }
874 874
875 shost = scsi_host_alloc(&sbp2_shost_template, sizeof(unsigned long)); 875 shost = scsi_host_alloc(&sbp2_shost_template, sizeof(unsigned long));
876 if (!shost) { 876 if (!shost) {
877 SBP2_ERR("failed to register scsi host"); 877 SBP2_ERR("failed to register scsi host");
878 goto failed_alloc; 878 goto failed_alloc;
879 } 879 }
880 880
881 shost->hostdata[0] = (unsigned long)lu; 881 shost->hostdata[0] = (unsigned long)lu;
882 shost->max_cmd_len = SBP2_MAX_CDB_SIZE; 882 shost->max_cmd_len = SBP2_MAX_CDB_SIZE;
883 883
884 if (!scsi_add_host(shost, &ud->device)) { 884 if (!scsi_add_host(shost, &ud->device)) {
885 lu->shost = shost; 885 lu->shost = shost;
886 return lu; 886 return lu;
887 } 887 }
888 888
889 SBP2_ERR("failed to add scsi host"); 889 SBP2_ERR("failed to add scsi host");
890 scsi_host_put(shost); 890 scsi_host_put(shost);
891 891
892 failed_alloc: 892 failed_alloc:
893 sbp2_remove_device(lu); 893 sbp2_remove_device(lu);
894 return NULL; 894 return NULL;
895 } 895 }
896 896
897 static void sbp2_host_reset(struct hpsb_host *host) 897 static void sbp2_host_reset(struct hpsb_host *host)
898 { 898 {
899 struct sbp2_fwhost_info *hi; 899 struct sbp2_fwhost_info *hi;
900 struct sbp2_lu *lu; 900 struct sbp2_lu *lu;
901 unsigned long flags; 901 unsigned long flags;
902 902
903 hi = hpsb_get_hostinfo(&sbp2_highlevel, host); 903 hi = hpsb_get_hostinfo(&sbp2_highlevel, host);
904 if (!hi) 904 if (!hi)
905 return; 905 return;
906 906
907 read_lock_irqsave(&sbp2_hi_logical_units_lock, flags); 907 read_lock_irqsave(&sbp2_hi_logical_units_lock, flags);
908 908
909 list_for_each_entry(lu, &hi->logical_units, lu_list) 909 list_for_each_entry(lu, &hi->logical_units, lu_list)
910 if (atomic_cmpxchg(&lu->state, 910 if (atomic_cmpxchg(&lu->state,
911 SBP2LU_STATE_RUNNING, SBP2LU_STATE_IN_RESET) 911 SBP2LU_STATE_RUNNING, SBP2LU_STATE_IN_RESET)
912 == SBP2LU_STATE_RUNNING) 912 == SBP2LU_STATE_RUNNING)
913 scsi_block_requests(lu->shost); 913 scsi_block_requests(lu->shost);
914 914
915 read_unlock_irqrestore(&sbp2_hi_logical_units_lock, flags); 915 read_unlock_irqrestore(&sbp2_hi_logical_units_lock, flags);
916 } 916 }
917 917
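The atomic_cmpxchg() above ensures that scsi_block_requests() runs at most once per logical unit per reset, however the reset handler races with reconnect. A compilable C11 sketch of that guard, with illustrative state names:

    #include <stdatomic.h>
    #include <stdio.h>

    enum { STATE_RUNNING, STATE_IN_RESET };

    static atomic_int state = STATE_RUNNING;

    /* Only the caller that wins the RUNNING -> IN_RESET transition performs
     * the side effect; everyone else observes IN_RESET and does nothing. */
    static void enter_reset(void)
    {
        int expected = STATE_RUNNING;

        if (atomic_compare_exchange_strong(&state, &expected, STATE_IN_RESET))
            puts("blocking requests");   /* stands in for scsi_block_requests() */
    }

    int main(void)
    {
        enter_reset();   /* prints once */
        enter_reset();   /* no-op: transition already happened */
        return 0;
    }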
918 static int sbp2_start_device(struct sbp2_lu *lu) 918 static int sbp2_start_device(struct sbp2_lu *lu)
919 { 919 {
920 struct sbp2_fwhost_info *hi = lu->hi; 920 struct sbp2_fwhost_info *hi = lu->hi;
921 int error; 921 int error;
922 922
923 lu->login_response = dma_alloc_coherent(hi->host->device.parent, 923 lu->login_response = dma_alloc_coherent(hi->host->device.parent,
924 sizeof(struct sbp2_login_response), 924 sizeof(struct sbp2_login_response),
925 &lu->login_response_dma, GFP_KERNEL); 925 &lu->login_response_dma, GFP_KERNEL);
926 if (!lu->login_response) 926 if (!lu->login_response)
927 goto alloc_fail; 927 goto alloc_fail;
928 928
929 lu->query_logins_orb = dma_alloc_coherent(hi->host->device.parent, 929 lu->query_logins_orb = dma_alloc_coherent(hi->host->device.parent,
930 sizeof(struct sbp2_query_logins_orb), 930 sizeof(struct sbp2_query_logins_orb),
931 &lu->query_logins_orb_dma, GFP_KERNEL); 931 &lu->query_logins_orb_dma, GFP_KERNEL);
932 if (!lu->query_logins_orb) 932 if (!lu->query_logins_orb)
933 goto alloc_fail; 933 goto alloc_fail;
934 934
935 lu->query_logins_response = dma_alloc_coherent(hi->host->device.parent, 935 lu->query_logins_response = dma_alloc_coherent(hi->host->device.parent,
936 sizeof(struct sbp2_query_logins_response), 936 sizeof(struct sbp2_query_logins_response),
937 &lu->query_logins_response_dma, GFP_KERNEL); 937 &lu->query_logins_response_dma, GFP_KERNEL);
938 if (!lu->query_logins_response) 938 if (!lu->query_logins_response)
939 goto alloc_fail; 939 goto alloc_fail;
940 940
941 lu->reconnect_orb = dma_alloc_coherent(hi->host->device.parent, 941 lu->reconnect_orb = dma_alloc_coherent(hi->host->device.parent,
942 sizeof(struct sbp2_reconnect_orb), 942 sizeof(struct sbp2_reconnect_orb),
943 &lu->reconnect_orb_dma, GFP_KERNEL); 943 &lu->reconnect_orb_dma, GFP_KERNEL);
944 if (!lu->reconnect_orb) 944 if (!lu->reconnect_orb)
945 goto alloc_fail; 945 goto alloc_fail;
946 946
947 lu->logout_orb = dma_alloc_coherent(hi->host->device.parent, 947 lu->logout_orb = dma_alloc_coherent(hi->host->device.parent,
948 sizeof(struct sbp2_logout_orb), 948 sizeof(struct sbp2_logout_orb),
949 &lu->logout_orb_dma, GFP_KERNEL); 949 &lu->logout_orb_dma, GFP_KERNEL);
950 if (!lu->logout_orb) 950 if (!lu->logout_orb)
951 goto alloc_fail; 951 goto alloc_fail;
952 952
953 lu->login_orb = dma_alloc_coherent(hi->host->device.parent, 953 lu->login_orb = dma_alloc_coherent(hi->host->device.parent,
954 sizeof(struct sbp2_login_orb), 954 sizeof(struct sbp2_login_orb),
955 &lu->login_orb_dma, GFP_KERNEL); 955 &lu->login_orb_dma, GFP_KERNEL);
956 if (!lu->login_orb) 956 if (!lu->login_orb)
957 goto alloc_fail; 957 goto alloc_fail;
958 958
959 if (sbp2util_create_command_orb_pool(lu)) 959 if (sbp2util_create_command_orb_pool(lu))
960 goto alloc_fail; 960 goto alloc_fail;
961 961
962 /* Wait a second before trying to log in. Previously logged in 962 /* Wait a second before trying to log in. Previously logged in
963 * initiators need a chance to reconnect. */ 963 * initiators need a chance to reconnect. */
964 if (msleep_interruptible(1000)) { 964 if (msleep_interruptible(1000)) {
965 sbp2_remove_device(lu); 965 sbp2_remove_device(lu);
966 return -EINTR; 966 return -EINTR;
967 } 967 }
968 968
969 if (sbp2_login_device(lu)) { 969 if (sbp2_login_device(lu)) {
970 sbp2_remove_device(lu); 970 sbp2_remove_device(lu);
971 return -EBUSY; 971 return -EBUSY;
972 } 972 }
973 973
974 sbp2_set_busy_timeout(lu); 974 sbp2_set_busy_timeout(lu);
975 sbp2_agent_reset(lu, 1); 975 sbp2_agent_reset(lu, 1);
976 sbp2_max_speed_and_size(lu); 976 sbp2_max_speed_and_size(lu);
977 977
978 if (lu->workarounds & SBP2_WORKAROUND_DELAY_INQUIRY) 978 if (lu->workarounds & SBP2_WORKAROUND_DELAY_INQUIRY)
979 ssleep(SBP2_INQUIRY_DELAY); 979 ssleep(SBP2_INQUIRY_DELAY);
980 980
981 error = scsi_add_device(lu->shost, 0, lu->ud->id, 0); 981 error = scsi_add_device(lu->shost, 0, lu->ud->id, 0);
982 if (error) { 982 if (error) {
983 SBP2_ERR("scsi_add_device failed"); 983 SBP2_ERR("scsi_add_device failed");
984 sbp2_logout_device(lu); 984 sbp2_logout_device(lu);
985 sbp2_remove_device(lu); 985 sbp2_remove_device(lu);
986 return error; 986 return error;
987 } 987 }
988 988
989 return 0; 989 return 0;
990 990
991 alloc_fail: 991 alloc_fail:
992 SBP2_ERR("Could not allocate memory for lu"); 992 SBP2_ERR("Could not allocate memory for lu");
993 sbp2_remove_device(lu); 993 sbp2_remove_device(lu);
994 return -ENOMEM; 994 return -ENOMEM;
995 } 995 }
996 996
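One subtlety above: msleep_interruptible() returns the time left to sleep, so any non-zero result means a signal cut the settle delay short and the whole login attempt is abandoned with -EINTR. A hedged userspace analogue of that convention, using nanosleep():

    #include <errno.h>
    #include <stdio.h>
    #include <time.h>

    static int settle_delay_ms(long ms)
    {
        struct timespec req = { .tv_sec = ms / 1000,
                                .tv_nsec = (ms % 1000) * 1000000L };

        /* EINTR here mirrors msleep_interruptible() returning the
         * milliseconds it did not sleep. */
        if (nanosleep(&req, NULL) != 0 && errno == EINTR)
            return -EINTR;
        return 0;
    }

    int main(void)
    {
        if (settle_delay_ms(1000))
            puts("interrupted - abort the login attempt");
        return 0;
    }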
997 static void sbp2_remove_device(struct sbp2_lu *lu) 997 static void sbp2_remove_device(struct sbp2_lu *lu)
998 { 998 {
999 struct sbp2_fwhost_info *hi; 999 struct sbp2_fwhost_info *hi;
1000 unsigned long flags; 1000 unsigned long flags;
1001 1001
1002 if (!lu) 1002 if (!lu)
1003 return; 1003 return;
1004 hi = lu->hi; 1004 hi = lu->hi;
1005 if (!hi) 1005 if (!hi)
1006 goto no_hi; 1006 goto no_hi;
1007 1007
1008 if (lu->shost) { 1008 if (lu->shost) {
1009 scsi_remove_host(lu->shost); 1009 scsi_remove_host(lu->shost);
1010 scsi_host_put(lu->shost); 1010 scsi_host_put(lu->shost);
1011 } 1011 }
1012 flush_scheduled_work(); 1012 flush_scheduled_work();
1013 sbp2util_remove_command_orb_pool(lu, hi->host); 1013 sbp2util_remove_command_orb_pool(lu, hi->host);
1014 1014
1015 write_lock_irqsave(&sbp2_hi_logical_units_lock, flags); 1015 write_lock_irqsave(&sbp2_hi_logical_units_lock, flags);
1016 list_del(&lu->lu_list); 1016 list_del(&lu->lu_list);
1017 write_unlock_irqrestore(&sbp2_hi_logical_units_lock, flags); 1017 write_unlock_irqrestore(&sbp2_hi_logical_units_lock, flags);
1018 1018
1019 if (lu->login_response) 1019 if (lu->login_response)
1020 dma_free_coherent(hi->host->device.parent, 1020 dma_free_coherent(hi->host->device.parent,
1021 sizeof(struct sbp2_login_response), 1021 sizeof(struct sbp2_login_response),
1022 lu->login_response, 1022 lu->login_response,
1023 lu->login_response_dma); 1023 lu->login_response_dma);
1024 if (lu->login_orb) 1024 if (lu->login_orb)
1025 dma_free_coherent(hi->host->device.parent, 1025 dma_free_coherent(hi->host->device.parent,
1026 sizeof(struct sbp2_login_orb), 1026 sizeof(struct sbp2_login_orb),
1027 lu->login_orb, 1027 lu->login_orb,
1028 lu->login_orb_dma); 1028 lu->login_orb_dma);
1029 if (lu->reconnect_orb) 1029 if (lu->reconnect_orb)
1030 dma_free_coherent(hi->host->device.parent, 1030 dma_free_coherent(hi->host->device.parent,
1031 sizeof(struct sbp2_reconnect_orb), 1031 sizeof(struct sbp2_reconnect_orb),
1032 lu->reconnect_orb, 1032 lu->reconnect_orb,
1033 lu->reconnect_orb_dma); 1033 lu->reconnect_orb_dma);
1034 if (lu->logout_orb) 1034 if (lu->logout_orb)
1035 dma_free_coherent(hi->host->device.parent, 1035 dma_free_coherent(hi->host->device.parent,
1036 sizeof(struct sbp2_logout_orb), 1036 sizeof(struct sbp2_logout_orb),
1037 lu->logout_orb, 1037 lu->logout_orb,
1038 lu->logout_orb_dma); 1038 lu->logout_orb_dma);
1039 if (lu->query_logins_orb) 1039 if (lu->query_logins_orb)
1040 dma_free_coherent(hi->host->device.parent, 1040 dma_free_coherent(hi->host->device.parent,
1041 sizeof(struct sbp2_query_logins_orb), 1041 sizeof(struct sbp2_query_logins_orb),
1042 lu->query_logins_orb, 1042 lu->query_logins_orb,
1043 lu->query_logins_orb_dma); 1043 lu->query_logins_orb_dma);
1044 if (lu->query_logins_response) 1044 if (lu->query_logins_response)
1045 dma_free_coherent(hi->host->device.parent, 1045 dma_free_coherent(hi->host->device.parent,
1046 sizeof(struct sbp2_query_logins_response), 1046 sizeof(struct sbp2_query_logins_response),
1047 lu->query_logins_response, 1047 lu->query_logins_response,
1048 lu->query_logins_response_dma); 1048 lu->query_logins_response_dma);
1049 1049
1050 if (lu->status_fifo_addr != CSR1212_INVALID_ADDR_SPACE) 1050 if (lu->status_fifo_addr != CSR1212_INVALID_ADDR_SPACE)
1051 hpsb_unregister_addrspace(&sbp2_highlevel, hi->host, 1051 hpsb_unregister_addrspace(&sbp2_highlevel, hi->host,
1052 lu->status_fifo_addr); 1052 lu->status_fifo_addr);
1053 1053
1054 dev_set_drvdata(&lu->ud->device, NULL); 1054 dev_set_drvdata(&lu->ud->device, NULL);
1055 1055
1056 module_put(hi->host->driver->owner); 1056 module_put(hi->host->driver->owner);
1057 no_hi: 1057 no_hi:
1058 kfree(lu); 1058 kfree(lu);
1059 } 1059 }
1060 1060
1061 #ifdef CONFIG_IEEE1394_SBP2_PHYS_DMA 1061 #ifdef CONFIG_IEEE1394_SBP2_PHYS_DMA
1062 /* 1062 /*
1063 * Deal with write requests on adapters which do not support physical DMA or 1063 * Deal with write requests on adapters which do not support physical DMA or
1064 * have it switched off. 1064 * have it switched off.
1065 */ 1065 */
1066 static int sbp2_handle_physdma_write(struct hpsb_host *host, int nodeid, 1066 static int sbp2_handle_physdma_write(struct hpsb_host *host, int nodeid,
1067 int destid, quadlet_t *data, u64 addr, 1067 int destid, quadlet_t *data, u64 addr,
1068 size_t length, u16 flags) 1068 size_t length, u16 flags)
1069 { 1069 {
1070 memcpy(bus_to_virt((u32) addr), data, length); 1070 memcpy(bus_to_virt((u32) addr), data, length);
1071 return RCODE_COMPLETE; 1071 return RCODE_COMPLETE;
1072 } 1072 }
1073 1073
1074 /* 1074 /*
1075 * Deal with read requests on adapters which do not support physical DMA or 1075 * Deal with read requests on adapters which do not support physical DMA or
1076 * have it switched off. 1076 * have it switched off.
1077 */ 1077 */
1078 static int sbp2_handle_physdma_read(struct hpsb_host *host, int nodeid, 1078 static int sbp2_handle_physdma_read(struct hpsb_host *host, int nodeid,
1079 quadlet_t *data, u64 addr, size_t length, 1079 quadlet_t *data, u64 addr, size_t length,
1080 u16 flags) 1080 u16 flags)
1081 { 1081 {
1082 memcpy(data, bus_to_virt((u32) addr), length); 1082 memcpy(data, bus_to_virt((u32) addr), length);
1083 return RCODE_COMPLETE; 1083 return RCODE_COMPLETE;
1084 } 1084 }
1085 #endif 1085 #endif
1086 1086
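These two handlers emulate physical DMA in software: the remote initiator's quadlets are simply memcpy'd to or from the kernel-virtual alias of the requested bus address, and the (u32) cast confines this to the low 4 GB. A toy model of the write side against a plain buffer; the bounds check is an illustrative addition, not something the handlers above perform:

    #include <stdint.h>
    #include <string.h>

    static uint8_t phys_window[4096];   /* stands in for low host memory */

    /* Emulated "physical DMA" write: copy request payload into host memory. */
    static int emulate_phys_write(uint64_t addr, const void *data, size_t len)
    {
        if (addr + len > sizeof(phys_window))
            return -1;             /* would be an address error */
        memcpy(phys_window + addr, data, len);
        return 0;                  /* RCODE_COMPLETE in the driver */
    }

    int main(void)
    {
        uint32_t quadlet = 0xdeadbeef;
        return emulate_phys_write(0x100, &quadlet, sizeof(quadlet));
    }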
1087 /************************************** 1087 /**************************************
1088 * SBP-2 protocol related section 1088 * SBP-2 protocol related section
1089 **************************************/ 1089 **************************************/
1090 1090
1091 static int sbp2_query_logins(struct sbp2_lu *lu) 1091 static int sbp2_query_logins(struct sbp2_lu *lu)
1092 { 1092 {
1093 struct sbp2_fwhost_info *hi = lu->hi; 1093 struct sbp2_fwhost_info *hi = lu->hi;
1094 quadlet_t data[2]; 1094 quadlet_t data[2];
1095 int max_logins; 1095 int max_logins;
1096 int active_logins; 1096 int active_logins;
1097 1097
1098 lu->query_logins_orb->reserved1 = 0x0; 1098 lu->query_logins_orb->reserved1 = 0x0;
1099 lu->query_logins_orb->reserved2 = 0x0; 1099 lu->query_logins_orb->reserved2 = 0x0;
1100 1100
1101 lu->query_logins_orb->query_response_lo = lu->query_logins_response_dma; 1101 lu->query_logins_orb->query_response_lo = lu->query_logins_response_dma;
1102 lu->query_logins_orb->query_response_hi = 1102 lu->query_logins_orb->query_response_hi =
1103 ORB_SET_NODE_ID(hi->host->node_id); 1103 ORB_SET_NODE_ID(hi->host->node_id);
1104 lu->query_logins_orb->lun_misc = 1104 lu->query_logins_orb->lun_misc =
1105 ORB_SET_FUNCTION(SBP2_QUERY_LOGINS_REQUEST); 1105 ORB_SET_FUNCTION(SBP2_QUERY_LOGINS_REQUEST);
1106 lu->query_logins_orb->lun_misc |= ORB_SET_NOTIFY(1); 1106 lu->query_logins_orb->lun_misc |= ORB_SET_NOTIFY(1);
1107 lu->query_logins_orb->lun_misc |= ORB_SET_LUN(lu->lun); 1107 lu->query_logins_orb->lun_misc |= ORB_SET_LUN(lu->lun);
1108 1108
1109 lu->query_logins_orb->reserved_resp_length = 1109 lu->query_logins_orb->reserved_resp_length =
1110 ORB_SET_QUERY_LOGINS_RESP_LENGTH( 1110 ORB_SET_QUERY_LOGINS_RESP_LENGTH(
1111 sizeof(struct sbp2_query_logins_response)); 1111 sizeof(struct sbp2_query_logins_response));
1112 1112
1113 lu->query_logins_orb->status_fifo_hi = 1113 lu->query_logins_orb->status_fifo_hi =
1114 ORB_SET_STATUS_FIFO_HI(lu->status_fifo_addr, hi->host->node_id); 1114 ORB_SET_STATUS_FIFO_HI(lu->status_fifo_addr, hi->host->node_id);
1115 lu->query_logins_orb->status_fifo_lo = 1115 lu->query_logins_orb->status_fifo_lo =
1116 ORB_SET_STATUS_FIFO_LO(lu->status_fifo_addr); 1116 ORB_SET_STATUS_FIFO_LO(lu->status_fifo_addr);
1117 1117
1118 sbp2util_cpu_to_be32_buffer(lu->query_logins_orb, 1118 sbp2util_cpu_to_be32_buffer(lu->query_logins_orb,
1119 sizeof(struct sbp2_query_logins_orb)); 1119 sizeof(struct sbp2_query_logins_orb));
1120 1120
1121 memset(lu->query_logins_response, 0, 1121 memset(lu->query_logins_response, 0,
1122 sizeof(struct sbp2_query_logins_response)); 1122 sizeof(struct sbp2_query_logins_response));
1123 1123
1124 data[0] = ORB_SET_NODE_ID(hi->host->node_id); 1124 data[0] = ORB_SET_NODE_ID(hi->host->node_id);
1125 data[1] = lu->query_logins_orb_dma; 1125 data[1] = lu->query_logins_orb_dma;
1126 sbp2util_cpu_to_be32_buffer(data, 8); 1126 sbp2util_cpu_to_be32_buffer(data, 8);
1127 1127
1128 hpsb_node_write(lu->ne, lu->management_agent_addr, data, 8); 1128 hpsb_node_write(lu->ne, lu->management_agent_addr, data, 8);
1129 1129
1130 if (sbp2util_access_timeout(lu, 2*HZ)) { 1130 if (sbp2util_access_timeout(lu, 2*HZ)) {
1131 SBP2_INFO("Error querying logins to SBP-2 device - timed out"); 1131 SBP2_INFO("Error querying logins to SBP-2 device - timed out");
1132 return -EIO; 1132 return -EIO;
1133 } 1133 }
1134 1134
1135 if (lu->status_block.ORB_offset_lo != lu->query_logins_orb_dma) { 1135 if (lu->status_block.ORB_offset_lo != lu->query_logins_orb_dma) {
1136 SBP2_INFO("Error querying logins to SBP-2 device - response doesn't match"); 1136 SBP2_INFO("Error querying logins to SBP-2 device - response doesn't match");
1137 return -EIO; 1137 return -EIO;
1138 } 1138 }
1139 1139
1140 if (STATUS_TEST_RDS(lu->status_block.ORB_offset_hi_misc)) { 1140 if (STATUS_TEST_RDS(lu->status_block.ORB_offset_hi_misc)) {
1141 SBP2_INFO("Error querying logins to SBP-2 device - failed"); 1141 SBP2_INFO("Error querying logins to SBP-2 device - failed");
1142 return -EIO; 1142 return -EIO;
1143 } 1143 }
1144 1144
1145 sbp2util_cpu_to_be32_buffer(lu->query_logins_response, 1145 sbp2util_cpu_to_be32_buffer(lu->query_logins_response,
1146 sizeof(struct sbp2_query_logins_response)); 1146 sizeof(struct sbp2_query_logins_response));
1147 1147
1148 max_logins = RESPONSE_GET_MAX_LOGINS( 1148 max_logins = RESPONSE_GET_MAX_LOGINS(
1149 lu->query_logins_response->length_max_logins); 1149 lu->query_logins_response->length_max_logins);
1150 SBP2_INFO("Maximum concurrent logins supported: %d", max_logins); 1150 SBP2_INFO("Maximum concurrent logins supported: %d", max_logins);
1151 1151
1152 active_logins = RESPONSE_GET_ACTIVE_LOGINS( 1152 active_logins = RESPONSE_GET_ACTIVE_LOGINS(
1153 lu->query_logins_response->length_max_logins); 1153 lu->query_logins_response->length_max_logins);
1154 SBP2_INFO("Number of active logins: %d", active_logins); 1154 SBP2_INFO("Number of active logins: %d", active_logins);
1155 1155
1156 if (active_logins >= max_logins) { 1156 if (active_logins >= max_logins) {
1157 return -EIO; 1157 return -EIO;
1158 } 1158 }
1159 1159
1160 return 0; 1160 return 0;
1161 } 1161 }
1162 1162
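sbp2_query_logins() is the first instance of the management-agent handshake that login, logout and reconnect below all repeat: build the ORB in a DMA buffer, write an 8-byte {node ID, ORB address} pointer to the target's management agent register, wait for the target to post a status block to our FIFO, then verify the status actually refers to our ORB and does not flag an error. A condensed, compilable sketch of that pattern; the helpers are stubs standing in for hpsb_node_write() and sbp2util_access_timeout():

    #include <stdint.h>
    #include <stdio.h>

    struct mgt_status { uint32_t orb_offset_lo; int failed; };

    static int write_mgt_agent(uint64_t agent, uint32_t orb)
    {
        (void)agent; (void)orb;
        return 0;
    }

    static int wait_status(struct mgt_status *st, uint32_t orb)
    {
        st->orb_offset_lo = orb;   /* pretend the target answered us */
        st->failed = 0;
        return 0;                  /* 0 = status arrived in time */
    }

    static int mgt_request(uint64_t agent, uint32_t orb)
    {
        struct mgt_status st;

        if (write_mgt_agent(agent, orb))
            return -1;
        if (wait_status(&st, orb))
            return -1;                       /* timed out */
        if (st.orb_offset_lo != orb)
            return -1;                       /* status for another ORB */
        if (st.failed)
            return -1;                       /* resp/sbp_status nonzero */
        return 0;
    }

    int main(void)
    {
        printf("%d\n", mgt_request(0xfffff0010000ULL, 0x8000));   /* 0 */
        return 0;
    }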
1163 static int sbp2_login_device(struct sbp2_lu *lu) 1163 static int sbp2_login_device(struct sbp2_lu *lu)
1164 { 1164 {
1165 struct sbp2_fwhost_info *hi = lu->hi; 1165 struct sbp2_fwhost_info *hi = lu->hi;
1166 quadlet_t data[2]; 1166 quadlet_t data[2];
1167 1167
1168 if (!lu->login_orb) 1168 if (!lu->login_orb)
1169 return -EIO; 1169 return -EIO;
1170 1170
1171 if (!sbp2_exclusive_login && sbp2_query_logins(lu)) { 1171 if (!sbp2_exclusive_login && sbp2_query_logins(lu)) {
1172 SBP2_INFO("Device does not support any more concurrent logins"); 1172 SBP2_INFO("Device does not support any more concurrent logins");
1173 return -EIO; 1173 return -EIO;
1174 } 1174 }
1175 1175
1176 /* assume no password */ 1176 /* assume no password */
1177 lu->login_orb->password_hi = 0; 1177 lu->login_orb->password_hi = 0;
1178 lu->login_orb->password_lo = 0; 1178 lu->login_orb->password_lo = 0;
1179 1179
1180 lu->login_orb->login_response_lo = lu->login_response_dma; 1180 lu->login_orb->login_response_lo = lu->login_response_dma;
1181 lu->login_orb->login_response_hi = ORB_SET_NODE_ID(hi->host->node_id); 1181 lu->login_orb->login_response_hi = ORB_SET_NODE_ID(hi->host->node_id);
1182 lu->login_orb->lun_misc = ORB_SET_FUNCTION(SBP2_LOGIN_REQUEST); 1182 lu->login_orb->lun_misc = ORB_SET_FUNCTION(SBP2_LOGIN_REQUEST);
1183 1183
1184 /* one second reconnect time */ 1184 /* one second reconnect time */
1185 lu->login_orb->lun_misc |= ORB_SET_RECONNECT(0); 1185 lu->login_orb->lun_misc |= ORB_SET_RECONNECT(0);
1186 lu->login_orb->lun_misc |= ORB_SET_EXCLUSIVE(sbp2_exclusive_login); 1186 lu->login_orb->lun_misc |= ORB_SET_EXCLUSIVE(sbp2_exclusive_login);
1187 lu->login_orb->lun_misc |= ORB_SET_NOTIFY(1); 1187 lu->login_orb->lun_misc |= ORB_SET_NOTIFY(1);
1188 lu->login_orb->lun_misc |= ORB_SET_LUN(lu->lun); 1188 lu->login_orb->lun_misc |= ORB_SET_LUN(lu->lun);
1189 1189
1190 lu->login_orb->passwd_resp_lengths = 1190 lu->login_orb->passwd_resp_lengths =
1191 ORB_SET_LOGIN_RESP_LENGTH(sizeof(struct sbp2_login_response)); 1191 ORB_SET_LOGIN_RESP_LENGTH(sizeof(struct sbp2_login_response));
1192 1192
1193 lu->login_orb->status_fifo_hi = 1193 lu->login_orb->status_fifo_hi =
1194 ORB_SET_STATUS_FIFO_HI(lu->status_fifo_addr, hi->host->node_id); 1194 ORB_SET_STATUS_FIFO_HI(lu->status_fifo_addr, hi->host->node_id);
1195 lu->login_orb->status_fifo_lo = 1195 lu->login_orb->status_fifo_lo =
1196 ORB_SET_STATUS_FIFO_LO(lu->status_fifo_addr); 1196 ORB_SET_STATUS_FIFO_LO(lu->status_fifo_addr);
1197 1197
1198 sbp2util_cpu_to_be32_buffer(lu->login_orb, 1198 sbp2util_cpu_to_be32_buffer(lu->login_orb,
1199 sizeof(struct sbp2_login_orb)); 1199 sizeof(struct sbp2_login_orb));
1200 1200
1201 memset(lu->login_response, 0, sizeof(struct sbp2_login_response)); 1201 memset(lu->login_response, 0, sizeof(struct sbp2_login_response));
1202 1202
1203 data[0] = ORB_SET_NODE_ID(hi->host->node_id); 1203 data[0] = ORB_SET_NODE_ID(hi->host->node_id);
1204 data[1] = lu->login_orb_dma; 1204 data[1] = lu->login_orb_dma;
1205 sbp2util_cpu_to_be32_buffer(data, 8); 1205 sbp2util_cpu_to_be32_buffer(data, 8);
1206 1206
1207 hpsb_node_write(lu->ne, lu->management_agent_addr, data, 8); 1207 hpsb_node_write(lu->ne, lu->management_agent_addr, data, 8);
1208 1208
1209 /* wait up to 20 seconds for login status */ 1209 /* wait up to 20 seconds for login status */
1210 if (sbp2util_access_timeout(lu, 20*HZ)) { 1210 if (sbp2util_access_timeout(lu, 20*HZ)) {
1211 SBP2_ERR("Error logging into SBP-2 device - timed out"); 1211 SBP2_ERR("Error logging into SBP-2 device - timed out");
1212 return -EIO; 1212 return -EIO;
1213 } 1213 }
1214 1214
1215 /* make sure that the returned status matches the login ORB */ 1215 /* make sure that the returned status matches the login ORB */
1216 if (lu->status_block.ORB_offset_lo != lu->login_orb_dma) { 1216 if (lu->status_block.ORB_offset_lo != lu->login_orb_dma) {
1217 SBP2_ERR("Error logging into SBP-2 device - response doesn't match"); 1217 SBP2_ERR("Error logging into SBP-2 device - response doesn't match");
1218 return -EIO; 1218 return -EIO;
1219 } 1219 }
1220 1220
1221 if (STATUS_TEST_RDS(lu->status_block.ORB_offset_hi_misc)) { 1221 if (STATUS_TEST_RDS(lu->status_block.ORB_offset_hi_misc)) {
1222 SBP2_ERR("Error logging into SBP-2 device - failed"); 1222 SBP2_ERR("Error logging into SBP-2 device - failed");
1223 return -EIO; 1223 return -EIO;
1224 } 1224 }
1225 1225
1226 sbp2util_cpu_to_be32_buffer(lu->login_response, 1226 sbp2util_cpu_to_be32_buffer(lu->login_response,
1227 sizeof(struct sbp2_login_response)); 1227 sizeof(struct sbp2_login_response));
1228 lu->command_block_agent_addr = 1228 lu->command_block_agent_addr =
1229 ((u64)lu->login_response->command_block_agent_hi) << 32; 1229 ((u64)lu->login_response->command_block_agent_hi) << 32;
1230 lu->command_block_agent_addr |= 1230 lu->command_block_agent_addr |=
1231 ((u64)lu->login_response->command_block_agent_lo); 1231 ((u64)lu->login_response->command_block_agent_lo);
1232 lu->command_block_agent_addr &= 0x0000ffffffffffffULL; 1232 lu->command_block_agent_addr &= 0x0000ffffffffffffULL;
1233 1233
1234 SBP2_INFO("Logged into SBP-2 device"); 1234 SBP2_INFO("Logged into SBP-2 device");
1235 return 0; 1235 return 0;
1236 } 1236 }
1237 1237
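The last step above splices the command block agent address back together from two 32-bit response fields and masks it to 48 bits, since the upper 16 bits of the hi quadlet carry the node ID rather than part of the CSR offset. A worked example with made-up response values:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Illustrative values, not from a real device. */
        uint32_t hi = 0xffc0ffff;   /* node ID in upper bits, offset_hi below */
        uint32_t lo = 0xf0010000;

        uint64_t addr = ((uint64_t)hi << 32 | lo) & 0x0000ffffffffffffULL;

        printf("command block agent at 0x%012llx\n",
               (unsigned long long)addr);   /* 0xfffff0010000 */
        return 0;
    }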
1238 static int sbp2_logout_device(struct sbp2_lu *lu) 1238 static int sbp2_logout_device(struct sbp2_lu *lu)
1239 { 1239 {
1240 struct sbp2_fwhost_info *hi = lu->hi; 1240 struct sbp2_fwhost_info *hi = lu->hi;
1241 quadlet_t data[2]; 1241 quadlet_t data[2];
1242 int error; 1242 int error;
1243 1243
1244 lu->logout_orb->reserved1 = 0x0; 1244 lu->logout_orb->reserved1 = 0x0;
1245 lu->logout_orb->reserved2 = 0x0; 1245 lu->logout_orb->reserved2 = 0x0;
1246 lu->logout_orb->reserved3 = 0x0; 1246 lu->logout_orb->reserved3 = 0x0;
1247 lu->logout_orb->reserved4 = 0x0; 1247 lu->logout_orb->reserved4 = 0x0;
1248 1248
1249 lu->logout_orb->login_ID_misc = ORB_SET_FUNCTION(SBP2_LOGOUT_REQUEST); 1249 lu->logout_orb->login_ID_misc = ORB_SET_FUNCTION(SBP2_LOGOUT_REQUEST);
1250 lu->logout_orb->login_ID_misc |= 1250 lu->logout_orb->login_ID_misc |=
1251 ORB_SET_LOGIN_ID(lu->login_response->length_login_ID); 1251 ORB_SET_LOGIN_ID(lu->login_response->length_login_ID);
1252 lu->logout_orb->login_ID_misc |= ORB_SET_NOTIFY(1); 1252 lu->logout_orb->login_ID_misc |= ORB_SET_NOTIFY(1);
1253 1253
1254 lu->logout_orb->reserved5 = 0x0; 1254 lu->logout_orb->reserved5 = 0x0;
1255 lu->logout_orb->status_fifo_hi = 1255 lu->logout_orb->status_fifo_hi =
1256 ORB_SET_STATUS_FIFO_HI(lu->status_fifo_addr, hi->host->node_id); 1256 ORB_SET_STATUS_FIFO_HI(lu->status_fifo_addr, hi->host->node_id);
1257 lu->logout_orb->status_fifo_lo = 1257 lu->logout_orb->status_fifo_lo =
1258 ORB_SET_STATUS_FIFO_LO(lu->status_fifo_addr); 1258 ORB_SET_STATUS_FIFO_LO(lu->status_fifo_addr);
1259 1259
1260 sbp2util_cpu_to_be32_buffer(lu->logout_orb, 1260 sbp2util_cpu_to_be32_buffer(lu->logout_orb,
1261 sizeof(struct sbp2_logout_orb)); 1261 sizeof(struct sbp2_logout_orb));
1262 1262
1263 data[0] = ORB_SET_NODE_ID(hi->host->node_id); 1263 data[0] = ORB_SET_NODE_ID(hi->host->node_id);
1264 data[1] = lu->logout_orb_dma; 1264 data[1] = lu->logout_orb_dma;
1265 sbp2util_cpu_to_be32_buffer(data, 8); 1265 sbp2util_cpu_to_be32_buffer(data, 8);
1266 1266
1267 error = hpsb_node_write(lu->ne, lu->management_agent_addr, data, 8); 1267 error = hpsb_node_write(lu->ne, lu->management_agent_addr, data, 8);
1268 if (error) 1268 if (error)
1269 return error; 1269 return error;
1270 1270
1271 /* wait up to 1 second for the device to complete logout */ 1271 /* wait up to 1 second for the device to complete logout */
1272 if (sbp2util_access_timeout(lu, HZ)) 1272 if (sbp2util_access_timeout(lu, HZ))
1273 return -EIO; 1273 return -EIO;
1274 1274
1275 SBP2_INFO("Logged out of SBP-2 device"); 1275 SBP2_INFO("Logged out of SBP-2 device");
1276 return 0; 1276 return 0;
1277 } 1277 }
1278 1278
1279 static int sbp2_reconnect_device(struct sbp2_lu *lu) 1279 static int sbp2_reconnect_device(struct sbp2_lu *lu)
1280 { 1280 {
1281 struct sbp2_fwhost_info *hi = lu->hi; 1281 struct sbp2_fwhost_info *hi = lu->hi;
1282 quadlet_t data[2]; 1282 quadlet_t data[2];
1283 int error; 1283 int error;
1284 1284
1285 lu->reconnect_orb->reserved1 = 0x0; 1285 lu->reconnect_orb->reserved1 = 0x0;
1286 lu->reconnect_orb->reserved2 = 0x0; 1286 lu->reconnect_orb->reserved2 = 0x0;
1287 lu->reconnect_orb->reserved3 = 0x0; 1287 lu->reconnect_orb->reserved3 = 0x0;
1288 lu->reconnect_orb->reserved4 = 0x0; 1288 lu->reconnect_orb->reserved4 = 0x0;
1289 1289
1290 lu->reconnect_orb->login_ID_misc = 1290 lu->reconnect_orb->login_ID_misc =
1291 ORB_SET_FUNCTION(SBP2_RECONNECT_REQUEST); 1291 ORB_SET_FUNCTION(SBP2_RECONNECT_REQUEST);
1292 lu->reconnect_orb->login_ID_misc |= 1292 lu->reconnect_orb->login_ID_misc |=
1293 ORB_SET_LOGIN_ID(lu->login_response->length_login_ID); 1293 ORB_SET_LOGIN_ID(lu->login_response->length_login_ID);
1294 lu->reconnect_orb->login_ID_misc |= ORB_SET_NOTIFY(1); 1294 lu->reconnect_orb->login_ID_misc |= ORB_SET_NOTIFY(1);
1295 1295
1296 lu->reconnect_orb->reserved5 = 0x0; 1296 lu->reconnect_orb->reserved5 = 0x0;
1297 lu->reconnect_orb->status_fifo_hi = 1297 lu->reconnect_orb->status_fifo_hi =
1298 ORB_SET_STATUS_FIFO_HI(lu->status_fifo_addr, hi->host->node_id); 1298 ORB_SET_STATUS_FIFO_HI(lu->status_fifo_addr, hi->host->node_id);
1299 lu->reconnect_orb->status_fifo_lo = 1299 lu->reconnect_orb->status_fifo_lo =
1300 ORB_SET_STATUS_FIFO_LO(lu->status_fifo_addr); 1300 ORB_SET_STATUS_FIFO_LO(lu->status_fifo_addr);
1301 1301
1302 sbp2util_cpu_to_be32_buffer(lu->reconnect_orb, 1302 sbp2util_cpu_to_be32_buffer(lu->reconnect_orb,
1303 sizeof(struct sbp2_reconnect_orb)); 1303 sizeof(struct sbp2_reconnect_orb));
1304 1304
1305 data[0] = ORB_SET_NODE_ID(hi->host->node_id); 1305 data[0] = ORB_SET_NODE_ID(hi->host->node_id);
1306 data[1] = lu->reconnect_orb_dma; 1306 data[1] = lu->reconnect_orb_dma;
1307 sbp2util_cpu_to_be32_buffer(data, 8); 1307 sbp2util_cpu_to_be32_buffer(data, 8);
1308 1308
1309 error = hpsb_node_write(lu->ne, lu->management_agent_addr, data, 8); 1309 error = hpsb_node_write(lu->ne, lu->management_agent_addr, data, 8);
1310 if (error) 1310 if (error)
1311 return error; 1311 return error;
1312 1312
1313 /* wait up to 1 second for reconnect status */ 1313 /* wait up to 1 second for reconnect status */
1314 if (sbp2util_access_timeout(lu, HZ)) { 1314 if (sbp2util_access_timeout(lu, HZ)) {
1315 SBP2_ERR("Error reconnecting to SBP-2 device - timed out"); 1315 SBP2_ERR("Error reconnecting to SBP-2 device - timed out");
1316 return -EIO; 1316 return -EIO;
1317 } 1317 }
1318 1318
1319 /* make sure that the returned status matches the reconnect ORB */ 1319 /* make sure that the returned status matches the reconnect ORB */
1320 if (lu->status_block.ORB_offset_lo != lu->reconnect_orb_dma) { 1320 if (lu->status_block.ORB_offset_lo != lu->reconnect_orb_dma) {
1321 SBP2_ERR("Error reconnecting to SBP-2 device - response doesn't match"); 1321 SBP2_ERR("Error reconnecting to SBP-2 device - response doesn't match");
1322 return -EIO; 1322 return -EIO;
1323 } 1323 }
1324 1324
1325 if (STATUS_TEST_RDS(lu->status_block.ORB_offset_hi_misc)) { 1325 if (STATUS_TEST_RDS(lu->status_block.ORB_offset_hi_misc)) {
1326 SBP2_ERR("Error reconnecting to SBP-2 device - failed"); 1326 SBP2_ERR("Error reconnecting to SBP-2 device - failed");
1327 return -EIO; 1327 return -EIO;
1328 } 1328 }
1329 1329
1330 SBP2_INFO("Reconnected to SBP-2 device"); 1330 SBP2_INFO("Reconnected to SBP-2 device");
1331 return 0; 1331 return 0;
1332 } 1332 }
1333 1333
1334 /* 1334 /*
1335 * Set the target node's Single Phase Retry limit. Affects the target's retry 1335 * Set the target node's Single Phase Retry limit. Affects the target's retry
1336 * behaviour if our node is too busy to accept requests. 1336 * behaviour if our node is too busy to accept requests.
1337 */ 1337 */
1338 static int sbp2_set_busy_timeout(struct sbp2_lu *lu) 1338 static int sbp2_set_busy_timeout(struct sbp2_lu *lu)
1339 { 1339 {
1340 quadlet_t data; 1340 quadlet_t data;
1341 1341
1342 data = cpu_to_be32(SBP2_BUSY_TIMEOUT_VALUE); 1342 data = cpu_to_be32(SBP2_BUSY_TIMEOUT_VALUE);
1343 if (hpsb_node_write(lu->ne, SBP2_BUSY_TIMEOUT_ADDRESS, &data, 4)) 1343 if (hpsb_node_write(lu->ne, SBP2_BUSY_TIMEOUT_ADDRESS, &data, 4))
1344 SBP2_ERR("%s error", __func__); 1344 SBP2_ERR("%s error", __func__);
1345 return 0; 1345 return 0;
1346 } 1346 }
1347 1347
1348 static void sbp2_parse_unit_directory(struct sbp2_lu *lu, 1348 static void sbp2_parse_unit_directory(struct sbp2_lu *lu,
1349 struct unit_directory *ud) 1349 struct unit_directory *ud)
1350 { 1350 {
1351 struct csr1212_keyval *kv; 1351 struct csr1212_keyval *kv;
1352 struct csr1212_dentry *dentry; 1352 struct csr1212_dentry *dentry;
1353 u64 management_agent_addr; 1353 u64 management_agent_addr;
1354 u32 unit_characteristics, firmware_revision, model; 1354 u32 firmware_revision, model;
1355 unsigned workarounds; 1355 unsigned workarounds;
1356 int i; 1356 int i;
1357 1357
1358 management_agent_addr = 0; 1358 management_agent_addr = 0;
1359 unit_characteristics = 0;
1360 firmware_revision = SBP2_ROM_VALUE_MISSING; 1359 firmware_revision = SBP2_ROM_VALUE_MISSING;
1361 model = ud->flags & UNIT_DIRECTORY_MODEL_ID ? 1360 model = ud->flags & UNIT_DIRECTORY_MODEL_ID ?
1362 ud->model_id : SBP2_ROM_VALUE_MISSING; 1361 ud->model_id : SBP2_ROM_VALUE_MISSING;
1363 1362
1364 csr1212_for_each_dir_entry(ud->ne->csr, kv, ud->ud_kv, dentry) { 1363 csr1212_for_each_dir_entry(ud->ne->csr, kv, ud->ud_kv, dentry) {
1365 switch (kv->key.id) { 1364 switch (kv->key.id) {
1366 case CSR1212_KV_ID_DEPENDENT_INFO: 1365 case CSR1212_KV_ID_DEPENDENT_INFO:
1367 if (kv->key.type == CSR1212_KV_TYPE_CSR_OFFSET) 1366 if (kv->key.type == CSR1212_KV_TYPE_CSR_OFFSET)
1368 management_agent_addr = 1367 management_agent_addr =
1369 CSR1212_REGISTER_SPACE_BASE + 1368 CSR1212_REGISTER_SPACE_BASE +
1370 (kv->value.csr_offset << 2); 1369 (kv->value.csr_offset << 2);
1371 1370
1372 else if (kv->key.type == CSR1212_KV_TYPE_IMMEDIATE) 1371 else if (kv->key.type == CSR1212_KV_TYPE_IMMEDIATE)
1373 lu->lun = ORB_SET_LUN(kv->value.immediate); 1372 lu->lun = ORB_SET_LUN(kv->value.immediate);
1374 break; 1373 break;
1375 1374
1376 case SBP2_UNIT_CHARACTERISTICS_KEY:
1377 /* FIXME: This is ignored so far.
1378 * See SBP-2 clause 7.4.8. */
1379 unit_characteristics = kv->value.immediate;
1380 break;
1381 1375
1382 case SBP2_FIRMWARE_REVISION_KEY: 1376 case SBP2_FIRMWARE_REVISION_KEY:
1383 firmware_revision = kv->value.immediate; 1377 firmware_revision = kv->value.immediate;
1384 break; 1378 break;
1385 1379
1386 default: 1380 default:
1381 /* FIXME: Check for SBP2_UNIT_CHARACTERISTICS_KEY
1382 * mgt_ORB_timeout and ORB_size, SBP-2 clause 7.4.8. */
1383
1387 /* FIXME: Check for SBP2_DEVICE_TYPE_AND_LUN_KEY. 1384 /* FIXME: Check for SBP2_DEVICE_TYPE_AND_LUN_KEY.
1388 * Its "ordered" bit has consequences for command ORB 1385 * Its "ordered" bit has consequences for command ORB
1389 * list handling. See SBP-2 clauses 4.6, 7.4.11, 10.2 */ 1386 * list handling. See SBP-2 clauses 4.6, 7.4.11, 10.2 */
1390 break; 1387 break;
1391 } 1388 }
1392 } 1389 }
1393 1390
1394 workarounds = sbp2_default_workarounds; 1391 workarounds = sbp2_default_workarounds;
1395 1392
1396 if (!(workarounds & SBP2_WORKAROUND_OVERRIDE)) 1393 if (!(workarounds & SBP2_WORKAROUND_OVERRIDE))
1397 for (i = 0; i < ARRAY_SIZE(sbp2_workarounds_table); i++) { 1394 for (i = 0; i < ARRAY_SIZE(sbp2_workarounds_table); i++) {
1398 if (sbp2_workarounds_table[i].firmware_revision != 1395 if (sbp2_workarounds_table[i].firmware_revision !=
1399 SBP2_ROM_VALUE_WILDCARD && 1396 SBP2_ROM_VALUE_WILDCARD &&
1400 sbp2_workarounds_table[i].firmware_revision != 1397 sbp2_workarounds_table[i].firmware_revision !=
1401 (firmware_revision & 0xffff00)) 1398 (firmware_revision & 0xffff00))
1402 continue; 1399 continue;
1403 if (sbp2_workarounds_table[i].model != 1400 if (sbp2_workarounds_table[i].model !=
1404 SBP2_ROM_VALUE_WILDCARD && 1401 SBP2_ROM_VALUE_WILDCARD &&
1405 sbp2_workarounds_table[i].model != model) 1402 sbp2_workarounds_table[i].model != model)
1406 continue; 1403 continue;
1407 workarounds |= sbp2_workarounds_table[i].workarounds; 1404 workarounds |= sbp2_workarounds_table[i].workarounds;
1408 break; 1405 break;
1409 } 1406 }
1410 1407
1411 if (workarounds) 1408 if (workarounds)
1412 SBP2_INFO("Workarounds for node " NODE_BUS_FMT ": 0x%x " 1409 SBP2_INFO("Workarounds for node " NODE_BUS_FMT ": 0x%x "
1413 "(firmware_revision 0x%06x, vendor_id 0x%06x," 1410 "(firmware_revision 0x%06x, vendor_id 0x%06x,"
1414 " model_id 0x%06x)", 1411 " model_id 0x%06x)",
1415 NODE_BUS_ARGS(ud->ne->host, ud->ne->nodeid), 1412 NODE_BUS_ARGS(ud->ne->host, ud->ne->nodeid),
1416 workarounds, firmware_revision, ud->vendor_id, 1413 workarounds, firmware_revision, ud->vendor_id,
1417 model); 1414 model);
1418 1415
1419 /* We would need one SCSI host template for each target to adjust 1416 /* We would need one SCSI host template for each target to adjust
1420 * max_sectors on the fly, therefore warn only. */ 1417 * max_sectors on the fly, therefore warn only. */
1421 if (workarounds & SBP2_WORKAROUND_128K_MAX_TRANS && 1418 if (workarounds & SBP2_WORKAROUND_128K_MAX_TRANS &&
1422 (sbp2_max_sectors * 512) > (128 * 1024)) 1419 (sbp2_max_sectors * 512) > (128 * 1024))
1423 SBP2_INFO("Node " NODE_BUS_FMT ": Bridge only supports 128KB " 1420 SBP2_INFO("Node " NODE_BUS_FMT ": Bridge only supports 128KB "
1424 "max transfer size. WARNING: Current max_sectors " 1421 "max transfer size. WARNING: Current max_sectors "
1425 "setting is larger than 128KB (%d sectors)", 1422 "setting is larger than 128KB (%d sectors)",
1426 NODE_BUS_ARGS(ud->ne->host, ud->ne->nodeid), 1423 NODE_BUS_ARGS(ud->ne->host, ud->ne->nodeid),
1427 sbp2_max_sectors); 1424 sbp2_max_sectors);
1428 1425
1429 /* If this is a logical unit directory entry, process the parent 1426 /* If this is a logical unit directory entry, process the parent
1430 * to get the values. */ 1427 * to get the values. */
1431 if (ud->flags & UNIT_DIRECTORY_LUN_DIRECTORY) { 1428 if (ud->flags & UNIT_DIRECTORY_LUN_DIRECTORY) {
1432 struct unit_directory *parent_ud = container_of( 1429 struct unit_directory *parent_ud = container_of(
1433 ud->device.parent, struct unit_directory, device); 1430 ud->device.parent, struct unit_directory, device);
1434 sbp2_parse_unit_directory(lu, parent_ud); 1431 sbp2_parse_unit_directory(lu, parent_ud);
1435 } else { 1432 } else {
1436 lu->management_agent_addr = management_agent_addr; 1433 lu->management_agent_addr = management_agent_addr;
1437 lu->workarounds = workarounds; 1434 lu->workarounds = workarounds;
1438 if (ud->flags & UNIT_DIRECTORY_HAS_LUN) 1435 if (ud->flags & UNIT_DIRECTORY_HAS_LUN)
1439 lu->lun = ORB_SET_LUN(ud->lun); 1436 lu->lun = ORB_SET_LUN(ud->lun);
1440 } 1437 }
1441 } 1438 }
1442 1439
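The workarounds loop applies the first table entry whose firmware_revision and model both match, where SBP2_ROM_VALUE_WILDCARD matches anything and the revision is compared with its low byte masked off, so one entry covers a whole firmware family. A compilable sketch of that predicate with illustrative values:

    #include <stdint.h>
    #include <stdio.h>

    #define ROM_WILDCARD 0xffffffffu   /* stands in for SBP2_ROM_VALUE_WILDCARD */

    struct quirk { uint32_t fw_rev, model, workarounds; };

    static int quirk_matches(const struct quirk *q, uint32_t fw_rev, uint32_t model)
    {
        if (q->fw_rev != ROM_WILDCARD && q->fw_rev != (fw_rev & 0xffff00))
            return 0;
        if (q->model != ROM_WILDCARD && q->model != model)
            return 0;
        return 1;
    }

    int main(void)
    {
        /* Hypothetical entry: any model with firmware family 0x002800. */
        struct quirk q = { 0x002800, ROM_WILDCARD, 0x1 };

        printf("%d\n", quirk_matches(&q, 0x002845, 0x000123));   /* 1 */
        printf("%d\n", quirk_matches(&q, 0x012845, 0x000123));   /* 0 */
        return 0;
    }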
1443 #define SBP2_PAYLOAD_TO_BYTES(p) (1 << ((p) + 2)) 1440 #define SBP2_PAYLOAD_TO_BYTES(p) (1 << ((p) + 2))
1444 1441
1445 /* 1442 /*
1446 * This function is called in order to determine the max speed and packet 1443 * This function is called in order to determine the max speed and packet
1447 * size we can use in our ORBs. Note that we (the driver and host) only 1444 * size we can use in our ORBs. Note that we (the driver and host) only
1448 * initiate the transaction. The SBP-2 device actually transfers the data 1445 * initiate the transaction. The SBP-2 device actually transfers the data
1449 * (by reading from the DMA area we tell it). This means that the SBP-2 1446 * (by reading from the DMA area we tell it). This means that the SBP-2
1450 * device decides the actual maximum data it can transfer. We just tell it 1447 * device decides the actual maximum data it can transfer. We just tell it
1451 * the speed that it needs to use, and the max_rec the host supports, and 1448 * the speed that it needs to use, and the max_rec the host supports, and
1452 * it takes care of the rest. 1449 * it takes care of the rest.
1453 */ 1450 */
1454 static int sbp2_max_speed_and_size(struct sbp2_lu *lu) 1451 static int sbp2_max_speed_and_size(struct sbp2_lu *lu)
1455 { 1452 {
1456 struct sbp2_fwhost_info *hi = lu->hi; 1453 struct sbp2_fwhost_info *hi = lu->hi;
1457 u8 payload; 1454 u8 payload;
1458 1455
1459 lu->speed_code = hi->host->speed[NODEID_TO_NODE(lu->ne->nodeid)]; 1456 lu->speed_code = hi->host->speed[NODEID_TO_NODE(lu->ne->nodeid)];
1460 1457
1461 if (lu->speed_code > sbp2_max_speed) { 1458 if (lu->speed_code > sbp2_max_speed) {
1462 lu->speed_code = sbp2_max_speed; 1459 lu->speed_code = sbp2_max_speed;
1463 SBP2_INFO("Reducing speed to %s", 1460 SBP2_INFO("Reducing speed to %s",
1464 hpsb_speedto_str[sbp2_max_speed]); 1461 hpsb_speedto_str[sbp2_max_speed]);
1465 } 1462 }
1466 1463
1467 /* Payload size is the lesser of what our speed supports and what 1464 /* Payload size is the lesser of what our speed supports and what
1468 * our host supports. */ 1465 * our host supports. */
1469 payload = min(sbp2_speedto_max_payload[lu->speed_code], 1466 payload = min(sbp2_speedto_max_payload[lu->speed_code],
1470 (u8) (hi->host->csr.max_rec - 1)); 1467 (u8) (hi->host->csr.max_rec - 1));
1471 1468
1472 /* If physical DMA is off, work around limitation in ohci1394: 1469 /* If physical DMA is off, work around limitation in ohci1394:
1473 * packet size must not exceed PAGE_SIZE */ 1470 * packet size must not exceed PAGE_SIZE */
1474 if (lu->ne->host->low_addr_space < (1ULL << 32)) 1471 if (lu->ne->host->low_addr_space < (1ULL << 32))
1475 while (SBP2_PAYLOAD_TO_BYTES(payload) + 24 > PAGE_SIZE && 1472 while (SBP2_PAYLOAD_TO_BYTES(payload) + 24 > PAGE_SIZE &&
1476 payload) 1473 payload)
1477 payload--; 1474 payload--;
1478 1475
1479 SBP2_INFO("Node " NODE_BUS_FMT ": Max speed [%s] - Max payload [%u]", 1476 SBP2_INFO("Node " NODE_BUS_FMT ": Max speed [%s] - Max payload [%u]",
1480 NODE_BUS_ARGS(hi->host, lu->ne->nodeid), 1477 NODE_BUS_ARGS(hi->host, lu->ne->nodeid),
1481 hpsb_speedto_str[lu->speed_code], 1478 hpsb_speedto_str[lu->speed_code],
1482 SBP2_PAYLOAD_TO_BYTES(payload)); 1479 SBP2_PAYLOAD_TO_BYTES(payload));
1483 1480
1484 lu->max_payload_size = payload; 1481 lu->max_payload_size = payload;
1485 return 0; 1482 return 0;
1486 } 1483 }
1487 1484
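SBP2_PAYLOAD_TO_BYTES() decodes an IEEE 1394 payload code p as 2^(p+2) bytes, and the function above clamps the code to the lesser of what the negotiated speed and the host's max_rec allow. A small worked run with illustrative inputs (S400's async limit of 2048 bytes corresponds to code 9):

    #include <stdio.h>

    #define SBP2_PAYLOAD_TO_BYTES(p) (1 << ((p) + 2))

    int main(void)
    {
        /* Illustrative: S400 allows payload code 9, and a host with
         * max_rec = 10 advertises code max_rec - 1 = 9. */
        unsigned speed_payload = 9, max_rec = 10;
        unsigned payload = speed_payload < max_rec - 1 ? speed_payload
                                                       : max_rec - 1;

        printf("payload code %u = %d bytes\n",
               payload, SBP2_PAYLOAD_TO_BYTES(payload));   /* 9 = 2048 */
        return 0;
    }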
1488 static int sbp2_agent_reset(struct sbp2_lu *lu, int wait) 1485 static int sbp2_agent_reset(struct sbp2_lu *lu, int wait)
1489 { 1486 {
1490 quadlet_t data; 1487 quadlet_t data;
1491 u64 addr; 1488 u64 addr;
1492 int retval; 1489 int retval;
1493 unsigned long flags; 1490 unsigned long flags;
1494 1491
1495 /* flush lu->protocol_work */ 1492 /* flush lu->protocol_work */
1496 if (wait) 1493 if (wait)
1497 flush_scheduled_work(); 1494 flush_scheduled_work();
1498 1495
1499 data = ntohl(SBP2_AGENT_RESET_DATA); 1496 data = ntohl(SBP2_AGENT_RESET_DATA);
1500 addr = lu->command_block_agent_addr + SBP2_AGENT_RESET_OFFSET; 1497 addr = lu->command_block_agent_addr + SBP2_AGENT_RESET_OFFSET;
1501 1498
1502 if (wait) 1499 if (wait)
1503 retval = hpsb_node_write(lu->ne, addr, &data, 4); 1500 retval = hpsb_node_write(lu->ne, addr, &data, 4);
1504 else 1501 else
1505 retval = sbp2util_node_write_no_wait(lu->ne, addr, &data, 4); 1502 retval = sbp2util_node_write_no_wait(lu->ne, addr, &data, 4);
1506 1503
1507 if (retval < 0) { 1504 if (retval < 0) {
1508 SBP2_ERR("hpsb_node_write failed"); 1505 SBP2_ERR("hpsb_node_write failed");
1509 return -EIO; 1506 return -EIO;
1510 } 1507 }
1511 1508
1512 /* make sure that the ORB_POINTER is written on next command */ 1509 /* make sure that the ORB_POINTER is written on next command */
1513 spin_lock_irqsave(&lu->cmd_orb_lock, flags); 1510 spin_lock_irqsave(&lu->cmd_orb_lock, flags);
1514 lu->last_orb = NULL; 1511 lu->last_orb = NULL;
1515 spin_unlock_irqrestore(&lu->cmd_orb_lock, flags); 1512 spin_unlock_irqrestore(&lu->cmd_orb_lock, flags);
1516 1513
1517 return 0; 1514 return 0;
1518 } 1515 }
1519 1516
1520 static int sbp2_prep_command_orb_sg(struct sbp2_command_orb *orb, 1517 static int sbp2_prep_command_orb_sg(struct sbp2_command_orb *orb,
1521 struct sbp2_fwhost_info *hi, 1518 struct sbp2_fwhost_info *hi,
1522 struct sbp2_command_info *cmd, 1519 struct sbp2_command_info *cmd,
1523 unsigned int sg_count, 1520 unsigned int sg_count,
1524 struct scatterlist *sg, 1521 struct scatterlist *sg,
1525 u32 orb_direction, 1522 u32 orb_direction,
1526 enum dma_data_direction dma_dir) 1523 enum dma_data_direction dma_dir)
1527 { 1524 {
1528 struct device *dmadev = hi->host->device.parent; 1525 struct device *dmadev = hi->host->device.parent;
1529 struct sbp2_unrestricted_page_table *pt; 1526 struct sbp2_unrestricted_page_table *pt;
1530 int i, n; 1527 int i, n;
1531 1528
1532 n = dma_map_sg(dmadev, sg, sg_count, dma_dir); 1529 n = dma_map_sg(dmadev, sg, sg_count, dma_dir);
1533 if (n == 0) 1530 if (n == 0)
1534 return -ENOMEM; 1531 return -ENOMEM;
1535 1532
1536 orb->data_descriptor_hi = ORB_SET_NODE_ID(hi->host->node_id); 1533 orb->data_descriptor_hi = ORB_SET_NODE_ID(hi->host->node_id);
1537 orb->misc |= ORB_SET_DIRECTION(orb_direction); 1534 orb->misc |= ORB_SET_DIRECTION(orb_direction);
1538 1535
1539 /* special case if only one element (and less than 64KB in size) */ 1536 /* special case if only one element (and less than 64KB in size) */
1540 if (n == 1) { 1537 if (n == 1) {
1541 orb->misc |= ORB_SET_DATA_SIZE(sg_dma_len(sg)); 1538 orb->misc |= ORB_SET_DATA_SIZE(sg_dma_len(sg));
1542 orb->data_descriptor_lo = sg_dma_address(sg); 1539 orb->data_descriptor_lo = sg_dma_address(sg);
1543 } else { 1540 } else {
1544 pt = &cmd->scatter_gather_element[0]; 1541 pt = &cmd->scatter_gather_element[0];
1545 1542
1546 dma_sync_single_for_cpu(dmadev, cmd->sge_dma, 1543 dma_sync_single_for_cpu(dmadev, cmd->sge_dma,
1547 sizeof(cmd->scatter_gather_element), 1544 sizeof(cmd->scatter_gather_element),
1548 DMA_TO_DEVICE); 1545 DMA_TO_DEVICE);
1549 1546
1550 for_each_sg(sg, sg, n, i) { 1547 for_each_sg(sg, sg, n, i) {
1551 pt[i].high = cpu_to_be32(sg_dma_len(sg) << 16); 1548 pt[i].high = cpu_to_be32(sg_dma_len(sg) << 16);
1552 pt[i].low = cpu_to_be32(sg_dma_address(sg)); 1549 pt[i].low = cpu_to_be32(sg_dma_address(sg));
1553 } 1550 }
1554 1551
1555 orb->misc |= ORB_SET_PAGE_TABLE_PRESENT(0x1) | 1552 orb->misc |= ORB_SET_PAGE_TABLE_PRESENT(0x1) |
1556 ORB_SET_DATA_SIZE(n); 1553 ORB_SET_DATA_SIZE(n);
1557 orb->data_descriptor_lo = cmd->sge_dma; 1554 orb->data_descriptor_lo = cmd->sge_dma;
1558 1555
1559 dma_sync_single_for_device(dmadev, cmd->sge_dma, 1556 dma_sync_single_for_device(dmadev, cmd->sge_dma,
1560 sizeof(cmd->scatter_gather_element), 1557 sizeof(cmd->scatter_gather_element),
1561 DMA_TO_DEVICE); 1558 DMA_TO_DEVICE);
1562 } 1559 }
1563 return 0; 1560 return 0;
1564 } 1561 }
1565 1562
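Each unrestricted page table element above packs a 16-bit segment length into the upper half of the high quadlet and a 32-bit segment address into the low quadlet, both converted to big endian before the target reads them. A worked encoding of a single element with made-up values; htonl() stands in for cpu_to_be32():

    #include <stdint.h>
    #include <stdio.h>
    #include <arpa/inet.h>

    struct pt_entry { uint32_t high, low; };

    int main(void)
    {
        /* Hypothetical segment: 4096 bytes at bus address 0x12340000. */
        uint32_t len = 4096, addr = 0x12340000;
        struct pt_entry e = { htonl(len << 16), htonl(addr) };

        printf("high=0x%08x low=0x%08x\n", ntohl(e.high), ntohl(e.low));
        /* high=0x10000000 low=0x12340000 */
        return 0;
    }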
1566 static int sbp2_create_command_orb(struct sbp2_lu *lu, 1563 static int sbp2_create_command_orb(struct sbp2_lu *lu,
1567 struct sbp2_command_info *cmd, 1564 struct sbp2_command_info *cmd,
1568 struct scsi_cmnd *SCpnt) 1565 struct scsi_cmnd *SCpnt)
1569 { 1566 {
1570 struct device *dmadev = lu->hi->host->device.parent; 1567 struct device *dmadev = lu->hi->host->device.parent;
1571 struct sbp2_command_orb *orb = &cmd->command_orb; 1568 struct sbp2_command_orb *orb = &cmd->command_orb;
1572 unsigned int scsi_request_bufflen = scsi_bufflen(SCpnt); 1569 unsigned int scsi_request_bufflen = scsi_bufflen(SCpnt);
1573 enum dma_data_direction dma_dir = SCpnt->sc_data_direction; 1570 enum dma_data_direction dma_dir = SCpnt->sc_data_direction;
1574 u32 orb_direction; 1571 u32 orb_direction;
1575 int ret; 1572 int ret;
1576 1573
1577 dma_sync_single_for_cpu(dmadev, cmd->command_orb_dma, 1574 dma_sync_single_for_cpu(dmadev, cmd->command_orb_dma,
1578 sizeof(struct sbp2_command_orb), DMA_TO_DEVICE); 1575 sizeof(struct sbp2_command_orb), DMA_TO_DEVICE);
1579 /* 1576 /*
1580 * Set-up our command ORB. 1577 * Set-up our command ORB.
1581 * 1578 *
1582 * NOTE: We're doing unrestricted page tables (s/g), as this is 1579 * NOTE: We're doing unrestricted page tables (s/g), as this is
1583 * best performance (at least with the devices I have). This means 1580 * best performance (at least with the devices I have). This means
1584 * that data_size becomes the number of s/g elements, and 1581 * that data_size becomes the number of s/g elements, and
1585 * page_size should be zero (for unrestricted). 1582 * page_size should be zero (for unrestricted).
1586 */ 1583 */
1587 orb->next_ORB_hi = ORB_SET_NULL_PTR(1); 1584 orb->next_ORB_hi = ORB_SET_NULL_PTR(1);
1588 orb->next_ORB_lo = 0x0; 1585 orb->next_ORB_lo = 0x0;
1589 orb->misc = ORB_SET_MAX_PAYLOAD(lu->max_payload_size); 1586 orb->misc = ORB_SET_MAX_PAYLOAD(lu->max_payload_size);
1590 orb->misc |= ORB_SET_SPEED(lu->speed_code); 1587 orb->misc |= ORB_SET_SPEED(lu->speed_code);
1591 orb->misc |= ORB_SET_NOTIFY(1); 1588 orb->misc |= ORB_SET_NOTIFY(1);
1592 1589
1593 if (dma_dir == DMA_NONE) 1590 if (dma_dir == DMA_NONE)
1594 orb_direction = ORB_DIRECTION_NO_DATA_TRANSFER; 1591 orb_direction = ORB_DIRECTION_NO_DATA_TRANSFER;
1595 else if (dma_dir == DMA_TO_DEVICE && scsi_request_bufflen) 1592 else if (dma_dir == DMA_TO_DEVICE && scsi_request_bufflen)
1596 orb_direction = ORB_DIRECTION_WRITE_TO_MEDIA; 1593 orb_direction = ORB_DIRECTION_WRITE_TO_MEDIA;
1597 else if (dma_dir == DMA_FROM_DEVICE && scsi_request_bufflen) 1594 else if (dma_dir == DMA_FROM_DEVICE && scsi_request_bufflen)
1598 orb_direction = ORB_DIRECTION_READ_FROM_MEDIA; 1595 orb_direction = ORB_DIRECTION_READ_FROM_MEDIA;
1599 else { 1596 else {
1600 SBP2_INFO("Falling back to DMA_NONE"); 1597 SBP2_INFO("Falling back to DMA_NONE");
1601 orb_direction = ORB_DIRECTION_NO_DATA_TRANSFER; 1598 orb_direction = ORB_DIRECTION_NO_DATA_TRANSFER;
1602 } 1599 }
1603 1600
1604 /* set up our page table stuff */ 1601 /* set up our page table stuff */
1605 if (orb_direction == ORB_DIRECTION_NO_DATA_TRANSFER) { 1602 if (orb_direction == ORB_DIRECTION_NO_DATA_TRANSFER) {
1606 orb->data_descriptor_hi = 0x0; 1603 orb->data_descriptor_hi = 0x0;
1607 orb->data_descriptor_lo = 0x0; 1604 orb->data_descriptor_lo = 0x0;
1608 orb->misc |= ORB_SET_DIRECTION(1); 1605 orb->misc |= ORB_SET_DIRECTION(1);
1609 ret = 0; 1606 ret = 0;
1610 } else { 1607 } else {
1611 ret = sbp2_prep_command_orb_sg(orb, lu->hi, cmd, 1608 ret = sbp2_prep_command_orb_sg(orb, lu->hi, cmd,
1612 scsi_sg_count(SCpnt), 1609 scsi_sg_count(SCpnt),
1613 scsi_sglist(SCpnt), 1610 scsi_sglist(SCpnt),
1614 orb_direction, dma_dir); 1611 orb_direction, dma_dir);
1615 } 1612 }
1616 sbp2util_cpu_to_be32_buffer(orb, sizeof(*orb)); 1613 sbp2util_cpu_to_be32_buffer(orb, sizeof(*orb));
1617 1614
1618 memset(orb->cdb, 0, sizeof(orb->cdb)); 1615 memset(orb->cdb, 0, sizeof(orb->cdb));
1619 memcpy(orb->cdb, SCpnt->cmnd, SCpnt->cmd_len); 1616 memcpy(orb->cdb, SCpnt->cmnd, SCpnt->cmd_len);
1620 1617
1621 dma_sync_single_for_device(dmadev, cmd->command_orb_dma, 1618 dma_sync_single_for_device(dmadev, cmd->command_orb_dma,
1622 sizeof(struct sbp2_command_orb), DMA_TO_DEVICE); 1619 sizeof(struct sbp2_command_orb), DMA_TO_DEVICE);
1623 return ret; 1620 return ret;
1624 } 1621 }
1625 1622
1626 static void sbp2_link_orb_command(struct sbp2_lu *lu, 1623 static void sbp2_link_orb_command(struct sbp2_lu *lu,
1627 struct sbp2_command_info *cmd) 1624 struct sbp2_command_info *cmd)
1628 { 1625 {
1629 struct sbp2_fwhost_info *hi = lu->hi; 1626 struct sbp2_fwhost_info *hi = lu->hi;
1630 struct sbp2_command_orb *last_orb; 1627 struct sbp2_command_orb *last_orb;
1631 dma_addr_t last_orb_dma; 1628 dma_addr_t last_orb_dma;
1632 u64 addr = lu->command_block_agent_addr; 1629 u64 addr = lu->command_block_agent_addr;
1633 quadlet_t data[2]; 1630 quadlet_t data[2];
1634 size_t length; 1631 size_t length;
1635 unsigned long flags; 1632 unsigned long flags;
1636 1633
1637 /* check to see if there are any previous orbs to use */ 1634 /* check to see if there are any previous orbs to use */
1638 spin_lock_irqsave(&lu->cmd_orb_lock, flags); 1635 spin_lock_irqsave(&lu->cmd_orb_lock, flags);
1639 last_orb = lu->last_orb; 1636 last_orb = lu->last_orb;
1640 last_orb_dma = lu->last_orb_dma; 1637 last_orb_dma = lu->last_orb_dma;
1641 if (!last_orb) { 1638 if (!last_orb) {
1642 /* 1639 /*
1643 * last_orb == NULL means: We know that the target's fetch agent 1640 * last_orb == NULL means: We know that the target's fetch agent
1644 * is not active right now. 1641 * is not active right now.
1645 */ 1642 */
1646 addr += SBP2_ORB_POINTER_OFFSET; 1643 addr += SBP2_ORB_POINTER_OFFSET;
1647 data[0] = ORB_SET_NODE_ID(hi->host->node_id); 1644 data[0] = ORB_SET_NODE_ID(hi->host->node_id);
1648 data[1] = cmd->command_orb_dma; 1645 data[1] = cmd->command_orb_dma;
1649 sbp2util_cpu_to_be32_buffer(data, 8); 1646 sbp2util_cpu_to_be32_buffer(data, 8);
1650 length = 8; 1647 length = 8;
1651 } else { 1648 } else {
1652 /* 1649 /*
1653 * last_orb != NULL means: We know that the target's fetch agent 1650 * last_orb != NULL means: We know that the target's fetch agent
1654 * is (very probably) not dead or in reset state right now. 1651 * is (very probably) not dead or in reset state right now.
1655 * We have an ORB already sent that we can append a new one to. 1652 * We have an ORB already sent that we can append a new one to.
1656 * The target's fetch agent may or may not have read this 1653 * The target's fetch agent may or may not have read this
1657 * previous ORB yet. 1654 * previous ORB yet.
1658 */ 1655 */
1659 dma_sync_single_for_cpu(hi->host->device.parent, last_orb_dma, 1656 dma_sync_single_for_cpu(hi->host->device.parent, last_orb_dma,
1660 sizeof(struct sbp2_command_orb), 1657 sizeof(struct sbp2_command_orb),
1661 DMA_TO_DEVICE); 1658 DMA_TO_DEVICE);
1662 last_orb->next_ORB_lo = cpu_to_be32(cmd->command_orb_dma); 1659 last_orb->next_ORB_lo = cpu_to_be32(cmd->command_orb_dma);
1663 wmb(); 1660 wmb();
1664 /* Tells hardware that this pointer is valid */ 1661 /* Tells hardware that this pointer is valid */
1665 last_orb->next_ORB_hi = 0; 1662 last_orb->next_ORB_hi = 0;
1666 dma_sync_single_for_device(hi->host->device.parent, 1663 dma_sync_single_for_device(hi->host->device.parent,
1667 last_orb_dma, 1664 last_orb_dma,
1668 sizeof(struct sbp2_command_orb), 1665 sizeof(struct sbp2_command_orb),
1669 DMA_TO_DEVICE); 1666 DMA_TO_DEVICE);
1670 addr += SBP2_DOORBELL_OFFSET; 1667 addr += SBP2_DOORBELL_OFFSET;
1671 data[0] = 0; 1668 data[0] = 0;
1672 length = 4; 1669 length = 4;
1673 } 1670 }
1674 lu->last_orb = &cmd->command_orb; 1671 lu->last_orb = &cmd->command_orb;
1675 lu->last_orb_dma = cmd->command_orb_dma; 1672 lu->last_orb_dma = cmd->command_orb_dma;
1676 spin_unlock_irqrestore(&lu->cmd_orb_lock, flags); 1673 spin_unlock_irqrestore(&lu->cmd_orb_lock, flags);
1677 1674
1678 if (sbp2util_node_write_no_wait(lu->ne, addr, data, length)) { 1675 if (sbp2util_node_write_no_wait(lu->ne, addr, data, length)) {
1679 /* 1676 /*
1680 * sbp2util_node_write_no_wait failed. We certainly ran out 1677 * sbp2util_node_write_no_wait failed. We certainly ran out
1681 * of transaction labels, perhaps just because there were no 1678 * of transaction labels, perhaps just because there were no
1682 * context switches which gave khpsbpkt a chance to collect 1679 * context switches which gave khpsbpkt a chance to collect
1683 * free tlabels. Try again in non-atomic context. If necessary, 1680 * free tlabels. Try again in non-atomic context. If necessary,
1684 * the workqueue job will sleep until it is guaranteed to get a tlabel. 1681 * the workqueue job will sleep until it is guaranteed to get a tlabel.
1685 * We do not accept new commands until the job is over. 1682 * We do not accept new commands until the job is over.
1686 */ 1683 */
1687 scsi_block_requests(lu->shost); 1684 scsi_block_requests(lu->shost);
1688 PREPARE_WORK(&lu->protocol_work, 1685 PREPARE_WORK(&lu->protocol_work,
1689 last_orb ? sbp2util_write_doorbell : 1686 last_orb ? sbp2util_write_doorbell :
1690 sbp2util_write_orb_pointer); 1687 sbp2util_write_orb_pointer);
1691 schedule_work(&lu->protocol_work); 1688 schedule_work(&lu->protocol_work);
1692 } 1689 }
1693 } 1690 }
1694 1691
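The branch above is the core of SBP-2 command submission: with no ORB in flight, the new ORB's address goes to the ORB_POINTER register, which (re)starts the fetch agent; otherwise it is appended to the previous ORB's next_ORB field, the pointer is published behind a write barrier, and a 4-byte DOORBELL write tells a possibly still-running agent to re-read the list. A condensed sketch of the decision, with stubbed register writes:

    #include <stdint.h>
    #include <stdio.h>

    /* Stubs standing in for the async 1394 writes; illustrative only. */
    static void write_orb_pointer(uint32_t orb) { printf("ORB_POINTER <- 0x%x\n", orb); }
    static void ring_doorbell(void)             { printf("DOORBELL <- 0\n"); }

    static uint32_t last_orb;   /* 0 means: fetch agent known idle */

    static void submit(uint32_t new_orb, uint32_t *prev_next_field)
    {
        if (!last_orb) {
            write_orb_pointer(new_orb);   /* (re)start the fetch agent */
        } else {
            *prev_next_field = new_orb;   /* append; wmb() goes here */
            ring_doorbell();              /* nudge a running agent */
        }
        last_orb = new_orb;
    }

    int main(void)
    {
        uint32_t next = 0;
        submit(0x1000, &next);   /* agent idle: ORB_POINTER write */
        submit(0x2000, &next);   /* agent busy: append + doorbell */
        return 0;
    }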
1695 static int sbp2_send_command(struct sbp2_lu *lu, struct scsi_cmnd *SCpnt, 1692 static int sbp2_send_command(struct sbp2_lu *lu, struct scsi_cmnd *SCpnt,
1696 void (*done)(struct scsi_cmnd *)) 1693 void (*done)(struct scsi_cmnd *))
1697 { 1694 {
1698 struct sbp2_command_info *cmd; 1695 struct sbp2_command_info *cmd;
1699 1696
1700 cmd = sbp2util_allocate_command_orb(lu, SCpnt, done); 1697 cmd = sbp2util_allocate_command_orb(lu, SCpnt, done);
1701 if (!cmd) 1698 if (!cmd)
1702 return -EIO; 1699 return -EIO;
1703 1700
1704 if (sbp2_create_command_orb(lu, cmd, SCpnt)) 1701 if (sbp2_create_command_orb(lu, cmd, SCpnt))
1705 return -ENOMEM; 1702 return -ENOMEM;
1706 1703
1707 sbp2_link_orb_command(lu, cmd); 1704 sbp2_link_orb_command(lu, cmd);
1708 return 0; 1705 return 0;
1709 } 1706 }
1710 1707
1711 /* 1708 /*
1712 * Translates SBP-2 status into SCSI sense data for check conditions 1709 * Translates SBP-2 status into SCSI sense data for check conditions
1713 */ 1710 */
1714 static unsigned int sbp2_status_to_sense_data(unchar *sbp2_status, 1711 static unsigned int sbp2_status_to_sense_data(unchar *sbp2_status,
1715 unchar *sense_data) 1712 unchar *sense_data)
1716 { 1713 {
1717 /* OK, it's pretty ugly... ;-) */ 1714 /* OK, it's pretty ugly... ;-) */
1718 sense_data[0] = 0x70; 1715 sense_data[0] = 0x70;
1719 sense_data[1] = 0x0; 1716 sense_data[1] = 0x0;
1720 sense_data[2] = sbp2_status[9]; 1717 sense_data[2] = sbp2_status[9];
1721 sense_data[3] = sbp2_status[12]; 1718 sense_data[3] = sbp2_status[12];
1722 sense_data[4] = sbp2_status[13]; 1719 sense_data[4] = sbp2_status[13];
1723 sense_data[5] = sbp2_status[14]; 1720 sense_data[5] = sbp2_status[14];
1724 sense_data[6] = sbp2_status[15]; 1721 sense_data[6] = sbp2_status[15];
1725 sense_data[7] = 10; 1722 sense_data[7] = 10;
1726 sense_data[8] = sbp2_status[16]; 1723 sense_data[8] = sbp2_status[16];
1727 sense_data[9] = sbp2_status[17]; 1724 sense_data[9] = sbp2_status[17];
1728 sense_data[10] = sbp2_status[18]; 1725 sense_data[10] = sbp2_status[18];
1729 sense_data[11] = sbp2_status[19]; 1726 sense_data[11] = sbp2_status[19];
1730 sense_data[12] = sbp2_status[10]; 1727 sense_data[12] = sbp2_status[10];
1731 sense_data[13] = sbp2_status[11]; 1728 sense_data[13] = sbp2_status[11];
1732 sense_data[14] = sbp2_status[20]; 1729 sense_data[14] = sbp2_status[20];
1733 sense_data[15] = sbp2_status[21]; 1730 sense_data[15] = sbp2_status[21];
1734 1731
1735 return sbp2_status[8] & 0x3f; 1732 return sbp2_status[8] & 0x3f;
1736 } 1733 }
1737 1734
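The buffer built above is standard SPC fixed-format sense data: byte 0 = 0x70 (current error), the low nibble of byte 2 = sense key, byte 7 = 10 additional bytes, and bytes 12/13 = ASC/ASCQ. A tiny decoder for such a buffer, with a made-up ILLEGAL REQUEST example:

    #include <stdio.h>

    typedef unsigned char unchar;

    /* Decode a fixed-format sense buffer (SPC layout). */
    static void print_sense(const unchar *sense)
    {
        printf("sense key 0x%x, ASC 0x%02x, ASCQ 0x%02x\n",
               sense[2] & 0x0f, sense[12], sense[13]);
    }

    int main(void)
    {
        unchar sense[18] = { 0x70, 0, 0x05, 0, 0, 0, 0, 10,
                             0, 0, 0, 0, 0x24, 0x00 };

        print_sense(sense);   /* ILLEGAL REQUEST, INVALID FIELD IN CDB */
        return 0;
    }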
static int sbp2_handle_status_write(struct hpsb_host *host, int nodeid,
                                    int destid, quadlet_t *data, u64 addr,
                                    size_t length, u16 fl)
{
        struct sbp2_fwhost_info *hi;
        struct sbp2_lu *lu = NULL, *lu_tmp;
        struct scsi_cmnd *SCpnt = NULL;
        struct sbp2_status_block *sb;
        u32 scsi_status = SBP2_SCSI_STATUS_GOOD;
        struct sbp2_command_info *cmd;
        unsigned long flags;

        if (unlikely(length < 8 || length > sizeof(struct sbp2_status_block))) {
                SBP2_ERR("Wrong size of status block");
                return RCODE_ADDRESS_ERROR;
        }
        if (unlikely(!host)) {
                SBP2_ERR("host is NULL - this is bad!");
                return RCODE_ADDRESS_ERROR;
        }
        hi = hpsb_get_hostinfo(&sbp2_highlevel, host);
        if (unlikely(!hi)) {
                SBP2_ERR("host info is NULL - this is bad!");
                return RCODE_ADDRESS_ERROR;
        }

        /* Find the unit which wrote the status. */
        read_lock_irqsave(&sbp2_hi_logical_units_lock, flags);
        list_for_each_entry(lu_tmp, &hi->logical_units, lu_list) {
                if (lu_tmp->ne->nodeid == nodeid &&
                    lu_tmp->status_fifo_addr == addr) {
                        lu = lu_tmp;
                        break;
                }
        }
        read_unlock_irqrestore(&sbp2_hi_logical_units_lock, flags);

        if (unlikely(!lu)) {
                SBP2_ERR("lu is NULL - device is gone?");
                return RCODE_ADDRESS_ERROR;
        }

        /* Put response into lu status fifo buffer. The first two bytes
         * come in big endian bit order. Often the target writes only a
         * truncated status block, minimally the first two quadlets. The rest
         * is implied to be zeros. */
        sb = &lu->status_block;
        memset(sb->command_set_dependent, 0, sizeof(sb->command_set_dependent));
        memcpy(sb, data, length);
        sbp2util_be32_to_cpu_buffer(sb, 8);

        /* Ignore unsolicited status. Handle command ORB status. */
        if (unlikely(STATUS_GET_SRC(sb->ORB_offset_hi_misc) == 2))
                cmd = NULL;
        else
                cmd = sbp2util_find_command_for_orb(lu, sb->ORB_offset_lo);
        if (cmd) {
                /* Grab SCSI command pointers and check status. */
                /*
                 * FIXME: If the src field in the status is 1, the ORB DMA must
                 * not be reused until status for a subsequent ORB is received.
                 */
                SCpnt = cmd->Current_SCpnt;
                spin_lock_irqsave(&lu->cmd_orb_lock, flags);
                sbp2util_mark_command_completed(lu, cmd);
                spin_unlock_irqrestore(&lu->cmd_orb_lock, flags);

                if (SCpnt) {
                        u32 h = sb->ORB_offset_hi_misc;
                        u32 r = STATUS_GET_RESP(h);

                        if (r != RESP_STATUS_REQUEST_COMPLETE) {
                                SBP2_INFO("resp 0x%x, sbp_status 0x%x",
                                          r, STATUS_GET_SBP_STATUS(h));
                                scsi_status =
                                        r == RESP_STATUS_TRANSPORT_FAILURE ?
                                        SBP2_SCSI_STATUS_BUSY :
                                        SBP2_SCSI_STATUS_COMMAND_TERMINATED;
                        }

                        if (STATUS_GET_LEN(h) > 1)
                                scsi_status = sbp2_status_to_sense_data(
                                        (unchar *)sb, SCpnt->sense_buffer);

                        if (STATUS_TEST_DEAD(h))
                                sbp2_agent_reset(lu, 0);
                }

                /* Check here to see if there are no commands in-use. If there
                 * are none, we know that the fetch agent left the active state
                 * _and_ that we did not reactivate it yet. Therefore clear
                 * last_orb so that next time we write directly to the
                 * ORB_POINTER register. That way the fetch agent does not need
                 * to refetch the next_ORB. */
                spin_lock_irqsave(&lu->cmd_orb_lock, flags);
                if (list_empty(&lu->cmd_orb_inuse))
                        lu->last_orb = NULL;
                spin_unlock_irqrestore(&lu->cmd_orb_lock, flags);

        } else {
                /* It's probably status after a management request. */
                if ((sb->ORB_offset_lo == lu->reconnect_orb_dma) ||
                    (sb->ORB_offset_lo == lu->login_orb_dma) ||
                    (sb->ORB_offset_lo == lu->query_logins_orb_dma) ||
                    (sb->ORB_offset_lo == lu->logout_orb_dma)) {
                        lu->access_complete = 1;
                        wake_up_interruptible(&sbp2_access_wq);
                }
        }

        if (SCpnt)
                sbp2scsi_complete_command(lu, scsi_status, SCpnt,
                                          cmd->Current_done);
        return RCODE_COMPLETE;
}

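/*
 * To summarize the handler above: a status write that matches an
 * in-use command ORB completes the corresponding SCSI command; one
 * that matches a management ORB (login, logout, reconnect, query
 * logins) wakes whoever sleeps on sbp2_access_wq; unsolicited status
 * (src == 2) is ignored.
 */
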
/**************************************
 * SCSI interface related section
 **************************************/

static int sbp2scsi_queuecommand(struct scsi_cmnd *SCpnt,
                                 void (*done)(struct scsi_cmnd *))
{
        struct sbp2_lu *lu = (struct sbp2_lu *)SCpnt->device->host->hostdata[0];
        struct sbp2_fwhost_info *hi;
        int result = DID_NO_CONNECT << 16;

        if (unlikely(!sbp2util_node_is_available(lu)))
                goto done;

        hi = lu->hi;

        if (unlikely(!hi)) {
                SBP2_ERR("sbp2_fwhost_info is NULL - this is bad!");
                goto done;
        }

        /* Multiple units are currently represented to the SCSI core as separate
         * targets, not as one target with multiple LUs. Therefore return
         * selection time-out to any IO directed at non-zero LUNs. */
        if (unlikely(SCpnt->device->lun))
                goto done;

        if (unlikely(!hpsb_node_entry_valid(lu->ne))) {
                SBP2_ERR("Bus reset in progress - rejecting command");
                result = DID_BUS_BUSY << 16;
                goto done;
        }

        /* Bidirectional commands are not yet implemented,
         * and unknown transfer direction not handled. */
        if (unlikely(SCpnt->sc_data_direction == DMA_BIDIRECTIONAL)) {
                SBP2_ERR("Cannot handle DMA_BIDIRECTIONAL - rejecting command");
                result = DID_ERROR << 16;
                goto done;
        }

        if (sbp2_send_command(lu, SCpnt, done)) {
                SBP2_ERR("Error sending SCSI command");
                sbp2scsi_complete_command(lu,
                                          SBP2_SCSI_STATUS_SELECTION_TIMEOUT,
                                          SCpnt, done);
        }
        return 0;

done:
        SCpnt->result = result;
        done(SCpnt);
        return 0;
}

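/*
 * Note that queuecommand always returns 0; failures are reported
 * asynchronously through done(), with the host byte of SCpnt->result
 * (bits 16..23, hence the << 16) set to DID_NO_CONNECT, DID_BUS_BUSY
 * or DID_ERROR.
 */
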
static void sbp2scsi_complete_all_commands(struct sbp2_lu *lu, u32 status)
{
        struct list_head *lh;
        struct sbp2_command_info *cmd;
        unsigned long flags;

        spin_lock_irqsave(&lu->cmd_orb_lock, flags);
        while (!list_empty(&lu->cmd_orb_inuse)) {
                lh = lu->cmd_orb_inuse.next;
                cmd = list_entry(lh, struct sbp2_command_info, list);
                sbp2util_mark_command_completed(lu, cmd);
                if (cmd->Current_SCpnt) {
                        cmd->Current_SCpnt->result = status << 16;
                        cmd->Current_done(cmd->Current_SCpnt);
                }
        }
        spin_unlock_irqrestore(&lu->cmd_orb_lock, flags);

        return;
}

/*
 * Complete a regular SCSI command. Can be called in atomic context.
 */
static void sbp2scsi_complete_command(struct sbp2_lu *lu, u32 scsi_status,
                                      struct scsi_cmnd *SCpnt,
                                      void (*done)(struct scsi_cmnd *))
{
        if (!SCpnt) {
                SBP2_ERR("SCpnt is NULL");
                return;
        }

        switch (scsi_status) {
        case SBP2_SCSI_STATUS_GOOD:
                SCpnt->result = DID_OK << 16;
                break;

        case SBP2_SCSI_STATUS_BUSY:
                SBP2_ERR("SBP2_SCSI_STATUS_BUSY");
                SCpnt->result = DID_BUS_BUSY << 16;
                break;

        case SBP2_SCSI_STATUS_CHECK_CONDITION:
                SCpnt->result = CHECK_CONDITION << 1 | DID_OK << 16;
                break;

        case SBP2_SCSI_STATUS_SELECTION_TIMEOUT:
                SBP2_ERR("SBP2_SCSI_STATUS_SELECTION_TIMEOUT");
                SCpnt->result = DID_NO_CONNECT << 16;
                scsi_print_command(SCpnt);
                break;

        case SBP2_SCSI_STATUS_CONDITION_MET:
        case SBP2_SCSI_STATUS_RESERVATION_CONFLICT:
        case SBP2_SCSI_STATUS_COMMAND_TERMINATED:
                SBP2_ERR("Bad SCSI status = %x", scsi_status);
                SCpnt->result = DID_ERROR << 16;
                scsi_print_command(SCpnt);
                break;

        default:
                SBP2_ERR("Unsupported SCSI status = %x", scsi_status);
                SCpnt->result = DID_ERROR << 16;
        }

        /* If a bus reset is in progress and there was an error, complete
         * the command as busy so that it will get retried. */
        if (!hpsb_node_entry_valid(lu->ne)
            && (scsi_status != SBP2_SCSI_STATUS_GOOD)) {
                SBP2_ERR("Completing command with busy (bus reset)");
                SCpnt->result = DID_BUS_BUSY << 16;
        }

        /* Tell the SCSI stack that we're done with this command. */
        done(SCpnt);
}

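/*
 * CHECK_CONDITION << 1 above is intentional: the status-byte constants
 * in <scsi/scsi.h> of this vintage are stored as the SAM status values
 * shifted right by one, so shifting left again yields the real SCSI
 * status 0x02 in the low byte of the result.
 */
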
static int sbp2scsi_slave_alloc(struct scsi_device *sdev)
{
        struct sbp2_lu *lu = (struct sbp2_lu *)sdev->host->hostdata[0];

        if (sdev->lun != 0 || sdev->id != lu->ud->id || sdev->channel != 0)
                return -ENODEV;

        lu->sdev = sdev;
        sdev->allow_restart = 1;

        /* SBP-2 requires quadlet alignment of the data buffers. */
        blk_queue_update_dma_alignment(sdev->request_queue, 4 - 1);

        if (lu->workarounds & SBP2_WORKAROUND_INQUIRY_36)
                sdev->inquiry_len = 36;
        return 0;
}

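/*
 * blk_queue_update_dma_alignment() takes an alignment mask, hence the
 * 4 - 1 (= 0x3): the block layer then keeps data buffers that are not
 * at least quadlet (4-byte) aligned from being mapped directly.
 */
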
static int sbp2scsi_slave_configure(struct scsi_device *sdev)
{
        struct sbp2_lu *lu = (struct sbp2_lu *)sdev->host->hostdata[0];

        sdev->use_10_for_rw = 1;

        if (sbp2_exclusive_login)
                sdev->manage_start_stop = 1;
        if (sdev->type == TYPE_ROM)
                sdev->use_10_for_ms = 1;
        if (sdev->type == TYPE_DISK &&
            lu->workarounds & SBP2_WORKAROUND_MODE_SENSE_8)
                sdev->skip_ms_page_8 = 1;
        if (lu->workarounds & SBP2_WORKAROUND_FIX_CAPACITY)
                sdev->fix_capacity = 1;
        if (lu->workarounds & SBP2_WORKAROUND_POWER_CONDITION)
                sdev->start_stop_pwr_cond = 1;
        if (lu->workarounds & SBP2_WORKAROUND_128K_MAX_TRANS)
                blk_queue_max_sectors(sdev->request_queue, 128 * 1024 / 512);

        blk_queue_max_segment_size(sdev->request_queue, SBP2_MAX_SEG_SIZE);
        return 0;
}

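/*
 * 128 * 1024 / 512 = 256, i.e. the 128 kB transfer-size workaround
 * caps a request at 256 sectors of 512 bytes each.
 */
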
static void sbp2scsi_slave_destroy(struct scsi_device *sdev)
{
        ((struct sbp2_lu *)sdev->host->hostdata[0])->sdev = NULL;
        return;
}

/*
 * Called by scsi stack when something has really gone wrong.
 * Usually called when a command has timed-out for some reason.
 */
static int sbp2scsi_abort(struct scsi_cmnd *SCpnt)
{
        struct sbp2_lu *lu = (struct sbp2_lu *)SCpnt->device->host->hostdata[0];
        struct sbp2_command_info *cmd;
        unsigned long flags;

        SBP2_INFO("aborting sbp2 command");
        scsi_print_command(SCpnt);

        if (sbp2util_node_is_available(lu)) {
                sbp2_agent_reset(lu, 1);

                /* Return a matching command structure to the free pool. */
                spin_lock_irqsave(&lu->cmd_orb_lock, flags);
                cmd = sbp2util_find_command_for_SCpnt(lu, SCpnt);
                if (cmd) {
                        sbp2util_mark_command_completed(lu, cmd);
                        if (cmd->Current_SCpnt) {
                                cmd->Current_SCpnt->result = DID_ABORT << 16;
                                cmd->Current_done(cmd->Current_SCpnt);
                        }
                }
                spin_unlock_irqrestore(&lu->cmd_orb_lock, flags);

                sbp2scsi_complete_all_commands(lu, DID_BUS_BUSY);
        }

        return SUCCESS;
}

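/*
 * The abort handler above does more than abort one command: after
 * resetting the fetch agent it flushes all remaining in-flight
 * commands with DID_BUS_BUSY, and it always reports SUCCESS back to
 * the SCSI error handler.
 */
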
/*
 * Called by scsi stack when something has really gone wrong.
 */
static int sbp2scsi_reset(struct scsi_cmnd *SCpnt)
{
        struct sbp2_lu *lu = (struct sbp2_lu *)SCpnt->device->host->hostdata[0];

        SBP2_INFO("reset requested");

        if (sbp2util_node_is_available(lu)) {
                SBP2_INFO("generating sbp2 fetch agent reset");
                sbp2_agent_reset(lu, 1);
        }

        return SUCCESS;
}

static ssize_t sbp2_sysfs_ieee1394_id_show(struct device *dev,
                                           struct device_attribute *attr,
                                           char *buf)
{
        struct scsi_device *sdev;
        struct sbp2_lu *lu;

        if (!(sdev = to_scsi_device(dev)))
                return 0;

        if (!(lu = (struct sbp2_lu *)sdev->host->hostdata[0]))
                return 0;

        if (sbp2_long_sysfs_ieee1394_id)
                return sprintf(buf, "%016Lx:%06x:%04x\n",
                               (unsigned long long)lu->ne->guid,
                               lu->ud->directory_id, ORB_SET_LUN(lu->lun));
        else
                return sprintf(buf, "%016Lx:%d:%d\n",
                               (unsigned long long)lu->ne->guid,
                               lu->ud->id, ORB_SET_LUN(lu->lun));
}

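/*
 * Example output, with a made-up GUID for illustration: the default
 * form prints "0001d2029abcdef0:0:0" (GUID, unit ID and LUN, the
 * latter two in decimal), while the long form prints
 * "0001d2029abcdef0:000000:0000" with the unit directory ID and LUN
 * as zero-padded hex.
 */
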
MODULE_AUTHOR("Ben Collins <bcollins@debian.org>");
MODULE_DESCRIPTION("IEEE-1394 SBP-2 protocol driver");
MODULE_SUPPORTED_DEVICE(SBP2_DEVICE_NAME);
MODULE_LICENSE("GPL");

static int sbp2_module_init(void)
{
        int ret;

        if (sbp2_serialize_io) {
                sbp2_shost_template.can_queue = 1;
                sbp2_shost_template.cmd_per_lun = 1;
        }

        sbp2_shost_template.max_sectors = sbp2_max_sectors;

        hpsb_register_highlevel(&sbp2_highlevel);
        ret = hpsb_register_protocol(&sbp2_driver);
        if (ret) {
                SBP2_ERR("Failed to register protocol");
                hpsb_unregister_highlevel(&sbp2_highlevel);
                return ret;
        }
        return 0;
}

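/*
 * With the sbp2_serialize_io module parameter set, can_queue = 1 and
 * cmd_per_lun = 1 make the SCSI midlayer issue at most one command at
 * a time to the host, i.e. all I/O is serialized.
 */
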
static void __exit sbp2_module_exit(void)
{
        hpsb_unregister_protocol(&sbp2_driver);
        hpsb_unregister_highlevel(&sbp2_highlevel);
}
