Commit eca1ad82bda0293339e1f8439dc9c8dba25ff088

Authored by Al Viro
Committed by Jeff Garzik
1 parent 05bd831fcd

misc drivers/net annotations

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Jeff Garzik <jeff@garzik.org>

Showing 6 changed files with 20 additions and 19 deletions

drivers/net/8139too.c
/*

	8139too.c: A RealTek RTL-8139 Fast Ethernet driver for Linux.

	Maintained by Jeff Garzik <jgarzik@pobox.com>
	Copyright 2000-2002 Jeff Garzik

	Much code comes from Donald Becker's rtl8139.c driver,
	versions 1.13 and older.  This driver was originally based
	on rtl8139.c version 1.07.  Header of rtl8139.c version 1.13:

	-----<snip>-----

		Written 1997-2001 by Donald Becker.
		This software may be used and distributed according to the
		terms of the GNU General Public License (GPL), incorporated
		herein by reference.  Drivers based on or derived from this
		code fall under the GPL and must retain the authorship,
		copyright and license notice.  This file is not a complete
		program and may only be used when the entire operating
		system is licensed under the GPL.

		This driver is for boards based on the RTL8129 and RTL8139
		PCI ethernet chips.

		The author may be reached as becker@scyld.com, or C/O Scyld
		Computing Corporation 410 Severn Ave., Suite 210 Annapolis
		MD 21403

		Support and updates available at
		http://www.scyld.com/network/rtl8139.html

		Twister-tuning table provided by Kinston
		<shangh@realtek.com.tw>.

	-----<snip>-----

	This software may be used and distributed according to the terms
	of the GNU General Public License, incorporated herein by reference.

	Contributors:

		Donald Becker - he wrote the original driver, kudos to him!
		(but please don't e-mail him for support, this isn't his driver)

		Tigran Aivazian - bug fixes, skbuff free cleanup

		Martin Mares - suggestions for PCI cleanup

		David S. Miller - PCI DMA and softnet updates

		Ernst Gill - fixes ported from BSD driver

		Daniel Kobras - identified specific locations of
			posted MMIO write bugginess

		Gerard Sharp - bug fix, testing and feedback

		David Ford - Rx ring wrap fix

		Dan DeMaggio - swapped RTL8139 cards with me, and allowed me
		to find and fix a crucial bug on older chipsets.

		Donald Becker/Chris Butterworth/Marcus Westergren -
		Noticed various Rx packet size-related buglets.

		Santiago Garcia Mantinan - testing and feedback

		Jens David - 2.2.x kernel backports

		Martin Dennett - incredibly helpful insight on undocumented
		features of the 8139 chips

		Jean-Jacques Michel - bug fix

		Tobias Ringström - Rx interrupt status checking suggestion

		Andrew Morton - Clear blocked signals, avoid
		buffer overrun setting current->comm.

		Kalle Olavi Niemitalo - Wake-on-LAN ioctls

		Robert Kuebel - Save kernel thread from dying on any signal.

	Submitting bug reports:

		"rtl8139-diag -mmmaaavvveefN" output
		enable RTL8139_DEBUG below, and look at 'dmesg' or kernel log

*/

#define DRV_NAME	"8139too"
#define DRV_VERSION	"0.9.28"


#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/compiler.h>
#include <linux/pci.h>
#include <linux/init.h>
#include <linux/ioport.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/rtnetlink.h>
#include <linux/delay.h>
#include <linux/ethtool.h>
#include <linux/mii.h>
#include <linux/completion.h>
#include <linux/crc32.h>
#include <asm/io.h>
#include <asm/uaccess.h>
#include <asm/irq.h>

#define RTL8139_DRIVER_NAME	DRV_NAME " Fast Ethernet driver " DRV_VERSION
#define PFX			DRV_NAME ": "

/* Default Message level */
#define RTL8139_DEF_MSG_ENABLE	(NETIF_MSG_DRV   | \
				 NETIF_MSG_PROBE | \
				 NETIF_MSG_LINK)


/* enable PIO instead of MMIO, if CONFIG_8139TOO_PIO is selected */
#ifdef CONFIG_8139TOO_PIO
#define USE_IO_OPS 1
#endif

/* define to 1, 2 or 3 to enable copious debugging info */
#define RTL8139_DEBUG 0

/* define to 1 to disable lightweight runtime debugging checks */
#undef RTL8139_NDEBUG


#if RTL8139_DEBUG
/* note: prints function name for you */
#  define DPRINTK(fmt, args...) printk(KERN_DEBUG "%s: " fmt, __FUNCTION__ , ## args)
#else
#  define DPRINTK(fmt, args...)
#endif

#ifdef RTL8139_NDEBUG
#  define assert(expr) do {} while (0)
#else
#  define assert(expr) \
	if(unlikely(!(expr))) {					\
		printk(KERN_ERR "Assertion failed! %s,%s,%s,line=%d\n",	\
		#expr, __FILE__, __FUNCTION__, __LINE__);	\
	}
#endif


/* A few user-configurable values. */
/* media options */
#define MAX_UNITS 8
static int media[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};
static int full_duplex[MAX_UNITS] = {-1, -1, -1, -1, -1, -1, -1, -1};

/* Maximum number of multicast addresses to filter (vs. Rx-all-multicast).
   The RTL chips use a 64 element hash table based on the Ethernet CRC. */
static int multicast_filter_limit = 32;

/* bitmapped message enable number */
static int debug = -1;

/*
 * Receive ring size
 * Warning: 64K ring has hardware issues and may lock up.
 */
#if defined(CONFIG_SH_DREAMCAST)
#define RX_BUF_IDX	0	/* 8K ring */
#else
#define RX_BUF_IDX	2	/* 32K ring */
#endif
#define RX_BUF_LEN	(8192 << RX_BUF_IDX)
#define RX_BUF_PAD	16
#define RX_BUF_WRAP_PAD 2048	/* spare padding to handle lack of packet wrap */

#if RX_BUF_LEN == 65536
#define RX_BUF_TOT_LEN	RX_BUF_LEN
#else
#define RX_BUF_TOT_LEN	(RX_BUF_LEN + RX_BUF_PAD + RX_BUF_WRAP_PAD)
#endif

/* Number of Tx descriptor registers. */
#define NUM_TX_DESC	4

/* max supported ethernet frame size -- must be at least (dev->mtu+14+4).*/
#define MAX_ETH_FRAME_SIZE	1536

/* Size of the Tx bounce buffers -- must be at least (dev->mtu+14+4). */
#define TX_BUF_SIZE	MAX_ETH_FRAME_SIZE
#define TX_BUF_TOT_LEN	(TX_BUF_SIZE * NUM_TX_DESC)

/* PCI Tuning Parameters
   Threshold is bytes transferred to chip before transmission starts. */
#define TX_FIFO_THRESH	256	/* In bytes, rounded down to 32 byte units. */

/* The following settings are log_2(bytes)-4:  0 == 16 bytes .. 6==1024, 7==end of packet. */
#define RX_FIFO_THRESH	7	/* Rx buffer level before first PCI xfer. */
#define RX_DMA_BURST	7	/* Maximum PCI burst, '6' is 1024 */
#define TX_DMA_BURST	6	/* Maximum PCI burst, '6' is 1024 */
#define TX_RETRY	8	/* 0-15.  retries = 16 + (TX_RETRY * 16) */

/* Operational parameters that usually are not changed. */
/* Time in jiffies before concluding the transmitter is hung. */
#define TX_TIMEOUT	(6*HZ)


enum {
	HAS_MII_XCVR = 0x010000,
	HAS_CHIP_XCVR = 0x020000,
	HAS_LNK_CHNG = 0x040000,
};

#define RTL_NUM_STATS 4		/* number of ETHTOOL_GSTATS u64's */
#define RTL_REGS_VER 1		/* version of reg. data in ETHTOOL_GREGS */
#define RTL_MIN_IO_SIZE 0x80
#define RTL8139B_IO_SIZE 256

#define RTL8129_CAPS	HAS_MII_XCVR
#define RTL8139_CAPS	HAS_CHIP_XCVR|HAS_LNK_CHNG

typedef enum {
	RTL8139 = 0,
	RTL8129,
} board_t;


/* indexed by board_t, above */
static const struct {
	const char *name;
	u32 hw_flags;
} board_info[] __devinitdata = {
	{ "RealTek RTL8139", RTL8139_CAPS },
	{ "RealTek RTL8129", RTL8129_CAPS },
};


static struct pci_device_id rtl8139_pci_tbl[] = {
	{0x10ec, 0x8139, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8139 },
	{0x10ec, 0x8138, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8139 },
	{0x1113, 0x1211, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8139 },
	{0x1500, 0x1360, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8139 },
	{0x4033, 0x1360, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8139 },
	{0x1186, 0x1300, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8139 },
	{0x1186, 0x1340, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8139 },
	{0x13d1, 0xab06, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8139 },
	{0x1259, 0xa117, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8139 },
	{0x1259, 0xa11e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8139 },
	{0x14ea, 0xab06, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8139 },
	{0x14ea, 0xab07, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8139 },
	{0x11db, 0x1234, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8139 },
	{0x1432, 0x9130, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8139 },
	{0x02ac, 0x1012, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8139 },
	{0x018a, 0x0106, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8139 },
	{0x126c, 0x1211, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8139 },
	{0x1743, 0x8139, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8139 },
	{0x021b, 0x8139, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8139 },

#ifdef CONFIG_SH_SECUREEDGE5410
	/* Bogus 8139 silicon reports 8129 without external PROM :-( */
	{0x10ec, 0x8129, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8139 },
#endif
#ifdef CONFIG_8139TOO_8129
	{0x10ec, 0x8129, PCI_ANY_ID, PCI_ANY_ID, 0, 0, RTL8129 },
#endif

	/* some crazy cards report invalid vendor ids like
	 * 0x0001 here.  The other ids are valid and constant,
	 * so we simply don't match on the main vendor id.
	 */
	{PCI_ANY_ID, 0x8139, 0x10ec, 0x8139, 0, 0, RTL8139 },
	{PCI_ANY_ID, 0x8139, 0x1186, 0x1300, 0, 0, RTL8139 },
	{PCI_ANY_ID, 0x8139, 0x13d1, 0xab06, 0, 0, RTL8139 },

	{0,}
};
MODULE_DEVICE_TABLE (pci, rtl8139_pci_tbl);

static struct {
	const char str[ETH_GSTRING_LEN];
} ethtool_stats_keys[] = {
	{ "early_rx" },
	{ "tx_buf_mapped" },
	{ "tx_timeouts" },
	{ "rx_lost_in_ring" },
};

/* The rest of these values should never change. */

/* Symbolic offsets to registers. */
enum RTL8139_registers {
	MAC0 = 0,		/* Ethernet hardware address. */
	MAR0 = 8,		/* Multicast filter. */
	TxStatus0 = 0x10,	/* Transmit status (Four 32bit registers). */
	TxAddr0 = 0x20,		/* Tx descriptors (also four 32bit). */
	RxBuf = 0x30,
	ChipCmd = 0x37,
	RxBufPtr = 0x38,
	RxBufAddr = 0x3A,
	IntrMask = 0x3C,
	IntrStatus = 0x3E,
	TxConfig = 0x40,
	RxConfig = 0x44,
	Timer = 0x48,		/* A general-purpose counter. */
	RxMissed = 0x4C,	/* 24 bits valid, write clears. */
	Cfg9346 = 0x50,
	Config0 = 0x51,
	Config1 = 0x52,
	FlashReg = 0x54,
	MediaStatus = 0x58,
	Config3 = 0x59,
	Config4 = 0x5A,		/* absent on RTL-8139A */
	HltClk = 0x5B,
	MultiIntr = 0x5C,
	TxSummary = 0x60,
	BasicModeCtrl = 0x62,
	BasicModeStatus = 0x64,
	NWayAdvert = 0x66,
	NWayLPAR = 0x68,
	NWayExpansion = 0x6A,
	/* Undocumented registers, but required for proper operation. */
	FIFOTMS = 0x70,		/* FIFO Control and test. */
	CSCR = 0x74,		/* Chip Status and Configuration Register. */
	PARA78 = 0x78,
	PARA7c = 0x7c,		/* Magic transceiver parameter register. */
	Config5 = 0xD8,		/* absent on RTL-8139A */
};

enum ClearBitMasks {
	MultiIntrClear = 0xF000,
	ChipCmdClear = 0xE2,
	Config1Clear = (1<<7)|(1<<6)|(1<<3)|(1<<2)|(1<<1),
};

enum ChipCmdBits {
	CmdReset = 0x10,
	CmdRxEnb = 0x08,
	CmdTxEnb = 0x04,
	RxBufEmpty = 0x01,
};

/* Interrupt register bits, using my own meaningful names. */
enum IntrStatusBits {
	PCIErr = 0x8000,
	PCSTimeout = 0x4000,
	RxFIFOOver = 0x40,
	RxUnderrun = 0x20,
	RxOverflow = 0x10,
	TxErr = 0x08,
	TxOK = 0x04,
	RxErr = 0x02,
	RxOK = 0x01,

	RxAckBits = RxFIFOOver | RxOverflow | RxOK,
};

enum TxStatusBits {
	TxHostOwns = 0x2000,
	TxUnderrun = 0x4000,
	TxStatOK = 0x8000,
	TxOutOfWindow = 0x20000000,
	TxAborted = 0x40000000,
	TxCarrierLost = 0x80000000,
};
enum RxStatusBits {
	RxMulticast = 0x8000,
	RxPhysical = 0x4000,
	RxBroadcast = 0x2000,
	RxBadSymbol = 0x0020,
	RxRunt = 0x0010,
	RxTooLong = 0x0008,
	RxCRCErr = 0x0004,
	RxBadAlign = 0x0002,
	RxStatusOK = 0x0001,
};

/* Bits in RxConfig. */
enum rx_mode_bits {
	AcceptErr = 0x20,
	AcceptRunt = 0x10,
	AcceptBroadcast = 0x08,
	AcceptMulticast = 0x04,
	AcceptMyPhys = 0x02,
	AcceptAllPhys = 0x01,
};

/* Bits in TxConfig. */
enum tx_config_bits {
	/* Interframe Gap Time. Only TxIFG96 doesn't violate IEEE 802.3 */
	TxIFGShift = 24,
	TxIFG84 = (0 << TxIFGShift),	/* 8.4us / 840ns (10 / 100Mbps) */
	TxIFG88 = (1 << TxIFGShift),	/* 8.8us / 880ns (10 / 100Mbps) */
	TxIFG92 = (2 << TxIFGShift),	/* 9.2us / 920ns (10 / 100Mbps) */
	TxIFG96 = (3 << TxIFGShift),	/* 9.6us / 960ns (10 / 100Mbps) */

	TxLoopBack = (1 << 18) | (1 << 17), /* enable loopback test mode */
	TxCRC = (1 << 16),	/* DISABLE Tx pkt CRC append */
	TxClearAbt = (1 << 0),	/* Clear abort (WO) */
	TxDMAShift = 8,		/* DMA burst value (0-7) is shifted X many bits */
	TxRetryShift = 4,	/* TXRR value (0-15) is shifted X many bits */

	TxVersionMask = 0x7C800000, /* mask out version bits 30-26, 23 */
};

/* Bits in Config1 */
enum Config1Bits {
	Cfg1_PM_Enable = 0x01,
	Cfg1_VPD_Enable = 0x02,
	Cfg1_PIO = 0x04,
	Cfg1_MMIO = 0x08,
	LWAKE = 0x10,		/* not on 8139, 8139A */
	Cfg1_Driver_Load = 0x20,
	Cfg1_LED0 = 0x40,
	Cfg1_LED1 = 0x80,
	SLEEP = (1 << 1),	/* only on 8139, 8139A */
	PWRDN = (1 << 0),	/* only on 8139, 8139A */
};

/* Bits in Config3 */
enum Config3Bits {
	Cfg3_FBtBEn    = (1 << 0), /* 1 = Fast Back to Back */
	Cfg3_FuncRegEn = (1 << 1), /* 1 = enable CardBus Function registers */
	Cfg3_CLKRUN_En = (1 << 2), /* 1 = enable CLKRUN */
	Cfg3_CardB_En  = (1 << 3), /* 1 = enable CardBus registers */
	Cfg3_LinkUp    = (1 << 4), /* 1 = wake up on link up */
	Cfg3_Magic     = (1 << 5), /* 1 = wake up on Magic Packet (tm) */
	Cfg3_PARM_En   = (1 << 6), /* 0 = software can set twister parameters */
	Cfg3_GNTSel    = (1 << 7), /* 1 = delay 1 clock from PCI GNT signal */
};

/* Bits in Config4 */
enum Config4Bits {
	LWPTN = (1 << 2),	/* not on 8139, 8139A */
};

/* Bits in Config5 */
enum Config5Bits {
	Cfg5_PME_STS     = (1 << 0), /* 1 = PCI reset resets PME_Status */
	Cfg5_LANWake     = (1 << 1), /* 1 = enable LANWake signal */
	Cfg5_LDPS        = (1 << 2), /* 0 = save power when link is down */
	Cfg5_FIFOAddrPtr = (1 << 3), /* Realtek internal SRAM testing */
	Cfg5_UWF         = (1 << 4), /* 1 = accept unicast wakeup frame */
	Cfg5_MWF         = (1 << 5), /* 1 = accept multicast wakeup frame */
	Cfg5_BWF         = (1 << 6), /* 1 = accept broadcast wakeup frame */
};

enum RxConfigBits {
	/* rx fifo threshold */
	RxCfgFIFOShift = 13,
	RxCfgFIFONone = (7 << RxCfgFIFOShift),

	/* Max DMA burst */
	RxCfgDMAShift = 8,
	RxCfgDMAUnlimited = (7 << RxCfgDMAShift),

	/* rx ring buffer length */
	RxCfgRcv8K = 0,
	RxCfgRcv16K = (1 << 11),
	RxCfgRcv32K = (1 << 12),
	RxCfgRcv64K = (1 << 11) | (1 << 12),

	/* Disable packet wrap at end of Rx buffer. (not possible with 64k) */
	RxNoWrap = (1 << 7),
};

/* Twister tuning parameters from RealTek.
   Completely undocumented, but required to tune bad links on some boards. */
enum CSCRBits {
	CSCR_LinkOKBit = 0x0400,
	CSCR_LinkChangeBit = 0x0800,
	CSCR_LinkStatusBits = 0x0f000,
	CSCR_LinkDownOffCmd = 0x003c0,
	CSCR_LinkDownCmd = 0x0f3c0,
};

enum Cfg9346Bits {
	Cfg9346_Lock = 0x00,
	Cfg9346_Unlock = 0xC0,
};

typedef enum {
	CH_8139 = 0,
	CH_8139_K,
	CH_8139A,
	CH_8139A_G,
	CH_8139B,
	CH_8130,
	CH_8139C,
	CH_8100,
	CH_8100B_8139D,
	CH_8101,
} chip_t;

enum chip_flags {
	HasHltClk = (1 << 0),
	HasLWake = (1 << 1),
};

#define HW_REVID(b30, b29, b28, b27, b26, b23, b22) \
	(b30<<30 | b29<<29 | b28<<28 | b27<<27 | b26<<26 | b23<<23 | b22<<22)
#define HW_REVID_MASK	HW_REVID(1, 1, 1, 1, 1, 1, 1)

/* directly indexed by chip_t, above */
static const struct {
	const char *name;
	u32 version; /* from RTL8139C/RTL8139D docs */
	u32 flags;
} rtl_chip_info[] = {
	{ "RTL-8139",
	  HW_REVID(1, 0, 0, 0, 0, 0, 0),
	  HasHltClk,
	},

	{ "RTL-8139 rev K",
	  HW_REVID(1, 1, 0, 0, 0, 0, 0),
	  HasHltClk,
	},

	{ "RTL-8139A",
	  HW_REVID(1, 1, 1, 0, 0, 0, 0),
	  HasHltClk, /* XXX undocumented? */
	},

	{ "RTL-8139A rev G",
	  HW_REVID(1, 1, 1, 0, 0, 1, 0),
	  HasHltClk, /* XXX undocumented? */
	},

	{ "RTL-8139B",
	  HW_REVID(1, 1, 1, 1, 0, 0, 0),
	  HasLWake,
	},

	{ "RTL-8130",
	  HW_REVID(1, 1, 1, 1, 1, 0, 0),
	  HasLWake,
	},

	{ "RTL-8139C",
	  HW_REVID(1, 1, 1, 0, 1, 0, 0),
	  HasLWake,
	},

	{ "RTL-8100",
	  HW_REVID(1, 1, 1, 1, 0, 1, 0),
	  HasLWake,
	},

	{ "RTL-8100B/8139D",
	  HW_REVID(1, 1, 1, 0, 1, 0, 1),
	  HasHltClk /* XXX undocumented? */
	  | HasLWake,
	},

	{ "RTL-8101",
	  HW_REVID(1, 1, 1, 0, 1, 1, 1),
	  HasLWake,
	},
};

struct rtl_extra_stats {
564 unsigned long early_rx; 564 unsigned long early_rx;
565 unsigned long tx_buf_mapped; 565 unsigned long tx_buf_mapped;
566 unsigned long tx_timeouts; 566 unsigned long tx_timeouts;
567 unsigned long rx_lost_in_ring; 567 unsigned long rx_lost_in_ring;
568 }; 568 };
569 569
570 struct rtl8139_private { 570 struct rtl8139_private {
571 void __iomem *mmio_addr; 571 void __iomem *mmio_addr;
572 int drv_flags; 572 int drv_flags;
573 struct pci_dev *pci_dev; 573 struct pci_dev *pci_dev;
574 u32 msg_enable; 574 u32 msg_enable;
575 struct napi_struct napi; 575 struct napi_struct napi;
576 struct net_device *dev; 576 struct net_device *dev;
577 struct net_device_stats stats; 577 struct net_device_stats stats;
578 578
579 unsigned char *rx_ring; 579 unsigned char *rx_ring;
580 unsigned int cur_rx; /* RX buf index of next pkt */ 580 unsigned int cur_rx; /* RX buf index of next pkt */
581 dma_addr_t rx_ring_dma; 581 dma_addr_t rx_ring_dma;
582 582
583 unsigned int tx_flag; 583 unsigned int tx_flag;
584 unsigned long cur_tx; 584 unsigned long cur_tx;
585 unsigned long dirty_tx; 585 unsigned long dirty_tx;
586 unsigned char *tx_buf[NUM_TX_DESC]; /* Tx bounce buffers */ 586 unsigned char *tx_buf[NUM_TX_DESC]; /* Tx bounce buffers */
587 unsigned char *tx_bufs; /* Tx bounce buffer region. */ 587 unsigned char *tx_bufs; /* Tx bounce buffer region. */
588 dma_addr_t tx_bufs_dma; 588 dma_addr_t tx_bufs_dma;
589 589
590 signed char phys[4]; /* MII device addresses. */ 590 signed char phys[4]; /* MII device addresses. */
591 591
592 /* Twister tune state. */ 592 /* Twister tune state. */
593 char twistie, twist_row, twist_col; 593 char twistie, twist_row, twist_col;
594 594
595 unsigned int watchdog_fired : 1; 595 unsigned int watchdog_fired : 1;
596 unsigned int default_port : 4; /* Last dev->if_port value. */ 596 unsigned int default_port : 4; /* Last dev->if_port value. */
597 unsigned int have_thread : 1; 597 unsigned int have_thread : 1;
598 598
599 spinlock_t lock; 599 spinlock_t lock;
600 spinlock_t rx_lock; 600 spinlock_t rx_lock;
601 601
602 chip_t chipset; 602 chip_t chipset;
603 u32 rx_config; 603 u32 rx_config;
604 struct rtl_extra_stats xstats; 604 struct rtl_extra_stats xstats;
605 605
606 struct delayed_work thread; 606 struct delayed_work thread;
607 607
608 struct mii_if_info mii; 608 struct mii_if_info mii;
609 unsigned int regs_len; 609 unsigned int regs_len;
610 unsigned long fifo_copy_timeout; 610 unsigned long fifo_copy_timeout;
611 }; 611 };
612 612
613 MODULE_AUTHOR ("Jeff Garzik <jgarzik@pobox.com>"); 613 MODULE_AUTHOR ("Jeff Garzik <jgarzik@pobox.com>");
614 MODULE_DESCRIPTION ("RealTek RTL-8139 Fast Ethernet driver"); 614 MODULE_DESCRIPTION ("RealTek RTL-8139 Fast Ethernet driver");
615 MODULE_LICENSE("GPL"); 615 MODULE_LICENSE("GPL");
616 MODULE_VERSION(DRV_VERSION); 616 MODULE_VERSION(DRV_VERSION);
617 617
618 module_param(multicast_filter_limit, int, 0); 618 module_param(multicast_filter_limit, int, 0);
619 module_param_array(media, int, NULL, 0); 619 module_param_array(media, int, NULL, 0);
620 module_param_array(full_duplex, int, NULL, 0); 620 module_param_array(full_duplex, int, NULL, 0);
621 module_param(debug, int, 0); 621 module_param(debug, int, 0);
622 MODULE_PARM_DESC (debug, "8139too bitmapped message enable number"); 622 MODULE_PARM_DESC (debug, "8139too bitmapped message enable number");
623 MODULE_PARM_DESC (multicast_filter_limit, "8139too maximum number of filtered multicast addresses"); 623 MODULE_PARM_DESC (multicast_filter_limit, "8139too maximum number of filtered multicast addresses");
624 MODULE_PARM_DESC (media, "8139too: Bits 4+9: force full duplex, bit 5: 100Mbps"); 624 MODULE_PARM_DESC (media, "8139too: Bits 4+9: force full duplex, bit 5: 100Mbps");
625 MODULE_PARM_DESC (full_duplex, "8139too: Force full duplex for board(s) (1)"); 625 MODULE_PARM_DESC (full_duplex, "8139too: Force full duplex for board(s) (1)");
626 626
627 static int read_eeprom (void __iomem *ioaddr, int location, int addr_len); 627 static int read_eeprom (void __iomem *ioaddr, int location, int addr_len);
628 static int rtl8139_open (struct net_device *dev); 628 static int rtl8139_open (struct net_device *dev);
629 static int mdio_read (struct net_device *dev, int phy_id, int location); 629 static int mdio_read (struct net_device *dev, int phy_id, int location);
630 static void mdio_write (struct net_device *dev, int phy_id, int location, 630 static void mdio_write (struct net_device *dev, int phy_id, int location,
631 int val); 631 int val);
632 static void rtl8139_start_thread(struct rtl8139_private *tp); 632 static void rtl8139_start_thread(struct rtl8139_private *tp);
633 static void rtl8139_tx_timeout (struct net_device *dev); 633 static void rtl8139_tx_timeout (struct net_device *dev);
634 static void rtl8139_init_ring (struct net_device *dev); 634 static void rtl8139_init_ring (struct net_device *dev);
635 static int rtl8139_start_xmit (struct sk_buff *skb, 635 static int rtl8139_start_xmit (struct sk_buff *skb,
636 struct net_device *dev); 636 struct net_device *dev);
637 #ifdef CONFIG_NET_POLL_CONTROLLER 637 #ifdef CONFIG_NET_POLL_CONTROLLER
638 static void rtl8139_poll_controller(struct net_device *dev); 638 static void rtl8139_poll_controller(struct net_device *dev);
639 #endif 639 #endif
640 static int rtl8139_poll(struct napi_struct *napi, int budget); 640 static int rtl8139_poll(struct napi_struct *napi, int budget);
641 static irqreturn_t rtl8139_interrupt (int irq, void *dev_instance); 641 static irqreturn_t rtl8139_interrupt (int irq, void *dev_instance);
642 static int rtl8139_close (struct net_device *dev); 642 static int rtl8139_close (struct net_device *dev);
643 static int netdev_ioctl (struct net_device *dev, struct ifreq *rq, int cmd); 643 static int netdev_ioctl (struct net_device *dev, struct ifreq *rq, int cmd);
644 static struct net_device_stats *rtl8139_get_stats (struct net_device *dev); 644 static struct net_device_stats *rtl8139_get_stats (struct net_device *dev);
645 static void rtl8139_set_rx_mode (struct net_device *dev); 645 static void rtl8139_set_rx_mode (struct net_device *dev);
646 static void __set_rx_mode (struct net_device *dev); 646 static void __set_rx_mode (struct net_device *dev);
647 static void rtl8139_hw_start (struct net_device *dev); 647 static void rtl8139_hw_start (struct net_device *dev);
648 static void rtl8139_thread (struct work_struct *work); 648 static void rtl8139_thread (struct work_struct *work);
649 static void rtl8139_tx_timeout_task(struct work_struct *work); 649 static void rtl8139_tx_timeout_task(struct work_struct *work);
650 static const struct ethtool_ops rtl8139_ethtool_ops; 650 static const struct ethtool_ops rtl8139_ethtool_ops;
651 651
652 /* write MMIO register, with flush */ 652 /* write MMIO register, with flush */
653 /* Flush avoids rtl8139 bug w/ posted MMIO writes */ 653 /* Flush avoids rtl8139 bug w/ posted MMIO writes */
654 #define RTL_W8_F(reg, val8) do { iowrite8 ((val8), ioaddr + (reg)); ioread8 (ioaddr + (reg)); } while (0) 654 #define RTL_W8_F(reg, val8) do { iowrite8 ((val8), ioaddr + (reg)); ioread8 (ioaddr + (reg)); } while (0)
655 #define RTL_W16_F(reg, val16) do { iowrite16 ((val16), ioaddr + (reg)); ioread16 (ioaddr + (reg)); } while (0) 655 #define RTL_W16_F(reg, val16) do { iowrite16 ((val16), ioaddr + (reg)); ioread16 (ioaddr + (reg)); } while (0)
656 #define RTL_W32_F(reg, val32) do { iowrite32 ((val32), ioaddr + (reg)); ioread32 (ioaddr + (reg)); } while (0) 656 #define RTL_W32_F(reg, val32) do { iowrite32 ((val32), ioaddr + (reg)); ioread32 (ioaddr + (reg)); } while (0)
657 657
658 /* write MMIO register */ 658 /* write MMIO register */
659 #define RTL_W8(reg, val8) iowrite8 ((val8), ioaddr + (reg)) 659 #define RTL_W8(reg, val8) iowrite8 ((val8), ioaddr + (reg))
660 #define RTL_W16(reg, val16) iowrite16 ((val16), ioaddr + (reg)) 660 #define RTL_W16(reg, val16) iowrite16 ((val16), ioaddr + (reg))
661 #define RTL_W32(reg, val32) iowrite32 ((val32), ioaddr + (reg)) 661 #define RTL_W32(reg, val32) iowrite32 ((val32), ioaddr + (reg))
662 662
663 /* read MMIO register */ 663 /* read MMIO register */
664 #define RTL_R8(reg) ioread8 (ioaddr + (reg)) 664 #define RTL_R8(reg) ioread8 (ioaddr + (reg))
665 #define RTL_R16(reg) ioread16 (ioaddr + (reg)) 665 #define RTL_R16(reg) ioread16 (ioaddr + (reg))
666 #define RTL_R32(reg) ((unsigned long) ioread32 (ioaddr + (reg))) 666 #define RTL_R32(reg) ((unsigned long) ioread32 (ioaddr + (reg)))
667 667
668 668
669 static const u16 rtl8139_intr_mask = 669 static const u16 rtl8139_intr_mask =
670 PCIErr | PCSTimeout | RxUnderrun | RxOverflow | RxFIFOOver | 670 PCIErr | PCSTimeout | RxUnderrun | RxOverflow | RxFIFOOver |
671 TxErr | TxOK | RxErr | RxOK; 671 TxErr | TxOK | RxErr | RxOK;
672 672
673 static const u16 rtl8139_norx_intr_mask = 673 static const u16 rtl8139_norx_intr_mask =
674 PCIErr | PCSTimeout | RxUnderrun | 674 PCIErr | PCSTimeout | RxUnderrun |
675 TxErr | TxOK | RxErr ; 675 TxErr | TxOK | RxErr ;
676 676
677 #if RX_BUF_IDX == 0 677 #if RX_BUF_IDX == 0
678 static const unsigned int rtl8139_rx_config = 678 static const unsigned int rtl8139_rx_config =
679 RxCfgRcv8K | RxNoWrap | 679 RxCfgRcv8K | RxNoWrap |
680 (RX_FIFO_THRESH << RxCfgFIFOShift) | 680 (RX_FIFO_THRESH << RxCfgFIFOShift) |
681 (RX_DMA_BURST << RxCfgDMAShift); 681 (RX_DMA_BURST << RxCfgDMAShift);
682 #elif RX_BUF_IDX == 1 682 #elif RX_BUF_IDX == 1
683 static const unsigned int rtl8139_rx_config = 683 static const unsigned int rtl8139_rx_config =
684 RxCfgRcv16K | RxNoWrap | 684 RxCfgRcv16K | RxNoWrap |
685 (RX_FIFO_THRESH << RxCfgFIFOShift) | 685 (RX_FIFO_THRESH << RxCfgFIFOShift) |
686 (RX_DMA_BURST << RxCfgDMAShift); 686 (RX_DMA_BURST << RxCfgDMAShift);
687 #elif RX_BUF_IDX == 2 687 #elif RX_BUF_IDX == 2
688 static const unsigned int rtl8139_rx_config = 688 static const unsigned int rtl8139_rx_config =
689 RxCfgRcv32K | RxNoWrap | 689 RxCfgRcv32K | RxNoWrap |
690 (RX_FIFO_THRESH << RxCfgFIFOShift) | 690 (RX_FIFO_THRESH << RxCfgFIFOShift) |
691 (RX_DMA_BURST << RxCfgDMAShift); 691 (RX_DMA_BURST << RxCfgDMAShift);
692 #elif RX_BUF_IDX == 3 692 #elif RX_BUF_IDX == 3
693 static const unsigned int rtl8139_rx_config = 693 static const unsigned int rtl8139_rx_config =
694 RxCfgRcv64K | 694 RxCfgRcv64K |
695 (RX_FIFO_THRESH << RxCfgFIFOShift) | 695 (RX_FIFO_THRESH << RxCfgFIFOShift) |
696 (RX_DMA_BURST << RxCfgDMAShift); 696 (RX_DMA_BURST << RxCfgDMAShift);
697 #else 697 #else
698 #error "Invalid configuration for 8139_RXBUF_IDX" 698 #error "Invalid configuration for 8139_RXBUF_IDX"
699 #endif 699 #endif
700 700
701 static const unsigned int rtl8139_tx_config = 701 static const unsigned int rtl8139_tx_config =
702 TxIFG96 | (TX_DMA_BURST << TxDMAShift) | (TX_RETRY << TxRetryShift); 702 TxIFG96 | (TX_DMA_BURST << TxDMAShift) | (TX_RETRY << TxRetryShift);
703 703
704 static void __rtl8139_cleanup_dev (struct net_device *dev) 704 static void __rtl8139_cleanup_dev (struct net_device *dev)
705 { 705 {
706 struct rtl8139_private *tp = netdev_priv(dev); 706 struct rtl8139_private *tp = netdev_priv(dev);
707 struct pci_dev *pdev; 707 struct pci_dev *pdev;
708 708
709 assert (dev != NULL); 709 assert (dev != NULL);
710 assert (tp->pci_dev != NULL); 710 assert (tp->pci_dev != NULL);
711 pdev = tp->pci_dev; 711 pdev = tp->pci_dev;
712 712
713 #ifdef USE_IO_OPS 713 #ifdef USE_IO_OPS
714 if (tp->mmio_addr) 714 if (tp->mmio_addr)
715 ioport_unmap (tp->mmio_addr); 715 ioport_unmap (tp->mmio_addr);
716 #else 716 #else
717 if (tp->mmio_addr) 717 if (tp->mmio_addr)
718 pci_iounmap (pdev, tp->mmio_addr); 718 pci_iounmap (pdev, tp->mmio_addr);
719 #endif /* USE_IO_OPS */ 719 #endif /* USE_IO_OPS */
720 720
721 /* it's ok to call this even if we have no regions to free */ 721 /* it's ok to call this even if we have no regions to free */
722 pci_release_regions (pdev); 722 pci_release_regions (pdev);
723 723
724 free_netdev(dev); 724 free_netdev(dev);
725 pci_set_drvdata (pdev, NULL); 725 pci_set_drvdata (pdev, NULL);
726 } 726 }
727 727
728 728
729 static void rtl8139_chip_reset (void __iomem *ioaddr) 729 static void rtl8139_chip_reset (void __iomem *ioaddr)
730 { 730 {
731 int i; 731 int i;
732 732
733 /* Soft reset the chip. */ 733 /* Soft reset the chip. */
734 RTL_W8 (ChipCmd, CmdReset); 734 RTL_W8 (ChipCmd, CmdReset);
735 735
736 /* Check that the chip has finished the reset. */ 736 /* Check that the chip has finished the reset. */
737 for (i = 1000; i > 0; i--) { 737 for (i = 1000; i > 0; i--) {
738 barrier(); 738 barrier();
739 if ((RTL_R8 (ChipCmd) & CmdReset) == 0) 739 if ((RTL_R8 (ChipCmd) & CmdReset) == 0)
740 break; 740 break;
741 udelay (10); 741 udelay (10);
742 } 742 }
743 } 743 }
744 744
745 745
746 static int __devinit rtl8139_init_board (struct pci_dev *pdev, 746 static int __devinit rtl8139_init_board (struct pci_dev *pdev,
747 struct net_device **dev_out) 747 struct net_device **dev_out)
748 { 748 {
749 void __iomem *ioaddr; 749 void __iomem *ioaddr;
750 struct net_device *dev; 750 struct net_device *dev;
751 struct rtl8139_private *tp; 751 struct rtl8139_private *tp;
752 u8 tmp8; 752 u8 tmp8;
753 int rc, disable_dev_on_err = 0; 753 int rc, disable_dev_on_err = 0;
754 unsigned int i; 754 unsigned int i;
755 unsigned long pio_start, pio_end, pio_flags, pio_len; 755 unsigned long pio_start, pio_end, pio_flags, pio_len;
756 unsigned long mmio_start, mmio_end, mmio_flags, mmio_len; 756 unsigned long mmio_start, mmio_end, mmio_flags, mmio_len;
757 u32 version; 757 u32 version;
758 758
759 assert (pdev != NULL); 759 assert (pdev != NULL);
760 760
761 *dev_out = NULL; 761 *dev_out = NULL;
762 762
763 /* dev and priv zeroed in alloc_etherdev */ 763 /* dev and priv zeroed in alloc_etherdev */
764 dev = alloc_etherdev (sizeof (*tp)); 764 dev = alloc_etherdev (sizeof (*tp));
765 if (dev == NULL) { 765 if (dev == NULL) {
766 dev_err(&pdev->dev, "Unable to alloc new net device\n"); 766 dev_err(&pdev->dev, "Unable to alloc new net device\n");
767 return -ENOMEM; 767 return -ENOMEM;
768 } 768 }
769 SET_NETDEV_DEV(dev, &pdev->dev); 769 SET_NETDEV_DEV(dev, &pdev->dev);
770 770
771 tp = netdev_priv(dev); 771 tp = netdev_priv(dev);
772 tp->pci_dev = pdev; 772 tp->pci_dev = pdev;
773 773
774 /* enable device (incl. PCI PM wakeup and hotplug setup) */ 774 /* enable device (incl. PCI PM wakeup and hotplug setup) */
775 rc = pci_enable_device (pdev); 775 rc = pci_enable_device (pdev);
776 if (rc) 776 if (rc)
777 goto err_out; 777 goto err_out;
778 778
779 pio_start = pci_resource_start (pdev, 0); 779 pio_start = pci_resource_start (pdev, 0);
780 pio_end = pci_resource_end (pdev, 0); 780 pio_end = pci_resource_end (pdev, 0);
781 pio_flags = pci_resource_flags (pdev, 0); 781 pio_flags = pci_resource_flags (pdev, 0);
782 pio_len = pci_resource_len (pdev, 0); 782 pio_len = pci_resource_len (pdev, 0);
783 783
784 mmio_start = pci_resource_start (pdev, 1); 784 mmio_start = pci_resource_start (pdev, 1);
785 mmio_end = pci_resource_end (pdev, 1); 785 mmio_end = pci_resource_end (pdev, 1);
786 mmio_flags = pci_resource_flags (pdev, 1); 786 mmio_flags = pci_resource_flags (pdev, 1);
787 mmio_len = pci_resource_len (pdev, 1); 787 mmio_len = pci_resource_len (pdev, 1);
788 788
789 /* set this immediately, we need to know before 789 /* set this immediately, we need to know before
790 * we talk to the chip directly */ 790 * we talk to the chip directly */
791 DPRINTK("PIO region size == 0x%02X\n", pio_len); 791 DPRINTK("PIO region size == 0x%02X\n", pio_len);
792 DPRINTK("MMIO region size == 0x%02lX\n", mmio_len); 792 DPRINTK("MMIO region size == 0x%02lX\n", mmio_len);
793 793
794 #ifdef USE_IO_OPS 794 #ifdef USE_IO_OPS
795 /* make sure PCI base addr 0 is PIO */ 795 /* make sure PCI base addr 0 is PIO */
796 if (!(pio_flags & IORESOURCE_IO)) { 796 if (!(pio_flags & IORESOURCE_IO)) {
797 dev_err(&pdev->dev, "region #0 not a PIO resource, aborting\n"); 797 dev_err(&pdev->dev, "region #0 not a PIO resource, aborting\n");
798 rc = -ENODEV; 798 rc = -ENODEV;
799 goto err_out; 799 goto err_out;
800 } 800 }
801 /* check for weird/broken PCI region reporting */ 801 /* check for weird/broken PCI region reporting */
802 if (pio_len < RTL_MIN_IO_SIZE) { 802 if (pio_len < RTL_MIN_IO_SIZE) {
803 dev_err(&pdev->dev, "Invalid PCI I/O region size(s), aborting\n"); 803 dev_err(&pdev->dev, "Invalid PCI I/O region size(s), aborting\n");
804 rc = -ENODEV; 804 rc = -ENODEV;
805 goto err_out; 805 goto err_out;
806 } 806 }
807 #else 807 #else
808 /* make sure PCI base addr 1 is MMIO */ 808 /* make sure PCI base addr 1 is MMIO */
809 if (!(mmio_flags & IORESOURCE_MEM)) { 809 if (!(mmio_flags & IORESOURCE_MEM)) {
810 dev_err(&pdev->dev, "region #1 not an MMIO resource, aborting\n"); 810 dev_err(&pdev->dev, "region #1 not an MMIO resource, aborting\n");
811 rc = -ENODEV; 811 rc = -ENODEV;
812 goto err_out; 812 goto err_out;
813 } 813 }
814 if (mmio_len < RTL_MIN_IO_SIZE) { 814 if (mmio_len < RTL_MIN_IO_SIZE) {
815 dev_err(&pdev->dev, "Invalid PCI mem region size(s), aborting\n"); 815 dev_err(&pdev->dev, "Invalid PCI mem region size(s), aborting\n");
816 rc = -ENODEV; 816 rc = -ENODEV;
817 goto err_out; 817 goto err_out;
818 } 818 }
819 #endif 819 #endif
820 820
821 rc = pci_request_regions (pdev, DRV_NAME); 821 rc = pci_request_regions (pdev, DRV_NAME);
822 if (rc) 822 if (rc)
823 goto err_out; 823 goto err_out;
824 disable_dev_on_err = 1; 824 disable_dev_on_err = 1;
825 825
826 /* enable PCI bus-mastering */ 826 /* enable PCI bus-mastering */
827 pci_set_master (pdev); 827 pci_set_master (pdev);
828 828
829 #ifdef USE_IO_OPS 829 #ifdef USE_IO_OPS
830 ioaddr = ioport_map(pio_start, pio_len); 830 ioaddr = ioport_map(pio_start, pio_len);
831 if (!ioaddr) { 831 if (!ioaddr) {
832 dev_err(&pdev->dev, "cannot map PIO, aborting\n"); 832 dev_err(&pdev->dev, "cannot map PIO, aborting\n");
833 rc = -EIO; 833 rc = -EIO;
834 goto err_out; 834 goto err_out;
835 } 835 }
836 dev->base_addr = pio_start; 836 dev->base_addr = pio_start;
837 tp->mmio_addr = ioaddr; 837 tp->mmio_addr = ioaddr;
838 tp->regs_len = pio_len; 838 tp->regs_len = pio_len;
839 #else 839 #else
840 /* ioremap MMIO region */ 840 /* ioremap MMIO region */
841 ioaddr = pci_iomap(pdev, 1, 0); 841 ioaddr = pci_iomap(pdev, 1, 0);
842 if (ioaddr == NULL) { 842 if (ioaddr == NULL) {
843 dev_err(&pdev->dev, "cannot remap MMIO, aborting\n"); 843 dev_err(&pdev->dev, "cannot remap MMIO, aborting\n");
844 rc = -EIO; 844 rc = -EIO;
845 goto err_out; 845 goto err_out;
846 } 846 }
847 dev->base_addr = (long) ioaddr; 847 dev->base_addr = (long) ioaddr;
848 tp->mmio_addr = ioaddr; 848 tp->mmio_addr = ioaddr;
849 tp->regs_len = mmio_len; 849 tp->regs_len = mmio_len;
850 #endif /* USE_IO_OPS */ 850 #endif /* USE_IO_OPS */
851 851
852 /* Bring old chips out of low-power mode. */ 852 /* Bring old chips out of low-power mode. */
853 RTL_W8 (HltClk, 'R'); 853 RTL_W8 (HltClk, 'R');
854 854
855 /* check for missing/broken hardware */ 855 /* check for missing/broken hardware */
856 if (RTL_R32 (TxConfig) == 0xFFFFFFFF) { 856 if (RTL_R32 (TxConfig) == 0xFFFFFFFF) {
857 dev_err(&pdev->dev, "Chip not responding, ignoring board\n"); 857 dev_err(&pdev->dev, "Chip not responding, ignoring board\n");
858 rc = -EIO; 858 rc = -EIO;
859 goto err_out; 859 goto err_out;
860 } 860 }
861 861
862 /* identify chip attached to board */ 862 /* identify chip attached to board */
863 version = RTL_R32 (TxConfig) & HW_REVID_MASK; 863 version = RTL_R32 (TxConfig) & HW_REVID_MASK;
864 for (i = 0; i < ARRAY_SIZE (rtl_chip_info); i++) 864 for (i = 0; i < ARRAY_SIZE (rtl_chip_info); i++)
865 if (version == rtl_chip_info[i].version) { 865 if (version == rtl_chip_info[i].version) {
866 tp->chipset = i; 866 tp->chipset = i;
867 goto match; 867 goto match;
868 } 868 }
869 869
870 /* if unknown chip, assume array element #0, original RTL-8139 in this case */ 870 /* if unknown chip, assume array element #0, original RTL-8139 in this case */
871 dev_printk (KERN_DEBUG, &pdev->dev, 871 dev_printk (KERN_DEBUG, &pdev->dev,
872 "unknown chip version, assuming RTL-8139\n"); 872 "unknown chip version, assuming RTL-8139\n");
873 dev_printk (KERN_DEBUG, &pdev->dev, 873 dev_printk (KERN_DEBUG, &pdev->dev,
874 "TxConfig = 0x%lx\n", RTL_R32 (TxConfig)); 874 "TxConfig = 0x%lx\n", RTL_R32 (TxConfig));
875 tp->chipset = 0; 875 tp->chipset = 0;
876 876
877 match: 877 match:
878 DPRINTK ("chipset id (%d) == index %d, '%s'\n", 878 DPRINTK ("chipset id (%d) == index %d, '%s'\n",
879 version, i, rtl_chip_info[i].name); 879 version, i, rtl_chip_info[i].name);
880 880
881 if (tp->chipset >= CH_8139B) { 881 if (tp->chipset >= CH_8139B) {
882 u8 new_tmp8 = tmp8 = RTL_R8 (Config1); 882 u8 new_tmp8 = tmp8 = RTL_R8 (Config1);
883 DPRINTK("PCI PM wakeup\n"); 883 DPRINTK("PCI PM wakeup\n");
884 if ((rtl_chip_info[tp->chipset].flags & HasLWake) && 884 if ((rtl_chip_info[tp->chipset].flags & HasLWake) &&
885 (tmp8 & LWAKE)) 885 (tmp8 & LWAKE))
886 new_tmp8 &= ~LWAKE; 886 new_tmp8 &= ~LWAKE;
887 new_tmp8 |= Cfg1_PM_Enable; 887 new_tmp8 |= Cfg1_PM_Enable;
888 if (new_tmp8 != tmp8) { 888 if (new_tmp8 != tmp8) {
889 RTL_W8 (Cfg9346, Cfg9346_Unlock); 889 RTL_W8 (Cfg9346, Cfg9346_Unlock);
890 RTL_W8 (Config1, tmp8); 890 RTL_W8 (Config1, tmp8);
891 RTL_W8 (Cfg9346, Cfg9346_Lock); 891 RTL_W8 (Cfg9346, Cfg9346_Lock);
892 } 892 }
893 if (rtl_chip_info[tp->chipset].flags & HasLWake) { 893 if (rtl_chip_info[tp->chipset].flags & HasLWake) {
894 tmp8 = RTL_R8 (Config4); 894 tmp8 = RTL_R8 (Config4);
895 if (tmp8 & LWPTN) { 895 if (tmp8 & LWPTN) {
896 RTL_W8 (Cfg9346, Cfg9346_Unlock); 896 RTL_W8 (Cfg9346, Cfg9346_Unlock);
897 RTL_W8 (Config4, tmp8 & ~LWPTN); 897 RTL_W8 (Config4, tmp8 & ~LWPTN);
898 RTL_W8 (Cfg9346, Cfg9346_Lock); 898 RTL_W8 (Cfg9346, Cfg9346_Lock);
899 } 899 }
900 } 900 }
901 } else { 901 } else {
902 DPRINTK("Old chip wakeup\n"); 902 DPRINTK("Old chip wakeup\n");
903 tmp8 = RTL_R8 (Config1); 903 tmp8 = RTL_R8 (Config1);
904 tmp8 &= ~(SLEEP | PWRDN); 904 tmp8 &= ~(SLEEP | PWRDN);
905 RTL_W8 (Config1, tmp8); 905 RTL_W8 (Config1, tmp8);
906 } 906 }
907 907
908 rtl8139_chip_reset (ioaddr); 908 rtl8139_chip_reset (ioaddr);
909 909
910 *dev_out = dev; 910 *dev_out = dev;
911 return 0; 911 return 0;
912 912
913 err_out: 913 err_out:
914 __rtl8139_cleanup_dev (dev); 914 __rtl8139_cleanup_dev (dev);
915 if (disable_dev_on_err) 915 if (disable_dev_on_err)
916 pci_disable_device (pdev); 916 pci_disable_device (pdev);
917 return rc; 917 return rc;
918 } 918 }
919 919
920 920
921 static int __devinit rtl8139_init_one (struct pci_dev *pdev, 921 static int __devinit rtl8139_init_one (struct pci_dev *pdev,
922 const struct pci_device_id *ent) 922 const struct pci_device_id *ent)
923 { 923 {
924 struct net_device *dev = NULL; 924 struct net_device *dev = NULL;
925 struct rtl8139_private *tp; 925 struct rtl8139_private *tp;
926 int i, addr_len, option; 926 int i, addr_len, option;
927 void __iomem *ioaddr; 927 void __iomem *ioaddr;
928 static int board_idx = -1; 928 static int board_idx = -1;
929 DECLARE_MAC_BUF(mac); 929 DECLARE_MAC_BUF(mac);
930 930
931 assert (pdev != NULL); 931 assert (pdev != NULL);
932 assert (ent != NULL); 932 assert (ent != NULL);
933 933
934 board_idx++; 934 board_idx++;
935 935
936 /* when we're built into the kernel, the driver version message 936 /* when we're built into the kernel, the driver version message
937 * is only printed if at least one 8139 board has been found 937 * is only printed if at least one 8139 board has been found
938 */ 938 */
939 #ifndef MODULE 939 #ifndef MODULE
940 { 940 {
941 static int printed_version; 941 static int printed_version;
942 if (!printed_version++) 942 if (!printed_version++)
943 printk (KERN_INFO RTL8139_DRIVER_NAME "\n"); 943 printk (KERN_INFO RTL8139_DRIVER_NAME "\n");
944 } 944 }
945 #endif 945 #endif
946 946
947 if (pdev->vendor == PCI_VENDOR_ID_REALTEK && 947 if (pdev->vendor == PCI_VENDOR_ID_REALTEK &&
948 pdev->device == PCI_DEVICE_ID_REALTEK_8139 && pdev->revision >= 0x20) { 948 pdev->device == PCI_DEVICE_ID_REALTEK_8139 && pdev->revision >= 0x20) {
949 dev_info(&pdev->dev, 949 dev_info(&pdev->dev,
950 "This (id %04x:%04x rev %02x) is an enhanced 8139C+ chip\n", 950 "This (id %04x:%04x rev %02x) is an enhanced 8139C+ chip\n",
951 pdev->vendor, pdev->device, pdev->revision); 951 pdev->vendor, pdev->device, pdev->revision);
952 dev_info(&pdev->dev, 952 dev_info(&pdev->dev,
953 "Use the \"8139cp\" driver for improved performance and stability.\n"); 953 "Use the \"8139cp\" driver for improved performance and stability.\n");
954 } 954 }
955 955
956 i = rtl8139_init_board (pdev, &dev); 956 i = rtl8139_init_board (pdev, &dev);
957 if (i < 0) 957 if (i < 0)
958 return i; 958 return i;
959 959
960 assert (dev != NULL); 960 assert (dev != NULL);
961 tp = netdev_priv(dev); 961 tp = netdev_priv(dev);
962 tp->dev = dev; 962 tp->dev = dev;
963 963
964 ioaddr = tp->mmio_addr; 964 ioaddr = tp->mmio_addr;
965 assert (ioaddr != NULL); 965 assert (ioaddr != NULL);
966 966
967 addr_len = read_eeprom (ioaddr, 0, 8) == 0x8129 ? 8 : 6; 967 addr_len = read_eeprom (ioaddr, 0, 8) == 0x8129 ? 8 : 6;
968 for (i = 0; i < 3; i++) 968 for (i = 0; i < 3; i++)
969 ((u16 *) (dev->dev_addr))[i] = 969 ((__le16 *) (dev->dev_addr))[i] =
970 le16_to_cpu (read_eeprom (ioaddr, i + 7, addr_len)); 970 cpu_to_le16(read_eeprom (ioaddr, i + 7, addr_len));
971 memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len); 971 memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len);
972 972
973 /* The Rtl8139-specific entries in the device structure. */ 973 /* The Rtl8139-specific entries in the device structure. */
974 dev->open = rtl8139_open; 974 dev->open = rtl8139_open;
975 dev->hard_start_xmit = rtl8139_start_xmit; 975 dev->hard_start_xmit = rtl8139_start_xmit;
976 netif_napi_add(dev, &tp->napi, rtl8139_poll, 64); 976 netif_napi_add(dev, &tp->napi, rtl8139_poll, 64);
977 dev->stop = rtl8139_close; 977 dev->stop = rtl8139_close;
978 dev->get_stats = rtl8139_get_stats; 978 dev->get_stats = rtl8139_get_stats;
979 dev->set_multicast_list = rtl8139_set_rx_mode; 979 dev->set_multicast_list = rtl8139_set_rx_mode;
980 dev->do_ioctl = netdev_ioctl; 980 dev->do_ioctl = netdev_ioctl;
981 dev->ethtool_ops = &rtl8139_ethtool_ops; 981 dev->ethtool_ops = &rtl8139_ethtool_ops;
982 dev->tx_timeout = rtl8139_tx_timeout; 982 dev->tx_timeout = rtl8139_tx_timeout;
983 dev->watchdog_timeo = TX_TIMEOUT; 983 dev->watchdog_timeo = TX_TIMEOUT;
984 #ifdef CONFIG_NET_POLL_CONTROLLER 984 #ifdef CONFIG_NET_POLL_CONTROLLER
985 dev->poll_controller = rtl8139_poll_controller; 985 dev->poll_controller = rtl8139_poll_controller;
986 #endif 986 #endif
987 987
988 /* note: the hardware is not capable of sg/csum/highdma, however 988 /* note: the hardware is not capable of sg/csum/highdma, however
989 * through the use of skb_copy_and_csum_dev we enable these 989 * through the use of skb_copy_and_csum_dev we enable these
990 * features 990 * features
991 */ 991 */
992 dev->features |= NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_HIGHDMA; 992 dev->features |= NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_HIGHDMA;
993 993
994 dev->irq = pdev->irq; 994 dev->irq = pdev->irq;
995 995
996 /* tp zeroed and aligned in alloc_etherdev */ 996 /* tp zeroed and aligned in alloc_etherdev */
997 tp = netdev_priv(dev); 997 tp = netdev_priv(dev);
998 998
999 /* note: tp->chipset set in rtl8139_init_board */ 999 /* note: tp->chipset set in rtl8139_init_board */
1000 tp->drv_flags = board_info[ent->driver_data].hw_flags; 1000 tp->drv_flags = board_info[ent->driver_data].hw_flags;
1001 tp->mmio_addr = ioaddr; 1001 tp->mmio_addr = ioaddr;
1002 tp->msg_enable = 1002 tp->msg_enable =
1003 (debug < 0 ? RTL8139_DEF_MSG_ENABLE : ((1 << debug) - 1)); 1003 (debug < 0 ? RTL8139_DEF_MSG_ENABLE : ((1 << debug) - 1));
1004 spin_lock_init (&tp->lock); 1004 spin_lock_init (&tp->lock);
1005 spin_lock_init (&tp->rx_lock); 1005 spin_lock_init (&tp->rx_lock);
	INIT_DELAYED_WORK(&tp->thread, rtl8139_thread);
	tp->mii.dev = dev;
	tp->mii.mdio_read = mdio_read;
	tp->mii.mdio_write = mdio_write;
	tp->mii.phy_id_mask = 0x3f;
	tp->mii.reg_num_mask = 0x1f;

	/* dev is fully set up and ready to use now */
	DPRINTK("about to register device named %s (%p)...\n", dev->name, dev);
	i = register_netdev (dev);
	if (i) goto err_out;

	pci_set_drvdata (pdev, dev);

	printk (KERN_INFO "%s: %s at 0x%lx, "
		"%s, IRQ %d\n",
		dev->name,
		board_info[ent->driver_data].name,
		dev->base_addr,
		print_mac(mac, dev->dev_addr),
		dev->irq);

	printk (KERN_DEBUG "%s: Identified 8139 chip type '%s'\n",
		dev->name, rtl_chip_info[tp->chipset].name);

	/* Find the connected MII xcvrs.
	   Doing this in open() would allow detecting external xcvrs later, but
	   takes too much time. */
#ifdef CONFIG_8139TOO_8129
	if (tp->drv_flags & HAS_MII_XCVR) {
		int phy, phy_idx = 0;
		for (phy = 0; phy < 32 && phy_idx < sizeof(tp->phys); phy++) {
			int mii_status = mdio_read(dev, phy, 1);
			if (mii_status != 0xffff && mii_status != 0x0000) {
				u16 advertising = mdio_read(dev, phy, 4);
				tp->phys[phy_idx++] = phy;
				printk(KERN_INFO "%s: MII transceiver %d status 0x%4.4x "
				       "advertising %4.4x.\n",
				       dev->name, phy, mii_status, advertising);
			}
		}
		if (phy_idx == 0) {
			printk(KERN_INFO "%s: No MII transceivers found! Assuming SYM "
			       "transceiver.\n",
			       dev->name);
			tp->phys[0] = 32;
		}
	} else
#endif
		tp->phys[0] = 32;
	tp->mii.phy_id = tp->phys[0];

	/* The lower four bits are the media type. */
	option = (board_idx >= MAX_UNITS) ? 0 : media[board_idx];
	if (option > 0) {
		tp->mii.full_duplex = (option & 0x210) ? 1 : 0;
		tp->default_port = option & 0xFF;
		if (tp->default_port)
			tp->mii.force_media = 1;
	}
	if (board_idx < MAX_UNITS && full_duplex[board_idx] > 0)
		tp->mii.full_duplex = full_duplex[board_idx];
	if (tp->mii.full_duplex) {
		printk(KERN_INFO "%s: Media type forced to Full Duplex.\n", dev->name);
		/* Changing the MII-advertised media because might prevent
		   re-connection. */
		tp->mii.force_media = 1;
	}
	if (tp->default_port) {
		printk(KERN_INFO " Forcing %dMbps %s-duplex operation.\n",
		       (option & 0x20 ? 100 : 10),
		       (option & 0x10 ? "full" : "half"));
		mdio_write(dev, tp->phys[0], 0,
			   ((option & 0x20) ? 0x2000 : 0) |	/* 100Mbps? */
			   ((option & 0x10) ? 0x0100 : 0));	/* Full duplex? */
	}

	/* Put the chip into low-power mode. */
	if (rtl_chip_info[tp->chipset].flags & HasHltClk)
		RTL_W8 (HltClk, 'H');	/* 'R' would leave the clock running. */

	return 0;

err_out:
	__rtl8139_cleanup_dev (dev);
	pci_disable_device (pdev);
	return i;
}

static void __devexit rtl8139_remove_one (struct pci_dev *pdev)
{
	struct net_device *dev = pci_get_drvdata (pdev);

	assert (dev != NULL);

	flush_scheduled_work();

	unregister_netdev (dev);

	__rtl8139_cleanup_dev (dev);
	pci_disable_device (pdev);
}

/* Serial EEPROM section. */

/*  EEPROM_Ctrl bits. */
#define EE_SHIFT_CLK	0x04	/* EEPROM shift clock. */
#define EE_CS		0x08	/* EEPROM chip select. */
#define EE_DATA_WRITE	0x02	/* EEPROM chip data in. */
#define EE_WRITE_0	0x00
#define EE_WRITE_1	0x02
#define EE_DATA_READ	0x01	/* EEPROM chip data out. */
#define EE_ENB		(0x80 | EE_CS)

/* Delay between EEPROM clock transitions.
   No extra delay is needed with 33Mhz PCI, but 66Mhz may change this.
 */

#define eeprom_delay()	(void)RTL_R32(Cfg9346)

/* The EEPROM commands include the alway-set leading bit. */
#define EE_WRITE_CMD	(5)
#define EE_READ_CMD	(6)
#define EE_ERASE_CMD	(7)

static int __devinit read_eeprom (void __iomem *ioaddr, int location, int addr_len)
{
	int i;
	unsigned retval = 0;
	int read_cmd = location | (EE_READ_CMD << addr_len);

	RTL_W8 (Cfg9346, EE_ENB & ~EE_CS);
	RTL_W8 (Cfg9346, EE_ENB);
	eeprom_delay ();

	/* Shift the read command bits out. */
	for (i = 4 + addr_len; i >= 0; i--) {
		int dataval = (read_cmd & (1 << i)) ? EE_DATA_WRITE : 0;
		RTL_W8 (Cfg9346, EE_ENB | dataval);
		eeprom_delay ();
		RTL_W8 (Cfg9346, EE_ENB | dataval | EE_SHIFT_CLK);
		eeprom_delay ();
	}
	RTL_W8 (Cfg9346, EE_ENB);
	eeprom_delay ();

	for (i = 16; i > 0; i--) {
		RTL_W8 (Cfg9346, EE_ENB | EE_SHIFT_CLK);
		eeprom_delay ();
		retval =
		    (retval << 1) | ((RTL_R8 (Cfg9346) & EE_DATA_READ) ? 1 :
				     0);
		RTL_W8 (Cfg9346, EE_ENB);
		eeprom_delay ();
	}

	/* Terminate the EEPROM access. */
	RTL_W8 (Cfg9346, ~EE_CS);
	eeprom_delay ();

	return retval;
}

/* MII serial management: mostly bogus for now. */
/* Read and write the MII management registers using software-generated
   serial MDIO protocol.
   The maximum data clock rate is 2.5 Mhz.  The minimum timing is usually
   met by back-to-back PCI I/O cycles, but we insert a delay to avoid
   "overclocking" issues. */
#define MDIO_DIR	0x80
#define MDIO_DATA_OUT	0x04
#define MDIO_DATA_IN	0x02
#define MDIO_CLK	0x01
#define MDIO_WRITE0	(MDIO_DIR)
#define MDIO_WRITE1	(MDIO_DIR | MDIO_DATA_OUT)

#define mdio_delay()	RTL_R8(Config4)


static const char mii_2_8139_map[8] = {
	BasicModeCtrl,
	BasicModeStatus,
	0,
	0,
	NWayAdvert,
	NWayLPAR,
	NWayExpansion,
	0
};

#ifdef CONFIG_8139TOO_8129
/* Syncronize the MII management interface by shifting 32 one bits out. */
static void mdio_sync (void __iomem *ioaddr)
{
	int i;

	for (i = 32; i >= 0; i--) {
		RTL_W8 (Config4, MDIO_WRITE1);
		mdio_delay ();
		RTL_W8 (Config4, MDIO_WRITE1 | MDIO_CLK);
		mdio_delay ();
	}
}
#endif

static int mdio_read (struct net_device *dev, int phy_id, int location)
{
	struct rtl8139_private *tp = netdev_priv(dev);
	int retval = 0;
#ifdef CONFIG_8139TOO_8129
	void __iomem *ioaddr = tp->mmio_addr;
	int mii_cmd = (0xf6 << 10) | (phy_id << 5) | location;
	int i;
#endif

	if (phy_id > 31) {	/* Really a 8139.  Use internal registers. */
		void __iomem *ioaddr = tp->mmio_addr;
		return location < 8 && mii_2_8139_map[location] ?
		    RTL_R16 (mii_2_8139_map[location]) : 0;
	}

#ifdef CONFIG_8139TOO_8129
	mdio_sync (ioaddr);
	/* Shift the read command bits out. */
	for (i = 15; i >= 0; i--) {
		int dataval = (mii_cmd & (1 << i)) ? MDIO_DATA_OUT : 0;

		RTL_W8 (Config4, MDIO_DIR | dataval);
		mdio_delay ();
		RTL_W8 (Config4, MDIO_DIR | dataval | MDIO_CLK);
		mdio_delay ();
	}

	/* Read the two transition, 16 data, and wire-idle bits. */
	for (i = 19; i > 0; i--) {
		RTL_W8 (Config4, 0);
		mdio_delay ();
		retval = (retval << 1) | ((RTL_R8 (Config4) & MDIO_DATA_IN) ? 1 : 0);
		RTL_W8 (Config4, MDIO_CLK);
		mdio_delay ();
	}
#endif

	return (retval >> 1) & 0xffff;
}

static void mdio_write (struct net_device *dev, int phy_id, int location,
			int value)
{
	struct rtl8139_private *tp = netdev_priv(dev);
#ifdef CONFIG_8139TOO_8129
	void __iomem *ioaddr = tp->mmio_addr;
	int mii_cmd = (0x5002 << 16) | (phy_id << 23) | (location << 18) | value;
	int i;
#endif

	if (phy_id > 31) {	/* Really a 8139.  Use internal registers. */
		void __iomem *ioaddr = tp->mmio_addr;
		if (location == 0) {
			RTL_W8 (Cfg9346, Cfg9346_Unlock);
			RTL_W16 (BasicModeCtrl, value);
			RTL_W8 (Cfg9346, Cfg9346_Lock);
		} else if (location < 8 && mii_2_8139_map[location])
			RTL_W16 (mii_2_8139_map[location], value);
		return;
	}

#ifdef CONFIG_8139TOO_8129
	mdio_sync (ioaddr);

	/* Shift the command bits out. */
	for (i = 31; i >= 0; i--) {
		int dataval =
		    (mii_cmd & (1 << i)) ? MDIO_WRITE1 : MDIO_WRITE0;
		RTL_W8 (Config4, dataval);
		mdio_delay ();
		RTL_W8 (Config4, dataval | MDIO_CLK);
		mdio_delay ();
	}
	/* Clear out extra bits. */
	for (i = 2; i > 0; i--) {
		RTL_W8 (Config4, 0);
		mdio_delay ();
		RTL_W8 (Config4, MDIO_CLK);
		mdio_delay ();
	}
#endif
}

static int rtl8139_open (struct net_device *dev)
{
	struct rtl8139_private *tp = netdev_priv(dev);
	int retval;
	void __iomem *ioaddr = tp->mmio_addr;

	retval = request_irq (dev->irq, rtl8139_interrupt, IRQF_SHARED, dev->name, dev);
	if (retval)
		return retval;

	tp->tx_bufs = dma_alloc_coherent(&tp->pci_dev->dev, TX_BUF_TOT_LEN,
					 &tp->tx_bufs_dma, GFP_KERNEL);
	tp->rx_ring = dma_alloc_coherent(&tp->pci_dev->dev, RX_BUF_TOT_LEN,
					 &tp->rx_ring_dma, GFP_KERNEL);
	if (tp->tx_bufs == NULL || tp->rx_ring == NULL) {
		free_irq(dev->irq, dev);

		if (tp->tx_bufs)
			dma_free_coherent(&tp->pci_dev->dev, TX_BUF_TOT_LEN,
					  tp->tx_bufs, tp->tx_bufs_dma);
		if (tp->rx_ring)
			dma_free_coherent(&tp->pci_dev->dev, RX_BUF_TOT_LEN,
					  tp->rx_ring, tp->rx_ring_dma);

		return -ENOMEM;

	}

	napi_enable(&tp->napi);

	tp->mii.full_duplex = tp->mii.force_media;
	tp->tx_flag = (TX_FIFO_THRESH << 11) & 0x003f0000;

	rtl8139_init_ring (dev);
	rtl8139_hw_start (dev);
	netif_start_queue (dev);

	if (netif_msg_ifup(tp))
		printk(KERN_DEBUG "%s: rtl8139_open() ioaddr %#llx IRQ %d"
		       " GP Pins %2.2x %s-duplex.\n", dev->name,
		       (unsigned long long)pci_resource_start (tp->pci_dev, 1),
		       dev->irq, RTL_R8 (MediaStatus),
		       tp->mii.full_duplex ? "full" : "half");

	rtl8139_start_thread(tp);

	return 0;
}

static void rtl_check_media (struct net_device *dev, unsigned int init_media)
{
	struct rtl8139_private *tp = netdev_priv(dev);

	if (tp->phys[0] >= 0) {
		mii_check_media(&tp->mii, netif_msg_link(tp), init_media);
	}
}

/* Start the hardware at open or resume. */
static void rtl8139_hw_start (struct net_device *dev)
{
	struct rtl8139_private *tp = netdev_priv(dev);
	void __iomem *ioaddr = tp->mmio_addr;
	u32 i;
	u8 tmp;

	/* Bring old chips out of low-power mode. */
	if (rtl_chip_info[tp->chipset].flags & HasHltClk)
		RTL_W8 (HltClk, 'R');

	rtl8139_chip_reset (ioaddr);

	/* unlock Config[01234] and BMCR register writes */
	RTL_W8_F (Cfg9346, Cfg9346_Unlock);
	/* Restore our idea of the MAC address. */
-	RTL_W32_F (MAC0 + 0, cpu_to_le32 (*(u32 *) (dev->dev_addr + 0)));
-	RTL_W32_F (MAC0 + 4, cpu_to_le32 (*(u32 *) (dev->dev_addr + 4)));
+	RTL_W32_F (MAC0 + 0, le32_to_cpu (*(__le32 *) (dev->dev_addr + 0)));
+	RTL_W32_F (MAC0 + 4, le16_to_cpu (*(__le16 *) (dev->dev_addr + 4)));

	/* Must enable Tx/Rx before setting transfer thresholds! */
	RTL_W8 (ChipCmd, CmdRxEnb | CmdTxEnb);

	tp->rx_config = rtl8139_rx_config | AcceptBroadcast | AcceptMyPhys;
	RTL_W32 (RxConfig, tp->rx_config);
	RTL_W32 (TxConfig, rtl8139_tx_config);

	tp->cur_rx = 0;

	rtl_check_media (dev, 1);

	if (tp->chipset >= CH_8139B) {
		/* Disable magic packet scanning, which is enabled
		 * when PM is enabled in Config1.  It can be reenabled
		 * via ETHTOOL_SWOL if desired. */
		RTL_W8 (Config3, RTL_R8 (Config3) & ~Cfg3_Magic);
	}

	DPRINTK("init buffer addresses\n");

	/* Lock Config[01234] and BMCR register writes */
	RTL_W8 (Cfg9346, Cfg9346_Lock);

	/* init Rx ring buffer DMA address */
	RTL_W32_F (RxBuf, tp->rx_ring_dma);

	/* init Tx buffer DMA addresses */
	for (i = 0; i < NUM_TX_DESC; i++)
		RTL_W32_F (TxAddr0 + (i * 4), tp->tx_bufs_dma + (tp->tx_buf[i] - tp->tx_bufs));

	RTL_W32 (RxMissed, 0);

	rtl8139_set_rx_mode (dev);

	/* no early-rx interrupts */
	RTL_W16 (MultiIntr, RTL_R16 (MultiIntr) & MultiIntrClear);

	/* make sure RxTx has started */
	tmp = RTL_R8 (ChipCmd);
	if ((!(tmp & CmdRxEnb)) || (!(tmp & CmdTxEnb)))
		RTL_W8 (ChipCmd, CmdRxEnb | CmdTxEnb);

	/* Enable all known interrupts by setting the interrupt mask. */
	RTL_W16 (IntrMask, rtl8139_intr_mask);
}

/* Initialize the Rx and Tx rings, along with various 'dev' bits. */
static void rtl8139_init_ring (struct net_device *dev)
{
	struct rtl8139_private *tp = netdev_priv(dev);
	int i;

	tp->cur_rx = 0;
	tp->cur_tx = 0;
	tp->dirty_tx = 0;

	for (i = 0; i < NUM_TX_DESC; i++)
		tp->tx_buf[i] = &tp->tx_bufs[i * TX_BUF_SIZE];
}


/* This must be global for CONFIG_8139TOO_TUNE_TWISTER case */
static int next_tick = 3 * HZ;

#ifndef CONFIG_8139TOO_TUNE_TWISTER
static inline void rtl8139_tune_twister (struct net_device *dev,
				  struct rtl8139_private *tp) {}
#else
enum TwisterParamVals {
	PARA78_default	= 0x78fa8388,
	PARA7c_default	= 0xcb38de43,	/* param[0][3] */
	PARA7c_xxx	= 0xcb38de43,
};

static const unsigned long param[4][4] = {
	{0xcb39de43, 0xcb39ce43, 0xfb38de03, 0xcb38de43},
	{0xcb39de43, 0xcb39ce43, 0xcb39ce83, 0xcb39ce83},
	{0xcb39de43, 0xcb39ce43, 0xcb39ce83, 0xcb39ce83},
	{0xbb39de43, 0xbb39ce43, 0xbb39ce83, 0xbb39ce83}
};

static void rtl8139_tune_twister (struct net_device *dev,
				  struct rtl8139_private *tp)
{
	int linkcase;
	void __iomem *ioaddr = tp->mmio_addr;

	/* This is a complicated state machine to configure the "twister" for
	   impedance/echos based on the cable length.
	   All of this is magic and undocumented.
	 */
	switch (tp->twistie) {
	case 1:
		if (RTL_R16 (CSCR) & CSCR_LinkOKBit) {
			/* We have link beat, let us tune the twister. */
			RTL_W16 (CSCR, CSCR_LinkDownOffCmd);
			tp->twistie = 2;	/* Change to state 2. */
			next_tick = HZ / 10;
		} else {
			/* Just put in some reasonable defaults for when beat returns. */
			RTL_W16 (CSCR, CSCR_LinkDownCmd);
			RTL_W32 (FIFOTMS, 0x20);	/* Turn on cable test mode. */
			RTL_W32 (PARA78, PARA78_default);
			RTL_W32 (PARA7c, PARA7c_default);
			tp->twistie = 0;	/* Bail from future actions. */
		}
		break;
	case 2:
		/* Read how long it took to hear the echo. */
		linkcase = RTL_R16 (CSCR) & CSCR_LinkStatusBits;
		if (linkcase == 0x7000)
			tp->twist_row = 3;
		else if (linkcase == 0x3000)
			tp->twist_row = 2;
		else if (linkcase == 0x1000)
			tp->twist_row = 1;
		else
			tp->twist_row = 0;
		tp->twist_col = 0;
		tp->twistie = 3;	/* Change to state 2. */
		next_tick = HZ / 10;
		break;
	case 3:
		/* Put out four tuning parameters, one per 100msec. */
		if (tp->twist_col == 0)
			RTL_W16 (FIFOTMS, 0);
		RTL_W32 (PARA7c, param[(int) tp->twist_row]
			 [(int) tp->twist_col]);
		next_tick = HZ / 10;
		if (++tp->twist_col >= 4) {
			/* For short cables we are done.
			   For long cables (row == 3) check for mistune. */
			tp->twistie =
			    (tp->twist_row == 3) ? 4 : 0;
		}
		break;
	case 4:
		/* Special case for long cables: check for mistune. */
		if ((RTL_R16 (CSCR) &
		     CSCR_LinkStatusBits) == 0x7000) {
			tp->twistie = 0;
			break;
		} else {
			RTL_W32 (PARA7c, 0xfb38de03);
			tp->twistie = 5;
			next_tick = HZ / 10;
		}
		break;
	case 5:
		/* Retune for shorter cable (column 2). */
		RTL_W32 (FIFOTMS, 0x20);
		RTL_W32 (PARA78, PARA78_default);
		RTL_W32 (PARA7c, PARA7c_default);
		RTL_W32 (FIFOTMS, 0x00);
		tp->twist_row = 2;
		tp->twist_col = 0;
		tp->twistie = 3;
		next_tick = HZ / 10;
		break;

	default:
		/* do nothing */
		break;
	}
}
#endif /* CONFIG_8139TOO_TUNE_TWISTER */

1547 static inline void rtl8139_thread_iter (struct net_device *dev, 1547 static inline void rtl8139_thread_iter (struct net_device *dev,
1548 struct rtl8139_private *tp, 1548 struct rtl8139_private *tp,
1549 void __iomem *ioaddr) 1549 void __iomem *ioaddr)
1550 { 1550 {
1551 int mii_lpa; 1551 int mii_lpa;
1552 1552
1553 mii_lpa = mdio_read (dev, tp->phys[0], MII_LPA); 1553 mii_lpa = mdio_read (dev, tp->phys[0], MII_LPA);
1554 1554
1555 if (!tp->mii.force_media && mii_lpa != 0xffff) { 1555 if (!tp->mii.force_media && mii_lpa != 0xffff) {
1556 int duplex = (mii_lpa & LPA_100FULL) 1556 int duplex = (mii_lpa & LPA_100FULL)
1557 || (mii_lpa & 0x01C0) == 0x0040; 1557 || (mii_lpa & 0x01C0) == 0x0040;
1558 if (tp->mii.full_duplex != duplex) { 1558 if (tp->mii.full_duplex != duplex) {
1559 tp->mii.full_duplex = duplex; 1559 tp->mii.full_duplex = duplex;
1560 1560
1561 if (mii_lpa) { 1561 if (mii_lpa) {
1562 printk (KERN_INFO 1562 printk (KERN_INFO
1563 "%s: Setting %s-duplex based on MII #%d link" 1563 "%s: Setting %s-duplex based on MII #%d link"
1564 " partner ability of %4.4x.\n", 1564 " partner ability of %4.4x.\n",
1565 dev->name, 1565 dev->name,
1566 tp->mii.full_duplex ? "full" : "half", 1566 tp->mii.full_duplex ? "full" : "half",
1567 tp->phys[0], mii_lpa); 1567 tp->phys[0], mii_lpa);
1568 } else { 1568 } else {
1569 printk(KERN_INFO"%s: media is unconnected, link down, or incompatible connection\n", 1569 printk(KERN_INFO"%s: media is unconnected, link down, or incompatible connection\n",
1570 dev->name); 1570 dev->name);
1571 } 1571 }
1572 #if 0 1572 #if 0
1573 RTL_W8 (Cfg9346, Cfg9346_Unlock); 1573 RTL_W8 (Cfg9346, Cfg9346_Unlock);
1574 RTL_W8 (Config1, tp->mii.full_duplex ? 0x60 : 0x20); 1574 RTL_W8 (Config1, tp->mii.full_duplex ? 0x60 : 0x20);
1575 RTL_W8 (Cfg9346, Cfg9346_Lock); 1575 RTL_W8 (Cfg9346, Cfg9346_Lock);
1576 #endif 1576 #endif
1577 } 1577 }
1578 } 1578 }
1579 1579
1580 next_tick = HZ * 60; 1580 next_tick = HZ * 60;
1581 1581
1582 rtl8139_tune_twister (dev, tp); 1582 rtl8139_tune_twister (dev, tp);
1583 1583
1584 DPRINTK ("%s: Media selection tick, Link partner %4.4x.\n", 1584 DPRINTK ("%s: Media selection tick, Link partner %4.4x.\n",
1585 dev->name, RTL_R16 (NWayLPAR)); 1585 dev->name, RTL_R16 (NWayLPAR));
1586 DPRINTK ("%s: Other registers are IntMask %4.4x IntStatus %4.4x\n", 1586 DPRINTK ("%s: Other registers are IntMask %4.4x IntStatus %4.4x\n",
1587 dev->name, RTL_R16 (IntrMask), RTL_R16 (IntrStatus)); 1587 dev->name, RTL_R16 (IntrMask), RTL_R16 (IntrStatus));
1588 DPRINTK ("%s: Chip config %2.2x %2.2x.\n", 1588 DPRINTK ("%s: Chip config %2.2x %2.2x.\n",
1589 dev->name, RTL_R8 (Config0), 1589 dev->name, RTL_R8 (Config0),
1590 RTL_R8 (Config1)); 1590 RTL_R8 (Config1));
1591 } 1591 }

static void rtl8139_thread (struct work_struct *work)
{
	struct rtl8139_private *tp =
		container_of(work, struct rtl8139_private, thread.work);
	struct net_device *dev = tp->mii.dev;
	unsigned long thr_delay = next_tick;

	rtnl_lock();

	if (!netif_running(dev))
		goto out_unlock;

	if (tp->watchdog_fired) {
		tp->watchdog_fired = 0;
		rtl8139_tx_timeout_task(work);
	} else
		rtl8139_thread_iter(dev, tp, tp->mmio_addr);

	if (tp->have_thread)
		schedule_delayed_work(&tp->thread, thr_delay);
out_unlock:
	rtnl_unlock ();
}

static void rtl8139_start_thread(struct rtl8139_private *tp)
{
	tp->twistie = 0;
	if (tp->chipset == CH_8139_K)
		tp->twistie = 1;
	else if (tp->drv_flags & HAS_LNK_CHNG)
		return;

	tp->have_thread = 1;
	tp->watchdog_fired = 0;

	schedule_delayed_work(&tp->thread, next_tick);
}

static inline void rtl8139_tx_clear (struct rtl8139_private *tp)
{
	tp->cur_tx = 0;
	tp->dirty_tx = 0;

	/* XXX account for unsent Tx packets in tp->stats.tx_dropped */
}

static void rtl8139_tx_timeout_task (struct work_struct *work)
{
	struct rtl8139_private *tp =
		container_of(work, struct rtl8139_private, thread.work);
	struct net_device *dev = tp->mii.dev;
	void __iomem *ioaddr = tp->mmio_addr;
	int i;
	u8 tmp8;

	printk (KERN_DEBUG "%s: Transmit timeout, status %2.2x %4.4x %4.4x "
		"media %2.2x.\n", dev->name, RTL_R8 (ChipCmd),
		RTL_R16(IntrStatus), RTL_R16(IntrMask), RTL_R8(MediaStatus));
	/* Emit info to figure out what went wrong. */
	printk (KERN_DEBUG "%s: Tx queue start entry %ld dirty entry %ld.\n",
		dev->name, tp->cur_tx, tp->dirty_tx);
	for (i = 0; i < NUM_TX_DESC; i++)
		printk (KERN_DEBUG "%s: Tx descriptor %d is %8.8lx.%s\n",
			dev->name, i, RTL_R32 (TxStatus0 + (i * 4)),
			i == tp->dirty_tx % NUM_TX_DESC ?
				" (queue head)" : "");

	tp->xstats.tx_timeouts++;

	/* disable Tx ASAP, if not already */
	tmp8 = RTL_R8 (ChipCmd);
	if (tmp8 & CmdTxEnb)
		RTL_W8 (ChipCmd, CmdRxEnb);

	spin_lock_bh(&tp->rx_lock);
	/* Disable interrupts by clearing the interrupt mask. */
	RTL_W16 (IntrMask, 0x0000);

	/* Stop a shared interrupt from scavenging while we are. */
	spin_lock_irq(&tp->lock);
	rtl8139_tx_clear (tp);
	spin_unlock_irq(&tp->lock);

	/* ...and finally, reset everything */
	if (netif_running(dev)) {
		rtl8139_hw_start (dev);
		netif_wake_queue (dev);
	}
	spin_unlock_bh(&tp->rx_lock);
}

static void rtl8139_tx_timeout (struct net_device *dev)
{
	struct rtl8139_private *tp = netdev_priv(dev);

	tp->watchdog_fired = 1;
	if (!tp->have_thread) {
		INIT_DELAYED_WORK(&tp->thread, rtl8139_thread);
		schedule_delayed_work(&tp->thread, next_tick);
	}
}

static int rtl8139_start_xmit (struct sk_buff *skb, struct net_device *dev)
{
	struct rtl8139_private *tp = netdev_priv(dev);
	void __iomem *ioaddr = tp->mmio_addr;
	unsigned int entry;
	unsigned int len = skb->len;
	unsigned long flags;

	/* Calculate the next Tx descriptor entry. */
	entry = tp->cur_tx % NUM_TX_DESC;

	/* Note: the chip doesn't have auto-pad! */
	if (likely(len < TX_BUF_SIZE)) {
		if (len < ETH_ZLEN)
			memset(tp->tx_buf[entry], 0, ETH_ZLEN);
		skb_copy_and_csum_dev(skb, tp->tx_buf[entry]);
		dev_kfree_skb(skb);
	} else {
		dev_kfree_skb(skb);
		tp->stats.tx_dropped++;
		return 0;
	}

	spin_lock_irqsave(&tp->lock, flags);
	RTL_W32_F (TxStatus0 + (entry * sizeof (u32)),
		   tp->tx_flag | max(len, (unsigned int)ETH_ZLEN));

	dev->trans_start = jiffies;

	tp->cur_tx++;
	wmb();

	if ((tp->cur_tx - NUM_TX_DESC) == tp->dirty_tx)
		netif_stop_queue (dev);
	spin_unlock_irqrestore(&tp->lock, flags);

	if (netif_msg_tx_queued(tp))
		printk (KERN_DEBUG "%s: Queued Tx packet size %u to slot %d.\n",
			dev->name, len, entry);

	return 0;
}


static void rtl8139_tx_interrupt (struct net_device *dev,
				  struct rtl8139_private *tp,
				  void __iomem *ioaddr)
{
	unsigned long dirty_tx, tx_left;

	assert (dev != NULL);
	assert (ioaddr != NULL);

	dirty_tx = tp->dirty_tx;
	tx_left = tp->cur_tx - dirty_tx;
	while (tx_left > 0) {
		int entry = dirty_tx % NUM_TX_DESC;
		int txstatus;

		txstatus = RTL_R32 (TxStatus0 + (entry * sizeof (u32)));

		if (!(txstatus & (TxStatOK | TxUnderrun | TxAborted)))
			break;	/* It still hasn't been Txed */

		/* Note: TxCarrierLost is always asserted at 100Mbps. */
		if (txstatus & (TxOutOfWindow | TxAborted)) {
			/* There was a major error, log it. */
			if (netif_msg_tx_err(tp))
				printk(KERN_DEBUG "%s: Transmit error, Tx status %8.8x.\n",
				       dev->name, txstatus);
			tp->stats.tx_errors++;
			if (txstatus & TxAborted) {
				tp->stats.tx_aborted_errors++;
				RTL_W32 (TxConfig, TxClearAbt);
				RTL_W16 (IntrStatus, TxErr);
				wmb();
			}
			if (txstatus & TxCarrierLost)
				tp->stats.tx_carrier_errors++;
			if (txstatus & TxOutOfWindow)
				tp->stats.tx_window_errors++;
		} else {
			if (txstatus & TxUnderrun) {
				/* Add 64 to the Tx FIFO threshold. */
				if (tp->tx_flag < 0x00300000)
					tp->tx_flag += 0x00020000;
				tp->stats.tx_fifo_errors++;
			}
			tp->stats.collisions += (txstatus >> 24) & 15;
			tp->stats.tx_bytes += txstatus & 0x7ff;
			tp->stats.tx_packets++;
		}

		dirty_tx++;
		tx_left--;
	}

#ifndef RTL8139_NDEBUG
	if (tp->cur_tx - dirty_tx > NUM_TX_DESC) {
		printk (KERN_ERR "%s: Out-of-sync dirty pointer, %ld vs. %ld.\n",
			dev->name, dirty_tx, tp->cur_tx);
		dirty_tx += NUM_TX_DESC;
	}
#endif /* RTL8139_NDEBUG */

	/* only wake the queue if we did work, and the queue is stopped */
	if (tp->dirty_tx != dirty_tx) {
		tp->dirty_tx = dirty_tx;
		mb();
		netif_wake_queue (dev);
	}
}


/* TODO: clean this up!  Rx reset need not be this intensive */
static void rtl8139_rx_err (u32 rx_status, struct net_device *dev,
			    struct rtl8139_private *tp, void __iomem *ioaddr)
{
	u8 tmp8;
#ifdef CONFIG_8139_OLD_RX_RESET
	int tmp_work;
#endif

	if (netif_msg_rx_err (tp))
		printk(KERN_DEBUG "%s: Ethernet frame had errors, status %8.8x.\n",
		       dev->name, rx_status);
	tp->stats.rx_errors++;
	if (!(rx_status & RxStatusOK)) {
		if (rx_status & RxTooLong) {
			DPRINTK ("%s: Oversized Ethernet frame, status %4.4x!\n",
				 dev->name, rx_status);
			/* A.C.: The chip hangs here. */
		}
		if (rx_status & (RxBadSymbol | RxBadAlign))
			tp->stats.rx_frame_errors++;
		if (rx_status & (RxRunt | RxTooLong))
			tp->stats.rx_length_errors++;
		if (rx_status & RxCRCErr)
			tp->stats.rx_crc_errors++;
	} else {
		tp->xstats.rx_lost_in_ring++;
	}

#ifndef CONFIG_8139_OLD_RX_RESET
	tmp8 = RTL_R8 (ChipCmd);
	RTL_W8 (ChipCmd, tmp8 & ~CmdRxEnb);
	RTL_W8 (ChipCmd, tmp8);
	RTL_W32 (RxConfig, tp->rx_config);
	tp->cur_rx = 0;
#else
	/* Reset the receiver, based on RealTek recommendation. (Bug?) */

	/* disable receive */
	RTL_W8_F (ChipCmd, CmdTxEnb);
	tmp_work = 200;
	while (--tmp_work > 0) {
		udelay(1);
		tmp8 = RTL_R8 (ChipCmd);
		if (!(tmp8 & CmdRxEnb))
			break;
	}
	if (tmp_work <= 0)
		printk (KERN_WARNING PFX "rx stop wait too long\n");
	/* restart receive */
	tmp_work = 200;
	while (--tmp_work > 0) {
		RTL_W8_F (ChipCmd, CmdRxEnb | CmdTxEnb);
		udelay(1);
		tmp8 = RTL_R8 (ChipCmd);
		if ((tmp8 & CmdRxEnb) && (tmp8 & CmdTxEnb))
			break;
	}
	if (tmp_work <= 0)
		printk (KERN_WARNING PFX "tx/rx enable wait too long\n");

	/* and reinitialize all rx related registers */
	RTL_W8_F (Cfg9346, Cfg9346_Unlock);
	/* Must enable Tx/Rx before setting transfer thresholds! */
	RTL_W8 (ChipCmd, CmdRxEnb | CmdTxEnb);

	tp->rx_config = rtl8139_rx_config | AcceptBroadcast | AcceptMyPhys;
	RTL_W32 (RxConfig, tp->rx_config);
	tp->cur_rx = 0;

	DPRINTK("init buffer addresses\n");

	/* Lock Config[01234] and BMCR register writes */
	RTL_W8 (Cfg9346, Cfg9346_Lock);

	/* init Rx ring buffer DMA address */
	RTL_W32_F (RxBuf, tp->rx_ring_dma);

	/* A.C.: Reset the multicast list. */
	__set_rx_mode (dev);
#endif
}

#if RX_BUF_IDX == 3
static __inline__ void wrap_copy(struct sk_buff *skb, const unsigned char *ring,
				 u32 offset, unsigned int size)
{
	u32 left = RX_BUF_LEN - offset;

	if (size > left) {
		skb_copy_to_linear_data(skb, ring + offset, left);
		skb_copy_to_linear_data_offset(skb, left, ring, size - left);
	} else
		skb_copy_to_linear_data(skb, ring + offset, size);
}
#endif
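wrap_copy above splits one frame copy in two when the frame wraps past the end of the receive ring: first the tail of the ring, then the remainder from the ring's start. A minimal user-space sketch of the same split-copy logic (ring_copy, dst, and RING_LEN are hypothetical stand-ins for the skb helpers and RX_BUF_LEN, not kernel APIs):

```c
#include <string.h>

#define RING_LEN 16	/* stand-in for RX_BUF_LEN */

/* Copy `size` bytes starting at `offset` out of a circular buffer,
 * wrapping around to the start of the ring when needed.  This mirrors
 * the two skb_copy_to_linear_data* calls in wrap_copy above. */
static void ring_copy(unsigned char *dst, const unsigned char *ring,
		      unsigned int offset, unsigned int size)
{
	unsigned int left = RING_LEN - offset;

	if (size > left) {
		memcpy(dst, ring + offset, left);	/* tail of the ring */
		memcpy(dst + left, ring, size - left);	/* wrapped head */
	} else {
		memcpy(dst, ring + offset, size);	/* no wrap needed */
	}
}
```

The driver only needs this for the largest ring size (RX_BUF_IDX == 3); smaller configurations map the buffer so that a plain linear copy always works.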

static void rtl8139_isr_ack(struct rtl8139_private *tp)
{
	void __iomem *ioaddr = tp->mmio_addr;
	u16 status;

	status = RTL_R16 (IntrStatus) & RxAckBits;

	/* Clear out errors and receive interrupts */
	if (likely(status != 0)) {
		if (unlikely(status & (RxFIFOOver | RxOverflow))) {
			tp->stats.rx_errors++;
			if (status & RxFIFOOver)
				tp->stats.rx_fifo_errors++;
		}
		RTL_W16_F (IntrStatus, RxAckBits);
	}
}

static int rtl8139_rx(struct net_device *dev, struct rtl8139_private *tp,
		      int budget)
{
	void __iomem *ioaddr = tp->mmio_addr;
	int received = 0;
	unsigned char *rx_ring = tp->rx_ring;
	unsigned int cur_rx = tp->cur_rx;
	unsigned int rx_size = 0;

	DPRINTK ("%s: In rtl8139_rx(), current %4.4x BufAddr %4.4x,"
		 " free to %4.4x, Cmd %2.2x.\n", dev->name, (u16)cur_rx,
		 RTL_R16 (RxBufAddr),
		 RTL_R16 (RxBufPtr), RTL_R8 (ChipCmd));

	while (netif_running(dev) && received < budget
	       && (RTL_R8 (ChipCmd) & RxBufEmpty) == 0) {
		u32 ring_offset = cur_rx % RX_BUF_LEN;
		u32 rx_status;
		unsigned int pkt_size;
		struct sk_buff *skb;

		rmb();

		/* read size+status of next frame from DMA ring buffer */
-		rx_status = le32_to_cpu (*(u32 *) (rx_ring + ring_offset));
+		rx_status = le32_to_cpu (*(__le32 *) (rx_ring + ring_offset));
		rx_size = rx_status >> 16;
		pkt_size = rx_size - 4;

		if (netif_msg_rx_status(tp))
			printk(KERN_DEBUG "%s: rtl8139_rx() status %4.4x, size %4.4x,"
			       " cur %4.4x.\n", dev->name, rx_status,
			       rx_size, cur_rx);
#if RTL8139_DEBUG > 2
		{
			int i;
			DPRINTK ("%s: Frame contents ", dev->name);
			for (i = 0; i < 70; i++)
				printk (" %2.2x",
					rx_ring[ring_offset + i]);
			printk (".\n");
		}
#endif

		/* Packet copy from FIFO still in progress.
		 * Theoretically, this should never happen
		 * since EarlyRx is disabled.
		 */
		if (unlikely(rx_size == 0xfff0)) {
			if (!tp->fifo_copy_timeout)
				tp->fifo_copy_timeout = jiffies + 2;
			else if (time_after(jiffies, tp->fifo_copy_timeout)) {
				DPRINTK ("%s: hung FIFO. Reset.", dev->name);
				rx_size = 0;
				goto no_early_rx;
			}
			if (netif_msg_intr(tp)) {
				printk(KERN_DEBUG "%s: fifo copy in progress.",
				       dev->name);
			}
			tp->xstats.early_rx++;
			break;
		}

no_early_rx:
		tp->fifo_copy_timeout = 0;

		/* If Rx err or invalid rx_size/rx_status received
		 * (which happens if we get lost in the ring),
		 * Rx process gets reset, so we abort any further
		 * Rx processing.
		 */
		if (unlikely((rx_size > (MAX_ETH_FRAME_SIZE+4)) ||
			     (rx_size < 8) ||
			     (!(rx_status & RxStatusOK)))) {
			rtl8139_rx_err (rx_status, dev, tp, ioaddr);
			received = -1;
			goto out;
		}

		/* Malloc up new buffer, compatible with net-2e. */
		/* Omit the four octet CRC from the length. */

		skb = dev_alloc_skb (pkt_size + 2);
		if (likely(skb)) {
			skb_reserve (skb, 2);	/* 16 byte align the IP fields. */
#if RX_BUF_IDX == 3
			wrap_copy(skb, rx_ring, ring_offset+4, pkt_size);
#else
			skb_copy_to_linear_data (skb, &rx_ring[ring_offset + 4], pkt_size);
#endif
			skb_put (skb, pkt_size);

			skb->protocol = eth_type_trans (skb, dev);

			dev->last_rx = jiffies;
			tp->stats.rx_bytes += pkt_size;
			tp->stats.rx_packets++;

			netif_receive_skb (skb);
		} else {
			if (net_ratelimit())
				printk (KERN_WARNING
					"%s: Memory squeeze, dropping packet.\n",
					dev->name);
			tp->stats.rx_dropped++;
		}
		received++;

		cur_rx = (cur_rx + rx_size + 4 + 3) & ~3;
		RTL_W16 (RxBufPtr, (u16) (cur_rx - 16));

		rtl8139_isr_ack(tp);
	}

	if (unlikely(!received || rx_size == 0xfff0))
		rtl8139_isr_ack(tp);

#if RTL8139_DEBUG > 1
	DPRINTK ("%s: Done rtl8139_rx(), current %4.4x BufAddr %4.4x,"
		 " free to %4.4x, Cmd %2.2x.\n", dev->name, cur_rx,
		 RTL_R16 (RxBufAddr),
		 RTL_R16 (RxBufPtr), RTL_R8 (ChipCmd));
#endif

	tp->cur_rx = cur_rx;

	/*
	 * The receive buffer should be mostly empty.
	 * Tell NAPI to reenable the Rx irq.
	 */
	if (tp->fifo_copy_timeout)
		received = budget;

out:
	return received;
}
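The u32 → __le32 cast in the rx_status read above is the substance of this commit for 8139too: the descriptor word sits in the DMA ring in little-endian byte order, and annotating the pointer as __le32 lets sparse flag any access that skips the le32_to_cpu conversion. A user-space sketch of what that conversion and the subsequent field split do (the _sketch helpers below are simplified stand-ins, not the kernel's definitions; in the kernel, __le32 additionally carries a __bitwise attribute that only sparse sees):

```c
#include <stdint.h>

/* Assemble a host-order value from four little-endian bytes, so the
 * result is the same on little- and big-endian hosts alike. */
static uint32_t le32_to_cpu_sketch(const unsigned char *p)
{
	return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
	       ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

/* Split the ring-buffer header word the same way rtl8139_rx does:
 * low 16 bits are status flags, high 16 bits are the frame size. */
static void parse_rx_header(const unsigned char *ring_slot,
			    uint32_t *status, unsigned int *size)
{
	uint32_t rx_status = le32_to_cpu_sketch(ring_slot);

	*status = rx_status & 0xffff;
	*size = rx_status >> 16;
}
```

On a little-endian host le32_to_cpu compiles to nothing, so the annotation costs no code; the payoff is purely that sparse can now prove the byte-order handling is consistent.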


static void rtl8139_weird_interrupt (struct net_device *dev,
				     struct rtl8139_private *tp,
				     void __iomem *ioaddr,
				     int status, int link_changed)
{
	DPRINTK ("%s: Abnormal interrupt, status %8.8x.\n",
		 dev->name, status);

	assert (dev != NULL);
	assert (tp != NULL);
	assert (ioaddr != NULL);

	/* Update the error count. */
	tp->stats.rx_missed_errors += RTL_R32 (RxMissed);
	RTL_W32 (RxMissed, 0);

	if ((status & RxUnderrun) && link_changed &&
	    (tp->drv_flags & HAS_LNK_CHNG)) {
		rtl_check_media(dev, 0);
		status &= ~RxUnderrun;
	}

	if (status & (RxUnderrun | RxErr))
		tp->stats.rx_errors++;

	if (status & PCSTimeout)
		tp->stats.rx_length_errors++;
	if (status & RxUnderrun)
		tp->stats.rx_fifo_errors++;
	if (status & PCIErr) {
		u16 pci_cmd_status;
		pci_read_config_word (tp->pci_dev, PCI_STATUS, &pci_cmd_status);
		pci_write_config_word (tp->pci_dev, PCI_STATUS, pci_cmd_status);

		printk (KERN_ERR "%s: PCI Bus error %4.4x.\n",
			dev->name, pci_cmd_status);
	}
}
2100 2100
2101 static int rtl8139_poll(struct napi_struct *napi, int budget) 2101 static int rtl8139_poll(struct napi_struct *napi, int budget)
2102 { 2102 {
2103 struct rtl8139_private *tp = container_of(napi, struct rtl8139_private, napi); 2103 struct rtl8139_private *tp = container_of(napi, struct rtl8139_private, napi);
2104 struct net_device *dev = tp->dev; 2104 struct net_device *dev = tp->dev;
2105 void __iomem *ioaddr = tp->mmio_addr; 2105 void __iomem *ioaddr = tp->mmio_addr;
2106 int work_done; 2106 int work_done;
2107 2107
2108 spin_lock(&tp->rx_lock); 2108 spin_lock(&tp->rx_lock);
2109 work_done = 0; 2109 work_done = 0;
2110 if (likely(RTL_R16(IntrStatus) & RxAckBits)) 2110 if (likely(RTL_R16(IntrStatus) & RxAckBits))
2111 work_done += rtl8139_rx(dev, tp, budget); 2111 work_done += rtl8139_rx(dev, tp, budget);
2112 2112
2113 if (work_done < budget) { 2113 if (work_done < budget) {
2114 unsigned long flags; 2114 unsigned long flags;
2115 /* 2115 /*
2116 * Order is important since data can get interrupted 2116 * Order is important since data can get interrupted
2117 * again when we think we are done. 2117 * again when we think we are done.
2118 */ 2118 */
2119 spin_lock_irqsave(&tp->lock, flags); 2119 spin_lock_irqsave(&tp->lock, flags);
2120 RTL_W16_F(IntrMask, rtl8139_intr_mask); 2120 RTL_W16_F(IntrMask, rtl8139_intr_mask);
2121 __netif_rx_complete(dev, napi); 2121 __netif_rx_complete(dev, napi);
2122 spin_unlock_irqrestore(&tp->lock, flags); 2122 spin_unlock_irqrestore(&tp->lock, flags);
2123 } 2123 }
2124 spin_unlock(&tp->rx_lock); 2124 spin_unlock(&tp->rx_lock);
2125 2125
2126 return work_done; 2126 return work_done;
2127 } 2127 }
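The budget contract that `rtl8139_poll` follows can be sketched outside the kernel. This is only an illustration of NAPI's accounting, not kernel code: `ring_pending`, `irq_enabled` and `poll_sketch` are invented stand-ins for the Rx ring state, the IntrMask register and the poll callback.

```c
#include <assert.h>

/* Hypothetical stand-ins: frames waiting in the Rx ring, and a flag
 * playing the role of the IntrMask register. */
static int ring_pending;
static int irq_enabled;

/* Drain up to 'budget' frames, mirroring the rtl8139_poll() contract:
 * report how much work was done, and re-enable interrupts only when
 * the ring was emptied before the budget ran out. */
static int poll_sketch(int budget)
{
	int work_done = 0;

	while (ring_pending > 0 && work_done < budget) {
		ring_pending--;		/* "process" one frame */
		work_done++;
	}
	if (work_done < budget)
		irq_enabled = 1;	/* IntrMask restore + napi complete */
	return work_done;
}
```

Returning `work_done == budget` keeps the device in polling mode; only a partial budget hands it back to interrupt mode, which is why the re-enable above sits behind the `work_done < budget` test.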

/* The interrupt handler does all of the Rx thread work and cleans up
   after the Tx thread. */
static irqreturn_t rtl8139_interrupt (int irq, void *dev_instance)
{
	struct net_device *dev = (struct net_device *) dev_instance;
	struct rtl8139_private *tp = netdev_priv(dev);
	void __iomem *ioaddr = tp->mmio_addr;
	u16 status, ackstat;
	int link_changed = 0; /* avoid bogus "uninit" warning */
	int handled = 0;

	spin_lock (&tp->lock);
	status = RTL_R16 (IntrStatus);

	/* shared irq? */
	if (unlikely((status & rtl8139_intr_mask) == 0))
		goto out;

	handled = 1;

	/* h/w no longer present (hotplug?) or major error, bail */
	if (unlikely(status == 0xFFFF))
		goto out;

	/* close possible races with dev_close */
	if (unlikely(!netif_running(dev))) {
		RTL_W16 (IntrMask, 0);
		goto out;
	}

	/* Acknowledge all of the current interrupt sources ASAP, but
	   first get an additional status bit from CSCR. */
	if (unlikely(status & RxUnderrun))
		link_changed = RTL_R16 (CSCR) & CSCR_LinkChangeBit;

	ackstat = status & ~(RxAckBits | TxErr);
	if (ackstat)
		RTL_W16 (IntrStatus, ackstat);

	/* Receive packets are processed by the poll routine.
	   If not running, start it now. */
	if (status & RxAckBits) {
		if (netif_rx_schedule_prep(dev, &tp->napi)) {
			RTL_W16_F (IntrMask, rtl8139_norx_intr_mask);
			__netif_rx_schedule(dev, &tp->napi);
		}
	}

	/* Check uncommon events with one test. */
	if (unlikely(status & (PCIErr | PCSTimeout | RxUnderrun | RxErr)))
		rtl8139_weird_interrupt (dev, tp, ioaddr,
					 status, link_changed);

	if (status & (TxOK | TxErr)) {
		rtl8139_tx_interrupt (dev, tp, ioaddr);
		if (status & TxErr)
			RTL_W16 (IntrStatus, TxErr);
	}
 out:
	spin_unlock (&tp->lock);

	DPRINTK ("%s: exiting interrupt, intr_status=%#4.4x.\n",
		 dev->name, RTL_R16 (IntrStatus));
	return IRQ_RETVAL(handled);
}
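The selective acknowledge above relies on IntrStatus being a write-one-to-clear register: the handler acks everything *except* the Rx bits (left for the poll routine) and TxErr (left until `rtl8139_tx_interrupt` has run). A tiny model of that masking, with bit names and values invented purely for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Invented bit layout for illustration only, not the chip's real one. */
enum { RX_OK = 0x0001, TX_OK = 0x0004, TX_ERR = 0x0008, PCI_ERR = 0x8000 };
#define RX_ACK_BITS RX_OK

static uint16_t intr_status;

/* Model of a write-one-to-clear register: writing a 1 clears that bit. */
static void write_intr_status(uint16_t val)
{
	intr_status &= (uint16_t)~val;
}
```

Acking `status & ~(RX_ACK_BITS | TX_ERR)` clears the error/Tx-done sources immediately while the Rx bits stay pending for the poller.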

#ifdef CONFIG_NET_POLL_CONTROLLER
/*
 * Polling receive - used by netconsole and other diagnostic tools
 * to allow network i/o with interrupts disabled.
 */
static void rtl8139_poll_controller(struct net_device *dev)
{
	disable_irq(dev->irq);
	rtl8139_interrupt(dev->irq, dev);
	enable_irq(dev->irq);
}
#endif

static int rtl8139_close (struct net_device *dev)
{
	struct rtl8139_private *tp = netdev_priv(dev);
	void __iomem *ioaddr = tp->mmio_addr;
	unsigned long flags;

	netif_stop_queue(dev);
	napi_disable(&tp->napi);

	if (netif_msg_ifdown(tp))
		printk(KERN_DEBUG "%s: Shutting down ethercard, status was 0x%4.4x.\n",
			dev->name, RTL_R16 (IntrStatus));

	spin_lock_irqsave (&tp->lock, flags);

	/* Stop the chip's Tx and Rx DMA processes. */
	RTL_W8 (ChipCmd, 0);

	/* Disable interrupts by clearing the interrupt mask. */
	RTL_W16 (IntrMask, 0);

	/* Update the error counts. */
	tp->stats.rx_missed_errors += RTL_R32 (RxMissed);
	RTL_W32 (RxMissed, 0);

	spin_unlock_irqrestore (&tp->lock, flags);

	synchronize_irq (dev->irq);	/* racy, but that's ok here */
	free_irq (dev->irq, dev);

	rtl8139_tx_clear (tp);

	dma_free_coherent(&tp->pci_dev->dev, RX_BUF_TOT_LEN,
			  tp->rx_ring, tp->rx_ring_dma);
	dma_free_coherent(&tp->pci_dev->dev, TX_BUF_TOT_LEN,
			  tp->tx_bufs, tp->tx_bufs_dma);
	tp->rx_ring = NULL;
	tp->tx_bufs = NULL;

	/* Green! Put the chip in low-power mode. */
	RTL_W8 (Cfg9346, Cfg9346_Unlock);

	if (rtl_chip_info[tp->chipset].flags & HasHltClk)
		RTL_W8 (HltClk, 'H');	/* 'R' would leave the clock running. */

	return 0;
}


/* Get the ethtool Wake-on-LAN settings.  Assumes that wol points to
   kernel memory, *wol has been initialized as {ETHTOOL_GWOL}, and
   other threads or interrupts aren't messing with the 8139. */
static void rtl8139_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
{
	struct rtl8139_private *np = netdev_priv(dev);
	void __iomem *ioaddr = np->mmio_addr;

	spin_lock_irq(&np->lock);
	if (rtl_chip_info[np->chipset].flags & HasLWake) {
		u8 cfg3 = RTL_R8 (Config3);
		u8 cfg5 = RTL_R8 (Config5);

		wol->supported = WAKE_PHY | WAKE_MAGIC
			| WAKE_UCAST | WAKE_MCAST | WAKE_BCAST;

		wol->wolopts = 0;
		if (cfg3 & Cfg3_LinkUp)
			wol->wolopts |= WAKE_PHY;
		if (cfg3 & Cfg3_Magic)
			wol->wolopts |= WAKE_MAGIC;
		/* (KON)FIXME: See how netdev_set_wol() handles the
		   following constants. */
		if (cfg5 & Cfg5_UWF)
			wol->wolopts |= WAKE_UCAST;
		if (cfg5 & Cfg5_MWF)
			wol->wolopts |= WAKE_MCAST;
		if (cfg5 & Cfg5_BWF)
			wol->wolopts |= WAKE_BCAST;
	}
	spin_unlock_irq(&np->lock);
}


/* Set the ethtool Wake-on-LAN settings.  Return 0 or -errno.  Assumes
   that wol points to kernel memory and other threads or interrupts
   aren't messing with the 8139. */
static int rtl8139_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
{
	struct rtl8139_private *np = netdev_priv(dev);
	void __iomem *ioaddr = np->mmio_addr;
	u32 support;
	u8 cfg3, cfg5;

	support = ((rtl_chip_info[np->chipset].flags & HasLWake)
		   ? (WAKE_PHY | WAKE_MAGIC
		      | WAKE_UCAST | WAKE_MCAST | WAKE_BCAST)
		   : 0);
	if (wol->wolopts & ~support)
		return -EINVAL;

	spin_lock_irq(&np->lock);
	cfg3 = RTL_R8 (Config3) & ~(Cfg3_LinkUp | Cfg3_Magic);
	if (wol->wolopts & WAKE_PHY)
		cfg3 |= Cfg3_LinkUp;
	if (wol->wolopts & WAKE_MAGIC)
		cfg3 |= Cfg3_Magic;
	RTL_W8 (Cfg9346, Cfg9346_Unlock);
	RTL_W8 (Config3, cfg3);
	RTL_W8 (Cfg9346, Cfg9346_Lock);

	cfg5 = RTL_R8 (Config5) & ~(Cfg5_UWF | Cfg5_MWF | Cfg5_BWF);
	/* (KON)FIXME: These are untested.  We may have to set the
	   CRC0, Wakeup0 and LSBCRC0 registers too, but I have no
	   documentation. */
	if (wol->wolopts & WAKE_UCAST)
		cfg5 |= Cfg5_UWF;
	if (wol->wolopts & WAKE_MCAST)
		cfg5 |= Cfg5_MWF;
	if (wol->wolopts & WAKE_BCAST)
		cfg5 |= Cfg5_BWF;
	RTL_W8 (Config5, cfg5);	/* need not unlock via Cfg9346 */
	spin_unlock_irq(&np->lock);

	return 0;
}

static void rtl8139_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
{
	struct rtl8139_private *np = netdev_priv(dev);
	strcpy(info->driver, DRV_NAME);
	strcpy(info->version, DRV_VERSION);
	strcpy(info->bus_info, pci_name(np->pci_dev));
	info->regdump_len = np->regs_len;
}

static int rtl8139_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
{
	struct rtl8139_private *np = netdev_priv(dev);
	spin_lock_irq(&np->lock);
	mii_ethtool_gset(&np->mii, cmd);
	spin_unlock_irq(&np->lock);
	return 0;
}

static int rtl8139_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
{
	struct rtl8139_private *np = netdev_priv(dev);
	int rc;
	spin_lock_irq(&np->lock);
	rc = mii_ethtool_sset(&np->mii, cmd);
	spin_unlock_irq(&np->lock);
	return rc;
}

static int rtl8139_nway_reset(struct net_device *dev)
{
	struct rtl8139_private *np = netdev_priv(dev);
	return mii_nway_restart(&np->mii);
}

static u32 rtl8139_get_link(struct net_device *dev)
{
	struct rtl8139_private *np = netdev_priv(dev);
	return mii_link_ok(&np->mii);
}

static u32 rtl8139_get_msglevel(struct net_device *dev)
{
	struct rtl8139_private *np = netdev_priv(dev);
	return np->msg_enable;
}

static void rtl8139_set_msglevel(struct net_device *dev, u32 datum)
{
	struct rtl8139_private *np = netdev_priv(dev);
	np->msg_enable = datum;
}

/* TODO: we are too slack to do reg dumping for pio, for now */
#ifdef CONFIG_8139TOO_PIO
#define rtl8139_get_regs_len	NULL
#define rtl8139_get_regs	NULL
#else
static int rtl8139_get_regs_len(struct net_device *dev)
{
	struct rtl8139_private *np = netdev_priv(dev);
	return np->regs_len;
}

static void rtl8139_get_regs(struct net_device *dev, struct ethtool_regs *regs, void *regbuf)
{
	struct rtl8139_private *np = netdev_priv(dev);

	regs->version = RTL_REGS_VER;

	spin_lock_irq(&np->lock);
	memcpy_fromio(regbuf, np->mmio_addr, regs->len);
	spin_unlock_irq(&np->lock);
}
#endif /* CONFIG_8139TOO_PIO */

static int rtl8139_get_sset_count(struct net_device *dev, int sset)
{
	switch (sset) {
	case ETH_SS_STATS:
		return RTL_NUM_STATS;
	default:
		return -EOPNOTSUPP;
	}
}

static void rtl8139_get_ethtool_stats(struct net_device *dev, struct ethtool_stats *stats, u64 *data)
{
	struct rtl8139_private *np = netdev_priv(dev);

	data[0] = np->xstats.early_rx;
	data[1] = np->xstats.tx_buf_mapped;
	data[2] = np->xstats.tx_timeouts;
	data[3] = np->xstats.rx_lost_in_ring;
}

static void rtl8139_get_strings(struct net_device *dev, u32 stringset, u8 *data)
{
	memcpy(data, ethtool_stats_keys, sizeof(ethtool_stats_keys));
}

static const struct ethtool_ops rtl8139_ethtool_ops = {
	.get_drvinfo		= rtl8139_get_drvinfo,
	.get_settings		= rtl8139_get_settings,
	.set_settings		= rtl8139_set_settings,
	.get_regs_len		= rtl8139_get_regs_len,
	.get_regs		= rtl8139_get_regs,
	.nway_reset		= rtl8139_nway_reset,
	.get_link		= rtl8139_get_link,
	.get_msglevel		= rtl8139_get_msglevel,
	.set_msglevel		= rtl8139_set_msglevel,
	.get_wol		= rtl8139_get_wol,
	.set_wol		= rtl8139_set_wol,
	.get_strings		= rtl8139_get_strings,
	.get_sset_count		= rtl8139_get_sset_count,
	.get_ethtool_stats	= rtl8139_get_ethtool_stats,
};

static int netdev_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
	struct rtl8139_private *np = netdev_priv(dev);
	int rc;

	if (!netif_running(dev))
		return -EINVAL;

	spin_lock_irq(&np->lock);
	rc = generic_mii_ioctl(&np->mii, if_mii(rq), cmd, NULL);
	spin_unlock_irq(&np->lock);

	return rc;
}


static struct net_device_stats *rtl8139_get_stats (struct net_device *dev)
{
	struct rtl8139_private *tp = netdev_priv(dev);
	void __iomem *ioaddr = tp->mmio_addr;
	unsigned long flags;

	if (netif_running(dev)) {
		spin_lock_irqsave (&tp->lock, flags);
		tp->stats.rx_missed_errors += RTL_R32 (RxMissed);
		RTL_W32 (RxMissed, 0);
		spin_unlock_irqrestore (&tp->lock, flags);
	}

	return &tp->stats;
}

/* Set or clear the multicast filter for this adaptor.
   This routine is not state sensitive and need not be SMP locked. */

static void __set_rx_mode (struct net_device *dev)
{
	struct rtl8139_private *tp = netdev_priv(dev);
	void __iomem *ioaddr = tp->mmio_addr;
	u32 mc_filter[2];	/* Multicast hash filter */
	int i, rx_mode;
	u32 tmp;

	DPRINTK ("%s: rtl8139_set_rx_mode(%4.4x) done -- Rx config %8.8lx.\n",
		 dev->name, dev->flags, RTL_R32 (RxConfig));

	/* Note: do not reorder, GCC is clever about common statements. */
	if (dev->flags & IFF_PROMISC) {
		rx_mode =
		    AcceptBroadcast | AcceptMulticast | AcceptMyPhys |
		    AcceptAllPhys;
		mc_filter[1] = mc_filter[0] = 0xffffffff;
	} else if ((dev->mc_count > multicast_filter_limit)
		   || (dev->flags & IFF_ALLMULTI)) {
		/* Too many to filter perfectly -- accept all multicasts. */
		rx_mode = AcceptBroadcast | AcceptMulticast | AcceptMyPhys;
		mc_filter[1] = mc_filter[0] = 0xffffffff;
	} else {
		struct dev_mc_list *mclist;
		rx_mode = AcceptBroadcast | AcceptMyPhys;
		mc_filter[1] = mc_filter[0] = 0;
		for (i = 0, mclist = dev->mc_list; mclist && i < dev->mc_count;
		     i++, mclist = mclist->next) {
			int bit_nr = ether_crc(ETH_ALEN, mclist->dmi_addr) >> 26;

			mc_filter[bit_nr >> 5] |= 1 << (bit_nr & 31);
			rx_mode |= AcceptMulticast;
		}
	}

	/* We can safely update without stopping the chip. */
	tmp = rtl8139_rx_config | rx_mode;
	if (tp->rx_config != tmp) {
		RTL_W32_F (RxConfig, tmp);
		tp->rx_config = tmp;
	}
	RTL_W32_F (MAR0 + 0, mc_filter[0]);
	RTL_W32_F (MAR0 + 4, mc_filter[1]);
}
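The 64-bit MAR hash computed above can be reproduced in userspace. `ether_crc()` is the MSB-first Ethernet CRC-32 (register seeded with all ones, each byte fed LSB-first, polynomial 0x04C11DB7, no final inversion), and the top six bits of the CRC select one of the 64 filter bits. The reimplementation below is a sketch written from that published definition, not code taken from this driver:

```c
#include <assert.h>
#include <stdint.h>

/* MSB-first Ethernet CRC-32 in the style of the kernel's ether_crc(). */
static uint32_t ether_crc_sketch(int length, const uint8_t *data)
{
	uint32_t crc = 0xFFFFFFFF;

	while (--length >= 0) {
		uint8_t byte = *data++;
		int bit;

		for (bit = 0; bit < 8; bit++, byte >>= 1) {
			/* top register bit XOR next input bit */
			uint32_t mix = (crc >> 31) ^ (byte & 1);
			crc <<= 1;
			if (mix)
				crc ^= 0x04C11DB7;
		}
	}
	return crc;
}

/* The top six CRC bits pick one of the 64 MAR filter bits. */
static int mar_bit(const uint8_t addr[6])
{
	return (int)(ether_crc_sketch(6, addr) >> 26);
}
```

With this CRC the broadcast address lands on filter bit 63, i.e. bit 31 of `mc_filter[1]` (the MAR4..7 register), which is a handy sanity check for the `bit_nr >> 5` / `bit_nr & 31` split used in the loop above.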

static void rtl8139_set_rx_mode (struct net_device *dev)
{
	unsigned long flags;
	struct rtl8139_private *tp = netdev_priv(dev);

	spin_lock_irqsave (&tp->lock, flags);
	__set_rx_mode(dev);
	spin_unlock_irqrestore (&tp->lock, flags);
}

#ifdef CONFIG_PM

static int rtl8139_suspend (struct pci_dev *pdev, pm_message_t state)
{
	struct net_device *dev = pci_get_drvdata (pdev);
	struct rtl8139_private *tp = netdev_priv(dev);
	void __iomem *ioaddr = tp->mmio_addr;
	unsigned long flags;

	pci_save_state (pdev);

	if (!netif_running (dev))
		return 0;

	netif_device_detach (dev);

	spin_lock_irqsave (&tp->lock, flags);

	/* Disable interrupts, stop Tx and Rx. */
	RTL_W16 (IntrMask, 0);
	RTL_W8 (ChipCmd, 0);

	/* Update the error counts. */
	tp->stats.rx_missed_errors += RTL_R32 (RxMissed);
	RTL_W32 (RxMissed, 0);

	spin_unlock_irqrestore (&tp->lock, flags);

	pci_set_power_state (pdev, PCI_D3hot);

	return 0;
}


static int rtl8139_resume (struct pci_dev *pdev)
{
	struct net_device *dev = pci_get_drvdata (pdev);

	pci_restore_state (pdev);
	if (!netif_running (dev))
		return 0;
	pci_set_power_state (pdev, PCI_D0);
	rtl8139_init_ring (dev);
	rtl8139_hw_start (dev);
	netif_device_attach (dev);
	return 0;
}

#endif /* CONFIG_PM */


static struct pci_driver rtl8139_pci_driver = {
	.name		= DRV_NAME,
	.id_table	= rtl8139_pci_tbl,
	.probe		= rtl8139_init_one,
	.remove		= __devexit_p(rtl8139_remove_one),
#ifdef CONFIG_PM
	.suspend	= rtl8139_suspend,
	.resume		= rtl8139_resume,
#endif /* CONFIG_PM */
};


static int __init rtl8139_init_module (void)
{
	/* when we're a module, we always print a version message,
	 * even if no 8139 board is found.
	 */
#ifdef MODULE
	printk (KERN_INFO RTL8139_DRIVER_NAME "\n");
#endif

	return pci_register_driver(&rtl8139_pci_driver);
}


static void __exit rtl8139_cleanup_module (void)
{
	pci_unregister_driver (&rtl8139_pci_driver);
}


module_init(rtl8139_init_module);
module_exit(rtl8139_cleanup_module);
drivers/net/atp.c
/* atp.c: Attached (pocket) ethernet adapter driver for linux. */
/*
	This is a driver for commonly OEM pocket (parallel port)
	ethernet adapters based on the Realtek RTL8002 and RTL8012 chips.

	Written 1993-2000 by Donald Becker.

	This software may be used and distributed according to the terms of
	the GNU General Public License (GPL), incorporated herein by reference.
	Drivers based on or derived from this code fall under the GPL and must
	retain the authorship, copyright and license notice.  This file is not
	a complete program and may only be used when the entire operating
	system is licensed under the GPL.

	Copyright 1993 United States Government as represented by the Director,
	National Security Agency.  Copyright 1994-2000 retained by the original
	author, Donald Becker.  The timer-based reset code was supplied in 1995
	by Bill Carlson, wwc@super.org.

	The author may be reached as becker@scyld.com, or C/O
	Scyld Computing Corporation
	410 Severn Ave., Suite 210
	Annapolis MD 21403

	Support information and updates available at
	http://www.scyld.com/network/atp.html


	Modular support/softnet added by Alan Cox.
	_bit abuse fixed up by Alan Cox

*/

static const char version[] =
"atp.c:v1.09=ac 2002/10/01 Donald Becker <becker@scyld.com>\n";

/* The user-configurable values.
   These may be modified when a driver module is loaded. */

static int debug = 1;		/* 1 normal messages, 0 quiet .. 7 verbose. */
#define net_debug debug

/* Maximum events (Rx packets, etc.) to handle at each interrupt. */
static int max_interrupt_work = 15;

#define NUM_UNITS 2
/* The standard set of ISA module parameters. */
static int io[NUM_UNITS];
static int irq[NUM_UNITS];
static int xcvr[NUM_UNITS];	/* The data transfer mode. */

/* Operational parameters that are set at compile time. */
53 53
54 /* Time in jiffies before concluding the transmitter is hung. */ 54 /* Time in jiffies before concluding the transmitter is hung. */
55 #define TX_TIMEOUT (400*HZ/1000) 55 #define TX_TIMEOUT (400*HZ/1000)
56 56
57 /* 57 /*
58 This file is a device driver for the RealTek (aka AT-Lan-Tec) pocket 58 This file is a device driver for the RealTek (aka AT-Lan-Tec) pocket
59 ethernet adapter. This is a common low-cost OEM pocket ethernet 59 ethernet adapter. This is a common low-cost OEM pocket ethernet
60 adapter, sold under many names. 60 adapter, sold under many names.
61 61
62 Sources: 62 Sources:
63 This driver was written from the packet driver assembly code provided by 63 This driver was written from the packet driver assembly code provided by
64 Vincent Bono of AT-Lan-Tec. Ever try to figure out how a complicated 64 Vincent Bono of AT-Lan-Tec. Ever try to figure out how a complicated
65 device works just from the assembly code? It ain't pretty. The following 65 device works just from the assembly code? It ain't pretty. The following
66 description is written based on guesses and writing lots of special-purpose 66 description is written based on guesses and writing lots of special-purpose
67 code to test my theorized operation. 67 code to test my theorized operation.
68 68
69 In 1997 Realtek made available the documentation for the second generation 69 In 1997 Realtek made available the documentation for the second generation
70 RTL8012 chip, which has led to several driver improvements. 70 RTL8012 chip, which has led to several driver improvements.
71 http://www.realtek.com.tw/cn/cn.html 71 http://www.realtek.com.tw/cn/cn.html
72 72
73 Theory of Operation 73 Theory of Operation
74 74
75 The RTL8002 adapter seems to be built around a custom spin of the SEEQ 75 The RTL8002 adapter seems to be built around a custom spin of the SEEQ
76 controller core. It probably has a 16K or 64K internal packet buffer, of 76 controller core. It probably has a 16K or 64K internal packet buffer, of
77 which the first 4K is devoted to transmit and the rest to receive. 77 which the first 4K is devoted to transmit and the rest to receive.
78 The controller maintains the queue of received packets and the packet buffer 78 The controller maintains the queue of received packets and the packet buffer
79 access pointer internally, with only 'reset to beginning' and 'skip to next 79 access pointer internally, with only 'reset to beginning' and 'skip to next
80 packet' commands visible. The transmit packet queue holds two (or more?) 80 packet' commands visible. The transmit packet queue holds two (or more?)
81 packets: both 'retransmit this packet' (due to collision) and 'transmit next 81 packets: both 'retransmit this packet' (due to collision) and 'transmit next
82 packet' commands must be started by hand. 82 packet' commands must be started by hand.
83 83
84 The station address is stored in a standard bit-serial EEPROM which must be 84 The station address is stored in a standard bit-serial EEPROM which must be
85 read (ughh) by the device driver. (Provisions have been made for 85 read (ughh) by the device driver. (Provisions have been made for
86 substituting a 74S288 PROM, but I haven't gotten reports of any models 86 substituting a 74S288 PROM, but I haven't gotten reports of any models
87 using it.) Unlike built-in devices, a pocket adapter can temporarily lose 87 using it.) Unlike built-in devices, a pocket adapter can temporarily lose
88 power without indication to the device driver. The major effect is that 88 power without indication to the device driver. The major effect is that
89 the station address, receive filter (promiscuous, etc.) and transceiver 89 the station address, receive filter (promiscuous, etc.) and transceiver
90 must be reset. 90 must be reset.
91 91
92 The controller itself has 16 registers, some of which use only the lower 92 The controller itself has 16 registers, some of which use only the lower
93 bits. The registers are read and written 4 bits at a time. The four bit 93 bits. The registers are read and written 4 bits at a time. The four bit
94 register address is presented on the data lines along with a few additional 94 register address is presented on the data lines along with a few additional
95 timing and control bits. The data is then read from the status port or written 95 timing and control bits. The data is then read from the status port or written
96 to the data port. 96 to the data port.
97 97
98 Correction: the controller has two banks of 16 registers. The second 98 Correction: the controller has two banks of 16 registers. The second
99 bank contains only the multicast filter table (now used) and the EEPROM 99 bank contains only the multicast filter table (now used) and the EEPROM
100 access registers. 100 access registers.
101 101
102 Since the bulk data transfer of the actual packets through the slow 102 Since the bulk data transfer of the actual packets through the slow
103 parallel port dominates the driver's running time, four distinct data 103 parallel port dominates the driver's running time, four distinct data
104 (non-register) transfer modes are provided by the adapter, two in each 104 (non-register) transfer modes are provided by the adapter, two in each
105 direction. In the first mode timing for the nibble transfers is 105 direction. In the first mode timing for the nibble transfers is
106 provided through the data port. In the second mode the same timing is 106 provided through the data port. In the second mode the same timing is
107 provided through the control port. In either case the data is read from 107 provided through the control port. In either case the data is read from
108 the status port and written to the data port, just as when accessing 108 the status port and written to the data port, just as when accessing
109 registers. 109 registers.
110 110
111 In addition to the basic data transfer methods, several more modes are 111 In addition to the basic data transfer methods, several more modes are
112 created by adding some delay by doing multiple reads of the data to allow 112 created by adding some delay by doing multiple reads of the data to allow
113 it to stabilize. This delay seems to be needed on most machines. 113 it to stabilize. This delay seems to be needed on most machines.
114 114
115 The data transfer mode is stored in the 'dev->if_port' field. Its default 115 The data transfer mode is stored in the 'dev->if_port' field. Its default
116 value is '4'. It may be overridden at boot-time using the third parameter 116 value is '4'. It may be overridden at boot-time using the third parameter
117 to the "ether=..." initialization. 117 to the "ether=..." initialization.
118 118
119 The header file <atp.h> provides inline functions that encapsulate the 119 The header file <atp.h> provides inline functions that encapsulate the
120 register and data access methods. These functions are hand-tuned to 120 register and data access methods. These functions are hand-tuned to
121 generate reasonable object code. This header file also documents my 121 generate reasonable object code. This header file also documents my
122 interpretations of the device registers. 122 interpretations of the device registers.
123 */ 123 */
124 124
125 #include <linux/kernel.h> 125 #include <linux/kernel.h>
126 #include <linux/module.h> 126 #include <linux/module.h>
127 #include <linux/types.h> 127 #include <linux/types.h>
128 #include <linux/fcntl.h> 128 #include <linux/fcntl.h>
129 #include <linux/interrupt.h> 129 #include <linux/interrupt.h>
130 #include <linux/ioport.h> 130 #include <linux/ioport.h>
131 #include <linux/in.h> 131 #include <linux/in.h>
132 #include <linux/slab.h> 132 #include <linux/slab.h>
133 #include <linux/string.h> 133 #include <linux/string.h>
134 #include <linux/errno.h> 134 #include <linux/errno.h>
135 #include <linux/init.h> 135 #include <linux/init.h>
136 #include <linux/crc32.h> 136 #include <linux/crc32.h>
137 #include <linux/netdevice.h> 137 #include <linux/netdevice.h>
138 #include <linux/etherdevice.h> 138 #include <linux/etherdevice.h>
139 #include <linux/skbuff.h> 139 #include <linux/skbuff.h>
140 #include <linux/spinlock.h> 140 #include <linux/spinlock.h>
141 #include <linux/delay.h> 141 #include <linux/delay.h>
142 #include <linux/bitops.h> 142 #include <linux/bitops.h>
143 143
144 #include <asm/system.h> 144 #include <asm/system.h>
145 #include <asm/io.h> 145 #include <asm/io.h>
146 #include <asm/dma.h> 146 #include <asm/dma.h>
147 147
148 #include "atp.h" 148 #include "atp.h"
149 149
150 MODULE_AUTHOR("Donald Becker <becker@scyld.com>"); 150 MODULE_AUTHOR("Donald Becker <becker@scyld.com>");
151 MODULE_DESCRIPTION("RealTek RTL8002/8012 parallel port Ethernet driver"); 151 MODULE_DESCRIPTION("RealTek RTL8002/8012 parallel port Ethernet driver");
152 MODULE_LICENSE("GPL"); 152 MODULE_LICENSE("GPL");
153 153
154 module_param(max_interrupt_work, int, 0); 154 module_param(max_interrupt_work, int, 0);
155 module_param(debug, int, 0); 155 module_param(debug, int, 0);
156 module_param_array(io, int, NULL, 0); 156 module_param_array(io, int, NULL, 0);
157 module_param_array(irq, int, NULL, 0); 157 module_param_array(irq, int, NULL, 0);
158 module_param_array(xcvr, int, NULL, 0); 158 module_param_array(xcvr, int, NULL, 0);
159 MODULE_PARM_DESC(max_interrupt_work, "ATP maximum events handled per interrupt"); 159 MODULE_PARM_DESC(max_interrupt_work, "ATP maximum events handled per interrupt");
160 MODULE_PARM_DESC(debug, "ATP debug level (0-7)"); 160 MODULE_PARM_DESC(debug, "ATP debug level (0-7)");
161 MODULE_PARM_DESC(io, "ATP I/O base address(es)"); 161 MODULE_PARM_DESC(io, "ATP I/O base address(es)");
162 MODULE_PARM_DESC(irq, "ATP IRQ number(s)"); 162 MODULE_PARM_DESC(irq, "ATP IRQ number(s)");
163 MODULE_PARM_DESC(xcvr, "ATP transceiver(s) (0=internal, 1=external)"); 163 MODULE_PARM_DESC(xcvr, "ATP transceiver(s) (0=internal, 1=external)");
164 164
165 /* The number of low I/O ports used by the ethercard. */ 165 /* The number of low I/O ports used by the ethercard. */
166 #define ETHERCARD_TOTAL_SIZE 3 166 #define ETHERCARD_TOTAL_SIZE 3
167 167
168 /* Sequence to switch an 8012 from printer mux to ethernet mode. */ 168 /* Sequence to switch an 8012 from printer mux to ethernet mode. */
169 static char mux_8012[] = { 0xff, 0xf7, 0xff, 0xfb, 0xf3, 0xfb, 0xff, 0xf7,}; 169 static char mux_8012[] = { 0xff, 0xf7, 0xff, 0xfb, 0xf3, 0xfb, 0xff, 0xf7,};
170 170
171 struct net_local { 171 struct net_local {
172 spinlock_t lock; 172 spinlock_t lock;
173 struct net_device *next_module; 173 struct net_device *next_module;
174 struct timer_list timer; /* Media selection timer. */ 174 struct timer_list timer; /* Media selection timer. */
175 long last_rx_time; /* Last Rx, in jiffies, to handle Rx hang. */ 175 long last_rx_time; /* Last Rx, in jiffies, to handle Rx hang. */
176 int saved_tx_size; 176 int saved_tx_size;
177 unsigned int tx_unit_busy:1; 177 unsigned int tx_unit_busy:1;
178 unsigned char re_tx, /* Number of packet retransmissions. */ 178 unsigned char re_tx, /* Number of packet retransmissions. */
179 addr_mode, /* Current Rx filter e.g. promiscuous, etc. */ 179 addr_mode, /* Current Rx filter e.g. promiscuous, etc. */
180 pac_cnt_in_tx_buf, 180 pac_cnt_in_tx_buf,
181 chip_type; 181 chip_type;
182 }; 182 };
183 183
184 /* This code, written by wwc@super.org, resets the adapter every 184 /* This code, written by wwc@super.org, resets the adapter every
185 TIMED_CHECKER ticks. This recovers from an unknown error which 185 TIMED_CHECKER ticks. This recovers from an unknown error which
186 hangs the device. */ 186 hangs the device. */
187 #define TIMED_CHECKER (HZ/4) 187 #define TIMED_CHECKER (HZ/4)
188 #ifdef TIMED_CHECKER 188 #ifdef TIMED_CHECKER
189 #include <linux/timer.h> 189 #include <linux/timer.h>
190 static void atp_timed_checker(unsigned long ignored); 190 static void atp_timed_checker(unsigned long ignored);
191 #endif 191 #endif
192 192
193 /* Index to functions, as function prototypes. */ 193 /* Index to functions, as function prototypes. */
194 194
195 static int atp_probe1(long ioaddr); 195 static int atp_probe1(long ioaddr);
196 static void get_node_ID(struct net_device *dev); 196 static void get_node_ID(struct net_device *dev);
197 static unsigned short eeprom_op(long ioaddr, unsigned int cmd); 197 static unsigned short eeprom_op(long ioaddr, unsigned int cmd);
198 static int net_open(struct net_device *dev); 198 static int net_open(struct net_device *dev);
199 static void hardware_init(struct net_device *dev); 199 static void hardware_init(struct net_device *dev);
200 static void write_packet(long ioaddr, int length, unsigned char *packet, int pad, int mode); 200 static void write_packet(long ioaddr, int length, unsigned char *packet, int pad, int mode);
201 static void trigger_send(long ioaddr, int length); 201 static void trigger_send(long ioaddr, int length);
202 static int atp_send_packet(struct sk_buff *skb, struct net_device *dev); 202 static int atp_send_packet(struct sk_buff *skb, struct net_device *dev);
203 static irqreturn_t atp_interrupt(int irq, void *dev_id); 203 static irqreturn_t atp_interrupt(int irq, void *dev_id);
204 static void net_rx(struct net_device *dev); 204 static void net_rx(struct net_device *dev);
205 static void read_block(long ioaddr, int length, unsigned char *buffer, int data_mode); 205 static void read_block(long ioaddr, int length, unsigned char *buffer, int data_mode);
206 static int net_close(struct net_device *dev); 206 static int net_close(struct net_device *dev);
207 static void set_rx_mode_8002(struct net_device *dev); 207 static void set_rx_mode_8002(struct net_device *dev);
208 static void set_rx_mode_8012(struct net_device *dev); 208 static void set_rx_mode_8012(struct net_device *dev);
209 static void tx_timeout(struct net_device *dev); 209 static void tx_timeout(struct net_device *dev);
210 210
211 211
212 /* A list of all installed ATP devices, for removing the driver module. */ 212 /* A list of all installed ATP devices, for removing the driver module. */
213 static struct net_device *root_atp_dev; 213 static struct net_device *root_atp_dev;
214 214
215 /* Check for a network adapter of this type, and return '0' iff one exists. 215 /* Check for a network adapter of this type, and return '0' iff one exists.
216 If dev->base_addr == 0, probe all likely locations. 216 If dev->base_addr == 0, probe all likely locations.
217 If dev->base_addr == 1, always return failure. 217 If dev->base_addr == 1, always return failure.
218 If dev->base_addr == 2, allocate space for the device and return success 218 If dev->base_addr == 2, allocate space for the device and return success
219 (detachable devices only). 219 (detachable devices only).
220 220
221 FIXME: we should use the parport layer for this 221 FIXME: we should use the parport layer for this
222 */ 222 */
223 static int __init atp_init(void) 223 static int __init atp_init(void)
224 { 224 {
225 int *port, ports[] = {0x378, 0x278, 0x3bc, 0}; 225 int *port, ports[] = {0x378, 0x278, 0x3bc, 0};
226 int base_addr = io[0]; 226 int base_addr = io[0];
227 227
228 if (base_addr > 0x1ff) /* Check a single specified location. */ 228 if (base_addr > 0x1ff) /* Check a single specified location. */
229 return atp_probe1(base_addr); 229 return atp_probe1(base_addr);
230 else if (base_addr == 1) /* Don't probe at all. */ 230 else if (base_addr == 1) /* Don't probe at all. */
231 return -ENXIO; 231 return -ENXIO;
232 232
233 for (port = ports; *port; port++) { 233 for (port = ports; *port; port++) {
234 long ioaddr = *port; 234 long ioaddr = *port;
235 outb(0x57, ioaddr + PAR_DATA); 235 outb(0x57, ioaddr + PAR_DATA);
236 if (inb(ioaddr + PAR_DATA) != 0x57) 236 if (inb(ioaddr + PAR_DATA) != 0x57)
237 continue; 237 continue;
238 if (atp_probe1(ioaddr) == 0) 238 if (atp_probe1(ioaddr) == 0)
239 return 0; 239 return 0;
240 } 240 }
241 241
242 return -ENODEV; 242 return -ENODEV;
243 } 243 }
244 244
245 static int __init atp_probe1(long ioaddr) 245 static int __init atp_probe1(long ioaddr)
246 { 246 {
247 struct net_device *dev = NULL; 247 struct net_device *dev = NULL;
248 struct net_local *lp; 248 struct net_local *lp;
249 int saved_ctrl_reg, status, i; 249 int saved_ctrl_reg, status, i;
250 int res; 250 int res;
251 DECLARE_MAC_BUF(mac); 251 DECLARE_MAC_BUF(mac);
252 252
253 outb(0xff, ioaddr + PAR_DATA); 253 outb(0xff, ioaddr + PAR_DATA);
254 /* Save the original value of the Control register, in case we guessed 254 /* Save the original value of the Control register, in case we guessed
255 wrong. */ 255 wrong. */
256 saved_ctrl_reg = inb(ioaddr + PAR_CONTROL); 256 saved_ctrl_reg = inb(ioaddr + PAR_CONTROL);
257 if (net_debug > 3) 257 if (net_debug > 3)
258 printk("atp: Control register was %#2.2x.\n", saved_ctrl_reg); 258 printk("atp: Control register was %#2.2x.\n", saved_ctrl_reg);
259 /* IRQEN=0, SLCTB=high INITB=high, AUTOFDB=high, STBB=high. */ 259 /* IRQEN=0, SLCTB=high INITB=high, AUTOFDB=high, STBB=high. */
260 outb(0x04, ioaddr + PAR_CONTROL); 260 outb(0x04, ioaddr + PAR_CONTROL);
261 #ifndef final_version 261 #ifndef final_version
262 if (net_debug > 3) { 262 if (net_debug > 3) {
263 /* Turn off the printer multiplexer on the 8012. */ 263 /* Turn off the printer multiplexer on the 8012. */
264 for (i = 0; i < 8; i++) 264 for (i = 0; i < 8; i++)
265 outb(mux_8012[i], ioaddr + PAR_DATA); 265 outb(mux_8012[i], ioaddr + PAR_DATA);
266 write_reg(ioaddr, MODSEL, 0x00); 266 write_reg(ioaddr, MODSEL, 0x00);
267 printk("atp: Registers are "); 267 printk("atp: Registers are ");
268 for (i = 0; i < 32; i++) 268 for (i = 0; i < 32; i++)
269 printk(" %2.2x", read_nibble(ioaddr, i)); 269 printk(" %2.2x", read_nibble(ioaddr, i));
270 printk(".\n"); 270 printk(".\n");
271 } 271 }
272 #endif 272 #endif
273 /* Turn off the printer multiplexer on the 8012. */ 273 /* Turn off the printer multiplexer on the 8012. */
274 for (i = 0; i < 8; i++) 274 for (i = 0; i < 8; i++)
275 outb(mux_8012[i], ioaddr + PAR_DATA); 275 outb(mux_8012[i], ioaddr + PAR_DATA);
276 write_reg_high(ioaddr, CMR1, CMR1h_RESET); 276 write_reg_high(ioaddr, CMR1, CMR1h_RESET);
277 /* udelay() here? */ 277 /* udelay() here? */
278 status = read_nibble(ioaddr, CMR1); 278 status = read_nibble(ioaddr, CMR1);
279 279
280 if (net_debug > 3) { 280 if (net_debug > 3) {
281 printk(KERN_DEBUG "atp: Status nibble was %#2.2x..", status); 281 printk(KERN_DEBUG "atp: Status nibble was %#2.2x..", status);
282 for (i = 0; i < 32; i++) 282 for (i = 0; i < 32; i++)
283 printk(" %2.2x", read_nibble(ioaddr, i)); 283 printk(" %2.2x", read_nibble(ioaddr, i));
284 printk("\n"); 284 printk("\n");
285 } 285 }
286 286
287 if ((status & 0x78) != 0x08) { 287 if ((status & 0x78) != 0x08) {
288 /* The pocket adapter probe failed, restore the control register. */ 288 /* The pocket adapter probe failed, restore the control register. */
289 outb(saved_ctrl_reg, ioaddr + PAR_CONTROL); 289 outb(saved_ctrl_reg, ioaddr + PAR_CONTROL);
290 return -ENODEV; 290 return -ENODEV;
291 } 291 }
292 status = read_nibble(ioaddr, CMR2_h); 292 status = read_nibble(ioaddr, CMR2_h);
293 if ((status & 0x78) != 0x10) { 293 if ((status & 0x78) != 0x10) {
294 outb(saved_ctrl_reg, ioaddr + PAR_CONTROL); 294 outb(saved_ctrl_reg, ioaddr + PAR_CONTROL);
295 return -ENODEV; 295 return -ENODEV;
296 } 296 }
297 297
298 dev = alloc_etherdev(sizeof(struct net_local)); 298 dev = alloc_etherdev(sizeof(struct net_local));
299 if (!dev) 299 if (!dev)
300 return -ENOMEM; 300 return -ENOMEM;
301 301
302 /* Find the IRQ used by triggering an interrupt. */ 302 /* Find the IRQ used by triggering an interrupt. */
303 write_reg_byte(ioaddr, CMR2, 0x01); /* No accept mode, IRQ out. */ 303 write_reg_byte(ioaddr, CMR2, 0x01); /* No accept mode, IRQ out. */
304 write_reg_high(ioaddr, CMR1, CMR1h_RxENABLE | CMR1h_TxENABLE); /* Enable Tx and Rx. */ 304 write_reg_high(ioaddr, CMR1, CMR1h_RxENABLE | CMR1h_TxENABLE); /* Enable Tx and Rx. */
305 305
306 /* Omit autoIRQ routine for now. Use "table lookup" instead. Uhgggh. */ 306 /* Omit autoIRQ routine for now. Use "table lookup" instead. Uhgggh. */
307 if (irq[0]) 307 if (irq[0])
308 dev->irq = irq[0]; 308 dev->irq = irq[0];
309 else if (ioaddr == 0x378) 309 else if (ioaddr == 0x378)
310 dev->irq = 7; 310 dev->irq = 7;
311 else 311 else
312 dev->irq = 5; 312 dev->irq = 5;
313 write_reg_high(ioaddr, CMR1, CMR1h_TxRxOFF); /* Disable Tx and Rx units. */ 313 write_reg_high(ioaddr, CMR1, CMR1h_TxRxOFF); /* Disable Tx and Rx units. */
314 write_reg(ioaddr, CMR2, CMR2_NULL); 314 write_reg(ioaddr, CMR2, CMR2_NULL);
315 315
316 dev->base_addr = ioaddr; 316 dev->base_addr = ioaddr;
317 317
318 /* Read the station address PROM. */ 318 /* Read the station address PROM. */
319 get_node_ID(dev); 319 get_node_ID(dev);
320 320
321 #ifndef MODULE 321 #ifndef MODULE
322 if (net_debug) 322 if (net_debug)
323 printk(KERN_INFO "%s", version); 323 printk(KERN_INFO "%s", version);
324 #endif 324 #endif
325 325
326 printk(KERN_NOTICE "%s: Pocket adapter found at %#3lx, IRQ %d, " 326 printk(KERN_NOTICE "%s: Pocket adapter found at %#3lx, IRQ %d, "
327 "SAPROM %s.\n", 327 "SAPROM %s.\n",
328 dev->name, dev->base_addr, dev->irq, print_mac(mac, dev->dev_addr)); 328 dev->name, dev->base_addr, dev->irq, print_mac(mac, dev->dev_addr));
329 329
330 /* Reset the ethernet hardware and activate the printer pass-through. */ 330 /* Reset the ethernet hardware and activate the printer pass-through. */
331 write_reg_high(ioaddr, CMR1, CMR1h_RESET | CMR1h_MUX); 331 write_reg_high(ioaddr, CMR1, CMR1h_RESET | CMR1h_MUX);
332 332
333 lp = netdev_priv(dev); 333 lp = netdev_priv(dev);
334 lp->chip_type = RTL8002; 334 lp->chip_type = RTL8002;
335 lp->addr_mode = CMR2h_Normal; 335 lp->addr_mode = CMR2h_Normal;
336 spin_lock_init(&lp->lock); 336 spin_lock_init(&lp->lock);
337 337
338 /* For the ATP adapter the "if_port" is really the data transfer mode. */ 338 /* For the ATP adapter the "if_port" is really the data transfer mode. */
339 if (xcvr[0]) 339 if (xcvr[0])
340 dev->if_port = xcvr[0]; 340 dev->if_port = xcvr[0];
341 else 341 else
342 dev->if_port = (dev->mem_start & 0xf) ? (dev->mem_start & 0x7) : 4; 342 dev->if_port = (dev->mem_start & 0xf) ? (dev->mem_start & 0x7) : 4;
343 if (dev->mem_end & 0xf) 343 if (dev->mem_end & 0xf)
344 net_debug = dev->mem_end & 7; 344 net_debug = dev->mem_end & 7;
345 345
346 dev->open = net_open; 346 dev->open = net_open;
347 dev->stop = net_close; 347 dev->stop = net_close;
348 dev->hard_start_xmit = atp_send_packet; 348 dev->hard_start_xmit = atp_send_packet;
349 dev->set_multicast_list = 349 dev->set_multicast_list =
350 lp->chip_type == RTL8002 ? &set_rx_mode_8002 : &set_rx_mode_8012; 350 lp->chip_type == RTL8002 ? &set_rx_mode_8002 : &set_rx_mode_8012;
351 dev->tx_timeout = tx_timeout; 351 dev->tx_timeout = tx_timeout;
352 dev->watchdog_timeo = TX_TIMEOUT; 352 dev->watchdog_timeo = TX_TIMEOUT;
353 353
354 res = register_netdev(dev); 354 res = register_netdev(dev);
355 if (res) { 355 if (res) {
356 free_netdev(dev); 356 free_netdev(dev);
357 return res; 357 return res;
358 } 358 }
359 359
360 lp->next_module = root_atp_dev; 360 lp->next_module = root_atp_dev;
361 root_atp_dev = dev; 361 root_atp_dev = dev;
362 362
363 return 0; 363 return 0;
364 } 364 }
365 365
366 /* Read the station address PROM, usually a word-wide EEPROM. */ 366 /* Read the station address PROM, usually a word-wide EEPROM. */
367 static void __init get_node_ID(struct net_device *dev) 367 static void __init get_node_ID(struct net_device *dev)
368 { 368 {
369 long ioaddr = dev->base_addr; 369 long ioaddr = dev->base_addr;
370 int sa_offset = 0; 370 int sa_offset = 0;
371 int i; 371 int i;
372 372
373 write_reg(ioaddr, CMR2, CMR2_EEPROM); /* Point to the EEPROM control registers. */ 373 write_reg(ioaddr, CMR2, CMR2_EEPROM); /* Point to the EEPROM control registers. */
374 374
375 /* Some adapters have the station address at offset 15 instead of offset 375 /* Some adapters have the station address at offset 15 instead of offset
376 zero. Check for it, and fix it if needed. */ 376 zero. Check for it, and fix it if needed. */
377 if (eeprom_op(ioaddr, EE_READ(0)) == 0xffff) 377 if (eeprom_op(ioaddr, EE_READ(0)) == 0xffff)
378 sa_offset = 15; 378 sa_offset = 15;
379 379
380 for (i = 0; i < 3; i++) 380 for (i = 0; i < 3; i++)
381 ((u16 *)dev->dev_addr)[i] = 381 ((__be16 *)dev->dev_addr)[i] =
382 be16_to_cpu(eeprom_op(ioaddr, EE_READ(sa_offset + i))); 382 cpu_to_be16(eeprom_op(ioaddr, EE_READ(sa_offset + i)));
383 383
384 write_reg(ioaddr, CMR2, CMR2_NULL); 384 write_reg(ioaddr, CMR2, CMR2_NULL);
385 } 385 }
386 386
387 /* 387 /*
388 An EEPROM read command starts by shifting out 0x60+address, and then 388 An EEPROM read command starts by shifting out 0x60+address, and then
389 shifting in the serial data. See the NatSemi databook for details. 389 shifting in the serial data. See the NatSemi databook for details.
390 * ________________ 390 * ________________
391 * CS : __| 391 * CS : __|
392 * ___ ___ 392 * ___ ___
393 * CLK: ______| |___| | 393 * CLK: ______| |___| |
394 * __ _______ _______ 394 * __ _______ _______
395 * DI : __X_______X_______X 395 * DI : __X_______X_______X
396 * DO : _________X_______X 396 * DO : _________X_______X
397 */ 397 */
398 398
399 static unsigned short __init eeprom_op(long ioaddr, u32 cmd) 399 static unsigned short __init eeprom_op(long ioaddr, u32 cmd)
400 { 400 {
401 unsigned eedata_out = 0; 401 unsigned eedata_out = 0;
402 int num_bits = EE_CMD_SIZE; 402 int num_bits = EE_CMD_SIZE;
403 403
404 while (--num_bits >= 0) { 404 while (--num_bits >= 0) {
405 char outval = (cmd & (1<<num_bits)) ? EE_DATA_WRITE : 0; 405 char outval = (cmd & (1<<num_bits)) ? EE_DATA_WRITE : 0;
406 write_reg_high(ioaddr, PROM_CMD, outval | EE_CLK_LOW); 406 write_reg_high(ioaddr, PROM_CMD, outval | EE_CLK_LOW);
407 write_reg_high(ioaddr, PROM_CMD, outval | EE_CLK_HIGH); 407 write_reg_high(ioaddr, PROM_CMD, outval | EE_CLK_HIGH);
408 eedata_out <<= 1; 408 eedata_out <<= 1;
409 if (read_nibble(ioaddr, PROM_DATA) & EE_DATA_READ) 409 if (read_nibble(ioaddr, PROM_DATA) & EE_DATA_READ)
410 eedata_out++; 410 eedata_out++;
411 } 411 }
412 write_reg_high(ioaddr, PROM_CMD, EE_CLK_LOW & ~EE_CS); 412 write_reg_high(ioaddr, PROM_CMD, EE_CLK_LOW & ~EE_CS);
413 return eedata_out; 413 return eedata_out;
414 } 414 }
415 415
416 416
417 /* Open/initialize the board. This is called (in the current kernel) 417 /* Open/initialize the board. This is called (in the current kernel)
418 sometime after booting when the 'ifconfig' program is run. 418 sometime after booting when the 'ifconfig' program is run.
419 419
420 This routine sets everything up anew at each open, even 420 This routine sets everything up anew at each open, even
421 registers that "should" only need to be set once at boot, so that 421 registers that "should" only need to be set once at boot, so that
422 there is a non-reboot way to recover if something goes wrong. 422 there is a non-reboot way to recover if something goes wrong.
423 423
424 This is an attachable device: if there is no dev->priv entry then it wasn't 424 This is an attachable device: if there is no dev->priv entry then it wasn't
425 probed for at boot-time, and we need to probe for it again. 425 probed for at boot-time, and we need to probe for it again.
426 */ 426 */
427 static int net_open(struct net_device *dev) 427 static int net_open(struct net_device *dev)
428 { 428 {
429 struct net_local *lp = netdev_priv(dev); 429 struct net_local *lp = netdev_priv(dev);
430 int ret; 430 int ret;
431 431
432 /* The interrupt line is turned off (tri-stated) when the device isn't in 432 /* The interrupt line is turned off (tri-stated) when the device isn't in
433 use. That's especially important for "attached" interfaces where the 433 use. That's especially important for "attached" interfaces where the
434 port or interrupt may be shared. */ 434 port or interrupt may be shared. */
435 ret = request_irq(dev->irq, &atp_interrupt, 0, dev->name, dev); 435 ret = request_irq(dev->irq, &atp_interrupt, 0, dev->name, dev);
436 if (ret) 436 if (ret)
437 return ret; 437 return ret;
438 438
439 hardware_init(dev); 439 hardware_init(dev);
440 440
441 init_timer(&lp->timer); 441 init_timer(&lp->timer);
442 lp->timer.expires = jiffies + TIMED_CHECKER; 442 lp->timer.expires = jiffies + TIMED_CHECKER;
443 lp->timer.data = (unsigned long)dev; 443 lp->timer.data = (unsigned long)dev;
444 lp->timer.function = &atp_timed_checker; /* timer handler */ 444 lp->timer.function = &atp_timed_checker; /* timer handler */
445 add_timer(&lp->timer); 445 add_timer(&lp->timer);
446 446
447 netif_start_queue(dev); 447 netif_start_queue(dev);
448 return 0; 448 return 0;
449 } 449 }
450
451	/* This routine resets the hardware.  We initialize everything, assuming that
452	   the hardware may have been temporarily detached. */
453	static void hardware_init(struct net_device *dev)
454	{
455		struct net_local *lp = netdev_priv(dev);
456		long ioaddr = dev->base_addr;
457		int i;
458
459		/* Turn off the printer multiplexer on the 8012. */
460		for (i = 0; i < 8; i++)
461			outb(mux_8012[i], ioaddr + PAR_DATA);
462		write_reg_high(ioaddr, CMR1, CMR1h_RESET);
463
464		for (i = 0; i < 6; i++)
465			write_reg_byte(ioaddr, PAR0 + i, dev->dev_addr[i]);
466
467		write_reg_high(ioaddr, CMR2, lp->addr_mode);
468
469		if (net_debug > 2) {
470			printk(KERN_DEBUG "%s: Reset: current Rx mode %d.\n", dev->name,
471			       (read_nibble(ioaddr, CMR2_h) >> 3) & 0x0f);
472		}
473
474		write_reg(ioaddr, CMR2, CMR2_IRQOUT);
475		write_reg_high(ioaddr, CMR1, CMR1h_RxENABLE | CMR1h_TxENABLE);
476
477		/* Enable the interrupt line from the serial port. */
478		outb(Ctrl_SelData + Ctrl_IRQEN, ioaddr + PAR_CONTROL);
479
480		/* Unmask the interesting interrupts. */
481		write_reg(ioaddr, IMR, ISR_RxOK | ISR_TxErr | ISR_TxOK);
482		write_reg_high(ioaddr, IMR, ISRh_RxErr);
483
484		lp->tx_unit_busy = 0;
485		lp->pac_cnt_in_tx_buf = 0;
486		lp->saved_tx_size = 0;
487	}
488
489	static void trigger_send(long ioaddr, int length)
490	{
491		write_reg_byte(ioaddr, TxCNT0, length & 0xff);
492		write_reg(ioaddr, TxCNT1, length >> 8);
493		write_reg(ioaddr, CMR1, CMR1_Xmit);
494	}
495
496	static void write_packet(long ioaddr, int length, unsigned char *packet, int pad_len, int data_mode)
497	{
498		if (length & 1)
499		{
500			length++;
501			pad_len++;
502		}
503
504		outb(EOC+MAR, ioaddr + PAR_DATA);
505		if ((data_mode & 1) == 0) {
506			/* Write the packet out, starting with the write addr. */
507			outb(WrAddr+MAR, ioaddr + PAR_DATA);
508			do {
509				write_byte_mode0(ioaddr, *packet++);
510			} while (--length > pad_len);
511			do {
512				write_byte_mode0(ioaddr, 0);
513			} while (--length > 0);
514		} else {
515			/* Write the packet out in slow mode. */
516			unsigned char outbyte = *packet++;
517
518			outb(Ctrl_LNibWrite + Ctrl_IRQEN, ioaddr + PAR_CONTROL);
519			outb(WrAddr+MAR, ioaddr + PAR_DATA);
520
521			outb((outbyte & 0x0f)|0x40, ioaddr + PAR_DATA);
522			outb(outbyte & 0x0f, ioaddr + PAR_DATA);
523			outbyte >>= 4;
524			outb(outbyte & 0x0f, ioaddr + PAR_DATA);
525			outb(Ctrl_HNibWrite + Ctrl_IRQEN, ioaddr + PAR_CONTROL);
526			while (--length > pad_len)
527				write_byte_mode1(ioaddr, *packet++);
528			while (--length > 0)
529				write_byte_mode1(ioaddr, 0);
530		}
531		/* Terminate the Tx frame.  End of write: ECB. */
532		outb(0xff, ioaddr + PAR_DATA);
533		outb(Ctrl_HNibWrite | Ctrl_SelData | Ctrl_IRQEN, ioaddr + PAR_CONTROL);
534	}
535
536	static void tx_timeout(struct net_device *dev)
537	{
538		long ioaddr = dev->base_addr;
539
540		printk(KERN_WARNING "%s: Transmit timed out, %s?\n", dev->name,
541		       inb(ioaddr + PAR_CONTROL) & 0x10 ? "network cable problem"
542		       : "IRQ conflict");
543		dev->stats.tx_errors++;
544		/* Try to restart the adapter. */
545		hardware_init(dev);
546		dev->trans_start = jiffies;
547		netif_wake_queue(dev);
548		dev->stats.tx_errors++;
549	}
550
551	static int atp_send_packet(struct sk_buff *skb, struct net_device *dev)
552	{
553		struct net_local *lp = netdev_priv(dev);
554		long ioaddr = dev->base_addr;
555		int length;
556		unsigned long flags;
557
558		length = ETH_ZLEN < skb->len ? skb->len : ETH_ZLEN;
559
560		netif_stop_queue(dev);
561
562		/* Disable interrupts by writing 0x00 to the Interrupt Mask Register.
563		   This sequence must not be interrupted by an incoming packet. */
564
565		spin_lock_irqsave(&lp->lock, flags);
566		write_reg(ioaddr, IMR, 0);
567		write_reg_high(ioaddr, IMR, 0);
568		spin_unlock_irqrestore(&lp->lock, flags);
569
570		write_packet(ioaddr, length, skb->data, length-skb->len, dev->if_port);
571
572		lp->pac_cnt_in_tx_buf++;
573		if (lp->tx_unit_busy == 0) {
574			trigger_send(ioaddr, length);
575			lp->saved_tx_size = 0;		/* Redundant */
576			lp->re_tx = 0;
577			lp->tx_unit_busy = 1;
578		} else
579			lp->saved_tx_size = length;
580		/* Re-enable the LPT interrupts. */
581		write_reg(ioaddr, IMR, ISR_RxOK | ISR_TxErr | ISR_TxOK);
582		write_reg_high(ioaddr, IMR, ISRh_RxErr);
583
584		dev->trans_start = jiffies;
585		dev_kfree_skb(skb);
586		return 0;
587	}
588
589
590	/* The typical workload of the driver:
591	   Handle the network interface interrupts. */
592	static irqreturn_t atp_interrupt(int irq, void *dev_instance)
593	{
594		struct net_device *dev = dev_instance;
595		struct net_local *lp;
596		long ioaddr;
597		static int num_tx_since_rx;
598		int boguscount = max_interrupt_work;
599		int handled = 0;
600
601		ioaddr = dev->base_addr;
602		lp = netdev_priv(dev);
603
604		spin_lock(&lp->lock);
605
606		/* Disable additional spurious interrupts. */
607		outb(Ctrl_SelData, ioaddr + PAR_CONTROL);
608
609		/* The adapter's output is currently the IRQ line, switch it to data. */
610		write_reg(ioaddr, CMR2, CMR2_NULL);
611		write_reg(ioaddr, IMR, 0);
612
613		if (net_debug > 5) printk(KERN_DEBUG "%s: In interrupt ", dev->name);
614		while (--boguscount > 0) {
615			int status = read_nibble(ioaddr, ISR);
616			if (net_debug > 5) printk("loop status %02x..", status);
617
618			if (status & (ISR_RxOK<<3)) {
619				handled = 1;
620				write_reg(ioaddr, ISR, ISR_RxOK); /* Clear the Rx interrupt. */
621				do {
622					int read_status = read_nibble(ioaddr, CMR1);
623					if (net_debug > 6)
624						printk("handling Rx packet %02x..", read_status);
625					/* We acknowledged the normal Rx interrupt, so if the interrupt
626					   is still outstanding we must have a Rx error. */
627					if (read_status & (CMR1_IRQ << 3)) { /* Overrun. */
628						dev->stats.rx_over_errors++;
629						/* Set to no-accept mode long enough to remove a packet. */
630						write_reg_high(ioaddr, CMR2, CMR2h_OFF);
631						net_rx(dev);
632						/* Clear the interrupt and return to normal Rx mode. */
633						write_reg_high(ioaddr, ISR, ISRh_RxErr);
634						write_reg_high(ioaddr, CMR2, lp->addr_mode);
635					} else if ((read_status & (CMR1_BufEnb << 3)) == 0) {
636						net_rx(dev);
637						num_tx_since_rx = 0;
638					} else
639						break;
640				} while (--boguscount > 0);
641			} else if (status & ((ISR_TxErr + ISR_TxOK)<<3)) {
642				handled = 1;
643				if (net_debug > 6) printk("handling Tx done..");
644				/* Clear the Tx interrupt.  We should check for too many failures
645				   and reinitialize the adapter. */
646				write_reg(ioaddr, ISR, ISR_TxErr + ISR_TxOK);
647				if (status & (ISR_TxErr<<3)) {
648					dev->stats.collisions++;
649					if (++lp->re_tx > 15) {
650						dev->stats.tx_aborted_errors++;
651						hardware_init(dev);
652						break;
653					}
654					/* Attempt to retransmit. */
655					if (net_debug > 6) printk("attempting to ReTx");
656					write_reg(ioaddr, CMR1, CMR1_ReXmit + CMR1_Xmit);
657				} else {
658					/* Finish up the transmit. */
659					dev->stats.tx_packets++;
660					lp->pac_cnt_in_tx_buf--;
661					if (lp->saved_tx_size) {
662						trigger_send(ioaddr, lp->saved_tx_size);
663						lp->saved_tx_size = 0;
664						lp->re_tx = 0;
665					} else
666						lp->tx_unit_busy = 0;
667					netif_wake_queue(dev);	/* Inform upper layers. */
668				}
669				num_tx_since_rx++;
670			} else if (num_tx_since_rx > 8
671				   && time_after(jiffies, dev->last_rx + HZ)) {
672				if (net_debug > 2)
673					printk(KERN_DEBUG "%s: Missed packet? No Rx after %d Tx and "
674					       "%ld jiffies status %02x CMR1 %02x.\n", dev->name,
675					       num_tx_since_rx, jiffies - dev->last_rx, status,
676					       (read_nibble(ioaddr, CMR1) >> 3) & 15);
677				dev->stats.rx_missed_errors++;
678				hardware_init(dev);
679				num_tx_since_rx = 0;
680				break;
681			} else
682				break;
683		}
684
685		/* The following code fixes a rare (and very difficult to track down)
686		   problem where the adapter forgets its ethernet address. */
687		{
688			int i;
689			for (i = 0; i < 6; i++)
690				write_reg_byte(ioaddr, PAR0 + i, dev->dev_addr[i]);
691	#if 0 && defined(TIMED_CHECKER)
692			mod_timer(&lp->timer, jiffies + TIMED_CHECKER);
693	#endif
694		}
695
696		/* Tell the adapter that it can go back to using the output line as IRQ. */
697		write_reg(ioaddr, CMR2, CMR2_IRQOUT);
698		/* Enable the physical interrupt line, which is sure to be low until.. */
699		outb(Ctrl_SelData + Ctrl_IRQEN, ioaddr + PAR_CONTROL);
700		/* .. we enable the interrupt sources. */
701		write_reg(ioaddr, IMR, ISR_RxOK | ISR_TxErr | ISR_TxOK);
702		write_reg_high(ioaddr, IMR, ISRh_RxErr);	/* Hmmm, really needed? */
703
704		spin_unlock(&lp->lock);
705
706		if (net_debug > 5) printk("exiting interrupt.\n");
707		return IRQ_RETVAL(handled);
708	}
709
710	#ifdef TIMED_CHECKER
711	/* The following code fixes a rare (and very difficult to track down)
712	   problem where the adapter forgets its ethernet address. */
713	static void atp_timed_checker(unsigned long data)
714	{
715		struct net_device *dev = (struct net_device *)data;
716		long ioaddr = dev->base_addr;
717		struct net_local *lp = netdev_priv(dev);
718		int tickssofar = jiffies - lp->last_rx_time;
719		int i;
720
721		spin_lock(&lp->lock);
722		if (tickssofar > 2*HZ) {
723	#if 1
724			for (i = 0; i < 6; i++)
725				write_reg_byte(ioaddr, PAR0 + i, dev->dev_addr[i]);
726			lp->last_rx_time = jiffies;
727	#else
728			for (i = 0; i < 6; i++)
729				if (read_cmd_byte(ioaddr, PAR0 + i) != atp_timed_dev->dev_addr[i])
730				{
731					struct net_local *lp = netdev_priv(atp_timed_dev);
732					write_reg_byte(ioaddr, PAR0 + i, atp_timed_dev->dev_addr[i]);
733					if (i == 2)
734						dev->stats.tx_errors++;
735					else if (i == 3)
736						dev->stats.tx_dropped++;
737					else if (i == 4)
738						dev->stats.collisions++;
739					else
740						dev->stats.rx_errors++;
741				}
742	#endif
743		}
744		spin_unlock(&lp->lock);
745		lp->timer.expires = jiffies + TIMED_CHECKER;
746		add_timer(&lp->timer);
747	}
748	#endif
749
750	/* We have a good packet(s), get it/them out of the buffers. */
751	static void net_rx(struct net_device *dev)
752	{
753		struct net_local *lp = netdev_priv(dev);
754		long ioaddr = dev->base_addr;
755		struct rx_header rx_head;
756
757		/* Process the received packet. */
758		outb(EOC+MAR, ioaddr + PAR_DATA);
759		read_block(ioaddr, 8, (unsigned char*)&rx_head, dev->if_port);
760		if (net_debug > 5)
761			printk(KERN_DEBUG " rx_count %04x %04x %04x %04x..", rx_head.pad,
762			       rx_head.rx_count, rx_head.rx_status, rx_head.cur_addr);
763		if ((rx_head.rx_status & 0x77) != 0x01) {
764			dev->stats.rx_errors++;
765			if (rx_head.rx_status & 0x0004) dev->stats.rx_frame_errors++;
766			else if (rx_head.rx_status & 0x0002) dev->stats.rx_crc_errors++;
767			if (net_debug > 3)
768				printk(KERN_DEBUG "%s: Unknown ATP Rx error %04x.\n",
769				       dev->name, rx_head.rx_status);
770			if (rx_head.rx_status & 0x0020) {
771				dev->stats.rx_fifo_errors++;
772				write_reg_high(ioaddr, CMR1, CMR1h_TxENABLE);
773				write_reg_high(ioaddr, CMR1, CMR1h_RxENABLE | CMR1h_TxENABLE);
774			} else if (rx_head.rx_status & 0x0050)
775				hardware_init(dev);
776			return;
777		} else {
778			/* Malloc up new buffer.  The "-4" omits the FCS (CRC). */
779			int pkt_len = (rx_head.rx_count & 0x7ff) - 4;
780			struct sk_buff *skb;
781
782			skb = dev_alloc_skb(pkt_len + 2);
783			if (skb == NULL) {
784				printk(KERN_ERR "%s: Memory squeeze, dropping packet.\n",
785				       dev->name);
786				dev->stats.rx_dropped++;
787				goto done;
788			}
789
790			skb_reserve(skb, 2);	/* Align IP on 16 byte boundaries */
791			read_block(ioaddr, pkt_len, skb_put(skb,pkt_len), dev->if_port);
792			skb->protocol = eth_type_trans(skb, dev);
793			netif_rx(skb);
794			dev->last_rx = jiffies;
795			dev->stats.rx_packets++;
796			dev->stats.rx_bytes += pkt_len;
797		}
798	done:
799		write_reg(ioaddr, CMR1, CMR1_NextPkt);
800		lp->last_rx_time = jiffies;
801		return;
802	}
803
804	static void read_block(long ioaddr, int length, unsigned char *p, int data_mode)
805	{
806
807		if (data_mode <= 3) { /* Mode 0 or 1 */
808			outb(Ctrl_LNibRead, ioaddr + PAR_CONTROL);
809			outb(length == 8 ? RdAddr | HNib | MAR : RdAddr | MAR,
810			     ioaddr + PAR_DATA);
811			if (data_mode <= 1) { /* Mode 0 or 1 */
812				do *p++ = read_byte_mode0(ioaddr); while (--length > 0);
813			} else /* Mode 2 or 3 */
814				do *p++ = read_byte_mode2(ioaddr); while (--length > 0);
815		} else if (data_mode <= 5)
816			do *p++ = read_byte_mode4(ioaddr); while (--length > 0);
817		else
818			do *p++ = read_byte_mode6(ioaddr); while (--length > 0);
819
820		outb(EOC+HNib+MAR, ioaddr + PAR_DATA);
821		outb(Ctrl_SelData, ioaddr + PAR_CONTROL);
822	}
823
824	/* The inverse routine to net_open(). */
825	static int
826	net_close(struct net_device *dev)
827	{
828		struct net_local *lp = netdev_priv(dev);
829		long ioaddr = dev->base_addr;
830
831		netif_stop_queue(dev);
832
833		del_timer_sync(&lp->timer);
834
835		/* Flush the Tx and disable Rx here. */
836		lp->addr_mode = CMR2h_OFF;
837		write_reg_high(ioaddr, CMR2, CMR2h_OFF);
838
839		/* Free the IRQ line. */
840		outb(0x00, ioaddr + PAR_CONTROL);
841		free_irq(dev->irq, dev);
842
843		/* Reset the ethernet hardware and activate the printer pass-through. */
844		write_reg_high(ioaddr, CMR1, CMR1h_RESET | CMR1h_MUX);
845		return 0;
846	}
847
848	/*
849	 *	Set or clear the multicast filter for this adapter.
850	 */
851
852	static void set_rx_mode_8002(struct net_device *dev)
853	{
854		struct net_local *lp = netdev_priv(dev);
855		long ioaddr = dev->base_addr;
856
857		if (dev->mc_count > 0 || (dev->flags & (IFF_ALLMULTI|IFF_PROMISC))) {
858			/* We must make the kernel realise we had to move
859			 * into promisc mode or we start all out war on
860			 * the cable. - AC
861			 */
862			dev->flags |= IFF_PROMISC;
863			lp->addr_mode = CMR2h_PROMISC;
864		} else
865			lp->addr_mode = CMR2h_Normal;
866		write_reg_high(ioaddr, CMR2, lp->addr_mode);
867	}
868
869	static void set_rx_mode_8012(struct net_device *dev)
870	{
871		struct net_local *lp = netdev_priv(dev);
872		long ioaddr = dev->base_addr;
873		unsigned char new_mode, mc_filter[8];	/* Multicast hash filter */
874		int i;
875
876		if (dev->flags & IFF_PROMISC) {		/* Set promiscuous. */
877			new_mode = CMR2h_PROMISC;
878		} else if ((dev->mc_count > 1000) || (dev->flags & IFF_ALLMULTI)) {
879			/* Too many to filter perfectly -- accept all multicasts. */
880			memset(mc_filter, 0xff, sizeof(mc_filter));
881			new_mode = CMR2h_Normal;
882		} else {
883			struct dev_mc_list *mclist;
884
885			memset(mc_filter, 0, sizeof(mc_filter));
886			for (i = 0, mclist = dev->mc_list; mclist && i < dev->mc_count;
887			     i++, mclist = mclist->next)
888			{
889				int filterbit = ether_crc_le(ETH_ALEN, mclist->dmi_addr) & 0x3f;
890				mc_filter[filterbit >> 5] |= 1 << (filterbit & 31);
891			}
892			new_mode = CMR2h_Normal;
893		}
894		lp->addr_mode = new_mode;
895		write_reg(ioaddr, CMR2, CMR2_IRQOUT | 0x04); /* Switch to page 1. */
896		for (i = 0; i < 8; i++)
897			write_reg_byte(ioaddr, i, mc_filter[i]);
898		if (net_debug > 2 || 1) {
899			lp->addr_mode = 1;
900			printk(KERN_DEBUG "%s: Mode %d, setting multicast filter to",
901			       dev->name, lp->addr_mode);
902			for (i = 0; i < 8; i++)
903				printk(" %2.2x", mc_filter[i]);
904			printk(".\n");
905		}
906
907		write_reg_high(ioaddr, CMR2, lp->addr_mode);
908		write_reg(ioaddr, CMR2, CMR2_IRQOUT); /* Switch back to page 0 */
909	}
910
911	static int __init atp_init_module(void) {
912		if (debug)	/* Emit version even if no cards detected. */
913			printk(KERN_INFO "%s", version);
914		return atp_init();
915	}
916
917	static void __exit atp_cleanup_module(void) {
918		struct net_device *next_dev;
919
920		while (root_atp_dev) {
921			next_dev = ((struct net_local *)root_atp_dev->priv)->next_module;
922			unregister_netdev(root_atp_dev);
923			/* No need to release_region(), since we never snarf it. */
924			free_netdev(root_atp_dev);
925			root_atp_dev = next_dev;
926		}
927	}
928
929	module_init(atp_init_module);
930	module_exit(atp_cleanup_module);
931
1 /* 1 /*
2 * File Name: 2 * File Name:
3 * defxx.c 3 * defxx.c
4 * 4 *
5 * Copyright Information: 5 * Copyright Information:
6 * Copyright Digital Equipment Corporation 1996. 6 * Copyright Digital Equipment Corporation 1996.
7 * 7 *
8 * This software may be used and distributed according to the terms of
9 * the GNU General Public License, incorporated herein by reference.
10 *
11 * Abstract:
12 * A Linux device driver supporting the Digital Equipment Corporation
13 * FDDI TURBOchannel, EISA and PCI controller families. Supported
14 * adapters include:
15 *
16 * DEC FDDIcontroller/TURBOchannel (DEFTA)
17 * DEC FDDIcontroller/EISA (DEFEA)
18 * DEC FDDIcontroller/PCI (DEFPA)
19 *
20 * The original author:
21 * LVS Lawrence V. Stefani <lstefani@yahoo.com>
22 *
23 * Maintainers:
24 * macro Maciej W. Rozycki <macro@linux-mips.org>
25 *
26 * Credits:
27 * I'd like to thank Patricia Cross for helping me get started with
28 * Linux, David Davies for a lot of help upgrading and configuring
29 * my development system and for answering many OS and driver
30 * development questions, and Alan Cox for recommendations and
31 * integration help on getting FDDI support into Linux. LVS
32 *
33 * Driver Architecture:
34 * The driver architecture is largely based on previous driver work
35 * for other operating systems. The upper edge interface and
36 * functions were largely taken from existing Linux device drivers
37 * such as David Davies' DE4X5.C driver and Donald Becker's TULIP.C
38 * driver.
39 *
40 * Adapter Probe -
41 * The driver scans for supported EISA adapters by reading the
42 * SLOT ID register for each EISA slot and making a match
43 * against the expected value.
44 *
45 * Bus-Specific Initialization -
46 * This driver currently supports both EISA and PCI controller
47 * families. While the custom DMA chip and FDDI logic is similar
48 * or identical, the bus logic is very different. After
49 * initialization, the only bus-specific difference is in how the
50 * driver enables and disables interrupts. Other than that, the
51 * run-time critical code behaves the same on both families.
52 * It's important to note that both adapter families are configured
53 * to I/O map, rather than memory map, the adapter registers.
54 *
55 * Driver Open/Close -
56 * In the driver open routine, the driver ISR (interrupt service
57 * routine) is registered and the adapter is brought to an
58 * operational state. In the driver close routine, the opposite
59 * occurs; the driver ISR is deregistered and the adapter is
60 * brought to a safe, but closed state. Users may use consecutive
61 * commands to bring the adapter up and down as in the following
62 * example:
63 * ifconfig fddi0 up
64 * ifconfig fddi0 down
65 * ifconfig fddi0 up
66 *
67 * Driver Shutdown -
68 * Apparently, there is no shutdown or halt routine support under
69 * Linux. This routine would be called during "reboot" or
70 * "shutdown" to allow the driver to place the adapter in a safe
71 * state before a warm reboot occurs. To be really safe, the user
72 * should close the adapter before shutdown (eg. ifconfig fddi0 down)
73 * to ensure that the adapter DMA engine is taken off-line. However,
74 * the current driver code anticipates this problem and always issues
75 * a soft reset of the adapter at the beginning of driver initialization.
76 * A future driver enhancement in this area may occur in 2.1.X where
77 * Alan indicated that a shutdown handler may be implemented.
78 *
79 * Interrupt Service Routine -
80 * The driver supports shared interrupts, so the ISR is registered for
81 * each board with the appropriate flag and the pointer to that board's
82 * device structure. This provides the context during interrupt
83 * processing to support shared interrupts and multiple boards.
84 *
85 * Interrupt enabling/disabling can occur at many levels. At the host
86 * end, you can disable system interrupts, or disable interrupts at the
87 * PIC (on Intel systems). Across the bus, both EISA and PCI adapters
88 * have a bus-logic chip interrupt enable/disable as well as a DMA
89 * controller interrupt enable/disable.
90 *
91 * The driver currently enables and disables adapter interrupts at the
92 * bus-logic chip and assumes that Linux will take care of clearing or
93 * acknowledging any host-based interrupt chips.
94 *
95 * Control Functions -
96 * Control functions are those used to support functions such as adding
97 * or deleting multicast addresses, enabling or disabling packet
98 * reception filters, or other custom/proprietary commands. Presently,
99 * the driver supports the "get statistics", "set multicast list", and
100 * "set mac address" functions defined by Linux. Possible
101 * enhancements include:
102 *
103 * - Custom ioctl interface for executing port interface commands
104 * - Custom ioctl interface for adding unicast addresses to
105 * adapter CAM (to support bridge functions).
106 * - Custom ioctl interface for supporting firmware upgrades.
107 *
108 * Hardware (port interface) Support Routines -
109 * The driver function names that start with "dfx_hw_" represent
110 * low-level port interface routines that are called frequently. They
111 * include issuing a DMA or port control command to the adapter,
112 * resetting the adapter, or reading the adapter state. Since the
113 * driver initialization and run-time code must make calls into the
114 * port interface, these routines were written to be as generic and
115 * usable as possible.
116 *
117 * Receive Path -
118 * The adapter DMA engine supports a 256 entry receive descriptor block
119 * of which up to 255 entries can be used at any given time. The
120 * architecture is a standard producer, consumer, completion model in
121 * which the driver "produces" receive buffers to the adapter, the
122 * adapter "consumes" the receive buffers by DMAing incoming packet data,
123 * and the driver "completes" the receive buffers by servicing the
124 * incoming packet, then "produces" a new buffer and starts the cycle
125 * again. Receive buffers can be fragmented in up to 16 fragments
126 * (descriptor entries). For simplicity, this driver posts
127 * single-fragment receive buffers of 4608 bytes, then allocates a
128 * sk_buff, copies the data, then reposts the buffer. To reduce CPU
129 * utilization, a better approach would be to pass up the receive
130 * buffer (no extra copy) then allocate and post a replacement buffer.
131 * This is a performance enhancement that should be looked into at
132 * some point.
133 *
134 * Transmit Path -
135 * Like the receive path, the adapter DMA engine supports a 256 entry
136 * transmit descriptor block of which up to 255 entries can be used at
137 * any given time. Transmit buffers can be fragmented in up to 255
138 * fragments (descriptor entries). This driver always posts one
139 * fragment per transmit packet request.
140 *
141 * The fragment contains the entire packet from FC to end of data.
142 * Before posting the buffer to the adapter, the driver sets a three-byte
143 * packet request header (PRH) which is required by the Motorola MAC chip
144 * used on the adapters. The PRH tells the MAC the type of token to
145 * receive/send, whether or not to generate and append the CRC, whether
146 * synchronous or asynchronous framing is used, etc. Since the PRH
147 * definition is not necessarily consistent across all FDDI chipsets,
148 * the driver, rather than the common FDDI packet handler routines,
149 * sets these bytes.
150 *
151 * To reduce the amount of descriptor fetches needed per transmit request,
152 * the driver takes advantage of the fact that there are at least three
153 * bytes available before the skb->data field on the outgoing transmit
154 * request. This is guaranteed by having fddi_setup() in net_init.c set
155 * dev->hard_header_len to 24 bytes. 21 bytes accounts for the largest
156 * header in an 802.2 SNAP frame. The other 3 bytes are the extra "pad"
157 * bytes which we'll use to store the PRH.
158 *
159 * There's a subtle advantage to adding these pad bytes to the
160 * hard_header_len, it ensures that the data portion of the packet for
161 * an 802.2 SNAP frame is longword aligned. Other FDDI driver
162 * implementations may not need the extra padding and can start copying
163 * or DMAing directly from the FC byte which starts at skb->data. Should
164 * another driver implementation need ADDITIONAL padding, the net_init.c
165 * module should be updated and dev->hard_header_len should be increased.
166 * NOTE: To maintain the alignment on the data portion of the packet,
167 * dev->hard_header_len should always be evenly divisible by 4 and at
168 * least 24 bytes in size.
169 *
170 * Modification History:
171 * Date Name Description
172 * 16-Aug-96 LVS Created.
173 * 20-Aug-96 LVS Updated dfx_probe so that version information
174 * string is only displayed if 1 or more cards are
175 * found. Changed dfx_rcv_queue_process to copy
176 * 3 NULL bytes before FC to ensure that data is
177 * longword aligned in receive buffer.
178 * 09-Sep-96 LVS Updated dfx_ctl_set_multicast_list to enable
179 * LLC group promiscuous mode if multicast list
180 * is too large. LLC individual/group promiscuous
181 * mode is now disabled if IFF_PROMISC flag not set.
182 * dfx_xmt_queue_pkt no longer checks for NULL skb
183 * on Alan Cox recommendation. Added node address
184 * override support.
185 * 12-Sep-96 LVS Reset current address to factory address during
186 * device open. Updated transmit path to post a
187 * single fragment which includes PRH->end of data.
188 * Mar 2000 AC Did various cleanups for 2.3.x
189 * Jun 2000 jgarzik PCI and resource alloc cleanups
190 * Jul 2000 tjeerd Much cleanup and some bug fixes
191 * Sep 2000 tjeerd Fix leak on unload, cosmetic code cleanup
192 * Feb 2001 Skb allocation fixes
193 * Feb 2001 davej PCI enable cleanups.
194 * 04 Aug 2003 macro Converted to the DMA API.
195 * 14 Aug 2004 macro Fix device names reported.
196 * 14 Jun 2005 macro Use irqreturn_t.
197 * 23 Oct 2006 macro Big-endian host support.
198 * 14 Dec 2006 macro TURBOchannel support.
199 */
200
201 /* Include files */
202 #include <linux/bitops.h>
203 #include <linux/compiler.h>
204 #include <linux/delay.h>
205 #include <linux/dma-mapping.h>
206 #include <linux/eisa.h>
207 #include <linux/errno.h>
208 #include <linux/fddidevice.h>
209 #include <linux/init.h>
210 #include <linux/interrupt.h>
211 #include <linux/ioport.h>
212 #include <linux/kernel.h>
213 #include <linux/module.h>
214 #include <linux/netdevice.h>
215 #include <linux/pci.h>
216 #include <linux/skbuff.h>
217 #include <linux/slab.h>
218 #include <linux/string.h>
219 #include <linux/tc.h>
220
221 #include <asm/byteorder.h>
222 #include <asm/io.h>
223
224 #include "defxx.h"
225
226 /* Version information string should be updated prior to each new release! */
227 #define DRV_NAME "defxx"
228 #define DRV_VERSION "v1.10"
229 #define DRV_RELDATE "2006/12/14"
230
231 static char version[] __devinitdata =
232 DRV_NAME ": " DRV_VERSION " " DRV_RELDATE
233 " Lawrence V. Stefani and others\n";
234
235 #define DYNAMIC_BUFFERS 1
236
237 #define SKBUFF_RX_COPYBREAK 200
238 /*
239 * NEW_SKB_SIZE = PI_RCV_DATA_K_SIZE_MAX+128 to allow 128 byte
240 * alignment for compatibility with old EISA boards.
241 */
242 #define NEW_SKB_SIZE (PI_RCV_DATA_K_SIZE_MAX+128)
243
244 #ifdef CONFIG_PCI
245 #define DFX_BUS_PCI(dev) (dev->bus == &pci_bus_type)
246 #else
247 #define DFX_BUS_PCI(dev) 0
248 #endif
249
250 #ifdef CONFIG_EISA
251 #define DFX_BUS_EISA(dev) (dev->bus == &eisa_bus_type)
252 #else
253 #define DFX_BUS_EISA(dev) 0
254 #endif
255
256 #ifdef CONFIG_TC
257 #define DFX_BUS_TC(dev) (dev->bus == &tc_bus_type)
258 #else
259 #define DFX_BUS_TC(dev) 0
260 #endif
261
262 #ifdef CONFIG_DEFXX_MMIO
263 #define DFX_MMIO 1
264 #else
265 #define DFX_MMIO 0
266 #endif
267
268 /* Define module-wide (static) routines */
269
270 static void dfx_bus_init(struct net_device *dev);
271 static void dfx_bus_uninit(struct net_device *dev);
272 static void dfx_bus_config_check(DFX_board_t *bp);
273
274 static int dfx_driver_init(struct net_device *dev,
275 const char *print_name,
276 resource_size_t bar_start);
277 static int dfx_adap_init(DFX_board_t *bp, int get_buffers);
278
279 static int dfx_open(struct net_device *dev);
280 static int dfx_close(struct net_device *dev);
281
282 static void dfx_int_pr_halt_id(DFX_board_t *bp);
283 static void dfx_int_type_0_process(DFX_board_t *bp);
284 static void dfx_int_common(struct net_device *dev);
285 static irqreturn_t dfx_interrupt(int irq, void *dev_id);
286
287 static struct net_device_stats *dfx_ctl_get_stats(struct net_device *dev);
288 static void dfx_ctl_set_multicast_list(struct net_device *dev);
289 static int dfx_ctl_set_mac_address(struct net_device *dev, void *addr);
290 static int dfx_ctl_update_cam(DFX_board_t *bp);
291 static int dfx_ctl_update_filters(DFX_board_t *bp);
292
293 static int dfx_hw_dma_cmd_req(DFX_board_t *bp);
294 static int dfx_hw_port_ctrl_req(DFX_board_t *bp, PI_UINT32 command, PI_UINT32 data_a, PI_UINT32 data_b, PI_UINT32 *host_data);
295 static void dfx_hw_adap_reset(DFX_board_t *bp, PI_UINT32 type);
296 static int dfx_hw_adap_state_rd(DFX_board_t *bp);
297 static int dfx_hw_dma_uninit(DFX_board_t *bp, PI_UINT32 type);
298
299 static int dfx_rcv_init(DFX_board_t *bp, int get_buffers);
300 static void dfx_rcv_queue_process(DFX_board_t *bp);
301 static void dfx_rcv_flush(DFX_board_t *bp);
302
303 static int dfx_xmt_queue_pkt(struct sk_buff *skb, struct net_device *dev);
304 static int dfx_xmt_done(DFX_board_t *bp);
305 static void dfx_xmt_flush(DFX_board_t *bp);
306
307 /* Define module-wide (static) variables */
308
309 static struct pci_driver dfx_pci_driver;
310 static struct eisa_driver dfx_eisa_driver;
311 static struct tc_driver dfx_tc_driver;
312
313
314 /*
315 * =======================
316 * = dfx_port_write_long =
317 * = dfx_port_read_long =
318 * =======================
319 *
320 * Overview:
321 * Routines for reading and writing values from/to adapter
322 *
323 * Returns:
324 * None
325 *
326 * Arguments:
327 * bp - pointer to board information
328 * offset - register offset from base I/O address
329 * data - for dfx_port_write_long, this is a value to write;
330 * for dfx_port_read_long, this is a pointer to store
331 * the read value
332 *
333 * Functional Description:
334 * These routines perform the correct operation to read or write
335 * the adapter register.
336 *
337 * EISA port block base addresses are based on the slot number in which the
338 * controller is installed. For example, if the EISA controller is installed
339 * in slot 4, the port block base address is 0x4000. If the controller is
340 * installed in slot 2, the port block base address is 0x2000, and so on.
341 * This port block can be used to access PDQ, ESIC, and DEFEA on-board
342 * registers using the register offsets defined in DEFXX.H.
343 *
344 * PCI port block base addresses are assigned by the PCI BIOS or system
345 * firmware. There is one 128 byte port block which can be accessed. It
346 * allows for I/O mapping of both PDQ and PFI registers using the register
347 * offsets defined in DEFXX.H.
348 *
349 * Return Codes:
350 * None
351 *
352 * Assumptions:
353 * bp->base is a valid base I/O address for this adapter.
354 * offset is a valid register offset for this adapter.
355 *
356 * Side Effects:
357 * Rather than produce macros for these functions, these routines
358 * are defined using "inline" to ensure that the compiler will
359 * generate inline code and not waste a procedure call and return.
360 * This provides all the benefits of macros, but with the
361 * advantage of strict data type checking.
362 */
363
364 static inline void dfx_writel(DFX_board_t *bp, int offset, u32 data)
365 {
366 writel(data, bp->base.mem + offset);
367 mb();
368 }
369
370 static inline void dfx_outl(DFX_board_t *bp, int offset, u32 data)
371 {
372 outl(data, bp->base.port + offset);
373 }
374
375 static void dfx_port_write_long(DFX_board_t *bp, int offset, u32 data)
376 {
377 struct device __maybe_unused *bdev = bp->bus_dev;
378 int dfx_bus_tc = DFX_BUS_TC(bdev);
379 int dfx_use_mmio = DFX_MMIO || dfx_bus_tc;
380
381 if (dfx_use_mmio)
382 dfx_writel(bp, offset, data);
383 else
384 dfx_outl(bp, offset, data);
385 }
386
387
388 static inline void dfx_readl(DFX_board_t *bp, int offset, u32 *data)
389 {
390 mb();
391 *data = readl(bp->base.mem + offset);
392 }
393
394 static inline void dfx_inl(DFX_board_t *bp, int offset, u32 *data)
395 {
396 *data = inl(bp->base.port + offset);
397 }
398
399 static void dfx_port_read_long(DFX_board_t *bp, int offset, u32 *data)
400 {
401 struct device __maybe_unused *bdev = bp->bus_dev;
402 int dfx_bus_tc = DFX_BUS_TC(bdev);
403 int dfx_use_mmio = DFX_MMIO || dfx_bus_tc;
404
405 if (dfx_use_mmio)
406 dfx_readl(bp, offset, data);
407 else
408 dfx_inl(bp, offset, data);
409 }
410
411
412 /*
413 * ================
414 * = dfx_get_bars =
415 * ================
416 *
417 * Overview:
418 * Retrieves the address range used to access control and status
419 * registers.
420 *
421 * Returns:
422 * None
423 *
424 * Arguments:
425 * bdev - pointer to device information
426 * bar_start - pointer to store the start address
427 * bar_len - pointer to store the length of the area
428 *
429 * Assumptions:
430 * I am sure there are some.
431 *
432 * Side Effects:
433 * None
434 */
435 static void dfx_get_bars(struct device *bdev,
436 resource_size_t *bar_start, resource_size_t *bar_len)
437 {
438 int dfx_bus_pci = DFX_BUS_PCI(bdev);
439 int dfx_bus_eisa = DFX_BUS_EISA(bdev);
440 int dfx_bus_tc = DFX_BUS_TC(bdev);
441 int dfx_use_mmio = DFX_MMIO || dfx_bus_tc;
442
443 if (dfx_bus_pci) {
444 int num = dfx_use_mmio ? 0 : 1;
445
446 *bar_start = pci_resource_start(to_pci_dev(bdev), num);
447 *bar_len = pci_resource_len(to_pci_dev(bdev), num);
448 }
449 if (dfx_bus_eisa) {
450 unsigned long base_addr = to_eisa_device(bdev)->base_addr;
451 resource_size_t bar;
452
453 if (dfx_use_mmio) {
454 bar = inb(base_addr + PI_ESIC_K_MEM_ADD_CMP_2);
455 bar <<= 8;
456 bar |= inb(base_addr + PI_ESIC_K_MEM_ADD_CMP_1);
457 bar <<= 8;
458 bar |= inb(base_addr + PI_ESIC_K_MEM_ADD_CMP_0);
459 bar <<= 16;
460 *bar_start = bar;
461 bar = inb(base_addr + PI_ESIC_K_MEM_ADD_MASK_2);
462 bar <<= 8; 462 bar <<= 8;
463 bar |= inb(base_addr + PI_ESIC_K_MEM_ADD_MASK_1); 463 bar |= inb(base_addr + PI_ESIC_K_MEM_ADD_MASK_1);
464 bar <<= 8; 464 bar <<= 8;
465 bar |= inb(base_addr + PI_ESIC_K_MEM_ADD_MASK_0); 465 bar |= inb(base_addr + PI_ESIC_K_MEM_ADD_MASK_0);
466 bar <<= 16; 466 bar <<= 16;
467 *bar_len = (bar | PI_MEM_ADD_MASK_M) + 1; 467 *bar_len = (bar | PI_MEM_ADD_MASK_M) + 1;
468 } else { 468 } else {
469 *bar_start = base_addr; 469 *bar_start = base_addr;
470 *bar_len = PI_ESIC_K_CSR_IO_LEN; 470 *bar_len = PI_ESIC_K_CSR_IO_LEN;
471 } 471 }
472 } 472 }
473 if (dfx_bus_tc) { 473 if (dfx_bus_tc) {
474 *bar_start = to_tc_dev(bdev)->resource.start + 474 *bar_start = to_tc_dev(bdev)->resource.start +
475 PI_TC_K_CSR_OFFSET; 475 PI_TC_K_CSR_OFFSET;
476 *bar_len = PI_TC_K_CSR_LEN; 476 *bar_len = PI_TC_K_CSR_LEN;
477 } 477 }
478 } 478 }
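The EISA branch of dfx_get_bars() above assembles a wide base address out of three 8-bit address-comparison registers by repeated shift-and-or, with the bottom 16 bits left zero. A minimal standalone sketch of that pattern (the helper name and register meanings here are illustrative, not the real PI_ESIC_K_* layout):

```c
#include <stdint.h>

/* Hypothetical helper mirroring the shift pattern in dfx_get_bars():
 * three 8-bit comparison registers (high, mid, low) supply the upper
 * address bits; the final <<16 leaves the low 16 bits zero, matching
 * the granularity of the decoder. */
static uint64_t esic_assemble_bar(uint8_t hi, uint8_t mid, uint8_t lo)
{
	uint64_t bar = hi;

	bar <<= 8;
	bar |= mid;
	bar <<= 8;
	bar |= lo;
	bar <<= 16;		/* decoder granularity: 64 KiB */
	return bar;
}
```

The length computation in the driver follows the same idea in reverse: the mask registers are assembled the same way, then `(mask | PI_MEM_ADD_MASK_M) + 1` turns an inclusive address mask into a byte count.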
479 479
480 /* 480 /*
481 * ================ 481 * ================
482 * = dfx_register = 482 * = dfx_register =
483 * ================ 483 * ================
484 * 484 *
485 * Overview: 485 * Overview:
486 * Initializes a supported FDDI controller 486 * Initializes a supported FDDI controller
487 * 487 *
488 * Returns: 488 * Returns:
489 * Condition code 489 * Condition code
490 * 490 *
491 * Arguments: 491 * Arguments:
492 * bdev - pointer to device information 492 * bdev - pointer to device information
493 * 493 *
494 * Functional Description: 494 * Functional Description:
495 * 495 *
496 * Return Codes: 496 * Return Codes:
497 * 0 - This device (fddi0, fddi1, etc) configured successfully 497 * 0 - This device (fddi0, fddi1, etc) configured successfully
498 * -EBUSY - Failed to get resources, or dfx_driver_init failed. 498 * -EBUSY - Failed to get resources, or dfx_driver_init failed.
499 * 499 *
500 * Assumptions: 500 * Assumptions:
501 * It compiles so it should work :-( (PCI cards do :-) 501 * It compiles so it should work :-( (PCI cards do :-)
502 * 502 *
503 * Side Effects: 503 * Side Effects:
504 * Device structures for FDDI adapters (fddi0, fddi1, etc) are 504 * Device structures for FDDI adapters (fddi0, fddi1, etc) are
505 * initialized and the board resources are read and stored in 505 * initialized and the board resources are read and stored in
506 * the device structure. 506 * the device structure.
507 */ 507 */
508 static int __devinit dfx_register(struct device *bdev) 508 static int __devinit dfx_register(struct device *bdev)
509 { 509 {
510 static int version_disp; 510 static int version_disp;
511 int dfx_bus_pci = DFX_BUS_PCI(bdev); 511 int dfx_bus_pci = DFX_BUS_PCI(bdev);
512 int dfx_bus_tc = DFX_BUS_TC(bdev); 512 int dfx_bus_tc = DFX_BUS_TC(bdev);
513 int dfx_use_mmio = DFX_MMIO || dfx_bus_tc; 513 int dfx_use_mmio = DFX_MMIO || dfx_bus_tc;
514 char *print_name = bdev->bus_id; 514 char *print_name = bdev->bus_id;
515 struct net_device *dev; 515 struct net_device *dev;
516 DFX_board_t *bp; /* board pointer */ 516 DFX_board_t *bp; /* board pointer */
517 resource_size_t bar_start = 0; /* pointer to port */ 517 resource_size_t bar_start = 0; /* pointer to port */
518 resource_size_t bar_len = 0; /* resource length */ 518 resource_size_t bar_len = 0; /* resource length */
519 int alloc_size; /* total buffer size used */ 519 int alloc_size; /* total buffer size used */
520 struct resource *region; 520 struct resource *region;
521 int err = 0; 521 int err = 0;
522 522
523 if (!version_disp) { /* display version info if adapter is found */ 523 if (!version_disp) { /* display version info if adapter is found */
524 version_disp = 1; /* set display flag to TRUE so that */ 524 version_disp = 1; /* set display flag to TRUE so that */
525 printk(version); /* we only display this string ONCE */ 525 printk(version); /* we only display this string ONCE */
526 } 526 }
527 527
528 dev = alloc_fddidev(sizeof(*bp)); 528 dev = alloc_fddidev(sizeof(*bp));
529 if (!dev) { 529 if (!dev) {
530 printk(KERN_ERR "%s: Unable to allocate fddidev, aborting\n", 530 printk(KERN_ERR "%s: Unable to allocate fddidev, aborting\n",
531 print_name); 531 print_name);
532 return -ENOMEM; 532 return -ENOMEM;
533 } 533 }
534 534
535 /* Enable PCI device. */ 535 /* Enable PCI device. */
536 if (dfx_bus_pci && pci_enable_device(to_pci_dev(bdev))) { 536 if (dfx_bus_pci && pci_enable_device(to_pci_dev(bdev))) {
537 printk(KERN_ERR "%s: Cannot enable PCI device, aborting\n", 537 printk(KERN_ERR "%s: Cannot enable PCI device, aborting\n",
538 print_name); 538 print_name);
539 goto err_out; 539 goto err_out;
540 } 540 }
541 541
542 SET_NETDEV_DEV(dev, bdev); 542 SET_NETDEV_DEV(dev, bdev);
543 543
544 bp = netdev_priv(dev); 544 bp = netdev_priv(dev);
545 bp->bus_dev = bdev; 545 bp->bus_dev = bdev;
546 dev_set_drvdata(bdev, dev); 546 dev_set_drvdata(bdev, dev);
547 547
548 dfx_get_bars(bdev, &bar_start, &bar_len); 548 dfx_get_bars(bdev, &bar_start, &bar_len);
549 549
550 if (dfx_use_mmio) 550 if (dfx_use_mmio)
551 region = request_mem_region(bar_start, bar_len, print_name); 551 region = request_mem_region(bar_start, bar_len, print_name);
552 else 552 else
553 region = request_region(bar_start, bar_len, print_name); 553 region = request_region(bar_start, bar_len, print_name);
554 if (!region) { 554 if (!region) {
555 printk(KERN_ERR "%s: Cannot reserve I/O resource " 555 printk(KERN_ERR "%s: Cannot reserve I/O resource "
556 "0x%lx @ 0x%lx, aborting\n", 556 "0x%lx @ 0x%lx, aborting\n",
557 print_name, (long)bar_len, (long)bar_start); 557 print_name, (long)bar_len, (long)bar_start);
558 err = -EBUSY; 558 err = -EBUSY;
559 goto err_out_disable; 559 goto err_out_disable;
560 } 560 }
561 561
562 /* Set up I/O base address. */ 562 /* Set up I/O base address. */
563 if (dfx_use_mmio) { 563 if (dfx_use_mmio) {
564 bp->base.mem = ioremap_nocache(bar_start, bar_len); 564 bp->base.mem = ioremap_nocache(bar_start, bar_len);
565 if (!bp->base.mem) { 565 if (!bp->base.mem) {
566 printk(KERN_ERR "%s: Cannot map MMIO\n", print_name); 566 printk(KERN_ERR "%s: Cannot map MMIO\n", print_name);
567 err = -ENOMEM; 567 err = -ENOMEM;
568 goto err_out_region; 568 goto err_out_region;
569 } 569 }
570 } else { 570 } else {
571 bp->base.port = bar_start; 571 bp->base.port = bar_start;
572 dev->base_addr = bar_start; 572 dev->base_addr = bar_start;
573 } 573 }
574 574
575 /* Initialize new device structure */ 575 /* Initialize new device structure */
576 576
577 dev->get_stats = dfx_ctl_get_stats; 577 dev->get_stats = dfx_ctl_get_stats;
578 dev->open = dfx_open; 578 dev->open = dfx_open;
579 dev->stop = dfx_close; 579 dev->stop = dfx_close;
580 dev->hard_start_xmit = dfx_xmt_queue_pkt; 580 dev->hard_start_xmit = dfx_xmt_queue_pkt;
581 dev->set_multicast_list = dfx_ctl_set_multicast_list; 581 dev->set_multicast_list = dfx_ctl_set_multicast_list;
582 dev->set_mac_address = dfx_ctl_set_mac_address; 582 dev->set_mac_address = dfx_ctl_set_mac_address;
583 583
584 if (dfx_bus_pci) 584 if (dfx_bus_pci)
585 pci_set_master(to_pci_dev(bdev)); 585 pci_set_master(to_pci_dev(bdev));
586 586
587 if (dfx_driver_init(dev, print_name, bar_start) != DFX_K_SUCCESS) { 587 if (dfx_driver_init(dev, print_name, bar_start) != DFX_K_SUCCESS) {
588 err = -ENODEV; 588 err = -ENODEV;
589 goto err_out_unmap; 589 goto err_out_unmap;
590 } 590 }
591 591
592 err = register_netdev(dev); 592 err = register_netdev(dev);
593 if (err) 593 if (err)
594 goto err_out_kfree; 594 goto err_out_kfree;
595 595
596 printk("%s: registered as %s\n", print_name, dev->name); 596 printk("%s: registered as %s\n", print_name, dev->name);
597 return 0; 597 return 0;
598 598
599 err_out_kfree: 599 err_out_kfree:
600 alloc_size = sizeof(PI_DESCR_BLOCK) + 600 alloc_size = sizeof(PI_DESCR_BLOCK) +
601 PI_CMD_REQ_K_SIZE_MAX + PI_CMD_RSP_K_SIZE_MAX + 601 PI_CMD_REQ_K_SIZE_MAX + PI_CMD_RSP_K_SIZE_MAX +
602 #ifndef DYNAMIC_BUFFERS 602 #ifndef DYNAMIC_BUFFERS
603 (bp->rcv_bufs_to_post * PI_RCV_DATA_K_SIZE_MAX) + 603 (bp->rcv_bufs_to_post * PI_RCV_DATA_K_SIZE_MAX) +
604 #endif 604 #endif
605 sizeof(PI_CONSUMER_BLOCK) + 605 sizeof(PI_CONSUMER_BLOCK) +
606 (PI_ALIGN_K_DESC_BLK - 1); 606 (PI_ALIGN_K_DESC_BLK - 1);
607 if (bp->kmalloced) 607 if (bp->kmalloced)
608 dma_free_coherent(bdev, alloc_size, 608 dma_free_coherent(bdev, alloc_size,
609 bp->kmalloced, bp->kmalloced_dma); 609 bp->kmalloced, bp->kmalloced_dma);
610 610
611 err_out_unmap: 611 err_out_unmap:
612 if (dfx_use_mmio) 612 if (dfx_use_mmio)
613 iounmap(bp->base.mem); 613 iounmap(bp->base.mem);
614 614
615 err_out_region: 615 err_out_region:
616 if (dfx_use_mmio) 616 if (dfx_use_mmio)
617 release_mem_region(bar_start, bar_len); 617 release_mem_region(bar_start, bar_len);
618 else 618 else
619 release_region(bar_start, bar_len); 619 release_region(bar_start, bar_len);
620 620
621 err_out_disable: 621 err_out_disable:
622 if (dfx_bus_pci) 622 if (dfx_bus_pci)
623 pci_disable_device(to_pci_dev(bdev)); 623 pci_disable_device(to_pci_dev(bdev));
624 624
625 err_out: 625 err_out:
626 free_netdev(dev); 626 free_netdev(dev);
627 return err; 627 return err;
628 } 628 }
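dfx_register() above uses the classic kernel goto-unwind idiom: resources are released in strict reverse order of acquisition, and each failure path jumps to the label that tears down exactly what has been set up so far (err_out_kfree → err_out_unmap → err_out_region → err_out_disable → err_out). A minimal sketch of the idiom, with hypothetical acquire/release stand-ins rather than the driver's real calls:

```c
#include <assert.h>

static int released;			/* records that cleanup ran */

static int acquire_a(void) { return 1; }	/* succeeds */
static int acquire_b(void) { return 0; }	/* fails, forcing unwind */
static void release_a(void) { released = 1; }

/* Kernel-style error unwinding: one exit ladder, labels ordered so that
 * falling through releases everything acquired before the failure. */
static int setup(void)
{
	int err = -16;			/* mimics returning -EBUSY */

	if (!acquire_a())
		goto err_out;
	if (!acquire_b())
		goto err_release_a;	/* a is held, must be released */
	return 0;

err_release_a:
	release_a();
err_out:
	return err;
}
```

The payoff is that each acquisition site only needs to know which label to jump to; the teardown ordering lives in one place at the bottom of the function.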
629 629
630 630
631 /* 631 /*
632 * ================ 632 * ================
633 * = dfx_bus_init = 633 * = dfx_bus_init =
634 * ================ 634 * ================
635 * 635 *
636 * Overview: 636 * Overview:
637 * Initializes the bus-specific controller logic. 637 * Initializes the bus-specific controller logic.
638 * 638 *
639 * Returns: 639 * Returns:
640 * None 640 * None
641 * 641 *
642 * Arguments: 642 * Arguments:
643 * dev - pointer to device information 643 * dev - pointer to device information
644 * 644 *
645 * Functional Description: 645 * Functional Description:
646 * Determine and save adapter IRQ in device table, 646 * Determine and save adapter IRQ in device table,
647 * then perform bus-specific logic initialization. 647 * then perform bus-specific logic initialization.
648 * 648 *
649 * Return Codes: 649 * Return Codes:
650 * None 650 * None
651 * 651 *
652 * Assumptions: 652 * Assumptions:
653 * bp->base has already been set with the proper 653 * bp->base has already been set with the proper
654 * base I/O address for this device. 654 * base I/O address for this device.
655 * 655 *
656 * Side Effects: 656 * Side Effects:
657 * Interrupts are enabled at the adapter bus-specific logic. 657 * Interrupts are enabled at the adapter bus-specific logic.
658 * Note: Interrupts at the DMA engine (PDQ chip) are not 658 * Note: Interrupts at the DMA engine (PDQ chip) are not
659 * enabled yet. 659 * enabled yet.
660 */ 660 */
661 661
662 static void __devinit dfx_bus_init(struct net_device *dev) 662 static void __devinit dfx_bus_init(struct net_device *dev)
663 { 663 {
664 DFX_board_t *bp = netdev_priv(dev); 664 DFX_board_t *bp = netdev_priv(dev);
665 struct device *bdev = bp->bus_dev; 665 struct device *bdev = bp->bus_dev;
666 int dfx_bus_pci = DFX_BUS_PCI(bdev); 666 int dfx_bus_pci = DFX_BUS_PCI(bdev);
667 int dfx_bus_eisa = DFX_BUS_EISA(bdev); 667 int dfx_bus_eisa = DFX_BUS_EISA(bdev);
668 int dfx_bus_tc = DFX_BUS_TC(bdev); 668 int dfx_bus_tc = DFX_BUS_TC(bdev);
669 int dfx_use_mmio = DFX_MMIO || dfx_bus_tc; 669 int dfx_use_mmio = DFX_MMIO || dfx_bus_tc;
670 u8 val; 670 u8 val;
671 671
672 DBG_printk("In dfx_bus_init...\n"); 672 DBG_printk("In dfx_bus_init...\n");
673 673
674 /* Initialize a pointer back to the net_device struct */ 674 /* Initialize a pointer back to the net_device struct */
675 bp->dev = dev; 675 bp->dev = dev;
676 676
677 /* Initialize adapter based on bus type */ 677 /* Initialize adapter based on bus type */
678 678
679 if (dfx_bus_tc) 679 if (dfx_bus_tc)
680 dev->irq = to_tc_dev(bdev)->interrupt; 680 dev->irq = to_tc_dev(bdev)->interrupt;
681 if (dfx_bus_eisa) { 681 if (dfx_bus_eisa) {
682 unsigned long base_addr = to_eisa_device(bdev)->base_addr; 682 unsigned long base_addr = to_eisa_device(bdev)->base_addr;
683 683
684 /* Get the interrupt level from the ESIC chip. */ 684 /* Get the interrupt level from the ESIC chip. */
685 val = inb(base_addr + PI_ESIC_K_IO_CONFIG_STAT_0); 685 val = inb(base_addr + PI_ESIC_K_IO_CONFIG_STAT_0);
686 val &= PI_CONFIG_STAT_0_M_IRQ; 686 val &= PI_CONFIG_STAT_0_M_IRQ;
687 val >>= PI_CONFIG_STAT_0_V_IRQ; 687 val >>= PI_CONFIG_STAT_0_V_IRQ;
688 688
689 switch (val) { 689 switch (val) {
690 case PI_CONFIG_STAT_0_IRQ_K_9: 690 case PI_CONFIG_STAT_0_IRQ_K_9:
691 dev->irq = 9; 691 dev->irq = 9;
692 break; 692 break;
693 693
694 case PI_CONFIG_STAT_0_IRQ_K_10: 694 case PI_CONFIG_STAT_0_IRQ_K_10:
695 dev->irq = 10; 695 dev->irq = 10;
696 break; 696 break;
697 697
698 case PI_CONFIG_STAT_0_IRQ_K_11: 698 case PI_CONFIG_STAT_0_IRQ_K_11:
699 dev->irq = 11; 699 dev->irq = 11;
700 break; 700 break;
701 701
702 case PI_CONFIG_STAT_0_IRQ_K_15: 702 case PI_CONFIG_STAT_0_IRQ_K_15:
703 dev->irq = 15; 703 dev->irq = 15;
704 break; 704 break;
705 } 705 }
706 706
707 /* 707 /*
708 * Enable memory decoding (MEMCS0) and/or port decoding 708 * Enable memory decoding (MEMCS0) and/or port decoding
709 * (IOCS1/IOCS0) as appropriate in Function Control 709 * (IOCS1/IOCS0) as appropriate in Function Control
710 * Register. One of the port chip selects seems to be 710 * Register. One of the port chip selects seems to be
711 * used for the Burst Holdoff register, but this bit of 711 * used for the Burst Holdoff register, but this bit of
712 * documentation is missing and as yet it has not been 712 * documentation is missing and as yet it has not been
713 * determined which of the two. This is also the reason 713 * determined which of the two. This is also the reason
714 * the size of the decoded port range is twice as large 714 * the size of the decoded port range is twice as large
715 * as one required by the PDQ. 715 * as one required by the PDQ.
716 */ 716 */
717 717
718 /* Set the decode range of the board. */ 718 /* Set the decode range of the board. */
719 val = ((bp->base.port >> 12) << PI_IO_CMP_V_SLOT); 719 val = ((bp->base.port >> 12) << PI_IO_CMP_V_SLOT);
720 outb(base_addr + PI_ESIC_K_IO_ADD_CMP_0_1, val); 720 outb(base_addr + PI_ESIC_K_IO_ADD_CMP_0_1, val);
721 outb(base_addr + PI_ESIC_K_IO_ADD_CMP_0_0, 0); 721 outb(base_addr + PI_ESIC_K_IO_ADD_CMP_0_0, 0);
722 outb(base_addr + PI_ESIC_K_IO_ADD_CMP_1_1, val); 722 outb(base_addr + PI_ESIC_K_IO_ADD_CMP_1_1, val);
723 outb(base_addr + PI_ESIC_K_IO_ADD_CMP_1_0, 0); 723 outb(base_addr + PI_ESIC_K_IO_ADD_CMP_1_0, 0);
724 val = PI_ESIC_K_CSR_IO_LEN - 1; 724 val = PI_ESIC_K_CSR_IO_LEN - 1;
725 outb(base_addr + PI_ESIC_K_IO_ADD_MASK_0_1, (val >> 8) & 0xff); 725 outb(base_addr + PI_ESIC_K_IO_ADD_MASK_0_1, (val >> 8) & 0xff);
726 outb(base_addr + PI_ESIC_K_IO_ADD_MASK_0_0, val & 0xff); 726 outb(base_addr + PI_ESIC_K_IO_ADD_MASK_0_0, val & 0xff);
727 outb(base_addr + PI_ESIC_K_IO_ADD_MASK_1_1, (val >> 8) & 0xff); 727 outb(base_addr + PI_ESIC_K_IO_ADD_MASK_1_1, (val >> 8) & 0xff);
728 outb(base_addr + PI_ESIC_K_IO_ADD_MASK_1_0, val & 0xff); 728 outb(base_addr + PI_ESIC_K_IO_ADD_MASK_1_0, val & 0xff);
729 729
730 /* Enable the decoders. */ 730 /* Enable the decoders. */
731 val = PI_FUNCTION_CNTRL_M_IOCS1 | PI_FUNCTION_CNTRL_M_IOCS0; 731 val = PI_FUNCTION_CNTRL_M_IOCS1 | PI_FUNCTION_CNTRL_M_IOCS0;
732 if (dfx_use_mmio) 732 if (dfx_use_mmio)
733 val |= PI_FUNCTION_CNTRL_M_MEMCS0; 733 val |= PI_FUNCTION_CNTRL_M_MEMCS0;
734 outb(base_addr + PI_ESIC_K_FUNCTION_CNTRL, val); 734 outb(base_addr + PI_ESIC_K_FUNCTION_CNTRL, val);
735 735
736 /* 736 /*
737 * Enable access to the rest of the module 737 * Enable access to the rest of the module
738 * (including PDQ and packet memory). 738 * (including PDQ and packet memory).
739 */ 739 */
740 val = PI_SLOT_CNTRL_M_ENB; 740 val = PI_SLOT_CNTRL_M_ENB;
741 outb(base_addr + PI_ESIC_K_SLOT_CNTRL, val); 741 outb(base_addr + PI_ESIC_K_SLOT_CNTRL, val);
742 742
743 /* 743 /*
744 * Map PDQ registers into memory or port space. This is 744 * Map PDQ registers into memory or port space. This is
745 * done with a bit in the Burst Holdoff register. 745 * done with a bit in the Burst Holdoff register.
746 */ 746 */
747 val = inb(base_addr + PI_DEFEA_K_BURST_HOLDOFF); 747 val = inb(base_addr + PI_DEFEA_K_BURST_HOLDOFF);
748 if (dfx_use_mmio) 748 if (dfx_use_mmio)
749 val |= PI_BURST_HOLDOFF_V_MEM_MAP; 749 val |= PI_BURST_HOLDOFF_V_MEM_MAP;
750 else 750 else
751 val &= ~PI_BURST_HOLDOFF_V_MEM_MAP; 751 val &= ~PI_BURST_HOLDOFF_V_MEM_MAP;
752 outb(base_addr + PI_DEFEA_K_BURST_HOLDOFF, val); 752 outb(base_addr + PI_DEFEA_K_BURST_HOLDOFF, val);
753 753
754 /* Enable interrupts at EISA bus interface chip (ESIC) */ 754 /* Enable interrupts at EISA bus interface chip (ESIC) */
755 val = inb(base_addr + PI_ESIC_K_IO_CONFIG_STAT_0); 755 val = inb(base_addr + PI_ESIC_K_IO_CONFIG_STAT_0);
756 val |= PI_CONFIG_STAT_0_M_INT_ENB; 756 val |= PI_CONFIG_STAT_0_M_INT_ENB;
757 outb(base_addr + PI_ESIC_K_IO_CONFIG_STAT_0, val); 757 outb(base_addr + PI_ESIC_K_IO_CONFIG_STAT_0, val);
758 } 758 }
759 if (dfx_bus_pci) { 759 if (dfx_bus_pci) {
760 struct pci_dev *pdev = to_pci_dev(bdev); 760 struct pci_dev *pdev = to_pci_dev(bdev);
761 761
762 /* Get the interrupt level from the PCI Configuration Table */ 762 /* Get the interrupt level from the PCI Configuration Table */
763 763
764 dev->irq = pdev->irq; 764 dev->irq = pdev->irq;
765 765
766 /* Check Latency Timer and set if less than minimal */ 766 /* Check Latency Timer and set if less than minimal */
767 767
768 pci_read_config_byte(pdev, PCI_LATENCY_TIMER, &val); 768 pci_read_config_byte(pdev, PCI_LATENCY_TIMER, &val);
769 if (val < PFI_K_LAT_TIMER_MIN) { 769 if (val < PFI_K_LAT_TIMER_MIN) {
770 val = PFI_K_LAT_TIMER_DEF; 770 val = PFI_K_LAT_TIMER_DEF;
771 pci_write_config_byte(pdev, PCI_LATENCY_TIMER, val); 771 pci_write_config_byte(pdev, PCI_LATENCY_TIMER, val);
772 } 772 }
773 773
774 /* Enable interrupts at PCI bus interface chip (PFI) */ 774 /* Enable interrupts at PCI bus interface chip (PFI) */
775 val = PFI_MODE_M_PDQ_INT_ENB | PFI_MODE_M_DMA_ENB; 775 val = PFI_MODE_M_PDQ_INT_ENB | PFI_MODE_M_DMA_ENB;
776 dfx_port_write_long(bp, PFI_K_REG_MODE_CTRL, val); 776 dfx_port_write_long(bp, PFI_K_REG_MODE_CTRL, val);
777 } 777 }
778 } 778 }
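The ESIC interrupt-level read in dfx_bus_init() is a standard mask-then-shift field extraction: AND with the field mask (PI_CONFIG_STAT_0_M_IRQ), then shift right by the field's bit position (PI_CONFIG_STAT_0_V_IRQ) before comparing against the encoded IRQ values. A self-contained sketch with made-up mask and shift constants, not the real PI_CONFIG_STAT_0_* values:

```c
#include <stdint.h>

#define IRQ_MASK	0x60	/* hypothetical 2-bit field mask */
#define IRQ_SHIFT	5	/* hypothetical field position */

/* Extract a bit field from a status register: mask first to isolate the
 * field, then shift it down to bit 0 so it can be compared against the
 * field's encoded values (as the switch on `val` does above). */
static uint8_t decode_irq_field(uint8_t stat)
{
	return (uint8_t)((stat & IRQ_MASK) >> IRQ_SHIFT);
}
```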
779 779
780 /* 780 /*
781 * ================== 781 * ==================
782 * = dfx_bus_uninit = 782 * = dfx_bus_uninit =
783 * ================== 783 * ==================
784 * 784 *
785 * Overview: 785 * Overview:
786 * Uninitializes the bus-specific controller logic. 786 * Uninitializes the bus-specific controller logic.
787 * 787 *
788 * Returns: 788 * Returns:
789 * None 789 * None
790 * 790 *
791 * Arguments: 791 * Arguments:
792 * dev - pointer to device information 792 * dev - pointer to device information
793 * 793 *
794 * Functional Description: 794 * Functional Description:
795 * Perform bus-specific logic uninitialization. 795 * Perform bus-specific logic uninitialization.
796 * 796 *
797 * Return Codes: 797 * Return Codes:
798 * None 798 * None
799 * 799 *
800 * Assumptions: 800 * Assumptions:
801 * bp->base has already been set with the proper 801 * bp->base has already been set with the proper
802 * base I/O address for this device. 802 * base I/O address for this device.
803 * 803 *
804 * Side Effects: 804 * Side Effects:
805 * Interrupts are disabled at the adapter bus-specific logic. 805 * Interrupts are disabled at the adapter bus-specific logic.
806 */ 806 */
807 807
808 static void __devexit dfx_bus_uninit(struct net_device *dev) 808 static void __devexit dfx_bus_uninit(struct net_device *dev)
809 { 809 {
810 DFX_board_t *bp = netdev_priv(dev); 810 DFX_board_t *bp = netdev_priv(dev);
811 struct device *bdev = bp->bus_dev; 811 struct device *bdev = bp->bus_dev;
812 int dfx_bus_pci = DFX_BUS_PCI(bdev); 812 int dfx_bus_pci = DFX_BUS_PCI(bdev);
813 int dfx_bus_eisa = DFX_BUS_EISA(bdev); 813 int dfx_bus_eisa = DFX_BUS_EISA(bdev);
814 u8 val; 814 u8 val;
815 815
816 DBG_printk("In dfx_bus_uninit...\n"); 816 DBG_printk("In dfx_bus_uninit...\n");
817 817
818 /* Uninitialize adapter based on bus type */ 818 /* Uninitialize adapter based on bus type */
819 819
820 if (dfx_bus_eisa) { 820 if (dfx_bus_eisa) {
821 unsigned long base_addr = to_eisa_device(bdev)->base_addr; 821 unsigned long base_addr = to_eisa_device(bdev)->base_addr;
822 822
823 /* Disable interrupts at EISA bus interface chip (ESIC) */ 823 /* Disable interrupts at EISA bus interface chip (ESIC) */
824 val = inb(base_addr + PI_ESIC_K_IO_CONFIG_STAT_0); 824 val = inb(base_addr + PI_ESIC_K_IO_CONFIG_STAT_0);
825 val &= ~PI_CONFIG_STAT_0_M_INT_ENB; 825 val &= ~PI_CONFIG_STAT_0_M_INT_ENB;
826 outb(base_addr + PI_ESIC_K_IO_CONFIG_STAT_0, val); 826 outb(base_addr + PI_ESIC_K_IO_CONFIG_STAT_0, val);
827 } 827 }
828 if (dfx_bus_pci) { 828 if (dfx_bus_pci) {
829 /* Disable interrupts at PCI bus interface chip (PFI) */ 829 /* Disable interrupts at PCI bus interface chip (PFI) */
830 dfx_port_write_long(bp, PFI_K_REG_MODE_CTRL, 0); 830 dfx_port_write_long(bp, PFI_K_REG_MODE_CTRL, 0);
831 } 831 }
832 } 832 }
833 833
834 834
835 /* 835 /*
836 * ======================== 836 * ========================
837 * = dfx_bus_config_check = 837 * = dfx_bus_config_check =
838 * ======================== 838 * ========================
839 * 839 *
840 * Overview: 840 * Overview:
841 * Checks the configuration (burst size, full-duplex, etc.) If any parameters 841 * Checks the configuration (burst size, full-duplex, etc.) If any parameters
842 * are illegal, then this routine will set new defaults. 842 * are illegal, then this routine will set new defaults.
843 * 843 *
844 * Returns: 844 * Returns:
845 * None 845 * None
846 * 846 *
847 * Arguments: 847 * Arguments:
848 * bp - pointer to board information 848 * bp - pointer to board information
849 * 849 *
850 * Functional Description: 850 * Functional Description:
851 * For Revision 1 FDDI EISA, Revision 2 or later FDDI EISA with rev E or later 851 * For Revision 1 FDDI EISA, Revision 2 or later FDDI EISA with rev E or later
852 * PDQ, and all FDDI PCI controllers, all values are legal. 852 * PDQ, and all FDDI PCI controllers, all values are legal.
853 * 853 *
854 * Return Codes: 854 * Return Codes:
855 * None 855 * None
856 * 856 *
857 * Assumptions: 857 * Assumptions:
858 * dfx_adap_init has NOT been called yet so burst size and other items have 858 * dfx_adap_init has NOT been called yet so burst size and other items have
859 * not been set. 859 * not been set.
860 * 860 *
861 * Side Effects: 861 * Side Effects:
862 * None 862 * None
863 */ 863 */
864 864
865 static void __devinit dfx_bus_config_check(DFX_board_t *bp) 865 static void __devinit dfx_bus_config_check(DFX_board_t *bp)
866 { 866 {
867 struct device __maybe_unused *bdev = bp->bus_dev; 867 struct device __maybe_unused *bdev = bp->bus_dev;
868 int dfx_bus_eisa = DFX_BUS_EISA(bdev); 868 int dfx_bus_eisa = DFX_BUS_EISA(bdev);
869 int status; /* return code from adapter port control call */ 869 int status; /* return code from adapter port control call */
870 u32 host_data; /* LW data returned from port control call */ 870 u32 host_data; /* LW data returned from port control call */
871 871
872 DBG_printk("In dfx_bus_config_check...\n"); 872 DBG_printk("In dfx_bus_config_check...\n");
873 873
874 /* Configuration check only valid for EISA adapter */ 874 /* Configuration check only valid for EISA adapter */
875 875
876 if (dfx_bus_eisa) { 876 if (dfx_bus_eisa) {
877 /* 877 /*
878 * First check if revision 2 EISA controller. Rev. 1 cards used 878 * First check if revision 2 EISA controller. Rev. 1 cards used
879 * PDQ revision B, so no workaround needed in this case. Rev. 3 879 * PDQ revision B, so no workaround needed in this case. Rev. 3
880 * cards used PDQ revision E, so no workaround needed in this 880 * cards used PDQ revision E, so no workaround needed in this
881 * case, either. Only Rev. 2 cards used either Rev. D or E 881 * case, either. Only Rev. 2 cards used either Rev. D or E
882 * chips, so we must verify the chip revision on Rev. 2 cards. 882 * chips, so we must verify the chip revision on Rev. 2 cards.
883 */ 883 */
884 if (to_eisa_device(bdev)->id.driver_data == DEFEA_PROD_ID_2) { 884 if (to_eisa_device(bdev)->id.driver_data == DEFEA_PROD_ID_2) {
885 /* 885 /*
886 * Revision 2 FDDI EISA controller found, 886 * Revision 2 FDDI EISA controller found,
887 * so let's check PDQ revision of adapter. 887 * so let's check PDQ revision of adapter.
888 */ 888 */
889 status = dfx_hw_port_ctrl_req(bp, 889 status = dfx_hw_port_ctrl_req(bp,
890 PI_PCTRL_M_SUB_CMD, 890 PI_PCTRL_M_SUB_CMD,
891 PI_SUB_CMD_K_PDQ_REV_GET, 891 PI_SUB_CMD_K_PDQ_REV_GET,
892 0, 892 0,
893 &host_data); 893 &host_data);
894 if ((status != DFX_K_SUCCESS) || (host_data == 2)) 894 if ((status != DFX_K_SUCCESS) || (host_data == 2))
895 { 895 {
896 /* 896 /*
897 * Either we couldn't determine the PDQ revision, or 897 * Either we couldn't determine the PDQ revision, or
898 * we determined that it is at revision D. In either case, 898 * we determined that it is at revision D. In either case,
899 * we need to implement the workaround. 899 * we need to implement the workaround.
900 */ 900 */
901 901
902 /* Ensure that the burst size is set to 8 longwords or less */ 902 /* Ensure that the burst size is set to 8 longwords or less */
903 903
904 switch (bp->burst_size) 904 switch (bp->burst_size)
905 { 905 {
906 case PI_PDATA_B_DMA_BURST_SIZE_32: 906 case PI_PDATA_B_DMA_BURST_SIZE_32:
907 case PI_PDATA_B_DMA_BURST_SIZE_16: 907 case PI_PDATA_B_DMA_BURST_SIZE_16:
908 bp->burst_size = PI_PDATA_B_DMA_BURST_SIZE_8; 908 bp->burst_size = PI_PDATA_B_DMA_BURST_SIZE_8;
909 break; 909 break;
910 910
911 default: 911 default:
912 break; 912 break;
913 } 913 }
914 914
915 /* Ensure that full-duplex mode is not enabled */ 915 /* Ensure that full-duplex mode is not enabled */
916 916
917 bp->full_duplex_enb = PI_SNMP_K_FALSE; 917 bp->full_duplex_enb = PI_SNMP_K_FALSE;
918 } 918 }
919 } 919 }
920 } 920 }
921 } 921 }
922 922
923 923
924 /* 924 /*
925 * =================== 925 * ===================
926 * = dfx_driver_init = 926 * = dfx_driver_init =
927 * =================== 927 * ===================
928 * 928 *
929 * Overview: 929 * Overview:
930 * Initializes remaining adapter board structure information 930 * Initializes remaining adapter board structure information
931 * and makes sure adapter is in a safe state prior to dfx_open(). 931 * and makes sure adapter is in a safe state prior to dfx_open().
932 * 932 *
933 * Returns: 933 * Returns:
934 * Condition code 934 * Condition code
935 * 935 *
936 * Arguments: 936 * Arguments:
937 * dev - pointer to device information 937 * dev - pointer to device information
938 * print_name - printable device name 938 * print_name - printable device name
939 * 939 *
940 * Functional Description: 940 * Functional Description:
941 * This function allocates additional resources such as the host memory 941 * This function allocates additional resources such as the host memory
942 * blocks needed by the adapter (eg. descriptor and consumer blocks). 942 * blocks needed by the adapter (eg. descriptor and consumer blocks).
943 * Remaining bus initialization steps are also completed. The adapter 943 * Remaining bus initialization steps are also completed. The adapter
944 * is also reset so that it is in the DMA_UNAVAILABLE state. The OS 944 * is also reset so that it is in the DMA_UNAVAILABLE state. The OS
945 * must call dfx_open() to open the adapter and bring it on-line. 945 * must call dfx_open() to open the adapter and bring it on-line.
946 * 946 *
947 * Return Codes: 947 * Return Codes:
948 * DFX_K_SUCCESS - initialization succeeded 948 * DFX_K_SUCCESS - initialization succeeded
949 * DFX_K_FAILURE - initialization failed - could not allocate memory 949 * DFX_K_FAILURE - initialization failed - could not allocate memory
950 * or read adapter MAC address 950 * or read adapter MAC address
951 * 951 *
952 * Assumptions: 952 * Assumptions:
953 * Memory allocated from pci_alloc_consistent() call is physically 953 * Memory allocated from pci_alloc_consistent() call is physically
 * contiguous, locked memory.
 *
 * Side Effects:
 *   Adapter is reset and should be in DMA_UNAVAILABLE state before
 *   returning from this routine.
 */

static int __devinit dfx_driver_init(struct net_device *dev,
				     const char *print_name,
				     resource_size_t bar_start)
{
	DFX_board_t *bp = netdev_priv(dev);
	struct device *bdev = bp->bus_dev;
	int dfx_bus_pci = DFX_BUS_PCI(bdev);
	int dfx_bus_eisa = DFX_BUS_EISA(bdev);
	int dfx_bus_tc = DFX_BUS_TC(bdev);
	int dfx_use_mmio = DFX_MMIO || dfx_bus_tc;
	int alloc_size;			/* total buffer size needed */
	char *top_v, *curr_v;		/* virtual addrs into memory block */
	dma_addr_t top_p, curr_p;	/* physical addrs into memory block */
-	u32 data, le32;			/* host data register value */
+	u32 data;			/* host data register value */
+	__le32 le32;
	char *board_name = NULL;

	DBG_printk("In dfx_driver_init...\n");

	/* Initialize bus-specific hardware registers */

	dfx_bus_init(dev);

	/*
	 * Initialize default values for configurable parameters
	 *
	 * Note: All of these parameters are ones that a user may
	 *       want to customize.  It'd be nice to break these
	 *       out into Space.c or someplace else that's more
	 *       accessible/understandable than this file.
	 */

	bp->full_duplex_enb = PI_SNMP_K_FALSE;
	bp->req_ttrt = 8 * 12500;		/* 8ms in 80 nanosec units */
	bp->burst_size = PI_PDATA_B_DMA_BURST_SIZE_DEF;
	bp->rcv_bufs_to_post = RCV_BUFS_DEF;

	/*
	 * Ensure that HW configuration is OK
	 *
	 * Note: Depending on the hardware revision, we may need to modify
	 *       some of the configurable parameters to workaround hardware
	 *       limitations.  We'll perform this configuration check AFTER
	 *       setting the parameters to their default values.
	 */

	dfx_bus_config_check(bp);

	/* Disable PDQ interrupts first */

	dfx_port_write_long(bp, PI_PDQ_K_REG_HOST_INT_ENB, PI_HOST_INT_K_DISABLE_ALL_INTS);

	/* Place adapter in DMA_UNAVAILABLE state by resetting adapter */

	(void) dfx_hw_dma_uninit(bp, PI_PDATA_A_RESET_M_SKIP_ST);

	/* Read the factory MAC address from the adapter then save it */

	if (dfx_hw_port_ctrl_req(bp, PI_PCTRL_M_MLA, PI_PDATA_A_MLA_K_LO, 0,
				 &data) != DFX_K_SUCCESS) {
		printk("%s: Could not read adapter factory MAC address!\n",
		       print_name);
		return(DFX_K_FAILURE);
	}
	le32 = cpu_to_le32(data);
	memcpy(&bp->factory_mac_addr[0], &le32, sizeof(u32));

	if (dfx_hw_port_ctrl_req(bp, PI_PCTRL_M_MLA, PI_PDATA_A_MLA_K_HI, 0,
				 &data) != DFX_K_SUCCESS) {
		printk("%s: Could not read adapter factory MAC address!\n",
		       print_name);
		return(DFX_K_FAILURE);
	}
	le32 = cpu_to_le32(data);
	memcpy(&bp->factory_mac_addr[4], &le32, sizeof(u16));
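The annotation change above (splitting `__le32 le32;` out of the `u32` declarations) exists so sparse can check that the value produced by `cpu_to_le32()` is only ever handled as raw little-endian bytes. Outside the kernel, the same idiom can be sketched portably; `my_cpu_to_le32` and `store_mac_lo` below are illustrative stand-ins for the kernel helpers, not real APIs:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Portable stand-in for the kernel's cpu_to_le32(): return a value
 * whose in-memory bytes are the little-endian image of x, regardless
 * of host byte order. */
static uint32_t my_cpu_to_le32(uint32_t x)
{
	uint8_t b[4] = { x & 0xff, (x >> 8) & 0xff,
			 (x >> 16) & 0xff, (x >> 24) & 0xff };
	uint32_t le;
	memcpy(&le, b, 4);	/* le's bytes are now b[0..3] */
	return le;
}

/* Copy the four low-address octets of a register value into a MAC
 * address buffer, mirroring what dfx_driver_init() does with the
 * PI_PDATA_A_MLA_K_LO word (hypothetical helper name). */
static void store_mac_lo(uint8_t *mac, uint32_t reg)
{
	uint32_t le = my_cpu_to_le32(reg);
	memcpy(mac, &le, 4);	/* only byte copies touch the LE value */
}
```

The point the `__le32` type makes explicit: once converted, the value is a byte pattern, and only `memcpy`-style operations (never host-order arithmetic) should touch it.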

	/*
	 * Set current address to factory address
	 *
	 * Note: Node address override support is handled through
	 *       dfx_ctl_set_mac_address.
	 */

	memcpy(dev->dev_addr, bp->factory_mac_addr, FDDI_K_ALEN);
	if (dfx_bus_tc)
		board_name = "DEFTA";
	if (dfx_bus_eisa)
		board_name = "DEFEA";
	if (dfx_bus_pci)
		board_name = "DEFPA";
	pr_info("%s: %s at %saddr = 0x%llx, IRQ = %d, "
		"Hardware addr = %02X-%02X-%02X-%02X-%02X-%02X\n",
		print_name, board_name, dfx_use_mmio ? "" : "I/O ",
		(long long)bar_start, dev->irq,
		dev->dev_addr[0], dev->dev_addr[1], dev->dev_addr[2],
		dev->dev_addr[3], dev->dev_addr[4], dev->dev_addr[5]);

	/*
	 * Get memory for descriptor block, consumer block, and other buffers
	 * that need to be DMA read or written to by the adapter.
	 */

	alloc_size = sizeof(PI_DESCR_BLOCK) +
		     PI_CMD_REQ_K_SIZE_MAX +
		     PI_CMD_RSP_K_SIZE_MAX +
#ifndef DYNAMIC_BUFFERS
		     (bp->rcv_bufs_to_post * PI_RCV_DATA_K_SIZE_MAX) +
#endif
		     sizeof(PI_CONSUMER_BLOCK) +
		     (PI_ALIGN_K_DESC_BLK - 1);
	bp->kmalloced = top_v = dma_alloc_coherent(bp->bus_dev, alloc_size,
						   &bp->kmalloced_dma,
						   GFP_ATOMIC);
	if (top_v == NULL) {
		printk("%s: Could not allocate memory for host buffers "
		       "and structures!\n", print_name);
		return(DFX_K_FAILURE);
	}
	memset(top_v, 0, alloc_size);	/* zero out memory before continuing */
	top_p = bp->kmalloced_dma;	/* get physical address of buffer */

	/*
	 * To guarantee the 8K alignment required for the descriptor block, 8K - 1
	 * plus the amount of memory needed was allocated.  The physical address
	 * is now 8K aligned.  By carving up the memory in a specific order,
	 * we'll guarantee the alignment requirements for all other structures.
	 *
	 * Note: If the assumptions change regarding the non-paged, non-cached,
	 *       physically contiguous nature of the memory block or the address
	 *       alignments, then we'll need to implement a different algorithm
	 *       for allocating the needed memory.
	 */

	curr_p = ALIGN(top_p, PI_ALIGN_K_DESC_BLK);
	curr_v = top_v + (curr_p - top_p);
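The carving scheme the comment above describes — over-allocate by alignment minus one, round the bus address up to the boundary, then advance the virtual pointer by the same delta so both views stay in step — can be sketched in plain C. This is a userspace illustration only: the driver operates on a `dma_addr_t` from `dma_alloc_coherent()`, not `malloc`, and the names below are invented for the sketch (error handling omitted):

```c
#include <stdint.h>
#include <stdlib.h>

/* Round x up to the next multiple of a (a must be a power of two),
 * like the kernel's ALIGN() macro. */
#define MY_ALIGN(x, a) (((x) + ((a) - 1)) & ~((uintptr_t)(a) - 1))

struct carve {
	char *base;	/* what to eventually free */
	char *aligned;	/* aligned start of the first carved region */
};

/* Allocate size + align - 1 bytes so that an align-boundary address
 * is guaranteed to fall inside the block, then offset the virtual
 * pointer by the same amount the address was rounded up. */
static struct carve carve_aligned(size_t size, size_t align)
{
	struct carve c;
	c.base = malloc(size + align - 1);
	uintptr_t top_p = (uintptr_t)c.base;
	uintptr_t curr_p = MY_ALIGN(top_p, align);
	c.aligned = c.base + (curr_p - top_p);	/* same delta on both views */
	return c;
}
```

Because every structure carved out after the 8K-aligned descriptor block has a size that preserves its own (smaller) alignment requirement, a single rounded base is enough for the whole block.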

	/* Reserve space for descriptor block */

	bp->descr_block_virt = (PI_DESCR_BLOCK *) curr_v;
	bp->descr_block_phys = curr_p;
	curr_v += sizeof(PI_DESCR_BLOCK);
	curr_p += sizeof(PI_DESCR_BLOCK);

	/* Reserve space for command request buffer */

	bp->cmd_req_virt = (PI_DMA_CMD_REQ *) curr_v;
	bp->cmd_req_phys = curr_p;
	curr_v += PI_CMD_REQ_K_SIZE_MAX;
	curr_p += PI_CMD_REQ_K_SIZE_MAX;

	/* Reserve space for command response buffer */

	bp->cmd_rsp_virt = (PI_DMA_CMD_RSP *) curr_v;
	bp->cmd_rsp_phys = curr_p;
	curr_v += PI_CMD_RSP_K_SIZE_MAX;
	curr_p += PI_CMD_RSP_K_SIZE_MAX;

	/* Reserve space for the LLC host receive queue buffers */

	bp->rcv_block_virt = curr_v;
	bp->rcv_block_phys = curr_p;

#ifndef DYNAMIC_BUFFERS
	curr_v += (bp->rcv_bufs_to_post * PI_RCV_DATA_K_SIZE_MAX);
	curr_p += (bp->rcv_bufs_to_post * PI_RCV_DATA_K_SIZE_MAX);
#endif

	/* Reserve space for the consumer block */

	bp->cons_block_virt = (PI_CONSUMER_BLOCK *) curr_v;
	bp->cons_block_phys = curr_p;

	/* Display virtual and physical addresses if debug driver */

	DBG_printk("%s: Descriptor block virt = %0lX, phys = %0X\n",
		   print_name,
		   (long)bp->descr_block_virt, bp->descr_block_phys);
	DBG_printk("%s: Command Request buffer virt = %0lX, phys = %0X\n",
		   print_name, (long)bp->cmd_req_virt, bp->cmd_req_phys);
	DBG_printk("%s: Command Response buffer virt = %0lX, phys = %0X\n",
		   print_name, (long)bp->cmd_rsp_virt, bp->cmd_rsp_phys);
	DBG_printk("%s: Receive buffer block virt = %0lX, phys = %0X\n",
		   print_name, (long)bp->rcv_block_virt, bp->rcv_block_phys);
	DBG_printk("%s: Consumer block virt = %0lX, phys = %0X\n",
		   print_name, (long)bp->cons_block_virt, bp->cons_block_phys);

	return(DFX_K_SUCCESS);
}


/*
 * =================
 * = dfx_adap_init =
 * =================
 *
 * Overview:
 *   Brings the adapter to the link avail/link unavailable state.
 *
 * Returns:
 *   Condition code
 *
 * Arguments:
 *   bp          - pointer to board information
 *   get_buffers - non-zero if buffers to be allocated
 *
 * Functional Description:
 *   Issues the low-level firmware/hardware calls necessary to bring
 *   the adapter up, or to properly reset and restore adapter during
 *   run-time.
 *
 * Return Codes:
 *   DFX_K_SUCCESS - Adapter brought up successfully
 *   DFX_K_FAILURE - Adapter initialization failed
 *
 * Assumptions:
 *   bp->reset_type should be set to a valid reset type value before
 *   calling this routine.
 *
 * Side Effects:
 *   Adapter should be in LINK_AVAILABLE or LINK_UNAVAILABLE state
 *   upon a successful return of this routine.
 */

static int dfx_adap_init(DFX_board_t *bp, int get_buffers)
{
	DBG_printk("In dfx_adap_init...\n");

	/* Disable PDQ interrupts first */

	dfx_port_write_long(bp, PI_PDQ_K_REG_HOST_INT_ENB, PI_HOST_INT_K_DISABLE_ALL_INTS);

	/* Place adapter in DMA_UNAVAILABLE state by resetting adapter */

	if (dfx_hw_dma_uninit(bp, bp->reset_type) != DFX_K_SUCCESS)
	{
		printk("%s: Could not uninitialize/reset adapter!\n", bp->dev->name);
		return(DFX_K_FAILURE);
	}

	/*
	 * When the PDQ is reset, some false Type 0 interrupts may be pending,
	 * so we'll acknowledge all Type 0 interrupts now before continuing.
	 */

	dfx_port_write_long(bp, PI_PDQ_K_REG_TYPE_0_STATUS, PI_HOST_INT_K_ACK_ALL_TYPE_0);

	/*
	 * Clear Type 1 and Type 2 registers before going to DMA_AVAILABLE state
	 *
	 * Note: We only need to clear host copies of these registers.  The PDQ reset
	 *       takes care of the on-board register values.
	 */

	bp->cmd_req_reg.lword = 0;
	bp->cmd_rsp_reg.lword = 0;
	bp->rcv_xmt_reg.lword = 0;

	/* Clear consumer block before going to DMA_AVAILABLE state */

	memset(bp->cons_block_virt, 0, sizeof(PI_CONSUMER_BLOCK));

	/* Initialize the DMA Burst Size */

	if (dfx_hw_port_ctrl_req(bp,
				 PI_PCTRL_M_SUB_CMD,
				 PI_SUB_CMD_K_BURST_SIZE_SET,
				 bp->burst_size,
				 NULL) != DFX_K_SUCCESS)
	{
		printk("%s: Could not set adapter burst size!\n", bp->dev->name);
		return(DFX_K_FAILURE);
	}

	/*
	 * Set base address of Consumer Block
	 *
	 * Assumption: 32-bit physical address of consumer block is 64 byte
	 *             aligned.  That is, bits 0-5 of the address must be zero.
	 */

	if (dfx_hw_port_ctrl_req(bp,
				 PI_PCTRL_M_CONS_BLOCK,
				 bp->cons_block_phys,
				 0,
				 NULL) != DFX_K_SUCCESS)
	{
		printk("%s: Could not set consumer block address!\n", bp->dev->name);
		return(DFX_K_FAILURE);
	}

	/*
	 * Set the base address of Descriptor Block and bring adapter
	 * to DMA_AVAILABLE state.
	 *
	 * Note: We also set the literal and data swapping requirements
	 *       in this command.
	 *
	 * Assumption: 32-bit physical address of descriptor block
	 *             is 8Kbyte aligned.
	 */
	if (dfx_hw_port_ctrl_req(bp, PI_PCTRL_M_INIT,
				 (u32)(bp->descr_block_phys |
				       PI_PDATA_A_INIT_M_BSWAP_INIT),
				 0, NULL) != DFX_K_SUCCESS) {
		printk("%s: Could not set descriptor block address!\n",
		       bp->dev->name);
		return DFX_K_FAILURE;
	}
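The init command above relies on the 8K alignment assumption: because the descriptor block address is a multiple of 8192, its low 13 bits are zero, which is what lets the driver OR `PI_PDATA_A_INIT_M_BSWAP_INIT` into the same 32-bit word. A minimal sketch of that pack/unpack idiom, using a hypothetical flag value rather than the real PDQ bit definitions:

```c
#include <stdint.h>

/* Flags packed into the low bits of an aligned address: since the
 * address is a multiple of DESC_ALIGN, bits 0-12 are free to carry
 * control bits and the receiver can mask them apart again.
 * FLAG_BSWAP is an illustrative value, not the real
 * PI_PDATA_A_INIT_M_BSWAP_INIT. */
#define DESC_ALIGN	8192u
#define FLAG_BSWAP	0x1u

static uint32_t pack_init_word(uint32_t phys, uint32_t flags)
{
	return phys | flags;	/* safe only because phys is aligned */
}

static uint32_t unpack_addr(uint32_t word)
{
	return word & ~(DESC_ALIGN - 1);	/* drop the flag bits */
}

static uint32_t unpack_flags(uint32_t word)
{
	return word & (DESC_ALIGN - 1);		/* keep only the flag bits */
}
```

The same trick appears throughout kernel code (e.g. flags stored in page-aligned pointers); it trades one combined register write for a hard alignment precondition.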

	/* Set transmit flush timeout value */

	bp->cmd_req_virt->cmd_type = PI_CMD_K_CHARS_SET;
	bp->cmd_req_virt->char_set.item[0].item_code = PI_ITEM_K_FLUSH_TIME;
	bp->cmd_req_virt->char_set.item[0].value = 3;	/* 3 seconds */
	bp->cmd_req_virt->char_set.item[0].item_index = 0;
	bp->cmd_req_virt->char_set.item[1].item_code = PI_ITEM_K_EOL;
	if (dfx_hw_dma_cmd_req(bp) != DFX_K_SUCCESS)
	{
		printk("%s: DMA command request failed!\n", bp->dev->name);
		return(DFX_K_FAILURE);
	}

	/* Set the initial values for eFDXEnable and MACTReq MIB objects */

	bp->cmd_req_virt->cmd_type = PI_CMD_K_SNMP_SET;
	bp->cmd_req_virt->snmp_set.item[0].item_code = PI_ITEM_K_FDX_ENB_DIS;
	bp->cmd_req_virt->snmp_set.item[0].value = bp->full_duplex_enb;
	bp->cmd_req_virt->snmp_set.item[0].item_index = 0;
	bp->cmd_req_virt->snmp_set.item[1].item_code = PI_ITEM_K_MAC_T_REQ;
	bp->cmd_req_virt->snmp_set.item[1].value = bp->req_ttrt;
	bp->cmd_req_virt->snmp_set.item[1].item_index = 0;
	bp->cmd_req_virt->snmp_set.item[2].item_code = PI_ITEM_K_EOL;
	if (dfx_hw_dma_cmd_req(bp) != DFX_K_SUCCESS)
	{
		printk("%s: DMA command request failed!\n", bp->dev->name);
		return(DFX_K_FAILURE);
	}

	/* Initialize adapter CAM */

	if (dfx_ctl_update_cam(bp) != DFX_K_SUCCESS)
	{
		printk("%s: Adapter CAM update failed!\n", bp->dev->name);
		return(DFX_K_FAILURE);
	}

	/* Initialize adapter filters */

	if (dfx_ctl_update_filters(bp) != DFX_K_SUCCESS)
	{
		printk("%s: Adapter filters update failed!\n", bp->dev->name);
		return(DFX_K_FAILURE);
	}

	/*
	 * Remove any existing dynamic buffers (i.e. if the adapter is being
	 * reinitialized)
	 */

	if (get_buffers)
		dfx_rcv_flush(bp);

	/* Initialize receive descriptor block and produce buffers */

	if (dfx_rcv_init(bp, get_buffers))
	{
		printk("%s: Receive buffer allocation failed\n", bp->dev->name);
		if (get_buffers)
			dfx_rcv_flush(bp);
		return(DFX_K_FAILURE);
	}

	/* Issue START command and bring adapter to LINK_(UN)AVAILABLE state */

	bp->cmd_req_virt->cmd_type = PI_CMD_K_START;
	if (dfx_hw_dma_cmd_req(bp) != DFX_K_SUCCESS)
	{
		printk("%s: Start command failed\n", bp->dev->name);
		if (get_buffers)
			dfx_rcv_flush(bp);
		return(DFX_K_FAILURE);
	}

	/* Initialization succeeded, reenable PDQ interrupts */

	dfx_port_write_long(bp, PI_PDQ_K_REG_HOST_INT_ENB, PI_HOST_INT_K_ENABLE_DEF_INTS);
	return(DFX_K_SUCCESS);
}


/*
 * ============
 * = dfx_open =
 * ============
 *
 * Overview:
 *   Opens the adapter
 *
 * Returns:
 *   Condition code
 *
 * Arguments:
 *   dev - pointer to device information
 *
 * Functional Description:
 *   This function brings the adapter to an operational state.
 *
 * Return Codes:
 *   0       - Adapter was successfully opened
 *   -EAGAIN - Could not register IRQ or adapter initialization failed
 *
 * Assumptions:
 *   This routine should only be called for a device that was
 *   initialized successfully.
 *
 * Side Effects:
 *   Adapter should be in LINK_AVAILABLE or LINK_UNAVAILABLE state
 *   if the open is successful.
 */

static int dfx_open(struct net_device *dev)
{
	DFX_board_t *bp = netdev_priv(dev);
	int ret;

	DBG_printk("In dfx_open...\n");

	/* Register IRQ - support shared interrupts by passing device ptr */

	ret = request_irq(dev->irq, dfx_interrupt, IRQF_SHARED, dev->name,
			  dev);
	if (ret) {
		printk(KERN_ERR "%s: Requested IRQ %d is busy\n", dev->name, dev->irq);
		return ret;
	}

	/*
	 * Set current address to factory MAC address
	 *
	 * Note: We've already done this step in dfx_driver_init.
	 *       However, it's possible that a user has set a node
	 *       address override, then closed and reopened the
	 *       adapter.  Unless we reset the device address field
	 *       now, we'll continue to use the existing modified
	 *       address.
	 */

	memcpy(dev->dev_addr, bp->factory_mac_addr, FDDI_K_ALEN);

	/* Clear local unicast/multicast address tables and counts */

	memset(bp->uc_table, 0, sizeof(bp->uc_table));
	memset(bp->mc_table, 0, sizeof(bp->mc_table));
	bp->uc_count = 0;
	bp->mc_count = 0;

	/* Disable promiscuous filter settings */

	bp->ind_group_prom = PI_FSTATE_K_BLOCK;
	bp->group_prom = PI_FSTATE_K_BLOCK;

	spin_lock_init(&bp->lock);

	/* Reset and initialize adapter */

	bp->reset_type = PI_PDATA_A_RESET_M_SKIP_ST;	/* skip self-test */
	if (dfx_adap_init(bp, 1) != DFX_K_SUCCESS)
	{
		printk(KERN_ERR "%s: Adapter open failed!\n", dev->name);
		free_irq(dev->irq, dev);
		return -EAGAIN;
	}

	/* Set device structure info */
	netif_start_queue(dev);
	return(0);
}


/*
 * =============
 * = dfx_close =
 * =============
 *
 * Overview:
 *   Closes the device/module.
 *
 * Returns:
 *   Condition code
 *
 * Arguments:
 *   dev - pointer to device information
 *
 * Functional Description:
 *   This routine closes the adapter and brings it to a safe state.
 *   The interrupt service routine is deregistered with the OS.
 *   The adapter can be opened again with another call to dfx_open().
 *
 * Return Codes:
 *   Always return 0.
 *
 * Assumptions:
 *   No further requests for this adapter are made after this routine is
 *   called.  dfx_open() can be called to reset and reinitialize the
 *   adapter.
 *
 * Side Effects:
 *   Adapter should be in DMA_UNAVAILABLE state upon completion of this
 *   routine.
 */

static int dfx_close(struct net_device *dev)
{
	DFX_board_t *bp = netdev_priv(dev);

	DBG_printk("In dfx_close...\n");

	/* Disable PDQ interrupts first */

	dfx_port_write_long(bp, PI_PDQ_K_REG_HOST_INT_ENB, PI_HOST_INT_K_DISABLE_ALL_INTS);

	/* Place adapter in DMA_UNAVAILABLE state by resetting adapter */

	(void) dfx_hw_dma_uninit(bp, PI_PDATA_A_RESET_M_SKIP_ST);

	/*
	 * Flush any pending transmit buffers
	 *
	 * Note: It's important that we flush the transmit buffers
	 *       BEFORE we clear our copy of the Type 2 register.
	 *       Otherwise, we'll have no idea how many buffers
	 *       we need to free.
	 */

	dfx_xmt_flush(bp);

	/*
	 * Clear Type 1 and Type 2 registers after adapter reset
	 *
	 * Note: Even though we're closing the adapter, it's
	 *       possible that an interrupt will occur after
	 *       dfx_close is called.  Without some assurance to
	 *       the contrary we want to make sure that we don't
	 *       process receive and transmit LLC frames and update
	 *       the Type 2 register with bad information.
	 */

	bp->cmd_req_reg.lword = 0;
	bp->cmd_rsp_reg.lword = 0;
	bp->rcv_xmt_reg.lword = 0;

	/* Clear consumer block for the same reason given above */

	memset(bp->cons_block_virt, 0, sizeof(PI_CONSUMER_BLOCK));

	/* Release all dynamically allocated skbs in the receive ring. */

	dfx_rcv_flush(bp);

	/* Clear device structure flags */

	netif_stop_queue(dev);
1522 1523
1523 /* Deregister (free) IRQ */ 1524 /* Deregister (free) IRQ */
1524 1525
1525 free_irq(dev->irq, dev); 1526 free_irq(dev->irq, dev);
1526 1527
1527 return(0); 1528 return(0);
1528 } 1529 }


/*
 * ======================
 * = dfx_int_pr_halt_id =
 * ======================
 *
 * Overview:
 *   Displays halt id's in string form.
 *
 * Returns:
 *   None
 *
 * Arguments:
 *   bp - pointer to board information
 *
 * Functional Description:
 *   Determine current halt id and display appropriate string.
 *
 * Return Codes:
 *   None
 *
 * Assumptions:
 *   None
 *
 * Side Effects:
 *   None
 */

static void dfx_int_pr_halt_id(DFX_board_t *bp)
{
	PI_UINT32 port_status;		/* PDQ port status register value */
	PI_UINT32 halt_id;		/* PDQ port status halt ID */

	/* Read the latest port status */

	dfx_port_read_long(bp, PI_PDQ_K_REG_PORT_STATUS, &port_status);

	/* Display halt state transition information */

	halt_id = (port_status & PI_PSTATUS_M_HALT_ID) >> PI_PSTATUS_V_HALT_ID;
	switch (halt_id)
	{
	case PI_HALT_ID_K_SELFTEST_TIMEOUT:
		printk("%s: Halt ID: Selftest Timeout\n", bp->dev->name);
		break;

	case PI_HALT_ID_K_PARITY_ERROR:
		printk("%s: Halt ID: Host Bus Parity Error\n", bp->dev->name);
		break;

	case PI_HALT_ID_K_HOST_DIR_HALT:
		printk("%s: Halt ID: Host-Directed Halt\n", bp->dev->name);
		break;

	case PI_HALT_ID_K_SW_FAULT:
		printk("%s: Halt ID: Adapter Software Fault\n", bp->dev->name);
		break;

	case PI_HALT_ID_K_HW_FAULT:
		printk("%s: Halt ID: Adapter Hardware Fault\n", bp->dev->name);
		break;

	case PI_HALT_ID_K_PC_TRACE:
		printk("%s: Halt ID: FDDI Network PC Trace Path Test\n", bp->dev->name);
		break;

	case PI_HALT_ID_K_DMA_ERROR:
		printk("%s: Halt ID: Adapter DMA Error\n", bp->dev->name);
		break;

	case PI_HALT_ID_K_IMAGE_CRC_ERROR:
		printk("%s: Halt ID: Firmware Image CRC Error\n", bp->dev->name);
		break;

	case PI_HALT_ID_K_BUS_EXCEPTION:
		printk("%s: Halt ID: 68000 Bus Exception\n", bp->dev->name);
		break;

	default:
		printk("%s: Halt ID: Unknown (code = %X)\n", bp->dev->name, halt_id);
		break;
	}
}


/*
 * ==========================
 * = dfx_int_type_0_process =
 * ==========================
 *
 * Overview:
 *   Processes Type 0 interrupts.
 *
 * Returns:
 *   None
 *
 * Arguments:
 *   bp - pointer to board information
 *
 * Functional Description:
 *   Processes all enabled Type 0 interrupts.  If the reason for the interrupt
 *   is a serious fault on the adapter, then an error message is displayed
 *   and the adapter is reset.
 *
 *   One tricky potential timing window is the rapid succession of "link avail"
 *   "link unavail" state change interrupts.  The acknowledgement of the Type 0
 *   interrupt must be done before reading the state from the Port Status
 *   register.  This is true because a state change could occur after reading
 *   the data, but before acknowledging the interrupt.  If this state change
 *   does happen, it would be lost because the driver is using the old state,
 *   and it will never know about the new state because it subsequently
 *   acknowledges the state change interrupt.
 *
 *	    INCORRECT			    CORRECT
 *	read type 0 int reasons		read type 0 int reasons
 *	read adapter state		ack type 0 interrupts
 *	ack type 0 interrupts		read adapter state
 *	... process interrupt ...	... process interrupt ...
 *
 * Return Codes:
 *   None
 *
 * Assumptions:
 *   None
 *
 * Side Effects:
 *   An adapter reset may occur if the adapter has any Type 0 error interrupts
 *   or if the port status indicates that the adapter is halted.  The driver
 *   is responsible for reinitializing the adapter with the current CAM
 *   contents and adapter filter settings.
 */

static void dfx_int_type_0_process(DFX_board_t *bp)
{
	PI_UINT32 type_0_status;	/* Host Interrupt Type 0 register */
	PI_UINT32 state;		/* current adap state (from port status) */

	/*
	 * Read host interrupt Type 0 register to determine which Type 0
	 * interrupts are pending.  Immediately write it back out to clear
	 * those interrupts.
	 */

	dfx_port_read_long(bp, PI_PDQ_K_REG_TYPE_0_STATUS, &type_0_status);
	dfx_port_write_long(bp, PI_PDQ_K_REG_TYPE_0_STATUS, type_0_status);

	/* Check for Type 0 error interrupts */

	if (type_0_status & (PI_TYPE_0_STAT_M_NXM |
			     PI_TYPE_0_STAT_M_PM_PAR_ERR |
			     PI_TYPE_0_STAT_M_BUS_PAR_ERR))
	{
		/* Check for Non-Existent Memory error */

		if (type_0_status & PI_TYPE_0_STAT_M_NXM)
			printk("%s: Non-Existent Memory Access Error\n", bp->dev->name);

		/* Check for Packet Memory Parity error */

		if (type_0_status & PI_TYPE_0_STAT_M_PM_PAR_ERR)
			printk("%s: Packet Memory Parity Error\n", bp->dev->name);

		/* Check for Host Bus Parity error */

		if (type_0_status & PI_TYPE_0_STAT_M_BUS_PAR_ERR)
			printk("%s: Host Bus Parity Error\n", bp->dev->name);

		/* Reset adapter and bring it back on-line */

		bp->link_available = PI_K_FALSE;	/* link is no longer available */
		bp->reset_type = 0;			/* rerun on-board diagnostics */
		printk("%s: Resetting adapter...\n", bp->dev->name);
		if (dfx_adap_init(bp, 0) != DFX_K_SUCCESS)
		{
			printk("%s: Adapter reset failed!  Disabling adapter interrupts.\n", bp->dev->name);
			dfx_port_write_long(bp, PI_PDQ_K_REG_HOST_INT_ENB, PI_HOST_INT_K_DISABLE_ALL_INTS);
			return;
		}
		printk("%s: Adapter reset successful!\n", bp->dev->name);
		return;
	}

	/* Check for transmit flush interrupt */

	if (type_0_status & PI_TYPE_0_STAT_M_XMT_FLUSH)
	{
		/* Flush any pending xmt's and acknowledge the flush interrupt */

		bp->link_available = PI_K_FALSE;	/* link is no longer available */
		dfx_xmt_flush(bp);			/* flush any outstanding packets */
		(void) dfx_hw_port_ctrl_req(bp,
					    PI_PCTRL_M_XMT_DATA_FLUSH_DONE,
					    0,
					    0,
					    NULL);
	}

	/* Check for adapter state change */

	if (type_0_status & PI_TYPE_0_STAT_M_STATE_CHANGE)
	{
		/* Get latest adapter state */

		state = dfx_hw_adap_state_rd(bp);	/* get adapter state */
		if (state == PI_STATE_K_HALTED)
		{
			/*
			 * Adapter has transitioned to HALTED state, try to reset
			 * adapter to bring it back on-line.  If reset fails,
			 * leave the adapter in the broken state.
			 */

			printk("%s: Controller has transitioned to HALTED state!\n", bp->dev->name);
			dfx_int_pr_halt_id(bp);		/* display halt id as string */

			/* Reset adapter and bring it back on-line */

			bp->link_available = PI_K_FALSE;	/* link is no longer available */
			bp->reset_type = 0;			/* rerun on-board diagnostics */
			printk("%s: Resetting adapter...\n", bp->dev->name);
			if (dfx_adap_init(bp, 0) != DFX_K_SUCCESS)
			{
				printk("%s: Adapter reset failed!  Disabling adapter interrupts.\n", bp->dev->name);
				dfx_port_write_long(bp, PI_PDQ_K_REG_HOST_INT_ENB, PI_HOST_INT_K_DISABLE_ALL_INTS);
				return;
			}
			printk("%s: Adapter reset successful!\n", bp->dev->name);
		}
		else if (state == PI_STATE_K_LINK_AVAIL)
		{
			bp->link_available = PI_K_TRUE;	/* set link available flag */
		}
	}
}


/*
 * ==================
 * = dfx_int_common =
 * ==================
 *
 * Overview:
 *   Interrupt service routine (ISR)
 *
 * Returns:
 *   None
 *
 * Arguments:
 *   bp - pointer to board information
 *
 * Functional Description:
 *   This is the ISR which processes incoming adapter interrupts.
 *
 * Return Codes:
 *   None
 *
 * Assumptions:
 *   This routine assumes PDQ interrupts have not been disabled.
 *   When interrupts are disabled at the PDQ, the Port Status register
 *   is automatically cleared.  This routine uses the Port Status
 *   register value to determine whether a Type 0 interrupt occurred,
 *   so it's important that adapter interrupts are not normally
 *   enabled/disabled at the PDQ.
 *
 *   It's vital that this routine is NOT reentered for the
 *   same board and that the OS is not in another section of
 *   code (e.g. dfx_xmt_queue_pkt) for the same board on a
 *   different thread.
 *
 * Side Effects:
 *   Pending interrupts are serviced.  Depending on the type of
 *   interrupt, acknowledging and clearing the interrupt at the
 *   PDQ involves writing a register to clear the interrupt bit
 *   or updating completion indices.
 */

static void dfx_int_common(struct net_device *dev)
{
	DFX_board_t *bp = netdev_priv(dev);
	PI_UINT32 port_status;		/* Port Status register */

	/* Process xmt interrupts - frequent case, so always call this routine */

	if (dfx_xmt_done(bp))			/* free consumed xmt packets */
		netif_wake_queue(dev);

	/* Process rcv interrupts - frequent case, so always call this routine */

	dfx_rcv_queue_process(bp);		/* service received LLC frames */

	/*
	 * Transmit and receive producer and completion indices are updated on the
	 * adapter by writing to the Type 2 Producer register.  Since the frequent
	 * case is that we'll be processing either LLC transmit or receive buffers,
	 * we'll optimize I/O writes by doing a single register write here.
	 */

	dfx_port_write_long(bp, PI_PDQ_K_REG_TYPE_2_PROD, bp->rcv_xmt_reg.lword);

	/* Read PDQ Port Status register to find out which interrupts need processing */

	dfx_port_read_long(bp, PI_PDQ_K_REG_PORT_STATUS, &port_status);

	/* Process Type 0 interrupts (if any) - infrequent, so only call when needed */

	if (port_status & PI_PSTATUS_M_TYPE_0_PENDING)
		dfx_int_type_0_process(bp);	/* process Type 0 interrupts */
}


/*
 * =================
 * = dfx_interrupt =
 * =================
 *
 * Overview:
 *   Interrupt processing routine
 *
 * Returns:
 *   Whether a valid interrupt was seen.
 *
 * Arguments:
 *   irq    - interrupt vector
 *   dev_id - pointer to device information
 *
 * Functional Description:
 *   This routine calls the interrupt processing routine for this adapter.  It
 *   disables and reenables adapter interrupts, as appropriate.  We can support
 *   shared interrupts since the incoming dev_id pointer provides our device
 *   structure context.
 *
 * Return Codes:
 *   IRQ_HANDLED - an IRQ was handled.
 *   IRQ_NONE    - no IRQ was handled.
 *
 * Assumptions:
 *   The interrupt acknowledgement at the hardware level (e.g. ACKing the PIC
 *   on Intel-based systems) is done by the operating system outside this
 *   routine.
 *
 *   System interrupts are enabled through this call.
 *
 * Side Effects:
 *   Interrupts are disabled, then reenabled at the adapter.
 */

static irqreturn_t dfx_interrupt(int irq, void *dev_id)
{
	struct net_device *dev = dev_id;
	DFX_board_t *bp = netdev_priv(dev);
	struct device *bdev = bp->bus_dev;
	int dfx_bus_pci = DFX_BUS_PCI(bdev);
	int dfx_bus_eisa = DFX_BUS_EISA(bdev);
	int dfx_bus_tc = DFX_BUS_TC(bdev);

	/* Service adapter interrupts */

	if (dfx_bus_pci) {
		u32 status;

		dfx_port_read_long(bp, PFI_K_REG_STATUS, &status);
		if (!(status & PFI_STATUS_M_PDQ_INT))
			return IRQ_NONE;

		spin_lock(&bp->lock);

		/* Disable PDQ-PFI interrupts at PFI */
		dfx_port_write_long(bp, PFI_K_REG_MODE_CTRL,
				    PFI_MODE_M_DMA_ENB);

		/* Call interrupt service routine for this adapter */
		dfx_int_common(dev);

		/* Clear PDQ interrupt status bit and reenable interrupts */
		dfx_port_write_long(bp, PFI_K_REG_STATUS,
				    PFI_STATUS_M_PDQ_INT);
		dfx_port_write_long(bp, PFI_K_REG_MODE_CTRL,
				    (PFI_MODE_M_PDQ_INT_ENB |
				     PFI_MODE_M_DMA_ENB));

		spin_unlock(&bp->lock);
	}
	if (dfx_bus_eisa) {
		unsigned long base_addr = to_eisa_device(bdev)->base_addr;
		u8 status;

		status = inb(base_addr + PI_ESIC_K_IO_CONFIG_STAT_0);
		if (!(status & PI_CONFIG_STAT_0_M_PEND))
			return IRQ_NONE;

		spin_lock(&bp->lock);

		/* Disable interrupts at the ESIC */
		status &= ~PI_CONFIG_STAT_0_M_INT_ENB;
		outb(status, base_addr + PI_ESIC_K_IO_CONFIG_STAT_0);

		/* Call interrupt service routine for this adapter */
		dfx_int_common(dev);

		/* Reenable interrupts at the ESIC */
		status = inb(base_addr + PI_ESIC_K_IO_CONFIG_STAT_0);
		status |= PI_CONFIG_STAT_0_M_INT_ENB;
		outb(status, base_addr + PI_ESIC_K_IO_CONFIG_STAT_0);

		spin_unlock(&bp->lock);
	}
	if (dfx_bus_tc) {
		u32 status;

		dfx_port_read_long(bp, PI_PDQ_K_REG_PORT_STATUS, &status);
		if (!(status & (PI_PSTATUS_M_RCV_DATA_PENDING |
				PI_PSTATUS_M_XMT_DATA_PENDING |
				PI_PSTATUS_M_SMT_HOST_PENDING |
				PI_PSTATUS_M_UNSOL_PENDING |
				PI_PSTATUS_M_CMD_RSP_PENDING |
				PI_PSTATUS_M_CMD_REQ_PENDING |
				PI_PSTATUS_M_TYPE_0_PENDING)))
			return IRQ_NONE;

		spin_lock(&bp->lock);

		/* Call interrupt service routine for this adapter */
		dfx_int_common(dev);

		spin_unlock(&bp->lock);
	}

	return IRQ_HANDLED;
}
1960 1961
1961 1962
1962 /* 1963 /*
1963 * ===================== 1964 * =====================
1964 * = dfx_ctl_get_stats = 1965 * = dfx_ctl_get_stats =
1965 * ===================== 1966 * =====================
1966 * 1967 *
1967 * Overview: 1968 * Overview:
1968 * Get statistics for FDDI adapter 1969 * Get statistics for FDDI adapter
1969 * 1970 *
1970 * Returns: 1971 * Returns:
1971 * Pointer to FDDI statistics structure 1972 * Pointer to FDDI statistics structure
1972 * 1973 *
1973 * Arguments: 1974 * Arguments:
1974 * dev - pointer to device information 1975 * dev - pointer to device information
1975 * 1976 *
1976 * Functional Description: 1977 * Functional Description:
1977 * Gets current MIB objects from adapter, then 1978 * Gets current MIB objects from adapter, then
1978 * returns FDDI statistics structure as defined 1979 * returns FDDI statistics structure as defined
1979 * in if_fddi.h. 1980 * in if_fddi.h.
1980 * 1981 *
1981 * Note: Since the FDDI statistics structure is 1982 * Note: Since the FDDI statistics structure is
1982 * still new and the device structure doesn't 1983 * still new and the device structure doesn't
1983 * have an FDDI-specific get statistics handler, 1984 * have an FDDI-specific get statistics handler,
1984 * we'll return the FDDI statistics structure as 1985 * we'll return the FDDI statistics structure as
1985 * a pointer to an Ethernet statistics structure. 1986 * a pointer to an Ethernet statistics structure.
1986 * That way, at least the first part of the statistics 1987 * That way, at least the first part of the statistics
1987 * structure can be decoded properly, and it allows 1988 * structure can be decoded properly, and it allows
1988 * "smart" applications to perform a second cast to 1989 * "smart" applications to perform a second cast to
1989 * decode the FDDI-specific statistics. 1990 * decode the FDDI-specific statistics.
1990 * 1991 *
1991 * We'll have to pay attention to this routine as the 1992 * We'll have to pay attention to this routine as the
1992 * device structure becomes more mature and LAN media 1993 * device structure becomes more mature and LAN media
1993 * independent. 1994 * independent.
1994 * 1995 *
1995 * Return Codes: 1996 * Return Codes:
1996 * None 1997 * None
1997 * 1998 *
1998 * Assumptions: 1999 * Assumptions:
1999 * None 2000 * None
2000 * 2001 *
2001 * Side Effects: 2002 * Side Effects:
2002 * None 2003 * None
2003 */ 2004 */

static struct net_device_stats *dfx_ctl_get_stats(struct net_device *dev)
	{
	DFX_board_t *bp = netdev_priv(dev);

	/* Fill the bp->stats structure with driver-maintained counters */

	bp->stats.gen.rx_packets = bp->rcv_total_frames;
	bp->stats.gen.tx_packets = bp->xmt_total_frames;
	bp->stats.gen.rx_bytes   = bp->rcv_total_bytes;
	bp->stats.gen.tx_bytes   = bp->xmt_total_bytes;
	bp->stats.gen.rx_errors  = bp->rcv_crc_errors +
				   bp->rcv_frame_status_errors +
				   bp->rcv_length_errors;
	bp->stats.gen.tx_errors  = bp->xmt_length_errors;
	bp->stats.gen.rx_dropped = bp->rcv_discards;
	bp->stats.gen.tx_dropped = bp->xmt_discards;
	bp->stats.gen.multicast  = bp->rcv_multicast_frames;
	bp->stats.gen.collisions = 0;		/* always zero (0) for FDDI */

	/* Get FDDI SMT MIB objects */

	bp->cmd_req_virt->cmd_type = PI_CMD_K_SMT_MIB_GET;
	if (dfx_hw_dma_cmd_req(bp) != DFX_K_SUCCESS)
		return((struct net_device_stats *) &bp->stats);

	/* Fill the bp->stats structure with the SMT MIB object values */

	memcpy(bp->stats.smt_station_id, &bp->cmd_rsp_virt->smt_mib_get.smt_station_id, sizeof(bp->cmd_rsp_virt->smt_mib_get.smt_station_id));
	bp->stats.smt_op_version_id = bp->cmd_rsp_virt->smt_mib_get.smt_op_version_id;
	bp->stats.smt_hi_version_id = bp->cmd_rsp_virt->smt_mib_get.smt_hi_version_id;
	bp->stats.smt_lo_version_id = bp->cmd_rsp_virt->smt_mib_get.smt_lo_version_id;
	memcpy(bp->stats.smt_user_data, &bp->cmd_rsp_virt->smt_mib_get.smt_user_data, sizeof(bp->cmd_rsp_virt->smt_mib_get.smt_user_data));
	bp->stats.smt_mib_version_id = bp->cmd_rsp_virt->smt_mib_get.smt_mib_version_id;
	bp->stats.smt_mac_cts = bp->cmd_rsp_virt->smt_mib_get.smt_mac_ct;
	bp->stats.smt_non_master_cts = bp->cmd_rsp_virt->smt_mib_get.smt_non_master_ct;
	bp->stats.smt_master_cts = bp->cmd_rsp_virt->smt_mib_get.smt_master_ct;
	bp->stats.smt_available_paths = bp->cmd_rsp_virt->smt_mib_get.smt_available_paths;
	bp->stats.smt_config_capabilities = bp->cmd_rsp_virt->smt_mib_get.smt_config_capabilities;
	bp->stats.smt_config_policy = bp->cmd_rsp_virt->smt_mib_get.smt_config_policy;
	bp->stats.smt_connection_policy = bp->cmd_rsp_virt->smt_mib_get.smt_connection_policy;
	bp->stats.smt_t_notify = bp->cmd_rsp_virt->smt_mib_get.smt_t_notify;
	bp->stats.smt_stat_rpt_policy = bp->cmd_rsp_virt->smt_mib_get.smt_stat_rpt_policy;
	bp->stats.smt_trace_max_expiration = bp->cmd_rsp_virt->smt_mib_get.smt_trace_max_expiration;
	bp->stats.smt_bypass_present = bp->cmd_rsp_virt->smt_mib_get.smt_bypass_present;
	bp->stats.smt_ecm_state = bp->cmd_rsp_virt->smt_mib_get.smt_ecm_state;
	bp->stats.smt_cf_state = bp->cmd_rsp_virt->smt_mib_get.smt_cf_state;
	bp->stats.smt_remote_disconnect_flag = bp->cmd_rsp_virt->smt_mib_get.smt_remote_disconnect_flag;
	bp->stats.smt_station_status = bp->cmd_rsp_virt->smt_mib_get.smt_station_status;
	bp->stats.smt_peer_wrap_flag = bp->cmd_rsp_virt->smt_mib_get.smt_peer_wrap_flag;
	bp->stats.smt_time_stamp = bp->cmd_rsp_virt->smt_mib_get.smt_msg_time_stamp.ls;
	bp->stats.smt_transition_time_stamp = bp->cmd_rsp_virt->smt_mib_get.smt_transition_time_stamp.ls;
	bp->stats.mac_frame_status_functions = bp->cmd_rsp_virt->smt_mib_get.mac_frame_status_functions;
	bp->stats.mac_t_max_capability = bp->cmd_rsp_virt->smt_mib_get.mac_t_max_capability;
	bp->stats.mac_tvx_capability = bp->cmd_rsp_virt->smt_mib_get.mac_tvx_capability;
	bp->stats.mac_available_paths = bp->cmd_rsp_virt->smt_mib_get.mac_available_paths;
	bp->stats.mac_current_path = bp->cmd_rsp_virt->smt_mib_get.mac_current_path;
	memcpy(bp->stats.mac_upstream_nbr, &bp->cmd_rsp_virt->smt_mib_get.mac_upstream_nbr, FDDI_K_ALEN);
	memcpy(bp->stats.mac_downstream_nbr, &bp->cmd_rsp_virt->smt_mib_get.mac_downstream_nbr, FDDI_K_ALEN);
	memcpy(bp->stats.mac_old_upstream_nbr, &bp->cmd_rsp_virt->smt_mib_get.mac_old_upstream_nbr, FDDI_K_ALEN);
	memcpy(bp->stats.mac_old_downstream_nbr, &bp->cmd_rsp_virt->smt_mib_get.mac_old_downstream_nbr, FDDI_K_ALEN);
	bp->stats.mac_dup_address_test = bp->cmd_rsp_virt->smt_mib_get.mac_dup_address_test;
	bp->stats.mac_requested_paths = bp->cmd_rsp_virt->smt_mib_get.mac_requested_paths;
	bp->stats.mac_downstream_port_type = bp->cmd_rsp_virt->smt_mib_get.mac_downstream_port_type;
	memcpy(bp->stats.mac_smt_address, &bp->cmd_rsp_virt->smt_mib_get.mac_smt_address, FDDI_K_ALEN);
	bp->stats.mac_t_req = bp->cmd_rsp_virt->smt_mib_get.mac_t_req;
	bp->stats.mac_t_neg = bp->cmd_rsp_virt->smt_mib_get.mac_t_neg;
	bp->stats.mac_t_max = bp->cmd_rsp_virt->smt_mib_get.mac_t_max;
	bp->stats.mac_tvx_value = bp->cmd_rsp_virt->smt_mib_get.mac_tvx_value;
	bp->stats.mac_frame_error_threshold = bp->cmd_rsp_virt->smt_mib_get.mac_frame_error_threshold;
	bp->stats.mac_frame_error_ratio = bp->cmd_rsp_virt->smt_mib_get.mac_frame_error_ratio;
	bp->stats.mac_rmt_state = bp->cmd_rsp_virt->smt_mib_get.mac_rmt_state;
	bp->stats.mac_da_flag = bp->cmd_rsp_virt->smt_mib_get.mac_da_flag;
	bp->stats.mac_una_da_flag = bp->cmd_rsp_virt->smt_mib_get.mac_unda_flag;
	bp->stats.mac_frame_error_flag = bp->cmd_rsp_virt->smt_mib_get.mac_frame_error_flag;
	bp->stats.mac_ma_unitdata_available = bp->cmd_rsp_virt->smt_mib_get.mac_ma_unitdata_available;
	bp->stats.mac_hardware_present = bp->cmd_rsp_virt->smt_mib_get.mac_hardware_present;
	bp->stats.mac_ma_unitdata_enable = bp->cmd_rsp_virt->smt_mib_get.mac_ma_unitdata_enable;
	bp->stats.path_tvx_lower_bound = bp->cmd_rsp_virt->smt_mib_get.path_tvx_lower_bound;
	bp->stats.path_t_max_lower_bound = bp->cmd_rsp_virt->smt_mib_get.path_t_max_lower_bound;
	bp->stats.path_max_t_req = bp->cmd_rsp_virt->smt_mib_get.path_max_t_req;
	memcpy(bp->stats.path_configuration, &bp->cmd_rsp_virt->smt_mib_get.path_configuration, sizeof(bp->cmd_rsp_virt->smt_mib_get.path_configuration));
	bp->stats.port_my_type[0] = bp->cmd_rsp_virt->smt_mib_get.port_my_type[0];
	bp->stats.port_my_type[1] = bp->cmd_rsp_virt->smt_mib_get.port_my_type[1];
	bp->stats.port_neighbor_type[0] = bp->cmd_rsp_virt->smt_mib_get.port_neighbor_type[0];
	bp->stats.port_neighbor_type[1] = bp->cmd_rsp_virt->smt_mib_get.port_neighbor_type[1];
	bp->stats.port_connection_policies[0] = bp->cmd_rsp_virt->smt_mib_get.port_connection_policies[0];
	bp->stats.port_connection_policies[1] = bp->cmd_rsp_virt->smt_mib_get.port_connection_policies[1];
	bp->stats.port_mac_indicated[0] = bp->cmd_rsp_virt->smt_mib_get.port_mac_indicated[0];
	bp->stats.port_mac_indicated[1] = bp->cmd_rsp_virt->smt_mib_get.port_mac_indicated[1];
	bp->stats.port_current_path[0] = bp->cmd_rsp_virt->smt_mib_get.port_current_path[0];
	bp->stats.port_current_path[1] = bp->cmd_rsp_virt->smt_mib_get.port_current_path[1];
	memcpy(&bp->stats.port_requested_paths[0*3], &bp->cmd_rsp_virt->smt_mib_get.port_requested_paths[0], 3);
	memcpy(&bp->stats.port_requested_paths[1*3], &bp->cmd_rsp_virt->smt_mib_get.port_requested_paths[1], 3);
	bp->stats.port_mac_placement[0] = bp->cmd_rsp_virt->smt_mib_get.port_mac_placement[0];
	bp->stats.port_mac_placement[1] = bp->cmd_rsp_virt->smt_mib_get.port_mac_placement[1];
	bp->stats.port_available_paths[0] = bp->cmd_rsp_virt->smt_mib_get.port_available_paths[0];
	bp->stats.port_available_paths[1] = bp->cmd_rsp_virt->smt_mib_get.port_available_paths[1];
	bp->stats.port_pmd_class[0] = bp->cmd_rsp_virt->smt_mib_get.port_pmd_class[0];
	bp->stats.port_pmd_class[1] = bp->cmd_rsp_virt->smt_mib_get.port_pmd_class[1];
	bp->stats.port_connection_capabilities[0] = bp->cmd_rsp_virt->smt_mib_get.port_connection_capabilities[0];
	bp->stats.port_connection_capabilities[1] = bp->cmd_rsp_virt->smt_mib_get.port_connection_capabilities[1];
	bp->stats.port_bs_flag[0] = bp->cmd_rsp_virt->smt_mib_get.port_bs_flag[0];
	bp->stats.port_bs_flag[1] = bp->cmd_rsp_virt->smt_mib_get.port_bs_flag[1];
	bp->stats.port_ler_estimate[0] = bp->cmd_rsp_virt->smt_mib_get.port_ler_estimate[0];
	bp->stats.port_ler_estimate[1] = bp->cmd_rsp_virt->smt_mib_get.port_ler_estimate[1];
	bp->stats.port_ler_cutoff[0] = bp->cmd_rsp_virt->smt_mib_get.port_ler_cutoff[0];
	bp->stats.port_ler_cutoff[1] = bp->cmd_rsp_virt->smt_mib_get.port_ler_cutoff[1];
	bp->stats.port_ler_alarm[0] = bp->cmd_rsp_virt->smt_mib_get.port_ler_alarm[0];
	bp->stats.port_ler_alarm[1] = bp->cmd_rsp_virt->smt_mib_get.port_ler_alarm[1];
	bp->stats.port_connect_state[0] = bp->cmd_rsp_virt->smt_mib_get.port_connect_state[0];
	bp->stats.port_connect_state[1] = bp->cmd_rsp_virt->smt_mib_get.port_connect_state[1];
	bp->stats.port_pcm_state[0] = bp->cmd_rsp_virt->smt_mib_get.port_pcm_state[0];
	bp->stats.port_pcm_state[1] = bp->cmd_rsp_virt->smt_mib_get.port_pcm_state[1];
	bp->stats.port_pc_withhold[0] = bp->cmd_rsp_virt->smt_mib_get.port_pc_withhold[0];
	bp->stats.port_pc_withhold[1] = bp->cmd_rsp_virt->smt_mib_get.port_pc_withhold[1];
	bp->stats.port_ler_flag[0] = bp->cmd_rsp_virt->smt_mib_get.port_ler_flag[0];
	bp->stats.port_ler_flag[1] = bp->cmd_rsp_virt->smt_mib_get.port_ler_flag[1];
	bp->stats.port_hardware_present[0] = bp->cmd_rsp_virt->smt_mib_get.port_hardware_present[0];
	bp->stats.port_hardware_present[1] = bp->cmd_rsp_virt->smt_mib_get.port_hardware_present[1];

	/* Get FDDI counters */

	bp->cmd_req_virt->cmd_type = PI_CMD_K_CNTRS_GET;
	if (dfx_hw_dma_cmd_req(bp) != DFX_K_SUCCESS)
		return((struct net_device_stats *) &bp->stats);

	/* Fill the bp->stats structure with the FDDI counter values */

	bp->stats.mac_frame_cts = bp->cmd_rsp_virt->cntrs_get.cntrs.frame_cnt.ls;
	bp->stats.mac_copied_cts = bp->cmd_rsp_virt->cntrs_get.cntrs.copied_cnt.ls;
	bp->stats.mac_transmit_cts = bp->cmd_rsp_virt->cntrs_get.cntrs.transmit_cnt.ls;
	bp->stats.mac_error_cts = bp->cmd_rsp_virt->cntrs_get.cntrs.error_cnt.ls;
	bp->stats.mac_lost_cts = bp->cmd_rsp_virt->cntrs_get.cntrs.lost_cnt.ls;
	bp->stats.port_lct_fail_cts[0] = bp->cmd_rsp_virt->cntrs_get.cntrs.lct_rejects[0].ls;
	bp->stats.port_lct_fail_cts[1] = bp->cmd_rsp_virt->cntrs_get.cntrs.lct_rejects[1].ls;
	bp->stats.port_lem_reject_cts[0] = bp->cmd_rsp_virt->cntrs_get.cntrs.lem_rejects[0].ls;
	bp->stats.port_lem_reject_cts[1] = bp->cmd_rsp_virt->cntrs_get.cntrs.lem_rejects[1].ls;
	bp->stats.port_lem_cts[0] = bp->cmd_rsp_virt->cntrs_get.cntrs.link_errors[0].ls;
	bp->stats.port_lem_cts[1] = bp->cmd_rsp_virt->cntrs_get.cntrs.link_errors[1].ls;

	return((struct net_device_stats *) &bp->stats);
	}


/*
 * ==============================
 * = dfx_ctl_set_multicast_list =
 * ==============================
 *
 * Overview:
 *   Enable/Disable LLC frame promiscuous mode reception
 *   on the adapter and/or update multicast address table.
 *
 * Returns:
 *   None
 *
 * Arguments:
 *   dev - pointer to device information
 *
 * Functional Description:
 *   This routine follows a fairly simple algorithm for setting the
 *   adapter filters and CAM:
 *
 *      if IFF_PROMISC flag is set
 *         enable LLC individual/group promiscuous mode
 *      else
 *         disable LLC individual/group promiscuous mode
 *         if number of incoming multicast addresses >
 *                 (CAM max size - number of unicast addresses in CAM)
 *            enable LLC group promiscuous mode
 *            set driver-maintained multicast address count to zero
 *         else
 *            disable LLC group promiscuous mode
 *            set driver-maintained multicast address count to incoming count
 *         update adapter CAM
 *      update adapter filters
 *
 * Return Codes:
 *   None
 *
 * Assumptions:
 *   Multicast addresses are presented in canonical (LSB) format.
 *
 * Side Effects:
 *   On-board adapter CAM and filters are updated.
 */

static void dfx_ctl_set_multicast_list(struct net_device *dev)
	{
	DFX_board_t *bp = netdev_priv(dev);
	int i;				/* used as index in for loop */
	struct dev_mc_list *dmi;	/* ptr to multicast addr entry */

	/* Enable LLC frame promiscuous mode, if necessary */

	if (dev->flags & IFF_PROMISC)
		bp->ind_group_prom = PI_FSTATE_K_PASS;	/* Enable LLC ind/group prom mode */

	/* Else, update multicast address table */

	else
		{
		bp->ind_group_prom = PI_FSTATE_K_BLOCK;	/* Disable LLC ind/group prom mode */
		/*
		 * Check whether incoming multicast address count exceeds table size
		 *
		 * Note: The adapters utilize an on-board 64 entry CAM for
		 *       supporting perfect filtering of multicast packets
		 *       and bridge functions when adding unicast addresses.
		 *       There is no hash function available.  To support
		 *       additional multicast addresses, the all multicast
		 *       filter (LLC group promiscuous mode) must be enabled.
		 *
		 *       The firmware reserves two CAM entries for SMT-related
		 *       multicast addresses, which leaves 62 entries available.
		 *       The following code ensures that we're not being asked
		 *       to add more than 62 addresses to the CAM.  If we are,
		 *       the driver will enable the all multicast filter.
		 *       Should the number of multicast addresses drop below
		 *       the high water mark, the filter will be disabled and
		 *       perfect filtering will be used.
		 */

		if (dev->mc_count > (PI_CMD_ADDR_FILTER_K_SIZE - bp->uc_count))
			{
			bp->group_prom = PI_FSTATE_K_PASS;	/* Enable LLC group prom mode */
			bp->mc_count = 0;			/* Don't add mc addrs to CAM */
			}
		else
			{
			bp->group_prom = PI_FSTATE_K_BLOCK;	/* Disable LLC group prom mode */
			bp->mc_count = dev->mc_count;		/* Add mc addrs to CAM */
			}

		/* Copy addresses to multicast address table, then update adapter CAM */

		dmi = dev->mc_list;				/* point to first multicast addr */
		for (i=0; i < bp->mc_count; i++)
			{
			memcpy(&bp->mc_table[i*FDDI_K_ALEN], dmi->dmi_addr, FDDI_K_ALEN);
			dmi = dmi->next;			/* point to next multicast addr */
			}
		if (dfx_ctl_update_cam(bp) != DFX_K_SUCCESS)
			{
			DBG_printk("%s: Could not update multicast address table!\n", dev->name);
			}
		else
			{
			DBG_printk("%s: Multicast address table updated!  Added %d addresses.\n", dev->name, bp->mc_count);
			}
		}

	/* Update adapter filters */

	if (dfx_ctl_update_filters(bp) != DFX_K_SUCCESS)
		{
		DBG_printk("%s: Could not update adapter filters!\n", dev->name);
		}
	else
		{
		DBG_printk("%s: Adapter filters updated!\n", dev->name);
		}
	}


/*
 * ===========================
 * = dfx_ctl_set_mac_address =
 * ===========================
 *
 * Overview:
 *   Add node address override (unicast address) to adapter
 *   CAM and update dev_addr field in device table.
 *
 * Returns:
 *   None
 *
 * Arguments:
 *   dev  - pointer to device information
 *   addr - pointer to sockaddr structure containing unicast address to add
 *
 * Functional Description:
 *   The adapter supports node address overrides by adding one or more
 *   unicast addresses to the adapter CAM.  This is similar to adding
 *   multicast addresses.  In this routine we'll update the driver and
 *   device structures with the new address, then update the adapter CAM
 *   to ensure that the adapter will copy and strip frames destined and
 *   sourced by that address.
 *
 * Return Codes:
 *   Always returns zero.
 *
 * Assumptions:
 *   The address pointed to by addr->sa_data is a valid unicast
 *   address and is presented in canonical (LSB) format.
 *
 * Side Effects:
 *   On-board adapter CAM is updated.  On-board adapter filters
 *   may be updated.
 */

static int dfx_ctl_set_mac_address(struct net_device *dev, void *addr)
	{
	struct sockaddr	*p_sockaddr = (struct sockaddr *)addr;
	DFX_board_t *bp = netdev_priv(dev);

	/* Copy unicast address to driver-maintained structs and update count */

	memcpy(dev->dev_addr, p_sockaddr->sa_data, FDDI_K_ALEN);	/* update device struct */
	memcpy(&bp->uc_table[0], p_sockaddr->sa_data, FDDI_K_ALEN);	/* update driver struct */
	bp->uc_count = 1;

	/*
	 * Verify we're not exceeding the CAM size by adding unicast address
	 *
	 * Note: It's possible that before entering this routine we've
	 *       already filled the CAM with 62 multicast addresses.
	 *       Since we need to place the node address override into
	 *       the CAM, we have to check to see that we're not
	 *       exceeding the CAM size.  If we are, we have to enable
	 *       the LLC group (multicast) promiscuous mode filter as
	 *       in dfx_ctl_set_multicast_list.
	 */

	if ((bp->uc_count + bp->mc_count) > PI_CMD_ADDR_FILTER_K_SIZE)
		{
		bp->group_prom = PI_FSTATE_K_PASS;	/* Enable LLC group prom mode */
2332 bp->mc_count = 0; /* Don't add mc addrs to CAM */ 2333 bp->mc_count = 0; /* Don't add mc addrs to CAM */
2333 2334
2334 /* Update adapter filters */ 2335 /* Update adapter filters */
2335 2336
2336 if (dfx_ctl_update_filters(bp) != DFX_K_SUCCESS) 2337 if (dfx_ctl_update_filters(bp) != DFX_K_SUCCESS)
2337 { 2338 {
2338 DBG_printk("%s: Could not update adapter filters!\n", dev->name); 2339 DBG_printk("%s: Could not update adapter filters!\n", dev->name);
2339 } 2340 }
2340 else 2341 else
2341 { 2342 {
2342 DBG_printk("%s: Adapter filters updated!\n", dev->name); 2343 DBG_printk("%s: Adapter filters updated!\n", dev->name);
2343 } 2344 }
2344 } 2345 }
2345 2346
2346 /* Update adapter CAM with new unicast address */ 2347 /* Update adapter CAM with new unicast address */
2347 2348
2348 if (dfx_ctl_update_cam(bp) != DFX_K_SUCCESS) 2349 if (dfx_ctl_update_cam(bp) != DFX_K_SUCCESS)
2349 { 2350 {
2350 DBG_printk("%s: Could not set new MAC address!\n", dev->name); 2351 DBG_printk("%s: Could not set new MAC address!\n", dev->name);
2351 } 2352 }
2352 else 2353 else
2353 { 2354 {
2354 DBG_printk("%s: Adapter CAM updated with new MAC address\n", dev->name); 2355 DBG_printk("%s: Adapter CAM updated with new MAC address\n", dev->name);
2355 } 2356 }
2356 return(0); /* always return zero */ 2357 return(0); /* always return zero */
2357 } 2358 }
2358 2359
2359 2360
2360 /* 2361 /*
2361 * ====================== 2362 * ======================
2362 * = dfx_ctl_update_cam = 2363 * = dfx_ctl_update_cam =
2363 * ====================== 2364 * ======================
2364 * 2365 *
2365 * Overview: 2366 * Overview:
2366 * Procedure to update adapter CAM (Content Addressable Memory) 2367 * Procedure to update adapter CAM (Content Addressable Memory)
2367 * with desired unicast and multicast address entries. 2368 * with desired unicast and multicast address entries.
2368 * 2369 *
2369 * Returns: 2370 * Returns:
2370 * Condition code 2371 * Condition code
2371 * 2372 *
2372 * Arguments: 2373 * Arguments:
2373 * bp - pointer to board information 2374 * bp - pointer to board information
2374 * 2375 *
2375 * Functional Description: 2376 * Functional Description:
2376 * Updates adapter CAM with current contents of board structure 2377 * Updates adapter CAM with current contents of board structure
2377 * unicast and multicast address tables. Since there are only 62 2378 * unicast and multicast address tables. Since there are only 62
2378 * free entries in CAM, this routine ensures that the command 2379 * free entries in CAM, this routine ensures that the command
2379 * request buffer is not overrun. 2380 * request buffer is not overrun.
2380 * 2381 *
2381 * Return Codes: 2382 * Return Codes:
2382 * DFX_K_SUCCESS - Request succeeded 2383 * DFX_K_SUCCESS - Request succeeded
2383 * DFX_K_FAILURE - Request failed 2384 * DFX_K_FAILURE - Request failed
2384 * 2385 *
2385 * Assumptions: 2386 * Assumptions:
2386 * All addresses being added (unicast and multicast) are in canonical 2387 * All addresses being added (unicast and multicast) are in canonical
2387 * order. 2388 * order.
2388 * 2389 *
2389 * Side Effects: 2390 * Side Effects:
2390 * On-board adapter CAM is updated. 2391 * On-board adapter CAM is updated.
2391 */ 2392 */
2392 2393
2393 static int dfx_ctl_update_cam(DFX_board_t *bp) 2394 static int dfx_ctl_update_cam(DFX_board_t *bp)
2394 { 2395 {
2395 int i; /* used as index */ 2396 int i; /* used as index */
2396 PI_LAN_ADDR *p_addr; /* pointer to CAM entry */ 2397 PI_LAN_ADDR *p_addr; /* pointer to CAM entry */
2397 2398
2398 /* 2399 /*
2399 * Fill in command request information 2400 * Fill in command request information
2400 * 2401 *
2401 * Note: Even though both the unicast and multicast address 2402 * Note: Even though both the unicast and multicast address
2402 * table entries are stored as contiguous 6 byte entries, 2403 * table entries are stored as contiguous 6 byte entries,
2403 * the firmware address filter set command expects each 2404 * the firmware address filter set command expects each
2404 * entry to be two longwords (8 bytes total). We must be 2405 * entry to be two longwords (8 bytes total). We must be
2405 * careful to only copy the six bytes of each unicast and 2406 * careful to only copy the six bytes of each unicast and
2406 * multicast table entry into each command entry. This 2407 * multicast table entry into each command entry. This
2407 * is also why we must first clear the entire command 2408 * is also why we must first clear the entire command
2408 * request buffer. 2409 * request buffer.
2409 */ 2410 */
2410 2411
2411 memset(bp->cmd_req_virt, 0, PI_CMD_REQ_K_SIZE_MAX); /* first clear buffer */ 2412 memset(bp->cmd_req_virt, 0, PI_CMD_REQ_K_SIZE_MAX); /* first clear buffer */
2412 bp->cmd_req_virt->cmd_type = PI_CMD_K_ADDR_FILTER_SET; 2413 bp->cmd_req_virt->cmd_type = PI_CMD_K_ADDR_FILTER_SET;
2413 p_addr = &bp->cmd_req_virt->addr_filter_set.entry[0]; 2414 p_addr = &bp->cmd_req_virt->addr_filter_set.entry[0];
2414 2415
2415 /* Now add unicast addresses to command request buffer, if any */ 2416 /* Now add unicast addresses to command request buffer, if any */
2416 2417
2417 for (i=0; i < (int)bp->uc_count; i++) 2418 for (i=0; i < (int)bp->uc_count; i++)
2418 { 2419 {
2419 if (i < PI_CMD_ADDR_FILTER_K_SIZE) 2420 if (i < PI_CMD_ADDR_FILTER_K_SIZE)
2420 { 2421 {
2421 memcpy(p_addr, &bp->uc_table[i*FDDI_K_ALEN], FDDI_K_ALEN); 2422 memcpy(p_addr, &bp->uc_table[i*FDDI_K_ALEN], FDDI_K_ALEN);
2422 p_addr++; /* point to next command entry */ 2423 p_addr++; /* point to next command entry */
2423 } 2424 }
2424 } 2425 }
2425 2426
2426 /* Now add multicast addresses to command request buffer, if any */ 2427 /* Now add multicast addresses to command request buffer, if any */
2427 2428
2428 for (i=0; i < (int)bp->mc_count; i++) 2429 for (i=0; i < (int)bp->mc_count; i++)
2429 { 2430 {
2430 if ((i + bp->uc_count) < PI_CMD_ADDR_FILTER_K_SIZE) 2431 if ((i + bp->uc_count) < PI_CMD_ADDR_FILTER_K_SIZE)
2431 { 2432 {
2432 memcpy(p_addr, &bp->mc_table[i*FDDI_K_ALEN], FDDI_K_ALEN); 2433 memcpy(p_addr, &bp->mc_table[i*FDDI_K_ALEN], FDDI_K_ALEN);
2433 p_addr++; /* point to next command entry */ 2434 p_addr++; /* point to next command entry */
2434 } 2435 }
2435 } 2436 }
2436 2437
2437 /* Issue command to update adapter CAM, then return */ 2438 /* Issue command to update adapter CAM, then return */
2438 2439
2439 if (dfx_hw_dma_cmd_req(bp) != DFX_K_SUCCESS) 2440 if (dfx_hw_dma_cmd_req(bp) != DFX_K_SUCCESS)
2440 return(DFX_K_FAILURE); 2441 return(DFX_K_FAILURE);
2441 return(DFX_K_SUCCESS); 2442 return(DFX_K_SUCCESS);
2442 } 2443 }
2443 2444
2444 2445
2445 /* 2446 /*
2446 * ========================== 2447 * ==========================
2447 * = dfx_ctl_update_filters = 2448 * = dfx_ctl_update_filters =
2448 * ========================== 2449 * ==========================
2449 * 2450 *
2450 * Overview: 2451 * Overview:
2451 * Procedure to update adapter filters with desired 2452 * Procedure to update adapter filters with desired
2452 * filter settings. 2453 * filter settings.
2453 * 2454 *
2454 * Returns: 2455 * Returns:
2455 * Condition code 2456 * Condition code
2456 * 2457 *
2457 * Arguments: 2458 * Arguments:
2458 * bp - pointer to board information 2459 * bp - pointer to board information
2459 * 2460 *
2460 * Functional Description: 2461 * Functional Description:
2461 * Enables or disables filter using current filter settings. 2462 * Enables or disables filter using current filter settings.
2462 * 2463 *
2463 * Return Codes: 2464 * Return Codes:
2464 * DFX_K_SUCCESS - Request succeeded. 2465 * DFX_K_SUCCESS - Request succeeded.
2465 * DFX_K_FAILURE - Request failed. 2466 * DFX_K_FAILURE - Request failed.
2466 * 2467 *
2467 * Assumptions: 2468 * Assumptions:
2468 * We must always pass up packets destined to the broadcast 2469 * We must always pass up packets destined to the broadcast
2469 * address (FF-FF-FF-FF-FF-FF), so we'll always keep the 2470 * address (FF-FF-FF-FF-FF-FF), so we'll always keep the
2470 * broadcast filter enabled. 2471 * broadcast filter enabled.
2471 * 2472 *
2472 * Side Effects: 2473 * Side Effects:
2473 * On-board adapter filters are updated. 2474 * On-board adapter filters are updated.
2474 */ 2475 */
2475 2476
2476 static int dfx_ctl_update_filters(DFX_board_t *bp) 2477 static int dfx_ctl_update_filters(DFX_board_t *bp)
2477 { 2478 {
2478 int i = 0; /* used as index */ 2479 int i = 0; /* used as index */
2479 2480
2480 /* Fill in command request information */ 2481 /* Fill in command request information */
2481 2482
2482 bp->cmd_req_virt->cmd_type = PI_CMD_K_FILTERS_SET; 2483 bp->cmd_req_virt->cmd_type = PI_CMD_K_FILTERS_SET;
2483 2484
2484 /* Initialize Broadcast filter - * ALWAYS ENABLED * */ 2485 /* Initialize Broadcast filter - * ALWAYS ENABLED * */
2485 2486
2486 bp->cmd_req_virt->filter_set.item[i].item_code = PI_ITEM_K_BROADCAST; 2487 bp->cmd_req_virt->filter_set.item[i].item_code = PI_ITEM_K_BROADCAST;
2487 bp->cmd_req_virt->filter_set.item[i++].value = PI_FSTATE_K_PASS; 2488 bp->cmd_req_virt->filter_set.item[i++].value = PI_FSTATE_K_PASS;
2488 2489
2489 /* Initialize LLC Individual/Group Promiscuous filter */ 2490 /* Initialize LLC Individual/Group Promiscuous filter */
2490 2491
2491 bp->cmd_req_virt->filter_set.item[i].item_code = PI_ITEM_K_IND_GROUP_PROM; 2492 bp->cmd_req_virt->filter_set.item[i].item_code = PI_ITEM_K_IND_GROUP_PROM;
2492 bp->cmd_req_virt->filter_set.item[i++].value = bp->ind_group_prom; 2493 bp->cmd_req_virt->filter_set.item[i++].value = bp->ind_group_prom;
2493 2494
2494 /* Initialize LLC Group Promiscuous filter */ 2495 /* Initialize LLC Group Promiscuous filter */
2495 2496
2496 bp->cmd_req_virt->filter_set.item[i].item_code = PI_ITEM_K_GROUP_PROM; 2497 bp->cmd_req_virt->filter_set.item[i].item_code = PI_ITEM_K_GROUP_PROM;
2497 bp->cmd_req_virt->filter_set.item[i++].value = bp->group_prom; 2498 bp->cmd_req_virt->filter_set.item[i++].value = bp->group_prom;
2498 2499
2499 /* Terminate the item code list */ 2500 /* Terminate the item code list */
2500 2501
2501 bp->cmd_req_virt->filter_set.item[i].item_code = PI_ITEM_K_EOL; 2502 bp->cmd_req_virt->filter_set.item[i].item_code = PI_ITEM_K_EOL;
2502 2503
2503 /* Issue command to update adapter filters, then return */ 2504 /* Issue command to update adapter filters, then return */
2504 2505
2505 if (dfx_hw_dma_cmd_req(bp) != DFX_K_SUCCESS) 2506 if (dfx_hw_dma_cmd_req(bp) != DFX_K_SUCCESS)
2506 return(DFX_K_FAILURE); 2507 return(DFX_K_FAILURE);
2507 return(DFX_K_SUCCESS); 2508 return(DFX_K_SUCCESS);
2508 } 2509 }
2509 2510
2510 2511
2511 /* 2512 /*
2512 * ====================== 2513 * ======================
2513 * = dfx_hw_dma_cmd_req = 2514 * = dfx_hw_dma_cmd_req =
2514 * ====================== 2515 * ======================
2515 * 2516 *
2516 * Overview: 2517 * Overview:
2517 * Sends PDQ DMA command to adapter firmware 2518 * Sends PDQ DMA command to adapter firmware
2518 * 2519 *
2519 * Returns: 2520 * Returns:
2520 * Condition code 2521 * Condition code
2521 * 2522 *
2522 * Arguments: 2523 * Arguments:
2523 * bp - pointer to board information 2524 * bp - pointer to board information
2524 * 2525 *
2525 * Functional Description: 2526 * Functional Description:
2526 * The command request and response buffers are posted to the adapter in the manner 2527 * The command request and response buffers are posted to the adapter in the manner
2527 * described in the PDQ Port Specification: 2528 * described in the PDQ Port Specification:
2528 * 2529 *
2529 * 1. Command Response Buffer is posted to adapter. 2530 * 1. Command Response Buffer is posted to adapter.
2530 * 2. Command Request Buffer is posted to adapter. 2531 * 2. Command Request Buffer is posted to adapter.
2531 * 3. Command Request consumer index is polled until it indicates that request 2532 * 3. Command Request consumer index is polled until it indicates that request
2532 * buffer has been DMA'd to adapter. 2533 * buffer has been DMA'd to adapter.
2533 * 4. Command Response consumer index is polled until it indicates that response 2534 * 4. Command Response consumer index is polled until it indicates that response
2534 * buffer has been DMA'd from adapter. 2535 * buffer has been DMA'd from adapter.
2535 * 2536 *
2536 * This ordering ensures that a response buffer is already available for the firmware 2537 * This ordering ensures that a response buffer is already available for the firmware
2537 * to use once it's done processing the request buffer. 2538 * to use once it's done processing the request buffer.
2538 * 2539 *
2539 * Return Codes: 2540 * Return Codes:
2540 * DFX_K_SUCCESS - DMA command succeeded 2541 * DFX_K_SUCCESS - DMA command succeeded
2541 * DFX_K_OUTSTATE - Adapter is NOT in proper state 2542 * DFX_K_OUTSTATE - Adapter is NOT in proper state
2542 * DFX_K_HW_TIMEOUT - DMA command timed out 2543 * DFX_K_HW_TIMEOUT - DMA command timed out
2543 * 2544 *
2544 * Assumptions: 2545 * Assumptions:
2545 * Command request buffer has already been filled with desired DMA command. 2546 * Command request buffer has already been filled with desired DMA command.
2546 * 2547 *
2547 * Side Effects: 2548 * Side Effects:
2548 * None 2549 * None
2549 */ 2550 */
2550 2551
2551 static int dfx_hw_dma_cmd_req(DFX_board_t *bp) 2552 static int dfx_hw_dma_cmd_req(DFX_board_t *bp)
2552 { 2553 {
2553 int status; /* adapter status */ 2554 int status; /* adapter status */
2554 int timeout_cnt; /* used in for loops */ 2555 int timeout_cnt; /* used in for loops */
2555 2556
2556 /* Make sure the adapter is in a state that we can issue the DMA command in */ 2557 /* Make sure the adapter is in a state that we can issue the DMA command in */
2557 2558
2558 status = dfx_hw_adap_state_rd(bp); 2559 status = dfx_hw_adap_state_rd(bp);
2559 if ((status == PI_STATE_K_RESET) || 2560 if ((status == PI_STATE_K_RESET) ||
2560 (status == PI_STATE_K_HALTED) || 2561 (status == PI_STATE_K_HALTED) ||
2561 (status == PI_STATE_K_DMA_UNAVAIL) || 2562 (status == PI_STATE_K_DMA_UNAVAIL) ||
2562 (status == PI_STATE_K_UPGRADE)) 2563 (status == PI_STATE_K_UPGRADE))
2563 return(DFX_K_OUTSTATE); 2564 return(DFX_K_OUTSTATE);
2564 2565
2565 /* Put response buffer on the command response queue */ 2566 /* Put response buffer on the command response queue */
2566 2567
2567 bp->descr_block_virt->cmd_rsp[bp->cmd_rsp_reg.index.prod].long_0 = (u32) (PI_RCV_DESCR_M_SOP | 2568 bp->descr_block_virt->cmd_rsp[bp->cmd_rsp_reg.index.prod].long_0 = (u32) (PI_RCV_DESCR_M_SOP |
2568 ((PI_CMD_RSP_K_SIZE_MAX / PI_ALIGN_K_CMD_RSP_BUFF) << PI_RCV_DESCR_V_SEG_LEN)); 2569 ((PI_CMD_RSP_K_SIZE_MAX / PI_ALIGN_K_CMD_RSP_BUFF) << PI_RCV_DESCR_V_SEG_LEN));
2569 bp->descr_block_virt->cmd_rsp[bp->cmd_rsp_reg.index.prod].long_1 = bp->cmd_rsp_phys; 2570 bp->descr_block_virt->cmd_rsp[bp->cmd_rsp_reg.index.prod].long_1 = bp->cmd_rsp_phys;
2570 2571
2571 /* Bump (and wrap) the producer index and write out to register */ 2572 /* Bump (and wrap) the producer index and write out to register */
2572 2573
2573 bp->cmd_rsp_reg.index.prod += 1; 2574 bp->cmd_rsp_reg.index.prod += 1;
2574 bp->cmd_rsp_reg.index.prod &= PI_CMD_RSP_K_NUM_ENTRIES-1; 2575 bp->cmd_rsp_reg.index.prod &= PI_CMD_RSP_K_NUM_ENTRIES-1;
2575 dfx_port_write_long(bp, PI_PDQ_K_REG_CMD_RSP_PROD, bp->cmd_rsp_reg.lword); 2576 dfx_port_write_long(bp, PI_PDQ_K_REG_CMD_RSP_PROD, bp->cmd_rsp_reg.lword);
2576 2577
2577 /* Put request buffer on the command request queue */ 2578 /* Put request buffer on the command request queue */
2578 2579
2579 bp->descr_block_virt->cmd_req[bp->cmd_req_reg.index.prod].long_0 = (u32) (PI_XMT_DESCR_M_SOP | 2580 bp->descr_block_virt->cmd_req[bp->cmd_req_reg.index.prod].long_0 = (u32) (PI_XMT_DESCR_M_SOP |
2580 PI_XMT_DESCR_M_EOP | (PI_CMD_REQ_K_SIZE_MAX << PI_XMT_DESCR_V_SEG_LEN)); 2581 PI_XMT_DESCR_M_EOP | (PI_CMD_REQ_K_SIZE_MAX << PI_XMT_DESCR_V_SEG_LEN));
2581 bp->descr_block_virt->cmd_req[bp->cmd_req_reg.index.prod].long_1 = bp->cmd_req_phys; 2582 bp->descr_block_virt->cmd_req[bp->cmd_req_reg.index.prod].long_1 = bp->cmd_req_phys;
2582 2583
2583 /* Bump (and wrap) the producer index and write out to register */ 2584 /* Bump (and wrap) the producer index and write out to register */
2584 2585
2585 bp->cmd_req_reg.index.prod += 1; 2586 bp->cmd_req_reg.index.prod += 1;
2586 bp->cmd_req_reg.index.prod &= PI_CMD_REQ_K_NUM_ENTRIES-1; 2587 bp->cmd_req_reg.index.prod &= PI_CMD_REQ_K_NUM_ENTRIES-1;
2587 dfx_port_write_long(bp, PI_PDQ_K_REG_CMD_REQ_PROD, bp->cmd_req_reg.lword); 2588 dfx_port_write_long(bp, PI_PDQ_K_REG_CMD_REQ_PROD, bp->cmd_req_reg.lword);
2588 2589
2589 /* 2590 /*
2590 * Here we wait for the command request consumer index to be equal 2591 * Here we wait for the command request consumer index to be equal
2591 * to the producer, indicating that the adapter has DMAed the request. 2592 * to the producer, indicating that the adapter has DMAed the request.
2592 */ 2593 */
2593 2594
2594 for (timeout_cnt = 20000; timeout_cnt > 0; timeout_cnt--) 2595 for (timeout_cnt = 20000; timeout_cnt > 0; timeout_cnt--)
2595 { 2596 {
2596 if (bp->cmd_req_reg.index.prod == (u8)(bp->cons_block_virt->cmd_req)) 2597 if (bp->cmd_req_reg.index.prod == (u8)(bp->cons_block_virt->cmd_req))
2597 break; 2598 break;
2598 udelay(100); /* wait for 100 microseconds */ 2599 udelay(100); /* wait for 100 microseconds */
2599 } 2600 }
2600 if (timeout_cnt == 0) 2601 if (timeout_cnt == 0)
2601 return(DFX_K_HW_TIMEOUT); 2602 return(DFX_K_HW_TIMEOUT);
2602 2603
2603 /* Bump (and wrap) the completion index and write out to register */ 2604 /* Bump (and wrap) the completion index and write out to register */
2604 2605
2605 bp->cmd_req_reg.index.comp += 1; 2606 bp->cmd_req_reg.index.comp += 1;
2606 bp->cmd_req_reg.index.comp &= PI_CMD_REQ_K_NUM_ENTRIES-1; 2607 bp->cmd_req_reg.index.comp &= PI_CMD_REQ_K_NUM_ENTRIES-1;
2607 dfx_port_write_long(bp, PI_PDQ_K_REG_CMD_REQ_PROD, bp->cmd_req_reg.lword); 2608 dfx_port_write_long(bp, PI_PDQ_K_REG_CMD_REQ_PROD, bp->cmd_req_reg.lword);
2608 2609
2609 /* 2610 /*
2610 * Here we wait for the command response consumer index to be equal 2611 * Here we wait for the command response consumer index to be equal
2611 * to the producer, indicating that the adapter has DMAed the response. 2612 * to the producer, indicating that the adapter has DMAed the response.
2612 */ 2613 */
2613 2614
2614 for (timeout_cnt = 20000; timeout_cnt > 0; timeout_cnt--) 2615 for (timeout_cnt = 20000; timeout_cnt > 0; timeout_cnt--)
2615 { 2616 {
2616 if (bp->cmd_rsp_reg.index.prod == (u8)(bp->cons_block_virt->cmd_rsp)) 2617 if (bp->cmd_rsp_reg.index.prod == (u8)(bp->cons_block_virt->cmd_rsp))
2617 break; 2618 break;
2618 udelay(100); /* wait for 100 microseconds */ 2619 udelay(100); /* wait for 100 microseconds */
2619 } 2620 }
2620 if (timeout_cnt == 0) 2621 if (timeout_cnt == 0)
2621 return(DFX_K_HW_TIMEOUT); 2622 return(DFX_K_HW_TIMEOUT);
2622 2623
2623 /* Bump (and wrap) the completion index and write out to register */ 2624 /* Bump (and wrap) the completion index and write out to register */
2624 2625
2625 bp->cmd_rsp_reg.index.comp += 1; 2626 bp->cmd_rsp_reg.index.comp += 1;
2626 bp->cmd_rsp_reg.index.comp &= PI_CMD_RSP_K_NUM_ENTRIES-1; 2627 bp->cmd_rsp_reg.index.comp &= PI_CMD_RSP_K_NUM_ENTRIES-1;
2627 dfx_port_write_long(bp, PI_PDQ_K_REG_CMD_RSP_PROD, bp->cmd_rsp_reg.lword); 2628 dfx_port_write_long(bp, PI_PDQ_K_REG_CMD_RSP_PROD, bp->cmd_rsp_reg.lword);
2628 return(DFX_K_SUCCESS); 2629 return(DFX_K_SUCCESS);
2629 } 2630 }


/*
 * ========================
 * = dfx_hw_port_ctrl_req =
 * ========================
 *
 * Overview:
 *   Sends PDQ port control command to adapter firmware
 *
 * Returns:
 *   Host data register value in host_data if ptr is not NULL
 *
 * Arguments:
 *   bp        - pointer to board information
 *   command   - port control command
 *   data_a    - port data A register value
 *   data_b    - port data B register value
 *   host_data - ptr to host data register value
 *
 * Functional Description:
 *   Send generic port control command to adapter by writing
 *   to various PDQ port registers, then polling for completion.
 *
 * Return Codes:
 *   DFX_K_SUCCESS    - port control command succeeded
 *   DFX_K_HW_TIMEOUT - port control command timed out
 *
 * Assumptions:
 *   None
 *
 * Side Effects:
 *   None
 */

static int dfx_hw_port_ctrl_req(
	DFX_board_t	*bp,
	PI_UINT32	command,
	PI_UINT32	data_a,
	PI_UINT32	data_b,
	PI_UINT32	*host_data
	)

{
	PI_UINT32 port_cmd;	/* Port Control command register value */
	int timeout_cnt;	/* used in for loops */

	/* Set Command Error bit in command longword */

	port_cmd = (PI_UINT32) (command | PI_PCTRL_M_CMD_ERROR);

	/* Issue port command to the adapter */

	dfx_port_write_long(bp, PI_PDQ_K_REG_PORT_DATA_A, data_a);
	dfx_port_write_long(bp, PI_PDQ_K_REG_PORT_DATA_B, data_b);
	dfx_port_write_long(bp, PI_PDQ_K_REG_PORT_CTRL, port_cmd);

	/* Now wait for command to complete */

	if (command == PI_PCTRL_M_BLAST_FLASH)
		timeout_cnt = 600000;	/* set command timeout count to 60 seconds */
	else
		timeout_cnt = 20000;	/* set command timeout count to 2 seconds */

	for (; timeout_cnt > 0; timeout_cnt--)
	{
		dfx_port_read_long(bp, PI_PDQ_K_REG_PORT_CTRL, &port_cmd);
		if (!(port_cmd & PI_PCTRL_M_CMD_ERROR))
			break;
		udelay(100);		/* wait for 100 microseconds */
	}
	if (timeout_cnt == 0)
		return(DFX_K_HW_TIMEOUT);

	/*
	 * If the address of host_data is non-zero, assume caller has supplied a
	 * non NULL pointer, and return the contents of the HOST_DATA register in
	 * it.
	 */

	if (host_data != NULL)
		dfx_port_read_long(bp, PI_PDQ_K_REG_HOST_DATA, host_data);
	return(DFX_K_SUCCESS);
}
2714 2715
2715 2716
2716 /* 2717 /*
2717 * ===================== 2718 * =====================
2718 * = dfx_hw_adap_reset = 2719 * = dfx_hw_adap_reset =
2719 * ===================== 2720 * =====================
2720 * 2721 *
2721 * Overview: 2722 * Overview:
2722 * Resets adapter 2723 * Resets adapter
2723 * 2724 *
2724 * Returns: 2725 * Returns:
2725 * None 2726 * None
2726 * 2727 *
2727 * Arguments: 2728 * Arguments:
2728 * bp - pointer to board information 2729 * bp - pointer to board information
2729 * type - type of reset to perform 2730 * type - type of reset to perform
2730 * 2731 *
2731 * Functional Description: 2732 * Functional Description:
2732 * Issue soft reset to adapter by writing to PDQ Port Reset 2733 * Issue soft reset to adapter by writing to PDQ Port Reset
2733 * register. Use incoming reset type to tell adapter what 2734 * register. Use incoming reset type to tell adapter what
2734 * kind of reset operation to perform. 2735 * kind of reset operation to perform.
2735 * 2736 *
2736 * Return Codes: 2737 * Return Codes:
2737 * None 2738 * None
2738 * 2739 *
2739 * Assumptions: 2740 * Assumptions:
2740 * This routine merely issues a soft reset to the adapter. 2741 * This routine merely issues a soft reset to the adapter.
2741 * It is expected that after this routine returns, the caller 2742 * It is expected that after this routine returns, the caller
2742 * will appropriately poll the Port Status register for the 2743 * will appropriately poll the Port Status register for the
2743 * adapter to enter the proper state. 2744 * adapter to enter the proper state.
2744 * 2745 *
2745 * Side Effects: 2746 * Side Effects:
2746 * Internal adapter registers are cleared. 2747 * Internal adapter registers are cleared.
2747 */ 2748 */
2748 2749
2749 static void dfx_hw_adap_reset( 2750 static void dfx_hw_adap_reset(
2750 DFX_board_t *bp, 2751 DFX_board_t *bp,
2751 PI_UINT32 type 2752 PI_UINT32 type
2752 ) 2753 )
2753 2754
2754 { 2755 {
2755 /* Set Reset type and assert reset */ 2756 /* Set Reset type and assert reset */
2756 2757
2757 dfx_port_write_long(bp, PI_PDQ_K_REG_PORT_DATA_A, type); /* tell adapter type of reset */ 2758 dfx_port_write_long(bp, PI_PDQ_K_REG_PORT_DATA_A, type); /* tell adapter type of reset */
2758 dfx_port_write_long(bp, PI_PDQ_K_REG_PORT_RESET, PI_RESET_M_ASSERT_RESET); 2759 dfx_port_write_long(bp, PI_PDQ_K_REG_PORT_RESET, PI_RESET_M_ASSERT_RESET);
2759 2760
2760 /* Wait for at least 1 Microsecond according to the spec. We wait 20 just to be safe */ 2761 /* Wait for at least 1 Microsecond according to the spec. We wait 20 just to be safe */
2761 2762
2762 udelay(20); 2763 udelay(20);
2763 2764
2764 /* Deassert reset */ 2765 /* Deassert reset */
2765 2766
2766 dfx_port_write_long(bp, PI_PDQ_K_REG_PORT_RESET, 0); 2767 dfx_port_write_long(bp, PI_PDQ_K_REG_PORT_RESET, 0);
2767 } 2768 }
2768 2769
2769 2770
2770 /* 2771 /*
2771 * ======================== 2772 * ========================
2772 * = dfx_hw_adap_state_rd = 2773 * = dfx_hw_adap_state_rd =
2773 * ======================== 2774 * ========================
2774 * 2775 *
2775 * Overview: 2776 * Overview:
2776 * Returns current adapter state 2777 * Returns current adapter state
2777 * 2778 *
2778 * Returns: 2779 * Returns:
2779 * Adapter state per PDQ Port Specification 2780 * Adapter state per PDQ Port Specification
2780 * 2781 *
2781 * Arguments: 2782 * Arguments:
2782 * bp - pointer to board information 2783 * bp - pointer to board information
2783 * 2784 *
 * Functional Description:
 *   Reads PDQ Port Status register and returns adapter state.
 *
 * Return Codes:
 *   None
 *
 * Assumptions:
 *   None
 *
 * Side Effects:
 *   None
 */

static int dfx_hw_adap_state_rd(DFX_board_t *bp)
{
	PI_UINT32 port_status;		/* Port Status register value */

	dfx_port_read_long(bp, PI_PDQ_K_REG_PORT_STATUS, &port_status);
	return((port_status & PI_PSTATUS_M_STATE) >> PI_PSTATUS_V_STATE);
}


/*
 * =====================
 * = dfx_hw_dma_uninit =
 * =====================
 *
 * Overview:
 *   Brings adapter to DMA_UNAVAILABLE state
 *
 * Returns:
 *   Condition code
 *
 * Arguments:
 *   bp   - pointer to board information
 *   type - type of reset to perform
 *
 * Functional Description:
 *   Bring adapter to DMA_UNAVAILABLE state by performing the following:
 *     1. Set reset type bit in Port Data A Register then reset adapter.
 *     2. Check that adapter is in DMA_UNAVAILABLE state.
 *
 * Return Codes:
 *   DFX_K_SUCCESS    - adapter is in DMA_UNAVAILABLE state
 *   DFX_K_HW_TIMEOUT - adapter did not reset properly
 *
 * Assumptions:
 *   None
 *
 * Side Effects:
 *   Internal adapter registers are cleared.
 */

static int dfx_hw_dma_uninit(DFX_board_t *bp, PI_UINT32 type)
{
	int timeout_cnt;	/* used in for loops */

	/* Set reset type bit and reset adapter */

	dfx_hw_adap_reset(bp, type);

	/* Now wait for adapter to enter DMA_UNAVAILABLE state */

	for (timeout_cnt = 100000; timeout_cnt > 0; timeout_cnt--)
	{
		if (dfx_hw_adap_state_rd(bp) == PI_STATE_K_DMA_UNAVAIL)
			break;
		udelay(100);	/* wait for 100 microseconds */
	}
	if (timeout_cnt == 0)
		return(DFX_K_HW_TIMEOUT);
	return(DFX_K_SUCCESS);
}

/*
 * Align an sk_buff to a boundary power of 2
 *
 */

static void my_skb_align(struct sk_buff *skb, int n)
{
	unsigned long x = (unsigned long)skb->data;
	unsigned long v;

	v = ALIGN(x, n);	/* Where we want to be */

	skb_reserve(skb, v - x);
}


/*
 * ================
 * = dfx_rcv_init =
 * ================
 *
 * Overview:
 *   Produces buffers to adapter LLC Host receive descriptor block
 *
 * Returns:
 *   None
 *
 * Arguments:
 *   bp          - pointer to board information
 *   get_buffers - non-zero if buffers to be allocated
 *
 * Functional Description:
 *   This routine can be called during dfx_adap_init() or during an adapter
 *   reset.  It initializes the descriptor block and produces all allocated
 *   LLC Host queue receive buffers.
 *
 * Return Codes:
 *   Return 0 on success or -ENOMEM if buffer allocation failed (when using
 *   dynamic buffer allocation).  If the buffer allocation failed, the
 *   already allocated buffers will not be released and the caller should do
 *   this.
 *
 * Assumptions:
 *   The PDQ has been reset and the adapter and driver maintained Type 2
 *   register indices are cleared.
 *
 * Side Effects:
 *   Receive buffers are posted to the adapter LLC queue and the adapter
 *   is notified.
 */

static int dfx_rcv_init(DFX_board_t *bp, int get_buffers)
{
	int i, j;	/* used in for loop */

	/*
	 * Since each receive buffer is a single fragment of same length, initialize
	 * first longword in each receive descriptor for entire LLC Host descriptor
	 * block.  Also initialize second longword in each receive descriptor with
	 * physical address of receive buffer.  We'll always allocate receive
	 * buffers in powers of 2 so that we can easily fill the 256 entry descriptor
	 * block and produce new receive buffers by simply updating the receive
	 * producer index.
	 *
	 * Assumptions:
	 *   To support all shipping versions of PDQ, the receive buffer size
	 *   must be mod 128 in length and the physical address must be 128 byte
	 *   aligned.  In other words, bits 0-6 of the length and address must
	 *   be zero for the following descriptor field entries to be correct on
	 *   all PDQ-based boards.  We guaranteed both requirements during
	 *   driver initialization when we allocated memory for the receive buffers.
	 */

	if (get_buffers) {
#ifdef DYNAMIC_BUFFERS
	for (i = 0; i < (int)(bp->rcv_bufs_to_post); i++)
		for (j = 0; (i + j) < (int)PI_RCV_DATA_K_NUM_ENTRIES; j += bp->rcv_bufs_to_post)
		{
			struct sk_buff *newskb = __dev_alloc_skb(NEW_SKB_SIZE, GFP_NOIO);
			if (!newskb)
				return -ENOMEM;
			bp->descr_block_virt->rcv_data[i+j].long_0 = (u32) (PI_RCV_DESCR_M_SOP |
				((PI_RCV_DATA_K_SIZE_MAX / PI_ALIGN_K_RCV_DATA_BUFF) << PI_RCV_DESCR_V_SEG_LEN));
			/*
			 * align to 128 bytes for compatibility with
			 * the old EISA boards.
			 */

			my_skb_align(newskb, 128);
			bp->descr_block_virt->rcv_data[i + j].long_1 =
				(u32)dma_map_single(bp->bus_dev, newskb->data,
						    NEW_SKB_SIZE,
						    DMA_FROM_DEVICE);
			/*
			 * p_rcv_buff_va is only used inside the
			 * kernel so we put the skb pointer here.
			 */
			bp->p_rcv_buff_va[i+j] = (char *) newskb;
		}
#else
	for (i=0; i < (int)(bp->rcv_bufs_to_post); i++)
		for (j=0; (i + j) < (int)PI_RCV_DATA_K_NUM_ENTRIES; j += bp->rcv_bufs_to_post)
		{
			bp->descr_block_virt->rcv_data[i+j].long_0 = (u32) (PI_RCV_DESCR_M_SOP |
				((PI_RCV_DATA_K_SIZE_MAX / PI_ALIGN_K_RCV_DATA_BUFF) << PI_RCV_DESCR_V_SEG_LEN));
			bp->descr_block_virt->rcv_data[i+j].long_1 = (u32) (bp->rcv_block_phys + (i * PI_RCV_DATA_K_SIZE_MAX));
			bp->p_rcv_buff_va[i+j] = (char *) (bp->rcv_block_virt + (i * PI_RCV_DATA_K_SIZE_MAX));
		}
#endif
	}

	/* Update receive producer and Type 2 register */

	bp->rcv_xmt_reg.index.rcv_prod = bp->rcv_bufs_to_post;
	dfx_port_write_long(bp, PI_PDQ_K_REG_TYPE_2_PROD, bp->rcv_xmt_reg.lword);
	return 0;
}


/*
 * =========================
 * = dfx_rcv_queue_process =
 * =========================
 *
 * Overview:
 *   Process received LLC frames.
 *
 * Returns:
 *   None
 *
 * Arguments:
 *   bp - pointer to board information
 *
 * Functional Description:
 *   Received LLC frames are processed until there are no more consumed frames.
 *   Once all frames are processed, the receive buffers are returned to the
 *   adapter.  Note that this algorithm fixes the length of time that can be
 *   spent in this routine, because there are a fixed number of receive buffers
 *   to process and buffers are not produced until this routine exits and
 *   returns to the ISR.
 *
 * Return Codes:
 *   None
 *
 * Assumptions:
 *   None
 *
 * Side Effects:
 *   None
 */

static void dfx_rcv_queue_process(
	DFX_board_t *bp
	)

{
	PI_TYPE_2_CONSUMER *p_type_2_cons;	/* ptr to rcv/xmt consumer block register */
	char *p_buff;				/* ptr to start of packet receive buffer (FMC descriptor) */
	u32 descr, pkt_len;			/* FMC descriptor field and packet length */
	struct sk_buff *skb;			/* pointer to a sk_buff to hold incoming packet data */

	/* Service all consumed LLC receive frames */

	p_type_2_cons = (PI_TYPE_2_CONSUMER *)(&bp->cons_block_virt->xmt_rcv_data);
	while (bp->rcv_xmt_reg.index.rcv_comp != p_type_2_cons->index.rcv_cons)
	{
		/* Process any errors */

		int entry;

		entry = bp->rcv_xmt_reg.index.rcv_comp;
#ifdef DYNAMIC_BUFFERS
		p_buff = (char *) (((struct sk_buff *)bp->p_rcv_buff_va[entry])->data);
#else
		p_buff = (char *) bp->p_rcv_buff_va[entry];
#endif
		memcpy(&descr, p_buff + RCV_BUFF_K_DESCR, sizeof(u32));

		if (descr & PI_FMC_DESCR_M_RCC_FLUSH)
		{
			if (descr & PI_FMC_DESCR_M_RCC_CRC)
				bp->rcv_crc_errors++;
			else
				bp->rcv_frame_status_errors++;
		}
		else
		{
			int rx_in_place = 0;

			/* The frame was received without errors - verify packet length */

			pkt_len = (u32)((descr & PI_FMC_DESCR_M_LEN) >> PI_FMC_DESCR_V_LEN);
			pkt_len -= 4;	/* subtract 4 byte CRC */
			if (!IN_RANGE(pkt_len, FDDI_K_LLC_ZLEN, FDDI_K_LLC_LEN))
				bp->rcv_length_errors++;
			else{
#ifdef DYNAMIC_BUFFERS
				if (pkt_len > SKBUFF_RX_COPYBREAK) {
					struct sk_buff *newskb;

					newskb = dev_alloc_skb(NEW_SKB_SIZE);
					if (newskb){
						rx_in_place = 1;

						my_skb_align(newskb, 128);
						skb = (struct sk_buff *)bp->p_rcv_buff_va[entry];
						dma_unmap_single(bp->bus_dev,
							bp->descr_block_virt->rcv_data[entry].long_1,
							NEW_SKB_SIZE,
							DMA_FROM_DEVICE);
						skb_reserve(skb, RCV_BUFF_K_PADDING);
						bp->p_rcv_buff_va[entry] = (char *)newskb;
						bp->descr_block_virt->rcv_data[entry].long_1 =
							(u32)dma_map_single(bp->bus_dev,
								newskb->data,
								NEW_SKB_SIZE,
								DMA_FROM_DEVICE);
					} else
						skb = NULL;
				} else
#endif
					skb = dev_alloc_skb(pkt_len+3);	/* alloc new buffer to pass up, add room for PRH */
				if (skb == NULL)
				{
					printk("%s: Could not allocate receive buffer.  Dropping packet.\n", bp->dev->name);
					bp->rcv_discards++;
					break;
				}
				else {
#ifndef DYNAMIC_BUFFERS
					if (! rx_in_place)
#endif
					{
						/* Receive buffer allocated, pass receive packet up */

						skb_copy_to_linear_data(skb,
							p_buff + RCV_BUFF_K_PADDING,
							pkt_len + 3);
					}

					skb_reserve(skb,3);	/* adjust data field so that it points to FC byte */
					skb_put(skb, pkt_len);	/* pass up packet length, NOT including CRC */
					skb->protocol = fddi_type_trans(skb, bp->dev);
					bp->rcv_total_bytes += skb->len;
					netif_rx(skb);

					/* Update the rcv counters */
					bp->dev->last_rx = jiffies;
					bp->rcv_total_frames++;
					if (*(p_buff + RCV_BUFF_K_DA) & 0x01)
						bp->rcv_multicast_frames++;
				}
			}
		}

		/*
		 * Advance the producer (for recycling) and advance the completion
		 * (for servicing received frames).  Note that it is okay to
		 * advance the producer without checking that it passes the
		 * completion index because they are both advanced at the same
		 * rate.
		 */

		bp->rcv_xmt_reg.index.rcv_prod += 1;
		bp->rcv_xmt_reg.index.rcv_comp += 1;
	}
}


/*
 * =====================
 * = dfx_xmt_queue_pkt =
 * =====================
 *
 * Overview:
 *   Queues packets for transmission
 *
 * Returns:
 *   Condition code
 *
 * Arguments:
 *   skb - pointer to sk_buff to queue for transmission
 *   dev - pointer to device information
 *
 * Functional Description:
 *   Here we assume that an incoming skb transmit request
 *   is contained in a single physically contiguous buffer
 *   in which the virtual address of the start of packet
 *   (skb->data) can be converted to a physical address
 *   by using pci_map_single().
 *
 *   Since the adapter architecture requires a three byte
 *   packet request header to prepend the start of packet,
 *   we'll write the three byte field immediately prior to
 *   the FC byte.  This assumption is valid because we've
 *   ensured that dev->hard_header_len includes three pad
 *   bytes.  By posting a single fragment to the adapter,
 *   we'll reduce the number of descriptor fetches and
 *   bus traffic needed to send the request.
 *
 *   Also, we can't free the skb until after it's been DMA'd
 *   out by the adapter, so we'll queue it in the driver and
 *   return it in dfx_xmt_done.
 *
 * Return Codes:
 *   0 - driver queued packet, link is unavailable, or skbuff was bad
 *   1 - caller should requeue the sk_buff for later transmission
 *
 * Assumptions:
 *   First and foremost, we assume the incoming skb pointer
 *   is NOT NULL and is pointing to a valid sk_buff structure.
 *
 *   The outgoing packet is complete, starting with the
 *   frame control byte including the last byte of data,
 *   but NOT including the 4 byte CRC.  We'll let the
 *   adapter hardware generate and append the CRC.
 *
 *   The entire packet is stored in one physically
 *   contiguous buffer which is not cached and whose
 *   32-bit physical address can be determined.
 *
 *   It's vital that this routine is NOT reentered for the
 *   same board and that the OS is not in another section of
 *   code (eg. dfx_int_common) for the same board on a
 *   different thread.
 *
 * Side Effects:
 *   None
 */

static int dfx_xmt_queue_pkt(
	struct sk_buff *skb,
	struct net_device *dev
	)

{
	DFX_board_t *bp = netdev_priv(dev);
	u8 prod;				/* local transmit producer index */
	PI_XMT_DESCR *p_xmt_descr;		/* ptr to transmit descriptor block entry */
	XMT_DRIVER_DESCR *p_xmt_drv_descr;	/* ptr to transmit driver descriptor */
	unsigned long flags;

	netif_stop_queue(dev);

	/*
	 * Verify that incoming transmit request is OK
	 *
	 * Note: The packet size check is consistent with other
	 *	 Linux device drivers, although the correct packet
	 *	 size should be verified before calling the
	 *	 transmit routine.
	 */

	if (!IN_RANGE(skb->len, FDDI_K_LLC_ZLEN, FDDI_K_LLC_LEN))
	{
		printk("%s: Invalid packet length - %u bytes\n",
			dev->name, skb->len);
		bp->xmt_length_errors++;	/* bump error counter */
		netif_wake_queue(dev);
		dev_kfree_skb(skb);
		return(0);			/* return "success" */
	}
	/*
	 * See if adapter link is available, if not, free buffer
	 *
	 * Note: If the link isn't available, free buffer and return 0
	 *	 rather than tell the upper layer to requeue the packet.
	 *	 The methodology here is that by the time the link
	 *	 becomes available, the packet to be sent will be
	 *	 fairly stale.  By simply dropping the packet, the
	 *	 higher layer protocols will eventually time out
	 *	 waiting for response packets which it won't receive.
	 */

	if (bp->link_available == PI_K_FALSE)
	{
		if (dfx_hw_adap_state_rd(bp) == PI_STATE_K_LINK_AVAIL)	/* is link really available? */
			bp->link_available = PI_K_TRUE;	/* if so, set flag and continue */
		else
		{
			bp->xmt_discards++;	/* bump error counter */
			dev_kfree_skb(skb);	/* free sk_buff now */
			netif_wake_queue(dev);
			return(0);		/* return "success" */
		}
	}

	spin_lock_irqsave(&bp->lock, flags);

	/* Get the current producer and the next free xmt data descriptor */

	prod = bp->rcv_xmt_reg.index.xmt_prod;
	p_xmt_descr = &(bp->descr_block_virt->xmt_data[prod]);

	/*
	 * Get pointer to auxiliary queue entry to contain information
	 * for this packet.
	 *
	 * Note: The current xmt producer index will become the
	 *	 current xmt completion index when we complete this
	 *	 packet later on.  So, we'll get the pointer to the
	 *	 next auxiliary queue entry now before we bump the
	 *	 producer index.
	 */

	p_xmt_drv_descr = &(bp->xmt_drv_descr_blk[prod++]);	/* also bump producer index */

	/* Write the three PRH bytes immediately before the FC byte */

	skb_push(skb,3);
	skb->data[0] = DFX_PRH0_BYTE;	/* these byte values are defined */
	skb->data[1] = DFX_PRH1_BYTE;	/* in the Motorola FDDI MAC chip */
	skb->data[2] = DFX_PRH2_BYTE;	/* specification */

	/*
	 * Write the descriptor with buffer info and bump producer
	 *
	 * Note: Since we need to start DMA from the packet request
	 *	 header, we'll add 3 bytes to the DMA buffer length,
	 *	 and we'll determine the physical address of the
	 *	 buffer from the PRH, not skb->data.
	 *
	 * Assumptions:
	 *	 1. Packet starts with the frame control (FC) byte
	 *	    at skb->data.
	 *	 2. The 4-byte CRC is not appended to the buffer or
	 *	    included in the length.
	 *	 3. Packet length (skb->len) is from FC to end of
	 *	    data, inclusive.
	 *	 4. The packet length does not exceed the maximum
	 *	    FDDI LLC frame length of 4491 bytes.
	 *	 5. The entire packet is contained in a physically
	 *	    contiguous, non-cached, locked memory space
	 *	    comprised of a single buffer pointed to by
	 *	    skb->data.
	 *	 6. The physical address of the start of packet
	 *	    can be determined from the virtual address
	 *	    by using pci_map_single() and is only 32-bits
	 *	    wide.
	 */

	p_xmt_descr->long_0 = (u32) (PI_XMT_DESCR_M_SOP | PI_XMT_DESCR_M_EOP | ((skb->len) << PI_XMT_DESCR_V_SEG_LEN));
	p_xmt_descr->long_1 = (u32)dma_map_single(bp->bus_dev, skb->data,
						  skb->len, DMA_TO_DEVICE);

	/*
	 * Verify that descriptor is actually available
	 *
	 * Note: If descriptor isn't available, return 1 which tells
	 *	 the upper layer to requeue the packet for later
	 *	 transmission.
	 *
	 *	 We need to ensure that the producer never reaches the
	 *	 completion, except to indicate that the queue is empty.
	 */

	if (prod == bp->rcv_xmt_reg.index.xmt_comp)
	{
		skb_pull(skb,3);
		spin_unlock_irqrestore(&bp->lock, flags);
		return(1);	/* requeue packet for later */
	}

	/*
	 * Save info for this packet for xmt done indication routine
	 *
	 * Normally, we'd save the producer index in the p_xmt_drv_descr
	 * structure so that we'd have it handy when we complete this
	 * packet later (in dfx_xmt_done).  However, since the current
	 * transmit architecture guarantees a single fragment for the
	 * entire packet, we can simply bump the completion index by
	 * one (1) for each completed packet.
	 *
	 * Note: If this assumption changes and we're presented with
3332 * an inconsistent number of transmit fragments for packet 3333 * an inconsistent number of transmit fragments for packet
3333 * data, we'll need to modify this code to save the current 3334 * data, we'll need to modify this code to save the current
3334 * transmit producer index. 3335 * transmit producer index.
3335 */ 3336 */
3336 3337
3337 p_xmt_drv_descr->p_skb = skb; 3338 p_xmt_drv_descr->p_skb = skb;
3338 3339
3339 /* Update Type 2 register */ 3340 /* Update Type 2 register */
3340 3341
3341 bp->rcv_xmt_reg.index.xmt_prod = prod; 3342 bp->rcv_xmt_reg.index.xmt_prod = prod;
3342 dfx_port_write_long(bp, PI_PDQ_K_REG_TYPE_2_PROD, bp->rcv_xmt_reg.lword); 3343 dfx_port_write_long(bp, PI_PDQ_K_REG_TYPE_2_PROD, bp->rcv_xmt_reg.lword);
3343 spin_unlock_irqrestore(&bp->lock, flags); 3344 spin_unlock_irqrestore(&bp->lock, flags);
3344 netif_wake_queue(dev); 3345 netif_wake_queue(dev);
3345 return(0); /* packet queued to adapter */ 3346 return(0); /* packet queued to adapter */
3346 } 3347 }
3347 3348

/*
 * ================
 * = dfx_xmt_done =
 * ================
 *
 * Overview:
 *   Processes all frames that have been transmitted.
 *
 * Returns:
 *   None
 *
 * Arguments:
 *   bp - pointer to board information
 *
 * Functional Description:
 *   For all consumed transmit descriptors that have not
 *   yet been completed, we'll free the skb we were holding
 *   onto using dev_kfree_skb and bump the appropriate
 *   counters.
 *
 * Return Codes:
 *   None
 *
 * Assumptions:
 *   The Type 2 register is not updated in this routine.  It is
 *   assumed that it will be updated in the ISR when dfx_xmt_done
 *   returns.
 *
 * Side Effects:
 *   None
 */

static int dfx_xmt_done(DFX_board_t *bp)
{
	XMT_DRIVER_DESCR	*p_xmt_drv_descr;	/* ptr to transmit driver descriptor */
	PI_TYPE_2_CONSUMER	*p_type_2_cons;		/* ptr to rcv/xmt consumer block register */
	u8			comp;			/* local transmit completion index */
	int			freed = 0;		/* buffers freed */

	/* Service all consumed transmit frames */

	p_type_2_cons = (PI_TYPE_2_CONSUMER *)(&bp->cons_block_virt->xmt_rcv_data);
	while (bp->rcv_xmt_reg.index.xmt_comp != p_type_2_cons->index.xmt_cons)
	{
		/* Get pointer to the transmit driver descriptor block information */

		p_xmt_drv_descr = &(bp->xmt_drv_descr_blk[bp->rcv_xmt_reg.index.xmt_comp]);

		/* Increment transmit counters */

		bp->xmt_total_frames++;
		bp->xmt_total_bytes += p_xmt_drv_descr->p_skb->len;

		/* Return skb to operating system */
		comp = bp->rcv_xmt_reg.index.xmt_comp;
		dma_unmap_single(bp->bus_dev,
				 bp->descr_block_virt->xmt_data[comp].long_1,
				 p_xmt_drv_descr->p_skb->len,
				 DMA_TO_DEVICE);
		dev_kfree_skb_irq(p_xmt_drv_descr->p_skb);

		/*
		 * Move to start of next packet by updating completion index
		 *
		 * Here we assume that a transmit packet request is always
		 * serviced by posting one fragment.  We can therefore
		 * simplify the completion code by incrementing the
		 * completion index by one.  This code will need to be
		 * modified if this assumption changes.  See comments
		 * in dfx_xmt_queue_pkt for more details.
		 */

		bp->rcv_xmt_reg.index.xmt_comp += 1;
		freed++;
	}
	return freed;
}


/*
 * =================
 * = dfx_rcv_flush =
 * =================
 *
 * Overview:
 *   Remove all skb's in the receive ring.
 *
 * Returns:
 *   None
 *
 * Arguments:
 *   bp - pointer to board information
 *
 * Functional Description:
 *   Free's all the dynamically allocated skb's that are
 *   currently attached to the device receive ring.  This
 *   function is typically only used when the device is
 *   initialized or reinitialized.
 *
 * Return Codes:
 *   None
 *
 * Side Effects:
 *   None
 */
#ifdef DYNAMIC_BUFFERS
static void dfx_rcv_flush( DFX_board_t *bp )
{
	int i, j;

	for (i = 0; i < (int)(bp->rcv_bufs_to_post); i++)
		for (j = 0; (i + j) < (int)PI_RCV_DATA_K_NUM_ENTRIES; j += bp->rcv_bufs_to_post)
		{
			struct sk_buff *skb;
			skb = (struct sk_buff *)bp->p_rcv_buff_va[i+j];
			if (skb)
				dev_kfree_skb(skb);
			bp->p_rcv_buff_va[i+j] = NULL;
		}

}
#else
static inline void dfx_rcv_flush( DFX_board_t *bp )
{
}
#endif /* DYNAMIC_BUFFERS */

/*
 * =================
 * = dfx_xmt_flush =
 * =================
 *
 * Overview:
 *   Processes all frames whether they've been transmitted
 *   or not.
 *
 * Returns:
 *   None
 *
 * Arguments:
 *   bp - pointer to board information
 *
 * Functional Description:
 *   For all produced transmit descriptors that have not
 *   yet been completed, we'll free the skb we were holding
 *   onto using dev_kfree_skb and bump the appropriate
 *   counters.  Of course, it's possible that some of
 *   these transmit requests actually did go out, but we
 *   won't make that distinction here.  Finally, we'll
 *   update the consumer index to match the producer.
 *
 * Return Codes:
 *   None
 *
 * Assumptions:
 *   This routine does NOT update the Type 2 register.  It
 *   is assumed that this routine is being called during a
 *   transmit flush interrupt, or a shutdown or close routine.
 *
 * Side Effects:
 *   None
 */

static void dfx_xmt_flush( DFX_board_t *bp )
{
	u32			prod_cons;		/* rcv/xmt consumer block longword */
	XMT_DRIVER_DESCR	*p_xmt_drv_descr;	/* ptr to transmit driver descriptor */
	u8			comp;			/* local transmit completion index */

	/* Flush all outstanding transmit frames */

	while (bp->rcv_xmt_reg.index.xmt_comp != bp->rcv_xmt_reg.index.xmt_prod)
	{
		/* Get pointer to the transmit driver descriptor block information */

		p_xmt_drv_descr = &(bp->xmt_drv_descr_blk[bp->rcv_xmt_reg.index.xmt_comp]);

		/* Return skb to operating system */
		comp = bp->rcv_xmt_reg.index.xmt_comp;
		dma_unmap_single(bp->bus_dev,
				 bp->descr_block_virt->xmt_data[comp].long_1,
				 p_xmt_drv_descr->p_skb->len,
				 DMA_TO_DEVICE);
		dev_kfree_skb(p_xmt_drv_descr->p_skb);

		/* Increment transmit error counter */

		bp->xmt_discards++;

		/*
		 * Move to start of next packet by updating completion index
		 *
		 * Here we assume that a transmit packet request is always
		 * serviced by posting one fragment.  We can therefore
		 * simplify the completion code by incrementing the
		 * completion index by one.  This code will need to be
		 * modified if this assumption changes.  See comments
		 * in dfx_xmt_queue_pkt for more details.
		 */

		bp->rcv_xmt_reg.index.xmt_comp += 1;
	}

	/* Update the transmit consumer index in the consumer block */

	prod_cons = (u32)(bp->cons_block_virt->xmt_rcv_data & ~PI_CONS_M_XMT_INDEX);
	prod_cons |= (u32)(bp->rcv_xmt_reg.index.xmt_prod << PI_CONS_V_XMT_INDEX);
	bp->cons_block_virt->xmt_rcv_data = prod_cons;
}

/*
 * ==================
 * = dfx_unregister =
 * ==================
 *
 * Overview:
 *   Shuts down an FDDI controller
 *
 * Returns:
 *   Condition code
 *
 * Arguments:
 *   bdev - pointer to device information
 *
 * Functional Description:
 *
 * Return Codes:
 *   None
 *
 * Assumptions:
 *   It compiles so it should work :-(  (PCI cards do :-)
 *
 * Side Effects:
 *   Device structures for FDDI adapters (fddi0, fddi1, etc) are
 *   freed.
 */
static void __devexit dfx_unregister(struct device *bdev)
{
	struct net_device *dev = dev_get_drvdata(bdev);
	DFX_board_t *bp = netdev_priv(dev);
	int dfx_bus_pci = DFX_BUS_PCI(bdev);
	int dfx_bus_tc = DFX_BUS_TC(bdev);
	int dfx_use_mmio = DFX_MMIO || dfx_bus_tc;
	resource_size_t bar_start = 0;		/* pointer to port */
	resource_size_t bar_len = 0;		/* resource length */
	int alloc_size;				/* total buffer size used */

	unregister_netdev(dev);

	alloc_size = sizeof(PI_DESCR_BLOCK) +
		     PI_CMD_REQ_K_SIZE_MAX + PI_CMD_RSP_K_SIZE_MAX +
#ifndef DYNAMIC_BUFFERS
		     (bp->rcv_bufs_to_post * PI_RCV_DATA_K_SIZE_MAX) +
#endif
		     sizeof(PI_CONSUMER_BLOCK) +
		     (PI_ALIGN_K_DESC_BLK - 1);
	if (bp->kmalloced)
		dma_free_coherent(bdev, alloc_size,
				  bp->kmalloced, bp->kmalloced_dma);

	dfx_bus_uninit(dev);

	dfx_get_bars(bdev, &bar_start, &bar_len);
	if (dfx_use_mmio) {
		iounmap(bp->base.mem);
		release_mem_region(bar_start, bar_len);
	} else
		release_region(bar_start, bar_len);

	if (dfx_bus_pci)
		pci_disable_device(to_pci_dev(bdev));

	free_netdev(dev);
}


static int __devinit __maybe_unused dfx_dev_register(struct device *);
static int __devexit __maybe_unused dfx_dev_unregister(struct device *);

#ifdef CONFIG_PCI
static int __devinit dfx_pci_register(struct pci_dev *,
				      const struct pci_device_id *);
static void __devexit dfx_pci_unregister(struct pci_dev *);

static struct pci_device_id dfx_pci_table[] = {
	{ PCI_DEVICE(PCI_VENDOR_ID_DEC, PCI_DEVICE_ID_DEC_FDDI) },
	{ }
};
MODULE_DEVICE_TABLE(pci, dfx_pci_table);

static struct pci_driver dfx_pci_driver = {
	.name		= "defxx",
	.id_table	= dfx_pci_table,
	.probe		= dfx_pci_register,
	.remove		= __devexit_p(dfx_pci_unregister),
};

static __devinit int dfx_pci_register(struct pci_dev *pdev,
				      const struct pci_device_id *ent)
{
	return dfx_register(&pdev->dev);
}

static void __devexit dfx_pci_unregister(struct pci_dev *pdev)
{
	dfx_unregister(&pdev->dev);
}
#endif /* CONFIG_PCI */

#ifdef CONFIG_EISA
static struct eisa_device_id dfx_eisa_table[] = {
	{ "DEC3001", DEFEA_PROD_ID_1 },
	{ "DEC3002", DEFEA_PROD_ID_2 },
	{ "DEC3003", DEFEA_PROD_ID_3 },
	{ "DEC3004", DEFEA_PROD_ID_4 },
	{ }
};
MODULE_DEVICE_TABLE(eisa, dfx_eisa_table);

static struct eisa_driver dfx_eisa_driver = {
	.id_table	= dfx_eisa_table,
	.driver		= {
		.name	= "defxx",
		.bus	= &eisa_bus_type,
		.probe	= dfx_dev_register,
		.remove	= __devexit_p(dfx_dev_unregister),
	},
};
#endif /* CONFIG_EISA */

#ifdef CONFIG_TC
static struct tc_device_id const dfx_tc_table[] = {
	{ "DEC     ", "PMAF-FA " },
	{ "DEC     ", "PMAF-FD " },
	{ "DEC     ", "PMAF-FS " },
	{ "DEC     ", "PMAF-FU " },
	{ }
};
MODULE_DEVICE_TABLE(tc, dfx_tc_table);

static struct tc_driver dfx_tc_driver = {
	.id_table	= dfx_tc_table,
	.driver		= {
		.name	= "defxx",
		.bus	= &tc_bus_type,
		.probe	= dfx_dev_register,
		.remove	= __devexit_p(dfx_dev_unregister),
	},
};
#endif /* CONFIG_TC */

static int __devinit __maybe_unused dfx_dev_register(struct device *dev)
{
	int status;

	status = dfx_register(dev);
	if (!status)
		get_device(dev);
	return status;
}

static int __devexit __maybe_unused dfx_dev_unregister(struct device *dev)
{
	put_device(dev);
	dfx_unregister(dev);
	return 0;
}


static int __devinit dfx_init(void)
{
	int status;

	status = pci_register_driver(&dfx_pci_driver);
	if (!status)
		status = eisa_driver_register(&dfx_eisa_driver);
	if (!status)
		status = tc_register_driver(&dfx_tc_driver);
	return status;
}

static void __devexit dfx_cleanup(void)
{
	tc_unregister_driver(&dfx_tc_driver);
	eisa_driver_unregister(&dfx_eisa_driver);
	pci_unregister_driver(&dfx_pci_driver);
}

module_init(dfx_init);
module_exit(dfx_cleanup);
MODULE_AUTHOR("Lawrence V. Stefani");
MODULE_DESCRIPTION("DEC FDDIcontroller TC/EISA/PCI (DEFTA/DEFEA/DEFPA) driver "
		   DRV_VERSION " " DRV_RELDATE);
MODULE_LICENSE("GPL");


/*
 * Local variables:
 * kernel-compile-command: "gcc -D__KERNEL__ -I/root/linux/include -Wall -Wstrict-prototypes -O2 -pipe -fomit-frame-pointer -fno-strength-reduce -m486 -malign-loops=2 -malign-jumps=2 -malign-functions=2 -c defxx.c"
 * End:
 */

drivers/net/natsemi.c
1 /* natsemi.c: A Linux PCI Ethernet driver for the NatSemi DP8381x series. */ 1 /* natsemi.c: A Linux PCI Ethernet driver for the NatSemi DP8381x series. */
2 /* 2 /*
3 Written/copyright 1999-2001 by Donald Becker. 3 Written/copyright 1999-2001 by Donald Becker.
4 Portions copyright (c) 2001,2002 Sun Microsystems (thockin@sun.com) 4 Portions copyright (c) 2001,2002 Sun Microsystems (thockin@sun.com)
5 Portions copyright 2001,2002 Manfred Spraul (manfred@colorfullife.com) 5 Portions copyright 2001,2002 Manfred Spraul (manfred@colorfullife.com)
6 Portions copyright 2004 Harald Welte <laforge@gnumonks.org> 6 Portions copyright 2004 Harald Welte <laforge@gnumonks.org>
7 7
8 This software may be used and distributed according to the terms of 8 This software may be used and distributed according to the terms of
9 the GNU General Public License (GPL), incorporated herein by reference. 9 the GNU General Public License (GPL), incorporated herein by reference.
10 Drivers based on or derived from this code fall under the GPL and must 10 Drivers based on or derived from this code fall under the GPL and must
11 retain the authorship, copyright and license notice. This file is not 11 retain the authorship, copyright and license notice. This file is not
12 a complete program and may only be used when the entire operating 12 a complete program and may only be used when the entire operating
13 system is licensed under the GPL. License for under other terms may be 13 system is licensed under the GPL. License for under other terms may be
14 available. Contact the original author for details. 14 available. Contact the original author for details.
15 15
16 The original author may be reached as becker@scyld.com, or at 16 The original author may be reached as becker@scyld.com, or at
17 Scyld Computing Corporation 17 Scyld Computing Corporation
18 410 Severn Ave., Suite 210 18 410 Severn Ave., Suite 210
19 Annapolis MD 21403 19 Annapolis MD 21403
20 20
21 Support information and updates available at 21 Support information and updates available at
22 http://www.scyld.com/network/netsemi.html 22 http://www.scyld.com/network/netsemi.html
23 [link no longer provides useful info -jgarzik] 23 [link no longer provides useful info -jgarzik]
24 24
25 25
26 TODO: 26 TODO:
27 * big endian support with CFG:BEM instead of cpu_to_le32 27 * big endian support with CFG:BEM instead of cpu_to_le32
28 */ 28 */
29 29
30 #include <linux/module.h> 30 #include <linux/module.h>
31 #include <linux/kernel.h> 31 #include <linux/kernel.h>
32 #include <linux/string.h> 32 #include <linux/string.h>
33 #include <linux/timer.h> 33 #include <linux/timer.h>
34 #include <linux/errno.h> 34 #include <linux/errno.h>
35 #include <linux/ioport.h> 35 #include <linux/ioport.h>
36 #include <linux/slab.h> 36 #include <linux/slab.h>
37 #include <linux/interrupt.h> 37 #include <linux/interrupt.h>
38 #include <linux/pci.h> 38 #include <linux/pci.h>
39 #include <linux/netdevice.h> 39 #include <linux/netdevice.h>
40 #include <linux/etherdevice.h> 40 #include <linux/etherdevice.h>
41 #include <linux/skbuff.h> 41 #include <linux/skbuff.h>
42 #include <linux/init.h> 42 #include <linux/init.h>
43 #include <linux/spinlock.h> 43 #include <linux/spinlock.h>
44 #include <linux/ethtool.h> 44 #include <linux/ethtool.h>
45 #include <linux/delay.h> 45 #include <linux/delay.h>
46 #include <linux/rtnetlink.h> 46 #include <linux/rtnetlink.h>
47 #include <linux/mii.h> 47 #include <linux/mii.h>
48 #include <linux/crc32.h> 48 #include <linux/crc32.h>
49 #include <linux/bitops.h> 49 #include <linux/bitops.h>
50 #include <linux/prefetch.h> 50 #include <linux/prefetch.h>
51 #include <asm/processor.h> /* Processor type for cache alignment. */ 51 #include <asm/processor.h> /* Processor type for cache alignment. */
52 #include <asm/io.h> 52 #include <asm/io.h>
53 #include <asm/irq.h> 53 #include <asm/irq.h>
54 #include <asm/uaccess.h> 54 #include <asm/uaccess.h>
55 55
56 #define DRV_NAME "natsemi" 56 #define DRV_NAME "natsemi"
57 #define DRV_VERSION "2.1" 57 #define DRV_VERSION "2.1"
58 #define DRV_RELDATE "Sept 11, 2006" 58 #define DRV_RELDATE "Sept 11, 2006"
59 59
60 #define RX_OFFSET 2 60 #define RX_OFFSET 2
61 61
62 /* Updated to recommendations in pci-skeleton v2.03. */ 62 /* Updated to recommendations in pci-skeleton v2.03. */
63 63
64 /* The user-configurable values. 64 /* The user-configurable values.
65 These may be modified when a driver module is loaded.*/ 65 These may be modified when a driver module is loaded.*/
66 66
67 #define NATSEMI_DEF_MSG (NETIF_MSG_DRV | \ 67 #define NATSEMI_DEF_MSG (NETIF_MSG_DRV | \
68 NETIF_MSG_LINK | \ 68 NETIF_MSG_LINK | \
69 NETIF_MSG_WOL | \ 69 NETIF_MSG_WOL | \
70 NETIF_MSG_RX_ERR | \ 70 NETIF_MSG_RX_ERR | \
71 NETIF_MSG_TX_ERR) 71 NETIF_MSG_TX_ERR)
72 static int debug = -1; 72 static int debug = -1;
73 73
74 static int mtu; 74 static int mtu;
75 75
76 /* Maximum number of multicast addresses to filter (vs. rx-all-multicast). 76 /* Maximum number of multicast addresses to filter (vs. rx-all-multicast).
77 This chip uses a 512 element hash table based on the Ethernet CRC. */ 77 This chip uses a 512 element hash table based on the Ethernet CRC. */
78 static const int multicast_filter_limit = 100; 78 static const int multicast_filter_limit = 100;
79 79
80 /* Set the copy breakpoint for the copy-only-tiny-frames scheme. 80 /* Set the copy breakpoint for the copy-only-tiny-frames scheme.
81 Setting to > 1518 effectively disables this feature. */ 81 Setting to > 1518 effectively disables this feature. */
82 static int rx_copybreak; 82 static int rx_copybreak;
83 83
84 static int dspcfg_workaround = 1; 84 static int dspcfg_workaround = 1;
85 85
86 /* Used to pass the media type, etc. 86 /* Used to pass the media type, etc.
87 Both 'options[]' and 'full_duplex[]' should exist for driver 87 Both 'options[]' and 'full_duplex[]' should exist for driver
88 interoperability. 88 interoperability.
89 The media type is usually passed in 'options[]'. 89 The media type is usually passed in 'options[]'.
90 */ 90 */
91 #define MAX_UNITS 8 /* More are supported, limit only on options */ 91 #define MAX_UNITS 8 /* More are supported, limit only on options */
92 static int options[MAX_UNITS]; 92 static int options[MAX_UNITS];
93 static int full_duplex[MAX_UNITS]; 93 static int full_duplex[MAX_UNITS];
94 94
95 /* Operational parameters that are set at compile time. */ 95 /* Operational parameters that are set at compile time. */
96 96
97 /* Keep the ring sizes a power of two for compile efficiency. 97 /* Keep the ring sizes a power of two for compile efficiency.
98 The compiler will convert <unsigned>'%'<2^N> into a bit mask. 98 The compiler will convert <unsigned>'%'<2^N> into a bit mask.
99 Making the Tx ring too large decreases the effectiveness of channel 99 Making the Tx ring too large decreases the effectiveness of channel
100 bonding and packet priority. 100 bonding and packet priority.
101 There are no ill effects from too-large receive rings. */ 101 There are no ill effects from too-large receive rings. */
102 #define TX_RING_SIZE 16 102 #define TX_RING_SIZE 16
103 #define TX_QUEUE_LEN 10 /* Limit ring entries actually used, min 4. */ 103 #define TX_QUEUE_LEN 10 /* Limit ring entries actually used, min 4. */
104 #define RX_RING_SIZE 32 104 #define RX_RING_SIZE 32
105 105
106 /* Operational parameters that usually are not changed. */ 106 /* Operational parameters that usually are not changed. */
107 /* Time in jiffies before concluding the transmitter is hung. */ 107 /* Time in jiffies before concluding the transmitter is hung. */
108 #define TX_TIMEOUT (2*HZ) 108 #define TX_TIMEOUT (2*HZ)
109 109
110 #define NATSEMI_HW_TIMEOUT 400 110 #define NATSEMI_HW_TIMEOUT 400
111 #define NATSEMI_TIMER_FREQ (5*HZ) 111 #define NATSEMI_TIMER_FREQ (5*HZ)
112 #define NATSEMI_PG0_NREGS 64 112 #define NATSEMI_PG0_NREGS 64
113 #define NATSEMI_RFDR_NREGS 8 113 #define NATSEMI_RFDR_NREGS 8
114 #define NATSEMI_PG1_NREGS 4 114 #define NATSEMI_PG1_NREGS 4
115 #define NATSEMI_NREGS (NATSEMI_PG0_NREGS + NATSEMI_RFDR_NREGS + \ 115 #define NATSEMI_NREGS (NATSEMI_PG0_NREGS + NATSEMI_RFDR_NREGS + \
116 NATSEMI_PG1_NREGS) 116 NATSEMI_PG1_NREGS)
117 #define NATSEMI_REGS_VER 1 /* v1 added RFDR registers */ 117 #define NATSEMI_REGS_VER 1 /* v1 added RFDR registers */
118 #define NATSEMI_REGS_SIZE (NATSEMI_NREGS * sizeof(u32)) 118 #define NATSEMI_REGS_SIZE (NATSEMI_NREGS * sizeof(u32))
119 119
120 /* Buffer sizes: 120 /* Buffer sizes:
121 * The nic writes 32-bit values, even if the upper bytes of 121 * The nic writes 32-bit values, even if the upper bytes of
122 * a 32-bit value are beyond the end of the buffer. 122 * a 32-bit value are beyond the end of the buffer.
123 */ 123 */
124 #define NATSEMI_HEADERS 22 /* 2*mac,type,vlan,crc */ 124 #define NATSEMI_HEADERS 22 /* 2*mac,type,vlan,crc */
125 #define NATSEMI_PADDING 16 /* 2 bytes should be sufficient */ 125 #define NATSEMI_PADDING 16 /* 2 bytes should be sufficient */
126 #define NATSEMI_LONGPKT 1518 /* limit for normal packets */ 126 #define NATSEMI_LONGPKT 1518 /* limit for normal packets */
127 #define NATSEMI_RX_LIMIT 2046 /* maximum supported by hardware */ 127 #define NATSEMI_RX_LIMIT 2046 /* maximum supported by hardware */
128 128
129 /* These identify the driver base version and may not be removed. */ 129 /* These identify the driver base version and may not be removed. */
130 static char version[] __devinitdata = 130 static char version[] __devinitdata =
131 KERN_INFO DRV_NAME " dp8381x driver, version " 131 KERN_INFO DRV_NAME " dp8381x driver, version "
132 DRV_VERSION ", " DRV_RELDATE "\n" 132 DRV_VERSION ", " DRV_RELDATE "\n"
133 KERN_INFO " originally by Donald Becker <becker@scyld.com>\n" 133 KERN_INFO " originally by Donald Becker <becker@scyld.com>\n"
134 KERN_INFO " 2.4.x kernel port by Jeff Garzik, Tjeerd Mulder\n"; 134 KERN_INFO " 2.4.x kernel port by Jeff Garzik, Tjeerd Mulder\n";
135 135
136 MODULE_AUTHOR("Donald Becker <becker@scyld.com>"); 136 MODULE_AUTHOR("Donald Becker <becker@scyld.com>");
137 MODULE_DESCRIPTION("National Semiconductor DP8381x series PCI Ethernet driver"); 137 MODULE_DESCRIPTION("National Semiconductor DP8381x series PCI Ethernet driver");
138 MODULE_LICENSE("GPL"); 138 MODULE_LICENSE("GPL");
139 139
140 module_param(mtu, int, 0); 140 module_param(mtu, int, 0);
141 module_param(debug, int, 0); 141 module_param(debug, int, 0);
142 module_param(rx_copybreak, int, 0); 142 module_param(rx_copybreak, int, 0);
143 module_param(dspcfg_workaround, int, 0); 143 module_param(dspcfg_workaround, int, 0);
144 module_param_array(options, int, NULL, 0); 144 module_param_array(options, int, NULL, 0);
145 module_param_array(full_duplex, int, NULL, 0); 145 module_param_array(full_duplex, int, NULL, 0);
146 MODULE_PARM_DESC(mtu, "DP8381x MTU (all boards)"); 146 MODULE_PARM_DESC(mtu, "DP8381x MTU (all boards)");
147 MODULE_PARM_DESC(debug, "DP8381x default debug level"); 147 MODULE_PARM_DESC(debug, "DP8381x default debug level");
148 MODULE_PARM_DESC(rx_copybreak, 148 MODULE_PARM_DESC(rx_copybreak,
149 "DP8381x copy breakpoint for copy-only-tiny-frames"); 149 "DP8381x copy breakpoint for copy-only-tiny-frames");
150 MODULE_PARM_DESC(dspcfg_workaround, "DP8381x: control DspCfg workaround"); 150 MODULE_PARM_DESC(dspcfg_workaround, "DP8381x: control DspCfg workaround");
151 MODULE_PARM_DESC(options, 151 MODULE_PARM_DESC(options,
152 "DP8381x: Bits 0-3: media type, bit 17: full duplex"); 152 "DP8381x: Bits 0-3: media type, bit 17: full duplex");
153 MODULE_PARM_DESC(full_duplex, "DP8381x full duplex setting(s) (1)"); 153 MODULE_PARM_DESC(full_duplex, "DP8381x full duplex setting(s) (1)");
154 154
155 /* 155 /*
156 Theory of Operation 156 Theory of Operation
157 157
158 I. Board Compatibility 158 I. Board Compatibility
159 159
160 This driver is designed for National Semiconductor DP83815 PCI Ethernet NIC. 160 This driver is designed for National Semiconductor DP83815 PCI Ethernet NIC.
161 It also works with other chips in the DP83810 series. 161 It also works with other chips in the DP83810 series.
162 162
163 II. Board-specific settings 163 II. Board-specific settings
164 164
165 This driver requires the PCI interrupt line to be valid. 165 This driver requires the PCI interrupt line to be valid.
166 It honors the EEPROM-set values. 166 It honors the EEPROM-set values.
167 167
168 III. Driver operation 168 III. Driver operation
169 169
170 IIIa. Ring buffers 170 IIIa. Ring buffers
171 171
172 This driver uses two statically allocated fixed-size descriptor lists 172 This driver uses two statically allocated fixed-size descriptor lists
173 formed into rings by a branch from the final descriptor to the beginning of 173 formed into rings by a branch from the final descriptor to the beginning of
174 the list. The ring sizes are set at compile time by RX/TX_RING_SIZE. 174 the list. The ring sizes are set at compile time by RX/TX_RING_SIZE.
175 The NatSemi design uses a 'next descriptor' pointer that the driver forms 175 The NatSemi design uses a 'next descriptor' pointer that the driver forms
176 into a list. 176 into a list.
177 177
178 IIIb/c. Transmit/Receive Structure 178 IIIb/c. Transmit/Receive Structure
179 179
180 This driver uses a zero-copy receive and transmit scheme. 180 This driver uses a zero-copy receive and transmit scheme.
181 The driver allocates full frame size skbuffs for the Rx ring buffers at 181 The driver allocates full frame size skbuffs for the Rx ring buffers at
182 open() time and passes the skb->data field to the chip as receive data 182 open() time and passes the skb->data field to the chip as receive data
183 buffers. When an incoming frame is less than RX_COPYBREAK bytes long, 183 buffers. When an incoming frame is less than RX_COPYBREAK bytes long,
184 a fresh skbuff is allocated and the frame is copied to the new skbuff. 184 a fresh skbuff is allocated and the frame is copied to the new skbuff.
185 When the incoming frame is larger, the skbuff is passed directly up the 185 When the incoming frame is larger, the skbuff is passed directly up the
186 protocol stack. Buffers consumed this way are replaced by newly allocated 186 protocol stack. Buffers consumed this way are replaced by newly allocated
187 skbuffs in a later phase of receives. 187 skbuffs in a later phase of receives.
188 188
189 The RX_COPYBREAK value is chosen to trade off the memory wasted by 189 The RX_COPYBREAK value is chosen to trade off the memory wasted by
190 using a full-sized skbuff for small frames vs. the copying costs of larger 190 using a full-sized skbuff for small frames vs. the copying costs of larger
191 frames. New boards are typically used in generously configured machines 191 frames. New boards are typically used in generously configured machines
192 and the underfilled buffers have negligible impact compared to the benefit of 192 and the underfilled buffers have negligible impact compared to the benefit of
193 a single allocation size, so the default value of zero results in never 193 a single allocation size, so the default value of zero results in never
194 copying packets. When copying is done, the cost is usually mitigated by using 194 copying packets. When copying is done, the cost is usually mitigated by using
195 a combined copy/checksum routine. Copying also preloads the cache, which is 195 a combined copy/checksum routine. Copying also preloads the cache, which is
196 most useful with small frames. 196 most useful with small frames.
197 197
198 A subtle aspect of the operation is that unaligned buffers are not permitted 198 A subtle aspect of the operation is that unaligned buffers are not permitted
199 by the hardware. Thus the IP header at offset 14 in an ethernet frame isn't 199 by the hardware. Thus the IP header at offset 14 in an ethernet frame isn't
200 longword aligned for further processing. When copying, frames are put into 200 longword aligned for further processing. When copying, frames are put into
201 the skbuff at an offset of "+2", 16-byte aligning the IP header. 201 the skbuff at an offset of "+2", 16-byte aligning the IP header.
202 202
203 IIId. Synchronization 203 IIId. Synchronization
204 204
205 Most operations are synchronized on the np->lock irq spinlock, except the 205 Most operations are synchronized on the np->lock irq spinlock, except the
206 receive and transmit paths, which are synchronized using a combination of 206 receive and transmit paths, which are synchronized using a combination of
207 hardware descriptor ownership, disabling interrupts and NAPI poll scheduling. 207 hardware descriptor ownership, disabling interrupts and NAPI poll scheduling.
208 208
209 IVb. References 209 IVb. References
210 210
211 http://www.scyld.com/expert/100mbps.html 211 http://www.scyld.com/expert/100mbps.html
212 http://www.scyld.com/expert/NWay.html 212 http://www.scyld.com/expert/NWay.html
213 Datasheet is available from: 213 Datasheet is available from:
214 http://www.national.com/pf/DP/DP83815.html 214 http://www.national.com/pf/DP/DP83815.html
215 215
216 IVc. Errata 216 IVc. Errata
217 217
218 None characterised. 218 None characterised.
219 */ 219 */
220 220
221 221
222 222
223 /* 223 /*
224 * Support for fibre connections on Am79C874: 224 * Support for fibre connections on Am79C874:
225 * This phy needs a special setup when connected to a fibre cable. 225 * This phy needs a special setup when connected to a fibre cable.
226 * http://www.amd.com/files/connectivitysolutions/networking/archivednetworking/22235.pdf 226 * http://www.amd.com/files/connectivitysolutions/networking/archivednetworking/22235.pdf
227 */ 227 */
228 #define PHYID_AM79C874 0x0022561b 228 #define PHYID_AM79C874 0x0022561b
229 229
230 enum { 230 enum {
231 MII_MCTRL = 0x15, /* mode control register */ 231 MII_MCTRL = 0x15, /* mode control register */
232 MII_FX_SEL = 0x0001, /* 100BASE-FX (fiber) */ 232 MII_FX_SEL = 0x0001, /* 100BASE-FX (fiber) */
233 MII_EN_SCRM = 0x0004, /* enable scrambler (tp) */ 233 MII_EN_SCRM = 0x0004, /* enable scrambler (tp) */
234 }; 234 };
235 235
236 enum { 236 enum {
237 NATSEMI_FLAG_IGNORE_PHY = 0x1, 237 NATSEMI_FLAG_IGNORE_PHY = 0x1,
238 }; 238 };
239 239
240 /* array of board data directly indexed by pci_tbl[x].driver_data */ 240 /* array of board data directly indexed by pci_tbl[x].driver_data */
241 static struct { 241 static struct {
242 const char *name; 242 const char *name;
243 unsigned long flags; 243 unsigned long flags;
244 unsigned int eeprom_size; 244 unsigned int eeprom_size;
245 } natsemi_pci_info[] __devinitdata = { 245 } natsemi_pci_info[] __devinitdata = {
246 { "Aculab E1/T1 PMXc cPCI carrier card", NATSEMI_FLAG_IGNORE_PHY, 128 }, 246 { "Aculab E1/T1 PMXc cPCI carrier card", NATSEMI_FLAG_IGNORE_PHY, 128 },
247 { "NatSemi DP8381[56]", 0, 24 }, 247 { "NatSemi DP8381[56]", 0, 24 },
248 }; 248 };
249 249
250 static struct pci_device_id natsemi_pci_tbl[] __devinitdata = { 250 static struct pci_device_id natsemi_pci_tbl[] __devinitdata = {
251 { PCI_VENDOR_ID_NS, 0x0020, 0x12d9, 0x000c, 0, 0, 0 }, 251 { PCI_VENDOR_ID_NS, 0x0020, 0x12d9, 0x000c, 0, 0, 0 },
252 { PCI_VENDOR_ID_NS, 0x0020, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 1 }, 252 { PCI_VENDOR_ID_NS, 0x0020, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 1 },
253 { } /* terminate list */ 253 { } /* terminate list */
254 }; 254 };
255 MODULE_DEVICE_TABLE(pci, natsemi_pci_tbl); 255 MODULE_DEVICE_TABLE(pci, natsemi_pci_tbl);
256 256
257 /* Offsets to the device registers. 257 /* Offsets to the device registers.
258 Unlike software-only systems, device drivers interact with complex hardware. 258 Unlike software-only systems, device drivers interact with complex hardware.
259 It's not useful to define symbolic names for every register bit in the 259 It's not useful to define symbolic names for every register bit in the
260 device. 260 device.
261 */ 261 */
262 enum register_offsets { 262 enum register_offsets {
263 ChipCmd = 0x00, 263 ChipCmd = 0x00,
264 ChipConfig = 0x04, 264 ChipConfig = 0x04,
265 EECtrl = 0x08, 265 EECtrl = 0x08,
266 PCIBusCfg = 0x0C, 266 PCIBusCfg = 0x0C,
267 IntrStatus = 0x10, 267 IntrStatus = 0x10,
268 IntrMask = 0x14, 268 IntrMask = 0x14,
269 IntrEnable = 0x18, 269 IntrEnable = 0x18,
270 IntrHoldoff = 0x1C, /* DP83816 only */ 270 IntrHoldoff = 0x1C, /* DP83816 only */
271 TxRingPtr = 0x20, 271 TxRingPtr = 0x20,
272 TxConfig = 0x24, 272 TxConfig = 0x24,
273 RxRingPtr = 0x30, 273 RxRingPtr = 0x30,
274 RxConfig = 0x34, 274 RxConfig = 0x34,
275 ClkRun = 0x3C, 275 ClkRun = 0x3C,
276 WOLCmd = 0x40, 276 WOLCmd = 0x40,
277 PauseCmd = 0x44, 277 PauseCmd = 0x44,
278 RxFilterAddr = 0x48, 278 RxFilterAddr = 0x48,
279 RxFilterData = 0x4C, 279 RxFilterData = 0x4C,
280 BootRomAddr = 0x50, 280 BootRomAddr = 0x50,
281 BootRomData = 0x54, 281 BootRomData = 0x54,
282 SiliconRev = 0x58, 282 SiliconRev = 0x58,
283 StatsCtrl = 0x5C, 283 StatsCtrl = 0x5C,
284 StatsData = 0x60, 284 StatsData = 0x60,
285 RxPktErrs = 0x60, 285 RxPktErrs = 0x60,
286 RxMissed = 0x68, 286 RxMissed = 0x68,
287 RxCRCErrs = 0x64, 287 RxCRCErrs = 0x64,
288 BasicControl = 0x80, 288 BasicControl = 0x80,
289 BasicStatus = 0x84, 289 BasicStatus = 0x84,
290 AnegAdv = 0x90, 290 AnegAdv = 0x90,
291 AnegPeer = 0x94, 291 AnegPeer = 0x94,
292 PhyStatus = 0xC0, 292 PhyStatus = 0xC0,
293 MIntrCtrl = 0xC4, 293 MIntrCtrl = 0xC4,
294 MIntrStatus = 0xC8, 294 MIntrStatus = 0xC8,
295 PhyCtrl = 0xE4, 295 PhyCtrl = 0xE4,
296 296
297 /* These are from the spec, around page 78... on a separate table. 297 /* These are from the spec, around page 78... on a separate table.
298 * The meaning of these registers depends on the value of PGSEL. */ 298 * The meaning of these registers depends on the value of PGSEL. */
299 PGSEL = 0xCC, 299 PGSEL = 0xCC,
300 PMDCSR = 0xE4, 300 PMDCSR = 0xE4,
301 TSTDAT = 0xFC, 301 TSTDAT = 0xFC,
302 DSPCFG = 0xF4, 302 DSPCFG = 0xF4,
303 SDCFG = 0xF8 303 SDCFG = 0xF8
304 }; 304 };
305 /* the values for the 'magic' registers above (PGSEL=1) */ 305 /* the values for the 'magic' registers above (PGSEL=1) */
306 #define PMDCSR_VAL 0x189c /* enable preferred adaptation circuitry */ 306 #define PMDCSR_VAL 0x189c /* enable preferred adaptation circuitry */
307 #define TSTDAT_VAL 0x0 307 #define TSTDAT_VAL 0x0
308 #define DSPCFG_VAL 0x5040 308 #define DSPCFG_VAL 0x5040
309 #define SDCFG_VAL 0x008c /* set voltage thresholds for Signal Detect */ 309 #define SDCFG_VAL 0x008c /* set voltage thresholds for Signal Detect */
310 #define DSPCFG_LOCK 0x20 /* coefficient lock bit in DSPCFG */ 310 #define DSPCFG_LOCK 0x20 /* coefficient lock bit in DSPCFG */
311 #define DSPCFG_COEF 0x1000 /* see coefficient (in TSTDAT) bit in DSPCFG */ 311 #define DSPCFG_COEF 0x1000 /* see coefficient (in TSTDAT) bit in DSPCFG */
312 #define TSTDAT_FIXED 0xe8 /* magic number for bad coefficients */ 312 #define TSTDAT_FIXED 0xe8 /* magic number for bad coefficients */
313 313
314 /* misc PCI space registers */ 314 /* misc PCI space registers */
315 enum pci_register_offsets { 315 enum pci_register_offsets {
316 PCIPM = 0x44, 316 PCIPM = 0x44,
317 }; 317 };
318 318
319 enum ChipCmd_bits { 319 enum ChipCmd_bits {
320 ChipReset = 0x100, 320 ChipReset = 0x100,
321 RxReset = 0x20, 321 RxReset = 0x20,
322 TxReset = 0x10, 322 TxReset = 0x10,
323 RxOff = 0x08, 323 RxOff = 0x08,
324 RxOn = 0x04, 324 RxOn = 0x04,
325 TxOff = 0x02, 325 TxOff = 0x02,
326 TxOn = 0x01, 326 TxOn = 0x01,
327 }; 327 };
328 328
329 enum ChipConfig_bits { 329 enum ChipConfig_bits {
330 CfgPhyDis = 0x200, 330 CfgPhyDis = 0x200,
331 CfgPhyRst = 0x400, 331 CfgPhyRst = 0x400,
332 CfgExtPhy = 0x1000, 332 CfgExtPhy = 0x1000,
333 CfgAnegEnable = 0x2000, 333 CfgAnegEnable = 0x2000,
334 CfgAneg100 = 0x4000, 334 CfgAneg100 = 0x4000,
335 CfgAnegFull = 0x8000, 335 CfgAnegFull = 0x8000,
336 CfgAnegDone = 0x8000000, 336 CfgAnegDone = 0x8000000,
337 CfgFullDuplex = 0x20000000, 337 CfgFullDuplex = 0x20000000,
338 CfgSpeed100 = 0x40000000, 338 CfgSpeed100 = 0x40000000,
339 CfgLink = 0x80000000, 339 CfgLink = 0x80000000,
340 }; 340 };
341 341
342 enum EECtrl_bits { 342 enum EECtrl_bits {
343 EE_ShiftClk = 0x04, 343 EE_ShiftClk = 0x04,
344 EE_DataIn = 0x01, 344 EE_DataIn = 0x01,
345 EE_ChipSelect = 0x08, 345 EE_ChipSelect = 0x08,
346 EE_DataOut = 0x02, 346 EE_DataOut = 0x02,
347 MII_Data = 0x10, 347 MII_Data = 0x10,
348 MII_Write = 0x20, 348 MII_Write = 0x20,
349 MII_ShiftClk = 0x40, 349 MII_ShiftClk = 0x40,
350 }; 350 };
351 351
352 enum PCIBusCfg_bits { 352 enum PCIBusCfg_bits {
353 EepromReload = 0x4, 353 EepromReload = 0x4,
354 }; 354 };
355 355
356 /* Bits in the interrupt status/mask registers. */ 356 /* Bits in the interrupt status/mask registers. */
357 enum IntrStatus_bits { 357 enum IntrStatus_bits {
358 IntrRxDone = 0x0001, 358 IntrRxDone = 0x0001,
359 IntrRxIntr = 0x0002, 359 IntrRxIntr = 0x0002,
360 IntrRxErr = 0x0004, 360 IntrRxErr = 0x0004,
361 IntrRxEarly = 0x0008, 361 IntrRxEarly = 0x0008,
362 IntrRxIdle = 0x0010, 362 IntrRxIdle = 0x0010,
363 IntrRxOverrun = 0x0020, 363 IntrRxOverrun = 0x0020,
364 IntrTxDone = 0x0040, 364 IntrTxDone = 0x0040,
365 IntrTxIntr = 0x0080, 365 IntrTxIntr = 0x0080,
366 IntrTxErr = 0x0100, 366 IntrTxErr = 0x0100,
367 IntrTxIdle = 0x0200, 367 IntrTxIdle = 0x0200,
368 IntrTxUnderrun = 0x0400, 368 IntrTxUnderrun = 0x0400,
369 StatsMax = 0x0800, 369 StatsMax = 0x0800,
370 SWInt = 0x1000, 370 SWInt = 0x1000,
371 WOLPkt = 0x2000, 371 WOLPkt = 0x2000,
372 LinkChange = 0x4000, 372 LinkChange = 0x4000,
373 IntrHighBits = 0x8000, 373 IntrHighBits = 0x8000,
374 RxStatusFIFOOver = 0x10000, 374 RxStatusFIFOOver = 0x10000,
375 IntrPCIErr = 0xf00000, 375 IntrPCIErr = 0xf00000,
376 RxResetDone = 0x1000000, 376 RxResetDone = 0x1000000,
377 TxResetDone = 0x2000000, 377 TxResetDone = 0x2000000,
378 IntrAbnormalSummary = 0xCD20, 378 IntrAbnormalSummary = 0xCD20,
379 }; 379 };
380 380
381 /* 381 /*
382 * Default Interrupts: 382 * Default Interrupts:
383 * Rx OK, Rx Packet Error, Rx Overrun, 383 * Rx OK, Rx Packet Error, Rx Overrun,
384 * Tx OK, Tx Packet Error, Tx Underrun, 384 * Tx OK, Tx Packet Error, Tx Underrun,
385 * MIB Service, Phy Interrupt, High Bits, 385 * MIB Service, Phy Interrupt, High Bits,
386 * Rx Status FIFO overrun, 386 * Rx Status FIFO overrun,
387 * Received Target Abort, Received Master Abort, 387 * Received Target Abort, Received Master Abort,
388 * Signalled System Error, Received Parity Error 388 * Signalled System Error, Received Parity Error
389 */ 389 */
390 #define DEFAULT_INTR 0x00f1cd65 390 #define DEFAULT_INTR 0x00f1cd65
391 391
392 enum TxConfig_bits { 392 enum TxConfig_bits {
393 TxDrthMask = 0x3f, 393 TxDrthMask = 0x3f,
394 TxFlthMask = 0x3f00, 394 TxFlthMask = 0x3f00,
395 TxMxdmaMask = 0x700000, 395 TxMxdmaMask = 0x700000,
396 TxMxdma_512 = 0x0, 396 TxMxdma_512 = 0x0,
397 TxMxdma_4 = 0x100000, 397 TxMxdma_4 = 0x100000,
398 TxMxdma_8 = 0x200000, 398 TxMxdma_8 = 0x200000,
399 TxMxdma_16 = 0x300000, 399 TxMxdma_16 = 0x300000,
400 TxMxdma_32 = 0x400000, 400 TxMxdma_32 = 0x400000,
401 TxMxdma_64 = 0x500000, 401 TxMxdma_64 = 0x500000,
402 TxMxdma_128 = 0x600000, 402 TxMxdma_128 = 0x600000,
403 TxMxdma_256 = 0x700000, 403 TxMxdma_256 = 0x700000,
404 TxCollRetry = 0x800000, 404 TxCollRetry = 0x800000,
405 TxAutoPad = 0x10000000, 405 TxAutoPad = 0x10000000,
406 TxMacLoop = 0x20000000, 406 TxMacLoop = 0x20000000,
407 TxHeartIgn = 0x40000000, 407 TxHeartIgn = 0x40000000,
408 TxCarrierIgn = 0x80000000 408 TxCarrierIgn = 0x80000000
409 }; 409 };
410 410
411 /* 411 /*
412 * Tx Configuration: 412 * Tx Configuration:
413 * - 256 byte DMA burst length 413 * - 256 byte DMA burst length
414 * - fill threshold 512 bytes (i.e. restart DMA when 512 bytes are free) 414 * - fill threshold 512 bytes (i.e. restart DMA when 512 bytes are free)
415 * - 64 bytes initial drain threshold (i.e. begin actual transmission 415 * - 64 bytes initial drain threshold (i.e. begin actual transmission
416 * when 64 byte are in the fifo) 416 * when 64 byte are in the fifo)
417 * - on tx underruns, increase drain threshold by 64. 417 * - on tx underruns, increase drain threshold by 64.
418 * - at most use a drain threshold of 1472 bytes: The sum of the fill 418 * - at most use a drain threshold of 1472 bytes: The sum of the fill
419 * threshold and the drain threshold must be less than 2016 bytes. 419 * threshold and the drain threshold must be less than 2016 bytes.
420 * 420 *
421 */ 421 */
422 #define TX_FLTH_VAL ((512/32) << 8) 422 #define TX_FLTH_VAL ((512/32) << 8)
423 #define TX_DRTH_VAL_START (64/32) 423 #define TX_DRTH_VAL_START (64/32)
424 #define TX_DRTH_VAL_INC 2 424 #define TX_DRTH_VAL_INC 2
425 #define TX_DRTH_VAL_LIMIT (1472/32) 425 #define TX_DRTH_VAL_LIMIT (1472/32)
426 426
427 enum RxConfig_bits { 427 enum RxConfig_bits {
428 RxDrthMask = 0x3e, 428 RxDrthMask = 0x3e,
429 RxMxdmaMask = 0x700000, 429 RxMxdmaMask = 0x700000,
430 RxMxdma_512 = 0x0, 430 RxMxdma_512 = 0x0,
431 RxMxdma_4 = 0x100000, 431 RxMxdma_4 = 0x100000,
432 RxMxdma_8 = 0x200000, 432 RxMxdma_8 = 0x200000,
433 RxMxdma_16 = 0x300000, 433 RxMxdma_16 = 0x300000,
434 RxMxdma_32 = 0x400000, 434 RxMxdma_32 = 0x400000,
435 RxMxdma_64 = 0x500000, 435 RxMxdma_64 = 0x500000,
436 RxMxdma_128 = 0x600000, 436 RxMxdma_128 = 0x600000,
437 RxMxdma_256 = 0x700000, 437 RxMxdma_256 = 0x700000,
438 RxAcceptLong = 0x8000000, 438 RxAcceptLong = 0x8000000,
439 RxAcceptTx = 0x10000000, 439 RxAcceptTx = 0x10000000,
440 RxAcceptRunt = 0x40000000, 440 RxAcceptRunt = 0x40000000,
441 RxAcceptErr = 0x80000000 441 RxAcceptErr = 0x80000000
442 }; 442 };
443 #define RX_DRTH_VAL (128/8) 443 #define RX_DRTH_VAL (128/8)
444 444
445 enum ClkRun_bits { 445 enum ClkRun_bits {
446 PMEEnable = 0x100, 446 PMEEnable = 0x100,
447 PMEStatus = 0x8000, 447 PMEStatus = 0x8000,
448 }; 448 };
449 449
450 enum WolCmd_bits { 450 enum WolCmd_bits {
451 WakePhy = 0x1, 451 WakePhy = 0x1,
452 WakeUnicast = 0x2, 452 WakeUnicast = 0x2,
453 WakeMulticast = 0x4, 453 WakeMulticast = 0x4,
454 WakeBroadcast = 0x8, 454 WakeBroadcast = 0x8,
455 WakeArp = 0x10, 455 WakeArp = 0x10,
456 WakePMatch0 = 0x20, 456 WakePMatch0 = 0x20,
457 WakePMatch1 = 0x40, 457 WakePMatch1 = 0x40,
458 WakePMatch2 = 0x80, 458 WakePMatch2 = 0x80,
459 WakePMatch3 = 0x100, 459 WakePMatch3 = 0x100,
460 WakeMagic = 0x200, 460 WakeMagic = 0x200,
461 WakeMagicSecure = 0x400, 461 WakeMagicSecure = 0x400,
462 SecureHack = 0x100000, 462 SecureHack = 0x100000,
463 WokePhy = 0x400000, 463 WokePhy = 0x400000,
464 WokeUnicast = 0x800000, 464 WokeUnicast = 0x800000,
465 WokeMulticast = 0x1000000, 465 WokeMulticast = 0x1000000,
466 WokeBroadcast = 0x2000000, 466 WokeBroadcast = 0x2000000,
467 WokeArp = 0x4000000, 467 WokeArp = 0x4000000,
468 WokePMatch0 = 0x8000000, 468 WokePMatch0 = 0x8000000,
469 WokePMatch1 = 0x10000000, 469 WokePMatch1 = 0x10000000,
470 WokePMatch2 = 0x20000000, 470 WokePMatch2 = 0x20000000,
471 WokePMatch3 = 0x40000000, 471 WokePMatch3 = 0x40000000,
472 WokeMagic = 0x80000000, 472 WokeMagic = 0x80000000,
473 WakeOptsSummary = 0x7ff 473 WakeOptsSummary = 0x7ff
474 }; 474 };
475 475
476 enum RxFilterAddr_bits { 476 enum RxFilterAddr_bits {
477 RFCRAddressMask = 0x3ff, 477 RFCRAddressMask = 0x3ff,
478 AcceptMulticast = 0x00200000, 478 AcceptMulticast = 0x00200000,
479 AcceptMyPhys = 0x08000000, 479 AcceptMyPhys = 0x08000000,
480 AcceptAllPhys = 0x10000000, 480 AcceptAllPhys = 0x10000000,
481 AcceptAllMulticast = 0x20000000, 481 AcceptAllMulticast = 0x20000000,
482 AcceptBroadcast = 0x40000000, 482 AcceptBroadcast = 0x40000000,
483 RxFilterEnable = 0x80000000 483 RxFilterEnable = 0x80000000
484 }; 484 };
485 485
486 enum StatsCtrl_bits { 486 enum StatsCtrl_bits {
487 StatsWarn = 0x1, 487 StatsWarn = 0x1,
488 StatsFreeze = 0x2, 488 StatsFreeze = 0x2,
489 StatsClear = 0x4, 489 StatsClear = 0x4,
490 StatsStrobe = 0x8, 490 StatsStrobe = 0x8,
491 }; 491 };
492 492
493 enum MIntrCtrl_bits { 493 enum MIntrCtrl_bits {
494 MICRIntEn = 0x2, 494 MICRIntEn = 0x2,
495 }; 495 };
496 496
497 enum PhyCtrl_bits { 497 enum PhyCtrl_bits {
498 PhyAddrMask = 0x1f, 498 PhyAddrMask = 0x1f,
499 }; 499 };
500 500
501 #define PHY_ADDR_NONE 32 501 #define PHY_ADDR_NONE 32
502 #define PHY_ADDR_INTERNAL 1 502 #define PHY_ADDR_INTERNAL 1
503 503
504 /* values we might find in the silicon revision register */ 504 /* values we might find in the silicon revision register */
505 #define SRR_DP83815_C 0x0302 505 #define SRR_DP83815_C 0x0302
506 #define SRR_DP83815_D 0x0403 506 #define SRR_DP83815_D 0x0403
507 #define SRR_DP83816_A4 0x0504 507 #define SRR_DP83816_A4 0x0504
508 #define SRR_DP83816_A5 0x0505 508 #define SRR_DP83816_A5 0x0505
509 509
510 /* The Rx and Tx buffer descriptors. */ 510 /* The Rx and Tx buffer descriptors. */
511 /* Note that using only 32 bit fields simplifies conversion to big-endian 511 /* Note that using only 32 bit fields simplifies conversion to big-endian
512 architectures. */ 512 architectures. */
513 struct netdev_desc { 513 struct netdev_desc {
514 u32 next_desc; 514 __le32 next_desc;
515 s32 cmd_status; 515 __le32 cmd_status;
516 u32 addr; 516 __le32 addr;
517 u32 software_use; 517 __le32 software_use;
518 }; 518 };
519 519
520 /* Bits in network_desc.status */ 520 /* Bits in network_desc.status */
521 enum desc_status_bits { 521 enum desc_status_bits {
522 DescOwn=0x80000000, DescMore=0x40000000, DescIntr=0x20000000, 522 DescOwn=0x80000000, DescMore=0x40000000, DescIntr=0x20000000,
523 DescNoCRC=0x10000000, DescPktOK=0x08000000, 523 DescNoCRC=0x10000000, DescPktOK=0x08000000,
524 DescSizeMask=0xfff, 524 DescSizeMask=0xfff,
525 525
526 DescTxAbort=0x04000000, DescTxFIFO=0x02000000, 526 DescTxAbort=0x04000000, DescTxFIFO=0x02000000,
527 DescTxCarrier=0x01000000, DescTxDefer=0x00800000, 527 DescTxCarrier=0x01000000, DescTxDefer=0x00800000,
528 DescTxExcDefer=0x00400000, DescTxOOWCol=0x00200000, 528 DescTxExcDefer=0x00400000, DescTxOOWCol=0x00200000,
529 DescTxExcColl=0x00100000, DescTxCollCount=0x000f0000, 529 DescTxExcColl=0x00100000, DescTxCollCount=0x000f0000,
530 530
531 DescRxAbort=0x04000000, DescRxOver=0x02000000, 531 DescRxAbort=0x04000000, DescRxOver=0x02000000,
532 DescRxDest=0x01800000, DescRxLong=0x00400000, 532 DescRxDest=0x01800000, DescRxLong=0x00400000,
533 DescRxRunt=0x00200000, DescRxInvalid=0x00100000, 533 DescRxRunt=0x00200000, DescRxInvalid=0x00100000,
534 DescRxCRC=0x00080000, DescRxAlign=0x00040000, 534 DescRxCRC=0x00080000, DescRxAlign=0x00040000,
535 DescRxLoop=0x00020000, DesRxColl=0x00010000, 535 DescRxLoop=0x00020000, DesRxColl=0x00010000,
536 }; 536 };
537 537
538 struct netdev_private { 538 struct netdev_private {
539 /* Descriptor rings first for alignment */ 539 /* Descriptor rings first for alignment */
540 dma_addr_t ring_dma; 540 dma_addr_t ring_dma;
541 struct netdev_desc *rx_ring; 541 struct netdev_desc *rx_ring;
542 struct netdev_desc *tx_ring; 542 struct netdev_desc *tx_ring;
543 /* The addresses of receive-in-place skbuffs */ 543 /* The addresses of receive-in-place skbuffs */
544 struct sk_buff *rx_skbuff[RX_RING_SIZE]; 544 struct sk_buff *rx_skbuff[RX_RING_SIZE];
545 dma_addr_t rx_dma[RX_RING_SIZE]; 545 dma_addr_t rx_dma[RX_RING_SIZE];
546 /* address of a sent-in-place packet/buffer, for later free() */ 546 /* address of a sent-in-place packet/buffer, for later free() */
547 struct sk_buff *tx_skbuff[TX_RING_SIZE]; 547 struct sk_buff *tx_skbuff[TX_RING_SIZE];
548 dma_addr_t tx_dma[TX_RING_SIZE]; 548 dma_addr_t tx_dma[TX_RING_SIZE];
549 struct net_device *dev; 549 struct net_device *dev;
550 struct napi_struct napi; 550 struct napi_struct napi;
551 struct net_device_stats stats; 551 struct net_device_stats stats;
552 /* Media monitoring timer */ 552 /* Media monitoring timer */
553 struct timer_list timer; 553 struct timer_list timer;
554 /* Frequently used values: keep some adjacent for cache effect */ 554 /* Frequently used values: keep some adjacent for cache effect */
555 struct pci_dev *pci_dev; 555 struct pci_dev *pci_dev;
	struct netdev_desc *rx_head_desc;
	/* Producer/consumer ring indices */
	unsigned int cur_rx, dirty_rx;
	unsigned int cur_tx, dirty_tx;
	/* Based on MTU+slack. */
	unsigned int rx_buf_sz;
	int oom;
	/* Interrupt status */
	u32 intr_status;
	/* Do not touch the nic registers */
	int hands_off;
	/* Don't pay attention to the reported link state. */
	int ignore_phy;
	/* external phy that is used: only valid if dev->if_port != PORT_TP */
	int mii;
	int phy_addr_external;
	unsigned int full_duplex;
	/* Rx filter */
	u32 cur_rx_mode;
	u32 rx_filter[16];
	/* FIFO and PCI burst thresholds */
	u32 tx_config, rx_config;
	/* original contents of ClkRun register */
	u32 SavedClkRun;
	/* silicon revision */
	u32 srr;
	/* expected DSPCFG value */
	u16 dspcfg;
	int dspcfg_workaround;
	/* parms saved in ethtool format */
	u16 speed;	/* The forced speed, 10Mb, 100Mb, gigabit */
	u8 duplex;	/* Duplex, half or full */
	u8 autoneg;	/* Autonegotiation enabled */
	/* MII transceiver section */
	u16 advertising;
	unsigned int iosize;
	spinlock_t lock;
	u32 msg_enable;
	/* EEPROM data */
	int eeprom_size;
};

static void move_int_phy(struct net_device *dev, int addr);
static int eeprom_read(void __iomem *ioaddr, int location);
static int mdio_read(struct net_device *dev, int reg);
static void mdio_write(struct net_device *dev, int reg, u16 data);
static void init_phy_fixup(struct net_device *dev);
static int miiport_read(struct net_device *dev, int phy_id, int reg);
static void miiport_write(struct net_device *dev, int phy_id, int reg, u16 data);
static int find_mii(struct net_device *dev);
static void natsemi_reset(struct net_device *dev);
static void natsemi_reload_eeprom(struct net_device *dev);
static void natsemi_stop_rxtx(struct net_device *dev);
static int netdev_open(struct net_device *dev);
static void do_cable_magic(struct net_device *dev);
static void undo_cable_magic(struct net_device *dev);
static void check_link(struct net_device *dev);
static void netdev_timer(unsigned long data);
static void dump_ring(struct net_device *dev);
static void tx_timeout(struct net_device *dev);
static int alloc_ring(struct net_device *dev);
static void refill_rx(struct net_device *dev);
static void init_ring(struct net_device *dev);
static void drain_tx(struct net_device *dev);
static void drain_ring(struct net_device *dev);
static void free_ring(struct net_device *dev);
static void reinit_ring(struct net_device *dev);
static void init_registers(struct net_device *dev);
static int start_tx(struct sk_buff *skb, struct net_device *dev);
static irqreturn_t intr_handler(int irq, void *dev_instance);
static void netdev_error(struct net_device *dev, int intr_status);
static int natsemi_poll(struct napi_struct *napi, int budget);
static void netdev_rx(struct net_device *dev, int *work_done, int work_to_do);
static void netdev_tx_done(struct net_device *dev);
static int natsemi_change_mtu(struct net_device *dev, int new_mtu);
#ifdef CONFIG_NET_POLL_CONTROLLER
static void natsemi_poll_controller(struct net_device *dev);
#endif
static void __set_rx_mode(struct net_device *dev);
static void set_rx_mode(struct net_device *dev);
static void __get_stats(struct net_device *dev);
static struct net_device_stats *get_stats(struct net_device *dev);
static int netdev_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
static int netdev_set_wol(struct net_device *dev, u32 newval);
static int netdev_get_wol(struct net_device *dev, u32 *supported, u32 *cur);
static int netdev_set_sopass(struct net_device *dev, u8 *newval);
static int netdev_get_sopass(struct net_device *dev, u8 *data);
static int netdev_get_ecmd(struct net_device *dev, struct ethtool_cmd *ecmd);
static int netdev_set_ecmd(struct net_device *dev, struct ethtool_cmd *ecmd);
static void enable_wol_mode(struct net_device *dev, int enable_intr);
static int netdev_close(struct net_device *dev);
static int netdev_get_regs(struct net_device *dev, u8 *buf);
static int netdev_get_eeprom(struct net_device *dev, u8 *buf);
static const struct ethtool_ops ethtool_ops;

#define NATSEMI_ATTR(_name) \
static ssize_t natsemi_show_##_name(struct device *dev, \
		struct device_attribute *attr, char *buf); \
static ssize_t natsemi_set_##_name(struct device *dev, \
		struct device_attribute *attr, \
		const char *buf, size_t count); \
static DEVICE_ATTR(_name, 0644, natsemi_show_##_name, natsemi_set_##_name)

#define NATSEMI_CREATE_FILE(_dev, _name) \
	device_create_file(&_dev->dev, &dev_attr_##_name)
#define NATSEMI_REMOVE_FILE(_dev, _name) \
	device_remove_file(&_dev->dev, &dev_attr_##_name)

NATSEMI_ATTR(dspcfg_workaround);

static ssize_t natsemi_show_dspcfg_workaround(struct device *dev,
					      struct device_attribute *attr,
					      char *buf)
{
	struct netdev_private *np = netdev_priv(to_net_dev(dev));

	return sprintf(buf, "%s\n", np->dspcfg_workaround ? "on" : "off");
}

static ssize_t natsemi_set_dspcfg_workaround(struct device *dev,
					     struct device_attribute *attr,
					     const char *buf, size_t count)
{
	struct netdev_private *np = netdev_priv(to_net_dev(dev));
	int new_setting;
	unsigned long flags;

	/* Find out the new setting */
	if (!strncmp("on", buf, count - 1) || !strncmp("1", buf, count - 1))
		new_setting = 1;
	else if (!strncmp("off", buf, count - 1)
		 || !strncmp("0", buf, count - 1))
		new_setting = 0;
	else
		return count;

	spin_lock_irqsave(&np->lock, flags);

	np->dspcfg_workaround = new_setting;

	spin_unlock_irqrestore(&np->lock, flags);

	return count;
}

static inline void __iomem *ns_ioaddr(struct net_device *dev)
{
	return (void __iomem *) dev->base_addr;
}

static inline void natsemi_irq_enable(struct net_device *dev)
{
	writel(1, ns_ioaddr(dev) + IntrEnable);
	readl(ns_ioaddr(dev) + IntrEnable);
}

static inline void natsemi_irq_disable(struct net_device *dev)
{
	writel(0, ns_ioaddr(dev) + IntrEnable);
	readl(ns_ioaddr(dev) + IntrEnable);
}

static void move_int_phy(struct net_device *dev, int addr)
{
	struct netdev_private *np = netdev_priv(dev);
	void __iomem *ioaddr = ns_ioaddr(dev);
	int target = 31;

	/*
	 * The internal phy is visible on the external mii bus. Therefore we
	 * must move it away before we can send commands to an external phy.
	 * There are two addresses we must avoid:
	 * - the address on the external phy that is used for transmission.
	 * - the address that we want to access. User space can access phys
	 *   on the mii bus with SIOCGMIIREG/SIOCSMIIREG, independent from the
	 *   phy that is used for transmission.
	 */

	if (target == addr)
		target--;
	if (target == np->phy_addr_external)
		target--;
	writew(target, ioaddr + PhyCtrl);
	readw(ioaddr + PhyCtrl);
	udelay(1);
}

static void __devinit natsemi_init_media (struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);
	u32 tmp;

	if (np->ignore_phy)
		netif_carrier_on(dev);
	else
		netif_carrier_off(dev);

	/* get the initial settings from hardware */
	tmp = mdio_read(dev, MII_BMCR);
	np->speed = (tmp & BMCR_SPEED100)? SPEED_100 : SPEED_10;
	np->duplex = (tmp & BMCR_FULLDPLX)? DUPLEX_FULL : DUPLEX_HALF;
	np->autoneg = (tmp & BMCR_ANENABLE)? AUTONEG_ENABLE: AUTONEG_DISABLE;
	np->advertising = mdio_read(dev, MII_ADVERTISE);

	if ((np->advertising & ADVERTISE_ALL) != ADVERTISE_ALL
	 && netif_msg_probe(np)) {
		printk(KERN_INFO "natsemi %s: Transceiver default autonegotiation %s "
			"10%s %s duplex.\n",
			pci_name(np->pci_dev),
			(mdio_read(dev, MII_BMCR) & BMCR_ANENABLE)?
			  "enabled, advertise" : "disabled, force",
			(np->advertising &
			  (ADVERTISE_100FULL|ADVERTISE_100HALF))?
			    "0" : "",
			(np->advertising &
			  (ADVERTISE_100FULL|ADVERTISE_10FULL))?
			    "full" : "half");
	}
	if (netif_msg_probe(np))
		printk(KERN_INFO
			"natsemi %s: Transceiver status %#04x advertising %#04x.\n",
			pci_name(np->pci_dev), mdio_read(dev, MII_BMSR),
			np->advertising);

}
static int __devinit natsemi_probe1 (struct pci_dev *pdev,
	const struct pci_device_id *ent)
{
	struct net_device *dev;
	struct netdev_private *np;
	int i, option, irq, chip_idx = ent->driver_data;
	static int find_cnt = -1;
	unsigned long iostart, iosize;
	void __iomem *ioaddr;
	const int pcibar = 1; /* PCI base address register */
	int prev_eedata;
	u32 tmp;
	DECLARE_MAC_BUF(mac);

	/* when built into the kernel, we only print version if device is found */
#ifndef MODULE
	static int printed_version;
	if (!printed_version++)
		printk(version);
#endif

	i = pci_enable_device(pdev);
	if (i) return i;

	/* natsemi has a non-standard PM control register
	 * in PCI config space. Some boards apparently need
	 * to be brought to D0 in this manner.
	 */
	pci_read_config_dword(pdev, PCIPM, &tmp);
	if (tmp & PCI_PM_CTRL_STATE_MASK) {
		/* D0 state, disable PME assertion */
		u32 newtmp = tmp & ~PCI_PM_CTRL_STATE_MASK;
		pci_write_config_dword(pdev, PCIPM, newtmp);
	}

	find_cnt++;
	iostart = pci_resource_start(pdev, pcibar);
	iosize = pci_resource_len(pdev, pcibar);
	irq = pdev->irq;

	pci_set_master(pdev);

	dev = alloc_etherdev(sizeof (struct netdev_private));
	if (!dev)
		return -ENOMEM;
	SET_NETDEV_DEV(dev, &pdev->dev);

	i = pci_request_regions(pdev, DRV_NAME);
	if (i)
		goto err_pci_request_regions;

	ioaddr = ioremap(iostart, iosize);
	if (!ioaddr) {
		i = -ENOMEM;
		goto err_ioremap;
	}

	/* Work around the dropped serial bit. */
	prev_eedata = eeprom_read(ioaddr, 6);
	for (i = 0; i < 3; i++) {
		int eedata = eeprom_read(ioaddr, i + 7);
		dev->dev_addr[i*2] = (eedata << 1) + (prev_eedata >> 15);
		dev->dev_addr[i*2+1] = eedata >> 7;
		prev_eedata = eedata;
	}

	dev->base_addr = (unsigned long __force) ioaddr;
	dev->irq = irq;

	np = netdev_priv(dev);
	netif_napi_add(dev, &np->napi, natsemi_poll, 64);
	np->dev = dev;

	np->pci_dev = pdev;
	pci_set_drvdata(pdev, dev);
	np->iosize = iosize;
	spin_lock_init(&np->lock);
	np->msg_enable = (debug >= 0) ? (1<<debug)-1 : NATSEMI_DEF_MSG;
	np->hands_off = 0;
	np->intr_status = 0;
	np->eeprom_size = natsemi_pci_info[chip_idx].eeprom_size;
	if (natsemi_pci_info[chip_idx].flags & NATSEMI_FLAG_IGNORE_PHY)
		np->ignore_phy = 1;
	else
		np->ignore_phy = 0;
	np->dspcfg_workaround = dspcfg_workaround;

	/* Initial port:
	 * - If configured to ignore the PHY set up for external.
	 * - If the nic was configured to use an external phy and if find_mii
	 *   finds a phy: use external port, first phy that replies.
	 * - Otherwise: internal port.
	 * Note that the phy address for the internal phy doesn't matter:
	 * The address would be used to access a phy over the mii bus, but
	 * the internal phy is accessed through mapped registers.
	 */
	if (np->ignore_phy || readl(ioaddr + ChipConfig) & CfgExtPhy)
		dev->if_port = PORT_MII;
	else
		dev->if_port = PORT_TP;
	/* Reset the chip to erase previous misconfiguration. */
	natsemi_reload_eeprom(dev);
	natsemi_reset(dev);

	if (dev->if_port != PORT_TP) {
		np->phy_addr_external = find_mii(dev);
		/* If we're ignoring the PHY it doesn't matter if we can't
		 * find one. */
		if (!np->ignore_phy && np->phy_addr_external == PHY_ADDR_NONE) {
			dev->if_port = PORT_TP;
			np->phy_addr_external = PHY_ADDR_INTERNAL;
		}
	} else {
		np->phy_addr_external = PHY_ADDR_INTERNAL;
	}

	option = find_cnt < MAX_UNITS ? options[find_cnt] : 0;
	if (dev->mem_start)
		option = dev->mem_start;

	/* The lower four bits are the media type. */
	if (option) {
		if (option & 0x200)
			np->full_duplex = 1;
		if (option & 15)
			printk(KERN_INFO
				"natsemi %s: ignoring user supplied media type %d",
				pci_name(np->pci_dev), option & 15);
	}
	if (find_cnt < MAX_UNITS && full_duplex[find_cnt])
		np->full_duplex = 1;

	/* The chip-specific entries in the device structure. */
	dev->open = &netdev_open;
	dev->hard_start_xmit = &start_tx;
	dev->stop = &netdev_close;
	dev->get_stats = &get_stats;
	dev->set_multicast_list = &set_rx_mode;
	dev->change_mtu = &natsemi_change_mtu;
	dev->do_ioctl = &netdev_ioctl;
	dev->tx_timeout = &tx_timeout;
	dev->watchdog_timeo = TX_TIMEOUT;

#ifdef CONFIG_NET_POLL_CONTROLLER
	dev->poll_controller = &natsemi_poll_controller;
#endif
	SET_ETHTOOL_OPS(dev, &ethtool_ops);

	if (mtu)
		dev->mtu = mtu;

	natsemi_init_media(dev);

	/* save the silicon revision for later querying */
	np->srr = readl(ioaddr + SiliconRev);
	if (netif_msg_hw(np))
		printk(KERN_INFO "natsemi %s: silicon revision %#04x.\n",
			pci_name(np->pci_dev), np->srr);

	i = register_netdev(dev);
	if (i)
		goto err_register_netdev;

	if (NATSEMI_CREATE_FILE(pdev, dspcfg_workaround))
		goto err_create_file;

	if (netif_msg_drv(np)) {
		printk(KERN_INFO "natsemi %s: %s at %#08lx "
		       "(%s), %s, IRQ %d",
		       dev->name, natsemi_pci_info[chip_idx].name, iostart,
		       pci_name(np->pci_dev), print_mac(mac, dev->dev_addr), irq);
		if (dev->if_port == PORT_TP)
			printk(", port TP.\n");
		else if (np->ignore_phy)
			printk(", port MII, ignoring PHY\n");
		else
			printk(", port MII, phy ad %d.\n", np->phy_addr_external);
	}
	return 0;

 err_create_file:
	unregister_netdev(dev);

 err_register_netdev:
	iounmap(ioaddr);

 err_ioremap:
	pci_release_regions(pdev);
	pci_set_drvdata(pdev, NULL);

 err_pci_request_regions:
	free_netdev(dev);
	return i;
}


/* Read the EEPROM and MII Management Data I/O (MDIO) interfaces.
   The EEPROM code is for the common 93c06/46 EEPROMs with 6 bit addresses. */

/* Delay between EEPROM clock transitions.
   No extra delay is needed with 33MHz PCI, but future 66MHz access may need
   a delay. Note that pre-2.0.34 kernels had a cache-alignment bug that
   made udelay() unreliable.
   The old method of using an ISA access as a delay, __SLOW_DOWN_IO__, is
   deprecated.
*/
#define eeprom_delay(ee_addr) readl(ee_addr)

#define EE_Write0 (EE_ChipSelect)
#define EE_Write1 (EE_ChipSelect | EE_DataIn)

/* The EEPROM commands include the always-set leading bit. */
enum EEPROM_Cmds {
	EE_WriteCmd=(5 << 6), EE_ReadCmd=(6 << 6), EE_EraseCmd=(7 << 6),
};

static int eeprom_read(void __iomem *addr, int location)
{
	int i;
	int retval = 0;
	void __iomem *ee_addr = addr + EECtrl;
	int read_cmd = location | EE_ReadCmd;

	writel(EE_Write0, ee_addr);

	/* Shift the read command bits out. */
	for (i = 10; i >= 0; i--) {
		short dataval = (read_cmd & (1 << i)) ? EE_Write1 : EE_Write0;
		writel(dataval, ee_addr);
		eeprom_delay(ee_addr);
		writel(dataval | EE_ShiftClk, ee_addr);
		eeprom_delay(ee_addr);
	}
	writel(EE_ChipSelect, ee_addr);
	eeprom_delay(ee_addr);

	for (i = 0; i < 16; i++) {
		writel(EE_ChipSelect | EE_ShiftClk, ee_addr);
		eeprom_delay(ee_addr);
		retval |= (readl(ee_addr) & EE_DataOut) ? 1 << i : 0;
		writel(EE_ChipSelect, ee_addr);
		eeprom_delay(ee_addr);
	}

	/* Terminate the EEPROM access. */
	writel(EE_Write0, ee_addr);
	writel(0, ee_addr);
	return retval;
}
1031 1031
1032 /* MII transceiver control section. 1032 /* MII transceiver control section.
1033 * The 83815 series has an internal transceiver, and we present the 1033 * The 83815 series has an internal transceiver, and we present the
1034 * internal management registers as if they were MII connected. 1034 * internal management registers as if they were MII connected.
1035 * External Phy registers are referenced through the MII interface. 1035 * External Phy registers are referenced through the MII interface.
1036 */ 1036 */
1037 1037
1038 /* clock transitions >= 20ns (25MHz) 1038 /* clock transitions >= 20ns (25MHz)
1039 * One readl should be good to PCI @ 100MHz 1039 * One readl should be good to PCI @ 100MHz
1040 */ 1040 */
1041 #define mii_delay(ioaddr) readl(ioaddr + EECtrl) 1041 #define mii_delay(ioaddr) readl(ioaddr + EECtrl)
1042 1042
1043 static int mii_getbit (struct net_device *dev) 1043 static int mii_getbit (struct net_device *dev)
1044 { 1044 {
1045 int data; 1045 int data;
1046 void __iomem *ioaddr = ns_ioaddr(dev); 1046 void __iomem *ioaddr = ns_ioaddr(dev);
1047 1047
1048 writel(MII_ShiftClk, ioaddr + EECtrl); 1048 writel(MII_ShiftClk, ioaddr + EECtrl);
1049 data = readl(ioaddr + EECtrl); 1049 data = readl(ioaddr + EECtrl);
1050 writel(0, ioaddr + EECtrl); 1050 writel(0, ioaddr + EECtrl);
1051 mii_delay(ioaddr); 1051 mii_delay(ioaddr);
1052 return (data & MII_Data)? 1 : 0; 1052 return (data & MII_Data)? 1 : 0;
1053 } 1053 }
1054 1054
static void mii_send_bits (struct net_device *dev, u32 data, int len)
{
	u32 i;
	void __iomem *ioaddr = ns_ioaddr(dev);

	for (i = (1 << (len-1)); i; i >>= 1)
	{
		u32 mdio_val = MII_Write | ((data & i)? MII_Data : 0);
		writel(mdio_val, ioaddr + EECtrl);
		mii_delay(ioaddr);
		writel(mdio_val | MII_ShiftClk, ioaddr + EECtrl);
		mii_delay(ioaddr);
	}
	writel(0, ioaddr + EECtrl);
	mii_delay(ioaddr);
}

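The loop in mii_send_bits shifts the frame out MSB-first by walking a one-bit mask down from bit len-1. A minimal host-side model of that bit ordering (the `mii_serialize` helper is hypothetical, for illustration only; it records each emitted bit instead of toggling MDIO lines):

```c
#include <assert.h>
#include <stdint.h>

/* Model of mii_send_bits' ordering: emit `len` bits of `data`,
 * MSB first, one byte per bit into out[]. Not part of the driver. */
static void mii_serialize(uint32_t data, int len, uint8_t *out)
{
	int n = 0;
	uint32_t i;

	for (i = (1u << (len - 1)); i; i >>= 1)
		out[n++] = (data & i) ? 1 : 0;
}
```

For example, serializing the 4-bit value 0b1011 yields the bits 1,0,1,1 in wire order.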
static int miiport_read(struct net_device *dev, int phy_id, int reg)
{
	u32 cmd;
	int i;
	u32 retval = 0;

	/* Ensure sync */
	mii_send_bits (dev, 0xffffffff, 32);
	/* ST(2), OP(2), ADDR(5), REG#(5), TA(2), Data(16) total 32 bits */
	/* ST,OP = 0110'b for read operation */
	cmd = (0x06 << 10) | (phy_id << 5) | reg;
	mii_send_bits (dev, cmd, 14);
	/* Turnaround */
	if (mii_getbit (dev))
		return 0;
	/* Read data */
	for (i = 0; i < 16; i++) {
		retval <<= 1;
		retval |= mii_getbit (dev);
	}
	/* End cycle */
	mii_getbit (dev);
	return retval;
}

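The 14-bit read frame built by miiport_read packs ST,OP = 0110b above five PHY-address bits and five register bits. A sketch that reproduces the packing so the field layout can be checked in isolation (the `mii_read_cmd` helper is hypothetical, not a driver function):

```c
#include <assert.h>
#include <stdint.h>

/* Same expression miiport_read uses: ST,OP in bits 13..10,
 * PHY address in bits 9..5, register number in bits 4..0. */
static uint32_t mii_read_cmd(int phy_id, int reg)
{
	return (0x06 << 10) | (phy_id << 5) | reg;
}
```

With phy_id 1 and register 2 this produces 0x1822; the top four bits stay 0110b for any 5-bit address and register.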
static void miiport_write(struct net_device *dev, int phy_id, int reg, u16 data)
{
	u32 cmd;

	/* Ensure sync */
	mii_send_bits (dev, 0xffffffff, 32);
	/* ST(2), OP(2), ADDR(5), REG#(5), TA(2), Data(16) total 32 bits */
	/* ST,OP,AAAAA,RRRRR,TA = 0101xxxxxxxxxx10'b = 0x5002 for write */
	cmd = (0x5002 << 16) | (phy_id << 23) | (reg << 18) | data;
	mii_send_bits (dev, cmd, 32);
	/* End cycle */
	mii_getbit (dev);
}

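The write path ORs the constant 0x5002 (ST=01, OP=01, TA=10 pre-placed) shifted into the top half with the address, register, and data fields; since ST/OP sit in bits 31..28 and TA in bits 17..16, none of the fields overlap. A hypothetical helper mirroring that expression, so the packing can be verified:

```c
#include <assert.h>
#include <stdint.h>

/* Same packing as miiport_write: ST,OP bits 31..28, ADDR bits 27..23,
 * REG bits 22..18, TA bits 17..16, data bits 15..0. Illustrative only. */
static uint32_t mii_write_cmd(int phy_id, int reg, uint16_t data)
{
	return (0x5002u << 16) | ((uint32_t)phy_id << 23) | ((uint32_t)reg << 18) | data;
}
```

For phy_id 3, register 5, data 0xABCD the full frame is 0x5196ABCD, with the turnaround bits reading back as 10b.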
static int mdio_read(struct net_device *dev, int reg)
{
	struct netdev_private *np = netdev_priv(dev);
	void __iomem *ioaddr = ns_ioaddr(dev);

	/* The 83815 series has two ports:
	 * - an internal transceiver
	 * - an external mii bus
	 */
	if (dev->if_port == PORT_TP)
		return readw(ioaddr+BasicControl+(reg<<2));
	else
		return miiport_read(dev, np->phy_addr_external, reg);
}

static void mdio_write(struct net_device *dev, int reg, u16 data)
{
	struct netdev_private *np = netdev_priv(dev);
	void __iomem *ioaddr = ns_ioaddr(dev);

	/* The 83815 series has an internal transceiver; handle separately */
	if (dev->if_port == PORT_TP)
		writew(data, ioaddr+BasicControl+(reg<<2));
	else
		miiport_write(dev, np->phy_addr_external, reg, data);
}

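For the internal transceiver, MII register N is memory-mapped at BasicControl plus a 4-byte stride, hence the `reg<<2`. A sketch of that offset computation; the base value 0x80 is an assumption taken as illustrative (it is not stated in this excerpt):

```c
#include <assert.h>

/* Assumed BasicControl offset for illustration; the driver's real
 * enum value is not shown in this excerpt. */
#define NS_BASIC_CONTROL 0x80

/* MII register N of the internal PHY lives at 32-bit spacing. */
static unsigned int int_phy_reg_offset(int reg)
{
	return NS_BASIC_CONTROL + (reg << 2);
}
```

So register 0 (BMCR) would sit at the base and register 1 one 32-bit slot above it.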
static void init_phy_fixup(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);
	void __iomem *ioaddr = ns_ioaddr(dev);
	int i;
	u32 cfg;
	u16 tmp;

	/* restore stuff lost when power was out */
	tmp = mdio_read(dev, MII_BMCR);
	if (np->autoneg == AUTONEG_ENABLE) {
		/* renegotiate if something changed */
		if ((tmp & BMCR_ANENABLE) == 0
		 || np->advertising != mdio_read(dev, MII_ADVERTISE))
		{
			/* turn on autonegotiation and force negotiation */
			tmp |= (BMCR_ANENABLE | BMCR_ANRESTART);
			mdio_write(dev, MII_ADVERTISE, np->advertising);
		}
	} else {
		/* turn off autonegotiation, set speed and duplex */
		tmp &= ~(BMCR_ANENABLE | BMCR_SPEED100 | BMCR_FULLDPLX);
		if (np->speed == SPEED_100)
			tmp |= BMCR_SPEED100;
		if (np->duplex == DUPLEX_FULL)
			tmp |= BMCR_FULLDPLX;
		/*
		 * Note: there is no good way to inform the link partner
		 * that our capabilities changed. The user has to unplug
		 * and replug the network cable after some changes, e.g.
		 * after switching from 10HD, autoneg off to 100HD,
		 * autoneg off.
		 */
	}
	mdio_write(dev, MII_BMCR, tmp);
	readl(ioaddr + ChipConfig);
	udelay(1);

	/* find out what phy this is */
	np->mii = (mdio_read(dev, MII_PHYSID1) << 16)
		+ mdio_read(dev, MII_PHYSID2);

	/* handle external phys here */
	switch (np->mii) {
	case PHYID_AM79C874:
		/* phy specific configuration for fibre/tp operation */
		tmp = mdio_read(dev, MII_MCTRL);
		tmp &= ~(MII_FX_SEL | MII_EN_SCRM);
		if (dev->if_port == PORT_FIBRE)
			tmp |= MII_FX_SEL;
		else
			tmp |= MII_EN_SCRM;
		mdio_write(dev, MII_MCTRL, tmp);
		break;
	default:
		break;
	}
	cfg = readl(ioaddr + ChipConfig);
	if (cfg & CfgExtPhy)
		return;

	/* On page 78 of the spec, they recommend some settings for "optimum
	   performance" to be done in sequence. These settings optimize some
	   of the 100Mbit autodetection circuitry. They say we only want to
	   do this for rev C of the chip, but engineers at NSC (Bradley
	   Kennedy) recommend always setting them. If you don't, you get
	   errors on some autonegotiations that make the device unusable.

	   It seems that the DSP needs a few usec to reinitialize after
	   the start of the phy. Just retry writing these values until they
	   stick.
	*/
	for (i=0;i<NATSEMI_HW_TIMEOUT;i++) {

		int dspcfg;
		writew(1, ioaddr + PGSEL);
		writew(PMDCSR_VAL, ioaddr + PMDCSR);
		writew(TSTDAT_VAL, ioaddr + TSTDAT);
		np->dspcfg = (np->srr <= SRR_DP83815_C)?
			DSPCFG_VAL : (DSPCFG_COEF | readw(ioaddr + DSPCFG));
		writew(np->dspcfg, ioaddr + DSPCFG);
		writew(SDCFG_VAL, ioaddr + SDCFG);
		writew(0, ioaddr + PGSEL);
		readl(ioaddr + ChipConfig);
		udelay(10);

		writew(1, ioaddr + PGSEL);
		dspcfg = readw(ioaddr + DSPCFG);
		writew(0, ioaddr + PGSEL);
		if (np->dspcfg == dspcfg)
			break;
	}

	if (netif_msg_link(np)) {
		if (i==NATSEMI_HW_TIMEOUT) {
			printk(KERN_INFO
				"%s: DSPCFG mismatch after retrying for %d usec.\n",
				dev->name, i*10);
		} else {
			printk(KERN_INFO
				"%s: DSPCFG accepted after %d usec.\n",
				dev->name, i*10);
		}
	}
	/*
	 * Enable PHY Specific event based interrupts. Link state change
	 * and Auto-Negotiation Completion are among the affected.
	 * Read the intr status to clear it (needed for wake events).
	 */
	readw(ioaddr + MIntrStatus);
	writew(MICRIntEn, ioaddr + MIntrCtrl);
}

static int switch_port_external(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);
	void __iomem *ioaddr = ns_ioaddr(dev);
	u32 cfg;

	cfg = readl(ioaddr + ChipConfig);
	if (cfg & CfgExtPhy)
		return 0;

	if (netif_msg_link(np)) {
		printk(KERN_INFO "%s: switching to external transceiver.\n",
				dev->name);
	}

	/* 1) switch back to external phy */
	writel(cfg | (CfgExtPhy | CfgPhyDis), ioaddr + ChipConfig);
	readl(ioaddr + ChipConfig);
	udelay(1);

	/* 2) reset the external phy: */
	/* resetting the external PHY has been known to cause a hub supplying
	 * power over Ethernet to kill the power. We don't want to kill
	 * power to this computer, so we avoid resetting the phy.
	 */

	/* 3) reinit the phy fixup, it got lost during power down. */
	move_int_phy(dev, np->phy_addr_external);
	init_phy_fixup(dev);

	return 1;
}

static int switch_port_internal(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);
	void __iomem *ioaddr = ns_ioaddr(dev);
	int i;
	u32 cfg;
	u16 bmcr;

	cfg = readl(ioaddr + ChipConfig);
	if (!(cfg & CfgExtPhy))
		return 0;

	if (netif_msg_link(np)) {
		printk(KERN_INFO "%s: switching to internal transceiver.\n",
				dev->name);
	}
	/* 1) switch back to internal phy: */
	cfg = cfg & ~(CfgExtPhy | CfgPhyDis);
	writel(cfg, ioaddr + ChipConfig);
	readl(ioaddr + ChipConfig);
	udelay(1);

	/* 2) reset the internal phy: */
	bmcr = readw(ioaddr+BasicControl+(MII_BMCR<<2));
	writel(bmcr | BMCR_RESET, ioaddr+BasicControl+(MII_BMCR<<2));
	readl(ioaddr + ChipConfig);
	udelay(10);
	for (i=0;i<NATSEMI_HW_TIMEOUT;i++) {
		bmcr = readw(ioaddr+BasicControl+(MII_BMCR<<2));
		if (!(bmcr & BMCR_RESET))
			break;
		udelay(10);
	}
	if (i==NATSEMI_HW_TIMEOUT && netif_msg_link(np)) {
		printk(KERN_INFO
			"%s: phy reset did not complete in %d usec.\n",
			dev->name, i*10);
	}
	/* 3) reinit the phy fixup, it got lost during power down. */
	init_phy_fixup(dev);

	return 1;
}

/* Scan for a PHY on the external mii bus.
 * There are two tricky points:
 * - Do not scan while the internal phy is enabled. The internal phy will
 *   crash: e.g. reads from the DSPCFG register will return odd values and
 *   the nasty random phy reset code will reset the nic every few seconds.
 * - The internal phy must be moved around, an external phy could
 *   have the same address as the internal phy.
 */
static int find_mii(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);
	int tmp;
	int i;
	int did_switch;

	/* Switch to external phy */
	did_switch = switch_port_external(dev);

	/* Scan the possible phy addresses:
	 *
	 * PHY address 0 means that the phy is in isolate mode. Not yet
	 * supported due to lack of test hardware. User space should
	 * handle it through ethtool.
	 */
	for (i = 1; i <= 31; i++) {
		move_int_phy(dev, i);
		tmp = miiport_read(dev, i, MII_BMSR);
		if (tmp != 0xffff && tmp != 0x0000) {
			/* found something! */
			np->mii = (mdio_read(dev, MII_PHYSID1) << 16)
					+ mdio_read(dev, MII_PHYSID2);
			if (netif_msg_probe(np)) {
				printk(KERN_INFO "natsemi %s: found external phy %08x at address %d.\n",
						pci_name(np->pci_dev), np->mii, i);
			}
			break;
		}
	}
	/* And switch back to internal phy: */
	if (did_switch)
		switch_port_internal(dev);
	return i;
}

/* CFG bits [13:16] [18:23] */
#define CFG_RESET_SAVE 0xfde000
/* WCSR bits [0:4] [9:10] */
#define WCSR_RESET_SAVE 0x61f
/* RFCR bits [20] [22] [27:31] */
#define RFCR_RESET_SAVE 0xf8500000

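Each save mask above is exactly the OR of the inclusive bit ranges named in its comment. A small sketch that derives the masks from those ranges, confirming the hex constants (the `bit_range` helper is hypothetical, not driver code):

```c
#include <assert.h>
#include <stdint.h>

/* OR together all bits from lo to hi inclusive. Illustrative only. */
static uint32_t bit_range(int lo, int hi)
{
	uint32_t m = 0;
	int b;

	for (b = lo; b <= hi; b++)
		m |= 1u << b;
	return m;
}
```

Bits [13:16] and [18:23] give 0xfde000, [0:4] and [9:10] give 0x61f, and bits 20, 22, and [27:31] give 0xf8500000, matching the three defines.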
static void natsemi_reset(struct net_device *dev)
{
	int i;
	u32 cfg;
	u32 wcsr;
	u32 rfcr;
	u16 pmatch[3];
	u16 sopass[3];
	struct netdev_private *np = netdev_priv(dev);
	void __iomem *ioaddr = ns_ioaddr(dev);

	/*
	 * Resetting the chip causes some registers to be lost.
	 * Natsemi suggests NOT reloading the EEPROM while live, so instead
	 * we save the state that would have been loaded from EEPROM
	 * on a normal power-up (see the spec EEPROM map). This assumes
	 * whoever calls this will follow up with init_registers() eventually.
	 */

	/* CFG */
	cfg = readl(ioaddr + ChipConfig) & CFG_RESET_SAVE;
	/* WCSR */
	wcsr = readl(ioaddr + WOLCmd) & WCSR_RESET_SAVE;
	/* RFCR */
	rfcr = readl(ioaddr + RxFilterAddr) & RFCR_RESET_SAVE;
	/* PMATCH */
	for (i = 0; i < 3; i++) {
		writel(i*2, ioaddr + RxFilterAddr);
		pmatch[i] = readw(ioaddr + RxFilterData);
	}
	/* SOPAS */
	for (i = 0; i < 3; i++) {
		writel(0xa+(i*2), ioaddr + RxFilterAddr);
		sopass[i] = readw(ioaddr + RxFilterData);
	}

	/* now whack the chip */
	writel(ChipReset, ioaddr + ChipCmd);
	for (i=0;i<NATSEMI_HW_TIMEOUT;i++) {
		if (!(readl(ioaddr + ChipCmd) & ChipReset))
			break;
		udelay(5);
	}
	if (i==NATSEMI_HW_TIMEOUT) {
		printk(KERN_WARNING "%s: reset did not complete in %d usec.\n",
			dev->name, i*5);
	} else if (netif_msg_hw(np)) {
		printk(KERN_DEBUG "%s: reset completed in %d usec.\n",
			dev->name, i*5);
	}

	/* restore CFG */
	cfg |= readl(ioaddr + ChipConfig) & ~CFG_RESET_SAVE;
	/* turn on external phy if it was selected */
	if (dev->if_port == PORT_TP)
		cfg &= ~(CfgExtPhy | CfgPhyDis);
	else
		cfg |= (CfgExtPhy | CfgPhyDis);
	writel(cfg, ioaddr + ChipConfig);
	/* restore WCSR */
	wcsr |= readl(ioaddr + WOLCmd) & ~WCSR_RESET_SAVE;
	writel(wcsr, ioaddr + WOLCmd);
	/* read RFCR */
	rfcr |= readl(ioaddr + RxFilterAddr) & ~RFCR_RESET_SAVE;
	/* restore PMATCH */
	for (i = 0; i < 3; i++) {
		writel(i*2, ioaddr + RxFilterAddr);
		writew(pmatch[i], ioaddr + RxFilterData);
	}
	for (i = 0; i < 3; i++) {
		writel(0xa+(i*2), ioaddr + RxFilterAddr);
		writew(sopass[i], ioaddr + RxFilterData);
	}
	/* restore RFCR */
	writel(rfcr, ioaddr + RxFilterAddr);
}

static void reset_rx(struct net_device *dev)
{
	int i;
	struct netdev_private *np = netdev_priv(dev);
	void __iomem *ioaddr = ns_ioaddr(dev);

	np->intr_status &= ~RxResetDone;

	writel(RxReset, ioaddr + ChipCmd);

	for (i=0;i<NATSEMI_HW_TIMEOUT;i++) {
		np->intr_status |= readl(ioaddr + IntrStatus);
		if (np->intr_status & RxResetDone)
			break;
		udelay(15);
	}
	if (i==NATSEMI_HW_TIMEOUT) {
		printk(KERN_WARNING "%s: RX reset did not complete in %d usec.\n",
			dev->name, i*15);
	} else if (netif_msg_hw(np)) {
		printk(KERN_WARNING "%s: RX reset took %d usec.\n",
			dev->name, i*15);
	}
}

static void natsemi_reload_eeprom(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);
	void __iomem *ioaddr = ns_ioaddr(dev);
	int i;

	writel(EepromReload, ioaddr + PCIBusCfg);
	for (i=0;i<NATSEMI_HW_TIMEOUT;i++) {
		udelay(50);
		if (!(readl(ioaddr + PCIBusCfg) & EepromReload))
			break;
	}
	if (i==NATSEMI_HW_TIMEOUT) {
		printk(KERN_WARNING "natsemi %s: EEPROM did not reload in %d usec.\n",
			pci_name(np->pci_dev), i*50);
	} else if (netif_msg_hw(np)) {
		printk(KERN_DEBUG "natsemi %s: EEPROM reloaded in %d usec.\n",
			pci_name(np->pci_dev), i*50);
	}
}

static void natsemi_stop_rxtx(struct net_device *dev)
{
	void __iomem * ioaddr = ns_ioaddr(dev);
	struct netdev_private *np = netdev_priv(dev);
	int i;

	writel(RxOff | TxOff, ioaddr + ChipCmd);
	for(i=0;i< NATSEMI_HW_TIMEOUT;i++) {
		if ((readl(ioaddr + ChipCmd) & (TxOn|RxOn)) == 0)
			break;
		udelay(5);
	}
	if (i==NATSEMI_HW_TIMEOUT) {
		printk(KERN_WARNING "%s: Tx/Rx process did not stop in %d usec.\n",
			dev->name, i*5);
	} else if (netif_msg_hw(np)) {
		printk(KERN_DEBUG "%s: Tx/Rx process stopped in %d usec.\n",
			dev->name, i*5);
	}
}

static int netdev_open(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);
	void __iomem * ioaddr = ns_ioaddr(dev);
	int i;

	/* Reset the chip, just in case. */
	natsemi_reset(dev);

	i = request_irq(dev->irq, &intr_handler, IRQF_SHARED, dev->name, dev);
	if (i) return i;

	if (netif_msg_ifup(np))
		printk(KERN_DEBUG "%s: netdev_open() irq %d.\n",
			dev->name, dev->irq);
	i = alloc_ring(dev);
	if (i < 0) {
		free_irq(dev->irq, dev);
		return i;
	}
	napi_enable(&np->napi);

	init_ring(dev);
	spin_lock_irq(&np->lock);
	init_registers(dev);
	/* now set the MAC address according to dev->dev_addr */
	for (i = 0; i < 3; i++) {
		u16 mac = (dev->dev_addr[2*i+1]<<8) + dev->dev_addr[2*i];

		writel(i*2, ioaddr + RxFilterAddr);
		writew(mac, ioaddr + RxFilterData);
	}
	writel(np->cur_rx_mode, ioaddr + RxFilterAddr);
	spin_unlock_irq(&np->lock);

	netif_start_queue(dev);

	if (netif_msg_ifup(np))
		printk(KERN_DEBUG "%s: Done netdev_open(), status: %#08x.\n",
			dev->name, (int)readl(ioaddr + ChipCmd));

	/* Set the timer to check for link beat. */
	init_timer(&np->timer);
	np->timer.expires = round_jiffies(jiffies + NATSEMI_TIMER_FREQ);
	np->timer.data = (unsigned long)dev;
	np->timer.function = &netdev_timer; /* timer handler */
	add_timer(&np->timer);

	return 0;
}

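The MAC-address loop in netdev_open folds the 6-byte address into three little-endian 16-bit words before writing them through RxFilterData. A standalone sketch of that packing (the `mac_to_words` helper is hypothetical, for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Pack a 6-byte MAC into three 16-bit words, low byte first,
 * exactly as netdev_open's loop does. Not a driver function. */
static void mac_to_words(const uint8_t *addr, uint16_t w[3])
{
	int i;

	for (i = 0; i < 3; i++)
		w[i] = (uint16_t)((addr[2*i+1] << 8) + addr[2*i]);
}
```

For the address 00:0a:1b:2c:3d:4e this yields the words 0x0a00, 0x2c1b, 0x4e3d.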
static void do_cable_magic(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);
	void __iomem *ioaddr = ns_ioaddr(dev);

	if (dev->if_port != PORT_TP)
		return;

	if (np->srr >= SRR_DP83816_A5)
		return;

	/*
	 * 100 MBit links with short cables can trip an issue with the chip.
	 * The problem manifests as lots of CRC errors and/or flickering
	 * activity LED while idle. This process is based on instructions
	 * from engineers at National.
	 */
	if (readl(ioaddr + ChipConfig) & CfgSpeed100) {
		u16 data;

		writew(1, ioaddr + PGSEL);
		/*
		 * coefficient visibility should already be enabled via
		 * DSPCFG | 0x1000
		 */
		data = readw(ioaddr + TSTDAT) & 0xff;
		/*
		 * the value must be negative, and within certain values
		 * (these values all come from National)
		 */
		if (!(data & 0x80) || ((data >= 0xd8) && (data <= 0xff))) {
			np = netdev_priv(dev);

			/* the bug has been triggered - fix the coefficient */
			writew(TSTDAT_FIXED, ioaddr + TSTDAT);
			/* lock the value */
			data = readw(ioaddr + DSPCFG);
			np->dspcfg = data | DSPCFG_LOCK;
			writew(np->dspcfg, ioaddr + DSPCFG);
		}
		writew(0, ioaddr + PGSEL);
1615 } 1615 }
1616 } 1616 }
1617 1617
1618 static void undo_cable_magic(struct net_device *dev)
1619 {
1620	u16 data;
1621	struct netdev_private *np = netdev_priv(dev);
1622	void __iomem *ioaddr = ns_ioaddr(dev);
1623
1624	if (dev->if_port != PORT_TP)
1625		return;
1626
1627	if (np->srr >= SRR_DP83816_A5)
1628		return;
1629
1630	writew(1, ioaddr + PGSEL);
1631	/* make sure the lock bit is clear */
1632	data = readw(ioaddr + DSPCFG);
1633	np->dspcfg = data & ~DSPCFG_LOCK;
1634	writew(np->dspcfg, ioaddr + DSPCFG);
1635	writew(0, ioaddr + PGSEL);
1636 }
1637
1638 static void check_link(struct net_device *dev)
1639 {
1640	struct netdev_private *np = netdev_priv(dev);
1641	void __iomem *ioaddr = ns_ioaddr(dev);
1642	int duplex = np->duplex;
1643	u16 bmsr;
1644
1645	/* If we are ignoring the PHY then don't try reading it. */
1646	if (np->ignore_phy)
1647		goto propagate_state;
1648
1649	/* The link status field is latched: it remains low after a temporary
1650	 * link failure until it's read. We need the current link status,
1651	 * thus read twice.
1652	 */
1653	mdio_read(dev, MII_BMSR);
1654	bmsr = mdio_read(dev, MII_BMSR);
1655
1656	if (!(bmsr & BMSR_LSTATUS)) {
1657		if (netif_carrier_ok(dev)) {
1658			if (netif_msg_link(np))
1659				printk(KERN_NOTICE "%s: link down.\n",
1660					dev->name);
1661			netif_carrier_off(dev);
1662			undo_cable_magic(dev);
1663		}
1664		return;
1665	}
1666	if (!netif_carrier_ok(dev)) {
1667		if (netif_msg_link(np))
1668			printk(KERN_NOTICE "%s: link up.\n", dev->name);
1669		netif_carrier_on(dev);
1670		do_cable_magic(dev);
1671	}
1672
1673	duplex = np->full_duplex;
1674	if (!duplex) {
1675		if (bmsr & BMSR_ANEGCOMPLETE) {
1676			int tmp = mii_nway_result(
1677				np->advertising & mdio_read(dev, MII_LPA));
1678			if (tmp == LPA_100FULL || tmp == LPA_10FULL)
1679				duplex = 1;
1680		} else if (mdio_read(dev, MII_BMCR) & BMCR_FULLDPLX)
1681			duplex = 1;
1682	}
1683
1684 propagate_state:
1685	/* if duplex is set then bit 28 must be set, too */
1686	if (duplex ^ !!(np->rx_config & RxAcceptTx)) {
1687		if (netif_msg_link(np))
1688			printk(KERN_INFO
1689				"%s: Setting %s-duplex based on negotiated "
1690				"link capability.\n", dev->name,
1691				duplex ? "full" : "half");
1692		if (duplex) {
1693			np->rx_config |= RxAcceptTx;
1694			np->tx_config |= TxCarrierIgn | TxHeartIgn;
1695		} else {
1696			np->rx_config &= ~RxAcceptTx;
1697			np->tx_config &= ~(TxCarrierIgn | TxHeartIgn);
1698		}
1699		writel(np->tx_config, ioaddr + TxConfig);
1700		writel(np->rx_config, ioaddr + RxConfig);
1701	}
1702 }
1703
1704 static void init_registers(struct net_device *dev)
1705 {
1706	struct netdev_private *np = netdev_priv(dev);
1707	void __iomem *ioaddr = ns_ioaddr(dev);
1708
1709	init_phy_fixup(dev);
1710
1711	/* clear any interrupts that are pending, such as wake events */
1712	readl(ioaddr + IntrStatus);
1713
1714	writel(np->ring_dma, ioaddr + RxRingPtr);
1715	writel(np->ring_dma + RX_RING_SIZE * sizeof(struct netdev_desc),
1716		ioaddr + TxRingPtr);
1717
1718	/* Initialize other registers.
1719	 * Configure the PCI bus bursts and FIFO thresholds.
1720	 * Configure for standard, in-spec Ethernet.
1721	 * Start with half-duplex. check_link will update
1722	 * to the correct settings.
1723	 */
1724
1725	/* DRTH: 2: start tx if 64 bytes are in the fifo
1726	 * FLTH: 0x10: refill with next packet if 512 bytes are free
1727	 * MXDMA: 0: up to 256 byte bursts.
1728	 * MXDMA must be <= FLTH
1729	 * ECRETRY=1
1730	 * ATP=1
1731	 */
1732	np->tx_config = TxAutoPad | TxCollRetry | TxMxdma_256 |
1733		TX_FLTH_VAL | TX_DRTH_VAL_START;
1734	writel(np->tx_config, ioaddr + TxConfig);
1735
1736	/* DRTH 0x10: start copying to memory if 128 bytes are in the fifo
1737	 * MXDMA 0: up to 256 byte bursts
1738	 */
1739	np->rx_config = RxMxdma_256 | RX_DRTH_VAL;
1740	/* if receive ring now has bigger buffers than normal, enable jumbo */
1741	if (np->rx_buf_sz > NATSEMI_LONGPKT)
1742		np->rx_config |= RxAcceptLong;
1743
1744	writel(np->rx_config, ioaddr + RxConfig);
1745
1746	/* Disable PME:
1747	 * The PME bit is initialized from the EEPROM contents.
1748	 * PCI cards probably have PME disabled, but motherboard
1749	 * implementations may have PME set to enable WakeOnLan.
1750	 * With PME set the chip will scan incoming packets but
1751	 * nothing will be written to memory. */
1752	np->SavedClkRun = readl(ioaddr + ClkRun);
1753	writel(np->SavedClkRun & ~PMEEnable, ioaddr + ClkRun);
1754	if (np->SavedClkRun & PMEStatus && netif_msg_wol(np)) {
1755		printk(KERN_NOTICE "%s: Wake-up event %#08x\n",
1756			dev->name, readl(ioaddr + WOLCmd));
1757	}
1758
1759	check_link(dev);
1760	__set_rx_mode(dev);
1761
1762	/* Enable interrupts by setting the interrupt mask. */
1763	writel(DEFAULT_INTR, ioaddr + IntrMask);
1764	natsemi_irq_enable(dev);
1765
1766	writel(RxOn | TxOn, ioaddr + ChipCmd);
1767	writel(StatsClear, ioaddr + StatsCtrl); /* Clear Stats */
1768 }
1769
1770 /*
1771  * netdev_timer:
1772  * Purpose:
1773  * 1) check for link changes. Usually they are handled by the MII interrupt
1774  *    but it doesn't hurt to check twice.
1775  * 2) check for sudden death of the NIC:
1776  *    It seems that a reference set for this chip went out with incorrect info,
1777  *    and there exist boards that aren't quite right. An unexpected voltage
1778  *    drop can cause the PHY to get itself in a weird state (basically reset).
1779  *    NOTE: this only seems to affect revC chips. The user can disable
1780  *    this check via the dspcfg_workaround sysfs option.
1781  * 3) check for death of the RX path due to OOM
1782  */
1783 static void netdev_timer(unsigned long data)
1784 {
1785	struct net_device *dev = (struct net_device *)data;
1786	struct netdev_private *np = netdev_priv(dev);
1787	void __iomem *ioaddr = ns_ioaddr(dev);
1788	int next_tick = NATSEMI_TIMER_FREQ;
1789
1790	if (netif_msg_timer(np)) {
1791		/* DO NOT read the IntrStatus register,
1792		 * a read clears any pending interrupts.
1793		 */
1794		printk(KERN_DEBUG "%s: Media selection timer tick.\n",
1795			dev->name);
1796	}
1797
1798	if (dev->if_port == PORT_TP) {
1799		u16 dspcfg;
1800
1801		spin_lock_irq(&np->lock);
1802		/* check for a nasty random phy-reset - use dspcfg as a flag */
1803		writew(1, ioaddr+PGSEL);
1804		dspcfg = readw(ioaddr+DSPCFG);
1805		writew(0, ioaddr+PGSEL);
1806		if (np->dspcfg_workaround && dspcfg != np->dspcfg) {
1807			if (!netif_queue_stopped(dev)) {
1808				spin_unlock_irq(&np->lock);
1809				if (netif_msg_drv(np))
1810					printk(KERN_NOTICE "%s: possible phy reset: "
1811						"re-initializing\n", dev->name);
1812				disable_irq(dev->irq);
1813				spin_lock_irq(&np->lock);
1814				natsemi_stop_rxtx(dev);
1815				dump_ring(dev);
1816				reinit_ring(dev);
1817				init_registers(dev);
1818				spin_unlock_irq(&np->lock);
1819				enable_irq(dev->irq);
1820			} else {
1821				/* hurry back */
1822				next_tick = HZ;
1823				spin_unlock_irq(&np->lock);
1824			}
1825		} else {
1826			/* init_registers() calls check_link() for the above case */
1827			check_link(dev);
1828			spin_unlock_irq(&np->lock);
1829		}
1830	} else {
1831		spin_lock_irq(&np->lock);
1832		check_link(dev);
1833		spin_unlock_irq(&np->lock);
1834	}
1835	if (np->oom) {
1836		disable_irq(dev->irq);
1837		np->oom = 0;
1838		refill_rx(dev);
1839		enable_irq(dev->irq);
1840		if (!np->oom) {
1841			writel(RxOn, ioaddr + ChipCmd);
1842		} else {
1843			next_tick = 1;
1844		}
1845	}
1846
1847	if (next_tick > 1)
1848		mod_timer(&np->timer, round_jiffies(jiffies + next_tick));
1849	else
1850		mod_timer(&np->timer, jiffies + next_tick);
1851 }
1852
1853 static void dump_ring(struct net_device *dev)
1854 {
1855	struct netdev_private *np = netdev_priv(dev);
1856
1857	if (netif_msg_pktdata(np)) {
1858		int i;
1859		printk(KERN_DEBUG " Tx ring at %p:\n", np->tx_ring);
1860		for (i = 0; i < TX_RING_SIZE; i++) {
1861			printk(KERN_DEBUG " #%d desc. %#08x %#08x %#08x.\n",
1862				i, np->tx_ring[i].next_desc,
1863				np->tx_ring[i].cmd_status,
1864				np->tx_ring[i].addr);
1865		}
1866		printk(KERN_DEBUG " Rx ring %p:\n", np->rx_ring);
1867		for (i = 0; i < RX_RING_SIZE; i++) {
1868			printk(KERN_DEBUG " #%d desc. %#08x %#08x %#08x.\n",
1869				i, np->rx_ring[i].next_desc,
1870				np->rx_ring[i].cmd_status,
1871				np->rx_ring[i].addr);
1872		}
1873	}
1874 }
1875
1876 static void tx_timeout(struct net_device *dev)
1877 {
1878	struct netdev_private *np = netdev_priv(dev);
1879	void __iomem *ioaddr = ns_ioaddr(dev);
1880
1881	disable_irq(dev->irq);
1882	spin_lock_irq(&np->lock);
1883	if (!np->hands_off) {
1884		if (netif_msg_tx_err(np))
1885			printk(KERN_WARNING
1886				"%s: Transmit timed out, status %#08x,"
1887				" resetting...\n",
1888				dev->name, readl(ioaddr + IntrStatus));
1889		dump_ring(dev);
1890
1891		natsemi_reset(dev);
1892		reinit_ring(dev);
1893		init_registers(dev);
1894	} else {
1895		printk(KERN_WARNING
1896			"%s: tx_timeout while in hands_off state?\n",
1897			dev->name);
1898	}
1899	spin_unlock_irq(&np->lock);
1900	enable_irq(dev->irq);
1901
1902	dev->trans_start = jiffies;
1903	np->stats.tx_errors++;
1904	netif_wake_queue(dev);
1905 }
1906
1907 static int alloc_ring(struct net_device *dev)
1908 {
1909	struct netdev_private *np = netdev_priv(dev);
1910	np->rx_ring = pci_alloc_consistent(np->pci_dev,
1911		sizeof(struct netdev_desc) * (RX_RING_SIZE+TX_RING_SIZE),
1912		&np->ring_dma);
1913	if (!np->rx_ring)
1914		return -ENOMEM;
1915	np->tx_ring = &np->rx_ring[RX_RING_SIZE];
1916	return 0;
1917 }
1918
1919 static void refill_rx(struct net_device *dev)
1920 {
1921	struct netdev_private *np = netdev_priv(dev);
1922
1923	/* Refill the Rx ring buffers. */
1924	for (; np->cur_rx - np->dirty_rx > 0; np->dirty_rx++) {
1925		struct sk_buff *skb;
1926		int entry = np->dirty_rx % RX_RING_SIZE;
1927		if (np->rx_skbuff[entry] == NULL) {
1928			unsigned int buflen = np->rx_buf_sz+NATSEMI_PADDING;
1929			skb = dev_alloc_skb(buflen);
1930			np->rx_skbuff[entry] = skb;
1931			if (skb == NULL)
1932				break; /* Better luck next round. */
1933			skb->dev = dev; /* Mark as being used by this device. */
1934			np->rx_dma[entry] = pci_map_single(np->pci_dev,
1935				skb->data, buflen, PCI_DMA_FROMDEVICE);
1936			np->rx_ring[entry].addr = cpu_to_le32(np->rx_dma[entry]);
1937		}
1938		np->rx_ring[entry].cmd_status = cpu_to_le32(np->rx_buf_sz);
1939	}
1940	if (np->cur_rx - np->dirty_rx == RX_RING_SIZE) {
1941		if (netif_msg_rx_err(np))
1942			printk(KERN_WARNING "%s: going OOM.\n", dev->name);
1943		np->oom = 1;
1944	}
1945 }
1946
1947 static void set_bufsize(struct net_device *dev)
1948 {
1949	struct netdev_private *np = netdev_priv(dev);
1950	if (dev->mtu <= ETH_DATA_LEN)
1951		np->rx_buf_sz = ETH_DATA_LEN + NATSEMI_HEADERS;
1952	else
1953		np->rx_buf_sz = dev->mtu + NATSEMI_HEADERS;
1954 }
1955
1956 /* Initialize the Rx and Tx rings, along with various 'dev' bits. */
1957 static void init_ring(struct net_device *dev)
1958 {
1959	struct netdev_private *np = netdev_priv(dev);
1960	int i;
1961
1962	/* 1) TX ring */
1963	np->dirty_tx = np->cur_tx = 0;
1964	for (i = 0; i < TX_RING_SIZE; i++) {
1965		np->tx_skbuff[i] = NULL;
1966		np->tx_ring[i].next_desc = cpu_to_le32(np->ring_dma
1967			+sizeof(struct netdev_desc)
1968			*((i+1)%TX_RING_SIZE+RX_RING_SIZE));
1969		np->tx_ring[i].cmd_status = 0;
1970	}
1971
1972	/* 2) RX ring */
1973	np->dirty_rx = 0;
1974	np->cur_rx = RX_RING_SIZE;
1975	np->oom = 0;
1976	set_bufsize(dev);
1977
1978	np->rx_head_desc = &np->rx_ring[0];
1979
1980	/* Please be careful before changing this loop - at least gcc-2.95.1
1981	 * miscompiles it otherwise.
1982	 */
1983	/* Initialize all Rx descriptors. */
1984	for (i = 0; i < RX_RING_SIZE; i++) {
1985		np->rx_ring[i].next_desc = cpu_to_le32(np->ring_dma
1986			+sizeof(struct netdev_desc)
1987			*((i+1)%RX_RING_SIZE));
1988		np->rx_ring[i].cmd_status = cpu_to_le32(DescOwn);
1989		np->rx_skbuff[i] = NULL;
1990	}
1991	refill_rx(dev);
1992	dump_ring(dev);
1993 }
1994
1995 static void drain_tx(struct net_device *dev)
1996 {
1997	struct netdev_private *np = netdev_priv(dev);
1998	int i;
1999
2000	for (i = 0; i < TX_RING_SIZE; i++) {
2001		if (np->tx_skbuff[i]) {
2002			pci_unmap_single(np->pci_dev,
2003				np->tx_dma[i], np->tx_skbuff[i]->len,
2004				PCI_DMA_TODEVICE);
2005			dev_kfree_skb(np->tx_skbuff[i]);
2006			np->stats.tx_dropped++;
2007		}
2008		np->tx_skbuff[i] = NULL;
2009	}
2010 }
2011
2012 static void drain_rx(struct net_device *dev)
2013 {
2014	struct netdev_private *np = netdev_priv(dev);
2015	unsigned int buflen = np->rx_buf_sz;
2016	int i;
2017
2018	/* Free all the skbuffs in the Rx queue. */
2019	for (i = 0; i < RX_RING_SIZE; i++) {
2020		np->rx_ring[i].cmd_status = 0;
2021		np->rx_ring[i].addr = 0xBADF00D0; /* An invalid address. */
2021		np->rx_ring[i].addr = cpu_to_le32(0xBADF00D0); /* An invalid address. */
2022		if (np->rx_skbuff[i]) {
2023			pci_unmap_single(np->pci_dev,
2024				np->rx_dma[i], buflen,
2025				PCI_DMA_FROMDEVICE);
2026			dev_kfree_skb(np->rx_skbuff[i]);
2027		}
2028		np->rx_skbuff[i] = NULL;
2029	}
2030 }
2031
2032 static void drain_ring(struct net_device *dev)
2033 {
2034	drain_rx(dev);
2035	drain_tx(dev);
2036 }
2037
2038 static void free_ring(struct net_device *dev)
2039 {
2040	struct netdev_private *np = netdev_priv(dev);
2041	pci_free_consistent(np->pci_dev,
2042		sizeof(struct netdev_desc) * (RX_RING_SIZE+TX_RING_SIZE),
2043		np->rx_ring, np->ring_dma);
2044 }
2045
2046 static void reinit_rx(struct net_device *dev)
2047 {
2048	struct netdev_private *np = netdev_priv(dev);
2049	int i;
2050
2051	/* RX Ring */
2052	np->dirty_rx = 0;
2053	np->cur_rx = RX_RING_SIZE;
2054	np->rx_head_desc = &np->rx_ring[0];
2055	/* Initialize all Rx descriptors. */
2056	for (i = 0; i < RX_RING_SIZE; i++)
2057		np->rx_ring[i].cmd_status = cpu_to_le32(DescOwn);
2058
2059	refill_rx(dev);
2060 }
2061
2062 static void reinit_ring(struct net_device *dev)
2063 {
2064	struct netdev_private *np = netdev_priv(dev);
2065	int i;
2066
2067	/* drain TX ring */
2068	drain_tx(dev);
2069	np->dirty_tx = np->cur_tx = 0;
2070	for (i = 0; i < TX_RING_SIZE; i++)
2071		np->tx_ring[i].cmd_status = 0;
2072
2073	reinit_rx(dev);
2074 }
2075
2076 static int start_tx(struct sk_buff *skb, struct net_device *dev)
2077 {
2078	struct netdev_private *np = netdev_priv(dev);
2079	void __iomem *ioaddr = ns_ioaddr(dev);
2080	unsigned entry;
2081	unsigned long flags;
2082
2083	/* Note: Ordering is important here, set the field with the
2084	   "ownership" bit last, and only then increment cur_tx. */
2085
2086	/* Calculate the next Tx descriptor entry. */
2087	entry = np->cur_tx % TX_RING_SIZE;
2088
2089	np->tx_skbuff[entry] = skb;
2090	np->tx_dma[entry] = pci_map_single(np->pci_dev,
2091		skb->data, skb->len, PCI_DMA_TODEVICE);
2092
2093	np->tx_ring[entry].addr = cpu_to_le32(np->tx_dma[entry]);
2094
2095	spin_lock_irqsave(&np->lock, flags);
2096
2097	if (!np->hands_off) {
2098		np->tx_ring[entry].cmd_status = cpu_to_le32(DescOwn | skb->len);
2099		/* StrongARM: Explicitly cache flush np->tx_ring and
2100		 * skb->data,skb->len. */
2101		wmb();
2102		np->cur_tx++;
2103		if (np->cur_tx - np->dirty_tx >= TX_QUEUE_LEN - 1) {
2104			netdev_tx_done(dev);
2105			if (np->cur_tx - np->dirty_tx >= TX_QUEUE_LEN - 1)
2106				netif_stop_queue(dev);
2107		}
2108		/* Wake the potentially-idle transmit channel. */
2109		writel(TxOn, ioaddr + ChipCmd);
2110	} else {
2111		dev_kfree_skb_irq(skb);
2112		np->stats.tx_dropped++;
2113	}
2114	spin_unlock_irqrestore(&np->lock, flags);
2115
2116	dev->trans_start = jiffies;
2117
2118	if (netif_msg_tx_queued(np)) {
2119		printk(KERN_DEBUG "%s: Transmit frame #%d queued in slot %d.\n",
2120			dev->name, np->cur_tx, entry);
2121	}
2122	return 0;
2123 }
2124
static void netdev_tx_done(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);

	for (; np->cur_tx - np->dirty_tx > 0; np->dirty_tx++) {
		int entry = np->dirty_tx % TX_RING_SIZE;
		if (np->tx_ring[entry].cmd_status & cpu_to_le32(DescOwn))
			break;
		if (netif_msg_tx_done(np))
			printk(KERN_DEBUG
				"%s: tx frame #%d finished, status %#08x.\n",
				dev->name, np->dirty_tx,
				le32_to_cpu(np->tx_ring[entry].cmd_status));
		if (np->tx_ring[entry].cmd_status & cpu_to_le32(DescPktOK)) {
			np->stats.tx_packets++;
			np->stats.tx_bytes += np->tx_skbuff[entry]->len;
		} else { /* Various Tx errors */
			int tx_status =
				le32_to_cpu(np->tx_ring[entry].cmd_status);
			if (tx_status & (DescTxAbort|DescTxExcColl))
				np->stats.tx_aborted_errors++;
			if (tx_status & DescTxFIFO)
				np->stats.tx_fifo_errors++;
			if (tx_status & DescTxCarrier)
				np->stats.tx_carrier_errors++;
			if (tx_status & DescTxOOWCol)
				np->stats.tx_window_errors++;
			np->stats.tx_errors++;
		}
		pci_unmap_single(np->pci_dev,np->tx_dma[entry],
					np->tx_skbuff[entry]->len,
					PCI_DMA_TODEVICE);
		/* Free the original skb. */
		dev_kfree_skb_irq(np->tx_skbuff[entry]);
		np->tx_skbuff[entry] = NULL;
	}
	if (netif_queue_stopped(dev)
		&& np->cur_tx - np->dirty_tx < TX_QUEUE_LEN - 4) {
		/* The ring is no longer full, wake queue. */
		netif_wake_queue(dev);
	}
}

/* The interrupt handler doesn't actually handle interrupts itself, it
 * schedules a NAPI poll if there is anything to do. */
static irqreturn_t intr_handler(int irq, void *dev_instance)
{
	struct net_device *dev = dev_instance;
	struct netdev_private *np = netdev_priv(dev);
	void __iomem * ioaddr = ns_ioaddr(dev);

	/* Reading IntrStatus automatically acknowledges so don't do
	 * that while interrupts are disabled, (for example, while a
	 * poll is scheduled). */
	if (np->hands_off || !readl(ioaddr + IntrEnable))
		return IRQ_NONE;

	np->intr_status = readl(ioaddr + IntrStatus);

	if (!np->intr_status)
		return IRQ_NONE;

	if (netif_msg_intr(np))
		printk(KERN_DEBUG
		       "%s: Interrupt, status %#08x, mask %#08x.\n",
		       dev->name, np->intr_status,
		       readl(ioaddr + IntrMask));

	prefetch(&np->rx_skbuff[np->cur_rx % RX_RING_SIZE]);

	if (netif_rx_schedule_prep(dev, &np->napi)) {
		/* Disable interrupts and register for poll */
		natsemi_irq_disable(dev);
		__netif_rx_schedule(dev, &np->napi);
	} else
		printk(KERN_WARNING
		       "%s: Ignoring interrupt, status %#08x, mask %#08x.\n",
		       dev->name, np->intr_status,
		       readl(ioaddr + IntrMask));

	return IRQ_HANDLED;
}

/* This is the NAPI poll routine.  As well as the standard RX handling
 * it also handles all other interrupts that the chip might raise.
 */
static int natsemi_poll(struct napi_struct *napi, int budget)
{
	struct netdev_private *np = container_of(napi, struct netdev_private, napi);
	struct net_device *dev = np->dev;
	void __iomem * ioaddr = ns_ioaddr(dev);
	int work_done = 0;

	do {
		if (netif_msg_intr(np))
			printk(KERN_DEBUG
			       "%s: Poll, status %#08x, mask %#08x.\n",
			       dev->name, np->intr_status,
			       readl(ioaddr + IntrMask));

		/* netdev_rx() may read IntrStatus again if the RX state
		 * machine falls over so do it first. */
		if (np->intr_status &
		    (IntrRxDone | IntrRxIntr | RxStatusFIFOOver |
		     IntrRxErr | IntrRxOverrun)) {
			netdev_rx(dev, &work_done, budget);
		}

		if (np->intr_status &
		    (IntrTxDone | IntrTxIntr | IntrTxIdle | IntrTxErr)) {
			spin_lock(&np->lock);
			netdev_tx_done(dev);
			spin_unlock(&np->lock);
		}

		/* Abnormal error summary/uncommon events handlers. */
		if (np->intr_status & IntrAbnormalSummary)
			netdev_error(dev, np->intr_status);

		if (work_done >= budget)
			return work_done;

		np->intr_status = readl(ioaddr + IntrStatus);
	} while (np->intr_status);

	netif_rx_complete(dev, napi);

	/* Reenable interrupts providing nothing is trying to shut
	 * the chip down. */
	spin_lock(&np->lock);
	if (!np->hands_off)
		natsemi_irq_enable(dev);
	spin_unlock(&np->lock);

	return work_done;
}

/* This routine is logically part of the interrupt handler, but separated
   for clarity and better register allocation. */
static void netdev_rx(struct net_device *dev, int *work_done, int work_to_do)
{
	struct netdev_private *np = netdev_priv(dev);
	int entry = np->cur_rx % RX_RING_SIZE;
	int boguscnt = np->dirty_rx + RX_RING_SIZE - np->cur_rx;
	s32 desc_status = le32_to_cpu(np->rx_head_desc->cmd_status);
	unsigned int buflen = np->rx_buf_sz;
	void __iomem * ioaddr = ns_ioaddr(dev);

	/* If the driver owns the next entry it's a new packet. Send it up. */
	while (desc_status < 0) { /* e.g. & DescOwn */
		int pkt_len;
		if (netif_msg_rx_status(np))
			printk(KERN_DEBUG
				"  netdev_rx() entry %d status was %#08x.\n",
				entry, desc_status);
		if (--boguscnt < 0)
			break;

		if (*work_done >= work_to_do)
			break;

		(*work_done)++;

		pkt_len = (desc_status & DescSizeMask) - 4;
		if ((desc_status&(DescMore|DescPktOK|DescRxLong)) != DescPktOK){
			if (desc_status & DescMore) {
				unsigned long flags;

				if (netif_msg_rx_err(np))
					printk(KERN_WARNING
						"%s: Oversized(?) Ethernet "
						"frame spanned multiple "
						"buffers, entry %#08x "
						"status %#08x.\n", dev->name,
						np->cur_rx, desc_status);
				np->stats.rx_length_errors++;

				/* The RX state machine has probably
				 * locked up beneath us.  Follow the
				 * reset procedure documented in
				 * AN-1287. */

				spin_lock_irqsave(&np->lock, flags);
				reset_rx(dev);
				reinit_rx(dev);
				writel(np->ring_dma, ioaddr + RxRingPtr);
				check_link(dev);
				spin_unlock_irqrestore(&np->lock, flags);

				/* We'll enable RX on exit from this
				 * function. */
				break;

			} else {
				/* There was an error. */
				np->stats.rx_errors++;
				if (desc_status & (DescRxAbort|DescRxOver))
					np->stats.rx_over_errors++;
				if (desc_status & (DescRxLong|DescRxRunt))
					np->stats.rx_length_errors++;
				if (desc_status & (DescRxInvalid|DescRxAlign))
					np->stats.rx_frame_errors++;
				if (desc_status & DescRxCRC)
					np->stats.rx_crc_errors++;
			}
		} else if (pkt_len > np->rx_buf_sz) {
			/* if this is the tail of a double buffer
			 * packet, we've already counted the error
			 * on the first part.  Ignore the second half.
			 */
		} else {
			struct sk_buff *skb;
			/* Omit CRC size. */
			/* Check if the packet is long enough to accept
			 * without copying to a minimally-sized skbuff. */
			if (pkt_len < rx_copybreak
			    && (skb = dev_alloc_skb(pkt_len + RX_OFFSET)) != NULL) {
				/* 16 byte align the IP header */
				skb_reserve(skb, RX_OFFSET);
				pci_dma_sync_single_for_cpu(np->pci_dev,
					np->rx_dma[entry],
					buflen,
					PCI_DMA_FROMDEVICE);
				skb_copy_to_linear_data(skb,
					np->rx_skbuff[entry]->data, pkt_len);
				skb_put(skb, pkt_len);
				pci_dma_sync_single_for_device(np->pci_dev,
					np->rx_dma[entry],
					buflen,
					PCI_DMA_FROMDEVICE);
			} else {
				pci_unmap_single(np->pci_dev, np->rx_dma[entry],
					buflen, PCI_DMA_FROMDEVICE);
				skb_put(skb = np->rx_skbuff[entry], pkt_len);
				np->rx_skbuff[entry] = NULL;
			}
			skb->protocol = eth_type_trans(skb, dev);
			netif_receive_skb(skb);
			dev->last_rx = jiffies;
			np->stats.rx_packets++;
			np->stats.rx_bytes += pkt_len;
		}
		entry = (++np->cur_rx) % RX_RING_SIZE;
		np->rx_head_desc = &np->rx_ring[entry];
		desc_status = le32_to_cpu(np->rx_head_desc->cmd_status);
	}
	refill_rx(dev);

	/* Restart Rx engine if stopped. */
	if (np->oom)
		mod_timer(&np->timer, jiffies + 1);
	else
		writel(RxOn, ioaddr + ChipCmd);
}

static void netdev_error(struct net_device *dev, int intr_status)
{
	struct netdev_private *np = netdev_priv(dev);
	void __iomem * ioaddr = ns_ioaddr(dev);

	spin_lock(&np->lock);
	if (intr_status & LinkChange) {
		u16 lpa = mdio_read(dev, MII_LPA);
		if (mdio_read(dev, MII_BMCR) & BMCR_ANENABLE
		 && netif_msg_link(np)) {
			printk(KERN_INFO
				"%s: Autonegotiation advertising"
				" %#04x  partner %#04x.\n", dev->name,
				np->advertising, lpa);
		}

		/* read MII int status to clear the flag */
		readw(ioaddr + MIntrStatus);
		check_link(dev);
	}
	if (intr_status & StatsMax) {
		__get_stats(dev);
	}
	if (intr_status & IntrTxUnderrun) {
		if ((np->tx_config & TxDrthMask) < TX_DRTH_VAL_LIMIT) {
			np->tx_config += TX_DRTH_VAL_INC;
			if (netif_msg_tx_err(np))
				printk(KERN_NOTICE
					"%s: increased tx threshold, txcfg %#08x.\n",
					dev->name, np->tx_config);
		} else {
			if (netif_msg_tx_err(np))
				printk(KERN_NOTICE
					"%s: tx underrun with maximum tx threshold, txcfg %#08x.\n",
					dev->name, np->tx_config);
		}
		writel(np->tx_config, ioaddr + TxConfig);
	}
	if (intr_status & WOLPkt && netif_msg_wol(np)) {
		int wol_status = readl(ioaddr + WOLCmd);
		printk(KERN_NOTICE "%s: Link wake-up event %#08x\n",
			dev->name, wol_status);
	}
	if (intr_status & RxStatusFIFOOver) {
		if (netif_msg_rx_err(np) && netif_msg_intr(np)) {
			printk(KERN_NOTICE "%s: Rx status FIFO overrun\n",
				dev->name);
		}
		np->stats.rx_fifo_errors++;
		np->stats.rx_errors++;
	}
	/* Hmmmmm, it's not clear how to recover from PCI faults. */
	if (intr_status & IntrPCIErr) {
		printk(KERN_NOTICE "%s: PCI error %#08x\n", dev->name,
			intr_status & IntrPCIErr);
		np->stats.tx_fifo_errors++;
		np->stats.tx_errors++;
		np->stats.rx_fifo_errors++;
		np->stats.rx_errors++;
	}
	spin_unlock(&np->lock);
}

static void __get_stats(struct net_device *dev)
{
	void __iomem * ioaddr = ns_ioaddr(dev);
	struct netdev_private *np = netdev_priv(dev);

	/* The chip only need report frame silently dropped. */
	np->stats.rx_crc_errors += readl(ioaddr + RxCRCErrs);
	np->stats.rx_missed_errors += readl(ioaddr + RxMissed);
}

static struct net_device_stats *get_stats(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);

	/* The chip only need report frame silently dropped. */
	spin_lock_irq(&np->lock);
	if (netif_running(dev) && !np->hands_off)
		__get_stats(dev);
	spin_unlock_irq(&np->lock);

	return &np->stats;
}

#ifdef CONFIG_NET_POLL_CONTROLLER
static void natsemi_poll_controller(struct net_device *dev)
{
	disable_irq(dev->irq);
	intr_handler(dev->irq, dev);
	enable_irq(dev->irq);
}
#endif

#define HASH_TABLE	0x200
static void __set_rx_mode(struct net_device *dev)
{
	void __iomem * ioaddr = ns_ioaddr(dev);
	struct netdev_private *np = netdev_priv(dev);
	u8 mc_filter[64]; /* Multicast hash filter */
	u32 rx_mode;

	if (dev->flags & IFF_PROMISC) { /* Set promiscuous. */
		rx_mode = RxFilterEnable | AcceptBroadcast
			| AcceptAllMulticast | AcceptAllPhys | AcceptMyPhys;
	} else if ((dev->mc_count > multicast_filter_limit)
	  || (dev->flags & IFF_ALLMULTI)) {
		rx_mode = RxFilterEnable | AcceptBroadcast
			| AcceptAllMulticast | AcceptMyPhys;
	} else {
		struct dev_mc_list *mclist;
		int i;
		memset(mc_filter, 0, sizeof(mc_filter));
		for (i = 0, mclist = dev->mc_list; mclist && i < dev->mc_count;
			 i++, mclist = mclist->next) {
			int b = (ether_crc(ETH_ALEN, mclist->dmi_addr) >> 23) & 0x1ff;
			mc_filter[b/8] |= (1 << (b & 0x07));
		}
		rx_mode = RxFilterEnable | AcceptBroadcast
			| AcceptMulticast | AcceptMyPhys;
		for (i = 0; i < 64; i += 2) {
			writel(HASH_TABLE + i, ioaddr + RxFilterAddr);
			writel((mc_filter[i + 1] << 8) + mc_filter[i],
			       ioaddr + RxFilterData);
		}
	}
	writel(rx_mode, ioaddr + RxFilterAddr);
	np->cur_rx_mode = rx_mode;
}

static int natsemi_change_mtu(struct net_device *dev, int new_mtu)
{
	if (new_mtu < 64 || new_mtu > NATSEMI_RX_LIMIT-NATSEMI_HEADERS)
		return -EINVAL;

	dev->mtu = new_mtu;

	/* synchronized against open : rtnl_lock() held by caller */
	if (netif_running(dev)) {
		struct netdev_private *np = netdev_priv(dev);
		void __iomem * ioaddr = ns_ioaddr(dev);

		disable_irq(dev->irq);
		spin_lock(&np->lock);
		/* stop engines */
		natsemi_stop_rxtx(dev);
		/* drain rx queue */
		drain_rx(dev);
		/* change buffers */
		set_bufsize(dev);
		reinit_rx(dev);
		writel(np->ring_dma, ioaddr + RxRingPtr);
		/* restart engines */
		writel(RxOn | TxOn, ioaddr + ChipCmd);
		spin_unlock(&np->lock);
		enable_irq(dev->irq);
	}
	return 0;
}

static void set_rx_mode(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);
	spin_lock_irq(&np->lock);
	if (!np->hands_off)
		__set_rx_mode(dev);
	spin_unlock_irq(&np->lock);
}

static void get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
{
	struct netdev_private *np = netdev_priv(dev);
	strncpy(info->driver, DRV_NAME, ETHTOOL_BUSINFO_LEN);
	strncpy(info->version, DRV_VERSION, ETHTOOL_BUSINFO_LEN);
	strncpy(info->bus_info, pci_name(np->pci_dev), ETHTOOL_BUSINFO_LEN);
}

static int get_regs_len(struct net_device *dev)
{
	return NATSEMI_REGS_SIZE;
}

static int get_eeprom_len(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);
	return np->eeprom_size;
}

static int get_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
{
	struct netdev_private *np = netdev_priv(dev);
	spin_lock_irq(&np->lock);
	netdev_get_ecmd(dev, ecmd);
	spin_unlock_irq(&np->lock);
	return 0;
}

static int set_settings(struct net_device *dev, struct ethtool_cmd *ecmd)
{
	struct netdev_private *np = netdev_priv(dev);
	int res;
	spin_lock_irq(&np->lock);
	res = netdev_set_ecmd(dev, ecmd);
	spin_unlock_irq(&np->lock);
	return res;
}

static void get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
{
	struct netdev_private *np = netdev_priv(dev);
	spin_lock_irq(&np->lock);
	netdev_get_wol(dev, &wol->supported, &wol->wolopts);
	netdev_get_sopass(dev, wol->sopass);
	spin_unlock_irq(&np->lock);
}

static int set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
{
	struct netdev_private *np = netdev_priv(dev);
	int res;
	spin_lock_irq(&np->lock);
	netdev_set_wol(dev, wol->wolopts);
	res = netdev_set_sopass(dev, wol->sopass);
	spin_unlock_irq(&np->lock);
	return res;
}

static void get_regs(struct net_device *dev, struct ethtool_regs *regs, void *buf)
{
	struct netdev_private *np = netdev_priv(dev);
	regs->version = NATSEMI_REGS_VER;
	spin_lock_irq(&np->lock);
	netdev_get_regs(dev, buf);
	spin_unlock_irq(&np->lock);
}

static u32 get_msglevel(struct net_device *dev)
{
	struct netdev_private *np = netdev_priv(dev);
	return np->msg_enable;
}

static void set_msglevel(struct net_device *dev, u32 val)
{
	struct netdev_private *np = netdev_priv(dev);
	np->msg_enable = val;
}

static int nway_reset(struct net_device *dev)
{
	int tmp;
	int r = -EINVAL;
	/* if autoneg is off, it's an error */
	tmp = mdio_read(dev, MII_BMCR);
	if (tmp & BMCR_ANENABLE) {
		tmp |= (BMCR_ANRESTART);
		mdio_write(dev, MII_BMCR, tmp);
		r = 0;
	}
	return r;
}

static u32 get_link(struct net_device *dev)
{
	/* LSTATUS is latched low until a read - so read twice */
	mdio_read(dev, MII_BMSR);
	return (mdio_read(dev, MII_BMSR)&BMSR_LSTATUS) ? 1:0;
}

2650 static int get_eeprom(struct net_device *dev, struct ethtool_eeprom *eeprom, u8 *data) 2650 static int get_eeprom(struct net_device *dev, struct ethtool_eeprom *eeprom, u8 *data)
2651 { 2651 {
2652 struct netdev_private *np = netdev_priv(dev); 2652 struct netdev_private *np = netdev_priv(dev);
2653 u8 *eebuf; 2653 u8 *eebuf;
2654 int res; 2654 int res;
2655 2655
2656 eebuf = kmalloc(np->eeprom_size, GFP_KERNEL); 2656 eebuf = kmalloc(np->eeprom_size, GFP_KERNEL);
2657 if (!eebuf) 2657 if (!eebuf)
2658 return -ENOMEM; 2658 return -ENOMEM;
2659 2659
2660 eeprom->magic = PCI_VENDOR_ID_NS | (PCI_DEVICE_ID_NS_83815<<16); 2660 eeprom->magic = PCI_VENDOR_ID_NS | (PCI_DEVICE_ID_NS_83815<<16);
2661 spin_lock_irq(&np->lock); 2661 spin_lock_irq(&np->lock);
2662 res = netdev_get_eeprom(dev, eebuf); 2662 res = netdev_get_eeprom(dev, eebuf);
2663 spin_unlock_irq(&np->lock); 2663 spin_unlock_irq(&np->lock);
2664 if (!res) 2664 if (!res)
2665 memcpy(data, eebuf+eeprom->offset, eeprom->len); 2665 memcpy(data, eebuf+eeprom->offset, eeprom->len);
2666 kfree(eebuf); 2666 kfree(eebuf);
2667 return res; 2667 return res;
2668 } 2668 }
2669 2669
2670 static const struct ethtool_ops ethtool_ops = { 2670 static const struct ethtool_ops ethtool_ops = {
2671 .get_drvinfo = get_drvinfo, 2671 .get_drvinfo = get_drvinfo,
2672 .get_regs_len = get_regs_len, 2672 .get_regs_len = get_regs_len,
2673 .get_eeprom_len = get_eeprom_len, 2673 .get_eeprom_len = get_eeprom_len,
2674 .get_settings = get_settings, 2674 .get_settings = get_settings,
2675 .set_settings = set_settings, 2675 .set_settings = set_settings,
2676 .get_wol = get_wol, 2676 .get_wol = get_wol,
2677 .set_wol = set_wol, 2677 .set_wol = set_wol,
2678 .get_regs = get_regs, 2678 .get_regs = get_regs,
2679 .get_msglevel = get_msglevel, 2679 .get_msglevel = get_msglevel,
2680 .set_msglevel = set_msglevel, 2680 .set_msglevel = set_msglevel,
2681 .nway_reset = nway_reset, 2681 .nway_reset = nway_reset,
2682 .get_link = get_link, 2682 .get_link = get_link,
2683 .get_eeprom = get_eeprom, 2683 .get_eeprom = get_eeprom,
2684 }; 2684 };
2685 2685
2686 static int netdev_set_wol(struct net_device *dev, u32 newval) 2686 static int netdev_set_wol(struct net_device *dev, u32 newval)
2687 { 2687 {
2688 struct netdev_private *np = netdev_priv(dev); 2688 struct netdev_private *np = netdev_priv(dev);
2689 void __iomem * ioaddr = ns_ioaddr(dev); 2689 void __iomem * ioaddr = ns_ioaddr(dev);
2690 u32 data = readl(ioaddr + WOLCmd) & ~WakeOptsSummary; 2690 u32 data = readl(ioaddr + WOLCmd) & ~WakeOptsSummary;
2691 2691
2692 /* translate to bitmasks this chip understands */ 2692 /* translate to bitmasks this chip understands */
2693 if (newval & WAKE_PHY) 2693 if (newval & WAKE_PHY)
2694 data |= WakePhy; 2694 data |= WakePhy;
2695 if (newval & WAKE_UCAST) 2695 if (newval & WAKE_UCAST)
2696 data |= WakeUnicast; 2696 data |= WakeUnicast;
2697 if (newval & WAKE_MCAST) 2697 if (newval & WAKE_MCAST)
2698 data |= WakeMulticast; 2698 data |= WakeMulticast;
2699 if (newval & WAKE_BCAST) 2699 if (newval & WAKE_BCAST)
2700 data |= WakeBroadcast; 2700 data |= WakeBroadcast;
2701 if (newval & WAKE_ARP) 2701 if (newval & WAKE_ARP)
2702 data |= WakeArp; 2702 data |= WakeArp;
2703 if (newval & WAKE_MAGIC) 2703 if (newval & WAKE_MAGIC)
2704 data |= WakeMagic; 2704 data |= WakeMagic;
2705 if (np->srr >= SRR_DP83815_D) { 2705 if (np->srr >= SRR_DP83815_D) {
2706 if (newval & WAKE_MAGICSECURE) { 2706 if (newval & WAKE_MAGICSECURE) {
2707 data |= WakeMagicSecure; 2707 data |= WakeMagicSecure;
2708 } 2708 }
2709 } 2709 }
2710 2710
2711 writel(data, ioaddr + WOLCmd); 2711 writel(data, ioaddr + WOLCmd);
2712 2712
2713 return 0; 2713 return 0;
2714 } 2714 }
2715 2715
2716 static int netdev_get_wol(struct net_device *dev, u32 *supported, u32 *cur) 2716 static int netdev_get_wol(struct net_device *dev, u32 *supported, u32 *cur)
2717 { 2717 {
2718 struct netdev_private *np = netdev_priv(dev); 2718 struct netdev_private *np = netdev_priv(dev);
2719 void __iomem * ioaddr = ns_ioaddr(dev); 2719 void __iomem * ioaddr = ns_ioaddr(dev);
2720 u32 regval = readl(ioaddr + WOLCmd); 2720 u32 regval = readl(ioaddr + WOLCmd);
2721 2721
2722 *supported = (WAKE_PHY | WAKE_UCAST | WAKE_MCAST | WAKE_BCAST 2722 *supported = (WAKE_PHY | WAKE_UCAST | WAKE_MCAST | WAKE_BCAST
2723 | WAKE_ARP | WAKE_MAGIC); 2723 | WAKE_ARP | WAKE_MAGIC);
2724 2724
2725 if (np->srr >= SRR_DP83815_D) { 2725 if (np->srr >= SRR_DP83815_D) {
2726 /* SOPASS works on revD and higher */ 2726 /* SOPASS works on revD and higher */
2727 *supported |= WAKE_MAGICSECURE; 2727 *supported |= WAKE_MAGICSECURE;
2728 } 2728 }
2729 *cur = 0; 2729 *cur = 0;
2730 2730
2731 /* translate from chip bitmasks */ 2731 /* translate from chip bitmasks */
2732 if (regval & WakePhy) 2732 if (regval & WakePhy)
2733 *cur |= WAKE_PHY; 2733 *cur |= WAKE_PHY;
2734 if (regval & WakeUnicast) 2734 if (regval & WakeUnicast)
2735 *cur |= WAKE_UCAST; 2735 *cur |= WAKE_UCAST;
2736 if (regval & WakeMulticast) 2736 if (regval & WakeMulticast)
2737 *cur |= WAKE_MCAST; 2737 *cur |= WAKE_MCAST;
2738 if (regval & WakeBroadcast) 2738 if (regval & WakeBroadcast)
2739 *cur |= WAKE_BCAST; 2739 *cur |= WAKE_BCAST;
2740 if (regval & WakeArp) 2740 if (regval & WakeArp)
2741 *cur |= WAKE_ARP; 2741 *cur |= WAKE_ARP;
2742 if (regval & WakeMagic) 2742 if (regval & WakeMagic)
2743 *cur |= WAKE_MAGIC; 2743 *cur |= WAKE_MAGIC;
2744 if (regval & WakeMagicSecure) { 2744 if (regval & WakeMagicSecure) {
2745 /* this can be on in revC, but it's broken */ 2745 /* this can be on in revC, but it's broken */
2746 *cur |= WAKE_MAGICSECURE; 2746 *cur |= WAKE_MAGICSECURE;
2747 } 2747 }
2748 2748
2749 return 0; 2749 return 0;
2750 } 2750 }
2751 2751
2752 static int netdev_set_sopass(struct net_device *dev, u8 *newval) 2752 static int netdev_set_sopass(struct net_device *dev, u8 *newval)
2753 { 2753 {
2754 struct netdev_private *np = netdev_priv(dev); 2754 struct netdev_private *np = netdev_priv(dev);
2755 void __iomem * ioaddr = ns_ioaddr(dev); 2755 void __iomem * ioaddr = ns_ioaddr(dev);
2756 u16 *sval = (u16 *)newval; 2756 u16 *sval = (u16 *)newval;
2757 u32 addr; 2757 u32 addr;
2758 2758
2759 if (np->srr < SRR_DP83815_D) { 2759 if (np->srr < SRR_DP83815_D) {
2760 return 0; 2760 return 0;
2761 } 2761 }
2762 2762
2763 /* enable writing to these registers by disabling the RX filter */ 2763 /* enable writing to these registers by disabling the RX filter */
2764 addr = readl(ioaddr + RxFilterAddr) & ~RFCRAddressMask; 2764 addr = readl(ioaddr + RxFilterAddr) & ~RFCRAddressMask;
2765 addr &= ~RxFilterEnable; 2765 addr &= ~RxFilterEnable;
2766 writel(addr, ioaddr + RxFilterAddr); 2766 writel(addr, ioaddr + RxFilterAddr);
2767 2767
2768 /* write the three words to (undocumented) RFCR vals 0xa, 0xc, 0xe */ 2768 /* write the three words to (undocumented) RFCR vals 0xa, 0xc, 0xe */
2769 writel(addr | 0xa, ioaddr + RxFilterAddr); 2769 writel(addr | 0xa, ioaddr + RxFilterAddr);
2770 writew(sval[0], ioaddr + RxFilterData); 2770 writew(sval[0], ioaddr + RxFilterData);
2771 2771
2772 writel(addr | 0xc, ioaddr + RxFilterAddr); 2772 writel(addr | 0xc, ioaddr + RxFilterAddr);
2773 writew(sval[1], ioaddr + RxFilterData); 2773 writew(sval[1], ioaddr + RxFilterData);
2774 2774
2775 writel(addr | 0xe, ioaddr + RxFilterAddr); 2775 writel(addr | 0xe, ioaddr + RxFilterAddr);
2776 writew(sval[2], ioaddr + RxFilterData); 2776 writew(sval[2], ioaddr + RxFilterData);
2777 2777
2778 /* re-enable the RX filter */ 2778 /* re-enable the RX filter */
2779 writel(addr | RxFilterEnable, ioaddr + RxFilterAddr); 2779 writel(addr | RxFilterEnable, ioaddr + RxFilterAddr);
2780 2780
2781 return 0; 2781 return 0;
2782 } 2782 }
2783 2783
2784 static int netdev_get_sopass(struct net_device *dev, u8 *data) 2784 static int netdev_get_sopass(struct net_device *dev, u8 *data)
2785 { 2785 {
2786 struct netdev_private *np = netdev_priv(dev); 2786 struct netdev_private *np = netdev_priv(dev);
2787 void __iomem * ioaddr = ns_ioaddr(dev); 2787 void __iomem * ioaddr = ns_ioaddr(dev);
2788 u16 *sval = (u16 *)data; 2788 u16 *sval = (u16 *)data;
2789 u32 addr; 2789 u32 addr;
2790 2790
2791 if (np->srr < SRR_DP83815_D) { 2791 if (np->srr < SRR_DP83815_D) {
2792 sval[0] = sval[1] = sval[2] = 0; 2792 sval[0] = sval[1] = sval[2] = 0;
2793 return 0; 2793 return 0;
2794 } 2794 }
2795 2795
2796 /* read the three words from (undocumented) RFCR vals 0xa, 0xc, 0xe */ 2796 /* read the three words from (undocumented) RFCR vals 0xa, 0xc, 0xe */
2797 addr = readl(ioaddr + RxFilterAddr) & ~RFCRAddressMask; 2797 addr = readl(ioaddr + RxFilterAddr) & ~RFCRAddressMask;
2798 2798
2799 writel(addr | 0xa, ioaddr + RxFilterAddr); 2799 writel(addr | 0xa, ioaddr + RxFilterAddr);
2800 sval[0] = readw(ioaddr + RxFilterData); 2800 sval[0] = readw(ioaddr + RxFilterData);
2801 2801
2802 writel(addr | 0xc, ioaddr + RxFilterAddr); 2802 writel(addr | 0xc, ioaddr + RxFilterAddr);
2803 sval[1] = readw(ioaddr + RxFilterData); 2803 sval[1] = readw(ioaddr + RxFilterData);
2804 2804
2805 writel(addr | 0xe, ioaddr + RxFilterAddr); 2805 writel(addr | 0xe, ioaddr + RxFilterAddr);
2806 sval[2] = readw(ioaddr + RxFilterData); 2806 sval[2] = readw(ioaddr + RxFilterData);
2807 2807
2808 writel(addr, ioaddr + RxFilterAddr); 2808 writel(addr, ioaddr + RxFilterAddr);
2809 2809
2810 return 0; 2810 return 0;
2811 } 2811 }
2812 2812
2813 static int netdev_get_ecmd(struct net_device *dev, struct ethtool_cmd *ecmd) 2813 static int netdev_get_ecmd(struct net_device *dev, struct ethtool_cmd *ecmd)
2814 { 2814 {
2815 struct netdev_private *np = netdev_priv(dev); 2815 struct netdev_private *np = netdev_priv(dev);
2816 u32 tmp; 2816 u32 tmp;
2817 2817
2818 ecmd->port = dev->if_port; 2818 ecmd->port = dev->if_port;
2819 ecmd->speed = np->speed; 2819 ecmd->speed = np->speed;
2820 ecmd->duplex = np->duplex; 2820 ecmd->duplex = np->duplex;
2821 ecmd->autoneg = np->autoneg; 2821 ecmd->autoneg = np->autoneg;
2822 ecmd->advertising = 0; 2822 ecmd->advertising = 0;
2823 if (np->advertising & ADVERTISE_10HALF) 2823 if (np->advertising & ADVERTISE_10HALF)
2824 ecmd->advertising |= ADVERTISED_10baseT_Half; 2824 ecmd->advertising |= ADVERTISED_10baseT_Half;
2825 if (np->advertising & ADVERTISE_10FULL) 2825 if (np->advertising & ADVERTISE_10FULL)
2826 ecmd->advertising |= ADVERTISED_10baseT_Full; 2826 ecmd->advertising |= ADVERTISED_10baseT_Full;
2827 if (np->advertising & ADVERTISE_100HALF) 2827 if (np->advertising & ADVERTISE_100HALF)
2828 ecmd->advertising |= ADVERTISED_100baseT_Half; 2828 ecmd->advertising |= ADVERTISED_100baseT_Half;
2829 if (np->advertising & ADVERTISE_100FULL) 2829 if (np->advertising & ADVERTISE_100FULL)
2830 ecmd->advertising |= ADVERTISED_100baseT_Full; 2830 ecmd->advertising |= ADVERTISED_100baseT_Full;
2831 ecmd->supported = (SUPPORTED_Autoneg | 2831 ecmd->supported = (SUPPORTED_Autoneg |
2832 SUPPORTED_10baseT_Half | SUPPORTED_10baseT_Full | 2832 SUPPORTED_10baseT_Half | SUPPORTED_10baseT_Full |
2833 SUPPORTED_100baseT_Half | SUPPORTED_100baseT_Full | 2833 SUPPORTED_100baseT_Half | SUPPORTED_100baseT_Full |
2834 SUPPORTED_TP | SUPPORTED_MII | SUPPORTED_FIBRE); 2834 SUPPORTED_TP | SUPPORTED_MII | SUPPORTED_FIBRE);
2835 ecmd->phy_address = np->phy_addr_external; 2835 ecmd->phy_address = np->phy_addr_external;
2836 /* 2836 /*
2837 * We intentionally report the phy address of the external 2837 * We intentionally report the phy address of the external
2838 * phy, even if the internal phy is used. This is necessary 2838 * phy, even if the internal phy is used. This is necessary
2839 * to work around a deficiency of the ethtool interface: 2839 * to work around a deficiency of the ethtool interface:
2840 * It's only possible to query the settings of the active 2840 * It's only possible to query the settings of the active
2841 * port. Therefore 2841 * port. Therefore
2842 * # ethtool -s ethX port mii 2842 * # ethtool -s ethX port mii
2843 * actually sends an ioctl to switch to port mii with the 2843 * actually sends an ioctl to switch to port mii with the
2844 * settings that are used for the current active port. 2844 * settings that are used for the current active port.
2845 * If we would report a different phy address in this 2845 * If we would report a different phy address in this
2846 * command, then 2846 * command, then
2847 * # ethtool -s ethX port tp;ethtool -s ethX port mii 2847 * # ethtool -s ethX port tp;ethtool -s ethX port mii
2848 * would unintentionally change the phy address. 2848 * would unintentionally change the phy address.
2849 * 2849 *
2850 * Fortunately the phy address doesn't matter with the 2850 * Fortunately the phy address doesn't matter with the
2851 * internal phy... 2851 * internal phy...
2852 */ 2852 */
2853 2853
2854 /* set information based on active port type */ 2854 /* set information based on active port type */
2855 switch (ecmd->port) { 2855 switch (ecmd->port) {
2856 default: 2856 default:
2857 case PORT_TP: 2857 case PORT_TP:
2858 ecmd->advertising |= ADVERTISED_TP; 2858 ecmd->advertising |= ADVERTISED_TP;
2859 ecmd->transceiver = XCVR_INTERNAL; 2859 ecmd->transceiver = XCVR_INTERNAL;
2860 break; 2860 break;
2861 case PORT_MII: 2861 case PORT_MII:
2862 ecmd->advertising |= ADVERTISED_MII; 2862 ecmd->advertising |= ADVERTISED_MII;
2863 ecmd->transceiver = XCVR_EXTERNAL; 2863 ecmd->transceiver = XCVR_EXTERNAL;
2864 break; 2864 break;
2865 case PORT_FIBRE: 2865 case PORT_FIBRE:
2866 ecmd->advertising |= ADVERTISED_FIBRE; 2866 ecmd->advertising |= ADVERTISED_FIBRE;
2867 ecmd->transceiver = XCVR_EXTERNAL; 2867 ecmd->transceiver = XCVR_EXTERNAL;
2868 break; 2868 break;
2869 } 2869 }
2870 2870
2871 /* if autonegotiation is on, try to return the active speed/duplex */ 2871 /* if autonegotiation is on, try to return the active speed/duplex */
2872 if (ecmd->autoneg == AUTONEG_ENABLE) { 2872 if (ecmd->autoneg == AUTONEG_ENABLE) {
2873 ecmd->advertising |= ADVERTISED_Autoneg; 2873 ecmd->advertising |= ADVERTISED_Autoneg;
2874 tmp = mii_nway_result( 2874 tmp = mii_nway_result(
2875 np->advertising & mdio_read(dev, MII_LPA)); 2875 np->advertising & mdio_read(dev, MII_LPA));
2876 if (tmp == LPA_100FULL || tmp == LPA_100HALF) 2876 if (tmp == LPA_100FULL || tmp == LPA_100HALF)
2877 ecmd->speed = SPEED_100; 2877 ecmd->speed = SPEED_100;
2878 else 2878 else
2879 ecmd->speed = SPEED_10; 2879 ecmd->speed = SPEED_10;
2880 if (tmp == LPA_100FULL || tmp == LPA_10FULL) 2880 if (tmp == LPA_100FULL || tmp == LPA_10FULL)
2881 ecmd->duplex = DUPLEX_FULL; 2881 ecmd->duplex = DUPLEX_FULL;
2882 else 2882 else
2883 ecmd->duplex = DUPLEX_HALF; 2883 ecmd->duplex = DUPLEX_HALF;
2884 } 2884 }
2885 2885
2886 /* ignore maxtxpkt, maxrxpkt for now */ 2886 /* ignore maxtxpkt, maxrxpkt for now */
2887 2887
2888 return 0; 2888 return 0;
2889 } 2889 }
2890 2890
2891 static int netdev_set_ecmd(struct net_device *dev, struct ethtool_cmd *ecmd) 2891 static int netdev_set_ecmd(struct net_device *dev, struct ethtool_cmd *ecmd)
2892 { 2892 {
2893 struct netdev_private *np = netdev_priv(dev); 2893 struct netdev_private *np = netdev_priv(dev);
2894 2894
2895 if (ecmd->port != PORT_TP && ecmd->port != PORT_MII && ecmd->port != PORT_FIBRE) 2895 if (ecmd->port != PORT_TP && ecmd->port != PORT_MII && ecmd->port != PORT_FIBRE)
2896 return -EINVAL; 2896 return -EINVAL;
2897 if (ecmd->transceiver != XCVR_INTERNAL && ecmd->transceiver != XCVR_EXTERNAL) 2897 if (ecmd->transceiver != XCVR_INTERNAL && ecmd->transceiver != XCVR_EXTERNAL)
2898 return -EINVAL; 2898 return -EINVAL;
2899 if (ecmd->autoneg == AUTONEG_ENABLE) { 2899 if (ecmd->autoneg == AUTONEG_ENABLE) {
2900 if ((ecmd->advertising & (ADVERTISED_10baseT_Half | 2900 if ((ecmd->advertising & (ADVERTISED_10baseT_Half |
2901 ADVERTISED_10baseT_Full | 2901 ADVERTISED_10baseT_Full |
2902 ADVERTISED_100baseT_Half | 2902 ADVERTISED_100baseT_Half |
2903 ADVERTISED_100baseT_Full)) == 0) { 2903 ADVERTISED_100baseT_Full)) == 0) {
2904 return -EINVAL; 2904 return -EINVAL;
2905 } 2905 }
2906 } else if (ecmd->autoneg == AUTONEG_DISABLE) { 2906 } else if (ecmd->autoneg == AUTONEG_DISABLE) {
2907 if (ecmd->speed != SPEED_10 && ecmd->speed != SPEED_100) 2907 if (ecmd->speed != SPEED_10 && ecmd->speed != SPEED_100)
2908 return -EINVAL; 2908 return -EINVAL;
2909 if (ecmd->duplex != DUPLEX_HALF && ecmd->duplex != DUPLEX_FULL) 2909 if (ecmd->duplex != DUPLEX_HALF && ecmd->duplex != DUPLEX_FULL)
2910 return -EINVAL; 2910 return -EINVAL;
2911 } else { 2911 } else {
2912 return -EINVAL; 2912 return -EINVAL;
2913 } 2913 }
2914 2914
2915 /* 2915 /*
2916 * If we're ignoring the PHY then autoneg and the internal 2916 * If we're ignoring the PHY then autoneg and the internal
2917 * transciever are really not going to work so don't let the 2917 * transciever are really not going to work so don't let the
2918 * user select them. 2918 * user select them.
2919 */ 2919 */
2920 if (np->ignore_phy && (ecmd->autoneg == AUTONEG_ENABLE || 2920 if (np->ignore_phy && (ecmd->autoneg == AUTONEG_ENABLE ||
2921 ecmd->port == PORT_TP)) 2921 ecmd->port == PORT_TP))
2922 return -EINVAL; 2922 return -EINVAL;
2923 2923
2924 /* 2924 /*
2925 * maxtxpkt, maxrxpkt: ignored for now. 2925 * maxtxpkt, maxrxpkt: ignored for now.
2926 * 2926 *
2927 * transceiver: 2927 * transceiver:
2928 * PORT_TP is always XCVR_INTERNAL, PORT_MII and PORT_FIBRE are always 2928 * PORT_TP is always XCVR_INTERNAL, PORT_MII and PORT_FIBRE are always
2929 * XCVR_EXTERNAL. The implementation thus ignores ecmd->transceiver and 2929 * XCVR_EXTERNAL. The implementation thus ignores ecmd->transceiver and
2930 * selects based on ecmd->port. 2930 * selects based on ecmd->port.
2931 * 2931 *
2932 * Actually PORT_FIBRE is nearly identical to PORT_MII: it's for fibre 2932 * Actually PORT_FIBRE is nearly identical to PORT_MII: it's for fibre
2933 * phys that are connected to the mii bus. It's used to apply fibre 2933 * phys that are connected to the mii bus. It's used to apply fibre
2934 * specific updates. 2934 * specific updates.
2935 */ 2935 */
2936 2936
2937 /* WHEW! now lets bang some bits */ 2937 /* WHEW! now lets bang some bits */
2938 2938
2939 /* save the parms */ 2939 /* save the parms */
2940 dev->if_port = ecmd->port; 2940 dev->if_port = ecmd->port;
2941 np->autoneg = ecmd->autoneg; 2941 np->autoneg = ecmd->autoneg;
2942 np->phy_addr_external = ecmd->phy_address & PhyAddrMask; 2942 np->phy_addr_external = ecmd->phy_address & PhyAddrMask;
2943 if (np->autoneg == AUTONEG_ENABLE) { 2943 if (np->autoneg == AUTONEG_ENABLE) {
2944 /* advertise only what has been requested */ 2944 /* advertise only what has been requested */
2945 np->advertising &= ~(ADVERTISE_ALL | ADVERTISE_100BASE4); 2945 np->advertising &= ~(ADVERTISE_ALL | ADVERTISE_100BASE4);
2946 if (ecmd->advertising & ADVERTISED_10baseT_Half) 2946 if (ecmd->advertising & ADVERTISED_10baseT_Half)
2947 np->advertising |= ADVERTISE_10HALF; 2947 np->advertising |= ADVERTISE_10HALF;
2948 if (ecmd->advertising & ADVERTISED_10baseT_Full) 2948 if (ecmd->advertising & ADVERTISED_10baseT_Full)
2949 np->advertising |= ADVERTISE_10FULL; 2949 np->advertising |= ADVERTISE_10FULL;
2950 if (ecmd->advertising & ADVERTISED_100baseT_Half) 2950 if (ecmd->advertising & ADVERTISED_100baseT_Half)
2951 np->advertising |= ADVERTISE_100HALF; 2951 np->advertising |= ADVERTISE_100HALF;
2952 if (ecmd->advertising & ADVERTISED_100baseT_Full) 2952 if (ecmd->advertising & ADVERTISED_100baseT_Full)
2953 np->advertising |= ADVERTISE_100FULL; 2953 np->advertising |= ADVERTISE_100FULL;
2954 } else { 2954 } else {
2955 np->speed = ecmd->speed; 2955 np->speed = ecmd->speed;
2956 np->duplex = ecmd->duplex; 2956 np->duplex = ecmd->duplex;
2957 /* user overriding the initial full duplex parm? */ 2957 /* user overriding the initial full duplex parm? */
2958 if (np->duplex == DUPLEX_HALF) 2958 if (np->duplex == DUPLEX_HALF)
2959 np->full_duplex = 0; 2959 np->full_duplex = 0;
2960 } 2960 }
2961 2961
2962 /* get the right phy enabled */ 2962 /* get the right phy enabled */
2963 if (ecmd->port == PORT_TP) 2963 if (ecmd->port == PORT_TP)
2964 switch_port_internal(dev); 2964 switch_port_internal(dev);
2965 else 2965 else
2966 switch_port_external(dev); 2966 switch_port_external(dev);
2967 2967
2968 /* set parms and see how this affected our link status */ 2968 /* set parms and see how this affected our link status */
2969 init_phy_fixup(dev); 2969 init_phy_fixup(dev);
2970 check_link(dev); 2970 check_link(dev);
2971 return 0; 2971 return 0;
2972 } 2972 }
2973 2973
2974 static int netdev_get_regs(struct net_device *dev, u8 *buf) 2974 static int netdev_get_regs(struct net_device *dev, u8 *buf)
2975 { 2975 {
2976 int i; 2976 int i;
2977 int j; 2977 int j;
2978 u32 rfcr; 2978 u32 rfcr;
2979 u32 *rbuf = (u32 *)buf; 2979 u32 *rbuf = (u32 *)buf;
2980 void __iomem * ioaddr = ns_ioaddr(dev); 2980 void __iomem * ioaddr = ns_ioaddr(dev);
2981 2981
2982 /* read non-mii page 0 of registers */ 2982 /* read non-mii page 0 of registers */
2983 for (i = 0; i < NATSEMI_PG0_NREGS/2; i++) { 2983 for (i = 0; i < NATSEMI_PG0_NREGS/2; i++) {
2984 rbuf[i] = readl(ioaddr + i*4); 2984 rbuf[i] = readl(ioaddr + i*4);
2985 } 2985 }
2986 2986
2987 /* read current mii registers */ 2987 /* read current mii registers */
2988 for (i = NATSEMI_PG0_NREGS/2; i < NATSEMI_PG0_NREGS; i++) 2988 for (i = NATSEMI_PG0_NREGS/2; i < NATSEMI_PG0_NREGS; i++)
2989 rbuf[i] = mdio_read(dev, i & 0x1f); 2989 rbuf[i] = mdio_read(dev, i & 0x1f);
2990 2990
2991 /* read only the 'magic' registers from page 1 */ 2991 /* read only the 'magic' registers from page 1 */
2992 writew(1, ioaddr + PGSEL); 2992 writew(1, ioaddr + PGSEL);
2993 rbuf[i++] = readw(ioaddr + PMDCSR); 2993 rbuf[i++] = readw(ioaddr + PMDCSR);
2994 rbuf[i++] = readw(ioaddr + TSTDAT); 2994 rbuf[i++] = readw(ioaddr + TSTDAT);
2995 rbuf[i++] = readw(ioaddr + DSPCFG); 2995 rbuf[i++] = readw(ioaddr + DSPCFG);
2996 rbuf[i++] = readw(ioaddr + SDCFG); 2996 rbuf[i++] = readw(ioaddr + SDCFG);
2997 writew(0, ioaddr + PGSEL); 2997 writew(0, ioaddr + PGSEL);
2998 2998
2999 /* read RFCR indexed registers */ 2999 /* read RFCR indexed registers */
3000 rfcr = readl(ioaddr + RxFilterAddr); 3000 rfcr = readl(ioaddr + RxFilterAddr);
3001 for (j = 0; j < NATSEMI_RFDR_NREGS; j++) { 3001 for (j = 0; j < NATSEMI_RFDR_NREGS; j++) {
3002 writel(j*2, ioaddr + RxFilterAddr); 3002 writel(j*2, ioaddr + RxFilterAddr);
3003 rbuf[i++] = readw(ioaddr + RxFilterData); 3003 rbuf[i++] = readw(ioaddr + RxFilterData);
3004 } 3004 }
3005 writel(rfcr, ioaddr + RxFilterAddr); 3005 writel(rfcr, ioaddr + RxFilterAddr);
3006 3006
3007 /* the interrupt status is clear-on-read - see if we missed any */ 3007 /* the interrupt status is clear-on-read - see if we missed any */
3008 if (rbuf[4] & rbuf[5]) { 3008 if (rbuf[4] & rbuf[5]) {
3009 printk(KERN_WARNING 3009 printk(KERN_WARNING
3010 "%s: shoot, we dropped an interrupt (%#08x)\n", 3010 "%s: shoot, we dropped an interrupt (%#08x)\n",
3011 dev->name, rbuf[4] & rbuf[5]); 3011 dev->name, rbuf[4] & rbuf[5]);
3012 } 3012 }
3013 3013
3014 return 0; 3014 return 0;
3015 } 3015 }
3016 3016
3017 #define SWAP_BITS(x) ( (((x) & 0x0001) << 15) | (((x) & 0x0002) << 13) \ 3017 #define SWAP_BITS(x) ( (((x) & 0x0001) << 15) | (((x) & 0x0002) << 13) \
3018 | (((x) & 0x0004) << 11) | (((x) & 0x0008) << 9) \ 3018 | (((x) & 0x0004) << 11) | (((x) & 0x0008) << 9) \
3019 | (((x) & 0x0010) << 7) | (((x) & 0x0020) << 5) \ 3019 | (((x) & 0x0010) << 7) | (((x) & 0x0020) << 5) \
3020 | (((x) & 0x0040) << 3) | (((x) & 0x0080) << 1) \ 3020 | (((x) & 0x0040) << 3) | (((x) & 0x0080) << 1) \
3021 | (((x) & 0x0100) >> 1) | (((x) & 0x0200) >> 3) \ 3021 | (((x) & 0x0100) >> 1) | (((x) & 0x0200) >> 3) \
3022 | (((x) & 0x0400) >> 5) | (((x) & 0x0800) >> 7) \ 3022 | (((x) & 0x0400) >> 5) | (((x) & 0x0800) >> 7) \
3023 | (((x) & 0x1000) >> 9) | (((x) & 0x2000) >> 11) \ 3023 | (((x) & 0x1000) >> 9) | (((x) & 0x2000) >> 11) \
3024 | (((x) & 0x4000) >> 13) | (((x) & 0x8000) >> 15) ) 3024 | (((x) & 0x4000) >> 13) | (((x) & 0x8000) >> 15) )
3025 3025
3026 static int netdev_get_eeprom(struct net_device *dev, u8 *buf) 3026 static int netdev_get_eeprom(struct net_device *dev, u8 *buf)
3027 { 3027 {
3028 int i; 3028 int i;
3029 u16 *ebuf = (u16 *)buf; 3029 u16 *ebuf = (u16 *)buf;
3030 void __iomem * ioaddr = ns_ioaddr(dev); 3030 void __iomem * ioaddr = ns_ioaddr(dev);
3031 struct netdev_private *np = netdev_priv(dev); 3031 struct netdev_private *np = netdev_priv(dev);
3032 3032
3033 /* eeprom_read reads 16 bits, and indexes by 16 bits */ 3033 /* eeprom_read reads 16 bits, and indexes by 16 bits */
3034 for (i = 0; i < np->eeprom_size/2; i++) { 3034 for (i = 0; i < np->eeprom_size/2; i++) {
3035 ebuf[i] = eeprom_read(ioaddr, i); 3035 ebuf[i] = eeprom_read(ioaddr, i);
3036 /* The EEPROM itself stores data bit-swapped, but eeprom_read 3036 /* The EEPROM itself stores data bit-swapped, but eeprom_read
3037 * reads it back "sanely". So we swap it back here in order to 3037 * reads it back "sanely". So we swap it back here in order to
3038 * present it to userland as it is stored. */ 3038 * present it to userland as it is stored. */
3039 ebuf[i] = SWAP_BITS(ebuf[i]); 3039 ebuf[i] = SWAP_BITS(ebuf[i]);
3040 } 3040 }
3041 return 0; 3041 return 0;
3042 } 3042 }
3043 3043
3044 static int netdev_ioctl(struct net_device *dev, struct ifreq *rq, int cmd) 3044 static int netdev_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
3045 { 3045 {
3046 struct mii_ioctl_data *data = if_mii(rq); 3046 struct mii_ioctl_data *data = if_mii(rq);
3047 struct netdev_private *np = netdev_priv(dev); 3047 struct netdev_private *np = netdev_priv(dev);
3048 3048
3049 switch(cmd) { 3049 switch(cmd) {
3050 case SIOCGMIIPHY: /* Get address of MII PHY in use. */ 3050 case SIOCGMIIPHY: /* Get address of MII PHY in use. */
3051 case SIOCDEVPRIVATE: /* for binary compat, remove in 2.5 */ 3051 case SIOCDEVPRIVATE: /* for binary compat, remove in 2.5 */
3052 data->phy_id = np->phy_addr_external; 3052 data->phy_id = np->phy_addr_external;
3053 /* Fall Through */ 3053 /* Fall Through */
3054 3054
3055 case SIOCGMIIREG: /* Read MII PHY register. */ 3055 case SIOCGMIIREG: /* Read MII PHY register. */
3056 case SIOCDEVPRIVATE+1: /* for binary compat, remove in 2.5 */ 3056 case SIOCDEVPRIVATE+1: /* for binary compat, remove in 2.5 */
3057 /* The phy_id is not enough to uniquely identify 3057 /* The phy_id is not enough to uniquely identify
3058 * the intended target. Therefore the command is sent to 3058 * the intended target. Therefore the command is sent to
3059 * the given mii on the current port. 3059 * the given mii on the current port.
3060 */ 3060 */
3061 if (dev->if_port == PORT_TP) { 3061 if (dev->if_port == PORT_TP) {
3062 if ((data->phy_id & 0x1f) == np->phy_addr_external) 3062 if ((data->phy_id & 0x1f) == np->phy_addr_external)
3063 data->val_out = mdio_read(dev, 3063 data->val_out = mdio_read(dev,
3064 data->reg_num & 0x1f); 3064 data->reg_num & 0x1f);
3065 else 3065 else
3066 data->val_out = 0; 3066 data->val_out = 0;
3067 } else { 3067 } else {
3068 move_int_phy(dev, data->phy_id & 0x1f); 3068 move_int_phy(dev, data->phy_id & 0x1f);
3069 data->val_out = miiport_read(dev, data->phy_id & 0x1f, 3069 data->val_out = miiport_read(dev, data->phy_id & 0x1f,
3070 data->reg_num & 0x1f); 3070 data->reg_num & 0x1f);
3071 } 3071 }
3072 return 0; 3072 return 0;
3073 3073
3074 case SIOCSMIIREG: /* Write MII PHY register. */ 3074 case SIOCSMIIREG: /* Write MII PHY register. */
3075 case SIOCDEVPRIVATE+2: /* for binary compat, remove in 2.5 */ 3075 case SIOCDEVPRIVATE+2: /* for binary compat, remove in 2.5 */
3076 if (!capable(CAP_NET_ADMIN)) 3076 if (!capable(CAP_NET_ADMIN))
3077 return -EPERM; 3077 return -EPERM;
3078 if (dev->if_port == PORT_TP) { 3078 if (dev->if_port == PORT_TP) {
3079 if ((data->phy_id & 0x1f) == np->phy_addr_external) { 3079 if ((data->phy_id & 0x1f) == np->phy_addr_external) {
3080 if ((data->reg_num & 0x1f) == MII_ADVERTISE) 3080 if ((data->reg_num & 0x1f) == MII_ADVERTISE)
3081 np->advertising = data->val_in; 3081 np->advertising = data->val_in;
3082 mdio_write(dev, data->reg_num & 0x1f, 3082 mdio_write(dev, data->reg_num & 0x1f,
3083 data->val_in); 3083 data->val_in);
3084 } 3084 }
3085 } else { 3085 } else {
3086 if ((data->phy_id & 0x1f) == np->phy_addr_external) { 3086 if ((data->phy_id & 0x1f) == np->phy_addr_external) {
3087 if ((data->reg_num & 0x1f) == MII_ADVERTISE) 3087 if ((data->reg_num & 0x1f) == MII_ADVERTISE)
3088 np->advertising = data->val_in; 3088 np->advertising = data->val_in;
3089 } 3089 }
3090 move_int_phy(dev, data->phy_id & 0x1f); 3090 move_int_phy(dev, data->phy_id & 0x1f);
3091 miiport_write(dev, data->phy_id & 0x1f, 3091 miiport_write(dev, data->phy_id & 0x1f,
3092 data->reg_num & 0x1f, 3092 data->reg_num & 0x1f,
3093 data->val_in); 3093 data->val_in);
3094 } 3094 }
3095 return 0; 3095 return 0;
3096 default: 3096 default:
3097 return -EOPNOTSUPP; 3097 return -EOPNOTSUPP;
3098 } 3098 }
3099 } 3099 }
3100 3100
3101 static void enable_wol_mode(struct net_device *dev, int enable_intr) 3101 static void enable_wol_mode(struct net_device *dev, int enable_intr)
3102 { 3102 {
3103 void __iomem * ioaddr = ns_ioaddr(dev); 3103 void __iomem * ioaddr = ns_ioaddr(dev);
3104 struct netdev_private *np = netdev_priv(dev); 3104 struct netdev_private *np = netdev_priv(dev);
3105 3105
3106 if (netif_msg_wol(np)) 3106 if (netif_msg_wol(np))
3107 printk(KERN_INFO "%s: remaining active for wake-on-lan\n", 3107 printk(KERN_INFO "%s: remaining active for wake-on-lan\n",
3108 dev->name); 3108 dev->name);
3109 3109
3110 /* For WOL we must restart the rx process in silent mode. 3110 /* For WOL we must restart the rx process in silent mode.
3111 * Write NULL to the RxRingPtr. Only possible if 3111 * Write NULL to the RxRingPtr. Only possible if
3112 * rx process is stopped 3112 * rx process is stopped
3113 */ 3113 */
3114 writel(0, ioaddr + RxRingPtr); 3114 writel(0, ioaddr + RxRingPtr);
3115 3115
3116 /* read WoL status to clear */ 3116 /* read WoL status to clear */
3117 readl(ioaddr + WOLCmd); 3117 readl(ioaddr + WOLCmd);
3118 3118
3119 /* PME on, clear status */ 3119 /* PME on, clear status */
3120 writel(np->SavedClkRun | PMEEnable | PMEStatus, ioaddr + ClkRun); 3120 writel(np->SavedClkRun | PMEEnable | PMEStatus, ioaddr + ClkRun);
3121 3121
3122 /* and restart the rx process */ 3122 /* and restart the rx process */
3123 writel(RxOn, ioaddr + ChipCmd); 3123 writel(RxOn, ioaddr + ChipCmd);
3124 3124
3125 if (enable_intr) { 3125 if (enable_intr) {
3126 /* enable the WOL interrupt. 3126 /* enable the WOL interrupt.
3127 * Could be used to send a netlink message. 3127 * Could be used to send a netlink message.
3128 */ 3128 */
3129 writel(WOLPkt | LinkChange, ioaddr + IntrMask); 3129 writel(WOLPkt | LinkChange, ioaddr + IntrMask);
3130 natsemi_irq_enable(dev); 3130 natsemi_irq_enable(dev);
3131 } 3131 }
3132 } 3132 }
3133 3133
3134 static int netdev_close(struct net_device *dev) 3134 static int netdev_close(struct net_device *dev)
3135 { 3135 {
3136 void __iomem * ioaddr = ns_ioaddr(dev); 3136 void __iomem * ioaddr = ns_ioaddr(dev);
3137 struct netdev_private *np = netdev_priv(dev); 3137 struct netdev_private *np = netdev_priv(dev);
3138 3138
3139 if (netif_msg_ifdown(np)) 3139 if (netif_msg_ifdown(np))
3140 printk(KERN_DEBUG 3140 printk(KERN_DEBUG
3141 "%s: Shutting down ethercard, status was %#04x.\n", 3141 "%s: Shutting down ethercard, status was %#04x.\n",
3142 dev->name, (int)readl(ioaddr + ChipCmd)); 3142 dev->name, (int)readl(ioaddr + ChipCmd));
3143 if (netif_msg_pktdata(np)) 3143 if (netif_msg_pktdata(np))
3144 printk(KERN_DEBUG 3144 printk(KERN_DEBUG
3145 "%s: Queue pointers were Tx %d / %d, Rx %d / %d.\n", 3145 "%s: Queue pointers were Tx %d / %d, Rx %d / %d.\n",
3146 dev->name, np->cur_tx, np->dirty_tx, 3146 dev->name, np->cur_tx, np->dirty_tx,
3147 np->cur_rx, np->dirty_rx); 3147 np->cur_rx, np->dirty_rx);
3148 3148
3149 napi_disable(&np->napi); 3149 napi_disable(&np->napi);
3150 3150
3151 /* 3151 /*
3152 * FIXME: what if someone tries to close a device 3152 * FIXME: what if someone tries to close a device
3153 * that is suspended? 3153 * that is suspended?
3154 * Should we reenable the nic to switch to 3154 * Should we reenable the nic to switch to
3155 * the final WOL settings? 3155 * the final WOL settings?
3156 */ 3156 */
3157 3157
3158 del_timer_sync(&np->timer); 3158 del_timer_sync(&np->timer);
3159 disable_irq(dev->irq); 3159 disable_irq(dev->irq);
3160 spin_lock_irq(&np->lock); 3160 spin_lock_irq(&np->lock);
3161 natsemi_irq_disable(dev); 3161 natsemi_irq_disable(dev);
3162 np->hands_off = 1; 3162 np->hands_off = 1;
3163 spin_unlock_irq(&np->lock); 3163 spin_unlock_irq(&np->lock);
3164 enable_irq(dev->irq); 3164 enable_irq(dev->irq);
3165 3165
3166 free_irq(dev->irq, dev); 3166 free_irq(dev->irq, dev);
3167 3167
3168 /* Interrupt disabled, interrupt handler released, 3168 /* Interrupt disabled, interrupt handler released,
3169 * queue stopped, timer deleted, rtnl_lock held 3169 * queue stopped, timer deleted, rtnl_lock held
3170 * All async codepaths that access the driver are disabled. 3170 * All async codepaths that access the driver are disabled.
3171 */ 3171 */
3172 spin_lock_irq(&np->lock); 3172 spin_lock_irq(&np->lock);
3173 np->hands_off = 0; 3173 np->hands_off = 0;
3174 readl(ioaddr + IntrMask); 3174 readl(ioaddr + IntrMask);
3175 readw(ioaddr + MIntrStatus); 3175 readw(ioaddr + MIntrStatus);
3176 3176
3177 /* Freeze Stats */ 3177 /* Freeze Stats */
3178 writel(StatsFreeze, ioaddr + StatsCtrl); 3178 writel(StatsFreeze, ioaddr + StatsCtrl);
3179 3179
3180 /* Stop the chip's Tx and Rx processes. */ 3180 /* Stop the chip's Tx and Rx processes. */
3181 natsemi_stop_rxtx(dev); 3181 natsemi_stop_rxtx(dev);
3182 3182
3183 __get_stats(dev); 3183 __get_stats(dev);
3184 spin_unlock_irq(&np->lock); 3184 spin_unlock_irq(&np->lock);
3185 3185
3186 /* clear the carrier last - an interrupt could reenable it otherwise */ 3186 /* clear the carrier last - an interrupt could reenable it otherwise */
3187 netif_carrier_off(dev); 3187 netif_carrier_off(dev);
3188 netif_stop_queue(dev); 3188 netif_stop_queue(dev);
3189 3189
3190 dump_ring(dev); 3190 dump_ring(dev);
3191 drain_ring(dev); 3191 drain_ring(dev);
3192 free_ring(dev); 3192 free_ring(dev);
3193 3193
3194 { 3194 {
3195 u32 wol = readl(ioaddr + WOLCmd) & WakeOptsSummary; 3195 u32 wol = readl(ioaddr + WOLCmd) & WakeOptsSummary;
3196 if (wol) { 3196 if (wol) {
3197 /* restart the NIC in WOL mode. 3197 /* restart the NIC in WOL mode.
3198 * The nic must be stopped for this. 3198 * The nic must be stopped for this.
3199 */ 3199 */
3200 enable_wol_mode(dev, 0); 3200 enable_wol_mode(dev, 0);
3201 } else { 3201 } else {
3202 /* Restore PME enable bit unmolested */ 3202 /* Restore PME enable bit unmolested */
3203 writel(np->SavedClkRun, ioaddr + ClkRun); 3203 writel(np->SavedClkRun, ioaddr + ClkRun);
3204 } 3204 }
3205 } 3205 }
3206 return 0; 3206 return 0;
3207 } 3207 }
3208 3208
3209 3209
3210 static void __devexit natsemi_remove1 (struct pci_dev *pdev) 3210 static void __devexit natsemi_remove1 (struct pci_dev *pdev)
3211 { 3211 {
3212 struct net_device *dev = pci_get_drvdata(pdev); 3212 struct net_device *dev = pci_get_drvdata(pdev);
3213 void __iomem * ioaddr = ns_ioaddr(dev); 3213 void __iomem * ioaddr = ns_ioaddr(dev);
3214 3214
3215 NATSEMI_REMOVE_FILE(pdev, dspcfg_workaround); 3215 NATSEMI_REMOVE_FILE(pdev, dspcfg_workaround);
3216 unregister_netdev (dev); 3216 unregister_netdev (dev);
3217 pci_release_regions (pdev); 3217 pci_release_regions (pdev);
3218 iounmap(ioaddr); 3218 iounmap(ioaddr);
3219 free_netdev (dev); 3219 free_netdev (dev);
3220 pci_set_drvdata(pdev, NULL); 3220 pci_set_drvdata(pdev, NULL);
3221 } 3221 }
3222 3222
3223 #ifdef CONFIG_PM 3223 #ifdef CONFIG_PM
3224 3224
3225 /* 3225 /*
3226 * The ns83815 chip doesn't have explicit RxStop bits. 3226 * The ns83815 chip doesn't have explicit RxStop bits.
3227 * Kicking the Rx or Tx process for a new packet reenables the Rx process 3227 * Kicking the Rx or Tx process for a new packet reenables the Rx process
3228 * of the nic, thus this function must be very careful: 3228 * of the nic, thus this function must be very careful:
3229 * 3229 *
3230 * suspend/resume synchronization: 3230 * suspend/resume synchronization:
3231 * entry points: 3231 * entry points:
3232 * netdev_open, netdev_close, netdev_ioctl, set_rx_mode, intr_handler, 3232 * netdev_open, netdev_close, netdev_ioctl, set_rx_mode, intr_handler,
3233 * start_tx, tx_timeout 3233 * start_tx, tx_timeout
3234 * 3234 *
3235 * No function accesses the hardware without checking np->hands_off. 3235 * No function accesses the hardware without checking np->hands_off.
3236 * the check occurs under spin_lock_irq(&np->lock); 3236 * the check occurs under spin_lock_irq(&np->lock);
3237 * exceptions: 3237 * exceptions:
3238 * * netdev_ioctl: noncritical access. 3238 * * netdev_ioctl: noncritical access.
3239 * * netdev_open: cannot happen due to the device_detach 3239 * * netdev_open: cannot happen due to the device_detach
3240 * * netdev_close: doesn't hurt. 3240 * * netdev_close: doesn't hurt.
3241 * * netdev_timer: timer stopped by natsemi_suspend. 3241 * * netdev_timer: timer stopped by natsemi_suspend.
3242 * * intr_handler: doesn't acquire the spinlock. suspend calls 3242 * * intr_handler: doesn't acquire the spinlock. suspend calls
3243 * disable_irq() to enforce synchronization. 3243 * disable_irq() to enforce synchronization.
3244 * * natsemi_poll: checks before reenabling interrupts. suspend 3244 * * natsemi_poll: checks before reenabling interrupts. suspend
3245 * sets hands_off, disables interrupts and then waits with 3245 * sets hands_off, disables interrupts and then waits with
3246 * napi_disable(). 3246 * napi_disable().
3247 * 3247 *
3248 * Interrupts must be disabled, otherwise hands_off can cause irq storms. 3248 * Interrupts must be disabled, otherwise hands_off can cause irq storms.
3249 */ 3249 */
3250 3250
3251 static int natsemi_suspend (struct pci_dev *pdev, pm_message_t state) 3251 static int natsemi_suspend (struct pci_dev *pdev, pm_message_t state)
3252 { 3252 {
3253 struct net_device *dev = pci_get_drvdata (pdev); 3253 struct net_device *dev = pci_get_drvdata (pdev);
3254 struct netdev_private *np = netdev_priv(dev); 3254 struct netdev_private *np = netdev_priv(dev);
3255 void __iomem * ioaddr = ns_ioaddr(dev); 3255 void __iomem * ioaddr = ns_ioaddr(dev);
3256 3256
3257 rtnl_lock(); 3257 rtnl_lock();
3258 if (netif_running (dev)) { 3258 if (netif_running (dev)) {
3259 del_timer_sync(&np->timer); 3259 del_timer_sync(&np->timer);
3260 3260
3261 disable_irq(dev->irq); 3261 disable_irq(dev->irq);
3262 spin_lock_irq(&np->lock); 3262 spin_lock_irq(&np->lock);
3263 3263
3264 natsemi_irq_disable(dev); 3264 natsemi_irq_disable(dev);
3265 np->hands_off = 1; 3265 np->hands_off = 1;
3266 natsemi_stop_rxtx(dev); 3266 natsemi_stop_rxtx(dev);
3267 netif_stop_queue(dev); 3267 netif_stop_queue(dev);
3268 3268
3269 spin_unlock_irq(&np->lock); 3269 spin_unlock_irq(&np->lock);
3270 enable_irq(dev->irq); 3270 enable_irq(dev->irq);
3271 3271
3272 napi_disable(&np->napi); 3272 napi_disable(&np->napi);
3273 3273
3274 /* Update the error counts. */ 3274 /* Update the error counts. */
3275 __get_stats(dev); 3275 __get_stats(dev);
3276 3276
3277 /* pci_power_off(pdev, -1); */ 3277 /* pci_power_off(pdev, -1); */
3278 drain_ring(dev); 3278 drain_ring(dev);
3279 { 3279 {
3280 u32 wol = readl(ioaddr + WOLCmd) & WakeOptsSummary; 3280 u32 wol = readl(ioaddr + WOLCmd) & WakeOptsSummary;
3281 /* Restore PME enable bit */ 3281 /* Restore PME enable bit */
3282 if (wol) { 3282 if (wol) {
3283 /* restart the NIC in WOL mode. 3283 /* restart the NIC in WOL mode.
3284 * The nic must be stopped for this. 3284 * The nic must be stopped for this.
3285 * FIXME: use the WOL interrupt 3285 * FIXME: use the WOL interrupt
3286 */ 3286 */
3287 enable_wol_mode(dev, 0); 3287 enable_wol_mode(dev, 0);
3288 } else { 3288 } else {
3289 /* Restore PME enable bit unmolested */ 3289 /* Restore PME enable bit unmolested */
3290 writel(np->SavedClkRun, ioaddr + ClkRun); 3290 writel(np->SavedClkRun, ioaddr + ClkRun);
3291 } 3291 }
3292 } 3292 }
3293 } 3293 }
3294 netif_device_detach(dev); 3294 netif_device_detach(dev);
3295 rtnl_unlock(); 3295 rtnl_unlock();
3296 return 0; 3296 return 0;
3297 } 3297 }
3298 3298
3299 3299
3300 static int natsemi_resume (struct pci_dev *pdev) 3300 static int natsemi_resume (struct pci_dev *pdev)
3301 { 3301 {
3302 struct net_device *dev = pci_get_drvdata (pdev); 3302 struct net_device *dev = pci_get_drvdata (pdev);
3303 struct netdev_private *np = netdev_priv(dev); 3303 struct netdev_private *np = netdev_priv(dev);
3304 int ret = 0; 3304 int ret = 0;
3305 3305
3306 rtnl_lock(); 3306 rtnl_lock();
3307 if (netif_device_present(dev)) 3307 if (netif_device_present(dev))
3308 goto out; 3308 goto out;
3309 if (netif_running(dev)) { 3309 if (netif_running(dev)) {
3310 BUG_ON(!np->hands_off); 3310 BUG_ON(!np->hands_off);
3311 ret = pci_enable_device(pdev); 3311 ret = pci_enable_device(pdev);
3312 if (ret < 0) { 3312 if (ret < 0) {
3313 dev_err(&pdev->dev, 3313 dev_err(&pdev->dev,
3314 "pci_enable_device() failed: %d\n", ret); 3314 "pci_enable_device() failed: %d\n", ret);
3315 goto out; 3315 goto out;
3316 } 3316 }
3317 /* pci_power_on(pdev); */ 3317 /* pci_power_on(pdev); */
3318 3318
3319 napi_enable(&np->napi); 3319 napi_enable(&np->napi);
3320 3320
3321 natsemi_reset(dev); 3321 natsemi_reset(dev);
3322 init_ring(dev); 3322 init_ring(dev);
3323 disable_irq(dev->irq); 3323 disable_irq(dev->irq);
3324 spin_lock_irq(&np->lock); 3324 spin_lock_irq(&np->lock);
3325 np->hands_off = 0; 3325 np->hands_off = 0;
3326 init_registers(dev); 3326 init_registers(dev);
3327 netif_device_attach(dev); 3327 netif_device_attach(dev);
3328 spin_unlock_irq(&np->lock); 3328 spin_unlock_irq(&np->lock);
3329 enable_irq(dev->irq); 3329 enable_irq(dev->irq);
3330 3330
3331 mod_timer(&np->timer, round_jiffies(jiffies + 1*HZ)); 3331 mod_timer(&np->timer, round_jiffies(jiffies + 1*HZ));
3332 } 3332 }
3333 netif_device_attach(dev); 3333 netif_device_attach(dev);
3334 out: 3334 out:
3335 rtnl_unlock(); 3335 rtnl_unlock();
3336 return ret; 3336 return ret;
3337 } 3337 }
3338 3338
3339 #endif /* CONFIG_PM */ 3339 #endif /* CONFIG_PM */
3340 3340
3341 static struct pci_driver natsemi_driver = { 3341 static struct pci_driver natsemi_driver = {
3342 .name = DRV_NAME, 3342 .name = DRV_NAME,
3343 .id_table = natsemi_pci_tbl, 3343 .id_table = natsemi_pci_tbl,
3344 .probe = natsemi_probe1, 3344 .probe = natsemi_probe1,
3345 .remove = __devexit_p(natsemi_remove1), 3345 .remove = __devexit_p(natsemi_remove1),
3346 #ifdef CONFIG_PM 3346 #ifdef CONFIG_PM
3347 .suspend = natsemi_suspend, 3347 .suspend = natsemi_suspend,
3348 .resume = natsemi_resume, 3348 .resume = natsemi_resume,
3349 #endif 3349 #endif
3350 }; 3350 };
3351 3351
3352 static int __init natsemi_init_mod (void) 3352 static int __init natsemi_init_mod (void)
3353 { 3353 {
3354 /* when a module, this is printed whether or not devices are found in probe */ 3354 /* when a module, this is printed whether or not devices are found in probe */
3355 #ifdef MODULE 3355 #ifdef MODULE
3356 printk(version); 3356 printk(version);
3357 #endif 3357 #endif
3358 3358
3359 return pci_register_driver(&natsemi_driver); 3359 return pci_register_driver(&natsemi_driver);
3360 } 3360 }
3361 3361
3362 static void __exit natsemi_exit_mod (void) 3362 static void __exit natsemi_exit_mod (void)
3363 { 3363 {
3364 pci_unregister_driver (&natsemi_driver); 3364 pci_unregister_driver (&natsemi_driver);
3365 } 3365 }
3366 3366
3367 module_init(natsemi_init_mod); 3367 module_init(natsemi_init_mod);
3368 module_exit(natsemi_exit_mod); 3368 module_exit(natsemi_exit_mod);
3369 3369
3370 3370
drivers/net/usb/dm9601.c
1 /* 1 /*
2 * Davicom DM9601 USB 1.1 10/100Mbps ethernet devices 2 * Davicom DM9601 USB 1.1 10/100Mbps ethernet devices
3 * 3 *
4 * Peter Korsgaard <jacmet@sunsite.dk> 4 * Peter Korsgaard <jacmet@sunsite.dk>
5 * 5 *
6 * This file is licensed under the terms of the GNU General Public License 6 * This file is licensed under the terms of the GNU General Public License
7 * version 2. This program is licensed "as is" without any warranty of any 7 * version 2. This program is licensed "as is" without any warranty of any
8 * kind, whether express or implied. 8 * kind, whether express or implied.
9 */ 9 */
10 10
11 //#define DEBUG 11 //#define DEBUG
12 12
13 #include <linux/module.h> 13 #include <linux/module.h>
14 #include <linux/sched.h> 14 #include <linux/sched.h>
15 #include <linux/stddef.h> 15 #include <linux/stddef.h>
16 #include <linux/init.h> 16 #include <linux/init.h>
17 #include <linux/netdevice.h> 17 #include <linux/netdevice.h>
18 #include <linux/etherdevice.h> 18 #include <linux/etherdevice.h>
19 #include <linux/ethtool.h> 19 #include <linux/ethtool.h>
20 #include <linux/mii.h> 20 #include <linux/mii.h>
21 #include <linux/usb.h> 21 #include <linux/usb.h>
22 #include <linux/crc32.h> 22 #include <linux/crc32.h>
23 #include <linux/usb/usbnet.h> 23 #include <linux/usb/usbnet.h>
24 24
25 /* datasheet: 25 /* datasheet:
26 http://www.davicom.com.tw/big5/download/Data%20Sheet/DM9601-DS-P01-930914.pdf 26 http://www.davicom.com.tw/big5/download/Data%20Sheet/DM9601-DS-P01-930914.pdf
27 */ 27 */
28 28
29 /* control requests */ 29 /* control requests */
30 #define DM_READ_REGS 0x00 30 #define DM_READ_REGS 0x00
31 #define DM_WRITE_REGS 0x01 31 #define DM_WRITE_REGS 0x01
32 #define DM_READ_MEMS 0x02 32 #define DM_READ_MEMS 0x02
33 #define DM_WRITE_REG 0x03 33 #define DM_WRITE_REG 0x03
34 #define DM_WRITE_MEMS 0x05 34 #define DM_WRITE_MEMS 0x05
35 #define DM_WRITE_MEM 0x07 35 #define DM_WRITE_MEM 0x07
36 36
37 /* registers */ 37 /* registers */
38 #define DM_NET_CTRL 0x00 38 #define DM_NET_CTRL 0x00
39 #define DM_RX_CTRL 0x05 39 #define DM_RX_CTRL 0x05
40 #define DM_SHARED_CTRL 0x0b 40 #define DM_SHARED_CTRL 0x0b
41 #define DM_SHARED_ADDR 0x0c 41 #define DM_SHARED_ADDR 0x0c
42 #define DM_SHARED_DATA 0x0d /* low + high */ 42 #define DM_SHARED_DATA 0x0d /* low + high */
43 #define DM_PHY_ADDR 0x10 /* 6 bytes */ 43 #define DM_PHY_ADDR 0x10 /* 6 bytes */
44 #define DM_MCAST_ADDR 0x16 /* 8 bytes */ 44 #define DM_MCAST_ADDR 0x16 /* 8 bytes */
45 #define DM_GPR_CTRL 0x1e 45 #define DM_GPR_CTRL 0x1e
46 #define DM_GPR_DATA 0x1f 46 #define DM_GPR_DATA 0x1f
47 47
48 #define DM_MAX_MCAST 64 48 #define DM_MAX_MCAST 64
49 #define DM_MCAST_SIZE 8 49 #define DM_MCAST_SIZE 8
50 #define DM_EEPROM_LEN 256 50 #define DM_EEPROM_LEN 256
51 #define DM_TX_OVERHEAD 2 /* 2 byte header */ 51 #define DM_TX_OVERHEAD 2 /* 2 byte header */
52 #define DM_RX_OVERHEAD 7 /* 3 byte header + 4 byte crc tail */ 52 #define DM_RX_OVERHEAD 7 /* 3 byte header + 4 byte crc tail */
53 #define DM_TIMEOUT 1000 53 #define DM_TIMEOUT 1000
54 54
55 55
56 static int dm_read(struct usbnet *dev, u8 reg, u16 length, void *data) 56 static int dm_read(struct usbnet *dev, u8 reg, u16 length, void *data)
57 { 57 {
58 devdbg(dev, "dm_read() reg=0x%02x length=%d", reg, length); 58 devdbg(dev, "dm_read() reg=0x%02x length=%d", reg, length);
59 return usb_control_msg(dev->udev, 59 return usb_control_msg(dev->udev,
60 usb_rcvctrlpipe(dev->udev, 0), 60 usb_rcvctrlpipe(dev->udev, 0),
61 DM_READ_REGS, 61 DM_READ_REGS,
62 USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE, 62 USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
63 0, reg, data, length, USB_CTRL_SET_TIMEOUT); 63 0, reg, data, length, USB_CTRL_SET_TIMEOUT);
64 } 64 }
65 65
66 static int dm_read_reg(struct usbnet *dev, u8 reg, u8 *value) 66 static int dm_read_reg(struct usbnet *dev, u8 reg, u8 *value)
67 { 67 {
68 return dm_read(dev, reg, 1, value); 68 return dm_read(dev, reg, 1, value);
69 } 69 }
70 70
71 static int dm_write(struct usbnet *dev, u8 reg, u16 length, void *data) 71 static int dm_write(struct usbnet *dev, u8 reg, u16 length, void *data)
72 { 72 {
73 devdbg(dev, "dm_write() reg=0x%02x, length=%d", reg, length); 73 devdbg(dev, "dm_write() reg=0x%02x, length=%d", reg, length);
74 return usb_control_msg(dev->udev, 74 return usb_control_msg(dev->udev,
75 usb_sndctrlpipe(dev->udev, 0), 75 usb_sndctrlpipe(dev->udev, 0),
76 DM_WRITE_REGS, 76 DM_WRITE_REGS,
77 USB_DIR_OUT | USB_TYPE_VENDOR |USB_RECIP_DEVICE, 77 USB_DIR_OUT | USB_TYPE_VENDOR |USB_RECIP_DEVICE,
78 0, reg, data, length, USB_CTRL_SET_TIMEOUT); 78 0, reg, data, length, USB_CTRL_SET_TIMEOUT);
79 } 79 }
80 80
81 static int dm_write_reg(struct usbnet *dev, u8 reg, u8 value) 81 static int dm_write_reg(struct usbnet *dev, u8 reg, u8 value)
82 { 82 {
83 devdbg(dev, "dm_write_reg() reg=0x%02x, value=0x%02x", reg, value); 83 devdbg(dev, "dm_write_reg() reg=0x%02x, value=0x%02x", reg, value);
84 return usb_control_msg(dev->udev, 84 return usb_control_msg(dev->udev,
85 usb_sndctrlpipe(dev->udev, 0), 85 usb_sndctrlpipe(dev->udev, 0),
86 DM_WRITE_REG, 86 DM_WRITE_REG,
87 USB_DIR_OUT | USB_TYPE_VENDOR |USB_RECIP_DEVICE, 87 USB_DIR_OUT | USB_TYPE_VENDOR |USB_RECIP_DEVICE,
88 value, reg, NULL, 0, USB_CTRL_SET_TIMEOUT); 88 value, reg, NULL, 0, USB_CTRL_SET_TIMEOUT);
89 } 89 }
90 90
91 static void dm_write_async_callback(struct urb *urb) 91 static void dm_write_async_callback(struct urb *urb)
92 { 92 {
93 struct usb_ctrlrequest *req = (struct usb_ctrlrequest *)urb->context; 93 struct usb_ctrlrequest *req = (struct usb_ctrlrequest *)urb->context;
94 94
95 if (urb->status < 0) 95 if (urb->status < 0)
96 printk(KERN_DEBUG "dm_write_async_callback() failed with %d\n", 96 printk(KERN_DEBUG "dm_write_async_callback() failed with %d\n",
97 urb->status); 97 urb->status);
98 98
99 kfree(req); 99 kfree(req);
100 usb_free_urb(urb); 100 usb_free_urb(urb);
101 } 101 }
102 102
103 static void dm_write_async_helper(struct usbnet *dev, u8 reg, u8 value, 103 static void dm_write_async_helper(struct usbnet *dev, u8 reg, u8 value,
104 u16 length, void *data) 104 u16 length, void *data)
105 { 105 {
106 struct usb_ctrlrequest *req; 106 struct usb_ctrlrequest *req;
107 struct urb *urb; 107 struct urb *urb;
108 int status; 108 int status;
109 109
110 urb = usb_alloc_urb(0, GFP_ATOMIC); 110 urb = usb_alloc_urb(0, GFP_ATOMIC);
111 if (!urb) { 111 if (!urb) {
112 deverr(dev, "Error allocating URB in dm_write_async_helper!"); 112 deverr(dev, "Error allocating URB in dm_write_async_helper!");
113 return; 113 return;
114 } 114 }
115 115
116 req = kmalloc(sizeof(struct usb_ctrlrequest), GFP_ATOMIC); 116 req = kmalloc(sizeof(struct usb_ctrlrequest), GFP_ATOMIC);
117 if (!req) { 117 if (!req) {
118 deverr(dev, "Failed to allocate memory for control request"); 118 deverr(dev, "Failed to allocate memory for control request");
119 usb_free_urb(urb); 119 usb_free_urb(urb);
120 return; 120 return;
121 } 121 }
122 122
123 req->bRequestType = USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE; 123 req->bRequestType = USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE;
124 req->bRequest = length ? DM_WRITE_REGS : DM_WRITE_REG; 124 req->bRequest = length ? DM_WRITE_REGS : DM_WRITE_REG;
125 req->wValue = cpu_to_le16(value); 125 req->wValue = cpu_to_le16(value);
126 req->wIndex = cpu_to_le16(reg); 126 req->wIndex = cpu_to_le16(reg);
127 req->wLength = cpu_to_le16(length); 127 req->wLength = cpu_to_le16(length);
128 128
129 usb_fill_control_urb(urb, dev->udev, 129 usb_fill_control_urb(urb, dev->udev,
130 usb_sndctrlpipe(dev->udev, 0), 130 usb_sndctrlpipe(dev->udev, 0),
131 (void *)req, data, length, 131 (void *)req, data, length,
132 dm_write_async_callback, req); 132 dm_write_async_callback, req);
133 133
134 status = usb_submit_urb(urb, GFP_ATOMIC); 134 status = usb_submit_urb(urb, GFP_ATOMIC);
135 if (status < 0) { 135 if (status < 0) {
136 deverr(dev, "Error submitting the control message: status=%d", 136 deverr(dev, "Error submitting the control message: status=%d",
137 status); 137 status);
138 kfree(req); 138 kfree(req);
139 usb_free_urb(urb); 139 usb_free_urb(urb);
140 } 140 }
141 } 141 }
142 142
143 static void dm_write_async(struct usbnet *dev, u8 reg, u16 length, void *data) 143 static void dm_write_async(struct usbnet *dev, u8 reg, u16 length, void *data)
144 { 144 {
145 devdbg(dev, "dm_write_async() reg=0x%02x length=%d", reg, length); 145 devdbg(dev, "dm_write_async() reg=0x%02x length=%d", reg, length);
146 146
147 dm_write_async_helper(dev, reg, 0, length, data); 147 dm_write_async_helper(dev, reg, 0, length, data);
148 } 148 }
149 149
150 static void dm_write_reg_async(struct usbnet *dev, u8 reg, u8 value) 150 static void dm_write_reg_async(struct usbnet *dev, u8 reg, u8 value)
151 { 151 {
152 devdbg(dev, "dm_write_reg_async() reg=0x%02x value=0x%02x", 152 devdbg(dev, "dm_write_reg_async() reg=0x%02x value=0x%02x",
153 reg, value); 153 reg, value);
154 154
155 dm_write_async_helper(dev, reg, value, 0, NULL); 155 dm_write_async_helper(dev, reg, value, 0, NULL);
156 } 156 }
157 157
158 static int dm_read_shared_word(struct usbnet *dev, int phy, u8 reg, u16 *value) 158 static int dm_read_shared_word(struct usbnet *dev, int phy, u8 reg, __le16 *value)
159 { 159 {
160 int ret, i; 160 int ret, i;
161 161
162 mutex_lock(&dev->phy_mutex); 162 mutex_lock(&dev->phy_mutex);
163 163
164 dm_write_reg(dev, DM_SHARED_ADDR, phy ? (reg | 0x40) : reg); 164 dm_write_reg(dev, DM_SHARED_ADDR, phy ? (reg | 0x40) : reg);
165 dm_write_reg(dev, DM_SHARED_CTRL, phy ? 0xc : 0x4); 165 dm_write_reg(dev, DM_SHARED_CTRL, phy ? 0xc : 0x4);
166 166
167 for (i = 0; i < DM_TIMEOUT; i++) { 167 for (i = 0; i < DM_TIMEOUT; i++) {
168 u8 tmp; 168 u8 tmp;
169 169
170 udelay(1); 170 udelay(1);
171 ret = dm_read_reg(dev, DM_SHARED_CTRL, &tmp); 171 ret = dm_read_reg(dev, DM_SHARED_CTRL, &tmp);
172 if (ret < 0) 172 if (ret < 0)
173 goto out; 173 goto out;
174 174
175 /* ready */ 175 /* ready */
176 if ((tmp & 1) == 0) 176 if ((tmp & 1) == 0)
177 break; 177 break;
178 } 178 }
179 179
180 if (i == DM_TIMEOUT) { 180 if (i == DM_TIMEOUT) {
181 deverr(dev, "%s read timed out!", phy ? "phy" : "eeprom"); 181 deverr(dev, "%s read timed out!", phy ? "phy" : "eeprom");
182 ret = -EIO; 182 ret = -EIO;
183 goto out; 183 goto out;
184 } 184 }
185 185
186 dm_write_reg(dev, DM_SHARED_CTRL, 0x0); 186 dm_write_reg(dev, DM_SHARED_CTRL, 0x0);
187 ret = dm_read(dev, DM_SHARED_DATA, 2, value); 187 ret = dm_read(dev, DM_SHARED_DATA, 2, value);
188 188
189 devdbg(dev, "read shared %d 0x%02x returned 0x%04x, %d", 189 devdbg(dev, "read shared %d 0x%02x returned 0x%04x, %d",
190 phy, reg, *value, ret); 190 phy, reg, *value, ret);
191 191
192 out: 192 out:
193 mutex_unlock(&dev->phy_mutex); 193 mutex_unlock(&dev->phy_mutex);
194 return ret; 194 return ret;
195 } 195 }
196 196
197 static int dm_write_shared_word(struct usbnet *dev, int phy, u8 reg, u16 value) 197 static int dm_write_shared_word(struct usbnet *dev, int phy, u8 reg, __le16 value)
198 { 198 {
199 int ret, i; 199 int ret, i;
200 200
201 mutex_lock(&dev->phy_mutex); 201 mutex_lock(&dev->phy_mutex);
202 202
203 ret = dm_write(dev, DM_SHARED_DATA, 2, &value); 203 ret = dm_write(dev, DM_SHARED_DATA, 2, &value);
204 if (ret < 0) 204 if (ret < 0)
205 goto out; 205 goto out;
206 206
207 dm_write_reg(dev, DM_SHARED_ADDR, phy ? (reg | 0x40) : reg); 207 dm_write_reg(dev, DM_SHARED_ADDR, phy ? (reg | 0x40) : reg);
208 dm_write_reg(dev, DM_SHARED_CTRL, phy ? 0x1c : 0x14); 208 dm_write_reg(dev, DM_SHARED_CTRL, phy ? 0x1c : 0x14);
209 209
210 for (i = 0; i < DM_TIMEOUT; i++) { 210 for (i = 0; i < DM_TIMEOUT; i++) {
211 u8 tmp; 211 u8 tmp;
212 212
213 udelay(1); 213 udelay(1);
214 ret = dm_read_reg(dev, DM_SHARED_CTRL, &tmp); 214 ret = dm_read_reg(dev, DM_SHARED_CTRL, &tmp);
215 if (ret < 0) 215 if (ret < 0)
216 goto out; 216 goto out;
217 217
218 /* ready */ 218 /* ready */
219 if ((tmp & 1) == 0) 219 if ((tmp & 1) == 0)
220 break; 220 break;
221 } 221 }
222 222
223 if (i == DM_TIMEOUT) { 223 if (i == DM_TIMEOUT) {
224 deverr(dev, "%s write timed out!", phy ? "phy" : "eeprom"); 224 deverr(dev, "%s write timed out!", phy ? "phy" : "eeprom");
225 ret = -EIO; 225 ret = -EIO;
226 goto out; 226 goto out;
227 } 227 }
228 228
229 dm_write_reg(dev, DM_SHARED_CTRL, 0x0); 229 dm_write_reg(dev, DM_SHARED_CTRL, 0x0);
230 230
231 out: 231 out:
232 mutex_unlock(&dev->phy_mutex); 232 mutex_unlock(&dev->phy_mutex);
233 return ret; 233 return ret;
234 } 234 }
235 235
236 static int dm_read_eeprom_word(struct usbnet *dev, u8 offset, void *value) 236 static int dm_read_eeprom_word(struct usbnet *dev, u8 offset, void *value)
237 { 237 {
238 return dm_read_shared_word(dev, 0, offset, value); 238 return dm_read_shared_word(dev, 0, offset, value);
239 } 239 }
240 240
241 241
242 242
243 static int dm9601_get_eeprom_len(struct net_device *dev) 243 static int dm9601_get_eeprom_len(struct net_device *dev)
244 { 244 {
245 return DM_EEPROM_LEN; 245 return DM_EEPROM_LEN;
246 } 246 }
247 247
248 static int dm9601_get_eeprom(struct net_device *net, 248 static int dm9601_get_eeprom(struct net_device *net,
249 struct ethtool_eeprom *eeprom, u8 * data) 249 struct ethtool_eeprom *eeprom, u8 * data)
250 { 250 {
251 struct usbnet *dev = netdev_priv(net); 251 struct usbnet *dev = netdev_priv(net);
252 u16 *ebuf = (u16 *) data; 252 __le16 *ebuf = (__le16 *) data;
253 int i; 253 int i;
254 254
255 /* access is 16bit */ 255 /* access is 16bit */
256 if ((eeprom->offset % 2) || (eeprom->len % 2)) 256 if ((eeprom->offset % 2) || (eeprom->len % 2))
257 return -EINVAL; 257 return -EINVAL;
258 258
259 for (i = 0; i < eeprom->len / 2; i++) { 259 for (i = 0; i < eeprom->len / 2; i++) {
260 if (dm_read_eeprom_word(dev, eeprom->offset / 2 + i, 260 if (dm_read_eeprom_word(dev, eeprom->offset / 2 + i,
261 &ebuf[i]) < 0) 261 &ebuf[i]) < 0)
262 return -EINVAL; 262 return -EINVAL;
263 } 263 }
264 return 0; 264 return 0;
265 } 265 }
266 266
267 static int dm9601_mdio_read(struct net_device *netdev, int phy_id, int loc) 267 static int dm9601_mdio_read(struct net_device *netdev, int phy_id, int loc)
268 { 268 {
269 struct usbnet *dev = netdev_priv(netdev); 269 struct usbnet *dev = netdev_priv(netdev);
270 270
271 u16 res; 271 __le16 res;
272 272
273 if (phy_id) { 273 if (phy_id) {
274 devdbg(dev, "Only internal phy supported"); 274 devdbg(dev, "Only internal phy supported");
275 return 0; 275 return 0;
276 } 276 }
277 277
278 dm_read_shared_word(dev, 1, loc, &res); 278 dm_read_shared_word(dev, 1, loc, &res);
279 279
280 devdbg(dev, 280 devdbg(dev,
281 "dm9601_mdio_read() phy_id=0x%02x, loc=0x%02x, returns=0x%04x", 281 "dm9601_mdio_read() phy_id=0x%02x, loc=0x%02x, returns=0x%04x",
282 phy_id, loc, le16_to_cpu(res)); 282 phy_id, loc, le16_to_cpu(res));
283 283
284 return le16_to_cpu(res); 284 return le16_to_cpu(res);
285 } 285 }
286 286
287 static void dm9601_mdio_write(struct net_device *netdev, int phy_id, int loc, 287 static void dm9601_mdio_write(struct net_device *netdev, int phy_id, int loc,
288 int val) 288 int val)
289 { 289 {
290 struct usbnet *dev = netdev_priv(netdev); 290 struct usbnet *dev = netdev_priv(netdev);
291 u16 res = cpu_to_le16(val); 291 __le16 res = cpu_to_le16(val);
292 292
293 if (phy_id) { 293 if (phy_id) {
294 devdbg(dev, "Only internal phy supported"); 294 devdbg(dev, "Only internal phy supported");
295 return; 295 return;
296 } 296 }
297 297
298 devdbg(dev,"dm9601_mdio_write() phy_id=0x%02x, loc=0x%02x, val=0x%04x", 298 devdbg(dev,"dm9601_mdio_write() phy_id=0x%02x, loc=0x%02x, val=0x%04x",
299 phy_id, loc, val); 299 phy_id, loc, val);
300 300
301 dm_write_shared_word(dev, 1, loc, res); 301 dm_write_shared_word(dev, 1, loc, res);
302 } 302 }
303 303
304 static void dm9601_get_drvinfo(struct net_device *net, 304 static void dm9601_get_drvinfo(struct net_device *net,
305 struct ethtool_drvinfo *info) 305 struct ethtool_drvinfo *info)
306 { 306 {
307 /* Inherit standard device info */ 307 /* Inherit standard device info */
308 usbnet_get_drvinfo(net, info); 308 usbnet_get_drvinfo(net, info);
309 info->eedump_len = DM_EEPROM_LEN; 309 info->eedump_len = DM_EEPROM_LEN;
310 } 310 }
311 311
312 static u32 dm9601_get_link(struct net_device *net) 312 static u32 dm9601_get_link(struct net_device *net)
313 { 313 {
314 struct usbnet *dev = netdev_priv(net); 314 struct usbnet *dev = netdev_priv(net);
315 315
316 return mii_link_ok(&dev->mii); 316 return mii_link_ok(&dev->mii);
317 } 317 }
318 318
319 static int dm9601_ioctl(struct net_device *net, struct ifreq *rq, int cmd) 319 static int dm9601_ioctl(struct net_device *net, struct ifreq *rq, int cmd)
320 { 320 {
321 struct usbnet *dev = netdev_priv(net); 321 struct usbnet *dev = netdev_priv(net);
322 322
323 return generic_mii_ioctl(&dev->mii, if_mii(rq), cmd, NULL); 323 return generic_mii_ioctl(&dev->mii, if_mii(rq), cmd, NULL);
324 } 324 }
325 325
326 static struct ethtool_ops dm9601_ethtool_ops = { 326 static struct ethtool_ops dm9601_ethtool_ops = {
327 .get_drvinfo = dm9601_get_drvinfo, 327 .get_drvinfo = dm9601_get_drvinfo,
328 .get_link = dm9601_get_link, 328 .get_link = dm9601_get_link,
329 .get_msglevel = usbnet_get_msglevel, 329 .get_msglevel = usbnet_get_msglevel,
330 .set_msglevel = usbnet_set_msglevel, 330 .set_msglevel = usbnet_set_msglevel,
331 .get_eeprom_len = dm9601_get_eeprom_len, 331 .get_eeprom_len = dm9601_get_eeprom_len,
332 .get_eeprom = dm9601_get_eeprom, 332 .get_eeprom = dm9601_get_eeprom,
333 .get_settings = usbnet_get_settings, 333 .get_settings = usbnet_get_settings,
334 .set_settings = usbnet_set_settings, 334 .set_settings = usbnet_set_settings,
335 .nway_reset = usbnet_nway_reset, 335 .nway_reset = usbnet_nway_reset,
336 }; 336 };
337 337
338 static void dm9601_set_multicast(struct net_device *net) 338 static void dm9601_set_multicast(struct net_device *net)
339 { 339 {
340 struct usbnet *dev = netdev_priv(net); 340 struct usbnet *dev = netdev_priv(net);
341 /* We use the 20 byte dev->data for our 8 byte filter buffer 341 /* We use the 20 byte dev->data for our 8 byte filter buffer
342 * to avoid allocating memory that is tricky to free later */ 342 * to avoid allocating memory that is tricky to free later */
343 u8 *hashes = (u8 *) & dev->data; 343 u8 *hashes = (u8 *) & dev->data;
344 u8 rx_ctl = 0x01; 344 u8 rx_ctl = 0x01;
345 345
346 memset(hashes, 0x00, DM_MCAST_SIZE); 346 memset(hashes, 0x00, DM_MCAST_SIZE);
347 hashes[DM_MCAST_SIZE - 1] |= 0x80; /* broadcast address */ 347 hashes[DM_MCAST_SIZE - 1] |= 0x80; /* broadcast address */
348 348
349 if (net->flags & IFF_PROMISC) { 349 if (net->flags & IFF_PROMISC) {
350 rx_ctl |= 0x02; 350 rx_ctl |= 0x02;
351 } else if (net->flags & IFF_ALLMULTI || net->mc_count > DM_MAX_MCAST) { 351 } else if (net->flags & IFF_ALLMULTI || net->mc_count > DM_MAX_MCAST) {
352 rx_ctl |= 0x04; 352 rx_ctl |= 0x04;
353 } else if (net->mc_count) { 353 } else if (net->mc_count) {
354 struct dev_mc_list *mc_list = net->mc_list; 354 struct dev_mc_list *mc_list = net->mc_list;
355 int i; 355 int i;
356 356
357 for (i = 0; i < net->mc_count; i++) { 357 for (i = 0; i < net->mc_count; i++) {
358 u32 crc = ether_crc(ETH_ALEN, mc_list->dmi_addr) >> 26; 358 u32 crc = ether_crc(ETH_ALEN, mc_list->dmi_addr) >> 26;
359 hashes[crc >> 3] |= 1 << (crc & 0x7); 359 hashes[crc >> 3] |= 1 << (crc & 0x7);
360 } 360 }
361 } 361 }
362 362
363 dm_write_async(dev, DM_MCAST_ADDR, DM_MCAST_SIZE, hashes); 363 dm_write_async(dev, DM_MCAST_ADDR, DM_MCAST_SIZE, hashes);
364 dm_write_reg_async(dev, DM_RX_CTRL, rx_ctl); 364 dm_write_reg_async(dev, DM_RX_CTRL, rx_ctl);
365 } 365 }
366 366
367 static int dm9601_bind(struct usbnet *dev, struct usb_interface *intf) 367 static int dm9601_bind(struct usbnet *dev, struct usb_interface *intf)
368 { 368 {
369 int ret; 369 int ret;
370 370
371 ret = usbnet_get_endpoints(dev, intf); 371 ret = usbnet_get_endpoints(dev, intf);
372 if (ret) 372 if (ret)
373 goto out; 373 goto out;
374 374
375 dev->net->do_ioctl = dm9601_ioctl; 375 dev->net->do_ioctl = dm9601_ioctl;
376 dev->net->set_multicast_list = dm9601_set_multicast; 376 dev->net->set_multicast_list = dm9601_set_multicast;
377 dev->net->ethtool_ops = &dm9601_ethtool_ops; 377 dev->net->ethtool_ops = &dm9601_ethtool_ops;
378 dev->net->hard_header_len += DM_TX_OVERHEAD; 378 dev->net->hard_header_len += DM_TX_OVERHEAD;
379 dev->hard_mtu = dev->net->mtu + dev->net->hard_header_len; 379 dev->hard_mtu = dev->net->mtu + dev->net->hard_header_len;
380 dev->rx_urb_size = dev->net->mtu + ETH_HLEN + DM_RX_OVERHEAD; 380 dev->rx_urb_size = dev->net->mtu + ETH_HLEN + DM_RX_OVERHEAD;
381 381
382 dev->mii.dev = dev->net; 382 dev->mii.dev = dev->net;
383 dev->mii.mdio_read = dm9601_mdio_read; 383 dev->mii.mdio_read = dm9601_mdio_read;
384 dev->mii.mdio_write = dm9601_mdio_write; 384 dev->mii.mdio_write = dm9601_mdio_write;
385 dev->mii.phy_id_mask = 0x1f; 385 dev->mii.phy_id_mask = 0x1f;
386 dev->mii.reg_num_mask = 0x1f; 386 dev->mii.reg_num_mask = 0x1f;
387 387
388 /* reset */ 388 /* reset */
389 dm_write_reg(dev, DM_NET_CTRL, 1); 389 dm_write_reg(dev, DM_NET_CTRL, 1);
390 udelay(20); 390 udelay(20);
391 391
392 /* read MAC */ 392 /* read MAC */
393 if (dm_read(dev, DM_PHY_ADDR, ETH_ALEN, dev->net->dev_addr) < 0) { 393 if (dm_read(dev, DM_PHY_ADDR, ETH_ALEN, dev->net->dev_addr) < 0) {
394 printk(KERN_ERR "Error reading MAC address\n"); 394 printk(KERN_ERR "Error reading MAC address\n");
395 ret = -ENODEV; 395 ret = -ENODEV;
396 goto out; 396 goto out;
397 } 397 }
398 398
399 /* power up phy */ 399 /* power up phy */
400 dm_write_reg(dev, DM_GPR_CTRL, 1); 400 dm_write_reg(dev, DM_GPR_CTRL, 1);
401 dm_write_reg(dev, DM_GPR_DATA, 0); 401 dm_write_reg(dev, DM_GPR_DATA, 0);
402 402
403 /* receive broadcast packets */ 403 /* receive broadcast packets */
404 dm9601_set_multicast(dev->net); 404 dm9601_set_multicast(dev->net);
405 405
406 dm9601_mdio_write(dev->net, dev->mii.phy_id, MII_BMCR, BMCR_RESET); 406 dm9601_mdio_write(dev->net, dev->mii.phy_id, MII_BMCR, BMCR_RESET);
407 dm9601_mdio_write(dev->net, dev->mii.phy_id, MII_ADVERTISE, 407 dm9601_mdio_write(dev->net, dev->mii.phy_id, MII_ADVERTISE,
408 ADVERTISE_ALL | ADVERTISE_CSMA | ADVERTISE_PAUSE_CAP); 408 ADVERTISE_ALL | ADVERTISE_CSMA | ADVERTISE_PAUSE_CAP);
409 mii_nway_restart(&dev->mii); 409 mii_nway_restart(&dev->mii);
410 410
411 out: 411 out:
412 return ret; 412 return ret;
413 } 413 }
414 414
415 static int dm9601_rx_fixup(struct usbnet *dev, struct sk_buff *skb) 415 static int dm9601_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
416 { 416 {
417 u8 status; 417 u8 status;
418 int len; 418 int len;
419 419
420 /* format: 420 /* format:
421 b0: rx status 421 b0: rx status
422 b1: packet length (incl crc) low 422 b1: packet length (incl crc) low
423 b2: packet length (incl crc) high 423 b2: packet length (incl crc) high
424 b3..n-4: packet data 424 b3..n-4: packet data
425 bn-3..bn: ethernet crc 425 bn-3..bn: ethernet crc
426 */ 426 */
427 427
428 if (unlikely(skb->len < DM_RX_OVERHEAD)) { 428 if (unlikely(skb->len < DM_RX_OVERHEAD)) {
429 dev_err(&dev->udev->dev, "unexpected tiny rx frame\n"); 429 dev_err(&dev->udev->dev, "unexpected tiny rx frame\n");
430 return 0; 430 return 0;
431 } 431 }
432 432
433 status = skb->data[0]; 433 status = skb->data[0];
434 len = (skb->data[1] | (skb->data[2] << 8)) - 4; 434 len = (skb->data[1] | (skb->data[2] << 8)) - 4;
435 435
436 if (unlikely(status & 0xbf)) { 436 if (unlikely(status & 0xbf)) {
437 if (status & 0x01) dev->stats.rx_fifo_errors++; 437 if (status & 0x01) dev->stats.rx_fifo_errors++;
438 if (status & 0x02) dev->stats.rx_crc_errors++; 438 if (status & 0x02) dev->stats.rx_crc_errors++;
439 if (status & 0x04) dev->stats.rx_frame_errors++; 439 if (status & 0x04) dev->stats.rx_frame_errors++;
440 if (status & 0x20) dev->stats.rx_missed_errors++; 440 if (status & 0x20) dev->stats.rx_missed_errors++;
441 if (status & 0x90) dev->stats.rx_length_errors++; 441 if (status & 0x90) dev->stats.rx_length_errors++;
442 return 0; 442 return 0;
443 } 443 }
444 444
445 skb_pull(skb, 3); 445 skb_pull(skb, 3);
446 skb_trim(skb, len); 446 skb_trim(skb, len);
447 447
448 return 1; 448 return 1;
449 } 449 }
450 450
451 static struct sk_buff *dm9601_tx_fixup(struct usbnet *dev, struct sk_buff *skb, 451 static struct sk_buff *dm9601_tx_fixup(struct usbnet *dev, struct sk_buff *skb,
452 gfp_t flags) 452 gfp_t flags)
453 { 453 {
454 int len; 454 int len;
455 455
456 /* format: 456 /* format:
457 b0: packet length low 457 b0: packet length low
458 b1: packet length high 458 b1: packet length high
 459 b2..n: packet data 459 b2..n: packet data
460 */ 460 */
461 461
462 len = skb->len; 462 len = skb->len;
463 463
464 if (skb_headroom(skb) < DM_TX_OVERHEAD) { 464 if (skb_headroom(skb) < DM_TX_OVERHEAD) {
465 struct sk_buff *skb2; 465 struct sk_buff *skb2;
466 466
467 skb2 = skb_copy_expand(skb, DM_TX_OVERHEAD, 0, flags); 467 skb2 = skb_copy_expand(skb, DM_TX_OVERHEAD, 0, flags);
468 dev_kfree_skb_any(skb); 468 dev_kfree_skb_any(skb);
469 skb = skb2; 469 skb = skb2;
470 if (!skb) 470 if (!skb)
471 return NULL; 471 return NULL;
472 } 472 }
473 473
474 __skb_push(skb, DM_TX_OVERHEAD); 474 __skb_push(skb, DM_TX_OVERHEAD);
475 475
476 /* usbnet adds padding if length is a multiple of packet size 476 /* usbnet adds padding if length is a multiple of packet size
477 if so, adjust length value in header */ 477 if so, adjust length value in header */
478 if ((skb->len % dev->maxpacket) == 0) 478 if ((skb->len % dev->maxpacket) == 0)
479 len++; 479 len++;
480 480
481 skb->data[0] = len; 481 skb->data[0] = len;
482 skb->data[1] = len >> 8; 482 skb->data[1] = len >> 8;
483 483
484 return skb; 484 return skb;
485 } 485 }
486 486
487 static void dm9601_status(struct usbnet *dev, struct urb *urb) 487 static void dm9601_status(struct usbnet *dev, struct urb *urb)
488 { 488 {
489 int link; 489 int link;
490 u8 *buf; 490 u8 *buf;
491 491
492 /* format: 492 /* format:
493 b0: net status 493 b0: net status
494 b1: tx status 1 494 b1: tx status 1
495 b2: tx status 2 495 b2: tx status 2
496 b3: rx status 496 b3: rx status
497 b4: rx overflow 497 b4: rx overflow
498 b5: rx count 498 b5: rx count
499 b6: tx count 499 b6: tx count
500 b7: gpr 500 b7: gpr
501 */ 501 */
502 502
503 if (urb->actual_length < 8) 503 if (urb->actual_length < 8)
504 return; 504 return;
505 505
506 buf = urb->transfer_buffer; 506 buf = urb->transfer_buffer;
507 507
508 link = !!(buf[0] & 0x40); 508 link = !!(buf[0] & 0x40);
509 if (netif_carrier_ok(dev->net) != link) { 509 if (netif_carrier_ok(dev->net) != link) {
510 if (link) { 510 if (link) {
511 netif_carrier_on(dev->net); 511 netif_carrier_on(dev->net);
512 usbnet_defer_kevent (dev, EVENT_LINK_RESET); 512 usbnet_defer_kevent (dev, EVENT_LINK_RESET);
513 } 513 }
514 else 514 else
515 netif_carrier_off(dev->net); 515 netif_carrier_off(dev->net);
516 devdbg(dev, "Link Status is: %d", link); 516 devdbg(dev, "Link Status is: %d", link);
517 } 517 }
518 } 518 }
519 519
520 static int dm9601_link_reset(struct usbnet *dev) 520 static int dm9601_link_reset(struct usbnet *dev)
521 { 521 {
522 struct ethtool_cmd ecmd; 522 struct ethtool_cmd ecmd;
523 523
524 mii_check_media(&dev->mii, 1, 1); 524 mii_check_media(&dev->mii, 1, 1);
525 mii_ethtool_gset(&dev->mii, &ecmd); 525 mii_ethtool_gset(&dev->mii, &ecmd);
526 526
527 devdbg(dev, "link_reset() speed: %d duplex: %d", 527 devdbg(dev, "link_reset() speed: %d duplex: %d",
528 ecmd.speed, ecmd.duplex); 528 ecmd.speed, ecmd.duplex);
529 529
530 return 0; 530 return 0;
531 } 531 }
532 532
533 static const struct driver_info dm9601_info = { 533 static const struct driver_info dm9601_info = {
534 .description = "Davicom DM9601 USB Ethernet", 534 .description = "Davicom DM9601 USB Ethernet",
535 .flags = FLAG_ETHER, 535 .flags = FLAG_ETHER,
536 .bind = dm9601_bind, 536 .bind = dm9601_bind,
537 .rx_fixup = dm9601_rx_fixup, 537 .rx_fixup = dm9601_rx_fixup,
538 .tx_fixup = dm9601_tx_fixup, 538 .tx_fixup = dm9601_tx_fixup,
539 .status = dm9601_status, 539 .status = dm9601_status,
540 .link_reset = dm9601_link_reset, 540 .link_reset = dm9601_link_reset,
541 .reset = dm9601_link_reset, 541 .reset = dm9601_link_reset,
542 }; 542 };
543 543
544 static const struct usb_device_id products[] = { 544 static const struct usb_device_id products[] = {
545 { 545 {
546 USB_DEVICE(0x07aa, 0x9601), /* Corega FEther USB-TXC */ 546 USB_DEVICE(0x07aa, 0x9601), /* Corega FEther USB-TXC */
547 .driver_info = (unsigned long)&dm9601_info, 547 .driver_info = (unsigned long)&dm9601_info,
548 }, 548 },
549 { 549 {
550 USB_DEVICE(0x0a46, 0x9601), /* Davicom USB-100 */ 550 USB_DEVICE(0x0a46, 0x9601), /* Davicom USB-100 */
551 .driver_info = (unsigned long)&dm9601_info, 551 .driver_info = (unsigned long)&dm9601_info,
552 }, 552 },
553 { 553 {
554 USB_DEVICE(0x0a46, 0x6688), /* ZT6688 USB NIC */ 554 USB_DEVICE(0x0a46, 0x6688), /* ZT6688 USB NIC */
555 .driver_info = (unsigned long)&dm9601_info, 555 .driver_info = (unsigned long)&dm9601_info,
556 }, 556 },
557 { 557 {
558 USB_DEVICE(0x0a46, 0x0268), /* ShanTou ST268 USB NIC */ 558 USB_DEVICE(0x0a46, 0x0268), /* ShanTou ST268 USB NIC */
559 .driver_info = (unsigned long)&dm9601_info, 559 .driver_info = (unsigned long)&dm9601_info,
560 }, 560 },
561 { 561 {
562 USB_DEVICE(0x0a46, 0x8515), /* ADMtek ADM8515 USB NIC */ 562 USB_DEVICE(0x0a46, 0x8515), /* ADMtek ADM8515 USB NIC */
563 .driver_info = (unsigned long)&dm9601_info, 563 .driver_info = (unsigned long)&dm9601_info,
564 }, 564 },
565 {}, // END 565 {}, // END
566 }; 566 };
567 567
568 MODULE_DEVICE_TABLE(usb, products); 568 MODULE_DEVICE_TABLE(usb, products);
569 569
570 static struct usb_driver dm9601_driver = { 570 static struct usb_driver dm9601_driver = {
571 .name = "dm9601", 571 .name = "dm9601",
572 .id_table = products, 572 .id_table = products,
573 .probe = usbnet_probe, 573 .probe = usbnet_probe,
574 .disconnect = usbnet_disconnect, 574 .disconnect = usbnet_disconnect,
575 .suspend = usbnet_suspend, 575 .suspend = usbnet_suspend,
576 .resume = usbnet_resume, 576 .resume = usbnet_resume,
577 }; 577 };
578 578
579 static int __init dm9601_init(void) 579 static int __init dm9601_init(void)
580 { 580 {
581 return usb_register(&dm9601_driver); 581 return usb_register(&dm9601_driver);
582 } 582 }
583 583
584 static void __exit dm9601_exit(void) 584 static void __exit dm9601_exit(void)
585 { 585 {
586 usb_deregister(&dm9601_driver); 586 usb_deregister(&dm9601_driver);
587 } 587 }
588 588
589 module_init(dm9601_init); 589 module_init(dm9601_init);
590 module_exit(dm9601_exit); 590 module_exit(dm9601_exit);
591 591
592 MODULE_AUTHOR("Peter Korsgaard <jacmet@sunsite.dk>"); 592 MODULE_AUTHOR("Peter Korsgaard <jacmet@sunsite.dk>");
593 MODULE_DESCRIPTION("Davicom DM9601 USB 1.1 ethernet devices"); 593 MODULE_DESCRIPTION("Davicom DM9601 USB 1.1 ethernet devices");
594 MODULE_LICENSE("GPL"); 594 MODULE_LICENSE("GPL");
595 595
drivers/net/usb/rndis_host.c
1 /* 1 /*
2 * Host Side support for RNDIS Networking Links 2 * Host Side support for RNDIS Networking Links
3 * Copyright (C) 2005 by David Brownell 3 * Copyright (C) 2005 by David Brownell
4 * 4 *
5 * This program is free software; you can redistribute it and/or modify 5 * This program is free software; you can redistribute it and/or modify
6 * it under the terms of the GNU General Public License as published by 6 * it under the terms of the GNU General Public License as published by
7 * the Free Software Foundation; either version 2 of the License, or 7 * the Free Software Foundation; either version 2 of the License, or
8 * (at your option) any later version. 8 * (at your option) any later version.
9 * 9 *
10 * This program is distributed in the hope that it will be useful, 10 * This program is distributed in the hope that it will be useful,
11 * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 * but WITHOUT ANY WARRANTY; without even the implied warranty of
12 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 * GNU General Public License for more details. 13 * GNU General Public License for more details.
14 * 14 *
15 * You should have received a copy of the GNU General Public License 15 * You should have received a copy of the GNU General Public License
16 * along with this program; if not, write to the Free Software 16 * along with this program; if not, write to the Free Software
17 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 17 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
18 */ 18 */
19 #include <linux/module.h> 19 #include <linux/module.h>
20 #include <linux/init.h> 20 #include <linux/init.h>
21 #include <linux/netdevice.h> 21 #include <linux/netdevice.h>
22 #include <linux/etherdevice.h> 22 #include <linux/etherdevice.h>
23 #include <linux/ethtool.h> 23 #include <linux/ethtool.h>
24 #include <linux/workqueue.h> 24 #include <linux/workqueue.h>
25 #include <linux/mii.h> 25 #include <linux/mii.h>
26 #include <linux/usb.h> 26 #include <linux/usb.h>
27 #include <linux/usb/cdc.h> 27 #include <linux/usb/cdc.h>
28 #include <linux/usb/usbnet.h> 28 #include <linux/usb/usbnet.h>
29 #include <linux/usb/rndis_host.h> 29 #include <linux/usb/rndis_host.h>
30 30
31 31
32 /* 32 /*
33 * RNDIS is NDIS remoted over USB. It's a MSFT variant of CDC ACM ... of 33 * RNDIS is NDIS remoted over USB. It's a MSFT variant of CDC ACM ... of
34 * course ACM was intended for modems, not Ethernet links! USB's standard 34 * course ACM was intended for modems, not Ethernet links! USB's standard
35 * for Ethernet links is "CDC Ethernet", which is significantly simpler. 35 * for Ethernet links is "CDC Ethernet", which is significantly simpler.
36 * 36 *
37 * NOTE that Microsoft's "RNDIS 1.0" specification is incomplete. Issues 37 * NOTE that Microsoft's "RNDIS 1.0" specification is incomplete. Issues
38 * include: 38 * include:
39 * - Power management in particular relies on information that's scattered 39 * - Power management in particular relies on information that's scattered
40 * through other documentation, and which is incomplete or incorrect even 40 * through other documentation, and which is incomplete or incorrect even
41 * there. 41 * there.
42 * - There are various undocumented protocol requirements, such as the 42 * - There are various undocumented protocol requirements, such as the
43 * need to send unused garbage in control-OUT messages. 43 * need to send unused garbage in control-OUT messages.
44 * - In some cases, MS-Windows will emit undocumented requests; this 44 * - In some cases, MS-Windows will emit undocumented requests; this
45 * matters more to peripheral implementations than host ones. 45 * matters more to peripheral implementations than host ones.
46 * 46 *
47 * Moreover there's a no-open-specs variant of RNDIS called "ActiveSync". 47 * Moreover there's a no-open-specs variant of RNDIS called "ActiveSync".
48 * 48 *
49 * For these reasons and others, ** USE OF RNDIS IS STRONGLY DISCOURAGED ** in 49 * For these reasons and others, ** USE OF RNDIS IS STRONGLY DISCOURAGED ** in
50 * favor of such non-proprietary alternatives as CDC Ethernet or the newer (and 50 * favor of such non-proprietary alternatives as CDC Ethernet or the newer (and
51 * currently rare) "Ethernet Emulation Model" (EEM). 51 * currently rare) "Ethernet Emulation Model" (EEM).
52 */ 52 */
53 53
54 /* 54 /*
55 * RNDIS notifications from device: command completion; "reverse" 55 * RNDIS notifications from device: command completion; "reverse"
56 * keepalives; etc 56 * keepalives; etc
57 */ 57 */
58 void rndis_status(struct usbnet *dev, struct urb *urb) 58 void rndis_status(struct usbnet *dev, struct urb *urb)
59 { 59 {
60 devdbg(dev, "rndis status urb, len %d stat %d", 60 devdbg(dev, "rndis status urb, len %d stat %d",
61 urb->actual_length, urb->status); 61 urb->actual_length, urb->status);
62 // FIXME for keepalives, respond immediately (asynchronously) 62 // FIXME for keepalives, respond immediately (asynchronously)
63 // if not an RNDIS status, do like cdc_status(dev,urb) does 63 // if not an RNDIS status, do like cdc_status(dev,urb) does
64 } 64 }
65 EXPORT_SYMBOL_GPL(rndis_status); 65 EXPORT_SYMBOL_GPL(rndis_status);
66 66
67 /* 67 /*
68 * RPC done RNDIS-style. Caller guarantees: 68 * RPC done RNDIS-style. Caller guarantees:
69 * - message is properly byteswapped 69 * - message is properly byteswapped
70 * - there's no other request pending 70 * - there's no other request pending
71 * - buf can hold up to 1KB response (required by RNDIS spec) 71 * - buf can hold up to 1KB response (required by RNDIS spec)
72 * On return, the first few entries are already byteswapped. 72 * On return, the first few entries are already byteswapped.
73 * 73 *
74 * Call context is likely probe(), before interface name is known, 74 * Call context is likely probe(), before interface name is known,
75 * which is why we won't try to use it in the diagnostics. 75 * which is why we won't try to use it in the diagnostics.
76 */ 76 */
77 int rndis_command(struct usbnet *dev, struct rndis_msg_hdr *buf) 77 int rndis_command(struct usbnet *dev, struct rndis_msg_hdr *buf)
78 { 78 {
79 struct cdc_state *info = (void *) &dev->data; 79 struct cdc_state *info = (void *) &dev->data;
80 int master_ifnum; 80 int master_ifnum;
81 int retval; 81 int retval;
82 unsigned count; 82 unsigned count;
83 __le32 rsp; 83 __le32 rsp;
84 u32 xid = 0, msg_len, request_id; 84 u32 xid = 0, msg_len, request_id;
85 85
86 /* REVISIT when this gets called from contexts other than probe() or 86 /* REVISIT when this gets called from contexts other than probe() or
87 * disconnect(): either serialize, or dispatch responses on xid 87 * disconnect(): either serialize, or dispatch responses on xid
88 */ 88 */
89 89
90 /* Issue the request; xid is unique, don't bother byteswapping it */ 90 /* Issue the request; xid is unique, don't bother byteswapping it */
91 if (likely(buf->msg_type != RNDIS_MSG_HALT 91 if (likely(buf->msg_type != RNDIS_MSG_HALT
92 && buf->msg_type != RNDIS_MSG_RESET)) { 92 && buf->msg_type != RNDIS_MSG_RESET)) {
93 xid = dev->xid++; 93 xid = dev->xid++;
94 if (!xid) 94 if (!xid)
95 xid = dev->xid++; 95 xid = dev->xid++;
96 buf->request_id = (__force __le32) xid; 96 buf->request_id = (__force __le32) xid;
97 } 97 }
98 master_ifnum = info->control->cur_altsetting->desc.bInterfaceNumber; 98 master_ifnum = info->control->cur_altsetting->desc.bInterfaceNumber;
99 retval = usb_control_msg(dev->udev, 99 retval = usb_control_msg(dev->udev,
100 usb_sndctrlpipe(dev->udev, 0), 100 usb_sndctrlpipe(dev->udev, 0),
101 USB_CDC_SEND_ENCAPSULATED_COMMAND, 101 USB_CDC_SEND_ENCAPSULATED_COMMAND,
102 USB_TYPE_CLASS | USB_RECIP_INTERFACE, 102 USB_TYPE_CLASS | USB_RECIP_INTERFACE,
103 0, master_ifnum, 103 0, master_ifnum,
104 buf, le32_to_cpu(buf->msg_len), 104 buf, le32_to_cpu(buf->msg_len),
105 RNDIS_CONTROL_TIMEOUT_MS); 105 RNDIS_CONTROL_TIMEOUT_MS);
106 if (unlikely(retval < 0 || xid == 0)) 106 if (unlikely(retval < 0 || xid == 0))
107 return retval; 107 return retval;
108 108
109 // FIXME Seems like some devices discard responses when 109 // FIXME Seems like some devices discard responses when
110 // we time out and cancel our "get response" requests... 110 // we time out and cancel our "get response" requests...
111 // so, this is fragile. Probably need to poll for status. 111 // so, this is fragile. Probably need to poll for status.
112 112
113 /* ignore status endpoint, just poll the control channel; 113 /* ignore status endpoint, just poll the control channel;
114 * the request probably completed immediately 114 * the request probably completed immediately
115 */ 115 */
116 rsp = buf->msg_type | RNDIS_MSG_COMPLETION; 116 rsp = buf->msg_type | RNDIS_MSG_COMPLETION;
117 for (count = 0; count < 10; count++) { 117 for (count = 0; count < 10; count++) {
118 memset(buf, 0, CONTROL_BUFFER_SIZE); 118 memset(buf, 0, CONTROL_BUFFER_SIZE);
119 retval = usb_control_msg(dev->udev, 119 retval = usb_control_msg(dev->udev,
120 usb_rcvctrlpipe(dev->udev, 0), 120 usb_rcvctrlpipe(dev->udev, 0),
121 USB_CDC_GET_ENCAPSULATED_RESPONSE, 121 USB_CDC_GET_ENCAPSULATED_RESPONSE,
122 USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE, 122 USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE,
123 0, master_ifnum, 123 0, master_ifnum,
124 buf, CONTROL_BUFFER_SIZE, 124 buf, CONTROL_BUFFER_SIZE,
125 RNDIS_CONTROL_TIMEOUT_MS); 125 RNDIS_CONTROL_TIMEOUT_MS);
126 if (likely(retval >= 8)) { 126 if (likely(retval >= 8)) {
127 msg_len = le32_to_cpu(buf->msg_len); 127 msg_len = le32_to_cpu(buf->msg_len);
128 request_id = (__force u32) buf->request_id; 128 request_id = (__force u32) buf->request_id;
129 if (likely(buf->msg_type == rsp)) { 129 if (likely(buf->msg_type == rsp)) {
130 if (likely(request_id == xid)) { 130 if (likely(request_id == xid)) {
131 if (unlikely(rsp == RNDIS_MSG_RESET_C)) 131 if (unlikely(rsp == RNDIS_MSG_RESET_C))
132 return 0; 132 return 0;
133 if (likely(RNDIS_STATUS_SUCCESS 133 if (likely(RNDIS_STATUS_SUCCESS
134 == buf->status)) 134 == buf->status))
135 return 0; 135 return 0;
136 dev_dbg(&info->control->dev, 136 dev_dbg(&info->control->dev,
137 "rndis reply status %08x\n", 137 "rndis reply status %08x\n",
138 le32_to_cpu(buf->status)); 138 le32_to_cpu(buf->status));
139 return -EL3RST; 139 return -EL3RST;
140 } 140 }
141 dev_dbg(&info->control->dev, 141 dev_dbg(&info->control->dev,
142 "rndis reply id %d expected %d\n", 142 "rndis reply id %d expected %d\n",
143 request_id, xid); 143 request_id, xid);
144 /* then likely retry */ 144 /* then likely retry */
145 } else switch (buf->msg_type) { 145 } else switch (buf->msg_type) {
146 case RNDIS_MSG_INDICATE: { /* fault/event */ 146 case RNDIS_MSG_INDICATE: { /* fault/event */
147 struct rndis_indicate *msg = (void *)buf; 147 struct rndis_indicate *msg = (void *)buf;
148 int state = 0; 148 int state = 0;
149 149
150 switch (msg->status) { 150 switch (msg->status) {
151 case RNDIS_STATUS_MEDIA_CONNECT: 151 case RNDIS_STATUS_MEDIA_CONNECT:
152 state = 1; 152 state = 1;
153 case RNDIS_STATUS_MEDIA_DISCONNECT: 153 case RNDIS_STATUS_MEDIA_DISCONNECT:
154 dev_info(&info->control->dev, 154 dev_info(&info->control->dev,
155 "rndis media %sconnect\n", 155 "rndis media %sconnect\n",
156 !state?"dis":""); 156 !state?"dis":"");
157 if (dev->driver_info->link_change) 157 if (dev->driver_info->link_change)
158 dev->driver_info->link_change( 158 dev->driver_info->link_change(
159 dev, state); 159 dev, state);
160 break; 160 break;
161 default: 161 default:
162 dev_info(&info->control->dev, 162 dev_info(&info->control->dev,
163 "rndis indication: 0x%08x\n", 163 "rndis indication: 0x%08x\n",
164 le32_to_cpu(msg->status)); 164 le32_to_cpu(msg->status));
165 } 165 }
166 } 166 }
167 break; 167 break;
168 case RNDIS_MSG_KEEPALIVE: { /* ping */ 168 case RNDIS_MSG_KEEPALIVE: { /* ping */
169 struct rndis_keepalive_c *msg = (void *)buf; 169 struct rndis_keepalive_c *msg = (void *)buf;
170 170
171 msg->msg_type = RNDIS_MSG_KEEPALIVE_C; 171 msg->msg_type = RNDIS_MSG_KEEPALIVE_C;
172 msg->msg_len = ccpu2(sizeof *msg); 172 msg->msg_len = ccpu2(sizeof *msg);
173 msg->status = RNDIS_STATUS_SUCCESS; 173 msg->status = RNDIS_STATUS_SUCCESS;
174 retval = usb_control_msg(dev->udev, 174 retval = usb_control_msg(dev->udev,
175 usb_sndctrlpipe(dev->udev, 0), 175 usb_sndctrlpipe(dev->udev, 0),
176 USB_CDC_SEND_ENCAPSULATED_COMMAND, 176 USB_CDC_SEND_ENCAPSULATED_COMMAND,
177 USB_TYPE_CLASS | USB_RECIP_INTERFACE, 177 USB_TYPE_CLASS | USB_RECIP_INTERFACE,
178 0, master_ifnum, 178 0, master_ifnum,
179 msg, sizeof *msg, 179 msg, sizeof *msg,
180 RNDIS_CONTROL_TIMEOUT_MS); 180 RNDIS_CONTROL_TIMEOUT_MS);
181 if (unlikely(retval < 0)) 181 if (unlikely(retval < 0))
182 dev_dbg(&info->control->dev, 182 dev_dbg(&info->control->dev,
183 "rndis keepalive err %d\n", 183 "rndis keepalive err %d\n",
184 retval); 184 retval);
185 } 185 }
186 break; 186 break;
187 default: 187 default:
188 dev_dbg(&info->control->dev, 188 dev_dbg(&info->control->dev,
189 "unexpected rndis msg %08x len %d\n", 189 "unexpected rndis msg %08x len %d\n",
190 le32_to_cpu(buf->msg_type), msg_len); 190 le32_to_cpu(buf->msg_type), msg_len);
191 } 191 }
192 } else { 192 } else {
193 /* device probably issued a protocol stall; ignore */ 193 /* device probably issued a protocol stall; ignore */
194 dev_dbg(&info->control->dev, 194 dev_dbg(&info->control->dev,
195 "rndis response error, code %d\n", retval); 195 "rndis response error, code %d\n", retval);
196 } 196 }
197 msleep(2); 197 msleep(2);
198 } 198 }
199 dev_dbg(&info->control->dev, "rndis response timeout\n"); 199 dev_dbg(&info->control->dev, "rndis response timeout\n");
200 return -ETIMEDOUT; 200 return -ETIMEDOUT;
201 } 201 }
202 EXPORT_SYMBOL_GPL(rndis_command); 202 EXPORT_SYMBOL_GPL(rndis_command);
203 203
204 /* 204 /*
205 * rndis_query: 205 * rndis_query:
206 * 206 *
207 * Performs a query for @oid along with 0 or more bytes of payload as 207 * Performs a query for @oid along with 0 or more bytes of payload as
208 * specified by @in_len. If @reply_len is not set to -1 then the reply 208 * specified by @in_len. If @reply_len is not set to -1 then the reply
209 * length is checked against this value, resulting in an error if it 209 * length is checked against this value, resulting in an error if it
210 * doesn't match. 210 * doesn't match.
211 * 211 *
212 * NOTE: Adding a payload exactly or greater than the size of the expected 212 * NOTE: Adding a payload exactly or greater than the size of the expected
213 * response payload is an evident requirement MSFT added for ActiveSync. 213 * response payload is an evident requirement MSFT added for ActiveSync.
214 * 214 *
215 * The only exception is for OIDs that return a variably sized response, 215 * The only exception is for OIDs that return a variably sized response,
216 * in which case no payload should be added. This undocumented (and 216 * in which case no payload should be added. This undocumented (and
217 * nonsensical!) issue was found by sniffing protocol requests from the 217 * nonsensical!) issue was found by sniffing protocol requests from the
218 * ActiveSync 4.1 Windows driver. 218 * ActiveSync 4.1 Windows driver.
219 */ 219 */
220 static int rndis_query(struct usbnet *dev, struct usb_interface *intf, 220 static int rndis_query(struct usbnet *dev, struct usb_interface *intf,
221 void *buf, u32 oid, u32 in_len, 221 void *buf, __le32 oid, u32 in_len,
222 void **reply, int *reply_len) 222 void **reply, int *reply_len)
223 { 223 {
224 int retval; 224 int retval;
225 union { 225 union {
226 void *buf; 226 void *buf;
227 struct rndis_msg_hdr *header; 227 struct rndis_msg_hdr *header;
228 struct rndis_query *get; 228 struct rndis_query *get;
229 struct rndis_query_c *get_c; 229 struct rndis_query_c *get_c;
230 } u; 230 } u;
231 u32 off, len; 231 u32 off, len;
232 232
233 u.buf = buf; 233 u.buf = buf;
234 234
235 memset(u.get, 0, sizeof *u.get + in_len); 235 memset(u.get, 0, sizeof *u.get + in_len);
236 u.get->msg_type = RNDIS_MSG_QUERY; 236 u.get->msg_type = RNDIS_MSG_QUERY;
237 u.get->msg_len = cpu_to_le32(sizeof *u.get + in_len); 237 u.get->msg_len = cpu_to_le32(sizeof *u.get + in_len);
238 u.get->oid = oid; 238 u.get->oid = oid;
239 u.get->len = cpu_to_le32(in_len); 239 u.get->len = cpu_to_le32(in_len);
240 u.get->offset = ccpu2(20); 240 u.get->offset = ccpu2(20);
241 241
242 retval = rndis_command(dev, u.header); 242 retval = rndis_command(dev, u.header);
243 if (unlikely(retval < 0)) { 243 if (unlikely(retval < 0)) {
244 dev_err(&intf->dev, "RNDIS_MSG_QUERY(0x%08x) failed, %d\n", 244 dev_err(&intf->dev, "RNDIS_MSG_QUERY(0x%08x) failed, %d\n",
245 oid, retval); 245 oid, retval);
246 return retval; 246 return retval;
247 } 247 }
248 248
249 off = le32_to_cpu(u.get_c->offset); 249 off = le32_to_cpu(u.get_c->offset);
250 len = le32_to_cpu(u.get_c->len); 250 len = le32_to_cpu(u.get_c->len);
251 if (unlikely((8 + off + len) > CONTROL_BUFFER_SIZE)) 251 if (unlikely((8 + off + len) > CONTROL_BUFFER_SIZE))
252 goto response_error; 252 goto response_error;
253 253
254 if (*reply_len != -1 && len != *reply_len) 254 if (*reply_len != -1 && len != *reply_len)
255 goto response_error; 255 goto response_error;
256 256
257 *reply = (unsigned char *) &u.get_c->request_id + off; 257 *reply = (unsigned char *) &u.get_c->request_id + off;
258 *reply_len = len; 258 *reply_len = len;
259 259
260 return retval; 260 return retval;
261 261
262 response_error: 262 response_error:
263 dev_err(&intf->dev, "RNDIS_MSG_QUERY(0x%08x) " 263 dev_err(&intf->dev, "RNDIS_MSG_QUERY(0x%08x) "
264 "invalid response - off %d len %d\n", 264 "invalid response - off %d len %d\n",
265 oid, off, len); 265 oid, off, len);
266 return -EDOM; 266 return -EDOM;
267 } 267 }
268 268
269 int 269 int
270 generic_rndis_bind(struct usbnet *dev, struct usb_interface *intf, int flags) 270 generic_rndis_bind(struct usbnet *dev, struct usb_interface *intf, int flags)
271 { 271 {
272 int retval; 272 int retval;
273 struct net_device *net = dev->net; 273 struct net_device *net = dev->net;
274 struct cdc_state *info = (void *) &dev->data; 274 struct cdc_state *info = (void *) &dev->data;
275 union { 275 union {
276 void *buf; 276 void *buf;
277 struct rndis_msg_hdr *header; 277 struct rndis_msg_hdr *header;
278 struct rndis_init *init; 278 struct rndis_init *init;
279 struct rndis_init_c *init_c; 279 struct rndis_init_c *init_c;
280 struct rndis_query *get; 280 struct rndis_query *get;
281 struct rndis_query_c *get_c; 281 struct rndis_query_c *get_c;
282 struct rndis_set *set; 282 struct rndis_set *set;
283 struct rndis_set_c *set_c; 283 struct rndis_set_c *set_c;
284 struct rndis_halt *halt; 284 struct rndis_halt *halt;
285 } u; 285 } u;
286 u32 tmp, *phym; 286 u32 tmp, *phym;
287 int reply_len; 287 int reply_len;
288 unsigned char *bp; 288 unsigned char *bp;
289 289
290 /* we can't rely on i/o from stack working, or stack allocation */ 290 /* we can't rely on i/o from stack working, or stack allocation */
291 u.buf = kmalloc(CONTROL_BUFFER_SIZE, GFP_KERNEL); 291 u.buf = kmalloc(CONTROL_BUFFER_SIZE, GFP_KERNEL);
292 if (!u.buf) 292 if (!u.buf)
293 return -ENOMEM; 293 return -ENOMEM;
294 retval = usbnet_generic_cdc_bind(dev, intf); 294 retval = usbnet_generic_cdc_bind(dev, intf);
295 if (retval < 0) 295 if (retval < 0)
296 goto fail; 296 goto fail;
297 297
	u.init->msg_type = RNDIS_MSG_INIT;
	u.init->msg_len = ccpu2(sizeof *u.init);
	u.init->major_version = ccpu2(1);
	u.init->minor_version = ccpu2(0);

	/* max transfer (in spec) is 0x4000 at full speed, but for
	 * TX we'll stick to one Ethernet packet plus RNDIS framing.
	 * For RX we handle drivers that zero-pad to end-of-packet.
	 * Don't let userspace change these settings.
	 *
	 * NOTE: there still seems to be weirdness here, as if we need
	 * to do some more things to make sure WinCE targets accept this.
	 * They default to jumbograms of 8KB or 16KB, which is absurd
	 * for such low data rates and which is also more than Linux
	 * can usually expect to allocate for SKB data...
	 */
	net->hard_header_len += sizeof (struct rndis_data_hdr);
	dev->hard_mtu = net->mtu + net->hard_header_len;

	dev->maxpacket = usb_maxpacket(dev->udev, dev->out, 1);
	if (dev->maxpacket == 0) {
		if (netif_msg_probe(dev))
			dev_dbg(&intf->dev, "dev->maxpacket can't be 0\n");
		retval = -EINVAL;
		goto fail_and_release;
	}

	dev->rx_urb_size = dev->hard_mtu + (dev->maxpacket + 1);
	dev->rx_urb_size &= ~(dev->maxpacket - 1);
	u.init->max_transfer_size = cpu_to_le32(dev->rx_urb_size);

	net->change_mtu = NULL;
	retval = rndis_command(dev, u.header);
	if (unlikely(retval < 0)) {
		/* it might not even be an RNDIS device!! */
		dev_err(&intf->dev, "RNDIS init failed, %d\n", retval);
		goto fail_and_release;
	}
	tmp = le32_to_cpu(u.init_c->max_transfer_size);
	if (tmp < dev->hard_mtu) {
		if (tmp <= net->hard_header_len) {
			dev_err(&intf->dev,
				"dev can't take %u byte packets (max %u)\n",
				dev->hard_mtu, tmp);
			retval = -EINVAL;
			goto halt_fail_and_release;
		}
		dev->hard_mtu = tmp;
		net->mtu = dev->hard_mtu - net->hard_header_len;
		dev_warn(&intf->dev,
			 "dev can't take %u byte packets (max %u), "
			 "adjusting MTU to %u\n",
			 dev->hard_mtu, tmp, net->mtu);
	}

	/* REVISIT: peripheral "alignment" request is ignored ... */
	dev_dbg(&intf->dev,
		"hard mtu %u (%u from dev), rx buflen %Zu, align %d\n",
		dev->hard_mtu, tmp, dev->rx_urb_size,
		1 << le32_to_cpu(u.init_c->packet_alignment));

	/* module has some device initialization code that needs to be done
	 * right after RNDIS_INIT */
	if (dev->driver_info->early_init &&
	    dev->driver_info->early_init(dev) != 0)
		goto halt_fail_and_release;

	/* Check physical medium */
	reply_len = sizeof *phym;
	retval = rndis_query(dev, intf, u.buf, OID_GEN_PHYSICAL_MEDIUM,
			     0, (void **) &phym, &reply_len);
	if (retval != 0)
		/* OID is optional so don't fail here. */
		*phym = RNDIS_PHYSICAL_MEDIUM_UNSPECIFIED;
	if ((flags & FLAG_RNDIS_PHYM_WIRELESS) &&
	    *phym != RNDIS_PHYSICAL_MEDIUM_WIRELESS_LAN) {
		if (netif_msg_probe(dev))
			dev_dbg(&intf->dev, "driver requires wireless "
				"physical medium, but device is not wireless.\n");
		retval = -ENODEV;
		goto halt_fail_and_release;
	}
	if ((flags & FLAG_RNDIS_PHYM_NOT_WIRELESS) &&
	    *phym == RNDIS_PHYSICAL_MEDIUM_WIRELESS_LAN) {
		if (netif_msg_probe(dev))
			dev_dbg(&intf->dev, "driver requires non-wireless "
				"physical medium, but device is wireless.\n");
		retval = -ENODEV;
		goto halt_fail_and_release;
	}

	/* Get designated host ethernet address */
	reply_len = ETH_ALEN;
	retval = rndis_query(dev, intf, u.buf, OID_802_3_PERMANENT_ADDRESS,
			     48, (void **) &bp, &reply_len);
	if (unlikely(retval < 0)) {
		dev_err(&intf->dev, "rndis get ethaddr, %d\n", retval);
		goto halt_fail_and_release;
	}
	memcpy(net->dev_addr, bp, ETH_ALEN);

	/* set a nonzero filter to enable data transfers */
	memset(u.set, 0, sizeof *u.set);
	u.set->msg_type = RNDIS_MSG_SET;
	u.set->msg_len = ccpu2(4 + sizeof *u.set);
	u.set->oid = OID_GEN_CURRENT_PACKET_FILTER;
	u.set->len = ccpu2(4);
	u.set->offset = ccpu2((sizeof *u.set) - 8);
	*(__le32 *)(u.buf + sizeof *u.set) = RNDIS_DEFAULT_FILTER;

	retval = rndis_command(dev, u.header);
	if (unlikely(retval < 0)) {
		dev_err(&intf->dev, "rndis set packet filter, %d\n", retval);
		goto halt_fail_and_release;
	}

	retval = 0;

	kfree(u.buf);
	return retval;

halt_fail_and_release:
	memset(u.halt, 0, sizeof *u.halt);
	u.halt->msg_type = RNDIS_MSG_HALT;
	u.halt->msg_len = ccpu2(sizeof *u.halt);
	(void) rndis_command(dev, (void *)u.halt);
fail_and_release:
	usb_set_intfdata(info->data, NULL);
	usb_driver_release_interface(driver_of(intf), info->data);
	info->data = NULL;
fail:
	kfree(u.buf);
	return retval;
}
EXPORT_SYMBOL_GPL(generic_rndis_bind);

static int rndis_bind(struct usbnet *dev, struct usb_interface *intf)
{
	return generic_rndis_bind(dev, intf, FLAG_RNDIS_PHYM_NOT_WIRELESS);
}

void rndis_unbind(struct usbnet *dev, struct usb_interface *intf)
{
	struct rndis_halt *halt;

	/* try to clear any rndis state/activity (no i/o from stack!) */
	halt = kzalloc(CONTROL_BUFFER_SIZE, GFP_KERNEL);
	if (halt) {
		halt->msg_type = RNDIS_MSG_HALT;
		halt->msg_len = ccpu2(sizeof *halt);
		(void) rndis_command(dev, (void *)halt);
		kfree(halt);
	}

	usbnet_cdc_unbind(dev, intf);
}
EXPORT_SYMBOL_GPL(rndis_unbind);

/*
 * DATA -- host must not write zlps
 */
int rndis_rx_fixup(struct usbnet *dev, struct sk_buff *skb)
{
	/* peripheral may have batched packets to us... */
	while (likely(skb->len)) {
		struct rndis_data_hdr	*hdr = (void *)skb->data;
		struct sk_buff		*skb2;
		u32			msg_len, data_offset, data_len;

		msg_len = le32_to_cpu(hdr->msg_len);
		data_offset = le32_to_cpu(hdr->data_offset);
		data_len = le32_to_cpu(hdr->data_len);

		/* don't choke if we see oob, per-packet data, etc */
		if (unlikely(hdr->msg_type != RNDIS_MSG_PACKET
				|| skb->len < msg_len
				|| (data_offset + data_len + 8) > msg_len)) {
			dev->stats.rx_frame_errors++;
			devdbg(dev, "bad rndis message %d/%d/%d/%d, len %d",
				le32_to_cpu(hdr->msg_type),
				msg_len, data_offset, data_len, skb->len);
			return 0;
		}
		skb_pull(skb, 8 + data_offset);

		/* at most one packet left? */
		if (likely((data_len - skb->len) <= sizeof *hdr)) {
			skb_trim(skb, data_len);
			break;
		}

		/* try to return all the packets in the batch */
		skb2 = skb_clone(skb, GFP_ATOMIC);
		if (unlikely(!skb2))
			break;
		skb_pull(skb, msg_len - sizeof *hdr);
		skb_trim(skb2, data_len);
		usbnet_skb_return(dev, skb2);
	}

	/* caller will usbnet_skb_return the remaining packet */
	return 1;
}
EXPORT_SYMBOL_GPL(rndis_rx_fixup);

struct sk_buff *
rndis_tx_fixup(struct usbnet *dev, struct sk_buff *skb, gfp_t flags)
{
	struct rndis_data_hdr	*hdr;
	struct sk_buff		*skb2;
	unsigned		len = skb->len;

	if (likely(!skb_cloned(skb))) {
		int	room = skb_headroom(skb);

		/* enough head room as-is? */
		if (unlikely((sizeof *hdr) <= room))
			goto fill;

		/* enough room, but needs to be readjusted? */
		room += skb_tailroom(skb);
		if (likely((sizeof *hdr) <= room)) {
			skb->data = memmove(skb->head + sizeof *hdr,
					    skb->data, len);
			skb_set_tail_pointer(skb, len);
			goto fill;
		}
	}

	/* create a new skb, with the correct size (and tailpad) */
	skb2 = skb_copy_expand(skb, sizeof *hdr, 1, flags);
	dev_kfree_skb_any(skb);
	if (unlikely(!skb2))
		return skb2;
	skb = skb2;

	/* fill out the RNDIS header.  we won't bother trying to batch
	 * packets; Linux minimizes wasted bandwidth through tx queues.
	 */
fill:
	hdr = (void *) __skb_push(skb, sizeof *hdr);
	memset(hdr, 0, sizeof *hdr);
	hdr->msg_type = RNDIS_MSG_PACKET;
	hdr->msg_len = cpu_to_le32(skb->len);
	hdr->data_offset = ccpu2(sizeof(*hdr) - 8);
	hdr->data_len = cpu_to_le32(len);

	/* FIXME make the last packet always be short ... */
	return skb;
}
EXPORT_SYMBOL_GPL(rndis_tx_fixup);


static const struct driver_info	rndis_info = {
	.description =	"RNDIS device",
	.flags =	FLAG_ETHER | FLAG_FRAMING_RN | FLAG_NO_SETINT,
	.bind =		rndis_bind,
	.unbind =	rndis_unbind,
	.status =	rndis_status,
	.rx_fixup =	rndis_rx_fixup,
	.tx_fixup =	rndis_tx_fixup,
};

#undef ccpu2


/*-------------------------------------------------------------------------*/

static const struct usb_device_id	products [] = {
{
	/* RNDIS is MSFT's un-official variant of CDC ACM */
	USB_INTERFACE_INFO(USB_CLASS_COMM, 2 /* ACM */, 0x0ff),
	.driver_info = (unsigned long) &rndis_info,
}, {
	/* "ActiveSync" is an undocumented variant of RNDIS, used in WM5 */
	USB_INTERFACE_INFO(USB_CLASS_MISC, 1, 1),
	.driver_info = (unsigned long) &rndis_info,
},
	{ },		// END
};
MODULE_DEVICE_TABLE(usb, products);

static struct usb_driver rndis_driver = {
	.name =		"rndis_host",
	.id_table =	products,
	.probe =	usbnet_probe,
	.disconnect =	usbnet_disconnect,
	.suspend =	usbnet_suspend,
	.resume =	usbnet_resume,
};

static int __init rndis_init(void)
{
	return usb_register(&rndis_driver);
}
module_init(rndis_init);

static void __exit rndis_exit(void)
{
	usb_deregister(&rndis_driver);
}
module_exit(rndis_exit);

MODULE_AUTHOR("David Brownell");
MODULE_DESCRIPTION("USB Host side RNDIS driver");
MODULE_LICENSE("GPL");