Linux* Base Driver for the Intel(R) Ethernet 10 Gigabit PCI Express Family of
Adapters
=============================================================================

Intel 10 Gigabit Linux driver.
Copyright(c) 1999 - 2013 Intel Corporation.

Contents
========

- Identifying Your Adapter
- Additional Configurations
- Performance Tuning
- Known Issues
- Support

Identifying Your Adapter
========================
|
The driver in this release is compatible with 82598, 82599 and X540-based
Intel Network Connections.
For more information on how to identify your adapter, go to the Adapter & Driver ID Guide at: |
http://support.intel.com/support/network/sb/CS-012904.htm |
SFP+ Devices with Pluggable Optics
----------------------------------
82599-BASED ADAPTERS |
NOTES: If your 82599-based Intel(R) Network Adapter came with Intel optics,
or is an Intel(R) Ethernet Server Adapter X520-2, then it only supports
Intel optics and/or the direct attach cables listed below.
When 82599-based SFP+ devices are connected back to back, they should be set
to the same Speed setting via ethtool. Results may vary if you mix speed
settings.
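As a sketch, pinning both ends of such a link to the same fixed speed might look like the following. The helper function and the eth2 interface name are illustrative, not part of the driver; run the printed command on both hosts.

```shell
# Build the ethtool command that pins one side of a back-to-back SFP+ link
# to a fixed speed with autonegotiation off. "eth2" is a placeholder name.
build_speed_cmd() {
    dev="$1"; speed="$2"
    printf 'ethtool -s %s speed %s autoneg off\n' "$dev" "$speed"
}

build_speed_cmd eth2 10000
```

Running the emitted `ethtool -s` command requires root and a live interface; the builder only assembles the command line.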

82598-based adapters support all passive direct attach cables that comply
with SFF-8431 v4.1 and SFF-8472 v10.4 specifications. Active direct attach
cables are not supported.

Supplier    Type                                             Part Numbers

SR Modules
Intel       DUAL RATE 1G/10G SFP+ SR (bailed)                FTLX8571D3BCV-IT
Intel       DUAL RATE 1G/10G SFP+ SR (bailed)                AFBR-703SDDZ-IN1
Intel       DUAL RATE 1G/10G SFP+ SR (bailed)                AFBR-703SDZ-IN2

LR Modules
Intel       DUAL RATE 1G/10G SFP+ LR (bailed)                FTLX1471D3BCV-IT
Intel       DUAL RATE 1G/10G SFP+ LR (bailed)                AFCT-701SDDZ-IN1
Intel       DUAL RATE 1G/10G SFP+ LR (bailed)                AFCT-701SDZ-IN2

The following is a list of 3rd party SFP+ modules and direct attach cables
that have received some testing. Not all modules are applicable to all
devices.

Supplier    Type                                             Part Numbers

Finisar     SFP+ SR bailed, 10g single rate                  FTLX8571D3BCL
Avago       SFP+ SR bailed, 10g single rate                  AFBR-700SDZ
Finisar     SFP+ LR bailed, 10g single rate                  FTLX1471D3BCL

Finisar     DUAL RATE 1G/10G SFP+ SR (No Bail)               FTLX8571D3QCV-IT
Avago       DUAL RATE 1G/10G SFP+ SR (No Bail)               AFBR-703SDZ-IN1
Finisar     DUAL RATE 1G/10G SFP+ LR (No Bail)               FTLX1471D3QCV-IT
Avago       DUAL RATE 1G/10G SFP+ LR (No Bail)               AFCT-701SDZ-IN1
Finisar     1000BASE-T SFP                                   FCLF8522P2BTL
Avago       1000BASE-T SFP                                   ABCU-5710RZ

82599-based adapters support all passive and active limiting direct attach
cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.

Laser turns off for SFP+ when device is down
--------------------------------------------
"ip link set down" turns off the laser for 82599-based SFP+ fiber adapters.
"ip link set up" turns on the laser.
82598-BASED ADAPTERS |

NOTES for 82598-Based Adapters:
- Intel(R) Network Adapters that support removable optical modules only
  support their original module type (i.e., the Intel(R) 10 Gigabit SR Dual
  Port Express Module only supports SR optical modules). If you plug in a
  different type of module, the driver will not load.
- Hot Swapping/hot plugging optical modules is not supported.
- Only single speed, 10 gigabit modules are supported.
- LAN on Motherboard (LOMs) may support DA, SR, or LR modules. Other module
  types are not supported. Please see your system documentation for details.

The following is a list of 3rd party SFP+ modules and direct attach cables
that have received some testing. Not all modules are applicable to all
devices.

Supplier    Type                                             Part Numbers

Finisar     SFP+ SR bailed, 10g single rate                  FTLX8571D3BCL
Avago       SFP+ SR bailed, 10g single rate                  AFBR-700SDZ
Finisar     SFP+ LR bailed, 10g single rate                  FTLX1471D3BCL

82598-based adapters support all passive direct attach cables that comply
with SFF-8431 v4.1 and SFF-8472 v10.4 specifications. Active direct attach
cables are not supported.

Flow Control
------------
Ethernet Flow Control (IEEE 802.3x) can be configured with ethtool to enable
receiving and transmitting pause frames for ixgbe. When TX is enabled, PAUSE
frames are generated when the receive packet buffer crosses a predefined
threshold. When RX is enabled, the transmit unit will halt for the time delay
specified when a PAUSE frame is received.

Flow Control is enabled by default. If you want to disable a flow control
capable link partner, use ethtool:

    ethtool -A eth? autoneg off rx off tx off

NOTE: For 82598 backplane cards entering 1 gig mode, flow control default
behavior is changed to off. Flow control in 1 gig mode on these devices can
lead to Tx hangs.
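The current pause settings can be read back with `ethtool -a ethX`. A small sketch of parsing that output follows; the here-doc sample is illustrative, not captured from a live adapter.

```shell
# Extract the Autonegotiate/RX/TX pause lines from `ethtool -a` output.
# On a live system: ethtool -a ethX | parse_pause
parse_pause() {
    awk '/^(Autonegotiate|RX|TX):/ { print $1, $2 }'
}

parse_pause <<'EOF'
Pause parameters for eth2:
Autonegotiate:  on
RX:             on
TX:             on
EOF
```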

Intel(R) Ethernet Flow Director
-------------------------------
Supports advanced filters that direct receive packets by their flows to
different queues. Enables tight control on routing a flow in the platform.
Matches flows and CPU cores for flow affinity. Supports multiple parameters
for flexible flow classification and load balancing.

Flow director is enabled only if the kernel is multiple TX queue capable.

An included script (set_irq_affinity.sh) automates setting the IRQ to CPU
affinity.

You can verify that the driver is using Flow Director by looking at the
counters in ethtool: fdir_miss and fdir_match.

Other ethtool Commands:

To enable Flow Director

    ethtool -K ethX ntuple on

To add a filter
    Use the -U switch, e.g., ethtool -U ethX flow-type tcp4 src-ip 10.0.128.23
    action 1

To see the list of filters currently present:

    ethtool -u ethX

Perfect Filter:
Perfect filter is an interface to load the filter table that funnels all flow
into queue_0 unless an alternative queue is specified using "action". In that
case, any flow that matches the filter criteria will be directed to the
appropriate queue. If the queue is defined as -1, the filter will drop
matching packets.

To account for filter matches and misses, there are two stats in ethtool:
fdir_match and fdir_miss. In addition, rx_queue_N_packets shows the number of
packets processed by the Nth queue.

NOTE: Receive Packet Steering (RPS) and Receive Flow Steering (RFS) are not
compatible with Flow Director. If Flow Director is enabled, these will be
disabled.

The following three parameters impact Flow Director.

FdirMode
--------
Valid Range: 0-2 (0=off, 1=ATR, 2=Perfect filter mode)
Default Value: 1

Flow Director filtering modes.

FdirPballoc
-----------
Valid Range: 0-2 (0=64k, 1=128k, 2=256k)
Default Value: 0

Flow Director allocated packet buffer size.

AtrSampleRate
-------------
Valid Range: 1-100
Default Value: 20

Software ATR Tx packet sample rate. For example, when set to 20, every 20th
packet is examined to see if it will create a new flow.

Node
----
Valid Range: 0-n
Default Value: -1 (off)

  0 - n: where n is the number of NUMA nodes (i.e. 0 - 3) currently online
  in your system
  -1: turns this option off

The Node parameter allows you to choose which NUMA node you want the adapter
to allocate memory on.

max_vfs
-------
Valid Range: 1-63
Default Value: 0

If the value is greater than 0 it will also force the VMDq parameter to be 1
or more.

This parameter adds support for SR-IOV. It causes the driver to spawn up to
max_vfs worth of virtual functions.
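After loading the driver with max_vfs set (e.g. `modprobe ixgbe max_vfs=2`), the VFs appear as `vf N` lines under the PF in `ip link show` output. A sketch of counting them; the here-doc sample is illustrative, not captured from real hardware.

```shell
# Count "vf N ..." lines in `ip link show dev ethX` output.
# On a live system: ip link show dev eth2 | count_vfs
count_vfs() {
    grep -c '^[[:space:]]*vf '
}

count_vfs <<'EOF'
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
    link/ether 00:1b:21:aa:bb:cc brd ff:ff:ff:ff:ff:ff
    vf 0 MAC 00:00:00:00:00:00
    vf 1 MAC 00:00:00:00:00:00
EOF
```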

Additional Configurations
=========================

Jumbo Frames
------------
The driver supports Jumbo Frames for all adapters. Jumbo Frames support is
enabled by changing the MTU to a value larger than the default of 1500.
The maximum value for the MTU is 16110. Use the ip command to increase the
MTU size. For example:

    ip link set dev ethx mtu 9000

The maximum MTU setting for Jumbo Frames is 9710. This value coincides with
the maximum Jumbo Frames size of 9728.
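A defensive sketch before changing the MTU; the helper name is illustrative, and the 9710 ceiling is the Jumbo Frames limit quoted above.

```shell
# Succeed only for an MTU in the Jumbo Frame range described above:
# larger than the 1500 default and at most 9710.
valid_jumbo_mtu() {
    [ "$1" -gt 1500 ] && [ "$1" -le 9710 ]
}

valid_jumbo_mtu 9000 && echo "9000 ok"
valid_jumbo_mtu 9728 || echo "9728 too large"
```

For example: `valid_jumbo_mtu 9000 && ip link set dev ethx mtu 9000`.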

Generic Receive Offload, aka GRO
--------------------------------
The driver supports the in-kernel software implementation of GRO. GRO has
shown that by coalescing Rx traffic into larger chunks of data, CPU
utilization can be significantly reduced when under large Rx load. GRO is an
evolution of the previously-used LRO interface. GRO is able to coalesce other
protocols besides TCP. It's also safe to use with configurations that are
problematic for LRO, namely bridging and iSCSI.
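GRO state can be checked (and toggled) with `ethtool -k` / `ethtool -K ethX gro on`. A sketch of pulling the GRO line out of that output; the here-doc sample is illustrative.

```shell
# Print the state of the generic-receive-offload feature from `ethtool -k`
# output. On a live system: ethtool -k eth2 | gro_state
gro_state() {
    awk -F': ' '/^generic-receive-offload/ { print $2 }'
}

gro_state <<'EOF'
Features for eth2:
rx-checksumming: on
generic-receive-offload: on
EOF
```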

Data Center Bridging, aka DCB
-----------------------------
DCB is a configuration Quality of Service implementation in hardware. It uses
the VLAN priority tag (802.1p) to filter traffic. That means that there are 8
different priorities that traffic can be filtered into. It also enables
priority flow control which can limit or eliminate the number of dropped
packets during network stress. Bandwidth can be allocated to each of these
priorities, which is enforced at the hardware level.

To enable DCB support in ixgbe, you must enable the DCB netlink layer to
allow the userspace tools (see below) to communicate with the driver. This
can be found in the kernel configuration here:

    -> Networking support
      -> Networking options
        -> Data Center Bridging support

Once this is selected, DCB support must be selected for ixgbe. This can be
found here:

    -> Device Drivers
      -> Network device support (NETDEVICES [=y])
        -> Ethernet (10000 Mbit) (NETDEV_10000 [=y])
          -> Intel(R) 10GbE PCI Express adapters support
            -> Data Center Bridging (DCB) Support

After these options are selected, you must rebuild your kernel and your
modules.

In order to use DCB, userspace tools must be downloaded and installed. The
dcbd tools can be found at:

    http://e1000.sf.net
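The two config symbols selected by the menu walk above can be checked against a kernel config file. This is a sketch: the here-doc stands in for `/boot/config-$(uname -r)` or `zcat /proc/config.gz`, and CONFIG_DCB / CONFIG_IXGBE_DCB are the symbols the menus above correspond to.

```shell
# Count how many of the two required DCB options are built in (=y):
# CONFIG_DCB (netlink layer) and CONFIG_IXGBE_DCB (driver DCB support).
dcb_configured() {
    grep -c -E '^CONFIG_(DCB|IXGBE_DCB)=y'
}

dcb_configured <<'EOF'
CONFIG_IXGBE=m
CONFIG_DCB=y
CONFIG_IXGBE_DCB=y
EOF
```

A result of 2 means both options are enabled.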

Ethtool
-------
The driver utilizes the ethtool interface for driver configuration and
diagnostics, as well as displaying statistical information. The latest
ethtool version is required for this functionality.

The latest release of ethtool can be found from
http://ftp.kernel.org/pub/software/network/ethtool/
FCoE
----
This release of the ixgbe driver contains new code to enable users to use
Fibre Channel over Ethernet (FCoE) and Data Center Bridging (DCB)
functionality that is supported by the 82598-based hardware. This code has no
default effect on the regular driver operation, and configuring DCB and FCoE
is outside the scope of this driver README. Refer to
http://www.open-fcoe.org/ for FCoE project information and contact
e1000-eedc@lists.sourceforge.net for DCB information.

MAC and VLAN anti-spoofing feature
----------------------------------
When a malicious driver attempts to send a spoofed packet, it is dropped by
the hardware and not transmitted. An interrupt is sent to the PF driver
notifying it of the spoof attempt.

When a spoofed packet is detected the PF driver will send the following
message to the system log (displayed by the "dmesg" command):

    Spoof event(s) detected on VF (n)

Where n=the VF that attempted to do the spoofing.

Performance Tuning
==================
An excellent article on performance tuning can be found at:

http://www.redhat.com/promo/summit/2008/downloads/pdf/Thursday/Mark_Wagner.pdf

Known Issues
============

Enabling SR-IOV in a 32-bit or 64-bit Microsoft* Windows* Server 2008/R2
Guest OS using Intel(R) 82576-based GbE or Intel(R) 82599-based 10GbE
controller under KVM
------------------------------------------------------------------------

KVM Hypervisor/VMM supports direct assignment of a PCIe device to a VM. This
includes traditional PCIe devices, as well as SR-IOV-capable devices using
Intel 82576-based and 82599-based controllers.

While direct assignment of a PCIe device or an SR-IOV Virtual Function (VF)
to a Linux-based VM running a 2.6.32 or later kernel works fine, there is a
known issue with a Microsoft Windows Server 2008 VM that results in a
"yellow bang" error. The problem is not with the Intel driver or the SR-IOV
logic of the VMM, but with KVM itself: KVM emulates an older CPU model for
the guests, and this older CPU model does not support MSI-X interrupts,
which is a requirement for Intel SR-IOV.
If you wish to use the Intel 82576 or 82599-based controllers in SR-IOV mode
with KVM and a Microsoft Windows Server 2008 guest, try the following
workaround. The workaround is to tell KVM to emulate a different model of CPU
when using qemu to create the KVM guest:
"-cpu qemu64,model=13" |
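A sketch of assembling a full qemu invocation that applies this workaround. Everything except the `-cpu qemu64,model=13` flag is an illustrative placeholder: the image name, memory size, VF PCI address, and the era-appropriate `pci-assign` device are assumptions, not prescribed by this README.

```shell
# Assemble a KVM guest command line with the CPU-model workaround applied.
# "$1" = disk image path, "$2" = host PCI address of the assigned VF.
build_kvm_cmd() {
    img="$1"; vf="$2"
    printf 'qemu-system-x86_64 -enable-kvm -cpu qemu64,model=13 -m 2048 -drive file=%s -device pci-assign,host=%s\n' "$img" "$vf"
}

build_kvm_cmd win2008.img 0000:01:10.0
```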

Support
=======
For general information, go to the Intel support website at:

    http://support.intel.com

or the Intel Wired Networking project hosted by Sourceforge at:

    http://e1000.sourceforge.net

If an issue is identified with the released source code on the supported
kernel with a supported adapter, email the specific information related to
the issue to e1000-devel@lists.sf.net