Commit 872857a84e18f4bf9b56b298309a977b2ce77b5b

Authored by Jeff Kirsher
1 parent f2be142979

Documentation/networking/ixgbe.txt: Update ixgbe documentation

Update Intel Wired LAN ixgbe documentation.

Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>

Showing 1 changed file with 136 additions and 75 deletions

Documentation/networking/ixgbe.txt
1 1 Linux Base Driver for 10 Gigabit PCI Express Intel(R) Network Connection
2 2 ========================================================================
3 3  
4   -March 10, 2009
  4 +Intel 10 Gigabit Linux driver.
  5 +Copyright(c) 1999 - 2010 Intel Corporation.
5 6  
6   -
7 7 Contents
8 8 ========
9 9  
10   -- In This Release
11 10 - Identifying Your Adapter
12   -- Building and Installation
13 11 - Additional Configurations
  12 +- Performance Tuning
  13 +- Known Issues
14 14 - Support
15 15  
  16 +Identifying Your Adapter
  17 +========================
16 18  
  19 +The driver in this release is compatible with 82598 and 82599-based Intel
  20 +Network Connections.
17 21  
18   -In This Release
19   -===============
  22 +For more information on how to identify your adapter, go to the Adapter &
  23 +Driver ID Guide at:
20 24  
21   -This file describes the ixgbe Linux Base Driver for the 10 Gigabit PCI
22   -Express Intel(R) Network Connection. This driver includes support for
23   -Itanium(R)2-based systems.
  25 + http://support.intel.com/support/network/sb/CS-012904.htm
24 26  
25   -For questions related to hardware requirements, refer to the documentation
26   -supplied with your 10 Gigabit adapter. All hardware requirements listed apply
27   -to use with Linux.
  27 +SFP+ Devices with Pluggable Optics
  28 +----------------------------------
28 29  
29   -The following features are available in this kernel:
30   - - Native VLANs
31   - - Channel Bonding (teaming)
32   - - SNMP
33   - - Generic Receive Offload
34   - - Data Center Bridging
  30 +82599-BASED ADAPTERS
35 31  
36   -Channel Bonding documentation can be found in the Linux kernel source:
37   -/Documentation/networking/bonding.txt
  32 +NOTES: If your 82599-based Intel(R) Network Adapter came with Intel optics, or
  33 +is an Intel(R) Ethernet Server Adapter X520-2, then it only supports Intel
  34 +optics and/or the direct attach cables listed below.
38 35  
39   -Ethtool, lspci, and ifconfig can be used to display device and driver
40   -specific information.
  36 +When 82599-based SFP+ devices are connected back to back, they should be set to
  37 +the same Speed setting via Ethtool. Results may vary if you mix speed settings.
41 41  
  42 +Supplier   Type                                Part Numbers
42 43  
43   -Identifying Your Adapter
44   -========================
  44 +SR Modules
  45 +Intel      DUAL RATE 1G/10G SFP+ SR (bailed)   FTLX8571D3BCV-IT
  46 +Intel      DUAL RATE 1G/10G SFP+ SR (bailed)   AFBR-703SDDZ-IN1
  47 +Intel      DUAL RATE 1G/10G SFP+ SR (bailed)   AFBR-703SDZ-IN2
  48 +LR Modules
  49 +Intel      DUAL RATE 1G/10G SFP+ LR (bailed)   FTLX1471D3BCV-IT
  50 +Intel      DUAL RATE 1G/10G SFP+ LR (bailed)   AFCT-701SDDZ-IN1
  51 +Intel      DUAL RATE 1G/10G SFP+ LR (bailed)   AFCT-701SDZ-IN2
45 52  
46   -This driver supports devices based on the 82598 controller and the 82599
47   -controller.
  53 +The following is a list of 3rd party SFP+ modules and direct attach cables that
  54 +have received some testing. Not all modules are applicable to all devices.
48 55  
49   -For specific information on identifying which adapter you have, please visit:
  56 +Supplier   Type                                Part Numbers
50 57  
51   - http://support.intel.com/support/network/sb/CS-008441.htm
  58 +Finisar    SFP+ SR bailed, 10g single rate     FTLX8571D3BCL
  59 +Avago      SFP+ SR bailed, 10g single rate     AFBR-700SDZ
  60 +Finisar    SFP+ LR bailed, 10g single rate     FTLX1471D3BCL
52 61  
  62 +Finisar    DUAL RATE 1G/10G SFP+ SR (No Bail)  FTLX8571D3QCV-IT
  63 +Avago      DUAL RATE 1G/10G SFP+ SR (No Bail)  AFBR-703SDZ-IN1
  64 +Finisar    DUAL RATE 1G/10G SFP+ LR (No Bail)  FTLX1471D3QCV-IT
  65 +Avago      DUAL RATE 1G/10G SFP+ LR (No Bail)  AFCT-701SDZ-IN1
  66 +Finisar    1000BASE-T SFP                      FCLF8522P2BTL
  67 +Avago      1000BASE-T SFP                      ABCU-5710RZ
53 68  
54   -Building and Installation
55   -=========================
  69 +82599-based adapters support all passive and active limiting direct attach
  70 +cables that comply with SFF-8431 v4.1 and SFF-8472 v10.4 specifications.
56 71  
57   -select m for "Intel(R) 10GbE PCI Express adapters support" located at:
58   - Location:
59   - -> Device Drivers
60   - -> Network device support (NETDEVICES [=y])
61   - -> Ethernet (10000 Mbit) (NETDEV_10000 [=y])
  72 +Laser turns off for SFP+ when ifconfig down
  73 +-------------------------------------------
  74 +"ifconfig down" turns off the laser for 82599-based SFP+ fiber adapters.
  75 +"ifconfig up" turns on the later.
62 76  
63   -1. make modules & make modules_install
64 77  
65   -2. Load the module:
  78 +82598-BASED ADAPTERS
66 79  
67   -# modprobe ixgbe
  80 +NOTES for 82598-Based Adapters:
  81 +- Intel(R) Network Adapters that support removable optical modules only support
  82 + their original module type (i.e., the Intel(R) 10 Gigabit SR Dual Port
  83 + Express Module only supports SR optical modules). If you plug in a different
  84 + type of module, the driver will not load.
  85 +- Hot Swapping/hot plugging optical modules is not supported.
  86 +- Only single speed, 10 gigabit modules are supported.
  87 +- LAN on Motherboard (LOMs) may support DA, SR, or LR modules. Other module
  88 + types are not supported. Please see your system documentation for details.
68 89  
69   - The insmod command can be used if the full
70   - path to the driver module is specified. For example:
  90 +The following is a list of 3rd party SFP+ modules and direct attach cables that
  91 +have received some testing. Not all modules are applicable to all devices.
71 92  
72   - insmod /lib/modules/<KERNEL VERSION>/kernel/drivers/net/ixgbe/ixgbe.ko
  93 +Supplier   Type                                Part Numbers
73 94  
74   - With 2.6 based kernels also make sure that older ixgbe drivers are
75   - removed from the kernel, before loading the new module:
  95 +Finisar    SFP+ SR bailed, 10g single rate     FTLX8571D3BCL
  96 +Avago      SFP+ SR bailed, 10g single rate     AFBR-700SDZ
  97 +Finisar    SFP+ LR bailed, 10g single rate     FTLX1471D3BCL
76 98  
77   - rmmod ixgbe; modprobe ixgbe
  99 +82598-based adapters support all passive direct attach cables that comply
  100 +with SFF-8431 v4.1 and SFF-8472 v10.4 specifications. Active direct attach
  101 +cables are not supported.
78 102  
79   -3. Assign an IP address to the interface by entering the following, where
80   - x is the interface number:
81 103  
82   - ifconfig ethx <IP_address>
  104 +Flow Control
  105 +------------
  106 +Ethernet Flow Control (IEEE 802.3x) can be configured with ethtool to enable
  107 +receiving and transmitting pause frames for ixgbe. When TX is enabled, PAUSE
  108 +frames are generated when the receive packet buffer crosses a predefined
   109 +threshold. When RX is enabled, the transmit unit will halt for the time delay
  110 +specified when a PAUSE frame is received.
83 111  
84   -4. Verify that the interface works. Enter the following, where <IP_address>
85   - is the IP address for another machine on the same subnet as the interface
86   - that is being tested:
   112 +Flow Control is enabled by default. To disable flow control when connected to
   113 +a flow control capable link partner, use ethtool:
87 114  
88   - ping <IP_address>
   115 + ethtool -A eth? autoneg off rx off tx off
89 116  
  117 +NOTE: For 82598 backplane cards entering 1 gig mode, flow control default
  118 +behavior is changed to off. Flow control in 1 gig mode on these devices can
  119 +lead to Tx hangs.
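As a sketch, the flow control settings described above can be inspected as well
as changed through ethtool's -a/-A options; the interface name eth0 is
illustrative.

```shell
# Query the current pause-frame parameters (autoneg, RX, TX):
ethtool -a eth0

# Enable receiving and transmitting PAUSE frames:
ethtool -A eth0 rx on tx on

# Disable flow control entirely, as in the example above:
ethtool -A eth0 autoneg off rx off tx off
```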
90 120  
91 121 Additional Configurations
92 122 =========================
93 123  
94   - Viewing Link Messages
95   - ---------------------
96   - Link messages will not be displayed to the console if the distribution is
97   - restricting system messages. In order to see network driver link messages on
98   - your console, set dmesg to eight by entering the following:
99   -
100   - dmesg -n 8
101   -
102   - NOTE: This setting is not saved across reboots.
103   -
104   -
105 124 Jumbo Frames
106 125 ------------
107 126 The driver supports Jumbo Frames for all adapters. Jumbo Frames support is
108 127  
... ... @@ -123,13 +142,8 @@
123 142 other protocols besides TCP. It's also safe to use with configurations that
124 143 are problematic for LRO, namely bridging and iSCSI.
125 144  
126   - GRO is enabled by default in the driver. Future versions of ethtool will
127   - support disabling and re-enabling GRO on the fly.
128   -
129   -
130 145 Data Center Bridging, aka DCB
131 146 -----------------------------
132   -
133 147 DCB is a configuration Quality of Service implementation in hardware.
134 148 It uses the VLAN priority tag (802.1p) to filter traffic. That means
135 149 that there are 8 different priorities that traffic can be filtered into.
136 150  
137 151  
138 152  
139 153  
140 154  
... ... @@ -163,24 +177,71 @@
163 177  
164 178 http://e1000.sf.net
165 179  
166   -
167 180 Ethtool
168 181 -------
169 182 The driver utilizes the ethtool interface for driver configuration and
170   - diagnostics, as well as displaying statistical information. Ethtool
171   - version 3.0 or later is required for this functionality.
  183 + diagnostics, as well as displaying statistical information. The latest
  184 + Ethtool version is required for this functionality.
172 185  
173 186 The latest release of ethtool can be found from
174 187 http://sourceforge.net/projects/gkernel.
175 188  
176   -
177   - NAPI
  189 + FCoE
178 190 ----
  191 + This release of the ixgbe driver contains new code to enable users to use
   192 + Fibre Channel over Ethernet (FCoE) and Data Center Bridging (DCB)
  193 + functionality that is supported by the 82598-based hardware. This code has
  194 + no default effect on the regular driver operation, and configuring DCB and
  195 + FCoE is outside the scope of this driver README. Refer to
  196 + http://www.open-fcoe.org/ for FCoE project information and contact
  197 + e1000-eedc@lists.sourceforge.net for DCB information.
179 198  
180   - NAPI (Rx polling mode) is supported in the ixgbe driver. NAPI is enabled
181   - by default in the driver.
  199 + MAC and VLAN anti-spoofing feature
  200 + ----------------------------------
  201 + When a malicious driver attempts to send a spoofed packet, it is dropped by
  202 + the hardware and not transmitted. An interrupt is sent to the PF driver
  203 + notifying it of the spoof attempt.
182 204  
183   - See www.cyberus.ca/~hadi/usenix-paper.tgz for more information on NAPI.
  205 + When a spoofed packet is detected the PF driver will send the following
  206 + message to the system log (displayed by the "dmesg" command):
  207 +
  208 + Spoof event(s) detected on VF (n)
  209 +
   210 + Where n is the number of the VF that attempted the spoofing.
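As an illustrative sketch (not from this document), spoof events logged in this
format can be tallied per VF with standard shell tools; the sample log contents
and the "ixgbe ..." message prefix are assumptions.

```shell
# Count spoof events per VF from a saved copy of the kernel log.
# /tmp/dmesg.sample stands in for real "dmesg" output.
cat > /tmp/dmesg.sample <<'EOF'
ixgbe 0000:01:00.0: Spoof event(s) detected on VF (3)
ixgbe 0000:01:00.0: Spoof event(s) detected on VF (3)
ixgbe 0000:01:00.0: Spoof event(s) detected on VF (7)
EOF

# Extract the "VF (n)" token and tally occurrences per VF:
grep -o 'VF ([0-9]*)' /tmp/dmesg.sample | sort | uniq -c
```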
  211 +
  212 +
  213 +Performance Tuning
  214 +==================
  215 +
  216 +An excellent article on performance tuning can be found at:
  217 +
  218 +http://www.redhat.com/promo/summit/2008/downloads/pdf/Thursday/Mark_Wagner.pdf
  219 +
  220 +
  221 +Known Issues
  222 +============
  223 +
  224 + Enabling SR-IOV in a 32-bit Microsoft* Windows* Server 2008 Guest OS using
  225 + Intel (R) 82576-based GbE or Intel (R) 82599-based 10GbE controller under KVM
  226 + -----------------------------------------------------------------------------
  227 + KVM Hypervisor/VMM supports direct assignment of a PCIe device to a VM. This
  228 + includes traditional PCIe devices, as well as SR-IOV-capable devices using
  229 + Intel 82576-based and 82599-based controllers.
  230 +
   231 + While direct assignment of a PCIe device or an SR-IOV Virtual Function (VF)
   232 + to a Linux-based VM running a 2.6.32 or later kernel works fine, there is a
   233 + known issue with Microsoft Windows Server 2008 VMs that results in a "yellow
   234 + bang" error. The problem is within the KVM VMM itself, not the Intel driver
   235 + or the SR-IOV logic of the VMM: KVM emulates an older CPU model for the
   236 + guests, and this older CPU model does not support MSI-X interrupts, which
   237 + are a requirement for Intel SR-IOV.
  238 +
   239 + If you wish to use the Intel 82576 or 82599-based controllers in SR-IOV mode
   240 + with KVM and a Microsoft Windows Server 2008 guest, try the following
   241 + workaround: tell KVM to emulate a different model of CPU when using qemu to
   242 + create the KVM guest:
  243 +
  244 + "-cpu qemu64,model=13"
184 245  
185 246  
186 247 Support