Commit 9ceb87fceacca86a37f189b84b79797c313b0c03

Authored by Jesper Dangaard Brouer
Committed by David S. Miller
1 parent 5b9e7e1607

pktgen: document tuning for max NIC performance

Using pktgen I'm seeing the ixgbe driver "push-back", due to the TX ring
running full.  Thus, the TX ring is artificially limiting pktgen.
(Diagnose via "ethtool -S", look for the "tx_restart_queue" or "tx_busy"
counters.)
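
A quick way to check for this push-back while pktgen runs (only a
sketch, assuming the interface name ethX; the exact counter names
differ between drivers):

 # ethtool -S ethX | egrep "tx_restart_queue|tx_busy"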

Using ixgbe, the real reason behind the TX ring running full is that
the TX ring is not being cleaned up fast enough.  The ixgbe driver
combines TX+RX ring cleanups, and the cleanup interval is affected by
the ethtool --coalesce setting of the "rx-usecs" parameter.
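
The coalesce settings can be inspected and changed with ethtool (only a
sketch; ethX and the value 30 are placeholders, and the accepted range
depends on the driver):

 # ethtool -c ethX              # show current coalesce settings
 # ethtool -C ethX rx-usecs 30  # set the interrupt coalescing interval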

Do not increase the default NIC TX ring buffer or the default cleanup
interval.  Instead, simply document that pktgen needs special NIC
tuning for maximum packets-per-second performance.

Performance results with pktgen using clone_skb=100000 (a setup sketch
follows the results below).
TX ring size 512 (default), adjusting "rx-usecs":
 (Single CPU performance, E5-2630, ixgbe)
 - 3935002 pps - rx-usecs:  1 (irqs:  9346)
 - 5132350 pps - rx-usecs: 10 (irqs: 99157)
 - 5375111 pps - rx-usecs: 20 (irqs: 50154)
 - 5454050 pps - rx-usecs: 30 (irqs: 33872)
 - 5496320 pps - rx-usecs: 40 (irqs: 26197)
 - 5502510 pps - rx-usecs: 50 (irqs: 21527)

Adjusting the TX ring size (ethtool -G), with "rx-usecs==1" (default):
 - 3935002 pps - tx-size:  512
 - 5354401 pps - tx-size:  768
 - 5356847 pps - tx-size: 1024
 - 5327595 pps - tx-size: 1536
 - 5356779 pps - tx-size: 2048
 - 5353438 pps - tx-size: 4096
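
For reference, a pktgen run along these lines can be driven through the
/proc/net/pktgen interface.  This is only a sketch, assuming a single
kpktgend_0 thread, the interface eth1, and placeholder destination
IP/MAC values:

 # echo "rem_device_all"  > /proc/net/pktgen/kpktgend_0
 # echo "add_device eth1" > /proc/net/pktgen/kpktgend_0
 # echo "clone_skb 100000" > /proc/net/pktgen/eth1
 # echo "count 0" > /proc/net/pktgen/eth1    # 0 means run until stopped
 # echo "dst 192.168.1.2" > /proc/net/pktgen/eth1
 # echo "dst_mac 00:11:22:33:44:55" > /proc/net/pktgen/eth1
 # echo "start" > /proc/net/pktgen/pgctrl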

Notice that after commit 6f25cd47d (pktgen: fix xmit test for BQL enabled
devices) pktgen uses netif_xmit_frozen_or_drv_stopped() and ignores
the BQL "stack" pause (QUEUE_STATE_STACK_XOFF) flag.  This allows us to
put more pressure on the TX ring buffers.
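
For context, the per-queue BQL limit that this stack pause flag relates
to can be inspected via sysfs (only a sketch; ethX and queue tx-0 are
placeholders):

 # cat /sys/class/net/ethX/queues/tx-0/byte_queue_limits/limit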

It is the ixgbe_maybe_stop_tx() call that stops the transmits, and
pktgen respects this via its call to netif_xmit_frozen_or_drv_stopped(txq).

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Showing 1 changed file with 28 additions and 0 deletions

Documentation/networking/pktgen.txt
... ... @@ -24,6 +24,34 @@
24 24 /proc/net/pktgen/ethX
25 25  
26 26  
  27 +Tuning NIC for max performance
  28 +==============================
  29 +
  30 +The default NIC settings are (likely) not tuned for pktgen's artificial
  31 +overload type of benchmarking, as this could hurt the normal use-case.
  32 +
  33 +Specifically increasing the TX ring buffer in the NIC:
  34 + # ethtool -G ethX tx 1024
  35 +
  36 +A larger TX ring can improve pktgen's performance, while it can hurt
  37 +in the general case, 1) because the TX ring buffer might get larger
  38 +than the CPU's L1/L2 cache, 2) because it allows more queueing in the
  39 +NIC HW layer (which is bad for bufferbloat).
  40 +
  41 +One should be careful about concluding that packets/descriptors in
  42 +the HW TX ring cause delay.  Drivers usually delay cleaning up the
  43 +ring-buffers (for various performance reasons); thus packets stalling
  44 +in the TX ring might just be waiting for cleanup.
  45 +
  46 +This cleanup issue is specifically the case for the ixgbe driver
  47 +(Intel 82599 chip).  This driver (ixgbe) combines TX+RX ring cleanups,
  48 +and the cleanup interval is affected by the ethtool --coalesce setting
  49 +of the "rx-usecs" parameter.
  50 +
  51 +For ixgbe use e.g. "30", resulting in approx 33K interrupts/sec (1/30*10^6):
  52 + # ethtool -C ethX rx-usecs 30
  53 +
  54 +
27 55 Viewing threads
28 56 ===============
29 57 /proc/net/pktgen/kpktgend_0