22 Jun, 2017

1 commit

  • Add a flag to indicate if a queue is rate-limited. Test the flag in
    the NAPI poll handler and avoid rescheduling the queue if it is set;
    otherwise we risk locking up the host. The rescheduling will be done
    in the timer callback function.
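
    A minimal sketch of the check described above, assuming a new
    'rate_limited' flag on the queue; the helper names here are
    illustrative, not the driver's exact code.

      static int xenvif_poll_sketch(struct napi_struct *napi, int budget)
      {
              struct xenvif_queue *queue =
                      container_of(napi, struct xenvif_queue, napi);
              int work_done = xenvif_tx_action(queue, budget); /* hypothetical */

              if (work_done < budget) {
                      napi_complete(napi);
                      /* Never reschedule a rate-limited queue here, or we
                       * spin on it and risk locking up the host; the credit
                       * timer callback reschedules it instead. */
                      if (!queue->rate_limited)
                              napi_reschedule_or_enable(queue); /* hypothetical */
              }
              return work_done;
      }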

    Reported-by: Jean-Louis Dupond
    Signed-off-by: Wei Liu
    Tested-by: Jean-Louis Dupond
    Reviewed-by: Paul Durrant
    Signed-off-by: David S. Miller

    Wei Liu
     

14 Feb, 2017

1 commit

  • This patch fixes an issue where the types of the counters in the
    queue(s) and the interface are not in sync (queue counters are int,
    interface counters are long), causing incorrect reporting of the
    tx/rx values of the vif interface and counter overflows at
    hard-to-predict points.
    This patch sets both counters to the u64 type.
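
    For illustration, a sketch of the unified counter layout (struct name
    assumed): with both sides u64, the queue and interface totals can no
    longer diverge through differing overflow points.

      /* Per-queue and per-interface counters share one u64 layout. */
      struct xenvif_stats {           /* assumed name */
              u64 rx_bytes;
              u64 rx_packets;
              u64 tx_bytes;
              u64 tx_packets;
      };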

    Signed-off-by: Mart van Santen
    Reviewed-by: Paul Durrant
    Signed-off-by: David S. Miller

    Mart van Santen
     

13 Oct, 2016

1 commit

  • It is useful to be able to see the hash configuration when running tests.
    This patch adds a debugfs node for that purpose.

    The original version of this patch (commit c0c64c152389) was reverted due
    to build failures caused by a conflict with commit 0364a8824c02
    ("xen-netback: switch to threaded irq for control ring"). This new version
    of the patch is nearly identical to the original, the only difference
    being that creation of the debugfs node is predicated on 'ctrl_irq'
    being non-zero rather than the now non-existent 'ctrl_task'.

    Signed-off-by: Paul Durrant
    Cc: Wei Liu
    Cc: David S. Miller
    Acked-by: Wei Liu
    Signed-off-by: David S. Miller

    Paul Durrant
     

07 Oct, 2016

3 commits

  • Instead of flushing the copy ops when a packet is complete, complete
    packets when their copy ops are done. This improves performance by
    reducing the number of grant copy hypercalls.

    Latency is still limited by the relatively small size of the copy
    batch.
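
    A sketch of the reordering under assumed names: flush the whole batch
    of grant copy ops in one hypercall, then complete every skb whose ops
    were in that batch.

      static void xenvif_rx_flush_sketch(struct xenvif_queue *queue)
      {
              struct sk_buff *skb;

              /* One hypercall for the accumulated batch, not one per packet. */
              gnttab_batch_copy(queue->rx_copy.op, queue->rx_copy.num); /* assumed fields */
              queue->rx_copy.num = 0;

              /* Every skb queued here had all of its copy ops in the batch. */
              while ((skb = __skb_dequeue(&queue->rx_copy.completed)) != NULL)
                      xenvif_rx_complete(queue, skb); /* hypothetical */
      }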

    Signed-off-by: David Vrabel
    [re-based]
    Signed-off-by: Paul Durrant
    Signed-off-by: David S. Miller

    David Vrabel
     
  • Refactor the to-guest (rx) path to:

    1. Push responses for completed skbs earlier, reducing latency.

    2. Reduce the per-queue memory overhead by greatly reducing the
    maximum number of grant copy ops in each hypercall (from 4352 to
    64). Each struct xenvif_queue is now only 44 kB instead of 220 kB.

    3. Make the code more maintainable.

    Signed-off-by: David Vrabel
    [re-based]
    Signed-off-by: Paul Durrant
    Signed-off-by: David S. Miller

    David Vrabel
     
  • As far as I am aware only very old Windows network frontends make use of
    this style of passing GSO packets from backend to frontend. These
    frontends can easily be replaced by the freely available Xen Project
    Windows PV network frontend, which uses the 'default' mechanism for
    passing GSO packets, which is also used by all Linux frontends.

    NOTE: Removal of this feature will not cause breakage in old Windows
    frontends. They simply will no longer receive GSO packets - the
    packets instead being fragmented in the backend.

    Signed-off-by: Paul Durrant
    Reviewed-by: David Vrabel
    Signed-off-by: David S. Miller

    Paul Durrant
     

17 May, 2016

2 commits

  • My recent patch to include/xen/interface/io/netif.h defines a new shared
    ring (in addition to the rx and tx rings) for passing control messages
    from a VM frontend driver to a backend driver.

    A previous patch added the necessary boilerplate for mapping the control
    ring from the frontend, should it be created. This patch adds
    implementations for each of the defined protocol messages.

    Signed-off-by: Paul Durrant
    Cc: Wei Liu
    Acked-by: Wei Liu
    Signed-off-by: David S. Miller

    Paul Durrant
     
  • My recent patch to include/xen/interface/io/netif.h defines a new shared
    ring (in addition to the rx and tx rings) for passing control messages
    from a VM frontend driver to a backend driver.

    This patch adds the necessary code to xen-netback to map this new shared
    ring, should it be created by a frontend, but does not add implementations
    for any of the defined protocol messages. These are added in a subsequent
    patch for clarity.

    Signed-off-by: Paul Durrant
    Acked-by: Wei Liu
    Signed-off-by: David S. Miller

    Paul Durrant
     

14 Mar, 2016

1 commit

  • The code does not currently support a frontend passing multiple extra info
    fragments to the backend in a tx request. The xenvif_get_extras() function
    handles multiple extra_info fragments but make_tx_response() assumes there
    is only ever a single extra info fragment.

    This patch modifies xenvif_get_extras() to pass back a count of extra
    info fragments, which is then passed to make_tx_response() (after
    possibly being stashed in pending_tx_info for deferred responses).

    Signed-off-by: Paul Durrant
    Cc: Wei Liu
    Acked-by: Wei Liu
    Signed-off-by: David S. Miller

    Paul Durrant
     

08 Feb, 2016

1 commit

  • My recent patch to the Xen Project documents a protocol for 'dynamic
    multicast control' in netif.h. This extends the previous multicast control
    protocol to not require a shared ring reconnection to turn the feature off.
    Instead the backend watches the "request-multicast-control" key in xenstore
    and turns the feature off if the key value is written to zero.

    This patch adds support for dynamic multicast control in xen-netback.

    Signed-off-by: Paul Durrant
    Cc: Ian Campbell
    Cc: Wei Liu
    Acked-by: Wei Liu
    Signed-off-by: David S. Miller

    Paul Durrant
     

23 Oct, 2015

1 commit

  • The PV network protocol uses 4KB page granularity. The goal of this
    patch is to allow a Linux kernel using 64KB page granularity to work
    as a network backend on an unmodified Xen.

    It's only necessary to adapt the ring size and break skb data into
    small chunks of 4KB. The rest of the code relies on the grant table
    code.
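
    A sketch of the splitting, assuming the 4KB grant granularity
    (XEN_PAGE_SIZE) and an illustrative per-chunk callback standing in
    for the grant-table setup:

      #define XEN_PAGE_SIZE 4096u  /* grant granularity, even on 64KB hosts */

      static void for_each_grant_chunk(unsigned long addr, size_t len,
                                       void (*fn)(unsigned long start,
                                                  unsigned int offset,
                                                  unsigned int chunk))
      {
              while (len) {
                      unsigned int offset = addr & (XEN_PAGE_SIZE - 1);
                      unsigned int chunk =
                              min_t(size_t, len, XEN_PAGE_SIZE - offset);

                      /* One grant op covers at most one 4KB chunk. */
                      fn(addr & ~(unsigned long)(XEN_PAGE_SIZE - 1),
                         offset, chunk);
                      addr += chunk;
                      len  -= chunk;
              }
      }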

    Signed-off-by: Julien Grall
    Reviewed-by: Wei Liu
    Signed-off-by: David Vrabel

    Julien Grall
     

10 Sep, 2015

1 commit

  • Commit f48da8b14d04ca87ffcffe68829afd45f926ec6a (xen-netback: fix
    unlimited guest Rx internal queue and carrier flapping) introduced a
    regression.

    The PV frontend in IPXE only places 4 requests on the guest Rx ring.
    Since netback required at least (MAX_SKB_FRAGS + 1) slots, IPXE could
    not receive any packets.

    a) If GSO is not enabled on the VIF, fewer guest Rx slots are required
    for the largest possible packet. Calculate the required slots
    based on the maximum GSO size or the MTU (see the sketch after this
    list).

    This calculation of the number of required slots relies on
    1650d5455bd2 (xen-netback: always fully coalesce guest Rx packets),
    which is present in 4.0-rc1 and later.

    b) Reduce the Rx stall detection to checking for at least one
    available Rx request. This is fine since we're predominantly
    concerned with detecting interfaces which are down and thus have
    zero available Rx requests.
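
    A sketch of the calculation in (a), under assumed field names: bound
    the largest packet by the GSO size when GSO is negotiated, otherwise
    by the MTU, so a non-GSO frontend such as IPXE's needs far fewer
    slots.

      static unsigned int needed_rx_slots_sketch(struct xenvif_queue *queue)
      {
              unsigned int max_len = queue->vif->dev->mtu + ETH_HLEN;

              if (queue->vif->gso_mask)  /* assumed: GSO negotiated */
                      max_len = max_t(unsigned int, max_len,
                                      queue->vif->dev->gso_max_size);

              /* Data slots for the largest packet, plus one for the GSO
               * extra info segment. */
              return DIV_ROUND_UP(max_len, XEN_PAGE_SIZE) + 1;
      }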

    Signed-off-by: David Vrabel
    Reviewed-by: Wei Liu
    Signed-off-by: David S. Miller

    David Vrabel
     

03 Sep, 2015

1 commit

  • Xen's PV network protocol includes messages to add/remove ethernet
    multicast addresses to/from a filter list in the backend. This allows
    the frontend to request that the backend only forward multicast
    packets which are of interest, thus preventing unnecessary noise on
    the shared ring.

    The canonical netif header in git://xenbits.xen.org/xen.git specifies
    the message format (two more XEN_NETIF_EXTRA_TYPEs) so the minimal
    necessary changes have been pulled into include/xen/interface/io/netif.h.

    To prevent the frontend from extending the multicast filter list
    arbitrarily a limit (XEN_NETBK_MCAST_MAX) has been set to 64 entries.
    This limit is not specified by the protocol and so may change in future.
    If the limit is reached then the next XEN_NETIF_EXTRA_TYPE_MCAST_ADD
    sent by the frontend will be failed with NETIF_RSP_ERROR.
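
    A sketch of the bounded add path; the list and field names are
    assumed, while XEN_NETBK_MCAST_MAX and the NETIF_RSP_ERROR outcome
    come from the text above.

      #define XEN_NETBK_MCAST_MAX 64

      struct xenvif_mcast_addr {      /* assumed name */
              struct list_head entry;
              u8 addr[ETH_ALEN];
      };

      static int xenvif_mcast_add_sketch(struct xenvif *vif, const u8 *addr)
      {
              struct xenvif_mcast_addr *mcast;

              if (vif->fe_mcast_count >= XEN_NETBK_MCAST_MAX) /* assumed field */
                      return -ENOSPC; /* caller replies with NETIF_RSP_ERROR */

              mcast = kzalloc(sizeof(*mcast), GFP_ATOMIC);
              if (!mcast)
                      return -ENOMEM;

              ether_addr_copy(mcast->addr, addr);
              list_add_tail_rcu(&mcast->entry, &vif->fe_mcast_addr); /* assumed list */
              vif->fe_mcast_count++;
              return 0;
      }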

    Signed-off-by: Paul Durrant
    Cc: Ian Campbell
    Cc: Wei Liu
    Acked-by: Wei Liu
    Signed-off-by: David S. Miller

    Paul Durrant
     

21 Mar, 2015

1 commit

  • With the current netback, the bandwidth limiter's parameters are only
    settable at vif setup time. This patch registers a watch on them, and
    thus makes them changeable at runtime.

    When the watch fires, the timer is reset. The timer's mutex is used
    for fencing the change.
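
    A sketch, under hypothetical names, of what the watch handler might
    do: re-read the parameters and reset the timer, fencing the change
    with the credit mutex mentioned above.

      static void credit_watch_sketch(struct xenvif_queue *queue)
      {
              u64 bytes, usec;

              read_credit_params(queue, &bytes, &usec); /* hypothetical */

              mutex_lock(&queue->credit_lock);          /* assumed mutex */
              queue->credit_bytes = bytes;
              queue->credit_usec  = usec;
              mod_timer(&queue->credit_timeout, jiffies); /* reset timer */
              mutex_unlock(&queue->credit_lock);
      }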

    Cc: Anthony Liguori
    Signed-off-by: Imre Palik
    Acked-by: Wei Liu
    Signed-off-by: David S. Miller

    Palik, Imre
     

24 Jan, 2015

1 commit

  • Always fully coalesce guest Rx packets into the minimum number of ring
    slots. Reducing the number of slots per packet has significant
    performance benefits when receiving off-host traffic.

    Results from XenServer's performance benchmarks:

                               Baseline    Full coalesce
    Interhost VM receive       7.2 Gb/s    11 Gb/s
    Interhost aggregate        24 Gb/s     24 Gb/s
    Intrahost single stream    14 Gb/s     14 Gb/s
    Intrahost aggregate        34 Gb/s     34 Gb/s

    However, this can increase the number of grant ops per packet which
    decreases performance of backend (dom0) to VM traffic (by ~10%)
    /unless/ grant copy has been optimized for adjacent ops with the same
    source or destination (see "grant-table: defer releasing pages
    acquired in a grant copy"[1] expected in Xen 4.6).

    [1] http://lists.xen.org/archives/html/xen-devel/2015-01/msg01118.html

    Signed-off-by: David Vrabel
    Acked-by: Ian Campbell
    Signed-off-by: David S. Miller

    David Vrabel
     

19 Dec, 2014

1 commit

  • Commit bc96f648df1bbc2729abbb84513cf4f64273a1f1 (xen-netback: make
    feature-rx-notify mandatory) incorrectly assumed that there were no
    frontends in use that did not support this feature. But the frontend
    driver in MiniOS does not support it, and since MiniOS is used by
    (qemu) stubdoms, these stopped working.

    Netback sort of works as-is in this mode except:

    - If there are no Rx requests and the internal Rx queue fills, only
    the drain timeout will wake the thread. The default drain timeout
    of 10 s would give unacceptable pauses.

    - If an Rx stall was detected and the internal Rx queue is drained,
    then the Rx thread would never wake.

    Handle these two cases (when feature-rx-notify is disabled) by:

    - Reducing the drain timeout to 30 ms.

    - Disabling Rx stall detection.

    Reported-by: John
    Tested-by: John
    Signed-off-by: David Vrabel
    Reviewed-by: Wei Liu
    Signed-off-by: David S. Miller

    David Vrabel
     

26 Oct, 2014

3 commits

  • If a frontend is not receiving packets, it is useful to detect this
    and turn off the carrier so packets are dropped early instead of being
    queued and drained when they expire.

    A to-guest queue is stalled if it doesn't have enough free slots for
    an extended period of time (default 60 s).

    If at least one queue is stalled, the carrier is turned off (in the
    expectation that the other queues will soon stall as well). The
    carrier is only turned on once all queues are ready.

    When the frontend connects, all the queues start in the stalled state
    and only become ready once the frontend queues enough Rx requests.

    Signed-off-by: David Vrabel
    Reviewed-by: Wei Liu
    Signed-off-by: David S. Miller

    David Vrabel
     
  • Netback needs to discard old to-guest skbs (guest Rx queue drain) and
    it needs to detect guest Rx stalls (to disable the carrier so packets
    are discarded earlier), but the current implementation is very broken.

    1. The check of slot availability in hard_start_xmit did not consider
    the number of packets that were already in the guest Rx queue. This
    could allow the queue to grow without bound.

    The guest stops consuming packets and the ring is allowed to fill,
    leaving S slots free. Netback queues a packet requiring more than S
    slots (ensuring that the ring stays with S slots free). Netback can
    then queue packets indefinitely, provided they each require S or
    fewer slots.

    2. The Rx stall detection is not triggered in this case since the
    (host) Tx queue is not stopped.

    3. If the Tx queue is stopped and a guest Rx interrupt occurs, netback
    will consider this an Rx purge event, which may result in it taking
    the carrier down unnecessarily. It also considers a queue with only
    1 slot free as unstalled (even though the next packet might not fit
    in it).

    The internal guest Rx queue is limited by a byte length (512 KiB,
    enough for half the ring). The (host) Tx queue is stopped and started
    based on this limit. This sets an upper bound on the amount of memory
    used by packets on the internal queue.

    This allows the estimation of the number of slots needed for an skb
    to be removed (it wasn't a very good estimate anyway). Instead, the
    guest Rx thread just waits for enough free slots for a maximum sized
    packet.

    skbs queued on the internal queue have an 'expires' time (set to the
    current time plus the drain timeout). The guest Rx thread will detect
    when the skb at the head of the queue has expired and discard expired
    skbs. This sets a clear upper bound on the length of time an skb can
    be queued for. For a guest being destroyed the maximum time needed to
    wait for all the packets it sent to be dropped is still the drain
    timeout (10 s) since it will not be sending new packets.
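
    A sketch of the expiry mechanism with assumed names: stamp each skb
    as it joins the internal queue, and let the Rx thread drop whatever
    has waited past the drain timeout.

      struct xenvif_rx_cb {                  /* assumed, lives in skb->cb */
              unsigned long expires;
      };
      #define XENVIF_RX_CB(skb) ((struct xenvif_rx_cb *)(skb)->cb)

      /* On enqueue (sketch):
       *   XENVIF_RX_CB(skb)->expires = jiffies + queue->rx_drain_timeout;
       *   __skb_queue_tail(&queue->rx_queue, skb);
       */

      static void rx_queue_purge_expired(struct xenvif_queue *queue)
      {
              struct sk_buff *skb;

              while ((skb = skb_peek(&queue->rx_queue)) != NULL &&
                     time_after(jiffies, XENVIF_RX_CB(skb)->expires)) {
                      __skb_unlink(skb, &queue->rx_queue);
                      kfree_skb(skb);
              }
      }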

    Rx stall detection is reintroduced in a later commit.

    Signed-off-by: David Vrabel
    Reviewed-by: Wei Liu
    Signed-off-by: David S. Miller

    David Vrabel
     
  • Frontends that do not provide feature-rx-notify may stall because
    netback depends on the notification from frontend to wake the guest Rx
    thread (even if can_queue is false).

    This could be fixed but feature-rx-notify was introduced in 2006 and I
    am not aware of any frontends that do not implement this.

    Signed-off-by: David Vrabel
    Acked-by: Wei Liu
    Signed-off-by: David S. Miller

    David Vrabel
     

14 Aug, 2014

1 commit

  • Reference count the number of packets in the host stack, so that we
    don't stop the deallocation thread too early. Otherwise we can end up
    with xenvif_free permanently waiting for the deallocation thread to
    unmap grefs.

    Reported-by: Thomas Leonard
    Signed-off-by: Wei Liu
    Cc: Ian Campbell
    Cc: Zoltan Kiss
    Signed-off-by: David S. Miller

    Wei Liu
     

06 Aug, 2014

2 commits

  • Currently, when the guest is not able to receive more packets, the
    qdisc layer starts a timer, and when it fires, the qdisc is started
    again to attempt delivery. This is a very slow way to drain the
    queues; it consumes unnecessary resources and slows down the shutdown
    of other guests.

    This patch changes the behaviour by turning the carrier off when that
    timer fires, so all the packets stuck waiting on that vif are freed
    up. Instead of the rx_queue_purge bool it uses the
    VIF_STATUS_RX_PURGE_EVENT bit to signal the thread that either the
    timeout happened or an RX interrupt arrived, so the thread can check
    what it should do. It also disables NAPI, so the guest can't transmit,
    but leaves the interrupts on, so the vif can come back up. Only the
    queues which brought the interface down can enable it again; the
    QUEUE_STATUS_RX_STALLED bit makes sure of that.

    Signed-off-by: Zoltan Kiss
    Signed-off-by: David Vrabel
    Cc: netdev@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Cc: xen-devel@lists.xenproject.org
    Signed-off-by: David S. Miller

    Zoltan Kiss
     
  • This patch introduces a new state bit VIF_STATUS_CONNECTED to track whether the
    vif is in a connected state. Using carrier will not work with the next patch
    in this series, which aims to turn the carrier temporarily off if the guest
    doesn't seem to be able to receive packets.

    Signed-off-by: Zoltan Kiss
    Signed-off-by: David Vrabel
    Cc: netdev@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Cc: xen-devel@lists.xenproject.org

    v2:
    - rename the bitshift type to "enum state_bit_shift" here, not in the next patch
    Signed-off-by: David S. Miller

    Zoltan Kiss
     

09 Jul, 2014

1 commit

  • This patch adds debugfs capabilities to netback. There used to be a
    similar patch floating around for the classic kernel, but it used
    procfs. It is based on a very similar blkback patch.
    It creates xen-netback/[vifname]/io_ring_q[queueno] files; reading
    them outputs various ring variables etc. Writing "kick" into one
    imitates an interrupt, which can be useful to check whether the ring
    is merely stalled due to a missed interrupt.
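
    A sketch of the debugfs plumbing described above; the node names
    follow the text, while the seq_file boilerplate and the fields
    printed are illustrative.

      #include <linux/debugfs.h>
      #include <linux/seq_file.h>

      static int io_ring_show(struct seq_file *m, void *unused)
      {
              struct xenvif_queue *queue = m->private; /* assumed type */

              /* assumed ring fields; the real file prints much more */
              seq_printf(m, "tx req_cons: %u, rsp_prod_pvt: %u\n",
                         queue->tx.req_cons, queue->tx.rsp_prod_pvt);
              return 0;
      }

      static int io_ring_open(struct inode *inode, struct file *filp)
      {
              return single_open(filp, io_ring_show, inode->i_private);
      }

      /* A .write handler (not shown) would parse "kick" and invoke the
       * ring's interrupt handler by hand. */
      static const struct file_operations io_ring_fops = {
              .owner   = THIS_MODULE,
              .open    = io_ring_open,
              .read    = seq_read,
              .llseek  = seq_lseek,
              .release = single_release,
      };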

    Signed-off-by: Zoltan Kiss
    Cc: netdev@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Cc: xen-devel@lists.xenproject.org
    Signed-off-by: David S. Miller

    Zoltan Kiss
     

26 Jun, 2014

1 commit

  • The original code uses netdev->real_num_tx_queues to bookkeep the
    number of queues and invokes netif_set_real_num_tx_queues to set it.
    However, netif_set_real_num_tx_queues doesn't allow
    real_num_tx_queues to be smaller than 1, which means setting the
    number to 0 will not work and real_num_tx_queues is left untouched.

    This is bogus when xenvif_free is invoked before any queues have been
    allocated. That function needs to iterate through all queues to free
    resources. Using the wrong number of queues results in a NULL pointer
    dereference.

    So we bookkeep the number of queues in xen-netback ourselves to solve
    this problem. This fixes a regression introduced by the multiqueue
    patchset in 3.16-rc1.

    There's another bug in the original code: the real number of RX queues
    is never set. In the current Xen multiqueue design, the number of TX
    queues and the number of RX queues are in fact the same. We need to
    set the numbers of TX and RX queues to the same value.
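
    A sketch, under assumed names, of the bookkeeping described above:
    keep a private queue count (which may legitimately be zero) and set
    the core's TX and RX counts to the same value.

      static int xenvif_set_queues_sketch(struct xenvif *vif,
                                          unsigned int num_queues)
      {
              int rc = 0;

              vif->num_queues = num_queues; /* assumed field; may be 0 */

              if (num_queues > 0) {
                      /* the core rejects a count below 1, hence our copy */
                      rc = netif_set_real_num_tx_queues(vif->dev, num_queues);
                      if (rc == 0)
                              rc = netif_set_real_num_rx_queues(vif->dev,
                                                                num_queues);
              }
              return rc;
      }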

    Also remove xenvif_select_queue and leave queue selection to core
    driver, as suggested by David Miller.

    Reported-by: Boris Ostrovsky
    Signed-off-by: Wei Liu
    CC: Ian Campbell
    CC: Paul Durrant
    Signed-off-by: David S. Miller

    Wei Liu
     

05 Jun, 2014

3 commits

  • Builds on the refactoring of the previous patch to implement multiple
    queues between xen-netfront and xen-netback.

    Writes the maximum supported number of queues into XenStore, and reads
    the values written by the frontend to determine how many queues to use.

    Ring references and event channels are read from XenStore on a per-queue
    basis and rings are connected accordingly.

    Also adds code to handle the cleanup of any already initialised queues
    if the initialisation of a subsequent queue fails.

    Signed-off-by: Andrew J. Bennieston
    Acked-by: Wei Liu
    Signed-off-by: David S. Miller

    Andrew J. Bennieston
     
  • In preparation for multi-queue support in xen-netback, move the
    queue-specific data from struct xenvif into struct xenvif_queue, and
    update the rest of the code to use this.

    Also adds loops over queues where appropriate, even though only one is
    configured at this point, and uses alloc_netdev_mq() and the
    corresponding multi-queue netif wake/start/stop functions in preparation
    for multiple active queues.

    Finally, implements a trivial queue selection function suitable for
    ndo_select_queue, which simply returns 0 for a single queue and uses
    skb_get_hash() to compute the queue index otherwise.
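
    A sketch of that trivial selector (function name hypothetical):

      static u16 xenvif_select_queue_sketch(struct net_device *dev,
                                            struct sk_buff *skb)
      {
              unsigned int num = dev->real_num_tx_queues;

              if (num == 1)
                      return 0;
              return skb_get_hash(skb) % num;
      }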

    Signed-off-by: Andrew J. Bennieston
    Signed-off-by: Wei Liu
    Signed-off-by: David S. Miller

    Wei Liu
     
  • This array was allocated separately in commit ac3d5ac2 ("xen-netback:
    fix guest-receive-side array sizes") due to it being very large, and a
    struct xenvif is allocated as the netdev_priv part of a struct
    net_device, i.e. via kmalloc() but falling back to vmalloc() if the
    initial alloc. fails.

    In preparation for the multi-queue patches, where this array becomes
    part of struct xenvif_queue and is always allocated through vzalloc(),
    move this back into the struct xenvif.

    Signed-off-by: Andrew J. Bennieston
    Acked-by: Wei Liu
    Signed-off-by: David S. Miller

    Andrew J. Bennieston
     

17 May, 2014

1 commit

  • When the NAPI budget was not all used, xenvif_poll() would call
    napi_complete() /after/ enabling the interrupt. This resulted in a
    race between the napi_complete() and the napi_schedule() in the
    interrupt handler. The use of local_irq_save/restore() avoided the
    race if the handler was running on the same CPU, but not if it was
    running on a different CPU.

    Fix this properly by calling napi_complete() before reenabling
    interrupts (in the xenvif_napi_schedule_or_enable_irq() call).
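
    A sketch of the corrected ordering, with a hypothetical work helper:
    complete NAPI first, so that a new event can always reschedule the
    instance without racing napi_complete().

      static int xenvif_poll_fixed_sketch(struct napi_struct *napi, int budget)
      {
              struct xenvif *vif = container_of(napi, struct xenvif, napi);
              int work_done = xenvif_tx_action(vif, budget); /* hypothetical */

              if (work_done < budget) {
                      napi_complete(napi);
                      /* only now re-enable the interrupt */
                      xenvif_napi_schedule_or_enable_irq(vif);
              }
              return work_done;
      }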

    Signed-off-by: David Vrabel
    Acked-by: Wei Liu
    Acked-by: Ian Campbell
    Signed-off-by: David S. Miller

    David Vrabel
     

04 Apr, 2014

1 commit

  • An old inefficiency of the TX path is that we grant map the first
    slot and then copy the header part to the linear area. Doing a grant
    copy for that header straight away is more reasonable, especially
    because there are ongoing efforts to let Xen avoid the TLB flush after
    unmap when the page was not touched in Dom0. With the original
    approach the memcpy ruined that.
    The key changes:
    - the vif has a tx_copy_ops array again
    - xenvif_tx_build_gops sets up the grant copy operations
    - we don't have to figure out whether the header and first frag are on
    the same grant-mapped page or not
    Note, we only grant copy PKT_PROT_LEN bytes from the first slot; the
    rest (if any) will be on the first frag, which is grant mapped. If the
    first slot is smaller than PKT_PROT_LEN, then we grant copy that much,
    and later __pskb_pull_tail will pull more from the frags (if any).

    Signed-off-by: Zoltan Kiss
    Reviewed-by: Paul Durrant
    Acked-by: Ian Campbell
    Signed-off-by: David S. Miller

    Zoltan Kiss
     

02 Apr, 2014

1 commit

  • When netback discovers that a frontend is sending malformed packets,
    it disables the interface which serves that frontend.

    However, disabling a network interface involves taking a mutex, which
    cannot be done in softirq context, so we need to defer this process
    to kthread context.

    This patch does the following:
    1. introduce a flag to indicate the interface is disabled.
    2. check that flag in the TX path; don't do any work if it's true.
    3. check that flag in the RX path; turn off the interface if it's true.

    The reason to disable it in the RX path is that RX uses a kthread.
    After this change the behaviour of netback is still consistent -- it
    won't do any TX work for a rogue frontend, and the interface will
    eventually be turned off.

    Also change a "continue" to "break" after xenvif_fatal_tx_err, as it
    doesn't make sense to continue processing packets if the frontend is
    rogue.

    This is a fix for XSA-90.
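
    A sketch of the flag handling in the two paths, with assumed helper
    and field names:

      /* TX path (NAPI context): no work for a disabled interface. */
      if (unlikely(vif->disabled))
              return 0;

      /* RX path (kthread context): taking a mutex is fine here, so this
       * is where the interface is actually turned off. */
      if (unlikely(vif->disabled && netif_carrier_ok(vif->dev)))
              xenvif_carrier_off(vif); /* assumed helper */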

    Reported-by: Török Edwin
    Signed-off-by: Wei Liu
    Cc: Ian Campbell
    Reviewed-by: David Vrabel
    Acked-by: Ian Campbell
    Signed-off-by: David S. Miller

    Wei Liu
     

27 Mar, 2014

1 commit

  • Since the early days, TX stops if there aren't enough free pending
    slots to consume a maximum sized (slot-wise) packet. The likely reason
    for that is to avoid the case where we don't have enough free pending
    slots in the ring to finish the packet. But if we make sure that the
    pending ring has the same size as the shared ring, that shouldn't
    really happen: the frontend can only post packets which fit into the
    free space of the shared ring. If a packet doesn't fit, the frontend
    has to stop, as it can only increase req_prod when the whole packet
    fits onto the ring.

    This patch removes this check, makes sure the two rings have the same
    size, and removes a check from the callback. As we no longer stop the
    NAPI instance on this condition, we don't have to wake it up when we
    free pending slots.

    Signed-off-by: Zoltan Kiss
    Signed-off-by: David S. Miller

    Zoltan Kiss
     

26 Mar, 2014

1 commit

  • This reverts commit e9275f5e2df1b2098a8cc405d87b88b9affd73e6. That
    commit was the last in the netback grant mapping series, and it tried
    to do more aggressive aggregation of unmap operations. However,
    practical use showed almost no positive effect, whilst with certain
    frontends it caused a significant performance regression.

    Signed-off-by: Zoltan Kiss
    Acked-by: Ian Campbell
    Signed-off-by: David S. Miller

    Zoltan Kiss
     

08 Mar, 2014

4 commits

  • Unmapping causes TLB flushing, therefore we should do it in the
    largest possible batches. However we shouldn't starve the guest for
    too long. So if the guest has space for at least two big packets and
    we don't have at least a quarter of the ring to unmap, delay it for
    at most 1 millisecond.
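
    A sketch of that heuristic with assumed names: flush immediately once
    a quarter of the ring is pending or the guest is short on space,
    otherwise give the batch up to 1 ms to grow.

      static void dealloc_batch_sketch(struct xenvif_queue *queue)
      {
              /* all helpers and fields here are hypothetical */
              if (pending_unmap_ops(queue) >= RING_SIZE / 4 ||
                  guest_free_slots(queue) < 2 * MAX_SLOTS_PER_PACKET) {
                      flush_unmap_ops(queue); /* one big batch, one TLB flush */
              } else {
                      /* small batch and the guest is not starved: wait */
                      mod_timer(&queue->dealloc_delay,
                                jiffies + msecs_to_jiffies(1));
              }
      }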

    Signed-off-by: Zoltan Kiss
    Signed-off-by: David S. Miller

    Zoltan Kiss
     
  • A malicious or buggy guest can leave its queue filled indefinitely, in
    which case the qdisc starts to queue packets for that VIF. If those
    packets came from another guest, they can block its slots and prevent
    shutdown. To avoid that, we make sure the queue is drained every 10
    seconds.
    In the worst case, the qdisc queue usually takes 3 rounds to flush.

    Signed-off-by: Zoltan Kiss
    Signed-off-by: David S. Miller

    Zoltan Kiss
     
  • The Xen network protocol had an implicit dependency on MAX_SKB_FRAGS.
    Netback has to handle guests sending up to XEN_NETBK_LEGACY_SLOTS_MAX
    slots. To achieve that:
    - create a new skb
    - map the leftover slots to its frags (no linear buffer here!)
    - chain it to the previous one through skb_shinfo(skb)->frag_list
    - map them
    - copy and coalesce the frags into a brand new skb and send it to the
    stack
    - unmap the two old skbs' pages

    It also introduces new stat counters, which help determine how often
    a guest sends a packet with more than MAX_SKB_FRAGS frags.

    NOTE: if bisect brought you here, you should apply the series up until
    "xen-netback: Timeout packets in RX path", otherwise malicious guests can block
    other guests by not releasing their sent packets.

    Signed-off-by: Zoltan Kiss
    Signed-off-by: David S. Miller

    Zoltan Kiss
     
  • These counters help determine how often the buffers had to be copied.
    They also help find out whether packets are leaked: if "sent !=
    success + fail", there are probably packets that were never freed up
    properly.

    NOTE: if bisect brought you here, you should apply the series up until
    "xen-netback: Timeout packets in RX path", otherwise Windows guests can't work
    properly and malicious guests can block other guests by not releasing their sent
    packets.

    Signed-off-by: Zoltan Kiss
    Signed-off-by: David S. Miller

    Zoltan Kiss