Commit 6a6dc08ff6395f58be3ee568cb970ea956f16819

Authored by David Vrabel
Committed by David S. Miller
1 parent f1fb521f7d

xen-netfront: use napi_complete() correctly to prevent Rx stalling

After d75b1ade567ffab085e8adbbdacf0092d10cd09c ("net: less interrupt
masking in NAPI") the napi instance is removed from the per-cpu list
prior to calling n->poll(), and is only requeued if all of the
budget was used.  This inadvertently broke netfront because netfront
does not use NAPI correctly.
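
For reference, a minimal sketch (not the actual kernel source) of how
the core dispatch loop treats a napi instance after that change; the
field names mirror the kernel's, but the helper napi_dispatch_one() and
its body are simplified for illustration:

	/*
	 * Simplified model of how net_rx_action() drives one napi
	 * instance after d75b1ade567f.  Sketch only; the real code
	 * also handles budget/time limits, GRO flushing, etc.
	 */
	static void napi_dispatch_one(struct napi_struct *n,
				      struct list_head *repoll)
	{
		int weight = n->weight;
		int work;

		list_del_init(&n->poll_list);	/* unlinked BEFORE ->poll() runs */
		work = n->poll(n, weight);	/* driver poll, e.g. xennet_poll() */

		/*
		 * Only a poller that used its whole budget is requeued.
		 * One that returns under budget must have called
		 * napi_complete(); otherwise it is on no list at all
		 * and will never be polled again.
		 */
		if (work == weight)
			list_add_tail(&n->poll_list, repoll);
	}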

If netfront had not used all of its budget, it would do a final check
for any Rx responses and avoid calling napi_complete() if there were
more responses.  It would still return under budget, so it would never
be rescheduled.  The final check would also not re-enable the Rx
interrupt.
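
For clarity, this was the old tail of xennet_poll() (the lines removed
in the diff below), with an added comment marking where it goes wrong
under the new core behaviour:

	if (work_done < budget) {
		int more_to_do = 0;

		napi_gro_flush(napi, false);

		local_irq_save(flags);

		RING_FINAL_CHECK_FOR_RESPONSES(&queue->rx, more_to_do);
		if (!more_to_do)
			__napi_complete(napi);
		/*
		 * When more_to_do is set, __napi_complete() is skipped
		 * and work_done < budget is returned: the instance is
		 * neither completed nor requeued, so Rx stalls.
		 */

		local_irq_restore(flags);
	}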

Additionally, xennet_poll() would call napi_complete() /after/
re-enabling the interrupt.  This resulted in a race between the
napi_complete() and the napi_schedule() in the interrupt handler.  The
use of local_irq_save/restore() avoided this race if the handler was
running on the same CPU, but not if it was running on a different CPU.
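
A possible cross-CPU interleaving (a hypothetical timeline, consistent
with the description above, not a trace):

	/*
	 *   CPU0: xennet_poll()                 CPU1: Rx interrupt handler
	 *   --------------------------------    --------------------------
	 *   RING_FINAL_CHECK_FOR_RESPONSES()
	 *     re-enables event delivery
	 *                                        napi_schedule(napi)
	 *                                          NAPI_STATE_SCHED is
	 *                                          still set, so this is
	 *                                          a no-op
	 *   __napi_complete(napi)
	 *     clears NAPI_STATE_SCHED; the
	 *     notification is lost and Rx
	 *     stalls
	 */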

Fix both of these by always calling napi_complete() if the budget was
not all used, and then calling napi_schedule() if the final check says
there is more work.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Cc: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

Showing 1 changed file with 3 additions and 8 deletions

drivers/net/xen-netfront.c
@@ -977,7 +977,6 @@ static int xennet_poll(struct napi_struct *napi, int budget)
 	struct sk_buff_head rxq;
 	struct sk_buff_head errq;
 	struct sk_buff_head tmpq;
-	unsigned long flags;
 	int err;
 
 	spin_lock(&queue->rx_lock);
@@ -1050,15 +1049,11 @@ static int xennet_poll(struct napi_struct *napi, int budget)
 	if (work_done < budget) {
 		int more_to_do = 0;
 
-		napi_gro_flush(napi, false);
+		napi_complete(napi);
 
-		local_irq_save(flags);
-
 		RING_FINAL_CHECK_FOR_RESPONSES(&queue->rx, more_to_do);
-		if (!more_to_do)
-			__napi_complete(napi);
-
-		local_irq_restore(flags);
+		if (more_to_do)
+			napi_schedule(napi);
 	}
 
 	spin_unlock(&queue->rx_lock);