Commit a761129e3625688310aecf26e1be9e98e85f8eb5

Authored by Dongli Zhang
Committed by David S. Miller
1 parent a3ce2a21bb

xen-netfront: do not use ~0U as error return value for xennet_fill_frags()

xennet_fill_frags() uses ~0U as its return value when the sk_buff is not able
to cache extra fragments. This is incorrect because the return type of
xennet_fill_frags() is RING_IDX, and 0xffffffff is a valid value for a ring
buffer index.
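
For context, a Xen ring index is simply a free-running unsigned counter, as
defined in include/xen/interface/io/ring.h, so 0xffffffff is a value that
cons can legitimately hold:

	/* A ring index is a free-running unsigned counter; producers and
	 * consumers compare and mask it, so 0xffffffff is an ordinary value
	 * on the way to wrap-around, not an error marker.
	 */
	typedef unsigned int RING_IDX;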

In the situation where rsp_cons is approaching 0xffffffff, the return value
of xennet_fill_frags() may itself become 0xffffffff, which xennet_poll() (the
caller) mistakes for an error. As a result, queue->rx.rsp_cons is left with an
incorrect value: xennet_fill_frags() updates it only on the error path, and in
the non-error case it is xennet_poll() that is responsible for updating
queue->rx.rsp_cons. queue->rx.rsp_cons then points at rx ring buffer entries
whose queue->rx_skbs[i] and queue->grant_rx_ref[i] have already been cleared
to NULL. This leads to a NULL pointer access in the next iteration that
processes rx ring buffer entries.
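
The pitfall can be reproduced outside the kernel; the following stand-alone
sketch (with a hypothetical fill_frags_old_style(), not the driver code)
shows how the pre-patch ~0U convention misreads a legitimate index as an
error:

	#include <stdio.h>

	/* Stand-alone illustration, not driver code: using ~0U as an error
	 * sentinel for an unsigned ring index misfires once the index itself
	 * legitimately reaches 0xffffffff.
	 */
	typedef unsigned int RING_IDX;

	static RING_IDX fill_frags_old_style(RING_IDX cons, int error)
	{
		if (error)
			return ~0U;	/* old error convention */
		return cons;		/* success; may legitimately be 0xffffffff */
	}

	int main(void)
	{
		RING_IDX cons = 0xffffffffU;	/* rsp_cons just before wrap-around */
		RING_IDX ret = fill_frags_old_style(cons, 0);

		if (ret == ~0U)	/* false positive: success mistaken for an error */
			printf("treated as error although no error occurred\n");

		return 0;
	}

Compiled with any C compiler, the program prints the false-positive message
even though no error occurred.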

The symptom is similar to the one fixed in
commit 00b368502d18 ("xen-netfront: do not assume sk_buff_head list is
empty in error handling").

This patch changes the return type of xennet_fill_frags() so that it indicates
only success or failure. queue->rx.rsp_cons is now always updated inside this
function.

Fixes: ad4f15dc2c70 ("xen/netfront: don't bug in case of too many frags")
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Signed-off-by: David S. Miller <davem@davemloft.net>

1 file changed, 9 insertions(+), 8 deletions(-)

drivers/net/xen-netfront.c
@@ -887,9 +887,9 @@
 	return 0;
 }
 
-static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
-                                  struct sk_buff *skb,
-                                  struct sk_buff_head *list)
+static int xennet_fill_frags(struct netfront_queue *queue,
+                             struct sk_buff *skb,
+                             struct sk_buff_head *list)
 {
 	RING_IDX cons = queue->rx.rsp_cons;
 	struct sk_buff *nskb;
@@ -908,7 +908,7 @@
 		if (unlikely(skb_shinfo(skb)->nr_frags >= MAX_SKB_FRAGS)) {
 			queue->rx.rsp_cons = ++cons + skb_queue_len(list);
 			kfree_skb(nskb);
-			return ~0U;
+			return -ENOENT;
 		}
 
 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
@@ -919,7 +919,9 @@
 		kfree_skb(nskb);
 	}
 
-	return cons;
+	queue->rx.rsp_cons = cons;
+
+	return 0;
 }
 
 static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
@@ -1045,8 +1047,7 @@
 		skb->data_len = rx->status;
 		skb->len += rx->status;
 
-		i = xennet_fill_frags(queue, skb, &tmpq);
-		if (unlikely(i == ~0U))
+		if (unlikely(xennet_fill_frags(queue, skb, &tmpq)))
 			goto err;
 
 		if (rx->flags & XEN_NETRXF_csum_blank)
@@ -1056,7 +1057,7 @@
 
 		__skb_queue_tail(&rxq, skb);
 
-		queue->rx.rsp_cons = ++i;
+		i = ++queue->rx.rsp_cons;
 		work_done++;
 	}
 