Commit 1252ecf63f77ea147bd40f5462c7d9e3d3ae2815
Committed by: David S. Miller
Parent: 00181fc946
[ATM]: fix possible recursive locking in skb_migrate()
This is a real potential deadlock: skb_migrate() takes the locks of two skbuffs without any kind of lock ordering. The following patch fixes it by sorting the lock-taking order by the address of the skb; it's not pretty, but it's the best that can be done in a minimally invasive way.

Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Signed-off-by: Chas Williams <chas@cmf.nrl.navy.mil>
Signed-off-by: David S. Miller <davem@davemloft.net>
Showing 1 changed file with 11 additions and 6 deletions
net/atm/ipcommon.c
--- a/net/atm/ipcommon.c
+++ b/net/atm/ipcommon.c
@@ -25,22 +25,27 @@
 /*
  * skb_migrate appends the list at "from" to "to", emptying "from" in the
  * process. skb_migrate is atomic with respect to all other skb operations on
- * "from" and "to". Note that it locks both lists at the same time, so beware
- * of potential deadlocks.
+ * "from" and "to". Note that it locks both lists at the same time, so to deal
+ * with the lock ordering, the locks are taken in address order.
  *
  * This function should live in skbuff.c or skbuff.h.
  */
 
 
-void skb_migrate(struct sk_buff_head *from,struct sk_buff_head *to)
+void skb_migrate(struct sk_buff_head *from, struct sk_buff_head *to)
 {
 	unsigned long flags;
 	struct sk_buff *skb_from = (struct sk_buff *) from;
 	struct sk_buff *skb_to = (struct sk_buff *) to;
 	struct sk_buff *prev;
 
-	spin_lock_irqsave(&from->lock,flags);
-	spin_lock(&to->lock);
+	if ((unsigned long) from < (unsigned long) to) {
+		spin_lock_irqsave(&from->lock, flags);
+		spin_lock_nested(&to->lock, SINGLE_DEPTH_NESTING);
+	} else {
+		spin_lock_irqsave(&to->lock, flags);
+		spin_lock_nested(&from->lock, SINGLE_DEPTH_NESTING);
+	}
 	prev = from->prev;
 	from->next->prev = to->prev;
 	prev->next = skb_to;
@@ -51,7 +56,7 @@
 	from->prev = skb_from;
 	from->next = skb_from;
 	from->qlen = 0;
-	spin_unlock_irqrestore(&from->lock,flags);
+	spin_unlock_irqrestore(&from->lock, flags);
 }
 
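For readers unfamiliar with the pattern, here is a minimal userspace sketch of the same address-ordered locking idea, written in plain C with pthreads rather than kernel code; the queue type and queue_migrate name are made up for illustration. Whichever of the two structures has the lower address is locked first, so two concurrent callers that pass the same pair in opposite order can never each hold one lock while waiting for the other. The spin_lock_nested(..., SINGLE_DEPTH_NESTING) calls in the patch additionally tell lockdep that taking a second lock of the same lock class at this point is intentional.

/*
 * Sketch only: demonstrates address-ordered lock acquisition, not the
 * actual skb_migrate() implementation above.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

struct queue {
	pthread_mutex_t lock;
	int len;
};

static void queue_migrate(struct queue *from, struct queue *to)
{
	/* Take the two locks in ascending address order, mirroring the
	 * (unsigned long) comparison in the patch, so concurrent calls
	 * with (a, b) and (b, a) cannot deadlock against each other. */
	struct queue *first  = (uintptr_t) from < (uintptr_t) to ? from : to;
	struct queue *second = (uintptr_t) from < (uintptr_t) to ? to : from;

	pthread_mutex_lock(&first->lock);
	pthread_mutex_lock(&second->lock);

	to->len += from->len;	/* stand-in for splicing the skb list */
	from->len = 0;

	pthread_mutex_unlock(&second->lock);
	pthread_mutex_unlock(&first->lock);
}

int main(void)
{
	struct queue a = { PTHREAD_MUTEX_INITIALIZER, 3 };
	struct queue b = { PTHREAD_MUTEX_INITIALIZER, 0 };

	/* Safe even if another thread concurrently ran queue_migrate(&b, &a). */
	queue_migrate(&a, &b);
	printf("a.len=%d b.len=%d\n", a.len, b.len);
	return 0;
}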