Commit c06d68b814d556cff5a4dc589215f5ed9f0b7fd5
Committed by Greg Kroah-Hartman
1 parent d6d98a4d8d
USB: xhci: Minimize HW event ring dequeue pointer writes.
The xHCI specification suggests that writing the hardware event ring dequeue pointer register too often can be an expensive operation for the xHCI hardware to manage. It suggests minimizing the number of writes to that register.

Originally, the driver wrote the event ring dequeue pointer after each event was processed. Depending on how the event ring moderation register is set up and how fast the transfers are completing, there may be several events processed for each interrupt. This patch makes the hardware event ring dequeue pointer be written only once per interrupt.

Make the transfer event handler and port status event handler only write the software event ring dequeue pointer. Move the updating of the hardware event ring dequeue pointer into the interrupt function. Move the contents of xhci_set_hc_event_deq() into the interrupt handler. The interrupt handler must clear the event handler busy flag, so it might as well also write the dequeue pointer to the same register. This eliminates two 32-bit PCI reads and two 32-bit PCI writes.

Reported-by: Andiry Xu <andiry.xu@amd.com>
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
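The shape of the consolidated update can be sketched as follows. This is a minimal illustration of the idea, not the literal patch: the helper name xhci_update_erst_dequeue_sketch() is hypothetical, but the accessors and masks (xhci_read_64(), xhci_write_64(), ERST_PTR_MASK, ERST_EHB) are the ones already used by xhci_set_hc_event_deq() in the unchanged context below. The interrupt handler would do something like this once, after the event-processing loop has advanced the software dequeue pointer past all pending events:

/* Hypothetical helper: one 64-bit register read and one 64-bit write per
 * interrupt.  Because the ERST dequeue register is 64 bits wide, folding the
 * pointer update and the busy-flag clear into a single read/write pair saves
 * two 32-bit PCI reads and two 32-bit PCI writes versus updating the pointer
 * per event and clearing the flag separately.
 */
static void xhci_update_erst_dequeue_sketch(struct xhci_hcd *xhci)
{
	u64 temp_64;
	dma_addr_t deq;

	temp_64 = xhci_read_64(xhci, &xhci->ir_set->erst_dequeue);
	deq = xhci_trb_virt_to_dma(xhci->event_ring->deq_seg,
			xhci->event_ring->dequeue);
	/* Keep the low housekeeping bits, install the new pointer bits. */
	temp_64 &= ERST_PTR_MASK;
	temp_64 |= ((u64) deq & (u64) ~ERST_PTR_MASK);
	/* Clear the Event Handler Busy flag (RW1C) in the same write. */
	temp_64 |= ERST_EHB;
	xhci_write_64(xhci, temp_64, &xhci->ir_set->erst_dequeue);
}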
Showing 1 changed file with 37 additions and 13 deletions
drivers/usb/host/xhci-ring.c
/*
 * xHCI host controller driver
 *
 * Copyright (C) 2008 Intel Corp.
 *
 * Author: Sarah Sharp
 * Some code borrowed from the Linux EHCI driver.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
 * or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
 * for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software Foundation,
 * Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
 */

/*
 * Ring initialization rules:
 * 1. Each segment is initialized to zero, except for link TRBs.
 * 2. Ring cycle state = 0.  This represents Producer Cycle State (PCS) or
 *    Consumer Cycle State (CCS), depending on ring function.
 * 3. Enqueue pointer = dequeue pointer = address of first TRB in the segment.
 *
 * Ring behavior rules:
 * 1. A ring is empty if enqueue == dequeue.  This means there will always be at
 *    least one free TRB in the ring.  This is useful if you want to turn that
 *    into a link TRB and expand the ring.
 * 2. When incrementing an enqueue or dequeue pointer, if the next TRB is a
 *    link TRB, then load the pointer with the address in the link TRB.  If the
 *    link TRB had its toggle bit set, you may need to update the ring cycle
 *    state (see cycle bit rules).  You may have to do this multiple times
 *    until you reach a non-link TRB.
 * 3. A ring is full if enqueue++ (for the definition of increment above)
 *    equals the dequeue pointer.
 *
 * Cycle bit rules:
 * 1. When a consumer increments a dequeue pointer and encounters a toggle bit
 *    in a link TRB, it must toggle the ring cycle state.
 * 2. When a producer increments an enqueue pointer and encounters a toggle bit
 *    in a link TRB, it must toggle the ring cycle state.
 *
 * Producer rules:
 * 1. Check if ring is full before you enqueue.
 * 2. Write the ring cycle state to the cycle bit in the TRB you're enqueuing.
 *    Update enqueue pointer between each write (which may update the ring
 *    cycle state).
 * 3. Notify consumer.  If SW is producer, it rings the doorbell for command
 *    and endpoint rings.  If the HC is the producer for the event ring,
 *    it generates an interrupt according to interrupt moderation rules.
 *
 * Consumer rules:
 * 1. Check if TRB belongs to you.  If the cycle bit == your ring cycle state,
 *    the TRB is owned by the consumer.
 * 2. Update dequeue pointer (which may update the ring cycle state) and
 *    continue processing TRBs until you reach a TRB which is not owned by you.
 * 3. Notify the producer.  SW is the consumer for the event ring, and it
 *    updates event ring dequeue pointer.  HC is the consumer for the command and
 *    endpoint rings; it generates events on the event ring for these.
 */

#include <linux/scatterlist.h>
#include <linux/slab.h>
#include "xhci.h"

/*
 * Returns zero if the TRB isn't in this segment, otherwise it returns the DMA
 * address of the TRB.
 */
dma_addr_t xhci_trb_virt_to_dma(struct xhci_segment *seg,
		union xhci_trb *trb)
{
	unsigned long segment_offset;

	if (!seg || !trb || trb < seg->trbs)
		return 0;
	/* offset in TRBs */
	segment_offset = trb - seg->trbs;
	if (segment_offset > TRBS_PER_SEGMENT)
		return 0;
	return seg->dma + (segment_offset * sizeof(*trb));
}

/* Does this link TRB point to the first segment in a ring,
 * or was the previous TRB the last TRB on the last segment in the ERST?
 */
static inline bool last_trb_on_last_seg(struct xhci_hcd *xhci, struct xhci_ring *ring,
		struct xhci_segment *seg, union xhci_trb *trb)
{
	if (ring == xhci->event_ring)
		return (trb == &seg->trbs[TRBS_PER_SEGMENT]) &&
			(seg->next == xhci->event_ring->first_seg);
	else
		return trb->link.control & LINK_TOGGLE;
}

/* Is this TRB a link TRB or was the last TRB the last TRB in this event ring
 * segment?  I.e. would the updated event TRB pointer step off the end of the
 * event seg?
 */
static inline int last_trb(struct xhci_hcd *xhci, struct xhci_ring *ring,
		struct xhci_segment *seg, union xhci_trb *trb)
{
	if (ring == xhci->event_ring)
		return trb == &seg->trbs[TRBS_PER_SEGMENT];
	else
		return (trb->link.control & TRB_TYPE_BITMASK) == TRB_TYPE(TRB_LINK);
}

static inline int enqueue_is_link_trb(struct xhci_ring *ring)
{
	struct xhci_link_trb *link = &ring->enqueue->link;
	return ((link->control & TRB_TYPE_BITMASK) == TRB_TYPE(TRB_LINK));
}

/* Updates trb to point to the next TRB in the ring, and updates seg if the next
 * TRB is in a new segment.  This does not skip over link TRBs, and it does not
 * affect the ring dequeue or enqueue pointers.
 */
static void next_trb(struct xhci_hcd *xhci,
		struct xhci_ring *ring,
		struct xhci_segment **seg,
		union xhci_trb **trb)
{
	if (last_trb(xhci, ring, *seg, *trb)) {
		*seg = (*seg)->next;
		*trb = ((*seg)->trbs);
	} else {
		(*trb)++;
	}
}

/*
 * See Cycle bit rules.  SW is the consumer for the event ring only.
 * Don't make a ring full of link TRBs.  That would be dumb and this would loop.
 */
static void inc_deq(struct xhci_hcd *xhci, struct xhci_ring *ring, bool consumer)
{
	union xhci_trb *next = ++(ring->dequeue);
	unsigned long long addr;

	ring->deq_updates++;
	/* Update the dequeue pointer further if that was a link TRB or we're at
	 * the end of an event ring segment (which doesn't have link TRBs)
	 */
	while (last_trb(xhci, ring, ring->deq_seg, next)) {
		if (consumer && last_trb_on_last_seg(xhci, ring, ring->deq_seg, next)) {
			ring->cycle_state = (ring->cycle_state ? 0 : 1);
			if (!in_interrupt())
				xhci_dbg(xhci, "Toggle cycle state for ring %p = %i\n",
						ring,
						(unsigned int) ring->cycle_state);
		}
		ring->deq_seg = ring->deq_seg->next;
		ring->dequeue = ring->deq_seg->trbs;
		next = ring->dequeue;
	}
	addr = (unsigned long long) xhci_trb_virt_to_dma(ring->deq_seg, ring->dequeue);
	if (ring == xhci->event_ring)
		xhci_dbg(xhci, "Event ring deq = 0x%llx (DMA)\n", addr);
	else if (ring == xhci->cmd_ring)
		xhci_dbg(xhci, "Command ring deq = 0x%llx (DMA)\n", addr);
	else
		xhci_dbg(xhci, "Ring deq = 0x%llx (DMA)\n", addr);
}

/*
 * See Cycle bit rules.  SW is the consumer for the event ring only.
 * Don't make a ring full of link TRBs.  That would be dumb and this would loop.
 *
 * If we've just enqueued a TRB that is in the middle of a TD (meaning the
 * chain bit is set), then set the chain bit in all the following link TRBs.
 * If we've enqueued the last TRB in a TD, make sure the following link TRBs
 * have their chain bit cleared (so that each Link TRB is a separate TD).
 *
 * Section 6.4.4.1 of the 0.95 spec says link TRBs cannot have the chain bit
 * set, but other sections talk about dealing with the chain bit set.  This was
 * fixed in the 0.96 specification errata, but we have to assume that all 0.95
 * xHCI hardware can't handle the chain bit being cleared on a link TRB.
 *
 * @more_trbs_coming: Will you enqueue more TRBs before calling
 *			prepare_transfer()?
 */
static void inc_enq(struct xhci_hcd *xhci, struct xhci_ring *ring,
		bool consumer, bool more_trbs_coming)
{
	u32 chain;
	union xhci_trb *next;
	unsigned long long addr;

	chain = ring->enqueue->generic.field[3] & TRB_CHAIN;
	next = ++(ring->enqueue);

	ring->enq_updates++;
	/* Update the enqueue pointer further if that was a link TRB or we're at
	 * the end of an event ring segment (which doesn't have link TRBs)
	 */
	while (last_trb(xhci, ring, ring->enq_seg, next)) {
		if (!consumer) {
			if (ring != xhci->event_ring) {
				/*
				 * If the caller doesn't plan on enqueueing more
				 * TDs before ringing the doorbell, then we
				 * don't want to give the link TRB to the
				 * hardware just yet.  We'll give the link TRB
				 * back in prepare_ring() just before we enqueue
				 * the TD at the top of the ring.
				 */
				if (!chain && !more_trbs_coming)
					break;

				/* If we're not dealing with 0.95 hardware,
				 * carry over the chain bit of the previous TRB
				 * (which may mean the chain bit is cleared).
				 */
				if (!xhci_link_trb_quirk(xhci)) {
					next->link.control &= ~TRB_CHAIN;
					next->link.control |= chain;
				}
				/* Give this link TRB to the hardware */
				wmb();
				next->link.control ^= TRB_CYCLE;
			}
			/* Toggle the cycle bit after the last ring segment. */
			if (last_trb_on_last_seg(xhci, ring, ring->enq_seg, next)) {
				ring->cycle_state = (ring->cycle_state ? 0 : 1);
				if (!in_interrupt())
					xhci_dbg(xhci, "Toggle cycle state for ring %p = %i\n",
							ring,
							(unsigned int) ring->cycle_state);
			}
		}
		ring->enq_seg = ring->enq_seg->next;
		ring->enqueue = ring->enq_seg->trbs;
		next = ring->enqueue;
	}
	addr = (unsigned long long) xhci_trb_virt_to_dma(ring->enq_seg, ring->enqueue);
	if (ring == xhci->event_ring)
		xhci_dbg(xhci, "Event ring enq = 0x%llx (DMA)\n", addr);
	else if (ring == xhci->cmd_ring)
		xhci_dbg(xhci, "Command ring enq = 0x%llx (DMA)\n", addr);
	else
		xhci_dbg(xhci, "Ring enq = 0x%llx (DMA)\n", addr);
}

/*
 * Check to see if there's room to enqueue num_trbs on the ring.  See rules
 * above.
 * FIXME: this would be simpler and faster if we just kept track of the number
 * of free TRBs in a ring.
 */
static int room_on_ring(struct xhci_hcd *xhci, struct xhci_ring *ring,
		unsigned int num_trbs)
{
	int i;
	union xhci_trb *enq = ring->enqueue;
	struct xhci_segment *enq_seg = ring->enq_seg;
	struct xhci_segment *cur_seg;
	unsigned int left_on_ring;

	/* If we are currently pointing to a link TRB, advance the
	 * enqueue pointer before checking for space */
	while (last_trb(xhci, ring, enq_seg, enq)) {
		enq_seg = enq_seg->next;
		enq = enq_seg->trbs;
	}

	/* Check if ring is empty */
	if (enq == ring->dequeue) {
		/* Can't use link trbs */
		left_on_ring = TRBS_PER_SEGMENT - 1;
		for (cur_seg = enq_seg->next; cur_seg != enq_seg;
				cur_seg = cur_seg->next)
			left_on_ring += TRBS_PER_SEGMENT - 1;

		/* Always need one TRB free in the ring. */
		left_on_ring -= 1;
		if (num_trbs > left_on_ring) {
			xhci_warn(xhci, "Not enough room on ring; "
					"need %u TRBs, %u TRBs left\n",
					num_trbs, left_on_ring);
			return 0;
		}
		return 1;
	}
	/* Make sure there's an extra empty TRB available */
	for (i = 0; i <= num_trbs; ++i) {
		if (enq == ring->dequeue)
			return 0;
		enq++;
		while (last_trb(xhci, ring, enq_seg, enq)) {
			enq_seg = enq_seg->next;
			enq = enq_seg->trbs;
		}
	}
	return 1;
}

void xhci_set_hc_event_deq(struct xhci_hcd *xhci)
{
	u64 temp;
	dma_addr_t deq;

	deq = xhci_trb_virt_to_dma(xhci->event_ring->deq_seg,
			xhci->event_ring->dequeue);
	if (deq == 0 && !in_interrupt())
		xhci_warn(xhci, "WARN something wrong with SW event ring "
				"dequeue ptr.\n");
	/* Update HC event ring dequeue pointer */
	temp = xhci_read_64(xhci, &xhci->ir_set->erst_dequeue);
	temp &= ERST_PTR_MASK;
	/* Don't clear the EHB bit (which is RW1C) because
	 * there might be more events to service.
	 */
	temp &= ~ERST_EHB;
	xhci_dbg(xhci, "// Write event ring dequeue pointer, preserving EHB bit\n");
	xhci_write_64(xhci, ((u64) deq & (u64) ~ERST_PTR_MASK) | temp,
			&xhci->ir_set->erst_dequeue);
}

/* Ring the host controller doorbell after placing a command on the ring */
void xhci_ring_cmd_db(struct xhci_hcd *xhci)
{
	u32 temp;

	xhci_dbg(xhci, "// Ding dong!\n");
	temp = xhci_readl(xhci, &xhci->dba->doorbell[0]) & DB_MASK;
	xhci_writel(xhci, temp | DB_TARGET_HOST, &xhci->dba->doorbell[0]);
	/* Flush PCI posted writes */
	xhci_readl(xhci, &xhci->dba->doorbell[0]);
}

static void ring_ep_doorbell(struct xhci_hcd *xhci,
		unsigned int slot_id,
		unsigned int ep_index,
		unsigned int stream_id)
{
	struct xhci_virt_ep *ep;
	unsigned int ep_state;
	u32 field;
	__u32 __iomem *db_addr = &xhci->dba->doorbell[slot_id];

	ep = &xhci->devs[slot_id]->eps[ep_index];
	ep_state = ep->ep_state;
	/* Don't ring the doorbell for this endpoint if there are pending
	 * cancellations because we don't want to interrupt processing.
	 * We don't want to restart any stream rings if there's a set dequeue
	 * pointer command pending because the device can choose to start any
	 * stream once the endpoint is on the HW schedule.
	 * FIXME - check all the stream rings for pending cancellations.
	 */
	if (!(ep_state & EP_HALT_PENDING) && !(ep_state & SET_DEQ_PENDING)
			&& !(ep_state & EP_HALTED)) {
		field = xhci_readl(xhci, db_addr) & DB_MASK;
		field |= EPI_TO_DB(ep_index) | STREAM_ID_TO_DB(stream_id);
		xhci_writel(xhci, field, db_addr);
		/* Flush PCI posted writes - FIXME Matthew Wilcox says this
		 * isn't time-critical and we shouldn't make the CPU wait for
		 * the flush.
		 */
		xhci_readl(xhci, db_addr);
	}
}

/* Ring the doorbell for any rings with pending URBs */
static void ring_doorbell_for_active_rings(struct xhci_hcd *xhci,
		unsigned int slot_id,
		unsigned int ep_index)
{
	unsigned int stream_id;
	struct xhci_virt_ep *ep;

	ep = &xhci->devs[slot_id]->eps[ep_index];

	/* A ring has pending URBs if its TD list is not empty */
	if (!(ep->ep_state & EP_HAS_STREAMS)) {
		if (!(list_empty(&ep->ring->td_list)))
			ring_ep_doorbell(xhci, slot_id, ep_index, 0);
		return;
	}

	for (stream_id = 1; stream_id < ep->stream_info->num_streams;
			stream_id++) {
		struct xhci_stream_info *stream_info = ep->stream_info;
		if (!list_empty(&stream_info->stream_rings[stream_id]->td_list))
			ring_ep_doorbell(xhci, slot_id, ep_index, stream_id);
	}
}

/*
 * Find the segment that trb is in.  Start searching in start_seg.
 * If we must move past a segment that has a link TRB with a toggle cycle state
 * bit set, then we will toggle the value pointed at by cycle_state.
 */
static struct xhci_segment *find_trb_seg(
		struct xhci_segment *start_seg,
		union xhci_trb *trb, int *cycle_state)
{
	struct xhci_segment *cur_seg = start_seg;
	struct xhci_generic_trb *generic_trb;

	while (cur_seg->trbs > trb ||
			&cur_seg->trbs[TRBS_PER_SEGMENT - 1] < trb) {
		generic_trb = &cur_seg->trbs[TRBS_PER_SEGMENT - 1].generic;
		if ((generic_trb->field[3] & TRB_TYPE_BITMASK) ==
				TRB_TYPE(TRB_LINK) &&
				(generic_trb->field[3] & LINK_TOGGLE))
			*cycle_state = ~(*cycle_state) & 0x1;
		cur_seg = cur_seg->next;
		if (cur_seg == start_seg)
			/* Looped over the entire list.  Oops! */
			return NULL;
	}
	return cur_seg;
}


static struct xhci_ring *xhci_triad_to_transfer_ring(struct xhci_hcd *xhci,
		unsigned int slot_id, unsigned int ep_index,
		unsigned int stream_id)
{
	struct xhci_virt_ep *ep;

	ep = &xhci->devs[slot_id]->eps[ep_index];
	/* Common case: no streams */
	if (!(ep->ep_state & EP_HAS_STREAMS))
		return ep->ring;

	if (stream_id == 0) {
		xhci_warn(xhci,
				"WARN: Slot ID %u, ep index %u has streams, "
				"but URB has no stream ID.\n",
				slot_id, ep_index);
		return NULL;
	}

	if (stream_id < ep->stream_info->num_streams)
		return ep->stream_info->stream_rings[stream_id];

	xhci_warn(xhci,
			"WARN: Slot ID %u, ep index %u has "
			"stream IDs 1 to %u allocated, "
			"but stream ID %u is requested.\n",
			slot_id, ep_index,
			ep->stream_info->num_streams - 1,
			stream_id);
	return NULL;
}

/* Get the right ring for the given URB.
 * If the endpoint supports streams, boundary check the URB's stream ID.
 * If the endpoint doesn't support streams, return the singular endpoint ring.
 */
static struct xhci_ring *xhci_urb_to_transfer_ring(struct xhci_hcd *xhci,
		struct urb *urb)
{
	return xhci_triad_to_transfer_ring(xhci, urb->dev->slot_id,
			xhci_get_endpoint_index(&urb->ep->desc), urb->stream_id);
}

/*
 * Move the xHC's endpoint ring dequeue pointer past cur_td.
 * Record the new state of the xHC's endpoint ring dequeue segment,
 * dequeue pointer, and new consumer cycle state in state.
 * Update our internal representation of the ring's dequeue pointer.
 *
 * We do this in three jumps:
 *  - First we update our new ring state to be the same as when the xHC stopped.
 *  - Then we traverse the ring to find the segment that contains
 *    the last TRB in the TD.  We toggle the xHC's new cycle state when we pass
 *    any link TRBs with the toggle cycle bit set.
 *  - Finally we move the dequeue state one TRB further, toggling the cycle bit
 *    if we've moved it past a link TRB with the toggle cycle bit set.
 */
void xhci_find_new_dequeue_state(struct xhci_hcd *xhci,
		unsigned int slot_id, unsigned int ep_index,
		unsigned int stream_id, struct xhci_td *cur_td,
		struct xhci_dequeue_state *state)
{
	struct xhci_virt_device *dev = xhci->devs[slot_id];
	struct xhci_ring *ep_ring;
	struct xhci_generic_trb *trb;
	struct xhci_ep_ctx *ep_ctx;
	dma_addr_t addr;

	ep_ring = xhci_triad_to_transfer_ring(xhci, slot_id,
			ep_index, stream_id);
	if (!ep_ring) {
		xhci_warn(xhci, "WARN can't find new dequeue state "
				"for invalid stream ID %u.\n",
				stream_id);
		return;
	}
	state->new_cycle_state = 0;
	xhci_dbg(xhci, "Finding segment containing stopped TRB.\n");
	state->new_deq_seg = find_trb_seg(cur_td->start_seg,
			dev->eps[ep_index].stopped_trb,
			&state->new_cycle_state);
	if (!state->new_deq_seg)
		BUG();
	/* Dig out the cycle state saved by the xHC during the stop ep cmd */
	xhci_dbg(xhci, "Finding endpoint context\n");
	ep_ctx = xhci_get_ep_ctx(xhci, dev->out_ctx, ep_index);
	state->new_cycle_state = 0x1 & ep_ctx->deq;

	state->new_deq_ptr = cur_td->last_trb;
	xhci_dbg(xhci, "Finding segment containing last TRB in TD.\n");
	state->new_deq_seg = find_trb_seg(state->new_deq_seg,
			state->new_deq_ptr,
			&state->new_cycle_state);
	if (!state->new_deq_seg)
		BUG();

	trb = &state->new_deq_ptr->generic;
	if ((trb->field[3] & TRB_TYPE_BITMASK) == TRB_TYPE(TRB_LINK) &&
			(trb->field[3] & LINK_TOGGLE))
		state->new_cycle_state = ~(state->new_cycle_state) & 0x1;
	next_trb(xhci, ep_ring, &state->new_deq_seg, &state->new_deq_ptr);

	/* Don't update the ring cycle state for the producer (us). */
	xhci_dbg(xhci, "New dequeue segment = %p (virtual)\n",
			state->new_deq_seg);
	addr = xhci_trb_virt_to_dma(state->new_deq_seg, state->new_deq_ptr);
	xhci_dbg(xhci, "New dequeue pointer = 0x%llx (DMA)\n",
			(unsigned long long) addr);
	xhci_dbg(xhci, "Setting dequeue pointer in internal ring state.\n");
	ep_ring->dequeue = state->new_deq_ptr;
	ep_ring->deq_seg = state->new_deq_seg;
}

static void td_to_noop(struct xhci_hcd *xhci, struct xhci_ring *ep_ring,
		struct xhci_td *cur_td)
{
	struct xhci_segment *cur_seg;
	union xhci_trb *cur_trb;

	for (cur_seg = cur_td->start_seg, cur_trb = cur_td->first_trb;
			true;
			next_trb(xhci, ep_ring, &cur_seg, &cur_trb)) {
		if ((cur_trb->generic.field[3] & TRB_TYPE_BITMASK) ==
				TRB_TYPE(TRB_LINK)) {
			/* Unchain any chained Link TRBs, but
			 * leave the pointers intact.
			 */
			cur_trb->generic.field[3] &= ~TRB_CHAIN;
			xhci_dbg(xhci, "Cancel (unchain) link TRB\n");
			xhci_dbg(xhci, "Address = %p (0x%llx dma); "
					"in seg %p (0x%llx dma)\n",
					cur_trb,
					(unsigned long long)xhci_trb_virt_to_dma(cur_seg, cur_trb),
					cur_seg,
					(unsigned long long)cur_seg->dma);
		} else {
			cur_trb->generic.field[0] = 0;
			cur_trb->generic.field[1] = 0;
			cur_trb->generic.field[2] = 0;
			/* Preserve only the cycle bit of this TRB */
			cur_trb->generic.field[3] &= TRB_CYCLE;
			cur_trb->generic.field[3] |= TRB_TYPE(TRB_TR_NOOP);
			xhci_dbg(xhci, "Cancel TRB %p (0x%llx dma) "
					"in seg %p (0x%llx dma)\n",
					cur_trb,
					(unsigned long long)xhci_trb_virt_to_dma(cur_seg, cur_trb),
					cur_seg,
					(unsigned long long)cur_seg->dma);
		}
		if (cur_trb == cur_td->last_trb)
			break;
	}
}

static int queue_set_tr_deq(struct xhci_hcd *xhci, int slot_id,
		unsigned int ep_index, unsigned int stream_id,
		struct xhci_segment *deq_seg,
		union xhci_trb *deq_ptr, u32 cycle_state);

void xhci_queue_new_dequeue_state(struct xhci_hcd *xhci,
		unsigned int slot_id, unsigned int ep_index,
		unsigned int stream_id,
		struct xhci_dequeue_state *deq_state)
{
	struct xhci_virt_ep *ep = &xhci->devs[slot_id]->eps[ep_index];

	xhci_dbg(xhci, "Set TR Deq Ptr cmd, new deq seg = %p (0x%llx dma), "
			"new deq ptr = %p (0x%llx dma), new cycle = %u\n",
			deq_state->new_deq_seg,
			(unsigned long long)deq_state->new_deq_seg->dma,
			deq_state->new_deq_ptr,
			(unsigned long long)xhci_trb_virt_to_dma(deq_state->new_deq_seg, deq_state->new_deq_ptr),
			deq_state->new_cycle_state);
	queue_set_tr_deq(xhci, slot_id, ep_index, stream_id,
			deq_state->new_deq_seg,
			deq_state->new_deq_ptr,
			(u32) deq_state->new_cycle_state);
	/* Stop the TD queueing code from ringing the doorbell until
	 * this command completes.  The HC won't set the dequeue pointer
	 * if the ring is running, and ringing the doorbell starts the
	 * ring running.
	 */
	ep->ep_state |= SET_DEQ_PENDING;
}

static inline void xhci_stop_watchdog_timer_in_irq(struct xhci_hcd *xhci,
		struct xhci_virt_ep *ep)
{
	ep->ep_state &= ~EP_HALT_PENDING;
	/* Can't del_timer_sync in interrupt, so we attempt to cancel.  If the
	 * timer is running on another CPU, we don't decrement stop_cmds_pending
	 * (since we didn't successfully stop the watchdog timer).
	 */
	if (del_timer(&ep->stop_cmd_timer))
		ep->stop_cmds_pending--;
}

/* Must be called with xhci->lock held in interrupt context */
static void xhci_giveback_urb_in_irq(struct xhci_hcd *xhci,
		struct xhci_td *cur_td, int status, char *adjective)
{
	struct usb_hcd *hcd = xhci_to_hcd(xhci);
	struct urb *urb;
	struct urb_priv *urb_priv;

	urb = cur_td->urb;
	urb_priv = urb->hcpriv;
	urb_priv->td_cnt++;

	/* Only giveback urb when this is the last td in urb */
	if (urb_priv->td_cnt == urb_priv->length) {
		usb_hcd_unlink_urb_from_ep(hcd, urb);
		xhci_dbg(xhci, "Giveback %s URB %p\n", adjective, urb);

		spin_unlock(&xhci->lock);
		usb_hcd_giveback_urb(hcd, urb, status);
		xhci_urb_free_priv(xhci, urb_priv);
		spin_lock(&xhci->lock);
		xhci_dbg(xhci, "%s URB given back\n", adjective);
	}
}

/*
 * When we get a command completion for a Stop Endpoint Command, we need to
 * unlink any cancelled TDs from the ring.  There are two ways to do that:
 *
 * 1. If the HW was in the middle of processing the TD that needs to be
 *    cancelled, then we must move the ring's dequeue pointer past the last TRB
 *    in the TD with a Set Dequeue Pointer Command.
 * 2. Otherwise, we turn all the TRBs in the TD into No-op TRBs (with the chain
 *    bit cleared) so that the HW will skip over them.
 */
static void handle_stopped_endpoint(struct xhci_hcd *xhci,
		union xhci_trb *trb)
{
	unsigned int slot_id;
	unsigned int ep_index;
	struct xhci_ring *ep_ring;
	struct xhci_virt_ep *ep;
	struct list_head *entry;
	struct xhci_td *cur_td = NULL;
	struct xhci_td *last_unlinked_td;

	struct xhci_dequeue_state deq_state;

	memset(&deq_state, 0, sizeof(deq_state));
	slot_id = TRB_TO_SLOT_ID(trb->generic.field[3]);
	ep_index = TRB_TO_EP_INDEX(trb->generic.field[3]);
	ep = &xhci->devs[slot_id]->eps[ep_index];

	if (list_empty(&ep->cancelled_td_list)) {
		xhci_stop_watchdog_timer_in_irq(xhci, ep);
		ring_doorbell_for_active_rings(xhci, slot_id, ep_index);
		return;
	}

	/* Fix up the ep ring first, so HW stops executing cancelled TDs.
	 * We have the xHCI lock, so nothing can modify this list until we drop
	 * it.  We're also in the event handler, so we can't get re-interrupted
	 * if another Stop Endpoint command completes
	 */
	list_for_each(entry, &ep->cancelled_td_list) {
		cur_td = list_entry(entry, struct xhci_td, cancelled_td_list);
		xhci_dbg(xhci, "Cancelling TD starting at %p, 0x%llx (dma).\n",
				cur_td->first_trb,
				(unsigned long long)xhci_trb_virt_to_dma(cur_td->start_seg, cur_td->first_trb));
		ep_ring = xhci_urb_to_transfer_ring(xhci, cur_td->urb);
		if (!ep_ring) {
			/* This shouldn't happen unless a driver is mucking
			 * with the stream ID after submission.  This will
			 * leave the TD on the hardware ring, and the hardware
			 * will try to execute it, and may access a buffer
			 * that has already been freed.  In the best case, the
			 * hardware will execute it, and the event handler will
			 * ignore the completion event for that TD, since it was
			 * removed from the td_list for that endpoint.  In
			 * short, don't muck with the stream ID after
			 * submission.
			 */
			xhci_warn(xhci, "WARN Cancelled URB %p "
					"has invalid stream ID %u.\n",
					cur_td->urb,
					cur_td->urb->stream_id);
			goto remove_finished_td;
		}
		/*
		 * If we stopped on the TD we need to cancel, then we have to
		 * move the xHC endpoint ring dequeue pointer past this TD.
		 */
		if (cur_td == ep->stopped_td)
			xhci_find_new_dequeue_state(xhci, slot_id, ep_index,
					cur_td->urb->stream_id,
					cur_td, &deq_state);
		else
			td_to_noop(xhci, ep_ring, cur_td);
remove_finished_td:
		/*
		 * The event handler won't see a completion for this TD anymore,
		 * so remove it from the endpoint ring's TD list.  Keep it in
		 * the cancelled TD list for URB completion later.
		 */
		list_del(&cur_td->td_list);
	}
	last_unlinked_td = cur_td;
727 | xhci_stop_watchdog_timer_in_irq(xhci, ep); | 727 | xhci_stop_watchdog_timer_in_irq(xhci, ep); |
728 | 728 | ||
729 | /* If necessary, queue a Set Transfer Ring Dequeue Pointer command */ | 729 | /* If necessary, queue a Set Transfer Ring Dequeue Pointer command */ |
730 | if (deq_state.new_deq_ptr && deq_state.new_deq_seg) { | 730 | if (deq_state.new_deq_ptr && deq_state.new_deq_seg) { |
731 | xhci_queue_new_dequeue_state(xhci, | 731 | xhci_queue_new_dequeue_state(xhci, |
732 | slot_id, ep_index, | 732 | slot_id, ep_index, |
733 | ep->stopped_td->urb->stream_id, | 733 | ep->stopped_td->urb->stream_id, |
734 | &deq_state); | 734 | &deq_state); |
735 | xhci_ring_cmd_db(xhci); | 735 | xhci_ring_cmd_db(xhci); |
736 | } else { | 736 | } else { |
737 | /* Otherwise ring the doorbell(s) to restart queued transfers */ | 737 | /* Otherwise ring the doorbell(s) to restart queued transfers */ |
738 | ring_doorbell_for_active_rings(xhci, slot_id, ep_index); | 738 | ring_doorbell_for_active_rings(xhci, slot_id, ep_index); |
739 | } | 739 | } |
740 | ep->stopped_td = NULL; | 740 | ep->stopped_td = NULL; |
741 | ep->stopped_trb = NULL; | 741 | ep->stopped_trb = NULL; |
742 | 742 | ||
743 | /* | 743 | /* |
744 | * Drop the lock and complete the URBs in the cancelled TD list. | 744 | * Drop the lock and complete the URBs in the cancelled TD list. |
745 | * New TDs to be cancelled might be added to the end of the list before | 745 | * New TDs to be cancelled might be added to the end of the list before |
746 | * we can complete all the URBs for the TDs we already unlinked. | 746 | * we can complete all the URBs for the TDs we already unlinked. |
747 | * So stop when we've completed the URB for the last TD we unlinked. | 747 | * So stop when we've completed the URB for the last TD we unlinked. |
748 | */ | 748 | */ |
749 | do { | 749 | do { |
750 | cur_td = list_entry(ep->cancelled_td_list.next, | 750 | cur_td = list_entry(ep->cancelled_td_list.next, |
751 | struct xhci_td, cancelled_td_list); | 751 | struct xhci_td, cancelled_td_list); |
752 | list_del(&cur_td->cancelled_td_list); | 752 | list_del(&cur_td->cancelled_td_list); |
753 | 753 | ||
754 | /* Clean up the cancelled URB */ | 754 | /* Clean up the cancelled URB */ |
755 | /* Doesn't matter what we pass for status, since the core will | 755 | /* Doesn't matter what we pass for status, since the core will |
756 | * just overwrite it (because the URB has been unlinked). | 756 | * just overwrite it (because the URB has been unlinked). |
757 | */ | 757 | */ |
758 | xhci_giveback_urb_in_irq(xhci, cur_td, 0, "cancelled"); | 758 | xhci_giveback_urb_in_irq(xhci, cur_td, 0, "cancelled"); |
759 | 759 | ||
760 | /* Stop processing the cancelled list if the watchdog timer is | 760 | /* Stop processing the cancelled list if the watchdog timer is |
761 | * running. | 761 | * running. |
762 | */ | 762 | */ |
763 | if (xhci->xhc_state & XHCI_STATE_DYING) | 763 | if (xhci->xhc_state & XHCI_STATE_DYING) |
764 | return; | 764 | return; |
765 | } while (cur_td != last_unlinked_td); | 765 | } while (cur_td != last_unlinked_td); |
766 | 766 | ||
767 | /* Return to the event handler with xhci->lock re-acquired */ | 767 | /* Return to the event handler with xhci->lock re-acquired */ |
768 | } | 768 | } |
769 | 769 | ||
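
The no-op conversion named in strategy 2 of the comment above is done by td_to_noop(), defined earlier in this file. As a rough standalone illustration of the idea only — a user-space toy, not the driver's code — the sketch below rewrites each TRB of a TD as a No Op TRB (xHCI type 8) with the chain bit cleared, preserving only the cycle bit so the producer cycle state stays valid. The TD here is a flat array; the real td_to_noop() also has to follow link TRBs across ring segments.

    #include <stdio.h>
    #include <stdint.h>

    #define TRB_CYCLE   (1u << 0)   /* producer cycle state bit */
    #define TRB_CHAIN   (1u << 4)   /* this TRB chains to the next one */
    #define TRB_TYPE(t) ((t) << 10) /* TRB type lives in bits 15:10 */
    #define TRB_TR_NOOP 8           /* xHCI transfer ring No Op TRB */
    #define TRB_NORMAL  1           /* xHCI Normal TRB */

    struct trb { uint32_t field[4]; };

    /* Rewrite [first, last] as no-ops; keep only each TRB's cycle bit. */
    static void td_to_noop_demo(struct trb *first, struct trb *last)
    {
        for (struct trb *cur = first; ; cur++) {
            uint32_t cycle = cur->field[3] & TRB_CYCLE;

            cur->field[3] = cycle | TRB_TYPE(TRB_TR_NOOP);
            if (cur == last)
                break;
        }
    }

    int main(void)
    {
        struct trb td[3] = {
            { { 0, 0, 0, TRB_CYCLE | TRB_CHAIN | TRB_TYPE(TRB_NORMAL) } },
            { { 0, 0, 0, TRB_CYCLE | TRB_CHAIN | TRB_TYPE(TRB_NORMAL) } },
            { { 0, 0, 0, TRB_CYCLE | TRB_TYPE(TRB_NORMAL) } },
        };

        td_to_noop_demo(&td[0], &td[2]);
        for (int i = 0; i < 3; i++)
            printf("trb %d control = %#x\n", i, (unsigned) td[i].field[3]);
        return 0;
    }
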
770 | /* Watchdog timer function for when a stop endpoint command fails to complete. | 770 | /* Watchdog timer function for when a stop endpoint command fails to complete. |
771 | * In this case, we assume the host controller is broken or dying or dead. The | 771 | * In this case, we assume the host controller is broken or dying or dead. The |
772 | * host may still be completing some other events, so we have to be careful to | 772 | * host may still be completing some other events, so we have to be careful to |
773 | * let the event ring handler and the URB dequeueing/enqueueing functions know | 773 | * let the event ring handler and the URB dequeueing/enqueueing functions know |
774 | * through xhci->xhc_state. | 774 | * through xhci->xhc_state. |
775 | * | 775 | * |
776 | * The timer may also fire if the host takes a very long time to respond to the | 776 | * The timer may also fire if the host takes a very long time to respond to the |
777 | * command, and the stop endpoint command completion handler cannot delete the | 777 | * command, and the stop endpoint command completion handler cannot delete the |
778 | * timer before the timer function is called. Another endpoint cancellation may | 778 | * timer before the timer function is called. Another endpoint cancellation may |
779 | * sneak in before the timer function can grab the lock, and that may queue | 779 | * sneak in before the timer function can grab the lock, and that may queue |
780 | * another stop endpoint command and add the timer back. So we cannot use a | 780 | * another stop endpoint command and add the timer back. So we cannot use a |
781 | * simple flag to say whether there is a pending stop endpoint command for a | 781 | * simple flag to say whether there is a pending stop endpoint command for a |
782 | * particular endpoint. | 782 | * particular endpoint. |
783 | * | 783 | * |
784 | * Instead we use a combination of that flag and a counter for the number of | 784 | * Instead we use a combination of that flag and a counter for the number of |
785 | * pending stop endpoint commands. If the timer is the tail end of the last | 785 | * pending stop endpoint commands. If the timer is the tail end of the last |
786 | * stop endpoint command, and the endpoint's command is still pending, we assume | 786 | * stop endpoint command, and the endpoint's command is still pending, we assume |
787 | * the host is dying. | 787 | * the host is dying. |
788 | */ | 788 | */ |
789 | void xhci_stop_endpoint_command_watchdog(unsigned long arg) | 789 | void xhci_stop_endpoint_command_watchdog(unsigned long arg) |
790 | { | 790 | { |
791 | struct xhci_hcd *xhci; | 791 | struct xhci_hcd *xhci; |
792 | struct xhci_virt_ep *ep; | 792 | struct xhci_virt_ep *ep; |
793 | struct xhci_virt_ep *temp_ep; | 793 | struct xhci_virt_ep *temp_ep; |
794 | struct xhci_ring *ring; | 794 | struct xhci_ring *ring; |
795 | struct xhci_td *cur_td; | 795 | struct xhci_td *cur_td; |
796 | int ret, i, j; | 796 | int ret, i, j; |
797 | 797 | ||
798 | ep = (struct xhci_virt_ep *) arg; | 798 | ep = (struct xhci_virt_ep *) arg; |
799 | xhci = ep->xhci; | 799 | xhci = ep->xhci; |
800 | 800 | ||
801 | spin_lock(&xhci->lock); | 801 | spin_lock(&xhci->lock); |
802 | 802 | ||
803 | ep->stop_cmds_pending--; | 803 | ep->stop_cmds_pending--; |
804 | if (xhci->xhc_state & XHCI_STATE_DYING) { | 804 | if (xhci->xhc_state & XHCI_STATE_DYING) { |
805 | xhci_dbg(xhci, "Stop EP timer ran, but another timer marked " | 805 | xhci_dbg(xhci, "Stop EP timer ran, but another timer marked " |
806 | "xHCI as DYING, exiting.\n"); | 806 | "xHCI as DYING, exiting.\n"); |
807 | spin_unlock(&xhci->lock); | 807 | spin_unlock(&xhci->lock); |
808 | return; | 808 | return; |
809 | } | 809 | } |
810 | if (!(ep->stop_cmds_pending == 0 && (ep->ep_state & EP_HALT_PENDING))) { | 810 | if (!(ep->stop_cmds_pending == 0 && (ep->ep_state & EP_HALT_PENDING))) { |
811 | xhci_dbg(xhci, "Stop EP timer ran, but no command pending, " | 811 | xhci_dbg(xhci, "Stop EP timer ran, but no command pending, " |
812 | "exiting.\n"); | 812 | "exiting.\n"); |
813 | spin_unlock(&xhci->lock); | 813 | spin_unlock(&xhci->lock); |
814 | return; | 814 | return; |
815 | } | 815 | } |
816 | 816 | ||
817 | xhci_warn(xhci, "xHCI host not responding to stop endpoint command.\n"); | 817 | xhci_warn(xhci, "xHCI host not responding to stop endpoint command.\n"); |
818 | xhci_warn(xhci, "Assuming host is dying, halting host.\n"); | 818 | xhci_warn(xhci, "Assuming host is dying, halting host.\n"); |
819 | /* Oops, HC is dead or dying or at least not responding to the stop | 819 | /* Oops, HC is dead or dying or at least not responding to the stop |
820 | * endpoint command. | 820 | * endpoint command. |
821 | */ | 821 | */ |
822 | xhci->xhc_state |= XHCI_STATE_DYING; | 822 | xhci->xhc_state |= XHCI_STATE_DYING; |
823 | /* Disable interrupts from the host controller and start halting it */ | 823 | /* Disable interrupts from the host controller and start halting it */ |
824 | xhci_quiesce(xhci); | 824 | xhci_quiesce(xhci); |
825 | spin_unlock(&xhci->lock); | 825 | spin_unlock(&xhci->lock); |
826 | 826 | ||
827 | ret = xhci_halt(xhci); | 827 | ret = xhci_halt(xhci); |
828 | 828 | ||
829 | spin_lock(&xhci->lock); | 829 | spin_lock(&xhci->lock); |
830 | if (ret < 0) { | 830 | if (ret < 0) { |
831 | /* This is bad; the host is not responding to commands and it's | 831 | /* This is bad; the host is not responding to commands and it's |
832 | * not allowing itself to be halted. At least interrupts are | 832 | * not allowing itself to be halted. At least interrupts are |
833 | * disabled, so we can set HC_STATE_HALT and notify the | 833 | * disabled, so we can set HC_STATE_HALT and notify the |
834 | * USB core. But if we call usb_hc_died(), it will attempt to | 834 | * USB core. But if we call usb_hc_died(), it will attempt to |
835 | * disconnect all device drivers under this host. Those | 835 | * disconnect all device drivers under this host. Those |
836 | * disconnect() methods will wait for all URBs to be unlinked, | 836 | * disconnect() methods will wait for all URBs to be unlinked, |
837 | * so we must complete them. | 837 | * so we must complete them. |
838 | */ | 838 | */ |
839 | xhci_warn(xhci, "Non-responsive xHCI host is not halting.\n"); | 839 | xhci_warn(xhci, "Non-responsive xHCI host is not halting.\n"); |
840 | xhci_warn(xhci, "Completing active URBs anyway.\n"); | 840 | xhci_warn(xhci, "Completing active URBs anyway.\n"); |
841 | /* We could turn all TDs on the rings to no-ops. This won't | 841 | /* We could turn all TDs on the rings to no-ops. This won't |
842 | * help if the host has cached part of the ring, and is slow if | 842 | * help if the host has cached part of the ring, and is slow if |
843 | * we want to preserve the cycle bit. Skip it and hope the host | 843 | * we want to preserve the cycle bit. Skip it and hope the host |
844 | * doesn't touch the memory. | 844 | * doesn't touch the memory. |
845 | */ | 845 | */ |
846 | } | 846 | } |
847 | for (i = 0; i < MAX_HC_SLOTS; i++) { | 847 | for (i = 0; i < MAX_HC_SLOTS; i++) { |
848 | if (!xhci->devs[i]) | 848 | if (!xhci->devs[i]) |
849 | continue; | 849 | continue; |
850 | for (j = 0; j < 31; j++) { | 850 | for (j = 0; j < 31; j++) { |
851 | temp_ep = &xhci->devs[i]->eps[j]; | 851 | temp_ep = &xhci->devs[i]->eps[j]; |
852 | ring = temp_ep->ring; | 852 | ring = temp_ep->ring; |
853 | if (!ring) | 853 | if (!ring) |
854 | continue; | 854 | continue; |
855 | xhci_dbg(xhci, "Killing URBs for slot ID %u, " | 855 | xhci_dbg(xhci, "Killing URBs for slot ID %u, " |
856 | "ep index %u\n", i, j); | 856 | "ep index %u\n", i, j); |
857 | while (!list_empty(&ring->td_list)) { | 857 | while (!list_empty(&ring->td_list)) { |
858 | cur_td = list_first_entry(&ring->td_list, | 858 | cur_td = list_first_entry(&ring->td_list, |
859 | struct xhci_td, | 859 | struct xhci_td, |
860 | td_list); | 860 | td_list); |
861 | list_del(&cur_td->td_list); | 861 | list_del(&cur_td->td_list); |
862 | if (!list_empty(&cur_td->cancelled_td_list)) | 862 | if (!list_empty(&cur_td->cancelled_td_list)) |
863 | list_del(&cur_td->cancelled_td_list); | 863 | list_del(&cur_td->cancelled_td_list); |
864 | xhci_giveback_urb_in_irq(xhci, cur_td, | 864 | xhci_giveback_urb_in_irq(xhci, cur_td, |
865 | -ESHUTDOWN, "killed"); | 865 | -ESHUTDOWN, "killed"); |
866 | } | 866 | } |
867 | while (!list_empty(&temp_ep->cancelled_td_list)) { | 867 | while (!list_empty(&temp_ep->cancelled_td_list)) { |
868 | cur_td = list_first_entry( | 868 | cur_td = list_first_entry( |
869 | &temp_ep->cancelled_td_list, | 869 | &temp_ep->cancelled_td_list, |
870 | struct xhci_td, | 870 | struct xhci_td, |
871 | cancelled_td_list); | 871 | cancelled_td_list); |
872 | list_del(&cur_td->cancelled_td_list); | 872 | list_del(&cur_td->cancelled_td_list); |
873 | xhci_giveback_urb_in_irq(xhci, cur_td, | 873 | xhci_giveback_urb_in_irq(xhci, cur_td, |
874 | -ESHUTDOWN, "killed"); | 874 | -ESHUTDOWN, "killed"); |
875 | } | 875 | } |
876 | } | 876 | } |
877 | } | 877 | } |
878 | spin_unlock(&xhci->lock); | 878 | spin_unlock(&xhci->lock); |
879 | xhci_to_hcd(xhci)->state = HC_STATE_HALT; | 879 | xhci_to_hcd(xhci)->state = HC_STATE_HALT; |
880 | xhci_dbg(xhci, "Calling usb_hc_died()\n"); | 880 | xhci_dbg(xhci, "Calling usb_hc_died()\n"); |
881 | usb_hc_died(xhci_to_hcd(xhci)); | 881 | usb_hc_died(xhci_to_hcd(xhci)); |
882 | xhci_dbg(xhci, "xHCI host controller is dead.\n"); | 882 | xhci_dbg(xhci, "xHCI host controller is dead.\n"); |
883 | } | 883 | } |
884 | 884 | ||
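
For context, the arming side of this watchdog is not in this file; it sits in the URB-cancellation path in xhci.c. Roughly, and treating the exact names below as assumptions from this era of the driver, xhci_urb_dequeue() does something like:

    /* Assumed sketch of xhci_urb_dequeue() in xhci.c: flag the halt,
     * bump the counter this watchdog decrements, and (re)arm the
     * per-endpoint timer before queueing the Stop Endpoint command.
     */
    ep->ep_state |= EP_HALT_PENDING;
    ep->stop_cmds_pending++;
    ep->stop_cmd_timer.expires = jiffies +
            XHCI_STOP_EP_CMD_TIMEOUT * HZ;
    add_timer(&ep->stop_cmd_timer);
    xhci_queue_stop_endpoint(xhci, urb->dev->slot_id, ep_index);
    xhci_ring_cmd_db(xhci);

The stop_cmds_pending counter is what lets the handler above distinguish "last timer standing with a command still pending" (host presumed dead) from a stale timer that lost a race with a newer cancellation.
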
885 | /* | 885 | /* |
886 | * When we get a completion for a Set Transfer Ring Dequeue Pointer command, | 886 | * When we get a completion for a Set Transfer Ring Dequeue Pointer command, |
887 | * we need to clear the set deq pending flag in the endpoint's state, so that | 887 | * we need to clear the set deq pending flag in the endpoint's state, so that |
888 | * the TD queueing code can ring the doorbell again. We also need to ring the | 888 | * the TD queueing code can ring the doorbell again. We also need to ring the |
889 | * endpoint doorbell to restart the ring, but only if there aren't more | 889 | * endpoint doorbell to restart the ring, but only if there aren't more |
890 | * cancellations pending. | 890 | * cancellations pending. |
891 | */ | 891 | */ |
892 | static void handle_set_deq_completion(struct xhci_hcd *xhci, | 892 | static void handle_set_deq_completion(struct xhci_hcd *xhci, |
893 | struct xhci_event_cmd *event, | 893 | struct xhci_event_cmd *event, |
894 | union xhci_trb *trb) | 894 | union xhci_trb *trb) |
895 | { | 895 | { |
896 | unsigned int slot_id; | 896 | unsigned int slot_id; |
897 | unsigned int ep_index; | 897 | unsigned int ep_index; |
898 | unsigned int stream_id; | 898 | unsigned int stream_id; |
899 | struct xhci_ring *ep_ring; | 899 | struct xhci_ring *ep_ring; |
900 | struct xhci_virt_device *dev; | 900 | struct xhci_virt_device *dev; |
901 | struct xhci_ep_ctx *ep_ctx; | 901 | struct xhci_ep_ctx *ep_ctx; |
902 | struct xhci_slot_ctx *slot_ctx; | 902 | struct xhci_slot_ctx *slot_ctx; |
903 | 903 | ||
904 | slot_id = TRB_TO_SLOT_ID(trb->generic.field[3]); | 904 | slot_id = TRB_TO_SLOT_ID(trb->generic.field[3]); |
905 | ep_index = TRB_TO_EP_INDEX(trb->generic.field[3]); | 905 | ep_index = TRB_TO_EP_INDEX(trb->generic.field[3]); |
906 | stream_id = TRB_TO_STREAM_ID(trb->generic.field[2]); | 906 | stream_id = TRB_TO_STREAM_ID(trb->generic.field[2]); |
907 | dev = xhci->devs[slot_id]; | 907 | dev = xhci->devs[slot_id]; |
908 | 908 | ||
909 | ep_ring = xhci_stream_id_to_ring(dev, ep_index, stream_id); | 909 | ep_ring = xhci_stream_id_to_ring(dev, ep_index, stream_id); |
910 | if (!ep_ring) { | 910 | if (!ep_ring) { |
911 | xhci_warn(xhci, "WARN Set TR deq ptr command for " | 911 | xhci_warn(xhci, "WARN Set TR deq ptr command for " |
912 | "freed stream ID %u\n", | 912 | "freed stream ID %u\n", |
913 | stream_id); | 913 | stream_id); |
914 | /* XXX: Harmless??? */ | 914 | /* XXX: Harmless??? */ |
915 | dev->eps[ep_index].ep_state &= ~SET_DEQ_PENDING; | 915 | dev->eps[ep_index].ep_state &= ~SET_DEQ_PENDING; |
916 | return; | 916 | return; |
917 | } | 917 | } |
918 | 918 | ||
919 | ep_ctx = xhci_get_ep_ctx(xhci, dev->out_ctx, ep_index); | 919 | ep_ctx = xhci_get_ep_ctx(xhci, dev->out_ctx, ep_index); |
920 | slot_ctx = xhci_get_slot_ctx(xhci, dev->out_ctx); | 920 | slot_ctx = xhci_get_slot_ctx(xhci, dev->out_ctx); |
921 | 921 | ||
922 | if (GET_COMP_CODE(event->status) != COMP_SUCCESS) { | 922 | if (GET_COMP_CODE(event->status) != COMP_SUCCESS) { |
923 | unsigned int ep_state; | 923 | unsigned int ep_state; |
924 | unsigned int slot_state; | 924 | unsigned int slot_state; |
925 | 925 | ||
926 | switch (GET_COMP_CODE(event->status)) { | 926 | switch (GET_COMP_CODE(event->status)) { |
927 | case COMP_TRB_ERR: | 927 | case COMP_TRB_ERR: |
928 | xhci_warn(xhci, "WARN Set TR Deq Ptr cmd invalid because " | 928 | xhci_warn(xhci, "WARN Set TR Deq Ptr cmd invalid because " |
929 | "of stream ID configuration\n"); | 929 | "of stream ID configuration\n"); |
930 | break; | 930 | break; |
931 | case COMP_CTX_STATE: | 931 | case COMP_CTX_STATE: |
932 | xhci_warn(xhci, "WARN Set TR Deq Ptr cmd failed due " | 932 | xhci_warn(xhci, "WARN Set TR Deq Ptr cmd failed due " |
933 | "to incorrect slot or ep state.\n"); | 933 | "to incorrect slot or ep state.\n"); |
934 | ep_state = ep_ctx->ep_info; | 934 | ep_state = ep_ctx->ep_info; |
935 | ep_state &= EP_STATE_MASK; | 935 | ep_state &= EP_STATE_MASK; |
936 | slot_state = slot_ctx->dev_state; | 936 | slot_state = slot_ctx->dev_state; |
937 | slot_state = GET_SLOT_STATE(slot_state); | 937 | slot_state = GET_SLOT_STATE(slot_state); |
938 | xhci_dbg(xhci, "Slot state = %u, EP state = %u\n", | 938 | xhci_dbg(xhci, "Slot state = %u, EP state = %u\n", |
939 | slot_state, ep_state); | 939 | slot_state, ep_state); |
940 | break; | 940 | break; |
941 | case COMP_EBADSLT: | 941 | case COMP_EBADSLT: |
942 | xhci_warn(xhci, "WARN Set TR Deq Ptr cmd failed because " | 942 | xhci_warn(xhci, "WARN Set TR Deq Ptr cmd failed because " |
943 | "slot %u was not enabled.\n", slot_id); | 943 | "slot %u was not enabled.\n", slot_id); |
944 | break; | 944 | break; |
945 | default: | 945 | default: |
946 | xhci_warn(xhci, "WARN Set TR Deq Ptr cmd with unknown " | 946 | xhci_warn(xhci, "WARN Set TR Deq Ptr cmd with unknown " |
947 | "completion code of %u.\n", | 947 | "completion code of %u.\n", |
948 | GET_COMP_CODE(event->status)); | 948 | GET_COMP_CODE(event->status)); |
949 | break; | 949 | break; |
950 | } | 950 | } |
951 | /* OK what do we do now? The endpoint state is hosed, and we | 951 | /* OK what do we do now? The endpoint state is hosed, and we |
952 | * should never get to this point if the synchronization between | 952 | * should never get to this point if the synchronization between |
953 | * queueing and endpoint state is correct. This might happen | 953 | * queueing and endpoint state is correct. This might happen |
954 | * if the device gets disconnected after we've finished | 954 | * if the device gets disconnected after we've finished |
955 | * cancelling URBs, which might not be an error... | 955 | * cancelling URBs, which might not be an error... |
956 | */ | 956 | */ |
957 | } else { | 957 | } else { |
958 | xhci_dbg(xhci, "Successful Set TR Deq Ptr cmd, deq = @%08llx\n", | 958 | xhci_dbg(xhci, "Successful Set TR Deq Ptr cmd, deq = @%08llx\n", |
959 | ep_ctx->deq); | 959 | ep_ctx->deq); |
960 | } | 960 | } |
961 | 961 | ||
962 | dev->eps[ep_index].ep_state &= ~SET_DEQ_PENDING; | 962 | dev->eps[ep_index].ep_state &= ~SET_DEQ_PENDING; |
963 | /* Restart any rings with pending URBs */ | 963 | /* Restart any rings with pending URBs */ |
964 | ring_doorbell_for_active_rings(xhci, slot_id, ep_index); | 964 | ring_doorbell_for_active_rings(xhci, slot_id, ep_index); |
965 | } | 965 | } |
966 | 966 | ||
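
The SET_DEQ_PENDING flag cleared here is set on the queueing side, in xhci_queue_new_dequeue_state() earlier in this file. A trimmed sketch of that pairing (error handling dropped; treat the exact helper signature as an assumption):

    /* Queue the Set TR Dequeue Pointer command, then note that one is
     * in flight so the TD queueing code won't ring the doorbell: the
     * HC ignores the new dequeue pointer while the ring is running.
     */
    queue_set_tr_deq(xhci, slot_id, ep_index, stream_id,
            deq_state->new_deq_seg, deq_state->new_deq_ptr,
            (u32) deq_state->new_cycle_state);
    ep->ep_state |= SET_DEQ_PENDING;
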
967 | static void handle_reset_ep_completion(struct xhci_hcd *xhci, | 967 | static void handle_reset_ep_completion(struct xhci_hcd *xhci, |
968 | struct xhci_event_cmd *event, | 968 | struct xhci_event_cmd *event, |
969 | union xhci_trb *trb) | 969 | union xhci_trb *trb) |
970 | { | 970 | { |
971 | int slot_id; | 971 | int slot_id; |
972 | unsigned int ep_index; | 972 | unsigned int ep_index; |
973 | 973 | ||
974 | slot_id = TRB_TO_SLOT_ID(trb->generic.field[3]); | 974 | slot_id = TRB_TO_SLOT_ID(trb->generic.field[3]); |
975 | ep_index = TRB_TO_EP_INDEX(trb->generic.field[3]); | 975 | ep_index = TRB_TO_EP_INDEX(trb->generic.field[3]); |
976 | /* This command will only fail if the endpoint wasn't halted, | 976 | /* This command will only fail if the endpoint wasn't halted, |
977 | * but we don't care. | 977 | * but we don't care. |
978 | */ | 978 | */ |
979 | xhci_dbg(xhci, "Ignoring reset ep completion code of %u\n", | 979 | xhci_dbg(xhci, "Ignoring reset ep completion code of %u\n", |
980 | (unsigned int) GET_COMP_CODE(event->status)); | 980 | (unsigned int) GET_COMP_CODE(event->status)); |
981 | 981 | ||
982 | /* HW with the reset endpoint quirk needs to have a configure endpoint | 982 | /* HW with the reset endpoint quirk needs to have a configure endpoint |
983 | * command complete before the endpoint can be used. Queue that here | 983 | * command complete before the endpoint can be used. Queue that here |
984 | * because the HW can't handle two commands being queued in a row. | 984 | * because the HW can't handle two commands being queued in a row. |
985 | */ | 985 | */ |
986 | if (xhci->quirks & XHCI_RESET_EP_QUIRK) { | 986 | if (xhci->quirks & XHCI_RESET_EP_QUIRK) { |
987 | xhci_dbg(xhci, "Queueing configure endpoint command\n"); | 987 | xhci_dbg(xhci, "Queueing configure endpoint command\n"); |
988 | xhci_queue_configure_endpoint(xhci, | 988 | xhci_queue_configure_endpoint(xhci, |
989 | xhci->devs[slot_id]->in_ctx->dma, slot_id, | 989 | xhci->devs[slot_id]->in_ctx->dma, slot_id, |
990 | false); | 990 | false); |
991 | xhci_ring_cmd_db(xhci); | 991 | xhci_ring_cmd_db(xhci); |
992 | } else { | 992 | } else { |
993 | /* Clear our internal halted state and restart the ring(s) */ | 993 | /* Clear our internal halted state and restart the ring(s) */ |
994 | xhci->devs[slot_id]->eps[ep_index].ep_state &= ~EP_HALTED; | 994 | xhci->devs[slot_id]->eps[ep_index].ep_state &= ~EP_HALTED; |
995 | ring_doorbell_for_active_rings(xhci, slot_id, ep_index); | 995 | ring_doorbell_for_active_rings(xhci, slot_id, ep_index); |
996 | } | 996 | } |
997 | } | 997 | } |
998 | 998 | ||
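
These command-completion handlers all decode the same bitfields out of the command TRB's fourth dword: the slot ID in bits 31:24 and the 1-based endpoint ID in bits 20:16 (the driver's endpoint index is that value minus one). A self-contained illustration, with mask values mirroring the driver's TRB_TO_SLOT_ID()/TRB_TO_EP_INDEX() macros and a fabricated example dword:

    #include <stdio.h>
    #include <stdint.h>

    #define TRB_TO_SLOT_ID(p)  (((p) & (0xffu << 24)) >> 24)
    #define TRB_TO_EP_INDEX(p) ((((p) & (0x1fu << 16)) >> 16) - 1)

    int main(void)
    {
        /* Fabricated example: slot ID 5, endpoint ID 3 */
        uint32_t field3 = (5u << 24) | (3u << 16);

        printf("slot_id = %u, ep_index = %u\n",
                (unsigned) TRB_TO_SLOT_ID(field3),
                (unsigned) TRB_TO_EP_INDEX(field3));  /* prints 5 and 2 */
        return 0;
    }
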
999 | /* Check to see if a command in the device's command queue matches this one. | 999 | /* Check to see if a command in the device's command queue matches this one. |
1000 | * Signal the completion or free the command, and return 1. Return 0 if the | 1000 | * Signal the completion or free the command, and return 1. Return 0 if the |
1001 | * completed command isn't at the head of the command list. | 1001 | * completed command isn't at the head of the command list. |
1002 | */ | 1002 | */ |
1003 | static int handle_cmd_in_cmd_wait_list(struct xhci_hcd *xhci, | 1003 | static int handle_cmd_in_cmd_wait_list(struct xhci_hcd *xhci, |
1004 | struct xhci_virt_device *virt_dev, | 1004 | struct xhci_virt_device *virt_dev, |
1005 | struct xhci_event_cmd *event) | 1005 | struct xhci_event_cmd *event) |
1006 | { | 1006 | { |
1007 | struct xhci_command *command; | 1007 | struct xhci_command *command; |
1008 | 1008 | ||
1009 | if (list_empty(&virt_dev->cmd_list)) | 1009 | if (list_empty(&virt_dev->cmd_list)) |
1010 | return 0; | 1010 | return 0; |
1011 | 1011 | ||
1012 | command = list_entry(virt_dev->cmd_list.next, | 1012 | command = list_entry(virt_dev->cmd_list.next, |
1013 | struct xhci_command, cmd_list); | 1013 | struct xhci_command, cmd_list); |
1014 | if (xhci->cmd_ring->dequeue != command->command_trb) | 1014 | if (xhci->cmd_ring->dequeue != command->command_trb) |
1015 | return 0; | 1015 | return 0; |
1016 | 1016 | ||
1017 | command->status = | 1017 | command->status = |
1018 | GET_COMP_CODE(event->status); | 1018 | GET_COMP_CODE(event->status); |
1019 | list_del(&command->cmd_list); | 1019 | list_del(&command->cmd_list); |
1020 | if (command->completion) | 1020 | if (command->completion) |
1021 | complete(command->completion); | 1021 | complete(command->completion); |
1022 | else | 1022 | else |
1023 | xhci_free_command(xhci, command); | 1023 | xhci_free_command(xhci, command); |
1024 | return 1; | 1024 | return 1; |
1025 | } | 1025 | } |
1026 | 1026 | ||
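
The other half of this wait-list protocol lives in the submitters. A rough sketch of a waiter such as xhci_configure_endpoint() in xhci.c (simplified; locking and error handling omitted, and details are assumptions from this era): remember the enqueue slot the command TRB will occupy, add the command to the device's cmd_list, ring the doorbell, and sleep until handle_cmd_in_cmd_wait_list() signals the completion.

    command->command_trb = xhci->cmd_ring->enqueue;
    list_add_tail(&command->cmd_list, &virt_dev->cmd_list);
    xhci_queue_configure_endpoint(xhci, virt_dev->in_ctx->dma,
            slot_id, false);
    xhci_ring_cmd_db(xhci);
    wait_for_completion(command->completion);
    /* command->status now holds the completion code from the event */
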
1027 | static void handle_cmd_completion(struct xhci_hcd *xhci, | 1027 | static void handle_cmd_completion(struct xhci_hcd *xhci, |
1028 | struct xhci_event_cmd *event) | 1028 | struct xhci_event_cmd *event) |
1029 | { | 1029 | { |
1030 | int slot_id = TRB_TO_SLOT_ID(event->flags); | 1030 | int slot_id = TRB_TO_SLOT_ID(event->flags); |
1031 | u64 cmd_dma; | 1031 | u64 cmd_dma; |
1032 | dma_addr_t cmd_dequeue_dma; | 1032 | dma_addr_t cmd_dequeue_dma; |
1033 | struct xhci_input_control_ctx *ctrl_ctx; | 1033 | struct xhci_input_control_ctx *ctrl_ctx; |
1034 | struct xhci_virt_device *virt_dev; | 1034 | struct xhci_virt_device *virt_dev; |
1035 | unsigned int ep_index; | 1035 | unsigned int ep_index; |
1036 | struct xhci_ring *ep_ring; | 1036 | struct xhci_ring *ep_ring; |
1037 | unsigned int ep_state; | 1037 | unsigned int ep_state; |
1038 | 1038 | ||
1039 | cmd_dma = event->cmd_trb; | 1039 | cmd_dma = event->cmd_trb; |
1040 | cmd_dequeue_dma = xhci_trb_virt_to_dma(xhci->cmd_ring->deq_seg, | 1040 | cmd_dequeue_dma = xhci_trb_virt_to_dma(xhci->cmd_ring->deq_seg, |
1041 | xhci->cmd_ring->dequeue); | 1041 | xhci->cmd_ring->dequeue); |
1042 | /* Is the command ring deq ptr out of sync with the deq seg ptr? */ | 1042 | /* Is the command ring deq ptr out of sync with the deq seg ptr? */ |
1043 | if (cmd_dequeue_dma == 0) { | 1043 | if (cmd_dequeue_dma == 0) { |
1044 | xhci->error_bitmask |= 1 << 4; | 1044 | xhci->error_bitmask |= 1 << 4; |
1045 | return; | 1045 | return; |
1046 | } | 1046 | } |
1047 | /* Does the DMA address match our internal dequeue pointer address? */ | 1047 | /* Does the DMA address match our internal dequeue pointer address? */ |
1048 | if (cmd_dma != (u64) cmd_dequeue_dma) { | 1048 | if (cmd_dma != (u64) cmd_dequeue_dma) { |
1049 | xhci->error_bitmask |= 1 << 5; | 1049 | xhci->error_bitmask |= 1 << 5; |
1050 | return; | 1050 | return; |
1051 | } | 1051 | } |
1052 | switch (xhci->cmd_ring->dequeue->generic.field[3] & TRB_TYPE_BITMASK) { | 1052 | switch (xhci->cmd_ring->dequeue->generic.field[3] & TRB_TYPE_BITMASK) { |
1053 | case TRB_TYPE(TRB_ENABLE_SLOT): | 1053 | case TRB_TYPE(TRB_ENABLE_SLOT): |
1054 | if (GET_COMP_CODE(event->status) == COMP_SUCCESS) | 1054 | if (GET_COMP_CODE(event->status) == COMP_SUCCESS) |
1055 | xhci->slot_id = slot_id; | 1055 | xhci->slot_id = slot_id; |
1056 | else | 1056 | else |
1057 | xhci->slot_id = 0; | 1057 | xhci->slot_id = 0; |
1058 | complete(&xhci->addr_dev); | 1058 | complete(&xhci->addr_dev); |
1059 | break; | 1059 | break; |
1060 | case TRB_TYPE(TRB_DISABLE_SLOT): | 1060 | case TRB_TYPE(TRB_DISABLE_SLOT): |
1061 | if (xhci->devs[slot_id]) | 1061 | if (xhci->devs[slot_id]) |
1062 | xhci_free_virt_device(xhci, slot_id); | 1062 | xhci_free_virt_device(xhci, slot_id); |
1063 | break; | 1063 | break; |
1064 | case TRB_TYPE(TRB_CONFIG_EP): | 1064 | case TRB_TYPE(TRB_CONFIG_EP): |
1065 | virt_dev = xhci->devs[slot_id]; | 1065 | virt_dev = xhci->devs[slot_id]; |
1066 | if (handle_cmd_in_cmd_wait_list(xhci, virt_dev, event)) | 1066 | if (handle_cmd_in_cmd_wait_list(xhci, virt_dev, event)) |
1067 | break; | 1067 | break; |
1068 | /* | 1068 | /* |
1069 | * Configure endpoint commands can come from the USB core | 1069 | * Configure endpoint commands can come from the USB core |
1070 | * configuration or alt setting changes, or because the HW | 1070 | * configuration or alt setting changes, or because the HW |
1071 | * needed an extra configure endpoint command after a reset | 1071 | * needed an extra configure endpoint command after a reset |
1072 | * endpoint command or streams were being configured. | 1072 | * endpoint command or streams were being configured. |
1073 | * If the command was for a halted endpoint, the xHCI driver | 1073 | * If the command was for a halted endpoint, the xHCI driver |
1074 | * is not waiting on the configure endpoint command. | 1074 | * is not waiting on the configure endpoint command. |
1075 | */ | 1075 | */ |
1076 | ctrl_ctx = xhci_get_input_control_ctx(xhci, | 1076 | ctrl_ctx = xhci_get_input_control_ctx(xhci, |
1077 | virt_dev->in_ctx); | 1077 | virt_dev->in_ctx); |
1078 | /* Input ctx add_flags are the endpoint index plus one */ | 1078 | /* Input ctx add_flags are the endpoint index plus one */ |
1079 | ep_index = xhci_last_valid_endpoint(ctrl_ctx->add_flags) - 1; | 1079 | ep_index = xhci_last_valid_endpoint(ctrl_ctx->add_flags) - 1; |
1080 | /* A usb_set_interface() call directly after clearing a halted | 1080 | /* A usb_set_interface() call directly after clearing a halted |
1081 | * condition may race on this quirky hardware. Not worth | 1081 | * condition may race on this quirky hardware. Not worth |
1082 | * worrying about, since this is prototype hardware. Not sure | 1082 | * worrying about, since this is prototype hardware. Not sure |
1083 | * if this will work for streams, but streams support was | 1083 | * if this will work for streams, but streams support was |
1084 | * untested on this prototype. | 1084 | * untested on this prototype. |
1085 | */ | 1085 | */ |
1086 | if (xhci->quirks & XHCI_RESET_EP_QUIRK && | 1086 | if (xhci->quirks & XHCI_RESET_EP_QUIRK && |
1087 | ep_index != (unsigned int) -1 && | 1087 | ep_index != (unsigned int) -1 && |
1088 | ctrl_ctx->add_flags - SLOT_FLAG == | 1088 | ctrl_ctx->add_flags - SLOT_FLAG == |
1089 | ctrl_ctx->drop_flags) { | 1089 | ctrl_ctx->drop_flags) { |
1090 | ep_ring = xhci->devs[slot_id]->eps[ep_index].ring; | 1090 | ep_ring = xhci->devs[slot_id]->eps[ep_index].ring; |
1091 | ep_state = xhci->devs[slot_id]->eps[ep_index].ep_state; | 1091 | ep_state = xhci->devs[slot_id]->eps[ep_index].ep_state; |
1092 | if (!(ep_state & EP_HALTED)) | 1092 | if (!(ep_state & EP_HALTED)) |
1093 | goto bandwidth_change; | 1093 | goto bandwidth_change; |
1094 | xhci_dbg(xhci, "Completed config ep cmd - " | 1094 | xhci_dbg(xhci, "Completed config ep cmd - " |
1095 | "last ep index = %d, state = %d\n", | 1095 | "last ep index = %d, state = %d\n", |
1096 | ep_index, ep_state); | 1096 | ep_index, ep_state); |
1097 | /* Clear internal halted state and restart ring(s) */ | 1097 | /* Clear internal halted state and restart ring(s) */ |
1098 | xhci->devs[slot_id]->eps[ep_index].ep_state &= | 1098 | xhci->devs[slot_id]->eps[ep_index].ep_state &= |
1099 | ~EP_HALTED; | 1099 | ~EP_HALTED; |
1100 | ring_doorbell_for_active_rings(xhci, slot_id, ep_index); | 1100 | ring_doorbell_for_active_rings(xhci, slot_id, ep_index); |
1101 | break; | 1101 | break; |
1102 | } | 1102 | } |
1103 | bandwidth_change: | 1103 | bandwidth_change: |
1104 | xhci_dbg(xhci, "Completed config ep cmd\n"); | 1104 | xhci_dbg(xhci, "Completed config ep cmd\n"); |
1105 | xhci->devs[slot_id]->cmd_status = | 1105 | xhci->devs[slot_id]->cmd_status = |
1106 | GET_COMP_CODE(event->status); | 1106 | GET_COMP_CODE(event->status); |
1107 | complete(&xhci->devs[slot_id]->cmd_completion); | 1107 | complete(&xhci->devs[slot_id]->cmd_completion); |
1108 | break; | 1108 | break; |
1109 | case TRB_TYPE(TRB_EVAL_CONTEXT): | 1109 | case TRB_TYPE(TRB_EVAL_CONTEXT): |
1110 | virt_dev = xhci->devs[slot_id]; | 1110 | virt_dev = xhci->devs[slot_id]; |
1111 | if (handle_cmd_in_cmd_wait_list(xhci, virt_dev, event)) | 1111 | if (handle_cmd_in_cmd_wait_list(xhci, virt_dev, event)) |
1112 | break; | 1112 | break; |
1113 | xhci->devs[slot_id]->cmd_status = GET_COMP_CODE(event->status); | 1113 | xhci->devs[slot_id]->cmd_status = GET_COMP_CODE(event->status); |
1114 | complete(&xhci->devs[slot_id]->cmd_completion); | 1114 | complete(&xhci->devs[slot_id]->cmd_completion); |
1115 | break; | 1115 | break; |
1116 | case TRB_TYPE(TRB_ADDR_DEV): | 1116 | case TRB_TYPE(TRB_ADDR_DEV): |
1117 | xhci->devs[slot_id]->cmd_status = GET_COMP_CODE(event->status); | 1117 | xhci->devs[slot_id]->cmd_status = GET_COMP_CODE(event->status); |
1118 | complete(&xhci->addr_dev); | 1118 | complete(&xhci->addr_dev); |
1119 | break; | 1119 | break; |
1120 | case TRB_TYPE(TRB_STOP_RING): | 1120 | case TRB_TYPE(TRB_STOP_RING): |
1121 | handle_stopped_endpoint(xhci, xhci->cmd_ring->dequeue); | 1121 | handle_stopped_endpoint(xhci, xhci->cmd_ring->dequeue); |
1122 | break; | 1122 | break; |
1123 | case TRB_TYPE(TRB_SET_DEQ): | 1123 | case TRB_TYPE(TRB_SET_DEQ): |
1124 | handle_set_deq_completion(xhci, event, xhci->cmd_ring->dequeue); | 1124 | handle_set_deq_completion(xhci, event, xhci->cmd_ring->dequeue); |
1125 | break; | 1125 | break; |
1126 | case TRB_TYPE(TRB_CMD_NOOP): | 1126 | case TRB_TYPE(TRB_CMD_NOOP): |
1127 | ++xhci->noops_handled; | 1127 | ++xhci->noops_handled; |
1128 | break; | 1128 | break; |
1129 | case TRB_TYPE(TRB_RESET_EP): | 1129 | case TRB_TYPE(TRB_RESET_EP): |
1130 | handle_reset_ep_completion(xhci, event, xhci->cmd_ring->dequeue); | 1130 | handle_reset_ep_completion(xhci, event, xhci->cmd_ring->dequeue); |
1131 | break; | 1131 | break; |
1132 | case TRB_TYPE(TRB_RESET_DEV): | 1132 | case TRB_TYPE(TRB_RESET_DEV): |
1133 | xhci_dbg(xhci, "Completed reset device command.\n"); | 1133 | xhci_dbg(xhci, "Completed reset device command.\n"); |
1134 | slot_id = TRB_TO_SLOT_ID( | 1134 | slot_id = TRB_TO_SLOT_ID( |
1135 | xhci->cmd_ring->dequeue->generic.field[3]); | 1135 | xhci->cmd_ring->dequeue->generic.field[3]); |
1136 | virt_dev = xhci->devs[slot_id]; | 1136 | virt_dev = xhci->devs[slot_id]; |
1137 | if (virt_dev) | 1137 | if (virt_dev) |
1138 | handle_cmd_in_cmd_wait_list(xhci, virt_dev, event); | 1138 | handle_cmd_in_cmd_wait_list(xhci, virt_dev, event); |
1139 | else | 1139 | else |
1140 | xhci_warn(xhci, "Reset device command completion " | 1140 | xhci_warn(xhci, "Reset device command completion " |
1141 | "for disabled slot %u\n", slot_id); | 1141 | "for disabled slot %u\n", slot_id); |
1142 | break; | 1142 | break; |
1143 | case TRB_TYPE(TRB_NEC_GET_FW): | 1143 | case TRB_TYPE(TRB_NEC_GET_FW): |
1144 | if (!(xhci->quirks & XHCI_NEC_HOST)) { | 1144 | if (!(xhci->quirks & XHCI_NEC_HOST)) { |
1145 | xhci->error_bitmask |= 1 << 6; | 1145 | xhci->error_bitmask |= 1 << 6; |
1146 | break; | 1146 | break; |
1147 | } | 1147 | } |
1148 | xhci_dbg(xhci, "NEC firmware version %2x.%02x\n", | 1148 | xhci_dbg(xhci, "NEC firmware version %2x.%02x\n", |
1149 | NEC_FW_MAJOR(event->status), | 1149 | NEC_FW_MAJOR(event->status), |
1150 | NEC_FW_MINOR(event->status)); | 1150 | NEC_FW_MINOR(event->status)); |
1151 | break; | 1151 | break; |
1152 | default: | 1152 | default: |
1153 | /* Skip over unknown commands on the event ring */ | 1153 | /* Skip over unknown commands on the event ring */ |
1154 | xhci->error_bitmask |= 1 << 6; | 1154 | xhci->error_bitmask |= 1 << 6; |
1155 | break; | 1155 | break; |
1156 | } | 1156 | } |
1157 | inc_deq(xhci, xhci->cmd_ring, false); | 1157 | inc_deq(xhci, xhci->cmd_ring, false); |
1158 | } | 1158 | } |
1159 | 1159 | ||
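
The switch above compares the masked control dword directly against TRB_TYPE(x) instead of shifting the type field down first, which is why the case labels are wrapped in TRB_TYPE(). A self-contained illustration of that encoding (values mirror the driver's macros; 9 is the xHCI Enable Slot command type):

    #include <stdio.h>
    #include <stdint.h>

    #define TRB_TYPE_BITMASK     0xfc00u        /* type in bits 15:10 */
    #define TRB_TYPE(t)          ((t) << 10)
    #define TRB_FIELD_TO_TYPE(f) (((f) & TRB_TYPE_BITMASK) >> 10)
    #define TRB_ENABLE_SLOT      9

    int main(void)
    {
        uint32_t field3 = TRB_TYPE(TRB_ENABLE_SLOT) | 0x1; /* cycle bit */

        switch (field3 & TRB_TYPE_BITMASK) {
        case TRB_TYPE(TRB_ENABLE_SLOT):
            printf("enable slot, type %u\n",
                    (unsigned) TRB_FIELD_TO_TYPE(field3));
            break;
        default:
            printf("unknown command\n");
            break;
        }
        return 0;
    }
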
1160 | static void handle_vendor_event(struct xhci_hcd *xhci, | 1160 | static void handle_vendor_event(struct xhci_hcd *xhci, |
1161 | union xhci_trb *event) | 1161 | union xhci_trb *event) |
1162 | { | 1162 | { |
1163 | u32 trb_type; | 1163 | u32 trb_type; |
1164 | 1164 | ||
1165 | trb_type = TRB_FIELD_TO_TYPE(event->generic.field[3]); | 1165 | trb_type = TRB_FIELD_TO_TYPE(event->generic.field[3]); |
1166 | xhci_dbg(xhci, "Vendor specific event TRB type = %u\n", trb_type); | 1166 | xhci_dbg(xhci, "Vendor specific event TRB type = %u\n", trb_type); |
1167 | if (trb_type == TRB_NEC_CMD_COMP && (xhci->quirks & XHCI_NEC_HOST)) | 1167 | if (trb_type == TRB_NEC_CMD_COMP && (xhci->quirks & XHCI_NEC_HOST)) |
1168 | handle_cmd_completion(xhci, &event->event_cmd); | 1168 | handle_cmd_completion(xhci, &event->event_cmd); |
1169 | } | 1169 | } |
1170 | 1170 | ||
1171 | static void handle_port_status(struct xhci_hcd *xhci, | 1171 | static void handle_port_status(struct xhci_hcd *xhci, |
1172 | union xhci_trb *event) | 1172 | union xhci_trb *event) |
1173 | { | 1173 | { |
1174 | u32 port_id; | 1174 | u32 port_id; |
1175 | 1175 | ||
1176 | /* Port status change events always have a successful completion code */ | 1176 | /* Port status change events always have a successful completion code */ |
1177 | if (GET_COMP_CODE(event->generic.field[2]) != COMP_SUCCESS) { | 1177 | if (GET_COMP_CODE(event->generic.field[2]) != COMP_SUCCESS) { |
1178 | xhci_warn(xhci, "WARN: xHC returned failed port status event\n"); | 1178 | xhci_warn(xhci, "WARN: xHC returned failed port status event\n"); |
1179 | xhci->error_bitmask |= 1 << 8; | 1179 | xhci->error_bitmask |= 1 << 8; |
1180 | } | 1180 | } |
1181 | /* FIXME: core doesn't care about all port link state changes yet */ | 1181 | /* FIXME: core doesn't care about all port link state changes yet */ |
1182 | port_id = GET_PORT_ID(event->generic.field[0]); | 1182 | port_id = GET_PORT_ID(event->generic.field[0]); |
1183 | xhci_dbg(xhci, "Port Status Change Event for port %d\n", port_id); | 1183 | xhci_dbg(xhci, "Port Status Change Event for port %d\n", port_id); |
1184 | 1184 | ||
1185 | /* Update event ring dequeue pointer before dropping the lock */ | 1185 | /* Update event ring dequeue pointer before dropping the lock */ |
1186 | inc_deq(xhci, xhci->event_ring, true); | 1186 | inc_deq(xhci, xhci->event_ring, true); |
1187 | xhci_set_hc_event_deq(xhci); | ||
1188 | 1187 | ||
1189 | spin_unlock(&xhci->lock); | 1188 | spin_unlock(&xhci->lock); |
1190 | /* Pass this up to the core */ | 1189 | /* Pass this up to the core */ |
1191 | usb_hcd_poll_rh_status(xhci_to_hcd(xhci)); | 1190 | usb_hcd_poll_rh_status(xhci_to_hcd(xhci)); |
1192 | spin_lock(&xhci->lock); | 1191 | spin_lock(&xhci->lock); |
1193 | } | 1192 | } |
1194 | 1193 | ||
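
The deletion above is the point of the patch: handle_port_status(), like the transfer event handler, now advances only the software dequeue pointer, and the hardware register is written once per interrupt. Per the changelog, the replacement logic lands in the interrupt handler; a condensed sketch of its new tail (names from the driver, summarizing the hunk later in this commit):

    temp_64 = xhci_read_64(xhci, &xhci->ir_set->erst_dequeue);
    /* Only rewrite the HW dequeue pointer if SW consumed any events */
    if (event_ring_deq != xhci->event_ring->dequeue) {
        deq = xhci_trb_virt_to_dma(xhci->event_ring->deq_seg,
                xhci->event_ring->dequeue);
        /* Keep the low flag bits, replace the pointer bits */
        temp_64 &= ERST_PTR_MASK;
        temp_64 |= ((u64) deq & (u64) ~ERST_PTR_MASK);
    }
    /* Clear the Event Handler Busy flag (RW1C) in the same write */
    temp_64 |= ERST_EHB;
    xhci_write_64(xhci, temp_64, &xhci->ir_set->erst_dequeue);

Folding the pointer update into the EHB clear is what saves the extra 32-bit PCI reads and writes the changelog mentions.
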
1195 | /* | 1194 | /* |
1196 | * This TD is defined by the TRBs starting at start_trb in start_seg and ending | 1195 | * This TD is defined by the TRBs starting at start_trb in start_seg and ending |
1197 | * at end_trb, which may be in another segment. If the suspect DMA address is a | 1196 | * at end_trb, which may be in another segment. If the suspect DMA address is a |
1198 | * TRB in this TD, this function returns that TRB's segment. Otherwise it | 1197 | * TRB in this TD, this function returns that TRB's segment. Otherwise it |
1199 | * returns NULL. | 1198 | * returns NULL. |
1200 | */ | 1199 | */ |
1201 | struct xhci_segment *trb_in_td(struct xhci_segment *start_seg, | 1200 | struct xhci_segment *trb_in_td(struct xhci_segment *start_seg, |
1202 | union xhci_trb *start_trb, | 1201 | union xhci_trb *start_trb, |
1203 | union xhci_trb *end_trb, | 1202 | union xhci_trb *end_trb, |
1204 | dma_addr_t suspect_dma) | 1203 | dma_addr_t suspect_dma) |
1205 | { | 1204 | { |
1206 | dma_addr_t start_dma; | 1205 | dma_addr_t start_dma; |
1207 | dma_addr_t end_seg_dma; | 1206 | dma_addr_t end_seg_dma; |
1208 | dma_addr_t end_trb_dma; | 1207 | dma_addr_t end_trb_dma; |
1209 | struct xhci_segment *cur_seg; | 1208 | struct xhci_segment *cur_seg; |
1210 | 1209 | ||
1211 | start_dma = xhci_trb_virt_to_dma(start_seg, start_trb); | 1210 | start_dma = xhci_trb_virt_to_dma(start_seg, start_trb); |
1212 | cur_seg = start_seg; | 1211 | cur_seg = start_seg; |
1213 | 1212 | ||
1214 | do { | 1213 | do { |
1215 | if (start_dma == 0) | 1214 | if (start_dma == 0) |
1216 | return NULL; | 1215 | return NULL; |
1217 | /* We may get an event for a Link TRB in the middle of a TD */ | 1216 | /* We may get an event for a Link TRB in the middle of a TD */ |
1218 | end_seg_dma = xhci_trb_virt_to_dma(cur_seg, | 1217 | end_seg_dma = xhci_trb_virt_to_dma(cur_seg, |
1219 | &cur_seg->trbs[TRBS_PER_SEGMENT - 1]); | 1218 | &cur_seg->trbs[TRBS_PER_SEGMENT - 1]); |
1220 | /* If the end TRB isn't in this segment, this is set to 0 */ | 1219 | /* If the end TRB isn't in this segment, this is set to 0 */ |
1221 | end_trb_dma = xhci_trb_virt_to_dma(cur_seg, end_trb); | 1220 | end_trb_dma = xhci_trb_virt_to_dma(cur_seg, end_trb); |
1222 | 1221 | ||
1223 | if (end_trb_dma > 0) { | 1222 | if (end_trb_dma > 0) { |
1224 | /* The end TRB is in this segment, so suspect should be here */ | 1223 | /* The end TRB is in this segment, so suspect should be here */ |
1225 | if (start_dma <= end_trb_dma) { | 1224 | if (start_dma <= end_trb_dma) { |
1226 | if (suspect_dma >= start_dma && suspect_dma <= end_trb_dma) | 1225 | if (suspect_dma >= start_dma && suspect_dma <= end_trb_dma) |
1227 | return cur_seg; | 1226 | return cur_seg; |
1228 | } else { | 1227 | } else { |
1229 | /* Case for one segment with | 1228 | /* Case for one segment with |
1230 | * a TD wrapped around to the top | 1229 | * a TD wrapped around to the top |
1231 | */ | 1230 | */ |
1232 | if ((suspect_dma >= start_dma && | 1231 | if ((suspect_dma >= start_dma && |
1233 | suspect_dma <= end_seg_dma) || | 1232 | suspect_dma <= end_seg_dma) || |
1234 | (suspect_dma >= cur_seg->dma && | 1233 | (suspect_dma >= cur_seg->dma && |
1235 | suspect_dma <= end_trb_dma)) | 1234 | suspect_dma <= end_trb_dma)) |
1236 | return cur_seg; | 1235 | return cur_seg; |
1237 | } | 1236 | } |
1238 | return NULL; | 1237 | return NULL; |
1239 | } else { | 1238 | } else { |
1240 | /* Might still be somewhere in this segment */ | 1239 | /* Might still be somewhere in this segment */ |
1241 | if (suspect_dma >= start_dma && suspect_dma <= end_seg_dma) | 1240 | if (suspect_dma >= start_dma && suspect_dma <= end_seg_dma) |
1242 | return cur_seg; | 1241 | return cur_seg; |
1243 | } | 1242 | } |
1244 | cur_seg = cur_seg->next; | 1243 | cur_seg = cur_seg->next; |
1245 | start_dma = xhci_trb_virt_to_dma(cur_seg, &cur_seg->trbs[0]); | 1244 | start_dma = xhci_trb_virt_to_dma(cur_seg, &cur_seg->trbs[0]); |
1246 | } while (cur_seg != start_seg); | 1245 | } while (cur_seg != start_seg); |
1247 | 1246 | ||
1248 | return NULL; | 1247 | return NULL; |
1249 | } | 1248 | } |
1250 | 1249 | ||
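
For reference, the transfer event handler is the caller of trb_in_td(); a condensed sketch of the lookup (treat the surrounding variable names as assumptions):

    /* Does the DMA address the event reported fall inside the TD we
     * think the HW was working on?
     */
    event_seg = trb_in_td(ep_ring->deq_seg, ep_ring->dequeue,
            td->last_trb, event->buffer);
    if (!event_seg)
        xhci_warn(xhci, "Event TRB DMA is outside the current TD\n");
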
1251 | static void xhci_cleanup_halted_endpoint(struct xhci_hcd *xhci, | 1250 | static void xhci_cleanup_halted_endpoint(struct xhci_hcd *xhci, |
1252 | unsigned int slot_id, unsigned int ep_index, | 1251 | unsigned int slot_id, unsigned int ep_index, |
1253 | unsigned int stream_id, | 1252 | unsigned int stream_id, |
1254 | struct xhci_td *td, union xhci_trb *event_trb) | 1253 | struct xhci_td *td, union xhci_trb *event_trb) |
1255 | { | 1254 | { |
1256 | struct xhci_virt_ep *ep = &xhci->devs[slot_id]->eps[ep_index]; | 1255 | struct xhci_virt_ep *ep = &xhci->devs[slot_id]->eps[ep_index]; |
1257 | ep->ep_state |= EP_HALTED; | 1256 | ep->ep_state |= EP_HALTED; |
1258 | ep->stopped_td = td; | 1257 | ep->stopped_td = td; |
1259 | ep->stopped_trb = event_trb; | 1258 | ep->stopped_trb = event_trb; |
1260 | ep->stopped_stream = stream_id; | 1259 | ep->stopped_stream = stream_id; |
1261 | 1260 | ||
1262 | xhci_queue_reset_ep(xhci, slot_id, ep_index); | 1261 | xhci_queue_reset_ep(xhci, slot_id, ep_index); |
1263 | xhci_cleanup_stalled_ring(xhci, td->urb->dev, ep_index); | 1262 | xhci_cleanup_stalled_ring(xhci, td->urb->dev, ep_index); |
1264 | 1263 | ||
1265 | ep->stopped_td = NULL; | 1264 | ep->stopped_td = NULL; |
1266 | ep->stopped_trb = NULL; | 1265 | ep->stopped_trb = NULL; |
1267 | ep->stopped_stream = 0; | 1266 | ep->stopped_stream = 0; |
1268 | 1267 | ||
1269 | xhci_ring_cmd_db(xhci); | 1268 | xhci_ring_cmd_db(xhci); |
1270 | } | 1269 | } |
1271 | 1270 | ||
1272 | /* Check if an error has halted the endpoint ring. The class driver will | 1271 | /* Check if an error has halted the endpoint ring. The class driver will |
1273 | * clean up the halt for a non-default control endpoint if we indicate a stall. | 1272 | * clean up the halt for a non-default control endpoint if we indicate a stall. |
1274 | * However, a babble and other errors also halt the endpoint ring, and the class | 1273 | * However, a babble and other errors also halt the endpoint ring, and the class |
1275 | * driver won't clear the halt in that case, so we need to issue a Set Transfer | 1274 | * driver won't clear the halt in that case, so we need to issue a Set Transfer |
1276 | * Ring Dequeue Pointer command manually. | 1275 | * Ring Dequeue Pointer command manually. |
1277 | */ | 1276 | */ |
1278 | static int xhci_requires_manual_halt_cleanup(struct xhci_hcd *xhci, | 1277 | static int xhci_requires_manual_halt_cleanup(struct xhci_hcd *xhci, |
1279 | struct xhci_ep_ctx *ep_ctx, | 1278 | struct xhci_ep_ctx *ep_ctx, |
1280 | unsigned int trb_comp_code) | 1279 | unsigned int trb_comp_code) |
1281 | { | 1280 | { |
1282 | /* TRB completion codes that may require a manual halt cleanup */ | 1281 | /* TRB completion codes that may require a manual halt cleanup */ |
1283 | if (trb_comp_code == COMP_TX_ERR || | 1282 | if (trb_comp_code == COMP_TX_ERR || |
1284 | trb_comp_code == COMP_BABBLE || | 1283 | trb_comp_code == COMP_BABBLE || |
1285 | trb_comp_code == COMP_SPLIT_ERR) | 1284 | trb_comp_code == COMP_SPLIT_ERR) |
1286 | /* The 0.95 spec says a babbling control endpoint | 1285 | /* The 0.95 spec says a babbling control endpoint |
1287 | * is not halted. The 0.96 spec says it is. Some HW | 1286 | * is not halted. The 0.96 spec says it is. Some HW |
1288 | * claims to be 0.95 compliant, but it halts the control | 1287 | * claims to be 0.95 compliant, but it halts the control |
1289 | * endpoint anyway. Check if a babble halted the | 1288 | * endpoint anyway. Check if a babble halted the |
1290 | * endpoint. | 1289 | * endpoint. |
1291 | */ | 1290 | */ |
1292 | if ((ep_ctx->ep_info & EP_STATE_MASK) == EP_STATE_HALTED) | 1291 | if ((ep_ctx->ep_info & EP_STATE_MASK) == EP_STATE_HALTED) |
1293 | return 1; | 1292 | return 1; |
1294 | 1293 | ||
1295 | return 0; | 1294 | return 0; |
1296 | } | 1295 | } |
1297 | 1296 | ||
1298 | int xhci_is_vendor_info_code(struct xhci_hcd *xhci, unsigned int trb_comp_code) | 1297 | int xhci_is_vendor_info_code(struct xhci_hcd *xhci, unsigned int trb_comp_code) |
1299 | { | 1298 | { |
1300 | if (trb_comp_code >= 224 && trb_comp_code <= 255) { | 1299 | if (trb_comp_code >= 224 && trb_comp_code <= 255) { |
1301 | /* Vendor defined "informational" completion code, | 1300 | /* Vendor defined "informational" completion code, |
1302 | * treat as not-an-error. | 1301 | * treat as not-an-error. |
1303 | */ | 1302 | */ |
1304 | xhci_dbg(xhci, "Vendor defined info completion code %u\n", | 1303 | xhci_dbg(xhci, "Vendor defined info completion code %u\n", |
1305 | trb_comp_code); | 1304 | trb_comp_code); |
1306 | xhci_dbg(xhci, "Treating code as success.\n"); | 1305 | xhci_dbg(xhci, "Treating code as success.\n"); |
1307 | return 1; | 1306 | return 1; |
1308 | } | 1307 | } |
1309 | return 0; | 1308 | return 0; |
1310 | } | 1309 | } |
1311 | 1310 | ||
1312 | /* | 1311 | /* |
1313 | * Finish the TD processing and remove the TD from the td list; | 1312 | * Finish the TD processing and remove the TD from the td list; |
1314 | * return 1 if the URB can be given back. | 1313 | * return 1 if the URB can be given back. |
1315 | */ | 1314 | */ |
1316 | static int finish_td(struct xhci_hcd *xhci, struct xhci_td *td, | 1315 | static int finish_td(struct xhci_hcd *xhci, struct xhci_td *td, |
1317 | union xhci_trb *event_trb, struct xhci_transfer_event *event, | 1316 | union xhci_trb *event_trb, struct xhci_transfer_event *event, |
1318 | struct xhci_virt_ep *ep, int *status, bool skip) | 1317 | struct xhci_virt_ep *ep, int *status, bool skip) |
1319 | { | 1318 | { |
1320 | struct xhci_virt_device *xdev; | 1319 | struct xhci_virt_device *xdev; |
1321 | struct xhci_ring *ep_ring; | 1320 | struct xhci_ring *ep_ring; |
1322 | unsigned int slot_id; | 1321 | unsigned int slot_id; |
1323 | int ep_index; | 1322 | int ep_index; |
1324 | struct urb *urb = NULL; | 1323 | struct urb *urb = NULL; |
1325 | struct xhci_ep_ctx *ep_ctx; | 1324 | struct xhci_ep_ctx *ep_ctx; |
1326 | int ret = 0; | 1325 | int ret = 0; |
1327 | struct urb_priv *urb_priv; | 1326 | struct urb_priv *urb_priv; |
1328 | u32 trb_comp_code; | 1327 | u32 trb_comp_code; |
1329 | 1328 | ||
1330 | slot_id = TRB_TO_SLOT_ID(event->flags); | 1329 | slot_id = TRB_TO_SLOT_ID(event->flags); |
1331 | xdev = xhci->devs[slot_id]; | 1330 | xdev = xhci->devs[slot_id]; |
1332 | ep_index = TRB_TO_EP_ID(event->flags) - 1; | 1331 | ep_index = TRB_TO_EP_ID(event->flags) - 1; |
1333 | ep_ring = xhci_dma_to_transfer_ring(ep, event->buffer); | 1332 | ep_ring = xhci_dma_to_transfer_ring(ep, event->buffer); |
1334 | ep_ctx = xhci_get_ep_ctx(xhci, xdev->out_ctx, ep_index); | 1333 | ep_ctx = xhci_get_ep_ctx(xhci, xdev->out_ctx, ep_index); |
1335 | trb_comp_code = GET_COMP_CODE(event->transfer_len); | 1334 | trb_comp_code = GET_COMP_CODE(event->transfer_len); |
1336 | 1335 | ||
1337 | if (skip) | 1336 | if (skip) |
1338 | goto td_cleanup; | 1337 | goto td_cleanup; |
1339 | 1338 | ||
1340 | if (trb_comp_code == COMP_STOP_INVAL || | 1339 | if (trb_comp_code == COMP_STOP_INVAL || |
1341 | trb_comp_code == COMP_STOP) { | 1340 | trb_comp_code == COMP_STOP) { |
1342 | /* The Endpoint Stop Command completion will take care of any | 1341 | /* The Endpoint Stop Command completion will take care of any |
1343 | * stopped TDs. A stopped TD may be restarted, so don't update | 1342 | * stopped TDs. A stopped TD may be restarted, so don't update |
1344 | * the ring dequeue pointer or take this TD off any lists yet. | 1343 | * the ring dequeue pointer or take this TD off any lists yet. |
1345 | */ | 1344 | */ |
1346 | ep->stopped_td = td; | 1345 | ep->stopped_td = td; |
1347 | ep->stopped_trb = event_trb; | 1346 | ep->stopped_trb = event_trb; |
1348 | return 0; | 1347 | return 0; |
1349 | } else { | 1348 | } else { |
1350 | if (trb_comp_code == COMP_STALL) { | 1349 | if (trb_comp_code == COMP_STALL) { |
1351 | /* The transfer is completed from the driver's | 1350 | /* The transfer is completed from the driver's |
1352 | * perspective, but we need to issue a set dequeue | 1351 | * perspective, but we need to issue a set dequeue |
1353 | * command for this stalled endpoint to move the dequeue | 1352 | * command for this stalled endpoint to move the dequeue |
1354 | * pointer past the TD. We can't do that here because | 1353 | * pointer past the TD. We can't do that here because |
1355 | * the halt condition must be cleared first. Let the | 1354 | * the halt condition must be cleared first. Let the |
1356 | * USB class driver clear the stall later. | 1355 | * USB class driver clear the stall later. |
1357 | */ | 1356 | */ |
1358 | ep->stopped_td = td; | 1357 | ep->stopped_td = td; |
1359 | ep->stopped_trb = event_trb; | 1358 | ep->stopped_trb = event_trb; |
1360 | ep->stopped_stream = ep_ring->stream_id; | 1359 | ep->stopped_stream = ep_ring->stream_id; |
1361 | } else if (xhci_requires_manual_halt_cleanup(xhci, | 1360 | } else if (xhci_requires_manual_halt_cleanup(xhci, |
1362 | ep_ctx, trb_comp_code)) { | 1361 | ep_ctx, trb_comp_code)) { |
1363 | /* Other types of errors halt the endpoint, but the | 1362 | /* Other types of errors halt the endpoint, but the |
1364 | * class driver doesn't call usb_reset_endpoint() unless | 1363 | * class driver doesn't call usb_reset_endpoint() unless |
1365 | * the error is -EPIPE. Clear the halted status in the | 1364 | * the error is -EPIPE. Clear the halted status in the |
1366 | * xHCI hardware manually. | 1365 | * xHCI hardware manually. |
1367 | */ | 1366 | */ |
1368 | xhci_cleanup_halted_endpoint(xhci, | 1367 | xhci_cleanup_halted_endpoint(xhci, |
1369 | slot_id, ep_index, ep_ring->stream_id, | 1368 | slot_id, ep_index, ep_ring->stream_id, |
1370 | td, event_trb); | 1369 | td, event_trb); |
1371 | } else { | 1370 | } else { |
1372 | /* Update ring dequeue pointer */ | 1371 | /* Update ring dequeue pointer */ |
1373 | while (ep_ring->dequeue != td->last_trb) | 1372 | while (ep_ring->dequeue != td->last_trb) |
1374 | inc_deq(xhci, ep_ring, false); | 1373 | inc_deq(xhci, ep_ring, false); |
1375 | inc_deq(xhci, ep_ring, false); | 1374 | inc_deq(xhci, ep_ring, false); |
1376 | } | 1375 | } |
1377 | 1376 | ||
1378 | td_cleanup: | 1377 | td_cleanup: |
1379 | /* Clean up the endpoint's TD list */ | 1378 | /* Clean up the endpoint's TD list */ |
1380 | urb = td->urb; | 1379 | urb = td->urb; |
1381 | urb_priv = urb->hcpriv; | 1380 | urb_priv = urb->hcpriv; |
1382 | 1381 | ||
1383 | /* Do one last check of the actual transfer length. | 1382 | /* Do one last check of the actual transfer length. |
1384 | * If the host controller said we transferred more data than | 1383 | * If the host controller said we transferred more data than |
1385 | * the buffer length, urb->actual_length will be a very big | 1384 | * the buffer length, urb->actual_length will be a very big |
1386 | * number (since it's unsigned). Play it safe and say we didn't | 1385 | * number (since it's unsigned). Play it safe and say we didn't |
1387 | * transfer anything. | 1386 | * transfer anything. |
1388 | */ | 1387 | */ |
1389 | if (urb->actual_length > urb->transfer_buffer_length) { | 1388 | if (urb->actual_length > urb->transfer_buffer_length) { |
1390 | xhci_warn(xhci, "URB transfer length is wrong, " | 1389 | xhci_warn(xhci, "URB transfer length is wrong, " |
1391 | "xHC issue? req. len = %u, " | 1390 | "xHC issue? req. len = %u, " |
1392 | "act. len = %u\n", | 1391 | "act. len = %u\n", |
1393 | urb->transfer_buffer_length, | 1392 | urb->transfer_buffer_length, |
1394 | urb->actual_length); | 1393 | urb->actual_length); |
1395 | urb->actual_length = 0; | 1394 | urb->actual_length = 0; |
1396 | if (td->urb->transfer_flags & URB_SHORT_NOT_OK) | 1395 | if (td->urb->transfer_flags & URB_SHORT_NOT_OK) |
1397 | *status = -EREMOTEIO; | 1396 | *status = -EREMOTEIO; |
1398 | else | 1397 | else |
1399 | *status = 0; | 1398 | *status = 0; |
1400 | } | 1399 | } |
1401 | list_del(&td->td_list); | 1400 | list_del(&td->td_list); |
1402 | /* Was this TD slated to be cancelled but completed anyway? */ | 1401 | /* Was this TD slated to be cancelled but completed anyway? */ |
1403 | if (!list_empty(&td->cancelled_td_list)) | 1402 | if (!list_empty(&td->cancelled_td_list)) |
1404 | list_del(&td->cancelled_td_list); | 1403 | list_del(&td->cancelled_td_list); |
1405 | 1404 | ||
1406 | urb_priv->td_cnt++; | 1405 | urb_priv->td_cnt++; |
1407 | /* Give back the URB once all of its TDs have completed */ | 1406 | /* Give back the URB once all of its TDs have completed */ |
1408 | if (urb_priv->td_cnt == urb_priv->length) | 1407 | if (urb_priv->td_cnt == urb_priv->length) |
1409 | ret = 1; | 1408 | ret = 1; |
1410 | } | 1409 | } |
1411 | 1410 | ||
1412 | return ret; | 1411 | return ret; |
1413 | } | 1412 | } |
1414 | 1413 | ||
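
The cleanup branch in finish_td() above walks the software dequeue pointer TRB by TRB until it has stepped one slot past the TD's last TRB. A minimal sketch of that loop shape, with fake_* stand-in types in place of the driver's ring structures (segment wrap and cycle-state toggling, which inc_deq() handles in the driver, are omitted):

struct fake_trb { unsigned int dummy; };

struct fake_ring {
	struct fake_trb *dequeue;	/* software dequeue pointer */
};

/* Stand-in for inc_deq(); the real ring would also follow link TRBs
 * into the next segment and keep the consumer cycle state in sync.
 */
void fake_inc_deq(struct fake_ring *ring)
{
	ring->dequeue++;
}

/* Mirrors the loop in finish_td(): stop at last_trb, then take one
 * more step so the ring points just past the finished TD.
 */
void advance_past_td(struct fake_ring *ring, struct fake_trb *last_trb)
{
	while (ring->dequeue != last_trb)
		fake_inc_deq(ring);
	fake_inc_deq(ring);
}
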
1415 | /* | 1414 | /* |
1416 | * Process control tds, update urb status and actual_length. | 1415 | * Process control tds, update urb status and actual_length. |
1417 | */ | 1416 | */ |
1418 | static int process_ctrl_td(struct xhci_hcd *xhci, struct xhci_td *td, | 1417 | static int process_ctrl_td(struct xhci_hcd *xhci, struct xhci_td *td, |
1419 | union xhci_trb *event_trb, struct xhci_transfer_event *event, | 1418 | union xhci_trb *event_trb, struct xhci_transfer_event *event, |
1420 | struct xhci_virt_ep *ep, int *status) | 1419 | struct xhci_virt_ep *ep, int *status) |
1421 | { | 1420 | { |
1422 | struct xhci_virt_device *xdev; | 1421 | struct xhci_virt_device *xdev; |
1423 | struct xhci_ring *ep_ring; | 1422 | struct xhci_ring *ep_ring; |
1424 | unsigned int slot_id; | 1423 | unsigned int slot_id; |
1425 | int ep_index; | 1424 | int ep_index; |
1426 | struct xhci_ep_ctx *ep_ctx; | 1425 | struct xhci_ep_ctx *ep_ctx; |
1427 | u32 trb_comp_code; | 1426 | u32 trb_comp_code; |
1428 | 1427 | ||
1429 | slot_id = TRB_TO_SLOT_ID(event->flags); | 1428 | slot_id = TRB_TO_SLOT_ID(event->flags); |
1430 | xdev = xhci->devs[slot_id]; | 1429 | xdev = xhci->devs[slot_id]; |
1431 | ep_index = TRB_TO_EP_ID(event->flags) - 1; | 1430 | ep_index = TRB_TO_EP_ID(event->flags) - 1; |
1432 | ep_ring = xhci_dma_to_transfer_ring(ep, event->buffer); | 1431 | ep_ring = xhci_dma_to_transfer_ring(ep, event->buffer); |
1433 | ep_ctx = xhci_get_ep_ctx(xhci, xdev->out_ctx, ep_index); | 1432 | ep_ctx = xhci_get_ep_ctx(xhci, xdev->out_ctx, ep_index); |
1434 | trb_comp_code = GET_COMP_CODE(event->transfer_len); | 1433 | trb_comp_code = GET_COMP_CODE(event->transfer_len); |
1435 | 1434 | ||
1436 | xhci_debug_trb(xhci, xhci->event_ring->dequeue); | 1435 | xhci_debug_trb(xhci, xhci->event_ring->dequeue); |
1437 | switch (trb_comp_code) { | 1436 | switch (trb_comp_code) { |
1438 | case COMP_SUCCESS: | 1437 | case COMP_SUCCESS: |
1439 | if (event_trb == ep_ring->dequeue) { | 1438 | if (event_trb == ep_ring->dequeue) { |
1440 | xhci_warn(xhci, "WARN: Success on ctrl setup TRB " | 1439 | xhci_warn(xhci, "WARN: Success on ctrl setup TRB " |
1441 | "without IOC set??\n"); | 1440 | "without IOC set??\n"); |
1442 | *status = -ESHUTDOWN; | 1441 | *status = -ESHUTDOWN; |
1443 | } else if (event_trb != td->last_trb) { | 1442 | } else if (event_trb != td->last_trb) { |
1444 | xhci_warn(xhci, "WARN: Success on ctrl data TRB " | 1443 | xhci_warn(xhci, "WARN: Success on ctrl data TRB " |
1445 | "without IOC set??\n"); | 1444 | "without IOC set??\n"); |
1446 | *status = -ESHUTDOWN; | 1445 | *status = -ESHUTDOWN; |
1447 | } else { | 1446 | } else { |
1448 | xhci_dbg(xhci, "Successful control transfer!\n"); | 1447 | xhci_dbg(xhci, "Successful control transfer!\n"); |
1449 | *status = 0; | 1448 | *status = 0; |
1450 | } | 1449 | } |
1451 | break; | 1450 | break; |
1452 | case COMP_SHORT_TX: | 1451 | case COMP_SHORT_TX: |
1453 | xhci_warn(xhci, "WARN: short transfer on control ep\n"); | 1452 | xhci_warn(xhci, "WARN: short transfer on control ep\n"); |
1454 | if (td->urb->transfer_flags & URB_SHORT_NOT_OK) | 1453 | if (td->urb->transfer_flags & URB_SHORT_NOT_OK) |
1455 | *status = -EREMOTEIO; | 1454 | *status = -EREMOTEIO; |
1456 | else | 1455 | else |
1457 | *status = 0; | 1456 | *status = 0; |
1458 | break; | 1457 | break; |
1459 | default: | 1458 | default: |
1460 | if (!xhci_requires_manual_halt_cleanup(xhci, | 1459 | if (!xhci_requires_manual_halt_cleanup(xhci, |
1461 | ep_ctx, trb_comp_code)) | 1460 | ep_ctx, trb_comp_code)) |
1462 | break; | 1461 | break; |
1463 | xhci_dbg(xhci, "TRB error code %u, " | 1462 | xhci_dbg(xhci, "TRB error code %u, " |
1464 | "halted endpoint index = %u\n", | 1463 | "halted endpoint index = %u\n", |
1465 | trb_comp_code, ep_index); | 1464 | trb_comp_code, ep_index); |
1466 | /* else fall through */ | 1465 | /* else fall through */ |
1467 | case COMP_STALL: | 1466 | case COMP_STALL: |
1468 | /* Did we transfer part of the data (middle) phase? */ | 1467 | /* Did we transfer part of the data (middle) phase? */ |
1469 | if (event_trb != ep_ring->dequeue && | 1468 | if (event_trb != ep_ring->dequeue && |
1470 | event_trb != td->last_trb) | 1469 | event_trb != td->last_trb) |
1471 | td->urb->actual_length = | 1470 | td->urb->actual_length = |
1472 | td->urb->transfer_buffer_length | 1471 | td->urb->transfer_buffer_length |
1473 | - TRB_LEN(event->transfer_len); | 1472 | - TRB_LEN(event->transfer_len); |
1474 | else | 1473 | else |
1475 | td->urb->actual_length = 0; | 1474 | td->urb->actual_length = 0; |
1476 | 1475 | ||
1477 | xhci_cleanup_halted_endpoint(xhci, | 1476 | xhci_cleanup_halted_endpoint(xhci, |
1478 | slot_id, ep_index, 0, td, event_trb); | 1477 | slot_id, ep_index, 0, td, event_trb); |
1479 | return finish_td(xhci, td, event_trb, event, ep, status, true); | 1478 | return finish_td(xhci, td, event_trb, event, ep, status, true); |
1480 | } | 1479 | } |
1481 | /* | 1480 | /* |
1482 | * Did we transfer any data, despite the errors that might have | 1481 | * Did we transfer any data, despite the errors that might have |
1483 | * happened? I.e. did we get past the setup stage? | 1482 | * happened? I.e. did we get past the setup stage? |
1484 | */ | 1483 | */ |
1485 | if (event_trb != ep_ring->dequeue) { | 1484 | if (event_trb != ep_ring->dequeue) { |
1486 | /* The event was for the status stage */ | 1485 | /* The event was for the status stage */ |
1487 | if (event_trb == td->last_trb) { | 1486 | if (event_trb == td->last_trb) { |
1488 | if (td->urb->actual_length != 0) { | 1487 | if (td->urb->actual_length != 0) { |
1489 | /* Don't overwrite a previously set error code | 1488 | /* Don't overwrite a previously set error code |
1490 | */ | 1489 | */ |
1491 | if ((*status == -EINPROGRESS || *status == 0) && | 1490 | if ((*status == -EINPROGRESS || *status == 0) && |
1492 | (td->urb->transfer_flags | 1491 | (td->urb->transfer_flags |
1493 | & URB_SHORT_NOT_OK)) | 1492 | & URB_SHORT_NOT_OK)) |
1494 | /* Did we already see a short data | 1493 | /* Did we already see a short data |
1495 | * stage? */ | 1494 | * stage? */ |
1496 | *status = -EREMOTEIO; | 1495 | *status = -EREMOTEIO; |
1497 | } else { | 1496 | } else { |
1498 | td->urb->actual_length = | 1497 | td->urb->actual_length = |
1499 | td->urb->transfer_buffer_length; | 1498 | td->urb->transfer_buffer_length; |
1500 | } | 1499 | } |
1501 | } else { | 1500 | } else { |
1502 | /* Maybe the event was for the data stage? */ | 1501 | /* Maybe the event was for the data stage? */ |
1503 | if (trb_comp_code != COMP_STOP_INVAL) { | 1502 | if (trb_comp_code != COMP_STOP_INVAL) { |
1504 | /* We didn't stop on a link TRB in the middle */ | 1503 | /* We didn't stop on a link TRB in the middle */ |
1505 | td->urb->actual_length = | 1504 | td->urb->actual_length = |
1506 | td->urb->transfer_buffer_length - | 1505 | td->urb->transfer_buffer_length - |
1507 | TRB_LEN(event->transfer_len); | 1506 | TRB_LEN(event->transfer_len); |
1508 | xhci_dbg(xhci, "Waiting for status " | 1507 | xhci_dbg(xhci, "Waiting for status " |
1509 | "stage event\n"); | 1508 | "stage event\n"); |
1510 | return 0; | 1509 | return 0; |
1511 | } | 1510 | } |
1512 | } | 1511 | } |
1513 | } | 1512 | } |
1514 | 1513 | ||
1515 | return finish_td(xhci, td, event_trb, event, ep, status, false); | 1514 | return finish_td(xhci, td, event_trb, event, ep, status, false); |
1516 | } | 1515 | } |
1517 | 1516 | ||
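
process_ctrl_td() recovers the transferred byte count from the event's residue: the transfer event reports how many bytes were *not* transferred, so the URB's actual_length is the request length minus that residue. A small self-contained check of that arithmetic (ctrl_actual_len is an illustrative name, not a driver function):

#include <assert.h>

/* actual_length = transfer_buffer_length - TRB_LEN(event->transfer_len),
 * i.e. the request length minus the residue the event reported.
 */
unsigned int ctrl_actual_len(unsigned int buf_len, unsigned int untransferred)
{
	return buf_len - untransferred;
}

int main(void)
{
	/* 512-byte data stage with 112 bytes left over: 400 transferred */
	assert(ctrl_actual_len(512, 112) == 400);
	return 0;
}
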
1518 | /* | 1517 | /* |
1519 | * Process isochronous tds, update urb packet status and actual_length. | 1518 | * Process isochronous tds, update urb packet status and actual_length. |
1520 | */ | 1519 | */ |
1521 | static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td, | 1520 | static int process_isoc_td(struct xhci_hcd *xhci, struct xhci_td *td, |
1522 | union xhci_trb *event_trb, struct xhci_transfer_event *event, | 1521 | union xhci_trb *event_trb, struct xhci_transfer_event *event, |
1523 | struct xhci_virt_ep *ep, int *status) | 1522 | struct xhci_virt_ep *ep, int *status) |
1524 | { | 1523 | { |
1525 | struct xhci_ring *ep_ring; | 1524 | struct xhci_ring *ep_ring; |
1526 | struct urb_priv *urb_priv; | 1525 | struct urb_priv *urb_priv; |
1527 | int idx; | 1526 | int idx; |
1528 | int len = 0; | 1527 | int len = 0; |
1529 | int skip_td = 0; | 1528 | int skip_td = 0; |
1530 | union xhci_trb *cur_trb; | 1529 | union xhci_trb *cur_trb; |
1531 | struct xhci_segment *cur_seg; | 1530 | struct xhci_segment *cur_seg; |
1532 | u32 trb_comp_code; | 1531 | u32 trb_comp_code; |
1533 | 1532 | ||
1534 | ep_ring = xhci_dma_to_transfer_ring(ep, event->buffer); | 1533 | ep_ring = xhci_dma_to_transfer_ring(ep, event->buffer); |
1535 | trb_comp_code = GET_COMP_CODE(event->transfer_len); | 1534 | trb_comp_code = GET_COMP_CODE(event->transfer_len); |
1536 | urb_priv = td->urb->hcpriv; | 1535 | urb_priv = td->urb->hcpriv; |
1537 | idx = urb_priv->td_cnt; | 1536 | idx = urb_priv->td_cnt; |
1538 | 1537 | ||
1539 | if (ep->skip) { | 1538 | if (ep->skip) { |
1540 | /* The transfer is partly done */ | 1539 | /* The transfer is partly done */ |
1541 | *status = -EXDEV; | 1540 | *status = -EXDEV; |
1542 | td->urb->iso_frame_desc[idx].status = -EXDEV; | 1541 | td->urb->iso_frame_desc[idx].status = -EXDEV; |
1543 | } else { | 1542 | } else { |
1544 | /* handle completion code */ | 1543 | /* handle completion code */ |
1545 | switch (trb_comp_code) { | 1544 | switch (trb_comp_code) { |
1546 | case COMP_SUCCESS: | 1545 | case COMP_SUCCESS: |
1547 | td->urb->iso_frame_desc[idx].status = 0; | 1546 | td->urb->iso_frame_desc[idx].status = 0; |
1548 | xhci_dbg(xhci, "Successful isoc transfer!\n"); | 1547 | xhci_dbg(xhci, "Successful isoc transfer!\n"); |
1549 | break; | 1548 | break; |
1550 | case COMP_SHORT_TX: | 1549 | case COMP_SHORT_TX: |
1551 | if (td->urb->transfer_flags & URB_SHORT_NOT_OK) | 1550 | if (td->urb->transfer_flags & URB_SHORT_NOT_OK) |
1552 | td->urb->iso_frame_desc[idx].status = | 1551 | td->urb->iso_frame_desc[idx].status = |
1553 | -EREMOTEIO; | 1552 | -EREMOTEIO; |
1554 | else | 1553 | else |
1555 | td->urb->iso_frame_desc[idx].status = 0; | 1554 | td->urb->iso_frame_desc[idx].status = 0; |
1556 | break; | 1555 | break; |
1557 | case COMP_BW_OVER: | 1556 | case COMP_BW_OVER: |
1558 | td->urb->iso_frame_desc[idx].status = -ECOMM; | 1557 | td->urb->iso_frame_desc[idx].status = -ECOMM; |
1559 | skip_td = 1; | 1558 | skip_td = 1; |
1560 | break; | 1559 | break; |
1561 | case COMP_BUFF_OVER: | 1560 | case COMP_BUFF_OVER: |
1562 | case COMP_BABBLE: | 1561 | case COMP_BABBLE: |
1563 | td->urb->iso_frame_desc[idx].status = -EOVERFLOW; | 1562 | td->urb->iso_frame_desc[idx].status = -EOVERFLOW; |
1564 | skip_td = 1; | 1563 | skip_td = 1; |
1565 | break; | 1564 | break; |
1566 | case COMP_STALL: | 1565 | case COMP_STALL: |
1567 | td->urb->iso_frame_desc[idx].status = -EPROTO; | 1566 | td->urb->iso_frame_desc[idx].status = -EPROTO; |
1568 | skip_td = 1; | 1567 | skip_td = 1; |
1569 | break; | 1568 | break; |
1570 | case COMP_STOP: | 1569 | case COMP_STOP: |
1571 | case COMP_STOP_INVAL: | 1570 | case COMP_STOP_INVAL: |
1572 | break; | 1571 | break; |
1573 | default: | 1572 | default: |
1574 | td->urb->iso_frame_desc[idx].status = -1; | 1573 | td->urb->iso_frame_desc[idx].status = -1; |
1575 | break; | 1574 | break; |
1576 | } | 1575 | } |
1577 | } | 1576 | } |
1578 | 1577 | ||
1579 | /* Calculate the actual transferred length */ | 1578 | /* Calculate the actual transferred length */ |
1580 | if (ep->skip) { | 1579 | if (ep->skip) { |
1581 | td->urb->iso_frame_desc[idx].actual_length = 0; | 1580 | td->urb->iso_frame_desc[idx].actual_length = 0; |
1582 | return finish_td(xhci, td, event_trb, event, ep, status, true); | 1581 | return finish_td(xhci, td, event_trb, event, ep, status, true); |
1583 | } | 1582 | } |
1584 | 1583 | ||
1585 | if (trb_comp_code == COMP_SUCCESS || skip_td == 1) { | 1584 | if (trb_comp_code == COMP_SUCCESS || skip_td == 1) { |
1586 | td->urb->iso_frame_desc[idx].actual_length = | 1585 | td->urb->iso_frame_desc[idx].actual_length = |
1587 | td->urb->iso_frame_desc[idx].length; | 1586 | td->urb->iso_frame_desc[idx].length; |
1588 | td->urb->actual_length += | 1587 | td->urb->actual_length += |
1589 | td->urb->iso_frame_desc[idx].length; | 1588 | td->urb->iso_frame_desc[idx].length; |
1590 | } else { | 1589 | } else { |
1591 | for (cur_trb = ep_ring->dequeue, | 1590 | for (cur_trb = ep_ring->dequeue, |
1592 | cur_seg = ep_ring->deq_seg; cur_trb != event_trb; | 1591 | cur_seg = ep_ring->deq_seg; cur_trb != event_trb; |
1593 | next_trb(xhci, ep_ring, &cur_seg, &cur_trb)) { | 1592 | next_trb(xhci, ep_ring, &cur_seg, &cur_trb)) { |
1594 | if ((cur_trb->generic.field[3] & | 1593 | if ((cur_trb->generic.field[3] & |
1595 | TRB_TYPE_BITMASK) != TRB_TYPE(TRB_TR_NOOP) && | 1594 | TRB_TYPE_BITMASK) != TRB_TYPE(TRB_TR_NOOP) && |
1596 | (cur_trb->generic.field[3] & | 1595 | (cur_trb->generic.field[3] & |
1597 | TRB_TYPE_BITMASK) != TRB_TYPE(TRB_LINK)) | 1596 | TRB_TYPE_BITMASK) != TRB_TYPE(TRB_LINK)) |
1598 | len += | 1597 | len += |
1599 | TRB_LEN(cur_trb->generic.field[2]); | 1598 | TRB_LEN(cur_trb->generic.field[2]); |
1600 | } | 1599 | } |
1601 | len += TRB_LEN(cur_trb->generic.field[2]) - | 1600 | len += TRB_LEN(cur_trb->generic.field[2]) - |
1602 | TRB_LEN(event->transfer_len); | 1601 | TRB_LEN(event->transfer_len); |
1603 | 1602 | ||
1604 | if (trb_comp_code != COMP_STOP_INVAL) { | 1603 | if (trb_comp_code != COMP_STOP_INVAL) { |
1605 | td->urb->iso_frame_desc[idx].actual_length = len; | 1604 | td->urb->iso_frame_desc[idx].actual_length = len; |
1606 | td->urb->actual_length += len; | 1605 | td->urb->actual_length += len; |
1607 | } | 1606 | } |
1608 | } | 1607 | } |
1609 | 1608 | ||
1610 | if ((idx == urb_priv->length - 1) && *status == -EINPROGRESS) | 1609 | if ((idx == urb_priv->length - 1) && *status == -EINPROGRESS) |
1611 | *status = 0; | 1610 | *status = 0; |
1612 | 1611 | ||
1613 | return finish_td(xhci, td, event_trb, event, ep, status, false); | 1612 | return finish_td(xhci, td, event_trb, event, ep, status, false); |
1614 | } | 1613 | } |
1615 | 1614 | ||
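
For a frame that ended mid-TD, process_isoc_td() sums the lengths of the data TRBs between the dequeue pointer and the event TRB, skipping No-op and Link TRBs, and then adds the partial length of the event TRB itself. A sketch of that walk under a simplifying assumption (a flat array instead of segmented rings); all fake_* names are stand-ins:

enum fake_trb_type { FAKE_NORMAL, FAKE_NOOP, FAKE_LINK };

struct fake_trb {
	enum fake_trb_type type;
	unsigned int len;	/* programmed TRB transfer length */
};

/* Sum the data TRBs before the event TRB (No-op and Link TRBs carry
 * no data), then add the partially completed event TRB: its
 * programmed length minus the residue the event reported.  The
 * driver skips this last step for COMP_STOP_INVAL.
 */
unsigned int isoc_frame_len(const struct fake_trb *deq,
			    const struct fake_trb *event_trb,
			    unsigned int residue)
{
	unsigned int len = 0;
	const struct fake_trb *cur;

	for (cur = deq; cur != event_trb; cur++)
		if (cur->type == FAKE_NORMAL)
			len += cur->len;
	return len + (event_trb->len - residue);
}
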
1616 | /* | 1615 | /* |
1617 | * Process bulk and interrupt tds, update urb status and actual_length. | 1616 | * Process bulk and interrupt tds, update urb status and actual_length. |
1618 | */ | 1617 | */ |
1619 | static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_td *td, | 1618 | static int process_bulk_intr_td(struct xhci_hcd *xhci, struct xhci_td *td, |
1620 | union xhci_trb *event_trb, struct xhci_transfer_event *event, | 1619 | union xhci_trb *event_trb, struct xhci_transfer_event *event, |
1621 | struct xhci_virt_ep *ep, int *status) | 1620 | struct xhci_virt_ep *ep, int *status) |
1622 | { | 1621 | { |
1623 | struct xhci_ring *ep_ring; | 1622 | struct xhci_ring *ep_ring; |
1624 | union xhci_trb *cur_trb; | 1623 | union xhci_trb *cur_trb; |
1625 | struct xhci_segment *cur_seg; | 1624 | struct xhci_segment *cur_seg; |
1626 | u32 trb_comp_code; | 1625 | u32 trb_comp_code; |
1627 | 1626 | ||
1628 | ep_ring = xhci_dma_to_transfer_ring(ep, event->buffer); | 1627 | ep_ring = xhci_dma_to_transfer_ring(ep, event->buffer); |
1629 | trb_comp_code = GET_COMP_CODE(event->transfer_len); | 1628 | trb_comp_code = GET_COMP_CODE(event->transfer_len); |
1630 | 1629 | ||
1631 | switch (trb_comp_code) { | 1630 | switch (trb_comp_code) { |
1632 | case COMP_SUCCESS: | 1631 | case COMP_SUCCESS: |
1633 | /* Double check that the HW transferred everything. */ | 1632 | /* Double check that the HW transferred everything. */ |
1634 | if (event_trb != td->last_trb) { | 1633 | if (event_trb != td->last_trb) { |
1635 | xhci_warn(xhci, "WARN Successful completion " | 1634 | xhci_warn(xhci, "WARN Successful completion " |
1636 | "on short TX\n"); | 1635 | "on short TX\n"); |
1637 | if (td->urb->transfer_flags & URB_SHORT_NOT_OK) | 1636 | if (td->urb->transfer_flags & URB_SHORT_NOT_OK) |
1638 | *status = -EREMOTEIO; | 1637 | *status = -EREMOTEIO; |
1639 | else | 1638 | else |
1640 | *status = 0; | 1639 | *status = 0; |
1641 | } else { | 1640 | } else { |
1642 | if (usb_endpoint_xfer_bulk(&td->urb->ep->desc)) | 1641 | if (usb_endpoint_xfer_bulk(&td->urb->ep->desc)) |
1643 | xhci_dbg(xhci, "Successful bulk " | 1642 | xhci_dbg(xhci, "Successful bulk " |
1644 | "transfer!\n"); | 1643 | "transfer!\n"); |
1645 | else | 1644 | else |
1646 | xhci_dbg(xhci, "Successful interrupt " | 1645 | xhci_dbg(xhci, "Successful interrupt " |
1647 | "transfer!\n"); | 1646 | "transfer!\n"); |
1648 | *status = 0; | 1647 | *status = 0; |
1649 | } | 1648 | } |
1650 | break; | 1649 | break; |
1651 | case COMP_SHORT_TX: | 1650 | case COMP_SHORT_TX: |
1652 | if (td->urb->transfer_flags & URB_SHORT_NOT_OK) | 1651 | if (td->urb->transfer_flags & URB_SHORT_NOT_OK) |
1653 | *status = -EREMOTEIO; | 1652 | *status = -EREMOTEIO; |
1654 | else | 1653 | else |
1655 | *status = 0; | 1654 | *status = 0; |
1656 | break; | 1655 | break; |
1657 | default: | 1656 | default: |
1658 | /* Others already handled above */ | 1657 | /* Others already handled above */ |
1659 | break; | 1658 | break; |
1660 | } | 1659 | } |
1661 | dev_dbg(&td->urb->dev->dev, | 1660 | dev_dbg(&td->urb->dev->dev, |
1662 | "ep %#x - asked for %d bytes, " | 1661 | "ep %#x - asked for %d bytes, " |
1663 | "%d bytes untransferred\n", | 1662 | "%d bytes untransferred\n", |
1664 | td->urb->ep->desc.bEndpointAddress, | 1663 | td->urb->ep->desc.bEndpointAddress, |
1665 | td->urb->transfer_buffer_length, | 1664 | td->urb->transfer_buffer_length, |
1666 | TRB_LEN(event->transfer_len)); | 1665 | TRB_LEN(event->transfer_len)); |
1667 | /* Fast path - was this the last TRB in the TD for this URB? */ | 1666 | /* Fast path - was this the last TRB in the TD for this URB? */ |
1668 | if (event_trb == td->last_trb) { | 1667 | if (event_trb == td->last_trb) { |
1669 | if (TRB_LEN(event->transfer_len) != 0) { | 1668 | if (TRB_LEN(event->transfer_len) != 0) { |
1670 | td->urb->actual_length = | 1669 | td->urb->actual_length = |
1671 | td->urb->transfer_buffer_length - | 1670 | td->urb->transfer_buffer_length - |
1672 | TRB_LEN(event->transfer_len); | 1671 | TRB_LEN(event->transfer_len); |
1673 | if (td->urb->transfer_buffer_length < | 1672 | if (td->urb->transfer_buffer_length < |
1674 | td->urb->actual_length) { | 1673 | td->urb->actual_length) { |
1675 | xhci_warn(xhci, "HC gave bad length " | 1674 | xhci_warn(xhci, "HC gave bad length " |
1676 | "of %d bytes left\n", | 1675 | "of %d bytes left\n", |
1677 | TRB_LEN(event->transfer_len)); | 1676 | TRB_LEN(event->transfer_len)); |
1678 | td->urb->actual_length = 0; | 1677 | td->urb->actual_length = 0; |
1679 | if (td->urb->transfer_flags & URB_SHORT_NOT_OK) | 1678 | if (td->urb->transfer_flags & URB_SHORT_NOT_OK) |
1680 | *status = -EREMOTEIO; | 1679 | *status = -EREMOTEIO; |
1681 | else | 1680 | else |
1682 | *status = 0; | 1681 | *status = 0; |
1683 | } | 1682 | } |
1684 | /* Don't overwrite a previously set error code */ | 1683 | /* Don't overwrite a previously set error code */ |
1685 | if (*status == -EINPROGRESS) { | 1684 | if (*status == -EINPROGRESS) { |
1686 | if (td->urb->transfer_flags & URB_SHORT_NOT_OK) | 1685 | if (td->urb->transfer_flags & URB_SHORT_NOT_OK) |
1687 | *status = -EREMOTEIO; | 1686 | *status = -EREMOTEIO; |
1688 | else | 1687 | else |
1689 | *status = 0; | 1688 | *status = 0; |
1690 | } | 1689 | } |
1691 | } else { | 1690 | } else { |
1692 | td->urb->actual_length = | 1691 | td->urb->actual_length = |
1693 | td->urb->transfer_buffer_length; | 1692 | td->urb->transfer_buffer_length; |
1694 | /* Ignore a short packet completion if the | 1693 | /* Ignore a short packet completion if the |
1695 | * untransferred length was zero. | 1694 | * untransferred length was zero. |
1696 | */ | 1695 | */ |
1697 | if (*status == -EREMOTEIO) | 1696 | if (*status == -EREMOTEIO) |
1698 | *status = 0; | 1697 | *status = 0; |
1699 | } | 1698 | } |
1700 | } else { | 1699 | } else { |
1701 | /* Slow path - walk the list, starting from the dequeue | 1700 | /* Slow path - walk the list, starting from the dequeue |
1702 | * pointer, to get the actual length transferred. | 1701 | * pointer, to get the actual length transferred. |
1703 | */ | 1702 | */ |
1704 | td->urb->actual_length = 0; | 1703 | td->urb->actual_length = 0; |
1705 | for (cur_trb = ep_ring->dequeue, cur_seg = ep_ring->deq_seg; | 1704 | for (cur_trb = ep_ring->dequeue, cur_seg = ep_ring->deq_seg; |
1706 | cur_trb != event_trb; | 1705 | cur_trb != event_trb; |
1707 | next_trb(xhci, ep_ring, &cur_seg, &cur_trb)) { | 1706 | next_trb(xhci, ep_ring, &cur_seg, &cur_trb)) { |
1708 | if ((cur_trb->generic.field[3] & | 1707 | if ((cur_trb->generic.field[3] & |
1709 | TRB_TYPE_BITMASK) != TRB_TYPE(TRB_TR_NOOP) && | 1708 | TRB_TYPE_BITMASK) != TRB_TYPE(TRB_TR_NOOP) && |
1710 | (cur_trb->generic.field[3] & | 1709 | (cur_trb->generic.field[3] & |
1711 | TRB_TYPE_BITMASK) != TRB_TYPE(TRB_LINK)) | 1710 | TRB_TYPE_BITMASK) != TRB_TYPE(TRB_LINK)) |
1712 | td->urb->actual_length += | 1711 | td->urb->actual_length += |
1713 | TRB_LEN(cur_trb->generic.field[2]); | 1712 | TRB_LEN(cur_trb->generic.field[2]); |
1714 | } | 1713 | } |
1715 | /* If the ring didn't stop on a Link or No-op TRB, add | 1714 | /* If the ring didn't stop on a Link or No-op TRB, add |
1716 | * in the actual bytes transferred from the Normal TRB | 1715 | * in the actual bytes transferred from the Normal TRB |
1717 | */ | 1716 | */ |
1718 | if (trb_comp_code != COMP_STOP_INVAL) | 1717 | if (trb_comp_code != COMP_STOP_INVAL) |
1719 | td->urb->actual_length += | 1718 | td->urb->actual_length += |
1720 | TRB_LEN(cur_trb->generic.field[2]) - | 1719 | TRB_LEN(cur_trb->generic.field[2]) - |
1721 | TRB_LEN(event->transfer_len); | 1720 | TRB_LEN(event->transfer_len); |
1722 | } | 1721 | } |
1723 | 1722 | ||
1724 | return finish_td(xhci, td, event_trb, event, ep, status, false); | 1723 | return finish_td(xhci, td, event_trb, event, ep, status, false); |
1725 | } | 1724 | } |
1726 | 1725 | ||
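
Both finish_td() and the fast path above guard against the controller reporting a residue larger than the request: the subtraction is unsigned, so the result would wrap to a huge value, and the code reports zero instead. A compact restatement of that guard (safe_actual_len is illustrative, not a driver helper):

#include <assert.h>

/* If the residue exceeds the request length, req_len - residue wraps
 * around; both checks above detect the oversized result and clamp
 * the reported length to zero.
 */
unsigned int safe_actual_len(unsigned int req_len, unsigned int residue)
{
	unsigned int act = req_len - residue;	/* may wrap if residue > req_len */

	return act > req_len ? 0 : act;
}

int main(void)
{
	assert(safe_actual_len(512, 100) == 412);	/* normal short TX */
	assert(safe_actual_len(512, 600) == 0);		/* bogus residue clamped */
	return 0;
}
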
1727 | /* | 1726 | /* |
1728 | * If this function returns an error condition, it means it got a Transfer | 1727 | * If this function returns an error condition, it means it got a Transfer |
1729 | * event with a corrupted Slot ID, Endpoint ID, or TRB DMA address. | 1728 | * event with a corrupted Slot ID, Endpoint ID, or TRB DMA address. |
1730 | * At this point, the host controller is probably hosed and should be reset. | 1729 | * At this point, the host controller is probably hosed and should be reset. |
1731 | */ | 1730 | */ |
1732 | static int handle_tx_event(struct xhci_hcd *xhci, | 1731 | static int handle_tx_event(struct xhci_hcd *xhci, |
1733 | struct xhci_transfer_event *event) | 1732 | struct xhci_transfer_event *event) |
1734 | { | 1733 | { |
1735 | struct xhci_virt_device *xdev; | 1734 | struct xhci_virt_device *xdev; |
1736 | struct xhci_virt_ep *ep; | 1735 | struct xhci_virt_ep *ep; |
1737 | struct xhci_ring *ep_ring; | 1736 | struct xhci_ring *ep_ring; |
1738 | unsigned int slot_id; | 1737 | unsigned int slot_id; |
1739 | int ep_index; | 1738 | int ep_index; |
1740 | struct xhci_td *td = NULL; | 1739 | struct xhci_td *td = NULL; |
1741 | dma_addr_t event_dma; | 1740 | dma_addr_t event_dma; |
1742 | struct xhci_segment *event_seg; | 1741 | struct xhci_segment *event_seg; |
1743 | union xhci_trb *event_trb; | 1742 | union xhci_trb *event_trb; |
1744 | struct urb *urb = NULL; | 1743 | struct urb *urb = NULL; |
1745 | int status = -EINPROGRESS; | 1744 | int status = -EINPROGRESS; |
1746 | struct urb_priv *urb_priv; | 1745 | struct urb_priv *urb_priv; |
1747 | struct xhci_ep_ctx *ep_ctx; | 1746 | struct xhci_ep_ctx *ep_ctx; |
1748 | u32 trb_comp_code; | 1747 | u32 trb_comp_code; |
1749 | int ret = 0; | 1748 | int ret = 0; |
1750 | 1749 | ||
1751 | slot_id = TRB_TO_SLOT_ID(event->flags); | 1750 | slot_id = TRB_TO_SLOT_ID(event->flags); |
1752 | xdev = xhci->devs[slot_id]; | 1751 | xdev = xhci->devs[slot_id]; |
1753 | if (!xdev) { | 1752 | if (!xdev) { |
1754 | xhci_err(xhci, "ERROR Transfer event pointed to bad slot\n"); | 1753 | xhci_err(xhci, "ERROR Transfer event pointed to bad slot\n"); |
1755 | return -ENODEV; | 1754 | return -ENODEV; |
1756 | } | 1755 | } |
1757 | 1756 | ||
1758 | /* Endpoint ID is 1 based, our index is zero based */ | 1757 | /* Endpoint ID is 1 based, our index is zero based */ |
1759 | ep_index = TRB_TO_EP_ID(event->flags) - 1; | 1758 | ep_index = TRB_TO_EP_ID(event->flags) - 1; |
1760 | xhci_dbg(xhci, "%s - ep index = %d\n", __func__, ep_index); | 1759 | xhci_dbg(xhci, "%s - ep index = %d\n", __func__, ep_index); |
1761 | ep = &xdev->eps[ep_index]; | 1760 | ep = &xdev->eps[ep_index]; |
1762 | ep_ring = xhci_dma_to_transfer_ring(ep, event->buffer); | 1761 | ep_ring = xhci_dma_to_transfer_ring(ep, event->buffer); |
1763 | ep_ctx = xhci_get_ep_ctx(xhci, xdev->out_ctx, ep_index); | 1762 | ep_ctx = xhci_get_ep_ctx(xhci, xdev->out_ctx, ep_index); |
1764 | if (!ep_ring || | 1763 | if (!ep_ring || |
1765 | (ep_ctx->ep_info & EP_STATE_MASK) == EP_STATE_DISABLED) { | 1764 | (ep_ctx->ep_info & EP_STATE_MASK) == EP_STATE_DISABLED) { |
1766 | xhci_err(xhci, "ERROR Transfer event for disabled endpoint " | 1765 | xhci_err(xhci, "ERROR Transfer event for disabled endpoint " |
1767 | "or incorrect stream ring\n"); | 1766 | "or incorrect stream ring\n"); |
1768 | return -ENODEV; | 1767 | return -ENODEV; |
1769 | } | 1768 | } |
1770 | 1769 | ||
1771 | event_dma = event->buffer; | 1770 | event_dma = event->buffer; |
1772 | trb_comp_code = GET_COMP_CODE(event->transfer_len); | 1771 | trb_comp_code = GET_COMP_CODE(event->transfer_len); |
1773 | /* Look for common error cases */ | 1772 | /* Look for common error cases */ |
1774 | switch (trb_comp_code) { | 1773 | switch (trb_comp_code) { |
1775 | /* Skip codes that require special handling depending on | 1774 | /* Skip codes that require special handling depending on |
1776 | * transfer type | 1775 | * transfer type |
1777 | */ | 1776 | */ |
1778 | case COMP_SUCCESS: | 1777 | case COMP_SUCCESS: |
1779 | case COMP_SHORT_TX: | 1778 | case COMP_SHORT_TX: |
1780 | break; | 1779 | break; |
1781 | case COMP_STOP: | 1780 | case COMP_STOP: |
1782 | xhci_dbg(xhci, "Stopped on Transfer TRB\n"); | 1781 | xhci_dbg(xhci, "Stopped on Transfer TRB\n"); |
1783 | break; | 1782 | break; |
1784 | case COMP_STOP_INVAL: | 1783 | case COMP_STOP_INVAL: |
1785 | xhci_dbg(xhci, "Stopped on No-op or Link TRB\n"); | 1784 | xhci_dbg(xhci, "Stopped on No-op or Link TRB\n"); |
1786 | break; | 1785 | break; |
1787 | case COMP_STALL: | 1786 | case COMP_STALL: |
1788 | xhci_warn(xhci, "WARN: Stalled endpoint\n"); | 1787 | xhci_warn(xhci, "WARN: Stalled endpoint\n"); |
1789 | ep->ep_state |= EP_HALTED; | 1788 | ep->ep_state |= EP_HALTED; |
1790 | status = -EPIPE; | 1789 | status = -EPIPE; |
1791 | break; | 1790 | break; |
1792 | case COMP_TRB_ERR: | 1791 | case COMP_TRB_ERR: |
1793 | xhci_warn(xhci, "WARN: TRB error on endpoint\n"); | 1792 | xhci_warn(xhci, "WARN: TRB error on endpoint\n"); |
1794 | status = -EILSEQ; | 1793 | status = -EILSEQ; |
1795 | break; | 1794 | break; |
1796 | case COMP_SPLIT_ERR: | 1795 | case COMP_SPLIT_ERR: |
1797 | case COMP_TX_ERR: | 1796 | case COMP_TX_ERR: |
1798 | xhci_warn(xhci, "WARN: transfer error on endpoint\n"); | 1797 | xhci_warn(xhci, "WARN: transfer error on endpoint\n"); |
1799 | status = -EPROTO; | 1798 | status = -EPROTO; |
1800 | break; | 1799 | break; |
1801 | case COMP_BABBLE: | 1800 | case COMP_BABBLE: |
1802 | xhci_warn(xhci, "WARN: babble error on endpoint\n"); | 1801 | xhci_warn(xhci, "WARN: babble error on endpoint\n"); |
1803 | status = -EOVERFLOW; | 1802 | status = -EOVERFLOW; |
1804 | break; | 1803 | break; |
1805 | case COMP_DB_ERR: | 1804 | case COMP_DB_ERR: |
1806 | xhci_warn(xhci, "WARN: HC couldn't access mem fast enough\n"); | 1805 | xhci_warn(xhci, "WARN: HC couldn't access mem fast enough\n"); |
1807 | status = -ENOSR; | 1806 | status = -ENOSR; |
1808 | break; | 1807 | break; |
1809 | case COMP_BW_OVER: | 1808 | case COMP_BW_OVER: |
1810 | xhci_warn(xhci, "WARN: bandwidth overrun event on endpoint\n"); | 1809 | xhci_warn(xhci, "WARN: bandwidth overrun event on endpoint\n"); |
1811 | break; | 1810 | break; |
1812 | case COMP_BUFF_OVER: | 1811 | case COMP_BUFF_OVER: |
1813 | xhci_warn(xhci, "WARN: buffer overrun event on endpoint\n"); | 1812 | xhci_warn(xhci, "WARN: buffer overrun event on endpoint\n"); |
1814 | break; | 1813 | break; |
1815 | case COMP_UNDERRUN: | 1814 | case COMP_UNDERRUN: |
1816 | /* | 1815 | /* |
1817 | * When the Isoch ring is empty, the xHC will generate | 1816 | * When the Isoch ring is empty, the xHC will generate |
1818 | * a Ring Overrun Event for an IN Isoch endpoint or a Ring | 1817 | * a Ring Overrun Event for an IN Isoch endpoint or a Ring |
1819 | * Underrun Event for an OUT Isoch endpoint. | 1818 | * Underrun Event for an OUT Isoch endpoint. |
1820 | */ | 1819 | */ |
1821 | xhci_dbg(xhci, "underrun event on endpoint\n"); | 1820 | xhci_dbg(xhci, "underrun event on endpoint\n"); |
1822 | if (!list_empty(&ep_ring->td_list)) | 1821 | if (!list_empty(&ep_ring->td_list)) |
1823 | xhci_dbg(xhci, "Underrun Event for slot %d ep %d " | 1822 | xhci_dbg(xhci, "Underrun Event for slot %d ep %d " |
1824 | "still with TDs queued?\n", | 1823 | "still with TDs queued?\n", |
1825 | TRB_TO_SLOT_ID(event->flags), ep_index); | 1824 | TRB_TO_SLOT_ID(event->flags), ep_index); |
1826 | goto cleanup; | 1825 | goto cleanup; |
1827 | case COMP_OVERRUN: | 1826 | case COMP_OVERRUN: |
1828 | xhci_dbg(xhci, "overrun event on endpoint\n"); | 1827 | xhci_dbg(xhci, "overrun event on endpoint\n"); |
1829 | if (!list_empty(&ep_ring->td_list)) | 1828 | if (!list_empty(&ep_ring->td_list)) |
1830 | xhci_dbg(xhci, "Overrun Event for slot %d ep %d " | 1829 | xhci_dbg(xhci, "Overrun Event for slot %d ep %d " |
1831 | "still with TDs queued?\n", | 1830 | "still with TDs queued?\n", |
1832 | TRB_TO_SLOT_ID(event->flags), ep_index); | 1831 | TRB_TO_SLOT_ID(event->flags), ep_index); |
1833 | goto cleanup; | 1832 | goto cleanup; |
1834 | case COMP_MISSED_INT: | 1833 | case COMP_MISSED_INT: |
1835 | /* | 1834 | /* |
1836 | * When a Missed Service Error is encountered, one or more isoc | 1835 | * When a Missed Service Error is encountered, one or more isoc |
1837 | * TDs may have been missed by the xHC. | 1836 | * TDs may have been missed by the xHC. |
1838 | * Set the endpoint's skip flag; the missed TDs are completed as | 1837 | * Set the endpoint's skip flag; the missed TDs are completed as |
1839 | * short transfers the next time the ep_ring is processed. | 1838 | * short transfers the next time the ep_ring is processed. |
1840 | */ | 1839 | */ |
1841 | ep->skip = true; | 1840 | ep->skip = true; |
1842 | xhci_dbg(xhci, "Miss service interval error, set skip flag\n"); | 1841 | xhci_dbg(xhci, "Miss service interval error, set skip flag\n"); |
1843 | goto cleanup; | 1842 | goto cleanup; |
1844 | default: | 1843 | default: |
1845 | if (xhci_is_vendor_info_code(xhci, trb_comp_code)) { | 1844 | if (xhci_is_vendor_info_code(xhci, trb_comp_code)) { |
1846 | status = 0; | 1845 | status = 0; |
1847 | break; | 1846 | break; |
1848 | } | 1847 | } |
1849 | xhci_warn(xhci, "ERROR Unknown event condition, HC probably " | 1848 | xhci_warn(xhci, "ERROR Unknown event condition, HC probably " |
1850 | "busted\n"); | 1849 | "busted\n"); |
1851 | goto cleanup; | 1850 | goto cleanup; |
1852 | } | 1851 | } |
1853 | 1852 | ||
1854 | do { | 1853 | do { |
1855 | /* This TRB should be in the TD at the head of this ring's | 1854 | /* This TRB should be in the TD at the head of this ring's |
1856 | * TD list. | 1855 | * TD list. |
1857 | */ | 1856 | */ |
1858 | if (list_empty(&ep_ring->td_list)) { | 1857 | if (list_empty(&ep_ring->td_list)) { |
1859 | xhci_warn(xhci, "WARN Event TRB for slot %d ep %d " | 1858 | xhci_warn(xhci, "WARN Event TRB for slot %d ep %d " |
1860 | "with no TDs queued?\n", | 1859 | "with no TDs queued?\n", |
1861 | TRB_TO_SLOT_ID(event->flags), ep_index); | 1860 | TRB_TO_SLOT_ID(event->flags), ep_index); |
1862 | xhci_dbg(xhci, "Event TRB with TRB type ID %u\n", | 1861 | xhci_dbg(xhci, "Event TRB with TRB type ID %u\n", |
1863 | (unsigned int) (event->flags & TRB_TYPE_BITMASK) >> 10); | 1862 | (unsigned int) (event->flags & TRB_TYPE_BITMASK) >> 10); |
1864 | xhci_print_trb_offsets(xhci, (union xhci_trb *) event); | 1863 | xhci_print_trb_offsets(xhci, (union xhci_trb *) event); |
1865 | if (ep->skip) { | 1864 | if (ep->skip) { |
1866 | ep->skip = false; | 1865 | ep->skip = false; |
1867 | xhci_dbg(xhci, "td_list is empty while skip " | 1866 | xhci_dbg(xhci, "td_list is empty while skip " |
1868 | "flag set. Clear skip flag.\n"); | 1867 | "flag set. Clear skip flag.\n"); |
1869 | } | 1868 | } |
1870 | ret = 0; | 1869 | ret = 0; |
1871 | goto cleanup; | 1870 | goto cleanup; |
1872 | } | 1871 | } |
1873 | 1872 | ||
1874 | td = list_entry(ep_ring->td_list.next, struct xhci_td, td_list); | 1873 | td = list_entry(ep_ring->td_list.next, struct xhci_td, td_list); |
1875 | /* Is this a TRB in the currently executing TD? */ | 1874 | /* Is this a TRB in the currently executing TD? */ |
1876 | event_seg = trb_in_td(ep_ring->deq_seg, ep_ring->dequeue, | 1875 | event_seg = trb_in_td(ep_ring->deq_seg, ep_ring->dequeue, |
1877 | td->last_trb, event_dma); | 1876 | td->last_trb, event_dma); |
1878 | if (event_seg && ep->skip) { | 1877 | if (event_seg && ep->skip) { |
1879 | xhci_dbg(xhci, "Found td. Clear skip flag.\n"); | 1878 | xhci_dbg(xhci, "Found td. Clear skip flag.\n"); |
1880 | ep->skip = false; | 1879 | ep->skip = false; |
1881 | } | 1880 | } |
1882 | if (!event_seg && | 1881 | if (!event_seg && |
1883 | (!ep->skip || !usb_endpoint_xfer_isoc(&td->urb->ep->desc))) { | 1882 | (!ep->skip || !usb_endpoint_xfer_isoc(&td->urb->ep->desc))) { |
1884 | /* HC is busted, give up! */ | 1883 | /* HC is busted, give up! */ |
1885 | xhci_err(xhci, "ERROR Transfer event TRB DMA ptr not " | 1884 | xhci_err(xhci, "ERROR Transfer event TRB DMA ptr not " |
1886 | "part of current TD\n"); | 1885 | "part of current TD\n"); |
1887 | return -ESHUTDOWN; | 1886 | return -ESHUTDOWN; |
1888 | } | 1887 | } |
1889 | 1888 | ||
1890 | if (event_seg) { | 1889 | if (event_seg) { |
1891 | event_trb = &event_seg->trbs[(event_dma - | 1890 | event_trb = &event_seg->trbs[(event_dma - |
1892 | event_seg->dma) / sizeof(*event_trb)]; | 1891 | event_seg->dma) / sizeof(*event_trb)]; |
1893 | /* | 1892 | /* |
1894 | * No-op TRB should not trigger interrupts. | 1893 | * No-op TRB should not trigger interrupts. |
1895 | * If event_trb is a no-op TRB, it means the | 1894 | * If event_trb is a no-op TRB, it means the |
1896 | * corresponding TD has been cancelled. Just ignore | 1895 | * corresponding TD has been cancelled. Just ignore |
1897 | * the TD. | 1896 | * the TD. |
1898 | */ | 1897 | */ |
1899 | if ((event_trb->generic.field[3] & TRB_TYPE_BITMASK) | 1898 | if ((event_trb->generic.field[3] & TRB_TYPE_BITMASK) |
1900 | == TRB_TYPE(TRB_TR_NOOP)) { | 1899 | == TRB_TYPE(TRB_TR_NOOP)) { |
1901 | xhci_dbg(xhci, "event_trb is a no-op TRB. " | 1900 | xhci_dbg(xhci, "event_trb is a no-op TRB. " |
1902 | "Skip it\n"); | 1901 | "Skip it\n"); |
1903 | goto cleanup; | 1902 | goto cleanup; |
1904 | } | 1903 | } |
1905 | } | 1904 | } |
1906 | 1905 | ||
1907 | /* Now update the urb's actual_length and give back to | 1906 | /* Now update the urb's actual_length and give back to |
1908 | * the core | 1907 | * the core |
1909 | */ | 1908 | */ |
1910 | if (usb_endpoint_xfer_control(&td->urb->ep->desc)) | 1909 | if (usb_endpoint_xfer_control(&td->urb->ep->desc)) |
1911 | ret = process_ctrl_td(xhci, td, event_trb, event, ep, | 1910 | ret = process_ctrl_td(xhci, td, event_trb, event, ep, |
1912 | &status); | 1911 | &status); |
1913 | else if (usb_endpoint_xfer_isoc(&td->urb->ep->desc)) | 1912 | else if (usb_endpoint_xfer_isoc(&td->urb->ep->desc)) |
1914 | ret = process_isoc_td(xhci, td, event_trb, event, ep, | 1913 | ret = process_isoc_td(xhci, td, event_trb, event, ep, |
1915 | &status); | 1914 | &status); |
1916 | else | 1915 | else |
1917 | ret = process_bulk_intr_td(xhci, td, event_trb, event, | 1916 | ret = process_bulk_intr_td(xhci, td, event_trb, event, |
1918 | ep, &status); | 1917 | ep, &status); |
1919 | 1918 | ||
1920 | cleanup: | 1919 | cleanup: |
1921 | /* | 1920 | /* |
1922 | * Do not update event ring dequeue pointer if ep->skip is set. | 1921 | * Do not update event ring dequeue pointer if ep->skip is set. |
1923 | * We will roll back to continue processing the missed TDs. | 1922 | * We will roll back to continue processing the missed TDs. |
1924 | */ | 1923 | */ |
1925 | if (trb_comp_code == COMP_MISSED_INT || !ep->skip) { | 1924 | if (trb_comp_code == COMP_MISSED_INT || !ep->skip) { |
1926 | inc_deq(xhci, xhci->event_ring, true); | 1925 | inc_deq(xhci, xhci->event_ring, true); |
1927 | xhci_set_hc_event_deq(xhci); | ||
1928 | } | 1926 | } |
1929 | 1927 | ||
1930 | if (ret) { | 1928 | if (ret) { |
1931 | urb = td->urb; | 1929 | urb = td->urb; |
1932 | urb_priv = urb->hcpriv; | 1930 | urb_priv = urb->hcpriv; |
1933 | /* Leave the TD around for the reset endpoint function | 1931 | /* Leave the TD around for the reset endpoint function |
1934 | * to use (but only if it's not a control endpoint, | 1932 | * to use (but only if it's not a control endpoint, |
1935 | * since we already queued the Set TR dequeue pointer | 1933 | * since we already queued the Set TR dequeue pointer |
1936 | * command for stalled control endpoints). | 1934 | * command for stalled control endpoints). |
1937 | */ | 1935 | */ |
1938 | if (usb_endpoint_xfer_control(&urb->ep->desc) || | 1936 | if (usb_endpoint_xfer_control(&urb->ep->desc) || |
1939 | (trb_comp_code != COMP_STALL && | 1937 | (trb_comp_code != COMP_STALL && |
1940 | trb_comp_code != COMP_BABBLE)) | 1938 | trb_comp_code != COMP_BABBLE)) |
1941 | xhci_urb_free_priv(xhci, urb_priv); | 1939 | xhci_urb_free_priv(xhci, urb_priv); |
1942 | 1940 | ||
1943 | usb_hcd_unlink_urb_from_ep(xhci_to_hcd(xhci), urb); | 1941 | usb_hcd_unlink_urb_from_ep(xhci_to_hcd(xhci), urb); |
1944 | xhci_dbg(xhci, "Giveback URB %p, len = %d, " | 1942 | xhci_dbg(xhci, "Giveback URB %p, len = %d, " |
1945 | "status = %d\n", | 1943 | "status = %d\n", |
1946 | urb, urb->actual_length, status); | 1944 | urb, urb->actual_length, status); |
1947 | spin_unlock(&xhci->lock); | 1945 | spin_unlock(&xhci->lock); |
1948 | usb_hcd_giveback_urb(xhci_to_hcd(xhci), urb, status); | 1946 | usb_hcd_giveback_urb(xhci_to_hcd(xhci), urb, status); |
1949 | spin_lock(&xhci->lock); | 1947 | spin_lock(&xhci->lock); |
1950 | } | 1948 | } |
1951 | 1949 | ||
1952 | /* | 1950 | /* |
1953 | * If ep->skip is set, there are missed TDs on the endpoint | 1951 | * If ep->skip is set, there are missed TDs on the endpoint |
1954 | * ring that still need to be taken care of. | 1952 | * ring that still need to be taken care of. |
1955 | * Process them as short transfers until we reach the TD | 1953 | * Process them as short transfers until we reach the TD |
1956 | * pointed to by the event. | 1954 | * pointed to by the event. |
1957 | */ | 1955 | */ |
1958 | } while (ep->skip && trb_comp_code != COMP_MISSED_INT); | 1956 | } while (ep->skip && trb_comp_code != COMP_MISSED_INT); |
1959 | 1957 | ||
1960 | return 0; | 1958 | return 0; |
1961 | } | 1959 | } |
1962 | 1960 | ||
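
The switch at the top of handle_tx_event() maps xHCI completion codes onto URB status values. A stand-alone restatement of that mapping, with FC_* enumerators standing in for the driver's COMP_* constants (the errno values mirror the assignments above):

#include <errno.h>

enum fake_comp { FC_STALL, FC_TRB_ERR, FC_SPLIT_ERR, FC_TX_ERR,
		 FC_BABBLE, FC_DB_ERR };

int comp_to_status(enum fake_comp code)
{
	switch (code) {
	case FC_STALL:     return -EPIPE;	/* stalled endpoint */
	case FC_TRB_ERR:   return -EILSEQ;	/* TRB error */
	case FC_SPLIT_ERR:
	case FC_TX_ERR:    return -EPROTO;	/* transfer error */
	case FC_BABBLE:    return -EOVERFLOW;	/* babble */
	case FC_DB_ERR:    return -ENOSR;	/* HC couldn't keep up */
	}
	return 0;	/* remaining codes are handled per transfer type */
}
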
1963 | /* | 1961 | /* |
1964 | * This function handles all OS-owned events on the event ring. It may drop | 1962 | * This function handles all OS-owned events on the event ring. It may drop |
1965 | * xhci->lock between event processing (e.g. to pass up port status changes). | 1963 | * xhci->lock between event processing (e.g. to pass up port status changes). |
1966 | */ | 1964 | */ |
1967 | static void xhci_handle_event(struct xhci_hcd *xhci) | 1965 | static void xhci_handle_event(struct xhci_hcd *xhci) |
1968 | { | 1966 | { |
1969 | union xhci_trb *event; | 1967 | union xhci_trb *event; |
1970 | int update_ptrs = 1; | 1968 | int update_ptrs = 1; |
1971 | int ret; | 1969 | int ret; |
1972 | 1970 | ||
1973 | xhci_dbg(xhci, "In %s\n", __func__); | 1971 | xhci_dbg(xhci, "In %s\n", __func__); |
1974 | if (!xhci->event_ring || !xhci->event_ring->dequeue) { | 1972 | if (!xhci->event_ring || !xhci->event_ring->dequeue) { |
1975 | xhci->error_bitmask |= 1 << 1; | 1973 | xhci->error_bitmask |= 1 << 1; |
1976 | return; | 1974 | return; |
1977 | } | 1975 | } |
1978 | 1976 | ||
1979 | event = xhci->event_ring->dequeue; | 1977 | event = xhci->event_ring->dequeue; |
1980 | /* Does the HC or OS own the TRB? */ | 1978 | /* Does the HC or OS own the TRB? */ |
1981 | if ((event->event_cmd.flags & TRB_CYCLE) != | 1979 | if ((event->event_cmd.flags & TRB_CYCLE) != |
1982 | xhci->event_ring->cycle_state) { | 1980 | xhci->event_ring->cycle_state) { |
1983 | xhci->error_bitmask |= 1 << 2; | 1981 | xhci->error_bitmask |= 1 << 2; |
1984 | return; | 1982 | return; |
1985 | } | 1983 | } |
1986 | xhci_dbg(xhci, "%s - OS owns TRB\n", __func__); | 1984 | xhci_dbg(xhci, "%s - OS owns TRB\n", __func__); |
1987 | 1985 | ||
1988 | /* FIXME: Handle more event types. */ | 1986 | /* FIXME: Handle more event types. */ |
1989 | switch ((event->event_cmd.flags & TRB_TYPE_BITMASK)) { | 1987 | switch ((event->event_cmd.flags & TRB_TYPE_BITMASK)) { |
1990 | case TRB_TYPE(TRB_COMPLETION): | 1988 | case TRB_TYPE(TRB_COMPLETION): |
1991 | xhci_dbg(xhci, "%s - calling handle_cmd_completion\n", __func__); | 1989 | xhci_dbg(xhci, "%s - calling handle_cmd_completion\n", __func__); |
1992 | handle_cmd_completion(xhci, &event->event_cmd); | 1990 | handle_cmd_completion(xhci, &event->event_cmd); |
1993 | xhci_dbg(xhci, "%s - returned from handle_cmd_completion\n", __func__); | 1991 | xhci_dbg(xhci, "%s - returned from handle_cmd_completion\n", __func__); |
1994 | break; | 1992 | break; |
1995 | case TRB_TYPE(TRB_PORT_STATUS): | 1993 | case TRB_TYPE(TRB_PORT_STATUS): |
1996 | xhci_dbg(xhci, "%s - calling handle_port_status\n", __func__); | 1994 | xhci_dbg(xhci, "%s - calling handle_port_status\n", __func__); |
1997 | handle_port_status(xhci, event); | 1995 | handle_port_status(xhci, event); |
1998 | xhci_dbg(xhci, "%s - returned from handle_port_status\n", __func__); | 1996 | xhci_dbg(xhci, "%s - returned from handle_port_status\n", __func__); |
1999 | update_ptrs = 0; | 1997 | update_ptrs = 0; |
2000 | break; | 1998 | break; |
2001 | case TRB_TYPE(TRB_TRANSFER): | 1999 | case TRB_TYPE(TRB_TRANSFER): |
2002 | xhci_dbg(xhci, "%s - calling handle_tx_event\n", __func__); | 2000 | xhci_dbg(xhci, "%s - calling handle_tx_event\n", __func__); |
2003 | ret = handle_tx_event(xhci, &event->trans_event); | 2001 | ret = handle_tx_event(xhci, &event->trans_event); |
2004 | xhci_dbg(xhci, "%s - returned from handle_tx_event\n", __func__); | 2002 | xhci_dbg(xhci, "%s - returned from handle_tx_event\n", __func__); |
2005 | if (ret < 0) | 2003 | if (ret < 0) |
2006 | xhci->error_bitmask |= 1 << 9; | 2004 | xhci->error_bitmask |= 1 << 9; |
2007 | else | 2005 | else |
2008 | update_ptrs = 0; | 2006 | update_ptrs = 0; |
2009 | break; | 2007 | break; |
2010 | default: | 2008 | default: |
2011 | if ((event->event_cmd.flags & TRB_TYPE_BITMASK) >= TRB_TYPE(48)) | 2009 | if ((event->event_cmd.flags & TRB_TYPE_BITMASK) >= TRB_TYPE(48)) |
2012 | handle_vendor_event(xhci, event); | 2010 | handle_vendor_event(xhci, event); |
2013 | else | 2011 | else |
2014 | xhci->error_bitmask |= 1 << 3; | 2012 | xhci->error_bitmask |= 1 << 3; |
2015 | } | 2013 | } |
2016 | /* Any of the above functions may drop and re-acquire the lock, so check | 2014 | /* Any of the above functions may drop and re-acquire the lock, so check |
2017 | * to make sure a watchdog timer didn't mark the host as non-responsive. | 2015 | * to make sure a watchdog timer didn't mark the host as non-responsive. |
2018 | */ | 2016 | */ |
2019 | if (xhci->xhc_state & XHCI_STATE_DYING) { | 2017 | if (xhci->xhc_state & XHCI_STATE_DYING) { |
2020 | xhci_dbg(xhci, "xHCI host dying, returning from " | 2018 | xhci_dbg(xhci, "xHCI host dying, returning from " |
2021 | "event handler.\n"); | 2019 | "event handler.\n"); |
2022 | return; | 2020 | return; |
2023 | } | 2021 | } |
2024 | 2022 | ||
2025 | if (update_ptrs) { | 2023 | if (update_ptrs) |
2026 | /* Update SW and HC event ring dequeue pointer */ | 2024 | /* Update SW event ring dequeue pointer */ |
2027 | inc_deq(xhci, xhci->event_ring, true); | 2025 | inc_deq(xhci, xhci->event_ring, true); |
2028 | xhci_set_hc_event_deq(xhci); | 2026 | |
2029 | } | ||
2030 | /* Are there more items on the event ring? */ | 2027 | /* Are there more items on the event ring? */ |
2031 | xhci_handle_event(xhci); | 2028 | xhci_handle_event(xhci); |
2032 | } | 2029 | } |
2033 | 2030 | ||
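
xhci_handle_event() decides ownership of the TRB at the dequeue pointer by comparing the TRB's cycle bit against the ring's cycle state: the producer flips the bit it writes on every wrap of the ring, so a stale TRB no longer matches. A sketch of that test; FAKE_TRB_CYCLE is a stand-in for the driver's TRB_CYCLE:

#include <stdbool.h>
#include <stdint.h>

#define FAKE_TRB_CYCLE (1u << 0)	/* cycle bit, lowest flag bit */

/* The consumer owns the TRB at the dequeue pointer only while the
 * TRB's cycle bit equals the ring's current cycle state (0 or 1).
 */
bool os_owns_trb(uint32_t trb_flags, uint32_t ring_cycle_state)
{
	return (trb_flags & FAKE_TRB_CYCLE) == ring_cycle_state;
}
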
2034 | /* | 2031 | /* |
2035 | * xHCI spec says we can get an interrupt, and if the HC has an error condition, | 2032 | * xHCI spec says we can get an interrupt, and if the HC has an error condition, |
2036 | * we might get bad data out of the event ring. Section 4.10.2.7 has a list of | 2033 | * we might get bad data out of the event ring. Section 4.10.2.7 has a list of |
2037 | * indicators of an event TRB error, but we check the status *first* to be safe. | 2034 | * indicators of an event TRB error, but we check the status *first* to be safe. |
2038 | */ | 2035 | */ |
2039 | irqreturn_t xhci_irq(struct usb_hcd *hcd) | 2036 | irqreturn_t xhci_irq(struct usb_hcd *hcd) |
2040 | { | 2037 | { |
2041 | struct xhci_hcd *xhci = hcd_to_xhci(hcd); | 2038 | struct xhci_hcd *xhci = hcd_to_xhci(hcd); |
2042 | u32 status, irq_pending; | 2039 | u32 status, irq_pending; |
2043 | union xhci_trb *trb; | 2040 | union xhci_trb *trb; |
2044 | u64 temp_64; | 2041 | u64 temp_64; |
2042 | union xhci_trb *event_ring_deq; | ||
2043 | dma_addr_t deq; | ||
2045 | 2044 | ||
2046 | spin_lock(&xhci->lock); | 2045 | spin_lock(&xhci->lock); |
2047 | trb = xhci->event_ring->dequeue; | 2046 | trb = xhci->event_ring->dequeue; |
2048 | /* Check if the xHC generated the interrupt, or the irq is shared */ | 2047 | /* Check if the xHC generated the interrupt, or the irq is shared */ |
2049 | status = xhci_readl(xhci, &xhci->op_regs->status); | 2048 | status = xhci_readl(xhci, &xhci->op_regs->status); |
2050 | irq_pending = xhci_readl(xhci, &xhci->ir_set->irq_pending); | 2049 | irq_pending = xhci_readl(xhci, &xhci->ir_set->irq_pending); |
2051 | if (status == 0xffffffff && irq_pending == 0xffffffff) | 2050 | if (status == 0xffffffff && irq_pending == 0xffffffff) |
2052 | goto hw_died; | 2051 | goto hw_died; |
2053 | 2052 | ||
2054 | if (!(status & STS_EINT) && !ER_IRQ_PENDING(irq_pending)) { | 2053 | if (!(status & STS_EINT) && !ER_IRQ_PENDING(irq_pending)) { |
2055 | spin_unlock(&xhci->lock); | 2054 | spin_unlock(&xhci->lock); |
2056 | xhci_warn(xhci, "Spurious interrupt.\n"); | 2055 | xhci_warn(xhci, "Spurious interrupt.\n"); |
2057 | return IRQ_NONE; | 2056 | return IRQ_NONE; |
2058 | } | 2057 | } |
2059 | xhci_dbg(xhci, "op reg status = %08x\n", status); | 2058 | xhci_dbg(xhci, "op reg status = %08x\n", status); |
2060 | xhci_dbg(xhci, "ir set irq_pending = %08x\n", irq_pending); | 2059 | xhci_dbg(xhci, "ir set irq_pending = %08x\n", irq_pending); |
2061 | xhci_dbg(xhci, "Event ring dequeue ptr:\n"); | 2060 | xhci_dbg(xhci, "Event ring dequeue ptr:\n"); |
2062 | xhci_dbg(xhci, "@%llx %08x %08x %08x %08x\n", | 2061 | xhci_dbg(xhci, "@%llx %08x %08x %08x %08x\n", |
2063 | (unsigned long long) | 2062 | (unsigned long long) |
2064 | xhci_trb_virt_to_dma(xhci->event_ring->deq_seg, trb), | 2063 | xhci_trb_virt_to_dma(xhci->event_ring->deq_seg, trb), |
2065 | lower_32_bits(trb->link.segment_ptr), | 2064 | lower_32_bits(trb->link.segment_ptr), |
2066 | upper_32_bits(trb->link.segment_ptr), | 2065 | upper_32_bits(trb->link.segment_ptr), |
2067 | (unsigned int) trb->link.intr_target, | 2066 | (unsigned int) trb->link.intr_target, |
2068 | (unsigned int) trb->link.control); | 2067 | (unsigned int) trb->link.control); |
2069 | 2068 | ||
2070 | if (status & STS_FATAL) { | 2069 | if (status & STS_FATAL) { |
2071 | xhci_warn(xhci, "WARNING: Host System Error\n"); | 2070 | xhci_warn(xhci, "WARNING: Host System Error\n"); |
2072 | xhci_halt(xhci); | 2071 | xhci_halt(xhci); |
2073 | hw_died: | 2072 | hw_died: |
2074 | xhci_to_hcd(xhci)->state = HC_STATE_HALT; | 2073 | xhci_to_hcd(xhci)->state = HC_STATE_HALT; |
2075 | spin_unlock(&xhci->lock); | 2074 | spin_unlock(&xhci->lock); |
2076 | return -ESHUTDOWN; | 2075 | return -ESHUTDOWN; |
2077 | } | 2076 | } |
2078 | 2077 | ||
2079 | /* | 2078 | /* |
2080 | * Clear the op reg interrupt status first, | 2079 | * Clear the op reg interrupt status first, |
2081 | * so we can receive interrupts from other MSI-X interrupters. | 2080 | * so we can receive interrupts from other MSI-X interrupters. |
2082 | * Write 1 to clear the interrupt status. | 2081 | * Write 1 to clear the interrupt status. |
2083 | */ | 2082 | */ |
2084 | status |= STS_EINT; | 2083 | status |= STS_EINT; |
2085 | xhci_writel(xhci, status, &xhci->op_regs->status); | 2084 | xhci_writel(xhci, status, &xhci->op_regs->status); |
2086 | /* FIXME when MSI-X is supported and there are multiple vectors */ | 2085 | /* FIXME when MSI-X is supported and there are multiple vectors */ |
2087 | /* Clear the MSI-X event interrupt status */ | 2086 | /* Clear the MSI-X event interrupt status */ |
2088 | 2087 | ||
2089 | /* Acknowledge the interrupt */ | 2088 | /* Acknowledge the interrupt */ |
2090 | irq_pending |= 0x3; | 2089 | irq_pending |= 0x3; |
2091 | xhci_writel(xhci, irq_pending, &xhci->ir_set->irq_pending); | 2090 | xhci_writel(xhci, irq_pending, &xhci->ir_set->irq_pending); |
2092 | 2091 | ||
2093 | if (xhci->xhc_state & XHCI_STATE_DYING) | 2092 | if (xhci->xhc_state & XHCI_STATE_DYING) { |
2094 | xhci_dbg(xhci, "xHCI dying, ignoring interrupt. " | 2093 | xhci_dbg(xhci, "xHCI dying, ignoring interrupt. " |
2095 | "Shouldn't IRQs be disabled?\n"); | 2094 | "Shouldn't IRQs be disabled?\n"); |
2096 | else | 2095 | /* Clear the event handler busy flag (RW1C); |
2097 | /* FIXME this should be a delayed service routine | 2096 | * the event ring should be empty. |
2098 | * that clears the EHB. | ||
2099 | */ | 2097 | */ |
2100 | xhci_handle_event(xhci); | 2098 | temp_64 = xhci_read_64(xhci, &xhci->ir_set->erst_dequeue); |
2099 | xhci_write_64(xhci, temp_64 | ERST_EHB, | ||
2100 | &xhci->ir_set->erst_dequeue); | ||
2101 | spin_unlock(&xhci->lock); | ||
2101 | 2102 | ||
2102 | /* Clear the event handler busy flag (RW1C); event ring is empty. */ | 2103 | return IRQ_HANDLED; |
2104 | } | ||
2105 | |||
2106 | event_ring_deq = xhci->event_ring->dequeue; | ||
2107 | /* FIXME this should be a delayed service routine | ||
2108 | * that clears the EHB. | ||
2109 | */ | ||
2110 | xhci_handle_event(xhci); | ||
2111 | |||
2103 | temp_64 = xhci_read_64(xhci, &xhci->ir_set->erst_dequeue); | 2112 | temp_64 = xhci_read_64(xhci, &xhci->ir_set->erst_dequeue); |
2104 | xhci_write_64(xhci, temp_64 | ERST_EHB, &xhci->ir_set->erst_dequeue); | 2113 | /* If necessary, update the HW's version of the event ring deq ptr. */ |
2114 | if (event_ring_deq != xhci->event_ring->dequeue) { | ||
2115 | deq = xhci_trb_virt_to_dma(xhci->event_ring->deq_seg, | ||
2116 | xhci->event_ring->dequeue); | ||
2117 | if (deq == 0) | ||
2118 | xhci_warn(xhci, "WARN something wrong with SW event " | ||
2119 | "ring dequeue ptr.\n"); | ||
2120 | /* Update HC event ring dequeue pointer */ | ||
2121 | temp_64 &= ERST_PTR_MASK; | ||
2122 | temp_64 |= ((u64) deq & (u64) ~ERST_PTR_MASK); | ||
2123 | } | ||
2124 | |||
2125 | /* Clear the event handler busy flag (RW1C); event ring is empty. */ | ||
2126 | temp_64 |= ERST_EHB; | ||
2127 | xhci_write_64(xhci, temp_64, &xhci->ir_set->erst_dequeue); | ||
2128 | |||
2105 | spin_unlock(&xhci->lock); | 2129 | spin_unlock(&xhci->lock); |
2106 | 2130 | ||
2107 | return IRQ_HANDLED; | 2131 | return IRQ_HANDLED; |
2108 | } | 2132 | } |
2109 | 2133 | ||
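
The tail of xhci_irq() is the heart of this patch: one read-modify-write of the ERST dequeue register both publishes the new dequeue pointer and clears the event handler busy flag. A sketch of how that 64-bit value is composed; the FAKE_* mask and bit position are stand-ins patterned on the masking done above, not authoritative register definitions:

#include <stdint.h>

#define FAKE_ERST_PTR_MASK 0xfULL	/* low flag bits of the ERDP register */
#define FAKE_ERST_EHB (1ULL << 3)	/* Event Handler Busy, write-1-to-clear */

/* Keep the register's low flag bits, splice in the new dequeue
 * address (TRB-aligned, so its low bits are free), and set EHB so
 * the same 64-bit write also clears the busy flag.
 */
uint64_t compose_erdp(uint64_t old_reg, uint64_t new_deq_dma)
{
	uint64_t reg = old_reg & FAKE_ERST_PTR_MASK;

	reg |= new_deq_dma & ~FAKE_ERST_PTR_MASK;
	return reg | FAKE_ERST_EHB;
}
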
2110 | irqreturn_t xhci_msi_irq(int irq, struct usb_hcd *hcd) | 2134 | irqreturn_t xhci_msi_irq(int irq, struct usb_hcd *hcd) |
2111 | { | 2135 | { |
2112 | irqreturn_t ret; | 2136 | irqreturn_t ret; |
2113 | 2137 | ||
2114 | set_bit(HCD_FLAG_SAW_IRQ, &hcd->flags); | 2138 | set_bit(HCD_FLAG_SAW_IRQ, &hcd->flags); |
2115 | 2139 | ||
2116 | ret = xhci_irq(hcd); | 2140 | ret = xhci_irq(hcd); |
2117 | 2141 | ||
2118 | return ret; | 2142 | return ret; |
2119 | } | 2143 | } |
2120 | 2144 | ||
2121 | /**** Endpoint Ring Operations ****/ | 2145 | /**** Endpoint Ring Operations ****/ |
2122 | 2146 | ||
2123 | /* | 2147 | /* |
2124 | * Generic function for queueing a TRB on a ring. | 2148 | * Generic function for queueing a TRB on a ring. |
2125 | * The caller must have checked to make sure there's room on the ring. | 2149 | * The caller must have checked to make sure there's room on the ring. |
2126 | * | 2150 | * |
2127 | * @more_trbs_coming: Will you enqueue more TRBs before calling | 2151 | * @more_trbs_coming: Will you enqueue more TRBs before calling |
2128 | * prepare_transfer()? | 2152 | * prepare_transfer()? |
2129 | */ | 2153 | */ |
2130 | static void queue_trb(struct xhci_hcd *xhci, struct xhci_ring *ring, | 2154 | static void queue_trb(struct xhci_hcd *xhci, struct xhci_ring *ring, |
2131 | bool consumer, bool more_trbs_coming, | 2155 | bool consumer, bool more_trbs_coming, |
2132 | u32 field1, u32 field2, u32 field3, u32 field4) | 2156 | u32 field1, u32 field2, u32 field3, u32 field4) |
2133 | { | 2157 | { |
2134 | struct xhci_generic_trb *trb; | 2158 | struct xhci_generic_trb *trb; |
2135 | 2159 | ||
2136 | trb = &ring->enqueue->generic; | 2160 | trb = &ring->enqueue->generic; |
2137 | trb->field[0] = field1; | 2161 | trb->field[0] = field1; |
2138 | trb->field[1] = field2; | 2162 | trb->field[1] = field2; |
2139 | trb->field[2] = field3; | 2163 | trb->field[2] = field3; |
2140 | trb->field[3] = field4; | 2164 | trb->field[3] = field4; |
2141 | inc_enq(xhci, ring, consumer, more_trbs_coming); | 2165 | inc_enq(xhci, ring, consumer, more_trbs_coming); |
2142 | } | 2166 | } |
2143 | 2167 | ||
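
queue_trb() shows the ring's basic producer step: a TRB is four 32-bit fields written into the enqueue slot, after which the enqueue pointer advances. A stripped-down sketch with fake_* stand-in types (link-TRB chasing and cycle-bit handling, done by inc_enq() in the driver, are omitted):

#include <stdint.h>

struct fake_generic_trb { uint32_t field[4]; };

struct fake_trb_ring {
	struct fake_generic_trb *enqueue;	/* producer position */
};

/* Write the four fields in place, then advance the producer pointer. */
void fake_queue_trb(struct fake_trb_ring *ring,
		    uint32_t f1, uint32_t f2, uint32_t f3, uint32_t f4)
{
	ring->enqueue->field[0] = f1;
	ring->enqueue->field[1] = f2;
	ring->enqueue->field[2] = f3;
	ring->enqueue->field[3] = f4;
	ring->enqueue++;
}
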
2144 | /* | 2168 | /* |
2145 | * Does various checks on the endpoint ring, and makes it ready to queue num_trbs. | 2169 | * Does various checks on the endpoint ring, and makes it ready to queue num_trbs. |
2146 | * FIXME allocate segments if the ring is full. | 2170 | * FIXME allocate segments if the ring is full. |
2147 | */ | 2171 | */ |
2148 | static int prepare_ring(struct xhci_hcd *xhci, struct xhci_ring *ep_ring, | 2172 | static int prepare_ring(struct xhci_hcd *xhci, struct xhci_ring *ep_ring, |
2149 | u32 ep_state, unsigned int num_trbs, gfp_t mem_flags) | 2173 | u32 ep_state, unsigned int num_trbs, gfp_t mem_flags) |
2150 | { | 2174 | { |
2151 | /* Make sure the endpoint has been added to xHC schedule */ | 2175 | /* Make sure the endpoint has been added to xHC schedule */ |
2152 | xhci_dbg(xhci, "Endpoint state = 0x%x\n", ep_state); | 2176 | xhci_dbg(xhci, "Endpoint state = 0x%x\n", ep_state); |
2153 | switch (ep_state) { | 2177 | switch (ep_state) { |
2154 | case EP_STATE_DISABLED: | 2178 | case EP_STATE_DISABLED: |
2155 | /* | 2179 | /* |
2156 | * USB core changed config/interfaces without notifying us, | 2180 | * USB core changed config/interfaces without notifying us, |
2157 | * or hardware is reporting the wrong state. | 2181 | * or hardware is reporting the wrong state. |
2158 | */ | 2182 | */ |
2159 | xhci_warn(xhci, "WARN urb submitted to disabled ep\n"); | 2183 | xhci_warn(xhci, "WARN urb submitted to disabled ep\n"); |
2160 | return -ENOENT; | 2184 | return -ENOENT; |
2161 | case EP_STATE_ERROR: | 2185 | case EP_STATE_ERROR: |
2162 | xhci_warn(xhci, "WARN waiting for error on ep to be cleared\n"); | 2186 | xhci_warn(xhci, "WARN waiting for error on ep to be cleared\n"); |
2163 | /* FIXME event handling code for error needs to clear it */ | 2187 | /* FIXME event handling code for error needs to clear it */ |
2164 | /* XXX not sure if this should be -ENOENT or not */ | 2188 | /* XXX not sure if this should be -ENOENT or not */ |
2165 | return -EINVAL; | 2189 | return -EINVAL; |
2166 | case EP_STATE_HALTED: | 2190 | case EP_STATE_HALTED: |
2167 | xhci_dbg(xhci, "WARN halted endpoint, queueing URB anyway.\n"); | 2191 | xhci_dbg(xhci, "WARN halted endpoint, queueing URB anyway.\n"); |
2168 | case EP_STATE_STOPPED: | 2192 | case EP_STATE_STOPPED: |
2169 | case EP_STATE_RUNNING: | 2193 | case EP_STATE_RUNNING: |
2170 | break; | 2194 | break; |
2171 | default: | 2195 | default: |
2172 | xhci_err(xhci, "ERROR unknown endpoint state for ep\n"); | 2196 | xhci_err(xhci, "ERROR unknown endpoint state for ep\n"); |
2173 | /* | 2197 | /* |
2174 | * FIXME issue Configure Endpoint command to try to get the HC | 2198 | * FIXME issue Configure Endpoint command to try to get the HC |
2175 | * back into a known state. | 2199 | * back into a known state. |
2176 | */ | 2200 | */ |
2177 | return -EINVAL; | 2201 | return -EINVAL; |
2178 | } | 2202 | } |
2179 | if (!room_on_ring(xhci, ep_ring, num_trbs)) { | 2203 | if (!room_on_ring(xhci, ep_ring, num_trbs)) { |
2180 | /* FIXME allocate more room */ | 2204 | /* FIXME allocate more room */ |
2181 | xhci_err(xhci, "ERROR no room on ep ring\n"); | 2205 | xhci_err(xhci, "ERROR no room on ep ring\n"); |
2182 | return -ENOMEM; | 2206 | return -ENOMEM; |
2183 | } | 2207 | } |
2184 | 2208 | ||
2185 | if (enqueue_is_link_trb(ep_ring)) { | 2209 | if (enqueue_is_link_trb(ep_ring)) { |
2186 | struct xhci_ring *ring = ep_ring; | 2210 | struct xhci_ring *ring = ep_ring; |
2187 | union xhci_trb *next; | 2211 | union xhci_trb *next; |
2188 | 2212 | ||
2189 | xhci_dbg(xhci, "prepare_ring: pointing to link trb\n"); | 2213 | xhci_dbg(xhci, "prepare_ring: pointing to link trb\n"); |
2190 | next = ring->enqueue; | 2214 | next = ring->enqueue; |
2191 | 2215 | ||
2192 | while (last_trb(xhci, ring, ring->enq_seg, next)) { | 2216 | while (last_trb(xhci, ring, ring->enq_seg, next)) { |
2193 | 2217 | ||
2194 | /* If we're not dealing with 0.95 hardware, | 2218 | /* If we're not dealing with 0.95 hardware, |
2195 | * clear the chain bit. | 2219 | * clear the chain bit. |
2196 | */ | 2220 | */ |
2197 | if (!xhci_link_trb_quirk(xhci)) | 2221 | if (!xhci_link_trb_quirk(xhci)) |
2198 | next->link.control &= ~TRB_CHAIN; | 2222 | next->link.control &= ~TRB_CHAIN; |
2199 | else | 2223 | else |
2200 | next->link.control |= TRB_CHAIN; | 2224 | next->link.control |= TRB_CHAIN; |
2201 | 2225 | ||
2202 | wmb(); | 2226 | wmb(); |
2203 | next->link.control ^= (u32) TRB_CYCLE; | 2227 | next->link.control ^= (u32) TRB_CYCLE; |
2204 | 2228 | ||
2205 | /* Toggle the cycle bit after the last ring segment. */ | 2229 | /* Toggle the cycle bit after the last ring segment. */ |
2206 | if (last_trb_on_last_seg(xhci, ring, ring->enq_seg, next)) { | 2230 | if (last_trb_on_last_seg(xhci, ring, ring->enq_seg, next)) { |
2207 | ring->cycle_state = (ring->cycle_state ? 0 : 1); | 2231 | ring->cycle_state = (ring->cycle_state ? 0 : 1); |
2208 | if (!in_interrupt()) { | 2232 | if (!in_interrupt()) { |
2209 | xhci_dbg(xhci, "queue_trb: Toggle cycle " | 2233 | xhci_dbg(xhci, "queue_trb: Toggle cycle " |
2210 | "state for ring %p = %i\n", | 2234 | "state for ring %p = %i\n", |
2211 | ring, (unsigned int)ring->cycle_state); | 2235 | ring, (unsigned int)ring->cycle_state); |
2212 | } | 2236 | } |
2213 | } | 2237 | } |
2214 | ring->enq_seg = ring->enq_seg->next; | 2238 | ring->enq_seg = ring->enq_seg->next; |
2215 | ring->enqueue = ring->enq_seg->trbs; | 2239 | ring->enqueue = ring->enq_seg->trbs; |
2216 | next = ring->enqueue; | 2240 | next = ring->enqueue; |
2217 | } | 2241 | } |
2218 | } | 2242 | } |
2219 | 2243 | ||
2220 | return 0; | 2244 | return 0; |
2221 | } | 2245 | } |
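
/*
 * Worked example for the link-TRB walk in prepare_ring() (illustrative): on
 * a two-segment ring whose enqueue pointer has reached the link TRB at the
 * end of the last segment, the loop clears (or, on 0.95 hardware, sets) the
 * chain bit, then flips the link TRB's cycle bit after the wmb() so the xHC
 * may follow it.  Because that link TRB is the last TRB of the last segment,
 * ring->cycle_state also toggles, so TRBs left over from the previous lap
 * around the ring (still carrying the old cycle value) are not mistaken for
 * valid work.  Enqueue then moves to the first TRB of the first segment.
 */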

static int prepare_transfer(struct xhci_hcd *xhci,
		struct xhci_virt_device *xdev,
		unsigned int ep_index,
		unsigned int stream_id,
		unsigned int num_trbs,
		struct urb *urb,
		unsigned int td_index,
		gfp_t mem_flags)
{
	int ret;
	struct urb_priv *urb_priv;
	struct xhci_td *td;
	struct xhci_ring *ep_ring;
	struct xhci_ep_ctx *ep_ctx = xhci_get_ep_ctx(xhci, xdev->out_ctx, ep_index);

	ep_ring = xhci_stream_id_to_ring(xdev, ep_index, stream_id);
	if (!ep_ring) {
		xhci_dbg(xhci, "Can't prepare ring for bad stream ID %u\n",
				stream_id);
		return -EINVAL;
	}

	ret = prepare_ring(xhci, ep_ring,
			ep_ctx->ep_info & EP_STATE_MASK,
			num_trbs, mem_flags);
	if (ret)
		return ret;

	urb_priv = urb->hcpriv;
	td = urb_priv->td[td_index];

	INIT_LIST_HEAD(&td->td_list);
	INIT_LIST_HEAD(&td->cancelled_td_list);

	if (td_index == 0) {
		ret = usb_hcd_link_urb_to_ep(xhci_to_hcd(xhci), urb);
		if (unlikely(ret)) {
			xhci_urb_free_priv(xhci, urb_priv);
			urb->hcpriv = NULL;
			return ret;
		}
	}

	td->urb = urb;
	/* Add this TD to the tail of the endpoint ring's TD list */
	list_add_tail(&td->td_list, &ep_ring->td_list);
	td->start_seg = ep_ring->enq_seg;
	td->first_trb = ep_ring->enqueue;

	urb_priv->td[td_index] = td;

	return 0;
}

static unsigned int count_sg_trbs_needed(struct xhci_hcd *xhci, struct urb *urb)
{
	int num_sgs, num_trbs, running_total, temp, i;
	struct scatterlist *sg;

	sg = NULL;
	num_sgs = urb->num_sgs;
	temp = urb->transfer_buffer_length;

	xhci_dbg(xhci, "count sg list trbs: \n");
	num_trbs = 0;
	for_each_sg(urb->sg, sg, num_sgs, i) {
		unsigned int previous_total_trbs = num_trbs;
		unsigned int len = sg_dma_len(sg);

		/* Scatter gather list entries may cross 64KB boundaries */
		running_total = TRB_MAX_BUFF_SIZE -
			(sg_dma_address(sg) & ((1 << TRB_MAX_BUFF_SHIFT) - 1));
		if (running_total != 0)
			num_trbs++;

		/* How many more 64KB chunks to transfer, how many more TRBs? */
		while (running_total < sg_dma_len(sg)) {
			num_trbs++;
			running_total += TRB_MAX_BUFF_SIZE;
		}
		xhci_dbg(xhci, " sg #%d: dma = %#llx, len = %#x (%d), num_trbs = %d\n",
				i, (unsigned long long)sg_dma_address(sg),
				len, len, num_trbs - previous_total_trbs);

		len = min_t(int, len, temp);
		temp -= len;
		if (temp == 0)
			break;
	}
	xhci_dbg(xhci, "\n");
	if (!in_interrupt())
		dev_dbg(&urb->dev->dev, "ep %#x - urb len = %d, sglist used, num_trbs = %d\n",
				urb->ep->desc.bEndpointAddress,
				urb->transfer_buffer_length,
				num_trbs);
	return num_trbs;
}
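
/*
 * Worked example for the TRB count above (illustrative numbers, assuming
 * TRB_MAX_BUFF_SIZE is the 64KB the boundary comments refer to): an sg
 * entry of 100000 bytes whose DMA address ends in 0xf000 has 0x1000 (4096)
 * bytes left before the first 64KB boundary, so it needs one TRB for those
 * 4096 bytes plus two more for the remaining 95904 bytes (one full 64KB
 * chunk and one partial chunk): three TRBs in total.
 */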

static void check_trb_math(struct urb *urb, int num_trbs, int running_total)
{
	if (num_trbs != 0)
		dev_dbg(&urb->dev->dev, "%s - ep %#x - Miscalculated number of "
				"TRBs, %d left\n", __func__,
				urb->ep->desc.bEndpointAddress, num_trbs);
	if (running_total != urb->transfer_buffer_length)
		dev_dbg(&urb->dev->dev, "%s - ep %#x - Miscalculated tx length, "
				"queued %#x (%d), asked for %#x (%d)\n",
				__func__,
				urb->ep->desc.bEndpointAddress,
				running_total, running_total,
				urb->transfer_buffer_length,
				urb->transfer_buffer_length);
}

static void giveback_first_trb(struct xhci_hcd *xhci, int slot_id,
		unsigned int ep_index, unsigned int stream_id, int start_cycle,
		struct xhci_generic_trb *start_trb, struct xhci_td *td)
{
	/*
	 * Pass all the TRBs to the hardware at once and make sure this write
	 * isn't reordered.
	 */
	wmb();
	start_trb->field[3] |= start_cycle;
	ring_ep_doorbell(xhci, slot_id, ep_index, stream_id);
}
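
/*
 * Design note: the queueing functions below deliberately leave the cycle
 * bit of the TD's first TRB clear while the rest of the chain is built.
 * The wmb() orders all of those TRB writes before the final write that ORs
 * the saved start_cycle into start_trb->field[3], handing the whole TD to
 * the hardware in one step before the doorbell rings.
 */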

/*
 * xHCI uses normal TRBs for both bulk and interrupt.  When the interrupt
 * endpoint is to be serviced, the xHC will consume (at most) one TD.  A TD
 * (comprised of sg list entries) can take several service intervals to
 * transmit.
 */
int xhci_queue_intr_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
		struct urb *urb, int slot_id, unsigned int ep_index)
{
	struct xhci_ep_ctx *ep_ctx = xhci_get_ep_ctx(xhci,
			xhci->devs[slot_id]->out_ctx, ep_index);
	int xhci_interval;
	int ep_interval;

	xhci_interval = EP_INTERVAL_TO_UFRAMES(ep_ctx->ep_info);
	ep_interval = urb->interval;
	/* Convert to microframes */
	if (urb->dev->speed == USB_SPEED_LOW ||
			urb->dev->speed == USB_SPEED_FULL)
		ep_interval *= 8;
	/* FIXME change this to a warning and a suggestion to use the new API
	 * to set the polling interval (once the API is added).
	 */
	if (xhci_interval != ep_interval) {
		if (!printk_ratelimit())
			dev_dbg(&urb->dev->dev, "Driver uses different interval"
					" (%d microframe%s) than xHCI "
					"(%d microframe%s)\n",
					ep_interval,
					ep_interval == 1 ? "" : "s",
					xhci_interval,
					xhci_interval == 1 ? "" : "s");
		urb->interval = xhci_interval;
		/* Convert back to frames for LS/FS devices */
		if (urb->dev->speed == USB_SPEED_LOW ||
				urb->dev->speed == USB_SPEED_FULL)
			urb->interval /= 8;
	}
	return xhci_queue_bulk_tx(xhci, GFP_ATOMIC, urb, slot_id, ep_index);
}
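
/*
 * Worked example for the interval fixup above (illustrative numbers): a
 * full-speed device asking for urb->interval = 4 frames is first converted
 * to 32 microframes.  If the endpoint context encodes an interval of 64
 * microframes, the debug message fires and urb->interval becomes
 * 64 / 8 = 8 frames, matching what the xHC will actually do.
 */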

/*
 * The TD size is the number of bytes remaining in the TD (including this TRB),
 * right shifted by 10.
 * It must fit in bits 21:17, so it can't be bigger than 31.
 */
static u32 xhci_td_remainder(unsigned int remainder)
{
	u32 max = (1 << (21 - 17 + 1)) - 1;

	if ((remainder >> 10) >= max)
		return max << 17;
	else
		return (remainder >> 10) << 17;
}
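
/*
 * Worked examples for xhci_td_remainder() (illustrative numbers): max is
 * (1 << 5) - 1 = 31.  For 20480 bytes remaining, 20480 >> 10 = 20, so the
 * function returns 20 << 17.  For 10MB remaining, 10485760 >> 10 = 10240,
 * which is clamped to 31 << 17; the TD size field saturates rather than
 * overflowing its 5 bits.
 */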

static int queue_bulk_sg_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
		struct urb *urb, int slot_id, unsigned int ep_index)
{
	struct xhci_ring *ep_ring;
	unsigned int num_trbs;
	struct urb_priv *urb_priv;
	struct xhci_td *td;
	struct scatterlist *sg;
	int num_sgs;
	int trb_buff_len, this_sg_len, running_total;
	bool first_trb;
	u64 addr;
	bool more_trbs_coming;

	struct xhci_generic_trb *start_trb;
	int start_cycle;

	ep_ring = xhci_urb_to_transfer_ring(xhci, urb);
	if (!ep_ring)
		return -EINVAL;

	num_trbs = count_sg_trbs_needed(xhci, urb);
	num_sgs = urb->num_sgs;

	trb_buff_len = prepare_transfer(xhci, xhci->devs[slot_id],
			ep_index, urb->stream_id,
			num_trbs, urb, 0, mem_flags);
	if (trb_buff_len < 0)
		return trb_buff_len;

	urb_priv = urb->hcpriv;
	td = urb_priv->td[0];

	/*
	 * Don't give the first TRB to the hardware (by toggling the cycle bit)
	 * until we've finished creating all the other TRBs.  The ring's cycle
	 * state may change as we enqueue the other TRBs, so save it too.
	 */
	start_trb = &ep_ring->enqueue->generic;
	start_cycle = ep_ring->cycle_state;

	running_total = 0;
	/*
	 * How much data is in the first TRB?
	 *
	 * There are three forces at work for TRB buffer pointers and lengths:
	 * 1. We don't want to walk off the end of this sg-list entry buffer.
	 * 2. The transfer length that the driver requested may be smaller than
	 *    the amount of memory allocated for this scatter-gather list.
	 * 3. TRB buffers can't cross 64KB boundaries.
	 */
	sg = urb->sg;
	addr = (u64) sg_dma_address(sg);
	this_sg_len = sg_dma_len(sg);
	trb_buff_len = TRB_MAX_BUFF_SIZE -
		(addr & ((1 << TRB_MAX_BUFF_SHIFT) - 1));
	trb_buff_len = min_t(int, trb_buff_len, this_sg_len);
	if (trb_buff_len > urb->transfer_buffer_length)
		trb_buff_len = urb->transfer_buffer_length;
	xhci_dbg(xhci, "First length to xfer from 1st sglist entry = %u\n",
			trb_buff_len);

	first_trb = true;
	/* Queue the first TRB, even if it's zero-length */
	do {
		u32 field = 0;
		u32 length_field = 0;
		u32 remainder = 0;

		/* Don't change the cycle bit of the first TRB until later */
		if (first_trb)
			first_trb = false;
		else
			field |= ep_ring->cycle_state;

		/* Chain all the TRBs together; clear the chain bit in the last
		 * TRB to indicate it's the last TRB in the chain.
		 */
		if (num_trbs > 1) {
			field |= TRB_CHAIN;
		} else {
			/* FIXME - add check for ZERO_PACKET flag before this */
			td->last_trb = ep_ring->enqueue;
			field |= TRB_IOC;
		}
		xhci_dbg(xhci, " sg entry: dma = %#x, len = %#x (%d), "
				"64KB boundary at %#x, end dma = %#x\n",
				(unsigned int) addr, trb_buff_len, trb_buff_len,
				(unsigned int) (addr + TRB_MAX_BUFF_SIZE) & ~(TRB_MAX_BUFF_SIZE - 1),
				(unsigned int) addr + trb_buff_len);
		if (TRB_MAX_BUFF_SIZE -
				(addr & ((1 << TRB_MAX_BUFF_SHIFT) - 1)) < trb_buff_len) {
			xhci_warn(xhci, "WARN: sg dma xfer crosses 64KB boundaries!\n");
			xhci_dbg(xhci, "Next boundary at %#x, end dma = %#x\n",
					(unsigned int) (addr + TRB_MAX_BUFF_SIZE) & ~(TRB_MAX_BUFF_SIZE - 1),
					(unsigned int) addr + trb_buff_len);
		}
		remainder = xhci_td_remainder(urb->transfer_buffer_length -
				running_total);
		length_field = TRB_LEN(trb_buff_len) |
			remainder |
			TRB_INTR_TARGET(0);
		if (num_trbs > 1)
			more_trbs_coming = true;
		else
			more_trbs_coming = false;
		queue_trb(xhci, ep_ring, false, more_trbs_coming,
				lower_32_bits(addr),
				upper_32_bits(addr),
				length_field,
				/* We always want to know if the TRB was short,
				 * or we won't get an event when it completes.
				 * (Unless we use event data TRBs, which are a
				 * waste of space and HC resources.)
				 */
				field | TRB_ISP | TRB_TYPE(TRB_NORMAL));
		--num_trbs;
		running_total += trb_buff_len;

		/* Calculate length for next transfer --
		 * Are we done queueing all the TRBs for this sg entry?
		 */
		this_sg_len -= trb_buff_len;
		if (this_sg_len == 0) {
			--num_sgs;
			if (num_sgs == 0)
				break;
			sg = sg_next(sg);
			addr = (u64) sg_dma_address(sg);
			this_sg_len = sg_dma_len(sg);
		} else {
			addr += trb_buff_len;
		}

		trb_buff_len = TRB_MAX_BUFF_SIZE -
			(addr & ((1 << TRB_MAX_BUFF_SHIFT) - 1));
		trb_buff_len = min_t(int, trb_buff_len, this_sg_len);
		if (running_total + trb_buff_len > urb->transfer_buffer_length)
			trb_buff_len =
				urb->transfer_buffer_length - running_total;
	} while (running_total < urb->transfer_buffer_length);

	check_trb_math(urb, num_trbs, running_total);
	giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id,
			start_cycle, start_trb, td);
	return 0;
}
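
/*
 * Worked example for the first-TRB sizing in queue_bulk_sg_tx() above
 * (illustrative numbers): if the first sg entry starts at a DMA address
 * ending in 0xff00 with sg_dma_len = 8192, the boundary term allows
 * 0x100 (256) bytes, min_t() against this_sg_len leaves 256, and a URB
 * asking for only 100 bytes clamps it further to 100.  The first TRB
 * therefore covers min(bytes to the 64KB boundary, sg entry length,
 * urb->transfer_buffer_length).
 */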

/* This is very similar to what ehci-q.c qtd_fill() does */
int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
		struct urb *urb, int slot_id, unsigned int ep_index)
{
	struct xhci_ring *ep_ring;
	struct urb_priv *urb_priv;
	struct xhci_td *td;
	int num_trbs;
	struct xhci_generic_trb *start_trb;
	bool first_trb;
	bool more_trbs_coming;
	int start_cycle;
	u32 field, length_field;

	int running_total, trb_buff_len, ret;
	u64 addr;

	if (urb->num_sgs)
		return queue_bulk_sg_tx(xhci, mem_flags, urb, slot_id, ep_index);

	ep_ring = xhci_urb_to_transfer_ring(xhci, urb);
	if (!ep_ring)
		return -EINVAL;

	num_trbs = 0;
	/* How much data is (potentially) left before the 64KB boundary? */
	running_total = TRB_MAX_BUFF_SIZE -
		(urb->transfer_dma & ((1 << TRB_MAX_BUFF_SHIFT) - 1));

	/* If there's some data on this 64KB chunk, or we have to send a
	 * zero-length transfer, we need at least one TRB
	 */
	if (running_total != 0 || urb->transfer_buffer_length == 0)
		num_trbs++;
	/* How many more 64KB chunks to transfer, how many more TRBs? */
	while (running_total < urb->transfer_buffer_length) {
		num_trbs++;
		running_total += TRB_MAX_BUFF_SIZE;
	}
	/* FIXME: this doesn't deal with URB_ZERO_PACKET - need one more */

	if (!in_interrupt())
		dev_dbg(&urb->dev->dev, "ep %#x - urb len = %#x (%d), addr = %#llx, num_trbs = %d\n",
				urb->ep->desc.bEndpointAddress,
				urb->transfer_buffer_length,
				urb->transfer_buffer_length,
				(unsigned long long)urb->transfer_dma,
				num_trbs);

	ret = prepare_transfer(xhci, xhci->devs[slot_id],
			ep_index, urb->stream_id,
			num_trbs, urb, 0, mem_flags);
	if (ret < 0)
		return ret;

	urb_priv = urb->hcpriv;
	td = urb_priv->td[0];

	/*
	 * Don't give the first TRB to the hardware (by toggling the cycle bit)
	 * until we've finished creating all the other TRBs.  The ring's cycle
	 * state may change as we enqueue the other TRBs, so save it too.
	 */
	start_trb = &ep_ring->enqueue->generic;
	start_cycle = ep_ring->cycle_state;

	running_total = 0;
	/* How much data is in the first TRB? */
	addr = (u64) urb->transfer_dma;
	trb_buff_len = TRB_MAX_BUFF_SIZE -
		(urb->transfer_dma & ((1 << TRB_MAX_BUFF_SHIFT) - 1));
	if (urb->transfer_buffer_length < trb_buff_len)
		trb_buff_len = urb->transfer_buffer_length;

	first_trb = true;

	/* Queue the first TRB, even if it's zero-length */
	do {
		u32 remainder = 0;
		field = 0;

		/* Don't change the cycle bit of the first TRB until later */
		if (first_trb)
			first_trb = false;
		else
			field |= ep_ring->cycle_state;

		/* Chain all the TRBs together; clear the chain bit in the last
		 * TRB to indicate it's the last TRB in the chain.
		 */
		if (num_trbs > 1) {
			field |= TRB_CHAIN;
		} else {
			/* FIXME - add check for ZERO_PACKET flag before this */
			td->last_trb = ep_ring->enqueue;
			field |= TRB_IOC;
		}
		remainder = xhci_td_remainder(urb->transfer_buffer_length -
				running_total);
		length_field = TRB_LEN(trb_buff_len) |
			remainder |
			TRB_INTR_TARGET(0);
		if (num_trbs > 1)
			more_trbs_coming = true;
		else
			more_trbs_coming = false;
		queue_trb(xhci, ep_ring, false, more_trbs_coming,
				lower_32_bits(addr),
				upper_32_bits(addr),
				length_field,
				/* We always want to know if the TRB was short,
				 * or we won't get an event when it completes.
				 * (Unless we use event data TRBs, which are a
				 * waste of space and HC resources.)
				 */
				field | TRB_ISP | TRB_TYPE(TRB_NORMAL));
		--num_trbs;
		running_total += trb_buff_len;

		/* Calculate length for next transfer */
		addr += trb_buff_len;
		trb_buff_len = urb->transfer_buffer_length - running_total;
		if (trb_buff_len > TRB_MAX_BUFF_SIZE)
			trb_buff_len = TRB_MAX_BUFF_SIZE;
	} while (running_total < urb->transfer_buffer_length);

	check_trb_math(urb, num_trbs, running_total);
	giveback_first_trb(xhci, slot_id, ep_index, urb->stream_id,
			start_cycle, start_trb, td);
	return 0;
}

/* Caller must have locked xhci->lock */
int xhci_queue_ctrl_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
		struct urb *urb, int slot_id, unsigned int ep_index)
{
	struct xhci_ring *ep_ring;
	int num_trbs;
	int ret;
	struct usb_ctrlrequest *setup;
	struct xhci_generic_trb *start_trb;
	int start_cycle;
	u32 field, length_field;
	struct urb_priv *urb_priv;
	struct xhci_td *td;

	ep_ring = xhci_urb_to_transfer_ring(xhci, urb);
	if (!ep_ring)
		return -EINVAL;

	/*
	 * Need to copy setup packet into setup TRB, so we can't use the setup
	 * DMA address.
	 */
	if (!urb->setup_packet)
		return -EINVAL;

	if (!in_interrupt())
		xhci_dbg(xhci, "Queueing ctrl tx for slot id %d, ep %d\n",
				slot_id, ep_index);
	/* 1 TRB for setup, 1 for status */
	num_trbs = 2;
	/*
	 * Don't need to check if we need additional event data and normal TRBs,
	 * since data in control transfers will never get bigger than 16MB
	 * XXX: can we get a buffer that crosses 64KB boundaries?
	 */
	if (urb->transfer_buffer_length > 0)
		num_trbs++;
	ret = prepare_transfer(xhci, xhci->devs[slot_id],
			ep_index, urb->stream_id,
			num_trbs, urb, 0, mem_flags);
	if (ret < 0)
		return ret;

	urb_priv = urb->hcpriv;
	td = urb_priv->td[0];

	/*
	 * Don't give the first TRB to the hardware (by toggling the cycle bit)
	 * until we've finished creating all the other TRBs.  The ring's cycle
	 * state may change as we enqueue the other TRBs, so save it too.
	 */
	start_trb = &ep_ring->enqueue->generic;
	start_cycle = ep_ring->cycle_state;

	/* Queue setup TRB - see section 6.4.1.2.1 */
	/* FIXME better way to translate setup_packet into two u32 fields? */
	setup = (struct usb_ctrlrequest *) urb->setup_packet;
	queue_trb(xhci, ep_ring, false, true,
			/* FIXME endianness is probably going to bite my ass here. */
			setup->bRequestType | setup->bRequest << 8 | setup->wValue << 16,
			setup->wIndex | setup->wLength << 16,
			TRB_LEN(8) | TRB_INTR_TARGET(0),
			/* Immediate data in pointer */
			TRB_IDT | TRB_TYPE(TRB_SETUP));

	/* If there's data, queue data TRBs */
	field = 0;
	length_field = TRB_LEN(urb->transfer_buffer_length) |
		xhci_td_remainder(urb->transfer_buffer_length) |
		TRB_INTR_TARGET(0);
	if (urb->transfer_buffer_length > 0) {
		if (setup->bRequestType & USB_DIR_IN)
			field |= TRB_DIR_IN;
		queue_trb(xhci, ep_ring, false, true,
				lower_32_bits(urb->transfer_dma),
				upper_32_bits(urb->transfer_dma),
				length_field,
				/* Event on short tx */
				field | TRB_ISP | TRB_TYPE(TRB_DATA) | ep_ring->cycle_state);
	}

	/* Save the DMA address of the last TRB in the TD */
	td->last_trb = ep_ring->enqueue;

	/* Queue status TRB - see Table 7 and sections 4.11.2.2 and 6.4.1.2.3 */
	/* If the device sent data, the status stage is an OUT transfer */
	if (urb->transfer_buffer_length > 0 && setup->bRequestType & USB_DIR_IN)
		field = 0;
	else
		field = TRB_DIR_IN;
	queue_trb(xhci, ep_ring, false, false,
			0,
			0,
			TRB_INTR_TARGET(0),
			/* Event on completion */
			field | TRB_IOC | TRB_TYPE(TRB_STATUS) | ep_ring->cycle_state);

	giveback_first_trb(xhci, slot_id, ep_index, 0,
			start_cycle, start_trb, td);
	return 0;
}
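
/*
 * Worked example for the setup TRB packing above (illustrative): for a
 * GET_DESCRIPTOR(DEVICE) request with bRequestType = 0x80, bRequest = 0x06,
 * wValue = 0x0100, wIndex = 0, wLength = 18, the first u32 becomes
 * 0x80 | (0x06 << 8) | (0x0100 << 16) = 0x01000680 and the second becomes
 * 0 | (18 << 16) = 0x00120000, assuming the w* fields are already in CPU
 * byte order (exactly the endianness concern the FIXME above flags).
 */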

static int count_isoc_trbs_needed(struct xhci_hcd *xhci,
		struct urb *urb, int i)
{
	int num_trbs = 0;
	u64 addr, td_len, running_total;

	addr = (u64) (urb->transfer_dma + urb->iso_frame_desc[i].offset);
	td_len = urb->iso_frame_desc[i].length;

	running_total = TRB_MAX_BUFF_SIZE -
		(addr & ((1 << TRB_MAX_BUFF_SHIFT) - 1));
	if (running_total != 0)
		num_trbs++;

	while (running_total < td_len) {
		num_trbs++;
		running_total += TRB_MAX_BUFF_SIZE;
	}

	return num_trbs;
}

/* This is for isoc transfer */
static int xhci_queue_isoc_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
		struct urb *urb, int slot_id, unsigned int ep_index)
{
	struct xhci_ring *ep_ring;
	struct urb_priv *urb_priv;
	struct xhci_td *td;
	int num_tds, trbs_per_td;
	struct xhci_generic_trb *start_trb;
	bool first_trb;
	int start_cycle;
	u32 field, length_field;
	int running_total, trb_buff_len, td_len, td_remain_len, ret;
	u64 start_addr, addr;
	int i, j;

	ep_ring = xhci->devs[slot_id]->eps[ep_index].ring;

	num_tds = urb->number_of_packets;
	if (num_tds < 1) {
		xhci_dbg(xhci, "Isoc URB with zero packets?\n");
		return -EINVAL;
	}

	if (!in_interrupt())
		dev_dbg(&urb->dev->dev, "ep %#x - urb len = %#x (%d),"
				" addr = %#llx, num_tds = %d\n",
				urb->ep->desc.bEndpointAddress,
				urb->transfer_buffer_length,
				urb->transfer_buffer_length,
				(unsigned long long)urb->transfer_dma,
				num_tds);

	start_addr = (u64) urb->transfer_dma;
	start_trb = &ep_ring->enqueue->generic;
	start_cycle = ep_ring->cycle_state;

	/* Queue the first TRB, even if it's zero-length */
	for (i = 0; i < num_tds; i++) {
		first_trb = true;

		running_total = 0;
		addr = start_addr + urb->iso_frame_desc[i].offset;
		td_len = urb->iso_frame_desc[i].length;
		td_remain_len = td_len;

		trbs_per_td = count_isoc_trbs_needed(xhci, urb, i);

		ret = prepare_transfer(xhci, xhci->devs[slot_id], ep_index,
				urb->stream_id, trbs_per_td, urb, i, mem_flags);
		if (ret < 0)
			return ret;

		urb_priv = urb->hcpriv;
		td = urb_priv->td[i];

		for (j = 0; j < trbs_per_td; j++) {
			u32 remainder = 0;
			field = 0;

			if (first_trb) {
				/* Queue the isoc TRB */
				field |= TRB_TYPE(TRB_ISOC);
				/* Assume URB_ISO_ASAP is set */
				field |= TRB_SIA;
				if (i > 0)
					field |= ep_ring->cycle_state;
				first_trb = false;
			} else {
				/* Queue other normal TRBs */
				field |= TRB_TYPE(TRB_NORMAL);
				field |= ep_ring->cycle_state;
			}

			/* Chain all the TRBs together; clear the chain bit in
			 * the last TRB to indicate it's the last TRB in the
			 * chain.
			 */
			if (j < trbs_per_td - 1) {
				field |= TRB_CHAIN;
			} else {
				td->last_trb = ep_ring->enqueue;
				field |= TRB_IOC;
			}

			/* Calculate TRB length */
			trb_buff_len = TRB_MAX_BUFF_SIZE -
				(addr & ((1 << TRB_MAX_BUFF_SHIFT) - 1));
			if (trb_buff_len > td_remain_len)
				trb_buff_len = td_remain_len;

			remainder = xhci_td_remainder(td_len - running_total);
			length_field = TRB_LEN(trb_buff_len) |
				remainder |
				TRB_INTR_TARGET(0);
			queue_trb(xhci, ep_ring, false, false,
					lower_32_bits(addr),
					upper_32_bits(addr),
					length_field,
					/* We always want to know if the TRB was short,
					 * or we won't get an event when it completes.
					 * (Unless we use event data TRBs, which are a
					 * waste of space and HC resources.)
					 */
					field | TRB_ISP);
			running_total += trb_buff_len;

			addr += trb_buff_len;
			td_remain_len -= trb_buff_len;
		}

		/* Check TD length */
		if (running_total != td_len) {
			xhci_err(xhci, "ISOC TD length mismatch\n");
			return -EINVAL;
		}
	}

	wmb();
	start_trb->field[3] |= start_cycle;

	ring_ep_doorbell(xhci, slot_id, ep_index, urb->stream_id);
	return 0;
}
2934 | 2958 | ||
2935 | /* | 2959 | /* |
2936 | * Check transfer ring to guarantee there is enough room for the urb. | 2960 | * Check transfer ring to guarantee there is enough room for the urb. |
2937 | * Update ISO URB start_frame and interval. | 2961 | * Update ISO URB start_frame and interval. |
2938 | * Update interval as xhci_queue_intr_tx does. Just use xhci frame_index to | 2962 | * Update interval as xhci_queue_intr_tx does. Just use xhci frame_index to |
2939 | * update the urb->start_frame by now. | 2963 | * update the urb->start_frame by now. |
2940 | * Always assume URB_ISO_ASAP set, and NEVER use urb->start_frame as input. | 2964 | * Always assume URB_ISO_ASAP set, and NEVER use urb->start_frame as input. |
2941 | */ | 2965 | */ |
2942 | int xhci_queue_isoc_tx_prepare(struct xhci_hcd *xhci, gfp_t mem_flags, | 2966 | int xhci_queue_isoc_tx_prepare(struct xhci_hcd *xhci, gfp_t mem_flags, |
2943 | struct urb *urb, int slot_id, unsigned int ep_index) | 2967 | struct urb *urb, int slot_id, unsigned int ep_index) |
2944 | { | 2968 | { |
2945 | struct xhci_virt_device *xdev; | 2969 | struct xhci_virt_device *xdev; |
2946 | struct xhci_ring *ep_ring; | 2970 | struct xhci_ring *ep_ring; |
2947 | struct xhci_ep_ctx *ep_ctx; | 2971 | struct xhci_ep_ctx *ep_ctx; |
2948 | int start_frame; | 2972 | int start_frame; |
2949 | int xhci_interval; | 2973 | int xhci_interval; |
2950 | int ep_interval; | 2974 | int ep_interval; |
2951 | int num_tds, num_trbs, i; | 2975 | int num_tds, num_trbs, i; |
2952 | int ret; | 2976 | int ret; |
2953 | 2977 | ||
	xdev = xhci->devs[slot_id];
	ep_ring = xdev->eps[ep_index].ring;
	ep_ctx = xhci_get_ep_ctx(xhci, xdev->out_ctx, ep_index);

	num_trbs = 0;
	num_tds = urb->number_of_packets;
	for (i = 0; i < num_tds; i++)
		num_trbs += count_isoc_trbs_needed(xhci, urb, i);

	/* Check the ring to guarantee there is enough room for the whole URB.
	 * Do not insert any TDs of the URB onto the ring if the check fails.
	 */
	ret = prepare_ring(xhci, ep_ring, ep_ctx->ep_info & EP_STATE_MASK,
			num_trbs, mem_flags);
	if (ret)
		return ret;

	start_frame = xhci_readl(xhci, &xhci->run_regs->microframe_index);
	start_frame &= 0x3fff;

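	/* MFINDEX counts 125-microsecond microframes and only its low 14 bits
	 * are valid, hence the mask above.  Low- and full-speed devices are
	 * scheduled in 1 ms frames, so their start_frame is converted from
	 * microframes to frames (divided by 8) below.
	 */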
	urb->start_frame = start_frame;
	if (urb->dev->speed == USB_SPEED_LOW ||
			urb->dev->speed == USB_SPEED_FULL)
		urb->start_frame >>= 3;

	xhci_interval = EP_INTERVAL_TO_UFRAMES(ep_ctx->ep_info);
	ep_interval = urb->interval;
	/* Convert to microframes */
	if (urb->dev->speed == USB_SPEED_LOW ||
			urb->dev->speed == USB_SPEED_FULL)
		ep_interval *= 8;
	/* FIXME change this to a warning and a suggestion to use the new API
	 * to set the polling interval (once the API is added).
	 */
	if (xhci_interval != ep_interval) {
		/* printk_ratelimit() returns true when we may print */
		if (printk_ratelimit())
			dev_dbg(&urb->dev->dev, "Driver uses different interval"
					" (%d microframe%s) than xHCI "
					"(%d microframe%s)\n",
					ep_interval,
					ep_interval == 1 ? "" : "s",
					xhci_interval,
					xhci_interval == 1 ? "" : "s");
		urb->interval = xhci_interval;
		/* Convert back to frames for LS/FS devices */
		if (urb->dev->speed == USB_SPEED_LOW ||
				urb->dev->speed == USB_SPEED_FULL)
			urb->interval /= 8;
	}
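	/* Room on the ring was already reserved by prepare_ring() using the
	 * caller's mem_flags; the actual queuing step below is done with
	 * GFP_ATOMIC instead.
	 */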
	return xhci_queue_isoc_tx(xhci, GFP_ATOMIC, urb, slot_id, ep_index);
}

/**** Command Ring Operations ****/

/* Generic function for queueing a command TRB on the command ring.
 * Check to make sure there's room on the command ring for one command TRB.
 * Also check that there's room reserved for commands that must not fail.
 * If this is a command that must not fail, meaning command_must_succeed = TRUE,
 * then only check for the number of reserved spots.
 * Don't decrement xhci->cmd_ring_reserved_trbs after we've queued the TRB
 * because the command event handler may want to resubmit a failed command.
 */
static int queue_command(struct xhci_hcd *xhci, u32 field1, u32 field2,
		u32 field3, u32 field4, bool command_must_succeed)
{
	int reserved_trbs = xhci->cmd_ring_reserved_trbs;
	int ret;

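	/* An ordinary command may not eat into the reserved slots, so it has
	 * to find room for itself on top of them; a must-succeed command is
	 * allowed to consume one of the reserved TRBs.
	 */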
	if (!command_must_succeed)
		reserved_trbs++;

	ret = prepare_ring(xhci, xhci->cmd_ring, EP_STATE_RUNNING,
			reserved_trbs, GFP_ATOMIC);
	if (ret < 0) {
		xhci_err(xhci, "ERR: No room for command on command ring\n");
		if (command_must_succeed)
			xhci_err(xhci, "ERR: Reserved TRB counting for "
					"unfailable commands failed.\n");
		return ret;
	}
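	/* A lone command TRB can carry its cycle bit immediately; there is no
	 * multi-TRB TD here that would need to be handed over atomically.
	 */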
	queue_trb(xhci, xhci->cmd_ring, false, false, field1, field2, field3,
			field4 | xhci->cmd_ring->cycle_state);
	return 0;
}

/* Queue a no-op command on the command ring */
static int queue_cmd_noop(struct xhci_hcd *xhci)
{
	return queue_command(xhci, 0, 0, 0, TRB_TYPE(TRB_CMD_NOOP), false);
}

/*
 * Place a no-op command on the command ring to test the command and
 * event ring.
 */
void *xhci_setup_one_noop(struct xhci_hcd *xhci)
{
	if (queue_cmd_noop(xhci) < 0)
		return NULL;
	xhci->noops_submitted++;
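	/* Hand the doorbell routine back to the caller, which invokes it once
	 * it is ready for the no-op to be fetched; NULL means nothing was
	 * queued.
	 */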
	return xhci_ring_cmd_db;
}

/* Queue a slot enable or disable request on the command ring */
int xhci_queue_slot_control(struct xhci_hcd *xhci, u32 trb_type, u32 slot_id)
{
	return queue_command(xhci, 0, 0, 0,
			TRB_TYPE(trb_type) | SLOT_ID_FOR_TRB(slot_id), false);
}

/* Queue an address device command TRB */
int xhci_queue_address_device(struct xhci_hcd *xhci, dma_addr_t in_ctx_ptr,
		u32 slot_id)
{
	return queue_command(xhci, lower_32_bits(in_ctx_ptr),
			upper_32_bits(in_ctx_ptr), 0,
			TRB_TYPE(TRB_ADDR_DEV) | SLOT_ID_FOR_TRB(slot_id),
			false);
}

int xhci_queue_vendor_command(struct xhci_hcd *xhci,
		u32 field1, u32 field2, u32 field3, u32 field4)
{
	return queue_command(xhci, field1, field2, field3, field4, false);
}

/* Queue a reset device command TRB */
int xhci_queue_reset_device(struct xhci_hcd *xhci, u32 slot_id)
{
	return queue_command(xhci, 0, 0, 0,
			TRB_TYPE(TRB_RESET_DEV) | SLOT_ID_FOR_TRB(slot_id),
			false);
}

/* Queue a configure endpoint command TRB */
int xhci_queue_configure_endpoint(struct xhci_hcd *xhci, dma_addr_t in_ctx_ptr,
		u32 slot_id, bool command_must_succeed)
{
	return queue_command(xhci, lower_32_bits(in_ctx_ptr),
			upper_32_bits(in_ctx_ptr), 0,
			TRB_TYPE(TRB_CONFIG_EP) | SLOT_ID_FOR_TRB(slot_id),
			command_must_succeed);
}

/* Queue an evaluate context command TRB */
int xhci_queue_evaluate_context(struct xhci_hcd *xhci, dma_addr_t in_ctx_ptr,
		u32 slot_id)
{
	return queue_command(xhci, lower_32_bits(in_ctx_ptr),
			upper_32_bits(in_ctx_ptr), 0,
			TRB_TYPE(TRB_EVAL_CONTEXT) | SLOT_ID_FOR_TRB(slot_id),
			false);
}

int xhci_queue_stop_endpoint(struct xhci_hcd *xhci, int slot_id,
		unsigned int ep_index)
{
	u32 trb_slot_id = SLOT_ID_FOR_TRB(slot_id);
	u32 trb_ep_index = EP_ID_FOR_TRB(ep_index);
	u32 type = TRB_TYPE(TRB_STOP_RING);

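	/* EP_ID_FOR_TRB() turns the driver's zero-based endpoint index into
	 * the one-based endpoint ID (device context index) the hardware
	 * expects, shifted into the TRB's endpoint ID field.
	 */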
	return queue_command(xhci, 0, 0, 0,
			trb_slot_id | trb_ep_index | type, false);
}

/* Set Transfer Ring Dequeue Pointer command.
 * This should not be used for endpoints that have streams enabled.
 */
static int queue_set_tr_deq(struct xhci_hcd *xhci, int slot_id,
		unsigned int ep_index, unsigned int stream_id,
		struct xhci_segment *deq_seg,
		union xhci_trb *deq_ptr, u32 cycle_state)
{
	dma_addr_t addr;
	u32 trb_slot_id = SLOT_ID_FOR_TRB(slot_id);
	u32 trb_ep_index = EP_ID_FOR_TRB(ep_index);
	u32 trb_stream_id = STREAM_ID_FOR_TRB(stream_id);
	u32 type = TRB_TYPE(TRB_SET_DEQ);

	addr = xhci_trb_virt_to_dma(deq_seg, deq_ptr);
	if (addr == 0) {
		xhci_warn(xhci, "WARN Cannot submit Set TR Deq Ptr\n");
		xhci_warn(xhci, "WARN deq seg = %p, deq pt = %p\n",
				deq_seg, deq_ptr);
		return 0;
	}
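	/* TRBs are 16-byte aligned, so the low bits of the dequeue address
	 * are always zero; bit 0 is reused to carry the new dequeue cycle
	 * state (DCS) to the hardware.
	 */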
	return queue_command(xhci, lower_32_bits(addr) | cycle_state,
			upper_32_bits(addr), trb_stream_id,
			trb_slot_id | trb_ep_index | type, false);
}

int xhci_queue_reset_ep(struct xhci_hcd *xhci, int slot_id,
		unsigned int ep_index)
{
	u32 trb_slot_id = SLOT_ID_FOR_TRB(slot_id);
	u32 trb_ep_index = EP_ID_FOR_TRB(ep_index);
	u32 type = TRB_TYPE(TRB_RESET_EP);
