  On atomic types (atomic_t, atomic64_t and atomic_long_t).
  
  The atomic type provides an interface to the architecture's means of atomic
  RMW operations between CPUs (atomic operations on MMIO are not supported and
  can lead to fatal traps on some platforms).
  
  API
  ---
  
  The 'full' API consists of (atomic64_ and atomic_long_ prefixes omitted for
  brevity):
  
  Non-RMW ops:
  
    atomic_read(), atomic_set()
    atomic_read_acquire(), atomic_set_release()
  
  
  RMW atomic operations:
  
  Arithmetic:
  
    atomic_{add,sub,inc,dec}()
    atomic_{add,sub,inc,dec}_return{,_relaxed,_acquire,_release}()
    atomic_fetch_{add,sub,inc,dec}{,_relaxed,_acquire,_release}()
  
  
  Bitwise:
  
    atomic_{and,or,xor,andnot}()
    atomic_fetch_{and,or,xor,andnot}{,_relaxed,_acquire,_release}()
  
  
  Swap:
  
    atomic_xchg{,_relaxed,_acquire,_release}()
    atomic_cmpxchg{,_relaxed,_acquire,_release}()
    atomic_try_cmpxchg{,_relaxed,_acquire,_release}()
  
  
  Reference count (but please see refcount_t):
  
    atomic_add_unless(), atomic_inc_not_zero()
    atomic_sub_and_test(), atomic_dec_and_test()
  
  
  Misc:
  
    atomic_inc_and_test(), atomic_add_negative()
    atomic_dec_unless_positive(), atomic_inc_unless_negative()
  
  
  Barriers:
  
    smp_mb__{before,after}_atomic()


  TYPES (signed vs unsigned)
  -----
  
  While atomic_t, atomic_long_t and atomic64_t use int, long and s64
  respectively (for hysterical raisins), the kernel uses -fno-strict-overflow
  (which implies -fwrapv) and defines signed overflow to behave like
  2s-complement.
  
  Therefore, an explicitly unsigned variant of the atomic ops is strictly
  unnecessary and we can simply cast; there is no UB.
  
  There was a bug in UBSAN prior to GCC-8 that would generate UB warnings for
  signed types.
  
  With this we also conform to the C/C++ _Atomic behaviour and things like
  P1236R1.
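
  For example, a conceptually unsigned counter can simply cast at the
  boundaries; a minimal sketch (the my_*() helpers are hypothetical, not a
  kernel API):

    static inline unsigned int my_counter_read(atomic_t *v)
    {
      return (unsigned int)atomic_read(v);
    }

    static inline void my_counter_add(unsigned int i, atomic_t *v)
    {
      atomic_add((int)i, v);
    }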
  
  SEMANTICS
  ---------
  
  Non-RMW ops:
  
  The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
  implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
  smp_store_release() respectively. Therefore, if you find yourself only using
  the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all
  and are doing it wrong.
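
  For illustration, a minimal sketch of that canonical mapping (the my_*()
  names are illustrative only):

    #define my_atomic_read(v)		READ_ONCE((v)->counter)
    #define my_atomic_set(v, i)		WRITE_ONCE((v)->counter, (i))
    #define my_atomic_read_acquire(v)	smp_load_acquire(&(v)->counter)
    #define my_atomic_set_release(v, i)	smp_store_release(&(v)->counter, (i))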
  A note for the implementation of atomic_set{}() is that it must not break the
  atomicity of the RMW ops. That is:

    C Atomic-RMW-ops-are-atomic-WRT-atomic_set

    {
      atomic_t v = ATOMIC_INIT(1);
    }

    P0(atomic_t *v)
    {
      (void)atomic_add_unless(v, 1, 0);
    }

    P1(atomic_t *v)
    {
      atomic_set(v, 0);
    }
  
    exists
    (v=2)
  
  In this case we would expect the atomic_set() from CPU1 to either happen
  before the atomic_add_unless(), in which case the latter would be a no-op,
  or _after_, in which case we'd overwrite its result. In no case is "2" a
  valid outcome.
  
  This is typically true on 'normal' platforms, where a regular competing STORE
  will invalidate a LL/SC or fail a CMPXCHG.
  
  The obvious case where this is not so is when we need to implement atomic ops
  with a lock:
  
    CPU0						CPU1
  
    atomic_add_unless(v, 1, 0);
      lock();
      ret = READ_ONCE(v->counter); // == 1
  						atomic_set(v, 0);
      if (ret != u)				  WRITE_ONCE(v->counter, 0);
        WRITE_ONCE(v->counter, ret + 1);
      unlock();
  
  the typical solution is to then implement atomic_set{}() with atomic_xchg().
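
  A minimal sketch of that solution (illustrative only, the real fallbacks
  live in the architecture code):

    static inline void my_locked_atomic_set(atomic_t *v, int i)
    {
      /* xchg() is serialized by the same lock as the other RMW ops,
         so no competing RMW can be half-complete when the store lands. */
      (void)atomic_xchg(v, i);
    }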
  
  
  RMW ops:
  
  These come in various forms:
  
   - plain operations without return value: atomic_{}()
  
   - operations which return the modified value: atomic_{}_return()
  
     these are limited to the arithmetic operations because those are
     reversible. Bitops are irreversible and therefore the modified value
     is of dubious utility.
  
   - operations which return the original value: atomic_fetch_{}()
  
   - swap operations: xchg(), cmpxchg() and try_cmpxchg()
  
   - misc; the special purpose operations that are commonly used and would,
     given the interface, normally be implemented using (try_)cmpxchg loops but
     are time critical and can, (typically) on LL/SC architectures, be more
     efficiently implemented.
  
  All these operations are SMP atomic; that is, the operations (for a single
  atomic variable) can be fully ordered and no intermediate state is lost or
  visible.
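
  For example, the arithmetic forms relate as follows (a sketch, ignoring the
  ordering suffixes); atomic_add_return() yields the new value while
  atomic_fetch_add() yields the original value:

    atomic_add_return(i, v) == atomic_fetch_add(i, v) + i
    atomic_add(i, v)        == (void)atomic_add_return(i, v)
    atomic_inc(v)           == atomic_add(1, v)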
  
  
  ORDERING  (go read memory-barriers.txt first)
  --------
  
  The rule of thumb:
  
   - non-RMW operations are unordered;
  
   - RMW operations that have no return value are unordered;
  
   - RMW operations that have a return value are fully ordered;
  
   - RMW operations that are conditional are unordered on FAILURE,
     otherwise the above rules apply.
  
  Except of course when an operation has an explicit ordering like:
  
   {}_relaxed: unordered
   {}_acquire: the R of the RMW (or atomic_read) is an ACQUIRE
   {}_release: the W of the RMW (or atomic_set)  is a  RELEASE
  
  Where 'unordered' is against other memory locations. Address dependencies are
  not defeated.
  
  Fully ordered primitives are ordered against everything prior and everything
  subsequent. Therefore a fully ordered primitive is like having an smp_mb()
  before and an smp_mb() after the primitive.
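
  For example, a message-passing pattern built from these (the variables x and
  flag are hypothetical, with flag initially 0):

    CPU0					CPU1

    WRITE_ONCE(x, 1);				r0 = atomic_read_acquire(&flag);
    atomic_set_release(&flag, 1);		r1 = READ_ONCE(x);

  If r0 == 1 then r1 must also be 1: the RELEASE orders the store to x before
  the store to flag, and the ACQUIRE orders the load of flag before the load
  of x.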
  
  
  The barriers:
  
    smp_mb__{before,after}_atomic()

  only apply to the RMW atomic ops and can be used to augment/upgrade the
  ordering inherent to the op. These barriers act almost like a full smp_mb():
  smp_mb__before_atomic() orders all earlier accesses against the RMW op
  itself and all accesses following it, and smp_mb__after_atomic() orders all
  later accesses against the RMW op and all accesses preceding it. However,
  accesses between the smp_mb__{before,after}_atomic() and the RMW op are not
  ordered, so it is advisable to place the barrier right next to the RMW atomic
  op whenever possible.
  
  These helper barriers exist because architectures have varying implicit
  ordering on their SMP atomic primitives. For example, our TSO architectures
  provide fully ordered atomics and these barriers are no-ops.

  NOTE: when the atomic RMW ops are fully ordered, they should also imply a
  compiler barrier.
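
  For example, on x86 the LOCK prefixed RMW ops are fully ordered and their
  inline asm carries a "memory" clobber, which is what provides the compiler
  barrier; an illustrative sketch (not the exact kernel implementation):

    static inline void my_x86_atomic_add(int i, atomic_t *v)
    {
      asm volatile("lock addl %1, %0"
                   : "+m" (v->counter)
                   : "ir" (i)
                   : "memory");	/* compiler barrier */
    }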
  Thus:
  
    atomic_fetch_add();
  
  is equivalent to:
  
    smp_mb__before_atomic();
    atomic_fetch_add_relaxed();
    smp_mb__after_atomic();
  
  However the atomic_fetch_add() might be implemented more efficiently.
  
  Further, while something like:
  
    smp_mb__before_atomic();
    atomic_dec(&X);
  
  is a 'typical' RELEASE pattern, the barrier is strictly stronger than
  a RELEASE because it orders preceding instructions against both the read
  and write parts of the atomic_dec(), and against all following instructions
  as well. Similarly, something like:

    atomic_inc(&X);
    smp_mb__after_atomic();
  
  is an ACQUIRE pattern (though very much not typical), but again the barrier is
  strictly stronger than ACQUIRE. As illustrated:

    C Atomic-RMW+mb__after_atomic-is-stronger-than-acquire
  
    {
    }

    P0(int *x, atomic_t *y)
    {
      r0 = READ_ONCE(*x);
      smp_rmb();
      r1 = atomic_read(y);
    }

    P1(int *x, atomic_t *y)
    {
      atomic_inc(y);
      smp_mb__after_atomic();
      WRITE_ONCE(*x, 1);
    }
  
    exists
    (0:r0=1 /\ 0:r1=0)
  
  This should not happen; but a hypothetical atomic_inc_acquire() --
  (void)atomic_fetch_inc_acquire() for instance -- would allow the outcome,
  because it would not order the W part of the RMW against the following
  WRITE_ONCE.  Thus:

    P0			P1
  
  			t = LL.acq *y (0)
  			t++;
  			*x = 1;
    r0 = *x (1)
    RMB
    r1 = *y (0)
  			SC *y, t;

  is allowed.
  
  
  CMPXCHG vs TRY_CMPXCHG
  ----------------------
  
    int atomic_cmpxchg(atomic_t *ptr, int old, int new);
    bool atomic_try_cmpxchg(atomic_t *ptr, int *oldp, int new);
  
  Both provide the same functionality, but try_cmpxchg() can lead to more
  compact code. The functions relate like:
  
    bool atomic_try_cmpxchg(atomic_t *ptr, int *oldp, int new)
    {
      int ret, old = *oldp;
      ret = atomic_cmpxchg(ptr, old, new);
      if (ret != old)
        *oldp = ret;
      return ret == old;
    }
  
  and:
  
    int atomic_cmpxchg(atomic_t *ptr, int old, int new)
    {
      (void)atomic_try_cmpxchg(ptr, &old, new);
      return old;
    }
  
  Usage:
  
    old = atomic_read(&v);			old = atomic_read(&v);
    for (;;) {					do {
      new = func(old);				  new = func(old);
      tmp = atomic_cmpxchg(&v, old, new);		} while (!atomic_try_cmpxchg(&v, &old, new));
      if (tmp == old)
        break;
      old = tmp;
    }
  
  NB. try_cmpxchg() also generates better code on some platforms (notably x86)
  where the function more closely matches the hardware instruction.
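
  As a worked example, a hedged sketch of a conditional-add helper built on
  this idiom (my_add_unless_zero() is illustrative; see atomic_add_unless()
  for the real API):

    static inline bool my_add_unless_zero(atomic_t *v, int a)
    {
      int old = atomic_read(v);

      do {
        if (!old)
          return false;
      } while (!atomic_try_cmpxchg(v, &old, old + a));

      return true;
    }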
  
  
  FORWARD PROGRESS
  ----------------
  
  In general strong forward progress is expected of all unconditional atomic
  operations -- those in the Arithmetic and Bitwise classes and xchg(). However
  a fair amount of code also requires forward progress from the conditional
  atomic operations.
  
  Specifically 'simple' cmpxchg() loops are expected to not starve one another
  indefinitely. However, this is not evident on LL/SC architectures, because
  while an LL/SC architecture 'can/should/must' provide forward progress
  guarantees between competing LL/SC sections, such a guarantee does not
  transfer to cmpxchg() implemented using LL/SC. Consider:
  
    old = atomic_read(&v);
    do {
      new = func(old);
    } while (!atomic_try_cmpxchg(&v, &old, new));
  
  which on LL/SC becomes something like:
  
    old = atomic_read(&v);
    do {
      new = func(old);
    } while (!({
      asm volatile ("1: LL  %[oldval], %[v]\n"
                    "   CMP %[oldval], %[old]\n"
                    "   BNE 2f\n"
                    "   SC  %[new], %[v]\n"
                    "   BNE 1b\n"
                    "2:\n"
                    : [oldval] "=&r" (oldval), [v] "m" (v)
                    : [old] "r" (old), [new] "r" (new)
                    : "memory");
      success = (oldval == old);
      if (!success)
        old = oldval;
      success; }));
  
  However, even the forward branch from the failed compare can cause the LL/SC
  to fail on some architectures, let alone whatever the compiler makes of the C
  loop body. As a result there is no guarantee whatsoever that the cacheline
  containing @v will stay on the local CPU, nor that progress is made.
  
  Even native CAS architectures can fail to provide forward progress for their
  primitive (See Sparc64 for an example).
  
  Such implementations are strongly encouraged to add exponential backoff loops
  to a failed CAS in order to ensure some progress. Affected architectures are
  also strongly encouraged to inspect/audit the atomic fallbacks, refcount_t and
  their locking primitives.
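
  A minimal sketch of what such a backoff can look like, shown here around a
  generic try_cmpxchg() loop (func(), MY_BACKOFF_MAX and the delay handling
  are illustrative only):

    int i, delay = 1, old = atomic_read(&v);

    while (!atomic_try_cmpxchg(&v, &old, func(old))) {
      for (i = 0; i < delay; i++)
        cpu_relax();
      if (delay < MY_BACKOFF_MAX)
        delay <<= 1;
    }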