Commit 54539cd217d687d9acf385eab22ec02b3f7a86a0

Authored by Linus Torvalds

Merge branch 'for-3.15-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu

Pull percpu fix from Tejun Heo:
 "It is very late but this is an important percpu-refcount fix from
  Sebastian Ott.

  The problem is that percpu_ref_*() used __this_cpu_*() instead of
  this_cpu_*().  The difference between the two is that the latter is
  atomic with respect to interrupts on the local cpu while the former
  is not: this_cpu_inc() is guaranteed to increment the percpu counter
  on the cpu the operation executes on without any explicit
  synchronization, whereas __this_cpu_inc() makes no such guarantee.
  Because the generic __this_cpu_inc() may be implemented as a
  non-atomic read-modify-write (rmw) sequence, an increment can be
  lost if the same CPU invokes it from different contexts (e.g.
  process and irq).

  This bug has existed from the get-go, but it probably went unnoticed
  because on x86 __this_cpu_inc() is equivalent to this_cpu_inc():
  both are translated into a single instruction.  s390, however, uses
  the generic rmw implementation and is affected by the bug.  Kudos to
  Sebastian and Heiko for diagnosing it.

  The change is very low risk and fixes a critical issue on the affected
  architectures, so I think it's a good candidate for inclusion although
  it's very late in the devel cycle.  On the other hand, this has been
  broken since v3.11, so backporting it through -stable post -rc1 won't
  be the end of the world.

  I'll ping Christoph about whether __this_cpu_*() ops can be better
  annotated so that they trigger a lockdep warning when used from
  multiple contexts"

* 'for-3.15-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu:
  percpu-refcount: fix usage of this_cpu_ops

1 changed file:

include/linux/percpu-refcount.h
@@ -110,7 +110,7 @@
 	pcpu_count = ACCESS_ONCE(ref->pcpu_count);
 
 	if (likely(REF_STATUS(pcpu_count) == PCPU_REF_PTR))
-		__this_cpu_inc(*pcpu_count);
+		this_cpu_inc(*pcpu_count);
 	else
 		atomic_inc(&ref->count);
 
@@ -139,7 +139,7 @@
 	pcpu_count = ACCESS_ONCE(ref->pcpu_count);
 
 	if (likely(REF_STATUS(pcpu_count) == PCPU_REF_PTR)) {
-		__this_cpu_inc(*pcpu_count);
+		this_cpu_inc(*pcpu_count);
 		ret = true;
 	}
 
@@ -164,7 +164,7 @@
 	pcpu_count = ACCESS_ONCE(ref->pcpu_count);
 
 	if (likely(REF_STATUS(pcpu_count) == PCPU_REF_PTR))
-		__this_cpu_dec(*pcpu_count);
+		this_cpu_dec(*pcpu_count);
 	else if (unlikely(atomic_dec_and_test(&ref->count)))
 		ref->release(ref);