09 Oct, 2015

1 commit

  • commit e001bbae7147b111fe1aa42beaf835635f3c016e upstream.

    A recent change in kernel/acct.c added a new warning for many
    configurations on ARM:

    kernel/acct.c: In function 'acct_pin_kill':
    arch/arm/include/asm/cmpxchg.h:122:3: warning: value computed is not used [-Wunused-value]

    The code is in fact correct: it is simply a cmpxchg() call that
    intentionally ignores the result, which almost no other caller does.
    The warning does not show up on x86 because of the way its cmpxchg()
    macro is written. This changes the ARM implementation to use a
    similar construct, a compound (statement) expression instead of a
    typecast, so the compiler does not complain about an unused result
    (see the sketch below).

    Fix the other macros in this file in a similar way, and place them
    just below their function implementations.

    Signed-off-by: Russell King
    Signed-off-by: Dan Murphy

    Russell King
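
    A standalone sketch of the two constructs (illustrative names only;
    compile with gcc -Wunused-value). The real macro in
    arch/arm/include/asm/cmpxchg.h dispatches on operand size; this only
    shows why the cast form warns and the statement-expression form, as
    long used by x86, does not:

        #include <stdio.h>

        /* Trivial stand-in for the kernel's __cmpxchg() helper. */
        static unsigned long __my_cmpxchg(unsigned long *ptr,
                                          unsigned long old,
                                          unsigned long new)
        {
                unsigned long cur = *ptr;

                if (cur == old)
                        *ptr = new;
                return cur;             /* value seen before any store */
        }

        /* Typecast form: a discarded result is a bare cast expression,
         * which GCC flags as "value computed is not used". */
        #define cmpxchg_cast(ptr, o, n) \
                ((__typeof__(*(ptr)))__my_cmpxchg((ptr), (o), (n)))

        /* Compound (statement) expression form: an ignored result no
         * longer triggers the warning. */
        #define cmpxchg_stmt(ptr, o, n) ({                                 \
                __typeof__(*(ptr)) __ret;                                  \
                __ret = (__typeof__(*(ptr)))__my_cmpxchg((ptr), (o), (n)); \
                __ret;                                                     \
        })

        int main(void)
        {
                unsigned long v = 1;

                cmpxchg_cast(&v, 1UL, 2UL);  /* warns under -Wunused-value */
                cmpxchg_stmt(&v, 2UL, 3UL);  /* no warning */
                printf("v = %lu\n", v);      /* prints 3 */
                return 0;
        }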
     

25 Feb, 2014

1 commit

  • After a good deal of benchmarking on the interaction between dmb and
    pldw, it turns out that issuing the pldw *after* the dmb instruction
    can give modest performance gains (a ~3% atomic_add_return
    improvement on a dual-core Cortex-A15).

    This patch adds prefetchw invocations to our barriered atomic
    operations, including cmpxchg, test_and_xxx and the futex operations
    (sketched below).

    Signed-off-by: Will Deacon
    Signed-off-by: Russell King

    Will Deacon
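
    A hedged sketch of the resulting pattern, modelled on the ARM
    atomic_add_return() of that era (simplified, not the verbatim kernel
    source): the write prefetch is issued after the leading barrier,
    ahead of the exclusive-access loop.

        #include <linux/atomic.h>    /* atomic_t */
        #include <linux/prefetch.h>  /* prefetchw() -> pldw on ARMv7 MP */
        #include <asm/barrier.h>     /* smp_mb() -> dmb */

        static inline int atomic_add_return_sketch(int i, atomic_t *v)
        {
                unsigned long tmp;
                int result;

                smp_mb();                /* leading dmb */
                prefetchw(&v->counter);  /* pldw issued *after* the dmb */

                __asm__ __volatile__("@ atomic_add_return\n"
        "1:     ldrex   %0, [%3]\n"      /* load-exclusive the counter */
        "       add     %0, %0, %4\n"
        "       strex   %1, %0, [%3]\n"  /* try the store-exclusive */
        "       teq     %1, #0\n"
        "       bne     1b"              /* retry if exclusivity was lost */
                : "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)
                : "r" (&v->counter), "Ir" (i)
                : "cc");

                smp_mb();                /* trailing dmb */

                return result;
        }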
     

29 Oct, 2013

2 commits

  • This patch introduces cmpxchg64_relaxed for ARM, which performs a
    64-bit cmpxchg operation without barrier semantics. cmpxchg64_local
    is updated to use the new operation (see the sketch below).

    Signed-off-by: Will Deacon
    Signed-off-by: Russell King

    Will Deacon
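
    A sketch of how the macros relate after this change, assuming the
    barrier-less __cmpxchg64() helper shown under the next entry (not
    the verbatim kernel source):

        /* 64-bit compare-and-exchange with no barrier semantics. */
        #define cmpxchg64_relaxed(ptr, o, n) ({                        \
                (__typeof__(*(ptr)))                                   \
                __cmpxchg64((unsigned long long *)(ptr),               \
                            (unsigned long long)(o),                   \
                            (unsigned long long)(n));                  \
        })

        /* cmpxchg64_local() only has to be atomic with respect to the
         * local CPU, so it needs no barriers either. */
        #define cmpxchg64_local(ptr, o, n) \
                cmpxchg64_relaxed((ptr), (o), (n))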
     
  • Our cmpxchg64 macros are wrappers around atomic64_cmpxchg. Whilst
    this is great for code re-use, there is a case for a barrier-less
    cmpxchg where it is known to be safe (for example cmpxchg64_local
    and cmpxchg-based lockrefs).

    This patch introduces a 64-bit cmpxchg implementation specifically
    for the cmpxchg64_* macros, so that it can later be used by the
    lockref code (a sketch of the barrier-less helper follows below).

    Signed-off-by: Will Deacon
    Signed-off-by: Russell King

    Will Deacon
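
    A hedged sketch of such a barrier-less 64-bit helper, modelled on
    the kernel's __cmpxchg64() (simplified, not verbatim): the
    ldrexd/strexd exclusive pair performs the compare-and-swap, and no
    dmb is issued at either end.

        static inline unsigned long long
        __cmpxchg64(unsigned long long *ptr, unsigned long long old,
                    unsigned long long new)
        {
                unsigned long long oldval;
                unsigned long res;

                __asm__ __volatile__(
        "1:     ldrexd  %1, %H1, [%3]\n"      /* load-exclusive both words */
        "       teq     %1, %4\n"             /* compare low words */
        "       teqeq   %H1, %H4\n"           /* ... then high words */
        "       bne     2f\n"                 /* mismatch: bail out */
        "       strexd  %0, %5, %H5, [%3]\n"  /* try to store the new value */
        "       teq     %0, #0\n"
        "       bne     1b\n"                 /* retry if exclusivity lost */
        "2:"
                : "=&r" (res), "=&r" (oldval), "+Qo" (*ptr)
                : "r" (ptr), "r" (old), "r" (new)
                : "cc");

                return oldval;        /* value seen before any store */
        }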
     

14 May, 2013

1 commit

  • The implementation of cmpxchg64() for the ARMv6 and ARMv7
    architectures casts parameters 2 and 3 (the old and new 64-bit
    values) to an unsigned long before calling the atomic64_cmpxchg()
    function. This clears the top 32 bits of the old and new values,
    resulting in the wrong values being compare-exchanged (illustrated
    below). Luckily, this only appears to be used for the 64-bit
    sched_clock, which we don't (yet) have on ARM.

    This bug was introduced by commit 3e0f5a15f500 ("ARM: 7404/1: cmpxchg64:
    use atomic64 and local64 routines for cmpxchg64").

    Cc:
    Acked-by: Will Deacon
    Signed-off-by: Jaccon Bastiaansen
    Signed-off-by: Russell King

    Jaccon Bastiaansen
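
    A standalone illustration of the truncation (hypothetical macro
    name; on 32-bit ARM, unsigned long is 32 bits wide):

        #include <stdint.h>
        #include <stdio.h>

        /* What the broken wrapper effectively did to each operand. */
        #define TRUNCATE_LIKE_BUG(x)    ((unsigned long)(x))

        int main(void)
        {
                uint64_t old = 0x100000001ULL;  /* upper word set */

                /* On a 32-bit build this prints 0x1: the upper 32 bits
                 * are silently dropped, so the wrong old value gets
                 * compared. Casting to unsigned long long instead
                 * preserves all 64 bits. */
                printf("old = 0x%llx, after cast = 0x%lx\n",
                       (unsigned long long)old,
                       TRUNCATE_LIKE_BUG(old));
                return 0;
        }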
     

29 Apr, 2012

1 commit

  • The cmpxchg64 routines for ARMv6+ CPUs replicate inline assembly that
    already exists for atomic64 operations. Furthermore, the cmpxchg64 code
    uses the "memory" constraint in the clobber list rather than identifying
    the region of memory that is actually modified.

    This patch replaces the ARMv6+ cmpxchg64 code with macros that
    expand to the atomic64_ and local64_ variants, casting the pointer
    parameter to the appropriate container type (see the sketch below).

    Cc: Nicolas Pitre
    Signed-off-by: Will Deacon
    Signed-off-by: Russell King

    Will Deacon
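
    A sketch of the resulting macro shape, modelled on the commit (the
    cmpxchg64_local() variant expands to local64_cmpxchg() in the same
    way). Note that the (unsigned long) operand casts seen here are
    exactly what the 14 May 2013 entry above later fixes:

        #include <linux/atomic.h>
        #include <linux/kernel.h>  /* container_of() */

        #define cmpxchg64(ptr, o, n)                                   \
                ((__typeof__(*(ptr)))atomic64_cmpxchg(                 \
                        container_of((ptr), atomic64_t, counter),      \
                        (unsigned long)(o), (unsigned long)(n)))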
     

29 Mar, 2012

1 commit