16 Jun, 2016

1 commit

  • …relaxed,_acquire,_release}()

    Now that all the architectures have implemented support for these new
    atomic primitives, add the generic infrastructure to expose and use
    them.
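
    The idea behind the generic fallbacks can be sketched in userspace
    terms with C11 atomics: where an architecture supplies only the
    _relaxed form of an operation, the ordered variants are derived from
    it plus explicit fences. The function names below are illustrative
    stand-ins, not the kernel's actual macros:

    ```c
    #include <stdatomic.h>
    #include <stdint.h>

    /* Architecture provides only the relaxed (unordered) form... */
    static int64_t fetch_add_relaxed(_Atomic int64_t *v, int64_t i)
    {
            return atomic_fetch_add_explicit(v, i, memory_order_relaxed);
    }

    /* ...and the generic layer builds the ordered variants from it. */
    static int64_t fetch_add_acquire(_Atomic int64_t *v, int64_t i)
    {
            int64_t ret = fetch_add_relaxed(v, i);
            atomic_thread_fence(memory_order_acquire); /* order later accesses */
            return ret;
    }

    static int64_t fetch_add_release(_Atomic int64_t *v, int64_t i)
    {
            atomic_thread_fence(memory_order_release); /* order earlier accesses */
            return fetch_add_relaxed(v, i);
    }
    ```

    The kernel's version does the equivalent with op##_relaxed() plus
    smp_mb()-style barriers, selected per architecture.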

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: Boqun Feng <boqun.feng@gmail.com>
    Cc: Borislav Petkov <bp@suse.de>
    Cc: Davidlohr Bueso <dave@stgolabs.net>
    Cc: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: linux-arch@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

    Peter Zijlstra
     

27 Jul, 2015

1 commit


14 Aug, 2014

1 commit

  • Rewrite generic atomic support to only require cmpxchg(), generate all
    other primitives from that.

    Furthermore, reduce the endless repetition of all these primitives to
    a few CPP macros. This way we get more functionality from fewer lines.
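
    The shape of that rewrite can be sketched in userspace C11 terms: one
    CPP template expands into each arithmetic primitive as a
    compare-and-swap retry loop. The names and the use of <stdatomic.h>
    here are illustrative stand-ins for the kernel's cmpxchg():

    ```c
    #include <stdatomic.h>
    #include <stdint.h>

    /* One template generates every op: retry a compare-and-swap until
     * no other CPU has modified the value in between. */
    #define GEN_ATOMIC_OP_RETURN(op, c_op)                                  \
    static int64_t atomic64_##op##_return(int64_t i, _Atomic int64_t *v)    \
    {                                                                       \
            int64_t old, val;                                               \
            do {                                                            \
                    old = atomic_load(v);                                   \
                    val = old c_op i;                                       \
            } while (!atomic_compare_exchange_weak(v, &old, val));          \
            return val;                                                     \
    }

    GEN_ATOMIC_OP_RETURN(add, +)    /* expands to atomic64_add_return() */
    GEN_ATOMIC_OP_RETURN(sub, -)    /* expands to atomic64_sub_return() */
    GEN_ATOMIC_OP_RETURN(or,  |)    /* expands to atomic64_or_return()  */
    ```

    Each additional primitive then costs one macro invocation rather than
    a hand-written function.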

    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/r/20140508135852.940119622@infradead.org
    Cc: Arnd Bergmann
    Cc: David Howells
    Cc: Paul E. McKenney
    Cc: David S. Miller
    Cc: Linus Torvalds
    Cc: linux-arch@vger.kernel.org
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

21 Dec, 2012

1 commit

  • The atomic64 library uses a handful of static spin locks to implement
    atomic 64-bit operations on architectures without support for atomic
    64-bit instructions.

    Unfortunately, the spinlocks are initialized in a pure initcall and that
    is too late for the vfs namespace code which wants to use atomic64
    operations before the initcall is run.

    This became a problem as of commit 8823c079ba71: "vfs: Add setns support
    for the mount namespace".

    This leads to BUG messages such as:

    BUG: spinlock bad magic on CPU#0, swapper/0/0
    lock: atomic64_lock+0x240/0x400, .magic: 00000000, .owner: /-1, .owner_cpu: 0
    do_raw_spin_lock+0x158/0x198
    _raw_spin_lock_irqsave+0x4c/0x58
    atomic64_add_return+0x30/0x5c
    alloc_mnt_ns.clone.14+0x44/0xac
    create_mnt_ns+0xc/0x54
    mnt_init+0x120/0x1d4
    vfs_caches_init+0xe0/0x10c
    start_kernel+0x29c/0x300

    coming out early during boot when spinlock debugging is enabled.

    Fix this by initializing the spinlocks statically at compile time.
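
    In userspace terms the fix looks roughly like the following sketch,
    where pthread mutexes stand in for the kernel's raw_spinlock_t and
    PTHREAD_MUTEX_INITIALIZER for __RAW_SPIN_LOCK_UNLOCKED:

    ```c
    #include <pthread.h>

    #define NR_LOCKS 16

    /* Before: the locks only become usable once this init function has
     * run, which is too late for callers early in boot. */
    static pthread_mutex_t locks_runtime[NR_LOCKS];

    static void init_locks(void)    /* analogue of the pure initcall */
    {
            for (int i = 0; i < NR_LOCKS; i++)
                    pthread_mutex_init(&locks_runtime[i], NULL);
    }

    /* After: static initialization makes every lock valid from the very
     * first instruction, with no ordering dependency on boot code.
     * (The [0 ... N] range syntax is a GNU C extension the kernel uses.) */
    static pthread_mutex_t locks_static[NR_LOCKS] = {
            [0 ... NR_LOCKS - 1] = PTHREAD_MUTEX_INITIALIZER,
    };
    ```

    With the static form there is simply no window in which the locks
    exist but are uninitialized.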

    Reported-and-tested-by: Vaibhav Bedia
    Tested-by: Tony Lindgren
    Cc: Eric W. Biederman
    Signed-off-by: Stephen Boyd
    Signed-off-by: Linus Torvalds

    Stephen Boyd
     

08 Mar, 2012

1 commit


14 Sep, 2011

1 commit

  • There are still some leftovers of commit f59ca058
    [locking, lib/atomic64: Annotate atomic64_lock::lock as raw]

    [ tglx: Seems I picked the wrong version of that patch :( ]

    Signed-off-by: Yong Zhang
    Cc: Peter Zijlstra
    Cc: Shan Hai
    Cc: Stephen Rothwell
    Link: http://lkml.kernel.org/r/20110914074924.GA16096@zhy
    Signed-off-by: Thomas Gleixner

    Yong Zhang
     

13 Sep, 2011

1 commit

  • The spinlock protected atomic64 operations must be irq safe as they
    are used in hard interrupt context and cannot be preempted on -rt:

    NIP [c068b218] rt_spin_lock_slowlock+0x78/0x3a8
    LR [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8
    Call Trace:
    [eb459b90] [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8 (unreliable)
    [eb459c20] [c068bdb0] rt_spin_lock+0x40/0x98
    [eb459c40] [c03d2a14] atomic64_read+0x48/0x84
    [eb459c60] [c001aaf4] perf_event_interrupt+0xec/0x28c
    [eb459d10] [c0010138] performance_monitor_exception+0x7c/0x150
    [eb459d30] [c0014170] ret_from_except_full+0x0/0x4c

    So annotate it.

    In mainline this change documents the low level nature of
    the lock - otherwise there's no functional difference. Lockdep
    and Sparse checking will work as usual.
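
    The pattern the annotation produces can be sketched with stand-in
    types: the stubs below model the kernel's raw_spinlock_t and
    raw_spin_{lock_irqsave,unlock_irqrestore}(), which keep spinning with
    interrupts disabled even on PREEMPT_RT instead of being converted to
    sleeping locks:

    ```c
    #include <stdint.h>

    typedef struct { int locked; } raw_spinlock_t;  /* stand-in type */
    #define RAW_SPIN_LOCK_UNLOCKED_STUB { 0 }

    static raw_spinlock_t atomic64_lock = RAW_SPIN_LOCK_UNLOCKED_STUB;

    /* Stubs: the real raw_spin_lock_irqsave() disables interrupts and
     * spins; that combination is what makes it legal to take the lock
     * from hard interrupt context, -rt or not. */
    static unsigned long raw_spin_lock_irqsave_stub(raw_spinlock_t *l)
    {
            l->locked = 1;
            return 0;       /* saved interrupt flags */
    }

    static void raw_spin_unlock_irqrestore_stub(raw_spinlock_t *l,
                                                unsigned long flags)
    {
            (void)flags;
            l->locked = 0;
    }

    typedef struct { int64_t counter; } atomic64_t;

    static int64_t atomic64_read(const atomic64_t *v)
    {
            unsigned long flags = raw_spin_lock_irqsave_stub(&atomic64_lock);
            int64_t val = v->counter;  /* plain load, made atomic by the lock */
            raw_spin_unlock_irqrestore_stub(&atomic64_lock, flags);
            return val;
    }
    ```

    The functional content of the patch is essentially this type change
    plus the matching raw_spin_* calls.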

    Signed-off-by: Shan Hai
    Reviewed-by: Yong Zhang
    Signed-off-by: Thomas Gleixner
    Signed-off-by: Ingo Molnar

    Shan Hai
     

27 Jul, 2011

1 commit

    This allows us to move duplicated code in <asm/atomic.h>
    (atomic_inc_not_zero() for now) to <linux/atomic.h>.

    Signed-off-by: Arun Sharma
    Reviewed-by: Eric Dumazet
    Cc: Ingo Molnar
    Cc: David Miller
    Cc: Eric Dumazet
    Acked-by: Mike Frysinger
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arun Sharma
     

02 Mar, 2010

1 commit


30 Jul, 2009

1 commit

  • The generic atomic64_t implementation in lib/ did not export the functions
    it defined, which means that modules that use atomic64_t would not link on
    platforms (such as 32-bit powerpc). For example, trying to build a kernel
    with CONFIG_NET_RDS on such a platform would fail with:

    ERROR: "atomic64_read" [net/rds/rds.ko] undefined!
    ERROR: "atomic64_set" [net/rds/rds.ko] undefined!

    Fix this by exporting the atomic64_t functions to modules. (I export
    the entire API, even though it's not all currently used by in-tree
    modules, to avoid having to keep fixing this in dribs and drabs.)

    Signed-off-by: Roland Dreier
    Acked-by: Paul Mackerras
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Roland Dreier
     

15 Jun, 2009

1 commit

  • Many processor architectures have no 64-bit atomic instructions, but
    we need atomic64_t in order to support the perf_counter subsystem.

    This adds an implementation of 64-bit atomic operations using hashed
    spinlocks to provide atomicity. For each atomic operation, the address
    of the atomic64_t variable is hashed to an index into an array of 16
    spinlocks. That spinlock is taken (with interrupts disabled) around the
    operation, which can then be coded non-atomically within the lock.

    On UP, all the spinlock manipulation goes away and we simply disable
    interrupts around each operation. In fact gcc eliminates the whole
    atomic64_lock variable as well.
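
    A minimal userspace sketch of the scheme, with pthread mutexes
    standing in for spinlocks and an illustrative hash shift (the kernel
    variant additionally disables interrupts while the lock is held):

    ```c
    #include <pthread.h>
    #include <stdint.h>

    #define NR_LOCKS 16     /* power of two, so masking works as modulo */

    typedef struct { int64_t counter; } atomic64_t;

    static pthread_mutex_t atomic64_lock[NR_LOCKS] = {
            [0 ... NR_LOCKS - 1] = PTHREAD_MUTEX_INITIALIZER,
    };

    /* Hash the variable's address to one of the locks, so unrelated
     * atomic64_t variables rarely contend on the same lock. */
    static pthread_mutex_t *lock_addr(const atomic64_t *v)
    {
            uintptr_t addr = (uintptr_t)v;

            addr >>= 8;     /* drop low-order address bits before hashing */
            return &atomic64_lock[addr & (NR_LOCKS - 1)];
    }

    static int64_t atomic64_add_return(int64_t i, atomic64_t *v)
    {
            pthread_mutex_t *lock = lock_addr(v);
            int64_t ret;

            pthread_mutex_lock(lock);   /* kernel: raw_spin_lock_irqsave() */
            ret = (v->counter += i);    /* plain 64-bit math under the lock */
            pthread_mutex_unlock(lock);
            return ret;
    }
    ```

    Every operation on a given atomic64_t hashes to the same lock, which
    is all the scheme needs for the read-modify-write to be atomic.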

    Signed-off-by: Paul Mackerras
    Signed-off-by: Benjamin Herrenschmidt

    Paul Mackerras