03 Jun, 2019

1 commit

  • As a step towards making the atomic64 API use consistent types treewide,
    let's have the generic atomic64 implementation use s64 as the underlying
    type for atomic64_t, rather than long long, matching the generated
    headers.

    Otherwise, there should be no functional change as a result of this
    patch.

    Signed-off-by: Mark Rutland
    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Arnd Bergmann
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Will Deacon
    Cc: aou@eecs.berkeley.edu
    Cc: bp@alien8.de
    Cc: catalin.marinas@arm.com
    Cc: davem@davemloft.net
    Cc: fenghua.yu@intel.com
    Cc: heiko.carstens@de.ibm.com
    Cc: herbert@gondor.apana.org.au
    Cc: ink@jurassic.park.msu.ru
    Cc: jhogan@kernel.org
    Cc: linux@armlinux.org.uk
    Cc: mattst88@gmail.com
    Cc: mpe@ellerman.id.au
    Cc: palmer@sifive.com
    Cc: paul.burton@mips.com
    Cc: paulus@samba.org
    Cc: ralf@linux-mips.org
    Cc: rth@twiddle.net
    Cc: tony.luck@intel.com
    Cc: vgupta@synopsys.com
    Link: https://lkml.kernel.org/r/20190522132250.26499-4-mark.rutland@arm.com
    Signed-off-by: Ingo Molnar

    Mark Rutland
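
    A hedged sketch of what the change amounts to (illustrative, not the
    verbatim diff): the counter field and the generic helpers switch from
    long long to s64, matching the generated atomic headers.

        typedef struct {
                s64 counter;
        } atomic64_t;

        /* lib/atomic64.c helpers now traffic in s64 rather than long long: */
        s64 atomic64_read(const atomic64_t *v);
        void atomic64_set(atomic64_t *v, s64 i);
        s64 atomic64_add_return(s64 a, atomic64_t *v);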

31 May, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 3029 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
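
    Concretely, the conversion deletes the quoted license paragraph from the
    head of each affected file and substitutes a single machine-readable tag:

        // SPDX-License-Identifier: GPL-2.0-or-later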

21 Jun, 2018

2 commits

  • As a step towards unifying the atomic/atomic64/atomic_long APIs, this
    patch converts the generic implementation of atomic64_add_unless() into
    a generic implementation of atomic64_fetch_add_unless().

    A wrapper in <linux/atomic.h> will build atomic64_add_unless() atop of
    this, provided it is given a preprocessor definition.

    No functional change is intended as a result of this patch.

    Signed-off-by: Mark Rutland
    Reviewed-by: Will Deacon
    Acked-by: Peter Zijlstra (Intel)
    Cc: Arnd Bergmann
    Cc: Boqun Feng
    Cc: Linus Torvalds
    Cc: Thomas Gleixner
    Link: https://lore.kernel.org/lkml/20180621121321.4761-9-mark.rutland@arm.com
    Signed-off-by: Ingo Molnar

    Mark Rutland
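
    A hedged sketch of the shape of the change, reusing the file's existing
    lock_addr() hashing helper (types shown as long long, the spelling this
    code used before the later s64 conversion):

        long long atomic64_fetch_add_unless(atomic64_t *v, long long a,
                                            long long u)
        {
                unsigned long flags;
                raw_spinlock_t *lock = lock_addr(v);
                long long val;

                raw_spin_lock_irqsave(lock, flags);
                val = v->counter;
                if (val != u)
                        v->counter += a;
                raw_spin_unlock_irqrestore(lock, flags);
                return val;     /* the old value, not a boolean */
        }
        #define atomic64_fetch_add_unless atomic64_fetch_add_unless

        /* Given that definition, the <linux/atomic.h> wrapper can rebuild
         * the boolean-style operation generically: */
        static inline bool atomic64_add_unless(atomic64_t *v, long long a,
                                               long long u)
        {
                return atomic64_fetch_add_unless(v, a, u) != u;
        }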
  • Some of the atomics return a status value, which is a boolean value
    describing whether the operation was performed. To make it clear that
    this is a boolean value, let's update the common fallbacks to return
    bool, fixing up the return values and comments likewise.

    At the same time, let's simplify the description of the operations in
    their respective comments.

    The instrumented atomics and generic atomic64 implementation are updated
    accordingly.

    Note that atomic64_dec_if_positive() doesn't follow the usual test op
    pattern, and returns the would-be decremented value. This is not
    changed.

    Signed-off-by: Mark Rutland
    Reviewed-by: Will Deacon
    Acked-by: Peter Zijlstra (Intel)
    Cc: Boqun Feng
    Cc: Linus Torvalds
    Cc: Michael Ellerman
    Cc: Thomas Gleixner
    Link: https://lore.kernel.org/lkml/20180621121321.4761-5-mark.rutland@arm.com
    Signed-off-by: Ingo Molnar

    Mark Rutland
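
    A hedged sketch of the resulting convention, using two of the common
    fallbacks:

        /* true iff the result of the subtraction is zero */
        static inline bool atomic64_sub_and_test(long long i, atomic64_t *v)
        {
                return atomic64_sub_return(i, v) == 0;
        }

        /* true iff the result of the addition is negative */
        static inline bool atomic64_add_negative(long long i, atomic64_t *v)
        {
                return atomic64_add_return(i, v) < 0;
        }

        /* atomic64_dec_if_positive() remains the exception: it returns the
         * would-be decremented value as long long, not a bool. */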

16 Jun, 2016

1 commit

  • …relaxed,_acquire,_release}()

    Now that all the architectures have implemented support for these new
    atomic primitives, add the generic infrastructure to expose and use
    them.

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: Boqun Feng <boqun.feng@gmail.com>
    Cc: Borislav Petkov <bp@suse.de>
    Cc: Davidlohr Bueso <dave@stgolabs.net>
    Cc: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: linux-arch@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

    Peter Zijlstra
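
    A hedged sketch of how that generic infrastructure derives the ordering
    variants: when an architecture supplies only the _relaxed form of an
    operation, <linux/atomic.h> wraps it with barriers to produce the
    _acquire (and, analogously, _release and fully-ordered) forms:

        #define __atomic_op_acquire(op, args...)                        \
        ({                                                              \
                typeof(op##_relaxed(args)) __ret = op##_relaxed(args);  \
                smp_mb__after_atomic();                                 \
                __ret;                                                  \
        })

        #ifndef atomic64_fetch_add_acquire
        #define atomic64_fetch_add_acquire(...)                         \
                __atomic_op_acquire(atomic64_fetch_add, __VA_ARGS__)
        #endif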

14 Aug, 2014

1 commit

  • Rewrite generic atomic support to only require cmpxchg(), generate all
    other primitives from that.

    Furthermore, reduce the endless repetition across all these primitives
    to a few CPP macros. This way we get more functionality from fewer
    lines.

    Signed-off-by: Peter Zijlstra
    Link: http://lkml.kernel.org/r/20140508135852.940119622@infradead.org
    Cc: Arnd Bergmann
    Cc: David Howells
    Cc: Paul E. McKenney
    Cc: David S. Miller
    Cc: Linus Torvalds
    Cc: linux-arch@vger.kernel.org
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
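
    A hedged sketch of the CPP-template approach, close to the resulting
    include/asm-generic/atomic.h: each operation is generated from a
    cmpxchg() retry loop by a single macro expansion:

        #define ATOMIC_OP(op, c_op)                                     \
        static inline void atomic_##op(int i, atomic_t *v)              \
        {                                                               \
                int c, old;                                             \
                                                                        \
                c = v->counter;                                         \
                while ((old = cmpxchg(&v->counter, c, c c_op i)) != c)  \
                        c = old;                                        \
        }

        ATOMIC_OP(add, +)
        ATOMIC_OP(sub, -)
        #undef ATOMIC_OP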

21 Dec, 2012

1 commit

  • The atomic64 library uses a handful of static spin locks to implement
    atomic 64-bit operations on architectures without support for atomic
    64-bit instructions.

    Unfortunately, the spinlocks are initialized in a pure initcall and that
    is too late for the vfs namespace code which wants to use atomic64
    operations before the initcall is run.

    This became a problem as of commit 8823c079ba71: "vfs: Add setns support
    for the mount namespace".

    This leads to BUG messages such as:

    BUG: spinlock bad magic on CPU#0, swapper/0/0
    lock: atomic64_lock+0x240/0x400, .magic: 00000000, .owner: /-1, .owner_cpu: 0
    do_raw_spin_lock+0x158/0x198
    _raw_spin_lock_irqsave+0x4c/0x58
    atomic64_add_return+0x30/0x5c
    alloc_mnt_ns.clone.14+0x44/0xac
    create_mnt_ns+0xc/0x54
    mnt_init+0x120/0x1d4
    vfs_caches_init+0xe0/0x10c
    start_kernel+0x29c/0x300

    coming out early on during boot when spinlock debugging is enabled.

    Fix this by initializing the spinlocks statically at compile time.

    Reported-and-tested-by: Vaibhav Bedia
    Tested-by: Tony Lindgren
    Cc: Eric W. Biederman
    Signed-off-by: Stephen Boyd
    Signed-off-by: Linus Torvalds

    Stephen Boyd
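
    The fix itself is compact; a sketch close to the committed code, which
    uses a designated-range initializer in place of the old initcall loop:

        static union {
                raw_spinlock_t lock;
                char pad[L1_CACHE_BYTES];
        } atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp = {
                [0 ... (NR_LOCKS - 1)] = {
                        .lock = __RAW_SPIN_LOCK_UNLOCKED(atomic64_lock.lock),
                },
        };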

14 Sep, 2011

1 commit

  • There are still some leftovers of commit f59ca058
    [locking, lib/atomic64: Annotate atomic64_lock::lock as raw]

    [ tglx: Seems I picked the wrong version of that patch :( ]

    Signed-off-by: Yong Zhang
    Cc: Peter Zijlstra
    Cc: Shan Hai
    Cc: Stephen Rothwell
    Link: http://lkml.kernel.org/r/20110914074924.GA16096@zhy
    Signed-off-by: Thomas Gleixner

    Yong Zhang

13 Sep, 2011

1 commit

  • The spinlock protected atomic64 operations must be irq safe as they
    are used in hard interrupt context and cannot be preempted on -rt:

    NIP [c068b218] rt_spin_lock_slowlock+0x78/0x3a8
    LR [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8
    Call Trace:
    [eb459b90] [c068b1e0] rt_spin_lock_slowlock+0x40/0x3a8 (unreliable)
    [eb459c20] [c068bdb0] rt_spin_lock+0x40/0x98
    [eb459c40] [c03d2a14] atomic64_read+0x48/0x84
    [eb459c60] [c001aaf4] perf_event_interrupt+0xec/0x28c
    [eb459d10] [c0010138] performance_monitor_exception+0x7c/0x150
    [eb459d30] [c0014170] ret_from_except_full+0x0/0x4c

    So annotate it.

    In mainline this change documents the low level nature of
    the lock - otherwise there's no functional difference. Lockdep
    and Sparse checking will work as usual.

    Signed-off-by: Shan Hai
    Reviewed-by: Yong Zhang
    Signed-off-by: Thomas Gleixner
    Signed-off-by: Ingo Molnar

    Shan Hai
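
    A hedged before/after sketch of the annotation; on PREEMPT_RT a plain
    spinlock_t becomes a sleeping lock, while the raw_ variants keep true
    spin-with-interrupts-disabled semantics:

        /* before: may sleep on -rt, unusable from hard irq context */
        spinlock_t *lock = lock_addr(v);
        spin_lock_irqsave(lock, flags);

        /* after: always spins with interrupts disabled */
        raw_spinlock_t *lock = lock_addr(v);
        raw_spin_lock_irqsave(lock, flags);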

27 Jul, 2011

1 commit

    This allows us to move duplicated code in <asm/atomic.h>
    (atomic_inc_not_zero() for now) to <linux/atomic.h>

    Signed-off-by: Arun Sharma
    Reviewed-by: Eric Dumazet
    Cc: Ingo Molnar
    Cc: David Miller
    Cc: Eric Dumazet
    Acked-by: Mike Frysinger
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arun Sharma
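
    A hedged sketch of the kind of duplication being hoisted: once every
    user goes through <linux/atomic.h>, one generic fallback serves all
    architectures that do not override it:

        #ifndef atomic_inc_not_zero
        #define atomic_inc_not_zero(v)  atomic_add_unless((v), 1, 0)
        #endif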

30 Jul, 2009

1 commit

  • The generic atomic64_t implementation in lib/ did not export the functions
    it defined, which means that modules that use atomic64_t would not link on
    platforms (such as 32-bit powerpc). For example, trying to build a kernel
    with CONFIG_NET_RDS on such a platform would fail with:

    ERROR: "atomic64_read" [net/rds/rds.ko] undefined!
    ERROR: "atomic64_set" [net/rds/rds.ko] undefined!

    Fix this by exporting the atomic64_t functions to modules. (I export the
    entire API even if it's not all currently used by in-tree modules to avoid
    having to continue fixing this in dribs and drabs)

    Signed-off-by: Roland Dreier
    Acked-by: Paul Mackerras
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Roland Dreier
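
    The fix pairs each definition in lib/atomic64.c with an export, e.g.:

        EXPORT_SYMBOL(atomic64_read);
        EXPORT_SYMBOL(atomic64_set);
        EXPORT_SYMBOL(atomic64_add_return);
        /* ...and so on for the rest of the atomic64_* API */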

15 Jun, 2009

1 commit

  • Many processor architectures have no 64-bit atomic instructions, but
    we need atomic64_t in order to support the perf_counter subsystem.

    This adds an implementation of 64-bit atomic operations using hashed
    spinlocks to provide atomicity. For each atomic operation, the address
    of the atomic64_t variable is hashed to an index into an array of 16
    spinlocks. That spinlock is taken (with interrupts disabled) around the
    operation, which can then be coded non-atomically within the lock.

    On UP, all the spinlock manipulation goes away and we simply disable
    interrupts around each operation. In fact gcc eliminates the whole
    atomic64_lock variable as well.

    Signed-off-by: Paul Mackerras
    Signed-off-by: Benjamin Herrenschmidt

    Paul Mackerras
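
    A hedged sketch of the scheme described above, close to the original
    lib/atomic64.c (the locks only later became raw_spinlock_t):

        #define NR_LOCKS        16

        static union {
                spinlock_t lock;
                char pad[L1_CACHE_BYTES];
        } atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp;

        /* Hash the variable's address to one of the 16 locks. */
        static inline spinlock_t *lock_addr(const atomic64_t *v)
        {
                unsigned long addr = (unsigned long) v;

                addr >>= L1_CACHE_SHIFT;
                addr ^= (addr >> 8) ^ (addr >> 16);
                return &atomic64_lock[addr & (NR_LOCKS - 1)].lock;
        }

        /* Each operation is then coded non-atomically under its lock. */
        long long atomic64_add_return(long long a, atomic64_t *v)
        {
                unsigned long flags;
                spinlock_t *lock = lock_addr(v);
                long long val;

                spin_lock_irqsave(lock, flags);
                val = v->counter += a;
                spin_unlock_irqrestore(lock, flags);
                return val;
        }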