09 Oct, 2020

1 commit

  • …nux-rcu into locking/core

    Pull KCSAN updates for v5.10 from Paul E. McKenney:

    - Improve kernel messages.

    - Be more permissive with bitops races under KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y.

    - Optimize debugfs stat counters.

    - Introduce the instrument_*read_write() annotations, to provide a
    finer description of certain ops using KCSAN's compound instrumentation.
    Use them for atomic RMW ops and bitops, where appropriate.
    Doing this might find new races. (Depends on the compiler having
    tsan-compound-read-before-write=1 support; see the sketch after this list.)

    - Support atomic built-ins, which will help certain architectures, such as s390.

    - Misc enhancements and smaller fixes.
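
    A minimal sketch of the idea behind the compound annotations (the wrapper
    shape below is illustrative, not the exact output of
    gen-atomic-instrumented.sh): an atomic RMW wrapper reports one read-write
    access via include/linux/instrumented.h instead of a plain write, giving
    KCSAN a more precise picture of the access.

        /* Hypothetical generated wrapper, kernel context assumed. */
        static __always_inline int atomic_fetch_add(int i, atomic_t *v)
        {
                /* One compound annotation covers the read-modify-write. */
                instrument_atomic_read_write(v, sizeof(*v));
                return arch_atomic_fetch_add(i, v);
        }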

    Signed-off-by: Ingo Molnar <mingo@kernel.org>

    Ingo Molnar
     

08 Oct, 2020

1 commit

  • The sha1sum of include/linux/atomic-arch-fallback.h isn't checked by
    check-atomics.sh. It's not clear why it's skipped, so let's check it too.

    Signed-off-by: Paul Bolle
    Signed-off-by: Peter Zijlstra (Intel)
    Reviewed-by: Mark Rutland
    Link: https://lkml.kernel.org/r/20201001202028.1048418-1-pebolle@tiscali.nl

    Paul Bolle
     

25 Aug, 2020

1 commit


25 Jun, 2020

1 commit

  • Architectures with instrumented (KASAN/KCSAN) atomic operations
    natively provide arch_atomic_ variants that are not instrumented.

    It turns out that some generic code also requires arch_atomic_ in
    order to avoid instrumentation, so provide the arch_atomic_ interface
    as a direct map into the regular atomic_ interface for
    non-instrumented architectures.
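
    A minimal sketch of the direct mapping, assuming it is done with simple
    defines in the generated headers (the exact guards and generated names may
    differ from the real output):

        /* On a non-instrumented architecture, arch_atomic_*() can simply
         * alias the regular atomic_*() interface. Illustrative only. */
        #ifndef arch_atomic_read
        #define arch_atomic_read(v)     atomic_read(v)
        #endif

        #ifndef arch_atomic_add
        #define arch_atomic_add(i, v)   atomic_add(i, v)
        #endif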

    Signed-off-by: Peter Zijlstra (Intel)
    Signed-off-by: Paul E. McKenney

    Peter Zijlstra
     

12 Jun, 2020

1 commit

  • Merge the state of the locking kcsan branch before the read/write_once()
    and the atomics modifications got merged.

    Squash the fallout of the rebase on top of the read/write once and atomic
    fallback work into the merge. The history of the original branch is
    preserved in tag locking-kcsan-2020-06-02.

    Signed-off-by: Thomas Gleixner

    Thomas Gleixner
     

11 Jun, 2020

2 commits

  • Currently instrumentation of atomic primitives is done at the architecture
    level, while composites or fallbacks are provided at the generic level.

    The result is that there are no uninstrumented variants of the
    fallbacks. Since there is now a need for such variants to isolate text poke
    from any form of instrumentation, invert this ordering.

    Doing this means moving the instrumentation into the generic code as
    well as having (for now) two variants of the fallbacks.

    Notes:

    - the various *cond_read* primitives are not proper fallbacks
    and got moved into linux/atomic.c. No arch_ variants are
    generated because the base primitives smp_cond_load*()
    are instrumented.

    - once all architectures are moved over to arch_atomic_, one of the
    fallback variants can be removed and some 2300 lines reclaimed.

    - atomic_{read,set}*() are no longer double-instrumented
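
    A rough sketch of the inverted layering (illustrative, not the exact
    generated code): the uninstrumented arch_atomic_*() primitive, or its
    uninstrumented arch_ fallback, does the actual work, and the generic
    atomic_*() wrapper adds the instrumentation on top.

        static __always_inline void atomic_add(int i, atomic_t *v)
        {
                /* Generic layer: annotate the access for the sanitizers... */
                instrument_atomic_write(v, sizeof(*v));
                /* ...then defer to the uninstrumented arch primitive. */
                arch_atomic_add(i, v);
        }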

    Reported-by: Thomas Gleixner
    Signed-off-by: Peter Zijlstra (Intel)
    Signed-off-by: Thomas Gleixner
    Acked-by: Mark Rutland
    Link: https://lkml.kernel.org/r/20200505134058.769149955@linutronix.de

    Peter Zijlstra
     
  • Use __always_inline for atomic fallback wrappers. When building for size
    (CC_OPTIMIZE_FOR_SIZE), some compilers appear to be less inclined to
    inline even relatively small static inline functions that are assumed to
    be inlinable, such as atomic ops. This can cause problems, for example in
    UACCESS regions.

    While the fallback wrappers aren't pure wrappers, they are trivial
    nonetheless, and the function they wrap should determine the final
    inlining policy.

    For x86 tinyconfig we observe:
    - vmlinux baseline: 1315988
    - vmlinux with patch: 1315928 (-60 bytes)
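
    A sketch of what the change amounts to for one fallback, assuming the
    acquire fallback template (the concrete function is illustrative): only
    the storage class changes, from static inline to __always_inline, so the
    wrapped relaxed implementation decides the final inlining.

        static __always_inline int
        atomic_add_return_acquire(int i, atomic_t *v)
        {
                int ret = atomic_add_return_relaxed(i, v);
                __atomic_acquire_fence();
                return ret;
        }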

    [ tglx: Cherry-picked from KCSAN ]

    Suggested-by: Mark Rutland
    Signed-off-by: Marco Elver
    Acked-by: Mark Rutland
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Thomas Gleixner

    Marco Elver
     

21 Mar, 2020

1 commit


07 Jan, 2020

2 commits

  • Use __always_inline for atomic fallback wrappers. When building for size
    (CC_OPTIMIZE_FOR_SIZE), some compilers appear to be less inclined to
    inline even relatively small static inline functions that are assumed to
    be inlinable, such as atomic ops. This can cause problems, for example in
    UACCESS regions.

    While the fallback wrappers aren't pure wrappers, they are trivial
    nonetheless, and the function they wrap should determine the final
    inlining policy.

    For x86 tinyconfig we observe:
    - vmlinux baseline: 1315988
    - vmlinux with patch: 1315928 (-60 bytes)

    Suggested-by: Mark Rutland
    Signed-off-by: Marco Elver
    Acked-by: Mark Rutland
    Signed-off-by: Paul E. McKenney

    Marco Elver
     
  • Prefer __always_inline for atomic wrappers. When building for size
    (CC_OPTIMIZE_FOR_SIZE), some compilers appear to be less inclined to
    inline even relatively small static inline functions that are assumed to
    be inlinable, such as atomic ops. This can cause problems, for example in
    UACCESS regions.

    By using __always_inline, we let the real implementation and not the
    wrapper determine the final inlining preference.

    For x86 tinyconfig we observe:
    - vmlinux baseline: 1316204
    - vmlinux with patch: 1315988 (-216 bytes)

    This came up when addressing UACCESS warnings with CC_OPTIMIZE_FOR_SIZE
    in the KCSAN runtime:
    http://lkml.kernel.org/r/58708908-84a0-0a81-a836-ad97e33dbb62@infradead.org
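
    A hedged sketch of the shape of an instrumented wrapper here (names
    illustrative): the wrapper itself is trivial, so with __always_inline the
    inlining decision effectively falls to the arch implementation, which is
    often a single instruction.

        static __always_inline void atomic_inc(atomic_t *v)
        {
                kasan_check_write(v, sizeof(*v));
                arch_atomic_inc(v);
        }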

    Reported-by: Randy Dunlap
    Signed-off-by: Marco Elver
    Acked-by: Mark Rutland
    Signed-off-by: Paul E. McKenney

    Marco Elver
     

16 Nov, 2019

1 commit


25 Jun, 2019

1 commit

  • POSIX says the -n option must be a positive decimal integer. Not all
    implementations of head(1) support negative numbers, meaning an offset from
    the end of the file.

    Instead, the sed expression '$d' has the same effect, removing the
    last line of the file.

    Signed-off-by: Michael Forney
    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Will Deacon
    Cc: Boqun Feng
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Link: https://lkml.kernel.org/r/20190618053306.730-1-mforney@mforney.org
    Signed-off-by: Ingo Molnar

    Michael Forney
     

19 Apr, 2019

1 commit

  • patch(1) doesn't set the x bit on files. So if someone downloads and
    applies patch-4.21.xz, their kernel won't build. Fix that by running the
    script via /bin/sh instead of relying on its executable bit.

    Signed-off-by: Andrew Morton
    Acked-by: Mark Rutland
    Cc: Boqun Feng
    Cc: Linus Torvalds
    Cc: Peter Zijlstra (Intel)
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Will Deacon
    Signed-off-by: Ingo Molnar

    Andrew Morton
     

13 Feb, 2019

1 commit

  • We currently check the atomic headers at build-time to ensure they
    haven't been modified directly, and these checks require regenerating
    the headers in full. As this takes a few seconds, even when
    parallelized, this is too slow to run for every kernel build.

    Instead, we can generate a hash of each header as we generate them,
    which we can cheaply check at build time (~0.16s for all headers).

    This patch does so, updating headers with their hashes using the new
    gen-atomics.sh script. As some users apparently build the kernel without
    coreutils, lacking sha1sum, the checks are skipped in this case.
    Presumably, most developers have a working coreutils installation.

    Signed-off-by: Mark Rutland
    Acked-by: Will Deacon
    Cc: Andrew Morton
    Cc: Boqun Feng
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: anders.roxell@linaro.org
    Cc: linux-kernel@vger.kernel.org
    Cc: naresh.kamboju@linaro.org
    Signed-off-by: Ingo Molnar

    Mark Rutland
     

11 Feb, 2019

1 commit

    Some distributions and build systems don't include 'fold' from
    coreutils by default.

    .../scripts/atomic/atomic-tbl.sh: line 183: fold: command not found

    Rework to use 'grep' instead of 'fold', relying on a dependency that is
    already used a lot in the kernel.

    [Mark: rework commit message]

    Suggested-by: Will Deacon
    Reported-by: Naresh Kamboju
    Signed-off-by: Anders Roxell
    Signed-off-by: Mark Rutland
    Acked-by: Will Deacon
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: akpm@linux-foundation.org
    Cc: boqun.feng@gmail.com
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar

    Anders Roxell
     

01 Nov, 2018

3 commits

  • Mark all these scripts executable.

    Cc: Mark Rutland
    Cc: Peter Zijlstra (Intel)
    Cc: Will Deacon
    Cc: linux-arm-kernel@lists.infradead.org
    Cc: Catalin Marinas
    Cc: linuxdrivers@attotech.com
    Cc: dvyukov@google.com
    Cc: boqun.feng@gmail.com
    Cc: arnd@arndb.de
    Cc: aryabinin@virtuozzo.com
    Cc: glider@google.com
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • Now that all the generated atomic headers are in place, it would be good
    to ensure that:

    a) the headers are up-to-date when the scripts change.

    b) developers don't directly modify the generated headers.

    To ensure both of these properties, let's add a Kbuild step to check
    that the generated headers are up-to-date.

    Signed-off-by: Mark Rutland
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: linux-arm-kernel@lists.infradead.org
    Cc: catalin.marinas@arm.com
    Cc: Will Deacon
    Cc: linuxdrivers@attotech.com
    Cc: dvyukov@google.com
    Cc: Boqun Feng
    Cc: arnd@arndb.de
    Cc: aryabinin@virtuozzo.com
    Cc: glider@google.com
    Link: http://lkml.kernel.org/r/20180904104830.2975-6-mark.rutland@arm.com
    Signed-off-by: Ingo Molnar

    Mark Rutland
     
  • To minimize repetition, to allow for future rework, and to ensure
    regularity of the various atomic APIs, we'd like to automatically
    generate (the bulk of) a number of headers related to atomics.

    This patch adds the infrastructure to do so, leaving actual conversion
    of headers to subsequent patches. This infrastructure consists of:

    * atomics.tbl - a table describing the functions in the atomics API,
    with names, prototypes, and metadata describing the variants that
    exist (e.g. fetch/return, acquire/release/relaxed). Note that the
    return type is dependent on the particular variant.

    * atomic-tbl.sh - a library of routines useful for dealing with
    atomics.tbl (e.g. querying which variants exist, or generating
    argument/parameter lists for a given function variant).

    * gen-atomic-fallback.sh - a script which generates a header of
    fallbacks, covering cases where architectures omit certain functions
    (e.g. omitting relaxed variants).

    * gen-atomic-long.sh - a script which generates wrappers providing the
    atomic_long API atop of the relevant atomic or atomic64 API,
    ensuring the APIs are consistent.

    * gen-atomic-instrumented.sh - a script which generates atomic* wrappers
    atop of arch_atomic* functions, with automatically generated KASAN
    instrumentation.

    * fallbacks/* - a set of fallback implementations for atomics, which
    should be used when no implementation of a given atomic is provided.
    These are used by gen-atomic-fallback.sh to generate fallbacks, and
    these are also used by other scripts to determine the set of optional
    atomics (as required to generate preprocessor guards correctly).

    Fallbacks may use the following variables:

    ${atomic} atomic prefix: atomic/atomic64/atomic_long, which can be
    used to derive the atomic type, and to prefix functions

    ${int} integer type: int/s64/long

    ${pfx} variant prefix, e.g. fetch_

    ${name} base function name, e.g. add

    ${sfx} variant suffix, e.g. _return

    ${order} order suffix, e.g. _relaxed

    ${atomicname} full name, e.g. atomic64_fetch_add_relaxed

    ${ret} return type of the function, e.g. void

    ${retstmt} a return statement (with a trailing space), unless the
    variant returns void

    ${params} parameter list for the function declaration, e.g.
    "int i, atomic_t *v"

    ${args} argument list for invoking the function, e.g. "i, v"

    ... for clarity, ${ret}, ${retstmt}, ${params}, and ${args} are
    open-coded for fallbacks where these do not vary, or are critical to
    understanding the logic of the fallback.
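
    For illustration, roughly what gen-atomic-fallback.sh emits when the
    release template is expanded with ${atomic}=atomic, ${pfx}=fetch_,
    ${name}=add, ${sfx} empty, ${order}=_release, ${ret}=int and
    ${retstmt}="return " (the real generated header may differ in detail):

        static inline int
        atomic_fetch_add_release(int i, atomic_t *v)
        {
                __atomic_release_fence();
                return atomic_fetch_add_relaxed(i, v);
        }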

    The MAINTAINERS entry for the atomic infrastructure is updated to cover
    the new scripts.

    There should be no functional change as a result of this patch.

    Signed-off-by: Mark Rutland
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: linux-arm-kernel@lists.infradead.org
    Cc: catalin.marinas@arm.com
    Cc: Will Deacon
    Cc: linuxdrivers@attotech.com
    Cc: dvyukov@google.com
    Cc: Boqun Feng
    Cc: arnd@arndb.de
    Cc: aryabinin@virtuozzo.com
    Cc: glider@google.com
    Link: http://lkml.kernel.org/r/20180904104830.2975-2-mark.rutland@arm.com
    Signed-off-by: Ingo Molnar

    Mark Rutland