31 May, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 3029 file(s).
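
    In a C source file, the replacement takes the form of a single SPDX
    comment at the top of the file; for GPL-2.0-or-later it looks like this
    (illustrative, not quoted from a specific file in the series):

    // SPDX-License-Identifier: GPL-2.0-or-later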

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

15 Jul, 2017

1 commit

  • atomic64_inc_not_zero() returns a "truth value" which in C is
    traditionally an int. That means callers are likely to expect the
    result will fit in an int.

    If an implementation returns a "true" value which does not fit in an
    int, then there's a possibility that callers will truncate it when they
    store it in an int.

    In fact this happened in practice, see commit 966d2b04e070
    ("percpu-refcount: fix reference leak during percpu-atomic transition").

    So add a test that the result fits in an int, even when the input
    doesn't. This catches the case where an implementation just passes the
    non-zero input value out as the result.
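
    A minimal userspace sketch of the hazard (the function name and values
    are hypothetical, not kernel code): a 64-bit "true" result whose low 32
    bits happen to be zero turns into 0 once the caller stores it in an int.

    #include <stdio.h>

    /* Hypothetical buggy implementation: passes the incremented 64-bit
     * value out as the "truth value". */
    static long long buggy_inc_not_zero(long long *v)
    {
            if (*v == 0)
                    return 0;
            *v += 1;
            return *v;                      /* may not fit in an int */
    }

    int main(void)
    {
            long long v = 0xffffffffLL;     /* non-zero, so it gets incremented */
            int ret = buggy_inc_not_zero(&v);

            /* On typical systems 0x100000000 truncates to 0, so the caller
             * wrongly concludes the counter was zero. */
            printf("ret = %d\n", ret);
            return 0;
    }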

    Link: http://lkml.kernel.org/r/1499775133-1231-1-git-send-email-mpe@ellerman.id.au
    Signed-off-by: Michael Ellerman
    Cc: Douglas Miller
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michael Ellerman
     

25 Feb, 2017

1 commit

  • Allow the atomic64 test code to be compiled either as a loadable
    module or built into the kernel.
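
    A rough sketch of the pattern this enables (function names are
    assumptions for illustration, not the verbatim patch): the test gets
    regular module init/exit hooks so the same code works both built-in
    and as a loadable module.

    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/printk.h>

    /* Names below are illustrative, not necessarily those in the patch. */
    static int __init test_atomics_init(void)
    {
            /* ... run the atomic64 self-tests here ... */
            pr_info("atomic64 test passed\n");
            return 0;
    }

    static void __exit test_atomics_exit(void)
    {
    }

    module_init(test_atomics_init);
    module_exit(test_atomics_exit);
    MODULE_LICENSE("GPL");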

    Link: http://lkml.kernel.org/r/1483470276-10517-3-git-send-email-geert@linux-m68k.org
    Signed-off-by: Geert Uytterhoeven
    Reviewed-by: Andy Shevchenko
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Geert Uytterhoeven
     

08 Oct, 2016

1 commit

  • This came to light when implementing native 64-bit atomics for ARCv2.

    The atomic64 self-test code uses CONFIG_ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
    to check whether atomic64_dec_if_positive() is available. It seems it
    was needed when not every arch defined it. However, as of the current
    code the Kconfig option seems needless:

    - for CONFIG_GENERIC_ATOMIC64 it is auto-enabled in lib/Kconfig and a
      generic definition of the API is present in lib/atomic64.c
    - arches with native 64-bit atomics select it in arch/*/Kconfig and
      define the API in their headers

    So I see no point in keeping the Kconfig option.

    Compile tested for:
    - blackfin (CONFIG_GENERIC_ATOMIC64)
    - x86 (!CONFIG_GENERIC_ATOMIC64)
    - ia64

    Link: http://lkml.kernel.org/r/1473703083-8625-3-git-send-email-vgupta@synopsys.com
    Signed-off-by: Vineet Gupta
    Cc: Richard Henderson
    Cc: Ivan Kokshaysky
    Cc: Matt Turner
    Cc: Russell King
    Cc: Catalin Marinas
    Cc: Will Deacon
    Cc: Ralf Baechle
    Cc: "James E.J. Bottomley"
    Cc: Helge Deller
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: Michael Ellerman
    Cc: Martin Schwidefsky
    Cc: Heiko Carstens
    Cc: "David S. Miller"
    Cc: Chris Metcalf
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Vineet Gupta
    Cc: Zhaoxiu Zeng
    Cc: Linus Walleij
    Cc: Alexander Potapenko
    Cc: Andrey Ryabinin
    Cc: Herbert Xu
    Cc: Ming Lin
    Cc: Arnd Bergmann
    Cc: Geert Uytterhoeven
    Cc: Peter Zijlstra
    Cc: Borislav Petkov
    Cc: Andi Kleen
    Cc: Boqun Feng
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Vineet Gupta
     

16 Jun, 2016

1 commit

  • …relaxed,_acquire,_release}()

    Now that all the architectures have implemented support for these new
    atomic primitives, add the generic infrastructure to expose and use
    them.

    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: Boqun Feng <boqun.feng@gmail.com>
    Cc: Borislav Petkov <bp@suse.de>
    Cc: Davidlohr Bueso <dave@stgolabs.net>
    Cc: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: linux-arch@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

    Peter Zijlstra
     

30 Jan, 2016

1 commit

  • Move them to a separate header and have the following
    dependency:

    x86/cpufeatures.h
    Signed-off-by: Borislav Petkov
    Cc: Andy Lutomirski
    Cc: Borislav Petkov
    Cc: Brian Gerst
    Cc: Denys Vlasenko
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/1453842730-28463-5-git-send-email-bp@alien8.de
    Signed-off-by: Ingo Molnar

    Borislav Petkov
     

12 Jan, 2016

1 commit

  • Pull perf updates from Ingo Molnar:
    "Kernel side changes:

    - Intel Knights Landing support. (Harish Chegondi)

    - Intel Broadwell-EP uncore PMU support. (Kan Liang)

    - Core code improvements. (Peter Zijlstra.)

    - Event filter, LBR and PEBS fixes. (Stephane Eranian)

    - Enable cycles:pp on Intel Atom. (Stephane Eranian)

    - Add cycles:ppp support for Skylake. (Andi Kleen)

    - Various x86 NMI overhead optimizations. (Andi Kleen)

    - Intel PT enhancements. (Takao Indoh)

    - AMD cache events fix. (Vince Weaver)

    Tons of tooling changes:

    - Show random perf tool tips in the 'perf report' bottom line
    (Namhyung Kim)

    - perf report now defaults to --group if the perf.data file has
    grouped events, try it with:

    # perf record -e '{cycles,instructions}' -a sleep 1
    [ perf record: Woken up 1 times to write data ]
    [ perf record: Captured and wrote 1.093 MB perf.data (1247 samples) ]
    # perf report
    # Samples: 1K of event 'anon group { cycles, instructions }'
    # Event count (approx.): 1955219195
    #
    # Overhead Command Shared Object Symbol

    2.86% 0.22% swapper [kernel.kallsyms] [k] intel_idle
    1.05% 0.33% firefox libxul.so [.] js::SetObjectElement
    1.05% 0.00% kworker/0:3 [kernel.kallsyms] [k] gen6_ring_get_seqno
    0.88% 0.17% chrome chrome [.] 0x0000000000ee27ab
    0.65% 0.86% firefox libxul.so [.] js::ValueToId
    0.64% 0.23% JS Helper libxul.so [.] js::SplayTree::splay
    0.62% 1.27% firefox libxul.so [.] js::GetIterator
    0.61% 1.74% firefox libxul.so [.] js::NativeSetProperty
    0.61% 0.31% firefox libxul.so [.] js::SetPropertyByDefining

    - Introduce the 'perf stat record/report' workflow:

    Generate perf.data files from 'perf stat', to tap into the
    scripting capabilities perf has instead of defining a 'perf stat'
    specific scripting support to calculate event ratios, etc.

    Simple example:

    $ perf stat record -e cycles usleep 1

    Performance counter stats for 'usleep 1':

    1,134,996 cycles

    0.000670644 seconds time elapsed

    $ perf stat report

    Performance counter stats for '/home/acme/bin/perf stat record -e cycles usleep 1':

    1,134,996 cycles

    0.000670644 seconds time elapsed

    $

    It generates PERF_RECORD_ userspace records to store the details:

    $ perf report -D | grep PERF_RECORD
    0xf0 [0x28]: PERF_RECORD_THREAD_MAP nr: 1 thread: 27637
    0x118 [0x12]: PERF_RECORD_CPU_MAP nr: 1 cpu: 65535
    0x12a [0x40]: PERF_RECORD_STAT_CONFIG
    0x16a [0x30]: PERF_RECORD_STAT
    -1 -1 0x19a [0x40]: PERF_RECORD_MMAP -1/0: [0xffffffff81000000(0x1f000000) @ 0xffffffff81000000]: x [kernel.kallsyms]_text
    0x1da [0x18]: PERF_RECORD_STAT_ROUND
    [acme@ssdandy linux]$

    An effort was made so that perf.data files generated like this do not
    produce cryptic messages when processed by older tools.

    The 'perf script' bits need rebasing, will go up later.

    - Make command line options always available, even when they depend
    on some feature being enabled, warning the user about use of such
    options (Wang Nan)

    - Support hw breakpoint events (mem:0xAddress) in the default output
    mode in 'perf script' (Wang Nan)

    - Fixes and improvements for supporting annotating ARM binaries,
    support ARM call and jump instructions, more work needed to have
    arch specific stuff separated into tools/perf/arch/*/annotate/
    (Russell King)

    - Add initial 'perf config' command, for now just with a --list
    option to list the contents of the configuration file in use, and a
    basic man page describing its format; commands for doing edits and
    detailed documentation are being reviewed and proof-read. (Taeung
    Song)

    - Allow BPF scriptlets to specify arguments to be fetched using DWARF
    info, using a prologue generated at compile/build time (He Kuang,
    Wang Nan)

    - Allow attaching BPF scriptlets to module symbols (Wang Nan)

    - Allow attaching BPF scriptlets to userspace code using uprobe (Wang
    Nan)

    - BPF programs now can specify 'perf probe' tunables via its section
    name, separating key=val values using semicolons (Wang Nan)

    Testing some of these new BPF features:

    Use case: get callchains when receiving SSL packets, filter them in
    the kernel, at an arbitrary place.

    # cat ssl.bpf.c
    #define SEC(NAME) __attribute__((section(NAME), used))

    struct pt_regs;

    SEC("func=__inet_lookup_established hnum")
    int func(struct pt_regs *ctx, int err, unsigned short port)
    {
            return err == 0 && port == 443;
    }

    char _license[] SEC("license") = "GPL";
    int _version SEC("version") = LINUX_VERSION_CODE;
    #
    # perf record -a -g -e ssl.bpf.c
    ^C[ perf record: Woken up 1 times to write data ]
    [ perf record: Captured and wrote 0.787 MB perf.data (3 samples) ]
    # perf script | head -30
    swapper 0 [000] 58783.268118: perf_bpf_probe:func: (ffffffff816a0f60) hnum=0x1bb
    8a0f61 __inet_lookup_established (/lib/modules/4.3.0+/build/vmlinux)
    896def ip_rcv_finish (/lib/modules/4.3.0+/build/vmlinux)
    8976c2 ip_rcv (/lib/modules/4.3.0+/build/vmlinux)
    855eba __netif_receive_skb_core (/lib/modules/4.3.0+/build/vmlinux)
    8565d8 __netif_receive_skb (/lib/modules/4.3.0+/build/vmlinux)
    8572a8 process_backlog (/lib/modules/4.3.0+/build/vmlinux)
    856b11 net_rx_action (/lib/modules/4.3.0+/build/vmlinux)
    2a284b __do_softirq (/lib/modules/4.3.0+/build/vmlinux)
    2a2ba3 irq_exit (/lib/modules/4.3.0+/build/vmlinux)
    96b7a4 do_IRQ (/lib/modules/4.3.0+/build/vmlinux)
    969807 ret_from_intr (/lib/modules/4.3.0+/build/vmlinux)
    2dede5 cpu_startup_entry (/lib/modules/4.3.0+/build/vmlinux)
    95d5bc rest_init (/lib/modules/4.3.0+/build/vmlinux)
    1163ffa start_kernel ([kernel.vmlinux].init.text)
    11634d7 x86_64_start_reservations ([kernel.vmlinux].init.text)
    1163623 x86_64_start_kernel ([kernel.vmlinux].init.text)

    qemu-system-x86 9178 [003] 58785.792417: perf_bpf_probe:func: (ffffffff816a0f60) hnum=0x1bb
    8a0f61 __inet_lookup_established (/lib/modules/4.3.0+/build/vmlinux)
    896def ip_rcv_finish (/lib/modules/4.3.0+/build/vmlinux)
    8976c2 ip_rcv (/lib/modules/4.3.0+/build/vmlinux)
    855eba __netif_receive_skb_core (/lib/modules/4.3.0+/build/vmlinux)
    8565d8 __netif_receive_skb (/lib/modules/4.3.0+/build/vmlinux)
    856660 netif_receive_skb_internal (/lib/modules/4.3.0+/build/vmlinux)
    8566ec netif_receive_skb_sk (/lib/modules/4.3.0+/build/vmlinux)
    430a br_handle_frame_finish ([bridge])
    48bc br_handle_frame ([bridge])
    855f44 __netif_receive_skb_core (/lib/modules/4.3.0+/build/vmlinux)
    8565d8 __netif_receive_skb (/lib/modules/4.3.0+/build/vmlinux)
    #

    - Use the various 'perf probe' options to list functions, see what
    variables can be collected at any given point, experiment first
    collecting without a filter, then filter, use it together with
    'perf trace', 'perf top', with or without callchains, and if it
    explodes, please tell us!

    - Introduce a new callchain mode: "folded", that will list per-line
    representations of all callchains for a given histogram entry,
    facilitating 'perf report' output processing by other tools, such
    as Brendan Gregg's flamegraph tools (Namhyung Kim)

    E.g:

    # perf report | grep -v ^# | head
    18.37%  0.00%  swapper  [kernel.kallsyms]  [k] cpu_startup_entry
            |
            ---cpu_startup_entry
               |
               |--12.07%--start_secondary
               |
                --6.30%--rest_init
                          start_kernel
                          x86_64_start_reservations
                          x86_64_start_kernel
    #

    Becomes, in "folded" mode:

    # perf report -g folded | grep -v ^# | head -5
    18.37% 0.00% swapper [kernel.kallsyms] [k] cpu_startup_entry
    12.07% cpu_startup_entry;start_secondary
    6.30% cpu_startup_entry;rest_init;start_kernel;x86_64_start_reservations;x86_64_start_kernel
    16.90% 0.00% swapper [kernel.kallsyms] [k] call_cpuidle
    11.23% call_cpuidle;cpu_startup_entry;start_secondary
    5.67% call_cpuidle;cpu_startup_entry;rest_init;start_kernel;x86_64_start_reservations;x86_64_start_kernel
    16.90% 0.00% swapper [kernel.kallsyms] [k] cpuidle_enter
    11.23% cpuidle_enter;call_cpuidle;cpu_startup_entry;start_secondary
    5.67% cpuidle_enter;call_cpuidle;cpu_startup_entry;rest_init;start_kernel;x86_64_start_reservations;x86_64_start_kernel
    15.12% 0.00% swapper [kernel.kallsyms] [k] cpuidle_enter_state
    #

    The user can also select one of "count", "period" or "percent" as
    the first column.

    ... and lots of infrastructure enhancements, plus fixes and other
    changes, features I failed to list - see the shortlog and the git log
    for details"

    * 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (271 commits)
    perf evlist: Add --trace-fields option to show trace fields
    perf record: Store data mmaps for dwarf unwind
    perf libdw: Check for mmaps also in MAP__VARIABLE tree
    perf unwind: Check for mmaps also in MAP__VARIABLE tree
    perf unwind: Use find_map function in access_dso_mem
    perf evlist: Remove perf_evlist__(enable|disable)_event functions
    perf evlist: Make perf_evlist__open() open evsels with their cpus and threads (like perf record does)
    perf report: Show random usage tip on the help line
    perf hists: Export a couple of hist functions
    perf diff: Use perf_hpp__register_sort_field interface
    perf tools: Add overhead/overhead_children keys defaults via string
    perf tools: Remove list entry from struct sort_entry
    perf tools: Include all tools/lib directory for tags/cscope/TAGS targets
    perf script: Align event name properly
    perf tools: Add missing headers in perf's MANIFEST
    perf tools: Do not show trace command if it's not compiled in
    perf report: Change default to use event group view
    perf top: Decay periods in callchains
    tools lib: Move bitmap.[ch] from tools/perf/ to tools/{lib,include}/
    tools lib: Sync tools/lib/find_bit.c with the kernel
    ...

    Linus Torvalds
     

06 Dec, 2015

1 commit

  • asm/atomic.h doesn't really need asm/processor.h anymore. Everything
    it uses has moved to other header files. So remove that include.

    processor.h is a nasty header that includes lots of other headers,
    which makes it prone to include loops. Removing the include here
    makes asm/atomic.h a "leaf" header that can be safely included in
    most other headers.

    The only fallout is in the lib/atomic64 test code, which relied on
    this implicit include. Give it an explicit include. (The include is
    inside an #ifdef because its user is also inside an #ifdef.)
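
    A hedged sketch of the shape of that fix (the header name is an
    assumption here, not quoted from the patch): the test code includes
    what it needs explicitly, guarded by the same #ifdef as its user.

    /* In the atomic64 test code; header name is an assumption. */
    #ifdef CONFIG_X86
    #include <asm/cpufeature.h>     /* no longer pulled in via asm/atomic.h */
    #endif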

    Signed-off-by: Andi Kleen
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Arnaldo Carvalho de Melo
    Cc: Jiri Olsa
    Cc: Linus Torvalds
    Cc: Mike Galbraith
    Cc: Peter Zijlstra
    Cc: Stephane Eranian
    Cc: Thomas Gleixner
    Cc: Vince Weaver
    Cc: rostedt@goodmis.org
    Link: http://lkml.kernel.org/r/1449018060-1742-1-git-send-email-andi@firstfloor.org
    Signed-off-by: Ingo Molnar

    Andi Kleen
     

23 Nov, 2015

1 commit

  • Some atomic operations now have _relaxed/acquire/release variants. This
    patch adds some trivial tests for two purposes:

    1. test the behavior of these new operations in a single-CPU
    environment.

    2. get their code generated before we actually use them anywhere,
    so that we can examine their assembly code (see the sketch below).
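
    A userspace illustration of the kind of single-CPU check this adds,
    written with C11 atomics rather than the kernel API, so it is only an
    analogy and not the patch itself: every ordering variant of a fetch-op
    must still return the same old value.

    #include <assert.h>
    #include <stdatomic.h>

    int main(void)
    {
            atomic_llong v;
            atomic_init(&v, 4);

            /* All orderings return the old value; the ordering itself only
             * matters on SMP, which a single-CPU test cannot exercise. */
            assert(atomic_fetch_add_explicit(&v, 3, memory_order_relaxed) == 4);
            assert(atomic_fetch_add_explicit(&v, 3, memory_order_acquire) == 7);
            assert(atomic_fetch_add_explicit(&v, 3, memory_order_release) == 10);
            assert(atomic_load(&v) == 13);
            return 0;
    }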

    Signed-off-by: Boqun Feng
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Andrew Morton
    Cc: Davidlohr Bueso
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Waiman Long
    Cc: Will Deacon
    Link: http://lkml.kernel.org/r/1446634365-25176-1-git-send-email-boqun.feng@gmail.com
    Signed-off-by: Ingo Molnar

    Boqun Feng
     

27 Jul, 2015

1 commit


05 Jun, 2014

1 commit


31 Jul, 2012

1 commit


01 Mar, 2012

1 commit


27 Jul, 2011

1 commit

  • This allows us to move duplicated code in <asm/atomic.h>
    (atomic_inc_not_zero() for now) to <linux/atomic.h>.
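
    As an illustration of the kind of duplication this removes, a generic
    fallback for atomic_inc_not_zero() can live once in the shared header
    instead of in every per-arch header (a plausible sketch, not quoted
    from the patch):

    /* Generic fallback sketch; the per-arch copies of this can go away. */
    #ifndef atomic_inc_not_zero
    #define atomic_inc_not_zero(v)  atomic_add_unless((v), 1, 0)
    #endif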

    Signed-off-by: Arun Sharma
    Reviewed-by: Eric Dumazet
    Cc: Ingo Molnar
    Cc: David Miller
    Cc: Eric Dumazet
    Acked-by: Mike Frysinger
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arun Sharma
     

27 Jul, 2010

1 commit


05 Jun, 2010

1 commit

  • Add s390 to list of architectures that have atomic64_dec_if_positive
    implemented so we get rid of this warning:

    lib/atomic64_test.c:129:2: warning: #warning Please implement
    atomic64_dec_if_positive for your architecture, and add it to the IF above
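
    The warning comes from a preprocessor guard in lib/atomic64_test.c that
    whitelists architectures known to provide atomic64_dec_if_positive();
    roughly (the arch list is illustrative, not verbatim), the patch adds
    CONFIG_S390 to it:

    #if defined(CONFIG_X86) || defined(CONFIG_MIPS) || defined(CONFIG_PPC) || \
        defined(CONFIG_S390) || defined(_ASM_GENERIC_ATOMIC64_H)
            /* atomic64_dec_if_positive() tests run here */
    #else
    #warning Please implement atomic64_dec_if_positive for your architecture, and add it to the IF above
    #endif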

    Signed-off-by: Heiko Carstens
    Cc: Luca Barbieri
    Cc: "H. Peter Anvin"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Heiko Carstens
     

25 May, 2010

1 commit


02 Mar, 2010

5 commits


26 Feb, 2010

1 commit