27 Apr, 2020

1 commit

  • Instead of having all the sysctl handlers deal with user pointers, which
    is rather hairy in terms of the BPF interaction, copy the input to and
    from userspace in common code. This also means that the strings are
    always NUL-terminated by the common code, making the API a little bit
    safer.

    As most handlers just pass the data through to one of the common
    handlers, a lot of the changes are mechanical.
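    The pattern described above can be sketched in plain user-space C.
    This is a hypothetical model, not the kernel code itself:
    do_sysctl_write(), demo_string_handler() and BUF_MAX are made-up
    names standing in for the common code and a handler.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical model of the change: common code copies the user buffer
 * into a kernel buffer, NUL-terminates it, and only then calls the
 * handler, which deals purely with kernel pointers. */
#define BUF_MAX 64

/* A handler now receives a buffer that is guaranteed NUL-terminated. */
static int demo_string_handler(char *kbuf, size_t len)
{
    (void)len;
    return (int)strlen(kbuf); /* safe: common code appended the NUL */
}

/* Stand-in for the common entry point. */
static int do_sysctl_write(const char *ubuf, size_t count,
                           int (*handler)(char *, size_t))
{
    char kbuf[BUF_MAX];

    if (count >= sizeof(kbuf))
        count = sizeof(kbuf) - 1;
    memcpy(kbuf, ubuf, count); /* models copy_from_user() */
    kbuf[count] = '\0';        /* the "always NUL-terminated" guarantee */
    return handler(kbuf, count);
}
```

    Because the NUL byte is appended in one place, handlers can treat
    their input as a C string even when the writer supplied no
    terminator.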

    Signed-off-by: Christoph Hellwig
    Acked-by: Andrey Ignatov
    Signed-off-by: Al Viro

    Christoph Hellwig
     

17 Jul, 2019

1 commit

  • Architectures which support kprobes have very similar boilerplate around
    calling kprobe_fault_handler(). Use a helper function in kprobes.h to
    unify them, based on the x86 code.

    This changes the behaviour for other architectures when preemption is
    enabled. Previously, they would have disabled preemption while calling
    the kprobe handler. However, preemption would already have been
    disabled if this fault were due to a kprobe, so we know the fault was
    not due to a kprobe handler and can simply return failure.

    This behaviour was introduced in commit a980c0ef9f6d ("x86/kprobes:
    Refactor kprobes_fault() like kprobe_exceptions_notify()")
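    The unified helper follows roughly this shape. The sketch below is a
    user-space model with stubbed kernel state: the stub_* variables
    stand in for user_mode(regs), preemptible() and kprobe_running().

```c
#include <assert.h>
#include <stdbool.h>

/* User-space model of the unified fault helper. */
static bool stub_user_mode;     /* did the fault come from user space? */
static bool stub_preemptible;   /* is preemption currently enabled?    */
static bool stub_kprobe_active; /* is a kprobe running on this CPU?    */

static bool fault_handler_called;

static bool kprobe_fault_handler(void)
{
    fault_handler_called = true;
    return true;
}

/* Mirrors the shape of the helper: bail out early unless the fault can
 * possibly belong to a kprobe. A kprobe fault always happens with
 * preemption disabled, so a preemptible context cannot be one. */
static bool kprobe_page_fault(void)
{
    if (stub_user_mode)
        return false;
    if (stub_preemptible)
        return false;
    if (!stub_kprobe_active)
        return false;
    return kprobe_fault_handler();
}
```

    The early preemptible() check is what makes the behaviour change
    described above safe: a preemptible context is rejected without ever
    calling into the kprobe handler.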

    [anshuman.khandual@arm.com: export kprobe_fault_handler()]
    Link: http://lkml.kernel.org/r/1561133358-8876-1-git-send-email-anshuman.khandual@arm.com
    Link: http://lkml.kernel.org/r/1560420444-25737-1-git-send-email-anshuman.khandual@arm.com
    Signed-off-by: Anshuman Khandual
    Reviewed-by: Dave Hansen
    Cc: Michal Hocko
    Cc: Matthew Wilcox
    Cc: Mark Rutland
    Cc: Christophe Leroy
    Cc: Stephen Rothwell
    Cc: Andrey Konovalov
    Cc: Michael Ellerman
    Cc: Paul Mackerras
    Cc: Russell King
    Cc: Catalin Marinas
    Cc: Will Deacon
    Cc: Tony Luck
    Cc: Fenghua Yu
    Cc: Martin Schwidefsky
    Cc: Heiko Carstens
    Cc: Yoshinori Sato
    Cc: "David S. Miller"
    Cc: Thomas Gleixner
    Cc: Peter Zijlstra
    Cc: Ingo Molnar
    Cc: Andy Lutomirski
    Cc: Vineet Gupta
    Cc: James Hogan
    Cc: Paul Burton
    Cc: Ralf Baechle
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Anshuman Khandual
     

31 May, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version this program is distributed in the
    hope that it will be useful but without any warranty without even
    the implied warranty of merchantability or fitness for a particular
    purpose see the gnu general public license for more details you
    should have received a copy of the gnu general public license along
    with this program if not write to the free software foundation inc
    59 temple place suite 330 boston ma 02111 1307 usa

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 1334 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Reviewed-by: Richard Fontana
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070033.113240726@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

19 Apr, 2019

1 commit

  • Verify the stack frame pointer in the kretprobe trampoline handler.
    If the stack frame pointer does not match, skip the wrong entry and
    try to find the correct one.

    This can happen if the user puts a kretprobe on a function which can
    be used in the path of an ftrace user-function call. Such functions
    should not be probed, so this adds a warning message that reports
    which function should be blacklisted.

    Tested-by: Andrea Righi
    Signed-off-by: Masami Hiramatsu
    Acked-by: Steven Rostedt
    Cc: Linus Torvalds
    Cc: Mathieu Desnoyers
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: stable@vger.kernel.org
    Link: http://lkml.kernel.org/r/155094059185.6137.15527904013362842072.stgit@devbox
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

30 Jan, 2019

1 commit

  • Remove the ifdeffery in the breakpoint parsing arch_build_bp_info() by
    adding a within_kprobe_blacklist() stub for the !CONFIG_KPROBES case.

    It returns true when kprobes are not enabled, meaning that any
    address is treated as being within the kprobes blacklist on such
    kernels, thus disallowing kernel breakpoints on non-kprobes kernels.
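    A minimal model of the stub pattern in user-space C.
    within_kprobe_blacklist() is the real kernel name;
    build_kernel_bp() is an illustrative stand-in for
    arch_build_bp_info().

```c
#include <assert.h>
#include <stdbool.h>

/* Leave CONFIG_KPROBES undefined to mimic a !CONFIG_KPROBES build. */
/* #define CONFIG_KPROBES */

#ifdef CONFIG_KPROBES
bool within_kprobe_blacklist(unsigned long addr); /* real list lookup */
#else
static inline bool within_kprobe_blacklist(unsigned long addr)
{
    (void)addr;
    return true; /* no kprobes: treat every address as blacklisted */
}
#endif

/* The caller needs no #ifdef anymore. */
static int build_kernel_bp(unsigned long addr)
{
    if (within_kprobe_blacklist(addr))
        return -1; /* -EINVAL in the kernel */
    return 0;
}
```

    With kprobes disabled, every kernel breakpoint request is rejected
    through the same code path, which is exactly what removing the
    ifdeffery buys.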

    Signed-off-by: Borislav Petkov
    Acked-by: Masami Hiramatsu
    Cc: Anil S Keshavamurthy
    Cc: "David S. Miller"
    Cc: Frederic Weisbecker
    Cc: "H. Peter Anvin"
    Cc: Ingo Molnar
    Cc: "Naveen N. Rao"
    Cc: Thomas Gleixner
    Link: https://lkml.kernel.org/r/20190127131237.4557-1-bp@alien8.de

    Borislav Petkov
     

27 Dec, 2018

1 commit

  • Pull x86 cleanups from Ingo Molnar:
    "Misc cleanups"

    * 'x86-cleanups-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
    x86/kprobes: Remove trampoline_handler() prototype
    x86/kernel: Fix more -Wmissing-prototypes warnings
    x86: Fix various typos in comments
    x86/headers: Fix -Wmissing-prototypes warning
    x86/process: Avoid unnecessary NULL check in get_wchan()
    x86/traps: Complete prototype declarations
    x86/mce: Fix -Wmissing-prototypes warnings
    x86/gart: Rewrite early_gart_iommu_check() comment

    Linus Torvalds
     

18 Dec, 2018

1 commit

  • Blacklist symbols in arch-defined probe-prohibited areas.
    With this change, user can see all symbols which are prohibited
    to probe in debugfs.

    All architectures which have custom prohibited areas should define
    their own arch_populate_kprobe_blacklist() function; otherwise, all
    symbols marked __kprobes are blacklisted.

    Reported-by: Andrea Righi
    Tested-by: Andrea Righi
    Signed-off-by: Masami Hiramatsu
    Cc: Andy Lutomirski
    Cc: Anil S Keshavamurthy
    Cc: Borislav Petkov
    Cc: David S. Miller
    Cc: Linus Torvalds
    Cc: Naveen N. Rao
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Yonghong Song
    Link: http://lkml.kernel.org/r/154503485491.26176.15823229545155174796.stgit@devbox
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

08 Dec, 2018

1 commit

  • ... with the goal of eventually enabling -Wmissing-prototypes by
    default. At least on x86.

    Make functions static where possible, otherwise add prototypes or make
    them visible through includes.

    asm/trace/ changes courtesy of Steven Rostedt.

    Signed-off-by: Borislav Petkov
    Reviewed-by: Masami Hiramatsu
    Reviewed-by: Ingo Molnar
    Acked-by: Rafael J. Wysocki # ACPI + cpufreq bits
    Cc: Andrew Banman
    Cc: Dimitri Sivanich
    Cc: "H. Peter Anvin"
    Cc: Ingo Molnar
    Cc: Masami Hiramatsu
    Cc: Mike Travis
    Cc: "Steven Rostedt (VMware)"
    Cc: Thomas Gleixner
    Cc: Yi Wang
    Cc: linux-acpi@vger.kernel.org

    Borislav Petkov
     

21 Jun, 2018

2 commits

  • Remove the jprobe stub APIs from linux/kprobes.h since the jprobe
    implementation is completely gone.

    Signed-off-by: Masami Hiramatsu
    Acked-by: Thomas Gleixner
    Cc: Ananth N Mavinakayanahalli
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Steven Rostedt
    Cc: linux-arch@vger.kernel.org
    Link: https://lore.kernel.org/lkml/152942503572.15209.1652552217914694917.stgit@devbox
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     
  • Remove functionally empty jprobe API implementations and test cases.

    Signed-off-by: Masami Hiramatsu
    Acked-by: Thomas Gleixner
    Cc: Ananth N Mavinakayanahalli
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Steven Rostedt
    Cc: linux-arch@vger.kernel.org
    Link: https://lore.kernel.org/lkml/152942430705.15209.2307050500995264322.stgit@devbox
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

20 Oct, 2017

1 commit

  • Disable the jprobes APIs and comment out the jprobes API function
    code. This is in preparation of removing all jprobes related
    code (including kprobe's break_handler).

    Nowadays ftrace and other tracing features are mature enough
    to replace jprobes use-cases. Users can safely use ftrace and
    perf probe etc. for their use cases.

    Signed-off-by: Masami Hiramatsu
    Cc: Alexei Starovoitov
    Cc: Ananth N Mavinakayanahalli
    Cc: Anil S Keshavamurthy
    Cc: David S. Miller
    Cc: Ian McDonald
    Cc: Kees Cook
    Cc: Linus Torvalds
    Cc: Paul E. McKenney
    Cc: Peter Zijlstra
    Cc: Stephen Hemminger
    Cc: Steven Rostedt
    Cc: Thomas Gleixner
    Cc: Vlad Yasevich
    Link: http://lkml.kernel.org/r/150724527741.5014.15465541485637899227.stgit@devbox
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

08 Jul, 2017

1 commit

  • Rename function_offset_within_entry() to scope it to the kprobe
    namespace by using the kprobe_ prefix, and also to simplify it.

    Suggested-by: Ingo Molnar
    Suggested-by: Masami Hiramatsu
    Signed-off-by: Naveen N. Rao
    Cc: Ananth N Mavinakayanahalli
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/3aa6c7e2e4fb6e00f3c24fa306496a66edb558ea.1499443367.git.naveen.n.rao@linux.vnet.ibm.com
    Signed-off-by: Ingo Molnar

    Naveen N. Rao
     

18 May, 2017

1 commit

  • Enabling the tracer selftest occasionally triggers the warning in
    text_poke(), which warns when the page to be modified is not marked
    reserved.

    The reason is that the tracer selftest installs kprobes on functions marked
    __init for testing. These probes are removed after the tests, but that
    removal schedules the delayed kprobes_optimizer work, which will do the
    actual text poke. If the work is executed after the init text is freed,
    then the warning triggers. The bug can be reproduced reliably when the work
    delay is increased.

    Flush the optimizer work and wait for the optimizing/unoptimizing lists to
    become empty before returning from the kprobes tracer selftest. That
    ensures that all operations which were queued due to the probes removal
    have completed.

    Link: http://lkml.kernel.org/r/20170516094802.76a468bb@gandalf.local.home

    Signed-off-by: Thomas Gleixner
    Acked-by: Masami Hiramatsu
    Cc: stable@vger.kernel.org
    Fixes: 6274de498 ("kprobes: Support delayed unoptimizing")
    Signed-off-by: Steven Rostedt (VMware)

    Thomas Gleixner
     

06 May, 2017

1 commit

  • Pull powerpc updates from Michael Ellerman:
    "Highlights include:

    - Larger virtual address space on 64-bit server CPUs. By default we
    use a 128TB virtual address space, but a process can request access
    to the full 512TB by passing a hint to mmap().

    - Support for the new Power9 "XIVE" interrupt controller.

    - TLB flushing optimisations for the radix MMU on Power9.

    - Support for CAPI cards on Power9, using the "Coherent Accelerator
    Interface Architecture 2.0".

    - The ability to configure the mmap randomisation limits at build and
    runtime.

    - Several small fixes and cleanups to the kprobes code, as well as
    support for KPROBES_ON_FTRACE.

    - Major improvements to handling of system reset interrupts,
    correctly treating them as NMIs, giving them a dedicated stack and
    using a new hypervisor call to trigger them, all of which should
    aid debugging and robustness.

    - Many fixes and other minor enhancements.

    Thanks to: Alastair D'Silva, Alexey Kardashevskiy, Alistair Popple,
    Andrew Donnellan, Aneesh Kumar K.V, Anshuman Khandual, Anton
    Blanchard, Balbir Singh, Ben Hutchings, Benjamin Herrenschmidt,
    Bhupesh Sharma, Chris Packham, Christian Zigotzky, Christophe Leroy,
    Christophe Lombard, Daniel Axtens, David Gibson, Gautham R. Shenoy,
    Gavin Shan, Geert Uytterhoeven, Guilherme G. Piccoli, Hamish Martin,
    Hari Bathini, Kees Cook, Laurent Dufour, Madhavan Srinivasan, Mahesh J
    Salgaonkar, Mahesh Salgaonkar, Masami Hiramatsu, Matt Brown, Matthew
    R. Ochs, Michael Neuling, Naveen N. Rao, Nicholas Piggin, Oliver
    O'Halloran, Pan Xinhui, Paul Mackerras, Rashmica Gupta, Russell
    Currey, Sukadev Bhattiprolu, Thadeu Lima de Souza Cascardo, Tobin C.
    Harding, Tyrel Datwyler, Uma Krishnan, Vaibhav Jain, Vipin K Parashar,
    Yang Shi"

    * tag 'powerpc-4.12-1' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: (214 commits)
    powerpc/64s: Power9 has no LPCR[VRMASD] field so don't set it
    powerpc/powernv: Fix TCE kill on NVLink2
    powerpc/mm/radix: Drop support for CPUs without lockless tlbie
    powerpc/book3s/mce: Move add_taint() later in virtual mode
    powerpc/sysfs: Move #ifdef CONFIG_HOTPLUG_CPU out of the function body
    powerpc/smp: Document irq enable/disable after migrating IRQs
    powerpc/mpc52xx: Don't select user-visible RTAS_PROC
    powerpc/powernv: Document cxl dependency on special case in pnv_eeh_reset()
    powerpc/eeh: Clean up and document event handling functions
    powerpc/eeh: Avoid use after free in eeh_handle_special_event()
    cxl: Mask slice error interrupts after first occurrence
    cxl: Route eeh events to all drivers in cxl_pci_error_detected()
    cxl: Force context lock during EEH flow
    powerpc/64: Allow CONFIG_RELOCATABLE if COMPILE_TEST
    powerpc/xmon: Teach xmon oops about radix vectors
    powerpc/mm/hash: Fix off-by-one in comment about kernel contexts ids
    powerpc/pseries: Enable VFIO
    powerpc/powernv: Fix iommu table size calculation hook for small tables
    powerpc/powernv: Check kzalloc() return value in pnv_pci_table_alloc
    powerpc: Add arch/powerpc/tools directory
    ...

    Linus Torvalds
     

20 Apr, 2017

2 commits

  • commit 239aeba76409 ("perf powerpc: Fix kprobe and kretprobe handling with
    kallsyms on ppc64le") changed how we use the offset field in struct kprobe on
    ABIv2. perf now offsets from the global entry point if an offset is specified
    and otherwise chooses the local entry point.

    Fix the same in kernel for kprobe API users. We do this by extending
    kprobe_lookup_name() to accept an additional parameter to indicate the offset
    specified with the kprobe registration. If offset is 0, we return the local
    function entry and return the global entry point otherwise.

    With:
    # cd /sys/kernel/debug/tracing/
    # echo "p _do_fork" >> kprobe_events
    # echo "p _do_fork+0x10" >> kprobe_events

    before this patch:
    # cat ../kprobes/list
    c0000000000d0748 k _do_fork+0x8 [DISABLED]
    c0000000000d0758 k _do_fork+0x18 [DISABLED]
    c0000000000412b0 k kretprobe_trampoline+0x0 [OPTIMIZED]

    and after:
    # cat ../kprobes/list
    c0000000000d04c8 k _do_fork+0x8 [DISABLED]
    c0000000000d04d0 k _do_fork+0x10 [DISABLED]
    c0000000000412b0 k kretprobe_trampoline+0x0 [OPTIMIZED]
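    The lookup rule can be modelled like this (toy addresses, not real
    kernel symbols; on ppc64le ABIv2 the local entry point is typically
    the global entry point + 8):

```c
#include <assert.h>

/* Toy model of the ppc64le ABIv2 rule described above: each function
 * has a global entry point (GEP) and a local entry point (LEP). */
struct ppc64_sym {
    unsigned long global_entry;
    unsigned long local_entry;
};

/* No offset: probe the local entry, as before. Explicit offset: apply
 * it from the global entry, matching what perf now does. */
static unsigned long kprobe_lookup_name(const struct ppc64_sym *s,
                                        unsigned long offset)
{
    return offset ? s->global_entry : s->local_entry;
}
```

    This reproduces the before/after listings above: "p _do_fork" lands
    on the local entry (+0x8), while "p _do_fork+0x10" is now measured
    from the global entry.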

    Acked-by: Ananth N Mavinakayanahalli
    Signed-off-by: Naveen N. Rao
    Signed-off-by: Michael Ellerman

    Naveen N. Rao
     
  • The macro is now pretty long and ugly on powerpc. In light of
    further changes needed here, convert it to a __weak variant to be
    overridden with a nicer-looking function.

    Suggested-by: Masami Hiramatsu
    Acked-by: Masami Hiramatsu
    Signed-off-by: Naveen N. Rao
    Signed-off-by: Michael Ellerman

    Naveen N. Rao
     

16 Mar, 2017

1 commit

  • perf specifies an offset from _text and since this offset is fed
    directly into the arch-specific helper, kprobes tracer rejects
    installation of kretprobes through perf. Fix this by looking up the
    actual offset from a function for the specified sym+offset.

    Refactor and reuse existing routines to limit code duplication -- we
    repurpose kprobe_addr() for determining final kprobe address and we
    split out the function entry offset determination into a separate
    generic helper.

    Before patch:

    naveen@ubuntu:~/linux/tools/perf$ sudo ./perf probe -v do_open%return
    probe-definition(0): do_open%return
    symbol:do_open file:(null) line:0 offset:0 return:1 lazy:(null)
    0 arguments
    Looking at the vmlinux_path (8 entries long)
    Using /boot/vmlinux for symbols
    Open Debuginfo file: /boot/vmlinux
    Try to find probe point from debuginfo.
    Matched function: do_open [2d0c7ff]
    Probe point found: do_open+0
    Matched function: do_open [35d76dc]
    found inline addr: 0xc0000000004ba9c4
    Failed to find "do_open%return",
    because do_open is an inlined function and has no return point.
    An error occurred in debuginfo analysis (-22).
    Trying to use symbols.
    Opening /sys/kernel/debug/tracing//README write=0
    Opening /sys/kernel/debug/tracing//kprobe_events write=1
    Writing event: r:probe/do_open _text+4469776
    Failed to write event: Invalid argument
    Error: Failed to add events. Reason: Invalid argument (Code: -22)
    naveen@ubuntu:~/linux/tools/perf$ dmesg | tail

    [ 33.568656] Given offset is not valid for return probe.

    After patch:

    naveen@ubuntu:~/linux/tools/perf$ sudo ./perf probe -v do_open%return
    probe-definition(0): do_open%return
    symbol:do_open file:(null) line:0 offset:0 return:1 lazy:(null)
    0 arguments
    Looking at the vmlinux_path (8 entries long)
    Using /boot/vmlinux for symbols
    Open Debuginfo file: /boot/vmlinux
    Try to find probe point from debuginfo.
    Matched function: do_open [2d0c7d6]
    Probe point found: do_open+0
    Matched function: do_open [35d76b3]
    found inline addr: 0xc0000000004ba9e4
    Failed to find "do_open%return",
    because do_open is an inlined function and has no return point.
    An error occurred in debuginfo analysis (-22).
    Trying to use symbols.
    Opening /sys/kernel/debug/tracing//README write=0
    Opening /sys/kernel/debug/tracing//kprobe_events write=1
    Writing event: r:probe/do_open _text+4469808
    Writing event: r:probe/do_open_1 _text+4956344
    Added new events:
    probe:do_open (on do_open%return)
    probe:do_open_1 (on do_open%return)

    You can now use it in all perf tools, such as:

    perf record -e probe:do_open_1 -aR sleep 1

    naveen@ubuntu:~/linux/tools/perf$ sudo cat /sys/kernel/debug/kprobes/list
    c000000000041370 k kretprobe_trampoline+0x0 [OPTIMIZED]
    c0000000004ba0b8 r do_open+0x8 [DISABLED]
    c000000000443430 r do_open+0x0 [DISABLED]

    Signed-off-by: Naveen N. Rao
    Acked-by: Masami Hiramatsu
    Cc: Ananth N Mavinakayanahalli
    Cc: Michael Ellerman
    Cc: Steven Rostedt
    Cc: linuxppc-dev@lists.ozlabs.org
    Link: http://lkml.kernel.org/r/d8cd1ef420ec22e3643ac332fdabcffc77319a42.1488961018.git.naveen.n.rao@linux.vnet.ibm.com
    Signed-off-by: Arnaldo Carvalho de Melo

    Naveen N. Rao
     

04 Mar, 2017

1 commit

  • kretprobes can be registered by specifying an absolute address or by
    specifying offset to a symbol. However, we need to ensure this falls at
    function entry so as to be able to determine the return address.

    Validate the same during kretprobe registration. By default, there
    should not be any offset from a function entry, as determined through a
    kallsyms_lookup(). Introduce arch_function_offset_within_entry() as a
    way for architectures to override this.
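    The weak-override pattern looks roughly like this (user-space
    sketch; check_kretprobe_offset() is an illustrative stand-in for
    the registration-time check):

```c
#include <assert.h>
#include <stdbool.h>

/* Generic rule: a kretprobe offset must be 0 (exact function entry).
 * An architecture can provide a strong definition of this symbol to
 * accept other entry offsets. */
bool __attribute__((weak))
arch_function_offset_within_entry(unsigned long offset)
{
    return offset == 0;
}

/* Illustrative stand-in for the check done at registration time. */
static int check_kretprobe_offset(unsigned long offset)
{
    if (!arch_function_offset_within_entry(offset))
        return -1; /* -EINVAL in the kernel */
    return 0;
}
```

    An architecture overriding the weak symbol changes only the policy,
    not the registration code that enforces it.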

    Signed-off-by: Naveen N. Rao
    Acked-by: Masami Hiramatsu
    Cc: Ananth N Mavinakayanahalli
    Cc: Michael Ellerman
    Cc: Steven Rostedt
    Cc: linuxppc-dev@lists.ozlabs.org
    Link: http://lkml.kernel.org/r/f1583bc4839a3862cfc2acefcc56f9c8837fa2ba.1487770934.git.naveen.n.rao@linux.vnet.ibm.com
    Signed-off-by: Arnaldo Carvalho de Melo

    Naveen N. Rao
     

28 Feb, 2017

1 commit

  • Often all that is needed are these small helpers, instead of
    compiler.h or the full kprobes.h. This is important for asm helpers;
    in fact, even some asm/kprobes.h files make use of these helpers.
    Instead, just keep a generic asm file with helpers useful for asm
    code, with as little clutter as possible.

    Likewise, we now also need to address what to do about this file
    both when architectures have CONFIG_HAVE_KPROBES and when they do
    not, and for when architectures have CONFIG_HAVE_KPROBES but have
    disabled CONFIG_KPROBES.

    Right now most asm/kprobes.h files do not have guards against
    CONFIG_KPROBES, which means most architecture code cannot include
    asm/kprobes.h safely. Correct this and add guards for architectures
    missing them. Additionally, provide architectures that do not have
    kprobes support with the default asm-generic solution. This lets us
    force asm/kprobes.h on the header include/linux/kprobes.h always,
    but most importantly we can now safely include just asm/kprobes.h
    in architecture code without bringing in the full kitchen sink of
    header files.

    Two architectures already provided a guard against CONFIG_KPROBES in
    their kprobes.h: sh, arch. The rest of the architectures needed
    guards added. We avoid including any unneeded headers in
    asm/kprobes.h unless kprobes have been enabled.

    In a subsequent atomic change we can try now to remove compiler.h from
    include/linux/kprobes.h.

    During this sweep I've also identified a few architectures defining
    a common macro needed by both kprobes and ftrace: the definition of
    the breakpoint instruction. Some refer to this as
    BREAKPOINT_INSTRUCTION. This must be kept outside of the #ifdef
    CONFIG_KPROBES guard.
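    A single-file model of the guard layout described above. 0xcc is
    the x86 int3 opcode; MAX_INSN_SIZE is shown only as an example of a
    kprobes-only definition.

```c
#include <assert.h>

/* Definitions shared with ftrace (the breakpoint instruction) stay
 * outside the CONFIG_KPROBES guard; kprobes-only details go inside. */
#define CONFIG_KPROBES 1 /* comment out to model a !CONFIG_KPROBES build */

/* --- would live in asm/kprobes.h --- */
#define BREAKPOINT_INSTRUCTION 0xcc /* needed by ftrace too */

#ifdef CONFIG_KPROBES
#define MAX_INSN_SIZE 16 /* kprobes-only, safely guarded */
#endif
```

    With this split, ftrace code can include the header on a
    !CONFIG_KPROBES build and still see the breakpoint instruction.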

    [mcgrof@kernel.org: fix arm64 build]
    Link: http://lkml.kernel.org/r/CAB=NE6X1WMByuARS4mZ1g9+W=LuVBnMDnh_5zyN0CLADaVh=Jw@mail.gmail.com
    [sfr@canb.auug.org.au: fixup for kprobes declarations moving]
    Link: http://lkml.kernel.org/r/20170214165933.13ebd4f4@canb.auug.org.au
    Link: http://lkml.kernel.org/r/20170203233139.32682-1-mcgrof@kernel.org
    Signed-off-by: Luis R. Rodriguez
    Signed-off-by: Stephen Rothwell
    Acked-by: Masami Hiramatsu
    Cc: Arnd Bergmann
    Cc: Masami Hiramatsu
    Cc: Ananth N Mavinakayanahalli
    Cc: Anil S Keshavamurthy
    Cc: David S. Miller
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: H. Peter Anvin
    Cc: Andy Lutomirski
    Cc: Steven Rostedt
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Luis R. Rodriguez
     

14 Jan, 2017

1 commit

  • Improve __kernel_text_address()/kernel_text_address() to return
    true if the given address is on a kprobe's instruction slot
    trampoline.

    This can help stacktraces determine whether the address is in a
    text area or not.

    To implement this atomically in is_kprobe_*_slot(), also change
    the insn_cache page list to an RCU list.

    This changes timings a bit (it delays page freeing to the RCU garbage
    collection phase), but none of that is in the hot path.

    Note: this change can add small overhead to stack unwinders because
    it adds 2 additional checks to __kernel_text_address(). However, the
    impact should be very small, because kprobe_insn_pages list has 1 entry
    per 256 probes(on x86, on arm/arm64 it will be 1024 probes),
    and kprobe_optinsn_pages has 1 entry per 32 probes(on x86).
    In most use cases, the number of kprobe events may be less
    than 20, which means that is_kprobe_*_slot() will check just one entry.
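    The slot check is conceptually a range lookup over the cache's page
    list. A toy model (plain array instead of the kernel's RCU list;
    struct and field names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Each insn cache keeps a list of pages; an address counts as kernel
 * text if it falls inside one of them. */
struct slot_page {
    unsigned long start; /* first byte of the slot page */
    size_t size;         /* page size in bytes */
};

static bool is_kprobe_insn_slot(const struct slot_page *pages, int npages,
                                unsigned long addr)
{
    for (int i = 0; i < npages; i++) {
        if (addr >= pages[i].start &&
            addr < pages[i].start + pages[i].size)
            return true;
    }
    return false;
}
```

    The overhead argument above follows directly: with one list entry
    per hundreds of probes, the loop almost always checks a single
    range.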

    Tested-by: Josh Poimboeuf
    Signed-off-by: Masami Hiramatsu
    Acked-by: Peter Zijlstra
    Cc: Alexander Shishkin
    Cc: Ananth N Mavinakayanahalli
    Cc: Andrew Morton
    Cc: Andrey Konovalov
    Cc: Arnaldo Carvalho de Melo
    Cc: Jiri Olsa
    Cc: Linus Torvalds
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/148388747896.6869.6354262871751682264.stgit@devbox
    [ Improved the changelog and coding style. ]
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

04 Aug, 2015

1 commit

  • Code on the kprobe blacklist doesn't want unexpected int3
    exceptions. It probably doesn't want unexpected debug exceptions
    either. Be safe: disallow breakpoints in nokprobes code.

    On non-CONFIG_KPROBES kernels, there is no kprobe blacklist. In
    that case, disallow kernel breakpoints entirely.

    It will be particularly important to keep hw breakpoints out of the
    entry and NMI code once we move debug exceptions off the IST stack.

    Signed-off-by: Andy Lutomirski
    Signed-off-by: Peter Zijlstra (Intel)
    Cc: Borislav Petkov
    Cc: Brian Gerst
    Cc: Linus Torvalds
    Cc: Masami Hiramatsu
    Cc: Peter Zijlstra
    Cc: Steven Rostedt
    Cc: Thomas Gleixner
    Link: http://lkml.kernel.org/r/e14b152af99640448d895e3c2a8c2d5ee19a1325.1438312874.git.luto@kernel.org
    Signed-off-by: Ingo Molnar

    Andy Lutomirski
     

27 Oct, 2014

1 commit

  • Introduce weak arch_check_ftrace_location() helper function which
    architectures can override in order to implement handling of kprobes
    on function tracer call sites on their own, without depending on
    common code or implementing the KPROBES_ON_FTRACE feature.

    Signed-off-by: Heiko Carstens
    Acked-by: Masami Hiramatsu
    Acked-by: Steven Rostedt
    Signed-off-by: Martin Schwidefsky

    Heiko Carstens
     

13 Jun, 2014

1 commit

  • Pull more perf updates from Ingo Molnar:
    "A second round of perf updates:

    - wide reaching kprobes sanitization and robustization, with the hope
    of fixing all 'probe this function crashes the kernel' bugs, by
    Masami Hiramatsu.

    - uprobes updates from Oleg Nesterov: tmpfs support, corner case
    fixes and robustization work.

    - perf tooling updates and fixes from Jiri Olsa, Namhyung Ki, Arnaldo
    et al:
    * Add support to accumulate hist periods (Namhyung Kim)
    * various fixes, refactorings and enhancements"

    * 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (101 commits)
    perf: Differentiate exec() and non-exec() comm events
    perf: Fix perf_event_comm() vs. exec() assumption
    uprobes/x86: Rename arch_uprobe->def to ->defparam, minor comment updates
    perf/documentation: Add description for conditional branch filter
    perf/x86: Add conditional branch filtering support
    perf/tool: Add conditional branch filter 'cond' to perf record
    perf: Add new conditional branch filter 'PERF_SAMPLE_BRANCH_COND'
    uprobes: Teach copy_insn() to support tmpfs
    uprobes: Shift ->readpage check from __copy_insn() to uprobe_register()
    perf/x86: Use common PMU interrupt disabled code
    perf/ARM: Use common PMU interrupt disabled code
    perf: Disable sampled events if no PMU interrupt
    perf: Fix use after free in perf_remove_from_context()
    perf tools: Fix 'make help' message error
    perf record: Fix poll return value propagation
    perf tools: Move elide bool into perf_hpp_fmt struct
    perf tools: Remove elide setup for SORT_MODE__MEMORY mode
    perf tools: Fix "==" into "=" in ui_browser__warning assignment
    perf tools: Allow overriding sysfs and proc finding with env var
    perf tools: Consider header files outside perf directory in tags target
    ...

    Linus Torvalds
     

24 Apr, 2014

2 commits

  • Introduce NOKPROBE_SYMBOL() macro which builds a kprobes
    blacklist at kernel build time.

    The usage of this macro is similar to EXPORT_SYMBOL(),
    placed after the function definition:

    NOKPROBE_SYMBOL(function);

    Since this macro will inhibit inlining of static/inline
    functions, this patch also introduces a nokprobe_inline macro
    for static/inline functions. In this case, we must use
    NOKPROBE_SYMBOL() for the inline function caller.

    When CONFIG_KPROBES=y, the macro stores the given function
    address in the "_kprobe_blacklist" section.

    Since the data structures are not fully initialized by the
    macro (because there is no "size" information), those
    are re-initialized at boot time by using kallsyms.
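    The section trick can be sketched with GCC/Clang attributes
    (user-space model on an ELF platform; the variable name is
    illustrative, and the real kernel macro is more involved):

```c
#include <assert.h>

/* The macro records the function's address in a dedicated ELF section,
 * which boot code later turns into the blacklist (re-initialized via
 * kallsyms, since the macro itself stores no size information). */
#define NOKPROBE_SYMBOL(fname)                                    \
    static void *__nokprobe_##fname                               \
    __attribute__((section("_kprobe_blacklist"), used)) =         \
        (void *)fname

static int demo_handler(void)
{
    return 42;
}
NOKPROBE_SYMBOL(demo_handler);
```

    Like EXPORT_SYMBOL(), the macro is placed after the function
    definition and costs nothing at the call site.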

    Signed-off-by: Masami Hiramatsu
    Link: http://lkml.kernel.org/r/20140417081705.26341.96719.stgit@ltc230.yrl.intra.hitachi.co.jp
    Cc: Alok Kataria
    Cc: Ananth N Mavinakayanahalli
    Cc: Andrew Morton
    Cc: Anil S Keshavamurthy
    Cc: Arnd Bergmann
    Cc: Christopher Li
    Cc: Chris Wright
    Cc: David S. Miller
    Cc: Jan-Simon Möller
    Cc: Jeremy Fitzhardinge
    Cc: Linus Torvalds
    Cc: Randy Dunlap
    Cc: Rusty Russell
    Cc: linux-arch@vger.kernel.org
    Cc: linux-doc@vger.kernel.org
    Cc: linux-sparse@vger.kernel.org
    Cc: virtualization@lists.linux-foundation.org
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     
  • .entry.text is a code area which is used for interrupt/syscall
    entries and includes much sensitive code. Thus, it is better to
    prohibit probing on all of this code instead of only a part of it.
    Since some symbols are already registered on the kprobe blacklist,
    this also removes them from the blacklist.

    Signed-off-by: Masami Hiramatsu
    Reviewed-by: Steven Rostedt
    Cc: Ananth N Mavinakayanahalli
    Cc: Anil S Keshavamurthy
    Cc: Borislav Petkov
    Cc: David S. Miller
    Cc: Frederic Weisbecker
    Cc: Jan Kiszka
    Cc: Jiri Kosina
    Cc: Jonathan Lebon
    Cc: Seiji Aguchi
    Link: http://lkml.kernel.org/r/20140417081658.26341.57354.stgit@ltc230.yrl.intra.hitachi.co.jp
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

12 Sep, 2013

2 commits

  • The current two insn slot caches both use module_alloc()/module_free()
    to allocate and free insn slot cache pages.

    For s390 this is not sufficient, since there is a need to allocate
    insn slots that are either within the vmalloc module area or within
    DMA memory.

    Therefore add a mechanism which allows specifying a custom allocator
    for an insn slot cache.

    Signed-off-by: Heiko Carstens
    Acked-by: Masami Hiramatsu
    Cc: Ananth N Mavinakayanahalli
    Cc: Ingo Molnar
    Cc: Martin Schwidefsky
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Heiko Carstens
     
  • The current kprobes insn caches allocate memory areas for insn slots
    with module_alloc(). The assumption is that the kernel image and module
    area are both within the same +/- 2GB memory area.

    This however is not true for s390 where the kernel image resides within
    the first 2GB (DMA memory area), but the module area is far away in the
    vmalloc area, usually somewhere close below the 4TB area.

    For new pc relative instructions s390 needs insn slots that are within
    +/- 2GB of each area. That way we can patch displacements of
    pc-relative instructions within the insn slots just like x86 and
    powerpc.

    The module area works already with the normal insn slot allocator,
    however there is currently no way to get insn slots that are within the
    first 2GB on s390 (aka DMA area).

    Therefore this patch set modifies the kprobes insn slot cache code
    in order to allow specifying a custom allocator for the insn slot
    cache pages. In addition, architectures can now have private insn
    slot caches without the need to modify common code.

    Patch 1 unifies and simplifies the current insn and optinsn caches
    implementation. This is a preparation which allows to add more
    insn caches in a simple way.

    Patch 2 adds the possibility to specify a custom allocator.

    Patch 3 makes s390 use the new insn slot mechanisms and adds support for
    pc-relative instructions with long displacements.

    This patch (of 3):

    The two insn caches (insn, and optinsn) each have an own mutex and
    alloc/free functions (get_[opt]insn_slot() / free_[opt]insn_slot()).

    Since there is a need for yet another insn cache which satisfies DMA
    allocations on s390, unify and simplify the current implementation:

    - Move the per insn cache mutex into struct kprobe_insn_cache.
    - Move the alloc/free functions to kprobe.h so they are simply
    wrappers for the generic __get_insn_slot/__free_insn_slot functions.
    The implementation is done with a DEFINE_INSN_CACHE_OPS() macro
    which provides the alloc/free functions for each cache if needed.
    - Move the struct kprobe_insn_cache to kprobe.h, which allows
    architecture specific insn slot caches to be generated outside of the
    core kprobes code.

    Signed-off-by: Heiko Carstens
    Cc: Masami Hiramatsu
    Cc: Ananth N Mavinakayanahalli
    Cc: Ingo Molnar
    Cc: Martin Schwidefsky
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Heiko Carstens
     

08 Apr, 2013

1 commit

  • Currently, __kprobes is defined in linux/kprobes.h, which
    is too big to be included in small or basic headers
    that want to make use of this simple attribute.

    So move __kprobes definition into linux/compiler.h
    in which other compiler attributes are defined.

    Signed-off-by: Masami Hiramatsu
    Cc: Timo Juhani Lindfors
    Cc: Ananth N Mavinakayanahalli
    Cc: Pavel Emelyanov
    Cc: Jiri Kosina
    Cc: Nadia Yvette Chambers
    Cc: yrl.pp-manager.tt@hitachi.com
    Cc: David S. Miller
    Cc: Linus Torvalds
    Link: http://lkml.kernel.org/r/20130404104049.21071.20908.stgit@mhiramat-M0-7522
    [ Improved the attribute explanation a bit. ]
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

22 Jan, 2013

1 commit

  • Split ftrace-based kprobes code from kprobes, and introduce
    CONFIG_(HAVE_)KPROBES_ON_FTRACE Kconfig flags.
    As a cleanup, this also moves the kprobe_ftrace check
    into skip_singlestep.

    Link: http://lkml.kernel.org/r/20120928081520.3560.25624.stgit@ltc138.sdl.hitachi.co.jp

    Cc: Ingo Molnar
    Cc: Ananth N Mavinakayanahalli
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Frederic Weisbecker
    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt

    Masami Hiramatsu
     

31 Jul, 2012

2 commits

  • Add function tracer based kprobe optimization support
    handlers on x86. This allows kprobes to use the function
    tracer for probing on the mcount call.

    Link: http://lkml.kernel.org/r/20120605102838.27845.26317.stgit@localhost.localdomain

    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Ananth N Mavinakayanahalli
    Cc: "Frank Ch. Eigler"
    Cc: Andrew Morton
    Cc: Frederic Weisbecker
    Signed-off-by: Masami Hiramatsu

    [ Updated to new port of ftrace save regs functions ]

    Signed-off-by: Steven Rostedt

    Masami Hiramatsu
     
  • Introduce function-tracer-based kprobes optimization.

    With ftrace optimization, kprobes placed on the mcount calling
    address use ftrace's mcount call instead of a breakpoint.
    Furthermore, unlike the current jump-based optimization, this
    optimization works with preemptive kernels. Of course,
    this feature works only if the probe is on the mcount call.

    If kprobe.break_handler is set, that probe is not
    optimized with ftrace (nor put on ftrace). The reason for this
    limitation is that break_handler may be used only
    by jprobes, which change the ip address (for fetching the function
    arguments), while the function tracer ignores a modified ip address.

    Changes in v2:
    - Fix ftrace_ops registering right after setting its filter.
    - Unregister ftrace_ops if no kprobe is using it.
    - Remove notrace dependency from __kprobes macro.

    Link: http://lkml.kernel.org/r/20120605102832.27845.63461.stgit@localhost.localdomain

    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Cc: "H. Peter Anvin"
    Cc: Ananth N Mavinakayanahalli
    Cc: "Frank Ch. Eigler"
    Cc: Andrew Morton
    Cc: Frederic Weisbecker
    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Steven Rostedt

    Masami Hiramatsu
     

05 Mar, 2012

1 commit

  • If a header file is making use of BUG, BUG_ON, BUILD_BUG_ON, or any
    other BUG variant in a static inline (i.e. not in a #define) then
    that header really should be including linux/bug.h and not just
    expecting it to be implicitly present.

    We can make this change risk-free, since if the files using these
    headers didn't have exposure to linux/bug.h already, they would have
    been causing compile failures/warnings.

    Signed-off-by: Paul Gortmaker

    Paul Gortmaker
     

13 Sep, 2011

1 commit


08 Jan, 2011

1 commit

  • * 'for-2.6.38' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (30 commits)
    gameport: use this_cpu_read instead of lookup
    x86: udelay: Use this_cpu_read to avoid address calculation
    x86: Use this_cpu_inc_return for nmi counter
    x86: Replace uses of current_cpu_data with this_cpu ops
    x86: Use this_cpu_ops to optimize code
    vmstat: Use per cpu atomics to avoid interrupt disable / enable
    irq_work: Use per cpu atomics instead of regular atomics
    cpuops: Use cmpxchg for xchg to avoid lock semantics
    x86: this_cpu_cmpxchg and this_cpu_xchg operations
    percpu: Generic this_cpu_cmpxchg() and this_cpu_xchg support
    percpu,x86: relocate this_cpu_add_return() and friends
    connector: Use this_cpu operations
    xen: Use this_cpu_inc_return
    taskstats: Use this_cpu_ops
    random: Use this_cpu_inc_return
    fs: Use this_cpu_inc_return in buffer.c
    highmem: Use this_cpu_xx_return() operations
    vmstat: Use this_cpu_inc_return for vm statistics
    x86: Support for this_cpu_add, sub, dec, inc_return
    percpu: Generic support for this_cpu_add, sub, dec, inc_return
    ...

    Fixed up conflicts in arch/x86/kernel/{apic/nmi.c, apic/x2apic_uv_x.c, process.c}
    as per Tejun.

    Linus Torvalds
     

17 Dec, 2010

1 commit


07 Dec, 2010

2 commits

  • Use text_poke_smp_batch() on the unoptimization path to reduce
    the number of stop_machine() calls. If the number of
    unoptimizing probes is more than MAX_OPTIMIZE_PROBES (=256),
    kprobes unoptimizes the first MAX_OPTIMIZE_PROBES probes and kicks
    the optimizer for the remaining probes.

    Signed-off-by: Masami Hiramatsu
    Cc: Rusty Russell
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    Cc: Jason Baron
    Cc: Mathieu Desnoyers
    Cc: 2nddept-manager@sdl.hitachi.co.jp
    Cc: Peter Zijlstra
    Cc: Steven Rostedt
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     
  • Use text_poke_smp_batch() in the optimization path to reduce
    the number of stop_machine() calls. If the number of optimizing
    probes is more than MAX_OPTIMIZE_PROBES (=256), kprobes optimizes the
    first MAX_OPTIMIZE_PROBES probes and kicks the optimizer for the
    remaining probes.

    Changes in v5:
    - Use kick_kprobe_optimizer() instead of directly calling
    schedule_delayed_work().
    - Reschedule the optimizer outside of the kprobe mutex lock.

    Changes in v2:
    - Allocate the code buffer and parameters in arch_init_kprobes()
    instead of using static arrays.
    - Merge the previous max optimization limit patch into this patch.
    So, this patch introduces an upper limit on the number of probes
    optimized at once.

    Signed-off-by: Masami Hiramatsu
    Cc: Rusty Russell
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    Cc: Jason Baron
    Cc: Mathieu Desnoyers
    Cc: 2nddept-manager@sdl.hitachi.co.jp
    Cc: Peter Zijlstra
    Cc: Steven Rostedt
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu