26 Oct, 2011

1 commit

  • * 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (121 commits)
    perf symbols: Increase symbol KSYM_NAME_LEN size
    perf hists browser: Refuse 'a' hotkey on non symbolic views
    perf ui browser: Use libslang to read keys
    perf tools: Fix tracing info recording
    perf hists browser: Elide DSO column when it is set to just one DSO, ditto for threads
    perf hists: Don't consider filtered entries when calculating column widths
    perf hists: Don't decay total_period for filtered entries
    perf hists browser: Honour symbol_conf.show_{nr_samples,total_period}
    perf hists browser: Do not exit on tab key with single event
    perf annotate browser: Don't change selection line when returning from callq
    perf tools: handle endianness of feature bitmap
    perf tools: Add prelink suggestion to dso update message
    perf script: Fix unknown feature comment
    perf hists browser: Apply the dso and thread filters when merging new batches
    perf hists: Move the dso and thread filters from hist_browser
    perf ui browser: Honour the xterm colors
    perf top tui: Give color hints just on the percentage, like on --stdio
    perf ui browser: Make the colors configurable and change the defaults
    perf tui: Remove unneeded call to newtCls on startup
    perf hists: Don't format the percentage on hist_entry__snprintf
    ...

    Fix up conflicts in arch/x86/kernel/kprobes.c manually.

    Ingo's tree did the insane "add volatile to const array", which just
    doesn't make sense ("volatile const"?). But we could remove the const
    *and* make the array volatile to make doubly sure that gcc doesn't
    optimize it away.

    Also fix up kernel/trace/ring_buffer.c non-data-conflicts manually: the
    reader_lock has been turned into a raw lock by the core locking merge,
    and there was a new user of it introduced in this perf core merge. Make
    sure that new use also uses the raw accessor functions.

    Linus Torvalds
     

25 Oct, 2011

1 commit

  • When compiling an i386_defconfig kernel with gcc-4.6.1-9.fc15.i686, I
    noticed a warning about the asm operand for test_bit in kprobes'
    can_boost. I discovered that this caused only the first long of
    twobyte_is_boostable[] to be output.

    Jakub filed and fixed gcc PR50571 to correct the warning and this output
    issue. But to solve it for older versions of gcc, we can make kprobes'
    twobyte_is_boostable[] non-const, and it won't be optimized out.

    Before:

    CC arch/x86/kernel/kprobes.o
    In file included from include/linux/bitops.h:22:0,
    from include/linux/kernel.h:17,
    from [...]/arch/x86/include/asm/percpu.h:44,
    from [...]/arch/x86/include/asm/current.h:5,
    from [...]/arch/x86/include/asm/processor.h:15,
    from [...]/arch/x86/include/asm/atomic.h:6,
    from include/linux/atomic.h:4,
    from include/linux/mutex.h:18,
    from include/linux/notifier.h:13,
    from include/linux/kprobes.h:34,
    from arch/x86/kernel/kprobes.c:43:
    [...]/arch/x86/include/asm/bitops.h: In function ‘can_boost.part.1’:
    [...]/arch/x86/include/asm/bitops.h:319:2: warning: use of memory input
    without lvalue in asm operand 1 is deprecated [enabled by default]

    $ objdump -rd arch/x86/kernel/kprobes.o | grep -A1 -w bt
    551: 0f a3 05 00 00 00 00 bt %eax,0x0
    554: R_386_32 .rodata.cst4

    $ objdump -s -j .rodata.cst4 -j .data arch/x86/kernel/kprobes.o

    arch/x86/kernel/kprobes.o: file format elf32-i386

    Contents of section .data:
    0000 48000000 00000000 00000000 00000000 H...............
    Contents of section .rodata.cst4:
    0000 4c030000 L...

    Only a single long of twobyte_is_boostable[] is in the object file.

    After, without the const on twobyte_is_boostable:

    $ objdump -rd arch/x86/kernel/kprobes.o | grep -A1 -w bt
    551: 0f a3 05 20 00 00 00 bt %eax,0x20
    554: R_386_32 .data

    $ objdump -s -j .rodata.cst4 -j .data arch/x86/kernel/kprobes.o

    arch/x86/kernel/kprobes.o: file format elf32-i386

    Contents of section .data:
    0000 48000000 00000000 00000000 00000000 H...............
    0010 00000000 00000000 00000000 00000000 ................
    0020 4c030000 0f000200 ffff0000 ffcff0c0 L...............
    0030 0000ffff 3bbbfff8 03ff2ebb 26bb2e77 ....;.......&..w

    Now all 32 bytes are output into .data instead.

    Signed-off-by: Josh Stone
    Cc: Masami Hiramatsu
    Cc: Jakub Jelinek
    Cc: stable@kernel.org
    Signed-off-by: Linus Torvalds

    Josh Stone
     

18 Oct, 2011

1 commit

  • When compiling an i386_defconfig kernel with
    gcc-4.6.1-9.fc15.i686, I noticed a warning about the asm operand
    for test_bit in kprobes' can_boost. I discovered that this
    caused only the first long of twobyte_is_boostable[] to be
    output.

    Jakub filed and fixed gcc PR50571 to correct the warning and
    this output issue. But to solve it for older versions of gcc, we
    can make kprobes' twobyte_is_boostable[] volatile, and it won't
    be optimized out.

    Before:

    CC arch/x86/kernel/kprobes.o
    In file included from include/linux/bitops.h:22:0,
    from include/linux/kernel.h:17,
    from [...]/arch/x86/include/asm/percpu.h:44,
    from [...]/arch/x86/include/asm/current.h:5,
    from [...]/arch/x86/include/asm/processor.h:15,
    from [...]/arch/x86/include/asm/atomic.h:6,
    from include/linux/atomic.h:4,
    from include/linux/mutex.h:18,
    from include/linux/notifier.h:13,
    from include/linux/kprobes.h:34,
    from arch/x86/kernel/kprobes.c:43:
    [...]/arch/x86/include/asm/bitops.h: In function ‘can_boost.part.1’:
    [...]/arch/x86/include/asm/bitops.h:319:2: warning: use of memory input without lvalue in asm operand 1 is deprecated [enabled by default]

    $ objdump -rd arch/x86/kernel/kprobes.o | grep -A1 -w bt
    551: 0f a3 05 00 00 00 00 bt %eax,0x0
    554: R_386_32 .rodata.cst4

    $ objdump -s -j .rodata.cst4 -j .data arch/x86/kernel/kprobes.o

    arch/x86/kernel/kprobes.o: file format elf32-i386

    Contents of section .data:
    0000 48000000 00000000 00000000 00000000 H...............
    Contents of section .rodata.cst4:
    0000 4c030000 L...

    Only a single long of twobyte_is_boostable[] is in the object
    file.

    After, with volatile:

    $ objdump -rd arch/x86/kernel/kprobes.o | grep -A1 -w bt
    551: 0f a3 05 20 00 00 00 bt %eax,0x20
    554: R_386_32 .data

    $ objdump -s -j .rodata.cst4 -j .data arch/x86/kernel/kprobes.o

    arch/x86/kernel/kprobes.o: file format elf32-i386

    Contents of section .data:
    0000 48000000 00000000 00000000 00000000 H...............
    0010 00000000 00000000 00000000 00000000 ................
    0020 4c030000 0f000200 ffff0000 ffcff0c0 L...............
    0030 0000ffff 3bbbfff8 03ff2ebb 26bb2e77 ....;.......&..w

    Now all 32 bytes are output into .data instead.

    Signed-off-by: Josh Stone
    Acked-by: Masami Hiramatsu
    Cc: Srikar Dronamraju
    Cc: Jakub Jelinek
    Link: http://lkml.kernel.org/r/1318899645-4068-1-git-send-email-jistone@redhat.com
    Signed-off-by: Ingo Molnar

    Josh Stone
     

11 May, 2011

1 commit

  • Disable irqs during the optimized callback, so we don't miss any in-irq kprobes.

    The following commands:

    # cd /debug/tracing/
    # echo "p mutex_unlock" >> kprobe_events
    # echo "p _raw_spin_lock" >> kprobe_events
    # echo "p smp_apic_timer_interrupt" >> ./kprobe_events
    # echo 1 > events/enable

    cause the optimized kprobes to be missed. None are missed
    with the fix applied.

    Signed-off-by: Jiri Olsa
    Acked-by: Masami Hiramatsu
    Link: http://lkml.kernel.org/r/20110511110613.GB2390@jolsa.brq.redhat.com
    Signed-off-by: Ingo Molnar

    Jiri Olsa
     

09 Mar, 2011

1 commit

  • You can crash the kernel (with root/admin privileges) using kprobe tracer by running:

    echo "p system_call_after_swapgs" > ./kprobe_events
    echo 1 > ./events/kprobes/enable

    The reason is that at the system_call_after_swapgs label, the
    kernel stack is not set up. If optimized kprobes are enabled,
    the user space stack is being used in this case (see optimized
    kprobe template) and this might result in a crash.

    There are several places like this throughout the entry code
    (entry_$BIT). Since there seems to be no reasonable or maintainable
    way to disable optimization only at the places where the stack is
    not ready, I excluded the whole entry code from kprobe optimization.

    Signed-off-by: Jiri Olsa
    Acked-by: Masami Hiramatsu
    Cc: acme@redhat.com
    Cc: fweisbec@gmail.com
    Cc: ananth@in.ibm.com
    Cc: davem@davemloft.net
    Cc: a.p.zijlstra@chello.nl
    Cc: eric.dumazet@gmail.com
    Cc: 2nddept-manager@sdl.hitachi.co.jp
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Jiri Olsa
     

08 Jan, 2011

1 commit

  • * 'for-2.6.38' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (30 commits)
    gameport: use this_cpu_read instead of lookup
    x86: udelay: Use this_cpu_read to avoid address calculation
    x86: Use this_cpu_inc_return for nmi counter
    x86: Replace uses of current_cpu_data with this_cpu ops
    x86: Use this_cpu_ops to optimize code
    vmstat: Use per cpu atomics to avoid interrupt disable / enable
    irq_work: Use per cpu atomics instead of regular atomics
    cpuops: Use cmpxchg for xchg to avoid lock semantics
    x86: this_cpu_cmpxchg and this_cpu_xchg operations
    percpu: Generic this_cpu_cmpxchg() and this_cpu_xchg support
    percpu,x86: relocate this_cpu_add_return() and friends
    connector: Use this_cpu operations
    xen: Use this_cpu_inc_return
    taskstats: Use this_cpu_ops
    random: Use this_cpu_inc_return
    fs: Use this_cpu_inc_return in buffer.c
    highmem: Use this_cpu_xx_return() operations
    vmstat: Use this_cpu_inc_return for vm statistics
    x86: Support for this_cpu_add, sub, dec, inc_return
    percpu: Generic support for this_cpu_add, sub, dec, inc_return
    ...

    Fixed up conflicts: in arch/x86/kernel/{apic/nmi.c, apic/x2apic_uv_x.c, process.c}
    as per Tejun.

    Linus Torvalds
     

17 Dec, 2010

1 commit


07 Dec, 2010

3 commits

  • Use text_poke_smp_batch() on the unoptimization path to reduce
    the number of stop_machine() invocations. If the number of
    unoptimizing probes is more than MAX_OPTIMIZE_PROBES (=256),
    kprobes unoptimizes the first MAX_OPTIMIZE_PROBES probes and
    kicks the optimizer for the remaining probes.

    Signed-off-by: Masami Hiramatsu
    Cc: Rusty Russell
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    Cc: Jason Baron
    Cc: Mathieu Desnoyers
    Cc: 2nddept-manager@sdl.hitachi.co.jp
    Cc: Peter Zijlstra
    Cc: Steven Rostedt
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     
  • Use text_poke_smp_batch() in the optimization path to reduce
    the number of stop_machine() invocations. If the number of
    optimizing probes is more than MAX_OPTIMIZE_PROBES (=256),
    kprobes optimizes the first MAX_OPTIMIZE_PROBES probes and
    kicks the optimizer for the remaining probes.

    Changes in v5:
    - Use kick_kprobe_optimizer() instead of directly calling
    schedule_delayed_work().
    - Rescheduling optimizer outside of kprobe mutex lock.

    Changes in v2:
    - Allocate code buffer and parameters in arch_init_kprobes()
    instead of using static arrays.
    - Merge previous max optimization limit patch into this patch.
    So, this patch introduces upper limit of optimization at
    once.

    Signed-off-by: Masami Hiramatsu
    Cc: Rusty Russell
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    Cc: Jason Baron
    Cc: Mathieu Desnoyers
    Cc: 2nddept-manager@sdl.hitachi.co.jp
    Cc: Peter Zijlstra
    Cc: Steven Rostedt
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     
  • Unoptimization occurs when a probe is unregistered or disabled,
    and is heavy because it recovers instructions by using
    stop_machine(). This patch delays unoptimization operations and
    unoptimize several probes at once by using
    text_poke_smp_batch(). This can avoid unexpected system slowdown
    coming from stop_machine().

    Changes in v5:
    - Split this patch into several cleanup patches and this patch.
    - Fix some text_mutex lock misses.
    - Use bool instead of int for behavior flags.
    - Add additional comment for (un)optimizing path.

    Changes in v2:
    - Use dynamic allocated buffers and params.

    Signed-off-by: Masami Hiramatsu
    Cc: Rusty Russell
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    Cc: Jason Baron
    Cc: Mathieu Desnoyers
    Cc: 2nddept-manager@sdl.hitachi.co.jp
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

23 Sep, 2010

1 commit


15 Sep, 2010

2 commits

  • Remove __dummy_buf, which is needed only for kallsyms_lookup;
    use kallsyms_lookup_size_offset instead.

    Signed-off-by: Namhyung Kim
    Acked-by: Masami Hiramatsu
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Namhyung Kim
     
  • Make the following (internal) functions static to make sparse
    happier :-)

    * get_optimized_kprobe: only called from static functions
    * kretprobe_table_unlock: the _lock function is static
    * kprobes_optinsn_template_holder: never called, but holds asm code

    Signed-off-by: Namhyung Kim
    Acked-by: Masami Hiramatsu
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Namhyung Kim
     

19 Aug, 2010

1 commit

  • Fix the return address of subsequent kretprobes when multiple
    kretprobes are set on the same function.

    For example:

    # cd /sys/kernel/debug/tracing
    # echo "r:event1 sys_symlink" > kprobe_events
    # echo "r:event2 sys_symlink" >> kprobe_events
    # echo 1 > events/kprobes/enable
    # ln -s /tmp/foo /tmp/bar

    (without this patch)

    # cat trace
    ln-897 [000] 20404.133727: event1: (kretprobe_trampoline+0x0/0x4c
    Reviewed-by: Masami Hiramatsu
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    Cc: Peter Zijlstra
    Cc: YOSHIFUJI Hideaki
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    KUMANO Syuhei
     

22 Jul, 2010

1 commit


19 Jul, 2010

1 commit


29 Jun, 2010

1 commit

  • Fix resume_execution() and is_IF_modifier() to skip x86
    instruction prefixes correctly by using the x86 instruction
    attributes.

    Without this fix, resume_execution() can't handle instructions
    which have non-REX prefixes (REX prefixes are skipped). This
    will cause unexpected kernel panic by hitting bad address when a
    kprobe hits on two-byte ret (e.g. "repz ret" generated for
    Athlon/K8 optimization), because it just checks "repz" and can't
    recognize the "ret" instruction.

    These prefixes can be found easily with the x86 instruction
    attributes. This patch introduces skip_prefixes() and uses it in
    resume_execution() and is_IF_modifier() to skip prefixes.

    Signed-off-by: Masami Hiramatsu
    Cc: Ananth N Mavinakayanahalli
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

18 May, 2010

1 commit

  • …git/tip/linux-2.6-tip

    * 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (311 commits)
    perf tools: Add mode to build without newt support
    perf symbols: symbol inconsistency message should be done only at verbose=1
    perf tui: Add explicit -lslang option
    perf options: Type check all the remaining OPT_ variants
    perf options: Type check OPT_BOOLEAN and fix the offenders
    perf options: Check v type in OPT_U?INTEGER
    perf options: Introduce OPT_UINTEGER
    perf tui: Add workaround for slang < 2.1.4
    perf record: Fix bug mismatch with -c option definition
    perf options: Introduce OPT_U64
    perf tui: Add help window to show key associations
    perf tui: Make <- exit menus too
    perf newt: Add single key shortcuts for zoom into DSO and threads
    perf newt: Exit browser unconditionally when CTRL+C, q or Q is pressed
    perf newt: Fix the 'A'/'a' shortcut for annotate
    perf newt: Make <- exit the ui_browser
    x86, perf: P4 PMU - fix counters management logic
    perf newt: Make <- zoom out filters
    perf report: Report number of events, not samples
    perf hist: Clarify events_stats fields usage
    ...

    Fix up trivial conflicts in kernel/fork.c and tools/perf/builtin-record.c

    Linus Torvalds
     

11 May, 2010

1 commit

  • Fix kprobe/x86 to check for a removed int3 when failing to get the
    kprobe from the hlist. Since there is a time window between checking
    that int3 exists at the probed address and getting the kprobe for
    that address, the following scenario is possible:

    -------
    CPU1                          CPU2
    hit int3
    check int3 exists
                                  remove int3
                                  remove kprobe from hlist
    get kprobe from hlist
    no kprobe -> OOPS!
    -------

    This patch moves the int3 check to the case where no kprobe is found
    at that address, fixing the problem as follows:

    ------
    CPU1                          CPU2
    hit int3
                                  remove int3
                                  remove kprobe from hlist
    get kprobe from hlist
    no kprobe -> check int3 exists
              -> rollback & retry
    ------

    Signed-off-by: Masami Hiramatsu
    Acked-by: Ananth N Mavinakayanahalli
    Cc: systemtap
    Cc: DLE
    Cc: Dave Anderson
    Cc: Peter Zijlstra
    Cc: Mike Galbraith
    Cc: Paul Mackerras
    Cc: Arnaldo Carvalho de Melo
    Cc: Frederic Weisbecker
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

26 Mar, 2010

2 commits

  • Implement ptrace-block-step using TIF_BLOCKSTEP which will set
    DEBUGCTLMSR_BTF when set for a task while preserving any other
    DEBUGCTLMSR bits.

    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Support for the PMU's BTS features has been upstreamed in
    v2.6.32, but we still have the old and disabled ptrace-BTS,
    as Linus noticed it not so long ago.

    It's buggy: TIF_DEBUGCTLMSR is trampling all over that MSR without
    regard for other uses (perf) and doesn't provide the flexibility
    needed for perf either.

    Its users are ptrace-block-step and ptrace-bts, since ptrace-bts
    was never used and ptrace-block-step can be implemented using a
    much simpler approach.

    So axe all 3000 lines of it. That includes the *locked_memory*()
    APIs in mm/mlock.c as well.

    Reported-by: Linus Torvalds
    Signed-off-by: Peter Zijlstra
    Cc: Roland McGrath
    Cc: Oleg Nesterov
    Cc: Markus Metzger
    Cc: Steven Rostedt
    Cc: Andrew Morton
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

26 Feb, 2010

4 commits

  • Introduce x86 arch-specific optimization code, which supports
    both x86-32 and x86-64.

    This code also supports safety checking, which decodes the whole of
    the function in which the probe is inserted, and checks the following
    conditions before optimization:
    - The optimized instructions which will be replaced by a jump instruction
    don't straddle the function boundary.
    - There is no indirect jump instruction, because it might jump into
    the address range which is replaced by the jump operand.
    - There is no jump/loop instruction which jumps into the address range
    which is replaced by the jump operand.
    - Don't optimize a kprobe if it is in a function into which fixup code
    will jump.

    This uses text_poke_multibyte(), which doesn't support modifying
    code in NMI/MCE handlers. However, since kprobes itself doesn't
    support NMI/MCE code probing, this is not a problem.

    Changes in v9:
    - Use *_text_reserved() for checking the probe can be optimized.
    - Verify jump address range is in 2G range when preparing slot.
    - Backup original code when switching optimized buffer, instead of
    preparing buffer, because there can be int3 of other probes in
    preparing phase.
    - Check kprobe is disabled in arch_check_optimized_kprobe().
    - Strictly check indirect jump opcodes (ff /4, ff /5).

    Changes in v6:
    - Split stop_machine-based jump patching code.
    - Update comments and coding style.

    Changes in v5:
    - Introduce stop_machine-based jump replacing.

    Signed-off-by: Masami Hiramatsu
    Cc: systemtap
    Cc: DLE
    Cc: Ananth N Mavinakayanahalli
    Cc: Jim Keniston
    Cc: Srikar Dronamraju
    Cc: Christoph Hellwig
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    Cc: Anders Kaseorg
    Cc: Tim Abbott
    Cc: Andi Kleen
    Cc: Jason Baron
    Cc: Mathieu Desnoyers
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     
  • Introduce SAVE/RESTORE_REGS_STRING macros to clean up the
    kretprobe-trampoline asm code. These macros will also be used
    for emulating interrupts.

    Signed-off-by: Masami Hiramatsu
    Cc: systemtap
    Cc: DLE
    Cc: Ananth N Mavinakayanahalli
    Cc: Jim Keniston
    Cc: Srikar Dronamraju
    Cc: Christoph Hellwig
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    Cc: Anders Kaseorg
    Cc: Tim Abbott
    Cc: Andi Kleen
    Cc: Jason Baron
    Cc: Mathieu Desnoyers
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     
  • Integrate prepare_singlestep() into setup_singlestep() so that
    reentered probes can be boosted, if possible.

    Signed-off-by: Masami Hiramatsu
    Cc: systemtap
    Cc: DLE
    Cc: Ananth N Mavinakayanahalli
    Cc: Jim Keniston
    Cc: Srikar Dronamraju
    Cc: Christoph Hellwig
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    Cc: Anders Kaseorg
    Cc: Tim Abbott
    Cc: Andi Kleen
    Cc: Jason Baron
    Cc: Mathieu Desnoyers
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     
  • Change RELATIVEJUMP_INSTRUCTION macro to RELATIVEJUMP_OPCODE
    since it represents just the opcode byte.

    Signed-off-by: Masami Hiramatsu
    Acked-by: Mathieu Desnoyers
    Cc: systemtap
    Cc: DLE
    Cc: Ananth N Mavinakayanahalli
    Cc: Jim Keniston
    Cc: Srikar Dronamraju
    Cc: Christoph Hellwig
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    Cc: Anders Kaseorg
    Cc: Tim Abbott
    Cc: Andi Kleen
    Cc: Jason Baron
    Cc: Mathieu Desnoyers
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

04 Feb, 2010

2 commits

  • Check whether the address of a new probe is already reserved by
    ftrace or alternatives (on x86) when registering it. If the
    address is reserved, return an error and do not register the probe.

    Signed-off-by: Masami Hiramatsu
    Cc: systemtap
    Cc: DLE
    Cc: Steven Rostedt
    Cc: przemyslaw@pawelczyk.it
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    Cc: Jim Keniston
    Cc: Mathieu Desnoyers
    Cc: Jason Baron
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     
  • Disable the kprobe booster when CONFIG_PREEMPT=y for now, because
    it can't be ensured that all kernel threads preempted on a kprobe's
    boosted slot have run out of the slot, even when using
    freeze_processes().

    The booster can be re-enabled on preemptive kernels once
    synchronize_tasks() or something like it is introduced.

    Signed-off-by: Masami Hiramatsu
    Cc: systemtap
    Cc: DLE
    Cc: Ananth N Mavinakayanahalli
    Cc: Frederic Weisbecker
    Cc: Jim Keniston
    Cc: Mathieu Desnoyers
    Cc: Steven Rostedt
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

08 Dec, 2009

1 commit


06 Dec, 2009

1 commit

  • …el/git/tip/linux-2.6-tip

    * 'x86-cleanups-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    x86: Fix a section mismatch in arch/x86/kernel/setup.c
    x86: Fixup last users of irq_chip->typename
    x86: Remove BKL from apm_32
    x86: Remove BKL from microcode
    x86: use kernel_stack_pointer() in kprobes.c
    x86: use kernel_stack_pointer() in kgdb.c
    x86: use kernel_stack_pointer() in dumpstack.c
    x86: use kernel_stack_pointer() in process_32.c

    Linus Torvalds
     

04 Dec, 2009

1 commit

  • Fix assorted typos all over the place. That is: "success",
    "unknown", "through", "performance", "[re|un]mapping", "access",
    "default", "reasonable", "[con]currently", "temperature",
    "channel", "[un]used", "application", "example", "hierarchy",
    "therefore", "[over|under]flow", "contiguous", "threshold",
    "enough" and others.

    Signed-off-by: André Goddard Rosa
    Signed-off-by: Jiri Kosina

    André Goddard Rosa
     

21 Nov, 2009

1 commit


13 Oct, 2009

1 commit

  • The way to obtain a kernel-mode stack pointer from a struct pt_regs in
    32-bit mode is "subtle": the stack doesn't actually contain the stack
    pointer, but rather the location where it would have been marks the
    actual previous stack frame. For clarity, use kernel_stack_pointer()
    instead of coding this weirdness explicitly.

    Signed-off-by: H. Peter Anvin
    Cc: Ananth N Mavinakayanahalli
    Cc: Anil S Keshavamurthy
    Cc: "David S. Miller"
    Cc: Masami Hiramatsu

    H. Peter Anvin
     

30 Aug, 2009

2 commits

  • Allow reentering a probe in the post_handler of another probe on
    x86-64, because x86-64 already allows reentering int3. In that
    case, the reentered probe just increments kp.nmissed and returns.

    Signed-off-by: Masami Hiramatsu
    Acked-by: Ananth N Mavinakayanahalli
    Cc: Ingo Molnar
    LKML-Reference:
    Signed-off-by: Frederic Weisbecker

    Masami Hiramatsu
     
  • Call BUG() when a probe has been hit in the middle of the kprobe
    processing path, because such probes are currently unrecoverable
    (recovering would cause an infinite loop and a stack overflow).

    The original code seems to assume that, if the hitting probe is the
    same as the current probe, the hit was caused by an int3 that another
    subsystem inserted in the out-of-line single-step buffer. However, in
    that case the int3-hitting address would be in the out-of-line buffer
    and thus different from the first (current) int3 address.
    Thus, I decided to remove the code.

    I also removed arch_disarm_kprobe() because it would involve other
    things in text_poke().

    Signed-off-by: Masami Hiramatsu
    Acked-by: Ananth N Mavinakayanahalli
    Cc: Ingo Molnar
    LKML-Reference:
    Signed-off-by: Frederic Weisbecker

    Masami Hiramatsu
     

27 Aug, 2009

3 commits

  • Kprobes can enter a probing recursion, i.e. a kprobe that loops
    endlessly because one of the core functions used during probing is
    itself probed.

    This patch helps pinpoint the kprobe that raised such a recursion
    by dumping it and raising a BUG instead of a warning (we also disarm
    the kprobe to try to avoid recursion in BUG itself). Having a BUG
    instead of a warning stops the stacktrace in the right place and
    doesn't pollute the logs with hundreds of traces that eventually end
    up in a stack overflow.

    Signed-off-by: Frederic Weisbecker
    Cc: Masami Hiramatsu
    Cc: Ananth N Mavinakayanahalli

    Frederic Weisbecker
     
  • Cleanup fix_riprel() in arch/x86/kernel/kprobes.c by using the new x86
    instruction decoder instead of using comparisons with raw ad hoc numeric
    opcodes.

    Signed-off-by: Masami Hiramatsu
    Cc: Ananth N Mavinakayanahalli
    Cc: Avi Kivity
    Cc: Andi Kleen
    Cc: Christoph Hellwig
    Cc: Frank Ch. Eigler
    Cc: H. Peter Anvin
    Cc: Ingo Molnar
    Cc: Jason Baron
    Cc: Jim Keniston
    Cc: K.Prasad
    Cc: Lai Jiangshan
    Cc: Li Zefan
    Cc: Przemysław Pawełczyk
    Cc: Roland McGrath
    Cc: Sam Ravnborg
    Cc: Srikar Dronamraju
    Cc: Steven Rostedt
    Cc: Tom Zanussi
    Cc: Vegard Nossum
    LKML-Reference:
    Signed-off-by: Frederic Weisbecker

    Masami Hiramatsu
     
  • Ensure the safety of inserting kprobes by checking whether the
    specified address is at the first byte of an instruction on x86.
    This is done by decoding the probed function from its head to the
    probe point.

    Signed-off-by: Masami Hiramatsu
    Acked-by: Ananth N Mavinakayanahalli
    Cc: Avi Kivity
    Cc: Andi Kleen
    Cc: Christoph Hellwig
    Cc: Frank Ch. Eigler
    Cc: H. Peter Anvin
    Cc: Ingo Molnar
    Cc: Jason Baron
    Cc: Jim Keniston
    Cc: K.Prasad
    Cc: Lai Jiangshan
    Cc: Li Zefan
    Cc: Przemysław Pawełczyk
    Cc: Roland McGrath
    Cc: Sam Ravnborg
    Cc: Srikar Dronamraju
    Cc: Steven Rostedt
    Cc: Tom Zanussi
    Cc: Vegard Nossum
    LKML-Reference:
    Signed-off-by: Frederic Weisbecker

    Masami Hiramatsu
     

03 Jun, 2009

1 commit


02 Apr, 2009

1 commit


26 Mar, 2009

1 commit

  • Fix the interrupt emulation code in the kretprobe-booster according
    to the pt_regs update (the es/ds changes and the addition of gs).

    This issue has been reported on systemtap-bugzilla:

    http://sources.redhat.com/bugzilla/show_bug.cgi?id=9965

    | On a -tip kernel on x86_32, kretprobe_example (from samples) triggers the
    | following backtrace when its retprobing a class of functions that cause a
    | copy_from/to_user().
    |
    | BUG: sleeping function called from invalid context at mm/memory.c:3196
    | in_atomic(): 0, irqs_disabled(): 1, pid: 2286, name: cat

    Signed-off-by: Masami Hiramatsu
    Acked-by: Ananth N Mavinakayanahalli
    Tested-by: Bharata B Rao
    Cc: systemtap-ml
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu