31 Oct, 2011

1 commit

  • The changed files were only including linux/module.h for the
    EXPORT_SYMBOL infrastructure, and nothing else. Revector them
    onto the isolated export header for faster compile times.

    Nothing to see here but a whole lot of instances of:

    -#include <linux/module.h>
    +#include <linux/export.h>

    This commit is only changing the kernel dir; next targets
    will probably be mm, fs, the arch dirs, etc.

    Signed-off-by: Paul Gortmaker

    Paul Gortmaker
     

13 Sep, 2011

1 commit


16 Jul, 2011

1 commit

  • Return -ENOENT if the probe point doesn't exist, but still
    return -EINVAL if both kprobe->addr and kprobe->symbol_name
    are specified, or if neither is specified.
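
    The decision described above can be sketched in userspace C. Only
    the -EINVAL/-ENOENT split is from the commit; check_probe_args(),
    resolve_symbol() and the symbol table are illustrative names, not
    the kernel's actual helpers:

```c
#include <stddef.h>
#include <string.h>
#include <errno.h>

/* Illustrative stand-in for a kallsyms lookup. */
static const char *known_symbols[] = { "do_fork", "vfs_read" };

static int resolve_symbol(const char *name)
{
    size_t i;
    for (i = 0; i < sizeof(known_symbols) / sizeof(known_symbols[0]); i++)
        if (strcmp(known_symbols[i], name) == 0)
            return 0;
    return -1;
}

/* Sketch of the argument check: exactly one of addr / symbol_name
 * must be given (-EINVAL otherwise); an unresolvable name means the
 * probe point does not exist (-ENOENT). */
int check_probe_args(void *addr, const char *symbol_name)
{
    if ((addr && symbol_name) || (!addr && !symbol_name))
        return -EINVAL;
    if (symbol_name && resolve_symbol(symbol_name) < 0)
        return -ENOENT;
    return 0;
}
```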

    Acked-by: Ananth N Mavinakayanahalli
    Signed-off-by: Masami Hiramatsu
    Cc: Ananth N Mavinakayanahalli
    Cc: Arnaldo Carvalho de Melo
    Cc: Ingo Molnar
    Cc: Frederic Weisbecker
    Cc: Peter Zijlstra
    Cc: Anil S Keshavamurthy
    Cc: "David S. Miller"
    Link: http://lkml.kernel.org/r/20110627072650.6528.67329.stgit@fedora15
    Signed-off-by: Steven Rostedt

    Masami Hiramatsu
     

08 Jan, 2011

1 commit

  • * 'for-2.6.38' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (30 commits)
    gameport: use this_cpu_read instead of lookup
    x86: udelay: Use this_cpu_read to avoid address calculation
    x86: Use this_cpu_inc_return for nmi counter
    x86: Replace uses of current_cpu_data with this_cpu ops
    x86: Use this_cpu_ops to optimize code
    vmstat: User per cpu atomics to avoid interrupt disable / enable
    irq_work: Use per cpu atomics instead of regular atomics
    cpuops: Use cmpxchg for xchg to avoid lock semantics
    x86: this_cpu_cmpxchg and this_cpu_xchg operations
    percpu: Generic this_cpu_cmpxchg() and this_cpu_xchg support
    percpu,x86: relocate this_cpu_add_return() and friends
    connector: Use this_cpu operations
    xen: Use this_cpu_inc_return
    taskstats: Use this_cpu_ops
    random: Use this_cpu_inc_return
    fs: Use this_cpu_inc_return in buffer.c
    highmem: Use this_cpu_xx_return() operations
    vmstat: Use this_cpu_inc_return for vm statistics
    x86: Support for this_cpu_add, sub, dec, inc_return
    percpu: Generic support for this_cpu_add, sub, dec, inc_return
    ...

    Fixed up conflicts: in arch/x86/kernel/{apic/nmi.c, apic/x2apic_uv_x.c, process.c}
    as per Tejun.
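
    The this_cpu_inc_return pattern these commits adopt can be
    illustrated with a userspace analogy. A __thread variable stands
    in for a real per-cpu variable; this shows only the shape of the
    change (read-modify-write collapsed into one operation), not the
    kernel implementation:

```c
/* Thread-local counter standing in for a per-cpu variable. */
static __thread unsigned long event_count;

/* Old style: three conceptual steps (read, modify, write back). */
unsigned long inc_return_open_coded(void)
{
    unsigned long v = event_count;   /* like this_cpu_read() */
    v = v + 1;
    event_count = v;                 /* like this_cpu_write() */
    return v;
}

/* this_cpu_inc_return style: one combined increment-and-fetch. */
unsigned long inc_return_combined(void)
{
    return ++event_count;
}
```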

    Linus Torvalds
     

17 Dec, 2010

1 commit


07 Dec, 2010

7 commits

  • Use text_poke_smp_batch() on the unoptimization path to reduce
    the number of stop_machine() invocations. If the number of
    probes to unoptimize exceeds MAX_OPTIMIZE_PROBES (=256), kprobes
    unoptimizes the first MAX_OPTIMIZE_PROBES probes and kicks the
    optimizer again for the remaining ones.
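
    The batching policy can be sketched as plain arithmetic. Only
    MAX_OPTIMIZE_PROBES(=256) is from the commit; the helper name is
    illustrative:

```c
#include <stddef.h>

#define MAX_OPTIMIZE_PROBES 256

/* Process at most MAX_OPTIMIZE_PROBES queued probes per pass and
 * return how many remain (the real code hands the remainder back to
 * the optimizer work queue and batches the processed ones under a
 * single text_poke_smp_batch() call). */
size_t process_one_batch(size_t queued)
{
    size_t batch = queued < MAX_OPTIMIZE_PROBES ? queued
                                                : MAX_OPTIMIZE_PROBES;
    return queued - batch;
}
```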

    Signed-off-by: Masami Hiramatsu
    Cc: Rusty Russell
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    Cc: Jason Baron
    Cc: Mathieu Desnoyers
    Cc: 2nddept-manager@sdl.hitachi.co.jp
    Cc: Peter Zijlstra
    Cc: Steven Rostedt
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     
  • Use text_poke_smp_batch() in the optimization path to reduce
    the number of stop_machine() invocations. If the number of
    probes to optimize exceeds MAX_OPTIMIZE_PROBES (=256), kprobes
    optimizes the first MAX_OPTIMIZE_PROBES probes and kicks the
    optimizer again for the remaining ones.

    Changes in v5:
    - Use kick_kprobe_optimizer() instead of directly calling
    schedule_delayed_work().
    - Reschedule the optimizer outside of the kprobe mutex lock.

    Changes in v2:
    - Allocate the code buffer and parameters in arch_init_kprobes()
    instead of using static arrays.
    - Merge the previous max-optimization-limit patch into this
    patch, so this patch introduces an upper limit on how many
    probes are optimized at once.

    Signed-off-by: Masami Hiramatsu
    Cc: Rusty Russell
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    Cc: Jason Baron
    Cc: Mathieu Desnoyers
    Cc: 2nddept-manager@sdl.hitachi.co.jp
    Cc: Peter Zijlstra
    Cc: Steven Rostedt
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     
  • Reuse an unused kprobe (one waiting for unoptimization and with
    no user handler) on the given address instead of returning
    -EBUSY when registering a new kprobe there.
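
    The registration decision can be sketched as follows. The struct
    and field names are illustrative, not the kernel's; only the
    "reuse if waiting for unoptimization and handler-less, else
    -EBUSY" rule is from the commit:

```c
#include <stddef.h>
#include <errno.h>
#include <stdbool.h>

struct probe {
    bool waiting_unoptimize;   /* queued for unoptimization */
    bool has_handler;          /* user handler still attached */
};

/* Decide what to do when registering at an address. */
int register_at(struct probe *existing)
{
    if (!existing)
        return 0;              /* fresh address: just register */
    if (existing->waiting_unoptimize && !existing->has_handler)
        return 0;              /* truly unused: reuse the slot */
    return -EBUSY;             /* genuinely busy */
}
```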

    Signed-off-by: Masami Hiramatsu
    Cc: Rusty Russell
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    Cc: Jason Baron
    Cc: Mathieu Desnoyers
    Cc: 2nddept-manager@sdl.hitachi.co.jp
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     
  • Unoptimization occurs when a probe is unregistered or disabled,
    and is heavy because it restores the original instructions by
    using stop_machine(). This patch delays unoptimization
    operations and unoptimizes several probes at once by using
    text_poke_smp_batch(), which avoids unexpected system slowdown
    caused by stop_machine().

    Changes in v5:
    - Split this patch into several cleanup patches plus this patch.
    - Fix some missed text_mutex locking.
    - Use bool instead of int for behavior flags.
    - Add additional comments for the (un)optimizing paths.

    Changes in v2:
    - Use dynamically allocated buffers and params.

    Signed-off-by: Masami Hiramatsu
    Cc: Rusty Russell
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    Cc: Jason Baron
    Cc: Mathieu Desnoyers
    Cc: 2nddept-manager@sdl.hitachi.co.jp
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     
  • Separate the kprobe optimizing code from the optimizer; this
    will make it easy to introduce unoptimizing code into the
    optimizer.

    Signed-off-by: Masami Hiramatsu
    Cc: Rusty Russell
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    Cc: Jason Baron
    Cc: Mathieu Desnoyers
    Cc: 2nddept-manager@sdl.hitachi.co.jp
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     
  • Merge the kprobe-disabling code into the kprobe-unregistering
    function and add comments for the disabling/unregistering
    process.

    The current unregistering code disables (disarms) kprobes after
    checking the target kprobe's status. This patch changes it to
    disable the kprobe first and change the kprobe's state
    afterwards. This allows the probe-disabling code to be shared
    between disable_kprobe() and unregister_kprobe().

    Signed-off-by: Masami Hiramatsu
    Cc: Rusty Russell
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    Cc: Jason Baron
    Cc: Mathieu Desnoyers
    Cc: 2nddept-manager@sdl.hitachi.co.jp
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     
  • Rename irrelevant uses of "old_p" to more appropriate names.
    Originally, "old_p" just meant "the old kprobe on the given
    address", but the current code uses that name for "just another
    kprobe" or the like. This patch renames those pointers to more
    appropriate names for maintainability.

    Signed-off-by: Masami Hiramatsu
    Cc: Rusty Russell
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    Cc: Jason Baron
    Cc: Mathieu Desnoyers
    Cc: 2nddept-manager@sdl.hitachi.co.jp
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

30 Oct, 2010

1 commit

  • Kprobes and jump label had a lock-ordering race between their
    mutexes that was fixed by reordering the jump label locking.
    But this reordering moved the taking of the jump label mutex
    into a preempt-disabled section.

    This patch does a little fiddling to move the grabbing of the
    jump label mutex out of the preempt-disabled section while
    still keeping the order correct between that mutex and the
    kprobes lock.

    Reported-by: Ingo Molnar
    Acked-by: Masami Hiramatsu
    Cc: Jason Baron
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

28 Oct, 2010

2 commits

  • register_kprobe() downs the 'text_mutex' and then calls
    jump_label_text_reserved(), which downs the 'jump_label_mutex'.
    However, the jump label code takes those mutexes in the reverse
    order.

    Fix by requiring the caller of jump_label_text_reserved() to do
    the jump label locking via the newly added jump_label_lock() and
    jump_label_unlock(). Currently, kprobes is the only user of
    jump_label_text_reserved().
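
    The fixed ordering can be sketched like this. Real mutexes are
    replaced by an order-recording stub so the invariant (jump label
    lock before text_mutex) can be checked; all names besides the two
    lock names are illustrative:

```c
#include <stdbool.h>

enum lock_id { JUMP_LABEL_LOCK = 1, TEXT_MUTEX = 2 };

static enum lock_id order[4];
static int n_held;

static void take(enum lock_id id) { order[n_held++] = id; }
static void drop(void)            { n_held--; }

/* register_kprobe() path after the fix: the caller takes the jump
 * label lock first, then text_mutex, then would run the
 * jump_label_text_reserved() check. Returns whether the recorded
 * acquisition order matches the required one. */
bool register_kprobe_path_ordered(void)
{
    take(JUMP_LABEL_LOCK);
    take(TEXT_MUTEX);
    bool ok = (order[0] == JUMP_LABEL_LOCK && order[1] == TEXT_MUTEX);
    drop();
    drop();
    return ok;
}
```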

    Reported-by: Ingo Molnar
    Acked-by: Masami Hiramatsu
    Signed-off-by: Jason Baron
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Jason Baron
     
  • …/git/tip/linux-2.6-tip

    * 'perf-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (50 commits)
    perf python scripting: Add futex-contention script
    perf python scripting: Fixup cut'n'paste error in sctop script
    perf scripting: Shut up 'perf record' final status
    perf record: Remove newline character from perror() argument
    perf python scripting: Support fedora 11 (audit 1.7.17)
    perf python scripting: Improve the syscalls-by-pid script
    perf python scripting: print the syscall name on sctop
    perf python scripting: Improve the syscalls-counts script
    perf python scripting: Improve the failed-syscalls-by-pid script
    kprobes: Remove redundant text_mutex lock in optimize
    x86/oprofile: Fix uninitialized variable use in debug printk
    tracing: Fix 'faild' -> 'failed' typo
    perf probe: Fix format specified for Dwarf_Off parameter
    perf trace: Fix detection of script extension
    perf trace: Use $PERF_EXEC_PATH in canned report scripts
    perf tools: Document event modifiers
    perf tools: Remove direct slang.h include
    perf_events: Fix for transaction recovery in group_sched_in()
    perf_events: Revert: Fix transaction recovery in group_sched_in()
    perf, x86: Use NUMA aware allocations for PEBS/BTS/DS allocations
    ...

    Linus Torvalds
     

25 Oct, 2010

1 commit

  • Remove text_mutex locking in optimize_all_kprobes(), because
    this function doesn't modify text. It simply queues probes on
    the optimization list for the kprobe_optimizer worker thread.

    Signed-off-by: Masami Hiramatsu
    Cc: Ananth N Mavinakayanahalli
    Cc: Anil S Keshavamurthy
    Cc: David S. Miller
    Cc: Namhyung Kim
    Cc: Jason Baron
    Cc: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

23 Oct, 2010

1 commit

  • * 'llseek' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/bkl:
    vfs: make no_llseek the default
    vfs: don't use BKL in default_llseek
    llseek: automatically add .llseek fop
    libfs: use generic_file_llseek for simple_attr
    mac80211: disallow seeks in minstrel debug code
    lirc: make chardev nonseekable
    viotape: use noop_llseek
    raw: use explicit llseek file operations
    ibmasmfs: use generic_file_llseek
    spufs: use llseek in all file operations
    arm/omap: use generic_file_llseek in iommu_debug
    lkdtm: use generic_file_llseek in debugfs
    net/wireless: use generic_file_llseek in debugfs
    drm: use noop_llseek

    Linus Torvalds
     

15 Oct, 2010

1 commit

  • All file_operations should get a .llseek operation so we can make
    nonseekable_open the default for future file operations without a
    .llseek pointer.

    The three cases that we can automatically detect are no_llseek,
    seq_lseek and default_llseek. For cases where we can
    automatically prove that the file offset is always ignored, we
    use noop_llseek, which maintains the current behavior of not
    returning an error from a seek.

    New drivers should normally not use noop_llseek but instead use no_llseek
    and call nonseekable_open at open time. Existing drivers can be converted
    to do the same when the maintainer knows for certain that no user code
    relies on calling seek on the device file.

    The generated code is often incorrectly indented and right now contains
    comments that clarify for each added line why a specific variant was
    chosen. In the version that gets submitted upstream, the comments will
    be gone and I will manually fix the indentation, because there does not
    seem to be a way to do that using coccinelle.

    Some amount of new code is currently sitting in linux-next that should get
    the same modifications, which I will do at the end of the merge window.

    Many thanks to Julia Lawall for helping me learn to write a semantic
    patch that does all this.

    ===== begin semantic patch =====
    // This adds an llseek= method to all file operations,
    // as a preparation for making no_llseek the default.
    //
    // The rules are
    // - use no_llseek explicitly if we do nonseekable_open
    // - use seq_lseek for sequential files
    // - use default_llseek if we know we access f_pos
    // - use noop_llseek if we know we don't access f_pos,
    // but we still want to allow users to call lseek
    //
    @ open1 exists @
    identifier nested_open;
    @@
    nested_open(...)
    {
    <+...
    nonseekable_open(...)
    ...+>
    }

    @ open exists@
    identifier open_f;
    identifier i, f;
    identifier open1.nested_open;
    @@
    int open_f(struct inode *i, struct file *f)
    {
    <+...
    nested_open(...)
    ...+>
    }

    @ read disable optional_qualifier exists @
    identifier read_f;
    identifier f, p, s, off;
    type ssize_t, size_t, loff_t;
    expression E;
    identifier func;
    @@
    ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
    {
    <+...
    (
    *off = E
    |
    *off += E
    |
    func(..., off, ...)
    |
    E = *off
    )
    ...+>
    }

    @ read_no_fpos disable optional_qualifier exists @
    identifier read_f;
    identifier f, p, s, off;
    type ssize_t, size_t, loff_t;
    @@
    ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
    {
    ... when != off
    }

    @ write @
    identifier write_f;
    identifier f, p, s, off;
    type ssize_t, size_t, loff_t;
    expression E;
    identifier func;
    @@
    ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
    {
    <+...
    (
    *off = E
    |
    *off += E
    |
    func(..., off, ...)
    |
    E = *off
    )
    ...+>
    }

    @ write_no_fpos @
    identifier write_f;
    identifier f, p, s, off;
    type ssize_t, size_t, loff_t;
    @@
    ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
    {
    ... when != off
    }

    @ fops0 @
    identifier fops;
    @@
    struct file_operations fops = {
    ...
    };

    @ has_llseek depends on fops0 @
    identifier fops0.fops;
    identifier llseek_f;
    @@
    struct file_operations fops = {
    ...
    .llseek = llseek_f,
    ...
    };

    @ has_read depends on fops0 @
    identifier fops0.fops;
    identifier read_f;
    @@
    struct file_operations fops = {
    ...
    .read = read_f,
    ...
    };

    @ has_write depends on fops0 @
    identifier fops0.fops;
    identifier write_f;
    @@
    struct file_operations fops = {
    ...
    .write = write_f,
    ...
    };

    @ has_open depends on fops0 @
    identifier fops0.fops;
    identifier open_f;
    @@
    struct file_operations fops = {
    ...
    .open = open_f,
    ...
    };

    // use no_llseek if we call nonseekable_open
    ////////////////////////////////////////////
    @ nonseekable1 depends on !has_llseek && has_open @
    identifier fops0.fops;
    identifier nso ~= "nonseekable_open";
    @@
    struct file_operations fops = {
    ... .open = nso, ...
    +.llseek = no_llseek, /* nonseekable */
    };

    @ nonseekable2 depends on !has_llseek @
    identifier fops0.fops;
    identifier open.open_f;
    @@
    struct file_operations fops = {
    ... .open = open_f, ...
    +.llseek = no_llseek, /* open uses nonseekable */
    };

    // use seq_lseek for sequential files
    /////////////////////////////////////
    @ seq depends on !has_llseek @
    identifier fops0.fops;
    identifier sr ~= "seq_read";
    @@
    struct file_operations fops = {
    ... .read = sr, ...
    +.llseek = seq_lseek, /* we have seq_read */
    };

    // use default_llseek if there is a readdir
    ///////////////////////////////////////////
    @ fops1 depends on !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
    identifier fops0.fops;
    identifier readdir_e;
    @@
    // any other fop is used that changes pos
    struct file_operations fops = {
    ... .readdir = readdir_e, ...
    +.llseek = default_llseek, /* readdir is present */
    };

    // use default_llseek if at least one of read/write touches f_pos
    /////////////////////////////////////////////////////////////////
    @ fops2 depends on !fops1 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
    identifier fops0.fops;
    identifier read.read_f;
    @@
    // read fops use offset
    struct file_operations fops = {
    ... .read = read_f, ...
    +.llseek = default_llseek, /* read accesses f_pos */
    };

    @ fops3 depends on !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
    identifier fops0.fops;
    identifier write.write_f;
    @@
    // write fops use offset
    struct file_operations fops = {
    ... .write = write_f, ...
    + .llseek = default_llseek, /* write accesses f_pos */
    };

    // Use noop_llseek if neither read nor write accesses f_pos
    ///////////////////////////////////////////////////////////

    @ fops4 depends on !fops1 && !fops2 && !fops3 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
    identifier fops0.fops;
    identifier read_no_fpos.read_f;
    identifier write_no_fpos.write_f;
    @@
    // write fops use offset
    struct file_operations fops = {
    ...
    .write = write_f,
    .read = read_f,
    ...
    +.llseek = noop_llseek, /* read and write both use no f_pos */
    };

    @ depends on has_write && !has_read && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
    identifier fops0.fops;
    identifier write_no_fpos.write_f;
    @@
    struct file_operations fops = {
    ... .write = write_f, ...
    +.llseek = noop_llseek, /* write uses no f_pos */
    };

    @ depends on has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
    identifier fops0.fops;
    identifier read_no_fpos.read_f;
    @@
    struct file_operations fops = {
    ... .read = read_f, ...
    +.llseek = noop_llseek, /* read uses no f_pos */
    };

    @ depends on !has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
    identifier fops0.fops;
    @@
    struct file_operations fops = {
    ...
    +.llseek = noop_llseek, /* no read or write fn */
    };
    ===== End semantic patch =====
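
    The behavioral difference between the two "don't seek" choices the
    patch distinguishes can be mimicked in userspace. These functions
    mirror the return conventions of the kernel's no_llseek (reject
    the seek) and noop_llseek (succeed without moving the ignored
    offset); they are simplified stand-ins, not the kernel code:

```c
#include <errno.h>

typedef long loff_t_sim;   /* stand-in for the kernel's loff_t */

/* no_llseek: seeking this file is an error. */
loff_t_sim no_llseek_sim(loff_t_sim pos, loff_t_sim offset)
{
    (void)pos; (void)offset;
    return -ESPIPE;
}

/* noop_llseek: "succeed", but leave the (ignored) position alone. */
loff_t_sim noop_llseek_sim(loff_t_sim pos, loff_t_sim offset)
{
    (void)offset;
    return pos;
}
```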

    Signed-off-by: Arnd Bergmann
    Cc: Julia Lawall
    Cc: Christoph Hellwig

    Arnd Bergmann
     

23 Sep, 2010

2 commits

  • Add a jump_label_text_reserved(void *start, void *end), so that other
    pieces of code that want to modify kernel text, can first verify that
    jump label has not reserved the instruction.
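
    The reservation test is essentially an interval-overlap check
    against the jump label entry table. A minimal sketch, with an
    illustrative table (the real function walks the jump label
    sections):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct reserved_range { uintptr_t start, end; };

/* Illustrative reserved jump label instruction ranges. */
static const struct reserved_range reserved_text[] = {
    { 0x1000, 0x1004 },
    { 0x2000, 0x2008 },
};

/* Does [start, end] intersect any reserved range? */
bool text_reserved(uintptr_t start, uintptr_t end)
{
    size_t i;
    for (i = 0; i < sizeof(reserved_text) / sizeof(reserved_text[0]); i++)
        if (start <= reserved_text[i].end && end >= reserved_text[i].start)
            return true;
    return false;
}
```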

    Acked-by: Masami Hiramatsu
    Signed-off-by: Jason Baron
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Jason Baron
     
  • Base patch to implement 'jump labeling'. Based on a new 'asm goto'
    inline assembly gcc mechanism, we can now branch to labels from an
    'asm goto' statement. This allows us to create a 'no-op' fastpath,
    which can subsequently be patched with a jump to the slowpath
    code. This is useful for code which might be rarely used, but
    which we'd like to be able to call, if needed. Tracepoints are
    the current use case these are being implemented for.
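
    A portable simulation of the idea: the fastpath normally falls
    straight through (the no-op), and "patching" the site makes the
    slowpath run. A real jump label rewrites the instruction itself
    via 'asm goto'; the boolean key here only models the behavior:

```c
#include <stdbool.h>

/* Count slowpath executions over 'calls' invocations of a site
 * guarded by a (simulated) jump label key. */
int simulate_jump_label(bool key_enabled, int calls)
{
    int slowpath_hits = 0;
    for (int i = 0; i < calls; i++) {
        if (key_enabled)        /* "patched": jump to slowpath */
            slowpath_hits++;    /* slowpath, e.g. a tracepoint */
        /* else: the no-op fastpath falls through */
    }
    return slowpath_hits;
}
```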

    Acked-by: David S. Miller
    Signed-off-by: Jason Baron
    LKML-Reference:

    [ cleaned up some formatting ]

    Signed-off-by: Steven Rostedt

    Jason Baron
     

15 Sep, 2010

4 commits

  • This removes the following warnings when built with C=1:

    warning: context imbalance in 'kretprobe_hash_lock' - wrong count at exit
    warning: context imbalance in 'kretprobe_table_lock' - wrong count at exit
    warning: context imbalance in 'kretprobe_hash_unlock' - unexpected unlock
    warning: context imbalance in 'kretprobe_table_unlock' - unexpected unlock

    Signed-off-by: Namhyung Kim
    Acked-by: Masami Hiramatsu
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Namhyung Kim
     
  • Make the following (internal) functions static to make sparse
    happier :-)

    * get_optimized_kprobe: only called from static functions
    * kretprobe_table_unlock: the _lock function is static
    * kprobes_optinsn_template_holder: never called, but holds asm code

    Signed-off-by: Namhyung Kim
    Acked-by: Masami Hiramatsu
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Namhyung Kim
     
  • Verify that a jprobe's entry point is a function entry point,
    using kallsyms' offset value.

    Signed-off-by: Namhyung Kim
    Acked-by: Masami Hiramatsu
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Namhyung Kim
     
  • Remove the call to kernel_text_address() in register_jprobes(),
    because register_kprobe(), which is called right after, performs
    the same check.

    Signed-off-by: Namhyung Kim
    Acked-by: Masami Hiramatsu
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Namhyung Kim
     

09 May, 2010

1 commit


11 Mar, 2010

1 commit

  • From : Ananth N Mavinakayanahalli

    When freeing the instruction slot, the arithmetic to calculate
    the index of the slot in the page needs to account for the total
    size of the instruction on the various architectures.

    Calculate the index correctly when freeing the out-of-line
    execution slot.

    Reported-by: Sachin Sant
    Reported-by: Heiko Carstens
    Signed-off-by: Ananth N Mavinakayanahalli
    Signed-off-by: Masami Hiramatsu
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

26 Feb, 2010

3 commits

  • Add /proc/sys/debug/kprobes-optimization sysctl which enables
    and disables kprobes jump optimization on the fly for debugging.

    Changes in v7:
    - Remove ctl_name = CTL_UNNUMBERED for upstream compatibility.

    Changes in v6:
    - Update comments and coding style.

    Signed-off-by: Masami Hiramatsu
    Cc: systemtap
    Cc: DLE
    Cc: Ananth N Mavinakayanahalli
    Cc: Jim Keniston
    Cc: Srikar Dronamraju
    Cc: Christoph Hellwig
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    Cc: Anders Kaseorg
    Cc: Tim Abbott
    Cc: Andi Kleen
    Cc: Jason Baron
    Cc: Mathieu Desnoyers
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     
  • Introduce the arch-independent parts of kprobes jump
    optimization. Kprobes uses a breakpoint instruction to interrupt
    execution flow; on some architectures, it can be replaced by a
    jump instruction plus interruption-emulation code. This improves
    kprobes' performance drastically.

    To enable this feature, set CONFIG_OPTPROBES=y (default y if the
    arch supports OPTPROBE).

    Changes in v9:
    - Fix a bug that optimized a probe while enabling it.
    - Check whether nearby probes can be optimized/unoptimized when
    disarming/arming kprobes, instead of when registering/
    unregistering. This will help kprobe-tracer because most of its
    probes are usually disabled.

    Changes in v6:
    - Cleanup coding style for readability.
    - Add comments around get/put_online_cpus().

    Changes in v5:
    - Use get_online_cpus()/put_online_cpus() for avoiding text_mutex
    deadlock.

    Signed-off-by: Masami Hiramatsu
    Cc: systemtap
    Cc: DLE
    Cc: Ananth N Mavinakayanahalli
    Cc: Jim Keniston
    Cc: Srikar Dronamraju
    Cc: Christoph Hellwig
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    Cc: Anders Kaseorg
    Cc: Tim Abbott
    Cc: Andi Kleen
    Cc: Jason Baron
    Cc: Mathieu Desnoyers
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     
  • Make the insn_slot framework support slots of various sizes.
    The current insn_slot supports only one size of instruction
    buffer slot. However, kprobes jump optimization needs larger
    buffers.
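
    The generalization amounts to parameterizing the slot cache by
    slot size, so a page holds however many slots of that size fit.
    A minimal sketch; the struct and helper names are illustrative
    (the real code also tracks per-page bitmaps of used slots):

```c
#include <stddef.h>

#define PAGE_SIZE_SIM 4096   /* stand-in for PAGE_SIZE */

struct insn_cache {
    size_t slot_size;        /* bytes per slot for this cache */
};

/* How many slots of this cache's size fit in one page. */
size_t slots_per_page(const struct insn_cache *c)
{
    return PAGE_SIZE_SIM / c->slot_size;
}

/* Byte offset of slot 'idx' within its page. */
size_t slot_offset(const struct insn_cache *c, size_t idx)
{
    return idx * c->slot_size;
}
```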

    Signed-off-by: Masami Hiramatsu
    Cc: systemtap
    Cc: DLE
    Cc: Ananth N Mavinakayanahalli
    Cc: Jim Keniston
    Cc: Srikar Dronamraju
    Cc: Christoph Hellwig
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    Cc: Anders Kaseorg
    Cc: Tim Abbott
    Cc: Andi Kleen
    Cc: Jason Baron
    Cc: Mathieu Desnoyers
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    LKML-Reference:
    Signed-off-by: Ingo Molnar
    Cc: Ananth N Mavinakayanahalli
    Cc: Jim Keniston
    Cc: Srikar Dronamraju
    Cc: Christoph Hellwig
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    Cc: H. Peter Anvin
    Cc: Anders Kaseorg
    Cc: Tim Abbott
    Cc: Andi Kleen
    Cc: Jason Baron
    Cc: Mathieu Desnoyers

    Masami Hiramatsu
     

05 Feb, 2010

1 commit

  • Since the mcount function can be called from everywhere, it
    should be blacklisted. Moreover, "mcount" is a special symbol
    name, so it is better to put it in the generic blacklist.

    Signed-off-by: Masami Hiramatsu
    Cc: systemtap
    Cc: DLE
    Cc: Ananth N Mavinakayanahalli
    Cc: Steven Rostedt
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

04 Feb, 2010

2 commits

  • Check whether the address of a new probe is already reserved by
    ftrace or alternatives (on x86) when registering the new probe.
    If it is reserved, return an error and do not register the
    probe.

    Signed-off-by: Masami Hiramatsu
    Cc: systemtap
    Cc: DLE
    Cc: Steven Rostedt
    Cc: przemyslaw@pawelczyk.it
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    Cc: Jim Keniston
    Cc: Mathieu Desnoyers
    Cc: Jason Baron
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     
  • Disable the kprobe booster when CONFIG_PREEMPT=y for now,
    because it can't be ensured that all kernel threads preempted on
    a kprobe's boosted slot have run off the slot, even using
    freeze_processes().

    The booster on preemptive kernels can be resumed if
    synchronize_tasks() or something like that is introduced.

    Signed-off-by: Masami Hiramatsu
    Cc: systemtap
    Cc: DLE
    Cc: Ananth N Mavinakayanahalli
    Cc: Frederic Weisbecker
    Cc: Jim Keniston
    Cc: Mathieu Desnoyers
    Cc: Steven Rostedt
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

28 Dec, 2009

1 commit

  • Every time I see this:

    kernel/kprobes.c: In function 'register_kretprobe':
    kernel/kprobes.c:1038: warning: comparison of distinct pointer types lacks a cast

    I'm wondering if something changed in common code and we need to
    do something for s390. Apparently that's not the case.
    Let's get rid of this annoying warning.
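
    The warning's mechanism, reproduced in userspace: kernel-style
    min()/max() compare pointers to both arguments purely as a
    compile-time type check, which mixed types trip; the _t variants
    cast both sides to one named type instead. A hedged sketch of the
    two macro styles (simplified from the kernel's versions):

```c
/* Kernel-style checked max(): the (void)(&_x == &_y) comparison
 * exists only so the compiler warns when x and y differ in type. */
#define max_checked(x, y) ({            \
    __typeof__(x) _x = (x);             \
    __typeof__(y) _y = (y);             \
    (void)(&_x == &_y);                 \
    _x > _y ? _x : _y; })

/* max_t-style: cast both sides to an explicit type, so mixed-type
 * callers compile cleanly. */
#define max_t_sim(type, x, y) \
    ((type)(x) > (type)(y) ? (type)(x) : (type)(y))
```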

    Signed-off-by: Heiko Carstens
    Acked-by: Ananth N Mavinakayanahalli
    Cc: Masami Hiramatsu
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Heiko Carstens
     

06 Dec, 2009

1 commit

  • …git/tip/linux-2.6-tip

    * 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (470 commits)
    x86: Fix comments of register/stack access functions
    perf tools: Replace %m with %a in sscanf
    hw-breakpoints: Keep track of user disabled breakpoints
    tracing/syscalls: Make syscall events print callbacks static
    tracing: Add DEFINE_EVENT(), DEFINE_SINGLE_EVENT() support to docbook
    perf: Don't free perf_mmap_data until work has been done
    perf_event: Fix compile error
    perf tools: Fix _GNU_SOURCE macro related strndup() build error
    trace_syscalls: Remove unused syscall_name_to_nr()
    trace_syscalls: Simplify syscall profile
    trace_syscalls: Remove duplicate init_enter_##sname()
    trace_syscalls: Add syscall_nr field to struct syscall_metadata
    trace_syscalls: Remove enter_id exit_id
    trace_syscalls: Set event_enter_##sname->data to its metadata
    trace_syscalls: Remove unused event_syscall_enter and event_syscall_exit
    perf_event: Initialize data.period in perf_swevent_hrtimer()
    perf probe: Simplify event naming
    perf probe: Add --list option for listing current probe events
    perf probe: Add argv_split() from lib/argv_split.c
    perf probe: Move probe event utility functions to probe-event.c
    ...

    Linus Torvalds
     

03 Nov, 2009

1 commit

  • For as long as kretprobes have existed, we've allocated NR_CPUS
    instances of kretprobe_instance structures. With the default
    value of CONFIG_NR_CPUS increasing on certain architectures, we
    are potentially wasting kernel memory.

    See http://sourceware.org/bugzilla/show_bug.cgi?id=10839#c3 for
    more details.

    Use a saner num_possible_cpus() instead of NR_CPUS for
    allocation.
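
    The arithmetic behind the fix: sizing the preallocation by the
    compile-time NR_CPUS ceiling wastes one instance per CPU that can
    never exist. A sketch with illustrative numbers (the structure
    size and CPU counts below are examples, not measurements):

```c
#include <stddef.h>

/* Memory saved by allocating num_possible_cpus() instances instead
 * of NR_CPUS instances of a per-CPU structure. */
size_t bytes_saved(size_t nr_cpus_config, size_t possible_cpus,
                   size_t sizeof_instance)
{
    return (nr_cpus_config - possible_cpus) * sizeof_instance;
}
```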

    Signed-off-by: Ananth N Mavinakayanahalli
    Acked-by: Masami Hiramatsu
    Cc: Jim Keniston
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Ananth N Mavinakayanahalli
     

17 Oct, 2009

1 commit


02 Oct, 2009

1 commit


24 Sep, 2009

1 commit


23 Sep, 2009

1 commit

  • Make all seq_operations structs const, to help mitigate against
    revectoring user-triggerable function pointers.

    This is derived from the grsecurity patch, although generated from scratch
    because it's simpler than extracting the changes from there.

    Signed-off-by: James Morris
    Acked-by: Serge Hallyn
    Acked-by: Casey Schaufler
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    James Morris
     

17 Sep, 2009

1 commit

  • Prevent re-registration of the same kprobe. This situation, though
    unlikely, needs to be flagged since it can lead to a system crash if
    it's not handled.

    The core change itself is small, but the helper routine needed to be
    moved around a bit; hence the diffstat.

    Signed-off-by: Ananth N Mavinakayanahalli
    Acked-by: Masami Hiramatsu
    Cc: Jim Keniston
    Cc: Andi Kleen
    Cc: Christoph Hellwig
    Cc: Frank Ch. Eigler
    Cc: Frederic Weisbecker
    Cc: H. Peter Anvin
    Cc: Ingo Molnar
    Cc: Jason Baron
    Cc: K.Prasad
    Cc: Lai Jiangshan
    Cc: Li Zefan
    Cc: Peter Zijlstra
    Cc: Srikar Dronamraju
    Cc: Steven Rostedt
    Cc: Tom Zanussi
    LKML-Reference:
    Signed-off-by: Frederic Weisbecker

    Ananth N Mavinakayanahalli