06 Dec, 2009

1 commit

  • …git/tip/linux-2.6-tip

    * 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (470 commits)
    x86: Fix comments of register/stack access functions
    perf tools: Replace %m with %a in sscanf
    hw-breakpoints: Keep track of user disabled breakpoints
    tracing/syscalls: Make syscall events print callbacks static
    tracing: Add DEFINE_EVENT(), DEFINE_SINGLE_EVENT() support to docbook
    perf: Don't free perf_mmap_data until work has been done
    perf_event: Fix compile error
    perf tools: Fix _GNU_SOURCE macro related strndup() build error
    trace_syscalls: Remove unused syscall_name_to_nr()
    trace_syscalls: Simplify syscall profile
    trace_syscalls: Remove duplicate init_enter_##sname()
    trace_syscalls: Add syscall_nr field to struct syscall_metadata
    trace_syscalls: Remove enter_id exit_id
    trace_syscalls: Set event_enter_##sname->data to its metadata
    trace_syscalls: Remove unused event_syscall_enter and event_syscall_exit
    perf_event: Initialize data.period in perf_swevent_hrtimer()
    perf probe: Simplify event naming
    perf probe: Add --list option for listing current probe events
    perf probe: Add argv_split() from lib/argv_split.c
    perf probe: Move probe event utility functions to probe-event.c
    ...

    Linus Torvalds
     

03 Nov, 2009

1 commit

  • For as long as kretprobes have existed, we've allocated NR_CPUS
    instances of kretprobe_instance structures. With the default
    value of CONFIG_NR_CPUS increasing on certain architectures, we
    are potentially wasting kernel memory.

    See http://sourceware.org/bugzilla/show_bug.cgi?id=10839#c3 for
    more details.

    Use a saner num_possible_cpus() instead of NR_CPUS for
    allocation.

    Signed-off-by: Ananth N Mavinakayanahalli
    Acked-by: Masami Hiramatsu
    Cc: Jim Keniston
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Ananth N Mavinakayanahalli
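
    A rough sketch of where the sizing change above lands (written from
    memory, not the commit's actual diff): register_kretprobe()
    pre-allocates rp->maxactive kretprobe_instance objects when the caller
    leaves maxactive at zero, and the default now scales with
    num_possible_cpus() instead of NR_CPUS:

        /* Default maxactive now scales with the CPUs that can actually
         * exist, rather than the compile-time NR_CPUS upper bound. */
        if (rp->maxactive <= 0) {
        #ifdef CONFIG_PREEMPT
                rp->maxactive = max_t(unsigned int, 10, 2 * num_possible_cpus());
        #else
                rp->maxactive = num_possible_cpus();
        #endif
        }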
     

17 Oct, 2009

1 commit


02 Oct, 2009

1 commit


24 Sep, 2009

1 commit


23 Sep, 2009

1 commit

  • Make all seq_operations structs const, to help mitigate against
    revectoring user-triggerable function pointers.

    This is derived from the grsecurity patch, although generated from scratch
    because it's simpler than extracting the changes from there.

    Signed-off-by: James Morris
    Acked-by: Serge Hallyn
    Acked-by: Casey Schaufler
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    James Morris
     

17 Sep, 2009

1 commit

  • Prevent re-registration of the same kprobe. This situation, though
    unlikely, needs to be flagged since it can lead to a system crash if
    it's not handled.

    The core change itself is small, but the helper routine needed to be
    moved around a bit; hence the diffstat.

    Signed-off-by: Ananth N Mavinakayanahalli
    Acked-by: Masami Hiramatsu
    Cc: Jim Keniston
    Cc: Andi Kleen
    Cc: Christoph Hellwig
    Cc: Frank Ch. Eigler
    Cc: Frederic Weisbecker
    Cc: H. Peter Anvin
    Cc: Ingo Molnar
    Cc: Jason Baron
    Cc: K.Prasad
    Cc: Lai Jiangshan
    Cc: Li Zefan
    Cc: Peter Zijlstra
    Cc: Srikar Dronamraju
    Cc: Steven Rostedt
    Cc: Tom Zanussi
    LKML-Reference:
    Signed-off-by: Frederic Weisbecker

    Ananth N Mavinakayanahalli
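
    The core of such a guard is a lookup of the kprobe being registered
    before anything is armed. The helper below is an illustrative
    simplification (the name and locking are written from memory, and the
    real check also has to look through aggregated probes), not the
    commit's code:

        /* Simplified sketch: refuse to register a kprobe object that is
         * already present in the kprobe hash table. */
        static int check_kprobe_rereg(struct kprobe *p)
        {
                int ret = 0;

                mutex_lock(&kprobe_mutex);
                if (get_kprobe(p->addr) == p)
                        ret = -EINVAL;  /* same object registered twice */
                mutex_unlock(&kprobe_mutex);

                return ret;
        }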
     

11 Sep, 2009

1 commit

  • Move irq-exit functions to the .kprobes.text section to protect
    against kprobes recursion.

    When I ran a kprobe stress test on x86-32, I found that the symbols
    below cause unrecoverable recursive probing:

    ret_from_exception
    ret_from_intr
    check_userspace
    restore_all
    restore_all_notrace
    restore_nocheck
    irq_return

    I also found some interrupt/exception entry points that cause
    similar problems.

    This patch moves those symbols (including their containing functions)
    to the .kprobes.text section to prevent any kprobe from probing them.

    Signed-off-by: Masami Hiramatsu
    Cc: Frederic Weisbecker
    Cc: Ananth N Mavinakayanahalli
    Cc: Jim Keniston
    Cc: Ingo Molnar
    LKML-Reference:
    Signed-off-by: Frederic Weisbecker

    Masami Hiramatsu
     

30 Aug, 2009

1 commit

  • Since do_debug() calls get_debugreg(), native_get_debugreg() will be
    called during single-stepping. This can cause an int3 infinite loop.

    We can't put it in the .kprobes.text section because it is inlined,
    so we blacklist its name instead.

    Signed-off-by: Masami Hiramatsu
    Acked-by: Ananth N Mavinakayanahalli
    Cc: Ingo Molnar
    LKML-Reference:
    Signed-off-by: Frederic Weisbecker

    Masami Hiramatsu
     

27 Aug, 2009

1 commit

  • Kprobes can enter a probing recursion, i.e. a kprobe loops endlessly
    because one of the core functions it uses during probing is itself
    probed.

    This patch helps pinpoint the kprobe that raised such a recursion
    by dumping it and raising a BUG instead of a warning (we also disarm
    the kprobe to try to avoid recursing inside BUG itself). Having a BUG
    instead of a warning stops the stacktrace in the right place and
    doesn't pollute the logs with hundreds of traces that eventually end
    up in a stack overflow.

    Signed-off-by: Frederic Weisbecker
    Cc: Masami Hiramatsu
    Cc: Ananth N Mavinakayanahalli

    Frederic Weisbecker
     

11 Aug, 2009

1 commit


31 Jul, 2009

1 commit


01 Jul, 2009

2 commits

  • Use a struct list_head instead of a struct hlist_head for managing
    insn_pages, because insn_pages is a plain list, not a hash table.

    Signed-off-by: Masami Hiramatsu
    Acked-by: Ananth N Mavinakayanahalli
    Cc: Ananth N Mavinakayanahalli
    Cc: Jim Keniston
    Cc: Ananth N Mavinakayanahalli
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
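
    For context on the change above: hlist_head is the single-pointer list
    head meant for hash-table buckets, while list_head is the natural
    choice for one linear list such as insn_pages. A minimal sketch of the
    post-change shape (field and helper names here are illustrative, not
    copied from kernel/kprobes.c):

        #include <linux/kprobes.h>
        #include <linux/list.h>

        struct insn_page_sketch {
                struct list_head list;          /* was: struct hlist_node */
                kprobe_opcode_t *insns;         /* page of instruction slots */
        };

        static LIST_HEAD(insn_pages_sketch);    /* was: a struct hlist_head */

        static void walk_insn_pages_sketch(void)
        {
                struct insn_page_sketch *kip;

                list_for_each_entry(kip, &insn_pages_sketch, list) {
                        /* inspect kip->insns ... */
                }
        }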
     
  • Remove the needless kprobe_insn_mutex unlocking during the safety
    check in garbage collection: if someone releases a dirty slot while
    the safety check runs (the check ensures no other CPU is executing
    any dirty slot), the check becomes invalid. So we need to hold the
    mutex for the entire safety check.

    Signed-off-by: Masami Hiramatsu
    Cc: Ananth N Mavinakayanahalli
    Cc: Jim Keniston
    Cc: Ananth N Mavinakayanahalli
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Masami Hiramatsu
     

09 May, 2009

1 commit


07 Apr, 2009

4 commits

  • Add disable_kprobe() and enable_kprobe() to disable/enable kprobes
    temporarily.

    disable_kprobe() asynchronously disables the probe handlers of the
    specified kprobe, so its handlers may still be called for a while
    after it returns. enable_kprobe() re-enables the specified kprobe.

    aggr_pre_handler and aggr_post_handler check for disabled probes.
    aggr_break_handler and aggr_fault_handler, on the other hand, don't
    check, because they are called while the pre or post handlers are
    executing and usually assist with error handling.

    Signed-off-by: Masami Hiramatsu
    Acked-by: Ananth N Mavinakayanahalli
    Cc: Anil S Keshavamurthy
    Cc: David S. Miller
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Masami Hiramatsu
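
    A minimal usage sketch of the new pair from a probing module (the
    probed symbol and messages are arbitrary examples, not from the
    commit):

        #include <linux/kernel.h>
        #include <linux/module.h>
        #include <linux/kprobes.h>

        static int example_pre(struct kprobe *p, struct pt_regs *regs)
        {
                pr_info("hit %s\n", p->symbol_name);
                return 0;
        }

        static struct kprobe kp = {
                .symbol_name = "do_fork",       /* arbitrary example target */
                .pre_handler = example_pre,
        };

        static int __init example_init(void)
        {
                int ret = register_kprobe(&kp);

                if (ret < 0)
                        return ret;

                /* Mute the probe without unregistering it; handlers may
                 * still fire briefly, since disabling is asynchronous. */
                disable_kprobe(&kp);

                /* ... and later turn it back on in place: */
                enable_kprobe(&kp);
                return 0;
        }

        static void __exit example_exit(void)
        {
                unregister_kprobe(&kp);
        }

        module_init(example_init);
        module_exit(example_exit);
        MODULE_LICENSE("GPL");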
     
  • Rename kprobe_enabled to kprobes_all_disarmed and invert its logic,
    to avoid naming confusion with per-probe disabling.

    Signed-off-by: Masami Hiramatsu
    Acked-by: Ananth N Mavinakayanahalli
    Cc: Anil S Keshavamurthy
    Cc: David S. Miller
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Masami Hiramatsu
     
  • Clean up positions of EXPORT_SYMBOL_GPL in kernel/kprobes.c according to
    checkpatch.pl.

    Signed-off-by: Masami Hiramatsu
    Acked-by: Ananth N Mavinakayanahalli
    Cc: Anil S Keshavamurthy
    Cc: David S. Miller
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Masami Hiramatsu
     
  • Currently, kprobes can disable all probes at once, but can't disable
    them individually (not unregister, just disable a kprobe, because
    unregistering has to wait for scheduler synchronization). These
    patches introduce APIs for on-the-fly per-probe disabling and
    re-enabling by disarming/re-arming the probe's breakpoint instruction.

    This patch:

    Change old_p to ap in add_new_kprobe() for readability, copy flags member
    in add_aggr_kprobe(), and simplify the code flow of
    register_aggr_kprobe().

    Signed-off-by: Masami Hiramatsu
    Acked-by: Ananth N Mavinakayanahalli
    Cc: Anil S Keshavamurthy
    Cc: David S. Miller
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Masami Hiramatsu
     

18 Mar, 2009

1 commit

  • Fix locking imbalance in kretprobes:

    =====================================
    [ BUG: bad unlock balance detected! ]
    -------------------------------------
    kthreadd/2 is trying to release lock (&rp->lock) at:
    [] pre_handler_kretprobe+0xea/0xf4
    but there are no more locks to release!

    other info that might help us debug this:
    1 lock held by kthreadd/2:
    #0: (rcu_read_lock){..--}, at: [] __atomic_notifier_call_chain+0x0/0x5a

    stack backtrace:
    Pid: 2, comm: kthreadd Not tainted 2.6.29-rc8 #1
    Call Trace:
    [] ? printk+0xf/0x17
    [] ? pre_handler_kretprobe+0xea/0xf4
    [] print_unlock_inbalance_bug+0xc3/0xce
    [] ? clocksource_read+0x7/0xa
    [] ? getnstimeofday+0x5f/0xf6
    [] ? register_lock_class+0x17/0x293
    [] ? mark_lock+0x1e/0x30b
    [] ? tick_dev_program_event+0x4a/0xbc
    [] ? __slab_alloc+0xa5/0x415
    [] ? pre_handler_kretprobe+0x28/0xf4
    [] ? pre_handler_kretprobe+0xea/0xf4
    [] lock_release_non_nested+0xa4/0x1a5
    [] ? pre_handler_kretprobe+0xea/0xf4
    [] lock_release+0x141/0x166
    [] _spin_unlock_irqrestore+0x19/0x50
    [] pre_handler_kretprobe+0xea/0xf4
    [] kprobe_exceptions_notify+0x1c9/0x43e
    [] notifier_call_chain+0x26/0x48
    [] __atomic_notifier_call_chain+0x37/0x5a
    [] ? __atomic_notifier_call_chain+0x0/0x5a
    [] atomic_notifier_call_chain+0xc/0xe
    [] notify_die+0x2d/0x2f
    [] do_int3+0x1f/0x71
    [] int3+0x2c/0x34
    [] ? do_fork+0x1/0x288
    [] ? kernel_thread+0x71/0x79
    [] ? kthread+0x0/0x60
    [] ? kthread+0x0/0x60
    [] ? kernel_thread_helper+0x0/0x10
    [] kthreadd+0xac/0x148
    [] ? kthreadd+0x0/0x148
    [] kernel_thread_helper+0x7/0x10

    Signed-off-by: Ananth N Mavinakayanahalli
    Tested-by: Bharata B Rao
    Cc: Masami Hiramatsu
    Cc: Jim Keniston
    Cc: Linus Torvalds
    Cc: Andrew Morton
    Cc: [2.6.29.x, 2.6.28.x, 2.6.27.x]
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Ananth N Mavinakayanahalli
     

06 Mar, 2009

1 commit

  • Use the mutual exclusion provided by the text edit lock in the kprobes
    code. This lets other subsystems manipulate kernel code coherently
    alongside kprobes.

    Changelog:

    Move the kernel_text_lock/unlock out of the for loops.
    Use text_mutex directly instead of a function.
    Remove whitespace modifications.

    (Note: kprobes_mutex is always taken outside of text_mutex.)

    Signed-off-by: Mathieu Desnoyers
    Acked-by: Ananth N Mavinakayanahalli
    Acked-by: Masami Hiramatsu
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Mathieu Desnoyers
     

17 Jan, 2009

1 commit

  • Check CONFIG_FREEZER instead of CONFIG_PM, because the kprobe booster
    depends on freeze_processes() and thaw_processes() when CONFIG_PREEMPT=y.

    This fixes a linkage error which occurs when CONFIG_PREEMPT=y, CONFIG_PM=y
    and CONFIG_FREEZER=n.

    Reported-by: Cheng Renquan
    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Rafael J. Wysocki
    Acked-by: Ingo Molnar
    Signed-off-by: Len Brown

    Masami Hiramatsu
     

07 Jan, 2009

6 commits

  • Allow kprobes to probe module __init routines. When __init functions
    are freed, kprobes which probe those functions have their "Gone" flag
    set. These "Gone" probes are disarmed from the code and are never
    re-enabled.

    Signed-off-by: Masami Hiramatsu
    Acked-by: Ananth N Mavinakayanahalli
    Cc: Anil S Keshavamurthy
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Masami Hiramatsu
     
  • Remove the called_from argument from kprobes, which had been used to
    keep a kernel module from taking a refcount on itself. Since we no
    longer keep the module's refcount after registering a kprobe, there
    is no reason to check that.

    This patch also simplifies the registering/unregistering functions,
    because we no longer need __builtin_return_address(0), which was
    passed as called_from.

    [ananth@in.ibm.com: build fix]
    Signed-off-by: Masami Hiramatsu
    Acked-by: Ananth N Mavinakayanahalli
    Cc: Anil S Keshavamurthy
    Signed-off-by: Ananth N Mavinakayanahalli
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Masami Hiramatsu
     
  • Allow kprobes to probe __exit routines. This adds a flags member to
    struct kprobe. When a module is freed (kprobes hooks module_notifier
    to get this event), kprobes which probe functions in that module get
    the "Gone" flag set in the flags member. These "Gone" probes are
    never re-enabled. Users can check the GONE flag through debugfs.

    This also removes mod_refcounted, because a module could never be
    freed while a kprobe held its refcount.

    [akpm@linux-foundation.org: document some locking]
    [mhiramat@redhat.com: bugfix: pass aggr_kprobe to arch_remove_kprobe]
    [mhiramat@redhat.com: bugfix: release old_p's insn_slot before error return]
    Signed-off-by: Masami Hiramatsu
    Acked-by: Ananth N Mavinakayanahalli
    Cc: Anil S Keshavamurthy
    Signed-off-by: Masami Hiramatsu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Masami Hiramatsu
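
    A sketch of the flag mechanism described above; the flag value and
    helper are written from memory and may differ in detail from the
    merged code:

        /* Per-kprobe status bit kept in the new flags member. */
        #define KPROBE_FLAG_GONE        1   /* breakpoint vanished with its module */

        static inline int kprobe_gone(struct kprobe *p)
        {
                return p->flags & KPROBE_FLAG_GONE;
        }

        /* In the module notifier: any probe whose address falls inside the
         * module being freed gets KPROBE_FLAG_GONE set and is disarmed. */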
     
  • Add __kprobes to kprobes-internal functions to protect them from
    being probed by kprobes itself.

    Signed-off-by: Masami Hiramatsu
    Acked-by: Ananth N Mavinakayanahalli
    Cc: Anil S Keshavamurthy
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Masami Hiramatsu
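
    For reference, __kprobes is a section attribute: tagging a function
    with it places the function in .kprobes.text, and the kprobes core
    refuses to plant probes in that section. A minimal sketch (the helper
    is a made-up example):

        #include <linux/kprobes.h>

        /* Lives in .kprobes.text, so it can never itself be probed. */
        static int __kprobes internal_helper_sketch(int x)
        {
                return x + 1;
        }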
     
  • Add kprobe_insn_mutex for protecting kprobe_insn_pages hlist, and remove
    kprobe_mutex from architecture dependent code.

    This allows us to call arch_remove_kprobe() (and free_insn_slot) while
    holding kprobe_mutex.

    Signed-off-by: Masami Hiramatsu
    Acked-by: Ananth N Mavinakayanahalli
    Cc: Anil S Keshavamurthy
    Cc: Russell King
    Cc: "Luck, Tony"
    Cc: Paul Mackerras
    Cc: Benjamin Herrenschmidt
    Cc: Martin Schwidefsky
    Cc: Heiko Carstens
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Masami Hiramatsu
     
  • When register_*probe() is called from kernel-core code (not from a
    module) and the probe targets a kernel module, users can remove the
    probed module, because kprobes doesn't increment the module's
    reference counter. (On the other hand, if a kernel module calls
    register_*probe(), kprobes does increment the refcount of the probed
    module.)

    Currently no kernel-core code calls register_*probe() on a module
    (except the smoke test, and the smoke test doesn't probe a module),
    so there is no real bug. But the logic is wrong (or at least
    inconsistent) and can cause a problem when someone wants to probe a
    module from the kernel core.

    After this patch, even if a register_*probe() call is placed in
    kernel-core code, it increments the reference counter of the probed
    module, which prevents users from removing the module until probing
    stops.

    Signed-off-by: Masami Hiramatsu
    Cc: Lai Jiangshan
    Cc: Ananth N Mavinakayanahalli
    Cc: Hiroshi Shimamoto
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Masami Hiramatsu
     

13 Nov, 2008

2 commits

  • We only need the cacheline padding on SMP kernels. Saves 6k:

       text    data     bss     dec     hex filename
       5713     388    8840   14941    3a5d kernel/kprobes.o
       5713     388    2632    8733    221d kernel/kprobes.o

    Acked-by: Masami Hiramatsu
    Cc: Ananth N Mavinakayanahalli
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • __register_kprobe() can be preempted after checking the probed
    address but before calling module_text_address() or try_module_get(),
    and in this window the module can be unloaded. In that case,
    try_module_get(probed_mod) will access an invalid address, or the
    kprobe will probe an invalid address.

    This patch uses preempt_disable() to protect it and uses
    __module_text_address() and __kernel_text_address().

    Signed-off-by: Lai Jiangshan
    Signed-off-by: Masami Hiramatsu
    Cc: Ananth N Mavinakayanahalli
    Cc: Hiroshi Shimamoto
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Masami Hiramatsu
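
    The shape of the fix, sketched from memory with error handling
    trimmed: the text-address check and the module lookup now sit inside
    one preempt-disabled region, so the module cannot disappear between
    the two steps.

        /* Inside __register_kprobe(), roughly: */
        struct module *probed_mod;

        preempt_disable();
        if (!__kernel_text_address((unsigned long)p->addr) ||
            in_kprobes_functions((unsigned long)p->addr)) {
                preempt_enable();
                return -EINVAL;
        }

        probed_mod = __module_text_address((unsigned long)p->addr);
        if (probed_mod && !try_module_get(probed_mod)) {
                preempt_enable();
                return -EINVAL;
        }
        preempt_enable();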
     

17 Oct, 2008

1 commit


26 Jul, 2008

2 commits

  • I noticed that there's a CONFIG_KPROBES check inside kernel/kprobes.c,
    which is redundant.

    Signed-off-by: Abhishek Sagar
    Acked-by: Masami Hiramatsu
    Cc: Ananth N Mavinakayanahalli
    Cc: Anil S Keshavamurthy
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Abhishek Sagar
     
  • Currently the list of kretprobe instances is stored in the kretprobe
    object (as used_instances, free_instances) and in the kretprobe hash
    table, and one global kretprobe lock serialises access to these
    lists. This allows only one kretprobe handler to execute at a time,
    which hurts system performance, particularly on SMP systems and when
    return probes are set on a lot of functions (like on all system
    calls).

    The solution proposed here uses fine-grained locks, which perform
    better on SMP systems than the present kretprobe implementation.

    Solution:

    1) Instead of having one global lock protecting the kretprobe
    instances present in the kretprobe object and in the kretprobe hash
    table, we will have two locks: one protecting the kretprobe hash
    table and another protecting the kretprobe object.

    2) We hold the kretprobe object's lock while modifying kretprobe
    instances in the kretprobe object, and the per-hash-list lock while
    modifying kretprobe instances present in that hash list. To prevent
    deadlock, we never grab a per-hash-list lock while holding a
    kretprobe lock.

    3) We can remove used_instances from struct kretprobe, as used
    kretprobe instances can be tracked through the kretprobe hash table.

    Time duration for kernel compilation ("make -j 8") on an 8-way ppc64
    system with return probes set on all system calls looks like this:

                 cacheline          non-cacheline      Un-patched
                 aligned patch      aligned patch      kernel
    ===================================================================
    real         9m46.784s          9m54.412s          10m2.450s
    user         40m5.715s          40m7.142s          40m4.273s
    sys          2m57.754s          2m58.583s          3m17.430s
    ===================================================================

    Time duration for kernel compilation ("make -j 8") on the same
    system, when the kernel is not probed:
    =========================
    real 9m26.389s
    user 40m8.775s
    sys 2m7.283s
    =========================

    Signed-off-by: Srinivasa DS
    Signed-off-by: Jim Keniston
    Acked-by: Ananth N Mavinakayanahalli
    Cc: Anil S Keshavamurthy
    Cc: David S. Miller
    Cc: Masami Hiramatsu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Srinivasa D S
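
    To make the locking split concrete, here is a hedged sketch of the
    per-hash-list side; the constants and helper name approximate
    kernel/kprobes.c from memory:

        #include <linux/hash.h>
        #include <linux/sched.h>
        #include <linux/spinlock.h>

        #define KPROBE_HASH_BITS        6
        #define KPROBE_TABLE_SIZE       (1 << KPROBE_HASH_BITS)

        static struct hlist_head kretprobe_inst_table[KPROBE_TABLE_SIZE];
        static spinlock_t kretprobe_table_locks[KPROBE_TABLE_SIZE];
        /* (each lock is spin_lock_init()'d once at kprobes init time) */

        /* Instances are hashed by the probed task, so returns landing in
         * different buckets no longer contend on one global kretprobe lock. */
        static void kretprobe_table_lock_sketch(struct task_struct *tsk,
                                                unsigned long *flags)
        {
                unsigned long hash = hash_ptr(tsk, KPROBE_HASH_BITS);

                spin_lock_irqsave(&kretprobe_table_locks[hash], *flags);
        }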
     

11 Jul, 2008

1 commit


13 Jun, 2008

1 commit


28 Apr, 2008

4 commits

  • Introduce unregister_/register_jprobes() for jprobe batch registration.

    Signed-off-by: Masami Hiramatsu
    Cc: Ananth N Mavinakayanahalli
    Cc: Jim Keniston
    Cc: Prasanna S Panchamukhi
    Cc: Shaohua Li
    Cc: David Miller
    Cc: "Frank Ch. Eigler"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Masami Hiramatsu
     
  • Introduce unregister_/register_kretprobes() for kretprobe batch registration.

    Signed-off-by: Masami Hiramatsu
    Cc: Ananth N Mavinakayanahalli
    Cc: Jim Keniston
    Cc: Prasanna S Panchamukhi
    Cc: Shaohua Li
    Cc: David Miller
    Cc: "Frank Ch. Eigler"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Masami Hiramatsu
     
  • Introduce unregister_/register_kprobes() for kprobe batch
    registration. This can reduce the waiting time for
    synchronize_sched() when a lot of probes have to be unregistered at
    once.

    Signed-off-by: Masami Hiramatsu
    Cc: Ananth N Mavinakayanahalli
    Cc: Jim Keniston
    Cc: Prasanna S Panchamukhi
    Cc: Shaohua Li
    Cc: David Miller
    Cc: "Frank Ch. Eigler"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Masami Hiramatsu
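
    A usage sketch of the batch API introduced above (the probed symbols
    and handler are arbitrary examples):

        #include <linux/kernel.h>
        #include <linux/module.h>
        #include <linux/kprobes.h>

        static int count_hit(struct kprobe *p, struct pt_regs *regs)
        {
                pr_info("hit %s\n", p->symbol_name);
                return 0;
        }

        static struct kprobe kp1 = { .symbol_name = "do_fork",       .pre_handler = count_hit };
        static struct kprobe kp2 = { .symbol_name = "kernel_thread", .pre_handler = count_hit };
        static struct kprobe *probes[] = { &kp1, &kp2 };

        static int __init batch_init(void)
        {
                return register_kprobes(probes, ARRAY_SIZE(probes));
        }

        static void __exit batch_exit(void)
        {
                /* Single call: only one scheduler-synchronization wait for
                 * the whole batch, which is the point of the new API. */
                unregister_kprobes(probes, ARRAY_SIZE(probes));
        }

        module_init(batch_init);
        module_exit(batch_exit);
        MODULE_LICENSE("GPL");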
     
  • Prohibit users from probing preempt_schedule(). One way of
    prohibiting probing of a function is to mark it with __kprobes, but
    this method doesn't work for functions that are already placed in a
    different section, like preempt_schedule() (which belongs to the
    __sched section). So we use a blacklist approach to refuse probing
    of these functions.

    In the blacklist approach we record each blacklisted function's
    starting address and size in the kprobe_blacklist structure, then
    verify the user-specified address against the start and end of each
    blacklisted function. Any attempt to register a probe on a
    blacklisted function is rejected.

    [akpm@linux-foundation.org: build fix]
    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Srinivasa DS
    Signed-off-by: Ananth N Mavinakayanahalli
    Signed-off-by: Jim Keniston
    Cc: Dave Hansen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Srinivasa Ds
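
    A hedged sketch of the blacklist table and check described above; the
    struct layout follows kernel/kprobes.c of this era from memory, while
    the lookup helper's name is illustrative:

        /* Each entry records a symbol that must never be probed; start and
         * size are resolved from kallsyms at init time. */
        struct kprobe_blackpoint {
                const char *name;
                unsigned long start_addr;
                unsigned long range;
        };

        static struct kprobe_blackpoint kprobe_blacklist[] = {
                { "preempt_schedule", },
                { NULL }        /* terminator */
        };

        /* Reject any probe whose address lands inside a blacklisted function. */
        static int in_kprobes_blacklist(unsigned long addr)
        {
                struct kprobe_blackpoint *kb;

                for (kb = kprobe_blacklist; kb->name != NULL; kb++) {
                        if (kb->start_addr &&
                            addr >= kb->start_addr &&
                            addr < kb->start_addr + kb->range)
                                return 1;
                }
                return 0;
        }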
     

05 Mar, 2008

1 commit

  • Fix a bug in register_kretprobe(), which did not check whether
    rp->kp.symbol_name == NULL before calling kprobe_lookup_name().

    For maintainability, this introduces a kprobe_addr() helper function
    which resolves the addr field. It is used by register_kprobe() and
    register_kretprobe().

    Signed-off-by: Masami Hiramatsu
    Cc: Ananth N Mavinakayanahalli
    Cc: Jim Keniston
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Masami Hiramatsu
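
    A simplified sketch of what the helper does (the real function has
    richer error handling): exactly one of symbol_name and addr may be
    supplied, and the result is the resolved address plus offset.

        static kprobe_opcode_t *kprobe_addr_sketch(struct kprobe *p)
        {
                kprobe_opcode_t *addr = p->addr;

                if (p->symbol_name) {
                        if (addr)       /* both given: ambiguous, reject */
                                return NULL;
                        kprobe_lookup_name(p->symbol_name, addr);
                }

                if (!addr)
                        return NULL;

                return (kprobe_opcode_t *)((char *)addr + p->offset);
        }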