08 Sep, 2005

1 commit

  • This patch fixes a race condition in which the system used to hang or
    sometimes crash within minutes when kprobes were inserted on an ISR routine
    and a task routine.

    The fix has been stress tested on i386, ia64, ppc64 and x86_64. To
    reproduce the problem, insert kprobes on the schedule() and do_IRQ()
    functions and you should see a hang or a system crash.
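
    As a hedged reproducer sketch, one would load a small module that registers
    empty-handler kprobes on both functions (the symbol_name convenience field
    used here postdates this commit; kernels of this vintage filled in the
    probe address explicitly):

        #include <linux/module.h>
        #include <linux/kprobes.h>

        /* Empty pre-handler: the point is only to exercise the probe path
         * in both interrupt and task context, not to do any work. */
        static int noop_pre(struct kprobe *p, struct pt_regs *regs)
        {
                return 0;
        }

        static struct kprobe kp_sched = {
                .symbol_name = "schedule",
                .pre_handler = noop_pre,
        };

        static struct kprobe kp_irq = {
                .symbol_name = "do_IRQ",
                .pre_handler = noop_pre,
        };

        static int __init race_repro_init(void)
        {
                int ret = register_kprobe(&kp_sched);

                if (ret < 0)
                        return ret;
                ret = register_kprobe(&kp_irq);
                if (ret < 0)
                        unregister_kprobe(&kp_sched);
                return ret;
        }

        static void __exit race_repro_exit(void)
        {
                unregister_kprobe(&kp_irq);
                unregister_kprobe(&kp_sched);
        }

        module_init(race_repro_init);
        module_exit(race_repro_exit);
        MODULE_LICENSE("GPL");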

    Signed-off-by: Anil S Keshavamurthy
    Signed-off-by: Ananth N Mavinakayanahalli
    Acked-by: Prasanna S Panchamukhi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Keshavamurthy Anil S
     

28 Jun, 2005

2 commits

  • The following patch implements function return probes for ia64 using
    the revised design. With this new design we no longer need some of the
    odd hacks previously required in the last ia64 return probe port that
    I sent out for comments.

    Note that this new implementation still does not resolve the problem noted
    by Keith Owens where backtrace data is lost after a return probe is hit.

    Changes include:
    * Addition of kretprobe_trampoline to act as a dummy function for
    instrumented functions to return to, and for the return probe
    infrastructure to place a kprobe on, gaining control so that the return
    probe handler can be called, and so that the instruction pointer can be
    moved back to the original return address.
    * Addition of arch_init(), allowing a kprobe to be registered on
    kretprobe_trampoline.
    * Addition of trampoline_probe_handler(), which is used as the pre_handler
    for the kprobe inserted on kretprobe_trampoline. This is the function
    that handles the details of calling the return probe handler function
    and returning control to the original return address.
    * Addition of arch_prepare_kretprobe(), which is set up as the pre_handler
    for a kprobe registered at the beginning of the target function by
    kernel/kprobes.c, so that a return probe instance can be set up when
    a caller enters the target function. (A return probe instance contains
    all the information trampoline_probe_handler needs to do its job.)
    * Hooks added to the exit path of a task so that we can clean up any
    left-over return probe instances (i.e. if a task dies while inside a
    targeted function, the return probe instance that was reserved at the
    beginning of the function is never consumed because the function never
    returns, so the instance needs to be marked as unused). A minimal usage
    sketch of the resulting interface follows this list.
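
    That usage sketch, assuming the generic kretprobe API described above (the
    symbol_name convenience field and the target function named here are
    illustrative additions, not part of this patch):

        #include <linux/module.h>
        #include <linux/kprobes.h>

        /* Runs when the probed function returns, i.e. after
         * trampoline_probe_handler() has regained control. */
        static int ret_handler(struct kretprobe_instance *ri,
                               struct pt_regs *regs)
        {
                printk(KERN_INFO "probed function returned\n");
                return 0;
        }

        static struct kretprobe rp = {
                .handler        = ret_handler,
                .maxactive      = 20,           /* return probe instances to reserve */
                .kp.symbol_name = "do_fork",    /* hypothetical target */
        };

        static int __init rp_init(void)
        {
                return register_kretprobe(&rp);
        }

        static void __exit rp_exit(void)
        {
                unregister_kretprobe(&rp);
        }

        module_init(rp_init);
        module_exit(rp_exit);
        MODULE_LICENSE("GPL");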

    Signed-off-by: Rusty Lynch
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rusty Lynch
     
  • Now that PPC64 has no-execute support, here is a second try to fix the
    single step out of line during kprobe execution. Kprobes on x86_64 already
    solved this problem by allocating an executable page and using it as the
    scratch area for stepping out of line. Reuse that.
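
    Roughly, the shared approach means arch_prepare_kprobe() copies the probed
    instruction into a slot taken from the executable insn-slot pool instead of
    into an ordinary data buffer; a hedged sketch, with field names modelled on
    the x86_64 code of this era rather than taken from this patch:

        #include <linux/kprobes.h>
        #include <linux/string.h>
        #include <linux/errno.h>

        /* Sketch only: the copy goes to an executable page so the
         * out-of-line single step cannot fault on a no-execute data page. */
        int arch_prepare_kprobe(struct kprobe *p)
        {
                p->ainsn.insn = get_insn_slot();        /* shared executable pool */
                if (!p->ainsn.insn)
                        return -ENOMEM;
                memcpy(p->ainsn.insn, p->addr,
                       MAX_INSN_SIZE * sizeof(kprobe_opcode_t));
                return 0;
        }

        void arch_remove_kprobe(struct kprobe *p)
        {
                free_insn_slot(p->ainsn.insn);          /* return slot to the pool */
        }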

    Signed-off-by: Ananth N Mavinakayanahalli
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ananth N Mavinakayanahalli
     

24 Jun, 2005

5 commits

  • When patching the original instruction with the break instruction, the
    current Kprobes code tries to retain the original qualifying predicate
    (qp). However, cmp.crel.ctype with ctype == unc is a special instruction
    that must always be executed irrespective of the qp. Hence, if the
    instruction we are patching is of this type, we should not copy the
    original qp to the break instruction, because we always want the break
    fault to happen so that we can emulate the instruction.
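
    In effect, the qp encoded into the break collapses to 0 (always execute)
    for the cmp.unc forms. A hedged sketch of that selection, where the helper
    name is hypothetical and only the low 6 bits holding the qp reflect real
    IA-64 encoding:

        /* Sketch: choose the qualifying predicate to encode into the
         * break instruction; the qp sits in bits 0..5 of a 41-bit slot. */
        static unsigned long break_qp(unsigned long kprobe_inst)
        {
                unsigned long qp = kprobe_inst & 0x3f;

                if (is_cmp_ctype_unc_inst(kprobe_inst)) /* hypothetical helper */
                        qp = 0; /* cmp.unc always executes: break must too */

                return qp;
        }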

    This patch is based on the feedback given by David Mosberger

    Signed-off-by: Anil S Keshavamurthy
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Anil S Keshavamurthy
     
  • A cleanup of the ia64 kprobes implementation such that all of the bundle
    manipulation logic is concentrated in arch_prepare_kprobe().

    With the current design for kprobes, the arch specific code only has a
    chance to return failure inside the arch_prepare_kprobe() function.

    This patch moves all of the work that was happening in arch_copy_kprobe()
    and most of the work that was happening in arch_arm_kprobe() into
    arch_prepare_kprobe(). By doing this we can add further robustness checks
    in arch_prepare_kprobe() and refuse to insert kprobes that will cause
    problems.
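
    The shape of the consolidated entry point, sketched with hypothetical
    helper names (only the arch_prepare_kprobe() signature and its ability to
    return an error are taken from the description above):

        #include <linux/kprobes.h>
        #include <linux/errno.h>

        /* Sketch: with the copy and arm work folded in, a validity check
         * can fail the registration before anything is patched. */
        int arch_prepare_kprobe(struct kprobe *p)
        {
                if (!bundle_is_probeable(p->addr))      /* hypothetical check */
                        return -EINVAL;                 /* refuse problem kprobes */

                prepare_break_bundle(p);                /* hypothetical: all bundle
                                                         * manipulation lives here */
                return 0;
        }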

    Signed-off-by: Rusty Lynch
    Signed-off-by: Anil S Keshavamurthy
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rusty Lynch
     
  • This patch is required to support kprobes on branch/call instructions.

    Signed-off-by: Anil S Keshavamurthy
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Anil S Keshavamurthy
     
  • This patch adds IA64 architecture-specific JProbes support on top of Kprobes.

    Signed-off-by: Anil S Keshavamurthy
    Signed-off-by: Rusty Lynch
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Anil S Keshavamurthy
     
  • This patch adds the IA64 architecture-specific handling of Kprobes.

    Signed-off-by: Anil S Keshavamurthy
    Signed-off-by: Rusty Lynch
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Anil S Keshavamurthy