21 Dec, 2011

2 commits

  • Add a stacktrace_filter= option to the kernel command line that lets
    the user pick specific functions to check the stack on.

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • The stack tracer looks at every function and checks whether the
    current stack is bigger than the last recorded maximum stack size.
    When a new maximum is found, it saves that stack trace.

    Currently the stack tracer is limited by the global_ops of
    the function tracer. As the stack tracer has nothing to do with
    the ftrace function tracer, except that it uses it as its internal
    engine, the stack tracer should have its own list.

    A new file is added to the tracing debugfs directory called:

    stack_trace_filter

    that can be used to select which functions you want to check the stack
    on.
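
    For illustration, the write path of such a filter file amounts to
    feeding the user's function list into the stack tracer's own
    ftrace_ops via the public ftrace_set_filter() helper. A minimal
    sketch, assuming a dedicated stack_trace_ops; stack_filter_write and
    the buffer handling are illustrative, not the actual patch:

    #include <linux/fs.h>
    #include <linux/ftrace.h>
    #include <linux/uaccess.h>

    static struct ftrace_ops stack_trace_ops;  /* stack tracer's own ops */

    /* write "func1 func2 ..." to limit stack checks to those functions */
    static ssize_t stack_filter_write(struct file *filp,
                                      const char __user *ubuf,
                                      size_t count, loff_t *ppos)
    {
            char buf[128];
            int ret;

            if (count >= sizeof(buf))
                    return -EINVAL;
            if (copy_from_user(buf, ubuf, count))
                    return -EFAULT;
            buf[count] = '\0';

            /* updates only this ops' filter hash; reset=0 appends */
            ret = ftrace_set_filter(&stack_trace_ops, buf, count, 0);
            return ret ? ret : count;
    }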

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

15 Jun, 2011

1 commit

    This patch replaces the code for getting an unsigned long from a
    userspace buffer with a simple call to kstrtoul_from_user.
    This makes it easier to read and less error prone.
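
    kstrtoul_from_user() collapses the usual copy-then-parse dance into
    one call. A sketch of the resulting pattern (the handler name is
    made up; this shows the general shape, not the exact diff):

    #include <linux/fs.h>
    #include <linux/kernel.h>

    static ssize_t example_write(struct file *filp, const char __user *ubuf,
                                 size_t count, loff_t *ppos)
    {
            unsigned long val;
            int ret;

            /* copies from user space, NUL-terminates, parses in base 10 */
            ret = kstrtoul_from_user(ubuf, count, 10, &val);
            if (ret)
                    return ret;

            /* ... use val ... */
            return count;
    }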

    Signed-off-by: Peter Huewe
    Link: http://lkml.kernel.org/r/1307476707-14762-1-git-send-email-peterhuewe@gmx.de
    Signed-off-by: Steven Rostedt

    Peter Huewe
     

19 May, 2011

1 commit

  • ftrace_ops that are registered to trace functions can now be
    agnostic of each other with respect to the functions they trace.
    Each ops has its own hash of the functions it wants to trace and a
    hash of the functions it does not want to trace. An empty hash for
    the functions to trace denotes that all functions should be traced
    except those in the notrace hash.
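
    In API terms, each registered ftrace_ops now carries its own filter
    and notrace sets. A hedged sketch of how a caller might populate
    them (my_ops/my_callback and the function patterns are arbitrary;
    the callback signature shown is the modern four-argument form, while
    kernels of this era used a two-argument one):

    #include <linux/ftrace.h>

    static void my_callback(unsigned long ip, unsigned long parent_ip,
                            struct ftrace_ops *ops, struct ftrace_regs *fregs)
    {
            /* invoked only for functions selected by this ops' hashes */
    }

    static struct ftrace_ops my_ops = {
            .func = my_callback,
    };

    static int __init my_tracer_init(void)
    {
            /* an empty filter hash would mean "all functions"; narrow it */
            ftrace_set_filter(&my_ops, "vfs_*", strlen("vfs_*"), 1);
            /* ...minus anything in this ops' private notrace hash */
            ftrace_set_notrace(&my_ops, "vfs_fstat", strlen("vfs_fstat"), 1);
            return register_ftrace_function(&my_ops);
    }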

    Cc: Paul E. McKenney
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

15 Oct, 2010

1 commit

  • All file_operations should get a .llseek operation so we can make
    nonseekable_open the default for future file operations without a
    .llseek pointer.

    The three cases that we can automatically detect are no_llseek,
    seq_lseek and default_llseek. For cases where we can automatically
    prove that the file offset is always ignored, we use noop_llseek,
    which maintains the current behavior of not returning an error from
    a seek.

    New drivers should normally not use noop_llseek but instead use no_llseek
    and call nonseekable_open at open time. Existing drivers can be converted
    to do the same when the maintainer knows for certain that no user code
    relies on calling seek on the device file.

    The generated code is often incorrectly indented and right now contains
    comments that clarify for each added line why a specific variant was
    chosen. In the version that gets submitted upstream, the comments will
    be gone and I will manually fix the indentation, because there does not
    seem to be a way to do that using coccinelle.

    Some amount of new code is currently sitting in linux-next that should get
    the same modifications, which I will do at the end of the merge window.

    Many thanks to Julia Lawall for helping me learn to write a semantic
    patch that does all this.

    ===== begin semantic patch =====
    // This adds an llseek= method to all file operations,
    // as a preparation for making no_llseek the default.
    //
    // The rules are
    // - use no_llseek explicitly if we do nonseekable_open
    // - use seq_lseek for sequential files
    // - use default_llseek if we know we access f_pos
    // - use noop_llseek if we know we don't access f_pos,
    // but we still want to allow users to call lseek
    //
    @ open1 exists @
    identifier nested_open;
    @@
    nested_open(...)
    {
    <+... nonseekable_open(...) ...+>
    }

    @ open exists@
    identifier open_f;
    identifier i, f;
    identifier open1.nested_open;
    @@
    int open_f(struct inode *i, struct file *f)
    {
    <+... nested_open(...) ...+>
    }

    @ read disable optional_qualifier exists @
    identifier read_f;
    identifier f, p, s, off;
    type ssize_t, size_t, loff_t;
    expression E;
    identifier func;
    @@
    ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
    {
    <+...
    (
       *off = E
    |
       *off += E
    |
       func(..., off, ...)
    |
       E = *off
    )
    ...+>
    }

    @ read_no_fpos disable optional_qualifier exists @
    identifier read_f;
    identifier f, p, s, off;
    type ssize_t, size_t, loff_t;
    @@
    ssize_t read_f(struct file *f, char *p, size_t s, loff_t *off)
    {
    ... when != off
    }

    @ write @
    identifier write_f;
    identifier f, p, s, off;
    type ssize_t, size_t, loff_t;
    expression E;
    identifier func;
    @@
    ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
    {
    <+...
    (
       *off = E
    |
       *off += E
    |
       func(..., off, ...)
    |
       E = *off
    )
    ...+>
    }

    @ write_no_fpos @
    identifier write_f;
    identifier f, p, s, off;
    type ssize_t, size_t, loff_t;
    @@
    ssize_t write_f(struct file *f, const char *p, size_t s, loff_t *off)
    {
    ... when != off
    }

    @ fops0 @
    identifier fops;
    @@
    struct file_operations fops = {
    ...
    };

    @ has_llseek depends on fops0 @
    identifier fops0.fops;
    identifier llseek_f;
    @@
    struct file_operations fops = {
    ...
    .llseek = llseek_f,
    ...
    };

    @ has_read depends on fops0 @
    identifier fops0.fops;
    identifier read_f;
    @@
    struct file_operations fops = {
    ...
    .read = read_f,
    ...
    };

    @ has_write depends on fops0 @
    identifier fops0.fops;
    identifier write_f;
    @@
    struct file_operations fops = {
    ...
    .write = write_f,
    ...
    };

    @ has_open depends on fops0 @
    identifier fops0.fops;
    identifier open_f;
    @@
    struct file_operations fops = {
    ...
    .open = open_f,
    ...
    };

    // use no_llseek if we call nonseekable_open
    ////////////////////////////////////////////
    @ nonseekable1 depends on !has_llseek && has_open @
    identifier fops0.fops;
    identifier nso ~= "nonseekable_open";
    @@
    struct file_operations fops = {
    ... .open = nso, ...
    +.llseek = no_llseek, /* nonseekable */
    };

    @ nonseekable2 depends on !has_llseek @
    identifier fops0.fops;
    identifier open.open_f;
    @@
    struct file_operations fops = {
    ... .open = open_f, ...
    +.llseek = no_llseek, /* open uses nonseekable */
    };

    // use seq_lseek for sequential files
    /////////////////////////////////////
    @ seq depends on !has_llseek @
    identifier fops0.fops;
    identifier sr ~= "seq_read";
    @@
    struct file_operations fops = {
    ... .read = sr, ...
    +.llseek = seq_lseek, /* we have seq_read */
    };

    // use default_llseek if there is a readdir
    ///////////////////////////////////////////
    @ fops1 depends on !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
    identifier fops0.fops;
    identifier readdir_e;
    @@
    // any other fop is used that changes pos
    struct file_operations fops = {
    ... .readdir = readdir_e, ...
    +.llseek = default_llseek, /* readdir is present */
    };

    // use default_llseek if at least one of read/write touches f_pos
    /////////////////////////////////////////////////////////////////
    @ fops2 depends on !fops1 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
    identifier fops0.fops;
    identifier read.read_f;
    @@
    // read fops use offset
    struct file_operations fops = {
    ... .read = read_f, ...
    +.llseek = default_llseek, /* read accesses f_pos */
    };

    @ fops3 depends on !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
    identifier fops0.fops;
    identifier write.write_f;
    @@
    // write fops use offset
    struct file_operations fops = {
    ... .write = write_f, ...
    + .llseek = default_llseek, /* write accesses f_pos */
    };

    // Use noop_llseek if neither read nor write accesses f_pos
    ///////////////////////////////////////////////////////////

    @ fops4 depends on !fops1 && !fops2 && !fops3 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
    identifier fops0.fops;
    identifier read_no_fpos.read_f;
    identifier write_no_fpos.write_f;
    @@
    // write fops use offset
    struct file_operations fops = {
    ...
    .write = write_f,
    .read = read_f,
    ...
    +.llseek = noop_llseek, /* read and write both use no f_pos */
    };

    @ depends on has_write && !has_read && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
    identifier fops0.fops;
    identifier write_no_fpos.write_f;
    @@
    struct file_operations fops = {
    ... .write = write_f, ...
    +.llseek = noop_llseek, /* write uses no f_pos */
    };

    @ depends on has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
    identifier fops0.fops;
    identifier read_no_fpos.read_f;
    @@
    struct file_operations fops = {
    ... .read = read_f, ...
    +.llseek = noop_llseek, /* read uses no f_pos */
    };

    @ depends on !has_read && !has_write && !fops1 && !fops2 && !has_llseek && !nonseekable1 && !nonseekable2 && !seq @
    identifier fops0.fops;
    @@
    struct file_operations fops = {
    ...
    +.llseek = noop_llseek, /* no read or write fn */
    };
    ===== End semantic patch =====
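
    For a concrete idea of what the first rule targets, a driver that
    opens with nonseekable_open() ends up like this after the patch
    (hedged sketch; the mydev_* names are made up):

    #include <linux/fs.h>
    #include <linux/module.h>

    static int mydev_open(struct inode *inode, struct file *file)
    {
            /* marks the file non-seekable, so lseek() returns -ESPIPE */
            return nonseekable_open(inode, file);
    }

    static ssize_t mydev_read(struct file *file, char __user *buf,
                              size_t count, loff_t *ppos)
    {
            return 0;       /* stub */
    }

    static const struct file_operations mydev_fops = {
            .owner  = THIS_MODULE,
            .open   = mydev_open,
            .read   = mydev_read,
            .llseek = no_llseek,    /* the line the semantic patch adds */
    };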

    Signed-off-by: Arnd Bergmann
    Cc: Julia Lawall
    Cc: Christoph Hellwig

    Arnd Bergmann
     

25 Aug, 2010

1 commit

  • save_stack_trace() stores the instruction pointer, not the
    function descriptor. On ppc64 the trace stack code currently
    dereferences the instruction pointer and shows 8 bytes of
    instructions in our backtraces:

    # cat /sys/kernel/debug/tracing/stack_trace
            Depth    Size   Location    (26 entries)
            -----    ----   --------
      0)     5424     112   0x6000000048000004
      1)     5312     160   0x60000000ebad01b0
      2)     5152     160   0x2c23000041c20030
      3)     4992     240   0x600000007c781b79
      4)     4752     160   0xe84100284800000c
      5)     4592     192   0x600000002fa30000
      6)     4400     256   0x7f1800347b7407e0
      7)     4144     208   0xe89f0108f87f0070
      8)     3936     272   0xe84100282fa30000

    Since we aren't dealing with function descriptors, use %pS
    instead of %pF to fix it:

    # cat /sys/kernel/debug/tracing/stack_trace
            Depth    Size   Location    (26 entries)
            -----    ----   --------
      0)     5424     112   ftrace_call+0x4/0x8
      1)     5312     160   .current_io_context+0x28/0x74
      2)     5152     160   .get_io_context+0x48/0xa0
      3)     4992     240   .cfq_set_request+0x94/0x4c4
      4)     4752     160   .elv_set_request+0x60/0x84
      5)     4592     192   .get_request+0x2d4/0x468
      6)     4400     256   .get_request_wait+0x7c/0x258
      7)     4144     208   .__make_request+0x49c/0x610
      8)     3936     272   .generic_make_request+0x390/0x434
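
    The one-character change matters only on ABIs with function
    descriptors such as ppc64: %pF dereferences the pointer as a
    descriptor before symbolizing it, while %pS symbolizes the raw
    address. A sketch of the distinction (the printing loop here is
    assumed, not quoted from the patch):

    unsigned long addr = stack_dump_trace[i];

    /* %pS: symbolize addr itself -- correct for save_stack_trace() */
    seq_printf(m, "%pS\n", (void *)addr);

    /* %pF would first dereference addr as a function descriptor to
     * find the entry point -- wrong here, hence the garbage above */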

    Signed-off-by: Anton Blanchard
    Cc: rostedt@goodmis.org
    Cc: fweisbec@gmail.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Anton Blanchard
     

04 Jun, 2010

1 commit

  • The ftrace_preempt_disable/enable functions were added to address a
    recursive race caused by the function tracer. The function tracer
    traces all functions, which makes it easily susceptible to recursion.
    One problem area was preempt_enable(): this would call the scheduler,
    and the scheduler would call the function tracer, and loop.
    (Or so it was thought.)

    The ftrace_preempt_disable/enable pair was made to protect against
    recursion inside the scheduler by storing the NEED_RESCHED flag. If
    the flag was set before ftrace_preempt_disable(), then
    ftrace_preempt_enable() would not call schedule, on the assumption
    that the task either had already scheduled or was already inside the
    scheduler.

    This worked fine except on SMP, where another task could set the
    NEED_RESCHED flag for a task on another CPU and then kick off an IPI
    to trigger it. This could cause NEED_RESCHED to be saved at
    ftrace_preempt_disable() while the IPI arrived inside the
    preempt-disabled section. ftrace_preempt_enable() would then not
    call the scheduler, because the flag had already been set before
    entering the section.

    This bug would cause a missed preemption check and thus longer
    latencies.

    Investigating further, I found that the recursion caused by the
    function tracer was not due to schedule(), but to preempt_schedule().
    Now that preempt_schedule is completely annotated with notrace, the
    recursion is no longer an issue.
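
    For reference, the removed helpers looked roughly like this (a
    reconstruction from the description, not a verbatim quote):

    /* remember whether NEED_RESCHED was already set, then disable
     * preemption without recursing into the tracer */
    static inline int ftrace_preempt_disable(void)
    {
            int resched = need_resched();

            preempt_disable_notrace();
            return resched;
    }

    /* skip the reschedule check if the flag was set before we disabled */
    static inline void ftrace_preempt_enable(int resched)
    {
            if (resched)
                    preempt_enable_no_resched_notrace();
            else
                    preempt_enable_notrace();
    }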

    Reported-by: Thomas Gleixner
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

02 Feb, 2010

1 commit

  • When we cat <debugfs>/tracing/stack_trace, we may cause a circular
    deadlock:

    sys_read()
      t_start()
        arch_spin_lock(&max_stack_lock);

      t_show()
        seq_printf(), vsnprintf() .... /* these are all traceable; when
          they are traced, max_stack_lock may be required again */

    The following script can trigger this circular deadlock very easily:

    #!/bin/bash

    echo 1 > /proc/sys/kernel/stack_tracer_enabled

    mount -t debugfs xxx /mnt > /dev/null 2>&1

    (
    # make check_stack() zealous to require max_stack_lock
    for ((; ;))
    {
            echo 1 > /mnt/tracing/stack_max_size
    }
    ) &

    for ((; ;))
    {
            cat /mnt/tracing/stack_trace > /dev/null
    }

    To fix this bug, we increase the percpu trace_active before taking
    the lock.
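
    The guard makes the traced printing path invisible to check_stack().
    A sketch of the shape of the fix (simplified; the per-cpu accessors
    of that era differ from these):

    /* in t_start()/t_stop(): mark this CPU as already inside the stack
     * tracer, so a recursive check_stack() bails out before it tries
     * to take max_stack_lock */
    local_irq_disable();
    __this_cpu_inc(trace_active);

    arch_spin_lock(&max_stack_lock);
    /* ... seq_file printing, which may itself be traced ... */
    arch_spin_unlock(&max_stack_lock);

    __this_cpu_dec(trace_active);
    local_irq_enable();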

    Reported-by: Li Zefan
    Signed-off-by: Lai Jiangshan
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    Lai Jiangshan
     

15 Dec, 2009

3 commits


24 Sep, 2009

1 commit

  • It's unused.

    It isn't needed -- read or write flag is already passed and sysctl
    shouldn't care about the rest.

    It _was_ used in two places in arch/frv for some reason.
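
    For context, this removes the struct file * argument from sysctl
    ->proc_handler callbacks; the post-change signature is roughly:

    /* sysctl proc handler after the change: no struct file * parameter */
    int (*proc_handler)(struct ctl_table *ctl, int write,
                        void __user *buffer, size_t *lenp, loff_t *ppos);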

    Signed-off-by: Alexey Dobriyan
    Cc: David Howells
    Cc: "Eric W. Biederman"
    Cc: Al Viro
    Cc: Ralf Baechle
    Cc: Martin Schwidefsky
    Cc: Ingo Molnar
    Cc: "David S. Miller"
    Cc: James Morris
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexey Dobriyan
     

17 Aug, 2009

1 commit


11 Aug, 2009

1 commit


23 Jul, 2009

1 commit


17 Jul, 2009

1 commit


27 Jun, 2009

1 commit

  • This made my machine freeze completely:

    # echo 1 > /proc/sys/kernel/stack_tracer_enabled
    # echo 2 > /proc/sys/kernel/stack_tracer_enabled

    The cause is that register_ftrace_function() was called twice.

    Also fix the ftrace_enabled sysctl, though nothing bad seemed to
    happen when I tested it.

    Signed-off-by: Li Zefan
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Li Zefan
     

03 Jun, 2009

1 commit

  • The last entry in stack_dump_trace is ULONG_MAX, which is not
    a valid entry, but max_stack_trace.nr_entries has accounted for it.
    So when printing the header, we should decrease it by one.
    Before the fix, it printed as follows, for example:

    Depth Size Location (53 entries)
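
    The fix is presumably a one-liner in the header printout,
    subtracting the terminator from the count (sketch):

    /* nr_entries counts the ULONG_MAX terminator, so print one less */
    seq_printf(m, "        Depth    Size   Location"
               "    (%d entries)\n"
               "        -----    ----   --------\n",
               max_stack_trace.nr_entries - 1);
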
    LKML-Reference:
    Signed-off-by: Steven Rostedt

    walimis
     

07 Apr, 2009

1 commit


13 Mar, 2009

3 commits

  • Ingo Molnar suggested, instead of:

            Depth    Size   Location    (27 entries)
            -----    ----   --------
      0)     2880      48   lock_timer_base+0x2b/0x4f
      1)     2832      80   __mod_timer+0x33/0xe0
      2)     2752      16   __ide_set_handler+0x63/0x65

    To have it be:

            Depth    Size   Location    (27 entries)
            -----    ----   --------
      0)     2880      48   lock_timer_base+0x2b/0x4f
      1)     2832      80   __mod_timer+0x33/0xe0
      2)     2752      16   __ide_set_handler+0x63/0x65

    Requested-by: Ingo Molnar
    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • If stack tracing is disabled (the default), the stack_trace file
    will contain only the header:

    # cat /debug/tracing/stack_trace
            Depth    Size   Location    (0 entries)
            -----    ----   --------

    This can be frustrating to a developer that does not realize that the
    stack tracer is disabled. This patch adds the following text:

    # cat /debug/tracing/stack_trace
            Depth    Size   Location    (0 entries)
            -----    ----   --------
    #
    # Stack tracer disabled
    #
    # To enable the stack tracer, either add 'stacktrace' to the
    # kernel command line
    # or 'echo 1 > /proc/sys/kernel/stack_tracer_enabled'
    #

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     
  • The stack tracer used to look like this:

    # cat /debug/tracing/stack_trace
            Depth    Size   Location    (57 entries)
            -----    ----   --------
      0)     5088      16   mempool_alloc_slab+0x16/0x18
      1)     5072     144   mempool_alloc+0x4d/0xfe
      2)     4928      16   scsi_sg_alloc+0x48/0x4a [scsi_mod]

    Now it looks like this:

    # cat /debug/tracing/stack_trace

            Depth    Size   Location    (57 entries)
            -----    ----   --------
      0)     5088      16   mempool_alloc_slab+0x16/0x18
      1)     5072     144   mempool_alloc+0x4d/0xfe
      2)     4928      16   scsi_sg_alloc+0x48/0x4a [scsi_mod]

    Signed-off-by: Steven Rostedt

    Steven Rostedt
     

18 Dec, 2008

2 commits

  • Impact: clean up

    Andrew Morton suggested to use the stack_tracer_enabled variable
    to decide whether or not to start stack tracing on bootup.
    This lets us remove the start_stack_trace variable.

    Reported-by: Andrew Morton
    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • Impact: enhancement to stack tracer

    The stack tracer is currently either on when configured in, or off
    when it is not. It cannot be disabled once it is configured on
    (besides disabling the function tracer that it uses).

    This patch adds a way to enable or disable the stack tracer at
    run time. It defaults to off on bootup, but a kernel parameter
    'stacktrace' has been added to enable it on bootup.

    A new sysctl, "kernel.stack_tracer_enabled", has been added to let
    the user enable or disable the stack tracer at run time.
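
    Wiring up such a sysctl is a ctl_table entry plus a handler. A
    hedged sketch of the shape (treat the handler name as illustrative):

    static struct ctl_table kern_table[] = {
            {
                    .procname       = "stack_tracer_enabled",
                    .data           = &stack_tracer_enabled,
                    .maxlen         = sizeof(int),
                    .mode           = 0644,
                    /* registers/unregisters the tracer on 1/0 */
                    .proc_handler   = stack_trace_sysctl,
            },
            { }
    };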

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     

04 Dec, 2008

1 commit

  • Impact: fix to output of stack trace

    If a function is not found in the stack of the stack tracer, the
    number printed is quite strange. This fixes the algorithm to handle
    missing functions better.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     

03 Dec, 2008

1 commit


23 Nov, 2008

1 commit


21 Nov, 2008

1 commit

  • Impact: make output of stack_trace complete if buffer overruns

    When the read buffer overruns, the output of stack_trace isn't
    complete.

    When printing records with seq_printf in t_show, if the read buffer
    is overrun by the current record, then that record won't be printed
    to user space through the read buffer; it is simply dropped in that
    round of printing.

    At the next printing, t_start should return the "*pos"th record,
    which is the one dropped by the previous printing, but it just
    returns the (m->private + *pos)th record.

    Here we use a more sane method to implement seq_operations, one that
    can be found elsewhere in kernel code. Thus we needn't initialize
    m->private.

    As for testing, it's not easy to overrun the read buffer, but we can
    use seq_printf to print more padding bytes in t_show; then it's easy
    to check whether records are lost.

    This commit has been tested under both overrun and non-overrun
    conditions.
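
    The saner method referred to is the standard position-based
    seq_operations idiom: t_start() derives the record purely from *pos,
    so a record dropped on overrun is produced again on the next read.
    Sketched (simplified; the real file also handles the ULONG_MAX
    terminator):

    static void *t_start(struct seq_file *m, loff_t *pos)
    {
            /* *pos counts records already consumed by user space, so
             * an entry dropped on overrun is simply fetched again */
            if (*pos >= max_stack_trace.nr_entries)
                    return NULL;
            return &stack_dump_trace[*pos];
    }

    static void *t_next(struct seq_file *m, void *v, loff_t *pos)
    {
            (*pos)++;
            return t_start(m, pos);
    }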

    Signed-off-by: Liming Wang
    Acked-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Liming Wang
     

04 Nov, 2008

1 commit


21 Oct, 2008

1 commit

  • The stack trace API does not record whether the stack it sampled is
    the current task's stack. That is, if the stack is the interrupt
    stack or the NMI stack, the output does not show that. Also, the
    sizes of those stacks are not consistent with the size of the thread
    stack, which usually makes the calculated stack size bogus.

    This all confuses the stack tracer. I unfortunately do not have time
    to fix all these problems, but this patch does record the worst
    stack only when the stack pointer is on the task's stack (instead of
    recording bogus numbers).

    The patch simply returns if the stack pointer is not on the task's
    stack.
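
    The bail-out can be written with the existing object_is_on_stack()
    helper, roughly (a sketch based on the description):

    /* a local variable lives on whatever stack we are running on; if
     * that is not the task's thread stack (e.g. IRQ or NMI stack),
     * skip this sample entirely */
    if (!object_is_on_stack(&this_size))
            return;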

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     

14 Oct, 2008

2 commits

  • This patch adds the indexes into the stack at which the functions in
    the stack dump were found. As an added bonus, I also added a diff to
    show which function is the most notorious consumer of the stack.

    The output now looks like this:

    # cat /debug/tracing/stack_trace
            Depth    Size   Location    (48 entries)
            -----    ----   --------
      0)     2476     212   blk_recount_segments+0x39/0x59
      1)     2264      12   bio_phys_segments+0x16/0x1d
      2)     2252      20   blk_rq_bio_prep+0x23/0xaf
      3)     2232      12   init_request_from_bio+0x74/0x77
      4)     2220      56   __make_request+0x294/0x331
      5)     2164     136   generic_make_request+0x34f/0x37d
      6)     2028      56   submit_bio+0xe7/0xef
      7)     1972      28   submit_bh+0xd1/0xf0
      8)     1944     112   block_read_full_page+0x299/0x2a9
      9)     1832       8   blkdev_readpage+0x14/0x16
     10)     1824      28   read_cache_page_async+0x7e/0x109
     11)     1796      16   read_cache_page+0x11/0x49
     12)     1780      32   read_dev_sector+0x3c/0x72
     13)     1748      48   read_lba+0x4d/0xaa
     14)     1700     168   efi_partition+0x85/0x61b
     15)     1532      72   rescan_partitions+0x10e/0x266
     16)     1460      40   do_open+0x1c7/0x24e
     17)     1420     292   __blkdev_get+0x79/0x84
     18)     1128      12   blkdev_get+0x12/0x14
     19)     1116      20   register_disk+0xd1/0x11e
     20)     1096      28   add_disk+0x34/0x90
     21)     1068      52   sd_probe+0x2b1/0x366
     22)     1016      20   driver_probe_device+0xa5/0x120
     23)      996       8   __device_attach+0xd/0xf
     24)      988      32   bus_for_each_drv+0x3e/0x68
     25)      956      24   device_attach+0x56/0x6c
     26)      932      16   bus_attach_device+0x26/0x4d
     27)      916      64   device_add+0x380/0x4b4
     28)      852      28   scsi_sysfs_add_sdev+0xa1/0x1c9
     29)      824     160   scsi_probe_and_add_lun+0x919/0xa2a
     30)      664      36   __scsi_add_device+0x88/0xae
     31)      628      44   ata_scsi_scan_host+0x9e/0x21c
     32)      584      28   ata_host_register+0x1cb/0x1db
     33)      556      24   ata_host_activate+0x98/0xb5
     34)      532     192   ahci_init_one+0x9bd/0x9e9
     35)      340      20   pci_device_probe+0x3e/0x5e
     36)      320      20   driver_probe_device+0xa5/0x120
     37)      300      20   __driver_attach+0x3f/0x5e
     38)      280      36   bus_for_each_dev+0x40/0x62
     39)      244      12   driver_attach+0x19/0x1b
     40)      232      28   bus_add_driver+0x9c/0x1af
     41)      204      28   driver_register+0x76/0xd2
     42)      176      20   __pci_register_driver+0x44/0x71
     43)      156       8   ahci_init+0x14/0x16
     44)      148     100   _stext+0x42/0x122
     45)       48      20   kernel_init+0x175/0x1dc
     46)       28      28   kernel_thread_helper+0x7/0x10

    The first column is simply an index, starting from the innermost
    function and counting down to the outermost.

    The next column is the depth at which the function was found on the
    stack.

    The next column is the size of the stack used by that function.
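
    The per-function Size is presumably just the difference between
    successive Depth values, with the outermost entry keeping its full
    depth; in sketch form (variable names assumed):

    /* stack_dump_index[i] is the depth at entry i; the stack consumed
     * by entry i is the gap down to the next (outer) entry */
    if (i + 1 == max_stack_trace.nr_entries)
            size = stack_dump_index[i];
    else
            size = stack_dump_index[i] - stack_dump_index[i + 1];

    seq_printf(m, "%3ld) %8d   %5d   %pS\n",
               i, stack_dump_index[i], size,
               (void *)stack_dump_trace[i]);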

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • This is another tracer using the ftrace infrastructure. It examines
    the size of the stack at each function call, and if the stack use is
    greater than the previous maximum, it is recorded.

    You can always see (and set) the max stack size seen. Setting it to
    zero will restart the recording. The backtrace is also available.

    For example:

    # cat /debug/tracing/stack_max_size
    1856

    # cat /debug/tracing/stack_trace
    [] stack_trace_call+0x8f/0x101
    [] ftrace_call+0x5/0x8
    [] clocksource_get_next+0x12/0x48
    [] update_wall_time+0x538/0x6d1
    [] do_timer+0x23/0xb0
    [] tick_do_update_jiffies64+0xd9/0xf1
    [] tick_sched_timer+0x4a/0xad
    [] __run_hrtimer+0x3e/0x75
    [] hrtimer_interrupt+0xf1/0x154
    [] smp_apic_timer_interrupt+0x71/0x84
    [] apic_timer_interrupt+0x2d/0x34
    [] finish_task_switch+0x29/0xa0
    [] schedule+0x765/0x7be
    [] schedule_timeout+0x1b/0x90
    [] wait_for_common+0xab/0x101
    [] wait_for_completion+0x12/0x14
    [] blk_execute_rq+0x84/0x99
    [] scsi_execute+0xc2/0x105
    [] scsi_execute_req+0x57/0x7f
    [] sr_test_unit_ready+0x3e/0x97
    [] sr_media_change+0x43/0x205
    [] media_changed+0x48/0x77
    [] cdrom_media_changed+0x31/0x37
    [] sr_block_media_changed+0x16/0x18
    [] check_disk_change+0x1b/0x63
    [] cdrom_open+0x7a1/0x806
    [] sr_block_open+0x78/0x8d
    [] do_open+0x90/0x257
    [] blkdev_open+0x2d/0x56
    [] __dentry_open+0x14d/0x23c
    [] nameidata_to_filp+0x24/0x38
    [] do_filp_open+0x347/0x626
    [] do_sys_open+0x47/0xbc
    [] sys_open+0x23/0x2b
    [] sysenter_do_call+0x12/0x26

    I've tested this on both x86_64 and i386.
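
    The core idea fits in a few lines: from the address of a local
    variable, compute how much of the THREAD_SIZE-aligned stack is in
    use, and record a backtrace whenever that exceeds the running
    maximum (hedged sketch, not the actual function):

    /* called via ftrace on (nearly) every function entry */
    static void check_stack(void)
    {
            unsigned long this_size;

            /* offset of a local within the THREAD_SIZE-aligned stack */
            this_size = (unsigned long)&this_size & (THREAD_SIZE - 1);
            this_size = THREAD_SIZE - this_size;    /* bytes in use */

            if (this_size <= max_stack_size)
                    return;

            /* new max: take the lock, re-check for races, save the
             * backtrace with save_stack_trace() */
    }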

    Signed-off-by: Steven Rostedt
    Cc: Peter Zijlstra
    Cc: Andrew Morton
    Cc: Linus Torvalds
    Signed-off-by: Ingo Molnar

    Steven Rostedt