24 Jun, 2010

1 commit

  • In submit_bio, we count vm events by checking READ/WRITE. But
    DISCARD_NOBARRIER also has the WRITE flag set, and in blkdev_issue_discard
    we also add a page as the payload, so the bio_has_data check isn't enough.
    So add another check for discard bios.

    Signed-off-by: Tao Ma
    Signed-off-by: Jens Axboe

    Tao Ma
     

21 Jun, 2010

1 commit


19 Jun, 2010

1 commit

  • Hi Jens,

    A few days back, Ingo noticed a CFQ boot-time warning. This patch fixes it.
    The issue here is that with CFQ_GROUP_IOSCHED=n, CFQ should not really
    be making blkio stat related calls.

    > Hm, it's still not entirely fixed, as of 2.6.35-rc2-00131-g7908a9e. With
    > some
    > configs i get bad spinlock warnings during bootup:
    >
    > [ 28.968013] initcall net_olddevs_init+0x0/0x82 returned 0 after 93750
    > usecs
    > [ 28.972003] calling b44_init+0x0/0x55 @ 1
    > [ 28.976009] bus: 'pci': add driver b44
    > [ 28.976374] sda:
    > [ 28.978157] BUG: spinlock bad magic on CPU#1, async/0/117
    > [ 28.980000] lock: 7e1c5bbc, .magic: 00000000, .owner: /-1, .owner_cpu: 0
    > [ 28.980000] Pid: 117, comm: async/0 Not tainted 2.6.35-rc2-tip-01092-g010e7ef-dirty #8183
    > [ 28.980000] Call Trace:
    > [ 28.980000] [] ? printk+0x20/0x24
    > [ 28.980000] [] spin_bug+0x7c/0x87
    > [ 28.980000] [] do_raw_spin_lock+0x1e/0x123
    > [ 28.980000] [] ? _raw_spin_lock_irqsave+0x12/0x20
    > [ 28.980000] [] _raw_spin_lock_irqsave+0x1a/0x20
    > [ 28.980000] [] blkiocg_update_io_add_stats+0x25/0xfb
    > [ 28.980000] [] ? cfq_prio_tree_add+0xb1/0xc1
    > [ 28.980000] [] cfq_insert_request+0x8c/0x425

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     

18 Jun, 2010

1 commit

  • Hi,

    A user reported a kernel bug when running a particular program that did
    the following:

    created 32 threads
    - each thread took a mutex, grabbed a global offset, added a buffer size
    to that offset, released the lock
    - read from the given offset in the file
    - created a new thread to do the same
    - exited

    The result is that cfq's close cooperator logic would trigger, as the
    threads were issuing I/O within the mean seek distance of one another.
    This workload managed to routinely trigger a use after free bug when
    walking the list of merge candidates for a particular cfqq
    (cfqq->new_cfqq). The logic used for merging queues looks like this:

    static void cfq_setup_merge(struct cfq_queue *cfqq, struct cfq_queue *new_cfqq)
    {
            int process_refs, new_process_refs;
            struct cfq_queue *__cfqq;

            /* Avoid a circular list and skip interim queue merges */
            while ((__cfqq = new_cfqq->new_cfqq)) {
                    if (__cfqq == cfqq)
                            return;
                    new_cfqq = __cfqq;
            }

            process_refs = cfqq_process_refs(cfqq);
            /*
             * If the process for the cfqq has gone away, there is no
             * sense in merging the queues.
             */
            if (process_refs == 0)
                    return;

            /*
             * Merge in the direction of the lesser amount of work.
             */
            new_process_refs = cfqq_process_refs(new_cfqq);
            if (new_process_refs >= process_refs) {
                    cfqq->new_cfqq = new_cfqq;
                    atomic_add(process_refs, &new_cfqq->ref);
            } else {
                    new_cfqq->new_cfqq = cfqq;
                    atomic_add(new_process_refs, &cfqq->ref);
            }
    }

    When a merge candidate is found, we add the process references for the
    queue with less references to the queue with more. The actual merging
    of queues happens when a new request is issued for a given cfqq. In the
    case of the test program, it only does a single pread call to read in
    1MB, so the actual merge never happens.

    Normally, this is fine, as when the queue exits, we simply drop the
    references we took on the other cfqqs in the merge chain:

    /*
     * If this queue was scheduled to merge with another queue, be
     * sure to drop the reference taken on that queue (and others in
     * the merge chain). See cfq_setup_merge and cfq_merge_cfqqs.
     */
    __cfqq = cfqq->new_cfqq;
    while (__cfqq) {
            if (__cfqq == cfqq) {
                    WARN(1, "cfqq->new_cfqq loop detected\n");
                    break;
            }
            next = __cfqq->new_cfqq;
            cfq_put_queue(__cfqq);
            __cfqq = next;
    }

    However, there is a hole in this logic. Consider the following (and
    keep in mind that each I/O keeps a reference to the cfqq):

    q1->new_cfqq = q2 // q2 now has 2 process references
    q3->new_cfqq = q2 // q2 now has 3 process references

    // the process associated with q2 exits
    // q2 now has 2 process references

    // queue 1 exits, drops its reference on q2
    // q2 now has 1 process reference

    // q3 exits, so has 0 process references, and hence drops its references
    // to q2, which leaves q2 also with 0 process references

    q4 comes along and wants to merge with q3

    q3->new_cfqq still points at q2! We follow that link and end up at an
    already freed cfqq.

    So, the fix is to not follow a merge chain if the top-most queue does
    not have a process reference, otherwise any queue in the chain could be
    already freed. I also changed the logic to disallow merging with a
    queue that does not have any process references. Previously, we did
    this check for one of the merge candidates, but not the other. That
    doesn't really make sense.

    Without the attached patch, my system would BUG within a couple of
    seconds of running the reproducer program. With the patch applied, my
    system ran the program for over an hour without issues.

    This addresses the following bugzilla:
    https://bugzilla.kernel.org/show_bug.cgi?id=16217

    Thanks a ton to Phil Carns for providing the bug report and an excellent
    reproducer.

    [ Note for stable: this applies to 2.6.32/33/34 ].

    Signed-off-by: Jeff Moyer
    Reported-by: Phil Carns
    Cc: stable@kernel.org
    Signed-off-by: Jens Axboe

    Jeff Moyer
     

17 Jun, 2010

1 commit

  • Filesystems assume that DISCARD_BARRIER requests are full barriers, so
    that they don't have to track in-progress discard operations when
    submitting new I/O. But currently we only treat them as elevator barriers,
    which don't actually do the necessary queue drains.

    Also remove the unlikely around both the DISCARD and BARRIER requests -
    they happen far too often for a static mispredict.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Jens Axboe

    Christoph Hellwig
     

04 Jun, 2010

2 commits

  • blk_init_allocated_queue_node may fail, and the caller _could_ retry.
    Accommodate the unlikely event that blk_init_allocated_queue_node is
    called on an already (possibly partially) initialized request_queue.

    Signed-off-by: Mike Snitzer
    Signed-off-by: Jens Axboe

    Mike Snitzer
     
  • On blk_init_allocated_queue_node failure, only free the request_queue if
    it wasn't previously allocated outside the block layer
    (e.g. when blk_init_queue_node was the blk_init_allocated_queue_node caller).

    This addresses an interface bug introduced by the following commit:
    01effb0 block: allow initialization of previously allocated
    request_queue

    Otherwise the request_queue may be free'd out from underneath a caller
    that is managing the request_queue directly (e.g. caller uses
    blk_alloc_queue + blk_init_allocated_queue_node).

    Signed-off-by: Mike Snitzer
    Signed-off-by: Jens Axboe

    Mike Snitzer
     

25 May, 2010

1 commit

  • I got the oops below when unloading cfq-iosched. Consider this scenario:
    queue A merges to B, C merges to D, and B will be merged to D. Before B is
    merged to D, B is split. In that case we must put B's reference for D.

    [ 807.768536] =============================================================================
    [ 807.768539] BUG cfq_queue: Objects remaining on kmem_cache_close()
    [ 807.768541] -----------------------------------------------------------------------------
    [ 807.768543]
    [ 807.768546] INFO: Slab 0xffffea0003e6b4e0 objects=26 used=1 fp=0xffff88011d584fd8 flags=0x200000000004082
    [ 807.768550] Pid: 5946, comm: rmmod Tainted: G W 2.6.34-07097-gf4b87de-dirty #724
    [ 807.768552] Call Trace:
    [ 807.768560] [] slab_err+0x8f/0x9d
    [ 807.768564] [] ? flush_cpu_slab+0x0/0x93
    [ 807.768569] [] ? add_preempt_count+0xe/0xca
    [ 807.768572] [] ? sub_preempt_count+0xe/0xb6
    [ 807.768577] [] ? _raw_spin_unlock+0x15/0x30
    [ 807.768580] [] ? sub_preempt_count+0xe/0xb6
    [ 807.768584] [] list_slab_objects+0x9b/0x19f
    [ 807.768588] [] ? add_preempt_count+0xc6/0xca
    [ 807.768591] [] kmem_cache_destroy+0x13f/0x21d
    [ 807.768597] [] cfq_slab_kill+0x1a/0x43 [cfq_iosched]
    [ 807.768601] [] cfq_exit+0x93/0x9e [cfq_iosched]
    [ 807.768606] [] sys_delete_module+0x1b1/0x219
    [ 807.768612] [] system_call_fastpath+0x16/0x1b
    [ 807.768618] INFO: Object 0xffff88011d584618 @offset=1560
    [ 807.768622] INFO: Allocated in cfq_get_queue+0x11e/0x274 [cfq_iosched] age=7173 cpu=1 pid=5496
    [ 807.768626] =============================================================================

    Cc: stable@kernel.org
    Signed-off-by: Shaohua Li
    Signed-off-by: Jens Axboe

    Shaohua Li
     

24 May, 2010

3 commits

  • Bio-based DM doesn't use an elevator (queue is !blk_queue_stackable()).

    Longer term, DM will not allocate an elevator for bio-based DM. But even
    then there will be a small potential for an elevator to be allocated for
    a request-based DM table, only to have a bio-based table loaded in the
    end.

    Displaying "none" for bio-based DM will help avoid user confusion.

    Signed-off-by: Mike Snitzer
    Signed-off-by: Jens Axboe

    Mike Snitzer
     
  • Use small consecutive indices as radix tree keys instead of the sparse
    cfqd address.

    This change reduces the radix tree depth from 11 (6 on 32-bit hosts)
    down to 1 on hosts with a small number of keys.
    (Bit 0 is the dead mark; bits 1..30 are the index: ida produces ids in the
    range 0..2^31-1.)

    Signed-off-by: Konstantin Khlebnikov
    Signed-off-by: Jens Axboe

    Konstantin Khlebnikov
     
  • Remove the ->dead_key field from cfq_io_context to shrink it to 128 bytes
    (64 bytes on 32-bit hosts).

    Use the lower bit of ->key as the dead mark instead of moving the key to a
    separate field. After this, a dead cfq_io_context has cic->key != cfqd
    automatically, so io_context's last-hit cache keeps working without
    changes.

    To check ->key for the non-dead state, compare it with cfqd instead of
    checking ->key for a non-NULL value as before.

    Also remove the obsolete race protection in cfq_cic_lookup; this race went
    away after v2.6.24-1728-g4ac845a.

    Signed-off-by: Konstantin Khlebnikov
    Signed-off-by: Jens Axboe

    Konstantin Khlebnikov
     

22 May, 2010

2 commits


11 May, 2010

1 commit

  • blk_init_queue() allocates the request_queue structure and then
    initializes it as needed (request_fn, elevator, etc).

    Split initialization out to blk_init_allocated_queue_node.
    Introduce blk_init_allocated_queue wrapper function to model existing
    blk_init_queue and blk_init_queue_node interfaces.

    Export elv_register_queue to allow a newly added elevator to be
    registered with sysfs. Export elv_unregister_queue for symmetry.

    These changes allow DM to initialize a device's request_queue with more
    precision. In particular, DM no longer unconditionally initializes a
    full request_queue (elevator et al). It only does so for a
    request-based DM device.

    Signed-off-by: Mike Snitzer
    Signed-off-by: Jens Axboe

    Mike Snitzer
     

07 May, 2010

1 commit

  • With CONFIG_PROVE_RCU=y, a warning can be triggered:

    # mount -t cgroup -o blkio xxx /mnt
    # mkdir /mnt/subgroup

    ...
    kernel/cgroup.c:4442 invoked rcu_dereference_check() without protection!
    ...

    To fix this, we avoid calling css_depth() here, which is a bit simpler
    than the original code.

    Signed-off-by: Li Zefan
    Acked-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Li Zefan
     

06 May, 2010

1 commit

  • It is necessary to be in an RCU read-side critical section when invoking
    css_id(), so this patch adds one to blkiocg_add_blkio_group(). This is
    actually a false positive, because this is called at initialization time
    and hence always refers to the root cgroup, which cannot go away.

    [ 103.790505] ===================================================
    [ 103.790509] [ INFO: suspicious rcu_dereference_check() usage. ]
    [ 103.790511] ---------------------------------------------------
    [ 103.790514] kernel/cgroup.c:4432 invoked rcu_dereference_check() without protection!
    [ 103.790517]
    [ 103.790517] other info that might help us debug this:
    [ 103.790519]
    [ 103.790521]
    [ 103.790521] rcu_scheduler_active = 1, debug_locks = 1
    [ 103.790524] 4 locks held by bash/4422:
    [ 103.790526] #0: (&buffer->mutex){+.+.+.}, at: [] sysfs_write_file+0x3c/0x144
    [ 103.790537] #1: (s_active#102){.+.+.+}, at: [] sysfs_write_file+0xe7/0x144
    [ 103.790544] #2: (&q->sysfs_lock){+.+.+.}, at: [] queue_attr_store+0x49/0x8f
    [ 103.790552] #3: (&(&blkcg->lock)->rlock){......}, at: [] blkiocg_add_blkio_group+0x2b/0xad
    [ 103.790560]
    [ 103.790561] stack backtrace:
    [ 103.790564] Pid: 4422, comm: bash Not tainted 2.6.34-rc4-blkio-second-crash #81
    [ 103.790567] Call Trace:
    [ 103.790572] [] lockdep_rcu_dereference+0x9d/0xa5
    [ 103.790577] [] css_id+0x44/0x57
    [ 103.790581] [] blkiocg_add_blkio_group+0x53/0xad
    [ 103.790586] [] cfq_init_queue+0x139/0x32c
    [ 103.790591] [] elv_iosched_store+0xbf/0x1bf
    [ 103.790595] [] queue_attr_store+0x70/0x8f
    [ 103.790599] [] ? sysfs_write_file+0xe7/0x144
    [ 103.790603] [] sysfs_write_file+0x108/0x144
    [ 103.790609] [] vfs_write+0xae/0x10b
    [ 103.790612] [] ? trace_hardirqs_on_caller+0x10c/0x130
    [ 103.790616] [] sys_write+0x4a/0x6e
    [ 103.790622] [] system_call_fastpath+0x16/0x1b
    [ 103.790625]

    Located-by: Miles Lane
    Signed-off-by: Vivek Goyal
    Signed-off-by: Paul E. McKenney
    Signed-off-by: Ingo Molnar
    Signed-off-by: Jens Axboe

    Vivek Goyal
     

03 May, 2010

1 commit


29 Apr, 2010

6 commits


27 Apr, 2010

2 commits

  • This patch fixes a few usability and configurability issues.

    o All the cgroup based controller options are configurable from the
    "General Setup/Control Group Support/" menu; blkio is the only exception.
    Hence make this option visible in the above menu and configurable from
    there, to bring it in line with the rest of the cgroup based controllers.

    o Get rid of CONFIG_DEBUG_CFQ_IOSCHED.

    This option currently does two things.

    - Enables printing of cgroup paths in blktrace
    - Enables CONFIG_DEBUG_BLK_CGROUP, which in turn displays additional stat
    files in the cgroup.

    If we are using group scheduling, blktrace data is not of much use if
    cgroup information is not present. To get this data, one currently also
    has to enable CONFIG_DEBUG_CFQ_IOSCHED, which in turn brings in the
    overhead of all the additional debug stat files, which is not desired.

    Hence, this patch moves printing of cgroup paths under
    CONFIG_CFQ_GROUP_IOSCHED.

    This allows us to get rid of CONFIG_DEBUG_CFQ_IOSCHED completely. Now all
    the debug stat files are controlled only by CONFIG_DEBUG_BLK_CGROUP which
    can be enabled through config menu.

    Signed-off-by: Vivek Goyal
    Acked-by: Divyesh Shah
    Reviewed-by: Gui Jianfeng
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o Once in a while, I was hitting a BUG_ON() in blkio code. empty_time
    assumed that upon slice expiry, a group can't already be marked empty
    (except during forced dispatch).

    But this assumption is broken if a cfqq can move across groups
    (group_isolation=0) after receiving a request.

    Most likely, in this case we got a request in a cfqq and accounted the rq
    in one group; later, while adding the cfqq to the tree, we moved the queue
    to a different group which was already marked empty, and after dispatching
    from the slice we found the group already marked empty and raised the
    alarm.

    This patch does not error out if the group is already marked empty. This
    can introduce some empty_time stat error, but only in the
    group_isolation=0 case, which is better than crashing. With
    group_isolation=1 we should still get the same stats as before this patch.

    [ 222.308546] ------------[ cut here ]------------
    [ 222.309311] kernel BUG at block/blk-cgroup.c:236!
    [ 222.309311] invalid opcode: 0000 [#1] SMP
    [ 222.309311] last sysfs file: /sys/devices/virtual/block/dm-3/queue/scheduler
    [ 222.309311] CPU 1
    [ 222.309311] Modules linked in: dm_round_robin dm_multipath qla2xxx scsi_transport_fc dm_zero dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
    [ 222.309311]
    [ 222.309311] Pid: 4780, comm: fio Not tainted 2.6.34-rc4-blkio-config #68 0A98h/HP xw8600 Workstation
    [ 222.309311] RIP: 0010:[] [] blkiocg_set_start_empty_time+0x50/0x83
    [ 222.309311] RSP: 0018:ffff8800ba6e79f8 EFLAGS: 00010002
    [ 222.309311] RAX: 0000000000000082 RBX: ffff8800a13b7990 RCX: ffff8800a13b7808
    [ 222.309311] RDX: 0000000000002121 RSI: 0000000000000082 RDI: ffff8800a13b7a30
    [ 222.309311] RBP: ffff8800ba6e7a18 R08: 0000000000000000 R09: 0000000000000001
    [ 222.309311] R10: 000000000002f8c8 R11: ffff8800ba6e7ad8 R12: ffff8800a13b78ff
    [ 222.309311] R13: ffff8800a13b7990 R14: 0000000000000001 R15: ffff8800a13b7808
    [ 222.309311] FS: 00007f3beec476f0(0000) GS:ffff880001e40000(0000) knlGS:0000000000000000
    [ 222.309311] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    [ 222.309311] CR2: 000000000040e7f0 CR3: 00000000a12d5000 CR4: 00000000000006e0
    [ 222.309311] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    [ 222.309311] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
    [ 222.309311] Process fio (pid: 4780, threadinfo ffff8800ba6e6000, task ffff8800b3d6bf00)
    [ 222.309311] Stack:
    [ 222.309311] 0000000000000001 ffff8800bab17a48 ffff8800bab17a48 ffff8800a13b7800
    [ 222.309311] ffff8800ba6e7a68 ffffffff8121da35 ffff880000000001 00ff8800ba5c5698
    [ 222.309311] ffff8800ba6e7a68 ffff8800a13b7800 0000000000000000 ffff8800bab17a48
    [ 222.309311] Call Trace:
    [ 222.309311] [] __cfq_slice_expired+0x2af/0x3ec
    [ 222.309311] [] cfq_dispatch_requests+0x2c8/0x8e8
    [ 222.309311] [] ? spin_unlock_irqrestore+0xe/0x10
    [ 222.309311] [] ? blk_insert_cloned_request+0x70/0x7b
    [ 222.309311] [] blk_peek_request+0x191/0x1a7
    [ 222.309311] [] dm_request_fn+0x38/0x14c [dm_mod]
    [ 222.309311] [] ? sync_page_killable+0x0/0x35
    [ 222.309311] [] __generic_unplug_device+0x32/0x37
    [ 222.309311] [] generic_unplug_device+0x2e/0x3c
    [ 222.309311] [] dm_unplug_all+0x42/0x5b [dm_mod]
    [ 222.309311] [] blk_unplug+0x29/0x2d
    [ 222.309311] [] blk_backing_dev_unplug+0x12/0x14
    [ 222.309311] [] block_sync_page+0x35/0x39
    [ 222.309311] [] sync_page+0x41/0x4a
    [ 222.309311] [] sync_page_killable+0xe/0x35
    [ 222.309311] [] __wait_on_bit_lock+0x46/0x8f
    [ 222.309311] [] __lock_page_killable+0x66/0x6d
    [ 222.309311] [] ? wake_bit_function+0x0/0x33
    [ 222.309311] [] lock_page_killable+0x2c/0x2e
    [ 222.309311] [] generic_file_aio_read+0x361/0x4f0
    [ 222.309311] [] do_sync_read+0xcb/0x108
    [ 222.309311] [] ? security_file_permission+0x16/0x18
    [ 222.309311] [] vfs_read+0xab/0x108
    [ 222.309311] [] sys_read+0x4a/0x6e
    [ 222.309311] [] system_call_fastpath+0x16/0x1b
    [ 222.309311] Code: 58 01 00 00 00 48 89 c6 75 0a 48 83 bb 60 01 00 00 00 74 09 48 8d bb a0 00 00 00 eb 35 41 fe cc 74 0d f6 83 c0 01 00 00 04 74 04 0b eb fe 48 89 75 e8 e8 be e0 de ff 66 83 8b c0 01 00 00 04
    [ 222.309311] RIP [] blkiocg_set_start_empty_time+0x50/0x83
    [ 222.309311] RSP
    [ 222.309311] ---[ end trace 32b4f71dffc15712 ]---

    Signed-off-by: Vivek Goyal
    Acked-by: Divyesh Shah
    Signed-off-by: Jens Axboe

    Vivek Goyal
     

21 Apr, 2010

2 commits

  • blkio + cfq was crashing even when two sequential readers were put in two
    separate cgroups (group_isolation=0).

    The reason is that a cfqq can migrate across groups depending on whether
    it is sync-noidle or not, so it can happen that at request insertion time
    the cfqq belonged to one cfqg while at request dispatch time it belonged
    to the root group. In this case per-cgroup request stats can go wrong, and
    it also runs into a BUG_ON().

    This patch stashes a cfq group pointer in the rq instead of relying on the
    cfqq->cfqg pointer alone for rq stat accounting.

    [ 65.163523] ------------[ cut here ]------------
    [ 65.164301] kernel BUG at block/blk-cgroup.c:117!
    [ 65.164301] invalid opcode: 0000 [#1] SMP
    [ 65.164301] last sysfs file: /sys/devices/pci0000:00/0000:00:05.0/0000:60:00.1/host9/rport-9:0-0/target9:0:0/9:0:0:2/block/sde/stat
    [ 65.164301] CPU 1
    [ 65.164301] Modules linked in: dm_round_robin dm_multipath qla2xxx scsi_transport_fc dm_zero dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
    [ 65.164301]
    [ 65.164301] Pid: 4505, comm: fio Not tainted 2.6.34-rc4-blk-for-35 #34 0A98h/HP xw8600 Workstation
    [ 65.164301] RIP: 0010:[] [] blkiocg_update_io_remove_stats+0x5b/0xaf
    [ 65.164301] RSP: 0018:ffff8800ba5a79e8 EFLAGS: 00010046
    [ 65.164301] RAX: 0000000000000096 RBX: ffff8800bb268d60 RCX: 0000000000000000
    [ 65.164301] RDX: ffff8800bb268eb8 RSI: 0000000000000000 RDI: ffff8800bb268e00
    [ 65.164301] RBP: ffff8800ba5a7a08 R08: 0000000000000064 R09: 0000000000000001
    [ 65.164301] R10: 0000000000079640 R11: ffff8800a0bd5bf0 R12: ffff8800bab4af01
    [ 65.164301] R13: ffff8800bab4af00 R14: ffff8800bb1d8928 R15: 0000000000000000
    [ 65.164301] FS: 00007f18f75056f0(0000) GS:ffff880001e40000(0000) knlGS:0000000000000000
    [ 65.164301] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    [ 65.164301] CR2: 000000000040e7f0 CR3: 00000000ba52b000 CR4: 00000000000006e0
    [ 65.164301] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    [ 65.164301] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
    [ 65.164301] Process fio (pid: 4505, threadinfo ffff8800ba5a6000, task ffff8800ba45ae80)
    [ 65.164301] Stack:
    [ 65.164301] ffff8800ba5a7a08 ffff8800ba722540 ffff8800bab4af68 ffff8800bab4af68
    [ 65.164301] ffff8800ba5a7a38 ffffffff8121d814 ffff8800ba722540 ffff8800bab4af68
    [ 65.164301] ffff8800ba722540 ffff8800a08f6800 ffff8800ba5a7a68 ffffffff8121d8ca
    [ 65.164301] Call Trace:
    [ 65.164301] [] cfq_remove_request+0xe4/0x116
    [ 65.164301] [] cfq_dispatch_insert+0x84/0xe1
    [ 65.164301] [] cfq_dispatch_requests+0x767/0x8e8
    [ 65.164301] [] ? submit_bio+0xc3/0xcc
    [ 65.164301] [] ? sync_page_killable+0x0/0x35
    [ 65.164301] [] blk_peek_request+0x191/0x1a7
    [ 65.164301] [] ? dm_get_live_table+0x44/0x4f [dm_mod]
    [ 65.164301] [] dm_request_fn+0x38/0x14c [dm_mod]
    [ 65.164301] [] ? sync_page_killable+0x0/0x35
    [ 65.164301] [] __generic_unplug_device+0x32/0x37
    [ 65.164301] [] generic_unplug_device+0x2e/0x3c
    [ 65.164301] [] dm_unplug_all+0x42/0x5b [dm_mod]
    [ 65.164301] [] blk_unplug+0x29/0x2d
    [ 65.164301] [] blk_backing_dev_unplug+0x12/0x14
    [ 65.164301] [] block_sync_page+0x35/0x39
    [ 65.164301] [] sync_page+0x41/0x4a
    [ 65.164301] [] sync_page_killable+0xe/0x35
    [ 65.164301] [] __wait_on_bit_lock+0x46/0x8f
    [ 65.164301] [] __lock_page_killable+0x66/0x6d
    [ 65.164301] [] ? wake_bit_function+0x0/0x33
    [ 65.164301] [] lock_page_killable+0x2c/0x2e
    [ 65.164301] [] generic_file_aio_read+0x361/0x4f0
    [ 65.164301] [] do_sync_read+0xcb/0x108
    [ 65.164301] [] ? security_file_permission+0x16/0x18
    [ 65.164301] [] vfs_read+0xab/0x108
    [ 65.164301] [] sys_read+0x4a/0x6e
    [ 65.164301] [] system_call_fastpath+0x16/0x1b
    [ 65.164301] Code: 00 74 1c 48 8b 8b 60 01 00 00 48 85 c9 75 04 0f 0b eb fe 48 ff c9 48 89 8b 60 01 00 00 eb 1a 48 8b 8b 58 01 00 00 48 85 c9 75 04 0b eb fe 48 ff c9 48 89 8b 58 01 00 00 45 84 e4 74 16 48 8b
    [ 65.164301] RIP [] blkiocg_update_io_remove_stats+0x5b/0xaf
    [ 65.164301] RSP
    [ 65.164301] ---[ end trace 1b2b828753032e68 ]---

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • blk_rq_timed_out_timer() relied on blk_add_timer() never returning a
    timer value of zero, but commit 7838c15b8dd18e78a523513749e5b54bda07b0cb
    removed the code that bumped this value when it was zero.
    Therefore, when jiffies is near wrapping, we could get unlucky and not set
    the timeout value correctly.

    This patch uses a flag to indicate that the timeout value was set and so
    handles jiffies wrap correctly, and it keeps all the logic in one
    function so should be easier to maintain in the future.

    Signed-off-by: Richard Kennedy
    Cc: stable@kernel.org
    Signed-off-by: Jens Axboe

    Richard Kennedy
     

16 Apr, 2010

1 commit


15 Apr, 2010

2 commits

  • After merging the block tree, 20100414's linux-next build (x86_64
    allmodconfig) failed like this:

    ERROR: "get_gendisk" [block/blk-cgroup.ko] undefined!
    ERROR: "sched_clock" [block/blk-cgroup.ko] undefined!

    This happens because the two symbols aren't exported and hence not available
    when blk-cgroup code is built as a module. I've tried to stay consistent with
    the use of EXPORT_SYMBOL or EXPORT_SYMBOL_GPL with the other symbols in the
    respective files.

    Signed-off-by: Divyesh Shah
    Acked-by: Gui Jianfeng
    Signed-off-by: Jens Axboe

    Divyesh Shah
     
  • blk_rq_timed_out_timer() relied on blk_add_timer() never returning a
    timer value of zero, but commit 7838c15b8dd18e78a523513749e5b54bda07b0cb
    removed the code that bumped this value when it was zero.
    Therefore, when jiffies is near wrapping, we could get unlucky and not set
    the timeout value correctly.

    This patch uses a flag to indicate that the timeout value was set and so
    handles jiffies wrap correctly, and it keeps all the logic in one
    function so should be easier to maintain in the future.

    Signed-off-by: Richard Kennedy
    Signed-off-by: Jens Axboe

    Richard Kennedy
     

14 Apr, 2010

3 commits

  • Fixes compile errors in the blk-cgroup code for the empty_time stat, plus
    a merge fix in CFQ. The first error occurred when CONFIG_DEBUG_CFQ_IOSCHED
    was not set.

    Signed-off-by: Divyesh Shah
    Signed-off-by: Jens Axboe

    Divyesh Shah
     
  • Conflicts:
    block/blk-cgroup.c
    block/cfq-iosched.c

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • Changelog from v1:
    o Call blkiocg_update_idle_time_stats() at cfq_rq_enqueued() instead of at
    dispatch time.

    Changelog from original patchset: (in response to Vivek Goyal's comments)
    o group blkiocg_update_blkio_group_dequeue_stats() with other DEBUG functions
    o rename blkiocg_update_set_active_queue_stats() to
    blkiocg_update_avg_queue_size_stats()
    o s/request/io/ in blkiocg_update_request_add_stats() and
    blkiocg_update_request_remove_stats()
    o Call cfq_del_timer() at request dispatch() instead of
    blkiocg_update_idle_time_stats()

    Signed-off-by: Divyesh Shah
    Acked-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Divyesh Shah
     

13 Apr, 2010

1 commit

  • Currently, the IO controller uses blkio.weight to assign a weight for all
    devices. Here a new user interface, "blkio.weight_device", is introduced
    to assign different weights to different devices. blkio.weight becomes the
    default value for devices which are not configured via
    "blkio.weight_device".

    You can use the following format to assign a specific weight to a given
    device:
    #echo "major:minor weight" > blkio.weight_device

    major:minor represents the device number.

    And you can remove the weight for a given device as follows:
    #echo "major:minor 0" > blkio.weight_device

    V1->V2 changes:
    - use user interface "weight_device" instead of "policy" suggested by Vivek
    - rename some struct suggested by Vivek
    - rebase to 2.6-block "for-linus" branch
    - remove a useless list_empty check pointed out by Li Zefan
    - some trivial typo fixes

    V2->V3 changes:
    - Move policy_*_node() functions up to get rid of forward declarations
    - rename related functions by adding prefix "blkio_"

    Signed-off-by: Gui Jianfeng
    Acked-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Gui Jianfeng
     

10 Apr, 2010

1 commit

  • * 'for-linus' of git://git.kernel.dk/linux-2.6-block: (34 commits)
    cfq-iosched: Fix the incorrect timeslice accounting with forced_dispatch
    loop: Update mtime when writing using aops
    block: expose the statistics in blkio.time and blkio.sectors for the root cgroup
    backing-dev: Handle class_create() failure
    Block: Fix block/elevator.c elevator_get() off-by-one error
    drbd: lc_element_by_index() never returns NULL
    cciss: unlock on error path
    cfq-iosched: Do not merge queues of BE and IDLE classes
    cfq-iosched: Add additional blktrace log messages in CFQ for easier debugging
    i2o: Remove the dangerous kobj_to_i2o_device macro
    block: remove 16 bytes of padding from struct request on 64bits
    cfq-iosched: fix a kbuild regression
    block: make CONFIG_BLK_CGROUP visible
    Remove GENHD_FL_DRIVERFS
    block: Export max number of segments and max segment size in sysfs
    block: Finalize conversion of block limits functions
    block: Fix overrun in lcm() and move it to lib
    vfs: improve writeback_inodes_wb()
    paride: fix off-by-one test
    drbd: fix al-to-on-disk-bitmap for 4k logical_block_size
    ...

    Linus Torvalds
     

09 Apr, 2010

5 commits

  • When CFQ dispatches requests forcefully, due to a barrier or a change of
    iosched, it runs through all cfqqs dispatching requests and then expires
    each queue. However, it does not activate a cfqq before flushing its IOs,
    resulting in stale values being used when computing slice_used.
    This patch fixes that by activating the queue before flushing requests
    from each queue.

    This is useful mostly for barrier requests, because when the iosched is
    changing it really doesn't matter if we have incorrect accounting, since
    we're going to tear down all the structures anyway.

    We also now expire the current timeslice before moving on with the
    dispatch, to accurately account the slice used for that cfqq.

    Signed-off-by: Divyesh Shah
    Signed-off-by: Jens Axboe

    Divyesh Shah
     
  • 1) group_wait_time - This is the amount of time the cgroup had to wait to get a
    timeslice for one of its queues from when it became busy, i.e., went from 0
    to 1 request queued. This is different from the io_wait_time which is the
    cumulative total of the amount of time spent by each IO in that cgroup waiting
    in the scheduler queue. This stat is a great way to find out any jobs in the
    fleet that are being starved or waiting for longer than what is expected (due
    to an IO controller bug or any other issue).
    2) empty_time - This is the amount of time a cgroup spends w/o any pending
    requests. This stat is useful when a job does not seem to be able to use its
    assigned disk share by helping check if that is happening due to an IO
    controller bug or because the job is not submitting enough IOs.
    3) idle_time - This is the amount of time spent by the IO scheduler idling
    for a given cgroup in anticipation of a better request than the existing ones
    from other queues/cgroups.

    All these stats are recorded using start and stop events. When reading these
    stats, we do not add the delta between the current time and the last start time
    if we're between the start and stop events. We avoid doing this to make sure
    that these numbers are always monotonically increasing when read. Since we're
    using sched_clock() which may use the tsc as its source, it may induce some
    inconsistency (due to tsc resync across cpus) if we included the current delta.

    Signed-off-by: Divyesh Shah
    Signed-off-by: Jens Axboe

    Divyesh Shah
     
  • These stats are useful for getting a feel for the queue depth of the cgroup,
    i.e., how filled up its queues are at a given instant and over the existence of
    the cgroup. This ability is useful when debugging problems in the wild, as
    it helps understand the application's IO pattern without having to read
    through the userspace code (because it's tedious or just not available)
    and without needing to run blktrace (since you may not have root access
    and/or may not want to disturb performance).

    Signed-off-by: Divyesh Shah
    Signed-off-by: Jens Axboe

    Divyesh Shah
     
  • This includes both the number of bios merged into requests belonging to this
    cgroup as well as the number of requests merged together.
    In the past, we've observed different merging behavior across upstream
    kernels, some by design, some actual bugs. This stat helps a lot in
    debugging such problems when applications report decreased throughput with
    a new kernel version.

    This needed adding an extra elevator function to capture bios being merged as I
    did not want to pollute elevator code with blkiocg knowledge and hence needed
    the accounting invocation to come from CFQ.

    Signed-off-by: Divyesh Shah
    Signed-off-by: Jens Axboe

    Divyesh Shah
     
  • that includes some minor fixes and addresses all comments.

    Changelog: (most based on Vivek Goyal's comments)
    o renamed blkiocg_reset_write to blkiocg_reset_stats
    o more clarification in the documentation on io_service_time and io_wait_time
    o Initialize blkg->stats_lock
    o rename io_add_stat to blkio_add_stat and declare it static
    o use bool for direction and sync
    o derive direction and sync info from existing rq methods
    o use 12 for major:minor string length
    o define io_service_time better to cover the NCQ case
    o add a separate reset_stats interface
    o make the indexed stats a 2d array to simplify macro and function pointer code
    o blkio.time now exports in jiffies as before
    o Added stats description in patch description and
    Documentation/cgroup/blkio-controller.txt
    o Prefix all stats functions with blkio and make them static as applicable
    o replace IO_TYPE_MAX with IO_TYPE_TOTAL
    o Moved #define constant to top of blk-cgroup.c
    o Pass dev_t around instead of char *
    o Add note to documentation file about resetting stats
    o use BLK_CGROUP_MODULE in addition to BLK_CGROUP config option in #ifdef
    statements
    o Avoid struct request specific knowledge in blk-cgroup. blk-cgroup.h now has
    rq_direction() and rq_sync() functions which are used by CFQ and when using
    io-controller at a higher level, bio_* functions can be added.

    Signed-off-by: Divyesh Shah
    Signed-off-by: Jens Axboe

    Divyesh Shah