01 Mar, 2010

6 commits

  • As the comment says, the initial value of last_waited is never used, so
    there is no need to initialise it with the current jiffies. Jiffies is
    hot enough without accessing it for no reason.

    Signed-off-by: Richard Kennedy
    Signed-off-by: Jens Axboe

    Richard Kennedy
     
  • Reorder cfq_rb_root to remove 8 bytes of padding on 64-bit builds,
    consequently removing 56 bytes from cfq_group and 64 bytes from
    cfq_data.

    Signed-off-by: Richard Kennedy
    Signed-off-by: Jens Axboe

    Richard Kennedy
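
    A minimal user-space sketch of the effect, with hypothetical field names
    rather than the actual cfq_rb_root/cfq_group layout: on 64-bit, a 4-byte
    field sandwiched between pointers costs 4 bytes of padding, and moving it
    next to another 4-byte field recovers that space.

    #include <stdio.h>

    struct before {                 /* hypothetical "unordered" layout */
            void *left;
            unsigned int count;     /* 4 bytes + 4 bytes of padding */
            void *active;
            unsigned int total;     /* 4 bytes + 4 bytes of tail padding */
    };

    struct after {                  /* same fields, pointers grouped first */
            void *left;
            void *active;
            unsigned int count;     /* the two 4-byte fields now share one 8-byte slot */
            unsigned int total;
    };

    int main(void)
    {
            printf("before: %zu bytes\n", sizeof(struct before));  /* 32 on LP64 */
            printf("after:  %zu bytes\n", sizeof(struct after));   /* 24 on LP64 */
            return 0;
    }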
     
  • Currently a queue can only dispatch up to 4 requests if there are other queues.
    This isn't optimal; the device can handle more requests, for example AHCI can
    handle 31 requests. I can understand that the limit is for fairness, but we
    could do a tweak: if the queue still has a lot of its slice left, it sounds
    like we could ignore the limit. Testing shows this boosts my workload
    (two-thread random read of an SSD) from 78 MB/s to 100 MB/s.
    Thanks to Corrado and Vivek for their suggestions on the patch.

    Signed-off-by: Shaohua Li
    Signed-off-by: Jens Axboe

    Shaohua Li
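
    A rough sketch of the tweak, with illustrative names and constants rather
    than the actual cfq_may_dispatch() code: keep the small per-queue dispatch
    cap for fairness, but ignore it while the queue still has plenty of its
    time slice left.

    #include <stdbool.h>
    #include <stdio.h>

    #define BASE_QUANTUM 4          /* normal cap when other queues are busy */

    struct queue_state {
            int dispatched;         /* requests dispatched in this slice */
            long slice_end;         /* when the slice expires (jiffies-like) */
            long slice_len;         /* full slice length */
    };

    static bool may_dispatch(const struct queue_state *q, long now, int busy_queues)
    {
            if (busy_queues == 1)
                    return true;    /* sole busy queue: no cap at all */

            if (q->dispatched < BASE_QUANTUM)
                    return true;    /* still under the fair-share cap */

            /* the tweak: with more than half the slice left, ignore the cap */
            return (q->slice_end - now) > q->slice_len / 2;
    }

    int main(void)
    {
            struct queue_state q = { .dispatched = 6, .slice_end = 100, .slice_len = 100 };

            printf("early in slice: %d\n", may_dispatch(&q, 10, 2));  /* 1 */
            printf("late in slice:  %d\n", may_dispatch(&q, 80, 2));  /* 0 */
            return 0;
    }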
     
  • Counters for requests "in flight" and "in driver" are used asymmetrically
    in cfq_may_dispatch, and have slightly different meanings.
    We split the rq_in_flight counter (formerly sync_flight) to count both sync
    and async requests, in order to use this one, which is more accurate in
    some corner cases.
    The rq_in_driver counter is coalesced, since the individual sync/async counts
    are no longer used.

    Signed-off-by: Corrado Zoccolo
    Signed-off-by: Jens Axboe

    Corrado Zoccolo
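
    The bookkeeping change can be pictured with a small stand-alone sketch,
    using simplified names rather than the real cfq_data fields: in-flight
    requests are tracked per sync/async class, while "in driver" becomes a
    single coalesced counter.

    #include <stdbool.h>
    #include <stdio.h>

    struct sched_counters {
            int rq_in_flight[2];    /* [0] = async, [1] = sync */
            int rq_in_driver;       /* coalesced: no sync/async split needed */
    };

    static void on_dispatch(struct sched_counters *c, bool sync)
    {
            c->rq_in_flight[sync]++;
            c->rq_in_driver++;
    }

    static void on_completion(struct sched_counters *c, bool sync)
    {
            c->rq_in_flight[sync]--;
            c->rq_in_driver--;
    }

    int main(void)
    {
            struct sched_counters c = { { 0, 0 }, 0 };

            on_dispatch(&c, true);
            on_dispatch(&c, false);
            on_completion(&c, false);
            printf("sync in flight: %d, in driver: %d\n",
                   c.rq_in_flight[1], c.rq_in_driver);      /* 1, 1 */
            return 0;
    }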
     
  • CFQ currently applies the same logic of detecting seeky queues and
    grouping them together for rotational disks as well as SSDs.
    For SSDs, the time to complete a request doesn't depend on the
    request location, but only on the size.
    This patch therefore changes the criterion to group queues by
    request size in case of SSDs, in order to achieve better fairness.

    Signed-off-by: Corrado Zoccolo
    Signed-off-by: Jens Axboe

    Corrado Zoccolo
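
    The changed criterion might look roughly like this, with illustrative
    thresholds and names rather than the kernel's constants: on rotational
    disks a request counts against the queue when it seeks far from the
    previous one; on an SSD it counts when it is small, since completion time
    there depends on size rather than location.

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define SEEK_THRESHOLD_SECTORS 8192   /* illustrative distance cutoff */
    #define SMALL_REQUEST_SECTORS  64     /* illustrative size cutoff (32 KB) */

    static bool request_marks_queue(bool nonrot, long long last_pos,
                                    long long rq_pos, unsigned int rq_sectors)
    {
            if (nonrot)                   /* SSD: only the request size matters */
                    return rq_sectors < SMALL_REQUEST_SECTORS;

            /* rotational: distance from the previous request matters */
            return llabs(rq_pos - last_pos) > SEEK_THRESHOLD_SECTORS;
    }

    int main(void)
    {
            /* a large request far from the last one: seeky on rotational, not on SSD */
            printf("SSD:        %d\n", request_marks_queue(true, 0, 1000000, 256));   /* 0 */
            printf("rotational: %d\n", request_marks_queue(false, 0, 1000000, 256));  /* 1 */
            return 0;
    }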
     
  • Current seeky detection is based on the average seek length.
    This is suboptimal, since the average will not distinguish between:
    * a process doing medium-sized seeks
    * a process doing some sequential requests interleaved with larger seeks
    and even a medium seek can take a lot of time, if the requested sector
    happens to be behind the disk head in the rotation (50% probability).

    Therefore, we change the seeky queue detection to work as follows:
    * each request can be classified as sequential if it is very close to
    the current head position, i.e. it is likely in the disk cache (disks
    usually read more data than requested, and put it in cache for
    subsequent reads). Otherwise, the request is classified as seeky.
    * a history window of the last 32 requests is kept, storing the
    classification result.
    * a queue is marked as seeky if more than 1/8 of the last 32 requests
    were seeky.

    This patch fixes a regression reported by Yanmin, on mmap 64k random
    reads.

    Reported-by: Yanmin Zhang
    Signed-off-by: Corrado Zoccolo
    Signed-off-by: Jens Axboe

    Corrado Zoccolo
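
    A stand-alone sketch of the scheme, keeping the 32-request window and 1/8
    threshold from the description but using illustrative names and a made-up
    "close to the head" cutoff: each request contributes one bit of history,
    and the queue is seeky once more than 1/8 of the last 32 bits are set.

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define HIST_BITS      32
    #define SEEKY_THRESH   (HIST_BITS / 8)     /* more than 4 seeky requests */
    #define CLOSE_SECTORS  64                  /* illustrative "likely cached" distance */

    struct queue_hist {
            unsigned int seek_history;         /* one bit per request, newest in bit 0 */
            long long last_pos;                /* position reached by the previous request */
    };

    static void account_request(struct queue_hist *q, long long pos)
    {
            bool seeky = llabs(pos - q->last_pos) > CLOSE_SECTORS;

            q->seek_history = (q->seek_history << 1) | seeky;
            q->last_pos = pos;
    }

    static bool queue_is_seeky(const struct queue_hist *q)
    {
            return __builtin_popcount(q->seek_history) > SEEKY_THRESH;
    }

    int main(void)
    {
            struct queue_hist q = { 0, 0 };
            long long pos = 0;

            for (int i = 0; i < 32; i++) {
                    pos += (i % 4 == 0) ? 100000 : 8;      /* one big jump every 4 requests */
                    account_request(&q, pos);
            }
            printf("queue seeky: %d\n", queue_is_seeky(&q));  /* 8/32 seeky => 1 */
            return 0;
    }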
     

26 Feb, 2010

6 commits


25 Feb, 2010

1 commit


23 Feb, 2010

2 commits


22 Feb, 2010

2 commits


05 Feb, 2010

1 commit

  • Currently we split seeky coop queues after 1s, which is too long. The patch
    below marks a seeky coop queue's split_coop flag after one slice. After that,
    if new requests come in, the queues will be split. The patch was suggested by
    Corrado.

    Signed-off-by: Shaohua Li
    Reviewed-by: Corrado Zoccolo
    Acked-by: Jeff Moyer
    Signed-off-by: Jens Axboe

    Shaohua Li
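
    A minimal sketch of the timing change, with illustrative names rather than
    the actual cfq_queue flags: instead of waiting a fixed 1s of wall-clock
    time, the merged queue is flagged for splitting when its slice expires,
    and actually split when the next request arrives.

    #include <stdbool.h>
    #include <stdio.h>

    struct coop_queue {
            bool merged;         /* currently shared by cooperating processes */
            bool split_pending;  /* set at slice expiry, acted on at next request */
    };

    static void slice_expired(struct coop_queue *q, bool seeky)
    {
            if (q->merged && seeky)
                    q->split_pending = true;   /* was: wait ~1s before splitting */
    }

    static void new_request(struct coop_queue *q)
    {
            if (q->split_pending) {
                    q->merged = false;         /* break the shared queue apart */
                    q->split_pending = false;
            }
    }

    int main(void)
    {
            struct coop_queue q = { .merged = true, .split_pending = false };

            slice_expired(&q, true);
            new_request(&q);
            printf("still merged: %d\n", q.merged);   /* 0: split after one slice */
            return 0;
    }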
     

03 Feb, 2010

1 commit

  • A few weeks back, Shaohua Li had posted a similar patch. I am reposting it
    with more test results.

    This patch does two things.

    - Do not idle on async queues.

    - It also changes the write queue depth CFQ drives (cfq_may_dispatch()).
    Currently, we seem to be driving a queue depth of 1 always for WRITES. This is
    true even if there is only one write queue in the system, and all the logic
    of infinite queue depth in the case of a single busy queue, as well as slowly
    increasing queue depth based on the last delayed sync request, does not seem to
    be kicking in at all.

    This patch will allow deeper WRITE queue depths (subject to the other
    WRITE queue depth constraints like cfq_quantum and the last delayed sync
    request).

    Shaohua Li had reported getting more out of his SSD. For me, I have one
    LUN exported from an HP EVA, and with pure buffered writes I can get more
    out of the system. Following are test results of pure buffered writes
    (with end_fsync=1) with the vanilla and patched kernels. These results are
    the average of 3 sets of runs with an increasing number of threads.

    AVERAGE[bufwfs][vanilla]
    -------
    job     Set  NR  ReadBW(KB/s)  MaxClat(us)  WriteBW(KB/s)  MaxClat(us)
    ---     ---  --  ------------  -----------  -------------  -----------
    bufwfs  3    1   0             0            95349          474141
    bufwfs  3    2   0             0            100282         806926
    bufwfs  3    4   0             0            109989         2.7301e+06
    bufwfs  3    8   0             0            116642         3762231
    bufwfs  3    16  0             0            118230         6902970

    AVERAGE[bufwfs] [patched kernel]
    -------
    bufwfs  3    1   0             0            270722         404352
    bufwfs  3    2   0             0            206770         1.06552e+06
    bufwfs  3    4   0             0            195277         1.62283e+06
    bufwfs  3    8   0             0            260960         2.62979e+06
    bufwfs  3    16  0             0            299260         1.70731e+06

    I also ran buffered writes along with some sequential reads and some
    buffered reads going on in the system on a SATA disk, because the potential
    risk is that we should not be driving the queue depth higher in the presence
    of sync IO, so as to keep the max clat low.

    With some random and sequential reads going on in the system on one SATA
    disk, I did not see any significant increase in max clat. So it looks like
    the other WRITE queue depth control logic is doing its job. Here are the
    results.

    AVERAGE[brr, bsr, bufw together] [vanilla]
    -------
    job   Set  NR  ReadBW(KB/s)  MaxClat(us)  WriteBW(KB/s)  MaxClat(us)
    ---   ---  --  ------------  -----------  -------------  -----------
    brr   3    1   850           546345       0              0
    bsr   3    1   14650         729543       0              0
    bufw  3    1   0             0            23908          8274517

    brr   3    2   981.333       579395       0              0
    bsr   3    2   14149.7       1175689      0              0
    bufw  3    2   0             0            21921          1.28108e+07

    brr   3    4   898.333       1.75527e+06  0              0
    bsr   3    4   12230.7       1.40072e+06  0              0
    bufw  3    4   0             0            19722.3        2.4901e+07

    brr   3    8   900           3160594      0              0
    bsr   3    8   9282.33       1.91314e+06  0              0
    bufw  3    8   0             0            18789.3        23890622

    AVERAGE[brr, bsr, bufw mixed] [patched kernel]
    -------
    job   Set  NR  ReadBW(KB/s)  MaxClat(us)  WriteBW(KB/s)  MaxClat(us)
    ---   ---  --  ------------  -----------  -------------  -----------
    brr   3    1   837           417973       0              0
    bsr   3    1   14357.7       591275       0              0
    bufw  3    1   0             0            24869.7        8910662

    brr   3    2   1038.33       543434       0              0
    bsr   3    2   13351.3       1205858      0              0
    bufw  3    2   0             0            18626.3        13280370

    brr   3    4   913           1.86861e+06  0              0
    bsr   3    4   12652.3       1430974      0              0
    bufw  3    4   0             0            15343.3        2.81305e+07

    brr   3    8   890           2.92695e+06  0              0
    bsr   3    8   9635.33       1.90244e+06  0              0
    bufw  3    8   0             0            17200.3        24424392

    So it looks like it might make sense to include this patch.

    Thanks
    Vivek

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
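
    A rough sketch of the two behaviours, with illustrative names rather than
    the actual cfq_arm_slice_timer()/cfq_may_dispatch() code: the idle timer
    is never armed for async queues, and async dispatch depth is bounded by a
    quantum-style limit instead of being pinned at 1, except when sync IO was
    recently delayed.

    #include <stdbool.h>
    #include <stdio.h>

    #define QUANTUM 8       /* illustrative stand-in for the dispatch cap */

    static bool should_idle(bool sync_queue, bool idle_window)
    {
            if (!sync_queue)
                    return false;          /* change 1: never idle on async queues */
            return idle_window;
    }

    static int async_depth_limit(bool sync_recently_delayed)
    {
            if (sync_recently_delayed)
                    return 1;              /* keep protecting delayed sync IO */
            return QUANTUM;                /* change 2: writes may go deeper */
    }

    int main(void)
    {
            printf("idle on async queue: %d\n", should_idle(false, true));   /* 0 */
            printf("async depth limit:   %d\n", async_depth_limit(false));   /* 8 */
            return 0;
    }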
     

01 Feb, 2010

1 commit

  • I triggered a lockdep warning as following.

    =======================================================
    [ INFO: possible circular locking dependency detected ]
    2.6.33-rc2 #1
    -------------------------------------------------------
    test_io_control/7357 is trying to acquire lock:
    (blkio_list_lock){+.+...}, at: [] blkiocg_weight_write+0x82/0x9e

    but task is already holding lock:
    (&(&blkcg->lock)->rlock){......}, at: [] blkiocg_weight_write+0x3b/0x9e

    which lock already depends on the new lock.

    the existing dependency chain (in reverse order) is:

    -> #2 (&(&blkcg->lock)->rlock){......}:
    [] validate_chain+0x8bc/0xb9c
    [] __lock_acquire+0x723/0x789
    [] lock_acquire+0x90/0xa7
    [] _raw_spin_lock_irqsave+0x27/0x5a
    [] blkiocg_add_blkio_group+0x1a/0x6d
    [] cfq_get_queue+0x225/0x3de
    [] cfq_set_request+0x217/0x42d
    [] elv_set_request+0x17/0x26
    [] get_request+0x203/0x2c5
    [] get_request_wait+0x18/0x10e
    [] __make_request+0x2ba/0x375
    [] generic_make_request+0x28d/0x30f
    [] submit_bio+0x8a/0x8f
    [] submit_bh+0xf0/0x10f
    [] ll_rw_block+0xc0/0xf9
    [] ext3_find_entry+0x319/0x544 [ext3]
    [] ext3_lookup+0x2c/0xb9 [ext3]
    [] do_lookup+0xd3/0x172
    [] link_path_walk+0x5fb/0x95c
    [] path_walk+0x3c/0x81
    [] do_path_lookup+0x21/0x8a
    [] do_filp_open+0xf0/0x978
    [] open_exec+0x1b/0xb7
    [] do_execve+0xbb/0x266
    [] sys_execve+0x24/0x4a
    [] ptregs_execve+0x12/0x18

    -> #1 (&(&q->__queue_lock)->rlock){..-.-.}:
    [] validate_chain+0x8bc/0xb9c
    [] __lock_acquire+0x723/0x789
    [] lock_acquire+0x90/0xa7
    [] _raw_spin_lock_irqsave+0x27/0x5a
    [] cfq_unlink_blkio_group+0x17/0x41
    [] blkiocg_destroy+0x72/0xc7
    [] cgroup_diput+0x4a/0xb2
    [] dentry_iput+0x93/0xb7
    [] d_kill+0x1c/0x36
    [] dput+0xf5/0xfe
    [] do_rmdir+0x95/0xbe
    [] sys_rmdir+0x10/0x12
    [] sysenter_do_call+0x12/0x32

    -> #0 (blkio_list_lock){+.+...}:
    [] validate_chain+0x61c/0xb9c
    [] __lock_acquire+0x723/0x789
    [] lock_acquire+0x90/0xa7
    [] _raw_spin_lock+0x1e/0x4e
    [] blkiocg_weight_write+0x82/0x9e
    [] cgroup_file_write+0xc6/0x1c0
    [] vfs_write+0x8c/0x116
    [] sys_write+0x3b/0x60
    [] sysenter_do_call+0x12/0x32

    other info that might help us debug this:

    1 lock held by test_io_control/7357:
    #0: (&(&blkcg->lock)->rlock){......}, at: [] blkiocg_weight_write+0x3b/0x9e
    stack backtrace:
    Pid: 7357, comm: test_io_control Not tainted 2.6.33-rc2 #1
    Call Trace:
    [] print_circular_bug+0x91/0x9d
    [] validate_chain+0x61c/0xb9c
    [] __lock_acquire+0x723/0x789
    [] lock_acquire+0x90/0xa7
    [] ? blkiocg_weight_write+0x82/0x9e
    [] _raw_spin_lock+0x1e/0x4e
    [] ? blkiocg_weight_write+0x82/0x9e
    [] blkiocg_weight_write+0x82/0x9e
    [] cgroup_file_write+0xc6/0x1c0
    [] ? trace_hardirqs_off+0xb/0xd
    [] ? cpu_clock+0x2e/0x44
    [] ? security_file_permission+0xf/0x11
    [] ? rw_verify_area+0x8a/0xad
    [] ? cgroup_file_write+0x0/0x1c0
    [] vfs_write+0x8c/0x116
    [] sys_write+0x3b/0x60
    [] sysenter_do_call+0x12/0x32

    To prevent deadlock, we should take the locks in the following sequence:

    blkio_list_lock -> queue_lock -> blkcg_lock.

    The following patch should fix this bug.

    Signed-off-by: Gui Jianfeng
    Signed-off-by: Jens Axboe

    Gui Jianfeng
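
    A user-space sketch of the ordering rule, with pthread mutexes standing in
    for the kernel spinlocks (illustrative paths, not the actual block-layer
    functions): every path that needs more than one of these locks must take
    them in the same global order, so no circular wait can form.

    /* build with: cc -pthread lockorder.c */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t blkio_list_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t queue_lock      = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t blkcg_lock      = PTHREAD_MUTEX_INITIALIZER;

    /* e.g. a weight-write style path: list lock before the cgroup lock */
    static void weight_write_path(void)
    {
            pthread_mutex_lock(&blkio_list_lock);
            pthread_mutex_lock(&blkcg_lock);
            /* ... update per-policy weights ... */
            pthread_mutex_unlock(&blkcg_lock);
            pthread_mutex_unlock(&blkio_list_lock);
    }

    /* e.g. a teardown style path touching the queue: same relative order */
    static void group_teardown_path(void)
    {
            pthread_mutex_lock(&queue_lock);
            pthread_mutex_lock(&blkcg_lock);
            /* ... unlink the group from the queue and the cgroup ... */
            pthread_mutex_unlock(&blkcg_lock);
            pthread_mutex_unlock(&queue_lock);
    }

    int main(void)
    {
            weight_write_path();
            group_teardown_path();
            printf("both paths honour blkio_list_lock -> queue_lock -> blkcg_lock\n");
            return 0;
    }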
     

29 Jan, 2010

1 commit

  • Updated 'nomerges' tunable to accept a value of '2' - indicating that _no_
    merges at all are to be attempted (not even the simple one-hit cache).

    The following table illustrates the additional benefit - 5 minute runs of
    a random I/O load were applied to a dozen devices on a 16-way x86_64 system.

    nomerges  Throughput    %System      Improvement (tput / %sys)
    --------  ------------  -----------  --------------------------
    0         12.45 MB/sec  0.669365609
    1         12.50 MB/sec  0.641519199  0.40% / 2.71%
    2         12.52 MB/sec  0.639849750  0.56% / 2.96%

    Signed-off-by: Alan D. Brunelle
    Signed-off-by: Jens Axboe

    Alan D. Brunelle
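
    The tunable is the existing /sys/block/<dev>/queue/nomerges attribute; the
    change only widens its accepted range. How a three-valued knob gates merge
    attempts can be sketched as follows (illustrative code, not the block
    layer's): 0 does full merge lookups, 1 skips the expensive lookups but
    keeps the one-hit cache, 2 attempts no merging at all.

    #include <stdbool.h>
    #include <stdio.h>

    enum nomerges { MERGES_FULL = 0, MERGES_ONEHIT_ONLY = 1, MERGES_NONE = 2 };

    static bool try_onehit_cache(void) { return false; }  /* stand-in: cache miss */
    static bool try_full_lookup(void)  { return false; }  /* stand-in: no candidate */

    static bool attempt_merge(enum nomerges mode)
    {
            if (mode == MERGES_NONE)
                    return false;              /* new '2' setting: don't even check */

            if (try_onehit_cache())
                    return true;               /* cheap last-merged-request check */

            if (mode == MERGES_ONEHIT_ONLY)
                    return false;

            return try_full_lookup();          /* full merge candidate search */
    }

    int main(void)
    {
            printf("merged with nomerges=2: %d\n", attempt_merge(MERGES_NONE));  /* 0 */
            return 0;
    }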
     

11 Jan, 2010

6 commits


29 Dec, 2009

2 commits


28 Dec, 2009

1 commit


21 Dec, 2009

1 commit


18 Dec, 2009

3 commits

  • o CFQ now internally divides cfq queues into three workload categories: sync-idle,
    sync-noidle and async. Which workload to run depends primarily on the rb_key
    offset across the three service trees, which is a combination of multiple things
    including the time at which the queue got queued on the service tree.

    There is one exception though. That is, if we switched the prio class, say
    we served some RT tasks and again started serving the BE class, then within
    the BE class we always started with the sync-noidle workload irrespective of the
    rb_key offset in the service trees.

    This can provide better latencies for the sync-noidle workload in the presence
    of RT tasks.

    o This patch gets rid of that exception, so which workload to run within a
    class always depends on the lowest rb_key across the service trees. The reason
    is that now we have multiple BE class groups, and if we always switch
    to the sync-noidle workload within a group, we can potentially starve a sync-idle
    workload within that group. The same is true for the async workload, which will
    be in the root group. Also, the workload switching within a group would become
    very unpredictable, as it would then depend on whether some RT workload was
    running in the system or not.

    Signed-off-by: Vivek Goyal
    Reviewed-by: Gui Jianfeng
    Acked-by: Corrado Zoccolo
    Signed-off-by: Jens Axboe

    Vivek Goyal
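
    The selection rule after this change can be sketched as follows, with
    illustrative names rather than the actual choose_service_tree() code:
    within the current priority class and group, always pick the workload
    type whose service tree holds the smallest rb_key, with no special-casing
    after an RT-to-BE class switch.

    #include <stdio.h>

    enum wl_type { ASYNC_WL, SYNC_NOIDLE_WL, SYNC_WL, WL_TYPES };

    static const char *wl_name[WL_TYPES] = { "async", "sync-noidle", "sync-idle" };

    /* lowest rb_key (leftmost entry) of each workload's service tree; -1 = empty */
    static enum wl_type choose_workload(const long rb_key[WL_TYPES])
    {
            enum wl_type best = SYNC_WL;
            long best_key = -1;

            for (int i = 0; i < WL_TYPES; i++) {
                    if (rb_key[i] < 0)
                            continue;                    /* empty service tree */
                    if (best_key < 0 || rb_key[i] < best_key) {
                            best_key = rb_key[i];
                            best = i;
                    }
            }
            return best;
    }

    int main(void)
    {
            long keys[WL_TYPES] = { 900, 450, 300 };     /* async, sync-noidle, sync-idle */

            printf("run the %s workload\n", wl_name[choose_workload(keys)]);
            return 0;
    }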
     
  • o Currently the code does not seem to be using cfqd->nr_groups. Get rid of it.

    Signed-off-by: Vivek Goyal
    Reviewed-by: Gui Jianfeng
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o allow_merge() already checks whether the submitting task is pointing to the
    same cfqq that the rq has been queued in. If everything is fine, we should not
    have a task in one cgroup holding a pointer to a cfqq in another cgroup.

    Well, I guess in some situations it can happen, and that is when a random
    IO queue has been moved into the root cgroup for group_isolation=0. In
    this case, the task's cgroup/group is different from where the cfqq actually
    is, but this is intentional and in this case merging should be allowed.

    The second situation is where, due to the close cooperator patches, multiple
    processes can be sharing a cfqq. If everything is implemented right, we should
    not end up in a situation where tasks from different processes in different
    groups are sharing the same cfqq, as we allow merging of cooperating queues
    only if they are in the same group.

    Signed-off-by: Vivek Goyal
    Reviewed-by: Gui Jianfeng
    Signed-off-by: Jens Axboe

    Vivek Goyal
     

16 Dec, 2009

2 commits

  • Commit 86b37281411cf1e9bc0a6b5406c45edb7bd9ea5d adds a check for
    misaligned stacking offsets, but it's buggy since the defaults are 0.
    Hence all dm devices that pass in a non-zero starting offset will
    be marked as misaligned and dm will complain.

    A real fix is coming; in the meantime, disable the discard granularity
    check so that users don't worry about dm reporting misaligned
    devices.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • * 'for-2.6.33' of git://git.kernel.dk/linux-2.6-block:
    cfq: set workload as expired if it doesn't have any slice left
    Fix a CFQ crash in "for-2.6.33" branch of block tree
    cfq: Remove wait_request flag when idle time is being deleted
    cfq-iosched: commenting non-obvious initialization
    cfq-iosched: Take care of corner cases of group losing share due to deletion
    cfq-iosched: Get rid of cfqq wait_busy_done flag
    cfq: Optimization for close cooperating queue searching
    block,xd: Delay allocation of DMA buffers until device is known
    drbd: Following the hmac change to SHASH (see linux commit 8bd1209cfff)
    cfq-iosched: reduce write depth only if sync was delayed

    Linus Torvalds
     

15 Dec, 2009

1 commit


11 Dec, 2009

1 commit

  • I think my previous patch introduced a bug which can lead to CFQ hitting
    BUG_ON().

    The offending commit in the for-2.6.33 branch is:

    commit 7667aa0630407bc07dc38dcc79d29cc0a65553c1
    Author: Vivek Goyal
    Date: Tue Dec 8 17:52:58 2009 -0500

    cfq-iosched: Take care of corner cases of group losing share due to deletion

    While doing some stress testing on my box, I encountered the following.

    login: [ 3165.148841] BUG: scheduling while
    atomic: swapper/0/0x10000100
    [ 3165.149821] Modules linked in: cfq_iosched dm_multipath qla2xxx igb
    scsi_transport_fc dm_snapshot [last unloaded: scsi_wait_scan]
    [ 3165.149821] Pid: 0, comm: swapper Not tainted
    2.6.32-block-for-33-merged-new #3
    [ 3165.149821] Call Trace:
    [ 3165.149821] [] __schedule_bug+0x5c/0x60
    [ 3165.149821] [] ? __wake_up+0x44/0x4d
    [ 3165.149821] [] schedule+0xe3/0x7bc
    [ 3165.149821] [] ? cpumask_next+0x1d/0x1f
    [ 3165.149821] [] ? cfq_dispatch_requests+0x6ba/0x93e
    [cfq_iosched]
    [ 3165.149821] [] __cond_resched+0x2a/0x35
    [ 3165.149821] [] ? cfq_dispatch_requests+0x6ba/0x93e
    [cfq_iosched]
    [ 3165.149821] [] _cond_resched+0x2c/0x37
    [ 3165.149821] [] is_valid_bugaddr+0x16/0x2f
    [ 3165.149821] [] report_bug+0x18/0xac
    [ 3165.149821] [] die+0x39/0x63
    [ 3165.149821] [] do_trap+0x11a/0x129
    [ 3165.149821] [] do_invalid_op+0x96/0x9f
    [ 3165.149821] [] ? cfq_dispatch_requests+0x6ba/0x93e
    [cfq_iosched]
    [ 3165.149821] [] ? enqueue_task+0x5c/0x67
    [ 3165.149821] [] ? task_rq_unlock+0x11/0x13
    [ 3165.149821] [] ? try_to_wake_up+0x292/0x2a4
    [ 3165.149821] [] invalid_op+0x15/0x20
    [ 3165.149821] [] ? cfq_dispatch_requests+0x6ba/0x93e
    [cfq_iosched]
    [ 3165.149821] [] ? virt_to_head_page+0xe/0x2f
    [ 3165.149821] [] blk_peek_request+0x191/0x1a7
    [ 3165.149821] [] ? kobject_get+0x1a/0x21
    [ 3165.149821] [] scsi_request_fn+0x82/0x3df
    [ 3165.149821] [] ? bio_fs_destructor+0x15/0x17
    [ 3165.149821] [] ? virt_to_head_page+0xe/0x2f
    [ 3165.149821] [] __blk_run_queue+0x42/0x71
    [ 3165.149821] [] blk_run_queue+0x26/0x3a
    [ 3165.149821] [] scsi_run_queue+0x2de/0x375
    [ 3165.149821] [] ? put_device+0x17/0x19
    [ 3165.149821] [] scsi_next_command+0x3b/0x4b
    [ 3165.149821] [] scsi_io_completion+0x1c9/0x3f5
    [ 3165.149821] [] scsi_finish_command+0xb5/0xbe

    I think I have hit the following BUG_ON() in cfq_dispatch_request().

    BUG_ON(RB_EMPTY_ROOT(&cfqq->sort_list));

    Please find attached the patch to fix it. I have done some stress testing
    with it and have not seen it happening again.

    o We should wait on a queue even after slice expiry only if it is empty. If
    the queue is not empty then continue to expire it.

    o If we decide to keep the queue, then make cfqq=NULL. Otherwise select_queue()
    will return a valid cfqq and cfq_dispatch_request() can hit the following
    BUG_ON().

    BUG_ON(RB_EMPTY_ROOT(&cfqq->sort_list))

    Reviewed-by: Jeff Moyer
    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
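
    A sketch of the two rules from the fix, with illustrative types and names
    rather than the real cfq_select_queue(): after slice expiry, only an empty
    queue is kept around for waiting, and when it is kept nothing is handed to
    dispatch, so the dispatch path cannot run into an empty sort_list and trip
    the BUG_ON.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct queue {
            int queued;        /* requests still on the queue's sort_list */
            bool wait_busy;    /* worth waiting on to preserve its share */
    };

    /* Called when the active queue's slice has expired. Returns the queue that
     * dispatch may use right now (NULL = nothing), and reports whether the
     * active queue was kept or expired. */
    static struct queue *handle_slice_expiry(struct queue *active, bool *kept)
    {
            if (active->queued == 0 && active->wait_busy) {
                    *kept = true;      /* rule 1: only an empty queue is kept */
                    return NULL;       /* rule 2: never hand a kept, empty queue to dispatch */
            }
            *kept = false;             /* non-empty: expire it and move on */
            return NULL;               /* a different queue gets selected next round */
    }

    int main(void)
    {
            bool kept;
            struct queue q = { .queued = 2, .wait_busy = true };

            handle_slice_expiry(&q, &kept);
            printf("queue kept after expiry: %d\n", kept);   /* 0: it still has requests */
            return 0;
    }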
     

10 Dec, 2009

2 commits

  • Remove the wait_request flag when the idle time is being deleted; otherwise
    it'll hit this path every time a request is enqueued.

    Signed-off-by: Gui Jianfeng
    Signed-off-by: Jens Axboe

    Gui Jianfeng
     
  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial: (42 commits)
    tree-wide: fix misspelling of "definition" in comments
    reiserfs: fix misspelling of "journaled"
    doc: Fix a typo in slub.txt.
    inotify: remove superfluous return code check
    hdlc: spelling fix in find_pvc() comment
    doc: fix regulator docs cut-and-pasteism
    mtd: Fix comment in Kconfig
    doc: Fix IRQ chip docs
    tree-wide: fix assorted typos all over the place
    drivers/ata/libata-sff.c: comment spelling fixes
    fix typos/grammos in Documentation/edac.txt
    sysctl: add missing comments
    fs/debugfs/inode.c: fix comment typos
    sgivwfb: Make use of ARRAY_SIZE.
    sky2: fix sky2_link_down copy/paste comment error
    tree-wide: fix typos "couter" -> "counter"
    tree-wide: fix typos "offest" -> "offset"
    fix kerneldoc for set_irq_msi()
    spidev: fix double "of of" in comment
    comment typo fix: sybsystem -> subsystem
    ...

    Linus Torvalds