19 Apr, 2011

1 commit

  • For some configurations of CONFIG_PREEMPT that is not true. So
    get rid of __call_for_each_cic() and always use the explicitly
    rcu_read_lock() protected call_for_each_cic() instead.

    This fixes a potential bug related to IO scheduler removal or
    online switching.

    Thanks to Paul McKenney for clarifying this.
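
    As a rough sketch of the resulting shape (helper and field names follow
    the CFQ code of this period, so treat the details as approximate), the
    iteration is always wrapped in an explicit RCU read-side critical
    section:

    static void call_for_each_cic(struct io_context *ioc,
                                  void (*func)(struct io_context *,
                                               struct cfq_io_context *))
    {
            struct cfq_io_context *cic;
            struct hlist_node *n;

            /* explicit RCU protection, rather than relying on a given
             * CONFIG_PREEMPT configuration to provide it implicitly */
            rcu_read_lock();
            hlist_for_each_entry_rcu(cic, n, &ioc->cic_list, cic_list)
                    func(ioc, cic);
            rcu_read_unlock();
    }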

    Signed-off-by: Jens Axboe

    Jens Axboe
     

18 Apr, 2011

1 commit

  • Instead of overloading __blk_run_queue to force an offload to kblockd,
    add a new blk_run_queue_async helper to do it explicitly. I've kept
    the blk_queue_stopped check for now, but I suspect it's not needed,
    as the check we do when the workqueue item runs should be enough.
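
    A minimal sketch of such a helper, assuming the queue carries a delayed
    work item for kblockd (the q->delay_work field name is an assumption):

    void blk_run_queue_async(struct request_queue *q)
    {
            /* keep the stopped check for now, as noted above */
            if (likely(!blk_queue_stopped(q)))
                    queue_delayed_work(kblockd_workqueue, &q->delay_work, 0);
    }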

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Jens Axboe

    Christoph Hellwig
     

31 Mar, 2011

1 commit


23 Mar, 2011

3 commits

  • Remove think time checking. A queue with a high think time might
    dispatch several requests and then go away. Limiting such a queue
    seems meaningless, and removing the check also simplifies the code.
    This was suggested by Vivek.

    Signed-off-by: Shaohua Li
    Acked-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Li, Shaohua
     
  • For v2, I added back lines to cfq_preempt_queue() that were removed
    during updates for accounting unaccounted_time. Thanks for pointing out
    that I'd missed these, Vivek.

    The previous commit, "cfq-iosched: Don't set active queue in preempt",
    wrongly cleared stats for preempting queues, because when we choose a
    queue to preempt, it still isn't necessarily scheduled next.

    Thanks to Vivek Goyal for figuring this out and understanding how the
    preemption code works.

    Signed-off-by: Justin TerAvest
    Signed-off-by: Jens Axboe

    Justin TerAvest
     
  • Commit "Add unaccounted time to timeslice_used" changed the behavior of
    cfq_preempt_queue to set cfqq active. Vivek pointed out that other
    preemption rules might get involved, so we shouldn't manually set which
    queue is active.

    This cleans up the code to just clear the queue stats at preemption
    time.

    Signed-off-by: Justin TerAvest
    Signed-off-by: Jens Axboe

    Justin TerAvest
     

17 Mar, 2011

1 commit

  • Version 3 is updated to apply to for-2.6.39/core.

    For version 2, I took Vivek's advice and made sure we update the group
    weight from cfq_group_service_tree_add().

    If a weight was updated while a group is on the service tree, the
    calculation for the total weight of the service tree can be adjusted
    improperly, which either leads to bad service tree weights, or
    potentially crashes (if total_weight becomes 0).

    This patch defers updates to the weight until a group is off the service
    tree.
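
    A sketch of the deferral (field names such as new_weight and
    needs_update are illustrative): the pending weight is applied only from
    cfq_group_service_tree_add(), when the group is known to be off the
    tree, so total_weight only ever sums weights that are actually on it.

    static void cfq_update_group_weight(struct cfq_group *cfqg)
    {
            /* only safe while the group is not on the service tree */
            BUG_ON(!RB_EMPTY_NODE(&cfqg->rb_node));
            if (cfqg->needs_update) {
                    cfqg->weight = cfqg->new_weight;
                    cfqg->needs_update = false;
            }
    }

    static void cfq_group_service_tree_add(struct cfq_rb_root *st,
                                           struct cfq_group *cfqg)
    {
            cfq_update_group_weight(cfqg);
            __cfq_group_service_tree_add(st, cfqg);
            st->total_weight += cfqg->weight;
    }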

    Signed-off-by: Justin TerAvest
    Acked-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Justin TerAvest
     

12 Mar, 2011

1 commit

  • There are two kinds of time that tasks are not charged for: the first
    seek and the extra time slice used over the allocated timeslice. Both
    of these are exported as a new unaccounted_time stat.

    I think it would be good to have this reported in 'time' as well, but
    that is probably a separate discussion.

    Signed-off-by: Justin TerAvest
    Signed-off-by: Jens Axboe

    Justin TerAvest
     

10 Mar, 2011

2 commits


07 Mar, 2011

4 commits


05 Mar, 2011

1 commit

  • This merge creates two set of conflicts. One is simple context
    conflicts caused by removal of throtl_scheduled_delayed_work() in
    for-linus and removal of throtl_shutdown_timer_wq() in
    for-2.6.39/core.

    The other is caused by commit 255bb490c8 (block: blk-flush shouldn't
    call directly into q->request_fn() __blk_run_queue()) in for-linus
    clashing with the FLUSH reimplementation in for-2.6.39/core. The
    conflict isn't trivial, but the resolution is straightforward.

    * __blk_run_queue() calls in flush_end_io() and flush_data_end_io()
      should be called with @force_kblockd set to %true.

    * elv_insert() in blk_kick_flush() should use
      %ELEVATOR_INSERT_REQUEUE.

    Both changes are to avoid invoking ->request_fn() directly from
    request completion path and closely match the changes in the commit
    255bb490c8.

    Signed-off-by: Tejun Heo

    Tejun Heo
     

02 Mar, 2011

3 commits

  • __blk_run_queue() automatically either calls q->request_fn() directly
    or schedules kblockd depending on whether the function is recursed.
    blk-flush implementation needs to be able to explicitly choose
    kblockd. Add @force_kblockd.

    All the current users are converted to specify %false for the
    parameter and this patch doesn't introduce any behavior change.

    stable: This is prerequisite for fixing ide oops caused by the new
    blk-flush implementation.
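
    A sketch of the resulting entry point (the recursion guard and the
    kblockd path follow the block core of this period; exact details may
    differ):

    void __blk_run_queue(struct request_queue *q, bool force_kblockd)
    {
            if (unlikely(blk_queue_stopped(q)))
                    return;

            /*
             * Only recurse into ->request_fn() once; otherwise, or when
             * the caller explicitly asks for it, punt to kblockd.
             */
            if (!force_kblockd &&
                !queue_flag_test_and_set(QUEUE_FLAG_REENTER, q)) {
                    q->request_fn(q);
                    queue_flag_clear(QUEUE_FLAG_REENTER, q);
            } else
                    kblockd_schedule_work(q, &q->unplug_work);
    }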

    Signed-off-by: Tejun Heo
    Cc: Jan Beulich
    Cc: James Bottomley
    Cc: stable@kernel.org
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • Effectively, make group_isolation=1 the default and remove the tunable.
    The group_isolation=0 setting existed because, by default, we idle on
    the sync-noidle tree, and on fast devices this can be very harmful for
    throughput.

    However, this problem can also be addressed by tuning slice_idle and
    possibly group_idle on faster storage devices.

    This change simplifies the CFQ code by removing the feature entirely.

    Signed-off-by: Justin TerAvest
    Acked-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Justin TerAvest
     
  • Conflicts:
    block/cfq-iosched.c

    Signed-off-by: Jens Axboe

    Jens Axboe
     

11 Feb, 2011

1 commit

  • Flush requests are never put on the IO scheduler. Convert request
    structure's elevator_private* into an array and have the flush fields
    share a union with it.

    Reclaim the space lost in 'struct request' by moving 'completion_data'
    back in the union with 'rb_node'.
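
    A reduced sketch of the resulting layout (an excerpt, with unrelated
    fields omitted; the flush member names are approximate):

    #include <linux/list.h>
    #include <linux/rbtree.h>

    struct request {
            union {
                    struct rb_node rb_node; /* sort/lookup */
                    void *completion_data;  /* moved back to reclaim space */
            };

            /*
             * Flush requests are never put on the IO scheduler, so the
             * flush machinery can share space with the elevator data.
             */
            union {
                    void *elevator_private[3];
                    struct {
                            unsigned int seq;
                            struct list_head list;
                    } flush;
            };
    };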

    Signed-off-by: Mike Snitzer
    Acked-by: Vivek Goyal
    Acked-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Mike Snitzer
     

09 Feb, 2011

1 commit

  • Commit 7667aa0630407bc07dc38dcc79d29cc0a65553c1 added logic to wait for
    the last queue of the group to become busy (have at least one request),
    so that the group does not lose out for not being continuously
    backlogged. The commit did not check for the condition that the last
    queue already has some requests. As a result, if the queue already has
    requests, wait_busy is set. Later on, cfq_select_queue() checks the
    flag, and decides that since the queue has a request now and wait_busy
    is set, the queue is expired. This results in early expiration of the
    queue.

    This patch fixes the problem by adding a check to see if queue already
    has requests. If it does, wait_busy is not set. As a result, time slices
    do not expire early.
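
    The added check might look roughly like this in cfq_should_wait_busy()
    (a sketch; the remaining last-queue-in-group logic is elided):

    static bool cfq_should_wait_busy(struct cfq_data *cfqd,
                                     struct cfq_queue *cfqq)
    {
            /* the queue already has requests, so don't set wait_busy */
            if (!RB_EMPTY_ROOT(&cfqq->sort_list))
                    return false;

            /* ... existing last-queue-in-group checks elided ... */
            return true;
    }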

    The queues with more than one request are usually buffered writers.
    Testing shows improvement in isolation between buffered writers.

    Cc: stable@kernel.org
    Signed-off-by: Justin TerAvest
    Reviewed-by: Gui Jianfeng
    Acked-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Justin TerAvest
     

19 Jan, 2011

1 commit


14 Jan, 2011

3 commits

  • If a queue is preempted before it gets a slice assigned, the queue
    doesn't get compensation, which looks unfair. For such a queue, we
    compensate it for a whole slice.

    Signed-off-by: Shaohua Li
    Signed-off-by: Jens Axboe

    Shaohua Li
     
  • I got this:
    fio-874 [007] 2157.724514: 8,32 m N cfq874 preempt
    fio-874 [007] 2157.724519: 8,32 m N cfq830 slice expired t=1
    fio-874 [007] 2157.724520: 8,32 m N cfq830 sl_used=1 disp=0 charge=1 iops=0 sect=0
    fio-874 [007] 2157.724521: 8,32 m N cfq830 set_active wl_prio:0 wl_type:0
    fio-874 [007] 2157.724522: 8,32 m N cfq830 Not idling. st->count:1

    cfq830 is an async queue, and it is preempted by a sync queue, cfq874.
    But since we have the cfqg->saved_workload_slice mechanism, the preempt
    is a nop. It looks like our preempt is currently totally broken if the
    two queues are not from the same workload type. The patch below fixes
    it. This might cause async queue starvation, but it's what our old
    code did before cgroup support was added.
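
    Sketched, the fix amounts to discarding the group's saved slice when
    the preempting queue is of a different workload type (cfqq_type() and
    the exact placement in cfq_preempt_queue() are approximations):

    /* in cfq_preempt_queue(), sketched */
    if (cfqq_type(old_cfqq) != cfqq_type(cfqq)) {
            /*
             * A different workload type is taking over; drop the saved
             * slice so the preempt actually takes effect.
             */
            cfqq->cfqg->saved_workload_slice = 0;
    }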

    Signed-off-by: Shaohua Li
    Signed-off-by: Jens Axboe

    Shaohua Li
     
  • * 'for-2.6.38/core' of git://git.kernel.dk/linux-2.6-block: (43 commits)
    block: ensure that completion error gets properly traced
    blktrace: add missing probe argument to block_bio_complete
    block cfq: don't use atomic_t for cfq_group
    block cfq: don't use atomic_t for cfq_queue
    block: trace event block fix unassigned field
    block: add internal hd part table references
    block: fix accounting bug on cross partition merges
    kref: add kref_test_and_get
    bio-integrity: mark kintegrityd_wq highpri and CPU intensive
    block: make kblockd_workqueue smarter
    Revert "sd: implement sd_check_events()"
    block: Clean up exit_io_context() source code.
    Fix compile warnings due to missing removal of a 'ret' variable
    fs/block: type signature of major_to_index(int) to major_to_index(unsigned)
    block: convert !IS_ERR(p) && p to !IS_ERR_OR_NULL(p)
    cfq-iosched: don't check cfqg in choose_service_tree()
    fs/splice: Pull buf->ops->confirm() from splice_from_pipe actors
    cdrom: export cdrom_check_events()
    sd: implement sd_check_events()
    sr: implement sr_check_events()
    ...

    Linus Torvalds
     

07 Jan, 2011

2 commits


17 Dec, 2010

1 commit

  • When cfq_choose_cfqg() is called in select_queue(), there must be at least
    one backlogged CFQ queue waiting for dispatch, hence there must be at least
    one backlogged CFQ group on the service tree. So we never call
    choose_service_tree() with cfqg == NULL.

    Signed-off-by: Gui Jianfeng
    Reviewed-by: Jeff Moyer
    Acked-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Gui Jianfeng
     

13 Dec, 2010

1 commit


01 Dec, 2010

2 commits


16 Nov, 2010

1 commit


09 Nov, 2010

1 commit


08 Nov, 2010

3 commits

  • If a deep seek queue delivers requests slowly but the disk is much faster,
    idling for the queue just wastes disk throughput. If the queue delivers all
    its requests before half its slice is used, the patch disables idling for
    it. In my test, the application delivers 32 requests at a time, the disk
    can accept 128 requests at maximum, and the disk is fast. Without the
    patch, the throughput is just around 30 MB/s, while with it, the speed is
    about 80 MB/s. The disk is an SSD, but is detected as a rotational disk. I
    can configure it as an SSD, but I thought the deep seek queue logic should
    be fixed too, for example, considering a fast RAID.
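
    A sketch of the added condition (illustrative, not the exact patch):
    when a deep, seeky queue goes empty before half its slice has elapsed,
    the disk is clearly keeping up, so the idle window is disabled.

    /* when the active queue has run out of requests (sketch) */
    if (cfq_cfqq_deep(cfqq) && CFQQ_SEEKY(cfqq) &&
        cfqq->slice_end - jiffies > jiffies - cfqq->slice_start) {
            /* emptied before half the slice: don't idle for it */
            cfq_clear_cfqq_deep(cfqq);
            cfq_clear_cfqq_idle_window(cfqq);
    }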

    Signed-off-by: Shaohua Li
    Signed-off-by: Jens Axboe

    Shaohua Li
     
  • A queue is idle at cfq_dispatch_requests(), but it gets noidle later.
    Unless another task explicitly does an unplug, or all requests are
    drained, we will not deliver requests to the disk even if
    cfq_arm_slice_timer doesn't make the queue idle. For example,
    cfq_should_idle() returns true because of service_tree->count == 1, and
    then other queues are added. Note, I didn't see obvious performance
    impacts so far with the patch, but just thought this could be a problem.

    Signed-off-by: Shaohua Li
    Signed-off-by: Jens Axboe

    Shaohua Li
     
  • Some functions should return a boolean.

    Signed-off-by: Shaohua Li
    Signed-off-by: Jens Axboe

    Shaohua Li
     

02 Nov, 2010

1 commit

  • "gadget", "through", "command", "maintain", "maintain", "controller", "address",
    "between", "initiali[zs]e", "instead", "function", "select", "already",
    "equal", "access", "management", "hierarchy", "registration", "interest",
    "relative", "memory", "offset", "already",

    Signed-off-by: Uwe Kleine-König
    Signed-off-by: Jiri Kosina

    Uwe Kleine-König
     

23 Oct, 2010

1 commit

  • * 'for-2.6.37/core' of git://git.kernel.dk/linux-2.6-block: (39 commits)
    cfq-iosched: Fix a gcc 4.5 warning and put some comments
    block: Turn bvec_k{un,}map_irq() into static inline functions
    block: fix accounting bug on cross partition merges
    block: Make the integrity mapped property a bio flag
    block: Fix double free in blk_integrity_unregister
    block: Ensure physical block size is unsigned int
    blkio-throttle: Fix possible multiplication overflow in iops calculations
    blkio-throttle: limit max iops value to UINT_MAX
    blkio-throttle: There is no need to convert jiffies to milli seconds
    blkio-throttle: Fix link failure on i386
    blkio: Recalculate the throttled bio dispatch time upon throttle limit change
    blkio: Add root group to td->tg_list
    blkio: deletion of a cgroup was causing oops
    blkio: Do not export throttle files if CONFIG_BLK_DEV_THROTTLING=n
    block: set the bounce_pfn to the actual DMA limit rather than to max memory
    block: revert bad fix for memory hotplug causing bounces
    Fix compile error in blk-exec.c for !CONFIG_DETECT_HUNG_TASK
    block: set the bounce_pfn to the actual DMA limit rather than to max memory
    block: Prevent hang_check firing during long I/O
    cfq: improve fsync performance for small files
    ...

    Fix up trivial conflicts due to __rcu sparse annotation in include/linux/genhd.h

    Linus Torvalds
     

22 Oct, 2010

1 commit

  • - Andi encountered the following warning with gcc 4.5:

    linux/block/cfq-iosched.c: In function ‘cfq_dispatch_requests’:
    linux/block/cfq-iosched.c:2156:3: warning: array subscript is above array
    bounds

    - The warning happens due to the following code:

    slice = group_slice * count /
            max_t(unsigned, cfqg->busy_queues_avg[cfqd->serving_prio],
                  cfq_group_busy_queues_wl(cfqd->serving_prio, cfqd, cfqg));

    gcc is complaining about cfqg->busy_queues_avg[] being indexed by CFQ
    prio classes (RT, BE and IDLE) while the array size is only 2.

    - At run time, we never access cfqg->busy_queues_avg[IDLE]; we return from
    the function before this code is hit.

    - To fix the warning, increase the array size even though the extra slot
    will remain unused (a sketch follows this list). This patch also adds some
    comments to clarify some of the confusion.

    - I have taken Jens's patch and modified it a bit.

    - Compile tested with gcc 4.4 and boot tested. I don't have gcc 4.5
    running; Andi, can you please test it with gcc 4.5 to make sure it
    works?
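
    A sketch of the fix (enum and field names mirror CFQ's workload prio
    classes; treat naming as approximate): size the array by the full
    number of priority classes so the IDLE index stays in bounds, even
    though it is never touched at run time.

    enum wl_prio_t {
            BE_WORKLOAD = 0,
            RT_WORKLOAD = 1,
            IDLE_WORKLOAD = 2,
            CFQ_PRIO_NR,
    };

    struct cfq_group {
            /* ... */
            /*
             * IDLE_WORKLOAD is never accessed at run time, but sizing
             * the array by CFQ_PRIO_NR keeps gcc 4.5's bounds check
             * happy.
             */
            unsigned int busy_queues_avg[CFQ_PRIO_NR];
            /* ... */
    };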

    Reported-by: Andi Kleen
    Signed-off-by: Vivek Goyal
    Acked-by: Jeff Moyer
    Signed-off-by: Jens Axboe

    Vivek Goyal
     

01 Oct, 2010

1 commit

  • o Currently any cgroup throttle limit changes are processed asynchronously,
    and the change does not take effect till a new bio is dispatched from the
    same group.

    o It might happen that a user sets a ridiculously low limit on throttling,
    say 1 byte per second on reads. In such cases simple operations like
    mounting a disk can wait for a very long time.

    o Once a bio is throttled, there is no easy way to come out of that wait,
    even if the user increases the read limit later.

    o This patch fixes it. Now if a user changes the cgroup limits, we
    recalculate the bio dispatch time according to the new limits.

    o We can't take the queue lock under blkcg_lock, hence after the change I
    wake up the dispatch thread again, which recalculates the time. So there
    are some variables being synchronized across two threads without a lock,
    and I had to make use of barriers. I hope I have used the barriers
    correctly; a review of the memory barrier code especially will help.
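
    A sketch of the flag/barrier pairing described above (field names such
    as limits_changed are illustrative, not the exact ones in
    blk-throttle.c):

    /* limit-update path: runs under blkcg_lock, can't take the queue lock */
    static void throtl_set_read_bps(struct throtl_grp *tg, u64 new_bps)
    {
            tg->bps_read = new_bps;
            smp_wmb();                      /* publish the limit first... */
            tg->limits_changed = true;      /* ...then raise the flag */
            /* finally, wake the dispatch thread to recompute times */
    }

    /* dispatch thread: notices the flag and recalculates */
    static void throtl_process_limit_change(struct throtl_grp *tg)
    {
            if (!tg->limits_changed)
                    return;
            smp_rmb();                      /* pairs with the smp_wmb() above */
            tg->limits_changed = false;
            /* recompute the throttled bio's dispatch time with new limits */
    }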

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     

21 Sep, 2010

1 commit

  • Mike reported a kernel crash when a USB key hotplug is performed while all
    kernel threads are not in the root cgroup and are running in one of the
    child cgroups of the blkio controller.

    BUG: unable to handle kernel NULL pointer dereference at 0000002c
    IP: [] cfq_get_queue+0x232/0x412
    *pde = 00000000
    Oops: 0000 [#1] PREEMPT
    last sysfs file: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/host3/scsi_host/host3/uevent

    [..]
    Pid: 30039, comm: scsi_scan_3 Not tainted 2.6.35.2-fg.roam #1 Volvi2 /Aspire 4315
    EIP: 0060:[] EFLAGS: 00010086 CPU: 0
    EIP is at cfq_get_queue+0x232/0x412
    EAX: f705f9c0 EBX: e977abac ECX: 00000000 EDX: 00000000
    ESI: f00da400 EDI: f00da4ec EBP: e977a800 ESP: dff8fd00
    DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068
    Process scsi_scan_3 (pid: 30039, ti=dff8e000 task=f6b6c9a0 task.ti=dff8e000)
    Stack:
    00000000 00000000 00000001 01ff0000 f00da508 00000000 f00da524 f00da540
    e7994940 dd631750 f705f9c0 e977a820 e977ac44 f00da4d0 00000001 f6b6c9a0
    00000010 00008010 0000000b 00000000 00000001 e977a800 dd76fac0 00000246
    Call Trace:
    [] ? cfq_set_request+0x228/0x34c
    [] ? cfq_set_request+0x0/0x34c
    [] ? elv_set_request+0xf/0x1c
    [] ? get_request+0x1ad/0x22f
    [] ? get_request_wait+0x1f/0x11a
    [] ? kvasprintf+0x33/0x3b
    [] ? scsi_execute+0x1d/0x103
    [] ? scsi_execute_req+0x58/0x83
    [] ? scsi_probe_and_add_lun+0x188/0x7c2
    [] ? attribute_container_add_device+0x15/0xfa
    [] ? kobject_get+0xf/0x13
    [] ? get_device+0x10/0x14
    [] ? scsi_alloc_target+0x217/0x24d
    [] ? __scsi_scan_target+0x95/0x480
    [] ? dequeue_entity+0x14/0x1fe
    [] ? update_curr+0x165/0x1ab
    [] ? update_curr+0x165/0x1ab
    [] ? scsi_scan_channel+0x4a/0x76
    [] ? scsi_scan_host_selected+0x77/0xad
    [] ? do_scan_async+0x0/0x11a
    [] ? do_scsi_scan_host+0x51/0x56
    [] ? do_scan_async+0x0/0x11a
    [] ? do_scan_async+0xe/0x11a
    [] ? do_scan_async+0x0/0x11a
    [] ? kthread+0x5e/0x63
    [] ? kthread+0x0/0x63
    [] ? kernel_thread_helper+0x6/0x10
    Code: 44 24 1c 54 83 44 24 18 54 83 fa 03 75 94 8b 06 c7 86 64 02 00 00 01 00 00 00 83 e0 03 09 f0 89 06 8b 44 24 28 8b 90 58 01 00 00 42 2c 85 c0 75 03 8b 42 08 8d 54 24 48 52 8d 4c 24 50 51 68
    EIP: [] cfq_get_queue+0x232/0x412 SS:ESP 0068:dff8fd00
    CR2: 000000000000002c
    ---[ end trace 9a88306573f69b12 ]---

    The problem here is that we don't have bdi->dev information available when
    the thread does some IO. Hence when dev_name() tries to access bdi->dev, it
    crashes.

    This problem does not happen if kernel threads are in the root group, as
    the root group is statically allocated at device initialization time and we
    don't hit this piece of code.

    Fix it by delaying the filling in of the device's major and minor number
    information in the blk_group. Initially a blk_group is created with 0 as
    the device information, and this information is filled in later once some
    more IO comes in from the same group.
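
    A sketch of the deferred fill-in (the helper name is hypothetical):
    the group starts out with dev == 0, and once bdi->dev is available the
    major:minor pair is parsed out of its name and recorded.

    /* fill the device number in lazily, once bdi->dev exists (sketch) */
    static void cfq_fill_blkg_dev(struct cfq_data *cfqd,
                                  struct blkio_group *blkg)
    {
            struct backing_dev_info *bdi = &cfqd->queue->backing_dev_info;
            unsigned int major, minor;

            if (bdi->dev && !blkg->dev) {
                    sscanf(dev_name(bdi->dev), "%u:%u", &major, &minor);
                    blkg->dev = MKDEV(major, minor);
            }
    }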

    Reported-by: Mike Kazantsev
    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal