10 Dec, 2009

1 commit

  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial: (42 commits)
    tree-wide: fix misspelling of "definition" in comments
    reiserfs: fix misspelling of "journaled"
    doc: Fix a typo in slub.txt.
    inotify: remove superfluous return code check
    hdlc: spelling fix in find_pvc() comment
    doc: fix regulator docs cut-and-pasteism
    mtd: Fix comment in Kconfig
    doc: Fix IRQ chip docs
    tree-wide: fix assorted typos all over the place
    drivers/ata/libata-sff.c: comment spelling fixes
    fix typos/grammos in Documentation/edac.txt
    sysctl: add missing comments
    fs/debugfs/inode.c: fix comment typos
    sgivwfb: Make use of ARRAY_SIZE.
    sky2: fix sky2_link_down copy/paste comment error
    tree-wide: fix typos "couter" -> "counter"
    tree-wide: fix typos "offest" -> "offset"
    fix kerneldoc for set_irq_msi()
    spidev: fix double "of of" in comment
    comment typo fix: sybsystem -> subsystem
    ...

    Linus Torvalds
     

06 Dec, 2009

1 commit

  • After the merge of the IO controller patches, booting on my megaraid
    box ran much slower. Vivek Goyal traced it down to megaraid discovery
    creating tons of devices, each of which suffers an RCU grace period
    when its queue is later killed (if no device is found).

    So let's use call_rcu() to batch these deferred frees, instead of
    taking the grace period hit for each one.
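
    As an illustration, a minimal sketch of the pattern (the struct and
    function names here are hypothetical, not the actual block layer
    code):

        #include <linux/kernel.h>
        #include <linux/slab.h>
        #include <linux/rcupdate.h>

        /* hypothetical object embedding an rcu_head for deferred freeing */
        struct foo {
                struct rcu_head rcu;
                /* ... payload ... */
        };

        static void foo_free_rcu(struct rcu_head *head)
        {
                kfree(container_of(head, struct foo, rcu));
        }

        static void foo_release(struct foo *f)
        {
                /*
                 * call_rcu() queues the free and returns at once, so many
                 * releases share grace periods instead of each blocking
                 * in synchronize_rcu().
                 */
                call_rcu(&f->rcu, foo_free_rcu);
        }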

    Signed-off-by: Jens Axboe

    Jens Axboe
     

04 Dec, 2009

30 commits

  • o Now that the issues with the blkio controller and CFQ in module mode
    are fixed, enable cfq group scheduling support in module mode.

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o One of the goals of the block IO controller is that it should be able
    to support multiple io control policies, some of which may be
    operational at a higher level in the storage hierarchy.

    o To begin with, we had one io controlling policy implemented by CFQ,
    and I hard coded the CFQ functions called by blkio. This created issues
    when CFQ is compiled as a module.

    o This patch implements basic dynamic io controlling policy registration
    functionality in blkio. This is similar to the elevator functionality,
    where ioschedulers register their functions dynamically.

    o In the future, when more IO controlling policies are implemented,
    these can dynamically register with the block IO controller. A sketch
    of such a registration scheme follows below.
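
    A sketch of what dynamic registration can look like, modeled on how
    elevators register (the type and ops names are illustrative
    assumptions, not necessarily the merged interface):

        #include <linux/list.h>
        #include <linux/spinlock.h>

        /* illustrative policy descriptor a module would register */
        struct blkio_policy_type {
                struct list_head list;
                void (*unlink_group)(void *key);
                void (*update_weight)(void *group, unsigned int weight);
        };

        static LIST_HEAD(blkio_list);
        static DEFINE_SPINLOCK(blkio_list_lock);

        void blkio_policy_register(struct blkio_policy_type *pol)
        {
                spin_lock(&blkio_list_lock);
                list_add_tail(&pol->list, &blkio_list);
                spin_unlock(&blkio_list_lock);
        }

        void blkio_policy_unregister(struct blkio_policy_type *pol)
        {
                spin_lock(&blkio_list_lock);
                list_del_init(&pol->list);
                spin_unlock(&blkio_list_lock);
        }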

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o The blkio controller is inside the kernel and cfq makes use of
    interfaces exported by blkio. CFQ can be built as a module too, hence
    export the symbols used by CFQ.

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • With CLONE_IO, the parent's io_context->nr_tasks is incremented, but
    never decremented if copy_process() fails afterwards, which prevents
    exit_io_context() from calling the IO schedulers' exit functions.

    Give a task_struct to exit_io_context(), and call exit_io_context()
    instead of put_io_context() in the copy_process() cleanup path.
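
    A hedged sketch of the resulting exit path (simplified; the cfq_exit()
    hook and exact locking are assumptions about the shape of the code,
    not a verbatim copy of the patch):

        #include <linux/sched.h>
        #include <linux/iocontext.h>

        void exit_io_context(struct task_struct *task)
        {
                struct io_context *ioc;

                /* callers ensure task->io_context != NULL */
                task_lock(task);
                ioc = task->io_context;
                task->io_context = NULL;
                task_unlock(task);

                /* last task using this ioc: let the scheduler tear down */
                if (atomic_dec_and_test(&ioc->nr_tasks))
                        cfq_exit(ioc);

                /* and always drop the reference */
                put_io_context(ioc);
        }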

    Signed-off-by: Louis Rilling
    Signed-off-by: Jens Axboe

    Louis Rilling
     
  • With CLONE_IO, copy_io() increments both ioc->refcount and
    ioc->nr_tasks. However, exit_io_context() only decrements ioc->refcount
    if ioc->nr_tasks reaches 0.

    Always call put_io_context() in exit_io_context().

    Signed-off-by: Louis Rilling
    Signed-off-by: Jens Axboe

    Louis Rilling
     
  • That is "success", "unknown", "through", "performance", "[re|un]mapping"
    , "access", "default", "reasonable", "[con]currently", "temperature"
    , "channel", "[un]used", "application", "example","hierarchy", "therefore"
    , "[over|under]flow", "contiguous", "threshold", "enough" and others.

    Signed-off-by: André Goddard Rosa
    Signed-off-by: Jiri Kosina

    André Goddard Rosa
     
  • cfq_arm_slice_timer() has logic to disable the idle window for SSD
    devices. The same thing should be done in cfq_select_queue() too,
    otherwise we will still see idling. This makes the nonrot check logic
    consistent in cfq. In tests on an Intel SSD with the low_latency knob
    disabled, this patch can triple disk throughput for multi-threaded
    sequential reads. A sketch of the shared check follows below.
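
    A minimal sketch of the check being made consistent (the helper name
    is hypothetical; blk_queue_nonrot() and the hw_tag detection are
    existing cfq notions):

        /* hypothetical helper: idling buys nothing on an SSD with NCQ */
        static inline bool cfq_hw_allows_idle(struct cfq_data *cfqd)
        {
                if (blk_queue_nonrot(cfqd->queue) && cfqd->hw_tag)
                        return false;
                return true;
        }

        /* intended to be used both when arming the slice-idle timer and
         * when selecting the next queue */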

    Signed-off-by: Shaohua Li
    Signed-off-by: Jens Axboe

    Shaohua Li
     
  • It's currently not an allowed configuration, so express that in Kconfig.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • They should not be declared inside some other file that's not related
    to CFQ.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • Signed-off-by: Jens Axboe

    Jens Axboe
     
  • o rq_noidle() is supposed to tell cfq not to expect a request after
    this one, hence not to idle. But this does not seem to work very well.
    For example, for direct random readers, rq_noidle = 1 but there is a
    next request coming after this one. Not idling leads to a group not
    getting its share even if group_isolation=1.

    o The right solution for this issue is to scan the higher layers and
    set the right flag (WRITE_SYNC or WRITE_ODIRECT). For the time being,
    this single-line fix helps. It should not have any significant impact
    when we are not using cgroups. I will later figure out the IO paths in
    the higher layers and fix it.

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o If a group is running only a random reader, then it will not have
    enough traffic to keep the disk busy and we will reduce overall
    throughput. This should result in better latencies for the random
    reader, though. If we don't idle on the random reader service tree,
    then this random reader will experience large latencies if there are
    other groups present in the system with sequential readers running in
    them.

    o One solution suggested by Corrado is to keep the random readers, or
    sync-noidle workload, in the root group by default, so that during one
    dispatch round we idle only once on the sync-noidle tree. This means
    that all the sync-idle workload queues will be in their respective
    groups and we will see service differentiation for those, but not for
    the sync-noidle workload.

    o Provide a tunable, group_isolation. If set, this will make sure that
    even sync-noidle queues go into their respective groups and we wait on
    them. This provides stronger isolation between groups, but at the
    expense of throughput if a group does not have enough traffic to keep
    the disk busy. A rough sketch of the placement logic follows below.

    o By default group_isolation = 0
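
    A rough sketch of the placement decision the tunable controls (the
    field and helper names are assumptions for illustration, not the
    actual cfq code):

        /* hypothetical: pick the group a sync-noidle queue lives in */
        static struct cfq_group *cfq_choose_group(struct cfq_data *cfqd,
                                                  struct cfq_queue *cfqq)
        {
                /* group_isolation == 0: park sync-noidle queues in the
                 * root group so one idle per round covers them all */
                if (!cfqd->cfq_group_isolation && cfqq_sync_noidle(cfqq))
                        return &cfqd->root_group;

                /* group_isolation == 1: queue stays in its own group */
                return cfqq->cfqg;
        }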

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o Async queues are not per group. Instead they are system wide and
    maintained in the root group. Hence their workload slice length should
    be calculated based on the total number of queues in the system, not
    just the queues in the root group.

    o As the root group's default weight is 1000, make sure to charge an
    async queue more in terms of vtime so that it does not get more disk
    time just because the root group has a higher weight.

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o If a queue consumes its slice and is then deleted from the service
    tree, its associated group will also be deleted from the service tree
    if this was the only queue in the group. That will make the group lose
    its share.

    o For queues on which we are idling, and which have used up their
    slice, wait a bit for these queues to get backlogged again and then
    expire them, so that the group does not lose its share.

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o Propagate blkio cgroup weight updates to the associated cfq groups.

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o If a task changes cgroup, drop the reference to the cfqq associated
    with the io context and set the cfqq pointer stored in the ioc to NULL,
    so that upon the next request arrival we will allocate a new queue in
    the new group.

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o Do not allow the following three operations across groups, for
    isolation:
    - selection of co-operating queues
    - preemptions across groups
    - request merging across groups

    o Async queues are currently global and not per group. Allow preemption
    of an async queue if a sync queue in another group gets backlogged.

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o Export the disk time and sectors used by a group to user space
    through the cgroup interface.

    o Also export a "dequeue" interface to cgroup which keeps track of how
    many times a group was deleted from the service tree. Helps in
    debugging.

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o Some debugging aids for CFQ.

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o One can choose to change the elevator or to delete a cgroup.
    Implement group reference counting so that both elevator exit and
    cgroup deletion can take place gracefully.

    Signed-off-by: Vivek Goyal
    Signed-off-by: Nauman Rafique
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o Determine the cgroup the IO-submitting task belongs to and create the
    cfq group if it does not exist already.

    o Also link cfqq and associated cfq group.

    o Currently all async IO is mapped to root group.

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o This patch introduces the functionality to do the accounting of group
    time when a queue expires. This time used decides which group goes
    next.

    o Also introduce the functionality to save and restore the workload
    type context within a group. It might happen that once we expire the
    cfq queue and group, a different group will be scheduled in and we will
    lose the context of the workload type. Hence save and restore it upon
    queue expiry. A sketch of the saved state follows below.
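
    A hedged sketch of the state being saved and restored (field names are
    illustrative assumptions about the cfq group structure):

        /* illustrative: per-group workload context preserved across
         * the group being scheduled out and back in */
        struct cfq_group_workload_ctx {
                int saved_serving_prio;             /* prio class in service */
                int saved_workload;                 /* workload type in service */
                unsigned long saved_workload_slice; /* remaining slice */
        };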

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o So far we had a 300ms soft target latency system wide. Now, with the
    introduction of cfq groups, divide that latency by the number of groups
    so that one can come up with a per-group target latency, which will be
    helpful in determining the workload slice within a group and also the
    dynamic slice length of a cfq queue.
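
    For instance, a runnable sketch of the arithmetic (the constant name
    is illustrative):

        #include <stdio.h>

        #define TARGET_LATENCY_MS 300   /* system-wide soft target */

        int main(void)
        {
                /* each group's share of the latency target shrinks as
                 * more groups become active */
                for (int groups = 1; groups <= 4; groups++)
                        printf("%d group(s): %d ms per group\n",
                               groups, TARGET_LATENCY_MS / groups);
                return 0;
        }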

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o Bring in the per-cfq-group weight and how vdisktime is calculated for
    the group. Also bring in the functionality of updating the
    min_vdisktime of the group service tree.
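
    A self-contained sketch of the usual virtual-time scaling idea (the
    constant and function names are assumptions, not the exact kernel
    arithmetic):

        #include <stdio.h>

        #define WEIGHT_DEFAULT 1000

        /* a group's vdisktime advances inversely to its weight, so
         * heavier groups accumulate vtime slower and get picked more */
        static unsigned long long scale_charge(unsigned long long us,
                                               unsigned int weight)
        {
                return us * WEIGHT_DEFAULT / weight;
        }

        int main(void)
        {
                /* the same 10ms of service, charged at three weights */
                printf("weight 1000: +%llu us\n", scale_charge(10000, 1000));
                printf("weight  500: +%llu us\n", scale_charge(10000, 500));
                printf("weight 2000: +%llu us\n", scale_charge(10000, 2000));
                return 0;
        }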

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o This is the basic implementation of the blkio controller cgroup
    interface. This is the common interface visible to user space and
    should be used by the different IO control policies as we implement
    them.

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o So far we just had one cfq_group in cfq_data. To create space for
    more than one cfq_group, we need a service tree of groups where all
    groups that have active cfq queues backlogged can be queued.

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o Currently cfq deletes a queue from the service tree if it is empty
    (even if we might idle on the queue). This patch keeps the queue on the
    service tree, hence the associated group remains on the service tree,
    until we decide that we are not going to idle on the queue and expire
    it.

    o This just helps in time accounting for the queue/group and in the
    implementation of the rest of the patches.

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o Implement a macro to traverse each service tree in the group. This
    avoids the use of a double for loop and a special condition for the
    idle tree in four places.

    o The macro is a little twisted because of the special handling of the
    idle class service tree. A sketch follows below.
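
    A sketch of such a traversal macro, hedged in that it mirrors the
    shape described above rather than quoting the patch (IDLE_WORKLOAD and
    SYNC_WORKLOAD are the existing workload enums):

        #define for_each_cfqg_st(cfqg, i, j, st)                          \
                for (i = 0; i <= IDLE_WORKLOAD; i++)                      \
                        for (j = 0, st = i < IDLE_WORKLOAD ?              \
                                &cfqg->service_trees[i][j] :              \
                                &cfqg->service_tree_idle;                 \
                             (i < IDLE_WORKLOAD && j <= SYNC_WORKLOAD) || \
                             (i == IDLE_WORKLOAD && j == 0);              \
                             j++, st = i < IDLE_WORKLOAD ?                \
                                &cfqg->service_trees[i][j] : NULL)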

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o This patch introduces the notion of cfq groups. Soon we will be able
    to have multiple groups of different weights in the system.

    o The various service trees (prio class and workload type trees) will
    become per cfq group. So the hierarchy looks as follows:

    cfq_groups
    |
    workload type
    |
    cfq queue

    o When a scheduling decision has to be taken, first we select the cfq
    group, then the workload within the group, and then the cfq queue
    within the workload type.

    o This patch just makes the various workload service trees per cfq
    group and introduces the function to be able to choose a group for
    scheduling.

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o The must_dispatch flag should be set only if we decided not to run
    the queue and dispatch the request.

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     

03 Dec, 2009

2 commits

  • Since commit 2f5cb7381b737e24c8046fd4aeab571fb71315f5, each queue can
    send up to 4 * 4 requests if only one queue exists. I wonder why we
    have such a limit. Devices supporting tagged queuing can send more
    requests; for example, AHCI can send 31 requests. Tests (direct aio
    randread) show the limit reduces disk throughput by about 4%.
    On the other hand, since we send one request at a time, if another
    queue pops up while the current one is sending more than cfq_quantum
    requests, the current queue will stop sending requests soon after one
    more request, so it sounds like there is no big latency.

    Signed-off-by: Shaohua Li
    Signed-off-by: Jens Axboe

    Shaohua Li
     
  • The discard ioctl is used by mkfs utilities to clear a block device
    prior to putting metadata down. However, not all devices return zeroed
    blocks after a discard. Some drives return stale data, potentially
    containing old superblocks. It is therefore important to know whether
    discarded blocks are properly zeroed.

    Both ATA and SCSI drives have configuration bits that indicate whether
    zeroes are returned after a discard operation. Implement a block level
    interface that allows this information to be bubbled up the stack and
    queried via a new block device ioctl.
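
    For example, a userspace query of the new flag might look like this
    sketch (assuming BLKDISCARDZEROES as the ioctl name; error handling
    kept minimal):

        #include <stdio.h>
        #include <fcntl.h>
        #include <unistd.h>
        #include <sys/ioctl.h>
        #include <linux/fs.h>   /* BLKDISCARDZEROES */

        int main(int argc, char **argv)
        {
                unsigned int zeroes = 0;
                int fd;

                if (argc < 2)
                        return 1;
                fd = open(argv[1], O_RDONLY);
                if (fd < 0 || ioctl(fd, BLKDISCARDZEROES, &zeroes) < 0) {
                        perror("BLKDISCARDZEROES");
                        return 1;
                }
                printf("discard zeroes data: %s\n", zeroes ? "yes" : "no");
                close(fd);
                return 0;
        }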

    Signed-off-by: Martin K. Petersen
    Signed-off-by: Jens Axboe

    Martin K. Petersen
     

26 Nov, 2009

3 commits

  • Idling logic was disabled in some corner cases, leading to an unfair
    share for no-idle queues.
    * the idle timer was not armed if there were other requests in the
    driver. Unfortunately, those requests could come from other workloads,
    or from queues for which we don't enable idling. So we will now check
    only pending requests from the active queue.
    * the rq_noidle check on a no-idle queue could disable the end-of-tree
    idle if the last completed request was rq_noidle. Now we will disable
    that idle only if all the queues served in the no-idle tree had
    rq_noidle requests.

    Reported-by: Vivek Goyal
    Signed-off-by: Corrado Zoccolo
    Acked-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Corrado Zoccolo
     
  • Seeky sync queues with large depth can gain an unfairly big share of
    disk time, at the expense of other seeky queues. This patch ensures
    that idling will be enabled for queues with an I/O depth of at least 4
    and a small think time. The decision to enable idling is sticky, until
    an idle window times out without seeing a new request.

    The reasoning behind the decision is that, if an application is using
    large I/O depth, it is already optimized to make full utilization of
    the hardware, and therefore we reserve a slice of exclusive use for it.
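
    A hedged sketch of the condition (the threshold constant and the
    think-time helper are illustrative assumptions; the sticky flag
    mirrors the behaviour described above):

        #define CFQQ_DEEP_THRESHOLD 4

        /* hypothetical: decide whether a seeky queue earns idling */
        static bool cfqq_idle_for_depth(struct cfq_queue *cfqq)
        {
                /* sticky: once marked deep, stays deep until an idle
                 * window expires with no new request */
                if (cfqq->dispatched >= CFQQ_DEEP_THRESHOLD)
                        cfq_mark_cfqq_deep(cfqq);

                return cfq_cfqq_deep(cfqq) && cfqq_think_time_small(cfqq);
        }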

    Reported-by: Vivek Goyal
    Signed-off-by: Corrado Zoccolo
    Signed-off-by: Jens Axboe

    Corrado Zoccolo
     
  • An incoming no-idle queue should preempt the active no-idle queue
    only if the active queue is idling due to the service tree being empty.
    The previous code was buggy in two ways:
    * it relied on the service_tree field being set on the active queue,
    while it is not set when the code is idling for a new request
    * it didn't check for the service-tree-empty condition, so it could
    lead to LIFO behaviour if multiple queues with depth > 1 were
    preempting each other on a non-NCQ device.

    Reported-by: Vivek Goyal
    Signed-off-by: Corrado Zoccolo
    Acked-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Corrado Zoccolo