09 Oct, 2008

40 commits

  • The DM and MD integrity support now depends on being able to use
    gendisks instead of block_devices when comparing integrity profiles.
    Change function parameters accordingly.

    Also update comparison logic so that two NULL profiles are a valid
    configuration.

    Signed-off-by: Martin K. Petersen
    Signed-off-by: Jens Axboe

    Martin K. Petersen
     
  • - kobject_del already puts the parent.

    - Set integrity profile to NULL to prevent stale data.

    Signed-off-by: Martin K. Petersen
    Signed-off-by: Jens Axboe

    Martin K. Petersen
     
  • This patch removes end_queued_request() and end_dequeued_request(),
    which are no longer used.

    As a result, end_request() is the only remaining user of __end_request(),
    so the code from __end_request() is moved into end_request() and
    __end_request() itself is removed.

    Signed-off-by: Kiyoshi Ueda
    Signed-off-by: Jun'ichi Nomura
    Signed-off-by: Jens Axboe

    Kiyoshi Ueda
     
  • This patch converts the elevator to use __blk_end_request() directly
    so that end_{queued|dequeued}_request() can be removed.
    The related 'uptodate' argument is converted to 'error'.

    Signed-off-by: Kiyoshi Ueda
    Signed-off-by: Jun'ichi Nomura
    Signed-off-by: Jens Axboe

    Kiyoshi Ueda
     
  • Define it as 32, which is what BDEVNAME_SIZE is/was as well. This keeps
    the user interface the same and gets rid of the difference between the
    kernel and user API here.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • This patch adds a new interface, blk_lld_busy(), to check a low-level
    driver's busy state from the block layer.
    blk_lld_busy() performs the check by calling into the low-level driver,
    provided the driver has set q->lld_busy_fn() via blk_queue_lld_busy().

    This resolves the performance problem on request stacking devices
    described below.

    Some drivers, such as the SCSI mid layer, stop dispatching requests when
    they detect a busy state on a low-level device (host/target/device).
    This allows other requests to stay in the I/O scheduler's queue
    for a chance of merging.

    Request stacking drivers like request-based dm should follow
    the same logic.
    However, there is no generic interface for the stacked device
    to check whether the underlying device(s) are busy.
    If the request stacking driver dispatches and submits requests to
    a busy underlying device, those requests will sit in the underlying
    device's queue with no chance of merging.
    This causes a performance problem under bursty I/O load.

    With this patch, the busy state of the underlying device is exported
    via q->lld_busy_fn(), so the request stacking driver can check it
    and stop dispatching requests while the device is busy.

    The underlying device driver must return the busy state appropriately:
      1: the device driver can't process requests immediately.
      0: the device driver can process requests immediately, including
         abnormal situations where the device driver needs to kill
         all requests.
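
    A minimal sketch of how a low-level driver might hook into this; the
    my_dev structure and its busy heuristic are hypothetical, only
    blk_queue_lld_busy() and the callback shape come from the patch:

        #include <linux/blkdev.h>

        struct my_dev {                         /* hypothetical per-device state */
                unsigned int outstanding;
                unsigned int queue_depth;
        };

        static int my_dev_lld_busy(struct request_queue *q)
        {
                struct my_dev *dev = q->queuedata;

                /* 1: can't take requests right now, 0: can take them
                 * (or needs to kill them all anyway). */
                return dev->outstanding >= dev->queue_depth;
        }

        static void my_dev_init_queue(struct request_queue *q)
        {
                blk_queue_lld_busy(q, my_dev_lld_busy);
        }

    The request stacking driver then calls blk_lld_busy() on the underlying
    queue before dispatching and holds requests upstream while it returns
    non-zero.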

    Signed-off-by: Kiyoshi Ueda
    Signed-off-by: Jun'ichi Nomura
    Cc: Andrew Morton
    Signed-off-by: Jens Axboe

    Kiyoshi Ueda
     
  • blk_start_queueing() should act like the generic queue unplugging
    and kicking and ignore a stopped queue. Such a queue may not be
    run until after a call to blk_start_queue().

    Signed-off-by: Elias Oltmanns
    Signed-off-by: Jens Axboe

    Elias Oltmanns
     
  • By only allowing async IO to consume 3/4ths of the tag depth, we
    always have slots free to serve sync IO. This is important to avoid
    having writes fill the entire tag queue, thus starving reads.
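
    Illustrative arithmetic only (not the in-tree bookkeeping): with a tag
    depth of 32, async IO would be capped at 24 tags, always leaving 8 free
    for sync IO.

        static inline unsigned int max_async_tags(unsigned int depth)
        {
                return depth * 3 / 4;   /* e.g. 32 -> 24, reserving 8 for sync */
        }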

    Original patch and idea from Linus Torvalds

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • We really need to know about the hardware tagging support as well,
    since if the SSD does not do tagging then we still want to idle.
    Otherwise we have the same problem of dependent sync IO vs flooding
    async IO as on rotational media.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • We don't want to idle in AS/CFQ if the device doesn't have a seek
    penalty. So add a QUEUE_FLAG_NONROT to indicate a non-rotational
    device; low-level drivers should set this flag upon discovering an
    SSD or similar device type.
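
    A minimal sketch of how a driver might set the flag at probe time,
    assuming a hypothetical is_ssd detection result:

        #include <linux/blkdev.h>

        static void my_probe_rotational(struct request_queue *q, bool is_ssd)
        {
                if (is_ssd)
                        queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);
        }

    The schedulers can then test the flag (blk_queue_nonrot()) when deciding
    whether to idle.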

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • This patch adds a queue flag to indicate the block device can be
    used for request stacking.

    Request stacking drivers need to stack their devices only on top of
    devices whose q->request_fn is functional.
    Since bio stacking drivers (e.g. md, loop) basically initialize
    their queue using blk_alloc_queue() and don't set q->request_fn,
    a check for (q->request_fn == NULL) seems sufficient for that purpose.

    However, dm will become both types of stacking driver (bio-based and
    request-based), and dm will always set q->request_fn even if the dm
    device is bio-based, in which case q->request_fn is not actually
    functional. So we need something else to distinguish the type of the
    device; adding a queue flag is one solution.

    The reason dm always sets q->request_fn is to keep compatibility
    with dm user-space tools.
    Currently, all dm user-space tools use bio-based dm without
    specifying the type of the dm device they use.
    To use request-based dm without changing such tools, the kernel
    must decide the type of the dm device automatically.
    That decision can't be made at device creation time and needs
    to be deferred until such tools load a mapping table,
    since the actual type is determined by the dm target types included
    in the mapping table.

    So a dm device has to be initialized using blk_init_queue()
    so that either type of table can be loaded.
    All queue machinery is then set up (e.g. q->request_fn), leaving
    nothing to distinguish bio-based from request-based, even after
    a table is loaded and the type of the device is decided.

    Incidentally, some parts of the queue (e.g. request_list, elevator)
    are unnecessary when the dm device is used as bio-based.
    But the memory footprint is not large (about 20 KB per queue on ia64),
    so I hope the overhead is acceptable for bio-based dm users.
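
    A minimal sketch of the check a request stacking driver might perform on
    a candidate underlying queue; the helper name is hypothetical, the flag
    test is what the patch adds:

        #include <linux/blkdev.h>

        static int can_stack_requests_on(struct request_queue *lower_q)
        {
                /* True only for queues whose request_fn is really functional,
                 * even though bio-based dm also sets q->request_fn. */
                return blk_queue_stackable(lower_q);
        }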

    Signed-off-by: Kiyoshi Ueda
    Signed-off-by: Jun'ichi Nomura
    Signed-off-by: Jens Axboe

    Kiyoshi Ueda
     
  • This patch adds blk_insert_cloned_request(), a generic request
    submission interface for request stacking drivers.
    Request-based dm will use it to submit its clones to underlying
    devices.

    blk_rq_check_limits() is also added, because the lower queue may have
    stricter limits than the upper queue when multiple drivers stack at
    the request level.
    Besides being used internally by blk_insert_cloned_request(), the
    function will be used by request-based dm when the queue limits
    change (e.g. when dm's table is replaced).
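
    A minimal sketch of the dispatch path this enables for a request
    stacking driver; dispatch_clone() is a hypothetical helper and the error
    handling is simplified:

        #include <linux/blkdev.h>

        static int dispatch_clone(struct request_queue *lower_q,
                                  struct request *clone)
        {
                /* blk_insert_cloned_request() checks the lower queue's limits
                 * (via blk_rq_check_limits()) before queueing the clone. */
                return blk_insert_cloned_request(lower_q, clone);
        }

    blk_rq_check_limits() can also be called on its own, e.g. when the
    stacked limits change after dm's table is replaced.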

    Signed-off-by: Kiyoshi Ueda
    Signed-off-by: Jun'ichi Nomura
    Signed-off-by: Jens Axboe

    Kiyoshi Ueda
     
  • This patch adds blk_update_request(), which updates a struct request
    by completing its data portion but doesn't complete the struct
    request itself.
    Though it resembles end_that_request_first() of older kernels,
    blk_update_request() should be used only by request stacking drivers.

    Request-based dm will use it in the bio->bi_end_io callback to update
    the original request when a data portion of a cloned request completes.
    The following is additional background on why request-based dm
    needs this interface.

    - Request stacking drivers can't use blk_end_request() directly from
    the lower driver's completion context (bio->bi_end_io or rq->end_io),
    because some device drivers (e.g. ide) may try to complete
    their request with the queue lock held, which may cause a deadlock.
    See below for a detailed description of the possible deadlock.

    - To solve that, request-based dm offloads the completion of the
    cloned struct request to softirq context (i.e. using
    blk_complete_request() from rq->end_io).

    - Though it is possible to use the same solution from bio->bi_end_io,
    it would delay the notification of bio completion to the original
    submitter. It would also cause inefficient partial completion,
    because the lower driver can no longer work on the cloned request
    and request-based dm would need to requeue and redispatch it to
    the lower driver again later. That's not good.

    - So request-based dm needs blk_update_request() to perform the bio
    completion in the lower driver's completion context, which is more
    efficient.
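
    A hedged sketch of how the bio completion callback of a stacking driver
    might use it; the clone_info bookkeeping is hypothetical, only
    blk_update_request() itself comes from the patch:

        #include <linux/bio.h>
        #include <linux/blkdev.h>

        struct clone_info {                     /* hypothetical per-bio bookkeeping */
                struct request *orig_rq;        /* the original (uncloned) request */
                unsigned int len;               /* bytes covered by this bio */
        };

        static void clone_bio_end_io(struct bio *bio, int error)
        {
                struct clone_info *info = bio->bi_private;

                /* Complete the data covered by this bio on the original
                 * request without completing the request itself. */
                blk_update_request(info->orig_rq, error, info->len);
        }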

    Signed-off-by: Kiyoshi Ueda
    Signed-off-by: Jun'ichi Nomura
    Signed-off-by: Jens Axboe

    Kiyoshi Ueda
     
  • When a driver calls blk_cleanup_queue(), the device should be fully idle.
    However, the block layer may have pending plugging timers and the IO
    schedulers may have pending work in the work queues. So quiesce the
    device by waiting for the timer and flushing the work queues.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • We cannot abort a request if we raced with the timeout handler already,
    or with the IO completion. So make blk_abort_request() mark the request
    as complete, and only continue if we succeeded.

    Found and suggested by Mike Anderson

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • Only works for the generic request timer handling. Allows one to
    sporadically ignore request completions, thus exercising the timeout
    handling.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • Two mods to blkdev_issue_discard(), thinking ahead to its use on swap:

    1. Add gfp_mask argument, so swap allocation can use it where GFP_KERNEL
    might deadlock but GFP_NOIO is safe.

    2. Enlarge nr_sects argument from unsigned to sector_t: unsigned long is
    enough to cover a whole swap area, but sector_t suits any partition.

    Change sb_issue_discard()'s nr_blocks to sector_t too; but no need seen
    for a gfp_mask there, just pass GFP_KERNEL down to blkdev_issue_discard().
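
    A minimal usage sketch under the new signature, assuming a caller in a
    context where GFP_KERNEL could deadlock (as the swap case anticipates):

        #include <linux/blkdev.h>

        static int discard_range(struct block_device *bdev, sector_t start,
                                 sector_t nr_sects)
        {
                return blkdev_issue_discard(bdev, start, nr_sects, GFP_NOIO);
        }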

    Signed-off-by: Hugh Dickins
    Signed-off-by: Jens Axboe

    Hugh Dickins
     
  • Signed-off-by: Mike Anderson
    Signed-off-by: Jens Axboe

    Mike Anderson
     
  • Right now SCSI and others do their own command timeout handling.
    Move those bits to the block layer.

    Instead of having a timer per command, we try to be a bit more clever
    and simply have one per queue. This avoids the overhead of having to
    tear down and setup a timer for each command, so it will result in a lot
    less timer fiddling.

    Signed-off-by: Mike Anderson
    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • seqf can be started multiple times for a read and the header should be
    printed only for the initial one. Fix it.
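
    For reference, the generic seq_file idiom that confines a header to the
    initial iteration of a read (this illustrates the pattern only, not the
    actual genhd fix; the my_* names and stand-in data are hypothetical):

        #include <linux/kernel.h>
        #include <linux/seq_file.h>

        static int records[] = { 1, 2, 3 };             /* stand-in data */

        static void *my_seq_start(struct seq_file *m, loff_t *pos)
        {
                if (!*pos)
                        return SEQ_START_TOKEN;         /* only the initial start() */
                if (*pos <= ARRAY_SIZE(records))
                        return &records[*pos - 1];
                return NULL;
        }

        static int my_seq_show(struct seq_file *m, void *v)
        {
                if (v == SEQ_START_TOKEN) {
                        seq_puts(m, "value\n");         /* header printed exactly once */
                        return 0;
                }
                seq_printf(m, "%d\n", *(int *)v);
                return 0;
        }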

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • This patch changes blk_rq_map_user to accept a NULL user-space buffer
    with a READ command if rq_map_data is not NULL. Thus a caller can pass
    page frames to blk_rq_map_user to just set up a request and bios with
    page frames properly. bio_uncopy_user (called via blk_rq_unmap_user)
    doesn't copy data to user space for such a request.
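
    A hedged sketch of the kind of setup sg can now do, combining this
    change with the rq_map_data and gfp_mask arguments added elsewhere in
    this series; the setup_read_rq() wrapper and its parameters are
    hypothetical:

        #include <linux/blkdev.h>

        static int setup_read_rq(struct request_queue *q, struct request *rq,
                                 struct page **reserved_pages, int nr_pages,
                                 unsigned long len)
        {
                struct rq_map_data map_data = {
                        .pages      = reserved_pages,   /* driver-owned page frames */
                        .page_order = 0,
                        .nr_entries = nr_pages,
                };

                /* NULL user buffer is accepted for a READ when map_data is
                 * given; bio_uncopy_user() then skips the copy-out. */
                return blk_rq_map_user(q, rq, &map_data, NULL, len, GFP_ATOMIC);
        }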

    Signed-off-by: FUJITA Tomonori
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
     
  • It refers to functions that no longer exist after the IO completion
    changes.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • bdget_disk() and blk_lookup_devt() never cared whether the specified
    partition (or disk) is zero sized or not. I got confused while
    converting those not to depend on consecutive minor numbers in commit
    5a6411b1178baf534aa9138052864dfa89d3eada, and later, when dev0 was added,
    it broke callers which expected to get a valid return for zero-sized
    disk devices.

    So, they never needed nr_sects checks in the first place. Kill them.

    This problem was spotted and debugged by Bartlomiej Zolnierkiewicz.

    Signed-off-by: Tejun Heo
    Cc: Bartlomiej Zolnierkiewicz
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • Noticed by sparse:
    block/blk-softirq.c:156:12: warning: symbol 'blk_softirq_init' was not declared. Should it be static?
    block/genhd.c:583:28: warning: function 'bdget_disk' with external linkage has definition
    block/genhd.c:659:17: warning: incorrect type in argument 1 (different base types)
    block/genhd.c:659:17: expected unsigned int [unsigned] [usertype] size
    block/genhd.c:659:17: got restricted gfp_t
    block/genhd.c:659:29: warning: incorrect type in argument 2 (different base types)
    block/genhd.c:659:29: expected restricted gfp_t [usertype] flags
    block/genhd.c:659:29: got unsigned int
    block: kmalloc args reversed
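
    For clarity on the last warning pair: kmalloc() takes the size first and
    the gfp flags second, so the fix is simply to swap the reversed
    arguments, as in this minimal sketch:

        #include <linux/slab.h>

        static void *alloc_example(size_t size)
        {
                /* kmalloc(size_t size, gfp_t flags): size first, then flags. */
                return kmalloc(size, GFP_KERNEL);
        }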

    Signed-off-by: Harvey Harrison
    Signed-off-by: Jens Axboe

    Harvey Harrison
     
  • This adds the blk_rq_aligned helper function to check whether the
    alignment and padding requirements are satisfied for a DMA transfer.
    It also converts blk_rq_map_kern and __blk_rq_map_user to use the
    helper function.
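
    A sketch of the check the helper performs, reconstructed from the
    description (assumed logic; the in-tree helper is blk_rq_aligned()):
    both the buffer address and the transfer length must satisfy the
    queue's DMA alignment and padding masks.

        #include <linux/blkdev.h>

        static inline int rq_dma_aligned(struct request_queue *q,
                                         unsigned long addr, unsigned int len)
        {
                unsigned long mask = queue_dma_alignment(q) | q->dma_pad_mask;

                return !(addr & mask) && !(len & mask);
        }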

    Signed-off-by: FUJITA Tomonori
    Cc: Jens Axboe
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
     
  • This patch introduces struct rq_map_data to enable bio_copy_user_iov()
    to use reserved pages.

    Currently, bio_copy_user_iov allocates bounce pages, but
    drivers/scsi/sg.c wants to allocate pages by itself and use
    them. struct rq_map_data can be used to pass such pre-allocated pages
    to bio_copy_user_iov.

    The current users of bio_copy_user_iov simply pass NULL (they don't
    want to use pre-allocated pages).

    Signed-off-by: FUJITA Tomonori
    Cc: Jens Axboe
    Cc: Douglas Gilbert
    Cc: Mike Christie
    Cc: James Bottomley
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
     
  • Currently, blk_rq_map_user and blk_rq_map_user_iov always do
    GFP_KERNEL allocation.

    This adds a gfp_mask argument to blk_rq_map_user and blk_rq_map_user_iov
    so that sg can use it (sg always does GFP_ATOMIC allocation).

    Signed-off-by: FUJITA Tomonori
    Signed-off-by: Douglas Gilbert
    Cc: Mike Christie
    Cc: James Bottomley
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
     
  • CFQ's detection of queueing devices assumes a non-queuing device and detects
    if the queue depth reaches a certain threshold. Under some workloads (e.g.
    synchronous reads), CFQ effectively forces a unit queue depth, thus defeating
    the detection logic. This leads to poor performance on queuing hardware,
    since the idle window remains enabled.

    This patch inverts the sense of the logic: assume a queuing-capable device,
    and detect if the depth does not exceed the threshold.
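
    An illustrative-only sketch of the inverted detection (the structure,
    threshold and sample window are hypothetical; CFQ's real bookkeeping
    differs): start by assuming a queuing device and only demote it if the
    observed depth never exceeds the threshold within a sample window.

        struct depth_detect {
                unsigned int samples;
                unsigned int max_depth_seen;
                int hw_tag;                     /* start at 1: assume queuing */
        };

        static void update_hw_tag(struct depth_detect *d, unsigned int cur_depth)
        {
                if (cur_depth > d->max_depth_seen)
                        d->max_depth_seen = cur_depth;
                if (++d->samples < 50)
                        return;
                /* Inverted sense: only flag as non-queuing if the depth
                 * never rose above the threshold during the window. */
                d->hw_tag = d->max_depth_seen > 4;
                d->samples = 0;
                d->max_depth_seen = 0;
        }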

    Signed-off-by: Aaron Carroll
    Signed-off-by: Jens Axboe

    Aaron Carroll
     
  • We should just check for rq->bio, as that is really the information
    we are looking for. Even if the bio attached doesn't carry data,
    we still need to do IO post processing on it.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • Somewhat incomplete, as we do allow merges of requests and bios
    that specify different completion CPUs. This is done on the
    assumption that a larger IO is still more beneficial than CPU
    locality.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • This patch adds support for controlling the IO completion CPU of
    either all requests on a queue, or on a per-request basis. We export
    a sysfs variable (rq_affinity) which, if set, migrates completions
    of requests to the CPU that originally submitted them. A bio helper
    (bio_set_completion_cpu()) is also added, so that queuers can ask
    for completion on that specific CPU.

    In testing, this has been shown to cut the system time by as much
    as 20-40% on synthetic workloads where CPU affinity is desired.

    This requires a little help from the architecture, so it'll only
    work as designed for archs that are using the new generic smp
    helper infrastructure.
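
    Queue-wide behaviour is toggled through the rq_affinity sysfs attribute
    (under /sys/block/<dev>/queue/); per-request control goes through the
    new bio helper. A minimal sketch of the latter, with a hypothetical
    wrapper name:

        #include <linux/bio.h>
        #include <linux/smp.h>

        static void pin_completion_to_this_cpu(struct bio *bio)
        {
                int cpu = get_cpu();            /* current CPU, preemption off */

                bio_set_completion_cpu(bio, cpu);
                put_cpu();
        }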

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • Preparatory patch for checking queuing affinity.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • Signed-off-by: Jens Axboe

    Jens Axboe
     
  • Signed-off-by: Jens Axboe

    Jens Axboe
     
  • Now that disk and partition handling is mostly unified, it's easy to
    allow disks to have extended device numbers. This patch makes
    add_disk() use an extended device number if disk->minors is zero. Both
    sd and ide-disk are updated to use this.

    * sd_format_disk_name() is implemented, which can generically determine
    the drive name (sketched below). This removes the disk number
    restriction stemming from limited device names.

    * If the sd index goes over SD_MAX_DISKS (which can be increased now,
    BTW), sd simply doesn't initialize minors, letting the block layer
    choose an extended device number.

    * If CONFIG_DEBUG_EXT_DEVT is set, both sd and ide-disk always set
    minors to 0 and use extended device numbers.
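
    A sketch of the bijective base-26 naming scheme sd_format_disk_name()
    implements (index 0 -> "sda", 25 -> "sdz", 26 -> "sdaa", ...); this is
    an illustration, not the in-tree function:

        #include <linux/kernel.h>

        static void format_disk_name(int index, char *buf, size_t buflen)
        {
                char tmp[8];
                int i = sizeof(tmp);

                tmp[--i] = '\0';
                do {
                        tmp[--i] = 'a' + (index % 26);
                        index = index / 26 - 1;
                } while (index >= 0 && i > 0);

                snprintf(buf, buflen, "sd%s", &tmp[i]);
        }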

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • With previous changes, it's meaningless to limit the number of
    partitions. Replace @ext_minors with GENHD_FL_EXT_DEVT such that
    setting the flag allows the disk to have the maximum number of allowed
    partitions (limited only by the number of entries in parsed_partitions,
    as determined by the MAX_PART constant).

    This kills the not-too-pretty alloc_disk_ext[_node]() functions and
    makes the @minors parameter to alloc_disk[_node]() unnecessary. The
    parameter is left alone to avoid disturbing its users.

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • disk->__part used to be statically allocated to the maximum possible
    number of partitions. This patch makes partition array allocation
    dynamic. The added overhead is minimal, as the only real change is that
    one memory dereference becomes an RCU dereference. This saves a bit of
    memory, avoids CPU cycles spent iterating through unoccupied slots, and
    makes increasing the partition limit easier.
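
    A hedged sketch of the RCU-protected lookup this implies (the
    part_tbl/part[] field names follow my reading of the series and should
    be treated as illustrative; the reference counting of the real lookup
    helper is omitted):

        #include <linux/genhd.h>
        #include <linux/rcupdate.h>

        static struct hd_struct *lookup_part(struct gendisk *disk, int partno)
        {
                struct disk_part_tbl *ptbl;
                struct hd_struct *part = NULL;

                rcu_read_lock();
                ptbl = rcu_dereference(disk->part_tbl);
                if (partno >= 0 && partno < ptbl->len)
                        part = rcu_dereference(ptbl->part[partno]);
                rcu_read_unlock();

                return part;
        }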

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • Move stats-related fields - stamp, in_flight, dkstats - from disk to
    part0 and unify stat handling such that:

    * part_stat_*() now also updates part0 if the specified partition
    is not part0, i.e. part_stat_*() is now essentially all_stat_*().

    * {disk|all}_stat_*() are gone.

    * part_round_stats() is updated similarly. It handles part0 stats
    automatically and disk_round_stats() is killed.

    * part_{inc|dec}_in_flight() is implemented, which automatically updates
    part0 stats for parts other than part0.

    * disk_map_sector_rcu() is updated to return part0 if no part matches.
    Combined with the above changes, this makes NULL special case
    handling in callers unnecessary.

    * Separate stats show code paths for disk are collapsed into part
    stats show code paths.

    * Rename disk_stat_lock/unlock() to part_stat_lock/unlock()

    While at it, reposition stat handling macros a bit and add missing
    parentheses around macro parameters.
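
    A hedged sketch of the unified accounting pattern after this change
    (roughly the shape of the in-tree accounting paths; the account_io()
    wrapper itself is hypothetical):

        #include <linux/blkdev.h>
        #include <linux/genhd.h>

        static void account_io(struct gendisk *disk, sector_t sector, int rw)
        {
                struct hd_struct *part;
                int cpu;

                cpu = part_stat_lock();                 /* rcu_read_lock + get_cpu */
                part = disk_map_sector_rcu(disk, sector);
                part_stat_inc(cpu, part, ios[rw]);      /* updates part0 too */
                part_round_stats(cpu, part);
                part_inc_in_flight(part);
                part_stat_unlock();
        }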

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • GENHD_FL_FAIL for disk is what make_it_fail is for parts. Kill it and
    use part0->make_it_fail. Sysfs node handling is unified too.

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • Until now, bdev->bd_part was set only if the bdev was for a partition
    other than part0. This patch makes bdev->bd_part always set so that
    code paths don't have to differentiate in common handling.

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo