24 Oct, 2008

2 commits

  • * 'proc' of git://git.kernel.org/pub/scm/linux/kernel/git/adobriyan/proc: (35 commits)
    proc: remove fs/proc/proc_misc.c
    proc: move /proc/vmcore creation to fs/proc/vmcore.c
    proc: move pagecount stuff to fs/proc/page.c
    proc: move all /proc/kcore stuff to fs/proc/kcore.c
    proc: move /proc/schedstat boilerplate to kernel/sched_stats.h
    proc: move /proc/modules boilerplate to kernel/module.c
    proc: move /proc/diskstats boilerplate to block/genhd.c
    proc: move /proc/zoneinfo boilerplate to mm/vmstat.c
    proc: move /proc/vmstat boilerplate to mm/vmstat.c
    proc: move /proc/pagetypeinfo boilerplate to mm/vmstat.c
    proc: move /proc/buddyinfo boilerplate to mm/vmstat.c
    proc: move /proc/vmallocinfo to mm/vmalloc.c
    proc: move /proc/slabinfo boilerplate to mm/slub.c, mm/slab.c
    proc: move /proc/slab_allocators boilerplate to mm/slab.c
    proc: move /proc/interrupts boilerplate code to fs/proc/interrupts.c
    proc: move /proc/stat to fs/proc/stat.c
    proc: move rest of /proc/partitions code to block/genhd.c
    proc: move /proc/cpuinfo code to fs/proc/cpuinfo.c
    proc: move /proc/devices code to fs/proc/devices.c
    proc: move rest of /proc/locks to fs/locks.c
    ...

    Linus Torvalds
     
  • Variable 'ret' is no longer used. Don't declare it.

    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

18 Oct, 2008

2 commits

  • * 'for-linus' of git://git.kernel.dk/linux-2.6-block:
    block: remove __generic_unplug_device() from exports
    block: move q->unplug_work initialization
    blktrace: pass zfcp driver data
    blktrace: add support for driver data
    block: fix current kernel-doc warnings
    block: only call ->request_fn when the queue is not stopped
    block: simplify string handling in elv_iosched_store()
    block: fix kernel-doc for blk_alloc_devt()
    block: fix nr_phys_segments miscalculation bug
    block: add partition attribute for partition number
    block: add BIG FAT WARNING to CONFIG_DEBUG_BLOCK_EXT_DEVT
    softirq: Add support for triggering softirq work on softirqs.

    Linus Torvalds
     
  • * git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi-misc-2.6: (39 commits)
    [SCSI] sd: fix compile failure with CONFIG_BLK_DEV_INTEGRITY=n
    libiscsi: fix locking in iscsi_eh_device_reset
    libiscsi: check reason why we are stopping iscsi session to determine error value
    [SCSI] iscsi_tcp: return a descriptive error value during connection errors
    [SCSI] libiscsi: rename host reset to target reset
    [SCSI] iscsi class: fix endpoint id handling
    [SCSI] libiscsi: Support drivers initiating session removal
    [SCSI] libiscsi: fix data corruption when target has to resend data-in packets
    [SCSI] sd: Switch kernel printing level for DIF messages
    [SCSI] sd: Correctly handle all combinations of DIF and DIX
    [SCSI] sd: Always print actual protection_type
    [SCSI] sd: Issue correct protection operation
    [SCSI] scsi_error: fix target reset handling
    [SCSI] lpfc 8.2.8 v2 : Add statistical reporting control and additional fc vendor events
    [SCSI] lpfc 8.2.8 v2 : Add sysfs control of target queue depth handling
    [SCSI] lpfc 8.2.8 v2 : Revert target busy in favor of transport disrupted
    [SCSI] scsi_dh_alua: remove REQ_NOMERGE
    [SCSI] lpfc 8.2.8 : update driver version to 8.2.8
    [SCSI] lpfc 8.2.8 : Add MSI-X support
    [SCSI] lpfc 8.2.8 : Update driver to use new Host byte error code DID_TRANSPORT_DISRUPTED
    ...

    Linus Torvalds
     

17 Oct, 2008

8 commits

  • The only out-of-core user is IDE, and that should be using
    blk_start_queueing() instead.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • Running 'modprobe loop; rmmod loop' effectively creates a blk_queue and
    destroys it, which results in q->unplug_work being canceled without it
    ever having been initialized.

    Therefore, move the initialization of q->unplug_work from
    blk_queue_make_request() to blk_alloc_queue*().

    Reported-by: Alexey Dobriyan
    Signed-off-by: Peter Zijlstra
    Signed-off-by: Jens Axboe

    Peter Zijlstra
     
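    A minimal sketch of where the initialization moves, assuming the
    2.6.27-era blk_alloc_queue_node() and the static blk_unplug_work()
    handler in block/blk-core.c; the surrounding code is elided:

        /* In blk_alloc_queue_node(), right after the queue is allocated,
         * rather than in blk_queue_make_request(): */
        INIT_WORK(&q->unplug_work, blk_unplug_work);

        /* A queue that is allocated and released without
         * blk_queue_make_request() ever being called no longer has an
         * uninitialized q->unplug_work canceled on release. */
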
  • Fix block kernel-doc warnings:

    Warning(linux-2.6.27-git4//fs/block_dev.c:1272): No description found for parameter 'path'
    Warning(linux-2.6.27-git4//block/blk-core.c:1021): No description found for parameter 'cpu'
    Warning(linux-2.6.27-git4//block/blk-core.c:1021): No description found for parameter 'part'
    Warning(/var/linsrc/linux-2.6.27-git4//block/genhd.c:544): No description found for parameter 'partno'

    Signed-off-by: Randy Dunlap
    Signed-off-by: Jens Axboe

    Randy Dunlap
     
  • Callers should use either blk_run_queue/__blk_run_queue, or
    blk_start_queueing() to invoke request handling instead of calling
    ->request_fn() directly as that does not take the queue stopped
    flag into account.

    Also add appropriate comments on the above functions to detail
    their usage.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
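    A hedged sketch of the calling convention the entry above describes;
    blk_run_queue(), __blk_run_queue() and the queue stopped flag are the
    2.6.27-era block-layer facilities, and the driver function name is
    hypothetical:

        /* Hypothetical driver code that wants request processing kicked. */
        static void mydrv_kick_queue(struct request_queue *q)
        {
                unsigned long flags;

                spin_lock_irqsave(q->queue_lock, flags);
                /* Don't call q->request_fn(q) directly: that ignores the
                 * queue's stopped flag.  __blk_run_queue() (queue lock held)
                 * or blk_run_queue() (takes the lock itself) honour
                 * QUEUE_FLAG_STOPPED. */
                __blk_run_queue(q);
                spin_unlock_irqrestore(q->queue_lock, flags);
        }
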
  • strlcpy() guarantees the dest buffer is NULL terminated.

    Signed-off-by: Li Zefan
    Signed-off-by: Jens Axboe

    Li Zefan
     
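    A minimal sketch of the simplification in the entry above, assuming the
    kernel's strlcpy() and the ELV_NAME_MAX buffer size used by
    elv_iosched_store(); this is an illustrative fragment, not the exact
    upstream function:

        /* Copy the user-supplied scheduler name into a fixed buffer.
         * strlcpy() bounds the copy to the buffer size and always
         * NUL-terminates, so no manual length check or explicit
         * terminator is needed. */
        char elevator_name[ELV_NAME_MAX];

        strlcpy(elevator_name, name, sizeof(elevator_name));
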
  • blk_alloc_devt() has no 'gfp_mask' argument; fix the kernel-doc accordingly.

    Signed-off-by: Li Zefan
    Signed-off-by: Jens Axboe

    Li Zefan
     
  • This fixes the bug reported by Nikanth Karthikesan:

    http://lkml.org/lkml/2008/10/2/203

    The root cause of the bug is that blk_phys_contig_segment
    miscalculates q->max_segment_size.

    blk_phys_contig_segment checks:

    req->biotail->bi_size + next_req->bio->bi_size > q->max_segment_size

    But blk_recalc_rq_segments might expect that req->biotail and the
    previous bio in the req are supposed to be merged into one
    segment. blk_recalc_rq_segments might also expect that next_req->bio
    and the next bio in the next_req are supposed to be merged into one
    segment. In such a case, we merge two requests here that can't
    actually be merged, and later blk_rq_map_sg gives more segments than
    it should.

    We need to keep track of segment size in blk_recalc_rq_segments and
    use it to see if two requests can be merged. This patch implements it
    in a similar way to what we used to do for hw merging (virtual
    merging).

    Signed-off-by: FUJITA Tomonori
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
     
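    An illustrative sketch of the corrected check, with hypothetical
    parameter names: per the entry above, the idea is to compare the sizes
    of the two physical segments that would be joined (tracked while
    recounting segments), not just the sizes of the two boundary bios:

        /* Hypothetical helper: back_seg_size is the size of req's last
         * physical segment, front_seg_size the size of next_req's first
         * physical segment, both tracked in blk_recalc_rq_segments(). */
        static int boundary_segment_fits(struct request_queue *q,
                                         unsigned int back_seg_size,
                                         unsigned int front_seg_size)
        {
                return back_seg_size + front_seg_size <= q->max_segment_size;
        }
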
  • Now that device_create() has been audited, rename things back to the
    original call to be sane.

    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

13 Oct, 2008

1 commit

  • Multipath is best at handling transport errors. If it gets a device
    error then there is not much the multipath layer can do. It will just
    access the same device but from a different path.

    This patch breaks up failfast into device, transport and driver errors.
    The multipath layers (md and dm multipath) only ask the lower levels to
    fast fail transport errors. The user of failfast, read ahead, will ask
    to fast fail on all errors.

    Note that blk_noretry_request will return true if any failfast bit
    is set. This allows drivers that do not support the multipath failfast
    bits to continue to fail on any failfast error like before. Drivers
    like scsi that are able to fail fast specific errors can check
    for the specific fail fast type. In the next patch I will convert
    scsi.

    Signed-off-by: Mike Christie
    Cc: Jens Axboe
    Signed-off-by: James Bottomley

    Mike Christie
     
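    A hedged sketch of the resulting flag layout, using the
    REQ_FAILFAST_DEV / REQ_FAILFAST_TRANSPORT / REQ_FAILFAST_DRIVER names
    this series introduces; the macro bodies are paraphrased, not copied
    from the tree:

        /* Three independent failfast classes on rq->cmd_flags. */
        #define blk_failfast_dev(rq)        ((rq)->cmd_flags & REQ_FAILFAST_DEV)
        #define blk_failfast_transport(rq)  ((rq)->cmd_flags & REQ_FAILFAST_TRANSPORT)
        #define blk_failfast_driver(rq)     ((rq)->cmd_flags & REQ_FAILFAST_DRIVER)

        /* Any failfast bit means "don't retry", so drivers that ignore
         * the distinction keep their old behaviour: */
        #define blk_noretry_request(rq)     (blk_failfast_dev(rq) ||       \
                                             blk_failfast_transport(rq) || \
                                             blk_failfast_driver(rq))
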

09 Oct, 2008

13 commits

  • The DM and MD integrity support now depends on being able to use
    gendisks instead of block_devices when comparing integrity profiles.
    Change function parameters accordingly.

    Also update comparison logic so that two NULL profiles are a valid
    configuration.

    Signed-off-by: Martin K. Petersen
    Signed-off-by: Jens Axboe

    Martin K. Petersen
     
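    A minimal sketch of the comparison behaviour described above, assuming
    the 2.6.27-era gendisk carries its profile as gd->integrity; only the
    NULL handling discussed in the entry is shown, the field-by-field
    comparison is elided:

        int blk_integrity_compare(struct gendisk *gd1, struct gendisk *gd2)
        {
                struct blk_integrity *b1 = gd1->integrity;
                struct blk_integrity *b2 = gd2->integrity;

                /* Two devices without an integrity profile are compatible. */
                if (!b1 && !b2)
                        return 0;

                /* One profile present and one missing is a mismatch. */
                if (!b1 || !b2)
                        return -1;

                /* ... field-by-field comparison of the two profiles ... */
                return 0;
        }
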
  • - kobject_del already puts the parent.

    - Set integrity profile to NULL to prevent stale data.

    Signed-off-by: Martin K. Petersen
    Signed-off-by: Jens Axboe

    Martin K. Petersen
     
  • This patch removes end_queued_request() and end_dequeued_request(),
    which are no longer used.

    As a result, end_request() became the only user of __end_request(),
    so the actual code in __end_request() is moved into end_request()
    and __end_request() is removed.

    Signed-off-by: Kiyoshi Ueda
    Signed-off-by: Jun'ichi Nomura
    Signed-off-by: Jens Axboe

    Kiyoshi Ueda
     
  • This patch converts elevator to use __blk_end_request() directly
    so that end_{queued|dequeued}_request() can be removed.
    The related 'uptodate' argument is converted to 'error'.

    Signed-off-by: Kiyoshi Ueda
    Signed-off-by: Jun'ichi Nomura
    Signed-off-by: Jens Axboe

    Kiyoshi Ueda
     
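    A hedged sketch of the uptodate-to-error conversion pattern at such
    call sites, assuming the 2.6.27-era __blk_end_request(rq, error,
    nr_bytes) and blk_rq_bytes() helpers:

        /* Caller holds the queue lock.  Old call sites passed a boolean
         * 'uptodate'; the direct __blk_end_request() call wants an errno. */
        int error = uptodate ? 0 : -EIO;

        __blk_end_request(rq, error, blk_rq_bytes(rq));
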
  • Define as 32, which is what BDEVNAME_SIZE is/was as well. This keeps
    the user interface the same and gets rid of the difference between
    kernel and user api here.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • This patch adds a new interface, blk_lld_busy(), to check the lld's
    busy state from the block layer.
    blk_lld_busy() calls down into the low-level driver to do the check,
    provided the driver has set q->lld_busy_fn() using blk_queue_lld_busy().

    This resolves a performance problem on request stacking devices,
    described below.

    Some drivers, like the scsi mid layer, stop dispatching requests when
    they detect a busy state on their low-level device (host/target/device).
    This allows other requests to stay in the I/O scheduler's queue
    for a chance of merging.

    Request stacking drivers like request-based dm should follow
    the same logic.
    However, there is no generic interface for the stacked device
    to check if the underlying device(s) are busy.
    If the request stacking driver dispatches and submits requests to
    the busy underlying device, the requests will stay in
    the underlying device's queue without a chance of merging.
    This causes a performance problem under bursty I/O load.

    With this patch, the busy state of the underlying device is exported
    via q->lld_busy_fn(), so the request stacking driver can check it
    and stop dispatching requests while the device is busy.

    The underlying device driver must return the busy state appropriately:
    1: when the device driver can't process requests immediately.
    0: when the device driver can process requests immediately,
    including abnormal situations where the device driver needs
    to kill all requests.

    Signed-off-by: Kiyoshi Ueda
    Signed-off-by: Jun'ichi Nomura
    Cc: Andrew Morton
    Signed-off-by: Jens Axboe

    Kiyoshi Ueda
     
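    A minimal sketch of the interface as described above; the names
    (blk_lld_busy(), blk_queue_lld_busy(), q->lld_busy_fn and the
    lld_busy_fn callback type) follow the entry's text, the bodies are
    paraphrased:

        /* Low-level driver registers its busy check at init time. */
        void blk_queue_lld_busy(struct request_queue *q, lld_busy_fn *fn)
        {
                q->lld_busy_fn = fn;
        }

        /* A request stacking driver asks before dispatching to the lower
         * queue: non-zero means the lld can't take requests right now. */
        int blk_lld_busy(struct request_queue *q)
        {
                if (q->lld_busy_fn)
                        return q->lld_busy_fn(q);

                return 0;
        }
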
  • blk_start_queueing() should act like the generic queue unplugging
    and kicking and ignore a stopped queue. Such a queue may not be
    run until after a call to blk_start_queue().

    Signed-off-by: Elias Oltmanns
    Signed-off-by: Jens Axboe

    Elias Oltmanns
     
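    A hedged sketch of blk_start_queueing() with the stopped-queue check
    described above; the surrounding unplug logic is paraphrased from the
    2.6.27-era code and may differ in detail:

        void blk_start_queueing(struct request_queue *q)
        {
                if (!blk_queue_plugged(q)) {
                        /* A stopped queue must not be run from here; it
                         * will be run again after blk_start_queue(). */
                        if (unlikely(blk_queue_stopped(q)))
                                return;
                        q->request_fn(q);
                } else
                        __generic_unplug_device(q);
        }
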
  • By only allowing async IO to consume 3/4ths of the tag depth, we
    always have slots free to serve sync IO. This is important to avoid
    having writes fill the entire tag queue, thus starving reads.

    Original patch and idea from Linus Torvalds

    Signed-off-by: Jens Axboe

    Jens Axboe
     
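    An illustrative sketch only, with hypothetical variable names, of the
    reservation idea described above (the actual change lives in the tag
    allocation path in block/blk-tag.c):

        /* bqt is the queue's blk_queue_tag; busy_tags is a hypothetical
         * count of tags currently in use. */
        unsigned int max_depth = bqt->max_depth;

        if (!rq_is_sync(rq))
                max_depth = 3 * max_depth / 4;

        if (busy_tags >= max_depth)
                return 1;       /* make this async request wait for a tag */
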
  • We really need to know about the hardware tagging support as well,
    since if the SSD does not do tagging then we still want to idle.
    Otherwise we have the same dependent sync IO vs flooding async IO
    problem as on rotational media.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • We don't want to idle in AS/CFQ if the device doesn't have a seek
    penalty. So add a QUEUE_FLAG_NONROT to indicate a non-rotational
    device, low level drivers should set this flag upon discovery of
    an SSD or similar device type.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
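    A minimal sketch of how a low-level driver would advertise a
    non-rotational device and how a scheduler can test it, assuming the
    QUEUE_FLAG_NONROT flag and the queue_flag_set_unlocked() /
    blk_queue_nonrot() helpers from this series; the detection condition
    is hypothetical:

        /* In a low-level driver, once the device is known to be an SSD
         * (the detection condition here is hypothetical): */
        if (device_reports_non_rotational)
                queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);

        /* In an I/O scheduler, skip idling on such devices: */
        if (blk_queue_nonrot(q))
                return;         /* no seek penalty, nothing gained by idling */
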
  • This patch adds a queue flag to indicate the block device can be
    used for request stacking.

    Request stacking drivers need to stack their devices only on top of
    devices whose q->request_fn is functional.
    Since bio stacking drivers (e.g. md, loop) basically initialize
    their queue using blk_alloc_queue() and don't set q->request_fn,
    a check for (q->request_fn == NULL) looks sufficient for that purpose.

    However, dm will become both types of stacking driver (bio-based and
    request-based), and dm will always set q->request_fn even when the dm
    device is bio-based, in which case q->request_fn is not actually
    functional. So we need something else to distinguish the type of the
    device; adding a queue flag is a solution for that.

    The reason why dm always sets q->request_fn is to keep
    compatibility with dm user-space tools.
    Currently, all dm user-space tools are using bio-based dm without
    specifying the type of the dm device they use.
    To use request-based dm without changing such tools, the kernel
    must decide the type of the dm device automatically.
    The automatic type decision can't be done at the device creation time
    and needs to be deferred until such tools load a mapping table,
    since the actual type is decided by dm target type included in
    the mapping table.

    So a dm device has to be initialized using blk_init_queue()
    so that we can load either type of table.
    Then all of the queue's fields are set (e.g. q->request_fn), and there
    is nothing left to distinguish whether it is bio-based or request-based,
    even after a table is loaded and the type of the device is decided.

    Incidentally, some parts of the queue (e.g. the request_list and the
    elevator) are not needed when the dm device is used as bio-based, but
    the memory overhead is small (about 20 KB per queue on ia64), so the
    loss should be acceptable for bio-based dm users.

    Signed-off-by: Kiyoshi Ueda
    Signed-off-by: Jun'ichi Nomura
    Signed-off-by: Jens Axboe

    Kiyoshi Ueda
     
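    A hedged sketch of how the flag is meant to be used, per the entry
    above; blk_queue_stackable() is the test macro this series adds, and
    the stacking-driver check below is illustrative (bdev is a
    hypothetical underlying block_device):

        /* blk_init_queue()-style queues, which have a working ->request_fn,
         * mark themselves as usable for request stacking: */
        queue_flag_set_unlocked(QUEUE_FLAG_STACKABLE, q);

        /* A request stacking driver then tests the flag instead of
         * relying on q->request_fn != NULL: */
        if (!blk_queue_stackable(bdev_get_queue(bdev)))
                return -EINVAL;     /* can't stack requests on this device */
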
  • This patch adds blk_insert_cloned_request(), a generic request
    submission interface for request stacking drivers.
    Request-based dm will use it to submit their clones to underlying
    devices.

    blk_rq_check_limits() is also added because it is possible that
    the lower queue has stricter limits than the upper queue when
    multiple drivers are stacked at the request level.
    Besides blk_insert_cloned_request()'s internal use, the function
    will also be used by request-based dm when the queue limits are
    modified (e.g. by replacing dm's table).

    Signed-off-by: Kiyoshi Ueda
    Signed-off-by: Jun'ichi Nomura
    Signed-off-by: Jens Axboe

    Kiyoshi Ueda
     
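    A minimal sketch of the limit check, assuming the 2.6.27-era request
    and queue fields (rq->nr_sectors, q->max_sectors, q->max_phys_segments,
    q->max_hw_segments) and the internal blk_recalc_rq_segments() helper;
    the real function may differ in detail:

        int blk_rq_check_limits(struct request_queue *q, struct request *rq)
        {
                /* The lower queue may have stricter limits than the queue
                 * the request was originally prepared against. */
                if (rq->nr_sectors > q->max_sectors)
                        return -EIO;

                /* Recount segments against this queue's rules, then check. */
                blk_recalc_rq_segments(rq);
                if (rq->nr_phys_segments > q->max_phys_segments ||
                    rq->nr_phys_segments > q->max_hw_segments)
                        return -EIO;

                return 0;
        }
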
  • This patch adds blk_update_request(), which updates a struct request
    by completing its data part, but doesn't complete the struct
    request itself.
    Though it looks like end_that_request_first() of older kernels,
    blk_update_request() should be used only by request stacking drivers.

    Request-based dm will use it in bio->bi_end_io callback to update
    the original request when a data part of a cloned request completes.
    The following is additional background on why request-based dm
    needs this interface.

    - Request stacking drivers can't use blk_end_request() directly from
    the lower driver's completion context (bio->bi_end_io or rq->end_io),
    because some device drivers (e.g. ide) may try to complete
    their request with the queue lock held, which may cause a deadlock.
    See below for a detailed description of the possible deadlock:

    - To solve that, request-based dm offloads the completion of
    cloned struct request to softirq context (i.e. using
    blk_complete_request() from rq->end_io).

    - Though it is possible to use the same solution from bio->bi_end_io,
    it will delay the notification of bio completion to the original
    submitter. Also, it will cause inefficient partial completion,
    because the lower driver can't perform the cloned request anymore
    and request-based dm needs to requeue and redispatch it to
    the lower driver again later. That's not good.

    - So request-based dm needs blk_update_request() to perform the bio
    completion in the lower driver's completion context, which is more
    efficient.

    Signed-off-by: Kiyoshi Ueda
    Signed-off-by: Jun'ichi Nomura
    Signed-off-by: Jens Axboe

    Kiyoshi Ueda
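
    A hedged usage sketch, per the description above, of calling
    blk_update_request() from a stacking driver's bio completion callback;
    the stacked_io structure and its fields are hypothetical (request-based
    dm's real bookkeeping differs), and the blk_update_request() signature
    is assumed to be (rq, error, nr_bytes):

        /* Hypothetical bi_end_io of a bio belonging to a cloned request. */
        static void clone_bio_end_io(struct bio *clone, int error)
        {
                struct stacked_io *io = clone->bi_private;      /* hypothetical */
                struct request *orig = io->original_rq;         /* hypothetical */
                unsigned int nr_bytes = io->bio_bytes;  /* bytes this bio carried */

                /* Complete the matching data part of the original request
                 * right here, in the lower driver's completion context; the
                 * struct request itself is completed later (e.g. from
                 * softirq via blk_complete_request()). */
                blk_update_request(orig, error, nr_bytes);
        }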