10 Sep, 2010

1 commit

  • This patch converts md to support REQ_FLUSH/FUA instead of the now
    deprecated REQ_HARDBARRIER. In the core part (md.c), the following
    changes are notable.

    * Unlike REQ_HARDBARRIER, REQ_FLUSH/FUA don't interfere with
    processing of other requests and thus there is no reason to mark the
    queue congested while FLUSH/FUA is in progress.

    * REQ_FLUSH/FUA failures are final and their users don't need
    retry logic, so the retry logic is removed.

    * Preflush needs to be issued to all member devices but FUA writes can
    be handled the same way as other writes - their processing can be
    deferred to request_queue of member devices. md_barrier_request()
    is renamed to md_flush_request() and simplified accordingly.

    For linear, raid0 and multipath, the core changes are enough. raid1,
    5 and 10 need the following conversions.

    * raid1: Handling of FLUSH/FUA bio's can simply be deferred to
    request_queues of member devices. Barrier related logic removed.

    * raid5: Queue draining logic dropped. FUA bit is propagated through
    biodrain and stripe reconstruction such that all the updated parts
    of the stripe are written out with FUA writes if any of the dirtying
    writes was FUA. preread_active_stripes handling in make_request()
    is updated as suggested by Neil Brown.

    * raid10: The FUA bit needs to be propagated to write clones (see
    the sketch below).
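
    A minimal sketch of that raid10 change, assuming 2.6.37-era
    block-layer names (bi_rw, REQ_FUA, bio_clone); illustrative, not a
    verbatim quote of the patch:

        /* Clone the incoming write for one member device; the FUA bit
         * rides along so the member's request_queue handles it. */
        const unsigned long do_sync = bio->bi_rw & REQ_SYNC;
        const unsigned long do_fua = bio->bi_rw & REQ_FUA;
        struct bio *mbio = bio_clone(bio, GFP_NOIO);

        mbio->bi_bdev = rdev->bdev;
        mbio->bi_rw = WRITE | do_sync | do_fua;
        generic_make_request(mbio);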

    linear, raid0, 1, 5 and 10 tested.

    Signed-off-by: Tejun Heo
    Reviewed-by: Neil Brown
    Signed-off-by: Jens Axboe

    Tejun Heo
     

08 Aug, 2010

1 commit

  • Remove the current bio flags and reuse the request flags for the bio, too.
    This makes it easier to trace the type of I/O from the filesystem
    down to the block driver. There were two flags in the bio that were
    missing from the requests: BIO_RW_UNPLUG and BIO_RW_AHEAD. Also I've
    renamed two request flags that had a superfluous RW in them.

    Note that the flags are in bio.h despite having the REQ_ name - as
    blkdev.h includes bio.h that is the only way to go for now.
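
    As a hedged illustration of the unified namespace (2.6.36-era field
    names; note_sync_io() is a hypothetical helper):

        /* The same REQ_* bits can now be tested at both levels. */
        if (bio->bi_rw & REQ_SYNC)      /* bio from the filesystem */
                note_sync_io("bio");
        if (rq->cmd_flags & REQ_SYNC)   /* request seen by the driver */
                note_sync_io("request");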

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Jens Axboe

    Christoph Hellwig
     

22 May, 2010

1 commit


18 May, 2010

3 commits


30 Mar, 2010

1 commit

  • …it slab.h inclusion from percpu.h

    percpu.h is included by sched.h and module.h and thus ends up being
    included when building most .c files. percpu.h includes slab.h which
    in turn includes gfp.h making everything defined by the two files
    universally available and complicating inclusion dependencies.

    percpu.h -> slab.h dependency is about to be removed. Prepare for
    this change by updating users of gfp and slab facilities to include
    those headers directly instead of assuming availability. As this
    conversion needs to touch a large number of source files, the
    following script is used as the basis of conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

    The script does the following.

    * Scan files for gfp and slab usages and update includes such that
    only the necessary includes are there, i.e. gfp.h if only gfp is
    used and slab.h if slab is used (see the example after this list).

    * When the script inserts a new include, it looks at the include
    blocks and tries to put the new include such that its order conforms
    to its surroundings. It's put in the include block which contains
    core kernel includes, in the same order that the rest are ordered -
    alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
    doesn't seem to be any matching order.

    * If the script can't find a place to put a new include (mostly
    because the file doesn't have a fitting include block), it prints out
    an error message indicating which .h file needs to be added to the
    file.
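
    For illustration, a typical before/after for a converted .c file (an
    assumed example, not taken from the patch):

        /* before: kmalloc() compiled only because percpu.h (pulled in
         * via sched.h) dragged in slab.h and gfp.h implicitly */
        #include <linux/sched.h>

        /* after the sweep: include what is actually used */
        #include <linux/sched.h>
        #include <linux/slab.h>         /* kmalloc, kfree */
        #include <linux/gfp.h>          /* GFP_KERNEL */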

    The conversion was done in the following steps.

    1. The initial automatic conversion of all .c files updated slightly
    over 4000 files, deleting around 700 includes and adding ~480 gfp.h
    and ~3000 slab.h inclusions. The script emitted errors for ~400
    files.

    2. Each error was manually checked. Some didn't need the inclusion,
    some needed manual addition, while for others adding it to an
    implementation .h or embedding .c file was more appropriate. This step added
    inclusions to around 150 files.

    3. The script was run again and the output was compared to the edits
    from #2 to make sure no file was left behind.

    4. Several build tests were done and a couple of problems were fixed.
    e.g. lib/decompress_*.c used malloc/free() wrappers around slab
    APIs requiring slab.h to be added manually.

    5. The script was run on all .h files but without automatically
    editing them as sprinkling gfp.h and slab.h inclusions around .h
    files could easily lead to inclusion dependency hell. Most gfp.h
    inclusion directives were ignored as stuff from gfp.h was usually
    widely available and often used in preprocessor macros. Each
    slab.h inclusion directive was examined and added manually as
    necessary.

    6. percpu.h was updated not to include slab.h.

    7. Build tests were done on the following configurations and failures
    were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
    distributed build env didn't work with gcov compiles) and a few
    more options had to be turned off depending on archs to make things
    build (like ipr on powerpc/64 which failed due to missing writeq).

    * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig

    8. percpu.h modifications were reverted so that it could be applied as
    a separate patch and serve as bisection point.

    Given the fact that I had only a couple of failures from the tests
    in step 7, I'm fairly confident about the coverage of this
    conversion patch. If there is a breakage, it's likely to be
    something in one of the arch headers which should be easily
    discoverable on most builds of the specific arch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

    Tejun Heo
     

16 Mar, 2010

1 commit

  • If a component device has a merge_bvec_fn then, as we never call it,
    we must ensure we never need to. Currently this is done by setting
    max_sectors to 1 PAGE; however, this does not stop a bio being created
    with several sub-page iovecs that would violate the merge_bvec_fn.

    So instead set max_segments to 1 and set the segment boundary to the
    same as a page boundary to ensure there is only ever one single-page
    segment of IO requested at a time.

    This can particularly be an issue when 'xen' is used as it is
    known to submit multiple small buffers in a single bio.
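
    The resulting setup amounts to something like this (a sketch against
    the 2.6.34-era limits API, where PAGE_CACHE_SIZE equals PAGE_SIZE):

        if (rdev->bdev->bd_disk->queue->merge_bvec_fn) {
                /* never build an IO that could need merge_bvec_fn:
                 * one segment, never crossing a page boundary */
                blk_queue_max_segments(mddev->queue, 1);
                blk_queue_segment_boundary(mddev->queue,
                                           PAGE_CACHE_SIZE - 1);
        }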

    Signed-off-by: NeilBrown
    Cc: stable@kernel.org

    NeilBrown
     

26 Feb, 2010

1 commit

  • The block layer calling convention is blk_queue_<limit name>.
    blk_queue_max_sectors predates this practice, leading to some confusion.
    Rename the function to appropriately reflect that its intended use is to
    set max_hw_sectors.

    Also introduce a temporary wrapper for backwards compatibility. This
    can be removed after the merge window is closed.
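
    The temporary wrapper is essentially (sketch):

        /* backwards compatibility shim, to be removed after the
         * merge window closes */
        void blk_queue_max_sectors(struct request_queue *q,
                                   unsigned int max_sectors)
        {
                blk_queue_max_hw_sectors(q, max_sectors);
        }
        EXPORT_SYMBOL(blk_queue_max_sectors);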

    Signed-off-by: Martin K. Petersen
    Signed-off-by: Jens Axboe

    Martin K. Petersen
     

14 Dec, 2009

2 commits

  • Suggested by Oren Held

    Signed-off-by: NeilBrown

    NeilBrown
     
  • Previously barriers were only supported on RAID1. This is because
    other levels require synchronisation across all devices and so needed
    a different approach.
    Here is that approach.

    When a barrier arrives, we send a zero-length barrier to every active
    device. When that completes - and if the original request was not
    empty - we submit the barrier request itself (with the barrier flag
    cleared) and then submit a fresh load of zero length barriers.

    The barrier request itself is asynchronous, but any subsequent
    request will block until the barrier completes.

    The reason for clearing the barrier flag is that a barrier request is
    allowed to fail. If we pass a non-empty barrier through a striping
    raid level it is conceivable that part of it could succeed and part
    could fail. That would be way too hard to deal with.
    So if the first run of zero-length barriers succeeds, we assume all is
    sufficiently well that we send the request and ignore errors in the
    second run of barriers.

    RAID5 needs extra care as write requests may not have been submitted
    to the underlying devices yet. So we flush the stripe cache before
    proceeding with the barrier.

    Note that the second set of zero-length barriers is submitted
    immediately after the original request is submitted. Thus when
    a personality finds mddev->barrier to be set during make_request,
    it should not return from make_request until the corresponding
    per-device request(s) have been queued.

    That will be done in later patches.
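
    The first round of zero-length barriers looks roughly like this (a
    condensed sketch using 2.6.32-era md types; the completion callback
    and the pending counter are approximated names, not quoted from the
    patch):

        mdk_rdev_t *rdev;

        rcu_read_lock();
        list_for_each_entry_rcu(rdev, &mddev->disks, same_set)
                if (rdev->raid_disk >= 0 &&
                    !test_bit(Faulty, &rdev->flags)) {
                        /* zero-length barrier flushes this member */
                        struct bio *bi = bio_alloc(GFP_KERNEL, 0);

                        bi->bi_end_io = md_end_barrier;  /* approximated */
                        bi->bi_private = rdev;
                        bi->bi_bdev = rdev->bdev;
                        atomic_inc(&mddev->barrier_pending); /* approximated */
                        submit_bio(WRITE_BARRIER, bi);
                }
        rcu_read_unlock();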

    Signed-off-by: NeilBrown
    Reviewed-by: Andre Noll

    NeilBrown
     

23 Sep, 2009

3 commits

  • Recent commit bbba809e96539672f775a3d70102657d05816a5b
    replaced mempool_create_kzalloc_pool with mempool_create_kmalloc_pool
    plus a memset.
    This memset is not needed (and we didn't need kzalloc in the first
    place).
    Every field of the allocated structure (struct multipath_bh) is
    initialised immediately except retry_list, and memset does not
    initialise a list_head anyway.

    So remove the memset.
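
    The allocation path in multipath's make_request then reduces to
    (sketch):

        struct multipath_bh *mp_bh = mempool_alloc(conf->pool, GFP_NOIO);

        /* no memset: every field used below is assigned explicitly,
         * and retry_list only becomes valid when list_add() runs */
        mp_bh->master_bio = bio;
        mp_bh->mddev = mddev;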

    Signed-off-by: NeilBrown

    NeilBrown
     
  • This should stop writeback from coming when the device is
    temporarily suspended.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • The management threads for raid4, 5 and 6 arrays are all called
    mdX_raid5, independent of the actual raid level, which is wrong and
    can be confusing.

    So change md_register_thread to use the name from the personality
    unless an alternate name (like 'resync' or 'reshape') is given.

    This is simpler and more correct.

    Cc: Jinzc
    Signed-off-by: NeilBrown

    NeilBrown
     

22 Sep, 2009

1 commit

  • The kzalloc mempool does not re-zero items that have been used and then
    returned to the pool. Manually zero the allocated multipath_bh instead.
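
    I.e. (sketch):

        mp_bh = mempool_alloc(conf->pool, GFP_NOIO);
        /* pooled objects come back with their old contents intact */
        memset(mp_bh, 0, sizeof(*mp_bh));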

    Acked-by: Neil Brown
    Signed-off-by: Sage Weil
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sage Weil
     

11 Sep, 2009

1 commit


03 Aug, 2009

1 commit

  • This patch replaces md_integrity_check() by two new public functions:
    md_integrity_register() and md_integrity_add_rdev() which are both
    personality-independent.

    md_integrity_register() is called from the ->run and ->hot_remove
    methods of all personalities that support data integrity. The
    function iterates over the component devices of the array and
    determines if all active devices are integrity capable and if their
    profiles match. If this is the case, the common profile is registered
    for the mddev via blk_integrity_register().

    The second new function, md_integrity_add_rdev() is called from the
    ->hot_add_disk methods, i.e. whenever a new device is being added
    to a raid array. If the new device does not support data integrity,
    or has a profile different from the one already registered, data
    integrity for the mddev is disabled.

    For raid0 and linear, only the call to md_integrity_register() from
    the ->run method is necessary.
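
    Usage in a personality then looks like this (sketch; error handling
    simplified):

        /* in ->run: register the common profile if all members match */
        md_integrity_register(mddev);

        /* in ->hot_add_disk: disable integrity for the mddev if the
         * new rdev's profile doesn't match the registered one */
        md_integrity_add_rdev(rdev, mddev);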

    Signed-off-by: Andre Noll
    Signed-off-by: NeilBrown

    Andre Noll
     

01 Jul, 2009

1 commit


18 Jun, 2009

1 commit

  • If the superblock of a component device indicates the presence of a
    bitmap but the corresponding raid personality does not support bitmaps
    (raid0, linear, multipath, faulty), then something is seriously wrong
    and we'd better refuse to run such an array.

    Currently, this check is performed while the superblocks are examined,
    i.e. before entering personality code. Therefore the generic md layer
    must know which raid levels support bitmaps and which do not.

    This patch avoids this layer violation without adding identical code
    to various personalities. This is accomplished by introducing a new
    public function to md.c, md_check_no_bitmap(), which replaces the
    hard-coded checks in the superblock loading functions.

    A call to md_check_no_bitmap() is added to the ->run method of each
    personality which does not support bitmaps and assembly is aborted
    if at least one component device contains a bitmap.
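
    The helper is essentially the following (a sketch assuming the
    2.6.31-era mddev fields bitmap_file and bitmap_offset):

        int md_check_no_bitmap(mddev_t *mddev)
        {
                if (!mddev->bitmap_file && !mddev->bitmap_offset)
                        return 0;       /* no bitmap recorded: fine */
                printk(KERN_ERR "%s: bitmaps are not supported for %s\n",
                       mdname(mddev), mddev->pers->name);
                return 1;               /* caller aborts assembly */
        }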

    Signed-off-by: Andre Noll
    Signed-off-by: NeilBrown

    Andre Noll
     

16 Jun, 2009

1 commit

  • Having a macro just to cast a void* isn't really helpful.
    I would much rather see that we are simply dereferencing ->private
    than have to know what the macro does.

    So open code the macro everywhere and remove the pointless cast.
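
    Concretely, at each call site (conf_t stands in for the
    personality's conf type):

        /* before: a macro hiding a plain cast */
        conf_t *conf = mddev_to_conf(mddev);

        /* after: the dereference is explicit */
        conf_t *conf = mddev->private;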

    Signed-off-by: NeilBrown

    NeilBrown
     

23 May, 2009

1 commit


31 Mar, 2009

6 commits

  • Get personalities out of the business of directly modifying
    ->array_sectors. Lays groundwork to introduce policy on when
    ->array_sectors can be modified.

    Reviewed-by: Andre Noll
    Signed-off-by: Dan Williams

    Dan Williams
     
  • In preparation for giving userspace control over ->array_sectors we need
    to be able to retrieve the 'default' size, and the 'anticipated' size
    when a reshape is requested. For personalities that do not reshape,
    emit a warning if anything but the default size is requested.

    In the raid5 case we need to update ->previous_raid_disks to make the
    new 'default' size available.

    Reviewed-by: Andre Noll
    Signed-off-by: Dan Williams

    Dan Williams
     
  • This patch renames the "size" field of struct mddev_s to "dev_sectors"
    and stores the number of 512-byte sectors instead of the number of
    1K-blocks in it.

    All users of that field, including raid levels 1, 4-6 and 10, are
    adjusted accordingly. This simplifies the code a bit because it
    allows us to get rid of a couple of divisions/multiplications by two.
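
    A typical simplification (illustrative, not quoted from the patch):

        /* before: ->size held 1K blocks, so sector math needed *2 */
        sectors = mddev->size * 2;

        /* after: ->dev_sectors is already in 512-byte sectors */
        sectors = mddev->dev_sectors;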

    In order to make checkpatch happy, some minor coding style issues
    have also been addressed. In particular, size_store() now uses
    strict_strtoull() instead of simple_strtoull().

    Signed-off-by: Andre Noll
    Signed-off-by: NeilBrown

    Andre Noll
     
  • It really is nicer to keep related code together.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • This makes the includes more explicit, and is preparation for moving
    md_k.h to drivers/md/md.h

    Remove include/raid/md.h as its only remaining use was to #include
    other files.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • Move the headers with the local structures for the disciplines and
    bitmap.h into drivers/md/ so that they are more easily grepable for
    hacking and not far away. md.h is left where it is for now as there
    are some uses from the outside.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: NeilBrown

    Christoph Hellwig
     

09 Jan, 2009

1 commit

  • The rdev_for_each macro defined in <linux/raid/md_k.h> is identical
    to list_for_each_entry_safe from <linux/list.h>; it should be defined
    to use list_for_each_entry_safe instead of reinventing the wheel.

    But some uses of the safe iteration don't really need it; a plain
    list_for_each_entry is enough, which saves a temp variable (tmp)
    in every function that used rdev_for_each.

    In this patch, most rdev_for_each loops are replaced by
    list_for_each_entry, saving many tmp vars in total; the safe version
    is kept only where list_del is called to delete an entry.
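
    The shape of the conversion (sketch; same_set is the list linkage md
    used at the time):

        mdk_rdev_t *rdev, *tmp;

        /* before: safe variant, needs the tmp cursor */
        rdev_for_each(rdev, tmp, mddev) {
                /* ... no list_del here ... */
        }

        /* after: plain iteration, tmp goes away */
        list_for_each_entry(rdev, &mddev->disks, same_set) {
                /* ... */
        }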

    Signed-off-by: Cheng Renquan
    Signed-off-by: NeilBrown

    Cheng Renquan
     

18 Oct, 2008

1 commit

  • * git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi-misc-2.6: (39 commits)
    [SCSI] sd: fix compile failure with CONFIG_BLK_DEV_INTEGRITY=n
    libiscsi: fix locking in iscsi_eh_device_reset
    libiscsi: check reason why we are stopping iscsi session to determine error value
    [SCSI] iscsi_tcp: return a descriptive error value during connection errors
    [SCSI] libiscsi: rename host reset to target reset
    [SCSI] iscsi class: fix endpoint id handling
    [SCSI] libiscsi: Support drivers initiating session removal
    [SCSI] libiscsi: fix data corruption when target has to resend data-in packets
    [SCSI] sd: Switch kernel printing level for DIF messages
    [SCSI] sd: Correctly handle all combinations of DIF and DIX
    [SCSI] sd: Always print actual protection_type
    [SCSI] sd: Issue correct protection operation
    [SCSI] scsi_error: fix target reset handling
    [SCSI] lpfc 8.2.8 v2 : Add statistical reporting control and additional fc vendor events
    [SCSI] lpfc 8.2.8 v2 : Add sysfs control of target queue depth handling
    [SCSI] lpfc 8.2.8 v2 : Revert target busy in favor of transport disrupted
    [SCSI] scsi_dh_alua: remove REQ_NOMERGE
    [SCSI] lpfc 8.2.8 : update driver version to 8.2.8
    [SCSI] lpfc 8.2.8 : Add MSI-X support
    [SCSI] lpfc 8.2.8 : Update driver to use new Host byte error code DID_TRANSPORT_DISRUPTED
    ...

    Linus Torvalds
     

13 Oct, 2008

2 commits

  • Multipath is best at handling transport errors. If it gets a device
    error then there is not much the multipath layer can do. It will just
    access the same device but from a different path.

    This patch breaks up failfast into device, transport and driver errors.
    The multipath layers (md and dm multipath) only ask the lower levels to
    fast fail transport errors. The user of failfast, read ahead, will ask
    to fast fail on all errors.

    Note that blk_noretry_request will return true if any failfast bit
    is set. This allows drivers that do not support the multipath failfast
    bits to continue to fail on any failfast error like before. Drivers
    like scsi that are able to fail fast specific errors can check
    for the specific fail fast type. In the next patch I will convert
    scsi.
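
    For md multipath this means asking for fast failure only on
    transport errors, roughly (2.6.28-era bio flag names):

        /* device errors are retried on the same path; fail fast only
         * when the transport to this path is the problem */
        mp_bh->bio.bi_rw |= (1 << BIO_RW_FAILFAST_TRANSPORT);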

    Signed-off-by: Mike Christie
    Cc: Jens Axboe
    Signed-off-by: James Bottomley

    Mike Christie
     
  • A lot of cruft has gathered over the years. Time to remove it.

    Signed-off-by: NeilBrown

    NeilBrown
     

09 Oct, 2008

2 commits

  • Move stats related fields - stamp, in_flight, dkstats - from disk to
    part0 and unify stat handling such that...

    * part_stat_*() now updates part0 together if the specified partition
    is not part0. ie. part_stat_*() are now essentially all_stat_*().

    * {disk|all}_stat_*() are gone.

    * part_round_stats() is updated similarly. It handles part0 stats
    automatically and disk_round_stats() is killed.

    * part_{inc|dec}_in_flight() is implemented, which automatically updates
    part0 stats for parts other than part0.

    * disk_map_sector_rcu() is updated to return part0 if no part matches.
    Combined with the above changes, this makes NULL special case
    handling in callers unnecessary.

    * Separate stats show code paths for disk are collapsed into part
    stats show code paths.

    * Rename disk_stat_lock/unlock() to part_stat_lock/unlock()

    While at it, reposition stat handling macros a bit and add missing
    parentheses around macro parameters.
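
    The unified accessors are then used like this (a sketch following
    the pattern this patch introduces):

        cpu = part_stat_lock();
        part = disk_map_sector_rcu(disk, sector); /* part0 if no match */
        part_stat_inc(cpu, part, ios[rw]);        /* also updates part0 */
        part_stat_add(cpu, part, sectors[rw], nr_sectors);
        part_stat_unlock();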

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • There are two variants of stat functions - ones prefixed with double
    underbars which don't care about preemption and ones without which
    disable preemption before manipulating per-cpu counters. It's unclear
    whether the underbarred ones assume that preemption is disabled on
    entry as some callers don't do that.

    This patch unifies diskstats access by implementing disk_stat_lock()
    and disk_stat_unlock() which take care of both RCU (for partition
    access) and preemption (for per-cpu counter access). diskstats access
    should always be enclosed between the two functions. As such, there's
    no need for the versions which disable preemption. They're removed
    and the double-underbar ones are renamed to drop the underbars. As an
    extra argument is added, there's no danger of using the old version
    unconverted.

    disk_stat_lock() uses get_cpu() and returns the cpu index, and all
    diskstat functions which access per-cpu counters now have a @cpu
    argument to help RT.

    This change adds RCU or preemption operations at some places but also
    collapses several preemption ops into one at others. Overall, the
    performance difference should be negligible as all involved ops are
    very lightweight per-cpu ones.
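
    After this patch every diskstats update follows one pattern
    (sketch):

        cpu = disk_stat_lock();   /* get_cpu() + RCU for partitions */
        disk_stat_inc(cpu, disk, ios[rw]);
        disk_stat_add(cpu, disk, sectors[rw], nr_sectors);
        disk_stat_unlock();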

    Signed-off-by: Tejun Heo
    Cc: Peter Zijlstra
    Signed-off-by: Jens Axboe

    Tejun Heo
     

21 Jul, 2008

1 commit


28 Jun, 2008

2 commits


25 May, 2008

1 commit

  • When we get any IO error during a recovery (rebuilding a spare), we abort
    the recovery and restart it.

    For RAID6 (and multi-drive RAID1) it may not be best to restart at the
    beginning: when multiple failures can be tolerated, the recovery may be
    able to continue and re-doing all that has already been done doesn't make
    sense.

    We already have the infrastructure to record where a recovery is up to
    and restart from there, but it is not being used properly.
    This is because:
    - We sometimes abort with MD_RECOVERY_ERR rather than just MD_RECOVERY_INTR,
    which causes the recovery not to be checkpointed.
    - We remove spares and then re-add them, which loses important state
    information.

    The distinction between MD_RECOVERY_ERR and MD_RECOVERY_INTR really isn't
    needed. If there is an error, the relevant drive will be marked as
    Faulty, and that is enough to ensure correct handling of the error. So we
    first remove MD_RECOVERY_ERR, changing some of the uses of it to
    MD_RECOVERY_INTR.
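
    At the affected call sites the change is simply (illustrative):

        /* before: error conflated with interruption */
        set_bit(MD_RECOVERY_ERR, &mddev->recovery);

        /* after: the rdev's Faulty bit records the error; recovery
         * is merely interrupted and can be checkpointed */
        set_bit(MD_RECOVERY_INTR, &mddev->recovery);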

    Then we cause the attempt to remove a non-faulty device from an array to
    fail (unless recovery is impossible as the array is too degraded). Then
    when remove_and_add_spares attempts to remove the devices on which
    recovery can continue, it will fail, they will remain in place, and
    recovery will continue on them as desired.

    Issue: If we are halfway through rebuilding a spare and another drive
    fails, and a new spare is immediately available, do we want to:
    1/ complete the current rebuild, then go back and rebuild the new spare or
    2/ restart the rebuild from the start and rebuild both devices in
    parallel.

    Both options can be argued for. The code currently takes option 2 as
    a/ this requires least code change
    b/ this results in a minimally-degraded array in minimal time.

    Cc: "Eivind Sarto"
    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     

15 May, 2008

1 commit

  • As setting and clearing queue flags now requires that we hold a spinlock
    on the queue, and as blk_queue_stack_limits is called without that lock,
    get the lock inside blk_queue_stack_limits.

    For blk_queue_stack_limits to be able to find the right lock, each md
    personality needs to set q->queue_lock to point to the appropriate lock.
    Those personalities which didn't previously use a spin_lock use
    q->__queue_lock. So always initialise that lock when the queue is
    allocated.

    With this in place, setting/clearing of the QUEUE_FLAG_PLUGGED bit will no
    longer cause warnings as it will be clear that the proper lock is held.
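
    The per-personality assignment looks like this (sketch):

        /* a personality with its own lock, e.g. raid5 */
        mddev->queue->queue_lock = &conf->device_lock;

        /* personalities without one use the queue's built-in lock */
        mddev->queue->queue_lock = &mddev->queue->__queue_lock;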

    Thanks to Dan Williams for review and fixing the silly bugs.

    Signed-off-by: NeilBrown
    Cc: Dan Williams
    Cc: Jens Axboe
    Cc: Alistair John Strachan
    Cc: Nick Piggin
    Cc: "Rafael J. Wysocki"
    Cc: Jacek Luczak
    Cc: Prakash Punnoor
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Neil Brown
     

28 Apr, 2008

1 commit

  • MD drivers use one printk() call to print 2 log messages, and the second
    line may be prefixed by a TAB character. It may also output a trailing
    space before the newline. klogd (I think) turns the TAB character into
    the 2 characters '^I' when logging to a file. This looks ugly.

    Instead of a leading TAB to indicate continuation, prefix both output lines
    with 'raid:' or similar. Also remove any trailing space in the vicinity of
    the affected code and consistently end the sentences with a period.
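
    For example (illustrative, not quoted from the patch):

        /* before: TAB-prefixed continuation line, trailing space */
        printk(KERN_INFO "raid1: device sda1 operational\n"
                         "\t as mirror 0 \n");

        /* after: both lines self-prefixed, sentence ends cleanly */
        printk(KERN_INFO "raid1: device sda1 operational.\n");
        printk(KERN_INFO "raid1: as mirror 0.\n");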

    Signed-off-by: Nick Andrew
    Cc: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Andrew
     

07 Feb, 2008

1 commit