28 Jul, 2011

11 commits

  • raid1d is too big with several deep branches.
    So separate them out into their own functions.

    Signed-off-by: NeilBrown
    Reviewed-by: Namhyung Kim

    NeilBrown
     
  • If we cannot read a block from anywhere during recovery, there is
    now a better approach than just giving up.
    We can record a bad block on each device and keep going - being
    careful not to clear the bad block when a write succeeds, as it
    might be a write of incorrect data.

    We have now reached the state where - for raid1 - we only call
    md_error if md_set_badblocks has failed.
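
    A minimal stand-alone sketch of that ordering, using hypothetical
    record_badblock()/fail_device() helpers in place of the kernel's
    md_set_badblocks()/md_error():

      #include <stdbool.h>
      #include <stdio.h>

      /* Hypothetical stand-in for the bad-block logger; pretend the
       * per-device table is full so logging fails. */
      static bool record_badblock(int dev, long long sector, int sectors)
      {
          (void)dev; (void)sector; (void)sectors;
          return false;
      }

      static void fail_device(int dev)
      {
          printf("device %d marked Faulty\n", dev);
      }

      /* A recovery read failed everywhere: record a bad block on each
       * device and keep going; only fail a device whose table cannot
       * take the record. */
      static void handle_unreadable_block(int ndevs, long long sector, int sectors)
      {
          for (int dev = 0; dev < ndevs; dev++)
              if (!record_badblock(dev, sector, sectors))
                  fail_device(dev);
      }

      int main(void)
      {
          handle_unreadable_block(2, 1024, 8);
          return 0;
      }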

    Signed-off-by: NeilBrown
    Reviewed-by: Namhyung Kim

    NeilBrown
     
  • If we find a bad block while writing as part of resync/recovery we
    need to report that back to raid1d which must record the bad block,
    or fail the device.

    Similarly when fixing a read error, a further error should just
    record a bad block if possible rather than failing the device.

    Signed-off-by: NeilBrown
    Reviewed-by: Namhyung Kim

    NeilBrown
     
  • When we get a write error (in the data area, not in metadata),
    update the badblock log rather than failing the whole device.

    As the write may well be many blocks, we try writing each
    block individually and only log the ones which fail.
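
    A stand-alone sketch of the per-block retry, with hypothetical
    write_block()/log_badblock() helpers standing in for the real I/O
    and bad-block-log paths:

      #include <stdbool.h>
      #include <stdio.h>

      #define BLOCK_SECTORS 8            /* assume one block = 8 sectors */

      /* Hypothetical single-block write; pretend one block fails. */
      static bool write_block(long long sector)
      {
          return sector != 2048;
      }

      static void log_badblock(long long sector, int sectors)
      {
          printf("bad block logged at %lld+%d\n", sector, sectors);
      }

      /* A large failed write is retried one block at a time; only the
       * blocks that still fail end up in the bad-block log. */
      static void retry_write(long long start, int nr_sectors)
      {
          for (long long s = start; s < start + nr_sectors; s += BLOCK_SECTORS)
              if (!write_block(s))
                  log_badblock(s, BLOCK_SECTORS);
      }

      int main(void)
      {
          retry_write(2040, 32);
          return 0;
      }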

    Signed-off-by: NeilBrown
    Reviewed-by: Namhyung Kim

    NeilBrown
     
  • When performing write-behind we allocate pages to store the data
    during write.
    Previously we just kept a list of pages. Now we keep a list of
    bi_vecs, which include offset and size.
    This means that the r1bio has complete information to create a new
    bio which will be needed for retrying after write errors.
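
    Roughly, the information now kept per segment looks like this
    (a type-only sketch; the real list uses the kernel's struct
    bio_vec):

      /* One data segment of the write-behind copy: the page plus the
       * offset and length of the data within it. */
      struct behind_vec {
          void         *bv_page;         /* stand-in for struct page * */
          unsigned int  bv_len;
          unsigned int  bv_offset;
      };

      /* With offset and size recorded alongside each page, the r1bio
       * has everything needed to build a fresh bio for a retry after
       * a write error. */
      struct behind_data {
          int                nr_vecs;
          struct behind_vec *vecs;
      };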

    Signed-off-by: NeilBrown
    Reviewed-by: Namhyung Kim

    NeilBrown
     
  • If we succeed in writing to a block that was recorded as
    being bad, we clear the bad-block record.

    This requires some delayed handling as the bad-block-list update has
    to happen in process-context.
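
    A simplified sketch of that deferral, using a made-up r1bio stub
    and a single list drained later in process context (conceptually,
    by the md thread):

      #include <stdbool.h>
      #include <stdio.h>

      struct r1bio_stub {
          long long          sector;
          int                sectors;
          bool               made_good;   /* write over a bad block succeeded */
          struct r1bio_stub *next;
      };

      static struct r1bio_stub *deferred;  /* drained in process context */

      /* Completion path (may run in interrupt context): only queue. */
      static void write_completion(struct r1bio_stub *r1, bool success)
      {
          if (success) {
              r1->made_good = true;
              r1->next = deferred;
              deferred = r1;
          }
      }

      /* Process context: do the actual bad-block-list update. */
      static void drain_deferred(void)
      {
          while (deferred) {
              struct r1bio_stub *r1 = deferred;
              deferred = r1->next;
              printf("clear bad-block record %lld+%d\n", r1->sector, r1->sectors);
          }
      }

      int main(void)
      {
          struct r1bio_stub r1 = { .sector = 1000, .sectors = 8 };
          write_completion(&r1, true);
          drain_deferred();
          return 0;
      }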

    Signed-off-by: NeilBrown
    Reviewed-by: Namhyung Kim

    NeilBrown
     
  • If we have seen any write error on a drive, then don't write to
    any known-bad blocks on that drive.
    If necessary, we divide the write request up into pieces just
    like we do for reads, so each piece is either all written or
    all not written to any given drive.
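
    A stand-alone sketch of the splitting, assuming one hard-coded
    known-bad range and a hypothetical next_piece() helper:

      #include <stdbool.h>
      #include <stdio.h>

      /* For [sector, sector+len) on this drive, return how many leading
       * sectors form one piece and whether that piece may be written
       * (i.e. lies outside the known-bad range) or must be skipped. */
      static int next_piece(int drive, long long sector, int len, bool *writable)
      {
          long long bad_start = 1000, bad_end = 1008;   /* one bad range */

          (void)drive;
          if (sector >= bad_end || sector + len <= bad_start) {
              *writable = true;
              return len;
          }
          if (sector < bad_start) {
              *writable = true;
              return (int)(bad_start - sector);
          }
          *writable = false;
          return (int)(bad_end - sector) < len ? (int)(bad_end - sector) : len;
      }

      /* Issue the write in pieces so each piece is either fully written
       * to the drive or not written to it at all. */
      static void issue_write(int drive, long long sector, int len)
      {
          while (len > 0) {
              bool writable;
              int piece = next_piece(drive, sector, len, &writable);

              printf("drive %d: %s %lld+%d\n", drive,
                     writable ? "write" : "skip (known bad)", sector, piece);
              sector += piece;
              len -= piece;
          }
      }

      int main(void)
      {
          issue_write(0, 996, 16);
          return 0;
      }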

    Signed-off-by: NeilBrown
    Reviewed-by: Namhyung Kim

    NeilBrown
     
  • It is only safe to choose not to write to a bad block if that bad
    block is safely recorded in metadata - i.e. if it has been
    'acknowledged'.

    If it hasn't we need to wait for the acknowledgement.

    We support that using rdev->blocked wait and
    md_wait_for_blocked_rdev by introducing a new device flag
    'BlockedBadBlocks'.

    This flag is only advisory.
    It is cleared whenever we acknowledge a bad block, so that a waiter
    can re-check the particular bad blocks that it is interested in.

    It should be set by a caller when they find they need to wait.
    This (set after test) is inherently racy, but as
    md_wait_for_blocked_rdev already has a timeout, losing the race will
    have minimal impact.

    When we clear "Blocked" we also clear "BlockedBadBlocks" in case it
    was set incorrectly (see the race above).

    We also modify the way we manage 'Blocked' to fit better with the new
    handling of 'BlockedBadBlocks' and to make it consistent between
    externally managed and internally managed metadata. This requires
    that each raidXd loop checks if the metadata needs to be written and
    triggers a write (md_check_recovery) if needed. Otherwise a queued
    write request might cause raidXd to wait for the metadata to be
    written, but only that thread can write it.

    Before writing metadata, we set FaultRecorded for all devices that
    are Faulty, then after writing the metadata we clear Blocked for any
    device for which the Fault was certainly Recorded.

    The 'faulty' device flag now appears in sysfs if the device is faulty
    *or* it has unacknowledged bad blocks. So user-space which does not
    understand bad blocks can continue to function correctly.
    User-space which does understand bad blocks should not assume a
    device is faulty until it has seen the 'faulty' flag and then seen
    that the list of unacknowledged bad blocks is empty.
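
    A stripped-down sketch of the set-after-test wait described above
    (plain flags only; the real code uses rdev flag bits, a wait queue
    and md_wait_for_blocked_rdev with its timeout):

      #include <stdbool.h>

      /* Hypothetical per-device state standing in for rdev flags. */
      struct dev_state {
          bool has_unacked_badblock;   /* bad block not yet in metadata */
          bool blocked_badblocks;      /* advisory "writers should wait" */
      };

      static void wait_with_timeout(struct dev_state *d)
      {
          (void)d;                     /* sleep/wake-up elided */
      }

      /* Writer side: test, then set, then wait.  The test and set are
       * not atomic, but the wait has a timeout, so losing the race only
       * costs a short delay. */
      static void maybe_wait_for_ack(struct dev_state *d)
      {
          while (d->has_unacked_badblock) {
              d->blocked_badblocks = true;
              wait_with_timeout(d);    /* woken when the flag is cleared */
          }
      }

      /* Metadata-writer side: once the bad blocks are safely recorded,
       * clear the advisory flag so waiters wake up and re-check. */
      static void acknowledge_badblocks(struct dev_state *d)
      {
          d->has_unacked_badblock = false;
          d->blocked_badblocks = false;
      }

      int main(void)
      {
          struct dev_state d = { .has_unacked_badblock = true };

          acknowledge_badblocks(&d);   /* normally done by the md thread */
          maybe_wait_for_ack(&d);      /* returns at once: already acked */
          return 0;
      }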

    Signed-off-by: NeilBrown

    NeilBrown
     
  • When performing resync/etc, keep the size of the request
    small enough that it doesn't overlap any known bad blocks.
    Devices with badblocks at the start of the request are completely
    excluded.
    If there is nowhere to read from due to bad blocks, record
    a bad block on each target device.

    Now that we never read from known-bad-blocks we can allow devices with
    known-bad-blocks into a RAID1.
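
    A small sketch of sizing the resync window, assuming a per-device
    count of good sectors at the start of the window (numbers are made
    up):

      #include <stdio.h>

      #define NDEVS 3

      /* Good sectors at the start of the window on each device;
       * 0 means that device has a bad block right at the start. */
      static int leading_good[NDEVS] = { 64, 0, 16 };

      int main(void)
      {
          int window = 128;           /* requested resync length */
          int usable = 0;

          for (int d = 0; d < NDEVS; d++) {
              if (leading_good[d] == 0)
                  continue;           /* bad at the start: exclude */
              if (usable == 0 || leading_good[d] < usable)
                  usable = leading_good[d];
          }

          if (usable == 0)
              printf("no readable source: record bad block on each target\n");
          else
              printf("resync %d sectors (capped from %d)\n",
                     usable < window ? usable : window, window);
          return 0;
      }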

    Signed-off-by: NeilBrown

    NeilBrown
     
  • Now that we have a bad block list, we should not read from those
    blocks.
    There are several main parts to this:
    1/ read_balance needs to check for bad blocks, and return not only
    the chosen device, but also how many good blocks are available
    there.
    2/ fix_read_error needs to avoid trying to read from bad blocks.
    3/ read submission must be ready to issue multiple reads to
    different devices as different bad blocks on different devices
    could mean that a single large read cannot be served by any one
    device, but can still be served by the array.
    This requires keeping count of the number of outstanding requests
    per bio. This count is stored in 'bi_phys_segments' (sketched
    below).
    4/ retrying a read needs to also be ready to submit a smaller read
    and queue another request for the rest.

    This does not yet handle bad blocks when reading to perform resync,
    recovery, or check.

    'md_trim_bio' will also be used for RAID10, so put it in md.c and
    export it.
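
    A minimal sketch of the per-bio accounting mentioned in 3/ (names
    are illustrative; the kernel reuses the bio's bi_phys_segments
    field as this counter):

      #include <stdio.h>

      /* One original read that may be served by several smaller reads
       * issued to different devices. */
      struct master_read {
          int outstanding;            /* sub-reads not yet completed */
          int error;
      };

      static void sub_read_done(struct master_read *m, int err)
      {
          if (err)
              m->error = err;
          if (--m->outstanding == 0)
              printf("original read complete, error=%d\n", m->error);
      }

      int main(void)
      {
          /* One piece goes to device A, the remainder is queued as a
           * second piece for device B: two sub-reads in flight. */
          struct master_read m = { .outstanding = 2, .error = 0 };

          sub_read_done(&m, 0);       /* first piece returns */
          sub_read_done(&m, 0);       /* second piece completes the read */
          return 0;
      }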

    Signed-off-by: NeilBrown

    NeilBrown
     
    As no personality understands bad block lists yet, we must
    reject any device that is known to contain bad blocks.
    As the personalities get taught, these tests can be removed.

    This only applies to raid1/raid5/raid10.
    For linear/raid0/multipath/faulty the whole concept of bad blocks
    doesn't mean anything so there is no point adding the checks.

    Signed-off-by: NeilBrown
    Reviewed-by: Namhyung Kim

    NeilBrown
     

27 Jul, 2011

5 commits

  • If device-mapper creates a RAID1 array that includes devices to
    be rebuilt, it will deref a NULL pointer when finished because
    sysfs is not used by device-mapper instantiated RAID devices.

    Signed-off-by: Jonathan Brassow
    Signed-off-by: NeilBrown

    Jonathan Brassow
     
    Read errors are considered corrected if the write-back and re-read
    cycle finishes without further problems. Thus moving the
    rdev->corrected_errors counting after the re-read looks more
    reasonable IMHO. Also included are a couple of whitespace fixes on
    sync_page_io().

    Signed-off-by: Namhyung Kim
    Signed-off-by: NeilBrown

    Namhyung Kim
     
    If we hit a read error while recovering a mirror, we want to abort the
    recovery without necessarily failing the disk - as having a disk with
    a read error is better than not having an array at all.

    Currently this is managed with a per-array flag "recovery_disabled"
    and is only implemented for RAID1. For RAID10 we will need finer
    grained control as we might want to disable recovery for individual
    devices separately.

    So push more of the decision making into the personality.
    'recovery_disabled' is now a 'cookie' which is copied when the
    personality wants to disable recovery, and is changed when a device
    is added to the array, as this is used as a trigger to 'try recovery
    again'.

    This will allow RAID10 to get the control that it needs.
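
    A tiny sketch of the cookie scheme (field names are illustrative;
    the real values live in the mddev and the personality's conf):

      #include <stdbool.h>
      #include <stdio.h>

      struct array_state {
          int array_cookie;           /* bumped when a device is added */
          int conf_cookie;            /* personality's copy, if any */
      };

      /* Personality hit an unrecoverable read error: copy the cookie
       * to disable further recovery attempts for now. */
      static void disable_recovery(struct array_state *a)
      {
          a->conf_cookie = a->array_cookie;
      }

      /* Adding a device changes the cookie, so recovery becomes worth
       * trying again. */
      static void device_added(struct array_state *a)
      {
          a->array_cookie++;
      }

      static bool recovery_allowed(const struct array_state *a)
      {
          return a->conf_cookie != a->array_cookie;
      }

      int main(void)
      {
          struct array_state a = { .array_cookie = 1, .conf_cookie = 0 };

          disable_recovery(&a);
          printf("allowed=%d\n", recovery_allowed(&a));  /* 0: disabled */
          device_added(&a);
          printf("allowed=%d\n", recovery_allowed(&a));  /* 1: try again */
          return 0;
      }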

    Signed-off-by: NeilBrown

    NeilBrown
     
    There are places where sysfs links to rdev are handled in the
    same way. Add helper functions to consolidate them.

    Signed-off-by: Namhyung Kim
    Signed-off-by: NeilBrown

    Namhyung Kim
     
  • As per printk_ratelimit comment, it should not be used.

    Signed-off-by: Christian Dietrich
    Signed-off-by: NeilBrown

    Christian Dietrich
     

11 May, 2011

6 commits

  • The sysfs attribute 'resync_start' (known internally as recovery_cp),
    records where a resync is up to. A value of 0 means the array is
    not known to be in-sync at all. A value of MaxSector means the array
    is believed to be fully in-sync.

    When the size of member devices of an array (RAID1,RAID4/5/6) is
    increased, the array can be increased to match. This process sets
    resync_start to the old end-of-device offset so that the new part of
    the array gets resynced.

    However with RAID1 (and RAID6) a resync is not technically necessary
    and may be undesirable. So it would be good if the implied resync
    after the array is resized could be avoided.

    So: change 'resync_start' so the value can be changed while the array
    is active, and as a precaution only allow it to be changed while
    resync/recovery is 'frozen'. Changing it once resync has started is
    not going to be useful anyway.

    This allows the array to be resized without a resync by:
    write 'frozen' to 'sync_action'
    write new size to 'component_size' (this will set resync_start)
    write 'none' to 'resync_start'
    write 'idle' to 'sync_action'.
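
    A stand-alone sketch of those four writes from user-space (the md
    device name, the size value and its units are placeholders; paths
    assume the usual /sys/block/<md>/md layout):

      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>

      /* Write a string to one md sysfs attribute of md0. */
      static int md_attr_write(const char *attr, const char *val)
      {
          char path[128];
          int fd;
          ssize_t n;

          snprintf(path, sizeof(path), "/sys/block/md0/md/%s", attr);
          fd = open(path, O_WRONLY);
          if (fd < 0)
              return -1;
          n = write(fd, val, strlen(val));
          close(fd);
          return n == (ssize_t)strlen(val) ? 0 : -1;
      }

      int main(void)
      {
          /* Grow the array without resyncing the newly added region. */
          md_attr_write("sync_action", "frozen");
          md_attr_write("component_size", "1048576");   /* placeholder size */
          md_attr_write("resync_start", "none");
          md_attr_write("sync_action", "idle");
          return 0;
      }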

    Also slightly improve some tests on recovery_cp when resizing
    raid1/raid5. Now that an arbitrary value can be set, we should be
    more careful in our tests.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • The current handling and freeing of these pages is a bit fragile.
    We only keep the list of allocated pages in each bio, so we need to
    still have a valid bio when freeing the pages, which is a bit clumsy.

    So simply store the allocated page list in the r1_bio so it can easily
    be found and freed when we are finished with the r1_bio.

    Signed-off-by: NeilBrown

    NeilBrown
     
    If we get a read error during resync/recovery we currently repeat with
    single-page reads to find out just where the error is, and possibly
    read each page from a different device.

    With check/repair we don't currently do that, we just fail.
    However it is possible that while all devices fail on the large 64K
    read, we might be able to satisfy each 4K from one device or another.

    So call fix_sync_read_error before process_checks to maximise the
    chance of finding good data and writing it out to the devices with
    read errors.

    For this to work, we need to set the 'uptodate' flags properly after
    fix_sync_read_error has succeeded.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • These changes are mostly cosmetic:

    1/ change mddev->raid_disks to conf->raid_disks because the latter is
    technically safer, though in current practice it doesn't matter in
    this particular context.
    2/ Rearrange two for / if loops to have an early 'continue' so the
    body of the 'if' doesn't need to be indented so much.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • sync_request_write is too big and too deep.
    So split out two self-contained bits of functionality into separate
    functions.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • read_balance has two loops which both look for a 'best'
    device based on slightly different criteria.
    This is clumsy and makes it hard to add extra criteria.

    So replace it all with a single loop that combines everything.

    Signed-off-by: NeilBrown

    NeilBrown
     

17 Mar, 2011

1 commit

  • MD and DM create a new bio_set for every metadevice. Each bio_set has an
    integrity mempool attached regardless of whether the metadevice is
    capable of passing integrity metadata. This is a waste of memory.

    Instead we defer the allocation decision to MD and DM since we know at
    metadevice creation time whether integrity passthrough is needed or not.

    Automatic integrity mempool allocation can then be removed from
    bioset_create() and we make an explicit integrity allocation for the
    fs_bio_set.

    Signed-off-by: Martin K. Petersen
    Reported-by: Zdenek Kabelac
    Acked-by: Mike Snitzer
    Signed-off-by: Jens Axboe

    Martin K. Petersen
     

21 Feb, 2011

1 commit

  • blk_throtl_exit assumes that ->queue_lock still exists,
    so make sure that it does.
    To do this, we stop redirecting ->queue_lock to conf->device_lock
    and leave it pointing where it is initialised - __queue_lock.

    As the blk_plug functions check the ->queue_lock is held, we now
    take that spin_lock explicitly around the plug functions. We don't
    need the locking, just the warning removal.

    This is needed for any kernel with the blk_throtl code, which is
    2.6.37 and later.

    Cc: stable@kernel.org
    Signed-off-by: NeilBrown

    NeilBrown
     

24 Nov, 2010

1 commit

  • Commit 4044ba58dd15cb01797c4fd034f39ef4a75f7cc3 supposedly fixed a
    problem where if a raid1 with just one good device gets a read-error
    during recovery, the recovery would abort and immediately restart in
    an infinite loop.

    However it depended on raid1_remove_disk removing the spare device
    from the array. But that does not happen in this case. So add a test
    so that in the 'recovery_disabled' case, the device will be removed.

    This is suitable for any kernel since 2.6.29, which is when
    recovery_disabled was introduced.

    Cc: stable@kernel.org
    Reported-by: Sebastian Färber
    Signed-off-by: NeilBrown

    NeilBrown
     

28 Oct, 2010

5 commits

  • bio_clone and bio_alloc allocate from a common bio pool.
    If an md device is stacked with other devices that use this pool, or under
    something like swap which uses the pool, then the multiple calls on
    the pool can cause deadlocks.

    So allocate a local bio pool for each md array and use that rather
    than the common pool.

    This pool is used both for regular IO and metadata updates.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • Currently sync_page_io takes a 'bdev'.
    Every caller passes 'rdev->bdev'.
    We will soon want another field out of the rdev in sync_page_io,
    so just pass the rdev instead of the bdev.

    Signed-off-by: NeilBrown

    NeilBrown
     
    Though this mem alloc is GFP_NOIO and so will not deadlock, it seems
    better to do the allocation before 'raise_barrier' which stops any IO
    requests while the resync proceeds.

    raid10 always uses this order, so it is at least consistent to do the
    same in raid1.

    Signed-off-by: NeilBrown

    NeilBrown
     
    bio_alloc can never fail (as it uses a mempool) but can block
    indefinitely, especially if the caller is holding a reference to a
    previously allocated bio.

    So these two places, which both handle failure and hold multiple
    bios, should not use bio_alloc; they should use bio_kmalloc.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • It is not safe to allocate from a mempool while holding an item
    previously allocated from that mempool as that can deadlock when the
    mempool is close to exhaustion.

    So don't use a bio list to collect the bios to write to multiple
    devices in raid1 and raid10.
    Instead queue each bio as it becomes available so an unplug will
    activate all previously allocated bios and so a new bio has a chance
    of being allocated.

    This means we must set the 'remaining' count to '1' before submitting
    any requests, then when all are submitted, decrement 'remaining' and
    possibly handle the write completion at that point.
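
    A minimal sketch of that counting trick (a plain int here; the
    kernel uses an atomic count):

      #include <stdio.h>

      struct write_request {
          int remaining;   /* completions still expected, plus a bias of
                            * one held while submission is in progress */
      };

      static void write_done(struct write_request *r)
      {
          if (--r->remaining == 0)
              printf("all writes finished\n");
      }

      int main(void)
      {
          struct write_request r = { .remaining = 1 };   /* the bias */

          for (int dev = 0; dev < 3; dev++) {
              r.remaining++;          /* one per submitted bio */
              write_done(&r);         /* pretend it completes at once */
          }

          write_done(&r);             /* drop the bias; may finish here */
          return 0;
      }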

    Reported-by: Torsten Kaiser
    Tested-by: Torsten Kaiser
    Signed-off-by: NeilBrown

    NeilBrown