16 Dec, 2009

3 commits

  • Merge git://git.kernel.org/pub/scm/linux/kernel/git/agk/linux-2.6-dm: (80 commits)
    dm snapshot: use merge origin if snapshot invalid
    dm snapshot: report merge failure in status
    dm snapshot: merge consecutive chunks together
    dm snapshot: trigger exceptions in remaining snapshots during merge
    dm snapshot: delay merging a chunk until writes to it complete
    dm snapshot: queue writes to chunks being merged
    dm snapshot: add merging
    dm snapshot: permit only one merge at once
    dm snapshot: support barriers in snapshot merge target
    dm snapshot: avoid allocating exceptions in merge
    dm snapshot: rework writing to origin
    dm snapshot: add merge target
    dm exception store: add merge specific methods
    dm snapshot: create function for chunk_is_tracked wait
    dm snapshot: make bio optional in __origin_write
    dm mpath: reject messages when device is suspended
    dm: export suspended state to targets
    dm: rename dm_suspended to dm_suspended_md
    dm: swap target postsuspend call and setting suspended flag
    dm crypt: add plain64 iv
    ...

    Linus Torvalds
     
  • Signed-off-by: Joe Perches
    Cc: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joe Perches
     
  • Makes use of skip_spaces() defined in lib/string.c for removing leading
    spaces from strings all over the tree.

    It decreases lib.a code size by 47 bytes and reuses the function tree-wide:
       text   data   bss     dec    hex  filename
      64688    584   592   65864  10148  (TOTALS-BEFORE)
      64641    584   592   65817  10119  (TOTALS-AFTER)

    Also, while at it, wherever we see (*str && isspace(*str)) we can safely
    drop the first condition (*str): the second one (isspace(*str)) already
    evaluates to 0 whenever *str == 0, which makes the extra test redundant.
    In other words, a char that equals zero is never a space.

    Julia Lawall tried the semantic patch (http://coccinelle.lip6.fr) below,
    and found occurrences of this pattern in 3 more files:
    drivers/leds/led-class.c
    drivers/leds/ledtrig-timer.c
    drivers/video/output.c

    @@
    expression str;
    @@

    ( // ignore skip_spaces cases
    while (*str && isspace(*str)) { \(str++;\|++str;\) }
    |
    - *str &&
    isspace(*str)
    )

    Signed-off-by: André Goddard Rosa
    Cc: Julia Lawall
    Cc: Martin Schwidefsky
    Cc: Jeff Dike
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Richard Purdie
    Cc: Neil Brown
    Cc: Kyle McMartin
    Cc: Henrique de Moraes Holschuh
    Cc: David Howells
    Cc: Samuel Ortiz
    Cc: Patrick McHardy
    Cc: Takashi Iwai
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    André Goddard Rosa
     

14 Dec, 2009

27 commits

  • Enable external metadata arrays to manage rebuild checkpointing via an
    md/dev-XXX/recovery_start attribute, which reflects rdev->recovery_offset.

    Also update resync_start_store to allow 'none' to be written, for
    consistency.

    Signed-off-by: Dan Williams
    Signed-off-by: NeilBrown

    Dan Williams
     
  • Other walks of this list are either under rcu_read_lock() or the list
    mutation lock (mddev_lock()). This protects against the improbable case of a
    disk being removed from the array at the start of md_do_sync().
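    For reference, the locking pattern used by those other walks (an
    illustrative kernel-context fragment, not runnable standalone):

    mdk_rdev_t *rdev;

    rcu_read_lock();
    list_for_each_entry_rcu(rdev, &mddev->disks, same_set) {
            /* rdev cannot be freed while the RCU read section is held */
    }
    rcu_read_unlock();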

    Signed-off-by: Dan Williams

    Dan Williams
     
  • As v1.x metadata can record that a member of the array is
    not completely recovered, it makes sense to record that a
    spare has become a regular member of the array at the earliest
    opportunity.
    So remove the tests on "recovery_offset > 0" in super_1_sync
    as they really aren't needed, and schedule a metadata update
    immediately after adding spares to a degraded array.

    This means that if a crash happens immediately after a recovery
    starts, the new device will be included in the array and recovery will
    continue from wherever it was up to. Previously this didn't happen
    unless recovery was at least 1/16 of the way through.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • The RAID ioctls are only implemented in md.c, so the
    handling for them should also be moved there from
    fs/compat_ioctl.c.

    Signed-off-by: Arnd Bergmann
    Cc: Neil Brown
    Cc: Andre Noll
    Cc: linux-raid@vger.kernel.org
    Signed-off-by: NeilBrown

    Arnd Bergmann
     
  • Make it clear in the config message that MD_MULTIPATH is not under
    active development.

    Cc: Oren Held
    Signed-off-by: NeilBrown

    NeilBrown
     
  • Suggested by Oren Held

    Signed-off-by: NeilBrown

    NeilBrown
     
  • We've noticed severe, lasting performance degradation of our raid
    arrays when we have drives that yield large numbers of media errors.
    The raid10 module will queue each failed read for retry, and will also
    call fix_read_error() to perform the read recovery.
    Read recovery is performed while the array is frozen, so repeated
    recovery attempts can degrade the performance of the array for
    extended periods of time.

    With this patch I propose adding a per md device max number of
    corrected read attempts. Each rdev will maintain a count of
    read correction attempts in the rdev->read_errors field (not
    used currently for raid10). When we enter fix_read_error()
    we'll check to see when the last read error occurred, and
    divide the read error count by 2 for every hour since the
    last read error. If at that point our read error count
    exceeds the read error threshold, we'll fail the raid device.
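    A userspace sketch of that decay rule (the struct, names, and the capped
    shift are assumptions for illustration, not the actual raid10 code):

    #include <time.h>

    struct rdev_sketch {                  /* hypothetical stand-in */
            unsigned int read_errors;     /* corrected-read count */
            time_t       last_read_error; /* time of the previous error */
    };

    /* Returns nonzero if the device should be failed. */
    static int record_read_error(struct rdev_sketch *rdev,
                                 unsigned int max_read_errors)
    {
            time_t now = time(NULL);
            long hours = (long)((now - rdev->last_read_error) / 3600);

            /* Halve the stored count once per error-free hour (shift
             * capped so it stays defined for very long gaps). */
            if (hours > 0)
                    rdev->read_errors >>= (hours > 30 ? 30 : hours);
            rdev->last_read_error = now;

            return ++rdev->read_errors > max_read_errors;
    }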

    In addition, this patch adds sysfs nodes (get/set) for the per-md
    max_read_errors attribute and the rdev->read_errors attribute, and adds
    some printks to indicate when fix_read_error() fails to repair an rdev.

    For testing I used debugfs->fail_make_request to inject
    IO errors to the rdev while doing IO to the raid array.

    Signed-off-by: Robert Becker
    Signed-off-by: NeilBrown

    Robert Becker
     
  • When we get a read error on a device in a RAID10, and attempting to
    repair the error fails, print more useful messages about why it
    failed.

    Signed-off-by: Robert Becker
    Signed-off-by: NeilBrown

    Robert Becker
     
  • There is a sysfs file which allows bits in the write-intent
    bitmap to be explicitly set - indicating that the block is thought
    to be 'dirty'.
    When this happens we should really set recovery_cp backwards
    to include the block to reflect this dirtiness.

    In particular, a 'resync' process will refuse to start if
    recovery_cp is beyond the end of the array, so this is needed
    to allow a resync to be triggered.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • In this case, the metadata must not be in the same
    sector as the bitmap.
    md will not read/write any bitmap metadata. Config must be
    done via sysfs and when a recovery makes the array non-degraded
    again, writing 'true' to 'bitmap/can_clear' will allow bits in
    the bitmap to be cleared again.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • Setting daemon_lastrun really has nothing to do with reading
    the bitmap superblock; it just happens to be needed at the same time.
    bitmap_read_sb is about to become optional, so move that code out
    to after the call to bitmap_read_sb.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • A new attribute directory 'bitmap' in 'md' is created which
    contains files for configuring the bitmap.
    'location' identifies where the bitmap is: either 'none', 'file',
    or a sector offset from the metadata.
    Writing 'location' can create or remove a bitmap.
    Adding a 'file' bitmap this way is not yet supported.
    'chunksize' and 'time_base' must be set before 'location'
    can be set.

    'chunksize' can be set before creating a bitmap, but is
    currently always overridden by the bitmap superblock.

    'time_base' and 'backlog' can be updated at any time.
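    An illustrative userspace sketch of the configuration order (the md
    device path and the attribute values are assumptions for the example):

    #include <stdio.h>

    static int write_attr(const char *attr, const char *val)
    {
            char path[128];
            FILE *f;

            snprintf(path, sizeof(path),
                     "/sys/block/md0/md/bitmap/%s", attr);
            f = fopen(path, "w");
            if (!f)
                    return -1;
            fprintf(f, "%s\n", val);
            return fclose(f);
    }

    int main(void)
    {
            write_attr("chunksize", "65536");    /* must precede 'location' */
            write_attr("time_base", "5");        /* must precede 'location' */
            return write_attr("location", "+8"); /* sectors past metadata */
    }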

    Signed-off-by: NeilBrown
    Reviewed-by: Andre Noll

    NeilBrown
     
  • safe_delay_store can parse fixed point numbers (for fractions
    of a second). We will want to do that for another sysfs
    file soon, so factor out the code.
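    The parsing idea, as a userspace sketch (the function name and the
    millisecond scale are assumptions for illustration; the kernel code
    scales to jiffies):

    #include <stdio.h>
    #include <stdlib.h>

    /* Parse "seconds[.fraction]" into milliseconds. */
    static long parse_fixed_point_ms(const char *buf)
    {
            char *dot;
            long whole = strtol(buf, &dot, 10);
            long frac = 0, scale = 1000;

            if (*dot == '.') {
                    for (dot++; *dot >= '0' && *dot <= '9' && scale > 1;
                         dot++) {
                            scale /= 10;
                            frac = frac * 10 + (*dot - '0');
                    }
                    frac *= scale; /* pad short fractions: ".5" -> 500 */
            }
            return whole * 1000 + frac;
    }

    int main(void)
    {
            printf("%ld\n", parse_fixed_point_ms("1.25")); /* prints 1250 */
            return 0;
    }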

    Signed-off-by: NeilBrown

    NeilBrown
     
  • For md arrays where metadata is managed externally, the kernel does not
    know about a superblock, so the superblock offset is 0.
    If we want to have a write-intent bitmap near the end of the
    devices of such an array, we should support a sector_t-sized offset.
    The offset may need to be negative for when the bitmap is placed
    before the metadata, so use loff_t instead.

    Also add sanity check that bitmap does not overlap with data.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • As bitmap_create and bitmap_destroy already set thread->timeout
    as appropriate, there is no need to do it in raid10_quiesce.
    There is a possible need to wake the thread after the timeout
    has been set low, but it is better to do that where the timeout
    is actually set low, in bitmap_create.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • This removes a lot of multiplications by HZ.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • ... and into bitmap_info. These are all configuration parameters
    that need to be set before the bitmap is created.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • In preparation for making bitmap fields configurable via sysfs,
    start tidying up by making a single structure to contain the
    configuration fields.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • A 2-device raid5 array can now be converted to raid1.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • This will allow us to stop writeout to portions of the array
    while they are resynced by someone else - e.g. another node in
    a cluster.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • The post-barrier-flush is sent by md as soon as make_request on the
    barrier write completes. For raid5, the data might not be in the
    per-device queues yet. So for barrier requests, wait for any
    pre-reading to be done so that the request will be in the per-device
    queues.

    We use the 'preread_active' count to check that nothing is still in
    the preread phase, and delay the decrement of this count until after
    write requests have been submitted to the underlying devices.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • Previously barriers were only supported on RAID1. This is because
    other levels require synchronisation across all devices and so need
    a different approach.
    Here is that approach.

    When a barrier arrives, we send a zero-length barrier to every active
    device. When that completes - and if the original request was not
    empty - we submit the barrier request itself (with the barrier flag
    cleared) and then submit a fresh load of zero length barriers.

    The barrier request itself is asynchronous, but any subsequent
    request will block until the barrier completes.

    The reason for clearing the barrier flag is that a barrier request is
    allowed to fail. If we pass a non-empty barrier through a striping
    raid level it is conceivable that part of it could succeed and part
    could fail. That would be way too hard to deal with.
    So if the first run of zero-length barriers succeeds, we assume all is
    sufficiently well that we send the request and ignore errors in the
    second run of barriers.
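    The resulting sequence, as a userspace mock (every name below is a
    stand-in for illustration, not the real md API):

    #include <stdio.h>

    static int send_zero_length_barriers(void)
    {
            puts("zero-length barrier -> every active device");
            return 0; /* pretend every device completed it */
    }

    int main(void)
    {
            if (send_zero_length_barriers())
                    return 1; /* a barrier request is allowed to fail */

            /* Only reached if the first run succeeded. */
            puts("submit original request with barrier flag cleared");

            /* Errors in the second run are deliberately ignored. */
            send_zero_length_barriers();
            return 0;
    }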

    RAID5 needs extra care as write requests may not have been submitted
    to the underlying devices yet. So we flush the stripe cache before
    proceeding with the barrier.

    Note that the second set of zero-length barriers is submitted
    immediately after the original request is submitted.
    a personality finds mddev->barrier to be set during make_request,
    it should not return from make_request until the corresponding
    per-device request(s) have been queued.

    That will be done in later patches.

    Signed-off-by: NeilBrown
    Reviewed-by: Andre Noll

    NeilBrown
     
  • If a resync/recovery/check/repair is interrupted for some reason, it
    can be useful to know exactly where it got up to.
    So in that case, do not clear curr_resync_completed.
    Initialise it when starting a resync/recovery/... instead.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • When a 'check' or 'repair' finished we should clear resync_min
    so that a future check/repair will cover the whole array (by default).
    However if it is interrupted, we should update resync_min to
    where we got up to, so that when the check/repair continues it
    just does the remainder of the array.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • Signed-off-by: NeilBrown

    NeilBrown
     
  • qd_idx is previously declared and given exactly the same value!

    Signed-off-by: NeilBrown

    NeilBrown
     
  • A write intent bitmap can be removed from an array while the
    array is active.
    When this happens, all IO is suspended and flushed before the
    bitmap is removed.
    However it is possible that bitmap_daemon_work is still running to
    clear old bits from the bitmap. If it is, it can dereference the
    bitmap after it has been freed.

    So introduce a new mutex to protect bitmap_daemon_work and get it
    before destroying a bitmap.
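    The shape of the fix, sketched with pthreads (the kernel version uses
    its own mutex and locking primitives; all names here are illustrative):

    #include <pthread.h>
    #include <stdlib.h>

    static pthread_mutex_t bitmap_mutex = PTHREAD_MUTEX_INITIALIZER;
    static struct bitmap_sketch { int bits; } *bitmap;

    static void bitmap_daemon_work_sketch(void)
    {
            pthread_mutex_lock(&bitmap_mutex);
            if (bitmap) {
                    /* clear old bits; bitmap cannot be freed here */
            }
            pthread_mutex_unlock(&bitmap_mutex);
    }

    static void bitmap_destroy_sketch(void)
    {
            pthread_mutex_lock(&bitmap_mutex); /* waits for daemon work */
            free(bitmap);
            bitmap = NULL;
            pthread_mutex_unlock(&bitmap_mutex);
    }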

    This is suitable for any current -stable kernel.

    Signed-off-by: NeilBrown
    Cc: stable@kernel.org

    NeilBrown
     

11 Dec, 2009

10 commits

  • If the snapshot we are merging became invalid (e.g. it ran out of
    space) redirect all I/O directly to the origin device.

    Signed-off-by: Mikulas Patocka
    Reviewed-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon

    Mikulas Patocka
     
  • Set 'merge_failed' flag if a snapshot fails to merge. Update
    snapshot_status() to report "Merge failed" if 'merge_failed' is set.

    Signed-off-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon

    Mike Snitzer
     
  • s->store->type->prepare_merge returns the number of chunks that can be
    copied linearly working backwards from the returned chunk number.

    For example, if it returns 3 chunks with old_chunk == 10 and new_chunk
    == 20, then chunk 20 can be copied to 10, chunk 19 to 9 and 18 to 8.

    Until now kcopyd only copied one chunk at a time. This patch now copies
    the full set at once.
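    The example above in code form (the helper is hypothetical, and the
    real code hands kcopyd one multi-chunk region rather than looping):

    #include <stdio.h>

    static void copy_chunk(unsigned long from_cow, unsigned long to_origin)
    {
            printf("copy cow chunk %lu -> origin chunk %lu\n",
                   from_cow, to_origin);
    }

    int main(void)
    {
            unsigned long old_chunk = 10, new_chunk = 20;
            int i, count = 3; /* as returned by prepare_merge above */

            for (i = 0; i < count; i++)   /* 20->10, 19->9, 18->8 */
                    copy_chunk(new_chunk - i, old_chunk - i);
            return 0;
    }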

    Consequently, snapshot_merge_process() needs to delay the merging of all
    chunks if any have writes in progress, not just the first chunk in the
    region that is to be merged.

    snapshot-merge's performance is now comparable to the original
    snapshot-origin target.

    Signed-off-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon

    Mike Snitzer
     
  • When there is one merging snapshot and other non-merging snapshots,
    snapshot_merge_process() must make exceptions in the non-merging
    snapshots.

    Use a sequence count to resolve the race with I/O to chunks that are
    about to be merged. The count increases each time an exception
    reallocation finishes. Use wait_event() to wait until the count
    changes.
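    The shape of that pattern, sketched with pthreads (the kernel code uses
    wait_event()/wake_up(); all names here are illustrative):

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  done = PTHREAD_COND_INITIALIZER;
    static unsigned long exception_seq; /* bumped per finished reallocation */

    /* Merge side: block until at least one reallocation completes. */
    static void wait_for_exception_progress(void)
    {
            pthread_mutex_lock(&lock);
            unsigned long seen = exception_seq;
            while (exception_seq == seen)
                    pthread_cond_wait(&done, &lock);
            pthread_mutex_unlock(&lock);
    }

    /* Completion side: advance the count and wake any waiters. */
    static void exception_reallocation_done(void)
    {
            pthread_mutex_lock(&lock);
            exception_seq++;
            pthread_cond_broadcast(&done);
            pthread_mutex_unlock(&lock);
    }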

    Signed-off-by: Mikulas Patocka
    Signed-off-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon

    Mikulas Patocka
     
  • Track writes to chunks that are currently being merged and delay merging
    a chunk until all writes to that chunk finish.

    Signed-off-by: Mikulas Patocka
    Reviewed-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon

    Mikulas Patocka
     
  • While a set of chunks is being merged, any overlapping writes need to be
    queued.

    Signed-off-by: Mikulas Patocka
    Signed-off-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon

    Mikulas Patocka
     
  • Merging is started when the origin is resumed and stopped when the
    origin is suspended, when the merging snapshot is destroyed, or when
    errors are detected.

    Merging is not yet interlocked with writes: this will be handled in
    subsequent patches.

    The code relies on callbacks from a private kcopyd thread.

    Signed-off-by: Mikulas Patocka
    Signed-off-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon

    Mikulas Patocka
     
  • Merging more than one snapshot is not supported, so prevent
    this happening.

    Signed-off-by: Mikulas Patocka
    Signed-off-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon

    Mikulas Patocka
     
  • Sets num_flush_requests=2 to support flushing both the origin and cow
    devices used by the snapshot-merge target.

    Also, snapshot_ctr() now gets the origin device using FMODE_WRITE if the
    target is snapshot-merge (which writes to the origin device).

    Signed-off-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon

    Mike Snitzer
     
  • The snapshot-merge target should not allocate new exceptions because the
    intent is to merge all of its exceptions as quickly and safely as
    possible.

    This patch introduces the snapshot-merge mapping function and updates
    __origin_write() so that it doesn't allocate exceptions on any snapshots
    that are being merged.

    If a write request to a merging snapshot device is to be dispatched
    directly to the origin (because the chunk is not remapped or was already
    merged), snapshot_merge_map() must make exceptions in the other
    snapshots, so it calls do_origin().

    Signed-off-by: Mikulas Patocka
    Signed-off-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon

    Mikulas Patocka