07 Nov, 2011

1 commit

  • * 'modsplit-Oct31_2011' of git://git.kernel.org/pub/scm/linux/kernel/git/paulg/linux: (230 commits)
    Revert "tracing: Include module.h in define_trace.h"
    irq: don't put module.h into irq.h for tracking irqgen modules.
    bluetooth: macroize two small inlines to avoid module.h
    ip_vs.h: fix implicit use of module_get/module_put from module.h
    nf_conntrack.h: fix up fallout from implicit moduleparam.h presence
    include: replace linux/module.h with "struct module" wherever possible
    include: convert various register fcns to macros to avoid include chaining
    crypto.h: remove unused crypto_tfm_alg_modname() inline
    uwb.h: fix implicit use of asm/page.h for PAGE_SIZE
    pm_runtime.h: explicitly requires notifier.h
    linux/dmaengine.h: fix implicit use of bitmap.h and asm/page.h
    miscdevice.h: fix up implicit use of lists and types
    stop_machine.h: fix implicit use of smp.h for smp_processor_id
    of: fix implicit use of errno.h in include/linux/of.h
    of_platform.h: delete needless include
    acpi: remove module.h include from platform/aclinux.h
    miscdevice.h: delete unnecessary inclusion of module.h
    device_cgroup.h: delete needless include
    net: sch_generic remove redundant use of <linux/module.h>
    net: inet_timewait_sock doesnt need <linux/module.h>
    ...

    Fix up trivial conflicts (other header files, and removal of the ab3550 mfd driver) in
    - drivers/media/dvb/frontends/dibx000_common.c
    - drivers/media/video/{mt9m111.c,ov6650.c}
    - drivers/mfd/ab3550-core.c
    - include/linux/dmaengine.h

    Linus Torvalds
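    A recurring technique in this series (e.g. "include: replace
    linux/module.h with "struct module" wherever possible") is cutting
    include chaining by forward-declaring types that a header only ever
    uses as pointers. A minimal sketch of the pattern, using a
    hypothetical header rather than code from the series:

        /* Before: the header pulled in all of <linux/module.h> just to
         * name a pointer type, dragging its dependencies into every user:
         *     #include <linux/module.h>
         *
         * After: a forward declaration is enough for pointer members and
         * parameters; only the .c file that actually dereferences
         * struct module needs the real header. */
        struct module;

        struct example_driver_ops {          /* hypothetical structure */
                struct module *owner;        /* pointer only: no definition needed */
                int (*probe)(void *dev);
        };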
     

05 Nov, 2011

1 commit

  • * 'for-3.2/core' of git://git.kernel.dk/linux-block: (29 commits)
    block: don't call blk_drain_queue() if elevator is not up
    blk-throttle: use queue_is_locked() instead of lockdep_is_held()
    blk-throttle: Take blkcg->lock while traversing blkcg->policy_list
    blk-throttle: Free up policy node associated with deleted rule
    block: warn if tag is greater than real_max_depth.
    block: make gendisk hold a reference to its queue
    blk-flush: move the queue kick into
    blk-flush: fix invalid BUG_ON in blk_insert_flush
    block: Remove the control of complete cpu from bio.
    block: fix a typo in the blk-cgroup.h file
    block: initialize the bounce pool if high memory may be added later
    block: fix request_queue lifetime handling by making blk_cleanup_queue() properly shutdown
    block: drop @tsk from attempt_plug_merge() and explain sync rules
    block: make get_request[_wait]() fail if queue is dead
    block: reorganize throtl_get_tg() and blk_throtl_bio()
    block: reorganize queue draining
    block: drop unnecessary blk_get/put_queue() in scsi_cmd_ioctl() and blk_get_tg()
    block: pass around REQ_* flags instead of broken down booleans during request alloc/free
    block: move blk_throtl prototypes to block/blk.h
    block: fix genhd refcounting in blkio_policy_parse_and_set()
    ...

    Fix up trivial conflicts due to "mddev_t" -> "struct mddev" conversion
    and making the request functions be of type "void" instead of "int" in
    - drivers/md/{faulty.c,linear.c,md.c,md.h,multipath.c,raid0.c,raid1.c,raid10.c,raid5.c}
    - drivers/staging/zram/zram_drv.c

    Linus Torvalds
     

01 Nov, 2011

1 commit


31 Oct, 2011

1 commit

  • This is a fairly serious bug in RAID10.

    When a RAID10 array is degraded and a hot-spare is activated, the
    spare does not take up the empty slot, but rather replaces the first
    working device.
    This is likely to make the array non-functional. It would normally
    be possible to recover the data, but that would need care and is not
    guaranteed.

    This bug was introduced in commit
    2bb77736ae5dca0a189829fbb7379d43364a9dac
    which first appeared in 3.1.

    Cc: stable@kernel.org
    Signed-off-by: NeilBrown

    NeilBrown
     

26 Oct, 2011

1 commit

  • In 3.0 we changed the way recovery_disabled was handled so that instead
    of testing against zero, we test an mddev-> value against a conf->
    value.
    Two problems:
    1/ one place in raid1 was missed and still sets it to '1'.
    2/ We didn't explicitly set the conf-> value at array creation
    time.
    It defaulted to '0' just like the mddev value does so they
    could appear equal and thus disable recovery.
    This did not affect normal 'md' as it calls bind_rdev_to_array
    which changes the mddev value. However the dmraid interface
    doesn't call this and so doesn't change ->recovery_disabled; so at
    array start all recovery is incorrectly disabled.

    So initialise the 'conf' value to one less than the mddev value, so
    they will only be the same when explicitly set that way.

    Reported-by: Jonathan Brassow
    Signed-off-by: NeilBrown

    NeilBrown
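    A minimal sketch of the initialisation described above, assuming the
    raid1 field names (conf->recovery_disabled and
    mddev->recovery_disabled); the surrounding setup code is omitted:

        /* At array creation time, make the per-array value differ from
         * the mddev value, even when the latter is still at its default
         * of 0 (as it is on the dmraid path, which never calls
         * bind_rdev_to_array).  The two then only compare equal once the
         * raid1 code explicitly copies the mddev value into the conf
         * value. */
        conf->recovery_disabled = mddev->recovery_disabled - 1;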
     

19 Oct, 2011

1 commit


11 Oct, 2011

8 commits


21 Sep, 2011

1 commit

  • Two related problems:

    1/ some error paths call "md_unregister_thread(mddev->thread)"
    without subsequently clearing ->thread. A subsequent call
    to mddev_unlock will try to wake the thread, and crash.

    2/ Most calls to md_wakeup_thread are protected against the thread
    disappearing either by:
    - holding the ->mutex
    - having an active request, so something else must be keeping
    the array active.
    However mddev_unlock calls md_wakeup_thread after dropping the
    mutex and without any certainty of an active request, so the
    ->thread could theoretically disappear.
    So we need a spinlock to provide some protection.

    So change md_unregister_thread to take a pointer to the thread
    pointer, and ensure that it always does the required locking and
    clears the pointer properly.

    Reported-by: "Moshe Melnikov"
    Signed-off-by: NeilBrown
    cc: stable@kernel.org

    NeilBrown
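    A sketch of the reworked interface, assuming the md_thread
    bookkeeping and the pers_lock spinlock used elsewhere in md.c;
    details are approximate:

        void md_unregister_thread(struct md_thread **threadp)
        {
                struct md_thread *thread = *threadp;

                if (!thread)
                        return;
                /* Clear the caller's pointer under a spinlock, so that
                 * mddev_unlock (which may call md_wakeup_thread after
                 * dropping the mutex) can never see a stale thread. */
                spin_lock(&pers_lock);
                *threadp = NULL;
                spin_unlock(&pers_lock);

                kthread_stop(thread->tsk);
                kfree(thread);
        }

    Callers then pass &mddev->thread (or &conf->thread) rather than the
    bare pointer, so clearing the pointer cannot be forgotten.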
     

12 Sep, 2011

1 commit

  • There is very little benefit in letting a ->make_request
    instance update the bio's device and sector and loop around it in
    __generic_make_request when we can achieve the same by calling
    generic_make_request from the driver and letting the loop in
    generic_make_request handle it.

    Note that various drivers got the return value from ->make_request and
    returned non-zero values for errors.

    Signed-off-by: Christoph Hellwig
    Acked-by: NeilBrown
    Signed-off-by: Jens Axboe

    Christoph Hellwig
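    A sketch of what this means for a stacking driver, using a
    hypothetical example_dev and the 3.2-era bio fields (bi_bdev,
    bi_sector): instead of remapping the bio and returning non-zero so
    that __generic_make_request loops, the driver resubmits the remapped
    bio itself and the ->make_request instance returns nothing.

        static void example_make_request(struct request_queue *q, struct bio *bio)
        {
                struct example_dev *dev = q->queuedata;  /* hypothetical per-device data */

                bio->bi_bdev = dev->lower_bdev;          /* redirect to the lower device */
                bio->bi_sector += dev->start_offset;     /* remap the starting sector */
                generic_make_request(bio);               /* the normal submission loop handles it */
        }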
     

10 Sep, 2011

2 commits

  • A single request to RAID1 or RAID10 might result in multiple
    requests if there are known bad blocks that need to be avoided.

    To detect if we need to submit another write request we test:
    if (sectors_handled < (bio->bi_size >> 9)) {

    However this is after we call **_write_done() so the 'bio' no longer
    belongs to us - the writes could have completed and the bio freed.

    So move the **_write_done call until after the test against
    bio->bi_size.

    This addresses https://bugzilla.kernel.org/show_bug.cgi?id=41862

    Reported-by: Bruno Wolff III
    Tested-by: Bruno Wolff III
    Signed-off-by: NeilBrown

    NeilBrown
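    Roughly, the fix reorders the path like this (raid10 names, with the
    completion helper one_write_done() described in the next entry;
    bookkeeping omitted):

        /* Broken: one_write_done() may complete and free 'bio' ... */
        one_write_done(r10_bio);
        if (sectors_handled < (bio->bi_size >> 9))   /* ... which this then reads */
                /* submit another write */;

        /* Fixed: read bio->bi_size while the bio still belongs to us,
         * and only drop our reference afterwards. */
        if (sectors_handled < (bio->bi_size >> 9))
                /* submit another write for the remaining sectors */;
        one_write_done(r10_bio);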
     
  • A write can complete at two different places:
    1/ when the last member-device write completes, through
    raid10_end_write_request
    2/ in make_request() when we remove the initial bias from ->remaining.

    These two should do exactly the same thing and the comment says they
    do, but they don't.

    So factor the correct code out into a function and call it in both
    places. This makes the code much more similar to RAID1.

    The difference is only significant if there is an error, and errors
    usually take a while to appear, so it is unlikely that there will
    already be an error when make_request is completing. So this is
    unlikely to cause real problems.

    Signed-off-by: NeilBrown

    NeilBrown
     

28 Jul, 2011

17 commits

  • If we find more read/write errors we should record a bad block before
    failing the device.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • Currently when we get a read error during recovery, we simply abort
    the recovery.

    Instead, repeat the read in page-sized blocks.
    On successful reads, write to the target.
    On read errors, record a bad block on the destination,
    and only if that fails do we abort the recovery.

    As we now retry reads we need to know where we read from. This was in
    bi_sector but that can be changed during a read attempt.
    So store the correct from_addr and to_addr in the r10_bio for later
    access.

    Signed-off-by: NeilBrown

    NeilBrown
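    A simplified sketch of the retry loop described above; sync_page_io()
    and rdev_set_badblocks() are the md helpers, but the control flow is
    condensed and the surrounding r10_bio handling is omitted:

        for (sect = 0; sect < nr_sectors; sect += PAGE_SIZE >> 9) {
                int len = min_t(int, nr_sectors - sect, PAGE_SIZE >> 9);

                if (sync_page_io(from_rdev, from_addr + sect, len << 9,
                                 page, READ, false)) {
                        /* good data: write it to the device being rebuilt */
                        sync_page_io(to_rdev, to_addr + sect, len << 9,
                                     page, WRITE, false);
                } else if (!rdev_set_badblocks(to_rdev, to_addr + sect, len, 0)) {
                        /* cannot even record the bad block on the
                         * destination: abort the whole recovery */
                        break;
                }
        }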
     
  • If a read error is detected during recovery the code currently
    fails the read device.
    This isn't really necessary. recovery_request_write will signal
    a write error to end_sync_write and it will record a write
    error on the destination device which will record a bad block
    there or kick it from the array.

    So just remove this call to do md_error.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • If we get a write error during resync/recovery don't fail the device
    but instead record a bad block. If that fails we can then fail the
    device.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • We already attempt to fix read errors found during normal IO
    and a 'repair' process.
    It is best to try to repair them at any time they are found,
    so move a test so that during sync and check a read error will
    be corrected by over-writing with good data.

    If both (all) devices have known bad blocks in the sync section we
    won't try to fix even though the bad blocks might not overlap. That
    should be considered later.

    Also if we hit a read error during recovery we don't try to fix it.
    It would only be possible to fix if there were at least three copies
    of data, which is not very common with RAID10. But it should still
    be considered later.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • When we get a write error (in the data area, not in metadata),
    update the badblock log rather than failing the whole device.

    As the write may well be many blocks, we try writing each
    block individually and only log the ones which fail.

    Signed-off-by: NeilBrown

    NeilBrown
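    The idea, as a condensed sketch (write_one_block() is hypothetical;
    rdev_set_badblocks() and md_error() are the md helpers):

        for (s = 0; s < sectors; s += block_sectors) {
                if (write_one_block(rdev, sector + s, block_sectors))
                        continue;               /* this block went through */
                if (!rdev_set_badblocks(rdev, sector + s, block_sectors, 0))
                        md_error(mddev, rdev);  /* cannot log it: fail the device after all */
        }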
     
  • If we succeed in writing to a block that was recorded as
    being bad, we clear the bad-block record.

    This requires some delayed handling as the bad-block-list update has
    to happen in process-context.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • Writing to known bad blocks on drives that have seen a write error
    is asking for trouble. So try to avoid these blocks.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • When recovering one or more devices, if all the good devices have
    bad blocks we should record a bad block on the device being rebuilt.

    If this fails, we need to abort the recovery.

    To ensure we don't think that we aborted later than we actually did,
    we need to move the check for MD_RECOVERY_INTR earlier in md_do_sync,
    in particular before mddev->curr_resync is updated.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • During resync/recovery limit the size of the request to avoid
    reading into a bad block that does not start at-or-before the current
    read address.

    Similarly if there is a bad block at this address, don't allow the
    current request to extend beyond the end of that bad block.

    Now that we don't ever read from known bad blocks, it is safe to allow
    devices with those blocks into the array.

    Signed-off-by: NeilBrown

    NeilBrown
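    A sketch of the clamping, using is_badblock() from md.h; details are
    approximate:

        sector_t first_bad;
        int bad_sectors;

        if (is_badblock(rdev, sector, sectors, &first_bad, &bad_sectors)) {
                if (first_bad > sector) {
                        /* a bad block starts later: stop this read just
                         * before it */
                        sectors = first_bad - sector;
                } else {
                        /* a bad block covers this address: do not read
                         * from this copy, and keep the request within the
                         * bad range so the next request starts after it */
                        sectors = min_t(int, sectors,
                                        first_bad + bad_sectors - sector);
                }
        }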
     
  • When attempting to repair a read error, don't read from
    devices with a known bad block.

    As we are only reading PAGE_SIZE blocks, we don't try to
    narrow down to smaller regions in the hope that only part of this
    page is bad - it isn't worth the effort.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • When redirecting a read error to a different device, we must
    again avoid bad blocks and possibly split the request.

    Spin_lock typo fixed thanks to Dan Carpenter

    Signed-off-by: NeilBrown

    NeilBrown
     
  • This patch just covers the basic read path:
    1/ read_balance needs to check for badblocks, and return not only
    the chosen slot, but also how many good blocks are available
    there.
    2/ read submission must be ready to issue multiple reads to
    different devices as different bad blocks on different devices
    could mean that a single large read cannot be served by any one
    device, but can still be served by the array.
    This requires keeping count of the number of outstanding requests
    per bio. This count is stored in 'bi_phys_segments'.

    On read error we currently just fail the request if another target
    cannot handle the whole request. Next patch refines that a bit.

    Signed-off-by: NeilBrown

    NeilBrown
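    A simplified sketch of the split-read loop; read_balance() and
    bi_phys_segments are as described above, submit_read() stands in for
    the r10_bio setup and is hypothetical, and the locking around the
    segment count is omitted:

        sectors_handled = 0;
        while (sectors_handled < (bio->bi_size >> 9)) {
                int max_sectors;
                int disk = read_balance(conf, r10_bio, &max_sectors);
                /* read_balance now also reports how many sectors the
                 * chosen device can supply before a known bad block */

                submit_read(disk, r10_bio->sector + sectors_handled, max_sectors);
                sectors_handled += max_sectors;

                if (sectors_handled < (bio->bi_size >> 9))
                        bio->bi_phys_segments++;  /* another sub-read still to come */
        }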
     
  • raid10d() is too big and is about to get bigger, so split
    handle_read_error() out as a separate function.

    Signed-off-by: NeilBrown

    NeilBrown
     
  • When a loop ends with a large 'if', it can be neater to invert the
    condition and just 'continue'.
    The body of the 'if' can then move back to a lower indentation level.

    Signed-off-by: NeilBrown

    NeilBrown
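    A generic illustration of the refactor (needs_work() is just a
    placeholder):

        /* Before: the loop ends with one large if, so its body sits one
         * indentation level deeper. */
        for (i = 0; i < n; i++) {
                if (needs_work(i)) {
                        /* ... many lines of work ... */
                }
        }

        /* After: invert the condition and 'continue'; the same body now
         * sits at the outer indentation level. */
        for (i = 0; i < n; i++) {
                if (!needs_work(i))
                        continue;
                /* ... many lines of work ... */
        }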
     
  • It is only safe to choose not to write to a bad block if that bad
    block is safely recorded in metadata - i.e. if it has been
    'acknowledged'.

    If it hasn't we need to wait for the acknowledgement.

    We support that using the rdev->blocked wait and
    md_wait_for_blocked_rdev by introducing a new device flag
    'BlockedBadBlocks'.

    This flag is only advisory.
    It is cleared whenever we acknowledge a bad block, so that a waiter
    can re-check the particular bad blocks that it is interested in.

    It should be set by a caller when they find they need to wait.
    This (set after test) is inherently racy, but as
    md_wait_for_blocked_rdev already has a timeout, losing the race will
    have minimal impact.

    When we clear "Blocked" was also clear "BlockedBadBlocks" incase it
    was set incorrectly (see above race).

    We also modify the way we manage 'Blocked' to fit better with the new
    handling of 'BlockedBadBlocks' and to make it consistent between
    externally managed and internally managed metadata. This requires
    that each raidXd loop checks if the metadata needs to be written and
    triggers a write (md_check_recovery) if needed. Otherwise a queued
    write request might cause raidXd to wait for the metadata to write,
    and only that thread can write it.

    Before writing metadata, we set FaultRecorded for all devices that
    are Faulty, then after writing the metadata we clear Blocked for any
    device for which the Fault was certainly Recorded.

    The 'faulty' device flag now appears in sysfs if the device is faulty
    *or* it has unacknowledged bad blocks. So user-space which does not
    understand bad blocks can continue to function correctly.
    User space which does, should not assume a device is faulty until it
    sees the 'faulty' flag, and then sees the list of unacknowledged bad
    blocks is empty.

    Signed-off-by: NeilBrown

    NeilBrown
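    A sketch of the wait pattern, assuming the flag and helper names
    above; the bad-block range test is condensed into a hypothetical
    helper:

        if (unacked_badblock_in_range(rdev, sector, sectors)) {  /* hypothetical test */
                /* Ask for the metadata (and hence the bad-block list) to
                 * be written, and wait until that has happened. */
                set_bit(BlockedBadBlocks, &rdev->flags);
                md_wait_for_blocked_rdev(rdev, mddev);
                /* The flag is cleared whenever a bad block is
                 * acknowledged, so re-check the list after waking; the
                 * set-after-test race is tolerable because the wait also
                 * has a timeout. */
        }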
     
  • As no personality understands bad block lists yet, we must
    reject any device that is known to contain bad blocks.
    As the personalities get taught, these tests can be removed.

    This only applies to raid1/raid5/raid10.
    For linear/raid0/multipath/faulty the whole concept of bad blocks
    doesn't mean anything so there is no point adding the checks.

    Signed-off-by: NeilBrown
    Reviewed-by: Namhyung Kim

    NeilBrown
     

27 Jul, 2011

4 commits


18 Jul, 2011

1 commit