02 Apr, 2006

1 commit


28 Mar, 2006

12 commits

  • raid5 overloads bi_phys_segments to count the number of blocks that the
    request was broken into so that it knows when the bio is completely handled.

    Accessing this must always be done under a spinlock. In one case we also call
    bi_end_io under that spinlock, which probably isn't ideal as bi_end_io could
    be expensive (even though it isn't allowed to sleep).

    So we reduce the range of the spinlock to just accessing bi_phys_segments.

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
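
    The narrowed-lock pattern above can be sketched in plain C, with a pthread
    mutex standing in for the spinlock (the struct and function names are
    illustrative, not the actual raid5 code): only the shared counter is
    touched under the lock, and the potentially expensive completion callback
    runs after the lock is dropped.

```c
#include <pthread.h>

/* Illustrative stand-ins for the bio fields; not the kernel structures. */
struct fake_bio {
    int remaining;   /* like bi_phys_segments: blocks still outstanding */
    int completed;   /* set once the completion callback has run */
};

static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static void fake_end_io(struct fake_bio *b)
{
    b->completed = 1;   /* possibly expensive completion work */
}

/* Complete one block: only the counter access is under the lock;
 * the (possibly expensive) end_io call happens after unlocking. */
static void complete_block(struct fake_bio *b)
{
    int last;

    pthread_mutex_lock(&counter_lock);
    last = (--b->remaining == 0);
    pthread_mutex_unlock(&counter_lock);

    if (last)
        fake_end_io(b);   /* runs outside the lock */
}
```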
     
  • wait_event_lock_irq puts a ';' after its usage of the 4th arg, so we don't
    need to.

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     
  • Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     
  • This allows user-space to access data safely. This is needed for raid5
    reshape as user-space needs to take a backup of the first few stripes before
    allowing reshape to commence.

    It will also be useful in cluster-aware raid1 configurations so that all
    cluster members can leave a section of the array untouched while a
    resync/recovery happens.

    A 'start' and 'end' of the suspended range are written to 2 sysfs attributes.
    Note that only one range can be suspended at a time.

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
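
    A minimal sketch (userspace C; the struct is illustrative, with field
    names echoing the sysfs attributes) of the overlap test such a suspended
    range implies: any request touching [suspend_lo, suspend_hi) must wait.

```c
typedef unsigned long long u64_sector;

/* Illustrative stand-in for the relevant mddev fields. */
struct fake_mddev {
    u64_sector suspend_lo;   /* start of suspended range (sectors) */
    u64_sector suspend_hi;   /* end of suspended range (exclusive) */
};

/* A request overlapping the suspended range must be held off until
 * user-space moves or clears the range. */
static int request_must_wait(const struct fake_mddev *mddev,
                             u64_sector start, u64_sector nsectors)
{
    return start < mddev->suspend_hi &&
           start + nsectors > mddev->suspend_lo;
}
```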
     
  • check_reshape checks validity and does things that can be done instantly -
    like adding devices to raid1. start_reshape initiates a restriping process to
    convert the whole array.

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     
  • Instead of checkpointing at each stripe, only checkpoint when a new write
    would overwrite uncheckpointed data. Block any write to the uncheckpointed
    area. Arbitrarily checkpoint at least every 3Meg.

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
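
    The checkpoint policy above can be sketched as a small decision helper
    (userspace C; names and the helper itself are illustrative). Sectors are
    512 bytes, so 3Meg is roughly 6144 sectors.

```c
#define CHECKPOINT_MAX_GAP 6144ULL   /* ~3Meg in 512-byte sectors */

/* reshape_pos: how far the reshape has progressed (sectors).
 * checkpointed: last position safely recorded in the superblock.
 * write_sector: where an incoming write wants to land.
 * Returns 1 if a checkpoint must be taken before proceeding. */
static int need_checkpoint(unsigned long long reshape_pos,
                           unsigned long long checkpointed,
                           unsigned long long write_sector)
{
    /* A write into the not-yet-checkpointed region forces a checkpoint. */
    if (write_sector >= checkpointed && write_sector < reshape_pos)
        return 1;
    /* Checkpoint anyway at least every ~3Meg of progress. */
    if (reshape_pos - checkpointed >= CHECKPOINT_MAX_GAP)
        return 1;
    return 0;
}
```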
     
  • We allow the superblock to record an 'old' and a 'new' geometry, and a
    position where any conversion is up to. The geometry allows for changing
    chunksize, layout and level as well as number of devices.

    When using a version-0.90 superblock, we convert the version to 0.91 while the
    conversion is happening so that an old kernel will refuse to assemble the
    array. For version-1, we use a feature bit for the same effect.

    When starting an array we check for an incomplete reshape and restart the
    reshape process if needed. If the reshape stopped at an awkward time (like
    when updating the first stripe) we refuse to assemble the array, and let
    user-space worry about it.

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     
  • This patch adds raid5_reshape and end_reshape which will start and finish the
    reshape processes.

    raid5_reshape is only enabled if CONFIG_MD_RAID5_RESHAPE is set, to discourage
    accidental use.

    Read the 'help' for the CONFIG_MD_RAID5_RESHAPE entry.

    And make sure that you have backups, just in case.

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     
  • This patch provides the core of the resize/expand process.

    sync_request notices if a 'reshape' is happening and acts accordingly.

    It allocates new stripe_heads for the next chunk-wide-stripe in the target
    geometry, marking them STRIPE_EXPANDING.

    Then it finds which stripe heads in the old geometry can provide data needed
    by these and marks them STRIPE_EXPAND_SOURCE. This causes stripe_handle to
    read all blocks on those stripes.

    Once all blocks on a STRIPE_EXPAND_SOURCE stripe_head are read, any that are
    needed are copied into the corresponding STRIPE_EXPANDING stripe_head. Once a
    STRIPE_EXPANDING stripe_head is full, it is marked STRIPE_EXPAND_READY and then
    is written out and released.

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
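
    The flag progression above can be summarized as a tiny state machine
    (the flag names mirror the commit message; the transition helper is
    purely illustrative).

```c
/* Stripe expansion states, mirroring the flags described above. */
enum stripe_state {
    STRIPE_NORMAL,
    STRIPE_EXPAND_SOURCE,   /* old geometry: read all blocks */
    STRIPE_EXPANDING,       /* new geometry: being filled by copies */
    STRIPE_EXPAND_READY     /* full: write out and release */
};

/* After copying one more block into an EXPANDING stripe: once every
 * needed block is present, the stripe becomes EXPAND_READY. */
static enum stripe_state after_copy(enum stripe_state st,
                                    int blocks_filled, int blocks_needed)
{
    if (st == STRIPE_EXPANDING && blocks_filled >= blocks_needed)
        return STRIPE_EXPAND_READY;
    return st;
}
```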
     
  • We need to allow that different stripes are of different effective sizes, and
    use the appropriate size. Also, when a stripe is being expanded, we must
    block any IO attempts until the stripe is stable again.

    Key elements in this change are:
    - each stripe_head gets a 'disk' field which is part of the key,
    thus there can sometimes be two stripe heads of the same area of
    the array, but covering different numbers of devices. One of these
    will be marked STRIPE_EXPANDING and so won't accept new requests.
    - conf->expand_progress tracks how the expansion is progressing and
    is used to determine whether the target part of the array has been
    expanded yet or not.

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     
  • Before a RAID-5 can be expanded, we need to be able to expand the stripe-cache
    data structure.

    This requires allocating new stripes in a new kmem_cache. If this succeeds,
    we copy cache pages over and release the old stripes and kmem_cache.

    We then allocate new pages. If that fails, we leave the stripe cache at its
    new size. It isn't worth the effort to shrink it back again.

    Unfortunately this means we need two kmem_cache names as, for a short
    period of time, we have two kmem_caches. So they are raid5/%s and
    raid5/%s-alt

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
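
    The two-name scheme can be sketched like this (userspace C; the
    generation counter is just an illustrative way to flip between the
    names on each resize):

```c
#include <stdio.h>
#include <string.h>

/* While both kmem_caches exist they need distinct names, so the name
 * alternates between "raid5/%s" and "raid5/%s-alt". */
static void stripe_cache_name(char *buf, size_t len,
                              const char *mdname, int generation)
{
    if (generation & 1)
        snprintf(buf, len, "raid5/%s-alt", mdname);
    else
        snprintf(buf, len, "raid5/%s", mdname);
}
```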
     
  • The remainder of this batch implements raid5 reshaping. Currently the only
    shape change that is supported is adding a device, but it is envisioned that
    changing the chunksize and layout will also be supported, as well as changing
    the level (e.g. 1->5, 5->6).

    The reshape process naturally has to move all of the data in the array, and so
    should be used with caution. It is believed to work, and some testing does
    support this, but wider testing would be great for increasing my confidence.

    You will need a version of mdadm newer than 2.3.1 to make use of raid5 growth.
    This is because mdadm needs to take a copy of a 'critical section' at the
    start of the array in case there is a crash at an awkward moment. On restart,
    mdadm will restore the critical section and allow reshape to continue.

    I hope to release a 2.4-pre by early next week - it still needs a little more
    polishing.

    This patch:

    Previously the array of disk information was included in the raid5 'conf'
    structure which was allocated to an appropriate size. This makes it awkward
    to change the size of that array. So we split it off into a separate
    kmalloced array which will require a little extra indexing, but is much easier
    to grow.

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     

04 Feb, 2006

1 commit

  • - version-1 superblock
    + The default_bitmap_offset is in sectors, not bytes.
    + the 'size' field in the superblock is in sectors, not KB
    - raid0_run should return a negative number on error, not '1'
    - raid10_read_balance should not return a valid 'disk' number if
    ->rdev turned out to be NULL
    - kmem_cache_destroy doesn't like being passed a NULL.

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     

15 Jan, 2006

1 commit


07 Jan, 2006

10 commits

  • Store this total in the superblock (as appropriate), and make it available to
    userspace via sysfs.

    Signed-off-by: Neil Brown
    Acked-by: Greg KH
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     
  • Signed-off-by: Neil Brown
    Acked-by: Greg KH
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     
  • md supports multiple different RAID levels, each being implemented by a
    'personality' (which is often in a separate module).

    These personalities have fairly artificial 'numbers'. The numbers
    are used to:
    1- provide an index into an array where the various personalities
    are recorded
    2- identify the module (via an alias) which implements a particular
    personality.

    Neither of these uses really justify the existence of personality numbers.
    The array can be replaced by a linked list which is searched (array lookup
    only happens very rarely). Module identification can be done using an alias
    based on level rather than 'personality' number.

    The current 'raid5' module supports two levels (4 and 5) but only one
    personality. This slight awkwardness (which was handled in the mapping from
    level to personality) can be better handled by allowing raid5 to register 2
    personalities.

    With this change in place, the core md module does not need to have an
    exhaustive list of all possible personalities, so other personalities can be
    added independently.

    This patch also moves the check for chunksize being non-zero into the ->run
    routines for the personalities that need it, rather than having it in core-md.
    This has a side effect of allowing 'faulty' and 'linear' not to have a
    chunk-size set.

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
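
    A sketch of list-based registration and lookup (userspace C; the names
    are illustrative, not the md API). raid5 registering two personalities,
    for levels 4 and 5, fits this model naturally.

```c
#include <stddef.h>

/* Illustrative personality record keyed by level, kept on a simple
 * singly linked list instead of a numbered array. */
struct personality {
    int level;
    const char *name;
    struct personality *next;
};

static struct personality *pers_list;

static void register_personality(struct personality *p)
{
    p->next = pers_list;
    pers_list = p;
}

/* Lookup only happens rarely (at array assembly), so a list walk
 * is perfectly adequate. */
static struct personality *find_personality(int level)
{
    struct personality *p;

    for (p = pers_list; p; p = p->next)
        if (p->level == level)
            return p;
    return NULL;
}
```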
     
  • - replace open-coded hash chain with hlist macros

    - Fix hash-table size at one page - it is already quite generous, so there
    will never be a need to use multiple pages, so no need for __get_free_pages

    No functional change.

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     
  • Replace multiple kmalloc/memset pairs with kzalloc calls.

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
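
    The cleanup amounts to this equivalence, shown here with a userspace
    calloc analogue (kzalloc is the kernel call; the helper names are
    illustrative):

```c
#include <stdlib.h>
#include <string.h>

/* Before: two steps, easy to forget the memset. */
static void *alloc_zeroed_old(size_t size)
{
    void *p = malloc(size);       /* like kmalloc(size, GFP_KERNEL) */
    if (p)
        memset(p, 0, size);
    return p;
}

/* After: one call that returns zeroed memory. */
static void *alloc_zeroed_new(size_t size)
{
    return calloc(1, size);       /* like kzalloc(size, GFP_KERNEL) */
}
```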
     
  • Substitute:

    page_cache_get -> get_page
    page_cache_release -> put_page
    PAGE_CACHE_SHIFT -> PAGE_SHIFT
    PAGE_CACHE_SIZE -> PAGE_SIZE
    PAGE_CACHE_MASK -> PAGE_MASK
    __free_page -> put_page

    because we aren't using the page cache, we are just using pages.

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     
  • There is this "FIXME" comment with a typo in it!! that's been annoying me for
    days, so I just had to remove it.

    conf->disks[i].rdev should only be accessed if
    - we know we hold a reference or
    - the mddev->reconfig_sem is down or
    - we have a rcu_readlock

    handle_stripe was referencing rdev in three places without any of these. For
    the first two, get an rcu_readlock. For the last, the same access
    (md_sync_acct call) is made a little later after the rdev has been claimed
    under an rcu_readlock, if R5_Syncio is set. So just use that access...
    However R5_Syncio isn't really needed as the 'syncing' variable contains the
    same information. So use that instead.

    Issues, comment, and fix are identical in raid5 and raid6.

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     
  • This is important because bitmap_create uses
    mddev->resync_max_sectors
    and that doesn't have a valid value until after the array
    has been initialised (with pers->run()).
    [It doesn't make a difference for current personalities that
    support bitmaps, but will make a difference for raid10]

    This has the added advantage of meaning we can move the thread->timeout
    manipulation inside the bitmap.c code instead of sprinkling identical code
    throughout all personalities.

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     
  • See patch to md.txt for more details

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     
  • Resync code:
    A test that isn't needed,
    a 'compute_block' that makes more sense
    elsewhere (And then doesn't need a test),
    a couple of BUG_ONs to confirm the change makes sense.

    Printks:
    A few were missing KERN_*

    Also fix a typo in a comment.

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     

13 Dec, 2005

1 commit


29 Nov, 2005

1 commit


09 Nov, 2005

10 commits

  • 1/ I really should be using the __ATTR macros for defining attributes, so
    that the .owner field gets set properly; otherwise modules can be removed
    while sysfs files are open. This also involves some name changes of _show
    routines.

    2/ Always lock the mddev (against reconfiguration) for all sysfs attribute
    access. This easily avoids certain races and is completely consistent with
    other interfaces (ioctl and /proc/mdstat both always lock against
    reconfiguration).

    3/ raid5 attributes must check that the 'conf' structure actually exists
    (the array could have been stopped while an attribute file was open).

    4/ A missing 'kfree' from when the raid5_conf_t was converted to have a
    kobject embedded, and then converted back again.

    Signed-off-by: Neil Brown
    Acked-by: Greg Kroah-Hartman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     
  • A sync of raid5 usually ignores blocks which the bitmap says are in-sync. But
    a user-requested check or repair should not ignore these.

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     
  • This has the advantage of removing the confusion caused by 'rdev_t' and
    'mddev_t' both having 'in_sync' fields.

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     
  • Two refinements to the 'attempt-overwrite-on-read-error' mechanism.
    1/ If the array is read-only, don't attempt an over-write.
    2/ If there are more than max_nr_stripes read errors on a device with
    no success, fail the drive. This will make sure a dead
    drive will eventually be kicked even when we aren't trying
    to rewrite (which would normally kick a dead drive more quickly).

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     
  • There isn't really a need for raid5 attributes to be in a subdirectory,
    so this patch moves them from
    /sys/block/mdX/md/raid5/attribute
    to
    /sys/block/mdX/md/attribute

    This suggests that all md personalities should co-operate about
    namespace usage, but that shouldn't be a problem.

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     
  • Thanks Greg.

    Cc: Greg KH
    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     
  • Acked-by:
    Signed-off-by: Suzanne Wood
    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Suzanne Wood
     
  • With this, raid5 can be asked to check parity without repairing it. It also
    keeps a count of the number of incorrect parity blocks found (mismatches) and
    reports them through sysfs.

    Signed-off-by: Neil Brown
    Cc: Greg KH
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     
  • /sys/block/mdX/md/raid5/
    contains raid5-related attributes.
    Currently
    stripe_cache_size
    is the number of entries in the stripe cache, and is settable.
    stripe_cache_active
    is the number of active entries, and is only readable.

    Signed-off-by: Neil Brown
    Cc: Greg KH
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
     
  • This patch changes the behaviour of raid5 when it gets a read error.
    Instead of just failing the device, it tries to find out what should have
    been there, and writes it over the bad block. For some media errors, this
    has a reasonable chance of fixing the error. If the write succeeds, and a
    subsequent read succeeds as well, raid5 decides the address is OK and
    continues.

    Instead of failing a drive on read-error, we attempt to re-write the block,
    and then re-read. If that all works, we allow the device to remain in the
    array.

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
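
    The retry flow can be sketched as follows (userspace C; the device
    callbacks are illustrative stand-ins for the real block-layer I/O, and
    the reconstructed data is assumed to have been computed from parity by
    the caller):

```c
/* Illustrative device with pluggable I/O callbacks. */
struct fake_dev {
    int (*write_block)(struct fake_dev *dev, long long sector,
                       const void *buf);
    int (*read_block)(struct fake_dev *dev, long long sector, void *buf);
    int failed;
};

/* On a read error: write the reconstructed data ('good') over the bad
 * block, then re-read to verify.  Only if both steps succeed does the
 * device stay in the array. */
static int try_fix_read_error(struct fake_dev *dev, long long sector,
                              const void *good, void *verify)
{
    if (dev->write_block(dev, sector, good) != 0 ||
        dev->read_block(dev, sector, verify) != 0) {
        dev->failed = 1;   /* the rewrite didn't stick: fail the drive */
        return -1;
    }
    return 0;              /* media error fixed; keep the device */
}

/* Simple stub callbacks for demonstration. */
static int io_write_ok(struct fake_dev *d, long long s, const void *b)
{ (void)d; (void)s; (void)b; return 0; }
static int io_read_ok(struct fake_dev *d, long long s, void *b)
{ (void)d; (void)s; (void)b; return 0; }
static int io_write_bad(struct fake_dev *d, long long s, const void *b)
{ (void)d; (void)s; (void)b; return -1; }
```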
     

07 Nov, 2005

1 commit


01 Nov, 2005

1 commit

  • Instead of having ->read_sectors and ->write_sectors, combine the two
    into ->sectors[2] and similar for the other fields. This saves a branch
    several places in the io path, since we don't have to care for what the
    actual io direction is. On my x86-64 box, that's 200 bytes less text in
    just the core (not counting the various drivers).

    Signed-off-by: Jens Axboe

    Jens Axboe
     

10 Sep, 2005

1 commit

  • Most awkward part of this is delaying write requests until bitmap updates have
    been flushed.

    To achieve this, we have a sequence number (seq_flush) which is incremented
    each time the raid5 is unplugged.

    If the raid thread notices that this has changed, it flushes bitmap changes,
    and assigns the value of seq_flush to seq_write.

    When a write request arrives, it is given the number from seq_write, and that
    write request may not complete until seq_flush is larger than the saved seq
    number.

    We have a new queue for storing stripes which are waiting for a bitmap flush
    and an extra flag for stripes to record if the write was 'degraded' and so
    should not clear the bit in the bitmap.

    Signed-off-by: Neil Brown
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    NeilBrown
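
    The sequence-number gate can be sketched like this (userspace C; the
    struct is illustrative, not the real raid5 conf, but the field names
    mirror the commit message):

```c
/* Illustrative flush-sequencing state. */
struct flush_seq {
    unsigned long seq_flush;   /* bumped each time the array is unplugged */
    unsigned long seq_write;   /* value of seq_flush at the last flush */
};

static void unplug(struct flush_seq *s)
{
    s->seq_flush++;
}

/* raid5 thread: if an unplug happened since the last flush, flush
 * bitmap changes and record how far we have flushed. */
static void maybe_flush_bitmap(struct flush_seq *s)
{
    if (s->seq_write != s->seq_flush) {
        /* ... flush bitmap updates to disk here ... */
        s->seq_write = s->seq_flush;
    }
}

/* A write stamped with seq_write at submission may complete once
 * seq_flush has moved past that stamp. */
static int write_may_complete(const struct flush_seq *s, unsigned long stamp)
{
    return s->seq_flush > stamp;
}
```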