02 Aug, 2011

4 commits

  • Add to dm-raid the ability to parse and use metadata devices. Although
    not strictly required, many RAID features are unavailable without the
    metadata devices. They are used to store a superblock and bitmap.

    The role, or position in the array, of each device must be recorded in
    its superblock. This is to help with fault handling, array reshaping,
    and sanity checks. RAID 4/5/6 devices must be loaded in a specific
    order; the 'array_position' field helps validate the correctness of
    the mapping when it is loaded, and during reshaping it identifies
    which devices are added or removed. Fault handling is impossible
    without this field. For example, when a device fails, the failure is
    recorded in the superblock. If this is a RAID1 device and the
    offending device is removed from the array, there must be a way during
    subsequent array assembly to determine that the failed device was the
    one removed. This is done by correlating the 'array_position' field
    and the bit-field variable 'failed_devices'. (A sketch of such a
    superblock follows this list of commits.)

    Signed-off-by: Jonathan Brassow
    Signed-off-by: Alasdair G Kergon

    Jonathan Brassow
     
  • Add the write_mostly parameter to RAID1 dm-raid tables.

    This allows the user to set the WriteMostly flag on a RAID1 device that
    should normally be avoided for read I/O.

    Signed-off-by: Jonathan Brassow
    Signed-off-by: Alasdair G Kergon

    Jonathan Brassow
     
  • Allow the user to specify the region_size.

    Ensure that the supplied value meets md's constraints, viz. that the
    number of regions does not exceed 2^21. (A sketch of this check follows
    this list of commits.)

    Signed-off-by: Jonathan Brassow
    Signed-off-by: Alasdair G Kergon

    Jonathan Brassow
     
  • Add more information about some dm-raid table parameters and clarify how
    parameters are printed when 'dmsetup table' is issued.

    Signed-off-by: Jonathan Brassow
    Signed-off-by: Alasdair G Kergon

    Jonathan Brassow
     
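The first commit above describes per-device superblocks carrying an
'array_position' role and a 'failed_devices' bit-field that are correlated
during array assembly. The following is only a minimal C sketch of that
idea; the structure layout and field names are illustrative assumptions,
not the actual dm-raid on-disk format.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical, simplified per-device superblock (not the real layout). */
    struct raid_sb {
            uint32_t array_position;  /* role of this device in the array     */
            uint64_t failed_devices;  /* bit i set => device in role i failed */
    };

    /*
     * During assembly, a device whose role is marked failed in the merged
     * failed_devices bitmap must not be trusted as up to date.
     */
    static int device_is_trustworthy(const struct raid_sb *sb,
                                     uint64_t merged_failed_devices)
    {
            return !(merged_failed_devices & (1ULL << sb->array_position));
    }

    int main(void)
    {
            struct raid_sb sb = { .array_position = 2, .failed_devices = 0 };
            uint64_t failed = 1ULL << 2;  /* role 2 recorded as failed earlier */

            printf("device at role %u trustworthy: %d\n",
                   (unsigned)sb.array_position,
                   device_is_trustworthy(&sb, failed));
            return 0;
    }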

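For the region_size commit, the 2^21 limit on the number of regions is
simple arithmetic. This is a hedged sketch of that check, assuming both the
device length and the region size are given in sectors; it is not the
driver's actual validation code.

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_REGIONS (1ULL << 21)   /* md's limit on the number of regions */

    /* Return 0 if region_size yields an acceptable region count, -1 otherwise. */
    static int check_region_size(uint64_t dev_sectors, uint64_t region_size)
    {
            uint64_t regions;

            if (!region_size)
                    return -1;

            /* Number of regions, rounding up the final partial region. */
            regions = (dev_sectors + region_size - 1) / region_size;

            return regions > MAX_REGIONS ? -1 : 0;
    }

    int main(void)
    {
            /* 1960893648 sectors as in the example tables below; 8192-sector regions. */
            printf("region_size 8192: %s\n",
                   check_region_size(1960893648ULL, 8192) ? "too many regions" : "ok");
            return 0;
    }
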
14 Jan, 2011

1 commit

  • This patch is the skeleton for the DM target that will be
    the bridge from DM to MD (initially RAID456 and later RAID1). It
    provides a way to use device-mapper interfaces to the MD RAID456
    drivers.

    As with all device-mapper targets, the nominal public interfaces are the
    constructor (CTR) tables and the status outputs (both STATUSTYPE_INFO
    and STATUSTYPE_TABLE). The CTR table looks like the following:

    1: <s> <l> raid \
    2:     <raid_type> <#raid_params> <raid_params> \
    3:     <#raid_devs> <meta_dev1> <dev1> .. <meta_devN> <devN>

    Line 1 contains the standard first three arguments to any device-mapper
    target - the start, length, and target type fields. The target type in
    this case is "raid".

    Line 2 contains the arguments that define the particular raid
    type/personality/level, the required arguments for that raid type, and
    any optional arguments. Possible raid types include: raid4, raid5_la,
    raid5_ls, raid5_rs, raid6_zr, raid6_nr, and raid6_nc. (again, raid1 is
    planned for the future.) The list of required and optional parameters
    is the same for all the current raid types. The required parameters are
    positional, while the optional parameters are given as key/value pairs.
    The possible parameters are as follows:
    <chunk_size>                       Chunk size in sectors.
    [[no]sync]                         Force/Prevent RAID initialization
    [rebuild <idx>]                    Rebuild the drive indicated by the index
    [daemon_sleep <ms>]                Time between bitmap daemon work to clear bits
    [min_recovery_rate <kB/sec/disk>]  Throttle RAID initialization
    [max_recovery_rate <kB/sec/disk>]  Throttle RAID initialization
    [max_write_behind <value>]         See '--write-behind=' (man mdadm)
    [stripe_cache <sectors>]           Stripe cache size for higher RAIDs

    Line 3 contains the list of devices that compose the array in
    metadata/data device pairs. If the metadata is stored separately, a '-'
    is given for the metadata device position. If a drive has failed or is
    missing at creation time, a '-' can be given for both the metadata and
    data drives for a given position.

    Examples:
    # RAID4 - 4 data drives, 1 parity
    # No metadata devices specified to hold superblock/bitmap info
    # Chunk size of 1MiB
    # (Lines separated for easy reading)
    0 1960893648 raid \
    raid4 1 2048 \
    5 - 8:17 - 8:33 - 8:49 - 8:65 - 8:81

    # RAID4 - 4 data drives, 1 parity (no metadata devices)
    # Chunk size of 1MiB, force RAID initialization,
    # min recovery rate at 20 kiB/sec/disk
    0 1960893648 raid \
    raid4 4 2048 min_recovery_rate 20 sync \
    5 - 8:17 - 8:33 - 8:49 - 8:65 - 8:81

    Performing a 'dmsetup table' should display the CTR table used to
    construct the mapping (with possible reordering of optional
    parameters).

    Performing a 'dmsetup status' will yield information on the state and
    health of the array. The output is as follows:
    1: <s> <l> raid \
    2:     <raid_type> <#devices> <1 health char for each dev> <resync_ratio>

    Line 1 is standard DM output. Line 2 is best shown by example:
    0 1960893648 raid raid4 5 AAAAA 2/490221568
    Here we can see that the RAID type is raid4, there are 5 devices - all
    of which are 'A'live - and the array's recovery is 2/490221568 complete.
    (A sketch of parsing this status line follows at the end of this entry.)

    Cc: linux-raid@vger.kernel.org
    Signed-off-by: NeilBrown
    Signed-off-by: Jonathan Brassow
    Signed-off-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon

    NeilBrown
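
As an illustration of the status format described above, here is a minimal
C sketch that splits such a STATUSTYPE_INFO line into its fields (raid
type, device count, per-device health characters, and the resync ratio).
It is an assumption-laden reader for the exact example line shown above,
not code taken from the driver or from dmsetup.

    #include <stdio.h>

    int main(void)
    {
            /* Example 'dmsetup status' line from the commit message above. */
            const char *line = "0 1960893648 raid raid4 5 AAAAA 2/490221568";

            unsigned long long start, len, done, total;
            char raid_type[16], health[64];
            int ndevs;

            /* <s> <l> raid <raid_type> <#devices> <health chars> <resync_ratio> */
            if (sscanf(line, "%llu %llu raid %15s %d %63s %llu/%llu",
                       &start, &len, raid_type, &ndevs, health, &done, &total) != 7) {
                    fprintf(stderr, "unexpected status format\n");
                    return 1;
            }

            printf("type=%s devices=%d health=%s resync=%llu/%llu\n",
                   raid_type, ndevs, health, done, total);
            return 0;
    }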