01 Jul, 2019

1 commit


12 Apr, 2018

1 commit


06 Dec, 2016

1 commit


09 Aug, 2015

1 commit

  • The current RAID 5/6 recovery code isn't quite prepared to handle
    missing devices. In particular, it expects a bio that we previously
    attempted to use in the read path, meaning that it has valid pages
    allocated. However, missing devices have a NULL blkdev, and we can't
    call bio_add_page() on a bio with a NULL blkdev. We could do manual
    manipulation of bio->bi_io_vec, but that's pretty gross. So instead, add
    a separate path that allows us to manually add pages to the rbio
    (see the sketch after this entry).

    Signed-off-by: Omar Sandoval
    Signed-off-by: Chris Mason

    Omar Sandoval
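
    A rough sketch of the idea in the entry above, for illustration only:
    the struct layout and helper name below are hypothetical, not the
    actual fs/btrfs/raid56.c interface. Instead of calling bio_add_page()
    on a bio whose block device is NULL, the page is stored directly in
    the rbio's own page array:

        /* Kernel context assumed: struct page, struct bio. */

        /* Hypothetical rbio layout, for illustration only. */
        struct example_rbio {
                int sectors_per_stripe;
                struct page **pages;    /* data + parity pages, stripe-major */
        };

        /*
         * Put @page straight into the rbio instead of going through
         * bio_add_page(), which cannot be used when the device is
         * missing and the bio has a NULL blkdev.
         */
        static void example_rbio_add_page(struct example_rbio *rbio,
                                          int stripe_nr, int sector_nr,
                                          struct page *page)
        {
                int index = stripe_nr * rbio->sectors_per_stripe + sector_nr;

                rbio->pages[index] = page;
        }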
     

22 Jan, 2015

1 commit


03 Dec, 2014

3 commits

  • The commit c404e0dc (Btrfs: fix use-after-free in the finishing
    procedure of the device replace) fixed a use-after-free problem
    that happened when removing the source device at the end of device
    replace, but at that time btrfs didn't support device replace on
    raid56, so the problem was not fixed for the raid56 profile. Now
    that device replace is implemented for raid56, we need to fix that
    problem before enabling the feature for raid56.

    The fix is simple: increase the per-cpu bio counter before
    submitting a raid56 io, and decrease the counter when the raid56 io
    ends (see the sketch after this list).

    Signed-off-by: Miao Xie

    Miao Xie
     
  • The implementation is:
    - Read and check all the data with a checksum in the same stripe.
      All data covered by a checksum is COW data, so we are sure it
      does not change even though we don't hold the stripe lock: the
      space of that data can only be reclaimed after the current
      transaction is committed, and only then can the fs reuse it for
      other data. Since scrub holds the current transaction open, that
      data cannot be reclaimed, so it is safe to read and check it
      outside the stripe lock.
    - Lock the stripe.
    - Read out all the data without a checksum, and the parity.
      Data without a checksum, and the parity, may change if we don't
      hold the stripe lock, so they must be read inside the stripe
      lock.
    - Check the parity.
    - Recalculate the parity and write it back if the old parity is
      not right.
    - Unlock the stripe.

    If we cannot read the data, or the data we read is corrupted, we
    try to repair it. If the repair fails, we mark the horizontal
    sub-stripe (the pages at the same horizontal offset across the
    stripe) as corrupted, and we skip the parity check and repair of
    that horizontal sub-stripe.

    In order to skip horizontal sub-stripes that contain no data, we
    introduce a bitmap. If there is some data on a horizontal
    sub-stripe, we set the corresponding bit to 1; when we check and
    repair the parity, we skip the horizontal sub-stripes whose bits
    are 0 (see the sketch after this list).

    Signed-off-by: Miao Xie

    Miao Xie
     
  • This patch implements the common RAID5/6 data repair function. The
    implementation is similar to scrub on the other RAID levels such as
    RAID1; the difference is that we don't read the data from a mirror,
    we use the RAID5/6 data repair function instead.

    Signed-off-by: Miao Xie

    Miao Xie
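
    A minimal sketch of the counter pattern from the first entry above
    (the device-replace use-after-free fix). It assumes the
    btrfs_bio_counter_inc_noblocked()/btrfs_bio_counter_dec() helpers
    added by c404e0dc; the function names below are illustrative, not
    the exact ones in fs/btrfs/raid56.c:

        /* Kernel context assumed: struct bio, struct btrfs_fs_info,
         * the btrfs bio counter helpers from ctree.h. */

        /* Hold the per-cpu bio counter for the lifetime of a raid56
         * bio, so the device-replace finishing code can wait for
         * in-flight I/O before freeing the source device. */
        static void example_raid56_submit(struct btrfs_fs_info *fs_info,
                                          struct bio *bio)
        {
                btrfs_bio_counter_inc_noblocked(fs_info); /* before submit */
                /* ... submit the raid56 bio here ... */
        }

        static void example_raid56_end_io(struct btrfs_fs_info *fs_info)
        {
                btrfs_bio_counter_dec(fs_info); /* when the raid56 io ends */
        }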
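
    A minimal sketch of the bitmap described in the second entry above
    (the raid56 scrub/parity check). The structure and field names are
    hypothetical; only the kernel bitmap helpers (set_bit(),
    for_each_set_bit()) are real:

        /* Kernel context assumed: <linux/bitops.h>. */

        /* Track which horizontal sub-stripes hold data. */
        struct example_scrub_parity {
                unsigned long *dbitmap; /* bit N set => sub-stripe N has data */
                int nr_sectors;         /* horizontal sub-stripes per stripe */
        };

        static void example_mark_has_data(struct example_scrub_parity *sp,
                                          int sector)
        {
                set_bit(sector, sp->dbitmap);
        }

        static void example_check_parity(struct example_scrub_parity *sp)
        {
                int sector;

                /* Sub-stripes whose bit is 0 carry no data and are
                 * skipped; only the set bits are checked and repaired. */
                for_each_set_bit(sector, sp->dbitmap, sp->nr_sectors) {
                        /* verify the parity of this sub-stripe and
                         * rewrite it if it is wrong */
                }
        }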
     

02 Feb, 2013

1 commit

  • This builds on David Woodhouse's original Btrfs raid5/6 implementation.
    The code has changed quite a bit, blame Chris Mason for any bugs.

    Read/modify/write is done after the higher levels of the filesystem have
    prepared a given bio. This means the higher layers are not responsible
    for building full stripes, and they don't need to query for the topology
    of the extents that may get allocated during delayed allocation runs.
    It also means different files can easily share the same stripe.

    But, it does expose us to incorrect parity if we crash or lose power
    while doing a read/modify/write cycle (see the worked example after
    this entry). This will be addressed in a later commit.

    Scrub is unable to repair crc errors on raid5/6 chunks.

    Discard does not work on raid5/6 (yet)

    The stripe size is fixed at 64KiB per disk. This will be tunable
    in a later commit.

    Signed-off-by: Chris Mason

    David Woodhouse
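
    The read/modify/write exposure described in the entry above comes
    from the standard RAID5 parity update, sketched here in
    self-contained user-space C (not btrfs code): if a data block is
    rewritten but the matching parity update is lost to a crash, the
    parity no longer matches the stripe.

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
                uint8_t d0 = 0xA5, d1 = 0x3C, d2 = 0x0F;   /* data blocks */
                uint8_t parity = d0 ^ d1 ^ d2;             /* full-stripe parity */

                /* Read/modify/write of d1: the new parity is derived from
                 * the old parity, the old data and the new data. */
                uint8_t new_d1 = 0x7E;
                uint8_t new_parity = parity ^ d1 ^ new_d1;

                /* Matches recomputing parity from the whole stripe. */
                printf("%d\n", new_parity == (uint8_t)(d0 ^ new_d1 ^ d2));
                return 0;
        }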