Commit 035fe03a7ad56982b30ab3a522b7b08d58feccd0

Authored by Josef Bacik
Committed by Chris Mason
1 parent 7f59203abe

Btrfs: check total number of devices when removing missing

If you have a disk failure in RAID1, add a new disk to the array, and
then try to remove the missing device, the removal will fail.  The
reason is that the sanity check only looks at the total number of rw
devices, which is just 2 because we have 2 good disks and 1 bad one.
Instead, check the total number of devices in the array to make sure we
can actually remove the device.  Tested this with a failed disk setup;
with this patch we can now run

btrfs-vol -r missing /mount/point

and it works fine.

Signed-off-by: Josef Bacik <josef@redhat.com>
Signed-off-by: Chris Mason <chris.mason@oracle.com>

Showing 1 changed file with 2 additions and 2 deletions

@@ -1135,7 +1135,7 @@
 		root->fs_info->avail_metadata_alloc_bits;
 
 	if ((all_avail & BTRFS_BLOCK_GROUP_RAID10) &&
-	    root->fs_info->fs_devices->rw_devices <= 4) {
+	    root->fs_info->fs_devices->num_devices <= 4) {
 		printk(KERN_ERR "btrfs: unable to go below four devices "
 		       "on raid10\n");
 		ret = -EINVAL;
@@ -1143,7 +1143,7 @@
 	}
 
 	if ((all_avail & BTRFS_BLOCK_GROUP_RAID1) &&
-	    root->fs_info->fs_devices->rw_devices <= 2) {
+	    root->fs_info->fs_devices->num_devices <= 2) {
 		printk(KERN_ERR "btrfs: unable to go below two "
 		       "devices on raid1\n");
 		ret = -EINVAL;