03 Jan, 2015

1 commit


03 Dec, 2014

5 commits

  • Chris Mason
     
  • The commit c404e0dc (Btrfs: fix use-after-free in the finishing
    procedure of the device replace) fixed a use-after-free problem
    that happened when removing the source device at the end of a
    device replace, but at that time btrfs didn't support device
    replace on raid56, so the problem was not fixed for the raid56
    profile. Now that we have implemented device replace for raid56,
    we need to fix that problem before we enable the feature for
    raid56.

    The fix is very simple: we increase the per-cpu bio counter
    before we submit a raid56 io, and decrease the counter when the
    raid56 io ends.

    Signed-off-by: Miao Xie

    Miao Xie
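    The counter scheme above can be sketched in userspace C. This is
    an illustrative analog with made-up names (and a single counter
    instead of the kernel's per-cpu one), not the actual btrfs code:

    ```c
    #include <assert.h>

    /* Illustrative analog of the fix: count in-flight raid56 bios so
     * the device-replace teardown can wait for the count to reach
     * zero before freeing the source device. The real code uses a
     * per-cpu counter; one plain counter keeps the sketch simple. */
    static long raid56_bios_in_flight;

    static void raid56_submit_bio(void)
    {
        raid56_bios_in_flight++;   /* increase before submitting the bio */
        /* ... submit_bio(...) would happen here ... */
    }

    static void raid56_end_io(void)
    {
        /* ... completion work for the raid56 io ... */
        raid56_bios_in_flight--;   /* decrease when the raid56 io ends */
    }

    static long raid56_inflight(void)
    {
        return raid56_bios_in_flight;
    }
    ```

    Teardown can then spin or sleep until raid56_inflight() drops to
    zero before the device is released.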
     
  • This function reuses the parity scrub code; we just write the
    right parity, or the corrected parity, into the target device
    before the parity scrub ends.

    Signed-off-by: Miao Xie

    Miao Xie
     
  • The implementation is:
    - Read and check all the data with checksums in the same stripe.
    All the data that has a checksum is COW data, and we are sure it
    is not changed even though we don't hold the stripe lock: the
    space of that data can only be reclaimed after the current
    transaction is committed, and only then can the fs reuse it to
    store other data. Since scrub holds the current transaction open,
    that data cannot be reallocated, so it is safe to read and check
    it outside the stripe lock.
    - Lock the stripe.
    - Read out all the data without checksums, and the parity.
    The data without checksums and the parity may be changed if we
    don't hold the stripe lock, so we need to read them in the stripe
    lock context.
    - Check the parity.
    - Recalculate the parity and write it back if the old parity is
    not right.
    - Unlock the stripe.

    If we cannot read out the data, or the data we read is corrupted,
    we try to repair it. If the repair fails, we mark the horizontal
    sub-stripe (the pages at the same horizontal offset) as a
    corrupted sub-stripe, and we skip the parity check and repair of
    that horizontal sub-stripe.

    In order to skip the horizontal sub-stripes that hold no data, we
    introduce a bitmap. If there is some data on a horizontal
    sub-stripe, we set the corresponding bit to 1, and when we check
    and repair the parity, we skip the horizontal sub-stripes whose
    bits are 0.

    Signed-off-by: Miao Xie

    Miao Xie
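    The sub-stripe bitmap described above can be sketched like this
    (hypothetical names, and assuming at most 64 horizontal
    sub-stripes so one machine word suffices):

    ```c
    #include <assert.h>

    #define SUB_STRIPES 64         /* assumption: <= 64 sub-stripes */

    static unsigned long dbitmap;  /* bit n set => sub-stripe n has data */

    static void mark_sub_stripe_has_data(int n)
    {
        dbitmap |= 1UL << n;
    }

    static int sub_stripe_has_data(int n)
    {
        return (int)((dbitmap >> n) & 1UL);
    }

    /* Walk the stripe and only parity-check the sub-stripes whose
     * bit is 1; returns how many sub-stripes were checked. */
    static int check_parity(void)
    {
        int n, checked = 0;

        for (n = 0; n < SUB_STRIPES; n++) {
            if (!sub_stripe_has_data(n))
                continue;     /* no data here: skip check and repair */
            checked++;        /* parity check/repair would run here */
        }
        return checked;
    }
    ```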
     
  • This patch implements the common RAID5/6 data repair function.
    The implementation is similar to scrub on the other RAID profiles
    such as RAID1; the difference is that we don't read the data from
    a mirror, we use the data repair function of RAID5/6 instead.

    Signed-off-by: Miao Xie

    Miao Xie
     

21 Nov, 2014

1 commit

  • This can be reproduced by fstests: btrfs/070

    The scenario is like the following:

    replace worker thread               defrag thread
    ---------------------               -------------
    copy_nocow_pages_worker             btrfs_defrag_file
     copy_nocow_pages_for_inode          ...
                                         btrfs_writepages
      |A| lock_extent_bits                extent_write_cache_pages
      |B| lock_page                        __extent_writepage
       ...                                  writepage_delalloc
                                             find_lock_delalloc_range
                                              |B| lock_extent_bits
                                             find_or_create_page
                                              pagecache_get_page
                                              |A| lock_page

    This leads to an ABBA-pattern deadlock. To fix it:

    o We change it to an AABB pattern, which means we
    @unlock_extent_bits() before we @lock_page(). With that,
    @extent_read_full_page_nolock() no longer runs in a locked
    context, so change it back to @extent_read_full_page() to regain
    the protection.

    o Since we @unlock_extent_bits() earlier, by the time we reach
    @write_page_nocow() the extent may no longer point at the
    physical block we want, so we have to check it before writing.

    Signed-off-by: Gui Hecheng
    Tested-by: David Sterba
    Signed-off-by: Chris Mason

    Gui Hecheng
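    The AABB fix can be illustrated with two pthread mutexes standing
    in for lock_extent_bits() and lock_page(). This is only a shape
    sketch with made-up names, not the btrfs locking code:

    ```c
    #include <pthread.h>

    /* A = extent bits lock, B = page lock (stand-ins, not btrfs types). */
    static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    /* Before the fix, the replace worker held A while taking B while
     * the defrag path held B while taking A: the classic ABBA
     * deadlock. The fix makes the replace side AABB: drop A before
     * taking B. */
    static int copy_nocow_fixed(int *extent_still_valid)
    {
        pthread_mutex_lock(&lock_a);    /* |A| lock_extent_bits          */
        /* remember what the extent pointed at ...                       */
        pthread_mutex_unlock(&lock_a);  /* unlock_extent_bits() early    */

        pthread_mutex_lock(&lock_b);    /* |B| lock_page, with A dropped */
        /* A was dropped, so re-check the extent before writing          */
        *extent_still_valid = 1;        /* pretend the re-check passed   */
        pthread_mutex_unlock(&lock_b);
        return 0;
    }
    ```

    Since neither lock is ever held while waiting for the other, the
    circular wait condition can no longer form.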
     

02 Oct, 2014

1 commit


18 Sep, 2014

7 commits

  • We could not use clean_io_failure in the direct IO path because it
    got the filesystem information from the page structure, but the
    pages in a direct IO bio don't carry that information. So we need
    to modify it and pass all the information it needs via parameters.

    Signed-off-by: Miao Xie
    Signed-off-by: Chris Mason

    Miao Xie
     
  • The original code of repair_io_failure was only used for buffered
    reads: it got some filesystem data from the page structure, which
    is safe for pages in the page cache. But when we do a direct read,
    the pages in the bio are not in the page cache, so there is no
    filesystem data in the page structure. In order to implement
    direct read data repair, we need to modify repair_io_failure and
    pass all the filesystem data it needs via function parameters.

    Signed-off-by: Miao Xie
    Signed-off-by: Chris Mason

    Miao Xie
     
  • total_size is changed when resizing a device, and disk_total_size
    is changed if the resize succeeds. Meanwhile, the on-disk super
    blocks of the previous transaction might not be updated yet.
    Considering the consistency of the metadata in the previous
    transaction, we should use the size from the previous transaction
    to check whether a super block is beyond the boundary of the
    device. Fix it.

    Signed-off-by: Miao Xie
    Signed-off-by: Chris Mason

    Miao Xie
     
  • The super block generation of the seed devices is not the same as
    that of the filesystem which sprouted from them, because we don't
    update the super block on the seed devices when we change the new
    filesystem. So we should not use the generation of the new
    filesystem to check the super block generation on the seed
    devices. Fix it.

    Signed-off-by: Miao Xie
    Reviewed-by: David Sterba
    Signed-off-by: Chris Mason

    Miao Xie
     
  • All the metadata in the seed devices has the fsid of the seed
    filesystem which is on the seed device, so we should check it
    against the fsid of the seed filesystem, not that of the current
    filesystem. Fix it.

    Signed-off-by: Miao Xie
    Reviewed-by: David Sterba
    Signed-off-by: Chris Mason

    Miao Xie
     
  • The nodesize and leafsize were never of different values. Unify
    the usage and make nodesize the only one. Clean up the redundant
    checks and helpers.

    Shaves a few bytes from .text:

    text data bss dec hex filename
    852418 24560 23112 900090 dbbfa btrfs.ko.before
    851074 24584 23112 898770 db6d2 btrfs.ko.after

    Signed-off-by: David Sterba
    Signed-off-by: Chris Mason

    David Sterba
     
  • btrfs_set_key_type and btrfs_key_type are used inconsistently along with
    open coded variants. Other members of btrfs_key are accessed directly
    without any helpers anyway.

    Signed-off-by: David Sterba
    Signed-off-by: Chris Mason

    David Sterba
     

24 Aug, 2014

1 commit

  • This has been reported and discussed for a long time, and the hang
    occurs in both 3.15 and 3.16.

    Btrfs has migrated to the kernel workqueue, but the migration
    introduced this hang.

    Btrfs has a kind of work that is queued in an ordered way, which
    means that its ordered_func() must be processed in FIFO order, so
    it usually looks like:

    normal_work_helper(arg)
        work = container_of(arg, struct btrfs_work, normal_work);

        work->func()                  <-- appended to wq->ordered_list
        ordered_work->ordered_func()
        ordered_work->ordered_free()

    The hang is a rare case: first, when we look for free space, we
    get an uncached block group, then we go to read its free space
    cache inode for the free space information, so it will

    file a readahead request
        btrfs_readpages()
            for a page that is not in the page cache
                __do_readpage()
                    submit_extent_page()
                        btrfs_submit_bio_hook()
                            btrfs_bio_wq_end_io()
                            submit_bio()

    end_workqueue_bio()
        current_work = arg;           <-- arg is A->normal_work
        worker->current_func(arg)
            normal_work_helper(arg)
                A = container_of(arg, struct btrfs_work, normal_work);

                A->func()
                A->ordered_func()
                A->ordered_free()     <-- A's memory is freed here

                then the rest of wq->ordered_list is processed, e.g.
                    ordered_func()
                        submit_compressed_extents()
                            find_free_extent()
                                load_free_space_inode()
                                    ...  <-- waits for the readahead
                    ordered_free()

    Since work A can sit early in wq->ordered_list with more ordered
    works queued after it, such as B's ordered_func(), A's memory may
    already be freed before normal_work_helper() returns, which means
    the kernel workqueue code in worker_thread() still has
    worker->current_work pointing at work A's normal_work, i.e. arg's
    address.

    Meanwhile, a work C allocated after work A is freed is likely to
    have C->normal_work at the same address as A->normal_work (I
    confirmed this with ftrace output, so I'm not just guessing; it
    is rare though).

    When another kthread picks up C->normal_work to process and finds
    that our kthread appears to be processing it (see
    find_worker_executing_work()), it treats work C as a collision
    and skips it, so nobody ever processes work C.

    The result is that our kthread waits forever on work C.

    Besides, there are other cases that can lead to deadlock, but the
    real problem is that all btrfs workqueues share one work->func,
    normal_work_helper. So this patch gives each workqueue its own
    helper function, which is just a wrapper of normal_work_helper.

    With this patch, I no longer hit the above hang.

    Signed-off-by: Liu Bo
    Signed-off-by: Chris Mason

    Liu Bo
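    The shape of the fix, a distinct thin helper per workqueue so
    that a worker's (current_func, arg) pair identifies a work even
    when two works reuse the same address, can be sketched as follows
    (all names here are illustrative, not the actual btrfs symbols):

    ```c
    #include <assert.h>

    typedef void (*work_fn)(void *);

    static int calls[2];   /* how many times each queue's helper ran */

    /* Stand-in for normal_work_helper(); qid exists only in this sketch. */
    static void normal_work_helper(void *arg, int qid)
    {
        (void)arg;
        calls[qid]++;
    }

    /* One thin wrapper per workqueue, as the patch describes. */
    static void endio_helper(void *arg) { normal_work_helper(arg, 0); }
    static void scrub_helper(void *arg) { normal_work_helper(arg, 1); }

    /* A collision check in the spirit of find_worker_executing_work():
     * with per-queue helpers, comparing the function as well as the
     * argument address tells two same-address works apart. */
    static int same_work(work_fn f1, void *a1, work_fn f2, void *a2)
    {
        return f1 == f2 && a1 == a2;
    }

    static int reused_address;  /* pretend works A and C reuse this */
    ```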
     

19 Aug, 2014

1 commit


20 Jun, 2014

1 commit

  • When running scrub together with balance, -ENOENT is sometimes
    returned, because scrub_enumerate_chunks() searches for the
    dev_extent in the *COMMIT_ROOT*, while btrfs_lookup_block_group()
    searches for the block group in *MEMORY*; so if a chunk has been
    removed but not yet committed, -ENOENT is returned.

    However, there is no need to stop scrubbing since other chunks may be
    scrubbed without problem.

    So this patch changes the behavior to skip removed chunks and continue
    to scrub the rest.

    Signed-off-by: Qu Wenruo
    Signed-off-by: Miao Xie
    Signed-off-by: Chris Mason

    Qu Wenruo
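    The skip-and-continue behavior can be sketched with a toy
    enumeration loop (hypothetical names; the lookup failure for
    removed-but-uncommitted chunks is simulated):

    ```c
    #include <assert.h>
    #include <errno.h>

    /* Simulated lookup: pretend even-numbered chunks were removed
     * under our feet, as a balance might do. */
    static int lookup_block_group(int chunk)
    {
        return (chunk % 2 == 0) ? -ENOENT : 0;
    }

    static int scrub_enumerate_chunks(int nr_chunks, int *scrubbed)
    {
        int i, ret;

        *scrubbed = 0;
        for (i = 0; i < nr_chunks; i++) {
            ret = lookup_block_group(i);
            if (ret == -ENOENT)
                continue;      /* chunk vanished: skip, don't abort */
            if (ret < 0)
                return ret;    /* real errors still stop the scrub */
            (*scrubbed)++;     /* scrub this chunk */
        }
        return 0;
    }
    ```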
     

10 Jun, 2014

2 commits


12 Apr, 2014

1 commit

  • Pull second set of btrfs updates from Chris Mason:
    "The most important changes here are from Josef, fixing a btrfs
    regression in 3.14 that can cause corruptions in the extent allocation
    tree when snapshots are in use.

    Josef also fixed some deadlocks in send/recv and other assorted races
    when balance is running"

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs: (23 commits)
    Btrfs: fix compile warnings on on avr32 platform
    btrfs: allow mounting btrfs subvolumes with different ro/rw options
    btrfs: export global block reserve size as space_info
    btrfs: fix crash in remount(thread_pool=) case
    Btrfs: abort the transaction when we don't find our extent ref
    Btrfs: fix EINVAL checks in btrfs_clone
    Btrfs: fix unlock in __start_delalloc_inodes()
    Btrfs: scrub raid56 stripes in the right way
    Btrfs: don't compress for a small write
    Btrfs: more efficient io tree navigation on wait_extent_bit
    Btrfs: send, build path string only once in send_hole
    btrfs: filter invalid arg for btrfs resize
    Btrfs: send, fix data corruption due to incorrect hole detection
    Btrfs: kmalloc() doesn't return an ERR_PTR
    Btrfs: fix snapshot vs nocow writting
    btrfs: Change the expanding write sequence to fix snapshot related bug.
    btrfs: make device scan less noisy
    btrfs: fix lockdep warning with reclaim lock inversion
    Btrfs: hold the commit_root_sem when getting the commit root during send
    Btrfs: remove transaction from send
    ...

    Linus Torvalds
     

11 Apr, 2014

1 commit

  • fs/btrfs/scrub.c: In function 'get_raid56_logic_offset':
    fs/btrfs/scrub.c:2269: warning: comparison of distinct pointer types lacks a cast
    fs/btrfs/scrub.c:2269: warning: right shift count >= width of type
    fs/btrfs/scrub.c:2269: warning: passing argument 1 of '__div64_32' from incompatible pointer type

    Since @rot is an int, we should not use do_div() on it. Fix it.

    Reported-by: kbuild test robot
    Signed-off-by: Wang Shilong
    Signed-off-by: Chris Mason

    Wang Shilong
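    A userspace analog of the do_div() contract (a 64-bit lvalue
    dividend, the remainder returned, the quotient stored back)
    versus plain arithmetic for an int such as @rot. The names are
    illustrative; the kernel macro itself is not reproduced here:

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* do_div()-style division: the dividend must be a 64-bit lvalue;
     * the quotient is written back and the remainder is returned.
     * Passing a plain int, as happened with @rot, is what triggered
     * the warnings on 32-bit builds. */
    static uint32_t div64_by_u32(uint64_t *n, uint32_t base)
    {
        uint32_t rem = (uint32_t)(*n % base);

        *n /= base;
        return rem;
    }

    /* For a 32-bit value such as @rot, ordinary % is the right tool. */
    static int rot_remainder(int rot, int nr_stripes)
    {
        return rot % nr_stripes;
    }
    ```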
     

08 Apr, 2014

1 commit


05 Apr, 2014

1 commit

  • Pull btrfs changes from Chris Mason:
    "This is a pretty long stream of bug fixes and performance fixes.

    Qu Wenruo has replaced the btrfs async threads with regular kernel
    workqueues. We'll keep an eye out for performance differences, but
    it's nice to be using more generic code for this.

    We still have some corruption fixes and other patches coming in for
    the merge window, but this batch is tested and ready to go"

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs: (108 commits)
    Btrfs: fix a crash of clone with inline extents's split
    btrfs: fix uninit variable warning
    Btrfs: take into account total references when doing backref lookup
    Btrfs: part 2, fix incremental send's decision to delay a dir move/rename
    Btrfs: fix incremental send's decision to delay a dir move/rename
    Btrfs: remove unnecessary inode generation lookup in send
    Btrfs: fix race when updating existing ref head
    btrfs: Add trace for btrfs_workqueue alloc/destroy
    Btrfs: less fs tree lock contention when using autodefrag
    Btrfs: return EPERM when deleting a default subvolume
    Btrfs: add missing kfree in btrfs_destroy_workqueue
    Btrfs: cache extent states in defrag code path
    Btrfs: fix deadlock with nested trans handles
    Btrfs: fix possible empty list access when flushing the delalloc inodes
    Btrfs: split the global ordered extents mutex
    Btrfs: don't flush all delalloc inodes when we doesn't get s_umount lock
    Btrfs: reclaim delalloc metadata more aggressively
    Btrfs: remove unnecessary lock in may_commit_transaction()
    Btrfs: remove the unnecessary flush when preparing the pages
    Btrfs: just do dirty page flush for the inode with compression before direct IO
    ...

    Linus Torvalds
     

11 Mar, 2014

4 commits

  • Since the "_struct" suffix was mainly used to distinguish the new
    btrfs_work from the original one, there is no need for the suffix
    now that all btrfs_workers have been changed into btrfs_workqueue.

    This patch also fixes some code whose style had suffered because
    of the overly long "_struct" suffix.

    Signed-off-by: Qu Wenruo
    Tested-by: David Sterba
    Signed-off-by: Josef Bacik

    Qu Wenruo
     
  • Replace the fs_info->scrub_* with the newly created
    btrfs_workqueue.

    Signed-off-by: Qu Wenruo
    Tested-by: David Sterba
    Signed-off-by: Josef Bacik

    Qu Wenruo
     
  • The check of the @scrubs_running == @scrubs_paused condition
    inside wait_event() is not an atomic operation, which means we may
    inc/dec @scrubs_running/@scrubs_paused at any time. Let's wake up
    @scrub_pause_wait as often as we can, so that a committing
    transaction is blocked for less time.

    An example below:

    Thread1                           Thread2
    |->scrub_blocked_if_needed()      |->scrub_pending_trans_workers_inc
    |->increase @scrub_paused         |->increase @scrub_running
                                      |->wake up scrub_pause_wait list
    |->scrub blocked
                                      |->increase @scrub_paused

    Thread3 is committing a transaction and is blocked at
    btrfs_scrub_pause(). So after Thread2 increases @scrub_paused, the
    condition @scrub_paused == @scrub_running is met, but the
    transaction stays blocked until another call wakes up
    @scrub_pause_wait.

    Signed-off-by: Wang Shilong
    Signed-off-by: Miao Xie
    Signed-off-by: Josef Bacik

    Wang Shilong
     
  • commit cb7ab02156e4 causes the following deadlock, found by
    xfstests btrfs/011:

    Thread1 is committing a transaction and is blocked at
    btrfs_scrub_pause().

    Thread2 is calling btrfs_file_aio_write(), which holds the
    inode's @i_mutex and wants to commit a transaction (blocked
    because Thread1 is committing a transaction).

    Thread3 is the copy_nocow_page worker, which will also try to
    hold the inode's @i_mutex, so Thread3 waits for Thread2 to
    finish.

    Thread4 is waiting for pending workers to finish, which means it
    waits for Thread3 to finish. So the problem is like this:

    Thread1--->Thread4--->Thread3--->Thread2---->Thread1

    Deadlock happens! We fix it by letting Thread1 go first, which
    means we won't block the transaction commit while we are waiting
    for pending workers to finish.

    Reported-by: Qu Wenruo
    Signed-off-by: Wang Shilong
    Signed-off-by: Josef Bacik

    Wang Shilong
     

31 Jan, 2014

1 commit

  • Pull btrfs updates from Chris Mason:
    "This is a pretty big pull, and most of these changes have been
    floating in btrfs-next for a long time. Filipe's properties work is a
    cool building block for inheriting attributes like compression down on
    a per inode basis.

    Jeff Mahoney kicked in code to export filesystem info into sysfs.

    Otherwise, lots of performance improvements, cleanups and bug fixes.

    Looks like there are still a few other small pending incrementals, but
    I wanted to get the bulk of this in first"

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs: (149 commits)
    Btrfs: fix spin_unlock in check_ref_cleanup
    Btrfs: setup inode location during btrfs_init_inode_locked
    Btrfs: don't use ram_bytes for uncompressed inline items
    Btrfs: fix btrfs_search_slot_for_read backwards iteration
    Btrfs: do not export ulist functions
    Btrfs: rework ulist with list+rb_tree
    Btrfs: fix memory leaks on walking backrefs failure
    Btrfs: fix send file hole detection leading to data corruption
    Btrfs: add a reschedule point in btrfs_find_all_roots()
    Btrfs: make send's file extent item search more efficient
    Btrfs: fix to catch all errors when resolving indirect ref
    Btrfs: fix protection between walking backrefs and root deletion
    btrfs: fix warning while merging two adjacent extents
    Btrfs: fix infinite path build loops in incremental send
    btrfs: undo sysfs when open_ctree() fails
    Btrfs: fix snprintf usage by send's gen_unique_name
    btrfs: fix defrag 32-bit integer overflow
    btrfs: sysfs: list the NO_HOLES feature
    btrfs: sysfs: don't show reserved incompat feature
    btrfs: call permission checks earlier in ioctls and return EPERM
    ...

    Linus Torvalds
     

29 Jan, 2014

6 commits

  • There is a bug when using btrfs_previous_item() to search for a
    metadata extent item: btrfs_previous_item() requires the type to
    match, but since skinny metadata was introduced by Josef, the two
    types may be mixed, so just using btrfs_previous_item() does not
    work right.

    To keep btrfs_previous_item() behaving like a normal tree search,
    I introduce another function, btrfs_previous_extent_item().

    Signed-off-by: Wang Shilong
    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Wang Shilong
     
  • Check first whether we support skinny metadata, and fix the
    search to use the right type.

    Signed-off-by: Wang Shilong
    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Wang Shilong
     
  • Convert all applicable cases of printk and pr_* to the btrfs_* macros.

    Fix all uses of the BTRFS prefix.

    Signed-off-by: Frank Holton
    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Frank Holton
     
  • Just wrap the duplicated code into one function,
    scrub_blocked_if_needed().

    This makes a change: we now wait for (@workers_pending == 0)
    before we can wake up the committing transaction
    (atomic_inc(@scrub_paused)), so we must be careful not to
    deadlock here.

    Thread 1                            Thread 2
    |->btrfs_commit_transaction()
      |->set trans type(COMMIT_DOING)
      |->btrfs_scrub_paused()(blocked)  |->join_transaction(blocked)

    Move btrfs_scrub_paused() before setting the trans type, which
    means we can still join a transaction while the committing
    transaction is blocked.

    Signed-off-by: Wang Shilong
    Suggested-by: Miao Xie
    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Wang Shilong
     
  • We hit a race condition when scrubbing superblocks; the story is:

    When committing a transaction, we update @last_trans_committed
    after writing the superblocks. If a scrubber starts after the
    superblocks are written and before @last_trans_committed is
    updated, a generation mismatch happens!

    We fix this by checking @scrub_pause_req: we won't start a
    scrubber until the transaction commit is finished (after
    btrfs_scrub_continue() has finished).

    Reported-by: Sebastian Ochmann
    Signed-off-by: Wang Shilong
    Reviewed-by: Miao Xie
    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Wang Shilong
     
  • Signed-off-by: Valentina Giusti
    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Valentina Giusti
     

06 Dec, 2013

1 commit

  • Pull block layer fixes from Jens Axboe:
    "A small collection of fixes for the current series. It contains:

    - A fix for a use-after-free of a request in blk-mq. From Ming Lei

    - A fix for a blk-mq bug that could attempt to dereference a NULL rq
    if allocation failed

    - Two xen-blkfront small fixes

    - Cleanup of submit_bio_wait() type uses in the kernel, unifying
    that. From Kent

    - A fix for 32-bit blkg_rwstat reading. I apologize for this one
    looking mangled in the shortlog, it's entirely my fault for missing
    an empty line between the description and body of the text"

    * 'for-linus' of git://git.kernel.dk/linux-block:
    blk-mq: fix use-after-free of request
    blk-mq: fix dereference of rq->mq_ctx if allocation fails
    block: xen-blkfront: Fix possible NULL ptr dereference
    xen-blkfront: Silence pfn maybe-uninitialized warning
    block: submit_bio_wait() conversions
    Update of blkg_stat and blkg_rwstat may happen in bh context

    Linus Torvalds
     

25 Nov, 2013

1 commit


24 Nov, 2013

2 commits

  • Immutable biovecs are going to require an explicit iterator. To
    implement immutable bvecs, a later patch is going to add a bi_bvec_done
    member to this struct; for now, this patch effectively just renames
    things.

    Signed-off-by: Kent Overstreet
    Cc: Jens Axboe
    Cc: Geert Uytterhoeven
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: "Ed L. Cashin"
    Cc: Nick Piggin
    Cc: Lars Ellenberg
    Cc: Jiri Kosina
    Cc: Matthew Wilcox
    Cc: Geoff Levand
    Cc: Yehuda Sadeh
    Cc: Sage Weil
    Cc: Alex Elder
    Cc: ceph-devel@vger.kernel.org
    Cc: Joshua Morris
    Cc: Philip Kelleher
    Cc: Rusty Russell
    Cc: "Michael S. Tsirkin"
    Cc: Konrad Rzeszutek Wilk
    Cc: Jeremy Fitzhardinge
    Cc: Neil Brown
    Cc: Alasdair Kergon
    Cc: Mike Snitzer
    Cc: dm-devel@redhat.com
    Cc: Martin Schwidefsky
    Cc: Heiko Carstens
    Cc: linux390@de.ibm.com
    Cc: Boaz Harrosh
    Cc: Benny Halevy
    Cc: "James E.J. Bottomley"
    Cc: Greg Kroah-Hartman
    Cc: "Nicholas A. Bellinger"
    Cc: Alexander Viro
    Cc: Chris Mason
    Cc: "Theodore Ts'o"
    Cc: Andreas Dilger
    Cc: Jaegeuk Kim
    Cc: Steven Whitehouse
    Cc: Dave Kleikamp
    Cc: Joern Engel
    Cc: Prasad Joshi
    Cc: Trond Myklebust
    Cc: KONISHI Ryusuke
    Cc: Mark Fasheh
    Cc: Joel Becker
    Cc: Ben Myers
    Cc: xfs@oss.sgi.com
    Cc: Steven Rostedt
    Cc: Frederic Weisbecker
    Cc: Ingo Molnar
    Cc: Len Brown
    Cc: Pavel Machek
    Cc: "Rafael J. Wysocki"
    Cc: Herton Ronaldo Krzesinski
    Cc: Ben Hutchings
    Cc: Andrew Morton
    Cc: Guo Chao
    Cc: Tejun Heo
    Cc: Asai Thambi S P
    Cc: Selvan Mani
    Cc: Sam Bradshaw
    Cc: Wei Yongjun
    Cc: "Roger Pau Monné"
    Cc: Jan Beulich
    Cc: Stefano Stabellini
    Cc: Ian Campbell
    Cc: Sebastian Ott
    Cc: Christian Borntraeger
    Cc: Minchan Kim
    Cc: Jiang Liu
    Cc: Nitin Gupta
    Cc: Jerome Marchand
    Cc: Joe Perches
    Cc: Peng Tao
    Cc: Andy Adamson
    Cc: fanchaoting
    Cc: Jie Liu
    Cc: Sunil Mushran
    Cc: "Martin K. Petersen"
    Cc: Namjae Jeon
    Cc: Pankaj Kumar
    Cc: Dan Magenheimer
    Cc: Mel Gorman

    Kent Overstreet
     
  • It was being open coded in a few places.

    Signed-off-by: Kent Overstreet
    Cc: Jens Axboe
    Cc: Joern Engel
    Cc: Prasad Joshi
    Cc: Neil Brown
    Cc: Chris Mason
    Acked-by: NeilBrown

    Kent Overstreet
     

21 Nov, 2013

1 commit

  • Currently if we discover an error when scrubbing in ro mode we a)
    blindly increment the uncorrectable_errors counter, and b) spam the
    dmesg with the 'unable to fixup (regular) error at ...' message, even
    though a) we haven't tried to determine if the error is correctable or
    not, and b) we haven't tried to fixup anything. Fix this.

    Cc: Stefan Behrens
    Signed-off-by: Ilya Dryomov
    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Ilya Dryomov