22 Nov, 2014

1 commit

  • Liu Bo pointed out that my previous fix would lose the generation update in the
    scenario I described. It is actually much worse than that: we could lose the
    entire extent if we lose power right after the transaction commits. Consider
    the following sequence:

    write extent 0-4k
    log extent in log tree
    commit transaction
    < power fail happens here
    ordered extent completes

    We would lose the 0-4k extent because the ordered extent hasn't updated the
    actual fs tree yet, and the transaction commit will reset the log so it isn't
    replayed. If we lose power before the transaction commit we are safe,
    otherwise we are not.

    Fix this by keeping track of all extents we logged in this transaction. Then,
    when we go to commit the transaction, make sure we wait for all of those
    ordered extents to complete before proceeding. This will make sure that if we
    lose power after the transaction commit we still have our data. This also
    fixes the problem of the improperly updated extent generation (a sketch of
    the idea follows this entry). Thanks,

    cc: stable@vger.kernel.org
    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Josef Bacik
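
    A minimal user-space C sketch of the idea above: the transaction keeps a
    list of the ordered extents that were logged, and the commit path waits for
    all of them before it finishes. The names (ordered_extent, logged_list,
    track_logged_extent) are illustrative stand-ins, not the actual btrfs
    structures.

    #include <stdbool.h>

    struct ordered_extent {
        unsigned long long start, len;
        bool completed;                /* set when the ordered I/O finishes */
        struct ordered_extent *next;   /* link in the transaction's list    */
    };

    struct transaction {
        struct ordered_extent *logged_list;  /* extents logged in this trans */
    };

    /* Called by the tree-log code for every extent it logs. */
    static void track_logged_extent(struct transaction *trans,
                                    struct ordered_extent *oe)
    {
        oe->next = trans->logged_list;
        trans->logged_list = oe;
    }

    /* The commit must not finish until every logged ordered extent has
     * updated the fs tree; otherwise a power failure right after the
     * commit loses the extent, since the commit also resets the log. */
    static void wait_logged_extents(struct transaction *trans)
    {
        for (struct ordered_extent *oe = trans->logged_list; oe; oe = oe->next)
            while (!oe->completed)
                ;  /* the real code sleeps on a completion, not a spin */
    }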
     

21 Nov, 2014

2 commits

  • Right now the only caller of find_first_extent_bit() that is interested
    in caching extent states (transaction or log commit) never gets an extent
    state cached. This is because find_first_extent_bit() only caches states
    that have at least one of the flags EXTENT_IOBITS or EXTENT_BOUNDARY, and
    the transaction/log commit caller always passes a tree that doesn't ever
    have extent states with any of those flags (they can only have one of the
    following flags: EXTENT_DIRTY, EXTENT_NEW or EXTENT_NEED_WAIT).

    This change, together with the following one in the patch series (titled
    "Btrfs: avoid returning -ENOMEM in convert_extent_bit() too early"), will
    help significantly reduce the chances of calls to convert_extent_bit()
    failing with -ENOMEM when called from the transaction/log commit code.

    Signed-off-by: Filipe Manana
    Signed-off-by: Chris Mason

    Filipe Manana
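
    A hedged sketch of what the change amounts to: let find_first_extent_bit()
    cache the state it found even when none of the "special" flags are set.
    The bit values and helper names here are placeholders, not the real btrfs
    definitions.

    struct extent_state { unsigned state; };      /* illustrative */

    #define EXTENT_IOBITS   (1u << 0)             /* placeholder bits */
    #define EXTENT_BOUNDARY (1u << 1)

    /* Old behaviour: only cache "interesting" states, which the
     * transaction/log commit trees never carry, so they got nothing. */
    static void cache_state_old(struct extent_state *st,
                                struct extent_state **cached)
    {
        if (cached && !*cached &&
            (st->state & (EXTENT_IOBITS | EXTENT_BOUNDARY)))
            *cached = st;
    }

    /* Sketched new behaviour: cache whatever was found whenever the
     * caller asked for a cached state at all. */
    static void cache_state_new(struct extent_state *st,
                                struct extent_state **cached)
    {
        if (cached && !*cached)
            *cached = st;
    }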
     
  • When committing a transaction or a log, we look for btree extents that
    need to be durably persisted by searching for ranges in a io tree that
    have some bits set (EXTENT_DIRTY or EXTENT_NEW). We then attempt to clear
    those bits and set the EXTENT_NEED_WAIT bit, with calls to the function
    convert_extent_bit, and then start writeback for the extents.

    That function however can return an error (at the moment only -ENOMEM
    is possible, especially when it does GFP_ATOMIC allocation requests
    through alloc_extent_state_atomic) - that means the ranges didn't get
    the EXTENT_NEED_WAIT bit set (or at least not for the whole range),
    which in turn means a call to btrfs_wait_marked_extents() won't find
    those ranges for which we started writeback, causing a transaction
    commit or a log commit to persist a new superblock without waiting
    for the writeback of extents in that range to finish first.

    Therefore if a crash happens after persisting the new superblock and
    before writeback finishes, we have a superblock pointing to roots that
    weren't fully persisted or roots that point to nodes or leafs that weren't
    fully persisted, causing all sorts of unexpected/bad behaviour as we end up
    reading garbage from disk or the content of some node/leaf from a past
    generation that got cowed or deleted and is no longer valid (for this latter
    case we end up getting error messages like "parent transid verify failed on
    X wanted Y found Z" when reading btree nodes/leafs from disk).

    Signed-off-by: Filipe Manana
    Signed-off-by: Chris Mason

    Filipe Manana
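
    A small self-contained C model of the failure mode described above; all
    names and bit values are illustrative. If the DIRTY-to-NEED_WAIT
    conversion fails with -ENOMEM, the wait pass silently skips the range and
    the superblock can go out before the btree writeback is durable.

    #define EXTENT_DIRTY     (1u << 0)    /* placeholder bit values */
    #define EXTENT_NEED_WAIT (1u << 1)

    static int  try_alloc(void)          { return 0; }  /* may be -ENOMEM */
    static void start_writeback(void)    { }
    static void wait_for_writeback(void) { }

    static int mark_extents(unsigned *bits)
    {
        int ret = try_alloc();     /* convert_extent_bit() can fail here */
        if (ret)
            return ret;            /* must abort the commit, not proceed */
        *bits = (*bits & ~EXTENT_DIRTY) | EXTENT_NEED_WAIT;
        start_writeback();
        return 0;
    }

    static void wait_marked_extents(unsigned *bits)
    {
        if (*bits & EXTENT_NEED_WAIT) {  /* never set if -ENOMEM above */
            wait_for_writeback();
            *bits &= ~EXTENT_NEED_WAIT;
        }
    }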
     

12 Nov, 2014

3 commits

  • In some contexts, like in sysfs handlers, we don't want to trigger a
    transaction commit. It's a heavy operation, and we don't know what external
    locks may be taken. Instead, make it possible to finish the operation
    through the sync syscall or the SYNC_FS ioctl.

    Signed-off-by: David Sterba

    David Sterba
     
  • The pending mount option(s) currently share a namespace and bits with the
    normal options, and the existing one (for inode_cache) is unset
    unconditionally at each transaction commit.

    Introduce a separate namespace for pending changes and enhance the
    descriptions of the intended changes to use separate bits for each
    action.

    Signed-off-by: David Sterba

    David Sterba
     
  • There are some actions that modify global filesystem state but cannot be
    performed at the time of the request; they are performed later, at
    transaction commit time, when the filesystem is in a known state.

    Examples are enabling new incompat features on-the-fly or issuing a
    transaction commit from unsafe contexts (sysfs handlers). A sketch of the
    mechanism follows this entry.

    Signed-off-by: David Sterba

    David Sterba
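
    A minimal sketch of the pending-changes mechanism described in the three
    entries above, as a user-space C model; the bit names and the option-bit
    placeholder are assumptions, not the kernel definitions.

    #include <stdatomic.h>

    #define PENDING_SET_INODE_MAP_CACHE   (1ul << 0)   /* illustrative */
    #define PENDING_CLEAR_INODE_MAP_CACHE (1ul << 1)

    static atomic_ulong pending_changes;   /* models a field in fs_info */

    /* Safe from any context (e.g. a sysfs handler): only record the wish. */
    static void set_pending(unsigned long bit)
    {
        atomic_fetch_or(&pending_changes, bit);
    }

    /* Called from the transaction commit, when the fs is in a known
     * state: consume all pending bits and apply them at once. */
    static void apply_pending_changes(unsigned long *mount_opt)
    {
        unsigned long p = atomic_exchange(&pending_changes, 0);

        if (p & PENDING_SET_INODE_MAP_CACHE)
            *mount_opt |= 1ul << 8;        /* placeholder option bit */
        if (p & PENDING_CLEAR_INODE_MAP_CACHE)
            *mount_opt &= ~(1ul << 8);
    }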
     

04 Oct, 2014

2 commits

  • We check whether transid is already committed via last_trans_committed and
    then search through trans_list for pending transactions. If
    last_trans_committed is updated by btrfs_commit_transaction after we check
    it (there is no locking), we will fail to find the committed transaction
    and return EINVAL to the caller. This has been observed occasionally by
    ceph-osd (which uses this ioctl heavily).

    Fix by rechecking whether the provided transid has been committed after the
    search fails, and returning success in that case (a sketch follows this
    entry).

    Signed-off-by: Chris Mason

    Sage Weil
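
    A sketch of the race and the recheck, modelled in plain C; the helper
    names are hypothetical and locking is elided.

    #include <errno.h>

    static unsigned long long last_trans_committed;  /* bumped by commits */

    static int find_pending(unsigned long long t)    { (void)t; return 0; }
    static int wait_for_commit(unsigned long long t) { (void)t; return 0; }

    static int wait_sync_ioctl(unsigned long long transid)
    {
        if (transid <= last_trans_committed)
            return 0;                       /* already committed */

        if (find_pending(transid))          /* scan trans_list */
            return wait_for_commit(transid);

        /* The commit may have finished between the two checks above;
         * recheck before declaring the transid unknown, otherwise a
         * caller like ceph-osd gets a spurious -EINVAL. */
        if (transid <= last_trans_committed)
            return 0;

        return -EINVAL;
    }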
     
  • While we have a transaction ongoing, the VM might decide at any time
    to call btree_inode->i_mapping->a_ops->writepages(), which will start
    writeback of dirty pages belonging to btree nodes/leafs. This call
    might return an error or the writeback might finish with an error
    before we attempt to commit the running transaction. If this happens,
    we might have no way of knowing that such an error happened when we are
    committing the transaction - because the pages might no longer be
    marked dirty nor tagged for writeback (if a subsequent modification
    to the extent buffer didn't happen before the transaction commit) which
    makes filemap_fdata[write|wait]_range unable to find such pages (even
    if they're marked with SetPageError).
    So if this happens we must abort the transaction, otherwise we commit
    a super block with btree roots that point to btree nodes/leafs whose
    content on disk is invalid - either garbage or the content of some
    node/leaf from a past generation that got cowed or deleted and is no
    longer valid (for this latter case we end up getting error messages like
    "parent transid verify failed on 10826481664 wanted 25748 found 29562"
    when reading btree nodes/leafs from disk).

    Note that setting and checking AS_EIO/AS_ENOSPC in the btree inode's
    i_mapping would not be enough because we need to distinguish between
    log tree extents (not fatal) vs non-log tree extents (fatal) and
    because the next call to filemap_fdatawait_range() will catch and clear
    such errors in the mapping - and that call might be from a log sync and
    not from a transaction commit, which means we would not know about the
    error at transaction commit time. Also, checking for the eb flag
    EXTENT_BUFFER_IOERR at transaction commit time isn't done and would
    not be completely reliable, as the eb might be removed from memory and
    read back when trying to get it, which clears that flag right before
    reading the eb's pages from disk, making us not know about the previous
    write error.

    Using the new 3 flags for the btree inode also covers the case where
    writepages() returns success and writeback for all dirty pages finishes
    with errors before filemap_fdatawait_range() is called: because we were
    not using AS_EIO/AS_ENOSPC, filemap_fdatawait_range() would return
    success, as it could not know that writeback errors happened (the pages
    were no longer tagged for writeback).

    Signed-off-by: Filipe Manana
    Signed-off-by: Chris Mason

    Filipe Manana
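
    A compact model of the fix: three sticky error bits on the btree inode,
    one fatal bit for regular btree writes and one per log-tree half, which
    nothing clears behind our back (unlike AS_EIO on the mapping). The names
    are illustrative.

    #include <stdatomic.h>
    #include <errno.h>

    enum { BTREE_ERR, BTREE_LOG1_ERR, BTREE_LOG2_ERR };

    static atomic_int btree_err[3];

    static void note_write_error(int which)       /* from the end-io path */
    {
        atomic_store(&btree_err[which], 1);
    }

    static int check_errors_at_commit(void)       /* from the commit path */
    {
        if (atomic_exchange(&btree_err[BTREE_ERR], 0))
            return -EIO;   /* abort: don't point the superblock at this */
        return 0;
    }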
     

18 Sep, 2014

4 commits

  • device->bytes_used will be changed when allocating a new chunk, and
    disk_total_size will be changed if resizing is successful.
    Meanwhile, the on-disk super blocks of the previous transaction
    might not be updated. Considering the consistency of the metadata
    in the previous transaction, we should use the size in the previous
    transaction to check if the super block is beyond the boundary
    of the device.

    Though it is not a big problem because we don't use it now, it is
    better to make it consistent with the rest of the metadata, as we
    may use it in the future.

    Signed-off-by: Miao Xie
    Signed-off-by: Chris Mason

    Miao Xie
     
  • total_size will be changed when resizing a device, and disk_total_size
    will be changed if resizing is successful. Meanwhile, the on-disk super
    blocks of the previous transaction might not be updated. Considering
    the consistency of the metadata in the previous transaction, we should
    use the size in the previous transaction to check if the super block is
    beyond the boundary of the device. Fix it.

    Signed-off-by: Miao Xie
    Signed-off-by: Chris Mason

    Miao Xie
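
    Both commits above reduce to the same rule: the superblock boundary check
    must use the device sizes as of the previous (committed) transaction. A
    hedged sketch, with illustrative field names:

    #include <stdbool.h>

    #define SUPER_INFO_SIZE 4096ull          /* size of one super copy */

    struct device_sizes {
        unsigned long long total_bytes;          /* may change mid-trans  */
        unsigned long long commit_total_bytes;   /* as of the last commit */
    };

    /* Check against what the committed metadata considers on-device;
     * total_bytes may already reflect an uncommitted resize. */
    static bool super_copy_in_bounds(const struct device_sizes *dev,
                                     unsigned long long bytenr)
    {
        return bytenr + SUPER_INFO_SIZE <= dev->commit_total_bytes;
    }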
     
  • The nodesize and leafsize were never of different values. Unify the
    usage and make nodesize the only one used. Clean up the redundant checks
    and helpers.

    Shaves a few bytes from .text:

    text data bss dec hex filename
    852418 24560 23112 900090 dbbfa btrfs.ko.before
    851074 24584 23112 898770 db6d2 btrfs.ko.after

    Signed-off-by: David Sterba
    Signed-off-by: Chris Mason

    David Sterba
     
  • The comment applied when there was a BUG_ON.

    Signed-off-by: David Sterba
    Signed-off-by: Chris Mason

    David Sterba
     

15 Aug, 2014

1 commit

  • Truncates and renames are often used to replace old versions of a file
    with new versions. Applications often expect this to be an atomic
    replacement, even if they haven't done anything to make sure the new
    version is fully on disk.

    Btrfs has strict flushing in place to make sure that renaming over an
    old file with a new file will fully flush out the new file before
    allowing the transaction commit with the rename to complete.

    This ordering means the commit code needs to be able to lock file pages,
    and there are a few paths in the filesystem where we will try to end a
    transaction with the page lock held. It's rare, but these things can
    deadlock.

    This patch removes the ordered flushes and switches to a best effort
    filemap_flush like ext4 uses. It's not perfect, but it should fix the
    deadlocks.

    Signed-off-by: Chris Mason

    Chris Mason
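
    A rough user-space analogue of the new behaviour: start writeback of the
    new file without waiting for it, then rename. This only models the "best
    effort" idea; the kernel change is in the btrfs rename/truncate paths,
    not in any syscall sequence an application must issue.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <unistd.h>

    static int replace_file(const char *tmp, const char *dst)
    {
        int fd = open(tmp, O_WRONLY);
        if (fd < 0)
            return -1;

        /* Kick off writeback of dirty pages without waiting for it,
         * much like a filemap_flush() instead of a full flush-and-wait. */
        sync_file_range(fd, 0, 0, SYNC_FILE_RANGE_WRITE);
        close(fd);

        return rename(tmp, dst);
    }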
     

03 Jul, 2014

2 commits

  • Often when starting a transaction we commit the currently running transaction,
    which can end up writing block group caches when the current process has its
    journal_info set to NULL (and not to a transaction). This makes our assertion
    at btrfs_check_data_free_space() (current_journal != NULL) fail, resulting
    in a crash/hang. Therefore fix it by setting journal_info (a sketch follows
    the traces below).

    Two different traces of this issue follow below.

    1)

    [51502.241936] BTRFS: assertion failed: current->journal_info, file: fs/btrfs/extent-tree.c, line: 3670
    [51502.242213] ------------[ cut here ]------------
    [51502.242493] kernel BUG at fs/btrfs/ctree.h:3964!
    [51502.242669] invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC
    (...)
    [51502.244010] Call Trace:
    [51502.244010] [] btrfs_check_data_free_space+0x395/0x3a0 [btrfs]
    [51502.244010] [] btrfs_write_dirty_block_groups+0x4ac/0x640 [btrfs]
    [51502.244010] [] commit_cowonly_roots+0x164/0x226 [btrfs]
    [51502.244010] [] btrfs_commit_transaction+0x4ed/0xab0 [btrfs]
    [51502.244010] [] ? _raw_spin_unlock+0x2b/0x40
    [51502.244010] [] start_transaction+0x459/0x620 [btrfs]
    [51502.244010] [] btrfs_start_transaction+0x1b/0x20 [btrfs]
    [51502.244010] [] __unlink_start_trans+0x31/0xe0 [btrfs]
    [51502.244010] [] btrfs_unlink+0x37/0xc0 [btrfs]
    [51502.244010] [] ? do_unlinkat+0x114/0x2a0
    [51502.244010] [] vfs_unlink+0xcc/0x150
    [51502.244010] [] do_unlinkat+0x260/0x2a0
    [51502.244010] [] ? filp_close+0x64/0x90
    [51502.244010] [] ? trace_hardirqs_on_caller+0x16/0x1e0
    [51502.244010] [] ? trace_hardirqs_on_thunk+0x3a/0x3f
    [51502.244010] [] SyS_unlinkat+0x1b/0x40
    [51502.244010] [] system_call_fastpath+0x16/0x1b
    [51502.244010] Code: 0b 55 48 89 e5 0f 0b 55 48 89 e5 0f 0b 55 89 f1 48 c7 c2 71 13 36 a0 48 89 fe 31 c0 48 c7 c7 b8 43 36 a0 48 89 e5 e8 5d b0 32 e1 0b 0f 1f 44 00 00 55 b9 11 00 00 00 48 89 e5 41 55 49 89 f5
    [51502.244010] RIP [] assfail.constprop.88+0x1e/0x20 [btrfs]

    2)

    [25405.097230] BTRFS: assertion failed: current->journal_info, file: fs/btrfs/extent-tree.c, line: 3670
    [25405.097488] ------------[ cut here ]------------
    [25405.097767] kernel BUG at fs/btrfs/ctree.h:3964!
    [25405.097940] invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC
    (...)
    [25405.100008] Call Trace:
    [25405.100008] [] btrfs_check_data_free_space+0x395/0x3a0 [btrfs]
    [25405.100008] [] btrfs_write_dirty_block_groups+0x4ac/0x640 [btrfs]
    [25405.100008] [] commit_cowonly_roots+0x164/0x226 [btrfs]
    [25405.100008] [] btrfs_commit_transaction+0x4ed/0xab0 [btrfs]
    [25405.100008] [] ? bit_waitqueue+0xc0/0xc0
    [25405.100008] [] start_transaction+0x459/0x620 [btrfs]
    [25405.100008] [] btrfs_start_transaction+0x1b/0x20 [btrfs]
    [25405.100008] [] btrfs_create+0x47/0x210 [btrfs]
    [25405.100008] [] ? btrfs_permission+0x3c/0x80 [btrfs]
    [25405.100008] [] vfs_create+0x9b/0x130
    [25405.100008] [] do_last+0x849/0xe20
    [25405.100008] [] ? link_path_walk+0x79/0x820
    [25405.100008] [] path_openat+0xc5/0x690
    [25405.100008] [] ? trace_hardirqs_on+0xd/0x10
    [25405.100008] [] ? __alloc_fd+0x32/0x1d0
    [25405.100008] [] do_filp_open+0x43/0xa0
    [25405.100008] [] ? __alloc_fd+0x151/0x1d0
    [25405.100008] [] do_sys_open+0x13c/0x230
    [25405.100008] [] ? trace_hardirqs_on_caller+0x16/0x1e0
    [25405.100008] [] SyS_open+0x22/0x30
    [25405.100008] [] system_call_fastpath+0x16/0x1b
    [25405.100008] Code: 0b 55 48 89 e5 0f 0b 55 48 89 e5 0f 0b 55 89 f1 48 c7 c2 51 13 36 a0 48 89 fe 31 c0 48 c7 c7 d0 43 36 a0 48 89 e5 e8 6d b5 32 e1 0b 0f 1f 44 00 00 55 b9 11 00 00 00 48 89 e5 41 55 49 89 f5
    [25405.100008] RIP [] assfail.constprop.88+0x1e/0x20 [btrfs]

    Signed-off-by: Filipe David Borba Manana
    Signed-off-by: Chris Mason

    Filipe Manana
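
    A minimal model of the fix; the structures stand in for the kernel's task
    and transaction handle. The committing code temporarily points
    journal_info at the transaction so the assertion in
    btrfs_check_data_free_space() holds while block group caches are written.

    struct trans_handle { int unused; };
    struct task { void *journal_info; };

    static struct task current_task;        /* models current->journal_info */

    static void commit_transaction(struct trans_handle *trans)
    {
        void *saved = current_task.journal_info;

        current_task.journal_info = trans;  /* satisfy the assertion */
        /* ... write block group caches, commit cowonly roots ... */
        current_task.journal_info = saved;  /* restore on the way out */
    }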
     
  • Commit fcebe4562dec83b3f8d3088d77584727b09130b2 (Btrfs: rework qgroup
    accounting) removed the qgroup accounting after delayed refs.

    Signed-off-by: David Sterba

    David Sterba
     

29 Jun, 2014

1 commit

  • When starting a transaction just assert that current->journal_info
    doesn't contain a send transaction stub, since send isn't supposed
    to start transactions and when it finishes (either successfully or
    not) it's supposed to set current->journal_info to NULL.

    This is motivated by the change titled:

    Btrfs: fix crash when starting transaction

    Signed-off-by: Filipe David Borba Manana
    Signed-off-by: Chris Mason

    Filipe Manana
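
    The assertion itself is tiny; a sketch follows (treat the sentinel value
    as an assumption, a non-NULL marker that is never dereferenced):

    #include <assert.h>

    #define SEND_TRANS_STUB ((void *)1)     /* sentinel, never dereferenced */

    static void *journal_info;              /* models current->journal_info */

    static void start_transaction(void)
    {
        /* Send never starts transactions and must clear its stub when it
         * finishes, successfully or not -- catch violations loudly. */
        assert(journal_info != SEND_TRANS_STUB);
    }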
     

10 Jun, 2014

4 commits

  • Delayed extent operations are triggered during transaction commits.
    The goal is to queue up a healthy batch of changes to the extent
    allocation tree and run through them in bulk.

    This farms them off to async helper threads. The goal is to have the
    bulk of the delayed operations being done in the background, but this is
    also important to limit our stack footprint.

    Signed-off-by: Chris Mason

    Chris Mason
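
    A user-space model of farming the work off: in the kernel this is a
    btrfs_work item on a workqueue rather than a raw thread, so the thread
    usage below is purely illustrative.

    #include <pthread.h>

    struct async_delayed_refs {
        long count;                 /* how many refs to run in this batch */
    };

    static void *run_delayed_refs(void *arg)
    {
        struct async_delayed_refs *work = arg;
        /* ... process work->count queued extent tree updates ... */
        (void)work;
        return NULL;
    }

    static void async_run_delayed_refs(struct async_delayed_refs *work)
    {
        pthread_t helper;

        if (pthread_create(&helper, NULL, run_delayed_refs, work) == 0)
            pthread_detach(&helper);   /* fire and forget, off our stack */
    }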
     
  • Currently qgroups account for space by intercepting delayed ref updates to fs
    trees. It does this by adding sequence numbers to delayed ref updates so that
    it can figure out how the tree looked before the update, so we can adjust the
    counters properly. The problem with this is that it does not allow delayed refs
    to be merged, so if, say, you are defragging an extent with 5k snapshots pointing
    to it, we will thrash the delayed ref lock because we need to go back and
    manually merge these things together. Instead we want to process quota changes
    when we know they are going to happen, like when we first allocate an extent,
    free a reference for an extent, add new references, etc. This patch
    accomplishes this by only adding qgroup operations for real ref changes. We
    only modify the sequence number when we need to look up roots for bytenrs; this
    reduces the amount of churn on the sequence number and allows us to merge
    delayed refs as we add them most of the time. This patch encompasses a bunch of
    architectural changes:

    1) qgroup ref operations: instead of tracking qgroup operations through the
    delayed refs we simply add new ref operations whenever we notice that we need to
    when we've modified the refs themselves.

    2) tree mod seq: we no longer have this separation of major/minor counters.
    This makes the sequence number stuff much more sane and we can remove some
    locking that was needed to protect the counter.

    3) delayed ref seq: we now read the tree mod seq number and use that as our
    sequence. This means each new delayed ref doesn't have its own unique sequence
    number; rather, whenever we go to look up backrefs we increment the sequence
    number so we can make sure to keep any new operations from screwing up our
    world view at that given point. This allows us to merge delayed refs during
    runtime.

    With all of these changes the delayed ref stuff is a little saner and the qgroup
    accounting stuff no longer goes negative in some cases like it was before.
    Thanks,

    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Josef Bacik
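
    A sketch of architectural change 1): record a qgroup operation only when a
    real reference change happens, on a per-transaction list, instead of
    piggy-backing on every delayed ref. All names and the operation set are
    illustrative.

    enum qgroup_op { QG_ALLOC, QG_FREE, QG_REF_ADD, QG_REF_DEL };

    struct qgroup_oper {
        unsigned long long bytenr, num_bytes;
        enum qgroup_op type;
        struct qgroup_oper *next;
    };

    static struct qgroup_oper *oper_list;   /* per-transaction list */

    /* Delayed refs for the same extent can now merge freely, because the
     * accounting no longer needs to observe each of them individually. */
    static void qgroup_record_ref(struct qgroup_oper *op)
    {
        op->next = oper_list;
        oper_list = op;
    }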
     
  • Signed-off-by: Miao Xie
    Signed-off-by: Wang Shilong
    Signed-off-by: Chris Mason

    Miao Xie
     
  • CC: Miao Xie
    CC: Wang Shilong
    Signed-off-by: David Sterba
    Signed-off-by: Chris Mason

    David Sterba
     

07 Apr, 2014

2 commits

  • Let's try this again. We can deadlock the box if we send on a box and try to
    write onto the same fs with the app that is trying to listen to the send pipe.
    This is because the writer could get stuck waiting for a transaction commit
    which is being blocked by the send. So fix this by making sure looking at the
    commit roots is always going to be consistent. We do this by keeping track of
    which roots need to have their commit roots swapped during commit, and then
    taking the commit_root_sem and swapping them all at once. Then make sure we
    take a read lock on the commit_root_sem in cases where we search the commit root
    to make sure we're always looking at a consistent view of the commit roots.
    Previously we had problems with this because we would swap a fs tree commit root
    and then swap the extent tree commit root independently which would cause the
    backref walking code to screw up sometimes. With this patch we no longer
    deadlock and pass all the weird send/receive corner cases. Thanks,

    Reported-by: Hugo Mills
    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Josef Bacik
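
    A user-space model of the locking scheme: all commit roots swap at once
    under the write side of a rwlock, and every commit-root search holds the
    read side, so no reader can observe a half-swapped set (say, a new fs
    tree root paired with an old extent tree root).

    #include <pthread.h>

    static pthread_rwlock_t commit_root_sem = PTHREAD_RWLOCK_INITIALIZER;

    struct root { void *commit_root, *node; };

    static void swap_commit_roots(struct root *dirty[], int n)
    {
        pthread_rwlock_wrlock(&commit_root_sem);
        for (int i = 0; i < n; i++)
            dirty[i]->commit_root = dirty[i]->node;  /* all at once */
        pthread_rwlock_unlock(&commit_root_sem);
    }

    static void *search_commit_root(struct root *r)
    {
        pthread_rwlock_rdlock(&commit_root_sem);
        void *snapshot = r->commit_root;   /* consistent view for send */
        pthread_rwlock_unlock(&commit_root_sem);
        return snapshot;
    }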
     
  • So I have an awful exercise script that will run snapshot, balance and
    send/receive in parallel. This sometimes would crash spectacularly, and when it
    came back up the fs would be completely hosed. Turns out this is because of a
    bad interaction of balance and send/receive. Send will hold onto its entire
    path for the whole send, but its blocks could get relocated out from underneath
    it, and because it doesn't hold tree locks there's nothing to keep this from
    happening. So it will go to read in a slot with an old transid, and we could
    have re-allocated this block for something else and it could have a completely
    different transid. But because we think it is invalid we clear uptodate and
    re-read in the block. If we do this before we actually write out the new block
    we could write back stale data to the fs, and boom, we're screwed.

    Now we definitely need to fix this disconnect between send and balance, but we
    really, really need to not allow ourselves to accidentally read in stale data
    over new data. So make sure we check if the extent buffer is not under io before
    clearing uptodate; this will kick back EIO to the caller instead of reading in
    stale data and keep us from corrupting the fs. Thanks,

    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Josef Bacik
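
    The guard reduces to a few lines; here is a hedged stand-alone model
    (the field names and the retry convention are illustrative):

    #include <errno.h>
    #include <stdbool.h>

    struct eb {                     /* models an extent buffer */
        bool uptodate;
        bool under_io;              /* writeback still in flight */
        unsigned long long transid;
    };

    static int check_eb_transid(struct eb *eb, unsigned long long want)
    {
        if (eb->transid == want)
            return 0;
        if (eb->under_io)
            return -EIO;        /* never force a re-read of a block that
                                 * is being written: that is how stale
                                 * data clobbers new data */
        eb->uptodate = false;   /* safe to make the caller re-read now */
        return -EAGAIN;
    }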
     

21 Mar, 2014

1 commit

  • Zach found this deadlock that would happen like this:

    btrfs_end_transaction use_count to 0
      btrfs_run_delayed_refs
        btrfs_cow_block
          find_free_extent
            btrfs_start_transaction use_count to 1
              allocate chunk
            btrfs_end_transaction use_count to 0
              btrfs_run_delayed_refs
                lock tree block we are cowing above ^^

    We need to only decrease trans->use_count if it is above 1, otherwise leave it
    alone. This will make nested trans be the only ones who decrease their added
    ref, and will let us get rid of the trans->use_count++ hack if we have to commit
    the transaction. Thanks,

    cc: stable@vger.kernel.org
    Reported-by: Zach Brown
    Signed-off-by: Josef Bacik
    Tested-by: Zach Brown
    Signed-off-by: Chris Mason

    Josef Bacik
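
    The rule in miniature: only a nested handle drops the reference it added;
    the outermost end leaves use_count alone and really finishes. A sketch
    with an illustrative structure:

    struct trans_handle { int use_count; };

    static void really_end_transaction(struct trans_handle *t) { (void)t; }

    static void end_transaction(struct trans_handle *trans)
    {
        if (trans->use_count > 1) {   /* nested: drop only our added ref */
            trans->use_count--;
            return;
        }
        /* use_count == 1: outermost end; run delayed refs and finish.
         * Not decrementing here avoids the re-entrant
         * btrfs_run_delayed_refs deadlock shown above. */
        really_end_transaction(trans);
    }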
     

29 Jan, 2014

7 commits

  • Add a noinode_cache mount option for btrfs.

    Since the inode map cache involves all the btrfs_find_free_ino/return_ino
    machinery, if we just toggled the mount option directly, an inode number
    taken from the inode map cache would not be returned to it.

    To keep find and return of inode numbers behaving consistently,
    a new bit in mount_opt, CHANGE_INODE_CACHE, is introduced for this idea.
    CHANGE_INODE_CACHE is set/cleared on remount, and the original
    INODE_MAP_CACHE is set/cleared according to CHANGE_INODE_CACHE after a
    successful transaction commit.
    Since find/return of inode numbers is all done between
    btrfs_start_transaction and btrfs_commit_transaction, this keeps the
    behavior consistent.

    Also, the noinode_cache mount option will not stop the caching_kthread.

    Cc: David Sterba
    Signed-off-by: Miao Xie
    Signed-off-by: Qu Wenruo
    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Qu Wenruo
     
  • On one of our gluster clusters we noticed some pretty big lag spikes. This
    turned out to be because our transaction commit was taking like 3 minutes to
    complete. This is because we have like 30 gigs of metadata, so our global
    reserve would end up being the max which is like 512 mb. So our throttling code
    would allow a ridiculous amount of delayed refs to build up and then they'd all
    get run at transaction commit time, and for a cold mounted file system that
    could take up to 3 minutes to run. So fix the throttling to be based on both
    the size of the global reserve and how long it takes us to run delayed refs.
    This patch tracks the time it takes to run delayed refs and then only allows 1
    second's worth of outstanding delayed refs at a time. This way it will auto-tune
    itself from cold cache up to when everything is in memory and it no longer has
    to go to disk. This makes our transaction commits take much less time to run.
    Thanks,

    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Josef Bacik
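
    The auto-tuning idea, modelled in a few lines of C; the decay factor and
    the cold-start cap are invented for illustration.

    static double secs_per_ref;     /* decayed running average */

    static void record_run(long refs, double secs)
    {
        if (refs > 0)
            secs_per_ref = 0.75 * secs_per_ref + 0.25 * (secs / refs);
    }

    /* Allow roughly one second's worth of outstanding delayed refs,
     * instead of a count derived from the global reserve's size. */
    static long refs_allowed(void)
    {
        if (secs_per_ref <= 0)
            return 1000;                    /* arbitrary cold-start cap */
        return (long)(1.0 / secs_per_ref);
    }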
     
  • Currently we have two rb-trees, one for delayed ref heads and one for all of the
    delayed refs, including the delayed ref heads. When we process the delayed refs
    we have to hold onto the delayed ref lock for all of the selecting and merging
    and such, which results in quite a bit of lock contention. This was solved by
    having a waitqueue and only one flusher at a time; however, this hurts if we
    get a lot of delayed refs queued up.

    So instead just have an rb tree for the delayed ref heads, and then attach the
    delayed ref updates to an rb tree that is per delayed ref head. Then we only
    need to take the delayed ref lock when adding new delayed refs and when
    selecting a delayed ref head to process, all the rest of the time we deal with a
    per delayed ref head lock which will be much less contentious.

    The locking rules for this get a little more complicated since we have to lock
    up to 3 things to properly process delayed refs, but I will address that problem
    later. For now this passes all of xfstests and my overnight stress tests.
    Thanks,

    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Josef Bacik
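
    The two-level structure, modelled with lists and pthread mutexes instead
    of rb-trees and kernel locks: the global lock is only taken to add refs
    or pick a head; the actual selecting and merging happens under the chosen
    head's own lock.

    #include <pthread.h>

    struct ref_update { struct ref_update *next; };

    struct ref_head {
        unsigned long long bytenr;
        pthread_mutex_t lock;         /* contention is now per extent */
        struct ref_update *updates;   /* this extent's queued changes */
        struct ref_head *next;
    };

    static pthread_mutex_t delayed_refs_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct ref_head *heads;

    static struct ref_head *select_head(void)
    {
        pthread_mutex_lock(&delayed_refs_lock);
        struct ref_head *h = heads;          /* pick a head ... */
        pthread_mutex_unlock(&delayed_refs_lock);
        if (h)
            pthread_mutex_lock(&h->lock);    /* ... then work under its
                                              * own lock, off the global */
        return h;
    }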
     
  • Looking into some performance related issues with large amounts of metadata
    revealed that we can have some pretty huge swings in fsync() performance. If we
    have a lot of delayed refs backed up (as you will tend to do with lots of
    metadata) fsync() will wander off and try to run some of those delayed refs
    which can result in reading from disk and such. Since the actual act of fsync()
    doesn't create any delayed refs, there is no need to make it throttle on delayed
    ref stuff; that will be handled by other people. With this patch we get much
    smoother fsync performance with large amounts of metadata. Thanks,

    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Josef Bacik
     
  • We should guarantee that parent and clone roots can not be destroyed
    during send. For this we have two ideas:

    1. By holding @subvol_sem; this might be a nightmare, because it will
    block all subvolume deletion for a long time.

    2. Miao pointed out we can reuse @send_in_progress, which means we will
    skip snapshot deletion if sending from that root is in progress.

    Here we adopt the second approach, since it won't block other subvolume
    deletions for a long time (a sketch follows this entry).

    Besides, in btrfs_clean_one_deleted_snapshot() we only check the first
    root; if this root is involved in a send, we return directly rather than
    continuing to check. There are several reasons for this:

    1. This case happens seldom.
    2. After sending, the cleaner thread can continue to drop that root.
    3. It keeps the code simple.

    Cc: David Sterba
    Signed-off-by: Wang Shilong
    Reviewed-by: Miao Xie
    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Wang Shilong
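
    The cleaner-side check sketched above amounts to this (with illustrative
    field names):

    #include <stdbool.h>

    struct root_model {
        int send_in_progress;   /* count of active sends using this root */
        bool dead;              /* snapshot scheduled for deletion       */
    };

    static bool may_drop_snapshot(const struct root_model *root)
    {
        /* Skip for now; the cleaner retries after the send finishes. */
        return root->dead && root->send_in_progress == 0;
    }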
     
  • Two reasons:
    - btrfs_end_transaction_dmeta() is the same as btrfs_end_transaction_throttle(),
    so it is unnecessary.
    - All the delayed items should be dealt with in the current transaction, so the
    workers should not commit the transaction; instead, they should deal with as
    many of the delayed items as possible.

    So we can remove btrfs_end_transaction_dmeta().

    Signed-off-by: Miao Xie
    Signed-off-by: Chris Mason

    Miao Xie
     
  • Convert all applicable cases of printk and pr_* to the btrfs_* macros.

    Fix all uses of the BTRFS prefix.

    Signed-off-by: Frank Holton
    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Frank Holton