15 Nov, 2011

1 commit

  • The btrfs snapshotting code requires that once a root has been
    snapshotted, we don't change it during a commit.

    But there are two cases that lead to tree corruption:

    1) multi-threaded snapshotting can commit several snapshots in one transaction,
    and this may change the src root while processing the subsequent pending
    snapshots, which corrupts the earlier snapshots;

    2) the free inode cache was changing the roots when it wrote out the cache,
    which led to corruptions.

    This fixes things by making sure we force COW the block after we create a
    snapshot during a transaction commit; then any changes to the roots
    will result in COW, and we get all the fs roots and snapshot roots to be
    consistent.
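
    A minimal sketch of the idea, assuming a per-root flag (called force_cow
    here; the exact field name is an assumption) that is set once the root has
    been snapshotted in this commit and checked wherever we decide whether a
    block can be reused:

        /* Simplified sketch, not the exact upstream function. */
        static int should_cow_block(struct btrfs_trans_handle *trans,
                                    struct btrfs_root *root,
                                    struct extent_buffer *buf)
        {
                if (btrfs_header_generation(buf) == trans->transid &&
                    !btrfs_header_flag(buf, BTRFS_HEADER_FLAG_WRITTEN) &&
                    !root->force_cow)   /* snapshotted this commit: always COW */
                        return 0;
                return 1;
        }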

    Signed-off-by: Liu Bo
    Signed-off-by: Miao Xie
    Signed-off-by: Chris Mason

    Liu Bo
     

11 Nov, 2011

1 commit


06 Nov, 2011

3 commits

  • While we're allocating RAM for a new transaction, we drop our spinlock.
    When we get the lock back, we do check to see if a transaction started
    while we slept, but we don't check to make sure it isn't blocked
    because a commit has already started.

    Signed-off-by: Chris Mason

    Chris Mason
     
  • Failure testing was tripping up over stale PageError bits in
    metadata pages. If we have an IO error on a block, and later on
    end up reusing it, nobody ever clears PageError on those pages.

    During commit, we'll find PageError and think we had trouble writing
    the block, which will lead to aborts and other problems.

    This changes clean_tree_block and the btrfs writepage code to
    clear the PageError bit. In both cases we're either completely
    done with the page or the page has good stuff and the error bit
    is no longer valid.
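
    A minimal sketch of the fix, assuming the usual page-flag helpers (the
    surrounding code is simplified):

        /* In clean_tree_block() and the btree writepage path: the block is
         * either being reused or has been written successfully, so any old
         * error bit is stale. */
        if (PageError(page))
                ClearPageError(page);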

    Signed-off-by: Chris Mason

    Chris Mason
     
  • fs_info is now ~9kb, more than fits into one page. This will cause
    mount failures when memory is too fragmented. Top space consumers are
    super block structures super_copy and super_for_commit, ~2.8kb each.
    Allocate them dynamically. fs_info will be ~3.5kb. (measured on x86_64)

    Add a wrapper for freeing fs_info and all of its dynamically allocated
    members.
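
    A rough sketch of such a wrapper; the member names follow the description
    above and are assumptions rather than the exact patch:

        static void free_fs_info(struct btrfs_fs_info *fs_info)
        {
                kfree(fs_info->super_copy);         /* now allocated dynamically */
                kfree(fs_info->super_for_commit);
                kfree(fs_info);
        }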

    Signed-off-by: David Sterba

    David Sterba
     

20 Oct, 2011

9 commits

  • Currently btrfs_block_rsv_check does two things: it will either refill a block
    reserve, like in the truncate or refill case, or it will check to see if there is
    enough space in the global reserve and possibly refill it. However, because of
    overcommit we could be well overcommitting ourselves just to try and refill the
    global reserve, when really we should just be committing the transaction. So
    break this out into btrfs_block_rsv_refill and btrfs_block_rsv_check. Refill
    will try to reserve more metadata if it can; btrfs_block_rsv_check will not,
    it will only tell you whether the given factor of the total space is still
    reserved. Thanks,
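
    A minimal sketch of the split described above; the signatures and the
    reserve_metadata_bytes()/div_factor() helpers are assumptions based on the
    description, not the exact kernel interfaces:

        /* Refill: may actually reserve more metadata (and overcommit). */
        int btrfs_block_rsv_refill(struct btrfs_root *root,
                                   struct btrfs_block_rsv *rsv, u64 min_reserved)
        {
                u64 num_bytes = 0;

                spin_lock(&rsv->lock);
                if (rsv->reserved < min_reserved)
                        num_bytes = min_reserved - rsv->reserved;
                spin_unlock(&rsv->lock);

                if (!num_bytes)
                        return 0;
                return reserve_metadata_bytes(root, rsv, num_bytes, 1);
        }

        /* Check: only reports whether a factor of the rsv is still reserved. */
        int btrfs_block_rsv_check(struct btrfs_root *root,
                                  struct btrfs_block_rsv *rsv, int min_factor)
        {
                int ret = 0;

                spin_lock(&rsv->lock);
                if (rsv->reserved >= div_factor(rsv->size, min_factor))
                        ret = 1;
                spin_unlock(&rsv->lock);
                return ret;
        }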

    Signed-off-by: Josef Bacik

    Josef Bacik
     
  • We started setting trans->block_rsv = NULL to allow the delayed refs flushing
    stuff to use the right block_rsv and then just made
    btrfs_trans_release_metadata() unconditionally use the trans block rsv. The
    problem with this is that we need to reserve some space in the transaction and
    then migrate it to the global block rsv, so we need to be able to free that up
    properly. So instead just move btrfs_trans_release_metadata() before the
    delayed ref flushing and use trans->block_rsv for the freeing. Thanks,

    Signed-off-by: Josef Bacik

    Josef Bacik
     
  • Some users have requested this, and I've found I needed a way to disable cache
    loading without actually clearing the cache, so introduce the no_space_cache
    option. Previously we checked the super block's cache generation field and, if it
    was populated, always turned space caching on. Now we check this and set the
    space cache option on, then parse the mount options so that if we want it
    off it gets turned off. Then we check the mount option in all the places we do
    the caching work instead of checking the super's cache generation. This makes
    things more consistent and lets us turn space caching off. Thanks,
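
    A minimal sketch of how the check sites change, assuming the existing
    btrfs option helpers (btrfs_set_opt/btrfs_test_opt); the call sites are
    simplified:

        /* At mount: default the option on if the super block carries a cache
         * generation, then let a later -o no_space_cache clear it again. */
        if (btrfs_super_cache_generation(disk_super))
                btrfs_set_opt(fs_info->mount_opt, SPACE_CACHE);
        /* ... parse mount options, which may clear SPACE_CACHE ... */

        /* At the caching sites, test the mount option instead of the super: */
        if (btrfs_test_opt(root, SPACE_CACHE))
                load_free_space_cache(fs_info, block_group);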

    Signed-off-by: Josef Bacik

    Josef Bacik
     
  • While looking for a performance regression a user was complaining about, I
    noticed that we had a regression with the varmail test of filebench. This was
    introduced by

    0d10ee2e6deb5c8409ae65b970846344897d5e4e

    which keeps us from calling writepages in writepage. This is a correct change,
    however the old behaviour happened to help the varmail test because we wrote out
    in larger chunks. This is largely to do with how we write out dirty pages for each
    transaction. If you run filebench with

    load varmail
    set $dir=/mnt/btrfs-test
    run 60

    prior to this patch you would get ~1420 ops/second, but with the patch you get
    ~1200 ops/second. This is a 16% decrease. So since we know the range of dirty
    pages we want to write out, don't write out in one page chunks, write out in
    ranges. So to do this we call filemap_fdatawrite_range() on the range of bytes.
    Then we convert the DIRTY extents to NEED_WAIT extents. When we then call
    btrfs_wait_marked_extents() we only have to filemap_fdatawait_range() on that
    range and clear the NEED_WAIT extents. This doesn't get us back to our original
    speeds, but I've been seeing ~1380 ops/second, which is only a ~3% regression.
    That is acceptable given that the original commit greatly reduces our latency
    to begin with. Thanks,
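
    A minimal sketch of the write/wait pattern described above; the extent-bit
    helpers are named after the description (treat them as illustrative) and
    error handling is omitted:

        /* Write out whole dirty ranges instead of one page at a time, and
         * remember which ranges still need waiting on. */
        while (!find_first_extent_bit(dirty_pages, start, &start, &end,
                                      EXTENT_DIRTY)) {
                convert_extent_bit(dirty_pages, start, end,
                                   EXTENT_NEED_WAIT, EXTENT_DIRTY, GFP_NOFS);
                filemap_fdatawrite_range(mapping, start, end);
                start = end + 1;
        }

        start = 0;      /* the wait pass scans the tree again from the start */

        /* Later, wait only on the ranges we actually submitted. */
        while (!find_first_extent_bit(dirty_pages, start, &start, &end,
                                      EXTENT_NEED_WAIT)) {
                filemap_fdatawait_range(mapping, start, end);
                clear_extent_bits(dirty_pages, start, end,
                                  EXTENT_NEED_WAIT, GFP_NOFS);
                start = end + 1;
        }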

    Signed-off-by: Josef Bacik

    Josef Bacik
     
  • Checksums are charged in 2 different ways. The first case is when we're writing
    to the disk, we account for the new checksums with the delalloc block rsv. In
    order for this to work we check if we're allocating a block for the csum root
    and if trans->block_rsv == the delalloc block rsv. But when we're deleting the
    csums because of cow, this is charged to the global block rsv, and is done when
    we run the delayed refs. So we need to make sure that trans->block_rsv == NULL
    when running the delayed refs. So set it to NULL and reset it in
    should_end_transaction, and set it to NULL in commit_transaction. This got rid
    of the ridiculous amount of warnings I was seeing when trying to do a balance.
    Thanks,

    Signed-off-by: Josef Bacik

    Josef Bacik
     
  • The only thing that we need to have a trans handle for is in
    reserve_metadata_bytes, and that's to know how much flushing we can do. So
    instead of passing it around, just check current->journal_info for a
    trans_handle so we know if we can commit a transaction to try and free up space
    or not. Thanks,
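
    A minimal sketch of the check, relying on the convention that a running
    trans handle is stashed in current->journal_info (simplified):

        /* Inside reserve_metadata_bytes(): if this task is already inside a
         * transaction we must not try to commit one to reclaim space. */
        struct btrfs_trans_handle *trans = current->journal_info;

        if (trans)
                committable = 0;    /* flushing only, no transaction commit */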

    Signed-off-by: Josef Bacik

    Josef Bacik
     
  • The alloc warnings everybody has been seeing is because we have been reserving
    space for csums, but we weren't actually using that space. So make
    get_block_rsv() return the trans->block_rsv if we're modifying the csum root.
    Also set the trans->block_rsv to NULL so that if we modify the csum root when
    running delayed refs it comes out of the global reserve like it's supposed
    to. With this patch I'm not seeing those alloc warnings anymore. Thanks,

    Signed-off-by: Josef Bacik

    Josef Bacik
     
  • If you run xfstest 224 you will get lots of messages about not being able to
    delete inodes and that they will be cleaned up next mount. This is because
    btrfs_block_rsv_check was not calling reserve_metadata_bytes with the ability to
    flush, so if there was not enough space, it simply failed. But in truncate and
    evict case we could easily flush space to try and get enough space to do our
    work, so make btrfs_block_rsv_check take a flush argument to pass down to
    reserve_metadata_bytes. Now xfstests 224 runs fine without all those
    complaints. Thanks,

    Signed-off-by: Josef Bacik

    Josef Bacik
     
  • This patch kills off the calculation for the amount of space needed for the
    orphan operations during a snapshot. The thing is we only do snapshots on
    commit, so any space that is in the block_rsv->freed[] isn't going to be in the
    new snapshot anyway, so there isn't any reason to require that space to be
    reserved for the snapshot to occur. Thanks,

    Signed-off-by: Josef Bacik

    Josef Bacik
     

11 Sep, 2011

1 commit

  • At the beginning of create_pending_snapshot, trans->block_rsv is set
    to pending->block_rsv and is used for snapshot things; however, when
    it is done, we do not restore it as we should.
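
    A minimal sketch of the save/restore pattern implied by the fix
    (simplified):

        struct btrfs_block_rsv *rsv = trans->block_rsv;

        trans->block_rsv = &pending->block_rsv;
        /* ... create the snapshot ... */
        trans->block_rsv = rsv;     /* restore the caller's block_rsv */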

    Signed-off-by: Liu Bo
    Signed-off-by: Chris Mason

    Liu Bo
     

02 Aug, 2011

2 commits


28 Jul, 2011

1 commit


11 Jul, 2011

1 commit

  • We have to do weird things when handling enospc in the transaction joining code.
    Because we've already joined the transaction we cannot commit the transaction
    within the reservation code since it will deadlock, so we have to return EAGAIN
    and then make sure we don't retry too many times. Instead of doing this, just
    do the reservation the normal way before we join the transaction, that way we
    can do whatever we want to try and reclaim space, and then if it fails we know
    for sure we are out of space and we can return ENOSPC. Thanks,

    Signed-off-by: Josef Bacik

    Josef Bacik
     

18 Jun, 2011

3 commits

  • Snapshot creation has two phases. One is the initial snapshot setup,
    and the second is done during commit, while nobody is allowed to modify
    the root we are snapshotting.

    The delayed metadata insertion code can break that rule, it does a
    delayed inode update on the inode of the parent of the snapshot,
    and delayed directory item insertion.

    This makes sure to run the pending delayed operations before we
    record the snapshot root, which avoids corruptions.
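
    A minimal sketch of the ordering in the commit path; the surrounding code
    is simplified, and btrfs_run_delayed_items() is the delayed-metadata flush
    entry point introduced by the delayed items work:

        /* Flush delayed inode updates and dir-index insertions first, so they
         * land in the source root before the snapshot root is recorded. */
        ret = btrfs_run_delayed_items(trans, root);
        if (ret)
                goto fail;

        ret = create_pending_snapshots(trans, root->fs_info);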

    Signed-off-by: Chris Mason

    Chris Mason
     
  • …btrfs-work into for-linus

    Conflicts:
    fs/btrfs/transaction.c

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

    Chris Mason
     
  • The recent commit to get rid of our trans_mutex introduced
    some races with block group relocation. The problem is that relocation
    needs to do some record keeping about each root, and it was relying
    on the transaction mutex to coordinate things in subtle ways.

    This fix adds a mutex just for the relocation code and makes sure
    it doesn't have a big impact on normal operations. The race is
    really fixed in btrfs_record_root_in_trans, which is where we
    step back and wait for the relocation code to finish accounting
    setup.

    Signed-off-by: Chris Mason

    Chris Mason
     

16 Jun, 2011

1 commit

  • We can lock up if we try to allow new writers to join the transaction when we have
    flushoncommit set or have a pending snapshot. This is because we set
    no_trans_join and then loop around and try to wait for ordered extents again.
    The problem is the ordered endio stuff needs to join the transaction, which it
    can't do because no_trans_join is set. So instead wait until after this loop to
    set no_trans_join and then make sure to wait for num_writers == 1 in case
    anybody got started in between us exiting the loop and setting no_trans_join.
    This could easily be reproduced by mounting -o flushoncommit and running xfstest
    13. It cannot be reproduced with this patch. Thanks,

    Reported-by: Jim Schutt
    Signed-off-by: Josef Bacik

    Josef Bacik
     

11 Jun, 2011

1 commit

  • Normally current->journal_info is cleared by commit_transaction. For an
    async snap or subvol creation, though, it runs in a work queue. Clear
    it in btrfs_commit_transaction_async() to avoid leaking a non-NULL
    journal_info when we return to userspace. When the actual commit runs in
    the other thread it won't care that its current->journal_info is already
    NULL.
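
    A minimal sketch of the fix in btrfs_commit_transaction_async()
    (simplified):

        /* The real commit runs from a worker thread, so this handle no longer
         * belongs to the current task; don't leak it back to userspace. */
        if (current->journal_info == trans)
                current->journal_info = NULL;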

    Signed-off-by: Sage Weil
    Tested-by: Jim Schutt
    Signed-off-by: Chris Mason

    Sage Weil
     

09 Jun, 2011

1 commit

  • In btrfs_wait_for_commit if we came upon a transaction that had committed we
    just exited, but that's bad since we are holding the trans_lock. So break
    instead so that the lock is dropped. Thanks,

    Reported-by: David Sterba
    Signed-off-by: Josef Bacik

    Josef Bacik
     

04 Jun, 2011

1 commit


28 May, 2011

1 commit

  • git://git.kernel.org/pub/scm/linux/kernel/git/josef/btrfs-work into for-linus

    Conflicts:
    fs/btrfs/disk-io.c
    fs/btrfs/extent-tree.c
    fs/btrfs/free-space-cache.c
    fs/btrfs/inode.c
    fs/btrfs/transaction.c

    Signed-off-by: Chris Mason

    Chris Mason
     

24 May, 2011

4 commits

  • Originally this was going to be used as a way to give hints to the allocator,
    but frankly we can get much better hints elsewhere and it's not even used at all
    for anything useful. In addition to being completely useless, when we initialize
    an inode we try and find a freeish block group to set as the inode's block group,
    and with a completely full 40gb fs this takes _forever_, so I imagine with, say,
    a 1tb fs this is just unbearable. So just axe the thing altogether; we don't
    need it, and it saves us 8 bytes in the inode and saves us 500 microseconds per
    inode lookup in my testcase. Thanks,

    Signed-off-by: Josef Bacik

    Josef Bacik
     
  • We use trans_mutex for lots of things, here's a basic list

    1) To serialize trans_handles joining the currently running transaction
    2) To make sure that no new trans handles are started while we are committing
    3) To protect the dead_roots list and the transaction lists

    Really, serializing trans_handle joining is not too hard; it mostly comes down
    to acquiring a reference to the transaction. So replace the
    trans_mutex with a trans_lock spinlock and use it to do the following:

    1) Protect fs_info->running_transaction. All trans handles have to do is check
    this, and then take a reference of the transaction and keep on going.
    2) Protect the fs_info->trans_list. This doesn't get used too much, basically
    it just holds the current transactions, which will usually just be the currently
    committing transaction and the currently running transaction at most.
    3) Protect the dead roots list. This is only ever processed by splicing the
    list so this is relatively simple.
    4) Protect the fs_info->reloc_ctl stuff. This is very lightweight and was using
    the trans_mutex before, so this is a pretty straightforward change.
    5) Protect fs_info->no_trans_join. Because we don't hold the trans_lock over
    the entirety of the commit we need to have a way to block new people from
    creating a new transaction while we're doing our work. So we set no_trans_join
    and in join_transaction we test to see if that is set, and if it is we do a
    wait_on_commit.
    6) Make the transaction use count atomic so we don't need to take locks to
    modify it when we're dropping references.
    7) Add a commit_lock to the transaction to make sure multiple people trying to
    commit the same transaction don't race and commit at the same time.
    8) Make open_ioctl_trans an atomic so we don't have to take any locks for ioctl
    trans.

    I have tested this with xfstests, but obviously it is a pretty hairy change so
    lots of testing is greatly appreciated. Thanks,
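
    A rough sketch of the join fast path under the new spinlock; the field
    names follow the list above, but the code is a simplified illustration,
    not the exact patch:

        spin_lock(&fs_info->trans_lock);
        if (fs_info->no_trans_join) {           /* a commit is shutting out joiners */
                spin_unlock(&fs_info->trans_lock);
                return -EBUSY;                  /* caller waits on the commit */
        }
        cur_trans = fs_info->running_transaction;
        if (cur_trans) {
                atomic_inc(&cur_trans->use_count);      /* item 6: atomic refcount */
                atomic_inc(&cur_trans->num_writers);
                spin_unlock(&fs_info->trans_lock);
                return 0;
        }
        spin_unlock(&fs_info->trans_lock);
        /* no running transaction: allocate one, retake trans_lock, re-check */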

    Signed-off-by: Josef Bacik

    Josef Bacik
     
  • We currently track trans handles in current->journal_info, but we don't actually
    use it. This patch fixes it. This will cover the case where we have multiple
    people starting transactions down the call chain. This keeps us from having to
    allocate a new handle and all of that; we just increase the use count of the
    current handle, save the old block_rsv, and return. I tested this with xfstests
    and it worked out fine. Thanks,
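
    A minimal sketch of the fast path in start_transaction() implied by this
    change (simplified; the field names are assumptions):

        if (current->journal_info) {
                /* a transaction is already running in this call chain: nest */
                struct btrfs_trans_handle *h = current->journal_info;

                h->use_count++;             /* paired with btrfs_end_transaction */
                h->orig_rsv = h->block_rsv; /* save the old block_rsv */
                h->block_rsv = NULL;
                return h;
        }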

    Signed-off-by: Josef Bacik

    Josef Bacik
     
  • I keep forgetting that btrfs_join_transaction() just ignores the num_items
    argument, which leads me to sending pointless patches and looking stupid :). So
    just kill the num_items argument from btrfs_join_transaction and
    btrfs_start_ioctl_transaction, since neither of them use it. Thanks,

    Signed-off-by: Josef Bacik

    Josef Bacik
     

23 May, 2011

2 commits


22 May, 2011

1 commit


21 May, 2011

1 commit

  • Changelog V5 -> V6:
    - Fix oom when the memory load is high, by storing the delayed nodes into the
    root's radix tree, and letting btrfs inodes go.

    Changelog V4 -> V5:
    - Fix the race on adding the delayed node to the inode, which is spotted by
    Chris Mason.
    - Merge Chris Mason's incremental patch into this patch.
    - Fix deadlock between readdir() and memory fault, which is reported by
    Itaru Kitayama.

    Changelog V3 -> V4:
    - Fix nested lock, which is reported by Itaru Kitayama, by updating space cache
    inode in time.

    Changelog V2 -> V3:
    - Fix the race between the delayed worker and the task which does delayed items
    balance, which is reported by Tsutomu Itoh.
    - Modify the patch to address David Sterba's comment.
    - Fix the bug of the cpu recursion spinlock, reported by Chris Mason

    Changelog V1 -> V2:
    - break up the global rb-tree, use a list to manage the delayed nodes,
    which are created for every directory and file, and are used to manage the
    delayed directory name index items and the delayed inode item.
    - introduce a worker to deal with the delayed nodes.

    Compared with Ext3/4, the performance of file creation and deletion on btrfs
    is very poor. The reason is that btrfs must do a lot of b+ tree insertions,
    such as the inode item, the directory name item, the directory name index,
    and so on.

    If we can do some delayed b+ tree insertion or deletion, we can improve the
    performance, so we made this patch which implemented delayed directory name
    index insertion/deletion and delayed inode update.

    Implementation:
    - Introduce a delayed root object into the filesystem that uses two lists to
    manage the delayed nodes, which are created for every file/directory.
    One list manages all the delayed nodes that have delayed items, and the
    other manages the delayed nodes that are waiting to be dealt with
    by the work thread.
    - Every delayed node has two rb-trees: one manages the directory name
    index items that are going to be inserted into the b+ tree, and the other
    manages the directory name index items that are going to be deleted from
    the b+ tree.
    - Introduce a worker to deal with the delayed operations. This worker handles
    the delayed directory name index item insertions and deletions and the
    delayed inode updates.
    When the number of delayed items exceeds the lower limit, we create work for
    some delayed nodes, insert it into the work queue of the worker, and then
    go back.
    When the number of delayed items exceeds the upper bound, we create work for
    all the delayed nodes that haven't been dealt with, insert it into the work
    queue of the worker, and then wait until the number of untreated items drops
    below some threshold value.
    - When we want to insert a directory name index into the b+ tree, we just add
    the information into the delayed insertion rb-tree.
    Then we check the number of delayed items and do delayed item
    balancing. (The balance policy is described above.)
    - When we want to delete a directory name index from the b+ tree, we first
    search for it in the insertion rb-tree. If we find it, we just drop it. If
    not, we add its key into the delayed deletion rb-tree.
    As with the delayed insertion rb-tree, we also check the number of
    delayed items and do delayed item balancing.
    (The same as for insertion.)
    - When we want to update the metadata of some inode, we cache the data of the
    inode in the delayed node. The worker will flush it into the b+ tree after
    dealing with the delayed insertions and deletions.
    - We move the delayed node to the tail of the list after we access it;
    this way we can cache more delayed items and merge more inode updates.
    - When we want to commit the transaction, we deal with all the delayed nodes.
    - The delayed node is freed when we free the btrfs inode.
    - Before we log the inode items, we commit all the directory name index items
    and the delayed inode update.
    (A rough sketch of these data structures follows below.)
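
    A rough sketch of the data structures described above; the field names are
    illustrative rather than the exact ones used in the patch:

        struct btrfs_delayed_root {
                struct list_head node_list;     /* delayed nodes that have items */
                struct list_head prepare_list;  /* nodes queued for the worker */
                atomic_t items;                 /* total delayed items, for balancing */
        };

        struct btrfs_delayed_node {
                u64 inode_id;
                struct rb_root ins_root;        /* dir-index items to insert */
                struct rb_root del_root;        /* dir-index keys to delete */
                struct btrfs_inode_item inode_item;  /* cached delayed inode update */
                struct list_head n_list;        /* link into the delayed root lists */
        };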

    I did a quick test with the benchmark tool[1] and found we can improve the
    performance of file creation by ~15%, and file deletion by ~20%.

    Before applying this patch:
    Create files:
    Total files: 50000
    Total time: 1.096108
    Average time: 0.000022
    Delete files:
    Total files: 50000
    Total time: 1.510403
    Average time: 0.000030

    After applying this patch:
    Create files:
    Total files: 50000
    Total time: 0.932899
    Average time: 0.000019
    Delete files:
    Total files: 50000
    Total time: 1.215732
    Average time: 0.000024

    [1] http://marc.info/?l=linux-btrfs&m=128212635122920&q=p3

    Many thanks for Kitayama-san's help!

    Signed-off-by: Miao Xie
    Reviewed-by: David Sterba
    Tested-by: Tsutomu Itoh
    Tested-by: Itaru Kitayama
    Signed-off-by: Chris Mason

    Miao Xie
     

12 May, 2011

1 commit

  • This adds an initial implementation of scrub. It works quite
    straightforwardly. Userspace issues an ioctl for each device in the
    fs. For each device, it enumerates the allocated device chunks. For
    each chunk, the contained extents are enumerated and the data checksums
    fetched. The extents are read sequentially and the checksums verified.
    If an error occurs (checksum or EIO), a good copy is searched for. If
    one is found, the bad copy will be rewritten.
    All enumerations happen from the commit roots. During a transaction
    commit, the scrubs get paused and afterwards continue from the new
    roots.

    This commit is based on the series originally posted to linux-btrfs
    with some improvements that resulted from comments from David Sterba,
    Ilya Dryomov and Jan Schmidt.

    Signed-off-by: Arne Jansen

    Arne Jansen
     

06 May, 2011

1 commit


02 May, 2011

1 commit

  • all callers pass GFP_NOFS, but the GFP mask argument is not used in the
    function; GFP_ATOMIC is passed to radix tree initialization and it's the
    only correct one, since we're using the preload/insert mechanism of
    radix tree.
    Let's drop the gfp mask from the btrfs function; this will not change
    behaviour.
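
    For reference, the radix tree preload/insert pattern the commit refers to
    looks roughly like this (illustrative, not the exact btrfs call site):

        INIT_RADIX_TREE(&tree, GFP_ATOMIC);   /* atomic mask: inserts never allocate */

        ret = radix_tree_preload(GFP_NOFS);   /* preallocate while we may still sleep */
        if (ret)
                return ret;
        spin_lock(&lock);
        ret = radix_tree_insert(&tree, index, item);
        spin_unlock(&lock);
        radix_tree_preload_end();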

    Signed-off-by: David Sterba

    David Sterba
     

25 Apr, 2011

2 commits

  • This is similar to block group caching.

    We dedicate a special inode in the fs tree to save the free ino cache.

    The very first time we create/delete a file after mount, the free ino
    cache will be loaded from disk into memory. When the fs tree is committed,
    the cache will be written back to disk.

    To keep compatibility, we check the root generation against the generation
    of the special inode when loading the cache, so the loading will fail
    if the btrfs filesystem was mounted in an older kernel before.

    Signed-off-by: Li Zefan

    Li Zefan
     
  • There's a potential problem on 32-bit systems when we exhaust 32-bit inode
    numbers and start to allocate big inode numbers, because btrfs uses
    inode->i_ino in many places.

    So here we always use BTRFS_I(inode)->location.objectid, which is a
    u64 variable.

    There are 2 exceptions that BTRFS_I(inode)->location.objectid !=
    inode->i_ino: the btree inode (0 vs 1) and empty subvol dirs (256 vs 2),
    and inode->i_ino will be used in those cases.

    Another reason to make this change is I'm going to use a special inode
    to save free ino cache, and the inode number must be > (u64)-256.
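
    A minimal sketch of the helper this implies; the condition mirrors the two
    exceptions listed above and is illustrative, not necessarily the exact
    upstream check:

        static inline u64 btrfs_ino(struct inode *inode)
        {
                u64 ino = BTRFS_I(inode)->location.objectid;

                /* btree inode (0 vs 1) and empty subvol dirs (256 vs 2) */
                if (ino == 0 || ino == BTRFS_FIRST_FREE_OBJECTID)
                        ino = inode->i_ino;
                return ino;
        }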

    Signed-off-by: Li Zefan

    Li Zefan