24 Jul, 2009

2 commits

  • This patch moves the caching of the block group off to a kthread in order to
    allow people to allocate sooner. Instead of blocking behind the caching
    mutex, we kick off the caching kthread and then attempt to make an
    allocation. If we cannot, we wait on the block group's caching waitqueue,
    which the caching kthread uses to wake the waiting threads every time it
    finds 2MB worth of space, and again when it has finished caching. This is
    how I tested the speedup from this change:

    mkfs the disk
    mount the disk
    fill the disk up with fs_mark
    unmount the disk
    mount the disk
    time touch /mnt/foo

    Without my changes this took 11 seconds on my box; with these changes it now
    takes 1 second.

    Another change is that we now lock the super mirrors in the pinned extent
    map in order to keep from adding that space as free space when caching the
    block group. This doesn't really change anything else as far as the pinned
    extent map is concerned, since for actual pinned extents we use EXTENT_DIRTY,
    but it does mean that when we unmount we have to go in and unlock those
    extents to keep from leaking memory.

    I've also added a check so that when we are reading block groups from disk,
    if the amount of space used equals the size of the block group, we go ahead
    and mark the block group as cached. This drastically reduces the amount of
    time it takes to cache the block groups. Using the same test as above,
    except doing a dd to a file and then unmounting: it used to take 33 seconds
    to umount, now it takes 3 seconds.

    This version uses the commit_root in the caching kthread, and keeps track of
    how many async caching threads are running at any given time, so that if one
    of the async threads is still running as we cross transactions we can wait
    until it has finished before handling the pinned extents. Thank you,

    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Josef Bacik
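
    The waitqueue handoff described above can be sketched roughly as follows.
    This is an illustrative model only: the structure, field, and helper names
    are hypothetical stand-ins rather than the actual btrfs block group code,
    and locking and memory barriers are elided.

      #include <linux/types.h>
      #include <linux/wait.h>
      #include <linux/kthread.h>

      /* Hypothetical stand-in for the per-block-group caching state. */
      struct cache_group {
              wait_queue_head_t caching_wait;
              u64 free_bytes_found;
              int cached;
      };

      /* Caching kthread: wake any waiting allocators each time another 2MB
       * of free space has been found, and once more when the scan is done. */
      static int caching_thread(void *data)
      {
              struct cache_group *cg = data;
              int chunks = 16;                        /* pretend scan: 16 x 2MB */

              while (chunks--) {
                      cg->free_bytes_found += 2 * 1024 * 1024;
                      wake_up(&cg->caching_wait);
              }
              cg->cached = 1;
              wake_up(&cg->caching_wait);
              return 0;
      }

      /* Allocator side: instead of serializing on the caching mutex, kick the
       * kthread and wait only until enough space is known or the block group
       * is fully cached. */
      static void wait_for_cached_space(struct cache_group *cg, u64 needed)
      {
              init_waitqueue_head(&cg->caching_wait);
              kthread_run(caching_thread, cg, "cache-demo");
              wait_event(cg->caching_wait,
                         cg->cached || cg->free_bytes_found >= needed);
      }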
     
  • Currently btrfs has a problem where it can use a ridiculous amount of RAM
    simply tracking free space. As free space gets fragmented, we end up with
    thousands of entries on an rb-tree per block group, which usually spans 1 gig
    of area. Since we currently don't ever flush the free space cache back to
    disk, this gets to be a bit unwieldy on large filesystems with lots of
    fragmentation.

    This patch solves the problem by using PAGE_SIZE bitmaps for parts of the
    free space cache. Initially we calculate a threshold of extent entries we can
    handle, which is however many extent entries we can cram into 16k of RAM.
    The maximum amount of RAM that should ever be used to track 1 gigabyte of
    disk space is 32k, which scales much better than what we did before.

    Once we pass the extent threshold, we start adding bitmaps and using those
    instead for tracking the free space. This patch also makes it so that any
    free space that's less than 4 * sectorsize goes into a bitmap. This is nice
    since we try to allocate out of the front of a block group, so if the front
    of a block group is heavily fragmented and then has a huge chunk of free
    space at the end, we add the fragmented areas to bitmaps and use a normal
    extent entry to track the big chunk at the back of the block group.

    I've also taken the opportunity to revamp how we search for free space.
    Previously we indexed free space via an offset-indexed rb tree and a
    bytes-indexed rb tree. I've dropped the bytes-indexed rb tree and use only
    the offset-indexed rb tree. This cuts the number of tree operations we were
    doing previously in half, and gives us a slightly better allocation pattern,
    since we will always start from a specific offset and search forward from
    there, instead of searching for the size we need and trying to get it as
    close as possible to the offset we want.

    I've given this a healthy amount of testing pre-new format stuff, as well as
    post-new format stuff. I've booted up my fedora box which is installed on btrfs
    with this patch and ran with it for a few days without issues. I've not seen
    any performance regressions in any of my tests.

    Since the last patch Yan Zheng fixed a problem where we could have overlapping
    entries, so updating their offset inline would cause problems. Thanks,

    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Josef Bacik
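
    The entry-versus-bitmap decision described above can be sketched like this,
    assuming hypothetical structure and field names (the real free space cache
    code is more involved):

      #include <linux/types.h>

      #define EXTENT_ENTRY_RAM        16384   /* ~16k budget for extent entries */

      /* Hypothetical stand-in for a per-block-group free space cache. */
      struct space_cache {
              u32 sectorsize;
              u32 extent_entry_size;          /* size of one rb-tree entry */
              u32 total_entries;              /* extent entries currently held */
              u32 extents_thresh;             /* how many entries fit in 16k */
      };

      static void recalc_threshold(struct space_cache *sc)
      {
              sc->extents_thresh = EXTENT_ENTRY_RAM / sc->extent_entry_size;
      }

      /* Small fragments, and anything added once the threshold has been
       * passed, go into PAGE_SIZE bitmaps; large contiguous holes keep a
       * normal extent entry. */
      static bool use_bitmap(struct space_cache *sc, u64 bytes)
      {
              if (bytes < 4 * sc->sectorsize)
                      return true;
              if (sc->total_entries >= sc->extents_thresh)
                      return true;
              return false;
      }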
     

23 Jul, 2009

5 commits

  • If the tree roots hit read errors during mount, btrfs is not properly
    erroring out. We need to check the uptodate bits after
    reading in the tree root node.

    Signed-off-by: David Woodhouse
    Signed-off-by: Chris Mason

    David Woodhouse
     
  • This removes the repeated calls to btrfs_header_level. One call to
    btrfs_header_level(c) is enough.

    Signed-off-by: Daniel Cadete
    Signed-off-by: Chris Mason

    Daniel Cadete
     
  • Move the call to BUG_ON to before the dereference of the tested value.

    Signed-off-by: Julia Lawall
    Signed-off-by: Chris Mason

    Julia Lawall
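
    The pattern being fixed, shown in a generic form rather than the actual
    btrfs call site:

      #include <linux/bug.h>

      struct item {
              int value;
      };

      static int get_value_buggy(struct item *it)
      {
              int v = it->value;      /* dereference happens first ...        */
              BUG_ON(!it);            /* ... so the NULL check comes too late */
              return v;
      }

      static int get_value_fixed(struct item *it)
      {
              BUG_ON(!it);            /* check first ...      */
              return it->value;       /* ... then dereference */
      }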
     
  • It was never actually doing anything anyway (see the loop condition),
    and it would be difficult to make it work for RAID[56].

    Even if it was actually working, it's checking for the wrong thing
    anyway. Instead of checking whether we list a block which _doesn't_ land
    at the relevant physical location, it should be checking that we _have_
    listed all the logical blocks which refer to the required physical
    location on all devices.

    This function is only called from remove_sb_from_cache() to ensure that
    we reserve the logical blocks which would reside at the same physical
    location as the superblock copies. So listing more blocks than we need
    is actually OK.

    With RAID[56] we're going to throw away an entire stripe for each block
    we have to ignore, so we _are_ going to list blocks other than the
    ones which actually contain the superblock.

    Signed-off-by: David Woodhouse
    Signed-off-by: Chris Mason

    David Woodhouse
     
  • If spin_lock_irqsave is called twice in a row with the same second
    argument, the interrupt state at the point of the second call overwrites
    the value saved by the first call. Indeed, the second call does not need
    to save the interrupt state, so it is changed to a simple spin_lock.

    Signed-off-by: Julia Lawall
    Signed-off-by: Chris Mason

    Julia Lawall
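
    An illustration of the problem and of the fix, using made-up lock names:

      #include <linux/spinlock.h>

      static DEFINE_SPINLOCK(lock_a);
      static DEFINE_SPINLOCK(lock_b);

      static void nested_lock_buggy(void)
      {
              unsigned long flags;

              spin_lock_irqsave(&lock_a, flags);
              spin_lock_irqsave(&lock_b, flags);      /* overwrites the state saved above */
              /* ... */
              spin_unlock_irqrestore(&lock_b, flags);
              spin_unlock_irqrestore(&lock_a, flags); /* may restore the wrong state */
      }

      static void nested_lock_fixed(void)
      {
              unsigned long flags;

              spin_lock_irqsave(&lock_a, flags);      /* interrupts saved and disabled */
              spin_lock(&lock_b);                     /* no need to save them again */
              /* ... */
              spin_unlock(&lock_b);
              spin_unlock_irqrestore(&lock_a, flags);
      }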
     

22 Jul, 2009

5 commits

  • Writing dirty block groups may allocate new blocks, and so may add new
    delayed back refs. btrfs_run_delayed_refs may in turn make some block groups
    dirty.

    commit_cowonly_roots does not handle this recursion properly, and some dirty
    blocks can be left unwritten at commit time. This patch moves
    btrfs_run_delayed_refs into the loop that writes dirty block groups, and
    makes the code not break out of the loop until there are no dirty block
    groups or delayed back refs left.

    Signed-off-by: Yan Zheng
    Signed-off-by: Chris Mason

    Yan Zheng
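
    The resulting commit-time control flow, as a toy model (the helpers and
    counters below are hypothetical; the point is only that the loop keeps
    going until neither side has produced new work):

      /* Toy model of the commit-time loop; not the btrfs code itself. */
      struct txn_state {
              int dirty_block_groups;
              int delayed_refs;
      };

      static void run_delayed_refs(struct txn_state *t)
      {
              /* processing delayed refs can dirty more block groups */
              if (t->delayed_refs) {
                      t->delayed_refs = 0;
                      t->dirty_block_groups++;
              }
      }

      static void write_dirty_block_groups(struct txn_state *t)
      {
              /* writing block groups allocates blocks, which can queue more
               * delayed back refs (modelled as shrinking work so the toy
               * loop terminates) */
              if (t->dirty_block_groups > 1)
                      t->delayed_refs++;
              t->dirty_block_groups = 0;
      }

      static void commit_cowonly_work(struct txn_state *t)
      {
              /* don't break out until neither side has pending work */
              while (t->dirty_block_groups || t->delayed_refs) {
                      run_delayed_refs(t);
                      write_dirty_block_groups(t);
              }
      }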
     
  • When walking up the tree, btrfs_find_next_key assumes the upper-level tree
    block is properly locked. This isn't always true, even when path->keep_locks
    is 1, because btrfs_find_next_key may advance path->slots[] several times
    instead of only once.

    When 'path->slots[level] >= btrfs_header_nritems(path->nodes[level])' is found,
    we can't guarantee the original value of 'path->slots[level]' is
    'btrfs_header_nritems(path->nodes[level]) - 1'. If it's not, the tree block at
    'level + 1' isn't locked.

    This patch fixes the issue by explicitly checking the locking state and
    re-searching the tree if the block isn't locked.

    Signed-off-by: Yan Zheng
    Signed-off-by: Chris Mason

    Yan Zheng
     
  • If 1 is returned by btrfs_search_slot, the path already points to the
    first item with 'key > searching key'. So increasing path->slots[0] by
    one is superfluous in that case.

    Signed-off-by: Yan Zheng
    Signed-off-by: Chris Mason

    Yan Zheng
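
    A sketch of the caller-side pattern this affects (it assumes the btrfs
    tree-search internals; stepping to the next leaf when the slot runs off
    the end of the current one is elided):

      static int demo_find_next_item(struct btrfs_root *root,
                                     struct btrfs_key *key,
                                     struct btrfs_path *path)
      {
              int ret = btrfs_search_slot(NULL, root, key, path, 0, 0);

              if (ret < 0)
                      return ret;
              if (ret == 0)
                      path->slots[0]++;       /* exact match: step past it */
              /* ret == 1: slots[0] already points at the first item with a
               * key greater than the search key, so no increment is needed */
              return 0;
      }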
     
  • Change 'goto done' to 'break' for the case where all device extents have
    been freed, so that the code that updates the space information will be
    executed.

    Signed-off-by: Yan Zheng
    Signed-off-by: Chris Mason

    Yan Zheng
     
  • Use __le64 instead of u64 in the on-disk structure definitions.

    Signed-off-by: Yan Zheng
    Signed-off-by: Chris Mason

    Yan Zheng
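
    A minimal example of the convention (a hypothetical on-disk item, not an
    actual btrfs structure): on-disk fields carry an explicit little-endian
    type and are converted only at the accessor boundary.

      #include <linux/types.h>
      #include <asm/byteorder.h>

      /* Hypothetical on-disk item: always little-endian on disk. */
      struct demo_disk_item {
              __le64 bytenr;
              __le64 num_bytes;
      } __attribute__((__packed__));

      static inline u64 demo_item_bytenr(const struct demo_disk_item *di)
      {
              return le64_to_cpu(di->bytenr);         /* convert on read */
      }

      static inline void demo_item_set_bytenr(struct demo_disk_item *di, u64 val)
      {
              di->bytenr = cpu_to_le64(val);          /* convert on write */
      }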
     

11 Jun, 2009

7 commits

  • During tree log replay, we read in the tree log roots,
    process them and then free them. A recent change
    takes an extra reference on the root node of the tree
    when the root is read in, and stores that reference
    in root->commit_root.

    This reference was not being freed, leaving us with
    one buffer pinned in ram for each subvol with
    a tree log root after a crash.

    Signed-off-by: Chris Mason

    Chris Mason
     
  • This happens during subvol creation.

    Signed-off-by: Chris Mason

    Chris Mason
     
  • It was printing nodatacsum, which was not the correct option name.

    Signed-off-by: Chris Mason

    Chris Mason
     
  • lookup_inline_extent_backref only checks for duplicate backrefs for data
    extents. It assumes backrefs for tree blocks never conflict.

    This patch makes lookup_inline_extent_backref check for duplicate backrefs
    for both data and tree blocks, so that we can detect potential bugs earlier.
    This is a safety check; strictly speaking it is not required.

    Signed-off-by: Yan Zheng
    Signed-off-by: Chris Mason

    Yan Zheng
     
  • This patch fixes a bug which may result in a race condition between
    btrfs_start_workers() and worker_loop().

    btrfs_start_workers(), executed in a parent thread, writes
    workers->worker, and worker_loop(), running in a child thread, reads
    workers->worker. However, there is no synchronization enforcing the
    order of the two operations.

    This patch makes btrfs_start_workers() fill in workers->worker before
    it starts the child thread that runs worker_loop().

    Signed-off-by: Chris Mason

    Shin Hong
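
    The ordering fix can be pictured like this (hypothetical names; the point
    is that everything the child thread reads is written before kthread_run
    creates it):

      #include <linux/kthread.h>
      #include <linux/err.h>

      struct demo_workers;                    /* owning pool, details elided */

      struct demo_worker {
              struct demo_workers *workers;   /* read by the child thread */
              struct task_struct *task;
      };

      static int demo_worker_loop(void *arg)
      {
              struct demo_worker *worker = arg;

              /* relies on worker->workers being valid from the first line */
              (void)worker->workers;
              return 0;
      }

      static int demo_start_worker(struct demo_workers *workers,
                                   struct demo_worker *worker)
      {
              worker->workers = workers;      /* publish the data first ... */

              /* ... and only then start the thread that will read it */
              worker->task = kthread_run(demo_worker_loop, worker, "demo-worker");
              if (IS_ERR(worker->task))
                      return PTR_ERR(worker->task);
              return 0;
      }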
     
  • write_dev_supers is called in sequence. First it is called with wait == 0,
    which starts IO on all of the super blocks for a given device. Then it is
    called with wait == 1 to make sure they all reach the disk.

    It doesn't currently pin the buffers between the two calls, and it also
    assumes the buffers won't go away between the two calls, leading to
    an oops if the VM manages to free the buffers in the middle of the sync.

    This fixes that assumption and updates the code to return an error if things
    are not up to date when the wait == 1 run is done.

    Signed-off-by: Hisashi Hifumi
    Signed-off-by: Chris Mason

    Hisashi Hifumi
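
    Roughly, the two passes end up looking like this (a simplified sketch, not
    the actual write_dev_supers code):

      #include <linux/errno.h>
      #include <linux/buffer_head.h>

      /* wait == 0 pass: take an extra reference before submitting so the VM
       * cannot free the buffer in the middle of the sync; the reference is
       * dropped by end_buffer_write_sync when the IO completes. */
      static void demo_submit_super(struct buffer_head *bh)
      {
              get_bh(bh);
              lock_buffer(bh);
              bh->b_end_io = end_buffer_write_sync;
              submit_bh(WRITE, bh);
      }

      /* wait == 1 pass: wait for the IO and report buffers that did not make
       * it to disk instead of assuming success. */
      static int demo_wait_super(struct buffer_head *bh)
      {
              int ret = 0;

              wait_on_buffer(bh);
              if (!buffer_uptodate(bh))
                      ret = -EIO;
              brelse(bh);                     /* drop the caller's reference */
              return ret;
      }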
     
  • On multi-device filesystems, btrfs writes supers to all of the devices
    before considering a sync complete. There wasn't any additional
    locking between super writeout and the device list management code,
    because device management was done inside a transaction and
    super writeout only happened with no transaction writers running.

    With the btrfs fsync log and other async transaction updates, this
    has been racy for some time. This adds a mutex to protect
    the device list. The existing volume mutex could not be reused due to
    transaction lock ordering requirements.

    Signed-off-by: Chris Mason

    Chris Mason
     

10 Jun, 2009

13 commits

  • ... otherwise generic_permission() will allow *anything* for all
    files you don't own and that have some group permissions.

    Signed-off-by: Al Viro
    Signed-off-by: Chris Mason

    Al Viro
     
  • In btrfs, fdatasync and fsync are identical, but fdatasync should skip
    committing the transaction when inode->i_state has only I_DIRTY_SYNC set,
    which indicates that only atime and/or mtime were updated.
    The following patch improves fdatasync throughput.

    --file-block-size=4K --file-total-size=16G --file-test-mode=rndwr
    --file-fsync-mode=fdatasync run

    Results:
    -2.6.30-rc8
    Test execution summary:
    total time: 1980.6540s
    total number of events: 10001
    total time taken by event execution: 1192.9804
    per-request statistics:
    min: 0.0000s
    avg: 0.1193s
    max: 15.3720s
    approx. 95 percentile: 0.7257s

    Threads fairness:
    events (avg/stddev): 625.0625/151.32
    execution time (avg/stddev): 74.5613/9.46

    -2.6.30-rc8-patched
    Test execution summary:
    total time: 1695.9118s
    total number of events: 10000
    total time taken by event execution: 871.3214
    per-request statistics:
    min: 0.0000s
    avg: 0.0871s
    max: 10.4644s
    approx. 95 percentile: 0.4787s

    Threads fairness:
    events (avg/stddev): 625.0000/131.86
    execution time (avg/stddev): 54.4576/8.98

    Signed-off-by: Hisashi Hifumi
    Signed-off-by: Chris Mason

    Hisashi Hifumi
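
    The heart of the change is a check along these lines (a sketch only;
    'datasync' is the flag passed in to the fsync file operation):

      #include <linux/fs.h>

      /* A pure atime/mtime update leaves I_DIRTY_SYNC set but not
       * I_DIRTY_DATASYNC, so fdatasync has nothing it must make durable
       * and can skip the transaction commit. */
      static int demo_needs_commit(struct inode *inode, int datasync)
      {
              if (datasync && !(inode->i_state & I_DIRTY_DATASYNC))
                      return 0;
              return 1;
      }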
     
  • There's no need to preserve this abstraction; it used to let us use
    hardware crc32c support directly, but libcrc32c is already doing that for us
    through the crypto API -- so we're already using the Intel crc32c
    acceleration where appropriate.

    Signed-off-by: David Woodhouse
    Signed-off-by: Chris Mason

    David Woodhouse
     
  • Add support for the standard attributes set via chattr and read via
    lsattr. Currently we store the attributes in the flags value in
    the btrfs inode, but I wonder whether we should split it into two so
    that we don't have to keep converting between the two formats.

    Remove the btrfs_clear_flag/btrfs_set_flag/btrfs_test_flag macros
    as they were confusing the existing code and got in the way of the
    new additions.

    Also add the FS_IOC_GETVERSION ioctl for getting i_generation as it's
    trivial.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Chris Mason

    Christoph Hellwig
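
    From userspace this is consumed through the standard interface
    (FS_IOC_GETFLAGS/FS_IOC_SETFLAGS as used by lsattr and chattr, plus
    FS_IOC_GETVERSION); for example:

      #include <stdio.h>
      #include <fcntl.h>
      #include <unistd.h>
      #include <sys/ioctl.h>
      #include <linux/fs.h>

      int main(int argc, char **argv)
      {
              int fd, flags = 0, version = 0;

              if (argc < 2)
                      return 1;
              fd = open(argv[1], O_RDONLY);
              if (fd < 0)
                      return 1;

              /* the same attribute bits that lsattr/chattr operate on */
              if (ioctl(fd, FS_IOC_GETFLAGS, &flags) == 0)
                      printf("flags: %#x\n", flags);

              /* inode generation, exposed via the new GETVERSION support */
              if (ioctl(fd, FS_IOC_GETVERSION, &version) == 0)
                      printf("generation: %d\n", version);

              close(fd);
              return 0;
      }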
     
  • During mount, btrfs will check the queue nonrot flag
    for all the devices found in the FS. If they are all
    non-rotating, SSD mode is enabled by default.

    If the FS was mounted with -o nossd, the non-rotating
    flag is ignored.

    Signed-off-by: Chris Mason

    Chris Mason
     
  • Some SSDs perform best when reusing block numbers often, while
    others perform much better when clustering strictly allocates
    big chunks of unused space.

    The default mount -o ssd will find rough groupings of blocks
    where there are a bunch of free blocks that might have some
    allocated blocks mixed in.

    mount -o ssd_spread will make sure there are no allocated blocks
    mixed in. It should perform better on lower end SSDs.

    Signed-off-by: Chris Mason

    Chris Mason
     
  • In SSD mode for data, and all the time for metadata, the allocator
    will try to find a cluster of nearby blocks for allocations. This
    commit adds extra checks to make sure that each free block in the
    cluster is close to the last one.

    Signed-off-by: Chris Mason

    Chris Mason
     
  • This allows you to turn off the ssd mode via remount.

    Signed-off-by: Chris Mason

    Chris Mason
     
  • The btrfs IO submission threads try to service a bunch of devices with a small
    number of threads. They do a congestion check to try and avoid waiting
    on requests for a busy device.

    The checks make sure we've sent a few requests down to a given device just so
    that we aren't bouncing between busy devices without actually sending down
    any IO. The counter used to decide if we can switch to the next device
    is somewhat overloaded. It is also being used to decide if we've done
    a good batch of requests between the WRITE_SYNC or regular priority lists.
    It may get reset to zero often, leaving us hammering on a busy device
    instead of moving on to another disk.

    This commit adds a new counter for the number of bios sent while
    servicing a device. It doesn't get reset or fiddled with. On
    multi-device filesystems, this fixes IO stalls in streaming
    write workloads.

    Signed-off-by: Chris Mason

    Chris Mason
     
  • Btrfs uses dedicated threads to submit bios when checksumming is on,
    which allows us to make sure the threads dedicated to checksumming don't get
    stuck waiting for requests. For each btrfs device, there are
    two lists of bios. One list is for WRITE_SYNC bios and the other
    is for regular priority bios.

    The IO submission threads used to process all of the WRITE_SYNC bios first and
    then switch to the regular bios. This commit makes sure we don't completely
    starve the regular bios by rotating between the two lists.

    WRITE_SYNC bios are still favored 2:1 over the regular bios, and this tries
    to run in batches to avoid seeking. Benchmarking shows this eliminates
    stalls during streaming buffered writes on both multi-device and
    single device filesystems.

    If the regular bios starve, the system can end up with a large amount of ram
    pinned down in writeback pages. If we are a little more fair between the two
    classes, we're able to keep throughput up and make progress on the bulk of
    our dirty ram.

    Signed-off-by: Chris Mason

    Chris Mason
     
  • Once a metadata block has been written, it must be recowed, so the
    btrfs dirty balancing call has a check to make sure a fair amount of metadata
    was actually dirty before it started writing it back to disk.

    A previous commit had changed the dirty tracking for metadata without
    updating the btrfs dirty balancing checks. This commit switches it
    to use the correct counter.

    Signed-off-by: Chris Mason

    Chris Mason
     
  • The block allocator in SSD mode will try to find groups of free blocks
    that are close together. This commit makes it loop less on a given
    group size before bumping it.

    The end result is that we are less likely to fill small holes in the
    available free space, but we don't waste as much CPU building the
    large cluster used by ssd mode.

    Signed-off-by: Chris Mason

    Chris Mason
     
  • With the new back reference code, the cost of a balance has gone down
    in terms of the number of back reference updates done. This commit
    makes us more aggressively balance leaves and nodes as they become
    less full.

    Signed-off-by: Chris Mason

    Chris Mason