16 Jan, 2016

1 commit


22 Nov, 2014

1 commit

  • We use the modified list to keep track of which extents have been modified so we
    know which ones are candidates for logging at fsync() time. Newly modified
    extents are added to the list at modification time, around the same time the
    ordered extent is created. We do this so that we don't have to wait for ordered
    extents to complete before we know what we need to log. The problem is when
    something like this happens:

    log extent 0-4k on inode 1
    copy csum for 0-4k from ordered extent into log
    sync log
    commit transaction
    log some other extent on inode 1
    ordered extent for 0-4k completes and adds itself onto modified list again
    log changed extents
    see ordered extent for 0-4k has already been logged
    at this point we assume the csum has been copied
    sync log
    crash

    On replay we will see the extent 0-4k in the log, drop the original 0-4k
    extent (which is the same one we are replaying), which also drops the csum,
    and then we won't find the csum in the log for that bytenr. This of course
    causes errors about missing csums for certain ranges of our inode. So remove
    the modified list manipulation in unpin_extent_cache(); any modified extents
    should have been added well before now, and we don't want them re-logged (a
    minimal sketch of this follows the entry). This fixes my test that could
    reliably reproduce this problem. Thanks,

    cc: stable@vger.kernel.org
    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Josef Bacik
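
    To make the fix above more concrete, here is a minimal userspace model (not
    the kernel code; all struct and function names are simplified stand-ins) of
    the modified-list bookkeeping: extents go on the list at modification time,
    and the ordered-extent completion path leaves the list alone, so a later
    fsync cannot log the extent again without its csum.

    #include <stdio.h>

    struct extent {
        unsigned long long start, len;
        struct extent *next;   /* link on the per-inode "modified" list */
        int logged;            /* csum already copied into the log */
    };

    static struct extent *modified;   /* head of the modified list */

    static void mark_modified(struct extent *em)
    {
        em->next = modified;          /* added once, at modification time */
        modified = em;
    }

    static void ordered_extent_complete(struct extent *em)
    {
        /* after the fix: completion does NOT put em back on the list */
        (void)em;
    }

    static void fsync_log_modified(void)
    {
        for (struct extent *em = modified; em; em = em->next) {
            printf("logging extent %llu+%llu and its csum\n", em->start, em->len);
            em->logged = 1;
        }
        modified = NULL;              /* the list is consumed by the log sync */
    }

    int main(void)
    {
        struct extent em = { .start = 0, .len = 4096 };

        mark_modified(&em);
        fsync_log_modified();          /* first fsync: extent and csum logged */
        ordered_extent_complete(&em);  /* must not re-add em to the list */
        fsync_log_modified();          /* later fsync: nothing re-logged */
        return 0;
    }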
     

20 Jun, 2014

1 commit

  • While running balance, scrub, fsstress concurrently we hit the
    following kernel crash:

    [56561.448845] BTRFS info (device sde): relocating block group 11005853696 flags 132
    [56561.524077] BUG: unable to handle kernel NULL pointer dereference at 0000000000000078
    [56561.524237] IP: [] scrub_chunk.isra.12+0xdd/0x130 [btrfs]
    [56561.524297] PGD 9be28067 PUD 7f3dd067 PMD 0
    [56561.524325] Oops: 0000 [#1] SMP
    [....]
    [56561.527237] Call Trace:
    [56561.527309] [] scrub_enumerate_chunks+0x24e/0x490 [btrfs]
    [56561.527392] [] ? abort_exclusive_wait+0x50/0xb0
    [56561.527476] [] btrfs_scrub_dev+0x1a4/0x530 [btrfs]
    [56561.527561] [] btrfs_ioctl+0x13f7/0x2a90 [btrfs]
    [56561.527639] [] do_vfs_ioctl+0x2e0/0x4c0
    [56561.527712] [] ? vtime_account_user+0x54/0x60
    [56561.527788] [] ? __audit_syscall_entry+0x9c/0xf0
    [56561.527870] [] SyS_ioctl+0x81/0xa0
    [56561.527941] [] tracesys+0xdd/0xe2
    [...]
    [56561.528304] RIP [] scrub_chunk.isra.12+0xdd/0x130 [btrfs]
    [56561.528395] RSP
    [56561.528454] CR2: 0000000000000078

    This is because in btrfs_relocate_chunk() we will free @bdev directly while
    scrub may still hold the extent mapping, and may then access freed memory.

    Fix this problem by moving the work of freeing @bdev into free_extent_map(),
    which is reference counted (see the sketch after this entry).

    Reported-by: Qu Wenruo
    Signed-off-by: Wang Shilong
    Signed-off-by: Miao Xie
    Signed-off-by: Chris Mason

    Wang Shilong
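
    For illustration, a small self-contained sketch of the reference-counted
    lifetime the fix relies on. The struct, its fields and map_put() below are
    simplified stand-ins for the kernel's extent_map and free_extent_map(); no
    locking is modeled.

    #include <stdio.h>
    #include <stdlib.h>

    struct map {
        int refs;
        void *bdev;               /* resource that must outlive all users */
    };

    static struct map *map_get(struct map *em)
    {
        em->refs++;
        return em;
    }

    static void map_put(struct map *em)    /* models free_extent_map() */
    {
        if (--em->refs == 0) {
            free(em->bdev);        /* freed only when nobody can touch it */
            free(em);
            printf("map freed\n");
        }
    }

    int main(void)
    {
        struct map *em = calloc(1, sizeof(*em));

        em->bdev = malloc(64);
        em->refs = 1;                         /* owner: the relocation path */

        struct map *scrub_ref = map_get(em);  /* scrub still uses the map */
        map_put(em);                          /* relocation drops its ref */
        map_put(scrub_ref);                   /* scrub done: safe to free now */
        return 0;
    }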
     

11 Mar, 2014

2 commits

  • While dropping extent map structures from the extent cache that cover our
    target range, we would remove each extent map structure from the red black
    tree and then add either 1 or 2 new extent map structures if the former
    extent map covered sections outside our target range.

    This change simply attempts to replace the existing extent map structure
    with a new one that covers the subsection we're not interested in, instead
    of doing a red black tree remove operation followed by an insertion (a
    sketch of the idea follows this entry).

    The number of elements in an inode's extent map tree can get very high for large
    files under random writes. For example, while running the following test:

    sysbench --test=fileio --file-num=1 --file-total-size=10G \
    --file-test-mode=rndrw --num-threads=32 --file-block-size=32768 \
    --max-requests=500000 --file-rw-ratio=2 [prepare|run]

    I captured the following histogram of the number of extent_map items in the
    red black tree while that test was running:

    Count: 122462
    Range: 1.000 - 172231.000; Mean: 96415.831; Median: 101855.000; Stddev: 49700.981
    Percentiles: 90th: 160120.000; 95th: 166335.000; 99th: 171070.000
    1.000 - 5.231: 452 |
    5.231 - 187.392: 87 |
    187.392 - 585.911: 206 |
    585.911 - 1827.438: 623 |
    1827.438 - 5695.245: 1962 #
    5695.245 - 17744.861: 6204 ####
    17744.861 - 55283.764: 21115 ############
    55283.764 - 172231.000: 91813 #####################################################

    Benchmark:

    sysbench --test=fileio --file-num=1 --file-total-size=10G --file-test-mode=rndwr \
    --num-threads=64 --file-block-size=32768 --max-requests=0 --max-time=60 \
    --file-io-mode=sync --file-fsync-freq=0 [prepare|run]

    Before this change: 122.1Mb/sec
    After this change: 125.07Mb/sec
    (averages of 5 test runs)

    Test machine: quad core Intel i5-3570K, 32GB of RAM, SSD

    Signed-off-by: Filipe David Borba Manana
    Signed-off-by: Josef Bacik

    Filipe Manana
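
    A hedged sketch of the replace-in-place idea, using a plain parent-pointer
    binary tree instead of the kernel's rbtree (which has rb_replace_node() for
    this; colors and rebalancing are ignored here). The replacement is legal
    because a subrange of the old, non-overlapping extent sorts into the same
    slot as the original node.

    #include <stdio.h>
    #include <stdlib.h>

    struct node {
        unsigned long long start, len;
        struct node *left, *right, *parent;
    };

    /* splice "new" into the exact slot held by "old"; O(1), no re-walk */
    static void replace_node(struct node **root, struct node *old, struct node *new)
    {
        new->left = old->left;
        new->right = old->right;
        new->parent = old->parent;
        if (new->left)
            new->left->parent = new;
        if (new->right)
            new->right->parent = new;
        if (!old->parent)
            *root = new;
        else if (old->parent->left == old)
            old->parent->left = new;
        else
            old->parent->right = new;
    }

    int main(void)
    {
        struct node a = { .start = 0,    .len = 8192 };
        struct node b = { .start = 4096, .len = 4096 };  /* trailing subrange */
        struct node *root = &a;

        /* drop range [0,4096) from a: keep only the tail, swapped in place */
        replace_node(&root, &a, &b);
        printf("root now starts at %llu\n", root->start);
        return 0;
    }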
     
  • We don't need an unsigned int field in the extent_map struct to tell us
    whether the extent map is in the inode's extent_map tree or not. We can use
    the rb_node struct field together with the RB_CLEAR_NODE and RB_EMPTY_NODE
    macros for the same purpose (see the sketch after this entry).

    This reduces sizeof(struct extent_map) from 152 bytes to 144 bytes (on a
    64-bit system).

    Signed-off-by: Filipe David Borba Manana
    Reviewed-by: David Sterba
    Signed-off-by: Josef Bacik

    Filipe Manana
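
    A small illustration of the trick, with a hand-rolled sentinel standing in
    for the kernel's RB_CLEAR_NODE()/RB_EMPTY_NODE() (which encode the same
    idea in the node's parent/color word): the embedded node itself records
    whether the structure is linked into a tree, so no extra flag field is
    needed.

    #include <stdbool.h>
    #include <stdio.h>

    struct tree_node {
        struct tree_node *parent;   /* parent == self means "not in a tree" */
    };

    struct extent {
        struct tree_node rb;        /* no separate in_tree field required */
    };

    static void node_clear(struct tree_node *n)        { n->parent = n; }
    static bool node_empty(const struct tree_node *n)  { return n->parent == n; }

    int main(void)
    {
        struct extent em;

        node_clear(&em.rb);                   /* like RB_CLEAR_NODE() */
        printf("in tree: %d\n", !node_empty(&em.rb));   /* 0 */

        em.rb.parent = NULL;                  /* pretend it was inserted as root */
        printf("in tree: %d\n", !node_empty(&em.rb));   /* 1 */
        return 0;
    }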
     

29 Jan, 2014

2 commits

  • When merging an extent_map with its right neighbor, increment
    its block_len with the neighbor's block_len.

    Signed-off-by: Filipe David Borba Manana
    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Filipe David Borba Manana
     
  • Before this change, adding an extent map to the extent map tree of an
    inode required 2 tree navigations:

    1) a tree navigation to search for an existing extent map starting
    at the same offset, or an extent map that overlaps the extent map we
    want to insert;

    2) another tree navigation to add the extent map to the tree (if the
    former tree search didn't find anything).

    This change merges these 2 steps into a single one (see the sketch after
    this entry). While running the first few btrfs xfstests I noticed these
    trees easily had a few hundred elements, and with the following sysbench
    test they often reached over 1100 elements.

    Test:

    sysbench --test=fileio --file-num=32 --file-total-size=10G \
    --file-test-mode=seqwr --num-threads=512 --file-block-size=8192 \
    --max-requests=1000000 --file-io-mode=sync [prepare|run]

    (fs created with mkfs.btrfs -l 4096 -f /dev/sdb3 before each sysbench
    prepare phase)

    Before this patch:

    run 1 - 41.894Mb/sec
    run 2 - 40.527Mb/sec
    run 3 - 40.922Mb/sec
    run 4 - 49.433Mb/sec
    run 5 - 40.959Mb/sec

    average - 42.75Mb/sec

    After this patch:

    run 1 - 48.036Mb/sec
    run 2 - 50.21Mb/sec
    run 3 - 50.929Mb/sec
    run 4 - 46.881Mb/sec
    run 5 - 53.192Mb/sec

    average - 49.85Mb/sec

    Signed-off-by: Filipe David Borba Manana
    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Filipe David Borba Manana
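
    A sketch of the single-descent insert over a plain binary search tree (the
    kernel walks an rbtree and also checks for overlaps; both are left out
    here): the walk that finds the insertion slot is the same walk that detects
    an existing entry, so no second lookup is needed.

    #include <stdio.h>
    #include <stdlib.h>

    struct em {
        unsigned long long start;
        struct em *left, *right;
    };

    /* returns NULL on success, or the already-present conflicting entry */
    static struct em *insert_or_find(struct em **root, struct em *new)
    {
        struct em **p = root;

        while (*p) {
            if (new->start < (*p)->start)
                p = &(*p)->left;
            else if (new->start > (*p)->start)
                p = &(*p)->right;
            else
                return *p;           /* found during the same walk */
        }
        *p = new;                    /* link into the slot the walk stopped at */
        return NULL;
    }

    int main(void)
    {
        struct em a = { .start = 0 }, b = { .start = 4096 }, dup = { .start = 0 };
        struct em *root = NULL;

        insert_or_find(&root, &a);
        insert_or_find(&root, &b);
        printf("conflict: %p\n", (void *)insert_or_find(&root, &dup));
        return 0;
    }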
     

07 May, 2013

2 commits

  • Big patch, but all it does is add statics to functions which
    are in fact static, then remove the associated dead-code fallout.

    removed functions:

    btrfs_iref_to_path()
    __btrfs_lookup_delayed_deletion_item()
    __btrfs_search_delayed_insertion_item()
    __btrfs_search_delayed_deletion_item()
    find_eb_for_page()
    btrfs_find_block_group()
    range_straddles_pages()
    extent_range_uptodate()
    btrfs_file_extent_length()
    btrfs_scrub_cancel_devid()
    btrfs_start_transaction_lflush()

    btrfs_print_tree() is left because it is used for debugging.
    btrfs_start_transaction_lflush() and btrfs_reada_detach() are
    left for symmetry.

    ulist.c functions are left, another patch will take care of those.

    Signed-off-by: Eric Sandeen
    Signed-off-by: Josef Bacik

    Eric Sandeen
     
  • A user sent me a btrfs-image of a file system that was panicking on mount
    during log recovery. I had originally thought these problems were from a bug
    in the free space cache code, but that was just a symptom of the problem.
    The problem is if your application does something like this:

    [prealloc][prealloc][prealloc]

    the internal extent maps will merge those all together into one extent map,
    even though on disk they are 3 separate extents. So if you go to write into
    one of these ranges the extent map will be right, since we use the physical
    extent when doing the write, but when we log the extents they will use the
    wrong sizes for the remaining prealloc space. If this doesn't happen to trip
    up the free space cache (which it won't in a lot of cases) then you will get
    bogus entries in your extent tree which will screw stuff up later. The data
    and such will still work, but everything else is broken. This patch fixes
    this by not allowing extents that are on the modified list to be merged (a
    sketch follows this entry). This has the side effect that we are no longer
    adding everything to the modified list all the time, which means we now have
    to call btrfs_drop_extents every time we log an extent into the tree. So
    this allows me to drop all the special-case code I was using to get around
    calling btrfs_drop_extents. With this patch the testcase I've created no
    longer creates a bogus file system after replaying the log. Thanks,

    Signed-off-by: Josef Bacik

    Josef Bacik
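
    A toy sketch of the merge policy the patch introduces, under the assumption
    that being on the modified list is tracked as a simple flag here (the kernel
    keeps an actual list plus flags): contiguous extents are merged only when
    neither of them is still waiting to be logged.

    #include <stdbool.h>
    #include <stdio.h>

    struct em {
        unsigned long long start, len;
        bool on_modified_list;       /* still to be logged at fsync time */
    };

    static bool mergeable(const struct em *a, const struct em *b)
    {
        if (a->start + a->len != b->start)
            return false;            /* not contiguous */
        if (a->on_modified_list || b->on_modified_list)
            return false;            /* would lose the per-extent boundaries */
        return true;
    }

    int main(void)
    {
        struct em p1 = { 0, 4096, true }, p2 = { 4096, 4096, true };

        printf("merge preallocs: %d\n", mergeable(&p1, &p2));   /* 0 */
        return 0;
    }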
     

02 Mar, 2013

1 commit

  • We want to avoid module.h where possible, since it in turn includes
    nearly all of header space. This means removing it where it is not
    required, and using export.h where we are only exporting symbols via
    EXPORT_SYMBOL and friends.

    Signed-off-by: Paul Gortmaker
    Signed-off-by: Chris Mason

    Paul Gortmaker
     

08 Feb, 2013

1 commit

  • Pull btrfs fixes from Chris Mason:
    "We've got corner cases for updating i_size that ceph was hitting,
    error handling for quotas when we run out of space, a very subtle
    snapshot deletion race, a crash while removing devices, and one
    deadlock between subvolume creation and the sb_internal code (thanks
    lockdep)."

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs:
    Btrfs: move d_instantiate outside the transaction during mksubvol
    Btrfs: fix EDQUOT handling in btrfs_delalloc_reserve_metadata
    Btrfs: fix possible stale data exposure
    Btrfs: fix missing i_size update
    Btrfs: fix race between snapshot deletion and getting inode
    Btrfs: fix missing release of the space/qgroup reservation in start_transaction()
    Btrfs: fix wrong sync_writers decrement in btrfs_file_aio_write()
    Btrfs: do not merge logged extents if we've removed them from the tree
    btrfs: don't try to notify udev about missing devices

    Linus Torvalds
     

06 Feb, 2013

1 commit

  • You can run into a problem where, while somebody is fsyncing and writing
    out the existing extents, we will have removed the extent map from the em
    tree; it's still valid for the current fsync, so we go ahead and write it.
    The problem is that we then unconditionally try to merge it back into the em
    tree, but if we've removed it from the em tree that will cause use-after-free
    problems. Fix this to only merge if we are still part of the tree (see the
    sketch after this entry). Thanks,

    Signed-off-by: Josef Bacik

    Josef Bacik
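
    A minimal sketch of the guard: merge only if the entry is still linked into
    the tree. A self-pointing parent is used here as the "not in tree" marker;
    the kernel records membership on the extent_map itself.

    #include <stdbool.h>
    #include <stdio.h>

    struct node { struct node *parent; };    /* parent == self: not in a tree */
    struct em   { struct node rb; };

    static bool still_in_tree(const struct em *em)
    {
        return em->rb.parent != &em->rb;
    }

    static void try_merge(struct em *em)
    {
        if (!still_in_tree(em)) {
            printf("skip merge: extent was already removed from the tree\n");
            return;                           /* avoids the use-after-free */
        }
        printf("safe to merge with neighbours\n");
    }

    int main(void)
    {
        struct em em = { .rb.parent = &em.rb };   /* already removed */

        try_merge(&em);
        return 0;
    }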
     

26 Jan, 2013

1 commit

  • Pull btrfs fixes from Chris Mason:
    "It turns out that we had two crc bugs when running fsx-linux in a
    loop. Many thanks to Josef, Miao Xie, and Dave Sterba for nailing it
    all down. Miao also has a new OOM fix in this v2 pull as well.

    Ilya fixed a regression Liu Bo found in the balance ioctls for pausing
    and resuming a running balance across drives.

    Josef's orphan truncate patch fixes an obscure corruption we'd see
    during xfstests.

    Arne's patches address problems with subvolume quotas. If the user
    destroys quota groups incorrectly the FS will refuse to mount.

    The rest are smaller fixes and plugs for memory leaks."

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs: (30 commits)
    Btrfs: fix repeated delalloc work allocation
    Btrfs: fix wrong max device number for single profile
    Btrfs: fix missed transaction->aborted check
    Btrfs: Add ACCESS_ONCE() to transaction->abort accesses
    Btrfs: put csums on the right ordered extent
    Btrfs: use right range to find checksum for compressed extents
    Btrfs: fix panic when recovering tree log
    Btrfs: do not allow logged extents to be merged or removed
    Btrfs: fix a regression in balance usage filter
    Btrfs: prevent qgroup destroy when there are still relations
    Btrfs: ignore orphan qgroup relations
    Btrfs: reorder locks and sanity checks in btrfs_ioctl_defrag
    Btrfs: fix unlock order in btrfs_ioctl_rm_dev
    Btrfs: fix unlock order in btrfs_ioctl_resize
    Btrfs: fix "mutually exclusive op is running" error code
    Btrfs: bring back balance pause/resume logic
    btrfs: update timestamps on truncate()
    btrfs: fix btrfs_cont_expand() freeing IS_ERR em
    Btrfs: fix a bug when llseek for delalloc bytes behind prealloc extents
    Btrfs: fix off-by-one in lseek
    ...

    Linus Torvalds
     

25 Jan, 2013

1 commit

  • We drop the extent map tree lock while we're logging extents, so somebody
    could come in and merge another extent into this one and screw up our
    logging, or they could even remove us from the list, which would keep us
    from logging the extent or freeing our ref on it. So we need to make sure
    not to clear LOGGING until after the extent is logged, and only then can we
    merge it with adjacent extents (a sketch of this ordering follows the
    entry). Thanks,

    Signed-off-by: Josef Bacik

    Josef Bacik
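
    A sketch of the ordering, with EM_LOGGING as a hypothetical stand-in for
    the kernel's logging flag: while the flag is set (which is the whole time
    the tree lock is dropped for the actual log write), concurrent paths must
    skip the extent; it is cleared, and merging allowed, only after the extent
    has been logged.

    #include <stdio.h>

    enum { EM_LOGGING = 1u << 0 };

    struct em { unsigned flags; };

    /* other threads see the extent while the tree lock is dropped; they
       must skip it as long as EM_LOGGING is set */
    static void try_merge(struct em *em)
    {
        if (em->flags & EM_LOGGING) {
            printf("skip: extent is being logged\n");
            return;
        }
        printf("merged with neighbour\n");
    }

    int main(void)
    {
        struct em em = { .flags = EM_LOGGING };  /* set before dropping the lock */

        try_merge(&em);            /* concurrent merge attempt: skipped */
        /* ... extent written to the log here, lock re-taken ... */
        em.flags &= ~EM_LOGGING;   /* cleared only after logging finished */
        try_merge(&em);            /* now merging is safe */
        return 0;
    }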
     

19 Dec, 2012

1 commit

  • Pull btrfs update from Chris Mason:
    "A big set of fixes and features.

    In terms of line count, most of the code comes from Stefan, who added
    the ability to replace a single drive in place. This is different
    from how btrfs normally replaces drives, and is much much much faster.

    Josef is plowing through our synchronous write performance. This pull
    request does not include the DIO_OWN_WAITING patch that was discussed
    on the list, but it has a number of other improvements to cut down our
    latencies and CPU time during fsync/O_DIRECT writes.

    Miao Xie has a big series of fixes and is spreading out ordered
    operations over more CPUs. This improves performance and reduces
    contention.

    I've put in fixes for error handling around hash collisions. These
    are going back to individual stable kernels as I test against them.

    Otherwise we have a lot of fixes and cleanups, thanks everyone!
    raid5/6 is being rebased against the device replacement code. I'll
    have it posted this Friday along with a nice series of benchmarks."

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs: (115 commits)
    Btrfs: fix a bug of per-file nocow
    Btrfs: fix hash overflow handling
    Btrfs: don't take inode delalloc mutex if we're a free space inode
    Btrfs: fix autodefrag and umount lockup
    Btrfs: fix permissions of empty files not affected by umask
    Btrfs: put raid properties into global table
    Btrfs: fix BUG() in scrub when first superblock reading gives EIO
    Btrfs: do not call file_update_time in aio_write
    Btrfs: only unlock and relock if we have to
    Btrfs: use tokens where we can in the tree log
    Btrfs: optimize leaf_space_used
    Btrfs: don't memset new tokens
    Btrfs: only clear dirty on the buffer if it is marked as dirty
    Btrfs: move checks in set_page_dirty under DEBUG
    Btrfs: log changed inodes based on the extent map tree
    Btrfs: add path->really_keep_locks
    Btrfs: do not mark ems as prealloc if we are writing to them
    Btrfs: keep track of the extents original block length
    Btrfs: inline csums if we're fsyncing
    Btrfs: don't bother copying if we're only logging the inode
    ...

    Linus Torvalds
     

17 Dec, 2012

2 commits

  • We don't really need to copy extents from the source tree since we have all
    of the information already available to us in the extent_map tree. So
    instead just write the extents straight to the log tree and don't bother to
    copy the extent items from the source tree.

    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Josef Bacik
     
  • We are going to use EMs to log extents in the future, so we need to not
    mark them as prealloc if they aren't actually prealloc extents. Instead mark
    them with FILLING so we know to amend mod_start/mod_len, and that way we
    don't confuse the extent logging code. Thanks,

    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason

    Josef Bacik
     

30 Oct, 2012

1 commit


04 Oct, 2012

1 commit

  • Dave Sterba pointed out a sleeping-while-atomic bug while doing fsync.
    This is because I'm an idiot and didn't realize that rwlocks are spinlocks,
    so we've been holding this lock while doing allocations and such, which is
    not good. This patch fixes this by dropping the write lock before we do
    anything heavy and re-acquiring it when done. We also need to take a ref on
    the ems in case their corresponding pages are evicted, and mark them as
    being logged so that releasepage neither removes them nor drops them from
    our local list (see the sketch after this entry). Thanks,

    Reported-by: Dave Sterba
    Signed-off-by: Josef Bacik

    Josef Bacik
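
    A rough userspace sketch of the locking pattern, using a pthread rwlock as
    a stand-in for the kernel lock and simplified fields for the ref and logged
    markers: pin and mark the entries under the lock, drop it for the heavy
    work that may sleep, then re-take it to undo the marking.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_rwlock_t tree_lock = PTHREAD_RWLOCK_INITIALIZER;

    struct em { int refs; int logged; };

    static void fsync_log(struct em *list, int n)
    {
        pthread_rwlock_wrlock(&tree_lock);
        for (int i = 0; i < n; i++) {
            list[i].refs++;          /* pin: pages may be evicted meanwhile */
            list[i].logged = 1;      /* releasepage must leave these alone */
        }
        pthread_rwlock_unlock(&tree_lock);

        /* heavy work (allocations, writing the log) done without the lock */
        printf("logging %d extents without holding the tree lock\n", n);

        pthread_rwlock_wrlock(&tree_lock);
        for (int i = 0; i < n; i++) {
            list[i].logged = 0;
            list[i].refs--;          /* drop the pin */
        }
        pthread_rwlock_unlock(&tree_lock);
    }

    int main(void)
    {
        struct em ems[2] = { {1, 0}, {1, 0} };

        fsync_log(ems, 2);
        return 0;
    }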
     

02 Oct, 2012

3 commits

  • Use case:

    watch 'grep btrfs < /proc/slabinfo'

    This makes it easy to watch all caches in one go.

    Signed-off-by: David Sterba

    David Sterba
     
  • This is based on Josef's "Btrfs: turbo charge fsync".

    Josef's patch above performs very well in the random sync write test,
    because we won't have too many extents to merge.

    However, it does not perform well on this test:
    dd if=/dev/zero of=foobar bs=4k count=12500 oflag=sync

    The reason is that when we do sequential sync writes, we keep merging the
    current extent with just the previous one, so we get ever-growing
    accumulated extents to log:

    A(4k) --> AA(8k) --> AAA(12k) --> AAAA(16k) ...

    So we'll have to flush more and more checksums into the log tree, which is
    the bottleneck according to my tests.

    But we can avoid this by telling fsync the real extents that need to be
    logged (a small model of the effect follows this entry).

    With this, I did the above dd sync write test (size=50m),

                w/o (orig)    w/ (Josef's)    w/ (this)
    SATA        104KB/s       109KB/s         121KB/s
    ramdisk     1.5MB/s       1.5MB/s         10.7MB/s (613%)

    Signed-off-by: Liu Bo

    Liu Bo
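
    A small model (not kernel code) of why logging only the modified subrange
    helps for this sequential workload: with a single merged extent that keeps
    growing, logging the whole extent on every fsync covers an ever larger csum
    range, while logging just the newly written part stays constant per write.
    The numbers mirror the dd test above.

    #include <stdio.h>

    int main(void)
    {
        unsigned long long logged_whole = 0, logged_mod = 0;
        unsigned long long end = 0;
        const unsigned long long blk = 4096;

        for (int i = 0; i < 12500; i++) {   /* 12500 x 4k sync writes */
            end += blk;
            logged_whole += end;            /* csums for 0..end each time */
            logged_mod   += blk;            /* csums for the new 4k only  */
        }
        printf("csum bytes covered: whole=%llu KiB, modified-only=%llu KiB\n",
               logged_whole / 1024, logged_mod / 1024);
        return 0;
    }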
     
  • At least for the vm workload. Currently on fsync we will

    1) Truncate all items in the log tree for the given inode if they exist

    and

    2) Copy all items for a given inode into the log

    The problem with this is that for things like VMs you can have lots of
    extents from the fragmented writing behavior, and worse yet you may have
    only modified a few extents, not the entire thing. This patch fixes this
    problem by tracking which transid modified our extent; when we do the tree
    logging we find all of the extents we've modified in our current
    transaction, sort them and commit them (see the sketch after this entry).
    We also only truncate up to the xattrs of the inode and copy that stuff in
    normally, and then just drop any extents in the range we have that already
    exist in the log. Here are some numbers from a 50 meg fio job that does
    random writes and fsync()s after every write:

                    Original    Patched
    SATA drive      82KB/s      140KB/s
    Fusion drive    431KB/s     2532KB/s

    So around 2-6 times faster depending on your hardware. There are a few
    corner cases, for example if you truncate at all we have to do it the old
    way since there is no way to be sure what is in the log is ok. This
    probably could be done smarter, but if you write-fsync-truncate-write-fsync
    you deserve what you get. All this work is in RAM of course so if your
    inode gets evicted from cache and you read it in and fsync it we'll do it
    the slow way if we are still in the same transaction that we last modified
    the inode in.

    The biggest cool part of this is that it requires no changes to the recovery
    code, so if you fsync with this patch and crash and load an old kernel, it
    will run the recovery and be a-ok. I have tested this pretty thoroughly
    with an fsync tester and everything comes back fine, as well as xfstests.
    Thanks,

    Signed-off-by: Josef Bacik

    Josef Bacik
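
    A minimal sketch of the bookkeeping described above (field and function
    names are made up for the example): each extent remembers the transaction
    that last modified it, and fsync logs only the extents whose transid
    matches the current transaction instead of copying everything.

    #include <stdio.h>

    struct em { unsigned long long start; unsigned long long transid; };

    static void fsync_inode(struct em *ems, int n, unsigned long long cur_trans)
    {
        for (int i = 0; i < n; i++)
            if (ems[i].transid == cur_trans)
                printf("log extent at %llu\n", ems[i].start);
    }

    int main(void)
    {
        struct em ems[] = {
            { 0,       41 },   /* written in an earlier transaction */
            { 4096,    42 },   /* modified in the current transaction */
            { 1 << 20, 42 },
        };

        fsync_inode(ems, 3, 42);   /* only the last two are logged */
        return 0;
    }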
     

02 Aug, 2011

3 commits


02 May, 2011

2 commits


31 Mar, 2011

1 commit


15 Feb, 2011

1 commit

  • I added a check on the return value of alloc_extent_map() in several
    places. In addition, alloc_extent_map() returns only a valid address or
    NULL, so checking with IS_ERR() is unnecessary; remove the IS_ERR()
    checking (a small sketch follows this entry).

    Signed-off-by: Tsutomu Itoh
    Signed-off-by: Chris Mason

    Tsutomu Itoh
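
    A small sketch of the point, with a userspace allocator standing in for
    alloc_extent_map(): when a function's contract is "valid pointer or NULL",
    the correct check is a NULL test, not IS_ERR(), which is meant for pointers
    that encode errno values.

    #include <stdlib.h>

    struct extent_map { int dummy; };

    static struct extent_map *alloc_extent_map_sketch(void)
    {
        return calloc(1, sizeof(struct extent_map));   /* address or NULL */
    }

    int main(void)
    {
        struct extent_map *em = alloc_extent_map_sketch();

        if (!em)              /* the right check for this return contract */
            return -1;        /* would be -ENOMEM in the kernel */
        free(em);
        return 0;
    }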
     

22 Dec, 2010

1 commit


30 Oct, 2010

1 commit

  • Use ERR_CAST(x) rather than ERR_PTR(PTR_ERR(x)). The former makes the
    purpose of the operation clearer, where the latter looks like a no-op (a
    small illustration follows this entry).

    The semantic patch that makes this change is as follows:
    (http://coccinelle.lip6.fr/)

    //
    @@
    type T;
    T x;
    identifier f;
    @@

    T f (...) { }

    @@
    expression x;
    @@

    - ERR_PTR(PTR_ERR(x))
    + ERR_CAST(x)
    //

    Signed-off-by: Julia Lawall
    Cc: Chris Mason
    Signed-off-by: Andrew Morton
    Signed-off-by: Chris Mason

    Julia Lawall
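
    For illustration, a tiny userspace rendering of the two spellings. The
    ERR_PTR/PTR_ERR/ERR_CAST macros below are simplified stand-ins for the
    kernel ones, just enough to show that both forms re-type an error-carrying
    pointer and that ERR_CAST states that intent directly.

    #include <stdio.h>

    #define ERR_PTR(err)   ((void *)(long)(err))
    #define PTR_ERR(ptr)   ((long)(ptr))
    #define ERR_CAST(ptr)  ((void *)(ptr))

    struct foo { int x; };
    struct bar { int y; };

    int main(void)
    {
        struct foo *f = ERR_PTR(-12);        /* an errno encoded in a pointer */

        /* old spelling: decode and re-encode, reads like it does real work */
        struct bar *b1 = ERR_PTR(PTR_ERR(f));
        /* new spelling: plainly "same error, different pointer type" */
        struct bar *b2 = ERR_CAST(f);

        printf("%ld %ld\n", PTR_ERR(b1), PTR_ERR(b2));
        return 0;
    }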
     

30 Mar, 2010

1 commit

  • …it slab.h inclusion from percpu.h

    percpu.h is included by sched.h and module.h and thus ends up being
    included when building most .c files. percpu.h includes slab.h which
    in turn includes gfp.h making everything defined by the two files
    universally available and complicating inclusion dependencies.

    percpu.h -> slab.h dependency is about to be removed. Prepare for
    this change by updating users of gfp and slab facilities to include those
    headers directly instead of assuming availability. As this conversion
    needs to touch large number of source files, the following script is
    used as the basis of conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

    The script does the following.

    * Scan files for gfp and slab usages and update includes such that
    only the necessary includes are there, i.e. if only gfp is used,
    gfp.h; if slab is used, slab.h.

    * When the script inserts a new include, it looks at the include
    blocks and tries to put the new include such that its order conforms
    to its surroundings. It's put in the include block which contains
    core kernel includes, in the same order that the rest are ordered -
    alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there
    doesn't seem to be any matching order.

    * If the script can't find a place to put a new include (mostly
    because the file doesn't have fitting include block), it prints out
    an error message indicating which .h file needs to be added to the
    file.

    The conversion was done in the following steps.

    1. The initial automatic conversion of all .c files updated slightly
    over 4000 files, deleting around 700 includes and adding ~480 gfp.h
    and ~3000 slab.h inclusions. The script emitted errors for ~400
    files.

    2. Each error was manually checked. Some didn't need the inclusion,
    some needed manual addition, and for others adding it to an implementation
    .h or embedding .c file was more appropriate. This step added
    inclusions to around 150 files.

    3. The script was run again and the output was compared to the edits
    from #2 to make sure no file was left behind.

    4. Several build tests were done and a couple of problems were fixed.
    e.g. lib/decompress_*.c used malloc/free() wrappers around slab
    APIs requiring slab.h to be added manually.

    5. The script was run on all .h files but without automatically
    editing them as sprinkling gfp.h and slab.h inclusions around .h
    files could easily lead to inclusion dependency hell. Most gfp.h
    inclusion directives were ignored as stuff from gfp.h was usually
    widely available and often used in preprocessor macros. Each
    slab.h inclusion directive was examined and added manually as
    necessary.

    6. percpu.h was updated not to include slab.h.

    7. Build tests were done on the following configurations and failures
    were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
    distributed build env didn't work with gcov compiles) and a few
    more options had to be turned off depending on archs to make things
    build (like ipr on powerpc/64 which failed due to missing writeq).

    * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
    * powerpc and powerpc64 SMP allmodconfig
    * sparc and sparc64 SMP allmodconfig
    * ia64 SMP allmodconfig
    * s390 SMP allmodconfig
    * alpha SMP allmodconfig
    * um on x86_64 SMP allmodconfig

    8. percpu.h modifications were reverted so that it could be applied as
    a separate patch and serve as bisection point.

    Given the fact that I had only a couple of failures from tests on step
    6, I'm fairly confident about the coverage of this conversion patch.
    If there is a breakage, it's likely to be something in one of the arch
    headers which should be easily discoverable on most builds of the
    specific arch.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>

    Tejun Heo
     

09 Mar, 2010

2 commits


30 Jan, 2010

1 commit

  • * git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable:
    Btrfs: check total number of devices when removing missing
    Btrfs: check return value of open_bdev_exclusive properly
    Btrfs: do not mark the chunk as readonly if in degraded mode
    Btrfs: run orphan cleanup on default fs root
    Btrfs: fix a memory leak in btrfs_init_acl
    Btrfs: Use correct values when updating inode i_size on fallocate
    Btrfs: remove tree_search() in extent_map.c
    Btrfs: Add mount -o compress-force

    Linus Torvalds
     

29 Jan, 2010

1 commit


08 Dec, 2009

1 commit


04 Dec, 2009

1 commit

  • That is "success", "unknown", "through", "performance", "[re|un]mapping"
    , "access", "default", "reasonable", "[con]currently", "temperature"
    , "channel", "[un]used", "application", "example","hierarchy", "therefore"
    , "[over|under]flow", "contiguous", "threshold", "enough" and others.

    Signed-off-by: André Goddard Rosa
    Signed-off-by: Jiri Kosina

    André Goddard Rosa
     

12 Nov, 2009

1 commit


19 Sep, 2009

1 commit