17 Feb, 2011

1 commit

  • When decompressing a chunk of data, we copy the data out to a
    working buffer if the data is stored in more than one page;
    otherwise we use the mapped page directly to avoid a memory copy.

    In the latter case, there is a corner case where we end up
    accessing the kernel address after the page has been unmapped.

    Reported-by: Juan Francisco Cantero Hurtado
    Signed-off-by: Li Zefan
    Signed-off-by: Chris Mason
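
    As a rough illustration (not the actual btrfs code), the sketch
    below shows the two paths and the ordering constraint that the fix
    restores.  kmap(), kunmap() and memcpy() are real kernel APIs;
    decompress_chunk() and the other names are placeholders assumed for
    this example.

    #include <linux/highmem.h>
    #include <linux/mm.h>
    #include <linux/string.h>

    /* Placeholder for the real decompressor entry point. */
    static int decompress_chunk(const char *in, size_t in_len,
                                char *out, size_t out_len);

    static int handle_compressed_chunk(struct page **pages, int idx,
                                       unsigned long offset, size_t len,
                                       char *workbuf,
                                       char *out, size_t out_len)
    {
            char *kaddr = kmap(pages[idx]);
            int ret;

            if (offset + len > PAGE_SIZE) {
                    /*
                     * The chunk spans more than one page: copy it into
                     * a linear working buffer, after which it is safe
                     * to unmap before decompressing.
                     */
                    size_t first = PAGE_SIZE - offset;

                    memcpy(workbuf, kaddr + offset, first);
                    kunmap(pages[idx]);
                    /* ... map the following page(s), copy the rest ... */
                    ret = decompress_chunk(workbuf, len, out, out_len);
            } else {
                    /*
                     * The chunk sits in a single page: decompress
                     * straight from the mapped address to avoid a
                     * memory copy.  The page must stay mapped until
                     * decompression finishes; unmapping it first is
                     * the use-after-unmap bug fixed here.
                     */
                    ret = decompress_chunk(kaddr + offset, len,
                                           out, out_len);
                    kunmap(pages[idx]);
            }
            return ret;
    }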

22 Dec, 2010

2 commits

  • Add a common function to copy decompressed data from working buffer
    to bio pages.

    Signed-off-by: Li Zefan
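
    A minimal sketch of what such a helper does, assuming it is handed
    a linear working buffer and the bio_vec array of the bio being
    filled; the function name and parameters are illustrative, not the
    actual btrfs interface.

    #include <linux/bio.h>
    #include <linux/highmem.h>
    #include <linux/kernel.h>
    #include <linux/string.h>

    /* Copy decompressed bytes from a linear buffer into bio pages. */
    static void copy_buf_to_bio_pages(const char *workbuf, size_t buf_len,
                                      struct bio_vec *bvec, int vcnt)
    {
            size_t copied = 0;
            int i;

            for (i = 0; i < vcnt && copied < buf_len; i++) {
                    size_t n = min_t(size_t, bvec[i].bv_len,
                                     buf_len - copied);
                    char *kaddr = kmap(bvec[i].bv_page);

                    memcpy(kaddr + bvec[i].bv_offset,
                           workbuf + copied, n);
                    kunmap(bvec[i].bv_page);
                    copied += n;
            }
    }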

  • LZO is a much faster compression algorithm than gzip, so it allows
    more users to enable transparent compression, and lets users choose
    between compression ratio and speed for different applications.

    Usage:

    # mount -t btrfs -o compress[=<zlib,lzo>] dev /mnt
    or
    # mount -t btrfs -o compress-force[=<zlib,lzo>] dev /mnt

    "-o compress" without an argument is still allowed for compatibility.

    Compatibility:

    If we mount a filesystem with lzo compression, it will no longer be
    mountable by old kernels. One reason is that, without this
    restriction, an old kernel would hand the compressed data sitting
    in inline extents directly to user space.
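
    The enforcement mechanism is the incompat feature bit mentioned in
    the changelog below; the sketch here shows the general pattern with
    illustrative names and values, not the exact btrfs definitions.

    #include <stdint.h>
    #include <stdio.h>

    #define FEATURE_INCOMPAT_COMPRESS_LZO  (1ULL << 3)

    /* Every incompat bit a given kernel understands. */
    #define INCOMPAT_SUPP_OLD_KERNEL       0ULL
    #define INCOMPAT_SUPP_NEW_KERNEL       FEATURE_INCOMPAT_COMPRESS_LZO

    static int can_mount(uint64_t sb_incompat, uint64_t supported)
    {
            uint64_t unsupported = sb_incompat & ~supported;

            if (unsupported) {
                    fprintf(stderr,
                            "unknown incompat features 0x%llx, refusing to mount\n",
                            (unsigned long long)unsupported);
                    return 0;
            }
            return 1;
    }

    int main(void)
    {
            /* A filesystem that has written lzo-compressed extents. */
            uint64_t sb_flags = FEATURE_INCOMPAT_COMPRESS_LZO;

            printf("old kernel mounts: %d\n",
                   can_mount(sb_flags, INCOMPAT_SUPP_OLD_KERNEL));
            printf("new kernel mounts: %d\n",
                   can_mount(sb_flags, INCOMPAT_SUPP_NEW_KERNEL));
            return 0;
    }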

    Performance:

    The test copied a linux source tarball (~400M) from an ext4 partition
    to the btrfs partition, and then extracted it.

    (time in seconds)
                lzo       zlib      nocompress
    copy:       10.6      21.7      14.9
    extract:    70.1      94.4      66.6

    (data size in MB)
                lzo       zlib      nocompress
    copy:       185.87    108.69    394.49
    extract:    193.80    132.36    381.21

    Changelog:

    v1 -> v2:
    - Select LZO_COMPRESS and LZO_DECOMPRESS in btrfs Kconfig.
    - Add incompatibility flag.
    - Fix error handling in compress code.

    Signed-off-by: Li Zefan
