17 Feb, 2015

1 commit


01 Dec, 2014

1 commit

  • Don Bailey noticed that our page zeroing for compression at end-io time
    isn't complete. This reworks a patch from Linus to push the zeroing
    into the zlib- and lzo-specific functions instead of trying to handle
    the corner cases inside btrfs_decompress_buf2page.

    Signed-off-by: Chris Mason
    Reviewed-by: Josef Bacik
    Reported-by: Don A. Bailey
    cc: stable@vger.kernel.org
    Signed-off-by: Linus Torvalds
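    The zeroing this patch describes can be sketched in plain C. This is a
    hypothetical userspace stand-in, not the actual btrfs helper: the
    function name, page size, and buffer layout are illustrative.

    ```c
    #include <stddef.h>
    #include <string.h>

    /* Hypothetical sketch: when decompression fills only part of an
     * output page, zero the remainder so stale page contents never
     * leak to the reader. */
    static void zero_page_tail(unsigned char *page, size_t page_size,
                               size_t bytes_filled)
    {
        if (bytes_filled < page_size)
            memset(page + bytes_filled, 0, page_size - bytes_filled);
    }
    ```

    Pushing this into each compression backend means every decompress
    path zeroes its own tail, instead of one shared function trying to
    cover every backend's corner cases.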


18 Sep, 2014

1 commit


10 Jun, 2014

1 commit

  • The compression layer seems to have been built to return -1 and have
    callers make up errors that make sense. This isn't great because there
    are different errors that originate down in the compression layer.

    Let's return real negative errnos from the compression layer so that
    callers can pass on the error without having to guess what happened.
    ENOMEM for allocation failure, E2BIG when the compressed output would
    be larger than the uncompressed input, and EIO for everything else.

    This helps a future path return errors from btrfs_decompress().

    Signed-off-by: Zach Brown
    Signed-off-by: Chris Mason
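    The convention the patch describes can be sketched in userspace C.
    The function name and its parameters here are illustrative
    stand-ins, not the actual btrfs code:

    ```c
    #include <errno.h>
    #include <stddef.h>

    /* Hypothetical sketch: translate a compression outcome into a real
     * negative errno instead of a bare -1 the caller must interpret. */
    static int compress_status_to_errno(int alloc_failed, int backend_ok,
                                        size_t tot_out, size_t tot_in)
    {
        if (alloc_failed)
            return -ENOMEM;  /* workspace allocation failed */
        if (!backend_ok)
            return -EIO;     /* any other backend failure */
        if (tot_out >= tot_in)
            return -E2BIG;   /* output did not shrink the input */
        return 0;            /* success */
    }
    ```

    A caller such as btrfs_decompress() can then pass the errno straight
    up instead of guessing what happened from a -1.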


29 Jan, 2014

1 commit


01 Sep, 2013

1 commit

  • With this fix the lzo code behaves like the zlib code by returning an
    error code when compression does not help reduce the size of the file.
    This is currently not a bug since the compressed size is checked again
    in the calling method compress_file_range.

    Signed-off-by: Stefan Agner
    Signed-off-by: Josef Bacik
    Signed-off-by: Chris Mason
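    The check the fix adds can be sketched like this; the function name
    is a hypothetical stand-in for the point in the lzo loop where the
    running totals are compared:

    ```c
    #include <errno.h>
    #include <stddef.h>

    /* Hypothetical sketch: abort the compression attempt when the
     * output stops being smaller than the input, so the lzo path
     * reports it the same way the zlib path already does. */
    static int check_compression_progress(size_t tot_out, size_t tot_in)
    {
        if (tot_out >= tot_in)
            return -E2BIG;  /* compression made the data larger or equal */
        return 0;
    }
    ```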


01 Jul, 2013

1 commit


20 Mar, 2012

1 commit


17 Feb, 2011

1 commit

  • When decompressing a chunk of data, we'll copy the data out to
    a working buffer if the data is stored in more than one page,
    otherwise we'll use the mapped page directly to avoid memory
    copy.

    In the latter case, there is a corner case where we end up accessing
    the kernel address after we've unmapped the page.

    Reported-by: Juan Francisco Cantero Hurtado
    Signed-off-by: Li Zefan
    Signed-off-by: Chris Mason
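    The copy-or-map-directly decision can be sketched as a page-span
    check. The page-size constant and function name here are
    illustrative, not the actual btrfs code:

    ```c
    #include <stdbool.h>

    #define SKETCH_PAGE_SIZE 4096UL

    /* Hypothetical sketch: compressed data spanning more than one page
     * must be copied into a working buffer; data within a single page
     * can be read through the mapped page directly -- but only while
     * the mapping is still live, which is the corner the fix covers. */
    static bool needs_working_buffer(unsigned long start, unsigned long len)
    {
        unsigned long first = start / SKETCH_PAGE_SIZE;
        unsigned long last  = (start + len - 1) / SKETCH_PAGE_SIZE;
        return first != last;
    }
    ```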


22 Dec, 2010

2 commits

  • Add a common function to copy decompressed data from working buffer
    to bio pages.

    Signed-off-by: Li Zefan
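    The common copy step can be sketched in userspace C; the function
    name and the flat page array are hypothetical stand-ins for the
    bio-page iteration in the kernel:

    ```c
    #include <stddef.h>
    #include <string.h>

    /* Hypothetical sketch: move decompressed bytes from a working
     * buffer into fixed-size destination pages, one page-sized chunk
     * at a time. */
    static void copy_buf_to_pages(unsigned char **pages, size_t page_size,
                                  const unsigned char *buf, size_t len)
    {
        size_t off = 0, i = 0;

        while (off < len) {
            size_t chunk = len - off < page_size ? len - off : page_size;

            memcpy(pages[i++], buf + off, chunk);
            off += chunk;
        }
    }
    ```

    Factoring this out lets both the zlib and lzo paths share one copy
    routine instead of each duplicating the chunking logic.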

  • Lzo is a much faster compression algorithm than zlib, so it would
    allow more users to enable transparent compression, and users can
    choose between compression ratio and speed for different applications.
    Usage:

    # mount -t btrfs -o compress[=<zlib,lzo>] dev /mnt
    or
    # mount -t btrfs -o compress-force[=<zlib,lzo>] dev /mnt

    "-o compress" without an argument is still allowed for compatibility.

    Compatibility:

    If we mount a filesystem with lzo compression, it will not be
    mountable by old kernels. One reason is that otherwise btrfs would
    dump compressed data, which sits in an inline extent, directly to
    userspace.

    Performance:

    The test copied a linux source tarball (~400M) from an ext4 partition
    to the btrfs partition, and then extracted it.

    (time in seconds)
                  lzo     zlib   nocompress
    copy:        10.6     21.7         14.9
    extract:     70.1     94.4         66.6

    (data size in MB)
                  lzo     zlib   nocompress
    copy:      185.87   108.69       394.49
    extract:   193.80   132.36       381.21

    Changelog:

    v1 -> v2:
    - Select LZO_COMPRESS and LZO_DECOMPRESS in btrfs Kconfig.
    - Add incompatibility flag.
    - Fix error handling in compress code.

    Signed-off-by: Li Zefan

    Li Zefan