14 Apr, 2016
2 commits
-
These identifiers are bogus. The interested architectures should define
HAVE_EFFICIENT_UNALIGNED_ACCESS whenever relevant to do so. If this
isn't true for some arch, it should be fixed in the arch definition.
Signed-off-by: Rui Salvaterra
Reviewed-by: Sergey Senozhatsky
Signed-off-by: Greg Kroah-Hartman
-
Based on Sergey's test patch [1], this fixes zram with lz4 compression
on big endian cpus.
Note that the 64-bit preprocessor test is not a cleanup, it's part of
the fix, since those identifiers are bogus (for example, __ppc64__
isn't defined anywhere else in the kernel, which means we'd fall into
the 32-bit definitions on ppc64).
Tested on ppc64 with no regression on x86_64.
[1] http://marc.info/?l=linux-kernel&m=145994470805853&w=4
Cc: stable@vger.kernel.org
Suggested-by: Sergey Senozhatsky
Signed-off-by: Rui Salvaterra
Reviewed-by: Sergey Senozhatsky
Signed-off-by: Greg Kroah-Hartman
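The failure mode behind both fixes above can be sketched in userland C. This is an illustration, not the kernel's code: the names LZ4_ARCH64 and lz4_arch64() are stand-ins, and __LP64__ plays the role that a single always-available symbol (rather than a chain of compiler-specific arch macros) plays in the fix.

```c
#include <assert.h>
#include <stdint.h>

/*
 * A chain of compiler-specific macros such as __ppc64__ silently falls
 * through to the 32-bit path when none of them happen to be defined.
 * Keying off one reliable signal (__LP64__ here, as a stand-in) avoids
 * that failure mode.
 */
#if defined(__LP64__)
#define LZ4_ARCH64 1
#else
#define LZ4_ARCH64 0
#endif

int lz4_arch64(void)
{
	return LZ4_ARCH64;
}
```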
25 May, 2015
1 commit
-
Sometimes, on x86_64, decompression fails with the following
error:
Decompressing Linux...
Decoding failed
 -- System halted
This condition is not needed for a 64bit kernel (from commit d5e7caf):

	if (... ||
	    (op + COPYLENGTH) > oend)
		goto _output_error

The macro LZ4_SECURE_COPY() tests op and does not copy any data
when op exceeds the value.
Added by analogy to lz4_uncompress_unknownoutputsize(...).
Signed-off-by: Krzysztof Kolasa
Tested-by: Alexander Kuleshov
Tested-by: Caleb Jorden
Signed-off-by: Greg Kroah-Hartman
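The guard described above can be sketched in userland C. secure_copy() here is a hypothetical stand-in for the behaviour ascribed to LZ4_SECURE_COPY(), not the kernel macro itself:

```c
#include <assert.h>
#include <string.h>

#define COPYLENGTH 8

/*
 * Hypothetical sketch: wildcopy in COPYLENGTH-byte chunks only while a
 * whole chunk still fits before oend, then finish byte-by-byte, so op
 * never runs past the end of the output buffer.
 */
static void secure_copy(unsigned char *op, const unsigned char *ip,
			unsigned char *cpy, unsigned char *oend)
{
	while (op + COPYLENGTH <= oend && op < cpy) {
		memcpy(op, ip, COPYLENGTH);
		op += COPYLENGTH;
		ip += COPYLENGTH;
	}
	while (op < cpy && op < oend)	/* exact tail, never past oend */
		*op++ = *ip++;
}
```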
25 Mar, 2015
1 commit
-
There's no reason to allocate the dec{32,64}table on the stack; it
just wastes a bunch of instructions setting them up and, of course,
also consumes quite a bit of stack. Using size_t for such small
integers is a little excessive.

$ scripts/bloat-o-meter /tmp/built-in.o lib/built-in.o
add/remove: 2/2 grow/shrink: 2/0 up/down: 1304/-1548 (-244)
function                             old     new   delta
lz4_decompress_unknownoutputsize      55     718    +663
lz4_decompress                        55     632    +577
dec64table                             -      32     +32
dec32table                             -      32     +32
lz4_uncompress                       747       -    -747
lz4_uncompress_unknownoutputsize     801       -    -801

The now inlined lz4_uncompress functions used to have a stack
footprint of 176 bytes (according to -fstack-usage); their inlinees
have increased their stack use from 32 bytes to 48 and 80 bytes,
respectively.
Signed-off-by: Rasmus Villemoes
Signed-off-by: Greg Kroah-Hartman
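The change can be illustrated with a small sketch. The table contents below are illustrative only (see lib/lz4/lz4_decompress.c for the real values); what matters is where the table lives:

```c
#include <assert.h>

/* before (sketch): the table is rebuilt on the stack on every call */
static int step_stack(int token)
{
	const int dec32table[8] = {0, 3, 2, 3, 0, 0, 0, 0};
	return dec32table[token & 7];
}

/* after (sketch): one shared read-only copy lives in .rodata */
static const int dec32table[8] = {0, 3, 2, 3, 0, 0, 0, 0};

static int step_static(int token)
{
	return dec32table[token & 7];
}
```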
17 Mar, 2015
1 commit
-
If part of the compression data is corrupted, or the compression
data is totally fake, memory access beyond the limit is possible.
This is the log from my system using lz4 decompression:
[6502]data abort, halting
[6503]r0 0x00000000 r1 0x00000000 r2 0xdcea0ffc r3 0xdcea0ffc
[6509]r4 0xb9ab0bfd r5 0xdcea0ffc r6 0xdcea0ff8 r7 0xdce80000
[6515]r8 0x00000000 r9 0x00000000 r10 0x00000000 r11 0xb9a98000
[6522]r12 0xdcea1000 usp 0x00000000 ulr 0x00000000 pc 0x820149bc
[6528]spsr 0x400001f3
and the memory addresses of some variables at the moment are
ref:0xdcea0ffc, op:0xdcea0ffc, oend:0xdcea1000
As you can see, COPYLENGTH is 8 bytes, so @ref and @op can access memory
beyond @oend.
Signed-off-by: JeHyeon Yeon
Reviewed-by: David Sterba
Signed-off-by: Greg Kroah-Hartman
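The arithmetic behind the crash log above is easy to check: with op at 0xdcea0ffc and oend at 0xdcea1000, one more COPYLENGTH-sized copy step ends four bytes past the buffer. A small sketch (the function name is illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define COPYLENGTH 8

/* would one more COPYLENGTH-byte copy step run past oend? */
static int copy_would_overrun(uintptr_t op, uintptr_t oend)
{
	return op + COPYLENGTH > oend;
}
```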
04 Jul, 2014
1 commit
-
Jan points out that I forgot to make the needed fixes to the
lz4_uncompress_unknownoutputsize() function to mirror the changes done
in lz4_decompress() with regards to potential pointer overflows.
The only in-kernel user of this function is the zram code, which only
takes data from a valid compressed buffer that it made itself, so it's
not a big issue. But due to external kernel modules using this
function, it's better to be safe here.
Reported-by: Jan Beulich
Cc: "Don A. Bailey"
Cc: stable
Signed-off-by: Greg Kroah-Hartman
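The class of bug being mirrored here is that a check of the form "op + length > oend" can itself overflow the pointer and so pass. A hedged userland sketch of the two forms (names are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* unsafe: op + length can wrap around the address space */
static int fits_unsafe(uintptr_t op, uintptr_t oend, size_t length)
{
	return op + length <= oend;
}

/* safe: compare against the room that is actually left */
static int fits_safe(uintptr_t op, uintptr_t oend, size_t length)
{
	return length <= (size_t)(oend - op);
}
```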
28 Jun, 2014
1 commit
-
There is one other possible overrun in the lz4 code as implemented by
Linux at this point in time (which differs from the upstream lz4
codebase, but will get synced up in a future kernel release). As
pointed out by Don, we also need to check for the overflow in the data
itself.
While we are at it, replace the odd error return value with just a
"simple" -1 value, as the return value is never used for anything other
than a basic "did this work or not" check.
Reported-by: "Don A. Bailey"
Reported-by: Willy Tarreau
Cc: stable
Signed-off-by: Greg Kroah-Hartman
24 Jun, 2014
1 commit
-
Given some pathologically compressed data, lz4 could possibly decide to
wrap a few internal variables, causing unknown things to happen. Catch
this before the wrapping happens and abort the decompression.
Reported-by: "Don A. Bailey"
Cc: stable
Signed-off-by: Greg Kroah-Hartman
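One place such wrapping can occur is run-length accumulation: LZ4 extends long lengths with a sequence of 255-valued continuation bytes, and enough of them can wrap the accumulator. A hypothetical sketch of the kind of pre-wrap check described (not the kernel's actual code):

```c
#include <assert.h>
#include <limits.h>

/* add inc to *len, refusing (-1) when the sum would wrap */
static int add_len_checked(unsigned int *len, unsigned int inc)
{
	if (*len > UINT_MAX - inc)
		return -1;	/* would wrap: abort decompression */
	*len += inc;
	return 0;
}
```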
12 Sep, 2013
1 commit
-
The LZ4 compression and decompression functions differ in the
signedness of their input/output parameters: unsigned char for
compression and signed char for decompression.
Change the decompression API to require "(const) unsigned char *".
Signed-off-by: Sergey Senozhatsky
Cc: Kyungsik Lee
Cc: Geert Uytterhoeven
Cc: Yann Collet
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
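For context, the practical difference between the two char flavours: a byte value of 0x80 or above sign-extends to a negative int when read through a signed char, and zero-extends through an unsigned char. A minimal illustration:

```c
#include <assert.h>

/* reading a high byte through the two char flavours */
static int through_signed(signed char b)
{
	return b;	/* sign-extends */
}

static int through_unsigned(unsigned char b)
{
	return b;	/* zero-extends */
}
```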
24 Aug, 2013
1 commit
-
The LZ4 code is listed as using the "BSD 2-Clause License".
Signed-off-by: Richard Laager
Acked-by: Kyungsik Lee
Cc: Chanho Min
Cc: Richard Yao
Signed-off-by: Andrew Morton
[ The 2-clause BSD can be just converted into GPL, but that's rude and
pointless, so don't do it - Linus ]
Signed-off-by: Linus Torvalds
10 Jul, 2013
3 commits
-
This patchset is for supporting LZ4 compression and the crypto API using
it.
As shown below, the size of the data is a little bit bigger, but compression
speed is faster with unaligned memory access enabled. We can use
lz4 de/compression through the crypto API as well. It will also be useful
for other potential users of lz4 compression.

lz4 Compression Benchmark:
Compiler: ARM gcc 4.6.4
ARMv7, 1 GHz based board
Kernel: linux 3.4
Uncompressed data Size: 101 MB
        Compressed Size    Compression Speed
LZO     72.1MB             32.1MB/s, 33.0MB/s(UA)
LZ4     75.1MB             30.4MB/s, 35.9MB/s(UA)
LZ4HC   59.8MB              2.4MB/s,  2.5MB/s(UA)
- UA: Unaligned memory Access support
- Latest patch set for LZO applied

This patch:
Add support for LZ4 compression in the Linux Kernel. LZ4 compression APIs
for the kernel are based on the LZ4 implementation by Yann Collet and were
adapted to the kernel coding style.

LZ4 homepage : http://fastcompression.blogspot.com/p/lz4.html
LZ4 source repository : http://code.google.com/p/lz4/
svn revision : r90

Two APIs are added:
lz4_compress() supports basic lz4 compression, whereas lz4hc_compress()
supports high compression: CPU performance is lower, but the compression
ratio is higher. Both require pre-allocated working memory of the defined
size, and the destination buffer must be allocated with the size given by
lz4_compressbound.
[akpm@linux-foundation.org: make lz4_compresshcctx() static]
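The lz4_compressbound rule exists because incompressible input can expand slightly, so the destination buffer needs worst-case room. A hedged sketch of the usual bound (the formula below is the common LZ4 worst-case expression; the kernel header is authoritative):

```c
#include <assert.h>
#include <stddef.h>

/* worst-case compressed size for isize input bytes (illustrative) */
static size_t lz4_compressbound_sketch(size_t isize)
{
	return isize + isize / 255 + 16;
}
```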
Signed-off-by: Chanho Min
Cc: "Darrick J. Wong"
Cc: Bob Pearson
Cc: Richard Weinberger
Cc: Herbert Xu
Cc: Yann Collet
Cc: Kyungsik Lee
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Add support for extracting LZ4-compressed kernel images, as well as
LZ4-compressed ramdisk images, in the kernel boot process.
Signed-off-by: Kyungsik Lee
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: Russell King
Cc: Borislav Petkov
Cc: Florian Fainelli
Cc: Yann Collet
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Add support for LZ4 decompression in the Linux Kernel. LZ4 decompression
APIs for the kernel are based on the LZ4 implementation by Yann Collet.

Benchmark Results (PATCH v3)
Compiler: Linaro ARM gcc 4.6.2

1. ARMv7, 1.5GHz based board
   Kernel: linux 3.4
   Uncompressed Kernel Size: 14MB
           Compressed Size    Decompression Speed
   LZO     6.7MB              20.1MB/s, 25.2MB/s(UA)
   LZ4     7.3MB              29.1MB/s, 45.6MB/s(UA)

2. ARMv7, 1.7GHz based board
   Kernel: linux 3.7
   Uncompressed Kernel Size: 14MB
           Compressed Size    Decompression Speed
   LZO     6.0MB              34.1MB/s, 52.2MB/s(UA)
   LZ4     6.5MB              86.7MB/s
- UA: Unaligned memory Access support
- Latest patch set for LZO applied

This patch set is for adding support for an LZ4-compressed kernel. LZ4 is a
very fast lossless compression algorithm and it also features an extremely
fast decoder [1].

But we have five decompressors already, and one question that arises is:
where do we stop adding new ones? This issue had been discussed and a
conclusion was reached [2]. Russell King said that we should have:
- one decompressor which is the fastest
- one decompressor for the highest compression ratio
- one popular decompressor (e.g. conventional gzip)
If we have a replacement for one of these, then it should do exactly
that: replace it.

The benchmark shows an 8% increase in image size vs a 66% increase
in decompression speed compared to LZO (which has been known as the
fastest decompressor in the kernel). Therefore the "fast but may not be
small" compression title has clearly been taken by LZ4 [3].

[1] http://code.google.com/p/lz4/
[2] http://thread.gmane.org/gmane.linux.kbuild.devel/9157
[3] http://thread.gmane.org/gmane.linux.kbuild.devel/9347

LZ4 homepage: http://fastcompression.blogspot.com/p/lz4.html
LZ4 source repository: http://code.google.com/p/lz4/
Signed-off-by: Kyungsik Lee
Signed-off-by: Yann Collet
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: Russell King
Cc: Borislav Petkov
Cc: Florian Fainelli
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds