Commit 24fa0402a9b6a537e87e38341e78b7da86486846

Authored by Lasse Collin
Committed by Linus Torvalds
1 parent fb7fa589fd

decompressors: add XZ decompressor module

In userspace, the .lzma format has become mostly a legacy file format that
got superseded by the .xz format.  Similarly, LZMA Utils was superseded by
XZ Utils.

These patches add support for XZ decompression into the kernel.  Most of
the code is as is from XZ Embedded <http://tukaani.org/xz/embedded.html>.
It was written for the Linux kernel but is usable in other projects too.

Advantages of XZ over the current LZMA code in the kernel:
  - Nice API that can be used by other kernel modules; it's
    not limited to kernel, initramfs, and initrd decompression.
  - Integrity check support (CRC32)
  - BCJ filters improve compression of executable code on
    certain architectures. These together with LZMA2 can
    produce a few percent smaller kernel or Squashfs images
    than plain LZMA without making the decompression slower.

This patch: Add the main decompression code (xz_dec), testing module
(xz_dec_test), wrapper script (xz_wrap.sh) for the xz command line tool,
and documentation.  The xz_dec module is enough to have a usable XZ
decompressor, e.g. for Squashfs.

Signed-off-by: Lasse Collin <lasse.collin@tukaani.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Alain Knaff <alain@knaff.lu>
Cc: Albin Tonnerre <albin.tonnerre@free-electrons.com>
Cc: Phillip Lougher <phillip@lougher.demon.co.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

Showing 17 changed files with 3783 additions and 0 deletions

Documentation/xz.txt
  1 +
  2 +XZ data compression in Linux
  3 +============================
  4 +
  5 +Introduction
  6 +
  7 + XZ is a general purpose data compression format with high compression
  8 + ratio and relatively fast decompression. The primary compression
  9 + algorithm (filter) is LZMA2. Additional filters can be used to improve
  10 + compression ratio even further. E.g. Branch/Call/Jump (BCJ) filters
  11 + improve compression ratio of executable data.
  12 +
  13 + The XZ decompressor in Linux is called XZ Embedded. It supports
  14 + the LZMA2 filter and optionally also BCJ filters. CRC32 is supported
  15 + for integrity checking. The home page of XZ Embedded is at
  16 + <http://tukaani.org/xz/embedded.html>, where you can find the
  17 + latest version and also information about using the code outside
  18 + the Linux kernel.
  19 +
  20 + For userspace, XZ Utils provide a zlib-like compression library
  21 + and a gzip-like command line tool. XZ Utils can be downloaded from
  22 + <http://tukaani.org/xz/>.
  23 +
  24 +XZ related components in the kernel
  25 +
  26 + The xz_dec module provides XZ decompressor with single-call (buffer
  27 + to buffer) and multi-call (stateful) APIs. The usage of the xz_dec
  28 + module is documented in include/linux/xz.h.
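As an illustrative sketch (not part of the patch; the buffer names and the wrapper function are hypothetical), single-call decompression with this API looks roughly like:

```
#include <linux/xz.h>

/* Decompress a whole .xz buffer in one shot (XZ_SINGLE mode).
 * Returns true on success. Error handling is abbreviated. */
static bool decompress_whole(const uint8_t *in, size_t in_size,
			     uint8_t *out, size_t out_size)
{
	struct xz_dec *s;
	struct xz_buf b;
	enum xz_ret ret;

	/* dict_max is ignored in XZ_SINGLE mode. */
	s = xz_dec_init(XZ_SINGLE, 0);
	if (s == NULL)
		return false;

	b.in = in;
	b.in_pos = 0;
	b.in_size = in_size;
	b.out = out;
	b.out_pos = 0;
	b.out_size = out_size;

	ret = xz_dec_run(s, &b);
	xz_dec_end(s);

	/* In single-call mode anything except XZ_STREAM_END is a failure. */
	return ret == XZ_STREAM_END;
}
```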
  29 +
  30 + The xz_dec_test module is for testing xz_dec. xz_dec_test is not
  31 + useful unless you are hacking the XZ decompressor. xz_dec_test
  32 + allocates a char device major dynamically to which one can write
  33 + .xz files from userspace. The decompressed output is thrown away.
  34 + Keep an eye on dmesg to see diagnostics printed by xz_dec_test.
  35 + See the xz_dec_test source code for the details.
  36 +
  37 + For decompressing the kernel image, initramfs, and initrd, there
  38 + is a wrapper function in lib/decompress_unxz.c. Its API is the
  39 + same as in other decompress_*.c files, which is defined in
  40 + include/linux/decompress/generic.h.
  41 +
  42 + scripts/xz_wrap.sh is a wrapper for the xz command line tool found
  43 + from XZ Utils. The wrapper sets compression options to values suitable
  44 + for compressing the kernel image.
  45 +
  46 + For kernel makefiles, two commands are provided for use with
  47 + $(call if_changed). The kernel image should be compressed with
  48 + $(call if_changed,xzkern), which will use a BCJ filter and a big LZMA2
  49 + dictionary. It will also append a four-byte trailer containing the
  50 + uncompressed size of the file, which is needed by the boot code.
  51 + Other things should be compressed with $(call if_changed,xzmisc),
  52 + which will use no BCJ filter and a 1 MiB LZMA2 dictionary.
  53 +
  54 +Notes on compression options
  55 +
  56 + Since XZ Embedded supports only streams with no integrity check or
  57 + CRC32, make sure that you don't use some other integrity check type
  58 + when encoding files that are supposed to be decoded by the kernel. With
  59 + liblzma, you need to use either LZMA_CHECK_NONE or LZMA_CHECK_CRC32
  60 + when encoding. With the xz command line tool, use --check=none or
  61 + --check=crc32.
  62 +
  63 + Using CRC32 is strongly recommended unless there is some other layer
  64 + which will verify the integrity of the uncompressed data anyway.
  65 + Double checking the integrity would probably be a waste of CPU cycles.
  66 + Note that the headers will always have a CRC32 which will be validated
  67 + by the decoder; you can only change the integrity check type (or
  68 + disable it) for the actual uncompressed data.
  69 +
  70 + In userspace, LZMA2 is typically used with dictionary sizes of several
  71 + megabytes. The decoder needs to have the dictionary in RAM, thus big
  72 + dictionaries cannot be used for files that are intended to be decoded
  73 + by the kernel. 1 MiB is probably the maximum reasonable dictionary
  74 + size for in-kernel use (maybe more is OK for initramfs). The presets
  75 + in XZ Utils may not be optimal when creating files for the kernel,
  76 + so don't hesitate to use custom settings. Example:
  77 +
  78 + xz --check=crc32 --lzma2=dict=512KiB inputfile
  79 +
  80 + An exception to the above dictionary size limitation is when the decoder
  81 + is used in single-call mode. Decompressing the kernel itself is an
  82 + example of this situation. In single-call mode, the memory usage
  83 + doesn't depend on the dictionary size, and it is perfectly fine to
  84 + use a big dictionary: for maximum compression, the dictionary should
  85 + be at least as big as the uncompressed data itself.
  86 +
  87 +Future plans
  88 +
  89 + Creating a limited XZ encoder may be considered if people think it is
  90 + useful. LZMA2 is slower to compress than e.g. Deflate or LZO even at
  91 + the fastest settings, so it isn't clear if an LZMA2 encoder is wanted
  92 + in the kernel.
  93 +
  94 + Support for limited random-access reading is planned for the
  95 + decompression code. I don't know if it could have any use in the
  96 + kernel, but I know that it would be useful in some embedded projects
  97 + outside the Linux kernel.
  98 +
  99 +Conformance to the .xz file format specification
  100 +
  101 + There are a couple of corner cases where things have been simplified
  102 + at the expense of detecting errors as early as possible. These should not
  103 + matter in practice at all, since they don't cause security issues. But
  104 + it is good to know this if testing the code e.g. with the test files
  105 + from XZ Utils.
  106 +
  107 +Reporting bugs
  108 +
  109 + Before reporting a bug, please check that it's not fixed already
  110 + at upstream. See <http://tukaani.org/xz/embedded.html> to get the
  111 + latest code.
  112 +
  113 + Report bugs to <lasse.collin@tukaani.org> or visit #tukaani on
  114 + Freenode and talk to Larhzu. I don't actively read LKML or other
  115 + kernel-related mailing lists, so if there's something I should know,
  116 + you should email me personally or use IRC.
  117 +
  118 + Don't bother Igor Pavlov with questions about the XZ implementation
  119 + in the kernel or about XZ Utils. While these two implementations
  120 + include essential code that is directly based on Igor Pavlov's code,
  121 + these implementations aren't maintained nor supported by him.

include/linux/xz.h
  1 +/*
  2 + * XZ decompressor
  3 + *
  4 + * Authors: Lasse Collin <lasse.collin@tukaani.org>
  5 + * Igor Pavlov <http://7-zip.org/>
  6 + *
  7 + * This file has been put into the public domain.
  8 + * You can do whatever you want with this file.
  9 + */
  10 +
  11 +#ifndef XZ_H
  12 +#define XZ_H
  13 +
  14 +#ifdef __KERNEL__
  15 +# include <linux/stddef.h>
  16 +# include <linux/types.h>
  17 +#else
  18 +# include <stddef.h>
  19 +# include <stdint.h>
  20 +#endif
  21 +
  22 +/* In Linux, this is used to make extern functions static when needed. */
  23 +#ifndef XZ_EXTERN
  24 +# define XZ_EXTERN extern
  25 +#endif
  26 +
  27 +/**
  28 + * enum xz_mode - Operation mode
  29 + *
  30 + * @XZ_SINGLE: Single-call mode. This uses less RAM than
  31 + the multi-call modes, because the LZMA2
  32 + * dictionary doesn't need to be allocated as
  33 + * part of the decoder state. All required data
  34 + * structures are allocated at initialization,
  35 + * so xz_dec_run() cannot return XZ_MEM_ERROR.
  36 + * @XZ_PREALLOC: Multi-call mode with preallocated LZMA2
  37 + * dictionary buffer. All data structures are
  38 + * allocated at initialization, so xz_dec_run()
  39 + * cannot return XZ_MEM_ERROR.
  40 + * @XZ_DYNALLOC: Multi-call mode. The LZMA2 dictionary is
  41 + * allocated once the required size has been
  42 + * parsed from the stream headers. If the
  43 + * allocation fails, xz_dec_run() will return
  44 + * XZ_MEM_ERROR.
  45 + *
  46 + * It is possible to enable support only for a subset of the above
  47 + * modes at compile time by defining XZ_DEC_SINGLE, XZ_DEC_PREALLOC,
  48 + * or XZ_DEC_DYNALLOC. The xz_dec kernel module is always compiled
  49 + * with support for all operation modes, but the preboot code may
  50 + * be built with fewer features to minimize code size.
  51 + */
  52 +enum xz_mode {
  53 + XZ_SINGLE,
  54 + XZ_PREALLOC,
  55 + XZ_DYNALLOC
  56 +};
  57 +
  58 +/**
  59 + * enum xz_ret - Return codes
  60 + * @XZ_OK: Everything is OK so far. More input or more
  61 + * output space is required to continue. This
  62 + * return code is possible only in multi-call mode
  63 + * (XZ_PREALLOC or XZ_DYNALLOC).
  64 + * @XZ_STREAM_END: Operation finished successfully.
  65 + * @XZ_UNSUPPORTED_CHECK: Integrity check type is not supported. Decoding
  66 + * is still possible in multi-call mode by simply
  67 + * calling xz_dec_run() again.
  68 + Note that this return value is used only if
  69 + XZ_DEC_ANY_CHECK was defined at build time;
  70 + the kernel doesn't define it. Unsupported
  71 + * check types return XZ_OPTIONS_ERROR if
  72 + * XZ_DEC_ANY_CHECK was not defined at build time.
  73 + * @XZ_MEM_ERROR: Allocating memory failed. This return code is
  74 + * possible only if the decoder was initialized
  75 + with XZ_DYNALLOC. The amount of memory that the
  76 + decoder tried to allocate was no more than the
  77 + * dict_max argument given to xz_dec_init().
  78 + * @XZ_MEMLIMIT_ERROR: A bigger LZMA2 dictionary would be needed than
  79 + * allowed by the dict_max argument given to
  80 + * xz_dec_init(). This return value is possible
  81 + * only in multi-call mode (XZ_PREALLOC or
  82 + * XZ_DYNALLOC); the single-call mode (XZ_SINGLE)
  83 + * ignores the dict_max argument.
  84 + * @XZ_FORMAT_ERROR: File format was not recognized (wrong magic
  85 + * bytes).
  86 + * @XZ_OPTIONS_ERROR: This implementation doesn't support the requested
  87 + * compression options. In the decoder this means
  88 + * that the header CRC32 matches, but the header
  89 + * itself specifies something that we don't support.
  90 + * @XZ_DATA_ERROR: Compressed data is corrupt.
  91 + * @XZ_BUF_ERROR: Cannot make any progress. Details are slightly
  92 + * different between multi-call and single-call
  93 + * mode; more information below.
  94 + *
  95 + * In multi-call mode, XZ_BUF_ERROR is returned when two consecutive calls
  96 + * to XZ code cannot consume any input and cannot produce any new output.
  97 + * This happens when there is no new input available, or the output buffer
  98 + * is full while at least one output byte is still pending. Assuming your
  99 + * code is not buggy, you can get this error only when decoding a compressed
  100 + * stream that is truncated or otherwise corrupt.
  101 + *
  102 + * In single-call mode, XZ_BUF_ERROR is returned only when the output buffer
  103 + * is too small or the compressed input is corrupt in a way that makes the
  104 + * decoder produce more output than the caller expected. When it is
  105 + * (relatively) clear that the compressed input is truncated, XZ_DATA_ERROR
  106 + * is used instead of XZ_BUF_ERROR.
  107 + */
  108 +enum xz_ret {
  109 + XZ_OK,
  110 + XZ_STREAM_END,
  111 + XZ_UNSUPPORTED_CHECK,
  112 + XZ_MEM_ERROR,
  113 + XZ_MEMLIMIT_ERROR,
  114 + XZ_FORMAT_ERROR,
  115 + XZ_OPTIONS_ERROR,
  116 + XZ_DATA_ERROR,
  117 + XZ_BUF_ERROR
  118 +};
  119 +
  120 +/**
  121 + * struct xz_buf - Passing input and output buffers to XZ code
  122 + * @in: Beginning of the input buffer. This may be NULL if and only
  123 + * if in_pos is equal to in_size.
  124 + * @in_pos: Current position in the input buffer. This must not exceed
  125 + * in_size.
  126 + * @in_size: Size of the input buffer
  127 + * @out: Beginning of the output buffer. This may be NULL if and only
  128 + * if out_pos is equal to out_size.
  129 + * @out_pos: Current position in the output buffer. This must not exceed
  130 + * out_size.
  131 + * @out_size: Size of the output buffer
  132 + *
  133 + * Only the contents of the output buffer from out[out_pos] onward, and
  134 + * the variables in_pos and out_pos are modified by the XZ code.
  135 + */
  136 +struct xz_buf {
  137 + const uint8_t *in;
  138 + size_t in_pos;
  139 + size_t in_size;
  140 +
  141 + uint8_t *out;
  142 + size_t out_pos;
  143 + size_t out_size;
  144 +};
  145 +
  146 +/**
  147 + * struct xz_dec - Opaque type to hold the XZ decoder state
  148 + */
  149 +struct xz_dec;
  150 +
  151 +/**
  152 + * xz_dec_init() - Allocate and initialize an XZ decoder state
  153 + * @mode: Operation mode
  154 + * @dict_max: Maximum size of the LZMA2 dictionary (history buffer) for
  155 + * multi-call decoding. This is ignored in single-call mode
  156 + * (mode == XZ_SINGLE). LZMA2 dictionary is always 2^n bytes
  157 + * or 2^n + 2^(n-1) bytes (the latter sizes are less common
  158 + * in practice), so other values for dict_max don't make sense.
  159 + * In the kernel, dictionary sizes of 64 KiB, 128 KiB, 256 KiB,
  160 + * 512 KiB, and 1 MiB are probably the only reasonable values,
  161 + * except for kernel and initramfs images where a bigger
  162 + * dictionary can be fine and useful.
  163 + *
  164 + * Single-call mode (XZ_SINGLE): xz_dec_run() decodes the whole stream at
  165 + * once. The caller must provide enough output space or the decoding will
  166 + * fail. The output space is used as the dictionary buffer, which is why
  167 + * there is no need to allocate the dictionary as part of the decoder's
  168 + * internal state.
  169 + *
  170 + * Because the output buffer is used as the workspace, streams encoded using
  171 + * a big dictionary are not a problem in single-call mode. It is enough that
  172 + * the output buffer is big enough to hold the actual uncompressed data; it
  173 + * can be smaller than the dictionary size stored in the stream headers.
  174 + *
  175 + * Multi-call mode with preallocated dictionary (XZ_PREALLOC): dict_max bytes
  176 + * of memory is preallocated for the LZMA2 dictionary. This way there is no
  177 + * risk that xz_dec_run() could run out of memory, since xz_dec_run() will
  178 + * never allocate any memory. Instead, if the preallocated dictionary is too
  179 + * small for decoding the given input stream, xz_dec_run() will return
  180 + * XZ_MEMLIMIT_ERROR. Thus, it is important to know what kind of data will be
  181 + * decoded to avoid allocating an excessive amount of memory for the dictionary.
  182 + *
  183 + * Multi-call mode with dynamically allocated dictionary (XZ_DYNALLOC):
  184 + * dict_max specifies the maximum allowed dictionary size that xz_dec_run()
  185 + * may allocate once it has parsed the dictionary size from the stream
  186 + * headers. This way excessive allocations can be avoided while still
  187 + * limiting the maximum memory usage to a sane value to prevent running the
  188 + * system out of memory when decompressing streams from untrusted sources.
  189 + *
  190 + * On success, xz_dec_init() returns a pointer to struct xz_dec, which is
  191 + * ready to be used with xz_dec_run(). If memory allocation fails,
  192 + * xz_dec_init() returns NULL.
  193 + */
  194 +XZ_EXTERN struct xz_dec *xz_dec_init(enum xz_mode mode, uint32_t dict_max);
  195 +
  196 +/**
  197 + * xz_dec_run() - Run the XZ decoder
  198 + * @s: Decoder state allocated using xz_dec_init()
  199 + * @b: Input and output buffers
  200 + *
  201 + * The possible return values depend on build options and operation mode.
  202 + * See enum xz_ret for details.
  203 + *
  204 + * Note that if an error occurs in single-call mode (return value is not
  205 + * XZ_STREAM_END), b->in_pos and b->out_pos are not modified and the
  206 + * contents of the output buffer from b->out[b->out_pos] onward are
  207 + * undefined. This is true even after XZ_BUF_ERROR, because with some filter
  208 + * chains, there may be a second pass over the output buffer, and this pass
  209 + * cannot be properly done if the output buffer is truncated. Thus, you
  210 + * cannot give the single-call decoder a too-small buffer and then expect to
  211 + * get that amount of valid data from the beginning of the stream. You must use
  212 + * the multi-call decoder if you don't want to uncompress the whole stream.
  213 + */
  214 +XZ_EXTERN enum xz_ret xz_dec_run(struct xz_dec *s, struct xz_buf *b);
  215 +
  216 +/**
  217 + * xz_dec_reset() - Reset an already allocated decoder state
  218 + * @s: Decoder state allocated using xz_dec_init()
  219 + *
  220 + * This function can be used to reset the multi-call decoder state without
  221 + * freeing and reallocating memory with xz_dec_end() and xz_dec_init().
  222 + *
  223 + * In single-call mode, xz_dec_reset() is always called in the beginning of
  224 + * xz_dec_run(). Thus, an explicit call to xz_dec_reset() is useful only in
  225 + * multi-call mode.
  226 + */
  227 +XZ_EXTERN void xz_dec_reset(struct xz_dec *s);
  228 +
  229 +/**
  230 + * xz_dec_end() - Free the memory allocated for the decoder state
  231 + * @s: Decoder state allocated using xz_dec_init(). If s is NULL,
  232 + * this function does nothing.
  233 + */
  234 +XZ_EXTERN void xz_dec_end(struct xz_dec *s);
  235 +
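The four functions above combine into the usual multi-call decoding loop. As an illustrative sketch only (not part of the patch; `fill_input()` and `flush_output()` are hypothetical helpers, and error handling beyond the loop condition is elided):

```
/* Multi-call decoding loop (XZ_DYNALLOC). The decoder may allocate a
 * dictionary of up to dict_max bytes once the stream headers are parsed. */
struct xz_dec *s = xz_dec_init(XZ_DYNALLOC, 1 << 20 /* dict_max: 1 MiB */);
struct xz_buf b = { .in = in_buf, .out = out_buf, .out_size = sizeof(out_buf) };
enum xz_ret ret;

do {
	if (b.in_pos == b.in_size) {
		/* All input consumed: refill the input buffer. */
		b.in_size = fill_input(in_buf, sizeof(in_buf));
		b.in_pos = 0;
	}

	ret = xz_dec_run(s, &b);

	if (b.out_pos == b.out_size) {
		/* Output buffer full: hand the data to the consumer. */
		flush_output(out_buf, b.out_pos);
		b.out_pos = 0;
	}
} while (ret == XZ_OK);

/* ret == XZ_STREAM_END on success; other values are errors (see above). */
flush_output(out_buf, b.out_pos);
xz_dec_end(s);
```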
  236 +/*
  237 + * Standalone build (userspace build or in-kernel build for boot time use)
  238 + * needs a CRC32 implementation. For normal in-kernel use, kernel's own
  239 + * CRC32 module is used instead, and users of this module don't need to
  240 + * care about the functions below.
  241 + */
  242 +#ifndef XZ_INTERNAL_CRC32
  243 +# ifdef __KERNEL__
  244 +# define XZ_INTERNAL_CRC32 0
  245 +# else
  246 +# define XZ_INTERNAL_CRC32 1
  247 +# endif
  248 +#endif
  249 +
  250 +#if XZ_INTERNAL_CRC32
  251 +/*
  252 + * This must be called before any other xz_* function to initialize
  253 + * the CRC32 lookup table.
  254 + */
  255 +XZ_EXTERN void xz_crc32_init(void);
  256 +
  257 +/*
  258 + * Update CRC32 value using the polynomial from IEEE-802.3. To start a new
  259 + * calculation, the third argument must be zero. To continue the calculation,
  260 + * the previously returned value is passed as the third argument.
  261 + */
  262 +XZ_EXTERN uint32_t xz_crc32(const uint8_t *buf, size_t size, uint32_t crc);
  263 +#endif
  264 +#endif

lib/Kconfig
... ... @@ -106,6 +106,8 @@
106 106 config LZO_DECOMPRESS
107 107 tristate
108 108  
  109 +source "lib/xz/Kconfig"
  110 +
109 111 #
110 112 # These all provide a common interface (hence the apparent duplication with
111 113 # ZLIB_INFLATE; DECOMPRESS_GZIP is just a wrapper.)

lib/Makefile
... ... @@ -69,6 +69,7 @@
69 69 obj-$(CONFIG_REED_SOLOMON) += reed_solomon/
70 70 obj-$(CONFIG_LZO_COMPRESS) += lzo/
71 71 obj-$(CONFIG_LZO_DECOMPRESS) += lzo/
  72 +obj-$(CONFIG_XZ_DEC) += xz/
72 73 obj-$(CONFIG_RAID6_PQ) += raid6/
73 74  
74 75 lib-$(CONFIG_DECOMPRESS_GZIP) += decompress_inflate.o

lib/xz/Kconfig
  1 +config XZ_DEC
  2 + tristate "XZ decompression support"
  3 + select CRC32
  4 + help
  5 + LZMA2 compression algorithm and BCJ filters are supported using
  6 + the .xz file format as the container. For integrity checking,
  7 + CRC32 is supported. See Documentation/xz.txt for more information.
  8 +
  9 +config XZ_DEC_X86
  10 + bool "x86 BCJ filter decoder" if EMBEDDED
  11 + default y
  12 + depends on XZ_DEC
  13 + select XZ_DEC_BCJ
  14 +
  15 +config XZ_DEC_POWERPC
  16 + bool "PowerPC BCJ filter decoder" if EMBEDDED
  17 + default y
  18 + depends on XZ_DEC
  19 + select XZ_DEC_BCJ
  20 +
  21 +config XZ_DEC_IA64
  22 + bool "IA-64 BCJ filter decoder" if EMBEDDED
  23 + default y
  24 + depends on XZ_DEC
  25 + select XZ_DEC_BCJ
  26 +
  27 +config XZ_DEC_ARM
  28 + bool "ARM BCJ filter decoder" if EMBEDDED
  29 + default y
  30 + depends on XZ_DEC
  31 + select XZ_DEC_BCJ
  32 +
  33 +config XZ_DEC_ARMTHUMB
  34 + bool "ARM-Thumb BCJ filter decoder" if EMBEDDED
  35 + default y
  36 + depends on XZ_DEC
  37 + select XZ_DEC_BCJ
  38 +
  39 +config XZ_DEC_SPARC
  40 + bool "SPARC BCJ filter decoder" if EMBEDDED
  41 + default y
  42 + depends on XZ_DEC
  43 + select XZ_DEC_BCJ
  44 +
  45 +config XZ_DEC_BCJ
  46 + bool
  47 + default n
  48 +
  49 +config XZ_DEC_TEST
  50 + tristate "XZ decompressor tester"
  51 + default n
  52 + depends on XZ_DEC
  53 + help
  54 + This allows passing .xz files to the in-kernel XZ decoder via
  55 + a character special file. It calculates CRC32 of the decompressed
  56 + data and writes diagnostics to the system log.
  57 +
  58 + Unless you are developing the XZ decoder, you don't need this
  59 + and should say N.

lib/xz/Makefile
  1 +obj-$(CONFIG_XZ_DEC) += xz_dec.o
  2 +xz_dec-y := xz_dec_syms.o xz_dec_stream.o xz_dec_lzma2.o
  3 +xz_dec-$(CONFIG_XZ_DEC_BCJ) += xz_dec_bcj.o
  4 +
  5 +obj-$(CONFIG_XZ_DEC_TEST) += xz_dec_test.o

lib/xz/xz_crc32.c
  1 +/*
  2 + * CRC32 using the polynomial from IEEE-802.3
  3 + *
  4 + * Authors: Lasse Collin <lasse.collin@tukaani.org>
  5 + * Igor Pavlov <http://7-zip.org/>
  6 + *
  7 + * This file has been put into the public domain.
  8 + * You can do whatever you want with this file.
  9 + */
  10 +
  11 +/*
  12 + * This is not the fastest implementation, but it is pretty compact.
  13 + * The fastest versions of xz_crc32() on modern CPUs without hardware
  14 + * accelerated CRC instruction are 3-5 times as fast as this version,
  15 + * but they are bigger and use more memory for the lookup table.
  16 + */
  17 +
  18 +#include "xz_private.h"
  19 +
  20 +/*
  21 + * STATIC_RW_DATA is used in the pre-boot environment on some architectures.
  22 + * See <linux/decompress/mm.h> for details.
  23 + */
  24 +#ifndef STATIC_RW_DATA
  25 +# define STATIC_RW_DATA static
  26 +#endif
  27 +
  28 +STATIC_RW_DATA uint32_t xz_crc32_table[256];
  29 +
  30 +XZ_EXTERN void xz_crc32_init(void)
  31 +{
  32 + const uint32_t poly = 0xEDB88320;
  33 +
  34 + uint32_t i;
  35 + uint32_t j;
  36 + uint32_t r;
  37 +
  38 + for (i = 0; i < 256; ++i) {
  39 + r = i;
  40 + for (j = 0; j < 8; ++j)
  41 + r = (r >> 1) ^ (poly & ~((r & 1) - 1));
  42 +
  43 + xz_crc32_table[i] = r;
  44 + }
  45 +
  46 + return;
  47 +}
  48 +
  49 +XZ_EXTERN uint32_t xz_crc32(const uint8_t *buf, size_t size, uint32_t crc)
  50 +{
  51 + crc = ~crc;
  52 +
  53 + while (size != 0) {
  54 + crc = xz_crc32_table[*buf++ ^ (crc & 0xFF)] ^ (crc >> 8);
  55 + --size;
  56 + }
  57 +
  58 + return ~crc;
  59 +}
  1 +/*
  2 + * Branch/Call/Jump (BCJ) filter decoders
  3 + *
  4 + * Authors: Lasse Collin <lasse.collin@tukaani.org>
  5 + * Igor Pavlov <http://7-zip.org/>
  6 + *
  7 + * This file has been put into the public domain.
  8 + * You can do whatever you want with this file.
  9 + */
  10 +
  11 +#include "xz_private.h"
  12 +
  13 +/*
  14 + * The rest of the file is inside this ifdef. It makes things a little more
  15 + * convenient when building without support for any BCJ filters.
  16 + */
  17 +#ifdef XZ_DEC_BCJ
  18 +
  19 +struct xz_dec_bcj {
  20 + /* Type of the BCJ filter being used */
  21 + enum {
  22 + BCJ_X86 = 4, /* x86 or x86-64 */
  23 + BCJ_POWERPC = 5, /* Big endian only */
  24 + BCJ_IA64 = 6, /* Big or little endian */
  25 + BCJ_ARM = 7, /* Little endian only */
  26 + BCJ_ARMTHUMB = 8, /* Little endian only */
  27 + BCJ_SPARC = 9 /* Big or little endian */
  28 + } type;
  29 +
  30 + /*
  31 + * Return value of the next filter in the chain. We need to preserve
  32 + * this information across calls, because we must not call the next
  33 + * filter anymore once it has returned XZ_STREAM_END.
  34 + */
  35 + enum xz_ret ret;
  36 +
  37 + /* True if we are operating in single-call mode. */
  38 + bool single_call;
  39 +
  40 + /*
  41 + * Absolute position relative to the beginning of the uncompressed
  42 + * data (in a single .xz Block). We care only about the lowest 32
  43 + * bits so this doesn't need to be uint64_t even with big files.
  44 + */
  45 + uint32_t pos;
  46 +
  47 + /* x86 filter state */
  48 + uint32_t x86_prev_mask;
  49 +
  50 + /* Temporary space to hold the variables from struct xz_buf */
  51 + uint8_t *out;
  52 + size_t out_pos;
  53 + size_t out_size;
  54 +
  55 + struct {
  56 + /* Amount of already filtered data in the beginning of buf */
  57 + size_t filtered;
  58 +
  59 + /* Total amount of data currently stored in buf */
  60 + size_t size;
  61 +
  62 + /*
  63 + * Buffer to hold a mix of filtered and unfiltered data. This
  64 + * needs to be big enough to hold Alignment + 2 * Look-ahead:
  65 + *
  66 + * Type Alignment Look-ahead
  67 + * x86 1 4
  68 + * PowerPC 4 0
  69 + * IA-64 16 0
  70 + * ARM 4 0
  71 + * ARM-Thumb 2 2
  72 + * SPARC 4 0
  73 + */
  74 + uint8_t buf[16];
  75 + } temp;
  76 +};
  77 +
  78 +#ifdef XZ_DEC_X86
  79 +/*
  80 + * This is used to test the most significant byte of a memory address
  81 + * in an x86 instruction.
  82 + */
  83 +static inline int bcj_x86_test_msbyte(uint8_t b)
  84 +{
  85 + return b == 0x00 || b == 0xFF;
  86 +}
  87 +
  88 +static size_t bcj_x86(struct xz_dec_bcj *s, uint8_t *buf, size_t size)
  89 +{
  90 + static const bool mask_to_allowed_status[8]
  91 + = { true, true, true, false, true, false, false, false };
  92 +
  93 + static const uint8_t mask_to_bit_num[8] = { 0, 1, 2, 2, 3, 3, 3, 3 };
  94 +
  95 + size_t i;
  96 + size_t prev_pos = (size_t)-1;
  97 + uint32_t prev_mask = s->x86_prev_mask;
  98 + uint32_t src;
  99 + uint32_t dest;
  100 + uint32_t j;
  101 + uint8_t b;
  102 +
  103 + if (size <= 4)
  104 + return 0;
  105 +
  106 + size -= 4;
  107 + for (i = 0; i < size; ++i) {
  108 + if ((buf[i] & 0xFE) != 0xE8)
  109 + continue;
  110 +
  111 + prev_pos = i - prev_pos;
  112 + if (prev_pos > 3) {
  113 + prev_mask = 0;
  114 + } else {
  115 + prev_mask = (prev_mask << (prev_pos - 1)) & 7;
  116 + if (prev_mask != 0) {
  117 + b = buf[i + 4 - mask_to_bit_num[prev_mask]];
  118 + if (!mask_to_allowed_status[prev_mask]
  119 + || bcj_x86_test_msbyte(b)) {
  120 + prev_pos = i;
  121 + prev_mask = (prev_mask << 1) | 1;
  122 + continue;
  123 + }
  124 + }
  125 + }
  126 +
  127 + prev_pos = i;
  128 +
  129 + if (bcj_x86_test_msbyte(buf[i + 4])) {
  130 + src = get_unaligned_le32(buf + i + 1);
  131 + while (true) {
  132 + dest = src - (s->pos + (uint32_t)i + 5);
  133 + if (prev_mask == 0)
  134 + break;
  135 +
  136 + j = mask_to_bit_num[prev_mask] * 8;
  137 + b = (uint8_t)(dest >> (24 - j));
  138 + if (!bcj_x86_test_msbyte(b))
  139 + break;
  140 +
  141 + src = dest ^ (((uint32_t)1 << (32 - j)) - 1);
  142 + }
  143 +
  144 + dest &= 0x01FFFFFF;
  145 + dest |= (uint32_t)0 - (dest & 0x01000000);
  146 + put_unaligned_le32(dest, buf + i + 1);
  147 + i += 4;
  148 + } else {
  149 + prev_mask = (prev_mask << 1) | 1;
  150 + }
  151 + }
  152 +
  153 + prev_pos = i - prev_pos;
  154 + s->x86_prev_mask = prev_pos > 3 ? 0 : prev_mask << (prev_pos - 1);
  155 + return i;
  156 +}
  157 +#endif
  158 +
  159 +#ifdef XZ_DEC_POWERPC
  160 +static size_t bcj_powerpc(struct xz_dec_bcj *s, uint8_t *buf, size_t size)
  161 +{
  162 + size_t i;
  163 + uint32_t instr;
  164 +
  165 + for (i = 0; i + 4 <= size; i += 4) {
  166 + instr = get_unaligned_be32(buf + i);
  167 + if ((instr & 0xFC000003) == 0x48000001) {
  168 + instr &= 0x03FFFFFC;
  169 + instr -= s->pos + (uint32_t)i;
  170 + instr &= 0x03FFFFFC;
  171 + instr |= 0x48000001;
  172 + put_unaligned_be32(instr, buf + i);
  173 + }
  174 + }
  175 +
  176 + return i;
  177 +}
  178 +#endif
  179 +
  180 +#ifdef XZ_DEC_IA64
  181 +static size_t bcj_ia64(struct xz_dec_bcj *s, uint8_t *buf, size_t size)
  182 +{
  183 + static const uint8_t branch_table[32] = {
  184 + 0, 0, 0, 0, 0, 0, 0, 0,
  185 + 0, 0, 0, 0, 0, 0, 0, 0,
  186 + 4, 4, 6, 6, 0, 0, 7, 7,
  187 + 4, 4, 0, 0, 4, 4, 0, 0
  188 + };
  189 +
  190 + /*
  191 + * The local variables take a little bit stack space, but it's less
  192 + * than what LZMA2 decoder takes, so it doesn't make sense to reduce
  193 + * stack usage here without doing that for the LZMA2 decoder too.
  194 + */
  195 +
  196 + /* Loop counters */
  197 + size_t i;
  198 + size_t j;
  199 +
  200 + /* Instruction slot (0, 1, or 2) in the 128-bit instruction word */
  201 + uint32_t slot;
  202 +
  203 + /* Bitwise offset of the instruction indicated by slot */
  204 + uint32_t bit_pos;
  205 +
  206 + /* bit_pos split into byte and bit parts */
  207 + uint32_t byte_pos;
  208 + uint32_t bit_res;
  209 +
  210 + /* Address part of an instruction */
  211 + uint32_t addr;
  212 +
  213 + /* Mask used to detect which instructions to convert */
  214 + uint32_t mask;
  215 +
  216 + /* 41-bit instruction stored somewhere in the lowest 48 bits */
  217 + uint64_t instr;
  218 +
  219 + /* Instruction normalized with bit_res for easier manipulation */
  220 + uint64_t norm;
  221 +
  222 + for (i = 0; i + 16 <= size; i += 16) {
  223 + mask = branch_table[buf[i] & 0x1F];
  224 + for (slot = 0, bit_pos = 5; slot < 3; ++slot, bit_pos += 41) {
  225 + if (((mask >> slot) & 1) == 0)
  226 + continue;
  227 +
  228 + byte_pos = bit_pos >> 3;
  229 + bit_res = bit_pos & 7;
  230 + instr = 0;
  231 + for (j = 0; j < 6; ++j)
  232 + instr |= (uint64_t)(buf[i + j + byte_pos])
  233 + << (8 * j);
  234 +
  235 + norm = instr >> bit_res;
  236 +
  237 + if (((norm >> 37) & 0x0F) == 0x05
  238 + && ((norm >> 9) & 0x07) == 0) {
  239 + addr = (norm >> 13) & 0x0FFFFF;
  240 + addr |= ((uint32_t)(norm >> 36) & 1) << 20;
  241 + addr <<= 4;
  242 + addr -= s->pos + (uint32_t)i;
  243 + addr >>= 4;
  244 +
  245 + norm &= ~((uint64_t)0x8FFFFF << 13);
  246 + norm |= (uint64_t)(addr & 0x0FFFFF) << 13;
  247 + norm |= (uint64_t)(addr & 0x100000)
  248 + << (36 - 20);
  249 +
  250 + instr &= (1 << bit_res) - 1;
  251 + instr |= norm << bit_res;
  252 +
  253 + for (j = 0; j < 6; j++)
  254 + buf[i + j + byte_pos]
  255 + = (uint8_t)(instr >> (8 * j));
  256 + }
  257 + }
  258 + }
  259 +
  260 + return i;
  261 +}
  262 +#endif
  263 +
  264 +#ifdef XZ_DEC_ARM
  265 +static size_t bcj_arm(struct xz_dec_bcj *s, uint8_t *buf, size_t size)
  266 +{
  267 + size_t i;
  268 + uint32_t addr;
  269 +
  270 + for (i = 0; i + 4 <= size; i += 4) {
  271 + if (buf[i + 3] == 0xEB) {
  272 + addr = (uint32_t)buf[i] | ((uint32_t)buf[i + 1] << 8)
  273 + | ((uint32_t)buf[i + 2] << 16);
  274 + addr <<= 2;
  275 + addr -= s->pos + (uint32_t)i + 8;
  276 + addr >>= 2;
  277 + buf[i] = (uint8_t)addr;
  278 + buf[i + 1] = (uint8_t)(addr >> 8);
  279 + buf[i + 2] = (uint8_t)(addr >> 16);
  280 + }
  281 + }
  282 +
  283 + return i;
  284 +}
  285 +#endif
  286 +
  287 +#ifdef XZ_DEC_ARMTHUMB
  288 +static size_t bcj_armthumb(struct xz_dec_bcj *s, uint8_t *buf, size_t size)
  289 +{
  290 + size_t i;
  291 + uint32_t addr;
  292 +
  293 + for (i = 0; i + 4 <= size; i += 2) {
  294 + if ((buf[i + 1] & 0xF8) == 0xF0
  295 + && (buf[i + 3] & 0xF8) == 0xF8) {
  296 + addr = (((uint32_t)buf[i + 1] & 0x07) << 19)
  297 + | ((uint32_t)buf[i] << 11)
  298 + | (((uint32_t)buf[i + 3] & 0x07) << 8)
  299 + | (uint32_t)buf[i + 2];
  300 + addr <<= 1;
  301 + addr -= s->pos + (uint32_t)i + 4;
  302 + addr >>= 1;
  303 + buf[i + 1] = (uint8_t)(0xF0 | ((addr >> 19) & 0x07));
  304 + buf[i] = (uint8_t)(addr >> 11);
  305 + buf[i + 3] = (uint8_t)(0xF8 | ((addr >> 8) & 0x07));
  306 + buf[i + 2] = (uint8_t)addr;
  307 + i += 2;
  308 + }
  309 + }
  310 +
  311 + return i;
  312 +}
  313 +#endif
  314 +
  315 +#ifdef XZ_DEC_SPARC
  316 +static size_t bcj_sparc(struct xz_dec_bcj *s, uint8_t *buf, size_t size)
  317 +{
  318 + size_t i;
  319 + uint32_t instr;
  320 +
  321 + for (i = 0; i + 4 <= size; i += 4) {
  322 + instr = get_unaligned_be32(buf + i);
  323 + if ((instr >> 22) == 0x100 || (instr >> 22) == 0x1FF) {
  324 + instr <<= 2;
  325 + instr -= s->pos + (uint32_t)i;
  326 + instr >>= 2;
  327 + instr = ((uint32_t)0x40000000 - (instr & 0x400000))
  328 + | 0x40000000 | (instr & 0x3FFFFF);
  329 + put_unaligned_be32(instr, buf + i);
  330 + }
  331 + }
  332 +
  333 + return i;
  334 +}
  335 +#endif
  336 +
  337 +/*
  338 + * Apply the selected BCJ filter. Update *pos and s->pos to match the amount
  339 + * of data that got filtered.
  340 + *
  341 + * NOTE: This is implemented as a switch statement to avoid using function
  342 + * pointers, which could be problematic in the kernel boot code, where
  343 + * pointers to static data must be avoided (at least on x86).
  344 + */
  345 +static void bcj_apply(struct xz_dec_bcj *s,
  346 + uint8_t *buf, size_t *pos, size_t size)
  347 +{
  348 + size_t filtered;
  349 +
  350 + buf += *pos;
  351 + size -= *pos;
  352 +
  353 + switch (s->type) {
  354 +#ifdef XZ_DEC_X86
  355 + case BCJ_X86:
  356 + filtered = bcj_x86(s, buf, size);
  357 + break;
  358 +#endif
  359 +#ifdef XZ_DEC_POWERPC
  360 + case BCJ_POWERPC:
  361 + filtered = bcj_powerpc(s, buf, size);
  362 + break;
  363 +#endif
  364 +#ifdef XZ_DEC_IA64
  365 + case BCJ_IA64:
  366 + filtered = bcj_ia64(s, buf, size);
  367 + break;
  368 +#endif
  369 +#ifdef XZ_DEC_ARM
  370 + case BCJ_ARM:
  371 + filtered = bcj_arm(s, buf, size);
  372 + break;
  373 +#endif
  374 +#ifdef XZ_DEC_ARMTHUMB
  375 + case BCJ_ARMTHUMB:
  376 + filtered = bcj_armthumb(s, buf, size);
  377 + break;
  378 +#endif
  379 +#ifdef XZ_DEC_SPARC
  380 + case BCJ_SPARC:
  381 + filtered = bcj_sparc(s, buf, size);
  382 + break;
  383 +#endif
  384 + default:
  385 + /* Never reached but silence compiler warnings. */
  386 + filtered = 0;
  387 + break;
  388 + }
  389 +
  390 + *pos += filtered;
  391 + s->pos += filtered;
  392 +}
  393 +
  394 +/*
  395 + * Flush pending filtered data from temp to the output buffer.
  396 + * Move the remaining mixture of possibly filtered and unfiltered
  397 + * data to the beginning of temp.
  398 + */
  399 +static void bcj_flush(struct xz_dec_bcj *s, struct xz_buf *b)
  400 +{
  401 + size_t copy_size;
  402 +
  403 + copy_size = min_t(size_t, s->temp.filtered, b->out_size - b->out_pos);
  404 + memcpy(b->out + b->out_pos, s->temp.buf, copy_size);
  405 + b->out_pos += copy_size;
  406 +
  407 + s->temp.filtered -= copy_size;
  408 + s->temp.size -= copy_size;
  409 + memmove(s->temp.buf, s->temp.buf + copy_size, s->temp.size);
  410 +}
  411 +
  412 +/*
  413 + * The BCJ filter functions are primitive in the sense that they process the
  414 + * data in chunks of 1-16 bytes. To hide this issue, this function does
  415 + * some buffering.
  416 + */
  417 +XZ_EXTERN enum xz_ret xz_dec_bcj_run(struct xz_dec_bcj *s,
  418 + struct xz_dec_lzma2 *lzma2,
  419 + struct xz_buf *b)
  420 +{
  421 + size_t out_start;
  422 +
  423 + /*
  424 + * Flush already-filtered data pending in temp to the output buffer. Return
  425 + * immediately if we couldn't flush everything, or if the next
  426 + * filter in the chain had already returned XZ_STREAM_END.
  427 + */
  428 + if (s->temp.filtered > 0) {
  429 + bcj_flush(s, b);
  430 + if (s->temp.filtered > 0)
  431 + return XZ_OK;
  432 +
  433 + if (s->ret == XZ_STREAM_END)
  434 + return XZ_STREAM_END;
  435 + }
  436 +
  437 + /*
  438 + * If we have more output space than what is currently pending in
  439 + * temp, copy the unfiltered data from temp to the output buffer
  440 + * and try to fill the output buffer by decoding more data from the
  441 + * next filter in the chain. Apply the BCJ filter on the new data
  442 + * in the output buffer. If everything cannot be filtered, copy it
  443 + * to temp and rewind the output buffer position accordingly.
  444 + */
  445 + if (s->temp.size < b->out_size - b->out_pos) {
  446 + out_start = b->out_pos;
  447 + memcpy(b->out + b->out_pos, s->temp.buf, s->temp.size);
  448 + b->out_pos += s->temp.size;
  449 +
  450 + s->ret = xz_dec_lzma2_run(lzma2, b);
  451 + if (s->ret != XZ_STREAM_END
  452 + && (s->ret != XZ_OK || s->single_call))
  453 + return s->ret;
  454 +
  455 + bcj_apply(s, b->out, &out_start, b->out_pos);
  456 +
  457 + /*
  458 + * As an exception, if the next filter returned XZ_STREAM_END,
  459 + * we can do that too, since the last few bytes that remain
  460 + * unfiltered are meant to remain unfiltered.
  461 + */
  462 + if (s->ret == XZ_STREAM_END)
  463 + return XZ_STREAM_END;
  464 +
  465 + s->temp.size = b->out_pos - out_start;
  466 + b->out_pos -= s->temp.size;
  467 + memcpy(s->temp.buf, b->out + b->out_pos, s->temp.size);
  468 + }
  469 +
  470 + /*
  471 + * If we have unfiltered data in temp, try to fill temp by decoding more
  472 + * data from the next filter. Apply the BCJ filter on temp. Then we
  473 + * hopefully can fill the actual output buffer by copying filtered
  474 + * data from temp. A mix of filtered and unfiltered data may be left
  475 + * in temp; it will be taken care of on the next call to this function.
  476 + */
  477 + if (s->temp.size > 0) {
  478 + /* Make b->out{,_pos,_size} temporarily point to s->temp. */
  479 + s->out = b->out;
  480 + s->out_pos = b->out_pos;
  481 + s->out_size = b->out_size;
  482 + b->out = s->temp.buf;
  483 + b->out_pos = s->temp.size;
  484 + b->out_size = sizeof(s->temp.buf);
  485 +
  486 + s->ret = xz_dec_lzma2_run(lzma2, b);
  487 +
  488 + s->temp.size = b->out_pos;
  489 + b->out = s->out;
  490 + b->out_pos = s->out_pos;
  491 + b->out_size = s->out_size;
  492 +
  493 + if (s->ret != XZ_OK && s->ret != XZ_STREAM_END)
  494 + return s->ret;
  495 +
  496 + bcj_apply(s, s->temp.buf, &s->temp.filtered, s->temp.size);
  497 +
  498 + /*
  499 + * If the next filter returned XZ_STREAM_END, we mark that
  500 + * everything is filtered, since the last unfiltered bytes
  501 + * of the stream are meant to be left as is.
  502 + */
  503 + if (s->ret == XZ_STREAM_END)
  504 + s->temp.filtered = s->temp.size;
  505 +
  506 + bcj_flush(s, b);
  507 + if (s->temp.filtered > 0)
  508 + return XZ_OK;
  509 + }
  510 +
  511 + return s->ret;
  512 +}
  513 +
  514 +XZ_EXTERN struct xz_dec_bcj *xz_dec_bcj_create(bool single_call)
  515 +{
  516 + struct xz_dec_bcj *s = kmalloc(sizeof(*s), GFP_KERNEL);
  517 + if (s != NULL)
  518 + s->single_call = single_call;
  519 +
  520 + return s;
  521 +}
  522 +
  523 +XZ_EXTERN enum xz_ret xz_dec_bcj_reset(struct xz_dec_bcj *s, uint8_t id)
  524 +{
  525 + switch (id) {
  526 +#ifdef XZ_DEC_X86
  527 + case BCJ_X86:
  528 +#endif
  529 +#ifdef XZ_DEC_POWERPC
  530 + case BCJ_POWERPC:
  531 +#endif
  532 +#ifdef XZ_DEC_IA64
  533 + case BCJ_IA64:
  534 +#endif
  535 +#ifdef XZ_DEC_ARM
  536 + case BCJ_ARM:
  537 +#endif
  538 +#ifdef XZ_DEC_ARMTHUMB
  539 + case BCJ_ARMTHUMB:
  540 +#endif
  541 +#ifdef XZ_DEC_SPARC
  542 + case BCJ_SPARC:
  543 +#endif
  544 + break;
  545 +
  546 + default:
  547 + /* Unsupported Filter ID */
  548 + return XZ_OPTIONS_ERROR;
  549 + }
  550 +
  551 + s->type = id;
  552 + s->ret = XZ_OK;
  553 + s->pos = 0;
  554 + s->x86_prev_mask = 0;
  555 + s->temp.filtered = 0;
  556 + s->temp.size = 0;
  557 +
  558 + return XZ_OK;
  559 +}
  560 +
  561 +#endif
lib/xz/xz_dec_lzma2.c
  1 +/*
  2 + * LZMA2 decoder
  3 + *
  4 + * Authors: Lasse Collin <lasse.collin@tukaani.org>
  5 + * Igor Pavlov <http://7-zip.org/>
  6 + *
  7 + * This file has been put into the public domain.
  8 + * You can do whatever you want with this file.
  9 + */
  10 +
  11 +#include "xz_private.h"
  12 +#include "xz_lzma2.h"
  13 +
  14 +/*
  15 + * Range decoder initialization eats the first five bytes of each LZMA chunk.
  16 + */
  17 +#define RC_INIT_BYTES 5
  18 +
  19 +/*
  20 + * Minimum amount of input needed to safely decode one LZMA symbol.
  21 + * The worst case is that we decode 22 bits using probabilities and 26
  22 + * direct bits. This may consume at most 20 bytes of input. However,
  23 + * lzma_main() does an extra normalization before returning, thus we
  24 + * need to put 21 here.
  25 + */
  26 +#define LZMA_IN_REQUIRED 21
  27 +
  28 +/*
  29 + * Dictionary (history buffer)
  30 + *
  31 + * These are always true:
  32 + * start <= pos <= full <= end
  33 + * pos <= limit <= end
  34 + *
  35 + * In multi-call mode, also these are true:
  36 + * end == size
  37 + * size <= size_max
  38 + * allocated <= size
  39 + *
  40 + * Most of these variables are size_t to support single-call mode,
  41 + * in which the dictionary variables address the actual output
  42 + * buffer directly.
  43 + */
  44 +struct dictionary {
  45 + /* Beginning of the history buffer */
  46 + uint8_t *buf;
  47 +
  48 + /* Old position in buf (before decoding more data) */
  49 + size_t start;
  50 +
  51 + /* Position in buf */
  52 + size_t pos;
  53 +
  54 + /*
  55 + * How full the dictionary is. This is used to detect corrupt input that
  56 + * would read beyond the beginning of the uncompressed stream.
  57 + */
  58 + size_t full;
  59 +
  60 + /* Write limit; we don't write to buf[limit] or later bytes. */
  61 + size_t limit;
  62 +
  63 + /*
  64 + * End of the dictionary buffer. In multi-call mode, this is
  65 + * the same as the dictionary size. In single-call mode, this
  66 + * indicates the size of the output buffer.
  67 + */
  68 + size_t end;
  69 +
  70 + /*
  71 + * Size of the dictionary as specified in Block Header. This is used
  72 + * together with "full" to detect corrupt input that would make us
  73 + * read beyond the beginning of the uncompressed stream.
  74 + */
  75 + uint32_t size;
  76 +
  77 + /*
  78 + * Maximum allowed dictionary size in multi-call mode.
  79 + * This is ignored in single-call mode.
  80 + */
  81 + uint32_t size_max;
  82 +
  83 + /*
  84 + * Amount of memory currently allocated for the dictionary.
  85 + * This is used only with XZ_DYNALLOC. (With XZ_PREALLOC,
  86 + * size_max is always the same as the allocated size.)
  87 + */
  88 + uint32_t allocated;
  89 +
  90 + /* Operation mode */
  91 + enum xz_mode mode;
  92 +};
  93 +
  94 +/* Range decoder */
  95 +struct rc_dec {
  96 + uint32_t range;
  97 + uint32_t code;
  98 +
  99 + /*
  100 + * Number of initializing bytes remaining to be read
  101 + * by rc_read_init().
  102 + */
  103 + uint32_t init_bytes_left;
  104 +
  105 + /*
  106 + * Buffer from which we read our input. It can be either
  107 + * temp.buf or the caller-provided input buffer.
  108 + */
  109 + const uint8_t *in;
  110 + size_t in_pos;
  111 + size_t in_limit;
  112 +};
  113 +
  114 +/* Probabilities for a length decoder. */
  115 +struct lzma_len_dec {
  116 + /* Probability of match length being at least 10 */
  117 + uint16_t choice;
  118 +
  119 + /* Probability of match length being at least 18 */
  120 + uint16_t choice2;
  121 +
  122 + /* Probabilities for match lengths 2-9 */
  123 + uint16_t low[POS_STATES_MAX][LEN_LOW_SYMBOLS];
  124 +
  125 + /* Probabilities for match lengths 10-17 */
  126 + uint16_t mid[POS_STATES_MAX][LEN_MID_SYMBOLS];
  127 +
  128 + /* Probabilities for match lengths 18-273 */
  129 + uint16_t high[LEN_HIGH_SYMBOLS];
  130 +};
  131 +
  132 +struct lzma_dec {
  133 + /* Distances of latest four matches */
  134 + uint32_t rep0;
  135 + uint32_t rep1;
  136 + uint32_t rep2;
  137 + uint32_t rep3;
  138 +
  139 + /* Types of the most recently seen LZMA symbols */
  140 + enum lzma_state state;
  141 +
  142 + /*
  143 + * Length of a match. This is updated so that dict_repeat can
  144 + * be called again to finish repeating the whole match.
  145 + */
  146 + uint32_t len;
  147 +
  148 + /*
  149 + * LZMA properties or related bit masks (number of literal
  150 + * context bits, a mask derived from the number of literal
  151 + * position bits, and a mask derived from the number of
  152 + * position bits)
  153 + */
  154 + uint32_t lc;
  155 + uint32_t literal_pos_mask; /* (1 << lp) - 1 */
  156 + uint32_t pos_mask; /* (1 << pb) - 1 */
  157 +
  158 + /* If 1, it's a match. Otherwise it's a single 8-bit literal. */
  159 + uint16_t is_match[STATES][POS_STATES_MAX];
  160 +
  161 + /* If 1, it's a repeated match. The distance is one of rep0 .. rep3. */
  162 + uint16_t is_rep[STATES];
  163 +
  164 + /*
  165 + * If 0, distance of a repeated match is rep0.
  166 + * Otherwise check is_rep1.
  167 + */
  168 + uint16_t is_rep0[STATES];
  169 +
  170 + /*
  171 + * If 0, distance of a repeated match is rep1.
  172 + * Otherwise check is_rep2.
  173 + */
  174 + uint16_t is_rep1[STATES];
  175 +
  176 + /* If 0, distance of a repeated match is rep2. Otherwise it is rep3. */
  177 + uint16_t is_rep2[STATES];
  178 +
  179 + /*
  180 + * If 1, the repeated match has a length of one byte. Otherwise
  181 + * the length is decoded from rep_len_decoder.
  182 + */
  183 + uint16_t is_rep0_long[STATES][POS_STATES_MAX];
  184 +
  185 + /*
  186 + * Probability tree for the highest two bits of the match
  187 + * distance. There is a separate probability tree for match
  188 + * lengths of 2 (i.e. MATCH_LEN_MIN), 3, 4, and [5, 273].
  189 + */
  190 + uint16_t dist_slot[DIST_STATES][DIST_SLOTS];
  191 +
  192 + /*
  193 + * Probability trees for additional bits for match distance
  194 + * when the distance is in the range [4, 127].
  195 + */
  196 + uint16_t dist_special[FULL_DISTANCES - DIST_MODEL_END];
  197 +
  198 + /*
  199 + * Probability tree for the lowest four bits of a match
  200 + * distance that is equal to or greater than 128.
  201 + */
  202 + uint16_t dist_align[ALIGN_SIZE];
  203 +
  204 + /* Length of a normal match */
  205 + struct lzma_len_dec match_len_dec;
  206 +
  207 + /* Length of a repeated match */
  208 + struct lzma_len_dec rep_len_dec;
  209 +
  210 + /* Probabilities of literals */
  211 + uint16_t literal[LITERAL_CODERS_MAX][LITERAL_CODER_SIZE];
  212 +};
  213 +
  214 +struct lzma2_dec {
  215 + /* Position in xz_dec_lzma2_run(). */
  216 + enum lzma2_seq {
  217 + SEQ_CONTROL,
  218 + SEQ_UNCOMPRESSED_1,
  219 + SEQ_UNCOMPRESSED_2,
  220 + SEQ_COMPRESSED_0,
  221 + SEQ_COMPRESSED_1,
  222 + SEQ_PROPERTIES,
  223 + SEQ_LZMA_PREPARE,
  224 + SEQ_LZMA_RUN,
  225 + SEQ_COPY
  226 + } sequence;
  227 +
  228 + /* Next position after decoding the compressed size of the chunk. */
  229 + enum lzma2_seq next_sequence;
  230 +
  231 + /* Uncompressed size of LZMA chunk (2 MiB at maximum) */
  232 + uint32_t uncompressed;
  233 +
  234 + /*
  235 + * Compressed size of LZMA chunk or compressed/uncompressed
  236 + * size of uncompressed chunk (64 KiB at maximum)
  237 + */
  238 + uint32_t compressed;
  239 +
  240 + /*
  241 + * True if dictionary reset is needed. This is false before
  242 + * the first chunk (LZMA or uncompressed).
  243 + */
  244 + bool need_dict_reset;
  245 +
  246 + /*
  247 + * True if new LZMA properties are needed. This is false
  248 + * before the first LZMA chunk.
  249 + */
  250 + bool need_props;
  251 +};
  252 +
  253 +struct xz_dec_lzma2 {
  254 + /*
  255 + * The order below is important on x86 to reduce code size and
  256 + * it shouldn't hurt on other platforms. Everything up to and
  257 + * including lzma.pos_mask are in the first 128 bytes on x86-32,
  258 + * which allows using smaller instructions to access those
  259 + * variables. On x86-64, fewer variables fit into the first 128
  260 + * bytes, but this is still the best order without sacrificing
  261 + * the readability by splitting the structures.
  262 + */
  263 + struct rc_dec rc;
  264 + struct dictionary dict;
  265 + struct lzma2_dec lzma2;
  266 + struct lzma_dec lzma;
  267 +
  268 + /*
  269 + * Temporary buffer which holds a small number of input bytes between
  270 + * decoder calls. See lzma2_lzma() for details.
  271 + */
  272 + struct {
  273 + uint32_t size;
  274 + uint8_t buf[3 * LZMA_IN_REQUIRED];
  275 + } temp;
  276 +};
  277 +
  278 +/**************
  279 + * Dictionary *
  280 + **************/
  281 +
  282 +/*
  283 + * Reset the dictionary state. When in single-call mode, set up the beginning
  284 + * of the dictionary to point to the actual output buffer.
  285 + */
  286 +static void dict_reset(struct dictionary *dict, struct xz_buf *b)
  287 +{
  288 + if (DEC_IS_SINGLE(dict->mode)) {
  289 + dict->buf = b->out + b->out_pos;
  290 + dict->end = b->out_size - b->out_pos;
  291 + }
  292 +
  293 + dict->start = 0;
  294 + dict->pos = 0;
  295 + dict->limit = 0;
  296 + dict->full = 0;
  297 +}
  298 +
  299 +/* Set dictionary write limit */
  300 +static void dict_limit(struct dictionary *dict, size_t out_max)
  301 +{
  302 + if (dict->end - dict->pos <= out_max)
  303 + dict->limit = dict->end;
  304 + else
  305 + dict->limit = dict->pos + out_max;
  306 +}
  307 +
  308 +/* Return true if at least one byte can be written into the dictionary. */
  309 +static inline bool dict_has_space(const struct dictionary *dict)
  310 +{
  311 + return dict->pos < dict->limit;
  312 +}
  313 +
  314 +/*
  315 + * Get a byte from the dictionary at the given distance. The distance is
  316 + * assumed to be valid, or as a special case, zero when the dictionary is
  317 + * still empty. This special case is needed for single-call decoding to
  318 + * avoid writing a '\0' to the end of the destination buffer.
  319 + */
  320 +static inline uint32_t dict_get(const struct dictionary *dict, uint32_t dist)
  321 +{
  322 + size_t offset = dict->pos - dist - 1;
  323 +
  324 + if (dist >= dict->pos)
  325 + offset += dict->end;
  326 +
  327 + return dict->full > 0 ? dict->buf[offset] : 0;
  328 +}
  329 +
  330 +/*
  331 + * Put one byte into the dictionary. It is assumed that there is space for it.
  332 + */
  333 +static inline void dict_put(struct dictionary *dict, uint8_t byte)
  334 +{
  335 + dict->buf[dict->pos++] = byte;
  336 +
  337 + if (dict->full < dict->pos)
  338 + dict->full = dict->pos;
  339 +}
  340 +
  341 +/*
  342 + * Repeat given number of bytes from the given distance. If the distance is
  343 + * invalid, false is returned. On success, true is returned and *len is
  344 + * updated to indicate how many bytes were left to be repeated.
  345 + */
  346 +static bool dict_repeat(struct dictionary *dict, uint32_t *len, uint32_t dist)
  347 +{
  348 + size_t back;
  349 + uint32_t left;
  350 +
  351 + if (dist >= dict->full || dist >= dict->size)
  352 + return false;
  353 +
  354 + left = min_t(size_t, dict->limit - dict->pos, *len);
  355 + *len -= left;
  356 +
  357 + back = dict->pos - dist - 1;
  358 + if (dist >= dict->pos)
  359 + back += dict->end;
  360 +
  361 + do {
  362 + dict->buf[dict->pos++] = dict->buf[back++];
  363 + if (back == dict->end)
  364 + back = 0;
  365 + } while (--left > 0);
  366 +
  367 + if (dict->full < dict->pos)
  368 + dict->full = dict->pos;
  369 +
  370 + return true;
  371 +}
  372 +
  373 +/* Copy uncompressed data as is from input to dictionary and output buffers. */
  374 +static void dict_uncompressed(struct dictionary *dict, struct xz_buf *b,
  375 + uint32_t *left)
  376 +{
  377 + size_t copy_size;
  378 +
  379 + while (*left > 0 && b->in_pos < b->in_size
  380 + && b->out_pos < b->out_size) {
  381 + copy_size = min(b->in_size - b->in_pos,
  382 + b->out_size - b->out_pos);
  383 + if (copy_size > dict->end - dict->pos)
  384 + copy_size = dict->end - dict->pos;
  385 + if (copy_size > *left)
  386 + copy_size = *left;
  387 +
  388 + *left -= copy_size;
  389 +
  390 + memcpy(dict->buf + dict->pos, b->in + b->in_pos, copy_size);
  391 + dict->pos += copy_size;
  392 +
  393 + if (dict->full < dict->pos)
  394 + dict->full = dict->pos;
  395 +
  396 + if (DEC_IS_MULTI(dict->mode)) {
  397 + if (dict->pos == dict->end)
  398 + dict->pos = 0;
  399 +
  400 + memcpy(b->out + b->out_pos, b->in + b->in_pos,
  401 + copy_size);
  402 + }
  403 +
  404 + dict->start = dict->pos;
  405 +
  406 + b->out_pos += copy_size;
  407 + b->in_pos += copy_size;
  408 + }
  409 +}
  410 +
  411 +/*
  412 + * Flush pending data from dictionary to b->out. It is assumed that there is
  413 + * enough space in b->out. This is guaranteed because the caller uses dict_limit()
  414 + * before decoding data into the dictionary.
  415 + */
  416 +static uint32_t dict_flush(struct dictionary *dict, struct xz_buf *b)
  417 +{
  418 + size_t copy_size = dict->pos - dict->start;
  419 +
  420 + if (DEC_IS_MULTI(dict->mode)) {
  421 + if (dict->pos == dict->end)
  422 + dict->pos = 0;
  423 +
  424 + memcpy(b->out + b->out_pos, dict->buf + dict->start,
  425 + copy_size);
  426 + }
  427 +
  428 + dict->start = dict->pos;
  429 + b->out_pos += copy_size;
  430 + return copy_size;
  431 +}
  432 +
  433 +/*****************
  434 + * Range decoder *
  435 + *****************/
  436 +
  437 +/* Reset the range decoder. */
  438 +static void rc_reset(struct rc_dec *rc)
  439 +{
  440 + rc->range = (uint32_t)-1;
  441 + rc->code = 0;
  442 + rc->init_bytes_left = RC_INIT_BYTES;
  443 +}
  444 +
  445 +/*
  446 + * Read the first five initial bytes into rc->code if they haven't been
  447 + * read already. (Yes, the first byte gets completely ignored.)
  448 + */
  449 +static bool rc_read_init(struct rc_dec *rc, struct xz_buf *b)
  450 +{
  451 + while (rc->init_bytes_left > 0) {
  452 + if (b->in_pos == b->in_size)
  453 + return false;
  454 +
  455 + rc->code = (rc->code << 8) + b->in[b->in_pos++];
  456 + --rc->init_bytes_left;
  457 + }
  458 +
  459 + return true;
  460 +}
  461 +
  462 +/* Return true if there may not be enough input for the next decoding loop. */
  463 +static inline bool rc_limit_exceeded(const struct rc_dec *rc)
  464 +{
  465 + return rc->in_pos > rc->in_limit;
  466 +}
  467 +
  468 +/*
  469 + * Return true if it is possible (from point of view of range decoder) that
  470 + * we have reached the end of the LZMA chunk.
  471 + */
  472 +static inline bool rc_is_finished(const struct rc_dec *rc)
  473 +{
  474 + return rc->code == 0;
  475 +}
  476 +
  477 +/* Read the next input byte if needed. */
  478 +static __always_inline void rc_normalize(struct rc_dec *rc)
  479 +{
  480 + if (rc->range < RC_TOP_VALUE) {
  481 + rc->range <<= RC_SHIFT_BITS;
  482 + rc->code = (rc->code << RC_SHIFT_BITS) + rc->in[rc->in_pos++];
  483 + }
  484 +}
  485 +
  486 +/*
  487 + * Decode one bit. In some versions, this function has been split into three
  488 + * functions so that the compiler is supposed to be able to more easily avoid
  489 + * an extra branch. In this particular version of the LZMA decoder, this
  490 + * doesn't seem to be a good idea (tested with GCC 3.3.6, 3.4.6, and 4.3.3
  491 + * on x86). Using a non-split version results in nicer-looking code too.
  492 + *
  493 + * NOTE: This must return an int. Do not make it return a bool or the speed
  494 + * of the code generated by GCC 3.x decreases 10-15 %. (GCC 4.3 doesn't care,
  495 + * and it generates 10-20 % faster code than GCC 3.x from this file anyway.)
  496 + */
  497 +static __always_inline int rc_bit(struct rc_dec *rc, uint16_t *prob)
  498 +{
  499 + uint32_t bound;
  500 + int bit;
  501 +
  502 + rc_normalize(rc);
  503 + bound = (rc->range >> RC_BIT_MODEL_TOTAL_BITS) * *prob;
  504 + if (rc->code < bound) {
  505 + rc->range = bound;
  506 + *prob += (RC_BIT_MODEL_TOTAL - *prob) >> RC_MOVE_BITS;
  507 + bit = 0;
  508 + } else {
  509 + rc->range -= bound;
  510 + rc->code -= bound;
  511 + *prob -= *prob >> RC_MOVE_BITS;
  512 + bit = 1;
  513 + }
  514 +
  515 + return bit;
  516 +}
  517 +
  518 +/* Decode a bittree starting from the most significant bit. */
  519 +static __always_inline uint32_t rc_bittree(struct rc_dec *rc,
  520 + uint16_t *probs, uint32_t limit)
  521 +{
  522 + uint32_t symbol = 1;
  523 +
  524 + do {
  525 + if (rc_bit(rc, &probs[symbol]))
  526 + symbol = (symbol << 1) + 1;
  527 + else
  528 + symbol <<= 1;
  529 + } while (symbol < limit);
  530 +
  531 + return symbol;
  532 +}
  533 +
  534 +/* Decode a bittree starting from the least significant bit. */
  535 +static __always_inline void rc_bittree_reverse(struct rc_dec *rc,
  536 + uint16_t *probs,
  537 + uint32_t *dest, uint32_t limit)
  538 +{
  539 + uint32_t symbol = 1;
  540 + uint32_t i = 0;
  541 +
  542 + do {
  543 + if (rc_bit(rc, &probs[symbol])) {
  544 + symbol = (symbol << 1) + 1;
  545 + *dest += 1 << i;
  546 + } else {
  547 + symbol <<= 1;
  548 + }
  549 + } while (++i < limit);
  550 +}
  551 +
  552 +/* Decode direct bits (fixed fifty-fifty probability) */
  553 +static inline void rc_direct(struct rc_dec *rc, uint32_t *dest, uint32_t limit)
  554 +{
  555 + uint32_t mask;
  556 +
  557 + do {
  558 + rc_normalize(rc);
  559 + rc->range >>= 1;
  560 + rc->code -= rc->range;
  561 + mask = (uint32_t)0 - (rc->code >> 31);
  562 + rc->code += rc->range & mask;
  563 + *dest = (*dest << 1) + (mask + 1);
  564 + } while (--limit > 0);
  565 +}
  566 +
  567 +/********
  568 + * LZMA *
  569 + ********/
  570 +
  571 +/* Get pointer to literal coder probability array. */
  572 +static uint16_t *lzma_literal_probs(struct xz_dec_lzma2 *s)
  573 +{
  574 + uint32_t prev_byte = dict_get(&s->dict, 0);
  575 + uint32_t low = prev_byte >> (8 - s->lzma.lc);
  576 + uint32_t high = (s->dict.pos & s->lzma.literal_pos_mask) << s->lzma.lc;
  577 + return s->lzma.literal[low + high];
  578 +}
  579 +
  580 +/* Decode a literal (one 8-bit byte) */
  581 +static void lzma_literal(struct xz_dec_lzma2 *s)
  582 +{
  583 + uint16_t *probs;
  584 + uint32_t symbol;
  585 + uint32_t match_byte;
  586 + uint32_t match_bit;
  587 + uint32_t offset;
  588 + uint32_t i;
  589 +
  590 + probs = lzma_literal_probs(s);
  591 +
  592 + if (lzma_state_is_literal(s->lzma.state)) {
  593 + symbol = rc_bittree(&s->rc, probs, 0x100);
  594 + } else {
  595 + symbol = 1;
  596 + match_byte = dict_get(&s->dict, s->lzma.rep0) << 1;
  597 + offset = 0x100;
  598 +
  599 + do {
  600 + match_bit = match_byte & offset;
  601 + match_byte <<= 1;
  602 + i = offset + match_bit + symbol;
  603 +
  604 + if (rc_bit(&s->rc, &probs[i])) {
  605 + symbol = (symbol << 1) + 1;
  606 + offset &= match_bit;
  607 + } else {
  608 + symbol <<= 1;
  609 + offset &= ~match_bit;
  610 + }
  611 + } while (symbol < 0x100);
  612 + }
  613 +
  614 + dict_put(&s->dict, (uint8_t)symbol);
  615 + lzma_state_literal(&s->lzma.state);
  616 +}
  617 +
  618 +/* Decode the length of the match into s->lzma.len. */
  619 +static void lzma_len(struct xz_dec_lzma2 *s, struct lzma_len_dec *l,
  620 + uint32_t pos_state)
  621 +{
  622 + uint16_t *probs;
  623 + uint32_t limit;
  624 +
  625 + if (!rc_bit(&s->rc, &l->choice)) {
  626 + probs = l->low[pos_state];
  627 + limit = LEN_LOW_SYMBOLS;
  628 + s->lzma.len = MATCH_LEN_MIN;
  629 + } else {
  630 + if (!rc_bit(&s->rc, &l->choice2)) {
  631 + probs = l->mid[pos_state];
  632 + limit = LEN_MID_SYMBOLS;
  633 + s->lzma.len = MATCH_LEN_MIN + LEN_LOW_SYMBOLS;
  634 + } else {
  635 + probs = l->high;
  636 + limit = LEN_HIGH_SYMBOLS;
  637 + s->lzma.len = MATCH_LEN_MIN + LEN_LOW_SYMBOLS
  638 + + LEN_MID_SYMBOLS;
  639 + }
  640 + }
  641 +
  642 + s->lzma.len += rc_bittree(&s->rc, probs, limit) - limit;
  643 +}
  644 +
  645 +/* Decode a match. The distance will be stored in s->lzma.rep0. */
  646 +static void lzma_match(struct xz_dec_lzma2 *s, uint32_t pos_state)
  647 +{
  648 + uint16_t *probs;
  649 + uint32_t dist_slot;
  650 + uint32_t limit;
  651 +
  652 + lzma_state_match(&s->lzma.state);
  653 +
  654 + s->lzma.rep3 = s->lzma.rep2;
  655 + s->lzma.rep2 = s->lzma.rep1;
  656 + s->lzma.rep1 = s->lzma.rep0;
  657 +
  658 + lzma_len(s, &s->lzma.match_len_dec, pos_state);
  659 +
  660 + probs = s->lzma.dist_slot[lzma_get_dist_state(s->lzma.len)];
  661 + dist_slot = rc_bittree(&s->rc, probs, DIST_SLOTS) - DIST_SLOTS;
  662 +
  663 + if (dist_slot < DIST_MODEL_START) {
  664 + s->lzma.rep0 = dist_slot;
  665 + } else {
  666 + limit = (dist_slot >> 1) - 1;
  667 + s->lzma.rep0 = 2 + (dist_slot & 1);
  668 +
  669 + if (dist_slot < DIST_MODEL_END) {
  670 + s->lzma.rep0 <<= limit;
  671 + probs = s->lzma.dist_special + s->lzma.rep0
  672 + - dist_slot - 1;
  673 + rc_bittree_reverse(&s->rc, probs,
  674 + &s->lzma.rep0, limit);
  675 + } else {
  676 + rc_direct(&s->rc, &s->lzma.rep0, limit - ALIGN_BITS);
  677 + s->lzma.rep0 <<= ALIGN_BITS;
  678 + rc_bittree_reverse(&s->rc, s->lzma.dist_align,
  679 + &s->lzma.rep0, ALIGN_BITS);
  680 + }
  681 + }
  682 +}
  683 +
  684 +/*
  685 + * Decode a repeated match. The distance is one of the four most recently
  686 + * seen matches. The distance will be stored in s->lzma.rep0.
  687 + */
  688 +static void lzma_rep_match(struct xz_dec_lzma2 *s, uint32_t pos_state)
  689 +{
  690 + uint32_t tmp;
  691 +
  692 + if (!rc_bit(&s->rc, &s->lzma.is_rep0[s->lzma.state])) {
  693 + if (!rc_bit(&s->rc, &s->lzma.is_rep0_long[
  694 + s->lzma.state][pos_state])) {
  695 + lzma_state_short_rep(&s->lzma.state);
  696 + s->lzma.len = 1;
  697 + return;
  698 + }
  699 + } else {
  700 + if (!rc_bit(&s->rc, &s->lzma.is_rep1[s->lzma.state])) {
  701 + tmp = s->lzma.rep1;
  702 + } else {
  703 + if (!rc_bit(&s->rc, &s->lzma.is_rep2[s->lzma.state])) {
  704 + tmp = s->lzma.rep2;
  705 + } else {
  706 + tmp = s->lzma.rep3;
  707 + s->lzma.rep3 = s->lzma.rep2;
  708 + }
  709 +
  710 + s->lzma.rep2 = s->lzma.rep1;
  711 + }
  712 +
  713 + s->lzma.rep1 = s->lzma.rep0;
  714 + s->lzma.rep0 = tmp;
  715 + }
  716 +
  717 + lzma_state_long_rep(&s->lzma.state);
  718 + lzma_len(s, &s->lzma.rep_len_dec, pos_state);
  719 +}
  720 +
  721 +/* LZMA decoder core */
  722 +static bool lzma_main(struct xz_dec_lzma2 *s)
  723 +{
  724 + uint32_t pos_state;
  725 +
  726 + /*
  727 + * If the dictionary limit was reached during the previous call, try to
  728 + * finish the possibly pending repeat in the dictionary.
  729 + */
  730 + if (dict_has_space(&s->dict) && s->lzma.len > 0)
  731 + dict_repeat(&s->dict, &s->lzma.len, s->lzma.rep0);
  732 +
  733 + /*
  734 + * Decode more LZMA symbols. One iteration may consume up to
  735 + * LZMA_IN_REQUIRED - 1 bytes.
  736 + */
  737 + while (dict_has_space(&s->dict) && !rc_limit_exceeded(&s->rc)) {
  738 + pos_state = s->dict.pos & s->lzma.pos_mask;
  739 +
  740 + if (!rc_bit(&s->rc, &s->lzma.is_match[
  741 + s->lzma.state][pos_state])) {
  742 + lzma_literal(s);
  743 + } else {
  744 + if (rc_bit(&s->rc, &s->lzma.is_rep[s->lzma.state]))
  745 + lzma_rep_match(s, pos_state);
  746 + else
  747 + lzma_match(s, pos_state);
  748 +
  749 + if (!dict_repeat(&s->dict, &s->lzma.len, s->lzma.rep0))
  750 + return false;
  751 + }
  752 + }
  753 +
  754 + /*
  755 + * Having the range decoder always normalized when we are outside
   756 + * this function makes it easier to correctly handle the end of the chunk.
  757 + */
  758 + rc_normalize(&s->rc);
  759 +
  760 + return true;
  761 +}
  762 +
  763 +/*
   764 + * Reset the LZMA decoder and range decoder state. The dictionary is not reset
   765 + * here, because the LZMA state may be reset without resetting the dictionary.
  766 + */
  767 +static void lzma_reset(struct xz_dec_lzma2 *s)
  768 +{
  769 + uint16_t *probs;
  770 + size_t i;
  771 +
  772 + s->lzma.state = STATE_LIT_LIT;
  773 + s->lzma.rep0 = 0;
  774 + s->lzma.rep1 = 0;
  775 + s->lzma.rep2 = 0;
  776 + s->lzma.rep3 = 0;
  777 +
  778 + /*
  779 + * All probabilities are initialized to the same value. This hack
  780 + * makes the code smaller by avoiding a separate loop for each
  781 + * probability array.
  782 + *
   783 + * This could be optimized so that only the part of the literal
   784 + * probabilities that is actually required would be initialized. In
   785 + * the common case we would write 12 KiB less.
  786 + */
  787 + probs = s->lzma.is_match[0];
  788 + for (i = 0; i < PROBS_TOTAL; ++i)
  789 + probs[i] = RC_BIT_MODEL_TOTAL / 2;
  790 +
  791 + rc_reset(&s->rc);
  792 +}
  793 +
  794 +/*
  795 + * Decode and validate LZMA properties (lc/lp/pb) and calculate the bit masks
  796 + * from the decoded lp and pb values. On success, the LZMA decoder state is
  797 + * reset and true is returned.
  798 + */
  799 +static bool lzma_props(struct xz_dec_lzma2 *s, uint8_t props)
  800 +{
  801 + if (props > (4 * 5 + 4) * 9 + 8)
  802 + return false;
  803 +
  804 + s->lzma.pos_mask = 0;
  805 + while (props >= 9 * 5) {
  806 + props -= 9 * 5;
  807 + ++s->lzma.pos_mask;
  808 + }
  809 +
  810 + s->lzma.pos_mask = (1 << s->lzma.pos_mask) - 1;
  811 +
  812 + s->lzma.literal_pos_mask = 0;
  813 + while (props >= 9) {
  814 + props -= 9;
  815 + ++s->lzma.literal_pos_mask;
  816 + }
  817 +
  818 + s->lzma.lc = props;
  819 +
  820 + if (s->lzma.lc + s->lzma.literal_pos_mask > 4)
  821 + return false;
  822 +
  823 + s->lzma.literal_pos_mask = (1 << s->lzma.literal_pos_mask) - 1;
  824 +
  825 + lzma_reset(s);
  826 +
  827 + return true;
  828 +}
  829 +
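The props byte validated by lzma_props() packs lc, lp, and pb as props = (pb * 5 + lp) * 9 + lc, which is why the decoder peels off multiples of 45 and then 9. A standalone userspace sketch of the same decoding (the helper name `decode_lzma_props` is illustrative, not part of the kernel API):

```c
#include <stdbool.h>
#include <stdint.h>

/* Split an LZMA properties byte into lc/lp/pb, mirroring lzma_props(). */
static bool decode_lzma_props(uint8_t props, uint32_t *lc,
                              uint32_t *lp, uint32_t *pb)
{
	if (props > (4 * 5 + 4) * 9 + 8)	/* i.e. > 224 */
		return false;

	*pb = 0;
	while (props >= 9 * 5) {
		props -= 9 * 5;
		++*pb;
	}

	*lp = 0;
	while (props >= 9) {
		props -= 9;
		++*lp;
	}

	*lc = props;

	/* The kernel decoder additionally requires lc + lp <= 4. */
	return *lc + *lp <= 4;
}
```

For example, the xz default preset (lc=3, lp=0, pb=2) encodes to (2 * 5 + 0) * 9 + 3 = 93.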
  830 +/*********
  831 + * LZMA2 *
  832 + *********/
  833 +
  834 +/*
  835 + * The LZMA decoder assumes that if the input limit (s->rc.in_limit) hasn't
  836 + * been exceeded, it is safe to read up to LZMA_IN_REQUIRED bytes. This
  837 + * wrapper function takes care of making the LZMA decoder's assumption safe.
  838 + *
  839 + * As long as there is plenty of input left to be decoded in the current LZMA
  840 + * chunk, we decode directly from the caller-supplied input buffer until
  841 + * there's LZMA_IN_REQUIRED bytes left. Those remaining bytes are copied into
  842 + * s->temp.buf, which (hopefully) gets filled on the next call to this
  843 + * function. We decode a few bytes from the temporary buffer so that we can
  844 + * continue decoding from the caller-supplied input buffer again.
  845 + */
  846 +static bool lzma2_lzma(struct xz_dec_lzma2 *s, struct xz_buf *b)
  847 +{
  848 + size_t in_avail;
  849 + uint32_t tmp;
  850 +
  851 + in_avail = b->in_size - b->in_pos;
  852 + if (s->temp.size > 0 || s->lzma2.compressed == 0) {
  853 + tmp = 2 * LZMA_IN_REQUIRED - s->temp.size;
  854 + if (tmp > s->lzma2.compressed - s->temp.size)
  855 + tmp = s->lzma2.compressed - s->temp.size;
  856 + if (tmp > in_avail)
  857 + tmp = in_avail;
  858 +
  859 + memcpy(s->temp.buf + s->temp.size, b->in + b->in_pos, tmp);
  860 +
  861 + if (s->temp.size + tmp == s->lzma2.compressed) {
  862 + memzero(s->temp.buf + s->temp.size + tmp,
  863 + sizeof(s->temp.buf)
  864 + - s->temp.size - tmp);
  865 + s->rc.in_limit = s->temp.size + tmp;
  866 + } else if (s->temp.size + tmp < LZMA_IN_REQUIRED) {
  867 + s->temp.size += tmp;
  868 + b->in_pos += tmp;
  869 + return true;
  870 + } else {
  871 + s->rc.in_limit = s->temp.size + tmp - LZMA_IN_REQUIRED;
  872 + }
  873 +
  874 + s->rc.in = s->temp.buf;
  875 + s->rc.in_pos = 0;
  876 +
  877 + if (!lzma_main(s) || s->rc.in_pos > s->temp.size + tmp)
  878 + return false;
  879 +
  880 + s->lzma2.compressed -= s->rc.in_pos;
  881 +
  882 + if (s->rc.in_pos < s->temp.size) {
  883 + s->temp.size -= s->rc.in_pos;
  884 + memmove(s->temp.buf, s->temp.buf + s->rc.in_pos,
  885 + s->temp.size);
  886 + return true;
  887 + }
  888 +
  889 + b->in_pos += s->rc.in_pos - s->temp.size;
  890 + s->temp.size = 0;
  891 + }
  892 +
  893 + in_avail = b->in_size - b->in_pos;
  894 + if (in_avail >= LZMA_IN_REQUIRED) {
  895 + s->rc.in = b->in;
  896 + s->rc.in_pos = b->in_pos;
  897 +
  898 + if (in_avail >= s->lzma2.compressed + LZMA_IN_REQUIRED)
  899 + s->rc.in_limit = b->in_pos + s->lzma2.compressed;
  900 + else
  901 + s->rc.in_limit = b->in_size - LZMA_IN_REQUIRED;
  902 +
  903 + if (!lzma_main(s))
  904 + return false;
  905 +
  906 + in_avail = s->rc.in_pos - b->in_pos;
  907 + if (in_avail > s->lzma2.compressed)
  908 + return false;
  909 +
  910 + s->lzma2.compressed -= in_avail;
  911 + b->in_pos = s->rc.in_pos;
  912 + }
  913 +
  914 + in_avail = b->in_size - b->in_pos;
  915 + if (in_avail < LZMA_IN_REQUIRED) {
  916 + if (in_avail > s->lzma2.compressed)
  917 + in_avail = s->lzma2.compressed;
  918 +
  919 + memcpy(s->temp.buf, b->in + b->in_pos, in_avail);
  920 + s->temp.size = in_avail;
  921 + b->in_pos += in_avail;
  922 + }
  923 +
  924 + return true;
  925 +}
  926 +
  927 +/*
  928 + * Take care of the LZMA2 control layer, and forward the job of actual LZMA
  929 + * decoding or copying of uncompressed chunks to other functions.
  930 + */
  931 +XZ_EXTERN enum xz_ret xz_dec_lzma2_run(struct xz_dec_lzma2 *s,
  932 + struct xz_buf *b)
  933 +{
  934 + uint32_t tmp;
  935 +
  936 + while (b->in_pos < b->in_size || s->lzma2.sequence == SEQ_LZMA_RUN) {
  937 + switch (s->lzma2.sequence) {
  938 + case SEQ_CONTROL:
  939 + /*
  940 + * LZMA2 control byte
  941 + *
  942 + * Exact values:
  943 + * 0x00 End marker
  944 + * 0x01 Dictionary reset followed by
  945 + * an uncompressed chunk
  946 + * 0x02 Uncompressed chunk (no dictionary reset)
  947 + *
  948 + * Highest three bits (s->control & 0xE0):
  949 + * 0xE0 Dictionary reset, new properties and state
  950 + * reset, followed by LZMA compressed chunk
  951 + * 0xC0 New properties and state reset, followed
  952 + * by LZMA compressed chunk (no dictionary
  953 + * reset)
  954 + * 0xA0 State reset using old properties,
  955 + * followed by LZMA compressed chunk (no
  956 + * dictionary reset)
  957 + * 0x80 LZMA chunk (no dictionary or state reset)
  958 + *
  959 + * For LZMA compressed chunks, the lowest five bits
   960 + * (s->control & 0x1F) are the highest bits of the
  961 + * uncompressed size (bits 16-20).
  962 + *
  963 + * A new LZMA2 stream must begin with a dictionary
  964 + * reset. The first LZMA chunk must set new
  965 + * properties and reset the LZMA state.
  966 + *
  967 + * Values that don't match anything described above
  968 + * are invalid and we return XZ_DATA_ERROR.
  969 + */
  970 + tmp = b->in[b->in_pos++];
  971 +
  972 + if (tmp >= 0xE0 || tmp == 0x01) {
  973 + s->lzma2.need_props = true;
  974 + s->lzma2.need_dict_reset = false;
  975 + dict_reset(&s->dict, b);
  976 + } else if (s->lzma2.need_dict_reset) {
  977 + return XZ_DATA_ERROR;
  978 + }
  979 +
  980 + if (tmp >= 0x80) {
  981 + s->lzma2.uncompressed = (tmp & 0x1F) << 16;
  982 + s->lzma2.sequence = SEQ_UNCOMPRESSED_1;
  983 +
  984 + if (tmp >= 0xC0) {
  985 + /*
  986 + * When there are new properties,
  987 + * state reset is done at
  988 + * SEQ_PROPERTIES.
  989 + */
  990 + s->lzma2.need_props = false;
  991 + s->lzma2.next_sequence
  992 + = SEQ_PROPERTIES;
  993 +
  994 + } else if (s->lzma2.need_props) {
  995 + return XZ_DATA_ERROR;
  996 +
  997 + } else {
  998 + s->lzma2.next_sequence
  999 + = SEQ_LZMA_PREPARE;
  1000 + if (tmp >= 0xA0)
  1001 + lzma_reset(s);
  1002 + }
  1003 + } else {
  1004 + if (tmp == 0x00)
  1005 + return XZ_STREAM_END;
  1006 +
  1007 + if (tmp > 0x02)
  1008 + return XZ_DATA_ERROR;
  1009 +
  1010 + s->lzma2.sequence = SEQ_COMPRESSED_0;
  1011 + s->lzma2.next_sequence = SEQ_COPY;
  1012 + }
  1013 +
  1014 + break;
  1015 +
  1016 + case SEQ_UNCOMPRESSED_1:
  1017 + s->lzma2.uncompressed
  1018 + += (uint32_t)b->in[b->in_pos++] << 8;
  1019 + s->lzma2.sequence = SEQ_UNCOMPRESSED_2;
  1020 + break;
  1021 +
  1022 + case SEQ_UNCOMPRESSED_2:
  1023 + s->lzma2.uncompressed
  1024 + += (uint32_t)b->in[b->in_pos++] + 1;
  1025 + s->lzma2.sequence = SEQ_COMPRESSED_0;
  1026 + break;
  1027 +
  1028 + case SEQ_COMPRESSED_0:
  1029 + s->lzma2.compressed
  1030 + = (uint32_t)b->in[b->in_pos++] << 8;
  1031 + s->lzma2.sequence = SEQ_COMPRESSED_1;
  1032 + break;
  1033 +
  1034 + case SEQ_COMPRESSED_1:
  1035 + s->lzma2.compressed
  1036 + += (uint32_t)b->in[b->in_pos++] + 1;
  1037 + s->lzma2.sequence = s->lzma2.next_sequence;
  1038 + break;
  1039 +
  1040 + case SEQ_PROPERTIES:
  1041 + if (!lzma_props(s, b->in[b->in_pos++]))
  1042 + return XZ_DATA_ERROR;
  1043 +
  1044 + s->lzma2.sequence = SEQ_LZMA_PREPARE;
  1045 +
  1046 + case SEQ_LZMA_PREPARE:
  1047 + if (s->lzma2.compressed < RC_INIT_BYTES)
  1048 + return XZ_DATA_ERROR;
  1049 +
  1050 + if (!rc_read_init(&s->rc, b))
  1051 + return XZ_OK;
  1052 +
  1053 + s->lzma2.compressed -= RC_INIT_BYTES;
  1054 + s->lzma2.sequence = SEQ_LZMA_RUN;
  1055 +
  1056 + case SEQ_LZMA_RUN:
  1057 + /*
   1058 + * Set the dictionary limit to indicate how much we want
   1059 + * to be decoded at maximum. Decode new data into the
  1060 + * dictionary. Flush the new data from dictionary to
  1061 + * b->out. Check if we finished decoding this chunk.
  1062 + * In case the dictionary got full but we didn't fill
  1063 + * the output buffer yet, we may run this loop
  1064 + * multiple times without changing s->lzma2.sequence.
  1065 + */
  1066 + dict_limit(&s->dict, min_t(size_t,
  1067 + b->out_size - b->out_pos,
  1068 + s->lzma2.uncompressed));
  1069 + if (!lzma2_lzma(s, b))
  1070 + return XZ_DATA_ERROR;
  1071 +
  1072 + s->lzma2.uncompressed -= dict_flush(&s->dict, b);
  1073 +
  1074 + if (s->lzma2.uncompressed == 0) {
  1075 + if (s->lzma2.compressed > 0 || s->lzma.len > 0
  1076 + || !rc_is_finished(&s->rc))
  1077 + return XZ_DATA_ERROR;
  1078 +
  1079 + rc_reset(&s->rc);
  1080 + s->lzma2.sequence = SEQ_CONTROL;
  1081 +
  1082 + } else if (b->out_pos == b->out_size
  1083 + || (b->in_pos == b->in_size
  1084 + && s->temp.size
  1085 + < s->lzma2.compressed)) {
  1086 + return XZ_OK;
  1087 + }
  1088 +
  1089 + break;
  1090 +
  1091 + case SEQ_COPY:
  1092 + dict_uncompressed(&s->dict, b, &s->lzma2.compressed);
  1093 + if (s->lzma2.compressed > 0)
  1094 + return XZ_OK;
  1095 +
  1096 + s->lzma2.sequence = SEQ_CONTROL;
  1097 + break;
  1098 + }
  1099 + }
  1100 +
  1101 + return XZ_OK;
  1102 +}
  1103 +
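The control-byte table in the SEQ_CONTROL comment above can be captured as a small classifier. This is a userspace sketch of the same value ranges (the enum and function names are illustrative, not kernel identifiers):

```c
#include <stdbool.h>
#include <stdint.h>

enum chunk_kind {
	CHUNK_END,          /* 0x00: end marker */
	CHUNK_UNCOMP_RESET, /* 0x01: dictionary reset + uncompressed chunk */
	CHUNK_UNCOMP,       /* 0x02: uncompressed chunk, no dictionary reset */
	CHUNK_LZMA,         /* 0x80-0xFF: LZMA compressed chunk */
	CHUNK_INVALID       /* 0x03-0x7F: not valid LZMA2 */
};

static enum chunk_kind classify_control(uint8_t c)
{
	if (c == 0x00)
		return CHUNK_END;
	if (c == 0x01)
		return CHUNK_UNCOMP_RESET;
	if (c == 0x02)
		return CHUNK_UNCOMP;
	if (c >= 0x80)
		return CHUNK_LZMA;
	return CHUNK_INVALID;
}

/* For LZMA chunks, bits 0x1F hold bits 16-20 of the uncompressed size. */
static uint32_t lzma_chunk_size_high(uint8_t c)
{
	return (uint32_t)(c & 0x1F) << 16;
}
```

Note how 0xE0-0xFF, 0xC0-0xDF, 0xA0-0xBF, and 0x80-0x9F all classify as LZMA chunks; the reset behavior is carried in the top three bits as described above.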
  1104 +XZ_EXTERN struct xz_dec_lzma2 *xz_dec_lzma2_create(enum xz_mode mode,
  1105 + uint32_t dict_max)
  1106 +{
  1107 + struct xz_dec_lzma2 *s = kmalloc(sizeof(*s), GFP_KERNEL);
  1108 + if (s == NULL)
  1109 + return NULL;
  1110 +
  1111 + s->dict.mode = mode;
  1112 + s->dict.size_max = dict_max;
  1113 +
  1114 + if (DEC_IS_PREALLOC(mode)) {
  1115 + s->dict.buf = vmalloc(dict_max);
  1116 + if (s->dict.buf == NULL) {
  1117 + kfree(s);
  1118 + return NULL;
  1119 + }
  1120 + } else if (DEC_IS_DYNALLOC(mode)) {
  1121 + s->dict.buf = NULL;
  1122 + s->dict.allocated = 0;
  1123 + }
  1124 +
  1125 + return s;
  1126 +}
  1127 +
  1128 +XZ_EXTERN enum xz_ret xz_dec_lzma2_reset(struct xz_dec_lzma2 *s, uint8_t props)
  1129 +{
  1130 + /* This limits dictionary size to 3 GiB to keep parsing simpler. */
  1131 + if (props > 39)
  1132 + return XZ_OPTIONS_ERROR;
  1133 +
  1134 + s->dict.size = 2 + (props & 1);
  1135 + s->dict.size <<= (props >> 1) + 11;
  1136 +
  1137 + if (DEC_IS_MULTI(s->dict.mode)) {
  1138 + if (s->dict.size > s->dict.size_max)
  1139 + return XZ_MEMLIMIT_ERROR;
  1140 +
  1141 + s->dict.end = s->dict.size;
  1142 +
  1143 + if (DEC_IS_DYNALLOC(s->dict.mode)) {
  1144 + if (s->dict.allocated < s->dict.size) {
  1145 + vfree(s->dict.buf);
  1146 + s->dict.buf = vmalloc(s->dict.size);
  1147 + if (s->dict.buf == NULL) {
  1148 + s->dict.allocated = 0;
  1149 + return XZ_MEM_ERROR;
  1150 + }
  1151 + }
  1152 + }
  1153 + }
  1154 +
  1155 + s->lzma.len = 0;
  1156 +
  1157 + s->lzma2.sequence = SEQ_CONTROL;
  1158 + s->lzma2.need_dict_reset = true;
  1159 +
  1160 + s->temp.size = 0;
  1161 +
  1162 + return XZ_OK;
  1163 +}
  1164 +
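The two lines computing s->dict.size in xz_dec_lzma2_reset() decode the LZMA2 dictionary-size properties byte: sizes alternate between powers of two and three times a power of two, starting at 4 KiB. A standalone sketch of the same arithmetic (widened to 64 bits so the 3 GiB maximum does not overflow; the function name is illustrative):

```c
#include <stdint.h>

/* Dictionary size implied by an LZMA2 properties byte (props <= 39). */
static uint64_t lzma2_dict_size(uint8_t props)
{
	uint64_t size = 2 + (props & 1);	/* 2 or 3 */

	return size << ((props >> 1) + 11);	/* scale from 4 KiB up */
}
```

props = 0 gives 4 KiB, props = 1 gives 6 KiB, and the maximum accepted value, props = 39, gives 3 GiB, matching the comment in the code above.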
  1165 +XZ_EXTERN void xz_dec_lzma2_end(struct xz_dec_lzma2 *s)
  1166 +{
  1167 + if (DEC_IS_MULTI(s->dict.mode))
  1168 + vfree(s->dict.buf);
  1169 +
  1170 + kfree(s);
  1171 +}
lib/xz/xz_dec_stream.c
  1 +/*
  2 + * .xz Stream decoder
  3 + *
  4 + * Author: Lasse Collin <lasse.collin@tukaani.org>
  5 + *
  6 + * This file has been put into the public domain.
  7 + * You can do whatever you want with this file.
  8 + */
  9 +
  10 +#include "xz_private.h"
  11 +#include "xz_stream.h"
  12 +
  13 +/* Hash used to validate the Index field */
  14 +struct xz_dec_hash {
  15 + vli_type unpadded;
  16 + vli_type uncompressed;
  17 + uint32_t crc32;
  18 +};
  19 +
  20 +struct xz_dec {
  21 + /* Position in dec_main() */
  22 + enum {
  23 + SEQ_STREAM_HEADER,
  24 + SEQ_BLOCK_START,
  25 + SEQ_BLOCK_HEADER,
  26 + SEQ_BLOCK_UNCOMPRESS,
  27 + SEQ_BLOCK_PADDING,
  28 + SEQ_BLOCK_CHECK,
  29 + SEQ_INDEX,
  30 + SEQ_INDEX_PADDING,
  31 + SEQ_INDEX_CRC32,
  32 + SEQ_STREAM_FOOTER
  33 + } sequence;
  34 +
  35 + /* Position in variable-length integers and Check fields */
  36 + uint32_t pos;
  37 +
  38 + /* Variable-length integer decoded by dec_vli() */
  39 + vli_type vli;
  40 +
  41 + /* Saved in_pos and out_pos */
  42 + size_t in_start;
  43 + size_t out_start;
  44 +
  45 + /* CRC32 value in Block or Index */
  46 + uint32_t crc32;
  47 +
  48 + /* Type of the integrity check calculated from uncompressed data */
  49 + enum xz_check check_type;
  50 +
  51 + /* Operation mode */
  52 + enum xz_mode mode;
  53 +
  54 + /*
  55 + * True if the next call to xz_dec_run() is allowed to return
  56 + * XZ_BUF_ERROR.
  57 + */
  58 + bool allow_buf_error;
  59 +
  60 + /* Information stored in Block Header */
  61 + struct {
  62 + /*
  63 + * Value stored in the Compressed Size field, or
  64 + * VLI_UNKNOWN if Compressed Size is not present.
  65 + */
  66 + vli_type compressed;
  67 +
  68 + /*
  69 + * Value stored in the Uncompressed Size field, or
  70 + * VLI_UNKNOWN if Uncompressed Size is not present.
  71 + */
  72 + vli_type uncompressed;
  73 +
  74 + /* Size of the Block Header field */
  75 + uint32_t size;
  76 + } block_header;
  77 +
  78 + /* Information collected when decoding Blocks */
  79 + struct {
  80 + /* Observed compressed size of the current Block */
  81 + vli_type compressed;
  82 +
  83 + /* Observed uncompressed size of the current Block */
  84 + vli_type uncompressed;
  85 +
  86 + /* Number of Blocks decoded so far */
  87 + vli_type count;
  88 +
  89 + /*
  90 + * Hash calculated from the Block sizes. This is used to
  91 + * validate the Index field.
  92 + */
  93 + struct xz_dec_hash hash;
  94 + } block;
  95 +
  96 + /* Variables needed when verifying the Index field */
  97 + struct {
  98 + /* Position in dec_index() */
  99 + enum {
  100 + SEQ_INDEX_COUNT,
  101 + SEQ_INDEX_UNPADDED,
  102 + SEQ_INDEX_UNCOMPRESSED
  103 + } sequence;
  104 +
  105 + /* Size of the Index in bytes */
  106 + vli_type size;
  107 +
  108 + /* Number of Records (matches block.count in valid files) */
  109 + vli_type count;
  110 +
  111 + /*
  112 + * Hash calculated from the Records (matches block.hash in
  113 + * valid files).
  114 + */
  115 + struct xz_dec_hash hash;
  116 + } index;
  117 +
  118 + /*
  119 + * Temporary buffer needed to hold Stream Header, Block Header,
  120 + * and Stream Footer. The Block Header is the biggest (1 KiB)
  121 + * so we reserve space according to that. buf[] has to be aligned
  122 + * to a multiple of four bytes; the size_t variables before it
  123 + * should guarantee this.
  124 + */
  125 + struct {
  126 + size_t pos;
  127 + size_t size;
  128 + uint8_t buf[1024];
  129 + } temp;
  130 +
  131 + struct xz_dec_lzma2 *lzma2;
  132 +
  133 +#ifdef XZ_DEC_BCJ
  134 + struct xz_dec_bcj *bcj;
  135 + bool bcj_active;
  136 +#endif
  137 +};
  138 +
  139 +#ifdef XZ_DEC_ANY_CHECK
  140 +/* Sizes of the Check field with different Check IDs */
  141 +static const uint8_t check_sizes[16] = {
  142 + 0,
  143 + 4, 4, 4,
  144 + 8, 8, 8,
  145 + 16, 16, 16,
  146 + 32, 32, 32,
  147 + 64, 64, 64
  148 +};
  149 +#endif
  150 +
  151 +/*
  152 + * Fill s->temp by copying data starting from b->in[b->in_pos]. Caller
   153 + * must have set s->temp.size to indicate how much data we are supposed
  154 + * to copy into s->temp.buf. Return true once s->temp.pos has reached
  155 + * s->temp.size.
  156 + */
  157 +static bool fill_temp(struct xz_dec *s, struct xz_buf *b)
  158 +{
  159 + size_t copy_size = min_t(size_t,
  160 + b->in_size - b->in_pos, s->temp.size - s->temp.pos);
  161 +
  162 + memcpy(s->temp.buf + s->temp.pos, b->in + b->in_pos, copy_size);
  163 + b->in_pos += copy_size;
  164 + s->temp.pos += copy_size;
  165 +
  166 + if (s->temp.pos == s->temp.size) {
  167 + s->temp.pos = 0;
  168 + return true;
  169 + }
  170 +
  171 + return false;
  172 +}
  173 +
  174 +/* Decode a variable-length integer (little-endian base-128 encoding) */
  175 +static enum xz_ret dec_vli(struct xz_dec *s, const uint8_t *in,
  176 + size_t *in_pos, size_t in_size)
  177 +{
  178 + uint8_t byte;
  179 +
  180 + if (s->pos == 0)
  181 + s->vli = 0;
  182 +
  183 + while (*in_pos < in_size) {
  184 + byte = in[*in_pos];
  185 + ++*in_pos;
  186 +
  187 + s->vli |= (vli_type)(byte & 0x7F) << s->pos;
  188 +
  189 + if ((byte & 0x80) == 0) {
  190 + /* Don't allow non-minimal encodings. */
  191 + if (byte == 0 && s->pos != 0)
  192 + return XZ_DATA_ERROR;
  193 +
  194 + s->pos = 0;
  195 + return XZ_STREAM_END;
  196 + }
  197 +
  198 + s->pos += 7;
  199 + if (s->pos == 7 * VLI_BYTES_MAX)
  200 + return XZ_DATA_ERROR;
  201 + }
  202 +
  203 + return XZ_OK;
  204 +}
  205 +
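dec_vli() above is resumable across calls (state lives in s->pos and s->vli). Stripped of that resumability, the little-endian base-128 encoding can be decoded from a complete buffer in one pass, as in this userspace sketch (the helper name and 9-byte limit mirror VLI_BYTES_MAX; the function itself is illustrative, not a kernel API):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Decode one variable-length integer from a fully buffered input. */
static bool vli_decode(const uint8_t *in, size_t in_size, uint64_t *out)
{
	uint64_t value = 0;
	size_t pos;

	for (pos = 0; pos < in_size && pos < 9; ++pos) {
		uint8_t byte = in[pos];

		/* Low seven bits are payload, stored least-significant first. */
		value |= (uint64_t)(byte & 0x7F) << (pos * 7);

		if ((byte & 0x80) == 0) {
			/* Reject non-minimal encodings such as 0x80 0x00. */
			if (byte == 0 && pos != 0)
				return false;

			*out = value;
			return true;
		}
	}

	return false;	/* truncated, or longer than 9 bytes */
}
```

For example, the byte 0x7F decodes to 127, while 0x80 0x01 decodes to 128: the high bit of each byte marks continuation, as in the loop above.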
  206 +/*
  207 + * Decode the Compressed Data field from a Block. Update and validate
  208 + * the observed compressed and uncompressed sizes of the Block so that
  209 + * they don't exceed the values possibly stored in the Block Header
  210 + * (validation assumes that no integer overflow occurs, since vli_type
  211 + * is normally uint64_t). Update the CRC32 if presence of the CRC32
  212 + * field was indicated in Stream Header.
  213 + *
  214 + * Once the decoding is finished, validate that the observed sizes match
  215 + * the sizes possibly stored in the Block Header. Update the hash and
  216 + * Block count, which are later used to validate the Index field.
  217 + */
  218 +static enum xz_ret dec_block(struct xz_dec *s, struct xz_buf *b)
  219 +{
  220 + enum xz_ret ret;
  221 +
  222 + s->in_start = b->in_pos;
  223 + s->out_start = b->out_pos;
  224 +
  225 +#ifdef XZ_DEC_BCJ
  226 + if (s->bcj_active)
  227 + ret = xz_dec_bcj_run(s->bcj, s->lzma2, b);
  228 + else
  229 +#endif
  230 + ret = xz_dec_lzma2_run(s->lzma2, b);
  231 +
  232 + s->block.compressed += b->in_pos - s->in_start;
  233 + s->block.uncompressed += b->out_pos - s->out_start;
  234 +
  235 + /*
  236 + * There is no need to separately check for VLI_UNKNOWN, since
  237 + * the observed sizes are always smaller than VLI_UNKNOWN.
  238 + */
  239 + if (s->block.compressed > s->block_header.compressed
  240 + || s->block.uncompressed
  241 + > s->block_header.uncompressed)
  242 + return XZ_DATA_ERROR;
  243 +
  244 + if (s->check_type == XZ_CHECK_CRC32)
  245 + s->crc32 = xz_crc32(b->out + s->out_start,
  246 + b->out_pos - s->out_start, s->crc32);
  247 +
  248 + if (ret == XZ_STREAM_END) {
  249 + if (s->block_header.compressed != VLI_UNKNOWN
  250 + && s->block_header.compressed
  251 + != s->block.compressed)
  252 + return XZ_DATA_ERROR;
  253 +
  254 + if (s->block_header.uncompressed != VLI_UNKNOWN
  255 + && s->block_header.uncompressed
  256 + != s->block.uncompressed)
  257 + return XZ_DATA_ERROR;
  258 +
  259 + s->block.hash.unpadded += s->block_header.size
  260 + + s->block.compressed;
  261 +
  262 +#ifdef XZ_DEC_ANY_CHECK
  263 + s->block.hash.unpadded += check_sizes[s->check_type];
  264 +#else
  265 + if (s->check_type == XZ_CHECK_CRC32)
  266 + s->block.hash.unpadded += 4;
  267 +#endif
  268 +
  269 + s->block.hash.uncompressed += s->block.uncompressed;
  270 + s->block.hash.crc32 = xz_crc32(
  271 + (const uint8_t *)&s->block.hash,
  272 + sizeof(s->block.hash), s->block.hash.crc32);
  273 +
  274 + ++s->block.count;
  275 + }
  276 +
  277 + return ret;
  278 +}
  279 +
  280 +/* Update the Index size and the CRC32 value. */
  281 +static void index_update(struct xz_dec *s, const struct xz_buf *b)
  282 +{
  283 + size_t in_used = b->in_pos - s->in_start;
  284 + s->index.size += in_used;
  285 + s->crc32 = xz_crc32(b->in + s->in_start, in_used, s->crc32);
  286 +}
  287 +
  288 +/*
  289 + * Decode the Number of Records, Unpadded Size, and Uncompressed Size
  290 + * fields from the Index field. That is, Index Padding and CRC32 are not
  291 + * decoded by this function.
  292 + *
  293 + * This can return XZ_OK (more input needed), XZ_STREAM_END (everything
  294 + * successfully decoded), or XZ_DATA_ERROR (input is corrupt).
  295 + */
  296 +static enum xz_ret dec_index(struct xz_dec *s, struct xz_buf *b)
  297 +{
  298 + enum xz_ret ret;
  299 +
  300 + do {
  301 + ret = dec_vli(s, b->in, &b->in_pos, b->in_size);
  302 + if (ret != XZ_STREAM_END) {
  303 + index_update(s, b);
  304 + return ret;
  305 + }
  306 +
  307 + switch (s->index.sequence) {
  308 + case SEQ_INDEX_COUNT:
  309 + s->index.count = s->vli;
  310 +
  311 + /*
  312 + * Validate that the Number of Records field
  313 + * indicates the same number of Records as
  314 + * there were Blocks in the Stream.
  315 + */
  316 + if (s->index.count != s->block.count)
  317 + return XZ_DATA_ERROR;
  318 +
  319 + s->index.sequence = SEQ_INDEX_UNPADDED;
  320 + break;
  321 +
  322 + case SEQ_INDEX_UNPADDED:
  323 + s->index.hash.unpadded += s->vli;
  324 + s->index.sequence = SEQ_INDEX_UNCOMPRESSED;
  325 + break;
  326 +
  327 + case SEQ_INDEX_UNCOMPRESSED:
  328 + s->index.hash.uncompressed += s->vli;
  329 + s->index.hash.crc32 = xz_crc32(
  330 + (const uint8_t *)&s->index.hash,
  331 + sizeof(s->index.hash),
  332 + s->index.hash.crc32);
  333 + --s->index.count;
  334 + s->index.sequence = SEQ_INDEX_UNPADDED;
  335 + break;
  336 + }
  337 + } while (s->index.count > 0);
  338 +
  339 + return XZ_STREAM_END;
  340 +}
  341 +
  342 +/*
  343 + * Validate that the next four input bytes match the value of s->crc32.
  344 + * s->pos must be zero when starting to validate the first byte.
  345 + */
  346 +static enum xz_ret crc32_validate(struct xz_dec *s, struct xz_buf *b)
  347 +{
  348 + do {
  349 + if (b->in_pos == b->in_size)
  350 + return XZ_OK;
  351 +
  352 + if (((s->crc32 >> s->pos) & 0xFF) != b->in[b->in_pos++])
  353 + return XZ_DATA_ERROR;
  354 +
  355 + s->pos += 8;
  356 +
  357 + } while (s->pos < 32);
  358 +
  359 + s->crc32 = 0;
  360 + s->pos = 0;
  361 +
  362 + return XZ_STREAM_END;
  363 +}
  364 +
  365 +#ifdef XZ_DEC_ANY_CHECK
  366 +/*
  367 + * Skip over the Check field when the Check ID is not supported.
  368 + * Returns true once the whole Check field has been skipped over.
  369 + */
  370 +static bool check_skip(struct xz_dec *s, struct xz_buf *b)
  371 +{
  372 + while (s->pos < check_sizes[s->check_type]) {
  373 + if (b->in_pos == b->in_size)
  374 + return false;
  375 +
  376 + ++b->in_pos;
  377 + ++s->pos;
  378 + }
  379 +
  380 + s->pos = 0;
  381 +
  382 + return true;
  383 +}
  384 +#endif
  385 +
  386 +/* Decode the Stream Header field (the first 12 bytes of the .xz Stream). */
  387 +static enum xz_ret dec_stream_header(struct xz_dec *s)
  388 +{
  389 + if (!memeq(s->temp.buf, HEADER_MAGIC, HEADER_MAGIC_SIZE))
  390 + return XZ_FORMAT_ERROR;
  391 +
  392 + if (xz_crc32(s->temp.buf + HEADER_MAGIC_SIZE, 2, 0)
  393 + != get_le32(s->temp.buf + HEADER_MAGIC_SIZE + 2))
  394 + return XZ_DATA_ERROR;
  395 +
  396 + if (s->temp.buf[HEADER_MAGIC_SIZE] != 0)
  397 + return XZ_OPTIONS_ERROR;
  398 +
  399 + /*
  400 + * Of integrity checks, we support only none (Check ID = 0) and
  401 + * CRC32 (Check ID = 1). However, if XZ_DEC_ANY_CHECK is defined,
  402 + * we will accept other check types too, but then the check won't
  403 + * be verified and a warning (XZ_UNSUPPORTED_CHECK) will be given.
  404 + */
  405 + s->check_type = s->temp.buf[HEADER_MAGIC_SIZE + 1];
  406 +
  407 +#ifdef XZ_DEC_ANY_CHECK
  408 + if (s->check_type > XZ_CHECK_MAX)
  409 + return XZ_OPTIONS_ERROR;
  410 +
  411 + if (s->check_type > XZ_CHECK_CRC32)
  412 + return XZ_UNSUPPORTED_CHECK;
  413 +#else
  414 + if (s->check_type > XZ_CHECK_CRC32)
  415 + return XZ_OPTIONS_ERROR;
  416 +#endif
  417 +
  418 + return XZ_OK;
  419 +}
  420 +
  421 +/* Decode the Stream Footer field (the last 12 bytes of the .xz Stream) */
  422 +static enum xz_ret dec_stream_footer(struct xz_dec *s)
  423 +{
  424 + if (!memeq(s->temp.buf + 10, FOOTER_MAGIC, FOOTER_MAGIC_SIZE))
  425 + return XZ_DATA_ERROR;
  426 +
  427 + if (xz_crc32(s->temp.buf + 4, 6, 0) != get_le32(s->temp.buf))
  428 + return XZ_DATA_ERROR;
  429 +
  430 + /*
  431 + * Validate Backward Size. Note that we never added the size of the
  432 + * Index CRC32 field to s->index.size, thus we use s->index.size / 4
  433 + * instead of s->index.size / 4 - 1.
  434 + */
  435 + if ((s->index.size >> 2) != get_le32(s->temp.buf + 4))
  436 + return XZ_DATA_ERROR;
  437 +
  438 + if (s->temp.buf[8] != 0 || s->temp.buf[9] != s->check_type)
  439 + return XZ_DATA_ERROR;
  440 +
  441 + /*
  442 + * Use XZ_STREAM_END instead of XZ_OK to be more convenient
  443 + * for the caller.
  444 + */
  445 + return XZ_STREAM_END;
  446 +}
  447 +
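The Backward Size check in dec_stream_footer() relies on a small identity: the stored field is (real Index size) / 4 - 1, and the real Index size is s->index.size plus the 4-byte CRC32. Since the Index is padded to a multiple of four, (n + 4) / 4 - 1 equals n / 4, which is why the code can compare against s->index.size >> 2 directly. A sketch of the identity (the function name is illustrative):

```c
#include <stdint.h>

/*
 * Value the Stream Footer stores for Backward Size, given the Index size
 * measured without its CRC32 field (a multiple of four in valid streams).
 */
static uint32_t backward_size_field(uint64_t index_size_without_crc32)
{
	return (uint32_t)((index_size_without_crc32 + 4) / 4 - 1);
}
```

For an 8-byte Index (excluding CRC32), the stored field is (8 + 4) / 4 - 1 = 2, exactly 8 >> 2.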
  448 +/* Decode the Block Header and initialize the filter chain. */
  449 +static enum xz_ret dec_block_header(struct xz_dec *s)
  450 +{
  451 + enum xz_ret ret;
  452 +
  453 + /*
  454 + * Validate the CRC32. We know that the temp buffer is at least
  455 + * eight bytes so this is safe.
  456 + */
  457 + s->temp.size -= 4;
  458 + if (xz_crc32(s->temp.buf, s->temp.size, 0)
  459 + != get_le32(s->temp.buf + s->temp.size))
  460 + return XZ_DATA_ERROR;
  461 +
  462 + s->temp.pos = 2;
  463 +
  464 + /*
  465 + * Catch unsupported Block Flags. We support only one or two filters
  466 + * in the chain, so we catch that with the same test.
  467 + */
  468 +#ifdef XZ_DEC_BCJ
  469 + if (s->temp.buf[1] & 0x3E)
  470 +#else
  471 + if (s->temp.buf[1] & 0x3F)
  472 +#endif
  473 + return XZ_OPTIONS_ERROR;
  474 +
  475 + /* Compressed Size */
  476 + if (s->temp.buf[1] & 0x40) {
  477 + if (dec_vli(s, s->temp.buf, &s->temp.pos, s->temp.size)
  478 + != XZ_STREAM_END)
  479 + return XZ_DATA_ERROR;
  480 +
  481 + s->block_header.compressed = s->vli;
  482 + } else {
  483 + s->block_header.compressed = VLI_UNKNOWN;
  484 + }
  485 +
  486 + /* Uncompressed Size */
  487 + if (s->temp.buf[1] & 0x80) {
  488 + if (dec_vli(s, s->temp.buf, &s->temp.pos, s->temp.size)
  489 + != XZ_STREAM_END)
  490 + return XZ_DATA_ERROR;
  491 +
  492 + s->block_header.uncompressed = s->vli;
  493 + } else {
  494 + s->block_header.uncompressed = VLI_UNKNOWN;
  495 + }
  496 +
  497 +#ifdef XZ_DEC_BCJ
  498 + /* If there are two filters, the first one must be a BCJ filter. */
  499 + s->bcj_active = s->temp.buf[1] & 0x01;
  500 + if (s->bcj_active) {
  501 + if (s->temp.size - s->temp.pos < 2)
  502 + return XZ_OPTIONS_ERROR;
  503 +
  504 + ret = xz_dec_bcj_reset(s->bcj, s->temp.buf[s->temp.pos++]);
  505 + if (ret != XZ_OK)
  506 + return ret;
  507 +
  508 + /*
   509 + * We don't support a custom start offset,
  510 + * so Size of Properties must be zero.
  511 + */
  512 + if (s->temp.buf[s->temp.pos++] != 0x00)
  513 + return XZ_OPTIONS_ERROR;
  514 + }
  515 +#endif
  516 +
  517 + /* Valid Filter Flags always take at least two bytes. */
  518 + if (s->temp.size - s->temp.pos < 2)
  519 + return XZ_DATA_ERROR;
  520 +
  521 + /* Filter ID = LZMA2 */
  522 + if (s->temp.buf[s->temp.pos++] != 0x21)
  523 + return XZ_OPTIONS_ERROR;
  524 +
  525 + /* Size of Properties = 1-byte Filter Properties */
  526 + if (s->temp.buf[s->temp.pos++] != 0x01)
  527 + return XZ_OPTIONS_ERROR;
  528 +
  529 + /* Filter Properties contains LZMA2 dictionary size. */
  530 + if (s->temp.size - s->temp.pos < 1)
  531 + return XZ_DATA_ERROR;
  532 +
  533 + ret = xz_dec_lzma2_reset(s->lzma2, s->temp.buf[s->temp.pos++]);
  534 + if (ret != XZ_OK)
  535 + return ret;
  536 +
  537 + /* The rest must be Header Padding. */
  538 + while (s->temp.pos < s->temp.size)
  539 + if (s->temp.buf[s->temp.pos++] != 0x00)
  540 + return XZ_OPTIONS_ERROR;
  541 +
  542 + s->temp.pos = 0;
  543 + s->block.compressed = 0;
  544 + s->block.uncompressed = 0;
  545 +
  546 + return XZ_OK;
  547 +}
  548 +
  549 +static enum xz_ret dec_main(struct xz_dec *s, struct xz_buf *b)
  550 +{
  551 + enum xz_ret ret;
  552 +
  553 + /*
  554 + * Store the start position for the case when we are in the middle
  555 + * of the Index field.
  556 + */
  557 + s->in_start = b->in_pos;
  558 +
  559 + while (true) {
  560 + switch (s->sequence) {
  561 + case SEQ_STREAM_HEADER:
  562 + /*
  563 + * Stream Header is copied to s->temp, and then
  564 + * decoded from there. This way if the caller
   565 + * gives us only a little input at a time, we can
   566 + * still keep the Stream Header decoding code
   567 + * simple. A similar approach is used in many places
  568 + * in this file.
  569 + */
  570 + if (!fill_temp(s, b))
  571 + return XZ_OK;
  572 +
  573 + /*
  574 + * If dec_stream_header() returns
  575 + * XZ_UNSUPPORTED_CHECK, it is still possible
  576 + * to continue decoding if working in multi-call
  577 + * mode. Thus, update s->sequence before calling
  578 + * dec_stream_header().
  579 + */
  580 + s->sequence = SEQ_BLOCK_START;
  581 +
  582 + ret = dec_stream_header(s);
  583 + if (ret != XZ_OK)
  584 + return ret;
  585 +
  586 + case SEQ_BLOCK_START:
  587 + /* We need one byte of input to continue. */
  588 + if (b->in_pos == b->in_size)
  589 + return XZ_OK;
  590 +
  591 + /* See if this is the beginning of the Index field. */
  592 + if (b->in[b->in_pos] == 0) {
  593 + s->in_start = b->in_pos++;
  594 + s->sequence = SEQ_INDEX;
  595 + break;
  596 + }
  597 +
  598 + /*
  599 + * Calculate the size of the Block Header and
  600 + * prepare to decode it.
  601 + */
  602 + s->block_header.size
  603 + = ((uint32_t)b->in[b->in_pos] + 1) * 4;
  604 +
  605 + s->temp.size = s->block_header.size;
  606 + s->temp.pos = 0;
  607 + s->sequence = SEQ_BLOCK_HEADER;
  608 +
  609 + case SEQ_BLOCK_HEADER:
  610 + if (!fill_temp(s, b))
  611 + return XZ_OK;
  612 +
  613 + ret = dec_block_header(s);
  614 + if (ret != XZ_OK)
  615 + return ret;
  616 +
  617 + s->sequence = SEQ_BLOCK_UNCOMPRESS;
  618 +
  619 + case SEQ_BLOCK_UNCOMPRESS:
  620 + ret = dec_block(s, b);
  621 + if (ret != XZ_STREAM_END)
  622 + return ret;
  623 +
  624 + s->sequence = SEQ_BLOCK_PADDING;
  625 +
  626 + case SEQ_BLOCK_PADDING:
  627 + /*
  628 + * Size of Compressed Data + Block Padding
  629 + * must be a multiple of four. We don't need
  630 + * s->block.compressed for anything else
  631 + * anymore, so we use it here to test the size
  632 + * of the Block Padding field.
  633 + */
  634 + while (s->block.compressed & 3) {
  635 + if (b->in_pos == b->in_size)
  636 + return XZ_OK;
  637 +
  638 + if (b->in[b->in_pos++] != 0)
  639 + return XZ_DATA_ERROR;
  640 +
  641 + ++s->block.compressed;
  642 + }
  643 +
  644 + s->sequence = SEQ_BLOCK_CHECK;
  645 +
  646 + case SEQ_BLOCK_CHECK:
  647 + if (s->check_type == XZ_CHECK_CRC32) {
  648 + ret = crc32_validate(s, b);
  649 + if (ret != XZ_STREAM_END)
  650 + return ret;
  651 + }
  652 +#ifdef XZ_DEC_ANY_CHECK
  653 + else if (!check_skip(s, b)) {
  654 + return XZ_OK;
  655 + }
  656 +#endif
  657 +
  658 + s->sequence = SEQ_BLOCK_START;
  659 + break;
  660 +
  661 + case SEQ_INDEX:
  662 + ret = dec_index(s, b);
  663 + if (ret != XZ_STREAM_END)
  664 + return ret;
  665 +
  666 + s->sequence = SEQ_INDEX_PADDING;
  667 +
  668 + case SEQ_INDEX_PADDING:
  669 + while ((s->index.size + (b->in_pos - s->in_start))
  670 + & 3) {
  671 + if (b->in_pos == b->in_size) {
  672 + index_update(s, b);
  673 + return XZ_OK;
  674 + }
  675 +
  676 + if (b->in[b->in_pos++] != 0)
  677 + return XZ_DATA_ERROR;
  678 + }
  679 +
  680 + /* Finish the CRC32 value and Index size. */
  681 + index_update(s, b);
  682 +
  683 + /* Compare the hashes to validate the Index field. */
  684 + if (!memeq(&s->block.hash, &s->index.hash,
  685 + sizeof(s->block.hash)))
  686 + return XZ_DATA_ERROR;
  687 +
  688 + s->sequence = SEQ_INDEX_CRC32;
  689 +
  690 + case SEQ_INDEX_CRC32:
  691 + ret = crc32_validate(s, b);
  692 + if (ret != XZ_STREAM_END)
  693 + return ret;
  694 +
  695 + s->temp.size = STREAM_HEADER_SIZE;
  696 + s->sequence = SEQ_STREAM_FOOTER;
  697 +
  698 + case SEQ_STREAM_FOOTER:
  699 + if (!fill_temp(s, b))
  700 + return XZ_OK;
  701 +
  702 + return dec_stream_footer(s);
  703 + }
  704 + }
  705 +
  706 + /* Never reached */
  707 +}
  708 +
  709 +/*
  710 + * xz_dec_run() is a wrapper for dec_main() to handle some special cases in
  711 + * multi-call and single-call decoding.
  712 + *
  713 + * In multi-call mode, we must return XZ_BUF_ERROR when it seems clear that we
  714 + * are not going to make any progress anymore. This is to prevent the caller
  715 + * from calling us infinitely when the input file is truncated or otherwise
  716 + * corrupt. Since zlib-style API allows that the caller fills the input buffer
  717 + * only when the decoder doesn't produce any new output, we have to be careful
  718 + * to avoid returning XZ_BUF_ERROR too easily: XZ_BUF_ERROR is returned only
  719 + * after the second consecutive call to xz_dec_run() that makes no progress.
  720 + *
  721 + * In single-call mode, if we couldn't decode everything and no error
  722 + * occurred, either the input is truncated or the output buffer is too small.
  723 + * Since we know that the last input byte never produces any output, we know
  724 + * that if all the input was consumed and decoding wasn't finished, the file
  725 + * must be corrupt. Otherwise the output buffer has to be too small or the
  726 + * file is corrupt in a way that decoding it produces output that is too big.
  727 + *
  728 + * If single-call decoding fails, we reset b->in_pos and b->out_pos back to
  729 + * their original values. This is because with some filter chains there won't
  730 + * be any valid uncompressed data in the output buffer unless the decoding
  731 + * actually succeeds (that's the price to pay for using the output buffer as
  732 + * the workspace).
  733 + */
  734 +XZ_EXTERN enum xz_ret xz_dec_run(struct xz_dec *s, struct xz_buf *b)
  735 +{
  736 + size_t in_start;
  737 + size_t out_start;
  738 + enum xz_ret ret;
  739 +
  740 + if (DEC_IS_SINGLE(s->mode))
  741 + xz_dec_reset(s);
  742 +
  743 + in_start = b->in_pos;
  744 + out_start = b->out_pos;
  745 + ret = dec_main(s, b);
  746 +
  747 + if (DEC_IS_SINGLE(s->mode)) {
  748 + if (ret == XZ_OK)
  749 + ret = b->in_pos == b->in_size
  750 + ? XZ_DATA_ERROR : XZ_BUF_ERROR;
  751 +
  752 + if (ret != XZ_STREAM_END) {
  753 + b->in_pos = in_start;
  754 + b->out_pos = out_start;
  755 + }
  756 +
  757 + } else if (ret == XZ_OK && in_start == b->in_pos
  758 + && out_start == b->out_pos) {
  759 + if (s->allow_buf_error)
  760 + ret = XZ_BUF_ERROR;
  761 +
  762 + s->allow_buf_error = true;
  763 + } else {
  764 + s->allow_buf_error = false;
  765 + }
  766 +
  767 + return ret;
  768 +}
  769 +
  770 +XZ_EXTERN struct xz_dec *xz_dec_init(enum xz_mode mode, uint32_t dict_max)
  771 +{
  772 + struct xz_dec *s = kmalloc(sizeof(*s), GFP_KERNEL);
  773 + if (s == NULL)
  774 + return NULL;
  775 +
  776 + s->mode = mode;
  777 +
  778 +#ifdef XZ_DEC_BCJ
  779 + s->bcj = xz_dec_bcj_create(DEC_IS_SINGLE(mode));
  780 + if (s->bcj == NULL)
  781 + goto error_bcj;
  782 +#endif
  783 +
  784 + s->lzma2 = xz_dec_lzma2_create(mode, dict_max);
  785 + if (s->lzma2 == NULL)
  786 + goto error_lzma2;
  787 +
  788 + xz_dec_reset(s);
  789 + return s;
  790 +
  791 +error_lzma2:
  792 +#ifdef XZ_DEC_BCJ
  793 + xz_dec_bcj_end(s->bcj);
  794 +error_bcj:
  795 +#endif
  796 + kfree(s);
  797 + return NULL;
  798 +}
  799 +
  800 +XZ_EXTERN void xz_dec_reset(struct xz_dec *s)
  801 +{
  802 + s->sequence = SEQ_STREAM_HEADER;
  803 + s->allow_buf_error = false;
  804 + s->pos = 0;
  805 + s->crc32 = 0;
  806 + memzero(&s->block, sizeof(s->block));
  807 + memzero(&s->index, sizeof(s->index));
  808 + s->temp.pos = 0;
  809 + s->temp.size = STREAM_HEADER_SIZE;
  810 +}
  811 +
  812 +XZ_EXTERN void xz_dec_end(struct xz_dec *s)
  813 +{
  814 + if (s != NULL) {
  815 + xz_dec_lzma2_end(s->lzma2);
  816 +#ifdef XZ_DEC_BCJ
  817 + xz_dec_bcj_end(s->bcj);
  818 +#endif
  819 + kfree(s);
  820 + }
  821 +}
lib/xz/xz_dec_syms.c
  1 +/*
  2 + * XZ decoder module information
  3 + *
  4 + * Author: Lasse Collin <lasse.collin@tukaani.org>
  5 + *
  6 + * This file has been put into the public domain.
  7 + * You can do whatever you want with this file.
  8 + */
  9 +
  10 +#include <linux/module.h>
  11 +#include <linux/xz.h>
  12 +
  13 +EXPORT_SYMBOL(xz_dec_init);
  14 +EXPORT_SYMBOL(xz_dec_reset);
  15 +EXPORT_SYMBOL(xz_dec_run);
  16 +EXPORT_SYMBOL(xz_dec_end);
  17 +
  18 +MODULE_DESCRIPTION("XZ decompressor");
  19 +MODULE_VERSION("1.0");
  20 +MODULE_AUTHOR("Lasse Collin <lasse.collin@tukaani.org> and Igor Pavlov");
  21 +
  22 +/*
  23 + * This code is in the public domain, but in Linux it's simplest to just
  24 + * say it's GPL and consider the authors as the copyright holders.
  25 + */
  26 +MODULE_LICENSE("GPL");
lib/xz/xz_dec_test.c
  1 +/*
  2 + * XZ decoder tester
  3 + *
  4 + * Author: Lasse Collin <lasse.collin@tukaani.org>
  5 + *
  6 + * This file has been put into the public domain.
  7 + * You can do whatever you want with this file.
  8 + */
  9 +
  10 +#include <linux/kernel.h>
  11 +#include <linux/module.h>
  12 +#include <linux/fs.h>
  13 +#include <linux/uaccess.h>
  14 +#include <linux/crc32.h>
  15 +#include <linux/xz.h>
  16 +
  17 +/* Maximum supported dictionary size */
  18 +#define DICT_MAX (1 << 20)
  19 +
  20 +/* Device name to pass to register_chrdev(). */
  21 +#define DEVICE_NAME "xz_dec_test"
  22 +
  23 +/* Dynamically allocated device major number */
  24 +static int device_major;
  25 +
  26 +/*
  27 + * We reuse the same decoder state, and thus can decode only one
  28 + * file at a time.
  29 + */
  30 +static bool device_is_open;
  31 +
  32 +/* XZ decoder state */
  33 +static struct xz_dec *state;
  34 +
  35 +/*
  36 + * Return value of xz_dec_run(). We need to avoid calling xz_dec_run() after
  37 + * it has returned XZ_STREAM_END, so we make this static.
  38 + */
  39 +static enum xz_ret ret;
  40 +
  41 +/*
  42 + * Input and output buffers. The input buffer is used as a temporary safe
  43 + * place for the data coming from userspace.
  44 + */
  45 +static uint8_t buffer_in[1024];
  46 +static uint8_t buffer_out[1024];
  47 +
  48 +/*
  49 + * Structure to pass the input and output buffers to the XZ decoder.
  50 + * A few of the fields are never modified so we initialize them here.
  51 + */
  52 +static struct xz_buf buffers = {
  53 + .in = buffer_in,
  54 + .out = buffer_out,
  55 + .out_size = sizeof(buffer_out)
  56 +};
  57 +
  58 +/*
  59 + * CRC32 of uncompressed data. This is used to give the user a simple way
  60 + * to check that the decoder produces correct output.
  61 + */
  62 +static uint32_t crc;
  63 +
  64 +static int xz_dec_test_open(struct inode *i, struct file *f)
  65 +{
  66 + if (device_is_open)
  67 + return -EBUSY;
  68 +
  69 + device_is_open = true;
  70 +
  71 + xz_dec_reset(state);
  72 + ret = XZ_OK;
  73 + crc = 0xFFFFFFFF;
  74 +
  75 + buffers.in_pos = 0;
  76 + buffers.in_size = 0;
  77 + buffers.out_pos = 0;
  78 +
  79 + printk(KERN_INFO DEVICE_NAME ": opened\n");
  80 + return 0;
  81 +}
  82 +
  83 +static int xz_dec_test_release(struct inode *i, struct file *f)
  84 +{
  85 + device_is_open = false;
  86 +
  87 + if (ret == XZ_OK)
  88 + printk(KERN_INFO DEVICE_NAME ": input was truncated\n");
  89 +
  90 + printk(KERN_INFO DEVICE_NAME ": closed\n");
  91 + return 0;
  92 +}
  93 +
  94 +/*
  95 + * Decode the data given to us from userspace. CRC32 of the uncompressed
  96 + * data is calculated and is printed at the end of successful decoding. The
  97 + * uncompressed data isn't stored anywhere for further use.
  98 + *
  99 + * The .xz file must have exactly one Stream and no Stream Padding. The data
  100 + * after the first Stream is considered to be garbage.
  101 + */
  102 +static ssize_t xz_dec_test_write(struct file *file, const char __user *buf,
  103 + size_t size, loff_t *pos)
  104 +{
  105 + size_t remaining;
  106 +
  107 + if (ret != XZ_OK) {
  108 + if (size > 0)
  109 + printk(KERN_INFO DEVICE_NAME ": %zu bytes of "
  110 + "garbage at the end of the file\n",
  111 + size);
  112 +
  113 + return -ENOSPC;
  114 + }
  115 +
  116 + printk(KERN_INFO DEVICE_NAME ": decoding %zu bytes of input\n",
  117 + size);
  118 +
  119 + remaining = size;
  120 + while ((remaining > 0 || buffers.out_pos == buffers.out_size)
  121 + && ret == XZ_OK) {
  122 + if (buffers.in_pos == buffers.in_size) {
  123 + buffers.in_pos = 0;
  124 + buffers.in_size = min(remaining, sizeof(buffer_in));
  125 + if (copy_from_user(buffer_in, buf, buffers.in_size))
  126 + return -EFAULT;
  127 +
  128 + buf += buffers.in_size;
  129 + remaining -= buffers.in_size;
  130 + }
  131 +
  132 + buffers.out_pos = 0;
  133 + ret = xz_dec_run(state, &buffers);
  134 + crc = crc32(crc, buffer_out, buffers.out_pos);
  135 + }
  136 +
  137 + switch (ret) {
  138 + case XZ_OK:
  139 + printk(KERN_INFO DEVICE_NAME ": XZ_OK\n");
  140 + return size;
  141 +
  142 + case XZ_STREAM_END:
  143 + printk(KERN_INFO DEVICE_NAME ": XZ_STREAM_END, "
  144 + "CRC32 = 0x%08X\n", ~crc);
  145 + return size - remaining - (buffers.in_size - buffers.in_pos);
  146 +
  147 + case XZ_MEMLIMIT_ERROR:
  148 + printk(KERN_INFO DEVICE_NAME ": XZ_MEMLIMIT_ERROR\n");
  149 + break;
  150 +
  151 + case XZ_FORMAT_ERROR:
  152 + printk(KERN_INFO DEVICE_NAME ": XZ_FORMAT_ERROR\n");
  153 + break;
  154 +
  155 + case XZ_OPTIONS_ERROR:
  156 + printk(KERN_INFO DEVICE_NAME ": XZ_OPTIONS_ERROR\n");
  157 + break;
  158 +
  159 + case XZ_DATA_ERROR:
  160 + printk(KERN_INFO DEVICE_NAME ": XZ_DATA_ERROR\n");
  161 + break;
  162 +
  163 + case XZ_BUF_ERROR:
  164 + printk(KERN_INFO DEVICE_NAME ": XZ_BUF_ERROR\n");
  165 + break;
  166 +
  167 + default:
  168 + printk(KERN_INFO DEVICE_NAME ": Bug detected!\n");
  169 + break;
  170 + }
  171 +
  172 + return -EIO;
  173 +}
  174 +
  175 +/* Allocate the XZ decoder state and register the character device. */
  176 +static int __init xz_dec_test_init(void)
  177 +{
  178 + static const struct file_operations fileops = {
  179 + .owner = THIS_MODULE,
  180 + .open = &xz_dec_test_open,
  181 + .release = &xz_dec_test_release,
  182 + .write = &xz_dec_test_write
  183 + };
  184 +
  185 + state = xz_dec_init(XZ_PREALLOC, DICT_MAX);
  186 + if (state == NULL)
  187 + return -ENOMEM;
  188 +
  189 + device_major = register_chrdev(0, DEVICE_NAME, &fileops);
  190 + if (device_major < 0) {
  191 + xz_dec_end(state);
  192 + return device_major;
  193 + }
  194 +
  195 + printk(KERN_INFO DEVICE_NAME ": module loaded\n");
  196 + printk(KERN_INFO DEVICE_NAME ": Create a device node with "
  197 + "'mknod " DEVICE_NAME " c %d 0' and write .xz files "
  198 + "to it.\n", device_major);
  199 + return 0;
  200 +}
  201 +
  202 +static void __exit xz_dec_test_exit(void)
  203 +{
  204 + unregister_chrdev(device_major, DEVICE_NAME);
  205 + xz_dec_end(state);
  206 + printk(KERN_INFO DEVICE_NAME ": module unloaded\n");
  207 +}
  208 +
  209 +module_init(xz_dec_test_init);
  210 +module_exit(xz_dec_test_exit);
  211 +
  212 +MODULE_DESCRIPTION("XZ decompressor tester");
  213 +MODULE_VERSION("1.0");
  214 +MODULE_AUTHOR("Lasse Collin <lasse.collin@tukaani.org>");
  215 +
  216 +/*
  217 + * This code is in the public domain, but in Linux it's simplest to just
  218 + * say it's GPL and consider the authors as the copyright holders.
  219 + */
  220 +MODULE_LICENSE("GPL");
lib/xz/xz_lzma2.h
  1 +/*
  2 + * LZMA2 definitions
  3 + *
  4 + * Authors: Lasse Collin <lasse.collin@tukaani.org>
  5 + * Igor Pavlov <http://7-zip.org/>
  6 + *
  7 + * This file has been put into the public domain.
  8 + * You can do whatever you want with this file.
  9 + */
  10 +
  11 +#ifndef XZ_LZMA2_H
  12 +#define XZ_LZMA2_H
  13 +
  14 +/* Range coder constants */
  15 +#define RC_SHIFT_BITS 8
  16 +#define RC_TOP_BITS 24
  17 +#define RC_TOP_VALUE (1 << RC_TOP_BITS)
  18 +#define RC_BIT_MODEL_TOTAL_BITS 11
  19 +#define RC_BIT_MODEL_TOTAL (1 << RC_BIT_MODEL_TOTAL_BITS)
  20 +#define RC_MOVE_BITS 5
  21 +
  22 +/*
  23 + * Maximum number of position states. A position state is the lowest pb
  24 + * number of bits of the current uncompressed offset. In some places there
  25 + * are different sets of probabilities for different position states.
  26 + */
  27 +#define POS_STATES_MAX (1 << 4)
  28 +
  29 +/*
  30 + * This enum is used to track which LZMA symbols have occurred most recently
  31 + * and in which order. This information is used to predict the next symbol.
  32 + *
  33 + * Symbols:
  34 + * - Literal: One 8-bit byte
  35 + * - Match: Repeat a chunk of data at some distance
  36 + * - Long repeat: Multi-byte match at a recently seen distance
  37 + * - Short repeat: One-byte repeat at a recently seen distance
  38 + *
  39 + * The symbol names are in the form STATE_oldest_older_previous. REP means
  40 + * either short or long repeated match, and NONLIT means any non-literal.
  41 + */
  42 +enum lzma_state {
  43 + STATE_LIT_LIT,
  44 + STATE_MATCH_LIT_LIT,
  45 + STATE_REP_LIT_LIT,
  46 + STATE_SHORTREP_LIT_LIT,
  47 + STATE_MATCH_LIT,
  48 + STATE_REP_LIT,
  49 + STATE_SHORTREP_LIT,
  50 + STATE_LIT_MATCH,
  51 + STATE_LIT_LONGREP,
  52 + STATE_LIT_SHORTREP,
  53 + STATE_NONLIT_MATCH,
  54 + STATE_NONLIT_REP
  55 +};
  56 +
  57 +/* Total number of states */
  58 +#define STATES 12
  59 +
  60 +/* The lowest 7 states indicate that the previous state was a literal. */
  61 +#define LIT_STATES 7
  62 +
  63 +/* Indicate that the latest symbol was a literal. */
  64 +static inline void lzma_state_literal(enum lzma_state *state)
  65 +{
  66 + if (*state <= STATE_SHORTREP_LIT_LIT)
  67 + *state = STATE_LIT_LIT;
  68 + else if (*state <= STATE_LIT_SHORTREP)
  69 + *state -= 3;
  70 + else
  71 + *state -= 6;
  72 +}
  73 +
  74 +/* Indicate that the latest symbol was a match. */
  75 +static inline void lzma_state_match(enum lzma_state *state)
  76 +{
  77 + *state = *state < LIT_STATES ? STATE_LIT_MATCH : STATE_NONLIT_MATCH;
  78 +}
  79 +
  80 +/* Indicate that the latest symbol was a long repeated match. */
  81 +static inline void lzma_state_long_rep(enum lzma_state *state)
  82 +{
  83 + *state = *state < LIT_STATES ? STATE_LIT_LONGREP : STATE_NONLIT_REP;
  84 +}
  85 +
  86 +/* Indicate that the latest symbol was a short repeated match. */
  87 +static inline void lzma_state_short_rep(enum lzma_state *state)
  88 +{
  89 + *state = *state < LIT_STATES ? STATE_LIT_SHORTREP : STATE_NONLIT_REP;
  90 +}
  91 +
  92 +/* Test if the previous symbol was a literal. */
  93 +static inline bool lzma_state_is_literal(enum lzma_state state)
  94 +{
  95 + return state < LIT_STATES;
  96 +}
  97 +
  98 +/* Each literal coder is divided into three sections:
  99 + * - 0x001-0x0FF: Without match byte
  100 + * - 0x101-0x1FF: With match byte; match bit is 0
  101 + * - 0x201-0x2FF: With match byte; match bit is 1
  102 + *
  103 + * The match byte is used when the previous LZMA symbol was something other than
  104 + * a literal (that is, it was some kind of match).
  105 + */
  106 +#define LITERAL_CODER_SIZE 0x300
  107 +
  108 +/* Maximum number of literal coders */
  109 +#define LITERAL_CODERS_MAX (1 << 4)
  110 +
  111 +/* Minimum length of a match is two bytes. */
  112 +#define MATCH_LEN_MIN 2
  113 +
  114 +/* Match length is encoded with 4, 5, or 10 bits.
  115 + *
  116 + * Length Bits
  117 + * 2-9 4 = Choice=0 + 3 bits
  118 + * 10-17 5 = Choice=1 + Choice2=0 + 3 bits
  119 + * 18-273 10 = Choice=1 + Choice2=1 + 8 bits
  120 + */
  121 +#define LEN_LOW_BITS 3
  122 +#define LEN_LOW_SYMBOLS (1 << LEN_LOW_BITS)
  123 +#define LEN_MID_BITS 3
  124 +#define LEN_MID_SYMBOLS (1 << LEN_MID_BITS)
  125 +#define LEN_HIGH_BITS 8
  126 +#define LEN_HIGH_SYMBOLS (1 << LEN_HIGH_BITS)
  127 +#define LEN_SYMBOLS (LEN_LOW_SYMBOLS + LEN_MID_SYMBOLS + LEN_HIGH_SYMBOLS)
  128 +
  129 +/*
  130 + * The maximum length of a match is 273, which is a result of the encoding
  131 + * described above.
  132 + */
  133 +#define MATCH_LEN_MAX (MATCH_LEN_MIN + LEN_SYMBOLS - 1)
  134 +
  135 +/*
  136 + * Different sets of probabilities are used for match distances that have
  137 + * very short match length: Lengths of 2, 3, and 4 bytes have a separate
  138 + * set of probabilities for each length. The matches with longer length
  139 + * use a shared set of probabilities.
  140 + */
  141 +#define DIST_STATES 4
  142 +
  143 +/*
  144 + * Get the index of the appropriate probability array for decoding
  145 + * the distance slot.
  146 + */
  147 +static inline uint32_t lzma_get_dist_state(uint32_t len)
  148 +{
  149 + return len < DIST_STATES + MATCH_LEN_MIN
  150 + ? len - MATCH_LEN_MIN : DIST_STATES - 1;
  151 +}
  152 +
  153 +/*
  154 + * The highest two bits of a 32-bit match distance are encoded using six bits.
  155 + * This six-bit value is called a distance slot. This way encoding a 32-bit
  156 + * value takes 6-36 bits, larger values taking more bits.
  157 + */
  158 +#define DIST_SLOT_BITS 6
  159 +#define DIST_SLOTS (1 << DIST_SLOT_BITS)
  160 +
  161 +/* Match distances up to 127 are fully encoded using probabilities. Since
  162 + * the highest two bits (distance slot) are always encoded using six bits,
  163 + * the distances 0-3 don't need any additional bits to encode, since the
  164 + * distance slot itself is the same as the actual distance. DIST_MODEL_START
  165 + * indicates the first distance slot where at least one additional bit is
  166 + * needed.
  167 + */
  168 +#define DIST_MODEL_START 4
  169 +
  170 +/*
  171 + * Match distances greater than 127 are encoded in three pieces:
  172 + * - distance slot: the highest two bits
  173 + * - direct bits: 2-26 bits below the highest two bits
  174 + * - alignment bits: four lowest bits
  175 + *
  176 + * Direct bits don't use any probabilities.
  177 + *
  178 + * The distance slot value of 14 is for distances 128-191.
  179 + */
  180 +#define DIST_MODEL_END 14
  181 +
  182 +/* Distance slots that indicate a distance <= 127. */
  183 +#define FULL_DISTANCES_BITS (DIST_MODEL_END / 2)
  184 +#define FULL_DISTANCES (1 << FULL_DISTANCES_BITS)
  185 +
  186 +/*
  187 + * For match distances greater than 127, only the highest two bits and the
  188 + * lowest four bits (alignment) are encoded using probabilities.
  189 + */
  190 +#define ALIGN_BITS 4
  191 +#define ALIGN_SIZE (1 << ALIGN_BITS)
  192 +#define ALIGN_MASK (ALIGN_SIZE - 1)
  193 +
  194 +/* Total number of all probability variables */
  195 +#define PROBS_TOTAL (1846 + LITERAL_CODERS_MAX * LITERAL_CODER_SIZE)
  196 +
  197 +/*
  198 + * LZMA remembers the four most recent match distances. Reusing these
  199 + * distances tends to take less space than re-encoding the actual
  200 + * distance value.
  201 + */
  202 +#define REPS 4
  203 +
  204 +#endif
lib/xz/xz_private.h
  1 +/*
  2 + * Private includes and definitions
  3 + *
  4 + * Author: Lasse Collin <lasse.collin@tukaani.org>
  5 + *
  6 + * This file has been put into the public domain.
  7 + * You can do whatever you want with this file.
  8 + */
  9 +
  10 +#ifndef XZ_PRIVATE_H
  11 +#define XZ_PRIVATE_H
  12 +
  13 +#ifdef __KERNEL__
  14 +# include <linux/xz.h>
  15 +# include <asm/byteorder.h>
  16 +# include <asm/unaligned.h>
  17 + /* XZ_PREBOOT may be defined only via decompress_unxz.c. */
  18 +# ifndef XZ_PREBOOT
  19 +# include <linux/slab.h>
  20 +# include <linux/vmalloc.h>
  21 +# include <linux/string.h>
  22 +# ifdef CONFIG_XZ_DEC_X86
  23 +# define XZ_DEC_X86
  24 +# endif
  25 +# ifdef CONFIG_XZ_DEC_POWERPC
  26 +# define XZ_DEC_POWERPC
  27 +# endif
  28 +# ifdef CONFIG_XZ_DEC_IA64
  29 +# define XZ_DEC_IA64
  30 +# endif
  31 +# ifdef CONFIG_XZ_DEC_ARM
  32 +# define XZ_DEC_ARM
  33 +# endif
  34 +# ifdef CONFIG_XZ_DEC_ARMTHUMB
  35 +# define XZ_DEC_ARMTHUMB
  36 +# endif
  37 +# ifdef CONFIG_XZ_DEC_SPARC
  38 +# define XZ_DEC_SPARC
  39 +# endif
  40 +# define memeq(a, b, size) (memcmp(a, b, size) == 0)
  41 +# define memzero(buf, size) memset(buf, 0, size)
  42 +# endif
  43 +# define get_le32(p) le32_to_cpup((const uint32_t *)(p))
  44 +#else
  45 + /*
  46 + * For userspace builds, use a separate header to define the required
  47 + * macros and functions. This makes it easier to adapt the code into
  48 + * different environments and avoids clutter in the Linux kernel tree.
  49 + */
  50 +# include "xz_config.h"
  51 +#endif
  52 +
  53 +/* If no specific decoding mode is requested, enable support for all modes. */
  54 +#if !defined(XZ_DEC_SINGLE) && !defined(XZ_DEC_PREALLOC) \
  55 + && !defined(XZ_DEC_DYNALLOC)
  56 +# define XZ_DEC_SINGLE
  57 +# define XZ_DEC_PREALLOC
  58 +# define XZ_DEC_DYNALLOC
  59 +#endif
  60 +
  61 +/*
  62 + * The DEC_IS_foo(mode) macros are used in "if" statements. If only some
  63 + * of the supported modes are enabled, these macros will evaluate to true or
  64 + * false at compile time and thus allow the compiler to omit unneeded code.
  65 + */
  66 +#ifdef XZ_DEC_SINGLE
  67 +# define DEC_IS_SINGLE(mode) ((mode) == XZ_SINGLE)
  68 +#else
  69 +# define DEC_IS_SINGLE(mode) (false)
  70 +#endif
  71 +
  72 +#ifdef XZ_DEC_PREALLOC
  73 +# define DEC_IS_PREALLOC(mode) ((mode) == XZ_PREALLOC)
  74 +#else
  75 +# define DEC_IS_PREALLOC(mode) (false)
  76 +#endif
  77 +
  78 +#ifdef XZ_DEC_DYNALLOC
  79 +# define DEC_IS_DYNALLOC(mode) ((mode) == XZ_DYNALLOC)
  80 +#else
  81 +# define DEC_IS_DYNALLOC(mode) (false)
  82 +#endif
  83 +
  84 +#if !defined(XZ_DEC_SINGLE)
  85 +# define DEC_IS_MULTI(mode) (true)
  86 +#elif defined(XZ_DEC_PREALLOC) || defined(XZ_DEC_DYNALLOC)
  87 +# define DEC_IS_MULTI(mode) ((mode) != XZ_SINGLE)
  88 +#else
  89 +# define DEC_IS_MULTI(mode) (false)
  90 +#endif
  91 +
  92 +/*
  93 + * If any of the BCJ filter decoders are wanted, define XZ_DEC_BCJ.
  94 + * XZ_DEC_BCJ is used to enable generic support for BCJ decoders.
  95 + */
  96 +#ifndef XZ_DEC_BCJ
  97 +# if defined(XZ_DEC_X86) || defined(XZ_DEC_POWERPC) \
  98 + || defined(XZ_DEC_IA64) || defined(XZ_DEC_ARM) \
  99 + || defined(XZ_DEC_ARMTHUMB) \
  100 + || defined(XZ_DEC_SPARC)
  101 +# define XZ_DEC_BCJ
  102 +# endif
  103 +#endif
  104 +
  105 +/*
  106 + * Allocate memory for LZMA2 decoder. xz_dec_lzma2_reset() must be used
  107 + * before calling xz_dec_lzma2_run().
  108 + */
  109 +XZ_EXTERN struct xz_dec_lzma2 *xz_dec_lzma2_create(enum xz_mode mode,
  110 + uint32_t dict_max);
  111 +
  112 +/*
  113 + * Decode the LZMA2 properties (one byte) and reset the decoder. Return
  114 + * XZ_OK on success, XZ_MEMLIMIT_ERROR if the preallocated dictionary is not
  115 + * big enough, and XZ_OPTIONS_ERROR if props indicates something that this
  116 + * decoder doesn't support.
  117 + */
  118 +XZ_EXTERN enum xz_ret xz_dec_lzma2_reset(struct xz_dec_lzma2 *s,
  119 + uint8_t props);
  120 +
  121 +/* Decode raw LZMA2 stream from b->in to b->out. */
  122 +XZ_EXTERN enum xz_ret xz_dec_lzma2_run(struct xz_dec_lzma2 *s,
  123 + struct xz_buf *b);
  124 +
  125 +/* Free the memory allocated for the LZMA2 decoder. */
  126 +XZ_EXTERN void xz_dec_lzma2_end(struct xz_dec_lzma2 *s);
  127 +
  128 +#ifdef XZ_DEC_BCJ
  129 +/*
  130 + * Allocate memory for BCJ decoders. xz_dec_bcj_reset() must be used before
  131 + * calling xz_dec_bcj_run().
  132 + */
  133 +XZ_EXTERN struct xz_dec_bcj *xz_dec_bcj_create(bool single_call);
  134 +
  135 +/*
  136 + * Decode the Filter ID of a BCJ filter. This implementation doesn't
  137 + * support custom start offsets, so no decoding of Filter Properties
  138 + * is needed. Returns XZ_OK if the given Filter ID is supported.
  139 + * Otherwise XZ_OPTIONS_ERROR is returned.
  140 + */
  141 +XZ_EXTERN enum xz_ret xz_dec_bcj_reset(struct xz_dec_bcj *s, uint8_t id);
  142 +
  143 +/*
  144 + * Decode raw BCJ + LZMA2 stream. This must be used only if there actually is
  145 + * a BCJ filter in the chain. If the chain has only LZMA2, xz_dec_lzma2_run()
  146 + * must be called directly.
  147 + */
  148 +XZ_EXTERN enum xz_ret xz_dec_bcj_run(struct xz_dec_bcj *s,
  149 + struct xz_dec_lzma2 *lzma2,
  150 + struct xz_buf *b);
  151 +
  152 +/* Free the memory allocated for the BCJ filters. */
  153 +#define xz_dec_bcj_end(s) kfree(s)
  154 +#endif
  155 +
  156 +#endif
lib/xz/xz_stream.h
  1 +/*
  2 + * Definitions for handling the .xz file format
  3 + *
  4 + * Author: Lasse Collin <lasse.collin@tukaani.org>
  5 + *
  6 + * This file has been put into the public domain.
  7 + * You can do whatever you want with this file.
  8 + */
  9 +
  10 +#ifndef XZ_STREAM_H
  11 +#define XZ_STREAM_H
  12 +
  13 +#if defined(__KERNEL__) && !XZ_INTERNAL_CRC32
  14 +# include <linux/crc32.h>
  15 +# undef crc32
  16 +# define xz_crc32(buf, size, crc) \
  17 + (~crc32_le(~(uint32_t)(crc), buf, size))
  18 +#endif
  19 +
  20 +/*
  21 + * See the .xz file format specification at
  22 + * http://tukaani.org/xz/xz-file-format.txt
  23 + * to understand the container format.
  24 + */
  25 +
  26 +#define STREAM_HEADER_SIZE 12
  27 +
  28 +#define HEADER_MAGIC "\3757zXZ"
  29 +#define HEADER_MAGIC_SIZE 6
  30 +
  31 +#define FOOTER_MAGIC "YZ"
  32 +#define FOOTER_MAGIC_SIZE 2
  33 +
  34 +/*
  35 + * A variable-length integer can hold a 63-bit unsigned integer or a special
  36 + * value indicating that the value is unknown.
  37 + *
  38 + * Experimental: vli_type can be defined to uint32_t to save a few bytes
  39 + * in code size (no effect on speed). Doing so limits the uncompressed and
  40 + * compressed size of the file to less than 256 MiB and may also weaken
  41 + * error detection slightly.
  42 + */
  43 +typedef uint64_t vli_type;
  44 +
  45 +#define VLI_MAX ((vli_type)-1 / 2)
  46 +#define VLI_UNKNOWN ((vli_type)-1)
  47 +
  48 +/* Maximum encoded size of a VLI */
  49 +#define VLI_BYTES_MAX (sizeof(vli_type) * 8 / 7)
  50 +
  51 +/* Integrity Check types */
  52 +enum xz_check {
  53 + XZ_CHECK_NONE = 0,
  54 + XZ_CHECK_CRC32 = 1,
  55 + XZ_CHECK_CRC64 = 4,
  56 + XZ_CHECK_SHA256 = 10
  57 +};
  58 +
  59 +/* Maximum possible Check ID */
  60 +#define XZ_CHECK_MAX 15
  61 +
  62 +#endif
scripts/Makefile.lib
... ... @@ -262,6 +262,34 @@
262 262 lzop -9 && $(call size_append, $(filter-out FORCE,$^))) > $@ || \
263 263 (rm -f $@ ; false)
264 264  
  265 +# XZ
  266 +# ---------------------------------------------------------------------------
  267 +# Use xzkern to compress the kernel image and xzmisc to compress other things.
  268 +#
  269 +# xzkern uses a big LZMA2 dictionary since it doesn't increase memory usage
  270 +# of the kernel decompressor. A BCJ filter is used if it is available for
  271 +# the target architecture. xzkern also appends the uncompressed size of the data
  272 +# using size_append. The .xz format has the size information available at
  273 +# the end of the file too, but it's in a more complex format and it's good to
  274 +# avoid changing the part of the boot code that reads the uncompressed size.
  275 +# Note that the bytes added by size_append will make the xz tool think that
  276 +# the file is corrupt. This is expected.
  277 +#
  278 +# xzmisc doesn't use size_append, so it can be used to create normal .xz
  279 +# files. xzmisc uses a smaller LZMA2 dictionary than xzkern, because a very
  280 +# big dictionary would increase the memory usage too much in the multi-call
  281 +# decompression mode. A BCJ filter isn't used either.
  282 +quiet_cmd_xzkern = XZKERN $@
  283 +cmd_xzkern = (cat $(filter-out FORCE,$^) | \
  284 + sh $(srctree)/scripts/xz_wrap.sh && \
  285 + $(call size_append, $(filter-out FORCE,$^))) > $@ || \
  286 + (rm -f $@ ; false)
  287 +
  288 +quiet_cmd_xzmisc = XZMISC $@
  289 +cmd_xzmisc = (cat $(filter-out FORCE,$^) | \
  290 + xz --check=crc32 --lzma2=dict=1MiB) > $@ || \
  291 + (rm -f $@ ; false)
  292 +
265 293 # misc stuff
266 294 # ---------------------------------------------------------------------------
267 295 quote:="
scripts/xz_wrap.sh
  1 +#!/bin/sh
  2 +#
  3 +# This is a wrapper for xz to compress the kernel image using appropriate
  4 +# compression options depending on the architecture.
  5 +#
  6 +# Author: Lasse Collin <lasse.collin@tukaani.org>
  7 +#
  8 +# This file has been put into the public domain.
  9 +# You can do whatever you want with this file.
  10 +#
  11 +
  12 +BCJ=
  13 +LZMA2OPTS=
  14 +
  15 +case $ARCH in
  16 + x86|x86_64) BCJ=--x86 ;;
  17 + powerpc) BCJ=--powerpc ;;
  18 + ia64) BCJ=--ia64; LZMA2OPTS=pb=4 ;;
  19 + arm) BCJ=--arm ;;
  20 + sparc) BCJ=--sparc ;;
  21 +esac
  22 +
  23 +exec xz --check=crc32 $BCJ --lzma2=$LZMA2OPTS,dict=32MiB