/*
 * Wrapper for decompressing XZ-compressed kernel, initramfs, and initrd
 *
 * Author: Lasse Collin <lasse.collin@tukaani.org>
 *
 * This file has been put into the public domain.
 * You can do whatever you want with this file.
 */

/*
 * Important notes about in-place decompression
 *
 * At least on x86, the kernel is decompressed in place: the compressed data
 * is placed at the end of the output buffer, and the decompressor overwrites
 * most of the compressed data. There must be enough safety margin to
 * guarantee that the write position is always behind the read position.
 *
 * The safety margin for XZ with LZMA2 or BCJ+LZMA2 is calculated below.
 * Note that the margin with XZ is bigger than with Deflate (gzip)!
 *
 * The worst case for in-place decompression is that the beginning of
 * the file is compressed extremely well, and the rest of the file is
 * uncompressible. Thus, we must look for the worst-case expansion when the
 * compressor is encoding uncompressible data.
 *
 * The structure of the .xz file in case of a compressed kernel is as follows.
 * Sizes (in bytes) of the fields are given in parentheses.
 *
 *    Stream Header (12)
 *    Block Header:
 *      Block Header (8-12)
 *      Compressed Data (N)
 *      Block Padding (0-3)
 *      CRC32 (4)
 *    Index (8-20)
 *    Stream Footer (12)
 *
 * Normally there is exactly one Block, but let's assume that there are
 * 2-4 Blocks just in case. Because the Stream Header and the Block Header
 * of the first Block don't make the decompressor produce any uncompressed
 * data, we can leave them out of our calculations. The Block Headers of any
 * additional Blocks still have to be taken into account. With these
 * assumptions, it is safe to say that the total header overhead is less
 * than 128 bytes.
 *
 * Compressed Data contains LZMA2 or BCJ+LZMA2 encoded data. Since BCJ
 * doesn't change the size of the data, it is enough to calculate the
 * safety margin for LZMA2.
 *
 * LZMA2 stores the data in chunks. Each chunk has a header whose size is
 * at most 6 bytes, but to keep the numbers at round powers of two, let's
 * assume that the maximum chunk header size is 8 bytes. After the chunk
 * header, there may be up to 64 KiB of actual payload in the chunk. Often
 * the payload is quite a bit smaller though; to be safe, let's assume that
 * an average chunk has only 32 KiB of payload.
 *
 * The maximum uncompressed size of the payload is 2 MiB. The minimum
 * uncompressed size of the payload is in practice never less than the
 * payload size itself. The LZMA2 format would allow uncompressed size
 * to be less than the payload size, but no sane compressor creates such
 * files. LZMA2 supports storing uncompressible data in uncompressed form,
 * so there's never a need to create payloads whose uncompressed size is
 * smaller than the compressed size.
 *
 * The assumption that the uncompressed size of the payload is never
 * smaller than the payload itself is valid only when talking about
 * the payload as a whole. It is possible that the payload has parts where
 * the decompressor consumes more input than it produces output. Calculating
 * the worst case for this would be tricky. Instead of trying to do that,
 * let's simply make sure that the decompressor never overwrites any bytes
 * of the payload which it is currently reading.
 *
 * Now we have enough information to calculate the safety margin. We need
 *   - 128 bytes for the .xz file format headers;
 *   - 8 bytes for every 32 KiB of uncompressed size (one LZMA2 chunk header
 *     per chunk, each chunk having an average payload size of 32 KiB); and
 *   - 64 KiB (biggest possible LZMA2 chunk payload size) to make sure that
 *     the decompressor never overwrites anything from the LZMA2 chunk
 *     payload it is currently reading.
 *
 * We get the following formula:
 *
 *    safety_margin = 128 + uncompressed_size * 8 / 32768 + 65536
 *                  = 128 + (uncompressed_size >> 12) + 65536
 *
 * (A short illustrative sketch of this calculation follows this comment
 * block.)
 *
 * For comparison, according to arch/x86/boot/compressed/misc.c, the
 * equivalent formula for Deflate is this:
 *
 *    safety_margin = 18 + (uncompressed_size >> 12) + 32768
 *
 * Thus, when updating a Deflate-only in-place kernel decompressor to
 * support XZ, the fixed overhead has to be increased from 18+32768 bytes
 * to 128+65536 bytes.
 */
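
/*
 * Illustrative sketch only (the helper name is hypothetical; nothing in
 * this file defines or uses it): the margin from the formula above could
 * be computed like this:
 *
 *        static unsigned long xz_safety_margin(unsigned long uncompressed_size)
 *        {
 *                return 128 + (uncompressed_size >> 12) + 65536;
 *        }
 *
 * For example, an 8 MiB (8388608-byte) uncompressed image would need
 * 128 + (8388608 >> 12) + 65536 = 128 + 2048 + 65536 = 67712 bytes of
 * safety margin.
 */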

/*
 * STATIC is defined to "static" if we are being built for kernel
 * decompression (pre-boot code). <linux/decompress/mm.h> will define
 * STATIC to empty if it wasn't already defined. Since we will need to
 * know later if we are being used for kernel decompression, we define
 * XZ_PREBOOT here.
 */
#ifdef STATIC
# define XZ_PREBOOT
#endif
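
/*
 * For illustration only (the include path is hypothetical): a pre-boot
 * wrapper typically selects the pre-boot build by defining STATIC before
 * pulling this file in, e.g.
 *
 *        #define STATIC static
 *        #include "decompress_unxz.c"
 *
 * which in turn makes this file define XZ_PREBOOT above.
 */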
#ifdef __KERNEL__
# include <linux/decompress/mm.h>
#endif
#define XZ_EXTERN STATIC

#ifndef XZ_PREBOOT
# include <linux/slab.h>
# include <linux/xz.h>
#else
/*
 * Use the internal CRC32 code instead of the kernel's CRC32 module, which
 * is not available in the early phase of booting.
 */
#define XZ_INTERNAL_CRC32 1

/*
 * For boot time use, we enable only the BCJ filter of the current
 * architecture or none if no BCJ filter is available for the architecture.
 */
#ifdef CONFIG_X86
# define XZ_DEC_X86
#endif
#ifdef CONFIG_PPC
# define XZ_DEC_POWERPC
#endif
#ifdef CONFIG_ARM
#  ifdef CONFIG_THUMB2_KERNEL
#    define XZ_DEC_ARMTHUMB
#  else
#    define XZ_DEC_ARM
#  endif
#endif
#ifdef CONFIG_IA64
# define XZ_DEC_IA64
#endif
#ifdef CONFIG_SPARC
# define XZ_DEC_SPARC
#endif

/*
 * This will get the basic headers so that memeq() and others
 * can be defined.
 */
#include "xz/xz_private.h"

/*
 * Replace the normal allocation functions with the versions from
 * <linux/decompress/mm.h>. vfree() needs to support vfree(NULL)
 * when XZ_DYNALLOC is used, but the pre-boot free() doesn't support it.
 * Work around that here, because the other decompressors don't need it.
 */
#undef kmalloc
#undef kfree
#undef vmalloc
#undef vfree
#define kmalloc(size, flags) malloc(size)
#define kfree(ptr) free(ptr)
#define vmalloc(size) malloc(size)
#define vfree(ptr) do { if (ptr != NULL) free(ptr); } while (0)

/*
 * FIXME: Not all basic memory functions are provided in architecture-specific
 * files (yet). We define our own versions here for now, but this should be
 * only a temporary solution.
 *
 * memeq and memzero are not used much and any remotely sane implementation
 * is fast enough. memcpy/memmove speed matters in multi-call mode, but
 * the kernel image is decompressed in single-call mode, in which only
 * memmove speed can matter and only if there is a lot of uncompressible data
 * (LZMA2 stores uncompressible chunks in uncompressed form). Thus, the
 * functions below should just be kept small; it's probably not worth
 * optimizing for speed.
 */

#ifndef memeq
static bool memeq(const void *a, const void *b, size_t size)
{
        const uint8_t *x = a;
        const uint8_t *y = b;
        size_t i;

        for (i = 0; i < size; ++i)
                if (x[i] != y[i])
                        return false;

        return true;
}
#endif

#ifndef memzero
static void memzero(void *buf, size_t size)
{
        uint8_t *b = buf;
        uint8_t *e = b + size;

        while (b != e)
                *b++ = '\0';
}
#endif

#ifndef memmove
/* Not static to avoid a conflict with the prototype in the Linux headers. */
void *memmove(void *dest, const void *src, size_t size)
{
        uint8_t *d = dest;
        const uint8_t *s = src;
        size_t i;

        if (d < s) {
                for (i = 0; i < size; ++i)
                        d[i] = s[i];
        } else if (d > s) {
                i = size;
                while (i-- > 0)
                        d[i] = s[i];
        }

        return dest;
}
#endif

/*
 * Since we need memmove anyway, we could use it as memcpy too.
 * This is commented out for now to avoid breaking things.
 */
/*
#ifndef memcpy
# define memcpy memmove
#endif
*/

#include "xz/xz_crc32.c"
#include "xz/xz_dec_stream.c"
#include "xz/xz_dec_lzma2.c"
#include "xz/xz_dec_bcj.c"

#endif /* XZ_PREBOOT */

/* Size of the input and output buffers in multi-call mode */
#define XZ_IOBUF_SIZE 4096

/*
 * This function implements the API defined in <linux/decompress/generic.h>.
 *
 * This wrapper will automatically choose single-call or multi-call mode
 * of the native XZ decoder API. The single-call mode can be used only when
 * both input and output buffers are available as a single chunk, i.e. when
 * fill() and flush() won't be used. (An illustrative usage sketch follows
 * the function body.)
 */
STATIC int INIT unxz(unsigned char *in, long in_size,
                     long (*fill)(void *dest, unsigned long size),
                     long (*flush)(void *src, unsigned long size),
                     unsigned char *out, long *in_used,
                     void (*error)(char *x))
{
        struct xz_buf b;
        struct xz_dec *s;
        enum xz_ret ret;
        bool must_free_in = false;

#if XZ_INTERNAL_CRC32
        xz_crc32_init();
#endif

        if (in_used != NULL)
                *in_used = 0;

        if (fill == NULL && flush == NULL)
                s = xz_dec_init(XZ_SINGLE, 0);
        else
                s = xz_dec_init(XZ_DYNALLOC, (uint32_t)-1);

        if (s == NULL)
                goto error_alloc_state;

        if (flush == NULL) {
                b.out = out;
                b.out_size = (size_t)-1;
        } else {
                b.out_size = XZ_IOBUF_SIZE;
                b.out = malloc(XZ_IOBUF_SIZE);
                if (b.out == NULL)
                        goto error_alloc_out;
        }

        if (in == NULL) {
                must_free_in = true;
                in = malloc(XZ_IOBUF_SIZE);
                if (in == NULL)
                        goto error_alloc_in;
        }

        b.in = in;
        b.in_pos = 0;
        b.in_size = in_size;
        b.out_pos = 0;

        if (fill == NULL && flush == NULL) {
                ret = xz_dec_run(s, &b);
        } else {
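                /*
                 * Multi-call mode: keep running the decoder, refilling the
                 * input buffer with fill() whenever it has been fully
                 * consumed and handing finished output buffers to flush(),
                 * until xz_dec_run() returns something other than XZ_OK.
                 */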
                do {
                        if (b.in_pos == b.in_size && fill != NULL) {
                                if (in_used != NULL)
                                        *in_used += b.in_pos;

                                b.in_pos = 0;

                                in_size = fill(in, XZ_IOBUF_SIZE);
                                if (in_size < 0) {
                                        /*
                                         * This isn't an optimal error code
                                         * but it probably isn't worth making
                                         * a new one either.
                                         */
                                        ret = XZ_BUF_ERROR;
                                        break;
                                }

                                b.in_size = in_size;
                        }

                        ret = xz_dec_run(s, &b);

                        if (flush != NULL && (b.out_pos == b.out_size
                                        || (ret != XZ_OK && b.out_pos > 0))) {
                                /*
                                 * Setting ret here may hide an error
                                 * returned by xz_dec_run(), but probably
                                 * it's not too bad.
                                 */
                                if (flush(b.out, b.out_pos) != (long)b.out_pos)
                                        ret = XZ_BUF_ERROR;

                                b.out_pos = 0;
                        }
                } while (ret == XZ_OK);

                if (must_free_in)
                        free(in);

                if (flush != NULL)
                        free(b.out);
        }

        if (in_used != NULL)
                *in_used += b.in_pos;

        xz_dec_end(s);

        switch (ret) {
        case XZ_STREAM_END:
                return 0;

        case XZ_MEM_ERROR:
                /* This can occur only in multi-call mode. */
                error("XZ decompressor ran out of memory");
                break;

        case XZ_FORMAT_ERROR:
                error("Input is not in the XZ format (wrong magic bytes)");
                break;

        case XZ_OPTIONS_ERROR:
                error("Input was encoded with settings that are not "
                      "supported by this XZ decoder");
                break;

        case XZ_DATA_ERROR:
        case XZ_BUF_ERROR:
                error("XZ-compressed data is corrupt");
                break;

        default:
                error("Bug in the XZ decompressor");
                break;
        }

        return -1;

error_alloc_in:
        if (flush != NULL)
                free(b.out);

error_alloc_out:
        xz_dec_end(s);

error_alloc_state:
        error("XZ decompressor ran out of memory");
        return -1;
}
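
/*
 * Illustrative usage sketch only (the buffer names and the error callback
 * are hypothetical): in single-call mode the whole compressed image and a
 * sufficiently large output buffer are passed directly, and fill() and
 * flush() are left as NULL:
 *
 *        void report_error(char *msg);
 *
 *        long in_used;
 *        int err = unxz(compressed, compressed_size, NULL, NULL,
 *                       output, &in_used, report_error);
 *
 * report_error() stands for whatever error callback the caller provides.
 * A return value of 0 means the stream decoded successfully; in_used
 * receives the number of compressed bytes that were consumed.
 */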

/*
 * This function is used by architecture-specific files to decompress
 * the kernel image. (An illustrative call is sketched below.)
 */
#ifdef XZ_PREBOOT
STATIC int INIT __decompress(unsigned char *buf, long len,
                             long (*fill)(void*, unsigned long),
                             long (*flush)(void*, unsigned long),
                             unsigned char *out_buf, long olen,
                             long *pos,
                             void (*error)(char *x))
{
        return unxz(buf, len, fill, flush, out_buf, pos, error);
}
#endif
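
/*
 * For illustration only (the argument names are hypothetical): a pre-boot
 * caller that decompresses in place, without fill()/flush() callbacks,
 * would invoke it roughly like this:
 *
 *        __decompress(input_data, input_len, NULL, NULL,
 *                     output, output_len, NULL, error);
 */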