27 Sep, 2022

1 commit

  • This is the 5.15.70 stable release

    * tag 'v5.15.70': (2444 commits)
    Linux 5.15.70
    ALSA: hda/sigmatel: Fix unused variable warning for beep power change
    cgroup: Add missing cpus_read_lock() to cgroup_attach_task_all()
    ...

    Signed-off-by: Jason Liu

    Conflicts:
    arch/arm/boot/dts/imx6ul.dtsi
    arch/arm/mm/mmu.c
    arch/arm64/boot/dts/freescale/imx8mp-evk.dts
    drivers/gpu/drm/imx/dcss/dcss-kms.c
    drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
    drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.h
    drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
    drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
    drivers/soc/fsl/Kconfig
    drivers/soc/imx/gpcv2.c
    drivers/usb/dwc3/host.c
    net/dsa/slave.c
    sound/soc/fsl/imx-card.c

    Jason Liu
     

17 Aug, 2022

1 commit

  • [ Upstream commit 2d16803c562ecc644803d42ba98a8e0aef9c014e ]

    BLAKE2s has no currently known use as a shash. Just remove all of this
    unnecessary plumbing. Removing this shash was something we talked about
    back when we were making BLAKE2s a built-in, but I simply never got
    around to doing it. So this completes that project.

    Importantly, this fixes a bug in which the lib code depends on
    crypto_simd_disabled_for_test, causing linker errors.

    Also add more alignment tests to the selftests and compare SIMD and
    non-SIMD compression functions, to make up for what we lose from
    testmgr.c.

    Reported-by: gaochao
    Cc: Eric Biggers
    Cc: Ard Biesheuvel
    Cc: stable@vger.kernel.org
    Fixes: 6048fdcc5f26 ("lib/crypto: blake2s: include as built-in")
    Signed-off-by: Jason A. Donenfeld
    Signed-off-by: Herbert Xu
    Signed-off-by: Sasha Levin

    Jason A. Donenfeld
     

19 Nov, 2021

1 commit

  • [ Upstream commit 3ae88f676aa63366ffa9eebb8ae787c7e19f0c57 ]

    Commit ad6d66bcac77e ("crypto: tcrypt - include 1420 byte blocks in aead and skcipher benchmarks")
    mentions:
    > power-of-2 block size. So let's add 1420 bytes explicitly, and round
    > it up to the next blocksize multiple of the algo in question if it
    > does not support 1420 byte blocks.
    but misses updating skcipher multi-buffer tests.

    Fix this by using the proper (rounded) input size.

    Fixes: ad6d66bcac77e ("crypto: tcrypt - include 1420 byte blocks in aead and skcipher benchmarks")
    Signed-off-by: Horia Geantă
    Signed-off-by: Herbert Xu
    Signed-off-by: Sasha Levin

    Horia Geantă
     

02 Nov, 2021

5 commits

  • - aes-128-cbc-hmac-sha256
    - aes-256-cbc-hmac-sha256

    Signed-off-by: Pankaj Gupta

    Pankaj Gupta
     
  • Commit ad6d66bcac77e ("crypto: tcrypt - include 1420 byte blocks in aead and skcipher benchmarks")
    mentions:
    > power-of-2 block size. So let's add 1420 bytes explicitly, and round
    > it up to the next blocksize multiple of the algo in question if it
    > does not support 1420 byte blocks.
    but misses updating skcipher multi-buffer tests.

    Fix this by using the proper (rounded) input size.

    Fixes: ad6d66bcac77e ("crypto: tcrypt - include 1420 byte blocks in aead and skcipher benchmarks")
    Signed-off-by: Horia Geantă
    Reviewed-by: Gaurav Jain

    Horia Geantă
     
  • This is a temporary workaround for the case when:
    - SWIOTLB is used for DMA bounce buffering AND
    - data to be DMA-ed is mapped DMA_FROM_DEVICE and the device only
    partially overwrites the "original" data AND
    - it's expected that the "original" data not overwritten by the
    device is left untouched

    As discussed upstream, the proper fix should be:
    - either an extension of the DMA API OR
    - a workaround in the device driver (considering these cases are
    rarely met in practice)

    Since both alternatives are non-trivial (to say the least), add a
    workaround for the few cases matching the error conditions listed
    above.

    Link: https://lore.kernel.org/lkml/VI1PR0402MB348537CB86926B3E6D1DBE0A98070@VI1PR0402MB3485.eurprd04.prod.outlook.com/
    Link: https://lore.kernel.org/lkml/20190522072018.10660-1-horia.geanta@nxp.com/
    Signed-off-by: Horia Geantă
    Reviewed-by: Valentin Ciocoi Radulescu

    Horia Geantă
     
  • Signed-off-by: Radu Alexe
    Signed-off-by: Tudor Ambarus

    Radu Alexe
     
  • This patch adds kernel support for encryption/decryption of TLS 1.0
    records using block ciphers. Implementation is similar to authenc in the
    sense that the base algorithms (AES, SHA1) are combined in a template to
    produce TLS encapsulation frames. The composite algorithm will be called
    "tls10(hmac(),cbc())". The cipher and hmac keys are
    wrapped in the same format used by authenc.c.

    Signed-off-by: Radu Alexe
    Signed-off-by: Cristian Stoica
    Signed-off-by: Horia Geantă

    Merged LF commit (rebase-20200703/crypto/core):
    856fb52acc28 ("crypto: tls - fix logical-not-parentheses compile warning")

    Merged LF commit (next-nxp-20200811):
    0f90a0618247 ("crypto: tls: fix build issue")

    Signed-off-by: Horia Geantă

    Radu Alexe
     

21 Aug, 2021

2 commits

  • tcrypt supports GCM/CCM mode, CMAC, CBCMAC, and speed test of
    SM4 algorithm.

    Signed-off-by: Tianjia Zhang
    Signed-off-by: Herbert Xu

    Tianjia Zhang
     
    There are several places where the return value checks of
    crypto_aead_setkey() and crypto_aead_setauthsize() were lost. It is
    necessary to add these checks.

    At the same time, move the crypto_aead_setauthsize() call out of the
    loop, so it only needs to be called once after the transform is loaded.

    Fixes: 53f52d7aecb4 ("crypto: tcrypt - Added speed tests for AEAD crypto alogrithms in tcrypt test suite")
    Signed-off-by: Tianjia Zhang
    Reviewed-by: Vitaly Chikunov
    Signed-off-by: Herbert Xu

    Tianjia Zhang
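
    The fix pattern above can be sketched in plain userspace C (illustrative
    names only, not the actual kernel crypto API): check every setup call's
    return value, and hoist the one-time setauthsize-style call out of the
    per-buffer loop.

    ```c
    #include <stdio.h>

    /* Stand-ins for crypto_aead_setkey()/crypto_aead_setauthsize():
     * return 0 on success, negative on error. */
    static int set_key(int k)      { return k > 0 ? 0 : -1; }
    static int set_authsize(int s) { return s > 0 ? 0 : -1; }

    static int run_speed_test(int key, int authsize, int nbufs)
    {
        int ret, i;

        ret = set_key(key);
        if (ret)                      /* previously the result was discarded */
            return ret;

        ret = set_authsize(authsize); /* once, before the loop, not inside it */
        if (ret)
            return ret;

        for (i = 0; i < nbufs; i++)
            ;                         /* per-buffer encrypt/decrypt goes here */
        return 0;
    }

    int main(void)
    {
        printf("%d %d\n", run_speed_test(1, 16, 4), run_speed_test(0, 16, 4));
        return 0;
    }
    ```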
     

30 Jul, 2021

1 commit

  • tcrypt supports testing of SM4 cipher algorithms that use avx
    instruction set acceleration. The implementation of sm4 instruction
    set acceleration supports up to 8 blocks in parallel encryption and
    decryption, which is 128 bytes. Therefore, the 128-byte block size
    is also added to block_sizes.

    Signed-off-by: Tianjia Zhang
    Signed-off-by: Herbert Xu

    Tianjia Zhang
     

28 May, 2021

1 commit


10 Feb, 2021

1 commit

    It is not trivial to trace back why exactly the tnepres variant of
    serpent was added ~17 years ago - Google searches come up mostly empty,
    but it seems to be related to the 'kerneli' version, which was based
    on an incorrect interpretation of the serpent spec.

    In other words, nobody is likely to care anymore today, so let's get rid
    of it.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     

29 Jan, 2021

5 commits


03 Jan, 2021

1 commit

  • The signed long type used for printing the number of bytes processed in
    tcrypt benchmarks limits the range to -/+ 2 GiB, which is not sufficient
    to cover the performance of common accelerated ciphers such as AES-NI
    when benchmarked with sec=1. So switch to u64 instead.

    While at it, fix up a missing printk->pr_cont conversion in the AEAD
    benchmark.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
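
    The overflow in question can be demonstrated in a few lines of userspace
    C (the ~3 GB/s rate is a hypothetical figure for illustration): at
    AES-NI speeds, a sec=1 run processes more bytes than a 32-bit signed
    counter can represent.

    ```c
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical accelerated-cipher rate of ~3 GB/s: one sec=1 run
         * exceeds the +2 GiB limit of a 32-bit signed long. */
        uint64_t bytes = 3ULL * 1000 * 1000 * 1000;
        int32_t narrow = (int32_t)bytes;  /* what a 32-bit signed long holds */

        printf("u64: %llu bytes\n", (unsigned long long)bytes);
        printf("s32: %d bytes (wrapped)\n", narrow);
        return 0;
    }
    ```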
     

27 Nov, 2020

2 commits

  • WireGuard and IPsec both typically operate on input blocks that are
    ~1420 bytes in size, given the default Ethernet MTU of 1500 bytes and
    the overhead of the VPN metadata.

    Many aead and skcipher implementations are optimized for power-of-2
    block sizes, and whether they perform well when operating on 1420
    byte blocks cannot be easily extrapolated from the performance on
    power-of-2 block size. So let's add 1420 bytes explicitly, and round
    it up to the next blocksize multiple of the algo in question if it
    does not support 1420 byte blocks.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
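
    The round-up described above can be sketched as a one-line helper
    (a userspace illustration of the arithmetic, not the kernel's own
    round_up() macro):

    ```c
    #include <stdio.h>

    /* Round len up to the next multiple of the algorithm's block size,
     * mirroring what tcrypt does for the 1420-byte test length. */
    static unsigned int round_up_to_block(unsigned int len, unsigned int bs)
    {
        return ((len + bs - 1) / bs) * bs;
    }

    int main(void)
    {
        printf("%u\n", round_up_to_block(1420, 16)); /* e.g. AES block size */
        printf("%u\n", round_up_to_block(1420, 1));  /* stream cipher */
        return 0;
    }
    ```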
     
  • Commit c4741b2305979 ("crypto: run initcalls for generic implementations
    earlier") converted tcrypt.ko's module_init() to subsys_initcall(), but
    this was unintentional: tcrypt.ko currently cannot be built into the core
    kernel, and so the subsys_initcall() gets converted into module_init()
    under the hood. Given that tcrypt.ko does not implement a generic version
    of a crypto algorithm that has to be available early during boot, there
    is no point in running the tcrypt init code earlier than implied by
    module_init().

    However, for crypto development purposes, we will lift the restriction
    that tcrypt.ko must be built as a module, and when builtin, it makes sense
    for tcrypt.ko (which does its work inside the module init function) to run
    as late as possible. So let's switch to late_initcall() instead.

    Signed-off-by: Ard Biesheuvel
    Reviewed-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     

13 Oct, 2020

1 commit

  • Pull crypto updates from Herbert Xu:
    "API:
    - Allow DRBG testing through user-space af_alg
    - Add tcrypt speed testing support for keyed hashes
    - Add type-safe init/exit hooks for ahash

    Algorithms:
    - Mark arc4 as obsolete and pending for future removal
    - Mark anubis, khazad, seed and tea as obsolete
    - Improve boot-time xor benchmark
    - Add OSCCA SM2 asymmetric cipher algorithm and use it for integrity

    Drivers:
    - Fixes and enhancement for XTS in caam
    - Add support for XIP8001B hwrng in xiphera-trng
    - Add RNG and hash support in sun8i-ce/sun8i-ss
    - Allow imx-rngc to be used by kernel entropy pool
    - Use crypto engine in omap-sham
    - Add support for Ingenic X1830 with ingenic"

    * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (205 commits)
    X.509: Fix modular build of public_key_sm2
    crypto: xor - Remove unused variable count in do_xor_speed
    X.509: fix error return value on the failed path
    crypto: bcm - Verify GCM/CCM key length in setkey
    crypto: qat - drop input parameter from adf_enable_aer()
    crypto: qat - fix function parameters descriptions
    crypto: atmel-tdes - use semicolons rather than commas to separate statements
    crypto: drivers - use semicolons rather than commas to separate statements
    hwrng: mxc-rnga - use semicolons rather than commas to separate statements
    hwrng: iproc-rng200 - use semicolons rather than commas to separate statements
    hwrng: stm32 - use semicolons rather than commas to separate statements
    crypto: xor - use ktime for template benchmarking
    crypto: xor - defer load time benchmark to a later time
    crypto: hisilicon/zip - fix the uninitalized 'curr_qm_qp_num'
    crypto: hisilicon/zip - fix the return value when device is busy
    crypto: hisilicon/zip - fix zero length input in GZIP decompress
    crypto: hisilicon/zip - fix the uncleared debug registers
    lib/mpi: Fix unused variable warnings
    crypto: x86/poly1305 - Remove assignments with no effect
    hwrng: npcm - modify readl to readb
    ...

    Linus Torvalds
     

24 Aug, 2020

1 commit

    Replace the existing /* fall through */ comments and their variants
    with the new pseudo-keyword macro fallthrough[1]. Also, remove
    fall-through markings where they are unnecessary.

    [1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through

    Signed-off-by: Gustavo A. R. Silva

    Gustavo A. R. Silva
     

21 Aug, 2020

2 commits


13 Feb, 2020

1 commit

  • When running tcrypt skcipher speed tests, logs contain things like:
    testing speed of async ecb(des3_ede) (ecb(des3_ede-generic)) encryption
    or:
    testing speed of async ecb(aes) (ecb(aes-ce)) encryption

    The algorithm implementations are sync, not async.
    Fix this inaccuracy.

    Fixes: 7166e589da5b6 ("crypto: tcrypt - Use skcipher")
    Signed-off-by: Horia Geantă
    Signed-off-by: Herbert Xu

    Horia Geantă
     

17 Nov, 2019

1 commit


30 Aug, 2019

1 commit


26 Jul, 2019

1 commit


31 May, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 3029 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

18 Apr, 2019

1 commit

  • Use subsys_initcall for registration of all templates and generic
    algorithm implementations, rather than module_init. Then change
    cryptomgr to use arch_initcall, to place it before the subsys_initcalls.

    This is needed so that when both a generic and optimized implementation
    of an algorithm are built into the kernel (not loadable modules), the
    generic implementation is registered before the optimized one.
    Otherwise, the self-tests for the optimized implementation are unable to
    allocate the generic implementation for the new comparison fuzz tests.

    Note that on arm, a side effect of this change is that self-tests for
    generic implementations may run before the unaligned access handler has
    been installed. So, unaligned accesses will crash the kernel. This is
    arguably a good thing as it makes it easier to detect that type of bug.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
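
    Why registration order matters can be sketched with a tiny userspace
    simulation (plain C, not kernel code; the algorithm names are
    illustrative): the optimized implementation's self-test compares against
    the generic one, so the generic must already be registered.

    ```c
    #include <stdio.h>
    #include <string.h>

    #define MAX_ALGS 8
    static const char *registry[MAX_ALGS];
    static int nregistered;

    static void register_alg(const char *name)
    {
        if (nregistered < MAX_ALGS)
            registry[nregistered++] = name;
    }

    /* The optimized implementation's self-test needs the generic
     * reference implementation to already be in the registry. */
    static int selftest_against_generic(void)
    {
        int i;

        for (i = 0; i < nregistered; i++)
            if (strcmp(registry[i], "aes-generic") == 0)
                return 0;   /* reference found: comparison test can run */
        return -1;          /* generic missing: self-test fails */
    }

    int main(void)
    {
        register_alg("aes-generic");   /* subsys_initcall: runs first */
        register_alg("aes-optimized"); /* later initcall level */
        printf("%d\n", selftest_against_generic());
        return 0;
    }
    ```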
     

08 Mar, 2019

1 commit

  • To prevent any issues with persistent data, separate lzo-rle from lzo so
    that it is treated as a separate algorithm, and lzo is still available.

    Link: http://lkml.kernel.org/r/20190205155944.16007-3-dave.rodgman@arm.com
    Signed-off-by: Dave Rodgman
    Cc: David S. Miller
    Cc: Greg Kroah-Hartman
    Cc: Herbert Xu
    Cc: Markus F.X.J. Oberhumer
    Cc: Matt Sealey
    Cc: Minchan Kim
    Cc: Nitin Gupta
    Cc: Richard Purdie
    Cc: Sergey Senozhatsky
    Cc: Sonny Rao
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dave Rodgman
     

13 Dec, 2018

1 commit


20 Nov, 2018

1 commit

  • Add support for the Adiantum encryption mode. Adiantum was designed by
    Paul Crowley and is specified by our paper:

    Adiantum: length-preserving encryption for entry-level processors
    (https://eprint.iacr.org/2018/720.pdf)

    See our paper for full details; this patch only provides an overview.

    Adiantum is a tweakable, length-preserving encryption mode designed for
    fast and secure disk encryption, especially on CPUs without dedicated
    crypto instructions. Adiantum encrypts each sector using the XChaCha12
    stream cipher, two passes of an ε-almost-∆-universal (εA∆U) hash
    function, and an invocation of the AES-256 block cipher on a single
    16-byte block. On CPUs without AES instructions, Adiantum is much
    faster than AES-XTS; for example, on ARM Cortex-A7, on 4096-byte sectors
    Adiantum encryption is about 4 times faster than AES-256-XTS encryption,
    and decryption about 5 times faster.

    Adiantum is a specialization of the more general HBSH construction. Our
    earlier proposal, HPolyC, was also an HBSH specialization, but it used a
    different εA∆U hash function, one based on Poly1305 only. Adiantum's
    εA∆U hash function, which is based primarily on the "NH" hash function
    like that used in UMAC (RFC4418), is about twice as fast as HPolyC's;
    consequently, Adiantum is about 20% faster than HPolyC.

    This speed comes with no loss of security: Adiantum is provably just as
    secure as HPolyC, in fact slightly *more* secure. Like HPolyC,
    Adiantum's security is reducible to that of XChaCha12 and AES-256,
    subject to a security bound. XChaCha12 itself has a security reduction
    to ChaCha12. Therefore, one need not "trust" Adiantum; one need only
    trust ChaCha12 and AES-256. Note that the εA∆U hash function is only
    used for its proven combinatorial properties so cannot be "broken".

    Adiantum is also a true wide-block encryption mode, so flipping any
    plaintext bit in the sector scrambles the entire ciphertext, and vice
    versa. No other such mode is available in the kernel currently; doing
    the same with XTS scrambles only 16 bytes. Adiantum also supports
    arbitrary-length tweaks and naturally supports any length input >= 16
    bytes without needing "ciphertext stealing".

    For the stream cipher, Adiantum uses XChaCha12 rather than XChaCha20 in
    order to make encryption feasible on the widest range of devices.
    Although the 20-round variant is quite popular, the best known attacks
    on ChaCha are on only 7 rounds, so ChaCha12 still has a substantial
    security margin; in fact, larger than AES-256's. 12-round Salsa20 is
    also the eSTREAM recommendation. For the block cipher, Adiantum uses
    AES-256, despite it having a lower security margin than XChaCha12 and
    needing table lookups, due to AES's extensive adoption and analysis
    making it the obvious first choice. Nevertheless, for flexibility this
    patch also permits the "adiantum" template to be instantiated with
    XChaCha20 and/or with an alternate block cipher.

    We need Adiantum support in the kernel for use in dm-crypt and fscrypt,
    where currently the only other suitable options are block cipher modes
    such as AES-XTS. A big problem with this is that many low-end mobile
    devices (e.g. Android Go phones sold primarily in developing countries,
    as well as some smartwatches) still have CPUs that lack AES
    instructions, e.g. ARM Cortex-A7. Sadly, AES-XTS encryption is much too
    slow to be viable on these devices. We did find that some "lightweight"
    block ciphers are fast enough, but these suffer from problems such as
    not having much cryptanalysis or being too controversial.

    The ChaCha stream cipher has excellent performance but is insecure to
    use directly for disk encryption, since each sector's IV is reused each
    time it is overwritten. Even restricting the threat model to offline
    attacks only isn't enough, since modern flash storage devices don't
    guarantee that "overwrites" are really overwrites, due to wear-leveling.
    Adiantum avoids this problem by constructing a
    "tweakable super-pseudorandom permutation"; this is the strongest
    possible security model for length-preserving encryption.

    Of course, storing random nonces along with the ciphertext would be the
    ideal solution. But doing that with existing hardware and filesystems
    runs into major practical problems; in most cases it would require data
    journaling (like dm-integrity) which severely degrades performance.
    Thus, for now length-preserving encryption is still needed.

    Signed-off-by: Eric Biggers
    Reviewed-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Eric Biggers
     

16 Nov, 2018

1 commit

  • Add testmgr and tcrypt tests and vectors for Streebog hash function
    from RFC 6986 and GOST R 34.11-2012, for HMAC-Streebog vectors are
    from RFC 7836 and R 50.1.113-2016.

    Cc: linux-integrity@vger.kernel.org
    Signed-off-by: Vitaly Chikunov
    Acked-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Vitaly Chikunov
     

09 Nov, 2018

1 commit


28 Sep, 2018

3 commits


21 Sep, 2018

1 commit

  • ghash is a keyed hash algorithm, thus setkey needs to be called.
    Otherwise the following error occurs:
    $ modprobe tcrypt mode=318 sec=1
    testing speed of async ghash-generic (ghash-generic)
    tcrypt: test 0 ( 16 byte blocks, 16 bytes per update, 1 updates):
    tcrypt: hashing failed ret=-126

    Cc: # 4.6+
    Fixes: 0660511c0bee ("crypto: tcrypt - Use ahash")
    Tested-by: Franck Lenormand
    Signed-off-by: Horia Geantă
    Acked-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Horia Geantă
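
    The failure mode can be sketched generically in userspace C (illustrative
    structure, not the kernel crypto API): a keyed hash must have its key set
    before digesting, which is why the ghash speed test failed with ret=-126
    (-ENOKEY) until the setkey call was added.

    ```c
    #include <stdio.h>
    #include <string.h>

    struct keyed_hash {
        int keyed;
        unsigned char key[16];
    };

    static int hash_setkey(struct keyed_hash *h, const void *key, size_t len)
    {
        if (len > sizeof(h->key))
            return -1;
        memcpy(h->key, key, len);
        h->keyed = 1;
        return 0;
    }

    static int hash_digest(struct keyed_hash *h, const void *msg, size_t len)
    {
        if (!h->keyed)
            return -126;    /* mirrors the -ENOKEY-style failure in the log */
        (void)msg; (void)len;
        return 0;           /* actual hashing would happen here */
    }

    int main(void)
    {
        struct keyed_hash h = { 0 };
        unsigned char key[16] = { 0 };

        printf("%d\n", hash_digest(&h, "abc", 3));  /* fails: no key set */
        hash_setkey(&h, key, sizeof(key));
        printf("%d\n", hash_digest(&h, "abc", 3));  /* succeeds */
        return 0;
    }
    ```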