23 Jul, 2020

1 commit

  • Rationale:
    Reduces the attack surface for MITM attacks against kernel developers
    opening the links, since HTTPS traffic is much harder to manipulate.

    Deterministic algorithm:
    For each file:
      If not .svg:
        For each line:
          If doesn't contain `\bxmlns\b`:
            For each link, `\bhttp://[^# \t\r\n]*(?:\w|/)`:
              If neither `\bgnu\.org/license`, nor `\bmozilla\.org/MPL\b`:
                If both the HTTP and HTTPS versions
                return 200 OK and serve the same content:
                  Replace HTTP with HTTPS.

    Signed-off-by: Alexander A. Klimov
    Signed-off-by: Herbert Xu

    Alexander A. Klimov
     

16 Jul, 2020

2 commits

  • Overly-generic names can cause problems like naming collisions,
    confusing crash reports, and reduced grep-ability. E.g. see
    commit d099ea6e6fde ("crypto - Avoid free() namespace collision").

    Clean this up for the lrw template by prefixing the names with "lrw_".

    (I didn't use "crypto_lrw_" instead because that seems overkill.)

    Also constify the tfm context in a couple places.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • The flag CRYPTO_ALG_ASYNC is "inherited" in the sense that when a
    template is instantiated, the template will have CRYPTO_ALG_ASYNC set if
    any of the algorithms it uses has CRYPTO_ALG_ASYNC set.

    We'd like to add a second flag (CRYPTO_ALG_ALLOCATES_MEMORY) that gets
    "inherited" in the same way. This is difficult because the handling of
    CRYPTO_ALG_ASYNC is hardcoded everywhere. Address this by:

    - Add CRYPTO_ALG_INHERITED_FLAGS, which contains the set of flags that
      have these inheritance semantics.

    - Add crypto_algt_inherited_mask(), for use by template ->create()
      methods. It returns any of these flags that the user asked to be
      unset and thus must be passed in the 'mask' to crypto_grab_*().

    - Also modify crypto_check_attr_type() to handle computing the 'mask'
      so that most templates can just use this.

    - Make crypto_grab_*() propagate these flags to the template instance
      being created so that templates don't have to do this themselves.

    Make crypto/simd.c propagate these flags too, since it "wraps" another
    algorithm, similar to a template.

    Based on a patch by Mikulas Patocka
    (https://lore.kernel.org/r/alpine.LRH.2.02.2006301414580.30526@file01.intranet.prod.int.rdu2.redhat.com).

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

08 May, 2020

1 commit

  • gcc-10 complains about using the name of a standard library
    function in the kernel, as we are not building with -ffreestanding:

    crypto/xts.c:325:13: error: conflicting types for built-in function 'free'; expected 'void(void *)' [-Werror=builtin-declaration-mismatch]
      325 | static void free(struct skcipher_instance *inst)
          |             ^~~~
    crypto/lrw.c:290:13: error: conflicting types for built-in function 'free'; expected 'void(void *)' [-Werror=builtin-declaration-mismatch]
      290 | static void free(struct skcipher_instance *inst)
          |             ^~~~
    crypto/lrw.c:27:1: note: 'free' is declared in header '<stdlib.h>'

    The xts and lrw cipher implementations run into this because they do
    not use the conventional namespaced function names.

    It might be better to rename all local functions in those files to
    help with things like 'ctags' and 'grep', but just renaming these two
    avoids the build issue. I picked the more verbose crypto_xts_free()
    and crypto_lrw_free() names for consistency with several other drivers
    that do use namespaced function names.

    Fixes: f1c131b45410 ("crypto: xts - Convert to skcipher")
    Fixes: 700cb3f5fe75 ("crypto: lrw - Convert to skcipher")
    Signed-off-by: Arnd Bergmann
    Signed-off-by: Herbert Xu

    Arnd Bergmann
     

06 Mar, 2020

1 commit


09 Jan, 2020

2 commits

  • Initializing a crypto_skcipher_spawn currently requires:

    1. Set spawn->base.inst to point to the instance.
    2. Call crypto_grab_skcipher().

    But there's no reason for these steps to be separate, and in fact this
    unneeded complication has caused at least one bug, the one fixed by
    commit 6db43410179b ("crypto: adiantum - initialize crypto_spawn::inst").

    So just make crypto_grab_skcipher() take the instance as an argument.

    To keep the function calls from getting too unwieldy due to this extra
    argument, also introduce a 'mask' variable into the affected places
    which weren't already using one.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • The CRYPTO_TFM_RES_* flags were apparently meant as a way to make the
    ->setkey() functions provide more information about errors. But these
    flags weren't actually being used or tested, and in many cases they
    weren't being set correctly anyway. So they've now been removed.

    Also, if someone ever actually needs to start better distinguishing
    ->setkey() errors (which is somewhat unlikely, as this has been unneeded
    for a long time), we'd be much better off just defining different return
    values, like -EINVAL if the key is invalid for the algorithm vs.
    -EKEYREJECTED if the key was rejected by a policy like "no weak keys".
    That would be much simpler, less error-prone, and easier to test.

    So just remove CRYPTO_TFM_RES_MASK and all the unneeded logic that
    propagates these flags around.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

09 Jul, 2019

1 commit

  • Pull crypto updates from Herbert Xu:
    "Here is the crypto update for 5.3:

    API:
    - Test shash interface directly in testmgr
    - cra_driver_name is now mandatory

    Algorithms:
    - Replace arc4 crypto_cipher with library helper
    - Implement 5 way interleave for ECB, CBC and CTR on arm64
    - Add xxhash
    - Add continuous self-test on noise source to drbg
    - Update jitter RNG

    Drivers:
    - Add support for SHA204A random number generator
    - Add support for 7211 in iproc-rng200
    - Fix fuzz test failures in inside-secure
    - Fix fuzz test failures in talitos
    - Fix fuzz test failures in qat"

    * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (143 commits)
    crypto: stm32/hash - remove interruptible condition for dma
    crypto: stm32/hash - Fix hmac issue more than 256 bytes
    crypto: stm32/crc32 - rename driver file
    crypto: amcc - remove memset after dma_alloc_coherent
    crypto: ccp - Switch to SPDX license identifiers
    crypto: ccp - Validate the the error value used to index error messages
    crypto: doc - Fix formatting of new crypto engine content
    crypto: doc - Add parameter documentation
    crypto: arm64/aes-ce - implement 5 way interleave for ECB, CBC and CTR
    crypto: arm64/aes-ce - add 5 way interleave routines
    crypto: talitos - drop icv_ool
    crypto: talitos - fix hash on SEC1.
    crypto: talitos - move struct talitos_edesc into talitos.h
    lib/scatterlist: Fix mapping iterator when sg->offset is greater than PAGE_SIZE
    crypto/NX: Set receive window credits to max number of CRBs in RxFIFO
    crypto: asymmetric_keys - select CRYPTO_HASH where needed
    crypto: serpent - mark __serpent_setkey_sbox noinline
    crypto: testmgr - dynamically allocate crypto_shash
    crypto: testmgr - dynamically allocate testvec_config
    crypto: talitos - eliminate unneeded 'done' functions at build time
    ...

    Linus Torvalds
     

06 Jun, 2019

1 commit

  • Commit c778f96bf347 ("crypto: lrw - Optimize tweak computation")
    incorrectly reduced the alignmask of LRW instances from
    '__alignof__(u64) - 1' to '__alignof__(__be32) - 1'.

    However, xor_tweak() and setkey() assume that the data and key,
    respectively, are aligned to 'be128', which has u64 alignment.

    Fix this by making the alignmask at least '__alignof__(be128) - 1'.

    Fixes: c778f96bf347 ("crypto: lrw - Optimize tweak computation")
    Cc: stable@vger.kernel.org # v4.20+
    Cc: Ondrej Mosnacek
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

31 May, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 3029 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

07 May, 2019

1 commit

  • Pull crypto update from Herbert Xu:
    "API:
    - Add support for AEAD in simd
    - Add fuzz testing to testmgr
    - Add panic_on_fail module parameter to testmgr
    - Use per-CPU struct instead of multiple variables in scompress
    - Change verify API for akcipher

    Algorithms:
    - Convert x86 AEAD algorithms over to simd
    - Forbid 2-key 3DES in FIPS mode
    - Add EC-RDSA (GOST 34.10) algorithm

    Drivers:
    - Set output IV with ctr-aes in crypto4xx
    - Set output IV in rockchip
    - Fix potential length overflow with hashing in sun4i-ss
    - Fix computation error with ctr in vmx
    - Add SM4 protected keys support in ccree
    - Remove long-broken mxc-scc driver
    - Add rfc4106(gcm(aes)) cipher support in cavium/nitrox"

    * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (179 commits)
    crypto: ccree - use a proper le32 type for le32 val
    crypto: ccree - remove set but not used variable 'du_size'
    crypto: ccree - Make cc_sec_disable static
    crypto: ccree - fix spelling mistake "protedcted" -> "protected"
    crypto: caam/qi2 - generate hash keys in-place
    crypto: caam/qi2 - fix DMA mapping of stack memory
    crypto: caam/qi2 - fix zero-length buffer DMA mapping
    crypto: stm32/cryp - update to return iv_out
    crypto: stm32/cryp - remove request mutex protection
    crypto: stm32/cryp - add weak key check for DES
    crypto: atmel - remove set but not used variable 'alg_name'
    crypto: picoxcell - Use dev_get_drvdata()
    crypto: crypto4xx - get rid of redundant using_sd variable
    crypto: crypto4xx - use sync skcipher for fallback
    crypto: crypto4xx - fix cfb and ofb "overran dst buffer" issues
    crypto: crypto4xx - fix ctr-aes missing output IV
    crypto: ecrdsa - select ASN1 and OID_REGISTRY for EC-RDSA
    crypto: ux500 - use ccflags-y instead of CFLAGS_.o
    crypto: ccree - handle tee fips error during power management resume
    crypto: ccree - add function to handle cryptocell tee fips error
    ...

    Linus Torvalds
     

18 Apr, 2019

3 commits

  • Use subsys_initcall for registration of all templates and generic
    algorithm implementations, rather than module_init. Then change
    cryptomgr to use arch_initcall, to place it before the subsys_initcalls.

    This is needed so that when both a generic and optimized implementation
    of an algorithm are built into the kernel (not loadable modules), the
    generic implementation is registered before the optimized one.
    Otherwise, the self-tests for the optimized implementation are unable to
    allocate the generic implementation for the new comparison fuzz tests.

    Note that on arm, a side effect of this change is that self-tests for
    generic implementations may run before the unaligned access handler has
    been installed. So, unaligned accesses will crash the kernel. This is
    arguably a good thing as it makes it easier to detect that type of bug.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • If the user-provided IV needs to be aligned to the algorithm's
    alignmask, then skcipher_walk_virt() copies the IV into a new aligned
    buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then
    if the caller unconditionally accesses walk.iv, it's a use-after-free.

    Fix this in the LRW template by checking the return value of
    skcipher_walk_virt().

    This bug was detected by my patches that improve testmgr to fuzz
    algorithms against their generic implementation. When the extra
    self-tests were run on a KASAN-enabled kernel, a KASAN use-after-free
    splat occurred during lrw(aes) testing.

    Fixes: c778f96bf347 ("crypto: lrw - Optimize tweak computation")
    Cc: stable@vger.kernel.org # v4.20+
    Cc: Ondrej Mosnacek
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • When we perform a walk in the completion function, we need to ensure
    that it is atomic.

    Fixes: ac3c8f36c31d ("crypto: lrw - Do not use auxiliary buffer")
    Cc: stable@vger.kernel.org
    Signed-off-by: Herbert Xu
    Acked-by: Ondrej Mosnacek
    Signed-off-by: Herbert Xu

    Herbert Xu
     

05 Oct, 2018

1 commit

  • Due to an unfortunate interaction between commit fbe1a850b3b1
    ("crypto: lrw - Fix out-of bounds access on counter overflow") and
    commit c778f96bf347 ("crypto: lrw - Optimize tweak computation"),
    we ended up with a version of next_index() that always returns 127.

    Fixes: c778f96bf347 ("crypto: lrw - Optimize tweak computation")
    Signed-off-by: Ard Biesheuvel
    Reviewed-by: Ondrej Mosnacek
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     

21 Sep, 2018

3 commits

  • This patch simplifies the LRW template to recompute the LRW tweaks from
    scratch in the second pass and thus also removes the need to allocate a
    dynamic buffer using kmalloc().

    As discussed at [1], the use of kmalloc causes deadlocks with dm-crypt.

    PERFORMANCE MEASUREMENTS (x86_64)
    Performed using: https://gitlab.com/omos/linux-crypto-bench
    Crypto driver used: lrw(ecb-aes-aesni)

    The results show that the new code has about the same performance as the
    old code. For 512-byte message it seems to be even slightly faster, but
    that might be just noise.

    Before:
    ALGORITHM KEY (b) DATA (B) TIME ENC (ns) TIME DEC (ns)
    lrw(aes) 256 64 200 203
    lrw(aes) 320 64 202 204
    lrw(aes) 384 64 204 205
    lrw(aes) 256 512 415 415
    lrw(aes) 320 512 432 440
    lrw(aes) 384 512 449 451
    lrw(aes) 256 4096 1838 1995
    lrw(aes) 320 4096 2123 1980
    lrw(aes) 384 4096 2100 2119
    lrw(aes) 256 16384 7183 6954
    lrw(aes) 320 16384 7844 7631
    lrw(aes) 384 16384 8256 8126
    lrw(aes) 256 32768 14772 14484
    lrw(aes) 320 32768 15281 15431
    lrw(aes) 384 32768 16469 16293

    After:
    ALGORITHM KEY (b) DATA (B) TIME ENC (ns) TIME DEC (ns)
    lrw(aes) 256 64 197 196
    lrw(aes) 320 64 200 197
    lrw(aes) 384 64 203 199
    lrw(aes) 256 512 385 380
    lrw(aes) 320 512 401 395
    lrw(aes) 384 512 415 415
    lrw(aes) 256 4096 1869 1846
    lrw(aes) 320 4096 2080 1981
    lrw(aes) 384 4096 2160 2109
    lrw(aes) 256 16384 7077 7127
    lrw(aes) 320 16384 7807 7766
    lrw(aes) 384 16384 8108 8357
    lrw(aes) 256 32768 14111 14454
    lrw(aes) 320 32768 15268 15082
    lrw(aes) 384 32768 16581 16250

    [1] https://lkml.org/lkml/2018/8/23/1315

    Signed-off-by: Ondrej Mosnacek
    Signed-off-by: Herbert Xu

    Ondrej Mosnacek
     
  • This patch rewrites the tweak computation to a slightly simpler method
    that performs fewer bswaps. Based on performance measurements, the new
    code seems to provide slightly better performance than the old one.

    PERFORMANCE MEASUREMENTS (x86_64)
    Performed using: https://gitlab.com/omos/linux-crypto-bench
    Crypto driver used: lrw(ecb-aes-aesni)

    Before:
    ALGORITHM KEY (b) DATA (B) TIME ENC (ns) TIME DEC (ns)
    lrw(aes) 256 64 204 286
    lrw(aes) 320 64 227 203
    lrw(aes) 384 64 208 204
    lrw(aes) 256 512 441 439
    lrw(aes) 320 512 456 455
    lrw(aes) 384 512 469 483
    lrw(aes) 256 4096 2136 2190
    lrw(aes) 320 4096 2161 2213
    lrw(aes) 384 4096 2295 2369
    lrw(aes) 256 16384 7692 7868
    lrw(aes) 320 16384 8230 8691
    lrw(aes) 384 16384 8971 8813
    lrw(aes) 256 32768 15336 15560
    lrw(aes) 320 32768 16410 16346
    lrw(aes) 384 32768 18023 17465

    After:
    ALGORITHM KEY (b) DATA (B) TIME ENC (ns) TIME DEC (ns)
    lrw(aes) 256 64 200 203
    lrw(aes) 320 64 202 204
    lrw(aes) 384 64 204 205
    lrw(aes) 256 512 415 415
    lrw(aes) 320 512 432 440
    lrw(aes) 384 512 449 451
    lrw(aes) 256 4096 1838 1995
    lrw(aes) 320 4096 2123 1980
    lrw(aes) 384 4096 2100 2119
    lrw(aes) 256 16384 7183 6954
    lrw(aes) 320 16384 7844 7631
    lrw(aes) 384 16384 8256 8126
    lrw(aes) 256 32768 14772 14484
    lrw(aes) 320 32768 15281 15431
    lrw(aes) 384 32768 16469 16293

    Signed-off-by: Ondrej Mosnacek
    Signed-off-by: Herbert Xu

    Ondrej Mosnacek
     
  • When the LRW block counter overflows, the current implementation returns
    128 as the index to the precomputed multiplication table, which has 128
    entries. This patch fixes it to return the correct value (127).

    Fixes: 64470f1b8510 ("[CRYPTO] lrw: Liskov Rivest Wagner, a tweakable narrow block cipher mode")
    Cc: stable@vger.kernel.org # 2.6.20+
    Reported-by: Eric Biggers
    Signed-off-by: Ondrej Mosnacek
    Signed-off-by: Herbert Xu

    Ondrej Mosnacek
     

03 Aug, 2018

1 commit


31 Mar, 2018

1 commit


03 Mar, 2018

1 commit

  • Now that all users of lrw_crypt() have been removed in favor of the LRW
    template wrapping an ECB mode algorithm, remove lrw_crypt(). Also
    remove crypto/lrw.h as that is no longer needed either; and fold
    'struct lrw_table_ctx' into 'struct priv', lrw_init_table() into
    setkey(), and lrw_free_table() into exit_tfm().

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

03 Nov, 2017

1 commit


12 Oct, 2017

2 commits


10 Apr, 2017

1 commit

  • When we get an EINPROGRESS completion in lrw, we will end up marking
    the request as done and freeing it. This then blows up when the
    request is really completed as we've already freed the memory.

    Fixes: 700cb3f5fe75 ("crypto: lrw - Convert to skcipher")
    Cc: stable@vger.kernel.org
    Signed-off-by: Herbert Xu

    Herbert Xu
     

24 Mar, 2017

1 commit

  • In the generic XTS and LRW algorithms, for input data > 128 bytes, a
    temporary buffer is allocated to hold the values to be XOR'ed with the
    data before and after encryption or decryption. If the allocation
    fails, the fixed-size buffer embedded in the request buffer is meant to
    be used as a fallback --- resulting in more calls to the ECB algorithm,
    but still producing the correct result. However, we weren't correctly
    limiting subreq->cryptlen in this case, resulting in pre_crypt()
    overrunning the embedded buffer. Fix this by setting subreq->cryptlen
    correctly.

    Fixes: f1c131b45410 ("crypto: xts - Convert to skcipher")
    Fixes: 700cb3f5fe75 ("crypto: lrw - Convert to skcipher")
    Cc: stable@vger.kernel.org # v4.10+
    Reported-by: Dmitry Vyukov
    Signed-off-by: Eric Biggers
    Acked-by: David S. Miller
    Signed-off-by: Herbert Xu

    Eric Biggers
     

28 Nov, 2016

1 commit

  • This patch converts lrw over to the skcipher interface. It also
    optimises the implementation to be based on ECB instead of the
    underlying cipher. For compatibility the existing naming scheme
    of lrw(aes) is maintained as opposed to the more obvious one of
    lrw(ecb(aes)).

    Signed-off-by: Herbert Xu

    Herbert Xu
     

26 Nov, 2014

1 commit

  • This adds the module loading prefix "crypto-" to the template lookup
    as well.

    For example, attempting to load 'vfat(blowfish)' via AF_ALG now correctly
    includes the "crypto-" prefix at every level, correctly rejecting "vfat":

    net-pf-38
    algif-hash
    crypto-vfat(blowfish)
    crypto-vfat(blowfish)-all
    crypto-vfat

    Reported-by: Mathias Krause
    Signed-off-by: Kees Cook
    Acked-by: Mathias Krause
    Signed-off-by: Herbert Xu

    Kees Cook
     

09 Nov, 2011

4 commits


17 Feb, 2009

1 commit

  • It turns out that LRW has never worked properly on big endian.
    This was never discussed because nobody actually used it that
    way. In fact, it was only discovered when Geert Uytterhoeven
    loaded it through tcrypt which failed the test on it.

    The fix is straightforward: on big endian, to find the nth
    bit we should be grouping the bits by words instead of bytes. So
    setbit128_bbe should xor with 128 - BITS_PER_LONG instead of
    128 - BITS_PER_BYTE == 0x78.

    Tested-by: Geert Uytterhoeven
    Signed-off-by: Herbert Xu

    Herbert Xu
     

21 Apr, 2008

1 commit


08 Feb, 2008

1 commit


02 May, 2007

1 commit

  • This patch passes the type/mask along when constructing instances of
    templates. This is in preparation for templates that may support
    multiple types of instances depending on what is requested. For example,
    the planned software async crypto driver will use this construct.

    For the moment this allows us to check whether the instance constructed
    is of the correct type and avoid returning success if the type does not
    match.

    Signed-off-by: Herbert Xu

    Herbert Xu
     

07 Feb, 2007

1 commit


07 Dec, 2006

2 commits

  • Fixes:

    crypto/lrw.c:99: warning: conflicting types for built-in function ‘round’

    Signed-off-by: David S. Miller

    David S. Miller
     
  • Main module, this implements the Liskov Rivest Wagner block cipher mode
    in the new blockcipher API. The implementation is based on ecb.c.

    The LRW-32-AES specification I used can be found at:
    http://grouper.ieee.org/groups/1619/email/pdf00017.pdf

    It implements the optimization specified as optional in the
    specification, and in addition it uses optimized multiplication
    routines from gf128mul.c.

    Since gf128mul.[ch] is not tested on big endian, this cipher mode
    may currently fail badly on big-endian machines.

    Signed-off-by: Rik Snel
    Signed-off-by: Herbert Xu

    Rik Snel