09 Jan, 2020

1 commit

  • The CRYPTO_TFM_RES_BAD_KEY_LEN flag was apparently meant as a way to
    make the ->setkey() functions provide more information about errors.

    However, no one actually checks for this flag, which makes it pointless.

    Also, many algorithms fail to set this flag when given a bad-length key.
    Reviewing just the generic implementations, this is the case for
    aes-fixed-time, cbcmac, echainiv, nhpoly1305, pcrypt, rfc3686, rfc4309,
    rfc7539, rfc7539esp, salsa20, seqiv, and xcbc. But there are probably
    many more in arch/*/crypto/ and drivers/crypto/.

    Some algorithms can even set this flag when the key is the correct
    length. For example, authenc and authencesn set it when the key payload
    is malformed in any way (not just a bad length), the atmel-sha and ccree
    drivers can set it if a memory allocation fails, and the chelsio driver
    sets it for bad auth tag lengths, not just bad key lengths.

    So even if someone actually wanted to start checking this flag (which
    seems unlikely, since it's been unused for a long time), there would be
    a lot of work needed to get it working correctly. But it would probably
    be much better to go back to the drawing board and just define different
    return values, like -EINVAL if the key is invalid for the algorithm vs.
    -EKEYREJECTED if the key was rejected by a policy like "no weak keys".
    That would be much simpler, less error-prone, and easier to test.

    So just remove this flag.

    Signed-off-by: Eric Biggers
    Reviewed-by: Horia Geantă
    Signed-off-by: Herbert Xu
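
    A minimal sketch of the proposed convention; example_setkey() and
    example_key_is_weak() are hypothetical names, not code from the
    patch:

        #include <linux/crypto.h>
        #include <linux/errno.h>

        /* hypothetical policy helper, standing in for a real
         * weak-key check */
        static bool example_key_is_weak(const u8 *key, unsigned int keylen)
        {
            return false;
        }

        static int example_setkey(struct crypto_tfm *tfm, const u8 *key,
                                  unsigned int keylen)
        {
            /* key length the algorithm cannot use at all */
            if (keylen != 16 && keylen != 24 && keylen != 32)
                return -EINVAL;

            /* key rejected by a policy such as "no weak keys" */
            if (example_key_is_weak(key, keylen))
                return -EKEYREJECTED;

            /* ... expand and store the key in the tfm context ... */
            return 0;
        }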


15 Aug, 2019

1 commit


26 Jul, 2019

3 commits


18 Apr, 2019

1 commit

  • Use subsys_initcall for registration of all templates and generic
    algorithm implementations, rather than module_init. Then change
    cryptomgr to use arch_initcall, to place it before the subsys_initcalls.

    This is needed so that when both a generic and optimized implementation
    of an algorithm are built into the kernel (not loadable modules), the
    generic implementation is registered before the optimized one.
    Otherwise, the self-tests for the optimized implementation are unable to
    allocate the generic implementation for the new comparison fuzz tests.

    Note that on ARM, a side effect of this change is that self-tests for
    generic implementations may run before the unaligned access handler has
    been installed. So, unaligned accesses will crash the kernel. This is
    arguably a good thing as it makes it easier to detect that type of bug.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
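
    As a sketch of the pattern (example_alg stands in for any generic
    algorithm; the actual patch applies this across crypto/):

        #include <linux/crypto.h>
        #include <linux/module.h>

        static struct crypto_alg example_alg;   /* stand-in algorithm */

        static int __init example_mod_init(void)
        {
            return crypto_register_alg(&example_alg);
        }

        static void __exit example_mod_fini(void)
        {
            crypto_unregister_alg(&example_alg);
        }

        /* was: module_init(example_mod_init); */
        subsys_initcall(example_mod_init);
        module_exit(example_mod_fini);

        MODULE_LICENSE("GPL");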


08 Apr, 2019

1 commit

  • __cacheline_aligned places data in a special section. The object
    cannot be const at the same time, because that section is not
    read-only, so it gets no MMU protection.

    Mark it ____cacheline_aligned instead, which does not place it in a
    special section but just aligns it, so it stays in .rodata.

    Cc: herbert@gondor.apana.org.au
    Suggested-by: Rasmus Villemoes
    Signed-off-by: Andi Kleen
    Acked-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu
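
    A minimal sketch of the difference; example_tab is illustrative:

        #include <linux/cache.h>
        #include <linux/types.h>

        /* Before: __cacheline_aligned places the object in a special
         * data section, which is writable, so it cannot be const:
         *
         *     static u32 example_tab[256] __cacheline_aligned;
         */

        /* After: ____cacheline_aligned is only an alignment attribute,
         * so the const table stays in .rodata with MMU protection. */
        static const u32 example_tab[256] ____cacheline_aligned;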


09 Nov, 2018

1 commit

  • Make the ARM scalar AES implementation closer to constant-time by
    disabling interrupts and prefetching the tables into L1 cache. This
    is feasible because, thanks to ARM's "free" rotations, the main
    tables are only 1024 bytes instead of the usual 4096 used by most
    AES implementations.

    On ARM Cortex-A7, the speed loss is only about 5%. The resulting code
    is still over twice as fast as aes_ti.c. Responsiveness is potentially
    a concern, but interrupts are only disabled for a single AES block.

    Note that even after these changes, the implementation still isn't
    necessarily guaranteed to be constant-time; see
    https://cr.yp.to/antiforgery/cachetiming-20050414.pdf for a discussion
    of the many difficulties involved in writing truly constant-time AES
    software. But it's valuable to make such attacks more difficult.

    Much of this patch is based on patches suggested by Ard Biesheuvel.

    Suggested-by: Ard Biesheuvel
    Signed-off-by: Eric Biggers
    Reviewed-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu
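
    The actual change is in ARM assembly; a rough C-level sketch of the
    idea, with hypothetical helper names:

        #include <linux/irqflags.h>
        #include <linux/types.h>

        void example_prefetch_tables(void);         /* hypothetical */
        void __example_aes_encrypt(const u32 *rk, int rounds,
                                   u8 *dst, const u8 *src);

        static void example_aes_encrypt(const u32 *rk, int rounds,
                                        u8 *dst, const u8 *src)
        {
            unsigned long flags;

            /* with interrupts off, nothing can evict the tables */
            local_irq_save(flags);
            example_prefetch_tables();   /* pull 1024 bytes into L1 */
            __example_aes_encrypt(rk, rounds, dst, src); /* one block */
            local_irq_restore(flags);    /* bounded: one block per call */
        }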


11 Feb, 2017

1 commit

  • The generic AES code exposes a 32-bit align mask, which forces all
    users of the code to use temporary buffers or take other measures to
    ensure the alignment requirement is adhered to, even on architectures
    that don't care about alignment for software algorithms such as this
    one.

    So drop the align mask, and fix the code to use get_unaligned_le32()
    where appropriate, which will resolve to whatever is optimal for the
    architecture.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu
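
    The pattern, roughly; example_load_block() is illustrative:

        #include <asm/unaligned.h>
        #include <linux/types.h>

        /* Instead of casting "in" to u32 * (which required the align
         * mask), load each word with get_unaligned_le32(); on
         * architectures with cheap unaligned access this compiles to a
         * plain load. */
        static void example_load_block(u32 st[4], const u8 *in)
        {
            int i;

            for (i = 0; i < 4; i++)
                st[i] = get_unaligned_le32(in + 4 * i);
        }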


13 Jan, 2015

1 commit

  • Commit 5d26a105b5a7 ("crypto: prefix module autoloading with "crypto-"")
    changed the automatic module loading when requesting crypto algorithms
    to prefix all module requests with "crypto-". This requires all crypto
    modules to have a crypto specific module alias even if their file name
    would otherwise match the requested crypto algorithm.

    Even though commit 5d26a105b5a7 added those aliases for a vast
    number of modules, it was missing a few. Add the required
    MODULE_ALIAS_CRYPTO annotations to those files to make them load
    automatically again.
    This fixes, e.g., requesting 'ecb(blowfish-generic)', which used to work
    with kernels v3.18 and below.

    Also change MODULE_ALIAS() lines to MODULE_ALIAS_CRYPTO(). The former
    won't work for crypto modules any more.

    Fixes: 5d26a105b5a7 ("crypto: prefix module autoloading with "crypto-"")
    Cc: Kees Cook
    Signed-off-by: Mathias Krause
    Signed-off-by: Herbert Xu
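
    The annotation pattern, using blowfish (mentioned above) as the
    example:

        #include <linux/crypto.h>
        #include <linux/module.h>

        /* Without these, request_module("crypto-blowfish") finds no
         * module, even though blowfish_generic provides the algorithm. */
        MODULE_ALIAS_CRYPTO("blowfish");
        MODULE_ALIAS_CRYPTO("blowfish-generic");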


24 Nov, 2014

1 commit


14 Aug, 2013

1 commit


01 Aug, 2012

1 commit

  • Initialization of cra_list is currently mixed: most ciphers
    initialize this field and most shashes do not. Initialization,
    however, is not needed at all, since cra_list is
    initialized/overwritten in __crypto_register_alg() with list_add().
    Therefore, remove all unneeded initializations of this field in
    'crypto/'.

    Signed-off-by: Jussi Kivilinna
    Signed-off-by: Herbert Xu
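
    The removed pattern looks like this; example_alg is illustrative:

        #include <linux/crypto.h>

        static struct crypto_alg example_alg = {
            .cra_name      = "example",
            .cra_blocksize = 16,
            /* .cra_list = LIST_HEAD_INIT(example_alg.cra_list)
             * is redundant here: __crypto_register_alg() overwrites
             * cra_list with list_add() anyway */
        };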


16 Feb, 2010

1 commit


24 Jul, 2009

1 commit

  • It's undefined behaviour in C to write outside the bounds of an array.
    The key expansion routine takes a shortcut of creating 8 words at a
    time, but this creates 4 additional words which don't fit in the array.

    As everyone is hopefully now aware, GCC is at liberty to make any
    assumptions and optimisations it likes in situations where it can
    detect that UB has occurred, up to and including nasal demons, and
    as the indices being accessed in the array are trivially
    calculable, it's rash to invite GCC to take any liberties at all.

    Signed-off-by: Phil Carmody
    Signed-off-by: Herbert Xu
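
    An illustrative shape of the overrun, not the exact kernel code;
    expand_eight_words() is hypothetical:

        #include <linux/types.h>

        void expand_eight_words(u32 *rk, int i);    /* hypothetical */

        /* An AES-256 key schedule holds exactly 60 32-bit words. */
        static u32 rk[60];

        static void example_expand(void)
        {
            int i;

            for (i = 8; i < 60; i += 8)
                /* writes rk[i]..rk[i + 7]; on the final pass
                 * (i == 56) the stores to rk[60]..rk[63] land out
                 * of bounds: undefined behaviour */
                expand_eight_words(rk, i);
        }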


25 Dec, 2008

1 commit

  • The tables used by the various AES algorithms are currently
    computed at run-time. This has created an init ordering problem
    because some AES algorithms may be registered before the tables
    have been initialised.

    This patch sidesteps the problem entirely by precomputing the
    tables.

    Signed-off-by: Herbert Xu
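
    Schematically; entry values shown for illustration:

        #include <linux/types.h>

        /* Before: zero at build time, filled by an initcall that may
         * run after an AES implementation has already registered:
         *
         *     static u32 crypto_ft_tab[4][256];
         */

        /* After: shipped precomputed, valid from the moment the
         * kernel image is loaded. */
        const u32 crypto_ft_tab[4][256] = {
            { 0xa56363c6, 0x847c7cf8, /* ... 254 more words ... */ },
            /* ... the remaining three tables ... */
        };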


21 Apr, 2008

1 commit


11 Jan, 2008

3 commits


11 Oct, 2007

1 commit

  • Loading the crypto algorithm by the alias instead of by module directly
    has the advantage that all possible implementations of this algorithm
    are loaded automatically and the crypto API can choose the best one
    depending on its priority.

    Additionally, it ensures that both the generic implementation and
    the HW driver (if available) are loaded, in case the HW driver
    needs the generic version as a fallback in corner cases.

    Signed-off-by: Sebastian Siewior
    Signed-off-by: Herbert Xu
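
    In practice this means allocating by algorithm name and letting
    module autoloading do the rest; example_get_aes() is illustrative:

        #include <linux/crypto.h>

        /* Requesting "aes" by name lets the module alias machinery
         * load every implementation, and the crypto API then binds
         * the highest-priority one (e.g. a HW driver, with
         * aes-generic available as its fallback). */
        static struct crypto_cipher *example_get_aes(void)
        {
            return crypto_alloc_cipher("aes", 0, 0);
        }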
