03 Jan, 2021

1 commit

  • The cipher routines in the crypto API are mostly intended for templates
    implementing skcipher modes generically in software, and shouldn't be
    used outside of the crypto subsystem. So move the prototypes and all
    related definitions to a new header file under include/crypto/internal.
    Also, let's use the new module namespace feature to move the symbol
    exports into a new namespace CRYPTO_INTERNAL.
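
    For reference, a minimal sketch of the namespacing mechanism being used
    here (file and symbol names are illustrative, not an exhaustive list of
    what the patch exports):

    /* exporting side, e.g. the file holding the cipher routines */
    EXPORT_SYMBOL_NS_GPL(crypto_cipher_encrypt_one, CRYPTO_INTERNAL);

    /* any other crypto-subsystem module that still uses the cipher API
     * must now opt in to the namespace explicitly */
    MODULE_IMPORT_NS(CRYPTO_INTERNAL);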

    Signed-off-by: Ard Biesheuvel
    Acked-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     

16 Jul, 2020

2 commits

  • CRYPTO_ALG_NEED_FALLBACK is handled inconsistently. When it's requested
    to be clear, some templates propagate that request to child algorithms,
    while others don't.

    It's apparently desired for NEED_FALLBACK to be propagated, to avoid
    deadlocks where a module tries to load itself while it's being
    initialized, and to avoid unnecessarily complex fallback chains where we
    have e.g. cbc-aes-$driver falling back to cbc(aes-$driver) where
    aes-$driver itself falls back to aes-generic, instead of cbc-aes-$driver
    simply falling back to cbc(aes-generic). There have been a number of
    fixes to this effect:

    commit 89027579bc6c ("crypto: xts - Propagate NEED_FALLBACK bit")
    commit d2c2a85cfe82 ("crypto: ctr - Propagate NEED_FALLBACK bit")
    commit e6c2e65c70a6 ("crypto: cbc - Propagate NEED_FALLBACK bit")

    But it seems that other templates can have the same problems too.

    To avoid this whack-a-mole, just add NEED_FALLBACK to INHERITED_FLAGS so
    that it's always inherited.
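
    As a sketch, the inherited-flags mask then reads roughly as follows
    (treat this as illustrative of the intent, not a quote of the final
    code):

    #define CRYPTO_ALG_INHERITED_FLAGS \
            (CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK)
    /* CRYPTO_ALG_ALLOCATES_MEMORY is intended to join this set as well,
     * per the companion patch below */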

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • The flag CRYPTO_ALG_ASYNC is "inherited" in the sense that when a
    template is instantiated, the resulting instance will have
    CRYPTO_ALG_ASYNC set if any of the algorithms it uses has
    CRYPTO_ALG_ASYNC set.

    We'd like to add a second flag (CRYPTO_ALG_ALLOCATES_MEMORY) that gets
    "inherited" in the same way. This is difficult because the handling of
    CRYPTO_ALG_ASYNC is hardcoded everywhere. Address this by:

    - Add CRYPTO_ALG_INHERITED_FLAGS, which contains the set of flags that
    have these inheritance semantics.

    - Add crypto_algt_inherited_mask(), for use by template ->create()
    methods. It returns any of these flags that the user asked to be
    unset and thus must be passed in the 'mask' to crypto_grab_*().

    - Also modify crypto_check_attr_type() to handle computing the 'mask'
    so that most templates can just use this.

    - Make crypto_grab_*() propagate these flags to the template instance
    being created so that templates don't have to do this themselves.

    Make crypto/simd.c propagate these flags too, since it "wraps" another
    algorithm, similar to a template.
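
    A hedged sketch of how a template ->create() then looks with these
    helpers (the template, instance and spawn names are illustrative):

    #include <crypto/internal/skcipher.h>
    #include <linux/slab.h>

    static int example_create(struct crypto_template *tmpl, struct rtattr **tb)
    {
            struct skcipher_instance *inst;
            struct crypto_skcipher_spawn *spawn;
            u32 mask;
            int err;

            /* computes the inherited flags the user asked to be clear ... */
            err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SKCIPHER, &mask);
            if (err)
                    return err;

            inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
            if (!inst)
                    return -ENOMEM;
            spawn = skcipher_instance_ctx(inst);

            /* ... which then go into the grab; crypto_grab_skcipher() also
             * propagates the inherited flags onto the new instance */
            err = crypto_grab_skcipher(spawn, skcipher_crypto_instance(inst),
                                       crypto_attr_alg_name(tb[1]), 0, mask);
            if (err) {
                    kfree(inst);
                    return err;
            }

            /* remaining instance setup and registration elided */
            return 0;
    }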

    Based on a patch by Mikulas Patocka
    (https://lore.kernel.org/r/alpine.LRH.2.02.2006301414580.30526@file01.intranet.prod.int.rdu2.redhat.com).

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

06 Mar, 2020

1 commit


09 Jan, 2020

2 commits

  • Initializing a crypto_skcipher_spawn currently requires:

    1. Set spawn->base.inst to point to the instance.
    2. Call crypto_grab_skcipher().

    But there's no reason for these steps to be separate, and in fact this
    unneeded complication has caused at least one bug, the one fixed by
    commit 6db43410179b ("crypto: adiantum - initialize crypto_spawn::inst").

    So just make crypto_grab_skcipher() take the instance as an argument.

    To keep the function calls from getting too unwieldy due to this extra
    argument, also introduce a 'mask' variable into the affected places
    which weren't already using one.
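
    A before/after sketch of the call pattern (the helper names in the
    "before" half reflect the old two-step API as described above):

    /* before: two separate steps */
    crypto_set_skcipher_spawn(spawn, skcipher_crypto_instance(inst));
    err = crypto_grab_skcipher(spawn, name, 0, mask);

    /* after: the instance is passed straight to the grab */
    err = crypto_grab_skcipher(spawn, skcipher_crypto_instance(inst),
                               name, 0, mask);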

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • The CRYPTO_TFM_RES_* flags were apparently meant as a way to make the
    ->setkey() functions provide more information about errors. But these
    flags weren't actually being used or tested, and in many cases they
    weren't being set correctly anyway. So they've now been removed.

    Also, if someone ever actually needs to start better distinguishing
    ->setkey() errors (which is somewhat unlikely, as this has been unneeded
    for a long time), we'd be much better off just defining different return
    values, like -EINVAL if the key is invalid for the algorithm vs.
    -EKEYREJECTED if the key was rejected by a policy like "no weak keys".
    That would be much simpler, less error-prone, and easier to test.

    So just remove CRYPTO_TFM_RES_MASK and all the unneeded logic that
    propagates these flags around.
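
    Purely as an illustration of that suggested convention (the helper
    example_key_is_weak() is hypothetical):

    static int example_setkey(struct crypto_skcipher *tfm, const u8 *key,
                              unsigned int keylen)
    {
            if (keylen != 16 && keylen != 24 && keylen != 32)
                    return -EINVAL;       /* key invalid for the algorithm */

            if (example_key_is_weak(key, keylen))
                    return -EKEYREJECTED; /* rejected by policy, e.g. "no weak keys" */

            return 0;
    }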

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

27 Dec, 2019

1 commit

  • This patch introduces the skcipher_ialg_simple helper which fetches
    the crypto_alg structure from a simple skcipher instance's spawn.

    This allows us to remove the third argument from the function
    skcipher_alloc_instance_simple.

    In doing so, the reference count on the algorithm is now maintained
    by the Crypto API, and the caller no longer needs to drop the alg
    refcount.
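
    A sketch of the resulting usage in a simple template's ->create()
    (assuming the helper behaves as described above; variable names are
    illustrative):

    struct skcipher_instance *inst;
    struct crypto_alg *alg;

    inst = skcipher_alloc_instance_simple(tmpl, tb);  /* third argument gone */
    if (IS_ERR(inst))
            return PTR_ERR(inst);

    /* underlying algorithm; its refcount is now held by the Crypto API,
     * so the caller no longer has to drop it on its own */
    alg = skcipher_ialg_simple(inst);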

    Signed-off-by: Herbert Xu

    Herbert Xu
     

31 May, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 3029 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

18 Apr, 2019

1 commit

  • Use subsys_initcall for registration of all templates and generic
    algorithm implementations, rather than module_init. Then change
    cryptomgr to use arch_initcall, to place it before the subsys_initcalls.

    This is needed so that when both a generic and optimized implementation
    of an algorithm are built into the kernel (not loadable modules), the
    generic implementation is registered before the optimized one.
    Otherwise, the self-tests for the optimized implementation are unable to
    allocate the generic implementation for the new comparison fuzz tests.

    Note that on arm, a side effect of this change is that self-tests for
    generic implementations may run before the unaligned access handler has
    been installed. So, unaligned accesses will crash the kernel. This is
    arguably a good thing as it makes it easier to detect that type of bug.
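
    The per-module change is mechanical; a sketch (module and template
    names are illustrative):

    static int __init crypto_example_module_init(void)
    {
            return crypto_register_template(&crypto_example_tmpl);
    }

    /* was: module_init(crypto_example_module_init); */
    subsys_initcall(crypto_example_module_init);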

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

25 Jan, 2019

1 commit


11 Jan, 2019

1 commit


23 Dec, 2018

1 commit

  • Remove dead code related to internal IV generators, which are no longer
    used since they've been replaced with the "seqiv" and "echainiv"
    templates. The removed code includes:

    - The "givcipher" (GIVCIPHER) algorithm type. No algorithms are
    registered with this type anymore, so it's unneeded.

    - The "const char *geniv" member of aead_alg, ablkcipher_alg, and
    blkcipher_alg. A few algorithms still set this, but it isn't used
    anymore except to show via /proc/crypto and CRYPTO_MSG_GETALG.
    Just hardcode "" or "" in those cases.

    - The 'skcipher_givcrypt_request' structure, which is never used.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

21 Apr, 2018

1 commit

  • We avoid various VLAs[1] by using constant expressions for block size
    and alignment mask.

    [1] http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

    Signed-off-by: Salvatore Mesoraca
    Signed-off-by: Herbert Xu

    Salvatore Mesoraca
     

04 Aug, 2017

1 commit

  • There are quite a number of occurrences in the kernel of the pattern

    if (dst != src)
            memcpy(dst, src, walk.total % AES_BLOCK_SIZE);
    crypto_xor(dst, final, walk.total % AES_BLOCK_SIZE);

    or

    crypto_xor(keystream, src, nbytes);
    memcpy(dst, keystream, nbytes);

    where crypto_xor() is preceded or followed by a memcpy() invocation
    that is only there because crypto_xor() uses its output parameter as
    one of the inputs. To avoid having to add new instances of this pattern
    in the arm64 code, which will be refactored to implement non-SIMD
    fallbacks, add an alternative implementation called crypto_xor_cpy(),
    taking separate input and output arguments. This removes the need for
    the separate memcpy().
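
    With the new helper, the second pattern above collapses to one call;
    a sketch:

    /* before */
    crypto_xor(keystream, src, nbytes);
    memcpy(dst, keystream, nbytes);

    /* after: dst = keystream ^ src, with no intermediate copy */
    crypto_xor_cpy(dst, keystream, src, nbytes);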

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     

09 Mar, 2017

1 commit

  • When requesting a fallback algorithm, we should propagate the
    NEED_FALLBACK bit when searching for the underlying algorithm.

    This prevents drivers from allocating unnecessary fallbacks that are
    never called. For instance, the vmx-crypto driver currently uses the
    following chain of calls when calling the fallback implementation:

    p8_aes_ctr -> ctr(p8_aes) -> aes-generic

    However p8_aes will always delegate its calls to aes-generic. With this
    patch, p8_aes_ctr will be able to use ctr(aes-generic) directly as its
    fallback. The same applies to aes_s390.

    Signed-off-by: Marcelo Henrique Cerri
    Signed-off-by: Herbert Xu

    Marcelo Cerri
     

11 Feb, 2017

1 commit

  • Instead of unconditionally forcing 4 byte alignment for all generic
    chaining modes that rely on crypto_xor() or crypto_inc() (which may
    result in unnecessary copying of data when the underlying hardware
    can perform unaligned accesses efficiently), make those functions
    deal with unaligned input explicitly, but only if the Kconfig symbol
    HAVE_EFFICIENT_UNALIGNED_ACCESS is set. This will allow us to drop
    the alignmasks from the CBC, CMAC, CTR, CTS, PCBC and SEQIV drivers.

    For crypto_inc(), this simply involves making the 4-byte stride
    conditional on HAVE_EFFICIENT_UNALIGNED_ACCESS being set, given that
    it typically operates on 16 byte buffers.

    For crypto_xor(), an algorithm is implemented that simply runs through
    the input using the largest strides possible if unaligned accesses are
    allowed. If they are not, an optimal sequence of memory accesses is
    emitted that takes the relative alignment of the input buffers into
    account, e.g., if the relative misalignment of dst and src is 4 bytes,
    the entire xor operation will be completed using 4 byte loads and stores
    (modulo unaligned bits at the start and end). Note that all expressions
    involving misalign are simply eliminated by the compiler when
    HAVE_EFFICIENT_UNALIGNED_ACCESS is defined.
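
    A simplified sketch of the idea (not the actual kernel implementation,
    which additionally handles the relative-misalignment cases described
    above):

    #include <linux/types.h>

    static void xor_sketch(u8 *dst, const u8 *src, unsigned int len)
    {
    #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
            /* largest stride: word-sized accesses regardless of alignment */
            while (len >= sizeof(unsigned long)) {
                    *(unsigned long *)dst ^= *(const unsigned long *)src;
                    dst += sizeof(unsigned long);
                    src += sizeof(unsigned long);
                    len -= sizeof(unsigned long);
            }
    #endif
            /* leftover (or alignment-unsafe) bytes one at a time */
            while (len--)
                    *dst++ ^= *src++;
    }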

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     

01 Nov, 2016

2 commits


18 Jul, 2016

1 commit


26 Nov, 2014

1 commit

  • This adds the module loading prefix "crypto-" to the template lookup
    as well.

    For example, attempting to load 'vfat(blowfish)' via AF_ALG now correctly
    includes the "crypto-" prefix at every level, correctly rejecting "vfat":

    net-pf-38
    algif-hash
    crypto-vfat(blowfish)
    crypto-vfat(blowfish)-all
    crypto-vfat

    Reported-by: Mathias Krause
    Signed-off-by: Kees Cook
    Acked-by: Mathias Krause
    Signed-off-by: Herbert Xu

    Kees Cook
     

24 Nov, 2014

1 commit


26 Feb, 2013

1 commit

  • Pull crypto update from Herbert Xu:
    "Here is the crypto update for 3.9:

    - Added accelerated implementation of crc32 using pclmulqdq.

    - Added test vector for fcrypt.

    - Added support for OMAP4/AM33XX cipher and hash.

    - Fixed loose crypto_user input checks.

    - Misc fixes"

    * git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (43 commits)
    crypto: user - ensure user supplied strings are nul-terminated
    crypto: user - fix empty string test in report API
    crypto: user - fix info leaks in report API
    crypto: caam - Added property fsl,sec-era in SEC4.0 device tree binding.
    crypto: use ERR_CAST
    crypto: atmel-aes - adjust duplicate test
    crypto: crc32-pclmul - Kill warning on x86-32
    crypto: x86/twofish - assembler clean-ups: use ENTRY/ENDPROC, localize jump labels
    crypto: x86/sha1 - assembler clean-ups: use ENTRY/ENDPROC
    crypto: x86/serpent - use ENTRY/ENDPROC for assember functions and localize jump targets
    crypto: x86/salsa20 - assembler cleanup, use ENTRY/ENDPROC for assember functions and rename ECRYPT_* to salsa20_*
    crypto: x86/ghash - assembler clean-up: use ENDPROC at end of assember functions
    crypto: x86/crc32c - assembler clean-up: use ENTRY/ENDPROC
    crypto: cast6-avx: use ENTRY()/ENDPROC() for assembler functions
    crypto: cast5-avx: use ENTRY()/ENDPROC() for assembler functions and localize jump targets
    crypto: camellia-x86_64/aes-ni: use ENTRY()/ENDPROC() for assembler functions and localize jump targets
    crypto: blowfish-x86_64: use ENTRY()/ENDPROC() for assembler functions and localize jump targets
    crypto: aesni-intel - add ENDPROC statements for assembler functions
    crypto: x86/aes - assembler clean-ups: use ENTRY/ENDPROC, localize jump targets
    crypto: testmgr - add test vector for fcrypt
    ...

    Linus Torvalds
     

04 Feb, 2013

1 commit

  • Replace PTR_ERR followed by ERR_PTR by ERR_CAST, to be more concise.

    The semantic patch that makes this change is as follows:
    (http://coccinelle.lip6.fr/)

    //
    @@
    expression err,x;
    @@
    - err = PTR_ERR(x);
    if (IS_ERR(x))
    - return ERR_PTR(err);
    + return ERR_CAST(x);
    //

    Signed-off-by: Julia Lawall
    Signed-off-by: Herbert Xu

    Julia Lawall
     

08 Jan, 2013

1 commit

  • Some hardware crypto drivers register asynchronous ctr(aes), which is left
    unused in IPsec because the rfc3686 template only supports synchronous
    block ciphers. Some other drivers register rfc3686(ctr(aes)) to work
    around this limitation, but not all of them do.

    This patch changes rfc3686 to use asynchronous block ciphers, so that
    async ctr(aes) algorithms can be utilized automatically by IPsec.

    Signed-off-by: Jussi Kivilinna
    Acked-by: Herbert Xu
    Signed-off-by: Steffen Klassert

    Jussi Kivilinna
     

26 May, 2010

1 commit

  • Use ERR_CAST(x) rather than ERR_PTR(PTR_ERR(x)). The former makes the
    purpose of the operation clear, whereas the latter looks like a no-op.

    The semantic patch that makes this change is as follows:
    (http://coccinelle.lip6.fr/)

    //
    @@
    type T;
    T x;
    identifier f;
    @@

    T f (...) { }

    @@
    expression x;
    @@

    - ERR_PTR(PTR_ERR(x))
    + ERR_CAST(x)
    //

    Signed-off-by: Julia Lawall
    Signed-off-by: Herbert Xu

    Julia Lawall
     

13 Aug, 2009

1 commit

  • Raw counter mode only works with chainiv, which is no longer
    the default IV generator on SMP machines. This broke raw counter
    mode, as it could no longer be instantiated as a givcipher.

    This patch fixes it by always picking chainiv for raw counter
    mode. This is based on the diagnosis and a patch by Huang
    Ying.

    Signed-off-by: Herbert Xu

    Herbert Xu
     

11 Jan, 2008

6 commits

  • This generator generates an IV based on a sequence number by xoring it
    with a salt. This algorithm is mainly useful for CTR and similar modes.

    This patch also sets it as the default IV generator for ctr.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • As discussed previously, this patch moves the basic CTR functionality
    into a chainable algorithm called ctr. The IPsec-specific variant of
    it is now placed on top with the name rfc3686.

    So ctr(aes) gives a chainable cipher with IV size 16 while the IPsec
    variant will be called rfc3686(ctr(aes)). This patch also adjusts
    gcm accordingly.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • When the data spans a page boundary, CTR may incorrectly process a
    partial block in the middle, because the blkcipher walking code may
    supply partial blocks mid-stream as long as the total length of the
    supplied data is more than a block. In that case CTR is supposed to
    return any unused partial block to the walker.

    This patch fixes this by doing exactly that, returning partial blocks to
    the walker unless we received less than a block's worth of data to start
    with.

    This also allows us to optimise the bulk of the processing since we no
    longer have to worry about partial blocks until the very end.

    Thanks to Tan Swee Heng for fixes and actually testing this :)

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch replaces the custom inc/xor in CTR with the generic functions.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch adds countersize to CTR mode.
    The template is now ctr(algo,noncesize,ivsize,countersize).

    For example, ctr(aes,4,8,4) indicates that the counter block
    will be composed of a 4-byte salt/nonce, an 8-byte IV and a
    4-byte counter.

    When noncesize + ivsize < blocksize, CTR initializes the last
    (blocksize - ivsize - noncesize) bytes of the counter block to
    zero. Otherwise the counter block is composed of the IV
    (and nonce if necessary).

    If noncesize + ivsize == blocksize, the user is passing in the
    entire counter block. In that case countersize indicates how
    many bytes of the counter block to use as the counter for
    incrementing. CTR will increment the counter portion by 1 and
    begin encryption with that value.

    Note that CTR assumes the counter portion of the block that
    will be incremented is stored in big-endian order.
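
    A hedged sketch of the counter-block handling described above (function
    and parameter names are illustrative, not the template's actual code):

    #include <crypto/algapi.h>      /* crypto_inc() */
    #include <linux/string.h>

    static void ctr_setup_counterblock(u8 *ctrblk, unsigned int blocksize,
                                       const u8 *nonce, unsigned int noncesize,
                                       const u8 *iv, unsigned int ivsize,
                                       unsigned int countersize)
    {
            memset(ctrblk, 0, blocksize);           /* zero any unused bytes */
            memcpy(ctrblk, nonce, noncesize);       /* salt/nonce first */
            memcpy(ctrblk + noncesize, iv, ivsize); /* then the IV */

            /* the last 'countersize' bytes hold the big-endian counter;
             * it is bumped by one for each keystream block */
            crypto_inc(ctrblk + blocksize - countersize, countersize);
    }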

    Signed-off-by: Joy Latten
    Signed-off-by: Herbert Xu

    Joy Latten
     
  • This patch implements CTR mode for IPsec.
    It is based on RFC 3686.

    Please note:
    1. CTR turns a block cipher into a stream cipher.
    Encryption is done in blocks; however, the last block
    may be a partial block.

    A "counter block" is encrypted, creating a keystream
    that is XORed with the plaintext. The counter portion
    of the counter block is incremented after each block
    of plaintext is encrypted.
    Decryption is performed in the same manner (one block of
    this is sketched after this list).

    2. The CTR counter block is composed of:
    nonce + IV + counter

    The size of the counter block is equal to the
    blocksize of the cipher:
    sizeof(nonce) + sizeof(IV) + sizeof(counter) = blocksize

    The CTR template takes the name of the cipher
    algorithm, the size of the nonce, and the size of the IV:
    ctr(cipher,sizeof_nonce,sizeof_iv)

    So, for example,
    ctr(aes,4,8)
    specifies that the counter block will be composed of 4 bytes
    from a nonce, 8 bytes from the IV, and 4 bytes for the counter,
    since AES has a blocksize of 16 bytes.

    3. The counter portion of the counter block is stored
    in big-endian order for conformance with RFC 3686.
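
    A hedged sketch of one block of the scheme above (not the template's
    actual code; an AES-sized keystream buffer is assumed):

    #include <crypto/algapi.h>      /* crypto_xor(), crypto_inc() */
    #include <linux/crypto.h>       /* crypto_cipher_encrypt_one() */
    #include <linux/string.h>

    static void ctr_one_block(struct crypto_cipher *tfm, u8 *ctrblk,
                              unsigned int blocksize, unsigned int countersize,
                              u8 *dst, const u8 *src)
    {
            u8 keystream[16];       /* assumes blocksize <= 16 */

            /* encrypt the counter block to produce the keystream */
            crypto_cipher_encrypt_one(tfm, keystream, ctrblk);

            /* ciphertext = plaintext XOR keystream (same for decryption) */
            if (dst != src)
                    memcpy(dst, src, blocksize);
            crypto_xor(dst, keystream, blocksize);

            /* bump the big-endian counter portion for the next block */
            crypto_inc(ctrblk + blocksize - countersize, countersize);
    }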

    Signed-off-by: Joy Latten
    Signed-off-by: Herbert Xu

    Joy Latten