21 Aug, 2020

1 commit

  • Revert "crypto: hash - Add real ahash walk interface"
    This reverts commit 75ecb231ff45b54afa9f4ec9137965c3c00868f4.

    The callers of the functions in this commit were removed in ab8085c130ed.

    Remove these unused calls.

    Fixes: ab8085c130ed ("crypto: x86 - remove SHA multibuffer routines and mcryptd")
    Cc: Ard Biesheuvel
    Signed-off-by: Ira Weiny
    Signed-off-by: Herbert Xu

    Ira Weiny
     

08 Aug, 2020

1 commit

  • As said by Linus:

    A symmetric naming is only helpful if it implies symmetries in use.
    Otherwise it's actively misleading.

    In "kzalloc()", the z is meaningful and an important part of what the
    caller wants.

    In "kzfree()", the z is actively detrimental, because maybe in the
    future we really _might_ want to use that "memfill(0xdeadbeef)" or
    something. The "zero" part of the interface isn't even _relevant_.

    The main reason that kzfree() exists is to clear sensitive information
    that should not be leaked to other future users of the same memory
    objects.

    Rename kzfree() to kfree_sensitive() to follow the example of the recently
    added kvfree_sensitive() and make the intention of the API more explicit.
    In addition, memzero_explicit() is used to clear the memory to make sure
    that it won't get optimized away by the compiler.
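
    A minimal sketch of what kfree_sensitive() does (assumed shape, not
    necessarily the exact mm/slab_common.c code):

        void kfree_sensitive(const void *p)
        {
                void *mem = (void *)p;

                if (!mem)
                        return;
                /* The barrier in memzero_explicit() keeps the compiler
                 * from proving the buffer dead and eliding the stores. */
                memzero_explicit(mem, ksize(mem));
                kfree(mem);
        }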

    The renaming is done by using the command sequence:

    git grep -w --name-only kzfree |\
    xargs sed -i 's/kzfree/kfree_sensitive/'

    followed by some editing of the kfree_sensitive() kerneldoc and adding
    a kzfree backward compatibility macro in slab.h.

    [akpm@linux-foundation.org: fs/crypto/inline_crypt.c needs linux/slab.h]
    [akpm@linux-foundation.org: fix fs/crypto/inline_crypt.c some more]

    Suggested-by: Joe Perches
    Signed-off-by: Waiman Long
    Signed-off-by: Andrew Morton
    Acked-by: David Howells
    Acked-by: Michal Hocko
    Acked-by: Johannes Weiner
    Cc: Jarkko Sakkinen
    Cc: James Morris
    Cc: "Serge E. Hallyn"
    Cc: Joe Perches
    Cc: Matthew Wilcox
    Cc: David Rientjes
    Cc: Dan Carpenter
    Cc: "Jason A . Donenfeld"
    Link: http://lkml.kernel.org/r/20200616154311.12314-3-longman@redhat.com
    Signed-off-by: Linus Torvalds

    Waiman Long
     

16 Jan, 2020

1 commit

  • These two C implementations from Zinc -- a 32x32 one and a 64x64 one,
    depending on the platform -- come from Andrew Moon's public domain
    poly1305-donna portable code, modified for usage in the kernel. The
    precomputation in the 32-bit version and the use of 64x64 multiplies in
    the 64-bit version make these perform better than the code it replaces.
    Moon's code is also very widespread and has received many eyeballs of
    scrutiny.
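
    For illustration only (simplified, not the kernel's donna code): the
    64x64 variant keeps 44-bit limbs in 64-bit words and widens products
    to 128 bits, so each limb product is a single multiply instruction:

        #include <stdint.h>

        typedef unsigned __int128 u128;

        /* Multiply-accumulate one limb pair, returning the low 44 bits
         * and leaving the carry for the next limb in *acc. */
        static inline uint64_t mul_limb(u128 *acc, uint64_t a, uint64_t b)
        {
                *acc += (u128)a * b;               /* 64x64 -> 128 */
                uint64_t lo = (uint64_t)*acc & 0xfffffffffffULL;
                *acc >>= 44;
                return lo;
        }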

    There's a bit of interference with the x86 implementation, which
    relies on internal details of the old scalar implementation. In the next
    commit, the x86 implementation will be replaced with a faster one that
    doesn't rely on this, so none of this matters much. But for now, to keep
    this passing the tests, we inline the bits of the old implementation
    that the x86 implementation relied on. Also, since we now support a
    slightly larger key space, via the union, some offsets had to be fixed
    up.

    Nonce calculation was folded in with the emit function, to take
    advantage of 64x64 arithmetic. However, Adiantum appeared to rely on no
    nonce handling in emit, so this path was conditionalized. We also
    introduced a new struct, poly1305_core_key, to represent the precise
    amount of space that particular implementation uses.

    Testing with kbench9000, depending on the CPU, the update function for
    the 32x32 version has been improved by 4%-7%, and for the 64x64 by
    19%-30%. The 32x32 gains are small, but I think there's great value in
    having a parallel implementation to the 64x64 one so that the two can be
    compared side-by-side as nice stand-alone units.

    Signed-off-by: Jason A. Donenfeld
    Signed-off-by: Herbert Xu

    Jason A. Donenfeld
     

09 Jan, 2020

16 commits

  • Convert shash_free_instance() and its users to the new way of freeing
    instances, where a ->free() method is installed to the instance struct
    itself. This replaces the weakly-typed method crypto_template::free().

    This will allow removing support for the old way of freeing instances.

    Also give shash_free_instance() a more descriptive name to reflect that
    it's only for instances with a single spawn, not for any instance.
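
    The resulting pattern looks roughly like this (hedged sketch; the
    function and field names follow the descriptions above):

        static void example_shash_free(struct shash_instance *inst)
        {
                crypto_drop_shash(shash_instance_ctx(inst));
                kfree(inst);
        }

        /* ...installed in the template's ->create() method: */
        inst->free = example_shash_free;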

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Convert the "seqiv" template to the new way of freeing instances where a
    ->free() method is installed to the instance struct itself. Also remove
    the unused implementation of the old way of freeing instances from the
    "echainiv" template, since it's already using the new way too.

    In doing this, also simplify the code by making the helper function
    aead_geniv_alloc() install the ->free() method, instead of making seqiv
    and echainiv do this themselves. This is analogous to how
    skcipher_alloc_instance_simple() works.

    This will allow removing support for the old way of freeing instances.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Add support to shash and ahash for the new way of freeing instances
    (already used for skcipher, aead, and akcipher) where a ->free() method
    is installed to the instance struct itself. These methods are more
    strongly-typed than crypto_template::free(), which they replace.

    This will allow removing support for the old way of freeing instances.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Now that all the templates that need ahash spawns have been converted to
    use crypto_grab_ahash() rather than look up the algorithm directly,
    crypto_ahash_type is no longer used outside of ahash.c. Make it static.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Remove lots of helper functions that were previously used for
    instantiating crypto templates, but are now unused:

    - crypto_get_attr_alg() and similar functions looked up an inner
    algorithm directly from a template parameter. These were replaced
    with getting the algorithm's name, then calling crypto_grab_*().

    - crypto_init_spawn2() and similar functions initialized a spawn, given
    an algorithm. Similarly, these were replaced with crypto_grab_*().

    - crypto_alloc_instance() and similar functions allocated an instance
    with a single spawn, given the inner algorithm. These aren't useful
    anymore since the crypto_grab_*() functions need the instance to be
    allocated first.
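
    Roughly, the replacement pattern is (hedged sketch; the surrounding
    variable names are illustrative):

        /* old: look up the inner algorithm directly from the parameter */
        alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_SHASH,
                                  CRYPTO_ALG_TYPE_MASK);

        /* new: fetch only the name, then grab it through the spawn */
        err = crypto_grab_shash(spawn, shash_crypto_instance(inst),
                                crypto_attr_alg_name(tb[1]), 0, mask);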

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Make skcipher_alloc_instance_simple() use the new function
    crypto_grab_cipher() to initialize its cipher spawn.

    This is needed to make all spawns be initialized in a consistent way.

    Also simplify the error handling by taking advantage of crypto_drop_*()
    now accepting (as a no-op) spawns that haven't been initialized yet, and
    by taking advantage of crypto_grab_*() now handling ERR_PTR() names.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Currently, ahash spawns are initialized by using ahash_attr_alg() or
    crypto_find_alg() to look up the ahash algorithm, then calling
    crypto_init_ahash_spawn().

    This is different from how skcipher, aead, and akcipher spawns are
    initialized (they use crypto_grab_*()), and for no good reason. This
    difference introduces unnecessary complexity.

    The crypto_grab_*() functions used to have some problems, like not
    holding a reference to the algorithm and requiring the caller to
    initialize spawn->base.inst. But those problems are fixed now.

    So, let's introduce crypto_grab_ahash() so that we can convert all
    templates to the same way of initializing their spawns.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Currently, shash spawns are initialized by using shash_attr_alg() or
    crypto_alg_mod_lookup() to look up the shash algorithm, then calling
    crypto_init_shash_spawn().

    This is different from how skcipher, aead, and akcipher spawns are
    initialized (they use crypto_grab_*()), and for no good reason. This
    difference introduces unnecessary complexity.

    The crypto_grab_*() functions used to have some problems, like not
    holding a reference to the algorithm and requiring the caller to
    initialize spawn->base.inst. But those problems are fixed now.

    So, let's introduce crypto_grab_shash() so that we can convert all
    templates to the same way of initializing their spawns.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Initializing a crypto_akcipher_spawn currently requires:

    1. Set spawn->base.inst to point to the instance.
    2. Call crypto_grab_akcipher().

    But there's no reason for these steps to be separate, and in fact this
    unneeded complication has caused at least one bug, the one fixed by
    commit 6db43410179b ("crypto: adiantum - initialize crypto_spawn::inst").

    So just make crypto_grab_akcipher() take the instance as an argument.

    To keep the function call from getting too unwieldy due to this extra
    argument, also introduce a 'mask' variable into pkcs1pad_create().

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Initializing a crypto_aead_spawn currently requires:

    1. Set spawn->base.inst to point to the instance.
    2. Call crypto_grab_aead().

    But there's no reason for these steps to be separate, and in fact this
    unneeded complication has caused at least one bug, the one fixed by
    commit 6db43410179b ("crypto: adiantum - initialize crypto_spawn::inst").

    So just make crypto_grab_aead() take the instance as an argument.

    To keep the function calls from getting too unwieldy due to this extra
    argument, also introduce a 'mask' variable into the affected places
    which weren't already using one.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Initializing a crypto_skcipher_spawn currently requires:

    1. Set spawn->base.inst to point to the instance.
    2. Call crypto_grab_skcipher().

    But there's no reason for these steps to be separate, and in fact this
    unneeded complication has caused at least one bug, the one fixed by
    commit 6db43410179b ("crypto: adiantum - initialize crypto_spawn::inst").

    So just make crypto_grab_skcipher() take the instance as an argument.

    To keep the function calls from getting too unwieldy due to this extra
    argument, also introduce a 'mask' variable into the affected places
    which weren't already using one.
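
    The new prototype, for reference (sketch; the call-site names are
    illustrative):

        int crypto_grab_skcipher(struct crypto_skcipher_spawn *spawn,
                                 struct crypto_instance *inst,
                                 const char *name, u32 type, u32 mask);

        err = crypto_grab_skcipher(spawn, skcipher_crypto_instance(inst),
                                   crypto_attr_alg_name(tb[1]), 0, mask);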

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Define struct ahash_instance in a way analogous to struct
    skcipher_instance, struct aead_instance, and struct akcipher_instance,
    where the struct is defined to include both the algorithm structure at
    the beginning and the additional crypto_instance fields at the end.

    This is needed to allow allocating ahash instances directly using
    kzalloc(sizeof(*inst) + sizeof(*ictx), ...) in the same way as skcipher,
    aead, and akcipher instances. In turn, that's needed to make spawns be
    initialized in a consistent way everywhere.

    Also take advantage of the addition of the base instance to struct
    ahash_instance by simplifying the ahash_crypto_instance() and
    ahash_instance() functions.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Define struct shash_instance in a way analogous to struct
    skcipher_instance, struct aead_instance, and struct akcipher_instance,
    where the struct is defined to include both the algorithm structure at
    the beginning and the additional crypto_instance fields at the end.

    This is needed to allow allocating shash instances directly using
    kzalloc(sizeof(*inst) + sizeof(*ictx), ...) in the same way as skcipher,
    aead, and akcipher instances. In turn, that's needed to make spawns be
    initialized in a consistent way everywhere.

    Also take advantage of the addition of the base instance to struct
    shash_instance by simplifying the shash_crypto_instance() and
    shash_instance() functions.
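
    The resulting layout looks roughly like this (hedged; modeled on the
    struct skcipher_instance definition cited above):

        struct shash_instance {
                void (*free)(struct shash_instance *inst);
                union {
                        struct {
                                char head[offsetof(struct shash_alg, base)];
                                struct crypto_instance base;
                        } s;
                        struct shash_alg alg;
                };
        };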

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • The CRYPTO_TFM_RES_WEAK_KEY flag was apparently meant as a way to make
    the ->setkey() functions provide more information about errors.

    However, no one actually checks for this flag, which makes it pointless.
    There are also no tests that verify that all algorithms actually set (or
    don't set) it correctly.

    This is also the last remaining CRYPTO_TFM_RES_* flag, which means that
    it's the only thing still needing all the boilerplate code which
    propagates these flags around from child => parent tfms.

    And if someone ever needs to distinguish this error in the future (which
    is somewhat unlikely, as it's been unneeded for a long time), it would
    be much better to just define a new return value like -EKEYREJECTED.
    That would be much simpler, less error-prone, and easier to test.

    So just remove this flag.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • The CRYPTO_TFM_RES_BAD_KEY_LEN flag was apparently meant as a way to
    make the ->setkey() functions provide more information about errors.

    However, no one actually checks for this flag, which makes it pointless.

    Also, many algorithms fail to set this flag when given a bad length key.
    Reviewing just the generic implementations, this is the case for
    aes-fixed-time, cbcmac, echainiv, nhpoly1305, pcrypt, rfc3686, rfc4309,
    rfc7539, rfc7539esp, salsa20, seqiv, and xcbc. But there are probably
    many more in arch/*/crypto/ and drivers/crypto/.

    Some algorithms can even set this flag when the key is the correct
    length. For example, authenc and authencesn set it when the key payload
    is malformed in any way (not just a bad length), the atmel-sha and ccree
    drivers can set it if a memory allocation fails, and the chelsio driver
    sets it for bad auth tag lengths, not just bad key lengths.

    So even if someone actually wanted to start checking this flag (which
    seems unlikely, since it's been unused for a long time), there would be
    a lot of work needed to get it working correctly. But it would probably
    be much better to go back to the drawing board and just define different
    return values, like -EINVAL if the key is invalid for the algorithm vs.
    -EKEYREJECTED if the key was rejected by a policy like "no weak keys".
    That would be much simpler, less error-prone, and easier to test.

    So just remove this flag.
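
    As a hedged illustration of the suggested return-value approach (all
    names here are hypothetical):

        static int example_setkey(struct crypto_skcipher *tfm,
                                  const u8 *key, unsigned int keylen)
        {
                if (keylen != 16)
                        return -EINVAL;       /* invalid for the algorithm */
                if (example_key_is_weak(key)) /* hypothetical policy check */
                        return -EKEYREJECTED; /* rejected: "no weak keys" */
                /* ... program the key ... */
                return 0;
        }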

    Signed-off-by: Eric Biggers
    Reviewed-by: Horia Geantă
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • skcipher_walk_aead() is unused and is identical to
    skcipher_walk_aead_encrypt(), so remove it.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

27 Dec, 2019

1 commit

  • This patch introduces the skcipher_ialg_simple helper which fetches
    the crypto_alg structure from a simple skcipher instance's spawn.

    This allows us to remove the third argument from the function
    skcipher_alloc_instance_simple.

    In doing so the reference count to the algorithm is now maintained
    by the Crypto API and the caller no longer needs to drop the alg
    refcount.
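
    The helper itself is small (sketch of the assumed shape; the exact
    spawn type may differ):

        static inline struct crypto_alg *skcipher_ialg_simple(
                struct skcipher_instance *inst)
        {
                struct crypto_spawn *spawn = skcipher_instance_ctx(inst);

                return spawn->alg;
        }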

    Signed-off-by: Herbert Xu

    Herbert Xu
     

20 Dec, 2019

1 commit

  • Some of the algorithm unregistration functions return -ENOENT when asked
    to unregister a non-registered algorithm, while others always return 0
    or always return void. But no users check the return value, except for
    two of the bulk unregistration functions, which print a message on error
    but still always return 0 to their caller, and crypto_del_alg(), which
    calls crypto_unregister_instance(), which always returns 0.

    Since unregistering a non-registered algorithm is always a kernel bug
    but there isn't anything callers should do to handle this situation at
    runtime, let's simplify things by making all the unregistration
    functions return void, and moving the error message into
    crypto_unregister_alg() and upgrading it to a WARN().
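
    In prototype terms, the visible change is simply:

        /* before: callers were expected to check, but never did */
        int crypto_unregister_alg(struct crypto_alg *alg);

        /* after: failure is a kernel bug and becomes a WARN() inside */
        void crypto_unregister_alg(struct crypto_alg *alg);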

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

11 Dec, 2019

4 commits

  • This patch switches hmac over to the new init_tfm/exit_tfm interface
    as opposed to cra_init/cra_exit. This way the shash API can make
    sure that descsize does not exceed the maximum.

    This patch also adds the API helper shash_alg_instance.
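
    The relevant shash_alg hooks have these signatures (the hmac method
    names are assumed):

        /* int  (*init_tfm)(struct crypto_shash *tfm); */
        /* void (*exit_tfm)(struct crypto_shash *tfm); */
        .init_tfm = hmac_init_tfm,
        .exit_tfm = hmac_exit_tfm,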

    Signed-off-by: Herbert Xu
    Reviewed-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • Building with W=1 causes a warning:

    CC [M] arch/x86/crypto/chacha_glue.o
    In file included from arch/x86/crypto/chacha_glue.c:10:
    ./include/crypto/internal/chacha.h:37:1: warning: 'inline' is not at beginning of declaration [-Wold-style-declaration]
    37 | static int inline chacha12_setkey(struct crypto_skcipher *tfm, const u8 *key,
    | ^~~~~~

    Straighten out the order to match the rest of the header file.
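
    The corrected declaration (the tail of the parameter list is assumed
    from the matching chacha20 helper) reads:

        static inline int chacha12_setkey(struct crypto_skcipher *tfm,
                                          const u8 *key, unsigned int keysize)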

    Signed-off-by: Valdis Kletnieks
    Signed-off-by: Herbert Xu

    Valdis Klētnieks
     
  • Move crypto_aead_maxauthsize() to <crypto/aead.h> so that it's available
    to users of the API, not just AEAD implementations.

    This will be used by the self-tests.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • The essiv and hmac templates refuse to use any hash algorithm that has a
    ->setkey() function, which includes not just algorithms that always need
    a key, but also algorithms that optionally take a key.

    Previously the only optionally-keyed hash algorithms in the crypto API
    were non-cryptographic algorithms like crc32, so this didn't really
    matter. But that's changed with BLAKE2 support being added. BLAKE2
    should work with essiv and hmac, just like any other cryptographic hash.

    Fix this by allowing the use of both algorithms without a ->setkey()
    function and algorithms that have the OPTIONAL_KEY flag set.
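
    A predicate of roughly this shape captures the relaxed requirement
    (hedged sketch, mirroring the crypto_shash_alg_needs_key() helper):

        /* Reject a hash only if it cannot work without a key. */
        static inline bool crypto_shash_alg_needs_key(struct shash_alg *alg)
        {
                return crypto_shash_alg_has_setkey(alg) &&
                       !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY);
        }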

    Signed-off-by: Eric Biggers
    Acked-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Eric Biggers
     

17 Nov, 2019

8 commits

  • Now that all users of the deprecated ablkcipher interface have been
    moved to the skcipher interface, ablkcipher is no longer used and
    can be removed.

    Reviewed-by: Eric Biggers
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • Wire up our newly added Blake2s implementation via the shash API.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • The C implementation was originally based on Samuel Neves' public
    domain reference implementation but has since been heavily modified
    for the kernel. We're able to do compile-time optimizations by moving
    some scaffolding around the final function into the header file.

    Information: https://blake2.net/

    Signed-off-by: Jason A. Donenfeld
    Signed-off-by: Samuel Neves
    Co-developed-by: Samuel Neves
    [ardb: - move from lib/zinc to lib/crypto
    - remove simd handling
    - rewrote selftest for better coverage
    - use fixed digest length for blake2s_hmac() and rename to
    blake2s256_hmac() ]
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Jason A. Donenfeld
     
  • Remove the dependency on the generic Poly1305 driver. Instead, depend
    on the generic library so that we only reuse code without pulling in
    the generic skcipher implementation as well.

    While at it, remove the logic that prefers the non-SIMD path for short
    inputs - this is no longer necessary after recent FPU handling changes
    on x86.

    Since this removes the last remaining user of the routines exported
    by the generic shash driver, unexport them and make them static.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • In preparation of exposing a Poly1305 library interface directly from
    the accelerated x86 driver, align the state descriptor of the x86 code
    with the one used by the generic driver. This is needed to make the
    library interface unified between all implementations.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • Move the core Poly1305 routines shared between the generic Poly1305
    shash driver and the Adiantum and NHPoly1305 drivers into a separate
    library so that using just these pieces does not pull in the crypto
    API pieces of the generic Poly1305 routine.

    In a subsequent patch, we will augment this generic library with
    init/update/final routines so that the Poly1305 algorithm can be used
    directly without the need for using the crypto API's shash abstraction.
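
    The core entry points of the library are roughly (hedged signatures):

        void poly1305_core_setkey(struct poly1305_key *key, const u8 *raw_key);
        void poly1305_core_blocks(struct poly1305_state *state,
                                  const struct poly1305_key *key,
                                  const void *src, unsigned int nblocks,
                                  u32 hibit);
        void poly1305_core_emit(const struct poly1305_state *state, void *dst);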

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • Now that all users of generic ChaCha code have moved to the core library,
    there is no longer a need for the generic ChaCha skcipher driver to
    export parts of its implementation for reuse by other drivers. So drop
    the exports, and make the symbols static.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • Currently, our generic ChaCha implementation consists of a permute
    function in lib/chacha.c that operates on the 64-byte ChaCha state
    directly [and which is always included into the core kernel since it
    is used by the /dev/random driver], and the crypto API plumbing to
    expose it as a skcipher.

    In order to support in-kernel users that need the ChaCha streamcipher
    but have no need [or tolerance] for going through the abstractions of
    the crypto API, let's expose the streamcipher bits via a library API
    as well, in a way that permits the implementation to be superseded by
    an architecture specific one if provided.

    So move the streamcipher code into a separate module in lib/crypto,
    and expose the init() and crypt() routines to users of the library.
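
    The exposed library interface is roughly (hedged signatures):

        void chacha_init(u32 *state, const u32 *key, const u8 *iv);
        void chacha_crypt(u32 *state, u8 *dst, const u8 *src,
                          unsigned int bytes, int nrounds);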

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     

01 Nov, 2019

1 commit

  • Now that all "blkcipher" algorithms have been converted to "skcipher",
    remove the blkcipher algorithm type.

    The skcipher (symmetric key cipher) algorithm type was introduced a few
    years ago to replace both blkcipher and ablkcipher (synchronous and
    asynchronous block cipher). The advantages of skcipher include:

    - A much less confusing name, since none of these algorithm types have
    ever actually been for raw block ciphers, but rather for all
    length-preserving encryption modes including block cipher modes of
    operation, stream ciphers, and other length-preserving modes.

    - It unified blkcipher and ablkcipher into a single algorithm type
    which supports both synchronous and asynchronous implementations.
    Note, blkcipher already operated only on scatterlists, so the fact
    that skcipher does too isn't a regression in functionality.

    - Better type safety by using struct skcipher_alg, struct
    crypto_skcipher, etc. instead of crypto_alg, crypto_tfm, etc.

    - It sometimes simplifies the implementations of algorithms.

    Also, the blkcipher API was no longer being tested.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

04 Oct, 2019

1 commit

  • When algif_skcipher does a partial operation it always processes data
    that is a multiple of blocksize. However, for algorithms such as
    CTR this is wrong because even though it can process any number of
    bytes overall, the partial block must come at the very end and not
    in the middle.

    This is exactly what chunksize is meant to describe so this patch
    changes blocksize to chunksize.
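
    As a hedged example: for "ctr(aes)" the skcipher blocksize is 1 (CTR
    is a stream cipher overall), while the chunksize is the 16-byte AES
    block, so a partial operation must stop on a chunksize boundary:

        unsigned int chunk = crypto_skcipher_chunksize(tfm); /* 16 for ctr(aes) */
        len -= len % chunk; /* only the very end may be a partial block */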

    Fixes: 8ff590903d5f ("crypto: algif_skcipher - User-space...")
    Signed-off-by: Herbert Xu
    Acked-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Herbert Xu
     

22 Aug, 2019

2 commits

  • Another one for the cipher museum: split off DES core processing into
    a separate module so other drivers (mostly for crypto accelerators)
    can reuse the code without pulling in the generic DES cipher itself.
    This will also permit the cipher interface to be made private to the
    crypto API itself once we move the only user in the kernel (CIFS) to
    this library interface.
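
    The library interface consists of plain functions along these lines
    (hedged signatures):

        int  des_expand_key(struct des_ctx *ctx, const u8 *key,
                            unsigned int keylen);
        void des_encrypt(const struct des_ctx *ctx, u8 *dst, const u8 *src);
        void des_decrypt(const struct des_ctx *ctx, u8 *dst, const u8 *src);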

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • The recently added helper routine to perform key strength validation
    of triple DES keys is slightly inadequate, since it comes in two versions,
    neither of which is of much use for anything other than skciphers (and
    many drivers still use the older blkcipher interfaces).

    So let's add a new helper and, considering that this is a helper function
    that is only intended to be used by crypto code itself, put it in a new
    des.h header under crypto/internal.

    While at it, implement a similar helper for single DES, so that we can
    start replacing the pattern of calling des_ekey() into a temp buffer
    that occurs in many drivers in drivers/crypto.
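
    The helpers take the tfm so they can honor the
    CRYPTO_TFM_REQ_FORBID_WEAK_KEYS request flag (hedged signatures):

        int crypto_des_verify_key(struct crypto_tfm *tfm, const u8 *key);
        int crypto_des3_ede_verify_key(struct crypto_tfm *tfm, const u8 *key);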

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel