14 Dec, 2020

1 commit

  • This patch adds kernel support for encryption/decryption of TLS 1.0
    records using block ciphers. The implementation is similar to authenc in
    the sense that the base algorithms (AES, SHA1) are combined in a template
    to produce TLS encapsulation frames. The composite algorithm will be
    called "tls10(hmac(sha1),cbc(aes))". The cipher and HMAC keys are
    wrapped in the same format used by authenc.c.

    Signed-off-by: Radu Alexe
    Signed-off-by: Cristian Stoica
    Signed-off-by: Horia Geantă

    Merged LF commit (rebase-20200703/crypto/core):
    856fb52acc28 ("crypto: tls - fix logical-not-parentheses compile warning")

    Merged LF commit (next-nxp-20200811):
    0f90a0618247 ("crypto: tls: fix build issue")

    Signed-off-by: Horia Geantă

    Radu Alexe
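
    A minimal sketch of how a consumer might feed the wrapped key to such a
    composite transform, assuming the out-of-tree "tls10" template registers
    an AEAD and reuses the authenc key layout (buffer sizing checks and error
    handling are abbreviated; the template name and its availability are
    assumptions):

    #include <crypto/aead.h>
    #include <crypto/authenc.h>
    #include <linux/rtnetlink.h>
    #include <linux/slab.h>
    #include <linux/string.h>

    /* Pack authkey || enckey behind a CRYPTO_AUTHENC_KEYA_PARAM rtattr,
     * the same layout crypto_authenc_extractkeys() expects. */
    static int tls10_setkey_example(struct crypto_aead *tfm,
                                    const u8 *authkey, unsigned int authkeylen,
                                    const u8 *enckey, unsigned int enckeylen)
    {
        unsigned int keylen = RTA_SPACE(sizeof(struct crypto_authenc_key_param)) +
                              authkeylen + enckeylen;
        struct crypto_authenc_key_param *param;
        struct rtattr *rta;
        u8 *key, *p;
        int err;

        key = kmalloc(keylen, GFP_KERNEL);
        if (!key)
            return -ENOMEM;

        rta = (struct rtattr *)key;
        rta->rta_type = CRYPTO_AUTHENC_KEYA_PARAM;
        rta->rta_len = RTA_LENGTH(sizeof(*param));
        param = RTA_DATA(rta);
        param->enckeylen = cpu_to_be32(enckeylen);

        p = key + RTA_SPACE(sizeof(*param));
        memcpy(p, authkey, authkeylen);
        memcpy(p + authkeylen, enckey, enckeylen);

        err = crypto_aead_setkey(tfm, key, keylen);
        kfree_sensitive(key);
        return err;
    }

    The tfm itself would be obtained with
    crypto_alloc_aead("tls10(hmac(sha1),cbc(aes))", 0, 0), mirroring how
    authenc transforms are instantiated.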
     

25 Sep, 2020

3 commits


20 Aug, 2020

1 commit

  • The header file algapi.h includes skbuff.h unnecessarily since
    all we need is a forward declaration for struct sk_buff. This
    patch removes that inclusion.

    Unfortunately skbuff.h pulls in a lot of things and drivers over
    the years have come to rely on it so this patch adds a lot of
    missing inclusions that result from this.

    Signed-off-by: Herbert Xu

    Herbert Xu
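
    For reference, the change amounts to swapping the heavyweight include for
    a forward declaration, roughly:

    /* Before: pulled the whole sk_buff machinery into every algapi.h user. */
    #include <linux/skbuff.h>

    /* After: a forward declaration is enough, since algapi.h only refers to
     * struct sk_buff through pointers and never dereferences it. */
    struct sk_buff;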
     

08 Aug, 2020

1 commit

  • As said by Linus:

    A symmetric naming is only helpful if it implies symmetries in use.
    Otherwise it's actively misleading.

    In "kzalloc()", the z is meaningful and an important part of what the
    caller wants.

    In "kzfree()", the z is actively detrimental, because maybe in the
    future we really _might_ want to use that "memfill(0xdeadbeef)" or
    something. The "zero" part of the interface isn't even _relevant_.

    The main reason that kzfree() exists is to clear sensitive information
    that should not be leaked to other future users of the same memory
    objects.

    Rename kzfree() to kfree_sensitive() to follow the example of the recently
    added kvfree_sensitive() and make the intention of the API more explicit.
    In addition, memzero_explicit() is used to clear the memory to make sure
    that it won't get optimized away by the compiler.

    The renaming is done by using the command sequence:

    git grep -w --name-only kzfree |\
    xargs sed -i 's/kzfree/kfree_sensitive/'

    followed by some editing of the kfree_sensitive() kerneldoc and adding
    a kzfree backward compatibility macro in slab.h.

    [akpm@linux-foundation.org: fs/crypto/inline_crypt.c needs linux/slab.h]
    [akpm@linux-foundation.org: fix fs/crypto/inline_crypt.c some more]

    Suggested-by: Joe Perches
    Signed-off-by: Waiman Long
    Signed-off-by: Andrew Morton
    Acked-by: David Howells
    Acked-by: Michal Hocko
    Acked-by: Johannes Weiner
    Cc: Jarkko Sakkinen
    Cc: James Morris
    Cc: "Serge E. Hallyn"
    Cc: Joe Perches
    Cc: Matthew Wilcox
    Cc: David Rientjes
    Cc: Dan Carpenter
    Cc: "Jason A . Donenfeld"
    Link: http://lkml.kernel.org/r/20200616154311.12314-3-longman@redhat.com
    Signed-off-by: Linus Torvalds

    Waiman Long
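
    A sketch of the resulting semantics (not the verbatim slab implementation):
    the renamed helper scrubs the allocation with memzero_explicit() before
    freeing it, so the compiler cannot optimize the zeroing away:

    #include <linux/slab.h>
    #include <linux/string.h>

    static void kfree_sensitive_sketch(const void *objp)
    {
        size_t ks;

        if (!objp)
            return;

        ks = ksize(objp);                    /* usable size of the object */
        memzero_explicit((void *)objp, ks);  /* barrier keeps the memset alive */
        kfree(objp);
    }

    Callers simply switch names, e.g. kzfree(ctx->key) becomes
    kfree_sensitive(ctx->key); the kzfree backward compatibility macro in
    slab.h keeps old code building during the transition.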
     

02 Apr, 2020

1 commit

  • Pull crypto updates from Herbert Xu:
    "API:
    - Fix out-of-sync IVs in self-test for IPsec AEAD algorithms

    Algorithms:
    - Use formally verified implementation of x86/curve25519

    Drivers:
    - Enhance hwrng support in caam

    - Use crypto_engine for skcipher/aead/rsa/hash in caam

    - Add Xilinx AES driver

    - Add uacce driver

    - Register zip engine to uacce in hisilicon

    - Add support for OCTEON TX CPT engine in marvell"

    * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (162 commits)
    crypto: af_alg - bool type cosmetics
    crypto: arm[64]/poly1305 - add artifact to .gitignore files
    crypto: caam - limit single JD RNG output to maximum of 16 bytes
    crypto: caam - enable prediction resistance in HRWNG
    bus: fsl-mc: add api to retrieve mc version
    crypto: caam - invalidate entropy register during RNG initialization
    crypto: caam - check if RNG job failed
    crypto: caam - simplify RNG implementation
    crypto: caam - drop global context pointer and init_done
    crypto: caam - use struct hwrng's .init for initialization
    crypto: caam - allocate RNG instantiation descriptor with GFP_DMA
    crypto: ccree - remove duplicated include from cc_aead.c
    crypto: chelsio - remove set but not used variable 'adap'
    crypto: marvell - enable OcteonTX cpt options for build
    crypto: marvell - add the Virtual Function driver for CPT
    crypto: marvell - add support for OCTEON TX CPT engine
    crypto: marvell - create common Kconfig and Makefile for Marvell
    crypto: arm/neon - memzero_explicit aes-cbc key
    crypto: bcm - Use scnprintf() for avoiding potential buffer overflow
    crypto: atmel-i2c - Fix wakeup fail
    ...

    Linus Torvalds
     

12 Mar, 2020

2 commits

  • Do test_aead_vs_generic_impl() before test_aead_inauthentic_inputs() so
    that any differences with the generic driver are detected before getting
    to the inauthentic input tests, which intentionally use only the driver
    being tested (so that they run even if a generic driver is unavailable).

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • rfc4543 was missing from the list of algorithms that may treat the end
    of the AAD buffer specially.

    Also, with rfc4106, rfc4309, rfc4543, and rfc7539esp, the end of the AAD
    buffer is actually supposed to contain a second copy of the IV, and
    we've concluded that if the IV copies don't match, the behavior is
    implementation-defined. The fuzz tests can't easily test that case, so
    make them use only inputs where the two IV copies match.

    Reported-by: Geert Uytterhoeven
    Fixes: 40153b10d91c ("crypto: testmgr - fuzz AEADs against their generic implementation")
    Cc: Stephan Mueller
    Originally-from: Gilad Ben-Yossef
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

14 Feb, 2020

1 commit

  • This patch enables the selftests for the s390 specific protected key
    AES (PAES) cipher implementations:
    * cbc-paes-s390
    * ctr-paes-s390
    * ecb-paes-s390
    * xts-paes-s390
    PAES is an AES cipher but with encrypted ('protected') key
    material. However, the paes ciphers are able to derive a protected
    key from clear key material with the help of the pkey kernel module.

    So this patch now enables the generic AES tests for the paes
    ciphers. Under the hood the setkey() functions rearrange the clear key
    values as a clear key token, so the pkey kernel module is able to
    provide protected key blobs from the given clear key values. The
    derived protected key blobs are then used within the paes ciphers and
    should produce the very same results as the generic AES implementation
    with the clear key values.

    The s390-paes cipher testlist entries are surrounded
    by #if IS_ENABLED(CONFIG_CRYPTO_PAES_S390) because they don't
    make any sense on non s390 platforms or without the PAES
    cipher implementation.

    Link: http://lkml.kernel.org/r/20200213083946.zicarnnt3wizl5ty@gondor.apana.org.au
    Acked-by: Herbert Xu
    Signed-off-by: Harald Freudenberger
    Signed-off-by: Vasily Gorbik

    Harald Freudenberger
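
    A sketch of what one such guarded testmgr entry looks like; the suite
    wiring below reuses the generic AES CBC vectors as the commit describes,
    but the exact fields are abbreviated:

    /* crypto/testmgr.c (sketch): only compiled into the list when the s390
     * PAES driver can exist, since the entries are meaningless elsewhere. */
    #if IS_ENABLED(CONFIG_CRYPTO_PAES_S390)
        {
            .alg = "cbc-paes-s390",
            .fips_allowed = 1,
            .test = alg_test_skcipher,
            .suite = {
                .cipher = __VECS(aes_cbc_tv_template)
            }
        },
    #endif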
     

11 Dec, 2019

6 commits

  • The whole point of using an AEAD over length-preserving encryption is
    that the data is authenticated. However currently the fuzz tests don't
    test any inauthentic inputs to verify that the data is actually being
    authenticated. And only two algorithms ("rfc4543(gcm(aes))" and
    "ccm(aes)") even have any inauthentic test vectors at all.

    Therefore, update the AEAD fuzz tests to sometimes generate inauthentic
    test vectors, either by generating a (ciphertext, AAD) pair without
    using the key, or by mutating an authentic pair that was generated.

    To avoid flakiness, only assume this works reliably if the auth tag is
    at least 8 bytes. Also account for the rfc4106, rfc4309, and rfc7539esp
    algorithms intentionally ignoring the last 8 AAD bytes, and for some
    algorithms doing extra checks that result in EINVAL rather than EBADMSG.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • In preparation for adding inauthentic input fuzz tests, which don't
    require that a generic implementation of the algorithm be available,
    refactor test_aead_vs_generic_impl() so that instead there's a
    higher-level function test_aead_extra() which initializes a struct
    aead_extra_tests_ctx and then calls test_aead_vs_generic_impl() with a
    pointer to that struct.

    As a bonus, this reduces stack usage.

    Also switch from crypto_aead_alg(tfm)->maxauthsize to
    crypto_aead_maxauthsize(), now that the latter is available in
    <crypto/aead.h>.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • The alignment bug in ghash_setkey() fixed by commit 5c6bc4dfa515
    ("crypto: ghash - fix unaligned memory access in ghash_setkey()")
    wasn't reliably detected by the crypto self-tests on ARM because the
    tests only set the keys directly from the test vectors.

    To improve test coverage, update the tests to sometimes pass misaligned
    keys to setkey(). This applies to shash, ahash, skcipher, and aead.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • When checking two implementations of the same skcipher algorithm for
    consistency, require that the minimum key size be the same, not just the
    maximum key size. There's no good reason to allow different minimum key
    sizes.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Currently if the comparison fuzz tests encounter an encryption error
    when generating an skcipher or AEAD test vector, they will still test
    the decryption side (passing it the uninitialized ciphertext buffer)
    and expect it to fail with the same error.

    This is sort of broken because it's not well-defined usage of the API to
    pass an uninitialized buffer, and furthermore in the AEAD case it's
    acceptable for the decryption error to be EBADMSG (meaning "inauthentic
    input") even if the encryption error was something else like EINVAL.

    Fix this for skcipher by explicitly initializing the ciphertext buffer
    on error, and for AEAD by skipping the decryption test on error.

    Reported-by: Pascal Van Leeuwen
    Fixes: d435e10e67be ("crypto: testmgr - fuzz skciphers against their generic implementation")
    Fixes: 40153b10d91c ("crypto: testmgr - fuzz AEADs against their generic implementation")
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Due to the removal of the blkcipher and ablkcipher algorithm types,
    crypto_skcipher::keysize is now redundant since it always equals
    crypto_skcipher_alg(tfm)->max_keysize.

    Remove it and update crypto_skcipher_default_keysize() accordingly.

    Also rename crypto_skcipher_default_keysize() to
    crypto_skcipher_max_keysize() to clarify that it specifically returns
    the maximum key size, not some unspecified "default".

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

17 Nov, 2019

2 commits

  • In preparation for introducing KPP implementations of Curve25519, import
    the set of test cases proposed by the Zinc patch set, but converted to
    the KPP format.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • As suggested by Eric for the Blake2b implementation contributed by
    David, introduce a set of test vectors for Blake2s covering different
    digest and key sizes.

                blake2s-128   blake2s-160   blake2s-224   blake2s-256
    ------------------------------------------------------------------
    len=0   |     klen=0        klen=1        klen=16       klen=32
    len=1   |     klen=16       klen=32       klen=0        klen=1
    len=7   |     klen=32       klen=0        klen=1        klen=16
    len=15  |     klen=1        klen=16       klen=32       klen=0
    len=64  |     klen=0        klen=1        klen=16       klen=32
    len=247 |     klen=16       klen=32       klen=0        klen=1
    len=256 |     klen=32       klen=0        klen=1        klen=16

    Cc: David Sterba
    Cc: Eric Biggers
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     

01 Nov, 2019

1 commit

  • Test vectors for blake2b with various digest sizes. As the algorithm is
    the same up to the digest calculation, the key and input data lengths are
    distributed in a way that tests all combinations of the two across the
    digest sizes.

    Based on the suggestion from Eric, the following input sizes are tested:
    [0, 1, 7, 15, 64, 247, 256]. The blake2b block size is 128, so both
    padded and non-padded input buffers are covered.

                blake2b-160   blake2b-256   blake2b-384   blake2b-512
    ------------------------------------------------------------------
    len=0   |     klen=0        klen=1        klen=32       klen=64
    len=1   |     klen=32       klen=64       klen=0        klen=1
    len=7   |     klen=64       klen=0        klen=1        klen=32
    len=15  |     klen=1        klen=32       klen=64       klen=0
    len=64  |     klen=0        klen=1        klen=32       klen=64
    len=247 |     klen=32       klen=64       klen=0        klen=1
    len=256 |     klen=64       klen=0        klen=1        klen=32

    Where key:

    - klen=0: empty key
    - klen=1: 1 byte value 0x42, 'B'
    - klen=32: first 32 bytes of the default key, sequence 00..1f
    - klen=64: default key, sequence 00..3f

    The unkeyed vectors are ordered before keyed, as this is required by
    testmgr.

    CC: Eric Biggers
    Signed-off-by: David Sterba
    Signed-off-by: Herbert Xu

    David Sterba
     

04 Oct, 2019

3 commits


30 Aug, 2019

1 commit


26 Jul, 2019

4 commits

  • Three variants of AEGIS were proposed for the CAESAR competition, and
    only one was selected for the final portfolio: AEGIS128.

    The other variants, AEGIS128L and AEGIS256, are not likely to ever turn
    up in networking protocols or other places where interoperability
    between Linux and other systems is a concern, nor are they likely to
    be subjected to further cryptanalysis. However, uninformed users may
    think that AEGIS128L (which is faster) is equally fit for use.

    So let's remove them now, before anyone starts using them and we are
    forced to support them forever.

    Note that there are no known flaws in the algorithms or in any of these
    implementations, but they have simply outlived their usefulness.

    Reviewed-by: Ondrej Mosnacek
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • MORUS was not selected as a winner in the CAESAR competition, which
    is not surprising since it is considered to be cryptographically
    broken [0]. (Note that this is not an implementation defect, but a
    flaw in the underlying algorithm). Since it is unlikely to be in use
    currently, let's remove it before we're stuck with it.

    [0] https://eprint.iacr.org/2019/172.pdf

    Reviewed-by: Ondrej Mosnacek
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • Add self-tests for the lzo-rle algorithm.

    Signed-off-by: Hannah Pan
    Signed-off-by: Herbert Xu

    Hannah Pan
     
  • Crypto test failures in FIPS mode cause an immediate panic, but
    on some systems the cryptographic boundary extends beyond just
    the Linux-controlled domain.

    Add a simple atomic notification chain to allow interested parties
    to register to receive notification prior to us kicking the bucket.

    Signed-off-by: Gilad Ben-Yossef
    Signed-off-by: Herbert Xu

    Gilad Ben-Yossef
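
    A minimal sketch of that pattern, assuming a chain named
    fips_fail_notif_chain and helpers along the lines the patch adds (the
    exact symbol names are assumptions):

    #include <linux/fips.h>
    #include <linux/notifier.h>

    /* crypto core side: a chain that is safe to walk from atomic context. */
    static ATOMIC_NOTIFIER_HEAD(fips_fail_notif_chain);

    void fips_fail_notify(void)
    {
        if (fips_enabled)
            atomic_notifier_call_chain(&fips_fail_notif_chain, 0, NULL);
    }

    /* interested party: get told before the kernel panics. */
    static int my_fips_fail_cb(struct notifier_block *nb,
                               unsigned long action, void *data)
    {
        /* e.g. signal external hardware that the boundary is compromised */
        return NOTIFY_OK;
    }

    static struct notifier_block my_fips_nb = {
        .notifier_call = my_fips_fail_cb,
    };

    /* registration from the interested driver:
     * atomic_notifier_chain_register(&fips_fail_notif_chain, &my_fips_nb); */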
     

09 Jul, 2019

1 commit

  • Pull crypto updates from Herbert Xu:
    "Here is the crypto update for 5.3:

    API:
    - Test shash interface directly in testmgr
    - cra_driver_name is now mandatory

    Algorithms:
    - Replace arc4 crypto_cipher with library helper
    - Implement 5 way interleave for ECB, CBC and CTR on arm64
    - Add xxhash
    - Add continuous self-test on noise source to drbg
    - Update jitter RNG

    Drivers:
    - Add support for SHA204A random number generator
    - Add support for 7211 in iproc-rng200
    - Fix fuzz test failures in inside-secure
    - Fix fuzz test failures in talitos
    - Fix fuzz test failures in qat"

    * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (143 commits)
    crypto: stm32/hash - remove interruptible condition for dma
    crypto: stm32/hash - Fix hmac issue more than 256 bytes
    crypto: stm32/crc32 - rename driver file
    crypto: amcc - remove memset after dma_alloc_coherent
    crypto: ccp - Switch to SPDX license identifiers
    crypto: ccp - Validate the the error value used to index error messages
    crypto: doc - Fix formatting of new crypto engine content
    crypto: doc - Add parameter documentation
    crypto: arm64/aes-ce - implement 5 way interleave for ECB, CBC and CTR
    crypto: arm64/aes-ce - add 5 way interleave routines
    crypto: talitos - drop icv_ool
    crypto: talitos - fix hash on SEC1.
    crypto: talitos - move struct talitos_edesc into talitos.h
    lib/scatterlist: Fix mapping iterator when sg->offset is greater than PAGE_SIZE
    crypto/NX: Set receive window credits to max number of CRBs in RxFIFO
    crypto: asymmetric_keys - select CRYPTO_HASH where needed
    crypto: serpent - mark __serpent_setkey_sbox noinline
    crypto: testmgr - dynamically allocate crypto_shash
    crypto: testmgr - dynamically allocate testvec_config
    crypto: talitos - eliminate unneeded 'done' functions at build time
    ...

    Linus Torvalds
     

27 Jun, 2019

2 commits

  • The largest stack object in this file is now the shash descriptor.
    Since there are many other stack variables, this can push it
    over the 1024 byte warning limit, in particular with clang and
    KASAN:

    crypto/testmgr.c:1693:12: error: stack frame size of 1312 bytes in function '__alg_test_hash' [-Werror,-Wframe-larger-than=]

    Make test_hash_vs_generic_impl() do the same thing as the
    corresponding aead and skcipher functions by allocating the
    descriptor dynamically (see the sketch after this list). We can
    still do better than this, but it brings us well below the
    1024 byte limit.

    Suggested-by: Eric Biggers
    Fixes: 9a8a6b3f0950 ("crypto: testmgr - fuzz hashes against their generic implementation")
    Signed-off-by: Arnd Bergmann
    Reviewed-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Arnd Bergmann
     
  • On arm32, we get warnings about high stack usage in some of the functions:

    crypto/testmgr.c:2269:12: error: stack frame size of 1032 bytes in function 'alg_test_aead' [-Werror,-Wframe-larger-than=]
    static int alg_test_aead(const struct alg_test_desc *desc, const char *driver,
    ^
    crypto/testmgr.c:1693:12: error: stack frame size of 1312 bytes in function '__alg_test_hash' [-Werror,-Wframe-larger-than=]
    static int __alg_test_hash(const struct hash_testvec *vecs,
    ^

    One of the larger objects on the stack here is struct testvec_config, so
    change that to dynamic allocation.

    Fixes: 40153b10d91c ("crypto: testmgr - fuzz AEADs against their generic implementation")
    Fixes: d435e10e67be ("crypto: testmgr - fuzz skciphers against their generic implementation")
    Fixes: 9a8a6b3f0950 ("crypto: testmgr - fuzz hashes against their generic implementation")
    Signed-off-by: Arnd Bergmann
    Reviewed-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Arnd Bergmann
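
    The allocation pattern both patches converge on, sketched here for the
    shash descriptor case from the first item above (function and variable
    names are illustrative):

    #include <crypto/hash.h>
    #include <linux/slab.h>

    /* Instead of SHASH_DESC_ON_STACK(desc, tfm), which puts the descriptor
     * plus the algorithm's state on the caller's stack frame: */
    static int hash_one_buffer(struct crypto_shash *tfm,
                               const u8 *data, unsigned int len, u8 *out)
    {
        struct shash_desc *desc;
        int err;

        desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(tfm), GFP_KERNEL);
        if (!desc)
            return -ENOMEM;

        desc->tfm = tfm;
        err = crypto_shash_digest(desc, data, len, out);

        kfree_sensitive(desc);    /* the state may be derived from a key */
        return err;
    }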
     

20 Jun, 2019

1 commit

  • There are no remaining users of the cipher implementation, and there
    are no meaningful ways in which the arc4 cipher can be combined with
    templates other than ECB (and the way we do provide that combination
    is highly dubious to begin with).

    So let's drop the arc4 cipher altogether, and only keep the ecb(arc4)
    skcipher, which is used in various places in the kernel.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
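
    Remaining users go through the skcipher interface instead; a minimal
    sketch of allocating the surviving combination (error handling and
    request setup are omitted):

    #include <crypto/skcipher.h>
    #include <linux/err.h>

    static int alloc_arc4_example(struct crypto_skcipher **out)
    {
        struct crypto_skcipher *tfm;

        /* the bare "arc4" cipher is gone; only the skcipher wrapper stays */
        tfm = crypto_alloc_skcipher("ecb(arc4)", 0, 0);
        if (IS_ERR(tfm))
            return PTR_ERR(tfm);

        *out = tfm;    /* caller sets the key and builds requests as usual */
        return 0;
    }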
     

13 Jun, 2019

1 commit

  • Call cond_resched() after each fuzz test iteration. This avoids stall
    warnings if fuzz_iterations is set very high for testing purposes.

    While we're at it, also call cond_resched() after finishing testing each
    test vector.

    Signed-off-by: Eric Biggers
    Acked-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Eric Biggers
     

06 Jun, 2019

2 commits

  • xxhash is currently implemented as a self-contained module in lib/.
    This patch enables that module to be used as part of the generic kernel
    crypto framework by adding a simple shash wrapper around the 64-bit
    version (a sketch of such a wrapper appears after this list).

    I've also added test vectors (with help from Nick Terrell). The upstream
    xxhash code is tested by running hashing operations on random 222-byte
    data with seed values of 0 and a prime number. The upstream test
    suite can be found at https://github.com/Cyan4973/xxHash/blob/cf46e0c/xxhsum.c#L664

    Essentially, hashing is run on data of lengths 0, 1, 14, and 222 with the
    aforementioned seed values 0 and the prime 2654435761. The particular
    random 222-byte string was provided to me by Nick Terrell by reading
    /dev/random, and the checksums were calculated by the upstream xxhsum
    utility with the following bash script:

    dd if=/dev/random of=TEST_VECTOR bs=1 count=222

    for a in 0 1; do
        for l in 0 1 14 222; do
            for s in 0 2654435761; do
                echo algo $a length $l seed $s;
                head -c $l TEST_VECTOR | ~/projects/kernel/xxHash/xxhsum -H$a -s$s
            done
        done
    done

    This produces output as follows:

    algo 0 length 0 seed 0
    02cc5d05 stdin
    algo 0 length 0 seed 2654435761
    02cc5d05 stdin
    algo 0 length 1 seed 0
    25201171 stdin
    algo 0 length 1 seed 2654435761
    25201171 stdin
    algo 0 length 14 seed 0
    c1d95975 stdin
    algo 0 length 14 seed 2654435761
    c1d95975 stdin
    algo 0 length 222 seed 0
    b38662a6 stdin
    algo 0 length 222 seed 2654435761
    b38662a6 stdin
    algo 1 length 0 seed 0
    ef46db3751d8e999 stdin
    algo 1 length 0 seed 2654435761
    ac75fda2929b17ef stdin
    algo 1 length 1 seed 0
    27c3f04c2881203a stdin
    algo 1 length 1 seed 2654435761
    4a15ed26415dfe4d stdin
    algo 1 length 14 seed 0
    3d33dc700231dfad stdin
    algo 1 length 14 seed 2654435761
    ea5f7ddef9a64f80 stdin
    algo 1 length 222 seed 0
    5f3d3c08ec2bef34 stdin
    algo 1 length 222 seed 2654435761
    6a9df59664c7ed62 stdin

    algo 1 is the xxh64 variant; algo 0 is the 32-bit variant, which is
    currently not hooked up.

    Signed-off-by: Nikolay Borisov
    Reviewed-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Nikolay Borisov
     
  • For hash algorithms implemented using the "shash" algorithm type, test
    both the ahash and shash APIs, not just the ahash API.

    Testing the ahash API already tests the shash API indirectly, which is
    normally good enough. However, there have been corner cases where there
    have been shash bugs that don't get exposed through the ahash API. So,
    update testmgr to test the shash API too.

    This would have detected the arm64 SHA-1 and SHA-2 bugs for which fixes
    were just sent out (https://patchwork.kernel.org/patch/10964843/ and
    https://patchwork.kernel.org/patch/10965089/):

    alg: shash: sha1-ce test failed (wrong result) on test vector 0, cfg="init+finup aligned buffer"
    alg: shash: sha224-ce test failed (wrong result) on test vector 0, cfg="init+finup aligned buffer"
    alg: shash: sha256-ce test failed (wrong result) on test vector 0, cfg="init+finup aligned buffer"

    This also would have detected the bugs fixed by commit 307508d10729
    ("crypto: crct10dif-generic - fix use via crypto_shash_digest()") and
    commit dec3d0b1071a
    ("crypto: x86/crct10dif-pcl - fix use via crypto_shash_digest()").

    Signed-off-by: Eric Biggers
    Acked-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Eric Biggers
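
    A condensed sketch of what a shash wrapper around the 64-bit lib/ xxhash
    code can look like (structure names, the driver name, and the block size
    value here are illustrative, not the in-tree file verbatim):

    #include <crypto/internal/hash.h>
    #include <linux/module.h>
    #include <linux/xxhash.h>
    #include <asm/unaligned.h>

    struct xxh64_tfm_ctx  { u64 seed; };
    struct xxh64_desc_ctx { struct xxh64_state state; };

    static int xxh64_crypto_setkey(struct crypto_shash *tfm, const u8 *key,
                                   unsigned int keylen)
    {
        struct xxh64_tfm_ctx *tctx = crypto_shash_ctx(tfm);

        if (keylen != sizeof(tctx->seed))
            return -EINVAL;
        tctx->seed = get_unaligned_le64(key);    /* the seed acts as the key */
        return 0;
    }

    static int xxh64_crypto_init(struct shash_desc *desc)
    {
        const struct xxh64_tfm_ctx *tctx = crypto_shash_ctx(desc->tfm);
        struct xxh64_desc_ctx *dctx = shash_desc_ctx(desc);

        xxh64_reset(&dctx->state, tctx->seed);
        return 0;
    }

    static int xxh64_crypto_update(struct shash_desc *desc, const u8 *data,
                                   unsigned int len)
    {
        struct xxh64_desc_ctx *dctx = shash_desc_ctx(desc);

        return xxh64_update(&dctx->state, data, len);
    }

    static int xxh64_crypto_final(struct shash_desc *desc, u8 *out)
    {
        struct xxh64_desc_ctx *dctx = shash_desc_ctx(desc);

        put_unaligned_le64(xxh64_digest(&dctx->state), out);
        return 0;
    }

    static struct shash_alg xxh64_alg = {
        .digestsize = 8,
        .setkey     = xxh64_crypto_setkey,
        .init       = xxh64_crypto_init,
        .update     = xxh64_crypto_update,
        .final      = xxh64_crypto_final,
        .descsize   = sizeof(struct xxh64_desc_ctx),
        .base       = {
            .cra_name        = "xxhash64",
            .cra_driver_name = "xxhash64-sketch",
            .cra_blocksize   = 32,
            .cra_ctxsize     = sizeof(struct xxh64_tfm_ctx),
            .cra_module      = THIS_MODULE,
        },
    };

    /* module init/exit would call crypto_register_shash(&xxh64_alg) and
     * crypto_unregister_shash(&xxh64_alg). */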
     

31 May, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 3029 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

25 Apr, 2019

2 commits

  • Mark sm4 and the missing aes variants using protected keys, which are
    identical to the same algorithms with no HW-protected keys, as tested.

    Signed-off-by: Gilad Ben-Yossef
    Signed-off-by: Herbert Xu

    Gilad Ben-Yossef
     
  • The flags field in 'struct shash_desc' never actually does anything.
    The only ostensibly supported flag is CRYPTO_TFM_REQ_MAY_SLEEP.
    However, no shash algorithm ever sleeps, making this flag a no-op.

    With this being the case, inevitably some users who can't sleep wrongly
    pass MAY_SLEEP. These would all need to be fixed if any shash algorithm
    actually started sleeping. For example, the shash_ahash_*() functions,
    which wrap a shash algorithm with the ahash API, pass through MAY_SLEEP
    from the ahash API to the shash API. However, the shash functions are
    called under kmap_atomic(), so actually they're assumed to never sleep.

    Even if it turns out that some users do need preemption points while
    hashing large buffers, we could easily provide a helper function
    crypto_shash_update_large() which divides the data into smaller chunks
    and calls crypto_shash_update() and cond_resched() for each chunk. It's
    not necessary to have a flag in 'struct shash_desc', nor is it necessary
    to make individual shash algorithms aware of this at all.

    Therefore, remove shash_desc::flags, and document that the
    crypto_shash_*() functions can be called from any context.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
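
    The helper the commit message hypothesizes, sketched out (it is not part
    of the kernel API; the chunk size is an arbitrary choice for the sketch):

    #include <crypto/hash.h>
    #include <linux/minmax.h>
    #include <linux/sched.h>

    /* Hypothetical: split a large update into chunks so callers hashing big
     * buffers get explicit preemption points instead of a per-desc flag. */
    static int crypto_shash_update_large(struct shash_desc *desc,
                                         const u8 *data, unsigned int len)
    {
        const unsigned int chunk = 4096;    /* arbitrary for the sketch */
        int err;

        while (len) {
            unsigned int n = min(len, chunk);

            err = crypto_shash_update(desc, data, n);
            if (err)
                return err;

            data += n;
            len -= n;
            cond_resched();
        }
        return 0;
    }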
     

18 Apr, 2019

3 commits

  • When the extra crypto self-tests are enabled, test each AEAD algorithm
    against its generic implementation when one is available. This
    involves: checking the algorithm properties for consistency, then
    randomly generating test vectors using the generic implementation and
    running them against the implementation under test. Both good and bad
    inputs are tested.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • When the extra crypto self-tests are enabled, test each skcipher
    algorithm against its generic implementation when one is available.
    This involves: checking the algorithm properties for consistency, then
    randomly generating test vectors using the generic implementation and
    running them against the implementation under test. Both good and bad
    inputs are tested.

    This has already detected a bug in the skcipher_walk API, a bug in the
    LRW template, and an inconsistency in the cts implementations.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • When the extra crypto self-tests are enabled, test each hash algorithm
    against its generic implementation when one is available. This
    involves: checking the algorithm properties for consistency, then
    randomly generating test vectors using the generic implementation and
    running them against the implementation under test. Both good and bad
    inputs are tested.

    This has already detected a bug in the x86 implementation of poly1305,
    bugs in crct10dif, and an inconsistency in cbcmac.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
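
    Conceptually (this is not testmgr's actual code), the hash comparison
    boils down to something like the following, with the generic tfm
    allocated by name alongside the driver under test (e.g. "sha256-generic"
    next to "sha256-ce"); names here are illustrative:

    #include <crypto/hash.h>
    #include <linux/random.h>
    #include <linux/string.h>

    /* Compare one randomly generated vector between the implementation under
     * test and its generic counterpart; both tfm handles are assumed to be
     * allocated by the caller. */
    static int compare_one_hash_vector(struct crypto_shash *tfm,
                                       struct crypto_shash *generic_tfm,
                                       u8 *buf, unsigned int maxlen)
    {
        u8 want[HASH_MAX_DIGESTSIZE], got[HASH_MAX_DIGESTSIZE];
        unsigned int len;
        int err;

        /* a random length and random contents form one generated test vector */
        get_random_bytes(&len, sizeof(len));
        len %= maxlen + 1;
        get_random_bytes(buf, len);

        err = crypto_shash_tfm_digest(generic_tfm, buf, len, want);
        if (err)
            return err;
        err = crypto_shash_tfm_digest(tfm, buf, len, got);
        if (err)
            return err;

        return memcmp(want, got, crypto_shash_digestsize(tfm)) ? -EINVAL : 0;
    }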