17 Jun, 2020

1 commit

  • commit beeb460cd12ac9b91640b484b6a52dcba9d9fc8f upstream.

    Currently after any algorithm is registered and tested, there's an
    unnecessary request_module("cryptomgr") even if it's already loaded.
    Also, CRYPTO_MSG_ALG_LOADED is sent twice, and thus if the algorithm is
    "crct10dif", lib/crc-t10dif.c replaces the tfm twice rather than once.

    This occurs because CRYPTO_MSG_ALG_LOADED is sent using
    crypto_probing_notify(), which tries to load "cryptomgr" if the
    notification is not handled (NOTIFY_DONE). This doesn't make sense
    because "cryptomgr" doesn't handle this notification.

    Fix this by using crypto_notify() instead of crypto_probing_notify().

    Fixes: dd8b083f9a5e ("crypto: api - Introduce notifier for new crypto algorithms")
    Cc: # v4.20+
    Cc: Martin K. Petersen
    Signed-off-by: Eric Biggers
    Reviewed-by: Martin K. Petersen
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
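
    The fix above can be illustrated with a minimal userspace sketch (not
    kernel code): crypto_probing_notify() retries via a module load when no
    listener handled the event, which is pointless for CRYPTO_MSG_ALG_LOADED
    since "cryptomgr" never handles it. The request_module() stand-in and
    counters here are illustrative.

    ```c
    #include <assert.h>
    #include <stdio.h>

    #define NOTIFY_DONE 0  /* no listener handled the event */

    static int module_load_requests;

    static void request_module(const char *name)
    {
        module_load_requests++;  /* the kernel would try to load "cryptomgr" */
        printf("request_module(%s)\n", name);
    }

    /* No listener handles CRYPTO_MSG_ALG_LOADED, so this returns NOTIFY_DONE. */
    static int crypto_notify(int msg) { (void)msg; return NOTIFY_DONE; }

    /* Pre-fix path: fall back to a module load when nobody handled the event. */
    static int crypto_probing_notify(int msg)
    {
        int ret = crypto_notify(msg);
        if (ret == NOTIFY_DONE)
            request_module("cryptomgr");  /* unnecessary for ALG_LOADED */
        return ret;
    }

    int main(void)
    {
        crypto_probing_notify(0);            /* old code: triggers a module load */
        assert(module_load_requests == 1);
        crypto_notify(0);                    /* fixed code: no module load */
        assert(module_load_requests == 1);
        return 0;
    }
    ```

    Switching the ALG_LOADED path to the plain notify call is exactly the
    one-line shape of the fix.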
     

11 Feb, 2020

3 commits

  • commit 73669cc556462f4e50376538d77ee312142e8a8a upstream.

    The function crypto_spawn_alg is racy because it drops the lock
    before shooting the dying algorithm. The algorithm could disappear
    altogether before we shoot it.

    This patch fixes it by moving the shooting into the locked section.

    Fixes: 6bfd48096ff8 ("[CRYPTO] api: Added spawns")
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Herbert Xu
     
  • commit 2bbb3375d967155bccc86a5887d4a6e29c56b683 upstream.

    When CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y, the first lookup of an
    algorithm that needs to be instantiated using a template will always get
    the generic implementation, even when an accelerated one is available.

    This happens because the extra self-tests for the accelerated
    implementation allocate the generic implementation for comparison
    purposes, and then crypto_alg_tested() for the generic implementation
    "fulfills" the original request (i.e. sets crypto_larval::adult).

    This patch fixes this by only fulfilling the original request if
    we are currently the best outstanding larval as judged by the
    priority. If we're not the best then we will ask all waiters on
    that larval request to retry the lookup.

    Note that this patch introduces a behaviour change when the module
    providing the new algorithm is unregistered during the process.
    Previously we would have failed with ENOENT; after the patch we
    will instead redo the lookup.

    Fixes: 9a8a6b3f0950 ("crypto: testmgr - fuzz hashes against...")
    Fixes: d435e10e67be ("crypto: testmgr - fuzz skciphers against...")
    Fixes: 40153b10d91c ("crypto: testmgr - fuzz AEADs against...")
    Reported-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Reviewed-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Herbert Xu
     
  • commit 7db3b61b6bba4310f454588c2ca6faf2958ad79f upstream.

    We need to check whether spawn->alg is NULL under lock as otherwise
    the algorithm could be removed from under us after we have checked
    it and found it to be non-NULL. This could cause us to remove the
    spawn from a non-existent list.

    Fixes: 7ede5a5ba55a ("crypto: api - Fix crypto_drop_spawn crash...")
    Cc:
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Herbert Xu
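
    The pattern behind the fix can be sketched in userspace: the pointer must
    be tested while the lock is held, otherwise another thread can NULL it
    between the check and the use. All names below are illustrative
    simplifications, not the kernel's structures.

    ```c
    #include <assert.h>
    #include <pthread.h>
    #include <stddef.h>

    struct spawn { void *alg; };

    static pthread_mutex_t crypto_alg_sem = PTHREAD_MUTEX_INITIALIZER;

    /* Safe variant: check and act inside one critical section. */
    static int crypto_drop_spawn(struct spawn *spawn)
    {
        int dropped = 0;

        pthread_mutex_lock(&crypto_alg_sem);
        if (spawn->alg) {           /* checked under the lock */
            spawn->alg = NULL;      /* stand-in for "remove from the list" */
            dropped = 1;
        }
        pthread_mutex_unlock(&crypto_alg_sem);
        return dropped;
    }

    int main(void)
    {
        int alg = 42;
        struct spawn s = { &alg };

        assert(crypto_drop_spawn(&s) == 1);  /* first drop succeeds */
        assert(crypto_drop_spawn(&s) == 0);  /* second sees NULL, does nothing */
        return 0;
    }
    ```

    Checking outside the lock would allow the list removal to run against a
    spawn whose algorithm has already gone away.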
     

09 Jul, 2019

1 commit

  • Pull crypto updates from Herbert Xu:
    "Here is the crypto update for 5.3:

    API:
    - Test shash interface directly in testmgr
    - cra_driver_name is now mandatory

    Algorithms:
    - Replace arc4 crypto_cipher with library helper
    - Implement 5 way interleave for ECB, CBC and CTR on arm64
    - Add xxhash
    - Add continuous self-test on noise source to drbg
    - Update jitter RNG

    Drivers:
    - Add support for SHA204A random number generator
    - Add support for 7211 in iproc-rng200
    - Fix fuzz test failures in inside-secure
    - Fix fuzz test failures in talitos
    - Fix fuzz test failures in qat"

    * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (143 commits)
    crypto: stm32/hash - remove interruptible condition for dma
    crypto: stm32/hash - Fix hmac issue more than 256 bytes
    crypto: stm32/crc32 - rename driver file
    crypto: amcc - remove memset after dma_alloc_coherent
    crypto: ccp - Switch to SPDX license identifiers
    crypto: ccp - Validate the error value used to index error messages
    crypto: doc - Fix formatting of new crypto engine content
    crypto: doc - Add parameter documentation
    crypto: arm64/aes-ce - implement 5 way interleave for ECB, CBC and CTR
    crypto: arm64/aes-ce - add 5 way interleave routines
    crypto: talitos - drop icv_ool
    crypto: talitos - fix hash on SEC1.
    crypto: talitos - move struct talitos_edesc into talitos.h
    lib/scatterlist: Fix mapping iterator when sg->offset is greater than PAGE_SIZE
    crypto/NX: Set receive window credits to max number of CRBs in RxFIFO
    crypto: asymmetric_keys - select CRYPTO_HASH where needed
    crypto: serpent - mark __serpent_setkey_sbox noinline
    crypto: testmgr - dynamically allocate crypto_shash
    crypto: testmgr - dynamically allocate testvec_config
    crypto: talitos - eliminate unneeded 'done' functions at build time
    ...

    Linus Torvalds
     

13 Jun, 2019

1 commit

  • Now that all algorithms explicitly set cra_driver_name, make it required
    for algorithm registration and remove the code that generated a default
    cra_driver_name.

    Also add an explicit check that cra_name is set too, since that's
    obviously required too, yet it didn't seem to be checked anywhere.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
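
    A minimal sketch of the registration-time check described above: both
    cra_name and cra_driver_name must now be non-empty before an algorithm
    is accepted. The struct and return values here are simplified stand-ins
    for the kernel's.

    ```c
    #include <assert.h>
    #include <errno.h>

    struct crypto_alg {
        const char *cra_name;        /* e.g. "sha256" */
        const char *cra_driver_name; /* e.g. "sha256-generic" */
    };

    static int crypto_check_alg(const struct crypto_alg *alg)
    {
        if (!alg->cra_name || !alg->cra_name[0])
            return -EINVAL;          /* previously unchecked */
        if (!alg->cra_driver_name || !alg->cra_driver_name[0])
            return -EINVAL;          /* previously a default was generated */
        return 0;
    }

    int main(void)
    {
        struct crypto_alg ok = { "sha256", "sha256-generic" };
        struct crypto_alg bad = { "sha256", NULL };

        assert(crypto_check_alg(&ok) == 0);
        assert(crypto_check_alg(&bad) == -EINVAL);
        return 0;
    }
    ```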
     

31 May, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 3029 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

30 May, 2019

1 commit


25 Jan, 2019

1 commit


11 Jan, 2019

2 commits

  • It took me a while to notice the bug where the adiantum template left
    crypto_spawn::inst == NULL, because this only caused problems in certain
    cases where algorithms are dynamically loaded/unloaded.

    More improvements are needed, but for now make crypto_init_spawn()
    reject this case and WARN(), so this type of bug will be noticed
    immediately in the future.

    Note: I checked all callers and the adiantum template was the only place
    that had this wrong. So this WARN shouldn't trigger anymore.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
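
    The check described above can be sketched like this: reject a NULL
    instance pointer with a warning instead of storing it, so an
    adiantum-style bug surfaces immediately. WARN_ON here is a printing
    stand-in for the kernel macro, and the structures are simplified.

    ```c
    #include <assert.h>
    #include <errno.h>
    #include <stdio.h>

    /* Userspace stand-in for the kernel's WARN_ON(): print and report. */
    #define WARN_ON(cond) \
        ((cond) ? (fprintf(stderr, "WARNING: %s\n", #cond), 1) : 0)

    struct crypto_spawn { void *inst; };

    static int crypto_init_spawn(struct crypto_spawn *spawn, void *inst)
    {
        if (WARN_ON(!inst))
            return -EINVAL;   /* previously a NULL inst slipped through */
        spawn->inst = inst;
        return 0;
    }

    int main(void)
    {
        struct crypto_spawn s;
        int dummy;

        assert(crypto_init_spawn(&s, &dummy) == 0);
        assert(crypto_init_spawn(&s, NULL) == -EINVAL);  /* caught and warned */
        return 0;
    }
    ```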
     
  • Now that all "blkcipher" templates have been converted to "skcipher",
    crypto_alloc_instance() is no longer used. And it's not useful any
    longer as it creates an old-style weakly typed instance rather than a
    new-style strongly typed instance. So remove it, and now that the name
    is freed up rename crypto_alloc_instance2() to crypto_alloc_instance().

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

07 Dec, 2018

6 commits


28 Sep, 2018

1 commit


04 Sep, 2018

2 commits

  • Introduce a facility that can be used to receive a notification
    callback when a new algorithm becomes available. This can be used by
    existing crypto registrations to trigger a switch from a software-only
    algorithm to a hardware-accelerated version.

    A new CRYPTO_MSG_ALG_LOADED state is introduced to the existing crypto
    notification chain, and the register/unregister functions are exported
    so they can be called by subsystems outside of crypto.

    Signed-off-by: Martin K. Petersen
    Suggested-by: Herbert Xu
    Signed-off-by: Martin K. Petersen
    Signed-off-by: Herbert Xu

    Martin K. Petersen
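
    The facility above can be sketched in userspace: a subsystem registers a
    callback, and CRYPTO_MSG_ALG_LOADED is posted when a new algorithm passes
    testing. The array-based chain below is a simplified stand-in for the
    kernel's blocking notifier API, and the crc-t10dif client is illustrative.

    ```c
    #include <assert.h>
    #include <stddef.h>

    #define CRYPTO_MSG_ALG_LOADED 1

    typedef int (*notifier_fn)(unsigned long msg, void *data);

    #define MAX_LISTENERS 4
    static notifier_fn chain[MAX_LISTENERS];
    static int nlisteners;

    static void crypto_register_notifier(notifier_fn fn)
    {
        chain[nlisteners++] = fn;   /* no bounds/error handling in this sketch */
    }

    static void crypto_notify(unsigned long msg, void *data)
    {
        for (int i = 0; i < nlisteners; i++)
            chain[i](msg, data);
    }

    /* A client like lib/crc-t10dif.c would switch implementations here. */
    static int alg_loaded_count;

    static int crc_t10dif_notify(unsigned long msg, void *data)
    {
        (void)data;
        if (msg == CRYPTO_MSG_ALG_LOADED)
            alg_loaded_count++;     /* re-allocate the tfm for the better alg */
        return 0;
    }

    int main(void)
    {
        crypto_register_notifier(crc_t10dif_notify);
        crypto_notify(CRYPTO_MSG_ALG_LOADED, NULL);
        assert(alg_loaded_count == 1);
        return 0;
    }
    ```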
     
  • In the quest to remove all stack VLA usage from the kernel[1], this
    exposes a new general upper bound on crypto blocksize and alignmask
    (higher than for the existing cipher limits) for VLA removal,
    and introduces new checks.

    At present, the highest cra_alignmask in the kernel is 63. The highest
    cra_blocksize is 144 (SHA3_224_BLOCK_SIZE, 18 8-byte words). For the
    new blocksize limit, I went with 160 (20 8-byte words).

    [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

    Signed-off-by: Kees Cook
    Signed-off-by: Herbert Xu

    Kees Cook
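
    The new bounds can be sketched as a registration-time check: block size
    at most 160 bytes and alignmask at most 63, per the figures above. The
    constant names follow the kernel's, but the function signature is a
    simplified stand-in.

    ```c
    #include <assert.h>
    #include <errno.h>

    #define MAX_ALGAPI_BLOCKSIZE 160  /* 20 8-byte words, per the text above */
    #define MAX_ALGAPI_ALIGNMASK 63   /* highest cra_alignmask in the tree */

    static int crypto_check_alg(unsigned int blocksize, unsigned int alignmask)
    {
        if (blocksize > MAX_ALGAPI_BLOCKSIZE)
            return -EINVAL;
        if (alignmask > MAX_ALGAPI_ALIGNMASK)
            return -EINVAL;
        return 0;
    }

    int main(void)
    {
        assert(crypto_check_alg(144, 63) == 0);      /* SHA3-224 block: fine */
        assert(crypto_check_alg(200, 0) == -EINVAL); /* over the new bound */
        return 0;
    }
    ```

    With a hard upper bound in place, on-stack buffers can be sized by the
    constant instead of by a variable-length array.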
     

21 Apr, 2018

1 commit

  • In preparation for the removal of VLAs[1] from crypto code,
    we create two new compile-time constants: all ciphers implemented
    in Linux have a block size less than or equal to 16 bytes, and
    the most demanding hardware requires 16-byte alignment for the
    block buffer.
    We also enforce these limits in crypto_check_alg when a new
    cipher is registered.

    [1] http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

    Signed-off-by: Salvatore Mesoraca
    Signed-off-by: Herbert Xu

    Salvatore Mesoraca
     

31 Mar, 2018

1 commit

  • This patch reverts commit 9c521a200bc3 ("crypto: api - remove
    instance when test failed") and fixes the underlying problem
    in a different way.

    To recap, prior to the reverted commit, an instance that fails
    a self-test is kept around. However, it would satisfy any new
    lookups against its name and therefore the system may accumulate
    an unbounded number of failed instances for the same algorithm
    name.

    The reverted commit fixed it by unregistering the instance. However,
    this still does not prevent the creation of the same failed instance
    over and over again each time the name is looked up.

    This patch fixes it by keeping the failed instance around, just as
    we would if it were a normal algorithm. However, the lookup code
    has been updated so that we do not attempt to create another
    instance as long as this failed one is still registered. Of course,
    you could still force a new creation by deleting the instance from
    user-space.

    A new error (ELIBBAD) has been commandeered for this purpose and
    will be returned when all registered algorithms of a given name
    have failed the self-test.

    Signed-off-by: Herbert Xu

    Herbert Xu
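
    The lookup behaviour can be sketched as follows: a registered algorithm
    that failed its self-test stays on the list, and the lookup returns
    -ELIBBAD once every registered algorithm of that name has failed rather
    than creating another doomed instance. The structures are simplified
    stand-ins for the kernel's.

    ```c
    #include <assert.h>
    #include <errno.h>
    #include <string.h>

    #ifndef ELIBBAD
    #define ELIBBAD 80   /* Linux value; defined here for portability */
    #endif

    struct alg { const char *name; int tested_ok; };

    static int crypto_lookup(const struct alg *algs, int n, const char *name)
    {
        int found = 0, failed = 0;

        for (int i = 0; i < n; i++) {
            if (strcmp(algs[i].name, name))
                continue;
            found++;
            if (algs[i].tested_ok)
                return 0;          /* a usable algorithm exists */
            failed++;              /* failed instance kept registered */
        }
        if (found && found == failed)
            return -ELIBBAD;       /* everything registered failed the test */
        return -ENOENT;            /* nothing registered: may create one */
    }

    int main(void)
    {
        const struct alg failed[1] = { { "cbc(aes)", 0 } };
        const struct alg mixed[2] = { { "cbc(aes)", 0 }, { "cbc(aes)", 1 } };

        assert(crypto_lookup(NULL, 0, "cbc(aes)") == -ENOENT);
        assert(crypto_lookup(failed, 1, "cbc(aes)") == -ELIBBAD);
        assert(crypto_lookup(mixed, 2, "cbc(aes)") == 0);
        return 0;
    }
    ```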
     

01 Feb, 2018

1 commit

  • Pull crypto updates from Herbert Xu:
    "API:
    - Enforce the setting of keys for keyed aead/hash/skcipher
    algorithms.
    - Add multibuf speed tests in tcrypt.

    Algorithms:
    - Improve performance of sha3-generic.
    - Add native sha512 support on arm64.
    - Add v8.2 Crypto Extensions version of sha3/sm3 on arm64.
    - Avoid hmac nesting by requiring underlying algorithm to be unkeyed.
    - Add cryptd_max_cpu_qlen module parameter to cryptd.

    Drivers:
    - Add support for EIP97 engine in inside-secure.
    - Add inline IPsec support to chelsio.
    - Add RevB core support to crypto4xx.
    - Fix AEAD ICV check in crypto4xx.
    - Add stm32 crypto driver.
    - Add support for BCM63xx platforms in bcm2835 and remove bcm63xx.
    - Add Derived Key Protocol (DKP) support in caam.
    - Add Samsung Exynos True RNG driver.
    - Add support for Exynos5250+ SoCs in exynos PRNG driver"

    * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (166 commits)
    crypto: picoxcell - Fix error handling in spacc_probe()
    crypto: arm64/sha512 - fix/improve new v8.2 Crypto Extensions code
    crypto: arm64/sm3 - new v8.2 Crypto Extensions implementation
    crypto: arm64/sha3 - new v8.2 Crypto Extensions implementation
    crypto: testmgr - add new testcases for sha3
    crypto: sha3-generic - export init/update/final routines
    crypto: sha3-generic - simplify code
    crypto: sha3-generic - rewrite KECCAK transform to help the compiler optimize
    crypto: sha3-generic - fixes for alignment and big endian operation
    crypto: aesni - handle zero length dst buffer
    crypto: artpec6 - remove select on non-existing CRYPTO_SHA384
    hwrng: bcm2835 - Remove redundant dev_err call in bcm2835_rng_probe()
    crypto: stm32 - remove redundant dev_err call in stm32_cryp_probe()
    crypto: axis - remove unnecessary platform_get_resource() error check
    crypto: testmgr - test misuse of result in ahash
    crypto: inside-secure - make function safexcel_try_push_requests static
    crypto: aes-generic - fix aes-generic regression on powerpc
    crypto: chelsio - Fix indentation warning
    crypto: arm64/sha1-ce - get rid of literal pool
    crypto: arm64/sha2-ce - move the round constant table to .rodata section
    ...

    Linus Torvalds
     

05 Jan, 2018

3 commits

  • There is a message posted to the crypto notifier chain when an algorithm
    is unregistered, and when a template is registered or unregistered. But
    nothing is listening for those messages; currently there are only
    listeners for the algorithm request and registration messages.

    Get rid of these unused notifications for now.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Reference counters should use refcount_t rather than atomic_t, since the
    refcount_t implementation can prevent overflows, reducing the
    exploitability of reference leak bugs. crypto_alg.cra_refcount is a
    reference counter with the usual semantics, so switch it over to
    refcount_t.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
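
    Why refcount_t helps can be shown with a userspace sketch: instead of
    silently wrapping like a plain counter, increments saturate at a ceiling,
    so a reference leak cannot overflow into a premature free. The saturation
    value below is illustrative, not the kernel's exact scheme.

    ```c
    #include <assert.h>
    #include <limits.h>

    #define REFCOUNT_SATURATED UINT_MAX

    static unsigned int refcount_inc(unsigned int r)
    {
        if (r >= REFCOUNT_SATURATED - 1)
            return REFCOUNT_SATURATED;  /* stick here instead of wrapping */
        return r + 1;
    }

    int main(void)
    {
        assert(refcount_inc(1) == 2);
        /* near the ceiling, the counter saturates rather than wrapping to 0 */
        assert(refcount_inc(UINT_MAX - 1) == REFCOUNT_SATURATED);
        assert(refcount_inc(REFCOUNT_SATURATED) == REFCOUNT_SATURATED);
        return 0;
    }
    ```

    A wrapped atomic_t would reach zero after enough leaked increments and
    let the object be freed while references remain.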
     
  • syzkaller triggered a NULL pointer dereference in crypto_remove_spawns()
    via a program that repeatedly and concurrently requests AEADs
    "authenc(cmac(des3_ede-asm),pcbc-aes-aesni)" and hashes "cmac(des3_ede)"
    through AF_ALG, where the hashes are requested as "untested"
    (CRYPTO_ALG_TESTED is set in ->salg_mask but clear in ->salg_feat; this
    causes the template to be instantiated for every request).

    Although AF_ALG users really shouldn't be able to request an "untested"
    algorithm, the NULL pointer dereference is actually caused by a
    longstanding race condition where crypto_remove_spawns() can encounter
    an instance which has had spawn(s) "grabbed" but hasn't yet been
    registered, resulting in ->cra_users still being NULL.

    We probably should properly initialize ->cra_users earlier, but that
    would require updating many templates individually. For now just fix
    the bug in a simple way that can easily be backported: make
    crypto_remove_spawns() treat a NULL ->cra_users list as empty.

    Reported-by: syzbot
    Cc: stable@vger.kernel.org
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

03 Nov, 2017

1 commit

  • The crypto API was using the -EBUSY return value to indicate
    both a hard failure to submit a crypto operation into a
    transformation provider when the latter was busy and the backlog
    mechanism was not enabled, as well as a notification that the
    operation was queued into the backlog when the backlog mechanism
    was enabled.

    Having the same return code indicate two very different conditions
    depending on a flag is both error-prone and requires extra runtime
    checks like the following to discern between the cases:

    if (err == -EINPROGRESS ||
        (err == -EBUSY && (ahash_request_flags(req) &
                           CRYPTO_TFM_REQ_MAY_BACKLOG)))

    This patch changes the return code used to indicate a crypto op
    failed due to the transformation provider being transiently busy
    to -ENOSPC.

    Signed-off-by: Gilad Ben-Yossef
    Signed-off-by: Herbert Xu

    Gilad Ben-Yossef
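
    The return-code split can be sketched like this: after the change, a
    transiently busy provider returns -ENOSPC when backlogging is off, and
    -EBUSY unambiguously means "queued on the backlog". The enqueue function
    below is a simplified stand-in.

    ```c
    #include <assert.h>
    #include <errno.h>

    #define CRYPTO_TFM_REQ_MAY_BACKLOG 0x1

    static int crypto_enqueue(int provider_busy, unsigned int flags)
    {
        if (!provider_busy)
            return -EINPROGRESS;          /* accepted, running asynchronously */
        if (flags & CRYPTO_TFM_REQ_MAY_BACKLOG)
            return -EBUSY;                /* queued on the backlog */
        return -ENOSPC;                   /* hard "try again later" */
    }

    int main(void)
    {
        /* callers no longer need to consult the request flags to tell
         * the two busy cases apart */
        assert(crypto_enqueue(0, 0) == -EINPROGRESS);
        assert(crypto_enqueue(1, CRYPTO_TFM_REQ_MAY_BACKLOG) == -EBUSY);
        assert(crypto_enqueue(1, 0) == -ENOSPC);
        return 0;
    }
    ```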
     

04 Aug, 2017

1 commit


19 Jun, 2017

1 commit


09 Mar, 2017

1 commit

  • To prevent unnecessary branching, mark the exit condition of the
    primary loop as likely(), given that a carry in a 32-bit counter
    occurs very rarely.

    On arm64, the resulting code is emitted by GCC as

    9a8: cmp w1, #0x3
    9ac: add x3, x0, w1, uxtw
    9b0: b.ls 9e0
    9b4: ldr w2, [x3,#-4]!
    9b8: rev w2, w2
    9bc: add w2, w2, #0x1
    9c0: rev w4, w2
    9c4: str w4, [x3]
    9c8: cbz w2, 9d0
    9cc: ret

    where the two remaining branch conditions (one for size < 4 and one for
    the carry) are statically predicted as non-taken, resulting in optimal
    execution in the vast majority of cases.

    Also, replace the open coded alignment test with IS_ALIGNED().

    Cc: Jason A. Donenfeld
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
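
    The counter increment being optimized can be sketched in C: a big-endian
    counter incremented from the least significant byte, where the carry into
    higher bytes is the rarely taken path that the likely() annotation
    predicts against. This is a simplified byte-wise version rather than the
    kernel's word-wise one.

    ```c
    #include <assert.h>

    static void crypto_inc(unsigned char *a, unsigned int size)
    {
        /* walk from the last (least significant) byte toward the first;
         * stop as soon as an increment does not carry */
        for (; size; size--) {
            if (++a[size - 1] != 0)
                break;  /* no carry: the common, statically-predicted case */
        }
    }

    int main(void)
    {
        unsigned char ctr[4] = { 0x00, 0x00, 0x00, 0xff };

        crypto_inc(ctr, 4);  /* rare case: carry out of the low byte */
        assert(ctr[3] == 0x00 && ctr[2] == 0x01);

        crypto_inc(ctr, 4);  /* common case: no carry */
        assert(ctr[3] == 0x01 && ctr[2] == 0x01);
        return 0;
    }
    ```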
     

11 Feb, 2017

1 commit

  • Instead of unconditionally forcing 4 byte alignment for all generic
    chaining modes that rely on crypto_xor() or crypto_inc() (which may
    result in unnecessary copying of data when the underlying hardware
    can perform unaligned accesses efficiently), make those functions
    deal with unaligned input explicitly, but only if the Kconfig symbol
    HAVE_EFFICIENT_UNALIGNED_ACCESS is set. This will allow us to drop
    the alignmasks from the CBC, CMAC, CTR, CTS, PCBC and SEQIV drivers.

    For crypto_inc(), this simply involves making the 4-byte stride
    conditional on HAVE_EFFICIENT_UNALIGNED_ACCESS being set, given that
    it typically operates on 16 byte buffers.

    For crypto_xor(), an algorithm is implemented that simply runs through
    the input using the largest strides possible if unaligned accesses are
    allowed. If they are not, an optimal sequence of memory accesses is
    emitted that takes the relative alignment of the input buffers into
    account, e.g., if the relative misalignment of dst and src is 4 bytes,
    the entire xor operation will be completed using 4 byte loads and stores
    (modulo unaligned bits at the start and end). Note that all expressions
    involving misalign are simply eliminated by the compiler when
    HAVE_EFFICIENT_UNALIGNED_ACCESS is defined.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
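
    The crypto_xor() strategy can be sketched in userspace: use word-sized
    xors when both buffers are suitably aligned (standing in for the
    HAVE_EFFICIENT_UNALIGNED_ACCESS fast path) and fall back to byte-wise
    xors otherwise. This is simplified relative to the kernel version, which
    also exploits the relative misalignment of the two buffers.

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    static void crypto_xor(uint8_t *dst, const uint8_t *src, unsigned int len)
    {
        /* fast path: both pointers word-aligned, xor a word at a time */
        if ((((uintptr_t)dst | (uintptr_t)src) % sizeof(unsigned long)) == 0) {
            while (len >= sizeof(unsigned long)) {
                unsigned long a, b;

                memcpy(&a, dst, sizeof(a));
                memcpy(&b, src, sizeof(b));
                a ^= b;
                memcpy(dst, &a, sizeof(a));
                dst += sizeof(a); src += sizeof(a); len -= sizeof(a);
            }
        }
        while (len--)        /* tail bytes and unaligned fallback */
            *dst++ ^= *src++;
    }

    int main(void)
    {
        uint8_t a[16], b[16];

        memset(a, 0xaa, sizeof(a));
        memset(b, 0xff, sizeof(b));
        crypto_xor(a, b, sizeof(a));
        for (int i = 0; i < 16; i++)
            assert(a[i] == 0x55);   /* 0xaa ^ 0xff */
        return 0;
    }
    ```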
     

23 Jan, 2017

1 commit


01 Jul, 2016

1 commit


25 Jan, 2016

1 commit


23 Nov, 2015

1 commit


20 Oct, 2015

1 commit

  • Currently a number of Crypto API operations may fail when a signal
    occurs. This causes nasty problems as the callers of those
    operations are often not in a good position to restart the
    operation.

    In fact there is currently no need for those operations to be
    interrupted by user signals at all. All we need is for them to
    be killable.

    This patch replaces the relevant calls of signal_pending with
    fatal_signal_pending, and wait_for_completion_interruptible with
    wait_for_completion_killable, respectively.

    Cc: stable@vger.kernel.org
    Signed-off-by: Herbert Xu

    Herbert Xu
     

14 Jul, 2015

2 commits


03 Jun, 2015

1 commit


13 May, 2015

1 commit

  • This patch adds a new primitive crypto_grab_spawn which is meant
    to replace crypto_init_spawn and crypto_init_spawn2. Under the
    new scheme the user no longer has to worry about reference counting
    the alg object before it is subsumed by the spawn.

    It is pretty much an exact copy of crypto_grab_aead.

    Prior to calling this function spawn->frontend and spawn->inst
    must have been set.

    Signed-off-by: Herbert Xu

    Herbert Xu