27 Dec, 2019

1 commit

  • This patch introduces the skcipher_ialg_simple helper which fetches
    the crypto_alg structure from a simple skcipher instance's spawn.

    This allows us to remove the third argument from the function
    skcipher_alloc_instance_simple.

    As a result, the reference count on the algorithm is now maintained
    by the Crypto API, and the caller no longer needs to drop the alg
    refcount.

    Signed-off-by: Herbert Xu
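
    A minimal sketch of the helper, assuming the simple skcipher
    instance keeps its cipher spawn in the instance context (accessor
    names follow the kernel's internal skcipher interface of the time):

        static inline struct crypto_alg *skcipher_ialg_simple(
            struct skcipher_instance *inst)
        {
            /* The simple instance's context holds the spawn of the
             * underlying cipher; return that spawn's algorithm. */
            struct crypto_cipher_spawn *spawn = skcipher_instance_ctx(inst);

            return crypto_spawn_cipher_alg(spawn);
        }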


31 May, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 3029 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
    Signed-off-by: Greg Kroah-Hartman


18 Apr, 2019

1 commit

  • Use subsys_initcall for registration of all templates and generic
    algorithm implementations, rather than module_init. Then change
    cryptomgr to use arch_initcall, to place it before the subsys_initcalls.

    This is needed so that when both a generic and optimized implementation
    of an algorithm are built into the kernel (not loadable modules), the
    generic implementation is registered before the optimized one.
    Otherwise, the self-tests for the optimized implementation are unable to
    allocate the generic implementation for the new comparison fuzz tests.

    Note that on arm, a side effect of this change is that self-tests for
    generic implementations may run before the unaligned access handler has
    been installed. So, unaligned accesses will crash the kernel. This is
    arguably a good thing as it makes it easier to detect that type of bug.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
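
    As a sketch, the change for a generic template amounts to swapping
    the initcall level of its existing registration function (the cbc
    template is used here for illustration):

        static int __init crypto_cbc_module_init(void)
        {
            return crypto_register_template(&crypto_cbc_tmpl);
        }

        /* Previously: module_init(crypto_cbc_module_init); */
        subsys_initcall(crypto_cbc_module_init);

    cryptomgr registers at arch_initcall level in the same fashion, so
    it is in place before any of the subsys_initcall registrations run.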


29 Nov, 2018

1 commit

  • In multiple functions, an algorithm's fields are read after its
    reference has been dropped through crypto_mod_put. At that point
    the algorithm's memory may already have been freed, resulting in
    use-after-free bugs. This patch delays the put operation until the
    algorithm is no longer used.

    Fixes: 79c65d179a40 ("crypto: cbc - Convert to skcipher")
    Fixes: a7d85e06ed80 ("crypto: cfb - add support for Cipher FeedBack mode")
    Fixes: 043a44001b9e ("crypto: pcbc - Convert to skcipher")
    Cc:
    Signed-off-by: Pan Bian
    Signed-off-by: Herbert Xu
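
    The shape of the fix, sketched with an illustrative field read (the
    real patches reorder existing code rather than add new logic):

        /* Before (buggy): alg may already be freed when it is read. */
        crypto_mod_put(alg);
        inst->alg.base.cra_priority = alg->cra_priority; /* use after free */

        /* After: drop the reference only once alg is no longer used. */
        inst->alg.base.cra_priority = alg->cra_priority;
        crypto_mod_put(alg);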


09 Mar, 2017

1 commit

  • When requesting a fallback algorithm, we should propagate the
    NEED_FALLBACK bit when searching for the underlying algorithm.

    This prevents drivers from allocating unnecessary fallbacks that
    are never called. For instance, currently the vmx-crypto driver
    uses the following chain of calls when calling the fallback
    implementation:

    p8_aes_cbc -> cbc(p8_aes) -> aes-generic

    However p8_aes will always delegate its calls to aes-generic. With this
    patch, p8_aes_cbc will be able to use cbc(aes-generic) directly as its
    fallback. The same applies to aes_s390.

    Signed-off-by: Marcelo Henrique Cerri
    Signed-off-by: Herbert Xu
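
    A hedged sketch of the propagation: derive the lookup mask from the
    original request so NEED_FALLBACK carries through to the inner
    algorithm; crypto_requires_off() stands for a helper along these
    lines:

        static inline u32 crypto_requires_off(u32 type, u32 mask, u32 off)
        {
            /* Set `off` in the result iff the original request
             * required those bits to be off. */
            return (type ^ off) & mask & off;
        }

        mask = CRYPTO_ALG_TYPE_MASK |
               crypto_requires_off(algt->type, algt->mask,
                                   CRYPTO_ALG_NEED_FALLBACK);
        alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_CIPHER, mask);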


11 Feb, 2017

1 commit

  • Instead of unconditionally forcing 4 byte alignment for all generic
    chaining modes that rely on crypto_xor() or crypto_inc() (which may
    result in unnecessary copying of data when the underlying hardware
    can perform unaligned accesses efficiently), make those functions
    deal with unaligned input explicitly, but only if the Kconfig symbol
    HAVE_EFFICIENT_UNALIGNED_ACCESS is set. This will allow us to drop
    the alignmasks from the CBC, CMAC, CTR, CTS, PCBC and SEQIV drivers.

    For crypto_inc(), this simply involves making the 4-byte stride
    conditional on HAVE_EFFICIENT_UNALIGNED_ACCESS being set, given that
    it typically operates on 16 byte buffers.

    For crypto_xor(), an algorithm is implemented that simply runs through
    the input using the largest strides possible if unaligned accesses are
    allowed. If they are not, an optimal sequence of memory accesses is
    emitted that takes the relative alignment of the input buffers into
    account, e.g., if the relative misalignment of dst and src is 4 bytes,
    the entire xor operation will be completed using 4 byte loads and stores
    (modulo unaligned bits at the start and end). Note that all expressions
    involving misalign are simply eliminated by the compiler when
    HAVE_EFFICIENT_UNALIGNED_ACCESS is defined.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu
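
    A simplified sketch of the crypto_xor() fast path when unaligned
    accesses are cheap; the real code additionally emits the
    relative-alignment access sequences described above:

        void crypto_xor(u8 *dst, const u8 *src, unsigned int size)
        {
            if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) {
                /* Largest stride: word-sized loads and stores,
                 * regardless of the buffers' alignment. */
                while (size >= sizeof(unsigned long)) {
                    *(unsigned long *)dst ^= *(unsigned long *)src;
                    dst += sizeof(unsigned long);
                    src += sizeof(unsigned long);
                    size -= sizeof(unsigned long);
                }
            }
            while (size--) /* byte-wise tail (or whole buffer) */
                *dst++ ^= *src++;
        }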


26 Nov, 2014

1 commit

  • This adds the module loading prefix "crypto-" to the template
    lookup as well.

    For example, attempting to load 'vfat(blowfish)' via AF_ALG now
    includes the "crypto-" prefix at every level, correctly rejecting
    "vfat":

    net-pf-38
    algif-hash
    crypto-vfat(blowfish)
    crypto-vfat(blowfish)-all
    crypto-vfat

    Reported-by: Mathias Krause
    Signed-off-by: Kees Cook
    Acked-by: Mathias Krause
    Signed-off-by: Herbert Xu
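
    Sketched, templates export a prefixed module alias and the lookup
    side requests modules using the same prefix, so only names living
    in the crypto namespace can trigger a module load:

        /* In the template: creates a "crypto-cbc" module alias. */
        MODULE_ALIAS_CRYPTO("cbc");

        /* In the lookup path (illustrative): */
        request_module("crypto-%s", name);
        request_module("crypto-%s-all", name);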


11 Jan, 2008

2 commits

  • All common block ciphers have a block size that's a power of 2. In fact,
    all of our block ciphers obey this rule.

    If we require this, then CBC can be optimised to avoid an expensive
    divide on in-place decryption (see the sketch following these
    commits).

    I've also changed the saving of the first IV in the in-place decryption
    case to the last IV because that lets us use walk->iv (which is already
    aligned) for the xor operation where alignment is required.

    Signed-off-by: Herbert Xu

  • This patch replaces the custom xor in CBC with the generic crypto_xor.

    Signed-off-by: Herbert Xu
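
    A sketch of the mask trick from the first commit above: because
    bsize is a power of 2, the start of the last full block can be
    located without a division (variable names are illustrative):

        u8 *src = walk->src.virt.addr;

        /* nbytes & (bsize - 1) equals nbytes % bsize when bsize is a
         * power of 2, so no divide instruction is needed. */
        src += nbytes - (nbytes & (bsize - 1)) - bsize;
        memcpy(walk->iv, src, bsize);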


02 May, 2007

1 commit

  • This patch passes the type/mask along when constructing instances of
    templates. This is in preparation for templates that may support
    multiple types of instances depending on what is requested. For example,
    the planned software async crypto driver will use this construct.

    For the moment this allows us to check whether the instance constructed
    is of the correct type and avoid returning success if the type does not
    match.

    Signed-off-by: Herbert Xu
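
    Sketched, a template's instance constructor can now verify the
    request up front (the type constant here is illustrative):

        struct crypto_attr_type *algt;

        algt = crypto_get_attr_type(tb);
        if (IS_ERR(algt))
            return ERR_PTR(PTR_ERR(algt));

        /* Refuse to construct an instance of a type the caller did
         * not ask for: compare the requested type under its mask. */
        if ((algt->type ^ CRYPTO_ALG_TYPE_BLKCIPHER) & algt->mask)
            return ERR_PTR(-EINVAL);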


21 Sep, 2006

1 commit

  • This patch adds two block cipher algorithms, CBC and ECB. These
    are implemented as templates on top of existing single-block cipher
    algorithms. They invoke the single-block cipher through the new
    encrypt_one/decrypt_one interface.

    This also optimises the in-place encryption and decryption to remove
    the cost of an IV copy each round.

    Signed-off-by: Herbert Xu
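
    A sketch of the CBC encrypt loop built on the one-block interface
    (crypto_xor is used for brevity; this commit used a custom xor,
    which a later commit above replaced with crypto_xor):

        static int cbc_encrypt_segment(struct crypto_cipher *tfm,
                                       u8 *dst, const u8 *src, u8 *iv,
                                       unsigned int nbytes,
                                       unsigned int bsize)
        {
            do {
                /* Chain the previous ciphertext (or the IV) in, then
                 * encrypt a single block via the new interface. */
                crypto_xor(iv, src, bsize);
                crypto_cipher_encrypt_one(tfm, dst, iv);
                memcpy(iv, dst, bsize); /* ciphertext is the next IV */

                src += bsize;
                dst += bsize;
            } while ((nbytes -= bsize) >= bsize);

            return nbytes; /* bytes left over (partial block) */
        }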
