27 Dec, 2019

1 commit

  • This patch introduces the skcipher_ialg_simple helper which fetches
    the crypto_alg structure from a simple skcipher instance's spawn.

    This allows us to remove the third argument from the function
    skcipher_alloc_instance_simple.

    As a result, the reference count on the algorithm is now maintained
    by the Crypto API, and the caller no longer needs to drop the alg
    refcount.

    Signed-off-by: Herbert Xu


31 May, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version

    Extracted by the scancode license scanner, the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 3029 file(s).
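
    In each affected file the boilerplate is replaced by a single SPDX
    header line in the usual kernel form, e.g. (illustrative):

        // SPDX-License-Identifier: GPL-2.0-or-later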

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
    Signed-off-by: Greg Kroah-Hartman


18 Apr, 2019

1 commit

  • Use subsys_initcall for registration of all templates and generic
    algorithm implementations, rather than module_init. Then change
    cryptomgr to use arch_initcall, to place it before the subsys_initcalls.

    This is needed so that when both a generic and optimized implementation
    of an algorithm are built into the kernel (not loadable modules), the
    generic implementation is registered before the optimized one.
    Otherwise, the self-tests for the optimized implementation are unable to
    allocate the generic implementation for the new comparison fuzz tests.

    Note that on arm, a side effect of this change is that self-tests for
    generic implementations may run before the unaligned access handler has
    been installed. So, unaligned accesses will crash the kernel. This is
    arguably a good thing as it makes it easier to detect that type of bug.
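
    As a sketch of the pattern, a generic template's registration hook
    changes from

        module_init(crypto_pcbc_module_init);

    to

        subsys_initcall(crypto_pcbc_module_init);

    while cryptomgr's hook becomes an arch_initcall(), so it is in place
    before any subsys_initcall runs.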

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu


11 Jan, 2019

3 commits

  • The PCBC template just wraps a single block cipher algorithm, so
    simplify it by converting it to use skcipher_alloc_instance_simple().
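
    After the conversion, crypto_pcbc_create() reduces to roughly the
    following sketch (using the helper's three-argument signature as it
    existed at the time):

        inst = skcipher_alloc_instance_simple(tmpl, tb, &alg);
        if (IS_ERR(inst))
                return PTR_ERR(inst);

        inst->alg.encrypt = crypto_pcbc_encrypt;
        inst->alg.decrypt = crypto_pcbc_decrypt;

        err = skcipher_register_instance(tmpl, inst);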

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

  • Following commit 944585a64f5e ("crypto: x86/aes-ni - remove special
    handling of AES in PCBC mode"), it's no longer needed for the PCBC
    template to support wrapping a cipher that has the CRYPTO_ALG_INTERNAL
    flag set. Thus, remove this now-unused functionality to make PCBC
    consistent with the other single block cipher templates.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

  • The memcpy()s in the PCBC implementation use walk->iv as both the source
    and destination, which has undefined behavior. These memcpy()s are
    actually unneeded, because walk->iv is already used to hold the previous
    plaintext block XOR'd with the previous ciphertext block. Thus,
    walk->iv is already updated to its final value.

    So remove the broken and unnecessary memcpy()s.
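
    The removed pattern was effectively the following no-op (sketch):

        u8 * const iv = walk->iv;
        ...
        /* dst == src: undefined behavior for memcpy(), and redundant,
         * since walk->iv already holds the final IV value */
        memcpy(walk->iv, iv, bsize);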

    Fixes: 91652be5d1b9 ("[CRYPTO] pcbc: Add Propagated CBC template")
    Cc: # v2.6.21+
    Cc: David Howells
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu


29 Nov, 2018

1 commit

  • In multiple functions, the algorithm's fields are read after its
    reference has been dropped via crypto_mod_put(). At that point the
    algorithm's memory may already have been freed, resulting in
    use-after-free bugs. This patch delays the put operation until the
    algorithm is no longer used.
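
    The bug and the fix follow this shape (sketch):

        /* before: alg may be freed as soon as the reference is dropped */
        crypto_mod_put(alg);
        inst->alg.base.cra_blocksize = alg->cra_blocksize; /* use-after-free */

        /* after: drop the reference only once alg is no longer needed */
        inst->alg.base.cra_blocksize = alg->cra_blocksize;
        ...
        crypto_mod_put(alg);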

    Fixes: 79c65d179a40 ("crypto: cbc - Convert to skcipher")
    Fixes: a7d85e06ed80 ("crypto: cfb - add support for Cipher FeedBack mode")
    Fixes: 043a44001b9e ("crypto: pcbc - Convert to skcipher")
    Cc:
    Signed-off-by: Pan Bian
    Signed-off-by: Herbert Xu


21 Apr, 2018

1 commit

  • We avoid various VLAs[1] by using constant expressions for block size
    and alignment mask.

    [1] http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
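
    The change follows this pattern (sketch; MAX_CIPHER_BLOCKSIZE is the
    crypto API's compile-time bound on cipher block sizes):

        /* before: variable-length array sized at runtime */
        u8 tmp[crypto_cipher_blocksize(cipher)];

        /* after: constant expression as the upper bound */
        u8 tmp[MAX_CIPHER_BLOCKSIZE];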

    Signed-off-by: Salvatore Mesoraca
    Signed-off-by: Herbert Xu


04 Aug, 2017

1 commit

  • There are quite a number of occurrences in the kernel of the pattern

    if (dst != src)
            memcpy(dst, src, walk.total % AES_BLOCK_SIZE);
    crypto_xor(dst, final, walk.total % AES_BLOCK_SIZE);

    or

    crypto_xor(keystream, src, nbytes);
    memcpy(dst, keystream, nbytes);

    where crypto_xor() is preceded or followed by a memcpy() invocation
    that is only there because crypto_xor() uses its output parameter as
    one of the inputs. To avoid having to add new instances of this pattern
    in the arm64 code, which will be refactored to implement non-SIMD
    fallbacks, add an alternative implementation called crypto_xor_cpy(),
    taking separate input and output arguments. This removes the need for
    the separate memcpy().
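
    With the new helper, the second pattern above collapses to (sketch):

        /* before */
        crypto_xor(keystream, src, nbytes);
        memcpy(dst, keystream, nbytes);

        /* after: dst = keystream ^ src, with no intermediate copy */
        crypto_xor_cpy(dst, keystream, src, nbytes);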

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu


11 Feb, 2017

1 commit

  • Instead of unconditionally forcing 4 byte alignment for all generic
    chaining modes that rely on crypto_xor() or crypto_inc() (which may
    result in unnecessary copying of data when the underlying hardware
    can perform unaligned accesses efficiently), make those functions
    deal with unaligned input explicitly, but only if the Kconfig symbol
    HAVE_EFFICIENT_UNALIGNED_ACCESS is set. This will allow us to drop
    the alignmasks from the CBC, CMAC, CTR, CTS, PCBC and SEQIV drivers.

    For crypto_inc(), this simply involves making the 4-byte stride
    conditional on HAVE_EFFICIENT_UNALIGNED_ACCESS being set, given that
    it typically operates on 16 byte buffers.

    For crypto_xor(), an algorithm is implemented that simply runs through
    the input using the largest strides possible if unaligned accesses are
    allowed. If they are not, an optimal sequence of memory accesses is
    emitted that takes the relative alignment of the input buffers into
    account, e.g., if the relative misalignment of dst and src is 4 bytes,
    the entire xor operation will be completed using 4 byte loads and stores
    (modulo unaligned bits at the start and end). Note that all expressions
    involving misalign are simply eliminated by the compiler when
    HAVE_EFFICIENT_UNALIGNED_ACCESS is defined.
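
    The fast path amounts to a word-at-a-time loop along these lines (a
    simplified sketch of the idea; xor_sketch is a hypothetical name, and
    the real code also handles the relative-misalignment cases described
    above):

        static void xor_sketch(u8 *dst, const u8 *src, unsigned int size)
        {
                if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) {
                        /* largest stride: one machine word per iteration */
                        while (size >= sizeof(unsigned long)) {
                                *(unsigned long *)dst ^= *(unsigned long *)src;
                                dst += sizeof(unsigned long);
                                src += sizeof(unsigned long);
                                size -= sizeof(unsigned long);
                        }
                }
                while (size--)          /* byte-at-a-time tail */
                        *dst++ ^= *src++;
        }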

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu


13 Jan, 2017

1 commit

  • Continuing from this commit: 52f5684c8e1e
    ("kernel: use macros from compiler.h instead of __attribute__((...))")

    I submitted 4 total patches. They are part of a task I've taken up to
    increase compiler portability in the kernel. I've cleaned up the
    subsystems under /kernel, /mm, /block and /security; this patch targets
    /crypto.

    There is <linux/compiler.h>, which provides macros for various gcc
    specific constructs, e.g. __weak for __attribute__((weak)). I've
    cleaned all instances of gcc specific attributes with the right macros
    for the crypto subsystem.

    I had to make one additional change in compiler-gcc.h for the case when
    one wants to use __attribute__((aligned)) without specifying an
    alignment factor. From the gcc docs, this will result in the largest
    alignment for that data type on the target machine, so I've named the
    macro __aligned_largest. Please advise if another name is more
    appropriate.
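
    The cleanup follows this pattern (sketch; example_state is a
    hypothetical struct name):

        /* before: gcc specific attribute spelled out directly */
        struct example_state {
                u8 buf[16] __attribute__((aligned));
        };

        /* after: portable macro, here the newly added __aligned_largest */
        struct example_state {
                u8 buf[16] __aligned_largest;
        };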

    Signed-off-by: Gideon Israel Dsouza
    Signed-off-by: Herbert Xu


28 Nov, 2016

1 commit


26 Nov, 2014

1 commit

  • This adds the module loading prefix "crypto-" to the template lookup
    as well.

    For example, attempting to load 'vfat(blowfish)' via AF_ALG now
    includes the "crypto-" prefix at every level, correctly rejecting
    "vfat":

    net-pf-38
    algif-hash
    crypto-vfat(blowfish)
    crypto-vfat(blowfish)-all
    crypto-vfat
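
    The moving parts look roughly like this (sketch; MODULE_ALIAS_CRYPTO
    adds a "crypto-"-prefixed module alias alongside the plain one):

        /* in a template such as crypto/pcbc.c */
        MODULE_ALIAS_CRYPTO("pcbc");    /* provides alias "crypto-pcbc" */

        /* template resolution requests only the prefixed name */
        request_module("crypto-%s", name);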

    Reported-by: Mathias Krause
    Signed-off-by: Kees Cook
    Acked-by: Mathias Krause
    Signed-off-by: Herbert Xu


08 Feb, 2008

1 commit


11 Jan, 2008

1 commit

  • This patch replaces the custom xor in CBC with the generic crypto_xor.

    It changes the operations for in-place encryption slightly to avoid
    calling crypto_xor with tmpbuf since it is not necessarily aligned.
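
    The xor step becomes a single call to the generic helper (sketch):

        /* iv ^= src, done in place on bsize bytes */
        crypto_xor(iv, src, bsize);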

    Signed-off-by: Herbert Xu


02 May, 2007

1 commit

  • This patch passes the type/mask along when constructing instances of
    templates. This is in preparation for templates that may support
    multiple types of instances depending on what is requested. For example,
    the planned software async crypto driver will use this construct.

    For the moment this allows us to check whether the instance constructed
    is of the correct type and avoid returning success if the type does not
    match.
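
    A template's create function can then verify the requested type along
    these lines (sketch, using the crypto_check_attr_type() helper):

        err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_BLKCIPHER);
        if (err)
                return ERR_PTR(err);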

    Signed-off-by: Herbert Xu


07 Feb, 2007

2 commits