16 Jul, 2020

1 commit

  • The flag CRYPTO_ALG_ASYNC is "inherited" in the sense that when a
    template is instantiated, the resulting instance will have
    CRYPTO_ALG_ASYNC set if any of the algorithms it uses has
    CRYPTO_ALG_ASYNC set.

    We'd like to add a second flag (CRYPTO_ALG_ALLOCATES_MEMORY) that gets
    "inherited" in the same way. This is difficult because the handling of
    CRYPTO_ALG_ASYNC is hardcoded everywhere. Address this by:

    - Add CRYPTO_ALG_INHERITED_FLAGS, which contains the set of flags that
      have these inheritance semantics.

    - Add crypto_algt_inherited_mask(), for use by template ->create()
      methods. It returns any of these flags that the user asked to be
      unset and thus must be passed in the 'mask' to crypto_grab_*().

    - Also modify crypto_check_attr_type() to handle computing the 'mask'
      so that most templates can just use this.

    - Make crypto_grab_*() propagate these flags to the template instance
      being created so that templates don't have to do this themselves.

    Make crypto/simd.c propagate these flags too, since it "wraps" another
    algorithm, similar to a template.
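
    As a rough illustration of the resulting pattern, here is a minimal
    sketch of how a shash template's ->create() might use the new helpers
    (modelled on crypto/cmac.c in current kernels; names and error handling
    are abbreviated, so treat it as illustrative rather than exact):

      #include <crypto/internal/cipher.h>
      #include <crypto/internal/hash.h>
      #include <linux/slab.h>

      static int example_create(struct crypto_template *tmpl,
                                struct rtattr **tb)
      {
              struct crypto_cipher_spawn *spawn;
              struct shash_instance *inst;
              u32 mask;
              int err;

              /* Checks the requested algorithm type and computes 'mask':
               * the inherited flags (CRYPTO_ALG_INHERITED_FLAGS) that the
               * user asked to be cleared. */
              err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SHASH, &mask);
              if (err)
                      return err;

              inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
              if (!inst)
                      return -ENOMEM;
              spawn = shash_instance_ctx(inst);

              /* Passing 'mask' here makes crypto_grab_cipher() both honor
               * the user's request and propagate the underlying cipher's
               * inherited flags to the new instance automatically. */
              err = crypto_grab_cipher(spawn, shash_crypto_instance(inst),
                                       crypto_attr_alg_name(tb[1]), 0, mask);
              if (err) {
                      crypto_drop_cipher(spawn);
                      kfree(inst);
                      return err;
              }

              /* ... fill in the instance and register it ... */
              return 0;
      }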

    Based on a patch by Mikulas Patocka
    (https://lore.kernel.org/r/alpine.LRH.2.02.2006301414580.30526@file01.intranet.prod.int.rdu2.redhat.com).

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

09 Jan, 2020

3 commits

  • Convert shash_free_instance() and its users to the new way of freeing
    instances, where a ->free() method is installed in the instance struct
    itself. This replaces the weakly-typed method crypto_template::free().

    This will allow removing support for the old way of freeing instances.

    Also give shash_free_instance() a more descriptive name to reflect that
    it's only for instances with a single spawn, not for any instance.
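
    For reference, the per-template change boils down to something like the
    following sketch (assuming the helper's new name is
    shash_free_singlespawn_instance(), as in current kernels):

      /* Sketch: instead of relying on the weakly-typed
       * crypto_template::free(), the template installs a strongly-typed
       * ->free() on the instance itself before registering it. */
      inst->free = shash_free_singlespawn_instance;

      err = shash_register_instance(tmpl, inst);
      if (err)
              /* The instance's own ->free() tears down its single spawn. */
              shash_free_singlespawn_instance(inst);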

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Now that all users of single-block cipher spawns have been converted to
    use 'struct crypto_cipher_spawn' rather than the less specifically typed
    'struct crypto_spawn', make crypto_spawn_cipher() take a pointer to a
    'struct crypto_cipher_spawn' rather than a 'struct crypto_spawn'.
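
    A sketch of what a typical call site looks like after the conversion
    (modelled on cmac's ->init_tfm; names are illustrative):

      static int example_init_tfm(struct crypto_tfm *tfm)
      {
              struct crypto_instance *inst = crypto_tfm_alg_instance(tfm);
              /* The instance context now holds the strongly-typed spawn. */
              struct crypto_cipher_spawn *spawn = crypto_instance_ctx(inst);
              struct crypto_cipher *cipher;

              /* crypto_spawn_cipher() takes the crypto_cipher_spawn
               * directly, so no cast from 'struct crypto_spawn' is needed. */
              cipher = crypto_spawn_cipher(spawn);
              if (IS_ERR(cipher))
                      return PTR_ERR(cipher);

              /* ... store 'cipher' in the transform context ... */
              return 0;
      }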

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Make the cmac template use the new function crypto_grab_cipher() to
    initialize its cipher spawn.

    This is needed so that all spawns are initialized in a consistent way.

    This required making cmac_create() allocate the instance directly rather
    than use shash_alloc_instance().

    Also simplify the error handling by taking advantage of crypto_drop_*()
    now accepting (as a no-op) spawns that haven't been initialized yet, and
    of crypto_grab_*() now handling ERR_PTR() names.
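
    The resulting error handling collapses to a single label, roughly
    (sketch; the exact code in crypto/cmac.c may differ in detail):

      err = crypto_grab_cipher(spawn, shash_crypto_instance(inst),
                               crypto_attr_alg_name(tb[1]), 0, mask);
      if (err)
              goto err_free_inst;

      /* ... block-size checks, instance setup ... */

      err = shash_register_instance(tmpl, inst);
      if (err) {
      err_free_inst:
              /* Safe even if crypto_grab_cipher() failed above, because
               * crypto_drop_cipher() is now a no-op on an uninitialized
               * spawn. */
              shash_free_singlespawn_instance(inst);
      }
      return err;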

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

31 May, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version

    Extracted by the scancode license scanner, the SPDX license identifier
    GPL-2.0-or-later has been chosen to replace the boilerplate/reference in
    3029 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

18 Apr, 2019

1 commit

  • Use subsys_initcall for registration of all templates and generic
    algorithm implementations, rather than module_init. Then change
    cryptomgr to use arch_initcall, to place it before the subsys_initcalls.

    This is needed so that when both a generic and optimized implementation
    of an algorithm are built into the kernel (not loadable modules), the
    generic implementation is registered before the optimized one.
    Otherwise, the self-tests for the optimized implementation are unable to
    allocate the generic implementation for the new comparison fuzz tests.

    Note that on arm, a side effect of this change is that self-tests for
    generic implementations may run before the unaligned access handler has
    been installed, so unaligned accesses will crash the kernel. This is
    arguably a good thing, as it makes that type of bug easier to detect.
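
    The per-module change is mechanical; for cmac it amounts to roughly the
    following (sketch, assuming the template variable is named
    crypto_cmac_tmpl as in crypto/cmac.c):

      static int __init crypto_cmac_module_init(void)
      {
              return crypto_register_template(&crypto_cmac_tmpl);
      }

      /* Previously: module_init(crypto_cmac_module_init); */
      subsys_initcall(crypto_cmac_module_init);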

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

11 Feb, 2017

1 commit

  • Instead of unconditionally forcing 4-byte alignment for all generic
    chaining modes that rely on crypto_xor() or crypto_inc() (which may
    result in unnecessary copying of data when the underlying hardware
    can perform unaligned accesses efficiently), make those functions
    deal with unaligned input explicitly, but only if the Kconfig symbol
    HAVE_EFFICIENT_UNALIGNED_ACCESS is set. This will allow us to drop
    the alignmasks from the CBC, CMAC, CTR, CTS, PCBC and SEQIV drivers.

    For crypto_inc(), this simply involves making the 4-byte stride
    conditional on HAVE_EFFICIENT_UNALIGNED_ACCESS being set, given that
    it typically operates on 16-byte buffers.

    For crypto_xor(), an algorithm is implemented that simply runs through
    the input using the largest strides possible if unaligned accesses are
    allowed. If they are not, an optimal sequence of memory accesses is
    emitted that takes the relative alignment of the input buffers into
    account, e.g., if the relative misalignment of dst and src is 4 bytes,
    the entire xor operation will be completed using 4-byte loads and stores
    (modulo unaligned bits at the start and end). Note that all expressions
    involving 'misalign' are simply eliminated by the compiler when
    HAVE_EFFICIENT_UNALIGNED_ACCESS is defined.
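
    A heavily simplified sketch of the fast path for crypto_xor() (the real
    implementation also special-cases constant sizes and emits the
    misalignment-aware access sequence described above):

      /* Sketch: XOR 'src' into 'dst' one machine word at a time when
       * unaligned accesses are cheap, otherwise fall back to bytes. */
      void crypto_xor(u8 *dst, const u8 *src, unsigned int size)
      {
              if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) {
                      while (size >= sizeof(unsigned long)) {
                              *(unsigned long *)dst ^= *(unsigned long *)src;
                              dst += sizeof(unsigned long);
                              src += sizeof(unsigned long);
                              size -= sizeof(unsigned long);
                      }
              }
              while (size--)
                      *dst++ ^= *src++;
      }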

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     

21 Oct, 2016

2 commits

  • The per-transform 'consts' array is accessed as __be64 in
    crypto_cmac_digest_setkey() but was only guaranteed to be aligned to
    __alignof__(long). Fix this by aligning it to __alignof__(__be64).
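
    In other words, the backing storage must satisfy the stricter alignment
    before it is cast (illustrative fragment; 'ctx->ctx' is a stand-in for
    the per-transform buffer):

      /* __alignof__(__be64) can be stricter than __alignof__(long) on
       * 32-bit platforms, so align the buffer for the type it is actually
       * accessed as before casting. */
      __be64 *consts = PTR_ALIGN((__be64 *)ctx->ctx, __alignof__(__be64));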

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • cmac_create() previously returned 0 if a cipher with a block size other
    than 8 or 16 bytes was specified. Make it return -EINVAL instead.
    Granted, this doesn't actually change any behavior, because cryptomgr
    currently ignores any return value other than -EAGAIN from template
    ->create() functions.
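
    The check then becomes roughly the following (sketch of the
    pre-conversion cmac_create(), where 'out_put_alg' was the common exit
    label):

      switch (alg->cra_blocksize) {
      case 8:
      case 16:
              break;
      default:
              /* Previously fell through to the exit path with err still 0. */
              err = -EINVAL;
              goto out_put_alg;
      }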

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

26 Nov, 2014

1 commit

  • This adds the module loading prefix "crypto-" to the template lookup
    as well.

    For example, attempting to load 'vfat(blowfish)' via AF_ALG now includes
    the "crypto-" prefix at every level of the module lookup, so the bogus
    "vfat" template is correctly rejected:

    net-pf-38
    algif-hash
    crypto-vfat(blowfish)
    crypto-vfat(blowfish)-all
    crypto-vfat
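
    Under the hood this is just the usual module auto-loading with every
    request prefixed (sketch of the pattern used by the algorithm and
    template lookups):

      /* Only modules that declare a matching MODULE_ALIAS_CRYPTO() alias
       * can be auto-loaded; a filesystem module like vfat never will be. */
      request_module("crypto-%s", name);
      request_module("crypto-%s-all", name);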

    Reported-by: Mathias Krause
    Signed-off-by: Kees Cook
    Acked-by: Mathias Krause
    Signed-off-by: Herbert Xu

    Kees Cook
     

25 Apr, 2013

1 commit

  • This patch adds support for the NIST-recommended block cipher MAC mode
    CMAC to CryptoAPI.

    This work is based on Tom St Denis' earlier patch,
    http://marc.info/?l=linux-crypto-vger&m=135877306305466&w=2
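
    For reference, a minimal sketch of computing an AES-CMAC through the
    synchronous hash API (helpers shown are from current kernels; error
    handling is abbreviated):

      #include <crypto/hash.h>
      #include <linux/err.h>

      /* Sketch: compute an AES-CMAC over 'data' with the "cmac(aes)"
       * template.  'key' is a 16-byte AES key; 'mac' receives the
       * 16-byte tag. */
      static int cmac_aes_digest(const u8 *key, const u8 *data,
                                 unsigned int len, u8 *mac)
      {
              struct crypto_shash *tfm;
              int err;

              tfm = crypto_alloc_shash("cmac(aes)", 0, 0);
              if (IS_ERR(tfm))
                      return PTR_ERR(tfm);

              err = crypto_shash_setkey(tfm, key, 16);
              if (!err) {
                      SHASH_DESC_ON_STACK(desc, tfm);

                      desc->tfm = tfm;
                      err = crypto_shash_digest(desc, data, len, mac);
              }
              crypto_free_shash(tfm);
              return err;
      }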

    Cc: Tom St Denis
    Signed-off-by: Jussi Kivilinna
    Acked-by: David S. Miller
    Signed-off-by: Herbert Xu

    Jussi Kivilinna