16 Jul, 2020

1 commit

  • The flag CRYPTO_ALG_ASYNC is "inherited" in the sense that when a
    template is instantiated, the template will have CRYPTO_ALG_ASYNC set if
    any of the algorithms it uses has CRYPTO_ALG_ASYNC set.

    We'd like to add a second flag (CRYPTO_ALG_ALLOCATES_MEMORY) that gets
    "inherited" in the same way. This is difficult because the handling of
    CRYPTO_ALG_ASYNC is hardcoded everywhere. Address this by:

    - Add CRYPTO_ALG_INHERITED_FLAGS, which contains the set of flags that
    have these inheritance semantics.

    - Add crypto_algt_inherited_mask(), for use by template ->create()
    methods. It returns any of these flags that the user asked to be
    unset and thus must be passed in the 'mask' to crypto_grab_*().

    - Also modify crypto_check_attr_type() to handle computing the 'mask'
    so that most templates can just use this.

    - Make crypto_grab_*() propagate these flags to the template instance
    being created so that templates don't have to do this themselves.

    Make crypto/simd.c propagate these flags too, since it "wraps" another
    algorithm, similar to a template.
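
    For illustration, here is a minimal sketch of what a template ->create()
    method can look like after this change. The function name example_create
    and the surrounding details are hypothetical, and the helper signatures
    are assumed to match the post-change API:

        #include <linux/slab.h>
        #include <crypto/algapi.h>
        #include <crypto/internal/hash.h>

        static int example_create(struct crypto_template *tmpl, struct rtattr **tb)
        {
                struct shash_instance *inst;
                struct crypto_cipher_spawn *spawn;
                u32 mask;
                int err;

                /* Check the requested type and compute the 'mask' of inherited
                 * flags (CRYPTO_ALG_INHERITED_FLAGS) the user asked to be clear. */
                err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SHASH, &mask);
                if (err)
                        return err;

                inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
                if (!inst)
                        return -ENOMEM;
                spawn = shash_instance_ctx(inst);

                /* Passing 'mask' makes crypto_grab_cipher() honor the user's
                 * request and propagate the inherited flags to the instance. */
                err = crypto_grab_cipher(spawn, shash_crypto_instance(inst),
                                         crypto_attr_alg_name(tb[1]), 0, mask);
                if (err) {
                        kfree(inst);
                        return err;
                }

                /* ... set up inst->alg and register the instance as usual ... */
                return 0;
        }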

    Based on a patch by Mikulas Patocka
    (https://lore.kernel.org/r/alpine.LRH.2.02.2006301414580.30526@file01.intranet.prod.int.rdu2.redhat.com).

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

09 Jan, 2020

3 commits

  • Convert shash_free_instance() and its users to the new way of freeing
    instances, where a ->free() method is installed to the instance struct
    itself. This replaces the weakly-typed method crypto_template::free().

    This will allow removing support for the old way of freeing instances.

    Also give shash_free_instance() a more descriptive name to reflect that
    it's only for instances with a single spawn, not for any instance.
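
    As a rough sketch of the new style (assuming the renamed helper is
    shash_free_singlespawn_instance(), as in current kernels), a template's
    ->create() installs the free routine on the instance itself instead of
    relying on crypto_template::free:

        /* The destructor now lives on the instance; the crypto core calls
         * it when the instance goes away. */
        inst->free = shash_free_singlespawn_instance;

        err = shash_register_instance(tmpl, inst);
        if (err)
                /* The same routine also serves as the error-path cleanup. */
                shash_free_singlespawn_instance(inst);
        return err;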

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Now that all users of single-block cipher spawns have been converted to
    use 'struct crypto_cipher_spawn' rather than the less specifically typed
    'struct crypto_spawn', make crypto_spawn_cipher() take a pointer to a
    'struct crypto_cipher_spawn' rather than a 'struct crypto_spawn'.
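
    The resulting helper looks roughly like this (a sketch based on the
    current kernel definition):

        static inline struct crypto_cipher *crypto_spawn_cipher(
                struct crypto_cipher_spawn *spawn)
        {
                u32 type = CRYPTO_ALG_TYPE_CIPHER;
                u32 mask = CRYPTO_ALG_TYPE_MASK;

                /* The strongly typed spawn wraps a plain crypto_spawn in its
                 * 'base' member, so callers can no longer pass an unrelated
                 * spawn type by accident. */
                return __crypto_cipher_cast(crypto_spawn_tfm(&spawn->base, type, mask));
        }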

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Make the xcbc template use the new function crypto_grab_cipher() to
    initialize its cipher spawn.

    This is needed so that all spawns are initialized in a consistent way.

    This required making xcbc_create() allocate the instance directly rather
    than use shash_alloc_instance().

    Also simplify the error handling by taking advantage of crypto_drop_*()
    now accepting (as a no-op) spawns that haven't been initialized yet, and
    by taking advantage of crypto_grab_*() now handling ERR_PTR() names.
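
    The simplified flow in xcbc_create() then looks roughly like this (a
    sketch following the helpers above rather than the literal patch):

        err = crypto_grab_cipher(spawn, shash_crypto_instance(inst),
                                 crypto_attr_alg_name(tb[1]), 0, mask);
        if (err)
                goto err_free_inst;
        alg = crypto_spawn_cipher_alg(spawn);

        /* ... fill in inst->alg from 'alg' ... */

        err = shash_register_instance(tmpl, inst);
        if (err) {
        err_free_inst:
                /* Safe even when the spawn was never grabbed, since dropping
                 * an uninitialized spawn is now a no-op. */
                shash_free_singlespawn_instance(inst);
        }
        return err;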

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

21 May, 2019

1 commit

  • Based on 2 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version this program is distributed in the
    hope that it will be useful but without any warranty without even
    the implied warranty of merchantability or fitness for a particular
    purpose see the gnu general public license for more details you
    should have received a copy of the gnu general public license along
    with this program if not see http www gnu org licenses

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version this program is distributed in the
    hope that it will be useful but without any warranty without even
    the implied warranty of merchantability or fitness for a particular
    purpose see the gnu general public license for more details [based]
    [from] [clk] [highbank] [c] you should have received a copy of the
    gnu general public license along with this program if not see http
    www gnu org licenses

    Extracted by the scancode license scanner, the SPDX license identifier
    GPL-2.0-or-later has been chosen to replace the boilerplate/reference in
    355 file(s).
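
    In practice the replacement is a single SPDX header line at the top of
    each file, e.g. for a C source file:

        // SPDX-License-Identifier: GPL-2.0-or-later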

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Kate Stewart
    Reviewed-by: Jilayne Lovejoy
    Reviewed-by: Steve Winslow
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190519154041.837383322@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

18 Apr, 2019

1 commit

  • Use subsys_initcall for registration of all templates and generic
    algorithm implementations, rather than module_init. Then change
    cryptomgr to use arch_initcall, to place it before the subsys_initcalls.

    This is needed so that when both a generic and optimized implementation
    of an algorithm are built into the kernel (not loadable modules), the
    generic implementation is registered before the optimized one.
    Otherwise, the self-tests for the optimized implementation are unable to
    allocate the generic implementation for the new comparison fuzz tests.

    Note that on arm, a side effect of this change is that self-tests for
    generic implementations may run before the unaligned access handler has
    been installed. So, unaligned accesses will crash the kernel. This is
    arguably a good thing as it makes it easier to detect that type of bug.
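
    For a generic template such as xcbc, the change amounts to swapping the
    initcall level used for registration; a sketch, assuming the usual
    crypto_xcbc_module_init()/_exit() names from crypto/xcbc.c:

        static int __init crypto_xcbc_module_init(void)
        {
                return crypto_register_template(&crypto_xcbc_tmpl);
        }

        static void __exit crypto_xcbc_module_exit(void)
        {
                crypto_unregister_template(&crypto_xcbc_tmpl);
        }

        /* Was: module_init(crypto_xcbc_module_init); */
        subsys_initcall(crypto_xcbc_module_init);
        module_exit(crypto_xcbc_module_exit);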

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

04 Sep, 2018

1 commit

  • In the quest to remove all stack VLA usage from the kernel[1], this uses
    the maximum blocksize and adds a sanity check. For xcbc, the blocksize
    must always be 16, so use that, since it's already being enforced during
    instantiation.

    [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
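
    The shape of the change, as a sketch (XCBC_BLOCKSIZE is the existing
    constant in crypto/xcbc.c; the surrounding setkey code is assumed):

        unsigned int bs = crypto_shash_blocksize(parent);
        u8 key1[XCBC_BLOCKSIZE];        /* was: u8 key1[bs]; -- a stack VLA */

        /* Sanity check: instantiation already enforces a 16-byte blocksize,
         * so this should never trigger. */
        if (WARN_ON(bs > sizeof(key1)))
                return -EINVAL;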

    Signed-off-by: Kees Cook
    Signed-off-by: Herbert Xu

    Kees Cook
     

29 Nov, 2017

1 commit


26 Nov, 2014

1 commit

  • This adds the module loading prefix "crypto-" to the template lookup
    as well.

    For example, attempting to load 'vfat(blowfish)' via AF_ALG now includes
    the "crypto-" prefix at every level, correctly rejecting "vfat":

    net-pf-38
    algif-hash
    crypto-vfat(blowfish)
    crypto-vfat(blowfish)-all
    crypto-vfat
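
    On the template side, the lookup now requests the prefixed module name,
    roughly as in crypto_lookup_template() in crypto/algapi.c:

        struct crypto_template *crypto_lookup_template(const char *name)
        {
                /* Was "%s": now only modules declaring a MODULE_ALIAS_CRYPTO()
                 * alias can be autoloaded through the crypto layer. */
                return try_then_request_module(__crypto_lookup_template(name),
                                               "crypto-%s", name);
        }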

    Reported-by: Mathias Krause
    Signed-off-by: Kees Cook
    Acked-by: Mathias Krause
    Signed-off-by: Herbert Xu

    Kees Cook
     

01 Nov, 2011

1 commit


20 Aug, 2009

1 commit

  • The alignment calculation of xcbc_tfm_ctx uses alg->cra_alignmask
    rather than alg->cra_alignmask + 1 as it should, which led to frequent
    crashes during the self-test of xcbc(aes-asm) on x86_64 machines. This
    patch fixes that. Also, xcbc_create() now uses the alignmask of xcbc
    itself, not the alignmask of the underlying algorithm, for the alignment
    calculation.
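
    The distinction, sketched with the kernel's PTR_ALIGN() helper ('buf'
    is an illustrative pointer into the tfm context):

        unsigned long alignmask = crypto_shash_alignmask(parent);

        /* Buggy: PTR_ALIGN() expects an alignment (a power of two), not a
         * mask, so this under-aligns the buffer. */
        u8 *consts = PTR_ALIGN(buf, alignmask);

        /* Fixed: an alignmask is the desired alignment minus one. */
        consts = PTR_ALIGN(buf, alignmask + 1);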

    Signed-off-by: Steffen Klassert
    Signed-off-by: Herbert Xu

    Steffen Klassert
     

22 Jul, 2009

2 commits


15 Jul, 2009

1 commit


14 Jul, 2009

1 commit


02 Apr, 2008

1 commit

  • The kernel crashes when ipsec passes a udp packet of about 14XX bytes
    of data to aes-xcbc-mac.

    It seems the first xxxx bytes of the data are in the first sg entry,
    and the remaining xx bytes are in the next sg entry, but we don't check
    the next sg entry to see whether we need to look its page up.

    I noticed that hmac.c does a scatterwalk_sg_next() to perform this check
    and possible lookup, so xcbc.c needs to use this routine too.

    A 15-hour run of an ipsec stress test, sending streams of tcp and udp
    packets of various sizes with this patch and aes-xcbc-mac, completed
    successfully, so hopefully this fixes the problem.
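
    The underlying idea, sketched with the generic sg_next() helper rather
    than the scatterwalk_sg_next() call the patch actually used ('sg',
    'offset' and 'nbytes' are illustrative):

        while (nbytes > 0) {
                unsigned int len = min(nbytes, sg->length - offset);

                /* ... process 'len' bytes from this sg entry ... */

                nbytes -= len;
                offset += len;
                if (offset == sg->length) {
                        /* The rest of the data lives in the next sg entry,
                         * so its page must be looked up there. */
                        sg = sg_next(sg);
                        offset = 0;
                }
        }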

    Signed-off-by: Joy Latten
    Signed-off-by: Herbert Xu

    Joy Latten
     

06 Mar, 2008

1 commit

  • When using aes-xcbc-mac for authentication in IPsec, the kernel crashes.
    It seems this algorithm doesn't account for the space IPsec may make in
    the scatterlist for the authtag. Thus, when crypto_xcbc_digest_update2()
    gets called, nbytes may be less than sg[i].length. Since nbytes is an
    unsigned number, it wraps around at the end of the loop, allowing us to
    go back into the loop and causing a crash in memcpy().

    I modeled this fix on the update function in digest.c.
    Please let me know if it looks ok.
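
    The failure mode, as an illustrative sketch (variable names are not from
    the patch): if the per-entry length is taken straight from sg[i].length,
    the unsigned byte counter underflows and the loop keeps going:

        while (nbytes > 0) {
                /* sg[i].length may exceed the bytes actually passed in,
                 * because IPsec reserves authtag space in the scatterlist,
                 * so clamp it to nbytes. */
                unsigned int len = min(nbytes, sg[i].length);

                memcpy(buf, sg_virt(&sg[i]), len);

                /* Without the min() above, this subtraction could underflow
                 * nbytes, wrap it to a huge value, and send the loop back
                 * around into a crashing memcpy(). */
                nbytes -= len;
                i++;
        }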

    Signed-off-by: Joy Latten
    Signed-off-by: Herbert Xu

    Joy Latten
     

08 Feb, 2008

1 commit


11 Jan, 2008

3 commits


23 Oct, 2007

1 commit


02 May, 2007

1 commit

  • This patch passes the type/mask along when constructing instances of
    templates. This is in preparation for templates that may support
    multiple types of instances depending on what is requested. For example,
    the planned software async crypto driver will use this construct.

    For the moment, this allows us to check whether the constructed instance
    is of the correct type and to avoid returning success when the type does
    not match.
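
    The kind of check this enables, sketched in the style later templates
    use (the CRYPTO_ALG_TYPE_HASH target and the exact helper signatures of
    this era are assumptions):

        struct crypto_attr_type *algt;

        algt = crypto_attr_type(tb);
        if (IS_ERR(algt))
                return ERR_CAST(algt);

        /* Refuse to build an instance whose type doesn't match what the
         * caller asked for. */
        if ((algt->type ^ CRYPTO_ALG_TYPE_HASH) & algt->mask)
                return ERR_PTR(-EINVAL);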

    Signed-off-by: Herbert Xu

    Herbert Xu
     

07 Feb, 2007

3 commits


07 Dec, 2006

2 commits

  • On Tue, Nov 14, 2006 at 01:41:25AM -0800, Andrew Morton wrote:
    >...
    > Changes since 2.6.19-rc5-mm2:
    >...
    > git-cryptodev.patch
    >...
    > git trees
    >...

    This patch makes some needlessly global code static.

    Signed-off-by: Adrian Bunk
    Signed-off-by: Herbert Xu

    Adrian Bunk
     
  • This is the core code of XCBC.

    XCBC is a construction that turns a block cipher into a MAC algorithm.
    For example, AES-XCBC-MAC is a MAC algorithm based on the AES cipher.
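
    For context, allocating and using the resulting MAC through today's
    shash API might look like this (a sketch; 'key', 'data' and 'len' are
    assumed, and the 2006-era hash API differed):

        struct crypto_shash *tfm;
        u8 out[16];
        int err;

        /* "xcbc(aes)" instantiates the xcbc template over the AES cipher. */
        tfm = crypto_alloc_shash("xcbc(aes)", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        err = crypto_shash_setkey(tfm, key, 16);
        if (!err) {
                SHASH_DESC_ON_STACK(desc, tfm);

                desc->tfm = tfm;
                err = crypto_shash_digest(desc, data, len, out);
        }
        crypto_free_shash(tfm);
        return err;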

    Signed-off-by: Kazunori MIYAZAWA
    Signed-off-by: Herbert Xu

    Kazunori MIYAZAWA