13 Jan, 2021

1 commit

  • commit 0aa171e9b267ce7c52d3a3df7bc9c1fc0203dec5 upstream.

    Pavel reports that commit 17858b140bf4 ("crypto: ecdh - avoid unaligned
    accesses in ecdh_set_secret()") fixes one problem but introduces another:
    the unconditional memcpy() introduced by that commit may overflow the
    target buffer if the source data is invalid, which could be the result of
    intentional tampering.

    So check params.key_size explicitly against the size of the target buffer
    before validating the key further.

    Fixes: 17858b140bf4 ("crypto: ecdh - avoid unaligned accesses in ecdh_set_secret()")
    Reported-by: Pavel Machek
    Cc:
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Ard Biesheuvel
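
    A minimal sketch of the shape of this fix, not the verbatim upstream
    patch; the structure and helper names (ecdh_ctx, crypto_ecdh_decode_key(),
    ecc_is_key_valid()) are assumptions based on crypto/ecdh.c:

      /* Hypothetical excerpt: bound params.key_size by the fixed-size
       * private_key buffer before any copy or further validation. */
      static int ecdh_set_secret(struct crypto_kpp *tfm, const void *buf,
                                 unsigned int len)
      {
              struct ecdh_ctx *ctx = ecdh_get_ctx(tfm);
              struct ecdh params;

              if (crypto_ecdh_decode_key(buf, len, &params) < 0 ||
                  params.key_size > sizeof(ctx->private_key))
                      return -EINVAL;

              /* Only now is the memcpy() known to stay in bounds. */
              memcpy(ctx->private_key, params.key, params.key_size);

              if (ecc_is_key_valid(ctx->curve_id, ctx->ndigits,
                                   ctx->private_key, params.key_size) < 0)
                      return -EINVAL;

              return 0;
      }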
     

30 Dec, 2020

1 commit

  • commit 17858b140bf49961b71d4e73f1c3ea9bc8e7dda0 upstream.

    ecdh_set_secret() casts a void* pointer to a const u64* in order to
    feed it into ecc_is_key_valid(). This is not generally permitted by
    the C standard, and leads to actual misalignment faults on ARMv6
    cores. In some cases, these are fixed up in software, but this still
    leads to performance hits that are entirely avoidable.

    So let's copy the key into the ctx buffer first, which we will do
    anyway in the common case, and which guarantees correct alignment.

    Cc:
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Ard Biesheuvel
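
    A rough before/after sketch of the idea, with assumed field names;
    ctx->private_key is an array of u64, so the destination of the copy is
    naturally aligned:

      /* Before (assumed): validate straight from the caller's buffer,
       * forcing unaligned u64 loads on strict-alignment CPUs. */
      if (ecc_is_key_valid(ctx->curve_id, ctx->ndigits,
                           (const u64 *)params.key, params.key_size) < 0)
              return -EINVAL;

      /* After: copy into the aligned per-tfm buffer first, then validate
       * the aligned copy. */
      memcpy(ctx->private_key, params.key, params.key_size);
      if (ecc_is_key_valid(ctx->curve_id, ctx->ndigits,
                           ctx->private_key, params.key_size) < 0)
              return -EINVAL;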
     

08 Aug, 2020

1 commit

  • As said by Linus:

    A symmetric naming is only helpful if it implies symmetries in use.
    Otherwise it's actively misleading.

    In "kzalloc()", the z is meaningful and an important part of what the
    caller wants.

    In "kzfree()", the z is actively detrimental, because maybe in the
    future we really _might_ want to use that "memfill(0xdeadbeef)" or
    something. The "zero" part of the interface isn't even _relevant_.

    The main reason that kzfree() exists is to clear sensitive information
    that should not be leaked to other future users of the same memory
    objects.

    Rename kzfree() to kfree_sensitive() to follow the example of the recently
    added kvfree_sensitive() and make the intention of the API more explicit.
    In addition, memzero_explicit() is used to clear the memory to make sure
    that it won't get optimized away by the compiler.

    The renaming is done by using the command sequence:

    git grep -w --name-only kzfree |\
    xargs sed -i 's/kzfree/kfree_sensitive/'

    followed by some editing of the kfree_sensitive() kerneldoc and adding
    a kzfree backward compatibility macro in slab.h.

    [akpm@linux-foundation.org: fs/crypto/inline_crypt.c needs linux/slab.h]
    [akpm@linux-foundation.org: fix fs/crypto/inline_crypt.c some more]

    Suggested-by: Joe Perches
    Signed-off-by: Waiman Long
    Signed-off-by: Andrew Morton
    Acked-by: David Howells
    Acked-by: Michal Hocko
    Acked-by: Johannes Weiner
    Cc: Jarkko Sakkinen
    Cc: James Morris
    Cc: "Serge E. Hallyn"
    Cc: Joe Perches
    Cc: Matthew Wilcox
    Cc: David Rientjes
    Cc: Dan Carpenter
    Cc: "Jason A . Donenfeld"
    Link: http://lkml.kernel.org/r/20200616154311.12314-3-longman@redhat.com
    Signed-off-by: Linus Torvalds

    Waiman Long
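
    For reference, a sketch of what the renamed helper amounts to (not the
    exact mm/slab_common.c code), plus the assumed form of the backward
    compatibility macro mentioned above:

      void kfree_sensitive(const void *p)
      {
              size_t ks = ksize(p);

              /* memzero_explicit() cannot be elided by the compiler, so
               * the sensitive contents are really cleared before the free. */
              if (ks)
                      memzero_explicit((void *)p, ks);
              kfree(p);
      }

      /* Backward-compatibility macro in <linux/slab.h> (assumed form): */
      #define kzfree(x)       kfree_sensitive(x)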
     

31 May, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 3029 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
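
    Concretely, the multi-line license boilerplate at the top of each
    affected file is replaced by a single SPDX tag, e.g. (comment style
    varies with file type):

      // SPDX-License-Identifier: GPL-2.0-or-later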
     

18 Apr, 2019

1 commit

  • Use subsys_initcall for registration of all templates and generic
    algorithm implementations, rather than module_init. Then change
    cryptomgr to use arch_initcall, to place it before the subsys_initcalls.

    This is needed so that when both a generic and optimized implementation
    of an algorithm are built into the kernel (not loadable modules), the
    generic implementation is registered before the optimized one.
    Otherwise, the self-tests for the optimized implementation are unable to
    allocate the generic implementation for the new comparison fuzz tests.

    Note that on arm, a side effect of this change is that self-tests for
    generic implementations may run before the unaligned access handler has
    been installed. So, unaligned accesses will crash the kernel. This is
    arguably a good thing as it makes it easier to detect that type of bug.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
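
    A sketch of the resulting registration pattern; the generic module
    name is hypothetical and cryptomgr's init/exit function names are
    assumed:

      /* A generic implementation or template: register at subsys_initcall
       * time instead of module_init, so built-in generics come first. */
      subsys_initcall(generic_alg_mod_init);
      module_exit(generic_alg_mod_fini);

      /* crypto/algboss.c: cryptomgr moves up to arch_initcall so it is in
       * place before any subsys_initcall-time registrations trigger the
       * self-tests. */
      arch_initcall(cryptomgr_init);
      module_exit(cryptomgr_exit);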
     

21 Apr, 2018

1 commit

  • On the quest to remove all VLAs from the kernel[1], this avoids VLAs
    by just using the maximum allocation size (4 u64 digits) for stack
    arrays. All the VLAs in ecc were sized at either 3 or 4 digits (or a
    multiple of that), so just make it 4 digits all the time.
    Initialization routines are adjusted to check that ndigits does not
    end up larger than the arrays.

    This includes removing the earlier attempt at this fix from
    commit a963834b4742 ("crypto/ecc: Remove stack VLA usage").

    [1] https://lkml.org/lkml/2018/3/7/621

    Signed-off-by: Kees Cook
    Signed-off-by: Herbert Xu

    Kees Cook
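
    A hypothetical before/after illustrating the pattern, assuming
    ECC_MAX_DIGITS is the fixed upper bound (4) defined in the ecc code:

      /* Before: a variable-length array sized by the curve's digit count. */
      u64 tmp[ndigits];

      /* After: a fixed-size array at the maximum, with the digit count
       * validated once in the initialization/key-parsing path. */
      u64 tmp[ECC_MAX_DIGITS];

      if (ndigits > ECC_MAX_DIGITS)
              return -EINVAL;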
     

09 Mar, 2018

1 commit


06 Nov, 2017

1 commit

  • Pointer members of an object with static storage duration, if not
    explicitly initialized, will be initialized to a NULL pointer. The
    crypto API checks that this pointer is not NULL before using it, so
    we are safe to remove the function.

    Signed-off-by: Tudor Ambarus
    Signed-off-by: Herbert Xu

    Tudor-Dan Ambarus
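
    A hypothetical illustration of the C rule being relied on here, using
    the kpp_alg registration as an example (field names assumed):

      static struct kpp_alg ecdh = {
              .set_secret             = ecdh_set_secret,
              .generate_public_key    = ecdh_compute_value,
              .compute_shared_secret  = ecdh_compute_value,
              .max_size               = ecdh_max_size,
              /* .exit is not listed: because the object has static storage
               * duration it is implicitly NULL, and the crypto core checks
               * the pointer before calling it, so an empty stub function
               * serves no purpose. */
      };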
     

03 Aug, 2017

1 commit

  • ecdh_ctx contained statically allocated buffers for the shared secret
    and public key.

    The shared secret and the public key were prone to concurrency issues
    because they could be shared by multiple crypto requests.

    The concurrency is fixed by replacing per-tfm shared secret and
    public key with per-request dynamically allocated shared secret
    and public key.

    Signed-off-by: Tudor Ambarus
    Signed-off-by: Herbert Xu

    Tudor-Dan Ambarus
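
    A rough sketch of the idea with assumed names; the scratch buffers
    move from the per-tfm ecdh_ctx into a per-request allocation:

      static int ecdh_compute_value(struct kpp_request *req)
      {
              struct crypto_kpp *tfm = crypto_kpp_reqtfm(req);
              struct ecdh_ctx *ctx = ecdh_get_ctx(tfm);
              size_t nbytes = ctx->ndigits * sizeof(u64);
              u64 *buf;
              int ret = 0;

              /* Per-request scratch instead of ctx->public_key /
               * ctx->shared_secret, so concurrent requests on one tfm no
               * longer stomp on shared buffers. */
              buf = kmalloc(nbytes * 2, GFP_KERNEL);
              if (!buf)
                      return -ENOMEM;

              /* ... compute the public key or shared secret into buf and
               * copy it to req->dst, setting ret on failure ... */

              kzfree(buf);    /* may hold key material */
              return ret;
      }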
     

10 Jun, 2017

6 commits


09 Mar, 2017

1 commit


24 Jun, 2016

1 commit


23 Jun, 2016

1 commit