21 Aug, 2020

1 commit

  • Revert "crypto: hash - Add real ahash walk interface"
    This reverts commit 75ecb231ff45b54afa9f4ec9137965c3c00868f4.

    The callers of the functions in this commit were removed in ab8085c130ed,
    so remove this now-unused interface.

    Fixes: ab8085c130ed ("crypto: x86 - remove SHA multibuffer routines and mcryptd")
    Cc: Ard Biesheuvel
    Signed-off-by: Ira Weiny
    Signed-off-by: Herbert Xu

    Ira Weiny
     

08 Aug, 2020

1 commit

  • As said by Linus:

    A symmetric naming is only helpful if it implies symmetries in use.
    Otherwise it's actively misleading.

    In "kzalloc()", the z is meaningful and an important part of what the
    caller wants.

    In "kzfree()", the z is actively detrimental, because maybe in the
    future we really _might_ want to use that "memfill(0xdeadbeef)" or
    something. The "zero" part of the interface isn't even _relevant_.

    The main reason that kzfree() exists is to clear sensitive information
    that should not be leaked to other future users of the same memory
    objects.

    Rename kzfree() to kfree_sensitive() to follow the example of the recently
    added kvfree_sensitive() and make the intention of the API more explicit.
    In addition, memzero_explicit() is used to clear the memory to make sure
    that it won't get optimized away by the compiler.
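
    For reference, a minimal sketch of what such a helper can look like
    (assuming ksize() reports the usable size of the allocation; not
    necessarily the exact mainline code):

    void kfree_sensitive(const void *p)
    {
            size_t ks;
            void *mem = (void *)p;

            ks = ksize(mem);
            if (ks)
                    memzero_explicit(mem, ks); /* cannot be optimized away */
            kfree(mem);
    }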

    The renaming is done by using the command sequence:

    git grep -w --name-only kzfree |\
    xargs sed -i 's/kzfree/kfree_sensitive/'

    followed by some editing of the kfree_sensitive() kerneldoc and adding
    a kzfree backward compatibility macro in slab.h.

    [akpm@linux-foundation.org: fs/crypto/inline_crypt.c needs linux/slab.h]
    [akpm@linux-foundation.org: fix fs/crypto/inline_crypt.c some more]

    Suggested-by: Joe Perches
    Signed-off-by: Waiman Long
    Signed-off-by: Andrew Morton
    Acked-by: David Howells
    Acked-by: Michal Hocko
    Acked-by: Johannes Weiner
    Cc: Jarkko Sakkinen
    Cc: James Morris
    Cc: "Serge E. Hallyn"
    Cc: Joe Perches
    Cc: Matthew Wilcox
    Cc: David Rientjes
    Cc: Dan Carpenter
    Cc: "Jason A . Donenfeld"
    Link: http://lkml.kernel.org/r/20200616154311.12314-3-longman@redhat.com
    Signed-off-by: Linus Torvalds

    Waiman Long
     

09 Jan, 2020

6 commits

  • All instances need to have a ->free() method, but people could forget to
    set it and then not notice if the instance is never unregistered. To
    help detect this bug earlier, don't allow an instance without a ->free()
    method to be registered, and complain loudly if someone tries to do it.
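
    A minimal sketch of the idea (illustrative, not the literal diff):

    /* in crypto_register_instance(), before anything else: */
    if (WARN_ON(!inst->alg.cra_type->free))
            return -EINVAL; /* refuse instances that cannot be freed */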

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Now that all templates provide a ->create() method which creates an
    instance, installs a strongly-typed ->free() method directly to it, and
    registers it, the older ->alloc() and ->free() methods in
    'struct crypto_template' are no longer used. Remove them.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Add support to shash and ahash for the new way of freeing instances
    (already used for skcipher, aead, and akcipher) where a ->free() method
    is installed to the instance struct itself. These methods are more
    strongly-typed than crypto_template::free(), which they replace.

    This will allow removing support for the old way of freeing instances.
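
    The shape of the change, roughly (sketch; example_free() is a
    hypothetical template's free routine, shown for illustration):

    struct shash_instance {
            void (*free)(struct shash_instance *inst); /* strongly typed */
            struct shash_alg alg;
    };

    static void example_free(struct shash_instance *inst)
    {
            crypto_drop_shash(shash_instance_ctx(inst));
            kfree(inst);
    }

    /* the template's ->create() installs it before registering: */
    inst->free = example_free;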

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Now that all the templates that need ahash spawns have been converted to
    use crypto_grab_ahash() rather than look up the algorithm directly,
    crypto_ahash_type is no longer used outside of ahash.c. Make it static.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Remove lots of helper functions that were previously used for
    instantiating crypto templates, but are now unused:

    - crypto_get_attr_alg() and similar functions looked up an inner
      algorithm directly from a template parameter. These were replaced
      with getting the algorithm's name, then calling crypto_grab_*().

    - crypto_init_spawn2() and similar functions initialized a spawn, given
      an algorithm. Similarly, these were replaced with crypto_grab_*().

    - crypto_alloc_instance() and similar functions allocated an instance
      with a single spawn, given the inner algorithm. These aren't useful
      anymore since crypto_grab_*() needs the instance allocated first.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Currently, ahash spawns are initialized by using ahash_attr_alg() or
    crypto_find_alg() to look up the ahash algorithm, then calling
    crypto_init_ahash_spawn().

    This is different from how skcipher, aead, and akcipher spawns are
    initialized (they use crypto_grab_*()), and for no good reason. This
    difference introduces unnecessary complexity.

    The crypto_grab_*() functions used to have some problems, like not
    holding a reference to the algorithm and requiring the caller to
    initialize spawn->base.inst. But those problems are fixed now.

    So, let's introduce crypto_grab_ahash() so that we can convert all
    templates to the same way of initializing their spawns.
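
    A template's ->create() can then grab its inner ahash the same way as
    the other spawn types, roughly (sketch):

    struct crypto_ahash_spawn *spawn = ahash_instance_ctx(inst);
    int err;

    err = crypto_grab_ahash(spawn, ahash_crypto_instance(inst),
                            crypto_attr_alg_name(tb[1]), 0, 0);
    if (err)
            goto err_free_inst;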

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

20 Dec, 2019

1 commit

  • Some of the algorithm unregistration functions return -ENOENT when asked
    to unregister a non-registered algorithm, while others always return 0
    or always return void. But no users check the return value, except for
    two of the bulk unregistration functions which print a message on error
    but still always return 0 to their caller, and crypto_del_alg() which
    calls crypto_unregister_instance() which always returns 0.

    Since unregistering a non-registered algorithm is always a kernel bug
    but there isn't anything callers should do to handle this situation at
    runtime, let's simplify things by making all the unregistration
    functions return void, and moving the error message into
    crypto_unregister_alg() and upgrading it to a WARN().
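
    The resulting shape is roughly (sketch):

    void crypto_unregister_alg(struct crypto_alg *alg)
    {
            int ret;
            LIST_HEAD(list);

            down_write(&crypto_alg_sem);
            ret = crypto_remove_alg(alg, &list);
            up_write(&crypto_alg_sem);

            /* unregistering something never registered is a kernel bug */
            if (WARN(ret, "Algorithm %s is not registered",
                     alg->cra_driver_name))
                    return;

            crypto_remove_final(&list);
    }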

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

31 May, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 3029 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

08 Feb, 2019

1 commit

  • Hash algorithms with an alignmask set, e.g. "xcbc(aes-aesni)" and
    "michael_mic", fail the improved hash tests because they sometimes
    produce the wrong digest. The bug is that in the case where a
    scatterlist element crosses pages, not all the data is actually hashed
    because the scatterlist walk terminates too early. This happens because
    the 'nbytes' variable in crypto_hash_walk_done() is assigned the number
    of bytes remaining in the page, then later interpreted as the number of
    bytes remaining in the scatterlist element. Fix it.
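
    The conflation, illustrated (hypothetical names, not the mainline diff):

    /* bytes hashable before the end of the current page ...           */
    unsigned int in_page  = PAGE_SIZE - offset_in_page(walk->offset);
    /* ... is not the same as bytes left in the scatterlist element    */
    unsigned int in_entry = walk->entrylen;
    /* the walk must continue until in_entry reaches 0, not in_page    */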

    Fixes: 900a081f6912 ("crypto: ahash - Fix early termination in hash walk")
    Cc: stable@vger.kernel.org
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

18 Jan, 2019

1 commit

  • Some algorithms have a ->setkey() method that is not atomic, in the
    sense that setting a key can fail after changes were already made to the
    tfm context. In this case, if a key was already set the tfm can end up
    in a state that corresponds to neither the old key nor the new key.

    It's not feasible to make all ->setkey() methods atomic, especially ones
    that have to key multiple sub-tfms. Therefore, make the crypto API set
    CRYPTO_TFM_NEED_KEY if ->setkey() fails and the algorithm requires a
    key, to prevent the tfm from being used until a new key is set.

    Note: we can't set CRYPTO_TFM_NEED_KEY for OPTIONAL_KEY algorithms, so
    ->setkey() for those must nevertheless be atomic. That's fine for now
    since only the crc32 and crc32c algorithms set OPTIONAL_KEY, and it's
    not intended that OPTIONAL_KEY be used much.
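
    In ahash terms the flag handling ends up roughly like this (simplified
    sketch; ahash_set_needkey() re-arms the flag for keyed algorithms):

    int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
                            unsigned int keylen)
    {
            int err = tfm->setkey(tfm, key, keylen);

            if (unlikely(err)) {
                    ahash_set_needkey(tfm); /* unusable until re-keyed */
                    return err;
            }

            crypto_ahash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
            return 0;
    }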

    [Cc stable mainly because when introducing the NEED_KEY flag I changed
    AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG
    previously didn't have this problem. So these "incompletely keyed"
    states became theoretically accessible via AF_ALG -- though, the
    opportunities for causing real mischief seem pretty limited.]

    Fixes: 9fa68f620041 ("crypto: hash - prevent using keyed hashes without setting key")
    Cc: stable@vger.kernel.org
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

07 Dec, 2018

1 commit

    All crypto_stats functions use the struct xxx_request for feeding stats,
    but in some cases this structure could already be freed.

    To fix this, the needed parameters (len and alg) are now stored
    before the request is executed.
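
    With the parameters saved up front, crypto_ahash_update() ends up
    looking roughly like this (simplified sketch):

    int crypto_ahash_update(struct ahash_request *req)
    {
            struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
            struct crypto_alg *alg = tfm->base.__crt_alg;
            unsigned int nbytes = req->nbytes; /* req may be freed below */
            int ret;

            crypto_stats_get(alg);
            ret = tfm->update(req);
            crypto_stats_ahash_update(nbytes, ret, alg); /* saved values */
            return ret;
    }
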
    Fixes: cac5818c25d0 ("crypto: user - Implement a generic crypto statistics")
    Reported-by: syzbot

    Signed-off-by: Corentin Labbe
    Signed-off-by: Herbert Xu

    Corentin Labbe
     

09 Nov, 2018

1 commit

  • There have been a pretty ridiculous number of issues with initializing
    the report structures that are copied to userspace by NETLINK_CRYPTO.
    Commit 4473710df1f8 ("crypto: user - Prepare for CRYPTO_MAX_ALG_NAME
    expansion") replaced some strncpy()s with strlcpy()s, thereby
    introducing information leaks. Later two other people tried to replace
    other strncpy()s with strlcpy() too, which would have introduced even
    more information leaks:

    - https://lore.kernel.org/patchwork/patch/954991/
    - https://patchwork.kernel.org/patch/10434351/

    Commit cac5818c25d0 ("crypto: user - Implement a generic crypto
    statistics") also uses the buggy strlcpy() approach and therefore leaks
    uninitialized memory to userspace. A fix was proposed, but it was
    originally incomplete.

    Seeing as how apparently no one can get this right with the current
    approach, change all the reporting functions to:

    - Start by memsetting the report structure to 0. This guarantees it's
      always initialized, regardless of what happens later.
    - Initialize all strings using strscpy(). This is safe after the
      memset, ensures null termination of long strings, avoids unnecessary
      work, and avoids the -Wstringop-truncation warnings from gcc.
    - Use sizeof(var) instead of sizeof(type). This is more robust against
      copy+paste errors.

    For simplicity, also reuse the -EMSGSIZE return value from nla_put().
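
    Applied to the ahash report function, the pattern looks roughly like:

    static int crypto_ahash_report(struct sk_buff *skb, struct crypto_alg *alg)
    {
            struct crypto_report_hash rhash;

            memset(&rhash, 0, sizeof(rhash));

            strscpy(rhash.type, "ahash", sizeof(rhash.type));

            rhash.blocksize = alg->cra_blocksize;
            rhash.digestsize = __crypto_hash_alg_common(alg)->digestsize;

            return nla_put(skb, CRYPTOCFGA_REPORT_HASH, sizeof(rhash), &rhash);
    }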

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

04 Sep, 2018

1 commit

  • In the quest to remove all stack VLA usage from the kernel[1], this
    removes the VLAs in SHASH_DESC_ON_STACK (via crypto_shash_descsize())
    by using the maximum allowable size (which is now more clearly captured
    in a macro), along with a few other cases. Similar limits are turned into
    macros as well.

    A review of existing sizes shows that SHA512_DIGEST_SIZE (64) is the
    largest digest size and that sizeof(struct sha3_state) (360) is the
    largest descriptor size. The corresponding maximums are reduced.

    [1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
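
    The resulting construct, roughly (sketch using the limits named above):

    #define HASH_MAX_DIGESTSIZE  64  /* SHA512_DIGEST_SIZE             */
    #define HASH_MAX_DESCSIZE    360 /* sizeof(struct sha3_state)      */

    #define SHASH_DESC_ON_STACK(shash, ctx)                               \
            char __##shash##_desc[sizeof(struct shash_desc) +             \
                                  HASH_MAX_DESCSIZE] CRYPTO_MINALIGN_ATTR; \
            struct shash_desc *shash = (struct shash_desc *)__##shash##_desc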

    Signed-off-by: Kees Cook
    Signed-off-by: Herbert Xu

    Kees Cook
     

31 Mar, 2018

1 commit

  • When we have an unaligned SG list entry where there is no leftover
    aligned data, the hash walk code will incorrectly return zero as if
    the entire SG list has been processed.

    This patch fixes it by moving onto the next page instead.

    Reported-by: Eli Cooper
    Cc:
    Signed-off-by: Herbert Xu

    Herbert Xu
     

12 Jan, 2018

2 commits

  • Currently, almost none of the keyed hash algorithms check whether a key
    has been set before proceeding. Some algorithms are okay with this and
    will effectively just use a key of all 0's or some other bogus default.
    However, others will severely break, as demonstrated using
    "hmac(sha3-512-generic)", the unkeyed use of which causes a kernel crash
    via a (potentially exploitable) stack buffer overflow.

    A while ago, this problem was solved for AF_ALG by pairing each hash
    transform with a 'has_key' bool. However, there are still other places
    in the kernel where userspace can specify an arbitrary hash algorithm by
    name, and the kernel uses it as unkeyed hash without checking whether it
    is really unkeyed. Examples of this include:

    - KEYCTL_DH_COMPUTE, via the KDF extension
    - dm-verity
    - dm-crypt, via the ESSIV support
    - dm-integrity, via the "internal hash" mode with no key given
    - drbd (Distributed Replicated Block Device)

    This bug is especially bad for KEYCTL_DH_COMPUTE as that requires no
    privileges to call.

    Fix the bug for all users by adding a flag CRYPTO_TFM_NEED_KEY to the
    ->crt_flags of each hash transform that indicates whether the transform
    still needs to be keyed or not. Then, make the hash init, import, and
    digest functions return -ENOKEY if the key is still needed.

    The new flag also replaces the 'has_key' bool which algif_hash was
    previously using, thereby simplifying the algif_hash implementation.
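
    The check on the entry points is then simple (simplified sketch, shash
    shown; ahash is analogous):

    int crypto_shash_digest(struct shash_desc *desc, const u8 *data,
                            unsigned int len, u8 *out)
    {
            struct crypto_shash *tfm = desc->tfm;

            if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
                    return -ENOKEY; /* keyed algorithm, key not set yet */

            return crypto_shash_alg(tfm)->digest(desc, data, len, out);
    }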

    Reported-by: syzbot
    Cc: stable@vger.kernel.org
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Templates that use an shash spawn can use crypto_shash_alg_has_setkey()
    to determine whether the underlying algorithm requires a key or not.
    But there was no corresponding function for ahash spawns. Add it.

    Note that the new function actually has to support both shash and ahash
    algorithms, since the ahash API can be used with either.
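
    The new helper therefore has to look through the ahash wrapper,
    roughly (sketch):

    bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg)
    {
            struct crypto_alg *alg = &halg->base;

            /* an "ahash" may really be an shash wrapped by the ahash API */
            if (alg->cra_type != &crypto_ahash_type)
                    return crypto_shash_alg_has_setkey(__crypto_shash_alg(alg));

            return __crypto_ahash_alg(alg)->setkey != ahash_nosetkey;
    }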

    Cc: stable@vger.kernel.org
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

10 Apr, 2017

1 commit

  • The ahash API modifies the request's callback function in order
    to clean up after itself in some corner cases (unaligned final
    and missing finup).

    When the request is complete ahash will restore the original
    callback and everything is fine. However, when the request gets
    an EBUSY on a full queue, an EINPROGRESS callback is made while
    the request is still ongoing.

    In this case the ahash API will incorrectly call its own callback.

    This patch fixes the problem by creating a temporary request
    object on the stack which is used to relay EINPROGRESS back to
    the original completion function.

    This patch also adds code to preserve the original flags value.
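
    The relay looks roughly like this (sketch):

    static void ahash_notify_einprogress(struct ahash_request *req)
    {
            struct ahash_request_priv *priv = req->priv;
            struct crypto_async_request oreq; /* temporary, on the stack */

            oreq.data = priv->data;           /* original callback data  */

            priv->complete(&oreq, -EINPROGRESS); /* original completion  */
    }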

    Fixes: ab6bf4e5e5e4 ("crypto: hash - Fix the pointer voodoo in...")
    Cc:
    Reported-by: Sabrina Dubroca
    Tested-by: Sabrina Dubroca
    Signed-off-by: Herbert Xu

    Herbert Xu
     

13 Jan, 2017

1 commit

  • Continuing from this commit: 52f5684c8e1e
    ("kernel: use macros from compiler.h instead of __attribute__((...))")

    I submitted 4 total patches. They are part of a task I've taken up to
    increase compiler portability in the kernel. I've cleaned up the
    subsystems under /kernel, /mm, /block and /security; this patch targets
    /crypto.

    There is <linux/compiler.h> which provides macros for various gcc specific
    constructs. Eg: __weak for __attribute__((weak)). I've cleaned all
    instances of gcc specific attributes with the right macros for the crypto
    subsystem.

    I had to make one additional change to compiler-gcc.h for the case when
    one wants to use __attribute__((aligned)) and not specify an alignment
    factor. From the gcc docs, this will result in the largest alignment for
    that data type on the target machine so I've named the macro
    __aligned_largest. Please advise if another name is more appropriate.

    Signed-off-by: Gideon Israel Dsouza
    Signed-off-by: Herbert Xu

    Gideon Israel Dsouza
     

13 Oct, 2015

1 commit

  • Unlike shash algorithms, ahash drivers must implement export
    and import as their descriptors may contain hardware state and
    cannot be exported as is. Unfortunately some ahash drivers did
    not provide them and ended up causing crashes with algif_hash.

    This patch adds a check to prevent these drivers from registering
    ahash algorithms until they are fixed.
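
    Drivers without export/import also leave statesize at 0, so the check
    can live in ahash_prepare_alg(), roughly (sketch):

    static int ahash_prepare_alg(struct ahash_alg *alg)
    {
            /* refuse drivers that declare no exportable state */
            if (alg->halg.digestsize > PAGE_SIZE / 8 ||
                alg->halg.statesize > PAGE_SIZE / 8 ||
                alg->halg.statesize == 0)
                    return -EINVAL;
            ...
    }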

    Cc: stable@vger.kernel.org
    Signed-off-by: Russell King
    Signed-off-by: Herbert Xu

    Russell King
     

21 May, 2014

1 commit

  • Although the existing hash walk interface has already been used
    by a number of ahash crypto drivers, it turns out that none of
    them were really asynchronous. They were all essentially polling
    for completion.

    That's why nobody has noticed until now that the walk interface
    couldn't work with a real asynchronous driver since the memory
    is mapped using kmap_atomic.

    As we now have a use-case for a real ahash implementation on x86,
    this patch creates a minimal ahash walk interface. Basically it
    just calls kmap instead of kmap_atomic and does away with the
    crypto_yield call. Real ahash crypto drivers don't need to yield
    since by definition they won't be hogging the CPU.
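
    The resulting distinction in the walk code is roughly (sketch):

    static int hash_walk_next(struct crypto_hash_walk *walk)
    {
            ...
            if (walk->flags & CRYPTO_ALG_ASYNC)
                    walk->data = kmap(walk->pg);        /* persistent map */
            else
                    walk->data = kmap_atomic(walk->pg); /* CPU-local      */
            ...
    }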

    Signed-off-by: Herbert Xu

    Herbert Xu
     

21 Mar, 2014

3 commits

  • The ahash_def_finup() can make use of the request save/restore functions,
    thus make it so. This simplifies the code a little and unifies the code
    paths.

    Note that the same remark about free()ing the req->priv applies here: the
    req->priv can only be free()'d after the original request was restored.

    Finally, squash a bug in the invocation of completion in the ASYNC path.
    In both ahash_def_finup_done{1,2}, the function areq->base.complete(X, err);
    was called with X=areq->base.data. This is incorrect, as X=&areq->base
    is the correct value. By analysis of the data structures, we see that areq is
    of type 'struct ahash_request', areq->base is of type 'struct crypto_async_request',
    and areq->base.complete is of type crypto_completion_t, which is defined in
    include/linux/crypto.h as:

    typedef void (*crypto_completion_t)(struct crypto_async_request *req, int err);

    This is one lead that X should be &areq->base. Next up, we can inspect
    other code which calls the completion callback to get a rough statistical
    idea of how this callback is used. We can try:

    $ git grep base\.complete\( drivers/crypto/

    Finally, by inspecting the ahash_request_set_callback() implementation defined
    in include/crypto/hash.h, we observe that the .data entry of 'struct
    crypto_async_request' is intended for arbitrary data, not for the completion
    argument.
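
    In short:

    /* before (wrong): passes the callback's private data as the request */
    areq->base.complete(areq->base.data, err);

    /* after (correct): passes the async request itself */
    areq->base.complete(&areq->base, err);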

    Signed-off-by: Marek Vasut
    Cc: David S. Miller
    Cc: Fabio Estevam
    Cc: Herbert Xu
    Cc: Shawn Guo
    Cc: Tom Lendacky
    Signed-off-by: Herbert Xu

    Marek Vasut
     
    The function to save the original request within a newly adjusted request,
    and its counterpart to restore the original request, can be re-used by
    more code in the crypto/ahash.c file. Pull these functions out from the
    code so they're available.
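
    The restore side, roughly (simplified sketch; ahash_save_req() is its
    counterpart that allocates priv and stashes these fields):

    static void ahash_restore_req(struct ahash_request *req)
    {
            struct ahash_request_priv *priv = req->priv;

            /* Restore the original request fields. */
            req->result = priv->result;
            req->base.complete = priv->complete;
            req->base.data = priv->data;
            req->priv = NULL;

            /* Free priv only now, after the restore above. */
            kzfree(priv);
    }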

    Signed-off-by: Marek Vasut
    Cc: David S. Miller
    Cc: Fabio Estevam
    Cc: Herbert Xu
    Cc: Shawn Guo
    Cc: Tom Lendacky
    Signed-off-by: Herbert Xu

    Marek Vasut
     
  • Add documentation for the pointer voodoo that is happening in crypto/ahash.c
    in ahash_op_unaligned(). This code is quite confusing, so add a beefy chunk
    of documentation.

    Moreover, make sure the mangled request is completely restored after finishing
    this unaligned operation. This means restoring all of .result, .base.data
    and .base.complete .

    Also, remove the crypto_completion_t complete = ... line present in the
    ahash_op_unaligned_done() function. This type actually declares a function
    pointer, which is very confusing.

    Finally, yet very important nonetheless, make sure the req->priv is free()'d
    only after the original request is restored in ahash_op_unaligned_done().
    The req->priv data must not be free()'d before that in ahash_op_unaligned_finish(),
    since we would be accessing previously free()'d data in ahash_op_unaligned_done()
    and cause corruption.
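
    Conceptually, the completion path must therefore run in this order
    (illustrative sketch, not the literal diff):

    ahash_op_unaligned_finish(areq, err);  /* copy digest, restore the
                                              request, then free req->priv */
    areq->base.complete(&areq->base, err); /* original, restored callback  */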

    Signed-off-by: Marek Vasut
    Cc: David S. Miller
    Cc: Fabio Estevam
    Cc: Herbert Xu
    Cc: Shawn Guo
    Cc: Tom Lendacky
    Signed-off-by: Herbert Xu

    Marek Vasut
     

05 Jan, 2014

1 commit

  • When finishing the ahash request, the ahash_op_unaligned_done() will
    call complete() on the request. Yet, this will not call the correct
    complete callback. The correct complete callback was previously stored
    in the requests' private data, as seen in ahash_op_unaligned(). This
    patch restores the correct complete callback and .data field of the
    request before calling complete() on it.

    Signed-off-by: Marek Vasut
    Cc: David S. Miller
    Cc: Fabio Estevam
    Cc: Shawn Guo
    Signed-off-by: Herbert Xu

    Marek Vasut
     

19 Feb, 2013

1 commit

  • Three errors resulting in kernel memory disclosure:

    1/ The structures used for the netlink based crypto algorithm report API
    are located on the stack. As snprintf() does not fill the remainder of
    the buffer with null bytes, those stack bytes will be disclosed to users
    of the API. Switch to strncpy() to fix this.

    2/ crypto_report_one() does not initialize all fields of struct
    crypto_user_alg. Fix this to close the heap info leak.

    3/ For the module name we should copy only as many bytes as
    module_name() returns -- not as much as the destination buffer could
    hold. But the current code does not and therefore copies random data
    from behind the end of the module name, as the module name is always
    shorter than CRYPTO_MAX_ALG_NAME.

    Also switch to use strncpy() to copy the algorithm's name and
    driver_name. They are strings, after all.
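
    The pattern of the fix, using the cipher report as an example (sketch):

    /* before: snprintf() leaves trailing stack garbage after the NUL */
    snprintf(rcipher.type, CRYPTO_MAX_ALG_NAME, "%s", "cipher");

    /* after: strncpy() zero-pads the remainder of the fixed-size field */
    strncpy(rcipher.type, "cipher", sizeof(rcipher.type));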

    Signed-off-by: Mathias Krause
    Cc: Steffen Klassert
    Signed-off-by: Herbert Xu

    Mathias Krause