24 Mar, 2019

2 commits

  • commit ba7d7433a0e998c902132bd47330e355a1eaa894 upstream.

    Some algorithms have a ->setkey() method that is not atomic, in the
    sense that setting a key can fail after changes were already made to the
    tfm context. In this case, if a key was already set, the tfm can end up
    in a state that corresponds to neither the old key nor the new key.

    It's not feasible to make all ->setkey() methods atomic, especially ones
    that have to key multiple sub-tfms. Therefore, make the crypto API set
    CRYPTO_TFM_NEED_KEY if ->setkey() fails and the algorithm requires a
    key, to prevent the tfm from being used until a new key is set.

    Note: we can't set CRYPTO_TFM_NEED_KEY for OPTIONAL_KEY algorithms, so
    ->setkey() for those must nevertheless be atomic. That's fine for now
    since only the crc32 and crc32c algorithms set OPTIONAL_KEY, and it's
    not intended that OPTIONAL_KEY be used much.

    [Cc stable mainly because when introducing the NEED_KEY flag I changed
    AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG
    previously didn't have this problem. So these "incompletely keyed"
    states became theoretically accessible via AF_ALG -- though, the
    opportunities for causing real mischief seem pretty limited.]
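
    A minimal sketch of the resulting logic, assuming a simplified shape
    of crypto_ahash_setkey() (alignment handling and the OPTIONAL_KEY
    exemption omitted):

    int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
                            unsigned int keylen)
    {
            int err = tfm->setkey(tfm, key, keylen);

            if (unlikely(err)) {
                    /* ->setkey() may have left a half-updated context
                     * behind; refuse all use until a new key succeeds. */
                    crypto_ahash_set_flags(tfm, CRYPTO_TFM_NEED_KEY);
                    return err;
            }

            crypto_ahash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
            return 0;
    }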

    Fixes: 9fa68f620041 ("crypto: hash - prevent using keyed hashes without setting key")
    Cc: stable@vger.kernel.org
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     
  • commit 77568e535af7c4f97eaef1e555bf0af83772456c upstream.

    Hash algorithms with an alignmask set, e.g. "xcbc(aes-aesni)" and
    "michael_mic", fail the improved hash tests because they sometimes
    produce the wrong digest. The bug is that in the case where a
    scatterlist element crosses pages, not all the data is actually hashed
    because the scatterlist walk terminates too early. This happens because
    the 'nbytes' variable in crypto_hash_walk_done() is assigned the number
    of bytes remaining in the page, then later interpreted as the number of
    bytes remaining in the scatterlist element. Fix it.
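
    The essence of the fix, as a hedged sketch (names as in
    crypto/ahash.c): keep the per-page step count separate from the
    per-entry count instead of reusing one variable for both:

    /* nbytes: how much to hash before the current page ends;
     * walk->entrylen: how much is left in the scatterlist entry. */
    unsigned int nbytes = min(walk->entrylen,
                              (unsigned int)(PAGE_SIZE - walk->offset));

    walk->entrylen -= nbytes;       /* per-entry bookkeeping  */
    return nbytes;                  /* size of this walk step */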

    Fixes: 900a081f6912 ("crypto: ahash - Fix early termination in hash walk")
    Cc: stable@vger.kernel.org
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     

31 Mar, 2018

1 commit

  • When we have an unaligned SG list entry where there is no leftover
    aligned data, the hash walk code will incorrectly return zero as if
    the entire SG list has been processed.

    This patch fixes it by moving onto the next page instead.
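
    A hedged sketch of the idea, using the walk helpers from
    crypto/ahash.c: when the aligned portion of the current page turns
    out to be empty, advance to the next page rather than returning zero:

    if (!nbytes) {
            /* No aligned data left in this page: don't signal
             * completion, move on to the next page instead. */
            walk->offset = 0;
            walk->pg++;
            return hash_walk_next(walk);
    }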

    Reported-by: Eli Cooper
    Cc:
    Signed-off-by: Herbert Xu

    Herbert Xu
     

15 Feb, 2018

1 commit


12 Jan, 2018

2 commits

  • Currently, almost none of the keyed hash algorithms check whether a key
    has been set before proceeding. Some algorithms are okay with this and
    will effectively just use a key of all 0's or some other bogus default.
    However, others will severely break, as demonstrated using
    "hmac(sha3-512-generic)", the unkeyed use of which causes a kernel crash
    via a (potentially exploitable) stack buffer overflow.

    A while ago, this problem was solved for AF_ALG by pairing each hash
    transform with a 'has_key' bool. However, there are still other places
    in the kernel where userspace can specify an arbitrary hash algorithm by
    name, and the kernel uses it as an unkeyed hash without checking whether it
    is really unkeyed. Examples of this include:

    - KEYCTL_DH_COMPUTE, via the KDF extension
    - dm-verity
    - dm-crypt, via the ESSIV support
    - dm-integrity, via the "internal hash" mode with no key given
    - drbd (Distributed Replicated Block Device)

    This bug is especially bad for KEYCTL_DH_COMPUTE as that requires no
    privileges to call.

    Fix the bug for all users by adding a flag CRYPTO_TFM_NEED_KEY to the
    ->crt_flags of each hash transform that indicates whether the transform
    still needs to be keyed or not. Then, make the hash init, import, and
    digest functions return -ENOKEY if the key is still needed.

    The new flag also replaces the 'has_key' bool which algif_hash was
    previously using, thereby simplifying the algif_hash implementation.
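
    A sketch of the resulting check for one of the ahash entry points
    (simplified; the shash and digest paths are analogous):

    static inline int crypto_ahash_init(struct ahash_request *req)
    {
            struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);

            if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
                    return -ENOKEY; /* keyed algorithm, no key set yet */

            return tfm->init(req);
    }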

    Reported-by: syzbot
    Cc: stable@vger.kernel.org
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Templates that use an shash spawn can use crypto_shash_alg_has_setkey()
    to determine whether the underlying algorithm requires a key or not.
    But there was no corresponding function for ahash spawns. Add it.

    Note that the new function actually has to support both shash and ahash
    algorithms, since the ahash API can be used with either.
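
    The added helper looks roughly like this (per crypto/ahash.c; note
    the fallback to the shash check, since an shash algorithm may sit
    behind the ahash API):

    bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg)
    {
            struct crypto_alg *alg = &halg->base;

            /* An shash algorithm exposed through the ahash API. */
            if (alg->cra_type != &crypto_ahash_type)
                    return crypto_shash_alg_has_setkey(__crypto_shash_alg(alg));

            return __crypto_ahash_alg(alg)->setkey != ahash_nosetkey;
    }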

    Cc: stable@vger.kernel.org
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

03 Nov, 2017

1 commit


22 Aug, 2017

1 commit


10 Apr, 2017

1 commit

  • The ahash API modifies the request's callback function in order
    to clean up after itself in some corner cases (unaligned final
    and missing finup).

    When the request is complete, ahash will restore the original
    callback and everything is fine. However, when the request gets
    an EBUSY on a full queue, an EINPROGRESS callback is made while
    the request is still ongoing.

    In this case the ahash API will incorrectly call its own callback.

    This patch fixes the problem by creating a temporary request
    object on the stack which is used to relay EINPROGRESS back to
    the original completion function.
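
    A sketch of the relay, lightly simplified from the fix: a temporary
    crypto_async_request on the stack carries the caller's original
    context, so the caller's completion function sees the data it
    expects:

    static void ahash_notify_einprogress(struct ahash_request *req)
    {
            struct ahash_request_priv *priv = req->priv;
            struct crypto_async_request oreq;

            oreq.data = priv->data; /* the caller's original context */

            priv->complete(&oreq, -EINPROGRESS);
    }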

    This patch also adds code to preserve the original flags value.

    Fixes: ab6bf4e5e5e4 ("crypto: hash - Fix the pointer voodoo in...")
    Cc:
    Reported-by: Sabrina Dubroca
    Tested-by: Sabrina Dubroca
    Signed-off-by: Herbert Xu

    Herbert Xu
     

13 Jan, 2017

1 commit

  • Continuing from this commit: 52f5684c8e1e
    ("kernel: use macros from compiler.h instead of __attribute__((...))")

    I submitted 4 patches in total. They are part of a task I've taken up
    to increase compiler portability in the kernel. I've cleaned up the
    subsystems under /kernel, /mm, /block and /security; this patch
    targets /crypto.

    There is <linux/compiler.h>, which provides macros for various
    gcc-specific constructs, e.g. __weak for __attribute__((weak)). I've
    replaced all instances of gcc-specific attributes with the right
    macros for the crypto subsystem.

    I had to make one additional change in compiler-gcc.h for the case
    where one wants to use __attribute__((aligned)) without specifying an
    alignment factor. From the gcc docs, this results in the largest
    alignment for that data type on the target machine, so I've named the
    macro __aligned_largest. Please advise if another name is more
    appropriate.
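
    The macro in question, as added to include/linux/compiler-gcc.h:

    /* __attribute__((aligned)) with no factor gives the largest
     * useful alignment for the target machine. */
    #define __aligned_largest __attribute__((aligned))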

    Signed-off-by: Gideon Israel Dsouza
    Signed-off-by: Herbert Xu

    Gideon Israel Dsouza
     

01 Jul, 2016

1 commit


05 May, 2016

1 commit


06 Feb, 2016

1 commit


25 Jan, 2016

1 commit


18 Jan, 2016

1 commit


13 Oct, 2015

1 commit

  • Unlike shash algorithms, ahash drivers must implement export
    and import as their descriptors may contain hardware state and
    cannot be exported as is. Unfortunately some ahash drivers did not
    provide them, which ends up causing crashes with algif_hash.

    This patch adds a check to prevent these drivers from registering
    ahash algorithms until they are fixed.
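
    The check, roughly as added to ahash_prepare_alg() in crypto/ahash.c;
    a zero statesize is the tell-tale sign of a driver that never wired
    up export/import:

    if (alg->halg.digestsize > PAGE_SIZE / 8 ||
        alg->halg.statesize > PAGE_SIZE / 8 ||
        alg->halg.statesize == 0)
            return -EINVAL;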

    Cc: stable@vger.kernel.org
    Signed-off-by: Russell King
    Signed-off-by: Herbert Xu

    Russell King
     

26 Jan, 2015

1 commit


22 Dec, 2014

1 commit


25 Aug, 2014

1 commit


21 May, 2014

1 commit

  • Although the existing hash walk interface has already been used
    by a number of ahash crypto drivers, it turns out that none of
    them were really asynchronous. They were all essentially polling
    for completion.

    That's why nobody has noticed until now that the walk interface
    couldn't work with a real asynchronous driver since the memory
    is mapped using kmap_atomic.

    As we now have a use-case for a real ahash implementation on x86,
    this patch creates a minimal ahash walk interface. Basically it
    just calls kmap instead of kmap_atomic and does away with the
    crypto_yield call. Real ahash crypto drivers don't need to yield
    since by definition they won't be hogging the CPU.
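
    The core of the change, sketched (flag plumbing simplified): the walk
    is tagged as async, and the mapping primitive is chosen accordingly:

    if (walk->flags & CRYPTO_ALG_ASYNC)
            walk->data = kmap(walk->pg);            /* may sleep      */
    else
            walk->data = kmap_atomic(walk->pg);     /* must not sleep */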

    Signed-off-by: Herbert Xu

    Herbert Xu
     

21 Mar, 2014

3 commits

  • The ahash_def_finup() can make use of the request save/restore functions,
    thus make it so. This simplifies the code a little and unifies the code
    paths.

    Note that the same remark about free()ing the req->priv applies here:
    the req->priv can only be free()'d after the original request has been
    restored.

    Finally, squash a bug in the invocation of the completion callback in
    the ASYNC path. In both ahash_def_finup_done{1,2}, the function
    areq->base.complete(X, err); was called with X=areq->base.data. This
    is incorrect, as X=&areq->base is the correct value. By analysis of
    the data structures, we see that areq is of type 'struct
    ahash_request', areq->base is of type 'struct crypto_async_request',
    and areq->base.complete is of type crypto_completion_t, which is
    defined in include/linux/crypto.h as:

    typedef void (*crypto_completion_t)(struct crypto_async_request *req, int err);

    This is one clue that X should be &areq->base. Next, we can inspect
    other code that calls the completion callback, to get a rough
    statistical idea of how the callback is used. We can try:

    $ git grep base\.complete\( drivers/crypto/

    Finally, by inspecting the ahash_request_set_callback() implementation
    defined in include/crypto/hash.h, we observe that the .data entry of
    'struct crypto_async_request' is intended for arbitrary data, not for
    the completion argument.

    Signed-off-by: Marek Vasut
    Cc: David S. Miller
    Cc: Fabio Estevam
    Cc: Herbert Xu
    Cc: Shawn Guo
    Cc: Tom Lendacky
    Signed-off-by: Herbert Xu

    Marek Vasut
     
    The function to save the original request within a newly adjusted
    request, and its counterpart to restore the original request, can be
    re-used by more code in the crypto/ahash.c file. Pull these functions
    out so they're available.

    Signed-off-by: Marek Vasut
    Cc: David S. Miller
    Cc: Fabio Estevam
    Cc: Herbert Xu
    Cc: Shawn Guo
    Cc: Tom Lendacky
    Signed-off-by: Herbert Xu

    Marek Vasut
     
  • Add documentation for the pointer voodoo that is happening in crypto/ahash.c
    in ahash_op_unaligned(). This code is quite confusing, so add a beefy chunk
    of documentation.

    Moreover, make sure the mangled request is completely restored after
    finishing this unaligned operation. This means restoring all of
    .result, .base.data and .base.complete.

    Also, remove the crypto_completion_t complete = ... line present in the
    ahash_op_unaligned_done() function. This type actually declares a function
    pointer, which is very confusing.

    Finally, and importantly, make sure req->priv is free()'d only after
    the original request is restored in ahash_op_unaligned_done(). The
    req->priv data must not be free()'d earlier, in
    ahash_op_unaligned_finish(), since ahash_op_unaligned_done() would
    then access previously free()'d data and cause corruption.

    Signed-off-by: Marek Vasut
    Cc: David S. Miller
    Cc: Fabio Estevam
    Cc: Herbert Xu
    Cc: Shawn Guo
    Cc: Tom Lendacky
    Signed-off-by: Herbert Xu

    Marek Vasut
     

05 Jan, 2014

1 commit

  • When finishing the ahash request, the ahash_op_unaligned_done() will
    call complete() on the request. Yet, this will not call the correct
    complete callback. The correct complete callback was previously stored
    in the request's private data, as seen in ahash_op_unaligned(). This
    patch restores the correct complete callback and .data field of the
    request before calling complete() on it.

    Signed-off-by: Marek Vasut
    Cc: David S. Miller
    Cc: Fabio Estevam
    Cc: Shawn Guo
    Signed-off-by: Herbert Xu

    Marek Vasut
     

19 Feb, 2013

1 commit

  • Three errors resulting in kernel memory disclosure:

    1/ The structures used for the netlink based crypto algorithm report API
    are located on the stack. As snprintf() does not fill the remainder of
    the buffer with null bytes, those stack bytes will be disclosed to users
    of the API. Switch to strncpy() to fix this.

    2/ crypto_report_one() does not initialize all fields of struct
    crypto_user_alg. Fix this to plug the heap info leak.

    3/ For the module name we should copy only as many bytes as
    module_name() returns -- not as much as the destination buffer could
    hold. But the current code does not, and therefore copies random data
    from beyond the end of the module name, as the module name is always
    shorter than CRYPTO_MAX_ALG_NAME.

    Also switch to use strncpy() to copy the algorithm's name and
    driver_name. They are strings, after all.
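
    The pattern of the fix, roughly (field names as in the crypto netlink
    report code): snprintf() leaves the bytes after the terminating NUL
    untouched, while strncpy() zero-pads the whole fixed-size field:

    /* Before: leaks stack bytes past the NUL terminator. */
    snprintf(rcipher.type, sizeof(rcipher.type), "%s", "cipher");

    /* After: the unused tail of the field is zero-filled. */
    strncpy(rcipher.type, "cipher", sizeof(rcipher.type));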

    Signed-off-by: Mathias Krause
    Cc: Steffen Klassert
    Signed-off-by: Herbert Xu

    Mathias Krause
     

02 Apr, 2012

1 commit


20 Mar, 2012

1 commit


11 Nov, 2011

1 commit


21 Oct, 2011

1 commit


06 Aug, 2010

1 commit

  • If a scatterwalk chain contains an entry with an unaligned offset then
    hash_walk_next() will cut off the next step at the next alignment point.

    However, if the entry ends before the next alignment point then we get
    stuck in a loop, which leads to a kernel oops.

    Fix this by checking whether the next alignment point is before the
    end of the current entry.
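
    The fix in outline (per hash_walk_next() in crypto/ahash.c): nbytes
    is first clamped to the entry and the page, and only shortened to the
    alignment point when that point actually falls inside the step:

    if (offset & alignmask) {
            unsigned int unaligned = alignmask + 1 - (offset & alignmask);

            if (nbytes > unaligned)
                    nbytes = unaligned;
    }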

    Signed-off-by: Szilveszter Ördög
    Acked-by: David S. Miller
    Signed-off-by: Herbert Xu

    Szilveszter Ördög
     

03 Mar, 2010

1 commit

  • The correct way to calculate the start of the aligned part of an
    unaligned buffer is:

    offset = ALIGN(offset, alignmask + 1);

    However, crypto_hash_walk_done() has:

    offset += alignmask - 1;
    offset = ALIGN(offset, alignmask + 1);

    which actually skips a whole block unless offset % (alignmask + 1) == 1.
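
    For example, with alignmask = 3 (4-byte alignment) and offset = 8,
    the correct code yields ALIGN(8, 4) = 8, while the old code computed
    ALIGN(8 + 2, 4) = 12, skipping a whole aligned block.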

    This patch fixes the problem.

    Signed-off-by: Szilveszter Ördög
    Signed-off-by: Herbert Xu

    Szilveszter Ördög
     

24 Jul, 2009

1 commit


15 Jul, 2009

2 commits


14 Jul, 2009

5 commits


31 May, 2009

1 commit

  • A quirk that we've always supported is having an sg entry that's
    bigger than a page, or more generally an sg entry that crosses
    page boundaries. Even though it would be better to explicitly have
    two sg entries for this, we need to support it for the existing
    users, in particular IPsec.

    The new ahash sg walking code did try to handle this, but there was
    a bug where we didn't increment the page, so we kept on walking over
    the first page over and over again.

    This patch fixes it.
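
    The shape of the fix, sketched with the walk state names from
    crypto/ahash.c: when the entry still holds data after a page's worth
    has been hashed, advance the page before remapping:

    if (walk->entrylen) {
            walk->offset = 0;
            walk->pg++;     /* previously missing: move to the entry's
                             * next page instead of remapping the first */
            return hash_walk_next(walk);
    }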

    Tested-by: Martin Willi
    Signed-off-by: Herbert Xu

    Herbert Xu