20 Sep, 2018

1 commit

  • commit 6e36719fbe90213fbba9f50093fa2d4d69b0e93c upstream.

    My last bugfix added -Os on the command line, which unfortunately caused
    a build regression on powerpc in some configurations.

    I've done some more analysis of the original problem and found a
    slightly different workaround that avoids this regression and also
    results in better performance on gcc-7.0: -fcode-hoisting is an
    optimization step that got added in gcc-7 and that for all gcc-7
    versions causes worse performance here.

    This disables -fcode-hoisting on all compilers that understand the option.
    For gcc-7.1 and 7.2 I found the same performance as my previous patch
    (using -Os), in gcc-7.0 it was even better. On gcc-8 I could see no
    change in performance from this patch. In theory, code hoisting should
    not be able to make things better for the AES cipher, so leaving it
    disabled for gcc-8 only serves to simplify the Makefile change.

    Reported-by: kbuild test robot
    Link: https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg30418.html
    Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83356
    Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83651
    Fixes: 148b974deea9 ("crypto: aes-generic - build with -Os on gcc-7+")
    Signed-off-by: Arnd Bergmann
    Signed-off-by: Herbert Xu
    Cc: Horia Geanta
    Signed-off-by: Greg Kroah-Hartman

    Arnd Bergmann
     

10 Sep, 2018

1 commit

  • commit 817aef260037f33ee0f44c17fe341323d3aebd6d upstream.

    Use a symbol, rather than a magic number, to indicate that
    verify_*_signature() should use the secondary keyring.

    Signed-off-by: Yannik Sembritzki
    Signed-off-by: David Howells
    Cc: keyrings@vger.kernel.org
    Cc: linux-security-module@vger.kernel.org
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Yannik Sembritzki
     

18 Aug, 2018

6 commits

  • commit 8088d3dd4d7c6933a65aa169393b5d88d8065672 upstream.

    scatterwalk_done() is only meant to be called after a nonzero number of
    bytes have been processed, since scatterwalk_pagedone() will flush the
    dcache of the *previous* page. But in the error case of
    skcipher_walk_done(), e.g. if the input wasn't an integer number of
    blocks, scatterwalk_done() was actually called after advancing 0 bytes.
    This caused a crash ("BUG: unable to handle kernel paging request")
    during '!PageSlab(page)' on architectures like arm and arm64 that define
    ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
    page-aligned as in that case walk->offset == 0.

    Fix it by reorganizing skcipher_walk_done() to skip the
    scatterwalk_advance() and scatterwalk_done() if an error has occurred.

    This bug was found by syzkaller fuzzing.

    Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:

    #include <linux/if_alg.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main()
    {
        struct sockaddr_alg addr = {
            .salg_type = "skcipher",
            .salg_name = "cbc(aes-generic)",
        };
        char buffer[4096] __attribute__((aligned(4096))) = { 0 };
        int fd;

        fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
        bind(fd, (void *)&addr, sizeof(addr));
        setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
        fd = accept(fd, NULL, NULL);
        write(fd, buffer, 15);
        read(fd, buffer, 15);
    }

    Reported-by: Liu Chao
    Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
    Cc: stable@vger.kernel.org # v4.10+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     
  • commit 0567fc9e90b9b1c8dbce8a5468758e6206744d4a upstream.

    The ALIGN() macro needs to be passed the alignment, not the alignmask
    (which is the alignment minus 1).
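
    A rough userspace illustration of the distinction (the macro shape
    matches the kernel's ALIGN(); the example values are made up):

    #include <stdio.h>

    /* rounds x up to a multiple of a; a must be a power of two */
    #define ALIGN(x, a) (((x) + ((a) - 1)) & ~((a) - 1))

    int main(void)
    {
        unsigned long alignmask = 15;   /* alignment 16, mask 15 */

        /* correct: pass the alignment, i.e. alignmask + 1 */
        printf("%lu\n", ALIGN(97UL, alignmask + 1));    /* 112 */

        /* the bug: passing the mask itself yields an unaligned result */
        printf("%lu\n", ALIGN(97UL, alignmask));        /* 97, not 112 */
        return 0;
    }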

    Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
    Cc: stable@vger.kernel.org # v4.10+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     
  • commit 318abdfbe708aaaa652c79fb500e9bd60521f9dc upstream.

    Like the skcipher_walk and blkcipher_walk cases:

    scatterwalk_done() is only meant to be called after a nonzero number of
    bytes have been processed, since scatterwalk_pagedone() will flush the
    dcache of the *previous* page. But in the error case of
    ablkcipher_walk_done(), e.g. if the input wasn't an integer number of
    blocks, scatterwalk_done() was actually called after advancing 0 bytes.
    This caused a crash ("BUG: unable to handle kernel paging request")
    during '!PageSlab(page)' on architectures like arm and arm64 that define
    ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
    page-aligned as in that case walk->offset == 0.

    Fix it by reorganizing ablkcipher_walk_done() to skip the
    scatterwalk_advance() and scatterwalk_done() if an error has occurred.

    Reported-by: Liu Chao
    Fixes: bf06099db18a ("crypto: skcipher - Add ablkcipher_walk interfaces")
    Cc: stable@vger.kernel.org # v2.6.35+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     
  • commit 0868def3e4100591e7a1fdbf3eed1439cc8f7ca3 upstream.

    Like the skcipher_walk case:

    scatterwalk_done() is only meant to be called after a nonzero number of
    bytes have been processed, since scatterwalk_pagedone() will flush the
    dcache of the *previous* page. But in the error case of
    blkcipher_walk_done(), e.g. if the input wasn't an integer number of
    blocks, scatterwalk_done() was actually called after advancing 0 bytes.
    This caused a crash ("BUG: unable to handle kernel paging request")
    during '!PageSlab(page)' on architectures like arm and arm64 that define
    ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
    page-aligned as in that case walk->offset == 0.

    Fix it by reorganizing blkcipher_walk_done() to skip the
    scatterwalk_advance() and scatterwalk_done() if an error has occurred.

    This bug was found by syzkaller fuzzing.

    Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:

    #include <linux/if_alg.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main()
    {
        struct sockaddr_alg addr = {
            .salg_type = "skcipher",
            .salg_name = "ecb(aes-generic)",
        };
        char buffer[4096] __attribute__((aligned(4096))) = { 0 };
        int fd;

        fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
        bind(fd, (void *)&addr, sizeof(addr));
        setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
        fd = accept(fd, NULL, NULL);
        write(fd, buffer, 15);
        read(fd, buffer, 15);
    }

    Reported-by: Liu Chao
    Fixes: 5cde0af2a982 ("[CRYPTO] cipher: Added block cipher type")
    Cc: stable@vger.kernel.org # v2.6.19+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     
  • commit bb29648102335586e9a66289a1d98a0cb392b6e5 upstream.

    syzbot reported a crash in vmac_final() when multiple threads
    concurrently use the same "vmac(aes)" transform through AF_ALG. The bug
    is pretty fundamental: the VMAC template doesn't separate per-request
    state from per-tfm (per-key) state like the other hash algorithms do,
    but rather stores it all in the tfm context. That's wrong.

    Also, vmac_final() incorrectly zeroes most of the state including the
    derived keys and cached pseudorandom pad. Therefore, only the first
    VMAC invocation with a given key calculates the correct digest.

    Fix these bugs by splitting the per-tfm state from the per-request state
    and using the proper init/update/final sequencing for requests.
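
    A minimal userspace sketch of that split (hypothetical structs and
    stand-in arithmetic, not the vmac code): per-key state is written only
    by setkey() and read-only afterwards, while everything a single message
    mutates lives in a per-request context, so concurrent requests cannot
    clobber each other.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct tfm_ctx {                /* per-key: shared by all requests */
        uint64_t derived_key;
    };

    struct req_ctx {                /* per-request: one per message */
        uint64_t acc;
    };

    static void setkey(struct tfm_ctx *t, uint64_t key)
    {
        t->derived_key = key * 0x9e3779b97f4a7c15ULL;   /* stand-in KDF */
    }

    static void req_init(struct req_ctx *r)
    {
        memset(r, 0, sizeof(*r));
    }

    static void req_update(const struct tfm_ctx *t, struct req_ctx *r,
                           uint64_t block)
    {
        r->acc += block ^ t->derived_key;   /* reads tfm state, never writes */
    }

    static uint64_t req_final(const struct tfm_ctx *t, const struct req_ctx *r)
    {
        return r->acc ^ t->derived_key;     /* nothing in tfm ctx to zero */
    }

    int main(void)
    {
        struct tfm_ctx t;
        struct req_ctx a, b;

        setkey(&t, 42);
        req_init(&a);
        req_init(&b);                       /* two concurrent requests */
        req_update(&t, &a, 1);
        req_update(&t, &b, 2);              /* does not disturb request a */
        printf("%llu %llu\n", (unsigned long long)req_final(&t, &a),
               (unsigned long long)req_final(&t, &b));
        return 0;
    }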

    Reproducer for the crash:

    #include <linux/if_alg.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main()
    {
        int fd;
        struct sockaddr_alg addr = {
            .salg_type = "hash",
            .salg_name = "vmac(aes)",
        };
        char buf[256] = { 0 };

        fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
        bind(fd, (void *)&addr, sizeof(addr));
        setsockopt(fd, SOL_ALG, ALG_SET_KEY, buf, 16);
        fork();
        fd = accept(fd, NULL, NULL);
        for (;;)
            write(fd, buf, 256);
    }

    The immediate cause of the crash is that vmac_ctx_t.partial_size exceeds
    VMAC_NHBYTES, causing vmac_final() to memset() a negative length.

    Reported-by: syzbot+264bca3a6e8d645550d3@syzkaller.appspotmail.com
    Fixes: f1939f7c5645 ("crypto: vmac - New hash algorithm for intel_txt support")
    Cc: stable@vger.kernel.org # v2.6.32+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     
  • commit 73bf20ef3df262026c3470241ae4ac8196943ffa upstream.

    The VMAC template assumes the block cipher has a 128-bit block size, but
    it failed to check for that. Thus it was possible to instantiate it
    using a 64-bit block size cipher, e.g. "vmac(cast5)", causing
    uninitialized memory to be used.

    Add the needed check when instantiating the template.

    Fixes: f1939f7c5645 ("crypto: vmac - New hash algorithm for intel_txt support")
    Cc: stable@vger.kernel.org # v2.6.32+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     

03 Aug, 2018

2 commits

  • [ Upstream commit ad2fdcdf75d169e7a5aec6c7cb421c0bec8ec711 ]

    In crypto_authenc_setkey we save pointers to the authenc keys in
    a local variable of type struct crypto_authenc_keys and we don't
    zeroize it after use. Fix this and don't leak pointers to the
    authenc keys.
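
    The pattern, as a minimal userspace sketch (the wipe() helper stands in
    for the kernel's memzero_explicit(); the other names are hypothetical):

    #include <stddef.h>
    #include <string.h>

    struct authenc_keys_like {
        const unsigned char *authkey;
        const unsigned char *enckey;
        unsigned int authkeylen;
        unsigned int enckeylen;
    };

    /* stand-in for memzero_explicit(): the barrier keeps the compiler
     * from optimizing away a memset of a dying stack variable */
    static void wipe(void *p, size_t n)
    {
        memset(p, 0, n);
        __asm__ __volatile__("" : : "r" (p) : "memory");
    }

    static int setkey_like(const unsigned char *key, unsigned int keylen)
    {
        struct authenc_keys_like keys = { 0 };

        /* ... parse key/keylen into keys and program the sub-ciphers ... */
        (void)key;
        (void)keylen;

        wipe(&keys, sizeof(keys));  /* don't leak pointers to key material */
        return 0;
    }

    int main(void)
    {
        return setkey_like((const unsigned char *)"key", 3);
    }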

    Signed-off-by: Tudor Ambarus
    Signed-off-by: Herbert Xu
    Signed-off-by: Sasha Levin
    Signed-off-by: Greg Kroah-Hartman

    Tudor-Dan Ambarus
     
  • [ Upstream commit 31545df391d58a3bb60e29b1192644a6f2b5a8dd ]

    In crypto_authenc_esn_setkey we save pointers to the authenc keys
    in a local variable of type struct crypto_authenc_keys and we don't
    zeroize it after use. Fix this and don't leak pointers to the
    authenc keys.

    Signed-off-by: Tudor Ambarus
    Signed-off-by: Herbert Xu
    Signed-off-by: Sasha Levin
    Signed-off-by: Greg Kroah-Hartman

    Tudor-Dan Ambarus
     

22 Jul, 2018

1 commit

  • commit 2546da99212f22034aecf279da9c47cbfac6c981 upstream.

    The RX SGL in processing is already registered with the RX SGL tracking
    list to support proper cleanup. The cleanup code path uses the
    sg_num_bytes variable, which must therefore always be initialized, even
    in the error code path.
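
    The rule in miniature (hypothetical names, not the algif code): any
    variable the cleanup path reads must hold a sane value before the first
    early return.

    #include <errno.h>
    #include <stdio.h>

    struct rx_sgl_like {
        unsigned int sg_num_bytes;
    };

    static void cleanup(const struct rx_sgl_like *s)
    {
        /* the cleanup path always reads this field */
        printf("releasing %u bytes\n", s->sg_num_bytes);
    }

    static int process(struct rx_sgl_like *s, int fail)
    {
        s->sg_num_bytes = 0;        /* initialized before any early return */
        if (fail)
            return -EINVAL;         /* error path: cleanup still sees 0 */
        s->sg_num_bytes = 4096;
        return 0;
    }

    int main(void)
    {
        struct rx_sgl_like s;

        process(&s, 1);
        cleanup(&s);                /* safe: prints 0, not stack garbage */
        return 0;
    }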

    Signed-off-by: Stephan Mueller
    Reported-by: syzbot+9c251bdd09f83b92ba95@syzkaller.appspotmail.com
    #syz test: https://github.com/google/kmsan.git master
    CC: stable@vger.kernel.org #4.14
    Fixes: e870456d8e7c ("crypto: algif_skcipher - overhaul memory management")
    Fixes: d887c52d6ae4 ("crypto: algif_aead - overhaul memory management")
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Stephan Mueller
     

17 Jul, 2018

1 commit

  • commit b7b73cd5d74694ed59abcdb4974dacb4ff8b2a2a upstream.

    The x86 assembly implementations of Salsa20 use the frame base pointer
    register (%ebp or %rbp), which breaks frame pointer convention and
    breaks stack traces when unwinding from an interrupt in the crypto code.
    Recent (v4.10+) kernels will warn about this, e.g.

    WARNING: kernel stack regs at 00000000a8291e69 in syzkaller047086:4677 has bad 'bp' value 000000001077994c
    [...]

    But after looking into it, I believe there's very little reason to still
    retain the x86 Salsa20 code. First, these are *not* vectorized
    (SSE2/SSSE3/AVX2) implementations, which would be needed to get anywhere
    close to the best Salsa20 performance on any remotely modern x86
    processor; they're just regular x86 assembly. Second, it's still
    unclear that anyone is actually using the kernel's Salsa20 at all,
    especially given that now ChaCha20 is supported too, and with much more
    efficient SSSE3 and AVX2 implementations. Finally, in benchmarks I did
    on both Intel and AMD processors with both gcc 8.1.0 and gcc 4.9.4, the
    x86_64 salsa20-asm is actually slightly *slower* than salsa20-generic
    (~3% slower on Skylake, ~10% slower on Zen), while the i686 salsa20-asm
    is only slightly faster than salsa20-generic (~15% faster on Skylake,
    ~20% faster on Zen). The gcc version made little difference.

    So, the x86_64 salsa20-asm is pretty clearly useless. That leaves just
    the i686 salsa20-asm, which based on my tests provides a 15-20% speed
    boost. But that's without updating the code to not use %ebp. And given
    the maintenance cost, the small speed difference vs. salsa20-generic,
    the fact that few people still use i686 kernels, the doubt that anyone
    is even using the kernel's Salsa20 at all, and the fact that an SSE2
    implementation would almost certainly be much faster on any remotely
    modern x86 processor yet no one has cared enough to add one, I don't
    think it's worthwhile to keep.

    Thus, just remove both the x86_64 and i686 salsa20-asm implementations.

    Reported-by: syzbot+ffa3a158337bbc01ff09@syzkaller.appspotmail.com
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     

03 Jul, 2018

1 commit

  • commit b65c32ec5a942ab3ada93a048089a938918aba7f upstream.

    The signatureValue field of an X.509 certificate is encoded as a BIT
    STRING. For RSA signatures this BIT STRING is of the so-called primitive
    subtype, which contains a u8 prefix indicating the count of unused bits
    in the encoding.

    We have to strip this prefix from the signature data, just as we already
    do for key data in the x509_extract_key_data() function.

    This wasn't noticed earlier because this prefix byte is zero for RSA key
    sizes divisible by 8. Since BIT STRING is a big-endian encoding, adding
    zero prefixes has no bearing on its value.

    The signature length, however, was incorrect, which is a problem for RSA
    implementations that need it to be exactly correct (like AMD CCP).
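
    A small sketch of the stripping step (hypothetical helper, not the
    kernel's parser): the first content byte of a DER BIT STRING counts the
    unused bits and, for byte-aligned RSA signatures, is a zero that must be
    skipped so the remaining length comes out exact.

    #include <stddef.h>
    #include <stdio.h>

    static int strip_bit_string_prefix(const unsigned char **p, size_t *len)
    {
        if (*len < 1 || (*p)[0] != 0)   /* expect a byte-aligned string */
            return -1;
        (*p)++;                         /* drop the unused-bits count byte */
        (*len)--;
        return 0;
    }

    int main(void)
    {
        /* 0x00 prefix followed by four signature bytes */
        const unsigned char sig[] = { 0x00, 0xde, 0xad, 0xbe, 0xef };
        const unsigned char *p = sig;
        size_t len = sizeof(sig);

        if (strip_bit_string_prefix(&p, &len) == 0)
            printf("signature is %zu bytes\n", len);    /* 4, not 5 */
        return 0;
    }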

    Signed-off-by: Maciej S. Szmigiero
    Fixes: c26fd69fa009 ("X.509: Add a crypto key parser for binary (DER) X.509 certificates")
    Cc: stable@vger.kernel.org
    Signed-off-by: James Morris
    Signed-off-by: Greg Kroah-Hartman

    Maciej S. Szmigiero
     

30 May, 2018

1 commit

  • [ Upstream commit 6459ae386699a5fe0dc52cf30255f75274fa43a4 ]

    If none of the certificates in a SignerInfo's certificate chain match a
    trusted key, nor is the last certificate signed by a trusted key, then
    pkcs7_validate_trust_one() tries to check whether the SignerInfo's
    signature was made directly by a trusted key. But, it actually fails to
    set the 'sig' variable correctly, so it actually verifies the last
    signature seen. That will only be the SignerInfo's signature if the
    certificate chain is empty; otherwise it will actually be the last
    certificate's signature.

    This is not by itself a security problem, since verifying any of the
    certificates in the chain should be sufficient to verify the SignerInfo.
    Still, it's not working as intended so it should be fixed.

    Fix it by setting 'sig' correctly for the direct verification case.

    Fixes: 757932e6da6d ("PKCS#7: Handle PKCS#7 messages that contain no X.509 certs")
    Signed-off-by: Eric Biggers
    Signed-off-by: David Howells
    Signed-off-by: Sasha Levin
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     

16 May, 2018

1 commit

  • commit a466856e0b7ab269cdf9461886d007e88ff575b0 upstream.

    syzbot reported:

    BUG: KMSAN: uninit-value in alg_bind+0xe3/0xd90 crypto/af_alg.c:162

    We need to check addr_len before dereferencing sa (or uaddr).
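
    The shape of the check, as a userspace sketch (hypothetical handler and
    struct, not the af_alg code):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    struct sockaddr_alg_like {
        unsigned short salg_family;
        char salg_type[14];
    };

    static int bind_like(const void *uaddr, int addr_len)
    {
        struct sockaddr_alg_like sa;

        if (addr_len < (int)sizeof(sa))
            return -EINVAL;         /* too short: don't touch the fields */
        memcpy(&sa, uaddr, sizeof(sa));
        return 0;
    }

    int main(void)
    {
        char buf[64] = { 0 };

        printf("%d\n", bind_like(buf, 4));                /* -EINVAL */
        printf("%d\n", bind_like(buf, (int)sizeof(buf))); /* 0 */
        return 0;
    }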

    Fixes: bb30b8848c85 ("crypto: af_alg - whitelist mask and type")
    Signed-off-by: Eric Dumazet
    Reported-by: syzbot
    Cc: Stephan Mueller
    Cc: Herbert Xu
    Signed-off-by: David S. Miller
    Signed-off-by: Greg Kroah-Hartman

    Eric Dumazet
     

02 May, 2018

1 commit

  • commit eea0d3ea7546961f69f55b26714ac8fd71c7c020 upstream.

    During freeing of the internal buffers used by the DRBG, set the pointer
    to NULL. It is possible that the context with the freed buffers is
    reused. In case of an error during initialization where the pointers
    do not yet point to allocated memory, the NULL value prevents a double
    free.
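
    The idea in miniature (userspace free() standing in for the kernel's
    kfree(); the names are hypothetical):

    #include <stdlib.h>

    struct drbg_like {
        unsigned char *scratchpad;
    };

    static void free_buffers(struct drbg_like *d)
    {
        free(d->scratchpad);        /* free(NULL) is a no-op */
        d->scratchpad = NULL;       /* a later cleanup pass is now harmless */
    }

    int main(void)
    {
        struct drbg_like d = { .scratchpad = malloc(64) };

        free_buffers(&d);
        free_buffers(&d);           /* safe: no double free */
        return 0;
    }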

    Cc: stable@vger.kernel.org
    Fixes: 3cfc3b9721123 ("crypto: drbg - use aligned buffers")
    Signed-off-by: Stephan Mueller
    Reported-by: syzbot+75397ee3df5c70164154@syzkaller.appspotmail.com
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Stephan Mueller
     

12 Apr, 2018

1 commit

  • [ Upstream commit 148b974deea927f5dbb6c468af2707b488bfa2de ]

    While testing other changes, I discovered that gcc-7.2.1 produces badly
    optimized code for aes_encrypt/aes_decrypt. This is especially true when
    CONFIG_UBSAN_SANITIZE_ALL is enabled, where it leads to extremely
    large stack usage that in turn might cause kernel stack overflows:

    crypto/aes_generic.c: In function 'aes_encrypt':
    crypto/aes_generic.c:1371:1: warning: the frame size of 4880 bytes is larger than 2048 bytes [-Wframe-larger-than=]
    crypto/aes_generic.c: In function 'aes_decrypt':
    crypto/aes_generic.c:1441:1: warning: the frame size of 4864 bytes is larger than 2048 bytes [-Wframe-larger-than=]

    I verified that this problem exists on all architectures that are
    supported by gcc-7.2, though arm64 in particular is less affected than
    the others. I also found that gcc-7.1 and gcc-8 do not show the extreme
    stack usage but still produce worse code than earlier versions for this
    file, apparently because of optimization passes that generally provide
    a substantial improvement in object code quality but understandably fail
    to find any shortcuts in the AES algorithm.

    Possible workarounds include

    a) disabling -ftree-pre and -ftree-sra optimizations, this was an earlier
    patch I tried, which reliably fixed the stack usage, but caused a
    serious performance regression in some versions, as later testing
    found.

    b) disabling UBSAN on this file or all ciphers, as suggested by Ard
    Biesheuvel. This would lead to massively better crypto performance in
    UBSAN-enabled kernels and avoid the stack usage, but there is a concern
    over whether we should exclude arbitrary files from UBSAN at all.

    c) Forcing the optimization level in a different way. Similar to a),
    but rather than deselecting specific optimization stages,
    this now uses "gcc -Os" for this file, regardless of the
    CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE/SIZE option. This is a reliable
    workaround for the stack consumption on all architectures, and I've
    retested the performance results now on x86; cycles/byte (lower is
    better) for cbc(aes-generic) with 256-bit keys:

                   -O2     -Os
    gcc-6.3.1      14.9    15.1
    gcc-7.0.1      14.7    15.3
    gcc-7.1.1      15.3    14.7
    gcc-7.2.1      16.8    15.9
    gcc-8.0.0      15.5    15.6

    This implements option c) by forcing -Os on all compiler versions
    starting with gcc-7.1. As a workaround for PR83356, it would only be
    needed for gcc-7.2+ with UBSAN enabled, but since it also shows better
    performance on gcc-7.1 without UBSAN, it seems appropriate to use the
    faster version here as well.

    Side note: during testing, I also played with the AES code in libressl,
    which had a similar performance regression from gcc-6 to gcc-7.2,
    but was three times slower overall. It might be interesting to
    investigate that further and possibly port the Linux implementation
    into that.

    Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83356
    Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83651
    Cc: Richard Biener
    Cc: Jakub Jelinek
    Cc: Ard Biesheuvel
    Signed-off-by: Arnd Bergmann
    Acked-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu
    Signed-off-by: Sasha Levin
    Signed-off-by: Greg Kroah-Hartman

    Arnd Bergmann
     

08 Apr, 2018

3 commits

  • commit 900a081f6912a8985dc15380ec912752cb66025a upstream.

    When we have an unaligned SG list entry where there is no leftover
    aligned data, the hash walk code will incorrectly return zero as if
    the entire SG list has been processed.

    This patch fixes it by moving onto the next page instead.

    Reported-by: Eli Cooper
    Cc: stable@vger.kernel.org
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Herbert Xu
     
  • commit 333e18c5cc74438f8940c7f3a8b3573748a371f9 upstream.

    The RSA private key in the first form should have the version, prime1,
    prime2, exponent1, exponent2 and coefficient values set to 0. With
    non-zero values for prime1, prime2, exponent1, exponent2 and
    coefficient, the Intel QAT driver will assume that values are provided
    for the private key's second form. This results in signature
    verification failures for modules when a QAT device is present and the
    modules are signed with rsa,sha256.

    Cc: stable@vger.kernel.org
    Signed-off-by: Giovanni Cabiddu
    Signed-off-by: Conor McLoughlin
    Reviewed-by: Stephan Mueller
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Conor McLoughlin
     
  • commit 8c9bdab21289c211ca1ca6a5f9b7537b4a600a02 upstream.

    The buffer rctx->ext contains potentially sensitive data and should
    be freed with kzfree.

    Cc: stable@vger.kernel.org
    Fixes: 700cb3f5fe75 ("crypto: lrw - Convert to skcipher")
    Reported-by: Dan Carpenter
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Herbert Xu
     

19 Mar, 2018

1 commit

  • [ Upstream commit 4c0e22c90510308433272d7ba281b1eb4eda8209 ]

    If crypto_get_default_rng returns an error, the function ecc_gen_privkey
    should return that error. Instead, it currently tries to use the
    default_rng anyway, causing a kernel panic via a NULL pointer
    dereference. Returning the error directly, as was evidently intended,
    fixes this.

    Signed-off-by: Pierre Ducroquet
    Reviewed-by: PrasannaKumar Muralidharan
    Signed-off-by: Herbert Xu
    Signed-off-by: Sasha Levin
    Signed-off-by: Greg Kroah-Hartman

    Pierre
     

03 Mar, 2018

1 commit

  • [ Upstream commit af955bf15d2c27496b0269b1f05c26f758c68314 ]

    This variable was incremented and decremented without any protection.
    The result was an occasional miscount and negative wraparound, leading
    to false resource allocation failures.

    Fixes: 7d2c3f54e6f6 ("crypto: af_alg - remove locking in async callback")
    Signed-off-by: Jonathan Cameron
    Reviewed-by: Stephan Mueller
    Signed-off-by: Herbert Xu
    Signed-off-by: Sasha Levin
    Signed-off-by: Greg Kroah-Hartman

    Jonathan Cameron
     

28 Feb, 2018

4 commits

  • commit 29f4a67c17e19314b7d74b8569be935e6c7edf50 upstream.

    If there is a blacklisted certificate in a SignerInfo's certificate
    chain, then pkcs7_verify_sig_chain() sets sinfo->blacklisted and returns
    0. But, pkcs7_verify() fails to handle this case appropriately, as it
    actually continues on to the line 'actual_ret = 0;', indicating that the
    SignerInfo has passed verification. Consequently, PKCS#7 signature
    verification ignores the certificate blacklist.

    Fix this by not considering blacklisted SignerInfos to have passed
    verification.

    Also fix the function comment with regard to when 0 is returned.

    Fixes: 03bb79315ddc ("PKCS#7: Handle blacklisted certificates")
    Cc: stable@vger.kernel.org # v4.12+
    Signed-off-by: Eric Biggers
    Signed-off-by: David Howells
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     
  • commit 971b42c038dc83e3327872d294fe7131bab152fc upstream.

    When pkcs7_verify_sig_chain() is building the certificate chain for a
    SignerInfo using the certificates in the PKCS#7 message, it is passing
    the wrong arguments to public_key_verify_signature(). Consequently,
    when the next certificate is supposed to be used to verify the previous
    certificate, the next certificate is actually used to verify itself.

    An attacker can use this bug to create a bogus certificate chain that
    has no cryptographic relationship between the beginning and end.

    Fortunately I couldn't quite find a way to use this to bypass the
    overall signature verification, though it comes very close. Here's the
    reasoning: due to the bug, every certificate in the chain beyond the
    first actually has to be self-signed (where "self-signed" here refers to
    the actual key and signature; an attacker might still manipulate the
    certificate fields such that the self_signed flag doesn't actually get
    set, and thus the chain doesn't end immediately). But to pass trust
    validation (pkcs7_validate_trust()), either the SignerInfo or one of the
    certificates has to actually be signed by a trusted key. Since only
    self-signed certificates can be added to the chain, the only way for an
    attacker to introduce a trusted signature is to include a self-signed
    trusted certificate.

    But, when pkcs7_validate_trust_one() reaches that certificate, instead
    of trying to verify the signature on that certificate, it will actually
    look up the corresponding trusted key, which will succeed, and then try
    to verify the *previous* certificate, which will fail. Thus, disaster
    is narrowly averted (as far as I could tell).

    Fixes: 6c2dc5ae4ab7 ("X.509: Extract signature digest and make self-signed cert checks earlier")
    Cc: stable@vger.kernel.org # v4.7+
    Signed-off-by: Eric Biggers
    Signed-off-by: David Howells
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     
  • commit 4b34968e77ad09628cfb3c4a7daf2adc2cefc6e8 upstream.

    The asymmetric key type allows an X.509 certificate to be added even if
    its signature's hash algorithm is not available in the crypto API. In
    that case 'payload.data[asym_auth]' will be NULL. But the key
    restriction code failed to check for this case before trying to use the
    signature, resulting in a NULL pointer dereference in
    key_or_keyring_common() or in restrict_link_by_signature().

    Fix this by returning -ENOPKG when the signature is unsupported.

    Reproducer when all the CONFIG_CRYPTO_SHA512* options are disabled and
    keyctl has support for the 'restrict_keyring' command:

    keyctl new_session
    keyctl restrict_keyring @s asymmetric builtin_trusted
    openssl req -new -sha512 -x509 -batch -nodes -outform der \
    | keyctl padd asymmetric desc @s

    Fixes: a511e1af8b12 ("KEYS: Move the point of trust determination to __key_link()")
    Cc: stable@vger.kernel.org # v4.7+
    Signed-off-by: Eric Biggers
    Signed-off-by: David Howells
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     
  • commit 437499eea4291ae9621e8763a41df027c110a1ef upstream.

    The X.509 parser mishandles the case where the certificate's signature's
    hash algorithm is not available in the crypto API. In this case,
    x509_get_sig_params() doesn't allocate the cert->sig->digest buffer;
    this part seems to be intentional. However,
    public_key_verify_signature() is still called via
    x509_check_for_self_signed(), which triggers the 'BUG_ON(!sig->digest)'.

    Fix this by making public_key_verify_signature() return -ENOPKG if the
    hash buffer has not been allocated.

    Reproducer when all the CONFIG_CRYPTO_SHA512* options are disabled:

    openssl req -new -sha512 -x509 -batch -nodes -outform der \
    | keyctl padd asymmetric desc @s

    Fixes: 6c2dc5ae4ab7 ("X.509: Extract signature digest and make self-signed cert checks earlier")
    Reported-by: Paolo Valente
    Cc: Paolo Valente
    Cc: stable@vger.kernel.org # v4.7+
    Signed-off-by: Eric Biggers
    Signed-off-by: David Howells
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     

22 Feb, 2018

1 commit

  • commit 75f296d93bcebcfe375884ddac79e30263a31766 upstream.

    Convert all allocations that used a NOTRACK flag to stop using it.

    Link: http://lkml.kernel.org/r/20171007030159.22241-3-alexander.levin@verizon.com
    Signed-off-by: Sasha Levin
    Cc: Alexander Potapenko
    Cc: Eric W. Biederman
    Cc: Michal Hocko
    Cc: Pekka Enberg
    Cc: Steven Rostedt
    Cc: Tim Hansen
    Cc: Vegard Nossum
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Levin, Alexander (Sasha Levin)
     

17 Feb, 2018

6 commits

  • commit 9fa68f620041be04720d0cbfb1bd3ddfc6310b24 upstream.

    Currently, almost none of the keyed hash algorithms check whether a key
    has been set before proceeding. Some algorithms are okay with this and
    will effectively just use a key of all 0's or some other bogus default.
    However, others will severely break, as demonstrated using
    "hmac(sha3-512-generic)", the unkeyed use of which causes a kernel crash
    via a (potentially exploitable) stack buffer overflow.

    A while ago, this problem was solved for AF_ALG by pairing each hash
    transform with a 'has_key' bool. However, there are still other places
    in the kernel where userspace can specify an arbitrary hash algorithm by
    name, and the kernel uses it as an unkeyed hash without checking whether
    it is really unkeyed. Examples of this include:

    - KEYCTL_DH_COMPUTE, via the KDF extension
    - dm-verity
    - dm-crypt, via the ESSIV support
    - dm-integrity, via the "internal hash" mode with no key given
    - drbd (Distributed Replicated Block Device)

    This bug is especially bad for KEYCTL_DH_COMPUTE as that requires no
    privileges to call.

    Fix the bug for all users by adding a flag CRYPTO_TFM_NEED_KEY to the
    ->crt_flags of each hash transform that indicates whether the transform
    still needs to be keyed or not. Then, make the hash init, import, and
    digest functions return -ENOKEY if the key is still needed.

    The new flag also replaces the 'has_key' bool which algif_hash was
    previously using, thereby simplifying the algif_hash implementation.
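
    The gating this introduces, as a toy sketch (hypothetical flag and
    struct, not the crypto API):

    #include <errno.h>
    #include <stdio.h>

    #define NEED_KEY 0x1u

    struct tfm_like {
        unsigned int crt_flags;
    };

    static void mark_keyed(struct tfm_like *t) { t->crt_flags |= NEED_KEY; }
    static void setkey(struct tfm_like *t)     { t->crt_flags &= ~NEED_KEY; }

    static int hash_init(const struct tfm_like *t)
    {
        if (t->crt_flags & NEED_KEY)
            return -ENOKEY;         /* keyed hash used without a key */
        return 0;
    }

    int main(void)
    {
        struct tfm_like t = { 0 };

        mark_keyed(&t);
        printf("%d\n", hash_init(&t));  /* -ENOKEY */
        setkey(&t);
        printf("%d\n", hash_init(&t));  /* 0 */
        return 0;
    }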

    Reported-by: syzbot
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     
  • commit a208fa8f33031b9e0aba44c7d1b7e68eb0cbd29e upstream.

    We need to consistently enforce that keyed hashes cannot be used without
    setting the key. To do this we need a reliable way to determine whether
    a given hash algorithm is keyed or not. AF_ALG currently does this by
    checking for the presence of a ->setkey() method. However, this is
    actually slightly broken because the CRC-32 algorithms implement
    ->setkey() but can also be used without a key. (The CRC-32 "key" is not
    actually a cryptographic key but rather represents the initial state.
    If not overridden, then a default initial state is used.)

    Prepare to fix this by introducing a flag CRYPTO_ALG_OPTIONAL_KEY which
    indicates that the algorithm has a ->setkey() method, but it is not
    required to be called. Then set it on all the CRC-32 algorithms.

    The same also applies to the Adler-32 implementation in Lustre.

    Also, the cryptd and mcryptd templates have to pass through the flag
    from their underlying algorithm.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     
  • commit a16e772e664b9a261424107784804cffc8894977 upstream.

    Since Poly1305 requires a nonce per invocation, the Linux kernel
    implementations of Poly1305 don't use the crypto API's keying mechanism
    and instead expect the key and nonce as the first 32 bytes of the data.
    But ->setkey() is still defined as a stub returning an error code. This
    prevents Poly1305 from being used through AF_ALG and will also break it
    completely once we start enforcing that all crypto API users (not just
    AF_ALG) call ->setkey() if present.

    Fix it by removing crypto_poly1305_setkey(), leaving ->setkey as NULL.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     
  • commit fa59b92d299f2787e6bae1ff078ee0982e80211f upstream.

    When the mcryptd template is used to wrap an unkeyed hash algorithm,
    don't install a ->setkey() method to the mcryptd instance. This change
    is necessary for mcryptd to keep working with unkeyed hash algorithms
    once we start enforcing that ->setkey() is called when present.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     
  • commit 841a3ff329713f796a63356fef6e2f72e4a3f6a3 upstream.

    When the cryptd template is used to wrap an unkeyed hash algorithm,
    don't install a ->setkey() method to the cryptd instance. This change
    is necessary for cryptd to keep working with unkeyed hash algorithms
    once we start enforcing that ->setkey() is called when present.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     
  • commit cd6ed77ad5d223dc6299fb58f62e0f5267f7e2ba upstream.

    Templates that use an shash spawn can use crypto_shash_alg_has_setkey()
    to determine whether the underlying algorithm requires a key or not.
    But there was no corresponding function for ahash spawns. Add it.

    Note that the new function actually has to support both shash and ahash
    algorithms, since the ahash API can be used with either.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     

13 Feb, 2018

1 commit

  • commit 5c6ac1d4f8fbdbed65dbeb8cf149d736409d16a1 upstream.

    In case the buffer length is a multiple of PAGE_SIZE, the S/G table is
    generated incorrectly. Fix this by handling buflen = k * PAGE_SIZE
    separately.
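
    The arithmetic at issue, in a standalone sketch (made-up sizing, not
    the tcrypt code): allocating buflen / PAGE_SIZE + 1 entries leaves a
    zero-length final entry exactly when buflen is a multiple of PAGE_SIZE,
    whereas rounding up does not.

    #include <stdio.h>

    #define PAGE_SIZE 4096UL

    static unsigned long sg_entries(unsigned long buflen)
    {
        return (buflen + PAGE_SIZE - 1) / PAGE_SIZE;    /* round up */
    }

    int main(void)
    {
        unsigned long buflen = 2 * PAGE_SIZE;

        printf("naive: %lu entries (last one empty)\n",
               buflen / PAGE_SIZE + 1);                 /* 3 */
        printf("fixed: %lu entries\n", sg_entries(buflen)); /* 2 */
        return 0;
    }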

    Signed-off-by: Robert Baronescu
    Signed-off-by: Herbert Xu
    Signed-off-by: Horia Geantă
    Signed-off-by: Greg Kroah-Hartman

    Robert Baronescu
     

04 Feb, 2018

3 commits

  • commit bb30b8848c85e18ca7e371d0a869e94b3e383bdf upstream.

    The user space interface allows specifying the type and mask field used
    to allocate the cipher. Only a subset of the possible flags are intended
    for user space. Therefore, white-list the allowed flags.

    In case the user space caller uses at least one non-allowed flag, EINVAL
    is returned.
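
    The shape of such a whitelist check (the mask value below is an
    assumption for illustration, not the one the patch uses):

    #include <errno.h>
    #include <stdio.h>

    #define ALLOWED_FLAGS 0x000f0000u   /* assumed permitted subset */

    static int check_user_flags(unsigned int type, unsigned int mask)
    {
        if ((type | mask) & ~ALLOWED_FLAGS)
            return -EINVAL;             /* a non-whitelisted bit is set */
        return 0;
    }

    int main(void)
    {
        printf("%d\n", check_user_flags(0x00010000u, 0));  /* 0 */
        printf("%d\n", check_user_flags(0x00000001u, 0));  /* -EINVAL */
        return 0;
    }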

    Reported-by: syzbot
    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Stephan Mueller
     
  • commit c013cee99d5a18aec8c71fee8f5f41369cd12595 upstream.

    Ensure that the input is byte swabbed before injecting it into the
    SHA3 transform. Use the get_unaligned() accessor for this so that
    we don't perform unaligned access inadvertently on architectures
    that do not support that.
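
    A userspace stand-in for that pattern (mirroring the semantics of
    get_unaligned_le64(); not the kernel implementation):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint64_t load_le64(const unsigned char *p)
    {
        uint64_t v;

        memcpy(&v, p, sizeof(v));       /* no alignment assumption */
    #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
        v = __builtin_bswap64(v);       /* swab on big-endian hosts */
    #endif
        return v;
    }

    int main(void)
    {
        unsigned char buf[9] = { 0, 1, 2, 3, 4, 5, 6, 7, 8 };

        /* deliberately misaligned source pointer */
        printf("0x%016llx\n", (unsigned long long)load_le64(buf + 1));
        return 0;
    }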

    Fixes: 53964b9ee63b7075 ("crypto: sha3 - Add SHA-3 hash algorithm")
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Ard Biesheuvel
     
  • commit b5b9007730ce1d90deaf25d7f678511550744bdc upstream.

    This fixes a typo in the CRYPTO_KPP dependency of CRYPTO_ECDH.

    Fixes: 3c4b23901a0c ("crypto: ecdh - Add ECDH software support")
    Signed-off-by: Hauke Mehrtens
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Hauke Mehrtens
     

17 Jan, 2018

1 commit

  • commit 9a00674213a3f00394f4e3221b88f2d21fc05789 upstream.

    syzkaller triggered a NULL pointer dereference in crypto_remove_spawns()
    via a program that repeatedly and concurrently requests AEADs
    "authenc(cmac(des3_ede-asm),pcbc-aes-aesni)" and hashes "cmac(des3_ede)"
    through AF_ALG, where the hashes are requested as "untested"
    (CRYPTO_ALG_TESTED is set in ->salg_mask but clear in ->salg_feat; this
    causes the template to be instantiated for every request).

    Although AF_ALG users really shouldn't be able to request an "untested"
    algorithm, the NULL pointer dereference is actually caused by a
    longstanding race condition where crypto_remove_spawns() can encounter
    an instance which has had spawn(s) "grabbed" but hasn't yet been
    registered, resulting in ->cra_users still being NULL.

    We probably should properly initialize ->cra_users earlier, but that
    would require updating many templates individually. For now just fix
    the bug in a simple way that can easily be backported: make
    crypto_remove_spawns() treat a NULL ->cra_users list as empty.
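
    The fix's shape in miniature (a toy circular list standing in for
    struct list_head; not the crypto API): a NULL list is reported as empty
    instead of being dereferenced.

    #include <stdio.h>

    struct list_like {
        struct list_like *next;     /* circular: a sentinel links to itself */
    };

    static int users_empty(const struct list_like *users)
    {
        if (!users)                 /* never initialized: treat as empty */
            return 1;
        return users->next == users;
    }

    int main(void)
    {
        struct list_like sentinel;

        sentinel.next = &sentinel;  /* properly initialized, empty list */

        printf("%d\n", users_empty(NULL));      /* 1, instead of a crash */
        printf("%d\n", users_empty(&sentinel)); /* 1 */
        return 0;
    }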

    Reported-by: syzbot
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     

10 Jan, 2018

2 commits

  • commit d76c68109f37cb85b243a1cf0f40313afd2bae68 upstream.

    pcrypt is using the old way of freeing instances, where the ->free()
    method specified in the 'struct crypto_template' is passed a pointer to
    the 'struct crypto_instance'. But the crypto_instance is being
    kfree()'d directly, which is incorrect because the memory was actually
    allocated as an aead_instance, which contains the crypto_instance at a
    nonzero offset. Thus, the wrong pointer was being kfree()'d.

    Fix it by switching to the new way to free aead_instance's where the
    ->free() method is specified in the aead_instance itself.

    Reported-by: syzbot
    Fixes: 0496f56065e0 ("crypto: pcrypt - Add support for new AEAD interface")
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     
  • commit e57121d08c38dabec15cf3e1e2ad46721af30cae upstream.

    If the rfc7539 template was instantiated with a hash algorithm with
    digest size larger than 16 bytes (POLY1305_DIGEST_SIZE), then the digest
    overran the 'tag' buffer in 'struct chachapoly_req_ctx', corrupting the
    subsequent memory, including 'cryptlen'. This caused a crash during
    crypto_skcipher_decrypt().

    Fix it by, when instantiating the template, requiring that the
    underlying hash algorithm has the digest size expected for Poly1305.

    Reproducer:

    #include <linux/if_alg.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main()
    {
        int algfd, reqfd;
        struct sockaddr_alg addr = {
            .salg_type = "aead",
            .salg_name = "rfc7539(chacha20,sha256)",
        };
        unsigned char buf[32] = { 0 };

        algfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
        bind(algfd, (void *)&addr, sizeof(addr));
        setsockopt(algfd, SOL_ALG, ALG_SET_KEY, buf, sizeof(buf));
        reqfd = accept(algfd, 0, 0);
        write(reqfd, buf, 16);
        read(reqfd, buf, 16);
    }

    Reported-by: syzbot
    Fixes: 71ebc4d1b27d ("crypto: chacha20poly1305 - Add a ChaCha20-Poly1305 AEAD construction, RFC7539")
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers