12 Feb, 2019

12 commits

  • Fix the b value to be compliant with FIPS 186-4 D.1.2.1. This fix is
    required to make sure the SP800-56A public key test passes for P-192.

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu
    (cherry picked from commit aef66587f19c7ecc52717328a4c5484f1d2268e9)

    Stephan Mueller
     
  • The reverted commit was disabling some code because it was not
    compatible. Now it is.

    This reverts commit 2570172aabd1962b953625283587541424f7b6a4.

    Signed-off-by: Franck LENORMAND

    Franck LENORMAND
     
  • Commit 110492183c4b ("crypto: compress - remove unused pcomp interface")
    removed pcomp interface but missed cleaning up tcrypt.

    Signed-off-by: Horia Geantă
    Signed-off-by: Franck LENORMAND

    Franck LENORMAND
     
  • Generic GCM is likely to end up using a hardware accelerator to do
    part of the job. Allocating hash, iv and result in a contiguous memory
    area increases the risk of DMA-mapping multiple ranges on the same
    cacheline. Also, having DMA- and CPU-written data on the same cacheline
    will cause coherence issues.

    Signed-off-by: Franck LENORMAND

    Franck LENORMAND
     
  • Generic GCM is likely to end up using a hardware accelerator to do
    part of the job. Allocating hash, iv and result in a contiguous memory
    area increases the risk of DMA-mapping multiple ranges on the same
    cacheline. Also, having DMA- and CPU-written data on the same cacheline
    will cause coherence issues.

    Signed-off-by: Franck LENORMAND

    Franck LENORMAND
     
  • The kernel implementation of crc32 (crc32_generic.c)
    accepts a key to set the seed. It is incompatible with the
    kernel implementation of the hmac crypto template, which
    does not support keyed underlying algorithms.

    It is therefore not possible to load the algorithm
    hmac(crc32), so remove it from tcrypt.

    Signed-off-by: Franck LENORMAND

    Franck LENORMAND
     
  • If the test manager is not disabled, it is not possible to
    determine whether the tcrypt result is suitable or not.

    This patch fixes the issue by printing a message to the user.

    Signed-off-by: Franck LENORMAND

    Franck LENORMAND
     
  • According to SP800-56A section 5.6.2.1, the public key to be processed
    for the ECDH operation shall be checked for appropriateness. When the
    public key is considered to be an ephemeral key, the partial validation
    test as defined in SP800-56A section 5.6.2.3.4 can be applied.

    The partial validation test requires the presence of the field
    elements a and b. For the implemented NIST curves, b is defined in
    FIPS 186-4 appendix D.1.2. The element a is implicitly given with the
    Weierstrass equation given in D.1.2 where a = p - 3.

    Without the test, the NIST ACVP testing fails. After adding this check,
    the NIST ACVP testing passes.

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
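
    As a concrete illustration, the partial validation test boils down to a
    range check 0 <= x, y < p plus verifying that the point satisfies
    y^2 = x^3 - 3x + b (mod p). The standalone sketch below uses deliberately
    tiny toy parameters (p = 97, b = 3, not a NIST curve) and is not the
    kernel's ecc code:

    #include <stdio.h>
    #include <stdint.h>

    /* Toy curve parameters, for illustration only (not a NIST curve). */
    #define TOY_P 97u               /* field prime                       */
    #define TOY_B  3u               /* coefficient b; a = p - 3 implicit */

    static uint32_t mod_p(int64_t v)
    {
            int64_t r = v % (int64_t)TOY_P;

            return (uint32_t)(r < 0 ? r + TOY_P : r);
    }

    /* SP800-56A 5.6.2.3.4-style partial validation: range + curve equation. */
    static int ecc_partial_validate(uint32_t x, uint32_t y)
    {
            uint32_t lhs, rhs;

            if (x >= TOY_P || y >= TOY_P)
                    return -1;                              /* out of range */

            lhs = mod_p((int64_t)y * y);                    /* y^2          */
            rhs = mod_p((int64_t)x * x % TOY_P * x          /* x^3          */
                        - 3 * (int64_t)x + TOY_B);          /*  - 3x + b    */

            return lhs == rhs ? 0 : -1;
    }

    int main(void)
    {
            /* (0, 10) lies on y^2 = x^3 - 3x + 3 mod 97; (2, 3) does not. */
            printf("(0,10): %s\n", ecc_partial_validate(0, 10) ? "reject" : "accept");
            printf("(2, 3): %s\n", ecc_partial_validate(2, 3) ? "reject" : "accept");
            return 0;
    }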
     
  • On the quest to remove all VLAs from the kernel[1], this avoids VLAs
    by just using the maximum allocation size (4 bytes) for stack arrays.
    All the VLAs in ecc were either 3 or 4 bytes (or a multiple), so just
    make it 4 bytes all the time. Initialization routines are adjusted to
    check that ndigits does not end up larger than the arrays.

    This includes a removal of the earlier attempt at this fix from
    commit a963834b4742 ("crypto/ecc: Remove stack VLA usage")

    [1] https://lkml.org/lkml/2018/3/7/621

    Signed-off-by: Kees Cook
    Signed-off-by: Herbert Xu

    Kees Cook
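
    The shape of the change, as a standalone sketch (ECC_MAX_DIGITS and the
    structure names here are illustrative, not the kernel's definitions):
    the runtime-sized arrays become fixed-size arrays dimensioned for the
    largest supported curve, and ndigits is bounds-checked up front.

    #include <stdint.h>
    #include <string.h>
    #include <errno.h>

    /* Illustrative maximum: 4 x 64-bit digits covers curves up to 256 bits. */
    #define ECC_MAX_DIGITS 4

    struct ecc_point_buf {
            uint64_t x[ECC_MAX_DIGITS];
            uint64_t y[ECC_MAX_DIGITS];
    };

    /* Before: uint64_t x[ndigits];  -- a VLA sized by runtime data.
     * After:  fixed-size arrays plus an explicit bound check at init time. */
    static int ecc_buf_init(struct ecc_point_buf *buf, unsigned int ndigits)
    {
            if (ndigits > ECC_MAX_DIGITS)
                    return -EINVAL;         /* would overflow the fixed arrays */

            memset(buf, 0, sizeof(*buf));
            return 0;
    }

    int main(void)
    {
            struct ecc_point_buf buf;

            return ecc_buf_init(&buf, 3);   /* e.g. 3 digits for P-192 */
    }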
     
  • On the quest to remove all VLAs from the kernel[1], this switches to
    a pair of kmalloc regions instead of using the stack. This also moves
    the get_random_bytes() after all allocations (and drops the needless
    "nbytes" variable).

    [1] https://lkml.org/lkml/2018/3/7/621

    Signed-off-by: Kees Cook
    Reviewed-by: Tudor Ambarus
    Signed-off-by: Herbert Xu

    Kees Cook
     
  • CAAM uses DMA to transfer data to and from memory. If
    DMA and CPU accessed data share the same cacheline, cache
    pollution will occur. Marking the result as cacheline aligned
    moves it to a separate cache line.

    Signed-off-by: Radu Solea

    Radu Solea
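
    The idea can be sketched in isolation: give the DMA-written result its
    own cache line with an alignment attribute so that CPU-written
    bookkeeping never shares a line with it. The structure and the 64-byte
    line size below are assumptions for illustration, not the CAAM driver's
    actual layout.

    #include <stdio.h>
    #include <stddef.h>
    #include <stdalign.h>

    #define CACHELINE_BYTES 64      /* assumed cache line size */

    struct hash_request_ctx {
            unsigned int len;                       /* CPU-written bookkeeping */
            unsigned char desc[32];                 /* CPU-written descriptor  */
            /* DMA-written output, pushed onto its own cache line so the
             * device and the CPU never dirty the same line. */
            alignas(CACHELINE_BYTES) unsigned char result[64];
    };

    int main(void)
    {
            size_t off = offsetof(struct hash_request_ctx, result);

            printf("result offset %zu, cacheline aligned: %s\n",
                   off, off % CACHELINE_BYTES ? "no" : "yes");
            return 0;
    }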
     
  • Because the old rfc4543 implementation always injected an IV into
    the AD, while the new one does not, we have to disable the test
    while it is converted over to the new AEAD interface.

    Signed-off-by: Herbert Xu

    Herbert Xu
     

23 Jan, 2019

2 commits

  • commit 8f9c469348487844328e162db57112f7d347c49f upstream.

    Keys for "authenc" AEADs are formatted as an rtattr containing a 4-byte
    'enckeylen', followed by an authentication key and an encryption key.
    crypto_authenc_extractkeys() parses the key to find the inner keys.

    However, it fails to consider the case where the rtattr's payload is
    longer than 4 bytes but not 4-byte aligned, and where the key ends
    before the next 4-byte aligned boundary. In this case, 'keylen -=
    RTA_ALIGN(rta->rta_len);' underflows to a value near UINT_MAX. This
    causes a buffer overread and crash during crypto_ahash_setkey().

    Fix it by restricting the rtattr payload to the expected size.

    Reproducer using AF_ALG:

    #include <linux/if_alg.h>
    #include <linux/rtnetlink.h>
    #include <sys/socket.h>

    int main()
    {
            int fd;
            struct sockaddr_alg addr = {
                    .salg_type = "aead",
                    .salg_name = "authenc(hmac(sha256),cbc(aes))",
            };
            struct {
                    struct rtattr attr;
                    __be32 enckeylen;
                    char keys[1];
            } __attribute__((packed)) key = {
                    .attr.rta_len = sizeof(key),
                    .attr.rta_type = 1 /* CRYPTO_AUTHENC_KEYA_PARAM */,
            };

            fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(fd, (void *)&addr, sizeof(addr));
            setsockopt(fd, SOL_ALG, ALG_SET_KEY, &key, sizeof(key));
    }

    It caused:

    BUG: unable to handle kernel paging request at ffff88007ffdc000
    PGD 2e01067 P4D 2e01067 PUD 2e04067 PMD 2e05067 PTE 0
    Oops: 0000 [#1] SMP
    CPU: 0 PID: 883 Comm: authenc Not tainted 4.20.0-rc1-00108-g00c9fe37a7f27 #13
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-20181126_142135-anatol 04/01/2014
    RIP: 0010:sha256_ni_transform+0xb3/0x330 arch/x86/crypto/sha256_ni_asm.S:155
    [...]
    Call Trace:
    sha256_ni_finup+0x10/0x20 arch/x86/crypto/sha256_ssse3_glue.c:321
    crypto_shash_finup+0x1a/0x30 crypto/shash.c:178
    shash_digest_unaligned+0x45/0x60 crypto/shash.c:186
    crypto_shash_digest+0x24/0x40 crypto/shash.c:202
    hmac_setkey+0x135/0x1e0 crypto/hmac.c:66
    crypto_shash_setkey+0x2b/0xb0 crypto/shash.c:66
    shash_async_setkey+0x10/0x20 crypto/shash.c:223
    crypto_ahash_setkey+0x2d/0xa0 crypto/ahash.c:202
    crypto_authenc_setkey+0x68/0x100 crypto/authenc.c:96
    crypto_aead_setkey+0x2a/0xc0 crypto/aead.c:62
    aead_setkey+0xc/0x10 crypto/algif_aead.c:526
    alg_setkey crypto/af_alg.c:223 [inline]
    alg_setsockopt+0xfe/0x130 crypto/af_alg.c:256
    __sys_setsockopt+0x6d/0xd0 net/socket.c:1902
    __do_sys_setsockopt net/socket.c:1913 [inline]
    __se_sys_setsockopt net/socket.c:1910 [inline]
    __x64_sys_setsockopt+0x1f/0x30 net/socket.c:1910
    do_syscall_64+0x4a/0x180 arch/x86/entry/common.c:290
    entry_SYSCALL_64_after_hwframe+0x49/0xbe

    Fixes: e236d4a89a2f ("[CRYPTO] authenc: Move enckeylen into key itself")
    Cc: # v2.6.25+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
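
    The arithmetic at fault can be reproduced in userspace: RTA_OK() only
    guarantees rta_len <= keylen, so an unaligned rta_len makes
    RTA_ALIGN(rta_len) exceed keylen and the subtraction wraps around. A
    compile-and-run sketch of the parsing step with the tightened check
    (illustrative, not the kernel function itself):

    #include <stdio.h>
    #include <string.h>
    #include <linux/rtnetlink.h>    /* struct rtattr, RTA_OK, RTA_ALIGN, ... */

    /* Parse "rtattr { u32 enckeylen } | authkey | enckey" as authenc does. */
    static int extract_keys(const unsigned char *key, unsigned int keylen)
    {
            const struct rtattr *rta = (const void *)key;
            unsigned int enckeylen;

            if (!RTA_OK(rta, keylen))
                    return -1;
            /* The fix: require the payload to be exactly the enckeylen field.
             * The old "< sizeof(enckeylen)" check allowed an unaligned rta_len
             * whose RTA_ALIGN() ran past the end of the key, so the
             * subtraction below underflowed to a value near UINT_MAX. */
            if (RTA_PAYLOAD(rta) != sizeof(enckeylen))
                    return -1;

            memcpy(&enckeylen, RTA_DATA(rta), sizeof(enckeylen));
            keylen -= RTA_ALIGN(rta->rta_len);      /* now provably >= 0 */

            /* (real code converts enckeylen from big-endian and splits the
             *  remaining keylen bytes into authkey | enckey here) */
            printf("payload ok, %u bytes of key material follow\n", keylen);
            return 0;
    }

    int main(void)
    {
            /* rta_len = 9: 4-byte header plus a 5-byte payload, so
             * RTA_ALIGN(9) = 12 runs past the end of this 9-byte key. */
            struct {
                    struct rtattr attr;
                    unsigned char payload[5];
            } key = { .attr.rta_len = 9, .attr.rta_type = 1 };

            return extract_keys((const unsigned char *)&key, 9) ? 0 : 1;
    }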
     
  • commit a7773363624b034ab198c738661253d20a8055c2 upstream.

  • The authencesn template in the decrypt path unconditionally calls
    aead_request_complete after ahash_verify, which leads to the following
    kernel panic after decryption.

    [ 338.539800] BUG: unable to handle kernel NULL pointer dereference at 0000000000000004
    [ 338.548372] PGD 0 P4D 0
    [ 338.551157] Oops: 0000 [#1] SMP PTI
    [ 338.554919] CPU: 0 PID: 0 Comm: swapper/0 Kdump: loaded Tainted: G W I 4.19.7+ #13
    [ 338.564431] Hardware name: Supermicro X8ST3/X8ST3, BIOS 2.0 07/29/10
    [ 338.572212] RIP: 0010:esp_input_done2+0x350/0x410 [esp4]
    [ 338.578030] Code: ff 0f b6 68 10 48 8b 83 c8 00 00 00 e9 8e fe ff ff 8b 04 25 04 00 00 00 83 e8 01 48 98 48 8b 3c c5 10 00 00 00 e9 f7 fd ff ff 04 25 04 00 00 00 83 e8 01 48 98 4c 8b 24 c5 10 00 00 00 e9 3b
    [ 338.598547] RSP: 0018:ffff911c97803c00 EFLAGS: 00010246
    [ 338.604268] RAX: 0000000000000002 RBX: ffff911c4469ee00 RCX: 0000000000000000
    [ 338.612090] RDX: 0000000000000000 RSI: 0000000000000130 RDI: ffff911b87c20400
    [ 338.619874] RBP: 0000000000000000 R08: ffff911b87c20498 R09: 000000000000000a
    [ 338.627610] R10: 0000000000000001 R11: 0000000000000004 R12: 0000000000000000
    [ 338.635402] R13: ffff911c89590000 R14: ffff911c91730000 R15: 0000000000000000
    [ 338.643234] FS: 0000000000000000(0000) GS:ffff911c97800000(0000) knlGS:0000000000000000
    [ 338.652047] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    [ 338.658299] CR2: 0000000000000004 CR3: 00000001ec20a000 CR4: 00000000000006f0
    [ 338.666382] Call Trace:
    [ 338.669051]
    [ 338.671254] esp_input_done+0x12/0x20 [esp4]
    [ 338.675922] chcr_handle_resp+0x3b5/0x790 [chcr]
    [ 338.680949] cpl_fw6_pld_handler+0x37/0x60 [chcr]
    [ 338.686080] chcr_uld_rx_handler+0x22/0x50 [chcr]
    [ 338.691233] uldrx_handler+0x8c/0xc0 [cxgb4]
    [ 338.695923] process_responses+0x2f0/0x5d0 [cxgb4]
    [ 338.701177] ? bitmap_find_next_zero_area_off+0x3a/0x90
    [ 338.706882] ? matrix_alloc_area.constprop.7+0x60/0x90
    [ 338.712517] ? apic_update_irq_cfg+0x82/0xf0
    [ 338.717177] napi_rx_handler+0x14/0xe0 [cxgb4]
    [ 338.722015] net_rx_action+0x2aa/0x3e0
    [ 338.726136] __do_softirq+0xcb/0x280
    [ 338.730054] irq_exit+0xde/0xf0
    [ 338.733504] do_IRQ+0x54/0xd0
    [ 338.736745] common_interrupt+0xf/0xf

    Fixes: 104880a6b470 ("crypto: authencesn - Convert to new AEAD...")
    Signed-off-by: Harsh Jain
    Cc: stable@vger.kernel.org
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Harsh Jain
     

01 Dec, 2018

1 commit

  • [ Upstream commit 508a1c4df085a547187eed346f1bfe5e381797f1 ]

    The simd wrapper's skcipher request context structure consists
    of a single subrequest whose size is taken from the subordinate
    skcipher. However, in simd_skcipher_init(), the reqsize that is
    retrieved is not from the subordinate skcipher but from the
    cryptd request structure, whose size is completely unrelated to
    the actual wrapped skcipher.

    Reported-by: Qian Cai
    Signed-off-by: Ard Biesheuvel
    Tested-by: Qian Cai
    Signed-off-by: Herbert Xu
    Signed-off-by: Sasha Levin

    Ard Biesheuvel
     

21 Nov, 2018

1 commit

  • commit f43f39958beb206b53292801e216d9b8a660f087 upstream.

    All bytes of the NETLINK_CRYPTO report structures must be initialized,
    since they are copied to userspace. The change from strncpy() to
    strlcpy() broke this. As a minimal fix, change it back.

    Fixes: 4473710df1f8 ("crypto: user - Prepare for CRYPTO_MAX_ALG_NAME expansion")
    Cc: # v4.12+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
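
    The distinction matters because the whole destination buffer is copied
    to userspace: strncpy() zero-fills the rest of the buffer after the
    string, while strlcpy() stops at the terminating NUL and leaves the tail
    untouched. A userspace sketch of the difference (strlcpy is reimplemented
    here with the kernel/BSD semantics, since glibc does not provide it):

    #include <stdio.h>
    #include <string.h>

    /* Minimal strlcpy: copy, NUL-terminate, do NOT zero-fill the rest. */
    static size_t my_strlcpy(char *dst, const char *src, size_t size)
    {
            size_t len = strlen(src);

            if (size) {
                    size_t n = len < size - 1 ? len : size - 1;

                    memcpy(dst, src, n);
                    dst[n] = '\0';
            }
            return len;
    }

    static void dump(const char *what, const unsigned char *buf, size_t n)
    {
            printf("%-8s:", what);
            for (size_t i = 0; i < n; i++)
                    printf(" %02x", buf[i]);
            printf("\n");
    }

    int main(void)
    {
            char a[16], b[16];

            memset(a, 0xAA, sizeof(a));     /* stand-in for stale kernel data */
            memset(b, 0xAA, sizeof(b));

            strncpy(a, "cbc(aes)", sizeof(a));      /* tail becomes zeros     */
            my_strlcpy(b, "cbc(aes)", sizeof(b));   /* tail keeps 0xaa bytes  */

            dump("strncpy", (unsigned char *)a, sizeof(a));
            dump("strlcpy", (unsigned char *)b, sizeof(b));
            return 0;
    }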
     

14 Nov, 2018

2 commits

  • commit 331351f89c36bf7d03561a28b6f64fa10a9f6f3a upstream.

    ghash is a keyed hash algorithm, thus setkey needs to be called.
    Otherwise the following error occurs:
    $ modprobe tcrypt mode=318 sec=1
    testing speed of async ghash-generic (ghash-generic)
    tcrypt: test 0 ( 16 byte blocks, 16 bytes per update, 1 updates):
    tcrypt: hashing failed ret=-126

    Cc: # 4.6+
    Fixes: 0660511c0bee ("crypto: tcrypt - Use ahash")
    Tested-by: Franck Lenormand
    Signed-off-by: Horia Geantă
    Acked-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Horia Geantă
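
    The same requirement is visible from userspace through AF_ALG: a keyed
    hash such as ghash will not produce a digest until ALG_SET_KEY has been
    issued, which is exactly what the unkeyed tcrypt speed test ran into. A
    sketch (error handling trimmed; without the setsockopt() line the final
    read typically fails with ENOKEY):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/if_alg.h>

    int main(void)
    {
            struct sockaddr_alg addr = {
                    .salg_family = AF_ALG,
                    .salg_type = "hash",
                    .salg_name = "ghash",
            };
            char key[16] = { 0 }, msg[16] = { 0 }, digest[16];
            int tfm, req;

            tfm = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(tfm, (struct sockaddr *)&addr, sizeof(addr));

            /* ghash is keyed: the key must be set before hashing. */
            setsockopt(tfm, SOL_ALG, ALG_SET_KEY, key, sizeof(key));

            req = accept(tfm, NULL, NULL);
            write(req, msg, sizeof(msg));
            if (read(req, digest, sizeof(digest)) != sizeof(digest))
                    perror("read");
            return 0;
    }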
     
  • commit fbe1a850b3b1522e9fc22319ccbbcd2ab05328d2 upstream.

    When the LRW block counter overflows, the current implementation returns
    128 as the index to the precomputed multiplication table, which has 128
    entries. This patch fixes it to return the correct value (127).

    Fixes: 64470f1b8510 ("[CRYPTO] lrw: Liskov Rivest Wagner, a tweakable narrow block cipher mode")
    Cc: # 2.6.20+
    Reported-by: Eric Biggers
    Signed-off-by: Ondrej Mosnacek
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Ondrej Mosnacek
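
    The indexing logic in isolation: the precomputed table is indexed by the
    position of the lowest bit that flips from 0 to 1 when the 128-bit block
    counter is incremented, so the all-ones counter (about to wrap to zero)
    must map to index 127, the last valid entry. Illustrative code, not the
    kernel's lrw.c:

    #include <stdio.h>
    #include <stdint.h>

    #define LRW_TABLE_ENTRIES 128

    /* Increment a 128-bit counter held as four 32-bit words (least
     * significant word first) and return the index of the lowest bit
     * that flipped from 0 to 1. */
    static int next_index(uint32_t counter[4])
    {
            int i, res = 0;

            for (i = 0; i < 4; i++) {
                    if (counter[i] + 1 != 0) {
                            int bit = __builtin_ctz(~counter[i]);   /* lowest 0 bit */

                            counter[i]++;
                            return res + bit;
                    }
                    counter[i] = 0;         /* word was all ones: carry out */
                    res += 32;
            }

            /* Counter wrapped from all ones to all zeros.  The buggy version
             * effectively reported 128 here, one past the end of the table;
             * the correct answer is the last valid index, 127. */
            return LRW_TABLE_ENTRIES - 1;
    }

    int main(void)
    {
            uint32_t c[4] = { 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff };

            printf("index at wrap-around: %d\n", next_index(c));    /* 127 */
            return 0;
    }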
     

04 Oct, 2018

1 commit

  • [ Upstream commit cefd769fd0192c84d638f66da202459ed8ad63ba ]

    As of GCC 9.0.0 the build is reporting warnings like:

    crypto/ablkcipher.c: In function ‘crypto_ablkcipher_report’:
    crypto/ablkcipher.c:374:2: warning: ‘strncpy’ specified bound 64 equals destination size [-Wstringop-truncation]
    strncpy(rblkcipher.geniv, alg->cra_ablkcipher.geniv ?: "",
    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    sizeof(rblkcipher.geniv));
    ~~~~~~~~~~~~~~~~~~~~~~~~~

    This means strncpy might create a non-null-terminated string. Fix this
    by explicitly performing '\0' termination.

    Cc: Greg Kroah-Hartman
    Cc: Arnd Bergmann
    Cc: Max Filippov
    Cc: Eric Biggers
    Cc: Nick Desaulniers
    Signed-off-by: Stafford Horne
    Signed-off-by: Herbert Xu
    Signed-off-by: Sasha Levin
    Signed-off-by: Greg Kroah-Hartman

    Stafford Horne
     

26 Sep, 2018

1 commit

  • [ Upstream commit e2861fa71641c6414831d628a1f4f793b6562580 ]

    When EVM attempts to appraise a file signed with a crypto algorithm the
    kernel doesn't have support for, it will cause the kernel to trigger a
    module load. If the EVM policy includes appraisal of kernel modules this
    will in turn call back into EVM - since EVM is holding a lock until the
    crypto initialisation is complete, this triggers a deadlock. Add a
    CRYPTO_NOLOAD flag and skip module loading if it's set, and add that flag
    in the EVM case in order to fail gracefully with an error message
    instead of deadlocking.

    Signed-off-by: Matthew Garrett
    Acked-by: Herbert Xu
    Signed-off-by: Mimi Zohar
    Signed-off-by: Sasha Levin
    Signed-off-by: Greg Kroah-Hartman

    Matthew Garrett
     

20 Sep, 2018

1 commit

  • commit 6e36719fbe90213fbba9f50093fa2d4d69b0e93c upstream.

    My last bugfix added -Os on the command line, which unfortunately caused
    a build regression on powerpc in some configurations.

    I've done some more analysis of the original problem and found a
    slightly different workaround that avoids this regression and also
    results in better performance on gcc-7.0: -fcode-hoisting is an
    optimization step that was added in gcc-7 and that, for all gcc-7
    versions, causes worse performance.

    This disables -fcode-hoisting on all compilers that understand the option.
    For gcc-7.1 and 7.2 I found the same performance as my previous patch
    (using -Os), in gcc-7.0 it was even better. On gcc-8 I could see no
    change in performance from this patch. In theory, code hoisting should
    not be able to make things better for the AES cipher, so leaving it
    disabled for gcc-8 only serves to simplify the Makefile change.

    Reported-by: kbuild test robot
    Link: https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg30418.html
    Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83356
    Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83651
    Fixes: 148b974deea9 ("crypto: aes-generic - build with -Os on gcc-7+")
    Signed-off-by: Arnd Bergmann
    Signed-off-by: Herbert Xu
    Cc: Horia Geanta
    Signed-off-by: Greg Kroah-Hartman

    Arnd Bergmann
     

10 Sep, 2018

1 commit

  • commit 817aef260037f33ee0f44c17fe341323d3aebd6d upstream.

    Replace the use of a magic number that indicates that verify_*_signature()
    should use the secondary keyring with a symbol.

    Signed-off-by: Yannik Sembritzki
    Signed-off-by: David Howells
    Cc: keyrings@vger.kernel.org
    Cc: linux-security-module@vger.kernel.org
    Signed-off-by: Linus Torvalds
    Signed-off-by: Greg Kroah-Hartman

    Yannik Sembritzki
     

18 Aug, 2018

6 commits

  • commit 8088d3dd4d7c6933a65aa169393b5d88d8065672 upstream.

    scatterwalk_done() is only meant to be called after a nonzero number of
    bytes have been processed, since scatterwalk_pagedone() will flush the
    dcache of the *previous* page. But in the error case of
    skcipher_walk_done(), e.g. if the input wasn't an integer number of
    blocks, scatterwalk_done() was actually called after advancing 0 bytes.
    This caused a crash ("BUG: unable to handle kernel paging request")
    during '!PageSlab(page)' on architectures like arm and arm64 that define
    ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
    page-aligned as in that case walk->offset == 0.

    Fix it by reorganizing skcipher_walk_done() to skip the
    scatterwalk_advance() and scatterwalk_done() if an error has occurred.

    This bug was found by syzkaller fuzzing.

    Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:

    #include <sys/socket.h>
    #include <linux/if_alg.h>
    #include <unistd.h>

    int main()
    {
            struct sockaddr_alg addr = {
                    .salg_type = "skcipher",
                    .salg_name = "cbc(aes-generic)",
            };
            char buffer[4096] __attribute__((aligned(4096))) = { 0 };
            int fd;

            fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(fd, (void *)&addr, sizeof(addr));
            setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
            fd = accept(fd, NULL, NULL);
            write(fd, buffer, 15);
            read(fd, buffer, 15);
    }

    Reported-by: Liu Chao
    Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
    Cc: # v4.10+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     
  • commit 0567fc9e90b9b1c8dbce8a5468758e6206744d4a upstream.

    The ALIGN() macro needs to be passed the alignment, not the alignmask
    (which is the alignment minus 1).

    Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
    Cc: # v4.10+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
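
    The difference in one example: an alignmask of 15 describes 16-byte
    alignment, so ALIGN() must be given mask + 1. A small sketch using the
    usual power-of-two rounding macro (not the kernel header itself):

    #include <stdio.h>

    /* Round x up to a multiple of a, which must be a power of two. */
    #define ALIGN(x, a)     (((x) + (a) - 1) & ~((a) - 1))

    int main(void)
    {
            unsigned int addr = 101;
            unsigned int alignmask = 15;    /* i.e. 16-byte alignment */

            /* Wrong: passing the mask yields 113, which is not 16-byte
             * aligned (not even 2-byte aligned). */
            printf("ALIGN(addr, alignmask)     = %u\n", ALIGN(addr, alignmask));

            /* Right: pass the alignment itself, giving 112. */
            printf("ALIGN(addr, alignmask + 1) = %u\n", ALIGN(addr, alignmask + 1));
            return 0;
    }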
     
  • commit 318abdfbe708aaaa652c79fb500e9bd60521f9dc upstream.

    Like the skcipher_walk and blkcipher_walk cases:

    scatterwalk_done() is only meant to be called after a nonzero number of
    bytes have been processed, since scatterwalk_pagedone() will flush the
    dcache of the *previous* page. But in the error case of
    ablkcipher_walk_done(), e.g. if the input wasn't an integer number of
    blocks, scatterwalk_done() was actually called after advancing 0 bytes.
    This caused a crash ("BUG: unable to handle kernel paging request")
    during '!PageSlab(page)' on architectures like arm and arm64 that define
    ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
    page-aligned as in that case walk->offset == 0.

    Fix it by reorganizing ablkcipher_walk_done() to skip the
    scatterwalk_advance() and scatterwalk_done() if an error has occurred.

    Reported-by: Liu Chao
    Fixes: bf06099db18a ("crypto: skcipher - Add ablkcipher_walk interfaces")
    Cc: # v2.6.35+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     
  • commit 0868def3e4100591e7a1fdbf3eed1439cc8f7ca3 upstream.

    Like the skcipher_walk case:

    scatterwalk_done() is only meant to be called after a nonzero number of
    bytes have been processed, since scatterwalk_pagedone() will flush the
    dcache of the *previous* page. But in the error case of
    blkcipher_walk_done(), e.g. if the input wasn't an integer number of
    blocks, scatterwalk_done() was actually called after advancing 0 bytes.
    This caused a crash ("BUG: unable to handle kernel paging request")
    during '!PageSlab(page)' on architectures like arm and arm64 that define
    ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
    page-aligned as in that case walk->offset == 0.

    Fix it by reorganizing blkcipher_walk_done() to skip the
    scatterwalk_advance() and scatterwalk_done() if an error has occurred.

    This bug was found by syzkaller fuzzing.

    Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:

    #include <sys/socket.h>
    #include <linux/if_alg.h>
    #include <unistd.h>

    int main()
    {
            struct sockaddr_alg addr = {
                    .salg_type = "skcipher",
                    .salg_name = "ecb(aes-generic)",
            };
            char buffer[4096] __attribute__((aligned(4096))) = { 0 };
            int fd;

            fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(fd, (void *)&addr, sizeof(addr));
            setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
            fd = accept(fd, NULL, NULL);
            write(fd, buffer, 15);
            read(fd, buffer, 15);
    }

    Reported-by: Liu Chao
    Fixes: 5cde0af2a982 ("[CRYPTO] cipher: Added block cipher type")
    Cc: # v2.6.19+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     
  • commit bb29648102335586e9a66289a1d98a0cb392b6e5 upstream.

    syzbot reported a crash in vmac_final() when multiple threads
    concurrently use the same "vmac(aes)" transform through AF_ALG. The bug
    is pretty fundamental: the VMAC template doesn't separate per-request
    state from per-tfm (per-key) state like the other hash algorithms do,
    but rather stores it all in the tfm context. That's wrong.

    Also, vmac_final() incorrectly zeroes most of the state including the
    derived keys and cached pseudorandom pad. Therefore, only the first
    VMAC invocation with a given key calculates the correct digest.

    Fix these bugs by splitting the per-tfm state from the per-request state
    and using the proper init/update/final sequencing for requests.

    Reproducer for the crash:

    #include <sys/socket.h>
    #include <linux/if_alg.h>
    #include <unistd.h>

    int main()
    {
            int fd;
            struct sockaddr_alg addr = {
                    .salg_type = "hash",
                    .salg_name = "vmac(aes)",
            };
            char buf[256] = { 0 };

            fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(fd, (void *)&addr, sizeof(addr));
            setsockopt(fd, SOL_ALG, ALG_SET_KEY, buf, 16);
            fork();
            fd = accept(fd, NULL, NULL);
            for (;;)
                    write(fd, buf, 256);
    }

    The immediate cause of the crash is that vmac_ctx_t.partial_size exceeds
    VMAC_NHBYTES, causing vmac_final() to memset() a negative length.

    Reported-by: syzbot+264bca3a6e8d645550d3@syzkaller.appspotmail.com
    Fixes: f1939f7c5645 ("crypto: vmac - New hash algorithm for intel_txt support")
    Cc: # v2.6.32+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     
  • commit 73bf20ef3df262026c3470241ae4ac8196943ffa upstream.

    The VMAC template assumes the block cipher has a 128-bit block size, but
    it failed to check for that. Thus it was possible to instantiate it
    using a 64-bit block size cipher, e.g. "vmac(cast5)", causing
    uninitialized memory to be used.

    Add the needed check when instantiating the template.

    Fixes: f1939f7c5645 ("crypto: vmac - New hash algorithm for intel_txt support")
    Cc: # v2.6.32+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
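
    The missing guard is a one-line block size check before the template is
    instantiated. Sketched standalone below (simplified data types, not the
    kernel's template create path):

    #include <stdio.h>
    #include <errno.h>

    #define VMAC_BLOCK_SIZE 16      /* VMAC's math assumes a 128-bit cipher */

    struct cipher_desc {
            const char *name;
            unsigned int blocksize;
    };

    /* The added check: refuse to build vmac(<cipher>) around anything that
     * is not a 128-bit block cipher, e.g. cast5 with its 64-bit blocks. */
    static int vmac_check_cipher(const struct cipher_desc *alg)
    {
            return alg->blocksize == VMAC_BLOCK_SIZE ? 0 : -EINVAL;
    }

    int main(void)
    {
            struct cipher_desc aes   = { "aes",   16 };
            struct cipher_desc cast5 = { "cast5",  8 };

            printf("vmac(%s): %s\n", aes.name,
                   vmac_check_cipher(&aes) ? "rejected" : "ok");
            printf("vmac(%s): %s\n", cast5.name,
                   vmac_check_cipher(&cast5) ? "rejected" : "ok");
            return 0;
    }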
     

03 Aug, 2018

2 commits

  • [ Upstream commit ad2fdcdf75d169e7a5aec6c7cb421c0bec8ec711 ]

    In crypto_authenc_setkey we save pointers to the authenc keys in
    a local variable of type struct crypto_authenc_keys and we don't
    zeroize it after use. Fix this and don't leak pointers to the
    authenc keys.

    Signed-off-by: Tudor Ambarus
    Signed-off-by: Herbert Xu
    Signed-off-by: Sasha Levin
    Signed-off-by: Greg Kroah-Hartman

    Tudor-Dan Ambarus
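
    The fix pattern, sketched in userspace (the kernel uses
    memzero_explicit(); glibc's explicit_bzero() plays the same role here,
    and the key split below is only illustrative):

    #include <string.h>

    /* Mirrors the idea of struct crypto_authenc_keys: pointers and lengths
     * into the caller's key material, held in a stack variable. */
    struct authenc_keys {
            const unsigned char *authkey;
            const unsigned char *enckey;
            unsigned int authkeylen;
            unsigned int enckeylen;
    };

    static int setkey(const unsigned char *key, unsigned int keylen)
    {
            struct authenc_keys keys;

            /* Illustrative split; the real code parses the rtattr format. */
            keys.authkeylen = keylen / 2;
            keys.authkey = key;
            keys.enckeylen = keylen - keys.authkeylen;
            keys.enckey = key + keys.authkeylen;

            /* ... program the inner hash and cipher with the two keys ... */

            /* The fix: wipe the local copy so pointers to (and sizes of) the
             * key material do not linger on the stack after setkey returns.
             * explicit_bzero() is not optimized away like a plain memset(). */
            explicit_bzero(&keys, sizeof(keys));
            return 0;
    }

    int main(void)
    {
            unsigned char key[32] = { 0 };

            return setkey(key, sizeof(key));
    }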
     
  • [ Upstream commit 31545df391d58a3bb60e29b1192644a6f2b5a8dd ]

    In crypto_authenc_esn_setkey we save pointers to the authenc keys
    in a local variable of type struct crypto_authenc_keys and we don't
    zeroize it after use. Fix this and don't leak pointers to the
    authenc keys.

    Signed-off-by: Tudor Ambarus
    Signed-off-by: Herbert Xu
    Signed-off-by: Sasha Levin
    Signed-off-by: Greg Kroah-Hartman

    Tudor-Dan Ambarus
     

22 Jul, 2018

1 commit

  • commit 2546da99212f22034aecf279da9c47cbfac6c981 upstream.

    The RX SGL in processing is already registered with the RX SGL tracking
    list to support proper cleanup. The cleanup code path uses the
    sg_num_bytes variable, which must therefore always be initialized, even
    in the error code path.

    Signed-off-by: Stephan Mueller
    Reported-by: syzbot+9c251bdd09f83b92ba95@syzkaller.appspotmail.com
    #syz test: https://github.com/google/kmsan.git master
    CC: #4.14
    Fixes: e870456d8e7c ("crypto: algif_skcipher - overhaul memory management")
    Fixes: d887c52d6ae4 ("crypto: algif_aead - overhaul memory management")
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Stephan Mueller
     

17 Jul, 2018

1 commit

  • commit b7b73cd5d74694ed59abcdb4974dacb4ff8b2a2a upstream.

    The x86 assembly implementations of Salsa20 use the frame base pointer
    register (%ebp or %rbp), which breaks frame pointer convention and
    breaks stack traces when unwinding from an interrupt in the crypto code.
    Recent (v4.10+) kernels will warn about this, e.g.

    WARNING: kernel stack regs at 00000000a8291e69 in syzkaller047086:4677 has bad 'bp' value 000000001077994c
    [...]

    But after looking into it, I believe there's very little reason to still
    retain the x86 Salsa20 code. First, these are *not* vectorized
    (SSE2/SSSE3/AVX2) implementations, which would be needed to get anywhere
    close to the best Salsa20 performance on any remotely modern x86
    processor; they're just regular x86 assembly. Second, it's still
    unclear that anyone is actually using the kernel's Salsa20 at all,
    especially given that now ChaCha20 is supported too, and with much more
    efficient SSSE3 and AVX2 implementations. Finally, in benchmarks I did
    on both Intel and AMD processors with both gcc 8.1.0 and gcc 4.9.4, the
    x86_64 salsa20-asm is actually slightly *slower* than salsa20-generic
    (~3% slower on Skylake, ~10% slower on Zen), while the i686 salsa20-asm
    is only slightly faster than salsa20-generic (~15% faster on Skylake,
    ~20% faster on Zen). The gcc version made little difference.

    So, the x86_64 salsa20-asm is pretty clearly useless. That leaves just
    the i686 salsa20-asm, which based on my tests provides a 15-20% speed
    boost. But that's without updating the code to not use %ebp. And given
    the maintenance cost, the small speed difference vs. salsa20-generic,
    the fact that few people still use i686 kernels, the doubt that anyone
    is even using the kernel's Salsa20 at all, and the fact that a SSE2
    implementation would almost certainly be much faster on any remotely
    modern x86 processor yet no one has cared enough to add one yet, I don't
    think it's worthwhile to keep.

    Thus, just remove both the x86_64 and i686 salsa20-asm implementations.

    Reported-by: syzbot+ffa3a158337bbc01ff09@syzkaller.appspotmail.com
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     

03 Jul, 2018

1 commit

  • commit b65c32ec5a942ab3ada93a048089a938918aba7f upstream.

  • The signatureValue field of an X.509 certificate is encoded as a BIT STRING.
    For RSA signatures this BIT STRING is of so-called primitive subtype, which
    contains a u8 prefix indicating a count of unused bits in the encoding.

    We have to strip this prefix from signature data, just as we already do for
    key data in x509_extract_key_data() function.

    This wasn't noticed earlier because this prefix byte is zero for RSA key
    sizes divisible by 8. Since BIT STRING is a big-endian encoding, adding
    zero prefixes has no bearing on its value.

    The signature length, however, was incorrect, which is a problem for RSA
    implementations that need it to be exactly correct (like AMD CCP).

    Signed-off-by: Maciej S. Szmigiero
    Fixes: c26fd69fa009 ("X.509: Add a crypto key parser for binary (DER) X.509 certificates")
    Cc: stable@vger.kernel.org
    Signed-off-by: James Morris
    Signed-off-by: Greg Kroah-Hartman

    Maciej S. Szmigiero
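
    For reference, a DER BIT STRING's content begins with a single octet
    giving the number of unused bits in the final byte; the actual bit data
    follows. For byte-aligned RSA signatures that octet is 0, and both it and
    the one byte of length it accounts for must be stripped. A sketch of the
    stripping step (not the kernel's x509 parser):

    #include <stdio.h>
    #include <stddef.h>

    /* Given the content octets of a DER BIT STRING, return a pointer to the
     * raw bit data and its length, skipping the leading "unused bits" octet. */
    static const unsigned char *bit_string_data(const unsigned char *content,
                                                size_t content_len,
                                                size_t *data_len)
    {
            if (content_len < 1 || content[0] != 0)
                    return NULL;            /* unused bits not handled here */

            *data_len = content_len - 1;
            return content + 1;
    }

    int main(void)
    {
            /* 0x00 prefix followed by a 4-byte "signature" (illustrative). */
            unsigned char sig_bits[] = { 0x00, 0xde, 0xad, 0xbe, 0xef };
            size_t siglen;
            const unsigned char *sig = bit_string_data(sig_bits,
                                                       sizeof(sig_bits), &siglen);

            if (sig)
                    printf("signature length %zu (was %zu with the prefix)\n",
                           siglen, sizeof(sig_bits));
            return 0;
    }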
     

30 May, 2018

1 commit

  • [ Upstream commit 6459ae386699a5fe0dc52cf30255f75274fa43a4 ]

    If none of the certificates in a SignerInfo's certificate chain match a
    trusted key, nor is the last certificate signed by a trusted key, then
    pkcs7_validate_trust_one() tries to check whether the SignerInfo's
    signature was made directly by a trusted key. But, it actually fails to
    set the 'sig' variable correctly, so it actually verifies the last
    signature seen. That will only be the SignerInfo's signature if the
    certificate chain is empty; otherwise it will actually be the last
    certificate's signature.

    This is not by itself a security problem, since verifying any of the
    certificates in the chain should be sufficient to verify the SignerInfo.
    Still, it's not working as intended so it should be fixed.

    Fix it by setting 'sig' correctly for the direct verification case.

    Fixes: 757932e6da6d ("PKCS#7: Handle PKCS#7 messages that contain no X.509 certs")
    Signed-off-by: Eric Biggers
    Signed-off-by: David Howells
    Signed-off-by: Sasha Levin
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     

16 May, 2018

1 commit

  • commit a466856e0b7ab269cdf9461886d007e88ff575b0 upstream.

    syzbot reported :

    BUG: KMSAN: uninit-value in alg_bind+0xe3/0xd90 crypto/af_alg.c:162

    We need to check addr_len before dereferencing sa (or uaddr).

    Fixes: bb30b8848c85 ("crypto: af_alg - whitelist mask and type")
    Signed-off-by: Eric Dumazet
    Reported-by: syzbot
    Cc: Stephan Mueller
    Cc: Herbert Xu
    Signed-off-by: David S. Miller
    Signed-off-by: Greg Kroah-Hartman

    Eric Dumazet
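
    The essence of the fix is a length check before the sockaddr is touched:
    do not read sockaddr_alg fields unless the caller passed at least that
    many bytes. A sketch of the shape of the check (not the kernel's
    alg_bind itself):

    #include <stdio.h>
    #include <errno.h>
    #include <sys/socket.h>
    #include <linux/if_alg.h>

    static int alg_bind_checked(const struct sockaddr *uaddr, int addr_len)
    {
            const struct sockaddr_alg *sa = (const struct sockaddr_alg *)uaddr;

            /* The fix: validate addr_len before dereferencing sa, otherwise a
             * short sockaddr from userspace leads to an out-of-bounds read. */
            if (addr_len < (int)sizeof(*sa))
                    return -EINVAL;

            printf("type=%.14s name=%.64s\n",
                   (const char *)sa->salg_type, (const char *)sa->salg_name);
            return 0;
    }

    int main(void)
    {
            struct sockaddr_alg addr = {
                    .salg_family = AF_ALG,
                    .salg_type = "hash",
                    .salg_name = "sha256",
            };

            alg_bind_checked((struct sockaddr *)&addr, sizeof(addr)); /* ok       */
            alg_bind_checked((struct sockaddr *)&addr, 4);            /* rejected */
            return 0;
    }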
     

02 May, 2018

1 commit

  • commit eea0d3ea7546961f69f55b26714ac8fd71c7c020 upstream.

    During freeing of the internal buffers used by the DRBG, set the pointer
    to NULL. It is possible that the context with the freed buffers is
    reused. In case of an error during initialization where the pointers
    do not yet point to allocated memory, the NULL value prevents a double
    free.

    Cc: stable@vger.kernel.org
    Fixes: 3cfc3b9721123 ("crypto: drbg - use aligned buffers")
    Signed-off-by: Stephan Mueller
    Reported-by: syzbot+75397ee3df5c70164154@syzkaller.appspotmail.com
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Stephan Mueller
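
    The pattern in isolation: a destructor that both frees and NULLs each
    buffer pointer can safely run again on the same context, which is the
    error-path situation described above. Userspace sketch (free() standing
    in for kzfree(); the field names are illustrative):

    #include <stdlib.h>

    struct drbg_like_ctx {
            unsigned char *Vbuf;
            unsigned char *Cbuf;
    };

    static void ctx_dealloc(struct drbg_like_ctx *ctx)
    {
            /* Free and then NULL each pointer: if the context is reused or
             * this cleanup runs twice on an init error path, free(NULL) is a
             * harmless no-op instead of a double free. */
            free(ctx->Vbuf);
            ctx->Vbuf = NULL;
            free(ctx->Cbuf);
            ctx->Cbuf = NULL;
    }

    static int ctx_alloc(struct drbg_like_ctx *ctx)
    {
            ctx->Vbuf = calloc(1, 64);
            ctx->Cbuf = calloc(1, 64);
            if (!ctx->Vbuf || !ctx->Cbuf) {
                    ctx_dealloc(ctx);       /* partial allocation: still safe */
                    return -1;
            }
            return 0;
    }

    int main(void)
    {
            struct drbg_like_ctx ctx = { 0 };

            if (ctx_alloc(&ctx) == 0)
                    ctx_dealloc(&ctx);
            ctx_dealloc(&ctx);              /* second call is now harmless */
            return 0;
    }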
     

12 Apr, 2018

1 commit

  • [ Upstream commit 148b974deea927f5dbb6c468af2707b488bfa2de ]

    While testing other changes, I discovered that gcc-7.2.1 produces badly
    optimized code for aes_encrypt/aes_decrypt. This is especially true when
    CONFIG_UBSAN_SANITIZE_ALL is enabled, where it leads to extremely
    large stack usage that in turn might cause kernel stack overflows:

    crypto/aes_generic.c: In function 'aes_encrypt':
    crypto/aes_generic.c:1371:1: warning: the frame size of 4880 bytes is larger than 2048 bytes [-Wframe-larger-than=]
    crypto/aes_generic.c: In function 'aes_decrypt':
    crypto/aes_generic.c:1441:1: warning: the frame size of 4864 bytes is larger than 2048 bytes [-Wframe-larger-than=]

    I verified that this problem exists on all architectures that are
    supported by gcc-7.2, though arm64 in particular is less affected than
    the others. I also found that gcc-7.1 and gcc-8 do not show the extreme
    stack usage but still produce worse code than earlier versions for this
    file, apparently because of optimization passes that generally provide
    a substantial improvement in object code quality but understandably fail
    to find any shortcuts in the AES algorithm.

    Possible workarounds include

    a) disabling -ftree-pre and -ftree-sra optimizations; this was an earlier
    patch I tried, which reliably fixed the stack usage, but caused a
    serious performance regression in some versions, as later testing
    found.

    b) disabling UBSAN on this file or all ciphers, as suggested by Ard
    Biesheuvel. This would lead to massively better crypto performance in
    UBSAN-enabled kernels and avoid the stack usage, but there is a concern
    over whether we should exclude arbitrary files from UBSAN at all.

    c) Forcing the optimization level in a different way. Similar to a),
    but rather than deselecting specific optimization stages,
    this now uses "gcc -Os" for this file, regardless of the
    CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE/SIZE option. This is a reliable
    workaround for the stack consumption on all architectures, and I've
    retested the performance results now on x86, cycles/byte (lower is
    better) for cbc(aes-generic) with 256 bit keys:

                -O2     -Os
    gcc-6.3.1   14.9    15.1
    gcc-7.0.1   14.7    15.3
    gcc-7.1.1   15.3    14.7
    gcc-7.2.1   16.8    15.9
    gcc-8.0.0   15.5    15.6

    This implements option c) by forcing -Os on all compiler versions
    starting with gcc-7.1. As a workaround for PR83356, it would
    only be needed for gcc-7.2+ with UBSAN enabled, but since it also shows
    better performance on gcc-7.1 without UBSAN, it seems appropriate to
    use the faster version here as well.

    Side note: during testing, I also played with the AES code in libressl,
    which had a similar performance regression from gcc-6 to gcc-7.2,
    but was three times slower overall. It might be interesting to
    investigate that further and possibly port the Linux implementation
    into that.

    Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83356
    Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83651
    Cc: Richard Biener
    Cc: Jakub Jelinek
    Cc: Ard Biesheuvel
    Signed-off-by: Arnd Bergmann
    Acked-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu
    Signed-off-by: Sasha Levin
    Signed-off-by: Greg Kroah-Hartman

    Arnd Bergmann
     

08 Apr, 2018

3 commits

  • commit 900a081f6912a8985dc15380ec912752cb66025a upstream.

    When we have an unaligned SG list entry where there is no leftover
    aligned data, the hash walk code will incorrectly return zero as if
    the entire SG list has been processed.

    This patch fixes it by moving onto the next page instead.

    Reported-by: Eli Cooper
    Cc:
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Herbert Xu
     
  • commit 333e18c5cc74438f8940c7f3a8b3573748a371f9 upstream.

    An RSA private key in the first form should have zero values for
    version, prime1, prime2, exponent1, exponent2 and coefficient.
    With non-zero values for prime1, prime2, exponent1, exponent2 and
    coefficient, the Intel QAT driver will assume that the key is provided
    in the second form. This will result in signature verification
    failures for modules where a QAT device is present and the modules
    are signed with rsa,sha256.

    Cc:
    Signed-off-by: Giovanni Cabiddu
    Signed-off-by: Conor McLoughlin
    Reviewed-by: Stephan Mueller
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Conor McLoughlin
     
  • commit 8c9bdab21289c211ca1ca6a5f9b7537b4a600a02 upstream.

    The buffer rctx->ext contains potentially sensitive data and should
    be freed with kzfree.

    Cc:
    Fixes: 700cb3f5fe75 ("crypto: lrw - Convert to skcipher")
    Reported-by: Dan Carpenter
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Herbert Xu