18 Aug, 2018

1 commit

  • commit bb29648102335586e9a66289a1d98a0cb392b6e5 upstream.

    syzbot reported a crash in vmac_final() when multiple threads
    concurrently use the same "vmac(aes)" transform through AF_ALG. The bug
    is pretty fundamental: the VMAC template doesn't separate per-request
    state from per-tfm (per-key) state like the other hash algorithms do,
    but rather stores it all in the tfm context. That's wrong.

    Also, vmac_final() incorrectly zeroes most of the state including the
    derived keys and cached pseudorandom pad. Therefore, only the first
    VMAC invocation with a given key calculates the correct digest.

    Fix these bugs by splitting the per-tfm state from the per-request state
    and using the proper init/update/final sequencing for requests.

    Reproducer for the crash:

    #include <linux/if_alg.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main()
    {
            int fd;
            struct sockaddr_alg addr = {
                    .salg_type = "hash",
                    .salg_name = "vmac(aes)",
            };
            char buf[256] = { 0 };

            fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(fd, (void *)&addr, sizeof(addr));
            setsockopt(fd, SOL_ALG, ALG_SET_KEY, buf, 16);
            fork();
            fd = accept(fd, NULL, NULL);
            for (;;)
                    write(fd, buf, 256);
    }

    The immediate cause of the crash is that vmac_ctx_t.partial_size exceeds
    VMAC_NHBYTES, causing vmac_final() to memset() a negative length.
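
    memset() takes a size_t, so that "negative" length wraps around to a
    huge unsigned value. A minimal userspace sketch of the arithmetic
    (VMAC_NHBYTES is the real constant; the overshoot value is made up):

    #include <stdio.h>
    #include <stddef.h>

    #define VMAC_NHBYTES 128

    int main(void)
    {
            /* concurrent updates pushed partial_size past the limit */
            size_t partial_size = 160;
            size_t zero_len = VMAC_NHBYTES - partial_size;

            /* wraps to 18446744073709551584, i.e. 2^64 - 32 */
            printf("memset length would be %zu\n", zero_len);
            /* memset(buf, 0, zero_len) with this length faults */
            return 0;
    }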

    Reported-by: syzbot+264bca3a6e8d645550d3@syzkaller.appspotmail.com
    Fixes: f1939f7c5645 ("crypto: vmac - New hash algorithm for intel_txt support")
    Cc: <stable@vger.kernel.org> # v2.6.32+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     

03 Mar, 2018

1 commit

  • [ Upstream commit af955bf15d2c27496b0269b1f05c26f758c68314 ]

    This variable was increased and decreased without any protection. The
    result was an occasional miscount and negative wraparound, resulting
    in false resource allocation failures.
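
    The fix turns the racy counter (ctx->rcvused) into an atomic_t. A
    minimal userspace demonstration, not kernel code, of why unprotected
    increments and decrements miscount:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static int plain;              /* racy, like the pre-fix counter */
    static _Atomic int safe;       /* like the post-fix atomic_t     */

    static void *worker(void *arg)
    {
            (void)arg;
            for (int i = 0; i < 1000000; i++) {
                    plain++;                    /* lost updates possible */
                    plain--;                    /* can even go negative  */
                    atomic_fetch_add(&safe, 1);
                    atomic_fetch_sub(&safe, 1);
            }
            return NULL;
    }

    int main(void)
    {
            pthread_t t[4];

            for (int i = 0; i < 4; i++)
                    pthread_create(&t[i], NULL, worker, NULL);
            for (int i = 0; i < 4; i++)
                    pthread_join(t[i], NULL);

            /* "plain" is frequently nonzero; "safe" is always 0 */
            printf("plain=%d safe=%d\n", plain, safe);
            return 0;
    }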

    Fixes: 7d2c3f54e6f6 ("crypto: af_alg - remove locking in async callback")
    Signed-off-by: Jonathan Cameron
    Reviewed-by: Stephan Mueller
    Signed-off-by: Herbert Xu
    Signed-off-by: Sasha Levin
    Signed-off-by: Greg Kroah-Hartman

    Jonathan Cameron
     

17 Feb, 2018

3 commits

  • commit 9fa68f620041be04720d0cbfb1bd3ddfc6310b24 upstream.

    Currently, almost none of the keyed hash algorithms check whether a key
    has been set before proceeding. Some algorithms are okay with this and
    will effectively just use a key of all 0's or some other bogus default.
    However, others will severely break, as demonstrated using
    "hmac(sha3-512-generic)", the unkeyed use of which causes a kernel crash
    via a (potentially exploitable) stack buffer overflow.

    A while ago, this problem was solved for AF_ALG by pairing each hash
    transform with a 'has_key' bool. However, there are still other places
    in the kernel where userspace can specify an arbitrary hash algorithm by
    name, and the kernel uses it as unkeyed hash without checking whether it
    is really unkeyed. Examples of this include:

    - KEYCTL_DH_COMPUTE, via the KDF extension
    - dm-verity
    - dm-crypt, via the ESSIV support
    - dm-integrity, via the "internal hash" mode with no key given
    - drbd (Distributed Replicated Block Device)

    This bug is especially bad for KEYCTL_DH_COMPUTE as that requires no
    privileges to call.

    Fix the bug for all users by adding a flag CRYPTO_TFM_NEED_KEY to the
    ->crt_flags of each hash transform that indicates whether the transform
    still needs to be keyed or not. Then, make the hash init, import, and
    digest functions return -ENOKEY if the key is still needed.

    The new flag also replaces the 'has_key' bool which algif_hash was
    previously using, thereby simplifying the algif_hash implementation.
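
    With the flag in place, algif_hash refuses accept(2) on a transform
    that still needs a key. A hedged userspace sketch of the resulting
    behavior (the algorithm choice is illustrative; error handling mostly
    elided):

    #include <stdio.h>
    #include <sys/socket.h>
    #include <linux/if_alg.h>

    int main(void)
    {
            struct sockaddr_alg addr = {
                    .salg_family = AF_ALG,
                    .salg_type = "hash",
                    .salg_name = "hmac(sha256)",
            };
            int tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);

            bind(tfmfd, (struct sockaddr *)&addr, sizeof(addr));
            /* No ALG_SET_KEY: the tfm still has CRYPTO_TFM_NEED_KEY set. */
            if (accept(tfmfd, NULL, NULL) < 0)
                    perror("accept");       /* expected failure: ENOKEY */
            return 0;
    }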

    Reported-by: syzbot
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     
  • commit a16e772e664b9a261424107784804cffc8894977 upstream.

    Since Poly1305 requires a nonce per invocation, the Linux kernel
    implementations of Poly1305 don't use the crypto API's keying mechanism
    and instead expect the key and nonce as the first 32 bytes of the data.
    But ->setkey() is still defined as a stub returning an error code. This
    prevents Poly1305 from being used through AF_ALG and will also break it
    completely once we start enforcing that all crypto API users (not just
    AF_ALG) call ->setkey() if present.

    Fix it by removing crypto_poly1305_setkey(), leaving ->setkey as NULL.
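
    After this change, Poly1305 becomes usable through AF_ALG as an
    unkeyed "hash". A hedged userspace sketch of that usage (the all-zero
    one-time key is purely illustrative; error handling elided):

    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <linux/if_alg.h>

    int main(void)
    {
            struct sockaddr_alg addr = {
                    .salg_family = AF_ALG,
                    .salg_type = "hash",
                    .salg_name = "poly1305",
            };
            unsigned char msg[32 + 5] = { 0 };  /* 32-byte key, then data */
            unsigned char tag[16];
            int tfmfd, reqfd;

            memcpy(msg + 32, "hello", 5);
            tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(tfmfd, (struct sockaddr *)&addr, sizeof(addr));
            reqfd = accept(tfmfd, NULL, NULL);  /* no setkey needed now */
            write(reqfd, msg, sizeof(msg));
            read(reqfd, tag, sizeof(tag));      /* 16-byte MAC tag */
            return 0;
    }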

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     
  • commit cd6ed77ad5d223dc6299fb58f62e0f5267f7e2ba upstream.

    Templates that use an shash spawn can use crypto_shash_alg_has_setkey()
    to determine whether the underlying algorithm requires a key or not.
    But there was no corresponding function for ahash spawns. Add it.

    Note that the new function actually has to support both shash and ahash
    algorithms, since the ahash API can be used with either.
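
    A hedged sketch of what such a helper can look like (exact upstream
    naming and types may differ): the ahash case dispatches to the shash
    helper when the transform is shash-backed:

    bool crypto_ahash_alg_has_setkey(struct crypto_alg *alg)
    {
            /* an "ahash" may really be an shash under the hood */
            if (alg->cra_type != &crypto_ahash_type)
                    return crypto_shash_alg_has_setkey(__crypto_shash_alg(alg));

            return __crypto_ahash_alg(alg)->setkey != NULL;
    }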

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     

04 Feb, 2018

1 commit

  • commit ef780324592dd639e4bfbc5b9bf8934b234b7c99 upstream.

    Many GCM users use the GCM IV size directly instead of a named
    constant.

    This patch adds constants for all of the IV sizes used by GCM.
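
    A hedged illustration of the constants this adds (values follow the
    GCM specs: a 96-bit IV for plain GCM, a 64-bit explicit IV for the
    RFC 4106/4543 variants; macro names per the upstream <crypto/gcm.h>):

    #include <stdio.h>

    #define GCM_AES_IV_SIZE         12      /* 96-bit IV, plain GCM */
    #define GCM_RFC4106_IV_SIZE      8      /* 64-bit explicit IV   */
    #define GCM_RFC4543_IV_SIZE      8

    int main(void)
    {
            unsigned char iv[GCM_AES_IV_SIZE] = { 0 };  /* not a bare 12 */

            printf("GCM IV: %zu bytes\n", sizeof(iv));
            return 0;
    }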

    Signed-off-by: Corentin Labbe
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Corentin LABBE
     

30 Dec, 2017

1 commit

  • commit 9abffc6f2efe46c3564c04312e52e07622d40e51 upstream.

    mcryptd_enqueue_request() grabs the per-CPU queue struct and protects
    access to it with disabled preemption. Then it schedules a worker on the
    same CPU. The worker in mcryptd_queue_worker() guards access to the same
    per-CPU variable with disabled preemption.

    If we take CPU-hotplug into account then it is possible that between
    queue_work_on() and the actual invocation of the worker the CPU goes
    down and the worker will be scheduled on _another_ CPU. And here the
    preempt_disable() protection does not work anymore. The easiest thing is
    to add a spin_lock() to guard access to the list.

    Another detail: mcryptd_queue_worker() does not process more than
    MCRYPTD_BATCH invocations in a row. If there are still items left, it
    invokes queue_work() to continue later. I would suggest simply
    dropping that check, because the code does not use a system workqueue
    and its own workqueue is already marked "CPU_INTENSIVE"; if preemption
    is required, the scheduler should take care of it.

    However, if queue_work() is used, the work item is marked as CPU
    unbound. That means it will try to run on the local CPU but may run on
    another CPU as well, especially with CONFIG_DEBUG_WQ_FORCE_RR_CPU=y.
    Again, preempt_disable() won't help here, but the newly introduced
    lock will. In order to keep the work item on the local CPU (and avoid
    round-robin placement), I changed it to queue_work_on().
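
    A hedged sketch of the enqueue side after the change (a fragment;
    field and helper names approximate crypto/mcryptd.c, surrounding
    declarations omitted):

    struct mcryptd_cpu_queue {
            struct crypto_queue queue;
            spinlock_t q_lock;              /* new: protects 'queue' */
            struct work_struct work;
    };

    /* the lock, not preempt_disable(), is now the real protection */
    cpu = get_cpu();
    cpu_queue = this_cpu_ptr(queue->cpu_queue);
    spin_lock(&cpu_queue->q_lock);
    err = crypto_enqueue_request(&cpu_queue->queue, request);
    spin_unlock(&cpu_queue->q_lock);
    queue_work_on(cpu, kcrypto_wq, &cpu_queue->work);
    put_cpu();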

    Signed-off-by: Sebastian Andrzej Siewior
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Sebastian Andrzej Siewior
     

20 Dec, 2017

1 commit

  • commit af3ff8045bbf3e32f1a448542e73abb4c8ceb6f1 upstream.

    Because the HMAC template didn't check that its underlying hash
    algorithm is unkeyed, trying to use "hmac(hmac(sha3-512-generic))"
    through AF_ALG or through KEYCTL_DH_COMPUTE resulted in the inner HMAC
    being used without having been keyed, resulting in sha3_update() being
    called without sha3_init(), causing a stack buffer overflow.

    This is a very old bug, but it seems to have only started causing real
    problems when SHA-3 support was added (requires CONFIG_CRYPTO_SHA3)
    because the innermost hash's state is ->import()ed from a zeroed buffer,
    and it just so happens that other hash algorithms are fine with that,
    but SHA-3 is not. However, there could be arch or hardware-dependent
    hash algorithms also affected; I couldn't test everything.

    Fix the bug by introducing a function crypto_shash_alg_has_setkey()
    which tests whether a shash algorithm is keyed. Then update the HMAC
    template to require that its underlying hash algorithm is unkeyed.

    Here is a reproducer:

    #include <linux/if_alg.h>
    #include <sys/socket.h>

    int main()
    {
            int algfd;
            struct sockaddr_alg addr = {
                    .salg_type = "hash",
                    .salg_name = "hmac(hmac(sha3-512-generic))",
            };
            char key[4096] = { 0 };

            algfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(algfd, (const struct sockaddr *)&addr, sizeof(addr));
            setsockopt(algfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
    }

    Here was the KASAN report from syzbot:

    BUG: KASAN: stack-out-of-bounds in memcpy include/linux/string.h:341 [inline]
    BUG: KASAN: stack-out-of-bounds in sha3_update+0xdf/0x2e0 crypto/sha3_generic.c:161
    Write of size 4096 at addr ffff8801cca07c40 by task syzkaller076574/3044

    CPU: 1 PID: 3044 Comm: syzkaller076574 Not tainted 4.14.0-mm1+ #25
    Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
    Call Trace:
    __dump_stack lib/dump_stack.c:17 [inline]
    dump_stack+0x194/0x257 lib/dump_stack.c:53
    print_address_description+0x73/0x250 mm/kasan/report.c:252
    kasan_report_error mm/kasan/report.c:351 [inline]
    kasan_report+0x25b/0x340 mm/kasan/report.c:409
    check_memory_region_inline mm/kasan/kasan.c:260 [inline]
    check_memory_region+0x137/0x190 mm/kasan/kasan.c:267
    memcpy+0x37/0x50 mm/kasan/kasan.c:303
    memcpy include/linux/string.h:341 [inline]
    sha3_update+0xdf/0x2e0 crypto/sha3_generic.c:161
    crypto_shash_update+0xcb/0x220 crypto/shash.c:109
    shash_finup_unaligned+0x2a/0x60 crypto/shash.c:151
    crypto_shash_finup+0xc4/0x120 crypto/shash.c:165
    hmac_finup+0x182/0x330 crypto/hmac.c:152
    crypto_shash_finup+0xc4/0x120 crypto/shash.c:165
    shash_digest_unaligned+0x9e/0xd0 crypto/shash.c:172
    crypto_shash_digest+0xc4/0x120 crypto/shash.c:186
    hmac_setkey+0x36a/0x690 crypto/hmac.c:66
    crypto_shash_setkey+0xad/0x190 crypto/shash.c:64
    shash_async_setkey+0x47/0x60 crypto/shash.c:207
    crypto_ahash_setkey+0xaf/0x180 crypto/ahash.c:200
    hash_setkey+0x40/0x90 crypto/algif_hash.c:446
    alg_setkey crypto/af_alg.c:221 [inline]
    alg_setsockopt+0x2a1/0x350 crypto/af_alg.c:254
    SYSC_setsockopt net/socket.c:1851 [inline]
    SyS_setsockopt+0x189/0x360 net/socket.c:1830
    entry_SYSCALL_64_fastpath+0x1f/0x96

    Reported-by: syzbot
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     

05 Dec, 2017

1 commit

  • commit 7d2c3f54e6f646887d019faa45f35d6fe9fe82ce upstream.

    The code paths protected by the socket lock do not use or modify the
    socket in a non-atomic fashion. The actions pertaining to the socket
    do not even need to be handled as atomic operations. Thus, the socket
    lock can safely be dropped.

    This fixes a "scheduling while atomic" bug, as the callback function
    may be invoked in interrupt context.

    In addition, the sock_hold is moved before the AIO encrypt/decrypt
    operation to ensure that the socket is always present. This avoids a
    tiny race window where the socket is unprotected and yet used by the AIO
    operation.

    Finally, the release of resources for a crypto operation is moved into a
    common function of af_alg_free_resources.
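
    A hedged sketch of the resulting AIO callback (names approximate
    crypto/af_alg.c):

    void af_alg_async_cb(struct crypto_async_request *_req, int err)
    {
            struct af_alg_async_req *areq = _req->data;
            struct sock *sk = areq->sk;
            struct kiocb *iocb = areq->iocb;
            unsigned int resultlen = areq->outlen;

            /* No lock_sock(sk): we may be in interrupt context. */
            af_alg_free_resources(areq);
            sock_put(sk);   /* pairs with the sock_hold() taken before
                               the AIO operation was started */

            iocb->ki_complete(iocb, err ? err : resultlen, 0);
    }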

    Fixes: e870456d8e7c8 ("crypto: algif_skcipher - overhaul memory management")
    Fixes: d887c52d6ae43 ("crypto: algif_aead - overhaul memory management")
    Reported-by: Romain Izard
    Signed-off-by: Stephan Mueller
    Tested-by: Romain Izard
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Stephan Mueller
     

02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boilerplate text.

    This patch is based on work done by Thomas Gleixner and Kate Stewart and
    Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset of
    the use cases:
    - file had no licensing information in it,
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information.

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier should be
    applied to a file was done in a spreadsheet of side-by-side results
    from the output of two independent scanners (ScanCode & Windriver)
    producing SPDX tag:value files, created by Philippe Ombredanne.
    Philippe prepared the base worksheet and did an initial spot review of
    a few 1000 files.

    The 4.13 kernel was the starting point of the analysis with 60,537 files
    assessed. Kate Stewart did a file by file comparison of the scanner
    results in the spreadsheet to determine which SPDX license identifier(s)
    to be applied to the file. She confirmed any determination that was not
    immediately clear with lawyers working with the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging was:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
    lines of source
    - File already had some variant of a license header in it (even if <5
      lines).

    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

09 Aug, 2017

1 commit

  • Consolidate following data structures:

    skcipher_async_req, aead_async_req -> af_alg_async_req
    skcipher_rsgl, aead_rsgl -> af_alg_rsgl
    skcipher_tsgl, aead_tsgl -> af_alg_tsgl
    skcipher_ctx, aead_ctx -> af_alg_ctx

    Consolidate following functions:

    skcipher_sndbuf, aead_sndbuf -> af_alg_sndbuf
    skcipher_writable, aead_writable -> af_alg_writable
    skcipher_rcvbuf, aead_rcvbuf -> af_alg_rcvbuf
    skcipher_readable, aead_readable -> af_alg_readable
    aead_alloc_tsgl, skcipher_alloc_tsgl -> af_alg_alloc_tsgl
    aead_count_tsgl, skcipher_count_tsgl -> af_alg_count_tsgl
    aead_pull_tsgl, skcipher_pull_tsgl -> af_alg_pull_tsgl
    aead_free_areq_sgls, skcipher_free_areq_sgls -> af_alg_free_areq_sgls
    aead_wait_for_wmem, skcipher_wait_for_wmem -> af_alg_wait_for_wmem
    aead_wmem_wakeup, skcipher_wmem_wakeup -> af_alg_wmem_wakeup
    aead_wait_for_data, skcipher_wait_for_data -> af_alg_wait_for_data
    aead_data_wakeup, skcipher_data_wakeup -> af_alg_data_wakeup
    aead_sendmsg, skcipher_sendmsg -> af_alg_sendmsg
    aead_sendpage, skcipher_sendpage -> af_alg_sendpage
    aead_async_cb, skcipher_async_cb -> af_alg_async_cb
    aead_poll, skcipher_poll -> af_alg_poll

    Split out the following common code from recvmsg:

    af_alg_alloc_areq: allocation of the request data structure for the
    cipher operation

    af_alg_get_rsgl: creation of the RX SGL anchored in the request data
    structure

    The following implementation changes, which do not affect
    functionality, have been applied to synchronize the slightly different
    code bases in algif_skcipher and algif_aead:

    The wakeup in af_alg_wait_for_data is triggered when either more data
    is received or the indicator that more data is to be expected is
    released. The first is triggered by user space, the second is
    triggered by the kernel upon finishing the processing of data
    (i.e. the kernel is ready for more).

    af_alg_sendmsg uses size_t in min_t calculation for obtaining len.
    Return code determination is consistent with algif_skcipher. The
    scope of the variable i is reduced to match algif_aead. The type of the
    variable i is switched from int to unsigned int to match algif_aead.

    af_alg_sendpage does not contain the superfluous err = 0 from
    aead_sendpage.

    af_alg_async_cb requires the number of output bytes to be stored in
    areq->outlen before the AIO callback is triggered.

    POLLIN / POLLRDNORM is now set when either no more data is expected or
    the kernel has been supplied with data. This is consistent with the
    wakeup from sleep when the kernel waits for data.

    The request data structure is extended by the field last_rsgl, which
    points to the last RX SGL list entry. This helps the recvmsg
    implementation chain the RX SGL to other SG(L)s if needed. It is
    currently used by algif_aead which chains the tag SGL to the RX SGL
    during decryption.

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     

04 Aug, 2017

2 commits

  • There are quite a number of occurrences in the kernel of the pattern

    if (dst != src)
            memcpy(dst, src, walk.total % AES_BLOCK_SIZE);
    crypto_xor(dst, final, walk.total % AES_BLOCK_SIZE);

    or

    crypto_xor(keystream, src, nbytes);
    memcpy(dst, keystream, nbytes);

    where crypto_xor() is preceded or followed by a memcpy() invocation
    that is only there because crypto_xor() uses its output parameter as
    one of the inputs. To avoid having to add new instances of this pattern
    in the arm64 code, which will be refactored to implement non-SIMD
    fallbacks, add an alternative implementation called crypto_xor_cpy(),
    taking separate input and output arguments. This removes the need for
    the separate memcpy().
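
    A hedged sketch of the new helper's semantics (the real version
    defers to __crypto_xor() and works word-at-a-time where possible):

    static inline void crypto_xor_cpy(u8 *dst, const u8 *src1,
                                      const u8 *src2, unsigned int size)
    {
            while (size--)
                    *dst++ = *src1++ ^ *src2++; /* dst need not alias src1 */
    }

    /* so the memcpy+crypto_xor pairs above collapse to a single call: */
    crypto_xor_cpy(dst, keystream, src, nbytes);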

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
    In preparation for introducing crypto_xor_cpy(), which will take
    separate input and output operands, modify the __crypto_xor()
    implementation so that it can be shared with the existing
    crypto_xor(), for which it provides the actual functionality when the
    inline version is not used.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     

10 Jun, 2017

3 commits

    As of now, crypto_akcipher_maxsize() cannot be reached without
    successfully setting the key for the transformation. akcipher
    algorithm implementations check whether the key was set and then
    return the output buffer size required for the given key.

    Change the return type to unsigned int and always assume that this
    function is called after a successful setkey of the transformation.
    akcipher algorithm implementations can then drop the NULL-key check
    and directly return the maximum size.
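
    A hedged sketch of the resulting shape in one implementation (names
    approximate the software RSA code):

    static unsigned int rsa_max_size(struct crypto_akcipher *tfm)
    {
            struct rsa_mpi_key *pkey = akcipher_tfm_ctx(tfm);

            /* no more "is the key set yet?" check and error return */
            return mpi_get_size(pkey->n);
    }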

    Signed-off-by: Tudor Ambarus
    Signed-off-by: Herbert Xu

    Tudor-Dan Ambarus
     
    As of now, crypto_kpp_maxsize() cannot be reached without successfully
    setting the key for the transformation. kpp algorithm implementations
    check whether the key was set and then return the output buffer size
    required for the given key.

    Change the return type to unsigned int and always assume that this
    function is called after a successful setkey of the transformation.
    kpp algorithm implementations can then drop the NULL-key check and
    directly return the maximum size.

    Signed-off-by: Tudor Ambarus
    Signed-off-by: Herbert Xu

    Tudor-Dan Ambarus
     
  • While here, add missing argument description (ndigits).

    Signed-off-by: Tudor Ambarus
    Signed-off-by: Herbert Xu

    Tudor-Dan Ambarus
     

23 May, 2017

1 commit

    Many HMAC users use the 0x36/0x5c values directly. In crypto code it
    is better to use a named constant than a bare magic number.

    This patch simply adds the HMAC_IPAD_VALUE/HMAC_OPAD_VALUE defines in
    a new include file, "crypto/hmac.h", and uses them in crypto/hmac.c.
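
    A hedged userspace illustration of where these constants sit in the
    HMAC construction (the 64-byte block size is hard-coded for, e.g.,
    SHA-256):

    #include <stdio.h>

    #define HMAC_IPAD_VALUE 0x36
    #define HMAC_OPAD_VALUE 0x5c

    int main(void)
    {
            unsigned char key[64] = "secret"; /* zero-padded to block size */
            unsigned char ipad[64], opad[64];

            for (int i = 0; i < 64; i++) {
                    ipad[i] = key[i] ^ HMAC_IPAD_VALUE;
                    opad[i] = key[i] ^ HMAC_OPAD_VALUE;
            }
            /* HMAC(K, m) = H(opad || H(ipad || m)) */
            printf("ipad[0]=%#x opad[0]=%#x\n", ipad[0], opad[0]);
            return 0;
    }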

    Signed-off-by: Corentin Labbe
    Signed-off-by: Herbert Xu

    Corentin LABBE
     

03 May, 2017

2 commits

  • Pull security subsystem updates from James Morris:
    "Highlights:

    IMA:
    - provide ">" and "<" operators
    ..."

    * 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (98 commits)
    tpm: Fix reference count to main device
    tpm_tis: convert to using locality callbacks
    tpm: fix handling of the TPM 2.0 event logs
    tpm_crb: remove a cruft constant
    keys: select CONFIG_CRYPTO when selecting DH / KDF
    apparmor: Make path_max parameter readonly
    apparmor: fix parameters so that the permission test is bypassed at boot
    apparmor: fix invalid reference to index variable of iterator line 836
    apparmor: use SHASH_DESC_ON_STACK
    security/apparmor/lsm.c: set debug messages
    apparmor: fix boolreturn.cocci warnings
    Smack: Use GFP_KERNEL for smk_netlbl_mls().
    smack: fix double free in smack_parse_opts_str()
    KEYS: add SP800-56A KDF support for DH
    KEYS: Keyring asymmetric key restrict method with chaining
    KEYS: Restrict asymmetric key linkage using a specific keychain
    KEYS: Add a lookup_restriction function for the asymmetric key type
    KEYS: Add KEYCTL_RESTRICT_KEYRING
    KEYS: Consistent ordering for __key_link_begin and restrict check
    KEYS: Add an optional lookup_restriction hook to key_type
    ...

    Linus Torvalds
     
  • Pull crypto updates from Herbert Xu:
    "Here is the crypto update for 4.12:

    API:
    - Add batch registration for acomp/scomp
    - Change acomp testing to non-unique compressed result
    - Extend algorithm name limit to 128 bytes
    - Require setkey before accept(2) in algif_aead

    Algorithms:
    - Add support for deflate rfc1950 (zlib)

    Drivers:
    - Add accelerated crct10dif for powerpc
    - Add crc32 in stm32
    - Add sha384/sha512 in ccp
    - Add 3des/gcm(aes) for v5 devices in ccp
    - Add Queue Interface (QI) backend support in caam
    - Add new Exynos RNG driver
    - Add ThunderX ZIP driver
    - Add driver for hardware random generator on MT7623 SoC"

    * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (101 commits)
    crypto: stm32 - Fix OF module alias information
    crypto: algif_aead - Require setkey before accept(2)
    crypto: scomp - add support for deflate rfc1950 (zlib)
    crypto: scomp - allow registration of multiple scomps
    crypto: ccp - Change ISR handler method for a v5 CCP
    crypto: ccp - Change ISR handler method for a v3 CCP
    crypto: crypto4xx - rename ce_ring_contol to ce_ring_control
    crypto: testmgr - Allow ecb(cipher_null) in FIPS mode
    Revert "crypto: arm64/sha - Add constant operand modifier to ASM_EXPORT"
    crypto: ccp - Disable interrupts early on unload
    crypto: ccp - Use only the relevant interrupt bits
    hwrng: mtk - Add driver for hardware random generator on MT7623 SoC
    dt-bindings: hwrng: Add Mediatek hardware random generator bindings
    crypto: crct10dif-vpmsum - Fix missing preempt_disable()
    crypto: testmgr - replace compression known answer test
    crypto: acomp - allow registration of multiple acomps
    hwrng: n2 - Use devm_kcalloc() in n2rng_probe()
    crypto: chcr - Fix error handling related to 'chcr_alloc_shash'
    padata: get_next is never NULL
    crypto: exynos - Add new Exynos RNG driver
    ...

    Linus Torvalds
     

19 Apr, 2017

1 commit

  • Pull crypto fixes from Herbert Xu:
    "This fixes the following problems:

    - regression in new XTS/LRW code when used with async crypto

    - long-standing bug in ahash API when used with certain algos

    - bogus memory dereference in async algif_aead with certain algos"

    * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
    crypto: algif_aead - Fix bogus request dereference in completion function
    crypto: ahash - Fix EINPROGRESS notification callback
    crypto: lrw - Fix use-after-free on EINPROGRESS
    crypto: xts - Fix use-after-free on EINPROGRESS

    Linus Torvalds
     

10 Apr, 2017

1 commit

  • The ahash API modifies the request's callback function in order
    to clean up after itself in some corner cases (unaligned final
    and missing finup).

    When the request is complete ahash will restore the original
    callback and everything is fine. However, when the request gets
    an EBUSY on a full queue, an EINPROGRESS callback is made while
    the request is still ongoing.

    In this case the ahash API will incorrectly call its own callback.

    This patch fixes the problem by creating a temporary request
    object on the stack which is used to relay EINPROGRESS back to
    the original completion function.

    This patch also adds code to preserve the original flags value.
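
    A hedged sketch of the idea (names approximate the upstream fix): a
    throwaway request object on the stack carries the caller's original
    completion data, so the EINPROGRESS notification reaches the original
    callback instead of ahash's own restore logic:

    static void ahash_notify_einprogress(struct ahash_request *req)
    {
            struct ahash_request_priv *priv = req->priv;
            struct crypto_async_request oreq;

            oreq.data = priv->data;         /* the caller's original data */

            priv->complete(&oreq, -EINPROGRESS);
    }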

    Fixes: ab6bf4e5e5e4 ("crypto: hash - Fix the pointer voodoo in...")
    Cc:
    Reported-by: Sabrina Dubroca
    Tested-by: Sabrina Dubroca
    Signed-off-by: Herbert Xu

    Herbert Xu
     

05 Apr, 2017

4 commits

  • Currently, gf128mul_x_ble works with pointers to be128, even though it
    actually interprets the words as little-endian. Consequently, it uses
    cpu_to_le64/le64_to_cpu on fields of type __be64, which is incorrect.

    This patch fixes that by changing the function to accept pointers to
    le128 and updating all users accordingly.

    Signed-off-by: Ondrej Mosnacek
    Reviewed-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Ondrej Mosnáček
     
  • The gf128mul_x_ble function is currently defined in gf128mul.c, because
    it depends on the gf128mul_table_be multiplication table.

    However, since the function is very small and only uses two values from
    the table, it is better for it to be defined as inline function in
    gf128mul.h. That way, the function can be inlined by the compiler for
    better performance.

    For consistency, the other gf128mul_x_* functions are also moved to the
    header file. In addition, the code is rewritten to be constant-time.

    After this change, the speed of the generic 'xts(aes)' implementation
    increased from ~225 MiB/s to ~235 MiB/s (measured using 'cryptsetup
    benchmark -c aes-xts-plain64' on an Intel system with CRYPTO_AES_X86_64
    and CRYPTO_AES_NI_INTEL disabled).
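
    For reference, a hedged userspace sketch of constant-time doubling
    using the XTS little-endian byte convention (the kernel code works on
    le128 words rather than bytes):

    #include <stdint.h>
    #include <stdio.h>

    /* Multiply a 16-byte little-endian GF(2^128) element by x; the
     * reduction by 0x87 is applied branchlessly via carry * 0x87. */
    static void ble_double(uint8_t out[16], const uint8_t in[16])
    {
            uint8_t carry = in[15] >> 7;    /* top bit of the block */

            for (int i = 15; i > 0; i--)
                    out[i] = (uint8_t)((in[i] << 1) | (in[i - 1] >> 7));
            out[0] = (uint8_t)((in[0] << 1) ^ (carry * 0x87));
    }

    int main(void)
    {
            uint8_t t[16] = { 1 }, t2[16];

            ble_double(t2, t);
            printf("%#x\n", t2[0]);         /* prints 0x2 */
            return 0;
    }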

    Signed-off-by: Ondrej Mosnacek
    Reviewed-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Ondrej Mosnáček
     
  • Add a restrict_link_by_key_or_keyring_chain link restriction that
    searches for signing keys in the destination keyring in addition to the
    signing key or keyring designated when the destination keyring was
    created. Userspace enables this behavior by including the "chain" option
    in the keyring restriction:

    keyctl(KEYCTL_RESTRICT_KEYRING, keyring, "asymmetric",
           "key_or_keyring:<serial>:chain");

    Signed-off-by: Mat Martineau

    Mat Martineau
     
  • Adds restrict_link_by_signature_keyring(), which uses the restrict_key
    member of the provided destination_keyring data structure as the
    key or keyring to search for signing keys.

    Signed-off-by: Mat Martineau

    Mat Martineau
     

04 Apr, 2017

1 commit

  • The first argument to the restrict_link_func_t functions was a keyring
    pointer. These functions are called by the key subsystem with this
    argument set to the destination keyring, but restrict_link_by_signature
    expects a pointer to the relevant trusted keyring.

    Restrict functions may need something other than a single struct key
    pointer to allow or reject key linkage, so the data used to make that
    decision (such as the trust keyring) is moved to a new, fourth
    argument. The first argument is now always the destination keyring.
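
    A hedged sketch of the resulting callback type (argument order per
    the description above; the exact typedef name may differ upstream):

    typedef int (*restrict_link_func_t)(struct key *dest_keyring,
                                        const struct key_type *type,
                                        const union key_payload *payload,
                                        struct key *restriction_key);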

    Signed-off-by: Mat Martineau

    Mat Martineau
     

10 Mar, 2017

1 commit

  • Lockdep issues a circular dependency warning when AFS issues an operation
    through AF_RXRPC from a context in which the VFS/VM holds the mmap_sem.

    The theory lockdep comes up with is as follows:

    (1) If the pagefault handler decides it needs to read pages from AFS, it
    calls AFS with mmap_sem held and AFS begins an AF_RXRPC call, but
    creating a call requires the socket lock:

    mmap_sem must be taken before sk_lock-AF_RXRPC

    (2) afs_open_socket() opens an AF_RXRPC socket and binds it. rxrpc_bind()
    binds the underlying UDP socket whilst holding its socket lock.
    inet_bind() takes its own socket lock:

    sk_lock-AF_RXRPC must be taken before sk_lock-AF_INET

    (3) Reading from a TCP socket into a userspace buffer might cause a fault
    and thus cause the kernel to take the mmap_sem, but the TCP socket is
    locked whilst doing this:

    sk_lock-AF_INET must be taken before mmap_sem

    However, lockdep's theory is wrong in this instance because it deals only
    with lock classes and not individual locks. The AF_INET lock in (2) isn't
    really equivalent to the AF_INET lock in (3) as the former deals with a
    socket entirely internal to the kernel that never sees userspace. This is
    a limitation in the design of lockdep.

    Fix the general case by:

    (1) Double up all the locking keys used in sockets so that one set are
    used if the socket is created by userspace and the other set is used
    if the socket is created by the kernel.

    (2) Store the kern parameter passed to sk_alloc() in a variable in the
    sock struct (sk_kern_sock). This informs sock_lock_init(),
    sock_init_data() and sk_clone_lock() as to the lock keys to be used.

    Note that the child created by sk_clone_lock() inherits the parent's
    kern setting.

    (3) Add a 'kern' parameter to ->accept() that is analogous to the one
    passed in to ->create() that distinguishes whether kernel_accept() or
    sys_accept4() was the caller and can be passed to sk_alloc().

    Note that a lot of accept functions merely dequeue an already
    allocated socket. I haven't touched these as the new socket already
    exists before we get the parameter.

    Note also that there are a couple of places where I've made the accepted
    socket unconditionally kernel-based:

    irda_accept()
    rds_tcp_accept_one()
    tcp_accept_from_sock()

    because they follow a sock_create_kern() and accept off of that.

    Whilst creating this, I noticed that lustre and ocfs don't create sockets
    through sock_create_kern() and thus they aren't marked as for-kernel,
    though they appear to be internal. I wonder if these should do that so
    that they use the new set of lock keys.

    Signed-off-by: David Howells
    Signed-off-by: David S. Miller

    David Howells
     

11 Feb, 2017

1 commit

  • Instead of unconditionally forcing 4 byte alignment for all generic
    chaining modes that rely on crypto_xor() or crypto_inc() (which may
    result in unnecessary copying of data when the underlying hardware
    can perform unaligned accesses efficiently), make those functions
    deal with unaligned input explicitly, but only if the Kconfig symbol
    HAVE_EFFICIENT_UNALIGNED_ACCESS is set. This will allow us to drop
    the alignmasks from the CBC, CMAC, CTR, CTS, PCBC and SEQIV drivers.

    For crypto_inc(), this simply involves making the 4-byte stride
    conditional on HAVE_EFFICIENT_UNALIGNED_ACCESS being set, given that
    it typically operates on 16 byte buffers.

    For crypto_xor(), an algorithm is implemented that simply runs through
    the input using the largest strides possible if unaligned accesses are
    allowed. If they are not, an optimal sequence of memory accesses is
    emitted that takes the relative alignment of the input buffers into
    account, e.g., if the relative misalignment of dst and src is 4 bytes,
    the entire xor operation will be completed using 4 byte loads and stores
    (modulo unaligned bits at the start and end). Note that all expressions
    involving misalign are simply eliminated by the compiler when
    HAVE_EFFICIENT_UNALIGNED_ACCESS is defined.
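
    A hedged userspace sketch of the large-stride path (the kernel
    version additionally derives an optimal access sequence from the
    relative misalignment when unaligned accesses are not allowed):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* XOR src into dst using the widest stride available; the
     * memcpy-based loads/stores compile to plain (possibly unaligned)
     * accesses on targets where those are efficient. */
    static void xor_large_stride(uint8_t *dst, const uint8_t *src,
                                 unsigned int len)
    {
            while (len >= sizeof(unsigned long)) {
                    unsigned long d, s;

                    memcpy(&d, dst, sizeof(d));
                    memcpy(&s, src, sizeof(s));
                    d ^= s;
                    memcpy(dst, &d, sizeof(d));
                    dst += sizeof(d);
                    src += sizeof(d);
                    len -= sizeof(d);
            }
            while (len--)                   /* leftover tail bytes */
                    *dst++ ^= *src++;
    }

    int main(void)
    {
            uint8_t buf[12] = "hello world", key[12] = { 0x20 };

            xor_large_stride(buf, key, 12); /* flips case of 'h' only */
            printf("%s\n", (char *)buf);
            return 0;
    }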

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     

03 Feb, 2017

1 commit