09 Sep, 2019

1 commit

  • skcipher_walk_done may be called with an error by internal or
    external callers. For those internal callers we shouldn't unmap
    pages but for external callers we must unmap any pages that are
    in use.

    This patch distinguishes between the two cases by checking whether
    walk->nbytes is zero or not. For internal callers, we now set
    walk->nbytes to zero prior to the call. For external callers,
    walk->nbytes has always been non-zero (as zero is used to indicate
    the termination of a walk).

    Reported-by: Ard Biesheuvel
    Fixes: 5cde0af2a982 ("[CRYPTO] cipher: Added block cipher type")
    Cc:
    Signed-off-by: Herbert Xu
    Tested-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Herbert Xu
     

09 Jul, 2019

1 commit

  • Pull crypto updates from Herbert Xu:
    "Here is the crypto update for 5.3:

    API:
    - Test shash interface directly in testmgr
    - cra_driver_name is now mandatory

    Algorithms:
    - Replace arc4 crypto_cipher with library helper
    - Implement 5 way interleave for ECB, CBC and CTR on arm64
    - Add xxhash
    - Add continuous self-test on noise source to drbg
    - Update jitter RNG

    Drivers:
    - Add support for SHA204A random number generator
    - Add support for 7211 in iproc-rng200
    - Fix fuzz test failures in inside-secure
    - Fix fuzz test failures in talitos
    - Fix fuzz test failures in qat"

    * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (143 commits)
    crypto: stm32/hash - remove interruptible condition for dma
    crypto: stm32/hash - Fix hmac issue more than 256 bytes
    crypto: stm32/crc32 - rename driver file
    crypto: amcc - remove memset after dma_alloc_coherent
    crypto: ccp - Switch to SPDX license identifiers
    crypto: ccp - Validate the the error value used to index error messages
    crypto: doc - Fix formatting of new crypto engine content
    crypto: doc - Add parameter documentation
    crypto: arm64/aes-ce - implement 5 way interleave for ECB, CBC and CTR
    crypto: arm64/aes-ce - add 5 way interleave routines
    crypto: talitos - drop icv_ool
    crypto: talitos - fix hash on SEC1.
    crypto: talitos - move struct talitos_edesc into talitos.h
    lib/scatterlist: Fix mapping iterator when sg->offset is greater than PAGE_SIZE
    crypto/NX: Set receive window credits to max number of CRBs in RxFIFO
    crypto: asymmetric_keys - select CRYPTO_HASH where needed
    crypto: serpent - mark __serpent_setkey_sbox noinline
    crypto: testmgr - dynamically allocate crypto_shash
    crypto: testmgr - dynamically allocate testvec_config
    crypto: talitos - eliminate unneeded 'done' functions at build time
    ...

    Linus Torvalds
     

13 Jun, 2019

1 commit

  • crypto_skcipher_encrypt() and crypto_skcipher_decrypt() have grown to be
    more than a single indirect function call. They now also check whether
    a key has been set, and with CONFIG_CRYPTO_STATS=y they also update the
    crypto statistics. That can add up to a lot of bloat at every call
    site. Moreover, these always involve a function call anyway, which
    greatly limits the benefits of inlining.

    So change them to be non-inline.

    Signed-off-by: Eric Biggers
    Acked-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Eric Biggers
     

31 May, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 3029 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

08 Apr, 2019

1 commit

  • skcipher_walk_done() assumes it's a bug if, after the "slow" path is
    executed where the next chunk of data is processed via a bounce buffer,
    the algorithm says it didn't process all bytes. Thus it WARNs on this.

    However, this can happen legitimately when the message needs to be
    evenly divisible into "blocks" but isn't, and the algorithm has a
    'walksize' greater than the block size. For example, ecb-aes-neonbs
    sets 'walksize' to 128 bytes and only supports messages evenly divisible
    into 16-byte blocks. If, say, 17 message bytes remain but they straddle
    scatterlist elements, the skcipher_walk code will take the "slow" path
    and pass the algorithm all 17 bytes in the bounce buffer. But the
    algorithm will only be able to process 16 bytes, triggering the WARN.

    Fix this by just removing the WARN_ON(). Returning -EINVAL, as the code
    already does, is the right behavior.

    This bug was detected by my patches that improve testmgr to fuzz
    algorithms against their generic implementation.

    Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
    Cc: # v4.10+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

18 Jan, 2019

1 commit

  • Some algorithms have a ->setkey() method that is not atomic, in the
    sense that setting a key can fail after changes were already made to the
    tfm context. In this case, if a key was already set the tfm can end up
    in a state that corresponds to neither the old key nor the new key.

    For example, in lrw.c, if gf128mul_init_64k_bbe() fails due to lack of
    memory, then priv::table will be left NULL. After that, encryption with
    that tfm will cause a NULL pointer dereference.

    It's not feasible to make all ->setkey() methods atomic, especially ones
    that have to key multiple sub-tfms. Therefore, make the crypto API set
    CRYPTO_TFM_NEED_KEY if ->setkey() fails and the algorithm requires a
    key, to prevent the tfm from being used until a new key is set.

    [Cc stable mainly because when introducing the NEED_KEY flag I changed
    AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG
    previously didn't have this problem. So these "incompletely keyed"
    states became theoretically accessible via AF_ALG -- though, the
    opportunities for causing real mischief seem pretty limited.]

    Fixes: f8d33fac8480 ("crypto: skcipher - prevent using skciphers without setting key")
    Cc: # v4.16+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

11 Jan, 2019

1 commit

  • The majority of skcipher templates (including both the existing ones and
    the ones remaining to be converted from the "blkcipher" API) just wrap a
    single block cipher algorithm. This includes cbc, cfb, ctr, ecb, kw,
    ofb, and pcbc. Add a helper function skcipher_alloc_instance_simple()
    that handles allocating an skcipher instance for this common case.

    Signed-off-by: Eric Biggers
    Reviewed-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Eric Biggers
     

23 Dec, 2018

2 commits

  • Remove dead code related to internal IV generators, which are no longer
    used since they've been replaced with the "seqiv" and "echainiv"
    templates. The removed code includes:

    - The "givcipher" (GIVCIPHER) algorithm type. No algorithms are
    registered with this type anymore, so it's unneeded.

    - The "const char *geniv" member of aead_alg, ablkcipher_alg, and
    blkcipher_alg. A few algorithms still set this, but it isn't used
    anymore except to show via /proc/crypto and CRYPTO_MSG_GETALG.
    Just hardcode "" or "" in those cases.

    - The 'skcipher_givcrypt_request' structure, which is never used.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • skcipher_walk_virt() can still sleep even with atomic=true, since that
    only affects the later calls to skcipher_walk_done(). But,
    skcipher_walk_virt() only has to allocate memory for some input data
    layouts, so incorrectly calling it with preemption disabled can go
    undetected. Use might_sleep() so that it's detected reliably.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

09 Nov, 2018

1 commit

  • There have been a pretty ridiculous number of issues with initializing
    the report structures that are copied to userspace by NETLINK_CRYPTO.
    Commit 4473710df1f8 ("crypto: user - Prepare for CRYPTO_MAX_ALG_NAME
    expansion") replaced some strncpy()s with strlcpy()s, thereby
    introducing information leaks. Later two other people tried to replace
    other strncpy()s with strlcpy() too, which would have introduced even
    more information leaks:

    - https://lore.kernel.org/patchwork/patch/954991/
    - https://patchwork.kernel.org/patch/10434351/

    Commit cac5818c25d0 ("crypto: user - Implement a generic crypto
    statistics") also uses the buggy strlcpy() approach and therefore leaks
    uninitialized memory to userspace. A fix was proposed, but it was
    originally incomplete.

    Seeing as how apparently no one can get this right with the current
    approach, change all the reporting functions to:

    - Start by memsetting the report structure to 0. This guarantees it's
    always initialized, regardless of what happens later.
    - Initialize all strings using strscpy(). This is safe after the
    memset, ensures null termination of long strings, avoids unnecessary
    work, and avoids the -Wstringop-truncation warnings from gcc.
    - Use sizeof(var) instead of sizeof(type). This is more robust against
    copy+paste errors.

    For simplicity, also reuse the -EMSGSIZE return value from nla_put().

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

28 Sep, 2018

1 commit

  • In preparation for removal of VLAs due to skcipher requests on the stack
    via SKCIPHER_REQUEST_ON_STACK() usage, this introduces the infrastructure
    for the "sync skcipher" tfm, which is for handling the on-stack cases of
    skcipher, which are always non-ASYNC and have a known limited request
    size.

    The crypto API additions:

    struct crypto_sync_skcipher (wrapper for struct crypto_skcipher)
    crypto_alloc_sync_skcipher()
    crypto_free_sync_skcipher()
    crypto_sync_skcipher_setkey()
    crypto_sync_skcipher_get_flags()
    crypto_sync_skcipher_set_flags()
    crypto_sync_skcipher_clear_flags()
    crypto_sync_skcipher_blocksize()
    crypto_sync_skcipher_ivsize()
    crypto_sync_skcipher_reqtfm()
    skcipher_request_set_sync_tfm()
    SYNC_SKCIPHER_REQUEST_ON_STACK() (with tfm type check)

    Signed-off-by: Kees Cook
    Reviewed-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Kees Cook
     

03 Aug, 2018

3 commits

  • scatterwalk_done() is only meant to be called after a nonzero number of
    bytes have been processed, since scatterwalk_pagedone() will flush the
    dcache of the *previous* page. But in the error case of
    skcipher_walk_done(), e.g. if the input wasn't an integer number of
    blocks, scatterwalk_done() was actually called after advancing 0 bytes.
    This caused a crash ("BUG: unable to handle kernel paging request")
    during '!PageSlab(page)' on architectures like arm and arm64 that define
    ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
    page-aligned as in that case walk->offset == 0.

    Fix it by reorganizing skcipher_walk_done() to skip the
    scatterwalk_advance() and scatterwalk_done() if an error has occurred.

    This bug was found by syzkaller fuzzing.

    Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:

    #include <sys/socket.h>
    #include <linux/if_alg.h>
    #include <unistd.h>

    int main()
    {
            struct sockaddr_alg addr = {
                    .salg_type = "skcipher",
                    .salg_name = "cbc(aes-generic)",
            };
            char buffer[4096] __attribute__((aligned(4096))) = { 0 };
            int fd;

            fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(fd, (void *)&addr, sizeof(addr));
            setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
            fd = accept(fd, NULL, NULL);
            write(fd, buffer, 15);
            read(fd, buffer, 15);
    }

    Reported-by: Liu Chao
    Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
    Cc: # v4.10+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Setting 'walk->nbytes = walk->total' in skcipher_walk_first() doesn't
    make sense because actually walk->nbytes needs to be set to the length
    of the first step in the walk, which may be less than walk->total. This
    is done by skcipher_walk_next() which is called immediately afterwards.
    Also walk->nbytes was already set to 0 in skcipher_walk_skcipher(),
    which is a better default value in case it's forgotten to be set later.

    Therefore, remove the unnecessary assignment to walk->nbytes.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • The ALIGN() macro needs to be passed the alignment, not the alignmask
    (which is the alignment minus 1).

    Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
    Cc: # v4.10+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

01 Jul, 2018

1 commit

  • The function skcipher_walk_next is declared static yet marked
    EXPORT_SYMBOL_GPL, which is confusing for an internal function. The
    resulting visibility is its own .c file plus every other module:
    other .c files of the same module can't use it, even though external
    modules can. Since this is an internal function and not a crucial
    part of the API, the patch simply removes the EXPORT_SYMBOL_GPL
    marking from skcipher_walk_next.

    Found by Linux Driver Verification project (linuxtesting.org).

    Signed-off-by: Denis Efremov
    Signed-off-by: Herbert Xu

    Denis Efremov
     

12 Jan, 2018

1 commit

  • Similar to what was done for the hash API, update the skcipher API to
    track whether each transform has been keyed, and reject
    encryption/decryption if a key is needed but one hasn't been set.

    This isn't as important as the equivalent fix for the hash API because
    symmetric ciphers almost always require a key (the "null cipher" is the
    only exception), so are unlikely to be used without one. Still,
    tracking the key will prevent accidental unkeyed use. algif_skcipher
    also had to track the key anyway, so the new flag replaces that and
    simplifies the algif_skcipher implementation.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

11 Dec, 2017

1 commit

  • All the ChaCha20 algorithms as well as the ARM bit-sliced AES-XTS
    algorithms call skcipher_walk_virt(), then access the IV (walk.iv)
    before checking whether any bytes need to be processed (walk.nbytes).

    But if the input is empty, then skcipher_walk_virt() doesn't set the IV,
    and the algorithms crash trying to use the uninitialized IV pointer.

    Fix it by setting the IV earlier in skcipher_walk_virt(). Also fix it
    for the AEAD walk functions.

    This isn't a perfect solution because we can't actually align the IV to
    ->cra_alignmask unless there are bytes to process, for one because the
    temporary buffer for the aligned IV is freed by skcipher_walk_done(),
    which is only called when there are bytes to process. Thus, algorithms
    that require aligned IVs will still need to avoid accessing the IV when
    walk.nbytes == 0. Still, many algorithms/architectures are fine with
    IVs having any alignment, and even for those that aren't, a misaligned
    pointer bug is much less severe than an uninitialized pointer bug.

    This change also matches the behavior of the older blkcipher_walk API.

    Fixes: 0cabf2af6f5a ("crypto: skcipher - Fix crash on zero-length input")
    Reported-by: syzbot
    Cc: # v4.14+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

25 Nov, 2017

1 commit

  • The skcipher_walk_aead_common function calls scatterwalk_copychunks on
    the input and output walks to skip the associated data. If the AD ends
    at an SG list entry boundary, then after these calls the walks will
    still be pointing to the end of the skipped region.

    These offsets are later checked for alignment in skcipher_walk_next,
    so the skcipher_walk may detect the alignment incorrectly.

    This patch fixes it by calling scatterwalk_done after the copychunks
    calls to ensure that the offsets refer to the right SG list entry.

    Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
    Cc:
    Signed-off-by: Ondrej Mosnacek
    Signed-off-by: Herbert Xu

    Ondrej Mosnáček
     

07 Oct, 2017

1 commit

  • The skcipher walk interface doesn't handle zero-length input
    properly as the old blkcipher walk interface did. This is due
    to the fact that the length check is done too late.

    This patch moves the length check forward so that it does the
    right thing.

    Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk...")
    Cc:
    Reported-by: Stephan Müller
    Signed-off-by: Herbert Xu

    Herbert Xu
     

13 Jan, 2017

1 commit

  • Continuing from this commit: 52f5684c8e1e
    ("kernel: use macros from compiler.h instead of __attribute__((...))")

    I submitted 4 total patches. They are part of a task I've taken up to
    increase compiler portability in the kernel. I've cleaned up the
    subsystems under /kernel /mm /block and /security, this patch targets
    /crypto.

    There is <linux/compiler.h>, which provides macros for various gcc-specific
    constructs, e.g. __weak for __attribute__((weak)). I've cleaned all
    instances of gcc-specific attributes with the right macros for the crypto
    subsystem.

    I had to make one additional change to compiler-gcc.h for the case when
    one wants to use __attribute__((aligned)) and not specify an alignment
    factor. From the gcc docs, this will result in the largest alignment for
    that data type on the target machine so I've named the macro
    __aligned_largest. Please advise if another name is more appropriate.

    Signed-off-by: Gideon Israel Dsouza
    Signed-off-by: Herbert Xu

    Gideon Israel Dsouza
     

30 Dec, 2016

1 commit

  • In some cases, SIMD algorithms can only perform optimally when
    allowed to operate on multiple input blocks in parallel. This is
    especially true for bit slicing algorithms, which typically take
    the same amount of time processing a single block or 8 blocks in
    parallel. However, other SIMD algorithms may benefit as well from
    bigger strides.

    So add a walksize attribute to the skcipher algorithm definition, and
    wire it up to the skcipher walk API. To avoid confusion between the
    skcipher and AEAD attributes, rename the skcipher_walk chunksize
    attribute to 'stride', and set it from the walksize (in the skcipher
    case) or from the chunksize (in the AEAD case).

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     

14 Dec, 2016

1 commit

  • The new skcipher walk API may crash in the following way. (Interestingly,
    the tcrypt boot time tests seem unaffected, while an explicit test using
    the module triggers it)

    Unable to handle kernel NULL pointer dereference at virtual address 00000000
    ...
    [] __memcpy+0x84/0x180
    [] skcipher_walk_done+0x328/0x340
    [] ctr_encrypt+0x84/0x100
    [] simd_skcipher_encrypt+0x88/0x98
    [] crypto_rfc3686_crypt+0x8c/0x98
    [] test_skcipher_speed+0x518/0x820 [tcrypt]
    [] do_test+0x1408/0x3b70 [tcrypt]
    [] tcrypt_mod_init+0x50/0x1000 [tcrypt]
    [] do_one_initcall+0x44/0x138
    [] do_init_module+0x68/0x1e0
    [] load_module+0x1fd0/0x2458
    [] SyS_finit_module+0xe0/0xf0
    [] el0_svc_naked+0x24/0x28

    This is due to the fact that skcipher_done_slow() may be entered with
    walk->buffer unset. Since skcipher_walk_done() already deals with the
    case where walk->buffer == walk->page, it appears to be the intention
    that walk->buffer point to walk->page after skcipher_next_slow(), so
    ensure that is the case.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     

30 Nov, 2016

1 commit

  • The new skcipher_walk_aead() may crash in the following way due to
    the walk flag SKCIPHER_WALK_PHYS not being cleared at the start of the
    walk:

    Unable to handle kernel NULL pointer dereference at virtual address 00000001
    [..]
    Internal error: Oops: 96000044 [#1] PREEMPT SMP
    [..]
    PC is at skcipher_walk_next+0x208/0x450
    LR is at skcipher_walk_next+0x1e4/0x450
    pc : [] lr : [] pstate: 40000045
    sp : ffffb925fa517940
    [...]
    [] skcipher_walk_next+0x208/0x450
    [] skcipher_walk_first+0x54/0x148
    [] skcipher_walk_aead+0xd4/0x108
    [] ccm_encrypt+0x68/0x158

    So clear the flag at the appropriate time.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     

18 Jul, 2016

2 commits

  • This patch removes the old crypto_grab_skcipher helper and replaces
    it with crypto_grab_skcipher2.

    As this is the final entry point into givcipher this patch also
    removes all traces of the top-level givcipher interface, including
    all implicit IV generators such as chainiv.

    The bottom-level givcipher interface remains until the drivers
    using it are converted.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch allows skcipher algorithms and instances to be created
    and registered with the crypto API. They are accessible through
    the top-level skcipher interface, along with ablkcipher/blkcipher
    algorithms and instances.

    This patch also introduces a new parameter called chunk size
    which is meant for ciphers such as CTR and CTS which ostensibly
    can handle arbitrary lengths, but still behave like block ciphers
    in that you can only process a partial block at the very end.

    For these ciphers the block size will continue to be set to 1
    as it is now while the chunk size will be set to the underlying
    block size.

    Signed-off-by: Herbert Xu

    Herbert Xu
     

25 Jan, 2016

1 commit

  • While converting ecryptfs over to skcipher I found that it needs
    to pick a default key size if one isn't given. Rather than having
    it poke into the guts of the algorithm to get max_keysize, let's
    provide a helper that is meant to give a sane default (just in
    case we ever get an algorithm that has no maximum key size).

    Signed-off-by: Herbert Xu

    Herbert Xu
     

21 Aug, 2015

1 commit

  • This patch introduces the crypto skcipher interface which aims
    to replace both blkcipher and ablkcipher.

    It's very similar to the existing ablkcipher interface. The
    main difference is the removal of the givcrypt interface. In
    order to make the transition easier for blkcipher users, there
    is a helper SKCIPHER_REQUEST_ON_STACK which can be used to place
    a request on the stack for synchronous transforms.

    Signed-off-by: Herbert Xu

    Herbert Xu