24 Mar, 2019

1 commit

  • commit b1f6b4bf416b49f00f3abc49c639371cdecaaad1 upstream.

    Some algorithms have a ->setkey() method that is not atomic, in the
    sense that setting a key can fail after changes were already made to the
    tfm context. In this case, if a key was already set the tfm can end up
    in a state that corresponds to neither the old key nor the new key.

    For example, in lrw.c, if gf128mul_init_64k_bbe() fails due to lack of
    memory, then priv::table will be left NULL. After that, encryption with
    that tfm will cause a NULL pointer dereference.

    It's not feasible to make all ->setkey() methods atomic, especially ones
    that have to key multiple sub-tfms. Therefore, make the crypto API set
    CRYPTO_TFM_NEED_KEY if ->setkey() fails and the algorithm requires a
    key, to prevent the tfm from being used until a new key is set.
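
    The idea behind the fix can be sketched in simplified userspace C
    (struct my_tfm, MY_NEED_KEY and the error codes are illustrative
    stand-ins, not the kernel's actual definitions):

```c
#include <stddef.h>

#define MY_NEED_KEY 0x1   /* stand-in for CRYPTO_TFM_NEED_KEY */

struct my_tfm {
    unsigned int flags;
    int key_ok;           /* stand-in for real key material */
};

/* Underlying setkey that is not atomic: state is already modified
 * before a failure is noticed, leaving a half-keyed transform. */
static int raw_setkey(struct my_tfm *tfm, int good_key)
{
    tfm->key_ok = 0;      /* context changed ... */
    if (!good_key)
        return -1;        /* ... before the failure surfaces */
    tfm->key_ok = 1;
    return 0;
}

/* Wrapper mirroring the fix: any setkey failure sets NEED_KEY so the
 * half-keyed tfm cannot be used until a new key is set successfully. */
static int my_setkey(struct my_tfm *tfm, int good_key)
{
    int err = raw_setkey(tfm, good_key);

    if (err)
        tfm->flags |= MY_NEED_KEY;
    else
        tfm->flags &= ~MY_NEED_KEY;
    return err;
}

/* The encrypt entry point refuses to run while NEED_KEY is set. */
static int my_encrypt(struct my_tfm *tfm)
{
    if (tfm->flags & MY_NEED_KEY)
        return -2;        /* would be -ENOKEY in the kernel */
    return 0;
}
```

    Once a failed setkey has set the flag, every encrypt attempt fails
    until a subsequent successful setkey clears it again.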

    [Cc stable mainly because when introducing the NEED_KEY flag I changed
    AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG
    previously didn't have this problem. So these "incompletely keyed"
    states became theoretically accessible via AF_ALG -- though, the
    opportunities for causing real mischief seem pretty limited.]

    Fixes: f8d33fac8480 ("crypto: skcipher - prevent using skciphers without setting key")
    Cc: # v4.16+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Signed-off-by: Greg Kroah-Hartman

    Eric Biggers
     

03 Aug, 2018

3 commits

  • scatterwalk_done() is only meant to be called after a nonzero number of
    bytes have been processed, since scatterwalk_pagedone() will flush the
    dcache of the *previous* page. But in the error case of
    skcipher_walk_done(), e.g. if the input wasn't an integer number of
    blocks, scatterwalk_done() was actually called after advancing 0 bytes.
    This caused a crash ("BUG: unable to handle kernel paging request")
    during '!PageSlab(page)' on architectures like arm and arm64 that define
    ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE, provided that the input was
    page-aligned as in that case walk->offset == 0.

    Fix it by reorganizing skcipher_walk_done() to skip the
    scatterwalk_advance() and scatterwalk_done() if an error has occurred.
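
    The reorganized finish path can be sketched as follows (the toy_*
    names are illustrative; in the kernel the helpers are
    scatterwalk_advance() and scatterwalk_done()):

```c
/* Counters standing in for the side effects of the real helpers. */
static int advanced, dcache_flushed;

static void toy_advance(int n) { advanced += n; }

/* Only valid after a nonzero advance: flushes the *previous* page. */
static void toy_done(void) { dcache_flushed++; }

/* Sketch of the fix: on error, skip both the advance and the
 * done/flush step rather than calling toy_done() after 0 bytes. */
static int toy_walk_done(int err, int nbytes)
{
    if (err < 0)
        return err;          /* error path: touch nothing */
    toy_advance(nbytes);
    toy_done();
    return 0;
}
```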

    This bug was found by syzkaller fuzzing.

    Reproducer, assuming ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE:

    #include <linux/if_alg.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main()
    {
            struct sockaddr_alg addr = {
                    .salg_type = "skcipher",
                    .salg_name = "cbc(aes-generic)",
            };
            char buffer[4096] __attribute__((aligned(4096))) = { 0 };
            int fd;

            fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(fd, (void *)&addr, sizeof(addr));
            setsockopt(fd, SOL_ALG, ALG_SET_KEY, buffer, 16);
            fd = accept(fd, NULL, NULL);
            write(fd, buffer, 15);
            read(fd, buffer, 15);
    }

    Reported-by: Liu Chao
    Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
    Cc: # v4.10+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Setting 'walk->nbytes = walk->total' in skcipher_walk_first() doesn't
    make sense because actually walk->nbytes needs to be set to the length
    of the first step in the walk, which may be less than walk->total. This
    is done by skcipher_walk_next() which is called immediately afterwards.
    Also walk->nbytes was already set to 0 in skcipher_walk_skcipher(),
    which is a better default value in case it's forgotten to be set later.

    Therefore, remove the unnecessary assignment to walk->nbytes.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • The ALIGN() macro needs to be passed the alignment, not the alignmask
    (which is the alignment minus 1).
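
    A minimal illustration of the difference (TOY_ALIGN mirrors the
    kernel's ALIGN macro; the mask 7 corresponds to an alignment of 8):

```c
/* Mirrors the kernel's ALIGN(x, a): round x up to a multiple of a,
 * where a is a power of two. */
#define TOY_ALIGN(x, a) (((x) + ((a) - 1)) & ~((a) - 1))

/*
 * Correct use converts an alignmask back into an alignment first.
 * Passing the mask itself (alignment - 1) gives a wrong result:
 * TOY_ALIGN(9, 8) == 16, but TOY_ALIGN(9, 7) == 9.
 */
static unsigned long round_up_masked(unsigned long x, unsigned long alignmask)
{
    return TOY_ALIGN(x, alignmask + 1);
}
```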

    Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
    Cc: # v4.10+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

01 Jul, 2018

1 commit

    The function skcipher_walk_next is declared static yet marked with
    EXPORT_SYMBOL_GPL, which is confusing for an internal function: its
    visibility becomes its own .c file plus every other module, so other
    .c files of the same module can't use it even though other modules
    can. Since this is an internal function and not a crucial part of
    the API, the patch simply removes the EXPORT_SYMBOL_GPL marking from
    skcipher_walk_next.

    Found by Linux Driver Verification project (linuxtesting.org).

    Signed-off-by: Denis Efremov
    Signed-off-by: Herbert Xu

    Denis Efremov
     

12 Jan, 2018

1 commit

  • Similar to what was done for the hash API, update the skcipher API to
    track whether each transform has been keyed, and reject
    encryption/decryption if a key is needed but one hasn't been set.

    This isn't as important as the equivalent fix for the hash API because
    symmetric ciphers almost always require a key (the "null cipher" is the
    only exception), so are unlikely to be used without one. Still,
    tracking the key will prevent accidental unkeyed use. algif_skcipher
    also had to track the key anyway, so the new flag replaces that and
    simplifies the algif_skcipher implementation.
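
    The tracking can be sketched in simplified userspace C (the toy_*
    names and flag value are illustrative; the kernel uses
    CRYPTO_TFM_NEED_KEY):

```c
#define NEED_KEY_FLAG 0x1   /* stand-in for CRYPTO_TFM_NEED_KEY */

struct toy_tfm { unsigned int flags; };

/* A freshly allocated transform starts out unkeyed. */
static void toy_init(struct toy_tfm *tfm)
{
    tfm->flags = NEED_KEY_FLAG;
}

/* A successful setkey clears the flag. */
static void toy_setkey(struct toy_tfm *tfm)
{
    tfm->flags &= ~NEED_KEY_FLAG;
}

/* encrypt/decrypt reject unkeyed transforms up front. */
static int toy_encrypt(struct toy_tfm *tfm)
{
    if (tfm->flags & NEED_KEY_FLAG)
        return -1;           /* -ENOKEY in the kernel */
    return 0;
}
```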

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

11 Dec, 2017

1 commit

  • All the ChaCha20 algorithms as well as the ARM bit-sliced AES-XTS
    algorithms call skcipher_walk_virt(), then access the IV (walk.iv)
    before checking whether any bytes need to be processed (walk.nbytes).

    But if the input is empty, then skcipher_walk_virt() doesn't set the IV,
    and the algorithms crash trying to use the uninitialized IV pointer.

    Fix it by setting the IV earlier in skcipher_walk_virt(). Also fix it
    for the AEAD walk functions.

    This isn't a perfect solution, because we can't actually align the
    IV to ->cra_alignmask unless there are bytes to process; one reason
    is that the temporary buffer for the aligned IV is freed by
    skcipher_walk_done(), which is only called when there are bytes to
    process. Thus, algorithms
    that require aligned IVs will still need to avoid accessing the IV when
    walk.nbytes == 0. Still, many algorithms/architectures are fine with
    IVs having any alignment, and even for those that aren't, a misaligned
    pointer bug is much less severe than an uninitialized pointer bug.

    This change also matches the behavior of the older blkcipher_walk API.
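
    The fixed initialization order can be sketched with a toy model (not
    the real skcipher_walk_virt(); names are illustrative):

```c
#include <stddef.h>

struct toy_walk {
    const unsigned char *iv;
    unsigned int nbytes;
    unsigned int total;
};

/*
 * The IV pointer is populated before the zero-length early return, so
 * callers that read walk.iv unconditionally no longer see an
 * uninitialized pointer when the input is empty.
 */
static int toy_walk_virt(struct toy_walk *walk, const unsigned char *iv,
                         unsigned int total)
{
    walk->iv = iv;           /* set the IV first (the fix) */
    walk->total = total;
    if (total == 0) {
        walk->nbytes = 0;    /* nothing to process */
        return 0;
    }
    walk->nbytes = total;    /* first step; simplified to one step */
    return 0;
}
```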

    Fixes: 0cabf2af6f5a ("crypto: skcipher - Fix crash on zero-length input")
    Reported-by: syzbot
    Cc: # v4.14+
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     


25 Nov, 2017

1 commit

  • The skcipher_walk_aead_common function calls scatterwalk_copychunks on
    the input and output walks to skip the associated data. If the AD ends
    at an SG list entry boundary, then after these calls the walks will
    still be pointing to the end of the skipped region.

    These offsets are later checked for alignment in skcipher_walk_next,
    so the skcipher_walk may detect the alignment incorrectly.

    This patch fixes it by calling scatterwalk_done after the copychunks
    calls to ensure that the offsets refer to the right SG list entry.
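
    What the added call accomplishes can be modeled with a toy
    scatterlist walk (names are illustrative):

```c
#include <stddef.h>

struct toy_sg { unsigned int length; struct toy_sg *next; };
struct toy_sgwalk { struct toy_sg *sg; unsigned int offset; };

/*
 * If skipping the AD left the offset exactly at the end of an SG
 * entry, step to the start of the next entry so later alignment
 * checks see the real data offset rather than the previous entry's
 * length.
 */
static void toy_sg_done(struct toy_sgwalk *w)
{
    if (w->sg && w->offset == w->sg->length) {
        w->sg = w->sg->next;
        w->offset = 0;
    }
}
```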

    Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk interface")
    Cc:
    Signed-off-by: Ondrej Mosnacek
    Signed-off-by: Herbert Xu

    Ondrej Mosnáček
     

07 Oct, 2017

1 commit

    The skcipher walk interface doesn't handle zero-length input
    properly, unlike the old blkcipher walk interface. This is
    because the length check is done too late.

    This patch moves the length check forward so that it does the
    right thing.

    Fixes: b286d8b1a690 ("crypto: skcipher - Add skcipher walk...")
    Cc:
    Reported-by: Stephan Müller
    Signed-off-by: Herbert Xu

    Herbert Xu
     


13 Jan, 2017

1 commit

  • Continuing from this commit: 52f5684c8e1e
    ("kernel: use macros from compiler.h instead of __attribute__((...))")

    I submitted 4 total patches. They are part of a task I've taken up
    to increase compiler portability in the kernel. I've cleaned up the
    subsystems under /kernel, /mm, /block and /security; this patch
    targets /crypto.

    There is <linux/compiler.h>, which provides macros for various
    gcc-specific constructs, e.g. __weak for __attribute__((weak)). I've
    replaced all instances of gcc-specific attributes with the right
    macros in the crypto subsystem.

    I had to make one additional change in compiler-gcc.h for the case
    when one wants to use __attribute__((aligned)) without specifying an
    alignment factor. From the gcc docs, this results in the largest
    alignment for that data type on the target machine, so I've named
    the macro __aligned_largest. Please advise if another name is more
    appropriate.
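
    A sketch of the macro and its effect (the toy_ name is illustrative;
    the real macro lives in compiler-gcc.h):

```c
/* __attribute__((aligned)) with no factor asks gcc for the largest
 * alignment ever used for any data type on the target machine. */
#define toy_aligned_largest __attribute__((aligned))

/* Without the attribute, a 3-byte char struct would have alignment 1;
 * with it, the struct gets the target's largest default alignment. */
struct toy_buf {
    unsigned char data[3];
} toy_aligned_largest;
```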

    Signed-off-by: Gideon Israel Dsouza
    Signed-off-by: Herbert Xu

    Gideon Israel Dsouza
     

30 Dec, 2016

1 commit

  • In some cases, SIMD algorithms can only perform optimally when
    allowed to operate on multiple input blocks in parallel. This is
    especially true for bit slicing algorithms, which typically take
    the same amount of time processing a single block or 8 blocks in
    parallel. However, other SIMD algorithms may benefit as well from
    bigger strides.

    So add a walksize attribute to the skcipher algorithm definition, and
    wire it up to the skcipher walk API. To avoid confusion between the
    skcipher and AEAD attributes, rename the skcipher_walk chunksize
    attribute to 'stride', and set it from the walksize (in the skcipher
    case) or from the chunksize (in the AEAD case).
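
    The stride selection can be sketched as follows (struct and function
    names are illustrative, not the kernel's):

```c
struct toy_alg {
    unsigned int chunksize;  /* AEAD-style attribute */
    unsigned int walksize;   /* new skcipher attribute; >= chunksize for
                              * algorithms that want bigger strides */
};

/* The walk's stride comes from walksize for skcipher algorithms and
 * from chunksize for AEAD algorithms. */
static unsigned int toy_walk_stride(const struct toy_alg *alg, int is_aead)
{
    return is_aead ? alg->chunksize : alg->walksize;
}
```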

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     

14 Dec, 2016

1 commit

  • The new skcipher walk API may crash in the following way. (Interestingly,
    the tcrypt boot time tests seem unaffected, while an explicit test using
    the module triggers it)

    Unable to handle kernel NULL pointer dereference at virtual address 00000000
    ...
    [] __memcpy+0x84/0x180
    [] skcipher_walk_done+0x328/0x340
    [] ctr_encrypt+0x84/0x100
    [] simd_skcipher_encrypt+0x88/0x98
    [] crypto_rfc3686_crypt+0x8c/0x98
    [] test_skcipher_speed+0x518/0x820 [tcrypt]
    [] do_test+0x1408/0x3b70 [tcrypt]
    [] tcrypt_mod_init+0x50/0x1000 [tcrypt]
    [] do_one_initcall+0x44/0x138
    [] do_init_module+0x68/0x1e0
    [] load_module+0x1fd0/0x2458
    [] SyS_finit_module+0xe0/0xf0
    [] el0_svc_naked+0x24/0x28

    This is due to the fact that skcipher_done_slow() may be entered with
    walk->buffer unset. Since skcipher_walk_done() already deals with the
    case where walk->buffer == walk->page, it appears to be the intention
    that walk->buffer point to walk->page after skcipher_next_slow(), so
    ensure that is the case.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     


30 Nov, 2016

1 commit

  • The new skcipher_walk_aead() may crash in the following way due to
    the walk flag SKCIPHER_WALK_PHYS not being cleared at the start of the
    walk:

    Unable to handle kernel NULL pointer dereference at virtual address 00000001
    [..]
    Internal error: Oops: 96000044 [#1] PREEMPT SMP
    [..]
    PC is at skcipher_walk_next+0x208/0x450
    LR is at skcipher_walk_next+0x1e4/0x450
    pc : [] lr : [] pstate: 40000045
    sp : ffffb925fa517940
    [...]
    [] skcipher_walk_next+0x208/0x450
    [] skcipher_walk_first+0x54/0x148
    [] skcipher_walk_aead+0xd4/0x108
    [] ccm_encrypt+0x68/0x158

    So clear the flag at the appropriate time.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     


18 Jul, 2016

2 commits

  • This patch removes the old crypto_grab_skcipher helper and replaces
    it with crypto_grab_skcipher2.

    As this is the final entry point into givcipher this patch also
    removes all traces of the top-level givcipher interface, including
    all implicit IV generators such as chainiv.

    The bottom-level givcipher interface remains until the drivers
    using it are converted.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch allows skcipher algorithms and instances to be created
    and registered with the crypto API. They are accessible through
    the top-level skcipher interface, along with ablkcipher/blkcipher
    algorithms and instances.

    This patch also introduces a new parameter called chunk size
    which is meant for ciphers such as CTR and CTS which ostensibly
    can handle arbitrary lengths, but still behave like block ciphers
    in that you can only process a partial block at the very end.

    For these ciphers the block size will continue to be set to 1
    as it is now while the chunk size will be set to the underlying
    block size.
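
    The resulting rule can be sketched with an illustrative helper:

```c
/*
 * For stream-like modes such as CTR and CTS the block size is 1, but
 * only the final chunk may be partial: mid-stream, processing must
 * stop on a chunk-size boundary. This returns the bytes that can be
 * processed before the tail.
 */
static unsigned int toy_full_chunks(unsigned int total, unsigned int chunksize)
{
    return total - (total % chunksize);
}
```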

    Signed-off-by: Herbert Xu

    Herbert Xu
     

25 Jan, 2016

1 commit

  • While converting ecryptfs over to skcipher I found that it needs
    to pick a default key size if one isn't given. Rather than having
    it poke into the guts of the algorithm to get max_keysize, let's
    provide a helper that is meant to give a sane default (just in
    case we ever get an algorithm that has no maximum key size).
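
    The helper's intent can be sketched as follows (toy names; per the
    text above, the sane default it hands back is the algorithm's
    max_keysize rather than making callers dig it out themselves):

```c
struct toy_skcipher_alg {
    unsigned int min_keysize;
    unsigned int max_keysize;
};

/* Give callers a sane default key length without poking into the
 * algorithm's guts. */
static unsigned int toy_default_keysize(const struct toy_skcipher_alg *alg)
{
    return alg->max_keysize;
}
```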

    Signed-off-by: Herbert Xu

    Herbert Xu
     

18 Jan, 2016

1 commit


01 Oct, 2015

1 commit


21 Aug, 2015

1 commit

  • This patch introduces the crypto skcipher interface which aims
    to replace both blkcipher and ablkcipher.

    It's very similar to the existing ablkcipher interface. The
    main difference is the removal of the givcrypt interface. In
    order to make the transition easier for blkcipher users, there
    is a helper SKCIPHER_REQUEST_ON_STACK which can be used to place
    a request on the stack for synchronous transforms.
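
    The on-stack idea can be modeled in plain C like this (a toy
    fixed-size request; the real macro sizes its buffer from the
    transform's request size, which isn't known at compile time):

```c
/* Toy request with a fixed-size context area. */
struct toy_request {
    unsigned int len;
    unsigned char ctx[64];
};

/*
 * Reserve storage for a request on the caller's stack so synchronous
 * users avoid a heap allocation, and hand back a pointer to it.
 */
#define TOY_REQUEST_ON_STACK(name) \
    struct toy_request name##_storage = { 0, { 0 } }; \
    struct toy_request *name = &name##_storage
```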

    Signed-off-by: Herbert Xu

    Herbert Xu