08 Aug, 2020

1 commit

  • As said by Linus:

    A symmetric naming is only helpful if it implies symmetries in use.
    Otherwise it's actively misleading.

    In "kzalloc()", the z is meaningful and an important part of what the
    caller wants.

    In "kzfree()", the z is actively detrimental, because maybe in the
    future we really _might_ want to use that "memfill(0xdeadbeef)" or
    something. The "zero" part of the interface isn't even _relevant_.

    The main reason that kzfree() exists is to clear sensitive information
    that should not be leaked to other future users of the same memory
    objects.

    Rename kzfree() to kfree_sensitive() to follow the example of the recently
    added kvfree_sensitive() and make the intention of the API more explicit.
    In addition, memzero_explicit() is used to clear the memory to make sure
    that it won't get optimized away by the compiler.

    The renaming is done by using the command sequence:

    git grep -w --name-only kzfree |\
    xargs sed -i 's/kzfree/kfree_sensitive/'

    followed by some editing of the kfree_sensitive() kerneldoc and adding
    a kzfree backward compatibility macro in slab.h.
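
    As a rough sketch of what the renamed helper does (illustrative only;
    the real implementation lives in the slab code and may differ in
    detail), the whole allocation is wiped with memzero_explicit() before
    being freed:

    #include <linux/slab.h>      /* ksize(), kfree() */
    #include <linux/string.h>    /* memzero_explicit() */

    void kfree_sensitive(const void *p)
    {
            void *mem = (void *)p;

            if (!mem)
                    return;
            memzero_explicit(mem, ksize(mem));  /* clear cannot be optimized away */
            kfree(mem);
    }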

    [akpm@linux-foundation.org: fs/crypto/inline_crypt.c needs linux/slab.h]
    [akpm@linux-foundation.org: fix fs/crypto/inline_crypt.c some more]

    Suggested-by: Joe Perches
    Signed-off-by: Waiman Long
    Signed-off-by: Andrew Morton
    Acked-by: David Howells
    Acked-by: Michal Hocko
    Acked-by: Johannes Weiner
    Cc: Jarkko Sakkinen
    Cc: James Morris
    Cc: "Serge E. Hallyn"
    Cc: Joe Perches
    Cc: Matthew Wilcox
    Cc: David Rientjes
    Cc: Dan Carpenter
    Cc: "Jason A . Donenfeld"
    Link: http://lkml.kernel.org/r/20200616154311.12314-3-longman@redhat.com
    Signed-off-by: Linus Torvalds

    Waiman Long
     

09 Jul, 2020

1 commit

  • For a Linux server with NUMA, there may be multiple (de)compressors,
    each either local or remote to a given NUMA node. Some drivers will
    automatically use the (de)compressor near the CPU calling
    acomp_alloc(). However, that is not necessarily correct, because the
    users who submit an acomp_req may be on a different NUMA node from
    the CPU that allocated the acomp.

    Just as the kernel has kmalloc() and kmalloc_node(), the crypto API
    can offer the same kind of support.
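
    A minimal sketch of the NUMA-aware allocation this adds, by analogy
    with kmalloc_node(); the crypto_alloc_acomp_node() name and signature
    are assumptions based on this description, not quoted from the patch:

    #include <crypto/acompress.h>
    #include <linux/topology.h>

    static struct crypto_acomp *example_alloc_local_acomp(void)
    {
            /* Prefer a (de)compressor close to the node submitting requests. */
            return crypto_alloc_acomp_node("lzo", 0, 0, numa_node_id());
    }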

    Cc: Seth Jennings
    Cc: Dan Streetman
    Cc: Vitaly Wool
    Cc: Andrew Morton
    Cc: Jonathan Cameron
    Signed-off-by: Barry Song
    Signed-off-by: Herbert Xu

    Barry Song
     

16 Apr, 2020

1 commit

  • There are two problems in crypto_spawn_alg. First of all, it may
    return spawn->alg even when spawn->dead is set. This results in a
    double-free, as detected by syzbot.

    Secondly, the setting of the DYING flag is racy because we hold the
    read-lock instead of the write-lock. We should instead call
    crypto_shoot_alg in a safe manner: take a refcount, drop the lock,
    and then release the refcount.

    This patch fixes both problems.

    Reported-by: syzbot+fc0674cde00b66844470@syzkaller.appspotmail.com
    Fixes: 4f87ee118d16 ("crypto: api - Do not zap spawn->alg")
    Fixes: 73669cc55646 ("crypto: api - Fix race condition in...")
    Cc:
    Signed-off-by: Herbert Xu

    Herbert Xu
     

20 Dec, 2019

1 commit

  • When CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y, the first lookup of an
    algorithm that needs to be instantiated using a template will always get
    the generic implementation, even when an accelerated one is available.

    This happens because the extra self-tests for the accelerated
    implementation allocate the generic implementation for comparison
    purposes, and then crypto_alg_tested() for the generic implementation
    "fulfills" the original request (i.e. sets crypto_larval::adult).

    This patch fixes this by only fulfilling the original request if
    we are currently the best outstanding larval as judged by the
    priority. If we're not the best then we will ask all waiters on
    that larval request to retry the lookup.

    Note that this patch introduces a behaviour change when the module
    providing the new algorithm is unregistered during the process.
    Previously we would have failed with ENOENT; after the patch we
    will instead redo the lookup.

    Fixes: 9a8a6b3f0950 ("crypto: testmgr - fuzz hashes against...")
    Fixes: d435e10e67be ("crypto: testmgr - fuzz skciphers against...")
    Fixes: 40153b10d91c ("crypto: testmgr - fuzz AEADs against...")
    Reported-by: Eric Biggers
    Signed-off-by: Herbert Xu
    Reviewed-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Herbert Xu
     

11 Dec, 2019

4 commits

  • The function crypto_spawn_alg is racy because it drops the lock
    before shooting the dying algorithm. The algorithm could disappear
    altogether before we shoot it.

    This patch fixes it by moving the shooting into the locked section.

    Fixes: 6bfd48096ff8 ("[CRYPTO] api: Added spawns")
    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • Of the three fields in crt_u.cipher (struct cipher_tfm), ->cit_setkey()
    is pointless because it always points to setkey() in crypto/cipher.c.

    ->cit_decrypt_one() and ->cit_encrypt_one() are slightly less pointless,
    since if the algorithm doesn't have an alignmask, they are set directly
    to ->cia_encrypt() and ->cia_decrypt(). However, this "optimization"
    isn't worthwhile because:

    - The "cipher" algorithm type is the only algorithm still using crt_u,
    so it's bloating every struct crypto_tfm for every algorithm type.

    - If the algorithm has an alignmask, this "optimization" actually makes
    things slower, as it causes 2 indirect calls per block rather than 1.

    - It adds extra code complexity.

    - Some templates already call ->cia_encrypt()/->cia_decrypt() directly
    instead of going through ->cit_encrypt_one()/->cit_decrypt_one().

    - The "cipher" algorithm type never gives optimal performance anyway.
    For that, a higher-level type such as skcipher needs to be used.

    Therefore, just remove the extra indirection, and make
    crypto_cipher_setkey(), crypto_cipher_encrypt_one(), and
    crypto_cipher_decrypt_one() be direct calls into crypto/cipher.c.

    Also remove the unused function crypto_cipher_cast().
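
    The calling convention is unchanged; callers still use the
    single-block cipher helpers as before (an illustrative sketch, not
    code from the patch):

    #include <linux/crypto.h>

    static int example_single_block(const u8 *key, unsigned int keylen,
                                    u8 block[16])
    {
            struct crypto_cipher *tfm;
            int err;

            tfm = crypto_alloc_cipher("aes", 0, 0);
            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);

            err = crypto_cipher_setkey(tfm, key, keylen);
            if (!err)
                    /* Now a direct call into crypto/cipher.c, not ->cit_encrypt_one(). */
                    crypto_cipher_encrypt_one(tfm, block, block);

            crypto_free_cipher(tfm);
            return err;
    }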

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • crt_u.compress (struct compress_tfm) is pointless because its two
    fields, ->cot_compress() and ->cot_decompress(), always point to
    crypto_compress() and crypto_decompress().

    Remove this pointless indirection, and just make crypto_comp_compress()
    and crypto_comp_decompress() be direct calls to what used to be
    crypto_compress() and crypto_decompress().

    Also remove the unused function crypto_comp_cast().
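
    Again, callers are unaffected; compression still goes through the
    same wrappers (an illustrative sketch using the existing API names):

    #include <linux/crypto.h>

    static int example_compress(const u8 *src, unsigned int slen,
                                u8 *dst, unsigned int *dlen)
    {
            struct crypto_comp *tfm;
            int err;

            tfm = crypto_alloc_comp("deflate", 0, 0);
            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);

            /* Now a direct call rather than an indirect ->cot_compress(). */
            err = crypto_comp_compress(tfm, src, slen, dst, dlen);

            crypto_free_comp(tfm);
            return err;
    }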

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Update a comment to refer to crypto_alloc_skcipher() rather than
    crypto_alloc_blkcipher() (the latter having been removed).

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

17 Nov, 2019

1 commit

  • The crypto API requires cryptomgr to be present for probing to work,
    so we need a softdep to ensure that cryptomgr is added to the
    initramfs.

    This was usually not a problem because, until very recently, it was
    not practical to build the crypto API as a module; with the recent
    work to eliminate direct AES users, this is now possible.
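
    The mechanism is presumably the standard module soft-dependency tag,
    roughly as below (a sketch; the exact placement in the crypto core is
    an assumption):

    #include <linux/module.h>

    /* Ensure cryptomgr is packed into the initramfs alongside the crypto
     * core so that probing and self-tests can run at boot. */
    MODULE_SOFTDEP("pre: cryptomgr");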

    Signed-off-by: Herbert Xu

    Herbert Xu
     

01 Nov, 2019

1 commit

  • Now that all "blkcipher" algorithms have been converted to "skcipher",
    remove the blkcipher algorithm type.

    The skcipher (symmetric key cipher) algorithm type was introduced a few
    years ago to replace both blkcipher and ablkcipher (synchronous and
    asynchronous block cipher). The advantages of skcipher include:

    - A much less confusing name, since none of these algorithm types have
    ever actually been for raw block ciphers, but rather for all
    length-preserving encryption modes including block cipher modes of
    operation, stream ciphers, and other length-preserving modes.

    - It unified blkcipher and ablkcipher into a single algorithm type
    which supports both synchronous and asynchronous implementations.
    Note, blkcipher already operated only on scatterlists, so the fact
    that skcipher does too isn't a regression in functionality.

    - Better type safety by using struct skcipher_alg, struct
    crypto_skcipher, etc. instead of crypto_alg, crypto_tfm, etc.

    - It sometimes simplifies the implementations of algorithms.

    Also, the blkcipher API was no longer being tested.
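
    For reference, a minimal sketch of the skcipher API that replaced
    blkcipher; it restricts itself to synchronous implementations by
    masking out CRYPTO_ALG_ASYNC, and the key/IV handling is illustrative:

    #include <crypto/skcipher.h>
    #include <linux/scatterlist.h>

    static int example_cbc_aes_encrypt(u8 *buf, unsigned int len, /* multiple of 16 */
                                       const u8 *key, unsigned int keylen,
                                       u8 iv[16])
    {
            struct crypto_skcipher *tfm;
            struct skcipher_request *req = NULL;
            struct scatterlist sg;
            int err;

            /* Ask for a synchronous implementation only. */
            tfm = crypto_alloc_skcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC);
            if (IS_ERR(tfm))
                    return PTR_ERR(tfm);

            err = crypto_skcipher_setkey(tfm, key, keylen);
            if (err)
                    goto out;

            req = skcipher_request_alloc(tfm, GFP_KERNEL);
            if (!req) {
                    err = -ENOMEM;
                    goto out;
            }

            sg_init_one(&sg, buf, len);
            skcipher_request_set_crypt(req, &sg, &sg, len, iv);  /* in place */
            err = crypto_skcipher_encrypt(req);
    out:
            skcipher_request_free(req);
            crypto_free_skcipher(tfm);
            return err;
    }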

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

31 May, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 3029 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

18 Jul, 2018

1 commit

  • When EVM attempts to appraise a file signed with a crypto algorithm
    the kernel doesn't have support for, it will cause the kernel to
    trigger a module load. If the EVM policy includes appraisal of kernel
    modules, this will in turn call back into EVM; since EVM is holding a
    lock until the crypto initialisation is complete, this triggers a
    deadlock. Add a CRYPTO_NOLOAD flag and skip module loading if it's
    set, and add that flag in the EVM case in order to fail gracefully
    with an error message instead of deadlocking.
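
    A sketch of how a caller such as EVM might use the new flag; the exact
    call site and the argument the flag is passed in are assumptions here:

    #include <crypto/hash.h>

    static struct crypto_shash *example_alloc_noload(const char *algo)
    {
            /* With CRYPTO_NOLOAD set, a missing algorithm makes the lookup
             * fail instead of triggering a module load (and the deadlock
             * described above). */
            return crypto_alloc_shash(algo, 0, CRYPTO_NOLOAD);
    }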

    Signed-off-by: Matthew Garrett
    Acked-by: Herbert Xu
    Signed-off-by: Mimi Zohar

    Matthew Garrett
     

21 Apr, 2018

1 commit

  • Commit eb02c38f0197 ("crypto: api - Keep failed instances alive") is
    making allocating crypto transforms sometimes fail with ELIBBAD, when
    multiple processes try to access encrypted files with fscrypt for the
    first time since boot. The problem is that the "request larval" for the
    algorithm is being mistaken for an algorithm which failed its tests.

    Fix it by only returning ELIBBAD for "non-larval" algorithms. Also
    don't leak a reference to the algorithm.

    Fixes: eb02c38f0197 ("crypto: api - Keep failed instances alive")
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

31 Mar, 2018

3 commits

  • This patch reverts commit 9c521a200bc3 ("crypto: api - remove
    instance when test failed") and fixes the underlying problem
    in a different way.

    To recap, prior to the reverted commit, an instance that fails
    a self-test is kept around. However, it would satisfy any new
    lookups against its name and therefore the system may accumulate
    an unbounded number of failed instances for the same algorithm
    name.

    The reverted commit fixed it by unregistering the instance. However,
    this still does not prevent the creation of the same failed instance
    over and over again each time the name is looked up.

    This patch fixes it by keeping the failed instance around, just as
    we would if it were a normal algorithm. However, the lookup code
    has been updated so that we do not attempt to create another
    instance as long as this failed one is still registered. Of course,
    you could still force a new creation by deleting the instance from
    user-space.

    A new error (ELIBBAD) has been commandeered for this purpose and
    will be returned when all registered algorithms of a given name
    have failed the self-test.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • The function crypto_alg_lookup is only used within the crypto API
    and should not be exported to the modules. This patch marks
    it as a static function.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • The lookup function in crypto_type was only used for the implicit
    IV generators which have been completely removed from the crypto
    API.

    This patch removes the lookup function as it is now useless.

    Signed-off-by: Herbert Xu

    Herbert Xu
     

05 Jan, 2018

1 commit

  • Reference counters should use refcount_t rather than atomic_t, since the
    refcount_t implementation can prevent overflows, reducing the
    exploitability of reference leak bugs. crypto_alg.cra_refcnt is a
    reference counter with the usual semantics, so switch it over to
    refcount_t.
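
    For illustration, the usual refcount_t pattern the counter is being
    switched to (a generic sketch; my_obj is not a crypto API structure):

    #include <linux/refcount.h>
    #include <linux/slab.h>

    struct my_obj {
            refcount_t ref;
    };

    static void my_obj_get(struct my_obj *obj)
    {
            refcount_inc(&obj->ref);   /* saturates and WARNs on overflow */
    }

    static void my_obj_put(struct my_obj *obj)
    {
            if (refcount_dec_and_test(&obj->ref))
                    kfree(obj);
    }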

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

22 Dec, 2017

1 commit


03 Nov, 2017

1 commit

  • Invoking a possibly asynchronous crypto op and waiting for its
    completion while correctly handling backlog processing is a common
    task both inside the crypto API implementation and for outside users
    of it.

    This patch adds a generic implementation for doing so in
    preparation for using it across the board instead of hand-rolled
    versions.
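
    The resulting helpers are used roughly like this (a sketch with an
    skcipher request; allocation and setup of the tfm and request are
    assumed to happen elsewhere):

    #include <crypto/skcipher.h>
    #include <linux/crypto.h>

    static int example_encrypt_and_wait(struct skcipher_request *req)
    {
            DECLARE_CRYPTO_WAIT(wait);

            /* Route the asynchronous completion callback into the wait object. */
            skcipher_request_set_callback(req,
                                          CRYPTO_TFM_REQ_MAY_BACKLOG |
                                          CRYPTO_TFM_REQ_MAY_SLEEP,
                                          crypto_req_done, &wait);

            /* Handles -EINPROGRESS/-EBUSY (backlog) and sleeps until done. */
            return crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
    }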

    Signed-off-by: Gilad Ben-Yossef
    CC: Eric Biggers
    CC: Jonathan Cameron
    Signed-off-by: Herbert Xu

    Gilad Ben-Yossef
     

02 Mar, 2017

1 commit


28 Nov, 2016

1 commit

  • Currently all bits not set in mask are cleared in crypto_larval_lookup.
    This is unnecessary because wherever the type bits are used they are
    always masked anyway.

    This patch removes the clearing so that we may use bits set in the
    type but not in the mask for special purposes, e.g., picking up
    internal algorithms.

    Signed-off-by: Herbert Xu

    Herbert Xu
     

21 Oct, 2016

1 commit


20 Oct, 2015

1 commit

  • Currently a number of Crypto API operations may fail when a signal
    occurs. This causes nasty problems, as the callers of those operations
    are often not in a good position to restart the operation.

    In fact there is currently no need for those operations to be
    interrupted by user signals at all. All we need is for them to
    be killable.

    This patch replaces the relevant calls of signal_pending with
    fatal_signal_pending, and wait_for_completion_interruptible with
    wait_for_completion_killable, respectively.
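
    In pattern form, the change is simply the following (a generic sketch,
    not the exact call sites):

    #include <linux/completion.h>

    static int example_wait(struct completion *done)
    {
            /* Before the patch: any signal aborts the wait.
             *     return wait_for_completion_interruptible(done);
             * After the patch: only a fatal (killing) signal does; likewise,
             * signal_pending() checks become fatal_signal_pending(). */
            return wait_for_completion_killable(done);
    }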

    Cc: stable@vger.kernel.org
    Signed-off-by: Herbert Xu

    Herbert Xu
     

31 Mar, 2015

1 commit

  • Several hardware-related ciphers are implemented as follows: a
    "helper" cipher implementation is registered with the kernel
    crypto API.

    Such helper ciphers are never intended to be called by normal users. In
    some cases, calling them via the normal crypto API may even cause
    failures including kernel crashes. In a normal case, the "wrapping"
    ciphers that use the helpers ensure that these helpers are invoked
    such that they cannot cause any calamity.

    Considering the AF_ALG user-space interface, unprivileged users can
    call all ciphers registered with the crypto API, including these
    helper ciphers that are not intended to be called directly. That
    means that, via AF_ALG, user space may invoke these helper ciphers
    and cause undefined states or side effects.

    To avoid any potential side effects with such helpers, the patch
    prevents the helpers from being called directly. A new cipher type
    flag is added: CRYPTO_ALG_INTERNAL. This flag shall be used
    to mark helper ciphers. These ciphers can only be used if the
    caller invokes the cipher with CRYPTO_ALG_INTERNAL set in both the
    type and mask fields.
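
    As a sketch, requesting such a helper explicitly requires the flag in
    both type and mask; the algorithm name below is only an example:

    #include <linux/crypto.h>

    static struct crypto_cipher *example_get_internal_helper(void)
    {
            /* Both type and mask carry CRYPTO_ALG_INTERNAL, so only algorithms
             * marked internal can satisfy this request; ordinary lookups that
             * do not set the flag are not handed such helpers. */
            return crypto_alloc_cipher("__aes-aesni", CRYPTO_ALG_INTERNAL,
                                       CRYPTO_ALG_INTERNAL);
    }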

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     

24 Nov, 2014

1 commit


08 Sep, 2013

1 commit

  • crypto_larval_lookup should only return a larval if it created one.
    Any larval created by another entity must be processed through
    crypto_larval_wait before being returned.

    Otherwise this will lead to a larval being killed twice, which
    will most likely lead to a crash.

    Cc: stable@vger.kernel.org
    Reported-by: Kees Cook
    Tested-by: Kees Cook
    Signed-off-by: Herbert Xu

    Herbert Xu
     

20 Aug, 2013

1 commit


25 Jun, 2013

1 commit

  • On Thu, Jun 20, 2013 at 10:00:21AM +0200, Daniel Borkmann wrote:
    > After having fixed a NULL pointer dereference in SCTP 1abd165e ("net:
    > sctp: fix NULL pointer dereference in socket destruction"), I ran into
    > the following NULL pointer dereference in the crypto subsystem with
    > the same reproducer, easily hit each time:
    >
    > BUG: unable to handle kernel NULL pointer dereference at (null)
    > IP: [] __wake_up_common+0x31/0x90
    > PGD 0
    > Oops: 0000 [#1] SMP
    > Modules linked in: padlock_sha(F-) sha256_generic(F) sctp(F) libcrc32c(F) [..]
    > CPU: 6 PID: 3326 Comm: cryptomgr_probe Tainted: GF 3.10.0-rc5+ #1
    > Hardware name: Dell Inc. PowerEdge T410/0H19HD, BIOS 1.6.3 02/01/2011
    > task: ffff88007b6cf4e0 ti: ffff88007b7cc000 task.ti: ffff88007b7cc000
    > RIP: 0010:[] [] __wake_up_common+0x31/0x90
    > RSP: 0018:ffff88007b7cde08 EFLAGS: 00010082
    > RAX: ffffffffffffffe8 RBX: ffff88003756c130 RCX: 0000000000000000
    > RDX: 0000000000000000 RSI: 0000000000000003 RDI: ffff88003756c130
    > RBP: ffff88007b7cde48 R08: 0000000000000000 R09: ffff88012b173200
    > R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000282
    > R13: ffff88003756c138 R14: 0000000000000000 R15: 0000000000000000
    > FS: 0000000000000000(0000) GS:ffff88012fc60000(0000) knlGS:0000000000000000
    > CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
    > CR2: 0000000000000000 CR3: 0000000001a0b000 CR4: 00000000000007e0
    > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    > DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
    > Stack:
    > ffff88007b7cde28 0000000300000000 ffff88007b7cde28 ffff88003756c130
    > 0000000000000282 ffff88003756c128 ffffffff81227670 0000000000000000
    > ffff88007b7cde78 ffffffff810722b7 ffff88007cdcf000 ffffffff81a90540
    > Call Trace:
    > [] ? crypto_alloc_pcomp+0x20/0x20
    > [] complete_all+0x47/0x60
    > [] cryptomgr_probe+0x98/0xc0
    > [] ? crypto_alloc_pcomp+0x20/0x20
    > [] kthread+0xce/0xe0
    > [] ? kthread_freezable_should_stop+0x70/0x70
    > [] ret_from_fork+0x7c/0xb0
    > [] ? kthread_freezable_should_stop+0x70/0x70
    > Code: 41 56 41 55 41 54 53 48 83 ec 18 66 66 66 66 90 89 75 cc 89 55 c8
    > 4c 8d 6f 08 48 8b 57 08 41 89 cf 4d 89 c6 48 8d 42 e
    > RIP [] __wake_up_common+0x31/0x90
    > RSP
    > CR2: 0000000000000000
    > ---[ end trace b495b19270a4d37e ]---
    >
    > My assumption is that the following is happening: the minimal SCTP
    > tool runs under ``echo 1 > /proc/sys/net/sctp/auth_enable'', hence
    > it's making use of crypto_alloc_hash() via sctp_auth_init_hmacs().
    > It forks itself, heavily allocates, binds, listens and waits in
    > accept on sctp sockets, and then randomly kills some of them (no
    > need for an actual client in this case to hit this). Then, again,
    > allocating, binding, etc, and then killing child processes.
    >
    > The problem that might be happening here is that cryptomgr requests
    > the module to probe/load through cryptomgr_schedule_probe(), but
    > before the thread handler cryptomgr_probe() returns, we return from
    > the wait_for_completion_interruptible() function and probably already
    > have cleared up larval, thus we run into a NULL pointer dereference
    > when in cryptomgr_probe() complete_all() is being called.
    >
    > If we wait with wait_for_completion() instead, this panic will not
    > occur anymore. This is valid, because in case a signal is pending,
    > cryptomgr_probe() returns from probing anyway with properly calling
    > complete_all().

    The use of wait_for_completion_interruptible is intentional so that
    we don't lock up the thread if a bug causes us to never wake up.

    This bug is caused by the helper thread using the larval without
    holding a reference count on it. If the helper thread completes
    after the original thread that requested the help has gone away and
    destroyed the larval, then we get the crash above.

    So the fix is to hold a reference count on the larval.

    Cc: # 3.6+
    Reported-by: Daniel Borkmann
    Tested-by: Daniel Borkmann
    Signed-off-by: Herbert Xu

    Herbert Xu
     

16 Feb, 2010

1 commit


14 Jul, 2009

2 commits


08 Jul, 2009

1 commit


02 Jun, 2009

2 commits


21 Apr, 2009

1 commit

  • The commit a760a6656e6f00bb0144a42a048cf0266646e22c ("crypto:
    api - Fix module load deadlock with fallback algorithms") broke
    the auto-loading of algorithms that require fallbacks. The
    problem is that the fallback mask check is missing an AND, which
    lets bits that should not be considered interfere with the
    result.

    Reported-by: Chuck Ebbert
    Signed-off-by: Herbert Xu

    Herbert Xu
     

27 Mar, 2009

1 commit

  • * git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (29 commits)
    crypto: sha512-s390 - Add missing block size
    hwrng: timeriomem - Breaks an allyesconfig build on s390:
    nlattr: Fix build error with NET off
    crypto: testmgr - add zlib test
    crypto: zlib - New zlib crypto module, using pcomp
    crypto: testmgr - Add support for the pcomp interface
    crypto: compress - Add pcomp interface
    netlink: Move netlink attribute parsing support to lib
    crypto: Fix dead links
    hwrng: timeriomem - New driver
    crypto: chainiv - Use kcrypto_wq instead of keventd_wq
    crypto: cryptd - Per-CPU thread implementation based on kcrypto_wq
    crypto: api - Use dedicated workqueue for crypto subsystem
    crypto: testmgr - Test skciphers with no IVs
    crypto: aead - Avoid infinite loop when nivaead fails selftest
    crypto: skcipher - Avoid infinite loop when cipher fails selftest
    crypto: api - Fix crypto_alloc_tfm/create_create_tfm return convention
    crypto: api - crypto_alg_mod_lookup either tested or untested
    crypto: amcc - Add crypt4xx driver
    crypto: ansi_cprng - Add maintainer
    ...

    Linus Torvalds
     

26 Feb, 2009

1 commit

  • With the mandatory algorithm testing at registration, we have
    now created a deadlock with algorithms requiring fallbacks.
    This can happen if the module containing the algorithm requiring
    the fallback is loaded before the fallback module itself. The system
    will then try to test the new algorithm, find that it needs to load
    a fallback, and then try to load that.

    As both algorithms share the same module alias, it can attempt
    to load the original algorithm again and block indefinitely.

    As algorithms requiring fallbacks are a special case, we can fix
    this by giving them a different module alias than the rest. Then
    it's just a matter of using the right aliases according to what
    algorithms we're trying to find.

    Signed-off-by: Herbert Xu

    Herbert Xu
     

18 Feb, 2009

2 commits

  • This is based on a report and patch by Geert Uytterhoeven.

    The functions crypto_alloc_tfm and crypto_create_tfm return a
    pointer that needs to be adjusted by the caller when successful
    and otherwise an error value. This means that the caller has
    to check for the error and only perform the adjustment if the
    pointer returned is valid.

    Since all callers want to make the adjustment and we know how
    to adjust it ourselves, it's much easier to just return the
    adjusted pointer directly.

    The only caveat is that we have to return a void * instead of
    struct crypto_tfm *. However, this isn't that bad because both
    of these functions are for internal use only (by types code like
    shash.c, not even algorithms code).

    This patch also moves crypto_alloc_tfm into crypto/internal.h
    (crypto_create_tfm is already there) to reflect this.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • As it stands crypto_alg_mod_lookup will search either tested or
    untested algorithms, but never both at the same time. However,
    we need exactly that when constructing givcipher and aead so
    this patch adds support for that by setting the tested bit in
    type but clearing it in mask. This combination is currently
    unused.

    Signed-off-by: Herbert Xu

    Herbert Xu
     

05 Feb, 2009

1 commit

  • Geert Uytterhoeven pointed out that we're not zeroing all the
    memory when freeing a transform. This patch fixes it by calling
    ksize to ensure that we zero everything in sight.
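
    In essence the change is the following (a simplified sketch derived
    from the description, with tfm standing in for the object being freed):

    #include <linux/slab.h>
    #include <linux/string.h>

    static void example_free_zeroed(void *tfm)
    {
            /* Zero the full allocation as reported by ksize(), not just the
             * originally requested size, before freeing it. */
            memset(tfm, 0, ksize(tfm));
            kfree(tfm);
    }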

    Reported-by: Geert Uytterhoeven
    Signed-off-by: Herbert Xu

    Herbert Xu