26 Sep, 2018

1 commit

  • [ Upstream commit e2861fa71641c6414831d628a1f4f793b6562580 ]

    When EVM attempts to appraise a file signed with a crypto algorithm the
    kernel doesn't have support for, it will cause the kernel to trigger a
    module load. If the EVM policy includes appraisal of kernel modules,
    this will in turn call back into EVM; since EVM is holding a lock
    until the crypto initialisation is complete, this triggers a
    deadlock. Add a CRYPTO_NOLOAD flag and skip module loading if it's
    set, and add that flag in the EVM case in order to fail gracefully
    with an error message instead of deadlocking.

    Signed-off-by: Matthew Garrett
    Acked-by: Herbert Xu
    Signed-off-by: Mimi Zohar
    Signed-off-by: Sasha Levin
    Signed-off-by: Greg Kroah-Hartman
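The lookup behaviour described above can be sketched in userspace C. Everything here is an illustrative assumption, not the kernel's actual code: the flag value, `fake_request_module`, and `lookup_alg` are stand-ins for the real lookup path, where a miss normally triggers a module load unless CRYPTO_NOLOAD is set.

```c
/* Illustrative flag value; the real one lives in the kernel headers. */
#define CRYPTO_NOLOAD 0x8000

static int module_load_requested;

/* Stand-in for request_module(): just records that a load was attempted. */
static void fake_request_module(const char *name)
{
    (void)name;
    module_load_requested = 1;
}

/* Returns 1 if the algorithm was "found", 0 otherwise.  With
 * CRYPTO_NOLOAD set, a miss fails immediately instead of triggering
 * a module load (the recursion that caused the EVM deadlock). */
static int lookup_alg(const char *name, unsigned int type, int registered)
{
    if (registered)
        return 1;
    if (!(type & CRYPTO_NOLOAD))
        fake_request_module(name);
    return 0;
}
```

EVM passes the flag on its hash allocation, so an unsupported algorithm produces an error instead of re-entering the module loader.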

    Matthew Garrett
     

02 Mar, 2017

1 commit


28 Nov, 2016

1 commit

  • Currently all bits not set in mask are cleared in crypto_larval_lookup.
    This is unnecessary because wherever the type bits are used, they are
    always masked anyway.

    This patch removes the clearing so that we may use bits set in the
    type but not in the mask for special purposes, e.g., picking up
    internal algorithms.

    Signed-off-by: Herbert Xu
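The masking the message relies on can be shown with a small sketch; `alg_matches` and the flag value are illustrative, but the `(flags ^ type) & mask` shape mirrors how algorithm flags are compared during lookup.

```c
/* Illustrative flag value for the demo. */
#define CRYPTO_ALG_INTERNAL 0x2000

/* Only bits covered by 'mask' participate in the match.  Bits set in
 * 'type' but not in 'mask' therefore survive the lookup untouched and
 * can carry extra information (e.g. picking up internal algorithms). */
static int alg_matches(unsigned int alg_flags, unsigned int type,
                       unsigned int mask)
{
    return ((alg_flags ^ type) & mask) == 0;
}
```

With the clearing removed, a caller can set a bit in `type` while leaving it out of `mask`: the bit is ignored for matching but remains visible to later code.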

    Herbert Xu
     

21 Oct, 2016

1 commit


20 Oct, 2015

1 commit

  • Currently a number of Crypto API operations may fail when a signal
    occurs. This causes nasty problems, as the callers of those operations
    are often not in a good position to restart the operation.

    In fact there is currently no need for those operations to be
    interrupted by user signals at all. All we need is for them to
    be killable.

    This patch replaces the relevant calls to signal_pending with
    fatal_signal_pending, and wait_for_completion_interruptible with
    wait_for_completion_killable.

    Cc: stable@vger.kernel.org
    Signed-off-by: Herbert Xu

    Herbert Xu
     

31 Mar, 2015

1 commit

  • Several hardware-related cipher implementations are implemented as
    follows: a "helper" cipher implementation is registered with the
    kernel crypto API.

    Such helper ciphers are never intended to be called by normal users. In
    some cases, calling them via the normal crypto API may even cause
    failures including kernel crashes. In a normal case, the "wrapping"
    ciphers that use the helpers ensure that these helpers are invoked
    such that they cannot cause any calamity.

    Considering the AF_ALG user space interface, unprivileged users can
    call all ciphers registered with the crypto API, including these
    helper ciphers that are not intended to be called directly. That
    means user space may invoke these helper ciphers via AF_ALG and
    cause undefined states or side effects.

    To avoid any potential side effects with such helpers, the patch
    prevents the helpers from being called directly. A new cipher type
    flag is added: CRYPTO_ALG_INTERNAL. This flag shall be used
    to mark helper ciphers. These ciphers can only be used if the
    caller invokes the cipher with CRYPTO_ALG_INTERNAL set in both the
    type and mask fields.

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu
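The gating rule can be captured in a minimal sketch; the flag value and the `may_use` helper are made up for the demo, but the condition follows the description above: an internal cipher is only handed out when the caller sets CRYPTO_ALG_INTERNAL in both type and mask.

```c
/* Illustrative flag value for the demo. */
#define CRYPTO_ALG_INTERNAL 0x2000

/* Returns 1 if the caller may use this algorithm.  Internal-only
 * helpers require CRYPTO_ALG_INTERNAL in both the type and the mask,
 * so an ordinary AF_ALG request (type = mask = 0) never sees them. */
static int may_use(unsigned int alg_flags, unsigned int type,
                   unsigned int mask)
{
    if ((alg_flags & CRYPTO_ALG_INTERNAL) &&
        !((type & mask) & CRYPTO_ALG_INTERNAL))
        return 0;
    return 1;
}
```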

    Stephan Mueller
     

24 Nov, 2014

1 commit


08 Sep, 2013

1 commit

  • crypto_larval_lookup should only return a larval if it created one.
    Any larval created by another entity must be processed through
    crypto_larval_wait before being returned.

    Otherwise this will lead to a larval being killed twice, which
    will most likely lead to a crash.

    Cc: stable@vger.kernel.org
    Reported-by: Kees Cook
    Tested-by: Kees Cook
    Signed-off-by: Herbert Xu

    Herbert Xu
     

20 Aug, 2013

1 commit


25 Jun, 2013

1 commit

  • On Thu, Jun 20, 2013 at 10:00:21AM +0200, Daniel Borkmann wrote:
    > After having fixed a NULL pointer dereference in SCTP 1abd165e ("net:
    > sctp: fix NULL pointer dereference in socket destruction"), I ran into
    > the following NULL pointer dereference in the crypto subsystem with
    > the same reproducer, easily hit each time:
    >
    > BUG: unable to handle kernel NULL pointer dereference at (null)
    > IP: [] __wake_up_common+0x31/0x90
    > PGD 0
    > Oops: 0000 [#1] SMP
    > Modules linked in: padlock_sha(F-) sha256_generic(F) sctp(F) libcrc32c(F) [..]
    > CPU: 6 PID: 3326 Comm: cryptomgr_probe Tainted: GF 3.10.0-rc5+ #1
    > Hardware name: Dell Inc. PowerEdge T410/0H19HD, BIOS 1.6.3 02/01/2011
    > task: ffff88007b6cf4e0 ti: ffff88007b7cc000 task.ti: ffff88007b7cc000
    > RIP: 0010:[] [] __wake_up_common+0x31/0x90
    > RSP: 0018:ffff88007b7cde08 EFLAGS: 00010082
    > RAX: ffffffffffffffe8 RBX: ffff88003756c130 RCX: 0000000000000000
    > RDX: 0000000000000000 RSI: 0000000000000003 RDI: ffff88003756c130
    > RBP: ffff88007b7cde48 R08: 0000000000000000 R09: ffff88012b173200
    > R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000282
    > R13: ffff88003756c138 R14: 0000000000000000 R15: 0000000000000000
    > FS: 0000000000000000(0000) GS:ffff88012fc60000(0000) knlGS:0000000000000000
    > CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
    > CR2: 0000000000000000 CR3: 0000000001a0b000 CR4: 00000000000007e0
    > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    > DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
    > Stack:
    > ffff88007b7cde28 0000000300000000 ffff88007b7cde28 ffff88003756c130
    > 0000000000000282 ffff88003756c128 ffffffff81227670 0000000000000000
    > ffff88007b7cde78 ffffffff810722b7 ffff88007cdcf000 ffffffff81a90540
    > Call Trace:
    > [] ? crypto_alloc_pcomp+0x20/0x20
    > [] complete_all+0x47/0x60
    > [] cryptomgr_probe+0x98/0xc0
    > [] ? crypto_alloc_pcomp+0x20/0x20
    > [] kthread+0xce/0xe0
    > [] ? kthread_freezable_should_stop+0x70/0x70
    > [] ret_from_fork+0x7c/0xb0
    > [] ? kthread_freezable_should_stop+0x70/0x70
    > Code: 41 56 41 55 41 54 53 48 83 ec 18 66 66 66 66 90 89 75 cc 89 55 c8
    > 4c 8d 6f 08 48 8b 57 08 41 89 cf 4d 89 c6 48 8d 42 e
    > RIP [] __wake_up_common+0x31/0x90
    > RSP
    > CR2: 0000000000000000
    > ---[ end trace b495b19270a4d37e ]---
    >
    > My assumption is that the following is happening: the minimal SCTP
    > tool runs under ``echo 1 > /proc/sys/net/sctp/auth_enable'', hence
    > it's making use of crypto_alloc_hash() via sctp_auth_init_hmacs().
    > It forks itself, heavily allocates, binds, listens and waits in
    > accept on sctp sockets, and then randomly kills some of them (no
    > need for an actual client in this case to hit this). Then, again,
    > allocating, binding, etc, and then killing child processes.
    >
    > The problem that might be happening here is that cryptomgr requests
    > the module to probe/load through cryptomgr_schedule_probe(), but
    > before the thread handler cryptomgr_probe() returns, we return from
    > the wait_for_completion_interruptible() function and probably already
    > have cleared up larval, thus we run into a NULL pointer dereference
    > when in cryptomgr_probe() complete_all() is being called.
    >
    > If we wait with wait_for_completion() instead, this panic will not
    > occur anymore. This is valid, because in case a signal is pending,
    > cryptomgr_probe() returns from probing anyway with properly calling
    > complete_all().

    The use of wait_for_completion_interruptible is intentional so that
    we don't lock up the thread if a bug causes us to never wake up.

    This bug is caused by the helper thread using the larval without
    holding a reference count on it. If the helper thread completes
    after the original thread requesting for help has gone away and
    destroyed the larval, then we get the crash above.

    So the fix is to hold a reference count on the larval.

    Cc: # 3.6+
    Reported-by: Daniel Borkmann
    Tested-by: Daniel Borkmann
    Signed-off-by: Herbert Xu
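The refcounting fix can be illustrated with a userspace sketch; the `larval` struct and helpers here are illustrative stand-ins for the kernel's types, not the real implementation.

```c
#include <stdlib.h>

/* Stand-in for the shared larval object. */
struct larval { int refcnt; int completed; };

static struct larval *larval_get(struct larval *l)
{
    l->refcnt++;
    return l;
}

static void larval_put(struct larval *l)
{
    if (--l->refcnt == 0)
        free(l);
}

/* The helper thread takes its own reference, so the object survives
 * even if the requester gives up (e.g. on a signal) and drops its
 * reference first -- the scenario that produced the crash above. */
static int demo(void)
{
    struct larval *l = calloc(1, sizeof(*l));
    int ok;

    l->refcnt = 1;   /* requester's reference */
    larval_get(l);   /* helper thread's own reference */
    larval_put(l);   /* requester bails out early */
    l->completed = 1;/* helper can still touch l safely */
    ok = l->completed;
    larval_put(l);   /* helper's reference frees the object */
    return ok;
}
```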

    Herbert Xu
     

16 Feb, 2010

1 commit


14 Jul, 2009

2 commits


08 Jul, 2009

1 commit


02 Jun, 2009

2 commits


21 Apr, 2009

1 commit

  • The commit a760a6656e6f00bb0144a42a048cf0266646e22c (crypto:
    api - Fix module load deadlock with fallback algorithms) broke
    the auto-loading of algorithms that require fallbacks. The
    problem is that the fallback mask check is missing an AND, which
    lets bits that should not be considered interfere with the
    result.

    Reported-by: Chuck Ebbert
    Signed-off-by: Herbert Xu
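A hedged reconstruction of the kind of check involved; the flag value, function name, and exact expression are assumptions based on the description above, not the kernel's actual line. The point is the `& mask` term: without it, type bits the caller never asked to match on flip the result.

```c
/* Illustrative flag value for the demo. */
#define CRYPTO_ALG_NEED_FALLBACK 0x0100

/* With "& mask" present, only bits the caller asked to match on can
 * influence the outcome; the buggy version omitted that AND. */
static int fallback_mask_check(unsigned int type, unsigned int mask)
{
    return !((type ^ CRYPTO_ALG_NEED_FALLBACK) &
             mask & CRYPTO_ALG_NEED_FALLBACK);
}
```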

    Herbert Xu
     

27 Mar, 2009

1 commit

  • * git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (29 commits)
    crypto: sha512-s390 - Add missing block size
    hwrng: timeriomem - Breaks an allyesconfig build on s390:
    nlattr: Fix build error with NET off
    crypto: testmgr - add zlib test
    crypto: zlib - New zlib crypto module, using pcomp
    crypto: testmgr - Add support for the pcomp interface
    crypto: compress - Add pcomp interface
    netlink: Move netlink attribute parsing support to lib
    crypto: Fix dead links
    hwrng: timeriomem - New driver
    crypto: chainiv - Use kcrypto_wq instead of keventd_wq
    crypto: cryptd - Per-CPU thread implementation based on kcrypto_wq
    crypto: api - Use dedicated workqueue for crypto subsystem
    crypto: testmgr - Test skciphers with no IVs
    crypto: aead - Avoid infinite loop when nivaead fails selftest
    crypto: skcipher - Avoid infinite loop when cipher fails selftest
    crypto: api - Fix crypto_alloc_tfm/create_create_tfm return convention
    crypto: api - crypto_alg_mod_lookup either tested or untested
    crypto: amcc - Add crypt4xx driver
    crypto: ansi_cprng - Add maintainer
    ...

    Linus Torvalds
     

26 Feb, 2009

1 commit

  • With the mandatory algorithm testing at registration, we have
    now created a deadlock with algorithms requiring fallbacks.
    This can happen if the module containing the algorithm that
    requires the fallback is loaded before the fallback module
    itself. The system will then try to test the new algorithm, find
    that it needs to load a fallback, and then try to load that.

    As both algorithms share the same module alias, it can attempt
    to load the original algorithm again and block indefinitely.

    As algorithms requiring fallbacks are a special case, we can fix
    this by giving them a different module alias than the rest. Then
    it's just a matter of using the right aliases according to what
    algorithms we're trying to find.

    Signed-off-by: Herbert Xu
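The idea can be sketched as follows; the alias strings used here are purely illustrative, not the kernel's actual alias format. What matters is that the alias requested for an algorithm needing a fallback differs from the ordinary one, so the module loader cannot be sent back to the module it is already loading.

```c
#include <stdio.h>

/* Build the module alias to request.  The "crypto-%s" / "crypto-%s-all"
 * strings are made up for the demo; only the fact that the two cases
 * differ reflects the fix described above. */
static const char *alg_alias(const char *name, int need_fallback)
{
    static char buf[64];

    if (need_fallback)
        snprintf(buf, sizeof(buf), "crypto-%s-all", name);
    else
        snprintf(buf, sizeof(buf), "crypto-%s", name);
    return buf;
}
```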

    Herbert Xu
     

18 Feb, 2009

2 commits

  • This is based on a report and patch by Geert Uytterhoeven.

    The functions crypto_alloc_tfm and create_create_tfm return a
    pointer that needs to be adjusted by the caller when successful
    and otherwise an error value. This means that the caller has
    to check for the error and only perform the adjustment if the
    pointer returned is valid.

    Since all callers want to make the adjustment and we know how
    to adjust it ourselves, it's much easier to just return the
    adjusted pointer directly.

    The only caveat is that we have to return a void * instead of
    struct crypto_tfm *. However, this isn't that bad because both
    of these functions are for internal use only (by types code like
    shash.c, not even algorithms code).

    This patch also moves crypto_alloc_tfm into crypto/internal.h
    (crypto_create_tfm is already there) to reflect this.

    Signed-off-by: Herbert Xu
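The convention this relies on is the kernel's ERR_PTR encoding: a successful call returns the usable pointer, a failed one returns an errno encoded as a pointer. A self-contained userspace sketch, where `alloc_tfm` is a made-up illustrative allocator:

```c
#include <errno.h>
#include <stdlib.h>

/* Userspace versions of the kernel's pointer/errno encoding helpers. */
static inline void *ERR_PTR(long err) { return (void *)err; }
static inline long PTR_ERR(const void *p) { return (long)p; }
static inline int IS_ERR(const void *p)
{
    return (unsigned long)p >= (unsigned long)-4095;
}

/* Illustrative allocator: on success the (already adjusted) pointer
 * comes back directly, so the caller checks IS_ERR() instead of
 * adjusting the result itself. */
static void *alloc_tfm(int fail)
{
    if (fail)
        return ERR_PTR(-ENOMEM);
    return malloc(16);
}
```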

    Herbert Xu
     
  • As it stands crypto_alg_mod_lookup will search either tested or
    untested algorithms, but never both at the same time. However,
    we need exactly that when constructing givcipher and aead so
    this patch adds support for that by setting the tested bit in
    type but clearing it in mask. This combination is currently
    unused.

    Signed-off-by: Herbert Xu

    Herbert Xu
     

05 Feb, 2009

1 commit

  • Geert Uytterhoeven pointed out that we're not zeroing all the
    memory when freeing a transform. This patch fixes it by calling
    ksize to ensure that we zero everything in sight.

    Reported-by: Geert Uytterhoeven
    Signed-off-by: Herbert Xu

    Herbert Xu
     

25 Dec, 2008

2 commits

  • This patch reintroduces a completely revamped crypto_alloc_tfm.
    The biggest change is that we now take two crypto_type objects
    when allocating a tfm, a frontend and a backend. In fact this
    simply formalises what we've been doing behind the API's back.

    For example, as it stands crypto_alloc_ahash may use an
    actual ahash algorithm or a crypto_hash algorithm. Putting
    this in the API allows us to do this much more cleanly.

    The existing types will be converted across gradually.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • The type exit function needs to undo any allocations done by the type
    init function. However, the type init function may differ depending
    on the upper-level type of the transform (e.g., a crypto_blkcipher
    instantiated as a crypto_ablkcipher).

    So we need to move the exit function out of the lower-level
    structure and into crypto_tfm itself.

    As it stands this is a no-op since nobody uses exit functions at
    all. However, all cases where a lower-level type is instantiated
    as a different upper-level type (such as blkcipher as ablkcipher)
    will be converted such that they allocate the underlying transform
    and use that instead of casting (e.g., crypto_ablkcipher cast
    into crypto_blkcipher). That will need to use a different exit
    function depending on the upper-level type.

    This patch also allows the type init/exit functions to call (or not)
    cra_init/cra_exit instead of always calling them from the top level.

    Signed-off-by: Herbert Xu

    Herbert Xu
     

29 Aug, 2008

2 commits


10 Jul, 2008

1 commit


21 Apr, 2008

1 commit


11 Jan, 2008

1 commit

  • This patch makes crypto_alloc_ablkcipher/crypto_grab_skcipher always
    return algorithms that are capable of generating their own IVs through
    givencrypt and givdecrypt. Each algorithm may specify its default IV
    generator through the geniv field.

    For algorithms that do not set the geniv field, the blkcipher layer will
    pick a default. Currently it's chainiv for synchronous algorithms and
    eseqiv for asynchronous algorithms. Note that if these wrappers do not
    work on an algorithm then that algorithm must specify its own geniv or
    it can't be used at all.

    Signed-off-by: Herbert Xu
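The selection rule above reduces to a small decision; the flag value and function name are illustrative, while the generator names (chainiv, eseqiv) come from the message itself.

```c
#include <stddef.h>
#include <string.h>

/* Illustrative flag value for the demo. */
#define CRYPTO_ALG_ASYNC 0x80

/* Pick the IV generator: the algorithm's own geniv if it set one,
 * otherwise chainiv for synchronous and eseqiv for asynchronous
 * algorithms, as described above. */
static const char *default_geniv(const char *alg_geniv, unsigned int flags)
{
    if (alg_geniv)
        return alg_geniv;
    return (flags & CRYPTO_ALG_ASYNC) ? "eseqiv" : "chainiv";
}
```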

    Herbert Xu
     

20 Oct, 2007

1 commit


11 Jul, 2007

1 commit


19 May, 2007

1 commit

  • The function crypto_mod_put first frees the algorithm and then drops
    the reference to its module. Unfortunately, it reads the module
    pointer after freeing the algorithm, and that pointer sits inside
    the object that was just freed.

    So this patch reads the module pointer out before freeing the object.

    Thanks to Luca Tettamanti for reporting this.

    Signed-off-by: Herbert Xu
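The fix is a classic use-after-free pattern: copy out anything you still need before freeing the object. A userspace sketch with made-up stand-in types:

```c
#include <stdlib.h>

struct module { int refcnt; };
struct alg { struct module *owner; };

/* Fixed ordering: save the module pointer, free the algorithm, then
 * drop the module reference.  Reading a->owner after free(a) would be
 * the bug described above. */
static void mod_put_fixed(struct alg *a)
{
    struct module *m = a->owner;  /* read before the free */
    free(a);
    m->refcnt--;                  /* safe: m was saved */
}

/* Small driver showing the helper in use; returns the final refcount. */
static int demo(void)
{
    struct module m = { 1 };
    struct alg *a = malloc(sizeof(*a));

    a->owner = &m;
    mod_put_fixed(a);
    return m.refcnt;
}
```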

    Herbert Xu
     

07 Feb, 2007

2 commits


07 Dec, 2006

1 commit

  • This patch removes the following no longer used functions:
    - api.c: crypto_alg_available()
    - digest.c: crypto_digest_init()
    - digest.c: crypto_digest_update()
    - digest.c: crypto_digest_final()
    - digest.c: crypto_digest_digest()

    Signed-off-by: Adrian Bunk
    Signed-off-by: Herbert Xu

    Adrian Bunk
     

11 Oct, 2006

1 commit

  • This patch makes crypto_alloc_base() return a proper return value.

    - If a kzalloc() failure happens within __crypto_alloc_tfm(),
    crypto_alloc_base() returns NULL. But crypto_alloc_base()
    is supposed to return an error code as a pointer. So this patch
    makes it return -ENOMEM in that case.

    - crypto_alloc_base() is supposed to return -EINTR if it is
    interrupted by a signal, but it may fail to do so.

    Signed-off-by: Akinobu Mita
    Signed-off-by: Herbert Xu

    Akinobu Mita
     

21 Sep, 2006

4 commits

  • This patch adds the crypto_comp type to complete the compile-time checking
    conversion. The functions crypto_has_alg and crypto_has_cipher, etc. are
    also added to replace crypto_alg_available.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch adds the crypto_type structure which will be used for all new
    crypto algorithm types, beginning with block ciphers.

    The primary purpose of this abstraction is to allow different
    crypto_type objects for crypto algorithms of the same type; in
    particular, there will be a different crypto_type object for
    asynchronous algorithms.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • Up until now all crypto transforms have been of the same type, struct
    crypto_tfm, regardless of whether they are ciphers, digests, or other
    types. As a result of that, we check the types at run-time before
    each crypto operation.

    This is rather cumbersome. We could instead use different C types for
    each crypto type to ensure that the correct types are used at compile
    time. That is, we would have crypto_cipher/crypto_digest instead of
    just crypto_tfm. The appropriate type would then be required for the
    actual operations such as crypto_digest_digest.

    Now that we have the type/mask fields when looking up algorithms, it
    is easy to request an algorithm of the precise type that the user
    wants. However, crypto_alloc_tfm currently does not expose these new
    attributes.

    This patch introduces the function crypto_alloc_base which will carry
    these new parameters. It will be renamed to crypto_alloc_tfm once
    all existing users have been converted.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch adds the asynchronous flag and changes all existing users to
    only look up algorithms that are synchronous.

    Signed-off-by: Herbert Xu

    Herbert Xu