21 Apr, 2018

1 commit

  • In preparation for the removal of VLAs[1] from crypto code, we
    create two new compile-time constants: all ciphers implemented
    in Linux have a block size of at most 16 bytes, and the most
    demanding hardware requires 16-byte alignment for the block
    buffer.
    We also enforce these limits in crypto_check_alg when a new
    cipher is registered.

    [1] http://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com

    Signed-off-by: Salvatore Mesoraca
    Signed-off-by: Herbert Xu

    Salvatore Mesoraca
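
    A minimal sketch of the limits described above, reusing the constant
    names from the patch; the stand-alone helper below is only an
    illustration, whereas the real patch folds these checks into
    crypto_check_alg in crypto/algapi.c.

        #include <linux/crypto.h>
        #include <linux/errno.h>

        #define MAX_CIPHER_BLOCKSIZE 16  /* no Linux cipher has a larger block    */
        #define MAX_CIPHER_ALIGNMASK 15  /* strictest hardware: 16-byte alignment */

        /* Reject ciphers that would overflow a fixed 16-byte stack buffer. */
        static int check_cipher_limits(const struct crypto_alg *alg)
        {
                if (alg->cra_blocksize > MAX_CIPHER_BLOCKSIZE)
                        return -EINVAL;
                if (alg->cra_alignmask > MAX_CIPHER_ALIGNMASK)
                        return -EINVAL;
                return 0;
        }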
     

31 Mar, 2018

1 commit

  • This patch reverts commit 9c521a200bc3 ("crypto: api - remove
    instance when test failed") and fixes the underlying problem
    in a different way.

    To recap, prior to the reverted commit, an instance that fails
    a self-test is kept around. However, it would satisfy any new
    lookups against its name, and therefore the system may accumulate
    an unbounded number of failed instances for the same algorithm
    name.

    The reverted commit fixed it by unregistering the instance. However,
    this still does not prevent the creation of the same failed instance
    over and over again each time the name is looked up.

    This patch fixes it by keeping the failed instance around, just as
    we would if it were a normal algorithm. However, the lookup code
    has been updated so that we do not attempt to create another
    instance as long as this failed one is still registered. Of course,
    you could still force a new creation by deleting the instance from
    user-space.

    A new error (ELIBBAD) has been commandeered for this purpose and
    will be returned when all registered algorithms of a given name
    have failed the self-test.

    Signed-off-by: Herbert Xu

    Herbert Xu
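
    A rough sketch of the new lookup rule described above, assuming that
    an algorithm which failed its self-test stays registered with the
    CRYPTO_ALG_TESTED flag clear; the helper name is made up and the real
    logic sits in the crypto API lookup path.

        #include <linux/crypto.h>
        #include <linux/err.h>
        #include <linux/errno.h>

        /* Refuse to build another instance of a name whose registered
         * algorithm already failed the self-test. */
        static struct crypto_alg *filter_failed_alg(struct crypto_alg *alg)
        {
                if (alg && !(alg->cra_flags & CRYPTO_ALG_LARVAL) &&
                    !(alg->cra_flags & CRYPTO_ALG_TESTED))
                        return ERR_PTR(-ELIBBAD);

                return alg;
        }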
     

01 Feb, 2018

1 commit

  • Pull crypto updates from Herbert Xu:
    "API:
    - Enforce the setting of keys for keyed aead/hash/skcipher
    algorithms.
    - Add multibuf speed tests in tcrypt.

    Algorithms:
    - Improve performance of sha3-generic.
    - Add native sha512 support on arm64.
    - Add v8.2 Crypto Extensions version of sha3/sm3 on arm64.
    - Avoid hmac nesting by requiring underlying algorithm to be unkeyed.
    - Add cryptd_max_cpu_qlen module parameter to cryptd.

    Drivers:
    - Add support for EIP97 engine in inside-secure.
    - Add inline IPsec support to chelsio.
    - Add RevB core support to crypto4xx.
    - Fix AEAD ICV check in crypto4xx.
    - Add stm32 crypto driver.
    - Add support for BCM63xx platforms in bcm2835 and remove bcm63xx.
    - Add Derived Key Protocol (DKP) support in caam.
    - Add Samsung Exynos True RNG driver.
    - Add support for Exynos5250+ SoCs in exynos PRNG driver"

    * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (166 commits)
    crypto: picoxcell - Fix error handling in spacc_probe()
    crypto: arm64/sha512 - fix/improve new v8.2 Crypto Extensions code
    crypto: arm64/sm3 - new v8.2 Crypto Extensions implementation
    crypto: arm64/sha3 - new v8.2 Crypto Extensions implementation
    crypto: testmgr - add new testcases for sha3
    crypto: sha3-generic - export init/update/final routines
    crypto: sha3-generic - simplify code
    crypto: sha3-generic - rewrite KECCAK transform to help the compiler optimize
    crypto: sha3-generic - fixes for alignment and big endian operation
    crypto: aesni - handle zero length dst buffer
    crypto: artpec6 - remove select on non-existing CRYPTO_SHA384
    hwrng: bcm2835 - Remove redundant dev_err call in bcm2835_rng_probe()
    crypto: stm32 - remove redundant dev_err call in stm32_cryp_probe()
    crypto: axis - remove unnecessary platform_get_resource() error check
    crypto: testmgr - test misuse of result in ahash
    crypto: inside-secure - make function safexcel_try_push_requests static
    crypto: aes-generic - fix aes-generic regression on powerpc
    crypto: chelsio - Fix indentation warning
    crypto: arm64/sha1-ce - get rid of literal pool
    crypto: arm64/sha2-ce - move the round constant table to .rodata section
    ...

    Linus Torvalds
     

05 Jan, 2018

3 commits

  • There is a message posted to the crypto notifier chain when an algorithm
    is unregistered, and when a template is registered or unregistered. But
    nothing is listening for those messages; currently there are only
    listeners for the algorithm request and registration messages.

    Get rid of these unused notifications for now.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Reference counters should use refcount_t rather than atomic_t, since the
    refcount_t implementation can prevent overflows, reducing the
    exploitability of reference leak bugs. crypto_alg.cra_refcount is a
    reference counter with the usual semantics, so switch it over to
    refcount_t.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
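
    A small sketch of what the conversion means in practice; the structure
    below is a stand-in for struct crypto_alg and the helpers are
    illustrative.

        #include <linux/refcount.h>
        #include <linux/types.h>

        struct alg_sketch {                     /* stand-in for struct crypto_alg */
                refcount_t cra_refcnt;          /* was: atomic_t cra_refcnt */
        };

        static void alg_init(struct alg_sketch *alg)
        {
                refcount_set(&alg->cra_refcnt, 1);      /* was: atomic_set(..., 1) */
        }

        static void alg_get(struct alg_sketch *alg)
        {
                refcount_inc(&alg->cra_refcnt);         /* saturates instead of wrapping */
        }

        static bool alg_put(struct alg_sketch *alg)
        {
                return refcount_dec_and_test(&alg->cra_refcnt);
        }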
     
  • syzkaller triggered a NULL pointer dereference in crypto_remove_spawns()
    via a program that repeatedly and concurrently requests AEADs
    "authenc(cmac(des3_ede-asm),pcbc-aes-aesni)" and hashes "cmac(des3_ede)"
    through AF_ALG, where the hashes are requested as "untested"
    (CRYPTO_ALG_TESTED is set in ->salg_mask but clear in ->salg_feat; this
    causes the template to be instantiated for every request).

    Although AF_ALG users really shouldn't be able to request an "untested"
    algorithm, the NULL pointer dereference is actually caused by a
    longstanding race condition where crypto_remove_spawns() can encounter
    an instance which has had spawn(s) "grabbed" but hasn't yet been
    registered, resulting in ->cra_users still being NULL.

    We probably should properly initialize ->cra_users earlier, but that
    would require updating many templates individually. For now just fix
    the bug in a simple way that can easily be backported: make
    crypto_remove_spawns() treat a NULL ->cra_users list as empty.

    Reported-by: syzbot
    Cc: stable@vger.kernel.org
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
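
    A heavily simplified sketch of treating an uninitialised cra_users
    list as empty; the helper is illustrative and is not the actual
    crypto_remove_spawns() change.

        #include <crypto/algapi.h>
        #include <linux/kernel.h>

        static void walk_users_sketch(struct crypto_alg *alg)
        {
                struct crypto_spawn *spawn, *n;

                /* Spawns grabbed before registration: cra_users was never
                 * initialised, so treat it as an empty list. */
                if (!alg->cra_users.next)
                        return;

                list_for_each_entry_safe(spawn, n, &alg->cra_users, list)
                        pr_debug("dropping user %s\n", spawn->inst->alg.cra_name);
        }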
     

03 Nov, 2017

1 commit

  • The crypto API was using the -EBUSY return value to indicate two
    very different things: a hard failure to submit a crypto operation
    to a transformation provider that was busy while the backlog
    mechanism was not enabled, and a notification that the operation
    was queued into the backlog when the backlog mechanism was
    enabled.

    Having the same return code indicate two very different conditions
    depending on a flag is error prone and requires an extra runtime
    check like the following to discern between the cases:

        if (err == -EINPROGRESS ||
            (err == -EBUSY && (ahash_request_flags(req) &
                               CRYPTO_TFM_REQ_MAY_BACKLOG)))

    This patch changes the return code used to indicate a crypto op
    failed due to the transformation provider being transiently busy
    to -ENOSPC.

    Signed-off-by: Gilad Ben-Yossef
    Signed-off-by: Herbert Xu

    Gilad Ben-Yossef
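
    A caller-side sketch of the before/after semantics, with illustrative
    helper names; the flag test mirrors the snippet quoted above.

        #include <crypto/hash.h>
        #include <linux/errno.h>
        #include <linux/types.h>

        /* Old world: -EBUSY meant "dropped" or "backlogged" depending on
         * CRYPTO_TFM_REQ_MAY_BACKLOG, so callers had to check the flag. */
        static bool queued_old(struct ahash_request *req, int err)
        {
                return err == -EINPROGRESS ||
                       (err == -EBUSY &&
                        (req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG));
        }

        /* New world: a transiently busy provider without backlog returns
         * -ENOSPC, and -EBUSY always means "queued on the backlog". */
        static bool queued_new(int err)
        {
                return err == -EINPROGRESS || err == -EBUSY;
        }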
     

04 Aug, 2017

1 commit


19 Jun, 2017

1 commit


09 Mar, 2017

1 commit

  • To prevent unnecessary branching, mark the exit condition of the
    primary loop as likely(), given that a carry in a 32-bit counter
    occurs very rarely.

    On arm64, the resulting code is emitted by GCC as

    9a8: cmp w1, #0x3
    9ac: add x3, x0, w1, uxtw
    9b0: b.ls 9e0
    9b4: ldr w2, [x3,#-4]!
    9b8: rev w2, w2
    9bc: add w2, w2, #0x1
    9c0: rev w4, w2
    9c4: str w4, [x3]
    9c8: cbz w2, 9d0
    9cc: ret

    where the two remaining branch conditions (one for size < 4 and one for
    the carry) are statically predicted as non-taken, resulting in optimal
    execution in the vast majority of cases.

    Also, replace the open coded alignment test with IS_ALIGNED().

    Cc: Jason A. Donenfeld
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
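
    A sketch of a crypto_inc()-style increment carrying the two changes
    described above (likely() on the common exit, IS_ALIGNED() instead of
    an open-coded test); it is close to, but not verbatim, the code in
    crypto/algapi.c.

        #include <asm/byteorder.h>
        #include <linux/kernel.h>
        #include <linux/types.h>

        static void inc_be_counter(u8 *a, unsigned int size)
        {
                __be32 *b = (__be32 *)(a + size);
                u32 c;

                if (IS_ALIGNED((unsigned long)b, __alignof__(*b)))
                        for (; size >= 4; size -= 4) {
                                c = be32_to_cpu(*--b) + 1;
                                *b = cpu_to_be32(c);
                                if (likely(c))
                                        return; /* no carry: the common case */
                        }

                /* byte-wise fallback for unaligned buffers or a remainder */
                for (; size; size--)
                        if (++a[size - 1])
                                break;
        }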
     

11 Feb, 2017

1 commit

  • Instead of unconditionally forcing 4 byte alignment for all generic
    chaining modes that rely on crypto_xor() or crypto_inc() (which may
    result in unnecessary copying of data when the underlying hardware
    can perform unaligned accesses efficiently), make those functions
    deal with unaligned input explicitly, but only if the Kconfig symbol
    HAVE_EFFICIENT_UNALIGNED_ACCESS is set. This will allow us to drop
    the alignmasks from the CBC, CMAC, CTR, CTS, PCBC and SEQIV drivers.

    For crypto_inc(), this simply involves making the 4-byte stride
    conditional on HAVE_EFFICIENT_UNALIGNED_ACCESS being set, given that
    it typically operates on 16 byte buffers.

    For crypto_xor(), an algorithm is implemented that simply runs through
    the input using the largest strides possible if unaligned accesses are
    allowed. If they are not, an optimal sequence of memory accesses is
    emitted that takes the relative alignment of the input buffers into
    account, e.g., if the relative misalignment of dst and src is 4 bytes,
    the entire xor operation will be completed using 4 byte loads and stores
    (modulo unaligned bits at the start and end). Note that all expressions
    involving misalign are simply eliminated by the compiler when
    HAVE_EFFICIENT_UNALIGNED_ACCESS is defined.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
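
    A simplified sketch of the crypto_xor() strategy; the real version in
    crypto/algapi.c additionally picks optimal strides for relatively
    misaligned buffers, which is omitted here.

        #include <linux/kernel.h>
        #include <linux/types.h>

        static void xor_buf(u8 *dst, const u8 *src, unsigned int size)
        {
                if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) {
                        unsigned long *d = (unsigned long *)dst;
                        const unsigned long *s = (const unsigned long *)src;

                        /* bulk of the data: one machine word per access */
                        while (size >= sizeof(unsigned long)) {
                                *d++ ^= *s++;
                                size -= sizeof(unsigned long);
                        }

                        dst = (u8 *)d;
                        src = (const u8 *)s;
                }

                /* tail (or everything, if unaligned accesses are slow) */
                while (size--)
                        *dst++ ^= *src++;
        }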
     

23 Jan, 2017

1 commit


01 Jul, 2016

1 commit


25 Jan, 2016

1 commit


23 Nov, 2015

1 commit


20 Oct, 2015

1 commit

  • Currently a number of Crypto API operations may fail when a signal
    occurs. This causes nasty problems, as the callers of those operations
    are often not in a good position to restart the operation.

    In fact there is currently no need for those operations to be
    interrupted by user signals at all. All we need is for them to
    be killable.

    This patch replaces the relevant calls to signal_pending with
    fatal_signal_pending, and wait_for_completion_interruptible with
    wait_for_completion_killable.

    Cc: stable@vger.kernel.org
    Signed-off-by: Herbert Xu

    Herbert Xu
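
    A sketch of the substitution in a hypothetical wait helper; the
    function name is made up, the two replacements are the point.

        #include <linux/completion.h>
        #include <linux/errno.h>
        #include <linux/sched.h>
        #include <linux/sched/signal.h>

        static int wait_for_alg(struct completion *done)
        {
                /* was: if (signal_pending(current)) return -EINTR; */
                if (fatal_signal_pending(current))
                        return -EINTR;

                /* was: return wait_for_completion_interruptible(done); */
                return wait_for_completion_killable(done);
        }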
     

14 Jul, 2015

2 commits


03 Jun, 2015

1 commit


13 May, 2015

1 commit

  • This patch adds a new primitive crypto_grab_spawn which is meant
    to replace crypto_init_spawn and crypto_init_spawn2. Under the
    new scheme the user no longer has to worry about reference counting
    the alg object before it is subsumed by the spawn.

    It is pretty much an exact copy of crypto_grab_aead.

    Prior to calling this function spawn->frontend and spawn->inst
    must have been set.

    Signed-off-by: Herbert Xu

    Herbert Xu
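
    A sketch of how a template might call the new primitive, assuming the
    crypto_grab_spawn(spawn, name, type, mask) signature of this era; the
    wrapper and its arguments are illustrative.

        #include <crypto/algapi.h>

        static int example_grab(struct crypto_instance *inst,
                                struct crypto_spawn *spawn,
                                const struct crypto_type *frontend,
                                const char *alg_name, u32 type, u32 mask)
        {
                /* both must be set before the call, as noted above */
                spawn->inst = inst;
                spawn->frontend = frontend;

                /* looks up the alg and handles its refcount internally */
                return crypto_grab_spawn(spawn, alg_name, type, mask);
        }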
     

26 Apr, 2015

1 commit

  • The commit 59afdc7b32143528524455039e7557a46b60e4c8 ("crypto:
    api - Move module sig ifdef into accessor function") broke the
    build when modules are completely disabled because we directly
    dereference module->name.

    This patch fixes this by using the accessor function module_name.

    Reported-by: Fengguang Wu
    Signed-off-by: Herbert Xu

    Herbert Xu
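
    A sketch of the accessor-based fix; the helper is illustrative, the
    point is module_name() versus a direct module->name dereference.

        #include <linux/crypto.h>
        #include <linux/module.h>

        static const char *alg_module_name(const struct crypto_alg *alg)
        {
                /* was: alg->cra_module->name -- breaks with !CONFIG_MODULES */
                return module_name(alg->cra_module);
        }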
     

23 Apr, 2015

2 commits


21 Apr, 2015

1 commit


16 Apr, 2015

1 commit

  • Commit 9c521a200bc3 ("crypto: api - remove instance when test failed")
    tried to grab a module reference count before the module was even set.

    Worse, it then goes on to free the module reference count after it is
    set so you quickly end up with a negative module reference count which
    prevents people from using any instances belonging to that module.

    This patch moves the module initialisation before the reference
    count.

    Reported-by: Linus Torvalds
    Signed-off-by: Herbert Xu
    Signed-off-by: Linus Torvalds

    Herbert Xu
     

10 Apr, 2015

2 commits

  • A cipher instance is added to the list of instances unconditionally
    regardless of whether the associated test failed. However, a failed
    test implies that during another lookup, the cipher instance will
    be added to the list again as it will not be found by the lookup
    code.

    That means that the list can be filled up with instances whose tests
    failed.

    Note: in reality, tests only fail in FIPS mode, when a cipher is not
    marked as fips_allowed=1. This can be seen with cmac(des3_ede), which
    does not have fips_allowed=1. When allocating the cipher, the
    allocation fails with -ENOENT due to the missing fips_allowed=1 flag
    (which causes the testmgr to return EINVAL). Yet the instance of
    cmac(des3_ede) is shown in /proc/crypto. Allocating the cipher a
    second time fails again, but a second instance is listed in
    /proc/crypto.

    The patch simply de-registers the instance when the test fails.

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     
  • We currently initialise the crypto_alg ref count in the function
    __crypto_register_alg. As one of the callers of that function,
    crypto_register_instance, needs to obtain a ref count before it
    calls __crypto_register_alg, we need to move the initialisation
    out of there.

    Since both callers of __crypto_register_alg call crypto_check_alg,
    this is the logical place to perform the initialisation.

    Signed-off-by: Herbert Xu
    Acked-by: Stephan Mueller

    Herbert Xu
     

03 Apr, 2015

2 commits


22 Dec, 2014

1 commit


26 Nov, 2014

1 commit

  • This adds the module loading prefix "crypto-" to the template lookup
    as well.

    For example, attempting to load 'vfat(blowfish)' via AF_ALG now
    includes the "crypto-" prefix at every level, correctly rejecting "vfat":

    net-pf-38
    algif-hash
    crypto-vfat(blowfish)
    crypto-vfat(blowfish)-all
    crypto-vfat

    Reported-by: Mathias Krause
    Signed-off-by: Kees Cook
    Acked-by: Mathias Krause
    Signed-off-by: Herbert Xu

    Kees Cook
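
    A sketch of the prefixed autoload; the helper is illustrative, while
    request_module() and MODULE_ALIAS_CRYPTO() are the real interfaces
    involved.

        #include <linux/kmod.h>

        static void request_crypto_module(const char *name)
        {
                /* was effectively: request_module("%s", name);
                 * which could pull in arbitrary modules such as "vfat" */
                request_module("crypto-%s", name);
        }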
     

03 Jul, 2014

1 commit

  • Per further discussion with NIST, the requirements for FIPS state that
    we only need to panic the system on failed kernel module signature checks
    for crypto subsystem modules. This moves the fips-mode-only module
    signature check out of the generic module loading code, into the crypto
    subsystem, at points where we can catch both algorithm module loads and
    mode module loads. At the same time, make CONFIG_CRYPTO_FIPS dependent on
    CONFIG_MODULE_SIG, as this is entirely necessary for FIPS mode.

    v2: remove extraneous blank line, perform checks in static inline
    function, drop no longer necessary fips.h include.

    CC: "David S. Miller"
    CC: Rusty Russell
    CC: Stephan Mueller
    Signed-off-by: Jarod Wilson
    Acked-by: Neil Horman
    Signed-off-by: Herbert Xu

    Jarod Wilson
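
    A sketch of the kind of check described above, assuming the
    module_sig_ok() accessor; the helper name and exact behaviour are
    illustrative.

        #include <linux/fips.h>
        #include <linux/kernel.h>
        #include <linux/module.h>

        static inline void check_crypto_module_sig(struct module *mod)
        {
                /* in FIPS mode, a crypto module with a bad signature is fatal */
                if (fips_enabled && mod && !module_sig_ok(mod))
                        panic("Module %s signature verification failed in FIPS mode\n",
                              module_name(mod));
        }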
     

04 Jul, 2013

1 commit

  • The template lookup interface does not provide a way to use format
    strings, so make sure that the interface cannot be abused accidentally.

    Signed-off-by: Kees Cook
    Cc: Herbert Xu
    Cc: "David S. Miller"
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kees Cook
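
    A sketch of the hardening; the helper is illustrative and stands in
    for the request_module() call made during template lookup.

        #include <linux/kmod.h>

        static void load_template(const char *name)
        {
                /* was effectively: request_module(name);
                 * i.e. the caller-controlled name was the format string */
                request_module("%s", name);
        }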
     

28 Feb, 2013

1 commit

  • I'm not sure why, but the hlist for-each-entry iterators were
    conceived differently from the list iterator:

    list_for_each_entry(pos, head, member)

    The hlist ones were greedy and wanted an extra parameter:

    hlist_for_each_entry(tpos, pos, head, member)

    Why did they need an extra pos parameter? I'm not quite sure. Not only
    do they not really need it, it also prevents the iterator from looking
    exactly like the list iterator, which is unfortunate.

    Besides the semantic patch, there was some manual work required:

    - Fix up the actual hlist iterators in linux/list.h
    - Fix up the declaration of other iterators based on the hlist ones.
    - A very small number of places were using the 'node' parameter; these
    were modified to use 'obj->member' instead.
    - Coccinelle didn't handle the hlist_for_each_entry_safe iterator
    properly, so those had to be fixed up manually.

    The semantic patch which is mostly the work of Peter Senna Tschudin is here:

    @@
    iterator name hlist_for_each_entry, hlist_for_each_entry_continue, hlist_for_each_entry_from, hlist_for_each_entry_rcu, hlist_for_each_entry_rcu_bh, hlist_for_each_entry_continue_rcu_bh, for_each_busy_worker, ax25_uid_for_each, ax25_for_each, inet_bind_bucket_for_each, sctp_for_each_hentry, sk_for_each, sk_for_each_rcu, sk_for_each_from, sk_for_each_safe, sk_for_each_bound, hlist_for_each_entry_safe, hlist_for_each_entry_continue_rcu, nr_neigh_for_each, nr_neigh_for_each_safe, nr_node_for_each, nr_node_for_each_safe, for_each_gfn_indirect_valid_sp, for_each_gfn_sp, for_each_host;

    type T;
    expression a,c,d,e;
    identifier b;
    statement S;
    @@

    -T b;

    [akpm@linux-foundation.org: drop bogus change from net/ipv4/raw.c]
    [akpm@linux-foundation.org: drop bogus hunk from net/ipv6/raw.c]
    [akpm@linux-foundation.org: checkpatch fixes]
    [akpm@linux-foundation.org: fix warnings]
    [akpm@linux-foundation.org: redo intrusive kvm changes]
    Tested-by: Peter Senna Tschudin
    Acked-by: Paul E. McKenney
    Signed-off-by: Sasha Levin
    Cc: Wu Fengguang
    Cc: Marcelo Tosatti
    Cc: Gleb Natapov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sasha Levin
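
    A small before/after sketch of the iterator signature change, using an
    illustrative structure.

        #include <linux/list.h>

        struct item {
                int value;
                struct hlist_node node;
        };

        static int sum_items(struct hlist_head *head)
        {
                struct item *pos;
                int sum = 0;

                /* old form needed a cursor:
                 *   struct hlist_node *n;
                 *   hlist_for_each_entry(pos, n, head, node) */
                hlist_for_each_entry(pos, head, node)
                        sum += pos->value;

                return sum;
        }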
     

04 Feb, 2013

1 commit

  • Replace PTR_ERR followed by ERR_PTR by ERR_CAST, to be more concise.

    The semantic patch that makes this change is as follows:
    (http://coccinelle.lip6.fr/)

    //
    @@
    expression err,x;
    @@
    - err = PTR_ERR(x);
    if (IS_ERR(x))
    - return ERR_PTR(err);
    + return ERR_CAST(x);
    //

    Signed-off-by: Julia Lawall
    Signed-off-by: Herbert Xu

    Julia Lawall
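
    A sketch of the transformation on illustrative types and functions.

        #include <linux/err.h>

        struct inner;
        struct outer;

        static struct inner *make_inner(void);
        static struct outer *wrap(struct inner *in);

        static struct outer *make_outer(void)
        {
                struct inner *in = make_inner();

                /* was: err = PTR_ERR(in);
                 *      if (IS_ERR(in))
                 *              return ERR_PTR(err); */
                if (IS_ERR(in))
                        return ERR_CAST(in);

                return wrap(in);
        }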
     

22 Jun, 2012

1 commit

  • It has been observed that sometimes the crypto allocation code
    will get stuck for 60 seconds or multiples thereof. This is
    usually caused by an algorithm failing to pass the self-test.

    If an algorithm fails to be constructed, we will immediately notify
    all larval waiters. However, if it succeeds in construction, but
    then fails the self-test, we won't notify anyone at all.

    This patch fixes this by merging the notification in the case
    where the algorithm fails to be constructed with that of the
    case where it passes the self-test. This way, regardless of
    what happens, we'll give the larval waiters an answer.

    Signed-off-by: Herbert Xu

    Herbert Xu
     

26 Jan, 2012

1 commit

  • Hardware crypto engines frequently need to register a selection of
    different algorithms with the core. Simplify their code slightly,
    especially the error handling, by providing functions to register a
    number of algorithms in a single call.

    Signed-off-by: Mark Brown
    Signed-off-by: Herbert Xu

    Mark Brown
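
    A sketch of a driver using the bulk helpers; the array and module
    hooks are illustrative and the crypto_alg descriptors are omitted.

        #include <linux/crypto.h>
        #include <linux/kernel.h>
        #include <linux/module.h>

        static struct crypto_alg my_algs[2];    /* filled with real descriptors */

        static int __init my_driver_init(void)
        {
                /* registers the whole array, unwinding on partial failure */
                return crypto_register_algs(my_algs, ARRAY_SIZE(my_algs));
        }

        static void __exit my_driver_exit(void)
        {
                crypto_unregister_algs(my_algs, ARRAY_SIZE(my_algs));
        }

        module_init(my_driver_init);
        module_exit(my_driver_exit);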
     

09 Nov, 2011

1 commit

  • We leak the crypto instance when we unregister an instance with
    crypto_del_alg(). Therefore we introduce crypto_unregister_instance()
    to unlink the crypto instance from the template's instances list and
    to free the resources of the instance properly.

    Signed-off-by: Steffen Klassert
    Signed-off-by: Herbert Xu

    Steffen Klassert
     

21 Oct, 2011

2 commits