24 Aug, 2020

1 commit

  • Replace the existing /* fall through */ comments and their variants with
    the new pseudo-keyword macro fallthrough[1]. Also, remove fall-through
    markings where they are unnecessary.

    [1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through
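
    A minimal, hypothetical sketch of the conversion (the function, switch
    labels and values below are invented for illustration; fallthrough itself
    is the pseudo-keyword defined in include/linux/compiler_attributes.h):

        static int handle_state(int state)
        {
                int ret = 0;

                switch (state) {
                case 0:
                        ret = 1;
                        fallthrough;    /* replaces a "fall through" comment */
                case 1:
                        ret += 1;
                        break;
                default:
                        break;          /* no marking needed, nothing falls through */
                }

                return ret;
        }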

    Signed-off-by: Gustavo A. R. Silva

    Gustavo A. R. Silva
     

08 Aug, 2020

1 commit

  • As said by Linus:

    A symmetric naming is only helpful if it implies symmetries in use.
    Otherwise it's actively misleading.

    In "kzalloc()", the z is meaningful and an important part of what the
    caller wants.

    In "kzfree()", the z is actively detrimental, because maybe in the
    future we really _might_ want to use that "memfill(0xdeadbeef)" or
    something. The "zero" part of the interface isn't even _relevant_.

    The main reason that kzfree() exists is to clear sensitive information
    that should not be leaked to other future users of the same memory
    objects.

    Rename kzfree() to kfree_sensitive() to follow the example of the recently
    added kvfree_sensitive() and make the intention of the API more explicit.
    In addition, memzero_explicit() is used to clear the memory to make sure
    that it won't get optimized away by the compiler.

    The renaming is done by using the command sequence:

    git grep -w --name-only kzfree |\
    xargs sed -i 's/kzfree/kfree_sensitive/'

    followed by some editing of the kfree_sensitive() kerneldoc and adding
    a kzfree backward compatibility macro in slab.h.
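
    As a rough sketch of the result (the exact form of the compatibility
    macro is an assumption; kfree_sensitive() and memzero_explicit() are the
    real interfaces):

        #include <linux/slab.h>

        /* slab.h compatibility shim, assumed to be of the form:
         *   #define kzfree(x)  kfree_sensitive(x)
         */

        static void drop_key(u8 *key)
        {
                /* kfree_sensitive() clears the object with memzero_explicit()
                 * before freeing, so the zeroization cannot be optimized away.
                 */
                kfree_sensitive(key);
        }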

    [akpm@linux-foundation.org: fs/crypto/inline_crypt.c needs linux/slab.h]
    [akpm@linux-foundation.org: fix fs/crypto/inline_crypt.c some more]

    Suggested-by: Joe Perches
    Signed-off-by: Waiman Long
    Signed-off-by: Andrew Morton
    Acked-by: David Howells
    Acked-by: Michal Hocko
    Acked-by: Johannes Weiner
    Cc: Jarkko Sakkinen
    Cc: James Morris
    Cc: "Serge E. Hallyn"
    Cc: Joe Perches
    Cc: Matthew Wilcox
    Cc: David Rientjes
    Cc: Dan Carpenter
    Cc: "Jason A . Donenfeld"
    Link: http://lkml.kernel.org/r/20200616154311.12314-3-longman@redhat.com
    Signed-off-by: Linus Torvalds

    Waiman Long
     

15 Jun, 2020

1 commit

  • The Jitter RNG is unconditionally allocated as a seed source following
    commit 97f2650e5040. Thus, the instance must always be deallocated.

    Reported-by: syzbot+2e635807decef724a1fa@syzkaller.appspotmail.com
    Fixes: 97f2650e5040 ("crypto: drbg - always seeded with SP800-90B ...")
    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Müller
     

08 May, 2020

1 commit

  • Return the negative error code -ENOMEM from the kzalloc error handling
    path instead of 0, as is done elsewhere in this function.
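
    The pattern of the fix, as an illustrative sketch (generic names, not
    the exact function in drbg.c):

        static int alloc_example(u8 **out, size_t len)
        {
                u8 *buf = kzalloc(len, GFP_KERNEL);

                if (!buf)
                        return -ENOMEM; /* was: falling through and returning 0 */

                *out = buf;
                return 0;
        }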

    Reported-by: Xiumei Mu
    Fixes: db07cd26ac6a ("crypto: drbg - add FIPS 140-2 CTRNG for noise source")
    Cc:
    Signed-off-by: Wei Yongjun
    Reviewed-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Wei Yongjun
     

24 Apr, 2020

1 commit


23 May, 2019

1 commit

  • FIPS 140-2 section 4.9.2 requires a continuous self test of the noise
    source. Up to kernel 4.8, drivers/char/random.c provided this continuous
    self test. Afterwards it was moved to a location that is inconsistent
    with the FIPS 140-2 requirements. The relevant patch was
    e192be9d9a30555aae2ca1dc3aad37cba484cd4a.

    Thus, the FIPS 140-2 CTRNG is added to the DRBG when it obtains the
    seed. This patch resurrects the function drbg_fips_continuous_test that
    existed some time ago and applies it to the noise sources. The patch
    that removed drbg_fips_continuous_test was
    b3614763059b82c26bdd02ffcb1c016c1132aad0.
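
    The idea behind such a continuous test, as a simplified stand-in (not
    the exact drbg_fips_continuous_test code): a noise sample is rejected
    when it is identical to the previously obtained sample.

        static bool ctrng_sample_ok(const u8 *cur, u8 *prev, size_t len,
                                    bool *primed)
        {
                bool ok = true;

                if (*primed)
                        ok = memcmp(cur, prev, len) != 0;

                memcpy(prev, cur, len);
                *primed = true;
                return ok;
        }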

    The Jitter RNG implements its own FIPS 140-2 self test and thus does not
    need to be subjected to the test in the DRBG.

    The patch contains a tiny fix to ensure proper zeroization in case of an
    error during the Jitter RNG data gathering.

    Signed-off-by: Stephan Mueller
    Reviewed-by: Yann Droneaud
    Signed-off-by: Herbert Xu

    Stephan Mueller
     

25 Apr, 2019

1 commit

  • The flags field in 'struct shash_desc' never actually does anything.
    The only ostensibly supported flag is CRYPTO_TFM_REQ_MAY_SLEEP.
    However, no shash algorithm ever sleeps, making this flag a no-op.

    With this being the case, inevitably some users who can't sleep wrongly
    pass MAY_SLEEP. These would all need to be fixed if any shash algorithm
    actually started sleeping. For example, the shash_ahash_*() functions,
    which wrap a shash algorithm with the ahash API, pass through MAY_SLEEP
    from the ahash API to the shash API. However, the shash functions are
    called under kmap_atomic(), so actually they're assumed to never sleep.

    Even if it turns out that some users do need preemption points while
    hashing large buffers, we could easily provide a helper function
    crypto_shash_update_large() which divides the data into smaller chunks
    and calls crypto_shash_update() and cond_resched() for each chunk. It's
    not necessary to have a flag in 'struct shash_desc', nor is it necessary
    to make individual shash algorithms aware of this at all.
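
    Such a helper might look roughly like this (hypothetical, it does not
    exist in the tree; crypto_shash_update() and cond_resched() are real):

        #include <crypto/hash.h>
        #include <linux/sched.h>

        static int crypto_shash_update_large(struct shash_desc *desc,
                                             const u8 *data, unsigned int len)
        {
                while (len) {
                        unsigned int n = min_t(unsigned int, len, 4096);
                        int err = crypto_shash_update(desc, data, n);

                        if (err)
                                return err;
                        data += n;
                        len -= n;
                        cond_resched();         /* preemption point per chunk */
                }
                return 0;
        }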

    Therefore, remove shash_desc::flags, and document that the
    crypto_shash_*() functions can be called from any context.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

18 Apr, 2019

1 commit

  • Use subsys_initcall for registration of all templates and generic
    algorithm implementations, rather than module_init. Then change
    cryptomgr to use arch_initcall, to place it before the subsys_initcalls.

    This is needed so that when both a generic and optimized implementation
    of an algorithm are built into the kernel (not loadable modules), the
    generic implementation is registered before the optimized one.
    Otherwise, the self-tests for the optimized implementation are unable to
    allocate the generic implementation for the new comparison fuzz tests.
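
    A hypothetical illustration of the registration change for a generic
    implementation (module and function names invented):

        #include <linux/module.h>

        static int __init foo_generic_mod_init(void)
        {
                return 0;       /* would call crypto_register_alg() here */
        }

        static void __exit foo_generic_mod_fini(void)
        {
        }

        /* was: module_init(foo_generic_mod_init); */
        subsys_initcall(foo_generic_mod_init);
        module_exit(foo_generic_mod_fini);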

    Note that on arm, a side effect of this change is that self-tests for
    generic implementations may run before the unaligned access handler has
    been installed. So, unaligned accesses will crash the kernel. This is
    arguably a good thing as it makes it easier to detect that type of bug.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

03 Aug, 2018

1 commit

  • The cipher implementations of the kernel crypto API favor in-place
    cipher operations. Thus, switch the CTR cipher operation in the DRBG to
    perform in-place operations. This is implemented by using the output
    buffer as input buffer and zeroizing it before the cipher operation to
    implement a CTR encryption of a NULL buffer.
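
    Conceptually, a simplified sketch (not the literal drbg.c code; the
    function name is invented and the request/wait setup is assumed to
    exist, while the skcipher calls shown are the real API):

        #include <crypto/skcipher.h>
        #include <linux/scatterlist.h>
        #include <linux/string.h>

        /* Zeroize the output buffer and CTR-encrypt it in place: XORing the
         * keystream with zeros yields the encrypted counter, i.e. the DRBG
         * output.
         */
        static int ctr_encrypt_null_buffer(struct skcipher_request *req,
                                           struct crypto_wait *wait,
                                           u8 *outbuf, unsigned int outlen,
                                           u8 *iv)
        {
                struct scatterlist sg;

                memset(outbuf, 0, outlen);
                sg_init_one(&sg, outbuf, outlen);       /* src == dst */
                skcipher_request_set_crypt(req, &sg, &sg, outlen, iv);
                return crypto_wait_req(crypto_skcipher_encrypt(req), wait);
        }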

    The speed improvement is quite visible with the following comparison
    using the LRNG implementation.

    Without the patch set:

    16 bytes | 12.267661 MB/s | 61338304 bytes | 5000000213 ns
    32 bytes | 23.603770 MB/s | 118018848 bytes | 5000000073 ns
    64 bytes | 46.732262 MB/s | 233661312 bytes | 5000000241 ns
    128 bytes | 90.038042 MB/s | 450190208 bytes | 5000000244 ns
    256 bytes | 160.399616 MB/s | 801998080 bytes | 5000000393 ns
    512 bytes | 259.878400 MB/s | 1299392000 bytes | 5000001675 ns
    1024 bytes | 386.050662 MB/s | 1930253312 bytes | 5000001661 ns
    2048 bytes | 493.641728 MB/s | 2468208640 bytes | 5000001598 ns
    4096 bytes | 581.835981 MB/s | 2909179904 bytes | 5000003426 ns

    With the patch set:

    16 bytes | 17.051142 MB/s | 85255712 bytes | 5000000854 ns
    32 bytes | 32.695898 MB/s | 163479488 bytes | 5000000544 ns
    64 bytes | 64.490739 MB/s | 322453696 bytes | 5000000954 ns
    128 bytes | 123.285043 MB/s | 616425216 bytes | 5000000201 ns
    256 bytes | 233.434573 MB/s | 1167172864 bytes | 5000000573 ns
    512 bytes | 384.405197 MB/s | 1922025984 bytes | 5000000671 ns
    1024 bytes | 566.313370 MB/s | 2831566848 bytes | 5000001080 ns
    2048 bytes | 744.518042 MB/s | 3722590208 bytes | 5000000926 ns
    4096 bytes | 867.501670 MB/s | 4337508352 bytes | 5000002181 ns

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Müller
     

20 Jul, 2018

1 commit

  • The CTR DRBG requires two SGLs pointing to input/output buffers for the
    CTR AES operation. The SGLs used always have only one entry each. Thus,
    the SGLs can be initialized at allocation time, preventing a
    re-initialization of the SGLs during each call.

    The performance is increased by about 1 to 3 percent depending on the
    size of the requested buffer.
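
    A rough sketch of the idea (function and variable names are
    illustrative; sg_init_table() and sg_set_buf() are the real helpers):

        #include <linux/scatterlist.h>

        /* Done once, at DRBG allocation time. */
        static void drbg_sgl_alloc_init(struct scatterlist *sg_in,
                                        struct scatterlist *sg_out)
        {
                sg_init_table(sg_in, 1);
                sg_init_table(sg_out, 1);
        }

        /* Done per request: only point the single entries at the buffers. */
        static void drbg_sgl_set(struct scatterlist *sg_in, const u8 *in,
                                 unsigned int inlen,
                                 struct scatterlist *sg_out, u8 *out,
                                 unsigned int outlen)
        {
                sg_set_buf(sg_in, in, inlen);
                sg_set_buf(sg_out, out, outlen);
        }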

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     

21 Apr, 2018

1 commit

  • During freeing of the internal buffers used by the DRBG, set the pointer
    to NULL. It is possible that the context with the freed buffers is
    reused. In case of an error during initialization where the pointers
    do not yet point to allocated memory, the NULL value prevents a double
    free.
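
    The pattern, sketched with made-up names:

        struct my_state { u8 *buf; };   /* stand-in for the DRBG state */

        static void my_state_free(struct my_state *s)
        {
                kfree(s->buf);
                s->buf = NULL;  /* a repeated cleanup now passes NULL to
                                 * kfree(), which is a no-op, instead of
                                 * double-freeing */
        }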

    Cc: stable@vger.kernel.org
    Fixes: 3cfc3b9721123 ("crypto: drbg - use aligned buffers")
    Signed-off-by: Stephan Mueller
    Reported-by: syzbot+75397ee3df5c70164154@syzkaller.appspotmail.com
    Signed-off-by: Herbert Xu

    Stephan Mueller
     

03 Nov, 2017

1 commit

  • The DRBG starts an async crypto op and waits for it to complete.
    Move it over to generic code doing the same.

    The code now also passes the CRYPTO_TFM_REQ_MAY_SLEEP flag, indicating
    that crypto request memory allocation may use GFP_KERNEL, which should
    be perfectly fine as the code is obviously sleeping for the
    completion of the request anyway.
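
    The generic helpers referred to are DECLARE_CRYPTO_WAIT(),
    crypto_req_done() and crypto_wait_req(); in rough outline (not the
    literal drbg.c hunk, the wrapper function is invented):

        #include <crypto/skcipher.h>
        #include <linux/crypto.h>

        static int encrypt_and_wait(struct skcipher_request *req)
        {
                DECLARE_CRYPTO_WAIT(wait);

                skcipher_request_set_callback(req,
                                CRYPTO_TFM_REQ_MAY_BACKLOG |
                                CRYPTO_TFM_REQ_MAY_SLEEP,
                                crypto_req_done, &wait);

                /* Sleeps until crypto_req_done() signals completion. */
                return crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
        }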

    Signed-off-by: Gilad Ben-Yossef
    Signed-off-by: Herbert Xu

    Gilad Ben-Yossef
     

20 Sep, 2017

1 commit

  • During the change to use aligned buffers, the deallocation code path was
    not updated correctly. The current code tries to free the aligned buffer
    pointer and not the original buffer pointer as it is supposed to.

    Thus, the code is updated to free the original buffer pointer and set
    the aligned buffer pointer that is used throughout the code to NULL.
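
    Illustrative fragment of the corrected cleanup (the Vbuf/V naming
    follows the aligned-buffer patch referenced in the Fixes: tag; treat it
    as a sketch):

        kfree(drbg->Vbuf);      /* free the original allocation ... */
        drbg->Vbuf = NULL;
        drbg->V = NULL;         /* ... and clear the aligned alias that the
                                 * rest of the code uses */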

    Fixes: 3cfc3b9721123 ("crypto: drbg - use aligned buffers")
    CC:
    CC: Herbert Xu
    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     

22 Jun, 2017

1 commit


23 May, 2017

1 commit

  • drbg_kcapi_sym_ctr() was using wait_for_completion_interruptible() to
    wait for the completion of an async crypto op, but if a signal occurs it
    may return before the DMA operations of the HW crypto provider finish,
    thus corrupting the output buffer.

    Resolve this by using wait_for_completion() instead.
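
    The change, in essence (an illustrative fragment; the completion field
    name is assumed to be the DRBG's CTR completion object):

        /* A signal must not abort the wait: the hardware may still be
         * writing into the output buffer via DMA.
         */
        /* was: wait_for_completion_interruptible(&drbg->ctr_completion); */
        wait_for_completion(&drbg->ctr_completion);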

    Reported-by: Eric Biggers
    Signed-off-by: Gilad Ben-Yossef
    CC: stable@vger.kernel.org
    Signed-off-by: Herbert Xu

    Gilad Ben-Yossef
     

24 Mar, 2017

1 commit


30 Nov, 2016

2 commits

  • Merge the crypto tree to pull in chelsio chcr fix.

    Herbert Xu
     
  • When using SGs, only heap memory (memory that is valid as per
    virt_addr_valid) is allowed to be referenced. The CTR DRBG used to
    reference the caller-provided memory directly in an SG. In case the
    caller provided stack memory pointers, the SG mapping is not considered
    to be valid. In some cases, this would even cause a paging fault.

    The change adds a new scratch buffer that is used unconditionally to
    catch the cases where the caller-provided buffer is not suitable for
    use in an SG. The crypto operation of the CTR DRBG produces its output
    with that scratch buffer and finally copies the content of the
    scratch buffer to the caller's buffer.

    The scratch buffer is allocated during allocation time of the CTR DRBG
    as its access is protected with the DRBG mutex.
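
    An illustrative fragment of the resulting data flow (outscratchpad
    stands for the scratch buffer described above; names are approximate):

        struct scatterlist sg_out;

        sg_init_one(&sg_out, drbg->outscratchpad, cryptlen);   /* heap, SG-safe */
        /* ... run the CTR AES operation with sg_out as destination ... */
        memcpy(outbuf, drbg->outscratchpad, cryptlen);  /* copy to the caller */
        memzero_explicit(drbg->outscratchpad, cryptlen);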

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     

21 Nov, 2016

1 commit

  • The CTR DRBG segments the number of random bytes to be generated into
    128 byte blocks. The current code misses the advancement of the output
    buffer pointer when the requestor asks for more than 128 bytes of data.
    In this case, the next 128 byte block of random numbers is copied to
    the beginning of the output buffer again. This implies that only the
    first 128 bytes of the output buffer would ever be filled.

    The patch adds the advancement of the buffer pointer to fill the entire
    buffer.
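
    The missing advancement, in schematic form (an illustrative loop, not
    the literal hunk):

        static void drbg_fill(u8 *outbuf, unsigned int len)
        {
                while (len) {
                        unsigned int todo = min_t(unsigned int, len, 128);

                        /* ... generate the next 128-byte segment into outbuf ... */
                        outbuf += todo;         /* the previously missing advancement */
                        len -= todo;
                }
        }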

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     

24 Aug, 2016

1 commit


16 Aug, 2016

1 commit

  • When calling the DRBG health test in FIPS mode, the Jitter RNG is not
    yet present in the kernel crypto API which will cause the instantiation
    to fail and thus the health test to fail.

    As the health tests cover the enforcement of various thresholds, invoke
    the functions that are supposed to enforce the thresholds directly.

    This patch also saves precious seed.

    Reported-by: Tapas Sarangi
    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     

20 Jun, 2016

2 commits


15 Jun, 2016

4 commits

  • The TFM object maintains the key for the CTR DRBG.

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     
  • The CTR DRBG update function performs a full CTR AES operation including
    the XOR with "plaintext" data. Hence, remove the XOR from the code and
    use the CTR mode to do the XOR.

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     
  • Hardware cipher implementations may require aligned buffers. All buffers
    that potentially are processed with a cipher are now aligned.

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     
  • The CTR DRBG derives its random data from the counter (CTR) that is
    encrypted with AES.

    This patch now changes the CTR DRBG implementation such that the
    CTR AES mode is employed. This allows the use of streamlined CTR AES
    implementations such as ctr-aes-aesni.

    Unfortunately, there are the following subtle changes we need to apply
    when using the CTR AES mode (see the sketch after this list):

    - the CTR mode increments the counter after the cipher operation, but
    the CTR DRBG requires the increment before the cipher op. Hence, the
    crypto_inc is applied to the counter (drbg->V) once it is
    recalculated.

    - the CTR mode wants to encrypt data, but the CTR DRBG is interested in
    the encrypted counter only. The full CTR mode is the XOR of the
    encrypted counter with the plaintext data. To access the encrypted
    counter, the patch uses a NULL data vector as plaintext to be
    "encrypted".

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     

02 Jun, 2016

1 commit

  • The CTR DRBG code always set the key for each symmetric cipher
    invocation even though the key had not changed.

    The patch ensures that the setkey is only invoked when a new key is
    generated by the DRBG.

    With this patch, the CTR DRBG performance increases by more than 150%.
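
    In outline (an illustrative fragment; drbg->C is assumed to hold the
    freshly derived key material and tfm the block cipher handle):

        /* Invoked only from the update function, i.e. only right after the
         * DRBG has recomputed its key, not once per generate request.
         */
        crypto_cipher_setkey(tfm, drbg->C, drbg_keylen(drbg));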

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     

05 Apr, 2016

1 commit

  • The HMAC implementation allows setting the HMAC key independently from
    the hashing operation. Therefore, the key only needs to be set when a
    new key is generated.

    This patch increases the speed of the HMAC DRBG by at least 35% depending
    on the use case.

    The patch is fully CAVS tested.

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     

25 Jan, 2016

1 commit


10 Dec, 2015

1 commit


11 Jun, 2015

1 commit


10 Jun, 2015

2 commits

  • As required by SP800-90A, the DRBG implements a reseeding threshold.
    This threshold is 2**48 requests (64 bit) and 2**32 requests (32 bit)
    as implemented in drbg_max_requests.

    With the recently introduced changes, the DRBG is now always used as a
    stdrng which is initialized very early in the boot cycle. To ensure that
    sufficient entropy is present, the Jitter RNG is added to provide
    entropy even at early boot time.

    However, the 2nd seed source, the nonblocking pool, is usually
    degraded at that time. Therefore, the DRBG is seeded with the Jitter RNG
    (which I believe contains good entropy, which however is questioned by
    others) and is seeded with a degraded nonblocking pool. This seed is
    now used for practically the lifetime of the system (2**48 requests is
    a lot).

    The patch now changes the reseed threshold as follows: up until the time
    the DRBG obtains a seed from a fully initialized nonblocking pool, the
    reseeding threshold is lowered such that the DRBG is forced to reseed
    itself reasonably often. Once it obtains the seed from a fully
    initialized nonblocking pool, the reseed threshold is set to the value
    required by SP800-90A.
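
    Schematically (illustrative names and values, not the exact patch):

        if (!fully_seeded)
                /* nonblocking pool not yet fully initialized: reseed often */
                reseed_threshold = 50;          /* illustrative low value */
        else
                /* back to the SP800-90A maximum (2**48 resp. 2**32) */
                reseed_threshold = drbg_max_requests(drbg);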

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     
  • The get_blocking_random_bytes API is broken because the wait can
    be arbitrarily long (potentially forever) so there is no safe way
    of calling it from within the kernel.

    This patch replaces it with the new callback API which does not
    have this problem.

    The patch also removes the entropy buffer registered with the DRBG
    handle in favor of stack variables to hold the seed data.

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     

04 Jun, 2015

1 commit


27 May, 2015

3 commits

  • During initialization, the DRBG now tries to allocate a handle of the
    Jitter RNG. If such a Jitter RNG is available during seeding, the DRBG
    pulls the required entropy/nonce string from get_random_bytes and
    concatenates it with a string of equal size from the Jitter RNG. That
    combined string is now the seed for the DRBG.

    Written differently, the initial seed of the DRBG is now:

    get_random_bytes(entropy/nonce) || jitterentropy (entropy/nonce)

    If the Jitter RNG is not available, the DRBG only seeds from
    get_random_bytes.

    CC: Andreas Steffen
    CC: Theodore Ts'o
    CC: Sandy Harris
    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     
  • The async seeding operation is triggered during initialization right
    after the first non-blocking seeding is completed. As required by the
    asynchronous operation of random.c, a callback function is provided that
    is triggered by random.c once entropy is available. That callback
    function performs the actual seeding of the DRBG.

    CC: Andreas Steffen
    CC: Theodore Ts'o
    CC: Sandy Harris
    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     
  • In order to prepare for the addition of the asynchronous seeding call,
    the invocation of seeding the DRBG is moved out into a helper function.

    In addition, a block of memory is allocated during initialization time
    that will be used as a scratchpad for obtaining entropy. That scratchpad
    is used for the initial seeding operation as well as by the
    asynchronous seeding call. The memory must be zeroized every time the
    DRBG seeding call succeeds to avoid entropy data lingering in memory.
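
    The zeroization step, as a fragment (the buffer name is illustrative;
    memzero_explicit() is shown as the robust way to clear it, the patch
    itself may use a plain memset()):

        /* clear the seed scratchpad after every successful seeding so no
         * entropy data lingers in memory
         */
        memzero_explicit(drbg->seed_scratchpad, scratchpad_len);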

    CC: Andreas Steffen
    CC: Theodore Ts'o
    CC: Sandy Harris
    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
     

23 Apr, 2015

1 commit


22 Apr, 2015

1 commit