20 Dec, 2019

1 commit

  • Some of the algorithm unregistration functions return -ENOENT when asked
    to unregister a non-registered algorithm, while others always return 0
    or always return void. But no users check the return value, except for
    two of the bulk unregistration functions which print a message on error
    but still always return 0 to their caller, and crypto_del_alg() which
    calls crypto_unregister_instance() which always returns 0.

    Since unregistering a non-registered algorithm is always a kernel bug
    but there isn't anything callers should do to handle this situation at
    runtime, let's simplify things by making all the unregistration
    functions return void, and moving the error message into
    crypto_unregister_alg() and upgrading it to a WARN().

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
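
    A rough sketch of the resulting calling convention; the registry,
    struct and function below are hypothetical stand-ins, not the actual
    crypto code: unregistration reports a bogus call with WARN() instead
    of an error code nobody checks.

      #include <linux/list.h>
      #include <linux/rwsem.h>
      #include <linux/bug.h>

      /* Hypothetical registry, for illustration only. */
      struct example_alg {
          struct list_head list;
          const char *name;
      };

      static DECLARE_RWSEM(example_alg_sem);
      static LIST_HEAD(example_alg_list);

      /*
       * Returns void: unregistering an algorithm that was never registered
       * is a kernel bug, so it is reported with a WARN() rather than with
       * an error code that callers would ignore anyway.
       */
      void example_unregister_alg(struct example_alg *alg)
      {
          struct example_alg *p;
          bool found = false;

          down_write(&example_alg_sem);
          list_for_each_entry(p, &example_alg_list, list) {
              if (p == alg) {
                  list_del(&p->list);
                  found = true;
                  break;
              }
          }
          up_write(&example_alg_sem);

          WARN(!found, "Algorithm %s is not registered", alg->name);
      }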
     

31 May, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 3029 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

18 Apr, 2019

1 commit

  • In commit 71052dcf4be70 ("crypto: scompress - Use per-CPU struct instead
    multiple variables") I accidentally initialized the memory multiple times
    on a random CPU. I should have initialized the memory on every CPU, as
    had been done before. I didn't notice this because the scheduler didn't
    move the task to another CPU.
    Guenter managed to do that, and the code crashed as expected.

    Allocate / free per-CPU memory on each CPU.

    Fixes: 71052dcf4be70 ("crypto: scompress - Use per-CPU struct instead multiple variables")
    Reported-by: Guenter Roeck
    Signed-off-by: Sebastian Andrzej Siewior
    Tested-by: Guenter Roeck
    Signed-off-by: Herbert Xu

    Sebastian Andrzej Siewior
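
    A minimal sketch of the allocation pattern the fix restores; the struct,
    the function names and the 128 KB size are illustrative, not the exact
    scompress code. The point is that the buffers are allocated and freed on
    every possible CPU, not only on whichever CPU the task happens to run on:

      #include <linux/percpu.h>
      #include <linux/cpumask.h>
      #include <linux/topology.h>
      #include <linux/vmalloc.h>
      #include <linux/errno.h>

      #define SCRATCH_SIZE (128 * 1024)  /* assumed buffer size */

      struct scratch {
          void *src;
          void *dst;
      };

      static DEFINE_PER_CPU(struct scratch, scratch);

      static int alloc_scratches(void)
      {
          int cpu;

          /* Allocate on every possible CPU, not just the current one. */
          for_each_possible_cpu(cpu) {
              struct scratch *s = per_cpu_ptr(&scratch, cpu);

              s->src = vmalloc_node(SCRATCH_SIZE, cpu_to_node(cpu));
              s->dst = vmalloc_node(SCRATCH_SIZE, cpu_to_node(cpu));
              if (!s->src || !s->dst)
                  return -ENOMEM;  /* caller cleans up via free_scratches() */
          }
          return 0;
      }

      static void free_scratches(void)
      {
          int cpu;

          for_each_possible_cpu(cpu) {
              struct scratch *s = per_cpu_ptr(&scratch, cpu);

              vfree(s->src);  /* vfree(NULL) is a no-op */
              vfree(s->dst);
              s->src = NULL;
              s->dst = NULL;
          }
      }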
     

08 Apr, 2019

2 commits

  • Two per-CPU variables are allocated as pointers to per-CPU memory, which
    are then used as scratch buffers.
    We can be smarter about this and instead use a per-CPU struct which
    already contains the pointers; then we only need to allocate the
    scratch buffers.
    Add a lock to the struct. By doing so we can avoid the get_cpu()
    statement and gain lockdep coverage (if enabled) to ensure that the lock
    is always acquired in the right context. On non-preemptible kernels the
    lock vanishes.
    It is okay to use raw_cpu_ptr() in order to get a pointer to the struct
    since it is protected by the spinlock.

    The diffstat of this patch is negative and, according to size(1) on
    scompress.o:

    text  data  bss   dec  hex  filename
    1847   160   24  2031  7ef  dbg_before.o
    1754   232    4  1990  7c6  dbg_after.o
    1799    64   24  1887  75f  no_dbg-before.o
    1703    88    4  1795  703  no_dbg-after.o

    The overall size difference is also negative. The increase in the data
    section is only four bytes without lockdep.

    Signed-off-by: Sebastian Andrzej Siewior
    Signed-off-by: Herbert Xu

    Sebastian Andrzej Siewior
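
    A minimal sketch of the arrangement this commit describes (field and
    function names are illustrative, and the actual work on the buffers is
    elided): one per-CPU struct holds the scratch pointers plus the lock,
    and raw_cpu_ptr() is sufficient because the spinlock protects the data.

      #include <linux/percpu.h>
      #include <linux/spinlock.h>

      /* One struct per CPU instead of several per-CPU pointer variables. */
      struct scomp_scratch {
          spinlock_t lock;
          void *src;
          void *dst;
      };

      static DEFINE_PER_CPU(struct scomp_scratch, scomp_scratch) = {
          .lock = __SPIN_LOCK_UNLOCKED(scomp_scratch.lock),
      };

      static int do_comp_decomp(void)
      {
          struct scomp_scratch *scratch;
          int ret = 0;

          /*
           * raw_cpu_ptr() is fine: even if the task migrates right after
           * taking the pointer, the spinlock still serializes all users of
           * that CPU's scratch area, and lockdep (if enabled) verifies the
           * lock is taken in the right context.
           */
          scratch = raw_cpu_ptr(&scomp_scratch);
          spin_lock(&scratch->lock);
          /* ... use scratch->src and scratch->dst ... */
          spin_unlock(&scratch->lock);
          return ret;
      }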
     
  • If scomp_acomp_comp_decomp() fails to allocate memory for the
    destination then we never copy back the data we compressed.
    It is probably best to return an error code instead of 0 in case of
    failure.
    I haven't found any user that uses acomp_request_set_params() without
    the `dst' buffer, so there is probably no harm.

    Signed-off-by: Sebastian Andrzej Siewior
    Signed-off-by: Herbert Xu

    Sebastian Andrzej Siewior
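
    A sketch of the corrected error path; the helper name and the use of
    sgl_alloc() for the missing destination are assumptions of this sketch,
    while in the kernel the check sits inline in scomp_acomp_comp_decomp():

      #include <linux/scatterlist.h>
      #include <crypto/acompress.h>

      /* If the caller left req->dst NULL and we cannot allocate one, fail
       * with -ENOMEM instead of returning 0 without copying any output. */
      static int scomp_alloc_dst(struct acomp_req *req)
      {
          if (req->dst)
              return 0;

          req->dst = sgl_alloc(req->dlen, GFP_ATOMIC, NULL);
          if (!req->dst)
              return -ENOMEM;

          return 0;
      }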
     

09 Nov, 2018

1 commit

  • There have been a pretty ridiculous number of issues with initializing
    the report structures that are copied to userspace by NETLINK_CRYPTO.
    Commit 4473710df1f8 ("crypto: user - Prepare for CRYPTO_MAX_ALG_NAME
    expansion") replaced some strncpy()s with strlcpy()s, thereby
    introducing information leaks. Later two other people tried to replace
    other strncpy()s with strlcpy() too, which would have introduced even
    more information leaks:

    - https://lore.kernel.org/patchwork/patch/954991/
    - https://patchwork.kernel.org/patch/10434351/

    Commit cac5818c25d0 ("crypto: user - Implement a generic crypto
    statistics") also uses the buggy strlcpy() approach and therefore leaks
    uninitialized memory to userspace. A fix was proposed, but it was
    originally incomplete.

    Seeing as how apparently no one can get this right with the current
    approach, change all the reporting functions to:

    - Start by memsetting the report structure to 0. This guarantees it's
      always initialized, regardless of what happens later.
    - Initialize all strings using strscpy(). This is safe after the
      memset, ensures null termination of long strings, avoids unnecessary
      work, and avoids the -Wstringop-truncation warnings from gcc.
    - Use sizeof(var) instead of sizeof(type). This is more robust against
      copy+paste errors.

    For simplicity, also reuse the -EMSGSIZE return value from nla_put().

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
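
    The resulting pattern, sketched for the cipher report; the structure and
    netlink attribute come from the NETLINK_CRYPTO uapi, but the function
    body below is an illustration of the approach rather than the exact
    kernel code:

      #include <linux/crypto.h>
      #include <linux/cryptouser.h>
      #include <linux/string.h>
      #include <net/netlink.h>

      static int example_report_cipher(struct sk_buff *skb,
                                       struct crypto_alg *alg)
      {
          struct crypto_report_cipher rcipher;

          /* 1. Zero the whole structure so no stack bytes can leak. */
          memset(&rcipher, 0, sizeof(rcipher));

          /* 2. strscpy() guarantees NUL termination and, after the memset,
           *    does no more copying than necessary. */
          strscpy(rcipher.type, "cipher", sizeof(rcipher.type));

          rcipher.blocksize = alg->cra_blocksize;
          rcipher.min_keysize = alg->cra_cipher.cia_min_keysize;
          rcipher.max_keysize = alg->cra_cipher.cia_max_keysize;

          /* 3. sizeof(var), not sizeof(type), and reuse nla_put()'s
           *    -EMSGSIZE on failure. */
          return nla_put(skb, CRYPTOCFGA_REPORT_CIPHER,
                         sizeof(rcipher), &rcipher);
      }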
     

07 Jan, 2018

1 commit


03 Aug, 2017

3 commits

  • The scompress code allocates 2 x 128 KB of scratch buffers for each CPU,
    so that clients of the async API can use synchronous implementations
    even from atomic context. However, on systems such as Cavium ThunderX
    (which has 96 cores), this adds up to a non-negligible 24 MB. Also,
    32-bit systems may prefer to use their precious vmalloc space for other
    things, especially since there don't appear to be any clients for the
    async compression API yet.

    So let's defer allocation of the scratch buffers until the first time
    we allocate an acompress cipher based on an scompress implementation.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
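
    A sketch of the deferred setup; the names are illustrative and the
    alloc/free helpers are assumed to behave like the per-CPU allocation
    sketch shown earlier. The scratch buffers are created under a mutex
    when the first scomp-backed transform is instantiated and released
    when the last one goes away:

      #include <linux/mutex.h>

      /* Assumed helpers, as in the per-CPU allocation sketch above. */
      int alloc_scratches(void);
      void free_scratches(void);

      static DEFINE_MUTEX(scomp_lock);
      static int scomp_scratch_users;

      static int scomp_init_tfm(void)
      {
          int ret = 0;

          mutex_lock(&scomp_lock);
          if (!scomp_scratch_users) {
              ret = alloc_scratches();
              if (ret)
                  goto out;
          }
          scomp_scratch_users++;
      out:
          mutex_unlock(&scomp_lock);
          return ret;
      }

      static void scomp_exit_tfm(void)
      {
          mutex_lock(&scomp_lock);
          if (!--scomp_scratch_users)
              free_scratches();
          mutex_unlock(&scomp_lock);
      }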
     
  • When allocating the per-CPU scratch buffers, we allocate the source
    and destination buffers separately, but bail immediately if the second
    allocation fails, without freeing the first one. Fix that.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
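
    As a small illustrative helper (the name and the use of vmalloc() are
    assumptions of the sketch), the fixed error path frees the first buffer
    when the second allocation fails:

      #include <linux/vmalloc.h>
      #include <linux/errno.h>

      static int alloc_pair(void **src, void **dst, size_t size)
      {
          *src = vmalloc(size);
          if (!*src)
              return -ENOMEM;

          *dst = vmalloc(size);
          if (!*dst) {
              vfree(*src);  /* this was leaked before the fix */
              *src = NULL;
              return -ENOMEM;
          }
          return 0;
      }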
     
  • Due to the use of per-CPU buffers, scomp_acomp_comp_decomp() executes
    with preemption disabled, and so whether the CRYPTO_TFM_REQ_MAY_SLEEP
    flag is set is irrelevant, since we cannot sleep anyway. So disregard
    the flag, and use GFP_ATOMIC unconditionally.

    Cc: stable@vger.kernel.org # v4.10+
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
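
    For illustration, the two choices side by side as hypothetical helpers
    (not the kernel code); only the unconditional GFP_ATOMIC variant is
    correct once the per-CPU buffers force preemption off:

      #include <linux/crypto.h>
      #include <linux/gfp.h>

      /* What a sleep-capable path would do. */
      static gfp_t gfp_if_we_could_sleep(u32 flags)
      {
          return (flags & CRYPTO_TFM_REQ_MAY_SLEEP) ? GFP_KERNEL : GFP_ATOMIC;
      }

      /* What the scomp path has to do: preemption is disabled, so sleeping
       * is never allowed regardless of the request flags. */
      static gfp_t gfp_for_scomp(u32 flags)
      {
          return GFP_ATOMIC;
      }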
     

24 Apr, 2017

1 commit


13 Jan, 2017

1 commit

  • Continuing from this commit: 52f5684c8e1e
    ("kernel: use macros from compiler.h instead of __attribute__((...))")

    I submitted 4 patches in total. They are part of a task I've taken up to
    increase compiler portability in the kernel. I've already cleaned up the
    subsystems under /kernel /mm /block and /security; this patch targets
    /crypto.

    There is <linux/compiler.h>, which provides macros for various gcc
    specific constructs, e.g. __weak for __attribute__((weak)). I've
    replaced all instances of gcc-specific attributes with the right macros
    in the crypto subsystem.

    I had to make one additional change to compiler-gcc.h for the case where
    one wants to use __attribute__((aligned)) without specifying an alignment
    factor. From the gcc docs, this results in the largest alignment for
    that data type on the target machine, so I've named the macro
    __aligned_largest. Please advise if another name is more appropriate.

    Signed-off-by: Gideon Israel Dsouza
    Signed-off-by: Herbert Xu

    Gideon Israel Dsouza
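
    An example of the substitution (the structs here are made up purely for
    illustration):

      #include <linux/compiler.h>

      /* Before: open-coded gcc attribute, no alignment factor given. */
      struct state_before {
          unsigned char buf[16];
      } __attribute__((aligned));

      /* After: the new macro expands to the same attribute, i.e. the
       * largest useful alignment on the target machine. */
      struct state_after {
          unsigned char buf[16];
      } __aligned_largest;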
     

25 Oct, 2016

1 commit