28 Sep, 2019

2 commits

  • Pull kernel lockdown mode from James Morris:
    "This is the latest iteration of the kernel lockdown patchset, from
    Matthew Garrett, David Howells and others.

    From the original description:

    This patchset introduces an optional kernel lockdown feature,
    intended to strengthen the boundary between UID 0 and the kernel.
    When enabled, various pieces of kernel functionality are restricted.
    Applications that rely on low-level access to either hardware or the
    kernel may cease working as a result - therefore this should not be
    enabled without appropriate evaluation beforehand.

    The majority of mainstream distributions have been carrying variants
    of this patchset for many years now, so there's value in providing an
    upstream implementation. It doesn't meet every distribution
    requirement, but gets us much closer to not requiring external
    patches.

    There are two major changes since this was last proposed for mainline:

    - Separating lockdown from EFI secure boot. Background discussion is
    covered here: https://lwn.net/Articles/751061/

    - Implementation as an LSM, with a default stackable lockdown LSM
    module. This allows the lockdown feature to be policy-driven,
    rather than encoding an implicit policy within the mechanism.

    The new locked_down LSM hook is provided to allow LSMs to make a
    policy decision around whether kernel functionality that would allow
    tampering with or examining the runtime state of the kernel should be
    permitted.

    The included lockdown LSM provides an implementation with a simple
    policy intended for general purpose use. This policy provides a coarse
    level of granularity, controllable via the kernel command line:

    lockdown={integrity|confidentiality}

    Enable the kernel lockdown feature. If set to integrity, kernel features
    that allow userland to modify the running kernel are disabled. If set to
    confidentiality, kernel features that allow userland to extract
    confidential information from the kernel are also disabled.

    This may also be controlled via /sys/kernel/security/lockdown and
    overridden by kernel configuration.
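
    The policy semantics described above can be sketched in a few lines.
    This is a toy user-space model, not kernel code; the reason names and
    numeric levels below are illustrative, not the kernel's actual values.

```python
# Toy model of the lockdown LSM's coarse policy. Reasons are ordered:
# everything up to INTEGRITY_MAX restricts modification of the running
# kernel; everything up to CONFIDENTIALITY_MAX additionally restricts
# extraction of kernel state. Names/values are illustrative only.
LOCKDOWN_NONE = 0
LOCKDOWN_DEV_MEM = 1              # example integrity-class restriction
LOCKDOWN_INTEGRITY_MAX = 10
LOCKDOWN_KCORE = 11               # example confidentiality-class restriction
LOCKDOWN_CONFIDENTIALITY_MAX = 20

EPERM = 1

def locked_down(what, level):
    """Deny (-EPERM style) when `what` is restricted at the current level."""
    return -EPERM if what <= level else 0
```

    With lockdown=integrity, an integrity-class feature is denied while a
    confidentiality-class one is still allowed; lockdown=confidentiality
    denies both.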

    New or existing LSMs may implement finer-grained controls of the
    lockdown features. Refer to the lockdown_reason documentation in
    include/linux/security.h for details.

    The lockdown feature has had significant design feedback and review
    across many subsystems. This code has been in linux-next for some
    weeks, with a few fixes applied along the way.

    Stephen Rothwell noted that commit 9d1f8be5cf42 ("bpf: Restrict bpf
    when kernel lockdown is in confidentiality mode") is missing a
    Signed-off-by from its author. Matthew responded that he is providing
    this under category (c) of the DCO"

    * 'next-lockdown' of git://git.kernel.org/pub/scm/linux/kernel/git/jmorris/linux-security: (31 commits)
    kexec: Fix file verification on S390
    security: constify some arrays in lockdown LSM
    lockdown: Print current->comm in restriction messages
    efi: Restrict efivar_ssdt_load when the kernel is locked down
    tracefs: Restrict tracefs when the kernel is locked down
    debugfs: Restrict debugfs when the kernel is locked down
    kexec: Allow kexec_file() with appropriate IMA policy when locked down
    lockdown: Lock down perf when in confidentiality mode
    bpf: Restrict bpf when kernel lockdown is in confidentiality mode
    lockdown: Lock down tracing and perf kprobes when in confidentiality mode
    lockdown: Lock down /proc/kcore
    x86/mmiotrace: Lock down the testmmiotrace module
    lockdown: Lock down module params that specify hardware parameters (eg. ioport)
    lockdown: Lock down TIOCSSERIAL
    lockdown: Prohibit PCMCIA CIS storage when the kernel is locked down
    acpi: Disable ACPI table override if the kernel is locked down
    acpi: Ignore acpi_rsdp kernel param when the kernel has been locked down
    ACPI: Limit access to custom_method when the kernel is locked down
    x86/msr: Restrict MSR access when the kernel is locked down
    x86: Lock down IO port access when the kernel is locked down
    ...

    Linus Torvalds
     
  • Pull integrity updates from Mimi Zohar:
    "The major feature this time is IMA support for measuring and
    appraising appended file signatures. In addition, there are a couple
    of bug fixes and some code cleanup to use struct_size().

    In addition to the PE/COFF and IMA xattr signatures, the kexec kernel
    image may be signed with an appended signature, using the same
    scripts/sign-file tool that is used to sign kernel modules.

    Similarly, the initramfs may contain an appended signature.

    This contained a lot of refactoring of the existing appended signature
    verification code, so that IMA could retain the existing framework of
    calculating the file hash once, storing it in the IMA measurement list
    and extending the TPM, verifying the file's integrity based on a file
    hash or signature (eg. xattrs), and adding an audit record containing
    the file hash, all based on policy. (The IMA support for appended
    signatures patch set was posted and reviewed 11 times.)
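
    The framing of a module-style appended signature can be sketched as
    follows. The magic marker string is the one scripts/sign-file uses;
    the rest of the framing here is simplified (the real format carries a
    small descriptor struct before the marker, omitted in this sketch).

```python
import struct

MAGIC = b"~Module signature appended~\n"   # marker emitted by scripts/sign-file

def append_sig(image: bytes, sig: bytes) -> bytes:
    # Simplified framing: signature blob, a big-endian length, then the
    # magic marker, all appended after the unmodified image.
    return image + sig + struct.pack(">I", len(sig)) + MAGIC

def split_sig(blob: bytes):
    # A verifier works backwards from the end of the file: check for the
    # magic, read the length, and peel the signature off to recover the
    # unsigned image that the file hash is calculated over.
    if not blob.endswith(MAGIC):
        return blob, None                   # no appended signature
    body = blob[:-len(MAGIC)]
    (sig_len,) = struct.unpack(">I", body[-4:])
    body = body[:-4]
    return body[:-sig_len], body[-sig_len:]
```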

    The support for appended signatures paves the way for adding other
    signature verification methods, such as fs-verity, based on a single
    system-wide policy. The file hash used for verifying the signature and
    the signature, itself, can be included in the IMA measurement list"

    * 'next-integrity' of git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity:
    ima: ima_api: Use struct_size() in kzalloc()
    ima: use struct_size() in kzalloc()
    sefltest/ima: support appended signatures (modsig)
    ima: Fix use after free in ima_read_modsig()
    MODSIGN: make new include file self contained
    ima: fix freeing ongoing ahash_request
    ima: always return negative code for error
    ima: Store the measurement again when appraising a modsig
    ima: Define ima-modsig template
    ima: Collect modsig
    ima: Implement support for module-style appended signatures
    ima: Factor xattr_verify() out of ima_appraise_measurement()
    ima: Add modsig appraise_type option for module-style appended signatures
    integrity: Select CONFIG_KEYS instead of depending on it
    PKCS#7: Introduce pkcs7_get_digest()
    PKCS#7: Refactor verify_pkcs7_signature()
    MODSIGN: Export module signature definitions
    ima: initialize the "template" field with the default template

    Linus Torvalds
     

22 Sep, 2019

1 commit

  • …device-mapper/linux-dm

    Pull device mapper updates from Mike Snitzer:

    - crypto and DM crypt advances that allow the crypto API to reclaim
    implementation details that do not belong in DM crypt. The wrapper
    template for ESSIV generation that was factored out will also be used
    by fscrypt in the future.

    - Add root hash pkcs#7 signature verification to the DM verity target.

    - Add a new "clone" DM target that allows for efficient remote
    replication of a device.

    - Enhance DM bufio's cache to be tailored to each client based on use.
    Clients that make heavy use of the cache get more of it, and those
    that use less have reduced cache usage.

    - Add a new DM_GET_TARGET_VERSION ioctl to allow userspace to query the
    version number of a DM target (even if the associated module isn't
    yet loaded).

    - Fix invalid memory access in DM zoned target.

    - Fix the max_discard_sectors limit advertised by the DM raid target;
    it was mistakenly storing the limit in bytes rather than sectors.

    - Small optimizations and cleanups in DM writecache target.

    - Various fixes and cleanups in DM core, DM raid1 and space map portion
    of DM persistent data library.

    * tag 'for-5.4/dm-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm: (22 commits)
    dm: introduce DM_GET_TARGET_VERSION
    dm bufio: introduce a global cache replacement
    dm bufio: remove old-style buffer cleanup
    dm bufio: introduce a global queue
    dm bufio: refactor adjust_total_allocated
    dm bufio: call adjust_total_allocated from __link_buffer and __unlink_buffer
    dm: add clone target
    dm raid: fix updating of max_discard_sectors limit
    dm writecache: skip writecache_wait for pmem mode
    dm stats: use struct_size() helper
    dm crypt: omit parsing of the encapsulated cipher
    dm crypt: switch to ESSIV crypto API template
    crypto: essiv - create wrapper template for ESSIV generation
    dm space map common: remove check for impossible sm_find_free() return value
    dm raid1: use struct_size() with kzalloc()
    dm writecache: optimize performance by sorting the blocks for writeback_all
    dm writecache: add unlikely for getting two block with same LBA
    dm writecache: remove unused member pointer in writeback_struct
    dm zoned: fix invalid memory access
    dm verity: add root hash pkcs#7 signature verification
    ...

    Linus Torvalds
     

13 Sep, 2019

4 commits

  • With pcrypt's cpumask no longer used, take the CPU hotplug lock inside
    padata_alloc_possible.

    Useful later in the series for avoiding nested acquisition of the CPU
    hotplug lock in padata when padata_alloc_possible is allocating an
    unbound workqueue.

    Without this patch, this nested acquisition would happen later in the
    series:

    pcrypt_init_padata
    get_online_cpus
    alloc_padata_possible
    alloc_padata
    alloc_workqueue(WQ_UNBOUND) // later in the series
    alloc_and_link_pwqs
    apply_wqattrs_lock
    get_online_cpus // recursive rwsem acquisition

    Signed-off-by: Daniel Jordan
    Acked-by: Steffen Klassert
    Cc: Herbert Xu
    Cc: Lai Jiangshan
    Cc: Peter Zijlstra
    Cc: Tejun Heo
    Cc: linux-crypto@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Herbert Xu

    Daniel Jordan
     
  • Now that padata_do_parallel takes care of finding an alternate callback
    CPU, there's no need for pcrypt's callback cpumask, so remove it and the
    notifier callback that keeps it in sync.

    Signed-off-by: Daniel Jordan
    Acked-by: Steffen Klassert
    Cc: Herbert Xu
    Cc: Lai Jiangshan
    Cc: Peter Zijlstra
    Cc: Tejun Heo
    Cc: linux-crypto@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Herbert Xu

    Daniel Jordan
     
  • padata_do_parallel currently returns -EINVAL if the callback CPU isn't
    in the callback cpumask.

    pcrypt tries to prevent this situation by keeping its own callback
    cpumask in sync with padata's and checking that the callback CPU it
    passes to padata is valid. Make padata handle this instead.

    padata_do_parallel now takes a pointer to the callback CPU and updates
    it for the caller if an alternate CPU is used. Overall behavior in
    terms of which callback CPUs are chosen stays the same.

    Prepares for removal of the padata cpumask notifier in pcrypt, which
    will fix a lockdep complaint about nested acquisition of the CPU hotplug
    lock later in the series.
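
    The behaviour described above amounts to the following, shown as a
    plain Python sketch (the deterministic min() fallback is illustrative;
    the real padata picks a valid CPU from the serial cpumask):

```python
def pick_callback_cpu(requested: int, cbcpumask: set) -> int:
    # If the caller's chosen callback CPU is usable, keep it; otherwise
    # substitute a CPU that is in the callback cpumask and report the
    # substitution back (padata_do_parallel does this through the
    # pointer argument it now takes).
    if requested in cbcpumask:
        return requested
    if not cbcpumask:
        raise ValueError("no callback CPUs available")
    return min(cbcpumask)   # illustrative fallback choice
```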

    Signed-off-by: Daniel Jordan
    Acked-by: Steffen Klassert
    Cc: Herbert Xu
    Cc: Lai Jiangshan
    Cc: Peter Zijlstra
    Cc: Tejun Heo
    Cc: linux-crypto@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Herbert Xu

    Daniel Jordan
     
  • Move workqueue allocation inside of padata to prepare for further
    changes to how padata uses workqueues.

    Guarantees the workqueue is created with max_active=1, which padata
    relies on to work correctly. No functional change.

    Signed-off-by: Daniel Jordan
    Acked-by: Steffen Klassert
    Cc: Herbert Xu
    Cc: Jonathan Corbet
    Cc: Lai Jiangshan
    Cc: Peter Zijlstra
    Cc: Tejun Heo
    Cc: linux-crypto@vger.kernel.org
    Cc: linux-doc@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Herbert Xu

    Daniel Jordan
     

09 Sep, 2019

1 commit

  • skcipher_walk_done may be called with an error by internal or
    external callers. For those internal callers we shouldn't unmap
    pages but for external callers we must unmap any pages that are
    in use.

    This patch distinguishes between the two cases by checking whether
    walk->nbytes is zero or not. For internal callers, we now set
    walk->nbytes to zero prior to the call. For external callers,
    walk->nbytes has always been non-zero (as zero is used to indicate
    the termination of a walk).
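
    The rule can be restated as a toy sketch (plain Python, with the walk
    modelled as a dict; field names are illustrative):

```python
def walk_done(walk: dict, err: int) -> int:
    # walk["nbytes"] == 0 marks an internal caller, which set nbytes to
    # zero before the call and has nothing mapped. A non-zero value
    # means an external caller whose in-use pages must still be
    # unmapped when an error is returned.
    if err < 0 and walk["nbytes"] != 0:
        walk["unmapped"] = True     # external caller: clean up mappings
    else:
        walk["unmapped"] = False    # internal caller: nothing to unmap
    return err
```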

    Reported-by: Ard Biesheuvel
    Fixes: 5cde0af2a982 ("[CRYPTO] cipher: Added block cipher type")
    Cc:
    Signed-off-by: Herbert Xu
    Tested-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Herbert Xu
     

05 Sep, 2019

1 commit


04 Sep, 2019

1 commit

  • Implement a template that wraps a (skcipher,shash) or (aead,shash) tuple
    so that we can consolidate the ESSIV handling in fscrypt and dm-crypt and
    move it into the crypto API. This will result in better test coverage, and
    will allow future changes to make the bare cipher interface internal to the
    crypto subsystem, in order to increase robustness of the API against misuse.
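
    The ESSIV construction being wrapped here is IV = E_{H(K)}(sector): the
    master key is hashed to derive the IV-generation key, and the sector
    number is the "plaintext". A minimal sketch, with the real block cipher
    (e.g. AES) replaced by a toy XOR stand-in purely to keep it runnable:

```python
import hashlib

BLOCK = 16

def toy_blockcipher(key: bytes, block: bytes) -> bytes:
    # Stand-in for the block cipher encryption step; a real ESSIV
    # template would encrypt `block` under `key` with e.g. AES.
    return bytes(b ^ k for b, k in zip(block, key))

def essiv_iv(master_key: bytes, sector: int) -> bytes:
    # ESSIV: hash the master key to get the IV-generation key, then
    # "encrypt" the little-endian sector number with it.
    k2 = hashlib.sha256(master_key).digest()[:BLOCK]
    return toy_blockcipher(k2, sector.to_bytes(BLOCK, "little"))
```

    The point of the construction is that the per-sector IV is
    unpredictable without the key, yet deterministic per sector.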

    Signed-off-by: Ard Biesheuvel
    Acked-by: Herbert Xu
    Tested-by: Milan Broz
    Signed-off-by: Mike Snitzer

    Ard Biesheuvel
     

30 Aug, 2019

3 commits

  • crypto/aegis.h:27:32: warning:
    crypto_aegis_const defined but not used [-Wunused-const-variable=]

    crypto_aegis_const is only used in aegis128-core.c,
    just move the definition over there.

    Reported-by: Hulk Robot
    Signed-off-by: YueHaibing
    Signed-off-by: Herbert Xu

    YueHaibing
     
  • Add a test vector for the ESSIV mode that is the most widely used,
    i.e., using cbc(aes) and sha256, in both skcipher and AEAD modes
    (the latter is used by tcrypt to encapsulate the authenc template
    or h/w instantiations of the same)

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • When building the new aegis128 NEON code in big endian mode, Clang
    complains about the const uint8x16_t permute vectors in the following
    way:

    crypto/aegis128-neon-inner.c:58:40: warning: vector initializers are not
    compatible with NEON intrinsics in big endian mode
    [-Wnonportable-vector-initialization]
    static const uint8x16_t shift_rows = {
    ^
    crypto/aegis128-neon-inner.c:58:40: note: consider using vld1q_u8() to
    initialize a vector from memory, or vcombine_u8(vcreate_u8(), vcreate_u8())
    to initialize from integer constants

    Since the same issue applies to the uint8x16x4_t loads of the AES Sbox,
    update those references as well. However, since GCC does not implement
    the vld1q_u8_x4() intrinsic, switch from IS_ENABLED() to a preprocessor
    conditional to conditionally include this code.

    Reported-by: Nathan Chancellor
    Signed-off-by: Ard Biesheuvel
    Tested-by: Nathan Chancellor
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     

22 Aug, 2019

7 commits

  • Drop the duplicate generic sha256 (and sha224) implementation from
    crypto/sha256_generic.c and use the implementation from
    lib/crypto/sha256.c instead.

    "diff -u lib/crypto/sha256.c sha256_generic.c" shows that the core
    sha256_transform function from both implementations is identical and
    the other code is functionally identical too.

    Suggested-by: Eric Biggers
    Signed-off-by: Hans de Goede
    Signed-off-by: Herbert Xu

    Hans de Goede
     
    Before this commit, lib/crypto/sha256.c was only used in the s390 and
    x86 purgatory code; make it suitable for generic use:

    * Export interesting symbols
    * Add -D__DISABLE_EXPORTS to CFLAGS_sha256.o for purgatory builds to
    avoid the exports for the purgatory builds
    * Add to lib/crypto/Makefile and crypto/Kconfig

    Signed-off-by: Hans de Goede
    Signed-off-by: Herbert Xu

    Hans de Goede
     
    Add a bunch of missing spaces after commas and around operators.

    Note the main goal of this is to make sha256_transform and its helpers
    identical in formatting to the duplicate implementation in lib/sha256.c,
    so that "diff -u" can be used to compare them and prove that no
    functional changes are made when further patches in this series
    consolidate the two implementations into one.

    Signed-off-by: Hans de Goede
    Signed-off-by: Herbert Xu

    Hans de Goede
     
  • Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • Another one for the cipher museum: split off DES core processing into
    a separate module so other drivers (mostly for crypto accelerators)
    can reuse the code without pulling in the generic DES cipher itself.
    This will also permit the cipher interface to be made private to the
    crypto API itself once we move the only user in the kernel (CIFS) to
    this library interface.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • In preparation of moving the shared key expansion routine into the
    DES library, move the verification done by __des3_ede_setkey() into
    its callers.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
    The recently added helper routine to perform key strength validation
    of triple DES keys is slightly inadequate, since it comes in two
    versions, neither of which is highly useful for anything other than
    skciphers (and many drivers still use the older blkcipher interfaces).

    So let's add a new helper and, considering that this is a helper function
    that is only intended to be used by crypto code itself, put it in a new
    des.h header under crypto/internal.

    While at it, implement a similar helper for single DES, so that we can
    start replacing the pattern of calling des_ekey() into a temp buffer
    that occurs in many drivers in drivers/crypto.
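
    The key strength rule being enforced can be sketched as follows: the
    three 8-byte 3DES subkeys must not degenerate the construction into
    single DES, which happens when K1 == K2 or K2 == K3 (the paired
    encrypt/decrypt passes cancel out). Function name and return
    convention here are illustrative, not the kernel helper's.

```python
def des3_key_ok(key: bytes) -> bool:
    # Reject keys that collapse 3DES (EDE) to single DES strength:
    # K1 == K2 or K2 == K3 makes two of the three passes cancel.
    if len(key) != 24:
        return False
    k1, k2, k3 = key[0:8], key[8:16], key[16:24]
    return k1 != k2 and k2 != k3
```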

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     

20 Aug, 2019

1 commit

  • This is a preparatory patch for kexec_file_load() lockdown. A locked down
    kernel needs to prevent unsigned kernel images from being loaded with
    kexec_file_load(). Currently, the only way to force the signature
    verification is compiling with KEXEC_VERIFY_SIG. This prevents loading
    unsigned images even when the kernel is not locked down at runtime.

    This patch splits KEXEC_VERIFY_SIG into KEXEC_SIG and KEXEC_SIG_FORCE.
    Analogous to the MODULE_SIG and MODULE_SIG_FORCE for modules, KEXEC_SIG
    turns on the signature verification but allows unsigned images to be
    loaded. KEXEC_SIG_FORCE disallows images without a valid signature.

    Signed-off-by: Jiri Bohac
    Signed-off-by: David Howells
    Signed-off-by: Matthew Garrett
    cc: kexec@lists.infradead.org
    Signed-off-by: James Morris

    Jiri Bohac
     

15 Aug, 2019

8 commits

  • Provide a version of the core AES transform to the aegis128 SIMD
    code that does not rely on the special AES instructions, but uses
    plain NEON instructions instead. This allows the SIMD version of
    the aegis128 driver to be used on arm64 systems that do not
    implement those instructions (which are not mandatory in the
    architecture), such as the Raspberry Pi 3.

    Since GCC makes a mess of this when using the tbl/tbx intrinsics
    to perform the sbox substitution, preload the Sbox into v16..v31
    in this case and use inline asm to emit the tbl/tbx instructions.
    Clang does not support this approach, nor does it require it, since
    it does a much better job at code generation, so there we use the
    intrinsics as usual.

    Cc: Nick Desaulniers
    Signed-off-by: Ard Biesheuvel
    Acked-by: Nick Desaulniers
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • Provide an accelerated implementation of aegis128 by wiring up the
    SIMD hooks in the generic driver to an implementation based on NEON
    intrinsics, which can be compiled to both ARM and arm64 code.

    This results in a performance of 2.2 cycles per byte on Cortex-A53,
    which is a performance increase of ~11x compared to the generic
    code.

    Reviewed-by: Ondrej Mosnacek
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • Add some plumbing to allow the AEGIS128 code to be built with SIMD
    routines for acceleration.

    Reviewed-by: Ondrej Mosnacek
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • Add support for the missing ciphertext stealing part of the XTS-AES
    specification, which permits inputs of any size >= the block size.
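
    The block-stealing shuffle itself can be sketched with a toy
    self-inverse "cipher" (real XTS also applies a per-block tweak,
    omitted here): encrypt all but the last partial block normally, then
    steal the tail of the penultimate ciphertext block to pad the final
    short block.

```python
BS = 16
KEY = bytes(range(BS))

def toy_enc(block: bytes) -> bytes:
    # Stand-in for the tweaked AES block operation; XOR is self-inverse,
    # so the same function serves as decryption in this sketch.
    return bytes(b ^ k for b, k in zip(block, KEY))

def cts_encrypt(pt: bytes) -> bytes:
    full, rem = divmod(len(pt), BS)
    if rem == 0:                      # aligned input: no stealing needed
        return b"".join(toy_enc(pt[i:i + BS]) for i in range(0, len(pt), BS))
    assert full >= 1, "input must be at least one block"
    head = b"".join(toy_enc(pt[i:i + BS]) for i in range(0, (full - 1) * BS, BS))
    cc = toy_enc(pt[(full - 1) * BS:full * BS])    # penultimate ciphertext
    tail = pt[full * BS:]
    # pad the short final block with the stolen end of cc
    return head + toy_enc(tail + cc[rem:]) + cc[:rem]

def cts_decrypt(ct: bytes) -> bytes:
    full, rem = divmod(len(ct), BS)
    if rem == 0:
        return b"".join(toy_enc(ct[i:i + BS]) for i in range(0, len(ct), BS))
    head = b"".join(toy_enc(ct[i:i + BS]) for i in range(0, (full - 1) * BS, BS))
    pad = toy_enc(ct[(full - 1) * BS:full * BS])   # tail || stolen bytes
    cc = ct[full * BS:] + pad[rem:]                # rebuild the full cc block
    return head + toy_enc(cc) + pad[:rem]
```

    Note the ciphertext is exactly as long as the plaintext, which is the
    whole point of stealing rather than padding.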

    Cc: Pascal van Leeuwen
    Cc: Ondrej Mosnacek
    Tested-by: Milan Broz
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
    Return -EINVAL on an attempt to set the authsize to 0 with an
    authentication algorithm with a non-zero digestsize (i.e. anything
    but digest_null), as authenticating the data and then throwing away
    the result does not make any sense at all.

    The digestsize zero exception is for use with digest_null for testing
    purposes only.

    Signed-off-by: Pascal van Leeuwen
    Signed-off-by: Herbert Xu

    Pascal van Leeuwen
     
  • crypto/streebog_generic.c:162:17: warning:
    Pi defined but not used [-Wunused-const-variable=]
    crypto/streebog_generic.c:151:17: warning:
    Tau defined but not used [-Wunused-const-variable=]

    They are never used, so can be removed.

    Reported-by: Hulk Robot
    Signed-off-by: YueHaibing
    Reviewed-by: Vitaly Chikunov
    Signed-off-by: Herbert Xu

    YueHaibing
     
  • crypto/aes_generic.c:64:18: warning:
    rco_tab defined but not used [-Wunused-const-variable=]

    It is never used, so can be removed.

    Reported-by: Hulk Robot
    Signed-off-by: YueHaibing
    Acked-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    YueHaibing
     
    Reference counters should use refcount_t rather than atomic_t,
    because the refcount_t implementation can prevent overflows and
    detect possible use-after-free bugs.
    So convert the atomic_t reference counters to refcount_t.
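
    The two failure modes refcount_t guards against can be sketched in a
    toy model (class and names are illustrative, not the kernel API):
    increments saturate instead of wrapping to zero, and a drop below
    zero (a likely use-after-free) is reported rather than going silently
    negative.

```python
SATURATED = 2**31     # illustrative saturation point

class Refcount:
    def __init__(self, n=1):
        self.v = n
    def inc(self):
        if self.v >= SATURATED:
            self.v = SATURATED   # saturate: object leaks instead of double-freeing
        else:
            self.v += 1
    def dec_and_test(self) -> bool:
        if self.v == 0:
            raise RuntimeError("refcount underflow (possible use-after-free)")
        self.v -= 1
        return self.v == 0       # True: last reference dropped, free the object
```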

    Signed-off-by: Chuhong Yuan
    Signed-off-by: Herbert Xu

    Chuhong Yuan
     

09 Aug, 2019

3 commits

    Based on seqiv, IPsec ESP and rfc4543/rfc4106, the assoclen can be 16
    or 20 bytes.

    In esp4/esp6, assoclen is the size of the IP header. This includes
    the spi, seq_no and extended seq_no, that is 8 or 12 bytes.
    In seqiv, the IV size (8 bytes) is added to the assoclen.
    Therefore the assoclen for rfc4543 should be restricted to 16 or 20
    bytes, as it is for rfc4106.
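
    The arithmetic behind the two permitted values is just the ESP header
    fields plus seqiv's IV (field sizes per the description above; the
    helper name is illustrative):

```python
SPI = 4          # bytes: Security Parameters Index
SEQ_NO = 4       # bytes: sequence number
ESN_HIGH = 4     # bytes: high half of the extended sequence number
SEQIV_IV = 8     # bytes: IV that seqiv folds into the assoc data

def rfc4106_assoclen(extended_seq: bool) -> int:
    # ESP header portion (8 or 12 bytes) plus the 8-byte seqiv IV.
    esp_hdr = SPI + SEQ_NO + (ESN_HIGH if extended_seq else 0)
    return esp_hdr + SEQIV_IV
```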

    Signed-off-by: Iuliana Prodan
    Reviewed-by: Horia Geanta
    Signed-off-by: Herbert Xu

    Iuliana Prodan
     
    The crypto engine initializes its kworker thread to FIFO-99 (when
    requesting RT priority); reduce this to FIFO-50.

    FIFO-99 is the very highest priority available to SCHED_FIFO and
    is not a suitable default; it would indicate the crypto work is the
    most important work on the machine.

    Cc: Herbert Xu
    Cc: "David S. Miller"
    Cc: linux-crypto@vger.kernel.org
    Cc: linux-kernel@vger.kernel.org
    Cc: Thomas Gleixner
    Signed-off-by: Peter Zijlstra (Intel)
    Signed-off-by: Herbert Xu

    Peter Zijlstra
     
  • Added inline helper functions to check authsize and assoclen for
    gcm, rfc4106 and rfc4543.
    These are used in the generic implementation of gcm, rfc4106 and
    rfc4543.

    Signed-off-by: Iuliana Prodan
    Signed-off-by: Herbert Xu

    Iuliana Prodan
     

06 Aug, 2019

1 commit

  • IMA will need to access the digest of the PKCS7 message (as calculated by
    the kernel) before the signature is verified, so introduce
    pkcs7_get_digest() for that purpose.

    Also, modify pkcs7_digest() to detect when the digest was already
    calculated so that it doesn't have to do redundant work. Verifying that
    sinfo->sig->digest isn't NULL is sufficient because both places which
    allocate sinfo->sig (pkcs7_parse_message() and pkcs7_note_signed_info())
    use kzalloc() so sig->digest is always initialized to zero.
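
    The compute-once pattern described above boils down to the following
    sketch (Python stand-in with sha256; the sinfo dict and None sentinel
    model the kzalloc-zeroed sig->digest pointer):

```python
import hashlib

def pkcs7_digest(sinfo: dict, data: bytes) -> bytes:
    # sinfo["digest"] starts out None (the kernel's struct is
    # zero-allocated), so a non-None value means the digest was already
    # calculated and the redundant work can be skipped.
    if sinfo.get("digest") is None:
        sinfo["digest"] = hashlib.sha256(data).digest()
    return sinfo["digest"]
```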

    Signed-off-by: Thiago Jung Bauermann
    Reviewed-by: Mimi Zohar
    Cc: David Howells
    Cc: David Woodhouse
    Cc: Herbert Xu
    Cc: "David S. Miller"
    Signed-off-by: Mimi Zohar

    Thiago Jung Bauermann
     

02 Aug, 2019

2 commits

    Recent clang-9 snapshots double the kernel stack usage when building
    this file with -O0 -fsanitize=kernel-hwaddress compared to clang-8
    and older snapshots; this changed between commits svn364966 and
    svn366056:

    crypto/jitterentropy.c:516:5: error: stack frame size of 2640 bytes in function 'jent_entropy_init' [-Werror,-Wframe-larger-than=]
    int jent_entropy_init(void)
    ^
    crypto/jitterentropy.c:185:14: error: stack frame size of 2224 bytes in function 'jent_lfsr_time' [-Werror,-Wframe-larger-than=]
    static __u64 jent_lfsr_time(struct rand_data *ec, __u64 time, __u64 loop_cnt)
    ^

    I prepared a reduced test case in case any clang developers want to
    take a closer look, but from looking at the earlier output it seems
    that even with clang-8, something was very wrong here.

    Turn off any KASAN and UBSAN sanitizing for this file, as that likely
    clashes with -O0 anyway. Turning off just KASAN avoids the warning
    already, but I suspect both of these have undesired side-effects
    for jitterentropy.

    Link: https://godbolt.org/z/fDcwZ5
    Signed-off-by: Arnd Bergmann
    Signed-off-by: Herbert Xu

    Arnd Bergmann
     
  • This reverts commit ecc8bc81f2fb3976737ef312f824ba6053aa3590
    ("crypto: aegis128 - provide a SIMD implementation based on NEON
    intrinsics") and commit 7cdc0ddbf74a19cecb2f0e9efa2cae9d3c665189
    ("crypto: aegis128 - add support for SIMD acceleration").

    They cause compile errors on platforms other than ARM because
    the mechanism to selectively compile the SIMD code is broken.

    Reported-by: Heiko Carstens
    Reported-by: Stephen Rothwell
    Signed-off-by: Herbert Xu

    Herbert Xu
     

27 Jul, 2019

2 commits

  • To help avoid confusion, add a comment to ghash-generic.c which explains
    the convention that the kernel's implementation of GHASH uses.

    Also update the Kconfig help text and module descriptions to call GHASH
    a "hash function" rather than a "message digest", since the latter
    normally means a real cryptographic hash function, which GHASH is not.
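
    Why GHASH is not a real cryptographic hash can be shown directly: for
    a fixed key H it is a linear function of the message, so XORing two
    equal-length inputs XORs their outputs, making collisions trivial to
    construct. A minimal sketch using GCM's bit-reflected field
    convention:

```python
R = 0xE1000000000000000000000000000000   # GCM reduction constant (bit-reflected)

def gf_mul(x: int, y: int) -> int:
    # Shift-and-reduce multiplication in GF(2^128), GCM bit ordering.
    z, v = 0, x
    for i in range(127, -1, -1):
        if (y >> i) & 1:
            z ^= v
        v = (v >> 1) ^ (R if v & 1 else 0)
    return z

def ghash(h: int, blocks: list) -> int:
    # GHASH is keyed polynomial evaluation: y_i = (y_{i-1} ^ x_i) * H.
    # Linearity over XOR is exactly what a message digest must not have.
    y = 0
    for x in blocks:
        y = gf_mul(y ^ x, h)
    return y
```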

    Cc: Pascal Van Leeuwen
    Signed-off-by: Eric Biggers
    Reviewed-by: Ard Biesheuvel
    Acked-by: Pascal Van Leeuwen
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Clang sometimes makes very different inlining decisions from gcc.
    In case of the aegis crypto algorithms, it decides to turn the innermost
    primitives (and, xor, ...) into separate functions but inline most of
    the rest.

    This results in a huge amount of variables spilled on the stack, leading
    to rather slow execution as well as kernel stack usage beyond the 32-bit
    warning limit when CONFIG_KASAN is enabled:

    crypto/aegis256.c:123:13: warning: stack frame size of 648 bytes in function 'crypto_aegis256_encrypt_chunk' [-Wframe-larger-than=]
    crypto/aegis256.c:366:13: warning: stack frame size of 1264 bytes in function 'crypto_aegis256_crypt' [-Wframe-larger-than=]
    crypto/aegis256.c:187:13: warning: stack frame size of 656 bytes in function 'crypto_aegis256_decrypt_chunk' [-Wframe-larger-than=]
    crypto/aegis128l.c:135:13: warning: stack frame size of 832 bytes in function 'crypto_aegis128l_encrypt_chunk' [-Wframe-larger-than=]
    crypto/aegis128l.c:415:13: warning: stack frame size of 1480 bytes in function 'crypto_aegis128l_crypt' [-Wframe-larger-than=]
    crypto/aegis128l.c:218:13: warning: stack frame size of 848 bytes in function 'crypto_aegis128l_decrypt_chunk' [-Wframe-larger-than=]
    crypto/aegis128.c:116:13: warning: stack frame size of 584 bytes in function 'crypto_aegis128_encrypt_chunk' [-Wframe-larger-than=]
    crypto/aegis128.c:351:13: warning: stack frame size of 1064 bytes in function 'crypto_aegis128_crypt' [-Wframe-larger-than=]
    crypto/aegis128.c:177:13: warning: stack frame size of 592 bytes in function 'crypto_aegis128_decrypt_chunk' [-Wframe-larger-than=]

    Forcing the primitives to all get inlined avoids the issue and the
    resulting code is similar to what gcc produces.

    Signed-off-by: Arnd Bergmann
    Acked-by: Nick Desaulniers
    Signed-off-by: Herbert Xu

    Arnd Bergmann
     

26 Jul, 2019

3 commits