25 Apr, 2019

1 commit

  • Fix an undefined symbol issue in the ecrdsa_generic module when ASN1
    or OID_REGISTRY is not enabled in the config, by selecting these
    options for CRYPTO_ECRDSA.

    ERROR: "asn1_ber_decoder" [crypto/ecrdsa_generic.ko] undefined!
    ERROR: "look_up_OID" [crypto/ecrdsa_generic.ko] undefined!

    Reported-by: Randy Dunlap
    Cc: Stephen Rothwell
    Signed-off-by: Vitaly Chikunov
    Acked-by: Randy Dunlap # build-tested
    Signed-off-by: Herbert Xu

    Vitaly Chikunov
     

18 Apr, 2019

3 commits

  • Add the Elliptic Curve Russian Digital Signature Algorithm (EC-RDSA;
    GOST R 34.10-2012, RFC 7091, ISO/IEC 14888-3), one of the Russian (and,
    since 2018, CIS) cryptographic standard algorithms (collectively called
    GOST algorithms). Only signature verification is supported, with the
    intent that it be used in IMA.

    Summary of the changes:

    * crypto/Kconfig:
    - EC-RDSA is added into Public-key cryptography section.

    * crypto/Makefile:
    - ecrdsa objects are added.

    * crypto/asymmetric_keys/x509_cert_parser.c:
    - Recognize EC-RDSA and Streebog OIDs.

    * include/linux/oid_registry.h:
    - EC-RDSA OIDs are added to the enum. Also, two currently
    unimplemented curve OIDs are added for possible later extension (to
    avoid changing the numbering and grouping).

    * crypto/ecc.c:
    - Kenneth MacKay copyright date is updated to 2014, because
    vli_mmod_slow, ecc_point_add, and ecc_point_mult_shamir are based on
    his code from micro-ecc.
    - Functions needed for ecrdsa are EXPORT_SYMBOL'ed.
    - New functions:
    vli_is_negative - helper to determine the sign of a vli;
    vli_from_be64 - unpack a big-endian array into a vli (used for
    a signature);
    vli_from_le64 - unpack a little-endian array into a vli (used for
    a public key);
    vli_uadd, vli_usub - add/subtract a u64 value to/from a vli (used
    for increment/decrement);
    mul_64_64 - optimized to use __int128 where appropriate; this speeds
    up point multiplication (and, as a consequence, signature
    verification) by a factor of 1.5-2;
    vli_umult - multiply a vli by a small value (speeds up point
    multiplication by another factor of 1.5-2, depending on vli sizes);
    vli_mmod_special - modular reduction for one form of pseudo-Mersenne
    primes (used for the A curves);
    vli_mmod_special2 - modular reduction for another form of
    pseudo-Mersenne primes (used for the B curves);
    vli_mmod_barrett - modular reduction using a pre-computed value
    (used for the C curve);
    vli_mmod_slow - more general modular reduction, which is much slower
    (used when the modulus is the subgroup order);
    vli_mod_mult_slow - modular multiplication;
    ecc_point_add - add two points;
    ecc_point_mult_shamir - multiply two points by scalars and add them
    in one combined multiplication (this gives a speedup by another
    factor of 2 compared to two separate multiplications; a toy
    illustration appears after this change list);
    ecc_is_pubkey_valid_partial - an additional sanity check is added.
    - Updated vli_mmod_fast with a non-strict heuristic to call the
    optimal modular reduction function depending on the prime value;
    - All computations for the previously defined (two NIST) curves
    should be unaffected.

    * crypto/ecc.h:
    - Newly exported functions are documented.

    * crypto/ecrdsa_defs.h:
    - Five curves are defined.

    * crypto/ecrdsa.c:
    - Signature verification is implemented.

    * crypto/ecrdsa_params.asn1, crypto/ecrdsa_pub_key.asn1:
    - Templates for BER decoder for EC-RDSA parameters and public key.
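
    To make the ecc_point_mult_shamir speedup concrete, here is a toy
    illustration of Shamir's trick in a multiplicative group, where
    exponentiation plays the role of scalar multiplication. This is not
    the kernel's ecc.c code; all names and values are invented for the
    example. One shared square-and-multiply pass with a precomputed
    g1*g2 replaces two separate exponentiations:

      #include <stdint.h>
      #include <stdio.h>

      /* Safe for small p: operands stay well below 2^32. */
      static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t p)
      {
          return (a * b) % p;
      }

      /* Compute g1^a * g2^b mod p in one combined pass. */
      static uint64_t shamir_expmod(uint64_t g1, uint64_t a,
                                    uint64_t g2, uint64_t b, uint64_t p)
      {
          uint64_t g12 = mulmod(g1, g2, p); /* precomputed "P + Q" analogue */
          uint64_t acc = 1;
          int i;

          for (i = 63; i >= 0; i--) {
              acc = mulmod(acc, acc, p);    /* the "point double" step */
              if (((a >> i) & 1) && ((b >> i) & 1))
                  acc = mulmod(acc, g12, p);
              else if ((a >> i) & 1)
                  acc = mulmod(acc, g1, p);
              else if ((b >> i) & 1)
                  acc = mulmod(acc, g2, p);
          }
          return acc;
      }

      int main(void)
      {
          /* Matches the result of two separate exponentiations. */
          printf("%llu\n",
                 (unsigned long long)shamir_expmod(2, 12345, 3, 67890, 1000003));
          return 0;
      }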

    Cc: linux-integrity@vger.kernel.org
    Signed-off-by: Vitaly Chikunov
    Signed-off-by: Herbert Xu

    Vitaly Chikunov
     
  • ecc.c has algorithms that could be used by both ecdh and ecrdsa.
    Make it a separate module. Add CRYPTO_ECC to Kconfig. EXPORT_SYMBOL
    and document what seems appropriate. Move the structs ecc_point and
    ecc_curve from ecc_curve_defs.h into ecc.h.

    No code changes.
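
    For reference, the moved structs are approximately the following
    (a sketch from memory, so field details may differ slightly from
    the actual ecc.h):

      /* A point on the curve; coordinates are little-endian vli arrays
       * of ndigits 64-bit digits. */
      struct ecc_point {
          u64 *x;
          u64 *y;
          u8 ndigits;
      };

      /* Curve parameters: generator g, prime p, subgroup order n, and
       * the coefficients of the curve equation. */
      struct ecc_curve {
          char *name;
          struct ecc_point g;
          u64 *p;
          u64 *n;
          u64 *a;
          u64 *b;
      };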

    Signed-off-by: Vitaly Chikunov
    Signed-off-by: Herbert Xu

    Vitaly Chikunov
     
  • Group RSA, DH, and ECDH into the Public-key cryptography config section.

    Signed-off-by: Vitaly Chikunov
    Signed-off-by: Herbert Xu

    Vitaly Chikunov
     

08 Feb, 2019

1 commit

  • To achieve more comprehensive crypto test coverage, I'd like to add fuzz
    tests that use random data layouts and request flags.

    To be most effective these tests should be part of testmgr, so they
    automatically run on every algorithm registered with the crypto API.
    However, they will take much longer to run than the current tests and
    therefore will only really be intended to be run by developers, whereas
    the current tests have a wider audience.

    Therefore, add a new kconfig option CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
    that can be set by developers to enable these extra, expensive tests.

    Similar to the regular tests, also add a module parameter
    cryptomgr.noextratests to support disabling the tests.

    Finally, another module parameter cryptomgr.fuzz_iterations is added to
    control how many iterations the fuzz tests do. Note: for now setting
    this to 0 will be equivalent to cryptomgr.noextratests=1. But I opted
    for separate parameters to provide more flexibility to add other types
    of tests under the "extra tests" category in the future.
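
    A minimal sketch of how such knobs are typically declared in
    testmgr.c (defaults and permission bits here are illustrative, not
    necessarily the patch's exact code):

      static bool noextratests;
      module_param(noextratests, bool, 0644);
      MODULE_PARM_DESC(noextratests, "disable expensive crypto self-tests");

      static unsigned int fuzz_iterations = 100;
      module_param(fuzz_iterations, uint, 0644);
      MODULE_PARM_DESC(fuzz_iterations, "number of fuzz test iterations");

    Because testmgr is part of the cryptomgr module, these are set on
    the kernel command line as cryptomgr.noextratests=1 and
    cryptomgr.fuzz_iterations=N.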

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

30 Dec, 2018

1 commit

  • Pull Kconfig updates from Masahiro Yamada:

    - support -y option for merge_config.sh to avoid downgrading =y to =m

    - remove S_OTHER symbol type, and touch include/config/*.h files correctly

    - fix file name and line number in lexer warnings

    - fix memory leak when EOF is encountered in quotation

    - resolve all shift/reduce conflicts of the parser

    - warn about a missing newline at the end of a file

    - make the 'source' statement more strict so it takes only a string literal

    - rewrite the lexer and remove the keyword lookup table

    - convert to SPDX License Identifier

    - compile C files independently instead of including them from zconf.y

    - fix various warnings of gconfig

    - misc cleanups

    * tag 'kconfig-v4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: (39 commits)
    kconfig: surround dbg_sym_flags with #ifdef DEBUG to fix gconf warning
    kconfig: split images.c out of qconf.cc/gconf.c to fix gconf warnings
    kconfig: add static qualifiers to fix gconf warnings
    kconfig: split the lexer out of zconf.y
    kconfig: split some C files out of zconf.y
    kconfig: convert to SPDX License Identifier
    kconfig: remove keyword lookup table entirely
    kconfig: update current_pos in the second lexer
    kconfig: switch to ASSIGN_VAL state in the second lexer
    kconfig: stop associating kconf_id with yylval
    kconfig: refactor end token rules
    kconfig: stop supporting '.' and '/' in unquoted words
    treewide: surround Kconfig file paths with double quotes
    microblaze: surround string default in Kconfig with double quotes
    kconfig: use T_WORD instead of T_VARIABLE for variables
    kconfig: use specific tokens instead of T_ASSIGN for assignments
    kconfig: refactor scanning and parsing "option" properties
    kconfig: use distinct tokens for type and default properties
    kconfig: remove redundant token defines
    kconfig: rename depends_list to comment_option_list
    ...

    Linus Torvalds
     

21 Dec, 2018

1 commit

  • The Kconfig lexer supports special characters such as '.' and '/' in
    the parameter context. In my understanding, the reason is just to
    support bare file paths in the source statement.

    I do not see a good reason to complicate Kconfig just to leave room
    for this ambiguity.

    The majority of code already surrounds file paths with double quotes,
    and it makes sense since file paths are constant string literals.

    Make it treewide consistent now.

    Signed-off-by: Masahiro Yamada
    Acked-by: Wolfram Sang
    Acked-by: Geert Uytterhoeven
    Acked-by: Ingo Molnar

    Masahiro Yamada
     

13 Dec, 2018

4 commits

  • Now that the x86_64 SIMD implementations of ChaCha20 and XChaCha20 have
    been refactored to support varying the number of rounds, add support for
    XChaCha12. This is identical to XChaCha20 except for the number of
    rounds, which is 12 instead of 20. This can be used by Adiantum.
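
    For readers unfamiliar with the round parameterization: the entire
    difference between the variants is the round count passed to the
    permutation. A self-contained sketch (illustrative names, not the
    kernel's exact code):

      #include <stdint.h>

      #define ROTL32(v, n) (((v) << (n)) | ((v) >> (32 - (n))))

      static void quarter_round(uint32_t x[16], int a, int b, int c, int d)
      {
          x[a] += x[b]; x[d] = ROTL32(x[d] ^ x[a], 16);
          x[c] += x[d]; x[b] = ROTL32(x[b] ^ x[c], 12);
          x[a] += x[b]; x[d] = ROTL32(x[d] ^ x[a], 8);
          x[c] += x[d]; x[b] = ROTL32(x[b] ^ x[c], 7);
      }

      /* nrounds = 20 for (X)ChaCha20, 12 for (X)ChaCha12. */
      static void chacha_permute(uint32_t x[16], int nrounds)
      {
          int i;

          for (i = 0; i < nrounds; i += 2) {
              /* column round */
              quarter_round(x, 0, 4,  8, 12);
              quarter_round(x, 1, 5,  9, 13);
              quarter_round(x, 2, 6, 10, 14);
              quarter_round(x, 3, 7, 11, 15);
              /* diagonal round */
              quarter_round(x, 0, 5, 10, 15);
              quarter_round(x, 1, 6, 11, 12);
              quarter_round(x, 2, 7,  8, 13);
              quarter_round(x, 3, 4,  9, 14);
          }
      }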

    Reviewed-by: Martin Willi
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Add an XChaCha20 implementation that is hooked up to the x86_64 SIMD
    implementations of ChaCha20. This can be used by Adiantum.

    An SSSE3 implementation of single-block HChaCha20 is also added so that
    XChaCha20 can use it rather than the generic implementation. This
    required refactoring the ChaCha permutation into its own function.
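
    HChaCha20 itself is small. A sketch building on the chacha_permute
    example earlier in this log (again illustrative, and glossing over
    endianness): run the permutation on the usual initial state, then
    take words 0-3 and 12-15 as the subkey, with no feed-forward
    addition:

      static void hchacha20(const uint32_t key[8], const uint32_t nonce[4],
                            uint32_t subkey[8])
      {
          uint32_t x[16] = {
              /* "expand 32-byte k" constants */
              0x61707865, 0x3320646e, 0x79622d32, 0x6b206574,
          };
          int i;

          for (i = 0; i < 8; i++)
              x[4 + i] = key[i];
          for (i = 0; i < 4; i++)
              x[12 + i] = nonce[i];

          chacha_permute(x, 20);

          for (i = 0; i < 4; i++) {
              subkey[i] = x[i];
              subkey[4 + i] = x[12 + i];
          }
      }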

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Add a 64-bit AVX2 implementation of NHPoly1305, an ε-almost-∆-universal
    hash function used in the Adiantum encryption mode. For now, only the
    NH portion is actually AVX2-accelerated; the Poly1305 part is less
    performance-critical so is just implemented in C.
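
    The NH core is what vectorizes so well: it is just 32-bit additions
    feeding 64-bit multiply-accumulates. A sketch of basic NH follows;
    Adiantum's variant differs in its parameters (for instance, it makes
    multiple passes with shifted key offsets), so this only shows the
    core idea:

      #include <stdint.h>

      static uint64_t nh_basic(const uint32_t *m, const uint32_t *k,
                               int nwords) /* nwords must be even */
      {
          uint64_t sum = 0;
          int i;

          for (i = 0; i < nwords; i += 2) {
              uint32_t a = m[i] + k[i];         /* additions mod 2^32 */
              uint32_t b = m[i + 1] + k[i + 1];

              sum += (uint64_t)a * b;           /* accumulate mod 2^64 */
          }
          return sum;
      }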

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Add a 64-bit SSE2 implementation of NHPoly1305, an ε-almost-∆-universal
    hash function used in the Adiantum encryption mode. For now, only the
    NH portion is actually SSE2-accelerated; the Poly1305 part is less
    performance-critical so is just implemented in C.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

20 Nov, 2018

4 commits

  • Add support for the Adiantum encryption mode. Adiantum was designed by
    Paul Crowley and is specified by our paper:

    Adiantum: length-preserving encryption for entry-level processors
    (https://eprint.iacr.org/2018/720.pdf)

    See our paper for full details; this patch only provides an overview.

    Adiantum is a tweakable, length-preserving encryption mode designed for
    fast and secure disk encryption, especially on CPUs without dedicated
    crypto instructions. Adiantum encrypts each sector using the XChaCha12
    stream cipher, two passes of an ε-almost-∆-universal (εA∆U) hash
    function, and an invocation of the AES-256 block cipher on a single
    16-byte block. On CPUs without AES instructions, Adiantum is much
    faster than AES-XTS; for example, on ARM Cortex-A7, on 4096-byte sectors
    Adiantum encryption is about 4 times faster than AES-256-XTS encryption,
    and decryption about 5 times faster.
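
    To make the data flow concrete, here is a toy sketch of the
    underlying HBSH composition (hash, block cipher, stream cipher,
    hash). The three primitives below are deliberately trivial
    placeholders, NOT NHPoly1305, AES-256, or XChaCha12, and the code
    ignores endianness; only the structure is meaningful:

      #include <stdint.h>
      #include <string.h>

      typedef struct { uint64_t lo, hi; } u128;   /* 16-byte block */

      static u128 add128(u128 a, u128 b)          /* addition mod 2^128 */
      {
          u128 r = { a.lo + b.lo, a.hi + b.hi + (a.lo + b.lo < a.lo) };
          return r;
      }

      static u128 sub128(u128 a, u128 b)
      {
          u128 r = { a.lo - b.lo, a.hi - b.hi - (a.lo < b.lo) };
          return r;
      }

      /* Placeholder for the keyed eps-almost-delta-universal hash H(T, .) */
      static u128 toy_hash(const uint8_t *t, size_t tlen,
                           const uint8_t *m, size_t mlen)
      {
          u128 h = { 0x243f6a8885a308d3ULL, 0x13198a2e03707344ULL };
          size_t i;

          for (i = 0; i < tlen; i++)
              h.lo = h.lo * 1099511628211ULL + t[i];
          for (i = 0; i < mlen; i++)
              h.hi = h.hi * 1099511628211ULL + m[i];
          return h;
      }

      /* Placeholder for the 16-byte block cipher E. */
      static u128 toy_blockcipher(u128 x)
      {
          u128 r = { x.hi ^ 0x0123456789abcdefULL,
                     x.lo ^ 0xfedcba9876543210ULL };
          return r;
      }

      /* Placeholder for the stream cipher S, nonce'd by the middle block. */
      static void toy_stream_xor(u128 nonce, uint8_t *buf, size_t len)
      {
          uint64_t s = nonce.lo ^ nonce.hi;
          size_t i;

          for (i = 0; i < len; i++) {
              s = s * 6364136223846793005ULL + 1442695040888963407ULL;
              buf[i] ^= (uint8_t)(s >> 56);
          }
      }

      /* HBSH encrypt, in place; buf = P_L || P_R, P_R the last 16 bytes:
       *   P_M = P_R + H(T, P_L);   C_M = E(P_M);
       *   C_L = P_L ^ S(C_M);      C_R = C_M - H(T, C_L).
       * Decryption runs the same four steps in reverse. */
      static void hbsh_encrypt(const uint8_t *tweak, size_t tlen,
                               uint8_t *buf, size_t len) /* len >= 16 */
      {
          size_t llen = len - 16;
          u128 pr, pm, cm, cr;

          memcpy(&pr, buf + llen, 16);
          pm = add128(pr, toy_hash(tweak, tlen, buf, llen));
          cm = toy_blockcipher(pm);
          toy_stream_xor(cm, buf, llen);            /* C_L */
          cr = sub128(cm, toy_hash(tweak, tlen, buf, llen));
          memcpy(buf + llen, &cr, 16);
      }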

    Adiantum is a specialization of the more general HBSH construction. Our
    earlier proposal, HPolyC, was also an HBSH specialization, but it used a
    different εA∆U hash function, one based on Poly1305 only. Adiantum's
    εA∆U hash function, which is based primarily on the "NH" hash function
    like that used in UMAC (RFC4418), is about twice as fast as HPolyC's;
    consequently, Adiantum is about 20% faster than HPolyC.

    This speed comes with no loss of security: Adiantum is provably just as
    secure as HPolyC, in fact slightly *more* secure. Like HPolyC,
    Adiantum's security is reducible to that of XChaCha12 and AES-256,
    subject to a security bound. XChaCha12 itself has a security reduction
    to ChaCha12. Therefore, one need not "trust" Adiantum; one need only
    trust ChaCha12 and AES-256. Note that the εA∆U hash function is only
    used for its proven combinatorial properties, so it cannot be "broken".

    Adiantum is also a true wide-block encryption mode, so flipping any
    plaintext bit in the sector scrambles the entire ciphertext, and vice
    versa. No other such mode is available in the kernel currently; doing
    the same with XTS scrambles only 16 bytes. Adiantum also supports
    arbitrary-length tweaks and naturally supports any length input >= 16
    bytes without needing "ciphertext stealing".

    For the stream cipher, Adiantum uses XChaCha12 rather than XChaCha20 in
    order to make encryption feasible on the widest range of devices.
    Although the 20-round variant is quite popular, the best known attacks
    on ChaCha are on only 7 rounds, so ChaCha12 still has a substantial
    security margin; in fact, larger than AES-256's. 12-round Salsa20 is
    also the eSTREAM recommendation. For the block cipher, Adiantum uses
    AES-256, despite it having a lower security margin than XChaCha12 and
    needing table lookups, due to AES's extensive adoption and analysis
    making it the obvious first choice. Nevertheless, for flexibility this
    patch also permits the "adiantum" template to be instantiated with
    XChaCha20 and/or with an alternate block cipher.

    We need Adiantum support in the kernel for use in dm-crypt and fscrypt,
    where currently the only other suitable options are block cipher modes
    such as AES-XTS. A big problem with this is that many low-end mobile
    devices (e.g. Android Go phones sold primarily in developing countries,
    as well as some smartwatches) still have CPUs that lack AES
    instructions, e.g. ARM Cortex-A7. Sadly, AES-XTS encryption is much too
    slow to be viable on these devices. We did find that some "lightweight"
    block ciphers are fast enough, but these suffer from problems such as
    not having much cryptanalysis or being too controversial.

    The ChaCha stream cipher has excellent performance but is insecure to
    use directly for disk encryption, since each sector's IV is reused each
    time it is overwritten. Even restricting the threat model to offline
    attacks only isn't enough, since modern flash storage devices don't
    guarantee that "overwrites" are really overwrites, due to wear-leveling.
    Adiantum avoids this problem by constructing a
    "tweakable super-pseudorandom permutation"; this is the strongest
    possible security model for length-preserving encryption.

    Of course, storing random nonces along with the ciphertext would be the
    ideal solution. But doing that with existing hardware and filesystems
    runs into major practical problems; in most cases it would require data
    journaling (like dm-integrity) which severely degrades performance.
    Thus, for now length-preserving encryption is still needed.

    Signed-off-by: Eric Biggers
    Reviewed-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Add a generic implementation of NHPoly1305, an ε-almost-∆-universal hash
    function used in the Adiantum encryption mode.

    CONFIG_NHPOLY1305 is not selectable by itself since there won't be any
    real reason to enable it without also enabling Adiantum support.

    Signed-off-by: Eric Biggers
    Acked-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Now that the generic implementation of ChaCha20 has been refactored to
    allow varying the number of rounds, add support for XChaCha12, which is
    the XSalsa construction applied to ChaCha12. ChaCha12 is one of the
    three ciphers specified by the original ChaCha paper
    (https://cr.yp.to/chacha/chacha-20080128.pdf: "ChaCha, a variant of
    Salsa20"), alongside ChaCha8 and ChaCha20. ChaCha12 is faster than
    ChaCha20 but has a lower, but still large, security margin.

    We need XChaCha12 support so that it can be used in the Adiantum
    encryption mode, which enables disk/file encryption on low-end mobile
    devices where AES-XTS is too slow as the CPUs lack AES instructions.

    We'd prefer XChaCha20 (the more popular variant), but it's too slow on
    some of our target devices, so at least in some cases we do need the
    XChaCha12-based version. In more detail, the problem is that Adiantum
    is still much slower than we're happy with, and encryption still has a
    quite noticeable effect on the feel of low-end devices. Users and
    vendors push back hard against encryption that degrades the user
    experience, which always risks encryption being disabled entirely. So
    we need to choose the fastest option that gives us a solid margin of
    security, and here that's XChaCha12. The best known attack on ChaCha
    breaks only 7 rounds and has 2^235 time complexity, so ChaCha12's
    security margin is still better than AES-256's. Much has been learned
    about cryptanalysis of ARX ciphers since Salsa20 was originally designed
    in 2005, and it now seems we can be comfortable with a smaller number of
    rounds. The eSTREAM project also suggests the 12-round version of
    Salsa20 as providing the best balance among the different variants:
    combining very good performance with a "comfortable margin of security".

    Note that it would be trivial to add vanilla ChaCha12 in addition to
    XChaCha12. However, it's unneeded for now and therefore is omitted.

    As discussed in the patch that introduced XChaCha20 support, I
    considered splitting the code into separate chacha-common, chacha20,
    xchacha20, and xchacha12 modules, so that these algorithms could be
    enabled/disabled independently. However, since nearly all the code is
    shared anyway, I ultimately decided there would have been little benefit
    to the added complexity.

    Reviewed-by: Ard Biesheuvel
    Acked-by: Martin Willi
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Add support for the XChaCha20 stream cipher. XChaCha20 is the
    application of the XSalsa20 construction
    (https://cr.yp.to/snuffle/xsalsa-20081128.pdf) to ChaCha20 rather than
    to Salsa20. XChaCha20 extends ChaCha20's nonce length from 64 bits (or
    96 bits, depending on convention) to 192 bits, while provably retaining
    ChaCha20's security. XChaCha20 uses the ChaCha20 permutation to map the
    key and first 128 nonce bits to a 256-bit subkey. Then, it does the
    ChaCha20 stream cipher with the subkey and remaining 64 bits of nonce.
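
    In sketch form, building on the chacha_permute and hchacha20
    examples earlier in this log (illustrative; conventions for how the
    counter and remaining nonce words are laid out vary):

      /* Set up a ChaCha state for XChaCha20: HChaCha20 maps the key and
       * the first 128 nonce bits to a subkey, which then keys ordinary
       * ChaCha20 with the remaining 64 nonce bits. */
      static void xchacha20_init_state(uint32_t state[16],
                                       const uint32_t key[8],
                                       const uint32_t nonce[6])
      {
          uint32_t subkey[8];
          int i;

          hchacha20(key, nonce, subkey);  /* consumes nonce[0..3] */

          state[0] = 0x61707865; state[1] = 0x3320646e;
          state[2] = 0x79622d32; state[3] = 0x6b206574;
          for (i = 0; i < 8; i++)
              state[4 + i] = subkey[i];
          state[12] = 0;                  /* 64-bit block counter... */
          state[13] = 0;
          state[14] = nonce[4];           /* ...then the last 64 nonce bits */
          state[15] = nonce[5];
      }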

    We need XChaCha support in order to add support for the Adiantum
    encryption mode. Note that to meet our performance requirements, we
    actually plan to primarily use the variant XChaCha12. But we believe
    it's wise to first add XChaCha20 as a baseline with a higher security
    margin, in case there are any situations where it can be used.
    Supporting both variants is straightforward.

    Since XChaCha20's subkey differs for each request, XChaCha20 can't be a
    template that wraps ChaCha20; that would require re-keying the
    underlying ChaCha20 for every request, which wouldn't be thread-safe.
    Instead, we make XChaCha20 its own top-level algorithm which calls the
    ChaCha20 streaming implementation internally.

    Similar to the existing ChaCha20 implementation, we define the IV to be
    the nonce and stream position concatenated together. This allows users
    to seek to any position in the stream.

    I considered splitting the code into separate chacha20-common, chacha20,
    and xchacha20 modules, so that chacha20 and xchacha20 could be
    enabled/disabled independently. However, since nearly all the code is
    shared anyway, I ultimately decided there would have been little benefit
    to the added complexity of separate modules.

    Reviewed-by: Ard Biesheuvel
    Acked-by: Martin Willi
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

09 Nov, 2018

1 commit

  • In the "aes-fixed-time" AES implementation, disable interrupts while
    accessing the S-box, in order to make cache-timing attacks more
    difficult. Previously it was possible for the CPU to be interrupted
    while the S-box was loaded into L1 cache, potentially evicting the
    cachelines and causing later table lookups to be time-variant.

    In tests I did on x86 and ARM, this doesn't affect performance
    significantly. Responsiveness is potentially a concern, but interrupts
    are only disabled for a single AES block.

    Note that even after this change, the implementation still isn't
    necessarily guaranteed to be constant-time; see
    https://cr.yp.to/antiforgery/cachetiming-20050414.pdf for a discussion
    of the many difficulties involved in writing truly constant-time AES
    software. But it's valuable to make such attacks more difficult.
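
    The pattern is simply to bracket the per-block table-based rounds
    with local_irq_save()/local_irq_restore(). A kernel-style sketch
    (the function and struct names here are made up, not the actual
    aes_ti.c symbols):

      static void aes_ft_encrypt(const struct toy_aes_ctx *ctx,
                                 u8 *out, const u8 *in)
      {
          unsigned long flags;

          /*
           * Disable interrupts so that once the first lookups pull the
           * S-box into L1, nothing can evict it before the block is
           * done. Interrupts stay off for only a single AES block.
           */
          local_irq_save(flags);
          toy_aes_encrypt_core(ctx, out, in);   /* table-based rounds */
          local_irq_restore(flags);
      }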

    Reviewed-by: Ard Biesheuvel
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     

05 Oct, 2018

1 commit

  • For historical reasons, the AES-NI based implementation of the PCBC
    chaining mode uses a special FPU chaining mode wrapper template to
    amortize the FPU start/stop overhead over multiple blocks.

    When this FPU wrapper was introduced, it supported widely used
    chaining modes such as XTS and CTR (as well as LRW), but currently,
    PCBC is the only remaining user.

    Since there are no known users of pcbc(aes) in the kernel, let's remove
    this special driver, and rely on the generic pcbc driver to encapsulate
    the AES-NI core cipher.

    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     

04 Sep, 2018

2 commits

  • As it turns out, the AVX2 multibuffer SHA routines are currently
    broken [0], in a way that would have likely been noticed if this
    code were in wide use. Since the code is too complicated to be
    maintained by anyone except the original authors, and since the
    performance benefits for real-world use cases are debatable to
    begin with, it is better to drop it entirely for the moment.

    [0] https://marc.info/?l=linux-crypto-vger&m=153476243825350&w=2

    Suggested-by: Eric Biggers
    Cc: Megha Dey
    Cc: Tim Chen
    Cc: Geert Uytterhoeven
    Cc: Martin Schwidefsky
    Cc: Heiko Carstens
    Cc: Thomas Gleixner
    Cc: Ingo Molnar
    Signed-off-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Ard Biesheuvel
     
  • These are unused, undesired, and have never actually been used by
    anybody. The original authors of this code have changed their mind about
    its inclusion. While originally proposed for disk encryption on low-end
    devices, the idea was discarded [1] in favor of something else before
    that could really get going. Therefore, this patch removes Speck.

    [1] https://marc.info/?l=linux-crypto-vger&m=153359499015659

    Signed-off-by: Jason A. Donenfeld
    Acked-by: Eric Biggers
    Cc: stable@vger.kernel.org
    Acked-by: Ard Biesheuvel
    Signed-off-by: Herbert Xu

    Jason A. Donenfeld
     

31 May, 2018

2 commits

  • The x86 assembly implementations of Salsa20 use the frame base pointer
    register (%ebp or %rbp), which breaks frame pointer convention and
    breaks stack traces when unwinding from an interrupt in the crypto code.
    Recent (v4.10+) kernels will warn about this, e.g.

    WARNING: kernel stack regs at 00000000a8291e69 in syzkaller047086:4677 has bad 'bp' value 000000001077994c
    [...]

    But after looking into it, I believe there's very little reason to still
    retain the x86 Salsa20 code. First, these are *not* vectorized
    (SSE2/SSSE3/AVX2) implementations, which would be needed to get anywhere
    close to the best Salsa20 performance on any remotely modern x86
    processor; they're just regular x86 assembly. Second, it's still
    unclear that anyone is actually using the kernel's Salsa20 at all,
    especially given that now ChaCha20 is supported too, and with much more
    efficient SSSE3 and AVX2 implementations. Finally, in benchmarks I did
    on both Intel and AMD processors with both gcc 8.1.0 and gcc 4.9.4, the
    x86_64 salsa20-asm is actually slightly *slower* than salsa20-generic
    (~3% slower on Skylake, ~10% slower on Zen), while the i686 salsa20-asm
    is only slightly faster than salsa20-generic (~15% faster on Skylake,
    ~20% faster on Zen). The gcc version made little difference.

    So, the x86_64 salsa20-asm is pretty clearly useless. That leaves just
    the i686 salsa20-asm, which based on my tests provides a 15-20% speed
    boost. But that's without updating the code to not use %ebp. And given
    the maintenance cost, the small speed difference vs. salsa20-generic,
    the fact that few people still use i686 kernels, the doubt that anyone
    is even using the kernel's Salsa20 at all, and the fact that a SSE2
    implementation would almost certainly be much faster on any remotely
    modern x86 processor yet no one has cared enough to add one yet, I don't
    think it's worthwhile to keep.

    Thus, just remove both the x86_64 and i686 salsa20-asm implementations.

    Reported-by: syzbot+ffa3a158337bbc01ff09@syzkaller.appspotmail.com
    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • Commit 56e8e57fc3a7 ("crypto: morus - Add common SIMD glue code for
    MORUS") accidentally considered the glue code to be usable by different
    architectures, but it seems to be usable only on x86.

    This patch moves it under arch/x86/crypto and adds 'depends on X86' to
    the Kconfig options and also removes the prompt to hide these internal
    options from the user.

    Reported-by: kbuild test robot
    Signed-off-by: Ondrej Mosnacek
    Signed-off-by: Herbert Xu

    Ondrej Mosnacek
     

19 May, 2018

5 commits

  • This patch adds optimized implementations of MORUS-640 and MORUS-1280,
    utilizing the SSE2 and AVX2 x86 extensions.

    For MORUS-1280 (which operates on 256-bit blocks) we provide both AVX2
    and SSE2 implementations. Although SSE2 MORUS-1280 is slower than AVX2
    MORUS-1280, it is comparable in speed to the SSE2 MORUS-640.

    Signed-off-by: Ondrej Mosnacek
    Signed-off-by: Herbert Xu

    Ondrej Mosnacek
     
  • This patch adds a common glue code for optimized implementations of
    MORUS AEAD algorithms.

    Signed-off-by: Ondrej Mosnacek
    Signed-off-by: Herbert Xu

    Ondrej Mosnacek
     
  • This patch adds the generic implementation of the MORUS family of AEAD
    algorithms (MORUS-640 and MORUS-1280). The original authors of MORUS
    are Hongjun Wu and Tao Huang.

    At the time of writing, MORUS is one of the finalists in CAESAR, an
    open competition intended to select a portfolio of alternatives to
    the problematic AES-GCM:

    https://competitions.cr.yp.to/caesar-submissions.html
    https://competitions.cr.yp.to/round3/morusv2.pdf

    Signed-off-by: Ondrej Mosnacek
    Signed-off-by: Herbert Xu

    Ondrej Mosnacek
     
  • This patch adds optimized implementations of AEGIS-128, AEGIS-128L,
    and AEGIS-256, utilizing the AES-NI and SSE2 x86 extensions.

    Signed-off-by: Ondrej Mosnacek
    Signed-off-by: Herbert Xu

    Ondrej Mosnacek
     
  • This patch adds the generic implementation of the AEGIS family of AEAD
    algorithms (AEGIS-128, AEGIS-128L, and AEGIS-256). The original
    authors of AEGIS are Hongjun Wu and Bart Preneel.

    At the time of writing, AEGIS is one of the finalists in CAESAR, an
    open competition intended to select a portfolio of alternatives to
    the problematic AES-GCM:

    https://competitions.cr.yp.to/caesar-submissions.html
    https://competitions.cr.yp.to/round3/aegisv11.pdf

    Signed-off-by: Ondrej Mosnacek
    Signed-off-by: Herbert Xu

    Ondrej Mosnacek
     

21 Apr, 2018

1 commit

  • Adds zstd support to crypto and scompress. Only supports the default
    level.

    Previously we held off on this patch, since there weren't any users.
    Now zram is ready for zstd support, but depends on CONFIG_CRYPTO_ZSTD,
    which isn't defined until this patch is in. I also see a patch adding
    zstd to pstore [0], which depends on crypto zstd.

    [0] lkml.kernel.org/r/9c9416b2dff19f05fb4c35879aaa83d11ff72c92.1521626182.git.geliangtang@gmail.com
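
    For completeness, a hedged fragment showing how a compression
    algorithm is exercised through the kernel's synchronous crypto_comp
    interface (kernel-context code; src/slen, dst/dst_size, and
    out/out_size are assumed caller-provided buffers, and error handling
    is trimmed):

      struct crypto_comp *tfm = crypto_alloc_comp("zstd", 0, 0);
      unsigned int dlen = dst_size, olen = out_size;

      if (!IS_ERR(tfm)) {
          /* dlen/olen are in-out: capacity in, bytes produced out. */
          crypto_comp_compress(tfm, src, slen, dst, &dlen);
          crypto_comp_decompress(tfm, dst, dlen, out, &olen);
          crypto_free_comp(tfm);
      }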

    Signed-off-by: Nick Terrell
    Signed-off-by: Herbert Xu

    Nick Terrell