09 Jan, 2020

1 commit

  • The CRYPTO_TFM_RES_BAD_KEY_LEN flag was apparently meant as a way to
    make the ->setkey() functions provide more information about errors.

    However, no one actually checks for this flag, which makes it pointless.

    Also, many algorithms fail to set this flag when given a key of the wrong length.
    Reviewing just the generic implementations, this is the case for
    aes-fixed-time, cbcmac, echainiv, nhpoly1305, pcrypt, rfc3686, rfc4309,
    rfc7539, rfc7539esp, salsa20, seqiv, and xcbc. But there are probably
    many more in arch/*/crypto/ and drivers/crypto/.

    Some algorithms can even set this flag when the key is the correct
    length. For example, authenc and authencesn set it when the key payload
    is malformed in any way (not just a bad length), the atmel-sha and ccree
    drivers can set it if a memory allocation fails, and the chelsio driver
    sets it for bad auth tag lengths, not just bad key lengths.

    So even if someone actually wanted to start checking this flag (which
    seems unlikely, since it has been unused for a long time), a lot of work
    would be needed to get it working correctly. It would probably be much
    better to go back to the drawing board and define distinct return values
    instead, e.g. -EINVAL if the key is invalid for the algorithm vs.
    -EKEYREJECTED if the key was rejected by a policy like "no weak keys"
    (see the sketch after this entry). That would be much simpler, less
    error-prone, and easier to test.

    So just remove this flag.

    Signed-off-by: Eric Biggers
    Reviewed-by: Horia Geantă
    Signed-off-by: Herbert Xu

    Eric Biggers
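
    As an illustration of the return-value approach suggested above, a minimal
    sketch of a hypothetical ->setkey() is shown below. The context structure,
    the accepted key lengths and the key_is_weak() policy helper are made up
    for illustration; only crypto_skcipher_ctx() and fips_enabled are existing
    kernel interfaces.

        /* Hypothetical sketch: report key problems via return values alone. */
        static int example_setkey(struct crypto_skcipher *tfm,
                                  const u8 *key, unsigned int keylen)
        {
                struct example_ctx *ctx = crypto_skcipher_ctx(tfm);

                if (keylen != 16 && keylen != 32)
                        return -EINVAL;         /* invalid length for this algorithm */

                if (fips_enabled && key_is_weak(key, keylen))
                        return -EKEYREJECTED;   /* rejected by a "no weak keys" policy */

                memcpy(ctx->key, key, keylen);
                ctx->keylen = keylen;
                return 0;
        }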
     

11 Dec, 2019

1 commit

  • The crypto glue performed function prototype casting via macros to make
    indirect calls to assembly routines. Instead of performing casts at the
    call sites (which trips Control Flow Integrity prototype checking), switch
    each prototype to a common, standard set of arguments, which allows the
    existing macros to be removed. To keep the pointer math unchanged, internal
    casting between u128 pointers and u8 pointers is added (a before/after
    sketch follows this entry).

    Co-developed-by: João Moreira
    Signed-off-by: João Moreira
    Signed-off-by: Kees Cook
    Reviewed-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Kees Cook
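
    A rough before/after sketch of the idea (the names below are made up; this
    is not the actual glue code): previously the call site cast the assembly
    routine's prototype, whereas now the routine is declared with the common
    (const void *ctx, u8 *dst, const u8 *src) signature and any 128-bit
    pointer math is done by casting inside a helper.

        /* Before (sketch): the cast at the call site hides a prototype
         * mismatch, which trips CFI's indirect-call prototype checking. */
        #define EXAMPLE_FUNC_CAST(fn) ((void (*)(void *, u8 *, const u8 *))(fn))

        /* After (sketch): the asm routine carries the common prototype
         * itself, so no cast is needed where it is called indirectly. */
        asmlinkage void example_ecb_enc_8way(const void *ctx, u8 *dst,
                                             const u8 *src);

        /* 128-bit pointer math is kept unchanged by casting between
         * u8 * and u128 * inside the helper instead of at the call site. */
        static inline void example_xor_128bit(u8 *dst, const u8 *src)
        {
                u128 *d = (u128 *)dst;
                const u128 *s = (const u128 *)src;

                d->a ^= s->a;
                d->b ^= s->b;
        }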
     

31 May, 2019

1 commit

  • Based on 1 normalized pattern(s):

    this program is free software you can redistribute it and or modify
    it under the terms of the gnu general public license as published by
    the free software foundation either version 2 of the license or at
    your option any later version this program is distributed in the
    hope that it will be useful but without any warranty without even
    the implied warranty of merchantability or fitness for a particular
    purpose see the gnu general public license for more details you
    should have received a copy of the gnu general public license along
    with this program if not write to the free software foundation inc
    59 temple place suite 330 boston ma 02111 1307 usa

    extracted by the scancode license scanner the SPDX license identifier

    GPL-2.0-or-later

    has been chosen to replace the boilerplate/reference in 1334 file(s).

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Allison Randal
    Reviewed-by: Richard Fontana
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190527070033.113240726@linutronix.de
    Signed-off-by: Greg Kroah-Hartman

    Thomas Gleixner
     

03 Dec, 2018

1 commit

  • Go over arch/x86/ and fix common typos in comments,
    and a typo in an actual function argument name.

    No change in functionality intended.

    Cc: Thomas Gleixner
    Cc: Borislav Petkov
    Cc: Peter Zijlstra
    Cc: Linus Torvalds
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar

    Ingo Molnar
     

03 Mar, 2018

2 commits

  • Convert the AVX implementation of CAST6 from the (deprecated) ablkcipher
    and blkcipher interfaces over to the skcipher interface. Note that this
    includes replacing the use of ablk_helper with crypto_simd.

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
     
  • The LRW template now wraps an ECB mode algorithm rather than the block
    cipher directly. Therefore it is now redundant for crypto modules to
    wrap their ECB code with generic LRW code themselves via lrw_crypt().

    Remove the lrw-cast6-avx algorithm, which did this. Users who request
    lrw(cast6) and previously would have gotten lrw-cast6-avx will now get
    lrw(ecb-cast6-avx) instead, which is just as fast (see the allocation
    sketch after this entry).

    Signed-off-by: Eric Biggers
    Signed-off-by: Herbert Xu

    Eric Biggers
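
    From the caller's point of view nothing changes; lrw(cast6) is still
    requested by name and the crypto core picks the underlying ECB
    implementation. A minimal sketch (the function name is made up;
    crypto_alloc_skcipher() is the real API):

        #include <crypto/skcipher.h>

        static struct crypto_skcipher *example_get_lrw_cast6(void)
        {
                /* The core instantiates the LRW template around the fastest
                 * ECB implementation available, e.g. lrw(ecb-cast6-avx). */
                return crypto_alloc_skcipher("lrw(cast6)", 0, 0);
        }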
     

17 Feb, 2016

1 commit

  • The patch centralizes the XTS key check logic into the service function
    xts_check_key, which is invoked from the different XTS implementations.
    With this, the XTS implementations in ARM, ARM64, PPC and S390 now have
    a sanity check for XTS keys similar to the other arches.

    In addition, this service function gained a check that the key is not
    identical to the tweak key, as mandated by FIPS 140-2 IG A.9. Since this
    check is not part of the standards defining XTS, it is only enforced when
    the kernel runs in FIPS mode (a sketch of the logic follows this entry).

    Signed-off-by: Stephan Mueller
    Signed-off-by: Herbert Xu

    Stephan Mueller
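
    The logic of the check is simple; roughly as follows (a sketch in the
    spirit of xts_check_key(), not the exact helper; crypto_memneq() and
    fips_enabled are existing kernel interfaces):

        #include <crypto/algapi.h>      /* crypto_memneq() */
        #include <linux/fips.h>         /* fips_enabled */

        static int example_xts_key_check(const u8 *key, unsigned int keylen)
        {
                /* An XTS key is two keys of equal size concatenated,
                 * so the total length must be even. */
                if (keylen % 2)
                        return -EINVAL;

                /* FIPS 140-2 IG A.9: the data key must differ from the
                 * tweak key. Only enforced when fips_enabled is set. */
                if (fips_enabled &&
                    !crypto_memneq(key, key + keylen / 2, keylen / 2))
                        return -EINVAL;

                return 0;
        }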
     

14 Sep, 2015

1 commit

  • There are two concepts that have some confusing naming:
    1. Extended State Component numbers (currently called XFEATURE_BIT_*)
    2. Extended State Component masks (currently called XSTATE_*)

    The numbers are (currently) from 0-9. State component 3 is the
    bounds registers for MPX, for instance.

    But when we want to enable "state component 3", we go set a bit
    in XCR0. The bit we set is 1<<3 (the number-versus-mask distinction is
    illustrated after this entry).

    Cc: Andy Lutomirski
    Cc: Borislav Petkov
    Cc: Brian Gerst
    Cc: Denys Vlasenko
    Cc: Fenghua Yu
    Cc: H. Peter Anvin
    Cc: Linus Torvalds
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: Tim Chen
    Cc: dave@sr71.net
    Cc: linux-kernel@vger.kernel.org
    Link: http://lkml.kernel.org/r/20150902233126.38653250@viggo.jf.intel.com
    [ Ported to v4.3-rc1. ]
    Signed-off-by: Ingo Molnar

    Dave Hansen
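
    The number-versus-mask distinction is easiest to see side by side. Using
    the MPX bounds registers (state component 3) as the example, with
    illustrative defines in the XFEATURE_*/XFEATURE_MASK_* naming style:

        /* State component *number*: which component it is. */
        #define XFEATURE_BNDREGS        3

        /* State component *mask*: the bit set in XCR0 to enable it. */
        #define XFEATURE_MASK_BNDREGS   (1 << XFEATURE_BNDREGS)   /* == 0x8 */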
     

19 May, 2015

3 commits

  • Use the new 'cpu_has_xfeatures()' function to query AVX CPU support.

    This has the following advantages for the driver:

    - Decouples the driver from FPU internals: it is now only using
      <asm/fpu/api.h>.

    - Removes detection complexity from the driver: no more raw XGETBV
      instruction use.

    - Shrinks the code a bit.

    - Standardizes feature-name error message printouts across drivers.

    There are also advantages for the x86 FPU code: once all drivers are
    decoupled from the FPU internals, we can move those internals out of
    common headers, and we will also be able to remove xcr.h. (A sketch of
    the resulting init-time check follows this entry.)

    Cc: Andy Lutomirski
    Cc: Borislav Petkov
    Cc: Dave Hansen
    Cc: Fenghua Yu
    Cc: H. Peter Anvin
    Cc: Linus Torvalds
    Cc: Oleg Nesterov
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Signed-off-by: Ingo Molnar

    Ingo Molnar
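
    A driver init check using the new helper then looks roughly like this
    (a sketch only; the exact xfeature mask macro names depend on the tree
    at the time):

        static int __init example_avx_init(void)
        {
                const char *feature_name;

                /* Let the FPU core check for SSE and AVX (YMM) state support;
                 * on failure it reports which feature is missing. */
                if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM,
                                       &feature_name)) {
                        pr_info("CPU feature '%s' is not supported.\n",
                                feature_name);
                        return -ENODEV;
                }

                return 0;
        }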
     
  • 'xsave' is an x86 instruction name to most people - but xsave.h is
    about a lot more than just the XSAVE instruction: it includes
    definitions and support code, both internal and external, related to
    xstate and xfeatures.

    As a first step in cleaning up the various xstate uses, rename this
    header to 'fpu/xstate.h' to better reflect what the header file
    is about.

    Cc: Andy Lutomirski
    Cc: Borislav Petkov
    Cc: Dave Hansen
    Cc: Fenghua Yu
    Cc: H. Peter Anvin
    Cc: Linus Torvalds
    Cc: Oleg Nesterov
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • Move the xsave.h header file to the FPU directory as well.

    Reviewed-by: Borislav Petkov
    Cc: Andy Lutomirski
    Cc: Dave Hansen
    Cc: Fenghua Yu
    Cc: H. Peter Anvin
    Cc: Linus Torvalds
    Cc: Oleg Nesterov
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Signed-off-by: Ingo Molnar

    Ingo Molnar
     

25 Apr, 2013

1 commit

  • Change cast6-avx to use the new XTS code, for smaller stack usage and a
    small boost to performance.

    tcrypt results, with Intel i5-2450M:
    size     enc      dec
    16B      1.01x    1.01x
    64B      1.01x    1.00x
    256B     1.09x    1.02x
    1024B    1.08x    1.06x
    8192B    1.08x    1.07x

    Signed-off-by: Jussi Kivilinna
    Signed-off-by: Herbert Xu

    Jussi Kivilinna
     

24 Oct, 2012

2 commits

  • Introduce new assembler functions to avoid the use of temporary stack
    buffers in glue code. This also allows the use of vector instructions for
    XORing output in CBC and CTR modes and for constructing IVs in CTR mode.

    ECB mode sees a ~0.5% decrease in speed because one extra function call
    was added. CBC mode decryption and CTR mode benefit from the vector
    operations and gain ~2%.

    Signed-off-by: Jussi Kivilinna
    Signed-off-by: Herbert Xu

    Jussi Kivilinna
     
  • On little-endian machines, the 'u128' type currently used for CTR mode
    stores the counter with its two 'long long' halves in swapped order, which
    would require extra swap operations in the SSE/AVX code. Using le128
    instead of u128 allows the IV calculations to be done more easily with
    vector registers (a small illustration follows this entry).

    Signed-off-by: Jussi Kivilinna
    Signed-off-by: Herbert Xu

    Jussi Kivilinna
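
    As a purely hypothetical illustration of the representation point (the
    type and helper below are made up, not the kernel's le128): a CTR counter
    kept as two little-endian 64-bit halves can be incremented and then loaded
    by SSE/AVX code without extra per-block byte swaps.

        typedef struct {
                __le64 lo, hi;
        } example_le128;

        static inline void example_le128_inc(example_le128 *ctr)
        {
                u64 lo = le64_to_cpu(ctr->lo);
                u64 hi = le64_to_cpu(ctr->hi);

                if (++lo == 0)          /* carry into the high half */
                        hi++;

                ctr->lo = cpu_to_le64(lo);
                ctr->hi = cpu_to_le64(hi);
        }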
     

01 Aug, 2012

1 commit

  • This patch adds an x86_64/AVX assembler implementation of the Cast6 block
    cipher. The implementation processes eight blocks in parallel (two 4-block
    chunks of AVX operations). The table lookups are done in general-purpose
    registers. For small block sizes, the functions from the generic module
    are called. A good performance increase is provided for block sizes
    greater than or equal to 128 bytes.

    The patch has been tested with tcrypt and automated filesystem tests.

    Tcrypt benchmark results:

    Intel Core i5-2500 CPU (fam:6, model:42, step:7)

    cast6-avx-x86_64 vs. cast6-generic
    128bit key: (lrw:256bit) (xts:256bit)
    size    ecb-enc ecb-dec cbc-enc cbc-dec ctr-enc ctr-dec lrw-enc lrw-dec xts-enc xts-dec
    16B     0.97x   1.00x   1.01x   1.01x   0.99x   0.97x   0.98x   1.01x   0.96x   0.98x
    64B     0.98x   0.99x   1.02x   1.01x   0.99x   1.00x   1.01x   0.99x   1.00x   0.99x
    256B    1.77x   1.84x   0.99x   1.85x   1.77x   1.77x   1.70x   1.74x   1.69x   1.72x
    1024B   1.93x   1.95x   0.99x   1.96x   1.93x   1.93x   1.84x   1.85x   1.89x   1.87x
    8192B   1.91x   1.95x   0.99x   1.97x   1.95x   1.91x   1.86x   1.87x   1.93x   1.90x

    256bit key: (lrw:384bit) (xts:512bit)
    size    ecb-enc ecb-dec cbc-enc cbc-dec ctr-enc ctr-dec lrw-enc lrw-dec xts-enc xts-dec
    16B     0.97x   0.99x   1.02x   1.01x   0.98x   0.99x   1.00x   1.00x   0.98x   0.98x
    64B     0.98x   0.99x   1.01x   1.00x   1.00x   1.00x   1.01x   1.01x   0.97x   1.00x
    256B    1.77x   1.83x   1.00x   1.86x   1.79x   1.78x   1.70x   1.76x   1.71x   1.69x
    1024B   1.92x   1.95x   0.99x   1.96x   1.93x   1.93x   1.83x   1.86x   1.89x   1.87x
    8192B   1.94x   1.95x   0.99x   1.97x   1.95x   1.95x   1.87x   1.87x   1.93x   1.91x

    Signed-off-by: Johannes Goetzfried
    Signed-off-by: Herbert Xu

    Johannes Goetzfried