21 Mar, 2006

6 commits

  • The AES setkey routine writes 64 bytes to the E_KEY area even though
    there are only 60 bytes there. It is in fact safe, since E_KEY is
    immediately followed by D_KEY, which is initialised afterwards. However,
    doing this may trigger undefined behaviour and makes Coverity unhappy.

    So by combining E_KEY and D_KEY into one array we sidestep this issue
    altogether.

    This problem was reported by Adrian Bunk.

    Signed-off-by: Herbert Xu

    David McCullough
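
    A minimal sketch of the shape of the fix, assuming a context layout
    like the one described above (field names and sizes here are
    illustrative, not the exact kernel definitions):

        #include <linux/types.h>

        /* Before: the key schedule writes past E_KEY into D_KEY. */
        struct aes_ctx_before {
                u32 E_KEY[60];
                u32 D_KEY[60];
        };

        /* After: one combined array, so the same writes stay in bounds. */
        struct aes_ctx_after {
                u32 key[120];   /* first half encrypt, second half decrypt */
        };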
     
  • Force 32-bit alignment on keys in tcrypt test vectors. Also rearrange the
    structure to prevent unnecessary padding.

    Signed-off-by: Atsushi Nemoto
    Signed-off-by: Herbert Xu

    Atsushi Nemoto
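
    The mechanism, sketched on a hypothetical test-vector struct (the real
    tcrypt.h layout differs): force 4-byte alignment on the key member and
    group the fixed-size members so no padding is needed.

        struct cipher_testvec_sketch {
                char key[64] __attribute__ ((__aligned__(4)));
                char iv[16];
                char input[64];
                char result[64];
                unsigned short ilen;    /* wider members together ... */
                unsigned short rlen;
                unsigned char klen;     /* ... narrow ones last */
                unsigned char fail;
        };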
     
  • The "des3_ede" and "serpent" lack cra_alignmask.

    Signed-off-by: Atsushi Nemoto
    Signed-off-by: Herbert Xu

    Atsushi Nemoto
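
    How the requirement is declared, as a hedged sketch (field values
    illustrative; the remaining crypto_alg fields are omitted):

        #include <linux/crypto.h>

        static struct crypto_alg serpent_alg = {
                .cra_name      = "serpent",
                .cra_flags     = CRYPTO_ALG_TYPE_CIPHER,
                .cra_blocksize = 16,
                .cra_alignmask = 3,     /* buffers must be 4-byte aligned */
        };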
     
  • This patch converts crypto/ to kzalloc usage.
    Compile-tested with allyesconfig.

    Signed-off-by: Eric Sesterhenn
    Signed-off-by: Herbert Xu

    Eric Sesterhenn
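
    The conversion pattern, sketched on a hypothetical context type:

        #include <linux/slab.h>
        #include <linux/string.h>

        struct my_ctx { int state; };   /* illustrative */

        /* Before: allocate, then clear in a separate step. */
        static struct my_ctx *alloc_ctx_old(void)
        {
                struct my_ctx *ctx = kmalloc(sizeof(*ctx), GFP_KERNEL);
                if (ctx)
                        memset(ctx, 0, sizeof(*ctx));
                return ctx;
        }

        /* After: kzalloc() returns zeroed memory in one call. */
        static struct my_ctx *alloc_ctx_new(void)
        {
                return kzalloc(sizeof(struct my_ctx), GFP_KERNEL);
        }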
     
  • Since tfm contexts can contain arbitrary types, we should provide at
    least natural alignment (__attribute__ ((__aligned__))) for them. In
    particular, this is needed on the Xscale, which is a 32-bit architecture
    with a u64 type that requires 64-bit alignment. This problem was
    reported by Ronen Shitrit.

    The crypto_tfm structure's size was 44 bytes on 32-bit architectures and
    80 bytes on 64-bit architectures. So adding this requirement only means
    that we have to add an extra 4 bytes on 32-bit architectures.

    On i386 the natural alignment is 16 bytes which also benefits the VIA
    Padlock as it no longer has to manually align its context structure to
    128 bits.

    Signed-off-by: Herbert Xu

    Herbert Xu
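
    The core of the change, sketched (the struct below only approximates
    the crypto_tfm of that era): a bare __aligned__ attribute requests the
    widest alignment the ABI defines, so any context member type is safe.

        #include <linux/types.h>

        struct crypto_tfm_sketch {
                u32 crt_flags;
                /* ... other members ... */
                void *__crt_ctx[] __attribute__ ((__aligned__));
        };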
     
  • Convert open-coded rotations to rol32/ror32.

    Signed-off-by: Herbert Xu

    Denis Vlasenko
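
    The pattern being converted, as a small sketch:

        #include <linux/types.h>
        #include <linux/bitops.h>

        static u32 mix(u32 x)
        {
                /* Open-coded rotate-left by 7:
                 *      x = (x << 7) | (x >> 25);
                 * The standard helper says the same thing and lets the
                 * architecture supply a rotate instruction: */
                return rol32(x, 7);
        }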
     

10 Jan, 2006

10 commits

  • Many cipher implementations use 4-byte/8-byte loads/stores which require
    alignment on some architectures. This patch explicitly sets the alignment
    requirements for them.

    Signed-off-by: Herbert Xu

    Herbert Xu
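
    Why the requirement exists, sketched: a cipher that reads its input as
    whole words performs a misaligned load whenever the caller's buffer is
    odd-aligned, which faults or is slow on some architectures (the
    function below is illustrative, not kernel code):

        #include <linux/types.h>

        static void load_block(const u8 *in, u32 out[4])
        {
                /* Requires `in` to be 4-byte aligned; setting
                 * .cra_alignmask = 3 makes the API guarantee that. */
                const u32 *src = (const u32 *)in;

                out[0] = src[0];
                out[1] = src[1];
                out[2] = src[2];
                out[3] = src[3];
        }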
     
  • The cipher code path may allocate up to two blocks of data on the stack.
    Therefore we need to place limits on the maximum block size.

    Signed-off-by: Herbert Xu

    Herbert Xu
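
    A sketch of the kind of check this implies; the limit's name and value
    here are illustrative, not the exact ones chosen:

        #include <linux/crypto.h>
        #include <linux/errno.h>

        #define MAX_CIPHER_BLOCKSIZE 16         /* illustrative bound */

        static int register_cipher_checked(struct crypto_alg *alg)
        {
                /* On-stack temporaries scale with the block size, so
                 * refuse anything that would blow the stack budget. */
                if (alg->cra_blocksize > MAX_CIPHER_BLOCKSIZE)
                        return -EINVAL;
                return crypto_register_alg(alg);
        }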
     
  • After a partial update, the done pointer is off to the right by 64 bytes.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • Since the temporary buffer is used as an argument to cia_decrypt, it must be
    aligned by cra_alignmask. This bug was found by linux@horizon.com.

    Signed-off-by: Herbert Xu

    Herbert Xu
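
    The usual pattern for satisfying such a requirement, sketched
    (BLOCK_SIZE is a placeholder for the cipher's block size):

        #include <linux/crypto.h>
        #include <linux/kernel.h>

        #define BLOCK_SIZE 16                   /* placeholder */

        static void make_aligned_tmp(struct crypto_tfm *tfm)
        {
                unsigned int align = crypto_tfm_alg_alignmask(tfm) + 1;
                u8 stack[BLOCK_SIZE + align - 1];
                u8 *tmp = (u8 *)ALIGN((unsigned long)stack, align);

                /* tmp now satisfies cra_alignmask and can be handed to
                 * cia_decrypt */
                (void)tmp;
        }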
     
  • This patch avoids shifting the count left and right needlessly for
    each call to sha1_update(). Instead, the shift can be done once at the
    end, in sha1_final().

    Keeping the previous test example (sha1_update() successively called with
    len=64), a 1.3% performance increase can be observed on i386, or 0.2% on
    ARM. The generated code is also smaller on ARM.

    Signed-off-by: Nicolas Pitre
    Signed-off-by: Herbert Xu

    Nicolas Pitre
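
    The idea in miniature (context layout illustrative): keep a plain byte
    count during updates and convert to bits exactly once.

        #include <linux/types.h>

        struct sha1_ctx_sketch {
                u64 count;              /* bytes hashed, unshifted */
        };

        static void note_len(struct sha1_ctx_sketch *sctx, unsigned int len)
        {
                sctx->count += len;     /* no shifting per update */
        }

        static u64 final_bits(const struct sha1_ctx_sketch *sctx)
        {
                return sctx->count << 3;        /* bits, computed once */
        }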
     
  • This patch gives more descriptive names to the variables i and j.

    Signed-off-by: Nicolas Pitre
    Signed-off-by: Herbert Xu

    Nicolas Pitre
     
  • The current code unconditionally copies the first block for every call
    to sha1_update(). This can be avoided if there is no pending partial
    block. This is always the case on the first call to sha1_update() (if
    the length is >= 64, of course).

    Furthermore, temp need not be used at all if sha_transform is never
    invoked. Also consolidate the sha_transform calls into one to reduce
    code size.

    Signed-off-by: Nicolas Pitre
    Signed-off-by: Herbert Xu

    Nicolas Pitre
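
    A simplified sketch of the resulting control flow (not the exact
    kernel code; sha_transform and SHA_WORKSPACE_WORDS come from
    linux/cryptohash.h):

        #include <linux/cryptohash.h>
        #include <linux/string.h>
        #include <linux/types.h>

        struct sha1_sketch {
                u32 state[5];
                u64 count;
                u8  buffer[64];
        };

        static void sha1_update_sketch(struct sha1_sketch *sctx,
                                       const u8 *data, unsigned int len)
        {
                unsigned int partial = sctx->count & 0x3f;
                u32 temp[SHA_WORKSPACE_WORDS];

                sctx->count += len;

                if (partial) {          /* finish the pending block first */
                        unsigned int fill = 64 - partial;

                        if (len < fill) {
                                memcpy(sctx->buffer + partial, data, len);
                                return;
                        }
                        memcpy(sctx->buffer + partial, data, fill);
                        sha_transform(sctx->state,
                                      (const char *)sctx->buffer, temp);
                        data += fill;
                        len -= fill;
                }
                while (len >= 64) {     /* no memcpy on this path */
                        sha_transform(sctx->state, (const char *)data, temp);
                        data += 64;
                        len -= 64;
                }
                memcpy(sctx->buffer, data, len);
        }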
     
  • As the Crypto API now allows multiple implementations to be registered
    for the same algorithm, we no longer have to play tricks with Kconfig
    to select the right AES implementation.

    This patch sets the driver name and priority for all the AES
    implementations and removes the Kconfig conditions on the C implementation
    for AES.

    Signed-off-by: Herbert Xu

    Herbert Xu
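
    The two fields at work, sketched with illustrative values:

        #include <linux/crypto.h>

        static struct crypto_alg aes_generic_alg = {
                .cra_name        = "aes",          /* what users request */
                .cra_driver_name = "aes-generic",  /* this implementation */
                .cra_priority    = 100,            /* C fallback: low */
                /* an accelerated driver registers the same cra_name with
                 * its own cra_driver_name and a higher priority */
        };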
     
  • This is the first step on the road towards asynchronous support in
    the Crypto API. It adds support for having multiple crypto_alg objects
    for the same algorithm registered in the system.

    For example, each device driver would register a crypto_alg object
    for each algorithm that it supports. While at the same time the
    user may load software implementations of those same algorithms.

    Users of the Crypto API may then select a specific implementation
    by name, or ask for any implementation of a given algorithm, in
    which case the one with the highest priority is used.

    The priority field is a 32-bit signed integer. In future it will be
    possible to modify it from user-space.

    This also provides a solution to the problem of selecting amongst
    various AES implementations, that is, aes vs. aes-i586 vs. aes-padlock.

    Signed-off-by: Herbert Xu

    Herbert Xu
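
    What that looks like from the caller's side, per the description above
    (a hedged sketch; the driver-name string is illustrative):

        #include <linux/crypto.h>

        static struct crypto_tfm *pick_aes(int want_padlock)
        {
                /* The generic name "aes" yields the highest-priority
                 * implementation; a specific name pins one of them. */
                return crypto_alloc_tfm(want_padlock ? "aes-padlock"
                                                     : "aes", 0);
        }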
     
  • A lot of crypto code needs to read or write 32-bit or 64-bit words in
    a specific byte order. Many implementations open-code this by
    reading/writing one byte at a time. This patch converts all the
    applicable usages over to use the standard byte order macros.

    This is based on a previous patch by Denis Vlasenko.

    Signed-off-by: Herbert Xu

    Herbert Xu
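
    The conversion in miniature:

        #include <linux/types.h>
        #include <asm/byteorder.h>

        static u32 load_be32(const u8 *p)
        {
                /* Open-coded big-endian load, one byte at a time:
                 *   return (p[0] << 24) | (p[1] << 16) | (p[2] << 8) | p[3];
                 * The macro form (p assumed suitably aligned): */
                return be32_to_cpu(*(const __be32 *)p);
        }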
     

07 Jan, 2006

5 commits

  • Sanitize some s390 Kconfig options. We have ARCH_S390, ARCH_S390X,
    ARCH_S390_31, 64BIT, S390_SUPPORT and COMPAT. Replace these six options
    with just S390, 64BIT and COMPAT.

    Signed-off-by: Martin Schwidefsky
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Martin Schwidefsky
     
  • Add new test vectors to the AES test suite for AES CBC and for AES
    with plaintext larger than the AES block size.

    Signed-off-by: Jan Glauber
    Signed-off-by: Martin Schwidefsky
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Glauber
     
  • Add support for the hardware accelerated AES crypto algorithm.

    Signed-off-by: Jan Glauber
    Signed-off-by: Martin Schwidefsky
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Glauber
     
  • Add support for the hardware accelerated sha256 crypto algorithm.

    Signed-off-by: Jan Glauber
    Signed-off-by: Martin Schwidefsky
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Glauber
     
  • Replace all references to z990 with s390 in the in-kernel crypto files
    in arch/s390/crypto. The code is not specific to a particular machine
    (z990) but to the s390 platform. A big diff, but no functional change.

    Signed-off-by: Jan Glauber
    Signed-off-by: Martin Schwidefsky
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Glauber
     

02 Sep, 2005

2 commits

  • The crypto layer currently uses in_atomic() to determine whether it is
    allowed to sleep. This is incorrect since spin locks don't always cause
    in_atomic() to return true.

    Instead of that, this patch returns to an earlier idea of a per-tfm flag
    which determines whether sleeping is allowed. Unlike the earlier version,
    the default is to not allow sleeping. This ensures that no existing code
    can break.

    As usual, this flag may either be set through crypto_alloc_tfm(), or
    just before a specific crypto operation.

    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
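
    Both ways of setting the flag, sketched as described above:

        #include <linux/crypto.h>

        static struct crypto_tfm *sleepy_aes(void)
        {
                /* Either at allocation time ... */
                struct crypto_tfm *tfm =
                        crypto_alloc_tfm("aes", CRYPTO_TFM_REQ_MAY_SLEEP);

                /* ... or just before a specific operation: */
                if (tfm)
                        tfm->crt_flags |= CRYPTO_TFM_REQ_MAY_SLEEP;
                return tfm;
        }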
     
  • The XTEA implementation was incorrect due to a misinterpretation of
    operator precedence. Because of the widespread nature of this error,
    the erroneous implementation will be kept, albeit under the new name
    of XETA.

    Signed-off-by: Aaron Grothe
    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Aaron Grothe
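
    A sketch of the discrepancy, written against the reference XTEA round
    function (variable names illustrative; consult the reference
    specification for the full cipher):

        #include <linux/types.h>

        /* Reference XTEA groups the terms like this: */
        static u32 xtea_mix(u32 z, u32 sum, const u32 key[4])
        {
                return ((z << 4 ^ z >> 5) + z) ^ (sum + key[sum & 3]);
        }

        /* The misread grouping that shipped, preserved as "xeta": */
        static u32 xeta_mix(u32 z, u32 sum, const u32 key[4])
        {
                return (z << 4 ^ z >> 5) + (z ^ sum) + key[sum & 3];
        }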
     

28 Jul, 2005

1 commit

  • `gcc -W' likes to complain if the static keyword is not at the
    beginning of the declaration. This patch fixes up all remaining
    occurrences of "inline static", replacing them with "static inline",
    across the entire kernel tree (140 occurrences in 47 files).

    While making this change I came across a few lines with trailing
    whitespace that I also fixed up; I have also added or removed a blank
    line or two here and there, but there are no functional changes in
    the patch.

    Signed-off-by: Jesper Juhl
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jesper Juhl
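
    The change in miniature:

        /* `gcc -W' warns here because the storage class is not first: */
        inline static int old_way(void) { return 0; }

        /* The form everything was converted to: */
        static inline int new_way(void) { return 0; }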