19 May, 2010

1 commit

  • These are akin to the blkcipher_walk helpers.

    The main differences in the async variant are:

    1) Only physical walking is supported. We can't hold on to
    kmap mappings across an async operation, so virtual
    ablkcipher_walk operations can't be supported anyway.

    2) Bounce buffers used for async mode need to be persistent and
    freed at a later point in time when the async op completes.
    Therefore we maintain a list of writeback buffers and require
    that the ablkcipher_walk user call the 'complete' operation
    so we can copy the bounce buffers out to the real buffers and
    free up the bounce buffer chunks.

    These interfaces will be used by the new Niagara2 crypto driver.
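
    A rough sketch of the intended calling pattern, assuming the
    conventions carry over from the blkcipher_walk helpers (error
    handling elided; the hardware hand-off is only indicated by a
    comment, and the third argument to the done call is taken to be
    the number of bytes left unprocessed in the chunk):

        struct ablkcipher_walk walk;
        unsigned int nbytes;
        int err;

        ablkcipher_walk_init(&walk, req->dst, req->src, req->nbytes);
        err = ablkcipher_walk_phys(req, &walk);

        while ((nbytes = walk.nbytes) != 0) {
                /* Queue walk.src.page/offset -> walk.dst.page/offset
                 * to the async engine for this chunk.
                 */
                err = ablkcipher_walk_done(req, &walk, 0);
        }

        /* Later, when the async op completes: copy any bounce
         * buffers back to the real destination and free them.
         */
        ablkcipher_walk_complete(&walk);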

    Signed-off-by: David S. Miller
    Signed-off-by: Herbert Xu

    David S. Miller
     

17 Jan, 2010

1 commit

  • This patch adds export/import support to md5. The exported type is
    defined by struct md5_state.

    This is modeled after the equivalent change to sha1_generic.
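
    For reference, the exported layout is just the raw MD5 running
    state; a sketch matching the definition in <crypto/md5.h>:

        struct md5_state {
                u32 hash[MD5_HASH_WORDS];   /* 4-word chaining state */
                u32 block[MD5_BLOCK_WORDS]; /* 16-word partial block */
                u64 byte_count;             /* bytes hashed so far */
        };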

    Signed-off-by: Max Vozeler
    Signed-off-by: Herbert Xu

    Max Vozeler
     

07 Jan, 2010

1 commit


19 Oct, 2009

2 commits

  • 6941c3a0 disabled compilation of the legacy digest code but didn't
    actually remove it. Rectify this. Also, remove the crypto_hash_type
    extern declaration from algapi.h now that the struct is gone.

    Signed-off-by: Benjamin Gilbert
    Signed-off-by: Herbert Xu

    Benjamin Gilbert
     
  • PCLMULQDQ is used to accelerate the most time-consuming part of GHASH,
    carry-less multiplication. More information about PCLMULQDQ can be
    found at:

    http://software.intel.com/en-us/articles/carry-less-multiplication-and-its-usage-for-computing-the-gcm-mode/

    Because PCLMULQDQ changes XMM state, its use must be enclosed in
    kernel_fpu_begin/end, which can be called only in process context.
    The acceleration is therefore implemented as a crypto_ahash: requests
    issued from soft IRQ context are deferred to the cryptd kernel thread.
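
    The resulting pattern looks roughly like the sketch below;
    irq_fpu_usable(), kernel_fpu_begin() and kernel_fpu_end() are the
    real kernel interfaces, while defer_to_cryptd() merely stands in
    for the cryptd hand-off:

        static int ghash_update(struct ahash_request *req)
        {
                if (!irq_fpu_usable()) {
                        /* soft IRQ context: XMM state is off limits,
                         * so queue the request to the cryptd kernel
                         * thread, which runs in process context
                         */
                        return defer_to_cryptd(req); /* illustrative */
                }

                kernel_fpu_begin();
                /* PCLMULQDQ-based carry-less multiplication here */
                kernel_fpu_end();
                return 0;
        }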

    Signed-off-by: Huang Ying
    Signed-off-by: Herbert Xu

    Huang Ying
     

12 Sep, 2009

1 commit

  • * git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (102 commits)
    crypto: sha-s390 - Fix warnings in import function
    crypto: vmac - New hash algorithm for intel_txt support
    crypto: api - Do not displace newly registered algorithms
    crypto: ansi_cprng - Fix module initialization
    crypto: xcbc - Fix alignment calculation of xcbc_tfm_ctx
    crypto: fips - Depend on ansi_cprng
    crypto: blkcipher - Do not use eseqiv on stream ciphers
    crypto: ctr - Use chainiv on raw counter mode
    Revert crypto: fips - Select CPRNG
    crypto: rng - Fix typo
    crypto: talitos - add support for 36 bit addressing
    crypto: talitos - align locks on cache lines
    crypto: talitos - simplify hmac data size calculation
    crypto: mv_cesa - Add support for Orion5X crypto engine
    crypto: cryptd - Add support to access underlaying shash
    crypto: gcm - Use GHASH digest algorithm
    crypto: ghash - Add GHASH digest algorithm for GCM
    crypto: authenc - Convert to ahash
    crypto: api - Fix aligned ctx helper
    crypto: hmac - Prehash ipad/opad
    ...

    Linus Torvalds
     

02 Sep, 2009

1 commit


29 Aug, 2009

1 commit

  • As struct skcipher_givcrypt_request includes struct crypto_request
    at a non-zero offset, testing for NULL after converting the pointer
    returned by crypto_dequeue_request does not work. This can result
    in IPsec crashes when the queue is depleted.

    This patch fixes it by doing the pointer conversion only when the
    return value is non-NULL. In particular, we create a new function
    __crypto_dequeue_request that does the pointer conversion.
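
    The underlying pitfall is easy to reproduce in a standalone
    program: container_of on a NULL pointer whose member lives at a
    non-zero offset yields a non-NULL result (the struct names here
    are illustrative):

        #include <stddef.h>
        #include <stdio.h>

        #define container_of(ptr, type, member) \
                ((type *)((char *)(ptr) - offsetof(type, member)))

        struct givcrypt_like {
                long seq;               /* anything before the member */
                struct { int x; } creq; /* member at a non-zero offset */
        };

        int main(void)
        {
                void *from_queue = NULL; /* an empty queue returns NULL */
                struct givcrypt_like *r =
                        container_of(from_queue, struct givcrypt_like, creq);

                /* r == (struct givcrypt_like *)-offsetof(...), which is
                 * not NULL, so "if (!r)" after the conversion never fires
                 */
                printf("r = %p\n", (void *)r);
                return 0;
        }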

    Reported-by: Brad Bosch
    Signed-off-by: Herbert Xu

    Herbert Xu
     

06 Aug, 2009

1 commit


24 Jul, 2009

1 commit

    The aligned ctx helper was using a bogus alignment value that was
    off by one from the correct value. Fortunately the current users do
    not require anything beyond the natural alignment of the platform,
    so this hasn't caused a problem.

    This patch fixes that and also removes the unnecessary minimum
    check: if the requested alignment is less than the natural
    alignment, the subsequent ALIGN operation is a no-op anyway.
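
    The crypto API stores an alignmask, i.e. the alignment minus one,
    so the shape of the bug and of the fix is roughly as follows (a
    sketch, not the exact patched code):

        unsigned long addr = (unsigned long)crypto_tfm_ctx(tfm);

        /* before: the alignmask is one less than the real alignment,
         * so ALIGN() rounds to the wrong boundary
         */
        ctx = (void *)ALIGN(addr, crypto_tfm_alg_alignmask(tfm));

        /* after: convert mask -> alignment before rounding */
        ctx = (void *)ALIGN(addr, crypto_tfm_alg_alignmask(tfm) + 1);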

    Signed-off-by: Herbert Xu

    Herbert Xu
     

22 Jul, 2009

2 commits


15 Jul, 2009

2 commits

    When an shash algorithm is exported as ahash, ahash accesses
    its digest size through hash_alg_common. That's why the shash
    layout needs to match hash_alg_common. This wasn't the case
    because the alignments weren't identical.

    This patch fixes the problem.
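
    One way to state the invariant is as a compile-time check (an
    illustration only; the actual patch enforces it by giving the
    shared fields matching alignment attributes):

        /* digestsize must sit at the same distance from 'base' in both
         * views, or ahash reads garbage through hash_alg_common
         */
        BUILD_BUG_ON(offsetof(struct shash_alg, base) -
                     offsetof(struct shash_alg, digestsize) !=
                     offsetof(struct hash_alg_common, base) -
                     offsetof(struct hash_alg_common, digestsize));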

    Signed-off-by: Herbert Xu

    Herbert Xu
     
    This patch exports the finup operation where available and adds
    a default finup operation for ahash. The final, finup and digest
    operations will now also handle unaligned result pointers by
    copying the result. Finally, export/import operations will now
    be exported too.
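
    For the synchronous shash case the default amounts to the obvious
    composition (a sketch with an illustrative name; the real ahash
    fallback additionally has to chain the two async completions):

        static int shash_default_finup(struct shash_desc *desc,
                                       const u8 *data, unsigned int len,
                                       u8 *out)
        {
                /* finup == update over the trailing data, then final */
                return crypto_shash_update(desc, data, len) ?:
                       crypto_shash_final(desc, out);
        }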

    Signed-off-by: Herbert Xu

    Herbert Xu
     

14 Jul, 2009

10 commits


12 Jul, 2009

1 commit


11 Jul, 2009

3 commits

  • This patch adds export/import support to sha256_generic. The exported
    type is defined by struct sha256_state, which is basically the entire
    descriptor state of sha256_generic.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch adds export/import support to sha1_generic. The exported
    type is defined by struct sha1_state, which is basically the entire
    descriptor state of sha1_generic.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch replaces the full descriptor export with an export of
    the partial hash state. This allows the use of a consistent export
    format across all implementations of a given algorithm.

    This is useful because a number of cases require the use of the
    partial hash state, e.g., PadLock can use the SHA1 hash state
    to get around the fact that it can only hash contiguous data
    chunks.
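
    Typical use of the partial state, sketched for sha1 (desc and
    desc2 are initialized shash descriptors for the same algorithm;
    error handling elided):

        struct sha1_state state;

        crypto_shash_init(desc);
        crypto_shash_update(desc, buf, len);    /* first chunk */
        crypto_shash_export(desc, &state);      /* save partial state */

        /* ... later, possibly on a different implementation of the
         * same algorithm (e.g. PadLock):
         */
        crypto_shash_import(desc2, &state);
        crypto_shash_update(desc2, buf2, len2); /* remaining data */
        crypto_shash_final(desc2, digest);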

    Signed-off-by: Herbert Xu

    Herbert Xu
     

09 Jul, 2009

1 commit


08 Jul, 2009

8 commits


07 Jul, 2009

2 commits

  • This patch adds a new argument to crypto_alloc_instance which
    sets aside some space before the instance for use by algorithms
    such as shash that place type-specific data before crypto_alg.

    For compatibility the function has been renamed so that existing
    users aren't affected.
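
    The resulting allocation, roughly (a sketch of the idea rather
    than the exact implementation):

        /* one allocation, with 'head' bytes of type-specific data
         * placed in front of the generic instance
         */
        char *p = kzalloc(head + sizeof(struct crypto_instance) +
                          sizeof(struct crypto_spawn), GFP_KERNEL);
        struct crypto_instance *inst = (void *)(p + head);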

    Signed-off-by: Herbert Xu

    Herbert Xu
     
    This patch introduces the template->create function, intended
    to replace the existing alloc function. Whereas the caller of
    alloc has to handle registration itself, create handles the
    registration directly.

    This allows type-specific code to be run prior to registration.
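
    In terms of the template ops, the shift looks roughly like this
    (signatures as in crypto/algapi.h; the comments summarize the
    difference):

        struct crypto_template {
                ...
                /* old: returns an unregistered instance; the caller
                 * must register it
                 */
                struct crypto_instance *(*alloc)(struct rtattr **tb);

                /* new: builds and registers the instance itself, so
                 * type-specific setup can run before registration
                 */
                int (*create)(struct crypto_template *tmpl,
                              struct rtattr **tb);
                ...
        };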

    Signed-off-by: Herbert Xu

    Herbert Xu
     

04 Mar, 2009

1 commit