11 Oct, 2008

1 commit

  • * git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
    crypto: skcipher - Use RNG interface instead of get_random_bytes
    crypto: rng - RNG interface and implementation
    crypto: api - Add fips_enable flag
    crypto: skcipher - Move IV generators into their own modules
    crypto: cryptomgr - Test ciphers using ECB
    crypto: api - Use test infrastructure
    crypto: cryptomgr - Add test infrastructure
    crypto: tcrypt - Add alg_test interface
    crypto: tcrypt - Abort and only log if there is an error
    crypto: crc32c - Use Intel CRC32 instruction
    crypto: tcrypt - Avoid using contiguous pages
    crypto: api - Display larval objects properly
    crypto: api - Export crypto_alg_lookup instead of __crypto_alg_lookup
    crypto: Kconfig - Replace leading spaces with tabs

    Linus Torvalds
     

05 Sep, 2008

1 commit

  • We should clear the next pointer of the TX only if we are sure that the
    next TX (say NXT) will be submitted to the channel too. Otherwise,
    we break the chain of descriptors, because we lose the information
    about the next descriptor to run. So the next time
    async_tx_run_dependencies() is invoked with TX, its TX->next will be
    NULL, and NXT will never be submitted.

    Cc: [2.6.26]
    Signed-off-by: Yuri Tikhonov
    Signed-off-by: Ilya Yanok
    Signed-off-by: Dan Williams

    Yuri Tikhonov
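    A simplified sketch of the dependency walk described above (illustrative
    only, not the actual async_tx_run_dependencies()): the chain is followed
    through tx->next, so clearing ->next before NXT has really been handed to
    a channel ends the walk early and NXT is never run.

    #include <linux/dmaengine.h>

    static void run_dependencies_sketch(struct dma_async_tx_descriptor *tx)
    {
            struct dma_async_tx_descriptor *dep = tx->next;

            while (dep) {
                    /* remember the rest of the chain before submitting,
                     * since the driver may recycle the descriptor */
                    struct dma_async_tx_descriptor *next = dep->next;

                    dep->tx_submit(dep);    /* hand the dependent descriptor
                                             * to its channel */
                    dep = next;
            }
    }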
     

22 Aug, 2008

1 commit

  • Authenc works in two stages for encryption: it first encrypts and
    then computes an ICV. The context memory of the request is used
    by both operations. The problem is that when an asynchronous
    encryption completes, we will compute the ICV and then reread the
    context memory of the encryption to get the original request.

    It just happens that we have a buffer of 16 bytes in front of the
    request pointer, so ICVs of 16 bytes (such as SHA1) do not trigger
    the bug. However, any attempt to use a larger ICV instantly kills
    the machine when the first asynchronous encryption is completed.

    This patch fixes this by saving the request pointer before we start
    the ICV computation.

    Signed-off-by: Herbert Xu

    Herbert Xu
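    A minimal sketch of the pattern behind the fix (function names and the
    aead_request_complete() helper are used as assumptions here, not the
    actual crypto/authenc.c code): the completion callback has to read the
    original request pointer out of the context before the ICV computation
    reuses that memory.

    #include <crypto/internal/aead.h>

    static int compute_icv(struct aead_request *areq);  /* hypothetical ICV step */

    static void authenc_encrypt_done_sketch(struct crypto_async_request *req,
                                            int err)
    {
            /* save the request pointer first: computing the ICV below
             * clobbers the context memory that req->data lives in */
            struct aead_request *areq = req->data;

            if (!err)
                    err = compute_icv(areq);

            aead_request_complete(areq, err);
    }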
     

13 Aug, 2008

2 commits

  • The changeset ca786dc738f4f583b57b1bba7a335b5e8233f4b0

    crypto: hash - Fixed digest size check

    missed one spot for the digest type. This patch corrects that
    error.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • My changeset 4b22f0ddb6564210c9ded7ba25b2a1007733e784

    crypto: tcrpyt - Remove unnecessary kmap/kunmap calls

    introduced a typo that broke AEAD chunk testing. In particular,
    axbuf should really be xbuf.

    There is also an issue with testing the last segment when encrypting.
    The additional part produced by AEAD wasn't tested. Similarly, on
    decryption the additional part of the AEAD input is mistaken for
    corruption.

    Signed-off-by: Herbert Xu

    Herbert Xu
     

24 Jul, 2008

1 commit

  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx: (24 commits)
    I/OAT: I/OAT version 3.0 support
    I/OAT: tcp_dma_copybreak default value dependent on I/OAT version
    I/OAT: Add watchdog/reset functionality to ioatdma
    iop_adma: cleanup iop_chan_xor_slot_count
    iop_adma: document how to calculate the minimum descriptor pool size
    iop_adma: directly reclaim descriptors on allocation failure
    async_tx: make async_tx_test_ack a boolean routine
    async_tx: remove depend_tx from async_tx_sync_epilog
    async_tx: export async_tx_quiesce
    async_tx: fix handling of the "out of descriptor" condition in async_xor
    async_tx: ensure the xor destination buffer remains dma-mapped
    async_tx: list_for_each_entry_rcu() cleanup
    dmaengine: Driver for the Synopsys DesignWare DMA controller
    dmaengine: Add slave DMA interface
    dmaengine: add DMA_COMPL_SKIP_{SRC,DEST}_UNMAP flags to control dma unmap
    dmaengine: Add dma_client parameter to device_alloc_chan_resources
    dmatest: Simple DMA memcpy test client
    dmaengine: DMA engine driver for Marvell XOR engine
    iop-adma: fix platform driver hotplug/coldplug
    dmaengine: track the number of clients using a channel
    ...

    Fixed up conflict in drivers/dca/dca-sysfs.c manually

    Linus Torvalds
     

18 Jul, 2008

5 commits

  • All callers of async_tx_sync_epilog have called async_tx_quiesce on the
    depend_tx, so async_tx_sync_epilog need only call the callback to
    complete the operation.

    Signed-off-by: Dan Williams

    Dan Williams
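    With the dependency handling hoisted into the callers, the epilog reduces
    to little more than running the completion callback. A sketch of the
    simplified helper (parameter names assumed):

    #include <linux/dmaengine.h>

    static inline void async_tx_sync_epilog(dma_async_tx_callback cb_fn,
                                            void *cb_param)
    {
            /* the caller has already quiesced depend_tx, so nothing is
             * left to wait for; just signal completion */
            if (cb_fn)
                    cb_fn(cb_param);
    }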
     
  • Replace open coded "wait and acknowledge" instances with async_tx_quiesce.

    Signed-off-by: Dan Williams

    Dan Williams
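    A sketch of what such a "wait and acknowledge" helper looks like (close
    to, but not guaranteed to match, the exported async_tx_quiesce()):

    #include <linux/async_tx.h>

    void async_tx_quiesce(struct dma_async_tx_descriptor **tx)
    {
            if (*tx) {
                    /* an already-acked descriptor may have been recycled,
                     * so it must still be pending at this point */
                    BUG_ON(async_tx_test_ack(*tx));
                    if (dma_wait_for_async_tx(*tx) == DMA_ERROR)
                            panic("DMA_ERROR waiting for transaction\n");
                    async_tx_ack(*tx);
                    *tx = NULL;
            }
    }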
     
  • Ensure forward progress is made when a dmaengine driver is unable to
    allocate an xor descriptor by breaking the dependency chain with
    async_tx_quiesce() and issuing any pending descriptors.

    Tested with iop-adma by setting device->max_xor = 2 to force multiple
    calls to device_prep_dma_xor for each call to async_xor and limiting the
    descriptor slot pool to 5. Discovered that the minimum descriptor pool
    size for iop-adma is 2 * iop_chan_xor_slot_cnt(device->max_xor) + 1.

    Signed-off-by: Dan Williams

    Dan Williams
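    Roughly, the retry path can be pictured like this (a simplified sketch,
    not the exact async_xor code; the helper name is made up): once the
    dependency has been quiesced it is safe to flush the channel and spin
    until a descriptor becomes available.

    #include <linux/async_tx.h>
    #include <linux/dmaengine.h>

    static struct dma_async_tx_descriptor *
    xor_prep_with_retry(struct dma_device *dma, struct dma_chan *chan,
                        dma_addr_t dma_dest, dma_addr_t *dma_src,
                        unsigned int src_cnt, size_t len, unsigned long flags,
                        struct dma_async_tx_descriptor *depend_tx)
    {
            struct dma_async_tx_descriptor *tx;

            tx = dma->device_prep_dma_xor(chan, dma_dest, dma_src,
                                          src_cnt, len, flags);
            if (!tx) {
                    /* out of descriptors: break the dependency chain so
                     * already-submitted work can complete and free slots */
                    async_tx_quiesce(&depend_tx);
                    while (!tx) {
                            dma_async_issue_pending(chan);
                            tx = dma->device_prep_dma_xor(chan, dma_dest,
                                                          dma_src, src_cnt,
                                                          len, flags);
                    }
            }
            return tx;
    }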
     
  • When the number of source buffers for an xor operation exceeds the hardware
    channel maximum, async_xor creates a chain of dependent operations. The result
    of one operation is reused as an input to the next to continue the xor
    calculation. The destination buffer should remain mapped for the duration of
    the entire chain. To provide this guarantee the code must no longer be allowed
    to fall back to the synchronous path, as this will preclude the buffer from being
    unmapped, i.e. the dma-driver will potentially miss the descriptor with
    !DMA_COMPL_SKIP_DEST_UNMAP.

    Cc: Neil Brown
    Signed-off-by: Dan Williams

    Dan Williams
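    One way to picture the rule (a hedged sketch, not the actual async_xor
    code): every link of the chain except the last is prepared with
    DMA_COMPL_SKIP_DEST_UNMAP so the driver leaves the destination mapped,
    and only completion of the final descriptor unmaps it; a synchronous
    fallback in the middle of the chain would leave no descriptor to trigger
    that unmap.

    #include <linux/dmaengine.h>

    static struct dma_async_tx_descriptor *
    prep_xor_link(struct dma_device *dma, struct dma_chan *chan,
                  dma_addr_t dma_dest, dma_addr_t *dma_src,
                  unsigned int src_cnt, size_t len, bool last_link)
    {
            /* intermediate links must not let the driver unmap the
             * destination; the final link performs the unmap on completion */
            unsigned long flags = last_link ? 0 : DMA_COMPL_SKIP_DEST_UNMAP;

            return dma->device_prep_dma_xor(chan, dma_dest, dma_src,
                                            src_cnt, len, flags);
    }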
     
  • In the rcu update side, don't use list_for_each_entry_rcu().

    Signed-off-by: Li Zefan
    Signed-off-by: Dan Williams

    Li Zefan
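    For context, a hedged illustration of the rule (struct, list, and lock
    names below are made up): the _rcu iterator is for lockless readers
    inside rcu_read_lock(); the update side is already serialized by its
    lock, so the plain iterator is the right one there.

    #include <linux/list.h>
    #include <linux/mutex.h>
    #include <linux/rcupdate.h>

    struct item {                           /* illustrative element type */
            int val;
            struct list_head node;
    };

    static LIST_HEAD(my_list);              /* hypothetical RCU-protected list */
    static DEFINE_MUTEX(my_lock);           /* serializes updaters */

    static int sum_readside(void)           /* reader: needs the _rcu iterator */
    {
            struct item *pos;
            int sum = 0;

            rcu_read_lock();
            list_for_each_entry_rcu(pos, &my_list, node)
                    sum += pos->val;
            rcu_read_unlock();
            return sum;
    }

    static void bump_updateside(void)       /* updater: plain iterator under the lock */
    {
            struct item *pos;

            mutex_lock(&my_lock);
            list_for_each_entry(pos, &my_list, node)
                    pos->val++;
            mutex_unlock(&my_lock);
    }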
     

10 Jul, 2008

7 commits

  • All new crypto interfaces should go into individual files as much
    as possible in order to ensure that crypto.h does not collapse under
    its own weight.

    This patch moves the ahash code into crypto/hash.h and crypto/internal/hash.h
    respectively.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch reimplements crc32c using the ahash interface. This
    allows one tfm to be used by an unlimited number of users provided
    that they all use the same key (which all current crc32c users do).

    Signed-off-by: Herbert Xu

    Herbert Xu
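    A hedged sketch of driving a crc32c ahash tfm (the helper name and error
    handling are illustrative): the tfm could be allocated once with
    crypto_alloc_ahash("crc32c", 0, 0) and shared, since the default seed is
    the only keyed state.

    #include <crypto/hash.h>
    #include <linux/scatterlist.h>

    /* buf must be in linearly-mapped kernel memory for sg_init_one() */
    static int crc32c_digest_sketch(struct crypto_ahash *tfm, const void *buf,
                                    unsigned int len, u8 *out)
    {
            struct ahash_request *req;
            struct scatterlist sg;
            int err;

            req = ahash_request_alloc(tfm, GFP_KERNEL);
            if (!req)
                    return -ENOMEM;

            sg_init_one(&sg, buf, len);
            ahash_request_set_callback(req, 0, NULL, NULL);
            ahash_request_set_crypt(req, &sg, out, len);

            err = crypto_ahash_digest(req);
            ahash_request_free(req);
            return err;
    }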
     
  • This patch adds the walking helpers for hash algorithms akin to
    those of block ciphers. This is a necessary step before we can
    reimplement existing hash algorithms using the new ahash interface.

    Signed-off-by: Herbert Xu

    Herbert Xu
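    A hedged sketch of the walker pattern an ahash ->update() implementation
    can use with these helpers (the per-block routine is hypothetical):

    #include <crypto/internal/hash.h>

    static void my_process(const void *data, unsigned int len);  /* hypothetical core */

    static int my_ahash_update(struct ahash_request *req)
    {
            struct crypto_hash_walk walk;
            int nbytes;

            /* walk the request's scatterlist one mapped chunk at a time */
            for (nbytes = crypto_hash_walk_first(req, &walk); nbytes > 0;
                 nbytes = crypto_hash_walk_done(&walk, 0))
                    my_process(walk.data, nbytes);

            return nbytes;  /* 0 on success, or an error from the walker */
    }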
     
  • This patch adds a cryptographic pseudo-random number generator
    based on CTR(AES-128). It is meant to be used in cases where a
    deterministic CPRNG is required.

    One of the first applications will be as an input in the IPsec IV
    generation process.

    Signed-off-by: Neil Horman
    Signed-off-by: Herbert Xu

    Neil Horman
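    A hedged sketch of consuming such a generator through the new rng
    interface (the "stdrng" algorithm name and this helper are assumptions
    for the example):

    #include <crypto/rng.h>
    #include <linux/err.h>

    static int fill_iv_material(u8 *buf, unsigned int len)
    {
            struct crypto_rng *rng;
            int err;

            rng = crypto_alloc_rng("stdrng", 0, 0);
            if (IS_ERR(rng))
                    return PTR_ERR(rng);

            err = crypto_rng_get_bytes(rng, buf, len);
            crypto_free_rng(rng);
            return err;
    }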
     
  • The base field in ahash_tfm appears to have been cut-n-pasted from
    ablkcipher. It isn't needed here at all. Similarly, the info field
    in ahash_request also appears to have originated from its cipher
    counterpart and is vestigial.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • The digest size check on hash algorithms is incorrect. It's
    perfectly valid for hash algorithms to have a digest length
    longer than their block size. For example crc32c has a block
    size of 1 and a digest size of 4. Rather than having it lie
    about its block size, this patch fixes the checks to do what
    they really should, which is to bound the digest size so that
    code placing the digest on the stack continues to work.

    HMAC, however, still needs to check this, as it is only defined
    for such algorithms.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • Similar to the rmd128.c annotations, this significantly cuts down on the
    noise.

    Signed-off-by: Harvey Harrison
    Signed-off-by: Herbert Xu

    Harvey Harrison