25 Dec, 2008

11 commits

  • This patch changes crc32c to the new shash interface.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch allows shash algorithms to be used through the old hash
    interface. This is a transitional measure so we can convert the
    underlying algorithms to shash before converting the users across.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch makes /proc/crypto call the type-specific show function
    if one is present before calling the legacy show functions for
    cipher/digest/compress. This allows us to reuse the type values
    for those legacy types. In particular, hash and digest will share
    one type value while shash is phased in as the default hash type.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • It is often useful to save the partial state of a hash function
    so that it can be used as a base for two or more computations.

    The most prominent example is HMAC where all hashes start from
    a base determined by the key. Having an import/export interface
    means that we only have to compute that base once rather than
    for each message.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch allows shash algorithms to be used through the ahash
    interface. This is required before we can convert digest algorithms
    over to shash.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • The shash interface replaces the current synchronous hash interface.
    It improves over hash in two ways. Firstly, shash is reentrant:
    the same tfm may be used by two threads simultaneously, as all
    hashing state is stored in a local descriptor.

    The other enhancement is that shash no longer takes scatter list
    entries. This is because shash is specifically designed for
    synchronous algorithms and as such scatter lists are unnecessary.

    All existing hash users will be converted to shash once the
    algorithms have been completely converted.

    There is also a new finup function that combines update with final.
    This will be extended to ahash once the algorithm conversion is
    done.

    This is also the first time that an algorithm type has its own
    registration function. Existing algorithm types will be converted
    to this way in due course.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch reintroduces a completely revamped crypto_alloc_tfm.
    The biggest change is that we now take two crypto_type objects
    when allocating a tfm, a frontend and a backend. In fact this
    simply formalises what we've been doing behind the API's back.

    For example, as it stands crypto_alloc_ahash may use an
    actual ahash algorithm or a crypto_hash algorithm. Putting
    this in the API allows us to do this much more cleanly.

    The existing types will be converted across gradually.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • The type exit function needs to undo any allocations done by the type
    init function. However, the type init function may differ depending
    on the upper-level type of the transform (e.g., a crypto_blkcipher
    instantiated as a crypto_ablkcipher).

    So we need to move the exit function out of the lower-level
    structure and into crypto_tfm itself.

    As it stands this is a no-op since nobody uses exit functions at
    all. However, all cases where a lower-level type is instantiated
    as a different upper-level type (such as blkcipher as ablkcipher)
    will be converted such that they allocate the underlying transform
    and use that instead of casting (e.g., crypto_ablkcipher casted
    into crypto_blkcipher). That will need to use a different exit
    function depending on the upper-level type.

    This patch also allows the type init/exit functions to call (or not)
    cra_init/cra_exit instead of always calling them from the top level.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
    This is a patch that was sent to me by Jarod Wilson, marking off my
    outstanding todo to allow the ansi cprng to set/reset the DT counter
    value in a cprng instance. Currently crypto_rng_reset accepts a seed
    byte array which is interpreted by the ansi_cprng as a {V key} tuple.
    This patch extends that tuple to {V key DT}, with DT an optional
    value during reset. This patch also fixes a bug we noticed in which
    the offset of the key area of the seed started at DEFAULT_PRNG_KSZ
    rather than DEFAULT_BLK_SZ as it should have.

    Signed-off-by: Neil Horman
    Signed-off-by: Jarod Wilson
    Signed-off-by: Herbert Xu

    Neil Horman
     
  • Remove the private implementation of 32-bit rotation and unaligned
    access with byteswapping.

    As a bonus, fixes sparse warnings:
    crypto/camellia.c:602:2: warning: cast to restricted __be32
    crypto/camellia.c:603:2: warning: cast to restricted __be32
    crypto/camellia.c:604:2: warning: cast to restricted __be32
    crypto/camellia.c:605:2: warning: cast to restricted __be32
    crypto/camellia.c:710:2: warning: cast to restricted __be32
    crypto/camellia.c:711:2: warning: cast to restricted __be32
    crypto/camellia.c:712:2: warning: cast to restricted __be32
    crypto/camellia.c:713:2: warning: cast to restricted __be32
    crypto/camellia.c:714:2: warning: cast to restricted __be32
    crypto/camellia.c:715:2: warning: cast to restricted __be32
    crypto/camellia.c:716:2: warning: cast to restricted __be32
    crypto/camellia.c:717:2: warning: cast to restricted __be32

    [Thanks to Tomoyuki Okazaki for spotting the typo]
    Tested-by: Carlo E. Prelz
    Signed-off-by: Harvey Harrison
    Signed-off-by: Herbert Xu

    Harvey Harrison
     
    The FIPS specification requires that, should a self test for any
    supported crypto algorithm fail during operation in FIPS mode, we
    prevent the use of any crypto functionality until the system can be
    re-initialized. The best way to handle that seems to be to panic the
    system if we are in fips mode and fail a self test. This patch
    implements that functionality. I've built and run it successfully.

    Signed-off-by: Neil Horman
    Signed-off-by: Herbert Xu

    Neil Horman
     

19 Dec, 2008

1 commit


10 Dec, 2008

1 commit

    If we have at least one algorithm built-in then it no longer makes
    sense for the testing framework, and hence cryptomgr, to be a
    module. It should be either on or off, i.e., built-in or disabled.

    This just happens to stop a potential runaway modprobe loop that
    seems to trigger on at least one distro.

    With fixes from Evgeniy Polyakov.

    Signed-off-by: Herbert Xu

    Herbert Xu
     

09 Dec, 2008

1 commit

  • Mapping the destination multiple times is a misuse of the dma-api.
    Since the destination may be reused as a source, ensure that it is only
    mapped once and that it is mapped bidirectionally. This appears to add
    ugliness on the unmap side in that it always reads back the destination
    address from the descriptor, but gcc can determine that dma_unmap is a
    nop and not emit the code that calculates its arguments.

    Cc:
    Cc: Saeed Bishara
    Acked-by: Yuri Tikhonov
    Signed-off-by: Dan Williams

    Dan Williams
     

21 Oct, 2008

1 commit


11 Oct, 2008

1 commit

  • * git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
    crypto: skcipher - Use RNG interface instead of get_random_bytes
    crypto: rng - RNG interface and implementation
    crypto: api - Add fips_enable flag
    crypto: skcipher - Move IV generators into their own modules
    crypto: cryptomgr - Test ciphers using ECB
    crypto: api - Use test infrastructure
    crypto: cryptomgr - Add test infrastructure
    crypto: tcrypt - Add alg_test interface
    crypto: tcrypt - Abort and only log if there is an error
    crypto: crc32c - Use Intel CRC32 instruction
    crypto: tcrypt - Avoid using contiguous pages
    crypto: api - Display larval objects properly
    crypto: api - Export crypto_alg_lookup instead of __crypto_alg_lookup
    crypto: Kconfig - Replace leading spaces with tabs

    Linus Torvalds
     

14 Sep, 2008

1 commit


08 Sep, 2008

1 commit


05 Sep, 2008

1 commit

    We should clear the next pointer of a TX only if we are sure that
    the next TX (say NXT) will be submitted to the channel too.
    Otherwise, we break the chain of descriptors, because we lose the
    information about the next descriptor to run. So the next time
    async_tx_run_dependencies() is invoked with TX, TX->next will be
    NULL and NXT will never be submitted.

    Cc: [2.6.26]
    Signed-off-by: Yuri Tikhonov
    Signed-off-by: Ilya Yanok
    Signed-off-by: Dan Williams

    Yuri Tikhonov
     

29 Aug, 2008

14 commits


22 Aug, 2008

1 commit

  • Authenc works in two stages for encryption, it first encrypts and
    then computes an ICV. The context memory of the request is used
    by both operations. The problem is that when an asynchronous
    encryption completes, we will compute the ICV and then reread the
    context memory of the encryption to get the original request.

    It just happens that we have a buffer of 16 bytes in front of the
    request pointer, so ICVs of 16 bytes (such as SHA1) do not trigger
    the bug. However, any attempt to use a larger ICV instantly kills
    the machine when the first asynchronous encryption is completed.

    This patch fixes this by saving the request pointer before we start
    the ICV computation.

    Signed-off-by: Herbert Xu

    Herbert Xu
     

13 Aug, 2008

2 commits

  • The changeset ca786dc738f4f583b57b1bba7a335b5e8233f4b0

    crypto: hash - Fixed digest size check

    missed one spot for the digest type. This patch corrects that
    error.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • My changeset 4b22f0ddb6564210c9ded7ba25b2a1007733e784

    crypto: tcrpyt - Remove unnecessary kmap/kunmap calls

    introduced a typo that broke AEAD chunk testing. In particular,
    axbuf should really be xbuf.

    There is also an issue with testing the last segment when encrypting.
    The additional part produced by AEAD wasn't tested. Similarly, on
    decryption the additional part of the AEAD input is mistaken for
    corruption.

    Signed-off-by: Herbert Xu

    Herbert Xu
     

06 Aug, 2008

1 commit


24 Jul, 2008

1 commit

  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx: (24 commits)
    I/OAT: I/OAT version 3.0 support
    I/OAT: tcp_dma_copybreak default value dependent on I/OAT version
    I/OAT: Add watchdog/reset functionality to ioatdma
    iop_adma: cleanup iop_chan_xor_slot_count
    iop_adma: document how to calculate the minimum descriptor pool size
    iop_adma: directly reclaim descriptors on allocation failure
    async_tx: make async_tx_test_ack a boolean routine
    async_tx: remove depend_tx from async_tx_sync_epilog
    async_tx: export async_tx_quiesce
    async_tx: fix handling of the "out of descriptor" condition in async_xor
    async_tx: ensure the xor destination buffer remains dma-mapped
    async_tx: list_for_each_entry_rcu() cleanup
    dmaengine: Driver for the Synopsys DesignWare DMA controller
    dmaengine: Add slave DMA interface
    dmaengine: add DMA_COMPL_SKIP_{SRC,DEST}_UNMAP flags to control dma unmap
    dmaengine: Add dma_client parameter to device_alloc_chan_resources
    dmatest: Simple DMA memcpy test client
    dmaengine: DMA engine driver for Marvell XOR engine
    iop-adma: fix platform driver hotplug/coldplug
    dmaengine: track the number of clients using a channel
    ...

    Fixed up conflict in drivers/dca/dca-sysfs.c manually

    Linus Torvalds
     

18 Jul, 2008

2 commits