17 Sep, 2009

1 commit

  • Testing on x86_64 with NDISKS=255 yields:

    do_IRQ: modprobe near stack overflow (cur:ffff88007d19c000,sp:ffff88007d19c128)

    ...and eventually

    general protection fault: 0000 [#1]

    Moving the scribble buffers off the stack allows the test to complete
    successfully.

    Signed-off-by: Dan Williams

    Dan Williams
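
    As a rough user-space sketch of the idea behind the fix (hypothetical
    names, not the actual raid5.c code): allocate the per-operation
    scribble/source list once up front instead of declaring it on the
    stack of a hot path.

        #include <stdlib.h>

        #define NDISKS 255      /* the configuration that overflowed the stack */

        struct scribble {
            void *srcs[NDISKS]; /* source pointers handed to the async API */
        };

        /* allocated once at setup time rather than per call */
        static struct scribble *alloc_scribble(void)
        {
            return malloc(sizeof(struct scribble));
        }

        static void do_one_stripe(struct scribble *s)
        {
            /* before the fix, the equivalent of s->srcs lived on the stack:
             * void *srcs[NDISKS]; -- roughly 2KB with NDISKS=255, on top of
             * interrupt usage of the same stack */
            (void)s;
        }

        int main(void)
        {
            struct scribble *s = alloc_scribble();

            if (!s)
                return 1;
            do_one_stripe(s);
            free(s);
            return 0;
        }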
     

12 Sep, 2009

1 commit

  • * git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (102 commits)
    crypto: sha-s390 - Fix warnings in import function
    crypto: vmac - New hash algorithm for intel_txt support
    crypto: api - Do not displace newly registered algorithms
    crypto: ansi_cprng - Fix module initialization
    crypto: xcbc - Fix alignment calculation of xcbc_tfm_ctx
    crypto: fips - Depend on ansi_cprng
    crypto: blkcipher - Do not use eseqiv on stream ciphers
    crypto: ctr - Use chainiv on raw counter mode
    Revert crypto: fips - Select CPRNG
    crypto: rng - Fix typo
    crypto: talitos - add support for 36 bit addressing
    crypto: talitos - align locks on cache lines
    crypto: talitos - simplify hmac data size calculation
    crypto: mv_cesa - Add support for Orion5X crypto engine
    crypto: cryptd - Add support to access underlaying shash
    crypto: gcm - Use GHASH digest algorithm
    crypto: ghash - Add GHASH digest algorithm for GCM
    crypto: authenc - Convert to ahash
    crypto: api - Fix aligned ctx helper
    crypto: hmac - Prehash ipad/opad
    ...

    Linus Torvalds
     

09 Sep, 2009

6 commits

  • Conflicts:
    crypto/async_tx/async_xor.c
    drivers/dma/ioat/dma_v2.h
    drivers/dma/ioat/pci.c
    drivers/md/raid5.c

    Dan Williams
     
  • Some engines have transfer size and address alignment restrictions. Add
    a per-operation alignment property to struct dma_device that the async
    routines and dmatest can use to check alignment capabilities.

    Signed-off-by: Dan Williams

    Dan Williams
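
    A minimal sketch of the kind of check such a property enables,
    assuming the alignment is expressed as a power-of-two shift
    (illustrative names only, not the dmaengine helpers themselves):

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        /* true if source, destination and length all satisfy a
         * 2^align_shift byte alignment requirement */
        static bool op_is_aligned(uint64_t src, uint64_t dst, size_t len,
                                  unsigned int align_shift)
        {
            uint64_t mask = ((uint64_t)1 << align_shift) - 1;

            return ((src | dst | (uint64_t)len) & mask) == 0;
        }

        int main(void)
        {
            printf("%d\n", op_is_aligned(0x1000, 0x2000, 512, 2)); /* 1: 4-byte aligned */
            printf("%d\n", op_is_aligned(0x1002, 0x2000, 512, 2)); /* 0: misaligned source */
            return 0;
        }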
     
  • Channel switching is problematic for some dmaengine drivers as the
    architecture precludes separating the ->prep from ->submit. In these
    cases the driver can select ASYNC_TX_DISABLE_CHANNEL_SWITCH to modify
    the async_tx allocator to only return channels that support all of the
    required asynchronous operations.

    For example, MD_RAID456=y selects support for asynchronous xor, xor
    validate, pq, pq validate, and memcpy. When
    ASYNC_TX_DISABLE_CHANNEL_SWITCH=y any channel with all these
    capabilities is marked DMA_ASYNC_TX allowing async_tx_find_channel() to
    quickly locate compatible channels with the guarantee that dependency
    chains will remain on one channel. When
    ASYNC_TX_DISABLE_CHANNEL_SWITCH=n async_tx_find_channel() may select
    channels that lead to operation chains that need to cross channel
    boundaries using the async_tx channel switch capability.

    Signed-off-by: Dan Williams

    Dan Williams
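
    A minimal sketch, with made-up capability names, of the "all required
    operations on one channel" test this option enables:

        #include <stdbool.h>
        #include <stdio.h>

        /* illustrative capability bits, not the dmaengine enum itself */
        enum {
            CAP_MEMCPY  = 1 << 0,
            CAP_XOR     = 1 << 1,
            CAP_XOR_VAL = 1 << 2,
            CAP_PQ      = 1 << 3,
            CAP_PQ_VAL  = 1 << 4,
        };

        /* a channel is usable for a dependency chain only if it supports
         * every required operation, so the chain never needs to switch
         * channels */
        static bool chan_has_all_caps(unsigned int chan_caps, unsigned int required)
        {
            return (chan_caps & required) == required;
        }

        int main(void)
        {
            unsigned int required = CAP_MEMCPY | CAP_XOR | CAP_XOR_VAL |
                                    CAP_PQ | CAP_PQ_VAL;
            unsigned int chan_a = required;              /* full raid offload engine */
            unsigned int chan_b = CAP_MEMCPY | CAP_XOR;  /* copy/xor-only engine */

            printf("%d %d\n", chan_has_all_caps(chan_a, required),
                   chan_has_all_caps(chan_b, required)); /* prints: 1 0 */
            return 0;
        }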
     
  • Some engines optimize operation by reading ahead in the descriptor chain
    such that descriptor2 may start execution before descriptor1 completes.
    If descriptor2 depends on the result from descriptor1 then a fence is
    required (on descriptor2) to disable this optimization. The async_tx
    api could implicitly identify dependencies via the 'depend_tx'
    parameter, but that would constrain cases where the dependency chain
    only specifies a completion order rather than a data dependency. So,
    provide an ASYNC_TX_FENCE to explicitly identify data dependencies.

    Signed-off-by: Dan Williams

    Dan Williams
     
  • Conflicts:
    include/linux/dmaengine.h

    Dan Williams
     
  • Dan Williams
     

31 Aug, 2009

1 commit

  • We have a mechanism where newly registered algorithms of a higher
    priority can displace existing instances that use a different
    implementation of the same algorithm with a lower priority.

    Unfortunately the same mechanism can cause a newly registered
    algorithm to displace itself if it depends on an existing version
    of the same algorithm.

    This patch fixes this by keeping all algorithms that the newly
    registered algorithm depends on, thus protecting them from being
    removed.

    Signed-off-by: Herbert Xu

    Herbert Xu
     

30 Aug, 2009

6 commits

  • Port drivers/md/raid6test/test.c to use the async raid6 recovery
    routines. This is meant as a unit test for raid6 acceleration drivers. In
    addition to the 16-drive test case, this implements tests for the 4-disk and
    5-disk special cases (dma devices cannot generically handle fewer than 2
    sources), and adds a test for the D+Q case.

    Reviewed-by: Andre Noll
    Acked-by: Maciej Sosnowski
    Signed-off-by: Dan Williams

    Dan Williams
     
  • async_raid6_2data_recov() recovers two data disk failures

    async_raid6_datap_recov() recovers a data disk and the P disk

    These routines are a port of the synchronous versions found in
    drivers/md/raid6recov.c. The primary difference is breaking out the xor
    operations into separate calls to async_xor. Two helper routines are
    introduced to perform scalar multiplication where needed.
    async_sum_product() multiplies two sources by scalar coefficients and
    then sums (xor) the result. async_mult() simply multiplies a single
    source by a scalar.

    This implementation also includes, in contrast to the original
    synchronous-only code, special case handling for the 4-disk and 5-disk
    array cases. In these situations the default N-disk algorithm will
    present 0-source or 1-source operations to dma devices. To cover for
    dma devices where the minimum source count is 2 we implement 4-disk and
    5-disk handling in the recovery code.

    [ Impact: asynchronous raid6 recovery routines for 2data and datap cases ]

    Cc: Yuri Tikhonov
    Cc: Ilya Yanok
    Cc: H. Peter Anvin
    Cc: David Woodhouse
    Reviewed-by: Andre Noll
    Acked-by: Maciej Sosnowski
    Signed-off-by: Dan Williams

    Dan Williams
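
    As a byte-level model of what the two helpers compute (the real
    routines operate on whole pages through the async_tx API), here is a
    plain C sketch over the RAID6 field GF(2^8) with reduction polynomial
    0x11d; the function names are illustrative:

        #include <stdint.h>
        #include <stdio.h>

        /* multiply in GF(2^8) modulo the RAID6 polynomial 0x11d */
        static uint8_t gf_mul(uint8_t a, uint8_t b)
        {
            uint8_t p = 0;

            while (b) {
                if (b & 1)
                    p ^= a;
                b >>= 1;
                a = (a << 1) ^ ((a & 0x80) ? 0x1d : 0);
            }
            return p;
        }

        /* per-byte analogue of async_sum_product():
         * dest = (a * coef_a) ^ (b * coef_b) */
        static uint8_t sum_product(uint8_t a, uint8_t coef_a,
                                   uint8_t b, uint8_t coef_b)
        {
            return gf_mul(a, coef_a) ^ gf_mul(b, coef_b);
        }

        /* per-byte analogue of async_mult(): dest = a * coef */
        static uint8_t mult(uint8_t a, uint8_t coef)
        {
            return gf_mul(a, coef);
        }

        int main(void)
        {
            printf("%02x %02x\n", mult(0x80, 0x02),
                   sum_product(0x53, 0x02, 0xca, 0x03));
            return 0;
        }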
     
  • [ Based on an original patch by Yuri Tikhonov ]

    This adds support for doing asynchronous GF multiplication by adding
    two additional functions to the async_tx API:

    async_gen_syndrome() does simultaneous XOR and Galois field
    multiplication of sources.

    async_syndrome_val() validates the given source buffers against known P
    and Q values.

    When a request is made to run async_pq against more than the hardware
    maximum number of supported sources we need to reuse the previous
    generated P and Q values as sources into the next operation. Care must
    be taken to remove Q from P' and P from Q'. For example, to perform a
    5-source pq op with hardware that only supports 4 sources at a time, the
    following approach is taken:

    p, q = PQ(src0, src1, src2, src3, COEF({01}, {02}, {04}, {08}))
    p', q' = PQ(p, q, q, src4, COEF({00}, {01}, {00}, {10}))

    p' = p + q + q + src4 = p + src4
    q' = {00}*p + {01}*q + {00}*q + {10}*src4 = q + {10}*src4

    Note: 4 is the minimum acceptable maxpq; otherwise we punt to the
    synchronous-software path.

    The DMA_PREP_CONTINUE flag indicates to the driver to reuse p and q as
    sources (in the above manner) and fill the remaining slots up to maxpq
    with the new sources/coefficients.

    Note1: Some devices have native support for P+Q continuation and can skip
    this extra work. Devices with this capability can advertise it with
    dma_set_maxpq. It is up to each driver how to handle the
    DMA_PREP_CONTINUE flag.

    Note2: The api supports disabling the generation of P when generating Q,
    this is ignored by the synchronous path but is implemented by some dma
    devices to save unnecessary writes. In this case the continuation
    algorithm is simplified to only reuse Q as a source.

    Cc: H. Peter Anvin
    Cc: David Woodhouse
    Signed-off-by: Yuri Tikhonov
    Signed-off-by: Ilya Yanok
    Reviewed-by: Andre Noll
    Acked-by: Maciej Sosnowski
    Signed-off-by: Dan Williams

    Dan Williams
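
    A byte-level check of the worked example above, in plain C rather than
    the async_tx API (the real operations run over pages via hardware
    descriptors); gf_mul uses the RAID6 GF(2^8) polynomial 0x11d:

        #include <stdint.h>
        #include <stdio.h>

        /* multiply in GF(2^8) modulo the RAID6 polynomial 0x11d */
        static uint8_t gf_mul(uint8_t a, uint8_t b)
        {
            uint8_t p = 0;

            while (b) {
                if (b & 1)
                    p ^= a;
                b >>= 1;
                a = (a << 1) ^ ((a & 0x80) ? 0x1d : 0);
            }
            return p;
        }

        /* one P/Q pass over single bytes: P is the plain xor of the
         * sources, Q is the xor of coef[i] * src[i] */
        static void pq(const uint8_t *src, const uint8_t *coef, int n,
                       uint8_t *p, uint8_t *q)
        {
            int i;

            *p = 0;
            *q = 0;
            for (i = 0; i < n; i++) {
                *p ^= src[i];
                *q ^= gf_mul(coef[i], src[i]);
            }
        }

        int main(void)
        {
            uint8_t s[5] = { 0x11, 0x22, 0x33, 0x44, 0x55 };
            uint8_t c1[4] = { 0x01, 0x02, 0x04, 0x08 };
            uint8_t c2[4] = { 0x00, 0x01, 0x00, 0x10 };
            uint8_t cref[5] = { 0x01, 0x02, 0x04, 0x08, 0x10 };
            uint8_t p, q, p2, q2, pref, qref;

            /* first pass: the four sources the hardware takes at once */
            pq(s, c1, 4, &p, &q);

            /* continuation pass: PQ(p, q, q, src4, COEF({00},{01},{00},{10})) */
            {
                uint8_t cont[4] = { p, q, q, s[4] };

                pq(cont, c2, 4, &p2, &q2);
            }

            /* reference: a single 5-source pass */
            pq(s, cref, 5, &pref, &qref);

            printf("p' matches: %d, q' matches: %d\n", p2 == pref, q2 == qref);
            return 0;
        }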
     
  • We currently walk the parent chain when waiting for a given tx to
    complete however this walk may race with the driver cleanup routine.
    The routines in async_raid6_recov.c may fall back to the synchronous
    path at any point so we need to be prepared to call async_tx_quiesce()
    (which calls dma_wait_for_async_tx). To remove the ->parent walk we
    guarantee that every time a dependency is attached ->issue_pending() is
    invoked, then we can simply poll the initial descriptor until
    completion.

    This also allows for a lighter weight 'issue pending' implementation as
    there is no longer a requirement to iterate through all the channels'
    ->issue_pending() routines as long as operations have been submitted in
    an ordered chain. async_tx_issue_pending() is added for this case.

    Signed-off-by: Dan Williams

    Dan Williams
     
  • If module_init and module_exit are nops then neither needs to be defined.

    [ Impact: pure cleanup ]

    Reviewed-by: Andre Noll
    Acked-by: Maciej Sosnowski
    Signed-off-by: Dan Williams

    Dan Williams
     
  • Replace the flat zero_sum_result with a collection of flags to contain
    the P (xor) zero-sum result, and the soon-to-be-utilized Q (raid6
    Reed-Solomon syndrome) zero-sum result. Use the SUM_CHECK_ namespace instead
    of DMA_ since these flags will be used on non-dma-zero-sum enabled
    platforms.

    Reviewed-by: Andre Noll
    Acked-by: Maciej Sosnowski
    Signed-off-by: Dan Williams

    Dan Williams
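
    A small sketch of the flag-based result in the SUM_CHECK_ namespace
    (illustrative definitions; a set bit means that zero-sum check failed):

        #include <stdio.h>

        enum sum_check_bits {
            SUM_CHECK_P = 0,    /* xor / P parity check */
            SUM_CHECK_Q = 1,    /* raid6 Reed-Solomon / Q syndrome check */
        };

        enum sum_check_flags {
            SUM_CHECK_P_RESULT = 1 << SUM_CHECK_P,
            SUM_CHECK_Q_RESULT = 1 << SUM_CHECK_Q,
        };

        int main(void)
        {
            unsigned int result = 0;

            result |= SUM_CHECK_Q_RESULT;   /* e.g. Q mismatched while P was clean */

            printf("P failed: %d, Q failed: %d\n",
                   !!(result & SUM_CHECK_P_RESULT),
                   !!(result & SUM_CHECK_Q_RESULT));
            return 0;
        }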
     

29 Aug, 2009

2 commits

  • As struct skcipher_givcrypt_request includes struct crypto_request
    at a non-zero offset, testing for NULL after converting the pointer
    returned by crypto_dequeue_request does not work. This can result
    in IPsec crashes when the queue is depleted.

    This patch fixes it by doing the pointer conversion only when the
    return value is non-NULL. In particular, we create a new function
    __crypto_dequeue_request that does the pointer conversion.

    Reported-by: Brad Bosch
    Signed-off-by: Herbert Xu

    Herbert Xu
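
    A self-contained illustration of the bug pattern: converting a NULL
    pointer to its containing structure at a non-zero offset yields a
    non-NULL pointer, so the NULL test must happen before the conversion
    (toy structures, not the crypto code):

        #include <stddef.h>
        #include <stdio.h>

        /* same pointer arithmetic as the kernel's container_of() */
        #define container_of(ptr, type, member) \
                ((type *)((char *)(ptr) - offsetof(type, member)))

        struct inner { int x; };

        struct outer {
            long pad;           /* puts 'req' at a non-zero offset */
            struct inner req;
        };

        int main(void)
        {
            struct inner *dequeued = NULL;      /* empty queue */

            /* buggy order: convert first, then test -- the result is a
             * bogus non-NULL pointer (shown for illustration only) */
            struct outer *wrong = container_of(dequeued, struct outer, req);

            /* fixed order: test the raw pointer, convert only if non-NULL */
            struct outer *right = dequeued ?
                    container_of(dequeued, struct outer, req) : NULL;

            printf("buggy test sees non-NULL: %d, fixed test sees NULL: %d\n",
                   wrong != NULL, right == NULL);
            return 0;
        }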
     
  • Return the value we got from crypto_register_alg() instead of
    returning 0 in any case.

    Signed-off-by: Steffen Klassert
    Acked-by: Neil Horman
    Signed-off-by: Herbert Xu

    Steffen Klassert
     

20 Aug, 2009

2 commits

  • The alignment calculation of xcbc_tfm_ctx uses alg->cra_alignmask
    and not alg->cra_alignmask + 1 as it should. This led to frequent
    crashes during the selftest of xcbc(aes-asm) on x86_64
    machines. This patch fixes this. Also, we now use the alignmask
    of xcbc, and not the alignmask of the underlying algorithm,
    for the alignment calculation in xcbc_create.

    Signed-off-by: Steffen Klassert
    Signed-off-by: Herbert Xu

    Steffen Klassert
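
    A tiny sketch of why rounding to the mask instead of mask + 1 goes
    wrong (generic C, not the xcbc code):

        #include <stdio.h>

        /* round len up to a multiple of align (align must be a power of two) */
        #define ALIGN_UP(len, align)    (((len) + (align) - 1) & ~((align) - 1))

        int main(void)
        {
            unsigned long ctx_size = 41;        /* arbitrary example size */
            unsigned long alignmask = 7;        /* i.e. 8-byte alignment */

            /* correct: the alignment is alignmask + 1 */
            printf("right: %lu\n", ALIGN_UP(ctx_size, alignmask + 1)); /* 48 */

            /* the bug: alignmask itself is not a power of two, so the
             * rounding produces an unaligned size */
            printf("wrong: %lu\n", ALIGN_UP(ctx_size, alignmask));     /* 41 */
            return 0;
        }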
     
  • What about something like this? It defaults the CPRNG to m and makes FIPS
    dependent on the CPRNG. That way you get a module build by default, but you can
    change it to y manually during config and still satisfy the dependency, and if
    you select N it disables FIPS as well. I rather like that better than making
    FIPS a tristate. I just tested it out here and it seems to work well. Let me
    know what you think.

    Signed-off-by: Neil Horman
    Signed-off-by: Herbert Xu

    Neil Horman
     

14 Aug, 2009

1 commit

  • Recently we switched to using eseqiv on SMP machines in preference
    over chainiv. However, eseqiv does not support stream ciphers so
    they should still default to chainiv.

    This patch applies the same check as done by eseqiv to weed out
    the stream ciphers. In particular, all algorithms where the IV
    size is not equal to the block size will now default to chainiv.

    Signed-off-by: Herbert Xu

    Herbert Xu
     

13 Aug, 2009

3 commits

  • Raw counter mode only works with chainiv, which is no longer
    the default IV generator on SMP machines. This broke raw counter
    mode as it can no longer instantiate as a givcipher.

    This patch fixes it by always picking chainiv on raw counter
    mode. This is based on the diagnosis and a patch by Huang
    Ying.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This reverts commit 215ccd6f55a2144bd553e0a3d12e1386f02309fd.

    It causes CPRNG and everything selected by it to be built-in
    whenever FIPS is enabled. The problem is that it is selecting
    a tristate from a bool, which is usually not what is intended.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • Correct a typo in crypto/rng.c

    Signed-off-by: Christian Kujau
    Acked-by: Neil Horman
    Signed-off-by: Herbert Xu

    Christian Kujau
     

06 Aug, 2009

3 commits

  • cryptd_alloc_ahash() will allocate a cryptd-ed ahash for the specified
    algorithm name. The newly allocated one is guaranteed to be a cryptd-ed
    ahash, so the underlying shash can be obtained via cryptd_ahash_child().

    Signed-off-by: Huang Ying
    Signed-off-by: Herbert Xu

    Huang Ying
     
  • Remove the dedicated GHASH implementation in GCM, and use the GHASH
    digest algorithm instead. This will make GCM use a hardware-accelerated
    GHASH implementation automatically if one is available.

    The ahash interface is used instead of shash, because some
    hardware-accelerated GHASH implementations need an asynchronous interface.

    Signed-off-by: Huang Ying
    Signed-off-by: Herbert Xu

    Huang Ying
     
  • GHASH is implemented as a shash algorithm. The actual implementation
    is copied from gcm.c. This makes it possible to add
    architecture/hardware accelerated GHASH implementation.

    Signed-off-by: Huang Ying
    Signed-off-by: Herbert Xu

    Huang Ying
     

31 Jul, 2009

1 commit

  • * 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx:
    dmaengine: at_hdmac: add DMA slave transfers
    dmaengine: at_hdmac: new driver for the Atmel AHB DMA Controller
    dmaengine: dmatest: correct thread_count while using multiple thread per channel
    dmaengine: dmatest: add a maximum number of test iterations
    drivers/dma: Remove unnecessary semicolons
    drivers/dma/fsldma.c: Remove unnecessary semicolons
    dmaengine: move HIGHMEM64G restriction to ASYNC_TX_DMA
    fsldma: do not clear bandwidth control bits on the 83xx controller
    fsldma: enable external start for the 83xx controller
    fsldma: use PCI Read Multiple command

    Linus Torvalds
     

22 Jul, 2009

4 commits

  • This patch provides a default export/import function for all
    shash algorithms. It simply copies the descriptor context as
    is done by sha1_generic.

    This in essence means that all existing shash algorithms now
    support export/import. This is something that will be depended
    upon in implementations such as hmac. Therefore all new shash
    and ahash implementations must support export/import.

    For those that cannot obtain a partial result, padlock-sha's
    fallback model should be used so that a partial result is always
    available.

    Signed-off-by: Herbert Xu

    Herbert Xu
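
    A toy model of the default behaviour (hypothetical names; the real
    implementation copies the shash descriptor context of statesize bytes):

        #include <stdio.h>
        #include <string.h>

        #define STATESIZE 32    /* stand-in for the algorithm's statesize */

        struct toy_desc {
            unsigned char ctx[STATESIZE];   /* hash state lives in the descriptor */
        };

        /* default export: the partial result is just the descriptor context */
        static int toy_export(const struct toy_desc *desc, void *out)
        {
            memcpy(out, desc->ctx, STATESIZE);
            return 0;
        }

        /* default import: restore the descriptor context from a saved state */
        static int toy_import(struct toy_desc *desc, const void *in)
        {
            memcpy(desc->ctx, in, STATESIZE);
            return 0;
        }

        int main(void)
        {
            struct toy_desc a = { { 0 } }, b = { { 0 } };
            unsigned char state[STATESIZE];

            a.ctx[0] = 0x42;            /* pretend some data was hashed */
            toy_export(&a, state);      /* save the partial result */
            toy_import(&b, state);      /* resume in a fresh descriptor */

            printf("resumed state matches: %d\n",
                   memcmp(a.ctx, b.ctx, STATESIZE) == 0);
            return 0;
        }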
     
  • This patch replaces the 32-bit counters in sha512_generic with
    64-bit counters. It also switches the bit count to the simpler
    byte count.

    Signed-off-by: Herbert Xu

    Herbert Xu
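
    A small sketch of the byte-count approach: track the message length as
    a 64-bit byte count and expand it to SHA-512's 128-bit bit count only
    when the final padding is written (illustrative, not the sha512_generic
    code):

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            uint64_t byte_count = 0;

            byte_count += 3;                    /* e.g. after hashing "abc" */

            /* finalization: the length is appended in bits, 128 bits wide */
            uint64_t bits_lo = byte_count << 3;
            uint64_t bits_hi = byte_count >> 61;

            printf("bit count = %llu:%llu\n",
                   (unsigned long long)bits_hi, (unsigned long long)bits_lo);
            return 0;
        }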
     
  • This patch renames struct sha512_ctx and exports it as struct
    sha512_state so that other sha512 implementations can use it
    as the reference structure for exporting their state.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • Although xcbc was converted to shash, it didn't obey the new
    requirement that all hash state must be stored in the descriptor
    rather than the transform.

    This patch fixes this issue and also optimises away the rekeying
    by precomputing K2 and K3 within setkey.

    Signed-off-by: Herbert Xu

    Herbert Xu