07 Jan, 2009

4 commits

  • Now that clients no longer need to be notified of channel arrival,
    dma_async_client_register can simply increment dmaengine_ref_count.

    Reviewed-by: Andrew Morton
    Signed-off-by: Dan Williams

    Dan Williams
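
    A minimal sketch of what the simplified registration path could look like,
    assuming the existing dma_list_mutex that already guards the device list
    (illustrative only, not the exact patch):

      /* Arrival notifications are gone, so registering a client is just
       * a matter of taking a reference on the dmaengine core. */
      void dma_async_client_register(struct dma_client *client)
      {
              mutex_lock(&dma_list_mutex);
              dmaengine_ref_count++;
              mutex_unlock(&dma_list_mutex);
      }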
     
  • async_tx and net_dma each have open-coded versions of issue_pending_all,
    so provide a common routine in dmaengine.

    The implementation needs to walk the global device list, so use RCU to
    allow dma_issue_pending_all to run without taking locks. Clients protect
    themselves from channel-removal events by holding a dmaengine reference.

    Reviewed-by: Andrew Morton
    Signed-off-by: Dan Williams

    Dan Williams
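
    A sketch of the kind of lockless walk this enables; the field names follow
    the dmaengine structures, but treat it as illustrative rather than the
    exact routine:

      /* Flush pending descriptors on every channel of every registered
       * DMA device.  RCU protects the walk of the global device list,
       * so no lock is taken on this hot path. */
      void dma_issue_pending_all(void)
      {
              struct dma_device *device;
              struct dma_chan *chan;

              rcu_read_lock();
              list_for_each_entry_rcu(device, &dma_device_list, global_node)
                      list_for_each_entry(chan, &device->channels, device_node)
                              if (chan->client_count)
                                      device->device_issue_pending(chan);
              rcu_read_unlock();
      }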
     
  • Allowing multiple clients to each define their own channel allocation
    scheme quickly leads to a pathological situation. For memory-to-memory
    offload, all clients can share a central allocator.

    This simply moves the existing async_tx allocator to dmaengine with
    minimal fixups:
    * async_tx.c:get_chan_ref_by_cap --> dmaengine.c:nth_chan
    * async_tx.c:async_tx_rebalance --> dmaengine.c:dma_channel_rebalance
    * split out common code from async_tx.c:__async_tx_find_channel -->
    dma_find_channel

    Reviewed-by: Andrew Morton
    Signed-off-by: Dan Williams

    Dan Williams
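
    For illustration, the fast path of the central allocator can be reduced to
    a per-cpu lookup table indexed by capability; dma_chan_tbl_ent and
    channel_table below are illustrative names for the table that
    dma_channel_rebalance would populate:

      struct dma_chan_tbl_ent {
              struct dma_chan *chan;
      };

      /* one table per capability, one entry per cpu */
      static struct dma_chan_tbl_ent *channel_table[DMA_TX_TYPE_END];

      /* Callers must hold a dmaengine reference, so the returned channel
       * cannot disappear underneath them. */
      struct dma_chan *dma_find_channel(enum dma_transaction_type tx_type)
      {
              struct dma_chan *chan;
              int cpu;

              cpu = get_cpu();
              chan = per_cpu_ptr(channel_table[tx_type], cpu)->chan;
              put_cpu();

              return chan;
      }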
     
  • Simply, if a client wants any dmaengine channel then prevent all dmaengine
    modules from being removed. Once the clients are done re-enable module
    removal.

    Why? Beyond reducing complication:
    1/ Tracking reference counts per-transaction in an efficient manner, as
    is currently done, requires a complicated scheme to avoid cache-line
    bouncing effects.
    2/ Per-transaction ref-counting gives the false impression that a
    dma-driver can be gracefully removed ahead of its user (net, md, or
    dma-slave)
    3/ None of the in-tree dma-drivers talk to hot-pluggable hardware, but
    even if such an engine were built one day we still would not need to
    notify clients of removal events: the driver can simply return NULL to a
    ->prep() request, which is much easier for a client to handle.

    Reviewed-by: Andrew Morton
    Acked-by: Maciej Sosnowski
    Signed-off-by: Dan Williams

    Dan Williams
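
    In rough terms the policy amounts to pinning every registered dmaengine
    driver module while the client count is non-zero. A sketch of taking that
    global reference (not the exact code):

      /* The first client reference pins every dmaengine driver module so
       * none of them can be unloaded while clients may submit work. */
      void dmaengine_get(void)
      {
              struct dma_device *device;

              mutex_lock(&dma_list_mutex);
              if (dmaengine_ref_count++ == 0)
                      list_for_each_entry(device, &dma_device_list, global_node)
                              try_module_get(device->dev->driver->owner);
              mutex_unlock(&dma_list_mutex);
      }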
     

06 Jan, 2009

1 commit

  • async_tx.ko is a consumer of dma channels. A circular dependency arises
    if modules in drivers/dma rely on common code in async_tx.ko. It
    prevents either module from being unloaded.

    Move dma_wait_for_async_tx and async_tx_run_dependencies to dmaengine.o,
    where they should have been from the beginning.

    Reviewed-by: Andrew Morton
    Signed-off-by: Dan Williams

    Dan Williams
     

25 Dec, 2008

35 commits

  • The tables used by the various AES algorithms are currently
    computed at run-time. This has created an init ordering problem
    because some AES algorithms may be registered before the tables
    have been initialised.

    This patch avoids the problem entirely by precomputing the tables.

    Signed-off-by: Herbert Xu

    Herbert Xu
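
    The tables are generated once, offline, and emitted as C source. A small
    standalone generator along these lines (illustrative, not the one shipped
    with the patch) shows the idea for the forward S-box:

      #include <stdio.h>
      #include <stdint.h>

      /* Multiply in GF(2^8) with the AES reduction polynomial 0x11b. */
      static uint8_t gmul(uint8_t a, uint8_t b)
      {
              uint8_t p = 0;

              while (b) {
                      if (b & 1)
                              p ^= a;
                      a = (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x1b : 0));
                      b >>= 1;
              }
              return p;
      }

      int main(void)
      {
              uint8_t inv[256] = { 0 };
              int i, j;

              /* multiplicative inverses; inv[0] stays 0 by convention */
              for (i = 1; i < 256; i++)
                      for (j = 1; j < 256; j++)
                              if (gmul((uint8_t)i, (uint8_t)j) == 1)
                                      inv[i] = (uint8_t)j;

              printf("static const u8 aes_sbox[256] = {\n");
              for (i = 0; i < 256; i++) {
                      uint8_t x = inv[i], s = x;

                      /* affine transform: s ^= rol(x, 1..4), then add 0x63 */
                      for (j = 1; j <= 4; j++)
                              s ^= (uint8_t)((x << j) | (x >> (8 - j)));
                      printf("%s0x%02x,", (i % 8) ? " " : "\t",
                             (unsigned int)(s ^ 0x63));
                      if (i % 8 == 7)
                              printf("\n");
              }
              printf("};\n");
              return 0;
      }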
     
  • The comment for the deflate test vectors says the winbits parameter is 11,
    while the deflate module actually uses -11 (a negative window bits parameter
    enables the raw deflate format instead of the zlib format).
    Correct this, to avoid confusion about the format used.

    Signed-off-by: Geert Uytterhoeven
    Signed-off-by: Herbert Xu

    Geert Uytterhoeven
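
    The sign convention comes from zlib itself; in userspace terms the module's
    setup corresponds to something like the following sketch (everything except
    the -11 is just an illustrative default):

      #include <string.h>
      #include <zlib.h>

      /* A negative windowBits (here -11, matching the kernel module)
       * selects raw deflate output: no zlib header and no checksum. */
      static int raw_deflate_init(z_stream *strm)
      {
              memset(strm, 0, sizeof(*strm));
              return deflateInit2(strm, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
                                  -11, 8, Z_DEFAULT_STRATEGY);
      }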
     
  • ROTATE -> rol32
    XOR was always used with the same destination, use ^=
    PLUS/PLUSONE use ++ or +=

    Signed-off-by: Harvey Harrison
    Signed-off-by: Herbert Xu

    Harvey Harrison
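
    For reference, the generic kernel helper that replaces the private ROTATE
    macro is essentially:

      static inline u32 rol32(u32 word, unsigned int shift)
      {
              return (word << shift) | (word >> (32 - shift));
      }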
     
  • While it's slightly insane to bypass the key1 == key2 ||
    key2 == key3 check in triple-des, since doing so reduces it to the
    same strength as des, some folks do need to do this from time
    to time for backwards compatibility with des.

    My own case is FIPS CAVS test vectors. Many triple-des test
    vectors use a single key, replicated 3x. In order to get the
    expected results, des3_ede_setkey() needs to only reject weak
    keys if the CRYPTO_TFM_REQ_WEAK_KEY flag is set.

    Also sets a more appropriate RES flag when a weak key is found.

    Signed-off-by: Jarod Wilson
    Signed-off-by: Herbert Xu

    Jarod Wilson
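
    A sketch of the resulting keying rule, pulled out into a hypothetical
    helper (close to, but not necessarily identical to, the patched
    des3_ede_setkey):

      /* Reject a degenerate 3DES key (K1 == K2 or K2 == K3) only when the
       * caller asked for weak-key checking, and report it via the
       * weak-key result flag. */
      static int des3_ede_check_key(const u8 *key, u32 *flags)
      {
              if (unlikely(!memcmp(key, &key[DES_KEY_SIZE], DES_KEY_SIZE) ||
                           !memcmp(&key[DES_KEY_SIZE],
                                   &key[DES_KEY_SIZE * 2], DES_KEY_SIZE)) &&
                  (*flags & CRYPTO_TFM_REQ_WEAK_KEY)) {
                      *flags |= CRYPTO_TFM_RES_WEAK_KEY;
                      return -EINVAL;
              }

              return 0;
      }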
     
  • This patch changes sha512 and sha384 to the new shash interface.

    Signed-off-by: Adrian-Ken Rueegsegger
    Signed-off-by: Herbert Xu

    Adrian-Ken Rueegsegger
     
  • The message schedule W (u64[80]) is too big for the stack. In order
    for this algorithm to be used with shash it is moved to a static
    percpu area.

    Signed-off-by: Adrian-Ken Rueegsegger
    Signed-off-by: Herbert Xu

    Adrian-Ken Rueegsegger
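
    A sketch of the storage change (sha512_msg_schedule is an illustrative
    name; the round computation itself is unchanged and only hinted at here):

      /* 80 u64 words = 640 bytes, too large for the kernel stack, so keep
       * one schedule per cpu.  get_cpu_var() disables preemption while the
       * schedule is in use. */
      static DEFINE_PER_CPU(u64[80], sha512_msg_schedule);

      static void sha512_transform_sketch(u64 *state, const u8 *input)
      {
              u64 *W = get_cpu_var(sha512_msg_schedule);

              /* load the 16 input words into W[0..15], expand up to W[79]
               * and run the 80 rounds exactly as before, only with W no
               * longer on the stack */

              memset(W, 0, 80 * sizeof(u64));  /* don't leave state behind */
              put_cpu_var(sha512_msg_schedule);
      }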
     
  • This patch changes michael_mic to the new shash interface.

    Signed-off-by: Adrian-Ken Rueegsegger
    Signed-off-by: Herbert Xu

    Adrian-Ken Rueegsegger
     
  • This patch changes wp512, wp384 and wp256 to the new shash interface.

    Signed-off-by: Adrian-Ken Rueegsegger
    Signed-off-by: Herbert Xu

    Adrian-Ken Rueegsegger
     
  • This patch changes tgr192, tgr160 and tgr128 to the new shash interface.

    Signed-off-by: Adrian-Ken Rueegsegger
    Signed-off-by: Herbert Xu

    Adrian-Ken Rueegsegger
     
  • This patch changes sha256 and sha224 to the new shash interface.

    Signed-off-by: Adrian-Ken Rueegsegger
    Signed-off-by: Herbert Xu

    Adrian-Ken Rueegsegger
     
  • This patch changes md5 to the new shash interface.

    Signed-off-by: Adrian-Ken Rueegsegger
    Signed-off-by: Herbert Xu

    Adrian-Ken Rueegsegger
     
  • This patch changes md4 to the new shash interface.

    Signed-off-by: Adrian-Ken Rueegsegger
    Signed-off-by: Herbert Xu

    Adrian-Ken Rueegsegger
     
  • This patch changes sha1 to the new shash interface.

    Signed-off-by: Adrian-Ken Rueegsegger
    Signed-off-by: Herbert Xu

    Adrian-Ken Rueegsegger
     
  • This patch changes rmd320 to the new shash interface.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch changes rmd256 to the new shash interface.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch changes rmd160 to the new shash interface.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch changes rmd128 to the new shash interface.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch changes digest_null to the new shash interface.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • Since most cryptographic hash algorithms have no keys, this patch
    makes the setkey function optional for ahash and shash.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • When self-testing (de)compression algorithms, make sure the actual size of
    the (de)compressed output data matches the expected output size.
    Otherwise, if the actual output were smaller than the expected output,
    the subsequent buffer compare could still succeed and no error would be
    reported.

    Signed-off-by: Geert Uytterhoeven
    Signed-off-by: Herbert Xu

    Geert Uytterhoeven
     
  • Base versions handle constant folding just fine.

    Signed-off-by: Harvey Harrison
    Signed-off-by: Herbert Xu

    Harvey Harrison
     
  • This warning:

    crypto/testmgr.c: In function ‘test_comp’:
    crypto/testmgr.c:829: warning: ‘ret’ may be used uninitialized in this function

    triggers because GCC correctly notices that, for the ctcount == 0 &&
    dtcount != 0 input case, this function can return an undefined value if
    the second loop fails.

    Remove the shadowed 'ret' variable from the second loop, which was
    probably unintended.

    Signed-off-by: Ingo Molnar
    Signed-off-by: Herbert Xu

    Ingo Molnar
     
  • The ANSI X9.31 PRNG docs aren't particularly clear on how to increment DT,
    but empirical testing shows we're incrementing from the wrong end. A
    10,000-iteration Monte Carlo RNG test currently fails to produce the
    expected result.

    From http://csrc.nist.gov/groups/STM/cavp/documents/rng/RNGVS.pdf :

    # CAVS 4.3
    # ANSI931 MCT
    [X9.31]
    [AES 128-Key]

    COUNT = 0
    Key = 9f5b51200bf334b5d82be8c37255c848
    DT = 6376bbe52902ba3b67c925fa701f11ac
    V = 572c8e76872647977e74fbddc49501d1
    R = 48e9bd0d06ee18fbe45790d5c3fc9b73

    Currently, we get 0dd08496c4f7178bfa70a2161a79459a after 10000 loops.

    Inverting the DT increment routine results in us obtaining the expected result
    of 48e9bd0d06ee18fbe45790d5c3fc9b73. Verified on both x86_64 and ppc64.

    Signed-off-by: Jarod Wilson
    Acked-by: Neil Horman
    Signed-off-by: Herbert Xu

    Jarod Wilson
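
    In other words, DT behaves as a big-endian counter: the increment starts at
    the last byte and only carries leftwards. A sketch of that direction
    (hypothetical helper name):

      /* Increment DT as a big-endian counter: bump the last byte first and
       * propagate any carry toward byte 0. */
      static void prng_increment_dt(u8 *dt, unsigned int len)
      {
              int i;

              for (i = len - 1; i >= 0; i--) {
                      dt[i]++;
                      if (dt[i])
                              break;          /* no carry, done */
              }
      }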
     
  • While working with some FIPS RNGVS test vectors yesterday, I discovered a
    little bug in the way the ansi_cprng code works right now.

    For example, the following test vector (complete with expected result)
    from http://csrc.nist.gov/groups/STM/cavp/documents/rng/RNGVS.pdf ...

    Key = f3b1666d13607242ed061cabb8d46202
    DT = e6b3be782a23fa62d71d4afbb0e922fc
    V = f0000000000000000000000000000000
    R = 88dda456302423e5f69da57e7b95c73a

    ...when run through ansi_cprng, yields an incorrect R value
    of e2afe0d794120103d6e86a2b503bdfaa.

    If I load up ansi_cprng with dbg=1, though, it is fairly obvious what is
    going wrong:

    ----8<----
    Acked-by: Neil Horman
    Signed-off-by: Herbert Xu

    Jarod Wilson
     
  • ARRAY_SIZE is more concise to use when the size of an array is divided by
    the size of its type or the size of its first element.

    The semantic patch that makes this change is as follows:
    (http://www.emn.fr/x-info/coccinelle/)

    // <smpl>
    @i@
    @@

    #include <linux/kernel.h>

    @depends on i using "paren.iso"@
    type T;
    T[] E;
    @@

    - (sizeof(E)/sizeof(T))
    + ARRAY_SIZE(E)
    // </smpl>

    Signed-off-by: Julia Lawall
    Signed-off-by: Herbert Xu

    Julia Lawall
     
  • This patch swaps the role of libcrc32c and crc32c. Previously
    the implementation was in libcrc32c and crc32c was a wrapper.
    Now the code is in crc32c and libcrc32c just calls the crypto
    layer.

    The reason for the change is to tap into the algorithm selection
    capability of the crypto API so that optimised implementations
    such as the one utilising Intel's CRC32C instruction can be
    used where available.

    Signed-off-by: Herbert Xu

    Herbert Xu
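
    The resulting wrapper is thin. Roughly, and with tfm setup and error
    handling trimmed, it relies on crc32c keeping its partial result in the
    first four bytes of the descriptor context (which the following entry adds
    a test for):

      static struct crypto_shash *tfm;  /* crypto_alloc_shash("crc32c", 0, 0)
                                           at module init */

      u32 crc32c(u32 crc, const void *address, unsigned int length)
      {
              struct {
                      struct shash_desc shash;
                      char ctx[4];
              } desc;
              int err;

              memset(&desc, 0, sizeof(desc));
              desc.shash.tfm = tfm;
              *(u32 *)desc.ctx = crc;   /* seed the partial result */

              err = crypto_shash_update(&desc.shash, address, length);
              BUG_ON(err);

              return *(u32 *)desc.ctx;  /* updated partial result */
      }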
     
  • This patch adds a test for the requirement that all crc32c algorithms
    shall store the partial result in the first four bytes of the descriptor
    context.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch changes crc32c to the new shash interface.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch allows shash algorithms to be used through the old hash
    interface. This is a transitional measure so we can convert the
    underlying algorithms to shash before converting the users across.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch makes /proc/crypto call the type-specific show function
    if one is present before calling the legacy show functions for
    cipher/digest/compress. This allows us to reuse the type values
    for those legacy types. In particular, hash and digest will share
    one type value while shash is phased in as the default hash type.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • It is often useful to save the partial state of a hash function
    so that it can be used as a base for two or more computations.

    The most prominent example is HMAC where all hashes start from
    a base determined by the key. Having an import/export interface
    means that we only have to compute that base once rather than
    for each message.

    Signed-off-by: Herbert Xu

    Herbert Xu
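
    Using the shash flavour of the interface, the pattern looks roughly like
    the hypothetical helper below: key the transform once, export the partial
    state after the common prefix, then import it before each message.

      /* Hash one message starting from a previously exported partial state
       * (e.g. the HMAC ipad block), so the base is only computed once.
       * 'saved_base' must come from crypto_shash_export() on a descriptor
       * of the same algorithm. */
      static int hash_from_saved_base(struct crypto_shash *tfm,
                                      const void *saved_base,
                                      const u8 *msg, unsigned int len,
                                      u8 *out)
      {
              struct shash_desc *desc;
              int err;

              desc = kzalloc(sizeof(*desc) + crypto_shash_descsize(tfm),
                             GFP_KERNEL);
              if (!desc)
                      return -ENOMEM;
              desc->tfm = tfm;

              err = crypto_shash_import(desc, saved_base);
              if (!err)
                      err = crypto_shash_finup(desc, msg, len, out);

              kfree(desc);
              return err;
      }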
     
  • This patch allows shash algorithms to be used through the ahash
    interface. This is required before we can convert digest algorithms
    over to shash.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • The shash interface replaces the current synchronous hash interface.
    It improves over hash in two ways. Firstly shash is reentrant,
    meaning that the same tfm may be used by two threads simultaneously
    as all hashing state is stored in a local descriptor.

    The other enhancement is that shash no longer takes scatter list
    entries. This is because shash is specifically designed for
    synchronous algorithms and as such scatter lists are unnecessary.

    All existing hash users will be converted to shash once the
    algorithms have been completely converted.

    There is also a new finup function that combines update with final.
    This will be extended to ahash once the algorithm conversion is
    done.

    This is also the first time that an algorithm type has its own
    registration function. Existing algorithm types will be converted
    to this scheme in due course.

    Signed-off-by: Herbert Xu

    Herbert Xu
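
    A minimal usage sketch of the new interface (hypothetical caller, error
    paths abbreviated): all per-call hashing state lives in the descriptor,
    and finup combines the last update with final in one call. Algorithms
    themselves register through the type-specific crypto_register_shash().

      static int shash_digest_example(const u8 *data, unsigned int len,
                                      u8 *out)
      {
              struct crypto_shash *tfm;
              struct shash_desc *desc;
              int err;

              tfm = crypto_alloc_shash("sha1", 0, 0);
              if (IS_ERR(tfm))
                      return PTR_ERR(tfm);

              /* the descriptor holds all hashing state, so one tfm can be
               * shared safely between threads */
              desc = kzalloc(sizeof(*desc) + crypto_shash_descsize(tfm),
                             GFP_KERNEL);
              if (!desc) {
                      crypto_free_shash(tfm);
                      return -ENOMEM;
              }
              desc->tfm = tfm;

              err = crypto_shash_init(desc);
              if (!err)
                      err = crypto_shash_finup(desc, data, len, out);

              kfree(desc);
              crypto_free_shash(tfm);
              return err;
      }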
     
  • This patch reintroduces a completely revamped crypto_alloc_tfm.
    The biggest change is that we now take two crypto_type objects
    when allocating a tfm, a frontend and a backend. In fact this
    simply formalises what we've been doing behind the API's back.

    For example, as it stands crypto_alloc_ahash may use an
    actual ahash algorithm or a crypto_hash algorithm. Putting
    this in the API allows us to do this much more cleanly.

    The existing types will be converted across gradually.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • The type exit function needs to undo any allocations done by the type
    init function. However, the type init function may differ depending
    on the upper-level type of the transform (e.g., a crypto_blkcipher
    instantiated as a crypto_ablkcipher).

    So we need to move the exit function out of the lower-level
    structure and into crypto_tfm itself.

    As it stands this is a no-op since nobody uses exit functions at
    all. However, all cases where a lower-level type is instantiated
    as a different upper-level type (such as blkcipher as ablkcipher)
    will be converted such that they allocate the underlying transform
    and use that instead of casting (e.g., a crypto_ablkcipher cast
    into a crypto_blkcipher). That will need to use a different exit
    function depending on the upper-level type.

    This patch also allows the type init/exit functions to call (or not)
    cra_init/cra_exit instead of always calling them from the top level.

    Signed-off-by: Herbert Xu

    Herbert Xu