14 Dec, 2006

1 commit

  • Remove useless includes of linux/io.h and don't even try to build
    iomap_copy on uml (it doesn't have readb() et al., so...)

    Signed-off-by: Al Viro
    Acked-by: Jeff Dike
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Al Viro
     

07 Dec, 2006

8 commits

  • Fixes:

    crypto/lrw.c:99: warning: conflicting types for built-in function ‘round’

    Signed-off-by: David S. Miller

    David S. Miller
     
  • Do modprobe tcrypt mode=10 to check the included test vectors; they are
    from http://grouper.ieee.org/groups/1619/email/pdf00017.pdf and from
    http://www.mail-archive.com/stds-p1619@listserv.ieee.org/msg00173.html.

    To make the last test vector fit, I had to increase the buffer size of
    input and result to 512 bytes.

    Signed-off-by: Rik Snel
    Signed-off-by: Herbert Xu

    Rik Snel
     
  • Main module: this implements the Liskov-Rivest-Wagner (LRW) block cipher
    mode in the new block cipher API. The implementation is based on ecb.c.

    The LRW-32-AES specification I used can be found at:
    http://grouper.ieee.org/groups/1619/email/pdf00017.pdf

    It implements the optimization specified as optional in the
    specification, and in addition it uses optimized multiplication
    routines from gf128mul.c.

    Since gf128mul.[ch] is not tested on big-endian, this cipher mode
    may currently fail badly on big-endian machines.

    Signed-off-by: Rik Snel
    Signed-off-by: Herbert Xu

    Rik Snel
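    For reference, a minimal sketch of the per-block operation LRW
    performs (not the kernel code; gf128_mul() and xor128() are
    hypothetical stand-ins for the helpers in gf128mul.[ch]):

        /*
         * C_i = E_K1(P_i xor T_i) xor T_i, where T_i = K2 (x) i and (x)
         * is multiplication in GF(2^128).
         */
        static void lrw_encrypt_block(struct crypto_cipher *child,
                                      const be128 *k2, u64 blocknum,
                                      u8 *dst, const u8 *src)
        {
                be128 t, buf;

                gf128_mul(&t, k2, blocknum);            /* T = K2 (x) i */
                xor128(&buf, (const be128 *)src, &t);   /* P xor T */
                crypto_cipher_encrypt_one(child, (u8 *)&buf,
                                          (const u8 *)&buf);
                xor128((be128 *)dst, &buf, &t);         /* E(P xor T) xor T */
        }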
     
  • A lot of cipher modes need multiplications in GF(2^128): LRW, ABL,
    GCM... I use functions from this library in my LRW implementation and I
    will also use them in my ABL (Arbitrary Block Length, an unencumbered
    (correct me if I am wrong) wide block cipher mode).

    Elements of GF(2^128) must be passed as u128 *; this encourages
    automatic and proper alignment.

    The library supports two different representations of GF(2^128); see
    the comment in gf128mul.h. There are different levels of optimization
    (memory/speed tradeoffs).

    The code is based on work by Dr Brian Gladman. Notable changes:
    - deletion of two optimization modes
    - change from u32 to u64 for faster handling on 64-bit machines
    - support for the 'bbe' representation in addition to the already
      implemented 'lle' representation
    - move of 'inline void' functions from the header to 'static void' in
      the source file
    - update to the Linux coding style conventions

    The original can be found at:
    http://fp.gladman.plus.com/AES/modes.vc8.19-06-06.zip

    The copyright (and GPL statement) of the original author is preserved.

    Signed-off-by: Rik Snel
    Signed-off-by: Herbert Xu

    Rik Snel
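    As a rough illustration of what the library computes, multiplying an
    element by x in the 'bbe' convention is a shift plus a conditional
    reduction by the field polynomial x^128 + x^7 + x^2 + x + 1. A minimal
    kernel-style sketch (not the optimized library routines):

        /* Multiply a GF(2^128) element by x, big-endian ('bbe') layout;
         * a[0] holds the most significant 64 bits. */
        static void gf128_mul_x_bbe(u64 a[2])
        {
                u64 carry = a[0] >> 63;    /* coefficient of x^127 */

                a[0] = (a[0] << 1) | (a[1] >> 63);
                a[1] <<= 1;
                if (carry)
                        a[1] ^= 0x87;      /* x^128 = x^7 + x^2 + x + 1 */
        }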
     
  • This patch removes the following no longer used functions:
    - api.c: crypto_alg_available()
    - digest.c: crypto_digest_init()
    - digest.c: crypto_digest_update()
    - digest.c: crypto_digest_final()
    - digest.c: crypto_digest_digest()

    Signed-off-by: Adrian Bunk
    Signed-off-by: Herbert Xu

    Adrian Bunk
     
  • On Tue, Nov 14, 2006 at 01:41:25AM -0800, Andrew Morton wrote:
    >...
    > Changes since 2.6.19-rc5-mm2:
    >...
    > git-cryptodev.patch
    >...
    > git trees
    >...

    This patch makes some needlessly global code static.

    Signed-off-by: Adrian Bunk
    Signed-off-by: Herbert Xu

    Adrian Bunk
     
  • Test vectors of XCBC with AES-128.

    Signed-off-by: Kazunori MIYAZAWA
    Signed-off-by: Herbert Xu

    Kazunori MIYAZAWA
     
  • This is the core code of XCBC.

    XCBC is an algorithm that forms a MAC algorithm out of a cipher algorithm.
    For example, AES-XCBC-MAC is a MAC algorithm based on the AES cipher
    algorithm.

    Signed-off-by: Kazunori MIYAZAWA
    Signed-off-by: Herbert Xu

    Kazunori MIYAZAWA
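    In outline, XCBC (RFC 3566) derives three keys K1, K2 and K3 from the
    base key (E_K of the constants 0x01..01, 0x02..02 and 0x03..03) and
    runs CBC-MAC under K1, folding K2 or K3 into the final block. A
    simplified sketch, assuming a hypothetical one-block helper
    aes_enc(key, in, out); this is not the kernel implementation:

        static void xcbc_mac(const u8 *k1, const u8 *k2, const u8 *k3,
                             const u8 *msg, unsigned int len, u8 mac[16])
        {
                u8 e[16] = { 0 };
                unsigned int i;

                while (len > 16) {              /* all blocks but the last */
                        for (i = 0; i < 16; i++)
                                e[i] ^= msg[i];
                        aes_enc(k1, e, e);
                        msg += 16;
                        len -= 16;
                }

                if (len == 16) {                /* complete final block */
                        for (i = 0; i < 16; i++)
                                e[i] ^= msg[i] ^ k2[i];
                } else {                        /* pad with 0x80, zeros */
                        u8 pad[16] = { 0 };

                        memcpy(pad, msg, len);
                        pad[len] = 0x80;
                        for (i = 0; i < 16; i++)
                                e[i] ^= pad[i] ^ k3[i];
                }
                aes_enc(k1, e, mac);
        }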
     

22 Nov, 2006

1 commit

  • Pass the work_struct pointer to the work function rather than context data.
    The work function can use container_of() to work out the data.

    For the cases where the container of the work_struct may go away the moment the
    pending bit is cleared, it is made possible to defer the release of the
    structure by deferring the clearing of the pending bit.

    To make this work, an extra flag is introduced into the management side of the
    work_struct. This governs auto-release of the structure upon execution.

    Ordinarily, the work queue executor would release the work_struct for
    further scheduling or deallocation by clearing the pending bit prior to
    jumping to the work function. This means that, unless the driver makes
    some guarantee itself that the work_struct won't go away, the work
    function may not access anything else in the work_struct or its
    container lest they be deallocated. This is a problem if the auxiliary
    data is taken away (as done by the last patch).

    However, if the pending bit is *not* cleared before jumping to the work
    function, then the work function *may* access the work_struct and its container
    with no problems. But then the work function must itself release the
    work_struct by calling work_release().

    In most cases, automatic release is fine, so this is the default. Special
    initiators exist for the non-auto-release case (ending in _NAR).

    Signed-Off-By: David Howells

    David Howells
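    In practice the conversion looks roughly like this (my_data and
    my_work_func are illustrative names, not from the patch):

        /* Work item embedded in a driver structure. */
        struct my_data {
                struct work_struct work;
                int value;
        };

        /* New-style work function: it receives the work_struct itself and
         * recovers its container with container_of(). */
        static void my_work_func(struct work_struct *work)
        {
                struct my_data *data = container_of(work, struct my_data,
                                                    work);

                /* ... use data->value ... */
        }

        /* Setup, e.g. on the driver's init path: INIT_WORK now takes
         * just the item and the function, no context pointer. */
        static void my_setup(struct my_data *data)
        {
                INIT_WORK(&data->work, my_work_func);
        }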
     

11 Oct, 2006

2 commits

  • This patch makes crypto_alloc_base() return a proper return value.

    - If a kzalloc() failure happens within __crypto_alloc_tfm(),
      crypto_alloc_base() returns NULL. But crypto_alloc_base() is
      supposed to return an error code as a pointer, so this patch makes
      it return -ENOMEM in that case.

    - crypto_alloc_base() is supposed to return -EINTR if it is
      interrupted by a signal, but it may not return -EINTR.

    Signed-off-by: Akinobu Mita
    Signed-off-by: Herbert Xu

    Akinobu Mita
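    Callers accordingly check the result with the ERR_PTR convention
    rather than for NULL; a fragment of a caller's allocation path:

        tfm = crypto_alloc_base(name, type, mask);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);    /* e.g. -ENOMEM or -EINTR */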
     
  • Signed-off-by: Alexey Dobriyan
    Signed-off-by: Al Viro
    Signed-off-by: Linus Torvalds

    Alexey Dobriyan
     

21 Sep, 2006

24 commits

  • This patch converts all users to use the new crypto_comp type and the
    crypto_has_* functions.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch adds the crypto_comp type to complete the compile-time
    checking conversion. The functions crypto_has_alg, crypto_has_cipher,
    etc. are also added to replace crypto_alg_available.

    Signed-off-by: Herbert Xu

    Herbert Xu
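    For illustration, the new existence checks take the same
    name/type/mask triple as the allocation functions; a sketch, not code
    from the patch:

        /* Probe for availability before committing to an allocation. */
        static bool aes_available(void)
        {
                /* any registered "aes", regardless of type */
                if (!crypto_has_alg("aes", 0, 0))
                        return false;
                /* specifically a single-block cipher implementation */
                return crypto_has_cipher("aes", 0, 0);
        }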
     
  • This patch removes the old HMAC implementation now that nobody uses it
    anymore.

    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
     
  • This patch converts tcrypt to use the new HMAC template rather than the
    hard-coded version of HMAC. It also converts all digest users to use
    the new cipher interface.

    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
     
  • This patch rewrites HMAC as a crypto template. This means that HMAC is no
    longer a hard-coded part of the API. It's now a template that generates
    standard digest algorithms like any other.

    The old HMAC is preserved until all current users are converted.

    The same structure can be used by other MACs such as AES-XCBC-MAC.

    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
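    Since HMAC is now a template, an instance is requested by wrapping the
    digest name; a brief sketch using the new hash interface from this
    series (key and keylen are placeholders):

        /* "hmac(sha1)" asks the template system to instantiate HMAC
         * around the sha1 digest. */
        struct crypto_hash *tfm;

        tfm = crypto_alloc_hash("hmac(sha1)", 0, CRYPTO_ALG_ASYNC);
        crypto_hash_setkey(tfm, key, keylen);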
     
  • The existing digest user interface is inadequate for supporting
    asynchronous operations. For one, it doesn't return a value to indicate
    success or failure, nor does it take a per-operation descriptor, which
    is essential for issuing requests while other requests are still
    outstanding.

    This patch is the first in a series of steps to remodel the interface
    for asynchronous operations.

    For the ease of transition the new interface will be known as "hash"
    while the old one will remain as "digest".

    This patch also changes sg_next to allow chaining.

    Signed-off-by: Herbert Xu

    Herbert Xu
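    The new interface carries a per-operation descriptor; a minimal sketch
    of one-shot hashing through it (a sketch under the interface added
    here, with most error handling elided):

        static int sha1_oneshot(u8 *buf, unsigned int len, u8 *out)
        {
                struct crypto_hash *tfm;
                struct hash_desc desc;
                struct scatterlist sg;

                tfm = crypto_alloc_hash("sha1", 0, CRYPTO_ALG_ASYNC);
                if (IS_ERR(tfm))
                        return PTR_ERR(tfm);

                desc.tfm = tfm;
                desc.flags = 0;         /* per-operation flags */

                sg_init_one(&sg, buf, len);
                crypto_hash_digest(&desc, &sg, len, out);

                crypto_free_hash(tfm);
                return 0;
        }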
     
  • Mark the parts of the cipher interface that have been replaced by
    block ciphers as deprecated. Thanks to Andrew Morton for suggesting
    doing this before removing them completely.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch converts tcrypt to use the new block cipher type where
    applicable.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch adds block cipher algorithms for S390. Once all users of the
    old cipher type have been converted, the existing CBC/ECB non-block
    cipher operations will be removed.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch adds two block cipher algorithms, CBC and ECB. These
    are implemented as templates on top of existing single-block cipher
    algorithms. They invoke the single-block cipher through the new
    encrypt_one/decrypt_one interface.

    This also optimises the in-place encryption and decryption to remove
    the cost of an IV copy each round.

    Signed-off-by: Herbert Xu

    Herbert Xu
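    A simplified sketch of what the CBC template's encrypt walk does per
    contiguous segment (not the template source): tracking the previous
    ciphertext block by pointer is what removes the per-round IV copy for
    in-place operation.

        static void cbc_segment_encrypt(struct crypto_cipher *child,
                                        u8 *iv, u8 *dst, const u8 *src,
                                        unsigned int nbytes,
                                        unsigned int bsize)
        {
                while (nbytes >= bsize) {
                        unsigned int i;

                        for (i = 0; i < bsize; i++)
                                dst[i] = src[i] ^ iv[i];  /* chain in IV */
                        crypto_cipher_encrypt_one(child, dst, dst);
                        iv = dst;       /* next IV = this ciphertext */
                        src += bsize;
                        dst += bsize;
                        nbytes -= bsize;
                }
        }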
     
  • This patch adds the new type of block ciphers. Unlike current cipher
    algorithms, which operate on a single block at a time, block ciphers
    operate on an arbitrarily long linear area of data. As the interface is
    block-based, it will skip any data remaining at the end which cannot
    form a block.

    The new type has one major difference compared to the existing
    implementation: the sg walking is now performed by the algorithm
    rather than the cipher mid-layer. This is needed for drivers that
    directly support sg lists. It also improves performance for all
    algorithms as it reduces the total number of indirect calls by one.

    In future the existing cipher algorithm will be converted to only have
    a single-block interface. This will be done after all existing users
    have switched over to the new block cipher type.

    Signed-off-by: Herbert Xu

    Herbert Xu
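    Usage of the new type in a nutshell; a sketch with placeholder
    parameters and most error handling elided:

        static int cbc_aes_encrypt(u8 *key, unsigned int keylen,
                                   u8 *iv, u8 *data, unsigned int len)
        {
                struct crypto_blkcipher *tfm;
                struct blkcipher_desc desc;
                struct scatterlist sg;

                tfm = crypto_alloc_blkcipher("cbc(aes)", 0,
                                             CRYPTO_ALG_ASYNC);
                if (IS_ERR(tfm))
                        return PTR_ERR(tfm);

                crypto_blkcipher_setkey(tfm, key, keylen);
                crypto_blkcipher_set_iv(tfm, iv,
                                        crypto_blkcipher_ivsize(tfm));

                desc.tfm = tfm;
                desc.flags = 0;

                /* One call walks the whole scatterlist. */
                sg_init_one(&sg, data, len);
                crypto_blkcipher_encrypt(&desc, &sg, &sg, len);

                crypto_free_blkcipher(tfm);
                return 0;
        }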
     
  • This patch prepares the scatterwalk code for use by the new block cipher
    type.

    Firstly it halves the size of scatter_walk on 32-bit platforms. This
    is important as we allocate at least two of these objects on the stack
    for each block cipher operation.

    It also exports the symbols since the block cipher code can be built as
    a module.

    Finally there is a hack in scatterwalk_unmap that relies on progress
    being made. Unfortunately, for hardware crypto we can't guarantee
    progress to be made since the hardware can fail.

    So this also gets rid of the hack by not advancing the address returned
    by scatterwalk_map.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch adds two new operations for the simple cipher that encrypts or
    decrypts a single block at a time. This will be the main interface after
    the existing block operations have moved over to the new block ciphers.

    It also adds the crypto_cipher type which is currently only used on the
    new operations but will be extended to setkey as well once existing users
    have been converted to use block ciphers where applicable.

    Signed-off-by: Herbert Xu

    Herbert Xu
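    A minimal sketch of the new calls (tfm is an allocated and keyed
    struct crypto_cipher *; dst and src are placeholders):

        /* Encrypt or decrypt exactly one block of contiguous data. */
        crypto_cipher_encrypt_one(tfm, dst, src);
        crypto_cipher_decrypt_one(tfm, dst, src);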
     
  • This patch adds the crypto_type structure which will be used for all new
    crypto algorithm types, beginning with block ciphers.

    The primary purpose of this abstraction is to allow different
    crypto_type objects for crypto algorithms of the same type; in
    particular, there will be different crypto_type objects for
    asynchronous algorithms.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • The sleeping flag used to determine whether crypto_yield can actually
    yield is really a per-operation flag rather than a per-tfm flag. This
    patch changes crypto_yield to take a flag directly so that we can start
    using a per-operation flag instead of the tfm flag.

    Signed-off-by: Herbert Xu

    Herbert Xu
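    After the change, crypto_yield is roughly:

        static inline void crypto_yield(u32 flags)
        {
                if (flags & CRYPTO_TFM_REQ_MAY_SLEEP)
                        cond_resched();
        }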
     
  • Up until now all crypto transforms have been of the same type, struct
    crypto_tfm, regardless of whether they are ciphers, digests, or other
    types. As a result of that, we check the types at run-time before
    each crypto operation.

    This is rather cumbersome. We could instead use different C types for
    each crypto type to ensure that the correct types are used at compile
    time. That is, we would have crypto_cipher/crypto_digest instead of
    just crypto_tfm. The appropriate type would then be required for the
    actual operations such as crypto_digest_digest.

    Now that we have the type/mask fields when looking up algorithms, it
    is easy to request an algorithm of the precise type that the user
    wants. However, crypto_alloc_tfm currently does not expose these new
    attributes.

    This patch introduces the function crypto_alloc_base which will carry
    these new parameters. It will be renamed to crypto_alloc_tfm once
    all existing users have been converted.

    Signed-off-by: Herbert Xu

    Herbert Xu
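    A sketch of pinning down the type with the new parameters (constants
    as in the crypto API headers):

        /* Look up a single-block cipher named "aes"; the mask makes the
         * type bits significant, so other algorithm types cannot match. */
        struct crypto_tfm *tfm;

        tfm = crypto_alloc_base("aes", CRYPTO_ALG_TYPE_CIPHER,
                                CRYPTO_ALG_TYPE_MASK);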
     
  • This patch adds the asynchronous flag and changes all existing users to
    only look up algorithms that are synchronous.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch adds the helpers crypto_get_attr_alg and crypto_alloc_instance
    which can be used by simple one-argument templates like hmac to process
    input parameters and allocate instances.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch makes IV operations on ECB fail through nocrypt_iv rather than
    calling BUG(). This is needed to generalise CBC/ECB using the template
    mechanism.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • Now that crc32c has been fixed to conform with standard digest semantics,
    we can use test_hash for it. I've turned the last test into a chunky
    test.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • When the final result location is unaligned, we store the digest in a
    temporary buffer before copying it to the final location. Currently
    that buffer sits on the stack. This patch moves it to an area in the
    tfm, just like the CBC IV buffer.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • Now that the tfm is passed directly to setkey instead of the ctx, we no
    longer need to pass the &tfm->crt_flags pointer.

    This patch also gets rid of a few unnecessary checks on the key length
    for ciphers as the cipher layer guarantees that the key length is within
    the bounds specified by the algorithm.

    Rather than testing dia_setkey every time, this patch does it only once
    during crypto_alloc_tfm. The redundant check from crypto_digest_setkey
    is also removed.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • The convention for setkey is that once a key is set it should not
    change; in particular, init must not wipe out the key set by setkey.
    In fact, init should always be used after setkey before any digestion
    is performed.

    The only user of crc32c that sets the key is tcrypt. This patch adds
    the necessary init calls there.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • Crypto modules should be loadable by their .cra_driver_name, so we
    should make MODULE_ALIAS()es with these names. This patch adds aliases
    for SHA1 and SHA256 only, as that's what we need for the PadLock-SHA
    driver.

    Signed-off-by: Michal Ludvig
    Signed-off-by: Herbert Xu

    Michal Ludvig
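    For illustration, an alias is a one-line declaration in the module
    source, e.g. in the generic SHA-1 implementation:

        MODULE_ALIAS("sha1");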