25 Dec, 2008

6 commits

  • The tables used by the various AES algorithms are currently
    computed at run-time. This has created an init ordering problem
    because some AES algorithms may be registered before the tables
    have been initialised.

    This patch gets around this whole thing by precomputing the tables.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch allows shash algorithms to be used through the old hash
    interface. This is a transitional measure so we can convert the
    underlying algorithms to shash before converting the users across.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • It is often useful to save the partial state of a hash function
    so that it can be used as a base for two or more computations.

    The most prominent example is HMAC where all hashes start from
    a base determined by the key. Having an import/export interface
    means that we only have to compute that base once rather than
    for each message; a usage sketch follows this entry.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
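    A minimal usage sketch of the flow described above, written against the
    shash export/import calls as they appear in current kernels; the
    hmac(sha256) name, the buffer handling and the error paths are
    illustrative only:

      #include <crypto/hash.h>
      #include <linux/err.h>
      #include <linux/slab.h>

      /* Hash two messages under the same HMAC key, computing the keyed
       * base state only once and restoring it via import. */
      static int hmac_two_messages(const u8 *key, unsigned int keylen,
                                   const u8 *m1, unsigned int len1,
                                   const u8 *m2, unsigned int len2,
                                   u8 *out1, u8 *out2)
      {
              struct crypto_shash *tfm;
              struct shash_desc *desc;
              void *state;
              int err;

              tfm = crypto_alloc_shash("hmac(sha256)", 0, 0);
              if (IS_ERR(tfm))
                      return PTR_ERR(tfm);

              err = crypto_shash_setkey(tfm, key, keylen);
              if (err)
                      goto out_tfm;

              err = -ENOMEM;
              desc = kzalloc(sizeof(*desc) + crypto_shash_descsize(tfm),
                             GFP_KERNEL);
              state = kmalloc(crypto_shash_statesize(tfm), GFP_KERNEL);
              if (!desc || !state)
                      goto out_mem;
              desc->tfm = tfm;

              /* Compute the keyed base state once and save it. */
              err = crypto_shash_init(desc);
              if (!err)
                      err = crypto_shash_export(desc, state);

              /* The first message starts from the saved base... */
              if (!err)
                      err = crypto_shash_import(desc, state);
              if (!err)
                      err = crypto_shash_update(desc, m1, len1);
              if (!err)
                      err = crypto_shash_final(desc, out1);

              /* ...and so does the second, without rehashing the key. */
              if (!err)
                      err = crypto_shash_import(desc, state);
              if (!err)
                      err = crypto_shash_update(desc, m2, len2);
              if (!err)
                      err = crypto_shash_final(desc, out2);

      out_mem:
              kfree(state);
              kfree(desc);
      out_tfm:
              crypto_free_shash(tfm);
              return err;
      }
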
  • The shash interface replaces the current synchronous hash interface.
    It improves over hash in two ways. Firstly shash is reentrant,
    meaning that the same tfm may be used by two threads simultaneously
    as all hashing state is stored in a local descriptor.

    The other enhancement is that shash no longer takes scatter list
    entries. This is because shash is specifically designed for
    synchronous algorithms and as such scatter lists are unnecessary.

    All existing hash users will be converted to shash once the
    algorithms have been completely converted.

    There is also a new finup function that combines update with final.
    This will be extended to ahash once the algorithm conversion is
    done.

    This is also the first time that an algorithm type has its own
    registration function. Existing algorithm types will be converted
    to do the same in due course. A short usage sketch of the new
    interface follows this entry.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
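    A short sketch of the descriptor-based calling convention described above,
    using the SHASH_DESC_ON_STACK helper from later kernels; because all
    hashing state lives in the caller-provided descriptor, several threads may
    share one tfm, and finup replaces a separate update plus final:

      #include <crypto/hash.h>

      static int shash_oneshot(struct crypto_shash *tfm,
                               const u8 *data, unsigned int len, u8 *out)
      {
              /* All state is kept here, not in the tfm, so concurrent
               * users of the same tfm do not interfere. */
              SHASH_DESC_ON_STACK(desc, tfm);
              int err;

              desc->tfm = tfm;

              err = crypto_shash_init(desc);
              if (!err)
                      err = crypto_shash_finup(desc, data, len, out);

              return err;
      }
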
  • This patch reintroduces a completely revamped crypto_alloc_tfm.
    The biggest change is that we now take two crypto_type objects
    when allocating a tfm, a frontend and a backend. In fact this
    simply formalises what we've been doing behind the API's back.

    For example, as it stands crypto_alloc_ahash may use an
    actual ahash algorithm or a crypto_hash algorithm. Putting
    this in the API allows us to do this much more cleanly.

    The existing types will be converted across gradually.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • The type exit function needs to undo any allocations done by the type
    init function. However, the type init function may differ depending
    on the upper-level type of the transform (e.g., a crypto_blkcipher
    instantiated as a crypto_ablkcipher).

    So we need to move the exit function out of the lower-level
    structure and into crypto_tfm itself.

    As it stands this is a no-op since nobody uses exit functions at
    all. However, all cases where a lower-level type is instantiated
    as a different upper-level type (such as blkcipher as ablkcipher)
    will be converted such that they allocate the underlying transform
    and use that instead of casting (e.g., crypto_ablkcipher casted
    into crypto_blkcipher). That will need to use a different exit
    function depending on the upper-level type.

    This patch also allows the type init/exit functions to call (or not)
    cra_init/cra_exit instead of always calling them from the top level.

    Signed-off-by: Herbert Xu

    Herbert Xu
     

29 Aug, 2008

2 commits


13 Aug, 2008

1 commit


10 Jul, 2008

3 commits


01 May, 2008

1 commit

  • When I backed out of using the generic sg chaining (as it isn't currently
    portable) and introduced scatterwalk_sg_chain/scatterwalk_sg_next I left
    out the sg_is_last check in the latter. This causes it to potentially
    dereference beyond the end of the sg array.

    As most uses of scatterwalk_sg_next are bound by an overall length, this
    only affected the chaining code in authenc and eseqiv. Thanks to Patrick
    McHardy for identifying this problem.

    This patch also clears the "last" bit on the head of the chained list as
    it is no longer last; that step also went missing in scatterwalk_sg_chain
    although it is done by sg_chain. A simplified sketch of the corrected walk
    follows this entry.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
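    A simplified sketch, not the kernel's exact code, of what the corrected
    walk has to do: stop when the current entry carries the end marker, and
    follow an explicit chain entry when one is encountered:

      #include <linux/scatterlist.h>

      static struct scatterlist *sg_walk_next(struct scatterlist *sg)
      {
              /* The check that was missing: never step past the end of
               * the array. */
              if (sg_is_last(sg))
                      return NULL;

              sg++;

              /* A chain entry carries no data; it points into the next
               * array. */
              if (sg_is_chain(sg))
                      sg = sg_chain_ptr(sg);

              return sg;
      }
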

21 Apr, 2008

1 commit


08 Mar, 2008

1 commit


23 Feb, 2008

1 commit


11 Jan, 2008

24 commits

  • As Andrew Morton correctly points out, we need to explicitly include
    sched.h as we use the function cond_resched in crypto/scatterwalk.h.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch makes chainiv avoid spinning by postponing requests on lock
    contention if the user allows the use of asynchronous algorithms. If
    a synchronous algorithm is requested then we behave as before.

    This should improve IPsec performance on SMP when two CPUs attempt to
    transmit over the same SA. Currently one of them will spin doing nothing
    waiting for the other CPU to finish its encryption. This patch makes it
    postpone the request and get on with other work.

    If only one CPU is transmitting for a given SA, then we will process
    the request synchronously as before.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch adds a null blkcipher algorithm called ecb(cipher_null) for
    backwards compatibility. Previously the null algorithm when used by
    IPsec copied the data byte by byte. This new algorithm optimises that
    to a straight memcpy which lets us better measure inherent overheads in
    our IPsec code.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch makes crypto_alloc_aead always return algorithms that are
    capable of generating their own IVs through givencrypt and givdecrypt.
    All existing AEAD algorithms already do. New ones must either supply
    their own or specify a generic IV generator with the geniv field.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch creates the infrastructure to help the construction of IV
    generator templates that wrap around AEAD algorithms by adding an IV
    generator to them. This is useful for AEAD algorithms with no built-in
    IV generator or to replace their built-in generator.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch finally makes the givencrypt/givdecrypt operations available
    to users by adding crypto_aead_givencrypt and crypto_aead_givdecrypt.
    A suite of helpers to allocate and fill in the request is also available.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch adds the underlying givcrypt operations for aead and associated
    support elements. The rationale is identical to that of the skcipher
    givcrypt operations, i.e., sometimes only the algorithm knows how the
    IV should be generated.

    A new request type aead_givcrypt_request is added (sketched after this
    entry) which contains an embedded aead_request structure with two new
    elements to support this
    operation. The new elements are seq and giv. The seq field should
    contain a strictly increasing 64-bit integer which may be used by
    certain IV generators as an input value. The giv field will be used
    to store the generated IV. It does not need to obey the alignment
    requirements of the algorithm because it's not used during the operation.

    The existing iv field must still be available as it will be used to store
    intermediate IVs and the output IV if chaining is desired.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
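    A sketch of the request layout as described above; the field order and
    the name of the embedded member are assumptions:

      #include <crypto/aead.h>
      #include <linux/types.h>

      struct aead_givcrypt_request_sketch {
              u64 seq;                  /* strictly increasing sequence number */
              u8 *giv;                  /* where the generated IV is written   */
              struct aead_request areq; /* the ordinary AEAD request, embedded
                                         * last so that its trailing request
                                         * context can follow it in memory    */
      };
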
  • This patch finally makes the givencrypt/givdecrypt operations available
    to users by adding crypto_skcipher_givencrypt and crypto_skcipher_givdecrypt.
    A suite of helpers to allocate and fill in the request is also available.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • Now that gcm and authenc have been converted to crypto_spawn_skcipher,
    this patch removes the obsolete crypto_spawn_ablkcipher function.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch makes crypto_alloc_ablkcipher/crypto_grab_skcipher always
    return algorithms that are capable of generating their own IVs through
    givencrypt and givdecrypt. Each algorithm may specify its default IV
    generator through the geniv field.

    For algorithms that do not set the geniv field, the blkcipher layer will
    pick a default. Currently it's chainiv for synchronous algorithms and
    eseqiv for asynchronous algorithms. Note that if these wrappers do not
    work on an algorithm then that algorithm must specify its own geniv or
    it can't be used at all.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch adds the helper skcipher_givcrypt_complete which should be
    called when an ablkcipher algorithm has completed a givcrypt request.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch creates the infrastructure to help the construction of givcipher
    templates that wrap around existing blkcipher/ablkcipher algorithms by adding
    an IV generator to them.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • Different block cipher modes have different requirements for initialisation
    vectors. For example, CBC can use a simple randomly generated IV while
    modes such as CTR must use an IV generation mechanism that gives a stronger
    guarantee on the lack of collisions. Furthermore, disk encryption modes
    have their own IV generation algorithms.

    Up until now IV generation has been left to the users of the symmetric
    key cipher API. This is inconvenient as the number of block cipher modes
    increases because the user needs to be aware of which mode is supposed to
    be paired with which IV generation algorithm.

    Therefore it makes sense to integrate the IV generation into the crypto
    API. This patch takes the first step in that direction by creating two
    new ablkcipher operations, givencrypt and givdecrypt, that generate an
    IV before performing the actual encryption or decryption.

    The operations are currently not exposed to the user. That will be done
    once the underlying functionality has actually been implemented.

    It also creates the underlying givcipher type. Algorithms that directly
    generate IVs would use it instead of ablkcipher. All other algorithms
    (including all existing ones) would generate a givcipher algorithm upon
    registration. This givcipher algorithm will be constructed from the geniv
    string that's stored in every algorithm. That string will locate a template
    which is instantiated by the blkcipher/ablkcipher algorithm in question to
    give a givcipher algorithm.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • Note: From now on the collective of ablkcipher/blkcipher/givcipher will
    be known as skcipher, i.e., symmetric key cipher. The name blkcipher has
    always been something of a misnomer since it supports stream ciphers too.

    This patch adds the function crypto_grab_skcipher as a new way of getting
    an ablkcipher spawn. The problem is that previously we did this in two
    steps, first getting the algorithm and then calling crypto_init_spawn.

    This meant that each spawn user had to be aware of what type and mask to
    use for these two steps. This is difficult and also presents a problem
    when the type/mask changes, as it is about to for IV generators.

    The new interface does both steps together just like crypto_alloc_ablkcipher.

    As a side-effect this also allows us to be stronger on type enforcement
    for spawns. For now this is only done for ablkcipher but it's trivial
    to extend for other types.

    This patch also moves the type/mask logic for skcipher into the helpers
    crypto_skcipher_type and crypto_skcipher_mask.

    Finally this patch introduces the function crypto_require_sync to determine
    whether the user is specifically requesting a sync algorithm.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • As discussed previously, this patch moves the basic CTR functionality
    into a chainable algorithm called ctr. The IPsec-specific variant of
    it is now placed on top with the name rfc3686.

    So ctr(aes) gives a chainable cipher with an IV size of 16, while the
    IPsec variant is called rfc3686(ctr(aes)); a small allocation sketch
    follows this entry. This patch also adjusts gcm accordingly.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
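    A small allocation sketch using the template names introduced above,
    written against the present-day crypto_alloc_skcipher interface rather
    than the blkcipher calls contemporary with this commit:

      #include <crypto/skcipher.h>
      #include <linux/err.h>
      #include <linux/printk.h>

      static void ctr_naming_demo(void)
      {
              /* Plain chainable counter mode: 16-byte IV, as noted above. */
              struct crypto_skcipher *ctr =
                      crypto_alloc_skcipher("ctr(aes)", 0, 0);
              /* The IPsec variant layered on top of it. */
              struct crypto_skcipher *rfc =
                      crypto_alloc_skcipher("rfc3686(ctr(aes))", 0, 0);

              if (!IS_ERR(ctr)) {
                      pr_info("ctr(aes) ivsize: %u\n",
                              crypto_skcipher_ivsize(ctr));
                      crypto_free_skcipher(ctr);
              }
              if (!IS_ERR(rfc)) {
                      pr_info("rfc3686(ctr(aes)) ivsize: %u\n",
                              crypto_skcipher_ivsize(rfc));
                      crypto_free_skcipher(rfc);
              }
      }
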
  • This patch adds a new helper crypto_attr_alg_name which is basically the
    first half of crypto_attr_alg. That is, it returns an algorithm name
    parameter as a string without looking it up. The caller can then look it
    up immediately or defer it until later.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • Unfortunately the generic chaining hasn't been ported to all architectures
    yet, notably not s390. So this patch restores the chaining that we've been
    using previously, which does work everywhere.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • The scatterwalk infrastructure is used by algorithms so it needs to
    move out of crypto for future users that may live in drivers/crypto
    or asm/*/crypto.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • Having enckeylen as a template parameter makes it a pain for hardware
    devices that implement ciphers with many key sizes since each one would
    have to be registered separately.

    Since the authenc algorithm is mainly used for legacy purposes where its
    key is going to be constructed out of two separate keys, we can in fact
    embed this value into the key itself.

    This patch does this by prepending to the key an rtnetlink attribute
    header that carries the encryption key length; the sketch after this
    entry shows one way to build such a key.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
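    One way to build such a key, assuming the crypto_authenc_key_param and
    CRYPTO_AUTHENC_KEYA_PARAM definitions from include/crypto/authenc.h and
    the standard RTA_* macros; error handling is kept minimal:

      #include <crypto/authenc.h>
      #include <linux/rtnetlink.h>
      #include <linux/slab.h>
      #include <linux/string.h>
      #include <asm/byteorder.h>

      /* Key layout: rtattr header, parameter block carrying the encryption
       * key length, then the authentication key, then the encryption key. */
      static u8 *build_authenc_key(const u8 *authkey, unsigned int authkeylen,
                                   const u8 *enckey, unsigned int enckeylen,
                                   unsigned int *keylen)
      {
              struct crypto_authenc_key_param *param;
              struct rtattr *rta;
              u8 *key, *p;

              *keylen = RTA_SPACE(sizeof(*param)) + authkeylen + enckeylen;
              key = kmalloc(*keylen, GFP_KERNEL);
              if (!key)
                      return NULL;

              rta = (struct rtattr *)key;
              rta->rta_type = CRYPTO_AUTHENC_KEYA_PARAM;
              rta->rta_len = RTA_LENGTH(sizeof(*param));
              param = RTA_DATA(rta);
              param->enckeylen = cpu_to_be32(enckeylen);

              p = key + RTA_SPACE(sizeof(*param));
              memcpy(p, authkey, authkeylen);
              memcpy(p + authkeylen, enckey, enckeylen);

              return key;
      }
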
  • With the addition of more stream ciphers we need to curb the proliferation
    of ad-hoc xor functions. This patch creates a generic pair of functions,
    crypto_inc and crypto_xor, which do big-endian increment and exclusive or,
    respectively (byte-wise reference versions are sketched after this entry).

    For optimum performance, they both use u32 operations, so the arguments
    must be u32-aligned even though they are of type u8 *.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
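    Byte-at-a-time reference versions, for illustration only; the exported
    crypto_inc and crypto_xor work on u32 words, which is where the alignment
    requirement above comes from:

      #include <linux/types.h>

      static void crypto_inc_bytewise(u8 *a, unsigned int size)
      {
              /* Big-endian increment: start at the last (least significant)
               * byte and stop as soon as a byte does not wrap to zero. */
              while (size--)
                      if (++a[size])
                              break;
      }

      static void crypto_xor_bytewise(u8 *dst, const u8 *src,
                                      unsigned int size)
      {
              while (size--)
                      *dst++ ^= *src++;
      }
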
  • Up until now ablkcipher algorithms have been identified as type
    BLKCIPHER with the ASYNC bit set. This is suboptimal because
    ablkcipher refers to two things: on the one hand the top-level
    ablkcipher interface with requests, and on the other an algorithm
    type underneath.

    As it is you cannot request a synchronous block cipher algorithm
    with the ablkcipher interface on top. This is a problem because
    we want to be able to eventually phase out the blkcipher top-level
    interface.

    This patch fixes this by making ABLKCIPHER its own type, just as
    we have distinct types for HASH and DIGEST. The type is associated
    with the algorithm implementation only.

    Which top-level interface is used for synchronous block ciphers is
    then determined by the mask that's used. If a specific mask is given
    then the old blkcipher interface is returned, otherwise we go with the
    new ablkcipher interface (a sketch of the mask idiom follows this entry).

    Signed-off-by: Herbert Xu

    Herbert Xu
     
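    A sketch of the mask idiom referred to above, shown with the present-day
    skcipher allocator as a stand-in: leaving CRYPTO_ALG_ASYNC out of the type
    while putting it in the mask only matches implementations whose ASYNC bit
    is clear, i.e. synchronous ones:

      #include <crypto/skcipher.h>
      #include <linux/crypto.h>

      static struct crypto_skcipher *get_sync_cbc_aes(void)
      {
              /* type = 0, mask = CRYPTO_ALG_ASYNC: the ASYNC bit must be
               * clear, so only synchronous implementations are eligible. */
              return crypto_alloc_skcipher("cbc(aes)", 0, CRYPTO_ALG_ASYNC);
      }
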
  • Resubmitting this patch which extends sha256_generic.c to support SHA-224 as
    described in FIPS 180-2 and RFC 3874. HMAC-SHA-224 as described in RFC4231
    is then supported through the hmac interface.

    Patch includes test vectors for SHA-224 and HMAC-SHA-224.

    SHA-224 should be chosen as a hash algorithm when 112 bits of security
    strength are required.

    Patch generated against the 2.6.24-rc1 kernel and tested against
    2.6.24-rc1-git14, which includes the fix for the scatter-gather
    implementation for HMAC.

    Signed-off-by: Jonathan Lynch
    Signed-off-by: Herbert Xu

    Jonathan Lynch
     
  • This patch exports four tables and the set_key() routine. These resources
    can then be shared by other AES implementations (aes-x86_64 for instance).
    The decryption key has been turned around (deckey[0] is the first piece
    of the key instead of deckey[keylen+20]). The encrypt and decrypt
    functions now look identical (except that they use different tables
    and keys).

    Signed-off-by: Sebastian Siewior
    Signed-off-by: Herbert Xu

    Sebastian Siewior
     
  • These three defines are used in all AES-related hardware.

    Signed-off-by: Sebastian Siewior
    Signed-off-by: Herbert Xu

    Sebastian Siewior