17 Jan, 2010
1 commit
-
This patch adds export/import support to md5. The exported type is
defined by struct md5_state. This is modeled after the equivalent
change to sha1_generic.
Signed-off-by: Max Vozeler
Signed-off-by: Herbert Xu
07 Jan, 2010
1 commit
-
This patch adds a parallel crypto template that takes a crypto
algorithm and converts it to process the crypto transforms in
parallel. For the moment only aead algorithms are supported.
Signed-off-by: Steffen Klassert
Signed-off-by: Herbert Xu
19 Oct, 2009
2 commits
-
6941c3a0 disabled compilation of the legacy digest code but didn't
actually remove it. Rectify this. Also, remove the crypto_hash_type
extern declaration from algapi.h now that the struct is gone.
Signed-off-by: Benjamin Gilbert
Signed-off-by: Herbert Xu
-
PCLMULQDQ is used to accelerate the most time-consuming part of GHASH,
carry-less multiplication. More information about PCLMULQDQ can be
found at:
http://software.intel.com/en-us/articles/carry-less-multiplication-and-its-usage-for-computing-the-gcm-mode/
Because PCLMULQDQ changes XMM state, its use must be enclosed in
kernel_fpu_begin/end, which can only be called in process context, so
the acceleration is implemented as a crypto_ahash. That is, a request
made in soft IRQ context will be deferred to the cryptd kernel thread.
Signed-off-by: Huang Ying
Signed-off-by: Herbert Xu
12 Sep, 2009
1 commit
-
* git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (102 commits)
crypto: sha-s390 - Fix warnings in import function
crypto: vmac - New hash algorithm for intel_txt support
crypto: api - Do not displace newly registered algorithms
crypto: ansi_cprng - Fix module initialization
crypto: xcbc - Fix alignment calculation of xcbc_tfm_ctx
crypto: fips - Depend on ansi_cprng
crypto: blkcipher - Do not use eseqiv on stream ciphers
crypto: ctr - Use chainiv on raw counter mode
Revert "crypto: fips - Select CPRNG"
crypto: rng - Fix typo
crypto: talitos - add support for 36 bit addressing
crypto: talitos - align locks on cache lines
crypto: talitos - simplify hmac data size calculation
crypto: mv_cesa - Add support for Orion5X crypto engine
crypto: cryptd - Add support to access underlying shash
crypto: gcm - Use GHASH digest algorithm
crypto: ghash - Add GHASH digest algorithm for GCM
crypto: authenc - Convert to ahash
crypto: api - Fix aligned ctx helper
crypto: hmac - Prehash ipad/opad
...
02 Sep, 2009
1 commit
-
This patch adds VMAC (a fast MAC) support into crypto framework.
Signed-off-by: Shane Wang
Signed-off-by: Joseph Cihula
Signed-off-by: Herbert Xu
29 Aug, 2009
1 commit
-
As struct skcipher_givcrypt_request includes struct crypto_request
at a non-zero offset, testing for NULL after converting the pointer
returned by crypto_dequeue_request does not work. This can result
in IPsec crashes when the queue is depleted.
This patch fixes it by doing the pointer conversion only when the
return value is non-NULL. In particular, we create a new function
__crypto_dequeue_request that does the pointer conversion.
Reported-by: Brad Bosch
Signed-off-by: Herbert Xu
06 Aug, 2009
1 commit
-
cryptd_alloc_ahash() allocates a cryptd-ed ahash for the specified
algorithm name. The newly allocated one is guaranteed to be a cryptd-ed
ahash, so the underlying shash can be obtained via cryptd_ahash_child().
Signed-off-by: Huang Ying
Signed-off-by: Herbert Xu
24 Jul, 2009
1 commit
-
The aligned ctx helper was using a bogus alignment value that was
one off the correct value. Fortunately the current users do not
require anything beyond the natural alignment of the platform, so
this hasn't caused a problem.
This patch fixes that and also removes the unnecessary minimum
check, since if the alignment is less than the natural alignment
the subsequent ALIGN operation should be a no-op.
Signed-off-by: Herbert Xu
22 Jul, 2009
2 commits
-
This patch replaces the 32-bit counters in sha512_generic with
64-bit counters. It also switches the bit count to the simpler
byte count.
Signed-off-by: Herbert Xu
-
This patch renames struct sha512_ctx and exports it as struct
sha512_state so that other sha512 implementations can use it
as the reference structure for exporting their state.
Signed-off-by: Herbert Xu
15 Jul, 2009
2 commits
-
When an shash algorithm is exported as ahash, ahash will access
its digest size through hash_alg_common. That's why the shash
layout needs to match hash_alg_common. This wasn't the case
because the alignments weren't identical.
This patch fixes the problem.
Signed-off-by: Herbert Xu
-
This patch exports the finup operation where available and adds
a default finup operation for ahash. The operations final, finup
and digest will now also deal with unaligned result pointers by
copying the result. Finally, export/import operations will now be
exported too.
Signed-off-by: Herbert Xu
14 Jul, 2009
10 commits
-
Now that all ahash implementations have been converted to the new
ahash type, we can remove old_ahash_alg and its associated support.
Signed-off-by: Herbert Xu
-
This patch changes crypto4xx to use the new style ahash type.
In particular, we now use ahash_alg to define ahash algorithms
instead of crypto_alg.
This is achieved by introducing a union that encapsulates the
new type and the existing crypto_alg structure. They're told
apart through a u32 field containing the type value.
Signed-off-by: Herbert Xu
-
This patch changes cryptd to use the template->create function
instead of alloc in anticipation of the switch to new style
ahash algorithms.
Signed-off-by: Herbert Xu
-
This patch adds the helpers crypto_drop_ahash and crypto_drop_shash
so that these spawns can be dropped without ugly casts.
Signed-off-by: Herbert Xu
-
This patch adds support for creating ahash instances and using
ahash as spawns.
Signed-off-by: Herbert Xu
-
This patch converts crypto_ahash to the new style. The old ahash
algorithm type is retained until the existing ahash implementations
are also converted. All ahash users will automatically get the
new crypto_ahash type.
Signed-off-by: Herbert Xu
-
As the extsize and init_tfm functions belong to the frontend, the
frontend argument is superfluous.
Signed-off-by: Herbert Xu
-
This patch adds the helper crypto_ahash_set_reqsize so that
implementations do not directly access the crypto_ahash structure.
Signed-off-by: Herbert Xu
-
This patch exports the async functions so that they can be reused
by cryptd when it switches over to using shash.
Signed-off-by: Herbert Xu
-
This patch changes descsize to a run-time attribute so that
implementations can change it in their init functions.
Signed-off-by: Herbert Xu
12 Jul, 2009
1 commit
-
This patch changes the kfree call to kzfree for async requests.
As the request may contain sensitive data it needs to be zeroed
before it can be reallocated by others.
Signed-off-by: Herbert Xu
11 Jul, 2009
3 commits
-
This patch adds export/import support to sha256_generic. The exported
type is defined by struct sha256_state, which is basically the entire
descriptor state of sha256_generic.
Signed-off-by: Herbert Xu
-
This patch adds export/import support to sha1_generic. The exported
type is defined by struct sha1_state, which is basically the entire
descriptor state of sha1_generic.
Signed-off-by: Herbert Xu
-
This patch replaces the full descriptor export with an export of
the partial hash state. This allows the use of a consistent export
format across all implementations of a given algorithm.
This is useful because a number of cases require the use of the
partial hash state, e.g., PadLock can use the SHA1 hash state
to get around the fact that it can only hash contiguous data
chunks.
Signed-off-by: Herbert Xu
09 Jul, 2009
1 commit
-
This patch adds the helper shash_instance_ctx which is the shash
analogue of crypto_instance_ctx.
Signed-off-by: Herbert Xu
08 Jul, 2009
8 commits
-
This patch adds __crypto_shash_cast, which turns a crypto_tfm
into a crypto_shash. It's analogous to the other __crypto_*_cast
functions.
It hasn't been needed until now since no existing shash algorithms
have had an init function.
Signed-off-by: Herbert Xu
-
This patch adds crypto_shash_ctx_aligned which will be needed
by hmac after its conversion to shash.
Signed-off-by: Herbert Xu
-
This patch adds shash_register_instance so that shash instances
can be registered without bypassing the shash checks applied to
normal algorithms.
Signed-off-by: Herbert Xu
-
This patch adds the helper shash_attr_alg2 which locates a shash
algorithm based on the information in the given attribute.
Signed-off-by: Herbert Xu
-
This patch adds the helper crypto_attr_alg2 which is similar to
crypto_attr_alg but takes an extra frontend argument. This is
intended to be used by new style algorithm types such as shash.
Signed-off-by: Herbert Xu
-
This patch adds the functions needed to create and use shash
spawns, i.e., to use shash algorithms in a template.
Signed-off-by: Herbert Xu
-
This patch modifies the spawn infrastructure to support new style
algorithms like shash. In particular, this means storing the
frontend type in the spawn and using crypto_create_tfm to allocate
the tfm.
Signed-off-by: Herbert Xu
-
This patch adds shash_instance and the associated alloc/free
functions. This is meant to be an instance with a shash
algorithm under it. Note that the instance itself doesn't have
to be shash.
Signed-off-by: Herbert Xu
07 Jul, 2009
2 commits
-
This patch adds a new argument to crypto_alloc_instance which
sets aside some space before the instance for use by algorithms
such as shash that place type-specific data before crypto_alg.
For compatibility, the function has been renamed so that existing
users aren't affected.
Signed-off-by: Herbert Xu
-
This patch introduces the template->create function intended
to replace the existing alloc function. The intention is for
create to handle the registration directly, whereas currently
the caller of alloc has to handle the registration.
This allows type-specific code to be run prior to registration.
Signed-off-by: Herbert Xu
04 Mar, 2009
2 commits
-
Signed-off-by: Geert Uytterhoeven
Cc: James Morris
Signed-off-by: Herbert Xu
-
The current "comp" crypto interface supports one-shot (de)compression only,
i.e. the whole data buffer to be (de)compressed must be passed at once, and
the whole (de)compressed data buffer will be received at once.
In several use-cases (e.g. compressed file systems that store files in big
compressed blocks), this workflow is not suitable.
Furthermore, the "comp" type doesn't provide for the configuration of
(de)compression parameters, and always allocates workspace memory for both
compression and decompression, which may waste memory.
To solve this, add a "pcomp" partial (de)compression interface that provides
the following operations:
- crypto_compress_{init,update,final}() for compression,
- crypto_decompress_{init,update,final}() for decompression,
- crypto_{,de}compress_setup(), to configure (de)compression parameters
(incl. allocating workspace memory).
The (de)compression methods take a struct comp_request, which is modeled
after the z_stream object in zlib and contains buffer pointer and length
pairs for input and output.
The setup methods take an opaque parameter pointer and length pair. Parameters
are supposed to be encoded using netlink attributes, whose meanings depend on
the actual (name of the) (de)compression algorithm.
Signed-off-by: Geert Uytterhoeven
Signed-off-by: Herbert Xu