11 Oct, 2007

5 commits

  • XTS is currently considered to be the successor of the LRW mode by the
    IEEE 1619 workgroup. LRW was discarded because it is not secure if the
    encryption key itself is encrypted with LRW.

    XTS does not have this problem. The implementation is pretty straightforward;
    a new function was added to gf128mul to handle GF(128) elements in the ble
    format. Four test vectors from the specification
    http://grouper.ieee.org/groups/1619/email/pdf00086.pdf
    were added, and they verify on my system.
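
    To make the per-block flow concrete, here is a minimal sketch of the
    XTS steps from the specification (hypothetical helper names, not the
    kernel's xts.c): the tweak starts as the sector number encrypted under
    the second key, and is stepped from block to block by a GF(128)
    multiplication by the primitive element, which is what the new ble
    support in gf128mul is used for.

    #include <stdint.h>

    typedef void (*cipher_fn)(const void *key,
                              const uint8_t in[16], uint8_t out[16]);

    /* t <- t * alpha in GF(2^128), little-endian byte order as in
     * IEEE 1619, reducing with x^128 + x^7 + x^2 + x + 1 */
    static void gf128_mul_alpha(uint8_t t[16])
    {
            int i, carry = t[15] >> 7;

            for (i = 15; i > 0; i--)
                    t[i] = (uint8_t)((t[i] << 1) | (t[i - 1] >> 7));
            t[0] <<= 1;
            if (carry)
                    t[0] ^= 0x87;
    }

    /* encrypt one 16-byte block in place, then step the tweak; the
     * initial tweak is E_K2(sector number) */
    static void xts_encrypt_block(cipher_fn enc, const void *key1,
                                  uint8_t tweak[16], uint8_t block[16])
    {
            int i;

            for (i = 0; i < 16; i++)
                    block[i] ^= tweak[i];      /* PP = P xor T */
            enc(key1, block, block);           /* CC = E_K1(PP) */
            for (i = 0; i < 16; i++)
                    block[i] ^= tweak[i];      /* C = CC xor T */
            gf128_mul_alpha(tweak);            /* next block's tweak */
    }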

    Signed-off-by: Rik Snel
    Signed-off-by: Herbert Xu

    Rik Snel
     
  • This patch adds the authenc algorithm which constructs an AEAD algorithm
    from an asynchronous block cipher and a hash. The construction is done
    by concatenating the encrypted result from the cipher with the output
    from the hash, as is used by the IPsec ESP protocol.

    The authenc algorithm exists as a template with four parameters:

    authenc(auth, authsize, enc, enckeylen).

    The parameters are the authentication algorithm, the authentication size
    (i.e., the length to which the output of the authentication algorithm is
    truncated), the encryption algorithm, and the encryption key length. Both
    the size field and the key length field are in bytes. For example, AES-128
    with SHA1-HMAC would be represented by

    authenc(hmac(sha1), 12, cbc(aes), 16)

    The key for the authenc algorithm is the concatenation of the keys for
    the authentication algorithm with the encryption algorithm. For the
    above example, if a key of length 36 bytes is given, then hmac(sha1)
    would receive the first 20 bytes while the last 16 would be given to
    cbc(aes).
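
    A minimal sketch of that split (the struct and helper are illustrative
    only, not the kernel's actual authenc setkey code):

    #include <stddef.h>

    /* Illustration of the split for authenc(hmac(sha1), 12, cbc(aes), 16)
     * with a 36-byte key: the leading keylen - enckeylen bytes feed the
     * auth algorithm, the trailing enckeylen bytes feed the cipher.
     * Hypothetical helper, not a kernel API. */
    struct authenc_keys {
            const unsigned char *authkey;   /* key for hmac(sha1) */
            size_t authkeylen;              /* 20 in the example */
            const unsigned char *enckey;    /* key for cbc(aes) */
            size_t enckeylen;               /* 16 in the example */
    };

    static int authenc_split_key(struct authenc_keys *keys,
                                 const unsigned char *key, size_t keylen,
                                 size_t enckeylen /* template parameter */)
    {
            if (keylen < enckeylen)
                    return -1;      /* not enough material for the cipher */

            keys->authkey = key;
            keys->authkeylen = keylen - enckeylen;
            keys->enckey = key + keys->authkeylen;
            keys->enckeylen = enckeylen;
            return 0;
    }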

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch adds crypto_aead which is the interface for AEAD
    (Authenticated Encryption with Associated Data) algorithms.

    AEAD algorithms perform authentication and encryption in one
    step. Traditionally users (such as IPsec) would use two
    different crypto algorithms to perform these. With AEAD
    this comes down to one algorithm and one operation.

    Of course if traditional algorithms were used we'd still
    be doing two operations underneath. However, real AEAD
    algorithms may allow the underlying operations to be
    optimised as well.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch adds support for the SEED cipher (RFC 4269).

    SEED has been used by a few VPN appliance vendors in Korea for several
    years, and it was verified by KISA, which developed the algorithm
    itself.

    Given its importance in the Korean banking industry, it would be great
    if Linux incorporated the support.

    Signed-off-by: Hye-Shik Chang
    Signed-off-by: Herbert Xu

    Hye-Shik Chang
     
  • Other options requiring specific block cipher algorithms already have
    the appropriate selects.

    Signed-off-by: Adrian Bunk
    Signed-off-by: Herbert Xu

    Adrian Bunk
     

15 Jul, 2007

1 commit


13 Jul, 2007

2 commits

  • The async_tx api provides methods for describing a chain of asynchronous
    bulk memory transfers/transforms with support for inter-transactional
    dependencies. It is implemented as a dmaengine client that smooths over
    the details of different hardware offload engine implementations. Code
    that is written to the api can optimize for asynchronous operation and the
    api will fit the chain of operations to the available offload resources.

    I imagine that any piece of ADMA hardware would register with the
    'async_*' subsystem, and a call to async_X would be routed as
    appropriate, or be run in-line. - Neil Brown

    async_tx exploits the capabilities of struct dma_async_tx_descriptor to
    provide an api of the following general format:

    struct dma_async_tx_descriptor *
    async_<operation>(..., struct dma_async_tx_descriptor *depend_tx,
                      dma_async_tx_callback cb_fn, void *cb_param)
    {
            struct dma_chan *chan = async_tx_find_channel(depend_tx, <operation>);
            struct dma_device *device = chan ? chan->device : NULL;
            int int_en = cb_fn ? 1 : 0;
            struct dma_async_tx_descriptor *tx = device ?
                    device->device_prep_dma_<operation>(chan, len, int_en) : NULL;

            if (tx) { /* run <operation> asynchronously */
                    ...
                    tx->tx_set_dest(addr, tx, index);
                    ...
                    tx->tx_set_src(addr, tx, index);
                    ...
                    async_tx_submit(chan, tx, flags, depend_tx, cb_fn, cb_param);
            } else { /* run <operation> synchronously */
                    ...
                    <operation>
                    ...
                    async_tx_sync_epilog(flags, depend_tx, cb_fn, cb_param);
            }

            return tx;
    }

    async_tx_find_channel() returns a capable channel from its pool. The
    channel pool is organized as a per-cpu array of channel pointers. The
    async_tx_rebalance() routine is tasked with managing these arrays. In the
    uniprocessor case async_tx_rebalance() tries to spread responsibility
    evenly over channels of similar capabilities. For example if there are two
    copy+xor channels, one will handle copy operations and the other will
    handle xor. In the SMP case async_tx_rebalance() attempts to spread the
    operations evenly over the cpus, e.g. cpu0 gets copy channel0 and xor
    channel0 while cpu1 gets copy channel1 and xor channel1. When a
    dependency is specified, async_tx_find_channel defaults to keeping the
    operation on the same channel. An xor->copy->xor chain will stay on one
    channel if it supports both operation types; otherwise the transaction
    will transition between a copy and an xor resource.
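
    As a usage illustration, the xor->copy->xor chain just mentioned might
    be expressed along these lines (a hedged sketch: the async_xor() and
    async_memcpy() signatures and flag names are assumptions about this
    era of the api, and the page arguments are placeholders):

    #include <linux/async_tx.h>

    static void xor_copy_xor(struct page *dest, struct page **srcs,
                             int src_cnt, struct page *copy,
                             dma_async_tx_callback done, void *done_arg)
    {
            struct dma_async_tx_descriptor *tx;

            /* stage 1: xor the sources into dest */
            tx = async_xor(dest, srcs, 0, src_cnt, PAGE_SIZE,
                           ASYNC_TX_XOR_ZERO_DST, NULL, NULL, NULL);

            /* stage 2: copy the result; depends on stage 1 */
            tx = async_memcpy(copy, dest, 0, 0, PAGE_SIZE,
                              ASYNC_TX_DEP_ACK, tx, NULL, NULL);

            /* stage 3: xor again; depends on stage 2, callback fires at
             * the end (in a tasklet, or after a synchronous fallback) */
            async_xor(dest, srcs, 0, src_cnt, PAGE_SIZE,
                      ASYNC_TX_DEP_ACK | ASYNC_TX_ACK, tx, done, done_arg);
    }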

    Currently the raid5 implementation in the MD raid456 driver has been
    converted to the async_tx api. A driver for the offload engines on the
    Intel Xscale series of I/O processors, iop-adma, is provided in a later
    commit. With the iop-adma driver and async_tx, raid456 is able to offload
    copy, xor, and xor-zero-sum operations to hardware engines.

    On iop342 tiobench showed higher throughput for sequential writes (20 - 30%
    improvement) and sequential reads to a degraded array (40 - 55%
    improvement). For the other cases performance was roughly equal, +/- a few
    percentage points. On an x86-smp platform the performance of the async_tx
    implementation (in synchronous mode) was also +/- a few percentage points
    of the original implementation. According to 'top' on iop342 CPU
    utilization drops from ~50% to ~15% during a 'resync' while the speed
    according to /proc/mdstat doubles from ~25 MB/s to ~50 MB/s.

    The tiobench command line used for testing was: tiobench --size 2048
    --block 4096 --block 131072 --dir /mnt/raid --numruns 5
    * iop342 had 1GB of memory available

    Details:
    * if CONFIG_DMA_ENGINE=n the asynchronous path is compiled away by making
    async_tx_find_channel a static inline routine that always returns NULL
    * when a callback is specified for a given transaction an interrupt will
    fire at operation completion time and the callback will occur in a
    tasklet. If the channel does not support interrupts then a live
    polling wait will be performed
    * the api is written as a dmaengine client that requests all available
    channels
    * In support of dependencies the api implicitly schedules channel-switch
    interrupts. The interrupt triggers the cleanup tasklet which causes
    pending operations to be scheduled on the next channel
    * Xor engines treat an xor destination address differently than a software
    xor routine. To the software routine the destination address is an implied
    source, whereas engines treat it as a write-only destination. This patch
    modifies the xor_blocks routine to take an explicit destination address
    to mirror the hardware.

    Changelog:
    * fixed a leftover debug print
    * don't allow callbacks in async_interrupt_cond
    * fixed xor_block changes
    * fixed usage of ASYNC_TX_XOR_DROP_DEST
    * drop dma mapping methods, suggested by Chris Leech
    * printk warning fixups from Andrew Morton
    * don't use inline in C files, Adrian Bunk
    * select the API when MD is enabled
    * BUG_ON xor source counts
    Signed-off-by: Dan Williams
    Acked-By: NeilBrown

    Dan Williams
     
  • The async_tx api tries to use a dma engine for an operation, but falls
    back to an optimized software routine when no engine is available. Xor
    support is implemented using the raid5 xor routines. For organizational
    purposes this routine is moved to a common area.

    The following fixes are also made:
    * rename xor_block => xor_blocks, suggested by Adrian Bunk
    * ensure that xor.o initializes before md.o in the built-in case
    * checkpatch.pl fixes
    * mark calibrate_xor_blocks __init, Adrian Bunk

    Cc: Adrian Bunk
    Cc: NeilBrown
    Cc: Herbert Xu
    Signed-off-by: Dan Williams

    Dan Williams
     

11 Jul, 2007

1 commit


09 May, 2007

1 commit


02 May, 2007

2 commits


09 Feb, 2007

1 commit


07 Feb, 2007

3 commits


06 Feb, 2007

1 commit

  • This patch moves the config options for the s390 crypto instructions
    to the standard "Hardware crypto devices" menu. In addition some
    cleanup has been done: use a flag for supported key lengths, add a
    warning about machine limitations, return ENOTSUPP in case the
    hardware has no support, remove superfluous printks and update
    email addresses.

    Signed-off-by: Jan Glauber
    Signed-off-by: Martin Schwidefsky

    Jan Glauber
     

07 Dec, 2006

3 commits

  • This is the main module; it implements the Liskov-Rivest-Wagner (LRW)
    block cipher mode in the new blockcipher API. The implementation is
    based on ecb.c.

    The LRW-32-AES specification I used can be found at:
    http://grouper.ieee.org/groups/1619/email/pdf00017.pdf

    It implements the optimization specified as optional in the
    specification, and in addition it uses optimized multiplication
    routines from gf128mul.c.

    Since gf128mul.[ch] has not been tested on big-endian machines, this
    cipher mode may currently fail badly on them.

    Signed-off-by: Rik Snel
    Signed-off-by: Herbert Xu

    Rik Snel
     
  • A lot of cipher modes need multiplications in GF(2^128): LRW, ABL,
    GCM, and so on. I use functions from this library in my LRW
    implementation, and I will also use them in my ABL (Arbitrary Block
    Length, an unencumbered (correct me if I am wrong) wide block cipher
    mode).

    Elements of GF(2^128) must be presented as u128 *; this encourages
    automatic and proper alignment.

    The library contains support for two different representations of
    GF(2^128); see the comment in gf128mul.h. There are different levels
    of optimization (memory/speed tradeoff).
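
    To make concrete what such a multiplication involves, here is a
    minimal bit-at-a-time sketch in a big-endian ('bbe'-style)
    representation, assuming the reduction polynomial
    x^128 + x^7 + x^2 + x + 1 used for these modes; the library's actual
    routines are table-driven and much faster:

    #include <stdint.h>

    struct u128_sketch { uint64_t hi, lo; }; /* hi holds x^127..x^64 */

    /* r = x * y in GF(2^128), shift-and-reduce; illustrative only */
    static void gf128mul_sketch(struct u128_sketch *r,
                                const struct u128_sketch *x,
                                const struct u128_sketch *y)
    {
            uint64_t vh = x->hi, vl = x->lo;    /* v = x * t^i */
            uint64_t zh = 0, zl = 0;            /* accumulator */
            int i;

            for (i = 0; i < 128; i++) {
                    /* if coefficient i of y is set, add v into z */
                    int bit = (i < 64) ? (y->lo >> i) & 1
                                       : (y->hi >> (i - 64)) & 1;
                    uint64_t carry = vh >> 63;

                    if (bit) {
                            zh ^= vh;
                            zl ^= vl;
                    }
                    /* v = v * t, reducing t^128 to t^7 + t^2 + t + 1 */
                    vh = (vh << 1) | (vl >> 63);
                    vl <<= 1;
                    if (carry)
                            vl ^= 0x87;
            }
            r->hi = zh;
            r->lo = zl;
    }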

    The code is based on work by Dr Brian Gladman. Notable changes:
    - deletion of two optimization modes
    - change from u32 to u64 for faster handling on 64-bit machines
    - support for the 'bbe' representation in addition to the already
    implemented 'lle' representation
    - move 'inline void' functions from the header to 'static void' in the
    source file
    - update to use the Linux coding style conventions

    The original can be found at:
    http://fp.gladman.plus.com/AES/modes.vc8.19-06-06.zip

    The copyright (and GPL statement) of the original author is preserved.

    Signed-off-by: Rik Snel
    Signed-off-by: Herbert Xu

    Rik Snel
     
  • This is the core code of XCBC.

    XCBC is an algorithm that forms a MAC algorithm out of a cipher algorithm.
    For example, AES-XCBC-MAC is a MAC algorithm based on the AES cipher
    algorithm.
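
    To show the shape of the construction (per RFC 3566, which defines
    AES-XCBC-MAC), here is a minimal sketch; 'aes_encrypt_block' is a
    hypothetical single-block primitive, not a kernel API:

    #include <stdint.h>
    #include <string.h>

    typedef void (*block_fn)(const uint8_t key[16],
                             const uint8_t in[16], uint8_t out[16]);

    static void xcbc_mac(block_fn aes_encrypt_block, const uint8_t key[16],
                         const uint8_t *msg, size_t len, uint8_t mac[16])
    {
            uint8_t k1[16], k2[16], k3[16], c[16];
            uint8_t e[16] = { 0 }, blk[16];
            size_t j;

            /* derive three subkeys by encrypting fixed constants under K */
            memset(c, 0x01, 16); aes_encrypt_block(key, c, k1);
            memset(c, 0x02, 16); aes_encrypt_block(key, c, k2);
            memset(c, 0x03, 16); aes_encrypt_block(key, c, k3);

            /* CBC-MAC every block but the last under K1 */
            while (len > 16) {
                    for (j = 0; j < 16; j++)
                            blk[j] = msg[j] ^ e[j];
                    aes_encrypt_block(k1, blk, e);
                    msg += 16;
                    len -= 16;
            }

            /* last block: XOR in K2 if complete, else pad 10* and XOR in K3 */
            if (len == 16) {
                    for (j = 0; j < 16; j++)
                            blk[j] = msg[j] ^ e[j] ^ k2[j];
            } else {
                    memset(blk, 0, 16);
                    memcpy(blk, msg, len);
                    blk[len] = 0x80;        /* 10* padding */
                    for (j = 0; j < 16; j++)
                            blk[j] ^= e[j] ^ k3[j];
            }
            aes_encrypt_block(k1, blk, mac);
    }

    In practice the output is then truncated, e.g. to 96 bits for
    AES-XCBC-MAC-96.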

    Signed-off-by: Kazunori MIYAZAWA
    Signed-off-by: Herbert Xu

    Kazunori MIYAZAWA
     

16 Oct, 2006

1 commit


21 Sep, 2006

11 commits

  • This patch removes the old HMAC implementation now that nobody uses it
    anymore.

    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
     
  • This patch rewrites HMAC as a crypto template. This means that HMAC is no
    longer a hard-coded part of the API. It's now a template that generates
    standard digest algorithms like any other.

    The old HMAC is preserved until all current users are converted.

    The same structure can be used by other MACs such as AES-XCBC-MAC.

    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
     
  • The existing digest user interface is inadequate for supporting
    asynchronous operations. For one thing, it doesn't return a value to
    indicate success or failure, nor does it take a per-operation descriptor,
    which is essential for issuing requests while other requests are still
    outstanding.

    This patch is the first in a series of steps to remodel the interface
    for asynchronous operations.

    For the ease of transition the new interface will be known as "hash"
    while the old one will remain as "digest".

    This patch also changes sg_next to allow chaining.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch adds block cipher algorithms for S390. Once all users of the
    old cipher type have been converted, the existing CBC/ECB non-block-cipher
    operations will be removed.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch adds two block cipher algorithms, CBC and ECB. These
    are implemented as templates on top of existing single-block cipher
    algorithms. They invoke the single-block cipher through the new
    encrypt_one/decrypt_one interface.

    This also optimises the in-place encryption and decryption to remove
    the cost of an IV copy each round.
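
    The in-place optimization amounts to chaining off the ciphertext block
    just written instead of copying it into a separate IV buffer after each
    round. A minimal sketch of in-place CBC encryption (illustrative names,
    not the kernel's cbc.c; 'encrypt_one' stands in for the single-block
    interface mentioned above):

    #include <stddef.h>

    typedef void (*encrypt_one_fn)(void *tfm, unsigned char *dst,
                                   const unsigned char *src);

    static void cbc_encrypt_inplace(void *tfm, encrypt_one_fn encrypt_one,
                                    unsigned char *data, size_t nblocks,
                                    const unsigned char *iv, size_t bsize)
    {
            size_t i, j;

            for (i = 0; i < nblocks; i++) {
                    unsigned char *blk = data + i * bsize;

                    /* XOR in the previous ciphertext block (or the IV) */
                    for (j = 0; j < bsize; j++)
                            blk[j] ^= iv[j];
                    encrypt_one(tfm, blk, blk);     /* encrypt in place */
                    iv = blk;       /* chain off this block: no memcpy */
            }
    }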

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • This patch adds a new type of block cipher. Unlike current cipher
    algorithms, which operate on a single block at a time, block ciphers
    operate on an arbitrarily long linear area of data. As this type is
    block-based, it will skip any data remaining at the end which cannot
    form a block.

    The block cipher has one major difference when compared to the existing
    block cipher implementation. The sg walking is now performed by the
    algorithm rather than the cipher mid-layer. This is needed for drivers
    that directly support sg lists. It also improves performance for all
    algorithms as it reduces the total number of indirect calls by one.
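
    To give a feel for the sg walking now being driven by the algorithm,
    here is a hedged sketch loosely modeled on the ecb template from this
    series (treat the blkcipher_walk helper names as assumptions about
    this era of the API; per-block processing is elided):

    static int ecb_style_crypt(struct blkcipher_desc *desc,
                               struct scatterlist *dst,
                               struct scatterlist *src,
                               unsigned int nbytes, unsigned int bsize)
    {
            struct blkcipher_walk walk;
            int err;

            /* the algorithm, not the mid-layer, walks the sg lists */
            blkcipher_walk_init(&walk, dst, src, nbytes);
            err = blkcipher_walk_virt(desc, &walk);

            while ((nbytes = walk.nbytes)) {
                    u8 *wsrc = walk.src.virt.addr;
                    u8 *wdst = walk.dst.virt.addr;

                    do {
                            /* encrypt_one/decrypt_one on (wdst, wsrc) */
                            wsrc += bsize;
                            wdst += bsize;
                    } while ((nbytes -= bsize) >= bsize);

                    /* leftover bytes that cannot form a block are
                     * skipped, as described above */
                    err = blkcipher_walk_done(desc, &walk, nbytes);
            }

            return err;
    }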

    In future the existing cipher algorithm will be converted to only have
    a single-block interface. This will be done after all existing users
    have switched over to the new block cipher type.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • The cryptomgr module is a simple manager of crypto algorithm instances.
    It ensures that parameterised algorithms of the type tmpl(alg) (e.g.,
    cbc(aes)) are always created.

    This is meant to satisfy the needs of most users. For more complex
    cases such as deeper combinations or multiple parameters, a netlink
    module will be created which allows arbitrary expressions to be parsed
    in user-space.

    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Herbert Xu
     
  • The crypto API is made up of the part facing users such as IPsec and the
    low-level part which is used by cryptographic entities such as algorithms.
    This patch splits out the latter so that the two APIs are more clearly
    delineated. As a bonus the low-level API can now be modularised if all
    algorithms are built as modules.

    Signed-off-by: Herbert Xu

    Herbert Xu
     
  • The patch passed the tcrypt tests and automated filesystem tests.
    This rewrite resulted in a nice performance increase over my last patch.

    Short summary of the tcrypt benchmarks:

    Twofish Assembler vs. Twofish C (256bit 8kb block CBC)
    encrypt: -27% Cycles
    decrypt: -23% Cycles

    Twofish Assembler vs. AES Assembler (128bit 8kb block CBC)
    encrypt: +18% Cycles
    decrypt: +15% Cycles

    Twofish Assembler vs. AES Assembler (256bit 8kb block CBC)
    encrypt: -9% Cycles
    decrypt: -8% Cycles

    Full Output:
    http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-twofish-c-x86_64.txt
    http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-twofish-asm-x86_64.txt
    http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-aes-asm-x86_64.txt

    Here is another bonnie++ benchmark with encrypted filesystems. Most runs
    maxed out the hard disk. It should give some idea of what the module can
    do for encrypted filesystem performance even though you can't see the
    full numbers.

    http://homepages.tu-darmstadt.de/~fritschi/twofish/output_20060610_130806_x86_64.html

    Signed-off-by: Joachim Fritschi
    Signed-off-by: Herbert Xu

    Joachim Fritschi
     
  • The patch passed the tcrypt tests and automated filesystem tests.
    This rewrite resulted in a nice performance increase over my last patch.

    Short summary of the tcrypt benchmarks:

    Twofish Assembler vs. Twofish C (256bit 8kb block CBC)
    encrypt: -33% Cycles
    decrypt: -45% Cycles

    Twofish Assembler vs. AES Assembler (128bit 8kb block CBC)
    encrypt: +3% Cycles
    decrypt: -22% Cycles

    Twofish Assembler vs. AES Assembler (256bit 8kb block CBC)
    encrypt: -20% Cycles
    decrypt: -36% Cycles

    Full Output:
    http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-twofish-asm-i586.txt
    http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-twofish-c-i586.txt
    http://homepages.tu-darmstadt.de/~fritschi/twofish/tcrypt-speed-aes-asm-i586.txt

    Here is another bonnie++ benchmark with encrypted filesystems. All runs
    with the twofish assembler modules max out the drive speed. It should
    give some idea of what the module can do for encrypted filesystem
    performance even though you can't see the full numbers.

    http://homepages.tu-darmstadt.de/~fritschi/twofish/output_20060611_205432_x86.html

    Signed-off-by: Joachim Fritschi
    Signed-off-by: Herbert Xu

    Joachim Fritschi
     
  • This patch splits up the twofish crypto routine into a common part (key
    setup), which will be used by all twofish crypto modules (generic C, i586
    assembler, and x86_64 assembler), and a generic C part. It also creates a
    new header file which will be used by all three modules.

    This eliminates all code duplication.

    Correctness was verified with the tcrypt module and automated test scripts.

    Signed-off-by: Joachim Fritschi
    Signed-off-by: Herbert Xu

    Joachim Fritschi
     

26 Jun, 2006

1 commit


10 Jan, 2006

1 commit

  • As the Crypto API now allows multiple implementations to be registered
    for the same algorithm, we no longer have to play tricks with Kconfig
    to select the right AES implementation.

    This patch sets the driver name and priority for all the AES
    implementations and removes the Kconfig conditions on the C implementation
    for AES.

    Signed-off-by: Herbert Xu

    Herbert Xu
     

07 Jan, 2006

4 commits

  • Sanitize some s390 Kconfig options. We have ARCH_S390, ARCH_S390X,
    ARCH_S390_31, 64BIT, S390_SUPPORT and COMPAT. Replace these 6 options by
    S390, 64BIT and COMPAT.

    Signed-off-by: Martin Schwidefsky
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Martin Schwidefsky
     
  • Add support for the hardware accelerated AES crypto algorithm.

    Signed-off-by: Jan Glauber
    Signed-off-by: Martin Schwidefsky
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Glauber
     
  • Add support for the hardware accelerated sha256 crypto algorithm.

    Signed-off-by: Jan Glauber
    Signed-off-by: Martin Schwidefsky
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Glauber
     
  • Replace all references to z990 by s390 in the in-kernel crypto files in
    arch/s390/crypto. The code is not specific to a particular machine (z990) but
    to the s390 platform. Big diff, does nothing..

    Signed-off-by: Jan Glauber
    Signed-off-by: Martin Schwidefsky
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Glauber
     

02 Sep, 2005

1 commit

  • The XTEA implementation was incorrect due to a misinterpretation of
    operator precedence. Because of the widespread nature of this
    error, the erroneous implementation will be kept, albeit under the
    new name of XETA.
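
    The trap is that '+' binds tighter than '^' in C, so XTEA's published
    round function

        y += (z << 4 ^ z >> 5) + z ^ sum + key[sum & 3];

    parses as the first form below, while reading '^' as if it bound
    tighter yields the second, incompatible cipher (the form kept in-tree
    as XETA):

    #include <stdint.h>

    /* correct XTEA round: ((z<<4 ^ z>>5) + z) ^ (sum + key[sum & 3]) */
    static void xtea_round(uint32_t *y, uint32_t z, uint32_t sum,
                           const uint32_t key[4])
    {
            *y += ((z << 4 ^ z >> 5) + z) ^ (sum + key[sum & 3]);
    }

    /* misparsed variant, preserved as XETA */
    static void xeta_round(uint32_t *y, uint32_t z, uint32_t sum,
                           const uint32_t key[4])
    {
            *y += (z << 4 ^ z >> 5) + (z ^ sum) + key[sum & 3];
    }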

    Signed-off-by: Aaron Grothe
    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Aaron Grothe
     

07 Jul, 2005

1 commit

  • Implementation:
    ===============
    The encrypt/decrypt code is based on an x86 implementation I did a while
    ago which I never published. This unpublished implementation does
    include an assembler based key schedule and precomputed tables. For
    simplicity and best acceptance, however, I took Gladman's in-kernel code
    for table generation and key schedule for the kernel port of my
    assembler code and modified this code to produce the key schedule as
    required by my assembler implementation. File locations and Kconfig are
    kept similar to the i586 AES assembler implementation.
    It may seem a little bit strange to use 32 bit I/O and registers in the
    assembler implementation but this gives the best code size. My
    implementation takes one instruction more per round compared to
    Gladman's x86 assembler but it doesn't require any stack for local
    variables or saved registers and it is less serialized than Gladman's
    code.
    Note that all comparisons to Gladman's code were done after my code was
    implemented. I only used FIPS PUB 197 for the implementation, so my
    implementation is independent work.
    If anybody has a better assembler solution for x86_64 I'll be pleased to
    have my code replaced with the better solution.

    Testing:
    ========
    The implementation passes the in-kernel crypto testing module and I'm
    running it without any problems on my laptop where it is mainly used for
    dm-crypt.

    Microbenchmark:
    ===============
    The microbenchmark was done in userspace with similar compile flags as
    used during kernel compile.
    Encrypt/decrypt is about 35% faster than the generic C implementation.
    As the generic C and my assembler implementation are both table-driven,
    I don't really expect that there is much room for further improvements,
    though I'll be glad to be corrected here.
    The key schedule is about 5% slower than the generic C implementation.
    This is due to the fact that some more work has to be done in the key
    schedule routine to fit the schedule to the assembler implementation.

    Code Size:
    ==========
    Encrypt and decrypt are together about 2.1 Kbytes smaller than the
    generic C implementation which is important with regard to L1 cache
    usage. The key schedule routine is about 100 bytes larger than the
    generic C implementation.

    Data Size:
    ==========
    There's no difference in data size requirements between the assembler
    implementation and the generic C implementation.

    License:
    ========
    Gladman's code is dual BSD/GPL whereas my assembler code is GPLv2 only
    (I'm not going to change the license for my code). So I had to change
    the module license for the x86_64 aes module from 'Dual BSD/GPL' to
    'GPL' to reflect the most restrictive license within the module.

    Signed-off-by: Andreas Steinmetz
    Signed-off-by: Herbert Xu
    Signed-off-by: David S. Miller

    Andreas Steinmetz