26 Sep, 2011

1 commit

  • If optional discard support in dm-crypt is enabled, discard requests
    bypass the crypt queue and the blocks of the underlying device are
    discarded. On the read path, discarded blocks are handled the same as
    normal ciphertext blocks, and are therefore decrypted.

    So if the underlying device announces that discarded regions return
    zeroes, dm-crypt must disable that flag, because after decryption
    there is just random noise there instead of zeroes.
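
    A minimal sketch of that behaviour, assuming the flag is cleared in
    the target's io_hints hook (the hook and field names are assumptions
    based on the queue-limits API of the time, not a verbatim quote of
    the patch):

        static void crypt_io_hints(struct dm_target *ti,
                                   struct queue_limits *limits)
        {
                /* decrypted "discarded" data is noise, never zeroes */
                limits->discard_zeroes_data = 0;
        }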

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     

02 Aug, 2011

3 commits

  • Add an optional parameter field to the dm-crypt table and support
    the "allow_discards" option.

    Discard requests bypass crypt queue processing: the bio is simply
    remapped to the underlying device.
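
    A minimal sketch of that remap, assuming the usual crypt_map() shape
    of the period (the REQ_DISCARD test and the cc->dev/cc->start fields
    are period-accurate assumptions, not a verbatim diff):

        static int crypt_map(struct dm_target *ti, struct bio *bio,
                             union map_info *map_context)
        {
                struct crypt_config *cc = ti->private;

                if (unlikely(bio->bi_rw & REQ_DISCARD)) {
                        bio->bi_bdev = cc->dev->bdev;
                        if (bio_sectors(bio))
                                bio->bi_sector = cc->start +
                                    dm_target_offset(ti, bio->bi_sector);
                        return DM_MAPIO_REMAPPED; /* skip crypt queues */
                }

                /* normal path (omitted): queue for crypt processing */
                return DM_MAPIO_SUBMITTED;
        }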

    Note that discards will never be enabled by default because of the
    security consequences. It is up to the administrator to enable them
    for encrypted devices.

    (Note that userspace cryptsetup does not yet understand the new
    optional parameters. Support for this will come later. Until then,
    you should use 'dmsetup' to enable and disable this.)

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • Move multipath target argument parsing code into dm-table so other
    targets can share it.
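
    A hedged sketch of how a target can now consume its arguments with
    the shared helpers (helper names as exported from dm-table; the
    option bounds and error strings are made up for illustration):

        /* inside a target constructor: argc/argv are the remaining args */
        static struct dm_arg _args[] = {
                {0, 1, "Invalid number of optional arguments"},
        };
        struct dm_arg_set as = { .argc = argc, .argv = argv };
        unsigned num_opts;
        int r;

        r = dm_read_arg_group(_args, &as, &num_opts, &ti->error);
        if (r)
                return r;

        while (num_opts--) {
                const char *opt = dm_shift_arg(&as);
                /* match and handle each option string here */
        }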

    Signed-off-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon

    Mike Snitzer
     
  • Suppress sparse warnings about cpu_to_le32() by using __le32 types for
    on-disk data etc.
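
    Illustratively (a sketch, not the actual diff): on-disk fields carry
    the sparse-checked __le32 type and conversions happen only at the
    boundary, so sparse can flag a missing cpu_to_le32()/le32_to_cpu():

        struct disk_header {                    /* illustrative */
                __le32 magic;
                __le32 version;
        };

        static void fill_header(struct disk_header *h, u32 version)
        {
                h->magic = cpu_to_le32(0x4d444d21);  /* made-up magic */
                h->version = cpu_to_le32(version);
        }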

    Signed-off-by: Alasdair G Kergon

    Alasdair G Kergon
     

27 Jul, 2011

1 commit

  • This allows us to move duplicated code in <asm/atomic.h>
    (atomic_inc_not_zero() for now) to <linux/atomic.h>.

    Signed-off-by: Arun Sharma
    Reviewed-by: Eric Dumazet
    Cc: Ingo Molnar
    Cc: David Miller
    Cc: Eric Dumazet
    Acked-by: Mike Frysinger
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Arun Sharma
     

26 Mar, 2011

1 commit

  • * git://git.kernel.org/pub/scm/linux/kernel/git/agk/linux-2.6-dm:
    dm stripe: implement merge method
    dm mpath: allow table load with no priority groups
    dm mpath: fail message ioctl if specified path is not valid
    dm ioctl: add flag to wipe buffers for secure data
    dm ioctl: prepare for crypt key wiping
    dm crypt: wipe keys string immediately after key is set
    dm: add flakey target
    dm: fix opening log and cow devices for read only tables

    Linus Torvalds
     

24 Mar, 2011

1 commit


10 Mar, 2011

1 commit

  • Code has been converted over to the new explicit on-stack plugging,
    and delay users have been converted to use the new API for that.
    So let's kill off the old plugging along with aops->sync_page().

    Signed-off-by: Jens Axboe

    Jens Axboe
     

14 Jan, 2011

7 commits

  • This patch adds a compatible implementation of the block
    chaining mode used by the Loop-AES block device encryption
    system (http://loop-aes.sourceforge.net/) designed
    by Jari Ruusu.

    It operates on full 512-byte sectors and uses CBC with an IV derived
    from the sector number, the data, and optionally an extra IV seed.

    This means that after CBC decryption the first block of the sector
    must be tweaked according to the decrypted data.

    Loop-AES can use three encryption schemes:
    version 1: plain aes-cbc mode (already compatible)
    version 2: a 64-key multikey scheme with its own IV generator
    version 3: the same as version 2 with an additional IV seed
    (it uses 65 keys; the last key is used as the IV seed)

    The IV generator is named lmk here (Loop-AES multikey), and the
    cipher specification looks like: aes:64-cbc-lmk

    Versions 2 and 3 are recognised according to the length of the
    provided multi-key string (which is just the hex-encoded "raw key"
    used in the original Loop-AES ioctl).

    Configuration of the device and decoding of the key string will be
    done in userspace (cryptsetup). (Loop-AES stores keys in a
    gpg-encrypted file; the raw keys are the output of simple hashing of
    the lines in this file.)

    Based on an implementation by Max Vozeler:
    http://article.gmane.org/gmane.linux.kernel.cryptoapi/3752/

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon
    CC: Max Vozeler

    Milan Broz
     
  • This patch adds generic multikey handling, to be used by the
    following patch for Loop-AES mode compatibility.

    It extends the mapping table with an optional keycount parameter and
    implements a generic multi-key capability.

    With more keys defined, the key string is divided into several
    sections and each section is used for one tfm.

    The tfm is selected according to the sector offset
    (sector 0 -> tfm[0], sector 1 -> tfm[1],
    sector N -> tfm[N modulo keycount]); only power-of-two values are
    supported for keycount here, as in the sketch below.
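
    Illustratively (cc->tfms and cc->tfms_count are assumed field names;
    the real code keeps the tfms in per-cpu state):

        /* keycount is a power of two, so the modulo reduces to a mask */
        static struct crypto_ablkcipher *tfm_for_sector(struct crypt_config *cc,
                                                        sector_t sector)
        {
                return cc->tfms[sector & (cc->tfms_count - 1)];
        }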

    Because tfms are allocated per-cpu, this mode can take a lot of
    memory on large SMP systems.

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon
    Cc: Max Vozeler

    Milan Broz
     
  • The IV (initialisation vector) can in principle depend not only on
    the sector but also on the plaintext data (or other attributes).

    Change the IV generator interface to work directly with the dmreq
    structure to allow such a dependency in the generator.

    Also add a post() function, which is called after the crypto
    operation.

    This allows tricky modification of the decrypted data or of the IV
    internals.

    In asynchronous mode, post() can be called after the ctx->sector
    count has been increased, so an iv_sector copy must be added directly
    to the dmreq structure.
    (N.B. dmreq always includes only one sector in its scatterlists.)
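
    After this change the generator interface looks roughly like the
    following sketch of struct crypt_iv_operations (including the
    init/wipe hooks introduced by earlier patches):

        struct crypt_iv_operations {
                int (*ctr)(struct crypt_config *cc, struct dm_target *ti,
                           const char *opts);
                void (*dtr)(struct crypt_config *cc);
                int (*init)(struct crypt_config *cc);
                int (*wipe)(struct crypt_config *cc);
                int (*generator)(struct crypt_config *cc, u8 *iv,
                                 struct dm_crypt_request *dmreq);
                int (*post)(struct crypt_config *cc, u8 *iv,
                            struct dm_crypt_request *dmreq);
        };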

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • If there is enough memory, the code can submit the bio directly
    instead of queueing the operation to a separate thread.

    Try to allocate the bio clone with GFP_NOWAIT, and only if that
    fails use the separate queue (the map function cannot block here).
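
    A sketch of that fast path in the read code (kcryptd_queue_io() and
    cc->bs are period dm-crypt names; treat the snippet as illustrative):

        /* called from the map path, so sleeping is not allowed */
        clone = bio_alloc_bioset(GFP_NOWAIT, bio_segments(base_bio), cc->bs);
        if (!clone) {
                kcryptd_queue_io(io);   /* fall back to the workqueue */
                return;
        }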

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • Currently dm-crypt does all the encryption work for a single dm-crypt
    mapping in a single workqueue. This does not scale well when multiple
    CPUs are submitting IO at a high rate. The single CPU running the
    single thread cannot keep up with the encryption, and encrypted IO
    performance tanks.

    This patch changes the crypto workqueue to be per CPU. This means
    that as long as the IO submitters (or the interrupt target CPUs
    for reads) run on different CPUs, the encryption work will also be
    parallel.
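
    For illustration, the allocation becomes something like the
    following (the exact flag set is an assumption, not a quote from
    the patch):

        cc->crypt_queue = alloc_workqueue("kcryptd",
                                          WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM,
                                          1);

    With a per-cpu workqueue, work items run on the CPU that queued
    them instead of funnelling through a single thread.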

    To avoid a bottleneck on the IO worker I also changed those to be
    per-CPU threads.

    There is still some shared data, so I suspect some bouncing
    cache lines. But I haven't done a detailed study on that yet.

    Signed-off-by: Andi Kleen
    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Andi Kleen
     
  • Rename cc->cipher_mode to cc->cipher_string and store the whole of
    the cipher information in it, so it can easily be printed when
    processing the DM_DEV_STATUS ioctl.

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • Simplify key size verification (hexadecimal string) and
    set key size early in constructor.

    (Patch required by later changes.)

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     

10 Sep, 2010

1 commit

  • This patch converts bio-based dm to support REQ_FLUSH/FUA instead of
    the now-deprecated REQ_HARDBARRIER.

    * -EOPNOTSUPP handling logic dropped.

    * Preflush is handled as before but postflush is dropped and replaced
    with passing down REQ_FUA to member request_queues. This replaces
    one array wide cache flush w/ member specific FUA writes.

    * __split_and_process_bio() now calls __clone_and_map_flush() directly
    for flushes and guarantees all FLUSH bio's going to targets are zero
    length.

    * It's now guaranteed that all FLUSH bio's which are passed onto dm
    targets are zero length. bio_empty_barrier() tests are replaced
    with REQ_FLUSH tests.

    * Empty WRITE_BARRIERs are replaced with WRITE_FLUSHes.

    * Dropped unlikely() around REQ_FLUSH tests. Flushes are not unlikely
    enough to be marked with unlikely().

    * Block layer now filters out REQ_FLUSH/FUA bio's if the request_queue
    doesn't support cache flushing. Advertise REQ_FLUSH | REQ_FUA
    capability (see the sketch after this list).

    * Request based dm isn't converted yet. dm_init_request_based_queue()
    resets flush support to 0 for now. To avoid disturbing request
    based dm code, dm->flush_error is added for bio based dm while
    request based dm continues to use dm->barrier_error.

    Lightly tested linear, stripe, raid1, snap and crypt targets. Please
    proceed with caution as I'm not familiar with the code base.
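
    The capability advertisement mentioned in the list above is
    essentially a one-liner on the dm queue (a sketch using the
    block-layer helper from the same series):

        blk_queue_flush(md->queue, REQ_FLUSH | REQ_FUA);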

    Signed-off-by: Tejun Heo
    Cc: dm-devel@redhat.com
    Cc: Christoph Hellwig
    Signed-off-by: Jens Axboe

    Tejun Heo
     

12 Aug, 2010

4 commits

  • Use the new dm_target_offset() macro to avoid most references to
    ti->begin in dm targets.
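
    For reference, the macro itself is trivial (as defined in
    include/linux/device-mapper.h):

        #define dm_target_offset(ti, sector) ((sector) - (ti)->begin)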

    Signed-off-by: Alasdair G Kergon

    Alasdair G Kergon
     
  • Allocate cipher strings independently of struct crypt_config and move
    cipher parsing and allocation into a separate function to prepare for
    supporting the cryptoapi format, e.g. "xts(aes)".

    No functional change in this patch.

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • Use just one error label and reuse the common destructor for the
    crypt target.

    Parse the remaining argv arguments in logical order.

    Also, do not ignore error values from the IV init and set-key
    functions.

    No functional change in this patch except for changed return codes
    resulting from the above.

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • Use kstrdup when the goal of an allocation is to copy a string into
    the allocated region.

    The semantic patch that makes this change is as follows:
    (http://coccinelle.lip6.fr/)

    //
    @@
    expression from,to;
    expression flag,E1,E2;
    statement S;
    @@

    - to = kmalloc(strlen(from) + 1,flag);
    + to = kstrdup(from, flag);
    ... when != \(from = E1 \| to = E1 \)
    if (to==NULL || ...) S
    ... when != \(from = E2 \| to = E2 \)
    - strcpy(to, from);
    //

    Signed-off-by: Julia Lawall
    Signed-off-by: Alasdair G Kergon

    Julia Lawall
     

06 Mar, 2010

1 commit


11 Dec, 2009

6 commits

  • The default plain IV is 32-bit only.

    This plain64 IV provides a compatible mode for encrypted devices bigger
    than 4TB.
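
    A sketch of the two generators side by side (signatures follow the
    dm-crypt IV interface of the time): plain truncates the sector
    number to 32 bits, plain64 keeps all 64.

        static int crypt_iv_plain_gen(struct crypt_config *cc, u8 *iv,
                                      sector_t sector)
        {
                memset(iv, 0, cc->iv_size);
                *(u32 *)iv = cpu_to_le32(sector & 0xffffffff);
                return 0;
        }

        static int crypt_iv_plain64_gen(struct crypt_config *cc, u8 *iv,
                                        sector_t sector)
        {
                memset(iv, 0, cc->iv_size);
                *(u64 *)iv = cpu_to_le64(sector);
                return 0;
        }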

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • The "wipe key" message is used to wipe the volume key from memory
    temporarily, for example when suspending to RAM.

    But the initialisation vector in ESSIV mode is calculated from the
    hashed volume key, so the wipe message should wipe this IV key too and
    reinitialise it when the volume key is reinstated.

    This patch adds an IV wipe method called from a wipe message callback.
    ESSIV is then reinitialised using the init function added by the
    last patch.
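
    A rough sketch of the message-handler side, assuming the init/wipe
    hooks from the previous patch (simplified error handling):

        int r = 0;

        /* "key wipe": also wipe the ESSIV IV internals */
        if (cc->iv_gen_ops && cc->iv_gen_ops->wipe)
                r = cc->iv_gen_ops->wipe(cc);

        /* "key set <key>": rederive the ESSIV key from the freshly
         * reinstated volume key */
        if (!r && cc->iv_gen_ops && cc->iv_gen_ops->init)
                r = cc->iv_gen_ops->init(cc);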

    Cc: stable@kernel.org
    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • This patch separates the construction of the IV from its
    initialisation. (For ESSIV, initialisation is a hash calculation
    based on the volume key.)

    The constructor code now preallocates the hash tfm and the salt
    array and saves them in a private IV structure.

    The next patch requires this to reinitialise the wiped IV
    without reallocating memory when resuming a suspended device.

    Cc: stable@kernel.org
    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • Use kzfree for salt deallocation because the salt is derived from
    the volume key. Use a common error path in the ESSIV constructor.

    Required by a later patch which fixes the way key material is wiped
    from memory.
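
    For reference, kzfree() zeroes the buffer before freeing it, so the
    change is essentially (essiv->salt is an illustrative name for the
    salt in the private ESSIV state):

        kzfree(essiv->salt);    /* was kfree(); the salt is key material */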

    Cc: stable@kernel.org
    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • Define private structures for the IV so it's easy to add further
    attributes in a following patch, which fixes the way key material is
    wiped from memory. Also move the ESSIV destructor and remove the
    unnecessary 'status' operation.

    There are no functional changes in this patch.

    Cc: stable@kernel.org
    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • The "wipe key" message is used to wipe a volume key from memory
    temporarily, for example when suspending to RAM.

    There are two instances of the key in memory (inside the crypto
    tfm), but only one got wiped. This patch wipes them both.

    Cc: stable@kernel.org
    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     

09 Nov, 2009

1 commit

  • Something-bility is spelled as something-blity,
    so a grep for 'blit' would find these lines.

    This is so trivial that I didn't split it by subsystem / copy
    additional maintainers - all changes are to comments.
    The only purpose is to get fewer false positives when grepping
    around the kernel sources.

    Signed-off-by: Dirk Hohndel
    Signed-off-by: Jiri Kosina

    Dirk Hohndel
     

24 Jul, 2009

1 commit

  • Incorrect device area lengths are being passed to device_area_is_valid().

    The regression appeared in 2.6.31-rc1 through commit
    754c5fc7ebb417b23601a6222a6005cc2e7f2913.

    With the dm-stripe target, the size of the target (ti->len) was used
    instead of the stripe_width (ti->len / #stripes). An example of a
    resulting incorrect error message:

    device-mapper: table: 254:0: sdb too small for target

    Signed-off-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon

    Mike Snitzer
     

11 Jul, 2009

1 commit


22 Jun, 2009

2 commits

  • Add .iterate_devices to 'struct target_type' to allow a function to
    be called for all devices in a DM target. It is implemented for all
    targets except those in dm-snap.c (origin and snapshot); see the
    sketch below.

    (The raid1 version number jumps to 1.12 because we originally
    reserved 1.1 to 1.11 for 'block_on_error' but ended up using
    'handle_errors' instead.)
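
    The new hook looks roughly like this (a sketch of the declarations
    in include/linux/device-mapper.h):

        typedef int (*iterate_devices_callout_fn)(struct dm_target *ti,
                                                  struct dm_dev *dev,
                                                  sector_t start,
                                                  sector_t len,
                                                  void *data);

        struct target_type {
                /* existing members omitted */
                int (*iterate_devices)(struct dm_target *ti,
                                       iterate_devices_callout_fn fn,
                                       void *data);
        };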

    Signed-off-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon
    Cc: martin.petersen@oracle.com

    Mike Snitzer
     
  • Flush support for dm-crypt target.

    Signed-off-by: Mikulas Patocka
    Signed-off-by: Alasdair G Kergon

    Mikulas Patocka
     

03 Apr, 2009

1 commit


17 Mar, 2009

2 commits

  • The following oops has been reported when dm-crypt runs over a loop device.

    ...
    [ 70.381058] Process loop0 (pid: 4268, ti=cf3b2000 task=cf1cc1f0 task.ti=cf3b2000)
    ...
    [ 70.381058] Call Trace:
    [ 70.381058] [] ? crypt_dec_pending+0x5e/0x62 [dm_crypt]
    [ 70.381058] [] ? crypt_endio+0xa2/0xaa [dm_crypt]
    [ 70.381058] [] ? crypt_endio+0x0/0xaa [dm_crypt]
    [ 70.381058] [] ? bio_endio+0x2b/0x2e
    [ 70.381058] [] ? dec_pending+0x224/0x23b [dm_mod]
    [ 70.381058] [] ? clone_endio+0x79/0xa4 [dm_mod]
    [ 70.381058] [] ? clone_endio+0x0/0xa4 [dm_mod]
    [ 70.381058] [] ? bio_endio+0x2b/0x2e
    [ 70.381058] [] ? loop_thread+0x380/0x3b7
    [ 70.381058] [] ? do_lo_send_aops+0x0/0x165
    [ 70.381058] [] ? autoremove_wake_function+0x0/0x33
    [ 70.381058] [] ? loop_thread+0x0/0x3b7

    When a table is being replaced, dm waits for I/O to complete before
    destroying the mempool, but the endio function doesn't call
    mempool_free() until after completing the bio.

    Fix it by swapping the order of those two operations, as sketched
    below.

    The same problem occurs in dm.c, with md referenced after
    dec_pending. Again, we swap the order.
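
    A sketch of the dm-crypt half of the fix (simplified; the locals are
    taken before the free, since io itself returns to the pool):

        static void crypt_dec_pending(struct dm_crypt_io *io)
        {
                struct crypt_config *cc = io->target->private;
                struct bio *base_bio = io->base_bio;
                int error = io->error;

                if (!atomic_dec_and_test(&io->pending))
                        return;

                /* free to the mempool BEFORE ending the bio, so a table
                 * replacement cannot destroy the pool under us */
                mempool_free(io, cc->io_pool);
                bio_endio(base_bio, error);
        }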

    Cc: stable@kernel.org
    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • In the async encryption-complete function (kcryptd_async_done), the
    crypto_async_request passed in may be different from the one passed to
    crypto_ablkcipher_encrypt/decrypt. Only crypto_async_request->data is
    guaranteed to be same as the one passed in. The current
    kcryptd_async_done uses the passed-in crypto_async_request directly
    which may cause the AES-NI-based AES algorithm implementation to panic.

    This patch fixes the bug by using only crypto_async_request->data,
    which points to the dm_crypt_request; the original data
    (convert_context) is obtained from the dm_crypt_request.
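
    A sketch of the fixed completion path (simplified):

        static void kcryptd_async_done(struct crypto_async_request *async_req,
                                       int error)
        {
                /* only ->data is guaranteed to match the submitted request */
                struct dm_crypt_request *dmreq = async_req->data;
                struct convert_context *ctx = dmreq->ctx;

                /* completion work proceeds from ctx/dmreq, never from
                 * async_req itself (omitted) */
        }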

    [mbroz@redhat.com: reworked]
    Cc: stable@kernel.org
    Signed-off-by: Huang Ying
    Cc: Herbert Xu
    Signed-off-by: Milan Broz
    Signed-off-by: Andrew Morton
    Signed-off-by: Alasdair G Kergon

    Huang Ying
     

06 Jan, 2009

1 commit

  • Change dm_unregister_target to return void and use BUG() for error
    reporting.

    dm_unregister_target can only fail because of a programming bug in
    the target driver. It can't fail because of the user's behaviour or
    disk errors.

    This patch changes dm_unregister_target to return void and to BUG()
    if someone tries to unregister a non-registered target or to
    unregister a target that is in use.

    This patch removes code duplication (testing of error codes in all dm
    targets) and reports bugs in just one place, in dm_unregister_target. In
    some target drivers, these return codes were ignored, which could lead
    to a situation where bugs could be missed.
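
    A hedged sketch of the new behaviour (locking and bookkeeping
    simplified; _lock and __find_target_type follow
    drivers/md/dm-target.c):

        void dm_unregister_target(struct target_type *tt)
        {
                down_write(&_lock);
                if (!__find_target_type(tt->name))
                        BUG();  /* unregistering an unregistered target */

                list_del(&tt->list);
                up_write(&_lock);
        }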

    Signed-off-by: Mikulas Patocka
    Signed-off-by: Alasdair G Kergon

    Mikulas Patocka
     

29 Dec, 2008

1 commit

  • Instead of having a global bio slab cache, add a reference to one
    in each bio_set that is created. This allows for personalized slabs
    in each bio_set, so that they can have bios of different sizes.

    This means we can personalize the bios we return. File systems may
    want to embed the bio inside another structure, to avoid allocating
    more items (and stuffing them in ->bi_private) after they get a bio.
    Or we may want to embed a number of bio_vecs directly at the end of
    a bio, to avoid doing two allocations to return a bio. This is now
    possible; see the sketch below.
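
    The embedding idiom this enables looks roughly like this (the struct
    and pool size are illustrative; bioset_create()'s front_pad argument
    reserves private space in front of each bio):

        struct my_io {
                void *private_state;    /* lives in the front pad */
                struct bio bio;         /* must be the last field */
        };

        struct bio_set *bs = bioset_create(64, offsetof(struct my_io, bio));

        struct bio *bio = bio_alloc_bioset(GFP_NOIO, nr_vecs, bs);
        struct my_io *io = container_of(bio, struct my_io, bio);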

    Signed-off-by: Jens Axboe

    Jens Axboe
     

22 Oct, 2008

3 commits

  • Remove waitqueue no longer needed with the async crypto interface.

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • When writing I/O, dm-crypt has to allocate a new cloned bio and
    encrypt the data into newly-allocated pages attached to that bio.
    In rare cases, because of hardware restrictions (e.g. the physical
    segment limit) or memory pressure, more than one cloned bio has to
    be used, each processing a different fragment of the original.

    Currently there is one waitqueue which waits for one fragment to finish
    and continues processing the next fragment.

    But when using asynchronous crypto this doesn't work, because several
    fragments may be processed asynchronously or in parallel and there is
    only one crypt context that cannot be shared between the bio fragments.
    The result may be corruption of the data contained in the encrypted bio.

    The patch fixes this by allocating new dm_crypt_io structs (with new
    crypto contexts) and running them independently.

    Each fragment contains a pointer to the base dm_crypt_io struct to
    handle reference counting, so the base one is properly deallocated
    after all the fragments are finished.

    In a low-memory situation, this only uses one additional object from
    the mempool. If the mempool is empty, the next allocation simply
    waits for previous fragments to complete.

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • Prepare a local sector variable (offset) for a later patch.
    Do not update io->sector for still-running I/O.

    No functional change.

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz