11 Dec, 2009

6 commits

  • The default plain IV is 32-bit only.

    This plain64 IV provides a compatible mode for encrypted devices bigger
    than 4TB.

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • The "wipe key" message is used to wipe the volume key from memory
    temporarily, for example when suspending to RAM.

    But the initialisation vector in ESSIV mode is calculated from the
    hashed volume key, so the wipe message should wipe this IV key too and
    reinitialise it when the volume key is reinstated.

    This patch adds an IV wipe method called from a wipe message callback.
    ESSIV is then reinitialised using the init function added by the
    last patch.

    Cc: stable@kernel.org
    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • This patch separates the construction of IV from its initialisation.
    (For ESSIV it is a hash calculation based on volume key.)

    Constructor code now preallocates the hash tfm and salt array
    and saves them in a private IV structure.

    The next patch requires this to reinitialise the wiped IV
    without reallocating memory when resuming a suspended device.

    Cc: stable@kernel.org
    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • Use kzfree for salt deallocation because it is derived from the volume
    key. Use a common error path in ESSIV constructor.

    Required by a later patch which fixes the way key material is wiped
    from memory.

    Cc: stable@kernel.org
    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • Define private structures for IV so it's easy to add further attributes
    in a following patch which fixes the way key material is wiped from
    memory. Also move ESSIV destructor and remove unnecessary 'status'
    operation.

    There are no functional changes in this patch.

    Cc: stable@kernel.org
    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • The "wipe key" message is used to wipe a volume key from memory
    temporarily, for example when suspending to RAM.

    There are two instances of the key in memory (inside crypto tfm)
    but only one got wiped. This patch wipes them both.

    Cc: stable@kernel.org
    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     

09 Nov, 2009

1 commit

  • something-bility is spelled as something-blity
    so a grep for 'blit' would find these lines

    this is so trivial that I didn't split it by subsystem / copy
    additional maintainers - all changes are to comments
    The only purpose is to get fewer false positives when grepping
    around the kernel sources.

    Signed-off-by: Dirk Hohndel
    Signed-off-by: Jiri Kosina

    Dirk Hohndel
     

24 Jul, 2009

1 commit

  • Incorrect device area lengths are being passed to device_area_is_valid().

    The regression appeared in 2.6.31-rc1 through commit
    754c5fc7ebb417b23601a6222a6005cc2e7f2913.

    With the dm-stripe target, the size of the target (ti->len) was used
    instead of the stripe_width (ti->len/#stripes). An example of a
    consequent incorrect error message is:

    device-mapper: table: 254:0: sdb too small for target

    Signed-off-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon

    Mike Snitzer
     

11 Jul, 2009

1 commit


22 Jun, 2009

2 commits

  • Add .iterate_devices to 'struct target_type' to allow a function to be
    called for all devices in a DM target. It is implemented for all targets
    except those in dm-snap.c (origin and snapshot).

    (The raid1 version number jumps to 1.12 because we originally reserved
    1.1 to 1.11 for 'block_on_error' but ended up using 'handle_errors'
    instead.)

    Signed-off-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon
    Cc: martin.petersen@oracle.com

    Mike Snitzer
     
  • Flush support for dm-crypt target.

    Signed-off-by: Mikulas Patocka
    Signed-off-by: Alasdair G Kergon

    Mikulas Patocka
     

03 Apr, 2009

1 commit


17 Mar, 2009

2 commits

  • The following oops has been reported when dm-crypt runs over a loop device.

    ...
    [ 70.381058] Process loop0 (pid: 4268, ti=cf3b2000 task=cf1cc1f0 task.ti=cf3b2000)
    ...
    [ 70.381058] Call Trace:
    [ 70.381058] [] ? crypt_dec_pending+0x5e/0x62 [dm_crypt]
    [ 70.381058] [] ? crypt_endio+0xa2/0xaa [dm_crypt]
    [ 70.381058] [] ? crypt_endio+0x0/0xaa [dm_crypt]
    [ 70.381058] [] ? bio_endio+0x2b/0x2e
    [ 70.381058] [] ? dec_pending+0x224/0x23b [dm_mod]
    [ 70.381058] [] ? clone_endio+0x79/0xa4 [dm_mod]
    [ 70.381058] [] ? clone_endio+0x0/0xa4 [dm_mod]
    [ 70.381058] [] ? bio_endio+0x2b/0x2e
    [ 70.381058] [] ? loop_thread+0x380/0x3b7
    [ 70.381058] [] ? do_lo_send_aops+0x0/0x165
    [ 70.381058] [] ? autoremove_wake_function+0x0/0x33
    [ 70.381058] [] ? loop_thread+0x0/0x3b7

    When a table is being replaced, it waits for I/O to complete
    before destroying the mempool, but the endio function doesn't
    call mempool_free() until after completing the bio.

    Fix it by swapping the order of those two operations.

    The same problem occurs in dm.c with md referenced after dec_pending.
    Again, we swap the order.

    Cc: stable@kernel.org
    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • In the async encryption-complete function (kcryptd_async_done), the
    crypto_async_request passed in may be different from the one passed to
    crypto_ablkcipher_encrypt/decrypt. Only crypto_async_request->data is
    guaranteed to be same as the one passed in. The current
    kcryptd_async_done uses the passed-in crypto_async_request directly
    which may cause the AES-NI-based AES algorithm implementation to panic.

    This patch fixes the bug by using only crypto_async_request->data,
    which points to our dm_crypt_request. The original data
    (convert_context) is obtained from that dm_crypt_request.

    [mbroz@redhat.com: reworked]
    Cc: stable@kernel.org
    Signed-off-by: Huang Ying
    Cc: Herbert Xu
    Signed-off-by: Milan Broz
    Signed-off-by: Andrew Morton
    Signed-off-by: Alasdair G Kergon

    Huang Ying
     

06 Jan, 2009

1 commit

  • Change dm_unregister_target to return void and use BUG() for error
    reporting.

    dm_unregister_target can only fail because of a programming bug in the
    target driver. It can't fail because of the user's behavior or disk
    errors.

    This patch changes dm_unregister_target to return void and use BUG() if
    someone tries to unregister a target that is not registered or is still
    in use.

    This patch removes code duplication (testing of error codes in all dm
    targets) and reports bugs in just one place, in dm_unregister_target. In
    some target drivers, these return codes were ignored, which could lead
    to a situation where bugs could be missed.

    Signed-off-by: Mikulas Patocka
    Signed-off-by: Alasdair G Kergon

    Mikulas Patocka
     

29 Dec, 2008

1 commit

  • Instead of having a global bio slab cache, add a reference to one
    in each bio_set that is created. This allows for personalized slabs
    in each bio_set, so that they can have bios of different sizes.

    This means we can personalize the bios we return. File systems may
    want to embed the bio inside another structure, to avoid allocating
    more items (and stuffing them in ->bi_private) after they get a bio.
    Or we may want to embed a number of bio_vecs directly at the end
    of a bio, to avoid doing two allocations to return a bio. This is now
    possible.

    Signed-off-by: Jens Axboe

    Jens Axboe
     

22 Oct, 2008

4 commits

  • Remove waitqueue no longer needed with the async crypto interface.

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • When writing I/O, dm-crypt has to allocate a new cloned bio
    and encrypt the data into newly-allocated pages attached to this bio.
    In rare cases, because of hardware restrictions (e.g. the physical
    segment limit) or memory pressure, more than one cloned bio has to be
    used, each processing a different fragment of the original.

    Currently there is one waitqueue which waits for one fragment to finish
    and continues processing the next fragment.

    But when using asynchronous crypto this doesn't work, because several
    fragments may be processed asynchronously or in parallel and there is
    only one crypt context that cannot be shared between the bio fragments.
    The result may be corruption of the data contained in the encrypted bio.

    The patch fixes this by allocating new dm_crypt_io structs (with new
    crypto contexts) and running them independently.

    Each fragment contains a pointer to the base dm_crypt_io struct to
    handle reference counting, so the base one is properly deallocated
    after all the fragments are finished.

    In a low memory situation, this uses only one additional object from
    the mempool. If the mempool is empty, the next allocation simply waits
    for previous fragments to complete.

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • Prepare local sector variable (offset) for later patch.
    Do not update io->sector for still-running I/O.

    No functional change.

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • Change #include "dm.h" to #include <linux/device-mapper.h> in all targets.
    Targets should not need direct access to internal DM structures.

    Signed-off-by: Mikulas Patocka
    Signed-off-by: Alasdair G Kergon

    Mikulas Patocka
     

10 Oct, 2008

8 commits

  • Don't wait between submitting crypt requests for a bio unless
    we are short of memory.

    There are two situations when we must split an encrypted bio:
    1) there are no free pages;
    2) the new bio would violate underlying device restrictions
    (e.g. max hw segments).

    In case (2) we do not need to wait.

    Add an output variable to crypt_alloc_buffer() to distinguish between
    these cases.

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • Move the initialisation of ctx->pending into one place, at the
    start of crypt_convert().

    Introduce crypt_finished to indicate whether or not the encryption
    is finished, for use in a later patch.

    No functional change.

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • The pending reference count must be incremented *before* the async work is
    queued to another thread, not after. Otherwise there's a race if the
    work completes and decrements the reference count before it gets incremented.

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • Make kcryptd_crypt_write_io_submit() responsible for decrementing
    the pending count after an error.

    Also fixes a bug in the async path that forgot to decrement it.

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • Make the caller responsible for incrementing the pending count before
    calling kcryptd_crypt_write_io_submit() in the non-async case, bringing
    it into line with the async case.

    Signed-off-by: Alasdair G Kergon

    Alasdair G Kergon
     
  • Move kcryptd_crypt_write_convert_loop inside kcryptd_crypt_write_convert.
    This change is needed for a later patch.

    No functional change.

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • Factor out crypt io allocation code.
    Later patches will call it from another place.

    No functional change.

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     
  • Move io pending to one place.

    No functional change; useful to simplify debugging.

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     

21 Jul, 2008

1 commit

  • This patch implements a biovec merge function for the crypt target.

    If the underlying device has a merge function defined, call it.
    If not, keep the precomputed value.

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     

02 Jul, 2008

1 commit

  • Add cond_resched() to prevent monopolising CPU when processing large bios.

    dm-crypt processes encryption of bios in sector units. If the bio
    request is big, it can spend a long time in the encryption call.

    Signed-off-by: Milan Broz
    Tested-by: Yan Li
    Signed-off-by: Andrew Morton
    Signed-off-by: Alasdair G Kergon

    Milan Broz
     

29 Mar, 2008

1 commit

  • Fix regression in dm-crypt introduced in commit
    3a7f6c990ad04e6f576a159876c602d14d6f7fef ("dm crypt: use async crypto").

    If write requests need to be split into pieces, the code must not process them
    in parallel because the crypto context cannot be shared. So there can be
    parallel crypto operations on one part of the write, but only one write bio
    can be processed at a time.

    This is not optimal, and the workqueue code needs to be optimized for
    parallel processing, but for now it solves the problem without affecting
    the performance of synchronous crypto operations (most current dm-crypt
    users).

    http://bugzilla.kernel.org/show_bug.cgi?id=10242
    http://bugzilla.kernel.org/show_bug.cgi?id=10207

    Signed-off-by: Milan Broz
    Signed-off-by: Alasdair G Kergon
    Cc: "Rafael J. Wysocki"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Milan Broz
     

08 Feb, 2008

9 commits