27 Jul, 2012

1 commit

  • Add read-only and fail-io modes to thin provisioning.

    If a transaction commit fails, the pool's metadata device transitions
    to "read-only" mode. If a commit fails while already in read-only mode,
    the pool transitions to "fail-io" mode.

    Once in fail-io mode the pool and all associated thin devices will
    report a status of "Fail".
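    As an illustration (a pool device named "pool" is assumed), the mode is
    visible through the usual status interface; once the pool has entered
    fail-io mode the target's status line is simply "Fail":

      dmsetup status pool
      # 0 4194304 thin-pool Fail      (length field is illustrative)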

    Signed-off-by: Joe Thornber
    Signed-off-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon

    Joe Thornber
     

03 Jun, 2012

1 commit

  • This patch implements two new messages that can be sent to the thin
    pool target allowing it to take a snapshot of the _metadata_. This
    read-only snapshot can be accessed by userland, concurrently with the
    live target.

    Only one metadata snapshot can be held at a time. The pool's status
    line will give the block location for the current msnap.

    Since version 0.1.5 of the userland thin provisioning tools, the
    thin_dump program displays the msnap as follows:

    thin_dump -m <msnap root> <metadata dev>

    Available here: https://github.com/jthornber/thin-provisioning-tools
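    As a sketch of the round trip (device names are illustrative; the
    message names are those documented in
    Documentation/device-mapper/thin-provisioning.txt):

      dmsetup message /dev/mapper/pool 0 reserve_metadata_snap
      dmsetup status pool       # reports the block location of the msnap
      thin_dump -m <msnap root> /dev/mapper/pool-metadata
      dmsetup message /dev/mapper/pool 0 release_metadata_snap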

    Now that userland can access the metadata we can do various things
    that have traditionally been kernel side tasks:

    i) Incremental backups.

    By using metadata snapshots we can work out what blocks have
    changed over time. Combined with data snapshots we can ensure
    the data doesn't change while we back it up.

    A short proof of concept script can be found here:

    https://github.com/jthornber/thinp-test-suite/blob/master/incremental_backup_example.rb

    ii) Migration of thin devices from one pool to another.

    iii) Merging snapshots back into an external origin.

    iv) Asynchronous replication.

    Signed-off-by: Joe Thornber
    Signed-off-by: Alasdair G Kergon

    Joe Thornber
     

29 Mar, 2012

4 commits

  • Add dm thin target arguments to control discard support.

    ignore_discard: Disables discard support

    no_discard_passdown: Don't pass discards down to the underlying data
    device, but just remove the mapping within the thin provisioning target.
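    For illustration (device names and sizes are placeholders), these are
    passed as optional feature arguments at the end of the thin-pool table
    line, preceded by the number of feature arguments:

      dmsetup create pool --table \
        "0 20971520 thin-pool /dev/mapper/meta /dev/mapper/data \
         128 32768 1 no_discard_passdown"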

    Signed-off-by: Joe Thornber
    Signed-off-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon

    Joe Thornber
     
  • Support the use of an external _read only_ device as an origin for a thin
    device.

    Any read to an unprovisioned area of the thin device will be passed
    through to the origin. Writes trigger allocation of new blocks as
    usual.

    One possible use case for this would be VM hosts that want to run
    guests on thinly-provisioned volumes but have the base image on another
    device (possibly shared between many VMs).
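    A minimal sketch of the setup (device names and sizes are illustrative);
    the external origin is the optional final argument on the thin target
    line:

      dmsetup message /dev/mapper/pool 0 "create_thin 0"
      dmsetup create thin0 --table \
        "0 2097152 thin /dev/mapper/pool 0 /dev/mapper/base-image"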

    Signed-off-by: Joe Thornber
    Signed-off-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon

    Joe Thornber
     
  • The thin metadata format can only make use of a device that is at most
    THIN_METADATA_MAX_SECTORS (15.9375 GB) in size.

    Rather than rejecting a larger metadata device during thin-pool device
    construction, allow it but issue a warning if a device larger than
    THIN_METADATA_MAX_SECTORS_WARNING (16 GB) is provided. Any space over
    15.9375 GB will not be used.
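    For reference, the 15.9375 GB figure follows from an on-disk limit of
    255 * (1 << 14) metadata blocks of 4096 bytes (assuming the kernel's
    THIN_METADATA_MAX_SECTORS definition):

      echo $((255 * (1 << 14) * 8))        # 33423360 512-byte sectors
      echo $((255 * (1 << 14) * 8 * 512))  # 17112760320 bytes = 15.9375 GiB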

    Signed-off-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon

    Mike Snitzer
     
  • Remove documentation for unimplemented 'trim' message.

    I'd planned a 'trim' target message for shrinking thin devices, but
    this is better handled via the discard ioctl.
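    As an example of that route (illustrative device and range; requires a
    pool created with discard support enabled), a range of a thin device can
    be un-provisioned from userspace with util-linux's blkdiscard, which
    issues the discard ioctl:

      blkdiscard -o 0 -l 1073741824 /dev/mapper/thin0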

    Signed-off-by: Joe Thornber
    Signed-off-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon

    Joe Thornber
     

21 Feb, 2012

1 commit


01 Nov, 2011

1 commit

  • Initial EXPERIMENTAL implementation of device-mapper thin provisioning
    with snapshot support. The 'thin' target is used to create instances of
    the virtual devices that are hosted in the 'thin-pool' target. The
    thin-pool target provides data sharing among devices. This sharing is
    made possible by the persistent-data library introduced in the previous
    patch.

    The main highlight of this implementation, compared to the previous
    implementation of snapshots, is that it allows many virtual devices to
    be stored on the same data volume, simplifying administration and
    allowing sharing of data between volumes (thus reducing disk usage).

    Another big feature is support for arbitrary depth of recursive
    snapshots (snapshots of snapshots of snapshots ...). The previous
    implementation of snapshots did this by chaining together lookup tables,
    and so performance was O(depth). This new implementation uses a single
    data structure so we don't get this degradation with depth.

    For further information and examples of how to use this, please read
    Documentation/device-mapper/thin-provisioning.txt
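    A minimal walk-through consistent with that document (device names and
    sizes are illustrative):

      # Pool: metadata dev, data dev, 64 KiB data blocks, low-water mark
      dmsetup create pool --table \
        "0 20971520 thin-pool /dev/mapper/meta /dev/mapper/data 128 32768"

      # Create thin device 0 inside the pool and activate it
      dmsetup message /dev/mapper/pool 0 "create_thin 0"
      dmsetup create thin0 --table "0 2097152 thin /dev/mapper/pool 0"

      # Snapshot it (suspend the origin first if it is active); a snapshot
      # of the snapshot is created the same way
      dmsetup message /dev/mapper/pool 0 "create_snap 1 0"
      dmsetup create snap0 --table "0 2097152 thin /dev/mapper/pool 1"
      dmsetup message /dev/mapper/pool 0 "create_snap 2 1"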

    Signed-off-by: Joe Thornber
    Signed-off-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon

    Joe Thornber