30 May, 2015

1 commit

  • Rather than always releasing the prisoners in a cell, the client may
    want to promote one of them to be the new holder. There is a race here,
    though, between releasing an empty cell and other threads adding new
    inmates, so this function makes the decision with its lock held.

    This function can have two outcomes:
    i) An inmate is promoted to be the holder of the cell (return value of 0).
    ii) The cell has no inmate for promotion and is released (return value of 1).

    Signed-off-by: Joe Thornber
    Signed-off-by: Mike Snitzer

    Joe Thornber
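
    The lock-held decision can be sketched as follows. This is a minimal
    userspace model of the pattern, assuming simplified stand-in types; the
    names are illustrative, not the kernel's dm-bio-prison API.

```c
#include <stddef.h>
#include <pthread.h>
#include <assert.h>

/* Simplified stand-ins for the kernel structures. */
struct bio;                          /* opaque stand-in for struct bio */

struct cell {
    struct bio *holder;              /* bio currently holding the cell */
    struct bio *inmates[8];          /* bios waiting on the same key */
    int n_inmates;
};

struct prison {
    pthread_mutex_t lock;
};

/* Returns 0 if an inmate was promoted to be the new holder, 1 if the
 * cell had no inmates and was released.  Both the check and the state
 * change happen with the lock held, closing the race against other
 * threads adding new inmates between the check and the release. */
int cell_promote_or_release(struct prison *p, struct cell *c)
{
    int ret;

    pthread_mutex_lock(&p->lock);
    if (c->n_inmates > 0) {
        c->holder = c->inmates[--c->n_inmates];   /* promote an inmate */
        ret = 0;
    } else {
        c->holder = NULL;                         /* release empty cell */
        ret = 1;
    }
    pthread_mutex_unlock(&p->lock);
    return ret;
}
```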
     

11 Nov, 2014

4 commits

  • Ranges will be placed in the same cell if they overlap.

    Range locking is a prerequisite for more efficient multi-block discard
    support in both the cache and thin-provisioning targets.

    Signed-off-by: Joe Thornber
    Signed-off-by: Mike Snitzer

    Joe Thornber
     
  • This use of direct submission in process_prepared_mapping() reduces
    latency for submitting bios in a cell by avoiding adding those bios to
    the deferred list and waiting for the next iteration of the worker.

    But this direct submission exposes the potential for a race between
    releasing a cell and incrementing the deferred set. Fix this by
    introducing dm_cell_visit_release() and refactoring
    inc_remap_and_issue_cell() accordingly.

    Signed-off-by: Joe Thornber
    Signed-off-by: Mike Snitzer

    Joe Thornber
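
    The race is closed because the visit and the release happen under one
    lock. A minimal userspace sketch of that pattern, assuming simplified
    stand-in types (not the kernel's dm_cell_visit_release() signature):

```c
#include <stddef.h>
#include <pthread.h>

/* The visitor callback (e.g. one that bumps a deferred-set count for
 * each bio in the cell) runs with the lock held, so nothing can be
 * added to or released from the cell between the visit and the
 * release.  Types are simplified stand-ins for the kernel structures. */
struct cell {
    int n_bios;          /* bios held in the cell */
    int released;
};

struct prison {
    pthread_mutex_t lock;
};

void cell_visit_release(struct prison *p,
                        void (*visit_fn)(void *context, struct cell *c),
                        void *context,
                        struct cell *c)
{
    pthread_mutex_lock(&p->lock);
    visit_fn(context, c);    /* e.g. increment the deferred set */
    c->released = 1;         /* release while still holding the lock */
    pthread_mutex_unlock(&p->lock);
}

/* Example visitor: count the bios that would join the deferred set. */
void count_deferred(void *context, struct cell *c)
{
    *(int *)context += c->n_bios;
}
```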
     
  • This avoids dropping the cell, so it increases the probability that
    other bios will collect within the cell rather than being passed
    individually to the worker.

    Also add required process_cell and process_discard_cell error handling
    wrappers and set associated pool-mode function pointers accordingly.

    Signed-off-by: Joe Thornber
    Signed-off-by: Mike Snitzer

    Joe Thornber
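
    The batching effect can be modelled as below. This is an illustrative
    userspace sketch under invented names, not the thin target's code:

```c
#include <stddef.h>

/* While a cell for a given block is held, later bios for that block
 * collect in the cell instead of each taking an individual trip
 * through the worker.  Names and structures are invented for
 * illustration only. */
struct cell { unsigned long key; int held; int n_bios; };

#define NR_CELLS 4
static struct cell cells[NR_CELLS];

/* Add a bio for `key`; returns how many bios are now collected in the
 * cell.  The first bio takes the cell and holds it; later bios join it
 * and are processed as one batch when the cell is finally released. */
int collect_bio(unsigned long key)
{
    struct cell *c = &cells[key % NR_CELLS];

    if (!c->held) {
        c->key = key;
        c->held = 1;
        c->n_bios = 0;
    }
    return ++c->n_bios;
}
```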
     
  • Previously it was using a fixed-size hash table. There are times
    when very many concurrent cells are held (such as when processing a
    very large discard). When this happens, the hash table's performance
    becomes very poor.

    Signed-off-by: Joe Thornber
    Signed-off-by: Mike Snitzer

    Joe Thornber
     

04 Jun, 2014

1 commit

  • Update the DM thin provisioning target's allocation failure error to be
    consistent with commit a9d6ceb8 ("[SCSI] return ENOSPC on thin
    provisioning failure").

    The DM thin target now returns -ENOSPC rather than -EIO when
    block allocation fails due to the pool being out of data space (and
    the 'error_if_no_space' thin-pool feature is enabled).

    Signed-off-by: Mike Snitzer
    Acked-by: Joe Thornber

    Mike Snitzer
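
    The resulting error selection can be sketched like this. The field and
    function names are illustrative assumptions, not the pool structure's
    real layout:

```c
#include <errno.h>

/* Sketch of the error a failed block allocation is mapped to: -ENOSPC
 * when the pool is out of data space and error_if_no_space is enabled,
 * matching the SCSI thin-provisioning behaviour; -EIO for other
 * failures that are reported as errors.  (When out of space with the
 * feature disabled, the target queues bios instead of erroring them,
 * which this sketch ignores.)  Names are illustrative. */
struct pool_features {
    int error_if_no_space;
};

int should_error_with(int out_of_data_space, const struct pool_features *pf)
{
    if (out_of_data_space && pf->error_if_no_space)
        return -ENOSPC;
    return -EIO;
}
```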
     

02 Mar, 2013

3 commits

  • Add a target that allows a fast device such as an SSD to be used as a
    cache for a slower device such as a disk.

    A plug-in architecture was chosen so that the decisions about which data
    to migrate and when are delegated to interchangeable tunable policy
    modules. The first general purpose module we have developed, called
    "mq" (multiqueue), follows in the next patch. Other modules are
    under development.

    Signed-off-by: Joe Thornber
    Signed-off-by: Heinz Mauelshagen
    Signed-off-by: Mike Snitzer
    Signed-off-by: Alasdair G Kergon

    Joe Thornber
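
    The plug-in idea can be sketched as a small ops table the core target
    calls through. The names below are invented for illustration; they are
    not dm-cache's real policy interface:

```c
#include <stddef.h>

/* Because the core target only talks to the ops table, policy modules
 * ("mq" or any other) can be swapped without touching the migration
 * machinery.  All names here are invented for illustration. */
struct policy_ops {
    /* Decide whether origin block `oblock` should be promoted to the
     * fast device.  Returns 1 to promote, 0 to leave on the origin. */
    int  (*should_promote)(void *ctx, unsigned long oblock);
    void (*record_hit)(void *ctx, unsigned long oblock);
};

/* A trivial example policy: promote a block after two consecutive hits. */
struct twohit_ctx { unsigned long last; int hits; };

static void twohit_record(void *ctx, unsigned long oblock)
{
    struct twohit_ctx *t = ctx;

    if (t->last == oblock) {
        t->hits++;
    } else {
        t->last = oblock;
        t->hits = 1;
    }
}

static int twohit_promote(void *ctx, unsigned long oblock)
{
    struct twohit_ctx *t = ctx;

    return t->last == oblock && t->hits >= 2;
}

static const struct policy_ops twohit_policy = {
    .should_promote = twohit_promote,
    .record_hit     = twohit_record,
};
```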
     
  • This patch takes advantage of the new bio-prison interface where the
    memory is now passed in rather than using a mempool in bio-prison.
    This allows the map function to avoid performing potentially-blocking
    allocations that could lead to deadlocks: we want to avoid the cell
    allocation that is done in bio_detain.

    (The potential for mempool deadlocks still remains in other functions
    that use bio_detain.)

    Signed-off-by: Joe Thornber
    Signed-off-by: Alasdair G Kergon

    Joe Thornber
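
    The shape of the fix is a two-phase pattern: allocate where blocking is
    acceptable, then let the non-blocking path only consume what was
    pre-allocated. A userspace sketch under invented names, not the thin
    target's code:

```c
#include <stdlib.h>

/* Memory is allocated up front, where blocking is acceptable; the
 * non-blocking map path only consumes it.  Names are illustrative. */
struct prison_cell { unsigned long key; };

struct map_context {
    struct prison_cell *spare;   /* pre-allocated before map runs */
};

/* May block: called outside the non-blocking map path. */
int prepare_mapping(struct map_context *mc)
{
    mc->spare = malloc(sizeof(*mc->spare));
    return mc->spare ? 0 : -1;
}

/* Must not block: consumes the spare instead of allocating. */
struct prison_cell *take_cell(struct map_context *mc, unsigned long key)
{
    struct prison_cell *c = mc->spare;

    mc->spare = NULL;     /* no allocation here, so no deadlock risk */
    c->key = key;
    return c;
}
```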
     
  • Change the dm_bio_prison interface so that instead of allocating memory
    internally, dm_bio_detain is supplied with a pre-allocated cell each
    time it is called.

    This enables a subsequent patch to move the allocation of the struct
    dm_bio_prison_cell outside the thin target's mapping function so it can
    no longer block there.

    Signed-off-by: Joe Thornber
    Signed-off-by: Alasdair G Kergon

    Joe Thornber
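
    A userspace model of the reworked detain call, assuming simplified
    stand-in types rather than the kernel's dm_bio_detain signature:

```c
#include <stddef.h>

/* The caller supplies a pre-allocated cell; detain either installs it
 * (returning 0) or reports the existing cell for the same key
 * (returning 1).  No memory is allocated inside detain, so it can
 * never block on an allocator.  Structures are simplified stand-ins. */
struct cell_key { unsigned long block; };

struct prison_cell {
    struct cell_key key;
    int n_bios;
};

#define NR_SLOTS 16
static struct prison_cell *slots[NR_SLOTS];

int bio_detain(const struct cell_key *key,
               struct prison_cell *cell_prealloc,
               struct prison_cell **cell_result)
{
    unsigned h = key->block % NR_SLOTS;
    struct prison_cell *c = slots[h];

    if (c && c->key.block == key->block) {
        c->n_bios++;              /* join the existing cell */
        *cell_result = c;
        return 1;
    }

    cell_prealloc->key = *key;    /* install the caller's cell */
    cell_prealloc->n_bios = 1;
    slots[h] = cell_prealloc;
    *cell_result = cell_prealloc;
    return 0;
}
```

    The unused pre-allocated cell (when detain returns 1) stays with the
    caller for reuse or freeing, which is why the allocation can be hoisted
    out of the map function entirely.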
     

22 Dec, 2012

1 commit

  • Change existing users of the function dm_cell_release_singleton to share
    cell_defer_except instead, and then remove the now-unused function.

    Everywhere that calls dm_cell_release_singleton, the bio in question
    is the holder of the cell.

    If there are no non-holder entries in the cell then cell_defer_except
    behaves exactly like dm_cell_release_singleton. Conversely, if there
    *are* non-holder entries then dm_cell_release_singleton must not be used
    because those entries would need to be deferred.

    Consequently, it is safe to replace use of dm_cell_release_singleton
    with cell_defer_except.

    This patch is a prerequisite for "dm thin: fix race between
    simultaneous io and discards to same block".

    Signed-off-by: Joe Thornber
    Signed-off-by: Mike Snitzer
    Cc: stable@vger.kernel.org
    Signed-off-by: Alasdair G Kergon

    Joe Thornber
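
    The equivalence argued above can be modelled in a few lines. This is an
    illustrative sketch with invented stand-in types, not the target's code:

```c
#include <stddef.h>

/* cell_defer_except releases a cell and defers every bio except the
 * holder.  With no non-holder entries, nothing is deferred and the
 * effect is exactly that of dm_cell_release_singleton.  Types are
 * illustrative stand-ins. */
struct cell {
    int holder;        /* 1 while the holder bio is in the cell */
    int n_inmates;     /* non-holder entries */
};

/* Returns the number of bios moved to the deferred list. */
int cell_defer_except(struct cell *c, int *deferred)
{
    int n = c->n_inmates;

    *deferred += n;     /* all non-holder bios get re-queued */
    c->n_inmates = 0;
    c->holder = 0;      /* cell is released */
    return n;
}
```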
     

13 Oct, 2012

1 commit