27 Jun, 2017

2 commits

  • Because user writes are decoupled from media writes by an intermediate
    write buffer, an irrecoverable media write error causes pblk to stall:
    user writes fill up the buffer and end up in an infinite retry loop.

    In order to let user writes fail gracefully, it is necessary for pblk to
    keep track of its own internal state and prevent further writes from
    being placed into the write buffer.

    This patch implements a state machine to keep track of internal errors
    and, in case of failure, fail further user writes in a standard way.
    Depending on the type of error, pblk does its best to persist buffered
    writes (which are already acknowledged) and shut down in a graceful
    manner, so that data can be recovered by re-instantiating pblk. This
    state machine also paves the way for a state-based FTL log (see the
    sketch below).

    Signed-off-by: Javier González
    Signed-off-by: Matias Bjørling
    Signed-off-by: Jens Axboe

    Javier González
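
    A minimal user-space sketch of the idea (hypothetical names, not
    pblk's actual code): an atomic state variable is checked on the write
    path, so that once an irrecoverable media error has been recorded, new
    user writes fail with -EIO instead of retrying forever.

        #include <errno.h>
        #include <stdatomic.h>
        #include <stddef.h>
        #include <stdio.h>

        enum pipe_state {
            PIPE_RUNNING,   /* accepting user writes */
            PIPE_STOPPING,  /* persisting already-acknowledged buffered writes */
            PIPE_STOPPED,   /* writes fail; data recoverable on re-instantiation */
        };

        static _Atomic enum pipe_state state = PIPE_RUNNING;

        /* Write path: called before data is placed in the write buffer. */
        static int submit_user_write(const void *data, size_t len)
        {
            (void)data; (void)len;
            if (atomic_load(&state) != PIPE_RUNNING)
                return -EIO;        /* fail gracefully instead of looping */
            /* ... copy data into the write buffer ... */
            return 0;
        }

        /* Error path: called when a media write fails irrecoverably. */
        static void report_media_error(void)
        {
            enum pipe_state expected = PIPE_RUNNING;

            /* Only the first error drives the shutdown sequence. */
            if (atomic_compare_exchange_strong(&state, &expected, PIPE_STOPPING)) {
                /* ... persist buffered, already-acknowledged writes ... */
                atomic_store(&state, PIPE_STOPPED);
            }
        }

        int main(void)
        {
            submit_user_write("a", 1);
            report_media_error();
            printf("write after error: %d\n", submit_user_write("b", 1));
            return 0;
        }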
     
  • At the moment, in order to get enough read parallelism, we recycle
    several lines at the same time. This approach has proven not to work
    well when reaching capacity, since we end up mixing valid data from
    all lines and thus fail to maintain a sustainable free/recycled line
    ratio.

    The new design relies on a two-level workqueue mechanism. In the first
    level, we read the metadata for a number of lines based on the GC list
    they reside on (this is governed by the number of valid sectors in each
    line). In the second level, we recycle a single line at a time. Here, we
    issue reads in parallel, while a single GC write thread places data in
    the write buffer. This design allows us to (i) move data from only one
    line at a time, thus maintaining a sane free/recycled line ratio, and
    (ii) keep the GC writer busy with recycled data (see the sketch below).

    Signed-off-by: Javier González
    Signed-off-by: Matias Bjørling
    Signed-off-by: Jens Axboe

    Javier González
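
    A simplified sketch of the two levels (hypothetical names, not pblk's
    code): the first level picks a victim line based on its valid sector
    count, and the second level recycles that single line before the next
    victim is chosen. The parallel reads and the single GC writer thread
    are only hinted at in the comments.

        #include <limits.h>
        #include <stdio.h>

        #define NR_LINES 8

        struct line {
            int id;
            int valid_sectors;  /* decides which GC list the line sits on */
        };

        static struct line lines[NR_LINES];

        /* First level: pick one victim line by valid sector count. */
        static struct line *select_victim(void)
        {
            struct line *victim = NULL;
            int best = INT_MAX;

            for (int i = 0; i < NR_LINES; i++) {
                if (lines[i].valid_sectors > 0 && lines[i].valid_sectors < best) {
                    best = lines[i].valid_sectors;
                    victim = &lines[i];
                }
            }
            return victim;
        }

        /* Second level: recycle a single line at a time. Reads of valid
         * data would be issued in parallel here, while one GC writer
         * thread places the data back into the write buffer. */
        static void recycle_line(struct line *l)
        {
            printf("recycling line %d (%d valid sectors)\n",
                   l->id, l->valid_sectors);
            l->valid_sectors = 0;   /* line becomes free for new writes */
        }

        int main(void)
        {
            for (int i = 0; i < NR_LINES; i++)
                lines[i] = (struct line){ .id = i, .valid_sectors = (i * 37) % 100 };

            /* Move data from only one line at a time. */
            for (struct line *v = select_victim(); v; v = select_victim())
                recycle_line(v);

            return 0;
        }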
     

24 Apr, 2017

1 commit

  • When block erases fail, these blocks are marked bad. The number of valid
    blocks in the line was not updated, which could cause an infinite loop
    on the erase path.

    Fix this atomic counter and, in order to avoid taking an irq lock in
    interrupt context, make the erase counters atomic too (see the sketch
    below).

    Also, in the case that a significant number of blocks in a line become
    bad, the doubly shared metadata buffer (emeta) ends up stalling the
    pipeline until all metadata is flushed to the media. Increase the
    number of metadata lines from 2 to 4 to avoid this case.

    Fixes: a4bd217b4326 ("lightnvm: physical block device (pblk) target")

    Signed-off-by: Javier González
    Reviewed-by: Matias Bjørling
    Signed-off-by: Jens Axboe

    Javier González
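
    A small user-space sketch of the counting scheme (hypothetical names,
    not the actual pblk code): atomic counters let the erase-completion
    path, which may run in interrupt context, account for bad blocks
    without taking a lock, and keep the number of usable blocks in the
    line consistent so the erase path does not wait forever.

        #include <stdatomic.h>
        #include <stdio.h>

        struct line_counters {
            atomic_int blk_in_line; /* usable blocks left in the line */
            atomic_int left_eblks;  /* blocks still waiting to be erased */
        };

        /* Erase-completion callback: a failed erase marks the block bad,
         * so it must be removed from the line's block count as well;
         * otherwise the erase path can wait for a count that is never
         * reached. */
        static void erase_complete(struct line_counters *c, int error)
        {
            if (error)
                atomic_fetch_sub(&c->blk_in_line, 1);   /* drop the bad block */

            atomic_fetch_sub(&c->left_eblks, 1);        /* one erase done */
        }

        int main(void)
        {
            struct line_counters c;

            atomic_init(&c.blk_in_line, 64);
            atomic_init(&c.left_eblks, 64);

            erase_complete(&c, 1);  /* failed erase: block goes bad */
            erase_complete(&c, 0);  /* successful erase */

            printf("blocks in line: %d, erases left: %d\n",
                   atomic_load(&c.blk_in_line), atomic_load(&c.left_eblks));
            return 0;
        }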
     

17 Apr, 2017

1 commit

  • This patch introduces pblk, a host-side translation layer for
    Open-Channel SSDs that exposes them as block devices. The translation
    layer allows data placement decisions and I/O scheduling to be
    managed by the host, enabling users to optimize the SSD for their
    specific workloads.

    An open-channel SSD has a set of LUNs (parallel units) and a
    collection of blocks. Each block can be read in any order, but
    writes must be sequential. Writes may also fail, and if a block
    requires it, the block must be reset before new writes can be
    applied.

    To manage the constraints, pblk maintains a logical to
    physical address (L2P) table, write cache, garbage
    collection logic, recovery scheme, and logic to rate-limit
    user I/Os versus garbage collection I/Os.

    The L2P table is fully associative and manages sectors at a
    4KB granularity. Pblk stores the L2P table in two places, in
    the out-of-band area of the media and on the last page of a
    line. In the case of a power failure, pblk will perform a
    scan to recover the L2P table (see the sketch at the end of
    this entry).

    The user data is organized into lines. A line is data
    striped across blocks and LUNs. Lines reduce the amount of
    metadata the host must maintain besides the user data and
    make it easier to implement RAID or erasure coding in the
    future.

    pblk implements multi-tenant support and can be instantiated
    multiple times on the same drive. Each instance owns a
    portion of the SSD - both in terms of I/O bandwidth and
    capacity - providing I/O isolation between instances.

    Finally, pblk also exposes a sysfs interface that allows
    user-space to peek into the internals of pblk. The interface
    is available at /dev/block/*/pblk/ where * is the block
    device name exposed.

    This work also contains contributions from:
    Matias Bjørling
    Simon A. F. Lund
    Young Tack Jin
    Huaicheng Li

    Signed-off-by: Javier González
    Signed-off-by: Matias Bjørling
    Signed-off-by: Jens Axboe

    Javier González
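
    As a rough illustration of the mapping described above (hypothetical
    names and layout, not pblk's data structures), a fully associative L2P
    table at 4 KB granularity can be modelled as a flat array indexed by
    logical block address:

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define SECTOR_SIZE 4096ULL
        #define ADDR_EMPTY  UINT64_MAX  /* logical sector not mapped yet */

        struct l2p_table {
            uint64_t nr_entries;    /* one entry per 4 KB logical sector */
            uint64_t *map;          /* map[lba] = physical address */
        };

        static struct l2p_table *l2p_create(uint64_t capacity_bytes)
        {
            struct l2p_table *t = calloc(1, sizeof(*t));

            if (!t)
                return NULL;
            t->nr_entries = capacity_bytes / SECTOR_SIZE;
            t->map = malloc(t->nr_entries * sizeof(*t->map));
            if (!t->map) {
                free(t);
                return NULL;
            }
            for (uint64_t i = 0; i < t->nr_entries; i++)
                t->map[i] = ADDR_EMPTY;
            return t;
        }

        /* Fully associative: any logical sector may map to any physical
         * sector, so an update is a plain table write. */
        static void l2p_set(struct l2p_table *t, uint64_t lba, uint64_t ppa)
        {
            t->map[lba] = ppa;
        }

        static uint64_t l2p_get(const struct l2p_table *t, uint64_t lba)
        {
            return t->map[lba];
        }

        int main(void)
        {
            struct l2p_table *t = l2p_create(1ULL << 30);   /* 1 GiB instance */

            if (!t)
                return 1;
            l2p_set(t, 42, 0x1234);
            printf("lba 42 -> ppa %#llx\n",
                   (unsigned long long)l2p_get(t, 42));
            printf("lba 43 mapped: %s\n",
                   l2p_get(t, 43) == ADDR_EMPTY ? "no" : "yes");

            free(t->map);
            free(t);
            return 0;
        }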