09 Feb, 2018

1 commit

  • config_fallbacks.h has some logic that sets HAVE_BLOCK_DEVICE
    based on a list of enabled options. Moving HAVE_BLOCK_DEVICE to
    Kconfig allows us to drastically shrink the logic in
    config_fallbacks.h

    Signed-off-by: Adam Ford
    [trini: Rename HAVE_BLOCK_DEVICE to CONFIG_BLOCK_DEVICE]
    Signed-off-by: Tom Rini


04 Sep, 2017

3 commits


28 Aug, 2017

11 commits

  • At present the NVMe uclass driver uses a global variable, nvme_info,
    to store information such as the namespace id, and the NVMe
    controller driver's priv struct has a blk_dev_start member that is
    used to calculate the namespace id from that global information.

    This is not a good design in the DM world and can be reworked with
    the following changes:

    - Encode the namespace id in the NVMe block device name during
      the NVMe uclass post probe
    - Extract the namespace id from the device name during the NVMe
      block device probe
    - Let the BLK uclass calculate the devnum for us by passing -1 to
      blk_create_devicef() as the devnum

    Signed-off-by: Bin Meng

  • The code in nvme_uclass_post_probe() can be replaced with a direct
    call to the blk_create_devicef() API.

    Signed-off-by: Bin Meng

  • So far cache operations are only applied to the submission and
    completion queues; they are missing in other places, such as the
    identify and block read/write routines.

    In order to correctly operate on the caches, the DMA buffer passed
    to identify routine must be allocated properly on the stack with the
    existing macro ALLOC_CACHE_ALIGN_BUFFER().

    Signed-off-by: Bin Meng

  • The NVMe block read and write routines are almost identical except
    for the command opcode. Consolidate them to avoid duplication.

    Signed-off-by: Bin Meng

  • The NVMe driver only uses two queues: the first is allocated for
    admin commands, while the second is for I/O. So far the driver uses
    magic numbers (0/1) to access them. Change to use macros.

    Signed-off-by: Bin Meng

  • So far the driver unconditionally delays 10 ms when enabling or
    disabling the controller, and still returns 0 if the 10 ms expires
    without the controller becoming ready. In fact, the spec defines a
    timeout value in the CAP register, which is the worst-case time that
    host software shall wait for the controller to become ready.

    Signed-off-by: Bin Meng

  • The capabilities register is read-only and is accessed in various
    places in the driver. Let's cache it in the controller driver's
    priv struct.

    Signed-off-by: Bin Meng

  • So far the missing endianness conversions have not caused any issue,
    since NVMe and x86 share the same (little) endianness, but for
    correctness they should be fixed.

    Signed-off-by: Bin Meng

  • ndev->queues is a pointer to pointer, but the allocation wrongly
    requests sizeof(struct nvme_queue) rather than the size of a
    pointer. Fix it.

    Signed-off-by: Bin Meng

  • The code currently tries to read the PCI vendor id of the NVMe block
    device by calling dm_pci_read_config16() with its parameter set to
    the root complex controller (ndev->pdev) instead of the device
    itself. This is seriously wrong. We could read the vendor id by
    passing the correct udevice parameter to the dm_pci_read_config16()
    API, but there is a shortcut: read the cached vendor id from the PCI
    device's struct pci_child_platdata.

    While we are here fixing this bug, note that the quirk-handling code
    in nvme_get_info_from_identify() never takes effect, since its
    condition can never be true. Remove it completely.

    Signed-off-by: Bin Meng

  • These are leftovers from when the driver was ported from Linux and
    are not used by the U-Boot driver.

    Signed-off-by: Bin Meng


14 Aug, 2017

5 commits

  • The Maximum Data Transfer Size (MDTS) field indicates the maximum
    data transfer size between the host and the controller. The host
    should not submit a command that exceeds this transfer size. The
    value is in units of the minimum memory page size and is reported
    as a power of two (2^n).

    The spec also says: a value of 0h indicates no restrictions on the
    transfer size. On a real NVMe card this is normally not 0 due to
    hardware restrictions, but a QEMU emulated NVMe device reports it
    as 0. In nvme_blk_read/write() below we have the following
    calculation for the maximum number of logical blocks per transfer:

    u16 lbas = 1 << (dev->max_transfer_shift - ns->lba_shift);

    A dev->max_transfer_shift of 0 makes the shift count negative and
    lbas bogus. Let's use 20 instead. With this fix, the NVMe driver
    works on the QEMU emulated NVMe device.

    Signed-off-by: Bin Meng
    Reviewed-by: Tom Rini

  • NVMe should use the nsze value from the queried device. This
    reflects the total number of blocks on the device and fixes
    detection of my Samsung 960 EVO 256GB.

    Original:
    Capacity: 40386.6 MB = 39.4 GB (82711872 x 512)

    Fixed:
    Capacity: 238475.1 MB = 232.8 GB (488397168 x 512)

    Signed-off-by: Jon Nettleton
    Reviewed-by: Bin Meng
    Tested-by: Bin Meng
    Reviewed-by: Tom Rini

  • This adds support for detecting the catch-all PCI class for NVMe
    devices. It allows the driver to work with most NVMe devices that
    don't need device-specific detection due to quirks etc.

    Tested against a Samsung 960 EVO drive.

    Signed-off-by: Jon Nettleton
    Signed-off-by: Bin Meng
    Reviewed-by: Tom Rini

  • This adds nvme_print_info() to show detailed NVMe controller and
    namespace information.

    Signed-off-by: Zhikang Zhang
    Signed-off-by: Wenbin Song
    Signed-off-by: Bin Meng
    Reviewed-by: Tom Rini

  • NVM Express (NVMe) is a register level interface that allows host
    software to communicate with a non-volatile memory subsystem. This
    interface is optimized for enterprise and client solid state drives,
    typically attached to the PCI express interface.

    This adds U-Boot driver support for devices that follow the NVMe
    standard [1], with basic read/write operations.

    Tested with a 400GB Intel SSD 750 series NVMe card with controller
    id 8086:0953.

    [1] http://www.nvmexpress.org/resources/specifications/

    Signed-off-by: Zhikang Zhang
    Signed-off-by: Wenbin Song
    Signed-off-by: Bin Meng
    Reviewed-by: Tom Rini
