17 Sep, 2016

1 commit

  • We currently account a '0' dispatch, and anything above that which
    still falls within the range set by BLK_MQ_MAX_DISPATCH_ORDER.
    Anything dispatched beyond that range is not accounted at all.

    Change the last bucket to be inclusive of anything above the range we
    track, and have the sysfs file reflect that by including a '+' in the
    output:

    $ cat /sys/block/nvme0n1/mq/0/dispatched
    0 1006
    1 20229
    2 1
    4 0
    8 0
    16 0
    32+ 0
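
    In code, the inclusive last bucket amounts to roughly the following
    (a sketch based on this changelog, not the patch verbatim; the helper
    name is an assumption):

    static inline unsigned int queued_to_index(unsigned int queued)
    {
            if (!queued)
                    return 0;       /* the '0' dispatch bucket */

            /* clamp anything above the tracked range into the
             * last, now inclusive, "32+" bucket
             */
            return min(BLK_MQ_MAX_DISPATCH_ORDER - 1, ilog2(queued) + 1);
    }

    /* in the dispatch path */
    hctx->dispatched[queued_to_index(queued)]++;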

    Signed-off-by: Jens Axboe
    Reviewed-by: Omar Sandoval


14 Sep, 2016

2 commits

  • Allow the io_poll statistics to be zeroed to make for easier logging
    of polling events.
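
    A sketch of what the reset handler might look like (the counter names
    follow the io_poll entries discussed in the next entry and are
    assumptions, not a copy of the patch):

    static ssize_t blk_mq_hw_sysfs_poll_store(struct blk_mq_hw_ctx *hctx,
                                              const char *page, size_t size)
    {
            /* any write to the io_poll file zeroes the statistics */
            hctx->poll_considered = 0;
            hctx->poll_invoked = 0;
            hctx->poll_success = 0;

            return size;
    }

    That way a logging script can reset between runs with something like
    'echo 0 > /sys/block/nvme0n1/mq/0/io_poll' (path reused from the
    dispatched example above).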

    Signed-off-by: Stephen Bates
    Acked-by: Christoph Hellwig
    Signed-off-by: Jens Axboe

  • In order to help determine the effectiveness of polling in a running
    system it is useful to determine the ratio of how often the poll
    function is called vs how often the completion is checked. For this
    reason we add a poll_considered variable and add it to the sysfs entry
    for io_poll.
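
    The bookkeeping this describes reduces to something like the
    following inside the poll path (a sketch inferred from the changelog;
    the surrounding blk_mq_poll logic is elided):

    hctx->poll_considered++;                /* polling was considered */

    if (q->mq_ops->poll) {
            hctx->poll_invoked++;           /* driver poll really called */
            if (q->mq_ops->poll(hctx, tag) > 0)
                    hctx->poll_success++;
    }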

    Signed-off-by: Stephen Bates
    Acked-by: Christoph Hellwig
    Signed-off-by: Jens Axboe


20 Mar, 2016

1 commit

  • queue_for_each_ctx() iterates over per_cpu variables under the assumption that
    the possible cpu mask cannot have holes. That's wrong as all cpumasks can have
    holes. In case there are holes the iteration ends up accessing uninitialized
    memory and crashing as a result.

    Replace the macro by a proper for_each_possible_cpu() loop and drop the unused
    macro blk_ctx_sum() which references queue_for_each_ctx().
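
    The replacement pattern, sketched:

    struct blk_mq_ctx *ctx;
    unsigned int i;

    /* was: queue_for_each_ctx(q, ctx, i), which assumed the possible
     * cpu mask has no holes
     */
    for_each_possible_cpu(i) {
            ctx = per_cpu_ptr(q->queue_ctx, i);
            /* ... operate on ctx ... */
    }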

    Reported-by: Xiong Zhou
    Signed-off-by: Thomas Gleixner
    Signed-off-by: Jens Axboe


10 Feb, 2016

1 commit

  • The hardware's provided queue count may change at runtime with resource
    provisioning. This patch allows a block driver to alter the number of
    h/w queues available when its resource count changes.

    The main part is a new blk-mq API to request a new number of h/w queues
    for a given live tag set. The new API freezes all queues using that set,
    then adjusts the allocated count prior to remapping these to CPUs.

    The bulk of the rest just shifts where h/w contexts and all their
    artifacts are allocated and freed.

    The number of max h/w contexts is capped to the number of possible cpus
    since there is no use for more than that. As such, all pre-allocated
    memory for pointers needs to account for the max possible rather than
    the initial number of queues.

    A side effect of this is that blk-mq will proceed successfully as
    long as it can allocate at least one h/w context. Previously it would
    fail request queue initialization if fewer than the requested number
    were allocated.
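
    From a driver's point of view the new API is a single call (sketch;
    'dev' and the trigger are hypothetical):

    /* controller reported a new queue count after reprovisioning */
    blk_mq_update_nr_hw_queues(&dev->tagset, new_nr_hw_queues);

    This freezes every queue using the tag set, adjusts the allocated
    h/w context count, remaps the contexts to CPUs, and unfreezes.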

    Signed-off-by: Keith Busch
    Reviewed-by: Christoph Hellwig
    Tested-by: Jon Derrick
    Signed-off-by: Jens Axboe


08 Nov, 2015

1 commit

  • Add basic support for polling for specific IO to complete. This uses
    the cookie that blk-mq passes back, which enables the block layer
    to pass this cookie to the driver to spin for a specific request.

    This will be combined with request latency tracking, so we can make
    qualified decisions about when to poll and when not to. For now, for
    benchmark purposes, we add a sysfs file that controls whether polling
    is enabled or not.
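
    The resulting flow, sketched (the exact submit_bio signature differed
    at the time, and bio setup is elided):

    blk_qc_t cookie;

    cookie = submit_bio(bio);       /* blk-mq returns a queue cookie */

    /* spin for this specific request instead of sleeping on an irq */
    while (!blk_poll(bdev_get_queue(bio->bi_bdev), cookie))
            cpu_relax();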

    Signed-off-by: Jens Axboe
    Acked-by: Christoph Hellwig
    Acked-by: Keith Busch


22 Oct, 2015

1 commit

  • Allow pmem, and other synchronous/bio-based block drivers, to fallback
    on a per-cpu reference count managed by the core for tracking queue
    live/dead state.

    The existing per-cpu reference count for the blk_mq case is promoted to
    be used in all block i/o scenarios. This involves initializing it by
    default, waiting for it to drop to zero at exit, and holding a live
    reference over the invocation of q->make_request_fn() in
    generic_make_request(). The blk_mq code continues to take its own
    reference per blk_mq request and retains the ability to freeze the
    queue, but the check that the queue is frozen is moved to
    generic_make_request().
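
    The hot path then follows this pattern (a sketch; exact signatures
    changed in later releases):

    /* take a live reference on the queue for the duration of the bio */
    if (blk_queue_enter(q, GFP_KERNEL))
            return;                 /* queue is frozen or dying */

    q->make_request_fn(q, bio);

    blk_queue_exit(q);              /* drop the per-bio reference */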

    This fixes crash signatures like the following:

    BUG: unable to handle kernel paging request at ffff880140000000
    [..]
    Call Trace:
    [] ? copy_user_handle_tail+0x5f/0x70
    [] pmem_do_bvec.isra.11+0x70/0xf0 [nd_pmem]
    [] pmem_make_request+0xd1/0x200 [nd_pmem]
    [] ? mempool_alloc+0x72/0x1a0
    [] generic_make_request+0xd6/0x110
    [] submit_bio+0x76/0x170
    [] submit_bh_wbc+0x12f/0x160
    [] submit_bh+0x12/0x20
    [] jbd2_write_superblock+0x8d/0x170
    [] jbd2_mark_journal_empty+0x5d/0x90
    [] jbd2_journal_destroy+0x24b/0x270
    [] ? put_pwq_unlocked+0x2a/0x30
    [] ? destroy_workqueue+0x225/0x250
    [] ext4_put_super+0x64/0x360
    [] generic_shutdown_super+0x6a/0xf0

    Cc: Jens Axboe
    Cc: Keith Busch
    Cc: Ross Zwisler
    Suggested-by: Christoph Hellwig
    Reviewed-by: Christoph Hellwig
    Tested-by: Ross Zwisler
    Signed-off-by: Dan Williams
    Signed-off-by: Jens Axboe


30 Sep, 2015

2 commits

  • CPU hotplug handling for blk-mq (blk_mq_queue_reinit) acquires
    all_q_mutex in blk_mq_queue_reinit_notify() and then removes sysfs
    entries via blk_mq_sysfs_unregister(). Removing a sysfs entry blocks
    until the active reference count of its kernfs_node drops to zero.

    On the other hand, reading blk_mq_hw_sysfs_cpu sysfs entry (e.g.
    /sys/block/nullb0/mq/0/cpu_list) acquires all_q_mutex in
    blk_mq_hw_sysfs_cpus_show().

    If these happen at the same time, a deadlock can occur: one side
    waits for the active reference to drop to zero while holding
    all_q_mutex, and the other tries to acquire all_q_mutex while
    holding the active reference.

    The reason that all_q_mutex is acquired in blk_mq_hw_sysfs_cpus_show()
    is to avoid reading an incomplete hctx->cpumask. Since reading a blk-mq
    sysfs entry already requires acquiring q->sysfs_lock, we can avoid both
    the deadlock and the incomplete read by holding q->sysfs_lock while
    hctx->cpumask is being updated.
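
    The update side then becomes, roughly (a sketch; the show side
    already runs under q->sysfs_lock via the sysfs machinery):

    mutex_lock(&q->sysfs_lock);
    cpumask_set_cpu(cpu, hctx->cpumask);    /* hctx->cpumask update */
    mutex_unlock(&q->sysfs_lock);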

    Signed-off-by: Akinobu Mita
    Reviewed-by: Ming Lei
    Cc: Ming Lei
    Cc: Wanpeng Li
    Reviewed-by: Christoph Hellwig
    Signed-off-by: Jens Axboe

  • There is a race between cpu hotplug handling and adding/deleting
    gendisk for blk-mq, where both are trying to register and unregister
    the same sysfs entries.

    null_add_dev
    --> blk_mq_init_queue
    --> blk_mq_init_allocated_queue
    --> add to 'all_q_list' (*)
    --> add_disk
    --> blk_register_queue
    --> blk_mq_register_disk (++)

    null_del_dev
    --> del_gendisk
    --> blk_unregister_queue
    --> blk_mq_unregister_disk (--)
    --> blk_cleanup_queue
    --> blk_mq_free_queue
    --> del from 'all_q_list' (*)

    blk_mq_queue_reinit
    --> blk_mq_sysfs_unregister (-)
    --> blk_mq_sysfs_register (+)

    While the request queue is on 'all_q_list' (*), blk_mq_queue_reinit()
    can be called for the queue at any time by the CPU hotplug callback.
    But blk_mq_sysfs_unregister (-) and blk_mq_sysfs_register (+) in
    blk_mq_queue_reinit must not be called before blk_mq_register_disk (++)
    or after blk_mq_unregister_disk (--) has finished, because
    '/sys/block/*/mq/' does not exist then.

    There is already a BLK_MQ_F_SYSFS_UP flag in hctx->flags that can be
    used to track this sysfs state, but it only fixes the issue partially.

    In order to fix it completely, we need a per-queue flag instead of a
    per-hctx flag, with appropriate locking. So this introduces
    q->mq_sysfs_init_done, which is properly protected by all_q_mutex.

    Also, we need to ensure that blk_mq_map_swqueue() is called with
    all_q_mutex held. Since hctx->nr_ctx is temporarily reset and then
    updated in blk_mq_map_swqueue(), blk_mq_register_hctx() must not
    observe the temporary hctx->nr_ctx value during CPU hotplug handling
    or while adding/deleting a gendisk.
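
    The per-queue guard, sketched (bodies elided):

    void blk_mq_sysfs_unregister(struct request_queue *q)
    {
            /* nothing to tear down before blk_mq_register_disk() has
             * created /sys/block/<dev>/mq, or after it was removed
             */
            if (!q->mq_sysfs_init_done)
                    return;

            /* ... unregister hctx/ctx kobjects ... */
    }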

    Signed-off-by: Akinobu Mita
    Reviewed-by: Ming Lei
    Cc: Ming Lei
    Reviewed-by: Christoph Hellwig
    Signed-off-by: Jens Axboe


15 Aug, 2015

1 commit

  • There may be so many pending requests that the PAGE_SIZE buffer
    can't hold them all.

    One typical example is scsi-mq: the queue depth (.can_queue) of the
    scsi_host and blk-mq is quite big, but the scsi_device's queue depth
    (.cmd_per_lun) is fairly small, so it is quite easy to have lots of
    pending requests in the hw queue.

    This patch fixes the following warning and the related memory
    corruption.

    [ 359.025101] fill_read_buffer: blk_mq_hw_sysfs_show+0x0/0x7d returned bad count^M
    [ 359.055595] irq event stamp: 15537^M
    [ 359.055606] general protection fault: 0000 [#1] PREEMPT SMP DEBUG_PAGEALLOC ^M
    [ 359.055614] Dumping ftrace buffer:^M
    [ 359.055660] (ftrace buffer empty)^M
    [ 359.055672] Modules linked in: nbd ipv6 kvm_intel kvm serio_raw^M
    [ 359.055678] CPU: 4 PID: 21631 Comm: stress-ng-sysfs Not tainted 4.2.0-rc5-next-20150805 #434^M
    [ 359.055679] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011^M
    [ 359.055682] task: ffff8802161cc000 ti: ffff88021b4a8000 task.ti: ffff88021b4a8000^M
    [ 359.055693] RIP: 0010:[] [] __kmalloc+0xe8/0x152^M
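
    The fix boils down to making the show method respect the PAGE_SIZE
    sysfs buffer (a sketch of the pattern, not the patch itself):

    ssize_t len = 0;

    list_for_each_entry(rq, &hctx->dispatch, queuelist) {
            /* stop early rather than write past the sysfs page */
            if (len >= PAGE_SIZE - 64) {
                    len += snprintf(page + len, PAGE_SIZE - len, "...\n");
                    break;
            }
            len += snprintf(page + len, PAGE_SIZE - len, "rq %p\n", rq);
    }

    return len;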

    Cc:
    Signed-off-by: Ming Lei
    Signed-off-by: Jens Axboe


13 Mar, 2015

1 commit

  • Add a variant of blk_mq_init_queue that allows a previously allocated
    queue to be initialized. blk_mq_init_allocated_queue models
    blk_init_allocated_queue -- which was also created for DM's use.

    DM's approach to device creation requires a placeholder request_queue be
    allocated for use with alloc_dev() but the decision about what type of
    request_queue will be ultimately created is deferred until all component
    devices referenced in the DM table are processed to determine the table
    type (request-based, blk-mq request-based, or bio-based).

    Also, because of DM's late finalization of the request_queue type,
    the call to blk_mq_register_disk() doesn't happen during alloc_dev().
    blk_mq_register_disk() must therefore be exported so that DM can
    backfill the 'mq' dir once the blk-mq queue is fully allocated.
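
    For a DM-style caller the split looks roughly like this (a sketch;
    error handling elided):

    /* early, in alloc_dev(): placeholder queue, type still unknown */
    q = blk_alloc_queue_node(GFP_KERNEL, numa_node);

    /* later, once the table turns out to be blk-mq request-based */
    q = blk_mq_init_allocated_queue(set, q);
    blk_mq_register_disk(disk);     /* backfill the 'mq' sysfs dir */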

    Signed-off-by: Mike Snitzer
    Reviewed-by: Ming Lei
    Signed-off-by: Jens Axboe


21 Jan, 2015

1 commit

  • The kobject memory shouldn't have been freed before the kobject
    is released because driver core can access it freely before its
    release.

    This patch frees hctx in its release callback. The ctx, in contrast,
    share one single per-cpu variable associated with the request queue,
    so they are freed in q->mq_kobj's release handler.
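
    The shape of the hctx side of the fix, sketched (the callback name
    is an assumption):

    static void blk_mq_hw_sysfs_release(struct kobject *kobj)
    {
            struct blk_mq_hw_ctx *hctx = container_of(kobj,
                            struct blk_mq_hw_ctx, kobj);

            /* only now is it safe: no sysfs user holds a reference */
            kfree(hctx);
    }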

    Signed-off-by: Sasha Levin
    (fix ctx kobjects)
    Signed-off-by: Ming Lei
    Signed-off-by: Jens Axboe


10 Dec, 2014

1 commit

  • When a CPU is hotplugged, the current blk-mq spews a warning like:

    kobject '(null)' (ffffe8ffffc8b5d8): tried to add an uninitialized object, something is seriously wrong.
    CPU: 1 PID: 1386 Comm: systemd-udevd Not tainted 3.18.0-rc7-2.g088d59b-default #1
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_171129-lamiak 04/01/2014
    0000000000000000 0000000000000002 ffffffff81605f07 ffffe8ffffc8b5d8
    ffffffff8132c7a0 ffff88023341d370 0000000000000020 ffff8800bb05bd58
    ffff8800bb05bd08 000000000000a0a0 000000003f441940 0000000000000007
    Call Trace:
    [] dump_trace+0x86/0x330
    [] show_stack_log_lvl+0x94/0x170
    [] show_stack+0x21/0x50
    [] dump_stack+0x41/0x51
    [] kobject_add+0xa0/0xb0
    [] blk_mq_register_hctx+0x91/0xb0
    [] blk_mq_sysfs_register+0x3e/0x60
    [] blk_mq_queue_reinit_notify+0xf8/0x190
    [] notifier_call_chain+0x4c/0x70
    [] cpu_notify+0x23/0x50
    [] _cpu_up+0x157/0x170
    [] cpu_up+0x89/0xb0
    [] cpu_subsys_online+0x35/0x80
    [] device_online+0x5d/0xa0
    [] online_store+0x75/0x80
    [] kernfs_fop_write+0xda/0x150
    [] vfs_write+0xb2/0x1f0
    [] SyS_write+0x42/0xb0
    [] system_call_fastpath+0x16/0x1b
    [] 0x7f0132fb24e0

    This is indeed because of an uninitialized kobject for blk_mq_ctx.
    The blk_mq_ctx kobjects are initialized in blk_mq_sysfs_init(), but
    it loops over hctx_for_each_ctx(), i.e. it initializes them only for
    online CPUs. Thus, when a CPU is hotplugged, the ctx for the newly
    onlined CPU is registered without initialization.

    This patch fixes the issue by initializing all ctx kobjects belonging
    to each queue.
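
    Sketched, the initialization loops over possible rather than online
    CPUs:

    for_each_possible_cpu(cpu) {
            struct blk_mq_ctx *ctx = per_cpu_ptr(q->queue_ctx, cpu);

            /* init even offline CPUs' ctx kobjects, so a later hotplug
             * registers an initialized object
             */
            kobject_init(&ctx->kobj, &blk_mq_ctx_ktype);
    }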

    Bugzilla: https://bugzilla.novell.com/show_bug.cgi?id=908794
    Cc:
    Signed-off-by: Takashi Iwai
    Signed-off-by: Jens Axboe


25 Sep, 2014

1 commit

  • blk-mq uses percpu_ref for its usage counter, which tracks the number
    of in-flight commands and is used to synchronously drain the queue on
    freeze. percpu_ref shutdown takes measurable wallclock time as it
    involves a sched RCU grace period. This means that draining a blk-mq
    queue takes measurable wallclock time. One would think that this
    shouldn't matter, as queue shutdown should be a rare event which
    takes place asynchronously w.r.t. userland.

    Unfortunately, SCSI probing involves synchronously setting up and then
    tearing down a lot of request_queues back-to-back for non-existent
    LUNs. This means that SCSI probing may take more than ten seconds
    when scsi-mq is used.

    [ 0.949892] scsi host0: Virtio SCSI HBA
    [ 1.007864] scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 1.1. PQ: 0 ANSI: 5
    [ 1.021299] scsi 0:0:1:0: Direct-Access QEMU QEMU HARDDISK 1.1. PQ: 0 ANSI: 5
    [ 1.520356] tsc: Refined TSC clocksource calibration: 2491.910 MHz

    [ 16.186549] sd 0:0:0:0: Attached scsi generic sg0 type 0
    [ 16.190478] sd 0:0:1:0: Attached scsi generic sg1 type 0
    [ 16.194099] osd: LOADED open-osd 0.2.1
    [ 16.203202] sd 0:0:0:0: [sda] 31457280 512-byte logical blocks: (16.1 GB/15.0 GiB)
    [ 16.208478] sd 0:0:0:0: [sda] Write Protect is off
    [ 16.211439] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    [ 16.218771] sd 0:0:1:0: [sdb] 31457280 512-byte logical blocks: (16.1 GB/15.0 GiB)
    [ 16.223264] sd 0:0:1:0: [sdb] Write Protect is off
    [ 16.225682] sd 0:0:1:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA

    This is also the reason why request_queues start in bypass mode which
    is ended on blk_register_queue() as shutting down a fully functional
    queue also involves a RCU grace period and the queues for non-existent
    SCSI devices never reach registration.

    blk-mq basically needs to do the same thing - start the mq in a
    degraded mode which is faster to shut down and then make it fully
    functional only after the queue reaches registration. percpu_ref
    recently grew facilities to force atomic operation until explicitly
    switched to percpu mode, which can be used for this purpose. This
    patch makes blk-mq initialize q->mq_usage_counter in atomic mode and
    switch it to percpu mode only once blk_register_queue() is reached.
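
    The two ends of this, sketched (the release callback name is an
    assumption):

    /* queue init: atomic mode, cheap to tear down again */
    percpu_ref_init(&q->mq_usage_counter, blk_mq_usage_counter_release,
                    PERCPU_REF_INIT_ATOMIC, GFP_KERNEL);

    /* blk_register_queue(): the queue is real, switch to percpu mode */
    percpu_ref_switch_to_percpu(&q->mq_usage_counter);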

    Note that this issue was previously worked around by 0a30288da1ae
    ("blk-mq, percpu_ref: implement a kludge for SCSI blk-mq stall during
    probe") for v3.17. The temp fix was reverted in preparation of adding
    persistent atomic mode to percpu_ref by 9eca80461a45 ("Revert "blk-mq,
    percpu_ref: implement a kludge for SCSI blk-mq stall during probe"").
    This patch and the prerequisite percpu_ref changes will be merged
    during v3.18 devel cycle.

    Signed-off-by: Tejun Heo
    Reported-by: Christoph Hellwig
    Link: http://lkml.kernel.org/g/20140919113815.GA10791@lst.de
    Fixes: add703fda981 ("blk-mq: use percpu_ref for mq usage count")
    Reviewed-by: Kent Overstreet
    Cc: Jens Axboe
    Cc: Johannes Weiner


30 May, 2014

1 commit

  • Currently blk-mq registers all the hardware queues in sysfs,
    regardless of whether it uses them (i.e. whether they have CPU
    mappings) or not. The unused hardware queues lack the cpuX/
    directories, and their other sysfs entries (like active, pending,
    etc) are all zeroes.

    Change this so that sysfs correctly reflects the current mappings
    of the hardware queues.
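
    The registration-side check, sketched:

    static int blk_mq_register_hctx(struct blk_mq_hw_ctx *hctx)
    {
            /* skip hardware queues that have no CPUs mapped to them */
            if (!hctx->nr_ctx)
                    return 0;

            /* ... add the kobject and per-cpu ctx entries ... */
            return 0;
    }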

    Signed-off-by: Jens Axboe


14 May, 2014

1 commit

  • This adds support for active queue tracking, meaning that the
    blk-mq tagging maintains a count of active users of a tag set.
    This allows us to maintain a notion of fairness between users,
    so that we can distribute the tag depth evenly instead of starving
    some users while others run unfairly deep queues.

    If sharing of a tag set is detected, each hardware queue will
    track the depth of its own queue. And if this exceeds the total
    depth divided by the number of active queues, the user is actively
    throttled down.

    The active queue count is done lazily to avoid bouncing that data
    between submitter and completer. Each hardware queue gets marked
    active when it allocates its first tag, and gets marked inactive
    when 1) the last tag is cleared, and 2) the queue timeout grace
    period has passed.
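
    The fairness test this describes reduces to roughly the following
    (a sketch; the real check also special-cases shallow tag maps):

    static bool hctx_may_queue(struct blk_mq_hw_ctx *hctx,
                               unsigned int total_depth)
    {
            unsigned int users = atomic_read(&hctx->tags->active_queues);

            if (users <= 1)
                    return true;

            /* cap each active user at its share of the tag space */
            return atomic_read(&hctx->nr_active) <
                   max(4U, total_depth / users);
    }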

    Signed-off-by: Jens Axboe


25 Apr, 2014

1 commit

  • The blk-mq code is using its own version of the I/O completion
    affinity tunables, which causes a few issues:

    - the rq_affinity sysfs file doesn't work for blk-mq devices, even if
    it still is present, thus breaking existing tuning setups.
    - the rq_affinity = 1 mode, which is the default for legacy request
    based drivers, isn't implemented at all.
    - blk-mq drivers don't implement any completion affinity with the
    default flag settings.

    This patch removes the blk-mq ipi_redirect flag and sysfs file, as
    well as the internal BLK_MQ_F_SHOULD_IPI flag, and replaces them with
    code that respects the queue-wide rq_affinity flags and also
    implements the rq_affinity = 1 mode.

    This means I/O completion affinity can now only be tuned block-queue wide
    instead of per context, which seems more sensible to me anyway.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Jens Axboe


07 Dec, 2013

1 commit

  • All objects that are allocated in blk_mq_register_disk must be
    released in blk_mq_unregister_disk.

    I use a KVM virtual machine and virtio disk to reproduce this issue.

    kmemleak: 18 new suspected memory leaks (see /sys/kernel/debug/kmemleak)
    $ cat /sys/kernel/debug/kmemleak | head -n 30
    unreferenced object 0xffff8800b6636150 (size 8):
    comm "kworker/0:2", pid 65, jiffies 4294809903 (age 86.358s)
    hex dump (first 8 bytes):
    76 69 72 74 69 6f 34 00 virtio4.
    backtrace:
    [] kmemleak_alloc+0x4e/0xb0
    [] __kmalloc_track_caller+0xf5/0x260
    [] kstrdup+0x31/0x60
    [] sysfs_new_dirent+0x2e/0x140
    [] create_dir+0x38/0xe0
    [] sysfs_create_dir_ns+0x73/0xc0
    [] kobject_add_internal+0xc9/0x340
    [] kobject_add+0x65/0xb0
    [] device_add+0x128/0x660
    [] device_register+0x1a/0x20
    [] register_virtio_device+0x98/0xe0
    [] virtio_pci_probe+0x12e/0x1c0
    [] local_pci_probe+0x45/0xa0
    [] pci_device_probe+0x121/0x130
    [] driver_probe_device+0x87/0x390
    [] __device_attach+0x3b/0x40
    unreferenced object 0xffff8800b65aa1d8 (size 144):

    Fixes: 320ae51feed5 (blk-mq: new multi-queue block IO queueing mechanism)
    Cc: Jens Axboe
    Signed-off-by: Andrey Vagin
    Signed-off-by: Jens Axboe


25 Oct, 2013

1 commit

  • Linux currently has two models for block devices:

    - The classic request_fn based approach, where drivers use struct
    request units for IO. The block layer provides various helper
    functionalities to let drivers share code, things like tag
    management, timeout handling, queueing, etc.

    - The "stacked" approach, where a driver squeezes in between the
    block layer and IO submitter. Since this bypasses the IO stack,
    driver generally have to manage everything themselves.

    With drivers being written for new high IOPS devices, the classic
    request_fn based driver doesn't work well enough. The design dates
    back to when both SMP and high IOPS were rare. It has problems with
    scaling to bigger machines, and runs into scaling issues even on
    smaller machines when you have IOPS in the hundreds of thousands
    per device.

    The stacked approach is then most often selected as the model
    for the driver. But this means that everybody has to re-invent
    everything, and along with that we get all the problems again
    that the shared approach solved.

    This commit introduces blk-mq, block multi queue support. The
    design is centered around per-cpu queues for queueing IO, which
    then funnel down into x number of hardware submission queues.
    We might have a 1:1 mapping between the two, or it might be
    an N:M mapping. That all depends on what the hardware supports.

    blk-mq provides various helper functions, which include:

    - Scalable support for request tagging. Most devices need to
    be able to uniquely identify a request both in the driver and
    to the hardware. The tagging uses per-cpu caches for freed
    tags, to enable cache hot reuse.

    - Timeout handling without tracking requests on a per-device
    basis. Basically the driver should be able to get a notification,
    if a request happens to fail.

    - Optional support for non 1:1 mappings between issue and
    submission queues. blk-mq can redirect IO completions to the
    desired location.

    - Support for per-request payloads. Drivers almost always need
    to associate a request structure with some driver private
    command structure. Drivers can tell blk-mq this at init time,
    and then any request handed to the driver will have the
    required size of memory associated with it (see the sketch
    after this list).

    - Support for merging of IO, and plugging. The stacked model
    gets neither of these. Even for high IOPS devices, merging
    sequential IO reduces per-command overhead and thus
    increases bandwidth.
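
    To make the tagging and per-request payload points concrete, here is
    a minimal driver-side setup in the tag-set style the interface grew
    into (a sketch; my_mq_ops and struct my_cmd are hypothetical, and the
    original submission used a slightly different registration struct):

    struct blk_mq_tag_set set = {
            .ops            = &my_mq_ops,            /* queue_rq etc. */
            .nr_hw_queues   = 1,
            .queue_depth    = 64,
            .cmd_size       = sizeof(struct my_cmd), /* payload size */
            .numa_node      = NUMA_NO_NODE,
    };
    struct request_queue *q;

    if (blk_mq_alloc_tag_set(&set))
            return -ENOMEM;

    q = blk_mq_init_queue(&set);    /* each request carries a my_cmd */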

    For now, this is provided as a potential 3rd queueing model, with
    the hope being that, as it matures, it can replace both the classic
    and stacked model. That would get us back to having just 1 real
    model for block devices, leaving the stacked approach to dm/md
    devices (as it was originally intended).

    Contributions in this patch from the following people:

    Shaohua Li
    Alexander Gordeev
    Christoph Hellwig
    Mike Christie
    Matias Bjorling
    Jeff Moyer

    Acked-by: Christoph Hellwig
    Signed-off-by: Jens Axboe
