05 Aug, 2019
1 commit
-
blk-mq may schedule a queue's complete function to run on a remote CPU via
IPI, but it doesn't provide any way to synchronize against a request's
complete fn. The current queue freeze interface can't provide that
synchronization because aborted requests stay on the blk-mq queues during EH.

In some drivers' EH (such as NVMe), a hardware queue's resources may be freed
and re-allocated. If the completed request's complete fn finally runs after
the hardware queue's resources have been released, a kernel crash is
triggered.

Prepare for fixing this kind of issue by introducing
blk_mq_tagset_wait_completed_request().

Cc: Max Gurtovoy
Cc: Sagi Grimberg
Cc: Keith Busch
Cc: Christoph Hellwig
Reviewed-by: Sagi Grimberg
Signed-off-by: Ming Lei
Signed-off-by: Jens Axboe
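A minimal sketch of how such a wait helper can be layered on top of the existing busy-tag iterator. The callback name, the blk_mq_request_completed()-style state check, and the 5 ms polling interval are illustrative assumptions, not a claim about the final upstream code:

#include <linux/blk-mq.h>
#include <linux/delay.h>

/* Count requests that are completed but whose complete fn may still run. */
static bool count_completed_rqs(struct request *rq, void *data, bool reserved)
{
    unsigned int *count = data;

    if (blk_mq_request_completed(rq))
        (*count)++;
    return true;        /* keep iterating over all tags */
}

/* Poll until no completed request remains anywhere in the tag set. */
void blk_mq_tagset_wait_completed_request(struct blk_mq_tag_set *tagset)
{
    for (;;) {
        unsigned int count = 0;

        blk_mq_tagset_busy_iter(tagset, count_completed_rqs, &count);
        if (!count)
            break;
        msleep(5);
    }
}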
03 Jul, 2019
1 commit
-
No code that occurs between blk_mq_get_ctx() and blk_mq_put_ctx() depends
on preemption being disabled for its correctness. Since removing the CPU
preemption calls does not measurably affect performance, simplify the
blk-mq code by removing the blk_mq_put_ctx() function and also by not
disabling preemption in blk_mq_get_ctx().

Cc: Hannes Reinecke
Cc: Omar Sandoval
Reviewed-by: Christoph Hellwig
Reviewed-by: Ming Lei
Signed-off-by: Bart Van Assche
Signed-off-by: Jens Axboe
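Roughly what the simplification amounts to; the _old suffix is only there to show both variants side by side, and __blk_mq_get_ctx() is the existing per-CPU lookup helper:

/* Before: getting a ctx pinned the CPU until blk_mq_put_ctx() ran. */
static inline struct blk_mq_ctx *blk_mq_get_ctx_old(struct request_queue *q)
{
    return __blk_mq_get_ctx(q, get_cpu());  /* disables preemption */
}

static inline void blk_mq_put_ctx(struct blk_mq_ctx *ctx)
{
    put_cpu();                              /* re-enables preemption */
}

/*
 * After: just sample the current CPU. Migrating right afterwards is
 * harmless because no code between get and the (now removed) put
 * relied on staying on that CPU.
 */
static inline struct blk_mq_ctx *blk_mq_get_ctx(struct request_queue *q)
{
    return __blk_mq_get_ctx(q, raw_smp_processor_id());
}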
01 May, 2019
1 commit
-
Various block layer files do not have any licensing information at all.
Add SPDX tags for the default kernel GPLv2 license to those.

Reviewed-by: Chaitanya Kulkarni
Signed-off-by: Christoph Hellwig
Signed-off-by: Jens Axboe
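For reference, the tag for the default kernel license is a single comment at the top of each affected .c file, along these lines:

// SPDX-License-Identifier: GPL-2.0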
01 Feb, 2019
1 commit
-
Currently, the queue mapping result is saved in a two-dimensional
array. In the hot path, getting a hctx requires the following lookup:

q->queue_hw_ctx[q->tag_set->map[type].mq_map[cpu]]

This isn't very efficient. We could save the queue mapping result in the
ctx directly, per hctx type:

ctx->hctxs[type]
Signed-off-by: Jianchao Wang
Signed-off-by: Jens Axboe
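A sketch of the difference in the hot-path lookup; the helper names below are illustrative (upstream the lookup lives in blk_mq_map_queue()), and the hctx_type value is assumed to have been derived from the request flags beforehand:

/* Before: three levels of indirection on every lookup. */
static inline struct blk_mq_hw_ctx *
map_queue_via_tagset(struct request_queue *q, enum hctx_type type,
                     unsigned int cpu)
{
    return q->queue_hw_ctx[q->tag_set->map[type].mq_map[cpu]];
}

/* After: the per-CPU software ctx caches one hctx pointer per type. */
static inline struct blk_mq_hw_ctx *
map_queue_via_ctx(struct blk_mq_ctx *ctx, enum hctx_type type)
{
    return ctx->hctxs[type];
}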
01 Dec, 2018
1 commit
-
Even if we have no waiters on any of the sbitmap_queue wait states, we
still have to loop over every entry to check. We do this for every IO, so
the cost adds up.

Shift a bit of the cost to the slow path, when we actually have waiters.
Wrap prepare_to_wait_exclusive() and finish_wait(), so we can maintain
an internal count of how many are currently active. Then we can simply
check this count in sbq_wake_ptr() and not have to loop if we don't
have any sleepers.

Convert the two users of sbitmap with waiting, blk-mq-tag and iSCSI.
Reviewed-by: Omar Sandoval
Signed-off-by: Jens Axboe
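A sketch of what such wrappers can look like, assuming a ws_active counter in struct sbitmap_queue and a small sbq_wait cookie that remembers whether this waiter has already been counted; names and fields here follow the commit description above, not necessarily the final code:

struct sbq_wait {
    struct sbitmap_queue *sbq;  /* non-NULL while counted in ws_active */
    struct wait_queue_entry wait;
};

static void sbitmap_prepare_to_wait(struct sbitmap_queue *sbq,
                                    struct sbq_wait_state *ws,
                                    struct sbq_wait *sbq_wait, int state)
{
    if (!sbq_wait->sbq) {
        atomic_inc(&sbq->ws_active);    /* first time on a waitqueue */
        sbq_wait->sbq = sbq;
    }
    prepare_to_wait_exclusive(&ws->wait, &sbq_wait->wait, state);
}

static void sbitmap_finish_wait(struct sbitmap_queue *sbq,
                                struct sbq_wait_state *ws,
                                struct sbq_wait *sbq_wait)
{
    finish_wait(&ws->wait, &sbq_wait->wait);
    if (sbq_wait->sbq) {
        atomic_dec(&sbq->ws_active);
        sbq_wait->sbq = NULL;
    }
}

/* sbq_wake_ptr() can then start with:
 *     if (!atomic_read(&sbq->ws_active))
 *         return NULL;
 * and skip the per-wait-state scan entirely when nobody is sleeping. */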
09 Nov, 2018
2 commits
-
Document the fact that the strategy function passed in can
control whether to continue iterating or not.

Suggested-by: Bart Van Assche
Signed-off-by: Jens Axboe
-
We have this functionality in sbitmap, but we don't export it in
blk-mq for users of the tags busy iteration. This can be useful
for stopping the iteration, if the caller doesn't need to find
more requests.

Reviewed-by: Mike Snitzer
Signed-off-by: Jens Axboe
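An illustrative use of the bool return value; the callback name and the "stop at the first started request" policy are made up for this example, but the callback shape matches the busy-tag iterator described above:

/* Return false from the callback to stop blk_mq_tagset_busy_iter() early. */
static bool find_first_started_rq(struct request *rq, void *priv, bool reserved)
{
    struct request **found = priv;

    if (!blk_mq_request_started(rq))
        return true;        /* not interesting, keep iterating */

    *found = rq;
    return false;           /* found one, stop the iteration */
}

static struct request *example_first_started(struct blk_mq_tag_set *set)
{
    struct request *rq = NULL;

    blk_mq_tagset_busy_iter(set, find_first_started_rq, &rq);
    return rq;
}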
08 Nov, 2018
3 commits
-
We call blk_mq_map_queue() a lot, at least two times for each
request per IO, sometimes more. Since we now also have an indirect
call in that function, cache the mapping so we don't have to
re-call blk_mq_map_queue() for the same request multiple times.

Reviewed-by: Keith Busch
Reviewed-by: Sagi Grimberg
Reviewed-by: Hannes Reinecke
Signed-off-by: Jens Axboe
-
Prep patch for being able to place request based not just on
CPU location, but also on the type of request.

Reviewed-by: Hannes Reinecke
Reviewed-by: Keith Busch
Signed-off-by: Jens Axboe
-
It's now unused, kill it.
Reviewed-by: Hannes Reinecke
Tested-by: Ming Lei
Reviewed-by: Omar Sandoval
Signed-off-by: Jens Axboe
01 Oct, 2018
1 commit
-
Merge -rc6 in, for two reasons:
1) Resolve a trivial conflict in the blk-mq-tag.c documentation
2) A few important regression fixes went into upstream directly, so
they aren't in the 4.20 branch.

Signed-off-by: Jens Axboe
* tag 'v4.19-rc6': (780 commits)
Linux 4.19-rc6
MAINTAINERS: fix reference to moved drivers/{misc => auxdisplay}/panel.c
cpufreq: qcom-kryo: Fix section annotations
perf/core: Add sanity check to deal with pinned event failure
xen/blkfront: correct purging of persistent grants
Revert "xen/blkfront: When purging persistent grants, keep them in the buffer"
selftests/powerpc: Fix Makefiles for headers_install change
blk-mq: I/O and timer unplugs are inverted in blktrace
dax: Fix deadlock in dax_lock_mapping_entry()
x86/boot: Fix kexec booting failure in the SEV bit detection code
bcache: add separate workqueue for journal_write to avoid deadlock
drm/amd/display: Fix Edid emulation for linux
drm/amd/display: Fix Vega10 lightup on S3 resume
drm/amdgpu: Fix vce work queue was not cancelled when suspend
Revert "drm/panel: Add device_link from panel device to DRM device"
xen/blkfront: When purging persistent grants, keep them in the buffer
clocksource/drivers/timer-atmel-pit: Properly handle error cases
block: fix deadline elevator drain for zoned block devices
ACPI / hotplug / PCI: Don't scan for non-hotplug bridges if slot is not bridge
drm/syncobj: Don't leak fences when WAIT_FOR_SUBMIT is set
...

Signed-off-by: Jens Axboe
26 Sep, 2018
1 commit
-
A recent commit runs tag iterator callbacks under the rcu read lock,
but existing callbacks do not satisfy the non-blocking requirement.
The commit intended to prevent an iterator from accessing a queue that's
being modified. This patch fixes the original issue by taking a queue
reference instead of holding the RCU read lock, which allows callbacks to
make blocking calls.

Fixes: f5bbbbe4d6357 ("blk-mq: sync the update nr_hw_queues with blk_mq_queue_tag_busy_iter")
Acked-by: Jianchao Wang
Signed-off-by: Keith Busch
Signed-off-by: Jens Axboe
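A simplified sketch of how the iterator can hold a queue reference instead of relying on an RCU read-side section, so the callbacks are allowed to block; it reuses the percpu-ref that already backs queue freezing. The internal helper names (bt_for_each() and friends) follow the existing blk-mq-tag internals and are assumptions of this sketch:

void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
                                void *priv)
{
    struct blk_mq_hw_ctx *hctx;
    int i;

    /*
     * Taking a reference only fails while the queue is frozen, e.g. for
     * an nr_hw_queues update; in that case nothing is in flight anyway.
     */
    if (!percpu_ref_tryget(&q->q_usage_counter))
        return;

    queue_for_each_hw_ctx(q, hctx, i) {
        struct blk_mq_tags *tags = hctx->tags;

        if (!blk_mq_hw_queue_mapped(hctx))
            continue;

        if (tags->nr_reserved_tags)
            bt_for_each(hctx, &tags->breserved_tags, fn, priv, true);
        bt_for_each(hctx, &tags->bitmap_tags, fn, priv, false);
    }

    blk_queue_exit(q);  /* drops the q_usage_counter reference */
}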
22 Sep, 2018
1 commit
-
Make it easier to understand the purpose of the functions that iterate
over requests by documenting their purpose. Fix several minor spelling
and grammar mistakes in comments in these functions.

Signed-off-by: Bart Van Assche
Reviewed-by: Johannes Thumshirn
Cc: Christoph Hellwig
Cc: Ming Lei
Cc: Jianchao Wang
Cc: Hannes Reinecke
Signed-off-by: Jens Axboe
23 Aug, 2018
1 commit
-
Pull more block updates from Jens Axboe:
- Set of bcache fixes and changes (Coly)
- The flush warn fix (me)
- Small series of BFQ fixes (Paolo)
- wbt hang fix (Ming)
- blktrace fix (Steven)
- blk-mq hardware queue count update fix (Jianchao)
- Various little fixes
* tag 'for-4.19/post-20180822' of git://git.kernel.dk/linux-block: (31 commits)
block/DAC960.c: make some arrays static const, shrinks object size
blk-mq: sync the update nr_hw_queues with blk_mq_queue_tag_busy_iter
blk-mq: init hctx sched after update ctx and hctx mapping
block: remove duplicate initialization
tracing/blktrace: Fix to allow setting same value
pktcdvd: fix setting of 'ret' error return for a few cases
block: change return type to bool
block, bfq: return nbytes and not zero from struct cftype .write() method
block, bfq: improve code of bfq_bfqq_charge_time
block, bfq: reduce write overcharge
block, bfq: always update the budget of an entity when needed
block, bfq: readd missing reset of parent-entity service
blk-wbt: fix IO hang in wbt_wait()
block: don't warn for flush on read-only device
bcache: add the missing comments for smp_mb()/smp_wmb()
bcache: remove unnecessary space before ioctl function pointer arguments
bcache: add missing SPDX header
bcache: move open brace at end of function definitions to next line
bcache: add static const prefix to char * array declarations
bcache: fix code comments style
...
21 Aug, 2018
1 commit
-
For blk-mq, part_in_flight/rw will invoke blk_mq_in_flight/rw to
account for the in-flight requests. It accesses queue_hw_ctx and
nr_hw_queues without any protection. When an nr_hw_queues update and
blk_mq_in_flight/rw run concurrently, a panic comes up.

Before nr_hw_queues is updated, the queue is frozen, so we can use
q_usage_counter to avoid the race. percpu_ref_is_zero is used here
so that we will not miss any in-flight request. The accesses to
nr_hw_queues and queue_hw_ctx in blk_mq_queue_tag_busy_iter are
under an rcu critical section, and __blk_mq_update_nr_hw_queues can use
synchronize_rcu to ensure the zeroed q_usage_counter is globally
visible.

Signed-off-by: Jianchao Wang
Reviewed-by: Ming Lei
Signed-off-by: Jens Axboe
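A sketch of the update side of that pairing; the function name and the single-queue freeze are simplified stand-ins (the real update path freezes every queue sharing the tag set), while the reader side is the percpu_ref_is_zero() check inside blk_mq_queue_tag_busy_iter() under rcu_read_lock():

static void example_update_nr_hw_queues(struct blk_mq_tag_set *set,
                                        struct request_queue *q,
                                        int nr_hw_queues)
{
    blk_mq_freeze_queue(q);     /* drives q_usage_counter to zero */

    /*
     * Make the zeroed q_usage_counter globally visible to the RCU
     * read-side sections in blk_mq_queue_tag_busy_iter() before
     * nr_hw_queues / queue_hw_ctx are modified.
     */
    synchronize_rcu();

    set->nr_hw_queues = nr_hw_queues;   /* remap/reallocate hw ctxs here */

    blk_mq_unfreeze_queue(q);
}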
15 Aug, 2018
1 commit
-
Pull block updates from Jens Axboe:
"First pull request for this merge window, there will also be a
followup request with some stragglers.

This pull request contains:
- Fix for a thundering herd issue in the wbt block code (Anchal Agarwal)
- A few NVMe pull requests:
* Improved tracepoints (Keith)
* Larger inline data support for RDMA (Steve Wise)
* RDMA setup/teardown fixes (Sagi)
* Effects log support for NVMe target (Chaitanya Kulkarni)
* Buffered IO support for NVMe target (Chaitanya Kulkarni)
* TP4004 (ANA) support (Christoph)
* Various NVMe fixes
- Block io-latency controller support. Much needed support for properly
containing block devices. (Josef)
- Series improving how we handle sense information on the stack (Kees)
- Lightnvm fixes and updates/improvements (Mathias/Javier et al)
- Zoned device support for null_blk (Matias)
- AIX partition fixes (Mauricio Faria de Oliveira)
- DIF checksum code made generic (Max Gurtovoy)
- Add support for discard in iostats (Michael Callahan / Tejun)
- Set of updates for BFQ (Paolo)
- Removal of async write support for bsg (Christoph)
- Bio page dirtying and clone fixups (Christoph)
- Set of bcache fix/changes (via Coly)
- Series improving blk-mq queue setup/teardown speed (Ming)
- Series improving merging performance on blk-mq (Ming)
- Lots of other fixes and cleanups from a slew of folks"
* tag 'for-4.19/block-20180812' of git://git.kernel.dk/linux-block: (190 commits)
blkcg: Make blkg_root_lookup() work for queues in bypass mode
bcache: fix error setting writeback_rate through sysfs interface
null_blk: add lock drop/acquire annotation
Blk-throttle: reduce tail io latency when iops limit is enforced
block: paride: pd: mark expected switch fall-throughs
block: Ensure that a request queue is dissociated from the cgroup controller
block: Introduce blk_exit_queue()
blkcg: Introduce blkg_root_lookup()
block: Remove two superfluous #include directives
blk-mq: count the hctx as active before allocating tag
block: bvec_nr_vecs() returns value for wrong slab
bcache: trivial - remove tailing backslash in macro BTREE_FLAG
bcache: make the pr_err statement used for ENOENT only in sysfs_attatch section
bcache: set max writeback rate when I/O request is idle
bcache: add code comments for bset.c
bcache: fix mistaken comments in request.c
bcache: fix mistaken code comments in bcache.h
bcache: add a comment in super.c
bcache: avoid unncessary cache prefetch bch_btree_node_get()
bcache: display rate debug parameters to 0 when writeback is not running
...
09 Aug, 2018
1 commit
-
Currently, we count the hctx as active after the driver tag is allocated
successfully. If a previously inactive hctx tries to get a tag for the
first time, it may fail and need to wait. However, due to the stale
tags->active_queues count, the other shared-tags users are still able to
occupy all driver tags while someone is waiting for a tag.
Consequently, even when the previously inactive hctx is woken up, it
still may not be able to get a tag and could be starved.

To fix it, count the hctx as active before trying to allocate a driver
tag; then, while it is waiting for the tag, the other shared-tags users
will reserve budget for it.

Reviewed-by: Ming Lei
Signed-off-by: Jianchao Wang
Signed-off-by: Jens Axboe
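A conceptual sketch of the ordering change (an illustrative helper, not the upstream diff): mark the hctx active via blk_mq_tag_busy() before attempting the allocation, so the shared-tag accounting already includes this queue while it waits:

static bool example_get_driver_tag(struct request *rq)
{
    struct blk_mq_alloc_data data = {
        .q     = rq->q,
        .ctx   = rq->mq_ctx,
        .hctx  = blk_mq_map_queue(rq->q, rq->mq_ctx->cpu),
        .flags = BLK_MQ_REQ_NOWAIT,
    };

    /* Count this hctx as an active shared-tag user first ... */
    if (blk_mq_tag_busy(data.hctx))
        rq->rq_flags |= RQF_MQ_INFLIGHT;

    /* ... and only then try to allocate the driver tag. */
    rq->tag = blk_mq_get_tag(&data);
    return rq->tag != -1;
}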
03 Aug, 2018
2 commits
-
Commit d250bf4e776ff09d5 ("blk-mq: only iterate over inflight requests
in blk_mq_tagset_busy_iter") uses 'blk_mq_rq_state(rq) == MQ_RQ_IN_FLIGHT'
to replace 'blk_mq_request_started(req)'. This is wrong and causes
lots of test systems to hang during boot.

Fix the issue by using blk_mq_request_started(req) inside bt_tags_iter().
Fixes: d250bf4e776ff09d5 ("blk-mq: only iterate over inflight requests in blk_mq_tagset_busy_iter")
Cc: Josef Bacik
Cc: Christoph Hellwig
Cc: Guenter Roeck
Cc: Mark Brown
Cc: Matt Hart
Cc: Johannes Thumshirn
Cc: John Garry
Cc: Hannes Reinecke
Cc: "Martin K. Petersen"
Cc: James Bottomley
Cc: linux-scsi@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Reviewed-by: Bart Van Assche
Tested-by: Guenter Roeck
Reported-by: Mark Brown
Reported-by: Guenter Roeck
Signed-off-by: Ming Lei
Signed-off-by: Jens Axboe
-
The 'nr' passed in from userspace represents the total depth. Meanwhile,
inside 'struct blk_mq_tags', 'nr_tags' stores the total tag depth,
and 'nr_reserved_tags' stores the reserved part.

There are two issues in blk_mq_tag_update_depth() now:
1) for growing tags, we should have used the passed 'nr', and kept the
number of reserved tags unchanged.
2) the passed 'nr' should have been checked against 'tags->nr_tags',
instead of against the size of the normal part.

This patch fixes the above two cases, and avoids the kernel crash caused
by wrongly resizing the sbitmap queue.

Cc: "Ewan D. Milne"
Cc: Christoph Hellwig
Cc: Bart Van Assche
Cc: Omar Sandoval
Tested by: Marco Patalano
Signed-off-by: Ming Lei
Signed-off-by: Jens Axboe
14 Jun, 2018
1 commit
-
Unused now that nvme stopped using it.
Signed-off-by: Christoph Hellwig
Reviewed-by: Jens Axboe
31 May, 2018
1 commit
-
We already check for started commands in all callbacks, but we should
also protect against already completed commands. Do this by taking
the checks to common code.

Acked-by: Josef Bacik
Signed-off-by: Christoph Hellwig
Signed-off-by: Jens Axboe
25 May, 2018
1 commit
-
When the allocating process is scheduled back and the mapped hw queue has
changed, fake one extra wake up on the previous queue to compensate for the
missed wake up, so other allocations on the previous queue won't be starved.

This patch fixes a request allocation hang issue, which can be triggered
easily with a very low nr_requests.

The race is as follows:
1) 2 hw queues, nr_requests is 2, and wake_batch is one
2) there are 3 waiters on hw queue 0
3) two in-flight requests in hw queue 0 are completed, and only two
of the 3 waiters are woken up because of wake_batch, but both of those
waiters can be scheduled to another CPU and end up switching to hw
queue 1
4) the 3rd waiter then waits forever, since no in-flight request
is in hw queue 0 any more
5) this patch fixes it with the fake wakeup when a waiter is scheduled to
another hw queue

Cc:
Reviewed-by: Omar Sandoval
Signed-off-by: Ming Lei

Modified commit message to make it clearer, and make it apply on
top of the 4.18 branch.

Signed-off-by: Jens Axboe
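A sketch of what the tail of the wait loop in blk_mq_get_tag() can look like with this fix; it is simplified from the description above, and the sbitmap_queue_wake_up() call on the previous bitmap, which provides the "fake" wake up, is assumed to be available here:

    /* Inside blk_mq_get_tag()'s wait loop, before going to sleep: */
    blk_mq_put_ctx(data->ctx);
    bt_prev = bt;
    io_schedule();

    /* We may have been migrated; re-resolve ctx, hctx and bitmap. */
    data->ctx = blk_mq_get_ctx(data->q);
    data->hctx = blk_mq_map_queue(data->q, data->ctx->cpu);
    tags = blk_mq_tags_from_data(data);
    if (data->flags & BLK_MQ_REQ_RESERVED)
        bt = &tags->breserved_tags;
    else
        bt = &tags->bitmap_tags;

    finish_wait(&ws->wait, &wait);

    /*
     * If the hw queue changed, fake one wake up on the previous
     * queue to compensate for the wake up consumed there, so the
     * remaining waiters on that queue are not starved.
     */
    if (bt != bt_prev)
        sbitmap_queue_wake_up(bt_prev);

    ws = bt_wait_ptr(bt, data->hctx);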
23 Dec, 2017
1 commit
-
Even with a number of waitqueues, we can get into a situation where we
are heavily contended on the waitqueue lock. I got a report on spc1
where we're spending seconds doing this. Arguably the use case is nasty,
I reproduce it with one device and 1000 threads banging on the device.
But that doesn't mean we shouldn't be handling it better.

What ends up happening is that a thread will fail to get a tag, add
itself to the waitqueue, and subsequently get woken up when a tag is
freed - only to find itself going back to sleep on the waitqueue.

Instead of waking all threads, use an exclusive wait and wake up our
sbitmap batch count instead. This seems to work well for me (massive
improvement for this use case), and it survives basic testing. But I
haven't fully verified it yet.

An additional improvement is running the queue and checking for a new
tag BEFORE needing to add ourselves to the waitqueue.

Signed-off-by: Jens Axboe
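A sketch of the two halves of the change, reduced to the key calls; the surrounding loop and declarations are omitted, __blk_mq_get_tag() is blk-mq-tag's internal allocation helper, and wake_batch is the sbitmap queue's per-wait-state batch size:

    /* Waiter side, in blk_mq_get_tag(): queue as an exclusive waiter, and
     * retry the allocation once more before actually going to sleep. */
    prepare_to_wait_exclusive(&ws->wait, &wait, TASK_UNINTERRUPTIBLE);
    tag = __blk_mq_get_tag(data, bt);

    /* Waker side, in sbitmap's wake path: wake at most a batch worth of
     * exclusive waiters instead of every thread parked on the waitqueue. */
    wake_up_nr(&ws->wait, wake_batch);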
19 Oct, 2017
2 commits
-
No callers left.
Reviewed-by: Jens Axboe
Reviewed-by: Bart Van Assche
Reviewed-by: Max Gurtovoy
Reviewed-by: Johannes Thumshirn
Signed-off-by: Sagi Grimberg
Signed-off-by: Christoph Hellwig
-
Add an iterator helper to apply a function to all the
tags in a given tagset. Export it, as it will be used
outside the block layer later on.

Reviewed-by: Bart Van Assche
Reviewed-by: Jens Axboe
Reviewed-by: Max Gurtovoy
Reviewed-by: Johannes Thumshirn
Signed-off-by: Sagi Grimberg
Signed-off-by: Christoph Hellwig
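A sketch of what such a tagset-wide iterator can look like, assuming the statically allocated requests are reachable via tags->static_rqs[]; a non-zero return from the callback aborts the walk:

int blk_mq_tagset_iter(struct blk_mq_tag_set *set, void *data,
                       int (fn)(void *, struct request *))
{
    int i, j, ret;

    for (i = 0; i < set->nr_hw_queues; i++) {
        struct blk_mq_tags *tags = set->tags[i];

        if (!tags)
            continue;

        for (j = 0; j < tags->nr_tags; j++) {
            if (!tags->static_rqs[j])
                continue;

            ret = fn(data, tags->static_rqs[j]);
            if (ret)
                return ret;     /* callback asked to stop */
        }
    }

    return 0;
}
EXPORT_SYMBOL_GPL(blk_mq_tagset_iter);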
18 Aug, 2017
1 commit
-
Since blk_mq_ops.reinit_request is only called from inside
blk_mq_reinit_tagset(), make this function pointer an argument of
blk_mq_reinit_tagset() instead of a member of struct blk_mq_ops.
This patch does not change any functionality but makes
blk_mq_reinit_tagset() calls easier to read and to analyze.

Signed-off-by: Bart Van Assche
Reviewed-by: Hannes Reinecke
Cc: Christoph Hellwig
Cc: Sagi Grimberg
Cc: James Smart
Cc: Johannes Thumshirn
Signed-off-by: Jens Axboe
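Roughly what the interface change amounts to; the example callback and caller below are illustrative names, not taken from a specific driver:

/* New shape: the reinit callback is an explicit argument instead of a
 * member of struct blk_mq_ops. */
int blk_mq_reinit_tagset(struct blk_mq_tag_set *set,
                         int (reinit_request)(void *data, struct request *rq));

/* Illustrative caller: re-initialize driver-private per-request state. */
static int example_reinit_request(void *data, struct request *rq)
{
    /* reset whatever the driver keeps in blk_mq_rq_to_pdu(rq) */
    return 0;
}

static int example_reinit_all(struct blk_mq_tag_set *set)
{
    return blk_mq_reinit_tagset(set, example_reinit_request);
}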
10 Aug, 2017
1 commit
-
Since we introduced blk-mq-sched, the tags->rqs[] array has been
dynamically assigned. So we need to check for NULL when iterating,
since there's a window of time where the bit is set, but we haven't
dynamically assigned the tags->rqs[] array position yet.

This is perfectly safe, since the memory backing of the request is
never going away while the device is alive.

Reviewed-by: Bart Van Assche
Reviewed-by: Omar Sandoval
Signed-off-by: Jens Axboe
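A simplified sketch of the check inside the bitmap iteration callback; struct bt_iter_data and its fields follow the existing blk-mq-tag internals and are shown only to make the window described above concrete:

static bool bt_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
{
    struct bt_iter_data *iter_data = data;
    struct blk_mq_hw_ctx *hctx = iter_data->hctx;
    struct blk_mq_tags *tags = hctx->tags;
    bool reserved = iter_data->reserved;
    struct request *rq;

    if (!reserved)
        bitnr += tags->nr_reserved_tags;
    rq = tags->rqs[bitnr];

    /*
     * The tag bit is set before tags->rqs[] is assigned, so a NULL
     * entry just means the request isn't fully set up yet; skip it.
     */
    if (rq)
        iter_data->fn(hctx, rq, iter_data->data, reserved);

    return true;
}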
15 Apr, 2017
1 commit
-
Wire up the sbitmap_get_shallow() operation to the tag code so that a
caller can limit the number of tags available to it.

Signed-off-by: Omar Sandoval
Signed-off-by: Jens Axboe
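A sketch of how the wiring can look in the tag allocation path, assuming a shallow_depth field in the allocation data that is zero when no limit applies:

static int __blk_mq_get_tag(struct blk_mq_alloc_data *data,
                            struct sbitmap_queue *bt)
{
    if (!(data->flags & BLK_MQ_REQ_INTERNAL) &&
        !hctx_may_queue(data->hctx, bt))
        return -1;

    /* A non-zero shallow_depth caps how much of the bitmap this
     * caller may use; otherwise fall back to the normal get. */
    if (data->shallow_depth)
        return __sbitmap_queue_get_shallow(bt, data->shallow_depth);
    else
        return __sbitmap_queue_get(bt);
}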
13 Mar, 2017
1 commit
-
In case a cpu was unplugged, we need to make sure not to assume
that the tags for that cpu are still allocated. So check
for null tags when reinitializing a tagset.

Reported-by: Yi Zhang
Signed-off-by: Sagi Grimberg
Signed-off-by: Jens Axboe
02 Mar, 2017
1 commit
-
Signed-off-by: Sagi Grimberg
Modified by me to also check at driver tag allocation time if the
original request was reserved, so we can be sure to allocate a
properly reserved tag at that point in time, too.

Signed-off-by: Jens Axboe
28 Jan, 2017
1 commit
-
Instead of letting the caller check this and handle the details
of inserting a flush request, put the logic in the scheduler
insertion function. This fixes direct flush insertion outside
of the usual make_request_fn calls, like from dm via
blk_insert_cloned_request().

Signed-off-by: Jens Axboe
27 Jan, 2017
1 commit
-
These are very tied to the blk-mq tag implementation, so exposing them
to sysfs isn't a great idea. Move the debugging information to debugfs
and add basic entries for the number of tags and the number of reserved
tags to sysfs.

Reviewed-by: Hannes Reinecke
Signed-off-by: Omar Sandoval
Signed-off-by: Jens Axboe
25 Jan, 2017
1 commit
-
If we have a scheduler attached, we have two sets of tags. We don't
want to apply our active queue throttling for the scheduler side
of tags, that only applies to driver tags since that's the resource
we need to dispatch an IO.

Signed-off-by: Jens Axboe
21 Jan, 2017
1 commit
-
Add support for growing the tags associated with a hardware queue, for
the scheduler tags. Currently we only support resizing within the
limits of the original depth, change that so we can grow it as well by
allocating and replacing the existing scheduler tag set.

This is similar to how we could increase the software queue depth with
the legacy IO stack and schedulers.

Signed-off-by: Jens Axboe
Reviewed-by: Omar Sandoval
19 Jan, 2017
1 commit
-
We used to pass in NULL for hctx for reserved tags, but we don't
do that anymore. Hence the check for whether hctx is NULL or not
is now redundant, kill it.

Reported-by: Dan Carpenter
Fixes: a642a158aec6 ("blk-mq-tag: cleanup the normal/reserved tag allocation")
Signed-off-by: Jens Axboe
18 Jan, 2017
2 commits
-
This is in preparation for having two sets of tags available. For
that we need a static index, and a dynamically assignable one.

Signed-off-by: Jens Axboe
Reviewed-by: Omar Sandoval
-
This is in preparation for having another tag set available. Cleanup
the parameters, and allow passing in of tags for blk_mq_put_tag().

Signed-off-by: Jens Axboe
[hch: even more cleanups]
Signed-off-by: Christoph Hellwig
Reviewed-by: Omar Sandoval
10 Oct, 2016
1 commit
-
Pull blk-mq irq/cpu mapping updates from Jens Axboe:
"This is the block-irq topic branch for 4.9-rc. It's mostly from
Christoph, and it allows drivers to specify their own mappings, and
more importantly, to share the blk-mq mappings with the IRQ affinity
mappings. It's a good step towards making this work better out of the
box"* 'for-4.9/block-irq' of git://git.kernel.dk/linux-block:
blk_mq: linux/blk-mq.h does not include all the headers it depends on
blk-mq: kill unused blk_mq_create_mq_map()
blk-mq: get rid of the cpumask in struct blk_mq_tags
nvme: remove the post_scan callout
nvme: switch to use pci_alloc_irq_vectors
blk-mq: provide a default queue mapping for PCI device
blk-mq: allow the driver to pass in a queue mapping
blk-mq: remove ->map_queue
blk-mq: only allocate a single mq_map per tag_set
blk-mq: don't redistribute hardware queues on a CPU hotplug event
17 Sep, 2016
2 commits
-
In order to get good cache behavior from a sbitmap, we want each CPU to
stick to its own cacheline(s) as much as possible. This might happen
naturally as the bitmap gets filled up and the alloc_hint values spread
out, but we really want this behavior from the start. blk-mq apparently
intended to do this, but the code to do this was never wired up. Get rid
of the dead code and make it part of the sbitmap library.

Signed-off-by: Omar Sandoval
Signed-off-by: Jens Axboe
-
Again, there's no point in passing this in every time. Make it part of
struct sbitmap_queue and clean up the API.

Signed-off-by: Omar Sandoval
Signed-off-by: Jens Axboe