10 Sep, 2010
1 commit
-
While testing CPU DLPAR, the following problem was discovered.
We were DLPAR removing the first CPU, which in this case was
logical CPUs 0-3. CPUs 0-2 were already marked offline and
we were in the process of offlining CPU 3. After marking
the CPU inactive and offline in cpu_disable, but before the
cpu was completely idle (cpu_die), we ended up in __make_request
on CPU 3. There we looked at the topology map to see which CPU
to complete the I/O on and found no CPUs in the cpu_sibling_map.
This resulted in the block layer setting the completion cpu
to be NR_CPUS, which then caused an oops when we tried to
complete the I/O.
Fix this by sanity checking the value we return from blk_cpu_to_group
to be a valid cpu value.
Signed-off-by: Brian King
Signed-off-by: Jens Axboe
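A minimal sketch of the idea (not necessarily the literal patch; the sibling-map lookup is abbreviated):

static inline int blk_cpu_to_group(int cpu)
{
    int group = cpumask_first(topology_thread_cpumask(cpu));

    /* An offlined sibling map can leave us with no valid CPU in it;
     * fall back to the CPU the request came in on. */
    if (likely(group < NR_CPUS))
        return group;
    return cpu;
}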
23 Aug, 2010
8 commits
-
Currently drivers must do an elevator_exit() + elevator_init()
to switch IO schedulers. There are a few problems with this:
- Since commit 1abec4fdbb142e3ccb6ce99832fae42129134a96,
  elevator_init() requires a zeroed out q->elevator
  pointer. The two existing in-kernel users don't do that.
- It will only work at initialization time, since using the
  above two-staged construct does not properly quiesce the queue.
So add elevator_change() which takes care of this, and convert
the elv_iosched_store() sysfs interface to use this helper as well.
Reported-by: Peter Oberparleiter
Reported-by: Kevin Vigor
Signed-off-by: Jens Axboe
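A sketch of how an in-kernel user switches schedulers with the new helper:

/* replaces the racy elevator_exit() + elevator_init() pair */
int err = elevator_change(q, "deadline");
if (err)
    printk(KERN_ERR "elevator switch failed: %d\n", err);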
-
The return of the bi_rw tests is no longer bool after commit 74450be1. But
results of such tests are stored in bools. This doesn't fit in there
for some compilers (gcc 4.5 here), so either use !! magic to get real
bools or use ulong where the result is assigned somewhere.
Signed-off-by: Jiri Slaby
Cc: Christoph Hellwig
Reviewed-by: Jeff Moyer
Signed-off-by: Jens Axboe
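An illustrative example of the two options (REQ_SYNC stands in for any high flag bit):

bool is_sync = !!(bio->bi_rw & REQ_SYNC);         /* !! forces a real 0/1 */
unsigned long is_write = bio->bi_rw & REQ_WRITE;  /* or keep the full width */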
-
The kernel needs to call kobject_put on dev->kobj if elv_register_queue fails.
Signed-off-by: Xiaotian Feng
Cc: "Martin K. Petersen"
Cc: Stephen Hemminger
Cc: Nikanth Karthikesan
Cc: David Teigland
Signed-off-by: Jens Axboe
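The shape of the fix, sketched (error path in blk_register_queue-style code, simplified):

ret = elv_register_queue(q);
if (ret) {
    kobject_put(&dev->kobj);  /* drop the reference taken earlier */
    return ret;
}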
-
o Divyesh had gotten rid of this code in the past. I want to re-introduce it
as it helps me a lot during debugging.
Reviewed-by: Jeff Moyer
Reviewed-by: Divyesh Shah
Signed-off-by: Vivek Goyal
Signed-off-by: Jens Axboe
-
o Implement a new tunable group_idle, which allows idling on the group
instead of a cfq queue. Hence one can set slice_idle = 0 and not idle
on the individual queues but idle on the group. This way, on fast storage
we can get fairness between groups while overall throughput improves.
Signed-off-by: Vivek Goyal
Acked-by: Jeff Moyer
Signed-off-by: Jens Axboe
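A simplified sketch of the arming decision in the idle-timer path under the new tunable:

/* prefer per-queue idling; fall back to per-group idling when
 * slice_idle has been set to 0 */
if (cfqd->cfq_slice_idle)
    sl = cfqd->cfq_slice_idle;
else if (cfqd->cfq_group_idle)
    sl = cfqd->cfq_group_idle;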
-
o Implement another CFQ mode where we charge a group in terms of number
of requests dispatched instead of measuring the time. Measuring in terms
of time is not possible when we are driving deeper queue depths and there
are requests from multiple cfq queues in the request queue.
o This mode currently gets activated if one sets slice_idle=0 and the associated
disk supports NCQ. Again the idea is that on an NCQ disk with idling disabled
most of the queues will dispatch 1 or more requests and then cfq queue
expiry happens and we don't have a way to measure time. So start providing
fairness in terms of IOPS.
o Currently IOPS mode works only with cfq group scheduling. CFQ follows
different scheduling algorithms for queue and group scheduling. These IOPS
stats are used only for group scheduling, hence in non-cgroup mode nothing
should change.
o For CFQ group scheduling one can disable slice idling so that we don't idle
on the queue and drive deeper request queue depths (achieving better throughput),
while at the same time group idle is enabled so one should get service
differentiation among groups.
Signed-off-by: Vivek Goyal
Acked-by: Jeff Moyer
Signed-off-by: Jens Axboe
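Roughly, the charging decision described above (a sketch, not the literal hunk):

/* in IOPS mode, charge the group by requests dispatched in the
 * slice; otherwise charge by the time the slice actually used */
if (iops_mode(cfqd))
    charge = cfqq->slice_dispatch;
else
    charge = used_sl;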
-
Do not idle either on the cfq queue or the service tree if slice_idle=0. The user
does not want any queue or service tree idling. Currently even with slice_idle=0,
we were waiting for a request to finish before expiring the queue, and that
can lead to lower queue depths.
Acked-by: Jeff Moyer
Signed-off-by: Vivek Goyal
Signed-off-by: Jens Axboe
-
If the cgroup hierarchy for blkio control groups is deeper than two
levels, the kernel should not allow the creation of further levels. The mkdir
system call does not expect EINVAL as a return value. This patch
replaces EINVAL with the more appropriate EPERM.
Signed-off-by: Ciju Rajan K
Reviewed-by: KAMEZAWA Hiroyuki
Signed-off-by: Jens Axboe
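The check amounts to something like this in the blkio cgroup create path (sketch):

/* only the root and one level below it may create blkio groups */
if (cgroup->parent && cgroup->parent->parent)
    return ERR_PTR(-EPERM);   /* was -EINVAL */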
12 Aug, 2010
1 commit
-
Secure discard is the same as discard except that all copies of the
discarded sectors (perhaps created by garbage collection) must also be
erased.
Signed-off-by: Adrian Hunter
Acked-by: Jens Axboe
Cc: Kyungmin Park
Cc: Madhusudhan Chikkature
Cc: Christoph Hellwig
Cc: Ben Gardiner
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
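Assuming the merged interface, a caller asks for the secure variant via a dedicated
flag to blkdev_issue_discard() (the flag spelling varied across releases; sketch):

/* also erase garbage-collected copies of the discarded sectors */
ret = blkdev_issue_discard(bdev, sector, nr_sects, GFP_KERNEL,
                           BLKDEV_DISCARD_SECURE);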
09 Aug, 2010
2 commits
-
- If the function is called without the barrier option, the return value is incorrect.
Signed-off-by: Dmitry Monakhov
Signed-off-by: Jens Axboe
-
Propagate REQ_DISCARD in cmd_flags when cloning a discard request.
Skip blk_rq_check_limits's existing checks for discard requests because
discard limits will have already been checked in blkdev_issue_discard.
Signed-off-by: Mike Snitzer
Reviewed-by: Christoph Hellwig
Signed-off-by: Jens Axboe
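Sketched, the two pieces: propagate the flag when cloning, and bail out of the
limit checks early for discards.

/* in the clone setup: carry the discard bit over */
clone->cmd_flags |= rq->cmd_flags & REQ_DISCARD;

/* in blk_rq_check_limits(): */
if (rq->cmd_flags & REQ_DISCARD)
    return 0;   /* limits were applied in blkdev_issue_discard */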
08 Aug, 2010
19 commits
-
q->bar_rq.rq_disk is NULL. Use the rq_disk of the original request
instead.
Signed-off-by: FUJITA Tomonori
Signed-off-by: Jens Axboe
-
The block layer doesn't set rq->cmd_type on flush requests. By
definition, it should be REQ_TYPE_FS (the lower layers build a command
and interpret the result of it; that is, the block layer doesn't know
the details).
Signed-off-by: FUJITA Tomonori
Signed-off-by: Jens Axboe
-
If the queue doesn't have a limit set, or it just set UINT_MAX like
we default to, we could be sending down a discard request that isn't
of the correct granularity if the block size is > 512b.
Fix this by adjusting max_discard_sectors down to the proper
alignment.
Signed-off-by: Jens Axboe
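For example, with 4096-byte blocks the cap must be a multiple of 8 sectors; a
sketch of the adjustment (discard_granularity is kept in bytes in queue_limits):

unsigned int granularity = max(q->limits.discard_granularity >> 9, 1U);

/* trim the cap down to a whole number of granules */
max_discard_sectors -= max_discard_sectors % granularity;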
-
Issuing a blkdev_issue_flush() on an unconfigured loop device causes a panic as
q->make_request_fn is not configured. This can occur when trying to mount the
unconfigured loop device as an XFS filesystem. There are no guards that catch
the bio before the request function is called because we don't add a payload to
the bio. Instead, manually check this case as soon as we have a pointer to the
queue to flush.
Signed-off-by: Dave Chinner
Signed-off-by: Jens Axboe
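The guard is straightforward (sketch): fail the flush early when no request
function has been wired up yet.

struct request_queue *q = bdev_get_queue(bdev);

if (!q)
    return -ENXIO;
/* an unconfigured loop device has no make_request_fn yet */
if (!q->make_request_fn)
    return -ENXIO;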
-
The blkpg_ioctl and blkdev_reread_part access fields of
the bdev and gendisk structures, yet they always do so
under the protection of bdev->bd_mutex, which seems
sufficient.
Signed-off-by: Arnd Bergmann
Acked-by: Christoph Hellwig
Signed-off-by: Jens Axboe
-
We only call the functions set_device_ro(),
invalidate_bdev(), sync_filesystem() and sync_blockdev()
while holding the BKL in these commands. All
of these are also done in other code paths without
the BKL, which leads me to the conclusion that
the BKL is not needed here either.
The reason we hold it here is that it was originally
pushed down into the ioctl function from vfs_ioctl.
Signed-off-by: Arnd Bergmann
Reviewed-by: Christoph Hellwig
Signed-off-by: Jens Axboe
-
The blktrace driver currently needs the BKL, but
we should not need to take that in the block layer,
so just push it down into the driver itself.
It is quite likely that the BKL is not actually
required in blktrace code and could be removed
in a follow-on patch.
Signed-off-by: Arnd Bergmann
Acked-by: Christoph Hellwig
Signed-off-by: Jens Axboe
-
As a preparation for the removal of the big kernel
lock in the block layer, this removes the BKL
from the common ioctl handling code, moving it
into every single driver still using it.
Signed-off-by: Arnd Bergmann
Acked-by: Christoph Hellwig
Signed-off-by: Jens Axboe
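The push-down follows the usual pattern, sketched here for a hypothetical driver foo:

static int foo_ioctl(struct block_device *bdev, fmode_t mode,
                     unsigned int cmd, unsigned long arg)
{
    int ret;

    /* the block layer no longer takes the BKL for us */
    lock_kernel();
    ret = foo_locked_ioctl(bdev, mode, cmd, arg);
    unlock_kernel();

    return ret;
}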
-
This removes q->prepare_flush_fn completely (changes the
blk_queue_ordered API).
Signed-off-by: FUJITA Tomonori
Reviewed-by: Christoph Hellwig
Signed-off-by: Jens Axboe
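From a driver's point of view the API change looks roughly like this (constant
and callback names are illustrative):

/* before: blk_queue_ordered(q, QUEUE_ORDERED_DRAIN_FLUSH, sd_prepare_flush);
 * after, the prepare_flush_fn argument is gone: */
blk_queue_ordered(q, QUEUE_ORDERED_DRAIN_FLUSH);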
-
This is preparation for removing q->prepare_flush_fn.
Temporarily, blk_queue_ordered() permits QUEUE_ORDERED_DO_PREFLUSH and
QUEUE_ORDERED_DO_POSTFLUSH without prepare_flush_fn.
Signed-off-by: FUJITA Tomonori
Reviewed-by: Christoph Hellwig
Signed-off-by: Jens Axboe
-
SCSI-ml needs a way to mark a request as a flush request in
q->prepare_flush_fn because it needs to identify them later (e.g. in
q->request_fn or prep_rq_fn).
queue_flush sets REQ_HARDBARRIER in rq->cmd_flags; however, the block
layer also sends normal REQ_TYPE_FS requests with REQ_HARDBARRIER. So
SCSI-ml can't use REQ_HARDBARRIER to identify flush requests.
We could change the block layer to clear the REQ_HARDBARRIER bit before
sending non-flush requests to the lower layers. However, introducing
the new flag looks cleaner (surely easier).
Signed-off-by: FUJITA Tomonori
Cc: James Bottomley
Cc: David S. Miller
Cc: Rusty Russell
Cc: Alasdair G Kergon
Reviewed-by: Christoph Hellwig
Signed-off-by: Jens Axboe
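Assuming the new flag is REQ_FLUSH as merged, the prep function can now tell
flushes apart directly (sketch):

/* in q->prep_rq_fn */
if (rq->cmd_flags & REQ_FLUSH) {
    /* build a SYNCHRONIZE CACHE command for this request */
}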
-
Reviewed-by: FUJITA Tomonori
Signed-off-by: Jens Axboe
-
Didn't cause a merge conflict, so fixed this one up manually
post merge.
Signed-off-by: Jens Axboe
-
Just some dead code.
Signed-off-by: Andi Kleen
Signed-off-by: Andrew Morton
Signed-off-by: Jens Axboe
-
Allocating a fixed payload for discard requests always was a horrible hack,
and it's now coming to bite us when adding support for discard in DM/MD.
So change the code to leave the allocation of a payload to the lowlevel
driver. Unfortunately that means we'll need another hack, which allows
us to update the various block layer length fields indicating that we
have a payload. Instead of hiding this in sd.c, which we already partially
do for UNMAP support, add a documented helper in the core block layer for it.
Signed-off-by: Christoph Hellwig
Acked-by: Mike Snitzer
Signed-off-by: Jens Axboe
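The documented helper referred to is blk_add_request_payload(); a sketch of the
lowlevel driver side attaching its own payload page:

struct page *page = alloc_page(GFP_ATOMIC | __GFP_ZERO);

if (!page)
    return BLKPREP_DEFER;
/* point the request's bio at the driver-owned page and fix up the
 * block layer length fields accordingly */
blk_add_request_payload(rq, page, 24);  /* e.g. a 24-byte UNMAP parameter list */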
-
Remove the current bio flags and reuse the request flags for the bio, too.
This makes it easier to trace the type of I/O from the filesystem
down to the block driver. There were two flags in the bio that were
missing in the requests: BIO_RW_UNPLUG and BIO_RW_AHEAD. Also I've
renamed two request flags that had a superfluous RW in them.
Note that the flags are in bio.h despite having the REQ_ name - as
blkdev.h includes bio.h, that is the only way to go for now.
Signed-off-by: Christoph Hellwig
Signed-off-by: Jens Axboe
-
Remove all the trivial wrappers for the cmd_type and cmd_flags fields in
struct request. This allows much easier grepping for different request
types instead of unwinding through macros.
Signed-off-by: Christoph Hellwig
Signed-off-by: Jens Axboe
-
The code for nonrot, random, and io stats is completely identical.
Signed-off-by: Jens Axboe
-
There are two reasons for doing this:
- On SSD disks, the completion times aren't as random as they
  are for rotational drives. So it's questionable whether they
  should contribute to the random pool in the first place.
- Calling add_disk_randomness() has a lot of overhead.
This adds /sys/block/<dev>/queue/add_random that will allow you to
switch it off on a per-device basis. The default setting is on, so there
should be no functional changes from this patch.
Signed-off-by: Jens Axboe
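In the completion path this becomes a simple gate (sketch of the test behind
the new sysfs file):

/* only contribute to the entropy pool if the queue opts in */
if (blk_queue_add_random(q))
    add_disk_randomness(req->rq_disk);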
24 Jun, 2010
1 commit
-
In submit_bio, we count vm events by checking READ/WRITE.
But actually DISCARD_NOBARRIER also has the WRITE flag set.
And in blkdev_issue_discard, we also add a page as the
payload, so the bio_has_data check isn't enough.
So add another check for discard bios.
Signed-off-by: Tao Ma
Signed-off-by: Jens Axboe
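A sketch of the accounting test (the discard flag spelling differs across
kernel versions; the BIO_RW_DISCARD form of this era is used here):

/* don't let a discard's payload page count as page-in/out traffic */
if (bio_has_data(bio) && !(rw & (1 << BIO_RW_DISCARD))) {
    if (rw & WRITE)
        count_vm_events(PGPGOUT, count);
    else
        count_vm_events(PGPGIN, count);
}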
21 Jun, 2010
1 commit
-
e98ef89b has a typo, causing cfq_blkiocg_update_completion_stats()
to call itself instead of blkiocg_update_completion_stats().
Reported-by: Ingo Molnar
Signed-off-by: Jens Axboe
19 Jun, 2010
1 commit
-
Hi Jens,
Few days back Ingo noticed a CFQ boot time warning. This patch fixes it.
The issue here is that with CFQ_GROUP_IOSCHED=n, CFQ should not really
be making blkio stat related calls.
> Hm, it's still not entirely fixed, as of 2.6.35-rc2-00131-g7908a9e. With
> some
> configs i get bad spinlock warnings during bootup:
>
> [ 28.968013] initcall net_olddevs_init+0x0/0x82 returned 0 after 93750
> usecs
> [ 28.972003] calling b44_init+0x0/0x55 @ 1
> [ 28.976009] bus: 'pci': add driver b44
> [ 28.976374] sda:
> [ 28.978157] BUG: spinlock bad magic on CPU#1, async/0/117
> [ 28.980000] lock: 7e1c5bbc, .magic: 00000000, .owner: /-1, +.owner_cpu: 0
> [ 28.980000] Pid: 117, comm: async/0 Not tainted +2.6.35-rc2-tip-01092-g010e7ef-dirty #8183
> [ 28.980000] Call Trace:
> [ 28.980000] [] ? printk+0x20/0x24
> [ 28.980000] [] spin_bug+0x7c/0x87
> [ 28.980000] [] do_raw_spin_lock+0x1e/0x123
> [ 28.980000] [] ? _raw_spin_lock_irqsave+0x12/0x20
> [ 28.980000] [] _raw_spin_lock_irqsave+0x1a/0x20
> [ 28.980000] [] blkiocg_update_io_add_stats+0x25/0xfb
> [ 28.980000] [] ? cfq_prio_tree_add+0xb1/0xc1
> [ 28.980000] [] cfq_insert_request+0x8c/0x425
Signed-off-by: Vivek Goyal
Signed-off-by: Jens Axboe
18 Jun, 2010
1 commit
-
Hi,
A user reported a kernel bug when running a particular program that did
the following:
- created 32 threads
- each thread took a mutex, grabbed a global offset, added a buffer size
  to that offset, released the lock
- read from the given offset in the file
- created a new thread to do the same
- exited
The result is that cfq's close cooperator logic would trigger, as the
threads were issuing I/O within the mean seek distance of one another.
This workload managed to routinely trigger a use after free bug when
walking the list of merge candidates for a particular cfqq
(cfqq->new_cfqq). The logic used for merging queues looks like this:

static void cfq_setup_merge(struct cfq_queue *cfqq, struct cfq_queue *new_cfqq)
{
    int process_refs, new_process_refs;
    struct cfq_queue *__cfqq;

    /* Avoid a circular list and skip interim queue merges */
    while ((__cfqq = new_cfqq->new_cfqq)) {
        if (__cfqq == cfqq)
            return;
        new_cfqq = __cfqq;
    }

    process_refs = cfqq_process_refs(cfqq);
    /*
     * If the process for the cfqq has gone away, there is no
     * sense in merging the queues.
     */
    if (process_refs == 0)
        return;

    /*
     * Merge in the direction of the lesser amount of work.
     */
    new_process_refs = cfqq_process_refs(new_cfqq);
    if (new_process_refs >= process_refs) {
        cfqq->new_cfqq = new_cfqq;
        atomic_add(process_refs, &new_cfqq->ref);
    } else {
        new_cfqq->new_cfqq = cfqq;
        atomic_add(new_process_refs, &cfqq->ref);
    }
}

When a merge candidate is found, we add the process references for the
queue with less references to the queue with more. The actual merging
of queues happens when a new request is issued for a given cfqq. In the
case of the test program, it only does a single pread call to read in
1MB, so the actual merge never happens.
Normally, this is fine, as when the queue exits, we simply drop the
references we took on the other cfqqs in the merge chain:

/*
 * If this queue was scheduled to merge with another queue, be
 * sure to drop the reference taken on that queue (and others in
 * the merge chain). See cfq_setup_merge and cfq_merge_cfqqs.
 */
__cfqq = cfqq->new_cfqq;
while (__cfqq) {
    if (__cfqq == cfqq) {
        WARN(1, "cfqq->new_cfqq loop detected\n");
        break;
    }
    next = __cfqq->new_cfqq;
    cfq_put_queue(__cfqq);
    __cfqq = next;
}

However, there is a hole in this logic. Consider the following (and
keep in mind that each I/O keeps a reference to the cfqq):

q1->new_cfqq = q2    // q2 now has 2 process references
q3->new_cfqq = q2    // q2 now has 3 process references

// the process associated with q2 exits
// q2 now has 2 process references

// queue 1 exits, drops its reference on q2
// q2 now has 1 process reference

// q3 exits, so has 0 process references, and hence drops its references
// to q2, which leaves q2 also with 0 process references

q4 comes along and wants to merge with q3
q3->new_cfqq still points at q2! We follow that link and end up at an
already freed cfqq.

So, the fix is to not follow a merge chain if the top-most queue does
not have a process reference, otherwise any queue in the chain could be
already freed. I also changed the logic to disallow merging with a
queue that does not have any process references. Previously, we did
this check for one of the merge candidates, but not the other. That
doesn't really make sense.
Without the attached patch, my system would BUG within a couple of
seconds of running the reproducer program. With the patch applied, my
system ran the program for over an hour without issues.
This addresses the following bugzilla:
https://bugzilla.kernel.org/show_bug.cgi?id=16217
Thanks a ton to Phil Carns for providing the bug report and an excellent
reproducer.
[ Note for stable: this applies to 2.6.32/33/34 ]
Signed-off-by: Jeff Moyer
Reported-by: Phil Carns
Cc: stable@kernel.org
Signed-off-by: Jens Axboe
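The fix, roughly: take both reference counts before walking the chain, and bail
if either queue has no process references left.

process_refs = cfqq_process_refs(cfqq);
new_process_refs = cfqq_process_refs(new_cfqq);
/*
 * If the process for either cfqq has gone away, there is no
 * sense in merging the queues.
 */
if (process_refs == 0 || new_process_refs == 0)
    return;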
17 Jun, 2010
1 commit
-
Filesystems assume that DISCARD_BARRIER are full barriers, so that they
don't have to track in-progress discard operations when submitting new I/O.
But currently we only treat them as elevator barriers, which don't
actually do the necessary queue drains.
Also remove the unlikely around both the DISCARD and BARRIER requests -
they happen far too often for a static mispredict.
Signed-off-by: Christoph Hellwig
Signed-off-by: Jens Axboe
04 Jun, 2010
2 commits
-
blk_init_allocated_queue_node may fail and the caller _could_ retry.
Accommodate the unlikely event that blk_init_allocated_queue_node is
called on an already initialized (possibly partially) request_queue.
Signed-off-by: Mike Snitzer
Signed-off-by: Jens Axboe
-
On blk_init_allocated_queue_node failure, only free the request_queue if
it wasn't previously allocated outside the block layer
(e.g. blk_init_queue_node was the blk_init_allocated_queue_node caller).
This addresses an interface bug introduced by the following commit:
01effb0 block: allow initialization of previously allocated
request_queue
Otherwise the request_queue may be free'd out from underneath a caller
that is managing the request_queue directly (e.g. the caller uses
blk_alloc_queue + blk_init_allocated_queue_node).
Signed-off-by: Mike Snitzer
Signed-off-by: Jens Axboe
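For a caller like blk_init_queue_node, which allocates the queue itself, the
ownership rule looks like this (sketch):

uninit_q = blk_alloc_queue_node(GFP_KERNEL, node_id);
if (!uninit_q)
    return NULL;

q = blk_init_allocated_queue_node(uninit_q, rfn, lock, node_id);
if (!q)
    blk_cleanup_queue(uninit_q);  /* we allocated it, so we free it */

return q;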
25 May, 2010
1 commit
-
I got the below oops when unloading cfq-iosched. Consider the scenario:
queue A merges to B, C merges to D, and B will be merged to D. Before B is
merged to D, we do split B. We should put B's reference for D.

[ 807.768536] =============================================================================
[ 807.768539] BUG cfq_queue: Objects remaining on kmem_cache_close()
[ 807.768541] -----------------------------------------------------------------------------
[ 807.768543]
[ 807.768546] INFO: Slab 0xffffea0003e6b4e0 objects=26 used=1 fp=0xffff88011d584fd8 flags=0x200000000004082
[ 807.768550] Pid: 5946, comm: rmmod Tainted: G W 2.6.34-07097-gf4b87de-dirty #724
[ 807.768552] Call Trace:
[ 807.768560] [] slab_err+0x8f/0x9d
[ 807.768564] [] ? flush_cpu_slab+0x0/0x93
[ 807.768569] [] ? add_preempt_count+0xe/0xca
[ 807.768572] [] ? sub_preempt_count+0xe/0xb6
[ 807.768577] [] ? _raw_spin_unlock+0x15/0x30
[ 807.768580] [] ? sub_preempt_count+0xe/0xb6
[ 807.768584] [] list_slab_objects+0x9b/0x19f
[ 807.768588] [] ? add_preempt_count+0xc6/0xca
[ 807.768591] [] kmem_cache_destroy+0x13f/0x21d
[ 807.768597] [] cfq_slab_kill+0x1a/0x43 [cfq_iosched]
[ 807.768601] [] cfq_exit+0x93/0x9e [cfq_iosched]
[ 807.768606] [] sys_delete_module+0x1b1/0x219
[ 807.768612] [] system_call_fastpath+0x16/0x1b
[ 807.768618] INFO: Object 0xffff88011d584618 @offset=1560
[ 807.768622] INFO: Allocated in cfq_get_queue+0x11e/0x274 [cfq_iosched] age=7173 cpu=1 pid=5496
[ 807.768626] =============================================================================
Cc: stable@kernel.org
Signed-off-by: Shaohua Li
Signed-off-by: Jens Axboe
24 May, 2010
1 commit
-
Bio-based DM doesn't use an elevator (queue is !blk_queue_stackable()).
Longer-term DM will not allocate an elevator for bio-based DM. But even
then there will be small potential for an elevator to be allocated for
a request-based DM table only to have a bio-based table be loaded in the
end.
Displaying "none" for bio-based DM will help avoid user confusion.
Signed-off-by: Mike Snitzer
Signed-off-by: Jens Axboe