09 Feb, 2011
1 commit
-
Commit 7667aa0630407bc07dc38dcc79d29cc0a65553c1 added logic to wait for
the last queue of the group to become busy (have at least one request),
so that the group does not lose out for not being continuously
backlogged. The commit did not check for the condition that the last
queue already has some requests. As a result, if the queue already has
requests, wait_busy is set. Later on, cfq_select_queue() checks the
flag, and decides that since the queue has a request now and wait_busy
is set, the queue is expired. This results in early expiration of the queue.

This patch fixes the problem by adding a check to see if the queue already
has requests. If it does, wait_busy is not set. As a result, time slices
do not expire early.

Queues with more than one request are usually buffered writers.
Testing shows improvement in isolation between buffered writers.

Cc: stable@kernel.org
Signed-off-by: Justin TerAvest
Reviewed-by: Gui Jianfeng
Acked-by: Vivek Goyal
Signed-off-by: Jens Axboe
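The added guard has roughly this shape; a hedged sketch, not the verbatim patch (cfq_should_wait_busy() and the sort_list request tree are from block/cfq-iosched.c of that era):

static bool cfq_should_wait_busy(struct cfq_data *cfqd, struct cfq_queue *cfqq)
{
	/* the fix: a queue that already has requests is busy by
	 * definition, so there is nothing to wait for */
	if (!RB_EMPTY_ROOT(&cfqq->sort_list))
		return false;

	/* ... existing last-queue-of-group checks follow ... */
	return true;
}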
19 Jan, 2011
1 commit
-
o Rename a function to give it a more appropriate name. We are calculating
the cfq queue slice, but the function name gives the impression that the
cfq group slice length is being calculated.

Signed-off-by: Vivek Goyal
Signed-off-by: Jens Axboe
14 Jan, 2011
3 commits
-
If a queue is preempted before it gets a slice assigned, the queue doesn't get
any compensation, which looks unfair. For such a queue, we compensate it for a
whole slice.

Signed-off-by: Shaohua Li
Signed-off-by: Jens Axboe
-
I got this:
fio-874 [007] 2157.724514: 8,32 m N cfq874 preempt
fio-874 [007] 2157.724519: 8,32 m N cfq830 slice expired t=1
fio-874 [007] 2157.724520: 8,32 m N cfq830 sl_used=1 disp=0 charge=1 iops=0 sect=0
fio-874 [007] 2157.724521: 8,32 m N cfq830 set_active wl_prio:0 wl_type:0
fio-874 [007] 2157.724522: 8,32 m N cfq830 Not idling. st->count:1

cfq830 is an async queue, preempted by a sync queue, cfq874. But since we
have the cfqg->saved_workload_slice mechanism, the preempt is a nop.
It looks like our preempt is currently totally broken if the two queues are
not from the same workload type.
The patch below fixes it. This might cause async queue starvation, but it's
what our old code did before cgroup support was added.

Signed-off-by: Shaohua Li
Signed-off-by: Jens Axboe
-
* 'for-2.6.38/core' of git://git.kernel.dk/linux-2.6-block: (43 commits)
block: ensure that completion error gets properly traced
blktrace: add missing probe argument to block_bio_complete
block cfq: don't use atomic_t for cfq_group
block cfq: don't use atomic_t for cfq_queue
block: trace event block fix unassigned field
block: add internal hd part table references
block: fix accounting bug on cross partition merges
kref: add kref_test_and_get
bio-integrity: mark kintegrityd_wq highpri and CPU intensive
block: make kblockd_workqueue smarter
Revert "sd: implement sd_check_events()"
block: Clean up exit_io_context() source code.
Fix compile warnings due to missing removal of a 'ret' variable
fs/block: type signature of major_to_index(int) to major_to_index(unsigned)
block: convert !IS_ERR(p) && p to !IS_ERR_OR_NULL(p)
cfq-iosched: don't check cfqg in choose_service_tree()
fs/splice: Pull buf->ops->confirm() from splice_from_pipe actors
cdrom: export cdrom_check_events()
sd: implement sd_check_events()
sr: implement sr_check_events()
...
07 Jan, 2011
2 commits
-
cfq_group->ref is used with queue_lock held; the only exception is
cfq_set_request, which looks like a bug to me. So ref doesn't need
to be an atomic_t, and atomic operations are slower.

Signed-off-by: Shaohua Li
Reviewed-by: Jeff Moyer
Acked-by: Vivek Goyal
Signed-off-by: Jens Axboe
-
cfq_queue->ref is used with queue_lock held, so ref doesn't need to be an
atomic_t, and atomic operations are slower.

Signed-off-by: Shaohua Li
Reviewed-by: Jeff Moyer
Acked-by: Vivek Goyal
Signed-off-by: Jens Axboe
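The conversion in these two commits is simple; a minimal sketch of the idea (field layout abbreviated, not the full struct):

struct cfq_queue {
	int ref;	/* was: atomic_t ref; queue_lock serializes all access */
	/* ... */
};

/* callers hold queue_lock, so plain arithmetic replaces atomic ops */
cfqq->ref++;		/* was: atomic_inc(&cfqq->ref); */
cfqq->ref--;		/* was: atomic_dec_and_test(&cfqq->ref), etc. */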
17 Dec, 2010
1 commit
-
When cfq_choose_cfqg() is called in select_queue(), there must be at least one
backlogged CFQ queue waiting for dispatching, hence there must be at least one
backlogged CFQ group on service tree. So we never call choose_service_tree()
with cfqg == NULL.

Signed-off-by: Gui Jianfeng
Reviewed-by: Jeff Moyer
Acked-by: Vivek Goyal
Signed-off-by: Jens Axboe
13 Dec, 2010
1 commit
-
If the priority is changed, continuing to check workload_expires and the
service tree count of the previous workload does not make sense. We should
always choose the workload with the lowest key of the new priority in this
case.

Signed-off-by: Shaohua Li
Reviewed-by: Jeff Moyer
Signed-off-by: Jens Axboe
01 Dec, 2010
2 commits
-
Whether a CFQ group is on a service tree can be determined by
checking "cfqg->rb_node". There's no need to maintain an
extra flag here.

Signed-off-by: Gui Jianfeng
Acked-by: Vivek Goyal
Signed-off-by: Jens Axboe
-
When a cfq group is running, it won't be dequeued from the service tree, so
there's no need to store the active one in st->active. Just get rid of it.

Signed-off-by: Gui Jianfeng
Acked-by: Vivek Goyal
Signed-off-by: Jens Axboe
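Both cleanups lean on the rbtree emptiness convention; a hedged sketch of the kind of helper this enables (hypothetical name; RB_EMPTY_NODE is the standard kernel rbtree test):

/* a group is on a service tree iff its rb_node is linked */
static inline bool cfqg_on_st(struct cfq_group *cfqg)
{
	return !RB_EMPTY_NODE(&cfqg->rb_node);
}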
16 Nov, 2010
1 commit
09 Nov, 2010
1 commit
-
Vivek suggests we don't need to schedule a dispatch when an idle queue
becomes non-idle. And he is right: cfq_should_preempt already covers
the logic.

Signed-off-by: Shaohua Li
Signed-off-by: Jens Axboe
08 Nov, 2010
3 commits
-
If a deep seek queue delivers requests slowly but the disk is much faster,
idling for the queue just wastes disk throughput. If the queue delivers all
its requests before half its slice is used, the patch disables idling for it.
In my test, the application delivers 32 requests at a time, the disk can
accept 128 requests at maximum, and the disk is fast. Without the patch, the
throughput is just around 30m/s, while with it, the speed is about 80m/s. The
disk is an SSD, but is detected as a rotational disk. I can configure it as
an SSD, but I thought the deep seek queue logic should be fixed too, for
example, considering a fast RAID.

Signed-off-by: Shaohua Li
Signed-off-by: Jens Axboe
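The "half its slice" test can be expressed by comparing remaining and elapsed slice time; a hedged sketch of the check (the cfq_cfqq_* flag helpers and CFQQ_SEEKY are CFQ's real accessors, but the exact placement is my reading of the patch):

/* the queue went empty with more slice left than it has used, i.e.
 * before half its slice: stop treating it as deep and stop idling */
if (cfq_cfqq_deep(cfqq) && CFQQ_SEEKY(cfqq) &&
    (cfqq->slice_end - jiffies > jiffies - cfqq->slice_start)) {
	cfq_clear_cfqq_deep(cfqq);
	cfq_clear_cfqq_idle_window(cfqq);
}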
-
A queue is idle at cfq_dispatch_requests(), but it may become non-idle later.
Unless another task explicitly does an unplug, or all requests are drained, we
will not deliver requests to the disk even when cfq_arm_slice_timer doesn't
make the queue idle. For example, cfq_should_idle() returns true because
service_tree->count == 1, and then other queues are added. Note, I didn't
see obvious performance impacts so far with the patch, but just thought
this could be a problem.

Signed-off-by: Shaohua Li
Signed-off-by: Jens Axboe
-
Some functions should return boolean.
Signed-off-by: Shaohua Li
Signed-off-by: Jens Axboe
02 Nov, 2010
1 commit
-
"gadget", "through", "command", "maintain", "maintain", "controller", "address",
"between", "initiali[zs]e", "instead", "function", "select", "already",
"equal", "access", "management", "hierarchy", "registration", "interest",
"relative", "memory", "offset", "already",Signed-off-by: Uwe Kleine-König
Signed-off-by: Jiri Kosina
23 Oct, 2010
1 commit
-
* 'for-2.6.37/core' of git://git.kernel.dk/linux-2.6-block: (39 commits)
cfq-iosched: Fix a gcc 4.5 warning and put some comments
block: Turn bvec_k{un,}map_irq() into static inline functions
block: fix accounting bug on cross partition merges
block: Make the integrity mapped property a bio flag
block: Fix double free in blk_integrity_unregister
block: Ensure physical block size is unsigned int
blkio-throttle: Fix possible multiplication overflow in iops calculations
blkio-throttle: limit max iops value to UINT_MAX
blkio-throttle: There is no need to convert jiffies to milli seconds
blkio-throttle: Fix link failure failure on i386
blkio: Recalculate the throttled bio dispatch time upon throttle limit change
blkio: Add root group to td->tg_list
blkio: deletion of a cgroup was causes oops
blkio: Do not export throttle files if CONFIG_BLK_DEV_THROTTLING=n
block: set the bounce_pfn to the actual DMA limit rather than to max memory
block: revert bad fix for memory hotplug causing bounces
Fix compile error in blk-exec.c for !CONFIG_DETECT_HUNG_TASK
block: set the bounce_pfn to the actual DMA limit rather than to max memory
block: Prevent hang_check firing during long I/O
cfq: improve fsync performance for small files
...

Fix up trivial conflicts due to __rcu sparse annotation in include/linux/genhd.h
22 Oct, 2010
1 commit
-
- Andi encountered the following warning with gcc 4.5:

  linux/block/cfq-iosched.c: In function ‘cfq_dispatch_requests’:
  linux/block/cfq-iosched.c:2156:3: warning: array subscript is above array bounds

- The warning happens due to the following code:

  slice = group_slice * count /
		max_t(unsigned, cfqg->busy_queues_avg[cfqd->serving_prio],
		      cfq_group_busy_queues_wl(cfqd->serving_prio, cfqd, cfqg));

  gcc is complaining about cfqg->busy_queues_avg[] being indexed by CFQ
  prio classes (RT, BE and IDLE) while the array size is only 2.

- At run time, we never access cfqg->busy_queues_avg[IDLE] and return from
  the function before this code is hit.

- To fix the warning, increase the array size, though the extra slot will
  remain unused. This patch also adds some comments to clarify some of the
  confusion.

- I have taken Jens's patch and modified it a bit.

- Compile tested with gcc 4.4 and boot tested. I don't have gcc 4.5
  running; Andi, can you please test it with gcc 4.5 to make sure it works.

Reported-by: Andi Kleen
Signed-off-by: Vivek Goyal
Acked-by: Jeff Moyer
Signed-off-by: Jens Axboe
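A plausible shape for the fix (hedged; CFQ_PRIO_NR as the enum terminator is my reading of the patch, not verified against the tree):

enum wl_prio_t {
	BE_WORKLOAD = 0,
	RT_WORKLOAD = 1,
	IDLE_WORKLOAD = 2,
	CFQ_PRIO_NR,	/* number of prio classes */
};

struct cfq_group {
	/* sized so the compiler-visible bound covers the IDLE index,
	 * even though that slot is never touched at run time */
	unsigned int busy_queues_avg[CFQ_PRIO_NR];
	/* ... */
};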
01 Oct, 2010
1 commit
-
o Currently any cgroup throttle limit changes are processed asynchronously,
and the change does not take effect till a new bio is dispatched from the
same group.

o It might happen that a user sets a ridiculously low limit on throttling.
Say 1 byte per second on reads. In such cases simple operations like
mounting a disk can wait for a very long time.

o Once a bio is throttled, there is no easy way to come out of that wait even
if the user increases the read limit later.

o This patch fixes it. Now if a user changes the cgroup limits, we recalculate
the bio dispatch time according to the new limits.

o We can't take the queue lock under blkcg_lock, hence after the change I wake
up the dispatch thread again, which recalculates the time. So there are some
variables being synchronized across two threads without a lock, and I had to
make use of barriers. Hoping I have used barriers correctly. Any review of
the memory barrier code especially will help.

Signed-off-by: Vivek Goyal
Signed-off-by: Jens Axboe
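The cross-thread synchronization described above is the classic flag-plus-barrier pairing; a generic sketch of the pattern (the limits_changed flag mirrors blk-throttle's naming, but this is an illustration, not the actual patch, and throtl_recalc_dispatch_time is a hypothetical helper):

/* writer: limit-change path */
tg->bps[READ] = new_bps;	/* publish the new limit ... */
smp_wmb();			/* ... before the flag that advertises it */
tg->limits_changed = true;

/* reader: dispatch thread */
if (tg->limits_changed) {
	smp_rmb();		/* pairs with the smp_wmb() above */
	tg->limits_changed = false;
	throtl_recalc_dispatch_time(tg);	/* hypothetical helper */
}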
21 Sep, 2010
1 commit
-
Mike reported a kernel crash when a USB key hotplug is performed while all
kernel threads are not in the root cgroup and are running in one of the child
cgroups of the blkio controller.

BUG: unable to handle kernel NULL pointer dereference at 0000002c
IP: [] cfq_get_queue+0x232/0x412
*pde = 00000000
Oops: 0000 [#1] PREEMPT
last sysfs file: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/host3/scsi_host/host3/uevent
[..]
Pid: 30039, comm: scsi_scan_3 Not tainted 2.6.35.2-fg.roam #1 Volvi2 /Aspire 4315
EIP: 0060:[] EFLAGS: 00010086 CPU: 0
EIP is at cfq_get_queue+0x232/0x412
EAX: f705f9c0 EBX: e977abac ECX: 00000000 EDX: 00000000
ESI: f00da400 EDI: f00da4ec EBP: e977a800 ESP: dff8fd00
DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068
Process scsi_scan_3 (pid: 30039, ti=dff8e000 task=f6b6c9a0 task.ti=dff8e000)
Stack:
00000000 00000000 00000001 01ff0000 f00da508 00000000 f00da524 f00da540
e7994940 dd631750 f705f9c0 e977a820 e977ac44 f00da4d0 00000001 f6b6c9a0
00000010 00008010 0000000b 00000000 00000001 e977a800 dd76fac0 00000246
Call Trace:
[] ? cfq_set_request+0x228/0x34c
[] ? cfq_set_request+0x0/0x34c
[] ? elv_set_request+0xf/0x1c
[] ? get_request+0x1ad/0x22f
[] ? get_request_wait+0x1f/0x11a
[] ? kvasprintf+0x33/0x3b
[] ? scsi_execute+0x1d/0x103
[] ? scsi_execute_req+0x58/0x83
[] ? scsi_probe_and_add_lun+0x188/0x7c2
[] ? attribute_container_add_device+0x15/0xfa
[] ? kobject_get+0xf/0x13
[] ? get_device+0x10/0x14
[] ? scsi_alloc_target+0x217/0x24d
[] ? __scsi_scan_target+0x95/0x480
[] ? dequeue_entity+0x14/0x1fe
[] ? update_curr+0x165/0x1ab
[] ? update_curr+0x165/0x1ab
[] ? scsi_scan_channel+0x4a/0x76
[] ? scsi_scan_host_selected+0x77/0xad
[] ? do_scan_async+0x0/0x11a
[] ? do_scsi_scan_host+0x51/0x56
[] ? do_scan_async+0x0/0x11a
[] ? do_scan_async+0xe/0x11a
[] ? do_scan_async+0x0/0x11a
[] ? kthread+0x5e/0x63
[] ? kthread+0x0/0x63
[] ? kernel_thread_helper+0x6/0x10
Code: 44 24 1c 54 83 44 24 18 54 83 fa 03 75 94 8b 06 c7 86 64 02 00 00 01 00 00 00 83 e0 03 09 f0 89 06 8b 44 24 28 8b 90 58 01 00 00 42 2c 85 c0 75 03 8b 42 08 8d 54 24 48 52 8d 4c 24 50 51 68
EIP: [] cfq_get_queue+0x232/0x412 SS:ESP 0068:dff8fd00
CR2: 000000000000002c
---[ end trace 9a88306573f69b12 ]---

The problem here is that we don't have bdi->dev information available when
the thread does some IO. Hence when dev_name() tries to access bdi->dev, it
crashes.

This problem does not happen if kernel threads are in the root group, as the
root group is statically allocated at device initialization time and we don't
hit this piece of code.

Fix it by delaying the filling in of the major and minor number information
of the device in blk_group. Initially a blk_group is created with 0 as the
device information, and this information is filled in later once some more IO
comes in from the same group.

Reported-by: Mike Kazantsev
Signed-off-by: Vivek Goyal
Signed-off-by: Jens Axboe
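The shape of the fix is a guard on bdi->dev before the dev_name() lookup; a hedged sketch (the surrounding group-creation code is simplified away):

unsigned int major, minor;

/* only fill in major:minor once the backing device is registered;
 * until then the group keeps the 0 placeholder set at creation */
if (bdi->dev) {
	sscanf(dev_name(bdi->dev), "%u:%u", &major, &minor);
	blkg->dev = MKDEV(major, minor);
}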
20 Sep, 2010
1 commit
-
Fsync performance for small files achieved by cfq on high-end disks is
lower than what deadline can achieve, due to idling introduced between
the sync write happening in process context and the journal commit.

Moreover, when competing with a sequential reader, a process writing
small files and fsync-ing them is starved.

This patch fixes the two problems by:
- marking journal commits as WRITE_SYNC, so that they get the REQ_NOIDLE
flag set,
- forcing all queues that have REQ_NOIDLE requests to be put in the noidle
tree.

Having the queue associated to the fsync-ing process and the one associated
to journal commits in the noidle tree allows:
- switching between them without idling,
- fairness vs. competing idling queues, since they will be serviced only
after the noidle tree expires its slice.

Acked-by: Vivek Goyal
Reviewed-by: Jeff Moyer
Tested-by: Jeff Moyer
Signed-off-by: Corrado Zoccolo
Signed-off-by: Jens Axboe
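CFQ's workload classification already keys off the idle_window flag; a hedged sketch of the classifier this relies on (modeled on cfqq_type() in block/cfq-iosched.c of that era):

static enum wl_type_t cfqq_type(struct cfq_queue *cfqq)
{
	if (!cfq_cfqq_sync(cfqq))
		return ASYNC_WORKLOAD;
	/* REQ_NOIDLE requests clear the idle window, which lands the
	 * queue on the no-idle service tree */
	if (!cfq_cfqq_idle_window(cfqq))
		return SYNC_NOIDLE_WORKLOAD;
	return SYNC_WORKLOAD;
}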
16 Sep, 2010
1 commit
-
o This patch prepares the base for introducing new IO control policies.
Currently all the code is written knowing there is only one policy,
and that is proportional bandwidth. This creates the infrastructure for
newer policies to come in.

o Also, there were many functions which were generated using macros. It was
very confusing. Got rid of those.

Signed-off-by: Vivek Goyal
Signed-off-by: Jens Axboe
23 Aug, 2010
4 commits
-
o Divyesh had gotten rid of this code in the past. I want to re-introduce it,
as it helps me a lot during debugging.

Reviewed-by: Jeff Moyer
Reviewed-by: Divyesh Shah
Signed-off-by: Vivek Goyal
Signed-off-by: Jens Axboe
-
o Implement a new tunable, group_idle, which allows idling on the group
instead of a cfq queue. Hence one can set slice_idle = 0 and not idle
on the individual queues but idle on the group. This way, on fast storage
we can get fairness between groups while at the same time overall throughput
improves.

Signed-off-by: Vivek Goyal
Acked-by: Jeff Moyer
Signed-off-by: Jens Axboe
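A hedged sketch of how the two tunables interact when arming the idle timer (simplified from cfq_arm_slice_timer(); the nr_cfqq test for "last queue in the group" is my simplification):

unsigned long sl;

if (cfqd->cfq_slice_idle)
	sl = cfqd->cfq_slice_idle;		/* idle on the queue */
else if (cfqd->cfq_group_idle && cfqq->cfqg->nr_cfqq == 1)
	sl = cfqd->cfq_group_idle;		/* idle on the group instead */
else
	return;					/* no idling at all */

mod_timer(&cfqd->idle_slice_timer, jiffies + sl);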
-
o Implement another CFQ mode where we charge a group in terms of the number
of requests dispatched instead of measuring the time. Measuring in terms
of time is not possible when we are driving deeper queue depths and there
are requests from multiple cfq queues in the request queue.

o This mode currently gets activated if one sets slice_idle=0 and the
associated disk supports NCQ. Again, the idea is that on an NCQ disk with
idling disabled, most of the queues will dispatch 1 or more requests, then
cfq queue expiry happens, and we don't have a way to measure time. So start
providing fairness in terms of IOPS.

o Currently IOPS mode works only with cfq group scheduling. CFQ is following
different scheduling algorithms for queue and group scheduling. These IOPS
stats are used only for group scheduling, hence in non-group mode nothing
should change.

o For CFQ group scheduling one can disable slice idling so that we don't idle
on the queue and drive deeper request queue depths (achieving better
throughput); at the same time group idle is enabled, so one should get
service differentiation among groups.

Signed-off-by: Vivek Goyal
Acked-by: Jeff Moyer
Signed-off-by: Jens Axboe
-
Do not idle on either the cfq queue or the service tree if slice_idle=0. The
user does not want any queue or service tree idling. Currently, even with
slice_idle=0, we were waiting for a request to finish before expiring the
queue, and that can lead to lower queue depths.

Acked-by: Jeff Moyer
Signed-off-by: Vivek Goyal
Signed-off-by: Jens Axboe
08 Aug, 2010
2 commits
-
Remove the current bio flags and reuse the request flags for the bio, too.
This allows to more easily trace the type of I/O from the filesystem
down to the block driver. There were two flags in the bio that were
missing in the requests: BIO_RW_UNPLUG and BIO_RW_AHEAD. Also I've
renamed two request flags that had a superfluous RW in them.

Note that the flags are in bio.h despite having the REQ_ name - as
blkdev.h includes bio.h, that is the only way to go for now.

Signed-off-by: Christoph Hellwig
Signed-off-by: Jens Axboe
-
Remove all the trivial wrappers for the cmd_type and cmd_flags fields in
struct request. This allows much easier grepping for different request
types instead of unwinding through macros.

Signed-off-by: Christoph Hellwig
Signed-off-by: Jens Axboe
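After the wrapper removal, call sites test the fields directly; a small hedged sketch of the before/after shape (blk_fs_request and REQ_TYPE_FS are from that era's include/linux/blkdev.h):

/* before: opaque wrapper macro */
if (blk_fs_request(rq))
	/* ... */;

/* after: greppable direct test */
if (rq->cmd_type == REQ_TYPE_FS)
	/* ... */;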
19 Jun, 2010
1 commit
-
Hi Jens,

A few days back Ingo noticed a CFQ boot time warning. This patch fixes it.
The issue here is that with CFQ_GROUP_IOSCHED=n, CFQ should not really
be making blkio stat related calls.

> Hm, it's still not entirely fixed, as of 2.6.35-rc2-00131-g7908a9e. With
> some
> configs i get bad spinlock warnings during bootup:
>
> [ 28.968013] initcall net_olddevs_init+0x0/0x82 returned 0 after 93750
> usecs
> [ 28.972003] calling b44_init+0x0/0x55 @ 1
> [ 28.976009] bus: 'pci': add driver b44
> [ 28.976374] sda:
> [ 28.978157] BUG: spinlock bad magic on CPU#1, async/0/117
> [ 28.980000] lock: 7e1c5bbc, .magic: 00000000, .owner: /-1, +.owner_cpu: 0
> [ 28.980000] Pid: 117, comm: async/0 Not tainted +2.6.35-rc2-tip-01092-g010e7ef-dirty #8183
> [ 28.980000] Call Trace:
> [ 28.980000] [] ? printk+0x20/0x24
> [ 28.980000] [] spin_bug+0x7c/0x87
> [ 28.980000] [] do_raw_spin_lock+0x1e/0x123
> [ 28.980000] [] ? _raw_spin_lock_irqsave+0x12/0x20
> [ 28.980000] [] _raw_spin_lock_irqsave+0x1a/0x20
> [ 28.980000] [] blkiocg_update_io_add_stats+0x25/0xfb
> [ 28.980000] [] ? cfq_prio_tree_add+0xb1/0xc1
> [ 28.980000] [] cfq_insert_request+0x8c/0x425

Signed-off-by: Vivek Goyal
Signed-off-by: Jens Axboe
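The usual fix for this class of problem is a set of wrappers that compile away; a hedged sketch of the approach (wrapper name is illustrative, mirroring the cfq_blkiocg_* style the fix introduced):

#ifdef CONFIG_CFQ_GROUP_IOSCHED
static inline void cfq_blkiocg_update_io_add_stats(struct blkio_group *blkg,
		struct blkio_group *curr_blkg, bool direction, bool sync)
{
	blkiocg_update_io_add_stats(blkg, curr_blkg, direction, sync);
}
#else
/* CFQ_GROUP_IOSCHED=n: no blkio stat calls, no uninitialized locks */
static inline void cfq_blkiocg_update_io_add_stats(struct blkio_group *blkg,
		struct blkio_group *curr_blkg, bool direction, bool sync) { }
#endif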
18 Jun, 2010
1 commit
-
Hi,
A user reported a kernel bug when running a particular program that did
the following:

created 32 threads
- each thread took a mutex, grabbed a global offset, added a buffer size
to that offset, released the lock
- read from the given offset in the file
- created a new thread to do the same
- exited

The result is that cfq's close cooperator logic would trigger, as the
threads were issuing I/O within the mean seek distance of one another.
This workload managed to routinely trigger a use after free bug when
walking the list of merge candidates for a particular cfqq
(cfqq->new_cfqq). The logic used for merging queues looks like this:

static void cfq_setup_merge(struct cfq_queue *cfqq, struct cfq_queue *new_cfqq)
{
	int process_refs, new_process_refs;
	struct cfq_queue *__cfqq;

	/* Avoid a circular list and skip interim queue merges */
	while ((__cfqq = new_cfqq->new_cfqq)) {
		if (__cfqq == cfqq)
			return;
		new_cfqq = __cfqq;
	}

	process_refs = cfqq_process_refs(cfqq);
	/*
	 * If the process for the cfqq has gone away, there is no
	 * sense in merging the queues.
	 */
	if (process_refs == 0)
		return;

	/*
	 * Merge in the direction of the lesser amount of work.
	 */
	new_process_refs = cfqq_process_refs(new_cfqq);
	if (new_process_refs >= process_refs) {
		cfqq->new_cfqq = new_cfqq;
		atomic_add(process_refs, &new_cfqq->ref);
	} else {
		new_cfqq->new_cfqq = cfqq;
		atomic_add(new_process_refs, &cfqq->ref);
	}
}

When a merge candidate is found, we add the process references for the
queue with less references to the queue with more. The actual merging
of queues happens when a new request is issued for a given cfqq. In the
case of the test program, it only does a single pread call to read in
1MB, so the actual merge never happens.

Normally, this is fine, as when the queue exits, we simply drop the
references we took on the other cfqqs in the merge chain:

	/*
	 * If this queue was scheduled to merge with another queue, be
	 * sure to drop the reference taken on that queue (and others in
	 * the merge chain). See cfq_setup_merge and cfq_merge_cfqqs.
	 */
	__cfqq = cfqq->new_cfqq;
	while (__cfqq) {
		if (__cfqq == cfqq) {
			WARN(1, "cfqq->new_cfqq loop detected\n");
			break;
		}
		next = __cfqq->new_cfqq;
		cfq_put_queue(__cfqq);
		__cfqq = next;
	}

However, there is a hole in this logic. Consider the following (and
keep in mind that each I/O keeps a reference to the cfqq):

q1->new_cfqq = q2   // q2 now has 2 process references
q3->new_cfqq = q2   // q2 now has 3 process references

// the process associated with q2 exits
// q2 now has 2 process references

// queue 1 exits, drops its reference on q2
// q2 now has 1 process reference

// q3 exits, so has 0 process references, and hence drops its references
// to q2, which leaves q2 also with 0 process references

q4 comes along and wants to merge with q3

q3->new_cfqq still points at q2! We follow that link and end up at an
already freed cfqq.

So, the fix is to not follow a merge chain if the top-most queue does
not have a process reference, otherwise any queue in the chain could be
already freed. I also changed the logic to disallow merging with a
queue that does not have any process references. Previously, we did
this check for one of the merge candidates, but not the other. That
doesn't really make sense.

Without the attached patch, my system would BUG within a couple of
seconds of running the reproducer program. With the patch applied, my
system ran the program for over an hour without issues.

This addresses the following bugzilla:
https://bugzilla.kernel.org/show_bug.cgi?id=16217

Thanks a ton to Phil Carns for providing the bug report and an excellent
reproducer.

[ Note for stable: this applies to 2.6.32/33/34 ].
Signed-off-by: Jeff Moyer
Reported-by: Phil Carns
Cc: stable@kernel.org
Signed-off-by: Jens Axboe
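A hedged sketch of what the fix amounts to inside cfq_setup_merge(), paraphrased from the description above (not the verbatim diff):

	/*
	 * the fix: refuse to follow or build a merge chain when either
	 * end has no process references; a refcount-less queue may
	 * already be on its way to being freed
	 */
	process_refs = cfqq_process_refs(cfqq);
	new_process_refs = cfqq_process_refs(new_cfqq);
	if (process_refs == 0 || new_process_refs == 0)
		return;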
25 May, 2010
1 commit
-
I got the oops below when unloading cfq-iosched. Consider this scenario:
queue A merges to B, C merges to D, and B will be merged to D. Before B is
merged to D, we split B. We should put B's reference for D.

[ 807.768536] =============================================================================
[ 807.768539] BUG cfq_queue: Objects remaining on kmem_cache_close()
[ 807.768541] -----------------------------------------------------------------------------
[ 807.768543]
[ 807.768546] INFO: Slab 0xffffea0003e6b4e0 objects=26 used=1 fp=0xffff88011d584fd8 flags=0x200000000004082
[ 807.768550] Pid: 5946, comm: rmmod Tainted: G W 2.6.34-07097-gf4b87de-dirty #724
[ 807.768552] Call Trace:
[ 807.768560] [] slab_err+0x8f/0x9d
[ 807.768564] [] ? flush_cpu_slab+0x0/0x93
[ 807.768569] [] ? add_preempt_count+0xe/0xca
[ 807.768572] [] ? sub_preempt_count+0xe/0xb6
[ 807.768577] [] ? _raw_spin_unlock+0x15/0x30
[ 807.768580] [] ? sub_preempt_count+0xe/0xb6
[ 807.768584] [] list_slab_objects+0x9b/0x19f
[ 807.768588] [] ? add_preempt_count+0xc6/0xca
[ 807.768591] [] kmem_cache_destroy+0x13f/0x21d
[ 807.768597] [] cfq_slab_kill+0x1a/0x43 [cfq_iosched]
[ 807.768601] [] cfq_exit+0x93/0x9e [cfq_iosched]
[ 807.768606] [] sys_delete_module+0x1b1/0x219
[ 807.768612] [] system_call_fastpath+0x16/0x1b
[ 807.768618] INFO: Object 0xffff88011d584618 @offset=1560
[ 807.768622] INFO: Allocated in cfq_get_queue+0x11e/0x274 [cfq_iosched] age=7173 cpu=1 pid=5496
[ 807.768626] =============================================================================

Cc: stable@kernel.org
Signed-off-by: Shaohua Li
Signed-off-by: Jens Axboe
24 May, 2010
2 commits
-
Use small consecutive indexes as radix tree keys instead of the sparse cfqd
address. This change will reduce the radix tree depth from 11 (6 for 32-bit
hosts) to 1 for hosts with a small number of keys.
(bit 0 -- dead mark, bits 1..30 -- index; ida produces ids in the range
0..2^31-1)

Signed-off-by: Konstantin Khlebnikov
Signed-off-by: Jens Axboe -
Remove the ->dead_key field from cfq_io_context to shrink its size to 128
bytes (64 bytes for 32-bit hosts).

Use the lower bit in ->key as a dead mark, instead of moving the key to a
separate field. After this, for a dead cfq_io_context we get cic->key != cfqd
automatically. Thus, io_context's last-hit cache should work without changes.

Now, to check ->key for a non-dead state, compare it with cfqd,
instead of checking ->key for a non-null value as before.

Plus, remove the obsolete race protection in cfq_cic_lookup.
This race is gone after v2.6.24-1728-g4ac845a.

Signed-off-by: Konstantin Khlebnikov
Signed-off-by: Jens Axboe
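Pointer tagging like this works because cfq_data pointers are at least word-aligned, leaving bit 0 free; a hedged sketch of the trick (CIC_DEAD_KEY follows the patch's naming, but treat the details as illustrative):

#define CIC_DEAD_KEY	1ul

/* marking: fold the dead bit into the stored key */
cic->key = (void *)((unsigned long)cfqd | CIC_DEAD_KEY);

/* checking: a live cic compares equal to its cfqd, a dead one cannot */
if (cic->key != cfqd)
	/* treat as dead: drop it from the radix tree, etc. */;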
22 May, 2010
2 commits
-
Conflicts:
fs/ext3/fsync.c

Signed-off-by: Jens Axboe
-
Remove all rcu head inits. We don't care about the RCU head state before passing
it to call_rcu() anyway. Only leave the "on_stack" variants so debugobjects can
keep track of objects on stack.

Signed-off-by: Mathieu Desnoyers
Signed-off-by: Paul E. McKenney
Signed-off-by: Jens Axboe
06 May, 2010
1 commit
-
It is necessary to be in an RCU read-side critical section when invoking
css_id(), so this patch adds one to blkiocg_add_blkio_group(). This is
actually a false positive, because this is called at initialization time
and hence always refers to the root cgroup, which cannot go away.

[ 103.790505] ===================================================
[ 103.790509] [ INFO: suspicious rcu_dereference_check() usage. ]
[ 103.790511] ---------------------------------------------------
[ 103.790514] kernel/cgroup.c:4432 invoked rcu_dereference_check() without protection!
[ 103.790517]
[ 103.790517] other info that might help us debug this:
[ 103.790519]
[ 103.790521]
[ 103.790521] rcu_scheduler_active = 1, debug_locks = 1
[ 103.790524] 4 locks held by bash/4422:
[ 103.790526] #0: (&buffer->mutex){+.+.+.}, at: [] sysfs_write_file+0x3c/0x144
[ 103.790537] #1: (s_active#102){.+.+.+}, at: [] sysfs_write_file+0xe7/0x144
[ 103.790544] #2: (&q->sysfs_lock){+.+.+.}, at: [] queue_attr_store+0x49/0x8f
[ 103.790552] #3: (&(&blkcg->lock)->rlock){......}, at: [] blkiocg_add_blkio_group+0x2b/0xad
[ 103.790560]
[ 103.790561] stack backtrace:
[ 103.790564] Pid: 4422, comm: bash Not tainted 2.6.34-rc4-blkio-second-crash #81
[ 103.790567] Call Trace:
[ 103.790572] [] lockdep_rcu_dereference+0x9d/0xa5
[ 103.790577] [] css_id+0x44/0x57
[ 103.790581] [] blkiocg_add_blkio_group+0x53/0xad
[ 103.790586] [] cfq_init_queue+0x139/0x32c
[ 103.790591] [] elv_iosched_store+0xbf/0x1bf
[ 103.790595] [] queue_attr_store+0x70/0x8f
[ 103.790599] [] ? sysfs_write_file+0xe7/0x144
[ 103.790603] [] sysfs_write_file+0x108/0x144
[ 103.790609] [] vfs_write+0xae/0x10b
[ 103.790612] [] ? trace_hardirqs_on_caller+0x10c/0x130
[ 103.790616] [] sys_write+0x4a/0x6e
[ 103.790622] [] system_call_fastpath+0x16/0x1b
[ 103.790625]

Located-by: Miles Lane
Signed-off-by: Vivek Goyal
Signed-off-by: Paul E. McKenney
Signed-off-by: Ingo Molnar
Signed-off-by: Jens Axboe
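The fix itself is tiny; a sketch of its shape (hedged, but the rcu_read_lock()/rcu_read_unlock() pair around the css_id() call is exactly the pattern lockdep is asking for):

rcu_read_lock();	/* css_id() does an rcu_dereference() internally */
blkg->blkcg_id = css_id(&blkcg->css);
rcu_read_unlock();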
29 Apr, 2010
1 commit
-
We should return the cfq_group for this case, not NULL.
Signed-off-by: Jens Axboe
27 Apr, 2010
2 commits
-
This patch fixes a few usability and configurability issues.

o All the cgroup based controller options are configurable from the
"General Setup/Control Group Support/" menu. blkio is the only exception.
Hence make this option visible in the above menu and make it configurable
from there, to bring it in line with the rest of the cgroup based
controllers.

o Get rid of CONFIG_DEBUG_CFQ_IOSCHED.
This option currently does two things.
- Enables printing of cgroup paths in blktrace
- Enables CONFIG_DEBUG_BLK_CGROUP, which in turn displays additional stat
files in cgroup

If we are using group scheduling, blktrace data is not really of much use
if cgroup information is not present. To get this data, currently one has to
also enable CONFIG_DEBUG_CFQ_IOSCHED, which in turn brings the overhead of
all the additional debug stat files, which is not desired.

Hence, this patch moves printing of cgroup paths under
CONFIG_CFQ_GROUP_IOSCHED.

This allows us to get rid of CONFIG_DEBUG_CFQ_IOSCHED completely. Now all
the debug stat files are controlled only by CONFIG_DEBUG_BLK_CGROUP, which
can be enabled through the config menu.

Signed-off-by: Vivek Goyal
Acked-by: Divyesh Shah
Reviewed-by: Gui Jianfeng
Signed-off-by: Jens Axboe
-
o Once in a while, I was hitting a BUG_ON() in blkio code. empty_time was
assuming that upon slice expiry, the group can't be marked empty already
(except for forced dispatch).

But this assumption is broken if a cfqq can move (group_isolation=0) across
groups after receiving a request.

I think most likely in this case we got a request in a cfqq and accounted
the rq in one group; later, while adding the cfqq to the tree, we moved the
queue to a different group which was already marked empty, and after dispatch
from the slice we found the group already marked empty and raised the alarm.

This patch does not error out if the group is already marked empty. This can
introduce some empty_time stat error, but only in the case of
group_isolation=0. This is better than crashing. In the case of
group_isolation=1 we should still get the same stats as before this patch.

[ 222.308546] ------------[ cut here ]------------
[ 222.309311] kernel BUG at block/blk-cgroup.c:236!
[ 222.309311] invalid opcode: 0000 [#1] SMP
[ 222.309311] last sysfs file: /sys/devices/virtual/block/dm-3/queue/scheduler
[ 222.309311] CPU 1
[ 222.309311] Modules linked in: dm_round_robin dm_multipath qla2xxx scsi_transport_fc dm_zero dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
[ 222.309311]
[ 222.309311] Pid: 4780, comm: fio Not tainted 2.6.34-rc4-blkio-config #68 0A98h/HP xw8600 Workstation
[ 222.309311] RIP: 0010:[] [] blkiocg_set_start_empty_time+0x50/0x83
[ 222.309311] RSP: 0018:ffff8800ba6e79f8 EFLAGS: 00010002
[ 222.309311] RAX: 0000000000000082 RBX: ffff8800a13b7990 RCX: ffff8800a13b7808
[ 222.309311] RDX: 0000000000002121 RSI: 0000000000000082 RDI: ffff8800a13b7a30
[ 222.309311] RBP: ffff8800ba6e7a18 R08: 0000000000000000 R09: 0000000000000001
[ 222.309311] R10: 000000000002f8c8 R11: ffff8800ba6e7ad8 R12: ffff8800a13b78ff
[ 222.309311] R13: ffff8800a13b7990 R14: 0000000000000001 R15: ffff8800a13b7808
[ 222.309311] FS: 00007f3beec476f0(0000) GS:ffff880001e40000(0000) knlGS:0000000000000000
[ 222.309311] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 222.309311] CR2: 000000000040e7f0 CR3: 00000000a12d5000 CR4: 00000000000006e0
[ 222.309311] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 222.309311] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 222.309311] Process fio (pid: 4780, threadinfo ffff8800ba6e6000, task ffff8800b3d6bf00)
[ 222.309311] Stack:
[ 222.309311] 0000000000000001 ffff8800bab17a48 ffff8800bab17a48 ffff8800a13b7800
[ 222.309311] ffff8800ba6e7a68 ffffffff8121da35 ffff880000000001 00ff8800ba5c5698
[ 222.309311] ffff8800ba6e7a68 ffff8800a13b7800 0000000000000000 ffff8800bab17a48
[ 222.309311] Call Trace:
[ 222.309311] [] __cfq_slice_expired+0x2af/0x3ec
[ 222.309311] [] cfq_dispatch_requests+0x2c8/0x8e8
[ 222.309311] [] ? spin_unlock_irqrestore+0xe/0x10
[ 222.309311] [] ? blk_insert_cloned_request+0x70/0x7b
[ 222.309311] [] blk_peek_request+0x191/0x1a7
[ 222.309311] [] dm_request_fn+0x38/0x14c [dm_mod]
[ 222.309311] [] ? sync_page_killable+0x0/0x35
[ 222.309311] [] __generic_unplug_device+0x32/0x37
[ 222.309311] [] generic_unplug_device+0x2e/0x3c
[ 222.309311] [] dm_unplug_all+0x42/0x5b [dm_mod]
[ 222.309311] [] blk_unplug+0x29/0x2d
[ 222.309311] [] blk_backing_dev_unplug+0x12/0x14
[ 222.309311] [] block_sync_page+0x35/0x39
[ 222.309311] [] sync_page+0x41/0x4a
[ 222.309311] [] sync_page_killable+0xe/0x35
[ 222.309311] [] __wait_on_bit_lock+0x46/0x8f
[ 222.309311] [] __lock_page_killable+0x66/0x6d
[ 222.309311] [] ? wake_bit_function+0x0/0x33
[ 222.309311] [] lock_page_killable+0x2c/0x2e
[ 222.309311] [] generic_file_aio_read+0x361/0x4f0
[ 222.309311] [] do_sync_read+0xcb/0x108
[ 222.309311] [] ? security_file_permission+0x16/0x18
[ 222.309311] [] vfs_read+0xab/0x108
[ 222.309311] [] sys_read+0x4a/0x6e
[ 222.309311] [] system_call_fastpath+0x16/0x1b
[ 222.309311] Code: 58 01 00 00 00 48 89 c6 75 0a 48 83 bb 60 01 00 00 00 74 09 48 8d bb a0 00 00 00 eb 35 41 fe cc 74 0d f6 83 c0 01 00 00 04 74 04 0b eb fe 48 89 75 e8 e8 be e0 de ff 66 83 8b c0 01 00 00 04
[ 222.309311] RIP [] blkiocg_set_start_empty_time+0x50/0x83
[ 222.309311] RSP
[ 222.309311] ---[ end trace 32b4f71dffc15712 ]---

Signed-off-by: Vivek Goyal
Acked-by: Divyesh Shah
Signed-off-by: Jens Axboe
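A hedged sketch of the tolerant check that replaces the BUG_ON() (function and helper names follow block/blk-cgroup.c of that era; the details are illustrative, not the verbatim diff):

void blkiocg_set_start_empty_time(struct blkio_group *blkg)
{
	struct blkio_group_stats *stats = &blkg->stats;

	/* was: BUG_ON(blkio_blkg_empty(stats));
	 * with group_isolation=0 a queue can migrate into a group that
	 * is already marked empty, so just keep the existing mark */
	if (blkio_blkg_empty(stats))
		return;

	stats->start_empty_time = sched_clock();
	blkio_mark_blkg_empty(stats);
}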