07 May, 2010
1 commit
-
With CONFIG_PROVE_RCU=y, a warning can be triggered:
# mount -t cgroup -o blkio xxx /mnt
# mkdir /mnt/subgroup...
kernel/cgroup.c:4442 invoked rcu_dereference_check() without protection!
...To fix this, we avoid calling css_depth() here, which is a bit simpler
than the original code.
Signed-off-by: Li Zefan
Acked-by: Vivek Goyal
Signed-off-by: Jens Axboe
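A hedged sketch of the shape of such a fix (the parent-pointer test is an assumption for illustration; the changelog only says css_depth() is no longer called):

/* Check "not deeper than one level below root" without css_depth(),
 * so no RCU read-side critical section is needed here.
 * (Assumed form, for illustration only.) */
if (cgroup->parent && cgroup->parent->parent)
	return ERR_PTR(-EINVAL);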
06 May, 2010
1 commit
-
It is necessary to be in an RCU read-side critical section when invoking
css_id(), so this patch adds one to blkiocg_add_blkio_group(). This is
actually a false positive, because this is called at initialization time
and hence always refers to the root cgroup, which cannot go away.
[ 103.790505] ===================================================
[ 103.790509] [ INFO: suspicious rcu_dereference_check() usage. ]
[ 103.790511] ---------------------------------------------------
[ 103.790514] kernel/cgroup.c:4432 invoked rcu_dereference_check() without protection!
[ 103.790517]
[ 103.790517] other info that might help us debug this:
[ 103.790519]
[ 103.790521]
[ 103.790521] rcu_scheduler_active = 1, debug_locks = 1
[ 103.790524] 4 locks held by bash/4422:
[ 103.790526] #0: (&buffer->mutex){+.+.+.}, at: [] sysfs_write_file+0x3c/0x144
[ 103.790537] #1: (s_active#102){.+.+.+}, at: [] sysfs_write_file+0xe7/0x144
[ 103.790544] #2: (&q->sysfs_lock){+.+.+.}, at: [] queue_attr_store+0x49/0x8f
[ 103.790552] #3: (&(&blkcg->lock)->rlock){......}, at: [] blkiocg_add_blkio_group+0x2b/0xad
[ 103.790560]
[ 103.790561] stack backtrace:
[ 103.790564] Pid: 4422, comm: bash Not tainted 2.6.34-rc4-blkio-second-crash #81
[ 103.790567] Call Trace:
[ 103.790572] [] lockdep_rcu_dereference+0x9d/0xa5
[ 103.790577] [] css_id+0x44/0x57
[ 103.790581] [] blkiocg_add_blkio_group+0x53/0xad
[ 103.790586] [] cfq_init_queue+0x139/0x32c
[ 103.790591] [] elv_iosched_store+0xbf/0x1bf
[ 103.790595] [] queue_attr_store+0x70/0x8f
[ 103.790599] [] ? sysfs_write_file+0xe7/0x144
[ 103.790603] [] sysfs_write_file+0x108/0x144
[ 103.790609] [] vfs_write+0xae/0x10b
[ 103.790612] [] ? trace_hardirqs_on_caller+0x10c/0x130
[ 103.790616] [] sys_write+0x4a/0x6e
[ 103.790622] [] system_call_fastpath+0x16/0x1b
[ 103.790625]
Located-by: Miles Lane
Signed-off-by: Vivek Goyal
Signed-off-by: Paul E. McKenney
Signed-off-by: Ingo Molnar
Signed-off-by: Jens Axboe
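A minimal sketch of the fix, assuming the css_id() call in blkiocg_add_blkio_group() is simply wrapped in an RCU read-side critical section (field names follow blk-cgroup of this era but are illustrative here):

spin_lock_irqsave(&blkcg->lock, flags);
rcu_read_lock();	/* css_id() does rcu_dereference() internally */
blkg->blkcg_id = css_id(&blkcg->css);
rcu_read_unlock();
/* ... link blkg into the cgroup's list ... */
spin_unlock_irqrestore(&blkcg->lock, flags);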
21 Apr, 2010
1 commit
-
blk_rq_timed_out_timer() relied on blk_add_timer() never returning a
timer value of zero, but commit 7838c15b8dd18e78a523513749e5b54bda07b0cb
removed the code that bumped this value when it was zero.
Therefore, when jiffies is near wrap we could get unlucky and not set the
timeout value correctly.
This patch uses a flag to indicate that the timeout value was set and so
handles jiffies wrap correctly, and it keeps all the logic in one
function so it should be easier to maintain in the future.
Signed-off-by: Richard Kennedy
Cc: stable@kernel.org
Signed-off-by: Jens Axboe
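A sketch of the flag-based approach, assuming the per-request scan and the timer re-arm live in one function (variable names illustrative):

unsigned long next = 0;
int next_set = 0;	/* flag: "next" holds a valid deadline */
struct request *rq, *tmp;

list_for_each_entry_safe(rq, tmp, &q->timeout_list, timeout_list) {
	/* ... expire overdue requests ... */
	if (!next_set || time_after(next, rq->deadline)) {
		next = rq->deadline;	/* correct even if deadline == 0 */
		next_set = 1;
	}
}
if (next_set)
	mod_timer(&q->timeout, round_jiffies_up(next));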
10 Apr, 2010
1 commit
-
* 'for-linus' of git://git.kernel.dk/linux-2.6-block: (34 commits)
cfq-iosched: Fix the incorrect timeslice accounting with forced_dispatch
loop: Update mtime when writing using aops
block: expose the statistics in blkio.time and blkio.sectors for the root cgroup
backing-dev: Handle class_create() failure
Block: Fix block/elevator.c elevator_get() off-by-one error
drbd: lc_element_by_index() never returns NULL
cciss: unlock on error path
cfq-iosched: Do not merge queues of BE and IDLE classes
cfq-iosched: Add additional blktrace log messages in CFQ for easier debugging
i2o: Remove the dangerous kobj_to_i2o_device macro
block: remove 16 bytes of padding from struct request on 64bits
cfq-iosched: fix a kbuild regression
block: make CONFIG_BLK_CGROUP visible
Remove GENHD_FL_DRIVERFS
block: Export max number of segments and max segment size in sysfs
block: Finalize conversion of block limits functions
block: Fix overrun in lcm() and move it to lib
vfs: improve writeback_inodes_wb()
paride: fix off-by-one test
drbd: fix al-to-on-disk-bitmap for 4k logical_block_size
...
09 Apr, 2010
1 commit
-
When CFQ dispatches requests forcefully due to a barrier or changing iosched,
it runs through all cfqq's dispatching requests and then expires each queue.
However, it does not activate a cfqq before flushing its IOs resulting in
using stale values for computing slice_used.
This patch fixes it by activating the queue before flushing requests from
each queue.
This is useful mostly for barrier requests, because when the iosched is changing
it really doesn't matter if we have incorrect accounting since we're going to
break down all structures anyway.
We also now expire the current timeslice before moving on with the dispatch
to accurately account slice used for that cfqq.
Signed-off-by: Divyesh Shah
Signed-off-by: Jens Axboe
06 Apr, 2010
1 commit
-
Currently, the io statistics for the root cgroup are maintained, but
they are not shown because the device information is not available at
the point that the root blkio cgroup is created. This patch updates
the device information when the statistics are updated so that the
statistics become visible.
Signed-off-by: Ricky Benitez
Acked-by: Vivek Goyal
Signed-off-by: Jens Axboe
02 Apr, 2010
1 commit
-
elevator_get() does not check the name length; if the name length exceeds
sizeof(elv), elv will be missing its terminating '\0'. The "-iosched" suffix
in the elv buffer can then be replaced by garbage such as "aaaaaaaaa", and the
subsequent request_module() call can load an untrusted module.
Signed-off-by: Zhitong Wang
Signed-off-by: Jens Axboe
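A hedged sketch of the kind of check the fix implies (buffer sizing and the exact rejection path are assumptions):

char elv[ELV_NAME_MAX + sizeof("-iosched")];

if (strlen(name) >= ELV_NAME_MAX)
	return NULL;	/* reject: name would not fit with its '\0' */
snprintf(elv, sizeof(elv), "%s-iosched", name);	/* always NUL-terminated */
request_module("%s", elv);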
30 Mar, 2010
1 commit
-
include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h, making everything defined by the two files
universally available and complicating inclusion dependencies.
The percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include those
headers directly instead of assuming availability. As this conversion
needs to touch a large number of source files, the following script was
used as the basis of conversion.
http://userweb.kernel.org/~tj/misc/slabh-sweep.py
The script does the following:
* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there, i.e. if only gfp is used,
gfp.h, if slab is used, slab.h.
* When the script inserts a new include, it looks at the include
blocks and tries to put the new include such that its order conforms
to its surroundings. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
doesn't seem to be any matching order.
* If the script can't find a place to put a new include (mostly
because the file doesn't have a fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.
The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.
2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition while adding it to implementation .h or
embedding .c file was more appropriate for others. This step added
inclusions to around 150 files.
3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.
4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.
5. The script was run on all .h files but without automatically
editing them, as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored, as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.
6. percpu.h was updated not to include slab.h.
7. Build tests were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).
* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig
8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as bisection point.
Given the fact that I had only a couple of failures from the tests in step
7, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers which should be easily discoverable on most builds of
the specific arch.
Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
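For illustration, the step 1 rule amounts to edits of this kind in each affected .c file (hypothetical file):

/* uses only gfp flags / page-level allocation: */
#include <linux/gfp.h>

/* calls kmalloc()/kfree(); slab.h pulls in gfp.h itself: */
#include <linux/slab.h>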
25 Mar, 2010
2 commits
-
Even if they are found to be co-operating.
The prio_trees do not have any IDLE cfqqs on them. cfq_close_cooperator()
is called from cfq_select_queue() and cfq_completed_request(). The latter
ensures that the close cooperator code does not get invoked if the current
cfqq is of class IDLE but the former doesn't seem to have any such checks.
So an IDLE cfqq may get merged with a BE cfqq from the same group which
should be avoided.
Signed-off-by: Divyesh Shah
Acked-by: Vivek Goyal
Signed-off-by: Jens Axboe
-
These have helped us debug some issues we've noticed in earlier IO
controller versions and should be useful now as well. The extra logging
covers:
- idling behavior. Since there are so many conditions based on which we decide
to idle or not, this patch adds a log message for some conditions that we've
found useful.
- workload slices and current prio and workload type
Changelog from v1:
o moved log message from cfq_set_active_queue() to __cfq_set_active_queue()
o changed queue_count to st->count
Signed-off-by: Divyesh Shah
Signed-off-by: Jens Axboe
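The messages go through CFQ's existing blktrace macro; a representative call from the idling path might look like this (illustrative, matching the st->count note above):

cfq_log_cfqq(cfqd, cfqq, "Not idling. st->count:%d", cfqq->service_tree->count);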
19 Mar, 2010
2 commits
-
Conflicts:
block/Kconfig
Signed-off-by: Jens Axboe
-
Alex Shi reported a kbuild regression of about a 10% performance loss.
He bisected it to this commit: 3dde36ddea3e07dd025c4c1ba47edec91606fec0.
The reason is that cfqq_close() can't find a close cooperator. Restoring
cfq_rq_close()'s threshold to its original value makes the regression go away.
Since the for_preempt parameter isn't used anymore, this patch deletes it.
Reported-by: Alex Shi
Signed-off-by: Shaohua Li
Acked-by: Corrado Zoccolo
Signed-off-by: Jens Axboe
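A sketch consistent with the description (the restored threshold constant and helper shape are assumptions):

#define CFQQ_CLOSE_THR	(sector_t)(8 * 1024)	/* pre-3dde36dd distance */

static inline int cfq_rq_close(struct cfq_data *cfqd, struct cfq_queue *cfqq,
			       struct request *rq)	/* for_preempt dropped */
{
	return cfq_dist_from_last(cfqd, rq) <= CFQQ_CLOSE_THR;
}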
16 Mar, 2010
1 commit
-
Make the config visible, so we can choose from CONFIG_BLK_CGROUP=y
and CONFIG_BLK_CGROUP=m when CONFIG_IOSCHED_CFQ=m.
Signed-off-by: Li Zefan
Signed-off-by: Jens Axboe
15 Mar, 2010
2 commits
-
These two values are useful when debugging issues surrounding maximum
I/O size. Put them in sysfs with the rest of the queue limits.
Signed-off-by: Martin K. Petersen
Signed-off-by: Jens Axboe
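Usage is a plain sysfs read, e.g. (device name illustrative):
# cat /sys/block/sda/queue/max_segments
# cat /sys/block/sda/queue/max_segment_size
-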
lcm() was defined to take integer-sized arguments. The supplied
arguments are multiplied, however, causing us to overflow given
sufficiently large input. That in turn led to incorrect optimal I/O
size reporting in some cases (RAID over RAID).
Switch lcm() over to unsigned long similar to gcd() and move the
function from blk-settings.c to lib.
Signed-off-by: Martin K. Petersen
Signed-off-by: Jens Axboe
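The resulting lib helper is essentially lcm(a, b) = a / gcd(a, b) * b, dividing before multiplying so intermediate values stay small; a sketch:

#include <linux/gcd.h>

unsigned long lcm(unsigned long a, unsigned long b)
{
	if (a && b)
		return (a / gcd(a, b)) * b;	/* divide first to limit overflow */
	return 0;
}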
13 Mar, 2010
2 commits
-
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial: (56 commits)
doc: fix typo in comment explaining rb_tree usage
Remove fs/ntfs/ChangeLog
doc: fix console doc typo
doc: cpuset: Update the cpuset flag file
Fix of spelling in arch/sparc/kernel/leon_kernel.c no longer needed
Remove drivers/parport/ChangeLog
Remove drivers/char/ChangeLog
doc: typo - Table 1-2 should refer to "status", not "statm"
tree-wide: fix typos "ass?o[sc]iac?te" -> "associate" in comments
No need to patch AMD-provided drivers/gpu/drm/radeon/atombios.h
devres/irq: Fix devm_irq_match comment
Remove reference to kthread_create_on_cpu
tree-wide: Assorted spelling fixes
tree-wide: fix 'lenght' typo in comments and code
drm/kms: fix spelling in error message
doc: capitalization and other minor fixes in pnp doc
devres: typo fix s/dev/devm/
Remove redundant trailing semicolons from macros
fix typo "definetly" -> "definitely" in comment
tree-wide: s/widht/width/g typo in comments
...
Fix trivial conflict in Documentation/laptops/00-INDEX
-
Modify the Block I/O cgroup subsystem to be able to be built as a module.
As the CFQ disk scheduler optionally depends on blk-cgroup, config options
in block/Kconfig, block/Kconfig.iosched, and block/blk-cgroup.h are
enhanced to support the new module dependency.
Signed-off-by: Ben Blum
Cc: Li Zefan
Cc: Paul Menage
Cc: "David S. Miller"
Cc: KAMEZAWA Hiroyuki
Cc: Lai Jiangshan
Cc: Vivek Goyal
Cc: Jens Axboe
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
08 Mar, 2010
2 commits
-
Conflicts:
Documentation/filesystems/proc.txt
arch/arm/mach-u300/include/mach/debug-macro.S
drivers/net/qlge/qlge_ethtool.c
drivers/net/qlge/qlge_main.c
drivers/net/typhoon.c
-
Constify struct sysfs_ops.
This is part of the ops structure constification
effort started by Arjan van de Ven et al.
Benefits of this constification:
* prevents modification of data that is shared
(referenced) by many other structure instances
at runtime
* detects/prevents accidental (but not intentional)
modification attempts on archs that enforce
read-only kernel data at runtime
* potentially better optimized code as the compiler
can assume that the const data cannot be changed
* the compiler/linker move const data into .rodata
and therefore exclude them from false sharing
Signed-off-by: Emese Revfy
Acked-by: David Teigland
Acked-by: Matt Domsch
Acked-by: Maciej Sosnowski
Acked-by: Hans J. Koch
Acked-by: Pekka Enberg
Acked-by: Jens Axboe
Acked-by: Stephen Hemminger
Signed-off-by: Greg Kroah-Hartman
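After the change an ops table is simply declared const; for example, the block queue's table would read (sketch):

static const struct sysfs_ops queue_sysfs_ops = {
	.show	= queue_attr_show,
	.store	= queue_attr_store,
};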
01 Mar, 2010
6 commits
-
As the comment says the initial value of last_waited is never used, so
there is no need to initialise it with the current jiffies. Jiffies is
hot enough without accessing it for no reason.
Signed-off-by: Richard Kennedy
Signed-off-by: Jens Axboe
-
Reorder cfq_rb_root to remove 8 bytes of padding on 64 bit builds.
Consequently removing 56 bytes from cfq_group and 64 bytes from
cfq_data.
Signed-off-by: Richard Kennedy
Signed-off-by: Jens Axboe
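The technique is plain field reordering so that same-size members pack together; a hypothetical before/after (not the actual cfq_rb_root layout):

struct before {			/* sizeof == 24 on 64-bit */
	unsigned count;		/* 4 bytes + 4-byte hole ...         */
	struct rb_node *left;	/* ... forced by 8-byte alignment    */
	unsigned total;		/* 4 bytes + 4 bytes of tail padding */
};

struct after {			/* sizeof == 16 on 64-bit */
	struct rb_node *left;	/* pointers first                    */
	unsigned count;		/* the two 4-byte fields now pack    */
	unsigned total;
};
-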
Currently a queue can only dispatch up to 4 requests if there are other queues.
This isn't optimal; the device can handle more requests, for example, AHCI can
handle 31 requests. I can understand that the limit is for fairness, but we
could do a tweak: if the queue still has a lot of its slice left, it seems we
could ignore the limit. Tests show this boosts my workload (two-thread randread
of an SSD) from 78m/s to 100m/s.
Thanks for suggestions from Corrado and Vivek for the patch.
Signed-off-by: Shaohua Li
Signed-off-by: Jens Axboe
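A hedged sketch of the idea (identifiers mirror CFQ, but the estimate itself is illustrative): enforce the dispatch limit only when the slice is about to run out.

static inline bool slice_used_soon(struct cfq_data *cfqd,
				   struct cfq_queue *cfqq)
{
	/* brand-new slice: no history to estimate from, stay conservative */
	if (cfq_cfqq_slice_new(cfqq))
		return true;
	/* assume each in-flight request costs roughly one idle window */
	return time_after(jiffies + cfqd->cfq_slice_idle * cfqq->dispatched,
			  cfqq->slice_end);
}
-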
Counters for requests "in flight" and "in driver" are used asymmetrically
in cfq_may_dispatch, and have slightly different meaning.
We split the rq_in_flight counter (was sync_flight) to count both sync
and async requests, in order to use this one, which is more accurate in
some corner cases.
The rq_in_driver counter is coalesced, since individual sync/async counts
are not used any more.
Signed-off-by: Corrado Zoccolo
Signed-off-by: Jens Axboe
-
CFQ currently applies the same logic of detecting seeky queues and
grouping them together for rotational disks as well as SSDs.
For SSDs, the time to complete a request doesn't depend on the
request location, but only on the size.
This patch therefore changes the criterion to group queues by
request size in case of SSDs, in order to achieve better fairness.
Signed-off-by: Corrado Zoccolo
Signed-off-by: Jens Axboe
-
Current seeky detection is based on average seek length.
This is suboptimal, since the average will not distinguish between:
* a process doing medium sized seeks
* a process doing some sequential requests interleaved with larger seeks
and even a medium seek can take a lot of time, if the requested sector
happens to be behind the disk head in the rotation (50% probability).
Therefore, we change the seeky queue detection to work as follows:
* each request can be classified as sequential if it is very close to
the current head position, i.e. it is likely in the disk cache (disks
usually read more data than requested, and put it in cache for
subsequent reads). Otherwise, the request is classified as seeky.
* a history window of the last 32 requests is kept, storing the
classification result.
* a queue is marked as seeky if more than 1/8 of the last 32 requests
were seeky.
This patch fixes a regression reported by Yanmin, on mmap 64k random
reads.
Reported-by: Yanmin Zhang
Signed-off-by: Corrado Zoccolo
Signed-off-by: Jens Axboe
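A sketch of the mechanism, close to the shape of the CFQ code (the threshold and the 32-bit history field follow the description above):

#define CFQQ_SEEK_THR	(sector_t)(8 * 100)
/* seeky if more than 32/8 == 4 of the last 32 requests were seeky */
#define CFQQ_SEEKY(cfqq)	(hweight32((cfqq)->seek_history) > 32/8)

static void cfq_update_io_seektime(struct cfq_data *cfqd,
				   struct cfq_queue *cfqq, struct request *rq)
{
	sector_t sdist = 0;

	if (cfqq->last_request_pos) {
		if (cfqq->last_request_pos < blk_rq_pos(rq))
			sdist = blk_rq_pos(rq) - cfqq->last_request_pos;
		else
			sdist = cfqq->last_request_pos - blk_rq_pos(rq);
	}
	/* shift in one classification bit per request: 1 == seeky */
	cfqq->seek_history <<= 1;
	cfqq->seek_history |= (sdist > CFQQ_SEEK_THR);
}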
26 Feb, 2010
6 commits
-
Except for SCSI no device drivers distinguish between physical and
hardware segment limits. Consolidate the two into a single segment
limit.
Signed-off-by: Martin K. Petersen
Signed-off-by: Jens Axboe
-
The block layer calling convention is blk_queue_<limit name>.
blk_queue_max_sectors predates this practice, leading to some confusion.
Rename the function to appropriately reflect that its intended use is to
set max_hw_sectors.
Also introduce a temporary wrapper for backwards compatibility. This can
be removed after the merge window is closed.
Signed-off-by: Martin K. Petersen
Signed-off-by: Jens Axboe
-
Add a BLK_ prefix to block layer constants.
Signed-off-by: Martin K. Petersen
Signed-off-by: Jens Axboe
-
blk_queue_max_hw_sectors is no longer called by any subsystem and can be
removed.Signed-off-by: Martin K. Petersen
Signed-off-by: Jens Axboe -
Clarify blk_queue_max_sectors and update documentation.
Signed-off-by: Martin K. Petersen
Signed-off-by: Jens Axboe
-
There's no need to take a css reference here, for the caller
has already called rcu_read_lock() to prevent the cgroup from
being removed.
Signed-off-by: Gui Jianfeng
Reviewed-by: Li Zefan
Acked-by: Vivek Goyal
Signed-off-by: Jens Axboe
25 Feb, 2010
1 commit
-
Conflicts:
include/linux/blkdev.h
Signed-off-by: Jens Axboe
23 Feb, 2010
2 commits
-
Now that the bio list management stuff is generic, convert
generic_make_request to use bio lists instead of its own private bio
list implementation.
Signed-off-by: Akinobu Mita
Cc: Christoph Hellwig
Signed-off-by: Jens Axboe
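The generic helpers in question, as used by a caller (a representative subset of the bio_list API; handle_bio() is illustrative):

struct bio_list list;

bio_list_init(&list);		/* empty list */
bio_list_add(&list, bio);	/* append at tail */
while ((bio = bio_list_pop(&list)))
	handle_bio(bio);
-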
This reverts commit fb1e75389bd06fd5987e9cda1b4e0305c782f854.
"Benjamin S." reports that the patch in question
causes a big drop in sequential throughput for him, dropping from
200MB/sec down to only 70MB/sec.
Needs to be investigated more fully; for now let's just revert the
offending commit.
Conflicts:
include/linux/blkdev.h
Signed-off-by: Jens Axboe
22 Feb, 2010
2 commits
-
This removes 8 bytes of padding from struct cfq_queue on 64 bit builds,
shrinking its size to 256 bytes, so fitting into 1 fewer cacheline and
allowing 1 more object/slab in its kmem_cache.
Signed-off-by: Richard Kennedy
Reviewed-by: Jeff Moyer
----
patch against 2.6.33-rc8
tested on x86_64 AMDX2
Signed-off-by: Jens Axboe
09 Feb, 2010
1 commit
-
In particular, several occurrences of funny versions of 'success',
'unknown', 'therefore', 'acknowledge', 'argument', 'achieve', 'address',
'beginning', 'desirable', 'separate' and 'necessary' are fixed.
Signed-off-by: Daniel Mack
Cc: Joe Perches
Cc: Junio C Hamano
Signed-off-by: Jiri Kosina
05 Feb, 2010
1 commit
-
Currently we split seeky coop queues after 1s, which is too big. The patch below
marks a seeky coop queue's split_coop flag after one slice. After that, if new
requests come in, the queues will be split. Patch is suggested by Corrado.
Signed-off-by: Shaohua Li
Reviewed-by: Corrado Zoccolo
Acked-by: Jeff Moyer
Signed-off-by: Jens Axboe
03 Feb, 2010
1 commit
-
A few weeks back, Shaohua Li had posted a similar patch. I am reposting it
with more test results.
This patch does two things.
- Do not idle on async queues.
- It also changes the write queue depth CFQ drives (cfq_may_dispatch()).
Currently, we seem to be driving a queue depth of 1 always for WRITES. This is
true even if there is only one write queue in the system, and all the logic
of infinite queue depth in case of a single busy queue as well as slowly
increasing queue depth based on the last delayed sync request does not seem to
be kicking in at all.
This patch will allow deeper WRITE queue depths (subject to the other
WRITE queue depth constraints like cfq_quantum and the last delayed sync
request).
Shaohua Li had reported getting more out of his SSD. For me, I have got
one Lun exported from an HP EVA and when pure buffered writes are on, I
can get more out of the system. Following are test results of pure
buffered writes (with end_fsync=1) with vanilla and patched kernel. These
results are the average of 3 sets of runs with an increasing number of threads.
AVERAGE[bufwfs][vanilla]
-------
job Set NR ReadBW(KB/s) MaxClat(us) WriteBW(KB/s) MaxClat(us)
--- --- -- ------------ ----------- ------------- -----------
bufwfs 3 1 0 0 95349 474141
bufwfs 3 2 0 0 100282 806926
bufwfs 3 4 0 0 109989 2.7301e+06
bufwfs 3 8 0 0 116642 3762231
bufwfs 3 16 0 0 118230 6902970
AVERAGE[bufwfs] [patched kernel]
-------
bufwfs 3 1 0 0 270722 404352
bufwfs 3 2 0 0 206770 1.06552e+06
bufwfs 3 4 0 0 195277 1.62283e+06
bufwfs 3 8 0 0 260960 2.62979e+06
bufwfs 3 16 0 0 299260 1.70731e+06
I also ran buffered writes along with some sequential reads and some
buffered reads going on in the system on a SATA disk, because the potential
risk could be that we should not be driving queue depth higher in the presence
of sync IO, in order to keep the max clat low.
With some random and sequential reads going on in the system on one SATA
disk I did not see any significant increase in max clat. So it looks like
the other WRITE queue depth control logic is doing its job. Here are the
results.
AVERAGE[brr, bsr, bufw together] [vanilla]
-------
job Set NR ReadBW(KB/s) MaxClat(us) WriteBW(KB/s) MaxClat(us)
--- --- -- ------------ ----------- ------------- -----------
brr 3 1 850 546345 0 0
bsr 3 1 14650 729543 0 0
bufw 3 1 0 0 23908 8274517
brr 3 2 981.333 579395 0 0
bsr 3 2 14149.7 1175689 0 0
bufw 3 2 0 0 21921 1.28108e+07
brr 3 4 898.333 1.75527e+06 0 0
bsr 3 4 12230.7 1.40072e+06 0 0
bufw 3 4 0 0 19722.3 2.4901e+07
brr 3 8 900 3160594 0 0
bsr 3 8 9282.33 1.91314e+06 0 0
bufw 3 8 0 0 18789.3 23890622
AVERAGE[brr, bsr, bufw mixed] [patched kernel]
-------
job Set NR ReadBW(KB/s) MaxClat(us) WriteBW(KB/s) MaxClat(us)
--- --- -- ------------ ----------- ------------- -----------
brr 3 1 837 417973 0 0
bsr 3 1 14357.7 591275 0 0
bufw 3 1 0 0 24869.7 8910662
brr 3 2 1038.33 543434 0 0
bsr 3 2 13351.3 1205858 0 0
bufw 3 2 0 0 18626.3 13280370
brr 3 4 913 1.86861e+06 0 0
bsr 3 4 12652.3 1430974 0 0
bufw 3 4 0 0 15343.3 2.81305e+07
brr 3 8 890 2.92695e+06 0 0
bsr 3 8 9635.33 1.90244e+06 0 0
bufw 3 8 0 0 17200.3 24424392
So it looks like it might make sense to include this patch.
Thanks
Vivek
Signed-off-by: Vivek Goyal
Signed-off-by: Jens Axboe
01 Feb, 2010
1 commit
-
I triggered a lockdep warning as follows.
=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.33-rc2 #1
-------------------------------------------------------
test_io_control/7357 is trying to acquire lock:
(blkio_list_lock){+.+...}, at: [] blkiocg_weight_write+0x82/0x9e
but task is already holding lock:
(&(&blkcg->lock)->rlock){......}, at: [] blkiocg_weight_write+0x3b/0x9e
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #2 (&(&blkcg->lock)->rlock){......}:
[] validate_chain+0x8bc/0xb9c
[] __lock_acquire+0x723/0x789
[] lock_acquire+0x90/0xa7
[] _raw_spin_lock_irqsave+0x27/0x5a
[] blkiocg_add_blkio_group+0x1a/0x6d
[] cfq_get_queue+0x225/0x3de
[] cfq_set_request+0x217/0x42d
[] elv_set_request+0x17/0x26
[] get_request+0x203/0x2c5
[] get_request_wait+0x18/0x10e
[] __make_request+0x2ba/0x375
[] generic_make_request+0x28d/0x30f
[] submit_bio+0x8a/0x8f
[] submit_bh+0xf0/0x10f
[] ll_rw_block+0xc0/0xf9
[] ext3_find_entry+0x319/0x544 [ext3]
[] ext3_lookup+0x2c/0xb9 [ext3]
[] do_lookup+0xd3/0x172
[] link_path_walk+0x5fb/0x95c
[] path_walk+0x3c/0x81
[] do_path_lookup+0x21/0x8a
[] do_filp_open+0xf0/0x978
[] open_exec+0x1b/0xb7
[] do_execve+0xbb/0x266
[] sys_execve+0x24/0x4a
[] ptregs_execve+0x12/0x18
-> #1 (&(&q->__queue_lock)->rlock){..-.-.}:
[] validate_chain+0x8bc/0xb9c
[] __lock_acquire+0x723/0x789
[] lock_acquire+0x90/0xa7
[] _raw_spin_lock_irqsave+0x27/0x5a
[] cfq_unlink_blkio_group+0x17/0x41
[] blkiocg_destroy+0x72/0xc7
[] cgroup_diput+0x4a/0xb2
[] dentry_iput+0x93/0xb7
[] d_kill+0x1c/0x36
[] dput+0xf5/0xfe
[] do_rmdir+0x95/0xbe
[] sys_rmdir+0x10/0x12
[] sysenter_do_call+0x12/0x32
-> #0 (blkio_list_lock){+.+...}:
[] validate_chain+0x61c/0xb9c
[] __lock_acquire+0x723/0x789
[] lock_acquire+0x90/0xa7
[] _raw_spin_lock+0x1e/0x4e
[] blkiocg_weight_write+0x82/0x9e
[] cgroup_file_write+0xc6/0x1c0
[] vfs_write+0x8c/0x116
[] sys_write+0x3b/0x60
[] sysenter_do_call+0x12/0x32
other info that might help us debug this:
1 lock held by test_io_control/7357:
#0: (&(&blkcg->lock)->rlock){......}, at: [] blkiocg_weight_write+0x3b/0x9e
stack backtrace:
Pid: 7357, comm: test_io_control Not tainted 2.6.33-rc2 #1
Call Trace:
[] print_circular_bug+0x91/0x9d
[] validate_chain+0x61c/0xb9c
[] __lock_acquire+0x723/0x789
[] lock_acquire+0x90/0xa7
[] ? blkiocg_weight_write+0x82/0x9e
[] _raw_spin_lock+0x1e/0x4e
[] ? blkiocg_weight_write+0x82/0x9e
[] blkiocg_weight_write+0x82/0x9e
[] cgroup_file_write+0xc6/0x1c0
[] ? trace_hardirqs_off+0xb/0xd
[] ? cpu_clock+0x2e/0x44
[] ? security_file_permission+0xf/0x11
[] ? rw_verify_area+0x8a/0xad
[] ? cgroup_file_write+0x0/0x1c0
[] vfs_write+0x8c/0x116
[] sys_write+0x3b/0x60
[] sysenter_do_call+0x12/0x32
To prevent deadlock, we should take locks in the following sequence:
blkio_list_lock -> queue_lock -> blkcg_lock.
The following patch should fix this bug.
Signed-off-by: Gui Jianfeng
Signed-off-by: Jens Axboe
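A sketch of the corrected weight-write path (illustrative; the point is taking blkio_list_lock before blkcg->lock, matching the stated order):

spin_lock(&blkio_list_lock);
spin_lock_irqsave(&blkcg->lock, flags);
/* ... update the weight and notify each registered policy ... */
spin_unlock_irqrestore(&blkcg->lock, flags);
spin_unlock(&blkio_list_lock);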