13 Mar, 2008
1 commit
-
Fixes:
block/genhd.c:361: warning: ignoring return value of ‘class_register’, declared with attribute warn_unused_result
Signed-off-by: Roland McGrath
Acked-by: Jeff Garzik
Signed-off-by: Linus Torvalds
04 Mar, 2008
12 commits
-
This is important to e.g. dm, which tries to decide whether to stop using
barriers or not.
Tested as working by Anders Henke
Signed-off-by: Jens Axboe
-
Introduced between 2.6.25-rc2 and -rc3
block/blk-map.c:154:14: warning: symbol 'bio' shadows an earlier one
block/blk-map.c:110:13: originally declared here
Signed-off-by: Harvey Harrison
Signed-off-by: Jens Axboe
-
Introduced between 2.6.25-rc2 and -rc3
block/blk-settings.c:319:12: warning: function 'blk_queue_dma_drain' with external linkage has definition
Signed-off-by: Harvey Harrison
Signed-off-by: Jens Axboe
-
This patch removes the unused export of blk_rq_map_user_iov.
Signed-off-by: Adrian Bunk
Signed-off-by: Jens Axboe
-
This patch removes the unused exports of blk_{get,put}_queue.
Signed-off-by: Adrian Bunk
Signed-off-by: Jens Axboe
-
This patch contains the following cleanups:
- make the needlessly global struct disk_type static
- #if 0 the unused genhd_media_change_notify()
Signed-off-by: Adrian Bunk
Signed-off-by: Jens Axboe
-
This patch adds a proper prototype for blk_dev_init() in block/blk.h
Signed-off-by: Adrian Bunk
Signed-off-by: Jens Axboe
-
Every file should include the headers containing the externs for its
global functions (in this case for __blk_queue_free_tags()).
Signed-off-by: Adrian Bunk
Signed-off-by: Jens Axboe
-
For some non-x86 systems with 4GB of memory or more,
we need to increase the range of addresses that can be
used for direct DMA in a 64-bit kernel.
Signed-off-by: Yang Shi
Signed-off-by: Jens Axboe
-
Block layer alignment was used for two different purposes - memory
alignment and padding. This causes problems in lower layers because
drivers which only require memory alignment end up with an adjusted
rq->data_len. Separate out padding such that padding occurs iff the
driver explicitly requests it.
Tomo: restore the code to update bio in blk_rq_map_user
introduced by the commit 40b01b9bbdf51ae543a04744283bf2d56c4a6afa
according to padding alignment.
Signed-off-by: Tejun Heo
Signed-off-by: FUJITA Tomonori
Signed-off-by: Jens Axboe
-
The meaning of rq->data_len was changed to the length of an allocated
buffer from the true data length. It breaks SG_IO friends and
bsg. This patch restores the meaning of rq->data_len to the true data
length and adds rq->extra_len to store an extended length (due to
drain buffer and padding).
This patch also removes the code to update bio in blk_rq_map_user
introduced by the commit 40b01b9bbdf51ae543a04744283bf2d56c4a6afa.
The commit adjusts bio according to memory alignment
(queue_dma_alignment). However, memory alignment is NOT padding
alignment. This adjustment also breaks SG_IO friends and bsg. Padding
alignment needs to be fixed in a proper way (by a separate patch).
Signed-off-by: FUJITA Tomonori
Signed-off-by: Jens Axboe
-
kernel-doc for block/:
- add missing parameters
- fix one function's parameter list (remove blank line)
- add 2 source files to docbook for non-exported kernel-doc functions
Signed-off-by: Randy Dunlap
Signed-off-by: Jens Axboe
19 Feb, 2008
10 commits
-
Clear drain buffer before chaining if the command in question is a
write.
Signed-off-by: Tejun Heo
Signed-off-by: Jens Axboe
-
Draining shouldn't be done for commands where overflow may indicate
data integrity issues. Add dma_drain_needed callback to
request_queue. The drain buffer is appended iff this function returns
non-zero.
Signed-off-by: Tejun Heo
Cc: James Bottomley
Signed-off-by: Jens Axboe
-
With padding and draining moved into it, block layer now may extend
requests as directed by queue parameters, so now a request has two
sizes - the original request size and the extended size which matches
the size of area pointed to by bios and later by sgs. The latter size
is what lower layers are primarily interested in when allocating,
filling up DMA tables and setting up the controller.
Both padding and draining extend the data area to accommodate
controller characteristics. As any controller which speaks SCSI can
handle underflows, feeding a larger data area is safe.
So, this patch makes the primary data length field, request->data_len,
indicate the size of the full data area and adds a separate length field,
request->raw_data_len, for the unmodified request size. The latter is
used to report to the higher layer (userland) and wherever the original
request size should be fed to the controller or device.
Signed-off-by: Tejun Heo
Cc: James Bottomley
Signed-off-by: Jens Axboe
-
DMA start address and transfer size alignment for PC requests are
achieved using bio_copy_user() instead of bio_map_user(). This works
because bio_copy_user() always uses full pages and block DMA alignment
isn't allowed to go over PAGE_SIZE.
However, the implementation didn't update the last bio of the request
to make this padding visible to lower layers. This patch makes
blk_rq_map_user() extend the last bio such that it includes the
padding area and the size of area pointed to by the request is
properly aligned.
Signed-off-by: Tejun Heo
Cc: James Bottomley
Signed-off-by: Jens Axboe
-
Currently we fail if someone requests a valid io scheduler, but it's
modular and not currently loaded. That can happen from a driver init
asking for a different scheduler, or online switching through sysfs
as requested by a user.
This patch makes elevator_get() use request_module() to attempt to load
the appropriate module, instead of requiring that to be done manually.
Signed-off-by: Jens Axboe
-
It's cumbersome to browse a radix tree from start to finish, especially
since we modify keys when a process exits. So add a hlist for the single
purpose of browsing over all known cfq_io_contexts, used for exit,
io prio change, etc.
This fixes http://bugzilla.kernel.org/show_bug.cgi?id=9948
Signed-off-by: Jens Axboe
-
That way the interface is symmetric, and calling blk_rq_unmap_user()
on the request won't oops.
Signed-off-by: Jens Axboe
-
blk_settings_init() can become static.
Signed-off-by: Adrian Bunk
Signed-off-by: Jens Axboe
-
blk_ioc_init() can become static.
Signed-off-by: Adrian Bunk
Signed-off-by: Jens Axboe
-
request_cachep needlessly became global.
Signed-off-by: Adrian Bunk
Signed-off-by: Jens Axboe
08 Feb, 2008
4 commits
-
Removes the now unused old partition statistic code.
Signed-off-by: Jerome Marchand
Signed-off-by: Jens Axboe
-
Reports enhanced partition statistics in /proc/diskstats.
Signed-off-by: Jerome Marchand
-
Updates the enhanced partition statistics in generic block layer
besides the disk statistics.
Signed-off-by: Jerome Marchand
Signed-off-by: Jens Axboe
-
Rearrange fields in cache order and initialize some fields that
we didn't previously init. Remove init of ->completion_data, it's
part of a union with ->hash. Luckily clearing the rb node is the same
as setting it to null!
Signed-off-by: Jens Axboe
01 Feb, 2008
6 commits
-
It blindly copies everything in the io_context, including the lock.
That doesn't work so well for either lock ordering or lockdep.
There seems zero point in swapping io contexts on a request-to-request
merge, so the best point of action is to just remove it.
Signed-off-by: Jens Axboe
-
Since it's acquired from irq context, all locking must be of the
irq safe variant. Most are already inside the queue lock (which
already disables interrupts), but the io scheduler rmmod path
always has irqs enabled and the put_io_context() path may legally
be called with irqs enabled (even if it isn't usually). So fixup
those two.
Signed-off-by: Jens Axboe
-
Signed-off-by: Jens Axboe
-
Signed-off-by: Jens Axboe
-
Signed-off-by: Jens Axboe
-
No point in passing signed integers as the byte count, they can never
be negative.
Signed-off-by: Jens Axboe
31 Jan, 2008
1 commit
-
This fixes a problem in SCSI where we use the (previously
uninitialised) cmd_type via blk_pc_request() to set up the transfer in
scsi_init_sgtable().
Acked-by: FUJITA Tomonori
Signed-off-by: James Bottomley
30 Jan, 2008
6 commits
-
If the two requests belong to the same io context, we will attempt
to lock the same lock twice. But swapping contexts is pointless in
that case, so just check for rioc == nioc before doing the double
lock and copy.
Tested-by: Olof Johansson
Signed-off-by: Jens Axboe
-
Signed-off-by: Jan Engelhardt
Signed-off-by: Jens Axboe
-
Expose hardware sector size in sysfs queue directory.
Signed-off-by: Martin K. Petersen
Signed-off-by: Jens Axboe
-
Signed-off-by: Jens Axboe
-
Signed-off-by: Jens Axboe
-
Signed-off-by: Jens Axboe