23 Oct, 2010

1 commit

  • * 'for-2.6.37/core' of git://git.kernel.dk/linux-2.6-block: (39 commits)
    cfq-iosched: Fix a gcc 4.5 warning and put some comments
    block: Turn bvec_k{un,}map_irq() into static inline functions
    block: fix accounting bug on cross partition merges
    block: Make the integrity mapped property a bio flag
    block: Fix double free in blk_integrity_unregister
    block: Ensure physical block size is unsigned int
    blkio-throttle: Fix possible multiplication overflow in iops calculations
    blkio-throttle: limit max iops value to UINT_MAX
    blkio-throttle: There is no need to convert jiffies to milli seconds
    blkio-throttle: Fix link failure failure on i386
    blkio: Recalculate the throttled bio dispatch time upon throttle limit change
    blkio: Add root group to td->tg_list
    blkio: deletion of a cgroup was causes oops
    blkio: Do not export throttle files if CONFIG_BLK_DEV_THROTTLING=n
    block: set the bounce_pfn to the actual DMA limit rather than to max memory
    block: revert bad fix for memory hotplug causing bounces
    Fix compile error in blk-exec.c for !CONFIG_DETECT_HUNG_TASK
    block: set the bounce_pfn to the actual DMA limit rather than to max memory
    block: Prevent hang_check firing during long I/O
    cfq: improve fsync performance for small files
    ...

    Fix up trivial conflicts due to __rcu sparse annotation in include/linux/genhd.h

    Linus Torvalds
     

15 Sep, 2010

1 commit

  • Change the type of the 2nd parameter of blk_rq_aligned() to unsigned long
    and remove the unnecessary casting. Now we can call it with 'uaddr'
    instead of 'ubuf' in __blk_rq_map_user(), which removes the following
    sparse warnings:

    block/blk-map.c:57:31: warning: incorrect type in argument 2 (different address spaces)
    block/blk-map.c:57:31: expected void *addr
    block/blk-map.c:57:31: got void [noderef] *ubuf

    However, blk_rq_map_kern() now needs one more local variable to handle it
    (a usage sketch follows this entry).

    Signed-off-by: Namhyung Kim
    Signed-off-by: Jens Axboe

    Namhyung Kim
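    A minimal sketch of the resulting call pattern, assuming the post-change
    signature blk_rq_aligned(q, unsigned long addr, unsigned long len); the
    helper name map_needs_copy() is hypothetical:

        #include <linux/blkdev.h>

        static bool map_needs_copy(struct request_queue *q,
                                   void __user *ubuf, unsigned long len)
        {
                unsigned long uaddr = (unsigned long) ubuf;

                /* blk_rq_aligned() now takes the address as an unsigned
                 * long, so no cast between address spaces is needed */
                return !blk_rq_aligned(q, uaddr, len);
        }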
     

08 Aug, 2010

1 commit

  • Remove the current bio flags and reuse the request flags for the bio, too.
    This makes it easier to trace the type of I/O from the filesystem down to
    the block driver. Two flags existed in the bio but were missing from
    requests: BIO_RW_UNPLUG and BIO_RW_AHEAD. Also, two request flags that
    had a superfluous RW in them have been renamed (see the sketch after this
    entry).

    Note that the flags live in bio.h despite carrying the REQ_ name - since
    blkdev.h includes bio.h, that is the only way to go for now.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Jens Axboe

    Christoph Hellwig
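    An illustrative sketch only, assuming the unified flag names of this era
    (REQ_SYNC, REQ_UNPLUG) and the submit_bio(rw, bio) interface of the time;
    the point is that bio submitters and request code now share one REQ_*
    namespace:

        #include <linux/bio.h>

        /* mark the bio synchronous and ask for an immediate unplug,
         * using the same flag names that struct request uses */
        bio->bi_rw |= REQ_SYNC | REQ_UNPLUG;
        submit_bio(bio->bi_rw, bio);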
     

11 May, 2009

1 commit

  • Low-level block drivers have, for some reason, been pretty good at
    abusing the block layer API. In particular, struct request's fields tend
    to get violated in all possible ways. Make it clear that low-level
    drivers MUST NOT access or manipulate rq->sector and rq->data_len
    directly by prefixing them with double underscores (see the accessor
    sketch after this entry).

    This change is also needed to break the build of out-of-tree code that
    assumes the previous block API, where internal fields could be
    manipulated and rq->data_len carried the residual count on completion.

    [ Impact: hide internal fields, block API change ]

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
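    A minimal sketch of what a conforming driver does instead, using the
    accessor helpers from this series (blk_rq_pos() and blk_rq_bytes());
    lld_prepare() is a hypothetical driver function:

        #include <linux/blkdev.h>

        static void lld_prepare(struct request *rq)
        {
                /* read position and length through accessors rather than
                 * touching the now double-underscored internal fields */
                sector_t pos = blk_rq_pos(rq);          /* was rq->sector */
                unsigned int len = blk_rq_bytes(rq);    /* was rq->data_len */

                /* program the (hypothetical) controller with pos/len here */
        }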
     

28 Apr, 2009

1 commit

  • Now that all block request data transfer is done via bio, rq->data
    isn't used. Kill it.

    While at it, make the roles of rq->special and buffer clear.

    [ Impact: drop now unnecessary field from struct request ]

    Signed-off-by: Tejun Heo
    Cc: Boaz Harrosh

    Tejun Heo
     

03 Jan, 2009

2 commits

  • The commit 818827669d85b84241696ffef2de485db46b0b5e (block: make
    blk_rq_map_user take a NULL user-space buffer) extended
    blk_rq_map_user to accept a NULL user-space buffer with a READ
    command. It was necessary to convert sg to use the block layer mapping
    API.

    This patch extends blk_rq_map_user again, for a WRITE command. This is
    needed to convert the st and osst drivers to use the block layer
    mapping API.

    Signed-off-by: FUJITA Tomonori
    Acked-by: Jens Axboe
    Signed-off-by: James Bottomley

    FUJITA Tomonori
     
  • This fixes bio_copy_user_iov to properly handle the partial mappings
    with struct rq_map_data (which only sg uses for now but st and osst
    will shortly). It adds the offset member to struct rq_map_data and
    changes blk_rq_map_user to update it so that bio_copy_user_iov can add
    an appropriate page frame via bio_add_pc_page().

    Signed-off-by: FUJITA Tomonori
    Acked-by: Jens Axboe
    Signed-off-by: James Bottomley

    FUJITA Tomonori
     

18 Nov, 2008

1 commit

  • If the size passed in is OK but we end up mapping too many segments,
    we call the unmap path directly, as from IO completion. But IO
    completion holds an extra reference to the bio, so this error case
    oopses when it attempts to free an already freed bio.

    Fix it by taking an extra reference to the bio before calling the
    unmap path in the failure case (see the sketch after this entry).

    Reported-by: Petr Vandrovec

    Signed-off-by: Jens Axboe

    Jens Axboe
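    A sketch of the fix pattern described above; the surrounding error-path
    code is simplified, and the use of blk_rq_unmap_user() as the cleanup
    call here is an assumption:

        /* hold an extra reference across the unmap so that its final
         * bio_put() does not free a bio we are still referencing */
        bio_get(bio);
        blk_rq_unmap_user(bio);
        bio_put(bio);
        return -EINVAL;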
     

09 Oct, 2008

5 commits

  • This patch changes blk_rq_map_user to accept a NULL user-space buffer
    with a READ command if rq_map_data is not NULL. Thus a caller can pass
    page frames to blk_rq_map_user to just set up a request and bios backed
    by those page frames properly. bio_uncopy_user (called via
    blk_rq_unmap_user) doesn't copy data to user space for such a request
    (see the sketch after this entry).

    Signed-off-by: FUJITA Tomonori
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
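    A minimal sketch of the call this enables, assuming the six-argument
    blk_rq_map_user() of this era; map_data is presumed to already point at
    the caller's pre-allocated pages (as in the rq_map_data entries below):

        /* READ request whose data lands in caller-supplied pages; with a
         * NULL user buffer no copy back to user space is performed */
        ret = blk_rq_map_user(q, rq, &map_data, NULL, len, GFP_KERNEL);
        if (ret)
                goto out;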
     
  • This adds the blk_rq_aligned helper function to check whether the
    alignment and padding requirements are satisfied for a DMA transfer.
    It also converts blk_rq_map_kern and __blk_rq_map_user to use the
    helper function (a sketch of the check follows this entry).

    Signed-off-by: FUJITA Tomonori
    Cc: Jens Axboe
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
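    Roughly, the helper encapsulates a mask test against the queue's DMA
    alignment and padding settings. The sketch below is an approximation of
    that check, not the exact kernel code:

        static inline bool rq_dma_ok(struct request_queue *q,
                                     unsigned long addr, unsigned long len)
        {
                /* both the buffer address and the length must satisfy
                 * the queue's alignment and padding masks */
                unsigned long mask = queue_dma_alignment(q) | q->dma_pad_mask;

                return !(addr & mask) && !(len & mask);
        }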
     
  • This patch introduces struct rq_map_data to let bio_copy_user_iov()
    use reserved pages.

    Currently, bio_copy_user_iov allocates bounce pages, but
    drivers/scsi/sg.c wants to allocate pages by itself and use
    them. struct rq_map_data can be used to pass the allocated pages to
    bio_copy_user_iov (see the sketch after this entry).

    The current users of bio_copy_user_iov simply pass NULL (they don't
    want to use pre-allocated pages).

    Signed-off-by: FUJITA Tomonori
    Cc: Jens Axboe
    Cc: Douglas Gilbert
    Cc: Mike Christie
    Cc: James Bottomley
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
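    A sketch of how an sg-style caller might hand its own pages to the
    mapping path; the field names (.pages, .page_order, .nr_entries) are
    taken from this era's struct rq_map_data, while reserved_pages and
    nr_pages are hypothetical driver state:

        struct rq_map_data map_data = {
                .pages      = reserved_pages,   /* driver-owned page array */
                .page_order = 0,                /* order of each allocation */
                .nr_entries = nr_pages,
        };

        /* bio_copy_user_iov() will now fill these pages instead of
         * allocating its own bounce pages */
        ret = blk_rq_map_user(q, rq, &map_data, ubuf, len, GFP_KERNEL);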
     
  • Currently, blk_rq_map_user and blk_rq_map_user_iov always do
    GFP_KERNEL allocations.

    This adds a gfp_mask argument to blk_rq_map_user and blk_rq_map_user_iov
    so that sg can use it (sg always does GFP_ATOMIC allocations); see the
    sketch after this entry.

    Signed-off-by: FUJITA Tomonori
    Signed-off-by: Douglas Gilbert
    Cc: Mike Christie
    Cc: James Bottomley
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
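    A sketch of the resulting calls with the new trailing gfp_mask argument
    (signatures assumed from this era; iov and iov_count are the caller's
    sg_iovec array and its length):

        /* single-buffer form */
        ret = blk_rq_map_user(q, rq, NULL, ubuf, len, GFP_ATOMIC);

        /* iovec form */
        ret = blk_rq_map_user_iov(q, rq, NULL, iov, iov_count,
                                  len, GFP_ATOMIC);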
     
  • …in them as needed. Fix changed function parameter names. Fix typos/spellos. In comments, change REQ_SPECIAL to REQ_TYPE_SPECIAL and REQ_BLOCK_PC to REQ_TYPE_BLOCK_PC.

    Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
    Signed-off-by: Jens Axboe <jens.axboe@oracle.com>

    Randy Dunlap
     

04 Jul, 2008

1 commit

  • blk_rq_map_kern is used for kernel-internal I/O. Some callers use
    this function with stack buffers, but DMA to/from a stack buffer
    leads to memory corruption on non-coherent platforms.

    This patch makes blk_rq_map_kern use bounce buffers whenever a caller
    passes a stack buffer (on all platforms, for simplicity); a sketch of
    the decision follows this entry.

    Signed-off-by: FUJITA Tomonori
    Cc: Bartlomiej Zolnierkiewicz
    Cc: Thomas Bogendoerfer
    Cc: Tejun Heo
    Cc: James Bottomley
    Signed-off-by: Andrew Morton
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
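    A simplified sketch of the decision this adds inside blk_rq_map_kern().
    The surrounding code is approximated; the stack-detection helper
    object_is_on_stack() and the pair bio_copy_kern()/bio_map_kern() are
    real interfaces of this era, but their exact use here is a sketch:

        int reading = rq_data_dir(rq) == READ;
        struct bio *bio;

        /* bounce through temporary pages if the caller handed us a
         * stack buffer; otherwise map the kernel buffer directly */
        if (object_is_on_stack(kbuf))
                bio = bio_copy_kern(q, kbuf, len, gfp_mask, reading);
        else
                bio = bio_map_kern(q, kbuf, len, gfp_mask);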
     

29 Apr, 2008

1 commit

  • This patch adds bio_copy_kern, similar to
    bio_copy_user. blk_rq_map_kern uses bio_copy_kern instead of
    bio_map_kern when necessary.

    bio_copy_kern uses temporary pages, and the bi_end_io callback frees
    these pages. bio_copy_kern saves the original kernel buffer at
    bio->bi_private; it doesn't need something like struct bio_map_data to
    store information about the caller.

    Signed-off-by: FUJITA Tomonori
    Cc: Tejun Heo
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
     

21 Apr, 2008

2 commits

  • blk_rq_map_user adjusts bi_size of the last bio. That breaks the rule
    that req->data_len (the true data length) is equal to sum(bio), and it
    broke the scsi command completion code.

    commit e97a294ef6938512b655b1abf17656cf2b26f709 was introduced to fix
    the above issue. However, the partial completion code doesn't work
    with it. The commit is also a layer violation (the scsi mid-layer
    should not know about the block layer's padding).

    This patch moves the padding adjustment to blk_rq_map_sg (suggested by
    James). The padding works like the drain buffer. This patch breaks the
    rule that req->data_len is equal to sum(sg); however, the drain buffer
    already broke it. So this patch just restores the rule that
    req->data_len is equal to sum(bio) without breaking anything new.

    Now, when a low-level driver needs padding, blk_rq_map_user and
    blk_rq_map_user_iov guarantee there's enough room for padding, and
    blk_rq_map_sg can safely extend the last entry of a scatter list.

    blk_rq_map_sg must extend the last entry of a scatter list only for a
    request that went through bio_copy_user_iov. This patch introduces a
    new REQ_COPY_USER flag for that purpose (see the sketch after this
    entry).

    Signed-off-by: FUJITA Tomonori
    Cc: Tejun Heo
    Cc: Mike Christie
    Cc: James Bottomley
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
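    A simplified sketch of the padding step this adds to blk_rq_map_sg();
    field names follow this era's struct request (data_len, extra_len), and
    the exact expression is an approximation:

        /* only requests built via the copy path may be padded, and only
         * when the data length violates the queue's padding mask */
        if ((rq->cmd_flags & REQ_COPY_USER) &&
            (rq->data_len & q->dma_pad_mask)) {
                unsigned int pad_len =
                        (q->dma_pad_mask & ~rq->data_len) + 1;

                sg->length += pad_len;      /* extend the last sg entry */
                rq->extra_len += pad_len;   /* remember the extension */
        }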
     
  • With this patch, blk_rq_map_user_iov uses bio_copy_user_iov when a low
    level driver needs padding or a buffer in sg_iovec isn't aligned. That
    is, it uses temporary kernel buffers instead of mapping user pages
    directly.

    When an LLD needs padding, blk_rq_map_sg later needs to extend the last
    entry of a scatter list. bio_copy_user_iov guarantees that there is
    enough space for padding by using temporary kernel buffers instead of
    user pages.

    blk_rq_map_user_iov needs buffers in sg_iovec to be aligned. The
    comment in blk_rq_map_user_iov indicates that drivers/scsi/sg.c also
    needs buffers in sg_iovec to be aligned. Actually, drivers/scsi/sg.c
    works with unaligned buffers in sg_iovec (it always uses temporary
    kernel buffers).

    Signed-off-by: FUJITA Tomonori
    Cc: Tejun Heo
    Cc: Mike Christie
    Cc: James Bottomley
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
     

04 Mar, 2008

4 commits

  • Introduced between 2.6.25-rc2 and -rc3
    block/blk-map.c:154:14: warning: symbol 'bio' shadows an earlier one
    block/blk-map.c:110:13: originally declared here

    Signed-off-by: Harvey Harrison
    Signed-off-by: Jens Axboe

    Harvey Harrison
     
  • This patch removes the unused export of blk_rq_map_user_iov.

    Signed-off-by: Adrian Bunk
    Signed-off-by: Jens Axboe

    Adrian Bunk
     
  • Block layer alignment was used for two different purposes - memory
    alignment and padding. This causes problems in lower layers because
    drivers which only require memory alignment end up with an adjusted
    rq->data_len. Separate out padding so that padding occurs only if a
    driver explicitly requests it (see the sketch after this entry).

    Tomo: restore the code to update the bio in blk_rq_map_user,
    introduced by commit 40b01b9bbdf51ae543a04744283bf2d56c4a6afa,
    according to the padding alignment.

    Signed-off-by: Tejun Heo
    Signed-off-by: FUJITA Tomonori
    Signed-off-by: Jens Axboe

    Tejun Heo
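    A sketch of what the split looks like from a driver's point of view,
    assuming the two queue setters of this era (the 511 masks are just an
    example for 512-byte requirements):

        /* memory alignment: buffers handed to the queue must be
         * 512-byte aligned in memory */
        blk_queue_dma_alignment(q, 511);

        /* padding: transfer lengths are padded up to a 512-byte
         * multiple, and only because the driver asked for it */
        blk_queue_dma_pad(q, 511);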
     
  • The meaning of rq->data_len was changed from the true data length to
    the length of the allocated buffer. That breaks SG_IO friends and
    bsg. This patch restores the meaning of rq->data_len to the true data
    length and adds rq->extra_len to store the extended length (due to
    the drain buffer and padding).

    This patch also removes the code to update bio in blk_rq_map_user
    introduced by the commit 40b01b9bbdf51ae543a04744283bf2d56c4a6afa.
    The commit adjusts bio according to memory alignment
    (queue_dma_alignment). However, memory alignment is NOT padding
    alignment. This adjustment also breaks SG_IO friends and bsg. Padding
    alignment needs to be fixed in a proper way (by a separate patch).

    Signed-off-by: FUJITA Tomonori
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
     

19 Feb, 2008

3 commits

  • With padding and draining moved into it, the block layer may now extend
    requests as directed by queue parameters, so a request has two sizes -
    the original request size and the extended size, which matches the
    size of the area pointed to by bios and later by sgs. The latter size
    is what lower layers are primarily interested in when allocating,
    filling up DMA tables and setting up the controller.

    Both padding and draining extend the data area to accommodate
    controller characteristics. As any controller which speaks SCSI can
    handle underflows, feeding a larger data area is safe.

    So, this patch makes the primary data length field, request->data_len,
    indicate the size of the full data area and adds a separate length
    field, request->raw_data_len, for the unmodified request size. The
    latter is used for reporting to higher layers (userland) and wherever
    the original request size should be fed to the controller or device.

    Signed-off-by: Tejun Heo
    Cc: James Bottomley
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • DMA start address and transfer size alignment for PC requests are
    achieved using bio_copy_user() instead of bio_map_user(). This works
    because bio_copy_user() always uses full pages and block DMA alignment
    isn't allowed to go over PAGE_SIZE.

    However, the implementation didn't update the last bio of the request
    to make this padding visible to lower layers. This patch makes
    blk_rq_map_user() extend the last bio so that it includes the
    padding area and the size of the area pointed to by the request is
    properly aligned.

    Signed-off-by: Tejun Heo
    Cc: James Bottomley
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • That way the interface is symmetric, and calling blk_rq_unmap_user()
    on the request won't oops.

    Signed-off-by: Jens Axboe

    Jens Axboe
     

30 Jan, 2008

1 commit