15 Mar, 2009

2 commits


26 Feb, 2009

1 commit


18 Feb, 2009

1 commit


03 Jan, 2009

3 commits

  • The commit 818827669d85b84241696ffef2de485db46b0b5e (block: make
    blk_rq_map_user take a NULL user-space buffer) extended
    blk_rq_map_user to accept a NULL user-space buffer with a READ
    command. It was necessary to convert sg to use the block layer mapping
    API.

    This patch extends blk_rq_map_user again for a WRITE command. It is
    necessary to convert the st and osst drivers to use the block layer
    mapping API.

    Signed-off-by: FUJITA Tomonori
    Acked-by: Jens Axboe
    Signed-off-by: James Bottomley

    FUJITA Tomonori
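
    A hedged sketch of what a caller such as st or osst might do after this
    change: build a WRITE request whose data pages come from rq_map_data and
    pass a NULL user-space buffer. Signatures and field names follow the
    2.6.29-era block layer; identifiers such as driver_pages, nr_pages, len
    and q are placeholders from the surrounding driver code.

        /* Illustrative only: driver-supplied pages, no user buffer. */
        struct rq_map_data map_data = {
                .pages      = driver_pages,   /* pages the driver allocated */
                .page_order = 0,
                .nr_entries = nr_pages,
        };
        struct request *rq;
        int ret;

        rq = blk_get_request(q, WRITE, GFP_KERNEL);
        if (!rq)
                return -ENOMEM;

        ret = blk_rq_map_user(q, rq, &map_data, NULL /* no user buffer */,
                              len, GFP_KERNEL);
        if (ret)
                goto out_put_request;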
     
    This fixes bio_copy_user_iov to properly handle partial mappings
    with struct rq_map_data (which only sg uses for now, but st and osst
    will shortly). It adds the offset member to struct rq_map_data and
    changes blk_rq_map_user to update it so that bio_copy_user_iov can add
    an appropriate page frame via bio_add_pc_page().

    Signed-off-by: FUJITA Tomonori
    Acked-by: Jens Axboe
    Signed-off-by: James Bottomley

    FUJITA Tomonori
     
  • This fixes bio_add_page misuse in bio_copy_user_iov with rq_map_data,
    which only sg uses now.

    rq_map_data carries page frames for bio_add_pc_page. bio_copy_user_iov
    uses bio_add_pc_page with a size larger than PAGE_SIZE, which is
    clearly wrong.

    Signed-off-by: FUJITA Tomonori
    Acked-by: Jens Axboe
    Signed-off-by: James Bottomley

    FUJITA Tomonori
     

29 Dec, 2008

5 commits

    We don't need to clear the memory used for adding bio_vec entries,
    since nobody should be looking at uninitialized members. Any valid
    use should be below bio->bi_vcnt, and members up to that count
    must be valid since they were added through bio_add_page().

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • When we go and allocate a bio for IO, we actually do two allocations.
    One for the bio itself, and one for the bi_io_vec that holds the
    actual pages we are interested in.

    This feature inlines a definable amount of io vecs inside the bio
    itself, so we eliminate the bio_vec array allocation for IOs up
    to a certain size. It defaults to 4 vecs, which is typically 16k
    of IO.

    Signed-off-by: Jens Axboe

    Jens Axboe
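
    Roughly, this means a bio now carries a small trailing bio_vec array that
    is used in place of a separately allocated bi_io_vec for small requests.
    A simplified sketch of the allocation decision; BIO_INLINE_VECS,
    bi_inline_vecs and bvec_alloc_bs are the 2.6.29-era names, reproduced
    here from memory rather than copied from the patch:

        #define BIO_INLINE_VECS 4        /* default: roughly 16k of IO */

        /* Inside bio allocation: small bios skip the second allocation. */
        if (nr_iovecs <= BIO_INLINE_VECS)
                bio->bi_io_vec = bio->bi_inline_vecs;
        else
                bio->bi_io_vec = bvec_alloc_bs(gfp_mask, nr_iovecs, &idx, bs);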
     
  • Instead of having a global bio slab cache, add a reference to one
    in each bio_set that is created. This allows for personalized slabs
    in each bio_set, so that they can have bios of different sizes.

    This means we can personalize the bios we return. File systems may
    want to embed the bio inside another structure, to avoid allocating
    more items (and stuffing them in ->bi_private) after they get a bio.
    Or we may want to embed a number of bio_vecs directly at the end
    of a bio, to avoid doing two allocations to return a bio. This is now
    possible.

    Signed-off-by: Jens Axboe

    Jens Axboe
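
    For example, with a front pad a driver could embed the bio inside its own
    per-IO structure and recover that structure in the completion path. A
    minimal sketch, assuming the bioset_create(pool_size, front_pad) form
    this series introduces; my_io, my_dev and the other my_* names are
    hypothetical:

        struct my_io {
                struct my_dev   *dev;
                void            *private;
                struct bio      bio;            /* must be the last member */
        };

        static struct bio_set *my_bio_set;

        static int __init my_init(void)
        {
                /* reserve room for struct my_io in front of every bio */
                my_bio_set = bioset_create(64, offsetof(struct my_io, bio));
                return my_bio_set ? 0 : -ENOMEM;
        }

        static void my_end_io(struct bio *bio, int error)
        {
                struct my_io *io = container_of(bio, struct my_io, bio);
                /* ... io->dev and io->private are right there ... */
        }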
     
  • In preparation for adding differently sized bios.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • We only very rarely need the mempool backing, so it makes sense to
    get rid of all but one of the mempools in a bio_set. So keep the
    largest bio_vec count mempool so we can always honor the largest
    allocation, and "upgrade" callers that fail.

    Signed-off-by: Jens Axboe

    Jens Axboe
     

26 Nov, 2008

2 commits

  • Port to the new tracepoints API: split DEFINE_TRACE() and DECLARE_TRACE()
    sites. Spread them out to the usage sites, as suggested by
    Mathieu Desnoyers.

    Signed-off-by: Ingo Molnar
    Acked-by: Mathieu Desnoyers

    Ingo Molnar
     
    This was a forward port of work done by Mathieu Desnoyers; I changed it to
    encode the 'what' parameter in the tracepoint name, so that one can register
    interest in specific events rather than in classes of events that then have
    to be filtered on the 'what' parameter.

    Signed-off-by: Arnaldo Carvalho de Melo
    Signed-off-by: Jens Axboe
    Signed-off-by: Ingo Molnar

    Arnaldo Carvalho de Melo
     

09 Oct, 2008

10 commits

    Since all bio_split calls refer to the same single bio_split_pool, the
    bio_split function can use bio_split_pool directly instead of taking a
    mempool_t parameter;

    the mempool_t parameter can then be removed from the bio_split parameter
    list, and bio_split_pool, now only referenced in fs/bio.c, can be marked
    static.

    Signed-off-by: Denis ChengRq
    Signed-off-by: Jens Axboe

    Denis ChengRq
     
  • Helper function to find the sector offset in a bio given bvec index
    and page offset.

    Signed-off-by: Martin K. Petersen
    Signed-off-by: Jens Axboe

    Martin K. Petersen
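
    The computation being added amounts to something like the following
    illustrative reimplementation (not the function as merged; it assumes
    512-byte sectors instead of asking the queue for its sector size):

        static sector_t example_bio_sector_offset(struct bio *bio,
                                                  unsigned short index,
                                                  unsigned int offset)
        {
                struct bio_vec *bv;
                unsigned int bytes = 0;
                int i;

                /* sum the lengths of the bvecs that precede 'index' */
                bio_for_each_segment(bv, bio, i) {
                        if (i == index)
                                break;
                        bytes += bv->bv_len;
                }

                return (bytes + offset) >> 9;   /* 512-byte sectors assumed */
        }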
     
    Not all callers need (or want!) the mempool backing guarantee; it
    essentially means that you can only use bio_alloc() for short-lived
    allocations and not for preallocating some bios at setup or init time.

    So add bio_kmalloc() which does the same thing as bio_alloc(), except
    it just uses kmalloc() as the backing instead of the bio mempools.

    Signed-off-by: Jens Axboe

    Jens Axboe
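
    A minimal usage sketch, assuming the bio_kmalloc(gfp_mask, nr_iovecs)
    form described above; a bio obtained this way can be held across setup
    or init without tying up the mempool reserve and is still freed with
    bio_put():

        struct bio *bio;

        bio = bio_kmalloc(GFP_KERNEL, 16);      /* room for 16 bio_vecs */
        if (!bio)
                return -ENOMEM;

        /* ... keep it around; fill it later with bio_add_page() ... */

        bio_put(bio);                           /* frees the kmalloc'ed bio */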
     
    This patch changes blk_rq_map_user to accept a NULL user-space buffer
    with a READ command if rq_map_data is not NULL. Thus a caller can pass
    page frames to blk_rq_map_user to just set up a request and bios with
    those page frames properly. bio_uncopy_user (called via blk_rq_unmap_user)
    doesn't copy data to user space for such a request.

    Signed-off-by: FUJITA Tomonori
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
     
  • bio_copy_kern and bio_copy_user are very similar. This converts
    bio_copy_kern to use bio_copy_user.

    Signed-off-by: FUJITA Tomonori
    Cc: Jens Axboe
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
     
    This patch introduces struct rq_map_data to enable bio_copy_user_iov()
    to use reserved pages.

    Currently, bio_copy_user_iov allocates bounce pages but
    drivers/scsi/sg.c wants to allocate pages by itself and use
    them. struct rq_map_data can be used to pass allocated pages to
    bio_copy_user_iov.

    The current users of bio_copy_user_iov simply pass NULL (they don't
    want to use pre-allocated pages).

    Signed-off-by: FUJITA Tomonori
    Cc: Jens Axboe
    Cc: Douglas Gilbert
    Cc: Mike Christie
    Cc: James Bottomley
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
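
    A rough sketch of what the structure carries, combining this description
    with the offset and NULL-mapping additions from the January 2009 entries
    above; the field names follow the 2.6.29-era header but are reproduced
    from memory and should be checked against the tree:

        struct rq_map_data {
                struct page     **pages;        /* caller-allocated frames    */
                int             page_order;     /* allocation order per entry */
                int             nr_entries;     /* number of pages[] entries  */
                unsigned long   offset;         /* offset for partial mappings */
                int             null_mapped;    /* user buffer was NULL       */
        };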
     
  • Currently, blk_rq_map_user and blk_rq_map_user_iov always do
    GFP_KERNEL allocation.

    This adds a gfp_mask argument to blk_rq_map_user and blk_rq_map_user_iov
    so sg can use it (sg always does GFP_ATOMIC allocation).

    Signed-off-by: FUJITA Tomonori
    Signed-off-by: Douglas Gilbert
    Cc: Mike Christie
    Cc: James Bottomley
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
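
    An sg-style caller running in atomic context could then do something
    along these lines (a sketch; the argument order follows the 2.6.29-era
    prototypes, and q, rq, user_buf and len come from the caller):

        struct sg_iovec iov = {
                .iov_base = user_buf,
                .iov_len  = len,
        };
        int ret;

        ret = blk_rq_map_user_iov(q, rq, NULL /* no rq_map_data */,
                                  &iov, 1, len, GFP_ATOMIC);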
     
  • This patch adds support for controlling the IO completion CPU of
    either all requests on a queue, or on a per-request basis. We export
    a sysfs variable (rq_affinity) which, if set, migrates completions
    of requests to the CPU that originally submitted them. A bio helper
    (bio_set_completion_cpu()) is also added, so that queuers can ask
    for completion on that specific CPU.

    In testing, this has been shown to cut the system time by as much
    as 20-40% on synthetic workloads where CPU affinity is desired.

    This requires a little help from the architecture, so it'll only
    work as designed for archs that are using the new generic smp
    helper infrastructure.

    Signed-off-by: Jens Axboe

    Jens Axboe
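
    Queue-wide behaviour is controlled through the rq_affinity attribute
    under the queue's sysfs directory. Per-bio control would look roughly
    like the sketch below, which assumes the helper named above simply
    records the desired completion CPU in the bio:

        /* Ask for this bio to be completed on the CPU submitting it. */
        bio_set_completion_cpu(bio, get_cpu());
        put_cpu();
        submit_bio(rw, bio);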
     
  • Remove hw_segments field from struct bio and struct request. Without virtual
    merge accounting they have no purpose.

    Signed-off-by: Mikulas Patocka
    Signed-off-by: Jens Axboe

    Mikulas Patocka
     
  • Remove virtual merge accounting.

    Signed-off-by: Mikulas Patocka
    Signed-off-by: Jens Axboe

    Mikulas Patocka
     

27 Aug, 2008

2 commits

  • The commit c5dec1c3034f1ae3503efbf641ff3b0273b64797 introduced
    __bio_copy_iov() to add bounce support to blk_rq_map_user_iov.

    __bio_copy_iov() uses bio->bv_len to copy data for READ commands after
    the completion but it doesn't work with a request that partially
    completed. SCSI always completes a PC request as a whole, but it seems
    some don't.

    Signed-off-by: FUJITA Tomonori
    Cc: stable@kernel.org
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
     
  • The commit 68154e90c9d1492d570671ae181d9a8f8530da55 introduced
    bio_copy_kern() to add bounce support to blk_rq_map_kern.

    bio_copy_kern() uses bio->bv_len to copy data for READ commands after
    the completion but it doesn't work with a request that partially
    completed. SCSI always completes a PC request as a whole, but it seems
    some don't.

    This patch fixes bio_copy_kern to handle the above case. As
    bio_copy_user does, bio_copy_kern uses struct bio_map_data to store
    struct bio_vec.

    Signed-off-by: FUJITA Tomonori
    Reported-by: Nix
    Tested-by: Nix
    Cc: stable@kernel.org
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
     

06 Aug, 2008

1 commit


27 Jul, 2008

1 commit

  • Use get_user_pages_fast in the common/generic block and fs direct IO paths.

    Signed-off-by: Nick Piggin
    Cc: Dave Kleikamp
    Cc: Andy Whitcroft
    Cc: Ingo Molnar
    Cc: Thomas Gleixner
    Cc: Andi Kleen
    Cc: Dave Kleikamp
    Cc: Badari Pulavarty
    Cc: Zach Brown
    Cc: Jens Axboe
    Reviewed-by: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
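
    The conversion is largely mechanical: a down_read(&mm->mmap_sem) /
    get_user_pages() / up_read() sequence becomes a single call. A sketch of
    the pattern, with the surrounding bio bookkeeping omitted; uaddr, len,
    rw and pages come from the mapping code:

        unsigned long start = uaddr >> PAGE_SHIFT;
        unsigned long end = (uaddr + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
        int nr_pages = end - start;
        int ret;

        /* write == 1 when the device will write into these pages (a READ) */
        ret = get_user_pages_fast(uaddr & PAGE_MASK, nr_pages,
                                  rw == READ, pages);
        if (ret < nr_pages)
                goto put_partial_pages;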
     

03 Jul, 2008

3 commits

  • When devices are stacked, one device's merge_bvec_fn may need to perform
    the mapping and then call one or more functions for its underlying devices.

    The following bio fields are used:
    bio->bi_sector
    bio->bi_bdev
    bio->bi_size
    bio->bi_rw using bio_data_dir()

    This patch creates a new struct bvec_merge_data holding a copy of those
    fields to avoid having to change them directly in the struct bio when
    going down the stack only to have to change them back again on the way
    back up. (And then when the bio gets mapped for real, the whole
    exercise gets repeated, but that's a problem for another day...)

    Signed-off-by: Alasdair G Kergon
    Cc: Neil Brown
    Cc: Milan Broz
    Signed-off-by: Jens Axboe

    Alasdair G Kergon
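
    A stacking driver's merge_bvec_fn can then work on the copy rather than
    on the bio itself. An illustrative sketch for a hypothetical linear
    stacking target; my_target, lower_bdev and start_sector are made-up
    names, and the struct fields match the list above:

        static int my_merge_bvec(struct request_queue *q,
                                 struct bvec_merge_data *bvm,
                                 struct bio_vec *biovec)
        {
                struct my_target *t = q->queuedata;
                struct request_queue *lower = bdev_get_queue(t->lower_bdev);
                struct bvec_merge_data remapped = *bvm;

                /* remap onto the underlying device, touching no bio */
                remapped.bi_bdev = t->lower_bdev;
                remapped.bi_sector = bvm->bi_sector + t->start_sector;

                if (!lower->merge_bvec_fn)
                        return biovec->bv_len;  /* no restriction below us */

                return lower->merge_bvec_fn(lower, &remapped, biovec);
        }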
     
  • Some block devices support verifying the integrity of requests by way
    of checksums or other protection information that is submitted along
    with the I/O.

    This patch implements support for generating and verifying integrity
    metadata, as well as correctly merging, splitting and cloning bios and
    requests that have this extra information attached.

    See Documentation/block/data-integrity.txt for more information.

    Signed-off-by: Martin K. Petersen
    Signed-off-by: Jens Axboe

    Martin K. Petersen
     
  • Move struct bio_set and biovec_slab definitions to bio.h so they can
    be used outside of bio.c.

    Signed-off-by: Martin K. Petersen
    Reviewed-by: Jeff Moyer
    Signed-off-by: Jens Axboe

    Martin K. Petersen
     

08 May, 2008

1 commit


07 May, 2008

1 commit


29 Apr, 2008

1 commit

  • This patch adds bio_copy_kern similar to
    bio_copy_user. blk_rq_map_kern uses bio_copy_kern instead of
    bio_map_kern if necessary.

    bio_copy_kern uses temporary pages, and the bi_end_io callback frees
    these pages. bio_copy_kern saves the original kernel buffer in
    bio->bi_private, so it doesn't need something like struct bio_map_data
    to store information about the caller.

    Signed-off-by: FUJITA Tomonori
    Cc: Tejun Heo
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
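
    From a caller's point of view nothing changes: blk_rq_map_kern is used
    exactly as before, and the block layer now falls back to the copying
    path when the buffer cannot be mapped directly. A usage sketch; q, rq,
    buffer, bufflen and bdev come from the caller:

        /* The block layer picks bio_map_kern or bio_copy_kern internally. */
        ret = blk_rq_map_kern(q, rq, buffer, bufflen, GFP_KERNEL);
        if (ret)
                goto out_put_request;

        blk_execute_rq(q, bdev->bd_disk, rq, 0);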
     

21 Apr, 2008

1 commit

    This patch enables bio_copy_user to take a struct sg_iovec (renamed
    bio_copy_user_iov). bio_copy_user uses bio_copy_user_iov internally,
    just as bio_map_user uses bio_map_user_iov.

    The major changes are:

    - adds sg_iovec array to struct bio_map_data

    - adds __bio_copy_iov, which copies data between a bio and an
    sg_iovec; bio_copy_user_iov and bio_uncopy_user use it.

    Signed-off-by: FUJITA Tomonori
    Cc: Tejun Heo
    Cc: Mike Christie
    Cc: James Bottomley
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
     

18 Mar, 2008

1 commit

    Outside users like asmlib use the mapping functions. API-wise, the
    export is definitely sane. It's better to keep this export
    than to require external users to open-code this functionality instead.

    Signed-off-by: Jens Axboe

    Jens Axboe
     

19 Feb, 2008

1 commit

  • Commit b2e895dbd80c420bfc0937c3729b4afe073b3848 #if 0'ed this code stating:

    [PATCH] revert blockdev direct io back to 2.6.19 version

    Andrew Vasquez is reporting as-iosched oopses and a 65% throughput
    slowdown due to the recent special-casing of direct-io against
    blockdevs. We don't know why either of these things are occurring.

    The patch minimally reverts us back to the 2.6.19 code for a 2.6.20
    release.

    It has since been dead code, and unless someone wants to revive it now
    it's time to remove it.

    This patch also makes bio_release_pages() static again and removes the
    ki_bio_count member from struct kiocb, reverting changes that had been
    done for this dead code.

    Signed-off-by: Adrian Bunk
    Signed-off-by: Jens Axboe

    Adrian Bunk
     

28 Jan, 2008

1 commit


16 Oct, 2007

2 commits