09 May, 2013

1 commit

  • Pull block core updates from Jens Axboe:

    - The major bit is Kent's prep work for immutable bio vecs.

    - Stable-candidate fix for a scheduling-while-atomic bug in the queue
    bypass operation.

    - Fix for a hang when merging discard bios pushes rq->datalen past its
    32-bit unsigned limit.

    - Tejun's changes to convert the writeback thread pool to the generic
    workqueue mechanism.

    - Runtime PM framework; SCSI patches exist on top of these in James'
    tree.

    - A few random fixes.

    * 'for-3.10/core' of git://git.kernel.dk/linux-block: (40 commits)
    relay: move remove_buf_file inside relay_close_buf
    partitions/efi.c: replace useless kzalloc's by kmalloc's
    fs/block_dev.c: fix iov_shorten() criteria in blkdev_aio_read()
    block: fix max discard sectors limit
    blkcg: fix "scheduling while atomic" in blk_queue_bypass_start
    Documentation: cfq-iosched: update documentation help for cfq tunables
    writeback: expose the bdi_wq workqueue
    writeback: replace custom worker pool implementation with unbound workqueue
    writeback: remove unused bdi_pending_list
    aoe: Fix unitialized var usage
    bio-integrity: Add explicit field for owner of bip_buf
    block: Add an explicit bio flag for bios that own their bvec
    block: Add bio_alloc_pages()
    block: Convert some code to bio_for_each_segment_all()
    block: Add bio_for_each_segment_all()
    bounce: Refactor __blk_queue_bounce to not use bi_io_vec
    raid1: use bio_copy_data()
    pktcdvd: Use bio_reset() in disabled code to kill bi_idx usage
    pktcdvd: use bio_copy_data()
    block: Add bio_copy_data()
    ...

    Linus Torvalds
     

08 May, 2013

1 commit

  • Faster kernel compiles by way of fewer unnecessary includes.

    [akpm@linux-foundation.org: fix fallout]
    [akpm@linux-foundation.org: fix build]
    Signed-off-by: Kent Overstreet
    Cc: Zach Brown
    Cc: Felipe Balbi
    Cc: Greg Kroah-Hartman
    Cc: Mark Fasheh
    Cc: Joel Becker
    Cc: Rusty Russell
    Cc: Jens Axboe
    Cc: Asai Thambi S P
    Cc: Selvan Mani
    Cc: Sam Bradshaw
    Cc: Jeff Moyer
    Cc: Al Viro
    Cc: Benjamin LaHaise
    Reviewed-by: "Theodore Ts'o"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kent Overstreet
     

19 Apr, 2013

1 commit

  • This reverts commit 3a366e614d0837d9fc23f78cdb1a1186ebc3387f.

    Wanlong Gao reports that it causes a kernel panic on his machine several
    minutes after boot. Reverting it removes the panic.

    Jens says:
    "It's not quite clear why that is yet, so I think we should just revert
    the commit for 3.9 final (which I'm assuming is pretty close).

    The wifi is crap at the LSF hotel, so sending this email instead of
    queueing up a revert and pull request."

    Reported-by: Wanlong Gao
    Requested-by: Jens Axboe
    Cc: Tejun Heo
    Cc: Steven Rostedt
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

24 Mar, 2013

10 commits

  • This is for the new bio splitting code. When we split a bio, if the
    split occurred on a bvec boundary we reuse the bvec for the new bio. But
    that means bio_free() can't free it, hence the explicit flag.

    Signed-off-by: Kent Overstreet
    CC: Jens Axboe
    Acked-by: Tejun Heo

    Kent Overstreet
     
  • More utility code to replace stuff that's getting open coded.

    Signed-off-by: Kent Overstreet
    CC: Jens Axboe
    CC: NeilBrown

    Kent Overstreet
     
  • More prep work for immutable bvecs:

    A few places in the code were either open coding it or using the wrong
    version; fix them.

    After we introduce the bvec iter, it'll no longer be possible to modify
    the biovec through bio_for_each_segment_all(): it doesn't increment a
    pointer to the current bvec; instead, you pass in a struct bio_vec (not a
    pointer), which is updated with what the current biovec would be (taking
    into account bi_bvec_done and bi_size).

    So because of that it's more worthwhile to be consistent about
    bio_for_each_segment()/bio_for_each_segment_all() usage.

    Signed-off-by: Kent Overstreet
    CC: Jens Axboe
    CC: NeilBrown
    CC: Alasdair Kergon
    CC: dm-devel@redhat.com
    CC: Alexander Viro

    Kent Overstreet
     
  • __bio_for_each_segment() iterates bvecs from the specified index
    instead of bio->bv_idx. Currently, the only usage is to walk all the
    bvecs after the bio has been advanced by specifying 0 index.

    For immutable bvecs, we need to split these apart;
    bio_for_each_segment() is going to have a different implementation.
    This will also help document the intent of code that's using it -
    bio_for_each_segment_all() is only legal to use for code that owns the
    bio.

    Signed-off-by: Kent Overstreet
    CC: Jens Axboe
    CC: Neil Brown
    CC: Boaz Harrosh

    Kent Overstreet
     
  • This gets open coded quite a bit and it's tricky to get right, so make a
    generic version and convert some existing users over to it instead.

    Signed-off-by: Kent Overstreet
    CC: Jens Axboe

    Kent Overstreet
     
  • Random cleanup - this code was duplicated and it's not really specific
    to md.

    Also added the ability to return the actual error code.

    Signed-off-by: Kent Overstreet
    CC: Jens Axboe
    CC: NeilBrown
    Acked-by: Tejun Heo

    Kent Overstreet
     
  • In the current code bio_split() won't be seeing partially completed bios
    so this doesn't change any behaviour, but this makes the code a bit
    clearer as to what bio_split() actually requires.

    The immediate purpose of the patch is removing unnecessary bi_idx
    references, but the end goal is to allow partially completed bios to be
    submitted, which along with immutable biovecs enables efficient bio
    splitting.

    Some of the callers were (double) checking that bios could be split, so
    update their checks too.

    Signed-off-by: Kent Overstreet
    CC: Jens Axboe
    CC: Lars Ellenberg
    CC: Neil Brown
    CC: Martin K. Petersen

    Kent Overstreet
     
  • This is prep work for immutable bio vecs; we first want to centralize
    where bvecs are modified.

    Next two patches convert some existing code to use this function.

    Signed-off-by: Kent Overstreet
    CC: Jens Axboe

    Kent Overstreet
     
  • This adds a pointer to the bvec array to struct bio_integrity_payload,
    instead of the bvecs always being inline; then the bvecs are allocated
    with bvec_alloc_bs().

    Changed bvec_alloc_bs() and bvec_free_bs() to take a pointer to a
    mempool instead of the bioset, so that bio integrity can use a different
    mempool for its bvecs, and thus avoid a potential deadlock.

    This is eventually for immutable bio vecs - immutable bvecs aren't
    useful if we still have to copy them, hence the need for the pointer.
    Less code is always nice too, though.

    Also, bio_integrity_alloc() was using fs_bio_set if no bio_set was
    specified. This was wrong - using the bio_set doesn't protect us from
    memory allocation failures, because we just used kmalloc for the
    bio_integrity_payload. But it does introduce the possibility of
    deadlock, if for some reason we weren't supposed to be using fs_bio_set.

    Signed-off-by: Kent Overstreet
    CC: Jens Axboe
    CC: Martin K. Petersen

    Kent Overstreet
     
  • Previously, if we ever try to allocate more than once from the same bio
    set while running under generic_make_request() (i.e. a stacking block
    driver), we risk deadlock.

    This is because of the code in generic_make_request() that converts
    recursion to iteration; any bios we submit won't actually be submitted
    (so they can complete and eventually be freed) until after we return -
    this means if we allocate a second bio, we're blocking the first one
    from ever being freed.

    Thus if enough threads call into a stacking block driver at the same
    time with bios that need multiple splits, and the bio_set's reserve gets
    used up, we deadlock.

    This can be worked around in the driver code - we could check if we're
    running under generic_make_request(), then mask out __GFP_WAIT when we
    go to allocate a bio, and if the allocation fails punt to workqueue and
    retry the allocation.

    But this is tricky and not a generic solution. This patch solves it for
    all users by inverting the previously described technique. We allocate a
    rescuer workqueue for each bio_set, and then in the allocation code if
    there are bios on current->bio_list we would be blocking, we punt them
    to the rescuer workqueue to be submitted.

    This guarantees forward progress for bio allocations under
    generic_make_request() provided each bio is submitted before allocating
    the next, and provided the bios are freed after they complete.

    Note that this doesn't do anything for allocation from other mempools.
    Instead of allocating per bio data structures from a mempool, code
    should use bio_set's front_pad.

    Tested by forcing the rescue codepath to be taken (by disabling the
    first GFP_NOWAIT attempt), then running it with bcache (which does a lot
    of arbitrary bio splitting) and verifying that the rescuer was being
    invoked.

    Signed-off-by: Kent Overstreet
    CC: Jens Axboe
    Acked-by: Tejun Heo
    Reviewed-by: Muthukumar Ratty

    Kent Overstreet
     

14 Jan, 2013

1 commit

  • bio completion didn't trigger the block_bio_complete TP; only dm was
    explicitly triggering the TP on IO completion. This makes the
    block_bio_complete TP useless for tracers which want to know about
    bios, and all other bio-based drivers skip generating blktrace
    completion events.

    This patch makes all bio completions via bio_endio() generate
    block_bio_complete TP.

    * Explicit trace_block_bio_complete() invocation removed from dm and
    the trace point is unexported.

    * @rq dropped from trace_block_bio_complete(). bios may fly around
    w/o a queue associated. Verifying and accessing the associated queue
    belongs to TP probes.

    * blktrace now gets both request and bio completions. Make it ignore
    bio completions if request completion path is happening.

    This makes all bio based drivers generate blktrace completion events
    properly and makes the block_bio_complete TP actually useful.

    v2: With this change, block_bio_complete TP could be invoked on sg
    commands which have bio's with %NULL bi_bdev. Update TP
    assignment code to check whether bio->bi_bdev is %NULL before
    dereferencing.

    Signed-off-by: Tejun Heo
    Original-patch-by: Namhyung Kim
    Cc: Tejun Heo
    Cc: Steven Rostedt
    Cc: Alasdair Kergon
    Cc: dm-devel@redhat.com
    Cc: Neil Brown
    Signed-off-by: Jens Axboe

    Tejun Heo
     

23 Oct, 2012

1 commit


28 Sep, 2012

1 commit


20 Sep, 2012

1 commit

  • The WRITE SAME command supported on some SCSI devices allows the same
    block to be efficiently replicated throughout a block range. Only a
    single logical block is transferred from the host and the storage device
    writes the same data to all blocks described by the I/O.

    This patch implements support for WRITE SAME in the block layer. The
    blkdev_issue_write_same() function can be used by filesystems and block
    drivers to replicate a buffer across a block range. This can be used to
    efficiently initialize software RAID devices, etc.

    Signed-off-by: Martin K. Petersen
    Acked-by: Mike Snitzer
    Signed-off-by: Jens Axboe

    Martin K. Petersen
     

09 Sep, 2012

6 commits

  • Previously, there was bio_clone() but it only allocated from the fs bio
    set; as a result various users were open coding it and using
    __bio_clone().

    This changes bio_clone() to become bio_clone_bioset(), and then we add
    bio_clone() and bio_clone_kmalloc() as wrappers around it, making use of
    the functionality the last patch added.

    This will also help in a later patch changing how bio cloning works.

    Signed-off-by: Kent Overstreet
    CC: Jens Axboe
    CC: NeilBrown
    CC: Alasdair Kergon
    CC: Boaz Harrosh
    CC: Jeff Garzik
    Acked-by: Jeff Garzik
    Signed-off-by: Jens Axboe

    Kent Overstreet
     
  • Previously, bio_kmalloc() and bio_alloc_bioset() behaved slightly
    differently because of some almost-duplicated code; this fixes some of
    that.

    The important change is that previously bio_kmalloc() always set
    bi_io_vec = bi_inline_vecs, even if nr_iovecs == 0 - unlike
    bio_alloc_bioset(). This would cause bio_has_data() to return true; I
    don't know if this resulted in any actual bugs but it was certainly
    wrong.

    bio_kmalloc() and bio_alloc_bioset() also have different arbitrary
    limits on nr_iovecs - 1024 (UIO_MAXIOV) for bio_kmalloc(), 256
    (BIO_MAX_PAGES) for bio_alloc_bioset(). This patch doesn't fix that, but
    at least they're enforced closer together and hopefully they will be
    fixed in a later patch.

    This'll also help with some future cleanups - there are a fair number of
    functions that allocate bios (e.g. bio_clone()), and now they don't have
    to be duplicated for bio_alloc(), bio_alloc_bioset(), and bio_kmalloc().

    Signed-off-by: Kent Overstreet
    CC: Jens Axboe
    v7: Re-add dropped comments, improve patch description
    Signed-off-by: Jens Axboe

    Kent Overstreet
     
  • Now that we've got generic code for freeing bios allocated from bio
    pools, this isn't needed anymore.

    This patch also makes bio_free() static, since without bi_destructor
    there should be no need for it to be called anywhere else.

    bio_free() is now only called from bio_put, so we can refactor those a
    bit - move some code from bio_put() to bio_free() and kill the redundant
    bio->bi_next = NULL.

    v5: Switch to BIO_KMALLOC_POOL ((void *)~0), per Boaz
    v6: BIO_KMALLOC_POOL now NULL, drop bio_free's EXPORT_SYMBOL
    v7: No #define BIO_KMALLOC_POOL anymore

    Signed-off-by: Kent Overstreet
    CC: Jens Axboe
    Signed-off-by: Jens Axboe

    Kent Overstreet
     
  • Reusing bios is something that's been highly frowned upon in the past,
    but driver code keeps doing it anyways. If it's going to happen anyways,
    we should provide a generic method.

    This'll help with getting rid of bi_destructor - drivers/block/pktcdvd.c
    was open coding it, by doing a bio_init() and resetting bi_destructor.

    This required reordering struct bio, but the block layer is not yet
    nearly fast enough for any cacheline effects to matter here.

    v5: Add a define BIO_RESET_BITS, to be very explicit about what parts of
    bio->bi_flags are saved.
    v6: Further commenting verbosity, per Tejun
    v9: Add a function comment

    Signed-off-by: Kent Overstreet
    CC: Jens Axboe
    Acked-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Kent Overstreet
     
  • Now that bios keep track of where they were allocated from,
    bio_integrity_alloc_bioset() becomes redundant.

    Remove bio_integrity_alloc_bioset() and drop bio_set argument from the
    related functions and make them use bio->bi_pool.

    Signed-off-by: Kent Overstreet
    CC: Jens Axboe
    CC: Martin K. Petersen
    Acked-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Kent Overstreet
     
  • With the old code, when you allocate a bio from a bio pool you have to
    implement your own destructor that knows how to find the bio pool the
    bio was originally allocated from.

    This adds a new field to struct bio (bi_pool) and changes
    bio_alloc_bioset() to use it. This makes various bio destructors
    unnecessary, so they're then deleted.

    v6: Explain the temporary if statement in bio_put

    Signed-off-by: Kent Overstreet
    CC: Jens Axboe
    CC: NeilBrown
    CC: Alasdair Kergon
    CC: Nicholas Bellinger
    CC: Lars Ellenberg
    Acked-by: Tejun Heo
    Acked-by: Nicholas Bellinger
    Signed-off-by: Jens Axboe

    Kent Overstreet
     

26 Aug, 2012

1 commit

  • Pull block-related fixes from Jens Axboe:

    - Improvements to the buffered and direct write IO plugging from
    Fengguang.

    - Abstract out the mapping of a bio in a request, and use that to
    provide a blk_bio_map_sg() helper. Useful for mapping just a bio
    instead of a full request.

    - Regression fix from Hugh, fixing up a patch that went into the
    previous release cycle (and marked stable, too) attempting to prevent
    a loop in __getblk_slow().

    - Updates to discard requests, fixing up the sizing and how we align
    them. Also a change to disallow merging of discard requests, since
    that doesn't really work properly yet.

    - A few drbd fixes.

    - Documentation updates.

    * 'for-linus' of git://git.kernel.dk/linux-block:
    block: replace __getblk_slow misfix by grow_dev_page fix
    drbd: Write all pages of the bitmap after an online resize
    drbd: Finish requests that completed while IO was frozen
    drbd: fix drbd wire compatibility for empty flushes
    Documentation: update tunable options in block/cfq-iosched.txt
    Documentation: update tunable options in block/cfq-iosched.txt
    Documentation: update missing index files in block/00-INDEX
    block: move down direct IO plugging
    block: remove plugging at buffered write time
    block: disable discard request merge temporarily
    bio: Fix potential memory leak in bio_find_or_create_slab()
    block: Don't use static to define "void *p" in show_partition_start()
    block: Add blk_bio_map_sg() helper
    block: Introduce __blk_segment_map_sg() helper
    fs/block-dev.c:fix performance regression in O_DIRECT writes to md block devices
    block: split discard into aligned requests
    block: reorganize rounding of max_discard_sectors

    Linus Torvalds
     

09 Aug, 2012

1 commit


04 Aug, 2012

1 commit


30 May, 2012

1 commit

  • Merge block/IO core bits from Jens Axboe:
    "This is a bit bigger on the core side than usual, but that is purely
    because we decided to hold off on parts of Tejun's submission on 3.4
    to give it a bit more time to simmer. As a consequence, it's seen a
    long cycle in for-next.

    It contains:

    - Bug fix from Dan, wrong locking type.
    - Relax splice gifting restriction from Eric.
    - A ton of updates from Tejun, primarily for blkcg. This improves
    the code a lot, making the API nicer and cleaner, and also includes
    fixes for how we handle and tie policies and re-activate on
    switches. The changes also include generic bug fixes.
    - A simple fix from Vivek, along with a fix for doing proper delayed
    allocation of the blkcg stats."

    Fix up annoying conflict just due to different merge resolution in
    Documentation/feature-removal-schedule.txt

    * 'for-3.5/core' of git://git.kernel.dk/linux-block: (92 commits)
    blkcg: tg_stats_alloc_lock is an irq lock
    vmsplice: relax alignement requirements for SPLICE_F_GIFT
    blkcg: use radix tree to index blkgs from blkcg
    blkcg: fix blkcg->css ref leak in __blkg_lookup_create()
    block: fix elvpriv allocation failure handling
    block: collapse blk_alloc_request() into get_request()
    blkcg: collapse blkcg_policy_ops into blkcg_policy
    blkcg: embed struct blkg_policy_data in policy specific data
    blkcg: mass rename of blkcg API
    blkcg: style cleanups for blk-cgroup.h
    blkcg: remove blkio_group->path[]
    blkcg: blkg_rwstat_read() was missing inline
    blkcg: shoot down blkgs if all policies are deactivated
    blkcg: drop stuff unused after per-queue policy activation update
    blkcg: implement per-queue policy activation
    blkcg: add request_queue->root_blkg
    blkcg: make request_queue bypassing on allocation
    blkcg: make sure blkg_lookup() returns %NULL if @q is bypassing
    blkcg: make blkg_conf_prep() take @pol and return with queue lock held
    blkcg: remove static policy ID enums
    ...

    Linus Torvalds
     

11 May, 2012

1 commit

  • The value from bio_get_nr_vecs() is passed down via bio_alloc() to
    bvec_alloc_bs(), which fails the bio allocation if
    nr_iovecs > BIO_MAX_PAGES. For the underlying caller this causes an
    unexpected bio allocation failure.
    Limiting to queue_max_segments() is not sufficient, as max_segments
    also might be very large.

    bvec_alloc_bs(gfp_mask, nr_iovecs, ) => NULL when nr_iovecs > BIO_MAX_PAGES
    bio_alloc_bioset(gfp_mask, nr_iovecs, ...)
    bio_alloc(GFP_NOIO, nvecs)
    xfs_alloc_ioend_bio()

    Signed-off-by: Bernd Schubert
    Cc: stable@kernel.org
    Signed-off-by: Jens Axboe

    Bernd Schubert
     

02 Apr, 2012

1 commit

  • cgroup/for-3.5 contains the following changes which blk-cgroup needs
    to proceed with the on-going cleanup.

    * Dynamic addition and removal of cftypes to make config/stat file
    handling modular for policies.

    * cgroup removal update to not wait for css references to drain to fix
    blkcg removal hang caused by cfq caching cfqgs.

    Pull in cgroup/for-3.5 into block/for-3.5/core. This causes the
    following conflicts in block/blk-cgroup.c.

    * 761b3ef50e "cgroup: remove cgroup_subsys argument from callbacks"
    conflicts with blkiocg_pre_destroy() addition and blkiocg_attach()
    removal. Resolved by removing @subsys from all subsys methods.

    * 676f7c8f84 "cgroup: relocate cftype and cgroup_subsys definitions in
    controllers" conflicts with ->pre_destroy() and ->attach() updates
    and removal of modular config. Resolved by dropping forward
    declarations of the methods and applying updates to the relocated
    blkio_subsys.

    * 4baf6e3325 "cgroup: convert all non-memcg controllers to the new
    cftype interface" builds upon the previous item. Resolved by adding
    ->base_cftypes to the relocated blkio_subsys.

    Signed-off-by: Tejun Heo

    Tejun Heo
     

07 Mar, 2012

1 commit

  • IO scheduling and cgroup are tied to the issuing task via the
    io_context and cgroup of %current. Unfortunately, there are cases where
    IOs need to be routed via a different task, which causes scheduling and
    cgroup limit enforcement to be applied completely incorrectly.

    For example, all bios delayed by blk-throttle end up being issued by a
    delayed work item and get assigned the io_context of whichever worker
    task happens to serve the work item, and are dumped into the default
    block cgroup. This is doubly confusing, as bios which aren't delayed
    end up in the correct cgroup, and it makes using blk-throttle and cfq
    propio together impossible.

    Any code which punts IO issuing to another task is affected, and such
    code is getting more and more common (e.g. btrfs). As both io_context
    and cgroup are firmly tied to the task, including via userland-visible
    APIs to manipulate them, it makes a lot of sense to match up tasks to
    bios.

    This patch implements bio_associate_current() which associates the
    specified bio with %current. The bio will record the associated ioc
    and blkcg at that point and block layer will use the recorded ones
    regardless of which task actually ends up issuing the bio. bio
    release puts the associated ioc and blkcg.

    It grabs and remembers ioc and blkcg instead of the task itself
    because task may already be dead by the time the bio is issued making
    ioc and blkcg inaccessible and those are all block layer cares about.

    elevator_set_req_fn() is updated so that the bio for which elvdata is
    being allocated is available to the elevator.

    This doesn't update block cgroup policies yet. Further patches will
    implement the support.

    -v2: #ifdef CONFIG_BLK_CGROUP added around bio->bi_ioc dereference in
    rq_ioc() to fix build breakage.

    Signed-off-by: Tejun Heo
    Cc: Vivek Goyal
    Cc: Kent Overstreet
    Signed-off-by: Jens Axboe

    Tejun Heo
     

29 Feb, 2012

1 commit


09 Feb, 2012

1 commit

  • There were two places bio_get_nr_vecs() could overflow:

    First, it did a left shift to convert from sectors to bytes immediately
    before dividing by PAGE_SIZE. If PAGE_SIZE were ever less than 512 a
    great many things would break, so dividing by PAGE_SIZE >> 9 is safe
    and generates smaller code too.

    The nastier overflow was in the DIV_ROUND_UP() (that's what the code was
    effectively doing, anyways). If n + d overflowed, the whole thing would
    return 0 which breaks things rather effectively.

    bio_get_nr_vecs() doesn't claim to give an exact value anyway, so the
    DIV_ROUND_UP() is silly; we could do a straight divide, except that if
    a device's queue_max_sectors were less than PAGE_SIZE we'd return 0. So
    we just add 1; this should always be safe. Things will break badly if
    bio_get_nr_vecs() returns > BIO_MAX_PAGES (bio_alloc() will suddenly
    start failing), but it's queue_max_segments that must guard against
    this; if queue_max_sectors is what's preventing it, things are going to
    explode on architectures with a different PAGE_SIZE.

    Signed-off-by: Kent Overstreet
    Cc: Tejun Heo
    Acked-by: Valdis Kletnieks
    Signed-off-by: Andrew Morton
    Signed-off-by: Jens Axboe

    Kent Overstreet
     

16 Nov, 2011

1 commit

  • This is just a cleanup patch to silence a static checker warning.

    The problem is that we cap "nr_iovecs" so it can't be larger than
    "UIO_MAXIOV" but we don't check for negative values. It turns out this is
    prevented at other layers, but logically it doesn't make sense to have
    negative nr_iovecs so making it unsigned is nicer.

    Signed-off-by: Dan Carpenter
    Signed-off-by: Andrew Morton
    Signed-off-by: Jens Axboe

    Dan Carpenter
     

24 Oct, 2011

1 commit

  • bio originally had the functionality to set the completion cpu, but
    it is broken.

    Christoph said that "This code is unused, and from all the
    discussions lately pretty obviously broken. The only thing keeping
    it serves is creating more confusion and possibly more bugs."

    And Jens replied with "We can kill bio_set_completion_cpu(). I'm fine
    with leaving cpu control to the request based drivers, they are the
    only ones that can toggle the setting anyway".

    So this patch removes all the machinery for controlling the completion
    cpu from a bio.

    Cc: Shaohua Li
    Cc: Christoph Hellwig
    Signed-off-by: Tao Ma
    Signed-off-by: Jens Axboe

    Tao Ma
     

27 May, 2011

1 commit


31 Mar, 2011

1 commit


25 Mar, 2011

1 commit

  • * 'for-2.6.39/core' of git://git.kernel.dk/linux-2.6-block: (65 commits)
    Documentation/iostats.txt: bit-size reference etc.
    cfq-iosched: removing unnecessary think time checking
    cfq-iosched: Don't clear queue stats when preempt.
    blk-throttle: Reset group slice when limits are changed
    blk-cgroup: Only give unaccounted_time under debug
    cfq-iosched: Don't set active queue in preempt
    block: fix non-atomic access to genhd inflight structures
    block: attempt to merge with existing requests on plug flush
    block: NULL dereference on error path in __blkdev_get()
    cfq-iosched: Don't update group weights when on service tree
    fs: assign sb->s_bdi to default_backing_dev_info if the bdi is going away
    block: Require subsystems to explicitly allocate bio_set integrity mempool
    jbd2: finish conversion from WRITE_SYNC_PLUG to WRITE_SYNC and explicit plugging
    jbd: finish conversion from WRITE_SYNC_PLUG to WRITE_SYNC and explicit plugging
    fs: make fsync_buffers_list() plug
    mm: make generic_writepages() use plugging
    blk-cgroup: Add unaccounted time to timeslice_used.
    block: fixup plugging stubs for !CONFIG_BLOCK
    block: remove obsolete comments for blkdev_issue_zeroout.
    blktrace: Use rq->cmd_flags directly in blk_add_trace_rq.
    ...

    Fix up conflicts in fs/{aio.c,super.c}

    Linus Torvalds
     

23 Mar, 2011

1 commit

  • printk()s without a priority level default to KERN_WARNING. To reduce
    noise at KERN_WARNING, this patch sets the priority level appropriately
    for unleveled printk()s. This should be useful to folks who look at
    dmesg warnings closely.

    Signed-off-by: Mandeep Singh Baines
    Cc: Jens Axboe
    Cc: Al Viro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Mandeep Singh Baines
     

17 Mar, 2011

1 commit

  • MD and DM create a new bio_set for every metadevice. Each bio_set has an
    integrity mempool attached regardless of whether the metadevice is
    capable of passing integrity metadata. This is a waste of memory.

    Instead we defer the allocation decision to MD and DM since we know at
    metadevice creation time whether integrity passthrough is needed or not.

    Automatic integrity mempool allocation can then be removed from
    bioset_create() and we make an explicit integrity allocation for the
    fs_bio_set.

    Signed-off-by: Martin K. Petersen
    Reported-by: Zdenek Kabelac
    Acked-by: Mike Snitzer
    Signed-off-by: Jens Axboe

    Martin K. Petersen
     

08 Mar, 2011

1 commit