12 Sep, 2013

1 commit

  • Read the block device partition table from the kernel command line.
    This is intended for embedded devices using fixed block storage
    (eMMC), which carry no MBR, saving storage space. The bootloader can
    then access data on the block device directly by absolute address,
    and users can easily change the partition layout.

    The code is modelled on the MTD command-line partition parser,
    "drivers/mtd/cmdlinepart.c". For details of the partition syntax,
    see "Documentation/block/cmdline-partition.txt".
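    For reference, the syntax described in
    "Documentation/block/cmdline-partition.txt" puts the whole layout on
    the kernel command line. A hypothetical eMMC layout (the device and
    partition names below are invented for illustration, not taken from
    the patch) could look like:

```
blkdevparts=mmcblk0:1G(boot),4G(rootfs),-(userdata)
```

    where each partition definition is <size>[@<offset>](name) and a
    size of "-" takes the remaining space on the device.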

    [akpm@linux-foundation.org: fix printk text]
    [yongjun_wei@trendmicro.com.cn: fix error return code in parse_parts()]
    Signed-off-by: Cai Zhiyong
    Cc: Karel Zak
    Cc: "Wanglin (Albert)"
    Cc: Marius Groeger
    Cc: David Woodhouse
    Cc: Jens Axboe
    Cc: Brian Norris
    Cc: Artem Bityutskiy
    Signed-off-by: Wei Yongjun
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds


25 Jul, 2013

1 commit


28 May, 2013

1 commit


09 Apr, 2013

1 commit


10 Jan, 2013

1 commit

  • With the previous two patches, all cfqg scheduling decisions are based
    on vfraction and ready for hierarchy support. The only thing which
    keeps the behavior flat is cfqg_flat_parent(), which makes the
    vfraction calculation treat all non-root cfqgs as children of the
    root cfqg.

    Replace it with cfqg_parent() which returns the real parent. This
    enables full blkcg hierarchy support for cfq-iosched. For example,
    consider the following hierarchy.

            root
           /    \
        A:500  B:250
        /    \
    AA:500  AB:1000

    For simplicity, let's say all the leaf nodes have active tasks and are
    on the service tree. For each leaf node, the vfraction would be

    AA: (500 / 1500) * (500 / 750) =~ 0.2222
    AB: (1000 / 1500) * (500 / 750) =~ 0.4444
    B: (250 / 750) =~ 0.3333

    and vdisktime will be distributed accordingly. For more detail,
    please refer to Documentation/block/cfq-iosched.txt.
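    The arithmetic above can be reproduced with a short Python sketch (a
    toy calculation for illustration only; the helper vfraction() is
    made up here and is not the kernel code):

```python
from fractions import Fraction

def vfraction(*levels):
    """Walk from a group up to the root: at each level, multiply by
    own_weight / total_active_weight_among_siblings."""
    f = Fraction(1)
    for own_weight, level_total in levels:
        f *= Fraction(own_weight, level_total)
    return f

# The hierarchy from the commit message: A:500 and B:250 under root,
# AA:500 and AB:1000 under A.
aa = vfraction((500, 1500), (500, 750))   # 2/9  =~ 0.2222
ab = vfraction((1000, 1500), (500, 750))  # 4/9  =~ 0.4444
b  = vfraction((250, 750))                # 1/3  =~ 0.3333

# The active groups' fractions always sum to exactly 1.
assert aa + ab + b == 1
```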

    v2: cfq-iosched.txt updated to describe group scheduling as suggested
    by Vivek.

    v3: blkio-controller.txt updated.

    Signed-off-by: Tejun Heo
    Acked-by: Vivek Goyal


09 Sep, 2012

1 commit

  • Now that we've got generic code for freeing bios allocated from bio
    pools, this isn't needed anymore.

    This patch also makes bio_free() static, since without bi_destructor
    there should be no need for it to be called anywhere else.

    bio_free() is now only called from bio_put, so we can refactor those a
    bit - move some code from bio_put() to bio_free() and kill the redundant
    bio->bi_next = NULL.

    v5: Switch to BIO_KMALLOC_POOL ((void *)~0), per Boaz
    v6: BIO_KMALLOC_POOL now NULL, drop bio_free's EXPORT_SYMBOL
    v7: No #define BIO_KMALLOC_POOL anymore

    Signed-off-by: Kent Overstreet
    CC: Jens Axboe
    Signed-off-by: Jens Axboe


09 Aug, 2012

3 commits


27 Jun, 2012

1 commit

  • Currently, request_queue has one request_list to allocate requests
    from regardless of blkcg of the IO being issued. When the unified
    request pool is used up, cfq proportional IO limits become meaningless
    - whoever grabs the next request being freed wins the race regardless
    of the configured weights.

    This can be easily demonstrated by creating a blkio cgroup with a
    very low weight, putting a program which can issue a lot of random
    direct IOs in it, and running a sequential IO from a different
    cgroup. As soon as the request pool is used up, the sequential IO
    bandwidth crashes.

    This patch implements per-blkg request_list. Each blkg has its own
    request_list and any IO allocates its request from the matching blkg
    making blkcgs completely isolated in terms of request allocation.
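    The effect can be illustrated with a toy model (plain Python,
    nothing kernel-specific; the pool size, step count, and group names
    are invented, and this is not the kernel's allocation logic, just a
    sketch of the race):

```python
def simulate(per_group_pools, steps=100, pool_size=8):
    """Toy model of request allocation.  A 'greedy' group keeps as many
    IOs in flight as it can; a 'light' group wants one request per step.
    Returns how many requests the light group managed to allocate."""
    def pool_of(group):
        return group if per_group_pools else "shared"

    free = {pool_of(g): pool_size for g in ("greedy", "light")}
    in_flight = {"greedy": 0, "light": 0}
    served_light = 0

    for _ in range(steps):
        # The greedy group floods: it grabs every free request it sees.
        while free[pool_of("greedy")] > 0:
            free[pool_of("greedy")] -= 1
            in_flight["greedy"] += 1
        # The light group asks for a single request.
        if free[pool_of("light")] > 0:
            free[pool_of("light")] -= 1
            in_flight["light"] += 1
            served_light += 1
        # One IO per group completes; its request returns to the pool,
        # where (with a shared pool) the always-waiting greedy group
        # wins the race for it on the next step, whatever the weights.
        for g in ("greedy", "light"):
            if in_flight[g]:
                in_flight[g] -= 1
                free[pool_of(g)] += 1

    return served_light

print(simulate(per_group_pools=False))  # shared pool: light group starves
print(simulate(per_group_pools=True))   # per-group pools: full isolation
```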

    * Root blkcg uses the request_list embedded in each request_queue,
    which was renamed to @q->root_rl from @q->rq. While this makes blkcg
    rl handling a bit hairier, it enables avoiding most overhead for the
    root blkcg.

    * Queue fullness is properly per request_list but bdi isn't blkcg
    aware yet, so congestion state currently just follows the root
    blkcg. As writeback isn't aware of blkcg yet, this works okay for
    async congestion but readahead may get the wrong signals. It's
    better than blkcg completely collapsing with shared request_list but
    needs to be improved with future changes.

    * After this change, each block cgroup gets a full request pool making
    resource consumption of each cgroup higher. This makes allowing
    non-root users to create cgroups less desirable; however, note that
    allowing non-root users to directly manage cgroups is already
    severely broken regardless of this patch - each block cgroup
    consumes kernel memory and skews IO weight (IO weights are not
    hierarchical).

    v2: queue-sysfs.txt updated and patch description updated as
    suggested by Vivek.

    v3: blk_get_rl() wasn't checking error return from
    blkg_lookup_create() and may cause oops on lookup failure. Fix it
    by falling back to root_rl on blkg lookup failures. This problem
    was spotted by Rakesh Iyer.

    v4: Updated to accommodate 458f27a982 "block: Avoid missed wakeup in
    request waitqueue". blk_drain_queue() now wakes up waiters on all
    blkg->rl on the target queue.

    Signed-off-by: Tejun Heo
    Acked-by: Vivek Goyal
    Cc: Wu Fengguang
    Signed-off-by: Jens Axboe


05 Nov, 2011

1 commit


28 Sep, 2011

1 commit

  • There are numerous broken references to Documentation files (in other
    Documentation files, in comments, etc.). These broken references are
    caused by typos in the references, and by renames or removals of the
    Documentation files. Some broken references are simply odd.

    Fix these broken references, sometimes by dropping the irrelevant text
    they were part of.

    Signed-off-by: Paul Bolle
    Signed-off-by: Jiri Kosina


05 Aug, 2011

1 commit

  • There are always questions about why CFQ idles under various
    conditions. A recent one is Christoph asking, again, why idle on
    REQ_NOIDLE. His assertion is that XFS relies more and more on
    workqueues and he is concerned that CFQ idling on IO from every
    workqueue will impact XFS badly.

    So he suggested that I add some more documentation about CFQ idling;
    that can provide more clarity on the topic and also gives an
    opportunity to poke holes in the theory and lead to improvements.

    So here is my attempt at that. Any comments are welcome.

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe


24 Jul, 2011

1 commit

  • Some systems benefit from completions always being steered to the strict
    requester cpu rather than the looser "per-socket" steering that
    blk_cpu_to_group() attempts by default. This is because the first
    CPU in the group mask ends up being completely overloaded with work,
    while the others (including the original submitter) have power left
    to spare.

    Allow the strict mode to be set by writing '2' to the sysfs control
    file. This is identical to the scheme used for the nomerges file,
    where '2' is a more aggressive setting than just being turned on.

    echo 2 > /sys/block//queue/rq_affinity

    Cc: Christoph Hellwig
    Cc: Roland Dreier
    Tested-by: Dave Jiang
    Signed-off-by: Dan Williams
    Signed-off-by: Jens Axboe


31 Mar, 2011

1 commit


10 Mar, 2011

1 commit

  • Code has been converted over to the new explicit on-stack plugging,
    and delay users have been converted to use the new API for that.
    So let's kill off the old plugging along with aops->sync_page().

    Signed-off-by: Jens Axboe


11 Nov, 2010

1 commit


19 Oct, 2010

1 commit


10 Sep, 2010

1 commit


23 Aug, 2010

1 commit


05 Aug, 2010

1 commit


08 Mar, 2010

1 commit


22 Feb, 2010

1 commit


29 Jan, 2010

1 commit

  • Updated 'nomerges' tunable to accept a value of '2' - indicating that _no_
    merges at all are to be attempted (not even the simple one-hit cache).

    The following table illustrates the additional benefit - 5 minute runs of
    a random I/O load were applied to a dozen devices on a 16-way x86_64 system.

    nomerges   Throughput     %System       Improvement (tput / %sys)
    --------   ------------   -----------   -------------------------
       0       12.45 MB/sec   0.669365609
       1       12.50 MB/sec   0.641519199   0.40% / 2.71%
       2       12.52 MB/sec   0.639849750   0.56% / 2.96%

    Signed-off-by: Alan D. Brunelle
    Signed-off-by: Jens Axboe


03 Jan, 2010

1 commit


18 Dec, 2009

1 commit


01 Jul, 2009

1 commit


13 Jun, 2009

1 commit


02 Jun, 2009

1 commit


15 Apr, 2009

1 commit


26 Mar, 2009

1 commit


02 Feb, 2009

1 commit


30 Jan, 2009

2 commits

  • biodoc.txt mentions that elevator functions marked with * are
    mandatory, but no function is marked with *. Mark the three
    functions which should be implemented by any IO scheduler.

    Signed-off-by: Nikanth Karthikesan
    Signed-off-by: Jens Axboe

  • Move DMA-mapping.txt to Documentation/PCI/.

    DMA-mapping.txt was supposed to be moved from Documentation/ to
    Documentation/PCI/. The 00-INDEX files in those two directories
    were updated, along with a few other text files, but the file
    itself somehow escaped being moved, so move it and update more
    text files and source files with its new location.

    Signed-off-by: Randy Dunlap
    Acked-by: Greg Kroah-Hartman
    cc: Jesse Barnes
    Signed-off-by: Linus Torvalds


29 Dec, 2008

1 commit


17 Oct, 2008

2 commits


09 Oct, 2008

1 commit


03 Jul, 2008

1 commit


22 Apr, 2008

1 commit