24 Apr, 2009

1 commit

  • This simplifies the I/O stat accounting switching code and
    separates it completely from the I/O scheduler switching code.

    Requests are accounted according to the state of their request
    queue at the time of request allocation. There is no longer any
    need to flush the request queue when switching the I/O accounting
    state. (A sketch of this allocation-time snapshot follows below.)

    Signed-off-by: Jerome Marchand
    Signed-off-by: Jens Axboe

    Jerome Marchand
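
    A self-contained sketch of the rule described above: a request
    copies the queue's iostat setting when it is allocated, so
    toggling the setting later does not affect in-flight requests.
    The struct layouts and field names here are simplified mocks,
    not the real kernel definitions.

        #include <stdbool.h>
        #include <stdio.h>

        struct request_queue { bool iostat_enabled; };
        struct request { bool account_io; };

        /* Accounting state is fixed once, at allocation time. */
        static void rq_init(struct request *rq,
                            const struct request_queue *q)
        {
            rq->account_io = q->iostat_enabled;
        }

        int main(void)
        {
            struct request_queue q = { .iostat_enabled = true };
            struct request rq;

            rq_init(&rq, &q);
            q.iostat_enabled = false;  /* toggled after allocation */

            /* Still accounted: the allocation-time snapshot wins,
             * so no queue flush is needed on a toggle. */
            printf("request accounted: %d\n", rq.account_io);
            return 0;
        }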
     

09 Oct, 2008

2 commits

  • This patch adds support for controlling the I/O completion CPU of
    either all requests on a queue, or on a per-request basis. We
    export a sysfs variable (rq_affinity) which, if set, migrates
    completions of requests to the CPU that originally submitted them.
    A bio helper (bio_set_completion_cpu()) is also added, so that
    queuers can ask for completion on that specific CPU.

    In testing, this has been shown to cut system time by as much as
    20-40% on synthetic workloads where CPU affinity is desired.

    This requires a little help from the architecture, so it will only
    work as designed on architectures that use the new generic SMP
    helper infrastructure. (A usage sketch follows after this list.)

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • Implement {disk|part}_to_dev() and use them to access the generic
    device instead of dereferencing {disk|part}->dev directly. To make
    sure no user is left behind, rename the generic device fields to
    __dev.

    This is in preparation for unifying partition 0 handling with
    other partitions. (A sketch of the accessor pattern follows after
    this list.)

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
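
    The rq_affinity knob from the first commit above lives in sysfs;
    queue attributes sit under /sys/block/<dev>/queue/. A minimal
    userspace sketch of enabling it, with sda as an illustrative
    device and error handling kept minimal:

        #include <stdio.h>

        int main(void)
        {
            /* Substitute the block device of interest for sda. */
            FILE *f = fopen("/sys/block/sda/queue/rq_affinity", "w");

            if (!f) {
                perror("fopen");
                return 1;
            }
            fputs("1\n", f);  /* 1 = complete on the submitting CPU */
            fclose(f);
            return 0;
        }

    In-kernel submitters can instead use the bio_set_completion_cpu()
    helper named above to ask for completion on a specific CPU.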
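
    A minimal sketch of the accessor pattern from the second commit
    above: callers go through disk_to_dev()/part_to_dev() instead of
    touching the embedded device directly, and renaming the field to
    __dev makes any leftover direct dereference fail to compile. The
    structs here are simplified stand-ins for the kernel's gendisk
    and hd_struct, not the real definitions.

        #include <stdio.h>

        struct device { const char *name; };

        /* Simplified stand-ins; the real structs carry far more. */
        struct gendisk   { struct device __dev; };
        struct hd_struct { struct device __dev; };

        #define disk_to_dev(disk) (&(disk)->__dev)
        #define part_to_dev(part) (&(part)->__dev)

        int main(void)
        {
            struct gendisk disk = { .__dev = { .name = "sda" } };

            /* disk.dev no longer exists, so any old-style direct
             * access is caught by the compiler; accessors work: */
            printf("%s\n", disk_to_dev(&disk)->name);
            return 0;
        }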
     

29 Apr, 2008

1 commit

  • The block I/O + elevator + I/O scheduler code spends a lot of time
    trying to merge I/Os -- rightfully so under "normal" circumstances.
    However, if one knows that the incoming I/O stream is /very/ random
    in nature, those cycles are wasted.

    This patch adds a per-request_queue tunable that (when set)
    disables merge attempts (beyond the simple one-hit cache check),
    thus freeing up a non-trivial amount of CPU cycles. (A sketch of
    the sysfs usage follows below.)

    Signed-off-by: Alan D. Brunelle
    Signed-off-by: Jens Axboe

    Alan D. Brunelle
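
    The tunable described above is exposed through sysfs; in mainline
    it appears as /sys/block/<dev>/queue/nomerges (the path and name
    are not stated in the message above). A minimal userspace sketch
    of turning merging off, with sda as an illustrative device:

        #include <stdio.h>

        int main(void)
        {
            FILE *f = fopen("/sys/block/sda/queue/nomerges", "w");

            if (!f) {
                perror("fopen");
                return 1;
            }
            fputs("1\n", f);  /* nonzero disables merge attempts */
            fclose(f);
            return 0;
        }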
     

21 Apr, 2008

1 commit

  • blk_register_queue() returns -ENXIO when queue->request_fn is
    NULL, but some block drivers (for example, brd and loop) call
    blk_register_queue() via add_disk() with queue->request_fn ==
    NULL.

    Although no one checks the return value of blk_register_queue(),
    this patch makes it return 0 instead of -ENXIO when
    queue->request_fn is NULL.

    This patch also adds a warning when blk_register_queue() or
    blk_unregister_queue() is called with queue == NULL, rather than
    silently ignoring the invalid usage. (A mock of the resulting
    control flow follows below.)

    Signed-off-by: Akinobu Mita
    Cc: Jens Axboe
    Signed-off-by: Andrew Morton
    Signed-off-by: Jens Axboe

    Akinobu Mita
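
    A self-contained mock of the control flow described above; the
    real blk_register_queue() does sysfs and elevator registration
    work that is elided here, and the structs are simplified
    stand-ins.

        #include <stdio.h>
        #include <errno.h>

        struct request_queue {
            void (*request_fn)(struct request_queue *q);
        };

        struct gendisk { struct request_queue *queue; };

        static int blk_register_queue_mock(struct gendisk *disk)
        {
            struct request_queue *q = disk ? disk->queue : NULL;

            if (q == NULL) {
                /* Warn instead of silently ignoring bad usage. */
                fprintf(stderr, "blk_register_queue: NULL queue\n");
                return -ENXIO;
            }
            if (q->request_fn == NULL)
                return 0;  /* bio-based drivers (e.g. brd, loop) */

            /* ... sysfs/elevator registration would go here ... */
            return 0;
        }

        int main(void)
        {
            struct gendisk bio_based = { .queue =
                &(struct request_queue){ .request_fn = NULL } };

            /* Returns 0 now, not -ENXIO, for request_fn == NULL. */
            printf("%d\n", blk_register_queue_mock(&bio_based));
            return 0;
        }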
     

30 Jan, 2008

2 commits