14 Dec, 2011

1 commit

  • Let elevators set ->icq_size and ->icq_align in elevator_type, and have
    elv_register() and elv_unregister() create and destroy the kmem_cache for
    icq, respectively.

    * elv_register() can now return failure. All callers are updated.

    * icq caches are automatically named "ELVNAME_io_cq".

    * cfq_slab_setup/kill() are collapsed into cfq_init/exit().

    * While at it, minor indentation change for iosched_cfq.elevator_name
      for consistency.

    This will help move icq management to the block core. It doesn't
    introduce any functional change.
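
    A rough sketch of the resulting shape (the icq_cache/icq_cache_name
    fields and the exact snprintf format are assumptions for illustration;
    only ->icq_size and ->icq_align are named by this change):

        /* struct elevator_type gains something along these lines */
        size_t icq_size;                /* size of the elevator's io_cq  */
        size_t icq_align;               /* required alignment for it     */
        struct kmem_cache *icq_cache;   /* created by elv_register()     */

        /* and elv_register(), roughly */
        if (e->icq_size) {
                snprintf(e->icq_cache_name, sizeof(e->icq_cache_name),
                         "%s_io_cq", e->elevator_name);
                e->icq_cache = kmem_cache_create(e->icq_cache_name,
                                                 e->icq_size, e->icq_align,
                                                 0, NULL);
                if (!e->icq_cache)
                        return -ENOMEM; /* registration can now fail */
        }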

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     

03 Jun, 2011

1 commit

  • Hi, Jens,

    If you recall, I posted an RFC patch for this back in July of last year:
    http://lkml.org/lkml/2010/7/13/279

    The basic problem is that a process can issue a never-ending stream of
    async direct I/Os to the same sector on a device, thus starving out
    other I/O in the system (due to the way the alias handling works in both
    cfq and deadline). The solution I proposed back then was to start
    dispatching from the fifo after a certain number of aliases had been
    dispatched. Vivek asked why we had to treat aliases differently at all,
    and I never had a good answer. So, I put together a simple patch which
    allows aliases to be added to the rb tree (it adds them to the right,
    though that doesn't matter as the order isn't guaranteed anyway). I
    think this is the preferred solution, as it doesn't break up time slices
    in CFQ or batches in deadline. I've tested it, and it does solve the
    starvation issue. Let me know what you think.
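
    A sketch of the insertion this describes, assuming the usual rbtree
    walk over the elevator's sort list (variable names are illustrative):

        struct rb_node **p = &root->rb_node, *parent = NULL;
        struct request *__rq;

        while (*p) {
                parent = *p;
                __rq = rb_entry(parent, struct request, rb_node);

                if (blk_rq_pos(rq) < blk_rq_pos(__rq))
                        p = &(*p)->rb_left;
                else
                        /* equal sectors (aliases) simply go to the right */
                        p = &(*p)->rb_right;
        }

        rb_link_node(&rq->rb_node, parent, p);
        rb_insert_color(&rq->rb_node, root);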

    Cheers,
    Jeff

    Signed-off-by: Jeff Moyer
    Signed-off-by: Jens Axboe

    Jeff Moyer
     

10 Mar, 2011

1 commit

  • Code has been converted over to the new explicit on-stack plugging,
    and delay users have been converted to use the new API for that.
    So let's kill off the old plugging along with aops->sync_page().
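
    For reference, the on-stack plugging referred to here is the
    blk_start_plug()/blk_finish_plug() pair; a minimal usage sketch:

        struct blk_plug plug;

        blk_start_plug(&plug);          /* queue requests on the stack   */
        /* ... submit a batch of bios ... */
        blk_finish_plug(&plug);         /* flush them out to the device  */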

    Signed-off-by: Jens Axboe

    Jens Axboe
     

11 May, 2009

1 commit

  • With recent cleanups, there is no place where a low level driver
    directly manipulates request fields. This means that the 'hard'
    request fields always equal the !hard fields. Convert all
    rq->sectors, nr_sectors and current_nr_sectors references to
    accessors.

    While at it, drop the superfluous blk_rq_pos() < 0 test in swim.c.

    [ Impact: use pos and nr_sectors accessors ]
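
    The conversion is mechanical; roughly (field and accessor names shown
    here for illustration):

        rq->sector              ->  blk_rq_pos(rq)
        rq->nr_sectors          ->  blk_rq_sectors(rq)
        rq->current_nr_sectors  ->  blk_rq_cur_sectors(rq)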

    Signed-off-by: Tejun Heo
    Acked-by: Geert Uytterhoeven
    Tested-by: Grant Likely
    Acked-by: Grant Likely
    Tested-by: Adrian McMenamin
    Acked-by: Adrian McMenamin
    Acked-by: Mike Miller
    Cc: James Bottomley
    Cc: Bartlomiej Zolnierkiewicz
    Cc: Borislav Petkov
    Cc: Sergei Shtylyov
    Cc: Eric Moore
    Cc: Alan Stern
    Cc: FUJITA Tomonori
    Cc: Pete Zaitcev
    Cc: Stephen Rothwell
    Cc: Paul Clements
    Cc: Tim Waugh
    Cc: Jeff Garzik
    Cc: Jeremy Fitzhardinge
    Cc: Alex Dubov
    Cc: David Woodhouse
    Cc: Martin Schwidefsky
    Cc: Dario Ballabio
    Cc: David S. Miller
    Cc: Rusty Russell
    Cc: unsik Kim
    Cc: Laurent Vivier
    Signed-off-by: Jens Axboe

    Tejun Heo
     

29 Dec, 2008

1 commit


09 Oct, 2008

2 commits

  • * convert goto to simpler while loop;
    * use rq_end_sector() instead of computing manually;
    * fix incorrect comments;
    * remove spurious whitespace;
    * convert rq_rb_root macro to an inline function.
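
    The macro-to-inline conversion presumably ends up along these lines
    (a sketch; the exact name and body are assumed from context):

        static inline struct rb_root *
        deadline_rb_root(struct deadline_data *dd, struct request *rq)
        {
                return &dd->sort_list[rq_data_dir(rq)];
        }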

    Signed-off-by: Aaron Carroll
    Signed-off-by: Jens Axboe

    Aaron Carroll
     
  • Deadline currently only batches sector-contiguous requests, so except
    for a few circumstances (e.g. requests in a single direction), it is
    essentially first-come, first-served. This is bad for throughput, so
    change it to CSCAN, which means requests in a batch do not need to be
    sequential and are issued in increasing sector order.

    Signed-off-by: Aaron Carroll
    Signed-off-by: Jens Axboe

    Aaron Carroll
     

18 Dec, 2007

1 commit


02 Nov, 2007

3 commits

  • After switching data directions, deadline always starts the next batch
    from the lowest-sector request. This gives excessive deadline expiries
    and large latency and throughput disparity between high- and low-sector
    requests; an order of magnitude in some tests.

    This patch changes the batching behaviour so new batches start from the
    request whose expiry is earliest.
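
    In effect, a new batch now begins at the head of the fifo (the request
    with the earliest expiry) rather than at the lowest-sector request; a
    sketch of what the selection becomes (rq_entry_fifo() and fifo_list are
    the scheduler's fifo accessors; the exact code is assumed):

        /* start the new batch from the oldest pending request */
        rq = rq_entry_fifo(dd->fifo_list[data_dir].next);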

    Signed-off-by: Aaron Carroll
    Signed-off-by: Jens Axboe

    Aaron Carroll
     
  • The deadline I/O scheduler does not reset the batch count when starting
    a new batch at a higher-sectored request. This means the second and
    subsequent batch in the same data direction will never exceed a single
    request in size whenever higher-sectored requests are pending.

    This patch gives new batches in the same data direction as old ones
    their full quota of requests by resetting the batch count.
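
    The fix itself is small; conceptually, whenever a new batch is started
    in the same direction:

        dd->batching = 0;       /* new batch gets a full quota again */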

    Signed-off-by: Aaron Carroll
    Signed-off-by: Jens Axboe

    Aaron Carroll
     
  • Factor finding the next request in sector-sorted order into
    a function deadline_latter_request.
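
    The helper likely looks roughly like this (rb_entry_rq() being the
    elevator's rb_node-to-request accessor):

        static inline struct request *
        deadline_latter_request(struct request *rq)
        {
                struct rb_node *node = rb_next(&rq->rb_node);

                if (node)
                        return rb_entry_rq(node);

                return NULL;
        }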

    Signed-off-by: Aaron Carroll
    Signed-off-by: Jens Axboe

    Aaron Carroll
     

24 Jul, 2007

1 commit

  • Some of the code has been gradually transitioned to using the proper
    struct request_queue, but there's lots left. So do a full sweep of
    the kernel and get rid of this typedef and replace its uses with
    the proper type.
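
    The conversion itself is mechanical; for a hypothetical driver hook:

        /* before */
        static void foo_request_fn(request_queue_t *q);

        /* after */
        static void foo_request_fn(struct request_queue *q);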

    Signed-off-by: Jens Axboe

    Jens Axboe
     

18 Jul, 2007

1 commit

  • kmalloc_node() and kmem_cache_alloc_node() were not available in a zeroing
    variant in the past. But with __GFP_ZERO it is now possible to zero
    while allocating.

    Use __GFP_ZERO to remove the explicit clearing of memory via memset
    wherever we can.
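
    As an illustration (the surrounding allocation is hypothetical):

        /* before: allocate, then clear explicitly */
        dd = kmalloc_node(sizeof(*dd), GFP_KERNEL, node);
        if (!dd)
                return NULL;
        memset(dd, 0, sizeof(*dd));

        /* after: let the allocator zero the memory */
        dd = kmalloc_node(sizeof(*dd), GFP_KERNEL | __GFP_ZERO, node);
        if (!dd)
                return NULL;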

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

01 Dec, 2006

1 commit


01 Oct, 2006

4 commits


01 Jul, 2006

1 commit


23 Jun, 2006

2 commits


21 Jun, 2006

1 commit

  • * git://git.infradead.org/~dwmw2/rbtree-2.6:
    [RBTREE] Switch rb_colour() et al to en_US spelling of 'color' for consistency
    Update UML kernel/physmem.c to use rb_parent() accessor macro
    [RBTREE] Update hrtimers to use rb_parent() accessor macro.
    [RBTREE] Add explicit alignment to sizeof(long) for struct rb_node.
    [RBTREE] Merge colour and parent fields of struct rb_node.
    [RBTREE] Remove dead code in rb_erase()
    [RBTREE] Update JFFS2 to use rb_parent() accessor macro.
    [RBTREE] Update eventpoll.c to use rb_parent() accessor macro.
    [RBTREE] Update key.c to use rb_parent() accessor macro.
    [RBTREE] Update ext3 to use rb_parent() accessor macro.
    [RBTREE] Change rbtree off-tree marking in I/O schedulers.
    [RBTREE] Add accessor macros for colour and parent fields of rb_node

    Linus Torvalds
     

09 Jun, 2006

1 commit

  • There's a race between shutting down one io scheduler and firing up the
    next, in which a new io could enter and cause the io scheduler to be
    invoked with bad or NULL data.

    To fix this, we need to maintain the queue lock for a bit longer.
    Unfortunately we cannot do that, since the elevator init needs to run
    without the lock held. This isn't easily fixable without also
    changing the mempool API. So split the initialization into two parts:
    an alloc-init operation and an attach operation. Then we can
    preallocate the io scheduler and related structures, and run the attach
    inside the lock after we detach the old one.

    This patch has survived 30 minutes of 1 second io scheduler switching
    with a very busy io load.
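
    The resulting switch sequence looks roughly like this (function names
    here are illustrative, not necessarily the actual ones):

        /* 1. alloc-init: no queue lock held, may sleep / use mempools */
        new = elevator_alloc_and_init(q, e);
        if (!new)
                return error;

        /* 2. attach: swap schedulers with the queue lock held */
        spin_lock_irq(q->queue_lock);
        old = q->elevator;
        /* ... quiesce and detach the old scheduler ... */
        elevator_attach(q, new);
        spin_unlock_irq(q->queue_lock);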

    Signed-off-by: Jens Axboe
    Signed-off-by: Linus Torvalds

    Jens Axboe
     

21 Apr, 2006

1 commit


19 Mar, 2006

2 commits


06 Jan, 2006

1 commit

  • The patch below marks various read-only variables in block/* as const
    so that gcc can optimize their use; e.g. gcc will now replace uses with
    the value directly and will even remove the memory used for them.
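
    For example, tunables that are never written after build time become
    plain constants (the values shown are the deadline defaults; treat them
    as illustrative here):

        static const int read_expire = HZ / 2;   /* reads expire after 500 ms */
        static const int write_expire = 5 * HZ;  /* writes after 5 seconds    */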

    Signed-off-by: Arjan van de Ven
    Signed-off-by: Jens Axboe

    Arjan van de Ven
     

19 Nov, 2005

1 commit


04 Nov, 2005

1 commit