01 Feb, 2008

2 commits

  • It blindly copies everything in the io_context, including the lock.
    That doesn't work so well for either lock ordering or lockdep.

    There seems to be zero point in swapping io contexts on a
    request-to-request merge, so the best course of action is simply to
    remove it.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • Since it's acquired from irq context, all locking must be of the
    irq safe variant. Most are already inside the queue lock (which
    already disables interrupts), but the io scheduler rmmod path
    always has irqs enabled and the put_io_context() path may legally
    be called with irqs enabled (even if it usually isn't). So fix up
    those two.

    Signed-off-by: Jens Axboe

    Jens Axboe
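    The rule above can be sketched in userspace C. This is a model only:
    the flag-based "interrupt state" is an assumption for illustration,
    and the real kernel primitives are spin_lock_irqsave() and
    spin_unlock_irqrestore().

    ```c
    /* Model of the irq-safe locking pattern: a lock that can be taken
     * from interrupt context must be acquired with the _irqsave variant
     * so interrupts stay disabled for the critical section and the
     * previous irq state is restored afterwards. */
    #include <assert.h>
    #include <stdbool.h>

    static bool irqs_enabled = true;   /* modeled CPU interrupt state */
    static bool lock_held = false;     /* modeled spinlock */

    static unsigned long model_lock_irqsave(void)
    {
        unsigned long flags = irqs_enabled;  /* save current irq state */
        irqs_enabled = false;                /* disable "interrupts" */
        lock_held = true;                    /* take the "spinlock" */
        return flags;
    }

    static void model_unlock_irqrestore(unsigned long flags)
    {
        lock_held = false;
        irqs_enabled = flags;                /* restore saved state */
    }

    int main(void)
    {
        /* a put_io_context()-style path may run with irqs enabled,
         * so it must use the irqsave variant */
        unsigned long flags = model_lock_irqsave();
        assert(!irqs_enabled && lock_held);
        model_unlock_irqrestore(flags);
        assert(irqs_enabled && !lock_held);  /* state restored */
        return 0;
    }
    ```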
     

30 Jan, 2008

1 commit


28 Jan, 2008

1 commit


18 Dec, 2007

3 commits

  • elv_register() always returns 0, and there isn't anything it does where
    it should return an error (the only error condition is so grave that
    it's handled with a BUG_ON).

    Signed-off-by: Adrian Bunk
    Signed-off-by: Jens Axboe

    Adrian Bunk
     
  • New write batches currently start from where the last one completed.
    We have no idea where the head is after switching batches, so this
    makes little sense. Instead, start the next batch from the request
    with the earliest deadline in the hope that we avoid a deadline
    expiry later on.

    Signed-off-by: Aaron Carroll
    Acked-by: Nick Piggin
    Signed-off-by: Jens Axboe

    Aaron Carroll
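    The batching change above boils down to a minimum-deadline scan.
    A minimal sketch, assuming an illustrative request layout (not the
    actual deadline-iosched structures):

    ```c
    /* Start the next write batch from the request with the earliest
     * expiry, rather than from wherever the previous batch ended. */
    #include <assert.h>
    #include <stddef.h>

    struct req { unsigned long deadline; };

    /* index of the request with the earliest deadline */
    static size_t earliest_deadline(const struct req *r, size_t n)
    {
        size_t best = 0;
        for (size_t i = 1; i < n; i++)
            if (r[i].deadline < r[best].deadline)
                best = i;
        return best;
    }

    int main(void)
    {
        struct req q[] = { {500}, {120}, {310} };
        /* the new batch starts at the request most at risk of expiring */
        assert(earliest_deadline(q, 3) == 1);
        return 0;
    }
    ```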
     
  • Two comments refer to deadlines applying to reads only. This is
    not the case.

    Signed-off-by: Aaron Carroll
    Acked-by: Nick Piggin
    Signed-off-by: Jens Axboe

    Aaron Carroll
     

24 Jul, 2007

1 commit

  • Some of the code has been gradually transitioned to using the proper
    struct request_queue, but there's lots left. So do a full sweep of
    the kernel, get rid of this typedef, and replace its uses with
    the proper type.

    Signed-off-by: Jens Axboe

    Jens Axboe
     

18 Jul, 2007

1 commit

  • kmalloc_node() and kmem_cache_alloc_node() were not previously
    available in a zeroing variant. But with __GFP_ZERO it is now
    possible to zero while allocating.

    Use __GFP_ZERO to remove the explicit clearing of memory via memset
    wherever we can.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
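    The userspace analogue of this change is replacing malloc()+memset()
    with calloc(), which stands in here for kmalloc(..., gfp | __GFP_ZERO):

    ```c
    /* Before: allocate, then zero in a second step.
     * After: ask the allocator for zeroed memory directly. */
    #include <assert.h>
    #include <stdlib.h>
    #include <string.h>

    struct ctx { int a; long b; char name[16]; };

    int main(void)
    {
        /* before: two steps */
        struct ctx *old = malloc(sizeof(*old));
        memset(old, 0, sizeof(*old));

        /* after: one step, the allocator zeroes for us */
        struct ctx *zeroed = calloc(1, sizeof(*zeroed));

        /* both results are fully zeroed */
        assert(memcmp(old, zeroed, sizeof(*old)) == 0);
        free(old);
        free(zeroed);
        return 0;
    }
    ```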
     

10 May, 2007

1 commit

  • Switch the kblockd flushing from a global flush to a more specific
    flush_work().

    (akpm: bypassed maintainers, sorry. There are other patches which depend on
    this)

    Cc: "Maciej W. Rozycki"
    Cc: David Howells
    Cc: Jens Axboe
    Cc: Nick Piggin
    Cc: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     

09 May, 2007

1 commit


13 Dec, 2006

1 commit


05 Dec, 2006

1 commit


01 Dec, 2006

1 commit


22 Nov, 2006

1 commit

  • Pass the work_struct pointer to the work function rather than context data.
    The work function can use container_of() to work out the data.

    For the cases where the container of the work_struct may go away the moment the
    pending bit is cleared, it is made possible to defer the release of the
    structure by deferring the clearing of the pending bit.

    To make this work, an extra flag is introduced into the management side of the
    work_struct. This governs auto-release of the structure upon execution.

    Ordinarily, the work queue executor would release the work_struct for further
    scheduling or deallocation by clearing the pending bit prior to jumping to the
    work function. This means that, unless the driver itself guarantees
    that the work_struct won't go away, the work function may not access
    anything else in the work_struct or its container lest they be
    deallocated. This is a problem if the auxiliary data is taken away
    (as done by the last patch).

    However, if the pending bit is *not* cleared before jumping to the work
    function, then the work function *may* access the work_struct and its container
    with no problems. But then the work function must itself release the
    work_struct by calling work_release().

    In most cases, automatic release is fine, so this is the default. Special
    initiators exist for the non-auto-release case (ending in _NAR).

    Signed-Off-By: David Howells

    David Howells
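    The container_of() recovery step described above looks roughly like
    this. The struct names are illustrative, not the kernel's; only the
    pattern (embedded work item, pointer arithmetic back to the
    container) matches the commit.

    ```c
    /* The work function receives only the work_struct pointer and
     * recovers its containing object via container_of(). */
    #include <assert.h>
    #include <stddef.h>

    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    struct work_struct { void (*func)(struct work_struct *); };

    struct my_device {
        int id;
        struct work_struct work;   /* embedded work item */
    };

    static int seen_id;

    static void my_work_fn(struct work_struct *w)
    {
        /* recover the containing my_device from the work pointer */
        struct my_device *dev = container_of(w, struct my_device, work);
        seen_id = dev->id;
    }

    int main(void)
    {
        struct my_device dev = { .id = 42, .work = { .func = my_work_fn } };
        dev.work.func(&dev.work);   /* what the workqueue executor does */
        assert(seen_id == 42);
        return 0;
    }
    ```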
     

01 Oct, 2006

13 commits


01 Jul, 2006

1 commit


27 Jun, 2006

1 commit

  • acquired (aquired)
    contiguous (contigious)
    successful (succesful, succesfull)
    surprise (suprise)
    whether (weather)
    some other misspellings

    Signed-off-by: Andreas Mohr
    Signed-off-by: Adrian Bunk

    Andreas Mohr
     

23 Jun, 2006

3 commits

  • They all duplicate macros to check for empty root and/or node, and
    clearing a node. So put those in rbtree.h.

    Signed-off-by: Jens Axboe

    Jens Axboe
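    The kind of helpers the commit consolidates into rbtree.h can be
    sketched like this. The definitions are simplified assumptions for
    illustration; the kernel's RB_EMPTY_ROOT, RB_EMPTY_NODE, and
    RB_CLEAR_NODE operate on its real struct rb_node.

    ```c
    /* Shared macros for "is the root empty?", "is this node off-tree?"
     * and "mark this node off-tree", instead of each io scheduler
     * duplicating them. */
    #include <assert.h>
    #include <stddef.h>

    struct rb_node { struct rb_node *parent, *left, *right; };
    struct rb_root { struct rb_node *rb_node; };

    /* a node pointing at itself as parent is marked off-tree */
    #define RB_EMPTY_ROOT(root)  ((root)->rb_node == NULL)
    #define RB_EMPTY_NODE(node)  ((node)->parent == (node))
    #define RB_CLEAR_NODE(node)  ((node)->parent = (node))

    int main(void)
    {
        struct rb_root root = { NULL };
        struct rb_node n;

        assert(RB_EMPTY_ROOT(&root));  /* nothing inserted yet */
        RB_CLEAR_NODE(&n);
        assert(RB_EMPTY_NODE(&n));     /* node marked as off-tree */
        return 0;
    }
    ```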
     
  • A process flag to indicate whether we are doing sync io is incredibly
    ugly. It also causes performance problems when one does a lot of async
    io and then proceeds to sync it. Part of the io will go out as async,
    and the other part as sync. This causes a disconnect between the
    previously submitted io and the synced io. For io schedulers such as CFQ,
    this will cause lost merges and suboptimal scheduling behaviour.

    Remove PF_SYNCWRITE completely from the fsync/msync paths, and let
    the O_DIRECT path just directly indicate that the writes are sync
    by using WRITE_SYNC instead.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • Use hlist instead of list_head for the request hashtable in
    deadline-iosched and as-iosched. This also removes the flag needed
    to track whether a request is hashed.

    Signed-off-by: Akinobu Mita
    Signed-off-by: Jens Axboe

    block/as-iosched.c | 45 +++++++++++++++++++--------------------------
    block/deadline-iosched.c | 39 ++++++++++++++++-----------------------
    2 files changed, 35 insertions(+), 49 deletions(-)

    Akinobu Mita
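    Why the hashed/unhashed flag becomes unnecessary can be seen from a
    simplified userspace version of the hlist idea: the node's own pprev
    pointer says whether it is on a hash chain. This is a sketch, not
    the kernel's hlist implementation.

    ```c
    /* hlist heads need only one pointer, and an unhashed node is
     * detectable from its own state (pprev == NULL). */
    #include <assert.h>
    #include <stddef.h>

    struct hlist_node { struct hlist_node *next, **pprev; };
    struct hlist_head { struct hlist_node *first; };

    static void hlist_add_head(struct hlist_node *n, struct hlist_head *h)
    {
        n->next = h->first;
        if (h->first)
            h->first->pprev = &n->next;
        h->first = n;
        n->pprev = &h->first;
    }

    static void hlist_del_init(struct hlist_node *n)
    {
        *n->pprev = n->next;
        if (n->next)
            n->next->pprev = n->pprev;
        n->next = NULL;
        n->pprev = NULL;          /* marks the node as unhashed */
    }

    /* no separate "hashed" flag needed: the node itself says so */
    static int on_hash(const struct hlist_node *n) { return n->pprev != NULL; }

    int main(void)
    {
        struct hlist_head head = { NULL };
        struct hlist_node req = { NULL, NULL };

        assert(!on_hash(&req));
        hlist_add_head(&req, &head);
        assert(on_hash(&req) && head.first == &req);
        hlist_del_init(&req);
        assert(!on_hash(&req) && head.first == NULL);
        return 0;
    }
    ```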
     

21 Jun, 2006

1 commit

  • * git://git.infradead.org/~dwmw2/rbtree-2.6:
    [RBTREE] Switch rb_colour() et al to en_US spelling of 'color' for consistency
    Update UML kernel/physmem.c to use rb_parent() accessor macro
    [RBTREE] Update hrtimers to use rb_parent() accessor macro.
    [RBTREE] Add explicit alignment to sizeof(long) for struct rb_node.
    [RBTREE] Merge colour and parent fields of struct rb_node.
    [RBTREE] Remove dead code in rb_erase()
    [RBTREE] Update JFFS2 to use rb_parent() accessor macro.
    [RBTREE] Update eventpoll.c to use rb_parent() accessor macro.
    [RBTREE] Update key.c to use rb_parent() accessor macro.
    [RBTREE] Update ext3 to use rb_parent() accessor macro.
    [RBTREE] Change rbtree off-tree marking in I/O schedulers.
    [RBTREE] Add accessor macros for colour and parent fields of rb_node

    Linus Torvalds
     

09 Jun, 2006

1 commit

  • There's a race between shutting down one io scheduler and firing up the
    next, in which a new io could enter and cause the io scheduler to be
    invoked with bad or NULL data.

    To fix this, we need to hold the queue lock for a bit longer.
    Unfortunately we cannot do that, since the elevator init must run
    without the lock held. This isn't easily fixable without also
    changing the mempool API. So split the initialization into two
    parts: an alloc-init operation and an attach operation. Then we can
    preallocate the io scheduler and related structures, and run the
    attach inside the lock after we detach the old one.

    This patch has survived 30 minutes of switching io schedulers once
    per second under a very busy io load.

    Signed-off-by: Jens Axboe
    Signed-off-by: Linus Torvalds

    Jens Axboe
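    The alloc-init/attach split can be sketched as follows. The "queue
    lock" is modeled as a flag, and the names are illustrative, not the
    block layer's; only the two-phase shape matches the commit.

    ```c
    /* Phase 1 (may allocate, and thus sleep) runs without the lock;
     * phase 2 is a quick pointer swap under the lock, so no io can
     * slip in between detaching the old scheduler and attaching the
     * preallocated new one. */
    #include <assert.h>
    #include <stdlib.h>

    struct elevator { int ready; };
    struct queue { int lock_held; struct elevator *elv; };

    static void queue_lock(struct queue *q)   { q->lock_held = 1; }
    static void queue_unlock(struct queue *q) { q->lock_held = 0; }

    /* phase 1: allocation must happen with the lock not held */
    static struct elevator *elv_alloc_init(struct queue *q)
    {
        assert(!q->lock_held);
        struct elevator *e = malloc(sizeof(*e));
        e->ready = 1;
        return e;
    }

    /* phase 2: detach old, attach new, all inside the lock */
    static struct elevator *elv_attach(struct queue *q, struct elevator *e)
    {
        queue_lock(q);
        struct elevator *old = q->elv;
        q->elv = e;
        queue_unlock(q);
        return old;            /* caller tears this down outside the lock */
    }

    int main(void)
    {
        struct queue q = { 0, NULL };
        struct elevator *e = elv_alloc_init(&q);
        struct elevator *old = elv_attach(&q, e);
        assert(q.elv == e && q.elv->ready && old == NULL);
        free(e);
        return 0;
    }
    ```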
     

21 Apr, 2006

1 commit


18 Apr, 2006

1 commit

  • On the rmmod path, cfq/as waits to make sure all io contexts have
    been freed. However, it's using complete(), not wait_for_completion().

    I think barrier() is not enough here. To avoid the following case,
    this patch replaces barrier() with smp_wmb().

    cpu0                        visibility                cpu1
                                [ioc_gone=NULL, ioc_count=1]

    ioc_gone = &all_gone        NULL, ioc_count=1
    atomic_read(&ioc_count)     NULL, ioc_count=1
    wait_for_completion()       NULL, ioc_count=0         atomic_sub_and_test()
                                NULL, ioc_count=0         if ( && ioc_gone)
                                                          [ioc_gone==NULL,
                                                           so doesn't call complete()]
                                &all_gone, ioc_count=0

    Signed-off-by: OGAWA Hirofumi
    Signed-off-by: Jens Axboe

    OGAWA Hirofumi
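    The intended ordering can be modeled with C11 atomics, where release
    ordering plays the role of smp_wmb(). Names mirror the commit, but
    the completion struct and helpers here are simplified assumptions,
    not kernel code.

    ```c
    /* The store publishing ioc_gone must be visible to other CPUs
     * before ioc_count is re-read; a compiler-only barrier() does not
     * guarantee that. */
    #include <assert.h>
    #include <stdatomic.h>
    #include <stddef.h>

    struct completion { int done; };
    static struct completion all_gone;
    static struct completion *_Atomic ioc_gone;
    static atomic_int ioc_count = 1;

    /* exit side: publish the completion, then check the count */
    static int exit_side_must_wait(void)
    {
        atomic_store_explicit(&ioc_gone, &all_gone, memory_order_release);
        /* release ordering ensures the store above is visible before
         * the read below takes effect (the smp_wmb() role) */
        return atomic_load(&ioc_count) != 0;
    }

    /* put side: drop a reference; the last one out completes */
    static void put_side(void)
    {
        if (atomic_fetch_sub(&ioc_count, 1) == 1) {
            struct completion *c =
                atomic_load_explicit(&ioc_gone, memory_order_acquire);
            if (c)
                c->done = 1;     /* complete(ioc_gone) */
        }
    }

    int main(void)
    {
        assert(exit_side_must_wait());  /* count still 1: must wait */
        put_side();                     /* last ref: completion fires */
        assert(all_gone.done == 1);
        return 0;
    }
    ```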
     

19 Mar, 2006

3 commits