12 Jul, 2011

1 commit


11 Nov, 2010

1 commit


20 Aug, 2010

1 commit

  • This adds annotations for RCU operations in core kernel components.

    Signed-off-by: Arnd Bergmann
    Signed-off-by: Paul E. McKenney
    Cc: Al Viro
    Cc: Jens Axboe
    Cc: Andrew Morton
    Reviewed-by: Josh Triplett

    Arnd Bergmann
     

24 May, 2010

1 commit

  • Remove the ->dead_key field from cfq_io_context, shrinking it to 128 bytes
    (64 bytes on 32-bit hosts).

    Use the lower bit of ->key as the dead mark instead of moving the key to a
    separate field. A dead cfq_io_context then has cic->key != cfqd
    automatically, so io_context's last-hit cache keeps working unchanged.

    To check ->key for the non-dead state, compare it with cfqd instead of
    checking it for a non-NULL value as before.

    Also remove the obsolete race protection in cfq_cic_lookup; that race has
    been gone since v2.6.24-1728-g4ac845a.
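
    The low-bit dead mark works because cfq_data pointers are at least
    word-aligned, so bit 0 of ->key is always free. A minimal userspace sketch
    of the idea (names here are illustrative, not the actual cfq code):

```c
#include <stdint.h>

/* Low bit of the key pointer marks a dead cfq_io_context. */
#define CIC_DEAD 0x1ul

/* Mark a key dead by setting its low bit (pointers are word-aligned,
 * so the bit is otherwise always zero). */
static inline void *cic_dead_key(void *key)
{
    return (void *)((uintptr_t)key | CIC_DEAD);
}

/* Liveness test: a live key still equals the cfqd pointer; a dead key
 * fails the compare automatically, with no separate ->dead_key field. */
static inline int cic_key_is_live(const void *key, const void *cfqd)
{
    return key == cfqd;
}
```

    Because the check is a plain pointer compare, the last-hit cache needs no
    special dead-entry handling: dead entries simply never match.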

    Signed-off-by: Konstantin Khlebnikov
    Signed-off-by: Jens Axboe

    Konstantin Khlebnikov
     

13 Mar, 2010

1 commit

  • Modify the Block I/O cgroup subsystem to be able to be built as a module.
    As the CFQ disk scheduler optionally depends on blk-cgroup, config options
    in block/Kconfig, block/Kconfig.iosched, and block/blk-cgroup.h are
    enhanced to support the new module dependency.

    Signed-off-by: Ben Blum
    Cc: Li Zefan
    Cc: Paul Menage
    Cc: "David S. Miller"
    Cc: KAMEZAWA Hiroyuki
    Cc: Lai Jiangshan
    Cc: Vivek Goyal
    Cc: Jens Axboe
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ben Blum
     

26 Feb, 2010

1 commit

  • On 64-bit builds with CONFIG_BLK_CGROUP=n (the default), this removes 8
    bytes of padding from struct io_context and drops its size from 72 to 64
    bytes, requiring one fewer cacheline and allowing more objects per slab in
    its kmem_cache.
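
    The saved bytes come purely from field ordering. A hypothetical pair of
    layouts (not the real io_context fields) shows the effect on a 64-bit
    host, where an 8-byte pointer between two 4-byte ints forces padding:

```c
#include <stddef.h>

/* Illustrative layouts only. On LP64, the pointer's 8-byte alignment
 * forces 4 bytes of padding after the first int, and tail padding
 * after the last one. */
struct ioc_padded {
    int refcount;   /* 4 bytes + 4 bytes of padding before the pointer */
    void *aic;      /* must start on an 8-byte boundary */
    int nr_tasks;   /* 4 bytes + 4 bytes of tail padding */
};

/* Grouping the two ints lets them share one 8-byte slot. */
struct ioc_packed {
    int refcount;
    int nr_tasks;
    void *aic;      /* no interior padding needed */
};
```

    The same reordering trick is what shrinks io_context below the 64-byte
    cacheline boundary here.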

    Signed-off-by: Richard Kennedy

    ----
    patch against 2.6.33
    compiled & tested on x86_64 AMDX2
    regards
    Richard
    Signed-off-by: Jens Axboe

    Richard Kennedy
     

11 Jan, 2010

1 commit


04 Dec, 2009

2 commits

  • With CLONE_IO, the parent's io_context->nr_tasks is incremented but never
    decremented if copy_process() fails afterwards, which prevents
    exit_io_context() from calling the IO schedulers' exit functions.

    Pass a task_struct to exit_io_context(), and call exit_io_context() instead
    of put_io_context() in the copy_process() cleanup path.
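
    The bug is a classic paired-counter leak: the clone path bumps two
    counters, so the error path must undo both. A userspace sketch of the
    pattern (names are illustrative, not the kernel functions):

```c
/* Simplified io_context with the two counters in question. */
struct io_context_sketch {
    long refcount;
    long nr_tasks;
};

/* CLONE_IO path in copy_process(): takes both a reference and a task slot. */
void clone_io(struct io_context_sketch *ioc)
{
    ioc->refcount++;
    ioc->nr_tasks++;
}

/* put_io_context() drops only the reference... */
void put_io_context_sketch(struct io_context_sketch *ioc)
{
    ioc->refcount--;
}

/* ...so the cleanup path must use exit_io_context(), which also undoes
 * the nr_tasks increment before dropping the reference. */
void exit_io_context_sketch(struct io_context_sketch *ioc)
{
    ioc->nr_tasks--;
    put_io_context_sketch(ioc);
}
```

    Calling only put_io_context() on the failure path leaves nr_tasks
    permanently elevated, which is exactly the leak the patch fixes.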

    Signed-off-by: Louis Rilling
    Signed-off-by: Jens Axboe

    Louis Rilling
     
  • This is the basic implementation of the blkio controller cgroup interface.
    It is the common interface visible to user space and should be used by the
    different IO control policies as we implement them.

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     

26 Oct, 2009

1 commit


31 Jul, 2009

1 commit

  • Commit d9c7d394a8ebacb60097b192939ae9f15235225e
    ("block: prevent possible io_context->refcount overflow") mistakenly
    changed atomic_inc(&ioc->nr_tasks) to atomic_long_inc(&ioc->refcount).

    Signed-off-by: Li Zefan
    Acked-by: Nikanth Karthikesan
    Signed-off-by: Jens Axboe

    Li Zefan
     

11 Jun, 2009

1 commit

  • Currently io_context has an atomic_t (32-bit) as its refcount. In the case
    of cfq, a reference to the io_context is taken for each device against
    which a task does I/O, and each process sharing an io_context (CLONE_IO)
    also holds a reference to it.

    Theoretically, the maximum number of processes sharing the same io_context
    plus the number of disks/cfq_data referring to the same io_context could
    overflow the 32-bit counter on a very high-end machine.

    Even though it is an improbable case, let us make it an atomic_long_t.
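
    The failure mode being avoided is silent wraparound of a 32-bit counter.
    A small illustration (unsigned arithmetic is used so the wrap is
    well-defined C; the kernel's atomic_t is int-sized and atomic_long_t is
    long-sized, these functions are not the kernel API):

```c
#include <stdint.h>

/* A 32-bit counter silently wraps once 2^32 references accumulate. */
uint32_t refs_after_32(uint32_t refcount, uint32_t new_refs)
{
    return refcount + new_refs;   /* wraps modulo 2^32 */
}

/* The widened counter has vastly more headroom on 64-bit hosts. */
uint64_t refs_after_long(uint64_t refcount, uint64_t new_refs)
{
    return refcount + new_refs;
}
```

    A wrapped refcount reaching zero would free an io_context that is still
    in use, which is why even this improbable overflow is worth closing.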

    Signed-off-by: Nikanth Karthikesan
    Signed-off-by: Andrew Morton
    Signed-off-by: Jens Axboe

    Nikanth Karthikesan
     

03 Jul, 2008

1 commit


15 Apr, 2008

1 commit


02 Apr, 2008

1 commit


19 Feb, 2008

1 commit


28 Jan, 2008

3 commits

  • The io context sharing introduced a per-ioc spinlock that would protect
    the cfq io context lookup. That is a regression from the original, since
    no locking was ever needed there: the ioc/cic were process private.

    The cic lookup is changed from an rbtree to a radix tree, which lets us
    use RCU to make the reader side lockless. That is the performance-critical
    path; modifying the radix tree is only done on process creation (when that
    process first does IO, actually) and on process exit (if that process has
    done IO).

    As it so happens, radix trees are also much faster for this type of
    lookup, where the key is a pointer: it is a very sparse tree.
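
    A toy userspace sketch of a pointer-keyed radix tree shows the shape of
    the lookup (illustrative only; the kernel uses lib/radix-tree.c, and the
    lockless reader side there additionally relies on RCU, omitted here):

```c
#include <stdlib.h>

/* Each level consumes RADIX_BITS of the key, walking root to leaf. */
#define RADIX_BITS   4
#define RADIX_SLOTS  (1u << RADIX_BITS)
#define RADIX_LEVELS ((int)(sizeof(unsigned long) * 8 / RADIX_BITS))

struct radix_node { void *slots[RADIX_SLOTS]; };

struct radix_node *radix_new(void)
{
    return calloc(1, sizeof(struct radix_node));
}

/* Insert: allocate interior nodes along the key's path, store the item
 * in the leaf slot. Only slots on populated paths exist, which is why a
 * pointer key space makes the tree very sparse. */
int radix_insert(struct radix_node *root, unsigned long key, void *item)
{
    struct radix_node *n = root;

    for (int shift = (RADIX_LEVELS - 1) * RADIX_BITS; shift > 0;
         shift -= RADIX_BITS) {
        unsigned idx = (key >> shift) & (RADIX_SLOTS - 1);

        if (!n->slots[idx] && !(n->slots[idx] = radix_new()))
            return -1;
        n = n->slots[idx];
    }
    n->slots[key & (RADIX_SLOTS - 1)] = item;
    return 0;
}

/* Lookup: a pure downward walk with no locking; in the kernel this walk
 * runs under rcu_read_lock() instead of the old per-ioc spinlock. */
void *radix_lookup(struct radix_node *root, unsigned long key)
{
    struct radix_node *n = root;

    for (int shift = (RADIX_LEVELS - 1) * RADIX_BITS; n && shift > 0;
         shift -= RADIX_BITS)
        n = n->slots[(key >> shift) & (RADIX_SLOTS - 1)];
    return n ? n->slots[key & (RADIX_SLOTS - 1)] : NULL;
}
```

    The key here is simply the queue pointer cast to an unsigned long, which
    is what makes the tree sparse and the walk short in practice.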

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • Detach task state from the ioc; instead, keep track of how many processes
    are accessing the ioc.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • This is where it belongs and then it doesn't take up space for a
    process that doesn't do IO.

    Signed-off-by: Jens Axboe

    Jens Axboe