02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boilerplate text.
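
    For illustration, the identifier is a single comment on the first
    line of a file:

        // SPDX-License-Identifier: GPL-2.0

    (files using block comments carry the equivalent
    /* SPDX-License-Identifier: GPL-2.0 */ form instead).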

    This patch is based on work done by Thomas Gleixner, Kate Stewart, and
    Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset of
    the use cases:
    - file had no licensing information in it,
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information.

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier should be
    applied to a file was done in a spreadsheet of side-by-side results from
    the output of two independent scanners (ScanCode & Windriver) producing
    SPDX tag:value files, created by Philippe Ombredanne. Philippe prepared
    the base worksheet and did an initial spot review of a few thousand
    files.

    The 4.13 kernel was the starting point of the analysis, with 60,537
    files assessed. Kate Stewart did a file-by-file comparison of the
    scanner results in the spreadsheet to determine which SPDX license
    identifier(s) should be applied to the file. She confirmed any
    determination that was not immediately clear with lawyers working with
    the Linux Foundation.

    Criteria used to select files for SPDX license identifier tagging were:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
      lines of source.
    - File already had some variant of a license header in it (even if <5
      lines).

    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

20 Mar, 2012

1 commit

  • After the previous patch to cfq, there's no ioc_get_changed() user
    left. This patch yanks out ioc_{ioprio|cgroup|get}_changed() and all
    related stuff.

    Signed-off-by: Tejun Heo
    Cc: Vivek Goyal
    Signed-off-by: Jens Axboe

    Tejun Heo
     

07 Mar, 2012

2 commits

  • Currently ioc->nr_tasks is used to decide two things - whether an ioc
    is done issuing IOs and whether it's shared by multiple tasks. This
    patch separates out the first into ioc->active_ref, which is acquired
    and released using {get|put}_io_context_active() respectively.

    This will be used to associate bio's with a given task. This patch
    doesn't introduce any visible behavior change.
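
    A minimal sketch of the split described above (illustrative; the
    helper bodies and the cleanup hook are assumptions, not the exact
    kernel code):

        /* active_ref counts tasks still issuing IO through this ioc */
        static inline void get_io_context_active(struct io_context *ioc)
        {
                atomic_long_inc(&ioc->refcount); /* plain lifetime ref */
                atomic_inc(&ioc->active_ref);    /* "still doing IO" ref */
        }

        static inline void put_io_context_active(struct io_context *ioc)
        {
                if (atomic_dec_and_test(&ioc->active_ref))
                        ioc_exit_icqs(ioc); /* hypothetical: notify elevators */
                put_io_context(ioc);        /* drop the lifetime ref */
        }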

    Signed-off-by: Tejun Heo
    Cc: Vivek Goyal
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • ioc_task_link() is used to share %current's ioc on clone. If
    %current->io_context is set, %current is guaranteed to have refcount
    on the ioc and, thus, ioc_task_link() can't fail.

    Replace error checking in ioc_task_link() with WARN_ON_ONCE() and make
    it just increment refcount and nr_tasks.
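
    A sketch of the simplified ioc_task_link() (the exact WARN_ON_ONCE()
    conditions are assumptions based on the description):

        static inline void ioc_task_link(struct io_context *ioc)
        {
                /* %current already holds a ref, so this can never fail */
                WARN_ON_ONCE(atomic_long_read(&ioc->refcount) <= 0);
                WARN_ON_ONCE(atomic_read(&ioc->nr_tasks) <= 0);

                atomic_long_inc(&ioc->refcount);
                atomic_inc(&ioc->nr_tasks);
        }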

    -v2: Description typo fix (Vivek).

    Signed-off-by: Tejun Heo
    Cc: Vivek Goyal
    Signed-off-by: Jens Axboe

    Tejun Heo
     

15 Feb, 2012

2 commits

  • While updating locking, b2efa05265 "block, cfq: unlink
    cfq_io_context's immediately" moved the elevator_exit_icq_fn()
    invocation from exit_io_context() to the final ioc put. While this
    doesn't cause catastrophic failure, it effectively removes task exit
    notification to the elevator and causes noticeable IO performance
    degradation with CFQ.

    On task exit, CFQ used to immediately expire the slice if it was being
    used by the exiting task as no more IO would be issued by the task;
    however, after b2efa05265, the notification is lost and disk could sit
    idle needlessly, leading to noticeable IO performance degradation for
    certain workloads.

    This patch renames ioc_exit_icq() to ioc_destroy_icq(), separates the
    elevator_exit_icq_fn() invocation out into ioc_exit_icq(), and invokes
    it from exit_io_context(). An ICQ_EXITED flag is added to avoid
    invoking the callback more than once for the same icq.
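
    A sketch of the at-most-once exit logic (close to the description;
    the exact guard placement is an assumption):

        static void ioc_exit_icq(struct io_cq *icq)
        {
                struct elevator_type *et = icq->q->elevator->type;

                if (icq->flags & ICQ_EXITED)
                        return;

                if (et->ops.elevator_exit_icq_fn)
                        et->ops.elevator_exit_icq_fn(icq);

                icq->flags |= ICQ_EXITED;
        }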

    Walking icq_list from the ioc side and invoking the elevator callback
    requires reverse double locking. This may be better implemented using
    RCU; unfortunately, using RCU isn't trivial. e.g. RCU protection would
    need to cover the request_queue, and the queue_lock switch on cleanup
    makes grabbing queue_lock under RCU unsafe. Reverse double locking
    should do, at least for now.

    Signed-off-by: Tejun Heo
    Reported-and-bisected-by: Shaohua Li
    LKML-Reference:
    Tested-by: Shaohua Li
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • icq->changed was used for ICQ_*_CHANGED bits. Rename it to flags and
    access it under ioc->lock instead of using atomic bitops.
    ioc_get_changed() is added so that the changed part can be fetched and
    cleared as before.

    icq->flags will be used to carry other flags.
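
    A sketch of the fetch-and-clear described above (the mask name and the
    function signature are assumptions):

        unsigned int ioc_get_changed(struct io_cq *icq)
        {
                unsigned int changed = 0;
                unsigned long flags;

                if (unlikely(icq->flags & ICQ_CHANGED_MASK)) {
                        spin_lock_irqsave(&icq->ioc->lock, flags);
                        changed = icq->flags & ICQ_CHANGED_MASK;
                        icq->flags &= ~ICQ_CHANGED_MASK;
                        spin_unlock_irqrestore(&icq->ioc->lock, flags);
                }
                return changed;
        }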

    Signed-off-by: Tejun Heo
    Tested-by: Shaohua Li
    Signed-off-by: Jens Axboe

    Tejun Heo
     

07 Feb, 2012

1 commit

  • put_io_context() performed complex trylock dancing to avoid
    deferring ioc release to a workqueue. It was also broken on UP because
    trylock was always assumed to succeed, which resulted in an unbalanced
    preemption count.

    While there are ways to fix the UP breakage, even the most
    pathological microbench (forced ioc allocation and tight fork/exit
    loop) fails to show any appreciable performance benefit of the
    optimization. Strip it out. If there turns out to be workloads which
    are affected by this change, simpler optimization from the discussion
    thread can be applied later.

    Signed-off-by: Tejun Heo
    LKML-Reference:
    Signed-off-by: Jens Axboe

    Tejun Heo
     

14 Dec, 2011

9 commits

  • Now the block layer knows everything necessary to create and
    associate icq's with requests. Move ioc_create_icq() to blk-ioc.c and
    update get_request() such that, if elevator_type->icq_size is set,
    requests are automatically associated with their matching icq's before
    elv_set_request(). The io_context reference is also managed by the
    block core on request alloc/free.

    * Only ioprio/cgroup changed handling remains from cfq_get_cic().
    Collapsed into cfq_set_request().

    * This removes queue kicking on icq allocation failure (for now). As
    icq allocation failure is rare and the only effect queue kicking
    achieved was possibly accelerating queue processing, this change
    shouldn't be noticeable.

    There is a larger underlying problem. Unlike request allocation,
    icq allocation is not guaranteed to succeed eventually after
    retries. The number of icqs is unbounded and thus mempool can't be
    the solution either. This effectively adds an allocation dependency
    on the memory free path and thus the possibility of deadlock.

    This usually wouldn't happen because icq allocation is not a hot
    path and, even when the condition triggers, it's highly unlikely
    that none of the writeback workers already has an icq.

    However, this is still possible especially if elevator is being
    switched under high memory pressure, so we better get it fixed.
    Probably the only solution is just bypassing elevator and appending
    to dispatch queue on any elevator allocation failure.

    * Comment added to explain how icq's are managed and synchronized.

    This completes cleanup of io_context interface.
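
    A sketch of the get_request() hookup (a fragment; the lookup/create
    helper names follow the text above, the exact control flow is an
    assumption):

        /* inside get_request(), before calling into the elevator */
        if (et->icq_size) {
                icq = ioc_lookup_icq(ioc, q);   /* existing ioc-q pair? */
                if (!icq)
                        icq = ioc_create_icq(ioc, q, gfp_mask);
                rq->elv.icq = icq;      /* ioc ref managed by block core */
        }
        ret = elv_set_request(q, rq, gfp_mask);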

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • With the kmem_cache managed by blk-ioc, io_cq exit/release can be
    moved to blk-ioc too. The odd ->io_cq->exit/release() callbacks are
    replaced with elevator_ops->elevator_exit_icq_fn(); unlinking from
    both ioc and q, and freeing, are handled automatically by blk-ioc.
    The elevator operation only needs to perform the exit operation
    specific to the elevator - in cfq's case, exiting the cfqq's.

    Also, clearing of io_cq's on q detach is moved to block core and
    automatically performed on elevator switch and q release.

    Because the q an io_cq points to might be freed before the RCU
    callback for the io_cq runs, the blk-ioc code should remember which
    cache the io_cq needs to be freed to when the io_cq is released. A
    new field, io_cq->__rcu_icq_cache, is added for this purpose. As both
    the new field and rcu_head are used only after the io_cq is released,
    and the q/ioc_node fields aren't, they are put into unions.
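
    The resulting layout, roughly (non-essential members trimmed; field
    names follow the description above):

        struct io_cq {
                struct request_queue    *q;
                struct io_context       *ioc;

                /* q_node/ioc_node are dead by release time, so they can
                 * share storage with the release-only fields */
                union {
                        struct list_head        q_node;
                        struct kmem_cache       *__rcu_icq_cache;
                };
                union {
                        struct hlist_node       ioc_node;
                        struct rcu_head         __rcu_head;
                };
        };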

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • Currently io_context and cfq logic are mixed without a clear
    boundary. Most of io_context is independent from cfq, but the
    cfq_io_context handling logic is dispersed between generic ioc code
    and cfq.

    cfq_io_context represents association between an io_context and a
    request_queue, which is a concept useful outside of cfq, but it also
    contains fields which are useful only to cfq.

    This patch takes out the generic part and puts it into io_cq (io
    context-queue) and the rest into cfq_io_cq (the cic moniker remains
    the same), which contains io_cq. The following changes are made
    together.

    * cfq_ttime and cfq_io_cq now live in cfq-iosched.c.

    * All related fields, functions and constants are renamed accordingly.

    * ioc->ioc_data is now "struct io_cq *" instead of "void *" and
    renamed to icq_hint.

    This prepares for io_context API cleanup. Documentation is currently
    sparse. It will be added later.

    Changes in this patch are mechanical and don't cause functional
    change.
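
    A sketch of the resulting embedding (the cfq-side fields shown are
    illustrative assumptions):

        /* cfq-specific cic, in cfq-iosched.c; embeds the generic io_cq */
        struct cfq_io_cq {
                struct io_cq            icq;    /* must come first */
                struct cfq_queue        *cfqq[2];
                struct cfq_ttime        ttime;
        };

        /* recover the cic from its embedded icq */
        static inline struct cfq_io_cq *icq_to_cic(struct io_cq *icq)
        {
                return container_of(icq, struct cfq_io_cq, icq);
        }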

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • Now that lazy paths are removed, cfqd_dead_key() is meaningless and
    cic->q can be used wherever cic->key is used. Kill cic->key.

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • A cic is an association between an io_context and a request_queue.
    A cic is linked from both the ioc and the q and should be destroyed
    when either one goes away. As ioc and q both have their own locks,
    locking becomes a bit complex - both orders work for removal from one
    but not from the other.

    Currently, cfq tries to circumvent this locking order issue with RCU.
    ioc->lock nests inside queue_lock but the radix tree and cic's are
    also protected by RCU allowing either side to walk their lists without
    grabbing lock.

    This rather unconventional use of RCU quickly devolves into an
    extremely fragile convolution. e.g. the following oops is from cfqd
    going away too soon after ioc and q exits raced.

    general protection fault: 0000 [#1] PREEMPT SMP
    CPU 2
    Modules linked in:
    [ 88.503444]
    Pid: 599, comm: hexdump Not tainted 3.1.0-rc10-work+ #158 Bochs Bochs
    RIP: 0010:[] [] cfq_exit_single_io_context+0x58/0xf0
    ...
    Call Trace:
    [] call_for_each_cic+0x5a/0x90
    [] cfq_exit_io_context+0x15/0x20
    [] exit_io_context+0x100/0x140
    [] do_exit+0x579/0x850
    [] do_group_exit+0x5b/0xd0
    [] sys_exit_group+0x17/0x20
    [] system_call_fastpath+0x16/0x1b

    The only real hot path here is cic lookup during request
    initialization and avoiding extra locking requires very confined use
    of RCU. This patch makes cic removal from both ioc and request_queue
    perform double-locking and unlink immediately.

    * From q side, the change is almost trivial as ioc->lock nests inside
    queue_lock. It just needs to grab each ioc->lock as it walks
    cic_list and unlink it.

    * From the ioc side, it's a bit more difficult because of the
    inverted lock order. The ioc needs its lock to walk its cic_list but
    can't grab the matching queue_lock and needs to perform unlock-relock
    dancing.

    Unlinking is now wholly done from put_io_context() and the fast path
    is optimized by using the queue_lock the caller already holds, which
    is by far the most common case. If the ioc accessed multiple devices,
    it tries with trylock. In the unlikely case of fast path failure, it
    falls back to the full double-locking dance from a workqueue.

    Double-locking isn't the prettiest thing in the world but it's *far*
    simpler and more understandable than the RCU trick, without adding
    any meaningful overhead.

    This still leaves a lot of now-unnecessary RCU logic. Future patches
    will trim it.

    -v2: Vivek pointed out that cic->q was being dereferenced after
    cic->release() was called. Updated to use local variable @this_q
    instead.
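
    A simplified sketch of the fast path with trylock fallback (function
    and helper names here are assumptions; the real unlink also has to
    revalidate after any relock):

        static void ioc_release_cics(struct io_context *ioc,
                                     struct request_queue *locked_q)
        {
                unsigned long flags;

                spin_lock_irqsave(&ioc->lock, flags);
                while (!hlist_empty(&ioc->cic_list)) {
                        struct cfq_io_context *cic =
                                hlist_entry(ioc->cic_list.first,
                                            struct cfq_io_context, cic_list);
                        struct request_queue *this_q = cic->q;

                        if (this_q == locked_q ||
                            spin_trylock(this_q->queue_lock)) {
                                cic_unlink(cic); /* hypothetical unlink+free */
                                if (this_q != locked_q)
                                        spin_unlock(this_q->queue_lock);
                        } else {
                                /* fast path failed: hand the full
                                 * unlock-relock dance to a workqueue */
                                schedule_work(&ioc->release_work);
                                break;
                        }
                }
                spin_unlock_irqrestore(&ioc->lock, flags);
        }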

    Signed-off-by: Tejun Heo
    Cc: Vivek Goyal
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • ioprio/cgroup change was handled by marking the changed state in ioc
    and, on the following access to the ioc, performing RCU-protected
    iteration through all cic's grabbing the matching queue_lock.

    This patch moves the changed state to each cic. When ioprio or cgroup
    changes, the respective bit is set on all cic's of the ioc, and when
    each of those cic's (not the ioc) is accessed, the change is applied
    for that specific ioc-queue pair (see the sketch below).

    This also fixes the following two race conditions between setting and
    clearing of changed states.

    * Missing barrier between assign/load of ioprio and ioprio_changed
    allowed applying old ioprio.

    * Change requests could happen between application of change and
    clearing of changed variables.
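
    A sketch of the marking side (helper and flag names are assumptions;
    the bit is tested and cleared later, when that particular cic is
    accessed):

        void ioc_ioprio_changed(struct io_context *ioc, int ioprio)
        {
                struct cfq_io_context *cic;
                unsigned long flags;

                spin_lock_irqsave(&ioc->lock, flags);
                ioc->ioprio = ioprio;
                hlist_for_each_entry(cic, &ioc->cic_list, cic_list)
                        set_bit(CIC_IOPRIO_CHANGED, &cic->changed);
                spin_unlock_irqrestore(&ioc->lock, flags);
        }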

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • Make the following changes to prepare for ioc/cic management cleanup.

    * Add cic->q so that ioc can determine the associated queue without
    querying cfq. This will eventually replace ->key.

    * Factor out cfq_release_cic() from cic_free_func(). This function
    assumes that the caller handled locking.

    * Rename __cfq_exit_single_io_context() to cfq_exit_cic() and make it
    take only @cic.

    * Restructure cfq_cic_link() for future updates.

    This patch doesn't introduce any functional changes.

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • Ignoring copy_io() during fork, io_context can be allocated from two
    places - current_io_context() and set_task_ioprio(). The former is
    always called from the local task while the latter can be called from
    a different task. The synchronization between them is peculiar and
    dubious.

    * current_io_context() doesn't grab task_lock() and assumes that if it
    saw %NULL ->io_context, it would stay that way until allocation and
    assignment is complete. It has smp_wmb() between alloc/init and
    assignment.

    * set_task_ioprio() grabs task_lock() for assignment and does
    smp_read_barrier_depends() between "ioc = task->io_context" and "if
    (ioc)". Unfortunately, this doesn't achieve anything - the latter is
    not a dependent load of the former. i.e., if ioc itself were being
    dereferenced as "ioc->xxx", it would mean something (not sure what,
    though), but as the code currently stands, the dependent read barrier
    is a noop.

    As only one of the two test-assignment sequences is task_lock()
    protected, the task_lock() can't do much about the race between the
    two. Nothing prevents current_io_context() and set_task_ioprio() from
    each allocating its own ioc for the same task and overwriting the
    other's.

    Also, set_task_ioprio() can race with an exiting task and create a
    new ioc after exit_io_context() has finished.

    ioc get/put doesn't have any reason to be complex. The only hot path
    is accessing the existing ioc of %current, which is simple to achieve
    given that ->io_context is never destroyed as long as the task is
    alive. All other paths can happily go through task_lock() like all
    other task sub structures without impacting anything.

    This patch updates ioc get/put so that it becomes more conventional.

    * alloc_io_context() is replaced with get_task_io_context(). This is
    the only interface which can acquire access to the ioc of another
    task. On return, the caller has an explicit reference to the object
    which should be put using put_io_context() afterwards.

    * The functionality of current_io_context() remains the same but when
    creating a new ioc, it shares the code path with
    get_task_io_context() and always goes through task_lock().

    * get_io_context() now means incrementing ref on an ioc which the
    caller already has access to (be that an explicit refcnt or implicit
    %current one).

    * PF_EXITING inhibits creation of new io_context and once
    exit_io_context() is finished, it's guaranteed that both ioc
    acquisition functions return %NULL.

    * All users are updated. Most are trivial but
    smp_read_barrier_depends() removal from cfq_get_io_context() needs a
    bit of explanation. I suppose the original intention was to ensure
    ioc->ioprio is visible when set_task_ioprio() allocates new
    io_context and installs it; however, this wouldn't have worked
    because set_task_ioprio() doesn't have wmb between init and install.
    There are other problems with this which will be fixed in another
    patch.

    * While at it, use NUMA_NO_NODE instead of -1 for wildcard node
    specification.

    -v2: Vivek spotted contamination from debug patch. Removed.
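
    A sketch of the conventional get path described above (simplified;
    create_task_io_context() is an assumed helper that returns 0 when it
    installed a new ioc and non-zero once PF_EXITING is set or allocation
    fails):

        struct io_context *get_task_io_context(struct task_struct *task,
                                               gfp_t gfp_flags, int node)
        {
                struct io_context *ioc;

                do {
                        task_lock(task);
                        ioc = task->io_context;
                        if (likely(ioc)) {
                                get_io_context(ioc); /* caller's explicit ref */
                                task_unlock(task);
                                return ioc;
                        }
                        task_unlock(task);
                } while (!create_task_io_context(task, gfp_flags, node));

                return NULL; /* task is exiting or allocation failed */
        }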

    Signed-off-by: Tejun Heo
    Cc: Vivek Goyal
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • * The int return from put_io_context() wasn't used by anybody. Make
    it return void like other put functions and docbook-fy the function
    comment.

    * Reorder dummy declarations for !CONFIG_BLOCK case a bit.

    * Make alloc_ioc_context() use __GFP_ZERO allocation, take init out of
    if block and drop 0'ing.

    * Docbook-fy current_io_context() comment.

    This patch doesn't introduce any functional change.

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     

20 Aug, 2010

1 commit

  • This adds annotations for RCU operations in core kernel components.
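
    For instance (illustrative only; the specific fields this commit
    annotates aren't reproduced here), the __rcu marker lets sparse check
    that a pointer is only accessed via rcu_dereference() and
    rcu_assign_pointer():

        struct cfq_io_context __rcu *ioc_data;  /* RCU-protected last-hit
                                                   cache inside io_context */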

    Signed-off-by: Arnd Bergmann
    Signed-off-by: Paul E. McKenney
    Cc: Al Viro
    Cc: Jens Axboe
    Cc: Andrew Morton
    Reviewed-by: Josh Triplett

    Arnd Bergmann
     

24 May, 2010

1 commit

  • Remove the ->dead_key field from cfq_io_context to shrink its size
    to 128 bytes (64 bytes for 32-bit hosts).

    Use the lower bit in ->key as a dead mark instead of moving the key
    to a separate field. After this, for a dead cfq_io_context we get
    cic->key != cfqd automatically. Thus, the io_context's last-hit cache
    should keep working without changes.

    Now, to check ->key for the non-dead state, compare it with cfqd
    instead of checking ->key for a non-NULL value as before.

    Also remove the obsolete race protection in cfq_cic_lookup(); this
    race has been gone since v2.6.24-1728-g4ac845a.
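
    A sketch of the dead-mark encoding (macro and helper names are
    assumptions):

        #define CIC_DEAD_KEY    1ul

        /* cfqd is pointer-aligned, so bit 0 is free to flag "dead" */
        static inline void *cfqd_dead_key(struct cfq_data *cfqd)
        {
                return (void *)((unsigned long)cfqd | CIC_DEAD_KEY);
        }

        /* alive check: a dead key can never compare equal to cfqd */
        static inline bool cic_is_alive(struct cfq_io_context *cic,
                                        struct cfq_data *cfqd)
        {
                return cic->key == cfqd;
        }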

    Signed-off-by: Konstantin Khlebnikov
    Signed-off-by: Jens Axboe

    Konstantin Khlebnikov
     

13 Mar, 2010

1 commit

  • Modify the Block I/O cgroup subsystem to be able to be built as a module.
    As the CFQ disk scheduler optionally depends on blk-cgroup, config options
    in block/Kconfig, block/Kconfig.iosched, and block/blk-cgroup.h are
    enhanced to support the new module dependency.

    Signed-off-by: Ben Blum
    Cc: Li Zefan
    Cc: Paul Menage
    Cc: "David S. Miller"
    Cc: KAMEZAWA Hiroyuki
    Cc: Lai Jiangshan
    Cc: Vivek Goyal
    Cc: Jens Axboe
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ben Blum
     

26 Feb, 2010

1 commit

  • On 64-bit builds when CONFIG_BLK_CGROUP=n (the default), this
    removes 8 bytes of padding from struct io_context and drops its size
    from 72 to 64 bytes, so it needs one fewer cacheline and allows more
    objects per slab in its kmem_cache.

    Signed-off-by: Richard Kennedy

    ----
    patch against 2.6.33
    compiled & tested on x86_64 AMDX2
    regards
    Richard
    Signed-off-by: Jens Axboe

    Richard Kennedy
     

04 Dec, 2009

2 commits

  • With CLONE_IO, the parent's io_context->nr_tasks is incremented, but
    never decremented whenever copy_process() fails afterwards, which
    prevents exit_io_context() from calling the IO schedulers' exit
    functions.

    Give a task_struct to exit_io_context(), and call exit_io_context()
    instead of put_io_context() in the copy_process() cleanup path.
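
    The cleanup-path change, roughly as described (sketch; the label is
    the one used on copy_process() failure):

        /* in copy_process(), on the failure path */
        bad_fork_cleanup_io:
                if (p->io_context)
                        exit_io_context(p);     /* was: put_io_context() */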

    Signed-off-by: Louis Rilling
    Signed-off-by: Jens Axboe

    Louis Rilling
     
  • This is a basic implementation of the blkio controller cgroup
    interface. This is the common interface visible to user space and
    should be used by different IO control policies as we implement them.

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     

31 Jul, 2009

1 commit

  • Commit d9c7d394a8ebacb60097b192939ae9f15235225e
    ("block: prevent possible io_context->refcount overflow") mistakenly
    changed atomic_inc(&ioc->nr_tasks) to atomic_long_inc(&ioc->refcount).

    Signed-off-by: Li Zefan
    Acked-by: Nikanth Karthikesan
    Signed-off-by: Jens Axboe

    Li Zefan
     

11 Jun, 2009

1 commit

  • Currently io_context has an atomic_t (32-bit) as its refcount. In the
    case of cfq, for each device against which a task does I/O, a
    reference to the io_context is taken. And multiple processes sharing
    an io_context (CLONE_IO) each hold a reference to the same
    io_context.

    Theoretically, the maximum number of processes sharing the same
    io_context plus the number of disks/cfq_data referring to the same
    io_context can overflow the 32-bit counter on a very high-end
    machine.

    Even though it is an improbable case, let us make it atomic_long_t.

    Signed-off-by: Nikanth Karthikesan
    Signed-off-by: Andrew Morton
    Signed-off-by: Jens Axboe

    Nikanth Karthikesan
     

28 Jan, 2008

3 commits

  • The io context sharing introduced a per-ioc spinlock that would
    protect the cfq io context lookup. That is a regression from the
    original, since we never needed any locking there because the ioc/cic
    were process private.

    The cic lookup is changed from an rbtree construct to a radix tree,
    which lets us use RCU to make the reader side lockless. That is the
    performance-critical path; modifying the radix tree is only done on
    process creation (when that process first does IO, actually) and on
    process exit (if that process has done IO).

    As it so happens, radix trees are also much faster for this type of
    lookup where the key is a pointer. It's a very sparse tree.
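
    A sketch of the lockless lookup (simplified; dead-entry checks and
    revalidation under queue_lock, which the real lookup needs, are
    omitted):

        static struct cfq_io_context *cfq_cic_lookup(struct cfq_data *cfqd,
                                                     struct io_context *ioc)
        {
                struct cfq_io_context *cic;

                rcu_read_lock();
                cic = radix_tree_lookup(&ioc->radix_root,
                                        (unsigned long)cfqd);
                rcu_read_unlock();

                return cic;
        }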

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • Detach task state from the ioc; instead, keep track of how many
    processes are accessing the ioc.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • This is where it belongs and then it doesn't take up space for a
    process that doesn't do IO.

    Signed-off-by: Jens Axboe

    Jens Axboe