16 Jan, 2012

1 commit

  • * 'for-3.3/core' of git://git.kernel.dk/linux-block: (37 commits)
    Revert "block: recursive merge requests"
    block: Stop using macro stubs for the bio data integrity calls
    blockdev: convert some macros to static inlines
    fs: remove unneeded plug in mpage_readpages()
    block: Add BLKROTATIONAL ioctl
    block: Introduce blk_set_stacking_limits function
    block: remove WARN_ON_ONCE() in exit_io_context()
    block: an exiting task should be allowed to create io_context
    block: ioc_cgroup_changed() needs to be exported
    block: recursive merge requests
    block, cfq: fix empty queue crash caused by request merge
    block, cfq: move icq creation and rq->elv.icq association to block core
    block, cfq: restructure io_cq creation path for io_context interface cleanup
    block, cfq: move io_cq exit/release to blk-ioc.c
    block, cfq: move icq cache management to block core
    block, cfq: move io_cq lookup to blk-ioc.c
    block, cfq: move cfqd->icq_list to request_queue and add request->elv.icq
    block, cfq: reorganize cfq_io_context into generic and cfq specific parts
    block: remove elevator_queue->ops
    block: reorder elevator switch sequence

    Fix up conflicts in:
    - block/blk-cgroup.c
    Switch from can_attach_task to can_attach
    - block/cfq-iosched.c
    conflict with now removed cic index changes (we now use q->id instead)

    Linus Torvalds

15 Jan, 2012

3 commits

  • This reverts commit 274193224cdabd687d804a26e0150bb20f2dd52c.

    We have some problems related to selection of empty queues
    that need to be resolved, evidence so far points to the
    recursive merge logic as being either the cause or at
    least the accelerator for this. So revert it for now, until
    we figure this out.

    Signed-off-by: Jens Axboe

    Jens Axboe
  • Linux allows executing the SG_IO ioctl on a partition or LVM volume, and
    will pass the command to the underlying block device. This is
    well-known, but it is also a large security problem when (via Unix
    permissions, ACLs, SELinux or a combination thereof) a program or user
    needs to be granted access only to part of the disk.

    This patch lets partitions forward a small set of harmless ioctls;
    others are logged with printk so that we can see which ioctls are
    actually sent. In my tests only CDROM_GET_CAPABILITY actually occurred.
    Of course it was being sent to a (partition on a) hard disk, so it would
    have failed with ENOTTY and the patch isn't changing anything in
    practice. Still, I'm treating it specially to avoid spamming the logs.

    In principle, this restriction should include programs running with
    CAP_SYS_RAWIO. If for example I let a program access /dev/sda2 and
    /dev/sdb, it still should not be able to read/write outside the
    boundaries of /dev/sda2 independent of the capabilities. However, for
    now programs with CAP_SYS_RAWIO will still be allowed to send the
    ioctls. Their actions will still be logged.

    This patch does not affect the non-libata IDE driver. That driver
    however already tests for bd != bd->bd_contains before issuing some
    ioctl; it could be restricted further to forbid these ioctls even for
    programs running with CAP_SYS_ADMIN/CAP_SYS_RAWIO.
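
    The whitelisting idea can be sketched in plain C as a simplified
    userspace model (the helper name is illustrative and the command
    values are local constants, not the kernel's actual tables):

    ```c
    #include <assert.h>
    #include <stdbool.h>

    /* Hypothetical subset of the whitelist - command values mirror common
     * SCSI/sg ioctl numbers but are defined locally here for illustration. */
    enum {
        SG_GET_VERSION_NUM_X   = 0x2282,
        SCSI_IOCTL_GET_IDLUN_X = 0x5382,
        CDROM_GET_CAPABILITY_X = 0x5331,
        SG_IO_X                = 0x2285,  /* full passthrough: NOT harmless */
    };

    /* Return true if cmd is harmless enough to forward from a partition. */
    static bool partition_ioctl_allowed(unsigned int cmd)
    {
        switch (cmd) {
        case SG_GET_VERSION_NUM_X:
        case SCSI_IOCTL_GET_IDLUN_X:
        case CDROM_GET_CAPABILITY_X:
            return true;
        default:
            return false;  /* the patch logs these with printk instead */
        }
    }
    ```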

    Cc: linux-scsi@vger.kernel.org
    Cc: Jens Axboe
    Cc: James Bottomley
    Signed-off-by: Paolo Bonzini
    [ Make it also print the command name when warning - Linus ]
    Signed-off-by: Linus Torvalds

    Paolo Bonzini
  • Introduce a wrapper around scsi_cmd_ioctl that takes a block device.

    The function will then be enhanced to detect partition block devices
    and, in that case, subject the ioctls to whitelisting.

    Cc: linux-scsi@vger.kernel.org
    Cc: Jens Axboe
    Cc: James Bottomley
    Signed-off-by: Paolo Bonzini
    Signed-off-by: Linus Torvalds

    Paolo Bonzini

11 Jan, 2012

2 commits

  • Introduce an ioctl which permits applications to query whether a block
    device is rotational.
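
    A minimal userspace sketch of querying the new ioctl (BLKROTATIONAL
    is defined in linux/fs.h from kernel 3.3 on; it fills in an unsigned
    short):

    ```c
    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/fs.h>   /* BLKROTATIONAL, kernel >= 3.3 */

    /* Returns 1 (rotational), 0 (non-rotational), or -1 on error. */
    static int device_is_rotational(const char *dev)
    {
        unsigned short rotational = 0;
        int fd = open(dev, O_RDONLY | O_NONBLOCK);
        if (fd < 0)
            return -1;
        if (ioctl(fd, BLKROTATIONAL, &rotational) < 0) {
            close(fd);
            return -1;
        }
        close(fd);
        return rotational != 0;
    }
    ```

    Run against e.g. /dev/sda (needs read permission on the node); errors
    such as a missing device are reported as -1 rather than a guess.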

    Signed-off-by: Martin K. Petersen
    Signed-off-by: Jens Axboe

    Martin K. Petersen
  • Stacking driver queue limits are typically bounded exclusively by the
    capabilities of the low level devices, not by the stacking driver itself.

    This patch introduces blk_set_stacking_limits() which has more liberal
    metrics than the default queue limits function. This allows us to
    inherit topology parameters from bottom devices without manually
    tweaking the default limits in each driver prior to calling the stacking function.

    Since there is now a clear distinction between stacking and low-level
    devices, blk_set_default_limits() has been modified to carry the more
    conservative values that we used to set manually.
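
    The stacking behavior can be sketched with a greatly simplified model
    of queue limits (two illustrative fields; the real struct has many
    more): a stacking driver starts wide open so only the bottom devices
    bound the result, then folds each bottom device in.

    ```c
    #include <limits.h>

    /* Simplified stand-in for struct queue_limits. */
    struct limits {
        unsigned int max_sectors;        /* combined with min() when stacking */
        unsigned int logical_block_size; /* combined with max() when stacking */
    };

    /* Conservative defaults a low-level driver would start from. */
    static void set_default_limits(struct limits *l)
    {
        l->max_sectors = 255;            /* safe, conservative value */
        l->logical_block_size = 512;
    }

    /* Permissive defaults for a stacking driver: start wide open so the
     * bottom devices alone bound the result. */
    static void set_stacking_limits(struct limits *l)
    {
        l->max_sectors = UINT_MAX;
        l->logical_block_size = 512;
    }

    /* Fold one bottom device's limits into the top (stacking) device's. */
    static void stack_limits(struct limits *top, const struct limits *bottom)
    {
        if (bottom->max_sectors < top->max_sectors)
            top->max_sectors = bottom->max_sectors;
        if (bottom->logical_block_size > top->logical_block_size)
            top->logical_block_size = bottom->logical_block_size;
    }
    ```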

    Signed-off-by: Martin K. Petersen
    Acked-by: Mike Snitzer
    Signed-off-by: Jens Axboe

    Martin K. Petersen

10 Jan, 2012

1 commit

  • * 'for-3.3' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup: (21 commits)
    cgroup: fix to allow mounting a hierarchy by name
    cgroup: move assignement out of condition in cgroup_attach_proc()
    cgroup: Remove task_lock() from cgroup_post_fork()
    cgroup: add sparse annotation to cgroup_iter_start() and cgroup_iter_end()
    cgroup: mark cgroup_rmdir_waitq and cgroup_attach_proc() as static
    cgroup: only need to check oldcgrp==newgrp once
    cgroup: remove redundant get/put of task struct
    cgroup: remove redundant get/put of old css_set from migrate
    cgroup: Remove unnecessary task_lock before fetching css_set on migration
    cgroup: Drop task_lock(parent) on cgroup_fork()
    cgroups: remove redundant get/put of css_set from css_set_check_fetched()
    resource cgroups: remove bogus cast
    cgroup: kill subsys->can_attach_task(), pre_attach() and attach_task()
    cgroup, cpuset: don't use ss->pre_attach()
    cgroup: don't use subsys->can_attach_task() or ->attach_task()
    cgroup: introduce cgroup_taskset and use it in subsys->can_attach(), cancel_attach() and attach()
    cgroup: improve old cgroup handling in cgroup_attach_proc()
    cgroup: always lock threadgroup during migration
    threadgroup: extend threadgroup_lock() to cover exit and exec
    threadgroup: rename signal->threadgroup_fork_lock to ->group_rwsem

    Fix up conflict in kernel/cgroup.c due to commit e0197aae59e5: "cgroups:
    fix a css_set not found bug in cgroup_attach_proc" that already
    mentioned that the bug is fixed (differently) in Tejun's cgroup
    patchset. This one, in other words.

    Linus Torvalds

09 Jan, 2012

1 commit

  • * 'for-linus2' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (165 commits)
    reiserfs: Properly display mount options in /proc/mounts
    vfs: prevent remount read-only if pending removes
    vfs: count unlinked inodes
    vfs: protect remounting superblock read-only
    vfs: keep list of mounts for each superblock
    vfs: switch ->show_options() to struct dentry *
    vfs: switch ->show_path() to struct dentry *
    vfs: switch ->show_devname() to struct dentry *
    vfs: switch ->show_stats to struct dentry *
    switch security_path_chmod() to struct path *
    vfs: prefer ->dentry->d_sb to ->mnt->mnt_sb
    vfs: trim includes a bit
    switch mnt_namespace ->root to struct mount
    vfs: take /proc/*/mounts and friends to fs/proc_namespace.c
    vfs: opencode mntget() mnt_set_mountpoint()
    vfs: spread struct mount - remaining argument of next_mnt()
    vfs: move fsnotify junk to struct mount
    vfs: move mnt_devname
    vfs: move mnt_list to struct mount
    vfs: switch pnode.h macros to struct mount *

    Linus Torvalds

06 Jan, 2012

1 commit

  • We're doing some odd things there, which already mess up various users
    (see the net/socket.c code that this removes), and it was going to add
    yet more crud to the block layer because of the incorrect error code
    translation.

    ENOIOCTLCMD is not an error return that should be returned to user mode
    from the "ioctl()" system call, but it should *also* not be translated
    as EINVAL ("Invalid argument"). It should be translated as ENOTTY
    ("Inappropriate ioctl for device").

    That EINVAL confusion has apparently so permeated some code that the
    block layer actually checks for it, which is sad. We continue to do so
    for now, but add a big comment about how wrong that is, and we should
    remove it entirely eventually. In the meantime, this tries to keep the
    changes localized to just the EINVAL -> ENOTTY fix, and removing code
    that makes it harder to do the right thing.
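
    The translation rule described above boils down to a tiny helper; a
    sketch (ENOIOCTLCMD is kernel-internal, value 515 in the kernel
    sources, and must never leak to user space):

    ```c
    #include <errno.h>

    /* Kernel-internal error code; never visible to user space. */
    #define ENOIOCTLCMD 515

    /* Kernel convention: errors are negative errno values internally.
     * ENOIOCTLCMD becomes ENOTTY; everything else passes through. */
    static int ioctl_user_retval(int error)
    {
        if (error == -ENOIOCTLCMD)
            return -ENOTTY;   /* "Inappropriate ioctl for device" */
        return error;
    }
    ```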

    Signed-off-by: Linus Torvalds

    Linus Torvalds

29 Dec, 2011

1 commit

  • Commit 5e081591 "block: warn if tag is greater than real_max_depth"
    cleaned up blk_queue_end_tag() to warn when the tag is truly invalid
    (greater than real_max_depth). However, it changed behavior in the tag <
    max_depth case to not end the request, leading to triggering of
    BUG_ON(blk_queued_rq(rq)) in the request completion path.

    In order to allow blk_queue_resize_tags() to shrink the tag space
    blk_queue_end_tag() must always complete tags with a value less than
    real_max_depth regardless of the current max_depth. The comment about
    "handling the shrink case" seems to be what prompted changes in this
    space, so remove it and BUG on all invalid tags (made even simpler by
    Matthew's suggestion to use an unsigned compare).
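
    The unsigned-compare trick works because casting a possibly-negative
    tag to unsigned turns negative values into huge ones, so one compare
    rejects both out-of-range directions (helper name illustrative):

    ```c
    #include <stdbool.h>

    /* A tag is valid only when 0 <= tag < real_max_depth.  The cast
     * makes a single compare cover both invalid cases. */
    static bool tag_in_range(int tag, unsigned int real_max_depth)
    {
        return (unsigned int)tag < real_max_depth;
    }
    ```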

    Signed-off-by: Dan Williams
    Cc: Tao Ma
    Cc: Matthew Wilcox
    Reported-by: Meelis Roos
    Reported-by: Ed Nadolski
    Cc: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Jens Axboe

    Dan Williams

28 Dec, 2011

1 commit

  • 6e736be7 "block: make ioc get/put interface more conventional and fix
    race on alloction" added WARN_ON_ONCE() in exit_io_context() which
    triggers if !PF_EXITING. All tasks hitting exit_io_context() from
    task exit should have PF_EXITING set, but task struct teardown
    after fork failure calls into the function without PF_EXITING,
    triggering the condition.

    WARNING: at block/blk-ioc.c:234 exit_io_context+0x40/0x92()
    Pid: 17090, comm: trinity Not tainted 3.2.0-rc6-next-20111222-sasha-dirty #77
    Call Trace:
    [] warn_slowpath_common+0x8f/0xb2
    [] warn_slowpath_null+0x18/0x1a
    [] exit_io_context+0x40/0x92
    [] copy_process+0x126f/0x1453
    [] do_fork+0x120/0x3e9
    [] sys_clone+0x26/0x28
    [] stub_clone+0x13/0x20
    ---[ end trace a2e4eb670b375238 ]---

    Reported-by: Sasha Levin
    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo

25 Dec, 2011

1 commit

  • While fixing io_context creation / task exit race condition,
    6e736be7f2 "block: make ioc get/put interface more conventional and
    fix race on alloction" also prevented an exiting (%PF_EXITING) task
    from creating its own io_context. This is incorrect as exit path may
    issue IOs, e.g. from exit_files(), and if those IOs are the first ones
    issued by the task, io_context needs to be created to process the IOs.

    Combined with the existing problem of io_context / io_cq creation
    failure having the possibility of stalling IO, this problem results in
    deterministic full IO lockup with certain workloads.

    Fix it by allowing io_context creation regardless of %PF_EXITING for

    Signed-off-by: Tejun Heo
    Reported-by: Andrew Morton
    Reported-by: Hugh Dickins
    Signed-off-by: Jens Axboe

    Tejun Heo

16 Dec, 2011

4 commits

  • All requests of a queue could be merged into requests of another
    queue. Such a queue will then have no requests in it, but it is still
    on the service tree, which will cause a kernel oops.
    I encountered a BUG_ON() in cfq_dispatch_request() with the next
    patch, but the issue should exist even without it.

    Signed-off-by: Shaohua Li
    Signed-off-by: Jens Axboe

    Shaohua Li
  • In my workload, thread 1 accesses a, a+2, ..., while thread 2 accesses
    a+1, a+3, .... When the requests are flushed to the queue, a and a+1
    are merged into (a, a+1), and a+2 and a+3 into (a+2, a+3), but
    (a, a+1) and (a+2, a+3) are never merged with each other.
    With the recursive merge below, workload throughput improves by 20%
    and context switches drop by 60%.
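
    A toy model of why a single merge attempt leaves the interleaved pairs
    unmerged while a recursive (repeated) pass collapses them; this is a
    deliberately simplified sketch, not the block layer's actual merge
    logic:

    ```c
    #include <stddef.h>

    struct req { unsigned long start, end; };  /* [start, end) in sectors */

    /* One pass that only tries to merge each request with its immediate
     * successor - pairs form, but a merged pair is never retried against
     * the next pair. */
    static size_t merge_pairwise(struct req *r, size_t n)
    {
        size_t out = 0;
        for (size_t i = 0; i < n; i++) {
            if (i + 1 < n && r[i].end == r[i + 1].start) {
                r[out] = r[i];
                r[out].end = r[i + 1].end;  /* merge the pair */
                out++;
                i++;                        /* consume both, no retry */
            } else {
                r[out++] = r[i];
            }
        }
        return out;
    }

    /* Recursive merging: re-run the pass until nothing merges, so merged
     * pairs can merge with neighboring pairs too. */
    static size_t merge_recursive(struct req *r, size_t n)
    {
        size_t m;
        while ((m = merge_pairwise(r, n)) != n)
            n = m;
        return n;
    }
    ```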

    Signed-off-by: Shaohua Li
    Signed-off-by: Jens Axboe

    Shaohua Li
  • All requests of a queue could be merged into requests of another
    queue. Such a queue will then have no requests in it, but it is still
    on the service tree, which will cause a kernel oops.
    I encountered a BUG_ON() in cfq_dispatch_request() with the next
    patch, but the issue should exist even without it.

    Signed-off-by: Shaohua Li
    Signed-off-by: Jens Axboe

    Shaohua Li
  • While probing, fd sets up the queue, probes the hardware, and tears
    down the queue if probing fails. In the process, blk_drain_queue()
    kicks the queue which failed to finish initialization, and fd is
    unhappy about that:

    floppy0: no floppy controllers found
    ------------[ cut here ]------------
    WARNING: at drivers/block/floppy.c:2929 do_fd_request+0xbf/0xd0()
    Hardware name: To Be Filled By O.E.M.
    VFS: do_fd_request called on non-open device
    Modules linked in:
    Pid: 1, comm: swapper Not tainted 3.2.0-rc4-00077-g5983fe2 #2
    Call Trace:
    [] warn_slowpath_common+0x7a/0xb0
    [] warn_slowpath_fmt+0x41/0x50
    [] do_fd_request+0xbf/0xd0
    [] blk_drain_queue+0x65/0x80
    [] blk_cleanup_queue+0xe3/0x1a0
    [] floppy_init+0xdeb/0xe28
    [] ? daring+0x6b/0x6b
    [] do_one_initcall+0x3f/0x170
    [] kernel_init+0x9d/0x11e
    [] ? schedule_tail+0x22/0xa0
    [] kernel_thread_helper+0x4/0x10
    [] ? start_kernel+0x2be/0x2be
    [] ? gs_change+0xb/0xb

    Avoid it by making blk_drain_queue() kick queue iff dispatch queue has
    something on it.

    Signed-off-by: Tejun Heo
    Reported-by: Ralf Hildebrandt
    Reported-by: Wu Fengguang
    Tested-by: Sergei Trofimovich
    Signed-off-by: Jens Axboe

    Tejun Heo

14 Dec, 2011

16 commits

  • Now block layer knows everything necessary to create and associate
    icq's with requests. Move ioc_create_icq() to blk-ioc.c and update
    get_request() such that, if elevator_type->icq_size is set, requests
    are automatically associated with their matching icq's before
    elv_set_request(). io_context reference is also managed by block core
    on request alloc/free.

    * Only ioprio/cgroup changed handling remains from cfq_get_cic().
    Collapsed into cfq_set_request().

    * This removes queue kicking on icq allocation failure (for now). As
    icq allocation failure is rare and the only effect queue kicking
    achieved was possibly accelerating queue processing, this change
    shouldn't be noticeable.

    There is a larger underlying problem. Unlike request allocation,
    icq allocation is not guaranteed to succeed eventually after
    retries. The number of icqs is unbounded and thus mempool can't be the
    solution either. This effectively adds allocation dependency on
    memory free path and thus possibility of deadlock.

    This usually wouldn't happen because icq allocation is not a hot
    path and, even when the condition triggers, it's highly unlikely
    that none of the writeback workers already has icq.

    However, this is still possible especially if elevator is being
    switched under high memory pressure, so we better get it fixed.
    Probably the only solution is just bypassing elevator and appending
    to dispatch queue on any elevator allocation failure.

    * Comment added to explain how icq's are managed and synchronized.

    This completes cleanup of io_context interface.

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
  • Add elevator_ops->elevator_init_icq_fn() and restructure
    cfq_create_cic() and rename it to ioc_create_icq().

    The new function expects its caller to pass in io_context, uses
    elevator_type->icq_cache, handles generic init, calls the new elevator
    operation for elevator specific initialization, and returns pointer to
    created or looked up icq. This leaves cfq_icq_pool variable without
    any user. Removed.

    This prepares for io_context interface cleanup and doesn't introduce
    any functional difference.

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
  • With kmem_cache managed by blk-ioc, io_cq exit/release can be moved to
    blk-ioc too. The odd ->io_cq->exit/release() callbacks are replaced
    with elevator_ops->elevator_exit_icq_fn() with unlinking from both ioc
    and q, and freeing automatically handled by blk-ioc. The elevator
    operation only needs to perform the exit operation specific to the elevator
    - in cfq's case, exiting the cfqq's.

    Also, clearing of io_cq's on q detach is moved to block core and
    automatically performed on elevator switch and q release.

    Because the q io_cq points to might be freed before RCU callback for
    the io_cq runs, blk-ioc code should remember to which cache the io_cq
    needs to be freed when the io_cq is released. New field
    io_cq->__rcu_icq_cache is added for this purpose. As both the new
    field and rcu_head are used only after io_cq is released and the
    q/ioc_node fields aren't, they are put into unions.
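
    The union trick can be illustrated with stand-in types (names modeled
    on the description above; the kernel's real list/rcu types also hold
    two pointers each). Fields used only while the io_cq is live overlap
    with fields used only after release, so the overlay costs no space:

    ```c
    #include <stddef.h>

    /* Two-pointer stand-ins for the kernel's list and RCU head types. */
    struct list_node  { struct list_node *next, *prev; };
    struct rcu_head_x { struct rcu_head_x *next;
                        void (*func)(struct rcu_head_x *); };

    struct io_cq_x {
        void *q;    /* request_queue */
        void *ioc;  /* io_context    */
        union {
            struct list_node q_node;    /* used while live */
            void *__rcu_icq_cache;      /* which cache to free into */
        };
        union {
            struct list_node ioc_node;  /* used while live */
            struct rcu_head_x __rcu_head;
        };
    };
    ```

    On any pointer-aligned ABI the struct is six pointers wide; without
    the unions it would need nine.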

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
  • Let elevators set ->icq_size and ->icq_align in elevator_type and
    elv_register() and elv_unregister() respectively create and destroy
    kmem_cache for icq.

    * elv_register() now can return failure. All callers updated.

    * icq caches are automatically named "ELVNAME_io_cq".

    * cfq_slab_setup/kill() are collapsed into cfq_init/exit().

    * While at it, minor indentation change for iosched_cfq.elevator_name
    for consistency.

    This will help moving icq management to block core. This doesn't
    introduce any functional change.

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
  • Now that all io_cq related data structures are in block core layer,
    io_cq lookup can be moved from cfq-iosched.c to blk-ioc.c.

    Lookup logic from cfq_cic_lookup() is moved to ioc_lookup_icq() with
    parameter return type changes (cfqd -> request_queue, cfq_io_cq ->
    io_cq) and cfq_cic_lookup() becomes a thin wrapper around ioc_lookup_icq().

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
  • Most of icq management is about to be moved out of cfq into blk-ioc.
    This patch prepares for it.

    * Move cfqd->icq_list to request_queue->icq_list

    * Make request explicitly point to icq instead of through elevator
    private data. ->elevator_private[3] is replaced with sub struct elv
    which contains icq pointer and priv[2]. cfq is updated accordingly.

    * Meaningless clearing of ->elevator_private[0] removed from
    elv_set_request(). At that point in code, the field was guaranteed
    to be %NULL anyway.

    This patch doesn't introduce any functional change.

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
  • Currently io_context and cfq logics are mixed without clear boundary.
    Most of io_context is independent from cfq but cfq_io_context handling
    logic is dispersed between generic ioc code and cfq.

    cfq_io_context represents association between an io_context and a
    request_queue, which is a concept useful outside of cfq, but it also
    contains fields which are useful only to cfq.

    This patch takes out the generic part and puts it into io_cq (io
    context-queue) and the rest into cfq_io_cq (cic moniker remains the
    same) which contains io_cq. The following changes are made together.

    * cfq_ttime and cfq_io_cq now live in cfq-iosched.c.

    * All related fields, functions and constants are renamed accordingly.

    * ioc->ioc_data is now "struct io_cq *" instead of "void *" and
    renamed to icq_hint.

    This prepares for io_context API cleanup. Documentation is currently
    sparse. It will be added later.

    Changes in this patch are mechanical and don't cause functional change.

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
  • elevator_queue->ops points to the same ops struct ->elevator_type.ops
    is pointing to. The only effect of caching it in elevator_queue is
    shorter notation - it doesn't save any indirect derefence.

    Relocate elevator_type->list, which is used only during module init/exit,
    to the end of the structure, rename elevator_queue->elevator_type to
    ->type, and replace elevator_queue->ops with elevator_queue->type.ops.

    This doesn't introduce any functional difference.

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
  • Elevator switch sequence first attached the new elevator, then tried
    registering it (sysfs) and if that failed attached back the old
    elevator. However, sysfs registration doesn't require the elevator to
    be attached, so there is no reason to do the "detach, attach new,
    register, maybe re-attach old" sequence. It can just do "register,
    detach, attach".

    * elevator_init_queue() is updated to set ->elevator_data directly and
    return 0 / -errno. This allows elevator_exit() on an unattached elevator.

    * __elv_unregister_queue() which was necessary to unregister
    unattached q is removed in favor of __elv_register_queue() which can
    register unattached q.

    * elevator_attach() becomes a single assignment and obscures more than
    it helps. Dropped.

    This will help cleaning up io_context handling across elevator switch.

    This patch doesn't introduce visible behavior change.

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
  • When called under queue_lock, current_io_context() triggers lockdep
    warning if it hits allocation path. This is because io_context
    installation is protected by task_lock which is not IRQ safe, so it
    triggers irq-unsafe-lock -> irq -> irq-safe-lock -> irq-unsafe-lock
    deadlock warning.

    Given the restriction, accessor + creator rolled into one doesn't work
    too well. Drop current_io_context() and let the users access
    task->io_context directly inside queue_lock combined with explicit
    creation using create_io_context().

    Future ioc updates will further consolidate ioc access and the create
    interface will be unexported.

    While at it, relocate ioc internal interface declarations in blk.h and
    add section comments before and after.

    This patch does not introduce functional change.

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
  • Now that lazy paths are removed, cfqd_dead_key() is meaningless and
    cic->q can be used wherever cic->key is used. Kill cic->key.

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
  • Now that cic's are immediately unlinked under both locks, there's no
    need to count and drain cic's before module unload. RCU callback
    completion is waited for with rcu_barrier().

    While at it, remove residual RCU operations on cic_list.

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
  • Now that all cic's are immediately unlinked from both ioc and queue,
    lazy dropping from lookup path and trimming on elevator unregister are
    unnecessary. Kill them and remove now unused elevator_ops->trim().

    This also leaves call_for_each_cic() without any user. Removed.

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
  • cic is association between io_context and request_queue. A cic is
    linked from both ioc and q and should be destroyed when either one
    goes away. As ioc and q both have their own locks, locking becomes a
    bit complex - both orders work for removal from one but not from the other.

    Currently, cfq tries to circumvent this locking order issue with RCU.
    ioc->lock nests inside queue_lock but the radix tree and cic's are
    also protected by RCU allowing either side to walk their lists without
    grabbing lock.

    This rather unconventional use of RCU quickly devolves into extremely
    fragile convolution. e.g. The following is from cfqd going away too
    soon after ioc and q exits raced.

    general protection fault: 0000 [#1] PREEMPT SMP
    CPU 2
    Modules linked in:
    [ 88.503444]
    Pid: 599, comm: hexdump Not tainted 3.1.0-rc10-work+ #158 Bochs Bochs
    RIP: 0010:[] [] cfq_exit_single_io_context+0x58/0xf0
    Call Trace:
    [] call_for_each_cic+0x5a/0x90
    [] cfq_exit_io_context+0x15/0x20
    [] exit_io_context+0x100/0x140
    [] do_exit+0x579/0x850
    [] do_group_exit+0x5b/0xd0
    [] sys_exit_group+0x17/0x20
    [] system_call_fastpath+0x16/0x1b

    The only real hot path here is cic lookup during request
    initialization and avoiding extra locking requires very confined use
    of RCU. This patch makes cic removal from both ioc and request_queue
    perform double-locking and unlink immediately.

    * From q side, the change is almost trivial as ioc->lock nests inside
    queue_lock. It just needs to grab each ioc->lock as it walks
    cic_list and unlink it.

    * From ioc side, it's a bit more difficult because of inversed lock
    order. ioc needs its lock to walk its cic_list but can't grab the
    matching queue_lock and needs to perform unlock-relock dancing.

    Unlinking is now wholly done from put_io_context() and fast path is
    optimized by using the queue_lock the caller already holds, which is
    by far the most common case. If the ioc accessed multiple devices,
    it tries with trylock. In unlikely cases of fast path failure, it
    falls back to full double-locking dance from workqueue.
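
    The trylock-with-fallback pattern can be sketched with pthreads
    (names illustrative; the kernel code must also recheck state after
    re-locking, which is omitted here). The canonical order is
    queue_lock -> ioc_lock, so a path already holding ioc_lock may not
    block on queue_lock:

    ```c
    #include <pthread.h>

    static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t ioc_lock   = PTHREAD_MUTEX_INITIALIZER;
    static int unlinked;

    static void icq_unlink(void)
    {
        pthread_mutex_lock(&ioc_lock);
        if (pthread_mutex_trylock(&queue_lock) != 0) {
            /* Contended: blocking here would invert the lock order, so
             * drop ioc_lock and retake both in the canonical order. */
            pthread_mutex_unlock(&ioc_lock);
            pthread_mutex_lock(&queue_lock);
            pthread_mutex_lock(&ioc_lock);
        }
        unlinked = 1;  /* ...unlink the icq from ioc and queue here... */
        pthread_mutex_unlock(&ioc_lock);
        pthread_mutex_unlock(&queue_lock);
    }
    ```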

    Double-locking isn't the prettiest thing in the world but it's *far*
    simpler and more understandable than RCU trick without adding any
    meaningful overhead.

    This still leaves a lot of now unnecessary RCU logics. Future patches
    will trim them.

    -v2: Vivek pointed out that cic->q was being dereferenced after
    cic->release() was called. Updated to use local variable @this_q instead.

    Signed-off-by: Tejun Heo
    Cc: Vivek Goyal
    Signed-off-by: Jens Axboe

    Tejun Heo
  • * cfq_cic_lookup() may be called without queue_lock and multiple tasks
    can execute it simultaneously for the same shared ioc. Nothing
    prevents them racing each other and trying to drop the same dead cic
    entry multiple times.

    * smp_wmb() in cfq_exit_cic() doesn't really do anything and nothing
    prevents cfq_cic_lookup() seeing stale cic->key. This usually
    doesn't blow up because by the time cic is exited, all requests have
    been drained and new requests are terminated before going through
    elevator. However, it can still be triggered by plug merge path
    which doesn't grab queue_lock and thus can't check DEAD state reliably.

    This patch updates lookup locking such that,

    * Lookup is always performed under queue_lock. This doesn't add any
    more locking. The only issue is cfq_allow_merge() which can be
    called from plug merge path without holding any lock. For now, this
    is worked around by using cic of the request to merge into, which is
    guaranteed to have the same ioc. For longer term, I think it would
    be best to separate out plug merge method from regular one.

    * Spurious ioc->lock locking around cic lookup hint assignment dropped.

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
  • cfq_get_io_context() would fail if multiple tasks race to insert cic's
    for the same association. This patch restructures
    cfq_get_io_context() such that slow path insertion race is handled properly.

    Note that the restructuring also makes cfq_get_io_context() called
    under queue_lock and performs both ioc and cfqd insertions while
    holding both ioc and queue locks. This is part of on-going locking
    tightening and will be used to simplify synchronization rules.

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo