11 Feb, 2012

1 commit

  • 11a3122f6c "block: strip out locking optimization in put_io_context()"
    removed the ioc_lock depth lockdep annotation along with the locking
    optimization; however, while recursing from put_io_context() is no
    longer possible, ioc_release_fn() may still end up putting the last
    reference of another ioc through the elevator, which will grab that
    ioc's lock, triggering a spurious (as the ioc is always a different
    one) A-A deadlock warning.

    As this can only happen once from ioc_release_fn(), using a non-zero
    lockdep subclass from ioc_release_fn() is enough. Use subclass 1.
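    A minimal sketch of the idea (the function body is abbreviated; only
    the subclass annotation matters here):

        static void ioc_release_fn(struct work_struct *work)
        {
                struct io_context *ioc = container_of(work, struct io_context,
                                                      release_work);
                unsigned long flags;

                /*
                 * Exiting icq's via the elevator may put the last
                 * reference of another ioc, which takes that (different)
                 * ioc->lock while this one is held. A non-zero subclass
                 * tells lockdep the nesting is intentional, not A-A.
                 */
                spin_lock_irqsave_nested(&ioc->lock, flags, 1);
                /* ... release the icq's, then drop the lock ... */
                spin_unlock_irqrestore(&ioc->lock, flags);
        }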

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     

09 Feb, 2012

1 commit

  • We create the "bsg" link only if q->kobj.sd is not NULL, so remove it
    only when the same condition holds.
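    In other words, the teardown side now mirrors the creation side
    (sketch; the rest of the function is omitted):

        void bsg_unregister_queue(struct request_queue *q)
        {
                /* the link was only created when q->kobj.sd existed */
                if (q->kobj.sd)
                        sysfs_remove_link(&q->kobj, "bsg");
                /* ... remove the bsg class device itself ... */
        }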

    Fixes:

    WARNING: at fs/sysfs/inode.c:323 sysfs_hash_and_remove+0x2b/0x77()
    sysfs: can not remove 'bsg', no directory
    Call Trace:
    [] warn_slowpath_common+0x6a/0x7f
    [] ? sysfs_hash_and_remove+0x2b/0x77
    [] warn_slowpath_fmt+0x2b/0x2f
    [] sysfs_hash_and_remove+0x2b/0x77
    [] sysfs_remove_link+0x20/0x23
    [] bsg_unregister_queue+0x40/0x6d
    [] __scsi_remove_device+0x31/0x9d
    [] scsi_forget_host+0x41/0x52
    [] scsi_remove_host+0x71/0xe0
    [] quiesce_and_remove_host+0x51/0x83 [usb_storage]
    [] usb_stor_disconnect+0x18/0x22 [usb_storage]
    [] usb_unbind_interface+0x4e/0x109
    [] __device_release_driver+0x6b/0xa6
    [] device_release_driver+0x17/0x22
    [] bus_remove_device+0xd6/0xe6
    [] device_del+0xf2/0x137
    [] usb_disable_device+0x94/0x1a0

    Signed-off-by: Stanislaw Gruszka
    Signed-off-by: Jens Axboe

    Stanislaw Gruszka
     

08 Feb, 2012

2 commits

  • Plug merge calls two elevator callbacks outside the queue lock -
    elevator_allow_merge_fn() and elevator_bio_merged_fn(). Although
    attempt_plug_merge() suggests that the elevator is guaranteed to be
    there through the existing request on the plug list, nothing prevents
    plug merge from calling into a dying or initializing elevator.

    For regular merges, bypass ensures that the elvpriv count reaches
    zero, which in turn prevents merges, as all !ELVPRIV requests get
    REQ_SOFTBARRIER from forced back insertion. Plug merge doesn't check
    ELVPRIV, and, as the requests haven't gone through elevator insertion
    yet, they don't have SOFTBARRIER set, allowing merges on a bypassed
    queue.

    This, for example, leads to the following crash during elevator
    switch.

    BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
    IP: [] cfq_allow_merge+0x49/0xa0
    PGD 112cbc067 PUD 115d5c067 PMD 0
    Oops: 0000 [#1] PREEMPT SMP
    CPU 1
    Modules linked in: deadline_iosched

    Pid: 819, comm: dd Not tainted 3.3.0-rc2-work+ #76 Bochs Bochs
    RIP: 0010:[] [] cfq_allow_merge+0x49/0xa0
    RSP: 0018:ffff8801143a38f8 EFLAGS: 00010297
    RAX: 0000000000000000 RBX: ffff88011817ce28 RCX: ffff880116eb6cc0
    RDX: 0000000000000000 RSI: ffff880118056e20 RDI: ffff8801199512f8
    RBP: ffff8801143a3908 R08: 0000000000000000 R09: 0000000000000000
    R10: 0000000000000001 R11: 0000000000000000 R12: ffff880118195708
    R13: ffff880118052aa0 R14: ffff8801143a3d50 R15: ffff880118195708
    FS: 00007f19f82cb700(0000) GS:ffff88011fc80000(0000) knlGS:0000000000000000
    CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
    CR2: 0000000000000008 CR3: 0000000112c6a000 CR4: 00000000000006e0
    DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
    Process dd (pid: 819, threadinfo ffff8801143a2000, task ffff880116eb6cc0)
    Stack:
    ffff88011817ce28 ffff880118195708 ffff8801143a3928 ffffffff81391bba
    ffff88011817ce28 ffff880118195708 ffff8801143a3948 ffffffff81391bf1
    ffff88011817ce28 0000000000000000 ffff8801143a39a8 ffffffff81398e3e
    Call Trace:
    [] elv_rq_merge_ok+0x4a/0x60
    [] elv_try_merge+0x21/0x40
    [] blk_queue_bio+0x8e/0x390
    [] generic_make_request+0xca/0x100
    [] submit_bio+0x74/0x100
    [] __blockdev_direct_IO+0x1ce2/0x3450
    [] blkdev_direct_IO+0x57/0x60
    [] generic_file_aio_read+0x6d5/0x760
    [] do_sync_read+0xe2/0x120
    [] vfs_read+0xc5/0x180
    [] sys_read+0x51/0x90
    [] system_call_fastpath+0x16/0x1b

    There are multiple ways to fix this including making plug merge check
    ELVPRIV; however,

    * Calling into elevator outside queue lock is confusing and
    error-prone.

    * Requests on plug list aren't known to the elevator. They aren't on
    the elevator yet, so there's no elevator specific state to update.

    * Given the nature of plug merges - collecting bio's for the same
    purpose from the same issuer - elevator specific restrictions aren't
    applicable.

    So, simply don't call into elevator methods from plug merge by moving
    elv_bio_merged() from bio_attempt_*_merge() to blk_queue_bio(), and
    using blk_try_merge() in attempt_plug_merge().
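    In outline, the plug merge loop then looks roughly like this
    (simplified; note there are no elevator callbacks on the path):

        list_for_each_entry_reverse(rq, &plug->list, queuelist) {
                if (rq->q != q || !blk_rq_merge_ok(rq, bio))
                        continue;

                /* elevator-neutral direction test only */
                el_ret = blk_try_merge(rq, bio);
                if (el_ret == ELEVATOR_BACK_MERGE) {
                        if (bio_attempt_back_merge(q, rq, bio))
                                return true;
                } else if (el_ret == ELEVATOR_FRONT_MERGE) {
                        if (bio_attempt_front_merge(q, rq, bio))
                                return true;
                }
        }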

    This is based on Jens' patch to skip elevator_allow_merge_fn() from
    plug merge.

    Note that this makes per-cgroup merged stats skip plug merging.

    Signed-off-by: Tejun Heo
    LKML-Reference:
    Original-patch-by: Jens Axboe
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • blk_rq_merge_ok() is the elevator-neutral part of the merge
    eligibility test. blk_try_merge() determines the merge direction and
    expects the caller to have tested elv_rq_merge_ok() previously.

    elv_rq_merge_ok() now wraps blk_rq_merge_ok() and then calls
    elv_iosched_allow_merge(). elv_try_merge() is removed and the two
    callers are updated to call elv_rq_merge_ok() explicitly, followed
    by blk_try_merge(). While at it, make the rq_merge_ok() functions
    return bool.
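    The resulting call pattern at the updated call sites is roughly
    (a sketch, not verbatim):

        /* eligibility first (includes the elevator's allow_merge),
         * then the purely positional direction test */
        if (!elv_rq_merge_ok(rq, bio))
                return ELEVATOR_NO_MERGE;

        /* yields ELEVATOR_BACK_MERGE, ELEVATOR_FRONT_MERGE or
         * ELEVATOR_NO_MERGE from sector arithmetic alone */
        return blk_try_merge(rq, bio);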

    This is to prepare for plug merge update and doesn't introduce any
    behavior change.

    This is based on Jens' patch to skip elevator_allow_merge_fn() from
    plug merge.

    Signed-off-by: Tejun Heo
    LKML-Reference:
    Original-patch-by: Jens Axboe
    Signed-off-by: Jens Axboe

    Tejun Heo
     

07 Feb, 2012

1 commit

  • put_io_context() performed complex trylock dancing to avoid
    deferring ioc release to a workqueue. It was also broken on UP
    because the trylock was always assumed to succeed, which resulted in
    an unbalanced preemption count.

    While there are ways to fix the UP breakage, even the most
    pathological microbenchmark (forced ioc allocation and a tight
    fork/exit loop) fails to show any appreciable performance benefit
    from the optimization. Strip it out. If workloads affected by this
    change turn up, the simpler optimization from the discussion thread
    can be applied later.
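    After the strip-out, put_io_context() reduces to roughly the
    following (a sketch, not the verbatim result):

        void put_io_context(struct io_context *ioc)
        {
                if (ioc == NULL)
                        return;

                BUG_ON(atomic_long_read(&ioc->refcount) <= 0);

                /*
                 * The final release needs ioc->lock and possibly queue
                 * locks; always punt it to the workqueue instead of
                 * trylock dancing in the caller's context.
                 */
                if (atomic_long_dec_and_test(&ioc->refcount))
                        schedule_work(&ioc->release_work);
        }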

    Signed-off-by: Tejun Heo
    LKML-Reference:
    Signed-off-by: Jens Axboe

    Tejun Heo
     

06 Feb, 2012

1 commit

  • Meelis reported a warning:

    WARNING: at kernel/timer.c:1122 run_timer_softirq+0x199/0x1ec()
    Hardware name: 939Dual-SATA2
    timer: cfq_idle_slice_timer+0x0/0xaa preempt leak: 00000102 -> 00000103
    Modules linked in: sr_mod cdrom videodev media drm_kms_helper ohci_hcd ehci_hcd v4l2_compat_ioctl32 usbcore i2c_ali15x3 snd_seq drm snd_timer snd_seq
    Pid: 0, comm: swapper Not tainted 3.3.0-rc2-00110-gd125666 #176
    Call Trace:
    [] warn_slowpath_common+0x7e/0x96
    [] ? cfq_slice_expired+0x1d/0x1d
    [] warn_slowpath_fmt+0x41/0x43
    [] ? cfq_idle_slice_timer+0xa1/0xaa
    [] ? cfq_slice_expired+0x1d/0x1d
    [] run_timer_softirq+0x199/0x1ec
    [] ? timekeeping_get_ns+0x12/0x31
    [] ? apic_write+0x11/0x13
    [] __do_softirq+0x74/0xfa
    [] call_softirq+0x1a/0x30
    [] do_softirq+0x31/0x68
    [] irq_exit+0x3d/0xa3
    [] smp_apic_timer_interrupt+0x6b/0x77
    [] apic_timer_interrupt+0x69/0x70
    [] ? sched_clock_cpu+0x73/0x7d
    [] ? sched_clock_cpu+0x73/0x7d
    [] ? default_idle+0x1e/0x32
    [] ? default_idle+0x18/0x32
    [] cpu_idle+0x87/0xd1
    [] rest_init+0x85/0x89
    [] start_kernel+0x2eb/0x2f8
    [] x86_64_start_reservations+0x7e/0x82
    [] x86_64_start_kernel+0xf0/0xf7

    this_q == locked_q is possible. There are two problems here:
    1. In the UP case, there is a preemption counter issue, as
    spin_trylock() always succeeds (see below).
    2. In the SMP case, the loop breaks too early.
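    Problem 1 comes from how UP spinlocks are defined (paraphrasing
    include/linux/spinlock_api_up.h):

        /*
         * On UP, spin_trylock() compiles down to "disable preemption,
         * return 1" - it can never fail. Code which assumes that a
         * failed trylock takes no lock therefore leaks one
         * preempt_disable() per "failure" path it believed possible.
         */
        #define _raw_spin_trylock(lock) ({ __LOCK(lock); 1; })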

    Signed-off-by: Shaohua Li
    Reported-by: Meelis Roos
    Reported-by: Knut Petersen
    Tested-by: Knut Petersen
    Signed-off-by: Jens Axboe

    Shaohua Li
     

19 Jan, 2012

2 commits

  • Vivek reported a kernel crash:
    [ 94.217015] BUG: unable to handle kernel NULL pointer dereference at 000000000000001c
    [ 94.218004] IP: [] kmem_cache_free+0x5e/0x200
    [ 94.218004] PGD 13abda067 PUD 137d52067 PMD 0
    [ 94.218004] Oops: 0000 [#1] SMP DEBUG_PAGEALLOC
    [ 94.218004] CPU 0
    [ 94.218004] Modules linked in: [last unloaded: scsi_wait_scan]
    [ 94.218004]
    [ 94.218004] Pid: 0, comm: swapper/0 Not tainted 3.2.0+ #16 Hewlett-Packard HP xw6600 Workstation/0A9Ch
    [ 94.218004] RIP: 0010:[] [] kmem_cache_free+0x5e/0x200
    [ 94.218004] RSP: 0018:ffff88013fc03de0 EFLAGS: 00010006
    [ 94.218004] RAX: ffffffff81e0d020 RBX: ffff880138b3c680 RCX: 00000001801c001b
    [ 94.218004] RDX: 00000000003aac1d RSI: ffff880138b3c680 RDI: ffffffff81142fae
    [ 94.218004] RBP: ffff88013fc03e10 R08: ffff880137830238 R09: 0000000000000001
    [ 94.218004] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
    [ 94.218004] R13: ffffea0004e2cf00 R14: ffffffff812f6eb6 R15: 0000000000000246
    [ 94.218004] FS: 0000000000000000(0000) GS:ffff88013fc00000(0000) knlGS:0000000000000000
    [ 94.218004] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
    [ 94.218004] CR2: 000000000000001c CR3: 00000001395ab000 CR4: 00000000000006f0
    [ 94.218004] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    [ 94.218004] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
    [ 94.218004] Process swapper/0 (pid: 0, threadinfo ffffffff81e00000, task ffffffff81e0d020)
    [ 94.218004] Stack:
    [ 94.218004] 0000000000000102 ffff88013fc0db20 ffffffff81e22700 ffff880139500f00
    [ 94.218004] 0000000000000001 000000000000000a ffff88013fc03e20 ffffffff812f6eb6
    [ 94.218004] ffff88013fc03e90 ffffffff810c8da2 ffffffff81e01fd8 ffff880137830240
    [ 94.218004] Call Trace:
    [ 94.218004]
    [ 94.218004] [] icq_free_icq_rcu+0x16/0x20
    [ 94.218004] [] __rcu_process_callbacks+0x1c2/0x420
    [ 94.218004] [] rcu_process_callbacks+0x38/0x250
    [ 94.218004] [] __do_softirq+0xce/0x3e0
    [ 94.218004] [] ? clockevents_program_event+0x74/0x100
    [ 94.218004] [] ? tick_program_event+0x24/0x30
    [ 94.218004] [] call_softirq+0x1c/0x30
    [ 94.218004] [] do_softirq+0x8d/0xc0
    [ 94.218004] [] irq_exit+0xae/0xe0
    [ 94.218004] [] smp_apic_timer_interrupt+0x6e/0x99
    [ 94.218004] [] apic_timer_interrupt+0x70/0x80

    Once a queue is quiesced, it's not supposed to have any elvpriv data
    or icq's, and elevator switching depends on that. The request alloc
    path followed the rule for elvpriv data but forgot to apply it to
    icq's, leading to the above crash during an elevator switch. Fix it
    by not allocating icq's if ELVPRIV is not set for the request.
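    In sketch form, the rule in get_request() becomes (et being the
    queue's elevator_type; signatures approximate):

        /* icq's carry elevator-private state; only ELVPRIV requests
         * ever enter the elevator, so only they may allocate one */
        if ((rw_flags & REQ_ELVPRIV) && unlikely(et->icq_cache && !icq))
                icq = ioc_create_icq(q, gfp_mask);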

    Reported-by: Vivek Goyal
    Tested-by: Vivek Goyal
    Signed-off-by: Shaohua Li
    Acked-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Shaohua Li
     
  • cfq_slice_expired() will change saved_workload_slice. It should be
    called first, so that saved_workload_slice is correctly set to 0
    after the workload type is changed.
    This fixes the code ordering changed by commit 54b466e44b1c7.

    Tested-by: Tetsuo Handa
    Signed-off-by: Shaohua Li
    Signed-off-by: Jens Axboe

    Shaohua Li
     

18 Jan, 2012

1 commit

  • With the changes in lifetime management between the cfq IO contexts
    and the cfq queues, we now risk having cfqd->active_queue freed
    when cfq_slice_expired() is called. cfq_preempt_queue() caches
    this queue and uses it after calling said function, causing
    a use-after-free condition. This triggers the following oops
    when cfqq_type() attempts to dereference it:

    BUG: unable to handle kernel paging request at ffff8800746c4f0c
    IP: [] cfqq_type+0xb/0x20
    PGD 18d4063 PUD 1fe15067 PMD 1ffb9067 PTE 80000000746c4160
    Oops: 0000 [#1] SMP DEBUG_PAGEALLOC
    CPU 3
    Modules linked in:

    Pid: 1, comm: init Not tainted 3.2.0-josef+ #367 Bochs Bochs
    RIP: 0010:[] [] cfqq_type+0xb/0x20
    RSP: 0018:ffff880079c11778 EFLAGS: 00010046
    RAX: 0000000000000000 RBX: ffff880076f3df08 RCX: 0000000000000000
    RDX: 0000000000000006 RSI: ffff880074271888 RDI: ffff8800746c4f08
    RBP: ffff880079c11778 R08: 0000000000000078 R09: 0000000000000001
    R10: 09f911029d74e35b R11: 09f911029d74e35b R12: ffff880076f337f0
    R13: ffff8800746c4f08 R14: ffff8800746c4f08 R15: 0000000000000002
    FS: 00007f62fd44f700(0000) GS:ffff88007cd80000(0000) knlGS:0000000000000000
    CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    CR2: ffff8800746c4f0c CR3: 0000000076c21000 CR4: 00000000000006e0
    DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
    Process init (pid: 1, threadinfo ffff880079c10000, task ffff880079c0a040)
    Stack:
    ffff880079c117c8 ffffffff812683d8 ffff880079c117a8 ffffffff8125de43
    ffff8800744fcf48 ffff880074b43e98 ffff8800770c8828 ffff880074b43e98
    0000000000000003 0000000000000000 ffff880079c117f8 ffffffff81254149
    Call Trace:
    [] cfq_insert_request+0x3f5/0x47c
    [] ? blk_recount_segments+0x20/0x31
    [] __elv_add_request+0x1ca/0x200
    [] blk_queue_bio+0x2ef/0x312
    [] generic_make_request+0x9f/0xe0
    [] submit_bio+0xbf/0xca
    [] submit_bh+0xdf/0xfe
    [] ext3_bread+0x50/0x99
    [] dx_probe+0x38/0x291
    [] ext3_dx_find_entry+0x58/0x219
    [] ext3_find_entry+0xb0/0x406
    [] ? cache_alloc_debugcheck_after.isra.46+0x14d/0x1a0
    [] ? kmem_cache_alloc+0xef/0x191
    [] ext3_lookup+0x39/0xe1
    [] d_alloc_and_lookup+0x45/0x6c
    [] do_lookup+0x1e4/0x2f5
    [] link_path_walk+0x1a4/0x6ef
    [] path_lookupat+0x59/0x5ea
    [] ? __strncpy_from_user+0x30/0x5a
    [] do_path_lookup+0x23/0x59
    [] user_path_at_empty+0x53/0x99
    [] ? remove_wait_queue+0x51/0x56
    [] user_path_at+0x11/0x13
    [] vfs_fstatat+0x3a/0x64
    [] vfs_stat+0x1b/0x1d
    [] sys_newstat+0x1a/0x33
    [] ? task_stopped_code+0x42/0x42
    [] system_call_fastpath+0x16/0x1b
    Code: 89 e6 48 89 c7 e8 fa ca fe ff 85 c0 74 06 4c 89 2b 41 b6 01 5b 44 89 f0 41 5c 41 5d 41 5e 5d c3 55 48 89 e5 66 66 66 66 90 31 c0 57 04 f6 c6 01 74 0b 83 e2 20 83 fa 01 19 c0 83 c0 02 5d c3
    RIP [] cfqq_type+0xb/0x20
    RSP
    CR2: ffff8800746c4f0c

    Get rid of the caching of cfqd->active_queue, and reorder the
    check so that it happens before we expire the active queue.
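    Together with the ordering fix listed above (19 Jan), the relevant
    part of cfq_preempt_queue() ends up roughly like this (a sketch;
    field names approximate):

        enum wl_type_t old_type = cfqq_type(cfqd->active_queue);

        cfq_slice_expired(cfqd, 1);    /* may free the old active queue */

        /*
         * Compare via the value captured above - the old queue pointer
         * must not be dereferenced once it has been expired.
         */
        if (old_type != cfqq_type(cfqq))
                cfqq->cfqg->saved_workload_slice = 0;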

    Thanks to Tejun for pinpointing the error location.

    Reported-by: Chris Mason
    Tested-by: Chris Mason
    Signed-off-by: Jens Axboe

    Jens Axboe
     

16 Jan, 2012

1 commit

  • * 'for-3.3/core' of git://git.kernel.dk/linux-block: (37 commits)
    Revert "block: recursive merge requests"
    block: Stop using macro stubs for the bio data integrity calls
    blockdev: convert some macros to static inlines
    fs: remove unneeded plug in mpage_readpages()
    block: Add BLKROTATIONAL ioctl
    block: Introduce blk_set_stacking_limits function
    block: remove WARN_ON_ONCE() in exit_io_context()
    block: an exiting task should be allowed to create io_context
    block: ioc_cgroup_changed() needs to be exported
    block: recursive merge requests
    block, cfq: fix empty queue crash caused by request merge
    block, cfq: move icq creation and rq->elv.icq association to block core
    block, cfq: restructure io_cq creation path for io_context interface cleanup
    block, cfq: move io_cq exit/release to blk-ioc.c
    block, cfq: move icq cache management to block core
    block, cfq: move io_cq lookup to blk-ioc.c
    block, cfq: move cfqd->icq_list to request_queue and add request->elv.icq
    block, cfq: reorganize cfq_io_context into generic and cfq specific parts
    block: remove elevator_queue->ops
    block: reorder elevator switch sequence
    ...

    Fix up conflicts in:
    - block/blk-cgroup.c
      Switch from can_attach_task to can_attach
    - block/cfq-iosched.c
      conflict with now removed cic index changes (we now use q->id instead)

    Linus Torvalds
     

15 Jan, 2012

3 commits

  • This reverts commit 274193224cdabd687d804a26e0150bb20f2dd52c.

    We have some problems related to the selection of empty queues
    that need to be resolved; evidence so far points to the
    recursive merge logic being either the cause or at
    least an accelerator for this. So revert it for now, until
    we figure this out.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • Linux allows executing the SG_IO ioctl on a partition or LVM volume, and
    will pass the command to the underlying block device. This is
    well-known, but it is also a large security problem when (via Unix
    permissions, ACLs, SELinux or a combination thereof) a program or user
    needs to be granted access only to part of the disk.

    This patch lets partitions forward a small set of harmless ioctls;
    others are logged with printk so that we can see which ioctls are
    actually sent. In my tests only CDROM_GET_CAPABILITY actually occurred.
    Of course it was being sent to a (partition on a) hard disk, so it would
    have failed with ENOTTY and the patch isn't changing anything in
    practice. Still, I'm treating it specially to avoid spamming the logs.

    In principle, this restriction should include programs running with
    CAP_SYS_RAWIO. If for example I let a program access /dev/sda2 and
    /dev/sdb, it still should not be able to read/write outside the
    boundaries of /dev/sda2 independent of the capabilities. However, for
    now programs with CAP_SYS_RAWIO will still be allowed to send the
    ioctls. Their actions will still be logged.
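    The check added is roughly of this shape (a sketch of
    scsi_verify_blk_ioctl(); the real whitelist is longer):

        int scsi_verify_blk_ioctl(struct block_device *bd, unsigned int cmd)
        {
                /* whole devices stay unrestricted */
                if (bd && bd == bd->bd_contains)
                        return 0;

                switch (cmd) {
                case SG_GET_VERSION_NUM:
                case CDROM_GET_CAPABILITY:  /* seen in testing; harmless */
                /* ... the rest of the harmless whitelist ... */
                        return 0;
                default:
                        break;
                }

                /* log so we learn which ioctls are actually sent */
                printk_ratelimited(KERN_WARNING
                        "%s: sending ioctl %x to a partition!\n",
                        current->comm, cmd);

                /* CAP_SYS_RAWIO is still allowed through, but logged */
                return capable(CAP_SYS_RAWIO) ? 0 : -ENOIOCTLCMD;
        }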

    This patch does not affect the non-libata IDE driver. That driver
    however already tests for bd != bd->bd_contains before issuing some
    ioctl; it could be restricted further to forbid these ioctls even for
    programs running with CAP_SYS_ADMIN/CAP_SYS_RAWIO.

    Cc: linux-scsi@vger.kernel.org
    Cc: Jens Axboe
    Cc: James Bottomley
    Signed-off-by: Paolo Bonzini
    [ Make it also print the command name when warning - Linus ]
    Signed-off-by: Linus Torvalds

    Paolo Bonzini
     
  • Introduce a wrapper around scsi_cmd_ioctl that takes a block device.

    The function will then be enhanced to detect partition block devices
    and, in that case, subject the ioctls to whitelisting.
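    With the whitelisting from the patch above wired in, the wrapper
    amounts to (sketch):

        int scsi_cmd_blk_ioctl(struct block_device *bd, fmode_t mode,
                               unsigned int cmd, void __user *arg)
        {
                int ret;

                /* reject disallowed ioctls on partitions */
                ret = scsi_verify_blk_ioctl(bd, cmd);
                if (ret < 0)
                        return ret;

                return scsi_cmd_ioctl(bd->bd_disk->queue, bd->bd_disk,
                                      mode, cmd, arg);
        }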

    Cc: linux-scsi@vger.kernel.org
    Cc: Jens Axboe
    Cc: James Bottomley
    Signed-off-by: Paolo Bonzini
    Signed-off-by: Linus Torvalds

    Paolo Bonzini
     

11 Jan, 2012

2 commits

  • Introduce an ioctl which permits applications to query whether a block
    device is rotational.
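    From userspace, usage looks like this (a sketch; the ioctl fills in
    an unsigned short, 1 for rotational media and 0 otherwise):

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/ioctl.h>
        #include <linux/fs.h>           /* BLKROTATIONAL */

        int main(void)
        {
                unsigned short rotational;
                int fd = open("/dev/sda", O_RDONLY);

                if (fd < 0 || ioctl(fd, BLKROTATIONAL, &rotational) < 0) {
                        perror("BLKROTATIONAL");
                        return 1;
                }
                printf("rotational: %hu\n", rotational);
                return 0;
        }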

    Signed-off-by: Martin K. Petersen
    Signed-off-by: Jens Axboe

    Martin K. Petersen
     
  • Stacking driver queue limits are typically bounded exclusively by the
    capabilities of the low level devices, not by the stacking driver
    itself.

    This patch introduces blk_set_stacking_limits() which has more liberal
    metrics than the default queue limits function. This allows us to
    inherit topology parameters from bottom devices without manually
    tweaking the default limits in each driver prior to calling the stacking
    function.

    Since there is now a clear distinction between stacking and low-level
    devices, blk_set_default_limits() has been modified to carry the more
    conservative values that we used to manually set in
    blk_queue_make_request().
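    A hypothetical stacking driver would use it roughly like this
    (sketch; q and bdev stand in for the driver's own state):

        blk_set_stacking_limits(&q->limits);    /* permissive defaults */

        /* then tighten the limits against each underlying device */
        blk_stack_limits(&q->limits, &bdev_get_queue(bdev)->limits,
                         get_start_sect(bdev));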

    Signed-off-by: Martin K. Petersen
    Acked-by: Mike Snitzer
    Signed-off-by: Jens Axboe

    Martin K. Petersen
     

10 Jan, 2012

1 commit

  • * 'for-3.3' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup: (21 commits)
    cgroup: fix to allow mounting a hierarchy by name
    cgroup: move assignement out of condition in cgroup_attach_proc()
    cgroup: Remove task_lock() from cgroup_post_fork()
    cgroup: add sparse annotation to cgroup_iter_start() and cgroup_iter_end()
    cgroup: mark cgroup_rmdir_waitq and cgroup_attach_proc() as static
    cgroup: only need to check oldcgrp==newgrp once
    cgroup: remove redundant get/put of task struct
    cgroup: remove redundant get/put of old css_set from migrate
    cgroup: Remove unnecessary task_lock before fetching css_set on migration
    cgroup: Drop task_lock(parent) on cgroup_fork()
    cgroups: remove redundant get/put of css_set from css_set_check_fetched()
    resource cgroups: remove bogus cast
    cgroup: kill subsys->can_attach_task(), pre_attach() and attach_task()
    cgroup, cpuset: don't use ss->pre_attach()
    cgroup: don't use subsys->can_attach_task() or ->attach_task()
    cgroup: introduce cgroup_taskset and use it in subsys->can_attach(), cancel_attach() and attach()
    cgroup: improve old cgroup handling in cgroup_attach_proc()
    cgroup: always lock threadgroup during migration
    threadgroup: extend threadgroup_lock() to cover exit and exec
    threadgroup: rename signal->threadgroup_fork_lock to ->group_rwsem
    ...

    Fix up conflict in kernel/cgroup.c due to commit e0197aae59e5: "cgroups:
    fix a css_set not found bug in cgroup_attach_proc" that already
    mentioned that the bug is fixed (differently) in Tejun's cgroup
    patchset. This one, in other words.

    Linus Torvalds
     

09 Jan, 2012

1 commit

  • * 'for-linus2' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (165 commits)
    reiserfs: Properly display mount options in /proc/mounts
    vfs: prevent remount read-only if pending removes
    vfs: count unlinked inodes
    vfs: protect remounting superblock read-only
    vfs: keep list of mounts for each superblock
    vfs: switch ->show_options() to struct dentry *
    vfs: switch ->show_path() to struct dentry *
    vfs: switch ->show_devname() to struct dentry *
    vfs: switch ->show_stats to struct dentry *
    switch security_path_chmod() to struct path *
    vfs: prefer ->dentry->d_sb to ->mnt->mnt_sb
    vfs: trim includes a bit
    switch mnt_namespace ->root to struct mount
    vfs: take /proc/*/mounts and friends to fs/proc_namespace.c
    vfs: opencode mntget() mnt_set_mountpoint()
    vfs: spread struct mount - remaining argument of next_mnt()
    vfs: move fsnotify junk to struct mount
    vfs: move mnt_devname
    vfs: move mnt_list to struct mount
    vfs: switch pnode.h macros to struct mount *
    ...

    Linus Torvalds
     

06 Jan, 2012

1 commit

  • We're doing some odd things there, which already mess up various
    users (see the net/socket.c code that this removes), and it was going
    to add yet more crud to the block layer because of the incorrect
    error code translation.

    ENOIOCTLCMD is not an error return that should be returned to user
    mode from the "ioctl()" system call, and when it does leak out it
    should *not* be translated as EINVAL ("Invalid argument"). It should
    be translated as ENOTTY ("Inappropriate ioctl for device").

    That EINVAL confusion has apparently so permeated some code that the
    block layer actually checks for it, which is sad. We continue to do so
    for now, but add a big comment about how wrong that is, and we should
    remove it entirely eventually. In the meantime, this tries to keep the
    changes localized to just the EINVAL -> ENOTTY fix, and removing code
    that makes it harder to do the right thing.
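    The translation now happens once, at the user-mode boundary (a
    sketch of the vfs_ioctl() shape):

        static long vfs_ioctl(struct file *filp, unsigned int cmd,
                              unsigned long arg)
        {
                int error = -ENOTTY;

                if (!filp->f_op || !filp->f_op->unlocked_ioctl)
                        goto out;

                error = filp->f_op->unlocked_ioctl(filp, cmd, arg);
                if (error == -ENOIOCTLCMD)
                        error = -ENOTTY;        /* not -EINVAL */
        out:
                return error;
        }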

    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

29 Dec, 2011

1 commit

  • Commit 5e081591 "block: warn if tag is greater than real_max_depth"
    cleaned up blk_queue_end_tag() to warn when the tag is truly invalid
    (greater than real_max_depth). However, it changed behavior in the
    tag < max_depth case to not end the request, triggering the
    BUG_ON(blk_queued_rq(rq)) in the request completion path:

    http://marc.info/?l=linux-kernel&m=132204370518629&w=2

    In order to allow blk_queue_resize_tags() to shrink the tag space
    blk_queue_end_tag() must always complete tags with a value less than
    real_max_depth regardless of the current max_depth. The comment about
    "handling the shrink case" seems to be what prompted changes in this
    space, so remove it and BUG on all invalid tags (made even simpler by
    Matthew's suggestion to use an unsigned compare).
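    The resulting check is a single unsigned comparison (sketch):

        void blk_queue_end_tag(struct request_queue *q, struct request *rq)
        {
                struct blk_queue_tag *bqt = q->queue_tags;
                unsigned tag = rq->tag;    /* negative tags wrap huge */

                /* one compare covers tag < 0 and tag >= real_max_depth */
                BUG_ON(tag >= bqt->real_max_depth);
                /* ... clear the tag and complete as usual ... */
        }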

    Signed-off-by: Dan Williams
    Cc: Tao Ma
    Cc: Matthew Wilcox
    Reported-by: Meelis Roos
    Reported-by: Ed Nadolski
    Cc: Tejun Heo
    Signed-off-by: Andrew Morton
    Signed-off-by: Jens Axboe

    Dan Williams
     

28 Dec, 2011

1 commit

  • 6e736be7 "block: make ioc get/put interface more conventional and fix
    race on alloction" added a WARN_ON_ONCE() in exit_io_context() which
    triggers if !PF_EXITING. All tasks hitting exit_io_context() from
    task exit should have PF_EXITING set, but task-struct tear-down
    after a fork failure calls into the function without PF_EXITING,
    triggering the condition.

    WARNING: at block/blk-ioc.c:234 exit_io_context+0x40/0x92()
    Pid: 17090, comm: trinity Not tainted 3.2.0-rc6-next-20111222-sasha-dirty #77
    Call Trace:
    [] warn_slowpath_common+0x8f/0xb2
    [] warn_slowpath_null+0x18/0x1a
    [] exit_io_context+0x40/0x92
    [] copy_process+0x126f/0x1453
    [] do_fork+0x120/0x3e9
    [] sys_clone+0x26/0x28
    [] stub_clone+0x13/0x20
    ---[ end trace a2e4eb670b375238 ]---

    Reported-by: Sasha Levin
    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     

25 Dec, 2011

1 commit

  • While fixing the io_context creation / task exit race condition,
    6e736be7f2 "block: make ioc get/put interface more conventional and
    fix race on alloction" also prevented an exiting (%PF_EXITING) task
    from creating its own io_context. This is incorrect, as the exit
    path may issue IOs, e.g. from exit_files(), and if those IOs are the
    first ones issued by the task, an io_context needs to be created to
    process them.

    Combined with the existing problem of io_context / io_cq creation
    failure having the possibility of stalling IO, this problem results in
    deterministic full IO lockup with certain workloads.

    Fix it by allowing io_context creation regardless of %PF_EXITING for
    %current.
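    The resulting install-time check, in sketch form (from
    create_task_io_context(); %current is always allowed to install):

        task_lock(task);
        if (!task->io_context &&
            (task == current || !(task->flags & PF_EXITING)))
                task->io_context = ioc;
        else
                kmem_cache_free(iocontext_cachep, ioc);
        task_unlock(task);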

    Signed-off-by: Tejun Heo
    Reported-by: Andrew Morton
    Reported-by: Hugh Dickins
    Signed-off-by: Jens Axboe

    Tejun Heo
     

16 Dec, 2011

4 commits

  • All requests of a queue could be merged into requests of another
    queue. Such a queue will then have no requests in it, but it is
    still in the service tree. This will cause a kernel oops.
    I encountered a BUG_ON() in cfq_dispatch_request() with the next
    patch, but the issue exists even without that patch.

    Signed-off-by: Shaohua Li
    Signed-off-by: Jens Axboe

    Shaohua Li
     
  • In my workload, thread 1 accesses a, a+2, ..., and thread 2 accesses
    a+1, a+3, .... When the requests are flushed to the queue, a and a+1
    are merged to (a, a+1), and a+2 and a+3 to (a+2, a+3), but (a, a+1)
    and (a+2, a+3) aren't merged.
    With the recursive merge below, workload throughput improves by 20%
    and context switches drop by 60%.
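    The recursive merge itself is a small loop, roughly of this shape
    (a sketch of the idea; this is the change that the 15 Jan entry
    above reverts):

        static bool elv_attempt_insert_merge(struct request_queue *q,
                                             struct request *rq)
        {
                struct request *__rq;
                bool ret = false;

                /* keep back-merging: find a request that ends where the
                 * current survivor begins, merge, repeat */
                while ((__rq = elv_rqhash_find(q, blk_rq_pos(rq))) &&
                       blk_attempt_req_merge(q, __rq, rq)) {
                        ret = true;
                        rq = __rq;
                }
                return ret;
        }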

    Signed-off-by: Shaohua Li
    Signed-off-by: Jens Axboe

    Shaohua Li
     
  • All requests of a queue could be merged into requests of another
    queue. Such a queue will then have no requests in it, but it is
    still in the service tree. This will cause a kernel oops.
    I encountered a BUG_ON() in cfq_dispatch_request() with the next
    patch, but the issue exists even without that patch.

    Signed-off-by: Shaohua Li
    Signed-off-by: Jens Axboe

    Shaohua Li
     
  • While probing, fd sets up the queue, probes the hardware, and tears
    down the queue if probing fails. In the process, blk_drain_queue()
    kicks the queue which failed to finish initialization, and fd is
    unhappy about that.

    floppy0: no floppy controllers found
    ------------[ cut here ]------------
    WARNING: at drivers/block/floppy.c:2929 do_fd_request+0xbf/0xd0()
    Hardware name: To Be Filled By O.E.M.
    VFS: do_fd_request called on non-open device
    Modules linked in:
    Pid: 1, comm: swapper Not tainted 3.2.0-rc4-00077-g5983fe2 #2
    Call Trace:
    [] warn_slowpath_common+0x7a/0xb0
    [] warn_slowpath_fmt+0x41/0x50
    [] do_fd_request+0xbf/0xd0
    [] blk_drain_queue+0x65/0x80
    [] blk_cleanup_queue+0xe3/0x1a0
    [] floppy_init+0xdeb/0xe28
    [] ? daring+0x6b/0x6b
    [] do_one_initcall+0x3f/0x170
    [] kernel_init+0x9d/0x11e
    [] ? schedule_tail+0x22/0xa0
    [] kernel_thread_helper+0x4/0x10
    [] ? start_kernel+0x2be/0x2be
    [] ? gs_change+0xb/0xb

    Avoid it by making blk_drain_queue() kick the queue only if the
    dispatch queue has something on it.
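    In sketch form:

        /* in blk_drain_queue(): don't poke drivers on an empty queue */
        if (!list_empty(&q->queue_head))
                __blk_run_queue(q);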

    Signed-off-by: Tejun Heo
    Reported-by: Ralf Hildebrandt
    Reported-by: Wu Fengguang
    Tested-by: Sergei Trofimovich
    Signed-off-by: Jens Axboe

    Tejun Heo
     

14 Dec, 2011

7 commits

  • Now the block layer knows everything necessary to create and
    associate icq's with requests. Move ioc_create_icq() to blk-ioc.c
    and update get_request() such that, if elevator_type->icq_size is
    set, requests are automatically associated with their matching icq's
    before elv_set_request() (see the sketch at the end of this entry).
    The io_context reference is also managed by block core on request
    alloc/free.

    * Only the ioprio/cgroup change handling remains from cfq_get_cic();
    it is collapsed into cfq_set_request().

    * This removes queue kicking on icq allocation failure (for now). As
    icq allocation failure is rare and the only effect queue kicking
    achieved was possibly accelerating queue processing, this change
    shouldn't be noticeable.

    There is a larger underlying problem. Unlike request allocation,
    icq allocation is not guaranteed to succeed eventually after
    retries. The number of icq's is unbounded and thus mempool can't be
    the solution either. This effectively adds an allocation dependency
    on the memory free path and thus the possibility of deadlock.

    This usually wouldn't happen because icq allocation is not a hot
    path and, even when the condition triggers, it's highly unlikely
    that none of the writeback workers already has an icq.

    However, this is still possible especially if elevator is being
    switched under high memory pressure, so we better get it fixed.
    Probably the only solution is just bypassing elevator and appending
    to dispatch queue on any elevator allocation failure.

    * Comment added to explain how icq's are managed and synchronized.

    This completes the cleanup of the io_context interface.
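    The association in get_request() then looks roughly like this
    (sketch; error handling and exact signatures omitted):

        struct elevator_type *et = q->elevator->type;

        if ((rw_flags & REQ_ELVPRIV) && et->icq_cache) {
                icq = ioc_lookup_icq(ioc, q);
                if (!icq)
                        icq = ioc_create_icq(q, gfp_mask);
        }

        /* the request carries its icq from allocation onwards */
        rq = blk_alloc_request(q, icq, rw_flags, gfp_mask);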

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • Add elevator_ops->elevator_init_icq_fn(), and restructure
    cfq_create_cic() and rename it to ioc_create_icq().

    The new function expects its caller to pass in the io_context, uses
    elevator_type->icq_cache, handles generic initialization, calls the
    new elevator operation for elevator-specific initialization, and
    returns a pointer to the created or looked-up icq. This leaves the
    cfq_icq_pool variable without any users, so it is removed.

    This prepares for io_context interface cleanup and doesn't introduce
    any functional difference.

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • With the kmem_cache managed by blk-ioc, io_cq exit/release can be
    moved to blk-ioc too. The odd ->io_cq->exit/release() callbacks are
    replaced with elevator_ops->elevator_exit_icq_fn(); unlinking from
    both ioc and q, and freeing, are handled automatically by blk-ioc.
    The elevator operation only needs to perform the exit operation
    specific to the elevator - in cfq's case, exiting the cfqq's.

    Also, the clearing of io_cq's on q detach is moved to block core and
    automatically performed on elevator switch and q release.

    Because the q an io_cq points to might be freed before the RCU
    callback for the io_cq runs, blk-ioc code should remember which
    cache the io_cq needs to be freed to when the io_cq is released. A
    new field, io_cq->__rcu_icq_cache, is added for this purpose. As
    both the new field and rcu_head are used only after the io_cq is
    released, and the q/ioc_node fields aren't, they are put into unions
    (see the sketch below).
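    A sketch of the resulting layout (abridged; the exact definition
    lives in include/linux/iocontext.h and may differ in detail):

        struct io_cq {
                struct request_queue    *q;
                struct io_context       *ioc;

                /*
                 * The linkage is dead once the icq is released, so the
                 * RCU free path can reuse the space.
                 */
                union {
                        struct list_head        q_node;
                        struct kmem_cache       *__rcu_icq_cache;
                };
                union {
                        struct hlist_node       ioc_node;
                        struct rcu_head         __rcu_head;
                };
        };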

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • Let elevators set ->icq_size and ->icq_align in elevator_type, and
    have elv_register() and elv_unregister() respectively create and
    destroy the kmem_cache for icq's.

    * elv_register() now can return failure. All callers updated.

    * icq caches are automatically named "ELVNAME_io_cq".

    * cfq_slab_setup/kill() are collapsed into cfq_init/exit().

    * While at it, minor indentation change for iosched_cfq.elevator_name
    for consistency.

    This will help moving icq management to block core. This doesn't
    introduce any functional change.
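    The registration side, in sketch form (field names follow the
    description above; details approximate):

        /* in elv_register(): create the per-elevator icq cache */
        if (e->icq_size) {
                snprintf(e->icq_cache_name, sizeof(e->icq_cache_name),
                         "%s_io_cq", e->elevator_name);
                e->icq_cache = kmem_cache_create(e->icq_cache_name,
                                                 e->icq_size, e->icq_align,
                                                 0, NULL);
                if (!e->icq_cache)
                        return -ENOMEM;
        }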

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • Now that all io_cq related data structures are in the block core
    layer, io_cq lookup can be moved from cfq-iosched.c to blk-ioc.c.

    The lookup logic from cfq_cic_lookup() is moved to ioc_lookup_icq()
    with parameter and return type changes (cfqd -> request_queue,
    cfq_io_cq -> io_cq), and cfq_cic_lookup() becomes a thin wrapper
    around ioc_lookup_icq().
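    The moved lookup is roughly (a sketch of ioc_lookup_icq() in
    blk-ioc.c):

        struct io_cq *ioc_lookup_icq(struct io_context *ioc,
                                     struct request_queue *q)
        {
                struct io_cq *icq;

                lockdep_assert_held(q->queue_lock);

                /* check the last-hit hint first, then the radix tree
                 * indexed by q->id; both are protected by RCU */
                rcu_read_lock();
                icq = rcu_dereference(ioc->icq_hint);
                if (icq && icq->q == q)
                        goto out;

                icq = radix_tree_lookup(&ioc->icq_tree, q->id);
                if (icq && icq->q == q)
                        rcu_assign_pointer(ioc->icq_hint, icq);
                else
                        icq = NULL;
        out:
                rcu_read_unlock();
                return icq;
        }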

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • Most of icq management is about to be moved out of cfq into blk-ioc.
    This patch prepares for it.

    * Move cfqd->icq_list to request_queue->icq_list

    * Make the request explicitly point to the icq instead of going
    through elevator private data. ->elevator_private[3] is replaced
    with a sub-struct elv which contains the icq pointer and priv[2]
    (see the sketch below). cfq is updated accordingly.

    * Meaningless clearing of ->elevator_private[0] is removed from
    elv_set_request(). At that point in the code, the field was
    guaranteed to be %NULL anyway.

    This patch doesn't introduce any functional change.
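    The sub-struct mentioned above looks roughly like this inside
    struct request (sketch; surrounding fields elided):

        struct request {
                /* ... */
                struct {
                        struct io_cq    *icq;   /* explicit association */
                        void            *priv[2];
                } elv;
                /* ... */
        };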

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • Currently, io_context and cfq logic are mixed without a clear
    boundary. Most of io_context is independent from cfq, but the
    cfq_io_context handling logic is dispersed between generic ioc code
    and cfq.

    cfq_io_context represents the association between an io_context and
    a request_queue, which is a concept useful outside of cfq, but it
    also contains fields which are useful only to cfq.

    This patch takes out the generic part and puts it into io_cq (io
    context-queue) and the rest into cfq_io_cq (the cic moniker remains
    the same), which contains io_cq. The following changes are made
    together.

    * cfq_ttime and cfq_io_cq now live in cfq-iosched.c.

    * All related fields, functions and constants are renamed
    accordingly.

    * ioc->ioc_data is now "struct io_cq *" instead of "void *" and is
    renamed to icq_hint.

    This prepares for io_context API cleanup. Documentation is currently
    sparse. It will be added later.

    Changes in this patch are mechanical and don't cause functional
    change.

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo