24 Oct, 2011

6 commits

  • Jens Axboe
     
  • The following command sequence triggers an oops.

    # mount /dev/sdb1 /mnt
    # echo 1 > /sys/class/scsi_device/0\:0\:1\:0/device/delete
    # umount /mnt

    general protection fault: 0000 [#1] PREEMPT SMP
    CPU 2
    Modules linked in:

    Pid: 791, comm: umount Not tainted 3.1.0-rc3-work+ #8 Bochs Bochs
    RIP: 0010:[] [] __lock_acquire+0x389/0x1d60
    ...
    Call Trace:
    [] lock_acquire+0x95/0x140
    [] _raw_spin_lock+0x3b/0x50
    [] bdi_lock_two+0x5c/0x70
    [] bdev_inode_switch_bdi+0x4c/0xf0
    [] __blkdev_put+0x11b/0x1d0
    [] __blkdev_put+0x160/0x1d0
    [] blkdev_put+0x5f/0x190
    [] kill_block_super+0x4d/0x80
    [] deactivate_locked_super+0x45/0x70
    [] deactivate_super+0x4a/0x70
    [] mntput_no_expire+0xed/0x130
    [] sys_umount+0x7e/0x3a0
    [] system_call_fastpath+0x16/0x1b

    This is because bdev holds on to disk but disk doesn't pin the
    associated queue. If a SCSI device is removed while the device is
    still open, the sdev puts the base reference to the queue on release.
    When the bdev is finally released, the associated queue is already
    gone along with the bdi, and bdev_inode_switch_bdi() ends up
    dereferencing an already freed bdi.

    Even if it were not for this bug, disk not holding onto the associated
    queue is very unusual and error-prone.

    Fix it by making add_disk() take an extra reference on its queue and
    put it in disk_release(), and by ensuring that the disk and its fops
    owner are put in that order after all accesses to the disk and queue
    are complete (see the sketch after this entry).

    Signed-off-by: Tejun Heo
    Cc: Jens Axboe
    Cc: stable@kernel.org
    Signed-off-by: Jens Axboe

    Tejun Heo
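
    A minimal sketch, not the verbatim patch, of the shape of this fix;
    it assumes the stock blk_get_queue()/blk_put_queue() helpers and
    elides the rest of registration and error handling:

        /* add_disk() pins the queue so it (and its bdi) outlives any
         * open bdev; disk_release() drops that pin. */
        void add_disk(struct gendisk *disk)
        {
                /* ... existing registration work ... */
                blk_get_queue(disk->queue);     /* put in disk_release() */
        }

        static void disk_release(struct device *dev)
        {
                struct gendisk *disk = dev_to_disk(dev);

                /* ... free partition table, stats, etc. ... */
                if (disk->queue)
                        blk_put_queue(disk->queue); /* add_disk()'s ref */
                kfree(disk);
        }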
     
  • A dm-multipath user reported[1] a problem when trying to boot
    a kernel with commit 4853abaae7e4a2af938115ce9071ef8684fb7af4
    (block: fix flush machinery for stacking drivers with differring
    flush flags) applied. It turns out that an empty flush request
    can be sent into blk_insert_flush. When the BUG_ON was fixed
    to allow for this, I/O on the underlying device would stall. The
    reason is that blk_insert_cloned_request does not kick the queue.
    In the aforementioned commit, I had added a special case to
    kick the queue if data was sent down but the queue flags did
    not require a flush. A better solution is to push the queue kick up
    into blk_insert_cloned_request (see the sketch after this entry).

    This patch, along with a follow-on which fixes the BUG_ON, fixes
    the issue reported.

    [1] http://www.redhat.com/archives/dm-devel/2011-September/msg00154.html

    Reported-by: Christophe Saout
    Signed-off-by: Jeff Moyer
    Acked-by: Tejun Heo

    Stable note: 3.1
    Cc: stable@vger.kernel.org
    Signed-off-by: Jens Axboe

    Jeff Moyer
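
    A hedged sketch of the described change, with locking and accounting
    simplified; the point is that the queue kick now happens in
    blk_insert_cloned_request() itself instead of in the flush special
    case:

        int blk_insert_cloned_request(struct request_queue *q,
                                      struct request *rq)
        {
                unsigned long flags;

                spin_lock_irqsave(q->queue_lock, flags);
                /* ... account and insert the cloned request ... */
                __blk_run_queue(q);     /* kick the queue unconditionally */
                spin_unlock_irqrestore(q->queue_lock, flags);
                return 0;
        }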
     
  • A user reported a regression due to commit
    4853abaae7e4a2af938115ce9071ef8684fb7af4 (block: fix flush
    machinery for stacking drivers with differring flush flags).
    Part of the problem is that blk_insert_flush required a single bio to
    be attached to the request. In reality, having no attached bio is
    also a valid case, as can be observed with an empty flush (see the
    sketch after this entry).

    [1] http://www.redhat.com/archives/dm-devel/2011-September/msg00154.html

    Reported-by: Christophe Saout
    Signed-off-by: Jeff Moyer

    Stable note: 3.1
    Cc: stable@vger.kernel.org
    Signed-off-by: Jens Axboe

    Jeff Moyer
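
    A sketch of the relaxed assertion, assuming it sits at the top of
    blk_insert_flush(); an empty flush carries no bio, so only a
    multi-bio request is invalid:

        void blk_insert_flush(struct request *rq)
        {
                /* was: BUG_ON(!rq->bio || rq->bio != rq->biotail);
                 * zero bios (an empty flush) is valid, multiple bios
                 * are not */
                BUG_ON(rq->bio != rq->biotail);

                /* ... a request that turns out to need nothing can
                 * simply be completed here ... */
        }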
     
    bio originally had the ability to set the completion CPU, but it is
    broken.

    Christoph said that "This code is unused, and from all the
    discussions lately pretty obviously broken. The only thing keeping
    it serves is creating more confusion and possibly more bugs."

    And Jens replied with "We can kill bio_set_completion_cpu(). I'm fine
    with leaving cpu control to the request based drivers, they are the
    only ones that can toggle the setting anyway".

    So this patch removes all of the completion-CPU control from bio.

    Cc: Shaohua Li
    Cc: Christoph Hellwig
    Signed-off-by: Tao Ma
    Signed-off-by: Jens Axboe

    Tao Ma
     
  • byptes -> bytes.

    Signed-off-by: Jie Liu
    Signed-off-by: Jens Axboe

    Jie Liu
     

21 Oct, 2011

1 commit

  • init_emergency_pool() does not create the page pool for bouncing block
    requests if the current count of high pages is zero. If high memory
    may be added later (either via memory hotplug or a balloon driver in a
    virtualized system), then an oops occurs if a request with a high
    page needs bouncing, because the pool does not exist.

    So, always create the pool if memory hotplug is enabled, and change
    the test so it's valid even if all high pages are currently in the
    balloon (the balloon drivers adjust totalhigh_pages but not max_pfn);
    see the sketch after this entry.

    Signed-off-by: David Vrabel
    Signed-off-by: Jens Axboe

    David Vrabel
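
    A sketch of the resulting test; with memory hotplug enabled the pool
    is always created, and the no-highmem check compares max_pfn against
    max_low_pfn instead of looking at totalhigh_pages, so pages sitting
    in a balloon don't disable bouncing:

        static __init int init_emergency_pool(void)
        {
        #ifndef CONFIG_MEMORY_HOTPLUG
                /* balloon drivers adjust totalhigh_pages but not
                 * max_pfn, so this stays valid */
                if (max_pfn <= max_low_pfn)
                        return 0;
        #endif
                page_pool = mempool_create_page_pool(POOL_SIZE, 0);
                BUG_ON(!page_pool);
                return 0;
        }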
     

19 Oct, 2011

11 commits

    request_queue is refcounted but actually depends on lifetime
    management by the queue owner - on blk_cleanup_queue(), the block
    layer expects that no request is passing through the request_queue
    and that no new one will.

    This is fundamentally broken. The queue owner (e.g. SCSI layer)
    doesn't have a way to know whether there are other active users before
    calling blk_cleanup_queue(), and other users (e.g. bsg) have no
    guarantee that the queue is and will stay valid while they hold a
    reference.

    With a delay added in blk_queue_bio() before queue_lock is grabbed, the
    following oops can be easily triggered when a device is removed with
    in-flight IOs.

    sd 0:0:1:0: [sdb] Stopping disk
    ata1.01: disabled
    general protection fault: 0000 [#1] PREEMPT SMP
    CPU 2
    Modules linked in:

    Pid: 648, comm: test_rawio Not tainted 3.1.0-rc3-work+ #56 Bochs Bochs
    RIP: 0010:[] [] elv_rqhash_find+0x61/0x100
    ...
    Process test_rawio (pid: 648, threadinfo ffff880019efa000, task ffff880019ef8a80)
    ...
    Call Trace:
    [] elv_merge+0x84/0xe0
    [] blk_queue_bio+0xf4/0x400
    [] generic_make_request+0xca/0x100
    [] submit_bio+0x74/0x100
    [] dio_bio_submit+0xbc/0xc0
    [] __blockdev_direct_IO+0x92e/0xb40
    [] blkdev_direct_IO+0x57/0x60
    [] generic_file_aio_read+0x6d5/0x760
    [] do_sync_read+0xda/0x120
    [] vfs_read+0xc5/0x180
    [] sys_pread64+0x9a/0xb0
    [] system_call_fastpath+0x16/0x1b

    This happens because blk_cleanup_queue() destroys the queue and
    elevator whether IOs are in progress or not, and DEAD tests are
    sprinkled in the request processing path without proper
    synchronization.

    Similar problem exists for blk-throtl. On queue cleanup, blk-throtl
    is shut down whether it has requests in it or not. Depending on
    timing, it either oopses or throttled bios are lost putting tasks
    which are waiting for bio completion into eternal D state.

    The way it should work is having the usual clear distinction between
    shutdown and release. Shutdown drains all currently pending requests,
    marks the queue dead, and performs partial teardown of the now
    unnecessary part of the queue. Even after shutdown is complete,
    reference holders are still allowed to issue requests to the queue
    although they will be immediately failed. The rest of teardown
    happens on release.

    This patch makes the following changes to make blk_cleanup_queue()
    behave as a proper shutdown (a sketch follows this entry).

    * QUEUE_FLAG_DEAD is now set while holding both q->exit_mutex and
    queue_lock.

    * Unsynchronized DEAD check in generic_make_request_checks() removed.
    This couldn't make any meaningful difference as the queue could die
    after the check.

    * blk_drain_queue() updated such that it can drain all requests and is
    now called during cleanup.

    * blk_throtl updated such that it checks DEAD on grabbing queue_lock,
    drains all throttled bios during cleanup and frees td when the queue
    is released.

    Signed-off-by: Tejun Heo
    Cc: Vivek Goyal
    Signed-off-by: Jens Axboe

    Tejun Heo
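
    A rough sketch of the shutdown half described above; q->exit_mutex
    is the lock named in the description, and blk_drain_queue()'s exact
    signature is an assumption (release still happens when the last
    reference is put):

        void blk_cleanup_queue(struct request_queue *q)
        {
                /* mark the queue dead while holding both locks */
                mutex_lock(&q->exit_mutex);
                spin_lock_irq(q->queue_lock);
                queue_flag_set(QUEUE_FLAG_DEAD, q);
                spin_unlock_irq(q->queue_lock);

                /* drain everything in flight, throttled bios included */
                blk_drain_queue(q);
                mutex_unlock(&q->exit_mutex);

                /* the rest of the teardown runs on release */
                blk_put_queue(q);
        }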
     
  • attempt_plug_merge() accesses elevator without holding queue_lock and
    may call into ->elevator_bio_merge_fn(). The elevator is guaranteed to
    be valid because it's accessed iff the plugged list has requests and
    elevator is never exited with live requests, so as long as the
    elevator method can deal with unlocked access, this is safe.

    Explain the sync rules around attempt_plug_merge() and drop the
    unnecessary @tsk parameter.

    This patch doesn't introduce any functional change.

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     
    Currently get_request[_wait]() allocates a request whether the queue
    is dead or not. This patch makes get_request[_wait]() return NULL if @q is
    dead. blk_queue_bio() is updated to fail the submitted bio if request
    allocation fails. While at it, add docbook comments for
    get_request[_wait]().

    Note that the current code has a rather unclear assumption (there are
    spurious DEAD tests scattered around) that the owner of a queue
    guarantees that no request travels the block layer if the queue is
    dead. This patch in itself doesn't change much; however, it will
    allow fixing that broken assumption in the next patch. (A sketch of
    the dead-queue check follows this entry.)

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
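
    A sketch of the dead-queue check; the open-coded test and the
    out_unlock label in the caller are assumptions, not the verbatim
    patch:

        static struct request *get_request(struct request_queue *q,
                                           int rw_flags, struct bio *bio,
                                           gfp_t gfp_mask)
        {
                if (unlikely(test_bit(QUEUE_FLAG_DEAD, &q->queue_flags)))
                        return NULL;    /* fail allocation, queue is dead */
                /* ... normal request allocation ... */
        }

        /* caller side in blk_queue_bio() (sketch) */
        rq = get_request_wait(q, rw_flags, bio);
        if (unlikely(!rq)) {
                bio_endio(bio, -ENODEV);        /* @q is dead */
                goto out_unlock;                /* hypothetical label */
        }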
     
    blk_throtl_bio() and throtl_get_tg() have a rather unusual interface.

    * throtl_get_tg() returns a pointer to a valid tg or ERR_PTR(-ENODEV),
    and drops queue_lock in the latter case. A different locking context
    depending on the return value is error-prone, and the DEAD state is
    scheduled to be protected by queue_lock anyway. Move the DEAD check
    inside queue_lock and return a valid tg or NULL.

    * blk_throtl_bio() indicates return status both with its return value
    and in/out param **@bio. The former indicates whether the queue was
    found to be dead during throtl processing; the latter, whether the
    bio was throttled.

    There's no point in returning the DEAD check result from
    blk_throtl_bio(); the queue can die after blk_throtl_bio() has
    finished but before make_request_fn() grabs the queue lock.

    Make it take *@bio instead and return a boolean result indicating
    whether the bio is throttled or not (sketched after this entry).

    This patch doesn't cause any visible functional difference.

    Signed-off-by: Tejun Heo
    Cc: Vivek Goyal
    Signed-off-by: Jens Axboe

    Tejun Heo
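
    The reworked interface, sketched; the caller side is illustrative:

        /* returns true iff @bio was throttled and must not be issued
         * by the caller; blk-throtl will dispatch it later */
        bool blk_throtl_bio(struct request_queue *q, struct bio *bio);

        /* in the make_request path (sketch) */
        if (blk_throtl_bio(q, bio))
                return;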
     
    Reorganize queue draining related code in preparation for queue exit
    changes.

    * Factor out actual draining from elv_quiesce_start() to
    blk_drain_queue().

    * Make elv_quiesce_start/end() responsible for their own locking.

    * Replace open-coded ELVSWITCH clearing in elevator_switch() with
    elv_quiesce_end().

    This patch doesn't cause any visible functional difference. (A sketch
    of the factored-out drain loop follows this entry.)

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
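
    A hedged sketch of the factored-out drain loop; the emptiness test
    and the sleep interval are assumptions about the internals:

        void blk_drain_queue(struct request_queue *q)
        {
                while (true) {
                        int nr_rqs;

                        spin_lock_irq(q->queue_lock);
                        __blk_run_queue(q);     /* let the driver work */
                        nr_rqs = q->rq.count[BLK_RW_SYNC] +
                                 q->rq.count[BLK_RW_ASYNC];
                        spin_unlock_irq(q->queue_lock);

                        if (!nr_rqs)
                                break;
                        msleep(10);     /* in-flight requests remain */
                }
        }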
     
  • blk_get/put_queue() in scsi_cmd_ioctl() and throtl_get_tg() are
    completely bogus. The caller must have a reference to the queue on
    entry and taking an extra reference doesn't change anything.

    For scsi_cmd_ioctl(), the only effect is that it ends up checking
    QUEUE_FLAG_DEAD on entry; however, this is bogus as the queue can die
    right after blk_get_queue(). A dead queue should be and is handled in
    the request issue path (it's somewhat broken now but that's a separate
    problem and doesn't affect this one much).

    throtl_get_tg() incorrectly assumes that q is RCU-freed. Also, it
    doesn't check the return value of blk_get_queue(). If the queue is
    already dead, it ends up doing an extra put.

    Drop them.

    Signed-off-by: Tejun Heo
    Cc: Vivek Goyal
    Signed-off-by: Jens Axboe

    Tejun Heo
     
    blk_alloc_request() and freed_request() take different combinations of
    REQ_* @flags, @priv and @is_sync even though @flags is a superset of
    the latter two. Make them take @flags only. This cleans up the code a
    bit and
    will ease updating allocation related REQ_* flags.

    This patch doesn't introduce any functional difference.

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
     
    The blk_throtl interface is internal to the block layer and there's
    no reason to have its declarations in linux/blkdev.h. Move them to
    block/blk.h.

    This patch doesn't introduce any functional change.

    Signed-off-by: Tejun Heo
    Cc: Vivek Goyal
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • blkio_policy_parse_and_set() calls blkio_check_dev_num() to check
    whether the given dev_t is valid. blkio_check_dev_num() uses
    get_gendisk() for verification but never puts the returned genhd,
    leaking the reference.

    This patch collapses blkio_check_dev_num() into its caller and updates
    it such that the genhd is put before returning (see the sketch after
    this entry).

    Signed-off-by: Tejun Heo
    Signed-off-by: Jens Axboe

    Tejun Heo
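
    A sketch of the collapsed check with the missing put, assuming the
    standard get_gendisk()/put_disk() pair; variable names are
    illustrative:

        struct gendisk *disk;
        int part;

        disk = get_gendisk(MKDEV(major, minor), &part);
        if (!disk || part)
                return -ENODEV; /* no such disk, or it's a partition */

        /* ... dev_t is valid, continue parsing the policy ... */

        put_disk(disk);         /* previously missing, fixes the leak */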
     
  • The following command sequence triggers an oops.

    # mount /dev/sdb1 /mnt
    # echo 1 > /sys/class/scsi_device/0\:0\:1\:0/device/delete
    # umount /mnt

    general protection fault: 0000 [#1] PREEMPT SMP
    CPU 2
    Modules linked in:

    Pid: 791, comm: umount Not tainted 3.1.0-rc3-work+ #8 Bochs Bochs
    RIP: 0010:[] [] __lock_acquire+0x389/0x1d60
    ...
    Call Trace:
    [] lock_acquire+0x95/0x140
    [] _raw_spin_lock+0x3b/0x50
    [] bdi_lock_two+0x5c/0x70
    [] bdev_inode_switch_bdi+0x4c/0xf0
    [] __blkdev_put+0x11b/0x1d0
    [] __blkdev_put+0x160/0x1d0
    [] blkdev_put+0x5f/0x190
    [] kill_block_super+0x4d/0x80
    [] deactivate_locked_super+0x45/0x70
    [] deactivate_super+0x4a/0x70
    [] mntput_no_expire+0xed/0x130
    [] sys_umount+0x7e/0x3a0
    [] system_call_fastpath+0x16/0x1b

    This is because bdev holds on to disk but disk doesn't pin the
    associated queue. If a SCSI device is removed while the device is
    still open, the sdev puts the base reference to the queue on release.
    When the bdev is finally released, the associated queue is already
    gone along with the bdi, and bdev_inode_switch_bdi() ends up
    dereferencing an already freed bdi.

    Even if it were not for this bug, disk not holding onto the associated
    queue is very unusual and error-prone.

    Fix it by making add_disk() take an extra reference on its queue and
    put it in disk_release(), and by ensuring that the disk and its fops
    owner are put in that order after all accesses to the disk and queue
    are complete.

    Signed-off-by: Tejun Heo
    Cc: stable@kernel.org
    Signed-off-by: Jens Axboe

    Tejun Heo
     
  • Conflicts:
    block/blk-core.c
    include/linux/blkdev.h

    Signed-off-by: Jens Axboe

    Jens Axboe
     

17 Oct, 2011

3 commits

  • The size is always valid, but variable-length arrays generate worse code
    for no good reason (unless the function happens to be inlined and the
    compiler sees the length for the simple constant it is).

    Also, there seems to be some code generation problem on POWER, where
    Henrik Bakken reports that register r28 can get corrupted under some
    subtle circumstances (interrupt happening at the wrong time?). That all
    indicates some seriously broken compiler issues, but since variable
    length arrays are bad regardless, there's little point in trying to
    chase it down.

    "Just don't do that, then".

    Reported-by: Henrik Grindal Bakken
    Cc: Benjamin Herrenschmidt
    Cc: stable@kernel.org
    Signed-off-by: Linus Torvalds

    Linus Torvalds
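
    For illustration, the pattern being removed versus its replacement;
    a fixed bound the compiler can see sidesteps the VLA entirely:

        /* variable-length array: the size is only known at run time,
         * forcing dynamic stack allocation and worse code */
        void vla_version(int n)
        {
                char buf[n];
                /* ... use buf ... */
        }

        /* fixed-size array: plain, well-understood code generation */
        #define BUF_MAX 64
        void fixed_version(void)
        {
                char buf[BUF_MAX];
                /* ... use buf ... */
        }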
     
  • Currently the loop device tries to call directly into write_begin/write_end
    instead of going through ->write if it can. This is a fairly nasty shortcut
    as write_begin and write_end are only callbacks for the generic write code
    and expect to be called with filesystem specific locks held.

    This code currently causes various issues for clustered filesystems as it
    doesn't take the required cluster locks, and it also causes issues for XFS
    as it doesn't properly lock against the swapext ioctl as called by the
    defragmentation tools. This in turn causes data corruption if
    defragmentation hits a busy loop device in the wrong time window, as
    reported by RH QA.

    The reason why we have this shortcut is that it saves a data copy when
    doing a transformation on the loop device, which is the technical term
    for using cryptoloop (or an XOR transformation). Given that cryptoloop
    has been deprecated in favour of dm-crypt my opinion is that we should
    simply drop this shortcut instead of finding complicated ways to
    introduce a formal interface for this shortcut.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Jens Axboe

    Christoph Hellwig
     
  • * 'fixes' of http://ftp.arm.linux.org.uk/pub/linux/arm/kernel/git-cur/linux-2.6-arm:
    ARM: 7128/1: vic: Don't write to the read-only register VIC_IRQ_STATUS
    ARM: 7122/1: localtimer: add header linux/errno.h explicitly
    ARM: 7117/1: perf: fix HW_CACHE_* events on Cortex-A9
    ARM: 7113/1: mm: Align bank start to MAX_ORDER_NR_PAGES

    Linus Torvalds
     

12 Oct, 2011

2 commits

  • Currently we have a few issues with the way the workqueue code is used to
    implement AIL pushing:

    - it accidentally uses the same workqueue as the syncer action, and thus
    can be prevented from running if there are enough sync actions active
    in the system.
    - it doesn't use the HIGHPRI flag to queue at the head of the queue of
    work items

    At this point I'm not confident enough in getting all the workqueue flags and
    tweaks right to provide a perfectly reliable execution context for AIL
    pushing, which is the most important piece in XFS to make forward progress
    when the log fills.

    Revert to using a kthread per filesystem, which fixes all of the
    above issues at the cost of having a task struct and stack around for
    each mounted filesystem. In addition this also gives us much better
    ways to diagnose any issues involving hung AIL pushing, and removes a
    small amount of code (see the sketch after this entry).

    Signed-off-by: Christoph Hellwig
    Reported-by: Stefan Priebe
    Tested-by: Stefan Priebe
    Reviewed-by: Dave Chinner
    Signed-off-by: Alex Elder

    Christoph Hellwig
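
    A sketch of the per-filesystem thread this reverts to; the field and
    helper names are taken from XFS of that period and should be treated
    as assumptions:

        /* at mount time: one dedicated AIL push thread per filesystem */
        ailp->xa_task = kthread_run(xfsaild, ailp, "xfsaild/%s",
                                    mp->m_fsname);
        if (IS_ERR(ailp->xa_task))
                return PTR_ERR(ailp->xa_task);

        /* at unmount time: stop it again */
        kthread_stop(ailp->xa_task);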
     
  • We need to check for pinned buffers even in .iop_pushbuf given that inode
    items flush into the same buffers that may be pinned directly due to
    operations on the unlinked inode list operating directly on buffers.
    To do this, add a return value to .iop_pushbuf that tells the AIL
    push about this and use the existing log force mechanisms to unpin it
    (see the sketch after this entry).

    Signed-off-by: Christoph Hellwig
    Reported-by: Stefan Priebe
    Tested-by: Stefan Priebe
    Reviewed-by: Dave Chinner
    Signed-off-by: Alex Elder

    Christoph Hellwig
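
    A sketch of the new contract; the function and field names follow
    XFS of that period but are assumptions, not the verbatim patch:

        /* .iop_pushbuf now reports whether the buffer could be pushed;
         * false means it is pinned and the caller should force the log
         * to unpin it */
        static bool xfs_inode_item_pushbuf(struct xfs_log_item *lip)
        {
                struct xfs_buf *bp = NULL;

                /* ... locate the buffer backing this inode item ... */
                if (bp && xfs_buf_ispinned(bp))
                        return false;   /* pinned: caller forces the log */
                /* ... push the buffer as before ... */
                return true;
        }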