25 Sep, 2010

1 commit

  • Add logic to prevent two I/O requests being merged if
    only one of them is a discard. Ditto secure discard.

    Without this fix, it is possible for write requests
    to transform into discard requests. For example:

    Submit bio 1 to discard 8 sectors from sector n
    Submit bio 2 to write 8 sectors from sector n + 16
    Submit bio 3 to write 8 sectors from sector n + 8

    Bio 1 becomes request 1. Bio 2 becomes request 2.
    Bio 3 is merged with request 2, and then subsequently
    request 2 is merged with request 1 resulting in just
    one I/O request which discards all 24 sectors.
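
    As a rough standalone sketch of the kind of check the fix adds (hypothetical
    flag and function names, not the actual block-layer code):

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical request flags modeled after REQ_DISCARD / REQ_SECURE. */
    #define RQ_DISCARD (1u << 0)
    #define RQ_SECURE  (1u << 1)

    /* Two requests may only be merged if they agree on the discard and
     * secure-discard attributes; otherwise a write could be swallowed
     * into a discard, as in the example above. */
    static bool rq_mergeable_types(unsigned int flags_a, unsigned int flags_b)
    {
        if ((flags_a & RQ_DISCARD) != (flags_b & RQ_DISCARD))
            return false;
        if ((flags_a & RQ_SECURE) != (flags_b & RQ_SECURE))
            return false;
        return true;
    }

    int main(void)
    {
        printf("%d\n", rq_mergeable_types(RQ_DISCARD, 0));          /* 0: not mergeable */
        printf("%d\n", rq_mergeable_types(RQ_DISCARD, RQ_DISCARD)); /* 1: mergeable */
        return 0;
    }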

    Signed-off-by: Adrian Hunter

    (Moved the checks above the position checks /Jens)

    Signed-off-by: Jens Axboe

    Adrian Hunter
     

21 Sep, 2010

2 commits

  • Mike reported a kernel crash when a USB key hotplug is performed while all
    kernel threads are not in the root cgroup and are running in one of the
    child cgroups of the blkio controller.

    BUG: unable to handle kernel NULL pointer dereference at 0000002c
    IP: [] cfq_get_queue+0x232/0x412
    *pde = 00000000
    Oops: 0000 [#1] PREEMPT
    last sysfs file: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/host3/scsi_host/host3/uevent

    [..]
    Pid: 30039, comm: scsi_scan_3 Not tainted 2.6.35.2-fg.roam #1 Volvi2 /Aspire 4315
    EIP: 0060:[] EFLAGS: 00010086 CPU: 0
    EIP is at cfq_get_queue+0x232/0x412
    EAX: f705f9c0 EBX: e977abac ECX: 00000000 EDX: 00000000
    ESI: f00da400 EDI: f00da4ec EBP: e977a800 ESP: dff8fd00
    DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068
    Process scsi_scan_3 (pid: 30039, ti=dff8e000 task=f6b6c9a0 task.ti=dff8e000)
    Stack:
    00000000 00000000 00000001 01ff0000 f00da508 00000000 f00da524 f00da540
    e7994940 dd631750 f705f9c0 e977a820 e977ac44 f00da4d0 00000001 f6b6c9a0
    00000010 00008010 0000000b 00000000 00000001 e977a800 dd76fac0 00000246
    Call Trace:
    [] ? cfq_set_request+0x228/0x34c
    [] ? cfq_set_request+0x0/0x34c
    [] ? elv_set_request+0xf/0x1c
    [] ? get_request+0x1ad/0x22f
    [] ? get_request_wait+0x1f/0x11a
    [] ? kvasprintf+0x33/0x3b
    [] ? scsi_execute+0x1d/0x103
    [] ? scsi_execute_req+0x58/0x83
    [] ? scsi_probe_and_add_lun+0x188/0x7c2
    [] ? attribute_container_add_device+0x15/0xfa
    [] ? kobject_get+0xf/0x13
    [] ? get_device+0x10/0x14
    [] ? scsi_alloc_target+0x217/0x24d
    [] ? __scsi_scan_target+0x95/0x480
    [] ? dequeue_entity+0x14/0x1fe
    [] ? update_curr+0x165/0x1ab
    [] ? update_curr+0x165/0x1ab
    [] ? scsi_scan_channel+0x4a/0x76
    [] ? scsi_scan_host_selected+0x77/0xad
    [] ? do_scan_async+0x0/0x11a
    [] ? do_scsi_scan_host+0x51/0x56
    [] ? do_scan_async+0x0/0x11a
    [] ? do_scan_async+0xe/0x11a
    [] ? do_scan_async+0x0/0x11a
    [] ? kthread+0x5e/0x63
    [] ? kthread+0x0/0x63
    [] ? kernel_thread_helper+0x6/0x10
    Code: 44 24 1c 54 83 44 24 18 54 83 fa 03 75 94 8b 06 c7 86 64 02 00 00 01 00 00 00 83 e0 03 09 f0 89 06 8b 44 24 28 8b 90 58 01 00 00 42 2c 85 c0 75 03 8b 42 08 8d 54 24 48 52 8d 4c 24 50 51 68
    EIP: [] cfq_get_queue+0x232/0x412 SS:ESP 0068:dff8fd00
    CR2: 000000000000002c
    ---[ end trace 9a88306573f69b12 ]---

    The problem here is that we don't have bdi->dev information available when
    the thread does some IO. Hence, when dev_name() tries to access bdi->dev,
    it crashes.

    This problem does not happen if kernel threads are in the root group, as
    the root group is statically allocated at device initialization time and
    we don't hit this piece of code.

    Fix it by delaying the filling in of the device's major and minor numbers
    in blk_group. Initially a blk_group is created with 0 as the device
    information, and this information is filled in later once more IO comes
    in from the same group.
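
    A minimal userspace sketch of the "fill in the device number later" idea
    (hypothetical types and names, not the real blkio code):

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-in for a blkio group: the device number starts out
     * as 0 and is filled in lazily, once the backing device is known. */
    struct toy_blk_group {
        unsigned int dev;   /* 0 means "not yet known" */
    };

    /* Model of the fix: only record major:minor once bdi->dev is available. */
    static void toy_fill_dev(struct toy_blk_group *blkg, bool bdi_dev_ready,
                             unsigned int devno)
    {
        if (!blkg->dev && bdi_dev_ready)
            blkg->dev = devno;
    }

    int main(void)
    {
        struct toy_blk_group g = { .dev = 0 };

        toy_fill_dev(&g, false, 0x800010); /* too early: stays 0 */
        printf("dev=%#x\n", g.dev);
        toy_fill_dev(&g, true, 0x800010);  /* later IO from the group: filled in */
        printf("dev=%#x\n", g.dev);
        return 0;
    }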

    Reported-by: Mike Kazantsev
    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • This bug was introduced in 7b6d91daee5cac6402186ff224c3af39d79f4a0e
    "block: unify flags for struct bio and struct request"

    Cc: Boaz Harrosh
    Signed-off-by: Benny Halevy
    Signed-off-by: Jens Axboe

    Benny Halevy
     

10 Sep, 2010

1 commit

  • While testing CPU DLPAR, the following problem was discovered.
    We were DLPAR removing the first CPU, which in this case was
    logical CPUs 0-3. CPUs 0-2 were already marked offline and
    we were in the process of offlining CPU 3. After marking
    the CPU inactive and offline in cpu_disable, but before the
    cpu was completely idle (cpu_die), we ended up in __make_request
    on CPU 3. There we looked at the topology map to see which CPU
    to complete the I/O on and found no CPUs in the cpu_sibling_map.
    This resulted in the block layer setting the completion cpu
    to be NR_CPUS, which then caused an oops when we tried to
    complete the I/O.

    Fix this by sanity checking that the value we return from blk_cpu_to_group
    is a valid cpu value.
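
    A small standalone sketch of such a sanity check (hypothetical names; the
    real fix range-checks the value blk_cpu_to_group computes):

    #include <stdio.h>

    #define NR_CPUS 8

    /* If the group lookup yields an invalid CPU (e.g. the sibling map is
     * empty while a CPU is going offline), fall back to the submitting CPU
     * instead of completing on NR_CPUS. */
    static int sane_completion_cpu(int group_cpu, int this_cpu)
    {
        if (group_cpu < 0 || group_cpu >= NR_CPUS)
            return this_cpu;
        return group_cpu;
    }

    int main(void)
    {
        printf("%d\n", sane_completion_cpu(NR_CPUS, 3)); /* invalid -> 3 */
        printf("%d\n", sane_completion_cpu(2, 3));       /* valid   -> 2 */
        return 0;
    }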

    Signed-off-by: Brian King
    Signed-off-by: Jens Axboe

    Brian King
     

23 Aug, 2010

8 commits

  • Currently drivers must do an elevator_exit() + elevator_init()
    to switch IO schedulers. There are a few problems with this:

    - Since commit 1abec4fdbb142e3ccb6ce99832fae42129134a96,
    elevator_init() requires a zeroed out q->elevator
    pointer. The two existing in-kernel users don't do that.

    - It will only work at initialization time, since using the
    above two-staged construct does not properly quiesce the queue.

    So add elevator_change() which takes care of this, and convert
    the elv_iosched_store() sysfs interface to use this helper as well.
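
    A toy model of why a single helper is needed, using hypothetical stand-ins
    rather than the real elevator API:

    #include <stddef.h>
    #include <stdio.h>

    /* init() insists on a zeroed elevator pointer, so a combined "change"
     * helper must tear down and clear it before re-initializing. */
    struct toy_queue { const char *elevator; };

    static int toy_elevator_init(struct toy_queue *q, const char *name)
    {
        if (q->elevator)    /* mirrors the "q->elevator must be zeroed" rule */
            return -1;
        q->elevator = name;
        return 0;
    }

    static void toy_elevator_exit(struct toy_queue *q)
    {
        q->elevator = NULL;
    }

    /* Rough analogue of elevator_change(): exit + clear + init in the right
     * order (the real helper also quiesces the queue). */
    static int toy_elevator_change(struct toy_queue *q, const char *name)
    {
        toy_elevator_exit(q);
        return toy_elevator_init(q, name);
    }

    int main(void)
    {
        struct toy_queue q = { .elevator = NULL };

        toy_elevator_init(&q, "cfq");
        printf("%d\n", toy_elevator_change(&q, "deadline")); /* 0 on success */
        return 0;
    }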

    Reported-by: Peter Oberparleiter
    Reported-by: Kevin Vigor
    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • The return value of the bi_rw tests is no longer bool after commit
    74450be1, but the results of such tests are still stored in bools. This
    doesn't fit for some compilers (gcc 4.5 here), so either use the !! idiom
    to get real bools, or use ulong where the result is assigned somewhere.
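
    A small standalone example of the two remedies (the flag value here is
    hypothetical):

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical high flag bit, in the spirit of the bi_rw/REQ_* flags. */
    #define RQ_SYNC (1ul << 19)

    int main(void)
    {
        unsigned long rw = RQ_SYNC;

        /* Normalize with !!, or keep the result in a type wide enough to
         * hold the raw flag bit. */
        bool is_sync = !!(rw & RQ_SYNC);
        unsigned long sync_bits = rw & RQ_SYNC;

        printf("%d %#lx\n", is_sync, sync_bits);
        return 0;
    }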

    Signed-off-by: Jiri Slaby
    Cc: Christoph Hellwig
    Reviewed-by: Jeff Moyer
    Signed-off-by: Jens Axboe

    Jiri Slaby
     
  • The kernel needs to kobject_put on dev->kobj if elv_register_queue fails.
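
    A userspace sketch of the error-path pattern, with hypothetical stand-ins
    for kobject_get/kobject_put:

    #include <stdio.h>

    struct toy_kobj { int refcount; };

    static void toy_get(struct toy_kobj *k) { k->refcount++; }
    static void toy_put(struct toy_kobj *k) { k->refcount--; }

    static int toy_register_queue(struct toy_kobj *dev_kobj, int fail)
    {
        toy_get(dev_kobj);
        if (fail) {
            toy_put(dev_kobj);  /* the missing put: drop the reference on failure */
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        struct toy_kobj dev = { .refcount = 1 };

        toy_register_queue(&dev, 1);           /* failing registration */
        printf("refcount=%d\n", dev.refcount); /* still 1: nothing leaked */
        return 0;
    }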

    Signed-off-by: Xiaotian Feng
    Cc: "Martin K. Petersen"
    Cc: Stephen Hemminger
    Cc: Nikanth Karthikesan
    Cc: David Teigland
    Signed-off-by: Jens Axboe

    Xiaotian Feng
     
  • o Divyesh had gotten rid of this code in the past. I want to re-introduce
    it, as it helps me a lot during debugging.

    Reviewed-by: Jeff Moyer
    Reviewed-by: Divyesh Shah
    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o Implement a new tunable group_idle, which allows idling on the group
    instead of on a cfq queue. Hence one can set slice_idle = 0 and not idle
    on the individual queues, but idle on the group instead. This way, on fast
    storage, we can get fairness between groups while overall throughput
    improves.

    Signed-off-by: Vivek Goyal
    Acked-by: Jeff Moyer
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • o Implement another CFQ mode where we charge a group in terms of the
    number of requests dispatched instead of measuring time. Measuring in
    terms of time is not possible when we are driving deeper queue depths and
    there are requests from multiple cfq queues in the request queue.

    o This mode currently gets activated if one sets slice_idle=0 and the
    associated disk supports NCQ. Again, the idea is that on an NCQ disk with
    idling disabled, most of the queues will dispatch one or more requests,
    then cfq queue expiry happens, and we don't have a way to measure time.
    So start providing fairness in terms of IOPS.

    o Currently IOPS mode works only with cfq group scheduling. CFQ follows
    different scheduling algorithms for queue and group scheduling. These IOPS
    stats are used only for group scheduling, hence nothing should change in
    non-cgroup mode.

    o For CFQ group scheduling, one can disable slice idling so that we don't
    idle on the queue and can drive deeper request queue depths (achieving
    better throughput); at the same time group idle is enabled, so one should
    still get service differentiation among groups.
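
    A toy, self-contained illustration of the two charging modes described
    above (hypothetical struct and function names, not the CFQ code):

    #include <stdio.h>

    struct toy_group_stats {
        unsigned long long time_used;
        unsigned long long requests;
    };

    /* Charge a group either by the time its queue was active or, when time
     * cannot be measured (slice_idle=0 on NCQ, deep queue depths), by the
     * number of requests dispatched (IOPS mode). */
    static void toy_charge_group(struct toy_group_stats *g, int iops_mode,
                                 unsigned long long elapsed,
                                 unsigned long long nr_dispatched)
    {
        if (iops_mode)
            g->requests += nr_dispatched;
        else
            g->time_used += elapsed;
    }

    int main(void)
    {
        struct toy_group_stats g = { 0, 0 };

        toy_charge_group(&g, 1, 0, 4); /* IOPS mode: 4 requests charged */
        toy_charge_group(&g, 0, 8, 0); /* time mode: 8 time units charged */
        printf("time=%llu requests=%llu\n", g.time_used, g.requests);
        return 0;
    }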

    Signed-off-by: Vivek Goyal
    Acked-by: Jeff Moyer
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • Do not idle on either the cfq queue or the service tree if slice_idle=0;
    the user does not want any queue or service tree idling. Currently, even
    with slice_idle=0, we were waiting for the request to finish before
    expiring the queue, and that can lead to lower queue depths.

    Acked-by: Jeff Moyer
    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     
  • If the cgroup hierarchy for blkio control groups is deeper than two
    levels, the kernel should not allow the creation of further levels. The
    mkdir system call does not expect EINVAL as a return value. This patch
    replaces EINVAL with the more appropriate EPERM.
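
    A toy version of the depth check described above (hypothetical function,
    not the actual blkio cgroup code):

    #include <errno.h>
    #include <stdio.h>

    /* Refuse to create a blkio cgroup deeper than the supported two levels,
     * returning EPERM rather than EINVAL, since mkdir(2) does not document
     * EINVAL for this situation. */
    static int toy_blkio_can_create(int new_group_depth)
    {
        if (new_group_depth > 2)
            return -EPERM;
        return 0;
    }

    int main(void)
    {
        printf("%d\n", toy_blkio_can_create(2)); /* 0: allowed */
        printf("%d\n", toy_blkio_can_create(3)); /* -EPERM: too deep */
        return 0;
    }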

    Signed-off-by: Ciju Rajan K
    Reviewed-by: KAMEZAWA Hiroyuki
    Signed-off-by: Jens Axboe

    Ciju Rajan K
     

12 Aug, 2010

1 commit

  • Secure discard is the same as discard except that all copies of the
    discarded sectors (perhaps created by garbage collection) must also be
    erased.

    Signed-off-by: Adrian Hunter
    Acked-by: Jens Axboe
    Cc: Kyungmin Park
    Cc: Madhusudhan Chikkature
    Cc: Christoph Hellwig
    Cc: Ben Gardiner
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Adrian Hunter
     

09 Aug, 2010

2 commits


08 Aug, 2010

19 commits

  • q->bar_rq.rq_disk is NULL. Use the rq_disk of the original request
    instead.

    Signed-off-by: FUJITA Tomonori
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
     
  • The block layer doesn't set rq->cmd_type on flush requests. By
    definition, it should be REQ_TYPE_FS (the lower layers build a command
    and interpret the result of it, that is, the block layer doesn't know
    the details).

    Signed-off-by: FUJITA Tomonori
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
     
  • If the queue doesn't have a limit set, or it's just set to UINT_MAX like
    we default to, we could be sending down a discard request that isn't
    of the correct granularity if the block size is > 512b.

    Fix this by adjusting max_discard_sectors down to the proper
    alignment.
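
    A standalone sketch of the alignment arithmetic (hypothetical helper name
    and values):

    #include <stdio.h>

    /* Round the discard limit down to a whole number of logical blocks,
     * expressed in 512-byte sectors. */
    static unsigned int align_discard_sectors(unsigned int max_discard_sectors,
                                              unsigned int logical_block_size)
    {
        unsigned int granularity = logical_block_size >> 9; /* sectors per block */

        if (granularity > 1)
            max_discard_sectors -= max_discard_sectors % granularity;
        return max_discard_sectors;
    }

    int main(void)
    {
        /* A UINT_MAX-style default trimmed for a 4096-byte block size. */
        printf("%u\n", align_discard_sectors(0xffffffffu, 4096));
        return 0;
    }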

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • Issuing a blkdev_issue_flush() on an unconfigured loop device causes a panic as
    q->make_request_fn is not configured. This can occur when trying to mount the
    unconfigured loop device as an XFS filesystem. There are no guards that catch
    the bio before the request function is called because we don't add a payload to
    the bio. Instead, manually check this case as soon as we have a pointer to the
    queue to flush.
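
    A toy model of the guard, with hypothetical types standing in for the
    request queue:

    #include <errno.h>
    #include <stddef.h>
    #include <stdio.h>

    /* If the queue has no make_request function, the device is not set up,
     * so fail the flush instead of calling through a NULL pointer. */
    struct toy_queue { void (*make_request_fn)(void); };

    static int toy_issue_flush(struct toy_queue *q)
    {
        if (!q || !q->make_request_fn)
            return -ENXIO;  /* unconfigured device: bail out early */
        q->make_request_fn();
        return 0;
    }

    static void noop_request(void) { }

    int main(void)
    {
        struct toy_queue unconfigured = { .make_request_fn = NULL };
        struct toy_queue configured   = { .make_request_fn = noop_request };

        printf("%d\n", toy_issue_flush(&unconfigured)); /* -ENXIO */
        printf("%d\n", toy_issue_flush(&configured));   /* 0 */
        return 0;
    }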

    Signed-off-by: Dave Chinner
    Signed-off-by: Jens Axboe

    Dave Chinner
     
  • The blkpg_ioctl and blkdev_reread_part access fields of
    the bdev and gendisk structures, yet they always do so
    under the protection of bdev->bd_mutex, which seems
    sufficient.

    Signed-off-by: Arnd Bergmann
    Acked-by: Christoph Hellwig
    Signed-off-by: Jens Axboe

    Arnd Bergmann
     
  • We only call the functions set_device_ro(),
    invalidate_bdev(), sync_filesystem() and sync_blockdev()
    while holding the BKL in these commands. All
    of these are also done in other code paths without
    the BKL, which leads me to the conclusion that
    the BKL is not needed here either.

    The reason we hold it here is that it was originally
    pushed down into the ioctl function from vfs_ioctl.

    Signed-off-by: Arnd Bergmann
    Reviewed-by: Christoph Hellwig
    Signed-off-by: Jens Axboe

    Arnd Bergmann
     
  • The blktrace driver currently needs the BKL, but
    we should not need to take that in the block layer,
    so just push it down into the driver itself.

    It is quite likely that the BKL is not actually
    required in blktrace code and could be removed
    in a follow-on patch.

    Signed-off-by: Arnd Bergmann
    Acked-by: Christoph Hellwig
    Signed-off-by: Jens Axboe

    Arnd Bergmann
     
  • As a preparation for the removal of the big kernel
    lock in the block layer, this removes the BKL
    from the common ioctl handling code, moving it
    into every single driver still using it.

    Signed-off-by: Arnd Bergmann
    Acked-by: Christoph Hellwig
    Signed-off-by: Jens Axboe

    Arnd Bergmann
     
  • This removes q->prepare_flush_fn completely (changes the
    blk_queue_ordered API).

    Signed-off-by: FUJITA Tomonori
    Reviewed-by: Christoph Hellwig
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
     
  • This is preparation for removing q->prepare_flush_fn.

    Temporarily, blk_queue_ordered() permits QUEUE_ORDERED_DO_PREFLUSH and
    QUEUE_ORDERED_DO_POSTFLUSH without prepare_flush_fn.

    Signed-off-by: FUJITA Tomonori
    Reviewed-by: Christoph Hellwig
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
     
  • SCSI-ml needs a way to mark a request as a flush request in
    q->prepare_flush_fn because it needs to identify them later (e.g. in
    q->request_fn or prep_rq_fn).

    queue_flush sets REQ_HARDBARRIER in rq->cmd_flags; however, the block
    layer also sends normal REQ_TYPE_FS requests with REQ_HARDBARRIER, so
    SCSI-ml can't use REQ_HARDBARRIER to identify flush requests.

    We could change the block layer to clear the REQ_HARDBARRIER bit before
    sending non-flush requests to the lower layers. However, introducing
    the new flag looks cleaner (and surely easier).

    Signed-off-by: FUJITA Tomonori
    Cc: James Bottomley
    Cc: David S. Miller
    Cc: Rusty Russell
    Cc: Alasdair G Kergon
    Reviewed-by: Christoph Hellwig
    Signed-off-by: Jens Axboe

    FUJITA Tomonori
     
  • Reviewed-by: FUJITA Tomonori

    Signed-off-by: Jens Axboe

    James Bottomley
     
  • Didn't cause a merge conflict, so fixed this one up manually
    post merge.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • Just some dead code.

    Signed-off-by: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Jens Axboe

    Andi Kleen
     
  • Allocating a fixed payload for discard requests always was a horrible hack,
    and it's now coming to bite us when adding support for discard in DM/MD.

    So change the code to leave the allocation of a payload to the low-level
    driver. Unfortunately that means we'll need another hack, which allows
    us to update the various block layer length fields indicating that we
    have a payload. Instead of hiding this in sd.c, which we already partially
    do for UNMAP support, add a documented helper in the core block layer for it.

    Signed-off-by: Christoph Hellwig
    Acked-by: Mike Snitzer
    Signed-off-by: Jens Axboe

    Christoph Hellwig
     
  • Remove the current bio flags and reuse the request flags for the bio, too.
    This makes it easier to trace the type of I/O from the filesystem
    down to the block driver. There were two flags in the bio that were
    missing from the requests: BIO_RW_UNPLUG and BIO_RW_AHEAD. I've also
    renamed two request flags that had a superfluous RW in them.

    Note that the flags are in bio.h despite having the REQ_ name - as
    blkdev.h includes bio.h that is the only way to go for now.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Jens Axboe

    Christoph Hellwig
     
  • Remove all the trivial wrappers for the cmd_type and cmd_flags fields in
    struct request. This allows much easier grepping for different request
    types instead of unwinding through macros.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Jens Axboe

    Christoph Hellwig
     
  • The code for nonrot, random, and io stats is completely identical.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • There are two reasons for doing this:

    - On SSD disks, the completion times aren't as random as they
    are for rotational drives. So it's questionable whether they
    should contribute to the random pool in the first place.

    - Calling add_disk_randomness() has a lot of overhead.

    This adds /sys/block/<dev>/queue/add_random that will allow you to
    switch it off on a per-device basis. The default setting is on, so there
    should be no functional changes from this patch.

    Signed-off-by: Jens Axboe

    Jens Axboe
     

24 Jun, 2010

1 commit

  • In submit_bio, we count vm events by checking READ/WRITE.
    But actually DISCARD_NOBARRIER also has the WRITE flag set.
    It looks as if in blkdev_issue_discard, we also add a
    page as the payload, so the bio_has_data check isn't enough.
    So add another check for discard bios.
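
    A small standalone sketch of the accounting decision (hypothetical flag
    bits, not the real bio flags):

    #include <stdbool.h>
    #include <stdio.h>

    /* A discard carries the WRITE bit (and may carry a payload page), but it
     * should not be counted as written page-cache data. */
    #define TOY_WRITE   (1u << 0)
    #define TOY_DISCARD (1u << 7)

    static bool count_as_vm_write(unsigned int bio_flags, bool bio_has_data)
    {
        if (!bio_has_data)
            return false;
        if (bio_flags & TOY_DISCARD)
            return false;   /* the extra check added here */
        return bio_flags & TOY_WRITE;
    }

    int main(void)
    {
        printf("%d\n", count_as_vm_write(TOY_WRITE, true));               /* 1 */
        printf("%d\n", count_as_vm_write(TOY_WRITE | TOY_DISCARD, true)); /* 0 */
        return 0;
    }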

    Signed-off-by: Tao Ma
    Signed-off-by: Jens Axboe

    Tao Ma
     

21 Jun, 2010

1 commit


19 Jun, 2010

1 commit

  • Hi Jens,

    A few days back, Ingo noticed a CFQ boot-time warning. This patch fixes
    it. The issue here is that with CFQ_GROUP_IOSCHED=n, CFQ should not really
    be making blkio stat related calls.

    > Hm, it's still not entirely fixed, as of 2.6.35-rc2-00131-g7908a9e. With
    > some
    > configs i get bad spinlock warnings during bootup:
    >
    > [ 28.968013] initcall net_olddevs_init+0x0/0x82 returned 0 after 93750
    > usecs
    > [ 28.972003] calling b44_init+0x0/0x55 @ 1
    > [ 28.976009] bus: 'pci': add driver b44
    > [ 28.976374] sda:
    > [ 28.978157] BUG: spinlock bad magic on CPU#1, async/0/117
    > [ 28.980000] lock: 7e1c5bbc, .magic: 00000000, .owner: /-1, .owner_cpu: 0
    > [ 28.980000] Pid: 117, comm: async/0 Not tainted 2.6.35-rc2-tip-01092-g010e7ef-dirty #8183
    > [ 28.980000] Call Trace:
    > [ 28.980000] [] ? printk+0x20/0x24
    > [ 28.980000] [] spin_bug+0x7c/0x87
    > [ 28.980000] [] do_raw_spin_lock+0x1e/0x123
    > [ 28.980000] [] ? _raw_spin_lock_irqsave+0x12/0x20
    > [ 28.980000] [] _raw_spin_lock_irqsave+0x1a/0x20
    > [ 28.980000] [] blkiocg_update_io_add_stats+0x25/0xfb
    > [ 28.980000] [] ? cfq_prio_tree_add+0xb1/0xc1
    > [ 28.980000] [] cfq_insert_request+0x8c/0x425

    Signed-off-by: Vivek Goyal
    Signed-off-by: Jens Axboe

    Vivek Goyal
     

18 Jun, 2010

1 commit

  • Hi,

    A user reported a kernel bug when running a particular program that did
    the following:

    created 32 threads
    - each thread took a mutex, grabbed a global offset, added a buffer size
    to that offset, released the lock
    - read from the given offset in the file
    - created a new thread to do the same
    - exited

    The result is that cfq's close cooperator logic would trigger, as the
    threads were issuing I/O within the mean seek distance of one another.
    This workload managed to routinely trigger a use after free bug when
    walking the list of merge candidates for a particular cfqq
    (cfqq->new_cfqq). The logic used for merging queues looks like this:

    static void cfq_setup_merge(struct cfq_queue *cfqq, struct cfq_queue *new_cfqq)
    {
    int process_refs, new_process_refs;
    struct cfq_queue *__cfqq;

    /* Avoid a circular list and skip interim queue merges */
    while ((__cfqq = new_cfqq->new_cfqq)) {
    if (__cfqq == cfqq)
    return;
    new_cfqq = __cfqq;
    }

    process_refs = cfqq_process_refs(cfqq);
    /*
    * If the process for the cfqq has gone away, there is no
    * sense in merging the queues.
    */
    if (process_refs == 0)
    return;

    /*
    * Merge in the direction of the lesser amount of work.
    */
    new_process_refs = cfqq_process_refs(new_cfqq);
    if (new_process_refs >= process_refs) {
    cfqq->new_cfqq = new_cfqq;
    atomic_add(process_refs, &new_cfqq->ref);
    } else {
    new_cfqq->new_cfqq = cfqq;
    atomic_add(new_process_refs, &cfqq->ref);
    }
    }

    When a merge candidate is found, we add the process references for the
    queue with less references to the queue with more. The actual merging
    of queues happens when a new request is issued for a given cfqq. In the
    case of the test program, it only does a single pread call to read in
    1MB, so the actual merge never happens.

    Normally, this is fine, as when the queue exits, we simply drop the
    references we took on the other cfqqs in the merge chain:

    /*
    * If this queue was scheduled to merge with another queue, be
    * sure to drop the reference taken on that queue (and others in
    * the merge chain). See cfq_setup_merge and cfq_merge_cfqqs.
    */
    __cfqq = cfqq->new_cfqq;
    while (__cfqq) {
    if (__cfqq == cfqq) {
    WARN(1, "cfqq->new_cfqq loop detected\n");
    break;
    }
    next = __cfqq->new_cfqq;
    cfq_put_queue(__cfqq);
    __cfqq = next;
    }

    However, there is a hole in this logic. Consider the following (and
    keep in mind that each I/O keeps a reference to the cfqq):

    q1->new_cfqq = q2 // q2 now has 2 process references
    q3->new_cfqq = q2 // q2 now has 3 process references

    // the process associated with q2 exits
    // q2 now has 2 process references

    // queue 1 exits, drops its reference on q2
    // q2 now has 1 process reference

    // q3 exits, so has 0 process references, and hence drops its references
    // to q2, which leaves q2 also with 0 process references

    q4 comes along and wants to merge with q3

    q3->new_cfqq still points at q2! We follow that link and end up at an
    already freed cfqq.

    So, the fix is to not follow a merge chain if the top-most queue does
    not have a process reference, otherwise any queue in the chain could be
    already freed. I also changed the logic to disallow merging with a
    queue that does not have any process references. Previously, we did
    this check for one of the merge candidates, but not the other. That
    doesn't really make sense.

    Without the attached patch, my system would BUG within a couple of
    seconds of running the reproducer program. With the patch applied, my
    system ran the program for over an hour without issues.

    This addresses the following bugzilla:
    https://bugzilla.kernel.org/show_bug.cgi?id=16217

    Thanks a ton to Phil Carns for providing the bug report and an excellent
    reproducer.

    [ Note for stable: this applies to 2.6.32/33/34 ].

    Signed-off-by: Jeff Moyer
    Reported-by: Phil Carns
    Cc: stable@kernel.org
    Signed-off-by: Jens Axboe

    Jeff Moyer
     

17 Jun, 2010

1 commit

  • Filesystems assume that DISCARD_BARRIER requests are full barriers, so that
    they don't have to track in-progress discard operations when submitting new
    I/O. But currently we only treat them as elevator barriers, which don't
    actually do the necessary queue drains.

    Also remove the unlikely() around both the DISCARD and BARRIER requests -
    they happen far too often for a static mispredict.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Jens Axboe

    Christoph Hellwig
     

04 Jun, 2010

1 commit