12 Dec, 2018

1 commit

  • The return value of rwb_enabled() must not change while there is any
    inflight IO.

    wbt_disable_default() may set rwb->wb_normal to zero, however the
    blk_stat timer may still be pending, and the timer function will update
    rwb->wb_normal again.

    This patch introduces blk_stat_deactivate() and applies it in
    wbt_disable_default(), which fixes the following IO hang, triggered by
    running parted while switching the io scheduler:

    [ 369.937806] INFO: task parted:3645 blocked for more than 120 seconds.
    [ 369.938941] Not tainted 4.20.0-rc6-00284-g906c801e5248 #498
    [ 369.939797] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    [ 369.940768] parted D 0 3645 3239 0x00000000
    [ 369.941500] Call Trace:
    [ 369.941874] ? __schedule+0x6d9/0x74c
    [ 369.942392] ? wbt_done+0x5e/0x5e
    [ 369.942864] ? wbt_cleanup_cb+0x16/0x16
    [ 369.943404] ? wbt_done+0x5e/0x5e
    [ 369.943874] schedule+0x67/0x78
    [ 369.944298] io_schedule+0x12/0x33
    [ 369.944771] rq_qos_wait+0xb5/0x119
    [ 369.945193] ? karma_partition+0x1c2/0x1c2
    [ 369.945691] ? wbt_cleanup_cb+0x16/0x16
    [ 369.946151] wbt_wait+0x85/0xb6
    [ 369.946540] __rq_qos_throttle+0x23/0x2f
    [ 369.947014] blk_mq_make_request+0xe6/0x40a
    [ 369.947518] generic_make_request+0x192/0x2fe
    [ 369.948042] ? submit_bio+0x103/0x11f
    [ 369.948486] ? __radix_tree_lookup+0x35/0xb5
    [ 369.949011] submit_bio+0x103/0x11f
    [ 369.949436] ? blkg_lookup_slowpath+0x25/0x44
    [ 369.949962] submit_bio_wait+0x53/0x7f
    [ 369.950469] blkdev_issue_flush+0x8a/0xae
    [ 369.951032] blkdev_fsync+0x2f/0x3a
    [ 369.951502] do_fsync+0x2e/0x47
    [ 369.951887] __x64_sys_fsync+0x10/0x13
    [ 369.952374] do_syscall_64+0x89/0x149
    [ 369.952819] entry_SYSCALL_64_after_hwframe+0x49/0xbe
    [ 369.953492] RIP: 0033:0x7f95a1e729d4
    [ 369.953996] Code: Bad RIP value.
    [ 369.954456] RSP: 002b:00007ffdb570dd48 EFLAGS: 00000246 ORIG_RAX: 000000000000004a
    [ 369.955506] RAX: ffffffffffffffda RBX: 000055c2139c6be0 RCX: 00007f95a1e729d4
    [ 369.956389] RDX: 0000000000000001 RSI: 0000000000001261 RDI: 0000000000000004
    [ 369.957325] RBP: 0000000000000002 R08: 0000000000000000 R09: 000055c2139c6ce0
    [ 369.958199] R10: 0000000000000000 R11: 0000000000000246 R12: 000055c2139c0380
    [ 369.959143] R13: 0000000000000004 R14: 0000000000000100 R15: 0000000000000008
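
    The shape of the fix, as a minimal sketch (assuming rwb->cb is wbt's
    blk_stat callback and that the callback embeds its timer; wbt_rwb() is
    a hypothetical lookup helper, not the exact upstream code):

        /* blk-stat.h: wait out a pending timer so the timer function
         * cannot observe and rewrite wbt state after it is torn down */
        static inline void blk_stat_deactivate(struct blk_stat_callback *cb)
        {
                del_timer_sync(&cb->timer);
        }

        /* blk-wbt.c: deactivate the timer before zeroing wb_normal */
        void wbt_disable_default(struct request_queue *q)
        {
                struct rq_wb *rwb = wbt_rwb(q);  /* hypothetical helper */

                if (rwb && rwb->enable_state == WBT_STATE_ON_DEFAULT) {
                        blk_stat_deactivate(rwb->cb);
                        rwb->wb_normal = 0;
                }
        }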

    Cc: stable@vger.kernel.org
    Cc: Paolo Valente
    Signed-off-by: Ming Lei
    Signed-off-by: Jens Axboe

09 May, 2018

3 commits

  • Currently, struct request has four timestamp fields:

    - A start time, set at get_request time, in jiffies, used for iostats
    - An I/O start time, set at start_request time, in ktime nanoseconds,
      used for blk-stats (i.e., wbt, kyber, hybrid polling)
    - Another start time and another I/O start time, used for cfq and bfq

    These can all be consolidated into one start time and one I/O start
    time, both in ktime nanoseconds, shaving off up to 16 bytes from struct
    request depending on the kernel config.
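
    A hedged sketch of the consolidated layout (the two field names follow
    the upstream convention; treat the surrounding details as illustrative):

        struct request {
                ...
                /* set at get_request time, used for iostats */
                u64 start_time_ns;
                /* set at start_request time: wbt, kyber, hybrid polling */
                u64 io_start_time_ns;
                ...
        };

        /* all users now read a single ktime-based clock: */
        rq->start_time_ns = ktime_get_ns();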

    Signed-off-by: Omar Sandoval
    Signed-off-by: Jens Axboe

  • struct blk_issue_stat squashes three things into one u64:

    - The time the driver started working on a request
    - The original size of the request (for the io.low controller)
    - Flags for writeback throttling

    It turns out that on x86_64, we have a 4-byte hole in struct request
    which we can fill with the non-timestamp fields from blk_issue_stat,
    simplifying things quite a bit.
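
    Roughly, the before and after (a sketch of the 2+2 byte hole fill;
    the field names are meant to be illustrative, not an exact diff):

        /* before: everything shifted and masked into one u64 */
        struct blk_issue_stat {
                u64 stat;   /* issue time | original size | wbt flags */
        };

        /* after: plain fields in the existing 4-byte hole */
        struct request {
                ...
                unsigned short wbt_flags;    /* writeback throttling bits */
                unsigned short throtl_size;  /* original size, for io.low */
                u64 io_start_time_ns;        /* timestamp, no masking */
                ...
        };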

    Signed-off-by: Omar Sandoval
    Signed-off-by: Jens Axboe

  • Throttle discards like we would any background write. Discards should
    be background activity, so if they are impacting foreground IO, then
    we will throttle them down.
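
    Conceptually, the classification looks like this (a sketch, not the
    exact patch; the REQ_SYNC test on writes is illustrative):

        static bool wbt_should_throttle(struct rq_wb *rwb, struct bio *bio)
        {
                switch (bio_op(bio)) {
                case REQ_OP_WRITE:
                        /* only background-ish writes get throttled */
                        return !(bio->bi_opf & REQ_SYNC);
                case REQ_OP_DISCARD:
                        return true;   /* discards are always background */
                default:
                        return false;
                }
        }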

    Reviewed-by: Darrick J. Wong
    Reviewed-by: Omar Sandoval
    Signed-off-by: Jens Axboe

02 Nov, 2017

1 commit

  • Many source files in the tree are missing licensing information, which
    makes it harder for compliance tools to determine the correct license.

    By default all files without license information are under the default
    license of the kernel, which is GPL version 2.

    Update the files which contain no license information with the 'GPL-2.0'
    SPDX license identifier. The SPDX identifier is a legally binding
    shorthand, which can be used instead of the full boilerplate text.

    This patch is based on work done by Thomas Gleixner and Kate Stewart and
    Philippe Ombredanne.

    How this work was done:

    Patches were generated and checked against linux-4.14-rc6 for a subset
    of the use cases:
    - file had no licensing information in it,
    - file was a */uapi/* one with no licensing information in it,
    - file was a */uapi/* one with existing licensing information.

    Further patches will be generated in subsequent months to fix up cases
    where non-standard license headers were used, and references to license
    had to be inferred by heuristics based on keywords.

    The analysis to determine which SPDX License Identifier should be
    applied to a file was done in a spreadsheet of side-by-side results from
    the output of two independent scanners (ScanCode & Windriver) producing
    SPDX tag:value files, created by Philippe Ombredanne. Philippe prepared
    the base worksheet and did an initial spot review of a few thousand
    files.

    The 4.13 kernel was the starting point of the analysis with 60,537 files
    assessed. Kate Stewart did a file-by-file comparison of the scanner
    results in the spreadsheet to determine which SPDX license identifier(s)
    should be applied to each file. She confirmed any determination that was
    not immediately clear with lawyers working with the Linux Foundation.

    The criteria used to select files for SPDX license identifier tagging
    were:
    - Files considered eligible had to be source code files.
    - Make and config files were included as candidates if they contained >5
    lines of source.
    - File already had some variant of a license header in it (even if
    <SPDX>).

    Reviewed-by: Philippe Ombredanne
    Reviewed-by: Thomas Gleixner
    Signed-off-by: Greg Kroah-Hartman

28 Mar, 2017

2 commits

  • The user configures a latency target, but the latency threshold for
    each request size isn't fixed. For an SSD, the IO latency highly
    depends on request size. To calculate the latency threshold, we sample
    some data, e.g., the average latency for request sizes 4k, 8k, 16k,
    32k .. 1M. The latency threshold of each request size will be the
    sampled latency (I'll call it the base latency) plus the latency
    target. For example, if the base latency for request size 4k is 80us
    and the user configures a latency target of 60us, the 4k latency
    threshold will be 80 + 60 = 140us.

    To sample data, we calculate the base-2 order of the rounded-up IO
    size in sectors. If the IO size is bigger than 1M, it is accounted as
    1M. Since the calculation rounds up, the base latency will be slightly
    smaller than the actual value. Also, if there isn't any IO dispatched
    for a specific IO size, we use the base latency of the next smaller IO
    size for it.

    But we shouldn't sample data at just any time. The base latency is
    supposed to be the latency when the disk isn't congested, because we
    use the latency threshold to schedule IOs between cgroups. If the disk
    is congested, the measured latency is higher, and using it for
    scheduling is meaningless. Hence we only do the sampling when block
    throttling is in the LOW limit, on the assumption that the disk isn't
    congested in that state. If the assumption isn't true, e.g., the low
    limit is too high, the calculated latency threshold will simply be
    higher.

    Hard disks are completely different: latency depends on spindle seek
    rather than request size. Currently this feature is SSD-only, though we
    could probably use a fixed threshold such as 4ms for hard disks.
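
    As a sketch of the calculation (latency_bucket_idx() and the *_ns
    names are illustrative, not the exact upstream code):

        /* bucket IOs by the base-2 order of their rounded-up size:
         * 4k = 8 sectors -> order 3, ..., 1M = 2048 sectors -> order 11 */
        static unsigned int latency_bucket_idx(unsigned int nr_sectors)
        {
                return min_t(unsigned int, order_base_2(nr_sectors), 11);
        }

        /* threshold = sampled base latency + configured target,
         * e.g. 80us base + 60us target = 140us for 4k requests */
        static u64 latency_threshold_ns(u64 base_latency_ns, u64 target_ns)
        {
                return base_latency_ns + target_ns;
        }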

    Signed-off-by: Shaohua Li
    Signed-off-by: Jens Axboe

  • Currently there is no way to know the request size when the request
    finishes. The next patch will need this info. We could add an extra
    field to record the size, but blk_issue_stat has enough spare space to
    record it, so this patch just overloads blk_issue_stat. With this, we
    still have 49 bits to track time, which is a very long time.
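
    An illustrative packing (the real layout also reserves a few flag
    bits, so treat the exact shifts as a sketch):

        #define BLK_STAT_TIME_BITS  49
        #define BLK_STAT_TIME_MASK  ((1ULL << BLK_STAT_TIME_BITS) - 1)

        /* 2^49 ns is roughly 6.5 days, plenty for an issue timestamp */
        static inline void blk_stat_set_issue(struct blk_issue_stat *stat,
                                              sector_t size)
        {
                stat->stat = ((u64)size << BLK_STAT_TIME_BITS) |
                             (ktime_get_ns() & BLK_STAT_TIME_MASK);
        }

        static inline sector_t blk_stat_size(const struct blk_issue_stat *stat)
        {
                return stat->stat >> BLK_STAT_TIME_BITS;
        }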

    Signed-off-by: Shaohua Li
    Signed-off-by: Jens Axboe

22 Mar, 2017

2 commits

  • Currently, statistics are gathered in ~0.13s windows, and users grab
    the statistics whenever they need them. This is not ideal for either of
    the two in-tree users:

    1. Writeback throttling wants its own dynamically sized window of
       statistics. Since the blk-stats statistics are reset after every
       window and the wbt windows don't line up with the blk-stats windows,
       wbt doesn't see every I/O.
    2. Polling currently grabs the statistics on every I/O. Again, depending
       on how the window lines up, we may miss some I/Os. It's also
       unnecessary overhead to get the statistics on every I/O; the hybrid
       polling heuristic would be just as happy with the statistics from the
       previous full window.

    This reworks the blk-stats infrastructure to be callback-based: users
    register a callback that they want called at a given time with all of
    the statistics from the window during which the callback was active.
    Users can dynamically bucketize the statistics. wbt and polling both
    currently use read vs. write, but polling can be extended to further
    subdivide based on request size.

    The callbacks are kept on an RCU list, and each callback has percpu
    stats buffers. There will only be a few users, so the overhead on the
    I/O completion side is low. The stats flushing is also simplified
    considerably: since the timer function is responsible for clearing the
    statistics, we don't have to worry about stale statistics.

    wbt is a trivial conversion. After the conversion, the windowing problem
    mentioned above is fixed.

    For polling, we register an extra callback that caches the previous
    window's statistics in the struct request_queue for the hybrid polling
    heuristic to use.

    Since we no longer have a single stats buffer for the request queue,
    this also removes the sysfs and debugfs stats entries. To replace those,
    we add a debugfs entry for the poll statistics.
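
    With that, a user conversion boils down to something like the
    following (function names per the new API; the window size win_msecs
    and the rwb context are elided/illustrative):

        /* per-request bucket selector: 0 = READ, 1 = WRITE */
        static int wb_stat_bucket(const struct request *rq)
        {
                return rq_data_dir(rq);
        }

        /* timer fn: runs once per window with that window's buckets */
        static void wb_timer_fn(struct blk_stat_callback *cb)
        {
                /* inspect cb->stat[READ] / cb->stat[WRITE], scale limits,
                 * then rearm for the next window: */
                blk_stat_activate_msecs(cb, win_msecs);
        }

        cb = blk_stat_alloc_callback(wb_timer_fn, wb_stat_bucket, 2, rwb);
        blk_stat_add_callback(q, cb);
        blk_stat_activate_msecs(cb, win_msecs);  /* open the first window */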

    Signed-off-by: Omar Sandoval
    Signed-off-by: Jens Axboe

  • The stats buckets will become generic soon, so make the existing users
    use the common READ and WRITE definitions instead of ones internal to
    blk-stat.
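
    In other words (sketch):

        /* before: buf[BLK_STAT_READ] / buf[BLK_STAT_WRITE], private enum */
        /* after: the kernel-wide constants, READ == 0 and WRITE == 1 */
        struct blk_rq_stat stat[2];
        ...
        s = &stat[rq_data_dir(rq)];  /* rq_data_dir() returns READ or WRITE */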

    Signed-off-by: Omar Sandoval
    Signed-off-by: Jens Axboe

11 Nov, 2016

1 commit

  • Add scalable completion tracking of requests. For legacy block, we
    simply track them in the request queue. For blk-mq, we track them on
    a per-sw queue basis, which we can then sum up through the hardware
    queues and finally to a per-device state.

    The stats are tracked in, roughly, 0.1s interval windows.

    Add sysfs files to display the stats.

    The feature is off by default, to avoid any extra overhead. In-kernel
    users can turn it on by setting QUEUE_FLAG_STATS in the queue flags.
    We currently don't turn it on when someone merely reads one of the
    stats files; that is something we could add as well.
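
    For example (a sketch; blk_stat_sum() here is an illustrative name
    for the combining helper):

        /* in-kernel users opt in explicitly: */
        queue_flag_set(QUEUE_FLAG_STATS, q);

        /* blk-mq: per-sw queue buckets, summed up through the hw queues */
        static void blk_mq_stat_sum(struct request_queue *q,
                                    struct blk_rq_stat *dst)
        {
                struct blk_mq_hw_ctx *hctx;
                struct blk_mq_ctx *ctx;
                int i, j;

                queue_for_each_hw_ctx(q, hctx, i)
                        hctx_for_each_ctx(hctx, ctx, j)
                                blk_stat_sum(dst, &ctx->stat[READ]);
        }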

    Signed-off-by: Jens Axboe
