04 Apr, 2014

2 commits

  • After commit 839a8e8660b6 ("writeback: replace custom worker pool
    implementation with unbound workqueue") when device is removed while we
    are writing to it we crash in bdi_writeback_workfn() ->
    set_worker_desc() because bdi->dev is NULL.

    This can happen because even though bdi_unregister() cancels all pending
    flushing work, nothing really prevents new ones from being queued from
    balance_dirty_pages() or other places.

    Fix the problem by clearing the BDI_registered bit in bdi_unregister()
    and checking it before scheduling any flushing work.
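
    The gist of the fix can be sketched as a toy userspace model. Only the
    BDI_registered name is taken from the patch; the struct and helpers
    below are simplified stand-ins, not the kernel API:

    ```c
    #include <assert.h>
    #include <stdbool.h>

    /* Simplified stand-in for struct backing_dev_info and its state bits. */
    enum { BDI_registered = 1 << 0 };

    struct bdi {
        unsigned long state;
        int queued_work;   /* counts flush work items accepted */
    };

    /* Model of queueing flush work: refuse once the bdi is unregistered. */
    static bool bdi_queue_flush_work(struct bdi *bdi)
    {
        if (!(bdi->state & BDI_registered))
            return false;          /* device going away: drop the work */
        bdi->queued_work++;
        return true;
    }

    static void bdi_unregister(struct bdi *bdi)
    {
        bdi->state &= ~BDI_registered;  /* clear before cancelling work */
    }

    int main(void)
    {
        struct bdi bdi = { .state = BDI_registered };

        assert(bdi_queue_flush_work(&bdi));   /* accepted while registered */
        bdi_unregister(&bdi);
        assert(!bdi_queue_flush_work(&bdi));  /* refused afterwards */
        assert(bdi.queued_work == 1);
        return 0;
    }
    ```

    With the check in place, work queued from balance_dirty_pages() after
    unregistration is simply dropped instead of running against a NULL
    bdi->dev.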

    Fixes: 839a8e8660b6777e7fe4e80af1a048aebe2b5977

    Reviewed-by: Tejun Heo
    Signed-off-by: Jan Kara
    Cc: Derek Basehore
    Cc: Jens Axboe
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Kara
     
  • bdi_wakeup_thread_delayed() used the mod_delayed_work() function to
    schedule work to writeback dirty inodes. The problem with this is that
    it can delay work that is scheduled for immediate execution, such as the
    work from sync_inodes_sb(). This can happen since mod_delayed_work()
    can now steal work from a workqueue. This fixes the problem by using
    queue_delayed_work() instead. This is a regression caused by commit
    839a8e8660b6 ("writeback: replace custom worker pool implementation with
    unbound workqueue").

    The reason that this causes a problem is that laptop-mode will change
    the delay, dirty_writeback_centisecs, to 60000 (10 minutes) by default.
    In the case that bdi_wakeup_thread_delayed() races with
    sync_inodes_sb(), sync will be stopped for 10 minutes and trigger a hung
    task. Even if dirty_writeback_centisecs is not long enough to cause a
    hung task, we still don't want to delay sync for that long.

    We fix the problem by using queue_delayed_work() when we want to
    schedule writeback sometime in future. This function doesn't change the
    timer if it is already armed.

    For the same reason, we also change bdi_writeback_workfn() to
    immediately queue the work again in the case that the work_list is not
    empty. The same problem can happen if the sync work is run on the
    rescue worker.
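
    The semantic difference between the two queueing calls can be modeled
    in a small userspace sketch. The struct and functions are simplified
    stand-ins: the real queue_delayed_work()/mod_delayed_work() take a
    workqueue and a delayed_work, not a bare timer:

    ```c
    #include <assert.h>
    #include <stdbool.h>

    /* Toy model of a delayed work item: just a pending flag and an expiry. */
    struct delayed_work {
        bool pending;
        long expires;   /* absolute "jiffies" when it will run */
    };

    /* queue_delayed_work(): arm the timer only if not already pending. */
    static void queue_delayed_work(struct delayed_work *dw, long now, long delay)
    {
        if (dw->pending)
            return;               /* leave an earlier deadline alone */
        dw->pending = true;
        dw->expires = now + delay;
    }

    /* mod_delayed_work(): always (re)arm, pushing out an earlier deadline. */
    static void mod_delayed_work(struct delayed_work *dw, long now, long delay)
    {
        dw->pending = true;
        dw->expires = now + delay;
    }

    int main(void)
    {
        long laptop_delay = 60000;  /* dirty_writeback_centisecs in laptop mode */

        /* sync_inodes_sb() queued work for immediate execution... */
        struct delayed_work dw = { .pending = true, .expires = 0 };

        /* ...then a racing mod_delayed_work() pushes it out 10 minutes. */
        mod_delayed_work(&dw, 0, laptop_delay);
        assert(dw.expires == 60000);        /* sync now hangs for 10 minutes */

        /* With queue_delayed_work() the immediate work is left untouched. */
        struct delayed_work dw2 = { .pending = true, .expires = 0 };
        queue_delayed_work(&dw2, 0, laptop_delay);
        assert(dw2.expires == 0);           /* sync still runs right away */
        return 0;
    }
    ```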

    [jack@suse.cz: update changelog, add comment, use bdi_wakeup_thread_delayed()]
    Signed-off-by: Derek Basehore
    Reviewed-by: Jan Kara
    Cc: Alexander Viro
    Reviewed-by: Tejun Heo
    Cc: Greg Kroah-Hartman
    Cc: "Darrick J. Wong"
    Cc: Derek Basehore
    Cc: Kees Cook
    Cc: Benson Leung
    Cc: Sonny Rao
    Cc: Luigi Semenzato
    Cc: Jens Axboe
    Cc: Dave Chinner
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Derek Basehore
     

12 Sep, 2013

1 commit


20 Aug, 2013

1 commit


17 Jul, 2013

1 commit


04 Jul, 2013

1 commit

  • Calling dev_set_name() with a single parameter causes it to be handled as a
    format string. Many callers are passing potentially dynamic string
    content, so use "%s" in those cases to avoid any potential accidents,
    including wrappers like device_create*() and bdi_register().
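
    The hazard being fixed is the classic printf format-string bug. A
    minimal userspace illustration, with snprintf standing in for the
    printf-like dev_set_name():

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    static char devname[64];

    int main(void)
    {
        /* A device name that happens to contain a conversion specifier. */
        const char *name = "sd%d";

        /* Unsafe pattern: the dynamic string is used as the format, so the
         * formatter consumes "%d" and reads a nonexistent argument.
         * (Undefined behavior in real code; deliberately not executed here.)
         *
         *   snprintf(devname, sizeof(devname), name);
         */

        /* Safe pattern, as in the patch: "%s" is the only format string. */
        snprintf(devname, sizeof(devname), "%s", name);
        assert(strcmp(devname, "sd%d") == 0);   /* name preserved literally */
        return 0;
    }
    ```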

    Signed-off-by: Kees Cook
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kees Cook
     

02 Apr, 2013

3 commits

  • There are cases where userland wants to tweak the priority and
    affinity of writeback flushers. Expose bdi_wq to userland by setting
    WQ_SYSFS. It appears under /sys/bus/workqueue/devices/writeback/ and
    allows adjusting maximum concurrency level, cpumask and nice level.

    Signed-off-by: Tejun Heo
    Cc: Jens Axboe
    Cc: Fengguang Wu
    Cc: Jeff Moyer
    Cc: Kay Sievers
    Cc: Greg Kroah-Hartman

    Tejun Heo
     
  • Writeback implements its own worker pool - each bdi can be associated
    with a worker thread which is created and destroyed dynamically. The
    worker thread for the default bdi is always present and serves as the
    "forker" thread which forks off worker threads for other bdis.

    There's no reason for writeback to implement its own worker pool when
    using an unbound workqueue instead is much simpler and more efficient.
    This patch replaces the custom worker pool implementation in writeback
    with an unbound workqueue.

    The conversion isn't too complicated, but the following points are worth
    mentioning.

    * bdi_writeback->last_active, task and wakeup_timer are removed.
      delayed_work ->dwork is added instead. Explicit timer handling is
      no longer necessary. Everything works by either queueing / modding
      / flushing / canceling the delayed_work item.

    * bdi_writeback_thread() becomes bdi_writeback_workfn() which runs off
      bdi_writeback->dwork. On each execution, it processes
      bdi->work_list and reschedules itself if there are more things to
      do.

      The function also handles the low-mem condition, which used to be
      handled by the forker thread. If the function is running off a
      rescuer thread, it only writes out a limited number of pages so that
      the rescuer can serve other bdis too. This preserves the flusher
      creation failure behavior of the forker thread.

    * INIT_LIST_HEAD(&bdi->bdi_list) is used to tell
      bdi_writeback_workfn() about on-going bdi unregistration so that it
      always drains work_list even if it's running off the rescuer. Note
      that the original code was broken in this regard. Under memory
      pressure, a bdi could finish unregistration with non-empty
      work_list.

    * The default bdi is no longer special. It now is treated the same as
      any other bdi and bdi_cap_flush_forker() is removed.

    * BDI_pending is no longer used. Removed.

    * Some tracepoints become non-applicable. The following TPs are
      removed - writeback_nothread, writeback_wake_thread,
      writeback_wake_forker_thread, writeback_thread_start,
      writeback_thread_stop.

    Everything, including devices coming and going away and rescuer
    operation under simulated memory pressure, seems to work fine in my
    test setup.
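
    The reschedule-if-more-work behavior of bdi_writeback_workfn() can be
    sketched with a toy model: counters stand in for the real work_list and
    delayed_work, and the batch limit models the rescuer case.

    ```c
    #include <assert.h>

    /* Toy model of bdi_writeback_workfn(): drain the work list, and if
     * items remain (e.g. more were queued meanwhile, or the rescuer's
     * batch limit was hit), reschedule immediately.
     */
    struct bdi {
        int work_list;      /* number of pending work items */
        int reschedules;    /* times the workfn re-queued itself */
    };

    static void writeback_workfn(struct bdi *bdi, int batch)
    {
        int done = 0;
        while (bdi->work_list > 0 && done < batch) {  /* rescuer: limited batch */
            bdi->work_list--;
            done++;
        }
        if (bdi->work_list > 0)
            bdi->reschedules++;   /* queue ->dwork again with zero delay */
    }

    int main(void)
    {
        struct bdi bdi = { .work_list = 5 };
        writeback_workfn(&bdi, 3);    /* rescuer writes a limited batch */
        assert(bdi.work_list == 2 && bdi.reschedules == 1);
        writeback_workfn(&bdi, 3);    /* next run drains the rest */
        assert(bdi.work_list == 0 && bdi.reschedules == 1);
        return 0;
    }
    ```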

    Signed-off-by: Tejun Heo
    Reviewed-by: Jan Kara
    Cc: Jens Axboe
    Cc: Fengguang Wu
    Cc: Jeff Moyer

    Tejun Heo
     
  • There's no user left. Remove it.

    Signed-off-by: Tejun Heo
    Reviewed-by: Jan Kara
    Cc: Jens Axboe
    Cc: Fengguang Wu

    Tejun Heo
     

22 Feb, 2013

1 commit

  • This patchset ("stable page writes, part 2") makes some key
    modifications to the original 'stable page writes' patchset. First, it
    provides creators (devices and filesystems) of a backing_dev_info a flag
    that declares whether or not it is necessary to ensure that page
    contents cannot change during writeout. It is no longer assumed that
    this is true of all devices (which was never true anyway). Second, the
    flag is used to relax the wait_on_page_writeback calls so that waiting
    only occurs if the device needs it. Third, it fixes up the remaining
    disk-backed filesystems to use this improved conditional-wait logic to
    provide stable page writes on those filesystems.

    It is hoped that (for people not using checksumming devices, anyway)
    this patchset will win back the performance lost to unnecessary waiting
    since the original stable page write patchset went into 3.0. Sorry about not
    fixing it sooner.

    Complaints were registered by several people about the long write
    latencies introduced by the original stable page write patchset.
    Generally speaking, the kernel ought to allocate as little extra memory
    as possible to facilitate writeout, but for people who simply cannot
    wait, a second page stability strategy is (re)introduced: snapshotting
    page contents. The waiting behavior is still the default strategy; to
    enable page snapshotting, a superblock flag (MS_SNAP_STABLE) must be
    set. This flag is used to bandaid^Henable stable page writeback on
    ext3[1], and is not used anywhere else.

    Given that there are already a few storage devices and network FSes that
    have rolled their own page stability wait/page snapshot code, it would
    be nice to move towards consolidating all of these. It seems possible
    that iscsi and raid5 may wish to use the new stable page write support
    to enable zero-copy writeout.

    Thank you to Jan Kara for helping fix a couple more filesystems.

    Per Andrew Morton's request, here are the results of using dbench to
    measure latencies on ext2:

    3.8.0-rc3:
    Operation Count AvgLat MaxLat
    ----------------------------------------
    WriteX 109347 0.028 59.817
    ReadX 347180 0.004 3.391
    Flush 15514 29.828 287.283

    Throughput 57.429 MB/sec 4 clients 4 procs max_latency=287.290 ms

    3.8.0-rc3 + patches:
    WriteX 105556 0.029 4.273
    ReadX 335004 0.005 4.112
    Flush 14982 30.540 298.634

    Throughput 55.4496 MB/sec 4 clients 4 procs max_latency=298.650 ms

    As you can see, for ext2 the maximum write latency decreases from ~60ms
    on a laptop hard disk to ~4ms. I'm not sure why the flush latencies
    increase, though I suspect that being able to dirty pages faster gives
    the flusher more work to do.

    On ext4, the average write latency decreases as well as all the maximum
    latencies:

    3.8.0-rc3:
    WriteX 85624 0.152 33.078
    ReadX 272090 0.010 61.210
    Flush 12129 36.219 168.260

    Throughput 44.8618 MB/sec 4 clients 4 procs max_latency=168.276 ms

    3.8.0-rc3 + patches:
    WriteX 86082 0.141 30.928
    ReadX 273358 0.010 36.124
    Flush 12214 34.800 165.689

    Throughput 44.9941 MB/sec 4 clients 4 procs max_latency=165.722 ms

    XFS seems to exhibit similar latency improvements as ext2:

    3.8.0-rc3:
    WriteX 125739 0.028 104.343
    ReadX 399070 0.005 4.115
    Flush 17851 25.004 131.390

    Throughput 66.0024 MB/sec 4 clients 4 procs max_latency=131.406 ms

    3.8.0-rc3 + patches:
    WriteX 123529 0.028 6.299
    ReadX 392434 0.005 4.287
    Flush 17549 25.120 188.687

    Throughput 64.9113 MB/sec 4 clients 4 procs max_latency=188.704 ms

    ...and btrfs, just to round things out, also shows some latency
    decreases:

    3.8.0-rc3:
    WriteX 67122 0.083 82.355
    ReadX 212719 0.005 2.828
    Flush 9547 47.561 147.418

    Throughput 35.3391 MB/sec 4 clients 4 procs max_latency=147.433 ms

    3.8.0-rc3 + patches:
    WriteX 64898 0.101 71.631
    ReadX 206673 0.005 7.123
    Flush 9190 47.963 219.034

    Throughput 34.0795 MB/sec 4 clients 4 procs max_latency=219.044 ms

    Before this patchset, all filesystems would block, regardless of whether
    or not it was necessary. ext3 would wait, but still generate occasional
    checksum errors. The network filesystems were left to do their own
    thing, so they'd wait too.

    After this patchset, all the disk filesystems except ext3 and btrfs will
    wait only if the hardware requires it. ext3 (if necessary) snapshots
    pages instead of blocking, and btrfs provides its own bdi so the mm will
    never wait. Network filesystems haven't been touched, so either they
    provide their own wait code, or they don't block at all. The blocking
    behavior is back to what it was before 3.0 if you don't have a disk
    requiring stable page writes.

    This patchset has been tested on 3.8.0-rc3 on x64 with ext3, ext4, and
    xfs. I've spot-checked 3.8.0-rc4 and seem to be getting the same
    results as -rc3.

    [1] The alternative fixes to ext3 include fixing the locking order and
    page bit handling like we did for ext4 (but then why not just use
    ext4?), or setting PG_writeback so early that ext3 becomes extremely
    slow. I tried that, but the number of write()s I could initiate dropped
    by nearly an order of magnitude. That was a bit much even for the
    author of the stable page series! :)

    This patch:

    Creates a per-backing-device flag that tracks whether or not pages must
    be held immutable during writeout. Eventually it will be used to waive
    wait_for_page_writeback() if nothing requires stable pages.
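
    A minimal model of the flag's intended use. BDI_CAP_STABLE_WRITES is
    the capability introduced by this series; the struct and helper below
    are simplified stand-ins for the kernel's:

    ```c
    #include <assert.h>
    #include <stdbool.h>

    /* Simplified model: a per-backing-device capability flag that decides
     * whether writers must wait for writeback to finish before redirtying.
     */
    enum { BDI_CAP_STABLE_WRITES = 1 << 0 };

    struct bdi { unsigned caps; };

    static bool must_wait_on_writeback(const struct bdi *bdi)
    {
        return bdi->caps & BDI_CAP_STABLE_WRITES;
    }

    int main(void)
    {
        struct bdi checksumming_disk = { .caps = BDI_CAP_STABLE_WRITES };
        struct bdi plain_disk = { .caps = 0 };

        assert(must_wait_on_writeback(&checksumming_disk)); /* e.g. DIF/DIX */
        assert(!must_wait_on_writeback(&plain_disk));       /* no wait needed */
        return 0;
    }
    ```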

    Signed-off-by: Darrick J. Wong
    Reviewed-by: Jan Kara
    Cc: Adrian Hunter
    Cc: Andy Lutomirski
    Cc: Artem Bityutskiy
    Cc: Joel Becker
    Cc: Mark Fasheh
    Cc: Steven Whitehouse
    Cc: Jens Axboe
    Cc: Eric Van Hensbergen
    Cc: Ron Minnich
    Cc: Latchesar Ionkov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Darrick J. Wong
     

18 Dec, 2012

1 commit


06 Dec, 2012

1 commit

  • In realtime environments, it may be desirable to keep the per-bdi
    flusher threads from running on certain cpus. This patch adds a
    cpu_list file to /sys/class/bdi/* to enable this. The default is to tie
    the flusher threads to the same numa node as the backing device (though
    I could be convinced to make it a mask of all cpus to avoid a change in
    behaviour).

    Thanks to Jeremy Eder for the original idea.

    Signed-off-by: Jeff Moyer
    Signed-off-by: Jens Axboe

    Jeff Moyer
     

25 Aug, 2012

1 commit


04 Aug, 2012

1 commit

  • Finally we can kill the 'sync_supers' kernel thread along with the
    '->write_super()' superblock operation because all the users are gone.
    Now every file-system is supposed to self-manage its own superblock and
    its dirty state.

    The nice thing about killing this thread is that it improves power management.
    Indeed, 'sync_supers' is a source of monotonic system wake-ups - it woke up
    every 5 seconds no matter what - even if there were no dirty superblocks and
    even if there were no file-systems using this service (e.g., btrfs and
    journalled ext4 do not need it). So it was wasting power most of the time. And
    because the thread was in the core of the kernel, all systems had to have it.
    So I am quite happy to make it go away.

    Interestingly, this thread is a left-over from the pdflush kernel thread which
    was a self-forking kernel thread responsible for all the write-back in old
    Linux kernels. It was turned into per-block device BDI threads, and
    'sync_supers' was a left-over. Thus, R.I.P., pdflush as well.

    Signed-off-by: Artem Bityutskiy
    Signed-off-by: Al Viro

    Artem Bityutskiy
     

01 Aug, 2012

1 commit

  • Since per-BDI flusher threads were introduced in 2.6, the pdflush
    mechanism is not used any more. But the old interface exported through
    /proc/sys/vm/nr_pdflush_threads still exists and is obviously useless.

    For backward compatibility, print a warning and return 2 to notify
    users that the interface has been removed.

    Signed-off-by: Wanpeng Li
    Cc: Wu Fengguang
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wanpeng Li
     

09 Jun, 2012

1 commit


01 Feb, 2012

1 commit

  • While 7a401a972df8e18 ("backing-dev: ensure wakeup_timer is deleted")
    addressed the problem of the bdi being freed with a queued wakeup
    timer, there are other races that could happen if the wakeup timer
    expires after/during bdi_unregister(), before bdi_destroy() is called.

    wakeup_timer_fn() could attempt to wake up a task which has already
    been freed, or could access a NULL bdi->dev via the wake_forker_thread
    tracepoint.

    Cc:
    Cc: Jens Axboe
    Reported-by: Chanho Min
    Reviewed-by: Namjae Jeon
    Signed-off-by: Rabin Vincent
    Signed-off-by: Wu Fengguang

    Rabin Vincent
     

22 Nov, 2011

1 commit

  • Writeback and thinkpad_acpi have been using thaw_process() to prevent
    deadlock between the freezer and kthread_stop(); unfortunately, this
    is inherently racy - nothing prevents freezing from happening between
    thaw_process() and kthread_stop().

    This patch implements kthread_freezable_should_stop(), which enters the
    refrigerator if necessary but is guaranteed to return if
    kthread_stop() is invoked. Both thaw_process() users are converted to
    use the new function.

    Note that this deadlock condition exists for many of freezable
    kthreads. They need to be converted to use the new should_stop or
    freezable workqueue.

    Tested with synthetic test case.

    Signed-off-by: Tejun Heo
    Acked-by: Henrique de Moraes Holschuh
    Cc: Jens Axboe
    Cc: Oleg Nesterov

    Tejun Heo
     

11 Nov, 2011

1 commit

  • bdi_prune_sb() in bdi_unregister() attempts to remove the bdi links
    from all super_blocks and then del_timer_sync() the writeback timer.

    However, this can race with __mark_inode_dirty(), leading to
    bdi_wakeup_thread_delayed() rearming the writeback timer on the bdi
    we're unregistering, after we've called del_timer_sync().

    This can end up with the bdi being freed with an active timer inside it,
    as in the case of the following dump after the removal of an SD card.

    Fix this by redoing the del_timer_sync() in bdi_destroy().

    ------------[ cut here ]------------
    WARNING: at /home/rabin/kernel/arm/lib/debugobjects.c:262 debug_print_object+0x9c/0xc8()
    ODEBUG: free active (active state 0) object type: timer_list hint: wakeup_timer_fn+0x0/0x180
    Modules linked in:
    Backtrace:
    [] (dump_backtrace+0x0/0x110) from [] (dump_stack+0x18/0x1c)
    r6:c02bc638 r5:00000106 r4:c79f5d18 r3:00000000
    [] (dump_stack+0x0/0x1c) from [] (warn_slowpath_common+0x54/0x6c)
    [] (warn_slowpath_common+0x0/0x6c) from [] (warn_slowpath_fmt+0x38/0x40)
    r8:20000013 r7:c780c6f0 r6:c031613c r5:c780c6f0 r4:c02b1b29
    r3:00000009
    [] (warn_slowpath_fmt+0x0/0x40) from [] (debug_print_object+0x9c/0xc8)
    r3:c02b1b29 r2:c02bc662
    [] (debug_print_object+0x0/0xc8) from [] (debug_check_no_obj_freed+0xac/0x1dc)
    r6:c7964000 r5:00000001 r4:c7964000
    [] (debug_check_no_obj_freed+0x0/0x1dc) from [] (kmem_cache_free+0x88/0x1f8)
    [] (kmem_cache_free+0x0/0x1f8) from [] (blk_release_queue+0x70/0x78)
    [] (blk_release_queue+0x0/0x78) from [] (kobject_release+0x70/0x84)
    r5:c79641f0 r4:c796420c
    [] (kobject_release+0x0/0x84) from [] (kref_put+0x68/0x80)
    r7:00000083 r6:c74083d0 r5:c015289c r4:c796420c
    [] (kref_put+0x0/0x80) from [] (kobject_put+0x48/0x5c)
    r5:c79643b4 r4:c79641f0
    [] (kobject_put+0x0/0x5c) from [] (blk_cleanup_queue+0x68/0x74)
    r4:c7964000
    [] (blk_cleanup_queue+0x0/0x74) from [] (mmc_blk_put+0x78/0xe8)
    r5:00000000 r4:c794c400
    [] (mmc_blk_put+0x0/0xe8) from [] (mmc_blk_release+0x24/0x38)
    r5:c794c400 r4:c0322824
    [] (mmc_blk_release+0x0/0x38) from [] (__blkdev_put+0xe8/0x170)
    r5:c78d5e00 r4:c74083c0
    [] (__blkdev_put+0x0/0x170) from [] (blkdev_put+0x11c/0x12c)
    r8:c79f5f70 r7:00000001 r6:c74083d0 r5:00000083 r4:c74083c0
    r3:00000000
    [] (blkdev_put+0x0/0x12c) from [] (kill_block_super+0x60/0x6c)
    r7:c7942300 r6:c79f4000 r5:00000083 r4:c74083c0
    [] (kill_block_super+0x0/0x6c) from [] (deactivate_locked_super+0x44/0x70)
    r6:c79f4000 r5:c031af64 r4:c794dc00 r3:c00b06c4
    [] (deactivate_locked_super+0x0/0x70) from [] (deactivate_super+0x6c/0x70)
    r5:c794dc00 r4:c794dc00
    [] (deactivate_super+0x0/0x70) from [] (mntput_no_expire+0x188/0x194)
    r5:c794dc00 r4:c7942300
    [] (mntput_no_expire+0x0/0x194) from [] (sys_umount+0x2e4/0x310)
    r6:c7942300 r5:00000000 r4:00000000 r3:00000000
    [] (sys_umount+0x0/0x310) from [] (ret_fast_syscall+0x0/0x30)
    ---[ end trace e5c83c92ada51c76 ]---

    Cc: stable@kernel.org
    Signed-off-by: Rabin Vincent
    Signed-off-by: Linus Walleij
    Signed-off-by: Jens Axboe

    Rabin Vincent
     

07 Nov, 2011

1 commit

  • * 'writeback-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/wfg/linux:
    writeback: Add a 'reason' to wb_writeback_work
    writeback: send work item to queue_io, move_expired_inodes
    writeback: trace event balance_dirty_pages
    writeback: trace event bdi_dirty_ratelimit
    writeback: fix ppc compile warnings on do_div(long long, unsigned long)
    writeback: per-bdi background threshold
    writeback: dirty position control - bdi reserve area
    writeback: control dirty pause time
    writeback: limit max dirty pause time
    writeback: IO-less balance_dirty_pages()
    writeback: per task dirty rate limit
    writeback: stabilize bdi->dirty_ratelimit
    writeback: dirty rate control
    writeback: add bg_threshold parameter to __bdi_update_bandwidth()
    writeback: dirty position control
    writeback: account per-bdi accumulated dirtied pages

    Linus Torvalds
     

01 Nov, 2011

1 commit


31 Oct, 2011

1 commit

  • This creates a new 'reason' field in a wb_writeback_work
    structure, which unambiguously identifies who initiates
    writeback activity. A 'wb_reason' enumeration has been
    added to writeback.h, to enumerate the possible reasons.

    The 'writeback_work_class' tracepoint event class and the
    'writeback_queue_io' tracepoint are updated to include the
    symbolic 'reason' in all trace events.

    And the 'writeback_inodes_sbXXX' family of routines has had
    a wb_reason parameter added to them, so callers can specify
    why writeback is being started.
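
    A cut-down sketch of the reason plumbing. The enum values shown are a
    subset of the real wb_reason enumeration in writeback.h, and the name
    table mirrors how a trace event prints the symbolic reason:

    ```c
    #include <assert.h>
    #include <string.h>

    /* Subset of the wb_reason enumeration; the real one has more values. */
    enum wb_reason {
        WB_REASON_BACKGROUND,
        WB_REASON_SYNC,
        WB_REASON_PERIODIC,
        WB_REASON_MAX,
    };

    /* Name table, as a trace event would use for the symbolic reason. */
    static const char *wb_reason_name[] = {
        [WB_REASON_BACKGROUND] = "background",
        [WB_REASON_SYNC]       = "sync",
        [WB_REASON_PERIODIC]   = "periodic",
    };

    int main(void)
    {
        enum wb_reason r = WB_REASON_SYNC;   /* carried in wb_writeback_work */
        assert(strcmp(wb_reason_name[r], "sync") == 0);
        return 0;
    }
    ```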

    Acked-by: Jan Kara
    Signed-off-by: Curt Wohlgemuth
    Signed-off-by: Wu Fengguang

    Curt Wohlgemuth
     

03 Oct, 2011

3 commits

  • There are some imperfections in balanced_dirty_ratelimit.

    1) large fluctuations

    The dirty_rate used for computing balanced_dirty_ratelimit is merely
    averaged over the past 200ms (very short compared to the 3s estimation
    period for write_bw), which makes for a rather dispersed distribution of
    balanced_dirty_ratelimit.

    It's pretty hard to average out the singular points by increasing the
    estimation period. Considering that the averaging technique will
    introduce very undesirable time lags, I give it up totally. (btw, the 3s
    write_bw averaging time lag is much more acceptable because its impact
    is one-way and therefore won't lead to oscillations.)

    The more practical way is filtering -- most singular
    balanced_dirty_ratelimit points can be filtered out by remembering some
    prev_balanced_rate and prev_prev_balanced_rate. However the more
    reliable way is to guard balanced_dirty_ratelimit with task_ratelimit.

    2) due to truncates and fs redirties, the (write_bw <=> dirty_rate)
    match could become unbalanced, which may lead to large systematic
    errors in balanced_dirty_ratelimit. The truncates, due to their possibly
    bumpy nature, can hardly be compensated smoothly. So let's face it. When
    some over-estimated balanced_dirty_ratelimit brings dirty_ratelimit
    high, dirty pages will go higher than the setpoint. task_ratelimit will
    in turn become lower than dirty_ratelimit. So if we consider both
    balanced_dirty_ratelimit and task_ratelimit and update dirty_ratelimit
    only when they are on the same side of dirty_ratelimit, the systematic
    errors in balanced_dirty_ratelimit won't be able to bring
    dirty_ratelimit far away.

    The balanced_dirty_ratelimit estimation may also be inaccurate near
    @limit or @freerun; however, this is less of an issue.

    3) since we ultimately want to

    - keep the fluctuations of task ratelimit as small as possible
    - keep the dirty pages around the setpoint as long time as possible

    the update policy used for (2) also serves the above goals nicely:
    if for some reason the dirty pages are high (task_ratelimit < dirty_ratelimit),
    and dirty_ratelimit is low (dirty_ratelimit < balanced_dirty_ratelimit),
    there is no point in bringing up dirty_ratelimit in a hurry only to hurt
    both the above two goals.

    So, we make use of task_ratelimit to limit the update of dirty_ratelimit
    in two ways:

    1) avoid changing the dirty rate when it's against the position control
    target (the adjusted rate would slow down the progress of dirty pages
    going back to the setpoint).

    2) limit the step size. task_ratelimit is changing values step by step,
    leaving a consistent trace compared to the randomly jumping
    balanced_dirty_ratelimit. task_ratelimit also has the nice smaller
    errors in stable state and typically larger errors when there are big
    errors in rate. So it's a pretty good limiting factor for the step
    size of dirty_ratelimit.

    Note that bdi->dirty_ratelimit is always tracking balanced_dirty_ratelimit.
    task_ratelimit is merely used as a limiting factor.
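
    The guarded update policy can be sketched as follows. This is a
    simplified stand-in for the real bdi_update_dirty_ratelimit() step
    logic, keeping only the same-side rule described above:

    ```c
    #include <assert.h>

    static long lmin(long a, long b) { return a < b ? a : b; }
    static long lmax(long a, long b) { return a > b ? a : b; }

    /* Move dirty_ratelimit toward balanced_dirty_ratelimit only when
     * task_ratelimit agrees on the direction, and step no further than
     * task_ratelimit allows.
     */
    static long update_dirty_ratelimit(long dirty_ratelimit,
                                       long balanced, long task)
    {
        if (task < dirty_ratelimit && balanced < dirty_ratelimit)
            return lmax(task, balanced);   /* both say "too high": step down */
        if (task > dirty_ratelimit && balanced > dirty_ratelimit)
            return lmin(task, balanced);   /* both say "too low": step up */
        return dirty_ratelimit;            /* disagreement: hold steady */
    }

    int main(void)
    {
        /* Dirty pages high (task < current) but balanced wants higher:
         * conflicting signals, so don't chase balanced upward. */
        assert(update_dirty_ratelimit(100, 150, 80) == 100);

        /* Both agree the limit is too high: come down. */
        assert(update_dirty_ratelimit(100, 60, 90) == 90);

        /* Both agree it is too low: go up, but not past task_ratelimit. */
        assert(update_dirty_ratelimit(100, 180, 120) == 120);
        return 0;
    }
    ```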

    Signed-off-by: Wu Fengguang

    Wu Fengguang
     
  • It's all about bdi->dirty_ratelimit, which aims to be (write_bw / N)
    when there are N dd tasks.

    On write() syscall, use bdi->dirty_ratelimit
    ============================================

    balance_dirty_pages(pages_dirtied)
    {
        task_ratelimit = bdi->dirty_ratelimit * bdi_position_ratio();
        pause = pages_dirtied / task_ratelimit;
        sleep(pause);
    }

    On every 200ms, update bdi->dirty_ratelimit
    ===========================================

    bdi_update_dirty_ratelimit()
    {
        task_ratelimit = bdi->dirty_ratelimit * bdi_position_ratio();
        balanced_dirty_ratelimit = task_ratelimit * write_bw / dirty_rate;
        bdi->dirty_ratelimit = balanced_dirty_ratelimit;
    }

    Estimation of balanced bdi->dirty_ratelimit
    ===========================================

    balanced task_ratelimit
    -----------------------

    balance_dirty_pages() needs to throttle tasks dirtying pages such that
    the total amount of dirty pages stays below the specified dirty limit in
    order to avoid memory deadlocks. Furthermore we desire fairness in that
    tasks get throttled proportionally to the amount of pages they dirty.

    IOW we want to throttle tasks such that we match the dirty rate to the
    writeout bandwidth; this yields a stable amount of dirty pages:

    dirty_rate == write_bw (1)

    The fairness requirement gives us:

    task_ratelimit = balanced_dirty_ratelimit
                   == write_bw / N                                     (2)

    where N is the number of dd tasks. We don't know N beforehand, but
    still can estimate balanced_dirty_ratelimit within 200ms.

    Start by throttling each dd task at rate

    task_ratelimit = task_ratelimit_0                                  (3)
                     (any non-zero initial value is OK)

    After 200ms, we measured

    dirty_rate = # of pages dirtied by all dd's / 200ms
    write_bw = # of pages written to the disk / 200ms

    For the aggressive dd dirtiers, the equality holds

    dirty_rate == N * task_rate
               == N * task_ratelimit_0                                 (4)
    Or
    task_ratelimit_0 == dirty_rate / N                                 (5)

    Now we conclude that the balanced task ratelimit can be estimated by

                                                   write_bw
    balanced_dirty_ratelimit = task_ratelimit_0 * ----------           (6)
                                                   dirty_rate

    Because with (4) and (5) we can get the desired equality (1):

                                                    write_bw
    balanced_dirty_ratelimit == (dirty_rate / N) * ----------
                                                    dirty_rate
                             == write_bw / N

    Then using the balanced task ratelimit we can compute task pause times like:

    task_pause = task->nr_dirtied / task_ratelimit
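
    A quick numeric check of estimate (6), using made-up numbers:

    ```c
    #include <assert.h>

    /* With N dd tasks all throttled at task_ratelimit_0, the measured
     * dirty_rate is N * task_ratelimit_0 (equation 4), so estimate (6)
     * collapses to write_bw / N regardless of the initial guess.
     */
    int main(void)
    {
        long N = 4;                    /* number of dd tasks (unknown to us) */
        long write_bw = 100000;        /* pages / 200ms written to disk */
        long task_ratelimit_0 = 7000;  /* arbitrary non-zero initial value */

        long dirty_rate = N * task_ratelimit_0;                   /* (4) */
        long balanced = task_ratelimit_0 * write_bw / dirty_rate; /* (6) */

        assert(balanced == write_bw / N);   /* independent of the guess */
        return 0;
    }
    ```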

    task_ratelimit with position control
    ------------------------------------

    However, while the above gives us means of matching the dirty rate to
    the writeout bandwidth, it at best provides us with a stable dirty page
    count (assuming a static system). In order to control the dirty page
    count such that it is high enough to provide performance, but does not
    exceed the specified limit we need another control.

    The dirty position control works by extending (2) to

    task_ratelimit = balanced_dirty_ratelimit * pos_ratio (7)

    where pos_ratio is a negative feedback function that is subject to

    1) f(setpoint) = 1.0
    2) df/dx < 0

    That is, if the dirty pages are ABOVE the setpoint, we throttle each
    task a bit more HEAVILY than balanced_dirty_ratelimit, so that the dirty
    pages are created more slowly than they are cleaned, and thus DROP back
    to the setpoint (and the reverse).

    Based on (7) and the assumption that both dirty_ratelimit and pos_ratio
    remain CONSTANT for the past 200ms, we get

    task_ratelimit_0 = balanced_dirty_ratelimit * pos_ratio (8)

    Putting (8) into (6), we get the formula used in
    bdi_update_dirty_ratelimit():

                                             write_bw
    balanced_dirty_ratelimit *= pos_ratio * ----------                 (9)
                                             dirty_rate

    Signed-off-by: Wu Fengguang

    Wu Fengguang
     
  • Introduce the BDI_DIRTIED counter. It will be used for estimating the
    bdi's dirty bandwidth.

    CC: Jan Kara
    CC: Michael Rubin
    CC: Peter Zijlstra
    Signed-off-by: Wu Fengguang

    Wu Fengguang
     

03 Sep, 2011

2 commits


27 Jul, 2011

1 commit

  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/wfg/writeback: (27 commits)
    mm: properly reflect task dirty limits in dirty_exceeded logic
    writeback: don't busy retry writeback on new/freeing inodes
    writeback: scale IO chunk size up to half device bandwidth
    writeback: trace global_dirty_state
    writeback: introduce max-pause and pass-good dirty limits
    writeback: introduce smoothed global dirty limit
    writeback: consolidate variable names in balance_dirty_pages()
    writeback: show bdi write bandwidth in debugfs
    writeback: bdi write bandwidth estimation
    writeback: account per-bdi accumulated written pages
    writeback: make writeback_control.nr_to_write straight
    writeback: skip tmpfs early in balance_dirty_pages_ratelimited_nr()
    writeback: trace event writeback_queue_io
    writeback: trace event writeback_single_inode
    writeback: remove .nonblocking and .encountered_congestion
    writeback: remove writeback_control.more_io
    writeback: skip balance_dirty_pages() for in-memory fs
    writeback: add bdi_dirty_limit() kernel-doc
    writeback: avoid extra sync work at enqueue time
    writeback: elevate queue_io() into wb_writeback()
    ...

    Fix up trivial conflicts in fs/fs-writeback.c and mm/filemap.c

    Linus Torvalds
     

26 Jul, 2011

2 commits

  • * Merge akpm patch series: (122 commits)
    drivers/connector/cn_proc.c: remove unused local
    Documentation/SubmitChecklist: add RCU debug config options
    reiserfs: use hweight_long()
    reiserfs: use proper little-endian bitops
    pnpacpi: register disabled resources
    drivers/rtc/rtc-tegra.c: properly initialize spinlock
    drivers/rtc/rtc-twl.c: check return value of twl_rtc_write_u8() in twl_rtc_set_time()
    drivers/rtc: add support for Qualcomm PMIC8xxx RTC
    drivers/rtc/rtc-s3c.c: support clock gating
    drivers/rtc/rtc-mpc5121.c: add support for RTC on MPC5200
    init: skip calibration delay if previously done
    misc/eeprom: add eeprom access driver for digsy_mtc board
    misc/eeprom: add driver for microwire 93xx46 EEPROMs
    checkpatch.pl: update $logFunctions
    checkpatch: make utf-8 test --strict
    checkpatch.pl: add ability to ignore various messages
    checkpatch: add a "prefer __aligned" check
    checkpatch: validate signature styles and To: and Cc: lines
    checkpatch: add __rcu as a sparse modifier
    checkpatch: suggest using min_t or max_t
    ...

    Did this as a merge because of (trivial) conflicts in
    - Documentation/feature-removal-schedule.txt
    - arch/xtensa/include/asm/uaccess.h
    that were just easier to fix up in the merge than in the patch series.

    Linus Torvalds
     
  • Vito said:

    : The system has many usb disks coming and going day to day, with their
    : respective bdi's having min_ratio set to 1 when inserted. It works for
    : some time until eventually min_ratio can no longer be set, even when the
    : active set of bdi's seen in /sys/class/bdi/*/min_ratio doesn't add up to
    : anywhere near 100.
    :
    : This then leads to an unrelated starvation problem caused by write-heavy
    : fuse mounts being used atop the usb disks, a problem the min_ratio setting
    : at the underlying devices' bdi effectively prevents.

    Fix this leakage by resetting the bdi min_ratio when unregistering the
    BDI.
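
    The leak and the fix can be modeled in a few lines. These are
    simplified stand-ins: in the kernel the sum is a global protected by a
    lock, and the setter fails with -EINVAL when the 100% budget would be
    exceeded.

    ```c
    #include <assert.h>

    /* Model of the leak: the global min_ratio budget must be given back on
     * unregister, or hot-plugged devices eventually exhaust the 100% total.
     */
    static int min_ratio_sum;   /* sum over all registered bdis */

    struct bdi { int min_ratio; };

    static int bdi_set_min_ratio(struct bdi *bdi, int ratio)
    {
        int delta = ratio - bdi->min_ratio;
        if (min_ratio_sum + delta > 100)
            return -1;                  /* -EINVAL in the kernel */
        min_ratio_sum += delta;
        bdi->min_ratio = ratio;
        return 0;
    }

    static void bdi_unregister(struct bdi *bdi)
    {
        bdi_set_min_ratio(bdi, 0);      /* the fix: return the ratio */
    }

    int main(void)
    {
        /* 200 USB disks come and go, each claiming min_ratio = 1. */
        for (int i = 0; i < 200; i++) {
            struct bdi bdi = { 0 };
            assert(bdi_set_min_ratio(&bdi, 1) == 0);  /* never fails now */
            bdi_unregister(&bdi);
        }
        assert(min_ratio_sum == 0);
        return 0;
    }
    ```

    Without the bdi_unregister() reset, the loop above would start failing
    after the 100th disk even though few bdis remain registered.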

    Signed-off-by: Peter Zijlstra
    Reported-by: Vito Caputo
    Cc: Wu Fengguang
    Cc: Miklos Szeredi
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
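    The budget arithmetic behind the leak can be modeled in a few lines of
    user-space C. The function names mirror the kernel's, but the bodies are
    a simplified sketch of the bookkeeping, not the actual implementation:

    ```c
    #include <assert.h>

    /* Simplified model: the sum of all bdi min_ratios may not exceed 100.
     * Forgetting to return a bdi's share on unregister leaks it until the
     * global budget is exhausted. */

    static unsigned int min_ratio_sum;     /* like the global bdi_min_ratio */

    struct bdi { unsigned int min_ratio; };

    /* Returns 0 on success, -1 if the global 100% budget would overflow. */
    int bdi_set_min_ratio(struct bdi *bdi, unsigned int ratio)
    {
        if (min_ratio_sum - bdi->min_ratio + ratio > 100)
            return -1;
        min_ratio_sum = min_ratio_sum - bdi->min_ratio + ratio;
        bdi->min_ratio = ratio;
        return 0;
    }

    /* The fix: give the bdi's share back when it is unregistered. */
    void bdi_unregister(struct bdi *bdi)
    {
        bdi_set_min_ratio(bdi, 0);
    }
    ```

    Without the reset in bdi_unregister(), each hot-unplugged disk leaves its
    percentage counted against the budget forever, which is exactly the
    "min_ratio can no longer be set" symptom above.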
     

24 Jul, 2011

1 commit

  • backing-dev: use synchronize_rcu_expedited instead of synchronize_rcu

    synchronize_rcu() sleeps for several timer ticks; synchronize_rcu_expedited()
    is much faster.

    With 100Hz timer frequency, when we remove 10000 block devices with
    "dmsetup remove_all" command, it takes 27 minutes. With this patch,
    removing 10000 block devices takes only 15 seconds.

    Signed-off-by: Mikulas Patocka
    Signed-off-by: Jens Axboe

    Mikulas Patocka
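    Dividing out the commit's own figures, that is about 162ms per device
    before the patch and 1.5ms after. A trivial helper (illustrative
    arithmetic only, not kernel code) makes the comparison explicit:

    ```c
    #include <assert.h>

    /* Per-device removal cost implied by the measurements above:
     * 10000 block devices in 27 minutes vs. 15 seconds. */
    double per_device_ms(double total_seconds, int devices)
    {
        return total_seconds * 1000.0 / devices;
    }
    ```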
     

10 Jul, 2011

4 commits

  • Add a "BdiWriteBandwidth" entry and indent others in /debug/bdi/*/stats.

    Also increase the numeric field width to 10, to keep the possibly huge
    BdiWritten number aligned at least for desktop systems.

    Impact: this could break user-space tools if they are dumb enough to
    depend on the exact number of white spaces.

    CC: Theodore Ts'o
    CC: Jan Kara
    CC: Peter Zijlstra
    Signed-off-by: Wu Fengguang

    Wu Fengguang
     
  • The estimation value will start from 100MB/s and adapt to the real
    bandwidth in seconds.

    It tries to update the bandwidth only when the disk is fully utilized.
    Any inactive period of more than one second will be skipped.

    The estimated bandwidth reflects how fast the device can write out when
    _fully utilized_, and won't drop to 0 when it goes idle. The value
    remains constant while the disk is idle. During busy writes, fluctuations
    aside, it will also remain high unless knocked down by concurrent reads
    that compete with the async writes for disk time and bandwidth.

    The estimation is not done purely in the flusher because there is no
    guarantee that write_cache_pages() will return in a timely fashion to
    update the bandwidth.

    The bdi->avg_write_bandwidth smoothing is very effective for filtering
    out sudden spikes, though it may be a little biased in the long term.

    The overheads are low because the bdi bandwidth update only occurs at
    200ms intervals.

    The 200ms update interval is suitable because the instantaneous bandwidth
    simply cannot be measured reliably, due to large fluctuations.

    The NFS commits can be as large as seconds worth of data. One XFS
    completion may be as large as half second worth of data if we are going
    to increase the write chunk to half second worth of data. In ext4,
    fluctuations with a period of around 5 seconds are observed. And there
    is another pattern of irregular periods of up to 20 seconds in SSD tests.

    That's why we not only do the estimation at 200ms intervals, but also
    average the samples over a period of 3 seconds and then apply another
    level of smoothing in avg_write_bandwidth.

    CC: Li Shaohua
    CC: Peter Zijlstra
    Signed-off-by: Wu Fengguang

    Wu Fengguang
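    The two-level smoothing can be sketched in user space as a pair of
    exponential moving averages: one tracking roughly a 3-second window of
    200ms samples, and a second smoothing further on top. The constants and
    field names below are illustrative assumptions, not the kernel's exact
    code:

    ```c
    #include <assert.h>

    /* Each sample is the measured throughput over one ~200ms interval.
     * write_bandwidth tracks a ~3s moving estimate; avg_write_bandwidth
     * adds a second, slower level of smoothing on top of it. */

    struct bw_est {
        double write_bandwidth;      /* ~3s moving estimate      */
        double avg_write_bandwidth;  /* long-term smoothed value */
    };

    void bw_init(struct bw_est *e, double initial)
    {
        e->write_bandwidth = initial;      /* e.g. the 100MB/s start value */
        e->avg_write_bandwidth = initial;
    }

    /* One 200ms update: 15 intervals of 200ms span 3 seconds, so weight
     * each new sample 1/15 against the running estimate. */
    void bw_update(struct bw_est *e, double sample)
    {
        e->write_bandwidth += (sample - e->write_bandwidth) / 15.0;
        /* second smoothing level: move only 1/8 of the way per update */
        e->avg_write_bandwidth +=
            (e->write_bandwidth - e->avg_write_bandwidth) / 8.0;
    }

    /* Feed n identical samples, as if the device wrote steadily. */
    void bw_feed(struct bw_est *e, double sample, int n)
    {
        for (int i = 0; i < n; i++)
            bw_update(e, sample);
    }
    ```

    Under a steady load both values converge to the real throughput, while a
    single spiky sample moves avg_write_bandwidth far less than the first-level
    estimate, which is the filtering property described above.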
     
  • Introduce the BDI_WRITTEN counter. It will be used for estimating the
    bdi's write bandwidth.

    Peter Zijlstra :
    Move BDI_WRITTEN accounting into __bdi_writeout_inc().
    This will cover and fix fuse, which only calls bdi_writeout_inc().

    CC: Michael Rubin
    Reviewed-by: KOSAKI Motohiro
    Signed-off-by: Jan Kara
    Signed-off-by: Wu Fengguang

    Jan Kara
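    The design point here is to funnel every caller through the one helper
    that does the accounting, so no per-caller change is needed. A sketch
    (the names mirror the commit; the bodies are simplified stand-ins for
    the real per-cpu counters):

    ```c
    #include <assert.h>

    struct bdi_stats { unsigned long bdi_written; };

    /* Both entry points end up here, so BDI_WRITTEN is accounted once,
     * in one place. */
    static void __bdi_writeout_inc(struct bdi_stats *s)
    {
        s->bdi_written++;
    }

    /* Path taken by ordinary page-writeback completion. */
    void test_clear_page_writeback(struct bdi_stats *s)
    {
        __bdi_writeout_inc(s);
    }

    /* Path taken by fuse, which calls only this wrapper. */
    void bdi_writeout_inc(struct bdi_stats *s)
    {
        __bdi_writeout_inc(s);
    }
    ```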
     
  • Pass struct wb_writeback_work all the way down to writeback_sb_inodes(),
    and initialize the struct writeback_control there.

    struct writeback_control is basically designed to control writeback of a
    single file, but we keep abusing it for writing multiple files in
    writeback_sb_inodes() and its callers.

    This immediately cleans things up: wbc.nr_to_write vs. work->nr_pages
    suddenly starts to make sense, and instead of saving and restoring
    pages_skipped in writeback_sb_inodes() it can always start from a clean
    zero value.

    It also makes for a neat IO pattern change: large dirty files are now
    written in the full 4MB writeback chunk size, rather than in whatever
    quota remained in wbc->nr_to_write.

    Acked-by: Jan Kara
    Proposed-by: Christoph Hellwig
    Signed-off-by: Wu Fengguang

    Wu Fengguang
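    The shape of the refactor can be sketched like this: the long-lived work
    item travels down the call chain, and a fresh writeback_control is built
    per call, so pages_skipped always starts at zero and a large file gets a
    full chunk rather than a leftover quota. The fields and the chunk
    constant below are simplified assumptions:

    ```c
    #include <assert.h>

    #define MAX_WRITEBACK_PAGES 1024   /* stand-in for the 4MB chunk */

    struct wb_writeback_work { long nr_pages; };

    struct writeback_control {
        long nr_to_write;
        long pages_skipped;
    };

    /* Build the per-call control block from the work item: a full chunk,
     * capped by what the work item still wants written, and a clean
     * pages_skipped with no save/restore dance. */
    struct writeback_control wbc_for_work(const struct wb_writeback_work *w)
    {
        struct writeback_control wbc = {
            .nr_to_write = w->nr_pages < MAX_WRITEBACK_PAGES
                             ? w->nr_pages : MAX_WRITEBACK_PAGES,
            .pages_skipped = 0,
        };
        return wbc;
    }
    ```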
     

08 Jun, 2011

1 commit

  • Split the global inode_wb_list_lock into a per-bdi_writeback list_lock,
    as it's currently the most contended lock in the system for metadata
    heavy workloads. It won't help for single-filesystem workloads for
    which we'll need the I/O-less balance_dirty_pages, but at least we
    can dedicate a cpu to spinning on each bdi now for larger systems.

    Based on earlier patches from Nick Piggin and Dave Chinner.

    It reduces lock contention to 1/4 in this test case:
    10 HDD JBOD, 100 dd on each disk, XFS, 6GB ram

    lock_stat version 0.3
    -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    class name con-bounces contentions waittime-min waittime-max waittime-total acq-bounces acquisitions holdtime-min holdtime-max holdtime-total
    -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    vanilla 2.6.39-rc3:
    inode_wb_list_lock: 42590 44433 0.12 147.74 144127.35 252274 886792 0.08 121.34 917211.23
    ------------------
    inode_wb_list_lock 2 [] bdev_inode_switch_bdi+0x29/0x85
    inode_wb_list_lock 34 [] inode_wb_list_del+0x22/0x49
    inode_wb_list_lock 12893 [] __mark_inode_dirty+0x170/0x1d0
    inode_wb_list_lock 10702 [] writeback_single_inode+0x16d/0x20a
    ------------------
    inode_wb_list_lock 2 [] bdev_inode_switch_bdi+0x29/0x85
    inode_wb_list_lock 19 [] inode_wb_list_del+0x22/0x49
    inode_wb_list_lock 5550 [] __mark_inode_dirty+0x170/0x1d0
    inode_wb_list_lock 8511 [] writeback_sb_inodes+0x10f/0x157

    2.6.39-rc3 + patch:
    &(&wb->list_lock)->rlock: 11383 11657 0.14 151.69 40429.51 90825 527918 0.11 145.90 556843.37
    ------------------------
    &(&wb->list_lock)->rlock 10 [] inode_wb_list_del+0x5f/0x86
    &(&wb->list_lock)->rlock 1493 [] writeback_inodes_wb+0x3d/0x150
    &(&wb->list_lock)->rlock 3652 [] writeback_sb_inodes+0x123/0x16f
    &(&wb->list_lock)->rlock 1412 [] writeback_single_inode+0x17f/0x223
    ------------------------
    &(&wb->list_lock)->rlock 3 [] bdi_lock_two+0x46/0x4b
    &(&wb->list_lock)->rlock 6 [] inode_wb_list_del+0x5f/0x86
    &(&wb->list_lock)->rlock 2061 [] __mark_inode_dirty+0x173/0x1cf
    &(&wb->list_lock)->rlock 2629 [] writeback_sb_inodes+0x123/0x16f

    hughd@google.com: fix recursive lock when bdi_lock_two() is called with new the same as old
    akpm@linux-foundation.org: cleanup bdev_inode_switch_bdi() comment

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Wu Fengguang

    Christoph Hellwig
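    Structurally, the split moves the lock into the per-bdi writeback object,
    so writeback against different devices takes different locks and no
    longer contends. A minimal user-space sketch with pthreads (fields
    simplified; the real b_dirty list holds inodes, not a counter):

    ```c
    #include <assert.h>
    #include <pthread.h>

    struct bdi_writeback {
        pthread_mutex_t list_lock;   /* was: one global inode_wb_list_lock */
        long b_dirty_len;            /* stand-in for the b_dirty list      */
    };

    void wb_init(struct bdi_writeback *wb)
    {
        pthread_mutex_init(&wb->list_lock, NULL);
        wb->b_dirty_len = 0;
    }

    /* Like __mark_inode_dirty(): takes only this bdi's lock, so dirtying
     * on one disk never waits on writeback of another. */
    void wb_add_dirty(struct bdi_writeback *wb)
    {
        pthread_mutex_lock(&wb->list_lock);
        wb->b_dirty_len++;
        pthread_mutex_unlock(&wb->list_lock);
    }
    ```

    With one lock per bdi_writeback, a JBOD of 10 disks has 10 independent
    locks where there used to be one, which is where the 4x contention drop
    in the lock_stat output comes from.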
     

21 May, 2011

1 commit


31 Mar, 2011

1 commit


25 Mar, 2011

2 commits

  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs-2.6:
    fs: simplify iget & friends
    fs: pull inode->i_lock up out of writeback_single_inode
    fs: rename inode_lock to inode_hash_lock
    fs: move i_wb_list out from under inode_lock
    fs: move i_sb_list out from under inode_lock
    fs: remove inode_lock from iput_final and prune_icache
    fs: Lock the inode LRU list separately
    fs: factor inode disposal
    fs: protect inode->i_state with inode->i_lock
    autofs4: Do not potentially dereference NULL pointer returned by fget() in autofs_dev_ioctl_setpipefd()
    autofs4 - remove autofs4_lock
    autofs4 - fix d_manage() return on rcu-walk
    autofs4 - fix autofs4_expire_indirect() traversal
    autofs4 - fix dentry leak in autofs4_expire_direct()
    autofs4 - reinstate last used update on access
    vfs - check non-mountpoint dentry might block in __follow_mount_rcu()

    Linus Torvalds
     
    Protect the inode writeback list with a new global lock,
    inode_wb_list_lock, and use it to protect the list manipulations and
    traversals. This lock replaces the inode_lock: since the inodes on the
    list can be validity-checked while holding inode->i_lock, the inode_lock
    is no longer needed to protect the list.

    Signed-off-by: Dave Chinner
    Signed-off-by: Al Viro

    Dave Chinner