01 Mar, 2013

2 commits

  • Pull writeback fixes from Wu Fengguang:
    "Two writeback fixes

    - fix negative (setpoint - dirty) in 32bit archs

    - use down_read_trylock() in writeback_inodes_sb(_nr)_if_idle()"

    * tag 'writeback-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/wfg/linux:
    Negative (setpoint-dirty) in bdi_position_ratio()
    vfs: re-implement writeback_inodes_sb(_nr)_if_idle() and rename them

    Linus Torvalds
     
  • Pull block IO core bits from Jens Axboe:
    "Below are the core block IO bits for 3.9. It was delayed a few days
    since my workstation kept crashing every 2-8h after pulling it into
    current -git, but turns out it is a bug in the new pstate code (divide
    by zero, will report separately). In any case, it contains:

    - The big cfq/blkcg update from Tejun and Vivek.

    - Additional block and writeback tracepoints from Tejun.

    - Improvement of the should sort (based on queues) logic in the plug
    flushing.

    - _io() variants of the wait_for_completion() interface, using
    io_schedule() instead of schedule() to contribute to io wait
    properly.

    - Various little fixes.

    You'll get two trivial merge conflicts, which should be easy enough to
    fix up"

    Fix up the trivial conflicts due to hlist traversal cleanups (commit
    b67bfe0d42ca: "hlist: drop the node parameter from iterators").

    * 'for-3.9/core' of git://git.kernel.dk/linux-block: (39 commits)
    block: remove redundant check to bd_openers()
    block: use i_size_write() in bd_set_size()
    cfq: fix lock imbalance with failed allocations
    drivers/block/swim3.c: fix null pointer dereference
    block: don't select PERCPU_RWSEM
    block: account iowait time when waiting for completion of IO request
    sched: add wait_for_completion_io[_timeout]
    writeback: add more tracepoints
    block: add block_{touch|dirty}_buffer tracepoint
    buffer: make touch_buffer() an exported function
    block: add @req to bio_{front|back}_merge tracepoints
    block: add missing block_bio_complete() tracepoint
    block: Remove should_sort judgement when flush blk_plug
    block,elevator: use new hashtable implementation
    cfq-iosched: add hierarchical cfq_group statistics
    cfq-iosched: collect stats from dead cfqgs
    cfq-iosched: separate out cfqg_stats_reset() from cfq_pd_reset_stats()
    blkcg: make blkcg_print_blkgs() grab q locks instead of blkcg lock
    block: RCU free request_queue
    blkcg: implement blkg_[rw]stat_recursive_sum() and blkg_[rw]stat_merge()
    ...

    Linus Torvalds
     

24 Feb, 2013

1 commit

  • When calculating amount of dirtyable memory, min_free_kbytes should be
    subtracted because it is not intended for dirty pages.

    Addresses http://bugs.debian.org/695182
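
    A minimal sketch of the idea (hedged: simplified from the dirtyable-memory
    accounting, not the literal patch):

        static unsigned long global_dirtyable_memory(void)
        {
                unsigned long x;

                x = global_page_state(NR_FREE_PAGES) +
                    global_page_state(NR_INACTIVE_FILE) +
                    global_page_state(NR_ACTIVE_FILE);
                x -= min(x, dirty_balance_reserve);

                /*
                 * min_free_kbytes is the VM's own reserve (in KB); it must
                 * never be counted as memory that may hold dirty page cache.
                 */
                x -= min(x, (unsigned long)(min_free_kbytes >> (PAGE_SHIFT - 10)));

                return x + 1;   /* never return 0 */
        }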

    [akpm@linux-foundation.org: fix up min_free_kbytes extern declarations]
    [akpm@linux-foundation.org: fix min() warning]
    Signed-off-by: Paul Szabo
    Acked-by: Rik van Riel
    Cc: Wu Fengguang
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Szabo
     

22 Feb, 2013

2 commits

    This is a band-aid to provide stable page writes on jbd without needing
    to backport the fixed locking and page writeback bit handling schemes of
    jbd2. The band-aid works by using bounce buffers to snapshot page contents
    instead of waiting.

    For those wondering about the ext3 bandage -- fixing the jbd locking
    (which was done as part of ext4dev years ago) is a lot of surgery, and
    setting PG_writeback on data pages when we actually hold the page lock
    dropped ext3 performance by nearly an order of magnitude. If we're
    going to migrate iscsi and raid to use stable page writes, the
    complaints about high latency will likely return. We might as well
    centralize their page snapshotting thing to one place.

    Signed-off-by: Darrick J. Wong
    Tested-by: Andy Lutomirski
    Cc: Adrian Hunter
    Cc: Artem Bityutskiy
    Reviewed-by: Jan Kara
    Cc: Joel Becker
    Cc: Mark Fasheh
    Cc: Steven Whitehouse
    Cc: Jens Axboe
    Cc: Eric Van Hensbergen
    Cc: Ron Minnich
    Cc: Latchesar Ionkov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Darrick J. Wong
     
  • Create a helper function to check if a backing device requires stable
    page writes and, if so, performs the necessary wait. Then, make it so
    that all points in the memory manager that handle making pages writable
    use the helper function. This should provide stable page write support
    to most filesystems, while eliminating unnecessary waiting for devices
    that don't require the feature.
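
    A rough sketch of what such a helper looks like (names follow the
    description above; treat it as illustrative rather than the exact patch):

        /*
         * Only wait for writeback if the backing device actually demands
         * stable pages; otherwise let the caller dirty the page right away.
         */
        void wait_for_stable_page(struct page *page)
        {
                struct backing_dev_info *bdi = page->mapping->backing_dev_info;

                if (!bdi_cap_stable_pages_required(bdi))
                        return;

                wait_on_page_writeback(page);
        }

    The points that make pages writable (e.g. a filesystem's page_mkwrite
    handler or the buffered-write path) then call this helper instead of
    waiting on writeback unconditionally.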

    Before this patchset, all filesystems would block, regardless of whether
    or not it was necessary. ext3 would wait, but still generate occasional
    checksum errors. The network filesystems were left to do their own
    thing, so they'd wait too.

    After this patchset, all the disk filesystems except ext3 and btrfs will
    wait only if the hardware requires it. ext3 (if necessary) snapshots
    pages instead of blocking, and btrfs provides its own bdi so the mm will
    never wait. Network filesystems haven't been touched, so either they
    provide their own stable page guarantees or they don't block at all.
    The blocking behavior is back to what it was before 3.0 if you don't
    have a disk requiring stable page writes.

    Here's the result of using dbench to test latency on ext2:

    3.8.0-rc3:
    Operation      Count    AvgLat    MaxLat
    ----------------------------------------
    WriteX        109347     0.028    59.817
    ReadX         347180     0.004     3.391
    Flush          15514    29.828   287.283

    Throughput 57.429 MB/sec  4 clients  4 procs  max_latency=287.290 ms

    3.8.0-rc3 + patches:
    Operation      Count    AvgLat    MaxLat
    ----------------------------------------
    WriteX        105556     0.029     4.273
    ReadX         335004     0.005     4.112
    Flush          14982    30.540   298.634

    Throughput 55.4496 MB/sec  4 clients  4 procs  max_latency=298.650 ms

    As you can see, the maximum write latency drops considerably with this
    patch enabled. The other filesystems (ext3/ext4/xfs/btrfs) behave
    similarly, but see the cover letter for those results.

    Signed-off-by: Darrick J. Wong
    Acked-by: Steven Whitehouse
    Reviewed-by: Jan Kara
    Cc: Adrian Hunter
    Cc: Andy Lutomirski
    Cc: Artem Bityutskiy
    Cc: Joel Becker
    Cc: Mark Fasheh
    Cc: Jens Axboe
    Cc: Eric Van Hensbergen
    Cc: Ron Minnich
    Cc: Latchesar Ionkov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Darrick J. Wong
     

24 Jan, 2013

1 commit

    In bdi_position_ratio(), get the difference (setpoint - dirty) right even
    when it is negative. Both setpoint and dirty are unsigned long, so the
    difference was zero-extended to s64 instead of sign-extended. This issue
    affects all 32-bit architectures; it does not affect 64-bit architectures,
    where long and s64 are equivalent.

    In this function, dirty is between freerun and limit, so the pseudo-float x
    lies in [-1, 1] and is expected to be negative about half the time. With
    zero-extension, instead of a small negative x we obtained a large positive
    one, so bdi_position_ratio() returned garbage.

    Casting the difference to s64 also prevents overflow with left-shift;
    though normally these numbers are small and I never observed a 32-bit
    overflow there.

    (This patch does not solve the PAE OOM issue.)
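
    The pitfall in miniature (a hedged sketch, not the exact kernel
    expression):

        unsigned long setpoint = 100, dirty = 200;

        s64 wrong = (s64)(setpoint - dirty);    /* on 32-bit: 4294967196, zero-extended garbage */
        s64 right = (s64)setpoint - (s64)dirty; /* -100 on all architectures */

    The signed difference then feeds the pseudo-float
    x = (setpoint - dirty) / (limit - setpoint), so getting its sign right is
    what keeps bdi_position_ratio() sane.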

    Paul Szabo psz@maths.usyd.edu.au http://www.maths.usyd.edu.au/u/psz/
    School of Mathematics and Statistics University of Sydney Australia

    Reviewed-by: Jan Kara
    Reported-by: Paul Szabo
    Reference: http://bugs.debian.org/695182
    Signed-off-by: Paul Szabo
    Signed-off-by: Fengguang Wu

    paul.szabo@sydney.edu.au
     

14 Jan, 2013

1 commit

  • Add tracepoints for page dirtying, writeback_single_inode start, inode
    dirtying and writeback. For the latter two inode events, a pair of
    events are defined to denote start and end of the operations (the
    starting one has _start suffix and the one w/o suffix happens after
    the operation is complete). These inode ops are FS specific and can
    be non-trivial and having enclosing tracepoints is useful for external
    tracers.
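
    Roughly, the paired events bracket the filesystem callback like this (a
    sketch of the call pattern in __mark_inode_dirty(), not the literal diff):

        trace_writeback_dirty_inode_start(inode, flags);

        if (sb->s_op->dirty_inode)
                sb->s_op->dirty_inode(inode, flags);    /* FS-specific, may be slow */

        trace_writeback_dirty_inode(inode, flags);      /* the "end" event, no suffix */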

    This is part of a set of tracepoint additions to improve visibility into
    dirtying / writeback operations for the io tracer and userland.

    v2: writeback_dirty_inode[_start] TPs may be called for files on
    pseudo FSes w/ unregistered bdi. Check whether bdi->dev is %NULL
    before dereferencing.

    v3: buffer dirtying moved to a block TP.

    Signed-off-by: Tejun Heo
    Reviewed-by: Jan Kara
    Signed-off-by: Jens Axboe

    Tejun Heo
     

21 Dec, 2012

1 commit

    The system uses global_dirtyable_memory() to calculate the number of
    dirtyable pages, i.e. pages that can be allocated to the page cache. A bug
    causes an underflow, making the page count look like a huge unsigned
    number. This in turn confuses the dirty writeback throttling into
    aggressively writing back pages as they become dirty (usually one page at
    a time). This generally only affects systems with highmem, because the
    underflowed count gets subtracted from the global count of dirtyable
    memory.

    The problem was introduced with v3.2-4896-gab8fabd

    The fix is to ensure we don't get an underflowed total of either highmem
    or global dirtyable memory.
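
    Hedged sketch of the clamping (the real patch adjusts
    highmem_dirtyable_memory() and its caller along these lines):

        unsigned long nr_pages;

        /* per-zone highmem contribution, clamped so it cannot underflow */
        nr_pages = zone_page_state(z, NR_FREE_PAGES) + zone_reclaimable_pages(z);
        nr_pages -= min(nr_pages, z->dirty_balance_reserve);
        x += nr_pages;

        ...

        /* and never report more dirtyable highmem than actually exists */
        return min(x, total);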

    Signed-off-by: Sonny Rao
    Signed-off-by: Puneet Kumar
    Acked-by: Johannes Weiner
    Tested-by: Damien Wyart
    Cc:
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sonny Rao
     

04 Aug, 2012

1 commit

  • Finally we can kill the 'sync_supers' kernel thread along with the
    '->write_super()' superblock operation because all the users are gone.
    Now every file-system is supposed to self-manage its own superblock and
    its dirty state.

    The nice thing about killing this thread is that it improves power management.
    Indeed, 'sync_supers' is a source of monotonic system wake-ups - it woke up
    every 5 seconds no matter what - even if there were no dirty superblocks and
    even if there were no file-systems using this service (e.g., btrfs and
    journalled ext4 do not need it). So it was wasting power most of the time. And
    because the thread was in the core of the kernel, all systems had to have it.
    So I am quite happy to make it go away.

    Interestingly, this thread is a left-over from the pdflush kernel thread which
    was a self-forking kernel thread responsible for all the write-back in old
    Linux kernels. It was turned into per-block device BDI threads, and
    'sync_supers' was a left-over. Thus, R.I.P, pdflush as well.

    Signed-off-by: Artem Bityutskiy
    Signed-off-by: Al Viro

    Artem Bityutskiy
     

06 May, 2012

1 commit

    This prevents global_dirty_limit from remaining 0 (the initial value) for
    a long time, since it is only updated in update_dirty_limit() when above
    the dirty freerun area.

    It will avoid unexpected consequences when some random code uses it as a
    convenient approximation of the global dirty threshold.
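
    One possible shape of the fix (purely illustrative; it assumes the limit
    is seeded wherever the thresholds get recomputed):

        /*
         * Sketch only: make sure global_dirty_limit never lingers at its
         * initial value of 0 just because we have not yet been above the
         * dirty freerun area.
         */
        if (!global_dirty_limit)
                global_dirty_limit = dirty_thresh;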

    Signed-off-by: Fengguang Wu

    Fengguang Wu
     

29 Mar, 2012

1 commit

  • Pull ext4 updates for 3.4 from Ted Ts'o:
    "Ext4 commits for 3.3 merge window; mostly cleanups and bug fixes

    The changes to export dirty_writeback_interval are from Artem's s_dirt
    cleanup patch series. The same is true of the change to remove the
    s_dirt helper functions which never got used by anyone in-tree. I've
    run these changes by Al Viro, and am carrying them so that Artem can
    more easily fix up the rest of the file systems during the next merge
    window. (Originally we had hoped to remove the use of s_dirt from
    ext4 during this merge window, but his patches had some bugs, so I
    ultimately ended up dropping them from the ext4 tree.)"

    * tag 'ext4_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4: (66 commits)
    vfs: remove unused superblock helpers
    mm: export dirty_writeback_interval
    ext4: remove useless s_dirt assignment
    ext4: write superblock only once on unmount
    ext4: do not mark superblock as dirty unnecessarily
    ext4: correct ext4_punch_hole return codes
    ext4: remove restrictive checks for EOFBLOCKS_FL
    ext4: always set then trimmed blocks count into len
    ext4: fix trimmed block count accunting
    ext4: fix start and len arguments handling in ext4_trim_fs()
    ext4: update s_free_{inodes,blocks}_count during online resize
    ext4: change some printk() calls to use ext4_msg() instead
    ext4: avoid output message interleaving in ext4_error_()
    ext4: remove trailing newlines from ext4_msg() and ext4_error() messages
    ext4: add no_printk argument validation, fix fallout
    ext4: remove redundant "EXT4-fs: " from uses of ext4_msg
    ext4: give more helpful error message in ext4_ext_rm_leaf()
    ext4: remove unused code from ext4_ext_map_blocks()
    ext4: rewrite punch hole to use ext4_ext_remove_space()
    jbd2: cleanup journal tail after transaction commit
    ...

    Linus Torvalds
     

22 Mar, 2012

2 commits

  • Export 'dirty_writeback_interval' to make it visible to
    file-systems. We are going to push superblock management down to
    file-systems and get rid of the 'sync_supers' kernel thread completely.

    Signed-off-by: Artem Bityutskiy
    Cc: Al Viro
    Signed-off-by: "Theodore Ts'o"

    Artem Bityutskiy
     
  • When starting a memory hog task, a desktop box w/o swap is found to go
    unresponsive for a long time. It's solely caused by lots of congestion
    waits in throttle_vm_writeout():

    gnome-system-mo-4201 553.073384: congestion_wait: throttle_vm_writeout+0x70/0x7f shrink_mem_cgroup_zone+0x48f/0x4a1
    gnome-system-mo-4201 553.073386: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
    gtali-4237 553.080377: congestion_wait: throttle_vm_writeout+0x70/0x7f shrink_mem_cgroup_zone+0x48f/0x4a1
    gtali-4237 553.080378: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000
    Xorg-3483 553.103375: congestion_wait: throttle_vm_writeout+0x70/0x7f shrink_mem_cgroup_zone+0x48f/0x4a1
    Xorg-3483 553.103377: writeback_congestion_wait: usec_timeout=100000 usec_delayed=100000

    The root cause is that the dirty threshold is knocked down a lot by the
    memory hog task. Fix it by using global_dirty_limit, which decreases
    gradually on such events and guarantees we stay above the (also
    decreasing) nr_dirty while following the threshold down to its new value.
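
    Conceptually, the check in throttle_vm_writeout() then clamps against the
    smoothed limit rather than the raw threshold (hedged sketch):

        global_dirty_limits(&background_thresh, &dirty_thresh);

        /*
         * global_dirty_limit only follows the threshold down gradually, so
         * a sudden drop caused by a memory hog cannot push the effective
         * limit below the pages that are already dirty/under writeback.
         */
        dirty_thresh = max(dirty_thresh, global_dirty_limit);

        if (global_page_state(NR_UNSTABLE_NFS) +
            global_page_state(NR_WRITEBACK) <= dirty_thresh)
                break;
        congestion_wait(BLK_RW_ASYNC, HZ/10);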

    Signed-off-by: Fengguang Wu
    Cc: Johannes Weiner
    Cc: Jan Kara
    Cc: Greg Thelen
    Cc: Ying Han
    Cc: KAMEZAWA Hiroyuki
    Reviewed-by: Rik van Riel
    Cc: Mel Gorman
    Cc: Minchan Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Fengguang Wu
     

11 Jan, 2012

5 commits

  • * 'writeback-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/wfg/linux:
    writeback: move MIN_WRITEBACK_PAGES to fs-writeback.c
    writeback: balanced_rate cannot exceed write bandwidth
    writeback: do strict bdi dirty_exceeded
    writeback: avoid tiny dirty poll intervals
    writeback: max, min and target dirty pause time
    writeback: dirty ratelimit - think time compensation
    btrfs: fix dirtied pages accounting on sub-page writes
    writeback: fix dirtied pages accounting on redirty
    writeback: fix dirtied pages accounting on sub-page writes
    writeback: charge leaked page dirties to active tasks
    writeback: Include all dirty inodes in background writeback

    Linus Torvalds
     
  • The maximum number of dirty pages that exist in the system at any time is
    determined by a number of pages considered dirtyable and a user-configured
    percentage of those, or an absolute number in bytes.

    This number of dirtyable pages is the sum of memory provided by all the
    zones in the system minus their lowmem reserves and high watermarks, so
    that the system can retain a healthy number of free pages without having
    to reclaim dirty pages.

    But there is a flaw in that we have a zoned page allocator which does not
    care about the global state but rather the state of individual memory
    zones. And right now there is nothing that prevents one zone from filling
    up with dirty pages while other zones are spared, which frequently leads
    to situations where kswapd, in order to restore the watermark of free
    pages, does indeed have to write pages from that zone's LRU list. This
    can interfere so badly with IO from the flusher threads that major
    filesystems (btrfs, xfs, ext4) mostly ignore write requests from reclaim
    already, taking away the VM's only possibility to keep such a zone
    balanced, aside from hoping the flushers will soon clean pages from that
    zone.

    Enter per-zone dirty limits. They are to a zone's dirtyable memory what
    the global limit is to the global amount of dirtyable memory, and try to
    make sure that no single zone receives more than its fair share of the
    globally allowed dirty pages in the first place. As the number of pages
    considered dirtyable excludes the zones' lowmem reserves and high
    watermarks, the maximum number of dirty pages in a zone is such that the
    zone can always be balanced without requiring page cleaning.

    As this is a placement decision in the page allocator and pages are
    dirtied only after the allocation, this patch allows allocators to pass
    __GFP_WRITE when they know in advance that the page will be written to and
    become dirty soon. The page allocator will then attempt to allocate from
    the first zone of the zonelist - which on NUMA is determined by the task's
    NUMA memory policy - that has not exceeded its dirty limit.

    At first glance, it would appear that the diversion to lower zones can
    increase pressure on them, but this is not the case. With a full high
    zone, allocations will be diverted to lower zones eventually, so it is
    more of a shift in timing of the lower zone allocations. Workloads that
    previously could fit their dirty pages completely in the higher zone may
    be forced to allocate from lower zones, but the number of pages that
    "spill over" is itself limited by the lower zones' dirty constraints,
    and thus unlikely to become a problem.

    For now, the problem of unfair dirty page distribution remains for NUMA
    configurations where the zones allowed for allocation are in sum not big
    enough to trigger the global dirty limits, wake up the flusher threads and
    remedy the situation. Because of this, an allocation that could not
    succeed on any of the considered zones is allowed to ignore the dirty
    limits before going into direct reclaim or even failing the allocation,
    until a future patch changes the global dirty throttling and flusher
    thread activation so that they take individual zone states into account.
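
    In the allocator fast path the placement decision boils down to something
    like this (sketch; zone_dirty_ok() stands for "this zone is still below
    its share of the dirty limit"):

        /* in get_page_from_freelist(), while walking the zonelist */
        if ((gfp_mask & __GFP_WRITE) && !zone_dirty_ok(zone))
                continue;       /* zone already holds its share of dirty pages */

    Page cache allocations that are known to be dirtied soon (for example the
    write_begin path) are the natural candidates to pass __GFP_WRITE.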

    Test results

    15M DMA + 3246M DMA32 + 504M Normal = 3765M memory
    40% dirty ratio
    16G USB thumb drive
    10 runs of dd if=/dev/zero of=disk/zeroes bs=32k count=$((10 << 15))

                   seconds               nr_vmscan_write
                   (stddev)              min |      median |         max
    xfs
    vanilla:       549.747 ( 3.492)      0.000 |      0.000 |       0.000
    patched:       550.996 ( 3.802)      0.000 |      0.000 |       0.000

    fuse-ntfs
    vanilla:      1183.094 (53.178)  54349.000 |  59341.000 |   65163.000
    patched:       558.049 (17.914)      0.000 |      0.000 |      43.000

    btrfs
    vanilla:       573.679 (14.015) 156657.000 | 460178.000 |  606926.000
    patched:       563.365 (11.368)      0.000 |      0.000 |    1362.000

    ext4
    vanilla:       561.197 (15.782)      0.000 | 2725438.000 | 4143837.000
    patched:       568.806 (17.496)      0.000 |       0.000 |       0.000

    Signed-off-by: Johannes Weiner
    Reviewed-by: Minchan Kim
    Acked-by: Mel Gorman
    Reviewed-by: Michal Hocko
    Tested-by: Wu Fengguang
    Cc: KAMEZAWA Hiroyuki
    Cc: Christoph Hellwig
    Cc: Dave Chinner
    Cc: Jan Kara
    Cc: Shaohua Li
    Cc: Rik van Riel
    Cc: Chris Mason
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • The next patch will introduce per-zone dirty limiting functions in
    addition to the traditional global dirty limiting.

    Rename determine_dirtyable_memory() to global_dirtyable_memory() before
    adding the zone-specific version, and fix up its documentation.

    Also, move the functions to determine the dirtyable memory and the
    function to calculate the dirty limit based on that together so that their
    relationship is more apparent and they can be commented on as a group.

    Signed-off-by: Johannes Weiner
    Reviewed-by: Minchan Kim
    Acked-by: Mel Gorman
    Reviewed-by: Michal Hocko
    Cc: KAMEZAWA Hiroyuki
    Cc: Christoph Hellwig
    Cc: Wu Fengguang
    Cc: Dave Chinner
    Cc: Jan Kara
    Cc: Shaohua Li
    Cc: Rik van Riel
    Cc: Chris Mason
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • Per-zone dirty limits try to distribute page cache pages allocated for
    writing across zones in proportion to the individual zone sizes, to reduce
    the likelihood of reclaim having to write back individual pages from the
    LRU lists in order to make progress.

    This patch:

    The amount of dirtyable pages should not include the full number of free
    pages: there is a number of reserved pages that the page allocator and
    kswapd always try to keep free.

    The closer (reclaimable pages - dirty pages) is to the number of reserved
    pages, the more likely it becomes for reclaim to run into dirty pages:

    +----------+ ---
    |   anon   |  |
    +----------+  |
    |          |  |
    |          |  -- dirty limit new    -- flusher new
    |   file   |  |                     |
    |          |  |                     |
    |          |  -- dirty limit old    -- flusher old
    |          |                        |
    +----------+                       --- reclaim
    | reserved |
    +----------+
    |  kernel  |
    +----------+

    This patch introduces a per-zone dirty reserve that takes both the lowmem
    reserve as well as the high watermark of the zone into account, and a
    global sum of those per-zone values that is subtracted from the global
    amount of dirtyable pages. The lowmem reserve is unavailable to page
    cache allocations and kswapd tries to keep the high watermark free. We
    don't want to end up in a situation where reclaim has to clean pages in
    order to balance zones.

    Not treating reserved pages as dirtyable on a global level is only a
    conceptual fix. In reality, dirty pages are not distributed equally
    across zones and reclaim runs into dirty pages on a regular basis.

    But it is important to get this right before tackling the problem on a
    per-zone level, where the distance between reclaim and the dirty pages is
    mostly much smaller in absolute numbers.
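
    The reserve roughly sums, over all zones, the largest lowmem_reserve[]
    entry plus the high watermark (hedged sketch of the accounting):

        unsigned long reserve_pages = 0;
        struct zone *zone;
        int i;

        for_each_zone(zone) {
                unsigned long max = 0;

                /* biggest lowmem_reserve[]: pages kept off-limits for lower zones */
                for (i = 0; i < MAX_NR_ZONES; i++)
                        if (zone->lowmem_reserve[i] > max)
                                max = zone->lowmem_reserve[i];

                /* plus what kswapd tries to keep free in this zone */
                max += high_wmark_pages(zone);

                reserve_pages += max;
        }
        dirty_balance_reserve = reserve_pages;  /* subtracted from dirtyable memory */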

    [akpm@linux-foundation.org: fix highmem build]
    Signed-off-by: Johannes Weiner
    Reviewed-by: Rik van Riel
    Reviewed-by: Michal Hocko
    Reviewed-by: Minchan Kim
    Acked-by: Mel Gorman
    Cc: KAMEZAWA Hiroyuki
    Cc: Christoph Hellwig
    Cc: Wu Fengguang
    Cc: Dave Chinner
    Cc: Jan Kara
    Cc: Shaohua Li
    Cc: Chris Mason
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     
  • The tracing ring-buffer used this function briefly, but not anymore.
    Make it local to the writeback code again.

    Also, move the function so that no forward declaration needs to be
    reintroduced.

    Signed-off-by: Johannes Weiner
    Acked-by: Mel Gorman
    Reviewed-by: Michal Hocko
    Cc: Wu Fengguang
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Weiner
     

04 Jan, 2012

1 commit

  • Move invalidate_bdev, block_sync_page into fs/block_dev.c. Export
    kill_bdev as well, so brd doesn't have to open code it. Reduce
    buffer_head.h requirement accordingly.

    Removed a rather large comment from invalidate_bdev, as it looked a bit
    obsolete to bother moving. The small comment replacing it says enough.

    Signed-off-by: Nick Piggin
    Cc: Al Viro
    Cc: Christoph Hellwig
    Signed-off-by: Andrew Morton
    Signed-off-by: Al Viro

    Al Viro
     

18 Dec, 2011

8 commits

  • Add an upper limit to balanced_rate according to the below inequality.
    This filters out some rare but huge singular points, which at least
    enables more readable gnuplot figures.

    When there are N dd dirtiers,

    balanced_dirty_ratelimit = write_bw / N

    So it holds that

    balanced_dirty_ratelimit <= write_bw
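
    which can be enforced with a simple clamp when the balanced rate is
    computed (illustrative sketch):

        if (balanced_dirty_ratelimit > write_bw)
                balanced_dirty_ratelimit = write_bw;
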
    Acked-by: Peter Zijlstra
    Signed-off-by: Wu Fengguang

    Wu Fengguang
     
  • This helps to reduce dirty throttling polls and hence CPU overheads.

    bdi->dirty_exceeded typically only helps when suddenly starting 100+
    dd's on a disk, in which case the dd's may need to poll
    balance_dirty_pages() earlier than tsk->nr_dirtied_pause.

    CC: Jan Kara
    CC: Peter Zijlstra
    Signed-off-by: Wu Fengguang

    Wu Fengguang
     
    The LKP tests see a big 56% regression for the case fio_mmap_randwrite_64k.
    Shaohua managed to root-cause it to the much smaller dirty pause times and
    hence the much more frequent invocations of the IO-less
    balance_dirty_pages(). Since fio_mmap_randwrite_64k effectively contains
    both reads and writes, the more frequent pauses triggered more idling in
    the cfq IO scheduler.

    The solution is to increase pause time all the way up to the max 200ms
    in this case, which is found to restore most performance. This will help
    reduce CPU overheads in other cases, too.

    Note that I don't expect many performance critical workloads to run this
    access pattern: the mmap read-on-write is rather inefficient and could
    be avoided by doing normal write syscalls.

    CC: Jan Kara
    CC: Peter Zijlstra
    Reported-by: Li Shaohua
    Tested-by: Li Shaohua
    Signed-off-by: Wu Fengguang

    Wu Fengguang
     
  • Control the pause time and the call intervals to balance_dirty_pages()
    with three parameters:

    1) max_pause, limited by bdi_dirty and MAX_PAUSE

    2) the target pause time, which grows with the number of dd tasks
    and is normally limited by max_pause/2

    3) the minimal pause, set to half the target pause
    and used to skip short sleeps and accumulate them into bigger ones

    The typical behaviors after patch:

    - if ever task_ratelimit is far below dirty_ratelimit, the pause time
    will remain constant at max_pause and nr_dirtied_pause will be
    fluctuating with task_ratelimit

    - in the normal cases, nr_dirtied_pause will remain stable (keep in the
    same pace with dirty_ratelimit) and the pause time will be fluctuating
    with task_ratelimit

    In summary, someone has to fluctuate with task_ratelimit, because

    task_ratelimit = nr_dirtied_pause / pause

    We normally prefer a stable nr_dirtied_pause, until reaching max_pause.

    The notable behavior changes are:

    - in stable workloads, there will no longer be sudden big trajectory
    switching of nr_dirtied_pause, as Peter was concerned about. It will be as
    smooth as dirty_ratelimit and changing proportionally with it (as
    always, assuming bdi bandwidth does not fluctuate across 2^N lines,
    otherwise nr_dirtied_pause will show up in 2+ parallel trajectories)

    - in the rare cases when something keeps task_ratelimit far below
    dirty_ratelimit, the smoothness can no longer be retained and
    nr_dirtied_pause will be "dancing" with task_ratelimit. This fixes a
    (not that destructive, but still not good) bug where dirty_ratelimit
    gets brought down undesirably

    CC: Peter Zijlstra
    Signed-off-by: Wu Fengguang

    Wu Fengguang
     
  • Compensate the task's think time when computing the final pause time,
    so that ->dirty_ratelimit can be executed accurately.

    think time := time spend outside of balance_dirty_pages()

    In the rare case that the task slept longer than the 200ms period time
    (resulting in a negative pause time), the sleep time will be compensated
    for in the following periods, too, if it's less than 1 second.

    Accumulated errors are carefully avoided as long as the max pause area
    is not hit.

    Pseudo code:

    period = pages_dirtied / task_ratelimit;
    think = jiffies - dirty_paused_when;
    pause = period - think;

    1) normal case: period > think

    pause = period - think
    dirty_paused_when = jiffies + pause
    nr_dirtied = 0

                          period time
         |===============================>|
          think time         pause time
         |===============>|==============>|
    ------|----------------|---------------|------------------------
    dirty_paused_when                   jiffies

    2) no pause case: period <= think

                          period time
         |===============================>|
                                    think time
         |===================================================>|
    ------|--------------------------------+-------------------|----
    dirty_paused_when                                       jiffies

    Acked-by: Jan Kara
    Acked-by: Peter Zijlstra
    Signed-off-by: Wu Fengguang

    Wu Fengguang
     
  • De-account the accumulative dirty counters on page redirty.

    Page redirties (very common in ext4) will introduce a mismatch between
    counters (a) and (b):

    a) NR_DIRTIED, BDI_DIRTIED, tsk->nr_dirtied
    b) NR_WRITTEN, BDI_WRITTEN

    This will introduce systematic errors in balanced_rate and result in
    dirty page position errors (ie. the dirty pages are no longer balanced
    around the global/bdi setpoints).
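
    A sketch of such a de-accounting helper (signature assumed; callers like
    redirty_page_for_writepage() would invoke it when re-dirtying):

        /*
         * Undo the "dirtied" accounting for a page that was already dirty,
         * so the (a) counters stay in step with the (b) counters above.
         */
        void account_page_redirty(struct page *page)
        {
                struct address_space *mapping = page->mapping;

                if (mapping && mapping_cap_account_dirty(mapping)) {
                        current->nr_dirtied--;
                        dec_zone_page_state(page, NR_DIRTIED);
                        dec_bdi_stat(mapping->backing_dev_info, BDI_DIRTIED);
                }
        }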

    Acked-by: Jan Kara
    Acked-by: Peter Zijlstra
    Signed-off-by: Wu Fengguang

    Wu Fengguang
     
    When dd'ing in 512-byte chunks, generic_perform_write() calls
    balance_dirty_pages_ratelimited() 8 times for the same page, but
    obviously the page is only dirtied once.

    Fix it by accounting tsk->nr_dirtied and bdp_ratelimits at page-dirty time.
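
    That is, the per-task and per-cpu ratelimit counters move from the
    ratelimited entry point into the place where the page really becomes
    dirty (hedged sketch of the accounting site):

        /* in account_page_dirtied(), i.e. executed once per real page dirtying */
        __inc_zone_page_state(page, NR_DIRTIED);
        __inc_bdi_stat(mapping->backing_dev_info, BDI_DIRTIED);
        task_io_account_write(PAGE_CACHE_SIZE);
        current->nr_dirtied++;
        this_cpu_inc(bdp_ratelimits);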

    Acked-by: Jan Kara
    Acked-by: Peter Zijlstra
    Signed-off-by: Wu Fengguang

    Wu Fengguang
     
    It's a years-long problem that a large number of short-lived dirtiers
    (e.g. gcc instances in a fast kernel build) may starve long-run dirtiers
    (e.g. dd) as well as push the dirty pages up to the global hard limit.

    The solution is to charge the pages dirtied by the exited gcc to the
    other random dirtying tasks. It's not perfect, but it should behave well
    enough in practice, because throttled tasks aren't actually running, so
    the tasks that are running are more likely to pick up the charge and get
    throttled, thereby promoting an equal spread.
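
    Mechanically, the exiting task's unconsumed quota is parked in a per-cpu
    counter (the dirty_throttle_leaks mentioned in the compile fix below) and
    later folded into whichever task dirties pages next; a hedged sketch,
    with locking/preemption details elided:

        /* 1) task exit path: park the pages this task dirtied but never paid for */
        if (tsk->nr_dirtied)
                __this_cpu_add(dirty_throttle_leaks, tsk->nr_dirtied);

        /* 2) balance_dirty_pages_ratelimited(): hand the leaked pages to the
         *    current (still running) dirtier so somebody throttles for them */
        int leaked = __this_cpu_read(dirty_throttle_leaks);

        if (leaked > 0 && current->nr_dirtied < ratelimit) {
                leaked = min(leaked, ratelimit - current->nr_dirtied);
                __this_cpu_sub(dirty_throttle_leaks, leaked);
                current->nr_dirtied += leaked;
        }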

    Randy: fix compile error: 'dirty_throttle_leaks' undeclared in exit.c

    Acked-by: Jan Kara
    Acked-by: Peter Zijlstra
    Signed-off-by: Randy Dunlap
    Signed-off-by: Wu Fengguang

    Wu Fengguang
     

08 Dec, 2011

3 commits

    Some traces show lots of bdi_dirty=0 lines where bdi_dirty would actually
    be some small value if not for the accounting errors in the per-cpu bdi
    stats.

    In this case the max pause time should really be set to the smallest
    (non-zero) value to avoid IO queue underrun and improve throughput.

    Signed-off-by: Wu Fengguang

    Wu Fengguang
     
    On a system with 1 local mount and 1 NFS mount, if the NFS server stops
    responding while dd is writing to the NFS mount, the NFS dirty pages may
    exceed the global dirty limit and _every_ task that writes will be
    blocked. The whole system appears unresponsive.

    The workaround is to let through the bdis that have only a small number
    of dirty pages. The number chosen (bdi_stat_error pages) is not enough to
    let the local disk run at optimal throughput, but it is enough to make
    the system responsive on a broken NFS mount. The user can then kill the
    dirtiers on the NFS mount and increase the global dirty limit to bring
    the local disk's throughput back up.

    This risks allowing dirty pages to grow much larger than the global dirty
    limit when there are 1000+ mounts; however, that's very unlikely to
    happen, especially in low memory profiles.
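
    In the balance_dirty_pages() throttle loop this amounts to an early break
    for bdis whose dirty count is within the per-cpu counter error (sketch):

        /*
         * A bdi holding no more dirty pages than the bdi_stat error margin
         * is effectively clean: let its dirtiers proceed so one stuck device
         * (e.g. a dead NFS server) cannot block writers to all the others.
         */
        if (unlikely(bdi_dirty <= bdi_stat_error(bdi)))
                break;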

    Signed-off-by: Wu Fengguang

    Wu Fengguang
     
  • We do "floating proportions" to let active devices to grow its target
    share of dirty pages and stalled/inactive devices to decrease its target
    share over time.

    It works well except in the case of "an inactive disk suddenly goes
    busy", where the initial target share may be too small. To mitigate
    this, bdi_position_ratio() has the below line to raise a small
    bdi_thresh when it's safe to do so, so that the disk be feed with enough
    dirty pages for efficient IO and in turn fast rampup of bdi_thresh:

    bdi_thresh = max(bdi_thresh, (limit - dirty) / 8);

    balance_dirty_pages() normally does negative feedback control which
    adjusts ratelimit to balance the bdi dirty pages around the target.
    In some extreme cases when that is not enough, it will have to block
    the tasks completely until the bdi dirty pages drop below bdi_thresh.

    Acked-by: Jan Kara
    Acked-by: Peter Zijlstra
    Signed-off-by: Wu Fengguang

    Wu Fengguang
     

17 Nov, 2011

2 commits

  • They are not used any more.

    Signed-off-by: Wu Fengguang

    Wu Fengguang
     
    The sleep-based balance_dirty_pages() can pause at most MAX_PAUSE=200ms
    for every 4KB page, which means it cannot throttle a task below
    4KB/200ms = 20KB/s. So when there are more than 512 dd tasks writing to a
    10MB/s USB stick, its bdi dirty pages could grow out of control.

    Even if we could increase MAX_PAUSE, the minimal pause (task_ratelimit = 1)
    still means a limit of 4KB/s.

    They can eventually be safeguarded by the global limit check
    (nr_dirty < dirty_thresh). However if someone is also writing to an
    HDD at the same time, it'll get poor HDD write performance.

    We at least want to maintain good write performance for other devices
    when one device is attacked by some "massive parallel" workload, or
    suffers from slow write bandwidth, or somehow get stalled due to some
    error condition (eg. NFS server not responding).

    For a stalled device, we need to completely block its dirtiers, too,
    before its bdi dirty pages grow all the way up to the global limit and
    leave no space for the other functional devices.

    So change the loop exit condition to

        /*
         * Always enforce global dirty limit; also enforce bdi dirty limit
         * if the normal max_pause sleeps cannot keep things under control.
         */
        if (nr_dirty < dirty_thresh &&
            (bdi_dirty < bdi_thresh || bdi->dirty_ratelimit > 1))
                break;

    which can be further simplified to

        if (task_ratelimit)
                break;

    Signed-off-by: Wu Fengguang

    Wu Fengguang
     

16 Nov, 2011

1 commit

    There is no reason why a task in balance_dirty_pages() shouldn't be
    killable, and it helps in recovering from some error conditions (like when
    a filesystem goes into an error state and cannot accept writeback anymore,
    but we still want to kill processes using it so that we are able to
    unmount it).

    There will be follow-up patches to further abort generic_perform_write()
    and other filesystem write loops, to avoid a large write + SIGKILL
    combination exceeding the dirty limit and possibly causing a strange OOM.
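
    The sleep itself then becomes killable, with a fatal-signal check right
    after it (sketch of the pause in balance_dirty_pages()):

        __set_current_state(TASK_KILLABLE);
        io_schedule_timeout(pause);

        /* ... */

        if (fatal_signal_pending(current))
                break;          /* let the task die instead of looping here forever */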

    Reported-by: Kazuya Mio
    Tested-by: Kazuya Mio
    Reviewed-by: Neil Brown
    Reviewed-by: KOSAKI Motohiro
    Signed-off-by: Jan Kara
    Signed-off-by: Wu Fengguang

    Jan Kara
     

07 Nov, 2011

1 commit

    In balance_dirty_pages(), task_ratelimit may not be initialized
    (its initialization is skipped by the goto pause), yet it is then used
    when calling the tracing hook.

    Fix it by moving the task_ratelimit assignment before the goto pause.

    Reported-by: Witold Baryluk
    Signed-off-by: Wu Fengguang

    Wu Fengguang