07 Jan, 2009

2 commits

  • …/git/tip/linux-2.6-tip

    * 'core-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
    rcu: fix rcutorture bug
    rcu: eliminate synchronize_rcu_xxx macro
    rcu: make treercu safe for suspend and resume
    rcu: fix rcutree grace-period-latency bug on small systems
    futex: catch certain assymetric (get|put)_futex_key calls
    futex: make futex_(get|put)_key() calls symmetric
    locking, percpu counters: introduce separate lock classes
    swiotlb: clean up EXPORT_SYMBOL usage
    swiotlb: remove unnecessary declaration
    swiotlb: replace architecture-specific swiotlb.h with linux/swiotlb.h
    swiotlb: add support for systems with highmem
    swiotlb: store phys address in io_tlb_orig_addr array
    swiotlb: add hwdev to swiotlb_phys_to_bus() / swiotlb_sg_to_bus()

    Linus Torvalds
     
  • The background dirty threshold and the dirty limit are better declared
    with unsigned long type specifiers, since negative writeback thresholds
    are not possible.

    These values, as returned by get_dirty_limits(), are normally compared
    with ZVC values to determine whether writeback shall commence or be
    throttled. Such page counts cannot be negative, so declaring the page
    limits as signed is unnecessary.
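
    For illustration, the change amounts to switching the output parameters of
    get_dirty_limits() from signed to unsigned types. A sketch of the era's
    prototype in mm/page-writeback.c, quoted from memory rather than from the
    patch itself:

        /* before: signed types, although negative thresholds cannot occur */
        static void get_dirty_limits(long *pbackground, long *pdirty,
                                     long *pbdi_dirty,
                                     struct backing_dev_info *bdi);

        /* after: page counts are naturally unsigned long */
        static void get_dirty_limits(unsigned long *pbackground,
                                     unsigned long *pdirty,
                                     unsigned long *pbdi_dirty,
                                     struct backing_dev_info *bdi);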

    Acked-by: Peter Zijlstra
    Cc: Dave Chinner
    Cc: Christoph Lameter
    Signed-off-by: David Rientjes
    Cc: Andrea Righi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes
     

29 Dec, 2008

1 commit

  • Impact: fix lockdep false positives

    Classify percpu_counter instances similar to regular lock objects --
    that is, per instantiation site.

    The networking code has increased its use of percpu_counters, which
    leads to false positives if they are treated as a single class.
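
    The usual lockdep idiom for per-call-site classification is a static
    lock_class_key hidden inside the init macro, so every expansion site gets
    its own class. A minimal sketch of that idiom (not necessarily the exact
    lib/percpu_counter.c code):

        #include <linux/percpu_counter.h>
        #include <linux/lockdep.h>

        /* each expansion site gets its own static key, hence its own class */
        #define percpu_counter_init(fbc, value)                         \
                ({                                                      \
                        static struct lock_class_key __key;            \
                                                                        \
                        __percpu_counter_init(fbc, value, &__key);      \
                })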

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

03 Dec, 2008

1 commit

  • Devices which share the same queue, like floppies and mtd devices, get
    registered multiple times in the bdi interface, but the bdi accounts only
    for the last-registered of the devices sharing one queue.

    On remove, all earlier registered devices leak, stay around in sysfs, and
    cause "duplicate filename" errors if the devices are re-created.

    This patch prevents the creation of more than one bdi interface per
    queue; the bdi device carries the dev_t name of the first block device
    registered from the pool of devices sharing that queue.
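
    One way to express that rule in code, as a sketch rather than the actual
    diff (bdi_register_dev() and the bdi->dev member are the era's names; the
    placement of the check is illustrative):

        #include <linux/backing-dev.h>

        int bdi_register_dev(struct backing_dev_info *bdi, dev_t dev)
        {
                /* the queue's bdi was already registered by an earlier disk */
                if (bdi->dev)
                        return 0;

                return bdi_register(bdi, NULL, "%u:%u", MAJOR(dev), MINOR(dev));
        }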

    [akpm@linux-foundation.org: add a WARN_ON so we know which drivers are misbehaving]
    Tested-by: Peter Korsgaard
    Acked-by: Peter Zijlstra
    Signed-off-by: Kay Sievers
    Cc: David Woodhouse
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kay Sievers
     

21 May, 2008

1 commit

  • There is a race between creating a device with device_create() and
    setting its drvdata with a later call to dev_set_drvdata(): in that
    window a sysfs file can be opened while the drvdata is still NULL,
    causing all sorts of bad things to happen.

    This patch fixes the problem by using the new function,
    device_create_vargs().
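
    A sketch of the fixed path in bdi_register(), from memory: the drvdata
    pointer is handed to device_create_vargs() and is therefore set before
    any sysfs file belonging to the device can be opened.

        int bdi_register(struct backing_dev_info *bdi, struct device *parent,
                         const char *fmt, ...)
        {
                va_list args;
                struct device *dev;

                va_start(args, fmt);
                /* drvdata (bdi) is attached atomically with device creation */
                dev = device_create_vargs(bdi_class, parent, MKDEV(0, 0),
                                          bdi, fmt, args);
                va_end(args);
                if (IS_ERR(dev))
                        return PTR_ERR(dev);

                bdi->dev = dev;
                /* ... debugfs registration etc. ... */
                return 0;
        }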

    Many thanks to Arthur Jones for reporting the bug,
    and testing patches out.

    Cc: Kay Sievers
    Cc: Arthur Jones
    Cc: Peter Zijlstra
    Cc: Miklos Szeredi
    Signed-off-by: Greg Kroah-Hartman

    Greg Kroah-Hartman
     

30 Apr, 2008

4 commits

  • Move BDI statistics to debugfs:

    /sys/kernel/debug/bdi/<bdi>/stats

    Use postcore_initcall() to initialize the sysfs class and debugfs,
    because debugfs is initialized in core_initcall().

    Update descriptions in ABI documentation.
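
    The initcall ordering point, as an illustrative sketch (default_bdi_init()
    and bdi_debug_init() follow the mm/backing-dev.c names of that era, quoted
    from memory):

        static int __init default_bdi_init(void)
        {
                bdi_debug_init();  /* debugfs itself comes up at core_initcall time */
                return bdi_init(&default_backing_dev_info);
        }
        /* postcore_initcall runs after core_initcall, so debugfs is ready */
        postcore_initcall(default_bdi_init);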

    Signed-off-by: Miklos Szeredi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Miklos Szeredi
     
  • Add "max_ratio" to /sys/class/bdi. This indicates the maximum percentage of
    the global dirty threshold allocated to this bdi.

    [mszeredi@suse.cz]

    - fix parsing in max_ratio_store().
    - export bdi_set_max_ratio() to modules
    - limit bdi_dirty with bdi->max_ratio
    - document new sysfs attribute
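
    The exported helper behaves roughly as below (a sketch from memory of
    mm/page-writeback.c; the floating-proportion bookkeeping is omitted):

        int bdi_set_max_ratio(struct backing_dev_info *bdi, unsigned int max_ratio)
        {
                int ret = 0;

                if (max_ratio > 100)
                        return -EINVAL;

                spin_lock(&bdi_lock);
                if (bdi->min_ratio > max_ratio)
                        ret = -EINVAL;          /* cap must not drop below the floor */
                else
                        bdi->max_ratio = max_ratio;
                spin_unlock(&bdi_lock);

                return ret;
        }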

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Miklos Szeredi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     
  • Under normal circumstances each device is given a part of the total
    write-back cache proportional to its current average writeout speed
    relative to the other devices.

    min_ratio - allows one to assign a minimum portion of the write-back cache to
    a particular device. This is useful in situations where you might want to
    provide a minimum QoS. (One request for this feature came from flash based
    storage people who wanted to avoid writing out at all costs - they of course
    needed some pdflush hacks as well)

    max_ratio - allows one to assign a maximum portion of the dirty limit to a
    particular device. This is useful in situations where you want to avoid one
    device taking all or most of the write-back cache. Eg. an NFS mount that is
    prone to get stuck, or a FUSE mount which you don't trust to play fair.

    Add "min_ratio" to /sys/class/bdi. This indicates the minimum percentage of
    the global dirty threshold allocated to this bdi.

    [mszeredi@suse.cz]

    - fix parsing in min_ratio_store()
    - document new sysfs attribute
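
    Roughly, the two knobs bound a device's share of the global dirty
    threshold as sketched below. bdi_writeout_fraction() is the era's
    floating-proportion helper and bdi_min_ratio the global sum of reserved
    minimums; the arithmetic here is illustrative, not the exact
    get_dirty_limits() code.

        unsigned long bdi_dirty_share(struct backing_dev_info *bdi,
                                      unsigned long dirty)
        {
                long num, den;
                u64 share;

                /* this bdi's recent fraction of writeout completions */
                bdi_writeout_fraction(bdi, &num, &den);

                /* distribute everything above the reserved minimums proportionally */
                share = (u64)dirty * (100 - bdi_min_ratio) / 100;
                share = div64_u64(share * num, den);

                /* add this bdi's guaranteed floor, then apply its cap */
                share += (u64)dirty * bdi->min_ratio / 100;
                if (share > (u64)dirty * bdi->max_ratio / 100)
                        share = (u64)dirty * bdi->max_ratio / 100;

                return share;
        }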

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Miklos Szeredi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     
  • Provide a place in sysfs (/sys/class/bdi) for the backing_dev_info object.
    This allows us to see and set the various BDI specific variables.

    In particular this properly exposes the read-ahead window for all relevant
    users and /sys/block/<device>/queue/read_ahead_kb should be deprecated.

    With patient help from Kay Sievers and Greg KH

    [mszeredi@suse.cz]

    - split off NFS and FUSE changes into separate patches
    - document new sysfs attributes under Documentation/ABI
    - do bdi_class_init as a core_initcall, otherwise the "default" BDI
    won't be initialized
    - remove bdi_init_fmt macro, it's not used very much
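
    For example, the read-ahead window can be exposed as a bdi class device
    attribute along these lines (a sketch; the real attributes live in
    mm/backing-dev.c and may differ in detail):

        static ssize_t read_ahead_kb_show(struct device *dev,
                                          struct device_attribute *attr,
                                          char *page)
        {
                struct backing_dev_info *bdi = dev_get_drvdata(dev);

                return snprintf(page, PAGE_SIZE, "%lu\n",
                                bdi->ra_pages << (PAGE_SHIFT - 10));
        }

        static ssize_t read_ahead_kb_store(struct device *dev,
                                           struct device_attribute *attr,
                                           const char *buf, size_t count)
        {
                struct backing_dev_info *bdi = dev_get_drvdata(dev);
                unsigned long read_ahead_kb;

                if (strict_strtoul(buf, 10, &read_ahead_kb))
                        return -EINVAL;

                bdi->ra_pages = read_ahead_kb >> (PAGE_SHIFT - 10);
                return count;
        }

        static DEVICE_ATTR(read_ahead_kb, 0644,
                           read_ahead_kb_show, read_ahead_kb_store);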

    [akpm@linux-foundation.org: fix ia64 warning]
    Signed-off-by: Peter Zijlstra
    Cc: Kay Sievers
    Acked-by: Greg KH
    Cc: Trond Myklebust
    Signed-off-by: Miklos Szeredi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     

17 Oct, 2007

3 commits

  • Scale writeback cache per backing device, proportional to its writeout speed.

    By decoupling the BDI dirty thresholds a number of problems we currently have
    will go away, namely:

    - mutual interference starvation (for any number of BDIs);
    - deadlocks with stacked BDIs (loop, FUSE and local NFS mounts).

    It might be that all dirty pages are for a single BDI while other BDIs are
    idling. By giving each BDI a 'fair' share of the dirty limit, each one can have
    dirty pages outstanding and make progress.

    A global threshold also creates a deadlock for stacked BDIs; when A writes to
    B, and A generates enough dirty pages to get throttled, B will never start
    writeback until the dirty pages go away. Again, by giving each BDI its own
    'independent' dirty limit, this problem is avoided.

    So the problem is to determine how to distribute the total dirty limit
    across the BDIs fairly and efficiently. A BDI that has a large dirty limit
    but does not have any dirty pages outstanding is a waste.

    What is done is to keep a floating proportion between the BDIs based on
    writeback completions. This way faster/more active devices get a larger
    share than slower/idle devices.
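
    The idea can be sketched with a toy model (this is not the kernel's
    lib/proportions.c interface, just its shape): every writeback completion
    bumps the device's count, all counts decay periodically, and a device's
    share of the dirty limit follows its fraction of recent completions.

        /* toy model of a floating proportion across BDIs */
        struct toy_prop {
                unsigned long completions;
        };

        static void toy_completion(struct toy_prop *p)
        {
                p->completions++;               /* one writeout completed */
        }

        static void toy_decay(struct toy_prop *p, int nr)
        {
                int i;

                for (i = 0; i < nr; i++)
                        p[i].completions >>= 1;  /* periodic aging: old activity fades */
        }

        static unsigned long toy_share(struct toy_prop *p, unsigned long total,
                                       unsigned long dirty_limit)
        {
                if (!total)
                        return dirty_limit;     /* nothing written back yet */
                return dirty_limit * p->completions / total;
        }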

    [akpm@linux-foundation.org: fix warnings]
    [hugh@veritas.com: Fix occasional hang when a task couldn't get out of balance_dirty_pages]
    Signed-off-by: Peter Zijlstra
    Signed-off-by: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     
  • Provide scalable per backing_dev_info statistics counters.
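
    A sketch of the scheme: percpu_counter-backed statistics hanging off
    struct backing_dev_info (the item names follow the kernel of that era but
    are quoted from memory):

        enum bdi_stat_item {
                BDI_RECLAIMABLE,        /* dirty/unstable pages against this bdi */
                BDI_WRITEBACK,          /* pages currently under writeback */
                NR_BDI_STAT_ITEMS
        };

        struct backing_dev_info {
                /* ... existing fields ... */
                struct percpu_counter bdi_stat[NR_BDI_STAT_ITEMS];
        };

        static inline void inc_bdi_stat(struct backing_dev_info *bdi,
                                        enum bdi_stat_item item)
        {
                percpu_counter_add(&bdi->bdi_stat[item], 1);
        }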

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     
  • These patches aim to improve balance_dirty_pages() and directly address three
    issues:
    1) inter device starvation
    2) stacked device deadlocks
    3) inter process starvation

    1 and 2 are a direct result of removing the global dirty limit and using
    per-device dirty limits. By giving each device its own dirty limit, it
    will no longer starve another device, and the cyclic dependency on the
    dirty limit is broken.

    In order to efficiently distribute the dirty limit across the independent
    devices, a floating proportion is used; this allocates a share of the
    total limit proportional to the device's recent activity.

    3 is done by also scaling the dirty limit proportional to the current task's
    recent dirty rate.

    This patch:

    nfs: remove congestion_end(). It's redundant; clear_bdi_congested()
    already wakes the waiters.
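
    For reference, the reason congestion_end() is redundant:
    clear_bdi_congested() already wakes the congestion waitqueue, roughly as
    below (sketch from memory of mm/backing-dev.c; congestion_wqh is its
    per-direction waitqueue array):

        void clear_bdi_congested(struct backing_dev_info *bdi, int rw)
        {
                enum bdi_state bit;
                wait_queue_head_t *wqh = &congestion_wqh[rw];

                bit = (rw == WRITE) ? BDI_write_congested : BDI_read_congested;
                clear_bit(bit, &bdi->state);
                smp_mb__after_clear_bit();
                if (waitqueue_active(wqh))
                        wake_up(wqh);   /* no separate congestion_end() needed */
        }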

    Signed-off-by: Peter Zijlstra
    Cc: Trond Myklebust
    Cc: "J. Bruce Fields"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     

17 Mar, 2007

1 commit

  • The current NFS client congestion logic is severely broken: it marks the
    backing device congested during each nfs_writepages() call but doesn't
    mirror this in nfs_writepage(), which makes for deadlocks. It also
    implements its own waitqueue.

    Replace this by a more regular congestion implementation that puts a cap on
    the number of active writeback pages and uses the bdi congestion waitqueue.

    Also always use an interruptible wait since it makes sense to be able to
    SIGKILL the process even for mounts without 'intr'.
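
    Conceptually the cap works as sketched below; NFS_CONGESTION_ON_THRESH,
    NFS_CONGESTION_OFF_THRESH and the nfs_server fields are taken from memory
    of the era's nfs client and should be treated as assumptions.

        static void nfs_page_start_writeback(struct page *page)
        {
                struct nfs_server *nfss = NFS_SERVER(page->mapping->host);

                set_page_writeback(page);
                if (atomic_long_inc_return(&nfss->writeback) > NFS_CONGESTION_ON_THRESH)
                        set_bdi_congested(&nfss->backing_dev_info, WRITE);
        }

        static void nfs_page_end_writeback(struct page *page)
        {
                struct nfs_server *nfss = NFS_SERVER(page->mapping->host);

                end_page_writeback(page);
                if (atomic_long_dec_return(&nfss->writeback) < NFS_CONGESTION_OFF_THRESH)
                        clear_bdi_congested(&nfss->backing_dev_info, WRITE);
        }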

    Signed-off-by: Peter Zijlstra
    Acked-by: Trond Myklebust
    Cc: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     

21 Oct, 2006

1 commit

  • Separate out the concept of "queue congestion" from "backing-dev congestion".
    Congestion is a backing-dev concept, not a queue concept.

    The blk_* congestion functions are retained, as wrappers around the core
    backing-dev congestion functions.

    This proper layering is needed so that NFS can cleanly use the congestion
    functions, and so that CONFIG_BLOCK=n actually links.
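
    The retained block-layer entry points then become thin wrappers around the
    backing-dev implementation, along these lines (a sketch; at the time
    q->backing_dev_info was embedded in the queue rather than a pointer):

        void blk_clear_queue_congested(struct request_queue *q, int rw)
        {
                clear_bdi_congested(&q->backing_dev_info, rw);
        }

        void blk_set_queue_congested(struct request_queue *q, int rw)
        {
                set_bdi_congested(&q->backing_dev_info, rw);
        }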

    Cc: "Thomas Maier"
    Cc: "Jens Axboe"
    Cc: Trond Myklebust
    Cc: David Howells
    Cc: Peter Osterlund
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton