04 Jan, 2011

1 commit

  • Currently the size of the speculative preallocation during delayed
    allocation is fixed by either the allocsize mount option or a
    default size. We are seeing a lot of cases where we need to
    recommend using the allocsize mount option to prevent fragmentation
    when buffered writes land in the same AG.

    Rather than using a fixed preallocation size by default (up to 64k),
    make it dynamic by basing it on the current inode size. That way the
    EOF preallocation will increase as the file size increases. Hence
    for streaming writes we are much more likely to get large
    preallocations exactly when we need them to reduce fragmentation.

    For default settings, the size of the initial extents is determined
    by the number of parallel writers and the amount of memory in the
    machine. For 4GB RAM and 4 concurrent 32GB file writes:

    EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL
    0: [0..1048575]: 1048672..2097247 0 (1048672..2097247) 1048576
    1: [1048576..2097151]: 5242976..6291551 0 (5242976..6291551) 1048576
    2: [2097152..4194303]: 12583008..14680159 0 (12583008..14680159) 2097152
    3: [4194304..8388607]: 25165920..29360223 0 (25165920..29360223) 4194304
    4: [8388608..16777215]: 58720352..67108959 0 (58720352..67108959) 8388608
    5: [16777216..33554423]: 117440584..134217791 0 (117440584..134217791) 16777208
    6: [33554424..50331511]: 184549056..201326143 0 (184549056..201326143) 16777088
    7: [50331512..67108599]: 251657408..268434495 0 (251657408..268434495) 16777088

    and for 16 concurrent 16GB file writes:

    EXT: FILE-OFFSET BLOCK-RANGE AG AG-OFFSET TOTAL
    0: [0..262143]: 2490472..2752615 0 (2490472..2752615) 262144
    1: [262144..524287]: 6291560..6553703 0 (6291560..6553703) 262144
    2: [524288..1048575]: 13631592..14155879 0 (13631592..14155879) 524288
    3: [1048576..2097151]: 30408808..31457383 0 (30408808..31457383) 1048576
    4: [2097152..4194303]: 52428904..54526055 0 (52428904..54526055) 2097152
    5: [4194304..8388607]: 104857704..109052007 0 (104857704..109052007) 4194304
    6: [8388608..16777215]: 209715304..218103911 0 (209715304..218103911) 8388608
    7: [16777216..33554423]: 452984848..469762055 0 (452984848..469762055) 16777208

    Because it is hard to take back speculative preallocation, cases
    where there are large, slow-growing log files on a nearly full
    filesystem may cause premature ENOSPC. Hence as the filesystem nears
    full, the maximum dynamic prealloc size is reduced according to this
    table (based on a 4k block size):

    freespace       max prealloc size
      >5%           full extent (8GB)
      4-5%          2GB   (8GB >> 2)
      3-4%          1GB   (8GB >> 3)
      2-3%          512MB (8GB >> 4)
      1-2%          256MB (8GB >> 5)
      <1%           128MB (8GB >> 6)

    This should reduce the amount of space held in speculative
    preallocation for such cases.

    The allocsize mount option turns off the dynamic behaviour and fixes
    the prealloc size to whatever the mount option specifies, i.e. the
    existing behaviour is unchanged. (A simplified sketch of the sizing
    logic appears at the end of this entry.)

    Signed-off-by: Dave Chinner

    Dave Chinner
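
    The sizing policy above can be sketched in a few lines of userspace
    C. This is not the kernel code: the doubling-from-inode-size rule
    and the free-space cut-offs simply mirror the description and table
    in this entry (assuming a 4k block size), and every name below is
    made up for illustration.

    /* cc -o prealloc prealloc.c */
    #include <stdio.h>
    #include <stdint.h>

    #define BLOCK_SIZE  4096ULL
    #define MAX_EXTENT  ((8ULL << 30) / BLOCK_SIZE)  /* "full extent": 8GB */

    /*
     * Base the speculative preallocation on the current inode size so it
     * grows with the file, then clamp the maximum as free space runs out.
     */
    static uint64_t prealloc_blocks(uint64_t isize_bytes, double free_pct)
    {
        uint64_t alloc = isize_bytes / BLOCK_SIZE;  /* ~doubles the file */
        uint64_t max = MAX_EXTENT;
        int shift = 0;

        if (free_pct < 5.0) shift = 2;   /* 4-5% free -> 8GB >> 2 */
        if (free_pct < 4.0) shift = 3;
        if (free_pct < 3.0) shift = 4;
        if (free_pct < 2.0) shift = 5;
        if (free_pct < 1.0) shift = 6;   /* <1% free  -> 8GB >> 6 */
        max >>= shift;

        return alloc > max ? max : alloc;
    }

    int main(void)
    {
        uint64_t sizes[] = { 4ULL << 20, 1ULL << 30, 32ULL << 30 };
        double free_pct[] = { 50.0, 4.5, 0.5 };

        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                printf("isize %11llu bytes, %4.1f%% free -> %7llu blocks\n",
                       (unsigned long long)sizes[i], free_pct[j],
                       (unsigned long long)prealloc_blocks(sizes[i],
                                                           free_pct[j]));
        return 0;
    }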
     

16 Dec, 2010

1 commit

  • Now that we are using RCU protection for the inode cache lookups,
    the lock is only needed on the modification side. Hence it is not
    necessary for the lock to be a rwlock as there are no read side
    holders anymore. Convert it to a spin lock to reflect its exclusive
    nature.

    Signed-off-by: Dave Chinner
    Reviewed-by: Alex Elder
    Reviewed-by: Christoph Hellwig

    Dave Chinner
     

19 Oct, 2010

11 commits

  • Stop having two different names for many buffer functions and use
    the more descriptive xfs_buf_* names directly.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Christoph Hellwig
     
  • Update the per-cpu counters manually in xfs_trans_unreserve_and_mod_sb
    and remove support for per-cpu counters from xfs_mod_incore_sb_batch
    to simplify it. An added benefit is that we don't have to take
    m_sb_lock for transactions that only modify per-cpu counters.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Christoph Hellwig
     
  • Export xfs_icsb_modify_counters and always use it for modifying
    the per-cpu counters. Remove support for per-cpu counters from
    xfs_mod_incore_sb to simplify it.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Christoph Hellwig
     
  • Fail the mount if we can't allocate memory for the per-CPU counters.
    This is consistent with how we handle everything else in the mount
    path and makes the superblock counter modification a lot simpler.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Christoph Hellwig
     
  • The buffer cache hash is showing typical hash scalability problems.
    In large scale testing the number of cached items grows far larger
    than the hash can efficiently handle. Hence we need to move to a
    self-scaling cache indexing mechanism.

    I have selected rbtrees for indexing because they can have O(log n)
    search scalability, and insert and remove cost is not excessive,
    even on large trees. Hence we should be able to cache large numbers
    of buffers without incurring the excessive cache miss search
    penalties that the hash is imposing on us.

    To ensure we still have parallel access to the cache, we need
    multiple trees. Rather than hashing the buffers by disk address to
    select a tree, it seems more sensible to separate trees by typical
    access patterns. Most operations use buffers from within a single AG
    at a time, so rather than searching lots of different lists,
    separate the buffer indexes out into per-AG rbtrees. This means that
    searches during metadata operations have a much higher chance of
    hitting cache resident nodes, and that updates of the tree are less
    likely to disturb trees being accessed on other CPUs doing
    independent operations. (A simplified sketch of the per-AG indexing
    appears at the end of this entry.)

    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig
    Reviewed-by: Alex Elder

    Dave Chinner
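
    A minimal userspace sketch of the per-AG indexing referenced above.
    A plain (unbalanced) binary search tree keyed by disk block number
    stands in for the kernel rbtree, a pthread mutex per AG stands in
    for the per-AG lock, and the AG geometry and names are invented.

    /* cc -pthread -o agindex agindex.c */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>
    #include <pthread.h>

    struct buf {
        uint64_t    blkno;
        struct buf  *left, *right;
    };

    struct perag_buf_index {
        pthread_mutex_t lock;   /* protects this AG's tree only */
        struct buf      *root;  /* one index tree per AG */
    };

    #define NUM_AGS    4
    #define AG_BLOCKS  1048576ULL  /* made-up blocks-per-AG geometry */

    static struct perag_buf_index ag_index[NUM_AGS];

    /* Map a disk block to its AG, then search only that AG's tree. */
    static struct buf *buf_lookup(uint64_t blkno)
    {
        struct perag_buf_index *pag = &ag_index[blkno / AG_BLOCKS];
        struct buf *b;

        pthread_mutex_lock(&pag->lock);
        b = pag->root;
        while (b && b->blkno != blkno)
            b = blkno < b->blkno ? b->left : b->right;
        pthread_mutex_unlock(&pag->lock);
        return b;
    }

    static void buf_insert(struct buf *new)
    {
        struct perag_buf_index *pag = &ag_index[new->blkno / AG_BLOCKS];
        struct buf **p;

        pthread_mutex_lock(&pag->lock);
        for (p = &pag->root; *p; )
            p = new->blkno < (*p)->blkno ? &(*p)->left : &(*p)->right;
        *p = new;
        pthread_mutex_unlock(&pag->lock);
    }

    int main(void)
    {
        for (int i = 0; i < NUM_AGS; i++)
            pthread_mutex_init(&ag_index[i].lock, NULL);

        struct buf *b = calloc(1, sizeof(*b));
        b->blkno = 2100000;  /* lands in AG 2 of this toy layout */
        buf_insert(b);
        printf("lookup %llu -> %s\n", (unsigned long long)b->blkno,
               buf_lookup(b->blkno) ? "hit" : "miss");
        return 0;
    }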
     
  • Memory reclaim via shrinkers has a terrible habit of having N+M
    concurrent shrinker executions (N = num CPUs, M = num kswapds) all
    trying to shrink the same cache. When the cache they are all working
    on is protected by a single spinlock, massive contention and
    slowdowns occur.

    Wrap the per-ag inode caches with a reclaim mutex to serialise
    reclaim access to the AG. This will block concurrent reclaim in each
    AG but still allow reclaim to scan multiple AGs concurrently. Allow
    shrinkers to move on to the next AG if they can't get the lock, and
    if we can't get any AG, then start blocking on locks.

    To prevent reclaimers from continually scanning the same inodes in
    each AG, add a cursor that tracks where the last reclaim got up to
    and start from that point on the next reclaim. This should avoid
    only ever scanning a small number of inodes at the start of each AG
    and not making progress. If we have a non-shrinker based reclaim
    pass, ignore the cursor and reset it to zero once we are done. (A
    simplified sketch of this scheme appears at the end of this entry.)

    Signed-off-by: Dave Chinner
    Reviewed-by: Alex Elder

    Dave Chinner
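
    The trylock-then-block walk and the per-AG cursor can be sketched as
    below. This only illustrates the scheme described in this entry, not
    the kernel code: the AG count, batch size and names are made up, and
    the actual inode scan is reduced to a printf.

    /* cc -pthread -o reclaim reclaim.c */
    #include <stdio.h>
    #include <stdbool.h>
    #include <pthread.h>

    #define NUM_AGS        4
    #define INODES_PER_AG  64
    #define SCAN_BATCH     16

    struct perag_reclaim {
        pthread_mutex_t reclaim_lock;   /* serialises reclaim in one AG */
        unsigned int    reclaim_cursor; /* where the last scan stopped */
    };

    static struct perag_reclaim ags[NUM_AGS];

    static void reclaim_ag(struct perag_reclaim *pag, int agno, bool shrinker)
    {
        /* a non-shrinker (full) pass ignores the cursor... */
        unsigned int start = shrinker ? pag->reclaim_cursor : 0;

        printf("AG %d: scanning %d inodes from index %u\n",
               agno, SCAN_BATCH, start);

        /* ...and resets it to zero once it is done */
        pag->reclaim_cursor = shrinker ?
            (start + SCAN_BATCH) % INODES_PER_AG : 0;
    }

    static void reclaim_pass(bool shrinker)
    {
        bool done[NUM_AGS] = { false };
        int ag;

        /* First sweep: skip any AG another reclaimer already holds. */
        for (ag = 0; ag < NUM_AGS; ag++) {
            if (pthread_mutex_trylock(&ags[ag].reclaim_lock) != 0)
                continue;
            reclaim_ag(&ags[ag], ag, shrinker);
            pthread_mutex_unlock(&ags[ag].reclaim_lock);
            done[ag] = true;
        }

        /* Second sweep: nothing was free, so block on the missed AGs. */
        for (ag = 0; ag < NUM_AGS; ag++) {
            if (done[ag])
                continue;
            pthread_mutex_lock(&ags[ag].reclaim_lock);
            reclaim_ag(&ags[ag], ag, shrinker);
            pthread_mutex_unlock(&ags[ag].reclaim_lock);
        }
    }

    int main(void)
    {
        for (int i = 0; i < NUM_AGS; i++)
            pthread_mutex_init(&ags[i].reclaim_lock, NULL);

        reclaim_pass(true);  /* shrinker pass: cursors advance */
        reclaim_pass(true);  /* next pass resumes where the last stopped */
        return 0;
    }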
     
  • The reclaim walk requires different locking and has a slightly
    different walk algorithm, so separate it out so that it can be
    optimised separately.

    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig
    Reviewed-by: Alex Elder

    Dave Chinner
     
  • When we are checking we can access the last block of each device, we
    do not need to use cached buffers as they will be tossed away
    immediately. Use uncached buffers for size checks so that all IO
    prior to full in-memory structure initialisation does not use the
    buffer cache.

    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig
    Reviewed-by: Alex Elder

    Dave Chinner
     
  • Filesystem level managed buffers are buffers that have their
    lifecycle controlled by the filesystem layer, not the buffer cache.
    We currently cache these buffers, which makes cleanup and cache
    walking somewhat troublesome. Convert the fs managed buffers to
    uncached buffers obtained via xfs_buf_get_uncached(), and remove
    the XBF_FS_MANAGED special cases from the buffer cache.

    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig
    Reviewed-by: Alex Elder

    Dave Chinner
     
  • When we start taking a reference to the per-ag for every cached
    buffer in the system, kernel lockstat profiling on an 8-way create
    workload shows the mp->m_perag_lock has higher acquisition rates
    than the inode lock and has significantly more contention. That is,
    it becomes the highest contended lock in the system.

    The perag lookup is trivial to convert to lock-less RCU lookups
    because perag structures never go away. Hence the only thing we need
    to protect against is tree structure changes during a grow. This can
    be done simply by replacing the locking in xfs_perag_get() with RCU
    read locking. This removes the mp->m_perag_lock completely from this
    path. (A simplified sketch of the lockless lookup appears at the end
    of this entry.)

    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig
    Reviewed-by: Alex Elder

    Dave Chinner
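
    A userspace approximation of the lockless lookup described above.
    Because perag structures are never freed, readers only need to see a
    consistently published pointer; here C11 atomics and a fixed-size
    table stand in for the kernel's RCU read lock and radix tree, and
    all names are invented.

    /* cc -std=c11 -pthread -o peraglookup peraglookup.c */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stdatomic.h>
    #include <pthread.h>

    #define MAX_AGS 64

    struct perag {
        unsigned int  agno;
        atomic_int    ref;
    };

    static _Atomic(struct perag *) perag_table[MAX_AGS];
    static pthread_mutex_t perag_grow_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Reader side: no lock, just an acquire load plus a reference bump. */
    static struct perag *perag_get(unsigned int agno)
    {
        struct perag *pag = atomic_load_explicit(&perag_table[agno],
                                                 memory_order_acquire);
        if (pag)
            atomic_fetch_add(&pag->ref, 1);
        return pag;
    }

    static void perag_put(struct perag *pag)
    {
        atomic_fetch_sub(&pag->ref, 1);
    }

    /* Writer side (grow): serialised, publishes initialised structures. */
    static void perag_insert(unsigned int agno)
    {
        struct perag *pag = calloc(1, sizeof(*pag));

        pag->agno = agno;
        pthread_mutex_lock(&perag_grow_lock);
        atomic_store_explicit(&perag_table[agno], pag, memory_order_release);
        pthread_mutex_unlock(&perag_grow_lock);
    }

    int main(void)
    {
        perag_insert(0);
        perag_insert(1);

        struct perag *pag = perag_get(1);
        printf("got AG %u, refcount now %d\n",
               pag->agno, atomic_load(&pag->ref));
        perag_put(pag);
        return 0;
    }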
     
  • When we start taking references per cached buffer to the perag it
    is cached on, it will blow the current debug maximum reference
    count assert out of the water. The assert has never caught a bug,
    and we have tracing to track changes if there ever is a problem,
    so just remove it.

    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig
    Reviewed-by: Alex Elder

    Dave Chinner
     

27 Jul, 2010

2 commits

  • Signed-off-by: Christoph Hellwig
    Reviewed-by: Dave Chinner

    Christoph Hellwig
     
  • Dmapi support was never merged upstream, but we still have a lot of hooks
    bloating XFS for it, all over the fast paths of the filesystem.

    This patch drops over 700 lines of dmapi overhead. If we ever get HSM
    support in mainline at least the namespace events can be done much saner
    in the VFS instead of the individual filesystem, so it's not like this
    is much help for future work.

    Signed-off-by: Christoph Hellwig
    Reviewed-by: Dave Chinner

    Christoph Hellwig
     

24 Jun, 2010

1 commit

  • The block number comes from bulkstat based inode lookups to shortcut
    the mapping calculations. We are not able to trust anything from
    bulkstat, so drop the block number as well so that the correct
    lookups and mappings are always done.

    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig

    Dave Chinner
     

29 May, 2010

3 commits

  • If a filesystem is mounted without the inode64 mount option we
    should still be able to access inodes not fitting into 32 bits, just
    not create new ones. For this to work we need to make sure the
    inode cache radix tree is initialized for all allocation groups, not
    just those we plan to allocate inodes from. This patch makes sure
    we initialize the inode cache radix tree for all allocation groups,
    and also cleans xfs_initialize_perag up a bit to separate the
    inode32 logic from the general perag structure setup.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Christoph Hellwig
     
  • The use of radix_tree_preload() only works if the radix tree was
    initialised without the __GFP_WAIT flag. The per-ag tree uses
    GFP_NOFS, so does not trigger allocation of new tree nodes from the
    preloaded array. Hence it enters the allocator with a spinlock held
    and triggers the might_sleep() warnings.

    Reported-by: Chris Mason
    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Dave Chinner
     
  • Many places in the xfs code return E2BIG when they really mean
    EFBIG; trying to grow past 16T on a 32 bit machine, for example,
    says "Argument list too long" rather than "File too large" which is
    not particularly helpful.

    Some of these don't make perfect sense as EFBIG either, but they are
    still better than E2BIG IMHO. (See the small example at the end of
    this entry.)

    Signed-off-by: Eric Sandeen
    Reviewed-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Eric Sandeen
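
    For reference, the difference users actually see between the two
    error codes (the exact strings are libc-dependent):

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>

    int main(void)
    {
        printf("E2BIG: %s\n", strerror(E2BIG));  /* "Argument list too long" */
        printf("EFBIG: %s\n", strerror(EFBIG));  /* "File too large" */
        return 0;
    }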
     

06 Mar, 2010

1 commit

  • The current default size of the reserved blocks pool is easy to deplete
    with certain workloads, in particular workloads that do lots of concurrent
    delayed allocation extent conversions. If enough transactions are running
    in parallel and the entire pool is consumed then subsequent calls to
    xfs_trans_reserve() will fail with ENOSPC. Hence increase the default
    size of the reserved blocks pool, and also add a rate limited warning
    so we know if this starts happening again.

    This is an updated version of an old patch from Lachlan McIlroy.

    Signed-off-by: Dave Chinner
    Signed-off-by: Alex Elder

    Dave Chinner
     

02 Mar, 2010

1 commit

  • Currently the code to implement the file operations is split over
    two small files. Merge the content of xfs_lrw.c into xfs_file.c to
    have it in one place. Note that I haven't done various cleanups
    that are possible after this yet; they will follow in the next
    patch. Also, the function xfs_dev_is_read_only, which was in
    xfs_lrw.c before, really doesn't fit in here at all and was moved to
    xfs_mount.c.

    Signed-off-by: Christoph Hellwig
    Reviewed-by: Dave Chinner
    Signed-off-by: Alex Elder

    Christoph Hellwig
     

09 Feb, 2010

1 commit

  • This mangles the reserved blocks counts a little more.

    1) add a helper function for the default reserved count
    2) add helper functions to save/restore counts on ro/rw
    3) save/restore reserved blocks on freeze/thaw
    4) disallow changing reserved count while readonly

    V2: changed field name to match Dave's changes

    Signed-off-by: Eric Sandeen
    Signed-off-by: Alex Elder

    Eric Sandeen
     

06 Feb, 2010

1 commit

  • We currently do background inode flush asynchronously, resulting in
    inodes being written in whatever order the background writeback
    issues them. Not only that, there are also blocking and non-blocking
    asynchronous inode flushes, depending on where the flush comes from.

    This patch completely removes asynchronous inode writeback. It
    removes all the strange writeback modes and replaces them with
    either a synchronous flush or a non-blocking delayed write flush.
    That is, inode flushes will only issue IO directly if they are
    synchronous, and background flushing may do nothing if the operation
    would block (e.g. on a pinned inode or buffer lock). (A simplified
    sketch of this decision appears at the end of this entry.)

    Delayed write flushes will now result in the inode buffer sitting in
    the delwri queue of the buffer cache to be flushed by either an AIL
    push or by the xfsbufd timing out the buffer. This will allow
    accumulation of dirty inode buffers in memory and allow optimisation
    of inode cluster writeback at the xfsbufd level where we have much
    greater queue depths than the block layer elevators. We will also
    get adjacent inode cluster buffer IO merging for free when a later
    patch in the series allows sorting of the delayed write buffers
    before dispatch.

    This effectively means that any inode that is written back by
    background writeback will be seen as flush locked during AIL
    pushing, and will result in the buffers being pushed from there.
    This writeback path is currently non-optimal, but the next patch
    in the series will fix that problem.

    A side effect of this delayed write mechanism is that background
    inode reclaim will no longer directly flush inodes, nor can it wait
    on the flush lock. The result is that inode reclaim must leave the
    inode in the reclaimable state until it is clean. Hence attempts to
    reclaim a dirty inode in the background will simply skip the inode
    until it is clean and this allows other mechanisms (i.e. xfsbufd) to
    do more optimal writeback of the dirty buffers. As a result, the
    inode reclaim code has been rewritten so that it no longer relies on
    the ambiguous return values of xfs_iflush() to determine whether it
    is safe to reclaim an inode.

    Portions of this patch are derived from patches by Christoph
    Hellwig.

    Version 2:
    - cleanup reclaim code as suggested by Christoph
    - log background reclaim inode flush errors
    - just pass sync flags to xfs_iflush

    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig

    Dave Chinner
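
    The flush policy described above boils down to a small decision
    function. The sketch below is purely illustrative: the state flags,
    return values and names are invented, and the real logic lives in
    xfs_iflush() and the inode reclaim code.

    #include <stdio.h>
    #include <stdbool.h>

    enum flush_result { FLUSH_DONE, FLUSH_QUEUED, FLUSH_SKIPPED };

    struct inode_state {
        bool dirty;
        bool pinned;      /* still pinned in the log */
        bool buf_locked;  /* cluster buffer held by someone else */
    };

    static enum flush_result flush_inode(struct inode_state *ip, bool sync)
    {
        if (!ip->dirty)
            return FLUSH_DONE;

        if (sync) {
            /* synchronous flush: wait out the pin/lock and write now */
            ip->pinned = ip->buf_locked = false;
            ip->dirty = false;
            return FLUSH_DONE;
        }

        /* background flush: never block */
        if (ip->pinned || ip->buf_locked)
            return FLUSH_SKIPPED;  /* stays reclaimable until clean */

        /* delayed write: park the buffer on the delwri queue instead */
        return FLUSH_QUEUED;
    }

    int main(void)
    {
        static const char *name[] = { "done", "queued", "skipped" };
        struct inode_state pinned = { .dirty = true, .pinned = true };
        struct inode_state writable = { .dirty = true };

        printf("background, pinned   : %s\n", name[flush_inode(&pinned, false)]);
        printf("background, unpinned : %s\n", name[flush_inode(&writable, false)]);
        printf("sync, pinned         : %s\n", name[flush_inode(&pinned, true)]);
        return 0;
    }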
     

22 Jan, 2010

2 commits

  • Remove the XFS_LOG_FORCE argument which was always set, and the
    XFS_LOG_URGE define, which was never used.

    Split xfs_log_force into two helpers - xfs_log_force, which forces
    the whole log, and xfs_log_force_lsn, which forces up to the
    specified LSN. The underlying implementations already were entirely
    separate, as were the users.

    Also re-indent the new _xfs_log_force/_xfs_log_force which
    previously had a weird coding style.

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Christoph Hellwig
     
  • Currently we define aliases for the buffer flags in various
    namespaces, which only adds confusion. Remove all but the XBF_
    flags to clean this up a bit.

    Note that we still abuse XFS_B_ASYNC/XBF_ASYNC for some non-buffer
    uses, but I'll clean that up later.

    Signed-off-by: Christoph Hellwig
    Reviewed-by: Dave Chinner
    Signed-off-by: Alex Elder

    Christoph Hellwig
     

16 Jan, 2010

6 commits

  • Now that the perag structure is allocated memory rather than held in
    an array, we don't need to have the busy extent array external to
    the structure. Embed it into the perag structure to avoid needing an
    extra allocation when setting up.

    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Dave Chinner
     
  • Add proper error handling in case an error occurs while initializing
    new perag structures for a mount point. The mount structure is
    restored to its previous state by deleting and freeing any perag
    structures added during the call. (A sketch of this unwind pattern
    appears at the end of this entry.)

    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Dave Chinner
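
    The unwind pattern reads roughly like this made-up userspace
    rendering (the real code walks the perag radix tree rather than a
    plain pointer array):

    #include <stdio.h>
    #include <stdlib.h>
    #include <errno.h>

    struct perag { unsigned int agno; };

    static int init_perags(struct perag **table, unsigned int first,
                           unsigned int count)
    {
        unsigned int i;

        for (i = first; i < first + count; i++) {
            table[i] = calloc(1, sizeof(struct perag));
            if (!table[i])
                goto out_unwind;
            table[i]->agno = i;
        }
        return 0;

    out_unwind:
        /* free only what this call added, leaving the rest untouched */
        while (i-- > first) {
            free(table[i]);
            table[i] = NULL;
        }
        return -ENOMEM;
    }

    int main(void)
    {
        struct perag *table[8] = { NULL };

        if (init_perags(table, 0, 8) == 0)
            printf("initialised 8 per-AG structures\n");
        return 0;
    }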
     
  • Uninline xfs_perag_{get,put} so that tracepoints can be inserted
    into them to speed debugging of reference count problems.

    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Dave Chinner
     
  • Reference count the per-ag structures to ensure that we keep get/put
    pairs balanced. Assert that the reference counts are zero at unmount
    time to catch leaks. In future, reference counts will enable us to
    safely remove perag structures by allowing us to detect when they
    are no longer in use. (A small get/put sketch appears at the end of
    this entry.)

    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Dave Chinner
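
    A small sketch of the get/put discipline with the unmount-time leak
    check; C11 atomics and the names here are stand-ins, not the kernel
    code.

    /* cc -std=c11 -o pagref pagref.c */
    #include <assert.h>
    #include <stdatomic.h>
    #include <stdio.h>

    struct perag {
        unsigned int  agno;
        atomic_int    pag_ref;
    };

    static struct perag *perag_get(struct perag *pag)
    {
        atomic_fetch_add(&pag->pag_ref, 1);
        return pag;
    }

    static void perag_put(struct perag *pag)
    {
        int old = atomic_fetch_sub(&pag->pag_ref, 1);
        assert(old > 0);  /* catches unbalanced puts immediately */
    }

    static void unmount_check(struct perag *pags, int nags)
    {
        for (int i = 0; i < nags; i++)  /* leaked gets show up here */
            assert(atomic_load(&pags[i].pag_ref) == 0);
    }

    int main(void)
    {
        struct perag ags[2] = { { .agno = 0 }, { .agno = 1 } };

        struct perag *pag = perag_get(&ags[1]);
        printf("using AG %u\n", pag->agno);
        perag_put(pag);

        unmount_check(ags, 2);
        printf("all perag references balanced at unmount\n");
        return 0;
    }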
     
  • The use of an array for the per-ag structures requires reallocation
    of the array when growing the filesystem. This requires locking
    access to the array to avoid use after free situations, and the
    locking is difficult to get right. To avoid needing to reallocate an
    array, change the per-ag structures to an allocated object per ag
    and index them using a tree structure.

    The AGs are always densely indexed (hence the use of an array), but
    the number supported is 2^32 and lookups tend to be random and hence
    indexing needs to scale. A simple choice is a radix tree - it works
    well with this sort of index. This change also removes another
    large contiguous allocation from the mount/growfs path in XFS.

    The growing process now needs to change to only initialise the new
    AGs required for the extra space, and as such only needs to
    exclusively lock the tree for inserts. The rest of the code only
    needs to lock the tree while doing lookups, and hence this will
    remove all the deadlocks that currently occur on the m_perag_lock as
    it is now an innermost lock. The lock is also changed to a spinlock
    from a read/write lock as the hold time is now extremely short.

    To complete the picture, the per-ag structures will need to be
    reference counted to ensure that we don't free/modify them while
    they are still in use. This will be done in a subsequent patch.

    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Dave Chinner
     
  • Convert the remaining direct lookups of the per ag structures to use
    get/put accesses. Ensure that the loops across AGs and prior users
    of the interface balance gets and puts correctly.

    Signed-off-by: Dave Chinner
    Reviewed-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Dave Chinner
     

15 Dec, 2009

2 commits

  • Convert the old xfs tracing support that could only be used with the
    out of tree kdb and xfsidbg patches to use the generic event tracer.

    To use it make sure CONFIG_EVENT_TRACING is enabled and then enable
    all xfs trace channels by:

    echo 1 > /sys/kernel/debug/tracing/events/xfs/enable

    or alternatively enable single events by just doing the same in one
    event subdirectory, e.g.

    echo 1 > /sys/kernel/debug/tracing/events/xfs/xfs_ihold/enable

    or set more complex filters, etc. In Documentation/trace/events.txt
    all this is described in more detail. To read the events, do a

    cat /sys/kernel/debug/tracing/trace

    Compared to the last posting this patch converts the tracing mostly to
    the one tracepoint per callsite model that other users of the new
    tracing facility also employ. This allows a very fine-grained control
    of the tracing, a cleaner output of the traces and also enables the
    perf tool to use each tracepoint as a virtual performance counter,
    allowing us to e.g. count how often certain workloads hit various
    spots in XFS. Take a look at

    http://lwn.net/Articles/346470/

    for some examples.

    Also the btree tracing isn't included at all yet, as it will require
    additional core tracing features not in mainline yet; I plan to
    deliver it later.

    And the really nice thing about this patch is that it actually removes
    many lines of code while adding this nice functionality:

    fs/xfs/Makefile | 8
    fs/xfs/linux-2.6/xfs_acl.c | 1
    fs/xfs/linux-2.6/xfs_aops.c | 52 -
    fs/xfs/linux-2.6/xfs_aops.h | 2
    fs/xfs/linux-2.6/xfs_buf.c | 117 +--
    fs/xfs/linux-2.6/xfs_buf.h | 33
    fs/xfs/linux-2.6/xfs_fs_subr.c | 3
    fs/xfs/linux-2.6/xfs_ioctl.c | 1
    fs/xfs/linux-2.6/xfs_ioctl32.c | 1
    fs/xfs/linux-2.6/xfs_iops.c | 1
    fs/xfs/linux-2.6/xfs_linux.h | 1
    fs/xfs/linux-2.6/xfs_lrw.c | 87 --
    fs/xfs/linux-2.6/xfs_lrw.h | 45 -
    fs/xfs/linux-2.6/xfs_super.c | 104 ---
    fs/xfs/linux-2.6/xfs_super.h | 7
    fs/xfs/linux-2.6/xfs_sync.c | 1
    fs/xfs/linux-2.6/xfs_trace.c | 75 ++
    fs/xfs/linux-2.6/xfs_trace.h | 1369 +++++++++++++++++++++++++++++++++++++++++
    fs/xfs/linux-2.6/xfs_vnode.h | 4
    fs/xfs/quota/xfs_dquot.c | 110 ---
    fs/xfs/quota/xfs_dquot.h | 21
    fs/xfs/quota/xfs_qm.c | 40 -
    fs/xfs/quota/xfs_qm_syscalls.c | 4
    fs/xfs/support/ktrace.c | 323 ---------
    fs/xfs/support/ktrace.h | 85 --
    fs/xfs/xfs.h | 16
    fs/xfs/xfs_ag.h | 14
    fs/xfs/xfs_alloc.c | 230 +-----
    fs/xfs/xfs_alloc.h | 27
    fs/xfs/xfs_alloc_btree.c | 1
    fs/xfs/xfs_attr.c | 107 ---
    fs/xfs/xfs_attr.h | 10
    fs/xfs/xfs_attr_leaf.c | 14
    fs/xfs/xfs_attr_sf.h | 40 -
    fs/xfs/xfs_bmap.c | 507 +++------------
    fs/xfs/xfs_bmap.h | 49 -
    fs/xfs/xfs_bmap_btree.c | 6
    fs/xfs/xfs_btree.c | 5
    fs/xfs/xfs_btree_trace.h | 17
    fs/xfs/xfs_buf_item.c | 87 --
    fs/xfs/xfs_buf_item.h | 20
    fs/xfs/xfs_da_btree.c | 3
    fs/xfs/xfs_da_btree.h | 7
    fs/xfs/xfs_dfrag.c | 2
    fs/xfs/xfs_dir2.c | 8
    fs/xfs/xfs_dir2_block.c | 20
    fs/xfs/xfs_dir2_leaf.c | 21
    fs/xfs/xfs_dir2_node.c | 27
    fs/xfs/xfs_dir2_sf.c | 26
    fs/xfs/xfs_dir2_trace.c | 216 ------
    fs/xfs/xfs_dir2_trace.h | 72 --
    fs/xfs/xfs_filestream.c | 8
    fs/xfs/xfs_fsops.c | 2
    fs/xfs/xfs_iget.c | 111 ---
    fs/xfs/xfs_inode.c | 67 --
    fs/xfs/xfs_inode.h | 76 --
    fs/xfs/xfs_inode_item.c | 5
    fs/xfs/xfs_iomap.c | 85 --
    fs/xfs/xfs_iomap.h | 8
    fs/xfs/xfs_log.c | 181 +----
    fs/xfs/xfs_log_priv.h | 20
    fs/xfs/xfs_log_recover.c | 1
    fs/xfs/xfs_mount.c | 2
    fs/xfs/xfs_quota.h | 8
    fs/xfs/xfs_rename.c | 1
    fs/xfs/xfs_rtalloc.c | 1
    fs/xfs/xfs_rw.c | 3
    fs/xfs/xfs_trans.h | 47 +
    fs/xfs/xfs_trans_buf.c | 62 -
    fs/xfs/xfs_vnodeops.c | 8
    70 files changed, 2151 insertions(+), 2592 deletions(-)

    Signed-off-by: Christoph Hellwig
    Signed-off-by: Alex Elder

    Christoph Hellwig
     
  • * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (34 commits)
    m68k: rename global variable vmalloc_end to m68k_vmalloc_end
    percpu: add missing per_cpu_ptr_to_phys() definition for UP
    percpu: Fix kdump failure if booted with percpu_alloc=page
    percpu: make misc percpu symbols unique
    percpu: make percpu symbols in ia64 unique
    percpu: make percpu symbols in powerpc unique
    percpu: make percpu symbols in x86 unique
    percpu: make percpu symbols in xen unique
    percpu: make percpu symbols in cpufreq unique
    percpu: make percpu symbols in oprofile unique
    percpu: make percpu symbols in tracer unique
    percpu: make percpu symbols under kernel/ and mm/ unique
    percpu: remove some sparse warnings
    percpu: make alloc_percpu() handle array types
    vmalloc: fix use of non-existent percpu variable in put_cpu_var()
    this_cpu: Use this_cpu_xx in trace_functions_graph.c
    this_cpu: Use this_cpu_xx for ftrace
    this_cpu: Use this_cpu_xx in nmi handling
    this_cpu: Use this_cpu operations in RCU
    this_cpu: Use this_cpu ops for VM statistics
    ...

    Fix up trivial (famous last words) global per-cpu naming conflicts in
    arch/x86/kvm/svm.c
    mm/slab.c

    Linus Torvalds
     

12 Dec, 2009

1 commit

  • Remove our own STATIC_INLINE macro. For small functions inside
    implementation files just use STATIC and let gcc inline them, and for
    those in headers do the normal static inline - they are all small
    enough to be inlined for debug builds, too.

    Signed-off-by: Christoph Hellwig
    Reviewed-by: Dave Chinner
    Signed-off-by: Alex Elder

    Christoph Hellwig