23 Jun, 2006

5 commits

  • The comment states: 'Setting a tag on a not-present item is a BUG.' Hence,
    if 'index' is larger than the maxindex, the item _cannot_ be present; it
    should also be a BUG.

    Also, this allows the following statement (assume a fresh tree):

    radix_tree_tag_set(root, 16, 1);

    to fail silently, but when preceded by:

    radix_tree_insert(root, 32, item);

    it would BUG, because the height has been extended by the insert.

    In neither case was 16 present.

    Signed-off-by: Peter Zijlstra
    Acked-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Peter Zijlstra
     
  • Reduce radix tree node memory usage by about a factor of 4 for small files
    (< 64K). There are pointer traversal and memory usage costs for large
    files with dense pagecache.

    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • The ability to have height 0 radix trees (a direct pointer to the data item
    rather than going through a full node->slot) quietly disappeared with
    old-2.6-bkcvs commit ffee171812d51652f9ba284302d9e5c5cc14bdfd. On 64-bit
    machines this causes nearly 600 bytes to be used for every <= 4K file in
    pagecache.

    Simplify radix_tree_delete's complex tag clearing arrangement (which would
    become even more complex) by just falling back to tag clearing functions
    (the pagecache radix-tree never uses this path anyway, so the icache
    savings will mean it's actually a speedup).

    On my 4GB G5, this saves 8MB RAM per kernel source+object tree in
    pagecache.

    Pagecache lookup, insertion, and removal speed for small files will also be
    improved.

    This makes RCU radix tree harder, but it's worth it.

    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Modify the gen_pool allocator (lib/genalloc.c) to utilize a bitmap scheme
    instead of the buddy scheme. The purpose of this change is to eliminate
    the touching of the actual memory being allocated.

    Since the change modifies the interface, a change to the uncached allocator
    (arch/ia64/kernel/uncached.c) is also required.

    Both Andrey Volkov and Jes Sorenson have expressed a desire that the
    gen_pool allocator not write to the memory being managed. See the
    following:

    http://marc.theaimsgroup.com/?l=linux-kernel&m=113518602713125&w=2
    http://marc.theaimsgroup.com/?l=linux-kernel&m=113533568827916&w=2

    Signed-off-by: Dean Nelson
    Cc: Andrey Volkov
    Acked-by: Jes Sorensen
    Cc: "Luck, Tony"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dean Nelson
     
  • Upgrade the zlib_inflate implementation in the kernel from a patched
    version 1.1.3/4 to a patched 1.2.3.

    The code in the kernel is about seven years old and I noticed that the
    external zlib library's inflate performance was significantly faster (~50%)
    than the code in the kernel on ARM (and faster again on x86_32).

    For comparison, the newer deflate code is 20% slower on ARM and 50% slower
    on x86_32, but gives an approx. 1% compression ratio improvement. I don't
    consider this an improvement for kernel use, so I have no plans to change
    the zlib_deflate code.

    Various changes have been made to the zlib code in the kernel, the most
    significant being the extra functions/flush option used by ppp_deflate.
    This update reimplements the features PPP needs to ensure it continues to
    work.

    This code has been tested on ARM under both JFFS2 (with zlib compression
    enabled) and ppp_deflate and on x86_32. JFFS2 sees an approx. 10% real
    world file read speed improvement.

    This patch also removes ZLIB_VERSION as it no longer has a correct value.
    We don't need version checks anyway as the kernel's module handling will
    take care of that for us. This removal is also more in keeping with the
    zlib author's wishes (http://www.zlib.net/zlib_faq.html#faq24) and I've
    added something to the zlib.h header to note it's a modified version.

    Signed-off-by: Richard Purdie
    Acked-by: Joern Engel
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Richard Purdie
     

22 Jun, 2006

1 commit


21 Jun, 2006

2 commits

  • Introduce __iowrite64_copy. It will be used by the Myri-10G Ethernet
    driver to post requests to the NIC. This driver will be submitted soon.

    __iowrite64_copy copies to I/O memory in units of 64 bits when possible (on
    64-bit architectures). It falls back to __iowrite32_copy on 32-bit
    architectures.

    Signed-off-by: Brice Goglin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Brice Goglin
     
  • * git://git.infradead.org/~dwmw2/rbtree-2.6:
    [RBTREE] Switch rb_colour() et al to en_US spelling of 'color' for consistency
    Update UML kernel/physmem.c to use rb_parent() accessor macro
    [RBTREE] Update hrtimers to use rb_parent() accessor macro.
    [RBTREE] Add explicit alignment to sizeof(long) for struct rb_node.
    [RBTREE] Merge colour and parent fields of struct rb_node.
    [RBTREE] Remove dead code in rb_erase()
    [RBTREE] Update JFFS2 to use rb_parent() accessor macro.
    [RBTREE] Update eventpoll.c to use rb_parent() accessor macro.
    [RBTREE] Update key.c to use rb_parent() accessor macro.
    [RBTREE] Update ext3 to use rb_parent() accessor macro.
    [RBTREE] Change rbtree off-tree marking in I/O schedulers.
    [RBTREE] Add accessor macros for colour and parent fields of rb_node

    Linus Torvalds
     

06 Jun, 2006

1 commit


22 May, 2006

1 commit

  • People don't like released kernels yelling at them, no matter how real the
    error might be. So only report it if CONFIG_KOBJECT_DEBUG is enabled.

    Sent on request of Andrew Morton.

    (akpm: should bring this back post-2.6.17)

    Signed-off-by: Greg Kroah-Hartman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Greg Kroah-Hartman
     

13 May, 2006

1 commit


28 Apr, 2006

2 commits

  • This patch contains the following possible cleanups:
    - #if 0 the following unused global function:
      - subsys_remove_file()
    - remove the following unused EXPORT_SYMBOLs:
      - kset_find_obj
      - subsystem_init
    - remove the following unused EXPORT_SYMBOL_GPL:
      - kobject_add_dir

    Signed-off-by: Adrian Bunk
    Signed-off-by: Greg Kroah-Hartman

    Adrian Bunk
     
  • This fixes a build error for various odd combinations of CONFIG_HOTPLUG
    and CONFIG_NET.

    Signed-off-by: Kay Sievers
    Cc: Nigel Cunningham
    Cc: Andrew Morton
    Signed-off-by: Greg Kroah-Hartman

    Kay Sievers
     

21 Apr, 2006

2 commits

  • We only used a single bit for colour information, so having a whole
    machine word of space allocated for it was a bit wasteful. Instead,
    store it in the lowest bit of the 'parent' pointer, since that was
    always going to be aligned anyway.

    Signed-off-by: David Woodhouse

    David Woodhouse
     
  • Observe rb_erase(), when the victim node 'old' has two children so
    neither of the simple cases at the beginning are taken.

    Observe that it effectively does an 'rb_next()' operation to find the
    next (by value) node in the tree. That is; we go to the victim's
    right-hand child and then follow left-hand pointers all the way
    down the tree as far as we can until we find the next node 'node'. We
    end up with 'node' being either the same immediate right-hand child of
    'old', or one of its descendants on the far left-hand side.

    For a start, we _know_ that 'node' has a parent. We can drop that check.

    We also know that if 'node's parent is 'old', then 'node' is the
    right-hand child of its parent. And that if 'node's parent is _not_
    'old', then 'node' is the left-hand child of its parent.

    So instead of checking for 'node->rb_parent == old' in one place and
    also checking 'node's heritage separately when we're trying to change
    its link from its parent, we can shuffle things around a bit and do
    it like this...

    Signed-off-by: David Woodhouse

    David Woodhouse
     

20 Apr, 2006

1 commit

  • The DEBUG_MUTEX flag is on by default in the current kernel configuration.

    During performance testing, we saw that mutex debug functions such as
    mutex_debug_check_no_locks_freed() (called by kfree()) are expensive, as
    they walk a global list of memory areas under the debug lock to do the
    checking. For benchmarks such as Volanomark and Hackbench, we have seen
    more than a 40% drop in performance on some platforms. We suggest turning
    DEBUG_MUTEX off by default, or at least doing so once the mutex changes
    in the current code have stabilized.

    Signed-off-by: Tim Chen
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Tim Chen
     

15 Apr, 2006

1 commit

  • It works like this:
    Open the file.
    Read all the contents.
    Call poll, requesting POLLERR or POLLPRI (so select/exceptfds works).
    When poll returns, either:
      close the file and go to the top of the loop, or
      lseek to the start of the file and go back to the 'read'.

    Events are signaled by an object manager calling
    sysfs_notify(kobj, dir, attr);

    If the dir is non-NULL, it is used to find a subdirectory which
    contains the attribute (presumably created by sysfs_create_group).

    This has a cost of one int per attribute, one wait_queue_head per
    kobject, and one int per open file.

    The name "sysfs_notify" may be confused with the inotify
    functionality. Maybe it would be nice to support inotify for sysfs
    attributes as well?

    This patch also uses sysfs_notify to allow /sys/block/md*/md/sync_action
    to be pollable.

    Signed-off-by: Neil Brown
    Signed-off-by: Greg Kroah-Hartman

    NeilBrown
     

11 Apr, 2006

3 commits

  • lib/string.c: In function 'memcpy':
    lib/string.c:470: warning: initialization discards qualifiers from pointer
    target type

    Signed-off-by: Jan-Benedict Glaw
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan-Benedict Glaw
     
  • Some string functions were safely overrideable in lib/string.c, but their
    corresponding declarations in linux/string.h were not. Correct this, and
    make strcspn overrideable.

    Odds of someone wanting to do optimized assembly of these are small, but
    for the sake of cleanliness, might as well bring them into line with the
    rest of the file.

    Signed-off-by: Kyle McMartin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kyle McMartin
     
  • While cleaning up parisc_ksyms.c earlier, I noticed that strpbrk wasn't
    being exported from lib/string.c. Investigating further, I noticed a
    changeset that removed its export and added it to _ksyms.c on a few more
    architectures. The justification was that "other arches do it."

    I think this is wrong: since no architecture currently defines
    __HAVE_ARCH_STRPBRK, there's no reason for any of them to export it
    themselves. Therefore, consolidate the export in lib/string.c.

    Signed-off-by: Kyle McMartin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Kyle McMartin
     

31 Mar, 2006

1 commit


27 Mar, 2006

6 commits

  • We use it generally now; at the least, blktrace isn't a debug-kernel-
    specific feature.

    Signed-off-by: Jens Axboe

    Jens Axboe
     
  • wrote:

    This is an extremely well-known technique. You can see a similar version that
    uses a multiply for the last few steps at
    http://graphics.stanford.edu/~seander/bithacks.html#CountBitsSetParallel which
    refers to "Software Optimization Guide for AMD Athlon 64 and Opteron
    Processors"
    http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/25112.PDF

    It's section 8.6, "Efficient Implementation of Population-Count Function in
    32-bit Mode", pages 179-180.

    It uses the name that I am more familiar with, "popcount" (population count),
    although "Hamming weight" also makes sense.

    Anyway, the proof of correctness proceeds as follows:

    b = a - ((a >> 1) & 0x55555555);
    c = (b & 0x33333333) + ((b >> 2) & 0x33333333);
    d = (c + (c >> 4)) & 0x0f0f0f0f;
    #if SLOW_MULTIPLY
    e = d + (d >> 8)
    f = e + (e >> 16);
    return f & 63;
    #else
    /* Useful if multiply takes at most 4 cycles */
    return (d * 0x01010101) >> 24;
    #endif

    The input value a can be thought of as 32 1-bit fields each holding their own
    hamming weight. Now look at it as 16 2-bit fields. Each 2-bit field a1..a0
    has the value 2*a1 + a0. This can be converted into the hamming weight of the
    2-bit field a1+a0 by subtracting a1.

    That's what the (a >> 1) & mask subtraction does. Since there can be no
    borrows, you can just do it all at once.

    Enumerating the 4 possible cases:

    0b00 = 0 -> 0 - 0 = 0
    0b01 = 1 -> 1 - 0 = 1
    0b10 = 2 -> 2 - 1 = 1
    0b11 = 3 -> 3 - 1 = 2

    The next step consists of breaking up b (made of 16 2-bit fields) into
    even and odd halves and adding them into 4-bit fields. Since the largest
    possible sum is 2+2 = 4, which will not fit into a 2-bit field, the 2-bit
    fields have to be masked before they are added.

    After this point, the masking can be delayed. Each 4-bit field holds a
    population count from 0..4, taking at most 3 bits. These numbers can be added
    without overflowing a 4-bit field, so we can compute c + (c >> 4), and only
    then mask off the unwanted bits.

    This produces d, a number made of four 8-bit fields, each in the range 0..8. From
    this point, we can shift and add d multiple times without overflowing an 8-bit
    field, and only do a final mask at the end.

    The number to mask with has to be at least 63 (so that 32 won't be truncated),
    but can also be 128 or 255. The x86 has a special encoding for signed
    immediate byte values -128..127, so the value of 255 is slower. On other
    processors, a special "sign extend byte" instruction might be faster.

    On a processor with fast integer multiplies (Athlon but not P4), you can
    reduce the final few serially dependent instructions to a single integer
    multiply. Consider d to be 4 8-bit values d3, d2, d1 and d0, each in the
    range 0..8. The multiply forms the partial products:

                d3 d2 d1 d0
             d3 d2 d1 d0
          d3 d2 d1 d0
    +  d3 d2 d1 d0
    --------------------------
                e3 e2 e1 e0

    Where e3 = d3 + d2 + d1 + d0. e2, e1 and e0 obviously cannot generate
    any carries.

    Signed-off-by: Akinobu Mita
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     
  • By defining generic hweight*() routines

    - hweight64() will be defined on all architectures
    - hweight_long() will use architecture optimized hweight32() or hweight64()

    I found two possible cleanups enabled by these changes.

    Signed-off-by: Akinobu Mita
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     
  • This patch introduces the C-language equivalents of the functions below:

    int ext2_set_bit(int nr, volatile unsigned long *addr);
    int ext2_clear_bit(int nr, volatile unsigned long *addr);
    int ext2_test_bit(int nr, const volatile unsigned long *addr);
    unsigned long ext2_find_first_zero_bit(const unsigned long *addr,
                                           unsigned long size);
    unsigned long ext2_find_next_zero_bit(const unsigned long *addr,
                                          unsigned long size);

    In include/asm-generic/bitops/ext2-non-atomic.h

    This code largely copied from:

    include/asm-powerpc/bitops.h
    include/asm-parisc/bitops.h

    Signed-off-by: Akinobu Mita
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     
  • This patch introduces the C-language equivalents of the functions below:

    unsigned int hweight32(unsigned int w);
    unsigned int hweight16(unsigned int w);
    unsigned int hweight8(unsigned int w);
    unsigned long hweight64(__u64 w);

    In include/asm-generic/bitops/hweight.h

    This code largely copied from: include/linux/bitops.h

    Signed-off-by: Akinobu Mita
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     
  • This patch introduces the C-language equivalents of the functions below:

    unsigned long find_next_bit(const unsigned long *addr, unsigned long size,
                                unsigned long offset);
    unsigned long find_next_zero_bit(const unsigned long *addr,
                                     unsigned long size, unsigned long offset);
    unsigned long find_first_zero_bit(const unsigned long *addr,
                                      unsigned long size);
    unsigned long find_first_bit(const unsigned long *addr, unsigned long size);

    In include/asm-generic/bitops/find.h

    This code largely copied from: arch/powerpc/lib/bitops.c

    Signed-off-by: Akinobu Mita
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Akinobu Mita
     

26 Mar, 2006

9 commits

  • DEBUG_KERNEL is often enabled just for sysrq, but this doesn't
    mean the user wants more heavyweight debugging information.

    Cc: jbeulich@novell.com

    Signed-off-by: Andi Kleen
    Signed-off-by: Linus Torvalds

    Andi Kleen
     
  • * git://git.kernel.org/pub/scm/linux/kernel/git/bunk/trivial: (21 commits)
    BUG_ON() Conversion in drivers/video/
    BUG_ON() Conversion in drivers/parisc/
    BUG_ON() Conversion in drivers/block/
    BUG_ON() Conversion in sound/sparc/cs4231.c
    BUG_ON() Conversion in drivers/s390/block/dasd.c
    BUG_ON() Conversion in lib/swiotlb.c
    BUG_ON() Conversion in kernel/cpu.c
    BUG_ON() Conversion in ipc/msg.c
    BUG_ON() Conversion in block/elevator.c
    BUG_ON() Conversion in fs/coda/
    BUG_ON() Conversion in fs/binfmt_elf_fdpic.c
    BUG_ON() Conversion in input/serio/hil_mlc.c
    BUG_ON() Conversion in md/dm-hw-handler.c
    BUG_ON() Conversion in md/bitmap.c
    The comment describing how MS_ASYNC works in msync.c is confusing
    rcu: undeclared variable used in documentation
    fix typos "wich" -> "which"
    typo patch for fs/ufs/super.c
    Fix simple typos
    tabify drivers/char/Makefile
    ...

    Linus Torvalds
     
  •            text    data    bss     dec    hex filename
    before: 3605597 1363528 363328 5332453 515de5 vmlinux
    after:  3605295 1363612 363200 5332107 515c8b vmlinux

    218 bytes saved.

    Also, optimise any_online_cpu() out of existence on CONFIG_SMP=n.

    This function seems inefficient. Can't we simply AND the two masks, then use
    find_first_bit()?

    Cc: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • Shrinks the only caller (net/bridge/netfilter/ebtables.c) by 174 bytes.

    Also, optimise highest_possible_processor_id() out of existence on
    CONFIG_SMP=n.

    Cc: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  •            text    data    bss     dec    hex filename
    before: 3488027 1322496 360128 5170651 4ee5db vmlinux
    after:  3485112 1322480 359968 5167560 4ed9c8 vmlinux

    2931 bytes saved

    Cc: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  •            text    data    bss     dec    hex filename
    before: 3490577 1322408 360000 5172985 4eeef9 vmlinux
    after:  3488027 1322496 360128 5170651 4ee5db vmlinux

    Cc: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • Documentation changes to help radix tree users avoid overrunning the tags
    array. RADIX_TREE_TAGS moves to linux/radix-tree.h and is now known as
    RADIX_TREE_MAX_TAGS (Nick Piggin's idea). Tag parameters are changed to
    unsigned, and some comments are updated.

    Signed-off-by: Jonathan Corbet
    Cc: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jonathan Corbet
     
  • The Kconfig text for CONFIG_DEBUG_SLAB and CONFIG_DEBUG_PAGEALLOC have always
    seemed a bit confusing. Change them to:

    CONFIG_DEBUG_SLAB: "Debug slab memory allocations"
    CONFIG_DEBUG_PAGEALLOC: "Debug page memory allocations"

    Cc: "David S. Miller"
    Cc: Hirokazu Takata
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • Implement /proc/slab_allocators. It produces output like:

    idr_layer_cache: 80 idr_pre_get+0x33/0x4e
    buffer_head: 2555 alloc_buffer_head+0x20/0x75
    mm_struct: 9 mm_alloc+0x1e/0x42
    mm_struct: 20 dup_mm+0x36/0x370
    vm_area_struct: 384 dup_mm+0x18f/0x370
    vm_area_struct: 151 do_mmap_pgoff+0x2e0/0x7c3
    vm_area_struct: 1 split_vma+0x5a/0x10e
    vm_area_struct: 11 do_brk+0x206/0x2e2
    vm_area_struct: 2 copy_vma+0xda/0x142
    vm_area_struct: 9 setup_arg_pages+0x99/0x214
    fs_cache: 8 copy_fs_struct+0x21/0x133
    fs_cache: 29 copy_process+0xf38/0x10e3
    files_cache: 30 alloc_files+0x1b/0xcf
    signal_cache: 81 copy_process+0xbaa/0x10e3
    sighand_cache: 77 copy_process+0xe65/0x10e3
    sighand_cache: 1 de_thread+0x4d/0x5f8
    anon_vma: 241 anon_vma_prepare+0xd9/0xf3
    size-2048: 1 add_sect_attrs+0x5f/0x145
    size-2048: 2 journal_init_revoke+0x99/0x302
    size-2048: 2 journal_init_revoke+0x137/0x302
    size-2048: 2 journal_init_inode+0xf9/0x1c4

    Cc: Manfred Spraul
    Cc: Alexander Nyberg
    Cc: Pekka Enberg
    Cc: Christoph Lameter
    Cc: Ravikiran Thirumalai
    Signed-off-by: Al Viro
    DESC
    slab-leaks3-locking-fix
    EDESC
    From: Andrew Morton

    Update for slab-remove-cachep-spinlock.patch

    Cc: Al Viro
    Cc: Manfred Spraul
    Cc: Alexander Nyberg
    Cc: Pekka Enberg
    Cc: Christoph Lameter
    Cc: Ravikiran Thirumalai
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Al Viro
     

25 Mar, 2006

1 commit


24 Mar, 2006

3 commits

  • As a foundation for reliable stack unwinding, this adds a config option
    (available to all architectures except IA64 and those where the module
    loader might have problems with the resulting relocations) to enable the
    generation of frame unwind information.

    Signed-off-by: Jan Beulich
    Cc: Miles Bader
    Cc: "Luck, Tony"
    Cc: Ralf Baechle
    Cc: Kyle McMartin
    Cc: Benjamin Herrenschmidt
    Cc: Paul Mackerras
    Cc: "David S. Miller"
    Cc: Paul Mundt
    Cc: Andi Kleen
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jan Beulich
     
  • Restructure the bitmap_*_region() operations, to avoid code duplication.

    Also reduces binary text size by about 100 bytes (ia64 arch). The original
    Bottomley bitmap_*_region patch added about 1000 bytes of compiled kernel text
    (ia64). The Mundt multiword extension added another 600 bytes, and this
    restructuring patch gets back about 100 bytes.

    But the real motivation was the reduced amount of duplicated code.

    Tested by Paul Mundt using
    Signed-off-by: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Jackson
     
  • Add support to the lib/bitmap.c bitmap_*_region() routines for bitmap
    regions larger than one word (nbits > BITS_PER_LONG). This removes a
    BUG_ON() in lib/bitmap.c.

    I have an updated store queue API for SH that is currently using this with
    relative success, and at first glance, it seems like this could be useful for
    x86 (arch/i386/kernel/pci-dma.c) as well. Particularly for anything using
    dma_declare_coherent_memory() on large areas and that attempts to allocate
    large buffers from that space.

    Paul Jackson also did some cleanup to this patch.

    Signed-off-by: Paul Mundt
    Signed-off-by: Paul Jackson
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Mundt