12 Jun, 2008

1 commit

  • This implements a few changes on top of the recent kobjsize() refactoring
    introduced by commit 6cfd53fc03670c7a544a56d441eb1a6cc800d72b.

    As Christoph points out:

    virt_to_head_page cannot return NULL. virt_to_page also
    does not return NULL. pfn_valid() needs to be used to
    figure out if a page is valid. Otherwise the page struct
    reference that was returned may have PageReserved() set
    to indicate that it is not a valid page.

    As discussed further in the thread, virt_addr_valid() is the preferable
    way to validate the object pointer in this case. In addition to fixing
    up the reserved page case, it also has the benefit of encapsulating the
    hack introduced by commit 4016a1390d07f15b267eecb20e76a48fd5c524ef on
    the impacted platforms, allowing us to get rid of the extra checking in
    kobjsize() for the platforms that don't perform this type of bizarre
    memory_end abuse (every nommu platform that isn't blackfin). If blackfin
    decides to get in line with every other platform and use PageReserved
    for the DMA pages in question, kobjsize() will also continue to work
    fine.

    It also turns out that compound_order() will give us back 0-order for
    non-head pages, so we can get rid of the PageCompound check and just
    use compound_order() directly. Clean that up while we're at it.
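
    Roughly, kobjsize() ends up looking like this after the change (a
    sketch, not the verbatim kernel code):

    unsigned int kobjsize(const void *objp)
    {
            struct page *page;

            /* reject pointers that don't map to a valid page */
            if (!objp || !virt_addr_valid(objp))
                    return 0;

            page = virt_to_head_page(objp);

            /* ksize() is only guaranteed to work for kmalloc() pointers */
            if (PageSlab(page))
                    return ksize(objp);

            /* compound_order() returns 0 for non-compound pages, so this
             * also covers the single-page case */
            return PAGE_SIZE << compound_order(page);
    }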

    Signed-off-by: Paul Mundt
    Reviewed-by: Christoph Lameter
    Acked-by: David Howells
    Signed-off-by: Linus Torvalds

    Paul Mundt
     

07 Jun, 2008

1 commit

  • kobjsize() has been abusing page->index as a method for sorting out
    compound order, which blows up both for page cache pages, and SLOB's
    reuse of the index in struct slob_page.

    Presently we are not able to accurately size arbitrary pointers that
    don't come from kmalloc(), so the best we can do is sort out the
    compound order from the head page if it's a compound page, or default
    to 0-order if it's impossible to ksize() the object.

    Obviously this leaves quite a bit to be desired in terms of object
    sizing accuracy, but the behaviour is unchanged over the existing
    implementation, while fixing the page->index oopses originally reported
    here:

    http://marc.info/?l=linux-mm&m=121127773325245&w=2

    Accuracy could also be improved by having SLUB and SLOB both set PG_slab
    on ksizeable pages, rather than just handling the __GFP_COMP cases
    regardless of the PG_slab setting, as made possible by Pekka's
    patches:

    http://marc.info/?l=linux-kernel&m=121139439900534&w=2
    http://marc.info/?l=linux-kernel&m=121139440000537&w=2
    http://marc.info/?l=linux-kernel&m=121139440000540&w=2

    This is primarily a bugfix for nommu systems for 2.6.26, with the aim
    being to gradually kill off kobjsize() and its particular brand of
    object abuse entirely.

    Reviewed-by: Pekka Enberg
    Signed-off-by: Paul Mundt
    Acked-by: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Mundt
     

25 May, 2008

1 commit

  • The atomic_t type is 32-bit, but a 64-bit system can have more than 2^32
    pages of virtual address space available. Without this we overflow on
    ludicrously large mappings.
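
    A sketch of the type change, assuming the vm_committed_space counter
    this patch converts:

    /* was: atomic_t, which is 32-bit even on 64-bit systems */
    atomic_long_t vm_committed_space = ATOMIC_LONG_INIT(0);

    static void vm_acct_memory(long pages)
    {
            /* atomic_long_* matches the native word size, so page counts
             * for huge mappings no longer wrap the counter */
            atomic_long_add(pages, &vm_committed_space);
    }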

    Signed-off-by: Alan Cox
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alan Cox
     

29 Apr, 2008

1 commit

  • The kernel implements readlink of /proc/pid/exe by getting the file from
    the first executable VMA. Then the path to the file is reconstructed and
    reported as the result.

    Because of the VMA walk the code is slightly different on nommu systems.
    This patch avoids separate /proc/pid/exe code on nommu systems. Instead of
    walking the VMAs to find the first executable file-backed VMA we store a
    reference to the exec'd file in the mm_struct.

    That reference would prevent the filesystem holding the executable file
    from being unmounted even after unmapping the VMAs. So we track the number
    of VM_EXECUTABLE VMAs and drop the new reference when the last one is
    unmapped. This avoids pinning the mounted filesystem.
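
    A sketch of the reference tracking, using helper names along the lines
    of what this patch introduces:

    void added_exe_file_vma(struct mm_struct *mm)
    {
            mm->num_exe_file_vmas++;
    }

    void removed_exe_file_vma(struct mm_struct *mm)
    {
            mm->num_exe_file_vmas--;

            /* last VM_EXECUTABLE mapping is gone: drop the reference so
             * the filesystem holding the executable can be unmounted */
            if (mm->num_exe_file_vmas == 0 && mm->exe_file) {
                    fput(mm->exe_file);
                    mm->exe_file = NULL;
            }
    }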

    [akpm@linux-foundation.org: improve comments]
    [yamamoto@valinux.co.jp: fix dup_mmap]
    Signed-off-by: Matt Helsley
    Cc: Oleg Nesterov
    Cc: David Howells
    Cc:"Eric W. Biederman"
    Cc: Christoph Hellwig
    Cc: Al Viro
    Cc: Hugh Dickins
    Signed-off-by: YAMAMOTO Takashi
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Matt Helsley
     

28 Apr, 2008

1 commit

  • Don't perform kobjsize operations on objects the kernel doesn't manage.

    On Blackfin, drivers can get DMA coherent memory by calling
    dma_alloc_coherent(). We do this in nommu by configuring a chunk of
    uncached memory at the top of memory.

    Since we don't want the kernel to use the uncached memory, we lie to the
    kernel and tell it that its maximum memory is between 0 and the start of
    the uncached DMA coherent section.

    This all works well until this memory gets exposed to userspace (with a
    frame buffer); when you look at the process's maps, it shows the
    framebuffer:

    root:/proc> cat maps
    [snip]
    03f0ef00-03f34700 rw-p 00000000 1f:00 192 /dev/fb0
    root:/proc>

    This is outside the "normal" range for the kernel. When the kernel tries to
    find the size of this object (when you run ps), it dies in nommu.c in
    kobjsize.

    BUG_ON(page->index >= MAX_ORDER);

    since the page we are referring to is outside what the kernel thinks is
    its maximum valid memory.

    root:~> while [ 1 ]; do ps > /dev/null; done
    kernel BUG at mm/nommu.c:119!
    Kernel panic - not syncing: BUG!

    We fixed this by adding a check to reject out-of-range object pointers,
    just as is already done for NULL pointers.

    Signed-off-by: Michael Hennerich
    Signed-off-by: Robin Getz
    Acked-by: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Michael Hennerich
     

06 Feb, 2008

2 commits

  • This builds on top of the earlier vmalloc_32_user() work introduced by
    b50731732f926d6c49fd0724616a7344c31cd5cf, as we now have places in the nommu
    allmodconfig that hit up against these missing APIs.

    As vmalloc_32_user() is already implemented, its guts are moved over to
    vmalloc_user(), and vmalloc_32_user() is simply made a wrapper around it.
    As all current nommu platforms are 32-bit addressable, there's no special
    casing we have to do for ZONE_DMA and things of that nature as per
    GFP_VMALLOC32.

    remap_vmalloc_range() needs to check VM_USERMAP in order to figure out whether
    we permit the remap or not, which means that we also have to rework the
    vmalloc_user() code to grovel for the VMA and set the flag.
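
    A sketch of the reworked nommu code, assuming the mm/nommu.c of the era:

    void *vmalloc_user(unsigned long size)
    {
            void *ret;

            ret = __vmalloc(size, GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO,
                            PAGE_KERNEL);
            if (ret) {
                    struct vm_area_struct *vma;

                    /* grovel for the VMA so remap_vmalloc_range() can
                     * later check VM_USERMAP */
                    down_write(&current->mm->mmap_sem);
                    vma = find_vma(current->mm, (unsigned long)ret);
                    if (vma)
                            vma->vm_flags |= VM_USERMAP;
                    up_write(&current->mm->mmap_sem);
            }
            return ret;
    }

    void *vmalloc_32_user(unsigned long size)
    {
            /* all current nommu platforms are 32-bit addressable */
            return vmalloc_user(size);
    }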

    Signed-off-by: Paul Mundt
    Acked-by: David McCullough
    Acked-by: David Howells
    Acked-by: Greg Ungerer
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Mundt
     
  • Make vmalloc functions work the same way as kfree() and friends that
    take a const void * argument.
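
    The affected prototypes end up looking like kfree()'s, e.g.:

    void vfree(const void *addr);
    void vunmap(const void *addr);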

    [akpm@linux-foundation.org: fix consts, coding-style]
    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

05 Dec, 2007

1 commit

  • If mmap_min_addr is set and a process attempts to mmap (not fixed) with
    a non-null hint address less than mmap_min_addr, the mapping will fail
    the security checks. Since this is just a hint address, this patch rounds
    such a hint address up above mmap_min_addr.

    gcj was found to be very frugal with VM usage, giving hint addresses in
    the 8k-32k range. Without this patch all such programs failed; with the
    patch they happily get a higher address.

    This patch is wrapped in CONFIG_SECURITY since mmap_min_addr doesn't
    exist without it, in which case no security check would be possible
    anyway, so we should not bother compiling in this rounding if it would
    just be a waste of time.
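
    A sketch of the rounding helper, assuming a name like
    round_hint_to_min() and the CONFIG_SECURITY wrapping described above:

    static inline unsigned long round_hint_to_min(unsigned long hint)
    {
    #ifdef CONFIG_SECURITY
            hint &= PAGE_MASK;
            /* only non-NULL hints below the floor are rounded up */
            if (((void *)hint != NULL) && (hint < mmap_min_addr))
                    return PAGE_ALIGN(mmap_min_addr);
    #endif
            return hint;
    }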

    Signed-off-by: Eric Paris
    Signed-off-by: James Morris

    Eric Paris
     

29 Oct, 2007

1 commit


20 Oct, 2007

1 commit


17 Oct, 2007

1 commit

  • This patch contains the following cleanups that are now possible:
    - remove the unused security_operations->inode_xattr_getsuffix
    - remove the no longer used security_operations->unregister_security
    - remove some no longer required exit code
    - remove a bunch of no longer used exports

    Signed-off-by: Adrian Bunk
    Acked-by: James Morris
    Cc: Chris Wright
    Cc: Stephen Smalley
    Cc: Serge Hallyn
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Adrian Bunk
     

23 Aug, 2007

1 commit

  • The new exec code inserts an accounted VMA into an mm_struct which is
    not current->mm. The existing memory check code has a hard-coded
    assumption that this does not happen, as does the security code.

    As the correct mm is known, we pass the mm to the security method and
    the helper function. A new security test is added for the case where we
    need to pass the mm, and the existing one is modified to pass
    current->mm to avoid the need to change large amounts of code.

    (Thanks to Tobias for fixing rejects and testing)
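
    A sketch of the split, assuming names along the lines of the ones this
    patch adds:

    /* new entry point: callers accounting against another process's mm */
    int security_vm_enough_memory_mm(struct mm_struct *mm, long pages);

    /* existing callers stay unchanged by routing through current->mm */
    static inline int security_vm_enough_memory(long pages)
    {
            return security_vm_enough_memory_mm(current->mm, pages);
    }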

    Signed-off-by: Alan Cox
    Cc: WU Fengguang
    Cc: James Morris
    Cc: Tobias Diedrich
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alan Cox
     

22 Jul, 2007

1 commit

  • Trying to survive an allmodconfig on a nommu platform results in many
    screen lengths of module unhappiness. Many of the mmap-related things
    that binfmt_flat hooks into are never exported despite being global, and
    there are also missing definitions for vmalloc_32_user() and
    vm_insert_page().

    I've implemented vmalloc_32_user() trying to stick as close to the
    mm/vmalloc.c implementation as possible, though we don't have any need for
    VM_USERMAP, so groveling for the VMA can be skipped. vm_insert_page() has
    been stubbed for now in order to keep the build happy.
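
    A sketch of the additions, assuming the mm/nommu.c of the era:

    void *vmalloc_32_user(unsigned long size)
    {
            /* no VM_USERMAP on nommu, so no need to grovel for the VMA */
            return vmalloc_32(size);
    }

    int vm_insert_page(struct vm_area_struct *vma, unsigned long addr,
                       struct page *page)
    {
            return -EINVAL;         /* stubbed to keep the build happy */
    }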

    Signed-off-by: Paul Mundt
    Cc: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Paul Mundt
     

20 Jul, 2007

2 commits

  • Change the ->fault prototype. We now return an int, which contains a
    VM_FAULT_xxx code in the low byte and a FAULT_RET_xxx code in the next
    byte. The FAULT_RET_ code tells the VM whether a page was found, whether
    it has been locked, and potentially other things. This is not quite the
    way Linus wanted it yet, but that's changed in the next patch (which
    requires changes to arch code).

    This means we no longer set VM_CAN_INVALIDATE in the vma in order to say
    that a page is locked, which requires filemap_nopage to go away (because
    we can no longer remain backward compatible without that flag), but we
    were going to do that anyway.

    struct fault_data is renamed to struct vm_fault as Linus asked. address
    is now a void __user * that we should firmly encourage drivers not to use
    without really good reason.

    The page is now returned via a page pointer in the vm_fault struct.
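
    The new types look roughly like this:

    struct vm_fault {
            unsigned int flags;             /* FAULT_FLAG_xxx flags */
            pgoff_t pgoff;                  /* logical offset into the file */
            void __user *virtual_address;   /* faulting virtual address */
            struct page *page;              /* set by the handler on success */
    };

    /* the ->fault member of struct vm_operations_struct */
    int (*fault)(struct vm_area_struct *vma, struct vm_fault *vmf);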

    Signed-off-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     
  • Nonlinear mappings are (AFAIKS) simply a virtual memory concept that
    encodes the virtual address -> file offset differently from linear
    mappings.

    ->populate is a layering violation because the filesystem/pagecache code
    shouldn't need to know anything about the virtual memory mapping. The
    hitch here is that the ->nopage handler didn't pass down enough
    information (i.e. pgoff). But it is more logical to pass pgoff rather
    than have the ->nopage function calculate it itself anyway (because
    that's a similar layering violation).

    Having the populate handler install the pte itself is likewise a nasty thing
    to be doing.

    This patch introduces a new fault handler that replaces ->nopage and
    ->populate and (later) ->nopfn. Most of the old mechanism is still in place
    so there is a lot of duplication and nice cleanups that can be removed if
    everyone switches over.

    The rationale for doing this in the first place is that nonlinear mappings are
    subject to the pagefault vs invalidate/truncate race too, and it seemed stupid
    to duplicate the synchronisation logic rather than just consolidate the two.

    After this patch, MAP_NONBLOCK no longer sets up ptes for pages present in
    pagecache. Seems like a fringe functionality anyway.

    NOPAGE_REFAULT is removed. This should be implemented with ->fault, and no
    users have hit mainline yet.
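
    A hypothetical driver handler under the new API might look like this
    (example_lookup_page() is made up for illustration):

    static int example_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
    {
            struct page *page;

            /* pgoff is handed down by the VM instead of being derived
             * from the virtual address by the handler */
            page = example_lookup_page(vma->vm_private_data, vmf->pgoff);
            if (!page)
                    return VM_FAULT_SIGBUS;

            get_page(page);
            vmf->page = page;       /* returned via the struct, not the
                                     * function return value */
            return 0;
    }

    static struct vm_operations_struct example_vm_ops = {
            .fault  = example_fault,
    };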

    [akpm@linux-foundation.org: cleanup]
    [randy.dunlap@oracle.com: doc. fixes for readahead]
    [akpm@linux-foundation.org: build fix]
    Signed-off-by: Nick Piggin
    Signed-off-by: Randy Dunlap
    Cc: Mark Fasheh
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     

17 Jul, 2007

1 commit


12 Jul, 2007

1 commit

  • Add a new security check on mmap operations to see if the user is
    attempting to mmap into the low area of the address space. The amount of
    space protected is indicated by the new proc tunable
    /proc/sys/vm/mmap_min_addr and defaults to 0, preserving existing
    behavior.

    This patch uses a new SELinux security class "memprotect". Policy
    already contains a number of allow rules like a_t self:process *
    (unconfined_t being one of them), which mean that putting this check in
    the process class (its best current fit) would make it useless, as all
    user processes, which we also want to protect against, would be allowed.
    By taking the memprotect name for the new class it will also be possible
    for us to move some of the other memory protect permissions out of
    'process' and into the new class next time we bump the policy version
    number (which I also think is a good future idea).

    Acked-by: Stephen Smalley
    Acked-by: Chris Wright
    Signed-off-by: Eric Paris
    Signed-off-by: James Morris

    Eric Paris
     

09 May, 2007

1 commit

  • This patch moves the die notifier handling to common code. Previously
    various architectures had exactly the same code for it. Note that the
    new code is compiled unconditionally; this should be understood as an
    appeal to the other architecture maintainers to implement support for it
    as well (aka sprinkling a notify_die or two in the proper place).

    arm had a notify_die that did something totally different; I renamed it
    to arm_notify_die as part of the patch and made it static to the file
    it's declared and used in. avr32 used to pass slightly less information
    through this interface and I brought it into line with the other
    architectures.
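
    The common entry point has a signature along these lines:

    int notify_die(enum die_val val, const char *str, struct pt_regs *regs,
                   long err, int trap, int sig);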

    [akpm@linux-foundation.org: build fix]
    [akpm@linux-foundation.org: fix vmalloc_sync_all bustage]
    [bryan.wu@analog.com: fix vmalloc_sync_all in nommu]
    Signed-off-by: Christoph Hellwig
    Cc:
    Cc: Russell King
    Signed-off-by: Bryan Wu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Hellwig
     

13 Apr, 2007

1 commit


23 Mar, 2007

2 commits

  • Make the SYSV SHM nattch counter work correctly by forcing multiple VMAs to
    be produced to represent MAP_SHARED segments, even if they overlap exactly.

    Using this test program:

    http://people.redhat.com/~dhowells/doshm.c

    Run as:

    doshm sysv

    I can see nattch going from one before the patch:

    # /doshm sysv
    Command: sysv
    shmid: 65536
    memory: 0xc3700000
    c0b00000-c0b04000 rw-p 00000000 00:00 0
    c0bb0000-c0bba788 r-xs 00000000 00:0b 14582157 /lib/ld-uClibc-0.9.28.so
    c3180000-c31dede4 r-xs 00000000 00:0b 14582179 /lib/libuClibc-0.9.28.so
    c3520000-c352278c rw-p 00000000 00:0b 13763417 /doshm
    c3584000-c35865e8 r-xs 00000000 00:0b 13763417 /doshm
    c3588000-c358aa00 rw-p 00008000 00:0b 14582157 /lib/ld-uClibc-0.9.28.so
    c3590000-c359b6c0 rw-p 00000000 00:00 0
    c3620000-c3640000 rwxp 00000000 00:00 0
    c3700000-c37fa000 rw-S 00000000 00:06 1411 /SYSV00000000 (deleted)
    c3700000-c37fa000 rw-S 00000000 00:06 1411 /SYSV00000000 (deleted)
    nattch 1

    To two after the patch:

    # /doshm sysv
    Command: sysv
    shmid: 0
    memory: 0xc3700000
    c0bb0000-c0bba788 r-xs 00000000 00:0b 14582157 /lib/ld-uClibc-0.9.28.so
    c3180000-c31dede4 r-xs 00000000 00:0b 14582179 /lib/libuClibc-0.9.28.so
    c3320000-c3340000 rwxp 00000000 00:00 0
    c3530000-c35325e8 r-xs 00000000 00:0b 13763417 /doshm
    c3534000-c353678c rw-p 00000000 00:0b 13763417 /doshm
    c3538000-c353aa00 rw-p 00008000 00:0b 14582157 /lib/ld-uClibc-0.9.28.so
    c3590000-c359b6c0 rw-p 00000000 00:00 0
    c35a4000-c35a8000 rw-p 00000000 00:00 0
    c3700000-c37fa000 rw-S 00000000 00:06 1369 /SYSV00000000 (deleted)
    c3700000-c37fa000 rw-S 00000000 00:06 1369 /SYSV00000000 (deleted)
    nattch 2

    That's +1 to nattch for each shmat() made.

    Signed-off-by: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Howells
     
  • Supply a get_unmapped_area() to fix NOMMU SYSV SHM support.

    Signed-off-by: David Howells
    Acked-by: Adam Litke
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Howells
     

09 Dec, 2006

1 commit


08 Dec, 2006

1 commit


06 Dec, 2006

1 commit

  • I was playing with blackfin when I hit a neat bug: doing an open() on a
    directory and then passing that fd to mmap() would cause the kernel to
    hang.

    After poking into the code a bit more, I found that
    mm/nommu.c:validate_mmap_request() checks the length and, if it is 0,
    just returns the address. This is in stark contrast to the MMU version,
    mm/mmap.c:do_mmap_pgoff(), which returns -EINVAL for 0-length requests.
    I then noticed that some other parts of the logic are out of date
    between the two functions, so perhaps that's the easy fix?
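
    The obvious fix is a guard in validate_mmap_request(), mirroring the
    MMU-side behaviour (a sketch):

    /* zero-length mappings are invalid, as on MMU systems */
    if (!len)
            return -EINVAL;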

    Signed-off-by: Greg Ungerer
    Signed-off-by: Linus Torvalds

    Mike Frysinger
     

04 Oct, 2006

1 commit


01 Oct, 2006

1 commit


27 Sep, 2006

8 commits

  • Make futexes work under NOMMU conditions.

    This can be tested by running this in one shell:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <sys/syscall.h>
    #include <linux/futex.h>

    #define SYSERROR(X, Y) \
            do { if ((long)(X) == -1L) { perror(Y); exit(1); }} while (0)

    /* glibc provides no futex() wrapper; call the syscall directly */
    #define futex(addr, op, val, ts, addr2, val3) \
            syscall(SYS_futex, addr, op, val, ts, addr2, val3)

    int main()
    {
            int shmid, tmp, *f, n;

            shmid = shmget(23, 4, IPC_CREAT|0666);
            SYSERROR(shmid, "shmget");

            f = shmat(shmid, NULL, 0);
            SYSERROR(f, "shmat");

            n = *f;
            printf("WAIT: %p{%x}\n", f, n);
            tmp = futex(f, FUTEX_WAIT, n, NULL, NULL, 0);
            SYSERROR(tmp, "futex");
            printf("WAITED: %d\n", tmp);

            tmp = shmdt(f);
            SYSERROR(tmp, "shmdt");

            exit(0);
    }

    And then this in the other shell:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <sys/syscall.h>
    #include <linux/futex.h>

    #define SYSERROR(X, Y) \
            do { if ((long)(X) == -1L) { perror(Y); exit(1); }} while (0)

    /* glibc provides no futex() wrapper; call the syscall directly */
    #define futex(addr, op, val, ts, addr2, val3) \
            syscall(SYS_futex, addr, op, val, ts, addr2, val3)

    int main()
    {
            int shmid, tmp, *f;

            shmid = shmget(23, 4, IPC_CREAT|0666);
            SYSERROR(shmid, "shmget");

            f = shmat(shmid, NULL, 0);
            SYSERROR(f, "shmat");

            (*f)++;
            printf("WAKE: %p{%x}\n", f, *f);
            tmp = futex(f, FUTEX_WAKE, 1, NULL, NULL, 0);
            SYSERROR(tmp, "futex");
            printf("WOKE: %d\n", tmp);

            tmp = shmdt(f);
            SYSERROR(tmp, "shmdt");

            exit(0);
    }

    The first program will set up a SYSV IPC SHM segment and wait on a futex
    in it for the number at the start to change. The second program will
    increment that number and wake the first program up. This leads to
    output of the form:

    SHELL 1                        SHELL 2
    =======================        =======================
    # /dowait
    WAIT: 0xc32ac000{0}
                                   # /dowake
                                   WAKE: 0xc32ac000{1}
    WAITED: 0                      WOKE: 1

    Signed-off-by: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Howells
     
  • Make mremap() partially work for NOMMU kernels. It may resize a VMA,
    provided that the new size doesn't exceed the size of the slab object in
    which the storage the VMA refers to is allocated. Shareable VMAs may not
    be resized.

    Moving VMAs (as permitted by MREMAP_MAYMOVE) is not currently supported.

    This patch also makes use of the fact that the VMA list is now ordered to cut
    it short when possible.

    Signed-off-by: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Howells
     
  • Order the per-mm_struct VMA list by address so that searching it can be cut
    short when the appropriate address has been exceeded.

    Signed-off-by: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Howells
     
  • Permit ptrace to modify a section that's non-shared but is marked
    unwritable, such as is obtained by mapping the text segment of an
    ELF-FDPIC executable binary into a process that's being ptraced[*].

    [*] Under NOMMU conditions ptrace causes read-only MAP_PRIVATE mmaps to
    become totally private copies, because if a private mapping were
    actually shared then a debugger setting breakpoints in it would
    potentially crash other processes.

    This is done by using the VM_MAYWRITE flag rather than the VM_WRITE flag
    when deciding whether to permit a write.

    Without this patch a debugger can't set breakpoints in the mapped text
    sections of executables that are mapped read-only private, even if the
    mmap() syscall has taken a private copy because PT_PTRACED is set.

    In addition, VM_MAYREAD is used instead of VM_READ for similar reasons.

    Signed-off-by: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Howells
     
  • Check the VMA protections in get_user_pages() against what's being asked.

    This checks to see that we don't accidentally write on a non-writable VMA or
    permit an I/O mapping VMA to be accessed (which may lack page structs).
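
    A sketch of the idea, not the kernel's exact code (this and the ptrace
    change above are folded into one hypothetical helper here):

    static int nommu_vma_access_ok(struct vm_area_struct *vma, int write)
    {
            /* I/O mappings may lack page structs; never touch them */
            if (vma->vm_flags & (VM_IO | VM_PFNMAP))
                    return 0;

            /* check the MAY flags so that ptrace of read-only private
             * mappings keeps working */
            if (write)
                    return !!(vma->vm_flags & VM_MAYWRITE);
            return !!(vma->vm_flags & VM_MAYREAD);
    }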

    Signed-off-by: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Howells
     
  • On a NOMMU arch, if you run "cat /proc/self/mem", data from physical
    address 0 is read. This behavior is different from MMU arches; on IA32
    the message "cat: /proc/self/mem: Input/output error" is reported.

    The root cause is that the NOMMU version of get_user_pages() does not
    validate the start address. The following patch solves this issue.

    Signed-off-by: Sonic Zhang
    Cc: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sonic Zhang
     
  • Use find_vma() in the NOMMU version of access_process_vm() rather than
    reimplementing it.

    Signed-off-by: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Howells
     
  • Check that access_process_vm() is accessing a valid mapping in the target
    process.

    This limits ptrace() accesses and accesses through /proc/<pid>/maps to
    only those regions actually mapped by a program.

    Signed-off-by: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Howells
     

26 Sep, 2006

1 commit

  • Remove the atomic counter for slab_reclaim_pages and replace the counter
    and NR_SLAB with two ZVC counters that account for unreclaimable and
    reclaimable slab pages: NR_SLAB_RECLAIMABLE and NR_SLAB_UNRECLAIMABLE.

    Change the check in vmscan.c to refer to NR_SLAB_RECLAIMABLE. The intent
    seems to be to check for slab pages that could be freed.

    Signed-off-by: Christoph Lameter
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

15 Jul, 2006

1 commit


01 Jul, 2006

1 commit

  • Currently a single atomic variable is used to establish the size of the page
    cache in the whole machine. The zoned VM counters have the same method of
    implementation as the nr_pagecache code but also allow the determination of
    the pagecache size per zone.

    Remove the special implementation for nr_pagecache and make it a zoned counter
    named NR_FILE_PAGES.

    Updates of the page cache counters are always performed with interrupts off.
    We can therefore use the __ variant here.
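
    For example, page cache accounting can quote the cheaper non-atomic
    form (a sketch, assuming the ZVC API introduced by this series):

    /* caller holds the mapping's tree lock with interrupts disabled,
     * so the non-atomic __ variant is safe */
    __inc_zone_page_state(page, NR_FILE_PAGES);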

    Signed-off-by: Christoph Lameter
    Cc: Trond Myklebust
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Christoph Lameter
     

11 Apr, 2006

1 commit

  • This patch is an enhancement of the OVERCOMMIT_GUESS algorithm in
    __vm_enough_memory() in mm/nommu.c.

    When the OVERCOMMIT_GUESS algorithm calculates the number of free pages,
    it now subtracts the number of reserved pages from the result of
    nr_free_pages().
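
    A minimal sketch of the adjusted estimate; the helper name is made up,
    and the reserved-page count is assumed to come from the
    totalreserve_pages counter:

    static long guess_free_pages(void)
    {
            long free = nr_free_pages();

            /* reserved pages are never handed out, so don't count
             * them as available */
            free -= totalreserve_pages;
            return free;
    }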

    Signed-off-by: Hideo Aoki
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Hideo AOKI
     

22 Mar, 2006

1 commit

  • Now that compound page handling is properly fixed in the VM, move nommu
    over to using compound pages rather than rolling its own refcounting.

    nommu VM page refcounting is broken anyway, but there is no need to have
    divergent code in the core VM now, nor when it gets fixed.

    Signed-off-by: Nick Piggin
    Cc: David Howells

    (Needs testing, please).
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Nick Piggin
     

01 Mar, 2006

1 commit