13 Feb, 2009

2 commits

  • Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/penberg/slab-2.6:
    mm: Export symbol ksize()

    Linus Torvalds
     
  • A bug was introduced into write_cache_pages cyclic writeout by commit
    31a12666d8f0c22235297e1c1575f82061480029 ("mm: write_cache_pages cyclic
    fix"). The intention (and comments) is that we should cycle back and
    look for more dirty pages at the beginning of the file if there is no
    more work to be done.

    But the !done condition was dropped from the test. This means that any
    time the page writeout loop breaks (e.g. due to nr_to_write == 0), we
    will set index to 0, then goto again. This will set done_index to
    index, then find done is set, so will proceed to the end of the
    function. When updating mapping->writeback_index for cyclic writeout,
    we now use done_index == 0, so we're always cycling back to 0.

    This seemed to cause random mmap writes (slapadd and iozone) to start
    writing more pages from the LRU, and writeout would slow down, leading
    to bugzilla entry

    http://bugzilla.kernel.org/show_bug.cgi?id=12604

    about Berkeley DB slowing down dramatically.

    With this patch, iozone random write performance is increased nearly
    5x on my system (iozone -B -r 4k -s 64k -s 512m -s 1200m on ext2).

    Signed-off-by: Nick Piggin
    Reported-and-tested-by: Jan Kara
    Signed-off-by: Linus Torvalds

    Nick Piggin
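
    A minimal sketch of the corrected termination logic (simplified; not
    the verbatim write_cache_pages() source):

    /* after the main scan loop in write_cache_pages() */
    if (wbc->range_cyclic && !done && !cycled) {
            /*
             * We hit the last page of the file with work still to do:
             * wrap back to the start of the file, exactly once.
             * Without the !done test, any early break (e.g. because
             * nr_to_write reached 0) restarts the scan at index 0 and
             * later stores 0 into mapping->writeback_index.
             */
            cycled = 1;
            index = 0;
            end = writeback_index - 1;
            goto retry;
    }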
     

12 Feb, 2009

8 commits

  • Commit 7b2cd92adc5430b0c1adeb120971852b4ea1ab08 ("crypto: api - Fix
    zeroing on free") added a modular user of ksize(). Export the symbol
    to fix crypto.ko compilation.

    Cc: Herbert Xu
    Signed-off-by: Kirill A. Shutemov
    Signed-off-by: Pekka Enberg

    Kirill A. Shutemov
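
    The change itself is a one-line export next to the ksize() definition
    (sketch; the exact file depends on the configured slab allocator):

    /* after the ksize() definition in mm/slub.c, mm/slab.c or mm/slob.c */
    EXPORT_SYMBOL(ksize);   /* let modular code such as crypto.ko link to it */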
     
  • Christophe Saout reported [in precursor to:
    http://marc.info/?l=linux-kernel&m=123209902707347&w=4]:

    > Note that I also saw a different issue with CONFIG_UNEVICTABLE_LRU.
    > Seems like Xen tears down current->mm early on process termination, so
    > that __get_user_pages in exit_mmap causes nasty messages when the
    > process had any mlocked pages. (in fact, it somehow manages to get into
    > the swapping code and produces a null pointer dereference trying to get
    > a swap token)

    Jeremy explained:

    Yes. In the normal case under Xen, an in-use pagetable is "pinned",
    meaning that it is RO to the kernel, and all updates must go via hypercall
    (or writes are trapped and emulated, which is much the same thing). An
    unpinned pagetable is not currently in use by any process, and can be
    directly accessed as normal RW pages.

    As an optimisation at process exit time, we unpin the pagetable as early
    as possible (switching the process to init_mm), so that all the normal
    pagetable teardown can happen with direct memory accesses.

    This happens in exit_mmap() -> arch_exit_mmap(). The munlocking happens
    a few lines below. The obvious thing to do would be to move
    arch_exit_mmap() to below the munlock code, but I think we'd want to
    call it even if mm->mmap is NULL, just to be on the safe side.

    Thus, this patch:

    exit_mmap() needs to unlock any locked vmas before calling arch_exit_mmap,
    as the latter may switch the current mm to init_mm, which would cause the
    former to fail.

    Signed-off-by: Jeremy Fitzhardinge
    Signed-off-by: Lee Schermerhorn
    Cc: Christophe Saout
    Cc: Keir Fraser
    Cc: Alex Williamson
    Cc: [2.6.28.x]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jeremy Fitzhardinge
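
    A sketch of the resulting ordering (simplified from exit_mmap();
    munlock_vma_pages_all() is the per-vma munlock helper):

    void exit_mmap(struct mm_struct *mm)
    {
            struct vm_area_struct *vma;

            mmu_notifier_release(mm);

            /* munlock first, while the pagetables are still ours and
             * directly accessible ... */
            if (mm->locked_vm) {
                    vma = mm->mmap;
                    while (vma) {
                            if (vma->vm_flags & VM_LOCKED)
                                    munlock_vma_pages_all(vma);
                            vma = vma->vm_next;
                    }
            }

            /* ... and only then let the arch (here: Xen) unpin the
             * pagetables and possibly switch us to init_mm. */
            arch_exit_mmap(mm);

            /* normal pagetable teardown follows */
    }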
     
  • Commit dcf6a79dda5cc2a2bec183e50d829030c0972aaa ("write-back: fix
    nr_to_write counter") fixed the nr_to_write counter but didn't set the
    break condition properly.

    If nr_to_write == 0 after being decremented, the loop will run one
    more time before setting done = 1 and breaking out.

    [akpm@linux-foundation.org: coding-style fixes]
    Cc: Artem Bityutskiy
    Acked-by: Nick Piggin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Federico Cuello
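
    The corrected check inside the writeout loop, sketched (this is the
    combined result of this fix and the 04 Feb nr_to_write fix below):

    if (nr_to_write > 0) {
            nr_to_write--;
            if (nr_to_write == 0 && wbc->sync_mode == WB_SYNC_NONE) {
                    /*
                     * Break immediately once the counter hits zero;
                     * previously the loop ran one extra iteration
                     * before setting done = 1.  Integrity sync
                     * (WB_SYNC_ALL) must keep going regardless.
                     */
                    done = 1;
                    break;
            }
    }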
     
  • page_cgroup's page allocation at init/memory hotplug uses kmalloc()
    and vmalloc(). If kmalloc() fails, vmalloc() is used.

    This is because vmalloc() is a very limited resource on 32-bit
    systems, so we want to use kmalloc() first.

    But this kind of fallback call should specify __GFP_NOWARN, so that
    the expected kmalloc() failure doesn't spew a warning.

    Reported-by: Heiko Carstens
    Signed-off-by: KAMEZAWA Hiroyuki
    Acked-by: Balbir Singh
    Acked-by: Pekka Enberg
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
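
    The allocation pattern described above, sketched (simplified from the
    page_cgroup table allocation):

    /* try kmalloc first, but silence its failure warning ... */
    base = kmalloc_node(table_size, GFP_KERNEL | __GFP_NOWARN, nid);
    if (!base)
            base = vmalloc_node(table_size, nid);   /* ... then fall back */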
     
  • When I tested the following program, I found that the mlocked counter
    was strange: it could not free some mlocked pages.

    This is because try_to_unmap_file() doesn't check the real page
    mappings in the vmas.

    That is because the goal of an address_space for a file is to find all
    processes into which the file's specific interval is mapped. It is
    related to the file's interval, not to pages.

    So even if the page isn't really mapped by the vma, try_to_unmap_file()
    returns SWAP_MLOCK since the vma has VM_LOCKED set, and then calls
    try_to_mlock_page. After this the mlocked counter is increased again.

    A COWed anonymous page in a file-backed vma is one such case. This
    patch resolves it.

    -- my test program --

    #include <sys/mman.h>

    int main(void)
    {
            mlockall(MCL_CURRENT);
            return 0;
    }

    -- before --

    root@barrios-target-linux:~# cat /proc/meminfo | egrep 'Mlo|Unev'
    Unevictable: 0 kB
    Mlocked: 0 kB

    -- after --

    root@barrios-target-linux:~# cat /proc/meminfo | egrep 'Mlo|Unev'
    Unevictable: 8 kB
    Mlocked: 8 kB

    Signed-off-by: MinChan Kim
    Acked-by: Lee Schermerhorn
    Acked-by: KOSAKI Motohiro
    Tested-by: Lee Schermerhorn
    Cc: Hugh Dickins
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    MinChan Kim
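
    A hedged sketch of the added check in try_to_unmap_file()'s VM_LOCKED
    scan, assuming the rmap helper page_mapped_in_vma():

    if (vma->vm_flags & VM_LOCKED) {
            /*
             * Only treat the page as mlocked if this vma really maps
             * it: a COWed anon page lies inside the file's mapped
             * interval but is not mapped by the file pte, so without
             * this test SWAP_MLOCK would be returned spuriously and
             * the Mlocked counter bumped a second time.
             */
            if (!page_mapped_in_vma(page, vma))
                    continue;
            mlocked = try_to_mlock_page(page, vma);
            break;
    }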
     
  • We need to pass an unsigned long as the minimum, because it gets cast
    to an unsigned long in the sysctl handler. If we pass an int, we'll
    access four more bytes on 64-bit arches, resulting in a random minimum
    value.

    [rientjes@google.com: fix type of `old_bytes']
    Signed-off-by: Sven Wegener
    Cc: Peter Zijlstra
    Cc: Dave Chinner
    Cc: Christoph Lameter
    Cc: David Rientjes
    Signed-off-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Sven Wegener
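
    Sketched, the minimum must have the exact type the sysctl handler
    dereferences (table fields abridged; names as in kernel/sysctl.c):

    static unsigned long dirty_bytes_min = 2 * PAGE_SIZE;   /* was an int */

    {
            .procname     = "dirty_bytes",
            .data         = &vm_dirty_bytes,
            .maxlen       = sizeof(vm_dirty_bytes),
            .mode         = 0644,
            .proc_handler = &dirty_bytes_handler,
            .extra1       = &dirty_bytes_min, /* read as an unsigned long * */
    },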
     
  • migrate_vmas() should check "vma", not "vma->vm_next", in its for-loop condition.

    Signed-off-by: Daisuke Nishimura
    Cc: Christoph Lameter
    Cc: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Daisuke Nishimura
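
    The fixed loop, sketched:

    for (vma = mm->mmap; vma && !err; vma = vma->vm_next) {
            /* was "vma->vm_next && !err", which skipped the last vma
             * and dereferenced NULL on an empty mm */
            if (vma->vm_ops && vma->vm_ops->migrate) {
                    err = vma->vm_ops->migrate(vma, to, from, flags);
                    if (err)
                            break;
            }
    }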
     
  • Commit 5a6fe125950676015f5108fb71b2a67441755003 brought hugetlbfs more
    in line with the core VM by obeying VM_NORESERVE and not reserving
    hugepages for both shared and private mappings when [SHM|MAP]_NORESERVE
    are specified. However, it is still taking filesystem quota
    unconditionally.

    At fault time, if there are no reserves, an attempt is made to allocate
    the page and account for filesystem quota. If either fails, the fault
    fails. The impact is that quota is getting accounted for twice. This
    patch partially reverts 5a6fe125950676015f5108fb71b2a67441755003. To
    help prevent this mistake from happening again, it improves the
    documentation of hugetlb_reserve_pages().

    Reported-by: Andy Whitcroft
    Signed-off-by: Mel Gorman
    Acked-by: Andy Whitcroft
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

11 Feb, 2009

1 commit

  • When overcommit is disabled, the core VM accounts for pages used by anonymous
    shared, private mappings and special mappings. It keeps track of VMAs that
    should be accounted for with VM_ACCOUNT and VMAs that never had a reserve
    with VM_NORESERVE.

    Overcommit for hugetlbfs is much riskier than overcommit for base pages
    due to contiguity requirements. It avoids overcommitting on both shared
    and private mappings using reservation counters that are checked and
    updated during mmap(). This ensures (within limits) that hugepages
    exist in the future when faults occur; otherwise it would be far too
    easy for applications to be SIGKILLed.

    As hugetlbfs makes its own reservations of a different unit to the base page
    size, VM_ACCOUNT should never be set. Even if the units were correct, we would
    double account for the usage in the core VM and hugetlbfs. VM_NORESERVE may
    be set because an application can request no reserves be made for hugetlbfs
    at the risk of getting killed later.

    With commit fc8744adc870a8d4366908221508bb113d8b72ee, VM_NORESERVE and
    VM_ACCOUNT are getting unconditionally set for hugetlbfs-backed
    mappings. This breaks the accounting for both the core VM and
    hugetlbfs: it can trigger an OOM storm when hugepage pools are too
    small, and produces lockups and corrupted counters otherwise. This
    patch brings hugetlbfs more in line with how the core VM treats
    VM_NORESERVE, but prevents VM_ACCOUNT from being set.

    Signed-off-by: Mel Gorman
    Signed-off-by: Linus Torvalds

    Mel Gorman
     

09 Feb, 2009

1 commit

  • Commit 27421e211a39784694b597dbf35848b88363c248 ("Manually revert
    'mlock: downgrade mmap sem while populating mlocked regions'")
    introduced its own regression: __mlock_vma_pages_range() may report
    an error (for example, -EFAULT from trying to lock down pages from
    beyond EOF), but mlock_vma_pages_range() must hide that from its
    callers as before.

    Reported-by: Sami Farin
    Signed-off-by: Hugh Dickins
    Cc: stable@kernel.org
    Signed-off-by: Linus Torvalds

    Hugh Dickins
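
    The shape of the fix in mlock_vma_pages_range(), sketched: call the
    helper for its side effects and report success regardless:

    __mlock_vma_pages_range(vma, start, end);

    /* Hide errors from mmap() and other callers */
    return 0;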
     

06 Feb, 2009

1 commit

  • Fix do_wp_page for VM_MIXEDMAP mappings.

    In the case where pfn_valid returns 0 for a pfn at the beginning of
    do_wp_page and the mapping is not shared writable, the code branches to
    label `gotten:' with old_page == NULL.

    If the vma is locked (vma->vm_flags & VM_LOCKED), lock_page,
    clear_page_mlock, and unlock_page all dereference old_page.

    This patch checks whether old_page is valid before it is dereferenced.

    The regression was introduced by "mlock: mlocked pages are unevictable"
    (commit b291f000393f5a0b679012b39d79fbc85c018233).

    Signed-off-by: Carsten Otte
    Cc: Nick Piggin
    Cc: Heiko Carstens
    Cc: [2.6.28.x]
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Carsten Otte
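
    The added guard, sketched (do_wp_page()'s VM_LOCKED handling):

    if (old_page && (vma->vm_flags & VM_LOCKED)) {
            lock_page(old_page);    /* for LRU manipulation */
            clear_page_mlock(old_page);
            unlock_page(old_page);
    }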
     

04 Feb, 2009

1 commit

  • Commit 05fe478dd04e02fa230c305ab9b5616669821dd3 introduced some
    @wbc->nr_to_write breakage.

    It made the following changes:
    1. Decrement wbc->nr_to_write instead of nr_to_write
    2. Decrement wbc->nr_to_write _only_ if wbc->sync_mode == WB_SYNC_NONE
    3. If nr_to_write pages have been synced, stop only if wbc->sync_mode
       == WB_SYNC_NONE; otherwise keep going.

    However, according to the commit message, the intention was to make
    only change 3. Change 1 is a bug. Change 2 does not seem to be
    necessary, and it breaks UBIFS expectations, so if needed, it should
    be done separately later; besides, change 2 is not even documented in
    the commit message.

    This patch does the following:
    1. Undo changes 1 and 2.
    2. Add a comment explaining change 3 (it is very useful to have
       comments in the _code_, not only in the commit message).

    Signed-off-by: Artem Bityutskiy
    Acked-by: Nick Piggin
    Cc: Andrew Morton
    Signed-off-by: Linus Torvalds

    Artem Bityutskiy
     

02 Feb, 2009

1 commit

  • This essentially reverts commit 8edb08caf68184fb170f4f69c7445929e199eaea.

    It downgraded our mmap semaphore to a read-lock while mlocking pages, in
    order to allow other threads (and external accesses like "ps" et al) to
    walk the vma lists and take page faults etc. Which is a nice idea, but
    the implementation does not work.

    Because we cannot upgrade the lock back to a write lock without
    releasing the mmap semaphore, the code had to release the lock entirely
    and then re-take it as a writelock. However, that meant that the caller
    possibly lost the vma chain that it was following, since now another
    thread could come in and mmap/munmap the range.

    The code tried to work around that by just looking up the vma again and
    erroring out if that happened, but quite frankly, that was just a buggy
    hack that doesn't actually protect against anything (the other thread
    could just have replaced the vma with another one instead of totally
    unmapping it).

    The only way to downgrade to a read map _reliably_ is to do it at the
    end, which is likely the right thing to do: do all the 'vma' operations
    with the write-lock held, then downgrade to a read after completing them
    all, and then do the "populate the newly mlocked regions" while holding
    just the read lock. And then just drop the read-lock and return to user
    space.

    The (perhaps somewhat simpler) alternative is to just make all the
    callers of mlock_vma_pages_range() know that the mmap lock got dropped,
    and just re-grab the mmap semaphore if it needs to mlock more than one
    vma region.

    So we can do this "downgrade mmap sem while populating mlocked regions"
    thing right, but the way it was done here was absolutely not correct.
    Thus the revert, in the expectation that we will do it all correctly
    some day.

    Cc: Lee Schermerhorn
    Cc: Rik van Riel
    Cc: Andrew Morton
    Cc: stable@kernel.org
    Signed-off-by: Linus Torvalds

    Linus Torvalds
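
    The "downgrade only at the end" pattern described above, sketched with
    the generic rwsem API (not the reverted mlock code):

    down_write(&mm->mmap_sem);
    /* ... all vma-list manipulation, with the write lock held ... */
    downgrade_write(&mm->mmap_sem); /* write -> read, without dropping it */
    /* ... populate/fault in the newly mlocked regions ... */
    up_read(&mm->mmap_sem);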
     

01 Feb, 2009

1 commit

  • The mmap_region() code would temporarily set the VM_ACCOUNT flag for
    anonymous shared mappings just to inform shmem_zero_setup() that it
    should enable accounting for the resulting shm object. It would then
    clear the flag after calling ->mmap (for the /dev/zero case) or doing
    shmem_zero_setup() (for the MAP_ANON case).

    This not only resulted in vma merge issues, but also made for
    unnecessary confusion. Use the already-existing VM_NORESERVE flag for
    this instead, and let shmem_{zero|file}_setup() just figure it out from
    that.

    This also happens to make it obvious that the new DRI2 GEM layer uses a
    non-reserving backing store for its object allocation - which is quite
    possibly not intentional. But since I didn't want to change semantics
    in this patch, I left it alone, and just updated the caller to use the
    new flag semantics.

    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

31 Jan, 2009

1 commit

  • Commit de33c8db5910cda599899dd431cc30d7c1018cbf ("Fix OOPS in
    mmap_region() when merging adjacent VM_LOCKED file segments") unified
    the vma merging of anonymous and file maps to just one place, which
    simplified the code and fixed a use-after-free bug that could cause an
    oops.

    But by doing the merge opportunistically before even having called
    ->mmap() on the file method, it now compares two different 'vm_flags'
    values: the pre-mmap() value of the new not-yet-formed vma, and previous
    mappings of the same file around it.

    And in doing so, it refused to merge the common file case, which adds a
    marker to say "I can be made non-linear".

    This fixes it by just adding a set of flags that don't have to match,
    because we know they are ok to merge. Currently it's only that single
    VM_CAN_NONLINEAR flag, but at least conceptually there could be others
    in the future.

    Reported-and-acked-by: Hugh Dickins
    Cc: Lee Schermerhorn
    Cc: Nick Piggin
    Cc: Andrew Morton
    Cc: Greg KH
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

30 Jan, 2009

4 commits

  • N_POSSIBLE doesn't mean a node has memory, and force_empty can visit
    an invalid node which has no pgdat.

    To visit all valid nodes, N_HIGH_MEMORY should be used.

    Reported-by: Li Zefan
    Signed-off-by: KAMEZAWA Hiroyuki
    Tested-by: Li Zefan
    Cc: Balbir Singh
    Cc: Daisuke Nishimura
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
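
    The shape of the fix, sketched:

    /* before: N_POSSIBLE may include nodes with no memory and no pgdat */
    for_each_node_state(node, N_POSSIBLE)
            /* force_empty scan ... */;

    /* after: only nodes that actually have memory */
    for_each_node_state(node, N_HIGH_MEMORY)
            /* force_empty scan ... */;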
     
  • At swapoff, the commit is currently executed even when try_charge()
    fails. This is a bug which turns the refcnt of cgroup_subsys_state
    negative.

    Reported-by: Li Zefan
    Tested-by: Li Zefan
    Tested-by: Daisuke Nishimura
    Signed-off-by: KAMEZAWA Hiroyuki
    Reviewed-by: Daisuke Nishimura
    Cc: Balbir Singh
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    KAMEZAWA Hiroyuki
     
  • The lifetimes of struct cgroup and struct mem_cgroup are different,
    and mem_cgroup has its own reference count for handling references
    from swap_cgroup.

    This causes the strange problem that the parent mem_cgroup can die
    while a child mem_cgroup is still alive, and this causes a bug in the
    use_hierarchy==1 case because res_counter_uncharge climbs up the tree.

    This patch avoids it by getting the parent at creation time and
    putting it at freeing time.

    Signed-off-by: Daisuke Nishimura
    Reviewed-by: KAMEZAWA Hiroyuki
    Cc: Balbir Singh
    Cc: Pavel Emelyanov
    Cc: Li Zefan
    Cc: Paul Menage
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Daisuke Nishimura
     
  • As of commit ba470de43188cdbff795b5da43a1474523c6c2fb ("mmap: handle
    mlocked pages during map, remap, unmap") we now use the 'vma' variable
    at the end of mmap_region() to handle the page-in of newly mapped
    mlocked pages.

    However, if we merged adjacent vma's together, the vma we're using may
    be stale. We historically consciously avoided using it after the merge
    operation, but that got overlooked when redoing the locked page
    handling.

    This commit simplifies mmap_region() by doing any vma merges early,
    avoiding the issue entirely, and 'vma' will always be valid. As pointed
    out by Hugh Dickins, this depends on any drivers that change the page
    offset or flags to have set one of the VM_SPECIAL bits (so that they
    cannot trigger the early merge logic), but that's true in general.

    Reported-and-tested-by: Maksim Yevmenkin
    Cc: Lee Schermerhorn
    Cc: Nick Piggin
    Cc: Andrew Morton
    Cc: Hugh Dickins
    Signed-off-by: Linus Torvalds

    Linus Torvalds
     

28 Jan, 2009

1 commit

  • The per cpu array of kmem_cache_cpu structures accommodates
    NR_KMEM_CACHE_CPU such structs.

    When this array overflows and a struct is allocated by kmalloc(), it may
    have an address at the upper bound of this array. If this happens, it
    does not get freed and the per cpu kmem_cache_cpu_free pointer will be out
    of bounds after kmem_cache_destroy() or cpu offlining.

    Cc: Christoph Lameter
    Signed-off-by: David Rientjes
    Signed-off-by: Pekka Enberg

    David Rientjes
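
    A hedged sketch of the bound check in free_kmem_cache_cpu(); the key
    point is ">=" rather than ">" at the upper end:

    if (c < per_cpu(kmem_cache_cpu, cpu) ||
        c >= per_cpu(kmem_cache_cpu, cpu) + NR_KMEM_CACHE_CPU) {
            /*
             * c was kmalloc()ed overflow: an address exactly at the
             * array's upper bound lies outside the array, so it must
             * be kfree()d rather than linked back onto the per cpu
             * freelist.
             */
            kfree(c);
            return;
    }
    /* otherwise return c to the static per cpu freelist */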
     

16 Jan, 2009

8 commits

  • (suppose: memcg->use_hierarchy == 0 and memcg->swappiness == 60)

    echo 10 > /memcg/0/swappiness |
    mem_cgroup_swappiness_write() |
    ...                           | echo 1 > /memcg/0/use_hierarchy
                                  | mkdir /mnt/0/1
                                  |   sub_memcg->swappiness = 60;
    memcg->swappiness = 10;       |

    In the above scenario, we end up having 2 different swappiness
    values in a single hierarchy.

    We should hold cgroup_lock() when checking the cgrp->children list.

    Signed-off-by: Li Zefan
    Acked-by: KAMEZAWA Hiroyuki
    Cc: Balbir Singh
    Cc: Paul Menage
    Cc: Daisuke Nishimura
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Li Zefan
     
  • At system boot when creating the top cgroup, mem_cgroup_create() calls
    enable_swap_cgroup(), which is marked as __init, so mark
    mem_cgroup_create() as __ref to avoid a false section mismatch warning.

    Reported-by: Rakib Mullick
    Signed-off-by: Li Zefan
    Acked-by: KAMEZAWA Hiroyuki
    Cc: Balbir Singh
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Li Zefan
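
    The annotation, sketched: __ref tells modpost that the reference from
    this (non-init) function to the __init enable_swap_cgroup() is
    intentional, since it can only happen at boot for the root cgroup:

    static struct cgroup_subsys_state * __ref
    mem_cgroup_create(struct cgroup_subsys *ss, struct cgroup *cont)
    {
            /* ... calls enable_swap_cgroup() only when creating the
             * root cgroup, i.e. during early boot ... */
    }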
     
  • Revert commit e97a630eb0f5b8b380fd67504de6cedebb489003 ("mm: vmalloc use
    mutex for purge")

    Bryan Donlan reports:

    : After testing 2.6.29-rc1 on xen-x86 with a btrfs root filesystem, I
    : got the OOPS quoted below and a hard freeze shortly after boot.
    : Boot messages and config are attached.
    :
    : ------------[ cut here ]------------
    : Kernel BUG at c05ef80d [verbose debug info unavailable]
    : invalid opcode: 0000 [#1] SMP
    : last sysfs file: /sys/block/xvdc/size
    : Modules linked in:
    :
    : Pid: 0, comm: swapper Not tainted (2.6.29-rc1 #6)
    : EIP: 0061:[] EFLAGS: 00010087 CPU: 2
    : EIP is at schedule+0x7cd/0x950
    : EAX: d5aeca80 EBX: 00000002 ECX: 00000000 EDX: d4cb9a40
    : ESI: c12f5600 EDI: d4cb9a40 EBP: d6033fa4 ESP: d6033ef4
    : DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0069
    : Process swapper (pid: 0, ti=d6032000 task=d6020b70 task.ti=d6032000)
    : Stack:
    : 000d85bc 00000000 000186a0 00000000 0dd11410 c0105417 c12efe00 0dc367c3
    : 00000011 c0105d46 d5a5d310 deadbeef d4cb9a40 c07cc600 c05f1340 c12e0060
    : deadbeef d6020b70 d6020d08 00000002 c014377d 00000000 c12f5600 00002c22
    : Call Trace:
    : [] xen_force_evtchn_callback+0x17/0x30
    : [] check_events+0x8/0x12
    : [] _spin_unlock_irqrestore+0x20/0x40
    : [] hrtimer_start_range_ns+0x12d/0x2e0
    : [] tick_nohz_restart_sched_tick+0x146/0x160
    : [] cpu_idle+0xa5/0xc0

    and bisected it to this commit.

    Let's remove it now while we have a think about the problem.

    Reported-by: Bryan Donlan
    Tested-by: Christophe Saout
    Cc: Nick Piggin
    Cc: Ingo Molnar
    Cc: Jeremy Fitzhardinge
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • In the previous implementation, mem_cgroup_try_charge checked the
    return value of mem_cgroup_try_to_free_pages, and just retried if
    some pages had been reclaimed. But now, try_charge() (and
    mem_cgroup_hierarchical_reclaim, called from it) only checks whether
    the usage is less than the limit.

    This patch restores the previous behavior so that oom is triggered
    less frequently.

    Signed-off-by: Daisuke Nishimura
    Acked-by: Balbir Singh
    Acked-by: KAMEZAWA Hiroyuki
    Cc: Pavel Emelyanov
    Cc: Li Zefan
    Cc: Paul Menage
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Daisuke Nishimura
     
  • If root_mem has no children, last_scanned_child is set to root_mem
    itself. But after some children have been added to root_mem,
    mem_cgroup_get_next_node can mem_cgroup_put() root_mem even though
    root_mem was never mem_cgroup_get().

    This patch fixes this behavior by:

    - Setting last_scanned_child to NULL if root_mem has no children, or
      if the DFS search has returned to root_mem itself (root_mem is not
      a "child" of root_mem). mem_cgroup_get_first_node returns root_mem
      in this case. There are no mem_cgroup_get/put calls for root_mem.

    - Renaming mem_cgroup_get_next_node to __mem_cgroup_get_next_node,
      and mem_cgroup_get_first_node to mem_cgroup_get_next_node, and
      making mem_cgroup_hierarchical_reclaim call only the new
      mem_cgroup_get_next_node.

    Signed-off-by: Daisuke Nishimura
    Cc: Balbir Singh
    Cc: KAMEZAWA Hiroyuki
    Cc: Pavel Emelyanov
    Cc: Li Zefan
    Cc: Paul Menage
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Daisuke Nishimura
     
  • There is a bug in the error path of mem_cgroup_move_parent.

    The extra refcnt taken by try_charge should be dropped, and the usage
    incremented by try_charge should be decremented, in both error paths:

    A: failure at get_page_unless_zero
    B: failure at isolate_lru_page

    This bug makes the parent directory unremovable.

    In case of A, rmdir doesn't return, because res.usage doesn't go down
    to 0 at mem_cgroup_force_empty even after all the pc in the lru are
    removed.

    In case of B, rmdir fails and returns -EBUSY, because it holds extra
    ref counts even after res.usage goes down to 0.

    Signed-off-by: Daisuke Nishimura
    Acked-by: KAMEZAWA Hiroyuki
    Acked-by: Balbir Singh
    Cc: Pavel Emelyanov
    Cc: Li Zefan
    Cc: Paul Menage
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Daisuke Nishimura
     
  • In the swapin case, a new page is added to the lru before it is
    charged, so page->pc->mem_cgroup points either to NULL or to the last
    mem_cgroup the page was charged to.

    In the latter case, if the mem_cgroup has already been freed by rmdir,
    the area pointed to by page->pc->mem_cgroup may contain invalid data.

    In fact, I saw a general protection fault:

    general protection fault: 0000 [#1] SMP
    last sysfs file: /sys/devices/system/cpu/cpu15/cache/index1/shared_cpu_map
    CPU 4
    Modules linked in: ipt_REJECT xt_tcpudp iptable_filter ip_tables x_tables bridge stp ipv6 autofs4 hidp rfcomm l2cap bluetooth sunrpc dm_mirror dm_region_hash dm_log dm_multipath dm_mod rfkill input_polldev sbs sbshc battery ac lp sg ide_cd_mod cdrom button serio_raw acpi_memhotplug parport_pc e1000 rtc_cmos parport rtc_core rtc_lib i2c_i801 i2c_core shpchp pcspkr ata_piix libata megaraid_mbox megaraid_mm sd_mod scsi_mod ext3 jbd ehci_hcd ohci_hcd uhci_hcd [last unloaded: microcode]
    Pid: 26038, comm: page01 Tainted: G W 2.6.28-rc9-mm1-mmotm-2008-12-22-16-14-f2ab3dea #1
    RIP: 0010:[] [] update_page_reclaim_stat+0x2f/0x42
    RSP: 0000:ffff8801ee457da8 EFLAGS: 00010002
    RAX: 32353438312021c8 RBX: 0000000000000000 RCX: 32353438312021c8
    RDX: 0000000000000000 RSI: ffff8800cb0b1000 RDI: ffff8801164d1d28
    RBP: ffff880110002cb8 R08: ffff88010f2eae23 R09: 0000000000000001
    R10: ffff8800bc514b00 R11: ffff880110002c00 R12: 0000000000000000
    R13: ffff88000f484100 R14: 0000000000000003 R15: 00000000001200d2
    FS: 00007f8a261726f0(0000) GS:ffff88010f2eaa80(0000) knlGS:0000000000000000
    CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
    CR2: 00007f8a25d22000 CR3: 00000001ef18c000 CR4: 00000000000006e0
    DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
    Process page01 (pid: 26038, threadinfo ffff8801ee456000, task ffff8800b585b960)
    Stack:
    ffffe200071ee568 ffff880110001f00 0000000000000000 ffffffff8028ea17
    ffff88000f484100 0000000000000000 0000000000000020 00007f8a25d22000
    ffff8800bc514b00 ffffffff8028ec34 0000000000000000 0000000000016fd8
    Call Trace:
    [] ? ____pagevec_lru_add+0xc1/0x13c
    [] ? drain_cpu_pagevecs+0x36/0x89
    [] ? swapin_readahead+0x78/0x98
    [] ? handle_mm_fault+0x3d9/0x741
    [] ? do_page_fault+0x3ce/0x78c
    [] ? trace_hardirqs_off_thunk+0x3a/0x3c
    [] ? page_fault+0x1f/0x30
    Code: cc 55 48 8d af b8 0d 00 00 48 89 f7 53 89 d3 e8 39 85 02 00 48 63 d3 48 ff 44 d5 10 45 85 e4 74 05 48 ff 44 d5 00 48 85 c0 74 0e ff 44 d0 10 45 85 e4 74 04 48 ff 04 d0 5b 5d 41 5c c3 41 54
    RIP [] update_page_reclaim_stat+0x2f/0x42
    RSP

    Signed-off-by: Daisuke Nishimura
    Acked-by: KAMEZAWA Hiroyuki
    Cc: Balbir Singh
    Cc: Pavel Emelyanov
    Cc: Li Zefan
    Cc: Paul Menage
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Daisuke Nishimura
     
  • On alpha, we have to map some stuff in the VMALLOC space very early in the
    boot process (to make SRM console callbacks work and so on, see
    arch/alpha/mm/init.c). With the old VM allocator, we just manually
    placed a vm_struct onto the global vmlist, and this worked for ages.

    Unfortunately, the new allocator isn't aware of this, so it constantly
    tries to allocate the VM space which is already in use, making vmalloc on
    alpha defunct.

    This patch forces KVA to import vmlist entries on init.

    [akpm@linux-foundation.org: remove unneeded check (per Johannes)]
    Signed-off-by: Ivan Kokshaysky
    Cc: Nick Piggin
    Cc: Johannes Weiner
    Cc: Richard Henderson
    Cc: Johannes Weiner
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ivan Kokshaysky
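
    The import loop, sketched (simplified from vmalloc_init(); how the
    vmap_area itself is allocated at that point in boot is an assumption
    here):

    struct vm_struct *tmp;
    struct vmap_area *va;

    /* Import existing vmlist entries, so the new allocator knows about
     * regions that were mapped before it came up. */
    for (tmp = vmlist; tmp; tmp = tmp->next) {
            va = alloc_bootmem(sizeof(struct vmap_area));   /* assumed */
            va->flags = tmp->flags | VM_VM_AREA;
            va->va_start = (unsigned long)tmp->addr;
            va->va_end = va->va_start + tmp->size;
            __insert_vmap_area(va);
    }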
     
