08 Apr, 2014

2 commits

  • vm_map_ram() has a fragmentation problem: it cannot purge a chunk
    (i.e., a 4MB address space) if there is a pinning object in that
    address space, so it can easily consume the entire VMALLOC address
    space.

    We could fix the fragmentation problem by using vmap() instead of
    vm_map_ram(), but vmap() is known to be slow compared to vm_map_ram().
    Minchan reported that vm_map_ram() is five times faster than vmap() in
    his tests. So I thought we should fix the fragmentation problem of
    vm_map_ram(), because our proprietary GPU driver uses it heavily.

    On second thought, it's not easy, because we would have to reuse freed
    space to solve the problem, and that would mean more IPIs and bitmap
    operations for searching holes. That would undermine the API's goal of
    very fast mapping. And the fragmentation problem doesn't even show up
    on 64-bit machines.

    Another option is for the user to separate long-lived and short-lived
    objects, using vmap() for long-lived objects and vm_map_ram() for
    short-lived ones. If we document the characteristics of vm_map_ram(),
    users can choose between the two according to page lifetime.

    Let's add a notice about this for users.

    [akpm@linux-foundation.org: tweak comment text]
    Signed-off-by: Gioh Kim
    Reviewed-by: Zhang Yanfei
    Cc: Minchan Kim
    Cc: Johannes Weiner
    Cc: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Gioh Kim
     
  • To increase compiler portability there is <linux/compiler.h>, which
    provides convenience macros for various gcc constructs, e.g. __weak
    for __attribute__((weak)). I've replaced all instances of gcc
    attributes with the right macro in the memory management (/mm)
    subsystem.
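    To illustrate the construct being wrapped, here is a minimal userspace
    sketch (gcc/clang on ELF targets, not kernel code): a weak declaration
    with no definition resolves to a NULL address at link time, so callers
    can probe for an optional hook.

    ```c
    #include <stdio.h>

    /* __weak in the kernel expands to __attribute__((weak)). With only a
     * weak declaration and no definition anywhere, the symbol's address
     * resolves to NULL at link time, so callers can test for it. */
    void hook(void) __attribute__((weak));

    int main(void)
    {
        if (hook)
            hook();
        else
            printf("hook not provided\n");
        return 0;
    }
    ```

    Linking in a strong definition of hook() elsewhere would make the
    branch call it instead.
    
    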

    [akpm@linux-foundation.org: while-we're-there consistency tweaks]
    Signed-off-by: Gideon Israel Dsouza
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Gideon Israel Dsouza
     

28 Jan, 2014

1 commit

  • Revert commit ece86e222db4, which was intended as a small performance
    improvement.

    Despite the claim that the patch doesn't introduce any functional
    changes, in fact it does.

    The "no page" path behaves differently now. Originally,
    vmalloc_to_page might return NULL under some conditions; with the new
    implementation it returns pfn_to_page(0), which is not the same as
    NULL.

    Simple test shows the difference.

    test.c

    #include <linux/module.h>
    #include <linux/init.h>
    #include <linux/vmalloc.h>
    #include <linux/mm.h>

    int __init myi(void)
    {
            struct page *p;
            void *v;

            v = vmalloc(PAGE_SIZE);
            /* trigger the "no page" path in vmalloc_to_page */
            vfree(v);

            p = vmalloc_to_page(v);

            pr_err("expected val = NULL, returned val = %p", p);

            return -EBUSY;
    }

    void __exit mye(void)
    {
    }

    module_init(myi);
    module_exit(mye);

    Before interchange:
    expected val = NULL, returned val = (null)

    After interchange:
    expected val = NULL, returned val = c7ebe000

    Signed-off-by: Vladimir Murzin
    Cc: Jianyu Zhan
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    malc
     

22 Jan, 2014

1 commit

  • Currently we are implementing vmalloc_to_pfn() as a wrapper around
    vmalloc_to_page(), which is implemented as follow:

    1. walk the page tables to generate the corresponding pfn,
    2. convert the pfn to a struct page,
    3. return it.

    And vmalloc_to_pfn() re-wraps vmalloc_to_page() to get the pfn.

    This seems circuitous, so this patch reverses the layering: implement
    vmalloc_to_page() as a wrapper around vmalloc_to_pfn(). This makes
    both vmalloc_to_pfn() and vmalloc_to_page() slightly more efficient.

    No functional change.
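    The new layering can be sketched in userspace with toy stand-ins
    (PAGE_SHIFT, a fake mem_map, and a trivial "page-table walk" are all
    assumptions here, not the kernel's real implementations):

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12

    /* toy stand-ins for the kernel's struct page and mem_map */
    struct page { int dummy; };
    static struct page mem_map[1024];

    static struct page *pfn_to_page(unsigned long pfn)
    {
        return &mem_map[pfn];
    }

    /* the real function walks the page tables; a shift models that here */
    static unsigned long vmalloc_to_pfn(const void *addr)
    {
        return (uintptr_t)addr >> PAGE_SHIFT;
    }

    /* after the patch: vmalloc_to_page() wraps vmalloc_to_pfn(),
     * instead of the other way around */
    static struct page *vmalloc_to_page(const void *addr)
    {
        return pfn_to_page(vmalloc_to_pfn(addr));
    }

    int main(void)
    {
        const void *addr = (const void *)(uintptr_t)(7UL << PAGE_SHIFT);
        assert(vmalloc_to_page(addr) == &mem_map[7]);
        printf("page index = %ld\n", (long)(vmalloc_to_page(addr) - mem_map));
        return 0;
    }
    ```
    
    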

    Signed-off-by: Jianyu Zhan
    Cc: Vladimir Murzin
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jianyu Zhan
     

13 Nov, 2013

6 commits

  • Commit 248ac0e1943a ("mm/vmalloc: remove guard page from between vmap
    blocks") had the side effect of making the vmap_area.va_end member
    point at the next vmap_area's va_start. This created artificial
    references to vmalloc'ed objects, so kmemleak rarely reported
    vmalloc() leaks.

    This patch marks the vmap_area containing pointers explicitly and
    reduces the min ref_count to 2 as vm_struct still contains a reference
    to the vmalloc'ed object. The kmemleak add_scan_area() function has
    been improved to allow a SIZE_MAX argument covering the rest of the
    object (for simpler calling sites).

    Signed-off-by: Catalin Marinas
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Catalin Marinas
     
  • Don't warn twice in __vmalloc_area_node() and __vmalloc_node_range()
    when the __vmalloc_area_node() allocation fails. This patch reverts
    commit 46c001a2753f ("mm/vmalloc.c: emit the failure message before
    return").

    Signed-off-by: Wanpeng Li
    Reviewed-by: Zhang Yanfei
    Cc: Joonsoo Kim
    Cc: KOSAKI Motohiro
    Cc: Mitsuo Hayasaka
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wanpeng Li
     
  • The VM_UNINITIALIZED/VM_UNLIST flag introduced by f5252e009d5b ("mm:
    avoid null pointer access in vm_struct via /proc/vmallocinfo") is used
    to avoid accessing the pages field with unallocated page when
    show_numa_info() is called.

    This patch moves the check to just before show_numa_info() so that
    some messages can still be dumped via /proc/vmallocinfo. It reverts
    commit d157a55815ff ("mm/vmalloc.c: check VM_UNINITIALIZED flag in
    s_show instead of show_numa_info").

    Reviewed-by: Zhang Yanfei
    Signed-off-by: Wanpeng Li
    Cc: Mitsuo Hayasaka
    Cc: Joonsoo Kim
    Cc: KOSAKI Motohiro
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wanpeng Li
     
  • There is a race window between vmap_area tear-down and showing
    vmap_area information:

    CPU A                                 CPU B

    remove_vm_area
      spin_lock(&vmap_area_lock);
      va->vm = NULL;
      va->flags &= ~VM_VM_AREA;
      spin_unlock(&vmap_area_lock);
                                          spin_lock(&vmap_area_lock);
                                          if (va->flags & (VM_LAZY_FREE |
                                              VM_LAZY_FREEING))
                                                  return 0;
                                          if (!(va->flags & VM_VM_AREA)) {
                                                  seq_printf(m,
                                                      "0x%pK-0x%pK %7ld vm_map_ram\n",
                                                      (void *)va->va_start,
                                                      (void *)va->va_end,
                                                      va->va_end - va->va_start);
                                                  return 0;
                                          }
    free_unmap_vmap_area(va);
      flush_cache_vunmap
        free_unmap_vmap_area_noflush
          unmap_vmap_area
            free_vmap_area_noflush
              va->flags |= VM_LAZY_FREE

    The assumption !VM_VM_AREA represents vm_map_ram allocation is
    introduced by d4033afdf828 ("mm, vmalloc: iterate vmap_area_list,
    instead of vmlist, in vmallocinfo()").

    However, !VM_VM_AREA can also mean the vmap_area is being torn down,
    in the race window shown above. This patch fixes it by not dumping any
    information for the !VM_VM_AREA case, and also removes the
    (VM_LAZY_FREE | VM_LAZY_FREEING) check, since those flags are not
    possible in the !VM_VM_AREA case.

    Suggested-by: Joonsoo Kim
    Acked-by: KOSAKI Motohiro
    Signed-off-by: Wanpeng Li
    Cc: Mitsuo Hayasaka
    Cc: Zhang Yanfei
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wanpeng Li
     
  • The caller address has already been set in set_vmalloc_vm(), there's no
    need to set it again in __vmalloc_area_node.

    Reviewed-by: Zhang Yanfei
    Signed-off-by: Wanpeng Li
    Cc: Joonsoo Kim
    Cc: KOSAKI Motohiro
    Cc: Mitsuo Hayasaka
    Cc: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Wanpeng Li
     
  • Use more appropriate "if (node == NUMA_NO_NODE)" instead of "if (node < 0)"

    Signed-off-by: Jianguo Wu
    Acked-by: David Rientjes
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jianguo Wu
     

12 Sep, 2013

3 commits


10 Jul, 2013

9 commits

  • When searching for a vmap area in the vmalloc space, we use (addr +
    size - 1) to check whether the value wraps around, i.e. becomes less
    than addr. But we assign (addr + size) to vmap_area->va_end.

    So if we come across the below case:

    (addr + size - 1) : not overflow
    (addr + size) : overflow

    we will assign an overflowed value (e.g. 0) to vmap_area->va_end, and
    this will trigger the BUG in __insert_vmap_area, causing a system
    panic.

    So (addr + size), not (addr + size - 1), is the correct value to check
    for overflow.
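    The gap between the two checks can be shown with plain unsigned
    arithmetic in userspace (the helper names below are illustrative, not
    kernel functions):

    ```c
    #include <stdio.h>

    /* the old check: does (addr + size - 1) wrap past addr? */
    static int old_check_overflows(unsigned long addr, unsigned long size)
    {
        return addr + size - 1 < addr;
    }

    /* the fixed check: does (addr + size), the value stored in va_end, wrap? */
    static int new_check_overflows(unsigned long addr, unsigned long size)
    {
        return addr + size < addr;
    }

    int main(void)
    {
        unsigned long addr = ~0UL - 9;  /* 10 bytes below the top of the space */
        unsigned long size = 10;

        /* (addr + size - 1) lands exactly on ~0UL, so the old check misses
         * the problem, yet va_end = addr + size wraps to 0 */
        printf("old check catches it: %d\n", old_check_overflows(addr, size));
        printf("new check catches it: %d\n", new_check_overflows(addr, size));
        printf("va_end would be:      %lu\n", addr + size);
        return 0;
    }
    ```
    
    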

    Signed-off-by: Zhang Yanfei
    Reported-by: Ghennadi Procopciuc
    Tested-by: Daniel Baluta
    Cc: David Rientjes
    Cc: Minchan Kim
    Cc: KOSAKI Motohiro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Zhang Yanfei
     
  • vfree() only needs schedule_work(&p->wq) if p->list was empty;
    otherwise vfree_deferred->wq is already pending, or it is running and
    hasn't done llist_del_all() yet.
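    The pattern relies on the lock-free list push reporting whether the
    list was empty beforehand. A minimal C11-atomics sketch of that
    behaviour (a userspace model, not the kernel's llist implementation):

    ```c
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    struct node { struct node *next; };

    static _Atomic(struct node *) head;

    /* Lock-free push; returns true only when the list was empty before,
     * mirroring the kernel llist_add() return value. */
    static bool llist_add(struct node *n)
    {
        struct node *first = atomic_load(&head);
        do {
            n->next = first;
        } while (!atomic_compare_exchange_weak(&head, &first, n));
        return first == NULL;
    }

    int main(void)
    {
        struct node a, b;
        /* only the first push needs to "schedule the work" */
        printf("first push schedules work:  %d\n", llist_add(&a));
        printf("second push schedules work: %d\n", llist_add(&b));
        return 0;
    }
    ```

    The worker that later empties the list (llist_del_all() in the
    kernel) resets the head, so the next push schedules work again.
    
    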

    Signed-off-by: Oleg Nesterov
    Cc: Al Viro
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • We should check the VM_UNINITIALIZED flag in s_show(). If this flag is
    set, the vm_struct is not fully initialized, so it is pointless to try
    to show the information it contains.

    We checked this flag in show_numa_info(), but I think it's better to
    check it earlier.

    Signed-off-by: Zhang Yanfei
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Zhang Yanfei
     
  • VM_UNLIST was used to indicate that the vm_struct is not listed in
    vmlist.

    But after commit 4341fa454796 ("mm, vmalloc: remove list management of
    vmlist after initializing vmalloc"), the meaning of this flag changed.
    It now means the vm_struct is not fully initialized. So renaming it to
    VM_UNINITIALIZED seems more reasonable.

    Also change clear_vm_unlist to clear_vm_uninitialized_flag.

    Signed-off-by: Zhang Yanfei
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Zhang Yanfei
     
  • Use goto to jump to the fail label to give a failure message before
    returning NULL. This makes the failure handling in this function
    consistent.

    Signed-off-by: Zhang Yanfei
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Zhang Yanfei
     
  • Now that the dead code in vb_alloc() has been removed, there is no
    remaining user of alloc_map, so there is no reason to maintain it in
    vmap_block.

    Signed-off-by: Zhang Yanfei
    Cc: Johannes Weiner
    Cc: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Zhang Yanfei
     
  • This function is nowhere used now, so remove it.

    Signed-off-by: Zhang Yanfei
    Cc: Johannes Weiner
    Cc: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Zhang Yanfei
     
  • Space in a vmap block that was once allocated is considered dirty and
    not made available for allocation again before the whole block is
    recycled. The result is that free space within a vmap block is always
    contiguous.

    So if a vmap block has enough free space for an allocation, the
    allocation cannot fail. Thus the fragmented-block purging was never
    invoked from vb_alloc(), so remove this dead code.
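    The "dirty, never reused" policy means a block behaves like a bump
    allocator. A tiny model of that invariant (the struct and function
    names below are illustrative, not the kernel's vmap_block):

    ```c
    #include <stdio.h>

    /* allocations only ever bump 'used'; freed space is dirty and is not
     * handed out again until the whole block is recycled, so the free
     * region is always the contiguous tail [used, size) */
    struct block { unsigned long used, size; };

    static long vb_alloc_model(struct block *b, unsigned long n)
    {
        if (b->size - b->used < n)
            return -1;      /* the contiguous free tail is simply too small */
        long off = (long)b->used;
        b->used += n;
        return off;
    }

    int main(void)
    {
        struct block b = { .used = 0, .size = 16 };
        printf("%ld\n", vb_alloc_model(&b, 8));
        printf("%ld\n", vb_alloc_model(&b, 8));
        printf("%ld\n", vb_alloc_model(&b, 1));
        return 0;
    }
    ```

    Because failure only happens when the tail is too small, no amount of
    "purging fragmentation" inside the block could have helped, which is
    why that path was dead.
    
    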

    [ Same patches also sent by:

    Chanho Min
    Johannes Weiner

    but git doesn't do "multiple authors" ]

    Signed-off-by: Zhang Yanfei
    Cc: Johannes Weiner
    Cc: Mel Gorman
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Zhang Yanfei
     
  • There is an extra semi-colon so the function always returns.

    Signed-off-by: Dan Carpenter
    Acked-by: Zhang Yanfei
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Dan Carpenter
     

04 Jul, 2013

6 commits

  • Signed-off-by: Zhang Yanfei
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Zhang Yanfei
     
  • This function is no longer used anywhere, so we can remove it
    directly.

    Signed-off-by: Zhang Yanfei
    Acked-by: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Zhang Yanfei
     
  • Here we pass flags with only the VM_ALLOC bit set, so it is
    unnecessary to call clear_vm_unlist to clear the VM_UNLIST bit. Use
    setup_vmalloc_vm instead of insert_vmalloc_vm.

    Signed-off-by: Zhang Yanfei
    Acked-by: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Zhang Yanfei
     
  • Now insert_vmalloc_vm only calls two functions:

    - setup_vmalloc_vm: fill vm_struct and vmap_area instances
    - clear_vm_unlist: clear VM_UNLIST bit in vm_struct->flags

    So in __get_vm_area_node(), if the VM_UNLIST bit is unset in flags
    (the else branch here), we don't need to clear the VM_UNLIST bit in
    vm->flags, since it is obviously not set. That is to say, we can just
    call setup_vmalloc_vm instead of insert_vmalloc_vm, and then remove
    the if test entirely.

    Signed-off-by: Zhang Yanfei
    Acked-by: Joonsoo Kim
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Zhang Yanfei
     
  • We want to allocate the ELF note segment buffer on the 2nd kernel in
    vmalloc space and remap it to user-space, in order to reduce the risk
    that the memory allocation fails on systems with a huge number of
    CPUs, and hence a huge ELF note segment exceeding the order-11 block
    size.

    Although remap_vmalloc_range already exists for remapping vmalloc
    memory to user-space, it requires the user-space range to be specified
    via a vma. Mmap on /proc/vmcore needs to remap a range across multiple
    objects, so an interface that requires the vma to cover the full range
    is problematic.

    This patch introduces remap_vmalloc_range_partial that receives user-space
    range as a pair of base address and size and can be used for mmap on
    /proc/vmcore case.

    remap_vmalloc_range is rewritten using remap_vmalloc_range_partial.

    [akpm@linux-foundation.org: use PAGE_ALIGNED()]
    Signed-off-by: HATAYAMA Daisuke
    Cc: KOSAKI Motohiro
    Cc: Vivek Goyal
    Cc: Atsushi Kumagai
    Cc: Lisa Mitchell
    Cc: Zhang Yanfei
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    HATAYAMA Daisuke
     
  • Currently, __find_vmap_area searches for the kernel VM area starting
    at a given address. This patch changes that behavior so that it
    searches for the kernel VM area to which the address belongs. The
    change is needed by remap_vmalloc_range_partial, introduced in a later
    patch, which can receive any position within a kernel VM area as its
    target address.

    This patch changes the condition (addr > va->va_start) to the equivalent
    (addr >= va->va_end) by taking advantage of the fact that each kernel VM
    area is non-overlapping.
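    The difference between "starts at addr" and "contains addr" can be
    sketched with a linear stand-in for the rbtree walk (the array and
    function below are illustrative, not kernel code):

    ```c
    #include <stdio.h>

    struct vmap_area { unsigned long va_start, va_end; };

    /* non-overlapping, sorted areas, as in the vmalloc address space */
    static struct vmap_area areas[] = {
        { 0x1000, 0x3000 },
        { 0x5000, 0x6000 },
    };

    /* containment search: addr anywhere inside [va_start, va_end) matches,
     * not just addr == va_start */
    static struct vmap_area *find_containing(unsigned long addr)
    {
        for (unsigned int i = 0; i < sizeof(areas) / sizeof(areas[0]); i++) {
            if (addr >= areas[i].va_start && addr < areas[i].va_end)
                return &areas[i];
        }
        return NULL;
    }

    int main(void)
    {
        printf("0x2000 found: %d\n", find_containing(0x2000) != NULL);
        printf("0x4000 found: %d\n", find_containing(0x4000) != NULL);
        return 0;
    }
    ```

    Because the areas never overlap, at most one area can contain a given
    address, so the containment search is still unambiguous.
    
    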

    Signed-off-by: HATAYAMA Daisuke
    Acked-by: KOSAKI Motohiro
    Cc: Vivek Goyal
    Cc: Atsushi Kumagai
    Cc: Lisa Mitchell
    Cc: Zhang Yanfei
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    HATAYAMA Daisuke
     

08 May, 2013

1 commit


02 May, 2013

1 commit

  • Pull VFS updates from Al Viro:

    Misc cleanups all over the place, mainly wrt /proc interfaces (switch
    create_proc_entry to proc_create(), get rid of the deprecated
    create_proc_read_entry() in favor of using proc_create_data() and
    seq_file etc).

    7kloc removed.

    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (204 commits)
    don't bother with deferred freeing of fdtables
    proc: Move non-public stuff from linux/proc_fs.h to fs/proc/internal.h
    proc: Make the PROC_I() and PDE() macros internal to procfs
    proc: Supply a function to remove a proc entry by PDE
    take cgroup_open() and cpuset_open() to fs/proc/base.c
    ppc: Clean up scanlog
    ppc: Clean up rtas_flash driver somewhat
    hostap: proc: Use remove_proc_subtree()
    drm: proc: Use remove_proc_subtree()
    drm: proc: Use minor->index to label things, not PDE->name
    drm: Constify drm_proc_list[]
    zoran: Don't print proc_dir_entry data in debug
    reiserfs: Don't access the proc_dir_entry in r_open(), r_start() r_show()
    proc: Supply an accessor for getting the data from a PDE's parent
    airo: Use remove_proc_subtree()
    rtl8192u: Don't need to save device proc dir PDE
    rtl8187se: Use a dir under /proc/net/r8180/
    proc: Add proc_mkdir_data()
    proc: Move some bits from linux/proc_fs.h to linux/{of.h,signal.h,tty.h}
    proc: Move PDE_NET() to fs/proc/proc_net.c
    ...

    Linus Torvalds
     

30 Apr, 2013

8 commits

  • Now, vmap_area_list is exported as VMCOREINFO so that makedumpfile can
    get the start address of the vmalloc region (vmalloc_start). The
    address holding the vmalloc_start value is computed as:

    vmap_area_list.next - OFFSET(vmap_area.list) + OFFSET(vmap_area.va_start)

    However, neither OFFSET(vmap_area.va_start) nor OFFSET(vmap_area.list)
    is exported as VMCOREINFO.

    So this patch exports them, with a small cleanup.
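    The offset arithmetic is ordinary container_of()-style pointer math; a
    userspace sketch with simplified struct layouts (field order here is
    an assumption for illustration):

    ```c
    #include <assert.h>
    #include <stddef.h>
    #include <stdio.h>

    struct list_head { struct list_head *next, *prev; };

    struct vmap_area {
        unsigned long va_start;
        unsigned long va_end;
        struct list_head list;
    };

    /* given a pointer to the embedded list member (what vmap_area_list.next
     * points at), step back to the enclosing vmap_area and over to va_start:
     *   p - OFFSET(vmap_area.list) + OFFSET(vmap_area.va_start) */
    static unsigned long recover_va_start(struct list_head *p)
    {
        return *(unsigned long *)((char *)p
                - offsetof(struct vmap_area, list)
                + offsetof(struct vmap_area, va_start));
    }

    int main(void)
    {
        struct vmap_area area = { .va_start = 0xf0000000UL };

        assert(recover_va_start(&area.list) == area.va_start);
        printf("recovered va_start = %#lx\n", recover_va_start(&area.list));
        return 0;
    }
    ```

    makedumpfile does the same computation on the dump image, which is why
    it needs both offsets exported.
    
    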

    [akpm@linux-foundation.org: vmalloc.h should include list.h for list_head]
    Signed-off-by: Atsushi Kumagai
    Cc: Joonsoo Kim
    Cc: Joonsoo Kim
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Atsushi Kumagai
    Cc: Chris Metcalf
    Cc: Dave Anderson
    Cc: Eric Biederman
    Cc: Guan Xuetao
    Cc: Ingo Molnar
    Cc: Vivek Goyal
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Atsushi Kumagai
     
  • Now, there is no need to maintain vmlist after initializing vmalloc. So
    remove related code and data structure.

    Signed-off-by: Joonsoo Kim
    Signed-off-by: Joonsoo Kim
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Atsushi Kumagai
    Cc: Chris Metcalf
    Cc: Dave Anderson
    Cc: Eric Biederman
    Cc: Guan Xuetao
    Cc: Ingo Molnar
    Cc: Vivek Goyal
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • Although our intention is to unexport the internal structures
    entirely, there is one exception, for kexec: kexec dumps the address
    of vmlist, and makedumpfile uses this information.

    We are about to remove vmlist, then another way to retrieve information
    of vmalloc layer is needed for makedumpfile. For this purpose, we
    export vmap_area_list, instead of vmlist.

    Signed-off-by: Joonsoo Kim
    Signed-off-by: Joonsoo Kim
    Cc: Eric Biederman
    Cc: Dave Anderson
    Cc: Vivek Goyal
    Cc: Atsushi Kumagai
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Chris Metcalf
    Cc: Guan Xuetao
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • This patch is a preparatory step for removing vmlist entirely. To
    that end, we change code that iterates vmlist to iterate
    vmap_area_list instead. It is a fairly trivial change, but one thing
    should be noted.

    Using vmap_area_list in vmallocinfo() introduces an ordering problem
    on SMP systems. In s_show() we retrieve some values from the
    vm_struct, but the vm_struct's fields are not fully set up at the time
    va->vm is assigned. Full setup is signalled by clearing the VM_UNLIST
    flag, without holding a lock. When we see that VM_UNLIST has been
    cleared, it is not guaranteed that the vm_struct's fields are visible
    with proper values on other CPUs. So we need smp_[rw]mb to ensure that
    proper values are observed once VM_UNLIST is seen cleared.

    Therefore, this patch not only changes the iteration list, but also
    adds the appropriate smp_[rw]mb in the right places.
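    The publish/observe pattern has a direct C11 analogue: release on the
    flag-clearing store, acquire on the flag load (the struct and function
    names below are a userspace model, not the kernel's barriers):

    ```c
    #include <stdatomic.h>
    #include <stdio.h>

    #define VM_UNLIST 0x1u

    struct vm_struct_model {
        unsigned long size;         /* plain field, published via the flag */
        _Atomic unsigned int flags;
    };

    static struct vm_struct_model vm = { .flags = VM_UNLIST };

    /* writer: fill the fields, then clear VM_UNLIST with release ordering
     * (standing in for the smp_wmb() before the flag clear) */
    static void publish(unsigned long size)
    {
        vm.size = size;
        atomic_fetch_and_explicit(&vm.flags, ~VM_UNLIST, memory_order_release);
    }

    /* reader: load the flag with acquire ordering (standing in for the
     * smp_rmb()) before trusting the plain fields */
    static int try_show(void)
    {
        if (atomic_load_explicit(&vm.flags, memory_order_acquire) & VM_UNLIST)
            return -1;              /* not fully set up yet */
        printf("size = %lu\n", vm.size);
        return 0;
    }

    int main(void)
    {
        publish(4096);
        return try_show();
    }
    ```

    Without the release/acquire pairing, another CPU could see the flag
    cleared yet still read a stale size, which is exactly the window the
    patch closes.
    
    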

    Signed-off-by: Joonsoo Kim
    Signed-off-by: Joonsoo Kim
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Atsushi Kumagai
    Cc: Chris Metcalf
    Cc: Dave Anderson
    Cc: Eric Biederman
    Cc: Guan Xuetao
    Cc: Ingo Molnar
    Cc: Vivek Goyal
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • This patch is a preparatory step for removing vmlist entirely. To
    that end, we change code that iterates vmlist to iterate
    vmap_area_list instead. It is a fairly trivial change, but one thing
    should be noted.

    vmlist lacks information about some areas in the vmalloc address
    space. For example, vm_map_ram() allocates areas in the vmalloc
    address space but does not link them into vmlist. Providing full
    information about the vmalloc address space is the better idea, so we
    stop using va->vm and use the vmap_area directly. This makes
    get_vmalloc_info() more precise.

    Signed-off-by: Joonsoo Kim
    Signed-off-by: Joonsoo Kim
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Atsushi Kumagai
    Cc: Chris Metcalf
    Cc: Dave Anderson
    Cc: Eric Biederman
    Cc: Guan Xuetao
    Cc: Ingo Molnar
    Cc: Vivek Goyal
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • Now, while vmap_area_lock is held, va->vm cannot be discarded, so we
    can safely access va->vm when iterating vmap_area_list with the lock
    held. With this property, change the vmlist iteration in
    vread/vwrite() to iterate vmap_area_list instead.

    There is one small difference related to locking: vmlist_lock is a
    mutex, but vmap_area_lock is a spinlock, which may introduce spinning
    overhead while vread/vwrite() executes. But these are debug-oriented
    functions, so the overhead is not a real problem for the common case.

    Signed-off-by: Joonsoo Kim
    Signed-off-by: Joonsoo Kim
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Atsushi Kumagai
    Cc: Chris Metcalf
    Cc: Dave Anderson
    Cc: Eric Biederman
    Cc: Guan Xuetao
    Cc: Ingo Molnar
    Cc: Vivek Goyal
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • Inserting into and removing an entry from vmlist takes linear time,
    which is inefficient. The following patches will try to remove vmlist
    entirely; this patch is a preparatory step.

    To remove vmlist, code that iterates vmlist must be changed to iterate
    vmap_area_list. Before implementing that, we should make sure that
    accessing va->vm while iterating vmap_area_list doesn't cause a race
    condition. This patch ensures that there is no such race for accesses
    to the vm_struct.

    Signed-off-by: Joonsoo Kim
    Signed-off-by: Joonsoo Kim
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Atsushi Kumagai
    Cc: Chris Metcalf
    Cc: Dave Anderson
    Cc: Eric Biederman
    Cc: Guan Xuetao
    Cc: Ingo Molnar
    Cc: Vivek Goyal
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     
  • get_vmalloc_info() currently lives in fs/proc/mmu.c. There is no
    reason this code must be there: its implementation needs vmlist_lock
    and iterates vmlist, which may be internal data structures of vmalloc.

    It is preferable for maintainability that vmlist_lock and vmlist be
    used only in vmalloc.c, so move the code there.

    Signed-off-by: Joonsoo Kim
    Signed-off-by: Joonsoo Kim
    Cc: Thomas Gleixner
    Cc: "H. Peter Anvin"
    Cc: Atsushi Kumagai
    Cc: Chris Metcalf
    Cc: Dave Anderson
    Cc: Eric Biederman
    Cc: Guan Xuetao
    Cc: Ingo Molnar
    Cc: Vivek Goyal
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Joonsoo Kim
     

11 Mar, 2013

1 commit

  • A bunch of RCU callbacks want to be able to do vfree() and end up with
    rather kludgy schemes. Just let vfree() do the right thing: put the
    victim on an llist and schedule the actual __vunmap() via
    schedule_work(), so that it runs from non-interrupt context.

    Signed-off-by: Al Viro

    Al Viro
     

24 Feb, 2013

1 commit

  • Make a sweep through mm/ and convert code that uses -1 directly to using
    the more appropriate NUMA_NO_NODE.

    Signed-off-by: David Rientjes
    Reviewed-by: Yasuaki Ishimatsu
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    David Rientjes