08 May, 2013
1 commit
-
Cc: Al Viro
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
02 May, 2013
1 commit
-
Pull VFS updates from Al Viro,
Misc cleanups all over the place, mainly wrt /proc interfaces (switch
create_proc_entry to proc_create(), get rid of the deprecated
create_proc_read_entry() in favor of using proc_create_data() and
seq_file etc). 7kloc removed.
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs: (204 commits)
don't bother with deferred freeing of fdtables
proc: Move non-public stuff from linux/proc_fs.h to fs/proc/internal.h
proc: Make the PROC_I() and PDE() macros internal to procfs
proc: Supply a function to remove a proc entry by PDE
take cgroup_open() and cpuset_open() to fs/proc/base.c
ppc: Clean up scanlog
ppc: Clean up rtas_flash driver somewhat
hostap: proc: Use remove_proc_subtree()
drm: proc: Use remove_proc_subtree()
drm: proc: Use minor->index to label things, not PDE->name
drm: Constify drm_proc_list[]
zoran: Don't print proc_dir_entry data in debug
reiserfs: Don't access the proc_dir_entry in r_open(), r_start() r_show()
proc: Supply an accessor for getting the data from a PDE's parent
airo: Use remove_proc_subtree()
rtl8192u: Don't need to save device proc dir PDE
rtl8187se: Use a dir under /proc/net/r8180/
proc: Add proc_mkdir_data()
proc: Move some bits from linux/proc_fs.h to linux/{of.h,signal.h,tty.h}
proc: Move PDE_NET() to fs/proc/proc_net.c
...
30 Apr, 2013
8 commits
-
Now, vmap_area_list is exported as VMCOREINFO for makedumpfile to get the
start address of the vmalloc region (vmalloc_start). The address which
contains the vmalloc_start value is computed as:
vmap_area_list.next - OFFSET(vmap_area.list) + OFFSET(vmap_area.va_start)
However, neither OFFSET(vmap_area.va_start) nor OFFSET(vmap_area.list)
is exported as VMCOREINFO. So this patch exports them, with a small cleanup.
[akpm@linux-foundation.org: vmalloc.h should include list.h for list_head]
Signed-off-by: Atsushi Kumagai
Cc: Joonsoo Kim
Cc: Joonsoo Kim
Cc: Thomas Gleixner
Cc: "H. Peter Anvin"
Cc: Atsushi Kumagai
Cc: Chris Metcalf
Cc: Dave Anderson
Cc: Eric Biederman
Cc: Guan Xuetao
Cc: Ingo Molnar
Cc: Vivek Goyal
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Now, there is no need to maintain vmlist after initializing vmalloc, so
remove the related code and data structure.
Signed-off-by: Joonsoo Kim
Signed-off-by: Joonsoo Kim
Cc: Thomas Gleixner
Cc: "H. Peter Anvin"
Cc: Atsushi Kumagai
Cc: Chris Metcalf
Cc: Dave Anderson
Cc: Eric Biederman
Cc: Guan Xuetao
Cc: Ingo Molnar
Cc: Vivek Goyal
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Although our intention is to unexport internal structures entirely, there
is one exception for kexec: kexec dumps the address of vmlist, and
makedumpfile uses this information.
We are about to remove vmlist, so makedumpfile needs another way to
retrieve information about the vmalloc layer. For this purpose, export
vmap_area_list instead of vmlist.
Signed-off-by: Joonsoo Kim
Signed-off-by: Joonsoo Kim
Cc: Eric Biederman
Cc: Dave Anderson
Cc: Vivek Goyal
Cc: Atsushi Kumagai
Cc: Thomas Gleixner
Cc: "H. Peter Anvin"
Cc: Chris Metcalf
Cc: Guan Xuetao
Cc: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
This patch is a preparatory step for removing vmlist entirely. To that
end, change code that iterates over vmlist to iterate over
vmap_area_list instead. It is a fairly trivial change, but one thing
should be noted.
Using vmap_area_list in vmallocinfo() introduces an ordering problem on
SMP systems. In s_show(), we retrieve some values from the vm_struct,
but those values are not fully set up at the point where va->vm is
assigned. Full setup is signalled by clearing the VM_UNLIST flag without
holding a lock, so when we see that VM_UNLIST is clear, it is not
guaranteed that the vm_struct has proper values from the point of view
of other CPUs. We therefore need smp_[rw]mb barriers to ensure that the
proper values are visible once VM_UNLIST is observed clear.
Therefore, this patch not only changes the iteration list but also adds
the appropriate smp_[rw]mb in the right places.
Signed-off-by: Joonsoo Kim
Signed-off-by: Joonsoo Kim
Cc: Thomas Gleixner
Cc: "H. Peter Anvin"
Cc: Atsushi Kumagai
Cc: Chris Metcalf
Cc: Dave Anderson
Cc: Eric Biederman
Cc: Guan Xuetao
Cc: Ingo Molnar
Cc: Vivek Goyal
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
This patch is a preparatory step for removing vmlist entirely. To that
end, change code that iterates over vmlist to iterate over
vmap_area_list instead. It is a fairly trivial change, but one thing
should be noted.
vmlist lacks information about some areas in the vmalloc address space.
For example, vm_map_ram() allocates areas in the vmalloc address space
but does not link them into vmlist. Providing full information about the
vmalloc address space is preferable, so use vmap_area directly rather
than va->vm. This makes get_vmalloc_info() more precise.
Signed-off-by: Joonsoo Kim
Signed-off-by: Joonsoo Kim
Cc: Thomas Gleixner
Cc: "H. Peter Anvin"
Cc: Atsushi Kumagai
Cc: Chris Metcalf
Cc: Dave Anderson
Cc: Eric Biederman
Cc: Guan Xuetao
Cc: Ingo Molnar
Cc: Vivek Goyal
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Now, while we hold vmap_area_lock, va->vm cannot be discarded, so we can
safely access va->vm when iterating over vmap_area_list with the lock
held. Using this property, change the vmlist iteration in vread/vwrite()
to iterate over vmap_area_list.
There is a small difference with respect to locking: vmlist_lock is a
mutex, while vmap_area_lock is a spinlock, which may introduce spinning
overhead while vread/vwrite() executes. But these are debug-oriented
functions, so this overhead is not a real problem for the common case.
Signed-off-by: Joonsoo Kim
Signed-off-by: Joonsoo Kim
Cc: Thomas Gleixner
Cc: "H. Peter Anvin"
Cc: Atsushi Kumagai
Cc: Chris Metcalf
Cc: Dave Anderson
Cc: Eric Biederman
Cc: Guan Xuetao
Cc: Ingo Molnar
Cc: Vivek Goyal
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Inserting and removing a vmlist entry takes linear time, which is
inefficient. The following patches will remove vmlist entirely; this
patch is a preparatory step.
To remove vmlist, code that iterates over vmlist must be changed to
iterate over vmap_area_list. Before implementing that, we should make
sure that accessing va->vm while iterating over vmap_area_list does not
cause a race condition. This patch ensures that there is no such race on
vm_struct.
Signed-off-by: Joonsoo Kim
Signed-off-by: Joonsoo Kim
Cc: Thomas Gleixner
Cc: "H. Peter Anvin"
Cc: Atsushi Kumagai
Cc: Chris Metcalf
Cc: Dave Anderson
Cc: Eric Biederman
Cc: Guan Xuetao
Cc: Ingo Molnar
Cc: Vivek Goyal
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Now get_vmalloc_info() is in fs/proc/mmu.c. There is no reason this code
must live there: its implementation needs vmlist_lock and iterates over
vmlist, which should be internal data structures of vmalloc.
For maintainability it is preferable that vmlist_lock and vmlist are
used only in vmalloc.c, so move the code there.
Signed-off-by: Joonsoo Kim
Signed-off-by: Joonsoo Kim
Cc: Thomas Gleixner
Cc: "H. Peter Anvin"
Cc: Atsushi Kumagai
Cc: Chris Metcalf
Cc: Dave Anderson
Cc: Eric Biederman
Cc: Guan Xuetao
Cc: Ingo Molnar
Cc: Vivek Goyal
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
11 Mar, 2013
1 commit
-
A bunch of RCU callbacks want to be able to do vfree() and end up with
rather kludgy schemes. Just let vfree() do the right thing: put the
victim on an llist and schedule the actual __vunmap() via
schedule_work(), so that it runs from non-interrupt context.
Signed-off-by: Al Viro
24 Feb, 2013
1 commit
-
Make a sweep through mm/ and convert code that uses -1 directly to using
the more appropriate NUMA_NO_NODE.
Signed-off-by: David Rientjes
Reviewed-by: Yasuaki Ishimatsu
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
12 Dec, 2012
1 commit
-
We don't need the custom NUMA_BUILD anymore, since we have the handy
IS_ENABLED().
Signed-off-by: Kirill A. Shutemov
Acked-by: KOSAKI Motohiro
Acked-by: David Rientjes
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
09 Oct, 2012
2 commits
-
In the paranoid case of sysctl kernel.kptr_restrict=2, mask the kernel
virtual addresses in /proc/vmallocinfo too.
Signed-off-by: Kees Cook
Reported-by: Brad Spengler
Acked-by: KOSAKI Motohiro
Acked-by: David Rientjes
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
A long time ago, in v2.4, VM_RESERVED kept the swapout process off the
VMA. It has since lost its original meaning but still has some effects:

 | effect                 | alternative flags
-+------------------------+---------------------------------------------
1| account as reserved_vm | VM_IO
2| skip in core dump      | VM_IO, VM_DONTDUMP
3| do not merge or expand | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP
4| do not mlock           | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP

This patch removes the reserved_vm counter from mm_struct. It seems
nobody cares about it: it is not exported to userspace directly, it only
reduces the total_vm shown in proc.
Thus VM_RESERVED can be replaced with VM_IO or the pair
VM_DONTEXPAND | VM_DONTDUMP. remap_pfn_range() and io_remap_pfn_range()
set VM_IO | VM_DONTEXPAND | VM_DONTDUMP; remap_vmalloc_range() sets
VM_DONTEXPAND | VM_DONTDUMP.
[akpm@linux-foundation.org: drivers/vfio/pci/vfio_pci.c fixup]
Signed-off-by: Konstantin Khlebnikov
Cc: Alexander Viro
Cc: Carsten Otte
Cc: Chris Metcalf
Cc: Cyrill Gorcunov
Cc: Eric Paris
Cc: H. Peter Anvin
Cc: Hugh Dickins
Cc: Ingo Molnar
Cc: James Morris
Cc: Jason Baron
Cc: Kentaro Takeda
Cc: Matt Helsley
Cc: Nick Piggin
Cc: Oleg Nesterov
Cc: Peter Zijlstra
Cc: Robert Richter
Cc: Suresh Siddha
Cc: Tetsuo Handa
Cc: Venkatesh Pallipadi
Acked-by: Linus Torvalds
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
01 Aug, 2012
2 commits
-
If someone calls vb_alloc() (or vm_map_ram() for that matter) to allocate
0 bytes (0 pages), get_order() returns BITS_PER_LONG - PAGE_CACHE_SHIFT
and interesting stuff happens. So make debugging such problems easier and
warn about 0-size allocations.
[akpm@linux-foundation.org: use WARN_ON-return-value feature]
Signed-off-by: Jan Kara
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
There is a walk that repeats rb_next to find a suitable hole. It can
simply be replaced by a walk over the sorted vmap_area_list, which is
simpler and more efficient.
Mutation of the list and the tree only happens in pairs, within
__insert_vmap_area and __free_vmap_area, under the protection of
vmap_area_lock. The patched code is also under vmap_area_lock, so the
list walk is safe and consistent with the tree walk.
Tested on SMP by repeating batches of vmalloc and vfree for random sizes
and rounds for hours.
Signed-off-by: Hong Zhiguo
Cc: Nick Piggin
Cc: Johannes Weiner
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
31 Jul, 2012
1 commit
-
Pull DMA-mapping updates from Marek Szyprowski:
"Those patches are a continuation of my earlier work. They contain
extensions to the DMA-mapping framework to remove limitations of the
current ARM implementation (like the limited total size of DMA
coherent/write-combine buffers), improve performance of buffer sharing
between devices (attributes to skip CPU cache operations or the creation
of an additional kernel mapping for some specific use cases) as well as
some unification of the common code for the dma_mmap_attrs() and
dma_mmap_coherent() functions. All extensions have been implemented and
tested for the ARM architecture."
* 'for-linus-for-3.6-rc1' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping:
ARM: dma-mapping: add support for DMA_ATTR_SKIP_CPU_SYNC attribute
common: DMA-mapping: add DMA_ATTR_SKIP_CPU_SYNC attribute
ARM: dma-mapping: add support for dma_get_sgtable()
common: dma-mapping: introduce dma_get_sgtable() function
ARM: dma-mapping: add support for DMA_ATTR_NO_KERNEL_MAPPING attribute
common: DMA-mapping: add DMA_ATTR_NO_KERNEL_MAPPING attribute
common: dma-mapping: add support for generic dma_mmap_* calls
ARM: dma-mapping: fix error path for memory allocation failure
ARM: dma-mapping: add more sanity checks in arm_dma_mmap()
ARM: dma-mapping: remove custom consistent dma region
mm: vmalloc: use const void * for caller argument
scatterlist: add sg_alloc_table_from_pages function
30 Jul, 2012
2 commits
-
This patch changes the dma-mapping subsystem to use generic vmalloc
areas for all consistent DMA allocations. This increases the total size
limit of consistent allocations and removes platform hacks and a lot of
duplicated code.
Atomic allocations are served from a special pool preallocated at boot,
because vmalloc areas cannot be reliably created in atomic context.
Signed-off-by: Marek Szyprowski
Reviewed-by: Kyungmin Park
Reviewed-by: Minchan Kim
-
'const void *' is a safer type for the caller function argument. This
patch updates all references to the caller function type.
Signed-off-by: Marek Szyprowski
Reviewed-by: Kyungmin Park
Reviewed-by: Minchan Kim
24 Jul, 2012
1 commit
-
Signed-off-by: Cong Wang
30 May, 2012
2 commits
-
The transfer of ->flags causes some of the static mapping virtual
addresses to be prematurely freed (before the mapping is removed) because
VM_LAZY_FREE gets "set" if tmp->flags has VM_IOREMAP set. This might
cause subsequent vmalloc/ioremap calls to fail because they might
allocate one of the freed virtual address ranges that aren't unmapped.
va->flags holds a different set of flags from tmp->flags: if a region
with VM_IOREMAP set is registered with vm_area_add_early(), it will be
removed by __purge_vmap_area_lazy().
Fix vmalloc_init() to correctly initialize the vmap_area for the given
vm_struct. Also initialise va->vm; if it is not set, find_vm_area() for
the early vm regions will always fail.
Signed-off-by: KyongHo Cho
Cc: "Olav Haugan"
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
The advantage of kcalloc is that it prevents the integer overflows that
could result from multiplying the number of elements by the size, and it
is also a bit nicer to read.
The semantic patch that makes this change is available at
https://lkml.org/lkml/2011/11/25/107
Signed-off-by: Thomas Meyer
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
20 Mar, 2012
1 commit
-
Signed-off-by: Cong Wang
13 Jan, 2012
1 commit
-
If either of the vas or vms arrays is not properly kzalloced, the code
jumps to the err_free label.
The err_free label runs a loop that checks and frees each member of the
vas and vms arrays, which is not required in this situation, as none of
the array members have been allocated up to this point.
Eliminate the extra loop by introducing a new label, err_free2, and
jumping to it.
[akpm@linux-foundation.org: remove now-unneeded tests]
Signed-off-by: Kautuk Consul
Acked-by: David Rientjes
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
11 Jan, 2012
1 commit
-
vmap_area->private is a void *, but we do not use the field for various
purposes; we use it only for a vm_struct. So change it to a vm_struct *
with a proper name, to improve readability and type checking.
Signed-off-by: Minchan Kim
Acked-by: David Rientjes
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
05 Jan, 2012
1 commit
-
Conflicts:
arch/arm/kernel/setup.c
arch/arm/mach-shmobile/board-kota2.c
21 Dec, 2011
1 commit
-
Static storage is not required for the struct vmap_area in
__get_vm_area_node().
Remove "static" to store this variable on the stack instead.
Signed-off-by: Kautuk Consul
Acked-by: David Rientjes
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
09 Dec, 2011
1 commit
-
Commit f5252e00 ("mm: avoid null pointer access in vm_struct via
/proc/vmallocinfo") adds newly allocated vm_structs to the vmlist after
they are fully initialised. Unfortunately, it did not check that
__vmalloc_area_node() successfully populated the area. In the event of
allocation failure, the vmalloc area is freed but a pointer to the freed
memory is inserted into the vmlist, leading to a crash later in
get_vmalloc_info().
This patch adds a check for __vmalloc_area_node() failure within
__vmalloc_node_range. It does not use "goto fail" as in the previous
error path, since a warning was already displayed by
__vmalloc_area_node() before it called vfree in its failure path.
Credit goes to Luciano Chavez for doing all the real work of identifying
exactly where the problem was.
Signed-off-by: Mel Gorman
Reported-by: Luciano Chavez
Tested-by: Luciano Chavez
Reviewed-by: Rik van Riel
Acked-by: David Rientjes
Cc: [3.1.x+]
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
06 Dec, 2011
1 commit
19 Nov, 2011
1 commit
-
The existing vm_area_register_early() allows for early vmalloc space
allocation. However, upcoming cleanups in the ARM architecture require
that some fixed locations in the vmalloc area also be reserved very
early.
The name "vm_area_register_early" would have been a good name for the
reservation part without the allocation. Since it is already in use with
different semantics, let's create vm_area_add_early() instead.
Both vm_area_register_early() and vm_area_add_early() can be used
together, meaning the former is now implemented using the latter,
ensuring that no conflicting areas are added; but no attempt is made to
make the allocation scheme in vm_area_register_early() more
sophisticated. After all, you must know what you're doing when using
these functions.
Signed-off-by: Nicolas Pitre
Acked-by: Andrew Morton
Cc: linux-mm@kvack.org
17 Nov, 2011
1 commit
-
When mapping a foreign page with xenbus_map_ring_valloc() via the
GNTTABOP_map_grant_ref hypercall, set the GNTMAP_contains_pte flag and
pass a pointer to the PTE (in init_mm).
After the page is mapped, the usual fault mechanism can be used to
update additional MMs. This allows the vmalloc_sync_all() to be removed
from alloc_vm_area().
Signed-off-by: David Vrabel
Acked-by: Andrew Morton
[v1: Squashed fix by Michal for no-mmu case]
Signed-off-by: Konrad Rzeszutek Wilk
Signed-off-by: Michal Simek
01 Nov, 2011
3 commits
-
Some vmalloc failure paths do not report OOM conditions.
Add warn_alloc_failed, which also does a dump_stack, to those failure
paths.
This allows more site-specific vmalloc failure logging printks to be
removed.
Signed-off-by: Joe Perches
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Add __attribute__((format(printf, ...))) to the function to validate the
format and arguments. Use the vsprintf extension %pV to avoid any
possible message interleaving. Coalesce the format string. Convert
printks/pr_warning to pr_warn.
[akpm@linux-foundation.org: use the __printf() macro]
Signed-off-by: Joe Perches
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
/proc/vmallocinfo shows information about vmalloc allocations in vmlist,
which is a linked list of vm_struct. It may, however, access the pages
field of a vm_struct whose pages have not been allocated, resulting in a
null pointer access and a kernel panic.
Why this happens: in __vmalloc_node_range(), called from vmalloc(), the
newly allocated vm_struct is added to vmlist in __get_vm_area_node(),
and only afterwards are fields of the vm_struct such as nr_pages and
pages set in __vmalloc_area_node(). In other words, it is added to
vmlist before it is fully initialized. If /proc/vmallocinfo is read at
that point, show_numa_info() accesses the pages field according to the
nr_pages field, and a null pointer access happens.
The patch adds the newly allocated vm_struct to the vmlist *after* it is
fully initialized, avoiding access to the pages field of an unallocated
page when show_numa_info() is called.
Signed-off-by: Mitsuo Hayasaka
Cc: Andrew Morton
Cc: David Rientjes
Cc: Namhyung Kim
Cc: "Paul E. McKenney"
Cc: Jeremy Fitzhardinge
Cc:
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
15 Sep, 2011
1 commit
-
Xen backend drivers (e.g., blkback and netback) would sometimes fail to
map grant pages into the vmalloc address space allocated with
alloc_vm_area(). The GNTTABOP_map_grant_ref would fail because Xen could
not find the page (in the L2 table) containing the PTEs it needed to
update:
(XEN) mm.c:3846:d0 Could not find L1 PTE for address fbb42000
netback and blkback were making the hypercall from a kernel thread where
task->active_mm != &init_mm, and alloc_vm_area() was only updating the
page tables for init_mm. The usual method of deferring the update to the
page tables of other processes (i.e., after taking a fault) doesn't
work, as a fault cannot occur during the hypercall. Whether this worked
depended on what else was using vmalloc.
Fix this by reverting ef691947d8a3 ("vmalloc: remove vmalloc_sync_all()
from alloc_vm_area()") and add a comment to explain why it's needed.
Signed-off-by: David Vrabel
Cc: Jeremy Fitzhardinge
Cc: Konrad Rzeszutek Wilk
Cc: Ian Campbell
Cc: Keir Fraser
Cc: [3.0.x]
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
15 Aug, 2011
1 commit
-
Commit db64fe02258f ("mm: rewrite vmap layer") introduced code that does
address calculations under the assumption that VMAP_BLOCK_SIZE is a
power of two. However, this might not be true if CONFIG_NR_CPUS is not
set to a power of two.
Wrong vmap_block index/offset values could lead to memory corruption.
However, this has never been observed in practice (or never been
diagnosed correctly); what caught it was the BUG_ON in vb_alloc() that
checks for inconsistent vmap_block indices.
To fix this, ensure that VMAP_BLOCK_SIZE is always a power of two.
BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=31572
Reported-by: Pavel Kysilka
Reported-by: Matias A. Fonzo
Signed-off-by: Clemens Ladisch
Signed-off-by: Stefan Richter
Cc: Nick Piggin
Cc: Jeremy Fitzhardinge
Cc: Krzysztof Helt
Cc: Andrew Morton
Cc: 2.6.28+
Signed-off-by: Linus Torvalds
27 Jul, 2011
1 commit
-
This allows us to move code duplicated in <asm/atomic.h>
(atomic_inc_not_zero() for now) to <linux/atomic.h>.
Signed-off-by: Arun Sharma
Reviewed-by: Eric Dumazet
Cc: Ingo Molnar
Cc: David Miller
Cc: Eric Dumazet
Acked-by: Mike Frysinger
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
21 Jul, 2011
2 commits
-
The RCU callback rcu_free_vb() just calls kfree(), so we can use
kfree_rcu() instead of call_rcu(rcu_free_vb).
Signed-off-by: Lai Jiangshan
Signed-off-by: Paul E. McKenney
Cc: Andrew Morton
Cc: Namhyung Kim
Cc: David Rientjes
Reviewed-by: Josh Triplett
-
The RCU callback rcu_free_va() just calls kfree(), so we can use
kfree_rcu() instead of call_rcu(rcu_free_va).
Signed-off-by: Lai Jiangshan
Signed-off-by: Paul E. McKenney
Cc: Andrew Morton
Cc: Namhyung Kim
Cc: David Rientjes
Reviewed-by: Josh Triplett