09 Jun, 2022
1 commit
-
commit 60a60e32cf91169840abcb4a80f0b0df31708ba7 upstream.
This reverts commit a4efc174b382fcdb, which introduced a regression: when
multiple processes allocate DMA memory in parallel by calling
dma_alloc_coherent(), the allocation may sometimes fail as follows:

Error log:
cma: cma_alloc: linux,cma: alloc failed, req-size: 148 pages, ret: -16
cma: number of available pages:
3@125+20@172+12@236+4@380+32@736+17@2287+23@2473+20@36076+99@40477+108@40852+44@41108+20@41196+108@41364+108@41620+
108@42900+108@43156+483@44061+1763@45341+1440@47712+20@49324+20@49388+5076@49452+2304@55040+35@58141+20@58220+20@58284+
7188@58348+84@66220+7276@66452+227@74525+6371@75549
=> 33161 free of 81920 total pages

When the issue happened, we saw that there were still 33161 pages (129M)
of free CMA memory and a lot of available free slots for 148 pages in the
CMA bitmap that we wanted to allocate from.

When dumping memory info, we found that there was also ~342M of normal
memory, but only 1352K of CMA memory left in the buddy system, while a lot
of pageblocks were isolated.

Memory info log:
Normal free:351096kB min:30000kB low:37500kB high:45000kB reserved_highatomic:0KB
active_anon:98060kB inactive_anon:98948kB active_file:60864kB inactive_file:31776kB
unevictable:0kB writepending:0kB present:1048576kB managed:1018328kB mlocked:0kB
bounce:0kB free_pcp:220kB local_pcp:192kB free_cma:1352kB lowmem_reserve[]: 0 0 0
Normal: 78*4kB (UECI) 1772*8kB (UMECI) 1335*16kB (UMECI) 360*32kB (UMECI) 65*64kB (UMCI)
36*128kB (UMECI) 16*256kB (UMCI) 6*512kB (EI) 8*1024kB (UEI) 4*2048kB (MI) 8*4096kB (EI)
8*8192kB (UI) 3*16384kB (EI) 8*32768kB (M) = 489288kB

The root cause of this issue is that since commit a4efc174b382 ("mm/cma.c:
remove redundant cma_mutex lock"), CMA supports concurrent memory
allocation. It is possible that the memory range process A is trying to
allocate has already been isolated by the allocation of process B during
memory migration.

The problem here is that the memory range isolated during one allocation
by start_isolate_page_range() can be much bigger than the size we actually
want to allocate, because the range is aligned to MAX_ORDER_NR_PAGES.

Taking an ARMv7 platform with 1G of memory as an example: when
MAX_ORDER_NR_PAGES is big (e.g. 32M with max_order 14) and CMA memory is
relatively small (e.g. 128M), there are only 4 MAX_ORDER slots, so it is
very likely that all CMA memory has already been isolated by other
processes when one of them tries to allocate memory using
dma_alloc_coherent(). Since the current CMA code scans the whole available
CMA memory only once, dma_alloc_coherent() can easily fail due to
contention with other processes.

This patch simply falls back to the original method of using cma_mutex to
make alloc_contig_range() run sequentially, avoiding the issue; a minimal
sketch follows this entry.

Link: https://lkml.kernel.org/r/20220509094551.3596244-1-aisheng.dong@nxp.com
Link: https://lore.kernel.org/all/20220315144521.3810298-2-aisheng.dong@nxp.com/
Fixes: a4efc174b382 ("mm/cma.c: remove redundant cma_mutex lock")
Signed-off-by: Dong Aisheng
Acked-by: Minchan Kim
Acked-by: David Hildenbrand
Cc: Marek Szyprowski
Cc: Lecopzer Chen
Cc: Vlastimil Babka
Cc: [5.11+]
Signed-off-by: Andrew Morton
Signed-off-by: Greg Kroah-Hartman
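A minimal, hedged sketch of the serialization this revert restores (based
on the upstream change to mm/cma.c; surrounding allocation logic elided):

    static DEFINE_MUTEX(cma_mutex);

    /* In cma_alloc(): isolation + migration now run one at a time, so
     * parallel allocations can no longer isolate each other's candidate
     * ranges and spuriously fail with -EBUSY. */
    mutex_lock(&cma_mutex);
    ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
                             GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
    mutex_unlock(&cma_mutex);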
06 May, 2021
7 commits
-
size_t in cma_alloc() is confusing since it makes people think it is a
byte count, not a page count. Change it to unsigned long[1].

The unsigned int in cma_release() is also not right, so change it. Since
we now have unsigned long in cma_release(), free_contig_range() should
also respect it. The resulting signatures are sketched after this entry.

[1] 67a2e213e7e9, mm: cma: fix incorrect type conversion for size during dma allocation
Link: https://lore.kernel.org/linux-mm/20210324043434.GP1719932@casper.infradead.org/
Link: https://lkml.kernel.org/r/20210331164018.710560-1-minchan@kernel.org
Signed-off-by: Minchan Kim
Reviewed-by: David Hildenbrand
Cc: Matthew Wilcox
Cc: David Hildenbrand
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
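For reference, a hedged sketch of the post-change prototypes (as in
include/linux/cma.h and include/linux/gfp.h; counts are pages, not bytes):

    struct page *cma_alloc(struct cma *cma, unsigned long count,
                           unsigned int align, bool no_warn);
    bool cma_release(struct cma *cma, const struct page *pages,
                     unsigned long count);
    void free_contig_range(unsigned long pfn, unsigned long nr_pages);
-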
There were places missing the cma instance name. To identify each CMA
instance, let's add the name to every cma trace event. This patch also
renames the existing cma_alloc trace event to cma_alloc_finish, since we
already have cma_alloc_start[1].

[1] https://lore.kernel.org/linux-mm/20210324160740.15901-1-georgi.djakov@linaro.org
Link: https://lkml.kernel.org/r/20210330220237.748899-1-minchan@kernel.org
Signed-off-by: Minchan Kim
Cc: Liam Mark
Cc: Georgi Djakov
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Since CMA is getting used more widely, it is more important to keep
monitoring CMA statistics for system health, since they are directly
related to user experience.

This patch introduces sysfs statistics for CMA, in order to provide some
basic monitoring of the CMA allocator:

* the number of successful CMA page allocations
* the number of failed CMA page allocations

These two values allow the user to calculate the allocation failure rate
for each CMA area, e.g.:

/sys/kernel/mm/cma/WIFI/alloc_pages_[success|fail]
/sys/kernel/mm/cma/SENSOR/alloc_pages_[success|fail]
/sys/kernel/mm/cma/BLUETOOTH/alloc_pages_[success|fail]

The cma_stat was intentionally allocated dynamically to harmonize with
kobject lifetime management.
https://lore.kernel.org/linux-mm/YCOAmXqt6dZkCQYs@kroah.com/

Link: https://lkml.kernel.org/r/20210324230759.2213957-1-minchan@kernel.org
Link: https://lore.kernel.org/linux-mm/20210316100433.17665-1-colin.king@canonical.com/
Signed-off-by: Minchan Kim
Signed-off-by: Colin Ian King
Tested-by: Dmitry Osipenko
Reviewed-by: Dmitry Osipenko
Reviewed-by: Greg Kroah-Hartman
Reviewed-by: John Hubbard
Tested-by: Anders Roxell
Cc: Suren Baghdasaryan
Cc: John Dias
Cc: Matthew Wilcox (Oracle)
Cc: Colin Ian King
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Add cma and migrate trace events to enable CMA allocation performance to
be measured via ftrace.

[georgi.djakov@linaro.org: add the CMA instance name to the cma_alloc_start trace event]
Link: https://lkml.kernel.org/r/20210326155414.25006-1-georgi.djakov@linaro.org

Link: https://lkml.kernel.org/r/20210324160740.15901-1-georgi.djakov@linaro.org
Signed-off-by: Liam Mark
Signed-off-by: Georgi Djakov
Acked-by: Minchan Kim
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
If we did not reserve extra CMA memory, the log buffer can easily be
filled up by CMA failure warnings when devices call dmam_alloc_coherent()
to allocate DMA memory. Thus we can use pr_err_ratelimited() instead to
reduce the duplicated CMA warnings.

Link: https://lkml.kernel.org/r/ce2251ef49e1727a9a40531d1996660b05462bd2.1615279825.git.baolin.wang@linux.alibaba.com
Signed-off-by: Baolin Wang
Reviewed-by: David Hildenbrand
Acked-by: Minchan Kim
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Since CMA is used more widely, it is worth having CMA allocation
statistics in vmstat. With them, we can know how aggressively the system
uses cma allocation and how often it fails.

Link: https://lkml.kernel.org/r/20210302183346.3707237-1-minchan@kernel.org
Signed-off-by: Minchan Kim
Reviewed-by: John Hubbard
Cc: John Dias
Cc: Suren Baghdasaryan
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Patch series "make hugetlb put_page safe for all calling contexts", v5.
This effort is the result a recent bug report [1]. Syzbot found a
potential deadlock in the hugetlb put_page/free_huge_page_path. WARNING:
SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected Since the
free_huge_page_path already has code to 'hand off' page free requests to a
workqueue, a suggestion was proposed to make the in_irq() detection
accurate by always enabling PREEMPT_COUNT [2]. The outcome of that
discussion was that the hugetlb put_page path (free_huge_page) path should
be properly fixed and safe for all calling contexts.[1] https://lore.kernel.org/linux-mm/000000000000f1c03b05bc43aadc@google.com/
[2] http://lkml.kernel.org/r/20210311021321.127500-1-mike.kravetz@oracle.comThis patch (of 8):
cma_release() is currently a sleepable operation because the bitmap
manipulation is protected by the cma->lock mutex. Hugetlb code, which
relies on cma_release() for CMA-backed (gigantic) hugetlb pages, however,
needs it to be irq safe.

The lock doesn't protect any sleepable operation, so it can be changed to
an (irq-aware) spinlock, as sketched after this entry. The bitmap
processing should be quite fast in the typical case, but if cma sizes grow
to terabytes then we will likely need to replace the lock with a more
optimized bitmap implementation.

Link: https://lkml.kernel.org/r/20210409205254.242291-1-mike.kravetz@oracle.com
Link: https://lkml.kernel.org/r/20210409205254.242291-2-mike.kravetz@oracle.com
Signed-off-by: Mike Kravetz
Acked-by: Michal Hocko
Reviewed-by: David Hildenbrand
Acked-by: Roman Gushchin
Cc: Shakeel Butt
Cc: Oscar Salvador
Cc: Muchun Song
Cc: David Rientjes
Cc: Miaohe Lin
Cc: Peter Zijlstra
Cc: Matthew Wilcox
Cc: HORIGUCHI NAOYA
Cc: "Aneesh Kumar K . V"
Cc: Waiman Long
Cc: Peter Xu
Cc: Mina Almasry
Cc: Hillf Danton
Cc: Joonsoo Kim
Cc: Barry Song
Cc: Will Deacon
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
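A hedged sketch of the conversion in mm/cma.c (field and helper names as
in mm/cma.h; surrounding code elided):

    /* cma->lock becomes a spinlock_t instead of a struct mutex, so the
     * bitmap update is safe from any context, including hugetlb's
     * irq-safe put_page path. */
    static void cma_clear_bitmap(struct cma *cma, unsigned long pfn,
                                 unsigned long count)
    {
            unsigned long bitmap_no, bitmap_count;
            unsigned long flags;

            bitmap_no = (pfn - cma->base_pfn) >> cma->order_per_bit;
            bitmap_count = cma_bitmap_pages_to_bits(cma, count);

            spin_lock_irqsave(&cma->lock, flags);
            bitmap_clear(cma->bitmap, bitmap_no, bitmap_count);
            spin_unlock_irqrestore(&cma->lock, flags);
    }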
27 Feb, 2021
3 commits
-
Print the name of the CMA region for convenience. This is useful
information to have when cma_alloc() fails.

[pdaly@codeaurora.org: print the "count" variable]
Link: https://lkml.kernel.org/r/20210209142414.12768-1-georgi.djakov@linaro.org

Link: https://lkml.kernel.org/r/20210208115200.20286-1-georgi.djakov@linaro.org
Signed-off-by: Patrick Daly
Signed-off-by: Georgi Djakov
Acked-by: Minchan Kim
Reviewed-by: David Hildenbrand
Reviewed-by: Randy Dunlap
Cc: Minchan Kim
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Right now, if activation fails, we might already have exposed some pages
to the buddy for CMA use (although they will never actually get used by
CMA), and some pages won't be exposed to the buddy at all.

Let's check for "single zone" early, and on error don't expose any pages
for CMA use - instead, expose them to the buddy, available for any use.
Simply call free_reserved_page() on every single page - easier than going
via free_reserved_area(), converting back and forth between pfns and virt
addresses. In addition, make sure to fix up totalcma_pages properly. A
sketch of the error path follows this entry.

Example: 6 GiB QEMU VM with "... hugetlb_cma=2G movablecore=20% ...":
[ 0.006891] hugetlb_cma: reserve 2048 MiB, up to 2048 MiB per node
[ 0.006893] cma: Reserved 2048 MiB at 0x0000000100000000
[ 0.006893] hugetlb_cma: reserved 2048 MiB on node 0
...
[ 0.175433] cma: CMA area hugetlb0 could not be activated

Before this patch:
# cat /proc/meminfo
MemTotal: 5867348 kB
MemFree: 5692808 kB
MemAvailable: 5542516 kB
...
CmaTotal: 2097152 kB
CmaFree: 1884160 kB

After this patch:
# cat /proc/meminfo
MemTotal: 6077308 kB
MemFree: 5904208 kB
MemAvailable: 5747968 kB
...
CmaTotal: 0 kB
CmaFree: 0 kB

Note: cma_init_reserved_mem() makes sure that we always cover full
pageblocks / MAX_ORDER - 1 pages.

Link: https://lkml.kernel.org/r/20210127101813.6370-2-david@redhat.com
Signed-off-by: David Hildenbrand
Reviewed-by: Zi Yan
Reviewed-by: Oscar Salvador
Cc: Thomas Gleixner
Cc: "Peter Zijlstra (Intel)"
Cc: Mike Rapoport
Cc: Michal Hocko
Cc: Wei Yang
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
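A hedged sketch of the error path this introduces in cma_activate_area()
(based on the upstream change; surrounding details elided):

    not_in_zone:
            bitmap_free(cma->bitmap);
    out_error:
            /* Expose all pages to the buddy, so they become available
             * for any use instead of staying reserved for CMA. */
            for (pfn = base_pfn; pfn < base_pfn + cma->count; pfn++)
                    free_reserved_page(pfn_to_page(pfn));
            totalcma_pages -= cma->count;
            cma->count = 0;
            pr_err("CMA area %s could not be activated\n", cma->name);
-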
Currently cma areas without a fixed base are allocated close to the end of
the node. This placement is sub-optimal because of compaction: it brings
pages into the cma area. In particular, it can bring in hot executable
pages, even if there is plenty of free memory on the machine. This results
in cma allocation failures.

Instead, let's place cma areas close to the beginning of a node. In this
case compaction will help to free cma areas, resulting in better cma
allocation success rates.

If there is enough memory, let's try to allocate bottom-up starting at
4GB, to exclude any possible interference with DMA32. On smaller machines,
or in case of failure, stick with the old behavior; see the sketch after
this entry.

16GB vm, 2GB cma area:
With this patch:
[ 0.000000] Command line: root=/dev/vda3 rootflags=subvol=/root systemd.unified_cgroup_hierarchy=1 enforcing=0 console=ttyS0,115200 hugetlb_cma=2G
[ 0.002928] hugetlb_cma: reserve 2048 MiB, up to 2048 MiB per node
[ 0.002930] cma: Reserved 2048 MiB at 0x0000000100000000
[ 0.002931] hugetlb_cma: reserved 2048 MiB on node 0

Without this patch:
[ 0.000000] Command line: root=/dev/vda3 rootflags=subvol=/root systemd.unified_cgroup_hierarchy=1 enforcing=0 console=ttyS0,115200 hugetlb_cma=2G
[ 0.002930] hugetlb_cma: reserve 2048 MiB, up to 2048 MiB per node
[ 0.002933] cma: Reserved 2048 MiB at 0x00000003c0000000
[ 0.002934] hugetlb_cma: reserved 2048 MiB on node 0

v2:
- switched to memblock_set_bottom_up(true), by Mike
- start with 4GB, by Mike

[guro@fb.com: whitespace fix, per Mike]
Link: https://lkml.kernel.org/r/20201221170551.GB3428478@carbon.DHCP.thefacebook.com
[guro@fb.com: fix 32-bit warnings]
Link: https://lkml.kernel.org/r/20201223163537.GA4011967@carbon.DHCP.thefacebook.com
[guro@fb.com: fix 32-bit systems]
[akpm@linux-foundation.org: build fix]

Link: https://lkml.kernel.org/r/20201217201214.3414100-1-guro@fb.com
Signed-off-by: Roman Gushchin
Reviewed-by: Mike Rapoport
Cc: Wonhyuk Yang
Cc: Joonsoo Kim
Cc: Rik van Riel
Cc: Michal Hocko
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
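A hedged sketch of the placement logic this adds to
cma_declare_contiguous_nid() in mm/cma.c (guards and error handling
elided):

    /* Try a bottom-up allocation first, starting above 4GB so
     * constrained zones like DMA/DMA32 are left alone; on failure
     * (addr == 0), fall back to the old top-down search. */
    if (!memblock_bottom_up() && memblock_end >= SZ_4G + size) {
            memblock_set_bottom_up(true);
            addr = memblock_alloc_range_nid(size, alignment, SZ_4G,
                                            limit, nid, true);
            memblock_set_bottom_up(false);
    }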
16 Dec, 2020
2 commits
-
It is required to print the 'count' of pages, along with the pages, passed
to cma_release() in order to debug cases of a mismatched count value being
passed between cma_alloc() and cma_release() from a code path.

As an example, consider the below scenario:
1) The CMA pool size is 4MB, and
2) The user makes the erroneous step of allocating 2 pages but freeing 1
   page in a loop from this CMA pool. Step 2 causes cma_alloc() to
   return NULL at some point because of an -ENOMEM condition.

The current pr_debug logs do not give information about these kinds of
allocation patterns because the count value is not printed in
cma_release(). We already print the count value in the trace logs; just
extend the same to the pr_debug logs too.

[akpm@linux-foundation.org: fix printk warning]
Link: https://lkml.kernel.org/r/1606318341-29521-1-git-send-email-charante@codeaurora.org
Signed-off-by: Charan Teja Reddy
Reviewed-by: Souptick Joarder
Reviewed-by: David Hildenbrand
Acked-by: Vlastimil Babka
Cc: Vinayak Menon
Cc: Joonsoo Kim
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
The cma_mutex which protects alloc_contig_range() first appeared in commit
7ee793a62fa8c ("cma: Remove potential deadlock situation"); at that time,
there was no guarantee about the behavior of concurrency inside
alloc_contig_range().

The concurrency inside alloc_contig_range() was clarified by commit
2c7452a075d4db2dc ("mm/page_isolation.c: make start_isolate_page_range()
fail if already isolated"):

> However, two subsystems (CMA and gigantic
> huge pages for example) could attempt operations on the same range. If
> this happens, one thread may 'undo' the work another thread is doing.
> This can result in pageblocks being incorrectly left marked as
> MIGRATE_ISOLATE and therefore not available for page allocation.

Now we can see that hugepage and virtio call alloc_contig_range() without
any lock, thus cma_mutex is "redundant" in cma_alloc() now; a sketch of
the now-lockless scan loop follows this entry.

Link: https://lkml.kernel.org/r/20201020102241.3729-1-lecopzer.chen@mediatek.com
Signed-off-by: Lecopzer Chen
Acked-by: David Hildenbrand
Acked-by: Vlastimil Babka
Cc: Matthias Brugger
Cc: YJ Chiang
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
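For reference, a hedged sketch of the cma_alloc() retry loop that now runs
without cma_mutex (mm/cma.c as of this change, simplified; declarations
elided). Only the bitmap update itself remains under cma->lock:

    for (;;) {
            mutex_lock(&cma->lock);
            bitmap_no = bitmap_find_next_zero_area_off(cma->bitmap,
                            bitmap_maxno, start, bitmap_count, mask,
                            offset);
            if (bitmap_no >= bitmap_maxno) {
                    mutex_unlock(&cma->lock);
                    break;                  /* no free range left */
            }
            bitmap_set(cma->bitmap, bitmap_no, bitmap_count);
            mutex_unlock(&cma->lock);

            pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
            ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
                            GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
            if (ret == 0) {
                    page = pfn_to_page(pfn);
                    break;
            }
            cma_clear_bitmap(cma, pfn, count);
            if (ret != -EBUSY)
                    break;
            start = bitmap_no + mask + 1;   /* scan further on retry */
    }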
13 Aug, 2020
3 commits
-
The routine cma_init_reserved_areas is designed to activate all
reserved cma areas. It quits when it first encounters an error.
This can leave some areas in a state where they are reserved but
not activated. There is no feedback to code which performed the
reservation. Attempting to allocate memory from areas in such a
state will result in a BUG.

Modify cma_init_reserved_areas to always attempt to activate all
areas. The called routine, cma_activate_area, is responsible for
leaving the area in a valid state. No one is making active use
of the returned error codes, so change the routine to void.

How to reproduce: This example uses kernelcore, hugetlb and cma
as an easy way to reproduce. However, this is a more general cma
issue.

Two node x86 VM, 16GB total, 8GB per node
Kernel command line parameters, kernelcore=4G hugetlb_cma=8G
Related boot time messages,
hugetlb_cma: reserve 8192 MiB, up to 4096 MiB per node
cma: Reserved 4096 MiB at 0x0000000100000000
hugetlb_cma: reserved 4096 MiB on node 0
cma: Reserved 4096 MiB at 0x0000000300000000
hugetlb_cma: reserved 4096 MiB on node 1
cma: CMA area hugetlb could not be activated

# echo 8 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
BUG: kernel NULL pointer dereference, address: 0000000000000000
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 0 P4D 0
Oops: 0000 [#1] SMP PTI
...
Call Trace:
bitmap_find_next_zero_area_off+0x51/0x90
cma_alloc+0x1a5/0x310
alloc_fresh_huge_page+0x78/0x1a0
alloc_pool_huge_page+0x6f/0xf0
set_max_huge_pages+0x10c/0x250
nr_hugepages_store_common+0x92/0x120
? __kmalloc+0x171/0x270
kernfs_fop_write+0xc1/0x1a0
vfs_write+0xc7/0x1f0
ksys_write+0x5f/0xe0
do_syscall_64+0x4d/0x90
entry_SYSCALL_64_after_hwframe+0x44/0xa9

Fixes: c64be2bb1c6e ("drivers: add Contiguous Memory Allocator")
Signed-off-by: Mike Kravetz
Signed-off-by: Andrew Morton
Reviewed-by: Roman Gushchin
Acked-by: Barry Song
Cc: Marek Szyprowski
Cc: Michal Nazarewicz
Cc: Kyungmin Park
Cc: Joonsoo Kim
Cc:
Link: http://lkml.kernel.org/r/20200730163123.6451-1-mike.kravetz@oracle.com
Signed-off-by: Linus Torvalds
-
Patch series "mm: fix the names of general cma and hugetlb cma", v2.
The current code of CMA can only work when users pass a const string as
name parameter. we need to fix the way to handle names in CMA. On the
other hand, to avoid name conflicts after enabling CMA_DEBUGFS, each
hugetlb should get a different CMA name.This patch (of 2):
If users give a name saved in stack, the current code will generate magic
pointer. if users don't give a name(NULL), kasprintf() will always return
NULL as we are at the early stage. that means cma_init_reserved_mem()
will return -ENOMEM if users set name parameter as NULL.[natechancellor@gmail.com: return cma->name directly in cma_get_name]
Link: https://github.com/ClangBuiltLinux/linux/issues/1063
Link: http://lkml.kernel.org/r/20200623015840.621964-1-natechancellor@gmail.comSigned-off-by: Barry Song
Signed-off-by: Nathan Chancellor
Signed-off-by: Andrew Morton
Reviewed-by: Mike Kravetz
Acked-by: Roman Gushchin
Link: http://lkml.kernel.org/r/20200616223131.33828-2-song.bao.hua@hisilicon.com
Signed-off-by: Linus Torvalds
-
In some cases the cma area cannot be activated, but cma_alloc() may still
be called, in which case the kernel will crash with a NULL pointer
dereference.

Add a bitmap validity check in cma_alloc() to avoid this issue; see the
sketch after this entry.
Signed-off-by: Jianqun Xu
Signed-off-by: Andrew Morton
Reviewed-by: David Hildenbrand
Link: http://lkml.kernel.org/r/20200615010123.15596-1-jay.xu@rock-chips.com
Signed-off-by: Linus Torvalds
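A hedged sketch of the added guard at the top of cma_alloc() (mm/cma.c):
an area whose activation failed has no bitmap, so bail out early:

    if (!cma || !cma->count || !cma->bitmap)
            return NULL;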
04 Jul, 2020
1 commit
-
Calling cma_declare_contiguous_nid() with false exact_nid for per-numa
reservation can easily cause a cma leak and various confusion. For
example, mm/hugetlb.c is trying to reserve per-numa cma for gigantic
pages. But it can easily leak cma and confuse users when the system has
memoryless nodes.

In case the system has 4 numa nodes and only numa node0 has memory: if we
set hugetlb_cma=4G in bootargs, mm/hugetlb.c will get 4 cma areas for 4
different numa nodes. Since exact_nid=false in the current code, all 4
numa nodes will successfully get cma from node0, but hugetlb_cma[1] to
hugetlb_cma[3] will never be available to hugepages since mm/hugetlb.c
will only allocate memory from hugetlb_cma[0].

In case the system has 4 numa nodes and both numa node0 and node2 have
memory, the other nodes having none: if we set hugetlb_cma=4G in bootargs,
mm/hugetlb.c will get 4 cma areas for 4 different numa nodes. Since
exact_nid=false in the current code, all 4 numa nodes will successfully
get cma from node0 or node2, but hugetlb_cma[1] and [3] will never be
available to hugepages as mm/hugetlb.c will only allocate memory from
hugetlb_cma[0] and hugetlb_cma[2]. This causes a permanent leak of the
cma areas which are supposed to be used by the memoryless nodes.

Of course we could work around the issue by letting mm/hugetlb.c scan all
cma areas in alloc_gigantic_page() even when node_mask includes node0
only; that means when node_mask includes node0 only, we could get pages
from hugetlb_cma[1] to hugetlb_cma[3]. But this would cause a kernel
crash in free_gigantic_page() when it tries to free the page by:
cma_release(hugetlb_cma[page_to_nid(page)], page, 1 << order)

On the other hand, exact_nid=false doesn't consider numa distance, and it
might not be that useful to leverage cma areas on remote nodes. I feel it
is much simpler to make exact_nid true to make everything clear. After
that, memoryless nodes won't be able to reserve per-numa CMA from other
nodes which have memory.

Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages using cma")
Signed-off-by: Barry Song
Signed-off-by: Andrew Morton
Acked-by: Roman Gushchin
Cc: Jonathan Cameron
Cc: Aslan Bakirov
Cc: Michal Hocko
Cc: Andreas Schaufler
Cc: Mike Kravetz
Cc: Rik van Riel
Cc: Joonsoo Kim
Cc: Robin Murphy
Cc:
Link: http://lkml.kernel.org/r/20200628074345.27228-1-song.bao.hua@hisilicon.com
Signed-off-by: Linus Torvalds
11 Apr, 2020
1 commit
-
I've noticed that there is no interface exposed by CMA which would let me
declare contiguous memory on a particular NUMA node.

This patchset adds the ability to try to allocate contiguous memory on a
specific node. It will fall back to other nodes if the specified one
doesn't work.

Implement a new method for declaring contiguous memory on a particular
node and keep cma_declare_contiguous() as a wrapper; see the sketch after
this entry.

[akpm@linux-foundation.org: build fix]
Signed-off-by: Aslan Bakirov
Signed-off-by: Roman Gushchin
Signed-off-by: Andrew Morton
Acked-by: Michal Hocko
Cc: Andreas Schaufler
Cc: Mike Kravetz
Cc: Rik van Riel
Cc: Joonsoo Kim
Link: http://lkml.kernel.org/r/20200407163840.92263-2-guro@fb.com
Signed-off-by: Linus Torvalds
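A hedged sketch of the resulting interface (per include/linux/cma.h; the
wrapper simply passes NUMA_NO_NODE):

    int __init cma_declare_contiguous_nid(phys_addr_t base,
                    phys_addr_t size, phys_addr_t limit,
                    phys_addr_t alignment, unsigned int order_per_bit,
                    bool fixed, const char *name, struct cma **res_cma,
                    int nid);

    static inline int __init cma_declare_contiguous(phys_addr_t base,
                    phys_addr_t size, phys_addr_t limit,
                    phys_addr_t alignment, unsigned int order_per_bit,
                    bool fixed, const char *name, struct cma **res_cma)
    {
            return cma_declare_contiguous_nid(base, size, limit,
                            alignment, order_per_bit, fixed, name,
                            res_cma, NUMA_NO_NODE);
    }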
02 Dec, 2019
1 commit
-
kzalloc() is used for the cma bitmap allocation in cma_activate_area();
switch to bitmap_zalloc() for clarity (see the sketch after this entry).

Link: http://lkml.kernel.org/r/895d4627-f115-c77a-d454-c0a196116426@huawei.com
Signed-off-by: Yunfeng Ye
Reviewed-by: Andrew Morton
Cc: Mike Rapoport
Cc: Yue Hu
Cc: Peng Fan
Cc: Andrey Ryabinin
Cc: Ryohei Suzuki
Cc: Andrey Konovalov
Cc: Doug Berger
Cc: Thomas Gleixner
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
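A hedged sketch of the conversion (mm/cma.c; cma_bitmap_maxno() returns
the bitmap size in bits):

    /* Before: open-coded size computation in bytes. */
    cma->bitmap = kzalloc(BITS_TO_LONGS(cma_bitmap_maxno(cma)) *
                          sizeof(long), GFP_KERNEL);

    /* After: bitmap_zalloc() takes the number of bits directly. */
    cma->bitmap = bitmap_zalloc(cma_bitmap_maxno(cma), GFP_KERNEL);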
17 Jul, 2019
2 commits
-
The description of cma_declare_contiguous() indicates that if the
'fixed' argument is true the reserved contiguous area must be exactly at
the address of the 'base' argument.

However, the function currently allows the 'base', 'size', and 'limit'
arguments to be silently adjusted to meet alignment constraints. This
commit enforces the documented behavior through explicit checks that
return an error if the region does not fit within a specified region.

Link: http://lkml.kernel.org/r/1561422051-16142-1-git-send-email-opendmb@gmail.com
Fixes: 5ea3b1b2f8ad ("cma: add placement specifier for "cma=" kernel parameter")
Signed-off-by: Doug Berger
Acked-by: Michal Nazarewicz
Cc: Yue Hu
Cc: Mike Rapoport
Cc: Laura Abbott
Cc: Peng Fan
Cc: Thomas Gleixner
Cc: Marek Szyprowski
Cc: Andrey Konovalov
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
A comment referred to a non-existent function alloc_cma(), which should
have been cma_alloc().

Link: http://lkml.kernel.org/r/20190712085549.5920-1-ryh.szk.cmnty@gmail.com
Signed-off-by: Ryohei Suzuki
Reviewed-by: Andrew Morton
Cc: Joonsoo Kim
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
24 May, 2019
1 commit
-
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version of the license

extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 3 file(s).
Signed-off-by: Thomas Gleixner
Reviewed-by: Richard Fontana
Reviewed-by: Allison Randal
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190520075212.713472955@linutronix.de
Signed-off-by: Greg Kroah-Hartman
15 May, 2019
2 commits
-
f022d8cb7ec7 ("mm: cma: Don't crash on allocation if CMA area can't be
activated") fixes the crash issue when activation fails via setting
cma->count as 0, same logic exists if bitmap allocation fails.Link: http://lkml.kernel.org/r/20190325081309.6004-1-zbestahu@gmail.com
Signed-off-by: Yue Hu
Reviewed-by: Anshuman Khandual
Cc: Joonsoo Kim
Cc: Laura Abbott
Cc: Mike Rapoport
Cc: Randy Dunlap
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Currently one bit in the cma bitmap represents a number of pages rather
than one page, and cma->count means the cma size in pages. So to find
available pages via find_next_zero_bit()/find_next_bit() we should use
the cma size not in pages but in bits; although the current free pages
number is correct due to order_per_bit being zero, once order_per_bit is
changed the bitmap status will be incorrect.

The size input in cma_debug_show_areas() is therefore not correct, and it
will misreport the available pages at some positions when debugging a
failure; see the sketch after this entry.

This is an example with order_per_bit = 1

Before this change:
Before this change:
[ 4.120060] cma: number of available pages: 1@93+4@108+7@121+7@137+7@153+7@169+7@185+7@201+3@213+3@221+3@229+3@237+3@245+3@253+3@261+3@269+3@277+3@285+3@293+3@301+3@309+3@317+3@325+19@333+15@369+512@512=> 638 free of 1024 total pages

After this change:
[ 4.143234] cma: number of available pages: 2@93+8@108+14@121+14@137+14@153+14@169+14@185+14@201+6@213+6@221+6@229+6@237+6@245+6@253+6@261+6@269+6@277+6@285+6@293+6@301+6@309+6@317+6@325+38@333+30@369=> 252 free of 1024 total pages

Obviously the bitmap status before was incorrect.

Link: http://lkml.kernel.org/r/20190320060829.9144-1-zbestahu@gmail.com
Signed-off-by: Yue Hu
Reviewed-by: Andrew Morton
Cc: Joonsoo Kim
Cc: Ingo Molnar
Cc: Vlastimil Babka
Cc: Mike Rapoport
Cc: Randy Dunlap
Cc: Laura Abbott
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
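A hedged sketch of the helper involved (mm/cma.h): the bitmap must be
scanned over bits, not pages.

    /* Number of bits in the CMA bitmap: one bit covers 2^order_per_bit
     * pages, so the bit count is the page count shifted down. */
    static inline unsigned long cma_bitmap_maxno(struct cma *cma)
    {
            return cma->count >> cma->order_per_bit;
    }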
13 Mar, 2019
1 commit
-
Rename memblock_alloc_range() to memblock_phys_alloc_range() to
emphasize that it returns a physical address.

While at it, remove the 'enum memblock_flags' parameter from this
function, as its only user sets it to MEMBLOCK_NONE anyway, which is the
default for most memblock allocations. The renamed interface is sketched
after this entry.

Link: http://lkml.kernel.org/r/1548057848-15136-6-git-send-email-rppt@linux.ibm.com
Signed-off-by: Mike Rapoport
Cc: Catalin Marinas
Cc: Christophe Leroy
Cc: Christoph Hellwig
Cc: "David S. Miller"
Cc: Dennis Zhou
Cc: Geert Uytterhoeven
Cc: Greentime Hu
Cc: Greg Kroah-Hartman
Cc: Guan Xuetao
Cc: Guo Ren
Cc: Guo Ren [c-sky]
Cc: Heiko Carstens
Cc: Juergen Gross [Xen]
Cc: Mark Salter
Cc: Matt Turner
Cc: Max Filippov
Cc: Michael Ellerman
Cc: Michal Simek
Cc: Paul Burton
Cc: Petr Mladek
Cc: Richard Weinberger
Cc: Rich Felker
Cc: Rob Herring
Cc: Rob Herring
Cc: Russell King
Cc: Stafford Horne
Cc: Tony Luck
Cc: Vineet Gupta
Cc: Yoshinori Sato
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
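For reference, a hedged sketch of the renamed interface (mm/memblock.c):

    /* Allocate 'size' bytes of physical memory, aligned to 'align',
     * constrained to the [start, end) physical range; returns the
     * physical address, or 0 on failure. */
    phys_addr_t __init memblock_phys_alloc_range(phys_addr_t size,
                                                 phys_addr_t align,
                                                 phys_addr_t start,
                                                 phys_addr_t end);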
06 Mar, 2019
1 commit
-
In case cma_init_reserved_mem() fails, we need to free the memblock
allocated by memblock_reserve() or memblock_alloc_range(); a sketch
follows this entry.

Quoting Catalin's comments (https://lkml.org/lkml/2019/2/26/482):

Kmemleak is supposed to work with the memblock_{alloc,free} pair and it
ignores the memblock_reserve() as a memblock_alloc() implementation
detail. It is, however, tolerant to memblock_free() being called on
a sub-range or just a different range from a previous memblock_alloc().
So the original patch looks fine to me. FWIW:

Link: http://lkml.kernel.org/r/20190227144631.16708-1-peng.fan@nxp.com
Signed-off-by: Peng Fan
Reviewed-by: Catalin Marinas
Reviewed-by: Mike Rapoport
Cc: Laura Abbott
Cc: Joonsoo Kim
Cc: Michal Hocko
Cc: Vlastimil Babka
Cc: Marek Szyprowski
Cc: Andrey Konovalov
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
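A hedged sketch of the error handling this adds to
cma_declare_contiguous() in mm/cma.c (labels simplified; memblock_free()
took a physical address and size in kernels of that era):

    ret = cma_init_reserved_mem(base, size, order_per_bit, name,
                                res_cma);
    if (ret) {
            memblock_free(base, size);  /* undo the reservation */
            goto err;
    }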
29 Dec, 2018
1 commit
-
Tag-based KASAN doesn't check memory accesses through pointers tagged with
0xff. When page_address is used to get pointer to memory that corresponds
to some page, the tag of the resulting pointer gets set to 0xff, even
though the allocated memory might have been tagged differently.

For slab pages it's impossible to recover the correct tag to return from
page_address, since the page might contain multiple slab objects tagged
with different values, and we can't know in advance which one of them is
going to get accessed. For non slab pages however, we can recover the tag
in page_address, since the whole page was marked with the same tag.

This patch adds tagging to non-slab memory allocated with pagealloc. To
set the tag of the pointer returned from page_address, the tag gets stored
to page->flags when the memory gets allocated.Link: http://lkml.kernel.org/r/d758ddcef46a5abc9970182b9137e2fbee202a2c.1544099024.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov
Reviewed-by: Andrey Ryabinin
Reviewed-by: Dmitry Vyukov
Acked-by: Will Deacon
Cc: Christoph Lameter
Cc: Mark Rutland
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
18 Aug, 2018
1 commit
-
cma_alloc() doesn't really support gfp flags other than __GFP_NOWARN, so
convert the gfp_mask parameter to a boolean no_warn parameter; the
resulting signature is sketched after this entry.

This will help avoid giving the false impression that this function
supports standard gfp flags and that callers can pass __GFP_ZERO to get a
zeroed buffer, which has already been an issue: see commit dd65a941f6ba
("arm64: dma-mapping: clear buffers allocated with FORCE_CONTIGUOUS
flag").

Link: http://lkml.kernel.org/r/20180709122019eucas1p2340da484acfcc932537e6014f4fd2c29~-sqTPJKij2939229392eucas1p2j@eucas1p2.samsung.com
Signed-off-by: Marek Szyprowski
Acked-by: Michal Hocko
Acked-by: Michał Nazarewicz
Acked-by: Laura Abbott
Acked-by: Vlastimil Babka
Reviewed-by: Christoph Hellwig
Cc: Joonsoo Kim
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
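A hedged sketch of the interface change (include/linux/cma.h):

    /* Before: looked like a general gfp_mask, but only __GFP_NOWARN
     * was honored. */
    struct page *cma_alloc(struct cma *cma, size_t count,
                           unsigned int align, gfp_t gfp_mask);

    /* After: the one supported knob is made explicit. */
    struct page *cma_alloc(struct cma *cma, size_t count,
                           unsigned int align, bool no_warn);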
25 May, 2018
1 commit
-
This reverts the following commits that change CMA design in MM.
3d2054ad8c2d ("ARM: CMA: avoid double mapping to the CMA area if CONFIG_HIGHMEM=y")
1d47a3ec09b5 ("mm/cma: remove ALLOC_CMA")
bad8c6c0b114 ("mm/cma: manage the memory of the CMA area by using the ZONE_MOVABLE")
Ville reported the following error on i386:
Inode-cache hash table entries: 65536 (order: 6, 262144 bytes)
microcode: microcode updated early to revision 0x4, date = 2013-06-28
Initializing CPU#0
Initializing HighMem for node 0 (000377fe:00118000)
Initializing Movable for node 0 (00000001:00118000)
BUG: Bad page state in process swapper pfn:377fe
page:f53effc0 count:0 mapcount:-127 mapping:00000000 index:0x0
flags: 0x80000000()
raw: 80000000 00000000 00000000 ffffff80 00000000 00000100 00000200 00000001
page dumped because: nonzero mapcount
Modules linked in:
CPU: 0 PID: 0 Comm: swapper Not tainted 4.17.0-rc5-elk+ #145
Hardware name: Dell Inc. Latitude E5410/03VXMC, BIOS A15 07/11/2013
Call Trace:
dump_stack+0x60/0x96
bad_page+0x9a/0x100
free_pages_check_bad+0x3f/0x60
free_pcppages_bulk+0x29d/0x5b0
free_unref_page_commit+0x84/0xb0
free_unref_page+0x3e/0x70
__free_pages+0x1d/0x20
free_highmem_page+0x19/0x40
add_highpages_with_active_regions+0xab/0xeb
set_highmem_pages_init+0x66/0x73
mem_init+0x1b/0x1d7
start_kernel+0x17a/0x363
i386_start_kernel+0x95/0x99
startup_32_smp+0x164/0x168

The reason for this error is that the span of MOVABLE_ZONE is extended
to whole node span for future CMA initialization, and, normal memory is
wrongly freed here. I submitted a fix and it seems to work, but then
another problem happened.

It's too late to fix that follow-on problem properly, so I decided to
revert the series.

Reported-by: Ville Syrjälä
Acked-by: Laura Abbott
Acked-by: Michal Hocko
Cc: Andrew Morton
Signed-off-by: Joonsoo Kim
Signed-off-by: Linus Torvalds
12 Apr, 2018
1 commit
-
Patch series "mm/cma: manage the memory of the CMA area by using the
ZONE_MOVABLE", v2.0. History
This patchset is the follow-up of the discussion about the "Introduce
ZONE_CMA (v7)" [1]. Please reference it if more information is needed.1. What does this patch do?
This patch changes the management way for the memory of the CMA area in
the MM subsystem. Currently the memory of the CMA area is managed by
the zone where their pfn is belong to. However, this approach has some
problems since MM subsystem doesn't have enough logic to handle the
situation that different characteristic memories are in a single zone.
To solve this issue, this patch try to manage all the memory of the CMA
area by using the MOVABLE zone. In MM subsystem's point of view,
characteristic of the memory on the MOVABLE zone and the memory of the
CMA area are the same. So, managing the memory of the CMA area by using
the MOVABLE zone will not have any problem.

2. Motivation
There are some problems with the current approach; see the following.
Although these problems are not inherent and could be fixed without this
conceptual change, doing so would require adding many hooks in various
code paths, and it would be intrusive to core MM and really error-prone.
Therefore, I try to solve them with this new approach. Anyway, the
following are the problems of the current implementation.

o CMA memory utilization

First, the following is the freepage calculation logic in MM:

- For movable allocation: freepage = total freepage
- For unmovable allocation: freepage = total freepage - CMA freepage

Freepages in the CMA area are used only after the normal freepages in the
zone that the CMA area belongs to are exhausted. At the moment that the
number of normal freepages reaches zero:

- For movable allocation: freepage = total freepage = CMA freepage
- For unmovable allocation: freepage = 0

If an unmovable allocation comes at this moment, the allocation request
would fail to pass the watermark check and reclaim is started. After
reclaim, there would exist normal freepages, so freepages in the CMA area
would not be used.

FYI, there is another attempt [2] trying to solve this problem in lkml.
And, as far as I know, Qualcomm also has out-of-tree solution for this
problem.

Useless reclaim:
There is no logic to distinguish CMA pages in the reclaim path. Hence,
CMA page is reclaimed even if the system just needs the page that can be
usable for the kernel allocation.

Atomic allocation failure:
This is also related to the fallback allocation policy for the memory of
the CMA area. Consider the situation that the number of the normal
freepages is *zero* since the bunch of the movable allocation requests
come. Kswapd would not be woken up due to the following freepage
calculation logic:

- For movable allocation: freepage = total freepage = CMA freepage

If an atomic unmovable allocation request comes at this moment, it would
fail due to the following logic:

- For unmovable allocation: freepage = total freepage - CMA freepage = 0

It was reported by Aneesh [3].

Useless compaction:
Usual high-order allocation request is unmovable allocation request and
it cannot be served from the memory of the CMA area. In compaction,
migration scanner try to migrate the page in the CMA area and make
high-order page there. As mentioned above, it cannot be usable for the
unmovable allocation request so it's just waste.

3. Current approach and new approach
Current approach is that the memory of the CMA area is managed by the
zone where their pfn is belong to. However, these memory should be
distinguishable since they have a strong limitation. So, they are
marked as MIGRATE_CMA in pageblock flag and handled specially. However,
as mentioned in section 2, the MM subsystem doesn't have enough logic to
deal with this special pageblock, so many problems were raised.

New approach is that the memory of the CMA area is managed by the
MOVABLE zone. MM already have enough logic to deal with special zone
like as HIGHMEM and MOVABLE zone. So, managing the memory of the CMA
area by the MOVABLE zone just naturally work well because constraints
for the memory of the CMA area that the memory should always be
migratable is the same with the constraint for the MOVABLE zone.

There is one side-effect for the usability of the memory of the CMA
area. The use of MOVABLE zone is only allowed for a request with
GFP_HIGHMEM && GFP_MOVABLE so now the memory of the CMA area is also
only allowed for this gfp flag. Before this patchset, a request with
GFP_MOVABLE can use them. IMO, It would not be a big issue since most
of GFP_MOVABLE request also has GFP_HIGHMEM flag. For example, file
cache page and anonymous page. However, file cache page for blockdev
file is an exception. Request for it has no GFP_HIGHMEM flag. There is
pros and cons on this exception. In my experience, blockdev file cache
pages are one of the top reason that causes cma_alloc() to fail
temporarily. So, we can get more guarantee of cma_alloc() success by
discarding this case.

Note that there is no change in admin POV since this patchset is just
for internal implementation change in MM subsystem. Just one minor
difference for admin is that the memory stat for CMA area will be
printed in the MOVABLE zone. That's all.

4. Result
Following is the experimental result related to utilization problem.
8 CPUs, 1024 MB, VIRTUAL MACHINE
make -j16

CMA area:               0 MB            512 MB
Elapsed-time:           92.4            186.5
pswpin:                 82              18647
pswpout:                160             69839

CMA        :            0 MB            512 MB
Elapsed-time:           93.1            93.4
pswpin:                 84              46
pswpout:                183             92

akpm: "kernel test robot" reported a 26% improvement in
vm-scalability.throughput:
http://lkml.kernel.org/r/20180330012721.GA3845@yexl-desktop

[1]: lkml.kernel.org/r/1491880640-9944-1-git-send-email-iamjoonsoo.kim@lge.com
[2]: https://lkml.org/lkml/2014/10/15/623
[3]: http://www.spinics.net/lists/linux-mm/msg100562.html

Link: http://lkml.kernel.org/r/1512114786-5085-2-git-send-email-iamjoonsoo.kim@lge.com
Signed-off-by: Joonsoo Kim
Reviewed-by: Aneesh Kumar K.V
Tested-by: Tony Lindgren
Acked-by: Vlastimil Babka
Cc: Johannes Weiner
Cc: Laura Abbott
Cc: Marek Szyprowski
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Michal Nazarewicz
Cc: Minchan Kim
Cc: Rik van Riel
Cc: Russell King
Cc: Will Deacon
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
06 Apr, 2018
2 commits
-
Currently <linux/slab.h> #includes <linux/kmemleak.h> for no obvious
reason. It looks like it's only a convenience, so remove kmemleak.h
from slab.h and add <linux/kmemleak.h> to any users of kmemleak_* that
don't already #include it. Also remove <linux/kmemleak.h> from source
files that do not use it.

This is tested on i386 allmodconfig and x86_64 allmodconfig. It would
be good to run it through the 0day bot for other $ARCHes. I have
neither the horsepower nor the storage space for the other $ARCHes.

Update: This patch has been extensively build-tested by both the 0day
bot & kisskb/ozlabs build farms. Both of them reported 2 build failures
for which patches are included here (in v2).

[ slab.h is the second most used header file after module.h; kernel.h is
  right there with slab.h. There could be some minor error in the
  counting due to some #includes having comments after them and I didn't
  combine all of those. ]

[akpm@linux-foundation.org: security/keys/big_key.c needs vmalloc.h, per sfr]
Link: http://lkml.kernel.org/r/e4309f98-3749-93e1-4bb7-d9501a39d015@infradead.org
Link: http://kisskb.ellerman.id.au/kisskb/head/13396/
Signed-off-by: Randy Dunlap
Reviewed-by: Ingo Molnar
Reported-by: Michael Ellerman [2 build failures]
Reported-by: Fengguang Wu [2 build failures]
Reviewed-by: Andrew Morton
Cc: Wei Yongjun
Cc: Luis R. Rodriguez
Cc: Greg Kroah-Hartman
Cc: Mimi Zohar
Cc: John Johansen
Cc: Stephen Rothwell
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Link: http://lkml.kernel.org/r/1519585191-10180-4-git-send-email-rppt@linux.vnet.ibm.com
Signed-off-by: Mike Rapoport
Reviewed-by: Andrew Morton
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
16 Nov, 2017
1 commit
-
It was observed that in the cma_alloc failure log, pr_info was used
instead of pr_err. This will lead to problems if the printk debug level
is set below 7: in that case the cma_alloc failure log will not be
captured and it will be difficult to debug.

Simply replace the pr_info with pr_err to capture the failure log.
Link: http://lkml.kernel.org/r/1507650633-4430-1-git-send-email-pintu.ping@gmail.com
Signed-off-by: Pintu Agarwal
Cc: Laura Abbott
Cc: Greg Kroah-Hartman
Cc: Jaewon Kim
Cc: Doug Berger
Cc: Joonsoo Kim
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
14 Oct, 2017
1 commit
-
cma_alloc() unconditionally prints an INFO message when the CMA
allocation fails. Make this message conditional on the non-presence of
__GFP_NOWARN in gfp_mask; a sketch follows this entry.

This patch aims at removing INFO messages that are displayed when the
VC4 driver tries to allocate buffer objects. From the driver
perspective an allocation failure is acceptable, and the driver can
possibly do something to make following allocation succeed (like
flushing the VC4 internal cache).

Link: http://lkml.kernel.org/r/20171004125447.15195-1-boris.brezillon@free-electrons.com
Signed-off-by: Boris Brezillon
Acked-by: Laura Abbott
Cc: Jaewon Kim
Cc: David Airlie
Cc: Daniel Vetter
Cc: Eric Anholt
Cc: Joonsoo Kim
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
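A hedged sketch of the conditional warning (mm/cma.c as of this change;
the pr_info became pr_err in the later entry above):

    if (ret && !(gfp_mask & __GFP_NOWARN)) {
            pr_info("%s: alloc failed, req-size: %zu pages, ret: %d\n",
                    __func__, count, ret);
            cma_debug_show_areas(cma);
    }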
11 Jul, 2017
2 commits
-
The align_offset parameter is used by bitmap_find_next_zero_area_off()
to represent the offset of map's base from the previous alignment
boundary; the function ensures that the returned index, plus the
align_offset, honors the specified align_mask.

The logic introduced by commit b5be83e308f7 ("mm: cma: align to physical
address, not CMA region position") has the cma driver calculate the
offset to the *next* alignment boundary. In most cases, the base
alignment is greater than that specified when making allocations,
resulting in a zero offset whether we align up or down. In the example
given with the commit, the base alignment (8MB) was half the requested
alignment (16MB) so the math also happened to work since the offset is
8MB in both directions. However, when requesting allocations with an
alignment greater than twice that of the base, the returned index would
not be correctly aligned.

Also, the align_order arguments of cma_bitmap_aligned_mask() and
cma_bitmap_aligned_offset() should not be negative so the argument type
was made unsigned.

Fixes: b5be83e308f7 ("mm: cma: align to physical address, not CMA region position")
Link: http://lkml.kernel.org/r/20170628170742.2895-1-opendmb@gmail.com
Signed-off-by: Angus Clark
Signed-off-by: Doug Berger
Acked-by: Gregory Fong
Cc: Doug Berger
Cc: Angus Clark
Cc: Laura Abbott
Cc: Vlastimil Babka
Cc: Greg Kroah-Hartman
Cc: Lucas Stach
Cc: Catalin Marinas
Cc: Shiraz Hashim
Cc: Jaewon Kim
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
While activating a CMA area we check to make sure that all the PFNs in
the range are inside the same zone. This is a requirement for
alloc_contig_range() to work. Any CMA area failing the check is
disabled for good. This happens silently right now, making all future
cma_alloc() allocation failures inevitable.

Here we add an error message stating that the CMA area could not be
activated, which makes it easier to explain any future cma_alloc()
failures on it. While in there, change the bail-out goto label from
'err' to 'not_in_zone', which makes more sense. A sketch follows this
entry.

Link: http://lkml.kernel.org/r/20170605023729.26303-1-khandual@linux.vnet.ibm.com
Signed-off-by: Anshuman Khandual
Cc: "Aneesh Kumar K.V"
Cc: Joonsoo Kim
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
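A hedged sketch of the activation-time zone check and the new message
(mm/cma.c, cma_activate_area(); surrounding loop details elided):

    /* alloc_contig_range() requires all PFNs to sit in a single zone;
     * bail out to the renamed label otherwise. */
    for (pfn = base_pfn + 1; pfn < base_pfn + cma->count; pfn++) {
            if (page_zone(pfn_to_page(pfn)) != zone)
                    goto not_in_zone;
    }
    ...
    not_in_zone:
            pr_err("CMA area %s could not be activated\n", cma->name);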
19 Apr, 2017
2 commits
-
Frameworks (e.g. Ion) may want to iterate over each possible CMA area to
allow for enumeration. Introduce a function to allow a callback; a usage
sketch follows this entry.

Signed-off-by: Laura Abbott
Signed-off-by: Greg Kroah-Hartman
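A hedged sketch of the interface and a possible caller (the prototype is
as in include/linux/cma.h; the enumeration callback is hypothetical):

    int cma_for_each_area(int (*it)(struct cma *cma, void *data),
                          void *data);

    /* Hypothetical Ion-like enumeration: return nonzero to stop. */
    static int __init add_heap(struct cma *cma, void *data)
    {
            pr_info("found CMA area %s\n", cma_get_name(cma));
            return 0;
    }

    cma_for_each_area(add_heap, NULL);
-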
Frameworks that may want to enumerate CMA heaps (e.g. Ion) will find it
useful to have an explicit name attached to each region. Store the name
in each CMA structure.

Signed-off-by: Laura Abbott
Signed-off-by: Greg Kroah-Hartman
25 Feb, 2017
2 commits
-
There are many reasons for CMA allocation failure, such as EBUSY, ENOMEM,
EINTR. But so far we did not know the error reason. This patch prints the
error value.

Additionally, if CONFIG_CMA_DEBUG is enabled, this patch shows the bitmap
status so the available pages are known. CMA internally tries all
available regions because some regions can fail with EBUSY. The bitmap
status is useful for understanding both ENOMEM and EBUSY in detail:

ENOMEM: not tried at all because there is no available region
        (the total region may be too small, or it may be a fragmentation
        issue)
EBUSY:  tried some region but all attempts failed

This is an ENOMEM example with this patch:
[2: Binder:714_1: 744] cma: cma_alloc: alloc failed, req-size: 256 pages, ret: -12

If CONFIG_CMA_DEBUG is enabled, available pages will also be shown in
concatenated size@position format. So 4@572 means that there are 4
available pages at position 572, counting from position 0.

[2: Binder:714_1: 744] cma: number of available pages: 4@572+7@585+7@601+8@632+38@730+166@1114+127@1921=> 357 free of 2048 total pages
Link: http://lkml.kernel.org/r/1485909785-3952-1-git-send-email-jaewon31.kim@samsung.com
Signed-off-by: Jaewon Kim
Acked-by: Michal Nazarewicz
Cc: Laura Abbott
Cc: Joonsoo Kim
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Most users of this interface just want to use it with the default
GFP_KERNEL flags, but for cases where DMA memory is allocated it may be
called from a different context.

No functional change yet, just passing through the flag to the underlying
alloc_contig_range() function, as sketched after this entry.

Link: http://lkml.kernel.org/r/20170127172328.18574-2-l.stach@pengutronix.de
Signed-off-by: Lucas Stach
Acked-by: Vlastimil Babka
Acked-by: Michal Hocko
Cc: Radim Krcmar
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Chris Zankel
Cc: Ralf Baechle
Cc: Paolo Bonzini
Cc: Alexander Graf
Cc: Joonsoo Kim
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
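A hedged sketch of the plumbing (mm/cma.c signatures as of this change;
allocation bookkeeping elided):

    struct page *cma_alloc(struct cma *cma, size_t count,
                           unsigned int align, gfp_t gfp_mask)
    {
            ...
            ret = alloc_contig_range(pfn, pfn + count,
                                     MIGRATE_CMA, gfp_mask);
            ...
    }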