25 Jun, 2015
1 commit
-
For !CONFIG_NUMA, hashdist will always be 0, since its setter is
otherwise compiled out. So we can save 4 bytes of data and some .text
(although mostly in __init functions) by only defining it for
CONFIG_NUMA.

Signed-off-by: Rasmus Villemoes
Acked-by: David Rientjes
Reviewed-by: Michal Hocko
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
14 Nov, 2014
1 commit
-
In free_area_init_core(), zone->managed_pages is set to an approximate
value for lowmem, and will be adjusted when the bootmem allocator frees
pages into the buddy system.

But free_area_init_core() is also called by hotadd_new_pgdat() when
hot-adding memory. As a result, zone->managed_pages of the newly added
node's pgdat is set to an approximate value in the very beginning.

Even if the memory on that node has not been onlined,
/sys/device/system/node/nodeXXX/meminfo has the wrong value:

hot-add node2 (memory not onlined)
cat /sys/device/system/node/node2/meminfo
Node 2 MemTotal: 33554432 kB
Node 2 MemFree: 0 kB
Node 2 MemUsed: 33554432 kB
Node 2 Active: 0 kB

This patch fixes the problem by resetting node managed pages to 0 after
hot-adding a new node:

1. Move reset_managed_pages_done from reset_node_managed_pages() to
reset_all_zones_managed_pages()
2. Make reset_node_managed_pages() non-static
3. Call reset_node_managed_pages() in hotadd_new_pgdat() after pgdat
is initialized

Signed-off-by: Tang Chen
Signed-off-by: Yasuaki Ishimatsu
Cc: [3.16+]
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
05 Jun, 2014
1 commit
-
Use BOOTMEM_DEFAULT instead of 0 in the comment.
Signed-off-by: Wang Sheng-Hui
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
31 Jan, 2014
1 commit
-
Now we have memblock_virt_alloc_low to replace the original bootmem API in
swiotlb.

But we should not use BOOTMEM_LOW_LIMIT for an arch that does not support
CONFIG_NOBOOTMEM, as the old API takes 0:

| #define alloc_bootmem_low(x) \
|         __alloc_bootmem_low(x, SMP_CACHE_BYTES, 0)
| #define alloc_bootmem_low_pages_nopanic(x) \
|         __alloc_bootmem_low_nopanic(x, PAGE_SIZE, 0)

and we have

| #define BOOTMEM_LOW_LIMIT __pa(MAX_DMA_ADDRESS)

for CONFIG_NOBOOTMEM.

Restore the goal to 0 to fix the ia64 crash that Tony found.
Signed-off-by: Yinghai Lu
Reported-by: Tony Luck
Tested-by: Tony Luck
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
28 Jan, 2014
1 commit
-
The new memblock_virt APIs are used to replace the old bootmem API.

We need to allocate pages below 4G for swiotlb.

That should fix the regression on Andrew's system that is using swiotlb.
Signed-off-by: Yinghai Lu
Cc: Russell King
Cc: Konrad Rzeszutek Wilk
Acked-by: Santosh Shilimkar
Cc: Dave Hansen
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
22 Jan, 2014
2 commits
-
Introduce memblock memory allocation APIs which make it possible to support
the PAE or LPAE extension on 32 bit archs where the physical memory start
address can be beyond 4GB. In such cases, the existing bootmem APIs, which
operate on 32 bit addresses, won't work, and we need the memblock layer,
which operates on 64 bit addresses.

So we add equivalent APIs so that we can replace usage of bootmem with
memblock interfaces. Architectures already converted to NO_BOOTMEM use
these new memblock interfaces. The architectures which are still not
converted to NO_BOOTMEM continue to function as is because we still
maintain the fallback option of the bootmem back-end supporting these new
interfaces. So no functional change as such.

In the long run, once all the architectures move to NO_BOOTMEM, we can get
rid of the bootmem layer completely. This is one step to remove the core
code dependency on bootmem and also gives a path for architectures to
move away from bootmem.

The proposed interface will become active if both CONFIG_HAVE_MEMBLOCK
and CONFIG_NO_BOOTMEM are specified by the arch. In case of
!CONFIG_NO_BOOTMEM, the memblock() wrappers will fall back to the
existing bootmem APIs so that arches not converted to NO_BOOTMEM
continue to work as is.

The meaning of MEMBLOCK_ALLOC_ACCESSIBLE and MEMBLOCK_ALLOC_ANYWHERE
is kept the same.

[akpm@linux-foundation.org: s/depricated/deprecated/]
Signed-off-by: Grygorii Strashko
Signed-off-by: Santosh Shilimkar
Cc: Yinghai Lu
Cc: Tejun Heo
Cc: "Rafael J. Wysocki"
Cc: Arnd Bergmann
Cc: Christoph Lameter
Cc: Greg Kroah-Hartman
Cc: H. Peter Anvin
Cc: Johannes Weiner
Cc: KAMEZAWA Hiroyuki
Cc: Konrad Rzeszutek Wilk
Cc: Michal Hocko
Cc: Paul Walmsley
Cc: Pavel Machek
Cc: Russell King
Cc: Tony Lindgren
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
The __free_pages_bootmem is used internally by MM core and is already
defined in internal.h. So, remove the duplicated declaration.

Signed-off-by: Grygorii Strashko
Signed-off-by: Santosh Shilimkar
Reviewed-by: Tejun Heo
Cc: Yinghai Lu
Cc: "Rafael J. Wysocki"
Cc: Arnd Bergmann
Cc: Christoph Lameter
Cc: Greg Kroah-Hartman
Cc: H. Peter Anvin
Cc: Johannes Weiner
Cc: KAMEZAWA Hiroyuki
Cc: Konrad Rzeszutek Wilk
Cc: Michal Hocko
Cc: Paul Walmsley
Cc: Pavel Machek
Cc: Russell King
Cc: Tony Lindgren
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
04 Jul, 2013
2 commits
-
Now nobody makes use of free_all_bootmem_node(), kill it.
Signed-off-by: Jiang Liu
Cc: Johannes Weiner
Cc: "David S. Miller"
Cc: Yinghai Lu
Acked-by: Tejun Heo
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
Commit "mm: introduce new field 'managed_pages' to struct zone" assumes
that all highmem pages will be freed into the buddy system by function
mem_init(). But that's not always true: some architectures may reserve
some highmem pages during boot. For example, PPC may allocate highmem
pages for giant HugeTLB pages, and several architectures have code to
check the PageReserved flag to exclude highmem pages allocated during boot
when freeing highmem pages into the buddy system.

So treat highmem pages in the same way as normal pages, that is:
1) reset zone->managed_pages to zero in mem_init().
2) recalculate managed_pages when freeing pages into the buddy system.

Signed-off-by: Jiang Liu
Cc: "H. Peter Anvin"
Cc: Tejun Heo
Cc: Joonsoo Kim
Cc: Yinghai Lu
Cc: Mel Gorman
Cc: Minchan Kim
Cc: Kamezawa Hiroyuki
Cc: Marek Szyprowski
Cc: "Michael S. Tsirkin"
Cc:
Cc: Arnd Bergmann
Cc: Catalin Marinas
Cc: Chris Metcalf
Cc: David Howells
Cc: Geert Uytterhoeven
Cc: Ingo Molnar
Cc: Jeremy Fitzhardinge
Cc: Jianguo Wu
Cc: Konrad Rzeszutek Wilk
Cc: Michel Lespinasse
Cc: Rik van Riel
Cc: Rusty Russell
Cc: Tang Chen
Cc: Thomas Gleixner
Cc: Wen Congyang
Cc: Will Deacon
Cc: Yasuaki Ishimatsu
Cc: Russell King
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
30 Apr, 2013
1 commit
-
Remove the unused argument and make the function static, because there is
no user outside of nobootmem.c.

Signed-off-by: Joonsoo Kim
Cc: Yinghai Lu
Acked-by: Johannes Weiner
Cc: Jiang Liu
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
24 Feb, 2013
1 commit
-
When memory is removed, the corresponding pagetables should also be
removed. This patch introduces some common APIs to support vmemmap
pagetable and x86_64 architecture direct mapping pagetable removal.

All pages of a virtual mapping in removed memory cannot be freed if some
pages used as PGD/PUD include not only removed memory but also other
memory. So this patch uses the following way to check whether a page
can be freed or not:

1) When removing memory, the page structs of the removed memory are
filled with 0xFD.

2) When all page structs on a PT/PMD are filled with 0xFD, the PT/PMD can
be cleared. In this case, the page used as the PT/PMD can be freed.

For direct mapping pages, update direct_pages_count[level] when we free
their pagetables. And do not free the pages again because they were
freed when offlining.

For vmemmap pages, free the pages and their pagetables.

For larger pages, do not split them into smaller ones because there is
no way to know if the larger page has been split. As a result, there is
no way to decide when to split. We deal with the larger pages in the
following way:

1) For direct mapped pages, all the pages were freed when they were
offlined. And since memory offline is done section by section, all
the memory ranges being removed are aligned to PAGE_SIZE. So we only need
to deal with unaligned pages when freeing vmemmap pages.

2) For vmemmap pages being used to store page_struct, if part of the
larger page is still in use, just fill the unused part with 0xFD. And
when the whole page is filled with 0xFD, then free the larger page.

[akpm@linux-foundation.org: fix typo in comment]
[tangchen@cn.fujitsu.com: do not calculate direct mapping pages when freeing vmemmap pagetables]
[tangchen@cn.fujitsu.com: do not free direct mapping pages twice]
[tangchen@cn.fujitsu.com: do not free page split from hugepage one by one]
[tangchen@cn.fujitsu.com: do not split pages when freeing pagetable pages]
[akpm@linux-foundation.org: use pmd_page_vaddr()]
[akpm@linux-foundation.org: fix used-uninitialised bug]
Signed-off-by: Yasuaki Ishimatsu
Signed-off-by: Jianguo Wu
Signed-off-by: Wen Congyang
Signed-off-by: Tang Chen
Cc: KOSAKI Motohiro
Cc: Jiang Liu
Cc: Kamezawa Hiroyuki
Cc: Lai Jiangshan
Cc: Wu Jianguo
Cc: Ingo Molnar
Cc: Thomas Gleixner
Cc: "H. Peter Anvin"
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
30 Jan, 2013
1 commit
-
We don't need to panic in some case, like for swiotlb preallocating.
Signed-off-by: Yinghai Lu
Link: http://lkml.kernel.org/r/1359058816-7615-35-git-send-email-yinghai@kernel.org
Signed-off-by: H. Peter Anvin
13 Dec, 2012
1 commit
-
reserve_bootmem_generic() has no caller, so remove it.
Signed-off-by: Lin Feng
Acked-by: Johannes Weiner
Cc: Yinghai Lu
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
12 Dec, 2012
1 commit
-
It is strange that alloc_bootmem() returns a virtual address while
free_bootmem() requires a physical address. In any case, free_bootmem()'s
first parameter should be a physical address.

There are some call sites that pass free_bootmem() a virtual address, so
fix them.

[akpm@linux-foundation.org: improve free_bootmem() and free_bootmem_late() documentation]
Signed-off-by: Joonsoo Kim
Cc: Haavard Skinnemoen
Cc: Hans-Christian Egtvedt
Cc: Johannes Weiner
Cc: FUJITA Tomonori
Cc: Benjamin Herrenschmidt
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
12 Jul, 2012
1 commit
-
After commit f5bf18fa22f8 ("bootmem/sparsemem: remove limit constraint
in alloc_bootmem_section"), usemap allocations may easily be placed
outside the optimal section that holds the node descriptor, even if
there is space available in that section. This results in unnecessary
hotplug dependencies that require the node to be unplugged before the
section holding the usemap.

The reason is that the bootmem allocator doesn't guarantee a linear
search starting from the passed allocation goal but may start out at a
much higher address absent an upper limit.

Fix this by trying the allocation with the limit at the section end,
then retrying without it if that fails. This keeps the fix from
f5bf18fa22f8 of not panicking if the allocation does not fit in the
section, but still makes sure to try to stay within the section at first.

Signed-off-by: Yinghai Lu
Signed-off-by: Johannes Weiner
Cc: [3.3.x, 3.4.x]
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
30 May, 2012
1 commit
-
alloc_bootmem_section() derives allocation area constraints from the
specified sparsemem section. This is a bit too specific for a generic
memory allocator like bootmem, though, so move it over to sparsemem.

As __alloc_bootmem_node_nopanic() already retries failed allocations with
relaxed area constraints, the fallback code in sparsemem.c can be removed
and the code becomes a bit more compact overall.

[akpm@linux-foundation.org: fix build]
Signed-off-by: Johannes Weiner
Acked-by: Tejun Heo
Acked-by: David S. Miller
Cc: Yinghai Lu
Cc: Gavin Shan
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
24 May, 2012
1 commit
-
The UDP stack needs a minimum hash size value for proper operation and
also uses alloc_large_system_hash() for proper NUMA distribution of its
hash tables and automatic sizing depending on available system memory.

In some low memory situations, udp_table_init() must ignore the
alloc_large_system_hash() result and reallocate a bigger memory area.

As we cannot easily free the old hash table, we leak it and kmemleak can
issue a warning.

This patch adds a low limit parameter to alloc_large_system_hash() to
solve this problem.

We then specify UDP_HTABLE_SIZE_MIN for UDP/UDPLite hash table
allocation.

Reported-by: Mark Asselstine
Reported-by: Tim Bird
Signed-off-by: Eric Dumazet
Cc: Paul Gortmaker
Signed-off-by: David S. Miller
15 Jul, 2011
1 commit
-
nomemblock is currently used only by x86, and on x86_32
free_all_memory_core_early() silently freed only the low mem because
get_free_all_memory_range() in arch/x86/mm/memblock.c implicitly
limited the range to max_low_pfn.

Rename free_all_memory_core_early() to free_low_memory_core_early()
and make it call __get_free_all_memory_range() and limit the range to
max_low_pfn explicitly. This makes things clearer and also is
consistent with the bootmem behavior.

This leaves get_free_all_memory_range() without any user. Kill it.
Signed-off-by: Tejun Heo
Link: http://lkml.kernel.org/r/1310462166-31469-9-git-send-email-tj@kernel.org
Cc: Yinghai Lu
Cc: Benjamin Herrenschmidt
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Signed-off-by: H. Peter Anvin
25 May, 2011
1 commit
-
The bootmem wrapper with memblock supports top-down now, so we do not need
to set the low limit to __pa(MAX_DMA_ADDRESS).

The logic should be: it is good to allocate above __pa(MAX_DMA_ADDRESS),
but it is OK if we cannot find memory above 16M on a system that has a
small amount of RAM.

Signed-off-by: Yinghai Lu
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: Olaf Hering
Cc: Tejun Heo
Cc: Lucas De Marchi
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
12 May, 2011
1 commit
-
Stefan found nobootmem does not work on his system that has only 8M of
RAM. This causes an early panic:

BIOS-provided physical RAM map:
BIOS-88: 0000000000000000 - 000000000009f000 (usable)
BIOS-88: 0000000000100000 - 0000000000840000 (usable)
bootconsole [earlyser0] enabled
Notice: NX (Execute Disable) protection missing in CPU or disabled in BIOS!
DMI not present or invalid.
last_pfn = 0x840 max_arch_pfn = 0x100000
init_memory_mapping: 0000000000000000-0000000000840000
8MB LOWMEM available.
mapped low ram: 0 - 00840000
low ram: 0 - 00840000
Zone PFN ranges:
DMA 0x00000001 -> 0x00001000
Normal empty
Movable zone start PFN for each node
early_node_map[2] active PFN ranges
0: 0x00000001 -> 0x0000009f
0: 0x00000100 -> 0x00000840
BUG: Int 6: CR2 (null)
EDI c034663c ESI (null) EBP c0329f38 ESP c0329ef4
EBX c0346380 EDX 00000006 ECX ffffffff EAX fffffff4
err (null) EIP c0353191 CS c0320060 flg 00010082
Stack: (null) c030c533 000007cd (null) c030c533 00000001 (null) (null)
00000003 0000083f 00000018 00000002 00000002 c0329f6c c03534d6 (null)
(null) 00000100 00000840 (null) c0329f64 00000001 00001000 (null)
Pid: 0, comm: swapper Not tainted 2.6.36 #5
Call Trace:
[] ? 0xc02e3707
[] 0xc035e6e5
[] ? 0xc0353191
[] 0xc03534d6
[] 0xc034f1cd
[] 0xc034a824
[] ? 0xc03513cb
[] 0xc0349432
[] 0xc0349066

It turns out that we should ignore the low limit of 16M.
Use alloc_bootmem_node_nopanic() in this case.
[akpm@linux-foundation.org: less mess]
Signed-off-by: Yinghai Lu
Reported-by: Stefan Hellermann
Tested-by: Stefan Hellermann
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Cc: Thomas Gleixner
Cc: [2.6.34+]
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
24 Mar, 2011
1 commit
-
…p_elfcorehdr and saved_max_pfn
The Xen PV drivers in a crashed HVM guest can not connect to the dom0
backend drivers because both frontend and backend drivers are still in
the connected state. To run the connection reset function only in case of
a crashdump, the is_kdump_kernel() function needs to be available for the
PV driver modules.

Consolidate elfcorehdr_addr, setup_elfcorehdr and saved_max_pfn into
kernel/crash_dump.c. Also export elfcorehdr_addr to make is_kdump_kernel()
usable for modules.

Leave 'elfcorehdr' as an early_param(). This changes powerpc from
__setup() to early_param(). It also adds an address range check from x86
on ia64 and powerpc.

[akpm@linux-foundation.org: additional #includes]
[akpm@linux-foundation.org: remove elfcorehdr_addr export]
[akpm@linux-foundation.org: fix for Tejun's mm/nobootmem.c changes]
Signed-off-by: Olaf Hering <olaf@aepfle.de>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: "Luck, Tony" <tony.luck@intel.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mundt <lethal@linux-sh.org>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
14 Dec, 2010
1 commit
-
Add an alloc_bootmem_align() interface to allocate bootmem with a
specified alignment. This is necessary to be able to allocate the
xsave area in a subsequent patch.

Signed-off-by: Suresh Siddha
LKML-Reference:
Acked-by: H. Peter Anvin
Signed-off-by: H. Peter Anvin
Cc:
13 Feb, 2010
1 commit
-
Finally we can use early_res to replace bootmem for x86_64 now.
We can still use CONFIG_NO_BOOTMEM to enable or disable it.
-v2: fix 32bit compiling about MAX_DMA32_PFN
-v3: folded bug fix from LKML message below

Signed-off-by: Yinghai Lu
LKML-Reference:
Signed-off-by: H. Peter Anvin
10 Nov, 2009
1 commit
-
Add a new function for freeing bootmem after the bootmem
allocator has been released and the unreserved pages given to
the page allocator.

This allows us to reserve bootmem and then release it if we
later discover it was not needed.

(This new API will be used by the swiotlb code to recover
a significant amount of RAM (64MB).)

Signed-off-by: FUJITA Tomonori
Acked-by: Pekka Enberg
Cc: chrisw@sous-sol.org
Cc: dwmw2@infradead.org
Cc: joerg.roedel@amd.com
Cc: muli@il.ibm.com
Cc: hannes@cmpxchg.org
Cc: tj@kernel.org
Cc: akpm@linux-foundation.org
Cc: Linus Torvalds
LKML-Reference:
Signed-off-by: Ingo Molnar
22 Sep, 2009
1 commit
-
This is being done by allowing boot time allocations to specify that they
may want a sub-page sized amount of memory.

Overall this seems more consistent with the other hash table allocations,
and allows making two supposedly mm-only variables really mm-only
(nr_{kernel,all}_pages).

Signed-off-by: Jan Beulich
Cc: Ingo Molnar
Cc: "Eric W. Biederman"
Cc: Mel Gorman
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
01 Apr, 2009
1 commit
-
On PowerPC we allocate large boot time hashes on node 0. This leads to an
imbalance in the free memory, for example on a 64GB box (4 x 16GB nodes):

Free memory:
Node 0: 97.03%
Node 1: 98.54%
Node 2: 98.42%
Node 3: 98.53%

If we switch to using vmalloc (like ia64 and x86-64) things are more
balanced:

Free memory:
Node 0: 97.53%
Node 1: 98.35%
Node 2: 98.33%
Node 3: 98.33%

For many HPC applications we are limited by the free available memory on
the smallest node, so even though the same amount of memory is used, the
better balancing helps.

Since all 64 bit NUMA capable architectures should have sufficient vmalloc
space, it makes sense to enable it via CONFIG_64BIT.

Signed-off-by: Anton Blanchard
Acked-by: David S. Miller
Acked-by: Benjamin Herrenschmidt
Acked-by: Ralf Baechle
Cc: Heiko Carstens
Cc: Martin Schwidefsky
Cc: Ivan Kokshaysky
Cc: Richard Henderson
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
24 Feb, 2009
2 commits
-
Impact: cleanup and addition of missing interface wrapper

The interface functions in bootmem.h were ordered in a not so orderly
manner. Reorder them such that:

* functions allocating the same area group together, i.e. the
  alloc_bootmem group and the alloc_bootmem_low group.
* functions w/o a node parameter come before the ones w/ a node parameter.
* nopanic variants are immediately below their panicky counterparts.

While at it, add alloc_bootmem_pages_node_nopanic() which was missing.
Signed-off-by: Tejun Heo
Cc: Johannes Weiner -
Impact: cleaner and consistent bootmem wrapping

By setting CONFIG_HAVE_ARCH_BOOTMEM_NODE, archs can define
arch-specific wrappers for bootmem allocation. However, this is done
a bit strangely in that only the high level convenience macros can be
changed while the lower level, but still exported, interface functions
can't be wrapped. This not only is messy but also leads to the strange
situation where alloc_bootmem() does what the arch wants it to do but
the equivalent __alloc_bootmem() call doesn't, although they should be
usable interchangeably.

This patch updates bootmem such that archs can override / wrap the
backend function, alloc_bootmem_core(), instead of the high level
interface functions, to allow simpler and consistent wrapping. Also,
HAVE_ARCH_BOOTMEM_NODE is renamed to HAVE_ARCH_BOOTMEM.

Signed-off-by: Tejun Heo
Cc: Johannes Weiner
13 Aug, 2008
1 commit
-
.. since a failed allocation is being (initially) handled gracefully, and
panic()-ed upon failure explicitly in the function if retries with smaller
sizes failed.

Signed-off-by: Jan Beulich
Signed-off-by: David Howells
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
26 Jul, 2008
1 commit
-
These got unintentionally moved; put them back, as x86 provides its own
versions.

Signed-off-by: Johannes Weiner
Signed-off-by: Linus Torvalds
25 Jul, 2008
6 commits
-
Almost all users of this field need a PFN instead of a physical address,
so replace node_boot_start with node_min_pfn.

[Lee.Schermerhorn@hp.com: fix spurious BUG_ON() in mark_bootmem()]
Signed-off-by: Johannes Weiner
Cc:
Signed-off-by: Lee Schermerhorn
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
alloc_bootmem_core has become quite nasty to read over time. This is a
clean rewrite that keeps the semantics.

bdata->last_pos has been dropped.

bdata->last_success has been renamed to hint_idx and it is now an index
relative to the node's range. Since further block searching might start
at this index, it is now set to the end of a succeeded allocation rather
than its beginning.

bdata->last_offset has been renamed to last_end_off to make it clearer
that it represents the ending address of the last allocation relative to
the node.

[y-goto@jp.fujitsu.com: fix new alloc_bootmem_core()]
Signed-off-by: Johannes Weiner
Signed-off-by: Yasunori Goto
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
This only reorders functions so that further patches will be easier to
read. No code changed.

Signed-off-by: Johannes Weiner
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
A straightforward variant of the existing __alloc_bootmem_node, needed
by a subsequent patch when allocating giant hugepages at boot: we don't
want to panic if we can't allocate as many as the user asked for.

Signed-off-by: Andi Kleen
Signed-off-by: Nick Piggin
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
This function has no external callers, so unexport it. Also fix its naming
inconsistency.

Signed-off-by: Johannes Weiner
Cc: Ingo Molnar
Cc: Yinghai Lu
Cc: Christoph Lameter
Cc: Mel Gorman
Cc: Andy Whitcroft
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
There are a lot of places that define either a single bootmem descriptor or an
array of them. Use only one central array with MAX_NUMNODES items instead.

Signed-off-by: Johannes Weiner
Acked-by: Ralf Baechle
Cc: Ingo Molnar
Cc: Richard Henderson
Cc: Russell King
Cc: Tony Luck
Cc: Hirokazu Takata
Cc: Geert Uytterhoeven
Cc: Kyle McMartin
Cc: Paul Mackerras
Cc: Paul Mundt
Cc: David S. Miller
Cc: Yinghai Lu
Cc: Christoph Lameter
Cc: Mel Gorman
Cc: Andy Whitcroft
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
08 Jul, 2008
1 commit
-
1. add reserve_bootmem_generic for 32bit
2. change len to unsigned long
3. make early_res_to_bootmem use it

Signed-off-by: Yinghai Lu
Signed-off-by: Ingo Molnar
22 Jun, 2008
1 commit
-
This patch changes the function reserve_bootmem_node() from void to int,
returning -ENOMEM if the allocation fails.

This fixes a build problem on x86 with CONFIG_KEXEC=y and
CONFIG_NEED_MULTIPLE_NODES=y.

Signed-off-by: Bernhard Walle
Reported-by: Adrian Bunk
Signed-off-by: Linus Torvalds
28 Apr, 2008
1 commit
-
alloc_bootmem_section() can allocate a specified section's area. This is
used by a later patch for the usemap, to keep it in the same section as
the pgdat.

Signed-off-by: Yasunori Goto
Cc: Badari Pulavarty
Cc: Yinghai Lu
Cc: Yasunori Goto
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
08 Feb, 2008
1 commit
-
This patchset adds a flags variable to reserve_bootmem() and uses the
BOOTMEM_EXCLUSIVE flag in the crashkernel reservation code to detect
collisions between the crashkernel area and already used memory.

This patch:

Change the reserve_bootmem() function to accept a new flag,
BOOTMEM_EXCLUSIVE. If that flag is set, the function returns -EBUSY if
the memory already has been reserved in the past. This is to avoid
conflicts.

Because that code runs before SMP initialisation, there's no race condition
inside reserve_bootmem_core().

[akpm@linux-foundation.org: coding-style fixes]
[akpm@linux-foundation.org: fix powerpc build]
Signed-off-by: Bernhard Walle
Cc:
Cc: "Eric W. Biederman"
Cc: Vivek Goyal
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds