25 May, 2010
1 commit
-
We need to put the mem_map high when virtual memmap is not used.

Before this patch, the free mem pfn ranges on the first node were:
[ 0.000000] 19 - 1f
[ 0.000000] 28 40 - 80 95
[ 0.000000] 702 740 - 1000 1000
[ 0.000000] 347c - 347e
[ 0.000000] 34e7 3500 - 3b80 3b8b
[ 0.000000] 73b8b 73bc0 - 73c00 73c00
[ 0.000000] 73ddd - 73e00
[ 0.000000] 73fdd - 74000
[ 0.000000] 741dd - 74200
[ 0.000000] 743dd - 74400
[ 0.000000] 745dd - 74600
[ 0.000000] 747dd - 74800
[ 0.000000] 749dd - 74a00
[ 0.000000] 74bdd - 74c00
[ 0.000000] 74ddd - 74e00
[ 0.000000] 74fdd - 75000
[ 0.000000] 751dd - 75200
[ 0.000000] 753dd - 75400
[ 0.000000] 755dd - 75600
[ 0.000000] 757dd - 75800
[ 0.000000] 759dd - 75a00
[ 0.000000] 79bdd 79c00 - 7d540 7d550
[ 0.000000] 7f745 - 7f750
[ 0.000000] 10000b 100040 - 2080000 2080000
so only 79c00 - 7d540 is the major free block under 4g...

After this patch, we will get:
[ 0.000000] 19 - 1f
[ 0.000000] 28 40 - 80 95
[ 0.000000] 702 740 - 1000 1000
[ 0.000000] 347c - 347e
[ 0.000000] 34e7 3500 - 3600 3600
[ 0.000000] 37dd - 3800
[ 0.000000] 39dd - 3a00
[ 0.000000] 3bdd - 3c00
[ 0.000000] 3ddd - 3e00
[ 0.000000] 3fdd - 4000
[ 0.000000] 41dd - 4200
[ 0.000000] 43dd - 4400
[ 0.000000] 45dd - 4600
[ 0.000000] 47dd - 4800
[ 0.000000] 49dd - 4a00
[ 0.000000] 4bdd - 4c00
[ 0.000000] 4ddd - 4e00
[ 0.000000] 4fdd - 5000
[ 0.000000] 51dd - 5200
[ 0.000000] 53dd - 5400
[ 0.000000] 95dd 9600 - 7d540 7d550
[ 0.000000] 7f745 - 7f750
[ 0.000000] 17000b 170040 - 2080000 2080000
we will have 9600 - 7d540 as the major free block...

The sparse-vmemmap path already uses __alloc_bootmem_node_high().
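A hedged sketch of the non-vmemmap side of the change (the allocation call shape is an assumption, not quoted from the patch):

/* place the node's mem_map block high in memory so the space under
 * 4G stays available as one large free range */
map = __alloc_bootmem_node_high(NODE_DATA(nodeid), size,
				PAGE_SIZE, __pa(MAX_DMA_ADDRESS));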
Signed-off-by: Yinghai Lu
Cc: Jiri Slaby
Cc: "H. Peter Anvin"
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Christoph Lameter
Cc: Greg Thelen
Cc: Johannes Weiner
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
30 Mar, 2010
1 commit
-
…it slab.h inclusion from percpu.h
percpu.h is included by sched.h and module.h and thus ends up being
included when building most .c files. percpu.h includes slab.h which
in turn includes gfp.h making everything defined by the two files
universally available and complicating inclusion dependencies.

The percpu.h -> slab.h dependency is about to be removed. Prepare for
this change by updating users of gfp and slab facilities to include those
headers directly instead of assuming their availability. As this conversion
needs to touch a large number of source files, the following script was
used as the basis of the conversion:

http://userweb.kernel.org/~tj/misc/slabh-sweep.py
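As a hedged illustration of the kind of edit the sweep makes (a made-up file, not from the patch):

#include <linux/percpu.h>
#include <linux/slab.h>		/* added: this file calls kmalloc() */

static DEFINE_PER_CPU(void *, scratch);

static int __init example_init(void)
{
	/* previously compiled only via the implicit percpu.h -> slab.h chain */
	per_cpu(scratch, 0) = kmalloc(64, GFP_KERNEL);
	return per_cpu(scratch, 0) ? 0 : -ENOMEM;
}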
The script does the following:
* Scan files for gfp and slab usages and update includes such that
only the necessary includes are there. ie. if only gfp is used,
gfp.h, if slab is used, slab.h.

* When the script inserts a new include, it looks at the include
blocks and try to put the new include such that its order conforms
to its surrounding. It's put in the include block which contains
core kernel includes, in the same order that the rest are ordered -
alphabetical, Christmas tree, rev-Xmas-tree or at the end if there
doesn't seem to be any matching order.

* If the script can't find a place to put a new include (mostly
because the file doesn't have fitting include block), it prints out
an error message indicating which .h file needs to be added to the
file.

The conversion was done in the following steps.
1. The initial automatic conversion of all .c files updated slightly
over 4000 files, deleting around 700 includes and adding ~480 gfp.h
and ~3000 slab.h inclusions. The script emitted errors for ~400
files.

2. Each error was manually checked. Some didn't need the inclusion,
some needed manual addition while adding it to implementation .h or
embedding .c file was more appropriate for others. This step added
inclusions to around 150 files.

3. The script was run again and the output was compared to the edits
from #2 to make sure no file was left behind.

4. Several build tests were done and a couple of problems were fixed.
e.g. lib/decompress_*.c used malloc/free() wrappers around slab
APIs requiring slab.h to be added manually.

5. The script was run on all .h files but without automatically
editing them as sprinkling gfp.h and slab.h inclusions around .h
files could easily lead to inclusion dependency hell. Most gfp.h
inclusion directives were ignored as stuff from gfp.h was usually
widely available and often used in preprocessor macros. Each
slab.h inclusion directive was examined and added manually as
necessary.

6. percpu.h was updated not to include slab.h.
7. Build test were done on the following configurations and failures
were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my
distributed build env didn't work with gcov compiles) and a few
more options had to be turned off depending on archs to make things
build (like ipr on powerpc/64 which failed due to missing writeq).

* x86 and x86_64 UP and SMP allmodconfig and a custom test config.
* powerpc and powerpc64 SMP allmodconfig
* sparc and sparc64 SMP allmodconfig
* ia64 SMP allmodconfig
* s390 SMP allmodconfig
* alpha SMP allmodconfig
* um on x86_64 SMP allmodconfig

8. percpu.h modifications were reverted so that it could be applied as
a separate patch and serve as a bisection point.

Given that I had only a couple of failures from the tests in step
6, I'm fairly confident about the coverage of this conversion patch.
If there is a breakage, it's likely to be something in one of the arch
headers, which should be easily discoverable on most builds of
the specific arch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
02 Mar, 2010
1 commit
-
Stephen reported that a build (powerpc ppc64_defconfig) produced these
warnings:

mm/sparse.c: In function 'sparse_init':
mm/sparse.c:488: warning: unused variable 'map_count'
mm/sparse.c:484: warning: unused variable 'size2'
mm/sparse.c:481: warning: unused variable 'map_map'
mm/sparse.c: At top level:
mm/sparse.c:442: warning: 'sparse_early_mem_maps_alloc_node' defined but not used

Introduced by commit 9bdac914240759457175ac0d6529a37d2820bc4d
("sparsemem: Put mem map for one node together").Conditionalize the bits appropriately based on the setting of
CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER.Reported-by: Stephen Rothwell
Tested-by: Stephen Rothwell
Signed-off-by: Yinghai Lu
LKML-Reference:
Signed-off-by: H. Peter Anvin
13 Feb, 2010
2 commits
-
Add vmemmap_alloc_block_buf() for the mem map only.
It will fall back to the old way if it cannot get a block that big.
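A hedged sketch of the buffered path, using the vmemmap_buf naming from the -v2 note below (simplified; the real bounds handling may differ):

/* carve each section's mem_map out of one large per-node block;
 * fall back to a per-section allocation when no block is left */
void * __meminit vmemmap_alloc_block_buf(unsigned long size, int node)
{
	void *ptr;

	if (!vmemmap_buf)
		return vmemmap_alloc_block(size, node);	/* old way */

	ptr = (void *)ALIGN((unsigned long)vmemmap_buf, size);
	if (ptr + size > vmemmap_buf_end)
		return vmemmap_alloc_block(size, node);	/* buffer used up */

	vmemmap_buf = ptr + size;
	return ptr;
}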
Before this patch, when a node has 128GB of RAM installed, the memmap is
split into two or more parts:
[ 0.000000] [ffffea0000000000-ffffea003fffffff] PMD -> [ffff880100600000-ffff88013e9fffff] on node 1
[ 0.000000] [ffffea0040000000-ffffea006fffffff] PMD -> [ffff88013ec00000-ffff88016ebfffff] on node 1
[ 0.000000] [ffffea0070000000-ffffea007fffffff] PMD -> [ffff882000600000-ffff8820105fffff] on node 0
[ 0.000000] [ffffea0080000000-ffffea00bfffffff] PMD -> [ffff882010800000-ffff8820507fffff] on node 0
[ 0.000000] [ffffea00c0000000-ffffea00dfffffff] PMD -> [ffff882050a00000-ffff8820709fffff] on node 0
[ 0.000000] [ffffea00e0000000-ffffea00ffffffff] PMD -> [ffff884000600000-ffff8840205fffff] on node 2
[ 0.000000] [ffffea0100000000-ffffea013fffffff] PMD -> [ffff884020800000-ffff8840607fffff] on node 2
[ 0.000000] [ffffea0140000000-ffffea014fffffff] PMD -> [ffff884060a00000-ffff8840709fffff] on node 2
[ 0.000000] [ffffea0150000000-ffffea017fffffff] PMD -> [ffff886000600000-ffff8860305fffff] on node 3
[ 0.000000] [ffffea0180000000-ffffea01bfffffff] PMD -> [ffff886030800000-ffff8860707fffff] on node 3
[ 0.000000] [ffffea01c0000000-ffffea01ffffffff] PMD -> [ffff888000600000-ffff8880405fffff] on node 4
[ 0.000000] [ffffea0200000000-ffffea022fffffff] PMD -> [ffff888040800000-ffff8880707fffff] on node 4
[ 0.000000] [ffffea0230000000-ffffea023fffffff] PMD -> [ffff88a000600000-ffff88a0105fffff] on node 5
[ 0.000000] [ffffea0240000000-ffffea027fffffff] PMD -> [ffff88a010800000-ffff88a0507fffff] on node 5
[ 0.000000] [ffffea0280000000-ffffea029fffffff] PMD -> [ffff88a050a00000-ffff88a0709fffff] on node 5
[ 0.000000] [ffffea02a0000000-ffffea02bfffffff] PMD -> [ffff88c000600000-ffff88c0205fffff] on node 6
[ 0.000000] [ffffea02c0000000-ffffea02ffffffff] PMD -> [ffff88c020800000-ffff88c0607fffff] on node 6
[ 0.000000] [ffffea0300000000-ffffea030fffffff] PMD -> [ffff88c060a00000-ffff88c0709fffff] on node 6
[ 0.000000] [ffffea0310000000-ffffea033fffffff] PMD -> [ffff88e000600000-ffff88e0305fffff] on node 7
[ 0.000000] [ffffea0340000000-ffffea037fffffff] PMD -> [ffff88e030800000-ffff88e0707fffff] on node 7

After the patch we will get:
[ 0.000000] [ffffea0000000000-ffffea006fffffff] PMD -> [ffff880100200000-ffff88016e5fffff] on node 0
[ 0.000000] [ffffea0070000000-ffffea00dfffffff] PMD -> [ffff882000200000-ffff8820701fffff] on node 1
[ 0.000000] [ffffea00e0000000-ffffea014fffffff] PMD -> [ffff884000200000-ffff8840701fffff] on node 2
[ 0.000000] [ffffea0150000000-ffffea01bfffffff] PMD -> [ffff886000200000-ffff8860701fffff] on node 3
[ 0.000000] [ffffea01c0000000-ffffea022fffffff] PMD -> [ffff888000200000-ffff8880701fffff] on node 4
[ 0.000000] [ffffea0230000000-ffffea029fffffff] PMD -> [ffff88a000200000-ffff88a0701fffff] on node 5
[ 0.000000] [ffffea02a0000000-ffffea030fffffff] PMD -> [ffff88c000200000-ffff88c0701fffff] on node 6
[ 0.000000] [ffffea0310000000-ffffea037fffffff] PMD -> [ffff88e000200000-ffff88e0701fffff] on node 7

-v2: change buf to vmemmap_buf instead, according to Ingo
also add CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER according to Ingo
-v3: according to Andrew, use sizeof(name) instead of hard-coded 15

Signed-off-by: Yinghai Lu
LKML-Reference:
Cc: Christoph Lameter
Acked-by: Christoph Lameter
Signed-off-by: H. Peter Anvin -
Could save some buffer space instead of applying them one by one. Could
help systems that are going to use early_res instead of bootmem: fewer
entries in early_res make searching faster on a system with more memory.

Signed-off-by: Yinghai Lu
LKML-Reference:
Signed-off-by: H. Peter Anvin
22 Sep, 2009
1 commit
-
To initialize a hot-added node, some pages are allocated. At that time,
the node has no memory, so the allocation always fails. In such a case,
let's allocate pages from other nodes.
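A hedged sketch of the fallback (illustrative, not the literal diff):

/* try the node being initialized first; if it has no pages yet,
 * let the allocator pick any node instead of failing */
page = alloc_pages_node(nid, GFP_KERNEL | __GFP_NOWARN, order);
if (!page)
	page = alloc_pages(GFP_KERNEL, order);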
Signed-off-by: Shaohua Li
Signed-off-by: Yakui Zhao
Cc: Mel Gorman
Cc: Christoph Lameter
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
01 Apr, 2009
1 commit
-
If start_pfn overlaps the upper bound, there is no need to test end_pfn
again since it has already been trimmed.

Signed-off-by: Cyrill Gorcunov
Reviewed-by: Christoph Lameter
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
01 Dec, 2008
1 commit
-
Signed-off-by: Al Viro
Signed-off-by: Linus Torvalds
13 Aug, 2008
1 commit
-
Signed-off-by: Huang Weiyi
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
27 Jul, 2008
1 commit
-
This patch makes the needlessly global sparse_early_mem_map_alloc()
static.

Signed-off-by: Adrian Bunk
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
25 Jul, 2008
2 commits
-
With this patch, usemaps are allocated on the section which has the
pgdat. Because the usemap size is very small, the usemaps of many other
sections are allocated on a single page. If a section holds a usemap, it
can't be removed until the other sections are removed. This dependency
is not desirable for memory removal.

Pgdat has a similar property: when a section holds the pgdat area, it
must be the last section to be removed on the node. So, if section A has
the pgdat and section B has the usemap for section A, neither section
can be removed due to the mutual dependency.

To solve this issue, this patch collects usemaps on the same section as
the pgdat, as much as possible. If no other section depends on it, this
section can finally be removed.
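A hedged sketch of the placement idea (the helper name and exact bootmem call are assumptions, not taken from the patch):

/* aim the usemap allocation at the section that already holds the
 * node's pgdat, so only that section carries the dependency */
static unsigned long * __init
sparse_usemap_alloc_near_pgdat(struct pglist_data *pgdat, unsigned long size)
{
	unsigned long goal = __pa(pgdat) & ~((1UL << SECTION_SIZE_BITS) - 1);

	return __alloc_bootmem_node_nopanic(pgdat, size, SMP_CACHE_BYTES, goal);
}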
Signed-off-by: Yasunori Goto
Cc: Mel Gorman
Cc: Andy Whitcroft
Cc: David Miller
Cc: Badari Pulavarty
Cc: Heiko Carstens
Cc: Hiroyuki KAMEZAWA
Cc: Tony Breeds
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
There are a number of different views of how much memory is currently
active: the arch-independent zone-sizing view, the bootmem allocator
view and the memory model view.

Architectures register this information at different times, and it is
not necessarily in sync, particularly with respect to some SPARSEMEM
limitations.

This patch introduces mminit_validate_memmodel_limits(), which is able
to validate and correct PFN ranges with respect to the memory model. It
is only SPARSEMEM that currently validates itself.
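A hedged sketch of the clamping such a validator performs (simplified; diagnostics omitted):

void __meminit mminit_validate_memmodel_limits(unsigned long *start_pfn,
					       unsigned long *end_pfn)
{
	unsigned long max_sparsemem_pfn = 1UL << (MAX_PHYSMEM_BITS - PAGE_SHIFT);

	/* do not allow an architecture to pass in larger pfns than the
	 * maximum scope of sparsemem */
	if (*start_pfn > max_sparsemem_pfn) {
		*start_pfn = max_sparsemem_pfn;
		*end_pfn = max_sparsemem_pfn;
	} else if (*end_pfn > max_sparsemem_pfn) {
		*end_pfn = max_sparsemem_pfn;
	}
}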
Signed-off-by: Mel Gorman
Cc: Christoph Lameter
Cc: Andy Whitcroft
Cc: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
30 Apr, 2008
2 commits
-
This:
commit 86f6dae1377523689bd8468fed2f2dd180fc0560
Author: Yasunori Goto
Date: Mon Apr 28 02:13:33 2008 -0700

memory hotplug: allocate usemap on the section with pgdat
Usemaps are allocated on the section which has the pgdat by this.
Because the usemap size is very small, the usemaps of many other
sections are allocated on a single page. If a section holds a usemap,
it can't be removed until the other sections are removed. This
dependency is not desirable for memory removal.

Pgdat has a similar property: when a section holds the pgdat area, it
must be the last section to be removed on the node. So, if section A
has the pgdat and section B has the usemap for section A, neither
section can be removed due to the mutual dependency.

To solve this issue, this patch collects usemaps on the same section
as the pgdat. If no other section depends on it, this section can
finally be removed.

Signed-off-by: Yasunori Goto
Cc: Badari Pulavarty
Cc: Yinghai Lu
Cc: Yasunori Goto
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds

broke davem's sparc64 bootup. Revert it while we work out what went wrong.
Cc: Yasunori Goto
Cc: Badari Pulavarty
Cc: Yinghai Lu
Cc: "David S. Miller"
Cc: Heiko Carstens
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
__FUNCTION__ is gcc-specific, use __func__
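Illustratively (a made-up call site):

-	printk(KERN_ERR "%s: allocation failed\n", __FUNCTION__);
+	printk(KERN_ERR "%s: allocation failed\n", __func__);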
Signed-off-by: Harvey Harrison
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
28 Apr, 2008
5 commits
-
This patch frees the memmaps which were allocated by bootmem.

Freeing the usemap is not necessary: the usemap pages may still be
needed by other sections.

If the section being removed is the last section on the node, that
section is the final user of the usemap page (usemaps are allocated on
its section by the previous patch). But it shouldn't be freed either,
because the section must be in the logically offline state, in which all
pages are isolated from the page allocator. If it were freed, the page
allocator might use it even though it will be removed physically soon.
That would be a disaster. So, this patch keeps it as it is.
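A hedged sketch of the freeing side, assuming the per-page registration from the earlier patch in this series (helper name illustrative):

/* hand back each memmap page that the bootmem info says we own */
static void free_map_bootmem(struct page *page, unsigned long nr_pages)
{
	while (nr_pages--)
		put_page_bootmem(page++);
}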
Signed-off-by: Yasunori Goto
Cc: Badari Pulavarty
Cc: Yinghai Lu
Cc: Yasunori Goto
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
Usemaps are allocated on the section which has the pgdat by this.
Because the usemap size is very small, the usemaps of many other
sections are allocated on a single page. If a section holds a usemap, it
can't be removed until the other sections are removed. This dependency
is not desirable for memory removal.

Pgdat has a similar property: when a section holds the pgdat area, it
must be the last section to be removed on the node. So, if section A has
the pgdat and section B has the usemap for section A, neither section
can be removed due to the mutual dependency.

To solve this issue, this patch collects usemaps on the same section as
the pgdat. If no other section depends on it, this section can finally
be removed.

Signed-off-by: Yasunori Goto
Cc: Badari Pulavarty
Cc: Yinghai Lu
Cc: Yasunori Goto
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
To make the memmap easier to free, this patch aligns it to page size.
The bootmem allocator may mix several objects into one page, which is
not good for freeing the memmap on memory hot-remove.
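A hedged one-line illustration (the allocation call and goal are assumptions):

/* round up so no other bootmem object shares the memmap's last page */
size = PAGE_ALIGN(sizeof(struct page) * PAGES_PER_SECTION);
map = __alloc_bootmem_node(pgdat, size, PAGE_SIZE, __pa(MAX_DMA_ADDRESS));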
Signed-off-by: Yasunori Goto
Cc: Badari Pulavarty
Cc: Yinghai Lu
Cc: Yasunori Goto
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
This patch set is to free pages which were allocated by bootmem for
memory hot-remove. Some memory management structures are allocated by
bootmem, e.g. the memmap.

To remove memory physically, some of them must be freed according to
circumstance. This patch set makes the basis for freeing those pages,
and frees the memmaps.

The basic idea is to use the remaining members of struct page to
remember information about the users of the bootmem pages (section
number or node id). When a section is being removed, the kernel can
check this information. With it, several issues can be solved:

1) When the memmap of a section being removed was allocated on another
section by bootmem, it should/can be freed.
2) When the memmap of a section being removed was allocated on the
same section, it shouldn't be freed, because the section has to be
logically offlined already and all pages must be isolated from the
page allocator. If it were freed, the page allocator might use it
even though it will be removed physically soon.
3) When a section being removed holds another section's memmap, the
kernel will be able to show easily which section should be removed
before it, for the user. (Not implemented yet.)
4) In case 2) above, page isolation will be able to check and skip the
memmap's pages when logically offlining memory (offline_pages()).
The current page isolation code fails in this case because the page
is just a reserved page and it can't distinguish whether such pages
can be removed or not. But it will be able to with this patch.
(Not implemented yet.)
5) Node information such as the pgdat has similar issues, which will
also be solvable by this. (Not implemented yet, but by remembering
the node id in the pages.)

Fortunately, the current bootmem allocator just keeps the PageReserved
flag and doesn't use any other members of struct page. The users of
bootmem don't use them either.

This patch:

This registers the information, which is the node or section id. The
kernel can distinguish which node/section uses the pages allocated by
bootmem. This is the basis for hot-removing sections or nodes.
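A hedged sketch of the registration idea (the helper name and exact field encoding are illustrative, not taken verbatim from the patch):

/* stash a type tag and the section nr / node id in otherwise-unused
 * fields of a page that bootmem handed out */
static void register_page_bootmem_info(struct page *page,
				       unsigned long info, int type)
{
	SetPagePrivate(page);
	set_page_private(page, info);		/* section nr or node id */
	atomic_set(&page->_mapcount, type);	/* e.g. SECTION_INFO */
	atomic_inc(&page->_count);		/* pin while registered */
}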
Signed-off-by: Yasunori Goto
Cc: Badari Pulavarty
Cc: Yinghai Lu
Cc: Yasunori Goto
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
Generic helper function to remove section mappings and sysfs entries for
the section of the memory we are removing. offline_pages() correctly
adjusted the zone and marked the pages reserved.

TODO: Yasunori Goto is working on patches to free up allocations from
bootmem.
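A hedged sketch of where such a helper sits in the flow (names follow the memory-hotplug code of the time; treat it as a sketch, not the patch itself):

/* after offline_pages() has isolated everything, drop the section
 * mappings and the per-section sysfs entries */
ret = __remove_pages(page_zone(pfn_to_page(start_pfn)),
		     start_pfn, size >> PAGE_SHIFT);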
Signed-off-by: Badari Pulavarty
Acked-by: Yasunori Goto
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
27 Apr, 2008
2 commits
-
On big systems with lots of memory, don't print out too much during
bootup, and make it easy to see whether the ranges are contiguous.

On a 256G, 8-socket system we will get:
[ffffe20000000000-ffffe20002bfffff] PMD -> [ffff810001400000-ffff810003ffffff] on node 0
[ffffe2001c700000-ffffe2001c7fffff] potential offnode page_structs
[ffffe20002c00000-ffffe2001c7fffff] PMD -> [ffff81000c000000-ffff8100255fffff] on node 0
[ffffe20038700000-ffffe200387fffff] potential offnode page_structs
[ffffe2001c800000-ffffe200387fffff] PMD -> [ffff810820200000-ffff81083c1fffff] on node 1
[ffffe20040000000-ffffe2007fffffff] PUD ->ffff811027a00000 on node 2
[ffffe20038800000-ffffe2003fffffff] PMD -> [ffff811020200000-ffff8110279fffff] on node 2
[ffffe20054700000-ffffe200547fffff] potential offnode page_structs
[ffffe20040000000-ffffe200547fffff] PMD -> [ffff811027c00000-ffff81103c3fffff] on node 2
[ffffe20070700000-ffffe200707fffff] potential offnode page_structs
[ffffe20054800000-ffffe200707fffff] PMD -> [ffff811820200000-ffff81183c1fffff] on node 3
[ffffe20080000000-ffffe200bfffffff] PUD ->ffff81202fa00000 on node 4
[ffffe20070800000-ffffe2007fffffff] PMD -> [ffff812020200000-ffff81202f9fffff] on node 4
[ffffe2008c700000-ffffe2008c7fffff] potential offnode page_structs
[ffffe20080000000-ffffe2008c7fffff] PMD -> [ffff81202fc00000-ffff81203c3fffff] on node 4
[ffffe200a8700000-ffffe200a87fffff] potential offnode page_structs
[ffffe2008c800000-ffffe200a87fffff] PMD -> [ffff812820200000-ffff81283c1fffff] on node 5
[ffffe200c0000000-ffffe200ffffffff] PUD ->ffff813037a00000 on node 6
[ffffe200a8800000-ffffe200bfffffff] PMD -> [ffff813020200000-ffff8130379fffff] on node 6
[ffffe200c4700000-ffffe200c47fffff] potential offnode page_structs
[ffffe200c0000000-ffffe200c47fffff] PMD -> [ffff813037c00000-ffff81303c3fffff] on node 6
[ffffe200c4800000-ffffe200e07fffff] PMD -> [ffff813820200000-ffff81383c1fffff] on node 7

instead of a very long print out...
Signed-off-by: Yinghai Lu
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
vmemmap allocation currently has this layout:
[ffffe20000000000-ffffe200001fffff] PMD ->ffff810001400000 on node 0
[ffffe20000200000-ffffe200003fffff] PMD ->ffff810001800000 on node 0
[ffffe20000400000-ffffe200005fffff] PMD ->ffff810001c00000 on node 0
[ffffe20000600000-ffffe200007fffff] PMD ->ffff810002000000 on node 0
[ffffe20000800000-ffffe200009fffff] PMD ->ffff810002400000 on node 0
...note that there is a 2M hole between them - not optimal.
The root cause is that the usemap (24 bytes) is allocated after every 2M
mem_map, and it pushes the next vmemmap (2M) to the next (2M) alignment.

Solution: try to allocate the mem_maps continuously.

After the patch, we get:
[ffffe20000000000-ffffe200001fffff] PMD ->ffff810001400000 on node 0
[ffffe20000200000-ffffe200003fffff] PMD ->ffff810001600000 on node 0
[ffffe20000400000-ffffe200005fffff] PMD ->ffff810001800000 on node 0
[ffffe20000600000-ffffe200007fffff] PMD ->ffff810001a00000 on node 0
[ffffe20000800000-ffffe200009fffff] PMD ->ffff810001c00000 on node 0
...which is the ideal layout.
and usemaps will share a page because they are allocated continuously too:
sparse_early_usemap_alloc: usemap = ffff810024e00000 size = 24
sparse_early_usemap_alloc: usemap = ffff810024e00080 size = 24
sparse_early_usemap_alloc: usemap = ffff810024e00100 size = 24
sparse_early_usemap_alloc: usemap = ffff810024e00180 size = 24
...so we make the bootmem allocation more compact and use less memory
for the usemaps => mission accomplished ;-)

Signed-off-by: Yinghai Lu
Signed-off-by: Ingo Molnar
16 Apr, 2008
1 commit
-
Fix memory corruption and crash on 32-bit x86 systems.
If a !PAE x86 kernel is booted on a 32-bit system with more than 4GB of
RAM, then we call memory_present() with a start/end that goes outside
the scope of MAX_PHYSMEM_BITS.

That causes this loop to happily walk over the limit of the sparse
memory section map:

for (pfn = start; pfn < end; pfn += PAGES_PER_SECTION) {
	unsigned long section = pfn_to_section_nr(pfn);
	struct mem_section *ms;

	sparse_index_init(section, nid);
	set_section_nid(section, nid);

	ms = __nr_to_section(section);
	if (!ms->section_mem_map)
		ms->section_mem_map = sparse_encode_early_nid(nid) |
					SECTION_MARKED_PRESENT;
}

'ms' will be out of bounds and we'll corrupt a small amount of memory by
encoding the node ID and writing SECTION_MARKED_PRESENT (==0x1) over it.

The corruption might happen when encoding a non-zero node ID, or due to
SECTION_MARKED_PRESENT which is 0x1:

mmzone.h:#define SECTION_MARKED_PRESENT	(1UL<<0)
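A hedged sketch of the kind of clamp that prevents the overrun (illustrative, not the exact fix):

/* never let memory_present() walk sections past MAX_PHYSMEM_BITS */
unsigned long max_pfn = 1UL << (MAX_PHYSMEM_BITS - PAGE_SHIFT);

start = min(start, max_pfn);
end = min(end, max_pfn);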
Tested-by: Christoph Lameter
Cc: Pekka Enberg
Cc: Mel Gorman
Cc: Nick Piggin
Cc: Andrew Morton
Cc: Rafael J. Wysocki
Cc: Yinghai Lu
Cc: KAMEZAWA Hiroyuki
Signed-off-by: Linus Torvalds
06 Feb, 2008
2 commits
-
Fix the following warning:

WARNING: mm/built-in.o(.text+0x22069): Section mismatch in reference from the function sparse_early_usemap_alloc() to the function .init.text:__alloc_bootmem_node()

The static sparse_early_usemap_alloc() is used only by sparse_init(),
and with sparse_init() annotated __init it is safe to annotate
sparse_early_usemap_alloc() with __init too.
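Illustratively (the exact signature is an assumption):

-static unsigned long *sparse_early_usemap_alloc(unsigned long pnum)
+static unsigned long * __init sparse_early_usemap_alloc(unsigned long pnum)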
Signed-off-by: Sam Ravnborg
Cc: Andy Whitcroft
Cc: Mel Gorman
Cc: Christoph Lameter
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
Checking if an address is a vmalloc address is done in a couple of
places. Define a common version in mm.h and replace the other checks.

Again the include structures suck. The definition of VMALLOC_START and
VMALLOC_END is not available in vmalloc.h since highmem.c cannot be
included there.
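A hedged sketch of such a common helper (essentially the shape described; exact guards may differ):

/* mm.h: one definition replacing the scattered open-coded checks */
static inline int is_vmalloc_addr(const void *x)
{
#ifdef CONFIG_MMU
	unsigned long addr = (unsigned long)x;

	return addr >= VMALLOC_START && addr < VMALLOC_END;
#else
	return 0;
#endif
}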
Signed-off-by: Christoph Lameter
Cc: Nick Piggin
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
18 Dec, 2007
2 commits
-
Improve the error handling for mm/sparse.c::sparse_add_one_section(). And I
see no reason to check 'usemap' until holding the 'pgdat_resize_lock'.

[geoffrey.levand@am.sony.com: sparse_index_init() returns -EEXIST]
Cc: Christoph Lameter
Acked-by: Dave Hansen
Cc: Rik van Riel
Acked-by: Yasunori Goto
Cc: Andy Whitcroft
Signed-off-by: WANG Cong
Signed-off-by: Geoff Levand
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
Since sparse_index_alloc() can return NULL on memory allocation failure,
we must deal with the failure condition when calling it.

Signed-off-by: WANG Cong
Cc: Christoph Lameter
Cc: Rik van Riel
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
30 Oct, 2007
1 commit
-
This reverts commit 2e1c49db4c640b35df13889b86b9d62215ade4b6.
First off, testing in Fedora has shown it to cause boot failures,
bisected down by Martin Ebourne, and reported by Dave Jobes. So the
commit will likely be reverted in the 2.6.23 stable kernels.

Secondly, in the 2.6.24 model, x86-64 has now grown support for
SPARSEMEM_VMEMMAP, which disables the relevant code anyway, so while the
bug is not visible any more, it's become invisible due to the code just
being irrelevant and no longer enabled on the only architecture that
this ever affected.

Reported-by: Dave Jones
Tested-by: Martin Ebourne
Cc: Zou Nan hai
Cc: Suresh Siddha
Cc: Andrew Morton
Acked-by: Andy Whitcroft
Signed-off-by: Linus Torvalds
17 Oct, 2007
5 commits
-
This patch is to avoid panic when memory hot-add is executed with
sparsemem-vmemmap. The current vmemmap-sparsemem code doesn't support
memory hot-add; the vmemmap must be populated on hot-add. This is for
2.6.23-rc2-mm2.

Todo:
# Even if this patch is applied, the message "[xxxx-xxxx] potential
offnode page_structs" is displayed. To allocate memmap on its node,
memmap (and pgdat) must be initialized itself like chicken and
egg relationship.
# vmemmap_unpopulate will be necessary for the following:
- For cancelling hot-add due to error.
- For unplug.

Signed-off-by: Yasunori Goto
Cc: Andy Whitcroft
Cc: Christoph Lameter
Cc: Mel Gorman
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
There are problems in the use of SPARSEMEM and pageblock flags that
cause problems on ia64.

The first part of the problem is that units are incorrect in the
SECTION_BLOCKFLAGS_BITS computation. This results in a map_section's
section_mem_map being treated as part of a bitmap which isn't good. This
was evident with an invalid virtual address when mem_init attempted to free
bootmem pages while relinquishing control from the bootmem allocator.

The second part of the problem occurs because the pageblock flags bitmap
is located within the mem_section. The SECTIONS_PER_ROOT computation
using sizeof(mem_section) may not be a power of 2, depending on the size
of the bitmap. This renders masks and other such things not power-of-2
based.
This issue was seen with SPARSEMEM_EXTREME on ia64. This patch moves the
bitmap outside of mem_section and uses a pointer instead in the
mem_section. The bitmaps are allocated when the section is being
initialised.Note that sparse_early_usemap_alloc() does not use alloc_remap() like
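A hedged sketch of the structural change (the field name is illustrative):

struct mem_section {
	unsigned long section_mem_map;

	/* out of line: a pointer keeps sizeof(struct mem_section) a
	 * power of 2, so SECTIONS_PER_ROOT stays a power of 2 too */
	unsigned long *pageblock_flags;
};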
Note that sparse_early_usemap_alloc() does not use alloc_remap() like
sparse_early_mem_map_alloc(). The allocation required for the bitmap on
x86 (the only architecture that uses alloc_remap()) is typically smaller
than a cache line, and alloc_remap() pads out allocations to the cache
size, which would be a needless waste.

Credit to Bob Picco for identifying the original problem and effecting a
fix for the SECTION_BLOCKFLAGS_BITS calculation. Credit to Andy Whitcroft
for devising the best way of allocating the bitmaps only when required for
the section.

[wli@holomorphy.com: warning fix]
Signed-off-by: Bob Picco
Signed-off-by: Andy Whitcroft
Signed-off-by: Mel Gorman
Cc: "Luck, Tony"
Signed-off-by: William Irwin
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
SPARSEMEM is a pretty nice framework that unifies quite a bit of code over all
the arches. It would be great if it could be the default so that we can get
rid of various forms of DISCONTIG and other variations on memory maps. So far
what has hindered this are the additional lookups that SPARSEMEM introduces
for virt_to_page and page_address. This goes so far that the code to do this
has to be kept in a separate function and cannot be used inline.

This patch introduces a virtual memmap mode for SPARSEMEM, in which the
memmap is mapped into a virtually contiguous area and only the active
sections are physically backed. This allows virt_to_page, page_address
and cohorts to become simple shift/add operations. No page flag fields,
no table lookups, nothing involving memory is required.

The two key operations pfn_to_page and page_to_pfn become:
#define __pfn_to_page(pfn) (vmemmap + (pfn))
#define __page_to_pfn(page) ((page) - vmemmap)

By having a virtual mapping for the memmap we allow simple access without
wasting physical memory. As kernel memory is typically already mapped 1:1
this introduces no additional overhead. The virtual mapping must be big
enough to allow a struct page to be allocated and mapped for all valid
physical pages. This will make a virtual memmap difficult to use on
32-bit platforms that support 36 address bits.

However, if there is enough virtual space available and the arch already
maps
its 1-1 kernel space using TLBs (f.e. true of IA64 and x86_64) then this
technique makes SPARSEMEM lookups even more efficient than CONFIG_FLATMEM.
FLATMEM needs to read the contents of the mem_map variable to get the
start of the memmap and then add the offset to the required entry.
vmemmap is a constant to which we can simply add the offset.
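As a hedged side-by-side (the FLATMEM line follows the usual memory_model.h shape; treat both as illustrative):

/* FLATMEM: load the global mem_map pointer, then index */
#define __pfn_to_page(pfn)	(mem_map + ((pfn) - ARCH_PFN_OFFSET))

/* SPARSEMEM_VMEMMAP: vmemmap is a constant base address */
#define __pfn_to_page(pfn)	(vmemmap + (pfn))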
This patch has the potential to allow us to make SPARSEMEM the default
(and even the only) option for most systems. It should be optimal on UP,
SMP and
NUMA on most platforms. Then we may even be able to remove the other memory
models: FLATMEM, DISCONTIG etc.

[apw@shadowen.org: config cleanups, resplit code etc]
[kamezawa.hiroyu@jp.fujitsu.com: Fix sparsemem_vmemmap init]
[apw@shadowen.org: vmemmap: remove excess debugging]
[apw@shadowen.org: simplify initialisation code and reduce duplication]
[apw@shadowen.org: pull out the vmemmap code into its own file]
Signed-off-by: Christoph Lameter
Signed-off-by: Andy Whitcroft
Acked-by: Mel Gorman
Cc: "Luck, Tony"
Cc: Andi Kleen
Cc: "David S. Miller"
Cc: Paul Mackerras
Cc: Benjamin Herrenschmidt
Cc: KAMEZAWA Hiroyuki
Signed-off-by: KAMEZAWA Hiroyuki
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
We have flags to indicate whether a section actually has a valid mem_map
associated with it. This is never set and we rely solely on the present bit
to indicate a section is valid. By definition a section is not valid if it
has no mem_map and there is a window during init where the present bit is set
but there is no mem_map, during which pfn_valid() will return true
incorrectly.

Use the existing SECTION_HAS_MEM_MAP flag to indicate the presence of a
valid mem_map. Switch valid_section{,_nr} and pfn_valid() to this bit.
Add new present_section{,_nr} and pfn_present() interfaces for those
users who care to know that a section is going to be valid.

[akpm@linux-foundation.org: coding-style fixes]
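A hedged sketch of the resulting distinction (close to the interfaces named above):

#define SECTION_MARKED_PRESENT	(1UL<<0)
#define SECTION_HAS_MEM_MAP	(1UL<<1)

/* present: the section exists; valid: it also has a mem_map */
static inline int present_section(struct mem_section *section)
{
	return (section && (section->section_mem_map & SECTION_MARKED_PRESENT));
}

static inline int valid_section(struct mem_section *section)
{
	return (section && (section->section_mem_map & SECTION_HAS_MEM_MAP));
}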
Signed-off-by: Andy Whitcroft
Acked-by: Mel Gorman
Cc: Christoph Lameter
Cc: "Luck, Tony"
Cc: Andi Kleen
Cc: "David S. Miller"
Cc: Paul Mackerras
Cc: Benjamin Herrenschmidt
Cc: KAMEZAWA Hiroyuki
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
SPARSEMEM is a pretty nice framework that unifies quite a bit of code over all
the arches. It would be great if it could be the default so that we can get
rid of various forms of DISCONTIG and other variations on memory maps. So far
what has hindered this are the additional lookups that SPARSEMEM introduces
for virt_to_page and page_address. This goes so far that the code to do this
has to be kept in a separate function and cannot be used inline.

This patch introduces a virtual memmap mode for SPARSEMEM, in which the
memmap is mapped into a virtually contiguous area and only the active
sections are
physically backed. This allows virt_to_page, page_address and cohorts to
become simple shift/add operations. No page flag fields, no table
lookups, nothing involving memory is required.

The two key operations pfn_to_page and page_to_pfn become:
#define __pfn_to_page(pfn) (vmemmap + (pfn))
#define __page_to_pfn(page) ((page) - vmemmap)

By having a virtual mapping for the memmap we allow simple access without
wasting physical memory. As kernel memory is typically already mapped 1:1
this introduces no additional overhead. The virtual mapping must be big
enough to allow a struct page to be allocated and mapped for all valid
physical pages. This will make a virtual memmap difficult to use on
32-bit platforms that support 36 address bits.

However, if there is enough virtual space available and the arch already
maps
its 1-1 kernel space using TLBs (f.e. true of IA64 and x86_64) then this
technique makes SPARSEMEM lookups even more efficient than CONFIG_FLATMEM.
FLATMEM needs to read the contents of the mem_map variable to get the start of
the memmap and then add the offset to the required entry. vmemmap is a
constant to which we can simply add the offset.

This patch has the potential to allow us to make SPARSEMEM the default
(and
even the only) option for most systems. It should be optimal on UP, SMP and
NUMA on most platforms. Then we may even be able to remove the other memory
models: FLATMEM, DISCONTIG etc.

The current aim is to bring a common virtually mapped mem_map to all
architectures. This should facilitate the removal of the bespoke
implementations from the architectures. This also brings performance
improvements for most architectures, making sparsemem vmemmap the more
desirable
memory model. The ultimate aim of this work is to expand sparsemem support to
encompass all the features of the other memory models. This could allow us to
drop support for and remove the other models in the longer term.

Below are some comparative kernbench numbers for various architectures,
comparing default memory model against SPARSEMEM VMEMMAP. All but ia64 show
marginal improvement; we expect the ia64 figures to be sorted out when the
larger mapping support returns.

x86-64 non-NUMA
Base VMEMAP % change (-ve good)
User 85.07 84.84 -0.26
System 34.32 33.84 -1.39
Total 119.38 118.68 -0.59

ia64
Base VMEMAP % change (-ve good)
User 1016.41 1016.93 0.05
System 50.83 51.02 0.36
Total 1067.25 1067.95 0.07

x86-64 NUMA
Base VMEMAP % change (-ve good)
User 430.77 431.73 0.22
System 45.39 43.98 -3.11
Total 476.17 475.71 -0.10

ppc64
Base VMEMAP % change (-ve good)
User 488.77 488.35 -0.09
System 56.92 56.37 -0.97
Total 545.69 544.72 -0.18

Below are some AIM benchmarks on IA64 and x86-64 (thanks Bob). They seem
pretty much flat, as you would expect.

ia64 results 2 cpu non-numa 4Gb SCSI disk
Benchmark Version Machine Run Date
AIM Multiuser Benchmark - Suite VII "1.1" extreme Jun 1 07:17:24 2007

Tasks Jobs/Min JTI Real CPU Jobs/sec/task
1 98.9 100 58.9 1.3 1.6482
101 5547.1 95 106.0 79.4 0.9154
201 6377.7 95 183.4 158.3 0.5288
301 6932.2 95 252.7 237.3 0.3838
401 7075.8 93 329.8 316.7 0.2941
501 7235.6 94 403.0 396.2 0.2407
600 7387.5 94 472.7 475.0 0.2052

Benchmark Version Machine Run Date
AIM Multiuser Benchmark - Suite VII "1.1" vmemmap Jun 1 09:59:04 2007

Tasks Jobs/Min JTI Real CPU Jobs/sec/task
1 99.1 100 58.8 1.2 1.6509
101 5480.9 95 107.2 79.2 0.9044
201 6490.3 95 180.2 157.8 0.5382
301 6886.6 94 254.4 236.8 0.3813
401 7078.2 94 329.7 316.0 0.2942
501 7250.3 95 402.2 395.4 0.2412
600 7399.1 94 471.9 473.9 0.2055

open power 710 2 cpu, 4 Gb, SCSI and configured physically
Benchmark Version Machine Run Date
AIM Multiuser Benchmark - Suite VII "1.1" extreme May 29 15:42:53 2007

Tasks Jobs/Min JTI Real CPU Jobs/sec/task
1 25.7 100 226.3 4.3 0.4286
101 1096.0 97 536.4 199.8 0.1809
201 1236.4 96 946.1 389.1 0.1025
301 1280.5 96 1368.0 582.3 0.0709
401 1270.2 95 1837.4 771.0 0.0528
501 1251.4 96 2330.1 955.9 0.0416
601 1252.6 96 2792.4 1139.2 0.0347
701 1245.2 96 3276.5 1334.6 0.0296
918 1229.5 96 4345.4 1728.7 0.0223

Benchmark Version Machine Run Date
AIM Multiuser Benchmark - Suite VII "1.1" vmemmap May 30 07:28:26 2007

Tasks Jobs/Min JTI Real CPU Jobs/sec/task
1 25.6 100 226.9 4.3 0.4275
101 1049.3 97 560.2 198.1 0.1731
201 1199.1 97 975.6 390.7 0.0994
301 1261.7 96 1388.5 591.5 0.0699
401 1256.1 96 1858.1 771.9 0.0522
501 1220.1 96 2389.7 955.3 0.0406
601 1224.6 96 2856.3 1133.4 0.0340
701 1252.0 96 3258.7 1314.1 0.0298
915 1232.8 96 4319.7 1704.0 0.0225

amd64 2 2-core, 4Gb and SATA
Benchmark Version Machine Run Date
AIM Multiuser Benchmark - Suite VII "1.1" extreme Jun 2 03:59:48 2007

Tasks Jobs/Min JTI Real CPU Jobs/sec/task
1 13.0 100 446.4 2.1 0.2173
101 533.4 97 1102.0 110.2 0.0880
201 578.3 97 2022.8 220.8 0.0480
301 583.8 97 3000.6 332.3 0.0323
401 580.5 97 4020.1 442.2 0.0241
501 574.8 98 5072.8 558.8 0.0191
600 566.5 98 6163.8 671.0 0.0157

Benchmark Version Machine Run Date
AIM Multiuser Benchmark - Suite VII "1.1" vmemmap Jun 3 04:19:31 2007

Tasks Jobs/Min JTI Real CPU Jobs/sec/task
1 13.0 100 447.8 2.0 0.2166
101 536.5 97 1095.6 109.7 0.0885
201 567.7 97 2060.5 219.3 0.0471
301 582.1 96 3009.4 330.2 0.0322
401 578.2 96 4036.4 442.4 0.0240
501 585.1 98 4983.2 555.1 0.0195
600 565.5 98 6175.2 660.6 0.0157

This patch:

Fix some spelling errors.
Signed-off-by: Christoph Lameter
Signed-off-by: Andy Whitcroft
Acked-by: Mel Gorman
Cc: "Luck, Tony"
Cc: Andi Kleen
Cc: "David S. Miller"
Cc: Paul Mackerras
Cc: Benjamin Herrenschmidt
Cc: KAMEZAWA Hiroyuki
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
23 Aug, 2007
1 commit
-
Booting SPARSEMEM on NUMA systems trips a BUG in page_alloc.c:
Initializing HighMem for node 0 (00038000:00100000)
Initializing HighMem for node 1 (00100000:001ffe00)
------------[ cut here ]------------
kernel BUG at /home/apw/git/linux-2.6/mm/page_alloc.c:456!
[...]

This occurs because the section to node id mapping is not being
setup correctly during init under SPARSEMEM_STATIC, leading to an
attempt to free pages from all nodes into the zones on node 0.

When the zone_table[] was removed in the following commit, a new section
to node mapping table was introduced:

commit 89689ae7f95995723fbcd5c116c47933a3bb8b13
[PATCH] Get rid of zone_table[]

That conversion inadvertently only initialised the node mapping in
SPARSEMEM_EXTREME. Ensure we initialise the node mapping in
SPARSEMEM_STATIC.

[akpm@linux-foundation.org: make the stubs static inline]
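A hedged sketch of the fix's shape (simplified: the table definition is omitted and guard names follow the sparsemem code of the time):

#ifdef NODE_NOT_IN_PAGE_FLAGS
/* must be filled in under SPARSEMEM_STATIC as well as SPARSEMEM_EXTREME */
static void set_section_nid(unsigned long section_nr, int nid)
{
	section_to_node_table[section_nr] = nid;
}
#else
static inline void set_section_nid(unsigned long section_nr, int nid)
{
}
#endif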
Signed-off-by: Andy Whitcroft
Cc: Christoph Lameter
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
23 Jul, 2007
1 commit
-
Fix the following warning:

WARNING: vmlinux.o(.text+0x188ea): Section mismatch: reference to .init.text:__alloc_bootmem_core (between 'alloc_bootmem_high_node' and 'get_gate_vma')

alloc_bootmem_high_node() is only used from __init scope, so declare it
__init. In addition, declare the weak variant __init too.

Signed-off-by: Sam Ravnborg
Signed-off-by: Andi Kleen
Signed-off-by: Linus Torvalds
09 Jun, 2007
1 commit
-
into the appropriate #ifdef.
Signed-off-by: Stephen Rothwell
Cc: Yasunori Goto
Cc: Andy Whitcroft
Cc: Badari Pulavarty
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
01 Jun, 2007
1 commit
-
On systems with a huge amount of physical memory, the VFS cache and the
memmap may eat all available system memory under 4G, and the system may
then fail to allocate the swiotlb bounce buffer.

There was a fix for this issue in arch/x86_64/mm/numa.c, but that fix
does not cover the sparsemem model.

This patch adds the fix to the sparsemem model by first trying to
allocate the memmap above 4G.

Signed-off-by: Zou Nan hai
Acked-by: Suresh Siddha
Cc: Andi Kleen
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
19 May, 2007
1 commit
-
modpost had two cases hardcoded for mm/. Shift over to __init_refok and
kill the hardcoded function names in modpost.

This has the drawback that the functions will always be kept no matter
the configuration. With the previous code, the functions were placed in
the init section if the configuration allowed it.

Signed-off-by: Sam Ravnborg
09 May, 2007
2 commits
-
This patch adds a white list to modpost.c for some functions, and ia64's
section, to fix section mismatches.

sparse_index_alloc() and zone_wait_table_init() call the bootmem
allocator at boot time, and kmalloc/vmalloc at hotplug time. If memory
hotplug is configured in, there are references to the bootmem allocator
(init text) from them (normal text). This is the cause of the section
mismatch.

Bootmem is called by many functions and must be used only at boot time.
I think their __init annotations should be kept for the section mismatch
check. So, I would like to register sparse_index_alloc() and
zone_wait_table_init() in the white list.

In addition, ia64's .machvec section is a function table for some
platform-dependent code. It is a mixture of .init.text and normal text,
so these references to __init functions are valid too.

Signed-off-by: Yasunori Goto
Cc: Sam Ravnborg
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
This is to fix many section mismatches in code related to memory
hotplug. I checked that it compiles with memory hotplug on and off on
ia64 and x86-64 boxes.

Signed-off-by: Yasunori Goto
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
07 May, 2007
1 commit
-
Signed-off-by: David S. Miller