17 Dec, 2009
1 commit
-
* git://git.infradead.org/iommu-2.6:
implement early_io{re,un}map for ia64
Revert "Intel IOMMU: Avoid memory allocation failures in dma map api calls"
intel-iommu: ignore page table validation in pass through mode
intel-iommu: Fix oops with intel_iommu=igfx_off
intel-iommu: Check for an RMRR which ends before it starts.
intel-iommu: Apply BIOS sanity checks for interrupt remapping too.
intel-iommu: Detect DMAR in hyperspace at probe time.
dmar: Fix build failure without NUMA, warn on bogus RHSA tables and don't abort
iommu: Allocate dma-remapping structures using numa locality info
intr_remap: Allocate intr-remapping table using numa locality info
dmar: Allocate queued invalidation structure using numa locality info
dmar: support for parsing Remapping Hardware Static Affinity structure
16 Dec, 2009
1 commit
-
Needed for commit 2c992208 ("intel-iommu: Detect DMAR in hyperspace at
probe time.") to build on IA64.
Signed-off-by: Tony Luck
Signed-off-by: David Woodhouse
15 Dec, 2009
1 commit
-
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu: (34 commits)
m68k: rename global variable vmalloc_end to m68k_vmalloc_end
percpu: add missing per_cpu_ptr_to_phys() definition for UP
percpu: Fix kdump failure if booted with percpu_alloc=page
percpu: make misc percpu symbols unique
percpu: make percpu symbols in ia64 unique
percpu: make percpu symbols in powerpc unique
percpu: make percpu symbols in x86 unique
percpu: make percpu symbols in xen unique
percpu: make percpu symbols in cpufreq unique
percpu: make percpu symbols in oprofile unique
percpu: make percpu symbols in tracer unique
percpu: make percpu symbols under kernel/ and mm/ unique
percpu: remove some sparse warnings
percpu: make alloc_percpu() handle array types
vmalloc: fix use of non-existent percpu variable in put_cpu_var()
this_cpu: Use this_cpu_xx in trace_functions_graph.c
this_cpu: Use this_cpu_xx for ftrace
this_cpu: Use this_cpu_xx in nmi handling
this_cpu: Use this_cpu operations in RCU
this_cpu: Use this_cpu ops for VM statistics
...
Fix up trivial (famous last words) global per-cpu naming conflicts in
arch/x86/kvm/svm.c
mm/slab.c
29 Oct, 2009
1 commit
-
This patch updates percpu related symbols in ia64 such that percpu
symbols are unique and don't clash with local symbols. This serves
two purposes of decreasing the possibility of global percpu symbol
collision and allowing the per_cpu__ prefix to be dropped from percpu symbols.
* arch/ia64/kernel/setup.c: s/cpu_info/ia64_cpu_info/
Partly based on Rusty Russell's "alloc_percpu: rename percpu vars
which cause name clashes" patch.
Signed-off-by: Tejun Heo
Reviewed-by: Christoph Lameter
Cc: Rusty Russell
Cc: Tony Luck
Cc: Fenghua Yu
Cc: linux-ia64@vger.kernel.org
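For illustration, a minimal sketch (not the actual ia64 hunks) of the kind of
clash the rename avoids once the per_cpu__ prefix is gone:

    /* Sketch only: with the per_cpu__ prefix dropped, a global percpu
       symbol and an unrelated file-local symbol of the same name collide. */
    DEFINE_PER_CPU(struct cpuinfo_ia64, cpu_info);   /* global percpu symbol */
    static struct cpuinfo_ia64 cpu_info;             /* local symbol: clash  */

    /* After s/cpu_info/ia64_cpu_info/ the percpu symbol is unique again: */
    DEFINE_PER_CPU(struct cpuinfo_ia64, ia64_cpu_info);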
10 Oct, 2009
1 commit
-
Bound the wait time for the ptcg_sem by using an idea similar to the
ticket spin locks. In this case we have only one instance of a
spinaphore, so make it 8 bytes rather than try to squeeze it into
4 bytes to keep the code simpler (and shorter).
Signed-off-by: Tony Luck
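For illustration, a hedged user-space sketch of the ticket idea (not the
actual ia64 spinaphore code; the GCC atomic builtins stand in for the
kernel's atomics):

    /* A ticket/serve pair in one 8-byte-per-field structure: each cpu takes
       a ticket and may proceed once fewer than max_inside earlier ticket
       holders are still inside. */
    struct spinaphore {
        unsigned long ticket;   /* next ticket to hand out         */
        unsigned long serve;    /* number of holders that released */
    };

    static void spinaphore_down(struct spinaphore *ss, unsigned long max_inside)
    {
        unsigned long t = __atomic_fetch_add(&ss->ticket, 1, __ATOMIC_ACQUIRE);

        /* Wait time is bounded: we get in as soon as our ticket comes up. */
        while (t - __atomic_load_n(&ss->serve, __ATOMIC_ACQUIRE) >= max_inside)
            ; /* spin */
    }

    static void spinaphore_up(struct spinaphore *ss)
    {
        __atomic_fetch_add(&ss->serve, 1, __ATOMIC_RELEASE);
    }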
02 Oct, 2009
3 commits
-
Unlike other archs, ia64 reserves space for percpu areas during early
memory initialization. These areas occupy a contiguous region indexed
by cpu number on contiguous memory model or are grouped by node on
discontiguous memory model.
As allocation and initialization are done by the arch code, all that
setup_per_cpu_areas() needs to do is communicate the determined
layout to the percpu allocator. This patch implements
setup_per_cpu_areas() for both contig and discontig memory models and
drops HAVE_LEGACY_PER_CPU_AREA.
Please note that for the contig model, the allocation itself is modified
only to allocate for possible cpus instead of NR_CPUS. As dynamic
percpu allocator can handle non-direct mapping, there's no reason to
allocate memory for cpus which aren't possible.
Signed-off-by: Tejun Heo
Acked-by: Tony Luck
Cc: Fenghua Yu
Cc: linux-ia64 -
cpu0 used special percpu area reserved by the linker, __cpu0_per_cpu,
which is set up early in boot by head.S. However, this doesn't
guarantee that the area will be on the same node as cpu0 and the
percpu area for cpu0 ends up very far away from percpu areas for other
cpus, which causes problems for the congruent percpu allocator.
This patch makes percpu area initialization allocate percpu area for
cpu0 like any other cpus and copy it from __cpu0_per_cpu which now
resides in the __init area. This means that for cpu0, percpu area is
first setup at __cpu0_per_cpu early by head.S and then moved to an
area in the linear mapping during memory initialization and it's not
allowed to take a pointer to percpu variables between head.S and
memory initialization.
Signed-off-by: Tejun Heo
Acked-by: Tony Luck
Cc: Fenghua Yu
Cc: linux-ia64 -
If CONFIG_VIRTUAL_MEM_MAP is enabled, ia64 defines the macro VMALLOC_END
as the unsigned long variable vmalloc_end, which is adjusted to make
room for the vmemmap. This becomes problematic if a local variable
vmalloc_end is defined in some function (not very unlikely) and
VMALLOC_END is used in the function - the function thinks it's
referencing the global VMALLOC_END value but is actually referencing its
own local vmalloc_end variable.
There's no reason VMALLOC_END should be a macro. Just define it as an
unsigned long variable if CONFIG_VIRTUAL_MEM_MAP is set to avoid nasty
surprises.
Signed-off-by: Tejun Heo
Acked-by: Tony Luck
Cc: Fenghua Yu
Cc: linux-ia64
Cc: Christoph Lameter
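For illustration, a minimal user-space sketch of the shadowing hazard
described above (names are illustrative, not the real ia64 headers):

    unsigned long vmalloc_end = 0xa000000200000000UL;  /* global, adjusted at boot */
    #define VMALLOC_END vmalloc_end                     /* the old macro definition */

    unsigned long vmalloc_room(void)
    {
        unsigned long vmalloc_end = 0;   /* innocent-looking local variable       */
        return VMALLOC_END;              /* expands to the local, silently wrong! */
    }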
23 Sep, 2009
4 commits
-
For /proc/kcore, each arch registers its memory ranges by kclist_add().
Usually,
- the range of physical memory
- the range of the vmalloc area
- text, etc.
are registered, but the "range of physical memory" entry has some problems. It
isn't updated at memory hotplug and it tends to include unnecessary
memory holes. Now, /proc/iomem (kernel/resource.c) contains the required
physical memory range information and is properly updated at memory
hotplug, so it's better to avoid duplicating that information in arch
code and to rebuild the kclist for physical memory based on /proc/iomem.
Signed-off-by: KAMEZAWA Hiroyuki
Signed-off-by: Jiri Slaby
Cc: Ralf Baechle
Cc: Benjamin Herrenschmidt
Cc: WANG Cong
Cc: KOSAKI Motohiro
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
Some 64-bit architectures have a special segment for mapping kernel text. It
should be entered in /proc/kcore in addition to the direct linear map and the
vmalloc area. This patch unifies the KCORE_TEXT entry scattered across x86 and ia64.
I'm not familiar with other archs (mips has its own even after this patch),
but where the range [_stext ... _end) is a valid text area that is not in the
direct-map area, defining CONFIG_ARCH_PROC_KCORE_TEXT is the only thing that
needs to be done.
Note: I left mips as it is now.
Signed-off-by: KAMEZAWA Hiroyuki
Cc: Ralf Baechle
Cc: Benjamin Herrenschmidt
Cc: WANG Cong
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
For /proc/kcore, vmalloc areas are registered per arch, but all of them
register the same range, [VMALLOC_START...VMALLOC_END). This patch unifies
them. With this, archs which have no kclist_add() hooks can see the vmalloc
area correctly.
Signed-off-by: KAMEZAWA Hiroyuki
Cc: Ralf Baechle
Cc: Benjamin Herrenschmidt
Cc: WANG Cong
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
Presently, kclist_add() only takes a start address and size as its arguments.
To make the kclist dynamically reconfigurable, it's necessary to
know which kclists are for System RAM and which are not.
This patch adds kclist types:
KCORE_RAM
KCORE_VMALLOC
KCORE_TEXT
KCORE_OTHER
This "type" is used in a following patch to detect KCORE_RAM.
Signed-off-by: KAMEZAWA Hiroyuki
Cc: Ralf Baechle
Cc: Benjamin Herrenschmidt
Cc: WANG Cong
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
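For illustration, a hedged sketch of the classification this adds; the real
definitions live in the /proc/kcore headers of that era:

    enum kcore_type {
        KCORE_RAM,
        KCORE_VMALLOC,
        KCORE_TEXT,
        KCORE_OTHER,
    };

    struct kcore_list {
        struct kcore_list *next;
        unsigned long      addr;
        size_t             size;
        int                type;   /* new: lets later patches pick out KCORE_RAM */
    };

    /* kclist_add() now records the type alongside start address and size. */
    void kclist_add(struct kcore_list *ent, void *addr, size_t size, int type);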
22 Sep, 2009
1 commit
-
Commit 96177299416dbccb73b54e6b344260154a445375 ("Drop free_pages()")
modified nr_free_pages() to return 'unsigned long' instead of 'unsigned
int'. This made the casts to 'unsigned long' in most callers superfluous,
so remove them.
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Geert Uytterhoeven
Reviewed-by: Christoph Lameter
Acked-by: Ingo Molnar
Acked-by: Russell King
Acked-by: David S. Miller
Acked-by: Kyle McMartin
Acked-by: WANG Cong
Cc: Richard Henderson
Cc: Ivan Kokshaysky
Cc: Haavard Skinnemoen
Cc: Mikael Starvik
Cc: "Luck, Tony"
Cc: Hirokazu Takata
Cc: Ralf Baechle
Cc: David Howells
Acked-by: Benjamin Herrenschmidt
Cc: Martin Schwidefsky
Cc: Paul Mundt
Cc: Chris Zankel
Cc: Michal Simek
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
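For illustration, the kind of caller cleanup this enables (a sketch, not a
specific hunk from the patch):

    /* before: nr_free_pages() returned unsigned int, so callers cast it up */
    printk("%lukB available\n",
           (unsigned long)nr_free_pages() << (PAGE_SHIFT - 10));

    /* after: it already returns unsigned long, so the cast is superfluous */
    printk("%lukB available\n", nr_free_pages() << (PAGE_SHIFT - 10));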
22 Jun, 2009
1 commit
-
This allows the callers to now pass down the full set of FAULT_FLAG_xyz
flags to handle_mm_fault(). All callers have been (mechanically)
converted to the new calling convention, there's almost certainly room
for architectures to clean up their code and then add FAULT_FLAG_RETRY
when that support is added.
Signed-off-by: Linus Torvalds
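For illustration, what the mechanical conversion looks like in a typical arch
fault handler (a sketch, not a specific hunk from the patch):

    /* before: the last argument was a plain write_access flag */
    fault = handle_mm_fault(mm, vma, address, write);

    /* after: callers pass a FAULT_FLAG_xyz bitmask, leaving room for
       flags such as FAULT_FLAG_RETRY to be added later */
    fault = handle_mm_fault(mm, vma, address, write ? FAULT_FLAG_WRITE : 0);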
18 Jun, 2009
1 commit
-
It is generally agreed that it would be beneficial for u64 to be an
unsigned long long on all architectures. ia64 (in common with several
other 64-bit architectures) currently uses unsigned long. Migrating
piecemeal is too painful; this giant patch fixes all compilation warnings
and errors that come as a result of switching to use int-ll64.h.
Note that userspace will still see __u64 defined as unsigned long. This
is important as it affects C++ name mangling.
[Updated by Tony Luck to change efi.h:efi_freemem_callback_t to use
u64 for start/end rather than unsigned long]
Signed-off-by: Matthew Wilcox
Signed-off-by: Tony Luck
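For illustration, the typical class of warning the giant patch fixes once u64
becomes unsigned long long (a sketch, not a hunk from the patch):

    u64 addr = 0x4000000000ULL;

    printk("addr=%lx\n", addr);             /* now warns: u64 is no longer long */
    printk("addr=%llx\n",
           (unsigned long long)addr);       /* correct on both definitions      */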
16 Jun, 2009
1 commit
-
ad6561dffa17f17bb68d7207d422c26c381c4313 ("module: trim exception table on init
free.") put a bogus trim_init_extable() function into ia64 which didn't compile.Signed-off-by: Rusty Russell
Signed-off-by: Tony Luck
12 Jun, 2009
1 commit
-
It's theoretically possible that there are exception table entries
which point into the (freed) init text of modules. These could cause
future problems if other modules get loaded into that memory and cause
an exception as we'd see the wrong fixup. The only case I know of is
kvm-intel.ko (when CONFIG_CC_OPTIMIZE_FOR_SIZE=n).
Amerigo fixed this long-standing FIXME in the x86 version, but this
patch is more general.
This implements trim_init_extable(); most archs are simple since they
use the standard lib/extable.c sort code. Alpha and IA64 use relative
addresses in their fixups, so their trimming is a slight variation.
Sparc32 is unique; it doesn't seem to define ARCH_HAS_SORT_EXTABLE,
yet it defines its own sort_extable() which overrides the one in lib.
It doesn't sort, so we have to mark deleted entries instead of
actually trimming them.
Inspired-by: Amerigo Wang
Signed-off-by: Rusty Russell
Cc: linux-alpha@vger.kernel.org
Cc: sparclinux@vger.kernel.org
Cc: linux-ia64@vger.kernel.org
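For illustration, roughly the shape of the generic helper for archs that use
the standard sort (a hedged sketch, not a verbatim copy of lib/extable.c):

    void trim_init_extable(struct module *m)
    {
        /* The table is sorted by address, so entries pointing into the
           module's init text sit in runs at the ends and can be cut off. */
        while (m->num_exentries &&
               within_module_init(m->extable[0].insn, m)) {
            m->extable++;
            m->num_exentries--;
        }
        while (m->num_exentries &&
               within_module_init(m->extable[m->num_exentries - 1].insn, m))
            m->num_exentries--;
    }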
02 Apr, 2009
1 commit
-
Replace:
if (test)
BUG();
with
BUG_ON(test);
Signed-off-by: Stoyan Gaydarov
Signed-off-by: Tony Luck
01 Apr, 2009
1 commit
27 Mar, 2009
2 commits
-
Paravirtualize the gate page by allowing each pv_ops instance
to define its own gate page.
Signed-off-by: Isaku Yamahata
Signed-off-by: Tony Luck -
Add two hooks, paravirt_get_fsyscall_table() and
paravirt_get_fsys_bubble_down(), to paravirtualize the fsyscall implementation.
This patch just adds the fsyscall hooks and doesn't paravirtualize it.
Signed-off-by: Isaku Yamahata
Signed-off-by: Tony Luck
16 Mar, 2009
1 commit
-
Makes code futureproof against the impending change to mm->cpu_vm_mask.
It's also a chance to use the new cpumask_ ops which take a pointer
(the older ones are deprecated, but there's no hurry for arch code).
Signed-off-by: Rusty Russell
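For illustration, the kind of conversion this implies (a sketch, not the exact
ia64 hunks):

    /* old style: direct struct access with the deprecated cpu_* ops */
    cpu_set(smp_processor_id(), mm->cpu_vm_mask);

    /* new style: pointer-taking cpumask_* ops via the mm_cpumask() accessor,
       which keeps working when mm->cpu_vm_mask changes representation */
    cpumask_set_cpu(smp_processor_id(), mm_cpumask(mm));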
19 Feb, 2009
2 commits
-
Now, early_pfn_in_nid(PFN, NID) may return false if the PFN is a hole,
and memmap initialization is then not done. This was a problem for
sparc boot.
To fix this, the PFN should be initialized and marked as PG_reserved.
This patch changes early_pfn_in_nid() to return true if the PFN is a hole.
Signed-off-by: KAMEZAWA Hiroyuki
Reported-by: David Miller
Tested-by: KOSAKI Motohiro
Cc: Mel Gorman
Cc: Heiko Carstens
Cc: [2.6.25.x, 2.6.26.x, 2.6.27.x, 2.6.28.x]
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
What's happening is that the assertion in mm/page_alloc.c:move_freepages()
is triggering:
BUG_ON(page_zone(start_page) != page_zone(end_page));
Once I knew this is what was happening, I added some annotations:
if (unlikely(page_zone(start_page) != page_zone(end_page))) {
printk(KERN_ERR "move_freepages: Bogus zones: "
"start_page[%p] end_page[%p] zone[%p]\n",
start_page, end_page, zone);
printk(KERN_ERR "move_freepages: "
"start_zone[%p] end_zone[%p]\n",
page_zone(start_page), page_zone(end_page));
printk(KERN_ERR "move_freepages: "
"start_pfn[0x%lx] end_pfn[0x%lx]\n",
page_to_pfn(start_page), page_to_pfn(end_page));
printk(KERN_ERR "move_freepages: "
"start_nid[%d] end_nid[%d]\n",
page_to_nid(start_page), page_to_nid(end_page));
...
And here's what I got:
move_freepages: Bogus zones: start_page[2207d0000] end_page[2207dffc0] zone[fffff8103effcb00]
move_freepages: start_zone[fffff8103effcb00] end_zone[fffff8003fffeb00]
move_freepages: start_pfn[0x81f600] end_pfn[0x81f7ff]
move_freepages: start_nid[1] end_nid[0]
My memory layout on this box is:
[ 0.000000] Zone PFN ranges:
[ 0.000000] Normal 0x00000000 -> 0x0081ff5d
[ 0.000000] Movable zone start PFN for each node
[ 0.000000] early_node_map[8] active PFN ranges
[ 0.000000] 0: 0x00000000 -> 0x00020000
[ 0.000000] 1: 0x00800000 -> 0x0081f7ff
[ 0.000000] 1: 0x0081f800 -> 0x0081fe50
[ 0.000000] 1: 0x0081fed1 -> 0x0081fed8
[ 0.000000] 1: 0x0081feda -> 0x0081fedb
[ 0.000000] 1: 0x0081fedd -> 0x0081fee5
[ 0.000000] 1: 0x0081fee7 -> 0x0081ff51
[ 0.000000] 1: 0x0081ff59 -> 0x0081ff5d
So it's a block move in that 0x81f600-->0x81f7ff region which triggers
the problem.
This patch:
Declaration of early_pfn_to_nid() is scattered over per-arch include
files, and it seems it's complicated to know when the declaration is used.
I think it makes the fix for memmap init not easy.
This patch moves all declarations to include/linux/mm.h.
After this,
if !CONFIG_NODES_POPULATES_NODE_MAP && !CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID
-> Use static definition in include/linux/mm.h
else if !CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID
-> Use generic definition in mm/page_alloc.c
else
-> per-arch back end function will be called.
Signed-off-by: KAMEZAWA Hiroyuki
Tested-by: KOSAKI Motohiro
Reported-by: David Miller
Cc: Mel Gorman
Cc: Heiko Carstens
Cc: [2.6.25.x, 2.6.26.x, 2.6.27.x, 2.6.28.x]
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
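For illustration, a hedged sketch of the consolidated arrangement in
include/linux/mm.h, using the config symbols named above:

    #if !defined(CONFIG_NODES_POPULATES_NODE_MAP) && \
        !defined(CONFIG_HAVE_ARCH_EARLY_PFN_TO_NID)
    /* static case: everything is on node 0 */
    static inline int early_pfn_to_nid(unsigned long pfn)
    {
        return 0;
    }
    #else
    /* generic definition in mm/page_alloc.c, or a per-arch back end */
    extern int early_pfn_to_nid(unsigned long pfn);
    #endif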
07 Jan, 2009
1 commit
-
Show node to memory section relationship with symlinks in sysfs
Add /sys/devices/system/node/nodeX/memoryY symlinks for all
the memory sections located on nodeX. For example:
/sys/devices/system/node/node1/memory135 -> ../../memory/memory135
indicates that memory section 135 resides on node1.
Also revises documentation to cover this change as well as updating
Documentation/ABI/testing/sysfs-devices-memory to include descriptions
of memory hotremove files 'phys_device', 'phys_index', and 'state'
that were previously not described there.
In addition to it always being a good policy to provide users with
the maximum possible amount of physical location information for
resources that can be hot-added and/or hot-removed, the following
are some (but likely not all) of the user benefits provided by
this change.
Immediate:
- Provides information needed to determine the specific node
on which a defective DIMM is located. This will reduce system
downtime when the node or defective DIMM is swapped out.
- Prevents unintended onlining of a memory section that was
previously offlined due to a defective DIMM. This could happen
during node hot-add when the user or node hot-add assist script
onlines _all_ offlined sections due to user or script inability
to identify the specific memory sections located on the hot-added
node. The consequences of reintroducing the defective memory
could be ugly.
- Provides information needed to vary the amount and distribution
of memory on specific nodes for testing or debugging purposes.
Future:
- Will provide information needed to identify the memory
sections that need to be offlined prior to physical removal
of a specific node.
Symlink creation during boot was tested on 2-node x86_64, 2-node
ppc64, and 2-node ia64 systems. Symlink creation during physical
memory hot-add tested on a 2-node x86_64 system.
Signed-off-by: Gary Hade
Signed-off-by: Badari Pulavarty
Acked-by: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
05 Nov, 2008
1 commit
-
makedumpfile[1] cannot run on an ia64 discontigmem kernel, because the member
node_mem_map of struct pgdat_list has an invalid value. This patch fixes it.
node_start_pfn shows the start pfn of each node, and node_mem_map should
point to the 'struct page' of each node's node_start_pfn. On my machine, node0's
node_start_pfn shows 0x400 and its node_mem_map points to 0xa0007fffbf000000.
This address is the same as vmem_map, so the node_mem_map points to the 'struct
page' of pfn 0, even though its node_start_pfn shows 0x400.
The cause is the round down of min_pfn in count_node_pages(), which makes
node0's node_mem_map point to the 'struct page' of an inactive pfn (0x0). This
patch fixes it.
makedumpfile[1]: dump filtering command
https://sourceforge.net/projects/makedumpfile/
Signed-off-by: Ken'ichi Ohmichi
Cc: Bernhard Walle
Cc: Jay Lan
Signed-off-by: Andrew Morton
Signed-off-by: Tony Luck
23 Oct, 2008
1 commit
-
* 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6: (41 commits)
[IA64] Fix annoying IA64_TR_ALLOC_MAX message.
[IA64] kill sys32_pipe
[IA64] remove sys32_pause
[IA64] Add Variable Page Size and IA64 Support in Intel IOMMU
ia64/pv_ops: paravirtualized instruction checker.
ia64/xen: a recipe for using xen/ia64 with pv_ops.
ia64/pv_ops: update Kconfig for paravirtualized guest and xen.
ia64/xen: preliminary support for save/restore.
ia64/xen: define xen machine vector for domU.
ia64/pv_ops/xen: implement xen pv_time_ops.
ia64/pv_ops/xen: implement xen pv_irq_ops.
ia64/pv_ops/xen: define the number of irqs which xen needs.
ia64/pv_ops/xen: implement xen pv_iosapic_ops.
ia64/pv_ops/xen: paravirtualize entry.S for ia64/xen.
ia64/pv_ops/xen: paravirtualize ivt.S for xen.
ia64/pv_ops/xen: paravirtualize DO_SAVE_MIN for xen.
ia64/pv_ops/xen: define xen paravirtualized instructions for hand written assembly code
ia64/pv_ops/xen: define xen pv_cpu_ops.
ia64/pv_ops/xen: define xen pv_init_ops for various xen initialization.
ia64/pv_ops/xen: elf note based xen startup.
...
20 Oct, 2008
1 commit
-
There is nothing architecture specific about remove_memory().
The remove_memory() function is common to all architectures which support
hotplug memory removal. Instead of duplicating it in every architecture,
collapse it into an arch-neutral function.
[akpm@linux-foundation.org: fix the export]
Signed-off-by: Badari Pulavarty
Cc: Yasunori Goto
Cc: Gary Hade
Cc: Mel Gorman
Cc: Yasunori Goto
Cc: "Luck, Tony"
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Heiko Carstens
Cc: Martin Schwidefsky
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
18 Oct, 2008
1 commit
-
Madison cpus support 64 TR registers. Increase IA64_TR_ALLOC_MAX
to 64. Also fix up the messages that get printed when this limit
is exceeded. Repeating them for every cpu is too noisy.
Signed-off-by: Tony Luck
14 Oct, 2008
1 commit
-
Conflicts:
include/asm-x86/statfs.h
30 Sep, 2008
1 commit
-
The initial fix for making sure that we can access percpu variables
in all C code (commit: 10617bbe84628eb18ab5f723d3ba35005adde143)
inadvertently allocated the memory in the "percpu" section of
the vmlinux ELF executable. This confused kexec/dump.
Signed-off-by: Tony Luck
07 Sep, 2008
1 commit
-
This patch also includes the required removal of (unused) inclusion of
's in the arch/ code for these
architectures.
[dwmw2: updated for 2.6.27-rc]
Signed-off-by: Adrian Bunk
Signed-off-by: David Woodhouse
13 Aug, 2008
1 commit
-
ia64 handles per-cpu variables a little differently from other architectures
in that it maps the physical memory allocated for each cpu at a constant
virtual address (0xffffffffffff0000). This mapping is not enabled until
the architecture specific cpu_init() function is run, which causes problems
since some generic code is run before this point. In particular when
CONFIG_PRINTK_TIME is enabled, the boot cpu will trap on the access to
per-cpu memory at the first printk() call so the boot will fail without
the kernel printing anything to the console.
Fix this by allocating percpu memory for cpu0 in the kernel data section
and doing all initialization to enable percpu access in head.S before
calling any generic code.
Other cpus must take care not to access per-cpu variables too early, but
their code path from start_secondary() to cpu_init() is all in arch/ia64.
Signed-off-by: Tony Luck
31 Jul, 2008
1 commit
-
This series of patches adds a driver for the SGI UV GRU. The driver is
still in development but it currently compiles for both x86_64 & IA64.
All simple regression tests pass on IA64. Although features remain to be
added, I'd like to start the process of getting the driver into the
kernel. Additional kernel drivers will depend on services provide by the
GRU driver.
The GRU is a hardware resource located in the system chipset. The GRU
contains memory that is mmaped into the user address space. This memory
is used to communicate with the GRU to perform functions such as
load/store, scatter/gather, bcopy, AMOs, etc. The GRU is directly
accessed by user instructions using user virtual addresses. GRU
instructions (ex., bcopy) use user virtual addresses for operands.
The GRU contains a large TLB that is functionally very similar to
processor TLBs. Because this external unit contains a TLB with user virtual
addresses, it requires callouts from the core VM system when certain types
of changes are made to the process page tables. There are several MMUOPS
patches currently being discussed but none has been accepted into the
kernel. The GRU driver is built using version V18 from Andrea Arcangeli.
This patch:
Contains the definitions of the hardware GRU data structures that are used
by the driver to manage the GRU.
[akpm@linux-foundation.org: export hpage_shift]
Signed-off-by: Jack Steiner
Cc: "Luck, Tony"
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
25 Jul, 2008
5 commits
-
Almost all users of this field need a PFN instead of a physical address,
so replace node_boot_start with node_min_pfn.
[Lee.Schermerhorn@hp.com: fix spurious BUG_ON() in mark_bootmem()]
Signed-off-by: Johannes Weiner
Cc:
Signed-off-by: Lee Schermerhorn
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
Straightforward extensions for huge pages located in the PUD instead of
PMDs.
Signed-off-by: Andi Kleen
Signed-off-by: Nick Piggin
Cc: Martin Schwidefsky
Cc: Heiko Carstens
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
The goal of this patchset is to support multiple hugetlb page sizes. This
is achieved by introducing a new struct hstate structure, which
encapsulates the important hugetlb state and constants (eg. huge page
size, number of huge pages currently allocated, etc).
The hstate structure is then passed around the code which requires these
fields; such code will do the right thing regardless of the exact hstate it
is operating on.
This patch adds the hstate structure, with a single global instance of it
(default_hstate), and does the basic work of converting hugetlb to use the
hstate.
Future patches will add more hstate structures to allow different
hugetlbfs mounts to have different page sizes.
[akpm@linux-foundation.org: coding-style fixes]
Acked-by: Adam Litke
Acked-by: Nishanth Aravamudan
Signed-off-by: Andi Kleen
Signed-off-by: Nick Piggin
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
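For illustration, a hedged sketch of the kind of state an hstate gathers
(field names are illustrative; the real structure lives in
include/linux/hugetlb.h):

    struct hstate {
        unsigned int  order;             /* huge page size = PAGE_SIZE << order */
        unsigned long mask;              /* address mask for this page size     */
        unsigned long nr_huge_pages;     /* currently allocated huge pages      */
        unsigned long free_huge_pages;   /* huge pages sitting in the free pool */
    };

    struct hstate default_hstate;        /* the single global instance, for now */

    static inline unsigned long huge_page_size(struct hstate *h)
    {
        return (unsigned long)PAGE_SIZE << h->order;
    }
-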
The double indirection here is not needed anywhere and hence (at least)
confusing.
Signed-off-by: Jan Beulich
Cc: Hugh Dickins
Cc: Nick Piggin
Cc: Christoph Lameter
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: "Luck, Tony"
Cc: Paul Mundt
Cc: "David S. Miller"
Acked-by: Jeremy Fitzhardinge
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
There are a lot of places that define either a single bootmem descriptor or an
array of them. Use only one central array with MAX_NUMNODES items instead.
Signed-off-by: Johannes Weiner
Acked-by: Ralf Baechle
Cc: Ingo Molnar
Cc: Richard Henderson
Cc: Russell King
Cc: Tony Luck
Cc: Hirokazu Takata
Cc: Geert Uytterhoeven
Cc: Kyle McMartin
Cc: Paul Mackerras
Cc: Paul Mundt
Cc: David S. Miller
Cc: Yinghai Lu
Cc: Christoph Lameter
Cc: Mel Gorman
Cc: Andy Whitcroft
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
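For illustration, the shape of the change (a sketch; the per-arch name is
made up, and the central array follows the description above):

    /* scattered, per-arch definitions such as this one go away ...         */
    static bootmem_data_t node_bootmem_data[MAX_NUMNODES];  /* illustrative */

    /* ... in favour of one central array indexed by node id */
    bootmem_data_t bootmem_node_data[MAX_NUMNODES] __initdata;

    /* node setup code can then simply wire it up: */
    NODE_DATA(nid)->bdata = &bootmem_node_data[nid];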
16 May, 2008
1 commit
-
The patch aims to fix a performance issue for the syscall
personality(PER_LINUX32).
On an IA-64 box, the syscall personality(PER_LINUX32) has poor performance
because it fails to find the Linux/x86 execution domain. It then tries
to load a kernel module, which always fails, and the default
execution domain PER_LINUX is used instead. Requesting kernel modules is very
expensive, which caused the performance issue (see the function
lookup_exec_domain in kernel/exec_domain.c).
To resolve the issue, the Linux/x86 execution domain is always registered at
initialization time on the IA-64 architecture.
Signed-off-by: Xiaolan Huang
Signed-off-by: Tony Luck
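For illustration, a hedged sketch of the fix described above, using the
exec-domain API of that era (field and function names follow
kernel/exec_domain.c; the exact ia64 file and initializer are not shown here):

    static struct exec_domain ia32_exec_domain = {
        .name      = "Linux/x86",
        .pers_low  = PER_LINUX32,
        .pers_high = PER_LINUX32,
    };

    static int __init ia32_exec_domain_init(void)
    {
        /* registering at boot avoids a doomed request_module() round trip
           every time personality(PER_LINUX32) is called */
        return register_exec_domain(&ia32_exec_domain);
    }
    __initcall(ia32_exec_domain_init);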