24 Jul, 2008
1 commit
-
This patch removes the old kgdb remnants from ARCH=powerpc and
implements the new style arch specific stub for the common kgdb core
interface. It is possible to have xmon and kgdb in the same kernel, but you
cannot use both at the same time because there is only one set of
debug hooks. The arch specific kgdb implementation saves the previous state of the
debug hooks and restores them if you unconfigure the kgdb I/O driver.
Kgdb should have no impact on a kernel that has no kgdb I/O driver
configured.
Signed-off-by: Jason Wessel
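The save-and-restore of the debug hooks described in this entry follows a common pattern: one hook slot, with the new owner stashing the previous value and putting it back on detach. A minimal userspace sketch, with all names illustrative rather than the kernel's:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical debug-hook slot: on powerpc there is a single set of
 * debug hooks shared by xmon and kgdb, so only one can own it. */
typedef int (*debugger_hook_t)(int trap);

static debugger_hook_t active_hook;   /* the single hook slot */
static debugger_hook_t saved_hook;    /* previous owner, restored on detach */

static int xmon_hook(int trap) { (void)trap; return 1; }
static int kgdb_hook(int trap) { (void)trap; return 2; }

/* Install kgdb's hook, remembering whatever was there before. */
static void kgdb_attach(void)
{
    saved_hook = active_hook;
    active_hook = kgdb_hook;
}

/* Unconfiguring the kgdb I/O driver puts the old hook back. */
static void kgdb_detach(void)
{
    active_hook = saved_hook;
}
```

With this shape, unloading the kgdb I/O driver leaves xmon (or nothing) in charge again, matching the behavior the entry describes.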
22 Jul, 2008
6 commits
-
Manually fixed up:
drivers/net/fs_enet/fs_enet-main.c
-
Use the new PPC_LONG_ALIGN macro instead of passing an argument
to the asm, for consistency.
Signed-off-by: Michael Ellerman
Signed-off-by: Benjamin Herrenschmidt
-
Add a #define for aligning to a long-sized boundary. It would be nice
to use sizeof(long) for this, but that requires generating the value
with asm-offsets.c, and asm-offsets.c includes asm-compat.h and we
descend into some sort of recursive include hell.
Signed-off-by: Michael Ellerman
Signed-off-by: Benjamin Herrenschmidt
-
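The long-sized alignment constant this entry adds can be sketched portably: the value is the log2 of sizeof(long), hard-coded per ABI because the assembler cannot see sizeof(long) without the asm-offsets.c round trip the entry mentions. Here `__LP64__` stands in for the kernel's `__powerpc64__` test and the macro name is hypothetical:

```c
#include <assert.h>

/* Sketch of a PPC_LONG_ALIGN-style constant: the log2 alignment of a
 * long, selected at preprocessing time per ABI. The kernel keys this
 * off __powerpc64__; __LP64__ is used here so the sketch is portable. */
#ifdef __LP64__
#define LONG_ALIGN_SHIFT 3   /* .align 3 -> 8-byte boundary */
#else
#define LONG_ALIGN_SHIFT 2   /* .align 2 -> 4-byte boundary */
#endif
```

The payoff is that assembly can say `.align LONG_ALIGN_SHIFT` and get a long-sized boundary on both 32- and 64-bit builds from one source file.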
Add a sub match id for the ps3 system bus so that two different system bus
devices can be connected to a shared device.
Signed-off-by: Masakazu Mokuno
Signed-off-by: Geoff Levand
Signed-off-by: Benjamin Herrenschmidt
-
Update iommu_alloc() to take the struct dma_attrs and pass them on to
tce_build(). This change propagates down to the tce_build functions of
all the platforms.
Signed-off-by: Mark Nelson
Signed-off-by: Arnd Bergmann
Signed-off-by: Benjamin Herrenschmidt
-
This patch adds support for the power button on future IBM cell blades.
It doesn't actually shut down the machine. Instead it exposes an
input device /dev/input/event0 to userspace which sends KEY_POWER
when the power button has been pressed.
haldaemon recognizes the button, so a platform independent acpid
replacement should handle it correctly.
Signed-off-by: Christian Krafft
Signed-off-by: Arnd Bergmann
Signed-off-by: Benjamin Herrenschmidt
20 Jul, 2008
1 commit
-
This patch enables coalesced MMIO for the powerpc architecture.
It defines KVM_MMIO_PAGE_OFFSET and KVM_CAP_COALESCED_MMIO.
It enables the compilation of coalesced_mmio.c.
Signed-off-by: Laurent Vivier
Signed-off-by: Avi Kivity
17 Jul, 2008
3 commits
-
This converts the FSL Book-E PTE access and TLB miss handling to match
the recent changes to 44x that introduce support for non-atomic PTE
operations in pgtable-ppc32.h and remove the write back to the PTE from
the TLB miss handlers. In addition, the DSI interrupt code no longer
tries to fixup write permission; this is left to generic code, and
_PAGE_HWWRITE is gone.
Signed-off-by: Kumar Gala
-
Mostly fixes for things not being marked __iomem, and some failures
to use appropriate accessors to read MMIO regs.
Signed-off-by: Andy Fleming
Signed-off-by: Kumar Gala
-
Basic PM support for 83xx. Standby is implemented as sleep.
Suspend-to-RAM is implemented as "deep sleep" (with the processor
turned off) on 831x.
Signed-off-by: Scott Wood
Signed-off-by: Kumar Gala
16 Jul, 2008
2 commits
-
Manual merge of:
arch/powerpc/Kconfig
arch/powerpc/kernel/stacktrace.c
arch/powerpc/mm/slice.c
arch/ppc/kernel/smp.c
-
Conflicts:
arch/powerpc/Kconfig
arch/s390/kernel/time.c
arch/x86/kernel/apic_32.c
arch/x86/kernel/cpu/perfctr-watchdog.c
arch/x86/kernel/i8259_64.c
arch/x86/kernel/ldt.c
arch/x86/kernel/nmi_64.c
arch/x86/kernel/smpboot.c
arch/x86/xen/smp.c
include/asm-x86/hw_irq_32.h
include/asm-x86/hw_irq_64.h
include/asm-x86/mach-default/irq_vectors.h
include/asm-x86/mach-voyager/irq_vectors.h
include/asm-x86/smp.h
kernel/Makefile
Signed-off-by: Ingo Molnar
15 Jul, 2008
6 commits
-
Manual fixup of:
arch/powerpc/Kconfig
-
Because the pte is now 64 bits the compiler was optimizing the update
to always clear the upper 32 bits of the pte. We need to ensure the
clr mask is treated as an unsigned long long to get the proper behavior.
Signed-off-by: Kumar Gala
Acked-by: Josh Boyer
Signed-off-by: Benjamin Herrenschmidt
-
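The truncation described in this entry can be reproduced in plain C: complementing a 32-bit clear mask happens at 32 bits, and the result is then zero-extended, wiping the upper word of a 64-bit pte. A sketch with hypothetical helper names:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the bug: with a 64-bit pte and a 32-bit clear mask, ~clr is
 * computed as a 32-bit value (0xFFFFFFFE for clr = 1) and zero-extended,
 * so the AND clears the entire upper word of the pte. */
static uint64_t clear_bits_buggy(uint64_t pte, uint32_t clr)
{
    return pte & ~clr;            /* ~clr zero-extends to 0x00000000FFFFFFFE */
}

/* The fix: widen the mask before complementing, as the patch does by
 * treating the clr mask as unsigned long long. */
static uint64_t clear_bits_fixed(uint64_t pte, uint32_t clr)
{
    return pte & ~(uint64_t)clr;  /* mask is complemented at 64 bits */
}
```

With `pte = 0xAABBCCDD00000007` and `clr = 1`, the buggy version yields `0x0000000000000006` while the fixed one preserves the upper word.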
giveup_vsx didn't save the FPU and VMX registers. Change it to be
like giveup_fpr/altivec, which save these registers.
Also update call sites where FPU and VMX are already saved to use the
original giveup_vsx (renamed to __giveup_vsx).
Signed-off-by: Michael Neuling
Signed-off-by: Benjamin Herrenschmidt
-
Background from Maynard Johnson:
As of POWER6, a set of 32 common events is defined that must be
supported on all future POWER processors. The main impetus for this
compat set is the need to support partition migration, especially from
processor P(n) to processor P(n+1), where performance software that's
running in the new partition may not be knowledgeable about processor
P(n+1). If a performance tool determines it does not support the
physical processor, but is told (via the
PPC_FEATURE_PSERIES_PERFMON_COMPAT bit) that the processor supports
the notion of the PMU compat set, then the performance tool can
surface just those events to the user of the tool.
PPC_FEATURE_PSERIES_PERFMON_COMPAT indicates that the PMU supports at
least this basic subset of events, which is compatible across POWER
processor lines.
Signed-off-by: Nathan Lynch
Signed-off-by: Benjamin Herrenschmidt
-
Commit ef3d3246a0d06be622867d21af25f997aeeb105f ("powerpc/mm: Add Strong
Access Ordering support") in the powerpc/{next,master} tree caused the
following in a powerpc allmodconfig build:
usr/include/asm/mman.h requires linux/mm.h, which does not exist in exported headers
We should not use CONFIG_PPC64 in an unprotected (by __KERNEL__)
section of an exported include file, and linux/mm.h is not exported. So
protect the whole section that is CONFIG_PPC64 with __KERNEL__ and put
the two introduced includes in there as well.
Signed-off-by: Stephen Rothwell
Acked-by: Dave Kleikamp
Signed-off-by: Benjamin Herrenschmidt
14 Jul, 2008
1 commit
-
Manual fixup of include/asm-powerpc/pgtable-ppc64.h
10 Jul, 2008
2 commits
-
This is some preliminary work to improve TLB management on SW loaded
TLB powerpc platforms. This introduces support for non-atomic PTE
operations in pgtable-ppc32.h and removes the write back to the PTE from
the TLB miss handlers. In addition, the DSI interrupt code no longer
tries to fixup write permission; this is left to generic code, and
_PAGE_HWWRITE is gone.
Signed-off-by: Benjamin Herrenschmidt
Signed-off-by: Josh Boyer
Signed-off-by: Josh Boyer
09 Jul, 2008
9 commits
-
Allow an application to enable Strong Access Ordering on specific pages of
memory on Power 7 hardware. Currently, power has a weaker memory model than
x86. Implementing a stronger memory model allows an emulator to more
efficiently translate x86 code into power code, resulting in faster code
execution.
On Power 7 hardware, storing 0b1110 in the WIMG bits of the hpte enables
strong access ordering mode for the memory page. This patchset allows a
user to specify which pages are thus enabled by passing a new protection
bit through mmap() and mprotect(). I have defined PROT_SAO to be 0x10.
Signed-off-by: Dave Kleikamp
Signed-off-by: Benjamin Herrenschmidt
-
Add the CPU feature bit for the new Strong Access Ordering
facility of Power7.
Signed-off-by: Dave Kleikamp
Signed-off-by: Joel Schopp
Signed-off-by: Benjamin Herrenschmidt
-
This patch defines:
- PROT_SAO, which is passed into mmap() and mprotect() in the prot field
- VM_SAO in vma->vm_flags, and
- _PAGE_SAO, the combination of WIMG bits in the pte that enables strong
access ordering for the page.
Signed-off-by: Dave Kleikamp
Signed-off-by: Benjamin Herrenschmidt
-
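A new prot bit like PROT_SAO must not collide with the generic mmap protection bits so it can be OR'd into the prot field alongside them. A small check of the 0x10 value chosen in this series; the macro name here is illustrative, not the exported definition:

```c
#include <assert.h>
#include <sys/mman.h>

/* Illustrative stand-in for the PROT_SAO bit defined by this series.
 * On Power7 one would pass it alongside the usual bits, e.g.
 *   mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_SAO_SKETCH,
 *        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
 * (not executed here, since only SAO-capable hardware accepts it). */
#define PROT_SAO_SKETCH 0x10
```

The value sits just above PROT_EXEC (0x4), leaving the generic read/write/execute bits untouched.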
The task_pt_regs() macro allows access to the pt_regs of a given task.
This macro is not currently defined for the powerpc architecture, but
we need it for some upcoming utrace additions.
Signed-off-by: Srinivasa DS
Signed-off-by: Benjamin Herrenschmidt
-
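On most architectures task_pt_regs() is pure pointer arithmetic: the user-mode pt_regs are saved at the top of the task's kernel stack, so the macro subtracts one pt_regs from the stack top. A userspace sketch of that shape; every name here is illustrative, not the kernel's:

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stack size and register frame; the kernel's layout and
 * names differ, this only demonstrates the pointer arithmetic. */
#define STACK_SIZE_SKETCH 8192

struct pt_regs_sketch { unsigned long gpr[32]; unsigned long nip; };
struct task_sketch { void *stack; };

/* pt_regs live immediately below the top of the task's stack. */
#define task_pt_regs_sketch(t) \
    ((struct pt_regs_sketch *)((char *)(t)->stack + STACK_SIZE_SKETCH) - 1)
```

Given a task's stack base, the macro lands exactly one register frame below the stack top, which is where the entry code saved the user registers.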
Move device_to_mask() to dma-mapping.h because we need to use it from
outside dma_64.c in a later patch.
Signed-off-by: Mark Nelson
Signed-off-by: Arnd Bergmann
Signed-off-by: Benjamin Herrenschmidt
-
Update powerpc to use the new dma_*map*_attrs() interfaces. In doing so
update struct dma_mapping_ops to accept a struct dma_attrs and propagate
these changes through to all users of the code (generic IOMMU and the
64bit DMA code, and the iseries and ps3 platform code).
The old dma_*map_*() interfaces are reimplemented as calls to the
corresponding new interfaces.
Signed-off-by: Mark Nelson
Signed-off-by: Arnd Bergmann
Acked-by: Geoff Levand
Signed-off-by: Benjamin Herrenschmidt
-
Make iommu_map_sg take a struct iommu_table. It did so before commit
740c3ce66700640a6e6136ff679b067e92125794 (iommu sg merging: ppc: make
iommu respect the segment size limits).
This stops the function looking in the archdata.dma_data for the iommu
table, because in the future it will be called with a device that has
no table there.
This also has the nice side effect of making iommu_map_sg() match the
other map functions.
Signed-off-by: Mark Nelson
Signed-off-by: Arnd Bergmann
Signed-off-by: Benjamin Herrenschmidt
-
As the nr_active counter also includes spus waiting for syscalls to return,
we need a separate counter that only counts spus that are currently running
on the spu side. This counter shall be used by a cpufreq governor that targets
a frequency dependent on the number of running spus.
Signed-off-by: Christian Krafft
Acked-by: Jeremy Kerr
Signed-off-by: Benjamin Herrenschmidt
-
As Andy Whitcroft recently pointed out, the current powerpc version of
huge_ptep_set_wrprotect() has a bug. It just calls ptep_set_wrprotect(),
which in turn calls pte_update() then hpte_need_flush() with the 'huge'
argument set to 0. This will cause hpte_need_flush() to flush the wrong
hash entries (if any). Andy's fix for this is already in the powerpc
tree as commit 016b33c4958681c24056abed8ec95844a0da80a3.
I have confirmed this is a real bug, not masked by some other
synchronization, with a new testcase for libhugetlbfs: a process writes to
a (MAP_PRIVATE) hugepage mapping, fork()s, then alters the mapping, and
the child incorrectly sees the second write.
Therefore, this should be fixed for 2.6.26, and for the stable tree.
Here is a suitable patch for 2.6.26, which I think will also be suitable
for the stable tree (neither of the headers in question has been changed
much recently).
It is cut down slightly from Andy's original version, in that it does
not include a 32-bit version of huge_ptep_set_wrprotect(). Currently,
hugepages are not supported on any 32-bit powerpc platform. When they
are, a suitable 32-bit version can be added - the only 32-bit hardware
which supports hugepages does not use the conventional hashtable MMU and
so will have different needs anyway.
Signed-off-by: Andy Whitcroft
Signed-off-by: David Gibson
Signed-off-by: Benjamin Herrenschmidt
Signed-off-by: Linus Torvalds
03 Jul, 2008
5 commits
-
This updates the device tree manipulation routines so that memory
add/remove of lmbs represented under the
ibm,dynamic-reconfiguration-memory node of the device tree invokes the
hotplug notifier chain.
This change is needed because of the change in the way memory is
represented under the ibm,dynamic-reconfiguration-memory node. All lmbs
are described in the ibm,dynamic-memory property instead of having a
separate node for each lmb as in previous device tree layouts. This
requires the update_node() routine to check for updates to the
ibm,dynamic-memory property and invoke the hotplug notifier chain.
This also updates the pseries hotplug notifier to be able to gather information
for lmbs represented under the ibm,dynamic-reconfiguration-memory node and
have the lmbs added/removed.
Signed-off-by: Nathan Fontenot
Signed-off-by: Paul Mackerras
-
Since Roland's ptrace cleanup starting with commit
f65255e8d51ecbc6c9eef20d39e0377d19b658ca ("[POWERPC] Use user_regset
accessors for FP regs"), the dump_task_* functions are no longer being
used.
Signed-off-by: Michael Neuling
Signed-off-by: Paul Mackerras
-
To allow for a single kernel image on e500 v1/v2/mc we need to fixup lwsync
at runtime. On e500v1/v2 lwsync causes an illop, so we need to patch up
the code. We default to 'sync' since that is always safe, and if the cpu
is capable we will replace 'sync' with 'lwsync'.
We introduce CPU_FTR_LWSYNC as a way to determine at runtime if this is
needed. This flag could be moved elsewhere since we don't really use it
for the normal CPU_FTR purpose.
Finally, we only store the relative offset in the fixup section to keep it
as small as possible, rather than using a full fixup_entry.
Signed-off-by: Kumar Gala
Signed-off-by: Paul Mackerras
-
Currently we get this warning:
arch/powerpc/kernel/init_task.c:33: warning: missing braces around initializer
arch/powerpc/kernel/init_task.c:33: warning: (near initialization for 'init_task.thread.fpr[0]')
This fixes it.
Noticed by Stephen Rothwell.
Signed-off-by: Michael Neuling
Signed-off-by: Paul Mackerras
-
Currently the kernel fails to build with the above config options with:
CC arch/powerpc/mm/mem.o
arch/powerpc/mm/mem.c: In function 'arch_add_memory':
arch/powerpc/mm/mem.c:130: error: implicit declaration of function 'create_section_mapping'
This explicitly includes asm/sparsemem.h in arch/powerpc/mm/mem.c and
moves the guards in include/asm-powerpc/sparsemem.h to protect the
SPARSEMEM specific portions only.
Signed-off-by: Tony Breeds
Signed-off-by: Paul Mackerras
01 Jul, 2008
4 commits
-
This correctly hooks the VSX dump into Roland McGrath's core file
infrastructure. It adds the VSX dump information as an additional elf
note in the core file (after talking more to the tool chain/gdb guys).
This also ensures the formats are consistent between signals, ptrace
and core files.
Signed-off-by: Michael Neuling
Signed-off-by: Paul Mackerras
-
Currently when a 32 bit process is exec'd on a powerpc 64 bit host, the
value in the top three bytes of the personality is clobbered. This patch
adds a check in the SET_PERSONALITY macro that will carry all the
values in the top three bytes across the exec.
These three bytes currently carry flags to disable address randomisation,
limit the address space, force zeroing of an mmapped page, etc. Should an
application set any of these bits, they will be maintained and honoured in a
homogeneous environment but discarded and ignored in a heterogeneous
environment. So if an application requires all mmapped pages to be initialised
to zero and a wrapper is used to set up the personality and exec the target,
these flags will remain set in an all 32 or all 64 bit environment, but they
will be lost in the exec on a mixed 32/64 bit environment. Losing these bits
means that the same application would behave differently in different
environments. Tested on a POWER5+ machine with a 64bit kernel and a mixed
64/32 bit user space.
Signed-off-by: Eric B Munson
Signed-off-by: Paul Mackerras
-
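The fix described above amounts to masking: on exec of a 32-bit binary, keep the flag bits in the top three bytes and replace only the low-byte personality identifier. A sketch of that arithmetic; the function and the PER_LINUX32 stand-in value are illustrative, not the kernel's macro:

```c
#include <assert.h>
#include <stdint.h>

#define PER_LINUX32_SKETCH 0x0008u   /* illustrative low-byte personality */

/* Sketch of the SET_PERSONALITY fix: preserve the flag bits in the top
 * three bytes (address randomisation disable, etc.) and set only the
 * low-byte personality for the 32-bit process. */
static uint32_t set_personality_sketch(uint32_t old)
{
    return (old & 0xffffff00u) | PER_LINUX32_SKETCH;
}
```

A wrapper that set, say, a no-randomisation flag in the top bytes before exec'ing the target now sees that flag survive the exec instead of being clobbered to zero.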
When compiling kernel modules for ppc that include ,
gcc prints a warning message every time it encounters a function
declaration where the inline keyword appears after the return type.
This makes sure that the order of the inline keyword and the return
type is as gcc expects it. Additionally, the __inline__ keyword is
replaced by inline, as checkpatch expects.
Signed-off-by: Bart Van Assche
Signed-off-by: Paul Mackerras
-
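The declaration-specifier ordering this patch enforces is small but concrete: gcc warns when `inline` appears after the return type. A minimal example of the preferred form (the function itself is just a placeholder):

```c
#include <assert.h>

/* gcc warns on:   static int inline add_one(int x) { ... }
 * preferred form: storage class and inline before the return type. */
static inline int add_one(int x)
{
    return x + 1;
}
```

The patch also swaps `__inline__` for plain `inline`, which is what checkpatch expects in kernel code.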
The implementation of huge_ptep_set_wrprotect() directly calls
ptep_set_wrprotect() to mark a hugepte write protected. However this
call is not appropriate on ppc64 kernels as this is a small page only
implementation. This can lead to the hash not being flushed correctly
when a mapping is being converted to COW, allowing processes to continue
using the original copy.
Currently huge_ptep_set_wrprotect() unconditionally calls
ptep_set_wrprotect(). This is fine on ppc32 kernels as this call is
generic. On 64 bit this is implemented as:
pte_update(mm, addr, ptep, _PAGE_RW, 0);
On ppc64 this last parameter is the page size and is passed directly on
to hpte_need_flush():
hpte_need_flush(mm, addr, ptep, old, huge);
And this directly affects the page size we pass to flush_hash_page():
flush_hash_page(vaddr, rpte, psize, ssize, 0);
As this changes the way the hash is calculated, we will flush the wrong
pages, potentially leaving live hashes to the original page.
Move the definition of huge_ptep_set_wrprotect() to the 32/64 bit specific
headers.
Signed-off-by: Andy Whitcroft
Acked-by: Benjamin Herrenschmidt
Signed-off-by: Paul Mackerras