30 Jan, 2008
40 commits
-
Here is a quick and naive smoke test for kprobes. It is intended
simply to verify whether some unrelated change broke the *probes subsystem. It is
self contained, architecture agnostic and isn't of any great use by itself.
This needs to be built into the kernel and runs a basic set of tests to
verify that kprobes, jprobes and kretprobes run fine on the kernel. In case
of an error, it'll print out a message with a "BUG" prefix.
This is a start; we intend to add more tests to this bucket over time.
Thanks to Jim Keniston and Masami Hiramatsu for comments and suggestions.
Tested on x86 (32/64) and powerpc.
Signed-off-by: Ananth N Mavinakayanahalli
Acked-by: Masami Hiramatsu
Signed-off-by: Thomas Gleixner
Signed-off-by: Ingo Molnar -
Form a single percpu.h from percpu_32.h and percpu_64.h. Both are now pretty
small, so this is simply adding them together.
Cc: Rusty Russell
Signed-off-by: Christoph Lameter
Signed-off-by: Mike Travis
Signed-off-by: Thomas Gleixner
Signed-off-by: Ingo Molnar -
x86_64 provides an optimized way to determine the local per cpu area
offset through the pda, and determines the base by accessing a remote
pda.
Cc: Rusty Russell
Cc: Andi Kleen
Signed-off-by: Christoph Lameter
Signed-off-by: Mike Travis
Signed-off-by: Thomas Gleixner
Signed-off-by: Ingo Molnar -
x86_32 only provides a special way to obtain the local per cpu area offset
via x86_read_percpu. Otherwise it can fully use the generic handling.
Cc: ak@suse.de
Signed-off-by: Christoph Lameter
Signed-off-by: Mike Travis
Signed-off-by: Thomas Gleixner
Signed-off-by: Ingo Molnar -
- add support for PER_CPU_ATTRIBUTES
- fix generic smp percpu_modcopy to use per_cpu_offset() macro.
Add the ability to use generic/percpu even if the arch needs to override
several aspects of its operations. This will enable the use of generic
percpu.h for all arches.
An arch may define:
__per_cpu_offset        Do not use the generic pointer array. Arch must
                        define per_cpu_offset(cpu) (used by x86_64, s390).
__my_cpu_offset         Can be defined to provide an optimized way to determine
                        the offset for variables of the currently executing
                        processor. Used by ia64, x86_64, x86_32, sparc64, s/390.
SHIFT_PTR(ptr, offset)  If an arch defines it then special handling
                        of pointer arithmetic may be implemented. Used
                        by s/390.
(Some of these special percpu arch implementations may be later consolidated
so that there are fewer cases to deal with.)
Cc: Rusty Russell
Cc: Andi Kleen
Signed-off-by: Christoph Lameter
Signed-off-by: Mike Travis
Signed-off-by: Thomas Gleixner
Signed-off-by: Ingo Molnar -
- Special consideration for IA64: Add the ability to specify
  arch specific per cpu flags
- remove .data.percpu attribute from DEFINE_PER_CPU for the non-SMP case.
The arch definitions are all the same, so move them into linux/percpu.h.
We cannot move DECLARE_PER_CPU since some include files just include
asm/percpu.h to avoid include recursion problems.
Cc: Rusty Russell
Cc: Andi Kleen
Signed-off-by: Christoph Lameter
Signed-off-by: Mike Travis
Signed-off-by: Thomas Gleixner
Signed-off-by: Ingo Molnar -
The use of __GENERIC_PERCPU is a bit problematic since arches
may want to run their own percpu setup while using the generic
percpu definitions. Replace it with a Kconfig variable.
Cc: Rusty Russell
Cc: Andi Kleen
Signed-off-by: Christoph Lameter
Signed-off-by: Mike Travis
Signed-off-by: Thomas Gleixner
Signed-off-by: Ingo Molnar -
The boot protocol has until now required that the initrd be located in
lowmem, which makes the lowmem/highmem boundary visible to the boot
loader. This was exported to the bootloader via a compile-time
field. Unfortunately, the vmalloc= command-line option breaks this
part of the protocol; instead of adding yet another hack that affects
the bootloader, have the kernel relocate the initrd down below the
lowmem boundary inside the kernel itself.
Note that this does not rely on HIGHMEM being enabled in the kernel.
Signed-off-by: H. Peter Anvin
Signed-off-by: Thomas Gleixner
Signed-off-by: Ingo Molnar -
This patch exports the boot parameters via debugfs for debugging.
The files added are as follows:
boot_params/data : binary file for struct boot_params
boot_params/version : boot protocol version
This patch is based on 2.6.24-rc5-mm1 and has been tested on i386 and
x86_64 platforms.
This patch is based on Peter Anvin's proposal.
Signed-off-by: Huang Ying
Signed-off-by: Thomas Gleixner
Signed-off-by: Ingo Molnar -
reboot_{32|64}.c unification patch.
This patch unifies the code from the reboot_32.c and reboot_64.c files.
It has been tested on computers with X86_32 and X86_64 kernels and it
looks like all reboot modes work fine (the EFI restart system hasn't been
tested yet).
Probably I made some mistakes (like I usually do), so I hope
we can identify and fix them soon.
Signed-off-by: Miguel Boton
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
Unlike oopses, WARN_ON() currently doesn't print the loaded modules list.
This makes it harder to take action on certain bug reports. For example,
recently there were a set of WARN_ON()s reported in the mac80211 stack,
which were just signalling a driver bug. It then takes another round trip
to the bug reporter (if he responds at all) to find out which driver
is at fault.
Another issue is that, unlike oopses, WARN_ON() doesn't currently printk
the helpful "cut here" line, nor the "end of trace" marker.
Now that WARN_ON() is out of line, the size increase due to this is
minimal and it's worth adding.
Signed-off-by: Arjan van de Ven
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
A quick grep shows that there are currently 1145 instances of WARN_ON
in the kernel. Currently, WARN_ON is pretty much entirely inlined,
which makes it hard to enhance it without growing the size of the kernel
(and getting Andrew unhappy).
This patch builds on top of Olof's patch that introduces __WARN,
and places the slowpath out of line. It also uses Ingo's suggestion
to not use __FUNCTION__ but to use kallsyms to do the lookup;
this saves a ton of extra space since gcc doesn't need to store the function
string twice now:
3936367  833603  624736 5394706  525112 vmlinux.before
3917508  833603  624736 5375847  520767 vmlinux-slowpath
15Kb savings...
Signed-off-by: Arjan van de Ven
CC: Andrew Morton
CC: Olof Johansson
Acked-by: Matt Mackall
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
Introduce __WARN() in the generic case, so the generic WARN_ON()
can use arch-specific code for when the condition is true.
Signed-off-by: Olof Johansson
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
Signed-off-by: Abhishek Sagar
Signed-off-by: Quentin Barnes
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
While examining the vmlinux namelist on i386 (nm -v vmlinux) I noticed:
c01021d0 t es7000_rename_gsi
c010221a T es7000_start_cpu
c0103000 T thread_saved_pc
and
c0113218 T acpi_restore_state_mem
c0113219 T acpi_save_state_mem
c0114000 t wakeup_code
This is because arch/x86/kernel/acpi/wakeup_32.S forces a .text alignment
of 4096 bytes. (I have no idea if it is really needed, since
arch/x86/kernel/acpi/wakeup_64.S uses a 16 byte alignment *only*.)
So arch/x86/kernel/built-in.o also has this alignment:
arch/x86/kernel/built-in.o: file format elf32-i386
Sections:
Idx Name      Size      VMA       LMA       File off  Algn
  0 .text     00018c94  00000000  00000000  00001000  2**12
              CONTENTS, ALLOC, LOAD, RELOC, READONLY, CODE
But as arch/x86/kernel/acpi/wakeup_32.o is not the first object linked
into arch/x86/kernel/built-in.o, the linker had to insert several holes to meet
alignment requirements, because of .o nestings in the kbuild process.
This can be solved by using a special section, .text.page_aligned, so that
no holes are needed.
# size vmlinux.before vmlinux.after
   text    data     bss     dec     hex filename
4619942  422838  458752 5501532  53f25c vmlinux.before
4610534  422838  458752 5492124  53cd9c vmlinux.after
This saves 9408 bytes.
Signed-off-by: Eric Dumazet
Signed-off-by: Eric Dumazet
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
clean up include/asm-x86/calling.h.
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
Otherwise
WARNING: vmlinux.o(.text+0x64a9): Section mismatch: reference to .init.text:machine_specific_memory_setup (between 'memory_setup' and 'show_cpuinfo')
Signed-off-by: Andi Kleen
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
Someone complained that the 32-bit defconfig contains AS as the default IO
scheduler. Change that to CFQ.
Signed-off-by: Andi Kleen
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
Previously the complete files were #ifdef'ed; now handle that in the
Makefile.
This may save a minor bit of compilation time.
[ Stephen Rothwell : build dependency fix ]
Signed-off-by: Andi Kleen
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
Add missing targets and missing options in x86 make help
[ mingo@elte.hu: more whitespace cleanups ]
Signed-off-by: Andi Kleen
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
I don't know of any case where they have been useful and they look ugly.
Signed-off-by: Andi Kleen
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
This is useful to debug problems with interrupt handlers that
sometimes return IRQ_NONE.
Signed-off-by: Andi Kleen
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
This allows changing them at runtime using sysctl. No need to
reboot to set them.
I only added aliases (kernel.noirqdebug etc.), so the old options
still work.
Signed-off-by: Andi Kleen
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
# HG changeset patch
# User Jeremy Fitzhardinge
# Date 1199391030 28800
# Node ID 5d35c92fdf0e2c52edbb6fc4ccd06c7f65f25009
# Parent 22f6a5902285b58bfc1fbbd9e183498c9017bd78
x86/efi: fix improper use of lvalue
pgd_val is no longer valid as an lvalue, so don't try to assign to it.
Signed-off-by: Jeremy Fitzhardinge
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
# HG changeset patch
# User Jeremy Fitzhardinge
# Date 1199321648 28800
# Node ID 22f6a5902285b58bfc1fbbd9e183498c9017bd78
# Parent bba9287641ff90e836d090d80b5c0a846aab7162
x86: page.h: move things back to their own files
Oops, asm/page.h has turned into an #ifdef hellhole. Move
32/64-specific things back to their own headers to make it somewhat
comprehensible...
Signed-off-by: Jeremy Fitzhardinge
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
# HG changeset patch
# User Jeremy Fitzhardinge
# Date 1199319657 28800
# Node ID bba9287641ff90e836d090d80b5c0a846aab7162
# Parent d617b72a0cc9d14bde2087d065c36d4ed3265761
x86: page.h: move remaining bits and pieces
Move the remaining odds and ends into page.h.
Signed-off-by: Jeremy Fitzhardinge
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
# HG changeset patch
# User Jeremy Fitzhardinge
# Date 1199319656 28800
# Node ID d617b72a0cc9d14bde2087d065c36d4ed3265761
# Parent 3bd7db6e85e66e7f3362874802df26a82fcb2d92
x86: page.h: move pa and va related things
Move and unify the virtual<->physical address space conversion
functions.
Signed-off-by: Jeremy Fitzhardinge
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
based on:
Subject: x86: page.h: move and unify types for pagetable entry
From: Jeremy Fitzhardinge
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
# HG changeset patch
# User Jeremy Fitzhardinge
# Date 1199319654 28800
# Node ID 3bd7db6e85e66e7f3362874802df26a82fcb2d92
# Parent f7e7db3facd9406545103164f9be8f9ba1a2b549
x86: page.h: move and unify types for pagetable entry definitions
This patch:
1. Defines arch-specific types for the contents of a pagetable entry.
That is, 32-bit entries for 32-bit non-PAE, and 64-bit entries for
32-bit PAE and 64-bit. However, even though the latter two are the
same size, they're defined with different types in order to retain
compatibility with printk format strings, etc.
2. Defines arch-specific pte_t. This is different because 32-bit PAE
defines it in two halves, whereas 32-bit non-PAE and 64-bit define it as a
single entry. All the other pagetable levels can be defined in a
common way. This also defines arch-specific pte_val/make_pte functions.
3. Defines PAGETABLE_LEVELS for each architecture variation, for later use.
4. Defines common pagetable entry accessors in a paravirt-compatible
way. (64-bit does not yet use paravirt-ops in any way.)
5. Converts a few instances of using a *_val() as an lvalue where it is
no longer a macro. There are still places in the 64-bit code which
use pte_val() as an lvalue.
Signed-off-by: Jeremy Fitzhardinge
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
based on:
Subject: x86: page.h: move and unify types for pagetable entry
From: Jeremy Fitzhardinge
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
based on:
Subject: x86: page.h: move and unify types for pagetable entry
From: Jeremy Fitzhardinge
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
based on:
Subject: x86: page.h: move and unify types for pagetable entry
From: Jeremy Fitzhardinge
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
based on:
Subject: x86: page.h: move and unify types for pagetable entry
From: Jeremy Fitzhardinge
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
# HG changeset patch
# User Jeremy Fitzhardinge
# Date 1199317452 28800
# Node ID f7e7db3facd9406545103164f9be8f9ba1a2b549
# Parent 4d9a413a0f4c1d98dbea704f0366457b5117045d
x86: add _AT() macro to conditionally cast
Define _AT(type, value) to conditionally cast a value when compiling C
code, but not when used in assembler.
Signed-off-by: Jeremy Fitzhardinge
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
# HG changeset patch
# User Jeremy Fitzhardinge
# Date 1199317362 28800
# Node ID 4d9a413a0f4c1d98dbea704f0366457b5117045d
# Parent ba0ec40a50a7aef1a3153cea124c35e261f5a2df
x86: page.h: unify page copying and clearing
Move, and to some extent unify, the various page copying and clearing
functions. The only unification here is that both architectures use
the same function for copying/clearing user and kernel pages.
Signed-off-by: Jeremy Fitzhardinge
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
# HG changeset patch
# User Jeremy Fitzhardinge
# Date 1199317360 28800
# Node ID ba0ec40a50a7aef1a3153cea124c35e261f5a2df
# Parent c45c263179cb78284b6b869c574457df088027d1
x86: page.h: unify constants
There are many constants which are shared by 32- and 64-bit.
Signed-off-by: Jeremy Fitzhardinge
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
Commits
- c52f61fcbdb2aa84f0e4d831ef07f375e6b99b2c
(x86: allow TSC clock source on AMD Fam10h and some cleanup)
- e30436f05d456efaff77611e4494f607b14c2782
(x86: move X86_FEATURE_CONSTANT_TSC into early cpu feature detection)
are supposed to fix the detection of constant TSC for AMD CPUs.
Unfortunately on x86_64 it still does not work with current x86/mm.
For a Phenom I still get:
...
TSC calibrated against PM_TIMER
Marking TSC unstable due to TSCs unsynchronized
time.c: Detected 2288.366 MHz processor.
...
We have to set c->x86_power in early_identify_cpu to properly detect
the CONSTANT_TSC bit in early_init_amd.
The attached patch fixes this issue. Following are the relevant boot
messages when the fix is used:
...
TSC calibrated against PM_TIMER
time.c: Detected 2288.279 MHz processor.
...
Initializing CPU#1
...
checking TSC synchronization [CPU#0 -> CPU#1]: passed.
...
Initializing CPU#2
...
checking TSC synchronization [CPU#0 -> CPU#2]: passed.
...
Booting processor 3/4 APIC 0x3
...
checking TSC synchronization [CPU#0 -> CPU#3]: passed.
Brought up 4 CPUs
...
Patch is against x86/mm (v2.6.24-rc8-672-ga9f7faa).
Please apply.
Set c->x86_power in early_identify_cpu. This ensures that
X86_FEATURE_CONSTANT_TSC can properly be set in early_init_amd.
Signed-off-by: Andreas Herrmann
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
The ACPI code currently disables TSC use in any C2 and C3
states. But the AMD Fam10h BKDG documents that the TSC
will never stop in any C states when the CONSTANT_TSC bit is
set. Make this disabling conditional on CONSTANT_TSC
not being set on AMD.
I actually think this is true on Intel too for C2 states
on CPUs with p-state invariant TSC, but this needs
further discussion with Len to really confirm :-)
So far it is only enabled on AMD.
Cc: lenb@kernel.org
Signed-off-by: Andi Kleen
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner -
Trust the ACPI code to disable TSC instead when C3 is used.
AMD Fam10h does not disable TSC in any C states, so the
check was incorrect there anyway after the change
to handle this like Intel on AMD too.
This allows using the TSC when C3 is disabled in software
(acpi.max_c_state=2) but the BIOS supports it anyway.
Match i386 behaviour.
Cc: lenb@kernel.org
Signed-off-by: Andi Kleen
Signed-off-by: Ingo Molnar
Signed-off-by: Thomas Gleixner