14 Aug, 2009
27 commits
-
To directly use spread NUMA memories for percpu units, the percpu
allocator will be updated to allow sparsely mapping units in a chunk.
As the distances between units can be very large, this makes
allocating a single vmap area for each chunk undesirable. This patch
implements pcpu_get_vm_areas() and pcpu_free_vm_areas(), which
allocate and free sparse congruent vmap areas.

pcpu_get_vm_areas() takes @offsets and @sizes arrays which define the
distances and sizes of the vmap areas. It scans down from the top of
the vmalloc area looking for the top-most address which can
accommodate all the areas. The top-down scan avoids interacting with
regular vmallocs, which could otherwise push these congruent areas up
little by little, wasting address space and page tables.

To speed up the top-down scan, the highest possible address hint is
maintained. Although the scan is linear from the hint, given the
usually large holes between the memory addresses of NUMA nodes, the
scan is highly likely to finish after finding the first hole for the
last unit, which is scanned first.

Signed-off-by: Tejun Heo
Cc: Nick Piggin
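For orientation, here is the interface sketched from the description
above (a reconstruction, not the verbatim committed prototypes):

	/* Allocate @nr_vms congruent vmap areas: area i covers
	 * [base + offsets[i], base + offsets[i] + sizes[i]) for a
	 * base found by scanning the vmalloc area top-down. */
	struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
					     const size_t *sizes,
					     int nr_vms, size_t align);

	/* Free areas obtained from pcpu_get_vm_areas(). */
	void pcpu_free_vm_areas(struct vm_struct **vms, int nr_vms);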
-
Separate out insert_vmalloc_vm() from __get_vm_area_node().

insert_vmalloc_vm() initializes a vm_struct from a vmap_area and
inserts it into vmlist. insert_vmalloc_vm() only initializes the
fields which can be determined from @vm, @flags and @caller; the rest
should be initialized by the caller. For __get_vm_area_node(), all
the other fields just need to be cleared, which is done by using
kzalloc instead of kmalloc.

This will be used to implement pcpu_get_vm_areas().

Signed-off-by: Tejun Heo
Cc: Nick Piggin
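A condensed sketch of the helper, reconstructed from the description
(field assignments approximate):

	static void insert_vmalloc_vm(struct vm_struct *vm,
				      struct vmap_area *va,
				      unsigned long flags, void *caller)
	{
		/* only the fields derivable from @va, @flags, @caller */
		vm->flags = flags;
		vm->addr = (void *)va->va_start;
		vm->size = va->va_end - va->va_start;
		vm->caller = caller;

		/* ... link @vm into the global vmlist ... */
	}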
-
The only thing the percpu allocator wants to know about a vmalloc
area is its base address. Instead of requiring chunk->vm, add
chunk->base_addr which contains the necessary value. This simplifies
the code a bit and makes the dummy first_vm unnecessary. This change
will ease allowing a chunk to be mapped by multiple vms.

Signed-off-by: Tejun Heo
-
Currently units are mapped sequentially into the address space. This
patch adds pcpu_unit_offsets[], which allows units to be mapped at
arbitrary offsets from the chunk base address. This is necessary to
allow sparse embedding, which may need to allocate address ranges and
memory areas which aren't aligned to the unit size but to the
allocation atom size (page or large page size). This also simplifies
things a bit by removing the need to calculate the offset from the
unit number.

With this change, there's no need for the arch code to know
pcpu_unit_size. Update pcpu_setup_first_chunk() and the first chunk
allocators to return a regular 0 or -errno return code instead of
unit size or -errno.

Signed-off-by: Tejun Heo
Cc: David S. Miller
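The effect on address calculation, illustratively (not the literal
kernel code):

	/* before: offset implied by unit number */
	addr = chunk->base_addr + unit * pcpu_unit_size;

	/* after: offset looked up per unit */
	addr = chunk->base_addr + pcpu_unit_offsets[unit];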
-
Till now, the non-linear cpu->unit map was expressed using an integer
array which maps each cpu to a unit and was used only by the lpage
allocator. Although how many units have been placed in a single
contiguous area (group) is known while building unit_map, the
information is lost when the result is recorded into the unit_map
array. For the lpage allocator, as all allocations are done by lpages
and whether two adjacent lpages are in the same group or not is
irrelevant, this didn't cause any problem. Non-linear cpu->unit
mapping will be used for sparse embedding, and this grouping
information is necessary for that.

This patch introduces pcpu_alloc_info which contains all the
information necessary for initializing the percpu allocator.
pcpu_alloc_info contains an array of pcpu_group_info which describes
how units are grouped and mapped to cpus. pcpu_group_info also has a
base_offset field to specify its offset from the chunk's base
address. pcpu_build_alloc_info() initializes this field as if all
groups are allocated back-to-back as is currently done, but this will
be used to sparsely place groups.

pcpu_alloc_info is a rather complex data structure which contains a
flexible array which in turn points to nested cpu_map arrays.

* pcpu_alloc_alloc_info() and pcpu_free_alloc_info() are provided to
  help deal with pcpu_alloc_info.

* pcpu_lpage_build_unit_map() is updated to build pcpu_alloc_info,
  generalized and renamed to pcpu_build_alloc_info().
  @cpu_distance_fn may be NULL, indicating that all cpus are of
  LOCAL_DISTANCE.

* pcpul_lpage_dump_cfg() is updated to process pcpu_alloc_info,
  generalized and renamed to pcpu_dump_alloc_info(). It now also
  prints which group each alloc unit belongs to.

* pcpu_setup_first_chunk() now takes pcpu_alloc_info instead of the
  separate parameters. All first chunk allocators are updated to use
  pcpu_build_alloc_info() to build alloc_info and call
  pcpu_setup_first_chunk() with it. This has the side effect of
  packing units for sparse possible cpus, i.e. if cpus 0, 2 and 4 are
  possible, they'll be assigned units 0, 1 and 2 instead of 0, 2
  and 4.

* x86 setup_pcpu_lpage() is updated to deal with alloc_info.

* sparc64 setup_per_cpu_areas() is updated to build alloc_info.

Although the changes made by this patch are pretty pervasive, it
doesn't cause any behavior difference other than the packing of
sparse cpus. It mostly changes how information is passed among
initialization functions and makes room for more flexibility.

Signed-off-by: Tejun Heo
Cc: Ingo Molnar
Cc: David Miller
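A sketch of the two structures, reconstructed from the description
above (minor bookkeeping fields omitted; treat as approximate):

	struct pcpu_group_info {
		int		nr_units;	/* units in this group */
		unsigned long	base_offset;	/* offset from chunk base */
		unsigned int	*cpu_map;	/* unit -> cpu map */
	};

	struct pcpu_alloc_info {
		size_t	static_size;
		size_t	reserved_size;
		size_t	dyn_size;
		size_t	unit_size;
		size_t	atom_size;
		size_t	alloc_size;
		int	nr_groups;
		struct pcpu_group_info	groups[];	/* flexible array */
	};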
-
Unit map handling will be generalized and extended and used for
embedding the sparse first chunk and other purposes. Relocate two
unit_map related functions upward in preparation. This patch just
moves the code without any actual change.

Signed-off-by: Tejun Heo
-
pcpu_fc_alloc_fn_t is about to see more interesting usage; add an
@align parameter.

Signed-off-by: Tejun Heo
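The resulting hook type, roughly (a sketch consistent with the
description; the @align argument is the addition):

	/* allocate @size bytes aligned at @align for @cpu */
	typedef void * (*pcpu_fc_alloc_fn_t)(unsigned int cpu,
					     size_t size, size_t align);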
-
Now that all actual first chunk allocation and copying happen in the
first chunk allocators and helpers, there's no reason for
pcpu_setup_first_chunk() to try to determine @dyn_size automatically.
The only remaining user is the page first chunk allocator. Make it
determine dyn_size like the other allocators, and make @dyn_size
mandatory for pcpu_setup_first_chunk().

Signed-off-by: Tejun Heo
-
First chunk allocators assume percpu areas have been linked using one
of the PERCPU_*() macros and depend on the __per_cpu_load symbol
defined by those macros, so there isn't much point in passing in the
static area size explicitly when it can easily be calculated from
__per_cpu_start and __per_cpu_end. Drop @static_size from all percpu
first chunk allocators and helpers.

Signed-off-by: Tejun Heo
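The calculation each allocator can now do itself is a one-liner; the
linker script provides the two boundary symbols (sketch):

	extern char __per_cpu_start[], __per_cpu_end[];

	size_t static_size = __per_cpu_end - __per_cpu_start;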
-
Now that all first chunk allocators are in mm/percpu.c, it makes
sense to generalize the percpu_alloc kernel parameter. Define
PCPU_FC_* and set pcpu_chosen_fc using early_param() in mm/percpu.c.
Arch code can use the set value to determine which first chunk
allocator to use.

Signed-off-by: Tejun Heo
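Roughly how such an early_param() hook looks (a condensed sketch, not
the exact mm/percpu.c code; the real version guards each choice with
the corresponding CONFIG_NEED_PER_CPU_*_FIRST_CHUNK option):

	enum pcpu_fc pcpu_chosen_fc __initdata = PCPU_FC_AUTO;

	static int __init percpu_alloc_setup(char *str)
	{
		if (!strcmp(str, "embed"))
			pcpu_chosen_fc = PCPU_FC_EMBED;
		else if (!strcmp(str, "page"))
			pcpu_chosen_fc = PCPU_FC_PAGE;
		return 0;
	}
	early_param("percpu_alloc", percpu_alloc_setup);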
-
There's no need to build unused first chunk allocators in. Define
CONFIG_NEED_PER_CPU_*_FIRST_CHUNK and let archs enable them
selectively.

Signed-off-by: Tejun Heo
-
Page size isn't always 4k, depending on arch and configuration.
Rename the 4k first chunk allocator to page.

Signed-off-by: Tejun Heo
Cc: David Howells
-
Improve the percpu boot messages so that they're uniform and contain
more information.

Signed-off-by: Tejun Heo
Reviewed-by: Christoph Lameter
-
pcpu_reclaim() calls pcpu_depopulate_chunk(), which makes use of the
pages array and bitmap returned by pcpu_get_pages_and_bitmap() and
thus should be called under pcpu_alloc_mutex. pcpu_reclaim() released
the mutex before calling depopulate, leading to double frees and
other strange problems caused by unexpected concurrent use of the
pages array and bitmap. Fix it.

Signed-off-by: Tejun Heo
Reviewed-by: Christoph Lameter
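The shape of the fix, as a hedged sketch (argument lists approximate;
the point is that depopulation happens with pcpu_alloc_mutex still
held):

	static void pcpu_reclaim(struct work_struct *work)
	{
		LIST_HEAD(todo);
		struct pcpu_chunk *chunk, *next;

		mutex_lock(&pcpu_alloc_mutex);
		/* ... move fully free chunks onto @todo ... */

		/* keep holding the mutex: the pages array and bitmap
		 * from pcpu_get_pages_and_bitmap() are shared state */
		list_for_each_entry_safe(chunk, next, &todo, list) {
			/* argument list approximate */
			pcpu_depopulate_chunk(chunk, 0, pcpu_unit_size, false);
			free_pcpu_chunk(chunk);
		}
		mutex_unlock(&pcpu_alloc_mutex);
	}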
-
Conflicts:
	arch/sparc/kernel/smp_64.c
	arch/x86/kernel/cpu/perf_counter.c
	arch/x86/kernel/setup_percpu.c
	drivers/cpufreq/cpufreq_ondemand.c
	mm/percpu.c

Conflicts in the core and arch percpu code are mostly from commit
ed78e1e078dd44249f88b1dd8c76dafb39567161, which replaced many
num_possible_cpus() calls with nr_cpu_ids. As the for-next branch has
moved all the first chunk allocators into mm/percpu.c, the changes
are moved from arch code to mm/percpu.c.

Signed-off-by: Tejun Heo
-
get_vm_area() only accepts VM_* flags, not GFP_*, and according to
the documentation of get_vm_area(), the flag here should be VM_ALLOC.

Signed-off-by: WANG Cong
Acked-by: Tejun Heo
Cc: Ingo Molnar
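In other words (a minimal before/after illustration, not the patched
file itself):

	struct vm_struct *area;

	/* wrong: GFP_* flags belong to page allocation */
	area = get_vm_area(size, GFP_KERNEL);

	/* right: get_vm_area() takes VM_* flags */
	area = get_vm_area(size, VM_ALLOC);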
-
percpu code has been assuming num_possible_cpus() == nr_cpu_ids,
which is incorrect if cpu_possible_map contains holes. This causes
percpu code to access beyond the allocated memories and vmalloc
areas. On a sparc64 machine with cpus 0 and 2 (u60), this triggers
the following warning or fails boot.

WARNING: at /devel/tj/os/work/mm/vmalloc.c:106 vmap_page_range_noflush+0x1f0/0x240()
Modules linked in:
Call Trace:
 [00000000004b17d0] vmap_page_range_noflush+0x1f0/0x240
 [00000000004b1840] map_vm_area+0x20/0x60
 [00000000004b1950] __vmalloc_area_node+0xd0/0x160
 [0000000000593434] deflate_init+0x14/0xe0
 [0000000000583b94] __crypto_alloc_tfm+0xd4/0x1e0
 [00000000005844f0] crypto_alloc_base+0x50/0xa0
 [000000000058b898] alg_test_comp+0x18/0x80
 [000000000058dad4] alg_test+0x54/0x180
 [000000000058af00] cryptomgr_test+0x40/0x60
 [0000000000473098] kthread+0x58/0x80
 [000000000042b590] kernel_thread+0x30/0x60
 [0000000000472fd0] kthreadd+0xf0/0x160
---[ end trace 429b268a213317ba ]---

This patch fixes the generic percpu functions and sparc64
setup_per_cpu_areas() so that they handle a sparse cpu_possible_map
properly.

Please note that on x86, cpu_possible_map doesn't contain holes and
thus num_possible_cpus() == nr_cpu_ids, so this patch doesn't cause
any behavior difference there.

Signed-off-by: Tejun Heo
Acked-by: David S. Miller
Cc: Ingo Molnar
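The distinction at the heart of the bug, sketched (alloc_for() is a
hypothetical helper used purely for illustration):

	/*
	 * With cpu_possible_map = {0, 2}:
	 *   num_possible_cpus() == 2	(number of set bits)
	 *   nr_cpu_ids          == 3	(highest possible cpu + 1)
	 *
	 * Arrays indexed by cpu number must be sized by nr_cpu_ids,
	 * not num_possible_cpus().
	 */
	void **ptrs = kcalloc(nr_cpu_ids, sizeof(*ptrs), GFP_KERNEL);
	unsigned int cpu;

	for_each_possible_cpu(cpu)		/* visits 0 and 2, skips 1 */
		ptrs[cpu] = alloc_for(cpu);	/* cpu may exceed num_possible_cpus() */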
-
nr_cpu_ids depends only on cpu_possible_map, and
setup_per_cpu_areas() already depends on cpu_possible_map and will
use nr_cpu_ids. Initialize nr_cpu_ids before setting up the percpu
areas.

Signed-off-by: Tejun Heo
-
free_irq() can remove an irqaction while the corresponding interrupt
is in progress, but free_irq() sets action->thread to NULL
unconditionally, which might lead to a NULL pointer dereference in
handle_IRQ_event() when the hard interrupt context tries to wake up
the handler thread.

Prevent this by moving the thread stop after synchronize_irq(). There
is no need to set action->thread to NULL either, as the action is
going to be freed anyway.

This fixes a boot crash reported against preempt-rt, which uses the
mainline irq threads code to implement full irq threading.

[ tglx: removed local irqthread variable ]

Signed-off-by: Linus Torvalds
Signed-off-by: Thomas Gleixner
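The fixed teardown order, condensed (a sketch, not the verbatim
kernel/irq/manage.c code):

	/* wait until the interrupt is no longer in progress anywhere */
	synchronize_irq(irq);

	/* only now is it safe to stop the handler thread; no need to
	 * clear action->thread since the action is freed right after */
	if (action->thread)
		kthread_stop(action->thread);

	kfree(action);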
-
…x/kernel/git/tip/linux-2.6-tip
* 'perfcounters-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
perf_counter: Report the cloning task as parent on perf_counter_fork()
perf_counter: Fix an ipi-deadlock
perf: Rework/fix the whole read vs group stuff
perf_counter: Fix swcounter context invariance
perf report: Don't show unresolved DSOs and symbols when -S/-d is used
perf tools: Add a general option to enable raw sample records
perf tools: Add a per tracepoint counter attribute to get raw sample
perf_counter: Provide hw_perf_counter_setup_online() APIs
perf list: Fix large list output by using the pager
perf_counter, x86: Fix/improve apic fallback
perf record: Add missing -C option support for specifying profile cpu
perf tools: Fix dso__new handle() to handle deleted DSOs
perf tools: Fix fallback to cplus_demangle() when bfd_demangle() is not available
perf report: Show the tid too in -D
perf record: Fix .tid and .pid fill-in when synthesizing events
perf_counter, x86: Fix generic cache events on P6-mobile CPUs
perf_counter, x86: Fix lapic printk message
-
…/git/tip/linux-2.6-tip
* 'core-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
futex: Fix handling of bad requeue syscall pairing
futex: Fix compat_futex to be same as futex for REQUEUE_PI
locking, sched: Give waitqueue spinlocks their own lockdep classes
futex: Update futex_q lock_ptr on requeue proxy lock
-
…git/tip/linux-2.6-tip
* 'x86-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
x86: Fix oops in identify_cpu() on CPUs without CPUID
x86: Clear incorrectly forced X86_FEATURE_LAHF_LM flag
x86, mce: therm_throt - change when we print messages
x86: Add reboot quirk for every 5 series MacBook/Pro
-
* 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jlbec/ocfs2: (22 commits)
ocfs2: Fix possible deadlock when extending quota file
ocfs2: keep index within status_map[]
ocfs2: Initialize the cluster we're writing to in a non-sparse extend
ocfs2: Remove redundant BUG_ON in __dlm_queue_ast()
ocfs2/quota: Release lock for error in ocfs2_quota_write.
ocfs2: Define credit counts for quota operations
ocfs2: Remove syncjiff field from quota info
ocfs2: Fix initialization of blockcheck stats
ocfs2: Zero out padding of on disk dquot structure
ocfs2: Initialize blocks allocated to local quota file
ocfs2: Mark buffer uptodate before calling ocfs2_journal_access_dq()
ocfs2: Make global quota files blocksize aligned
ocfs2: Use ocfs2_rec_clusters in ocfs2_adjust_adjacent_records.
ocfs2: Fix deadlock on umount
ocfs2: Add extra credits and access the modified bh in update_edge_lengths.
ocfs2: Fail ocfs2_get_block() immediately when a block needs allocation
ocfs2: Fix error return in ocfs2_write_cluster()
ocfs2: Fix compilation warning for fs/ocfs2/xattr.c
ocfs2: Initialize count in aio_write before generic_write_checks
ocfs2: log the actual return value of ocfs2_file_aio_write()
...
-
* 'for-linus' of git://neil.brown.name/md:
md: allow upper limit for resync/reshape to be set when array is read-only
md/raid5: Properly remove excess drives after shrinking a raid5/6
md/raid5: make sure a reshape restarts at the correct address.
md/raid5: allow new reshape modes to be restarted in the middle.
md: never advance 'events' counter by more than 1.
Remove deadlock potential in md_open
-
* 'sh/for-2.6.31' of git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6:
sh: fix i2c init order on ap325rxa V2
sh: fix i2c init order on Migo-R V2
sh: convert processor device setup functions to arch_initcall()
-
kernel_sendpage() does the proper default case handling for when the
socket doesn't have a native sendpage implementation.

Now, arguably this might be something that we could instead solve by
just specifying that all protocols should do it themselves at the
protocol level, but we really only care about the common protocols.
Does anybody really care about sendpage on something like Appletalk?
Not likely.

Acked-by: David S. Miller
Acked-by: Julien TINNES
Acked-by: Tavis Ormandy
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds
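The fallback being relied on is short; roughly what the net/socket.c
helper of this era looks like (treat as a sketch):

	int kernel_sendpage(struct socket *sock, struct page *page,
			    int offset, size_t size, int flags)
	{
		if (sock->ops->sendpage)
			return sock->ops->sendpage(sock, page, offset,
						   size, flags);

		/* no native implementation: fall back to ordinary send */
		return sock_no_sendpage(sock, page, offset, size, flags);
	}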
13 Aug, 2009
13 commits
-
A bug in (9f498cc: perf_counter: Full task tracing) makes profiling
multi-threaded apps go belly up.

[ output as: (PID:TID):(PPID:PTID) ]

	# ./perf report -D | grep FORK
	0x4b0 [0x18]: PERF_EVENT_FORK: (3237:3237):(3236:3236)
	0xa10 [0x18]: PERF_EVENT_FORK: (3237:3238):(3236:3236)
	0xa70 [0x18]: PERF_EVENT_FORK: (3237:3239):(3236:3236)
	0xad0 [0x18]: PERF_EVENT_FORK: (3237:3240):(3236:3236)
	0xb18 [0x18]: PERF_EVENT_FORK: (3237:3241):(3236:3236)

This shows us that the test (27d028d perf report: Update for the new
FORK/EXIT events) in builtin-report.c:

	/*
	 * A thread clone will have the same PID for both
	 * parent and child.
	 */
	if (thread == parent)
		return 0;

will clearly fail.

The problem is that perf_counter_fork() reports the actual parent,
instead of the cloning thread.

Fixing that (with the below patch) yields:

	# ./perf report -D | grep FORK
	0x4c8 [0x18]: PERF_EVENT_FORK: (1590:1590):(1589:1589)
	0xbd8 [0x18]: PERF_EVENT_FORK: (1590:1591):(1590:1590)
	0xc80 [0x18]: PERF_EVENT_FORK: (1590:1592):(1590:1590)
	0x3338 [0x18]: PERF_EVENT_FORK: (1590:1593):(1590:1590)
	0x66b0 [0x18]: PERF_EVENT_FORK: (1590:1594):(1590:1590)

which both makes more sense and doesn't confuse perf report anymore.

Reported-by: Pekka Enberg
Signed-off-by: Peter Zijlstra
Cc: Arnaldo Carvalho de Melo
Cc: paulus@samba.org
Cc: Anton Blanchard
Cc: Arjan van de Ven
LKML-Reference:
Signed-off-by: Ingo Molnar
-
perf_pending_counter() is called from IRQ context and will call
perf_counter_disable(); however, perf_counter_disable() uses
smp_call_function_single(), which doesn't fancy being used with IRQs
disabled due to IPI deadlocks.

Fix this by making it use the local __perf_counter_disable() call and
teaching the counter_sched_out() code about pending disables as well.

This should cover the case where a counter migrates before the
pending queue gets processed.

Signed-off-by: Peter Zijlstra
Cc: Corey J Ashford
Cc: Paul Mackerras
Cc: stephane eranian
LKML-Reference:
Signed-off-by: Ingo Molnar
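The constraint being worked around, as a generic sketch (the
__disable_* helpers below are hypothetical stand-ins, not the actual
perf_counter functions):

	static void disable_counter(struct perf_counter *counter)
	{
		if (counter->cpu == smp_processor_id()) {
			/* IRQ context / IRQs off: an IPI to ourselves
			 * would never be delivered, so act directly */
			__disable_counter_local(counter);
		} else {
			smp_call_function_single(counter->cpu,
						 __disable_counter_ipi,
						 counter, 1);
		}
	}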
-
Replace PERF_SAMPLE_GROUP with PERF_SAMPLE_READ and introduce
PERF_FORMAT_GROUP to deal with group reads in a more generic way.

This allows you to get group reads out of read() as well.

Signed-off-by: Peter Zijlstra
Cc: Corey J Ashford
Cc: Paul Mackerras
Cc: stephane eranian
LKML-Reference:
Signed-off-by: Ingo Molnar
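With PERF_FORMAT_GROUP, a read() on the group leader returns every
member's count in one record; the layout is roughly as follows
(sketch based on how the ABI was later documented; the struct name is
hypothetical and details may have differed at this point):

	struct group_read_format {
		__u64 nr;		/* counters in the group */
		/* __u64 time_enabled;	if PERF_FORMAT_TOTAL_TIME_ENABLED */
		/* __u64 time_running;	if PERF_FORMAT_TOTAL_TIME_RUNNING */
		struct {
			__u64 value;
			/* __u64 id;	if PERF_FORMAT_ID */
		} cntr[];
	};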
-
perf_swcounter_is_counting() uses a lock, which means we cannot use
swcounters from NMI context or when holding that particular lock;
this is unintended.

The patch below removes the lock. This opens up a race window, but
one no worse than what the swcounters already experience due to the
RCU traversal of the context in perf_swcounter_ctx_event().

This also fixes the hard lockups seen while opening a lockdep
tracepoint counter.

Signed-off-by: Peter Zijlstra
Acked-by: Frederic Weisbecker
Cc: Paul Mackerras
Cc: stephane eranian
Cc: Corey J Ashford
LKML-Reference:
Signed-off-by: Ingo Molnar
-
We're interested in just those symbols/DSOs, so filter out the
unresolved ones.

Signed-off-by: Arnaldo Carvalho de Melo
Cc: Peter Zijlstra
LKML-Reference:
Signed-off-by: Ingo Molnar
-
While we can enable perf sample records per tracepoint counter, we
may also want to enable this option for every tracepoint counter we
open, so that we don't need to add a :record flag to all of them.

Add the -R, --raw-samples option for this purpose.

Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Arnaldo Carvalho de Melo
Cc: Mike Galbraith
LKML-Reference:
Signed-off-by: Ingo Molnar
-
Add a new flag field while opening a tracepoint perf counter:

	-e tracepoint_subsystem:tracepoint_name:flags

This is intended to be generic, although for now it only supports the
r[e[c[o[r[d]]]]] flag:

	./perf record -e workqueue:workqueue_insertion:record
	./perf record -e workqueue:workqueue_insertion:r

Both will have the same effect: enabling the raw samples record for
the given tracepoint counter.

In the future, we may want to support further flags, separated by
commas.

Signed-off-by: Frederic Weisbecker
Cc: Peter Zijlstra
Cc: Arnaldo Carvalho de Melo
Cc: Mike Galbraith
LKML-Reference:
Signed-off-by: Ingo Molnar
-
Provide weak aliases for hw_perf_counter_setup_online(). This is
used by the BTS patches (for v2.6.32), but it interacts with fixes,
so propagate this upstream. (It has no effect as of yet.)

Also export perf_counter_output() to architecture code.

Cc: Peter Zijlstra
Cc: Paul Mackerras
Cc: Frederic Weisbecker
LKML-Reference:
Signed-off-by: Ingo Molnar
-
When /sys/kernel/debug is mounted, the list can be immense, so use
the pager like the other tools do.

Signed-off-by: Arnaldo Carvalho de Melo
Acked-by: Frederic Weisbecker
Cc: Peter Zijlstra
LKML-Reference:
Signed-off-by: Ingo Molnar
-
Convert the AP325RXA board code to register devices at
arch_initcall() time instead of device_initcall(). This fix unbreaks
pcf8563 RTC driver support.

Signed-off-by: Magnus Damm
Signed-off-by: Paul Mundt
-
Convert the Migo-R board code to register devices at arch_initcall()
time instead of __initcall(). This fix unbreaks migor_ts touch
screen driver support.

Signed-off-by: Magnus Damm
Signed-off-by: Paul Mundt
-
Convert the processor platform device setup functions from
__initcall() and sometimes device_initcall() to arch_initcall().

This makes sure that the platform devices are registered a bit
earlier, so the devices are available when drivers register using
initcall levels earlier than device_initcall().

A good example is the platform devices needed by i2c-sh_mobile.c,
which registers a bit earlier using subsys_initcall().

Signed-off-by: Magnus Damm
Signed-off-by: Paul Mundt
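The mechanics of the conversion, sketched with a hypothetical setup
function name (initcall levels run core, postcore, arch, subsys, ...,
device, in that order):

	/* before: runs after subsys_initcall users such as
	 * i2c-sh_mobile.c, so the devices arrive too late */
	__initcall(cpu_devices_setup);	/* same level as device_initcall() */

	/* after: runs before subsys_initcall */
	arch_initcall(cpu_devices_setup);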
-
Normally we only allow the upper limit for a reshape to be decreased
when the array is not performing a sync/recovery/reshape; otherwise
there could be races. But if an array is part-way through a reshape
when it is assembled, the reshape is started immediately, leaving no
window in which to set an upper bound.

If the array is started read-only, the reshape will be suspended
until the array becomes writable, so that provides a window during
which it is perfectly safe to reduce the upper limit of a reshape.

So: allow the upper limit (sync_max) to be reduced even if the
reshape thread is running, as long as the array is still read-only.

Signed-off-by: NeilBrown