27 Jul, 2011
1 commit
-
This allows us to move duplicated code in <asm/atomic.h>
(atomic_inc_not_zero() for now) to <linux/atomic.h>.
Signed-off-by: Arun Sharma
Reviewed-by: Eric Dumazet
Cc: Ingo Molnar
Cc: David Miller
Cc: Eric Dumazet
Acked-by: Mike Frysinger
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
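For reference, the helper the message mentions, in a typical hypothetical use (obj and refcnt are illustrative names):
    /* atomic_inc_not_zero() returns nonzero iff it performed the increment */
    if (!atomic_inc_not_zero(&obj->refcnt))
            return NULL;    /* count already hit zero; object is dying */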
28 Jun, 2011
4 commits
-
The MTRR rendezvous sequence was not implemented using stop_machine() before, as it
gets called both from process context as well as from the cpu online paths
(where the cpu has not yet come online and interrupts are disabled, etc.).
Now that we have the new stop_machine_from_inactive_cpu() API, use it for the
rendezvous during MTRR init of a logical processor that is coming online.
For the rest (runtime MTRR modification, system boot, resume paths), use
stop_machine() to implement the rendezvous sequence. This consolidates and
cleans up the code.
Signed-off-by: Suresh Siddha
Link: http://lkml.kernel.org/r/20110623182057.076997177@sbsiddha-MOBL3.sc.intel.com
Signed-off-by: H. Peter Anvin
-
Currently, mtrr wants stop_machine functionality while a CPU is being
brought up. As stop_machine() requires the calling CPU to be active,
mtrr implements its own stop_machine using stop_one_cpu() on each
online CPU. This not only needlessly duplicates complex logic but also
introduces a possibility of deadlock when it races against the generic
stop_machine().
This patch implements stop_machine_from_inactive_cpu() to serve such
use cases. Its functionality is basically the same as stop_machine(),
except that it is to be called from a CPU which isn't active and it does
not depend on working scheduling on the calling CPU.
This is achieved by using busy loops for synchronization and by
open-coding the stop_cpus queuing and waiting, with direct invocation of
fn() for the local CPU in between.
Signed-off-by: Tejun Heo
Link: http://lkml.kernel.org/r/20110623182056.982526827@sbsiddha-MOBL3.sc.intel.com
Signed-off-by: Suresh Siddha
Cc: Ingo Molnar
Cc: Andrew Morton
Cc: Linus Torvalds
Cc: Peter Zijlstra
Signed-off-by: H. Peter Anvin
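For illustration, a minimal sketch of a caller of the new API; the handler body is hypothetical and only the entry point follows the patch (on x86, the MTRR path passes cpu_callout_mask):
    static int example_rendezvous(void *data)
    {
            /* runs on every online cpu plus the inactive caller,
             * with hardware interrupts disabled */
            return 0;
    }

    /* from a cpu that is coming online and is not yet active: */
    ret = stop_machine_from_inactive_cpu(example_rendezvous, NULL,
                                         cpu_callout_mask);
-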
Refactor the queuing part of the stop cpus work from __stop_cpus() into
queue_stop_cpus_work().
The reorganization is to help future improvements to stop_machine()
and doesn't introduce any behavior difference.
Signed-off-by: Tejun Heo
Link: http://lkml.kernel.org/r/20110623182056.897818337@sbsiddha-MOBL3.sc.intel.com
Signed-off-by: Suresh Siddha
Cc: Ingo Molnar
Cc: Andrew Morton
Cc: Linus Torvalds
Cc: Peter Zijlstra
Signed-off-by: H. Peter Anvin
-
The MTRR rendezvous sequence using stop_one_cpu_nowait() can potentially
happen in parallel with another system-wide rendezvous using
stop_machine(). This can lead to deadlock: the order in which
the works are queued can be different on different cpus, so some cpus
will be running the first rendezvous handler and others the second one,
with each set waiting for the other set to join the system-wide
rendezvous.
The MTRR rendezvous sequence is not implemented using stop_machine() as it
gets called both from process context as well as from the cpu online paths
(where the cpu has not yet come online and interrupts are disabled, etc.),
and stop_machine() works only with online cpus.
For now, take the stop_machine mutex in the MTRR rendezvous sequence that
gets called from an online cpu (here we are in process context
and can potentially sleep while taking the mutex). The MTRR rendezvous
that gets triggered during cpu online doesn't need to take this stop_machine
lock, as stop_machine() already ensures that no cpu hotplug is
going on in parallel by doing get_online_cpus().
TBD: Pursue a cleaner solution of extending the stop_machine()
infrastructure to handle the case where the calling cpu is
still not online, and use this for the MTRR rendezvous sequence.
Fixes: https://bugzilla.novell.com/show_bug.cgi?id=672008
Reported-by: Vadim Kotelnikov
Signed-off-by: Suresh Siddha
Link: http://lkml.kernel.org/r/20110623182056.807230326@sbsiddha-MOBL3.sc.intel.com
Cc: stable@kernel.org # 2.6.35+, backport a week or two after this gets more testing in mainline
Signed-off-by: H. Peter Anvin
23 Mar, 2011
1 commit
-
The ksoftirqd, kworker, migration, and pktgend kthreads can be created with
kthread_create_on_node() to get proper NUMA affinities for their stack and
task_struct.
Signed-off-by: Eric Dumazet
Acked-by: David S. Miller
Reviewed-by: Andi Kleen
Acked-by: Rusty Russell
Acked-by: Tejun Heo
Cc: Tony Luck
Cc: Fenghua Yu
Cc: David Howells
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
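A hedged sketch of the API in use (worker_fn and the naming are hypothetical; the signature and the create/bind/wake pattern are standard):
    struct task_struct *t;

    /* allocate stack and task_struct on the node the thread will run on */
    t = kthread_create_on_node(worker_fn, NULL, cpu_to_node(cpu),
                               "example/%d", cpu);
    if (!IS_ERR(t)) {
            kthread_bind(t, cpu);
            wake_up_process(t);
    }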
27 Oct, 2010
2 commits
-
In commit e6bde73b07edeb703d4c89c1daabc09c303de11f ("cpu-hotplug: return
better errno on cpu hotplug failure"), the cpu notifier can return an
encapsulated errno value.
This converts the cpu notifier to return an encapsulated errno value for
stop_machine().
Signed-off-by: Akinobu Mita
Cc: Rusty Russell
Cc: Tejun Heo
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
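A minimal sketch of the conversion pattern (the notifier body and example_setup() are hypothetical; notifier_from_errno() is the real helper that encapsulates the errno):
    static int example_cpu_callback(struct notifier_block *nfb,
                                    unsigned long action, void *hcpu)
    {
            int err = example_setup(hcpu);  /* hypothetical setup step */

            if (err)
                    return notifier_from_errno(err); /* instead of NOTIFY_BAD */
            return NOTIFY_OK;
    }
-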
kernel/stop_machine.c: In function `cpu_stopper_thread':
kernel/stop_machine.c:265: warning: unused variable `ksym_buf'
ksym_buf[] is unused if WARN_ON() is a no-op.
Signed-off-by: Rakib Mullick
Cc: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
19 Oct, 2010
1 commit
-
In order to separate the stop/migrate work thread from the SCHED_FIFO
implementation, create a special scheduling class for it that is of higher
priority than SCHED_FIFO itself.
This currently solves a problem where cpu-hotplug consumes so much cpu-time
that the SCHED_FIFO class gets throttled, but has the bandwidth replenishment
timer pending on the now dead cpu.
It is also required for when we add the planned deadline scheduling class above
SCHED_FIFO, as the stop/migrate thread still needs to transcend those tasks.
Tested-by: Heiko Carstens
Signed-off-by: Peter Zijlstra
LKML-Reference:
Signed-off-by: Ingo Molnar
10 Aug, 2010
1 commit
-
Reorder elements in structure cpu_stopper to remove alignment padding on
64-bit builds; this shrinks its size from 40 to 32 bytes, saving 8 bytes
per cpu.
Signed-off-by: Richard Kennedy
Acked-by: Tejun Heo
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Rusty Russell
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
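A sketch of the idea (the exact field layout here is an approximation, not the patch itself): on 64-bit, a 4-byte lock followed directly by 8-byte-aligned members leaves a hole, and a trailing bool pads the struct to the next 8-byte multiple; grouping the small members removes both holes.
    struct cpu_stopper_before {             /* illustrative layout */
            spinlock_t lock;                /* 4 bytes + 4 padding */
            struct list_head works;         /* 16 bytes */
            struct task_struct *thread;     /* 8 bytes */
            bool enabled;                   /* 1 byte + 7 padding => 40 total */
    };

    struct cpu_stopper_after {
            spinlock_t lock;                /* 4 bytes */
            bool enabled;                   /* 1 byte + 3 padding */
            struct list_head works;         /* 16 bytes */
            struct task_struct *thread;     /* 8 bytes => 32 total */
    };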
31 May, 2010
1 commit
-
Problem: In a stress test where some heavy tests were running along with
regular CPU offlining and onlining, a hang was observed. The system seems
to be hung at a point where migration_call() tries to kill the
migration_thread of the dying CPU, which just got moved to the current
CPU. This migration thread does not get a chance to run (and die) since
rt_throttled is set to 1 on current, and it doesn't get cleared, as the
hrtimer which is supposed to reset the rt bandwidth
(sched_rt_period_timer) is tied to the CPU which we just marked dead!
Solution: This patch pushes the killing of the migration thread to the
"CPU_POST_DEAD" event. By then all the timers (including
sched_rt_period_timer) should have been migrated (along with other
callbacks).
Signed-off-by: Amit Arora
Signed-off-by: Gautham R Shenoy
Acked-by: Tejun Heo
Signed-off-by: Peter Zijlstra
Cc: Thomas Gleixner
LKML-Reference:
Signed-off-by: Ingo Molnar
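A hedged sketch of the notifier shape implied above (the handler body and helper are hypothetical; CPU_POST_DEAD is the real hotplug event, delivered after the dead cpu's timers have been migrated):
    static int example_migration_call(struct notifier_block *nfb,
                                      unsigned long action, void *hcpu)
    {
            switch (action & ~CPU_TASKS_FROZEN) {
            case CPU_POST_DEAD:
                    /* safe point: sched_rt_period_timer has migrated away */
                    example_kill_migration_thread((long)hcpu);
                    break;
            }
            return NOTIFY_OK;
    }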
18 May, 2010
1 commit
-
This addresses the following compiler warning:
kernel/stop_machine.c: In function 'cpu_stop_cpu_callback':
kernel/stop_machine.c:297: warning: unused variable 'work'
Cc: Tejun Heo
Cc: Peter Zijlstra
LKML-Reference:
Signed-off-by: Ingo Molnar
08 May, 2010
1 commit
-
When !CONFIG_SMP, the cpu_stop functions weren't defined at all, which
could lead to build failures if UP code uses the cpu_stop facility. Add
a dummy cpu_stop implementation for UP. The waiting variants execute
the work function directly with preemption disabled, and
stop_one_cpu_nowait() schedules a workqueue work.
The Makefile and the ifdefs around the stop_machine implementation are updated
to accommodate the CONFIG_SMP && !CONFIG_STOP_MACHINE case.
Signed-off-by: Tejun Heo
Reported-by: Ingo Molnar
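A minimal sketch of what such a UP waiting variant can look like (close to, but not necessarily identical to, the patch):
    /* UP: execute the function directly with preemption disabled */
    static inline int stop_one_cpu(unsigned int cpu, cpu_stop_fn_t fn, void *arg)
    {
            int ret = -ENOENT;

            preempt_disable();
            if (cpu == smp_processor_id())
                    ret = fn(arg);
            preempt_enable();
            return ret;
    }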
07 May, 2010
3 commits
-
Currently migration_thread is serving three purposes - migration
pusher, context to execute active_load_balance() and forced context
switcher for expedited RCU synchronize_sched. All three roles are
hardcoded into migration_thread() and determining which job is
scheduled is slightly messy.
This patch kills migration_thread and replaces all three uses with
cpu_stop. The three different roles of migration_thread() are
split into three separate cpu_stop callbacks -
migration_cpu_stop(), active_load_balance_cpu_stop() and
synchronize_sched_expedited_cpu_stop() - and each use case now simply
asks cpu_stop to execute the callback as necessary.
synchronize_sched_expedited() was implemented with private
preallocated resources and custom multi-cpu queueing and waiting
logic, both of which are provided by cpu_stop.
synchronize_sched_expedited_count is made atomic and all other shared
resources along with the mutex are dropped.
synchronize_sched_expedited() also implemented a check to detect cases
where not all the callbacks got executed on their assigned cpus and
fall back to synchronize_sched(). If called with cpu hotplug blocked,
cpu_stop already guarantees that, and the condition cannot happen;
otherwise, stop_machine() would break. However, this patch preserves
the paranoid check using a cpumask to record on which cpus the stopper
ran, so that it can serve as a bisection point if something actually
goes wrong there.
Because the internal execution state is no longer visible,
rcu_expedited_torture_stats() is removed.
This patch also renames the cpu_stop threads from "stopper/%d" to
"migration/%d". The names of these threads ultimately don't matter
and there's no reason to make unnecessary userland-visible changes.
With this patch applied, stop_machine() and sched now share the same
resources. stop_machine() is faster without wasting any resources and
the sched migration users are much cleaner.
Signed-off-by: Tejun Heo
Acked-by: Peter Zijlstra
Cc: Ingo Molnar
Cc: Dipankar Sarma
Cc: Josh Triplett
Cc: Paul E. McKenney
Cc: Oleg Nesterov
Cc: Dimitri Sivanich
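For flavor, a hedged fragment of how a migration user now drives cpu_stop (the surrounding variables are illustrative; stop_one_cpu() is the real entry point):
    struct migration_arg arg = { .task = p, .dest_cpu = dest_cpu };

    /* runs migration_cpu_stop() on the task's cpu at maximum priority */
    stop_one_cpu(task_cpu(p), migration_cpu_stop, &arg);
-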
Reimplement stop_machine using cpu_stop. As cpu stoppers are
guaranteed to be available for all online cpus,
stop_machine_create/destroy() are no longer necessary and are removed.
With resource management and synchronization handled by cpu_stop, the
new implementation is much simpler. Asking cpu_stop to execute
the stop_cpu() state machine on all online cpus with cpu hotplug
disabled is enough.
stop_machine itself doesn't need to manage any global resources
anymore, so all per-instance information is rolled into struct
stop_machine_data, and the mutex and all static data variables are
removed.
The previous implementation created and destroyed RT workqueues as
necessary, which made stop_machine() calls highly expensive on very
large machines. According to Dimitri Sivanich, preventing the dynamic
creation/destruction makes booting more than twice as fast on very
large machines. cpu_stop resources are preallocated for all online
cpus and should have the same effect.
Signed-off-by: Tejun Heo
Acked-by: Rusty Russell
Acked-by: Peter Zijlstra
Cc: Oleg Nesterov
Cc: Dimitri Sivanich
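In outline, the new implementation reduces to roughly this shape (a simplified sketch; details elided):
    int stop_machine(int (*fn)(void *), void *data, const struct cpumask *cpus)
    {
            int ret;

            /* block cpu hotplug, then run the stop_cpu() state machine
             * on every online cpu */
            get_online_cpus();
            ret = __stop_machine(fn, data, cpus);
            put_online_cpus();
            return ret;
    }
-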
Implement a simplistic per-cpu maximum priority cpu monopolization
mechanism. A non-sleeping callback can be scheduled to run on one or
multiple cpus with maximum priority, monopolizing those cpus. This is
primarily to replace and unify RT workqueue usage in stop_machine and
the scheduler migration_thread, which currently is serving multiple
purposes.
Four functions are provided - stop_one_cpu(), stop_one_cpu_nowait(),
stop_cpus() and try_stop_cpus() (see the sketch below).
This is to allow clean sharing of resources among stop_cpu and all the
migration thread users. One stopper thread per cpu is created, which
is currently named "stopper/CPU". This will eventually replace the
migration thread and take on its name.
* This facility was originally named cpuhog and lived in separate
files, but Peter Zijlstra nacked the name and thus it got renamed to
cpu_stop and moved into stop_machine.c.
* Better reporting of preemption leak as per Peter's suggestion.
Signed-off-by: Tejun Heo
Acked-by: Peter Zijlstra
Cc: Oleg Nesterov
Cc: Dimitri Sivanich
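The four entry points, as a hedged sketch of their signatures (consistent with the facility described above; cpu_stop_fn_t is the callback type):
    typedef int (*cpu_stop_fn_t)(void *arg);

    int  stop_one_cpu(unsigned int cpu, cpu_stop_fn_t fn, void *arg);
    void stop_one_cpu_nowait(unsigned int cpu, cpu_stop_fn_t fn, void *arg,
                             struct cpu_stop_work *work_buf);
    int  stop_cpus(const struct cpumask *cpumask, cpu_stop_fn_t fn, void *arg);
    int  try_stop_cpus(const struct cpumask *cpumask, cpu_stop_fn_t fn, void *arg);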
17 Feb, 2010
1 commit
-
Add __percpu sparse annotations to core subsystems.
These annotations make sparse consider percpu variables to be
in a different address space and warn if they are accessed without going
through the percpu accessors. This patch doesn't affect normal builds.
Signed-off-by: Tejun Heo
Reviewed-by: Christoph Lameter
Acked-by: Paul E. McKenney
Cc: Jens Axboe
Cc: linux-mm@kvack.org
Cc: Rusty Russell
Cc: Dipankar Sarma
Cc: Peter Zijlstra
Cc: Andrew Morton
Cc: Eric Biederman
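For example, a hedged sketch of what the annotation catches (struct foo is hypothetical; the accessors are real):
    struct foo __percpu *p = alloc_percpu(struct foo);

    struct foo *a = p;                    /* sparse warns: address space mismatch */
    struct foo *b = per_cpu_ptr(p, cpu);  /* fine: goes through the accessor */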
30 Mar, 2009
1 commit
-
Impact: cleanup
struct cpumask is nicer, and we use it to mark where we've made code
safe for CONFIG_CPUMASK_OFFSTACK=y.
Signed-off-by: Rusty Russell
20 Feb, 2009
1 commit
-
Impact: cleanup
There are two allocated per-cpu accessor macros with almost identical
spelling. The original and far more popular is per_cpu_ptr (44
files), so change over the other 4 files.
tj: kill percpu_ptr() and update UP too
Signed-off-by: Rusty Russell
Cc: mingo@redhat.com
Cc: lenb@kernel.org
Cc: cpufreq@vger.kernel.org
Signed-off-by: Tejun Heo
05 Jan, 2009
1 commit
-
Introduce stop_machine_create/destroy. With this interface, subsystems
that need a non-failing stop_machine environment can create the
stop_machine threads before actually calling stop_machine.
When the threads aren't needed anymore they can be killed with
stop_machine_destroy again.
When stop_machine gets called and the threads aren't present, they
will be created and destroyed automatically. This restores the old
behaviour of stop_machine.
This patch also converts cpu hotplug to the new interface, since it
is special: cpu_down calls __stop_machine instead of stop_machine.
However, the kstop threads will only be created when stop_machine
gets called.
Changing the code so that the threads would be created automatically
on __stop_machine is currently not possible: when __stop_machine gets
called we hold cpu_add_remove_lock, which is the same lock that
create_rt_workqueue would take. So the workqueue needs to be created
before the cpu hotplug code takes cpu_add_remove_lock.
Signed-off-by: Heiko Carstens
Signed-off-by: Rusty Russell
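A hedged sketch of the intended usage pattern (fn and data are the caller's own callback and argument):
    /* pre-create the threads so the later call cannot fail on allocation */
    if (stop_machine_create())
            return -ENOMEM;

    ret = stop_machine(fn, data, NULL);

    stop_machine_destroy();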
01 Jan, 2009
1 commit
-
Impact: Reduce stack usage, use new cpumask API.
Mainly changing cpumask_t to 'struct cpumask' and similar simple API
conversion. Two conversions worth mentioning:
1) we use cpumask_any_but() to avoid a temporary in kernel/softlockup.c,
2) use cpumask_var_t in taskstats_user_cmd().
Signed-off-by: Rusty Russell
Signed-off-by: Mike Travis
Cc: Balbir Singh
Cc: Ingo Molnar
17 Nov, 2008
1 commit
-
Bug #11989: Suspend failure on NForce4-based boards due to changes in
stop_machine.
We should not access active.fnret outside the lock; in theory the next
stop_machine could overwrite it.
Signed-off-by: Rusty Russell
Tested-by: "Rafael J. Wysocki"
Signed-off-by: Linus Torvalds
26 Oct, 2008
1 commit
-
This reverts commit a802dd0eb5fc97a50cf1abb1f788a8f6cc5db635 by moving
the call to init_workqueues() back where it belongs - after SMP has been
initialized.
It also moves stop_machine_init() - which needs workqueues - to a later
phase using a core_initcall() instead of an early_initcall(). That should
satisfy all ordering requirements, and was apparently the reason why
init_workqueues() was moved to be too early.
Cc: Heiko Carstens
Cc: Rusty Russell
Signed-off-by: Linus Torvalds
22 Oct, 2008
2 commits
-
Using |= for updating a value which might be updated on several cpus
concurrently will not always work, since we need to make sure that the
update happens atomically.
To fix this, just use a plain write if the called function returns an error
code on a cpu. We end up writing the error code of an arbitrary cpu
if multiple ones fail, but that should be sufficient.
Signed-off-by: Heiko Carstens
Signed-off-by: Rusty Russell
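A minimal sketch of the difference (names follow the stop_machine code of the time, but the snippet is illustrative):
    /* before: |= is a racy non-atomic read-modify-write across cpus */
    active->fnret |= fn(data);

    /* after: a plain store; the error code of an arbitrary failing cpu wins */
    ret = fn(data);
    if (ret)
            active->fnret = ret;
-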
Convert stop_machine to a workqueue-based approach. Instead of using kernel
threads for stop_machine, we now use an rt workqueue to synchronize all
cpus.
This has the advantage that all needed per-cpu threads are already created
when stop_machine gets called, and therefore a call to stop_machine won't
fail anymore. This is needed for s390, which needs a mechanism to synchronize
all cpus without allocating any memory.
As Rusty pointed out, free_module() needs a non-failing stop_machine interface
as well.
As a side effect, the stop_machine code gets simplified.
Signed-off-by: Heiko Carstens
Signed-off-by: Rusty Russell
12 Aug, 2008
1 commit
-
Signed-off-by: Li Zefan
Signed-off-by: Rusty Russell
28 Jul, 2008
3 commits
-
Instead of a "cpu" arg with magic values NR_CPUS (any cpu) and ~0 (all
cpus), pass a cpumask_t. Allow NULL for the common case (where we
don't care which CPU the function is run on): temporary cpumask_t's
are usually considered bad for stack space.This deprecates stop_machine_run, to be removed soon when all the
callers are dead.Signed-off-by: Rusty Russell
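A hedged sketch of the resulting interface (consistent with the description above):
    /* cpus == NULL: run fn on any one cpu; a mask restricts where fn runs */
    int stop_machine(int (*fn)(void *), void *data, const cpumask_t *cpus);

    ret = stop_machine(fn, data, NULL);   /* the common case */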
-
stop_machine creates a kthread which creates kernel threads. We can
create those threads directly and simplify things a little. Some care
must be taken with CPU hotunplug, which has special needs, but that code
seems more robust than it was in the past.
Signed-off-by: Rusty Russell
Acked-by: Christian Borntraeger
-
Allow stop_machine_run() to call a function on all cpus. Calling
stop_machine_run() with 'ALL_CPUS' invokes this new behavior.
stop_machine_run() proceeds as normal until the calling cpu has
invoked 'fn'. Then, we tell all the other cpus to call 'fn'.
Signed-off-by: Jason Baron
Signed-off-by: Mathieu Desnoyers
Signed-off-by: Rusty Russell
CC: Adrian Bunk
CC: Andi Kleen
CC: Alexey Dobriyan
CC: Christoph Hellwig
CC: mingo@elte.hu
CC: akpm@osdl.org
19 Jul, 2008
1 commit
-
* This patch replaces the dangerous lvalue version of cpumask_of_cpu
with new cpumask_of_cpu_ptr macros. These are patterned after the
node_to_cpumask_ptr macros.
In general terms, if there is a cpumask_of_cpu_map[] then a pointer to
the cpumask_of_cpu_map[cpu] entry is used. The cpumask_of_cpu_map
is provided when there is a large NR_CPUS count, greatly reducing
the amount of code generated and stack space used for
cpumask_of_cpu(). The pointer to the cpumask_t value is needed for
calling set_cpus_allowed_ptr() to reduce the amount of stack space
needed to pass the cpumask_t value.
If there isn't a cpumask_of_cpu_map[], then a temporary variable is
declared and filled in with the value from cpumask_of_cpu(cpu), as well as
a pointer variable pointing to this temporary variable. Afterwards,
the pointer is used to reference the cpumask value. The compiler
will optimize out the extra dereference through the pointer as well
as the stack space used for the pointer, resulting in identical code.
A good example of the orthogonal usages is in net/sunrpc/svc.c:
    case SVC_POOL_PERCPU:
    {
            unsigned int cpu = m->pool_to[pidx];
            cpumask_of_cpu_ptr(cpumask, cpu);

            *oldmask = current->cpus_allowed;
            set_cpus_allowed_ptr(current, cpumask);
            return 1;
    }
    case SVC_POOL_PERNODE:
    {
            unsigned int node = m->pool_to[pidx];
            node_to_cpumask_ptr(nodecpumask, node);

            *oldmask = current->cpus_allowed;
            set_cpus_allowed_ptr(current, nodecpumask);
            return 1;
    }
Signed-off-by: Mike Travis
Signed-off-by: Ingo Molnar
24 Jun, 2008
1 commit
-
Hidehiro Kawai noticed that sched_setscheduler() can fail in
stop_machine: it calls sched_setscheduler() from insmod, which can
have CAP_SYS_MODULE without CAP_SYS_NICE.
Two cases could have failed, so they are changed to sched_setscheduler_nocheck():
kernel/softirq.c:cpu_callback()
- CPU hotplug callback
kernel/stop_machine.c:__stop_machine_run()
- Called from various places, including modprobe()
Signed-off-by: Rusty Russell
Cc: Jeremy Fitzhardinge
Cc: Hidehiro Kawai
Cc: Andrew Morton
Cc: linux-mm@kvack.org
Cc: sugita
Cc: Satoshi OSHIMA
Signed-off-by: Ingo Molnar
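As a sketch of the difference (sched_setscheduler_nocheck() is the real API added here; it skips the capability checks that are inappropriate for kernel-internal callers):
    struct sched_param param = { .sched_priority = MAX_RT_PRIO - 1 };

    /* checks the caller's capabilities - can fail from insmod */
    sched_setscheduler(p, SCHED_FIFO, &param);

    /* kernel-internal: no capability check */
    sched_setscheduler_nocheck(p, SCHED_FIFO, &param);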
23 May, 2008
1 commit
-
On kvm I have seen some rare hangs in stop_machine when I used more guest
cpus than host cpus. E.g. 32 guest cpus on 1 host cpu triggered the
hang quite often. I could also reproduce the problem on a 4-way z/VM host with
a 64-way guest.
It turned out that the guest was consuming all available cpus, mostly for
spinning on scheduler locks like rq->lock. This is expected, as the threads are
calling yield all the time.
The problem is now that the host scheduling decisions together with the guest
scheduling decisions and spinlocks not being fair managed to create an
interesting scenario similar to a live lock. (Sometimes the hang resolved
itself after some minutes.)
Changing stop_machine to yield the cpu to the hypervisor when yielding inside
the guest fixed the problem for me. While I am not completely happy with this
patch, I think it causes no harm and it really improves the situation for me.
I used cpu_relax for yielding to the hypervisor; does that work on all
architectures?
p.s.: If you want to reproduce the problem, cpu hotplug and kprobes use
stop_machine_run and both triggered the problem after some retries.
Signed-off-by: Christian Borntraeger
CC: Ingo Molnar
Signed-off-by: Rusty Russell
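A sketch of the shape of the change implied by the message (the loop is approximated, not quoted from the patch):
    /* each stopmachine thread spins waiting for the state to advance;
     * cpu_relax() gives a virtualized cpu back to the hypervisor */
    while (smdata->state == curstate)
            cpu_relax();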
22 Apr, 2008
3 commits
-
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/juhl/trivial: (24 commits)
DOC: A couple corrections and clarifications in USB doc.
Generate a slightly more informative error msg for bad HZ
fix typo "is" -> "if" in Makefile
ext*: spelling fix prefered -> preferred
DOCUMENTATION: Use newer DEFINE_SPINLOCK macro in docs.
KEYS: Fix the comment to match the file name in rxrpc-type.h.
RAID: remove trailing space from printk line
DMA engine: typo fixes
Remove unused MAX_NODES_SHIFT
MAINTAINERS: Clarify access to OCFS2 development mailing list.
V4L: Storage class should be before const qualifier (sn9c102)
V4L: Storage class should be before const qualifier
sonypi: Storage class should be before const qualifier
intel_menlow: Storage class should be before const qualifier
DVB: Storage class should be before const qualifier
arm: Storage class should be before const qualifier
ALSA: Storage class should be before const qualifier
acpi: Storage class should be before const qualifier
firmware_sample_driver.c: fix coding style
MAINTAINERS: Add ati_remote2 driver
...
Fixed up trivial conflicts in firmware_sample_driver.c
-
* 'semaphore' of git://git.kernel.org/pub/scm/linux/kernel/git/willy/misc:
Deprecate the asm/semaphore.h files in feature-removal-schedule.
Convert asm/semaphore.h users to linux/semaphore.h
security: Remove unnecessary inclusions of asm/semaphore.h
lib: Remove unnecessary inclusions of asm/semaphore.h
kernel: Remove unnecessary inclusions of asm/semaphore.h
include: Remove unnecessary inclusions of asm/semaphore.h
fs: Remove unnecessary inclusions of asm/semaphore.h
drivers: Remove unnecessary inclusions of asm/semaphore.h
net: Remove unnecessary inclusions of asm/semaphore.h
arch: Remove unnecessary inclusions of asm/semaphore.h
-
These are small cleanups all over the tree.
Trivial style and comment changes to
fs/select.c, kernel/signal.c, kernel/stop_machine.c & mm/pdflush.c
Signed-off-by: Pavel Machek
Signed-off-by: Jesper Juhl
20 Apr, 2008
1 commit
-
* Use the new set_cpus_allowed_ptr() function added by a previous patch,
which instead of passing the "newly allowed cpus" cpumask_t arg
by value, passes it by pointer:
-int set_cpus_allowed(struct task_struct *p, cpumask_t new_mask)
+int set_cpus_allowed_ptr(struct task_struct *p, const cpumask_t *new_mask)
* Modify CPU_MASK_ALL
Depends on:
[sched-devel]: sched: add new set_cpus_allowed_ptr function
Signed-off-by: Mike Travis
Signed-off-by: Ingo Molnar
19 Apr, 2008
1 commit
-
None of these files use any of the functionality promised by
asm/semaphore.h.
Signed-off-by: Matthew Wilcox
07 Feb, 2008
1 commit
-
[akpm@linux-foundation.org: cleanup]
Signed-off-by: Daniel Walker
Acked-by: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
26 Jan, 2008
1 commit
-
Replace all lock_cpu_hotplug/unlock_cpu_hotplug uses in the kernel with
get_online_cpus and put_online_cpus instead, as this highlights the
refcount semantics of these operations.
The new API guarantees protection against the cpu-hotplug operation, but
it doesn't guarantee serialized access to any of the local data
structures. Hence the changes need to be reviewed.
In the case of pseries_add_processor/pseries_remove_processor, use
cpu_maps_update_begin()/cpu_maps_update_done(), as we're modifying the
cpu_present_map there.
Signed-off-by: Gautham R Shenoy
Signed-off-by: Ingo Molnar
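A hedged sketch of the usage pattern the new names make explicit (the critical-section body is illustrative):
    get_online_cpus();           /* takes a refcount: hotplug is held off */
    for_each_online_cpu(cpu)
            do_something(cpu);   /* hypothetical per-cpu work */
    put_online_cpus();           /* drops the refcount */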
17 Jul, 2007
1 commit
-
stop_machine_run() does its work via the "kstopmachine" thread, which has max
priority. However, that thread only gets such priority after being woken up.
Therefore, in the following case ...
- "kstopmachine" tries to run on CPU1
- there is a real-time process which doesn't voluntarily relinquish CPU time
on CPU1
... "kstopmachine" can't start to run, and the CPU on which
stop_machine_run() is running hangs up. To fix this problem, call
sched_setscheduler() before waking up that thread.
Signed-off-by: Satoru Takeuchi
Cc: Rusty Russell
Cc: Ingo Molnar
Cc: Oleg Nesterov
Cc: Ashok Raj
Cc: Gautham R Shenoy
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
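A minimal sketch of the fix's shape (names approximate the stop_machine code of the era):
    struct sched_param param = { .sched_priority = MAX_RT_PRIO - 1 };

    p = kthread_create(do_stop, &smdata, "kstopmachine");
    if (!IS_ERR(p)) {
            /* give the thread RT priority *before* it is woken, so a
             * CPU-bound RT task on the target cpu cannot starve it */
            sched_setscheduler(p, SCHED_FIFO, &param);
            kthread_bind(p, cpu);
            wake_up_process(p);
    }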