28 May, 2011
1 commit
-
The rule is, we have to update tsk->rt.nr_cpus_allowed if we change
tsk->cpus_allowed. Otherwise the RT scheduler may get confused.
Signed-off-by: KOSAKI Motohiro
Cc: Oleg Nesterov
Signed-off-by: Peter Zijlstra
Link: http://lkml.kernel.org/r/4DD4B3FA.5060901@jp.fujitsu.com
Signed-off-by: Ingo Molnar
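For illustration, a minimal sketch of the rule above; the helper name is hypothetical, while the tsk->cpus_allowed and tsk->rt.nr_cpus_allowed fields are the ones named in the commit:
    /* Hypothetical helper: whenever the affinity mask changes, the RT
     * scheduler's cached CPU count must be updated alongside it. */
    static void update_task_cpus_allowed(struct task_struct *tsk,
                                         const struct cpumask *new_mask)
    {
            cpumask_copy(&tsk->cpus_allowed, new_mask);
            tsk->rt.nr_cpus_allowed = cpumask_weight(new_mask);
    }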
31 Mar, 2011
1 commit
-
Fixes generated by 'codespell' and manually reviewed.
Signed-off-by: Lucas De Marchi
23 Mar, 2011
1 commit
-
All kthreads being created from a single helper task, they all use memory
from a single node for their kernel stack and task struct.
This patch suite creates kthread_create_on_node(), adding a 'node'
parameter to the parameters already used by kthread_create().
This parameter serves to allocate memory for the new kthread on its
memory node if possible.
Signed-off-by: Eric Dumazet
Acked-by: David S. Miller
Reviewed-by: Andi Kleen
Acked-by: Rusty Russell
Cc: Tejun Heo
Cc: Tony Luck
Cc: Fenghua Yu
Cc: David Howells
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
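A hedged usage sketch; my_thread_fn, my_data, and the name format are placeholders, and cpu_to_node() picks the memory node the worker will service:
    struct task_struct *tsk;

    /* Allocate the kthread's stack and task_struct on the node it
     * will service, when possible. */
    tsk = kthread_create_on_node(my_thread_fn, my_data,
                                 cpu_to_node(cpu), "my_worker/%d", cpu);
    if (!IS_ERR(tsk))
            wake_up_process(tsk);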
07 Jan, 2011
1 commit
-
Function-scope statics are discouraged because they are
easily overlooked and can cause subtle bugs/races due to
their global (non-SMP safe) nature.
Linus noticed that we did this for sched_param; at a minimum, make them const.
Suggested-by: Linus Torvalds
Signed-off-by: Peter Zijlstra
LKML-Reference: Message-ID:
Signed-off-by: Ingo Molnar
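The resulting idiom, sketched for an arbitrary RT priority (the value is illustrative):
    /* static + const: one read-only copy, no mutable global state. */
    static const struct sched_param param = { .sched_priority = 1 };

    sched_setscheduler(p, SCHED_FIFO, &param);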
05 Jan, 2011
1 commit
-
Merge reason: Merge the final .37 tree.
Signed-off-by: Ingo Molnar
22 Dec, 2010
1 commit
-
spinlock in kthread_worker and wait_queue_head in kthread_work both
should be lockdep sensible, so change the interface to make it
suitable for CONFIG_LOCKDEP.
tj: comment update
Reported-by: Nicolas
Signed-off-by: Yong Zhang
Signed-off-by: Andy Walls
Tested-by: Andy Walls
Cc: Tejun Heo
Cc: Andrew Morton
Signed-off-by: Tejun Heo
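With the macro-based interface, each initialization site gets its own static lockdep class key; a sketch using the interface names of this era (later renamed):
    /* Static definition: */
    DEFINE_KTHREAD_WORKER(my_worker);

    /* Or dynamic initialization: */
    struct kthread_worker worker;
    init_kthread_worker(&worker);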
23 Oct, 2010
1 commit
-
Andrew Morton pointed out almost all sched_setscheduler() callers are
using fixed parameters and can be converted to static. It reduces runtime
memory use a little.
Signed-off-by: KOSAKI Motohiro
Reported-by: Andrew Morton
Acked-by: James Morris
Cc: Ingo Molnar
Cc: Steven Rostedt
Signed-off-by: Andrew Morton
Signed-off-by: Thomas Gleixner
Signed-off-by: Ingo Molnar
29 Jun, 2010
2 commits
-
Implement kthread_data() which takes @task pointing to a kthread and
returns @data specified when creating the kthread. The caller is
responsible for ensuring the validity of @task when calling this
function.
Signed-off-by: Tejun Heo
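A sketch of the lookup; struct my_ctx and the thread function are hypothetical:
    static int my_thread_fn(void *data)
    {
            /* ... services requests described by 'data' ... */
            return 0;
    }

    struct task_struct *tsk = kthread_run(my_thread_fn, &my_ctx, "my_thread");

    /* Elsewhere, with the caller guaranteeing 'tsk' is still a live
     * kthread: */
    struct my_ctx *ctx = kthread_data(tsk);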
-
Implement simple work processor for kthread. This is to ease using
kthread. Single thread workqueue used to be used for things like this
but workqueue won't guarantee fixed kthread association anymore to
enable worker sharing.
This can be used in cases where a specific kthread association is
necessary, for example, when it should have RT priority or be assigned
to a certain cgroup.
Signed-off-by: Tejun Heo
Cc: Andrew Morton
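A usage sketch with the helper names of this era (later renamed); my_work_fn is a placeholder:
    static void my_work_fn(struct kthread_work *work)
    {
            /* runs in the dedicated kthread's context */
    }

    struct kthread_worker worker;
    struct kthread_work work;
    struct task_struct *thr;

    init_kthread_worker(&worker);
    thr = kthread_run(kthread_worker_fn, &worker, "my_worker");
    /* 'thr' keeps a fixed identity, so RT priority or a cgroup can
     * be applied to it directly. */
    init_kthread_work(&work, my_work_fn);
    queue_kthread_work(&worker, &work);
    flush_kthread_worker(&worker);
    kthread_stop(thr);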
25 Mar, 2010
1 commit
-
cpuset_mem_spread_node() returns an offline node, and causes an oops.
This patch fixes it by initializing task->mems_allowed to
node_states[N_HIGH_MEMORY], and updating task->mems_allowed when doing
memory hotplug.
Signed-off-by: Miao Xie
Acked-by: David Rientjes
Reported-by: Nick Piggin
Tested-by: Nick Piggin
Cc: Paul Menage
Cc: Li Zefan
Cc: Ingo Molnar
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
09 Feb, 2010
1 commit
-
kthread_create_on_cpu() doesn't exist, so update a comment in
kthread.c to reflect this.
Signed-off-by: Anton Blanchard
Acked-by: Rusty Russell
Cc: Peter Zijlstra
LKML-Reference:
Signed-off-by: Ingo Molnar
17 Dec, 2009
1 commit
-
Since kthread_bind() lost its dependencies on sched.c, move it
back where it came from.
Signed-off-by: Peter Zijlstra
Cc: Mike Galbraith
LKML-Reference:
Signed-off-by: Ingo Molnar
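For reference, the usual create/bind/wake sequence kthread_bind() is part of (fn and data are placeholders):
    struct task_struct *p = kthread_create(fn, data, "worker/%d", cpu);

    if (!IS_ERR(p)) {
            kthread_bind(p, cpu);   /* bind before the thread first runs */
            wake_up_process(p);
    }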
03 Nov, 2009
1 commit
-
Eric Paris reported that commit
f685ceacab07d3f6c236f04803e2f2f0dbcc5afb causes boot time
PREEMPT_DEBUG complaints:
[ 4.590699] BUG: using smp_processor_id() in preemptible [00000000] code: rmmod/1314
[ 4.593043] caller is task_hot+0x86/0xd0
Since kthread_bind() messes with scheduler internals, move the
body to sched.c, and lock the runqueue.
Reported-by: Eric Paris
Signed-off-by: Mike Galbraith
Tested-by: Eric Paris
Cc: Peter Zijlstra
LKML-Reference:
[ v2: fix !SMP build and clean up ]
Signed-off-by: Ingo Molnar
09 Sep, 2009
1 commit
-
Remove the kthread/workqueue priority boost, as it increases worst-case
desktop latencies.
Signed-off-by: Mike Galbraith
Acked-by: Peter Zijlstra
LKML-Reference:
Signed-off-by: Ingo Molnar
28 Jul, 2009
1 commit
-
Commit 63706172f332fd3f6e7458ebfb35fa6de9c21dc5 ("kthreads: rework
kthread_stop()") removed the limitation that the thread function must
not call do_exit() itself, but forgot to update the comment.
Since that commit it is OK to use kthread_stop() even if the kthread
can exit itself.
Signed-off-by: Oleg Nesterov
Signed-off-by: Rusty Russell
Signed-off-by: Linus Torvalds
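The canonical stop-aware loop, for reference; since the rework the thread may also just exit on its own and kthread_stop() still works:
    static int my_thread_fn(void *data)
    {
            while (!kthread_should_stop()) {
                    /* do one unit of work, then nap */
                    schedule_timeout_interruptible(HZ);
            }
            return 0;
    }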
19 Jun, 2009
2 commits
-
Based on Eric's patch which in turn was based on my patch.
kthread_stop() has the following nasty problems:
- it runs unpredictably long with the global semaphore held.
- it deadlocks if the kthread itself does kthread_stop() before it obeys
the kthread_should_stop() request.
- it is not usable if the kthread exits on its own; see for example the
ugly "wait_to_die:" hack in migration_thread().
- it is not possible to just tell the kthread it should stop; we must
always wait for its exit.
With this patch kthread() allocates all necessary data (struct kthread)
on its own stack, and the kthread_stop_xxx globals are deleted.
->vfork_done is used as a pointer into "struct kthread", which means
kthread_stop() can easily wait for the kthread's exit.
Signed-off-by: Oleg Nesterov
Cc: Christoph Hellwig
Cc: "Eric W. Biederman"
Cc: Ingo Molnar
Cc: Pavel Emelyanov
Cc: Rusty Russell
Cc: Vitaliy Gusev
Signed-off-by: Linus Torvalds
-
We use two completions to create the kernel thread; this is a bit ugly.
kthread() wakes up create_kthread() via ->started, then create_kthread()
wakes up the caller kthread_create() via ->done. But create_kthread()
does not need to wait for kthread(); it can just return. Instead,
kthread() itself can wake up the caller of kthread_create().
Kill kthread_create_info->started; ->done is enough. This improves the
scalability a bit and simplifies the code.
The only problem is if kernel_thread() fails; in that case
create_kthread() must do complete(&create->done).
Signed-off-by: Oleg Nesterov
Cc: Christoph Hellwig
Cc: "Eric W. Biederman"
Cc: Ingo Molnar
Cc: Pavel Emelyanov
Cc: Rusty Russell
Cc: Vitaliy Gusev
Signed-off-by: Linus Torvalds
17 Jun, 2009
1 commit
-
Fix allocating page cache/slab object on the unallowed node when memory
spread is set by updating tasks' mems_allowed after its cpuset's mems is
changed.
In order to update tasks' mems_allowed in time, we must modify the code
of the memory policy, because the memory policy is applied in the
process's context originally. After applying this patch, one task
directly manipulates another's mems_allowed, and we use alloc_lock in
the task_struct to protect mems_allowed and the memory policy of the
task.
But in the fast path we don't use a lock to protect them, because adding
a lock may lead to a performance regression. But if we don't add a lock,
the task might see no nodes when changing cpuset's mems_allowed to some
non-overlapping set. In order to avoid this, we set all new allowed
nodes first, then clear the newly disallowed ones.
[lee.schermerhorn@hp.com:
The rework of mpol_new() to extract the adjusting of the node mask to
apply cpuset and mpol flags "context" breaks set_mempolicy() and mbind()
with MPOL_PREFERRED and a NULL nodemask--i.e., explicit local
allocation. Fix this by adding the check for MPOL_PREFERRED and an empty
node mask to mpol_new_mempolicy().
Remove the now unneeded 'nodes = NULL' from mpol_new().
Note that mpol_new_mempolicy() is always called with a non-NULL
'nodes' parameter now that it has been removed from mpol_new().
Therefore, we don't need to test nodes for NULL before testing it for
'empty'. However, just to be extra paranoid, add a VM_BUG_ON() to
verify this assumption.]
[lee.schermerhorn@hp.com:
I don't think the function name 'mpol_new_mempolicy' is descriptive
enough to differentiate it from mpol_new().
This function applies cpuset set context, usually constraining nodes
to those allowed by the cpuset. However, when the 'RELATIVE_NODES' flag
is set, it also translates the nodes. So I settled on
'mpol_set_nodemask()', because the comment block for mpol_new() mentions
that we need to call this function to "set nodes".
Some additional minor line length, whitespace and typo cleanup.]
Signed-off-by: Miao Xie
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Christoph Lameter
Cc: Paul Menage
Cc: Nick Piggin
Cc: Yasunori Goto
Cc: Pekka Enberg
Cc: David Rientjes
Signed-off-by: Lee Schermerhorn
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
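A sketch of the lock-free ordering described above, using the standard nodemask helpers (the surrounding rebinding logic is omitted; newmems is assumed to be a nodemask_t pointer):
    /* Step 1: add all newly-allowed nodes, so the mask never goes empty. */
    nodes_or(tsk->mems_allowed, tsk->mems_allowed, *newmems);
    /* Step 2: drop the nodes that are no longer allowed. */
    nodes_and(tsk->mems_allowed, tsk->mems_allowed, *newmems);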
15 Apr, 2009
2 commits
-
Impact: clean up
Create a sub directory in include/trace called events to keep the
trace point headers in their own separate directory. Only headers that
declare trace points should be defined in this directory.
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Neil Horman
Cc: Zhao Lei
Cc: Eduard - Gabriel Munteanu
Cc: Pekka Enberg
Signed-off-by: Steven Rostedt
-
This patch lowers the number of places a developer must modify to add
new tracepoints. The current method to add a new tracepoint
into an existing system is to write the trace point macro in the
trace header with one of the macros TRACE_EVENT, TRACE_FORMAT or
DECLARE_TRACE, then they must add the same named item into the C file
with the macro DEFINE_TRACE(name), and then add the trace point.
This change cuts out the need to add the DEFINE_TRACE(name).
Every file that uses the tracepoint must still include the trace/<name>.h
file, but the one C file must also add a define before the including
of that file:
#define CREATE_TRACE_POINTS
#include <trace/mytrace.h>
This will cause the trace/mytrace.h file to also produce the C code
necessary to implement the trace point.
Note, if more than one trace/<name>.h is used to create the C code
it is best to list them all together:
#define CREATE_TRACE_POINTS
#include <trace/...>
#include <trace/...>
#include <trace/...>
Thanks to Mathieu Desnoyers and Christoph Hellwig for coming up with
the cleaner solution of the define above the includes over my first
design to have the C code include a "special" header.
This patch converts sched, irq, lockdep and skb to use this new
method.
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Neil Horman
Cc: Zhao Lei
Cc: Eduard - Gabriel Munteanu
Cc: Pekka Enberg
Signed-off-by: Steven Rostedt
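For context, a sketch of what such a trace/<name>.h header declares; the event name and fields are illustrative:
    /* trace/mytrace.h -- included by every user; emits the C
     * definitions only in the one .c file that defines
     * CREATE_TRACE_POINTS before including it. */
    TRACE_EVENT(myevent,
            TP_PROTO(int val),
            TP_ARGS(val),
            TP_STRUCT__entry(__field(int, val)),
            TP_fast_assign(__entry->val = val;),
            TP_printk("val=%d", __entry->val)
    );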
09 Apr, 2009
2 commits
-
kthreadd is the single thread which implements the "create" request; move
sched_setscheduler()/etc from create_kthread() to kthread_create() to
improve the scalability.
We should be careful with sched_setscheduler(); use the _nocheck helper.
Signed-off-by: Oleg Nesterov
Cc: Christoph Hellwig
Cc: "Eric W. Biederman"
Cc: Ingo Molnar
Cc: Pavel Emelyanov
Cc: Vitaliy Gusev
Signed-off-by: Rusty Russell
-
Remove the unnecessary find_task_by_pid_ns(). kthread() can just
use "current" to get the same result.
Signed-off-by: Vitaliy Gusev
Acked-by: Oleg Nesterov
Signed-off-by: Rusty Russell
30 Mar, 2009
1 commit
-
Impact: cleanup
(Thanks to Al Viro for reminding me of this, via Ingo)
CPU_MASK_ALL is the (deprecated) "all bits set" cpumask, defined like so:
#define CPU_MASK_ALL (cpumask_t) { { ... } }
Taking the address of such a temporary is questionable at best;
unfortunately 321a8e9d (cpumask: add CPU_MASK_ALL_PTR macro) added
CPU_MASK_ALL_PTR:
#define CPU_MASK_ALL_PTR (&CPU_MASK_ALL)
which formalizes this practice. One day gcc could bite us over this
usage (though we seem to have gotten away with it so far).
So replace everything which used &CPU_MASK_ALL or CPU_MASK_ALL_PTR
with the modern "cpu_all_mask" (a real const struct cpumask *).
Signed-off-by: Rusty Russell
Acked-by: Ingo Molnar
Reported-by: Al Viro
Cc: Mike Travis
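The conversion in miniature:
    /* Before: takes the address of a compound-literal temporary. */
    set_cpus_allowed_ptr(tsk, CPU_MASK_ALL_PTR);

    /* After: a real const struct cpumask *. */
    set_cpus_allowed_ptr(tsk, cpu_all_mask);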
16 Nov, 2008
1 commit
-
Impact: API *CHANGE*. Must update all tracepoint users.
Add DEFINE_TRACE() to tracepoints to let them declare the tracepoint
structure in a single spot for all the kernel. It helps reducing memory
consumption, especially when declaring a lot of tracepoints, e.g. for
kmalloc tracing.
*API CHANGE WARNING*: now, DECLARE_TRACE() must be used in headers for
tracepoint declarations rather than DEFINE_TRACE(). This is the sane way
to do it. The name previously used was misleading.
Updates scheduler instrumentation to follow this API change.
Signed-off-by: Mathieu Desnoyers
Signed-off-by: Ingo Molnar
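The split in sketch form; the event name is illustrative, and the argument-wrapper macros were spelled TPPROTO/TPARGS in this era (later TP_PROTO/TP_ARGS):
    /* In a shared header -- declaration only: */
    DECLARE_TRACE(sched_myevent,
            TPPROTO(struct task_struct *p),
            TPARGS(p));

    /* In exactly one .c file -- the single definition: */
    DEFINE_TRACE(sched_myevent);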
21 Oct, 2008
1 commit
-
Merge branch 'tracing-v28-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip
* 'tracing-v28-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (131 commits)
tracing/fastboot: improve help text
tracing/stacktrace: improve help text
tracing/fastboot: fix initcalls disposition in bootgraph.pl
tracing/fastboot: fix bootgraph.pl initcall name regexp
tracing/fastboot: fix issues and improve output of bootgraph.pl
tracepoints: synchronize unregister static inline
tracepoints: tracepoint_synchronize_unregister()
ftrace: make ftrace_test_p6nop disassembler-friendly
markers: fix synchronize marker unregister static inline
tracing/fastboot: add better resolution to initcall debug/tracing
trace: add build-time check to avoid overrunning hex buffer
ftrace: fix hex output mode of ftrace
tracing/fastboot: fix initcalls disposition in bootgraph.pl
tracing/fastboot: fix printk format typo in boot tracer
ftrace: return an error when setting a nonexistent tracer
ftrace: make some tracers reentrant
ring-buffer: make reentrant
ring-buffer: move page indexes into page headers
tracing/fastboot: only trace non-module initcalls
ftrace: move pc counter in irqtrace
...
Manually fix conflicts:
- init/main.c: initcall tracing
- kernel/module.c: verbose level vs tracepoints
- scripts/bootgraph.pl: fallout from cherry-picking commits.
20 Oct, 2008
1 commit
-
Now that wait_task_inactive(task, state) checks task->state == state,
we can simplify the code and make this debugging check more robust.
Signed-off-by: Oleg Nesterov
Cc: Roland McGrath
Cc: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
14 Oct, 2008
1 commit
-
Instrument the scheduler activity (sched_switch, migration, wakeups,
wait for a task, signal delivery) and process/thread
creation/destruction (fork, exit, kthread stop). Actually, kthread
creation is not instrumented in this patch because it is architecture
dependent. It allows connecting tracers such as ftrace, which detect
scheduling latencies and good/bad scheduler decisions. Tools like LTTng can
export this scheduler information along with instrumentation of the rest
of the kernel activity to perform post-mortem analysis on the scheduler
activity.
About the performance impact of tracepoints (which is comparable to
markers), even without immediate values optimizations, tests done by
Hideo Aoki on ia64 show no regression. His test case was using hackbench
on a kernel where scheduler instrumentation (about 5 events in core
scheduler code) was added. See the "Tracepoints" patch header for
performance result details.
Changelog:
- Change instrumentation location and parameter to match the ftrace
instrumentation, previously done with kernel markers.
[ mingo@elte.hu: conflict resolutions ]
Signed-off-by: Mathieu Desnoyers
Acked-by: 'Peter Zijlstra'
Signed-off-by: Ingo Molnar
27 Jul, 2008
1 commit
-
This extends wait_task_inactive() with a new argument so it can be used in
a "soft" mode where it will check for the task changing state unexpectedly
and back off. There is no change to existing callers. This lays the
groundwork to allow robust, noninvasive tracing that can try to sample a
blocked thread but back off safely if it wakes up.
Signed-off-by: Roland McGrath
Cc: Oleg Nesterov
Reviewed-by: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
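A hedged sketch of the soft mode: pass the state the task is expected to stay in, and treat a zero return as "it moved, back off":
    /* Sample a thread believed to be blocked in TASK_TRACED; if it
     * changed state unexpectedly, give up rather than spin. */
    if (!wait_task_inactive(child, TASK_TRACED))
            return -EAGAIN;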
19 Jul, 2008
1 commit
-
* Replace:
set_cpus_allowed(..., CPU_MASK_ALL)
with:
set_cpus_allowed_ptr(..., CPU_MASK_ALL_PTR)
to remove excessive stack requirements when NR_CPUS=4096.
Signed-off-by: Mike Travis
Cc: Andrew Morton
Signed-off-by: Ingo Molnar
17 Jul, 2008
1 commit
-
The freezer currently attempts to distinguish kernel threads from
user space tasks by checking if their mm pointer is unset and it
does not send fake signals to kernel threads. However, there are
kernel threads, mostly related to networking, that behave like
user space tasks and may want to be sent a fake signal to be frozen.
Introduce the new process flag PF_FREEZER_NOSIG that will be set
by default for all kernel threads and make the freezer only send
fake signals to the tasks having PF_FREEZER_NOSIG unset. Provide
the set_freezable_with_signal() function to be called by the kernel
threads that want to be sent a fake signal for freezing.
This patch should not change the freezer's observable behavior.
Signed-off-by: Rafael J. Wysocki
Signed-off-by: Andi Kleen
Acked-by: Pavel Machek
Signed-off-by: Len Brown
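A sketch of the opt-in from a networking-style kthread; the loop body is illustrative:
    static int my_net_thread(void *unused)
    {
            /* Clear PF_FREEZER_NOSIG so the freezer sends us a fake
             * signal, like it does for user space tasks. */
            set_freezable_with_signal();

            while (!kthread_should_stop()) {
                    try_to_freeze();
                    /* ... service the socket ... */
            }
            return 0;
    }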
10 Jun, 2008
1 commit
-
Kthreads that have called kthread_bind() are bound to specific cpus, so
other tasks should not be able to change their cpus_allowed from under
them. Otherwise, it is possible to move kthreads, such as the migration
or software watchdog threads, so they are not allowed access to the cpu
they work on.
Cc: Peter Zijlstra
Cc: Paul Menage
Cc: Paul Jackson
Signed-off-by: David Rientjes
Signed-off-by: Ingo Molnar
30 Apr, 2008
1 commit
-
There are some places that are known to operate on tasks'
global pids only:
* the rest_init() call (called on boot)
* the kgdb's getthread
* the create_kthread() (since the kthread is run in the init ns)
So use find_task_by_pid_ns(..., &init_pid_ns) there
and schedule find_task_by_pid() for removal.
[sukadev@us.ibm.com: Fix warning in kernel/pid.c]
Signed-off-by: Pavel Emelyanov
Cc: "Eric W. Biederman"
Signed-off-by: Sukadev Bhattiprolu
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
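The replacement in miniature:
    /* Explicitly a global-pid lookup, instead of the
     * namespace-ambiguous find_task_by_pid(pid): */
    struct task_struct *tsk = find_task_by_pid_ns(pid, &init_pid_ns);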
29 Apr, 2008
1 commit
-
From the POV of synchronization, there should be no need to call
wake_up_process() with the 'kthread_create_lock' being held.
Signed-off-by: Dmitry Adamushko
Cc: Nick Piggin
Cc: Ingo Molnar
Cc: Rusty Russell
Cc: "Paul E. McKenney"
Cc: Peter Zijlstra
Cc: Andy Whitcroft
Cc: Oleg Nesterov
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
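A sketch of the narrowed critical section in kthread_create(), with the surrounding structure simplified:
    spin_lock(&kthread_create_lock);
    list_add_tail(&create.list, &kthread_create_list);
    spin_unlock(&kthread_create_lock);

    /* The wakeup needs no protection from the list lock. */
    wake_up_process(kthreadd_task);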
22 Apr, 2008
1 commit
-
* 'semaphore' of git://git.kernel.org/pub/scm/linux/kernel/git/willy/misc:
Deprecate the asm/semaphore.h files in feature-removal-schedule.
Convert asm/semaphore.h users to linux/semaphore.h
security: Remove unnecessary inclusions of asm/semaphore.h
lib: Remove unnecessary inclusions of asm/semaphore.h
kernel: Remove unnecessary inclusions of asm/semaphore.h
include: Remove unnecessary inclusions of asm/semaphore.h
fs: Remove unnecessary inclusions of asm/semaphore.h
drivers: Remove unnecessary inclusions of asm/semaphore.h
net: Remove unnecessary inclusions of asm/semaphore.h
arch: Remove unnecessary inclusions of asm/semaphore.h
20 Apr, 2008
1 commit
-
Signed-off-by: Gregory Haskins
Acked-by: Steven Rostedt
Signed-off-by: Ingo Molnar
19 Apr, 2008
1 commit
-
None of these files use any of the functionality promised by
asm/semaphore.h.
Signed-off-by: Matthew Wilcox
26 Jan, 2008
1 commit
-
Ensure that the kernel threads are created with the usual nice level
and affinity even if kthreadd's properties were changed from the
default by root.
Signed-off-by: Michal Schmidt
Signed-off-by: Ingo Molnar
01 Aug, 2007
1 commit
-
WARNING: kernel/built-in.o(.text+0x16910): Section mismatch:
reference to .init.text: (between 'kthreadd' and 'init_waitqueue_head')
comes because kernel/kthread.c:kthreadd() is not __init but calls
kthreadd_setup() which is __init. But this is ok, because kthreadd_setup()
is only ever called at init time, and then kthreadd() proceeds into its
"for (;;)" loop. We could mark kthreadd __init_refok, but kthreadd_setup(),
with just one callsite and 4 lines in it (it's been that small since
10ab825bdef8df51), doesn't need to be a separate function at all -- so let's
just move those four lines to the beginning of kthreadd() itself.
Signed-off-by: Satyam Sharma
Acked-by: Randy Dunlap
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
17 Jul, 2007
1 commit
-
.. which modpost started warning about.
Signed-off-by: Jan Beulich
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
24 May, 2007
1 commit
-
kthread() sleeps in TASK_INTERRUPTIBLE state waiting for the first wakeup. In
theory, this wakeup may come from freeze_process()->signal_wake_up(), so the
task can disappear even before kthread_create() sets its ->comm.
Change kthread() to use TASK_UNINTERRUPTIBLE.
[akpm@linux-foundation.org: s/BUG_ON/WARN_ON+recover]
Signed-off-by: Oleg Nesterov
Acked-by: "Eric W. Biederman"
Signed-off-by: Rafael J. Wysocki
Cc: Gautham R Shenoy
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
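A sketch of the startup ordering after the change, with details elided and field names assumed:
    /* kthread() startup: sleep uninterruptibly so a freezer fake
     * signal cannot wake the half-constructed thread. */
    __set_current_state(TASK_UNINTERRUPTIBLE);
    complete(&create->started);
    schedule();                     /* wait for the first real wakeup */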