26 Jan, 2008
40 commits
-
Use HR-timers (when available) to deliver an accurate preemption tick.
The regular scheduler tick that runs at 1/HZ can be too coarse when nice
levels are used. The fairness system will still keep CPU utilisation 'fair'
by later delaying the task that got an excessive amount of CPU time, but we
try to minimize this by delivering preemption points spot-on.
The average frequency of this extra interrupt is sched_latency / nr_latency,
which need not be higher than 1/HZ; it is just that the distribution within
the sched_latency period is important.
Signed-off-by: Peter Zijlstra
Signed-off-by: Ingo Molnar
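A minimal sketch of the mechanism, assuming a per-runqueue hrtimer; the field and helper names here are illustrative, not necessarily those of the patch:

    /*
     * Arm a one-shot high-resolution timer to fire exactly when the
     * current task's fair-share slice runs out, instead of waiting
     * for the next coarse 1/HZ tick to notice the overrun.
     */
    static void hrtick_start(struct rq *rq, u64 delay_ns)
    {
            hrtimer_start(&rq->hrtick_timer, ns_to_ktime(delay_ns),
                          HRTIMER_MODE_REL);
    }

-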
Why do we even have cond_resched() when real preemption
is on? It seems to be a waste of space and time. Remove cond_resched()
when CONFIG_PREEMPT is on.
Signed-off-by: Ingo Molnar
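A hedged sketch of the resulting pattern (the exact preprocessor layout in the patch may differ):

    /*
     * With CONFIG_PREEMPT, any safe preemption point is already taken
     * by the preemption machinery itself, so the explicit check can
     * compile away to nothing.
     */
    #ifdef CONFIG_PREEMPT
    static inline int cond_resched(void) { return 0; }
    #else
    extern int _cond_resched(void);
    #define cond_resched() _cond_resched()
    #endif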
-
whitespace fixes.
Signed-off-by: Ingo Molnar
-
Introduce a new rlimit that allows the user to set a runtime timeout on
a real-time task's slice. Once this limit is exceeded the task will receive
SIGXCPU. It measures runtime since the last sleep.
Input and ideas by Thomas Gleixner and Lennart Poettering.
Signed-off-by: Peter Zijlstra
CC: Lennart Poettering
CC: Michael Kerrisk
CC: Ulrich Drepper
Signed-off-by: Ingo Molnar
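A minimal userspace sketch, assuming a libc that already exposes RLIMIT_RTTIME (the limit is in microseconds):

    #include <signal.h>
    #include <stdio.h>
    #include <sys/resource.h>
    #include <unistd.h>

    static void on_xcpu(int sig)
    {
            (void)sig;
            /* write() is async-signal-safe, unlike printf() */
            write(2, "SIGXCPU: rt slice exceeded\n", 27);
    }

    int main(void)
    {
            /* at most 5 ms of CPU time between sleeps */
            struct rlimit rl = { .rlim_cur = 5000, .rlim_max = 5000 };

            signal(SIGXCPU, on_xcpu);
            if (setrlimit(RLIMIT_RTTIME, &rl) != 0)
                    perror("setrlimit");
            /* ... switch to SCHED_FIFO and busy-loop to trigger it ... */
            return 0;
    }

-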
Move the task_struct members specific to rt scheduling together.
A future optimization could be to put sched_entity and sched_rt_entity
into a union.
Signed-off-by: Peter Zijlstra
CC: Srivatsa Vaddagiri
Signed-off-by: Ingo Molnar
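The suggested future optimization could look roughly like this (purely illustrative; the idea is that a task is in exactly one scheduling class at a time, so the two entities could share storage):

    struct task_struct {
            /* ... */
            union {
                    struct sched_entity    se;  /* CFS state */
                    struct sched_rt_entity rt;  /* RT state  */
            };
            /* ... */
    };

-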
There are already 4 error paths in alloc_uid() that do incremental rollbacks.
I think it's time to merge them. This costs us 8 lines of code :)
Maybe it would be better to merge this patch with the previous one, but I
remember that some time ago I sent a similar patch (fixing the error path and
cleaning it), but I was told to make two patches in such cases.
Signed-off-by: Pavel Emelyanov
Acked-by: Dhaval Giani
Signed-off-by: Andrew Morton
Signed-off-by: Ingo Molnar
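The merged error path follows the usual goto-ladder idiom; a hedged sketch (the stub helper names are illustrative, not lifted from the patch):

    struct user_struct *alloc_uid_sketch(uid_t uid)
    {
            struct user_struct *up = kmem_cache_alloc(uid_cachep, GFP_KERNEL);

            if (!up)
                    goto out;
            if (sched_create_user_stub(up) < 0)
                    goto out_free_user;
            if (user_kobject_create_stub(up) < 0)
                    goto out_destroy_sched;
            return up;

    out_destroy_sched:
            sched_destroy_user_stub(up);    /* unwind in reverse order */
    out_free_user:
            kmem_cache_free(uid_cachep, up);
    out:
            return NULL;
    }

-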
The baseline code statically builds the span maps when the domain is formed.
Previous attempts at dynamically updating the maps caused a suspend-to-ram
regression, which should now be fixed.
Signed-off-by: Gregory Haskins
CC: Gautham R Shenoy
Signed-off-by: Ingo Molnar
-
This patch updates the RCU documentation to reflect preemptible RCU as
well as recent publications.
Signed-off-by: Paul E. McKenney
Signed-off-by: Gautham R Shenoy
Reviewed-by: Steven Rostedt
Signed-off-by: Ingo Molnar
-
This patch allows preemptible RCU to tolerate CPU-hotplug operations.
It accomplishes this by maintaining a local copy of a map of online
CPUs, which it accesses under its own lock.
Signed-off-by: Gautham R Shenoy
Signed-off-by: Paul E. McKenney
Reviewed-by: Steven Rostedt
Signed-off-by: Ingo Molnar
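A hedged sketch of the scheme, using the cpumask API of that era (names illustrative):

    static cpumask_t rcu_cpu_online_map;        /* RCU's private copy */
    static DEFINE_SPINLOCK(rcu_onoff_lock);

    /* called from the CPU-hotplug notifier */
    static void rcu_note_cpu_online(int cpu)
    {
            unsigned long flags;

            spin_lock_irqsave(&rcu_onoff_lock, flags);
            cpu_set(cpu, rcu_cpu_online_map);
            spin_unlock_irqrestore(&rcu_onoff_lock, flags);
    }

-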
This patch implements a new version of RCU which allows its read-side
critical sections to be preempted. It uses a set of counter pairs
to keep track of the read-side critical sections and flips them
when all tasks have exited the read-side critical section. The details
of this implementation can be found in this paper:
http://www.rdrop.com/users/paulmck/RCU/OLSrtRCU.2006.08.11a.pdf
and the article:
http://lwn.net/Articles/253651/
This patch was developed as part of the -rt kernel development and is
meant to provide better latencies when read-side critical sections of
RCU don't disable preemption. As a consequence of keeping track of RCU
readers, the readers incur a slight overhead (optimizations are discussed
in the paper). This implementation co-exists with the "classic" RCU
implementation and can be selected at compile time.
Also includes RCU tracing summarized in debugfs.
[ akpm@linux-foundation.org: build fixes on non-preempt architectures ]
Signed-off-by: Gautham R Shenoy
Signed-off-by: Dipankar Sarma
Signed-off-by: Paul E. McKenney
Reviewed-by: Steven Rostedt
Signed-off-by: Ingo Molnar
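A heavily simplified, single-CPU sketch of the counter-pair idea (the real code is per-CPU and full of memory barriers; this only shows the flip):

    static int flipctr[2];  /* the counter pair */
    static int flip;        /* index new readers currently use */

    struct reader_ctx { int idx; };

    static void demo_read_lock(struct reader_ctx *r)
    {
            r->idx = flip;          /* remember which counter we bumped */
            flipctr[r->idx]++;
    }

    static void demo_read_unlock(struct reader_ctx *r)
    {
            flipctr[r->idx]--;      /* may well be the 'old' counter by now */
    }

    /* Grace-period side: flip the index, then wait for the old counter
     * to drain to zero -- at that point all pre-flip readers are done. */
    static void demo_synchronize(void)
    {
            int old = flip;

            flip = !flip;
            while (flipctr[old] != 0)
                    ;               /* the real code sleeps and uses barriers */
    }

-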
Fix rcu_barrier() to work properly in a preemptive kernel environment.
Also, the ordering of callbacks must be preserved while moving
callbacks to another CPU during CPU hotplug.
Signed-off-by: Gautham R Shenoy
Signed-off-by: Dipankar Sarma
Signed-off-by: Paul E. McKenney
Reviewed-by: Steven Rostedt
Signed-off-by: Ingo Molnar
-
This patch re-organizes the RCU code to enable multiple implementations
of RCU. Users of RCU continue to include rcupdate.h and the
RCU interfaces remain the same. This is in preparation for
subsequently merging the preemptible RCU implementation.
Signed-off-by: Gautham R Shenoy
Signed-off-by: Dipankar Sarma
Signed-off-by: Paul E. McKenney
Reviewed-by: Steven Rostedt
Signed-off-by: Ingo Molnar
-
This patch makes RCU use softirq instead of tasklets.
It also adds a memory barrier after raising the softirq
in order to ensure that the CPU sees the most recently updated
value of rcu->cur while processing callbacks.
The discussion of the related theoretical race pointed out
by James Huang can be found here: http://lkml.org/lkml/2007/11/20/603
Signed-off-by: Gautham R Shenoy
Signed-off-by: Steven Rostedt
Signed-off-by: Dipankar Sarma
Reviewed-by: Steven Rostedt
Signed-off-by: Ingo Molnar
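A hedged sketch of the registration and the ordering concern (the open_softirq() of that era still took a data argument; treat the details as illustrative):

    /* RCU callback processing now runs in its own softirq... */
    open_softirq(RCU_SOFTIRQ, rcu_process_callbacks, NULL);

    /* ...and the raise is ordered against updates to rcu->cur so the
     * CPU processing callbacks sees the latest batch number: */
    raise_softirq(RCU_SOFTIRQ);
    smp_mb();   /* see the lkml thread linked above */

-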
We had support for overlapping cpuset based rto logic in early
prototypes that is no longer used, so remove it.
Signed-off-by: Gregory Haskins
Signed-off-by: Steven Rostedt
Signed-off-by: Ingo Molnar
-
The overload set/clears were originally idempotent when this logic was first
implemented. But that is no longer true due to the addition of the atomic
counter and this logic was never updated to work properly with that change.
So only adjust the overload state if it is actually changing to avoid
getting out of sync.
Signed-off-by: Gregory Haskins
Signed-off-by: Steven Rostedt
Signed-off-by: Ingo Molnar
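A sketch of the guarded transition, modeled on the description above (helper names are illustrative):

    /* Only transition the overload state when it actually changes, so
     * the atomic rto counter stays balanced with the per-rq flag. */
    static void update_rt_migration_sketch(struct rq *rq)
    {
            if (rq_has_pushable_rt_tasks(rq)) {     /* illustrative test */
                    if (!rq->rt.overloaded) {
                            rt_set_overload(rq);    /* bumps atomic count */
                            rq->rt.overloaded = 1;
                    }
            } else if (rq->rt.overloaded) {
                    rt_clear_overload(rq);          /* drops atomic count */
                    rq->rt.overloaded = 0;
            }
    }

-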
Dmitry Adamushko found that the current implementation of the RT
balancing code left out changes to sched_setscheduler and
rt_mutex_setprio.
This patch addresses the issue by adding methods to the scheduler classes
to handle being switched out of (switched_from) and being switched into
(switched_to) a sched_class, as well as a method for priority changes
(prio_changed).
This patch also removes some duplicate logic between rt_mutex_setprio and
sched_setscheduler.
Signed-off-by: Steven Rostedt
Signed-off-by: Ingo Molnar
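A sketch of the new hooks in struct sched_class (parameter lists hedged; the exact signatures may differ):

    struct sched_class {
            /* ... existing methods ... */
            void (*switched_from)(struct rq *rq, struct task_struct *p,
                                  int running);
            void (*switched_to)(struct rq *rq, struct task_struct *p,
                                int running);
            void (*prio_changed)(struct rq *rq, struct task_struct *p,
                                 int oldprio, int running);
    };

-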
Make the main sched.c code more agnostic to the scheduler classes.
The specific hooks in the schedule code for RT-class balancing
are replaced with pre_schedule, post_schedule
and task_wake_up methods. These methods may be used by any of the classes,
but currently only the sched_rt class implements them.
Signed-off-by: Steven Rostedt
Signed-off-by: Ingo Molnar
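The call sites then become class-agnostic; a hedged sketch of the pattern:

    /* schedule() no longer knows about RT balancing; it just invokes
     * whatever hook the outgoing task's class provides, if any. */
    static inline void pre_schedule(struct rq *rq, struct task_struct *prev)
    {
            if (prev->sched_class->pre_schedule)
                    prev->sched_class->pre_schedule(rq, prev);
    }

-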
Yanmin Zhang noticed a nice optimization:
p = l * nr / nl, and since nl = l/g, p = l * nr / (l/g) = g * nr,
which eliminates a do_div() from __sched_period().
Signed-off-by: Peter Zijlstra
Signed-off-by: Ingo Molnar
-
Clean-up try_to_wake_up().
Get rid of the 'new_cpu' variable in try_to_wake_up() [ that is one
#ifdef section less ]. Also remove a few redundant blank lines.
Signed-off-by: Dmitry Adamushko
Signed-off-by: Ingo Molnar
-
No need to do a check for 'affine wakeup and passive balancing possibilities'
in select_task_rq_fair() when task_cpu(p) == this_cpu.
I guess this part got missed upon introduction of per-sched_class
select_task_rq() in try_to_wake_up().
Signed-off-by: Dmitry Adamushko
Signed-off-by: Ingo Molnar
-
whitespace cleanups in topology.h.
Signed-off-by: Ingo Molnar
-
reactivate fork balancing.
Signed-off-by: Ingo Molnar
-
add credits for RT balancing improvements.
Signed-off-by: Ingo Molnar
-
style cleanup of various changes that were done recently.
no code changed:
   text    data     bss     dec     hex filename
  26399    2578      48   29025    7161 sched.o.before
  26399    2578      48   29025    7161 sched.o.after
Signed-off-by: Ingo Molnar
-
remove unused JIFFIES_TO_NS() macro.
Signed-off-by: Ingo Molnar
-
fix build bug in sched_rt.c:join/leave_domain and make them only
be included on SMP builds.
Signed-off-by: Ingo Molnar
-
We move the rt-overload data; it is the first global to be reclassified
as per-domain. This limits the scope of overload-related cache-line
bouncing to a specified partition instead of affecting all
cpus in the system.
Finally, we limit the scope of find_lowest_cpu searches to the domain
instead of the entire system. Note that we would always respect domain
boundaries even without this patch, but we first would scan potentially
all cpus before whittling the list down. Now we can avoid looking at
RQs that are out of scope, again reducing cache-line hits.
Note: In some cases, task->cpus_allowed will effectively reduce our search
to within our domain. However, I believe there are cases where the
cpus_allowed mask may be all ones, and therefore we err on the side of
caution. If it can be optimized later, so be it.
Signed-off-by: Gregory Haskins
CC: Christoph Lameter
Signed-off-by: Ingo Molnar
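The scoping itself is a single mask intersection; a hedged sketch using the cpumask API of that era ('domain_span' stands for the partition's CPU set, see the root-domain patch below):

    /* consider only CPUs the task may run on AND that are in our
     * partition -- out-of-scope runqueues are never touched: */
    cpumask_t candidates;

    cpus_and(candidates, p->cpus_allowed, domain_span);

-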
We add the notion of a root-domain which will be used later to rescope
global variables to per-domain variables. Each exclusive cpuset
essentially defines an island domain by fully partitioning the member cpus
from any other cpuset. However, we currently still maintain some
policy/state as global variables which transcend all cpusets. Consider,
for instance, rt-overload state.
Whenever a new exclusive cpuset is created, we also create a new
root-domain object and move each cpu member to the root-domain's span.
By default the system creates a single root-domain with all cpus as
members (mimicking the global state we have today).
We add some plumbing for storing class-specific data in our root-domain.
Whenever an RQ is switching root-domains (because of repartitioning), we
give each sched_class the opportunity to remove any state from its old
domain and add state to the new one. This logic doesn't have any clients
yet, but it will later in the series.
Signed-off-by: Gregory Haskins
CC: Christoph Lameter
CC: Paul Jackson
CC: Simon Derr
Signed-off-by: Ingo Molnar
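A sketch of the root-domain object as described above (the field set is illustrative):

    struct root_domain {
            atomic_t   refcount;  /* shared by all rqs in the partition */
            cpumask_t  span;      /* all cpus in this island domain */
            cpumask_t  online;    /* span restricted to online cpus */
            /* class-specific state (e.g. rt-overload) is rescoped here */
    };

-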
clean up schedule_balance_rt().
Signed-off-by: Ingo Molnar
-
clean up pull_rt_task().
Signed-off-by: Ingo Molnar
-
remove leftover debugging.
Signed-off-by: Ingo Molnar
-
remove rt_overload() - it's an unnecessary indirection.
Signed-off-by: Ingo Molnar
-
clean up whitespace damage and missing comments in kernel/sched_rt.c.
Signed-off-by: Ingo Molnar
-
clean up overlong line in kernel/sched_debug.c.
Signed-off-by: Ingo Molnar
-
clean up find_lock_lowest_rq().
Signed-off-by: Ingo Molnar
-
clean up pick_next_highest_task_rt().
Signed-off-by: Ingo Molnar
-
rt-balance when creating new tasks.
Signed-off-by: Ingo Molnar
-
This patch removes several cpumask operations by keeping track
of the first of the CPUs that is of the lowest priority. When
the search for the lowest-priority runqueue is completed, all
the bits up to the first CPU with the lowest-priority runqueue
are cleared.
Signed-off-by: Steven Rostedt
Signed-off-by: Ingo Molnar
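A hedged sketch of the scan (2.6.24-era cpumask API; field and variable names are illustrative). Because CPUs are visited in ascending order, every bit below the first CPU found at the final lowest priority belongs to a higher-priority runqueue and can be dropped in one pass at the end:

    int lowest_prio = MAX_PRIO, lowest_cpu = -1, cpu;

    for_each_cpu_mask(cpu, candidates) {
            int prio = cpu_rq(cpu)->rt.highest_prio;

            if (prio < lowest_prio) {
                    lowest_prio = prio;
                    lowest_cpu  = cpu;  /* first CPU at the new lowest */
            } else if (prio > lowest_prio) {
                    cpu_clear(cpu, candidates);
            }
    }
    /* finally: clear all bits below lowest_cpu in one go */

-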
We can cheaply track the number of bits set in the cpumask for the lowest-
priority CPUs. Therefore, compute the mask's weight and use it to skip
the optimal-domain search logic when there is only one CPU available.
Signed-off-by: Gregory Haskins
Signed-off-by: Ingo Molnar
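The short-circuit is essentially one line; a hedged sketch:

    cpumask_t lowest_mask;  /* CPUs whose rq is at the lowest rt prio */

    /* with a single candidate there is nothing left to rank -- take it: */
    if (cpus_weight(lowest_mask) == 1)
            return first_cpu(lowest_mask);

-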
We don't need to bother searching if the task cannot be migrated.
Signed-off-by: Gregory Haskins
Signed-off-by: Steven Rostedt
Signed-off-by: Ingo Molnar