18 Jul, 2009
1 commit
-
I spotted two sites that didn't take vruntime wrap-around into
account. Fix these by creating a comparison helper that does so.
Signed-off-by: Fabio Checconi
Signed-off-by: Peter Zijlstra
LKML-Reference:
Signed-off-by: Ingo Molnar
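A wrap-safe comparison boils down to looking at the sign of the difference
between the two vruntimes; a minimal sketch of such a helper (the exact name
and placement are assumptions):

static inline int entity_before(struct sched_entity *a,
                                struct sched_entity *b)
{
        /* The signed difference stays correct across u64 wrap-around. */
        return (s64)(a->vruntime - b->vruntime) < 0;
}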
11 Jul, 2009
1 commit
-
One of the isolation modifications for SCHED_IDLE is the
unitization of sleeper credit. However the check for this
assumes that the sched_entity we're placing always belongs to a
task. This is potentially not true with group scheduling and leaves
us rummaging randomly when we try to pull the policy.
Signed-off-by: Paul Turner
Cc: peterz@infradead.org
LKML-Reference:
Signed-off-by: Ingo Molnar
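A sketch of the kind of guard described, in a place_entity()-style
sleeper-credit check; the surrounding condition is an assumption, the
entity_is_task() test is the point:

/* Group entities carry no task_struct: check before task_of(). */
if (sched_feat(NORMALIZED_SLEEPER) &&
    (!entity_is_task(se) || task_of(se)->policy != SCHED_IDLE))
        thresh = calc_delta_fair(thresh, se);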
18 Jun, 2009
1 commit
-
The local variable lw is aliased by the pointer load. The access
through load in calc_delta_mine() happens when lw is already out of
scope.
[ Reported by static code analysis. ]
Signed-off-by: Christian Engelmayer
LKML-Reference:
Signed-off-by: Ingo Molnar
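The bug pattern, sketched; the fix is to hoist lw into the enclosing scope
so it outlives the pointer:

struct load_weight *load = &cfs_rq->load;

if (unlikely(!se->on_rq)) {
        struct load_weight lw = cfs_rq->load;   /* block-scoped */

        update_load_add(&lw, se->load.weight);
        load = &lw;             /* the pointer escapes the block... */
}
/* ...and is dereferenced here, after lw's lifetime has ended. */
slice = calc_delta_mine(slice, se->load.weight, load);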
09 Apr, 2009
1 commit
-
Impact: micro-optimization
Under group scheduling we traverse up until we are at common siblings
to make the wakeup comparison on. At this point, however, they should
have the same parent, so continuing to check up the tree is redundant.
Signed-off-by: Paul Turner
Acked-by: Peter Zijlstra
LKML-Reference:
Signed-off-by: Ingo Molnar
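A sketch of the walk this shortens: equalize the depth of the two entities,
then climb until they are in the same group; after the fix, the final loop
stops as soon as the entities are siblings (helper names are assumptions):

static void find_matching_se(struct sched_entity **se,
                             struct sched_entity **pse)
{
        int se_depth = depth_se(*se);
        int pse_depth = depth_se(*pse);

        /* First bring both entities to the same depth... */
        while (se_depth > pse_depth) {
                se_depth--;
                *se = parent_entity(*se);
        }
        while (pse_depth > se_depth) {
                pse_depth--;
                *pse = parent_entity(*pse);
        }

        /* ...then climb until both sit in the same cfs_rq. Once they
         * do, they are siblings with a common parent: stop here. */
        while (!is_same_group(*se, *pse)) {
                *se = parent_entity(*se);
                *pse = parent_entity(*pse);
        }
}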
16 Feb, 2009
1 commit
11 Feb, 2009
1 commit
-
Intel reported a 10% regression (mysql+sysbench) on a 16-way machine
with these patches:
1596e29: sched: symmetric sync vs avg_overlap
d942fb6: sched: fix sync wakeups
Revert them.
Reported-by: "Zhang, Yanmin"
Bisected-by: Lin Ming
Signed-off-by: Peter Zijlstra
Signed-off-by: Ingo Molnar
09 Feb, 2009
1 commit
01 Feb, 2009
3 commits
-
Similar to the previous patch, by not clearing buddies we can select entities
past their run quota, which can increase latency. This means we have to clear
group buddies as well.
Do not use the group clear for pick_next_task(), otherwise that'll
get O(n^2).
Signed-off-by: Peter Zijlstra
Signed-off-by: Ingo Molnar
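A sketch of the hierarchical clear (buddy fields as described in the
November 2008 entries below; treat the exact shape as an assumption):

static void __clear_buddies(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
        if (cfs_rq->last == se)
                cfs_rq->last = NULL;
        if (cfs_rq->next == se)
                cfs_rq->next = NULL;
}

/* Clear the buddy pointers at every level of the hierarchy. */
static void clear_buddies(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
        for_each_sched_entity(se)
                __clear_buddies(cfs_rq_of(se), se);
}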
-
It was noticed that a task could get re-elected past its run quota due to buddy
affinities. This could increase latency a little. Cure it by more aggressively
clearing buddy state.
We do so in two situations:
- when we force preempt
- when we select a buddy to run
Signed-off-by: Mike Galbraith
Signed-off-by: Peter Zijlstra
Signed-off-by: Ingo Molnar
-
Pawel Dziekonski reported that the openssl benchmark and his
quantum chemistry application both show slowdowns due to the
scheduler under-parallelizing execution.
The reason is that pipe wakeups still do 'sync' wakeups, which
override the normal buddy wakeup logic - even if waker and
wakee are loosely coupled.
Fix an inversion of logic in the buddy wakeup code.
Reported-by: Pawel Dziekonski
Signed-off-by: Peter Zijlstra
Signed-off-by: Ingo Molnar
16 Jan, 2009
1 commit
-
Mike's change 0a582440f ("sched: fix sched_slice()") broke group
scheduling by forgetting to reload cfs_rq on each loop.
This patch fixes the aim7 regression, and the specjbb2005 regression
becomes less than 1.5% on an 8-core Stoakley.
Signed-off-by: Lin Ming
Signed-off-by: Peter Zijlstra
Tested-by: Jayson King
Signed-off-by: Ingo Molnar
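The shape of the fix, sketched: each hierarchy level has its own cfs_rq,
so it must be re-read from the entity on every iteration (loop body
simplified):

for_each_sched_entity(se) {
        cfs_rq = cfs_rq_of(se);         /* reload at every level */
        slice = calc_delta_mine(slice, se->load.weight, &cfs_rq->load);
}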
15 Jan, 2009
3 commits
-
Impact: fix SCHED_IDLE latency problems
OK, so we have 1 running task A (which is obviously curr and the tree is
equally obviously empty).
'A' nicely chugs along, doing its thing, carrying min_vruntime along as it
goes.
Then some whacko speed freak SCHED_IDLE task gets inserted due to SMP
balancing, which is very likely far right, in that case:
update_curr
  update_min_vruntime
    cfs_rq->rb_leftmost := true (the crazy task sitting in a tree)
      vruntime = se->vruntime
and voila, min_vruntime is waaay right of where it ought to be.
OK, so why did I write it like that to begin with...
Aah, yes.
Say we've just dequeued current:
schedule
  deactivate_task(prev)
    dequeue_entity
      update_min_vruntime
Then we'll set
  vruntime = cfs_rq->min_vruntime;
we find !cfs_rq->curr, but do find someone in the tree. Then we _must_
do vruntime = se->vruntime, because
  vruntime = min_vruntime(vruntime := cfs_rq->min_vruntime, se->vruntime)
will not advance vruntime, and cause lags the other way around (which we
fixed with that initial patch: 1af5f730fc1bf7c62ec9fb2d307206e18bf40a69
(sched: more accurate min_vruntime accounting)).
Signed-off-by: Peter Zijlstra
Tested-by: Mike Galbraith
Acked-by: Mike Galbraith
Cc:
Signed-off-by: Ingo Molnar
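A sketch combining both cases above: start from current when there is one,
fold in the leftmost tree entity, and only ever let min_vruntime move
forward (this mirrors the described logic, not necessarily the patch
verbatim):

static void update_min_vruntime(struct cfs_rq *cfs_rq)
{
        u64 vruntime = cfs_rq->min_vruntime;

        if (cfs_rq->curr)
                vruntime = cfs_rq->curr->vruntime;

        if (cfs_rq->rb_leftmost) {
                struct sched_entity *se = rb_entry(cfs_rq->rb_leftmost,
                                                   struct sched_entity,
                                                   run_node);

                if (!cfs_rq->curr)
                        vruntime = se->vruntime;
                else
                        vruntime = min_vruntime(vruntime, se->vruntime);
        }

        /* Never move min_vruntime backwards. */
        cfs_rq->min_vruntime = max_vruntime(cfs_rq->min_vruntime, vruntime);
}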
-
Stronger SCHED_IDLE isolation:
- no SCHED_IDLE buddies
- never let SCHED_IDLE preempt on wakeup
- always preempt SCHED_IDLE on wakeup
- limit SLEEPER fairness for SCHED_IDLE.
Signed-off-by: Mike Galbraith
Signed-off-by: Peter Zijlstra
Signed-off-by: Ingo Molnar
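A sketch of how the middle two rules above might look in the
wakeup-preemption path (placement and exact tests are assumptions):

/* Always preempt a SCHED_IDLE current on wakeup... */
if (unlikely(rq->curr->policy == SCHED_IDLE) &&
    likely(p->policy != SCHED_IDLE)) {
        resched_task(rq->curr);
        return;
}

/* ...and never let a SCHED_IDLE wakee preempt anybody. */
if (unlikely(p->policy == SCHED_IDLE))
        return;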
-
Prefer tasks that wake other tasks to preempt quickly. This improves
performance because more work is available sooner.
The workload that prompted this patch was a kernel build over NFS4 (for some
curious and not understood reason we had to revert commit
18de9735300756e3ca9c361ef58409d8561dfe0d to make any progress at all).
Without this patch a make -j8 bzImage (of x86-64 defconfig) would take
3m30-ish; with this patch we're down to 2m50-ish.
psql-sysbench/mysql-sysbench show a slight improvement in peak performance as
well; tbench and vmark seemed not to care.
It is possible to improve upon the build time (to 2m20-ish) but that seriously
destroys other benchmarks (just shows that there's more room for tinkering).
Much thanks to Mike who put in a lot of effort to benchmark things and proved
a worthy opponent with a competing patch.
Signed-off-by: Peter Zijlstra
Signed-off-by: Mike Galbraith
Signed-off-by: Ingo Molnar
09 Jan, 2009
1 commit
-
Use the new generic implementation.
Signed-off-by: Wu Fengguang
Cc: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
05 Jan, 2009
1 commit
03 Jan, 2009
2 commits
-
…/git/tip/linux-2.6-tip
* 'cpus4096-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (66 commits)
x86: export vector_used_by_percpu_irq
x86: use logical apicid in x2apic_cluster's x2apic_cpu_mask_to_apicid_and()
sched: nominate preferred wakeup cpu, fix
x86: fix lguest used_vectors breakage, -v2
x86: fix warning in arch/x86/kernel/io_apic.c
sched: fix warning in kernel/sched.c
sched: move test_sd_parent() to an SMP section of sched.h
sched: add SD_BALANCE_NEWIDLE at MC and CPU level for sched_mc>0
sched: activate active load balancing in new idle cpus
sched: bias task wakeups to preferred semi-idle packages
sched: nominate preferred wakeup cpu
sched: favour lower logical cpu number for sched_mc balance
sched: framework for sched_mc/smt_power_savings=N
sched: convert BALANCE_FOR_xx_POWER to inline functions
x86: use possible_cpus=NUM to extend the possible cpus allowed
x86: fix cpu_mask_to_apicid_and to include cpu_online_mask
x86: update io_apic.c to the new cpumask code
x86: Introduce topology_core_cpumask()/topology_thread_cpumask()
x86: xen: use smp_call_function_many()
x86: use work_on_cpu in x86/kernel/cpu/mcheck/mce_amd_64.c
...
Fixed up trivial conflict in kernel/time/tick-sched.c manually
-
Impact: fix bad-interactivity buglet
Fix sched_slice() to emit a sane result whether a task is currently
enqueued or not.
Signed-off-by: Mike Galbraith
Tested-by: Jayson King
Signed-off-by: Ingo Molnar
kernel/sched_fair.c | 30 ++++++++++++------------------
1 file changed, 12 insertions(+), 18 deletions(-)
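The heart of that fix, sketched: compute the period as if the task were
already runnable, so the slice comes out sane whether or not it is
enqueued:

/* A task that is not enqueued is missing from nr_running:
 * count it explicitly when sizing the period. */
u64 slice = __sched_period(cfs_rq->nr_running + !se->on_rq);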
19 Dec, 2008
1 commit
-
Impact: tweak task wakeup to save power more aggressively
Preferred wakeup cpu (from a semi idle package) has been
nominated in find_busiest_group() in the previous patch. Use
this information in sched_mc_preferred_wakeup_cpu in function
wake_idle() to bias task wakeups if the following conditions
are satisfied (see the sketch after this entry):
- The present cpu that is trying to wakeup the process is
  idle and waking the target process on this cpu will
  potentially wakeup a completely idle package
- The previous cpu on which the target process ran is
  also idle and hence selecting the previous cpu may
  wakeup a semi idle cpu package
- The task being woken up is allowed to run on the
  nominated cpu (cpu affinity and restrictions)
Basically if both the current cpu and the previous cpu on
which the task ran are idle, select the nominated cpu from the semi
idle cpu package for running the new task that is waking up.
Cache hotness is considered since the actual biasing happens
in wake_idle() only if the application is cache cold.
This technique will effectively move short running bursty jobs in
a mostly idle system.
Wakeup biasing for power savings gets automatically disabled if
system utilisation increases due to the fact that the probability
of finding both this_cpu and prev_cpu idle decreases.
Signed-off-by: Vaidyanathan Srinivasan
Acked-by: Balbir Singh
Acked-by: Peter Zijlstra
Signed-off-by: Ingo Molnar
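A sketch of the bias in wake_idle(); everything except the names quoted in
the text above is an assumption:

/* Cache-cold only: cache-hot tasks are filtered out earlier. */
if (sched_mc_power_savings && idle_cpu(this_cpu) && idle_cpu(prev_cpu) &&
    cpumask_test_cpu(sched_mc_preferred_wakeup_cpu, &p->cpus_allowed))
        return sched_mc_preferred_wakeup_cpu;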
16 Dec, 2008
2 commits
-
Impact: micro-optimization
Skip the hard work when there is none.
Signed-off-by: Peter Zijlstra
Acked-by: Mike Galbraith
Signed-off-by: Ingo Molnar
-
Impact: sharpen the wakeup-granularity to always be against current scheduler time
It was possible to do the preemption check against an old time stamp.
Signed-off-by: Mike Galbraith
Signed-off-by: Peter Zijlstra
Signed-off-by: Ingo Molnar
25 Nov, 2008
2 commits
-
Impact: Trivial API conversion
NR_CPUS -> nr_cpu_ids
cpumask_t -> struct cpumask
sizeof(cpumask_t) -> cpumask_size()
cpumask_a = cpumask_b -> cpumask_copy(&cpumask_a, &cpumask_b)
cpu_set() -> cpumask_set_cpu()
first_cpu() -> cpumask_first()
cpumask_of_cpu() -> cpumask_of()
cpus_* -> cpumask_*
There are some FIXMEs where we need all archs to complete the
infrastructure (patches have been sent):
cpu_coregroup_map -> cpu_coregroup_mask
node_to_cpumask* -> cpumask_of_node
There is also one FIXME where we pass an array of cpumasks to
partition_sched_domains(): this implies knowing the definition of
'struct cpumask' and the size of a cpumask. This will be fixed in a
future patch.
Signed-off-by: Rusty Russell
Signed-off-by: Ingo Molnar
-
Impact: trivial wrap of member accesses
This eases the transition in the next patch.
We also get rid of a temporary cpumask in find_idlest_cpu() thanks to
for_each_cpu_and, and sched_balance_self() due to getting weight before
setting sd to NULL.
Signed-off-by: Rusty Russell
Signed-off-by: Ingo Molnar
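A sketch of the find_idlest_cpu()-style pattern this enables: iterate the
intersection directly instead of building a temporary mask (the
load-tracking names are assumptions):

int cpu, idlest = -1;
unsigned long min_load = ULONG_MAX;

/* Visit only cpus that are both in the group and allowed. */
for_each_cpu_and(cpu, sched_group_cpus(group), &p->cpus_allowed) {
        unsigned long load = weighted_cpuload(cpu);

        if (load < min_load) {
                min_load = load;
                idlest = cpu;
        }
}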
11 Nov, 2008
1 commit
-
Clear buddies on yield, so that the buddy rules don't schedule them
despite them being placed right-most.
This fixed a performance regression with yield-happy binary JVMs.
Signed-off-by: Peter Zijlstra
Signed-off-by: Ingo Molnar
Tested-by: Lin Ming
05 Nov, 2008
4 commits
-
Impact: scheduling order fix for group scheduling
For each level in the hierarchy, set the buddy to point to the right entity.
Therefore, when we do the hierarchical schedule, we have a fair chance of
ending up where we meant to.
Signed-off-by: Peter Zijlstra
Acked-by: Mike Galbraith
Signed-off-by: Ingo Molnar
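A plausible shape for the helpers described, marking the buddy at every
level so the hierarchical pick lands where intended:

static void set_last_buddy(struct sched_entity *se)
{
        for_each_sched_entity(se)
                cfs_rq_of(se)->last = se;
}

static void set_next_buddy(struct sched_entity *se)
{
        for_each_sched_entity(se)
                cfs_rq_of(se)->next = se;
}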
-
Impact: improve/change/fix wakeup-buddy scheduling
Currently we only have a forward looking buddy, that is, we prefer to
schedule to the task we last woke up, under the presumption that it's
going to consume the data we just produced, and therefore will have
cache hot benefits.
This allows co-waking producer/consumer task pairs to run ahead of the
pack for a little while, keeping their cache warm. Without this, we
would interleave all pairs, utterly trashing the cache.
This patch introduces a backward looking buddy, that is, suppose that
in the above scenario, the consumer preempts the producer before it
can go to sleep, we will therefore miss the wakeup from consumer to
producer (it's already running, after all), breaking the cycle and
reverting to the cache-trashing interleaved schedule pattern.
The backward buddy will try to schedule back to the task that woke us
up in case the forward buddy is not available, under the assumption
that the last task will be the most cache-hot task around,
barring current.
This will basically allow a task to continue after it got preempted.
In order to avoid starvation, we allow either buddy to get wakeup_gran
ahead of the pack.
Signed-off-by: Peter Zijlstra
Acked-by: Mike Galbraith
Signed-off-by: Ingo Molnar
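A sketch of the resulting pick order: the forward buddy (next) first, the
backward buddy (last) as fallback, each only when it is within wakeup_gran
of the leftmost entity:

static struct sched_entity *pick_next_entity(struct cfs_rq *cfs_rq)
{
        struct sched_entity *se = __pick_next_entity(cfs_rq);

        /* wakeup_preempt_entity() < 1 means the buddy lags the
         * leftmost task by less than wakeup_gran: no starvation. */
        if (cfs_rq->next && wakeup_preempt_entity(cfs_rq->next, se) < 1)
                return cfs_rq->next;

        if (cfs_rq->last && wakeup_preempt_entity(cfs_rq->last, se) < 1)
                return cfs_rq->last;

        return se;
}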
-
Impact: fix cross-class preemption
Inter-class wakeup preemptions should go on class order.
Signed-off-by: Peter Zijlstra
Acked-by: Mike Galbraith
Signed-off-by: Ingo Molnar
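A sketch of class-ordered preemption in the core wakeup path; for_each_class
is assumed to iterate from the highest-priority class down:

static void check_preempt_curr(struct rq *rq, struct task_struct *p, int sync)
{
        const struct sched_class *class;

        if (p->sched_class == rq->curr->sched_class) {
                rq->curr->sched_class->check_preempt_curr(rq, p, sync);
                return;
        }

        /* Walk classes in priority order: meeting p's class before
         * current's means the wakee outranks current, so preempt. */
        for_each_class(class) {
                if (class == rq->curr->sched_class)
                        break;
                if (class == p->sched_class) {
                        resched_task(rq->curr);
                        break;
                }
        }
}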
-
Impact: cleanup
Clean up task selection
Signed-off-by: Peter Zijlstra
Acked-by: Mike Galbraith
Signed-off-by: Ingo Molnar
24 Oct, 2008
6 commits
-
Since we moved wakeup preemption back to virtual time, it makes sense to move
the buddy stuff back as well. The purpose of the buddy scheduling is to allow
a quickly scheduling pair of tasks to run away from the group as far as a
regular busy task would be allowed under wakeup preemption.
This has the advantage that the pair can ping-pong for a while, enjoying
cache-hotness. Without buddy scheduling, other tasks would interleave,
destroying the cache.
Also, it saves a word in cfs_rq.
Signed-off-by: Peter Zijlstra
Acked-by: Mike Galbraith
Signed-off-by: Ingo Molnar
-
The advantage is that vruntime based wakeup preemption has a better
conceptual model. Here wakeup_gran = 0 means: preempt when 'fair'.
Therefore wakeup_gran is the granularity of unfairness we allow in order
to make progress.
Signed-off-by: Peter Zijlstra
Acked-by: Mike Galbraith
Signed-off-by: Ingo Molnar
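A sketch of the vruntime-based test: preempt only once current's vruntime
exceeds the wakee's by more than wakeup_gran, so gran = 0 degenerates to
'preempt when fair':

static int
wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
{
        s64 gran, vdiff = curr->vruntime - se->vruntime;

        if (vdiff <= 0)
                return -1;      /* current is still ahead: no preempt */

        gran = wakeup_gran(curr);
        if (vdiff > gran)
                return 1;       /* unfairness exceeds gran: preempt */

        return 0;
}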
-
Mysql+oltp and pgsql+oltp peaks are still shifted right. The below puts
the peaks back to 1 client/server pair per core.
Use the avg_overlap information to weaken the sync hint.
Signed-off-by: Mike Galbraith
Signed-off-by: Peter Zijlstra
Signed-off-by: Ingo Molnar
-
Mike noticed the current min_vruntime tracking can go wrong and skip the
current task. If the only remaining task in the tree is a nice 19 task
with huge vruntime, new tasks will be inserted too far to the right too,
causing some interactivity issues.
min_vruntime can only change due to the leftmost entry disappearing
(dequeue_entity()), or by the leftmost entry being incremented past the
next entry, which elects a new leftmost (__update_curr()).
Due to the current entry not being part of the actual tree, we have to
compare the leftmost tree entry with the current entry, and take the
leftmost of these two.
So create an update_min_vruntime() function that computes the
leftmost vruntime in the system (either tree or current) and increases
cfs_rq->min_vruntime if the computed value is larger than the
previously found min_vruntime. And call this from the two sites we've
identified that can change min_vruntime.
Reported-by: Mike Galbraith
Signed-off-by: Peter Zijlstra
Acked-by: Mike Galbraith
Signed-off-by: Ingo Molnar
-
…l/git/tip/linux-2.6-tip
* 'sched-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
sched: disable the hrtick for now
sched: revert back to per-rq vruntime
sched: fair scheduler should not resched rt tasks
sched: optimize group load balancer
sched: minor fast-path overhead reduction
sched: fix the wrong mask_len, cleanup
sched: kill unused scheduler decl.
sched: fix the wrong mask_len
sched: only update rq->clock while holding rq->lock
22 Oct, 2008
1 commit
-
a patch from Henrik Austad did this:
>> Do not declare select_task_rq as part of sched_class when CONFIG_SMP is
>> not set.
Peter observed:
> While a proper cleanup, could you do it by re-arranging the methods so
> as to not create an additional ifdef?
Do not declare select_task_rq and some other methods as part of sched_class
when CONFIG_SMP is not set.
Also gather those methods to avoid CONFIG_SMP mess.
Idea-by: Henrik Austad
Signed-off-by: Li Zefan
Acked-by: Peter Zijlstra
Acked-by: Henrik Austad
Signed-off-by: Ingo Molnar
20 Oct, 2008
3 commits
-
Vatsa rightly points out that having the runqueue weight in the vruntime
calculations can cause unfairness in the face of task joins/leaves.
Suppose: dv = dt * rw / w
Then take 10 tasks t_n, each of similar weight. If the first one runs 1,
then its vruntime will increase by 10. Now, if the next 8 tasks leave after
having run their 1, then the last task will get a vruntime increase of 2
after having run 1.
Which will leave us with 2 tasks of equal weight and equal runtime, of which
one will not be scheduled for 8/2=4 units of time.
Ergo, we cannot do that and must use: dv = dt / w.
This means we cannot have a global vruntime based on effective priority, but
must instead go back to the vruntime-per-rq model we started out with.
This patch was lightly tested by starting while loops on each nice level
and observing their execution time, and a simple group scenario of 1:2:3 pinned
to a single cpu.
Signed-off-by: Peter Zijlstra
Signed-off-by: Ingo Molnar
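With dv = dt / w the delta computation scales by the entity's own weight
only; a sketch:

static inline u64 calc_delta_fair(u64 delta, struct sched_entity *se)
{
        /* dv = dt / w: no runqueue weight involved. */
        if (unlikely(se->load.weight != NICE_0_LOAD))
                delta = calc_delta_mine(delta, NICE_0_LOAD, &se->load);

        return delta;
}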
-
With use of ftrace Steven noticed that some RT tasks got rescheduled due
to sched_fair interaction.
What happens is that we reprogram the hrtick from enqueue/dequeue_fair_task()
because that can change nr_running, and thus the current task's ideal runtime.
However, it's possible the current task isn't a fair_sched_class task, and thus
doesn't have an hrtick set to change.
Fix this by wrapping those hrtick_start_fair() calls in a hrtick_update()
function, which will check for the right conditions.
Reported-by: Steven Rostedt
Signed-off-by: Peter Zijlstra
Acked-by: Steven Rostedt
Signed-off-by: Ingo Molnar
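A sketch of the wrapper described (names assumed):

static void hrtick_update(struct rq *rq)
{
        struct task_struct *curr = rq->curr;

        /* Only a fair task has an hrtick programmed for its slice. */
        if (curr->sched_class != &fair_sched_class)
                return;

        if (cfs_rq_of(&curr->se)->nr_running < sched_nr_latency)
                hrtick_start_fair(rq, curr);
}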
-
…tp', 'timers/posixtimers' and 'timers/debug' into v28-timers-for-linus
17 Oct, 2008
1 commit
-
Greetings,
103638d added a bit of avoidable overhead to the fast-path.
Use sysctl_sched_min_granularity instead of sched_slice() to restrict buddy wakeups.
Signed-off-by: Mike Galbraith
Acked-by: Peter Zijlstra
Signed-off-by: Ingo Molnar
08 Oct, 2008
1 commit
-
While looking at the code I wondered why we always do:
sync && avg_overlap < migration_cost
Which is a bit odd, since the overlap test was meant to detect sync wakeups
so using it to specialize sync wakeups doesn't make much sense.
Hence change the code to do:
sync || avg_overlap < migration_cost
Signed-off-by: Peter Zijlstra
Signed-off-by: Ingo Molnar
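Expressed as a hypothetical helper, the change amounts to:

/* Hypothetical helper: a wakeup counts as sync either when the
 * caller says so, or when the waker/wakee overlap is very small. */
static int sync_wakeup(struct sched_entity *curr, int sync)
{
        return sync || curr->avg_overlap < sysctl_sched_migration_cost;
}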