26 Jan, 2008

40 commits

  • Dmitry Adamushko found that the current implementation of the RT
    balancing code did not handle scheduling-class and priority changes
    made through sched_setscheduler and rt_mutex_setprio.

    This patch addresses the issue by adding methods to the scheduling
    classes to handle a task being switched out of (switched_from) and
    into (switched_to) a sched_class. A method for priority changes
    (prio_changed) is also added.

    This patch also removes some duplicate logic between rt_mutex_setprio and
    sched_setscheduler.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
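
    As a rough illustration of the new interface, the sketch below shows the
    three hooks as sched_class methods plus a common helper (here called
    check_class_changed, an assumed name) that both rt_mutex_setprio() and
    sched_setscheduler() could call. The exact signatures and struct layout
    are assumptions based on the description above, not the verbatim kernel
    code.

        /* Illustrative fragment of struct sched_class; other members omitted. */
        struct sched_class {
            /* ... enqueue_task, dequeue_task, pick_next_task, ... */

            /* task left this class (e.g. SCHED_FIFO -> SCHED_NORMAL) */
            void (*switched_from)(struct rq *rq, struct task_struct *p,
                                  int running);
            /* task entered this class */
            void (*switched_to)(struct rq *rq, struct task_struct *p,
                                int running);
            /* task's priority changed but the class stayed the same */
            void (*prio_changed)(struct rq *rq, struct task_struct *p,
                                 int oldprio, int running);
        };

        /* Shared by rt_mutex_setprio() and sched_setscheduler(), removing
         * the duplicated logic mentioned above. */
        static inline void
        check_class_changed(struct rq *rq, struct task_struct *p,
                            const struct sched_class *prev_class,
                            int oldprio, int running)
        {
            if (prev_class != p->sched_class) {
                if (prev_class->switched_from)
                    prev_class->switched_from(rq, p, running);
                p->sched_class->switched_to(rq, p, running);
            } else
                p->sched_class->prio_changed(rq, p, oldprio, running);
        }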
     
  • Make the main sched.c code more agnostic to the scheduling classes:
    the RT-specific balancing hooks in the core scheduler are replaced
    with pre_schedule, post_schedule and task_wake_up methods. These
    methods may be used by any of the classes, but currently only the
    sched_rt class implements them.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
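
    A minimal sketch of what the class-agnostic call sites might look like in
    kernel/sched.c; the wrapper helpers and hook signatures here are assumed
    from the description, not quoted from the patch:

        /* The core scheduler no longer calls RT-specific balancing code; it
         * just invokes the hook if the owning class provides one. */
        static inline void pre_schedule(struct rq *rq, struct task_struct *prev)
        {
            if (prev->sched_class->pre_schedule)
                prev->sched_class->pre_schedule(rq, prev);
        }

        static inline void post_schedule(struct rq *rq)
        {
            if (rq->curr->sched_class->post_schedule)
                rq->curr->sched_class->post_schedule(rq);
        }

        /* called from try_to_wake_up() after the task has been activated */
        static inline void task_wake_up(struct rq *rq, struct task_struct *p)
        {
            if (p->sched_class->task_wake_up)
                p->sched_class->task_wake_up(rq, p);
        }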
     
  • Yanmin Zhang noticed a nice optimization:

    p = l * nr / nl, nl = l/g -> p = g * nr

    which eliminates a do_div() from __sched_period().

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
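
    In the formula, p is the scheduling period, l the targeted latency
    (sysctl_sched_latency), nr the number of runnable tasks, nl the number of
    tasks that fit in one latency period (nr_latency) and g the minimum
    granularity. Since nl = l/g, the quotient l*nr/nl reduces to g*nr, so the
    division disappears. A sketch of the simplification, assuming the usual
    sysctl names (the function body is illustrative, not the exact kernel
    code):

        static u64 __sched_period(unsigned long nr_running)
        {
            u64 period = sysctl_sched_latency;
            unsigned long nr_latency = sysctl_sched_latency /
                                       sysctl_sched_min_granularity;

            if (unlikely(nr_running > nr_latency)) {
                /* old: period *= nr_running; do_div(period, nr_latency); */
                period  = sysctl_sched_min_granularity;
                period *= nr_running;   /* same result, no 64-bit division */
            }
            return period;
        }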
     
  • Clean up try_to_wake_up().

    Get rid of the 'new_cpu' variable in try_to_wake_up() [ that's one
    #ifdef section less ]. Also remove a few redundant blank lines.

    Signed-off-by: Dmitry Adamushko
    Signed-off-by: Ingo Molnar

    Dmitry Adamushko
     
  • No need to do a check for 'affine wakeup and passive balancing possibilities'
    in select_task_rq_fair() when task_cpu(p) == this_cpu.

    I guess this part got missed upon the introduction of per-sched_class
    select_task_rq() in try_to_wake_up().

    Signed-off-by: Dmitry Adamushko
    Signed-off-by: Ingo Molnar

    Dmitry Adamushko
     
  • whitespace cleanups in topology.h.

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • reactivate fork balancing.

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • add credits for RT balancing improvements.

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • style cleanup of various changes that were done recently.

    no code changed:

     text    data   bss     dec    hex filename
    26399    2578    48   29025   7161 sched.o.before
    26399    2578    48   29025   7161 sched.o.after

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • remove unused JIFFIES_TO_NS() macro.

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • fix build bug in sched_rt.c:join/leave_domain and build them
    only on SMP configurations.

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • We move the rt-overload data, as the first of the globals, to per-domain
    scope. This limits overload-related cache-line bouncing to the affected
    partition instead of all the cpus in the system.

    Finally, we limit the scope of find_lowest_cpu searches to the domain
    instead of the entire system. Note that we would always respect domain
    boundaries even without this patch, but we first would scan potentially
    all cpus before whittling the list down. Now we can avoid looking at
    RQs that are out of scope, again reducing cache-line hits.

    Note: In some cases, task->cpus_allowed will effectively reduce our search
    to within our domain. However, I believe there are cases where the
    cpus_allowed mask may be all ones and therefore we err on the side of
    caution. If it can be optimized later, so be it.

    Signed-off-by: Gregory Haskins
    CC: Christoph Lameter
    Signed-off-by: Ingo Molnar

    Gregory Haskins
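
    An illustrative sketch of what the domain-scoped search looks like: the
    candidate set is bounded by the root domain's span before any runqueue is
    examined. The names rd, span and rt.highest_prio are assumptions based on
    the descriptions in this series:

        static int find_lowest_cpu(struct task_struct *task, struct rq *this_rq)
        {
            int lowest_prio = task->prio;   /* lower value == higher priority */
            int lowest_cpu  = -1;
            cpumask_t candidates;
            int cpu;

            /* only CPUs inside this runqueue's partition (and in the task's
             * affinity mask) are ever looked at */
            cpus_and(candidates, this_rq->rd->span, task->cpus_allowed);

            for_each_cpu_mask(cpu, candidates) {
                struct rq *rq = cpu_rq(cpu);

                /* prefer the runqueue whose top task matters least */
                if (rq->rt.highest_prio > lowest_prio) {
                    lowest_prio = rq->rt.highest_prio;
                    lowest_cpu  = cpu;
                }
            }
            return lowest_cpu;
        }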
     
  • We add the notion of a root-domain which will be used later to rescope
    global variables to per-domain variables. Each exclusive cpuset
    essentially defines an island domain by fully partitioning the member cpus
    from any other cpuset. However, we currently still maintain some
    policy/state as global variables which transcend all cpusets. Consider,
    for instance, rt-overload state.

    Whenever a new exclusive cpuset is created, we also create a new
    root-domain object and move each cpu member to the root-domain's span.
    By default the system creates a single root-domain with all cpus as
    members (mimicking the global state we have today).

    We add some plumbing for storing class specific data in our root-domain.
    Whenever a RQ is switching root-domains (because of repartitioning) we
    give each sched_class the opportunity to remove any state from its old
    domain and add state to the new one. This logic doesn't have any clients
    yet but it will later in the series.

    Signed-off-by: Gregory Haskins
    CC: Christoph Lameter
    CC: Paul Jackson
    CC: Simon Derr
    Signed-off-by: Ingo Molnar

    Gregory Haskins
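
    A sketch of what such a root-domain object might contain; the member set
    here is an assumption drawn from the description (a span of member cpus
    plus the formerly-global rt-overload state), not the literal definition:

        struct root_domain {
            atomic_t    refcount;   /* runqueues attached to this domain */
            cpumask_t   span;       /* all cpus in this partition */
            cpumask_t   online;     /* currently online members */

            /* former globals, now scoped to the partition: */
            cpumask_t   rto_mask;   /* cpus with more than one runnable RT task */
            atomic_t    rto_count;  /* how many cpus are currently overloaded */
        };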
     
  • clean up schedule_balance_rt().

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • clean up pull_rt_task().

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • remove leftover debugging.

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • remove rt_overload() - it's an unnecessary indirection.

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • clean up whitespace damage and missing comments in kernel/sched_rt.c.

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • clean up overlong line in kernel/sched_debug.c.

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • clean up find_lock_lowest_rq().

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • clean up pick_next_highest_task_rt().

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • rt-balance when creating new tasks.

    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • This patch removes several cpumask operations by keeping track
    of the first CPU that is at the lowest priority. When the search
    for the lowest-priority runqueue is completed, all the bits up to
    that first lowest-priority CPU are cleared.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
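
    A sketch of the trimming step: during the scan only the first CPU seen at
    the final lowest priority needs to be remembered, and everything before it
    is dropped in one pass afterwards, instead of clearing bits over and over
    as lower priorities are discovered. The helper name trim_lowest_mask is
    hypothetical:

        static void trim_lowest_mask(cpumask_t *lowest_mask, int lowest_cpu)
        {
            int cpu;

            if (lowest_cpu <= 0)
                return;

            /* bits below the first lowest-priority CPU were only set
             * provisionally during the scan; clear them now, once */
            for_each_cpu_mask(cpu, *lowest_mask) {
                if (cpu >= lowest_cpu)
                    break;          /* everything from here on stays */
                cpu_clear(cpu, *lowest_mask);
            }
        }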
     
  • We can cheaply track the number of bits set in the cpumask for the lowest
    priority CPUs. Therefore, compute the mask's weight and use it to skip
    the optimal domain search logic when there is only one CPU available.

    Signed-off-by: Gregory Haskins
    Signed-off-by: Ingo Molnar

    Gregory Haskins
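
    A sketch of the shortcut, assuming the scan helper returns the number of
    surviving candidates; the function names find_lowest_cpus, find_lowest_rq
    and pick_best_cpu are assumptions (see also the later entries on
    topology-aware placement):

        static int find_lowest_rq(struct task_struct *task)
        {
            cpumask_t lowest_mask;
            int count = find_lowest_cpus(task, &lowest_mask);

            if (!count)
                return -1;                      /* nowhere suitable */

            if (count == 1)                     /* only one choice: skip the */
                return first_cpu(lowest_mask);  /* domain-search logic       */

            return pick_best_cpu(task, &lowest_mask);
        }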
     
  • We don't need to bother searching if the task cannot be migrated

    Signed-off-by: Gregory Haskins
    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Gregory Haskins
     
  • This patch changes the search for a runqueue by a waking RT task:
    it will try to pick another runqueue if the task currently running
    on the original one is itself an RT task.

    The reason is that RT tasks behave differently from normal
    tasks. Preempting a normal task in order to run an RT task where
    its cache is hot is fine, because the preempted non-RT task
    will simply wait on that same runqueue to run again, unless the
    migration thread comes along and pulls it off.

    RT tasks behave differently. If one is preempted, it makes
    an active effort to continue to run. So by having a high
    priority task preempt a lower priority RT task, that lower
    RT task will then quickly try to run on another runqueue.
    This causes that lower RT task to replace its nice
    hot cache (and TLB) with a completely cold one, in
    the hope that the new high priority RT task will keep
    its cache hot.

    Remember that this high priority RT task was just woken up,
    so it has likely been sleeping for several milliseconds
    and will end up with a cold cache anyway. RT tasks run until
    they voluntarily stop, or are preempted by a higher priority
    task. This means that it is unlikely that the woken RT task
    will have a hot cache to wake up to. So pushing off a lower
    RT task just kills its cache for no good reason.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
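
    A sketch of the wake-time decision described above: if the CPU the task
    last ran on is currently running another RT task, and the waking task is
    allowed to migrate, look for a lower-priority runqueue instead. The
    function and field names (select_task_rq_rt, find_lowest_rq,
    rt.nr_cpus_allowed) are assumptions based on this series:

        static int select_task_rq_rt(struct task_struct *p, int sync)
        {
            struct rq *rq = task_rq(p);

            if (unlikely(rt_task(rq->curr)) && p->rt.nr_cpus_allowed > 1) {
                int cpu = find_lowest_rq(p);

                return (cpu == -1) ? task_cpu(p) : cpu;
            }

            /* otherwise prefer the previous CPU: its cache may still be warm */
            return task_cpu(p);
        }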
     
  • We have logic to detect whether the system has migratable tasks, but we are
    not using it when deciding whether to push tasks away. So we add support
    for considering this new information.

    Signed-off-by: Gregory Haskins
    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Gregory Haskins
     
  • The current code base assumes a relatively flat CPU/core topology and will
    route RT tasks to any CPU fairly equally. In the real world, there are
    various topologies and affinities that govern where a task is best suited
    to run with the smallest amount of overhead. NUMA and multi-core CPUs are
    prime examples of topologies that can impact cache performance.

    Fortunately, Linux is already structured to represent these topologies via
    the sched_domains interface. So we change our RT router to consult a
    combination of topology and affinity policy to best place tasks during
    migration.

    Signed-off-by: Gregory Haskins
    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Gregory Haskins
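
    A condensed sketch of the topology-aware election: once the mask of
    lowest-priority candidates is known, prefer the task's previous CPU, then
    any candidate that shares a sched_domain with it, walking the domains from
    the smallest (cache-sharing) outwards. The helper name pick_best_cpu is
    hypothetical:

        static int pick_best_cpu(struct task_struct *task, cpumask_t *lowest_mask)
        {
            int cpu = task_cpu(task);
            struct sched_domain *sd;

            if (cpu_isset(cpu, *lowest_mask))
                return cpu;                     /* previous CPU is cache-hot */

            for_each_domain(cpu, sd) {
                cpumask_t domain_mask;
                int best;

                /* candidates that are topologically close to the task */
                cpus_and(domain_mask, sd->span, *lowest_mask);
                best = first_cpu(domain_mask);
                if (best < NR_CPUS)
                    return best;
            }

            /* no shared domain: fall back to any acceptable CPU */
            cpu = first_cpu(*lowest_mask);
            return (cpu < NR_CPUS) ? cpu : -1;
        }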
     
  • In the original patch series that Steven Rostedt and I worked on together,
    we both took different approaches to the low-priority wakeup path. I
    utilized a "pre-routing" approach (push the task away to a less important
    RQ before activating it), while Steve utilized a "post-routing" approach.
    The advantage of my approach is that you avoid the overhead of a wasted
    activate/deactivate cycle and peripherally related burdens. The advantage
    of Steve's method is that it neatly solves an issue preventing a "pull"
    optimization from being deployed.

    In the end, we ended up deploying Steve's idea. But it later dawned on me
    that we could get the best of both worlds by deploying both ideas together,
    albeit slightly modified.

    The idea is simple: use a "light-weight" lookup for pre-routing, since we
    only need to approximate a good home for the task. We also retain the
    post-routing push logic to clean up any inaccuracies caused by the
    "priority mistargeting" that the lightweight lookup can introduce. Most of
    the time the pre-routing should work and yield lower overhead; in the
    cases where it doesn't, the post-router will bat cleanup.

    Signed-off-by: Gregory Haskins
    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Gregory Haskins
     
  • It doesn't hurt if we allow the current CPU to be included in the
    search. We will simply skip it later if the current CPU turns out
    to be the lowest.

    We will use this later in the series.

    Signed-off-by: Gregory Haskins
    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Gregory Haskins
     
  • Isolate the search logic into a function so that it can be used later
    in places other than find_lock_lowest_rq().

    Signed-off-by: Gregory Haskins
    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Gregory Haskins
     
  • The current wake-up code path tries to determine if it can optimize the
    wake-up to "this_cpu" by computing load calculations. The problem is that
    these calculations are only relevant to SCHED_OTHER tasks where load is king.
    For RT tasks, priority is king. So the load calculation is completely wasted
    bandwidth.

    Therefore, we create a new sched_class interface to help with
    pre-wakeup routing decisions and move the load calculation into the
    CFS class, where it is actually relevant.

    Signed-off-by: Gregory Haskins
    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Gregory Haskins
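
    A minimal sketch of the new hook and its call site; the wrapper shown here
    is hypothetical, and the select_task_rq signature is assumed from the
    description:

        /* in struct sched_class:
         *     int (*select_task_rq)(struct task_struct *p, int sync);
         * CFS keeps the load-based logic in its implementation, while the RT
         * class can route purely by priority. */
        static int wakeup_target_cpu(struct task_struct *p, int sync)
        {
            if (p->sched_class->select_task_rq)
                return p->sched_class->select_task_rq(p, sync);
            return task_cpu(p);     /* default: stay where we were */
        }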
     
  • "this_rq" is normally used to denote the RQ on the current cpu
    (i.e. "cpu_rq(this_cpu)"). So clean up the usage of this_rq to be
    more consistent with the rest of the code.

    Signed-off-by: Gregory Haskins
    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Gregory Haskins
     
  • Some RT tasks (particularly kthreads) are bound to one specific CPU.
    It is fairly common for two or more bound tasks to get queued up at the
    same time. Consider, for instance, softirq_timer and softirq_sched. A
    timer goes off in an ISR, which schedules the softirq_timer thread to
    run at RT50. Then the timer handler determines that it's time to
    smp-rebalance the system, so it schedules softirq_sched to run. So we
    are in a situation
    where we have two RT50 tasks queued, and the system will go into
    rt-overload condition to request other CPUs for help.

    This causes two problems in the current code:

    1) If a high-priority bound task and a low-priority unbound task queue
    up behind the running task, we will fail to ever relocate the unbound
    task because we terminate the search on the first unmovable task.

    2) We spend precious futile cycles in the fast-path trying to pull
    overloaded tasks over. It is therefore optimal to strive to avoid the
    overhead altogether if we can cheaply detect the condition before
    overload even occurs.

    This patch tries to achieve this optimization by utilizing the Hamming
    weight of the task->cpus_allowed mask. A weight of 1 indicates that
    the task cannot be migrated. We will then utilize this information to
    skip non-migratable tasks and to eliminate unnecessary rebalance attempts.

    We introduce a per-rq variable to count the number of migratable tasks
    that are currently running. We only go into overload if we have more
    than one RT task, AND at least one of them is migratable.

    In addition, we introduce a per-task variable to cache the cpus_allowed
    weight, since the Hamming calculation is probably relatively expensive.
    We only update the cached value when the mask is updated which should be
    relatively infrequent, especially compared to scheduling frequency
    in the fast path.

    Signed-off-by: Gregory Haskins
    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Gregory Haskins
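
    A sketch of the bookkeeping described above, assuming the field names
    rt.nr_cpus_allowed (the cached Hamming weight) and rt.rt_nr_migratory (the
    per-rq count of queued RT tasks that could run elsewhere); the
    set_cpus_allowed_rt name, the overload predicate at the end and the exact
    placement of the update are likewise assumptions:

        static void set_cpus_allowed_rt(struct task_struct *p, cpumask_t *new_mask)
        {
            int weight = cpus_weight(*new_mask);    /* Hamming weight */

            /* only queued tasks contribute to the migratable count */
            if (p->se.on_rq) {
                struct rq *rq = task_rq(p);

                if (weight > 1 && p->rt.nr_cpus_allowed == 1)
                    rq->rt.rt_nr_migratory++;
                else if (weight == 1 && p->rt.nr_cpus_allowed > 1)
                    rq->rt.rt_nr_migratory--;
            }

            p->cpus_allowed       = *new_mask;
            p->rt.nr_cpus_allowed = weight;     /* recomputed only on mask changes */
        }

        /* overload now needs both: more than one queued RT task AND at least
         * one of them able to move (hypothetical helper) */
        static inline int rq_rt_is_overloaded(struct rq *rq)
        {
            return rq->rt.rt_nr_running > 1 && rq->rt.rt_nr_migratory;
        }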
     
  • Since we now take an active approach to load balancing, we don't need to
    balance RT tasks via the normal task balancer. In fact, this code was
    found to pull RT tasks away from the CPUs that the active migration had
    just placed them on, resulting in large latencies.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • This patch adds pushing of overloaded RT tasks from a runqueue that is
    having tasks (most likely RT tasks) added to it.

    TODO: We don't cover the case of waking of new RT tasks (yet).

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • This patch adds the algorithm to pull tasks from RT overloaded runqueues.

    When an RT pull is initiated, all overloaded runqueues are examined for
    an RT task that is higher in priority than the highest-priority task
    queued on the target runqueue. If such a task is found on another
    runqueue, it is pulled to the target runqueue.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
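
    A condensed sketch of the pull described above; the overload mask name
    (rt_overload_mask) is assumed, and the runqueue locking/double-lock dance
    is omitted for brevity:

        static cpumask_t rt_overload_mask;  /* cpus with >1 runnable RT task */

        static int pull_rt_task(struct rq *this_rq)
        {
            int this_cpu = this_rq->cpu, cpu, ret = 0;

            for_each_cpu_mask(cpu, rt_overload_mask) {
                struct rq *src_rq = cpu_rq(cpu);
                struct task_struct *p;

                if (cpu == this_cpu)
                    continue;

                /* highest-priority task queued (but not running) on src_rq
                 * that is allowed to run on this_cpu */
                p = pick_next_highest_task_rt(src_rq, this_cpu);

                /* pull only if it outranks everything queued locally
                 * (lower prio value == more important) */
                if (p && p->prio < this_rq->rt.highest_prio) {
                    deactivate_task(src_rq, p, 0);
                    set_task_cpu(p, this_cpu);
                    activate_task(this_rq, p, 0);
                    ret = 1;
                }
            }
            return ret;
        }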
     
  • This patch adds an RT overload accounting system. When a runqueue has
    more than one RT task queued, it is marked as overloaded; that is, it
    becomes a candidate to have RT tasks pulled from it.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
     
  • This patch adds an algorithm to push extra RT tasks off a run queue to
    other CPU runqueues.

    When more than one RT task is added to a run queue, this algorithm takes
    an assertive approach to push the RT tasks that are not running onto other
    run queues that have lower priority. The way this works is that the
    highest-priority RT task that is not running is looked at and we examine
    the runqueues of the CPUs in that task's affinity mask. We find the
    runqueue with the lowest priority among the CPUs in the picked task's
    affinity, and if its priority is lower than the picked task's, we push the
    task onto that CPU's runqueue.

    We continue pushing RT tasks off the current runqueue until we don't push
    any more. The algorithm stops when the next highest RT task can't preempt
    any other process on another CPU.

    TODO: The algorithm may stop while there are still RT tasks that could be
    migrated. Specifically, if the highest non-running RT task's CPU affinity
    is restricted to CPUs that are running higher priority tasks, there may
    be a lower priority task queued whose affinity includes a CPU that is
    running an even lower priority task to which it could be migrated. This
    patch set does not address this issue.

    Note: checkpatch reveals two over-80-character lines. I'm not sure
    that breaking them up will help visually, so I left them as is.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
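
    A condensed sketch of the push loop, built on the helpers named elsewhere
    in this series (pick_next_highest_task_rt, find_lock_lowest_rq); lock
    handling and the retry logic needed when the target runqueue changes
    underneath us are omitted:

        static int push_rt_task(struct rq *rq)
        {
            struct task_struct *next = pick_next_highest_task_rt(rq, -1);
            struct rq *lowest_rq;

            if (!next)
                return 0;

            /* lowest-priority runqueue among the CPUs in next's affinity
             * mask; NULL if nothing there is lower-priority than next */
            lowest_rq = find_lock_lowest_rq(next, rq);
            if (!lowest_rq)
                return 0;

            deactivate_task(rq, next, 0);
            set_task_cpu(next, lowest_rq->cpu);
            activate_task(lowest_rq, next, 0);
            resched_task(lowest_rq->curr);      /* make the target notice */

            return 1;
        }

        static void push_rt_tasks(struct rq *rq)
        {
            /* keep going until no queued RT task can preempt anyone else */
            while (push_rt_task(rq))
                ;
        }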
     
  • This patch adds accounting to each runqueue to keep track of the
    highest prio task queued on the run queue. We only care about
    RT tasks, so if the run queue does not contain any active RT tasks
    its priority will be considered MAX_RT_PRIO.

    This information will be used for later patches.

    Signed-off-by: Steven Rostedt
    Signed-off-by: Ingo Molnar

    Steven Rostedt
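
    A sketch of the accounting, assuming the counters are updated from the RT
    enqueue/dequeue paths (the helper names inc_rt_tasks/dec_rt_tasks and the
    field rt.highest_prio are assumptions based on the description):

        static inline void inc_rt_tasks(struct task_struct *p, struct rq *rq)
        {
            rq->rt.rt_nr_running++;
            if (p->prio < rq->rt.highest_prio)  /* lower value == higher prio */
                rq->rt.highest_prio = p->prio;
        }

        static inline void dec_rt_tasks(struct task_struct *p, struct rq *rq)
        {
            rq->rt.rt_nr_running--;
            if (rq->rt.rt_nr_running) {
                /* recompute the top priority among the remaining RT tasks */
                struct rt_prio_array *array = &rq->rt.active;

                rq->rt.highest_prio = sched_find_first_bit(array->bitmap);
            } else
                rq->rt.highest_prio = MAX_RT_PRIO;  /* no RT tasks queued */
        }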