03 Apr, 2010

3 commits

  • In order to reduce the dependency on TASK_WAKING, rework the enqueue
    interface to support a proper flags field.

    Replace the int wakeup, bool head arguments with an int flags argument
    and create the following flags:

    ENQUEUE_WAKEUP - the enqueue is a wakeup of a sleeping task,
    ENQUEUE_WAKING - the enqueue has relative vruntime due to
                     having sched_class::task_waking() called,
    ENQUEUE_HEAD   - the waking task should be placed on the head
                     of the priority queue (where appropriate).

    For symmetry also convert sched_class::dequeue() to a flags scheme.
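
    A minimal sketch of the resulting interface, with illustrative flag
    values (DEQUEUE_SLEEP is assumed here as the dequeue-side counterpart):

      /* flags replacing the old (int wakeup, bool head) argument pair */
      #define ENQUEUE_WAKEUP  0x01    /* wakeup of a sleeping task */
      #define ENQUEUE_WAKING  0x02    /* vruntime relative, task_waking() ran */
      #define ENQUEUE_HEAD    0x04    /* place at the head where appropriate */

      #define DEQUEUE_SLEEP   0x01    /* assumed: task is going to sleep */

      struct sched_class {
              void (*enqueue_task)(struct rq *rq, struct task_struct *p,
                                   int flags);
              void (*dequeue_task)(struct rq *rq, struct task_struct *p,
                                   int flags);
              /* ... other methods unchanged ... */
      };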

    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Oleg noticed a few races with the TASK_WAKING usage on fork.

    - since TASK_WAKING is basically a spinlock, it should be IRQ safe;
    - since we set TASK_WAKING (*) without holding rq->lock, there could
      still be a rq->lock holder, so we do not actually get full
      serialization.

    (*) in fact we clear PF_STARTING, which in effect enables TASK_WAKING.

    Cure the second issue by not setting TASK_WAKING in sched_fork(), but
    only temporarily in wake_up_new_task() while calling select_task_rq().

    Cure the first by holding rq->lock around the select_task_rq() call;
    this disables IRQs, but it requires pushing the rq->lock release down
    into select_task_rq_fair()'s cgroup code.

    Because select_task_rq_fair() still needs to drop the rq->lock we
    cannot fully get rid of TASK_WAKING.
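
    A hedged sketch of the resulting wake_up_new_task() flow (helper names
    follow the scheduler internals of that era; details abbreviated):

      void wake_up_new_task(struct task_struct *p, unsigned long clone_flags)
      {
              unsigned long flags;
              struct rq *rq;
              int cpu;

              rq = task_rq_lock(p, &flags);   /* rq->lock held, IRQs off */
              p->state = TASK_WAKING;         /* set only here, briefly */

              /* may drop and retake rq->lock for the cgroup walk */
              cpu = select_task_rq(p, SD_BALANCE_FORK, 0);
              set_task_cpu(p, cpu);

              p->state = TASK_RUNNING;
              activate_task(rq, p, 0);        /* enqueue on the chosen rq */
              task_rq_unlock(rq, &flags);
      }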

    Reported-by: Oleg Nesterov
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Merge reason: update to latest upstream

    Signed-off-by: Ingo Molnar

    Ingo Molnar
     

14 Mar, 2010

1 commit


11 Mar, 2010

2 commits


07 Mar, 2010

1 commit

  • Make sure the compiler won't do weird things with limits; e.g. fetching
    them twice may return two different values once writable limits are
    implemented.

    That is, either use the rlimit helpers added in commit 3e10e716abf3
    ("resource: add helpers for fetching rlimits"), or ACCESS_ONCE() where
    those do not apply.
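
    For illustration, the two patterns look roughly like this (a sketch;
    the field expressions are the usual signal_struct ones):

      /* helper from commit 3e10e716abf3: one well-defined fetch */
      unsigned long cpu_limit = task_rlimit(p, RLIMIT_CPU);

      /* where the helpers don't fit, force a single load explicitly */
      unsigned long rss_limit =
              ACCESS_ONCE(p->signal->rlim[RLIMIT_RSS].rlim_cur);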

    Signed-off-by: Jiri Slaby
    Cc: Ingo Molnar
    Cc: Peter Zijlstra
    Cc: Thomas Gleixner
    Cc: john stultz
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Jiri Slaby
     

04 Feb, 2010

1 commit


23 Jan, 2010

2 commits

  • The ability to enqueue a task at the head of a SCHED_FIFO priority
    list is required to fix some violations of the POSIX scheduling policy.

    Implement the functionality in sched_rt.
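
    The core of it is picking head vs. tail insertion on the priority
    array, roughly (a sketch of the sched_rt.c enqueue path):

      struct list_head *queue = array->queue + rt_se_prio(rt_se);

      if (head)
              list_add(&rt_se->run_list, queue);      /* head of prio list */
      else
              list_add_tail(&rt_se->run_list, queue); /* normal tail add */
      __set_bit(rt_se_prio(rt_se), array->bitmap);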

    Signed-off-by: Thomas Gleixner
    Acked-by: Peter Zijlstra
    Tested-by: Carsten Emde
    Tested-by: Mathias Weber
    LKML-Reference:

    Thomas Gleixner
     
  • The ability to enqueue a task at the head of a SCHED_FIFO priority
    list is required to fix some violations of the POSIX scheduling policy.

    Extend the related functions with a "head" argument.

    Signed-off-by: Thomas Gleixner
    Acked-by: Peter Zijlstra
    Tested-by: Carsten Emde
    Tested-by: Mathias Weber
    LKML-Reference:

    Thomas Gleixner
     

21 Jan, 2010

1 commit


17 Jan, 2010

1 commit

  • kernel/sched: don't expose local functions

    The get_rr_interval_* functions are all class methods of
    struct sched_class. They are not exported so make them
    static.
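
    Since each implementation is only reachable through its class's method
    table in the same file, the change amounts to the following
    (illustrative, using the sched_rt variant):

      /* kernel/sched_rt.c: referenced only via rt_sched_class */
      static unsigned int get_rr_interval_rt(struct rq *rq,
                                             struct task_struct *task)
      {
              if (task->policy == SCHED_RR)
                      return DEF_TIMESLICE;
              return 0;
      }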

    Signed-off-by: H Hartley Sweeten
    Cc: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    H Hartley Sweeten
     

17 Dec, 2009

1 commit

  • As will be apparent in the next patch, we need a pre-wakeup hook for
    sched_fair task migration, hence rename the post-wakeup hook and add a
    pre-wakeup one.

    Signed-off-by: Peter Zijlstra
    Cc: Mike Galbraith
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

15 Dec, 2009

2 commits


09 Dec, 2009

1 commit

  • sched_rr_get_param calls
    task->sched_class->get_rr_interval(task) without protection
    against a concurrent sched_setscheduler() call which modifies
    task->sched_class.

    Serialize the access with task_rq_lock(task) and hand the rq
    pointer into get_rr_interval() as it's needed at least in the
    sched_fair implementation.
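
    Roughly, the serialized call site becomes (a sketch of
    sys_sched_rr_get_interval()):

      struct rq *rq;
      unsigned long flags;
      unsigned int time_slice;

      rq = task_rq_lock(p, &flags);   /* pins p->sched_class */
      time_slice = p->sched_class->get_rr_interval(rq, p);
      task_rq_unlock(rq, &flags);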

    Signed-off-by: Thomas Gleixner
    Acked-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Thomas Gleixner
     

04 Nov, 2009

1 commit

  • find_lowest_rq() wants to call pick_optimal_cpu() on the
    intersection of sched_domain_span(sd) and lowest_mask. Rather
    than doing a cpus_and into a temporary, we can open-code it.

    This actually makes the code slightly clearer, IMHO.
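
    Instead of a cpumask_and() into a temporary, the intersection can be
    walked directly, e.g. (a sketch; the commit itself open-codes this
    with the *_and cpumask helpers):

      int cpu;

      /* visit every cpu in sched_domain_span(sd) & lowest_mask */
      for_each_cpu_and(cpu, sched_domain_span(sd), lowest_mask) {
              /* consider cpu as a candidate lowest-priority target */
      }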

    Signed-off-by: Rusty Russell
    Acked-by: Gregory Haskins
    Cc: Steven Rostedt
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Rusty Russell
     

21 Sep, 2009

1 commit


15 Sep, 2009

3 commits


04 Sep, 2009

1 commit

  • Keep an average of the amount of time spent on RT tasks and use that
    fraction to scale down the cpu_power for regular tasks.
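
    Conceptually the scaling looks like this (a sketch close to the
    scale_rt_power() this patch adds; overflow guards elided):

      u64 total, available;

      total = sched_avg_period() + (rq->clock - rq->age_stamp);
      available = total - rq->rt_avg;     /* time left for fair tasks */

      /* cpu_power scaled by the non-RT fraction of the period */
      total >>= SCHED_LOAD_SHIFT;
      power = div_u64(available, total);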

    Signed-off-by: Peter Zijlstra
    Tested-by: Andreas Herrmann
    Acked-by: Andreas Herrmann
    Acked-by: Gautham R Shenoy
    Cc: Balbir Singh
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

02 Aug, 2009

4 commits

  • This build bug:

    In file included from kernel/sched.c:1765:
    kernel/sched_rt.c: In function ‘has_pushable_tasks’:
    kernel/sched_rt.c:1069: error: ‘struct rt_rq’ has no member named ‘pushable_tasks’
    kernel/sched_rt.c: In function ‘pick_next_task_rt’:
    kernel/sched_rt.c:1084: error: ‘struct rq’ has no member named ‘post_schedule’

    Triggers because both pushable_tasks and post_schedule are
    SMP-only fields.

    Move pushable_tasks() to the SMP section and #ifdef the post_schedule use.
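
    The shape of the fix (illustrative):

      #ifdef CONFIG_SMP
      static inline int has_pushable_tasks(struct rq *rq)
      {
              return !plist_head_empty(&rq->rt.pushable_tasks);
      }
      #endif

      static struct task_struct *pick_next_task_rt(struct rq *rq)
      {
              struct task_struct *p = _pick_next_task_rt(rq);

      #ifdef CONFIG_SMP
              /* rq->post_schedule only exists on SMP builds */
              rq->post_schedule = has_pushable_tasks(rq);
      #endif
              return p;
      }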

    Cc: Gregory Haskins
    Cc: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Ingo Molnar
     
  • A frequent mistake appears to be to call task_of() on a
    scheduler entity that is not actually a task, which can result
    in a wild pointer.

    Add a check to catch these mistakes.
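
    With CONFIG_SCHED_DEBUG enabled, the conversion can be verified before
    the container_of(), along these lines:

      #ifdef CONFIG_SCHED_DEBUG
      static inline struct task_struct *task_of(struct sched_entity *se)
      {
              WARN_ON_ONCE(!entity_is_task(se));  /* catch group entities */
              return container_of(se, struct task_struct, se);
      }
      #endif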

    Suggested-by: Ingo Molnar
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     
  • Reflect "active" cpus in the rq->rd->online field, instead of
    the online_map.

    The motivation is that things that use the root-domain code
    (such as cpupri) only care about cpus classified as "active"
    anyway. By synchronizing the root-domain state with the active
    map, we allow several optimizations.

    For instance, we can remove an extra cpumask_and from the
    scheduler hotpath by utilizing rq->rd->online (since it is now
    a cached version of cpu_active_map & rq->rd->span).
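
    For example, where the hotpath previously had to recompute the
    intersection, it can now read the cached set directly (a sketch):

      int usable;

      /* before: recompute span & active on every invocation */
      cpumask_and(tmp_mask, rq->rd->span, cpu_active_mask);
      usable = cpumask_test_cpu(cpu, tmp_mask);

      /* after: rq->rd->online is already the cached intersection */
      usable = cpumask_test_cpu(cpu, rq->rd->online);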

    Signed-off-by: Gregory Haskins
    Acked-by: Peter Zijlstra
    Acked-by: Max Krasnyansky
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Gregory Haskins
     
  • We currently have an explicit "needs_post" vtable method which
    returns a stack variable for whether we should later run
    post-schedule. This leads to an awkward exchange of the
    variable as it bubbles back up out of the context switch. Peter
    Zijlstra observed that this information could be stored in the
    run-queue itself instead of handled on the stack.

    Therefore, we revert to the method of having context_switch
    return void, and update an internal rq->post_schedule variable
    when we require further processing.

    In addition, we fix a race condition where we try to access
    current->sched_class without holding the rq->lock. This is
    technically racy, as the sched-class could change out from
    under us. Instead, we reference the per-rq post_schedule
    variable with the runqueue unlocked, but with preemption
    disabled to see if we need to reacquire the rq->lock.

    Finally, we clean the code up slightly by removing the #ifdef
    CONFIG_SMP conditionals from the schedule() call, and implement
    some inline helper functions instead.

    This patch passes checkpatch and rt-migrate.
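
    A sketch of the resulting schedule() tail, consuming the flag under
    the lock only when it is set:

      static inline void post_schedule(struct rq *rq)
      {
              if (rq->post_schedule) {        /* read with preemption off */
                      unsigned long flags;

                      spin_lock_irqsave(&rq->lock, flags);
                      if (rq->curr->sched_class->post_schedule)
                              rq->curr->sched_class->post_schedule(rq);
                      spin_unlock_irqrestore(&rq->lock, flags);

                      rq->post_schedule = 0;
              }
      }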

    Signed-off-by: Gregory Haskins
    Signed-off-by: Peter Zijlstra
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Gregory Haskins
     

10 Jul, 2009

1 commit

  • Fixes an easily triggerable BUG() when setting process affinities.

    Make sure to count the number of migratable tasks in the same place:
    the root rt_rq. Otherwise the number doesn't make sense and we'll hit
    the BUG in set_cpus_allowed_rt().

    Also, make sure we only count tasks, not groups (this is probably
    already taken care of by the fact that rt_se->nr_cpus_allowed will be
    0 for groups, but be more explicit).
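
    The explicit check is roughly (a sketch of the helper and the counting
    path):

      #ifdef CONFIG_RT_GROUP_SCHED
      #define rt_entity_is_task(rt_se)  (!(rt_se)->my_q)
      #else
      #define rt_entity_is_task(rt_se)  (1)
      #endif

      static void inc_rt_migration(struct sched_rt_entity *rt_se,
                                   struct rt_rq *rt_rq)
      {
              struct task_struct *p;

              if (!rt_entity_is_task(rt_se))
                      return;                  /* count tasks, not groups */

              p = rt_task_of(rt_se);
              rt_rq = &rq_of_rt_rq(rt_rq)->rt; /* always the root rt_rq */

              rt_rq->rt_nr_total++;
              if (p->rt.nr_cpus_allowed > 1)
                      rt_rq->rt_nr_migratory++;

              update_rt_migration(rt_rq);
      }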

    Tested-by: Thomas Gleixner
    CC: stable@kernel.org
    Signed-off-by: Peter Zijlstra
    Acked-by: Gregory Haskins
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

09 Jun, 2009

1 commit


08 Apr, 2009

1 commit


01 Apr, 2009

1 commit


28 Mar, 2009

1 commit


09 Feb, 2009

1 commit


01 Feb, 2009

1 commit


16 Jan, 2009

1 commit

  • Ingo Molnar wrote:

    > here's a new build failure with tip/sched/rt:
    >
    > LD .tmp_vmlinux1
    > kernel/built-in.o: In function `set_curr_task_rt':
    > sched.c:(.text+0x3675): undefined reference to `plist_del'
    > kernel/built-in.o: In function `pick_next_task_rt':
    > sched.c:(.text+0x37ce): undefined reference to `plist_del'
    > kernel/built-in.o: In function `enqueue_pushable_task':
    > sched.c:(.text+0x381c): undefined reference to `plist_del'

    Eliminate the plist library kconfig and make it available
    unconditionally.

    Signed-off-by: Peter Zijlstra
    Signed-off-by: Ingo Molnar

    Peter Zijlstra
     

14 Jan, 2009

2 commits


12 Jan, 2009

1 commit


11 Jan, 2009

1 commit


04 Jan, 2009

2 commits

  • Impact: prevents panic from stack overflow on numa-capable machines.

    Some of the "removal of stack hogs" changes in kernel/sched.c by using
    node_to_cpumask_ptr were undone by the early cpumask API updates, and
    causes a panic due to stack overflow. This patch undoes those changes
    by using cpumask_of_node() which returns a 'const struct cpumask *'.

    In addition, cpu_coregoup_map is replaced with cpu_coregroup_mask further
    reducing stack usage. (Both of these updates removed 9 FIXME's!)

    Also:
    Pick up some remaining changes from the old 'cpumask_t' functions to
    the new 'struct cpumask *' functions.

    Optimize memory traffic by allocating each percpu local_cpu_mask on the
    same node as the referring cpu.
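
    Illustrative before/after for the APIs involved (a sketch):

      /* new API: a const pointer instead of a cpumask_t copy on stack */
      const struct cpumask *node_mask = cpumask_of_node(cpu_to_node(cpu));
      const struct cpumask *core_mask = cpu_coregroup_mask(cpu);

      /* percpu mask allocated on the referring cpu's own node */
      alloc_cpumask_var_node(&per_cpu(local_cpu_mask, cpu),
                             GFP_KERNEL, cpu_to_node(cpu));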

    Signed-off-by: Mike Travis
    Acked-by: Rusty Russell
    Signed-off-by: Ingo Molnar

    Mike Travis
     
  • …ux-2.6-cpumask into merge-rr-cpumask

    Conflicts:
    arch/x86/kernel/io_apic.c
    kernel/rcuclassic.c
    kernel/sched.c
    kernel/time/tick-sched.c

    Signed-off-by: Mike Travis <travis@sgi.com>
    [ mingo@elte.hu: backmerged typo fix for io_apic.c ]
    Signed-off-by: Ingo Molnar <mingo@elte.hu>

    Mike Travis