07 Jan, 2006

1 commit

  • I've spent the past 3 days digging into a glibc testsuite failure in
    current CVS, specifically libc/rt/tst-cputimer1.c. The thr1 and thr2
    timers fire too early in the second pass of this test. The second
    pass is noteworthy because it makes use of intervals, whereas the
    first pass does not.
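
    For context, here is a minimal userspace sketch of what the second
    pass exercises: a process CPU-clock timer armed with an interval.
    The handler, values, and spin loop are illustrative, not copied from
    the glibc test; with it_interval set, each expiration must wait for
    the process to actually consume that much CPU time.

    #include <signal.h>
    #include <time.h>

    static volatile sig_atomic_t fired;

    static void on_alarm(int sig)
    {
            (void)sig;
            fired++;
    }

    int main(void)
    {
            timer_t timerid;
            struct sigevent sev = { .sigev_notify = SIGEV_SIGNAL,
                                    .sigev_signo  = SIGALRM };
            /* 100ms initial expiry, then a 100ms interval. */
            struct itimerspec its = {
                    .it_value    = { 0, 100 * 1000 * 1000 },
                    .it_interval = { 0, 100 * 1000 * 1000 },
            };

            signal(SIGALRM, on_alarm);
            if (timer_create(CLOCK_PROCESS_CPUTIME_ID, &sev, &timerid))
                    return 1;
            if (timer_settime(timerid, 0, &its, NULL))
                    return 1;
            while (fired < 5)
                    ;       /* burn CPU so the process clock advances */
            return 0;
    }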

    Throughout the posix-cpu-timers.c code, the calculation of the
    process sched_time sum is implemented roughly as:

    unsigned long long sum;

    /* Time banked in the signal struct, plus every live thread's. */
    sum = tsk->signal->sched_time;
    t = tsk;
    do {
            sum += t->sched_time;
            t = next_thread(t);
    } while (t != tsk);

    In fact this is the exact scheme used by check_process_timers().

    In the case of check_process_timers(), current->sched_time has just
    been updated (via scheduler_tick(), which is invoked by
    update_process_times(), which subsequently invokes
    run_posix_cpu_timers()), so there is no special processing necessary
    for that case.

    In other contexts, we have to allow for the fact that tsk->sched_time
    may be slightly stale when tsk is the current task, and the
    posix-cpu-timers.c code uses current_sched_time() to deal with that.

    Unfortunately, one spot does so in an erroneous and inconsistent
    manner, and that is what results in the early timer firing.

    In cpu_clock_sample_group_locked(), it does this:

    cpu->sched = p->signal->sched_time;
    /* Add in each other live thread. */
    while ((t = next_thread(t)) != p) {
            cpu->sched += t->sched_time;
    }
    if (p->tgid == current->tgid) {
            /*
             * We're sampling ourselves, so include the
             * cycles not yet banked. We still omit
             * other threads running on other CPUs,
             * so the total can always be behind as
             * much as max(nthreads-1,ncpus) * (NSEC_PER_SEC/HZ).
             */
            cpu->sched += current_sched_time(current);
    } else {
            cpu->sched += p->sched_time;
    }

    The problem is the "p->tgid == current->tgid" test. If "p" is not
    current but the tgids are the same, we add current's sched_time to
    cpu->sched twice (once in the loop, once via current_sched_time())
    and omit "p"'s sched_time entirely, which is very wrong.

    posix-cpu-timers.c has a helper function, sched_ns(p), which takes
    care of this, so my fix is to use it here instead of the special
    tgid test.
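
    A sketch of what the fixed sampling loop looks like with that change
    (paraphrased, not the verbatim patch):

    cpu->sched = p->signal->sched_time;
    /* Add in each other live thread. */
    while ((t = next_thread(t)) != p) {
            cpu->sched += t->sched_time;
    }
    /*
     * sched_ns(p) returns p->sched_time, folding in the not-yet-banked
     * cycles only when p really is current, so the broken tgid test
     * and its else branch both go away.
     */
    cpu->sched += sched_ns(p);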

    The fact that current can be one of the sub-threads of "p" points out
    that we could make things a little more accurate, perhaps by using
    sched_ns() on every thread we process in these loops. It also points
    out that we do not use the most accurate value for threads in the
    group actively running on other CPUs (and the comment mentions this).

    But that is a future enhancement, and this fix here definitely makes
    sense.

    Signed-off-by: David S. Miller
    Signed-off-by: Linus Torvalds

    David S. Miller

24 Oct, 2005

5 commits

  • This might be harmless, but it looks like a race from code inspection
    (I was unable to trigger it). I must admit, I don't understand why we
    can't return TIMER_RETRY after 'spin_unlock(&p->sighand->siglock)'
    without doing bump_cpu_timer(), but this is what the original code
    does.

    posix_cpu_timer_set:

    read_lock(&tasklist_lock);

    spin_lock(&p->sighand->siglock);
    list_del_init(&timer->it.cpu.entry);
    spin_unlock(&p->sighand->siglock);

    We are probably deleting the timer from run_posix_cpu_timers()'s
    local 'firing' list while run_posix_cpu_timers() iterates it with
    list_for_each_safe().

    Various bad things can happen; for example, we can delete the timer
    so that list_for_each_safe() never visits it and run_posix_cpu_timers()
    never resets its '->firing' flag. In that case,

    ....

    if (timer->it.cpu.firing) {
            read_unlock(&tasklist_lock);
            timer->it.cpu.firing = -1;
            return TIMER_RETRY;
    }

    sys_timer_settime() goes to 'retry:', calls posix_cpu_timer_set()
    again, it returns TIMER_RETRY again, and so on, forever.
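
    To see why that matters, here is a paraphrased sketch (shape only,
    not verbatim) of the firing-list walk in run_posix_cpu_timers(): an
    entry stolen from 'firing' by a concurrent list_del_init() is simply
    never visited, so its ->firing flag stays set.

    LIST_HEAD(firing);
    struct cpu_timer_list *timer, *next;

    /* ... expired timers were moved onto 'firing' under the locks ... */

    list_for_each_entry_safe(timer, next, &firing, entry) {
            list_del_init(&timer->entry);
            timer->firing = 0;      /* never runs for a stolen entry */
            /* ... deliver the expiry signal ... */
    }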

    Signed-off-by: Oleg Nesterov
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
  • No need to rebalance when task exited

    Signed-off-by: Oleg Nesterov
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
  • do_exit() clears ->it_##clock##_expires, but nothing prevents
    another CPU from attaching a timer to the exiting process after that.

    After exit_notify() does 'write_unlock_irq(&tasklist_lock)' and
    before do_exit() calls 'schedule()', a local timer interrupt can find
    tsk->exit_state != 0. If that state was EXIT_DEAD (or another CPU
    does sys_wait4()), the interrupted task has ->signal == NULL.

    At this moment the exiting task has no pending CPU timers; they were
    cleaned up in __exit_signal()->posix_cpu_timers_exit{,_group}(), so
    we can just return from the irq.
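
    A minimal sketch of the resulting bail-out, assuming it sits at the
    top of run_posix_cpu_timers() (the exact test and placement are
    paraphrased from the description above):

    void run_posix_cpu_timers(struct task_struct *tsk)
    {
            /*
             * Past exit_notify(), the CPU timers were already torn down
             * in __exit_signal()->posix_cpu_timers_exit{,_group}(), and
             * ->signal may be NULL, so just return from the interrupt.
             */
            if (unlikely(tsk->exit_state))
                    return;

            /* ... normal expiry processing follows ... */
    }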

    Signed-off-by: Oleg Nesterov
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
  • 1. cleanup_timers() sets timer->task = NULL under the tasklist +
    ->sighand locks. That means that this code in posix_cpu_timer_del()
    and posix_cpu_timer_set()

    lock_timer(timer);
    if (timer->task == NULL)
            return;
    read_lock(&tasklist_lock);
    put_task_struct(timer->task);

    is racy. With this patch, timer->task is modified and read only
    under timer->it_lock. Sadly, this means that a dead task_struct
    won't be freed until the timer is deleted or armed.

    2. run_posix_cpu_timers() collects expired timers into a local list,
    again under the tasklist + ->sighand locks. That means that
    posix_cpu_timer_del() should check timer->it.cpu.firing under these
    locks too.
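
    A hedged sketch of point 2 (shape only): recheck ->firing under the
    same locks that run_posix_cpu_timers() held while collecting expired
    timers, so a firing timer is never unlinked behind the collector's
    back.

    read_lock(&tasklist_lock);
    spin_lock(&p->sighand->siglock);
    if (timer->it.cpu.firing) {
            /* Let the caller drop the locks and retry later. */
            ret = TIMER_RETRY;
    } else {
            list_del(&timer->it.cpu.entry);
    }
    spin_unlock(&p->sighand->siglock);
    read_unlock(&tasklist_lock);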

    Signed-off-by: Oleg Nesterov
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
  • Bursty timers aren't good for anybody, very much including latency
    for other programs when we trigger lots of timers in interrupt
    context. So set an arbitrary limit, after which we'll handle the
    rest on the next timer tick.
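
    A sketch of the idea, loosely patterned on the per-tick expiry walk
    (the cap of 20, prof_ticks(), and the loop shape are illustrative,
    not authoritative; 'firing' is the local list the caller passes in):

    int maxfire = 20;
    struct list_head *timers = tsk->cpu_timers;
    cputime_t ptime = prof_ticks(tsk);

    while (!list_empty(timers)) {
            struct cpu_timer_list *t = list_entry(timers->next,
                                                  struct cpu_timer_list,
                                                  entry);
            /*
             * Stop at the cap or at the first unexpired timer; whatever
             * remains fires on a later tick instead of bursting now.
             */
            if (!--maxfire || cputime_lt(ptime, t->expires.cpu))
                    break;
            list_move_tail(&t->entry, firing);
    }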

    Noted by Oleg Nesterov

    Signed-off-by: Linus Torvalds

    Linus Torvalds

20 Oct, 2005

1 commit

  • Oleg Nesterov reported an SMP deadlock. If there is a running timer
    tracking a different process's CPU time clock when the process owning
    the timer exits, we deadlock on tasklist_lock in posix_cpu_timer_del via
    exit_itimers.

    That code was using tasklist_lock to check for a race with
    __exit_signal being called on the timer-target task and clearing its
    ->signal. However, there is actually no such race. __exit_signal
    will have called posix_cpu_timers_exit and posix_cpu_timers_exit_group
    before it does that. Those will clear those k_itimers' association
    with the dying task, so posix_cpu_timer_del will return early and
    never reach the code in question.

    In addition, posix_cpu_timer_del called from exit_itimers during
    execve, or directly from timer_delete in the process owning the
    timer, can race with an exiting timer-target task and cause a double
    put on the timer-target task_struct. Make sure we always access the
    cpu_timers lists with the sighand lock held.
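
    A hedged sketch of the resulting shape of posix_cpu_timer_del()
    (field names are from the code of that era; the dead ->sighand case,
    reference dropping, and error handling are elided):

    int posix_cpu_timer_del(struct k_itimer *timer)
    {
            struct task_struct *p = timer->it.cpu.task;

            if (p == NULL)
                    return 0;       /* already detached by the dying task */

            /* Touch the cpu_timers list only under the sighand lock. */
            spin_lock(&p->sighand->siglock);
            if (!list_empty(&timer->it.cpu.entry))
                    list_del(&timer->it.cpu.entry);
            spin_unlock(&p->sighand->siglock);
            return 0;
    }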

    Signed-off-by: Roland McGrath
    Signed-off-by: Chris Wright
    Signed-off-by: Linus Torvalds

    Roland McGrath

17 Apr, 2005

1 commit

  • Initial git repository build. I'm not bothering with the full history,
    even though we have it. We can create a separate "historical" git
    archive of that later if we want to, and in the meantime it's about
    3.2GB when imported into git - space that would just make the early
    git days unnecessarily complicated, when we don't have a lot of good
    infrastructure for it.

    Let it rip!

    Linus Torvalds