26 Jan, 2008

1 commit


16 Jan, 2008

1 commit

  • Dave Young reported warnings from lockdep that the workqueue API
    can sometimes try to register lockdep classes with the same key
    but different names. This is not permitted in lockdep.

    Unfortunately, I was unaware of that restriction when I wrote
    the code to debug workqueue problems with lockdep and used the
    workqueue name as the lockdep class name. This obviously leads
    to the problem when the workqueue name is dynamic.

    This patch solves the problem by always using a constant name
    for the workqueue's lockdep class, namely either the constant
    name that was passed in or a string consisting of the variable
    name.

    Signed-off-by: Johannes Berg
    Signed-off-by: Ingo Molnar
    Signed-off-by: Peter Zijlstra

    Johannes Berg
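
    A sketch of the idea (the macro and helper names here are hypothetical):
    pick a compile-time constant string for the lockdep class, either the
    constant name passed in or the stringified variable name:

        /* Hypothetical wrapper macro: choose a constant lockdep class name even
         * when the runtime workqueue name is dynamic.  __my_create_workqueue()
         * stands in for the real worker function. */
        #define my_create_workqueue(name)                                     \
        ({                                                                    \
                static struct lock_class_key __key;                           \
                const char *__lock_name;                                      \
                                                                              \
                if (__builtin_constant_p(name))                               \
                        __lock_name = (name);  /* the constant name passed in */ \
                else                                                          \
                        __lock_name = #name;   /* the stringified variable name */ \
                                                                              \
                __my_create_workqueue((name), &__key, __lock_name);           \
        })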
     

20 Oct, 2007

2 commits

  • The task_struct->pid member is going to be deprecated, so start
    using the helpers (task_pid_nr/task_pid_vnr/task_pid_nr_ns) in
    the kernel.

    The first place to start is the pid printed to dmesg - in this
    case we may safely use task_pid_nr(). Besides, printks account for
    more (much more) than half of all the explicit pid usage.

    [akpm@linux-foundation.org: git-drm went and changed lots of stuff]
    Signed-off-by: Pavel Emelyanov
    Cc: Dave Airlie
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Pavel Emelyanov
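
    A minimal before/after illustration of the printk conversion (the message
    text itself is made up for the example):

        /* before: reads task_struct->pid directly */
        printk(KERN_DEBUG "[drm] pid %d queued a buffer\n", current->pid);

        /* after: use the helper; for a dmesg line the global pid number is fine */
        printk(KERN_DEBUG "[drm] pid %d queued a buffer\n", task_pid_nr(current));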
     
  • In the following scenario:

    code path 1:
    my_function() -> lock(L1); ...; flush_workqueue(); ...

    code path 2:
    run_workqueue() -> my_work() -> ...; lock(L1); ...

    you can get a deadlock when my_work() is queued or running
    but my_function() has acquired L1 already.

    This patch adds a pseudo-lock to each workqueue to make lockdep
    warn about this scenario.

    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Johannes Berg
    Acked-by: Oleg Nesterov
    Acked-by: Ingo Molnar
    Acked-by: Peter Zijlstra
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Johannes Berg
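
    The same scenario sketched in C (my_wq, my_work_fn and L1 are hypothetical
    names):

        static DEFINE_MUTEX(L1);
        static struct workqueue_struct *my_wq;

        static void my_work_fn(struct work_struct *work)
        {
                mutex_lock(&L1);                /* code path 2: the work needs L1 */
                /* ... */
                mutex_unlock(&L1);
        }

        static void my_function(void)
        {
                mutex_lock(&L1);                /* code path 1: L1 already held ...   */
                flush_workqueue(my_wq);         /* ... while waiting for my_work_fn() */
                mutex_unlock(&L1);              /* => deadlock; lockdep now warns     */
        }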
     

28 Aug, 2007

1 commit

  • Fix bogus DEBUG_PREEMPT warning on x86_64 when a CPU is brought online after
    bootup: current_is_keventd is right to note that its use of smp_processor_id
    is preempt-safe, but it should use raw_smp_processor_id to avoid the warning.

    Signed-off-by: Hugh Dickins
    Signed-off-by: Linus Torvalds

    Hugh Dickins
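
    The change is a single call site; a sketch of the distinction:

        /* smp_processor_id() triggers the CONFIG_DEBUG_PREEMPT check even though
         * this caller is preempt-safe by construction; the raw variant returns
         * the same value without the check. */
        int cpu = raw_smp_processor_id();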
     

18 Jul, 2007

2 commits

  • Pointed out by Michal Schmidt.

    The bug was introduced in 2.6.22 by me.

    cleanup_workqueue_thread() does flush_cpu_workqueue(cwq) in a loop until
    ->worklist becomes empty. This can live-lock: a re-niced caller can get the
    CPU after wake_up() and insert a new barrier before the lower-priority
    cwq->thread has a chance to clear ->current_work.

    Change cleanup_workqueue_thread() to do flush_cpu_workqueue(cwq) only once.
    We can rely on the fact that run_workqueue() won't return until it flushes
    all works. So it is safe to call kthread_stop() after that, the "should
    stop" request won't be noticed until run_workqueue() returns.

    Signed-off-by: Oleg Nesterov
    Cc: Michal Schmidt
    Cc: Srivatsa Vaddagiri
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
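
    A sketch of the resulting shutdown order (the function shape is assumed from
    the description above):

        static void cleanup_workqueue_thread(struct cpu_workqueue_struct *cwq)
        {
                if (cwq->thread == NULL)
                        return;

                flush_cpu_workqueue(cwq);       /* once, no retry loop               */
                kthread_stop(cwq->thread);      /* "should stop" is only noticed     */
                cwq->thread = NULL;             /* after run_workqueue() returns     */
        }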
     
  • Currently, the freezer treats all tasks as freezable, except for the kernel
    threads that explicitly set the PF_NOFREEZE flag for themselves. This
    approach is problematic, since it requires every kernel thread to either
    set PF_NOFREEZE explicitly, or call try_to_freeze(), even if it doesn't
    care for the freezing of tasks at all.

    It seems better to only require the kernel threads that want to or need to
    be frozen to use some freezer-related code and to remove any
    freezer-related code from the other (nonfreezable) kernel threads, which is
    done in this patch.

    The patch causes all kernel threads to be nonfreezable by default (i.e. to
    have PF_NOFREEZE set by default) and introduces the set_freezable()
    function that should be called by the freezable kernel threads in order to
    unset PF_NOFREEZE. It also makes all of the currently freezable kernel
    threads call set_freezable(), so it shouldn't cause any (intentional)
    change of behaviour. Additionally, it updates documentation to
    describe the freezing of tasks more accurately.

    [akpm@linux-foundation.org: build fixes]
    Signed-off-by: Rafael J. Wysocki
    Acked-by: Nigel Cunningham
    Cc: Pavel Machek
    Cc: Oleg Nesterov
    Cc: Gautham R Shenoy
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rafael J. Wysocki
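
    A minimal sketch of what a freezable kernel thread looks like after this
    change (the loop body is hypothetical):

        static int my_kthread(void *unused)
        {
                set_freezable();                /* opt in: clear PF_NOFREEZE */

                while (!kthread_should_stop()) {
                        try_to_freeze();        /* park here across suspend  */
                        /* ... do the actual work ... */
                        schedule_timeout_interruptible(HZ);
                }
                return 0;
        }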
     

17 Jul, 2007

2 commits

  • Change cancel_work_sync() and cancel_delayed_work_sync() to return a boolean
    indicating whether the work was actually cancelled. A zero return value means
    that the work was not pending/queued.

    Without such a change it is sometimes impossible to avoid flush_workqueue();
    see the next patch for an example.

    Also, this patch unifies both functions and kills the (unlikely) busy-wait
    loop.

    Signed-off-by: Oleg Nesterov
    Acked-by: Jarek Poplawski
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
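
    A hypothetical caller that the new return value helps; if nothing was
    pending there is no cancelled submission to account for, and no
    flush_workqueue() is needed (dev->poll_work and the refcounting scheme are
    made up for the example):

        /* The queueing path took a reference that the queued work would have
         * dropped; since we cancelled a pending submission, drop it here. */
        if (cancel_delayed_work_sync(&dev->poll_work))
                put_device(&dev->dev);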
     
  • Imho, the current naming of cancel_xxx workqueue functions is very confusing.

    cancel_delayed_work()
    cancel_rearming_delayed_work()
    cancel_rearming_delayed_workqueue() // obsolete

    cancel_work_sync()

    This looks as if the first two functions differ in the "type" of their
    argument, which is no longer true; nowadays the difference is the behaviour.

    The semantics of cancel_rearming_delayed_work(dwork) were changed
    significantly: it no longer requires that dwork rearm itself, and it cancels
    dwork synchronously.

    Rename it to cancel_delayed_work_sync(). This matches cancel_delayed_work()
    and cancel_work_sync(). Re-create cancel_rearming_delayed_work() as a simple
    inline obsolete wrapper, like cancel_rearming_delayed_workqueue().

    Signed-off-by: Oleg Nesterov
    Acked-by: Jarek Poplawski
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
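
    The obsolete wrapper then reduces to (sketch):

        /* obsolete; use cancel_delayed_work_sync() in new code */
        static inline void cancel_rearming_delayed_work(struct delayed_work *dwork)
        {
                cancel_delayed_work_sync(dwork);
        }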
     

24 May, 2007

1 commit

  • cleanup_workqueue_thread() and cwq_should_stop() are overcomplicated.

    Convert the code to use kthread_should_stop/kthread_stop as was
    suggested by Gautham and Srivatsa.

    In particular this patch removes the (unlikely) busy-wait loop from the
    exit path, it was a temporary and ugly kludge (if not a bug).

    Note: the current code was designed to solve another old problem:
    work->func can't share locks with hotplug callbacks. I think this could
    be done, see

    http://marc.info/?l=linux-kernel&m=116905366428633

    but this needs some more complications to preserve CPU affinity of
    cwq->thread during cpu_up(). A freezer-based hotplug looks more
    appealing.

    [akpm@linux-foundation.org: make it more tolerant of gcc borkenness]
    Signed-off-by: Oleg Nesterov
    Cc: Zilvinas Valinskas
    Cc: Gautham R Shenoy
    Cc: Srivatsa Vaddagiri
    Cc: "Rafael J. Wysocki"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
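
    A simplified sketch of the worker loop this converges on (the exact wait
    condition and field names are assumed):

        static int worker_thread(void *__cwq)
        {
                struct cpu_workqueue_struct *cwq = __cwq;

                while (!kthread_should_stop()) {
                        wait_event_interruptible(cwq->more_work,
                                        !list_empty(&cwq->worklist) ||
                                        kthread_should_stop());
                        run_workqueue(cwq);
                }
                return 0;       /* collected by kthread_stop() */
        }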
     

10 May, 2007

27 commits

  • Since nonboot CPUs are now disabled after tasks and devices have been
    frozen and the CPU hotplug infrastructure is used for this purpose, we need
    special CPU hotplug notifications that will help the CPU-hotplug-aware
    subsystems distinguish normal CPU hotplug events from CPU hotplug events
    related to a system-wide suspend or resume operation in progress. This
    patch introduces such notifications and causes them to be used during
    suspend and resume transitions. It also changes all of the
    CPU-hotplug-aware subsystems to take these notifications into consideration
    (for now they are handled in the same way as the corresponding "normal"
    ones).

    [oleg@tv-sign.ru: cleanups]
    Signed-off-by: Rafael J. Wysocki
    Cc: Gautham R Shenoy
    Cc: Pavel Machek
    Signed-off-by: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Rafael J. Wysocki
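
    A sketch of how a hotplug callback typically folds the new events in (the
    _FROZEN variants are the suspend/resume counterparts described above; the
    handler itself is hypothetical):

        static int my_cpu_callback(struct notifier_block *nb,
                                   unsigned long action, void *hcpu)
        {
                switch (action) {
                case CPU_UP_PREPARE:
                case CPU_UP_PREPARE_FROZEN:     /* same handling, for now */
                        /* ... */
                        break;
                case CPU_DEAD:
                case CPU_DEAD_FROZEN:
                        /* ... */
                        break;
                }
                return NOTIFY_OK;
        }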
     
  • Thanks to Jarek Poplawski for the ideas and for spotting the bug in the
    initial draft patch.

    cancel_rearming_delayed_work() currently has many limitations, because it
    requires that dwork always re-arms itself via queue_delayed_work(). So it
    hangs forever if dwork doesn't do this, or if cancel_rearming_delayed_work/
    cancel_delayed_work was already called. It uses flush_workqueue() in a
    loop, so it can't be used if the workqueue was frozen, and it is potentially
    live-lockable on a busy system if the delay is small.

    With this patch cancel_rearming_delayed_work() doesn't make any assumptions
    about dwork, it can re-arm itself via queue_delayed_work(), or
    queue_work(), or do nothing.

    As a "side effect", cancel_work_sync() was changed to handle re-arming works
    as well.

    Disadvantages:

    - this patch adds wmb() to insert_work().

    - slows down the fast path (when del_timer() succeeds on entry) of
    cancel_rearming_delayed_work(), because wait_on_work() is called
    unconditionally. In that case, compared to the old version, we are
    doing "unneeded" lock/unlock for each online CPU.

    On the other hand, this means we don't need to use cancel_work_sync()
    after cancel_rearming_delayed_work().

    - complicates the code (.text grows by 130 bytes).

    [akpm@linux-foundation.org: fix speling]
    Signed-off-by: Oleg Nesterov
    Cc: David Chinner
    Cc: David Howells
    Cc: Gautham Shenoy
    Acked-by: Jarek Poplawski
    Cc: Srivatsa Vaddagiri
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • worker_thread() inherits ignored SIGCHLD and numa_default_policy() from its
    parent, kthreadd. No need to setup this again.

    Signed-off-by: Oleg Nesterov
    Acked-by: "Eric W. Biederman"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • flush_work(wq, work) doesn't need the first parameter, we can use cwq->wq
    (this was possible from the very beginning, I missed this). So we can unify
    flush_work_keventd and flush_work.

    Also, rename flush_work() to cancel_work_sync() and fix all callers.
    Perhaps this is not the best name, but "flush_work" is really bad.

    (akpm: this is why the earlier patches bypassed maintainers)

    Signed-off-by: Oleg Nesterov
    Cc: Jeff Garzik
    Cc: "David S. Miller"
    Cc: Jens Axboe
    Cc: Tejun Heo
    Cc: Auke Kok
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • worker_thread() can miss freeze_process()->signal_wake_up() if it happens
    between try_to_freeze() and prepare_to_wait(). We should check freezing()
    before entering schedule().

    This race was introduced by me in

    [PATCH 1/1] workqueue: don't migrate pending works from the dead CPU

    Looks like mm/vmscan.c:kswapd() has the same race.

    Signed-off-by: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
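
    A sketch of the corrected wait sequence (the surrounding structure is
    assumed from the description above):

        DEFINE_WAIT(wait);

        prepare_to_wait(&cwq->more_work, &wait, TASK_INTERRUPTIBLE);
        if (!freezing(current) && list_empty(&cwq->worklist))
                schedule();             /* the freezer's wakeup can no longer be lost */
        finish_wait(&cwq->more_work, &wait);

        try_to_freeze();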
     
  • worker_thread() doesn't need to "Block and flush all signals", this was
    already done by its caller, kthread().

    Signed-off-by: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • We don't have any users, and it is not so trivial to use NOAUTOREL works
    correctly. It is better to simplify API.

    Delete NOAUTOREL support and rename work_release to work_clear_pending to
    avoid confusion.

    Signed-off-by: Oleg Nesterov
    Acked-by: David Howells
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • cancel_rearming_delayed_workqueue(wq, dwork) doesn't need the first
    parameter. We don't hang on un-queued dwork any longer, and work->data
    doesn't change its type. This means we can always figure out "wq" from
    dwork when it is needed.

    Remove this parameter, and rename the function to
    cancel_rearming_delayed_work(). Re-create an inline "obsolete"
    cancel_rearming_delayed_workqueue(wq) which just calls
    cancel_rearming_delayed_work().

    Signed-off-by: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • Cleanup. A number of per_cpu_ptr(wq->cpu_wq, cpu) users have to check that
    cpu is valid for this wq. Make a simple helper.

    Signed-off-by: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
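
    A sketch of such a helper (the name is hypothetical):

        static struct cpu_workqueue_struct *wq_per_cpu(struct workqueue_struct *wq,
                                                       int cpu)
        {
                if (unlikely(is_single_threaded(wq)))
                        cpu = singlethread_cpu;
                return per_cpu_ptr(wq->cpu_wq, cpu);
        }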
     
  • Change queue_delayed_work() to use queue_delayed_work_on() to avoid the code
    duplication (saves 133 bytes).

    Q: queue_delayed_work() enqueues &dwork->work directly when delay == 0, why?

    [jirislaby@gmail.com: oops fix]
    Signed-off-by: Oleg Nesterov
    Signed-off-by: Jiri Slaby
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • Currently typeof(delayed_work->work.data) is

    "struct workqueue_struct" when the timer is pending

    "struct cpu_workqueue_struct" whe the work is queued

    This makes it impossible to use flush_work(delayed_work->work) in addition
    to cancel_delayed_work/cancel_rearming_delayed_work, which is not good.

    Change queue_delayed_work/delayed_work_timer_fn to use cwq, not wq. This
    complicates (and uglifies) these functions a little bit, but allows us to
    use flush_work(dwork) and imho makes the whole code more consistent.

    Also, document the fact that cancel_rearming_delayed_work() doesn't guarantee
    the completion of work->func() upon return.

    Signed-off-by: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • CPU_UP_PREPARE binds cwq->thread to the new CPU. So CPU_UP_CANCELED tries to
    wake up the task which is bound to the failed CPU.

    With this patch we don't bind cwq->thread until CPU becomes online. The first
    wake_up() after kthread_create() is a bit special, make a simple helper for
    that.

    Signed-off-by: Oleg Nesterov
    Cc: Gautham R Shenoy
    Cc: "Rafael J. Wysocki"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • The only caller of init_workqueues() is do_basic_setup().

    Signed-off-by: Oleg Nesterov
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • Add explicit workqueue_struct->singlethread flag. This lessens .text a
    little, but most importantly this allows us to manipulate wq->list without
    changing the meaning of is_single_threaded().

    Signed-off-by: Oleg Nesterov
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • The code like

        if (is_single_threaded(wq))
                do_something(singlethread_cpu);
        else {
                for_each_cpu_mask(cpu, cpu_populated_map)
                        do_something(cpu);
        }

    looks very annoying. We can add "static cpumask_t cpu_singlethread_map" and
    simplify the code. Lessens .text a bit, and imho makes the code more readable.

    Signed-off-by: Oleg Nesterov
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
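
    With such a map the call sites collapse to a single loop (the helper name
    is assumed):

        int cpu;
        const cpumask_t *cpu_map = wq_cpu_map(wq);      /* cpu_singlethread_map or
                                                           cpu_populated_map */

        for_each_cpu_mask(cpu, *cpu_map)
                do_something(cpu);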
     
  • cancel_rearming_delayed_workqueue(dwork) will hang forever if dwork was not
    scheduled, because in that case cancel_delayed_work()->del_timer_sync() never
    returns true.

    I don't know if there are any callers which may have problems, but this is not
    so convenient, and the fix is very simple.

    Q: it looks like we don't need the "struct workqueue_struct *wq" parameter.
    If the timer was aborted successfully, get_wq_data() == wq. Is it worth
    adding a new function?

    Signed-off-by: Oleg Nesterov
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • work->func() may sleep; it's a bug to call run_workqueue() with irqs disabled.

    Signed-off-by: Oleg Nesterov
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • Because it has no callers.

    Actually, I think the whole idea of run_scheduled_work() was not right; it is
    not good to mix "unqueue this work and execute its ->func()" into one function.

    Signed-off-by: Oleg Nesterov
    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • Currently CPU_DEAD uses kthread_stop() to stop cwq->thread and then
    transfers cwq->worklist to another CPU. However, it is very unlikely that
    worker_thread() will notice kthread_should_stop() before flushing
    cwq->worklist. It is only possible if worker_thread() was preempted after
    run_workqueue(cwq), a new work_struct was added, and CPU_DEAD happened
    before cwq->thread has a chance to run.

    This means that take_over_work() mostly adds unneeded complications. Note
    also that kthread_stop() is not good per se, wake_up_process() may confuse
    work->func() if it sleeps waiting for some event.

    Remove take_over_work() and migrate_sequence complications. CPU_DEAD sets
    the cwq->should_stop flag (introduced by this patch) and waits for
    cwq->thread to flush cwq->worklist and exit. Because the dead CPU is not
    on cpu_online_map, no more works can be added to that cwq.

    cpu_populated_map was introduced to optimize for_each_possible_cpu(); it is
    not strictly needed and is really more documentation than anything else.

    Saves 418 bytes.

    Signed-off-by: Oleg Nesterov
    Cc: Srivatsa Vaddagiri
    Cc: "Pallipadi, Venkatesh"
    Cc: Gautham shenoy
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • Pointed out by Srivatsa Vaddagiri.

    cleanup_workqueue_thread() sets cwq->thread = NULL and does kthread_stop().
    This breaks the "if (cwq->thread == current)" logic in flush_cpu_workqueue()
    and leads to deadlock.

    Kill the thread first, then clear cwq->thread. workqueue_mutex protects us
    from create_workqueue_thread(), so we don't need cwq->lock.

    Signed-off-by: Oleg Nesterov
    Cc: Srivatsa Vaddagiri
    Cc: "Pallipadi, Venkatesh"
    Cc: Gautham shenoy
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • Many thanks to Srivatsa Vaddagiri for the helpful discussion and for spotting
    the bug in my previous attempt.

    work->func() (and thus flush_workqueue()) must not use workqueue_mutex;
    this leads to deadlock when CPU_DEAD does kthread_stop(). However, without
    this mutex held we can't detect CPU_DEAD in progress, which can move pending
    works to another CPU while the dead one is not on cpu_online_map.

    Change flush_workqueue() to use for_each_possible_cpu(). This means that
    flush_cpu_workqueue() may hit a CPU which is already dead. However, in that
    case

    !list_empty(&cwq->worklist) || cwq->current_work != NULL

    means that CPU_DEAD is in progress; it will do kthread_stop() + take_over_work(),
    so we can proceed and insert a barrier. We hold cwq->lock, so we are safe.

    Also, add migrate_sequence incremented by take_over_work() under cwq->lock.
    If take_over_work() happened before we checked this CPU, we should see the
    new value after spin_unlock().

    Further possible changes:

    remove CPU_DEAD handling (along with take_over_work, migrate_sequence)
    from workqueue.c. CPU_DEAD just sets cwq->please_exit_after_flush flag.

    CPU_UP_PREPARE->create_workqueue_thread() clears this flag, and creates
    the new thread if cwq->thread == NULL.

    This way the workqueue/cpu-hotplug interaction is almost zero, workqueue_mutex
    just protects "workqueues" list, CPU_LOCK_ACQUIRE/CPU_LOCK_RELEASE go away.

    Signed-off-by: Oleg Nesterov
    Cc: Srivatsa Vaddagiri
    Cc: "Pallipadi, Venkatesh"
    Cc: Gautham shenoy
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • Currently ->freezeable is per-cpu, which is wrong: CPU_UP_PREPARE creates a
    cwq->thread which is not freezeable. Move ->freezeable to workqueue_struct.

    Signed-off-by: Oleg Nesterov
    Cc: Srivatsa Vaddagiri
    Cc: "Pallipadi, Venkatesh"
    Cc: Gautham shenoy
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • Now that we have ->current_work we can avoid adding a barrier and waiting
    for its completion when the cwq's queue is empty.

    Note: this change is also useful if we change flush_workqueue() to also
    check the dead CPUs.

    Signed-off-by: Oleg Nesterov
    Cc: Srivatsa Vaddagiri
    Cc: Gautham Shenoy
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • Cc: Oleg Nesterov
    Cc: Srivatsa Vaddagiri
    Cc: Gautham Shenoy
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     
  • A basic problem with flush_scheduled_work() is that it blocks behind _all_
    presently-queued works, rather than just the work which the caller wants to
    flush. If the caller holds some lock, and if one of the queued works happens
    to want that lock as well, then accidental deadlocks can occur.

    One example of this is the phy layer: it wants to flush work while holding
    rtnl_lock(). But if a linkwatch event happens to be queued, the phy code will
    deadlock because the linkwatch callback function takes rtnl_lock.

    So we implement a new function which will flush a *single* work - just the one
    which the caller wants to free up. Thus we avoid the accidental deadlocks
    which can arise from unrelated subsystems' callbacks taking shared locks.

    flush_work() non-blockingly dequeues the work_struct which we want to kill,
    then it waits for its handler to complete on all CPUs.

    Add ->current_work to the "struct cpu_workqueue_struct"; it points to the
    currently running "struct work_struct". When flush_work(work) detects
    ->current_work == work, it inserts a barrier at the _head_ of ->worklist
    (and thus right _after_ that work) and waits for completion. This means
    that the next work fired on that CPU will be this barrier, or another
    barrier queued by concurrent flush_work(), so the caller of flush_work()
    will be woken before any "regular" work has a chance to run.

    When wait_on_work() unlocks workqueue_mutex (or whatever we choose to protect
    against CPU hotplug), CPU may go away. But in that case take_over_work() will
    move a barrier we queued to another CPU, it will be fired sometime, and
    wait_on_work() will be woken.

    Actually, we are doing cleanup_workqueue_thread()->kthread_stop() before
    take_over_work(), so cwq->thread should complete its ->worklist (and thus
    the barrier), because currently we don't check kthread_should_stop() in
    run_workqueue(). But even if we did, everything should be ok.

    [akpm@osdl.org: cleanup]
    [akpm@osdl.org: add flush_work_keventd() wrapper]
    Signed-off-by: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
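
    A conceptual sketch of the barrier object described above (the exact shape
    is assumed):

        struct wq_barrier {
                struct work_struct      work;
                struct completion       done;
        };

        static void wq_barrier_func(struct work_struct *work)
        {
                struct wq_barrier *barr = container_of(work, struct wq_barrier, work);

                complete(&barr->done);          /* wakes the flush_work() caller */
        }

        /* flush_work() queues this barrier at the head of ->worklist (right after
         * the running work) and then does wait_for_completion(&barr->done). */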
     
  • Remove ->remove_sequence, ->insert_sequence, and ->work_done from struct
    cpu_workqueue_struct. To implement flush_workqueue() we can queue a
    barrier work on each CPU and wait for its completion.

    The barrier is queued under workqueue_mutex to ensure that the per-cpu
    wq->cpu_wq is alive; we drop this mutex before going to sleep. If a CPU goes
    down while we are waiting for completion, take_over_work() will move the
    barrier to another CPU, and the handler will wake us up eventually.

    Signed-off-by: Oleg Nesterov
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Oleg Nesterov
     
  • We take workqueue_mutex in there to keep CPU hotplug away. But
    preempt_disable() will suffice for that.

    Cc: Ingo Molnar
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Andrew Morton
     

17 Feb, 2007

1 commit

  • Add /proc/timer_stats support: a debugging feature to profile timer expiration.
    The starting site, the process/PID and the expiration function are all captured.
    This allows the quick identification of timer event sources in a system.

    Sample output:

    # echo 1 > /proc/timer_stats
    # cat /proc/timer_stats
    Timer Stats Version: v0.1
    Sample period: 4.010 s
    24, 0 swapper hrtimer_stop_sched_tick (hrtimer_sched_tick)
    11, 0 swapper sk_reset_timer (tcp_delack_timer)
    6, 0 swapper hrtimer_stop_sched_tick (hrtimer_sched_tick)
    2, 1 swapper queue_delayed_work_on (delayed_work_timer_fn)
    17, 0 swapper hrtimer_restart_sched_tick (hrtimer_sched_tick)
    2, 1 swapper queue_delayed_work_on (delayed_work_timer_fn)
    4, 2050 pcscd do_nanosleep (hrtimer_wakeup)
    5, 4179 sshd sk_reset_timer (tcp_write_timer)
    4, 2248 yum-updatesd schedule_timeout (process_timeout)
    18, 0 swapper hrtimer_restart_sched_tick (hrtimer_sched_tick)
    3, 0 swapper sk_reset_timer (tcp_delack_timer)
    1, 1 swapper neigh_table_init_no_netlink (neigh_periodic_timer)
    2, 1 swapper e1000_up (e1000_watchdog)
    1, 1 init schedule_timeout (process_timeout)
    100 total events, 25.24 events/sec

    [ cleanups and hrtimers support from Thomas Gleixner ]
    [bunk@stusta.de: nr_entries can become static]
    Signed-off-by: Ingo Molnar
    Signed-off-by: Thomas Gleixner
    Cc: john stultz
    Cc: Roman Zippel
    Cc: Andi Kleen
    Signed-off-by: Adrian Bunk
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Ingo Molnar
     

12 Feb, 2007

1 commit

  • A variety of (mostly) innocuous fixes to the embedded kernel-doc content in
    source files, including:

    * make multi-line initial descriptions single line
    * denote some function names, constants and structs as such
    * change erroneous opening '/*' to '/**' in a few places
    * reword some text for clarity

    Signed-off-by: Robert P. J. Day
    Cc: "Randy.Dunlap"
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Robert P. J. Day
     

23 Dec, 2006

1 commit