27 Nov, 2009

1 commit

  • I'm seeing spikes of up to 0.5ms in khungtaskd on a large
    machine. To reduce this source of jitter I tried setting
    hung_task_check_count to 0:

    # echo 0 > /proc/sys/kernel/hung_task_check_count

    which didn't have the intended effect. Change the check to a
    post-increment of max_count, so that a value of 0 means check 0
    tasks (sketched below).

    Signed-off-by: Anton Blanchard
    Acked-by: Frederic Weisbecker
    Cc: msb@google.com
    LKML-Reference:
    Signed-off-by: Ingo Molnar

    Anton Blanchard
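    A minimal sketch of the behavioural difference the message describes,
    using a decrementing max_count (initialised from the
    hung_task_check_count sysctl) as in kernel/hung_task.c. This is
    illustrative, not the exact upstream diff:

    /* in the scan over all threads in check_hung_uninterruptible_tasks(): */
    do_each_thread(g, t) {
            /*
             * Old form (sketched):  if (!--max_count) goto unlock;
             * A sysctl value of 0 underflows, so the scan never stops
             * early, and a value of 1 stops before checking any task.
             *
             * New form: test before adjusting, so 0 really means
             * "check 0 tasks".
             */
            if (!max_count--)
                    goto unlock;

            /* ... check t if it is in TASK_UNINTERRUPTIBLE ... */
    } while_each_thread(g, t);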
     

24 Sep, 2009

1 commit

  • It's unused.

    It isn't needed -- the read/write flag is already passed, and sysctl
    shouldn't care about the rest.

    It _was_ used in two places in arch/frv, for some reason.

    Signed-off-by: Alexey Dobriyan
    Cc: David Howells
    Cc: "Eric W. Biederman"
    Cc: Al Viro
    Cc: Ralf Baechle
    Cc: Martin Schwidefsky
    Cc: Ingo Molnar
    Cc: "David S. Miller"
    Cc: James Morris
    Signed-off-by: Andrew Morton
    Signed-off-by: Linus Torvalds

    Alexey Dobriyan
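    The subject elided above appears to be the unused 'struct file *'
    argument of the sysctl ->proc_handler callback; assuming that context,
    a hedged before/after sketch of a handler (my_handler is a placeholder
    name, not upstream code):

    #include <linux/sysctl.h>

    /*
     * Old signature (sketched) -- the file pointer was passed through but
     * ignored by nearly every handler:
     *
     *      int handler(struct ctl_table *table, int write,
     *                  struct file *filp, void __user *buffer,
     *                  size_t *lenp, loff_t *ppos);
     *
     * New signature -- the write flag is all a handler needs to know:
     */
    static int my_handler(struct ctl_table *table, int write,
                          void __user *buffer, size_t *lenp, loff_t *ppos)
    {
            /* delegate to the generic integer handler */
            return proc_dointvec(table, write, buffer, lenp, ppos);
    }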
     

11 Feb, 2009

1 commit

  • When we check whether a task has been switched out since the last scan, we
    can hit a race in the following scenario:

    - the task is freshly created and scheduled

    - it sets its state to TASK_UNINTERRUPTIBLE and has not yet been switched out

    - check_hung_task() scans this task and will report a false positive because
    t->nvcsw + t->nivcsw == t->last_switch_count == 0

    Add a check for such cases.

    Signed-off-by: Frederic Weisbecker
    Acked-by: Mandeep Singh Baines
    Signed-off-by: Ingo Molnar

    Frederic Weisbecker
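    A sketch of the kind of check described above, following
    check_hung_task() in kernel/hung_task.c (abbreviated; not the exact
    upstream diff):

    static void check_hung_task(struct task_struct *t, unsigned long timeout)
    {
            unsigned long switch_count = t->nvcsw + t->nivcsw;

            /*
             * A freshly created task can enter TASK_UNINTERRUPTIBLE before
             * ever being switched out, so the counters and last_switch_count
             * are all still 0.  Skip it rather than report a false positive.
             */
            if (unlikely(!switch_count))
                    return;

            if (switch_count != t->last_switch_count) {
                    /* the task has been switched out since the last scan */
                    t->last_switch_count = switch_count;
                    return;
            }

            /* ... otherwise warn about the apparently hung task ... */
    }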
     

09 Feb, 2009

1 commit


06 Feb, 2009

2 commits

  • Since the tasklist is protected by rcu list operations, it is safe
    to convert the read_lock()s to rcu_read_lock().

    Suggested-by: Peter Zijlstra
    Signed-off-by: Mandeep Singh Baines
    Signed-off-by: Ingo Molnar

    Mandeep Singh Baines
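    A minimal sketch of the conversion, following the tasklist walk in
    kernel/hung_task.c (abbreviated):

    static void check_hung_uninterruptible_tasks(unsigned long timeout)
    {
            struct task_struct *g, *t;

            /* before: read_lock(&tasklist_lock); */
            rcu_read_lock();
            do_each_thread(g, t) {
                    if (t->state == TASK_UNINTERRUPTIBLE)
                            check_hung_task(t, timeout);
            } while_each_thread(g, t);
            rcu_read_unlock();
            /* before: read_unlock(&tasklist_lock); */
    }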
     
  • Impact: extend the scope of hung-task checks

    Changed the default value of hung_task_check_count to PID_MAX_LIMIT.
    Added hung_task_batch_count to put an upper bound on the critical
    section: the RCU lock is dropped and reacquired every
    hung_task_batch_count checks, so it is never held for too long
    (sketched below).

    Keeping the critical section small minimizes the time preemption is
    disabled and keeps RCU grace periods short.

    To prevent following a stale pointer, get_task_struct is called on g and t.
    To verify that g and t have not been unhashed while outside the critical
    section, the task states are checked.

    The design was proposed by Frédéric Weisbecker.

    Signed-off-by: Mandeep Singh Baines
    Suggested-by: Frédéric Weisbecker
    Acked-by: Andrew Morton
    Signed-off-by: Ingo Molnar

    Mandeep Singh Baines
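    A hedged sketch of the batching and stale-pointer handling described
    above (names follow kernel/hung_task.c, but this is illustrative rather
    than the exact upstream code):

    /*
     * Drop the RCU read-side lock for a moment so it is never held for
     * too long.  Pin g and t first so the pointers stay valid, and check
     * afterwards that neither task was unhashed in the meantime.
     */
    static bool rcu_lock_break(struct task_struct *g, struct task_struct *t)
    {
            bool can_cont;

            get_task_struct(g);
            get_task_struct(t);
            rcu_read_unlock();
            cond_resched();
            rcu_read_lock();
            can_cont = pid_alive(g) && pid_alive(t);
            put_task_struct(t);
            put_task_struct(g);

            return can_cont;
    }

    /* in the scan loop, roughly every hung_task_batch_count tasks: */
    if (!--batch_count) {
            batch_count = hung_task_batch_count;
            if (!rcu_lock_break(g, t))
                    goto unlock;    /* g or t went away; abandon this scan */
    }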
     

19 Jan, 2009

1 commit


16 Jan, 2009

1 commit

  • Decoupling allows:

    * hung tasks check to happen at very low priority

    * hung tasks check and softlockup to be enabled/disabled independently
    at compile and/or run-time

    * individual panic settings to be enabled/disabled independently
    at compile and/or run-time

    * softlockup threshold to be reduced without increasing the hung task
    poll frequency (the hung task check is expensive relative to the
    softlockup watchdog)

    * hung task check to be zero-overhead when disabled at run-time
    (sketched below)

    Signed-off-by: Mandeep Singh Baines
    Signed-off-by: Ingo Molnar

    Mandeep Singh Baines
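    A hedged sketch of what the decoupled checker looks like at run time:
    it is its own kthread (khungtaskd) that wakes up on its own period, so
    disabling it via the timeout sysctl leaves only a sleeping thread
    behind. Abbreviated from kernel/hung_task.c; not an exact copy.

    static unsigned long timeout_jiffies(unsigned long timeout)
    {
            /* a timeout of 0 disables the check: sleep indefinitely */
            return timeout ? timeout * HZ : MAX_SCHEDULE_TIMEOUT;
    }

    /* the khungtaskd thread, independent of the softlockup watchdog */
    static int watchdog(void *dummy)
    {
            for ( ; ; ) {
                    unsigned long timeout = sysctl_hung_task_timeout_secs;

                    /* re-read the sysctl whenever the sleep is cut short */
                    while (schedule_timeout_interruptible(timeout_jiffies(timeout)))
                            timeout = sysctl_hung_task_timeout_secs;

                    check_hung_uninterruptible_tasks(timeout);
            }

            return 0;
    }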