17 Aug, 2010
2 commits
-
This fixes a minor grammar problem in the comments in
hung_task.c
Signed-off-by: John Kacur
Cc: Peter Zijlstra
LKML-Reference:
Signed-off-by: Ingo Molnar
-
There is no longer any functional difference between
__debug_show_held_locks() and debug_show_held_locks(),
so remove the former.
Signed-off-by: John Kacur
Cc: Peter Zijlstra
LKML-Reference:
Signed-off-by: Ingo Molnar
27 Nov, 2009
1 commit
-
I'm seeing spikes of up to 0.5ms in khungtaskd on a large
machine. To reduce this source of jitter I tried setting
hung_task_check_count to 0:

    # echo 0 > /proc/sys/kernel/hung_task_check_count

which didn't have the intended response. Change to a post-increment
of max_count, so a value of 0 means check 0 tasks.
Signed-off-by: Anton Blanchard
Acked-by: Frederic Weisbecker
Cc: msb@google.com
LKML-Reference:
Signed-off-by: Ingo Molnar
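A hedged sketch of why the test direction matters, assuming an unsigned
countdown budget and a scan loop of roughly this shape (illustrative, not the
exact diff):

    unsigned long max_count = sysctl_hung_task_check_count;

    rcu_read_lock();
    do_each_thread(g, t) {
        /*
         * Old test:  if (!--max_count) goto unlock;
         * With max_count == 0 the pre-decrement wraps to ULONG_MAX, so
         * the limit never triggers and every task is still scanned.
         *
         * New test: the budget is examined before it is adjusted, so
         * writing 0 to the sysctl really means "check 0 tasks".
         */
        if (!max_count--)
            goto unlock;

        if (t->state == TASK_UNINTERRUPTIBLE)
            check_hung_task(t, timeout);
    } while_each_thread(g, t);
    unlock:
    rcu_read_unlock();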
24 Sep, 2009
1 commit
-
It's unused.
It isn't needed -- the read or write flag is already passed, and sysctl
shouldn't care about the rest. It _was_ used in two places at arch/frv
for some reason.
Signed-off-by: Alexey Dobriyan
Cc: David Howells
Cc: "Eric W. Biederman"
Cc: Al Viro
Cc: Ralf Baechle
Cc: Martin Schwidefsky
Cc: Ingo Molnar
Cc: "David S. Miller"
Cc: James Morris
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
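The commit text above does not name the argument, but it reads like the
removal of the unused struct file * parameter from the sysctl ->proc_handler
interface; a hedged sketch of that kind of change (the handler name is
hypothetical):

    #include <linux/sysctl.h>

    /*
     * Before the change the handler (and helpers such as proc_dointvec())
     * carried a struct file * that nothing needed:
     *
     *   static int my_handler(struct ctl_table *table, int write,
     *                         struct file *filp, void __user *buffer,
     *                         size_t *lenp, loff_t *ppos);
     *
     * After the change only the write flag and the user buffer remain:
     */
    static int my_handler(struct ctl_table *table, int write,
                          void __user *buffer, size_t *lenp, loff_t *ppos)
    {
        /* just forward to the generic integer handler */
        return proc_dointvec(table, write, buffer, lenp, ppos);
    }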
11 Feb, 2009
1 commit
-
When we check whether a task has been switched out since the last scan, we
might hit a race in the following scenario:
- the task is freshly created and scheduled
- it sets its state to TASK_UNINTERRUPTIBLE and has not yet been switched out
- check_hung_task() scans this task and reports a false positive because
  t->nvcsw + t->nivcsw == t->last_switch_count == 0
Add a check for such cases.
Signed-off-by: Frederic Weisbecker
Acked-by: Mandeep Singh Baines
Signed-off-by: Ingo Molnar
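A hedged sketch of the added guard, assuming check_hung_task() compares
per-task context-switch counts as described above:

    /* inside check_hung_task(), before comparing against last_switch_count */
    unsigned long switch_count = t->nvcsw + t->nivcsw;

    /*
     * A freshly created task that set itself TASK_UNINTERRUPTIBLE but has
     * never been switched out yet still has nvcsw + nivcsw == 0, which
     * would falsely match an initial last_switch_count of 0.  Skip it
     * until it has been scheduled out at least once.
     */
    if (unlikely(!switch_count))
        return;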
09 Feb, 2009
1 commit
-
Impact: saves sizeof(long) bytes per task_struct
By guaranteeing that at least sysctl_hung_task_timeout_secs seconds elapse
between tasklist scans, we can avoid using timestamps.
Signed-off-by: Mandeep Singh Baines
Signed-off-by: Ingo Molnar
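A hedged sketch of the timestamp-free check: if khungtaskd scans at most once
per sysctl_hung_task_timeout_secs, an unchanged context-switch count across
two scans already proves the task has been blocked for a full timeout (the
reporting helper below is hypothetical):

    static void check_hung_task(struct task_struct *t, unsigned long timeout)
    {
        unsigned long switch_count = t->nvcsw + t->nivcsw;

        if (switch_count != t->last_switch_count) {
            /* the task ran since the previous scan: remember and move on */
            t->last_switch_count = switch_count;
            return;
        }

        /*
         * Unchanged since the last scan, i.e. blocked for at least
         * `timeout' seconds -- no per-task timestamp needed.
         */
        report_hung_task(t, timeout);   /* hypothetical reporting helper */
    }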
06 Feb, 2009
2 commits
-
Since the tasklist is protected by RCU list operations, it is safe
to convert the read_lock()s to rcu_read_lock().
Suggested-by: Peter Zijlstra
Signed-off-by: Mandeep Singh Baines
Signed-off-by: Ingo Molnar
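A hedged sketch of the converted traversal (simplified):

    rcu_read_lock();
    do_each_thread(g, t) {
        /*
         * Same walk as before, but under an RCU read-side critical
         * section instead of read_lock(&tasklist_lock).
         */
        if (t->state == TASK_UNINTERRUPTIBLE)
            check_hung_task(t, timeout);
    } while_each_thread(g, t);
    rcu_read_unlock();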
-
Impact: extend the scope of hung-task checks
Changed the default value of hung_task_check_count to PID_MAX_LIMIT.
hung_task_batch_count was added to put an upper bound on the critical
section: the RCU lock is dropped and re-taken every hung_task_batch_count
checks, so it is never held for too long.
Keeping the critical section small minimizes the time preemption is disabled
and keeps RCU grace periods short.
To prevent following a stale pointer, get_task_struct() is called on g and t.
To verify that g and t have not been unhashed while outside the critical
section, the task states are checked.
The design was proposed by Frédéric Weisbecker.
Signed-off-by: Mandeep Singh Baines
Suggested-by: Frédéric Weisbecker
Acked-by: Andrew Morton
Signed-off-by: Ingo Molnar
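A hedged sketch of the batching described above (names follow the text; the
real implementation may differ in detail):

    /*
     * Drop and re-take the RCU read lock so it is never held across the
     * whole tasklist; g and t are pinned so the pointers stay valid.
     */
    static void rcu_lock_break(struct task_struct *g, struct task_struct *t)
    {
        get_task_struct(g);
        get_task_struct(t);
        rcu_read_unlock();
        cond_resched();
        rcu_read_lock();
        put_task_struct(t);
        put_task_struct(g);
    }

    /* in the scan loop, every hung_task_batch_count checks: */
    if (!--batch_count) {
        batch_count = hung_task_batch_count;
        rcu_lock_break(g, t);
        /* g or t may have been unhashed while the lock was dropped */
        if (t->state == TASK_DEAD || g->state == TASK_DEAD)
            goto unlock;
    }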
19 Jan, 2009
1 commit
-
Impact: fix potential false panic
A potential race exists if sysctl_hung_task_timeout_secs is reset to 0
while inside check_hung_uninterruptible_tasks(). If check_task() is
entered, a comparison with 0 will result in a false hung_task being
detected. If sysctl_hung_task_panic is set, the system will panic.
Signed-off-by: Mandeep Singh Baines
Signed-off-by: Ingo Molnar
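A hedged sketch of the fix: sample the sysctl once per cycle and pass the
snapshot down, so a concurrent write of 0 (which means "detector disabled")
cannot be picked up halfway through a scan (the sleep handling below is
simplified):

    static int watchdog(void *dummy)
    {
        for ( ; ; ) {
            /* one snapshot per cycle; the scan never rereads the sysctl */
            unsigned long timeout = sysctl_hung_task_timeout_secs;

            schedule_timeout_interruptible(timeout ? timeout * HZ
                                                   : MAX_SCHEDULE_TIMEOUT);

            if (timeout)        /* 0 disables the detector */
                check_hung_uninterruptible_tasks(timeout);
        }
        return 0;
    }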
16 Jan, 2009
1 commit
-
Decoupling allows:
* hung tasks check to happen at very low priority
* hung tasks check and softlockup to be enabled/disabled independently
  at compile and/or run-time
* individual panic settings to be enabled/disabled independently
  at compile and/or run-time
* softlockup threshold to be reduced without increasing hung tasks
  poll frequency (hung task check is expensive relative to the softlockup
  watchdog)
* hung task check to be zero overhead when disabled at run-time
Signed-off-by: Mandeep Singh Baines
Signed-off-by: Ingo Molnar