02 Oct, 2005
1 commit
-
We should always use bitmask ops, rather than depend on some ordering of
the different states. With the TASK_NONINTERACTIVE flag, the inequality
doesn't really work.
Oleg Nesterov argues (likely correctly) that this test is unnecessary in
the first place. However, the minimal fix for now is to at least make
it work in the presence of TASK_NONINTERACTIVE. Waiting for consensus
from Roland & co on potential bigger cleanups.
Signed-off-by: Linus Torvalds
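A minimal sketch of the contrast (illustrative helpers, not the actual diff; state flags as in the 2.6.14-era <linux/sched.h>):

	/* Fragile: depends on the numeric ordering of the state values,
	 * and misfires once TASK_NONINTERACTIVE is or'ed into ->state. */
	static inline int task_sleeping_by_ordering(struct task_struct *p)
	{
		return p->state < TASK_STOPPED;
	}

	/* Robust: tests exactly the bits of interest, whatever other
	 * flags happen to be set. */
	static inline int task_sleeping_by_mask(struct task_struct *p)
	{
		return (p->state & (TASK_INTERRUPTIBLE | TASK_UNINTERRUPTIBLE)) != 0;
	}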
30 Sep, 2005
2 commits
-
Switched cpuset_common_file_read() to simple_read_from_buffer(), killed
a bunch of useless (and not quite correct - e.g. min(size_t,ssize_t))
code.
Signed-off-by: Al Viro
Signed-off-by: Linus Torvalds
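For context, a minimal sketch of the resulting calling pattern (demo names, not the cpuset code): simple_read_from_buffer() does the *ppos bounds check, the copy_to_user() and the return-value arithmetic in one correct place:

	#include <linux/fs.h>

	static const char demo_msg[] = "hello\n";

	static ssize_t demo_read(struct file *file, char __user *ubuf,
				 size_t nbytes, loff_t *ppos)
	{
		return simple_read_from_buffer(ubuf, nbytes, ppos,
					       demo_msg, sizeof(demo_msg) - 1);
	}
-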
Any tests using < TASK_STOPPED or the like are left over from the time
when the TASK_ZOMBIE and TASK_DEAD bits were in the same word, and it
served to check for "stopped or dead". I think this one in
do_signal_stop is the only such case. It has been buggy ever since
exit_state was separated, and isn't testing the exit_state value.
Signed-off-by: Roland McGrath
Signed-off-by: Linus Torvalds
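A hedged sketch of the class of bug (illustrative, not the actual do_signal_stop hunk): once the zombie/dead information moved into ->exit_state, an ordering test on ->state can never see a dead task:

	/* Stale: "stopped or dead" via ordering; dead tasks now live in
	 * ->exit_state, which this never looks at. */
	static inline int stopped_or_dead_stale(struct task_struct *t)
	{
		return !(t->state < TASK_STOPPED);
	}

	/* What the check needs to do today: */
	static inline int stopped_or_dead(struct task_struct *t)
	{
		return (t->state & (TASK_STOPPED | TASK_TRACED)) || t->exit_state;
	}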
28 Sep, 2005
5 commits
-
Don't leak a page of memory if user reads a cpuset file past eof.
Signed-off-by: KUROSAWA Takahiro
Signed-off-by: Paul Jackson
Signed-off-by: Linus Torvalds
-
The following patch makes swsusp avoid problems during resume if there are
too many pages to save on suspend. It adds a constant that allows us to
verify if we are going to save too many pages and implements the check
(this is done as early as we can tell that the check will trigger, which is
in swsusp_alloc()).
Signed-off-by: Rafael J. Wysocki
Acked-by: Pavel Machek
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
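The check presumably looks something like this sketch (SWSUSP_MAX_IMAGE_PAGES is a hypothetical name for the constant the patch adds; the real threshold may differ):

	/* Hypothetical ceiling on the number of saveable pages. */
	#define SWSUSP_MAX_IMAGE_PAGES	(25 * 1024 * 1024 / PAGE_SIZE)

	static int check_image_size(unsigned int nr_copy_pages)
	{
		/* Fail on suspend, where we can still back out, instead of
		 * failing on resume, where we cannot. */
		if (nr_copy_pages > SWSUSP_MAX_IMAGE_PAGES)
			return -ENOMEM;
		return 0;
	}
-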
Dave Jones says:
... if the modprobe.conf has trailing whitespace, modules fail to load
with the following helpful message..
snd_intel8x0: Unknown parameter `'
Previous version truncated last argument.
Signed-off-by: Rusty Russell
Cc: Dave Jones
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
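A sketch of the kind of fix (assumed shape; the real parser in kernel/params.c differs in detail): trim trailing whitespace before splitting, so the last token can't be an empty parameter name:

	#include <linux/ctype.h>
	#include <linux/string.h>

	static void trim_trailing_space(char *args)
	{
		size_t len = strlen(args);

		/* "opt=1  \n" would otherwise yield a final empty `' arg. */
		while (len && isspace(args[len - 1]))
			args[--len] = '\0';
	}
-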
Prevent swsusp from leaking some memory in case of an error in
read_pagedir(). It also prevents the BUG_ON() from triggering if there's
an error while reading swap.
Signed-off-by: Rafael J. Wysocki
Acked-by: Pavel Machek
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
The following patch removes some wrong code from the data_free() function
in swsusp.
This function could only be called if there's an error while writing the
suspend image to swap, so it is not triggered easily. However, if
triggered, it would probably corrupt some memory.
Signed-off-by: Rafael J. Wysocki
Acked-by: Pavel Machek
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
24 Sep, 2005
1 commit
-
Bhavesh P. Davda noticed that SIGKILL wouldn't
properly kill a process under just the right circumstances: a stopped
task that already had another signal queued would get the SIGKILL
queued onto the shared queue, and there it would remain until SIGCONT.
This simplifies the signal acceptance logic, and fixes the bug in the
process.
Loosely based on an earlier patch by Bhavesh.
Signed-off-by: Linus Torvalds
23 Sep, 2005
5 commits
-
Fix comments in swsusp.
Signed-off-by: Pavel Machek
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
The following patch makes swsusp avoid triggering the BUG_ON() in
swsusp_suspend() if there is not enough memory for suspend.
Signed-off-by: Rafael J. Wysocki
Cc: Pavel Machek
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Signed-off-by: Randy Dunlap
Acked-by: Pavel Machek
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
In the lead up to 2.6.13 I fixed a large number of reboot problems by
making the calling conventions consistent. Despite checking and double
checking my work it appears I missed an obvious one.
The S4 suspend code for PM_DISK_PLATFORM was also calling device_shutdown
without setting system_state, and was not calling the appropriate
reboot_notifier.
This patch fixes the bug by replacing the call of device_suspend with
kernel_poweroff_prepare.
Various forms of this failure have been fixed and tracked for a while.
Thanks for tracking this down go to: Alexey Starikovskiy, Meelis Roos,
Nigel Cunningham, Pierre Ossman.
History of this bug is at:
http://bugme.osdl.org/show_bug.cgi?id=4320
Signed-off-by: Eric W. Biederman
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
In the lead up to 2.6.13 I fixed a large number of reboot problems by
making the calling conventions consistent. Despite checking and double
checking my work it appears I missed an obvious one.
This first patch simply refactors the reboot routines so all of the
preparation for the various kinds of reboot is in its own function,
making it very hard to get the various kinds of reboot out of sync.
Signed-off-by: Eric W. Biederman
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
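A sketch of the resulting shape (helper and notifier names as of 2.6.13-era kernel/sys.c; treat this as illustrative rather than the exact diff):

	/* One prepare helper per reboot flavour, so every path runs the
	 * notifier chain, sets system_state and shuts devices down in
	 * exactly the same way. */
	static void kernel_restart_prepare(char *cmd)
	{
		notifier_call_chain(&reboot_notifier_list, SYS_RESTART, cmd);
		system_state = SYSTEM_RESTART;
		device_shutdown();
	}

	void kernel_restart(char *cmd)
	{
		kernel_restart_prepare(cmd);
		machine_restart(cmd);
	}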
22 Sep, 2005
1 commit
-
ia64's sched_clock() accesses per-cpu data which isn't set up at boot time.
Hence ia64 cannot use printk timestamping, because printk() will crash in
sched_clock().
So make printk() use printk_clock(), defaulting to sched_clock(), overridable
by the architecture via __attribute__((weak)).
Cc: "Luck, Tony"
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
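The weak-default mechanism looks like this (essentially what the description implies; the exact prototype in the patch may differ slightly):

	/* Generic fallback for printk timestamps. */
	unsigned long long __attribute__((weak)) printk_clock(void)
	{
		return sched_clock();
	}

An architecture such as ia64 then provides its own strong printk_clock() that is safe before per-cpu data is initialised, and the linker prefers it over the weak default.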
18 Sep, 2005
3 commits
-
With the new fdtable locking rules, you have to protect fdtable with either
->file_lock or rcu_read_lock/unlock(). There are some places where we
aren't doing either. This patch fixes those places.
Signed-off-by: Dipankar Sarma
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
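For reference, the RCU flavour of the access pattern the new rules require (a sketch using the 2.6.14-era fdtable helpers):

	static int fd_is_open_sketch(struct files_struct *files, unsigned int fd)
	{
		struct fdtable *fdt;
		struct file *file = NULL;

		rcu_read_lock();
		fdt = files_fdtable(files);
		if (fd < fdt->max_fds)
			file = rcu_dereference(fdt->fd[fd]);
		rcu_read_unlock();
		return file != NULL;
	}

The alternative is to hold files->file_lock for the duration of the access.
-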
2.6.13 incorporated Alan Cox's patch for /proc/sys/fs/suid_dumpable (one
version of this patch can be found here
http://marc.theaimsgroup.com/?l=linux-kernel&m=109647550421014&w=2).
This patch also made corresponding changes in kernel/sys.c to change the
prctl() PR_SET_DUMPABLE operation so that the permitted range of 'arg2' was
extended from 0..1 to 0..2.
However, a corresponding change was not made for PR_GET_DUMPABLE: if the
dumpable flag is non-zero, then PR_GET_DUMPABLE always returns 1, so that
the caller can't determine the true setting of this flag.
Acked-by: Alan Cox
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
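Conceptually the fix is a one-line change inside sys_prctl()'s switch (a sketch; the field is mm->dumpable as of 2.6.13):

	case PR_GET_DUMPABLE:
		/* Was effectively: if (current->mm->dumpable) error = 1;
		 * Report the real value (0, 1 or 2) instead: */
		error = current->mm->dumpable;
		break;
-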
Fix a problem wherein a new-born task is added to a dead CPU.
Signed-off-by: Srivatsa Vaddagiri
Acked-by: Nick Piggin
Acked-by: Shaohua Li
Acked-by: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
14 Sep, 2005
2 commits
-
fix up the runqueue lock owner only if we truly did a context-switch
with the runqueue lock held. Impacts ia64, mips, sparc64 and arm.
Signed-off-by: Ingo Molnar
Signed-off-by: Linus Torvalds
13 Sep, 2005
4 commits
-
Use the add_taint() interface for setting tainted bit flags instead of
doing it manually.
Signed-off-by: Randy Dunlap
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
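The conversion is mechanical (add_taint() as declared in <linux/kernel.h>):

	/* Before: open-coded bit update. */
	tainted |= TAINT_FORCED_MODULE;

	/* After: go through the interface. */
	add_taint(TAINT_FORCED_MODULE);
-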
These functions don't need schedule_timeout()'s barrier.
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
- Remove unused irqrsp field
- Remove pda->me
- Optimize set_softirq_pending slightly
Signed-off-by: Andi Kleen
Signed-off-by: Linus Torvalds
-
Optimize the deadlock avoidance check on the global cpuset
semaphore cpuset_sem. Instead of adding a depth counter to the
task struct of each task, just two words are enough: one
to store the depth and the other the current cpuset_sem holder.
Thanks to Nikita Danilov for the idea.
Signed-off-by: Paul Jackson
[ We may want to change this further, but at least it's now
a totally internal decision to the cpusets code ]
Signed-off-by: Linus Torvalds
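A sketch of the scheme (helper and variable names assumed, not necessarily those in kernel/cpuset.c): a recursion-aware wrapper that spends exactly one holder pointer and one depth counter:

	static struct semaphore cpuset_sem;
	static struct task_struct *cpuset_sem_holder;	/* current holder */
	static int cpuset_sem_depth;			/* holder's nesting */

	static void cpuset_down(void)
	{
		if (cpuset_sem_holder != current) {
			down(&cpuset_sem);
			cpuset_sem_holder = current;
		}
		cpuset_sem_depth++;	/* re-entry by the holder: just count */
	}

	static void cpuset_up(void)
	{
		if (--cpuset_sem_depth == 0) {
			cpuset_sem_holder = NULL;
			up(&cpuset_sem);
		}
	}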
12 Sep, 2005
2 commits
-
..and only enable them for ia64. The functions are only valid
when the whole system has been totally stopped and no scheduler
activity is ongoing on any CPU, and interrupts are globally
disabled.
In other words, they aren't useful for anything else. So make
sure that nobody can use them by mistake.
Signed-off-by: Linus Torvalds
-
Scheduler hooks to see/change which process is deemed to be on a cpu.
Signed-off-by: Keith Owens
Signed-off-by: Tony Luck
11 Sep, 2005
14 commits
-
Use schedule_timeout_{,un}interruptible() instead of
set_current_state()/schedule_timeout() to reduce kernel size.
Signed-off-by: Nishanth Aravamudan
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
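The conversion pattern, before and after:

	/* Before: two steps, and the first is easy to forget. */
	set_current_state(TASK_UNINTERRUPTIBLE);
	schedule_timeout(HZ);

	/* After: one call that sets the state itself. */
	schedule_timeout_uninterruptible(HZ);
-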
Add schedule_timeout_{,un}interruptible() interfaces so that
schedule_timeout() callers don't have to worry about forgetting to add the
set_current_state() call beforehand.
Signed-off-by: Nishanth Aravamudan
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
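The new interfaces are thin wrappers; this is essentially their shape in kernel/timer.c:

	signed long __sched schedule_timeout_interruptible(signed long timeout)
	{
		__set_current_state(TASK_INTERRUPTIBLE);
		return schedule_timeout(timeout);
	}

	signed long __sched schedule_timeout_uninterruptible(signed long timeout)
	{
		__set_current_state(TASK_UNINTERRUPTIBLE);
		return schedule_timeout(timeout);
	}
-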
for kernel/acct.c:
- fix typos
- add kerneldoc for non-static functions
Signed-off-by: Randy Dunlap
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Don't pull tasks from a group if that would cause the group's total load to
drop below its total cpu_power (i.e. cause the group to start going idle).
Signed-off-by: Suresh Siddha
Signed-off-by: Nick Piggin
Acked-by: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Jack Steiner raised this issue at my OLS talk.
Take a scenario where two tasks are pinned to two HT threads in a physical
package. Idle packages in the system will keep kicking migration_thread on
the busy package without any success.
We will run into similar scenarios in the presence of CMP/NUMA.
Signed-off-by: Suresh Siddha
Acked-by: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
In sys_sched_yield(), we cache current->array in the "array" variable, thus
there's no need to dereference "current" again later.
Signed-off-by: Renaud Lienhart
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
If an idle sibling of an HT queue encounters a busy sibling, then invoke
higher-level load balancing of the non-idle variety.
Performance of multiprocessor HT systems with low numbers of tasks
(generally < number of virtual CPUs) can be significantly worse than the
exact same workloads when running in non-HT mode. The reason is largely
due to poor scheduling behaviour.
This patch improves the situation, making the performance gap far less
significant on one problematic test case (tbench).
Signed-off-by: Nick Piggin
Acked-by: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
During periodic load balancing, don't hold this runqueue's lock while
scanning remote runqueues, which can take a non trivial amount of time
especially on very large systems.
Holding the runqueue lock will only help to stabilise ->nr_running; however,
this doesn't do much to help, because tasks being woken will simply get held
up on the runqueue lock, so ->nr_running would not provide a really
accurate picture of runqueue load in that case anyway.
What's more, ->nr_running (and possibly the cpu_load averages) of remote
runqueues won't be stable anyway, so load balancing is always an inexact
operation.
Signed-off-by: Nick Piggin
Acked-by: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Similarly to the earlier change in load_balance, only lock the runqueue in
load_balance_newidle if the busiest queue found has a nr_running > 1. This
will reduce the frequency of expensive remote runqueue lock acquisitions in
the schedule() path on some workloads.
Signed-off-by: Nick Piggin
Acked-by: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
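The shape of the change in load_balance_newidle() (a sketch against 2.6.14-era sched.c; the surrounding error handling is elided):

	/* Only pay for the remote lock when there is actually a task we
	 * could pull: nr_running > 1 means more than the one running task. */
	if (busiest->nr_running > 1) {
		double_lock_balance(this_rq, busiest);
		nr_moved = move_tasks(this_rq, this_cpu, busiest,
				      imbalance, sd, NEWLY_IDLE, NULL);
		spin_unlock(&busiest->lock);
	}
-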
William Weston reported unusually high scheduling latencies on his x86 HT
box, on the -RT kernel. I managed to reproduce it on my HT box and the
latency tracer shows the incident in action:

                  _------=> CPU#
                 / _-----=> irqs-off
                | / _----=> need-resched
                || / _---=> hardirq/softirq
                ||| / _--=> preempt-depth
                |||| /
                |||||     delay
    cmd     pid ||||| time  |   caller
       \   /    |||||   \   |   /
       du-2803  3Dnh2    0us : __trace_start_sched_wakeup (try_to_wake_up)
  ..............................................................
  ... we are running on CPU#3, PID 2778 gets woken to CPU#1: ...
  ..............................................................
       du-2803  3Dnh2    0us : __trace_start_sched_wakeup <<...>-2778> (73 1)
       du-2803  3Dnh2    0us : _raw_spin_unlock (try_to_wake_up)
  ................................................
  ... still on CPU#3, we send an IPI to CPU#1: ...
  ................................................
       du-2803  3Dnh1    0us : resched_task (try_to_wake_up)
       du-2803  3Dnh1    1us : smp_send_reschedule (try_to_wake_up)
       du-2803  3Dnh1    1us : send_IPI_mask_bitmask (smp_send_reschedule)
       du-2803  3Dnh1    2us : _raw_spin_unlock_irqrestore (try_to_wake_up)
  ...............................................
  ... 1 usec later, the IPI arrives on CPU#1: ...
  ...............................................
   <idle>-0     1Dnh.    2us : smp_reschedule_interrupt (c0100c5a 0 0)

So far so good, this is the normal wakeup/preemption mechanism. But here
comes the scheduler anomaly on CPU#1:

   <idle>-0     1Dnh.    2us : preempt_schedule_irq (need_resched)
   <idle>-0     1Dnh.    3us : __schedule (preempt_schedule_irq)
   <idle>-0     1Dnh.    3us : profile_hit (__schedule)
   <idle>-0     1Dnh1    3us : sched_clock (__schedule)
   <idle>-0     1Dnh1    4us : _raw_spin_lock_irq (__schedule)
   <idle>-0     1Dnh1    4us : _raw_spin_lock_irqsave (__schedule)
   <idle>-0     1Dnh2    5us : _raw_spin_unlock (__schedule)
   <idle>-0     1Dnh1    5us : preempt_schedule (__schedule)
   <idle>-0     1Dnh1    6us : _raw_spin_lock (__schedule)
   <idle>-0     1Dnh2    6us : find_next_bit (__schedule)
   <idle>-0     1Dnh2    6us : _raw_spin_lock (__schedule)
   <idle>-0     1Dnh3    7us : find_next_bit (__schedule)
   <idle>-0     1Dnh3    7us : find_next_bit (__schedule)
   <idle>-0     1Dnh3    8us : _raw_spin_unlock (__schedule)
   <idle>-0     1Dnh2    8us : preempt_schedule (__schedule)
   <idle>-0     1Dnh2    8us : find_next_bit (__schedule)
   <idle>-0     1Dnh2    9us : trace_stop_sched_switched (__schedule)
   <idle>-0     1Dnh2    9us : _raw_spin_lock (trace_stop_sched_switched)
   <idle>-0     1Dnh3   10us : trace_stop_sched_switched <<...>-2778> (73 8c)
   <idle>-0     1Dnh3   10us : _raw_spin_unlock (trace_stop_sched_switched)
   <idle>-0     1Dnh1   10us : _raw_spin_unlock (__schedule)
   <idle>-0     1Dnh.   11us : local_irq_enable_noresched (preempt_schedule_irq)
   <idle>-0     1Dnh.   11us < (0)

we didn't pick up pid 2778! It only gets scheduled much later:

    <...>-2778  1Dnh2  412us : __switch_to (__schedule)
    <...>-2778  1Dnh2  413us : __schedule <<idle>-0> (8c 73)
    <...>-2778  1Dnh2  413us : _raw_spin_unlock (__schedule)
    <...>-2778  1Dnh1  413us : trace_stop_sched_switched (__schedule)
    <...>-2778  1Dnh1  414us : _raw_spin_lock (trace_stop_sched_switched)
    <...>-2778  1Dnh2  414us : trace_stop_sched_switched <<...>-2778> (73 1)
    <...>-2778  1Dnh2  414us : _raw_spin_unlock (trace_stop_sched_switched)
    <...>-2778  1Dnh1  415us : trace_stop_sched_switched (__schedule)

the reason for this anomaly is the following code in dependent_sleeper():
	/*
	 * If a user task with lower static priority than the
	 * running task on the SMT sibling is trying to schedule,
	 * delay it till there is proportionately less timeslice
	 * left of the sibling task to prevent a lower priority
	 * task from using an unfair proportion of the
	 * physical cpu's resources. -ck
	 */
	[...]
	if (((smt_curr->time_slice * (100 - sd->per_cpu_gain) /
		100) > task_timeslice(p)))
			ret = 1;

Note that in contrast to the comment above, we don't actually do the check
based on static priority, we do the check based on timeslices. But
timeslices go up and down, and even highprio tasks can randomly have very
low timeslices (just before their next refill) and can thus be judged as
'lowprio' by the above piece of code. This condition is clearly buggy.
The correct test is to check for static_prio _and_ to check for the
preemption priority. Even on different static priority levels, a
higher-prio interactive task should not be delayed due to a
higher-static-prio CPU hog.
There is a symmetric bug in the 'kick SMT sibling' code of this function as
well, which can be solved in a similar way.
The patch below (against the current scheduler queue in -mm) fixes both
bugs. I have build and boot-tested this on x86 SMT, and nice +20 tasks
still get properly throttled - so the dependent-sleeper logic is still in
action.
Btw., these bugs pessimised the SMT scheduler because the 'delay wakeup'
property was applied too liberally, so this fix is likely a throughput
improvement as well.
I separated out a smt_slice() function to make the code easier to read.
Signed-off-by: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
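smt_slice() presumably just factors out the scaled-timeslice expression quoted above; a sketch:

	/* The share of a timeslice an SMT sibling is entitled to, as
	 * described in the message; factored out for readability. */
	static inline unsigned int smt_slice(task_t *p, struct sched_domain *sd)
	{
		return p->time_slice * (100 - sd->per_cpu_gain) / 100;
	}
-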
This patch implements a task state bit (TASK_NONINTERACTIVE), which can be
used by blocking points to mark the task's wait as "non-interactive". This
does not mean the task will be considered a CPU-hog - the wait will simply
not have an effect on the waiting task's priority - positive or negative
alike. Right now only pipe_wait() will make use of it, because it's a
common source of not-so-interactive waits (kernel compilation jobs, etc.).
Signed-off-by: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
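The pipe_wait() side is a one-bit change: the new flag is or'ed into the sleep state, so the sleep itself is unchanged and only the interactivity accounting is skipped (a sketch of the call site):

	prepare_to_wait(PIPE_WAIT(*inode), &wait,
			TASK_INTERRUPTIBLE | TASK_NONINTERACTIVE);
-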
whitespace cleanups.
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
-
Add relevant checks into find_idlest_group() and find_idlest_cpu() to make
them consider only the groups that contain allowed CPUs and only the
allowed CPUs, respectively.
Signed-off-by: M.Baris Demiray
Signed-off-by: Nick Piggin
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
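The checks amount to cpumask filters in the two loops (a sketch against the 2.6.14-era cpumask API; the load bookkeeping is elided):

	/* find_idlest_group(): skip groups the task may not run in. */
	if (!cpus_intersects(group->cpumask, p->cpus_allowed))
		goto nextgroup;

	/* find_idlest_cpu(): scan only the allowed CPUs of the group. */
	cpus_and(mask, group->cpumask, p->cpus_allowed);
	for_each_cpu_mask(i, mask) {
		/* ...track the least-loaded candidate here... */
	}
-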
The hyperthread aware nice handling currently puts to sleep any non real
time task when a real time task is running on its sibling cpu. This can
lead to prolonged starvation by having the non real time task pegged to the
cpu with load balancing not pulling that task away.
Currently we force lower priority hyperthread tasks to run a percentage of
time difference based on timeslice differences which is meaningless when
comparing real time tasks to SCHED_NORMAL tasks. We can allow non real
time tasks to run with real time tasks on the sibling up to per_cpu_gain%
if we use jiffies as a counter.
Cleanups and micro-optimisations to the relevant code section should make
it more understandable as well.
Signed-off-by: Con Kolivas
Acked-by: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
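A hedged sketch of the jiffies-based throttle (the real dependent_sleeper() logic differs in detail; DEF_TIMESLICE and sd->per_cpu_gain are the 2.6.14-era names):

	/* Let a SCHED_NORMAL task run beside a real-time sibling for
	 * roughly per_cpu_gain% of each DEF_TIMESLICE window, counted
	 * in jiffies; timeslice ratios are meaningless against RT. */
	if (rt_task(smt_curr) && smt_curr->mm && p->mm && !rt_task(p) &&
	    (jiffies % DEF_TIMESLICE) >
	    (sd->per_cpu_gain * DEF_TIMESLICE / 100))
		ret = 1;	/* delay p for now */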