13 Jan, 2012
1 commit
-
module_param(bool) used to counter-intuitively take an int. In
fddd5201 (mid-2009) we allowed bool or int/unsigned int using a messy
trick.

It's time to remove the int/unsigned int option. For this version
it'll simply give a warning, but it'll break in the next kernel
version.

Signed-off-by: Rusty Russell
03 Oct, 2011
1 commit
-
The ARM GIC interrupt controller offers per CPU interrupts (PPIs),
which are usually used to connect local timers to each core. Each CPU
has its own private interface to the GIC, and only sees the PPIs that
are directly connected to it.

While these timers are separate devices and have a separate interrupt
line to a core, they all use the same IRQ number.

For these devices, request_irq() is not the right API as it assumes
that an IRQ number is visible by a number of CPUs (through the
affinity setting), but makes it very awkward to express that an IRQ
number can be handled by all CPUs, and yet be a different interrupt
line on each CPU, requiring a different dev_id cookie to be passed
back to the handler.

The *_percpu_irq() functions are designed to overcome these
limitations by providing a per-cpu dev_id vector:

int request_percpu_irq(unsigned int irq, irq_handler_t handler,
		       const char *devname, void __percpu *percpu_dev_id);
void free_percpu_irq(unsigned int, void __percpu *);
int setup_percpu_irq(unsigned int irq, struct irqaction *new);
void remove_percpu_irq(unsigned int irq, struct irqaction *act);
void enable_percpu_irq(unsigned int irq);
void disable_percpu_irq(unsigned int irq);

The API has a number of limitations:
- no interrupt sharing
- no threading
- common handler across all the CPUs

Once the interrupt is requested using setup_percpu_irq() or
request_percpu_irq(), it must be enabled by each core that wishes its
local interrupt to be delivered.

Based on an initial patch by Thomas Gleixner.
Signed-off-by: Marc Zyngier
Cc: linux-arm-kernel@lists.infradead.org
Link: http://lkml.kernel.org/r/1316793788-14500-2-git-send-email-marc.zyngier@arm.com
Signed-off-by: Thomas Gleixner
29 Mar, 2011
1 commit
-
Signed-off-by: Thomas Gleixner
28 Mar, 2011
1 commit
-
We really need these flags for some of the interrupt chips. Move them
from internal state to irq_data and provide proper accessors.

Signed-off-by: Thomas Gleixner
Cc: David Daney
26 Feb, 2011
1 commit
-
Add a command line parameter "threadirqs" which forces all interrupts except
those marked IRQF_NO_THREAD to run threaded. That's mostly a debug option to
allow retrieving better debug data from crashing interrupt handlers. If
"threadirqs" is not enabled on the kernel command line, then there is no
impact in the interrupt hotpath.

Architecture code needs to select CONFIG_IRQ_FORCED_THREADING after
marking the interrupts which can't be threaded IRQF_NO_THREAD. All
interrupts which have IRQF_TIMER set are implicitly marked
IRQF_NO_THREAD. Also all PER_CPU interrupts are excluded.

Forced threading of hard interrupts also forces all soft interrupt
handling into thread context.

When enabled it might slow down things a bit, but for debugging problems in
interrupt code it's a reasonable penalty as it does not immediately
crash and burn the machine when an interrupt handler is buggy.

Some test results on a Core2Duo machine:

Cache cold run of:
# time git grep irq_desc

        non-threaded    threaded
real    1m18.741s       1m19.061s
user    0m1.874s        0m1.757s
sys     0m5.843s        0m5.427s

# iperf -c server
non-threaded
[ 3]  0.0-10.0 sec  1.09 GBytes  933 Mbits/sec
[ 3]  0.0-10.0 sec  1.09 GBytes  934 Mbits/sec
[ 3]  0.0-10.0 sec  1.09 GBytes  933 Mbits/sec
threaded
[ 3]  0.0-10.0 sec  1.09 GBytes  939 Mbits/sec
[ 3]  0.0-10.0 sec  1.09 GBytes  934 Mbits/sec
[ 3]  0.0-10.0 sec  1.09 GBytes  937 Mbits/sec

Signed-off-by: Thomas Gleixner
Cc: Peter Zijlstra
LKML-Reference:
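Enabling it is just a matter of adding the parameter to the kernel command line. An illustrative grub fragment, assuming a distribution that uses /etc/default/grub (file paths and variable names vary):

```shell
# /etc/default/grub -- illustrative only; run update-grub (or
# grub2-mkconfig -o /boot/grub2/grub.cfg) afterwards and reboot.
GRUB_CMDLINE_LINUX_DEFAULT="quiet threadirqs"

# After reboot, the parameter should show up in:
#   cat /proc/cmdline
```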
19 Feb, 2011
30 commits
-
Most of the managing functions get the irq descriptor and lock it -
either with or without buslock. Instead of open coding this over and
over, provide a common function to do that.

Signed-off-by: Thomas Gleixner
-
These transition helpers are stale for years now. Remove them.
Signed-off-by: Thomas Gleixner
-
If everything uses the right accessors, then enabling
GENERIC_HARDIRQS_NO_COMPAT should just work. If not, it will tell you.

Don't be lazy and use the trick which I use in the core code!

git grep status_use_accessors

will unearth it in a split second. Offenders are tracked down and not
slapped with stinking trouts. This time we use frozen shark for
better educational value.

Signed-off-by: Thomas Gleixner
-
Some irq_chips need to know the state of wakeup mode for
setting the trigger type etc. Reflect it in irq_data state.

Signed-off-by: Thomas Gleixner
-
irq_chips which require the chip to be masked before changing the
trigger type should set this flag, so the core takes care of it and
the requirement for looking into desc->status in the chip goes away.

Signed-off-by: Thomas Gleixner
Cc: Linus Walleij
Cc: Lars-Peter Clausen
-
Keep status in sync until last abuser is gone.
Signed-off-by: Thomas Gleixner
-
That's the right data structure to look at for arch code.
Accessor functions are provided.
irqd_is_per_cpu(irqdata);
irqd_can_balance(irqdata);

Coders who access them directly will be tracked down and slapped with
stinking trouts.

Signed-off-by: Thomas Gleixner
-
It'll break when I'm going to undefine the constants.
Signed-off-by: Thomas Gleixner
-
The saving of this switch is minimal versus the ifdef mess it
creates. Simply enable PER_CPU unconditionally and remove the config
switch.

Signed-off-by: Thomas Gleixner
-
chip implementations need to know about it. Keep status in sync until
all users are fixed.

Accessor function: irqd_is_setaffinity_pending(irqdata)

Coders who access them directly will be tracked down and slapped with
stinking trouts.

Signed-off-by: Thomas Gleixner
-
No users outside of core.
Signed-off-by: Thomas Gleixner
-
No users outside of core.
Signed-off-by: Thomas Gleixner
-
Keep status in sync until all users are fixed.
Signed-off-by: Thomas Gleixner
-
Keep status in sync until all users are fixed.
Signed-off-by: Thomas Gleixner
-
Keep status in sync until all abusers are fixed.
Signed-off-by: Thomas Gleixner
-
No users outside of core.
Signed-off-by: Thomas Gleixner
-
No users outside of core.
Signed-off-by: Thomas Gleixner
-
We need to maintain the flag for now in both fields status and istate.
Add a CONFIG_GENERIC_HARDIRQS_NO_COMPAT switch to allow testing w/o
the status one. Wrap the access to status IRQ_INPROGRESS in an inline
which can be turned off with CONFIG_GENERIC_HARDIRQS_NO_COMPAT along
with the define.

There is no reason that anything outside of core looks at this. That
needs some modifications, but we'll get there.

Signed-off-by: Thomas Gleixner
-
No users outside of core.
Signed-off-by: Thomas Gleixner
-
No users outside.
Signed-off-by: Thomas Gleixner
-
No users outside of core
Signed-off-by: Thomas Gleixner
-
The irq_desc.status field will either go away or be renamed to
settings. Anyway, we need to maintain compatibility to avoid breaking
the world and some more. While moving bits into the core, I need to
avoid using any of the still existing IRQ_ bits in the core code by
typo. So that file will hold the inline wrappers and some nasty CPP
tricks to break the build when typoed.

Signed-off-by: Thomas Gleixner
-
That field will contain internal state information which is not going
to be exposed to anything outside the core code - except via accessor
functions. I'm tired of everyone fiddling in irq_desc.status.

core_internal_state__do_not_mess_with_it is clear enough, annoying to
type and easy to grep for. Offenders will be tracked down and slapped
with stinking trouts.

Signed-off-by: Thomas Gleixner
-
Core code replacement for the ugly camel case. It contains all the
code which is shared in all handlers:

    clear status flags
    set INPROGRESS flag
    unlock
    call action chain
    note_interrupt
    lock
    clr INPROGRESS flag

Signed-off-by: Thomas Gleixner
-
Create irq_disable/enable and use them to keep the flags consistent.
Signed-off-by: Thomas Gleixner
-
Aside from duplicated code, some of the startup/shutdown sites do not
handle the MASKED/DISABLED flags and the depth field at all. Move that
to a helper function and take care of it there.

Signed-off-by: Thomas Gleixner
Cc: Peter Zijlstra
LKML-Reference:
-
Solely used in core code.
Signed-off-by: Thomas Gleixner
-
With the chip.end() function gone we might run into a situation where
a poll call runs and the real interrupt comes in, sees IRQ_INPROGRESS
and disables the line. That might be a perfectly working one, which will
then be masked forever.

So mark them polled while the poll runs. When the real handler sees
IRQ_INPROGRESS it checks the poll flag and waits for the polling to
complete. Add the necessary amount of sanity checks to it to avoid
deadlocks.

Signed-off-by: Thomas Gleixner
-
While rummaging through arch code I found that there are a few
workarounds which deal with the fact that the initial affinity setting
from request_irq() copies the mask into irq_data->affinity before the
chip code is called. In the normal path we unconditionally copy the
mask when the chip code returns 0.

Copy after the code is called and add a return code
IRQ_SET_MASK_OK_NOCOPY for the chip functions, which prevents the
copy. That way we see the real mask when the chip function decided to
truncate it further, as some arches do. IRQ_SET_MASK_OK is 0, which is
the current behaviour.

Signed-off-by: Thomas Gleixner
-
Lars-Peter Clausen pointed out:

  I stumbled upon this while looking through the existing archs using
  SPARSE_IRQ. Even with SPARSE_IRQ the NR_IRQS is still the upper
  limit for the number of IRQs.

  Both PXA and MMP set NR_IRQS to IRQ_BOARD_START, with
  IRQ_BOARD_START being the number of IRQs used by the core.

  In various machine files the nr_irqs field of the ARM machine
  definition struct is then set to "IRQ_BOARD_START + NR_BOARD_IRQS".

  As a result "nr_irqs" will be greater than NR_IRQS, which then again
  causes the "allocated_irqs" bitmap in the core irq code to be
  accessed beyond its size, overwriting unrelated data.

The core code really misses a sanity check there.

This went unnoticed so far as by chance the compiler/linker places
data behind that bitmap which gets initialized later on those affected
platforms.

So the obvious fix would be to add a sanity check in early_irq_init()
and break all affected platforms. Though that check wants to be
backported to stable as well, which will require fixing all known
problematic platforms and probably some more yet unknown ones as
well. Lots of churn.

A way simpler solution is to allocate a slightly larger bitmap and
avoid the whole churn w/o breaking anything. Add a few warnings when
an arch returns utter crap.

Reported-by: Lars-Peter Clausen
Signed-off-by: Thomas Gleixner
Cc: stable@kernel.org # .37
Cc: Haojian Zhuang
Cc: Eric Miao
Cc: Peter Zijlstra
12 Oct, 2010
4 commits
-
The move_irq_desc() function was only used due to the problem that the
allocator did not free the old descriptors. So the descriptors had to
be moved in create_irq_nr(). That's history.

The code would never have been able to move active interrupt
descriptors on affinity settings. That can be done in a completely
different way w/o all this horror.

Remove all of it.

Signed-off-by: Thomas Gleixner
Reviewed-by: Ingo Molnar
-
Use the cleanup functions of the dynamic allocator. No need to have
separate implementations.

Signed-off-by: Thomas Gleixner
Reviewed-by: Ingo Molnar
-
/proc/irq never removes any entries, but when irq descriptors can be
freed for real this is necessary. Otherwise we'd reference a freed
descriptor in /proc/irq/N.

Signed-off-by: Thomas Gleixner
Reviewed-by: Ingo Molnar
-
Move irq_desc and internal functions out of irq.h
Signed-off-by: Thomas Gleixner
Reviewed-by: Ingo Molnar
04 Oct, 2010
1 commit
-
This option covers now the old chip functions and the irq_desc data
fields which are moving to struct irq_data. More stuff will follow.

Pretty handy for testing a conversion, whether something broke or not.
Signed-off-by: Thomas Gleixner
Reviewed-by: Ingo Molnar