19 Oct, 2010
1 commit
-
Provide a mechanism that allows running code in IRQ context. It is
most useful for NMI code that needs to interact with the rest of the
system -- like waking up a task to drain buffers.

Perf currently has such a mechanism, so extract that and provide it as
a generic feature, independent of perf so that others may also
benefit.

The IRQ context callback is generated through self-IPIs where
possible, or on architectures like powerpc the decrementer (the
built-in timer facility) is set to generate an interrupt immediately.

Architectures that don't have anything like this have to make do with
a callback from the timer tick. These architectures can call
irq_work_run() at the tail of any IRQ handlers that might enqueue such
work (like the perf IRQ handler) to avoid undue latencies in
processing the work.

Signed-off-by: Peter Zijlstra
Acked-by: Kyle McMartin
Acked-by: Martin Schwidefsky
[ various fixes ]
Signed-off-by: Huang Ying
LKML-Reference:
Signed-off-by: Ingo Molnar
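The enqueue-then-drain pattern described above can be sketched in user space. This is an illustrative analogue, not the kernel's irq_work implementation; all names (`demo_irq_work`, `demo_irq_work_queue`, `demo_irq_work_run`) are made up for the example:

```c
#include <assert.h>
#include <stddef.h>

/* Work items are queued onto a pending list from a restricted context
 * and their callbacks run later, e.g. from the tail of an interrupt
 * handler. */
struct demo_irq_work {
    struct demo_irq_work *next;
    void (*func)(struct demo_irq_work *);
    int pending;
};

static struct demo_irq_work *demo_pending_list;

static int demo_run_count;
static void demo_count_cb(struct demo_irq_work *w)
{
    (void)w;
    demo_run_count++;
}

/* Queue a callback; in the kernel this would also raise a self-IPI
 * (or program the decrementer on powerpc) so an interrupt arrives soon. */
static void demo_irq_work_queue(struct demo_irq_work *work)
{
    if (work->pending)
        return; /* already queued, nothing to do */
    work->pending = 1;
    work->next = demo_pending_list;
    demo_pending_list = work;
}

/* Drain the list; the kernel analogue is irq_work_run(), called from
 * the IPI handler or the tail of IRQ handlers that enqueue such work. */
static void demo_irq_work_run(void)
{
    struct demo_irq_work *work = demo_pending_list;

    demo_pending_list = NULL;
    while (work) {
        struct demo_irq_work *next = work->next;

        work->pending = 0;
        work->func(work);
        work = next;
    }
}
```

Note the `pending` flag: queueing the same item twice before a drain runs the callback only once, which matches the "enqueue from NMI, drain later" use case.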
24 Sep, 2010
1 commit
-
…stedt/linux-2.6-trace into perf/core
23 Sep, 2010
4 commits
-
Conflicts:
arch/sparc/kernel/perf_event.c

Merge reason: Resolve the conflict.
Signed-off-by: Ingo Molnar
-
The !CC_OPTIMIZE_FOR_SIZE condition was added when enabling the jump label
functionality because Jason noticed that with that gcc option the labels
would not be optimized and performance might even suffer.

But this is a gcc problem, not a kernel one. Removing this condition should
add motivation for the gcc developers to actually fix it.

Cc: Jason Baron
Acked-by: David S. Miller
Signed-off-by: Steven Rostedt
-
Add jump label support for sparc64.
Signed-off-by: David S. Miller
LKML-Reference:
Signed-off-by: Jason Baron
[ cleaned up some formatting ]
Signed-off-by: Steven Rostedt
-
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6:
sparc: Prevent no-handler signal syscall restart recursion.
sparc: Don't mask signal when we can't setup signal frame.
sparc64: Fix race in signal instruction flushing.
sparc64: Support RAW perf events.
22 Sep, 2010
2 commits
-
Explicitly clear the "in-syscall" bit when we have no signal
handler and back up the program counters to restart the system
call.

Reported-by: Al Viro
Signed-off-by: David S. Miller
-
Don't invoke the signal handler tracehook in that situation
either.

Reported-by: Al Viro
Signed-off-by: David S. Miller
21 Sep, 2010
2 commits
-
Merge reason: Pick up the latest fixes in -rc5.
Signed-off-by: Ingo Molnar
-
If another cpu does a very wide munmap() on the signal frame area,
it can tear down the page table hierarchy from underneath us.

Borrow an idea from the 64-bit fault path's get_user_insn(), and
disable cross call interrupts during the page table traversal
to lock them in place while we operate.

Reported-by: Al Viro
Signed-off-by: David S. Miller
15 Sep, 2010
2 commits
-
…stedt/linux-2.6-trace into perf/core
-
compat_alloc_user_space() expects the caller to independently call
access_ok() to verify the returned area. A missing call could
introduce problems on some architectures.

This patch incorporates the access_ok() check into
compat_alloc_user_space() and also adds a sanity check on the length.
The existing compat_alloc_user_space() implementations are renamed
arch_compat_alloc_user_space() and are used as part of the
implementation of the new global function.

This patch assumes NULL will cause __get_user()/__put_user() to either
fail or access userspace on all architectures. This should be
followed by checking the return value of compat_alloc_user_space()
for NULL in the callers, at which time the access_ok() in the callers
can also be removed.

Reported-by: Ben Hawkes
Signed-off-by: H. Peter Anvin
Acked-by: Benjamin Herrenschmidt
Acked-by: Chris Metcalf
Acked-by: David S. Miller
Acked-by: Ingo Molnar
Acked-by: Thomas Gleixner
Acked-by: Tony Luck
Cc: Andrew Morton
Cc: Arnd Bergmann
Cc: Fenghua Yu
Cc: H. Peter Anvin
Cc: Heiko Carstens
Cc: Helge Deller
Cc: James Bottomley
Cc: Kyle McMartin
Cc: Martin Schwidefsky
Cc: Paul Mackerras
Cc: Ralf Baechle
Cc:
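The wrapper pattern described above can be sketched in user space. This is a rough analogue under assumed names (`demo_arch_alloc`, `demo_access_ok`, `demo_alloc_checked`), not the kernel's actual implementation:

```c
#include <stddef.h>

/* The arch-specific allocator is wrapped by a generic function that
 * adds a length sanity check plus the access check the callers used
 * to perform themselves, returning NULL on failure. */

#define DEMO_MAX_ALLOC 4096
static char demo_area[DEMO_MAX_ALLOC];

/* stands in for arch_compat_alloc_user_space() */
static void *demo_arch_alloc(size_t len)
{
    (void)len;
    return demo_area; /* pretend this carves space off a stack */
}

/* stands in for access_ok() */
static int demo_access_ok(const void *p, size_t len)
{
    return p != NULL && len <= DEMO_MAX_ALLOC;
}

/* the new generic entry point: validate, then hand out the area */
void *demo_alloc_checked(size_t len)
{
    void *p;

    if (len > DEMO_MAX_ALLOC)    /* sanity check on the length */
        return NULL;
    p = demo_arch_alloc(len);
    if (!demo_access_ok(p, len)) /* check moved out of the callers */
        return NULL;
    return p;
}
```

Centralizing the check means a caller that forgets access_ok() gets a NULL (which __get_user()/__put_user() are assumed to reject) rather than a silently unchecked area.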
13 Sep, 2010
1 commit
-
Encoding is "(encoding << 16) | pic_mask"
Signed-off-by: David S. Miller
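The layout quoted above packs two fields into one raw event value. A minimal sketch of the pack/unpack arithmetic, with helper names invented for this example:

```c
#include <stdint.h>

/* Raw event layout per the commit: event encoding in the upper bits,
 * pic_mask in the low 16 bits. */

static uint32_t raw_event_encode(uint32_t encoding, uint32_t pic_mask)
{
    return (encoding << 16) | (pic_mask & 0xffff);
}

static uint32_t raw_event_encoding(uint32_t raw)
{
    return raw >> 16;
}

static uint32_t raw_event_pic_mask(uint32_t raw)
{
    return raw & 0xffff;
}
```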
11 Sep, 2010
1 commit
-
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6:
sparc: Kill all BKL usage.
10 Sep, 2010
6 commits
-
Neither the overcommit nor the reservation sysfs parameter was
actually working; remove them as they'll only get in the way.

Signed-off-by: Peter Zijlstra
Cc: paulus
LKML-Reference:
Signed-off-by: Ingo Molnar
-
Replace pmu::{enable,disable,start,stop,unthrottle} with
pmu::{add,del,start,stop}, all of which take a flags argument.

The new interface extends the capability to stop a counter while
keeping it scheduled on the PMU. We replace the throttled state with
the generic stopped state.

This also allows us to efficiently stop/start counters over certain
code paths (like IRQ handlers).

It also allows scheduling a counter without it starting, allowing for
a generic frozen state (useful for rotating stopped counters).

The stopped state is implemented in two different ways, depending on
how the architecture implemented the throttled state:

 1) We disable the counter:
    a) the pmu has per-counter enable bits, we flip that
    b) we program a NOP event, preserving the counter state

 2) We store the counter state and ignore all read/overflow events
Signed-off-by: Peter Zijlstra
Cc: paulus
Cc: stephane eranian
Cc: Robert Richter
Cc: Will Deacon
Cc: Paul Mundt
Cc: Frederic Weisbecker
Cc: Cyrill Gorcunov
Cc: Lin Ming
Cc: Yanmin
Cc: Deng-Cheng Zhu
Cc: David Miller
Cc: Michael Cree
LKML-Reference:
Signed-off-by: Ingo Molnar
-
Changes perf_disable() into perf_pmu_disable().
Signed-off-by: Peter Zijlstra
Cc: paulus
Cc: stephane eranian
Cc: Robert Richter
Cc: Will Deacon
Cc: Paul Mundt
Cc: Frederic Weisbecker
Cc: Cyrill Gorcunov
Cc: Lin Ming
Cc: Yanmin
Cc: Deng-Cheng Zhu
Cc: David Miller
Cc: Michael Cree
LKML-Reference:
Signed-off-by: Ingo Molnar
-
Since the current perf_disable() usage is only an optimization,
remove it for now. This eases the removal of the __weak
hw_perf_enable() interface.

Signed-off-by: Peter Zijlstra
Cc: paulus
Cc: stephane eranian
Cc: Robert Richter
Cc: Will Deacon
Cc: Paul Mundt
Cc: Frederic Weisbecker
Cc: Cyrill Gorcunov
Cc: Lin Ming
Cc: Yanmin
Cc: Deng-Cheng Zhu
Cc: David Miller
Cc: Michael Cree
LKML-Reference:
Signed-off-by: Ingo Molnar
-
Simple registration interface for struct pmu, this provides the
infrastructure for removing all the weak functions.

Signed-off-by: Peter Zijlstra
Cc: paulus
Cc: stephane eranian
Cc: Robert Richter
Cc: Will Deacon
Cc: Paul Mundt
Cc: Frederic Weisbecker
Cc: Cyrill Gorcunov
Cc: Lin Ming
Cc: Yanmin
Cc: Deng-Cheng Zhu
Cc: David Miller
Cc: Michael Cree
LKML-Reference:
Signed-off-by: Ingo Molnar
-
sed -ie 's/const struct pmu\>/struct pmu/g' `git grep -l "const struct pmu\>"`
Signed-off-by: Peter Zijlstra
Cc: paulus
Cc: stephane eranian
Cc: Robert Richter
Cc: Will Deacon
Cc: Paul Mundt
Cc: Frederic Weisbecker
Cc: Cyrill Gorcunov
Cc: Lin Ming
Cc: Yanmin
Cc: Deng-Cheng Zhu
Cc: David Miller
Cc: Michael Cree
LKML-Reference:
Signed-off-by: Ingo Molnar
09 Sep, 2010
1 commit
-
They were all bogus artifacts and completely unnecessary.
Signed-off-by: David S. Miller
29 Aug, 2010
1 commit
-
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input:
Input: pxa27x_keypad - remove input_free_device() in pxa27x_keypad_remove()
Input: mousedev - fix regression of inverting axes
Input: uinput - add devname alias to allow module on-demand load
Input: hil_kbd - fix compile error
USB: drop tty argument from usb_serial_handle_sysrq_char()
Input: sysrq - drop tty argument form handle_sysrq()
Input: sysrq - drop tty argument from sysrq ops handlers
25 Aug, 2010
2 commits
-
Merge reason: pick up perf fixes
Signed-off-by: Ingo Molnar
-
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6:
sparc64: Get rid of indirect p1275 PROM call buffer.
sparc64: Fill a missing delay slot.
sparc64: Make lock backoff really a NOP on UP builds.
sparc64: simple microoptimizations for atomic functions
sparc64: Make rwsems 64-bit.
sparc64: Really fix atomic64_t interface types.
24 Aug, 2010
1 commit
-
This is based upon a report by Meelis Roos showing that it's possible
that we'll try to fetch a property that is 32K in size with some
devices. With the current fixed 3K buffer we use for moving data in
and out of the firmware during PROM calls, that simply won't work.

In fact, it will scramble random kernel data during bootup.

The reasoning behind the temporary buffer is entirely historical. It
used to be the case that we had problems referencing dynamic kernel
memory (including the stack) early in the boot process before we
explicitly told the firmware to switch us over to the kernel trap
table.

So what we did was always give the firmware buffers that were locked
into the main kernel image.

But we no longer have problems like that, so get rid of all of this
indirect bounce buffering.

Besides fixing Meelis's bug, this also makes the kernel data about 3K
smaller.

It was also discovered during these conversions that the
implementation of prom_retain() was completely wrong, so that was
fixed here as well. Currently that interface is not in use.

Reported-by: Meelis Roos
Tested-by: Meelis Roos
Signed-off-by: David S. Miller
20 Aug, 2010
2 commits
-
No one is using the tty argument, so let's get rid of it.
Acked-by: Alan Cox
Acked-by: Jason Wessel
Acked-by: Greg Kroah-Hartman
Signed-off-by: Dmitry Torokhov
-
If the code were already aligned to 64 bytes, the wr instruction would be
executed twice --- once in the delay slot and once at the jump target.

Signed-off-by: Mikulas Patocka
Signed-off-by: David S. Miller
19 Aug, 2010
6 commits
-
…rostedt/linux-2.6-trace into perf/core
-
As noticed by Mikulas Patocka, the backoff macros don't
completely nop out for UP builds; we still get a
branch-always and a delay slot nop.

Fix this by making the branch to the backoff spin loop
selective; then we can nop out the spin loop completely.

Signed-off-by: David S. Miller
-
Simple microoptimizations for sparc64 atomic functions:
Save one instruction by using a delay slot.
Use %g1 instead of %g7, because %g1 is written earlier.

Signed-off-by: Mikulas Patocka
Signed-off-by: David S. Miller
-
Store the kernel and user contexts from the generic layer instead
of the archs; this gathers some repetitive code.

Signed-off-by: Frederic Weisbecker
Acked-by: Paul Mackerras
Tested-by: Will Deacon
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Arnaldo Carvalho de Melo
Cc: Stephane Eranian
Cc: David Miller
Cc: Paul Mundt
Cc: Borislav Petkov
-
- Most archs use one callchain buffer per cpu, except x86, which needs
  to deal with NMIs. Provide a default perf_callchain_buffer()
  implementation that x86 overrides.

- Centralize all the kernel/user regs handling and invoke new arch
  handlers from there: perf_callchain_user() / perf_callchain_kernel().
  That avoids all the user_mode(), current->mm checks and so on.

- Invert some parameters in perf_callchain_*() helpers: entry to the
  left, regs to the right, following the traditional (dst, src).

Signed-off-by: Frederic Weisbecker
Acked-by: Paul Mackerras
Tested-by: Will Deacon
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Arnaldo Carvalho de Melo
Cc: Stephane Eranian
Cc: David Miller
Cc: Paul Mundt
Cc: Borislav Petkov
-
callchain_store() is the same on every arch; inline it in
perf_event.h and rename it to perf_callchain_store() to avoid
any collision.

This removes repetitive code.
Signed-off-by: Frederic Weisbecker
Acked-by: Paul Mackerras
Tested-by: Will Deacon
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Arnaldo Carvalho de Melo
Cc: Stephane Eranian
Cc: David Miller
Cc: Paul Mundt
Cc: Borislav Petkov
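The kind of helper being inlined here is small enough to sketch. This is a user-space approximation under invented names (`demo_callchain_entry`, `demo_callchain_store`); the depth limit is illustrative, not the kernel's constant:

```c
#include <stdint.h>

/* Append an instruction pointer to a fixed-size callchain entry,
 * silently dropping it once the buffer is full -- the shape of
 * helper that every arch previously duplicated. */

#define DEMO_MAX_STACK_DEPTH 127

struct demo_callchain_entry {
    uint64_t nr;
    uint64_t ip[DEMO_MAX_STACK_DEPTH];
};

static inline void demo_callchain_store(struct demo_callchain_entry *entry,
                                        uint64_t ip)
{
    if (entry->nr < DEMO_MAX_STACK_DEPTH)
        entry->ip[entry->nr++] = ip;
}
```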
18 Aug, 2010
6 commits
-
Basically tip-off the powerpc code: use a 64-bit type and atomic64_t
interfaces for the implementation.

This gets us off of the by-hand asm code I wrote, which frankly I
think probably ruins I-cache hit rates.

The idea was to keep the call chains less deep, but anything taking
the rw-semaphores probably is also calling other stuff and therefore
already has allocated a stack-frame. So no real stack frame savings
ever.

Ben H. has posted patches to make powerpc use 64-bit too and with some
abstractions we can probably use a shared header file somewhere.

With suggestions from Sam Ravnborg.
Signed-off-by: David S. Miller
-
Linus noticed that some of the interface arguments
didn't get "int" --> "long" conversion, as needed.

Signed-off-by: David S. Miller
-
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6:
sparc64: Fix atomic64_t routine return values.
sparc64: Fix rwsem constant bug leading to hangs.
sparc: Hook up new fanotify and prlimit64 syscalls.
sparc: Really fix "console=" for serial consoles.
-
Make do_execve() take a const filename pointer so that kernel_execve() compiles
correctly on ARM:

arch/arm/kernel/sys_arm.c:88: warning: passing argument 1 of 'do_execve' discards qualifiers from pointer target type

This also requires the argv and envp arguments to be consted twice, once for
the pointer array and once for the strings the array points to. This is
because do_execve() passes a pointer to the filename (now const) to
copy_strings_kernel(). A simpler alternative would be to cast the filename
pointer in do_execve() when it's passed to copy_strings_kernel().

do_execve() may not change any of the strings it is passed as part of the argv
or envp lists, as some of them are in .rodata, so marking these strings as
const should be fine.

Further, kernel_execve() and sys_execve() need to be changed to match.

This has been test built on x86_64, frv, arm and mips.

Signed-off-by: David Howells
Tested-by: Ralf Baechle
Acked-by: Russell King
Signed-off-by: Linus Torvalds
-
Should return 'long' instead of 'int'.
Thanks to Dimitris Michailidis and Tony Luck.
Signed-off-by: David S. Miller
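Why the return type matters can be shown with a small user-space example, assuming an LP64 target (where `long` is 64-bit, as on sparc64); the function names are invented for this sketch:

```c
/* If a 64-bit atomic op is declared to return 'int', the upper
 * 32 bits of the result are silently truncated at the return. */

static long counter_add_ret_long(long *v, long inc)
{
    *v += inc;
    return *v; /* full 64-bit result on an LP64 target */
}

static int counter_add_ret_int(long *v, long inc)
{
    *v += inc;
    return (int)*v; /* buggy: only the low 32 bits survive */
}
```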
-
As noticed by Linus, it is critical that some of the
rwsem constants be signed. Yet, hex constants are
unsigned unless explicitly casted or negated.

The most critical one is RWSEM_WAITING_BIAS.

This bug was exacerbated by commit
424acaaeb3a3932d64a9b4bd59df6cf72c22d8f3 ("rwsem: wake queued readers
when writer blocks on active read lock")

Signed-off-by: David S. Miller
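The pitfall is easy to demonstrate in user space, assuming an LP64 target where `long` is 64-bit; the names mirror the rwsem constants but the values are only examples:

```c
/* A hex constant that doesn't fit in signed long gets type
 * unsigned long, so it never reads as negative; writing the same
 * bit pattern as a negated constant keeps it signed. */

/* value as written: 0xffffffff00000000 exceeds LONG_MAX,
 * so the constant's type is unsigned long */
static const unsigned long demo_bias_as_written = 0xffffffff00000000L;

/* the same bit pattern, written as an explicitly signed expression */
static const long demo_bias_signed = -0x100000000L;

/* what code like the rwsem slow path effectively does */
static int demo_reads_negative(long counter)
{
    return counter < 0;
}
```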
17 Aug, 2010
1 commit
-
The only tricky bit is the compat version of fanotify_mark, where
on 32-bit the 64-bit mark argument is passed in as "high32",
"low32".

Signed-off-by: David S. Miller