28 May, 2020
4 commits
-
Putting the rseq_syscall check at the prologue of the syscall path
breaks a0 ... a7. This causes a system call bug when DEBUG_RSEQ is
enabled. So move it to the epilogue of the syscall path, but before syscall_trace.
Signed-off-by: Guo Ren
-
There is no fixup or new feature in this patch; it is only a cleanup:
- Remove unnecessarily used registers (r11, r12); just use r9, r10 and
the syscallid register as temporaries.
- Add _TIF_SYSCALL_WORK and _TIF_WORK_MASK to gather the flag macros.
Signed-off-by: Guo Ren
-
The current implementation could destroy a4 & a5 under strace, so we need to get them
from the pt_regs saved by SAVE_ALL.
Signed-off-by: Guo Ren
-
log:
[ 0.13373200] Calibrating delay loop...
[ 0.14077600] ------------[ cut here ]------------
[ 0.14116700] WARNING: CPU: 0 PID: 0 at kernel/sched/core.c:3790 preempt_count_add+0xc8/0x11c
[ 0.14348000] DEBUG_LOCKS_WARN_ON((preempt_count() < 0))
Modules linked in:
[ 0.14395100] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.6.0 #7
[ 0.14410800]
[ 0.14427400] Call Trace:
[ 0.14450700] [] dump_stack+0x8a/0xe4
[ 0.14473500] [] __warn+0x10e/0x15c
[ 0.14495900] [] warn_slowpath_fmt+0x72/0xc0
[ 0.14518600] [] preempt_count_add+0xc8/0x11c
[ 0.14544900] [] _raw_spin_lock+0x28/0x68
[ 0.14572600] [] vprintk_emit+0x84/0x2d8
[ 0.14599000] [] vprintk_default+0x2e/0x44
[ 0.14625100] [] vprintk_func+0x12a/0x1d0
[ 0.14651300] [] printk+0x30/0x48
[ 0.14677600] [] lockdep_init+0x12/0xb0
[ 0.14703800] [] start_kernel+0x558/0x7f8
[ 0.14730000] [] csky_start+0x58/0x94
[ 0.14756600] irq event stamp: 34
[ 0.14775100] hardirqs last enabled at (33): [] ret_from_exception+0x2c/0x72
[ 0.14793700] hardirqs last disabled at (34): [] vprintk_emit+0x7a/0x2d8
[ 0.14812300] softirqs last enabled at (32): [] __do_softirq+0x578/0x6d8
[ 0.14830800] softirqs last disabled at (25): [] irq_exit+0xec/0x128

The register caching preempt_count could be destroyed after csky_do_IRQ without
being reloaded from memory. Following other architectures (arm64, riscv), we move
the preempt entry into ret_from_exception and disable irq at the beginning of
ret_from_exception instead of in RESTORE_ALL.
Signed-off-by: Guo Ren
Reported-by: Lu Baoquan
15 May, 2020
2 commits
-
If raw_copy_from_user(to, from, N) returns K, callers expect
the first N - K bytes starting at to to have been replaced with
the contents of the corresponding area starting at from, and the last
K bytes of the destination *left* *unmodified*.

What arch/csky/lib/usercopy.c is doing is broken - it can lead to e.g.
data corruption on write(2). raw_copy_to_user() is inaccurate about the
return value, which is a bug, but the consequences are less drastic than
for raw_copy_from_user().

And just what are those access_ok() doing in there? I mean, look into
linux/uaccess.h; that's where we do that check (as well as zero the tail
on failure in the callers that need zeroing).

AFAICS, all of that shouldn't be hard to fix; something like the patch
below might make a useful starting point.

I would suggest moving these macros into usercopy.c (they are never
used anywhere else) and possibly expanding them there; if you leave
them alive, please at least rename __copy_user_zeroing(). Again,
it must not zero anything on a failed read.

Said that, I'm not sure we won't be better off simply turning
usercopy.c into usercopy.S - all that is left there is a couple of
functions, each consisting only of inline asm.

Guo Ren reply:
Yes, raw_copy_from_user is wrong; there's no need for the zeroing code.
unsigned long _copy_from_user(void *to, const void __user *from,
			      unsigned long n)
{
	unsigned long res = n;

	might_fault();
	if (likely(access_ok(from, n))) {
		kasan_check_write(to, n);
		res = raw_copy_from_user(to, from, n);
	}
	if (unlikely(res))
		memset(to + (n - res), 0, res);
	return res;
}
EXPORT_SYMBOL(_copy_from_user);

You are right and access_ok() should be removed.
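The zero-tail contract that the generic _copy_from_user() above enforces can be checked with a small user-space sketch. Everything here is a hypothetical stand-in: fake_raw_copy() simulates a fault after a given number of bytes and, like the real raw_copy_from_user(), returns the count of bytes *not* copied.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-in for raw_copy_from_user(): copies until a simulated
 * fault at 'fault_at' bytes, returning the number of bytes NOT copied. */
static unsigned long fake_raw_copy(void *to, const void *from,
                                   unsigned long n, unsigned long fault_at)
{
    unsigned long ok = n < fault_at ? n : fault_at;

    memcpy(to, from, ok);   /* the first ok bytes are copied intact */
    return n - ok;          /* residue: bytes left uncopied */
}

/* Mirror of the generic _copy_from_user(): on a short copy, zero the
 * uncopied tail so callers never see stale destination memory. */
static unsigned long copy_from_user_sketch(void *to, const void *from,
                                           unsigned long n,
                                           unsigned long fault_at)
{
    unsigned long res = fake_raw_copy(to, from, n, fault_at);

    if (res)
        memset((char *)to + (n - res), 0, res);
    return res;
}
```

With n = 8 and a fault after 5 bytes, the first 5 destination bytes match the source and the last 3 are zeroed - exactly the behavior the generic caller guarantees, which is why the arch-level zeroing was redundant.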
but, how about:
do {
...
"2: stw %3, (%1, 0) \n" \
+ " subi %0, 4 \n" \
"9: stw %4, (%1, 4) \n" \
+ " subi %0, 4 \n" \
"10: stw %5, (%1, 8) \n" \
+ " subi %0, 4 \n" \
"11: stw %6, (%1, 12) \n" \
+ " subi %0, 4 \n" \
" addi %2, 16 \n" \
" addi %1, 16 \n" \

Don't expand __ex_table.
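In C terms, the extra subi lines change the bookkeeping of the copy loop: instead of computing the residue once at the end, the remaining count is decremented after every word that actually lands, so a fault mid-block reports an exact byte count. A user-space sketch of that idea (not csky asm; the fault is simulated by a byte threshold):

```c
#include <assert.h>

/* Sketch of the proposed fixup: 'n' is decremented by 4 after each 4-byte
 * store, mirroring the added "subi %0, 4" lines, so if a fault stops the
 * loop mid-block the leftover count in 'n' is already exact. */
static unsigned long copy_words_sketch(unsigned int *to,
                                       const unsigned int *from,
                                       unsigned long n,
                                       unsigned long fault_at_byte)
{
    unsigned long copied = 0;

    while (n >= 4) {
        if (copied + 4 > fault_at_byte)  /* simulated access fault */
            return n;                    /* residue needs no fixup math */
        *to++ = *from++;
        n -= 4;                          /* the "subi %0, 4" step */
        copied += 4;
    }
    return n;                            /* sub-word tail not handled here */
}
```

The trade-off Al Viro raises below is exactly this: the per-word decrement adds instructions to the hot loop in exchange for simpler fixup code.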
Al Viro reply:
Hey, I've no idea about the instruction scheduling on csky -
if that doesn't slow the things down, all the better. It's just
that copy_to_user() and friends are on fairly hot codepaths,
and in quite a few situations they will dominate the speed of
e.g. read(2). So I tried to keep the fast path unchanged.
Up to the architecture maintainers, obviously. Which would be
you...

As for the fixup size increase (__ex_table size is unchanged)...
You have each of those macros expanded exactly once.
So the size is not a serious argument, IMO - useless complexity
would be, if it is, in fact, useless; the size... not really,
especially since those extra subi will at least offset it.

Again, up to you - asm optimizations of (essentially)
memcpy()-style loops are tricky and can depend upon the
fairly subtle details of architecture. So even on something
I know reasonably well I would resort to direct experiments
if I can't pass the buck to architecture maintainers.

It *is* worth optimizing - this is where read() from a file
that is already in the page cache spends most of the time, etc.

Guo Ren reply:
Thx, after fixing the typo ("sub %0, 4"), I applied the patch.
TODO:
- the user copy to/from code still needs optimizing.
Signed-off-by: Al Viro
Signed-off-by: Guo Ren
-
gdbmacros.txt uses sp in thread_struct, but csky uses ksp. This
causes bttnobp to fail to execute.

TODO:
- Still couldn't display the contents of the stack.
Signed-off-by: Guo Ren
13 May, 2020
8 commits
-
All processes' PSR could come from SETUP_MMU, so we need to set it
in INIT_THREAD again.

And use a3 instead of r7 in __switch_to, for code convention.
Signed-off-by: Guo Ren
-
Interrupts have been disabled in __schedule() with local_irq_disable()
and are enabled in finish_task_switch()->finish_lock_switch() with
local_irq_enable(), so there is no need to disable irq here.
Signed-off-by: Liu Yibin
Signed-off-by: Guo Ren
-
The implementation of show_stack will panic with a wrong fp:
	addr = *fp++;
because the fp isn't checked properly.

The current implementations of show_stack, wchan and stack_trace
weren't designed properly, so just deprecate them.

This patch follows riscv's way; all the code is adapted from arm's. The
patch passes with:
- cat /proc//stack
- cat /proc//wchan
- echo c > /proc/sysrq-trigger
Signed-off-by: Guo Ren
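The missing validation amounts to a bounds-and-alignment check on each frame pointer before it is dereferenced. A hedged user-space sketch (hypothetical helper; real kernels derive the bounds from the task's stack allocation):

```c
#include <assert.h>
#include <stdint.h>

/* A frame pointer is only safe to dereference if it is word-aligned and
 * lies inside the kernel stack with room for a saved fp/lr pair.
 * Illustrative sketch, not the csky implementation. */
static int kstack_fp_ok(uintptr_t fp, uintptr_t stack_low,
                        uintptr_t stack_high)
{
    if (fp & (sizeof(uintptr_t) - 1))    /* misaligned pointer */
        return 0;
    if (fp < stack_low ||                /* below the stack region */
        fp + 2 * sizeof(uintptr_t) > stack_high)  /* no room to read */
        return 0;
    return 1;
}
```

An unwinder loop would call such a check before every `addr = *fp++;` step and stop at the first failure instead of faulting.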
-
[ 5221.974084] Unable to handle kernel paging request at virtual address 0xfffff000, pc: 0x8002c18e
[ 5221.985929] Oops: 00000000
[ 5221.989488]
[ 5221.989488] CURRENT PROCESS:
[ 5221.989488]
[ 5221.992877] COMM=callchain_test PID=11962
[ 5221.995213] TEXT=00008000-000087e0 DATA=00009f1c-0000a018 BSS=0000a018-0000b000
[ 5221.999037] USER-STACK=7fc18e20 KERNEL-STACK=be204680
[ 5221.999037]
[ 5222.003292] PC: 0x8002c18e (perf_callchain_kernel+0x3e/0xd4)
[ 5222.007957] LR: 0x8002c198 (perf_callchain_kernel+0x48/0xd4)
[ 5222.074873] Call Trace:
[ 5222.074873] [] get_perf_callchain+0x20a/0x29c
[ 5222.074873] [] perf_callchain+0x64/0x80
[ 5222.074873] [] perf_prepare_sample+0x29c/0x4b8
[ 5222.074873] [] perf_event_output_forward+0x36/0x98
[ 5222.074873] [] search_exception_tables+0x20/0x44
[ 5222.074873] [] do_page_fault+0x92/0x378
[ 5222.074873] [] __perf_event_overflow+0x54/0xdc
[ 5222.074873] [] perf_swevent_hrtimer+0xe8/0x164
[ 5222.074873] [] update_mmu_cache+0x0/0xd8
[ 5222.074873] [] user_backtrace+0x58/0xc4
[ 5222.074873] [] perf_callchain_user+0x34/0xd0
[ 5222.074873] [] get_perf_callchain+0x1be/0x29c
[ 5222.074873] [] perf_callchain+0x64/0x80
[ 5222.074873] [] perf_output_sample+0x78c/0x858
[ 5222.074873] [] perf_prepare_sample+0x29c/0x4b8
[ 5222.074873] [] perf_event_output_forward+0x5c/0x98
[ 5222.097846]
[ 5222.097846] [] perf_event_exit_task+0x58/0x43c
[ 5222.097846] [] hrtimer_interrupt+0x104/0x2ec
[ 5222.097846] [] perf_event_exit_task+0x58/0x43c
[ 5222.097846] [] dw_apb_clockevent_irq+0x2a/0x4c
[ 5222.097846] [] hrtimer_interrupt+0x0/0x2ec
[ 5222.097846] [] __handle_irq_event_percpu+0xac/0x19c
[ 5222.097846] [] dw_apb_clockevent_irq+0x2a/0x4c
[ 5222.097846] [] handle_irq_event_percpu+0x34/0x88
[ 5222.097846] [] handle_irq_event+0x24/0x64
[ 5222.097846] [] handle_level_irq+0x68/0xdc
[ 5222.097846] [] __handle_domain_irq+0x56/0xa8
[ 5222.097846] [] ck_irq_handler+0xac/0xe4
[ 5222.097846] [] csky_do_IRQ+0x12/0x24
[ 5222.097846] [] csky_irq+0x70/0x80
[ 5222.097846] [] alloc_set_pte+0xd2/0x238
[ 5222.097846] [] update_mmu_cache+0x0/0xd8
[ 5222.097846] [] perf_event_exit_task+0x98/0x43c

The original fp check isn't based on the real kernel stack region;
an invalid fp address may cause a kernel panic.
Signed-off-by: Mao Han
Signed-off-by: Guo Ren
-
Just as the comment mentions, the MSA format is:

cr MSA register format:
31 - 29 | 28 - 9 | 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0
  BA     Reserved  SH  WA   B   SO  SEC  C   D   V

So we should shift 29 bits, not 28 bits, for the mask.
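The off-by-one in the shift is easy to see in plain C. Only the BA field position (bits 31-29) is taken from the layout above; the macro names are illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* BA (base address) occupies bits 31-29 of the MSA register, so its mask
 * must be built with a 29-bit shift. Shifting by 28 drags bit 28 (part of
 * the Reserved field) into the mask - the bug this patch fixes. */
#define MSA_BA_SHIFT       29
#define MSA_BA_MASK        (0x7u << MSA_BA_SHIFT)  /* 0xE0000000 */
#define MSA_BA_MASK_WRONG  (0x7u << 28)            /* 0x70000000 */

/* Extract just the 3-bit base-address field from an MSA value. */
static uint32_t msa_base(uint32_t msa)
{
    return msa & MSA_BA_MASK;
}
```

Masking with the wrong shift would both lose bit 31 of the base and leak a reserved bit into it.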
Signed-off-by: Liu Yibin
Signed-off-by: Guo Ren
-
case:
# perf probe -x /lib/libc-2.28.9000.so memcpy
# perf record -e probe_libc:memcpy -aR sleep 1

The system hangs and the cpu gets into a trap_c loop, because our
hardware single-step state can still receive an interrupt signal. When
we get into the uprobe_xol single-step slot, we should disable irq in
pt_regs->psr.

And is_swbp_insn() needs a csky arch implementation with a low 16-bit
mask.
Signed-off-by: Guo Ren
Cc: Steven Rostedt (VMware)
-
This bug comes from the uprobe flag definition in thread_info.h. The
immediate range of the abiv1 andi instruction is smaller than abiv2's,
which causes:

  AS      arch/csky/kernel/entry.o
arch/csky/kernel/entry.S: Assembler messages:
arch/csky/kernel/entry.S:224: Error: Operand 2 immediate is overflow.
Signed-off-by: Guo Ren
-
When CONFIG_DYNAMIC_FTRACE is enabled, static ftrace will fail to
compile and boot. It was carelessness while developing "dynamic
ftrace" and "ftrace with regs".
Signed-off-by: Guo Ren
11 Apr, 2020
2 commits
-
Currently there are many platforms that don't enable ARCH_HAS_PTE_SPECIAL
but are required to define quite similar fallback stubs for special page
table entry helpers such as pte_special() and pte_mkspecial(), as they
get built into generic MM without a config check. This creates two
generic fallback stub definitions for these helpers, eliminating much
code duplication.

The mips platform has a special case where pte_special() and pte_mkspecial()
visibility is wider than what ARCH_HAS_PTE_SPECIAL enablement requires.
This restricts those symbols' visibility in order to avoid redefinitions,
which are now exposed through the new generic stubs and cause a subsequent
build failure. The arm platform's set_pte_at() definition needs to be moved
into a C file just to prevent a build failure.

[anshuman.khandual@arm.com: use defined(CONFIG_ARCH_HAS_PTE_SPECIAL) in mips per Thomas]
Link: http://lkml.kernel.org/r/1583851924-21603-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual
Signed-off-by: Andrew Morton
Acked-by: Guo Ren [csky]
Acked-by: Geert Uytterhoeven [m68k]
Acked-by: Stafford Horne [openrisc]
Acked-by: Helge Deller [parisc]
Cc: Richard Henderson
Cc: Ivan Kokshaysky
Cc: Matt Turner
Cc: Russell King
Cc: Brian Cain
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Sam Creasey
Cc: Michal Simek
Cc: Ralf Baechle
Cc: Paul Burton
Cc: Nick Hu
Cc: Greentime Hu
Cc: Vincent Chen
Cc: Ley Foon Tan
Cc: Jonas Bonn
Cc: Stefan Kristiansson
Cc: "James E.J. Bottomley"
Cc: "David S. Miller"
Cc: Jeff Dike
Cc: Richard Weinberger
Cc: Anton Ivanov
Cc: Guan Xuetao
Cc: Chris Zankel
Cc: Max Filippov
Cc: Thomas Bogendoerfer
Link: http://lkml.kernel.org/r/1583802551-15406-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Linus Torvalds
-
There are many platforms with the exact same value for VM_DATA_DEFAULT_FLAGS.
This creates a default value for VM_DATA_DEFAULT_FLAGS in line with the
existing VM_STACK_DEFAULT_FLAGS. While here, also define some more
macros with standard VMA access flag combinations that are used
frequently across many platforms. Apart from simplification, this
reduces code duplication as well.
Signed-off-by: Anshuman Khandual
Signed-off-by: Andrew Morton
Reviewed-by: Vlastimil Babka
Acked-by: Geert Uytterhoeven
Cc: Richard Henderson
Cc: Vineet Gupta
Cc: Russell King
Cc: Catalin Marinas
Cc: Mark Salter
Cc: Guo Ren
Cc: Yoshinori Sato
Cc: Brian Cain
Cc: Tony Luck
Cc: Michal Simek
Cc: Ralf Baechle
Cc: Paul Burton
Cc: Nick Hu
Cc: Ley Foon Tan
Cc: Jonas Bonn
Cc: "James E.J. Bottomley"
Cc: Michael Ellerman
Cc: Paul Walmsley
Cc: Heiko Carstens
Cc: Rich Felker
Cc: "David S. Miller"
Cc: Guan Xuetao
Cc: Thomas Gleixner
Cc: Jeff Dike
Cc: Chris Zankel
Link: http://lkml.kernel.org/r/1583391014-8170-2-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Linus Torvalds
08 Apr, 2020
2 commits
-
It is unlikely that an inaccessible VMA without the required permission
flags will get a page fault. Hence let's just append the unlikely()
directive to such checks in order to improve performance while also
standardizing them across various platforms.
Signed-off-by: Anshuman Khandual
Signed-off-by: Andrew Morton
Reviewed-by: Andrew Morton
Cc: Guo Ren
Cc: Geert Uytterhoeven
Cc: Ralf Baechle
Cc: Paul Burton
Cc: Mike Rapoport
Link: http://lkml.kernel.org/r/1582525304-32113-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Linus Torvalds
-
Let's move the vma_is_accessible() helper to include/linux/mm.h, which
makes it available for general use. While here, this replaces all
remaining open encodings of the VMA access check with vma_is_accessible().
Signed-off-by: Anshuman Khandual
Signed-off-by: Andrew Morton
Acked-by: Geert Uytterhoeven
Acked-by: Guo Ren
Acked-by: Vlastimil Babka
Cc: Guo Ren
Cc: Geert Uytterhoeven
Cc: Ralf Baechle
Cc: Paul Burton
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: Yoshinori Sato
Cc: Rich Felker
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Steven Rostedt
Cc: Mel Gorman
Cc: Alexander Viro
Cc: "Aneesh Kumar K.V"
Cc: Arnaldo Carvalho de Melo
Cc: Arnd Bergmann
Cc: Nick Piggin
Cc: Paul Mackerras
Cc: Will Deacon
Link: http://lkml.kernel.org/r/1582520593-30704-3-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Linus Torvalds
07 Apr, 2020
1 commit
-
Pull csky updates from Guo Ren:
- Add kprobes/uprobes support
- Add lockdep, rseq, gcov support
- Fixup init_fpu
- Fixup ftrace_modify deadlock
- Fixup speculative execution on IO area
* tag 'csky-for-linus-5.7-rc1' of git://github.com/c-sky/csky-linux:
csky: Fixup cpu speculative execution to IO area
csky: Add uprobes support
csky: Add kprobes supported
csky: Enable LOCKDEP_SUPPORT
csky: Enable the gcov function
csky: Fixup get wrong psr value from phyical reg
csky/ftrace: Fixup ftrace_modify_code deadlock without CPU_HAS_ICACHE_INS
csky: Implement ftrace with regs
csky: Add support for restartable sequence
csky: Implement ptrace regs and stack API
csky: Fixup init_fpu compile warning with __init
03 Apr, 2020
4 commits
-
For memory sizes ( > 512MB, < 1GB ), the MSA setting is:
- SSEG0: PHY_START        , PHY_START + 512MB
- SSEG1: PHY_START + 512MB, PHY_START + 1GB

But when the real memory is less than 1GB, there is a gap between the
end of memory and the 1GB border. The CPU could speculatively execute
into that gap, and if the bus can't respond to the CPU request there,
a crash will happen.

Now make the setting:
- SSEG0: PHY_START        , PHY_START + 512MB (no change)
- SSEG1: Disabled (we use highmem for the memory from 512MB to 1GB)

We also deprecate the zhole_size[] settings; they are only used by
arm-style CPUs. All memory gaps should use the Reserved setting of the
dts in the csky system.
Signed-off-by: Guo Ren
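The hazardous region the patch eliminates can be computed with a small sketch. This is illustrative only, not kernel code; it just expresses the gap between the end of real memory and the 1GB region that SSEG0+SSEG1 would make speculatively fetchable:

```c
#include <assert.h>
#include <stdint.h>

#define SZ_512M (512ull * 1024 * 1024)

/* With SSEG1 enabled, the CPU may speculatively fetch anywhere up to
 * PHY_START + 1GB. Any address past the real end of memory but still
 * inside that window is a gap the bus may not answer. */
static int in_speculation_gap(uint64_t addr, uint64_t phy_start,
                              uint64_t mem_size)
{
    uint64_t mem_end  = phy_start + mem_size;      /* real memory ends here */
    uint64_t sseg_end = phy_start + 2 * SZ_512M;   /* SSEG0+SSEG1 cover 1GB */

    return addr >= mem_end && addr < sseg_end;
}
```

Disabling SSEG1 shrinks the fetchable window to PHY_START + 512MB, so the gap disappears for any memory size of at least 512MB.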
-
This patch adds support for uprobes on csky architecture.
Just like kprobes, it supports single-step and simulated instructions.
Signed-off-by: Guo Ren
Cc: Arnd Bergmann
Cc: Steven Rostedt (VMware)
-
This patch enables the kprobes, kretprobes and ftrace interfaces. It
utilizes software breakpoints, single-step debug exceptions and
instruction simulation on csky.

We use USR_BKPT to replace the original instruction, and the kprobe
handler prepares an executable memory slot for out-of-line execution
with a copy of the original instruction being probed. Most instructions
can be executed by single-step, but some instructions need the original
pc value to execute, so we need to simulate these instructions in
software.
Signed-off-by: Guo Ren
Cc: Arnd Bergmann
Cc: Steven Rostedt (VMware)
-
Change a header to mandatory-y if both of the following are met:

[1] At least one architecture (except um) specifies it as generic-y in
    arch/*/include/asm/Kbuild

[2] Every architecture (except um) either has its own implementation
    (arch/*/include/asm/*.h) or specifies it as generic-y in
    arch/*/include/asm/Kbuild

This commit was generated by the following shell script.
----------------------------------->8-----------------------------------
arches=$(cd arch; ls -1 | sed -e '/Kconfig/d' -e '/um/d')
tmpfile=$(mktemp)

grep "^mandatory-y +=" include/asm-generic/Kbuild > $tmpfile

find arch -path 'arch/*/include/asm/Kbuild' |
	xargs sed -n 's/^generic-y += \(.*\)/\1/p' | sort -u |
while read header
do
	mandatory=yes

	for arch in $arches
	do
		if ! grep -q "generic-y += $header" arch/$arch/include/asm/Kbuild &&
			! [ -f arch/$arch/include/asm/$header ]; then
			mandatory=no
			break
		fi
	done

	if [ "$mandatory" = yes ]; then
		echo "mandatory-y += $header" >> $tmpfile

		for arch in $arches
		do
			sed -i "/generic-y += $header/d" arch/$arch/include/asm/Kbuild
		done
	fi
done

sed -i '/^mandatory-y +=/d' include/asm-generic/Kbuild
LANG=C sort $tmpfile >> include/asm-generic/Kbuild
----------------------------------->8-----------------------------------

One obvious benefit is the diff stat:
  25 files changed, 52 insertions(+), 557 deletions(-)

It is tedious to list generic-y for each arch that needs it. So
mandatory-y works like a fallback default (by just wrapping the
asm-generic one) when an arch does not have a specific header
implementation.

See the following commits:
def3f7cefe4e81c296090e1722a76551142c227c
a1b39bae16a62ce4aae02d958224f19316d98b24

It is tedious to convert headers one by one, so I processed them with a
shell script.
Signed-off-by: Masahiro Yamada
Signed-off-by: Andrew Morton
Cc: Michal Simek
Cc: Christoph Hellwig
Cc: Arnd Bergmann
Link: http://lkml.kernel.org/r/20200210175452.5030-1-masahiroy@kernel.org
Signed-off-by: Linus Torvalds
02 Apr, 2020
1 commit
-
Lockdep is needed for proving the spinlocks and rwlocks. Currently,
we only put trace_hardirqs_on/off in csky_irq and
ret_from_exception.
Signed-off-by: Guo Ren
01 Apr, 2020
2 commits
-
Support the gcov function in csky architecture.
Signed-off-by: Ma Jun
Signed-off-by: Guo Ren
-
We should get the psr value from regs->psr on the stack, not directly
from the physical register, and then save the vector number in
tsk->trap_no.
Signed-off-by: Guo Ren
31 Mar, 2020
1 commit
-
If ICACHE_INS is not supported, we use an IPI to sync the icache on each
core. But ftrace_modify_code is called from stop_machine, by the default
implementation of arch_ftrace_update_code, and the stop_machine callback
runs with irqs disabled. Sending an IPI with irqs disabled causes a
deadlock.

We can't use icache_flush with irqs disabled, but the startup make_nop
is a specific case and it needn't IPI the other cores.
Signed-off-by: Guo Ren
21 Mar, 2020
1 commit
-
The defconfig compiles without linux/mm.h. With mm.h included, the
include chain leads to:
| CC kernel/locking/percpu-rwsem.o
| In file included from include/linux/huge_mm.h:8,
| from include/linux/mm.h:567,
| from arch/csky/include/asm/uaccess.h:,
| from include/linux/uaccess.h:11,
| from include/linux/sched/task.h:11,
| from include/linux/sched/signal.h:9,
| from include/linux/rcuwait.h:6,
| from include/linux/percpu-rwsem.h:8,
| from kernel/locking/percpu-rwsem.c:6:
| include/linux/fs.h:1422:29: error: array type has incomplete element type 'struct percpu_rw_semaphore'
| 1422 | struct percpu_rw_semaphore rw_sem[SB_FREEZE_LEVELS];

once rcuwait.h includes linux/sched/signal.h.

Remove the linux/mm.h include.
Reported-by: kbuild test robot
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Thomas Gleixner
Signed-off-by: Peter Zijlstra (Intel)
Link: https://lkml.kernel.org/r/20200321113241.434999165@linutronix.de
08 Mar, 2020
4 commits
-
This patch implements FTRACE_WITH_REGS for csky, which allows a traced
function's arguments (and some other registers) to be captured into a
struct pt_regs, allowing these to be inspected and/or modified.
Signed-off-by: Guo Ren
-
Copied and adapted from Vincent's patch, but modified for csky.
ref:
https://lore.kernel.org/linux-riscv/1572919114-3886-3-git-send-email-vincent.chen@sifive.com/raw

Add calls to rseq_signal_deliver(), rseq_handle_notify_resume() and
rseq_syscall() to introduce RSEQ support.

1. Call the rseq_handle_notify_resume() function on return to userspace
if the TIF_NOTIFY_RESUME thread flag is set.

2. Call the rseq_signal_deliver() function to fix up the pre-signal
frame when a signal is delivered on top of a restartable sequence
critical section.

3. Check that system calls are not invoked from within rseq critical
sections by invoking rseq_syscall() from ret_from_syscall(). With
CONFIG_DEBUG_RSEQ, such behavior results in termination of the
process with SIGSEGV.
Signed-off-by: Guo Ren
-
Needed for kprobes support. Copied and adapted from Patrick's patch,
but it has been modified for csky's pt_regs.
ref:
https://lore.kernel.org/linux-riscv/1572919114-3886-2-git-send-email-vincent.chen@sifive.com/raw
Signed-off-by: Guo Ren
Cc: Patrick Staehlin
-
WARNING: vmlinux.o(.text+0x2366): Section mismatch in reference from the
function csky_start_secondary() to the function .init.text:init_fpu()

The function csky_start_secondary() references
the function __init init_fpu().
This is often because csky_start_secondary lacks a __init
annotation or the annotation of init_fpu is wrong.
Reported-by: Lu Chongzhi
Signed-off-by: Guo Ren
23 Feb, 2020
1 commit
-
The C-Sky platform code is not a clock provider, and just needs to call
of_clk_init(). Hence it can include instead of .
Signed-off-by: Geert Uytterhoeven
Signed-off-by: Guo Ren
21 Feb, 2020
7 commits
-
This is required for clone3 which passes the TLS value through a
struct rather than a register.
Cc: Amanieu d'Antras
Signed-off-by: Guo Ren
-
Add the pci related code for csky arch to support basic pci virtual
function, such as qemu virt-pci-9pfs.
Signed-off-by: MaJun
Signed-off-by: Guo Ren
-
Some BSPs (eg: buildroot) have a defconfig.fragment design to add more
configs into the defconfig in the linux source code tree. For example,
we could put different cpu configs into different defconfig.fragments,
but they all use the same defconfig in Linux.
Signed-off-by: Ma Jun
Signed-off-by: Guo Ren
-
We should add the necessary checks for initrd just like other
architectures do, and it seems that setup_initrd() could become
common code for all architectures.
Signed-off-by: Guo Ren
-
CONFIG_CLKSRC_OF is gone since commit bb0eb050a577
("clocksource/drivers: Rename CLKSRC_OF to TIMER_OF"). The platform
already selects TIMER_OF.

CONFIG_HAVE_DMA_API_DEBUG is gone since commit 6e88628d03dd ("dma-debug:
remove CONFIG_HAVE_DMA_API_DEBUG").

CONFIG_DEFAULT_DEADLINE is gone since commit f382fb0bcef4 ("block:
remove legacy IO schedulers").
Signed-off-by: Krzysztof Kozlowski
Signed-off-by: Guo Ren
-
Fix wording in help text for the CPU_HAS_LDSTEX symbol.
Signed-off-by: Randy Dunlap
Signed-off-by: Guo Ren
-
Implement fstat64, fstatat64, clone3 syscalls to fixup
checksyscalls.sh compile warnings.
Signed-off-by: Guo Ren