20 Jan, 2021
1 commit
-
[ Upstream commit 8a48c0a3360bf2bf4f40c980d0ec216e770e58ee ]
fs/dax.c uses copy_user_page() but ARC does not provide that interface,
resulting in a build error. Provide copy_user_page() in .
../fs/dax.c: In function 'copy_cow_page_dax':
../fs/dax.c:702:2: error: implicit declaration of function 'copy_user_page'; did you mean 'copy_to_user_page'? [-Werror=implicit-function-declaration]

Reported-by: kernel test robot
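For context, a hedged sketch of the shape of the missing interface (the concrete ARC header is not quoted in this log; on many architectures copy_user_page() simply forwards to copy_page(), ignoring the virtual-address and page hints):

```c
#include <assert.h>
#include <string.h>

#define PAGE_SIZE 4096

/* illustrative stand-in for the arch's page-copy primitive */
static void copy_page(void *to, void *from)
{
	memcpy(to, from, PAGE_SIZE);
}

/*
 * Common fallback shape: forward to copy_page() and ignore the user
 * virtual address and page arguments that some arches need for
 * cache aliasing.
 */
#define copy_user_page(to, from, vaddr, page) copy_page(to, from)
```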
Signed-off-by: Randy Dunlap
Cc: Vineet Gupta
Cc: linux-snps-arc@lists.infradead.org
Cc: Dan Williams
#Acked-by: Vineet Gupta # v1
Cc: Andrew Morton
Cc: Matthew Wilcox
Cc: Jan Kara
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-nvdimm@lists.01.org
#Reviewed-by: Ira Weiny # v2
Signed-off-by: Vineet Gupta
Signed-off-by: Sasha Levin
13 Jan, 2021
1 commit
-
[ Upstream commit 87dbc209ea04645fd2351981f09eff5d23f8e2e9 ]
Make <asm/local64.h> mandatory in include/asm-generic/Kbuild and
remove all arch/*/include/asm/local64.h arch-specific files since they
only #include <asm-generic/local64.h>.

This fixes build errors on arch/c6x/ and arch/nios2/ for block/blk-iocost.c.

Build-tested on 21 of 25 arches (tools problems on the others).
Yes, we could even rename <asm-generic/local64.h> to <linux/local64.h>
and change all #includes to use <linux/local64.h> instead.

Link: https://lkml.kernel.org/r/20201227024446.17018-1-rdunlap@infradead.org
Signed-off-by: Randy Dunlap
Suggested-by: Christoph Hellwig
Reviewed-by: Masahiro Yamada
Cc: Jens Axboe
Cc: Ley Foon Tan
Cc: Mark Salter
Cc: Aurelien Jacquiot
Cc: Peter Zijlstra
Cc: Arnd Bergmann
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
28 Nov, 2020
1 commit
-
…l/git/arnd/asm-generic
Pull asm-generic fix from Arnd Bergmann:
"Add correct MAX_POSSIBLE_PHYSMEM_BITS setting to asm-generic.

This is a single bugfix for a bug that Stefan Agner found on 32-bit
Arm, but that exists on several other architectures"

* tag 'asm-generic-fixes-5.10-2' of git://git.kernel.org/pub/scm/linux/kernel/git/arnd/asm-generic:
arch: pgtable: define MAX_POSSIBLE_PHYSMEM_BITS where needed
18 Nov, 2020
1 commit
-
The 1-bit left rotation of the x variable in the last four if
statements can be removed because the computed value will not be
used afterwards.

Signed-off-by: Gustavo Pimentel
Signed-off-by: Vineet Gupta
16 Nov, 2020
1 commit
-
Stefan Agner reported a bug when using zsram on 32-bit Arm machines
with RAM above the 4GB address boundary:

Unable to handle kernel NULL pointer dereference at virtual address 00000000
pgd = a27bd01c
[00000000] *pgd=236a0003, *pmd=1ffa64003
Internal error: Oops: 207 [#1] SMP ARM
Modules linked in: mdio_bcm_unimac(+) brcmfmac cfg80211 brcmutil raspberrypi_hwmon hci_uart crc32_arm_ce bcm2711_thermal phy_generic genet
CPU: 0 PID: 123 Comm: mkfs.ext4 Not tainted 5.9.6 #1
Hardware name: BCM2711
PC is at zs_map_object+0x94/0x338
LR is at zram_bvec_rw.constprop.0+0x330/0xa64
pc : [] lr : [] psr: 60000013
sp : e376bbe0 ip : 00000000 fp : c1e2921c
r10: 00000002 r9 : c1dda730 r8 : 00000000
r7 : e8ff7a00 r6 : 00000000 r5 : 02f9ffa0 r4 : e3710000
r3 : 000fdffe r2 : c1e0ce80 r1 : ebf979a0 r0 : 00000000
Flags: nZCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment user
Control: 30c5383d Table: 235c2a80 DAC: fffffffd
Process mkfs.ext4 (pid: 123, stack limit = 0x495a22e6)
Stack: (0xe376bbe0 to 0xe376c000)

As it turns out, zsram needs to know the maximum memory size, which
is defined in MAX_PHYSMEM_BITS when CONFIG_SPARSEMEM is set, or in
MAX_POSSIBLE_PHYSMEM_BITS on the x86 architecture.

The same problem will be hit on all 32-bit architectures that have a
physical address space larger than 4GB and happen to not enable sparsemem
and include asm/sparsemem.h from asm/pgtable.h.

After the initial discussion, I suggested just always defining
MAX_POSSIBLE_PHYSMEM_BITS whenever CONFIG_PHYS_ADDR_T_64BIT is
set, or provoking a build error otherwise. This addresses all
configurations that can currently have this runtime bug, but
leaves all other configurations unchanged.

I looked up the possible number of bits in source code and
datasheets; here is what I found:

- on ARC, CONFIG_ARC_HAS_PAE40 controls whether 32 or 40 bits are used
- on ARM, CONFIG_LPAE enables 40 bit addressing, without it we never
support more than 32 bits, even though supersections in theory allow
up to 40 bits as well.
- on MIPS, some MIPS32r1 or later chips support 36 bits, and MIPS32r5
XPA supports up to 60 bits in theory, but 40 bits are more than
anyone will ever ship
- On PowerPC, there are three different implementations of 36 bit
addressing, but 32-bit is used without CONFIG_PTE_64BIT
- On RISC-V, the normal page table format can support 34 bit
addressing. There is no highmem support on RISC-V, so anything
above 2GB is unused, but it might be useful to eventually support
CONFIG_ZRAM for high pages.

Fixes: 61989a80fb3a ("staging: zsmalloc: zsmalloc memory allocation library")
Fixes: 02390b87a945 ("mm/zsmalloc: Prepare to variable MAX_PHYSMEM_BITS")
Acked-by: Thomas Bogendoerfer
Reviewed-by: Stefan Agner
Tested-by: Stefan Agner
Acked-by: Mike Rapoport
Link: https://lore.kernel.org/linux-mm/bdfa44bf1c570b05d6c70898e2bbb0acf234ecdf.1604762181.git.stefan@agner.ch/
Signed-off-by: Arnd Bergmann
26 Oct, 2020
1 commit
-
Use a more generic form for __section that requires quotes, to avoid
complications with clang and gcc differences.

Remove the quote operator # from the compiler_attributes.h __section macro.

Convert all unquoted __section(foo) uses to quoted __section("foo").
Also convert __attribute__((section("foo"))) uses to __section("foo")
even if the __attribute__ has multiple list entry forms.

Conversion done using the script at:
https://lore.kernel.org/lkml/75393e5ddc272dc7403de74d645e6c6e0f4e70eb.camel@perches.com/2-convert_section.pl
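The effect of the change can be sketched in plain C (the macro body and the section name ".data.example" are illustrative, not the exact compiler_attributes.h text): with the # stringification operator removed, __section() simply forwards a string literal, which both gcc and clang accept:

```c
#include <assert.h>

/*
 * Quoted form: the macro passes the string literal straight through.
 * The old form, __section(foo), relied on # to stringify the bare
 * token, which behaved differently between compilers.
 */
#define __section(section) __attribute__((__section__(section)))

/* place a variable in a named data section (section name is illustrative) */
static int boot_marker __section(".data.example") = 42;

static int read_marker(void)
{
	return boot_marker;
}
```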
Signed-off-by: Joe Perches
Reviewed-by: Nick Desaulniers
Reviewed-by: Miguel Ojeda
Signed-off-by: Linus Torvalds
06 Oct, 2020
2 commits
-
Fix copy/paste spello of "themselves" in 3 places.
Signed-off-by: Randy Dunlap
Cc: Vineet Gupta
Cc: linux-snps-arc@lists.infradead.org
Signed-off-by: Vineet Gupta -
NPS customers are no longer doing active development, as is evident from
randconfig build failures reported in recent times, so drop support
for the NPS platform.

Tested-by: kernel test robot
Signed-off-by: Vineet Gupta
17 Aug, 2020
1 commit
-
Drop the repeated word "to".
Change "Thay" to "That".
Add a closing right parenthesis.

Signed-off-by: Randy Dunlap
Cc: Vineet Gupta
Cc: linux-snps-arc@lists.infradead.org
Signed-off-by: Vineet Gupta
13 Aug, 2020
1 commit
-
segment_eq is only used to implement uaccess_kernel. Just open code
uaccess_kernel in the arch uaccess headers and remove one layer of
indirection.

Signed-off-by: Christoph Hellwig
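A rough userspace sketch of the before/after (mm_segment_t, get_fs() and the segment values are mocked here; the real arch headers differ):

```c
#include <assert.h>

/* mocked address-limit machinery, shaped like the arch headers */
typedef struct { unsigned long seg; } mm_segment_t;

#define KERNEL_DS_SEG (~0UL)
#define USER_DS_SEG   (0x7fffffffUL)

static mm_segment_t current_addr_limit = { KERNEL_DS_SEG };

static mm_segment_t get_fs(void)
{
	return current_addr_limit;
}

/*
 * Before: uaccess_kernel() was defined once in generic code as
 * segment_eq(get_fs(), KERNEL_DS). After the commit, each arch
 * open-codes the comparison directly, e.g.:
 */
#define uaccess_kernel() (get_fs().seg == KERNEL_DS_SEG)
```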
Signed-off-by: Andrew Morton
Acked-by: Linus Torvalds
Acked-by: Greentime Hu
Acked-by: Geert Uytterhoeven
Cc: Nick Hu
Cc: Vincent Chen
Cc: Paul Walmsley
Cc: Palmer Dabbelt
Link: http://lkml.kernel.org/r/20200710135706.537715-5-hch@lst.de
Signed-off-by: Linus Torvalds
29 Jul, 2020
2 commits
-
This patch moves ATOMIC_INIT from asm/atomic.h into linux/types.h.
This allows users of atomic_t to use ATOMIC_INIT without having to
include atomic.h, as that may lead to header loops.

Signed-off-by: Herbert Xu
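The macro itself is tiny; a self-contained sketch of why the move is safe (atomic_t here is the simplified struct from linux/types.h, not any arch-specific definition):

```c
#include <assert.h>

/* simplified atomic_t, as declared in linux/types.h */
typedef struct { int counter; } atomic_t;

/*
 * ATOMIC_INIT is just a brace initializer -- it needs nothing from
 * atomic.h, which is why it can live in linux/types.h without
 * dragging that header in (and risking include loops).
 */
#define ATOMIC_INIT(i) { (i) }

static atomic_t refcount = ATOMIC_INIT(3);
```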
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Waiman Long
Link: https://lkml.kernel.org/r/20200729123105.GB7047@gondor.apana.org.au
17 Jun, 2020
2 commits
-
Cc:
Signed-off-by: Vineet Gupta -
Signed-off-by: Vineet Gupta
10 Jun, 2020
3 commits
-
All architectures define pte_index() as

	(address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)

and all architectures define pte_offset_kernel() as an entry in the array
of PTEs indexed by the pte_index().

For most architectures the pte_offset_kernel() implementation relies
on the availability of pmd_page_vaddr(), which converts a PMD entry value
to the virtual address of the page containing the PTEs array.

Let's move the x86 definitions of the PTE accessors to the generic place in
<linux/pgtable.h> and then simply drop the respective definitions from the
other architectures.

The architectures that didn't provide pmd_page_vaddr() are updated to have
it defined.

The generic implementation of pte_offset_kernel() can be overridden by an
architecture, and alpha makes use of this because it has special ordering
requirements for its version of pte_offset_kernel().

[rppt@linux.ibm.com: v2]
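The quoted formula can be checked in a few lines of C (the PAGE_SHIFT/PTRS_PER_PTE values are illustrative; the real constants are per-arch, and pmd_page_vaddr() is stood in for by a plain array):

```c
#include <assert.h>

#define PAGE_SHIFT   12    /* illustrative: 4 KiB pages */
#define PTRS_PER_PTE 512   /* illustrative: arch-dependent */

typedef unsigned long pte_t;

/* the common definition every architecture shares */
static unsigned long pte_index(unsigned long address)
{
	return (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);
}

/*
 * Generic pte_offset_kernel(): index into the PTE page that the PMD
 * entry points at. pmd_page_vaddr() is mocked by passing the PTE
 * array directly.
 */
static pte_t *pte_offset_kernel(pte_t *pte_page, unsigned long address)
{
	return pte_page + pte_index(address);
}
```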
Link: http://lkml.kernel.org/r/20200514170327.31389-11-rppt@kernel.org
[rppt@linux.ibm.com: update]
Link: http://lkml.kernel.org/r/20200514170327.31389-12-rppt@kernel.org
[rppt@linux.ibm.com: update]
Link: http://lkml.kernel.org/r/20200514170327.31389-13-rppt@kernel.org
[akpm@linux-foundation.org: fix x86 warning]
[sfr@canb.auug.org.au: fix powerpc build]
Link: http://lkml.kernel.org/r/20200607153443.GB738695@linux.ibm.com

Signed-off-by: Mike Rapoport
Signed-off-by: Stephen Rothwell
Signed-off-by: Andrew Morton
Cc: Arnd Bergmann
Cc: Borislav Petkov
Cc: Brian Cain
Cc: Catalin Marinas
Cc: Chris Zankel
Cc: "David S. Miller"
Cc: Geert Uytterhoeven
Cc: Greentime Hu
Cc: Greg Ungerer
Cc: Guan Xuetao
Cc: Guo Ren
Cc: Heiko Carstens
Cc: Helge Deller
Cc: Ingo Molnar
Cc: Ley Foon Tan
Cc: Mark Salter
Cc: Matthew Wilcox
Cc: Matt Turner
Cc: Max Filippov
Cc: Michael Ellerman
Cc: Michal Simek
Cc: Nick Hu
Cc: Paul Walmsley
Cc: Richard Weinberger
Cc: Rich Felker
Cc: Russell King
Cc: Stafford Horne
Cc: Thomas Bogendoerfer
Cc: Thomas Gleixner
Cc: Tony Luck
Cc: Vincent Chen
Cc: Vineet Gupta
Cc: Will Deacon
Cc: Yoshinori Sato
Link: http://lkml.kernel.org/r/20200514170327.31389-10-rppt@kernel.org
Signed-off-by: Linus Torvalds -
include/linux/pgtable.h is going to be the home of generic page table
manipulation functions.

Start with moving asm-generic/pgtable.h to include/linux/pgtable.h and
make the latter include asm/pgtable.h.

Signed-off-by: Mike Rapoport
Signed-off-by: Andrew Morton
Cc: Arnd Bergmann
Cc: Borislav Petkov
Cc: Brian Cain
Cc: Catalin Marinas
Cc: Chris Zankel
Cc: "David S. Miller"
Cc: Geert Uytterhoeven
Cc: Greentime Hu
Cc: Greg Ungerer
Cc: Guan Xuetao
Cc: Guo Ren
Cc: Heiko Carstens
Cc: Helge Deller
Cc: Ingo Molnar
Cc: Ley Foon Tan
Cc: Mark Salter
Cc: Matthew Wilcox
Cc: Matt Turner
Cc: Max Filippov
Cc: Michael Ellerman
Cc: Michal Simek
Cc: Nick Hu
Cc: Paul Walmsley
Cc: Richard Weinberger
Cc: Rich Felker
Cc: Russell King
Cc: Stafford Horne
Cc: Thomas Bogendoerfer
Cc: Thomas Gleixner
Cc: Tony Luck
Cc: Vincent Chen
Cc: Vineet Gupta
Cc: Will Deacon
Cc: Yoshinori Sato
Link: http://lkml.kernel.org/r/20200514170327.31389-3-rppt@kernel.org
Signed-off-by: Linus Torvalds -
Currently, the log-level of show_stack() depends on the platform
implementation. This creates situations where the headers are printed
with a lower or higher log level than the stacktrace itself (depending
on the platform or user).

Furthermore, it forces the logic decision from the user onto the
architecture side. As a result, some users such as sysrq/kdb/etc. are
doing tricks with temporarily raising console_loglevel while printing
their messages. This not only may print unwanted messages from other
CPUs, but may also omit printing at all in the unlucky case where the
printk() was deferred.

Introducing a log-level parameter and KERN_UNSUPPRESSED [1] seems an
easier approach than introducing more printk buffers. It will also
consolidate printings with headers.

Introduce show_stack_loglvl(), which will eventually substitute
show_stack().

As a good side-effect, the header "Stack Trace:" is now printed with the
same log level as the rest of the backtrace.

[1]: https://lore.kernel.org/lkml/20190528002412.1625-1-dima@arista.com/T/#u
Signed-off-by: Dmitry Safonov
Signed-off-by: Andrew Morton
Cc: Vineet Gupta
Link: http://lkml.kernel.org/r/20200418201944.482088-4-dima@arista.com
Signed-off-by: Linus Torvalds
05 Jun, 2020
6 commits
-
Most architectures define kmap_prot to be PAGE_KERNEL.

Let sparc and xtensa define their own, and define PAGE_KERNEL as the
default if not overridden.

[akpm@linux-foundation.org: coding style fixes]
Suggested-by: Christoph Hellwig
Signed-off-by: Ira Weiny
Signed-off-by: Andrew Morton
Cc: Al Viro
Cc: Andy Lutomirski
Cc: Benjamin Herrenschmidt
Cc: Borislav Petkov
Cc: Christian König
Cc: Chris Zankel
Cc: Daniel Vetter
Cc: Dan Williams
Cc: Dave Hansen
Cc: "David S. Miller"
Cc: Helge Deller
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: "James E.J. Bottomley"
Cc: Max Filippov
Cc: Paul Mackerras
Cc: Peter Zijlstra
Cc: Thomas Bogendoerfer
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/20200507150004.1423069-16-ira.weiny@intel.com
Signed-off-by: Linus Torvalds -
Every single architecture (including !CONFIG_HIGHMEM) calls...

	pagefault_enable();
	preempt_enable();

... before returning from __kunmap_atomic(). Lift this code into the
kunmap_atomic() macro.

While we are at it, rename __kunmap_atomic() to kunmap_atomic_high() to
be consistent.

[ira.weiny@intel.com: don't enable pagefault/preempt twice]
Link: http://lkml.kernel.org/r/20200518184843.3029640-1-ira.weiny@intel.com
[akpm@linux-foundation.org: coding style fixes]
Signed-off-by: Ira Weiny
Signed-off-by: Andrew Morton
Reviewed-by: Christoph Hellwig
Cc: Al Viro
Cc: Andy Lutomirski
Cc: Benjamin Herrenschmidt
Cc: Borislav Petkov
Cc: Christian König
Cc: Chris Zankel
Cc: Daniel Vetter
Cc: Dan Williams
Cc: Dave Hansen
Cc: "David S. Miller"
Cc: Helge Deller
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: "James E.J. Bottomley"
Cc: Max Filippov
Cc: Paul Mackerras
Cc: Peter Zijlstra
Cc: Thomas Bogendoerfer
Cc: Thomas Gleixner
Cc: Guenter Roeck
Link: http://lkml.kernel.org/r/20200507150004.1423069-8-ira.weiny@intel.com
Signed-off-by: Linus Torvalds -
Every arch has the same code to ensure atomic operations and a check for
a !HIGHMEM page.

Remove the duplicate code by defining a core kmap_atomic() which only
calls the arch-specific kmap_atomic_high() when the page is high memory.

[akpm@linux-foundation.org: coding style fixes]
Signed-off-by: Ira Weiny
Signed-off-by: Andrew Morton
Reviewed-by: Christoph Hellwig
Cc: Al Viro
Cc: Andy Lutomirski
Cc: Benjamin Herrenschmidt
Cc: Borislav Petkov
Cc: Christian König
Cc: Chris Zankel
Cc: Daniel Vetter
Cc: Dan Williams
Cc: Dave Hansen
Cc: "David S. Miller"
Cc: Helge Deller
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: "James E.J. Bottomley"
Cc: Max Filippov
Cc: Paul Mackerras
Cc: Peter Zijlstra
Cc: Thomas Bogendoerfer
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/20200507150004.1423069-7-ira.weiny@intel.com
Signed-off-by: Linus Torvalds -
All architectures do exactly the same thing for kunmap(); remove all the
duplicate definitions and lift the call to the core.

This also has the benefit of changing kunmap() on a number of
architectures to be an inline call rather than an actual function.

[akpm@linux-foundation.org: fix CONFIG_HIGHMEM=n build on various architectures]
Signed-off-by: Ira Weiny
Signed-off-by: Andrew Morton
Reviewed-by: Christoph Hellwig
Cc: Al Viro
Cc: Andy Lutomirski
Cc: Benjamin Herrenschmidt
Cc: Borislav Petkov
Cc: Christian König
Cc: Chris Zankel
Cc: Daniel Vetter
Cc: Dan Williams
Cc: Dave Hansen
Cc: "David S. Miller"
Cc: Helge Deller
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: "James E.J. Bottomley"
Cc: Max Filippov
Cc: Paul Mackerras
Cc: Peter Zijlstra
Cc: Thomas Bogendoerfer
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/20200507150004.1423069-5-ira.weiny@intel.com
Signed-off-by: Linus Torvalds -
The kmap code for all the architectures is almost 100% identical.

Lift the common code to the core. Use ARCH_HAS_KMAP_FLUSH_TLB to indicate
whether an arch defines kmap_flush_tlb(), and call it if needed.

This also has the benefit of changing kmap() on a number of architectures
to be an inline call rather than an actual function.

Signed-off-by: Ira Weiny
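The lifted structure can be sketched in userspace (the page type and the mapping itself are mocked; only the control flow of the generic core calling the optional arch hook mirrors the commit):

```c
#include <assert.h>
#include <stdbool.h>

static bool tlb_flushed;

/*
 * An arch that implements kmap_flush_tlb() advertises it with
 * ARCH_HAS_KMAP_FLUSH_TLB; other arches compile the call away.
 */
#define ARCH_HAS_KMAP_FLUSH_TLB 1

static void kmap_flush_tlb(unsigned long addr)
{
	(void)addr;
	tlb_flushed = true;
}

/* core kmap(): common code lives once in the core, arch hook only if present */
static void *kmap(void *page)
{
	void *addr = page;   /* mock: identity mapping stands in for the real map */
#ifdef ARCH_HAS_KMAP_FLUSH_TLB
	kmap_flush_tlb((unsigned long)addr);
#endif
	return addr;
}
```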
Signed-off-by: Andrew Morton
Reviewed-by: Christoph Hellwig
Cc: Al Viro
Cc: Andy Lutomirski
Cc: Benjamin Herrenschmidt
Cc: Borislav Petkov
Cc: Christian König
Cc: Chris Zankel
Cc: Daniel Vetter
Cc: Dan Williams
Cc: Dave Hansen
Cc: "David S. Miller"
Cc: Helge Deller
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: "James E.J. Bottomley"
Cc: Max Filippov
Cc: Paul Mackerras
Cc: Peter Zijlstra
Cc: Thomas Bogendoerfer
Cc: Thomas Gleixner
Link: http://lkml.kernel.org/r/20200507150004.1423069-4-ira.weiny@intel.com
Signed-off-by: Linus Torvalds -
Patch series "Remove duplicated kmap code", v3.

The kmap infrastructure has been copied almost verbatim to every
architecture. This series consolidates obvious duplicated code by
defining core functions which call into the architectures only when
needed.

Some of the k[un]map_atomic() implementations have some similarities, but
the similarities were not sufficient to warrant further changes.

In addition we remove a duplicate implementation of kmap() in DRM.

This patch (of 15):

Replace the use of BUG_ON(in_interrupt()) in kmap() and kunmap() in
favor of might_sleep().

Besides the benefits of might_sleep(), this normalizes the implementations
such that they can be made generic in subsequent patches.

Signed-off-by: Ira Weiny
Signed-off-by: Andrew Morton
Reviewed-by: Dan Williams
Reviewed-by: Christoph Hellwig
Cc: Al Viro
Cc: Christian König
Cc: Daniel Vetter
Cc: Thomas Bogendoerfer
Cc: "James E.J. Bottomley"
Cc: Helge Deller
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: "David S. Miller"
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: "H. Peter Anvin"
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Peter Zijlstra
Cc: Chris Zankel
Cc: Max Filippov
Link: http://lkml.kernel.org/r/20200507150004.1423069-1-ira.weiny@intel.com
Link: http://lkml.kernel.org/r/20200507150004.1423069-2-ira.weiny@intel.com
Signed-off-by: Linus Torvalds
04 Jun, 2020
1 commit
-
pmd_present() is expected to test positive after pmdp_mknotpresent(), as
the PMD entry still points to a valid huge page in memory.
pmdp_mknotpresent() implies that the given PMD entry has just been
invalidated from the MMU's perspective while still holding on to the
valid huge page referred to by pmd_page(). This creates the following
situation, which is counter-intuitive:

	[pmd_present(pmd_mknotpresent(pmd)) = true]

This renames pmd_mknotpresent() as pmd_mkinvalid(), reflecting the
helper's functionality more accurately, while changing the above-mentioned
situation as follows. This does not create any functional change:

	[pmd_present(pmd_mkinvalid(pmd)) = true]

This is not applicable for platforms that define their own
pmdp_invalidate() via __HAVE_ARCH_PMDP_INVALIDATE. The suggestion for the
renaming came during a previous discussion here:

https://patchwork.kernel.org/patch/11019637/
[anshuman.khandual@arm.com: change pmd_mknotvalid() to pmd_mkinvalid() per Will]
Link: http://lkml.kernel.org/r/1587520326-10099-3-git-send-email-anshuman.khandual@arm.com
Suggested-by: Catalin Marinas
Signed-off-by: Anshuman Khandual
Signed-off-by: Andrew Morton
Acked-by: Will Deacon
Cc: Vineet Gupta
Cc: Russell King
Cc: Catalin Marinas
Cc: Thomas Bogendoerfer
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: "H. Peter Anvin"
Cc: Steven Rostedt
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Peter Zijlstra
Cc: Benjamin Herrenschmidt
Cc: Michael Ellerman
Cc: Paul Mackerras
Link: http://lkml.kernel.org/r/1584680057-13753-3-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Linus Torvalds
20 May, 2020
1 commit
-
Pull ARC fixes from Vineet Gupta:
- fix recent DSP code regression on ARC700 platforms
- fix thinkos in ICCM/DCCM size checks
- USB regression fix
- other small fixes here and there
* tag 'arc-5.7-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc:
ARC: show_regs: avoid extra line of output
ARC: guard dsp early init against non ARCv2
ARC: [plat-eznps]: Restrict to CONFIG_ISA_ARCOMPACT
ARC: entry: comment
arc: remove #ifndef CONFIG_AS_CFI_SIGNAL_FRAME
arc: ptrace: hard-code "arc" instead of UTS_MACHINE
ARC: [plat-hsdk]: fix USB regression
ARC: Fix ICCM & DCCM runtime size checks
30 Apr, 2020
1 commit
-
As of today we guard the early DSP init code with an
ARC_AUX_DSP_BUILD (0x7A) BCR check to verify that we have a
CPU with DSP configured. However, that's not enough, as on
ARCv1 CPUs the same BCR (0x7A) is used for checking MUL/MAC
instruction presence.

So, let's guard the DSP early init against non-ARCv2.

Fixes: 4827d0cf744e ("ARC: handle DSP presence in HW")
Reported-by: Angelo Ribeiro
Suggested-by: Jose Abreu
Signed-off-by: Eugeniy Paltsev
Signed-off-by: Vineet Gupta
23 Apr, 2020
1 commit
-
As the bug report [1] pointed out, must be included
after .

I believe we should not impose any include-order restriction. We often
sort include directives alphabetically, but that is just a coding style
convention. Technically, we can include header files in any order by
making every header self-contained.

Currently, the arch-specific MODULE_ARCH_VERMAGIC is defined in
, which is not included from .

Hence, the straightforward fix-up would be as follows:

|--- a/include/linux/vermagic.h
|+++ b/include/linux/vermagic.h
|@@ -1,5 +1,6 @@
| /* SPDX-License-Identifier: GPL-2.0 */
| #include
|+#include
|
| /* Simply sanity version stamp for modules. */
| #ifdef CONFIG_SMP

This works enough, but for further cleanups, I split the
MODULE_ARCH_VERMAGIC definitions into .

With this, and will be orthogonal,
and the location of the MODULE_ARCH_VERMAGIC definitions will be consistent.

For arc and ia64, MODULE_PROC_FAMILY is only used for defining
MODULE_ARCH_VERMAGIC, so I squashed it.

For hexagon, nds32, and xtensa, I removed entirely
because they contained nothing but the MODULE_ARCH_VERMAGIC definition.
Kbuild will automatically generate at build-time,
wrapping .

[1] https://lore.kernel.org/lkml/20200411155623.GA22175@zn.tnic
Reported-by: Borislav Petkov
Signed-off-by: Masahiro Yamada
Acked-by: Jessica Yu
13 Apr, 2020
1 commit
-
Signed-off-by: Vineet Gupta
11 Apr, 2020
1 commit
-
There are many platforms with the exact same value for VM_DATA_DEFAULT_FLAGS.

This creates a default value for VM_DATA_DEFAULT_FLAGS in line with the
existing VM_STACK_DEFAULT_FLAGS. While here, also define some more
macros with standard VMA access-flag combinations that are used
frequently across many platforms. Apart from simplification, this
reduces code duplication as well.

Signed-off-by: Anshuman Khandual
Signed-off-by: Andrew Morton
Reviewed-by: Vlastimil Babka
Acked-by: Geert Uytterhoeven
Cc: Richard Henderson
Cc: Vineet Gupta
Cc: Russell King
Cc: Catalin Marinas
Cc: Mark Salter
Cc: Guo Ren
Cc: Yoshinori Sato
Cc: Brian Cain
Cc: Tony Luck
Cc: Michal Simek
Cc: Ralf Baechle
Cc: Paul Burton
Cc: Nick Hu
Cc: Ley Foon Tan
Cc: Jonas Bonn
Cc: "James E.J. Bottomley"
Cc: Michael Ellerman
Cc: Paul Walmsley
Cc: Heiko Carstens
Cc: Rich Felker
Cc: "David S. Miller"
Cc: Guan Xuetao
Cc: Thomas Gleixner
Cc: Jeff Dike
Cc: Chris Zankel
Link: http://lkml.kernel.org/r/1583391014-8170-2-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Linus Torvalds
04 Apr, 2020
1 commit
-
Pull ARC updates from Vineet Gupta:
- Support for DSP enabled userspace (save/restore regs)
- Misc other platform fixes
* tag 'arc-5.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc:
ARC: allow userspace DSP applications to use AGU extensions
ARC: add support for DSP-enabled userspace applications
ARC: handle DSP presence in HW
ARC: add helpers to sanitize config options
ARC: [plat-axs10x]: PGU: remove unused encoder-slave property
03 Apr, 2020
1 commit
-
Change a header to mandatory-y if both of the following are met:

[1] At least one architecture (except um) specifies it as generic-y in
    arch/*/include/asm/Kbuild

[2] Every architecture (except um) either has its own implementation
    (arch/*/include/asm/*.h) or specifies it as generic-y in
    arch/*/include/asm/Kbuild

This commit was generated by the following shell script:

----------------------------------->8-----------------------------------
arches=$(cd arch; ls -1 | sed -e '/Kconfig/d' -e '/um/d')

tmpfile=$(mktemp)

grep "^mandatory-y +=" include/asm-generic/Kbuild > $tmpfile

find arch -path 'arch/*/include/asm/Kbuild' |
	xargs sed -n 's/^generic-y += \(.*\)/\1/p' | sort -u |
while read header
do
	mandatory=yes

	for arch in $arches
	do
		if ! grep -q "generic-y += $header" arch/$arch/include/asm/Kbuild &&
			! [ -f arch/$arch/include/asm/$header ]; then
			mandatory=no
			break
		fi
	done

	if [ "$mandatory" = yes ]; then
		echo "mandatory-y += $header" >> $tmpfile

		for arch in $arches
		do
			sed -i "/generic-y += $header/d" arch/$arch/include/asm/Kbuild
		done
	fi
done

sed -i '/^mandatory-y +=/d' include/asm-generic/Kbuild

LANG=C sort $tmpfile >> include/asm-generic/Kbuild
----------------------------------->8-----------------------------------

One obvious benefit is the diff stat:

 25 files changed, 52 insertions(+), 557 deletions(-)

It is tedious to list generic-y for each arch that needs it.
So, mandatory-y works like a fallback default (by just wrapping the
asm-generic one) when an arch does not have a specific header
implementation.

See the following commits:

def3f7cefe4e81c296090e1722a76551142c227c
a1b39bae16a62ce4aae02d958224f19316d98b24

It is tedious to convert headers one by one, so I processed them with a
shell script.

Signed-off-by: Masahiro Yamada
Signed-off-by: Andrew Morton
Cc: Michal Simek
Cc: Christoph Hellwig
Cc: Arnd Bergmann
Link: http://lkml.kernel.org/r/20200210175452.5030-1-masahiroy@kernel.org
Signed-off-by: Linus Torvalds
31 Mar, 2020
1 commit
-
Pull locking updates from Ingo Molnar:
"The main changes in this cycle were:

 - Continued user-access cleanups in the futex code.

 - percpu-rwsem rewrite that uses its own waitqueue and atomic_t
   instead of an embedded rwsem. This addresses a couple of
   weaknesses, but the primary motivation was complications on the -rt
   kernel.

 - Introduce raw lock nesting detection on lockdep
   (CONFIG_PROVE_RAW_LOCK_NESTING=y), document the raw_lock vs. normal
   lock differences. This too originates from -rt.

 - Reuse lockdep zapped chain_hlocks entries, to conserve RAM
   footprint on distro-ish kernels running into the "BUG:
   MAX_LOCKDEP_CHAIN_HLOCKS too low!" depletion of the lockdep
   chain-entries pool.

 - Misc cleanups, smaller fixes and enhancements - see the changelog
   for details"

* 'locking-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (55 commits)
fs/buffer: Make BH_Uptodate_Lock bit_spin_lock a regular spinlock_t
thermal/x86_pkg_temp: Make pkg_temp_lock a raw_spinlock_t
Documentation/locking/locktypes: Minor copy editor fixes
Documentation/locking/locktypes: Further clarifications and wordsmithing
m68knommu: Remove mm.h include from uaccess_no.h
x86: get rid of user_atomic_cmpxchg_inatomic()
generic arch_futex_atomic_op_inuser() doesn't need access_ok()
x86: don't reload after cmpxchg in unsafe_atomic_op2() loop
x86: convert arch_futex_atomic_op_inuser() to user_access_begin/user_access_end()
objtool: whitelist __sanitizer_cov_trace_switch()
[parisc, s390, sparc64] no need for access_ok() in futex handling
sh: no need of access_ok() in arch_futex_atomic_op_inuser()
futex: arch_futex_atomic_op_inuser() calling conventions change
completion: Use lockdep_assert_RT_in_threaded_ctx() in complete_all()
lockdep: Add posixtimer context tracing bits
lockdep: Annotate irq_work
lockdep: Add hrtimer context tracing bits
lockdep: Introduce wait-type checks
completion: Use simple wait queues
sched/swait: Prepare usage in completions
...
28 Mar, 2020
1 commit
-
Move access_ok() in and pagefault_enable()/pagefault_disable() out.
Mechanical conversion only - some instances don't really need
a separate access_ok() at all (e.g. the ones only using
get_user()/put_user(), or architectures where access_ok()
is always true); we'll deal with that in followups.

Signed-off-by: Al Viro
17 Mar, 2020
4 commits
-
To be able to run DSP-enabled userspace applications with AGU
(address generation unit) extensions we additionally need to
save and restore following registers at context switch:
* AGU_AP*
* AGU_OS*
* AGU_MOD*

Reviewed-by: Vineet Gupta
Signed-off-by: Eugeniy Paltsev
Signed-off-by: Vineet Gupta -
To be able to run DSP-enabled userspace applications we need to
save and restore following DSP-related registers:
At IRQ/exception entry/exit:
* DSP_CTRL (save it and reset to value suitable for kernel)
* ACC0_LO, ACC0_HI (we already save them as r58, r59 pair)
At context switch:
* ACC0_GLO, ACC0_GHI
* DSP_BFLY0, DSP_FFT_CTRL

Reviewed-by: Vineet Gupta
Signed-off-by: Eugeniy Paltsev
Signed-off-by: Vineet Gupta -
When DSP extensions are present, some of the regular integer instructions
such as DIV, MACD etc. are executed in the DSP unit with semantics alterable
by flags in the DSP_CTRL aux register. This register is writable by userspace
and thus can potentially affect corresponding instructions in kernel code,
intentionally or otherwise. So safeguard the kernel by effectively disabling
DSP_CTRL upon bootup and on every entry to the kernel.

Do note that for this config we simply zero out the DSP_CTRL reg, assuming
userspace doesn't really care about DSP. The next patch caters to the
DSP-aware userspace, where this reg is saved/restored upon kernel entry/exit.

Reviewed-by: Vineet Gupta
Signed-off-by: Eugeniy Paltsev
Signed-off-by: Vineet Gupta -
We'll use this macro extensively in coming patches.
Reviewed-by: Vineet Gupta
Signed-off-by: Eugeniy Paltsev
Signed-off-by: Vineet Gupta
12 Mar, 2020
1 commit
-
The default definitions use fill pattern 0x90 for padding, which for ARC
generates an unintended "ldh_s r12,[r0,0x20]" corresponding to opcode 0x9090.

So use ".align 4", which inserts a "nop_s" instruction instead.
Cc: stable@vger.kernel.org
Acked-by: Vineet Gupta
Signed-off-by: Eugeniy Paltsev
Signed-off-by: Vineet Gupta
10 Feb, 2020
1 commit
-
Reported-by: kbuild test robot
Link: http://lists.infradead.org/pipermail/linux-snps-arc/2020-February/006845.html
Signed-off-by: Vineet Gupta
04 Feb, 2020
1 commit
-
walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information will be provided by the
p?d_leaf() functions/macros.

For arc, we only have two levels, so only pmd_leaf() is needed.
Link: http://lkml.kernel.org/r/20191218162402.45610-3-steven.price@arm.com
Signed-off-by: Steven Price
Acked-by: Vineet Gupta
Cc: Albert Ou
Cc: Alexandre Ghiti
Cc: Andy Lutomirski
Cc: Ard Biesheuvel
Cc: Arnd Bergmann
Cc: Benjamin Herrenschmidt
Cc: Borislav Petkov
Cc: Catalin Marinas
Cc: Christian Borntraeger
Cc: Dave Hansen
Cc: David S. Miller
Cc: Heiko Carstens
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: James Hogan
Cc: James Morse
Cc: Jerome Glisse
Cc: "Liang, Kan"
Cc: Mark Rutland
Cc: Michael Ellerman
Cc: Paul Burton
Cc: Paul Mackerras
Cc: Paul Walmsley
Cc: Peter Zijlstra
Cc: Ralf Baechle
Cc: Russell King
Cc: Thomas Gleixner
Cc: Vasily Gorbik
Cc: Will Deacon
Cc: Zong Li
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds