20 Jan, 2021
1 commit
-
commit c25a053e15778f6b4d6553708673736e27a6c2cf upstream.
Use virtual address instead of physical address when translating
the address to shadow memory by kasan_mem_to_shadow().
Signed-off-by: Nick Hu
Signed-off-by: Nylon Chen
Fixes: b10d6bca8720 ("arch, drivers: replace for_each_membock() with for_each_mem_range()")
Cc: stable@vger.kernel.org
Signed-off-by: Palmer Dabbelt
Signed-off-by: Greg Kroah-Hartman
30 Dec, 2020
1 commit
-
commit de043da0b9e71147ca610ed542d34858aadfc61c upstream.
memblock_enforce_memory_limit() accepts the maximum memory size, not the
maximum address that can be handled by the kernel. Fix the function
invocation accordingly.
Fixes: 1bd14a66ee52 ("RISC-V: Remove any memblock representing unusable memory area")
Cc: stable@vger.kernel.org
Reported-by: Bin Meng
Tested-by: Bin Meng
Acked-by: Mike Rapoport
Signed-off-by: Atish Patra
Signed-off-by: Palmer Dabbelt
Signed-off-by: Greg Kroah-Hartman
06 Nov, 2020
3 commits
-
Currently, we use PGD mappings for early DTB mapping in early_pgd,
but this breaks the Linux kernel on the SiFive Unleashed because
PMP checks don't work correctly for PGD mappings there.
To fix early DTB mappings on the SiFive Unleashed, we use non-PGD
mappings (i.e. PMD) for early DTB access.
Fixes: 8f3a2b4a96dc ("RISC-V: Move DT mapping outof fixmap")
Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
Tested-by: Atish Patra
Signed-off-by: Palmer Dabbelt
-
The argument to pfn_to_virt() should be the pfn, not the value of CSR_SATP.
Reviewed-by: Palmer Dabbelt
Reviewed-by: Anup Patel
Signed-off-by: liush
Reviewed-by: Pekka Enberg
Signed-off-by: Palmer Dabbelt
-
RISC-V limits the physical memory size to -PAGE_OFFSET. Any memory beyond
that size from the DRAM start is unusable. Just remove any memblock pointing
to those memory regions without worrying about computing the maximum size.
Signed-off-by: Atish Patra
Reviewed-by: Mike Rapoport
Signed-off-by: Palmer Dabbelt
20 Oct, 2020
1 commit
-
Pull RISC-V updates from Palmer Dabbelt:
"A handful of cleanups and new features:- A handful of cleanups for our page fault handling
- Improvements to how we fill out cacheinfo
- Support for EFI-based systems"
* tag 'riscv-for-linus-5.10-mw0' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux: (22 commits)
RISC-V: Add page table dump support for uefi
RISC-V: Add EFI runtime services
RISC-V: Add EFI stub support.
RISC-V: Add PE/COFF header for EFI stub
RISC-V: Implement late mapping page table allocation functions
RISC-V: Add early ioremap support
RISC-V: Move DT mapping outof fixmap
RISC-V: Fix duplicate included thread_info.h
riscv/mm/fault: Set FAULT_FLAG_INSTRUCTION flag in do_page_fault()
riscv/mm/fault: Fix inline placement in vmalloc_fault() declaration
riscv: Add cache information in AUX vector
riscv: Define AT_VECTOR_SIZE_ARCH for ARCH_DLINFO
riscv: Set more data to cacheinfo
riscv/mm/fault: Move access error check to function
riscv/mm/fault: Move FAULT_FLAG_WRITE handling in do_page_fault()
riscv/mm/fault: Simplify mm_fault_error()
riscv/mm/fault: Move fault error handling to mm_fault_error()
riscv/mm/fault: Simplify fault error handling
riscv/mm/fault: Move vmalloc fault handling to vmalloc_fault()
riscv/mm/fault: Move bad area handling to bad_area()
...
14 Oct, 2020
3 commits
-
for_each_memblock() is used to iterate over memblock.memory in a few
places that use data from memblock_region rather than the memory ranges.
Introduce separate for_each_mem_region() and
for_each_reserved_mem_region() to improve the encapsulation of memblock
internals from its users.
Signed-off-by: Mike Rapoport
Signed-off-by: Andrew Morton
Reviewed-by: Baoquan He
Acked-by: Ingo Molnar [x86]
Acked-by: Thomas Bogendoerfer [MIPS]
Acked-by: Miguel Ojeda [.clang-format]
Cc: Andy Lutomirski
Cc: Benjamin Herrenschmidt
Cc: Borislav Petkov
Cc: Catalin Marinas
Cc: Christoph Hellwig
Cc: Daniel Axtens
Cc: Dave Hansen
Cc: Emil Renner Berthing
Cc: Hari Bathini
Cc: Ingo Molnar
Cc: Jonathan Cameron
Cc: Marek Szyprowski
Cc: Max Filippov
Cc: Michael Ellerman
Cc: Michal Simek
Cc: Palmer Dabbelt
Cc: Paul Mackerras
Cc: Paul Walmsley
Cc: Peter Zijlstra
Cc: Russell King
Cc: Stafford Horne
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: Yoshinori Sato
Link: https://lkml.kernel.org/r/20200818151634.14343-18-rppt@kernel.org
Signed-off-by: Linus Torvalds
-
There are several occurrences of the following pattern:

	for_each_memblock(memory, reg) {
		start = __pfn_to_phys(memblock_region_memory_base_pfn(reg));
		end = __pfn_to_phys(memblock_region_memory_end_pfn(reg));

		/* do something with start and end */
	}

Using the for_each_mem_range() iterator is more appropriate in such cases and
allows simpler and cleaner code.
[akpm@linux-foundation.org: fix arch/arm/mm/pmsa-v7.c build]
[rppt@linux.ibm.com: mips: fix cavium-octeon build caused by memblock refactoring]
Link: http://lkml.kernel.org/r/20200827124549.GD167163@linux.ibm.com
Signed-off-by: Mike Rapoport
Signed-off-by: Andrew Morton
Cc: Andy Lutomirski
Cc: Baoquan He
Cc: Benjamin Herrenschmidt
Cc: Borislav Petkov
Cc: Catalin Marinas
Cc: Christoph Hellwig
Cc: Daniel Axtens
Cc: Dave Hansen
Cc: Emil Renner Berthing
Cc: Hari Bathini
Cc: Ingo Molnar
Cc: Ingo Molnar
Cc: Jonathan Cameron
Cc: Marek Szyprowski
Cc: Max Filippov
Cc: Michael Ellerman
Cc: Michal Simek
Cc: Miguel Ojeda
Cc: Palmer Dabbelt
Cc: Paul Mackerras
Cc: Paul Walmsley
Cc: Peter Zijlstra
Cc: Russell King
Cc: Stafford Horne
Cc: Thomas Bogendoerfer
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: Yoshinori Sato
Link: https://lkml.kernel.org/r/20200818151634.14343-13-rppt@kernel.org
Signed-off-by: Linus Torvalds
-
RISC-V does not (yet) support NUMA, and for UMA architectures node 0 is
used implicitly during early memory initialization.
There is no need to call memblock_set_node(); remove this call and the
surrounding code.
Signed-off-by: Mike Rapoport
Signed-off-by: Andrew Morton
Cc: Andy Lutomirski
Cc: Baoquan He
Cc: Benjamin Herrenschmidt
Cc: Borislav Petkov
Cc: Catalin Marinas
Cc: Christoph Hellwig
Cc: Daniel Axtens
Cc: Dave Hansen
Cc: Emil Renner Berthing
Cc: Hari Bathini
Cc: Ingo Molnar
Cc: Ingo Molnar
Cc: Jonathan Cameron
Cc: Marek Szyprowski
Cc: Max Filippov
Cc: Michael Ellerman
Cc: Michal Simek
Cc: Miguel Ojeda
Cc: Palmer Dabbelt
Cc: Paul Mackerras
Cc: Paul Walmsley
Cc: Peter Zijlstra
Cc: Russell King
Cc: Stafford Horne
Cc: Thomas Bogendoerfer
Cc: Thomas Gleixner
Cc: Will Deacon
Cc: Yoshinori Sato
Link: https://lkml.kernel.org/r/20200818151634.14343-7-rppt@kernel.org
Signed-off-by: Linus Torvalds
05 Oct, 2020
1 commit
-
Currently, the memory containing the DT is not reserved. Thus, that region
of memory can be reallocated or reused for other purposes. This may result
in a corrupted DT for the nommu virt board in QEMU. We may not face any issue
on kendryte, as the DT is embedded in the kernel image there.
Fixes: 6bd33e1ece52 ("riscv: add nommu support")
Cc: stable@vger.kernel.org
Signed-off-by: Atish Patra
Signed-off-by: Palmer Dabbelt
03 Oct, 2020
5 commits
-
Extend the current page table dump support in RISC-V to include EFI
pages as well. Here is the output of the EFI runtime page table mappings:
---[ UEFI runtime start ]---
0x0000000020002000-0x0000000020003000 0x00000000be732000 4K PTE D A . . . W R V
0x0000000020018000-0x0000000020019000 0x00000000be738000 4K PTE D A . . . W R V
0x000000002002c000-0x000000002002d000 0x00000000be73c000 4K PTE D A . . . W R V
0x0000000020031000-0x0000000020032000 0x00000000bff61000 4K PTE D A . . X W R V
---[ UEFI runtime end ]---
Signed-off-by: Atish Patra
Reviewed-by: Anup Patel
Signed-off-by: Palmer Dabbelt
-
This patch adds EFI runtime service support for RISC-V.
Signed-off-by: Atish Patra
[ardb: - Remove the page check]
Signed-off-by: Ard Biesheuvel
Acked-by: Ard Biesheuvel
Signed-off-by: Palmer Dabbelt
-
Currently, page table setup is done during setup_vm_final, where fixmap can
be used to create the temporary mappings. The physical frame is allocated
from the memblock_alloc_* functions. However, this won't work if a page table
mapping needs to be created for a different mm context (i.e. efi mm) at
a later point in time.
Use generic kernel page allocation functions & macros for any mapping
after setup_vm_final.
Signed-off-by: Atish Patra
Reviewed-by: Anup Patel
Acked-by: Mike Rapoport
Signed-off-by: Palmer Dabbelt
-
UEFI uses early IO or memory mappings for runtime services before
normal ioremap() is usable. Add the necessary fixmap bindings and
pmd mappings for generic ioremap support to work.
Signed-off-by: Atish Patra
Reviewed-by: Anup Patel
Reviewed-by: Palmer Dabbelt
Signed-off-by: Palmer Dabbelt
-
Currently, RISC-V reserves 1MB of fixmap memory for the device tree. However,
it maps only a single PMD (2MB) for the fixmap, which leaves less than 1MB
for other kernel features that require fixmap as well, such as early ioremap.
The fixmap size can be increased by another 2MB, but that brings additional
complexity and changes the virtual memory layout as well. If another feature
requiring fixmap comes along, it would have to be moved again.
Technically, the DT doesn't need a fixmap, as the memory occupied by the DT is
only used during boot. That's why we map the device tree in the early page table
using two consecutive PGD mappings at lower addresses (< PAGE_OFFSET).
This frees a lot of space in the fixmap and also raises the maximum supported
device tree size to PGDIR_SIZE. Thus, the init memory section can be used
for the same purpose as well. This simplifies the fixmap implementation.
Signed-off-by: Anup Patel
Signed-off-by: Atish Patra
Reviewed-by: Palmer Dabbelt
Signed-off-by: Palmer Dabbelt
20 Sep, 2020
1 commit
-
This invalidates the local TLB after modifying the page tables during early init,
as it's too early to handle spurious faults as we otherwise do.
Fixes: f2c17aabc917 ("RISC-V: Implement compile-time fixed mappings")
Reported-by: Syven Wang
Signed-off-by: Syven Wang
Signed-off-by: Greentime Hu
Reviewed-by: Anup Patel
[Palmer: Cleaned up the commit text]
Signed-off-by: Palmer Dabbelt
16 Sep, 2020
11 commits
-
If the page fault "cause" is EXC_INST_PAGE_FAULT, set the
FAULT_FLAG_INSTRUCTION flag to let handle_mm_fault() and friends know
about it. This has no functional changes because RISC-V uses the default
arch_vma_access_permitted() implementation, which always returns true.
However, dax_pmd_fault(), for example, has a tracepoint that uses
FAULT_FLAG_INSTRUCTION, so we might as well set it.
Signed-off-by: Pekka Enberg
Signed-off-by: Palmer Dabbelt
-
The "inline" keyword is in the wrong place in vmalloc_fault()
declaration:>> arch/riscv/mm/fault.c:56:1: warning: 'inline' is not at beginning of declaration [-Wold-style-declaration]
56 | static void inline vmalloc_fault(struct pt_regs *regs, int code, unsigned long addr)
| ^~~~~~Fix that up.
Reported-by: kernel test robot
Signed-off-by: Pekka Enberg
Signed-off-by: Palmer Dabbelt
-
Move the access error check into an access_error() function to simplify
the control flow in do_page_fault().
Signed-off-by: Pekka Enberg
Signed-off-by: Palmer Dabbelt
-
Let's handle the translation of EXC_STORE_PAGE_FAULT to FAULT_FLAG_WRITE
once before looking up the VMA. This makes it easier to extract access
error logic in the next patch.
Signed-off-by: Pekka Enberg
Signed-off-by: Palmer Dabbelt
-
Simplify the mm_fault_error() handling function by eliminating the
unnecessary gotos.
Signed-off-by: Pekka Enberg
Signed-off-by: Palmer Dabbelt
-
This patch moves the fault error handling to mm_fault_error() function
and converts gotos to calls to the new function.
Signed-off-by: Pekka Enberg
Signed-off-by: Palmer Dabbelt
-
Move fault error handling after retry logic. This simplifies the code
flow and makes it easier to move fault error handling to its own
function.
Signed-off-by: Pekka Enberg
Signed-off-by: Palmer Dabbelt
-
This patch moves the vmalloc fault handling in do_page_fault() to
vmalloc_fault() function and converts gotos to calls to the new
function.
Signed-off-by: Pekka Enberg
Signed-off-by: Palmer Dabbelt
-
This patch moves the bad area handling in do_page_fault() to bad_area()
function and converts gotos to calls to the new function.
Signed-off-by: Pekka Enberg
Signed-off-by: Palmer Dabbelt
-
This patch moves the no context handling in do_page_fault() to
no_context() function and converts gotos to calls to the new function.
Signed-off-by: Pekka Enberg
Signed-off-by: Palmer Dabbelt
-
Let's combine the two retry logic if statements in do_page_fault() to
simplify the code.
Signed-off-by: Pekka Enberg
Signed-off-by: Palmer Dabbelt
13 Aug, 2020
2 commits
-
Use the general page fault accounting by passing regs into
handle_mm_fault(). It naturally solves the issue of multiple page fault
accountings when a page fault retry happens.
Signed-off-by: Peter Xu
Signed-off-by: Andrew Morton
Reviewed-by: Pekka Enberg
Acked-by: Palmer Dabbelt
Cc: Paul Walmsley
Cc: Albert Ou
Link: http://lkml.kernel.org/r/20200707225021.200906-18-peterx@redhat.com
Signed-off-by: Linus Torvalds
-
Patch series "mm: Page fault accounting cleanups", v5.
This is v5 of the pf accounting cleanup series. It originates from Gerald
Schaefer's report on an issue a week ago regarding to incorrect page fault
accountings for retried page fault after commit 4064b9827063 ("mm: allow
VM_FAULT_RETRY for multiple times"):https://lore.kernel.org/lkml/20200610174811.44b94525@thinkpad/
What this series did:
- Correct page fault accounting: we do accounting for a page fault
(no matter whether it's from #PF handling, or gup, or anything else)
only with the one that completed the fault. For example, page fault
retries should not be counted in page fault counters. Same to the
perf events.- Unify definition of PERF_COUNT_SW_PAGE_FAULTS: currently this perf
event is used in an adhoc way across different archs.Case (1): for many archs it's done at the entry of a page fault
handler, so that it will also cover e.g. errornous faults.Case (2): for some other archs, it is only accounted when the page
fault is resolved successfully.Case (3): there're still quite some archs that have not enabled
this perf event.Since this series will touch merely all the archs, we unify this
perf event to always follow case (1), which is the one that makes most
sense. And since we moved the accounting into handle_mm_fault, the
other two MAJ/MIN perf events are well taken care of naturally.- Unify definition of "major faults": the definition of "major
fault" is slightly changed when used in accounting (not
VM_FAULT_MAJOR). More information in patch 1.- Always account the page fault onto the one that triggered the page
fault. This does not matter much for #PF handlings, but mostly for
gup. More information on this in patch 25.Patchset layout:
Patch 1: Introduced the accounting in handle_mm_fault(), not enabled.
Patch 2-23: Enable the new accounting for arch #PF handlers one by one.
Patch 24: Enable the new accounting for the rest outliers (gup, iommu, etc.)
Patch 25: Cleanup GUP task_struct pointer since it's not needed any moreThis patch (of 25):
This is a preparation patch to move page fault accountings into the
general code in handle_mm_fault(). This includes both the per task
flt_maj/flt_min counters, and the major/minor page fault perf events. To
do this, the pt_regs pointer is passed into handle_mm_fault().PERF_COUNT_SW_PAGE_FAULTS should still be kept in per-arch page fault
handlers.So far, all the pt_regs pointer that passed into handle_mm_fault() is
NULL, which means this patch should have no intented functional change.Suggested-by: Linus Torvalds
Signed-off-by: Peter Xu
Signed-off-by: Andrew Morton
Cc: Albert Ou
Cc: Alexander Gordeev
Cc: Andy Lutomirski
Cc: Benjamin Herrenschmidt
Cc: Borislav Petkov
Cc: Brian Cain
Cc: Catalin Marinas
Cc: Christian Borntraeger
Cc: Chris Zankel
Cc: Dave Hansen
Cc: David S. Miller
Cc: Geert Uytterhoeven
Cc: Gerald Schaefer
Cc: Greentime Hu
Cc: Guo Ren
Cc: Heiko Carstens
Cc: Helge Deller
Cc: H. Peter Anvin
Cc: Ingo Molnar
Cc: Ivan Kokshaysky
Cc: James E.J. Bottomley
Cc: John Hubbard
Cc: Jonas Bonn
Cc: Ley Foon Tan
Cc: "Luck, Tony"
Cc: Matt Turner
Cc: Max Filippov
Cc: Michael Ellerman
Cc: Michal Simek
Cc: Nick Hu
Cc: Palmer Dabbelt
Cc: Paul Mackerras
Cc: Paul Walmsley
Cc: Pekka Enberg
Cc: Peter Zijlstra
Cc: Richard Henderson
Cc: Rich Felker
Cc: Russell King
Cc: Stafford Horne
Cc: Stefan Kristiansson
Cc: Thomas Bogendoerfer
Cc: Thomas Gleixner
Cc: Vasily Gorbik
Cc: Vincent Chen
Cc: Vineet Gupta
Cc: Will Deacon
Cc: Yoshinori Sato
Link: http://lkml.kernel.org/r/20200707225021.200906-1-peterx@redhat.com
Link: http://lkml.kernel.org/r/20200707225021.200906-2-peterx@redhat.com
Signed-off-by: Linus Torvalds
08 Aug, 2020
5 commits
-
Merge misc updates from Andrew Morton:
- a few MM hotfixes
- kthread, tools, scripts, ntfs and ocfs2
- some of MM
Subsystems affected by this patch series: kthread, tools, scripts, ntfs,
ocfs2 and mm (hotfixes, pagealloc, slab-generic, slab, slub, kcsan,
debug, pagecache, gup, swap, shmem, memcg, pagemap, mremap, mincore,
sparsemem, vmalloc, kasan, pagealloc, hugetlb and vmscan).

* emailed patches from Andrew Morton: (162 commits)
mm: vmscan: consistent update to pgrefill
mm/vmscan.c: fix typo
khugepaged: khugepaged_test_exit() check mmget_still_valid()
khugepaged: retract_page_tables() remember to test exit
khugepaged: collapse_pte_mapped_thp() protect the pmd lock
khugepaged: collapse_pte_mapped_thp() flush the right range
mm/hugetlb: fix calculation of adjust_range_if_pmd_sharing_possible
mm: thp: replace HTTP links with HTTPS ones
mm/page_alloc: fix memalloc_nocma_{save/restore} APIs
mm/page_alloc.c: skip setting nodemask when we are in interrupt
mm/page_alloc: fallbacks at most has 3 elements
mm/page_alloc: silence a KASAN false positive
mm/page_alloc.c: remove unnecessary end_bitidx for [set|get]_pfnblock_flags_mask()
mm/page_alloc.c: simplify pageblock bitmap access
mm/page_alloc.c: extract the common part in pfn_to_bitidx()
mm/page_alloc.c: replace the definition of NR_MIGRATETYPE_BITS with PB_migratetype_bits
mm/shuffle: remove dynamic reconfiguration
mm/memory_hotplug: document why shuffle_zone() is relevant
mm/page_alloc: remove nr_free_pagecache_pages()
mm: remove vm_total_pages
...
-
After the removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP we have two equivalent
functions that call memory_present() for each region in memblock.memory:
sparse_memory_present_with_active_regions() and memblocks_present().
Moreover, all architectures have a call to either of these functions
preceding the call to sparse_init(), and in most cases they are called
one after the other.
Mark the regions from memblock.memory as present during sparse_init() by
making sparse_init() call memblocks_present(), make the memblocks_present()
and memory_present() functions static, and remove the redundant
sparse_memory_present_with_active_regions() function.
Also remove the no longer required HAVE_MEMORY_PRESENT configuration option.
Signed-off-by: Mike Rapoport
Signed-off-by: Andrew Morton
Link: http://lkml.kernel.org/r/20200712083130.22919-1-rppt@kernel.org
Signed-off-by: Linus Torvalds
-
Patch series "arm64: Enable vmemmap mapping from device memory", v4.
This series enables vmemmap backing memory allocation from device memory
ranges on arm64. But before that, it enables vmemmap_populate_basepages()
and vmemmap_alloc_block_buf() to accommodate struct vmem_altmap based
alocation requests.This patch (of 3):
vmemmap_populate_basepages() is used across platforms to allocate backing
memory for vmemmap mapping. This is used as a standard default choice or
as a fallback when intended huge pages allocation fails. This just
creates entire vmemmap mapping with base pages (PAGE_SIZE).On arm64 platforms, vmemmap_populate_basepages() is called instead of the
platform specific vmemmap_populate() when ARM64_SWAPPER_USES_SECTION_MAPS
is not enabled as in case for ARM64_16K_PAGES and ARM64_64K_PAGES configs.At present vmemmap_populate_basepages() does not support allocating from
driver defined struct vmem_altmap while trying to create vmemmap mapping
for a device memory range. It prevents ARM64_16K_PAGES and
ARM64_64K_PAGES configs on arm64 from supporting device memory with
vmemap_altmap request.This enables vmem_altmap support in vmemmap_populate_basepages() unlocking
device memory allocation for vmemap mapping on arm64 platforms with 16K or
64K base page configs.Each architecture should evaluate and decide on subscribing device memory
based base page allocation through vmemmap_populate_basepages(). Hence
lets keep it disabled on all archs in order to preserve the existing
semantics. A subsequent patch enables it on arm64.Signed-off-by: Anshuman Khandual
Signed-off-by: Andrew Morton
Tested-by: Jia He
Reviewed-by: David Hildenbrand
Acked-by: Will Deacon
Acked-by: Catalin Marinas
Cc: Mark Rutland
Cc: Paul Walmsley
Cc: Palmer Dabbelt
Cc: Tony Luck
Cc: Fenghua Yu
Cc: Dave Hansen
Cc: Andy Lutomirski
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Mike Rapoport
Cc: Michal Hocko
Cc: "Matthew Wilcox (Oracle)"
Cc: "Kirill A. Shutemov"
Cc: Dan Williams
Cc: Pavel Tatashin
Cc: Benjamin Herrenschmidt
Cc: Borislav Petkov
Cc: "H. Peter Anvin"
Cc: Hsin-Yi Wang
Cc: Jonathan Corbet
Cc: Michael Ellerman
Cc: Paul Mackerras
Cc: Robin Murphy
Cc: Steve Capper
Cc: Yu Zhao
Link: http://lkml.kernel.org/r/1594004178-8861-1-git-send-email-anshuman.khandual@arm.com
Link: http://lkml.kernel.org/r/1594004178-8861-2-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Linus Torvalds
-
Patch series "mm: cleanup usage of "
Most architectures have very similar versions of pXd_alloc_one() and
pXd_free_one() for intermediate levels of page table. These patches add
generic versions of these functions in and enable
use of the generic functions where appropriate.In addition, functions declared and defined in headers are
used mostly by core mm and early mm initialization in arch and there is no
actual reason to have the included all over the place.
The first patch in this series removes unneeded includes ofIn the end it didn't work out as neatly as I hoped and moving
pXd_alloc_track() definitions to would require
unnecessary changes to arches that have custom page table allocations, so
I've decided to move lib/ioremap.c to mm/ and make pgalloc-track.h local
to mm/.This patch (of 8):
In most cases header is required only for allocations of
page table memory. Most of the .c files that include that header do not
use symbols declared in and do not require that header.As for the other header files that used to include , it is
possible to move that include into the .c file that actually uses symbols
from and drop the include from the header file.The process was somewhat automated using
sed -i -E '/[
Signed-off-by: Andrew Morton
Reviewed-by: Pekka Enberg
Acked-by: Geert Uytterhoeven [m68k]
Cc: Abdul Haleem
Cc: Andy Lutomirski
Cc: Arnd Bergmann
Cc: Christophe Leroy
Cc: Joerg Roedel
Cc: Max Filippov
Cc: Peter Zijlstra
Cc: Satheesh Rajendran
Cc: Stafford Horne
Cc: Stephen Rothwell
Cc: Steven Rostedt
Cc: Joerg Roedel
Cc: Matthew Wilcox
Link: http://lkml.kernel.org/r/20200627143453.31835-1-rppt@kernel.org
Link: http://lkml.kernel.org/r/20200627143453.31835-2-rppt@kernel.org
Signed-off-by: Linus Torvalds
-
Pull RISC-V updates from Palmer Dabbelt:
"We have a lot of new kernel features for this merge window:
- ARCH_SUPPORTS_ATOMIC_RMW, to allow OSQ locks to be enabled
- The ability to enable NO_HZ_FULL
- Support for enabling kcov, kmemleak, stack protector, and VM
  debugging
- JUMP_LABEL support
There are also a handful of cleanups"
* tag 'riscv-for-linus-5.9-mw0' of git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux: (24 commits)
riscv: disable stack-protector for vDSO
RISC-V: Fix build warning for smpboot.c
riscv: fix build warning of mm/pageattr
riscv: Fix build warning for mm/init
RISC-V: Setup exception vector early
riscv: Select ARCH_HAS_DEBUG_VM_PGTABLE
riscv: Use generic pgprot_* macros from
mm: pgtable: Make generic pgprot_* macros available for no-MMU
riscv: Cleanup unnecessary define in asm-offset.c
riscv: Add jump-label implementation
riscv: Support R_RISCV_ADD64 and R_RISCV_SUB64 relocs
Replace HTTP links with HTTPS ones: RISC-V
riscv: Add STACKPROTECTOR supported
riscv: Fix typo in asm/hwcap.h uapi header
riscv: Add kmemleak support
riscv: Allow building with kcov coverage
riscv: Enable context tracking
riscv: Support irq_work via self IPIs
riscv: Enable LOCKDEP_SUPPORT & fixup TRACE_IRQFLAGS_SUPPORT
riscv: Fixup lockdep_assert_held with wrong param cpu_running
...
31 Jul, 2020
3 commits
-
Add a header for the missing prototype. Also, the static keyword should be
at the beginning of the declaration.
Signed-off-by: Zong Li
Reviewed-by: Pekka Enberg
Signed-off-by: Palmer Dabbelt
-
Add the static keyword for resource_init; this function is only used in this
object file.
Signed-off-by: Zong Li
Signed-off-by: Palmer Dabbelt
-
Add ARCH_HAS_KCOV and HAVE_GCC_PLUGINS to the riscv Kconfig.
Also disable instrumentation of some early boot code and vdso.
Boot-tested on QEMU's riscv64 virt machine.
Signed-off-by: Tobias Klauser
Acked-by: Dmitry Vyukov
Signed-off-by: Palmer Dabbelt
25 Jul, 2020
3 commits
-
Currently, the maximum physical memory allowed is equal to -PAGE_OFFSET.
That's why we remove any memory blocks spanning beyond that size. However,
it is done only for the memblock containing the Linux kernel, which will not
work if there are multiple memblocks.
Process all memory blocks to figure out how much memory needs to be removed,
and remove it at the end instead of updating the memblock list in place.
Signed-off-by: Atish Patra
Signed-off-by: Palmer Dabbelt
-
Currently, initrd_start/end are computed during early_init_dt_scan
but used during arch_setup. We will get the following panic if initrd is used
and CONFIG_DEBUG_VIRTUAL is turned on.

[ 0.000000] ------------[ cut here ]------------
[ 0.000000] kernel BUG at arch/riscv/mm/physaddr.c:33!
[ 0.000000] Kernel BUG [#1]
[ 0.000000] Modules linked in:
[ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 5.8.0-rc4-00015-ged0b226fed02 #886
[ 0.000000] epc: ffffffe0002058d2 ra : ffffffe0000053f0 sp : ffffffe001001f40
[ 0.000000] gp : ffffffe00106e250 tp : ffffffe001009d40 t0 : ffffffe00107ee28
[ 0.000000] t1 : 0000000000000000 t2 : ffffffe000a2e880 s0 : ffffffe001001f50
[ 0.000000] s1 : ffffffe0001383e8 a0 : ffffffe00c087e00 a1 : 0000000080200000
[ 0.000000] a2 : 00000000010bf000 a3 : ffffffe00106f3c8 a4 : ffffffe0010bf000
[ 0.000000] a5 : ffffffe000000000 a6 : 0000000000000006 a7 : 0000000000000001
[ 0.000000] s2 : ffffffe00106f068 s3 : ffffffe00106f070 s4 : 0000000080200000
[ 0.000000] s5 : 0000000082200000 s6 : 0000000000000000 s7 : 0000000000000000
[ 0.000000] s8 : 0000000080011010 s9 : 0000000080012700 s10: 0000000000000000
[ 0.000000] s11: 0000000000000000 t3 : 000000000001fe30 t4 : 000000000001fe30
[ 0.000000] t5 : 0000000000000000 t6 : ffffffe00107c471
[ 0.000000] status: 0000000000000100 badaddr: 0000000000000000 cause: 0000000000000003
[ 0.000000] random: get_random_bytes called from print_oops_end_marker+0x22/0x46 with crng_init=0

To avoid the error, initrd_start/end can be computed from phys_initrd_start/size
in setup itself. It also improves the initrd placement by aligning the start
and size with the page size.
Fixes: 76d2a0493a17 ("RISC-V: Init and Halt Code")
Signed-off-by: Atish Patra
Signed-off-by: Palmer Dabbelt
-
Currently, the maximum number of mapped pages is set to the pfn calculated
from the memblock size of the memblock containing the kernel. This works only
as long as that memblock spans the entire memory. However, it will be set to
a wrong value if there are multiple memblocks defined in the kernel
(e.g. with efi runtime services).
Set the maximum value to the pfn calculated from the dram size.
Signed-off-by: Atish Patra
Signed-off-by: Palmer Dabbelt