25 Jan, 2021
1 commit
-
Change-Id: I173bc0325c93daaf82bb894f77ba35fa827864de
20 Jan, 2021
2 commits
-
[ Upstream commit 9348b73c2e1bfea74ccd4a44fb4ccc7276ab9623 ]
Turning a pinned page read-only breaks the pinning after COW. Don't do it.
The whole "track page soft dirty" state doesn't work with pinned pages
anyway, since the page might be dirtied by the pinning entity without
ever being noticed in the page tables.
Signed-off-by: Linus Torvalds
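The fix can be sketched as an early bail-out in clear_soft_dirty() (a minimal sketch, assuming the kernel's page_maybe_dma_pinned() helper; the actual commit is more careful than this):

```c
/* Sketch: don't write-protect a pte whose page may be pinned for DMA */
if (pte_present(ptent)) {
	struct page *page = pte_page(ptent);

	if (page_maybe_dma_pinned(page))
		return;	/* COW after wrprotect would break the pin */

	ptent = pte_wrprotect(ptent);
	ptent = pte_clear_soft_dirty(ptent);
	/* ... write the updated pte back as before ... */
}
```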
Signed-off-by: Sasha Levin -
[ Upstream commit 29a951dfb3c3263c3a0f3bd9f7f2c2cfde4baedb ]
Turning page table entries read-only requires the mmap_sem held for
writing.
So stop doing the odd games with turning things from read locks to write
locks and back. Just get the write lock.
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
12 Dec, 2020
2 commits
-
…pub/scm/linux/kernel/git/mtd/linux") into android-mainline
Steps on the way to 5.10-final
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Iaf246e292babee342e89c090dad69e12e2cb4d75 -
When we try to visit the pagemap of a tagged userspace pointer, we find
that the start_vaddr is not correct because of the tag.
To fix it, we should untag the userspace pointers in pagemap_read().
I tested with 5.10-rc4 and the issue remains.
Explanation from Catalin in [1]:
"Arguably, that's a user-space bug since tagged file offsets were never
supported. In this case it's not even a tag at bit 56 as per the arm64
tagged address ABI but rather down to bit 47. You could say that the
problem is caused by the C library (malloc()) or whoever created the
tagged vaddr and passed it to this function. It's not a kernel
regression as we've never supported it.
Now, pagemap is a special case where the offset is usually not
generated as a classic file offset but rather derived by shifting a
user virtual address. I guess we can make a concession for pagemap
(only) and allow such offset with the tag at bit (56 - PAGE_SHIFT + 3)"
My test code is based on [2]:
A userspace pointer which has been tagged by 0xb4: 0xb400007662f541c8
userspace program:
uint64 OsLayer::VirtualToPhysical(void *vaddr) {
uint64 frame, paddr, pfnmask, pagemask;
int pagesize = sysconf(_SC_PAGESIZE);
off64_t off = ((uintptr_t)vaddr) / pagesize * 8; // off = 0xb400007662f541c8 / pagesize * 8 = 0x5a00003b317aa0
int fd = open(kPagemapPath, O_RDONLY);
...
if (lseek64(fd, off, SEEK_SET) != off || read(fd, &frame, 8) != 8) {
int err = errno;
string errtxt = ErrorString(err);
if (fd >= 0)
close(fd);
return 0;
}
...
}
kernel fs/proc/task_mmu.c:
static ssize_t pagemap_read(struct file *file, char __user *buf,
size_t count, loff_t *ppos)
{
...
src = *ppos;
svpfn = src / PM_ENTRY_BYTES; // svpfn == 0xb400007662f54
start_vaddr = svpfn << PAGE_SHIFT; // start_vaddr == 0xb400007662f54000
end_vaddr = mm->task_size;

/* watch out for wraparound */
// svpfn == 0xb400007662f54
// (mm->task_size >> PAGE_SHIFT) == 0x8000000
if (svpfn > mm->task_size >> PAGE_SHIFT) // the condition is true because of the tag 0xb4
start_vaddr = end_vaddr;

ret = 0;
while (count && (start_vaddr < end_vaddr)) { // we cannot visit the correct entry because start_vaddr is set to end_vaddr
int len;
unsigned long end;
...
}
...
}

[1] https://lore.kernel.org/patchwork/patch/1343258/
[2] https://github.com/stressapptest/stressapptest/blob/master/src/os.cc#L158
Link: https://lkml.kernel.org/r/20201204024347.8295-1-miles.chen@mediatek.com
Signed-off-by: Miles Chen
Reviewed-by: Vincenzo Frascino
Reviewed-by: Catalin Marinas
Cc: Alexey Dobriyan
Cc: Andrey Konovalov
Cc: Alexander Potapenko
Cc: Vincenzo Frascino
Cc: Andrey Ryabinin
Cc: Catalin Marinas
Cc: Dmitry Vyukov
Cc: Marco Elver
Cc: Will Deacon
Cc: Eric W. Biederman
Cc: Song Bao Hua (Barry Song)
Cc: [5.4-]
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
26 Oct, 2020
1 commit
-
…nux/kernel/git/mchehab/linux-media") into android-mainline
Steps on the way to 5.10-rc1
Resolves conflicts in:
fs/userfaultfd.c
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Ie3fe3c818f1f6565cfd4fa551de72d2b72ef60af
25 Oct, 2020
1 commit
-
Steps on the way to 5.10-rc1
Change-Id: Iddc84c25b6a9d71fa8542b927d6f69c364131c3d
Signed-off-by: Greg Kroah-Hartman
21 Oct, 2020
1 commit
-
…linux/kernel/git/arm64/linux") into android-mainline
Tiny steps on the way to 5.10-rc1.
Change-Id: I8ff6cb398ac1c0623bf2cefd29860616d05be107
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
17 Oct, 2020
1 commit
-
The preceding patches have ensured that core dumping properly takes the
mmap_lock. Thanks to that, we can now remove mmget_still_valid() and all
its users.
Signed-off-by: Jann Horn
Signed-off-by: Andrew Morton
Acked-by: Linus Torvalds
Cc: Christoph Hellwig
Cc: Alexander Viro
Cc: "Eric W . Biederman"
Cc: Oleg Nesterov
Cc: Hugh Dickins
Link: http://lkml.kernel.org/r/20200827114932.3572699-8-jannh@google.com
Signed-off-by: Linus Torvalds
14 Oct, 2020
3 commits
-
smaps_rollup will try to grab mmap_lock and go through the whole vma list
until it finishes iterating. When encountering large processes, the
mmap_lock will be held for a longer time, which may block other write
requests like mmap and munmap from progressing smoothly.
There are upcoming mmap_lock optimizations like range-based locks, but the
lock applied to smaps_rollup would be the coarse type, which doesn't avoid
the occurrence of unpleasant contention.
To solve the aforementioned issue, we add a check which detects whether anyone
wants to grab mmap_lock for write attempts.
Signed-off-by: Chinwen Chang
Signed-off-by: Andrew Morton
Cc: Steven Price
Cc: Michel Lespinasse
Cc: Matthias Brugger
Cc: Vlastimil Babka
Cc: Daniel Jordan
Cc: Davidlohr Bueso
Cc: Chinwen Chang
Cc: Alexey Dobriyan
Cc: "Matthew Wilcox (Oracle)"
Cc: Jason Gunthorpe
Cc: Song Liu
Cc: Jimmy Assarsson
Cc: Huang Ying
Cc: Daniel Kiss
Cc: Laurent Dufour
Link: http://lkml.kernel.org/r/1597715898-3854-4-git-send-email-chinwen.chang@mediatek.com
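The idea can be sketched as follows (a simplified sketch of the vma loop in show_smaps_rollup(), assuming the kernel's mmap_lock_is_contended() helper; the real patch's bookkeeping is more involved):

```c
/* Sketch only: yield the read lock whenever a writer is waiting */
for (vma = priv->mm->mmap; vma; vma = vma->vm_next) {
	smap_gather_stats(vma, &mss, 0);
	last_vma_end = vma->vm_end;

	/* A writer (e.g. mmap/munmap) is blocked on mmap_lock */
	if (mmap_lock_is_contended(mm)) {
		mmap_read_unlock(mm);
		ret = mmap_read_lock_killable(mm);
		if (ret)
			goto out_put_mm;
		/* vma list may have changed; resume after last_vma_end */
		vma = find_vma(mm, last_vma_end - 1);
	}
}
```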
Signed-off-by: Linus Torvalds -
Extend smap_gather_stats to support an indicated beginning address at which
it should start gathering. To achieve the goal, we add a new parameter
@start assigned by the caller and try to refactor it for simplicity.
If @start is 0, it will use the range of @vma for gathering.
Signed-off-by: Chinwen Chang
Signed-off-by: Andrew Morton
Reviewed-by: Steven Price
Cc: Michel Lespinasse
Cc: Alexey Dobriyan
Cc: Daniel Jordan
Cc: Daniel Kiss
Cc: Davidlohr Bueso
Cc: Huang Ying
Cc: Jason Gunthorpe
Cc: Jimmy Assarsson
Cc: Laurent Dufour
Cc: "Matthew Wilcox (Oracle)"
Cc: Matthias Brugger
Cc: Song Liu
Cc: Vlastimil Babka
Link: http://lkml.kernel.org/r/1597715898-3854-3-git-send-email-chinwen.chang@mediatek.com
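The interface change described above can be sketched like this (signatures assumed from the description; the actual refactor may differ in detail):

```c
/* Sketch: @start lets a caller resume mid-vma; 0 means "whole vma" */
static void smap_gather_stats(struct vm_area_struct *vma,
			      struct mem_size_stats *mss, unsigned long start)
{
	const struct mm_walk_ops *ops = &smaps_walk_ops;

	if (start >= vma->vm_end)
		return;
	if (!start)
		walk_page_vma(vma, ops, mss);	/* use the full vma range */
	else
		walk_page_range(vma->vm_mm, start, vma->vm_end, ops, mss);
}
```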
Signed-off-by: Linus Torvalds -
Avoid bumping the refcount on pages when we're only interested in the
swap entries.
Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: Andrew Morton
Acked-by: Johannes Weiner
Cc: Alexey Dobriyan
Cc: Chris Wilson
Cc: Huang Ying
Cc: Hugh Dickins
Cc: Jani Nikula
Cc: Matthew Auld
Cc: William Kucharski
Link: https://lkml.kernel.org/r/20200910183318.20139-5-willy@infradead.org
Signed-off-by: Linus Torvalds
04 Sep, 2020
1 commit
-
To enable tagging on a memory range, the user must explicitly opt in via
a new PROT_MTE flag passed to mmap() or mprotect(). Since this is a new
memory type in the AttrIndx field of a pte, simplify the or'ing of these
bits over the protection_map[] attributes by making MT_NORMAL index 0.
There are two conditions for arch_vm_get_page_prot() to return the
MT_NORMAL_TAGGED memory type: (1) the user requested it via PROT_MTE,
registered as VM_MTE in the vm_flags, and (2) the vma supports MTE,
decided during the mmap() call (only) and registered as VM_MTE_ALLOWED.
arch_calc_vm_prot_bits() is responsible for registering the user request
as VM_MTE. The newly introduced arch_calc_vm_flag_bits() sets
VM_MTE_ALLOWED if the mapping is MAP_ANONYMOUS. An MTE-capable
filesystem (RAM-based) may be able to set VM_MTE_ALLOWED during its
mmap() file ops call.
In addition, update VM_DATA_DEFAULT_FLAGS to allow mprotect(PROT_MTE) on
stack or brk area.
The Linux mmap() syscall currently ignores unknown PROT_* flags. In the
presence of MTE, an mmap(PROT_MTE) on a file which does not support MTE
will not report an error and the memory will not be mapped as Normal
Tagged. For consistency, mprotect(PROT_MTE) will not report an error
either if the memory range does not support MTE. Two subsequent patches
in the series will propose tightening of this behaviour.
Co-developed-by: Vincenzo Frascino
Signed-off-by: Vincenzo Frascino
Signed-off-by: Catalin Marinas
Cc: Will Deacon
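From the userspace side, opting in looks roughly like this (an arm64-only sketch; the PROT_MTE value 0x20 is assumed from the arm64 uapi headers and is not portable, and the program only does anything meaningful on MTE-capable hardware):

```c
#include <sys/mman.h>
#include <stdio.h>

#ifndef PROT_MTE
#define PROT_MTE 0x20	/* arm64 uapi value, assumed; not portable */
#endif

int main(void)
{
	/* Anonymous mappings get VM_MTE_ALLOWED, so PROT_MTE is honoured */
	void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_MTE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap(PROT_MTE)");
		return 1;
	}
	/* Per the description above, on a non-MTE target the flag is
	 * silently ignored rather than reported as an error. */
	return 0;
}
```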
13 Aug, 2020
3 commits
-
…te anonymous memory")
In the mainline commit 64019a2e467a ("mm/gup: remove task_struct pointer
for all gup code"), get_user_pages_remote() removed a parameter, which
broke the build in the fs/proc/task_mmu.c file due to the previously
mentioned android-only change.
Fix this up by correcting the parameters for this call.
Bug: 120441514
Cc: Dmitry Shmidt <dimitrysh@google.com>
Cc: Amit Pundir <amit.pundir@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I753c22c298d0397b625daf28fe89182e074ac540 -
…ernel/git/abelloni/linux") into android-mainline
Steps on the way to 5.9-rc1.
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Iceded779988ff472863b7e1c54e22a9fa6383a30 -
The keys in smaps output are padded to fixed width with spaces. All
except for THPeligible that uses tabs (only since commit c06306696f83
("mm: thp: fix false negative of shmem vma's THP eligibility")).
Unify the output formatting to save time debugging some naïve parsers.
(Part of the unification is also aligning FilePmdMapped with others.)
Signed-off-by: Michal Koutný
Signed-off-by: Andrew Morton
Acked-by: Yang Shi
Cc: Alexey Dobriyan
Cc: Matthew Wilcox
Link: http://lkml.kernel.org/r/20200728083207.17531-1-mkoutny@suse.com
Signed-off-by: Linus Torvalds
24 Jun, 2020
1 commit
-
…cm/linux/kernel/git/linkinjeon/exfat") into android-mainline
Steps on the way to 5.8-rc1.
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I4bc42f572167ea2f815688b4d1eb6124b6d260d4
12 Jun, 2020
1 commit
-
Tiny merge resolutions along the way to 5.8-rc1.
Change-Id: I24b3cca28ed36f32c92b6374dae5d7f006d3bced
Signed-off-by: Greg Kroah-Hartman
10 Jun, 2020
2 commits
-
Convert comments that reference mmap_sem to reference mmap_lock instead.
[akpm@linux-foundation.org: fix up linux-next leftovers]
[akpm@linux-foundation.org: s/lockaphore/lock/, per Vlastimil]
[akpm@linux-foundation.org: more linux-next fixups, per Michel]
Signed-off-by: Michel Lespinasse
Signed-off-by: Andrew Morton
Reviewed-by: Vlastimil Babka
Reviewed-by: Daniel Jordan
Cc: Davidlohr Bueso
Cc: David Rientjes
Cc: Hugh Dickins
Cc: Jason Gunthorpe
Cc: Jerome Glisse
Cc: John Hubbard
Cc: Laurent Dufour
Cc: Liam Howlett
Cc: Matthew Wilcox
Cc: Peter Zijlstra
Cc: Ying Han
Link: http://lkml.kernel.org/r/20200520052908.204642-13-walken@google.com
Signed-off-by: Linus Torvalds -
This change converts the existing mmap_sem rwsem calls to use the new mmap
locking API instead.
The change is generated using coccinelle with the following rule:
// spatch --sp-file mmap_lock_api.cocci --in-place --include-headers --dir .
@@
expression mm;
@@
(
-init_rwsem
+mmap_init_lock
|
-down_write
+mmap_write_lock
|
-down_write_killable
+mmap_write_lock_killable
|
-down_write_trylock
+mmap_write_trylock
|
-up_write
+mmap_write_unlock
|
-downgrade_write
+mmap_write_downgrade
|
-down_read
+mmap_read_lock
|
-down_read_killable
+mmap_read_lock_killable
|
-down_read_trylock
+mmap_read_trylock
|
-up_read
+mmap_read_unlock
)
-(&mm->mmap_sem)
+(mm)
Signed-off-by: Michel Lespinasse
Signed-off-by: Andrew Morton
Reviewed-by: Daniel Jordan
Reviewed-by: Laurent Dufour
Reviewed-by: Vlastimil Babka
Cc: Davidlohr Bueso
Cc: David Rientjes
Cc: Hugh Dickins
Cc: Jason Gunthorpe
Cc: Jerome Glisse
Cc: John Hubbard
Cc: Liam Howlett
Cc: Matthew Wilcox
Cc: Peter Zijlstra
Cc: Ying Han
Link: http://lkml.kernel.org/r/20200520052908.204642-5-walken@google.com
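For fs/proc/task_mmu.c the conversion amounts to mechanical substitutions like the following (an illustrative before/after, not an exact hunk from the patch):

```c
/* Before */
down_read(&mm->mmap_sem);
/* ... walk the vma list ... */
up_read(&mm->mmap_sem);

/* After */
mmap_read_lock(mm);
/* ... walk the vma list ... */
mmap_read_unlock(mm);
```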
Signed-off-by: Linus Torvalds
09 Jun, 2020
2 commits
-
…/pub/scm/linux/kernel/git/arm64/linux") into android-mainline
arm64 baby steps along the way to 5.8-rc1.
Signed-off-by: Will Deacon <willdeacon@google.com>
Change-Id: I245a58d34012df3a7c5702bc05f6f4803e53856e -
…scm/linux/kernel/git/geert/linux-m68k") into android-mainline
Steps along the way to 5.8-rc1
Change-Id: I9b3945d9f149835b7db64d8eba015d8de4160013
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
03 Jun, 2020
2 commits
-
Merge updates from Andrew Morton:
"A few little subsystems and a start of a lot of MM patches.
Subsystems affected by this patch series: squashfs, ocfs2, parisc,
vfs. With mm subsystems: slab-generic, slub, debug, pagecache, gup,
swap, memcg, pagemap, memory-failure, vmalloc, kasan"
* emailed patches from Andrew Morton: (128 commits)
kasan: move kasan_report() into report.c
mm/mm_init.c: report kasan-tag information stored in page->flags
ubsan: entirely disable alignment checks under UBSAN_TRAP
kasan: fix clang compilation warning due to stack protector
x86/mm: remove vmalloc faulting
mm: remove vmalloc_sync_(un)mappings()
x86/mm/32: implement arch_sync_kernel_mappings()
x86/mm/64: implement arch_sync_kernel_mappings()
mm/ioremap: track which page-table levels were modified
mm/vmalloc: track which page-table levels were modified
mm: add functions to track page directory modifications
s390: use __vmalloc_node in stack_alloc
powerpc: use __vmalloc_node in alloc_vm_stack
arm64: use __vmalloc_node in arch_alloc_vmap_stack
mm: remove vmalloc_user_node_flags
mm: switch the test_vmalloc module to use __vmalloc_node
mm: remove __vmalloc_node_flags_caller
mm: remove both instances of __vmalloc_node_flags
mm: remove the prot argument to __vmalloc_node
mm: remove the pgprot argument to __vmalloc
... -
Now, when reading /proc/PID/smaps, the PMD migration entry in the page table
is simply ignored. To improve the accuracy of /proc/PID/smaps, its
parsing and processing is added.
To test the patch, we run pmbench to eat 400 MB memory in the background,
then run /usr/bin/migratepages and `cat /proc/PID/smaps` every second.
The issue described below can be reproduced within 60 seconds.
Before the patch, for the fully populated 400 MB anonymous VMA, some THP
pages under migration may be lost as below.
7f3f6a7e5000-7f3f837e5000 rw-p 00000000 00:00 0
Size: 409600 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
Rss: 407552 kB
Pss: 407552 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 0 kB
Private_Dirty: 407552 kB
Referenced: 301056 kB
Anonymous: 407552 kB
LazyFree: 0 kB
AnonHugePages: 405504 kB
ShmemPmdMapped: 0 kB
FilePmdMapped: 0 kB
Shared_Hugetlb: 0 kB
Private_Hugetlb: 0 kB
Swap: 0 kB
SwapPss: 0 kB
Locked: 0 kB
THPeligible: 1
VmFlags: rd wr mr mw me ac
After the patch, it will always be:
7f3f6a7e5000-7f3f837e5000 rw-p 00000000 00:00 0
Size: 409600 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
Rss: 409600 kB
Pss: 409600 kB
Shared_Clean: 0 kB
Shared_Dirty: 0 kB
Private_Clean: 0 kB
Private_Dirty: 409600 kB
Referenced: 294912 kB
Anonymous: 409600 kB
LazyFree: 0 kB
AnonHugePages: 407552 kB
ShmemPmdMapped: 0 kB
FilePmdMapped: 0 kB
Shared_Hugetlb: 0 kB
Private_Hugetlb: 0 kB
Swap: 0 kB
SwapPss: 0 kB
Locked: 0 kB
THPeligible: 1
VmFlags: rd wr mr mw me ac
Signed-off-by: "Huang, Ying"
Signed-off-by: Andrew Morton
Reviewed-by: Zi Yan
Acked-by: Michal Hocko
Acked-by: Kirill A. Shutemov
Acked-by: Vlastimil Babka
Cc: Andrea Arcangeli
Cc: Alexey Dobriyan
Cc: Konstantin Khlebnikov
Cc: "Jérôme Glisse"
Cc: Yang Shi
Link: http://lkml.kernel.org/r/20200403123059.1846960-1-ying.huang@intel.com
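The gist of the change in smaps_pmd_entry() can be sketched like this (simplified; the helper names come from the kernel's THP migration-entry API, and the exact accounting call may differ from the real patch):

```c
/* Sketch: account a PMD-sized migration entry instead of skipping it */
struct page *page = NULL;

if (pmd_present(*pmd)) {
	page = vm_normal_page_pmd(vma, addr, *pmd);
} else if (unlikely(thp_migration_supported() && is_swap_pmd(*pmd))) {
	swp_entry_t entry = pmd_to_swp_entry(*pmd);

	if (is_migration_entry(entry))
		page = migration_entry_to_page(entry);
}
if (page)
	smaps_account(mss, page, /* compound */ true, pmd_young(*pmd),
		      pmd_dirty(*pmd), locked,
		      /* migration = */ !pmd_present(*pmd));
```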
Signed-off-by: Linus Torvalds
02 Jun, 2020
1 commit
-
Pull arm64 updates from Will Deacon:
"A sizeable pile of arm64 updates for 5.8.
Summary below, but the big two features are support for Branch Target
Identification and Clang's Shadow Call Stack. The latter is currently
arm64-only, but the high-level parts are all in core code so it could
easily be adopted by other architectures pending toolchain support.
Branch Target Identification (BTI):
- Support for ARMv8.5-BTI in both user- and kernel-space. This allows
branch targets to limit the types of branch from which they can be
called and additionally prevents branching to arbitrary code,
although kernel support requires a very recent toolchain.
- Function annotation via SYM_FUNC_START() so that assembly functions
are wrapped with the relevant "landing pad" instructions.
- BPF and vDSO updates to use the new instructions.
- Addition of a new HWCAP and exposure of BTI capability to userspace
via ID register emulation, along with ELF loader support for the
BTI feature in .note.gnu.property.
- Non-critical fixes to CFI unwind annotations in the sigreturn
trampoline.
Shadow Call Stack (SCS):
- Support for Clang's Shadow Call Stack feature, which reserves
platform register x18 to point at a separate stack for each task
that holds only return addresses. This protects function return
control flow from buffer overruns on the main stack.
- Save/restore of x18 across problematic boundaries (user-mode,
hypervisor, EFI, suspend, etc).
- Core support for SCS, should other architectures want to use it
too.
- SCS overflow checking on context-switch as part of the existing
stack limit check if CONFIG_SCHED_STACK_END_CHECK=y.
CPU feature detection:
- Removed numerous "SANITY CHECK" errors when running on a system
with mismatched AArch32 support at EL1. This is primarily a concern
for KVM, which disabled support for 32-bit guests on such a system.
- Addition of new ID registers and fields as the architecture has
been extended.
Perf and PMU drivers:
- Minor fixes and cleanups to system PMU drivers.
Hardware errata:
- Unify KVM workarounds for VHE and nVHE configurations.
- Sort vendor errata entries in Kconfig.
Secure Monitor Call Calling Convention (SMCCC):
- Update to the latest specification from Arm (v1.2).
- Allow PSCI code to query the SMCCC version.
Software Delegated Exception Interface (SDEI):
- Unexport a bunch of unused symbols.
- Minor fixes to handling of firmware data.
Pointer authentication:
- Add support for dumping the kernel PAC mask in vmcoreinfo so that
the stack can be unwound by tools such as kdump.
- Simplification of key initialisation during CPU bringup.
BPF backend:
- Improve immediate generation for logical and add/sub instructions.
vDSO:
- Minor fixes to the linker flags for consistency with other
architectures and support for LLVM's unwinder.
- Clean up logic to initialise and map the vDSO into userspace.
ACPI:
- Work around for an ambiguity in the IORT specification relating to
the "num_ids" field.
- Support _DMA method for all named components rather than only PCIe
root complexes.
- Minor other IORT-related fixes.
Miscellaneous:
- Initialise debug traps early for KGDB and fix KDB cacheflushing
deadlock.
- Minor tweaks to early boot state (documentation update, set
TEXT_OFFSET to 0x0, increase alignment of PE/COFF sections).
- Refactoring and cleanup"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (148 commits)
KVM: arm64: Move __load_guest_stage2 to kvm_mmu.h
KVM: arm64: Check advertised Stage-2 page size capability
arm64/cpufeature: Add get_arm64_ftr_reg_nowarn()
ACPI/IORT: Remove the unused __get_pci_rid()
arm64/cpuinfo: Add ID_MMFR4_EL1 into the cpuinfo_arm64 context
arm64/cpufeature: Add remaining feature bits in ID_AA64PFR1 register
arm64/cpufeature: Add remaining feature bits in ID_AA64PFR0 register
arm64/cpufeature: Add remaining feature bits in ID_AA64ISAR0 register
arm64/cpufeature: Add remaining feature bits in ID_MMFR4 register
arm64/cpufeature: Add remaining feature bits in ID_PFR0 register
arm64/cpufeature: Introduce ID_MMFR5 CPU register
arm64/cpufeature: Introduce ID_DFR1 CPU register
arm64/cpufeature: Introduce ID_PFR2 CPU register
arm64/cpufeature: Make doublelock a signed feature in ID_AA64DFR0
arm64/cpufeature: Drop TraceFilt feature exposure from ID_DFR0 register
arm64/cpufeature: Add explicit ftr_id_isar0[] for ID_ISAR0 register
arm64: mm: Add asid_gen_match() helper
firmware: smccc: Fix missing prototype warning for arm_smccc_version_init
arm64: vdso: Fix CFI directives in sigreturn trampoline
arm64: vdso: Don't prefix sigreturn trampoline with a BTI C instruction
...
05 May, 2020
1 commit
-
Merge in user support for Branch Target Identification, which narrowly
missed the cut for 5.7 after a late ABI concern.
* for-next/bti-user:
arm64: bti: Document behaviour for dynamically linked binaries
arm64: elf: Fix allnoconfig kernel build with !ARCH_USE_GNU_PROPERTY
arm64: BTI: Add Kconfig entry for userspace BTI
mm: smaps: Report arm64 guarded pages in smaps
arm64: mm: Display guarded pages in ptdump
KVM: arm64: BTI: Reset BTYPE when skipping emulated instructions
arm64: BTI: Reset BTYPE when skipping emulated instructions
arm64: traps: Shuffle code to eliminate forward declarations
arm64: unify native/compat instruction skipping
arm64: BTI: Decode BYTPE bits when printing PSTATE
arm64: elf: Enable BTI at exec based on ELF program properties
elf: Allow arch to tweak initial mmap prot flags
arm64: Basic Branch Target Identification support
ELF: Add ELF program property parsing support
ELF: UAPI and Kconfig additions for ELF program properties
23 Apr, 2020
1 commit
-
Remove MPX leftovers in generic code.
Fixes: 45fc24e89b7c ("x86/mpx: remove MPX from arch/x86")
Signed-off-by: Jimmy Assarsson
Signed-off-by: Borislav Petkov
Acked-by: Dave Hansen
Link: https://lkml.kernel.org/r/20200402172507.2786-1-jimmyassarsson@gmail.com
10 Apr, 2020
1 commit
-
…x") into android-mainline
Baby steps on the way to 5.7-rc1
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I89095a90046a14eab189aab257a75b3dfdb5b1db
08 Apr, 2020
4 commits
-
It's clearer to just put this inline.
Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: Alexey Dobriyan
Signed-off-by: Andrew Morton
Link: http://lkml.kernel.org/r/20200317193201.9924-5-adobriyan@gmail.com
Signed-off-by: Linus Torvalds -
The ppos is a private cursor, just like m->version. Use the canonical
cursor, not a special one.
Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: Alexey Dobriyan
Signed-off-by: Andrew Morton
Link: http://lkml.kernel.org/r/20200317193201.9924-3-adobriyan@gmail.com
Signed-off-by: Linus Torvalds -
Instead of setting m->version in the show method, set it in m_next(),
where it should be. Also remove the fallback code for failing to find a
vma, or version being zero.
Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: Alexey Dobriyan
Signed-off-by: Andrew Morton
Link: http://lkml.kernel.org/r/20200317193201.9924-2-adobriyan@gmail.com
Signed-off-by: Linus Torvalds -
Instead of calling vma_stop() from m_start() and m_next(), do its work
in m_stop().
Signed-off-by: Matthew Wilcox (Oracle)
Signed-off-by: Alexey Dobriyan
Signed-off-by: Andrew Morton
Link: http://lkml.kernel.org/r/20200317193201.9924-1-adobriyan@gmail.com
Signed-off-by: Linus Torvalds
17 Mar, 2020
1 commit
-
The arm64 Branch Target Identification support is activated by marking
executable pages as guarded pages. Report pages mapped this way in
smaps to aid diagnostics.
Signed-off-by: Mark Brown
Signed-off-by: Daniel Kiss
Reviewed-by: Kees Cook
Signed-off-by: Catalin Marinas
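The reporting side is tiny; a sketch of what this presumably adds to the VmFlags mnemonics table in show_smap_vma_flags() (the "bt" mnemonic and config guard are assumptions from the series, not quoted from the patch):

```c
/* Sketch: guarded (VM_ARM64_BTI) mappings show up as "bt" in VmFlags */
static const char mnemonics[BITS_PER_LONG][2] = {
	/* ... existing two-letter flag mnemonics ... */
#ifdef CONFIG_ARM64_BTI
	[ilog2(VM_ARM64_BTI)]	= "bt",
#endif
};
```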
08 Feb, 2020
1 commit
-
…x/kernel/git/andersson/remoteproc") into android-mainline
Another "small" merge point to handle conflicts in a sane way.
Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I5dc2f5f11275b29f3c9b5b8d4dd59864ceb6faf9
04 Feb, 2020
1 commit
-
The pte_hole() callback is called at multiple levels of the page tables.
Code dumping the kernel page tables needs to know at what depth the
missing entry is. Add this as an extra parameter to pte_hole(). When the
depth isn't known (e.g. processing a vma) then -1 is passed.
The depth that is reported is the actual level where the entry is missing
(ignoring any folding that is in place), i.e. any levels where
PTRS_PER_P?D is set to 1 are ignored.
Note that depth starts at 0 for a PGD so that PUD/PMD/PTE retain their
natural numbers as levels 2/3/4.
Link: http://lkml.kernel.org/r/20191218162402.45610-16-steven.price@arm.com
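After the change, the callback carries the extra argument; its signature in include/linux/pagewalk.h becomes roughly:

```c
/* depth is 0 (PGD) .. 4 (PTE); -1 when the level isn't known (e.g. a vma gap) */
int (*pte_hole)(unsigned long addr, unsigned long next,
		int depth, struct mm_walk *walk);
```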
Signed-off-by: Steven Price
Tested-by: Zong Li
Cc: Albert Ou
Cc: Alexandre Ghiti
Cc: Andy Lutomirski
Cc: Ard Biesheuvel
Cc: Arnd Bergmann
Cc: Benjamin Herrenschmidt
Cc: Borislav Petkov
Cc: Catalin Marinas
Cc: Christian Borntraeger
Cc: Dave Hansen
Cc: David S. Miller
Cc: Heiko Carstens
Cc: "H. Peter Anvin"
Cc: Ingo Molnar
Cc: James Hogan
Cc: James Morse
Cc: Jerome Glisse
Cc: "Liang, Kan"
Cc: Mark Rutland
Cc: Michael Ellerman
Cc: Paul Burton
Cc: Paul Mackerras
Cc: Paul Walmsley
Cc: Peter Zijlstra
Cc: Ralf Baechle
Cc: Russell King
Cc: Thomas Gleixner
Cc: Vasily Gorbik
Cc: Vineet Gupta
Cc: Will Deacon
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
02 Oct, 2019
1 commit
-
To make the 5.4-rc1 merge easier, merge at a prerelease point in time
before the final release happens.
Signed-off-by: Greg Kroah-Hartman
Change-Id: If613d657fd0abf9910c5bf3435a745f01b89765e
25 Sep, 2019
2 commits
-
In preparation for non-shmem THP, this patch adds a few stats and exposes
them in /proc/meminfo, /sys/bus/node/devices//meminfo, and
/proc//task//smaps.
This patch is mostly a rewrite of Kirill A. Shutemov's earlier version:
https://lkml.kernel.org/r/20170126115819.58875-5-kirill.shutemov@linux.intel.com/
Link: http://lkml.kernel.org/r/20190801184244.3169074-5-songliubraving@fb.com
Signed-off-by: Song Liu
Acked-by: Rik van Riel
Acked-by: Kirill A. Shutemov
Acked-by: Johannes Weiner
Cc: Hillf Danton
Cc: Hugh Dickins
Cc: William Kucharski
Cc: Oleg Nesterov
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds -
Replace 1 << compound_order(page) with compound_nr(page). Minor
improvements in readability.
Link: http://lkml.kernel.org/r/20190721104612.19120-4-willy@infradead.org
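In fs/proc/task_mmu.c this is a one-line substitution of the form (illustrative, not an exact hunk):

```c
/* Before */
nr_pages = 1 << compound_order(page);
/* After: same value, clearer intent */
nr_pages = compound_nr(page);
```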
Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Andrew Morton
Reviewed-by: Ira Weiny
Acked-by: Kirill A. Shutemov
Cc: Michal Hocko
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
23 Sep, 2019
1 commit
-
To make the 5.4-rc1 merge easier, merge at a prerelease point in time
before the final release happens.
Signed-off-by: Greg Kroah-Hartman
Change-Id: I29b683c837ed1a3324644dbf9bf863f30740cd0b
07 Sep, 2019
1 commit
-
The mm_walk structure currently mixes data and code. Split out the
operations vectors into a new mm_walk_ops structure, and while we are
changing the API also declare the mm_walk structure inside the
walk_page_range and walk_page_vma functions.
Based on a patch from Linus Torvalds.
Link: https://lore.kernel.org/r/20190828141955.22210-3-hch@lst.de
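After the split, a walker supplies a const operations table and per-walk state separately; for the smaps walker that looks roughly like this (a sketch of the reworked API; field choices follow task_mmu.c but may not match the patch exactly):

```c
/* Sketch: callbacks live in a static const mm_walk_ops table ... */
static const struct mm_walk_ops smaps_walk_ops = {
	.pmd_entry	= smaps_pte_range,
	.hugetlb_entry	= smaps_hugetlb_range,
};

/* ... while walk_page_vma() now builds the mm_walk internally */
walk_page_vma(vma, &smaps_walk_ops, &mss);
```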
Signed-off-by: Christoph Hellwig
Reviewed-by: Thomas Hellstrom
Reviewed-by: Steven Price
Reviewed-by: Jason Gunthorpe
Signed-off-by: Jason Gunthorpe