03 Dec, 2020

1 commit

  • Pull arm64 fixes from Will Deacon:
    "I'm sad to say that we've got an unusually large arm64 fixes pull for
    rc7 which addresses numerous significant instrumentation issues with
    our entry code.

    Without these patches, lockdep is hopelessly unreliable in some
    configurations [1,2] and syzkaller is therefore not a lot of use
    because it's so noisy.

    Although much of this has always been broken, it appears to have been
    exposed more readily by other changes such as 044d0d6de9f5 ("lockdep:
    Only trace IRQ edges") and general lockdep improvements around IRQ
    tracing and NMIs.

    Fixing this properly required moving much of the instrumentation hooks
    from our entry assembly into C, which Mark has been working on for the
    last few weeks. We're not quite ready to move to the recently added
    generic functions yet, but the code here has been deliberately written
    to mimic that closely so we can look at cleaning things up once we
    have a bit more breathing room.

    Having said all that, the second version of these patches was posted
    last week and I pushed it into our CI (kernelci and cki) along with a
    commit which forced on PROVE_LOCKING, NOHZ_FULL and
    CONTEXT_TRACKING_FORCE. The result? We found a real bug in the
    md/raid10 code [3].

    Oh, and there's also a really silly typo patch that's unrelated.

    Summary:

    - Fix numerous issues with instrumentation and exception entry

    - Fix hideous typo in unused register field definition"

    [1] https://lore.kernel.org/r/CACT4Y+aAzoJ48Mh1wNYD17pJqyEcDnrxGfApir=-j171TnQXhw@mail.gmail.com
    [2] https://lore.kernel.org/r/20201119193819.GA2601289@elver.google.com
    [3] https://lore.kernel.org/r/94c76d5e-466a-bc5f-e6c2-a11b65c39f83@redhat.com

    * tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
    arm64: mte: Fix typo in macro definition
    arm64: entry: fix EL1 debug transitions
    arm64: entry: fix NMI {user,kernel}->kernel transitions
    arm64: entry: fix non-NMI kernel->kernel transitions
    arm64: ptrace: prepare for EL1 irq/rcu tracking
    arm64: entry: fix non-NMI user->kernel transitions
    arm64: entry: move el1 irq/nmi logic to C
    arm64: entry: prepare ret_to_user for function call
    arm64: entry: move enter_from_user_mode to entry-common.c
    arm64: entry: mark entry code as noinstr
    arm64: mark idle code as noinstr
    arm64: syscall: exit userspace before unmasking exceptions

    Linus Torvalds
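
    As a hedged sketch of the pattern this pull moves towards (not the
    actual arm64 patches; every *_sketch name below is hypothetical),
    a C entry handler in this style keeps everything that runs before
    the lockdep/RCU bookkeeping inside a noinstr function:

        /* Hypothetical sketch of the noinstr C entry pattern. */
        asmlinkage void noinstr el1_irq_handler_sketch(struct pt_regs *regs)
        {
                /*
                 * Fix up lockdep/RCU/tracing state first; nothing here
                 * may be instrumented, hence noinstr on this function.
                 */
                enter_from_kernel_mode_sketch(regs);

                instrumentation_begin();
                handle_irq_sketch(regs);        /* instrumentable work */
                instrumentation_end();

                exit_to_kernel_mode_sketch(regs);
        }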
     

30 Nov, 2020

3 commits

  • Core code disables RCU when calling arch_cpu_idle(), so it's not safe
    for arch_cpu_idle() or its callees to be instrumented, as the
    instrumentation callbacks may attempt to use RCU or other features which
    are unsafe to use in this context.

    Mark them noinstr to prevent issues.

    The use of local_irq_enable() in arch_cpu_idle() is similarly
    problematic, and the "sched/idle: Fix arch_cpu_idle() vs tracing" patch
    queued in the tip tree addresses that case.

    Reported-by: Marco Elver
    Signed-off-by: Mark Rutland
    Cc: Catalin Marinas
    Cc: James Morse
    Cc: Will Deacon
    Link: https://lore.kernel.org/r/20201130115950.22492-3-mark.rutland@arm.com
    Signed-off-by: Will Deacon

    Mark Rutland
     
  • Linux 5.10-rc6

    Signed-off-by: Greg Kroah-Hartman
    Change-Id: If86eed9a017e59d6e92d173f089f98102424d052

    Greg Kroah-Hartman
     
  • Pull locking fixes from Thomas Gleixner:
    "Two more places which invoke tracing from RCU disabled regions in the
    idle path.

    Similar to the entry path the low level idle functions have to be
    non-instrumentable"

    * tag 'locking-urgent-2020-11-29' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
    intel_idle: Fix intel_idle() vs tracing
    sched/idle: Fix arch_cpu_idle() vs tracing

    Linus Torvalds
     

24 Nov, 2020

1 commit

  • We call arch_cpu_idle() with RCU disabled, but then use
    local_irq_{en,dis}able(), which invokes tracing, which relies on RCU.

    Switch all arch_cpu_idle() implementations to use
    raw_local_irq_{en,dis}able() and carefully manage the
    lockdep, RCU and tracing state like we do in entry.

    (XXX: we really should change arch_cpu_idle() to not return with
    interrupts enabled)

    Reported-by: Sven Schnelle
    Signed-off-by: Peter Zijlstra (Intel)
    Reviewed-by: Mark Rutland
    Tested-by: Mark Rutland
    Link: https://lkml.kernel.org/r/20201120114925.594122626@infradead.org

    Peter Zijlstra
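
    A minimal sketch of the pattern the commit describes, assuming
    kernel context (the wfi_sketch() placeholder stands in for the
    real architecture idle instruction):

        void noinstr arch_cpu_idle_sketch(void)
        {
                wfi_sketch();             /* idle with IRQs masked       */
                /*
                 * raw_ variant: no trace_hardirqs_on(), which would need
                 * RCU. Note this still returns with IRQs enabled, as the
                 * XXX above laments.
                 */
                raw_local_irq_enable();
        }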
     

10 Nov, 2020

1 commit

  • In a surprising turn of events, it transpires that CPU capabilities
    configured as ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE are never set as the
    result of late-onlining. Therefore our handling of erratum 1418040 does
    not get activated if it is not required by any of the boot CPUs, even
    though we allow late-onlining of an affected CPU.

    In order to get things working again, replace the cpus_have_const_cap()
    invocation with an explicit check for the current CPU using
    this_cpu_has_cap().

    Cc: Sai Prakash Ranjan
    Cc: Stephen Boyd
    Cc: Catalin Marinas
    Cc: Mark Rutland
    Reviewed-by: Suzuki K Poulose
    Acked-by: Marc Zyngier
    Link: https://lore.kernel.org/r/20201106114952.10032-1-will@kernel.org
    Signed-off-by: Will Deacon

    Will Deacon
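
    A hedged before/after sketch of the check being replaced (kernel
    context; the wrapper function is hypothetical, while the two cap
    helpers and the ARM64_WORKAROUND_1418040 capability are real):

        static bool erratum_1418040_active_sketch(void)
        {
                /* Before: false forever if no boot CPU was affected. */
                /* return cpus_have_const_cap(ARM64_WORKAROUND_1418040); */

                /* After: honours a late-onlined affected CPU. */
                return this_cpu_has_cap(ARM64_WORKAROUND_1418040);
        }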
     

30 Oct, 2020

1 commit

  • When the CONFIG_ASYMMETRIC_AARCH32 option is enabled (EXPERT), the type
    of the ARM64_HAS_32BIT_EL0 capability becomes WEAK_LOCAL_CPU_FEATURE.
    The kernel will now return true for system_supports_32bit_el0() and,
    in do_notify_resume(), check that 32-bit tasks are affined only to
    AArch32-capable CPUs. If the affinity contains a non-capable CPU,
    the tasks will get SIGKILLed. If the last CPU supporting 32-bit is
    offlined, the kernel will SIGKILL any scheduled 32-bit tasks (the
    alternative is to prevent offlining through a new .cpu_disable feature
    entry).

    In addition to the relaxation of the ARM64_HAS_32BIT_EL0 capability,
    this patch factors out the 32-bit cpuinfo and features setting into
    separate functions: __cpuinfo_store_cpu_32bit(),
    init_cpu_32bit_features(). The cpuinfo of the booting CPU
    (boot_cpu_data) is now updated on the first 32-bit capable CPU even if
    it is a secondary one. The ID_AA64PFR0_EL0_64BIT_ONLY feature is relaxed
    to FTR_NONSTRICT and FTR_HIGHER_SAFE when the asymmetric AArch32 support
    is enabled. The compat_elf_hwcaps are only verified for the
    AArch32-capable CPUs to still allow hotplugging AArch64-only CPUs.

    Bug: 168847043
    Reason: Needed for bringup. Revert when upstream patch is available
    Nacked-for-upstream-by: Catalin Marinas
    Cc: Suzuki K Poulose
    Cc: Morten Rasmussen
    Cc: Valentin Schneider
    Cc: Qais Yousef
    Signed-off-by: Catalin Marinas
    Signed-off-by: Qais Yousef
    [Qais: moved affinity handling to a separate patch and fixed up
    docs/naming to match the change]
    Change-Id: I1a9860a883f001ddebb4df9dee7504edf970d593

    Catalin Marinas
     

02 Oct, 2020

1 commit

  • Add userspace support for the Memory Tagging Extension introduced by
    Armv8.5.

    (Catalin Marinas and others)
    * for-next/mte: (30 commits)
    arm64: mte: Fix typo in memory tagging ABI documentation
    arm64: mte: Add Memory Tagging Extension documentation
    arm64: mte: Kconfig entry
    arm64: mte: Save tags when hibernating
    arm64: mte: Enable swap of tagged pages
    mm: Add arch hooks for saving/restoring tags
    fs: Handle intra-page faults in copy_mount_options()
    arm64: mte: ptrace: Add NT_ARM_TAGGED_ADDR_CTRL regset
    arm64: mte: ptrace: Add PTRACE_{PEEK,POKE}MTETAGS support
    arm64: mte: Allow {set,get}_tagged_addr_ctrl() on non-current tasks
    arm64: mte: Restore the GCR_EL1 register after a suspend
    arm64: mte: Allow user control of the generated random tags via prctl()
    arm64: mte: Allow user control of the tag check mode via prctl()
    mm: Allow arm64 mmap(PROT_MTE) on RAM-based files
    arm64: mte: Validate the PROT_MTE request via arch_validate_flags()
    mm: Introduce arch_validate_flags()
    arm64: mte: Add PROT_MTE support to mmap() and mprotect()
    mm: Introduce arch_calc_vm_flag_bits()
    arm64: mte: Tags-aware memcmp_pages() implementation
    arm64: Avoid unnecessary clear_user_page() indirection
    ...

    Will Deacon
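
    As a userspace illustration of the new ABI, a minimal sketch that
    requests a tagged mapping (assumes an arm64 kernel with MTE; the
    PROT_MTE fallback define is the arm64 UAPI value):

        #include <stdio.h>
        #include <sys/mman.h>

        #ifndef PROT_MTE
        #define PROT_MTE 0x20   /* arm64 UAPI value */
        #endif

        int main(void)
        {
                void *p = mmap(NULL, 4096,
                               PROT_READ | PROT_WRITE | PROT_MTE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (p == MAP_FAILED) {
                        perror("mmap(PROT_MTE)");  /* no MTE support */
                        return 1;
                }
                puts("PROT_MTE mapping created");
                return 0;
        }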
     

29 Sep, 2020

2 commits

  • The PR_SPEC_DISABLE_NOEXEC option to the PR_SPEC_STORE_BYPASS prctl()
    allows the SSB mitigation to be enabled only until the next execve(),
    at which point the state will revert back to PR_SPEC_ENABLE and the
    mitigation will be disabled.

    Add support for PR_SPEC_DISABLE_NOEXEC on arm64.

    Reported-by: Anthony Steinhauser
    Signed-off-by: Will Deacon

    Will Deacon
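
    A minimal userspace sketch of the new mode (the fallback constants
    mirror linux/prctl.h in case older headers lack them):

        #include <stdio.h>
        #include <sys/prctl.h>

        #ifndef PR_SET_SPECULATION_CTRL
        #define PR_SET_SPECULATION_CTRL 53
        #define PR_SPEC_STORE_BYPASS    0
        #endif
        #ifndef PR_SPEC_DISABLE_NOEXEC
        #define PR_SPEC_DISABLE_NOEXEC  (1UL << 4)
        #endif

        int main(void)
        {
                /* Mitigation stays on only until the next execve(). */
                if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
                          PR_SPEC_DISABLE_NOEXEC, 0, 0))
                        perror("PR_SPEC_DISABLE_NOEXEC");
                return 0;
        }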
     
  • Rewrite the Spectre-v4 mitigation handling code to follow the same
    approach as that taken by Spectre-v2.

    For now, report to KVM that the system is vulnerable (by forcing
    'ssbd_state' to ARM64_SSBD_UNKNOWN), as this will be cleared up in
    subsequent steps.

    Signed-off-by: Will Deacon

    Will Deacon
     

04 Sep, 2020

4 commits

  • In preparation for ptrace() access to the prctl() value, allow calling
    these functions on non-current tasks.

    Signed-off-by: Catalin Marinas
    Cc: Will Deacon

    Catalin Marinas
     
  • The IRG, ADDG and SUBG instructions insert a random tag in the resulting
    address. Certain tags can be excluded via the GCR_EL1.Exclude bitmap
    when, for example, the user wants a certain colour for freed buffers.
    Since the GCR_EL1 register is not accessible at EL0, extend the
    prctl(PR_SET_TAGGED_ADDR_CTRL) interface to include a 16-bit field in
    the first argument for controlling which tags can be generated by the
    above instructions (an include rather than exclude mask). Note that by
    default all non-zero tags are excluded. This setting is per-thread.

    Signed-off-by: Catalin Marinas
    Cc: Will Deacon

    Catalin Marinas
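
    A hedged userspace sketch of the extended interface: the 16-bit
    field at PR_MTE_TAG_SHIFT is an include mask, so the call below
    asks IRG to generate only tags 1 and 2 (the fallback defines
    mirror linux/prctl.h):

        #include <stdio.h>
        #include <sys/prctl.h>

        #ifndef PR_SET_TAGGED_ADDR_CTRL
        #define PR_SET_TAGGED_ADDR_CTRL 55
        #define PR_TAGGED_ADDR_ENABLE   (1UL << 0)
        #endif
        #ifndef PR_MTE_TAG_SHIFT
        #define PR_MTE_TAG_SHIFT        3
        #endif

        int main(void)
        {
                unsigned long incl = (1UL << 1) | (1UL << 2); /* tags 1, 2 */
                unsigned long ctrl = PR_TAGGED_ADDR_ENABLE |
                                     (incl << PR_MTE_TAG_SHIFT);

                if (prctl(PR_SET_TAGGED_ADDR_CTRL, ctrl, 0, 0, 0))
                        perror("PR_SET_TAGGED_ADDR_CTRL");
                return 0;
        }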
     
  • By default, even if PROT_MTE is set on a memory range, there is no tag
    check fault reporting (SIGSEGV). Introduce a set of options to the
    existing prctl(PR_SET_TAGGED_ADDR_CTRL) to allow user control of the tag
    check fault mode:

    PR_MTE_TCF_NONE - no reporting (default)
    PR_MTE_TCF_SYNC - synchronous tag check fault reporting
    PR_MTE_TCF_ASYNC - asynchronous tag check fault reporting

    These options translate into the corresponding SCTLR_EL1.TCF0 bitfield,
    context-switched by the kernel. Note that the kernel accesses to the
    user address space (e.g. read() system call) are not checked if the user
    thread tag checking mode is PR_MTE_TCF_NONE or PR_MTE_TCF_ASYNC. If the
    tag checking mode is PR_MTE_TCF_SYNC, the kernel makes a best effort to
    check its user address accesses, however it cannot always guarantee this.

    Signed-off-by: Catalin Marinas
    Cc: Will Deacon

    Catalin Marinas
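
    A minimal userspace sketch selecting the synchronous mode described
    above (the fallback defines mirror linux/prctl.h):

        #include <stdio.h>
        #include <sys/prctl.h>

        #ifndef PR_SET_TAGGED_ADDR_CTRL
        #define PR_SET_TAGGED_ADDR_CTRL 55
        #define PR_TAGGED_ADDR_ENABLE   (1UL << 0)
        #endif
        #ifndef PR_MTE_TCF_SYNC
        #define PR_MTE_TCF_SYNC         (1UL << 1)
        #endif

        int main(void)
        {
                /* SCTLR_EL1.TCF0 is switched to "sync" for this thread. */
                if (prctl(PR_SET_TAGGED_ADDR_CTRL,
                          PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC,
                          0, 0, 0))
                        perror("PR_SET_TAGGED_ADDR_CTRL");
                return 0;
        }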
     
  • The Memory Tagging Extension has two modes of notifying a tag check
    fault at EL0, configurable through the SCTLR_EL1.TCF0 field:

    1. Synchronous raising of a Data Abort exception with DFSC 17.
    2. Asynchronous setting of a cumulative bit in TFSRE0_EL1.

    Add the exception handler for the synchronous exception and handling of
    the asynchronous TFSRE0_EL1.TF0 bit setting via a new TIF flag in
    do_notify_resume().

    On a tag check failure in user-space, whether synchronous or
    asynchronous, a SIGSEGV will be raised on the faulting thread.

    Signed-off-by: Vincenzo Frascino
    Co-developed-by: Catalin Marinas
    Signed-off-by: Catalin Marinas
    Cc: Will Deacon

    Vincenzo Frascino
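
    A userspace sketch of observing the resulting SIGSEGV; the si_code
    values are the arm64 MTE ones (guarded in case headers lack them),
    and fprintf() in a signal handler is tolerable only for a demo:

        #include <signal.h>
        #include <stdio.h>
        #include <stdlib.h>

        #ifndef SEGV_MTESERR
        #define SEGV_MTEAERR 8   /* asynchronous tag check fault */
        #define SEGV_MTESERR 9   /* synchronous tag check fault  */
        #endif

        static void handler(int sig, siginfo_t *si, void *uc)
        {
                if (si->si_code == SEGV_MTESERR)
                        fprintf(stderr, "sync tag fault at %p\n",
                                si->si_addr);
                _exit(1);
        }

        int main(void)
        {
                struct sigaction sa = { 0 };

                sa.sa_sigaction = handler;
                sa.sa_flags = SA_SIGINFO;
                sigaction(SIGSEGV, &sa, NULL);
                /* ... a mismatched-tag access would trigger it ... */
                return 0;
        }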
     

26 Aug, 2020

1 commit

  • Remove trace_cpu_idle() from the arch_cpu_idle() implementations and
    put it in the generic code, right before disabling RCU. Gets rid of
    more trace_*_rcuidle() users.

    Signed-off-by: Peter Zijlstra (Intel)
    Reviewed-by: Steven Rostedt (VMware)
    Reviewed-by: Thomas Gleixner
    Acked-by: Rafael J. Wysocki
    Tested-by: Marco Elver
    Link: https://lkml.kernel.org/r/20200821085348.428433395@infradead.org

    Peter Zijlstra
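
    A hedged sketch of the resulting call order in the generic idle
    path (illustrative shape only, not the literal upstream code):

        static void do_idle_sketch(void)
        {
                trace_cpu_idle(1, smp_processor_id()); /* RCU watching */
                rcu_idle_enter();
                arch_cpu_idle();                 /* noinstr territory */
                rcu_idle_exit();
                trace_cpu_idle(PWR_EVENT_EXIT, smp_processor_id());
        }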
     

21 Aug, 2020

1 commit

  • Instead of dealing with erratum 1418040 on each entry and exit,
    let's move the handling to __switch_to() instead, which has
    several advantages:

    - It can be applied when it matters (switching between 32- and
    64-bit tasks).
    - It is written in C (yay!)
    - It can rely on static keys rather than alternatives

    Signed-off-by: Marc Zyngier
    Tested-by: Sai Prakash Ranjan
    Reviewed-by: Stephen Boyd
    Acked-by: Will Deacon
    Link: https://lore.kernel.org/r/20200731173824.107480-2-maz@kernel.org
    Signed-off-by: Catalin Marinas

    Marc Zyngier
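
    A hedged kernel-context sketch of the shape of that __switch_to()
    hook (the static key name is hypothetical; the helpers are real):

        static void erratum_1418040_switch_sketch(struct task_struct *prev,
                                                  struct task_struct *next)
        {
                if (!static_branch_unlikely(&erratum_1418040_key_sketch))
                        return;

                /* Only act when switching between 32- and 64-bit tasks. */
                if (is_compat_thread(task_thread_info(prev)) ==
                    is_compat_thread(task_thread_info(next)))
                        return;

                /* ... toggle EL0 access to the virtual counter here ... */
        }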
     

13 Aug, 2020

1 commit

  • All users of arm_pm_restart() have been converted to use the kernel
    restart handler.

    Acked-by: Arnd Bergmann
    Reviewed-by: Wolfram Sang
    Tested-by: Wolfram Sang
    Acked-by: Catalin Marinas
    Signed-off-by: Guenter Roeck
    Signed-off-by: Thierry Reding

    Bug: 163752725
    Link: https://lore.kernel.org/lkml/20191015145147.1106247-6-thierry.reding@gmail.com/
    Change-Id: I2c44db9cf885b9b36c8fdd82d53d9730a8cae738
    Signed-off-by: Elliot Berman

    Guenter Roeck
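
    For drivers, the replacement interface is a reboot notifier; a
    minimal sketch (function and variable names are illustrative):

        #include <linux/notifier.h>
        #include <linux/reboot.h>

        static int soc_restart_sketch(struct notifier_block *nb,
                                      unsigned long action, void *data)
        {
                /* ... write the SoC reset register here ... */
                return NOTIFY_DONE;
        }

        static struct notifier_block soc_restart_nb_sketch = {
                .notifier_call = soc_restart_sketch,
                .priority = 128,   /* documented default priority */
        };

        /* in probe(): register_restart_handler(&soc_restart_nb_sketch); */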
     

14 Jul, 2020

1 commit

  • - To use FPSIMD in a kernel task, a vendor hook call is needed to
    save/restore FPSIMD state at scheduling time.
    - ANDROID_VENDOR_DATA is added to thread_struct.
    - The vendor hook is called when a thread is switched, to
    save/restore the FPSIMD states
    (trace_android_vh_is_fpsimd_save(prev, next))

    Bug: 149632552

    Signed-off-by: Wooyeon Kim
    Change-Id: I853e1b6a9a51e24f770423bbc39fdd84265d78fc

    Wooyeon Kim
     

05 Jul, 2020

1 commit

  • Now that HAVE_COPY_THREAD_TLS has been removed, rename copy_thread_tls()
    back to simply copy_thread(). It's a simpler name, and doesn't imply that
    only tls is copied here. This finishes an outstanding chunk of internal
    process creation work since we've added clone3().

    Cc: linux-arch@vger.kernel.org
    Acked-by: Thomas Bogendoerfer
    Acked-by: Stafford Horne
    Acked-by: Greentime Hu
    Acked-by: Geert Uytterhoeven
    Reviewed-by: Kees Cook
    Signed-off-by: Christian Brauner

    Christian Brauner
     

10 Jun, 2020

1 commit

  • Currently, the log level of show_stack() depends on the platform
    implementation. This creates situations where the headers are printed
    with a lower or higher log level than the stack trace itself (depending
    on the platform or the user).

    Furthermore, it forces the logic decision from the user onto the
    architecture side. As a result, some users such as sysrq/kdb/etc resort
    to tricks with temporarily raising console_loglevel while printing their
    messages, which not only may print unwanted messages from other CPUs,
    but may also omit printing entirely in the unlucky case where the
    printk() was deferred.

    Introducing a log-level parameter and KERN_UNSUPPRESSED [1] seems an
    easier approach than introducing more printk buffers. It will also
    consolidate the printing with the headers.

    Add log level argument to dump_backtrace() as a preparation for
    introducing show_stack_loglvl().

    [1]: https://lore.kernel.org/lkml/20190528002412.1625-1-dima@arista.com/T/#u

    Signed-off-by: Dmitry Safonov
    Signed-off-by: Andrew Morton
    Cc: Catalin Marinas
    Cc: Russell King
    Cc: Will Deacon
    Link: http://lkml.kernel.org/r/20200418201944.482088-10-dima@arista.com
    Signed-off-by: Linus Torvalds

    Dmitry Safonov
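
    A hedged sketch of the parameterised form the series introduces,
    where the caller chooses the level for the whole backtrace:

        static void dump_backtrace_sketch(struct pt_regs *regs,
                                          struct task_struct *tsk,
                                          const char *loglvl)
        {
                printk("%sCall trace:\n", loglvl); /* e.g. KERN_ERR */
                /* ... walk the stack, printing frames with loglvl ... */
        }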
     

05 May, 2020

1 commit

  • Merge in user support for Branch Target Identification, which narrowly
    missed the cut for 5.7 after a late ABI concern.

    * for-next/bti-user:
    arm64: bti: Document behaviour for dynamically linked binaries
    arm64: elf: Fix allnoconfig kernel build with !ARCH_USE_GNU_PROPERTY
    arm64: BTI: Add Kconfig entry for userspace BTI
    mm: smaps: Report arm64 guarded pages in smaps
    arm64: mm: Display guarded pages in ptdump
    KVM: arm64: BTI: Reset BTYPE when skipping emulated instructions
    arm64: BTI: Reset BTYPE when skipping emulated instructions
    arm64: traps: Shuffle code to eliminate forward declarations
    arm64: unify native/compat instruction skipping
    arm64: BTI: Decode BYTPE bits when printing PSTATE
    arm64: elf: Enable BTI at exec based on ELF program properties
    elf: Allow arch to tweak initial mmap prot flags
    arm64: Basic Branch Target Identification support
    ELF: Add ELF program property parsing support
    ELF: UAPI and Kconfig additions for ELF program properties

    Will Deacon
     

01 Apr, 2020

1 commit

  • Pull arm64 updates from Catalin Marinas:
    "The bulk is in-kernel pointer authentication, activity monitors and
    lots of asm symbol annotations. I also queued the sys_mremap() patch
    commenting the asymmetry in the address untagging.

    Summary:

    - In-kernel Pointer Authentication support (previously only offered
    to user space).

    - ARM Activity Monitors (AMU) extension support allowing better CPU
    utilisation numbers for the scheduler (frequency invariance).

    - Memory hot-remove support for arm64.

    - Lots of asm annotations (SYM_*) in preparation for the in-kernel
    Branch Target Identification (BTI) support.

    - arm64 perf updates: ARMv8.5-PMU 64-bit counters, refactoring the
    PMU init callbacks, support for new DT compatibles.

    - IPv6 header checksum optimisation.

    - Fixes: SDEI (software delegated exception interface) double-lock on
    hibernate with shared events.

    - Minor clean-ups and refactoring: cpu_ops accessor,
    cpu_do_switch_mm() converted to C, cpufeature finalisation helper.

    - sys_mremap() comment explaining the asymmetric address untagging
    behaviour"

    * tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (81 commits)
    mm/mremap: Add comment explaining the untagging behaviour of mremap()
    arm64: head: Convert install_el2_stub to SYM_INNER_LABEL
    arm64: Introduce get_cpu_ops() helper function
    arm64: Rename cpu_read_ops() to init_cpu_ops()
    arm64: Declare ACPI parking protocol CPU operation if needed
    arm64: move kimage_vaddr to .rodata
    arm64: use mov_q instead of literal ldr
    arm64: Kconfig: verify binutils support for ARM64_PTR_AUTH
    lkdtm: arm64: test kernel pointer authentication
    arm64: compile the kernel with ptrauth return address signing
    kconfig: Add support for 'as-option'
    arm64: suspend: restore the kernel ptrauth keys
    arm64: __show_regs: strip PAC from lr in printk
    arm64: unwind: strip PAC from kernel addresses
    arm64: mask PAC bits of __builtin_return_address
    arm64: initialize ptrauth keys for kernel booting task
    arm64: initialize and switch ptrauth kernel keys
    arm64: enable ptrauth earlier
    arm64: cpufeature: handle conflicts based on capability
    arm64: cpufeature: Move cpu capability helpers inside C file
    ...

    Linus Torvalds
     

25 Mar, 2020

3 commits

  • Use `reboot_cpu` variable instead of hardcoding 0 as the reboot cpu in
    machine_shutdown().

    Signed-off-by: Qais Yousef
    Signed-off-by: Thomas Gleixner
    Acked-by: Catalin Marinas
    Cc: Will Deacon
    Link: https://lkml.kernel.org/r/20200323135110.30522-8-qais.yousef@arm.com

    Qais Yousef
     
  • disable_nonboot_cpus() is not safe to use when doing machine_down(),
    because it relies on freeze_secondary_cpus() which in turn is
    a suspend/resume related freeze and could abort if the logic detects any
    pending activities that can prevent finishing the offlining process.

    Besides, disable_nonboot_cpus() depends on CONFIG_PM_SLEEP_SMP, which
    is an orthogonal config to rely on for ensuring this function works
    correctly.

    Signed-off-by: Qais Yousef
    Signed-off-by: Thomas Gleixner
    Acked-by: Catalin Marinas
    Cc: Will Deacon
    Link: https://lkml.kernel.org/r/20200323135110.30522-7-qais.yousef@arm.com

    Qais Yousef
     
  • For dynamically linked binaries the interpreter is responsible for setting
    PROT_BTI on everything except itself. The dynamic linker needs to be aware
    of PROT_BTI, for example in order to avoid dropping that when marking
    executable pages read only after doing relocations, and doing everything
    in userspace ensures that we don't get any issues due to divergences in
    behaviour between the kernel and dynamic linker within a single executable.
    Add a comment to the code indicating that this is intentional, to help
    people trying to understand what's going on.

    Signed-off-by: Mark Brown
    Signed-off-by: Catalin Marinas

    Mark Brown
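
    A userspace sketch of what this implies for a dynamic linker when
    it re-protects a relocated text page (the PROT_BTI fallback is the
    arm64 UAPI value; the helper name is illustrative):

        #include <stdio.h>
        #include <sys/mman.h>

        #ifndef PROT_BTI
        #define PROT_BTI 0x10   /* arm64 UAPI value */
        #endif

        /* Seal a page read-only+exec without dropping PROT_BTI. */
        static int seal_text_sketch(void *page, size_t len)
        {
                if (mprotect(page, len,
                             PROT_READ | PROT_EXEC | PROT_BTI)) {
                        perror("mprotect(PROT_BTI)");
                        return -1;
                }
                return 0;
        }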
     

18 Mar, 2020

3 commits

  • lr is printed with %pS which will try to find an entry in kallsyms.
    After enabling pointer authentication, this match will fail due to
    the PAC present in lr.

    Strip PAC from the lr to display the correct symbol name.

    Suggested-by: James Morse
    Signed-off-by: Amit Daniel Kachhap
    Reviewed-by: Vincenzo Frascino
    Acked-by: Catalin Marinas
    Signed-off-by: Catalin Marinas

    Amit Daniel Kachhap
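
    A hedged kernel-context sketch: ptrauth_strip_insn_pac() is the
    real arm64 helper, while the surrounding function is illustrative:

        static void show_lr_sketch(struct pt_regs *regs)
        {
                unsigned long lr = regs->regs[30];

                /* Clear the PAC bits so %pS can match kallsyms again. */
                printk("lr : %pS\n", (void *)ptrauth_strip_insn_pac(lr));
        }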
     
  • Set up keys to use pointer authentication within the kernel. The kernel
    will be compiled with APIAKey instructions, the other keys are currently
    unused. Each task is given its own APIAKey, which is initialized during
    fork. The key is changed during context switch and on kernel entry from
    EL0.

    The keys for idle threads need to be set before calling any C functions,
    because it is not possible to enter and exit a function with different
    keys.

    Reviewed-by: Kees Cook
    Reviewed-by: Catalin Marinas
    Reviewed-by: Vincenzo Frascino
    Signed-off-by: Kristina Martsenko
    [Amit: Modified secondary cores key structure, comments]
    Signed-off-by: Amit Daniel Kachhap
    Signed-off-by: Catalin Marinas

    Kristina Martsenko
     
  • As we're going to enable pointer auth within the kernel and use a
    different APIAKey for the kernel itself, move the user APIAKey switch
    to EL0 exception return.

    The other 4 keys could remain switched during task switch, but are also
    moved to keep things consistent.

    Reviewed-by: Kees Cook
    Reviewed-by: James Morse
    Reviewed-by: Vincenzo Frascino
    Signed-off-by: Kristina Martsenko
    [Amit: commit msg, re-positioned the patch, comments]
    Signed-off-by: Amit Daniel Kachhap
    Signed-off-by: Catalin Marinas

    Kristina Martsenko
     

17 Mar, 2020

2 commits

  • The current code to print PSTATE symbolically when generating
    backtraces etc. does not include the BTYPE field used by Branch
    Target Identification.

    So, decode BTYPE and print it too.

    In the interests of human-readability, print the classes of BTI
    matched. The symbolic notation, BTYPE (PSTATE[11:10]) and
    permitted classes of subsequent instruction are:

    -- (BTYPE=0b00): any insn
    jc (BTYPE=0b01): BTI jc, BTI j, BTI c, PACIxSP
    -c (BTYPE=0b10): BTI jc, BTI c, PACIxSP
    j- (BTYPE=0b11): BTI jc, BTI j

    Signed-off-by: Mark Brown
    Signed-off-by: Dave Martin
    Reviewed-by: Kees Cook
    Signed-off-by: Catalin Marinas

    Dave Martin
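
    A small sketch of decoding those two PSTATE bits into the notation
    above (the shift and mask match the architectural BTYPE field; the
    function name is illustrative):

        #define PSR_BTYPE_SHIFT 10
        #define PSR_BTYPE_MASK  (3UL << PSR_BTYPE_SHIFT)

        static const char *btype_str_sketch(unsigned long pstate)
        {
                static const char *const s[] = { "--", "jc", "-c", "j-" };

                return s[(pstate & PSR_BTYPE_MASK) >> PSR_BTYPE_SHIFT];
        }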
     
  • For BTI protection to be as comprehensive as possible, it is
    desirable to have BTI enabled from process startup. If this is not
    done, the process must use mprotect() to enable BTI for each of its
    executable mappings, but this is painful to do in the libc startup
    code. It's simpler and more sound to have the kernel do it
    instead.

    To this end, detect BTI support in the executable (or ELF
    interpreter, as appropriate), via the
    NT_GNU_PROGRAM_PROPERTY_TYPE_0 note, and tweak the initial prot
    flags for the process' executable pages to include PROT_BTI as
    appropriate.

    Signed-off-by: Mark Brown
    Signed-off-by: Dave Martin
    Reviewed-by: Kees Cook
    Signed-off-by: Catalin Marinas

    Dave Martin
     

10 Feb, 2020

2 commits

  • When all CPUs in the system implement the SSBS extension, the SSBS field
    in PSTATE is the definitive indication of the mitigation state. Further,
    when the CPUs implement the SSBS manipulation instructions (advertised
    to userspace via an HWCAP), EL0 can toggle the SSBS field directly and
    so we cannot rely on any shadow state such as TIF_SSBD at all.

    Avoid forcing the SSBS field in context-switch on such a system, and
    simply rely on the PSTATE register instead.

    Cc:
    Cc: Catalin Marinas
    Cc: Srinivas Ramana
    Fixes: cbdf8a189a66 ("arm64: Force SSBS on context switch")
    Reviewed-by: Marc Zyngier
    Signed-off-by: Will Deacon

    Will Deacon
     
  • Use shared sysctl variables for zero and one constants, as in
    commit eec4844fae7c ("proc/sysctl: add shared variables for range check")

    Fixes: 63f0c6037965 ("arm64: Introduce prctl() options to control the tagged user addresses ABI")
    Signed-off-by: Matteo Croce
    Signed-off-by: Will Deacon

    Matteo Croce
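
    A hedged sketch of the pattern the fix applies, pointing the range
    bounds of a ctl_table entry at the shared constants from
    linux/sysctl.h instead of file-local zero/one variables (.data is
    omitted here for brevity):

        static struct ctl_table tagged_addr_sysctl_sketch[] = {
                {
                        .procname     = "tagged_addr_disabled",
                        .mode         = 0644,
                        .maxlen       = sizeof(int),
                        .proc_handler = proc_dointvec_minmax,
                        .extra1       = SYSCTL_ZERO, /* shared constants */
                        .extra2       = SYSCTL_ONE,
                },
                { }
        };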