20 Apr, 2018

5 commits


18 Apr, 2018

1 commit


17 Apr, 2018

1 commit

  • On the i.MX7ULP EVK Rev.B board, the backlight brightness driver circuit
    is updated: an RC filter is added on the MP3301's EN pin, so the PWM
    frequency should be changed to 20 kHz. On the EN pin, a DC voltage from
    0.7V to 1.4V controls the LED current from 0% to 100%, so the backlight
    brightness levels also need to be updated.

    Signed-off-by: Bai Ping
    Reviewed-by: Anson Huang
    (cherry picked from commit 82555e15a5f958c09492d0103425dc30bc7cd927)

    Bai Ping
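
    A back-of-the-envelope sketch of the resulting brightness-to-duty
    mapping, assuming the PWM output swings between 0 V and a 3.3 V rail
    (an assumption, not stated above) and that the RC filter simply
    averages the 20 kHz PWM into a DC level on EN:

        #include <stdio.h>

        /* Illustrative values only: EN control window and assumed PWM rail. */
        #define EN_V_MIN   0.7   /* 0%   LED current */
        #define EN_V_MAX   1.4   /* 100% LED current */
        #define PWM_V_HIGH 3.3   /* assumed logic-high level of the PWM pin */

        /* PWM duty cycle (0.0..1.0) needed for a brightness of 0..100%. */
        static double duty_for_brightness(double percent)
        {
                double v_en = EN_V_MIN + (EN_V_MAX - EN_V_MIN) * percent / 100.0;
                return v_en / PWM_V_HIGH;
        }

        int main(void)
        {
                /* ~21% duty for 0%, ~42% duty for 100% under these assumptions. */
                printf("%.3f %.3f\n", duty_for_brightness(0), duty_for_brightness(100));
                return 0;
        }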
     

13 Apr, 2018

3 commits

  • The default display interface on the i.MX7ULP EVK board is the HDMI
    interface, and a hardware rework is required to support the MIPI
    panel. To match the current board design, add the HDMI node in
    imx7ulp-evk.dts and create a new file named imx7ulp-evk-mipi.dts.

    Signed-off-by: Shenwei Wang
    Reviewed-by: Andy Duan

    Shenwei Wang
     
  • commit a56e6e190015 ("MLK-17961 dts: imx7ulp-evk: add non-removable
    property for wifi sdio") added the non-removable property. The SD1 slot
    on the base board shares the same usdhc with the wifi module, and the
    SD1 slot supports card detect, so the non-removable property needs to
    be removed for the SD1 slot.

    Signed-off-by: Haibo Chen
    Reviewed-by: Andy Duan
    (cherry picked from commit 2a40d8123aff4b4fb7a5cbf286d0c308a42c2fc7)

    Haibo Chen
     
  • This patch fixes a resume failure in freeze suspend mode on i.MX7ULP
    ("echo freeze > /sys/power/state") when pressing the ONOFF key or
    enabling RTC alarm wakeup. In freeze mode, the kernel can only be woken
    up by drivers that register a wakeup source, such as via
    'device_init_wakeup' or 'irq_set_irq_wake'; otherwise, the kernel keeps
    waiting until freeze_wake() is called. Unfortunately, our NMI interrupt,
    which is used to wake up the A7 from the M4, is not a common device and
    requests its irq with 'IRQF_NO_SUSPEND', which means freeze_wake() never
    gets a chance to run when woken by any event from the M4 such as RTC or
    ONOFF. In this case, use pm_system_wakeup() instead in the NMI interrupt
    handler to trigger freeze_wake() directly.

    Signed-off-by: Robin Gong
    Reviewed-by: Anson Huang

    Robin Gong
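
    A minimal sketch of the pattern described above (kernel context, with a
    hypothetical handler name); pm_system_wakeup() is the stock helper from
    include/linux/suspend.h:

        #include <linux/interrupt.h>
        #include <linux/suspend.h>

        /* Hypothetical name for the M4 -> A7 notification handler. */
        static irqreturn_t imx7ulp_a7_nmi_handler(int irq, void *dev_id)
        {
                /* ... decode the event from the M4 (RTC alarm, ONOFF key, ...) ... */

                /*
                 * The irq is requested with IRQF_NO_SUSPEND, so the generic
                 * wakeup-IRQ path never calls freeze_wake() for us. Abort the
                 * suspend-to-idle wait explicitly instead.
                 */
                pm_system_wakeup();

                return IRQ_HANDLED;
        }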
     

12 Apr, 2018

30 commits

  • Add poweron key support on the i.MX7ULP EVK board, since the M4 takes
    over SNVS on the B0 chip.

    Signed-off-by: Robin Gong
    Reviewed-by: Anson Huang

    Robin Gong
     
  • Add the non-removable property for usdhc1, which is used as the Murata
    1PJ wifi sdio interface, meaning the wifi card is always present.

    Signed-off-by: Shenwei Wang
    Signed-off-by: Fugang Duan
    Tested-by: Fugang Duan

    Fugang Duan
     
  • commit 3a0a397ff5ff upstream.

    Now that we've standardised on SMCCC v1.1 to perform the branch
    prediction invalidation, let's drop the previous band-aid.
    If vendors haven't updated their firmware to do SMCCC 1.1, they
    haven't updated PSCI either, so we don't lose anything.

    Tested-by: Ard Biesheuvel
    Signed-off-by: Marc Zyngier
    Signed-off-by: Catalin Marinas
    Signed-off-by: Will Deacon
    Signed-off-by: Alex Shi

    Conflicts:
    no falkor/thunderx2/vulcan in arch/arm64/kernel/cpu_errata.c

    Marc Zyngier
     
  • commit b092201e0020 upstream.

    Add the detection and runtime code for ARM_SMCCC_ARCH_WORKAROUND_1.
    It is lovely. Really.

    Tested-by: Ard Biesheuvel
    Signed-off-by: Marc Zyngier
    Signed-off-by: Catalin Marinas
    Signed-off-by: Will Deacon
    Signed-off-by: Alex Shi

    Conflicts:
    no qcom hyp functions in
    arch/arm64/kernel/bpi.S
    arch/arm64/kernel/cpu_errata.c

    Marc Zyngier
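
    A condensed sketch of the detection step, assuming the SMCCC 1.1
    primitives (arm_smccc_1_1_hvc/_smc) and the psci_ops.conduit /
    smccc_version plumbing added elsewhere in this series:

        #include <linux/arm-smccc.h>
        #include <linux/psci.h>

        static bool smccc_arch_workaround_1_available(void)
        {
                struct arm_smccc_res res;

                if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
                        return false;

                switch (psci_ops.conduit) {
                case PSCI_CONDUIT_HVC:
                        arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
                                          ARM_SMCCC_ARCH_WORKAROUND_1, &res);
                        break;
                case PSCI_CONDUIT_SMC:
                        arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
                                          ARM_SMCCC_ARCH_WORKAROUND_1, &res);
                        break;
                default:
                        return false;
                }

                /* A negative a0 means the workaround is not implemented. */
                return (int)res.a0 >= 0;
        }

    The runtime side then installs the matching HVC- or SMC-based hardening
    callback for the affected CPU.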
     
  • commit f72af90c3783 upstream.

    We want SMCCC_ARCH_WORKAROUND_1 to be fast. As fast as possible.
    So let's intercept it as early as we can by testing for the
    function call number as soon as we've identified a HVC call
    coming from the guest.

    Tested-by: Ard Biesheuvel
    Reviewed-by: Christoffer Dall
    Signed-off-by: Marc Zyngier
    Signed-off-by: Catalin Marinas
    Signed-off-by: Will Deacon
    Signed-off-by: Alex Shi

    Marc Zyngier
     
  • commit 6167ec5c9145 upstream.

    A new feature of SMCCC 1.1 is that it offers firmware-based CPU
    workarounds. In particular, SMCCC_ARCH_WORKAROUND_1 provides
    BP hardening for CVE-2017-5715.

    If the host has some mitigation for this issue, report that
    we deal with it using SMCCC_ARCH_WORKAROUND_1, as we apply the
    host workaround on every guest exit.

    Tested-by: Ard Biesheuvel
    Reviewed-by: Christoffer Dall
    Signed-off-by: Marc Zyngier
    Signed-off-by: Catalin Marinas
    Signed-off-by: Will Deacon
    Signed-off-by: Alex Shi

    Conflicts:
    no sve support in arch/arm64/include/asm/kvm_host.h
    mv changes from virt/kvm/arm/psci.c to arch/arm/kvm/psci.c
    using cpus_have_cap instead of cpus_have_const_cap

    Marc Zyngier
     
  • commit a4097b351118 upstream.

    We're about to need kvm_psci_version in HYP too. So let's turn it
    into a static inline, and pass the kvm structure as a second
    parameter (so that HYP can do a kern_hyp_va on it).

    Tested-by: Ard Biesheuvel
    Reviewed-by: Christoffer Dall
    Signed-off-by: Marc Zyngier
    Signed-off-by: Catalin Marinas
    Signed-off-by: Will Deacon
    Signed-off-by: Alex Shi

    Conflicts:
    mv changes from virt/kvm/arm/psci.c to arch/arm/kvm/psci.c

    Marc Zyngier
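
    A sketch of the new shape as it appears upstream (KVM_ARM_PSCI_LATEST
    comes from the PSCI 1.0 patch further down this list); the kvm argument
    is unused here but lets the HYP copy run kern_hyp_va() on it:

        /* include/kvm/arm_psci.h (sketch) */
        static inline int kvm_psci_version(struct kvm_vcpu *vcpu, struct kvm *kvm)
        {
                /* PSCI 0.2+ guests all get the latest implemented revision. */
                if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features))
                        return KVM_ARM_PSCI_LATEST;

                return KVM_ARM_PSCI_0_1;
        }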
     
  • commit 90348689d500 upstream.

    For those CPUs that require PSCI to perform a BP invalidation,
    going all the way to the PSCI code for not much is a waste of
    precious cycles. Let's terminate that call as early as possible.

    Signed-off-by: Marc Zyngier
    Signed-off-by: Will Deacon
    Signed-off-by: Catalin Marinas
    Signed-off-by: Alex Shi

    Marc Zyngier
     
  • commit 09e6be12effd upstream.

    The new SMC Calling Convention (v1.1) allows for a reduced overhead
    when calling into the firmware, and provides a new feature discovery
    mechanism.

    Make it visible to KVM guests.

    Tested-by: Ard Biesheuvel
    Reviewed-by: Christoffer Dall
    Signed-off-by: Marc Zyngier
    Signed-off-by: Catalin Marinas
    Signed-off-by: Will Deacon
    Signed-off-by: Alex Shi

    Conflicts:
    mv change from virt/kvm/arm/psci.c to arch/arm/kvm/psci.c

    Marc Zyngier
     
  • commit 58e0b2239a4d upstream.

    PSCI 1.0 can be trivially implemented by providing the FEATURES
    call on top of PSCI 0.2 and returning 1.0 as the PSCI version.

    We happily ignore everything else, as they are either optional or
    are clarifications that do not require any additional change.

    PSCI 1.0 is now the default until we decide to add a userspace
    selection API.

    Reviewed-by: Christoffer Dall
    Tested-by: Ard Biesheuvel
    Signed-off-by: Marc Zyngier
    Signed-off-by: Catalin Marinas
    Signed-off-by: Will Deacon
    Signed-off-by: Alex Shi

    Conflicts:
    mv changes from virt/kvm/arm/psci.c to arch/arm/kvm/psci.c

    Marc Zyngier
     
  • commit 84684fecd7ea upstream.

    Instead of open coding the accesses to the various registers,
    let's add explicit SMCCC accessors.

    Reviewed-by: Christoffer Dall
    Tested-by: Ard Biesheuvel
    Signed-off-by: Marc Zyngier
    Signed-off-by: Catalin Marinas
    Signed-off-by: Will Deacon
    Signed-off-by: Alex Shi

    Conflicts:
    mv change from virt/kvm/arm/psci.c to arch/arm/kvm/psci.c

    Marc Zyngier
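
    Roughly, the accessors wrap vcpu_get_reg()/vcpu_set_reg() on x0-x3; a
    sketch of the kind of helpers the patch adds:

        /* arch/arm/kvm/psci.c context (sketch) */
        static unsigned long smccc_get_function(struct kvm_vcpu *vcpu)
        {
                return vcpu_get_reg(vcpu, 0);
        }

        static unsigned long smccc_get_arg1(struct kvm_vcpu *vcpu)
        {
                return vcpu_get_reg(vcpu, 1);
        }

        static void smccc_set_retval(struct kvm_vcpu *vcpu,
                                     unsigned long a0, unsigned long a1,
                                     unsigned long a2, unsigned long a3)
        {
                vcpu_set_reg(vcpu, 0, a0);
                vcpu_set_reg(vcpu, 1, a1);
                vcpu_set_reg(vcpu, 2, a2);
                vcpu_set_reg(vcpu, 3, a3);
        }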
     
  • commit d0a144f12a7c upstream.

    As we're about to trigger a PSCI version explosion, it doesn't
    hurt to introduce a PSCI_VERSION helper that is going to be
    used everywhere.

    Reviewed-by: Christoffer Dall
    Tested-by: Ard Biesheuvel
    Signed-off-by: Marc Zyngier
    Signed-off-by: Catalin Marinas
    Signed-off-by: Will Deacon
    Signed-off-by: Alex Shi

    Conflicts:
    mv change from virt/kvm/arm/psci.c to arch/arm/kvm/psci.c

    Marc Zyngier
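
    The helper is pure bit packing: major version in the upper 16 bits,
    minor in the lower 16. A self-contained illustration of the arithmetic
    (the kernel's definitions live in include/uapi/linux/psci.h):

        #include <stdio.h>

        #define PSCI_VERSION_MAJOR_SHIFT  16
        #define PSCI_VERSION_MINOR_MASK   ((1U << PSCI_VERSION_MAJOR_SHIFT) - 1)
        #define PSCI_VERSION_MAJOR_MASK   ~PSCI_VERSION_MINOR_MASK
        #define PSCI_VERSION(maj, min)                                             \
                ((((maj) << PSCI_VERSION_MAJOR_SHIFT) & PSCI_VERSION_MAJOR_MASK) | \
                 ((min) & PSCI_VERSION_MINOR_MASK))

        int main(void)
        {
                printf("PSCI 0.2 -> 0x%x\n", PSCI_VERSION(0, 2)); /* 0x2     */
                printf("PSCI 1.0 -> 0x%x\n", PSCI_VERSION(1, 0)); /* 0x10000 */
                return 0;
        }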
     
  • commit 1a2fb94e6a77 upstream.

    As we're about to update the PSCI support, and because I'm lazy,
    let's move the PSCI include file to include/kvm so that both
    ARM architectures can find it.

    Acked-by: Christoffer Dall
    Tested-by: Ard Biesheuvel
    Signed-off-by: Marc Zyngier
    Signed-off-by: Catalin Marinas
    Signed-off-by: Will Deacon
    Signed-off-by: Alex Shi

    Conflicts:
    need kvm/arm_psci.h in files:
    arch/arm64/kvm/handle_exit.c
    arch/arm/kvm/psci.c and arch/arm/kvm/arm.c
    no virt/kvm/arm/arm.c and virt/kvm/arm/psci.c

    Marc Zyngier
     
  • commit f5115e8869e1 upstream.

    When handling an SMC trap, the "preferred return address" is set
    to that of the SMC, and not the next PC (which is a departure from
    the behaviour of an SMC that isn't trapped).

    Increment PC in the handler, as the guest is otherwise forever
    stuck...

    Cc: stable@vger.kernel.org
    Fixes: acfb3b883f6d ("arm64: KVM: Fix SMCCC handling of unimplemented SMC/HVC calls")
    Reviewed-by: Christoffer Dall
    Tested-by: Ard Biesheuvel
    Signed-off-by: Marc Zyngier
    Signed-off-by: Catalin Marinas
    Signed-off-by: Will Deacon
    Signed-off-by: Alex Shi

    Marc Zyngier
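
    The fix amounts to failing the call and then skipping the trapped SMC
    so the guest resumes at the following instruction; a sketch of the
    resulting handler in arch/arm64/kvm/handle_exit.c:

        static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
        {
                /* Guest SMCs are not forwarded anywhere: fail the call... */
                vcpu_set_reg(vcpu, 0, ~0UL);
                /* ...and advance PC past the SMC, which the trap did not do. */
                kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
                return 1;
        }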
     
  • commit aa6acde65e03 upstream.

    Cortex-A57, A72, A73 and A75 are susceptible to branch predictor aliasing
    and can theoretically be attacked by malicious code.

    This patch implements a PSCI-based mitigation for these CPUs when available.
    The call into firmware will invalidate the branch predictor state, preventing
    any malicious entries from affecting other victim contexts.

    Co-developed-by: Marc Zyngier
    Signed-off-by: Will Deacon
    Signed-off-by: Catalin Marinas
    Signed-off-by: Alex Shi

    Conflicts:
    no falkor in arch/arm64/kernel/cpu_errata.c

    Will Deacon
     
  • commit 30d88c0e3ace upstream.

    It is possible to take an IRQ from EL0 following a branch to a kernel
    address in such a way that the IRQ is prioritised over the instruction
    abort. Whilst an attacker would need to get the stars to align here,
    it might be sufficient with enough calibration, so perform BP hardening
    in the rare case that we see a kernel address in the ELR when handling
    an IRQ from EL0.

    Reported-by: Dan Hettena
    Reviewed-by: Marc Zyngier
    Signed-off-by: Will Deacon
    Signed-off-by: Catalin Marinas
    Signed-off-by: Alex Shi

    Will Deacon
     
  • commit 5dfc6ed27710 upstream.

    Software-step and PC alignment fault exceptions have higher priority than
    instruction abort exceptions, so apply the BP hardening hooks there too
    if the user PC appears to reside in kernel space.

    Reported-by: Dan Hettena
    Reviewed-by: Marc Zyngier
    Signed-off-by: Will Deacon
    Signed-off-by: Catalin Marinas
    Signed-off-by: Alex Shi

    Conflicts:
    expand enable_da_f to 'msr daifclr, #(8 | 4 | 1)'
    in arch/arm64/kernel/entry.S

    Will Deacon
     
  • commit 6840bdd73d07 upstream.

    Now that we have per-CPU vectors, let's plug them into the KVM/arm64 code.

    Signed-off-by: Marc Zyngier
    Signed-off-by: Will Deacon
    Signed-off-by: Alex Shi

    Conflicts:
    mv changes from virt/kvm/arm/arm.c to arch/arm/kvm/arm.c

    Marc Zyngier
     
  • commit a8e4c0a919ae upstream.

    We call arm64_apply_bp_hardening() from post_ttbr_update_workaround,
    which has the unexpected consequence of being triggered on every
    exception return to userspace when ARM64_SW_TTBR0_PAN is selected,
    even if no context switch actually occurred.

    This is a bit suboptimal, and it would be more logical to only
    invalidate the branch predictor when we actually switch to
    a different mm.

    In order to solve this, move the call to arm64_apply_bp_hardening()
    into check_and_switch_context(), where we're guaranteed to pick
    a different mm context.

    Acked-by: Will Deacon
    Signed-off-by: Marc Zyngier
    Signed-off-by: Catalin Marinas
    Signed-off-by: Will Deacon
    Signed-off-by: Alex Shi

    Conflicts:
    no sw pan in arch/arm64/mm/context.c

    Marc Zyngier
     
  • commit 0f15adbb2861 upstream.

    Aliasing attacks against CPU branch predictors can allow an attacker to
    redirect speculative control flow on some CPUs and potentially divulge
    information from one context to another.

    This patch adds initial skeleton code behind a new Kconfig option to
    enable implementation-specific mitigations against these attacks for
    CPUs that are affected.

    Co-developed-by: Marc Zyngier
    Signed-off-by: Will Deacon
    Signed-off-by: Catalin Marinas
    Signed-off-by: Alex Shi

    Conflicts:
    expand enable_da_f in entry.S
    use 5 parameters ARM64_FTR_BITS()
    add percpu.h in mm_types.h for percpu functions
    use cpus_have_cap instead of cpus_have_const_cap
    arch/arm64/Kconfig
    arch/arm64/include/asm/cpucaps.h
    arch/arm64/include/asm/mmu.h
    arch/arm64/include/asm/sysreg.h
    arch/arm64/kernel/cpufeature.c
    arch/arm64/kernel/entry.S
    arch/arm64/mm/fault.c

    Will Deacon
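
    At its core the skeleton is a per-CPU callback that the entry code can
    invoke; roughly (using cpus_have_cap() as per the conflict note above):

        /* arch/arm64/include/asm/mmu.h context (sketch) */
        typedef void (*bp_hardening_cb_t)(void);

        struct bp_hardening_data {
                int                     hyp_vectors_slot;
                bp_hardening_cb_t       fn;
        };

        DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);

        static inline void arm64_apply_bp_hardening(void)
        {
                struct bp_hardening_data *d;

                if (!cpus_have_cap(ARM64_HARDEN_BRANCH_PREDICTOR))
                        return;

                d = this_cpu_ptr(&bp_hardening_data);
                if (d->fn)
                        d->fn();
        }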
     
  • commit 95e3de3590e3 upstream.

    We will soon need to invoke a CPU-specific function pointer after changing
    page tables, so move post_ttbr_update_workaround out into C code to make
    this possible.

    Signed-off-by: Marc Zyngier
    Signed-off-by: Will Deacon
    Signed-off-by: Alex Shi

    Conflicts:
    don't include PAN related changes
    arch/arm64/include/asm/assembler.h
    arch/arm64/kernel/entry.S
    arch/arm64/mm/proc.S

    Marc Zyngier
     
  • commit 0a0d111d40fd upstream.

    In order to invoke the CPU capability ->matches callback from the ->enable
    callback for applying local-CPU workarounds, we need a handle on the
    capability structure.

    This patch passes a pointer to the capability structure to the ->enable
    callback.

    Reviewed-by: Suzuki K Poulose
    Signed-off-by: Will Deacon
    Signed-off-by: Alex Shi

    Conflicts:
    arch/arm64/kernel/cpufeature.c

    Will Deacon
     
  • commit 55b35d070c25 upstream.

    When a CPU is brought up after we have finalised the system-wide
    capabilities (i.e., features and errata), we make sure the new CPU
    doesn't need a new errata workaround which has not been detected
    already. However, we don't run the enable() method on the new CPU
    for the errata workarounds already detected. This could leave the
    new CPU running without the potential workarounds. It is up to the
    "enable()" method to decide if this CPU should do something about
    the errata.

    Fixes: commit 6a6efbb45b7d95c84 ("arm64: Verify CPU errata work arounds on hotplugged CPU")
    Cc: Will Deacon
    Cc: Mark Rutland
    Cc: Andre Przywara
    Cc: Dave Martin
    Signed-off-by: Suzuki K Poulose
    Signed-off-by: Catalin Marinas
    Signed-off-by: Will Deacon
    Signed-off-by: Alex Shi

    Suzuki K Poulose
     
  • commit 06f1494f837 upstream.

    Some minor erratum may not be fixed in further revisions of a core,
    leading to a situation where the workaround needs to be updated each
    time an updated core is released.

    Introduce a MIDR_ALL_VERSIONS match helper that will work for all
    versions of that MIDR, once and for all.

    Acked-by: Thomas Gleixner
    Acked-by: Mark Rutland
    Acked-by: Daniel Lezcano
    Reviewed-by: Suzuki K Poulose
    Signed-off-by: Marc Zyngier
    Signed-off-by: Alex Shi

    Marc Zyngier
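
    A sketch of the helper (field names as in the v4.9-era
    arm64_cpu_capabilities structure), alongside MIDR_RANGE in
    arch/arm64/kernel/cpu_errata.c:

        #define MIDR_ALL_VERSIONS(model)                                   \
                .def_scope = SCOPE_LOCAL_CPU,                              \
                .matches = is_affected_midr_range,                         \
                .midr_model = model,                                       \
                .midr_range_min = 0,                                       \
                .midr_range_max = (MIDR_VARIANT_MASK | MIDR_REVISION_MASK)

        /*
         * Hypothetical use in the errata table:
         *      { .desc = "...", .capability = ARM64_WORKAROUND_FOO,
         *        MIDR_ALL_VERSIONS(MIDR_CORTEX_A57), },
         */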
     
  • commit edf298cfce47 upstream.

    Alex Shi rewrote this commit around the function this_cpu_has_cap(). The
    following commit log is still meaningful.

    this_cpu_has_cap() tests caps->desc not caps->matches, so it stops
    walking the list when it finds a 'silent' feature, instead of
    walking to the end of the list.

    Prior to v4.6's 644c2ae198412 ("arm64: cpufeature: Test 'matches' pointer
    to find the end of the list") we always tested desc to find the end of
    a capability list. This was changed for dubious things like PAN_NOT_UAO.
    v4.7's e3661b128e53e ("arm64: Allow a capability to be checked on
    single CPU") added this_cpu_has_cap() using the old desc style test.

    CC: Suzuki K Poulose
    Reviewed-by: Suzuki K Poulose
    Acked-by: Marc Zyngier
    Signed-off-by: James Morse
    Signed-off-by: Catalin Marinas
    Signed-off-by: Will Deacon

    Signed-off-by: Alex Shi

    James Morse
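
    A sketch of the corrected walk: terminate on the ->matches pointer,
    which every real entry provides, rather than on ->desc, which 'silent'
    features legitimately leave NULL:

        static bool __this_cpu_has_cap(const struct arm64_cpu_capabilities *caps,
                                       unsigned int cap)
        {
                for (; caps->matches; caps++)
                        if (caps->capability == cap &&
                            caps->matches(caps, SCOPE_LOCAL_CPU))
                                return true;

                return false;
        }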
     
  • commit 91b2d3442f6a upstream.

    The arm64 futex code has some explicit dereferencing of user pointers
    when performing atomic operations in response to a futex command. This
    patch uses masking to limit any speculative futex operations to within
    the user address space.

    Signed-off-by: Will Deacon
    Signed-off-by: Catalin Marinas
    Signed-off-by: Alex Shi

    Conflicts:
    change on old futex_atomic_op_inuser function instead of
    arch_futex_atomic_op_inuser in arch/arm64/include/asm/futex.h

    Will Deacon
     
  • Rewritten from commit f71c2ffcb20d upstream. LTS 4.9 has neither
    raw_copy_from/to_user nor __copy_user_flushcache, and it isn't a good
    idea to pick them up. The following is the original commit log, which
    also applies to the new patch.

    Like we've done for get_user and put_user, ensure that user pointers
    are masked before invoking the underlying __arch_{clear,copy_*}_user
    operations.

    Signed-off-by: Will Deacon
    Signed-off-by: Catalin Marinas

    Signed-off-by: Alex Shi

    Will Deacon
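
    The pattern the rewrite applies, assuming the __uaccess_mask_ptr()
    helper from the earlier uaccess-masking patches: clamp the user pointer
    into the user range before the unchecked __arch_* routine dereferences
    it. A sketch for one of the wrappers:

        static inline unsigned long __must_check
        __copy_from_user(void *to, const void __user *from, unsigned long n)
        {
                /* Mask 'from' so a mispredicted access_ok cannot expose kernel data. */
                return __arch_copy_from_user(to, __uaccess_mask_ptr(from), n);
        }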
     
  • commit 84624087dd7e upstream.

    access_ok isn't an expensive operation once the addr_limit for the current
    thread has been loaded into the cache. Given that the initial access_ok
    check preceding a sequence of __{get,put}_user operations will take
    the brunt of the miss, we can make the __* variants identical to the
    full-fat versions, which brings with it the benefits of address masking.

    The likely cost in these sequences will be from toggling PAN/UAO, which
    we can address later by implementing the *_unsafe versions.

    Reviewed-by: Robin Murphy
    Signed-off-by: Will Deacon
    Signed-off-by: Catalin Marinas
    Signed-off-by: Alex Shi

    Conflicts:
    keep __{get/put}_user_unaligned in arch/arm64/include/asm/uaccess.h

    Will Deacon
     
  • commit c2f0ad4fc089 upstream.

    A mispredicted conditional call to set_fs could result in the wrong
    addr_limit being forwarded under speculation to a subsequent access_ok
    check, potentially forming part of a spectre-v1 attack using uaccess
    routines.

    This patch prevents this forwarding from taking place by putting heavy
    barriers in set_fs after writing the addr_limit.

    Reviewed-by: Mark Rutland
    Signed-off-by: Will Deacon
    Signed-off-by: Catalin Marinas
    Signed-off-by: Alex Shi

    Conflicts:
    no set_thread_flag(TIF_FSCHECK) in arch/arm64/include/asm/uaccess.h

    Will Deacon
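
    A sketch of the shape of the fix; the point is simply that the new
    addr_limit must be synchronised before any speculated access_ok can
    consume it (the barrier choice shown is illustrative):

        /* arch/arm64/include/asm/uaccess.h context (sketch) */
        static inline void set_fs(mm_segment_t fs)
        {
                current_thread_info()->addr_limit = fs;

                /*
                 * Prevent a mispredicted conditional call to set_fs from
                 * forwarding the wrong address limit to access_ok under
                 * speculation.
                 */
                dsb(nsh);
                isb();
        }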
     
  • commit 6314d90e6493 upstream.

    In a similar manner to array_index_mask_nospec, this patch introduces an
    assembly macro (mask_nospec64) which can be used to bound a value under
    speculation. This macro is then used to ensure that the indirect branch
    through the syscall table is bounded under speculation, with out-of-range
    addresses speculating as calls to sys_io_setup (0).

    Reviewed-by: Mark Rutland
    Signed-off-by: Will Deacon
    Signed-off-by: Catalin Marinas
    Signed-off-by: Alex Shi

    Will Deacon
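
    A self-contained C rendering of the arithmetic behind mask_nospec64
    (the sub/bic/asr/and sequence; the trailing csdb speculation barrier
    has no portable C equivalent): the index survives only when it is below
    the limit, with no conditional branch to mispredict.

        #include <stdint.h>
        #include <stdio.h>

        static uint64_t mask_nospec64(uint64_t idx, uint64_t limit)
        {
                uint64_t tmp  = (idx - limit) & ~idx;              /* sub + bic */
                uint64_t mask = (uint64_t)((int64_t)tmp >> 63);    /* asr #63   */

                return idx & mask;                                 /* and       */
        }

        int main(void)
        {
                /* e.g. bounding a syscall number against the table size */
                printf("%llu\n", (unsigned long long)mask_nospec64(5, 400));      /* 5 */
                printf("%llu\n", (unsigned long long)mask_nospec64(400, 400));    /* 0 */
                printf("%llu\n", (unsigned long long)mask_nospec64(~0ULL, 400));  /* 0 */
                return 0;
        }

    Out-of-range values therefore collapse to 0, which the syscall path
    speculates as a call to sys_io_setup (syscall 0), as noted above.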